<> <http://www.w3.org/2000/01/rdf-schema#comment> "The repository administrator has not yet configured an RDF license."^^<http://www.w3.org/2001/XMLSchema#string> .
<> <http://xmlns.com/foaf/0.1/primaryTopic> <https://khub.utp.edu.my/scholars/id/eprint/16700> .
<https://khub.utp.edu.my/scholars/id/eprint/16700> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://purl.org/ontology/bibo/AcademicArticle> .
<https://khub.utp.edu.my/scholars/id/eprint/16700> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://purl.org/ontology/bibo/Article> .
<https://khub.utp.edu.my/scholars/id/eprint/16700> <http://purl.org/dc/terms/title> "Translating medical image to radiological report: Adaptive multilevel multi-attention approach"^^<http://www.w3.org/2001/XMLSchema#string> .
<https://khub.utp.edu.my/scholars/id/eprint/16700> <http://purl.org/ontology/bibo/abstract> "Background and Objective: Medical imaging techniques are widely employed in disease diagnosis and treatment. A readily available medical report can be a useful tool in assisting an expert in investigating the patient's health. A radiologist can benefit from an automatic medical image to radiological report translation system while preparing a final report. Previous attempts at the automatic medical report generation task include image captioning algorithms that do not take domain-specific visual and textual content into account, which raises questions about the credibility of the generated reports. Methods: In this work, a novel Adaptive Multilevel Multi-Attention (AMLMA) approach is proposed, offering domain-specific visual-textual knowledge to generate a thorough and believable radiological report for any view of a human chest X-ray image. The proposed approach leverages an encoder-decoder framework incorporating multiple adaptive attention mechanisms. The potential of a convolutional neural network (CNN) with a residual attention module (RAM) is demonstrated as a strong visual encoder for multi-label abnormality detection. Multilevel visual features (local and global) are extracted from the proposed visual encoder to retrieve regional-level and abstract-level radiology-based semantic information. Word2Vec and FastText word embeddings are trained on medical reports to acquire radiological knowledge and are further used as textual encoders, fed as input to a Bi-directional Long Short-Term Memory (Bi-LSTM) network to learn the relationships between medical terminologies in radiological reports. The AMLMA employs a weighted multilevel association of adaptive visual-semantic attention and visual-based linguistic attention mechanisms. This association of adaptive attention is exploited as a decoder and produces significant improvements in the report generation task. Results: The proposed approach is evaluated on the publicly available Indiana University chest X-ray (IU-CXR) dataset. The CNN with RAM shows significant improvement in recall (0.4423), precision (0.1803) and F1-score (0.2551) for the prediction of multiple abnormalities in X-ray images. The language generation metrics for the proposed variants were obtained using the COCO-caption evaluation Application Program Interface (API). The AMLMA model with trained embeddings generates convincing radiology reports and outperforms state-of-the-art (SOTA) approaches, with high evaluation scores for Bleu-4 (0.172), Meteor (0.247), RougeL (0.376) and CIDEr (0.381). In addition, a new 'Unique Index' (UI) statistic is introduced to highlight the model's ability to generate unique reports. Conclusion: The overall architecture aids in understanding various X-ray image views and generating relevant normal and abnormal radiography statements. The proposed model emphasizes multi-level visual-textual knowledge with an adaptive attention mechanism to balance visual and linguistic information for the generation of an admissible radiology report. © 2022 Elsevier B.V."^^<http://www.w3.org/2001/XMLSchema#string> .
<https://khub.utp.edu.my/scholars/id/eprint/16700> <http://purl.org/dc/terms/date> "2022" .
<https://khub.utp.edu.my/scholars/id/eprint/16700> <http://www.w3.org/2002/07/owl#sameAs> <https://doi.org/10.1016/j.cmpb.2022.106853> .
<https://khub.utp.edu.my/scholars/id/eprint/16700> <http://purl.org/ontology/bibo/volume> "221" .
<https://khub.utp.edu.my/scholars/id/org/ext-397098334e78fe76ebf6fbdd1e6da759> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://xmlns.com/foaf/0.1/Organization> .
<https://khub.utp.edu.my/scholars/id/org/ext-397098334e78fe76ebf6fbdd1e6da759> <http://xmlns.com/foaf/0.1/name> "Elsevier Ireland Ltd"^^<http://www.w3.org/2001/XMLSchema#string> .
<https://khub.utp.edu.my/scholars/id/eprint/16700> <http://purl.org/dc/terms/publisher> <https://khub.utp.edu.my/scholars/id/org/ext-397098334e78fe76ebf6fbdd1e6da759> .
<https://khub.utp.edu.my/scholars/id/publication/ext-01692607> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://purl.org/ontology/bibo/Collection> .
<https://khub.utp.edu.my/scholars/id/publication/ext-01692607> <http://xmlns.com/foaf/0.1/name> "Computer Methods and Programs in Biomedicine"^^<http://www.w3.org/2001/XMLSchema#string> .
<https://khub.utp.edu.my/scholars/id/eprint/16700> <http://purl.org/dc/terms/isPartOf> <https://khub.utp.edu.my/scholars/id/publication/ext-01692607> .
<https://khub.utp.edu.my/scholars/id/publication/ext-01692607> <http://www.w3.org/2002/07/owl#sameAs> <urn:issn:01692607> .
<https://khub.utp.edu.my/scholars/id/publication/ext-01692607> <http://purl.org/ontology/bibo/issn> "01692607" .
<https://khub.utp.edu.my/scholars/id/eprint/16700> <http://purl.org/ontology/bibo/status> <http://purl.org/ontology/bibo/status/peerReviewed> .
<https://khub.utp.edu.my/scholars/id/eprint/16700> <http://purl.org/ontology/bibo/status> <http://purl.org/ontology/bibo/status/published> .
<https://khub.utp.edu.my/scholars/id/eprint/16700> <http://purl.org/dc/terms/creator> <https://khub.utp.edu.my/scholars/id/person/ext-1959f3456788ebbb6580ea0b719c03a0> .
<https://khub.utp.edu.my/scholars/id/eprint/16700> <http://purl.org/ontology/bibo/authorList> <https://khub.utp.edu.my/scholars/id/eprint/16700#authors> .
<https://khub.utp.edu.my/scholars/id/eprint/16700#authors> <http://www.w3.org/1999/02/22-rdf-syntax-ns#_1> <https://khub.utp.edu.my/scholars/id/person/ext-1959f3456788ebbb6580ea0b719c03a0> .
<https://khub.utp.edu.my/scholars/id/eprint/16700> <http://purl.org/dc/terms/creator> <https://khub.utp.edu.my/scholars/id/person/ext-0f243f22d1397da1a5be078d80d9663c> .
<https://khub.utp.edu.my/scholars/id/eprint/16700> <http://purl.org/ontology/bibo/authorList> <https://khub.utp.edu.my/scholars/id/eprint/16700#authors> .
<https://khub.utp.edu.my/scholars/id/eprint/16700#authors> <http://www.w3.org/1999/02/22-rdf-syntax-ns#_2> <https://khub.utp.edu.my/scholars/id/person/ext-0f243f22d1397da1a5be078d80d9663c> .
<https://khub.utp.edu.my/scholars/id/eprint/16700> <http://purl.org/dc/terms/creator> <https://khub.utp.edu.my/scholars/id/person/ext-45d7734b6f1d7d57d77ec3cbd75ed943> .
<https://khub.utp.edu.my/scholars/id/eprint/16700> <http://purl.org/ontology/bibo/authorList> <https://khub.utp.edu.my/scholars/id/eprint/16700#authors> .
<https://khub.utp.edu.my/scholars/id/eprint/16700#authors> <http://www.w3.org/1999/02/22-rdf-syntax-ns#_3> <https://khub.utp.edu.my/scholars/id/person/ext-45d7734b6f1d7d57d77ec3cbd75ed943> .
<https://khub.utp.edu.my/scholars/id/person/ext-45d7734b6f1d7d57d77ec3cbd75ed943> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://xmlns.com/foaf/0.1/Person> .
<https://khub.utp.edu.my/scholars/id/person/ext-45d7734b6f1d7d57d77ec3cbd75ed943> <http://xmlns.com/foaf/0.1/givenName> "I."^^<http://www.w3.org/2001/XMLSchema#string> .
<https://khub.utp.edu.my/scholars/id/person/ext-45d7734b6f1d7d57d77ec3cbd75ed943> <http://xmlns.com/foaf/0.1/familyName> "Faye"^^<http://www.w3.org/2001/XMLSchema#string> .
<https://khub.utp.edu.my/scholars/id/person/ext-45d7734b6f1d7d57d77ec3cbd75ed943> <http://xmlns.com/foaf/0.1/name> "I. Faye"^^<http://www.w3.org/2001/XMLSchema#string> .
<https://khub.utp.edu.my/scholars/id/person/ext-1959f3456788ebbb6580ea0b719c03a0> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://xmlns.com/foaf/0.1/Person> .
<https://khub.utp.edu.my/scholars/id/person/ext-1959f3456788ebbb6580ea0b719c03a0> <http://xmlns.com/foaf/0.1/givenName> "G.O."^^<http://www.w3.org/2001/XMLSchema#string> .
<https://khub.utp.edu.my/scholars/id/person/ext-1959f3456788ebbb6580ea0b719c03a0> <http://xmlns.com/foaf/0.1/familyName> "Gajbhiye"^^<http://www.w3.org/2001/XMLSchema#string> .
<https://khub.utp.edu.my/scholars/id/person/ext-1959f3456788ebbb6580ea0b719c03a0> <http://xmlns.com/foaf/0.1/name> "G.O. Gajbhiye"^^<http://www.w3.org/2001/XMLSchema#string> .
<https://khub.utp.edu.my/scholars/id/person/ext-0f243f22d1397da1a5be078d80d9663c> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://xmlns.com/foaf/0.1/Person> .
<https://khub.utp.edu.my/scholars/id/person/ext-0f243f22d1397da1a5be078d80d9663c> <http://xmlns.com/foaf/0.1/givenName> "A.V."^^<http://www.w3.org/2001/XMLSchema#string> .
<https://khub.utp.edu.my/scholars/id/person/ext-0f243f22d1397da1a5be078d80d9663c> <http://xmlns.com/foaf/0.1/familyName> "Nandedkar"^^<http://www.w3.org/2001/XMLSchema#string> .
<https://khub.utp.edu.my/scholars/id/person/ext-0f243f22d1397da1a5be078d80d9663c> <http://xmlns.com/foaf/0.1/name> "A.V. Nandedkar"^^<http://www.w3.org/2001/XMLSchema#string> .
<https://khub.utp.edu.my/scholars/id/eprint/16700> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://eprints.org/ontology/EPrint> .
<https://khub.utp.edu.my/scholars/id/eprint/16700> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://eprints.org/ontology/ArticleEPrint> .
<https://khub.utp.edu.my/scholars/id/eprint/16700> <http://purl.org/dc/terms/isPartOf> <https://khub.utp.edu.my/scholars/id/repository> .
<https://khub.utp.edu.my/scholars/id/eprint/16700> <http://www.w3.org/2000/01/rdf-schema#seeAlso> <https://khub.utp.edu.my/scholars/16700/> .
<https://khub.utp.edu.my/scholars/16700/> <http://purl.org/dc/elements/1.1/title> "HTML Summary of #16700 \n\nTranslating medical image to radiological report: Adaptive multilevel multi-attention approach\n\n" .
<https://khub.utp.edu.my/scholars/16700/> <http://purl.org/dc/elements/1.1/format> "text/html" .
<https://khub.utp.edu.my/scholars/16700/> <http://xmlns.com/foaf/0.1/primaryTopic> <https://khub.utp.edu.my/scholars/id/eprint/16700> .
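
# Editor's note (not part of the EPrints export): the record above is plain N-Triples,
# so it can be consumed directly with an RDF library. The following Python is a minimal
# sketch, assuming the triples are saved as "eprint_16700.nt" and that rdflib is
# installed; it loads the graph and prints the title and the ordered author list, which
# is carried by the #authors node through the rdf:_1, rdf:_2, ... container properties.

    from rdflib import Graph, Namespace, URIRef
    from rdflib.namespace import DCTERMS, FOAF

    BIBO = Namespace("http://purl.org/ontology/bibo/")
    RDF_NS = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"

    g = Graph()
    g.parse("eprint_16700.nt", format="nt")  # assumed local copy of the triples above

    eprint = URIRef("https://khub.utp.edu.my/scholars/id/eprint/16700")
    print("Title:", g.value(eprint, DCTERMS.title))
    print("Abstract starts:", str(g.value(eprint, BIBO.abstract))[:60], "...")

    # Recover author order from the rdf:_N container membership properties.
    authors_node = g.value(eprint, BIBO.authorList)
    numbered = [
        (int(str(p).rsplit("_", 1)[-1]), person)
        for p, person in g.predicate_objects(authors_node)
        if str(p).startswith(RDF_NS + "_")
    ]
    for _, person in sorted(numbered):
        print("Author:", g.value(person, FOAF.name))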