The chest x-ray is the most frequently requested radiologic examination. In fact, every radiologist should be an expert in chest film reading. The interpretation of a chest film requires an understanding of basic principles.
Whenever you review a chest x-ray, always use a systematic approach. We use an inside-out approach, working from central to peripheral. First the heart figure is evaluated, followed by the mediastinum and hila. Subsequently the lungs, lung borders and finally the chest wall and abdomen are examined.
The loss of the normal silhouette of a structure is called the silhouette sign. This is an important sign, because it enables us to find subtle pathology and to locate it within the chest.
On a chest film only the outer contours of the heart are seen. In many cases we can only tell whether the heart figure is normal or enlarged, and it will be difficult to say anything about the individual heart compartments. However, it can be helpful to know where the different compartments are situated.
On the right side of the chest the lung lies against the anterior chest wall. On the left, however, the inferior part of the lung may not reach the anterior chest wall, since the heart, pericardial fat or an effusion is situated there.
This produces a density on the anteroinferior side on the lateral view, which can take many forms. It is a normal finding, seen on many chest x-rays, and should not be mistaken for pathology in the lingula or middle lobe.
Necrosis of the fat pad has pathologic features similar to those of fat necrosis in epiploic appendagitis. It is an uncommon benign condition that manifests as acute pleuritic chest pain in previously healthy persons (10).
Notice the displacement of the upper part of the azygoesophageal line on the chest x-ray in the area below the carina.This is the result of massive lymphadenopathy in the subcarinal region (station 7).
Is your patient's chest X-ray unremarkable, or does it show a life-threatening abnormality? Use this article to gain a basic understanding of chest X-ray interpretation to sharpen your assessment skills, promote patient safety, and optimize care.
On an appropriately exposed chest X-ray, this division should be clearly visible. The carina is an important landmark when assessing nasogastric (NG) tube placement, as the NG tube should bisect the carina if it is correctly placed in the gastrointestinal tract.
Cardiomegaly is said to be present if the heart occupies more than 50% of the thoracic width on a PA chest X-ray. Cardiomegaly can develop for a wide variety of reasons including valvular heart disease, cardiomyopathy, pulmonary hypertension and pericardial effusion.
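The 50% rule above is the cardiothoracic ratio: the widest transverse diameter of the heart divided by the widest inner thoracic diameter on a PA film. A minimal sketch of that check, with hypothetical function names and illustrative pixel measurements (not from any real study or image):

```python
# Hypothetical illustration of the cardiothoracic ratio (CTR) rule:
# cardiomegaly is suggested when the heart occupies more than 50% of
# the thoracic width on a PA chest X-ray. Widths are in pixels (or mm),
# measured on the same image so the units cancel.

def cardiothoracic_ratio(cardiac_width: float, thoracic_width: float) -> float:
    """Return the cardiothoracic ratio: heart width / inner thoracic width."""
    if thoracic_width <= 0:
        raise ValueError("thoracic width must be positive")
    return cardiac_width / thoracic_width

def suggests_cardiomegaly(cardiac_width: float, thoracic_width: float) -> bool:
    """Apply the >50% rule of thumb for a PA chest X-ray."""
    return cardiothoracic_ratio(cardiac_width, thoracic_width) > 0.5

print(suggests_cardiomegaly(160, 300))  # 160/300 ≈ 0.53 -> True
```

Note that the rule applies only to PA films: on an AP projection the heart is magnified, so the ratio overestimates heart size.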
The diaphragm should be indistinguishable from the underlying liver in healthy individuals on an erect chest X-ray. However, if free gas is present (often as a result of bowel perforation), air accumulates under the diaphragm, causing it to lift and become visibly separate from the liver. If you see free gas under the diaphragm, you should seek urgent senior review, as further imaging (e.g. CT abdomen) will likely be required to identify the source of the free gas.
The costophrenic angles are formed from the dome of each hemidiaphragm and the lateral chest wall.
If you go to your doctor or the emergency room with chest pain, a chest injury or shortness of breath, you will typically get a chest X-ray. The image helps your doctor determine whether you have heart problems, a collapsed lung, pneumonia, broken ribs, emphysema, cancer or any of several other conditions.
Chest X-rays are a common type of exam. A chest X-ray is often among the first procedures you'll have if your doctor suspects heart or lung disease. A chest X-ray can also be used to check how you are responding to treatment.
Before the chest X-ray, you generally undress from the waist up and wear an exam gown. You'll need to remove jewelry from the waist up, too, since both clothing and jewelry can obscure the X-ray images.
During the procedure, your body is positioned between a machine that produces the X-rays and a plate that creates the image digitally or with X-ray film. You may be asked to move into different positions in order to take views from both the front and the side of your chest.
Saliency methods, which produce heat maps that highlight the areas of the medical image that influence model prediction, are often presented to clinicians as an aid in diagnostic decision-making. However, rigorous investigation of the accuracy and reliability of these strategies is necessary before they are integrated into the clinical setting. In this work, we quantitatively evaluate seven saliency methods, including Grad-CAM, across multiple neural network architectures using two evaluation metrics. We establish the first human benchmark for chest X-ray segmentation in a multilabel classification set-up, and examine under what clinical conditions saliency maps might be more prone to failure in localizing important pathologies compared with a human expert benchmark. We find that (1) while Grad-CAM generally localized pathologies better than the other evaluated saliency methods, all seven performed significantly worse compared with the human benchmark, (2) the gap in localization performance between Grad-CAM and the human benchmark was largest for pathologies that were smaller in size and had shapes that were more complex, and (3) model confidence was positively correlated with Grad-CAM localization performance. Our work demonstrates that several important limitations of saliency methods must be addressed before we can rely on them for deep learning explainability in medical imaging.
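For readers unfamiliar with Grad-CAM, its core computation is compact: the gradients of the class score with respect to one convolutional layer's feature maps are global-average-pooled into per-channel weights, which then form a weighted, ReLU-ed sum of those feature maps. A minimal NumPy sketch of that computation on random toy arrays (an assumed illustration, not the models or pipeline evaluated in the study):

```python
import numpy as np

def grad_cam(activations: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Compute a Grad-CAM heat map from one conv layer's feature maps.

    activations: (K, H, W) feature maps A^k from the chosen layer.
    gradients:   (K, H, W) gradients of the class score w.r.t. A^k.
    Returns an (H, W) map normalized to [0, 1].
    """
    # alpha_k: global-average-pool the gradients over the spatial dimensions
    weights = gradients.mean(axis=(1, 2))                       # shape (K,)
    # weighted sum of feature maps, then ReLU to keep positive evidence only
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0.0)
    if cam.max() > 0:
        cam = cam / cam.max()                                   # scale for display
    return cam

# toy example: random feature maps and gradients standing in for a real network
rng = np.random.default_rng(0)
cam = grad_cam(rng.standard_normal((8, 7, 7)), rng.standard_normal((8, 7, 7)))
print(cam.shape)  # (7, 7)
```

In practice the low-resolution map is upsampled to the input image size and overlaid as a heat map, which is the artifact clinicians actually see.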
Since IoU computes the overlap of two segmentations but pointing game hit rate better captures diagnostic attention, we suggest using both metrics when evaluating localization performance in the context of medical imaging. While IoU is a commonly used metric for evaluating semantic segmentation outputs, there are inherent limitations to the metric in the pathological context. This is indicated by our finding that even the human benchmark segmentations had low overlap with the ground-truth segmentations (the highest expert mIoU was 0.720 for cardiomegaly). One potential explanation for this consistent underperformance is that pathologies can be hard to distinguish, especially without clinical context. Furthermore, whereas many people might agree on how to segment, say, a cat or a stop sign in traditional computer vision tasks, radiologists use a certain amount of clinical discretion when defining the boundaries of a pathology on a CXR. There can also be institutional and geographic differences in how radiologists are taught to recognize pathologies, and studies have shown that there can be high interobserver variability in the interpretation of CXRs56,57,58. We sought to address this with the hit/miss evaluation metric, which highlights when two radiologists share the same diagnostic intention, even if it is less exact than IoU in comparing segmentations directly. The human benchmark localization using hit rate was above 0.9 for four pathologies (pneumothorax, cardiomegaly, enlarged cardiomediastinum and support devices); these are pathologies for which there is often little disagreement between radiologists about where the pathologies are located, even if the expert segmentations are noisy. Further work is needed to demonstrate which segmentation evaluation metrics, even beyond IoU and hit/miss, are more appropriate for certain pathologies and downstream tasks when evaluating saliency methods for the clinical setting.
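The two metrics contrasted above can be made concrete in a few lines. This is a minimal sketch assuming boolean masks on a shared pixel grid, not the authors' actual evaluation pipeline:

```python
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over union of two boolean segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 0.0
    return float(np.logical_and(pred, truth).sum() / union)

def pointing_game_hit(saliency: np.ndarray, truth: np.ndarray) -> bool:
    """Hit if the single most salient pixel falls inside the ground-truth mask."""
    r, c = np.unravel_index(np.argmax(saliency), saliency.shape)
    return bool(truth[r, c])

truth = np.zeros((4, 4), dtype=bool)
truth[1:3, 1:3] = True                  # 2x2 ground-truth region
pred = np.zeros((4, 4), dtype=bool)
pred[1:3, 1:4] = True                   # overlaps in 4 px; union is 6 px
print(iou(pred, truth))                 # 4/6 ≈ 0.667

saliency = np.zeros((4, 4))
saliency[2, 2] = 5.0                    # peak inside the truth mask
print(pointing_game_hit(saliency, truth))  # True
```

The example shows why the two metrics diverge: the same prediction can score a modest IoU yet still register a hit, because the hit criterion only asks whether the saliency peak lands inside the pathology.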
Conceptualization: P.R. and A.P. Design: P.R., A.P., A.S., X.G. and A.A. Data analysis and interpretation: A.S., X.G., A.A., P.R., A.P., S.Q.H.T., C.D.T.N., V.-D.N., J.S. and F.G.B. Drafting of the manuscript: A.S., X.G., A.A. and P.R. Critical revision of the manuscript for important intellectual content: A.P., S.Q.H.T., C.D.T.N., V.-D.N., J.S., F.G.B., A.Y.N. and M.P.L. Supervision: A.Y.N., M.P.L. and P.R. Research was primarily performed while A.S. was at Stanford University. M.P.L. and P.R. contributed equally.
This tutorial describes the important anatomical structures visible on a chest X-ray. These structures are discussed in a specific order to help you develop your own systematic approach to viewing chest X-rays.
By the end of the tutorial you will be familiar with all the important visible structures of the chest, which should be checked whenever you look at a chest X-ray. The tutorial also discusses anatomical structures that are not easily seen, but become visible when abnormal due to disease. You will learn more about these structures and diseases in the tutorial on chest X-ray abnormalities.
Many structures of the chest are readily visible on a chest X-ray. Other important structures, such as the pleura, only become visible when abnormal, and some are not visible at all, such as the phrenic nerve.
Background: Deep learning has the potential to augment the use of chest radiography in clinical radiology, but challenges include poor generalizability, spectrum bias, and difficulty comparing across studies.
Purpose: To develop and evaluate deep learning models for chest radiograph interpretation by using radiologist-adjudicated reference standards.
Materials and Methods: Deep learning models were developed to detect four findings (pneumothorax, opacity, nodule or mass, and fracture) on frontal chest radiographs. This retrospective study used two data sets. Data set 1 (DS1) consisted of 759 611 images from a multicity hospital network; ChestX-ray14 is a publicly available data set with 112 120 images. Natural language processing and expert review of a subset of images provided labels for 657 954 training images. Test sets consisted of 1818 and 1962 images from DS1 and ChestX-ray14, respectively. Reference standards were defined by radiologist-adjudicated image review. Performance was evaluated by area under the receiver operating characteristic curve analysis, sensitivity, specificity, and positive predictive value. Four radiologists reviewed test set images for performance comparison. Inverse probability weighting was applied to DS1 to account for positive radiograph enrichment and estimate population-level performance.
Results: In DS1, population-adjusted areas under the receiver operating characteristic curve for pneumothorax, nodule or mass, airspace opacity, and fracture were, respectively, 0.95 (95% confidence interval [CI]: 0.91, 0.99), 0.72 (95% CI: 0.66, 0.77), 0.91 (95% CI: 0.88, 0.93), and 0.86 (95% CI: 0.79, 0.92). With ChestX-ray14, areas under the receiver operating characteristic curve were 0.94 (95% CI: 0.93, 0.96), 0.91 (95% CI: 0.89, 0.93), 0.94 (95% CI: 0.93, 0.95), and 0.81 (95% CI: 0.75, 0.86), respectively.
Conclusion: Expert-level models for detecting clinically relevant chest radiograph findings were developed for this study by using adjudicated reference standards and with population-level performance estimation. Radiologist-adjudicated labels for 2412 ChestX-ray14 validation set images and 1962 test set images are provided. RSNA, 2019.
Online supplemental material is available for this article. See also the editorial by Chang in this issue.
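The headline metric in the abstract, area under the receiver operating characteristic curve, has a convenient rank-based interpretation: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. A minimal sketch of that computation on toy scores (an assumed illustration, not the study's data or code):

```python
# AUC via the Mann-Whitney U interpretation: count pairwise "wins" of
# positive-case scores over negative-case scores (ties count as half).
def auc(scores, labels):
    """Area under the ROC curve for binary labels (1 = positive, 0 = negative)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: both positives outrank both negatives, so separation is perfect.
print(auc([0.9, 0.8, 0.4, 0.3], [1, 1, 0, 0]))  # 1.0
```

An AUC of 0.5 corresponds to chance-level ranking, which is why the reported values of 0.72 to 0.95 are read against that baseline.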