The information in this publication was considered technically sound by the consensus of persons engaged in the development and approval of the document at the time it was developed. Consensus does not necessarily mean that there is unanimous agreement among every person participating in the development of this document.
NEMA standards and guideline publications, of which the document contained herein is one, are developed through a voluntary consensus standards development process. This process brings together volunteers and/or seeks out the views of persons who have an interest in the topic covered by this publication. While NEMA administers the process and establishes rules to promote fairness in the development of consensus, it does not write the document and it does not independently test, evaluate, or verify the accuracy or completeness of any information or the soundness of any judgments contained in its standards and guideline publications.
NEMA disclaims liability for any personal injury, property, or other damages of any nature whatsoever, whether special, indirect, consequential, or compensatory, directly or indirectly resulting from the publication, use of, application, or reliance on this document. NEMA disclaims and makes no guaranty or warranty, expressed or implied, as to the accuracy or completeness of any information published herein, and disclaims and makes no warranty that the information in this document will fulfill any of your particular purposes or needs. NEMA does not undertake to guarantee the performance of any individual manufacturer or seller's products or services by virtue of this standard or guide.
In publishing and making this document available, NEMA is not undertaking to render professional or other services for or on behalf of any person or entity, nor is NEMA undertaking to perform any duty owed by any person or entity to someone else. Anyone using this document should rely on his or her own independent judgment or, as appropriate, seek the advice of a competent professional in determining the exercise of reasonable care in any given circumstances. Information and other standards on the topic covered by this publication may be available from other sources, which the user may wish to consult for additional views or information not covered by this publication.
NEMA has no power, nor does it undertake to police or enforce compliance with the contents of this document. NEMA does not certify, test, or inspect products, designs, or installations for safety or health purposes. Any certification or other statement of compliance with any health or safety-related information in this document shall not be attributable to NEMA and is solely the responsibility of the certifier or maker of the statement.
This DICOM Standard was developed according to the procedures of the DICOM Standards Committee.
The DICOM Standard is structured as a multi-part document using the guidelines established in [ISO/IEC Directives, Part 2].
PS3.1 should be used as the base reference for the current parts of this standard.
DICOM® is the registered trademark of the National Electrical Manufacturers Association for its standards publications relating to digital communications of medical information, all rights reserved.
HL7® and CDA® are the registered trademarks of Health Level Seven International, all rights reserved.
SNOMED®, SNOMED Clinical Terms®, SNOMED CT® are the registered trademarks of the International Health Terminology Standards Development Organisation (IHTSDO), all rights reserved.
LOINC® is the registered trademark of Regenstrief Institute, Inc, all rights reserved.
This part of the DICOM Standard contains explanatory information in the form of Normative and Informative Annexes.
The following standards contain provisions which, through reference in this text, constitute provisions of this Standard. At the time of publication, the editions indicated were valid. All standards are subject to revision, and parties to agreements based on this Standard are encouraged to investigate the possibilities of applying the most recent editions of the standards indicated below.
[ISO/IEC Directives, Part 2] 2016/05. 7.0. Rules for the structure and drafting of International Standards. http://www.iec.ch/members_experts/refdocs/iec/isoiecdir-2%7Bed7.0%7Den.pdf .
Terms listed in Section 3 are capitalized throughout the document.
This Annex was formerly located in Annex E “Explanation of Patient Orientation (Normative)” in PS3.3 in the 2003 and earlier revisions of the standard.
This Annex provides an explanation of how to use the patient orientation data elements.
As with the hand, the direction labels for the foot are based on the standard anatomic position. For the right foot, for example, RIGHT will be in the direction of the 5th toe. This assignment remains constant through movement or positioning of the extremity. The same holds for the HEAD and FOOT directions.
This Annex was formerly located in Annex G “Integration of Modality Worklist and Modality Performed Procedure Step in the Original DICOM Standard (Informative)” in PS3.3 in the 2003 and earlier revisions of the standard.
DICOM was published in 1993 and effectively addresses image communication for a number of modalities and Image Management functions for a significant part of the field of medical imaging. Since then, many additional medical imaging specialties have contributed to the extension of the DICOM Standard and developed additional Image Object Definitions. Furthermore, there have been discussions about the harmonization of the DICOM Real-World domain model with other standardization bodies. This effort has resulted in a number of extensions to the DICOM Standard. The integration of the Modality Worklist and Modality Performed Procedure Step addresses an important part of the domain area that was not included initially in the DICOM Standard. At the same time, the Modality Worklist and Modality Performed Procedure Step integration takes steps in the direction of harmonization with other standardization bodies (CEN TC 251, HL7, etc.).
The purpose of this Annex is to show how the original DICOM Standard relates to the extension for Modality Worklist Management and Modality Performed Procedure Step. The two included figures outline the void filled by the Modality Worklist Management and Modality Performed Procedure Step specification, and the relationship between the original DICOM Data Model and the extended model.
Figure B-1. Functional View - Modality Worklist and Modality Performed Procedure Step Management in the Context of DICOM Service Classes
The management of a patient starts when the patient enters a physical facility (e.g., a hospital, a clinic, an imaging center) or even before that time. The DICOM Patient Management SOP Class provides many of the functions that are of interest to imaging departments. Figure B-1 is an example where one presumes that an order for a procedure has been issued for a patient. The order for an imaging procedure results in the creation of a Study Instance within the DICOM Study Management SOP Class. At the same time (A) the Modality Worklist Management SOP Class enables a modality operator to request the scheduling information for the ordered procedures. A worklist can be constructed based on the scheduling information. The handling of the requested imaging procedure in DICOM Study Management and in DICOM Worklist Management are closely related. The worklist also conveys patient/study demographic information that can be incorporated into the images.
Worklist Management is completed once the imaging procedure has started and the Scheduled Procedure Step has been removed from the Worklist, possibly in response to the Modality Performed Procedure Step (B). However, Study Management continues throughout all stages of the Study, including interpretation. The actual procedure performed (based on the request) and information about the images produced are conveyed by the DICOM Study Component SOP Class or the Modality Performed Procedure Step SOP Classes.
Figure B-2. Relationship of the Original Model and the Extensions for Modality Worklist and Modality Performed Procedure Step Management
Figure B-2 shows the relationship between the original DICOM Real-World model and the extensions of this Real-World model required to support the Modality Worklist and the Modality Performed Procedure Step. The new parts of the model add entities that are needed to request, schedule, and describe the performance of imaging procedures, concepts that were not supported in the original model. The entities required for representing the Worklist form a natural extension of the original DICOM Real-World model.
Common to both the original model and the extended model is the Patient entity. The Service Episode is an administrative concept that has been shown in the extended model in order to pave the way for future adaptation to a common model supported by other standardization groups including HL7, CEN TC 251 WG 3, CAP-IEC, etc. The Visit is in the original model but not shown in the extended model because it is a part of the Service Episode.
There is a one-to-one relationship between a Requested Procedure and the DICOM Study (A). A DICOM Study is the result of a single Requested Procedure, and a Requested Procedure can result in only one Study.
An n:m relationship exists between a Scheduled Procedure Step and a Modality Performed Procedure Step (B). The concept of a Modality Performed Procedure Step is a superset of the Study Component concept contained in the original DICOM model. The Modality Performed Procedure Step SOP Classes provide a means to relate Modality Performed Procedure Steps to Scheduled Procedure Steps.
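The following is a minimal sketch in plain Python, with hypothetical step identifiers, illustrating this n:m relationship; it is an illustration of the model, not a normative encoding.

```python
# A sketch of the n:m relationship: one Modality Performed Procedure Step
# may satisfy several Scheduled Procedure Steps (group case), and one
# Scheduled Procedure Step may be carried out over several Performed
# Procedure Steps (append case).
performed_steps = [
    {"mpps": "PPS-1", "scheduled_steps": ["SPS-A", "SPS-B"]},  # 1 PPS : 2 SPS
    {"mpps": "PPS-2", "scheduled_steps": ["SPS-C"]},
    {"mpps": "PPS-3", "scheduled_steps": ["SPS-C"]},           # 2 PPS : 1 SPS
]

# Invert the mapping to find every Performed Step realizing a given
# Scheduled Step.
sps_to_pps: dict[str, list[str]] = {}
for step in performed_steps:
    for sps in step["scheduled_steps"]:
        sps_to_pps.setdefault(sps, []).append(step["mpps"])
print(sps_to_pps)
# {'SPS-A': ['PPS-1'], 'SPS-B': ['PPS-1'], 'SPS-C': ['PPS-2', 'PPS-3']}
```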
This Annex was formerly located in Annex J “Waveforms (Informative)” in PS3.3 in the 2003 and earlier revisions of the standard.
Waveform acquisition is part of both the medical imaging environment and the general clinical environment. Because of its broad use, there has been significant previous and complementary work in waveform standardization of which the following are particularly important:
Specification for Transferring Digital Neurophysiological Data Between Independent Computer Systems
Standard Communications Protocol for Computer-Assisted Electrocardiography (SCP-ECG).
For DICOM, the domain of waveform standardization is waveform acquisition within the imaging context. It is specifically meant to address waveform acquisitions that will be analyzed with other data that is transferred and managed using the DICOM protocol. It allows the addition of waveform data to that context with minimal incremental cost. Further, it leverages the DICOM persistent object capability for maintaining referential relationships to other data collected in a multi-modality environment, including references necessary for multi-modality synchronization.
Waveform interchange in other clinical contexts may use different protocols more appropriate to those domains. In particular, HL7 may be used for transfer of waveform observations to general clinical information systems, and MIB may be used for real-time physiological monitoring and therapy.
The waveform information object definition in DICOM has been specifically harmonized at the semantic level with the HL7 waveform message format. The use of a common object model allows straightforward transcoding and interoperation between systems that use DICOM for waveform interchange and those that use HL7, and may be viewed as an example of common semantics implemented in the differing syntaxes of two messaging systems.
HL7 allows transport of DICOM SOP Instances (information objects) encapsulated within HL7 messages. Since the DICOM and HL7 waveform semantics are harmonized, DICOM Waveform SOP Instances need not be transported as encapsulated data, as they can be transcoded to native HL7 Waveform Observation format.
The following are specific use case examples for waveforms in the imaging environment.
Case 1: Catheterization Laboratory - During a cardiac catheterization, several independent pieces of data acquisition equipment may be brought together for the exam. An electrocardiographic subsystem records surface ECG waveforms; an X-ray angiographic subsystem records motion images; a hemodynamic subsystem records intracardiac pressures from a sensor on the catheter. These subsystems send their acquired data by network to a repository, and the data are assembled at an analytic workstation by retrieving them from the repository. For a left ventriculographic procedure, the ECG is used by the physician to determine the time of maximum and minimum ventricular fill, and, when it is coordinated with the angiographic images, an accurate estimate of the ejection fraction can be calculated. For a valvuloplasty procedure, the hemodynamic waveforms are used to calculate the pre-intervention and post-intervention pressure gradients.
Case 2: Electrophysiology Laboratory - An electrophysiological exam will capture waveforms from multiple sensors on a catheter; the placement of the catheter in the heart is captured on an angiographic image. At an analytic workstation, the exact location of the sensors can thus be aligned with a model of the heart, and the relative timing of the arrival of the electrophysiological waves at different cardiac locations can be mapped.
Case 3: Stress Exam - A stress exam may involve the acquisition of both ECG waveforms and echocardiographic ultrasound images from portable equipment at different stages of the test. The waveforms and the echocardiograms are output on an interchange disk, which is then input and read at a review station. The physician analyzes both types of data to make a diagnosis of cardiac health.
Synchronization of acquisition across multiple modalities in a single study (e.g., angiography and electrocardiography) requires either a shared trigger, or a shared clock. A Synchronization Module within the Frame of Reference Information Entity specifies the synchronization mechanism. A common temporal environment used by multiple equipment is identified by a shared Synchronization Frame of Reference UID. How this UID is determined and distributed to the participating equipment is outside the scope of the standard.
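The following is a minimal sketch, assuming the pydicom library, of how a shared Synchronization Frame of Reference UID (0020,0200) might be recorded in instances acquired by different equipment; how the value is determined and distributed remains out of scope.

```python
# A sketch assuming pydicom: one Synchronization Frame of Reference UID
# is generated once and recorded in the instances produced by each piece
# of equipment sharing the same temporal environment.
from pydicom.dataset import Dataset
from pydicom.uid import generate_uid

# Determination and distribution of this value is outside the scope of
# the Standard; here it is simply generated up front for the study.
sync_uid = generate_uid()

def tag_with_sync_uid(instance: Dataset) -> Dataset:
    """Record the shared temporal environment in an acquired instance."""
    instance.SynchronizationFrameOfReferenceUID = sync_uid
    return instance

angio_image = tag_with_sync_uid(Dataset())
hemo_waveform = tag_with_sync_uid(Dataset())
# Instances carrying the same UID may be temporally aligned.
assert (angio_image.SynchronizationFrameOfReferenceUID
        == hemo_waveform.SynchronizationFrameOfReferenceUID)
```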
The method used for time synchronization of equipment clocks is implementation or site specific, and therefore outside the scope of this proposal. If required, standard time distribution protocols are available (e.g., NTP, IRIG, GPS).
An informative description of time distribution methods can be found at: http://www.bancomm.com/cntpApp.htm
A second method of synchronizing acquisitions is to utilize a common reference channel (temporal fiducial), which is recorded in the data acquired from the several equipment units participating in a study, and/or that is used to trigger synchronized data acquisitions. For instance, the "X-ray on" pulse train that triggers the acquisition of frames for an X-ray angiographic SOP Instance can be recorded as a waveform channel in a simultaneously acquired hemodynamic waveform SOP Instance, and can be used to align the different object instances. Associated with this Supplement are proposed coded entry channel identifiers to specifically support this synchronization mechanism (DICOM Terminology Mapping Resource Context Group ID 3090).
Figure C.4-1 shows a canonical model of waveform data acquisition. A patient is the subject of the study. There may be several sensors placed at different locations on or in the patient, and waveforms are measurements of some physical quality (metric) by those sensors (e.g., electrical voltage, pressure, gas concentration, or sound). The sensor is typically connected to an amplifier and filter, and its output is sampled at constant time intervals and digitized. In most cases, several signal channels are acquired synchronously. The measured signal usually originates in the anatomy of the patient, but an important special case is a signal that originates in the equipment, either as a stimulus, such as a cardiac pacing signal, as a therapy, such as a radio frequency signal used for ablation, or as a synchronization signal.
The part of the composite information object that carries the waveform data is the Waveform Information Entity (IE). The Waveform IE includes the technical parameters of waveform acquisition and the waveform samples.
The information model, or internal organizational structure, of the Waveform IE is shown in Figure C.5-1. A waveform information object includes data from a continuous time period during which signals were acquired. The object may contain several multiplex groups, each defined by digitization with the same clock whose frequency is defined for the group. Within each multiplex group there will be one or more channels, each with a full technical definition. Finally, each channel has its set of digital waveform samples.
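The following is a minimal sketch in plain Python, with illustrative names, of the internal organization just described: multiplex groups sharing one sampling clock, channels with technical definitions, and per-channel samples.

```python
# A sketch of the Waveform IE organization: an instance holds multiplex
# groups, each group shares one sampling frequency, and each channel in a
# group carries its own definition and samples.
from dataclasses import dataclass, field

@dataclass
class Channel:
    source: str                  # e.g., "Lead II"; a coded entry in DICOM
    sensitivity: float           # units per least significant bit
    samples: list[int] = field(default_factory=list)

@dataclass
class MultiplexGroup:
    sampling_frequency_hz: float  # one digitization clock per group
    channels: list[Channel] = field(default_factory=list)

@dataclass
class WaveformIE:
    multiplex_groups: list[MultiplexGroup] = field(default_factory=list)

ecg = WaveformIE(multiplex_groups=[
    MultiplexGroup(sampling_frequency_hz=500.0, channels=[
        Channel(source="Lead I", sensitivity=0.005, samples=[0, 3, 9]),
        Channel(source="Lead II", sensitivity=0.005, samples=[1, 4, 8]),
    ]),
])
```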
This Waveform IE definition is harmonized with the HL7 waveform semantic constructs, including the channel definition attributes and the use of multiplex groups for synchronously acquired channels. The use of a common object model allows straightforward transcoding and interoperation between systems that use DICOM for waveform interchange and those that use HL7, and may be viewed as an example of common semantics implemented in the differing syntaxes of two messaging systems.
This section describes the congruence between the DICOM Waveform IE and the HL7 version 2.3 waveform message format (see HL7 version 2.3 Chapter 7, sections 7.14 - 7.20).
Waveforms in HL7 messages are sent in a set of OBX (Observation) Segments. Four subtypes of OBX segments are defined:
The CHN subtype defines one channel in a CD (Channel Definition) Data Type
The TIM subtype defines the start time of the waveform data in a TS (Time String) Data Type
The WAV subtype carries the waveform data in an NA (Numeric Array) or MA (Multiplexed Array) Data Type (ASCII encoded samples, character delimited)
The ANO subtype carries an annotation in a CE (Coded Entry) Data Type with a reference to a specific time within the waveform to which the annotation applies
Other segments of the HL7 message definition specify patient and study identification, whose harmonization with DICOM constructs is not defined in this Annex.
The Waveform Module Channel Definition sequence attribute (003A,0200) is defined in harmonization with the HL7 Channel Definition (CD) Data Type, in accordance with the following Table. Each Item in the Channel Definition sequence attribute corresponds to an OBX Segment of subtype CHN.
Table C.6-1. Correspondence Between DICOM and HL7 Channel Definition
In the DICOM information object definition, the sampling frequency is defined for the multiplex group, while in HL7 it is defined for each channel, but is required to be identical for all multiplexed channels.
Note that in the HL7 syntax, Waveform Source is a string, rather than a coded entry as used in DICOM. This should be considered in any transcoding between the two formats.
In HL7, the exact start time for waveform data is sent in an OBX Segment of subtype TIM. The corresponding DICOM attributes, which must be combined to form the equivalent time string, are:
The DICOM binary encoding of data samples in the Waveform Data attribute (5400,1010) corresponds to the ASCII representation of data samples in the HL7 OBX Segment of subtype WAV. The same channel-interleaved multiplexing used in the HL7 MA (Multiplexed Array) Data Type is used in the DICOM Waveform Data attribute.
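The following is a minimal sketch in Python of this channel-interleaved ordering; the HL7 delimiter usage shown is illustrative.

```python
# A sketch of the channel-interleaved ordering shared by the two
# encodings: DICOM stores the interleaved samples in binary in Waveform
# Data (5400,1010); HL7 renders the same ordering as delimited ASCII.
import struct

channel_a = [10, 20, 30]
channel_b = [11, 21, 31]

# Interleave: a1, b1, a2, b2, ... (one sample per channel per time point)
interleaved = [s for point in zip(channel_a, channel_b) for s in point]

dicom_binary = struct.pack(f"<{len(interleaved)}h", *interleaved)  # 16-bit LE
hl7_ma = "~".join("^".join(str(s) for s in point)
                  for point in zip(channel_a, channel_b))

print(dicom_binary.hex())  # 0a000b0014001500... (binary, interleaved)
print(hl7_ma)              # 10^11~20^21~30^31 (ASCII, same ordering)
```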
Because of its binary representation, DICOM uses several data elements to specify the precise encoding, as listed in the following Table. There are no corresponding HL7 data elements, since HL7 uses explicit character-delimited ASCII encoding of data samples.
In HL7, Waveform Annotation is sent in an OBX Segment of subtype ANO, using the CE (Coded Entry) Data Type. This corresponds precisely to the DICOM Annotation using Coded Entry Sequences. However, an HL7 annotation refers only to a single point in time, while DICOM allows reference to ranges of samples delimited by time or by explicit sample position.
The SCP-ECG standard is designed for recording routine resting electrocardiograms. Such ECGs are reviewed prior to cardiac imaging procedures, and a typical use case would be for SCP-ECG waveforms to be translated to DICOM for inclusion with the full cardiac imaging patient record.
SCP-ECG provides for either simultaneous or non-simultaneous recording of the channels, but does not provide a multiplexed data format (each channel is separately encoded). When translating to DICOM, each subset of simultaneously recorded channels may be encoded in a Waveform Sequence Item (multiplex group), and the delay to the recording of each multiplex group shall be encoded in the Multiplex Group Time Offset (0018,1068).
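The following is a minimal sketch in plain Python, not a full SCP-ECG parser, of grouping simultaneously recorded channels into multiplex groups and assigning each group's delay; lead names and delays are hypothetical.

```python
# A sketch: channels recorded simultaneously share one multiplex group;
# a group recorded later carries its delay in Multiplex Group Time
# Offset (0018,1068), in milliseconds.
scp_channels = [
    {"lead": "I",  "recorded_at_ms": 0},
    {"lead": "II", "recorded_at_ms": 0},
    {"lead": "V1", "recorded_at_ms": 10000},  # recorded 10 s later
]

groups: dict[int, list[dict]] = {}
for channel in scp_channels:
    groups.setdefault(channel["recorded_at_ms"], []).append(channel)

waveform_sequence = [
    {"MultiplexGroupTimeOffset": offset,   # milliseconds of delay
     "leads": [c["lead"] for c in channels]}
    for offset, channels in sorted(groups.items())
]
print(waveform_sequence)
# [{'MultiplexGroupTimeOffset': 0, 'leads': ['I', 'II']},
#  {'MultiplexGroupTimeOffset': 10000, 'leads': ['V1']}]
```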
The electrode configuration of SCP-ECG Section 1 may be translated to the DICOM Acquisition Context (0040,0555) sequence items using TID 3401 “ECG Acquisition Context” and Context Groups 3263 and 3264.
The lead identification of SCP-ECG Section 3, a term coded as an unsigned integer, may be translated to the DICOM Waveform Channel Source (003A,0208) coded sequence using CID 3001 “ECG Leads”.
Pacemaker spike records of SCP-ECG Section 7 may be translated to items in the Waveform Annotations Sequence (0040,B020) with a code term from CID 3335 “ECG Annotations”. The annotation sequence item may record the spike amplitude in its Numeric Value and Measurement Units attributes.
This Annex was formerly located in Annex K “SR Encoding Example (Informative)” in PS3.3 in the 2003 and earlier revisions of the standard.
The following is a simple and non-comprehensive illustration of the encoding of the Informative SR Content Tree Example in PS3.3.
This Annex was formerly located in Annex L “Mammography CAD (Informative)” in PS3.3 in the 2003 and earlier revisions of the standard.
The templates for the Mammography CAD SR IOD are defined in Mammography CAD SR IOD Templates in PS3.16. Relationships defined in the Mammography CAD SR IOD templates are by-value, unless otherwise stated. Content items referenced from another SR object instance, such as a prior Mammography CAD SR, are inserted by-value in the new SR object instance, with appropriate original source observation context. Within content items paraphrased from another source, it is necessary to update the Rendering Intent and the referenced content item identifiers of by-reference relationships.
The Document Root, Image Library, Summaries of Detections and Analyses, and CAD Processing and Findings Summary sub-trees together form the content tree of the Mammography CAD SR IOD. There are no constraints regarding the 1-n multiplicity of the Individual Impression/Recommendation or its underlying structure, other than the TID 4001 “Mammography CAD Overall Impression/Recommendation” and TID 4003 “Mammography CAD Individual Impression/Recommendation” requirements in PS3.16. Individual Impression/Recommendation containers may be organized, for example per image, per finding or composite feature, or some combination thereof.
The Summary of Detections and Summary of Analyses sub-trees identify the algorithms used and the work done by the CAD device, and whether or not each process was performed on one or more entire images or selected regions of images. The findings of the detections and analyses are not encoded in the summary sub-trees, but rather in the CAD Processing and Findings Summary sub-tree. CAD processing may produce no findings, in which case the sub-trees of the CAD Processing and Findings Summary sub-tree are incompletely populated. This occurs in the following situations:
If the tree contains no Individual Impression/Recommendation nodes and all attempted detections and analyses succeeded then the mammography CAD device made no findings.
Detections and Analyses that are not attempted are not listed in the Summary of Detections and Summary of Analyses trees.
If the code value of the Summary of Detections or Summary of Analyses codes in TID 4000 “Mammography CAD Document Root” is "Not Attempted" then no detail is provided as to which algorithms were not attempted.
Figure E.1-3. Example of Individual Impression/Recommendation Levels of Mammography CAD SR Content Tree
The shaded area in Figure E.1-3 demarcates information resulting from Detection, whereas the unshaded area is information resulting from Analysis. This distinction is used in determining whether to place algorithm identification information in the Summary of Detections or Summary of Analyses sub-trees.
The clustering of calcifications within a single image is considered to be a Detection process that results in a Single Image Finding. The spatial correlation of a calcification cluster in two views, resulting in a Composite Feature, is considered Analysis. The clustering of calcifications in a single image is the only circumstance in which a Single Image Finding can result from the combination of other Single Image Findings, which must be Individual Calcifications.
Once a Single Image Finding or Composite Feature has been instantiated, it may be referenced by any number of Composite Features higher in the tree.
Any content item in the Content tree that has been inserted (i.e., duplicated) from another SR object instance has a HAS OBS CONTEXT relationship to one or more content items that describe the context of the SR object instance from which it originated. This mechanism may be used to combine reports (e.g., Mammography CAD 1, Mammography CAD 2, Human).
By-reference relationships within Single Image Findings and Composite Features paraphrased from prior Mammography CAD SR objects need to be updated to properly reference Image Library Entries carried from the prior object to their new positions in the present object.
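The following is a minimal sketch in Python of this bookkeeping; the node identifiers and mapping values are hypothetical.

```python
# A sketch: when a finding is copied from a prior SR object, the targets
# of its by-reference relationships (content item identifiers such as
# "1.2.2") must be rewritten to those nodes' positions in the new tree.
prior_to_new = {
    "1.2.2": "1.3.5",      # Image Library entry moved in the new tree
    "1.4.1.2": "1.5.2.2",
}

def remap_reference(prior_id: str) -> str:
    try:
        return prior_to_new[prior_id]
    except KeyError:
        raise ValueError(f"no new position recorded for node {prior_id}")

copied_finding = {"relationship": "R-INFERRED FROM",
                  "referenced_content_item": remap_reference("1.2.2")}
print(copied_finding)  # now references "1.3.5" in the present object
```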
The Impression/Recommendation section of the SR Document Content tree of a Mammography CAD SR IOD may contain a mixture of current and prior single image findings and composite features. The content items from current and prior contexts are target content items that have a by-value INFERRED FROM relationship to a Composite Feature content item. Content items that come from a context other than the Initial Observation Context have a HAS OBS CONTEXT relationship to target content items that describe the context of the source document.
In Figure E.2-1, Composite Feature and Single Image Finding are current, and Single Image Finding (from Prior) is duplicated from a prior document.
The following is a simple and non-comprehensive illustration of an encoding of the Mammography CAD SR IOD for Mammography computer aided detection results. For brevity, some Mandatory content items are not included, such as several acquisition context content items for the images in the Image Library.
A mammography CAD device processes a typical screening mammography case, i.e., there are four films and no cancer. Mammography CAD runs both density and calcification detection successfully and finds nothing. The mammograms resemble:
The content tree structure would resemble:
A mammography CAD device processes a screening mammography case with four films and a mass in the left breast. Mammography CAD runs both density and calcification detection successfully. It finds two densities in the LCC, one density in the LMLO, a cluster of two calcifications in the RCC and a cluster of 20 calcifications in the RMLO. It performs two clustering algorithms. One identifies individual calcifications and then clusters them, and the second simply detects calcification clusters. It performs mass correlation and combines one of the LCC densities and the LMLO density into a mass; the other LCC density is flagged "Not for Presentation" and is therefore not intended for display to the end-user. The mammograms resemble:
The content tree structure in this example is complex. Structural illustrations of portions of the content tree are placed within the content tree table to show the relationships of data within the tree. Some content items are duplicated (and shown in boldface) to facilitate use of the diagrams.
The patient in Example 2 returns for another mammogram. A more comprehensive mammography CAD device processes the current mammogram; analyses are performed that determine some content items for Overall and Individual Impression/Recommendations. Portions of the prior mammography CAD report (Example 2) are incorporated into this report. In the current mammogram the number of calcifications in the RCC has increased, and the size of the mass in the left breast has increased from 1 cm² to 4 cm².
Italicized entries (xxx) in the following table denote references to or by-value inclusion of content tree items reused from the prior Mammography CAD SR instance (Example 2).
While the Image Library contains references to content tree items reused from the prior Mammography CAD SR instance, the images are actually used in the mammography CAD analysis and are therefore not italicized as indicated above.
Included content from prior mammography CAD report (see Example 2, starting with node 1.2.1.2)
Included content from prior mammography CAD report (see Example 2, starting with node 1.2.4.2)
Computer-aided detection algorithms often compute an internal "CAD score" for each Single Image Finding detected by the algorithm. In some implementations the algorithms then group the findings into "bins" as a function of their CAD score. The number of bins is a function of the algorithm and the manufacturer's implementation, and must be one or more. The bins allow an application that is displaying CAD marks to provide a number of operating points on the Free-response Receiver-Operating Characteristic (FROC) curve for the algorithm, as illustrated in Figure E.4-1.
This is accomplished by displaying all CAD marks of Rendering Intent "Presentation Required" or "Presentation Optional" according to the following rules (a sketch in code follows the rules):
if the display application's Operating Point is 0, only marks with a Rendering Intent = "Presentation Required" are displayed
if the display application's Operating Point is 1, then marks with a Rendering Intent = "Presentation Required" and marks with a Rendering Intent = "Presentation Optional" with a CAD Operating Point = 1 are displayed
if the display application's Operating Point is n, then marks with a Rendering Intent = "Presentation Required" and marks with a Rendering Intent = "Presentation Optional" with a CAD Operating Point <= n are displayed
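The following is a minimal sketch in Python of these rules; the mark structure is a simplified stand-in for the encoded SR content items.

```python
# A sketch of the display rules above: given the application's operating
# point n, show all "Presentation Required" marks plus any "Presentation
# Optional" marks whose CAD Operating Point is <= n.
from dataclasses import dataclass

@dataclass
class CadMark:
    rendering_intent: str        # "Presentation Required" | "Presentation Optional"
    cad_operating_point: int = 0

def marks_to_display(marks: list[CadMark], operating_point: int) -> list[CadMark]:
    shown = []
    for mark in marks:
        if mark.rendering_intent == "Presentation Required":
            shown.append(mark)
        elif (mark.rendering_intent == "Presentation Optional"
              and mark.cad_operating_point <= operating_point):
            shown.append(mark)
    return shown

marks = [CadMark("Presentation Required"),
         CadMark("Presentation Optional", cad_operating_point=1),
         CadMark("Presentation Optional", cad_operating_point=2)]
assert len(marks_to_display(marks, 0)) == 1  # required marks only
assert len(marks_to_display(marks, 2)) == 3  # all marks at point 2
```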
If a Mammography CAD SR Instance references Digital Mammography X-ray Image Storage - For Processing Instances, but a review workstation has access only to Digital Mammography X-Ray Image Storage - For Presentation Instances, the following steps are recommended in order to display such Mammography CAD SR content with Digital Mammography X-Ray Image - For Presentation Instances.
In most scenarios, the Mammography CAD SR Instance is assigned to the same DICOM Patient and Study as the corresponding Digital Mammography "For Processing" and "For Presentation" image Instances.
If a workstation has a Mammography CAD SR Instance, but does not have images for the same DICOM Patient and Study, the workstation may use the Patient and Study attributes of the Mammography CAD SR Instance in order to Query/Retrieve the Digital Mammography "For Presentation" images for that Patient and Study.
Once a workstation has the Mammography CAD SR Instance and Digital Mammography "For Presentation" image Instances for the Patient and Study, the Source Image Sequence (0008,2112) attribute of each Digital Mammography "For Presentation" Instance will reference the corresponding Digital Mammography "For Processing" Instance. The workstation can match the referenced Digital Mammography "For Processing" Instance to a Digital Mammography "For Processing" Instance referenced in the Mammography CAD SR.
The workstation should check for Spatial Locations Preserved (0028,135A) in the Source Image Sequence of each Digital Mammography "For Presentation" image Instance, to determine whether it is spatially equivalent to the corresponding Digital Mammography "For Processing" image Instance.
If the value of Spatial Locations Preserved (0028,135A) is YES, then the CAD results should be displayed.
If the value of Spatial Locations Preserved (0028,135A) is NO, then the CAD results should not be displayed.
If Spatial Locations Preserved (0028,135A) is not present, whether or not the images are spatially equivalent is not known. If the workstation chooses to proceed with attempting to display CAD results, then compare the Image Library (see TID 4020 “CAD Image Library Entry”) content item values of the Mammography CAD SR Instance to the associated attribute values in the corresponding Digital Mammography "For Presentation" image Instance. The content items (111044, DCM, "Patient Orientation Row"), (111043, DCM, "Patient Orientation Column"), (111026, DCM, "Horizontal Pixel Spacing"), and (111066, DCM, "Vertical Pixel Spacing") may be used for this purpose. If the values do not match, the workstation needs to adjust the coordinates of the findings in the Mammography CAD SR content to match the spatial characteristics of the Digital Mammography "For Presentation" image Instance.
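The following is a minimal sketch in Python of the checks above, using plain dictionaries as simplified stand-ins for the Source Image Sequence item of the "For Presentation" instance and the SR Image Library entry; all values are hypothetical.

```python
# A sketch of the recommended checks before displaying Mammography CAD SR
# content with a "For Presentation" image.
def displayable_without_adjustment(source_item: dict, library_entry: dict) -> bool:
    preserved = source_item.get("SpatialLocationsPreserved")
    if preserved == "YES":
        return True
    if preserved == "NO":
        return False
    # Attribute absent: spatial equivalence is unknown. Compare the Image
    # Library content item values with the image's own values; if they
    # differ, the finding coordinates must first be adjusted.
    keys = ("PatientOrientationRow", "PatientOrientationColumn",
            "HorizontalPixelSpacing", "VerticalPixelSpacing")
    return all(source_item.get(k) == library_entry.get(k) for k in keys)

source_item = {"PatientOrientationRow": "A", "PatientOrientationColumn": "R",
               "HorizontalPixelSpacing": 0.094, "VerticalPixelSpacing": 0.094}
library_entry = dict(source_item)
print(displayable_without_adjustment(source_item, library_entry))  # True
```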
This Annex was formerly located in Annex M “Chest CAD (Informative)” in PS3.3 in the 2003 and earlier revisions of the standard.
The templates for the Chest CAD SR IOD are defined in Annex A “Structured Reporting Templates (Normative)” in PS3.16. Relationships defined in the Chest CAD SR IOD templates are by-value, unless otherwise stated. Content items referenced from another SR object instance, such as a prior Chest CAD SR, are inserted by-value in the new SR object instance, with appropriate original source observation context. Within content items paraphrased from another source, it is necessary to update the Rendering Intent and the referenced content item identifiers of by-reference relationships.
The Document Root, Image Library, CAD Processing and Findings Summary, and Summaries of Detections and Analyses sub-trees together form the content tree of the Chest CAD SR IOD. See Annex E for additional explanation of the Summaries of Detections and Analyses sub-trees.
The shaded area in Figure F.1-2 demarcates information resulting from Detection, whereas the unshaded area is information resulting from Analysis. This distinction is used in determining whether to place algorithm identification information in the Summary of Detections or Summary of Analyses sub-trees.
The identification of a lung nodule within a single image is considered to be a Detection, which results in a Single Image Finding. The temporal correlation of a lung nodule in two instances of the same view taken at different times, resulting in a Composite Feature, is considered Analysis.
Once a Single Image Finding or Composite Feature has been instantiated, it may be referenced by any number of Composite Features higher in the CAD Processing and Findings Summary sub-tree.
Any content item in the Content tree that has been inserted (i.e., duplicated) from another SR object instance has a HAS OBS CONTEXT relationship to one or more content items that describe the context of the SR object instance from which it originated. This mechanism may be used to combine reports (e.g., Chest CAD SR 1, Chest CAD SR 2, Human).
By-reference relationships within Single Image Findings and Composite Features paraphrased from prior Chest CAD SR objects need to be updated to properly reference Image Library Entries carried from the prior object to their new positions in the present object.
The CAD Processing and Findings Summary section of the SR Document Content tree of a Chest CAD SR IOD may contain a mixture of current and prior single image findings and composite features. The content items from current and prior contexts are target content items that have a by-value INFERRED FROM relationship to a Composite Feature content item. Content items that come from a context other than the Initial Observation Context have a HAS OBS CONTEXT relationship to target content items that describe the context of the source document.
In Figure F.2-1, Composite Feature and Single Image Finding are current, and Single Image Finding (from Prior) is duplicated from a prior document.
The following is a simple and non-comprehensive illustration of an encoding of the Chest CAD SR IOD for chest computer aided detection results. For brevity, some mandatory content items are not included, such as several acquisition context content items for the images in the Image Library.
A chest CAD device processes a typical screening chest case, i.e., there is one image and no nodule findings. Chest CAD runs lung nodule detection successfully and finds nothing.
The chest radiograph resembles:
The content tree structure would resemble:
A chest CAD device processes a screening chest case with one image, and a lung nodule detected. The chest radiograph resembles:
The content tree structure in this example is complex. Structural illustrations of portions of the content tree are placed within the content tree table to show the relationships of data within the tree. Some content items are duplicated (and shown in boldface) to facilitate use of the diagrams.
The content tree structure would resemble:
The patient in Example 2 returns for another chest radiograph. A more comprehensive chest CAD device processes the current chest radiograph, and analyses are performed that determine some temporally related content items for Composite Features. Portions of the prior chest CAD report (Example 2) are incorporated into this report. In the current chest radiograph the lung nodule has increased in size.
Italicized entries (xxx) in the following table denote references to or by-value inclusion of content tree items reused from the prior Chest CAD SR instance (Example 2).
While the Image Library contains references to content tree items reused from the prior Chest CAD SR instance, the images are actually used in the chest CAD analysis and are therefore not italicized as indicated above.
The CAD processing and findings consist of one composite feature, composed of single image findings, one from each year. The temporal relationship allows a quantitative temporal difference to be calculated:
The patient in Example 3 is called back for CT to confirm the Lung Nodule found in Example 3. The patient undergoes CT of the Thorax and the initial chest radiograph and CT slices are sent to a more comprehensive CAD device for processing. Findings are detected and analyses are performed that correlate findings from the two data sets. Portions of the prior CAD report (Example 3) are incorporated into this report.
Italicized entries (xxx) in the following table denote references to or by-value inclusion of content tree items reused from the prior Chest CAD SR instance (Example 3).
While the Image Library contains references to content tree items reused from the prior Chest CAD SR instance, the images are actually used in the CAD analysis and are therefore not italicized as indicated above.
Most recent examination content:
This Annex was formerly located in Annex N “Explanation of Grouping Criteria for Multi-frame Functional Group IODs (Informative)” in PS3.3 in the 2003 and earlier revisions of the standard.
When considering how to group an attribute, one needs to consider first of all whether the values of the attribute differ per frame. The reasons to consider whether to allow an attribute to change include:
The more attributes that change, the more parsing a receiving application has to do in order to determine if the multi-frame object has frames the application should deal with. The more choices, the more complex the application becomes, potentially resulting in interoperability problems.
The frequency of change of an attribute must also be considered. If an attribute could be changed every frame then obviously it is not a very good candidate for making it fixed, since this would result in a multi-frame size of 1.
The number of applications that depend on frame level attribute grouping is another consideration. For example, one might imagine a pulse sequence being changed in a real-time acquisition, but the vast majority of acquisitions would leave this constant. Therefore, it was judged not too large a burden to force an acquisition device to start a new object when this happens. Obviously, this is a somewhat subjective decision, and one should take a close look at the attributes that are required to be fixed in this document.
The attributes from the image pixel module must not change in a multi-frame object due to legacy tool kits and implementations.
The potential frequency of change depends on the applications both now and likely during the life of this standard. The penalty for failing to allow an attribute to change is rather high, since it will be hard or impossible to change later. Conversely, making an attribute variable when it is actually static is more complex and could result in more header space usage, depending on how it is grouped. Thus there is a trade-off between complexity and potentially larger header size on one side, and the inability to take advantage of the multi-frame organization for an application that requires changes per frame on the other.
Once it is decided which attributes should be changed within a multi-frame object then one needs to consider the criteria for grouping attributes together:
Groupings should be designed so that attributes likely to vary together are in the same sequence. The goal is to avoid the case where attributes that are mostly static have to be included in a sequence that is repeated for every frame.
Care should be taken to define a manageable number of grouping sequences. Too few sequences could result in many static attributes being repeated for each frame whenever some other element in their sequence varies, while too many sequences become unwieldy.
The groupings should be designed such that modality-independent attributes are kept separate from those that are MR specific. This will presumably allow future working groups to reuse the more general groupings. It should also allow software that operates on multi-frame objects from multiple modalities to maximize code reuse.
Grouping related attributes together could convey some semantics of the overall contents of the multi-frame object to receiving applications. For instance, if a volumetric application finds the Plane Orientation Macro present in the Per-frame Functional Groups Sequence, it may decide to reject the object as inappropriate for volumetric calculations.
Specific notes on attribute grouping:
Attributes not allowed to change: Image Pixel Module attributes (due to legacy toolkit concerns) and Pulse Sequence Module attributes (these normally do not change except in real-time imaging; it is expected that real-time applications can handle the complexity and speed of starting new IODs when the pulse sequence changes).
Sequences not starting with the word "MR" could be applied to more modalities than just MR.
All attributes that must be in a frame header were placed in the Frame Content Macro.
Position and orientation are in separate sequences since they are changed independently.
For real-time sequences there are contrast mechanisms that can be applied to base pulse sequences and are turned on and off by the operator depending on the anatomy being imaged and the time/contrast trade-off associated with these. Such modifiers include: IR, flow compensation, spoiled, MT, and T2 preparation… These probably are not changed in non-real-time scans. These are all kept in the MR Modifier Macro.
"Number of Averages" attributes is in its own sequence because real-time applications may start a new averaging process every time a slice position/orientation changes. Each subsequent frame will average with the preceding N frames where N is chosen based on motion and time. Each frame collected at a particular position/orientation will have a different number of averages, but all other attributes are likely to remain the same. This particular application drives this attribute being in its own group.
This Annex was formerly located in Annex O “Clinical Trial Identification Workflow Examples (Informative)” in PS3.3 in the 2003 and earlier revisions of the standard.
The Clinical Trial Identification modules are optional. As such, there are several points in the workflow of clinical trial or research data at which the Clinical Trial Identification attributes may be added to the data. At the Clinical Trial Site, the attributes may be added at the scanner, a PACS system, a site workstation, or a workstation provided to the site by a Clinical Trial Coordinating Center. If not added at the site, the Clinical Trial Identification attributes may be added to the data after receipt by the Clinical Trial Coordinating Center. The addition of clinical trial attributes does not itself require changes to the SOP Instance UID. However, the clinical trial or research protocol or the process of de-identification may require such a change.
Images are obtained for the purpose of comparing patients treated with placebo or the drug under test, then evaluated in a blinded manner by a team of radiologists at the Clinical Trial Coordinating Center (CTCC). The images are obtained at the clinical sites, collected by the CTCC, at which time their identifying attributes are removed and the Clinical Trial Identification (CTI) module is added. The de-identified images with the CTI information are then presented to the radiologists who make quantitative and/or qualitative assessments. The assessments, and in some cases the images, are returned to the sponsor for analysis, and later are contributed to the submission to the regulating authority.
The templates for ultrasound reports are defined in Annex A “Structured Reporting Templates (Normative)” in PS3.16. Figure I.1-1 is an outline of the common elements of ultrasound structured reports.
The Patient Characteristics Section is for medical data of immediate relevance to administering the procedure and interpreting the results. This information may originate outside the procedure.
The Procedure Summary Section contains exam observations of immediate or primary significance. This is key information a physician typically expects to see first in the report.
Measurements typically reside in a measurement group container within a Section. Measurement groups share context such as anatomical location, protocol or type of analysis. The grouping may be specific to a product implementation or even to a user configuration. OB-GYN measurement groups have related measurements, averages and other derived results.
If present, the Image Library contains a list of images from which observations were derived. These are referenced from the observations with by-reference relationships.
The Procedure Summary Section contains the observations of most immediate interest. Observations in the procedure summary may have by-reference relationships to other content items.
Where multiple fetuses exist, the observations specific to each fetus must reside under separate section headings. The section heading must specify the fetus observation context, designated using Subject ID (121030, DCM, "Subject ID") and/or a numerical designation (121037, DCM, "Fetus Number") as shown below. See TID 1008 “Subject Context, Fetus”.
Reports may specify the dependency of a calculation on its source observations using by-reference relationships. This relationship must be present for the report reader to know the inputs of the derived value.
Optionally, the relationship of an observation to its image and image coordinates can be encoded with by-reference content items as Figure I.5-1 shows. For conciseness, the by-reference relationship points to the content item in the Image Library, rather than directly to the image.
R-INFERRED FROM relationships to IMAGE content items specify that the image supports the observation. A purpose of reference in an SCOORD content item may specify an analytic operation (performed on that image) that supports or produces the observation.
A common OB-GYN pattern is that of several instances of one measurement type (e.g., BPD), the calculated average of those values, and derived values such as a gestational age calculated according to an equation or table. The measurements and calculations are all siblings in the measurement group. A child content item specifies the equation or table used to calculate the gestational age. All measurement types must relate to the same biometric type. For example, it is not allowed to mix a BPD and a Nuchal Fold Thickness measurement in the same biometry group.
The example above shows a gestational age calculated from the measured value. The relationship is to an equation or table; the inferred-from relationship identifies the equation or table in its Concept Name. Codes from CID 12013 “Gestational Age Equations and Tables” identify the specific equation or table.
Another use case is the calculation of a growth parameter's relationship to that of a referenced distribution and a known or assumed gestational age. CID 12015 “Fetal Growth Equations and Tables” identifies the growth table. Figure I.6-2 shows the assignment of a percentile for the measured BPD, against the growth of a referenced population. The dependency relationship to the gestational age is a by-reference relationship to the established gestational age. Though the percentile rank is derived from the BPD measurement, a by-reference relationship is not essential if one BPD has a concept modifier indicating that it is the mean or has selection status (see TID 300 “Measurement”). A variation of this pattern is the use of Z-score instead of percentile rank. Not shown is the expression of the normal distribution mean, standard deviation, or confidence limits.
Estimated fetal weight (EFW) is a fetus summary item as shown below. It is calculated from one or more growth parameters (the inferred from relationships are not shown). TID 315 “Equation or Table” allows specifying how the value was derived. Terms from CID 12014 “OB Fetal Body Weight Equations and Tables” specify the table or equation that yields the EFW from growth parameters.
"EFW percentile rank" is another summary term. By definition, this term depends upon the EFW and the population distribution of the ranking. A Reference Authority content item identifies the distribution. CID 12016 “Estimated Fetal Weight Percentile Equations and Tables” is list of established reference authorities.
When multiple observations of the same type exist, one of these may be the selected value. Typically, this value is the average of the others, or it may be the last entered, or user chosen. TID 310 “Measurement Properties” provides a content item with concept name of (121404, DCM, "Selection Status") and a value set specified by DCID 224 “Selection Method”.
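The following is a minimal sketch in Python of a selected value produced as the mean of several measurement instances; the measurement values are hypothetical, and "Mean" stands in for the corresponding coded concept.

```python
# A sketch: several instances of one measurement type, with the selected
# value produced as their mean and labeled with a Selection Status
# concept (121404, DCM, "Selection Status").
from statistics import mean

bpd_measurements_mm = [46.1, 46.4, 45.9]   # hypothetical BPD instances
selected = {
    "value_mm": round(mean(bpd_measurements_mm), 1),
    "selection_status": "Mean",            # illustrative selection method
}
print(selected)  # {'value_mm': 46.1, 'selection_status': 'Mean'}
```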
There are multiple ways in which a measurement may originate. The measurement value may be produced as the output of an interactive image measurement tool. Alternatively, the user may directly enter the value, or the system may create a value automatically as the mean of multiple measurement instances. TID 300 “Measurement” provides for a concept modifier of the numeric content item that specifies the derivation of the measurement. The concept name of the modifier is (121401, DCM, "Derivation"). CID 3627 “Measurement Type” provides appropriate measurement modifier concepts. Figure I.7-2 illustrates such a case.
The following are simple, non-comprehensive illustrations of report sections.
The following example shows the highest level of content items for a second or third trimester OB exam. Subsequent examples show details of section content.
The following example shows the highest level of content items for a GYN exam. Subsequent examples show details of section content.
Optionally, but not shown, the ratios may have by-reference, inferred-from relationships to the content items holding the numerator and denominator values.
This example shows measurements and estimated gestational age.
This example shows measurements with percentile ranking.
The content structure in the example below conforms to TID 5012 “Ovaries Section”. The example shows the volume derived from three perpendicular diameters.
The content structure in the example below conforms to TID 5013 “Follicles Section”. It uses multiple measurements and derived averages for each of the perpendicular diameters.
This Annex was formerly located in Annex M “Handling of Identifying Parameters (Informative)” in PS3.4 in the 2003 and earlier revisions of the standard.
The DICOM Standard was published in 1993 and addresses the communication of medical images between medical modalities, workstations and other medical devices, as well as data exchange between medical devices and the Information System (IS). DICOM defines SOP Instances with Patient, Visit and Study information managed by the Information System and allows the Attribute values of these objects to be communicated.
Since the publication of the DICOM Standard, great effort has been made to harmonize the Information Model of the DICOM Standard with the models of other relevant standards, especially with the HL7 model and the CEN TC 251 WG3 PT 022 model. The result of these efforts is a better understanding of the various practical situations in hospitals and an adaptation of the model to these situations. In the discussion of models, the definition of Information Entities and their Identifying Parameters plays a very important role.
The purpose of this Informative Annex is to show which identifying parameters may be included in Image SOP Instances and their related Modality Performed Procedure Step (MPPS) SOP Instance. Different scenarios are elucidated to describe varying levels of integration of the Modality with the Information System, as well as situations in which a connection is temporarily unavailable.
In this Annex, "Image SOP Instance" is used as a collective term for all Composite Image Storage SOP Instances.
The scenarios described here are informative and do not constitute a normative section of the DICOM Standard.
"Integrated" means in this context that the Acquisition Modality is connected to an Information System or Systems that may be an SCP of the Modality Worklist SOP Class or an SCP of the Modality Performed Procedure Step SOP Class or both. In the following description only the behavior of "Modalities" is mentioned, it goes without saying that the IS must conform to the same SOP Classes.
The Modality receives identifying parameters by querying the Modality Worklist SCP and generates other Attribute values during image generation. It is desirable that these identifying parameters be included in the Image SOP Instances as well as in the MPPS object in a consistent manner. In the case of a Modality that is integrated but unable to receive or send identifying parameters, e.g., link down, emergency case, the Modality may behave as if it were not integrated.
The Study Instance UID is a crucial Attribute that is used to relate Image SOP Instances (whose Study is identified by their Study Instance UID), the Modality PPS SOP Instance that contains it as a reference, and the actual or conceptual Requested Procedure (i.e., Study) and related Imaging Service Request in the IS. An IS that manages an actual or conceptual Detached Study Management entity is expected to be able to relate this Study Instance UID to the SOP Instance UID of the Detached Study Management SOP Instance, whether the Study Instance UID is provided by the IS or generated by the modality.
For a detailed description of an integrated environment see the IHE Radiology Technical Framework. This document can be obtained at http://www.ihe.net/
N-CREATE a MPPS SOP Instance and include its SOP Instance UID in the Image SOP Instances within the Referenced Performed Procedure Step Sequence Attribute.
Copy the following Attribute values from the Modality Worklist information into the Image SOP Instances and into the related MPPS SOP Instance:
Create the following Attribute value and include it into the Image SOP Instances and the related MPPS SOP Instance:
Include the following Attribute values that may be generated during image acquisition, if supported, into the Image SOP Instances and the related MPPS SOP Instance:
In the absence of the ability to N-CREATE a MPPS SOP Instance, generate a MPPS SOP Instance UID and include it into the Referenced Performed Procedure Step Sequence Attribute of the Image SOP Instances. A system that later N-CREATEs a MPPS SOP Instance may use this UID extracted from the related Image SOP Instances.
Copy the following Attribute values from the Modality Worklist information into the Image SOP Instances:
Create the following Attribute value and include it into the Image SOP Instances:
A system that later N-CREATEs a MPPS SOP Instance may use this Attribute value extracted from the related Image SOP Instances.
A system that later N-CREATEs a MPPS SOP Instance may use these Attribute values extracted from the related Image SOP Instances.
N-CREATE a MPPS SOP Instance and include its SOP Instance UID in the Image SOP Instances within the Referenced Performed Procedure Step Sequence Attribute.
Create the following Attribute values and include them in the Image SOP Instances and the related MPPS SOP Instance:
Copy the following Attribute values, if available to the Modality, into the Image SOP Instances and into the related MPPS SOP Instance:
If sufficient identifying information is included, it will allow the Image SOP Instances and the MPPS SOP Instance to be later related to the Requested Procedure and the actual or conceptual Detached Study Management entity.
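The following informative Python sketch (using the pydicom library; it is not part of the Standard) illustrates how an integrated modality might keep these identifiers consistent between an Image SOP Instance and the related MPPS SOP Instance. The Attribute subset shown and the helper name are illustrative assumptions; the normative Attribute lists are defined by the relevant SOP Classes.

from pydicom.dataset import Dataset
from pydicom.uid import generate_uid

MPPS_SOP_CLASS_UID = "1.2.840.10008.3.1.2.3.3"  # Modality Performed Procedure Step

def propagate_identifiers(worklist_item, image, mpps):
    # Use identifiers from the Modality Worklist when available; otherwise
    # generate them locally (the non-integrated / link-down behavior).
    study_uid = worklist_item.get("StudyInstanceUID") or generate_uid()
    accession = worklist_item.get("AccessionNumber", "")

    image.StudyInstanceUID = study_uid
    image.AccessionNumber = accession

    # The MPPS carries the same identifiers in its Scheduled Step Attributes Sequence
    sched = Dataset()
    sched.StudyInstanceUID = study_uid
    sched.AccessionNumber = accession
    mpps.ScheduledStepAttributesSequence = [sched]

    # Generate the MPPS SOP Instance UID if it does not yet exist, then
    # reference it from the Image SOP Instance so both can later be related
    if "SOPInstanceUID" not in mpps:
        mpps.SOPInstanceUID = generate_uid()
    ref = Dataset()
    ref.ReferencedSOPClassUID = MPPS_SOP_CLASS_UID
    ref.ReferencedSOPInstanceUID = mpps.SOPInstanceUID
    image.ReferencedPerformedProcedureStepSequence = [ref]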
"Non-Integrated" means in this context that the Acquisition Modality is not connected to an Information System Systems, does not receive Attribute values from an SCP of the Modality Worklist SOP Class, and cannot create a Performed Procedure Step SOP Instance.
In the absence of the ability to N-CREATE a MPPS SOP Instance, generate a MPPS SOP Instance UID and include it into the Referenced Performed Procedure Step Sequence Attribute of the Image SOP Instances. A system that later N-CREATEs a MPPS SOP Instance may use this UID extracted from the related Image SOP Instances.
Create the following Attribute values and include them in the Image SOP Instances:
A system that later N-CREATEs a MPPS SOP Instance may use these Attribute values extracted from the related Image SOP Instances.
If sufficient identifying information is included, it will allow the Image SOP Instances to be later related to the Requested Procedure and the actual or conceptual Detached Study Management entity.
A system that later N-CREATEs a MPPS SOP Instance may use these Attribute values extracted from the related Image SOP Instances.
In the MPPS SOP Instance, all the specific Attributes of a Scheduled Procedure Step or Steps are included in the Scheduled Step Attributes Sequence. In the Image SOP Instances, these Attributes may be included in the Request Attributes Sequence. This is an optional Sequence in order not to change the definition of existing SOP Classes by adding new required Attributes or changing the meaning of existing Attributes.
Both Sequences may have more than one Item if more than one Requested Procedure results in a single Performed Procedure Step.
Because of the definitions of existing Attributes in existing Image SOP Classes, the following solutions are a compromise. The first chooses or creates a value for the single-valued Attributes Study Instance UID and Accession Number (a sketch of this approach follows the list below). The second completely replicates the Image data with different values for the Attributes Study Instance UID and Accession Number.
create a Request Attributes Sequence containing two or more Items each containing the following Attributes:
create a Referenced Study Sequence containing two or more Items sufficient to contain the Study SOP Instance UID values from the Modality Worklist for both Requested Procedures
select one value from the Modality Worklist or generate a new value for:
select one value from the Modality Worklist or generate a new value or assign an empty value for:
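A minimal, informative pydicom sketch of this first approach follows; all identifier values are hypothetical, and the Item contents are a typical subset of the Attributes named above.

from pydicom.dataset import Dataset

DETACHED_STUDY_MGMT_UID = "1.2.840.10008.3.1.2.3.1"  # Detached Study Management

def request_item(req_proc_id, sps_id):
    # One Item per Requested Procedure fulfilled by the single Performed Procedure Step
    item = Dataset()
    item.RequestedProcedureID = req_proc_id
    item.ScheduledProcedureStepID = sps_id
    return item

def referenced_study(study_sop_uid):
    ref = Dataset()
    ref.ReferencedSOPClassUID = DETACHED_STUDY_MGMT_UID
    ref.ReferencedSOPInstanceUID = study_sop_uid
    return ref

image = Dataset()
image.RequestAttributesSequence = [request_item("RP1", "SPS1"),
                                   request_item("RP2", "SPS2")]
image.ReferencedStudySequence = [referenced_study("1.2.840.99999.1.7.1"),
                                 referenced_study("1.2.840.99999.1.7.2")]
# Single-valued Attributes: select one worklist value or generate a new one,
# or assign an empty value for Accession Number, as described above
image.StudyInstanceUID = "1.2.840.99999.1.1.1"
image.AccessionNumber = ""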
An alternative method is to replicate the entire Image SOP Instance with a new SOP Instance UID, and assign each Image IOD its own identifying Attributes. In this case, each of the Study Instance UID and Accession Number values can be used in its own Image SOP Instance.
Both Image SOP Instances may reference a single MPPS SOP Instance (via the MPPS SOP Instance UID in the Referenced Performed Procedure Step Sequence).
Each individual Image SOP Instance may reference its own related Study SOP Instance, if it exists (via the Referenced Study Sequence). This Study SOP Instance has a one-to-one relationship with the corresponding Requested Procedure.
If an MPPS SOP Instance is created, it may reference both related Study SOP Instances.
For all Series in the MPPS, replicate the entire Series of Images using new Series Instance UIDs
Create replicated Image SOP Instances with different SOP Instance UIDs that use the new Series Instance UIDs, for each of the two or more Requested Procedures
In each of the Image SOP Instances, using values from the corresponding Requested Procedure:
In the MPPS SOP Instance (if supported):
In both the Image SOP Instances and the MPPS SOP Instance (if supported):
If for some reason the Modality was unable to create the MPPS SOP Instance, another system may wish to perform this service. This system must make sure that the created PPS SOP Instance is consistent with the related Image SOP Instances.
Depending on the availability and correctness of values for the Attributes in the Image SOP Instances, these values may be copied into the MPPS SOP Instance, or they may have to be coerced, e.g., if they are not consistent with corresponding values available from the IS.
For example, if the MPPS SOP Instance UID is already available in the Image SOP Instance (in the Referenced Performed Procedure Step Sequence), it may be utilized to N-CREATE the MPPS SOP Instance. If not available, a new MPPS SOP Instance UID may be generated and used to N-CREATE the MPPS SOP Instance. In this case there may be no MPPS SOP Instance UID in the Referenced Performed Procedure Step Sequence in the corresponding Image SOP Instances. An update of the Image SOP Instances will restore the consistency, but this is not required.
The purpose of this annex is to enhance consistency and interoperability among creators and consumers of Ultrasound images within Staged Protocol Exams. An ultrasound "Staged Protocol Exam" is an exam that acquires a set of images under specified conditions during time intervals called "Stages". An example of such an exam is a cardiac stress-echo Staged Protocol.
This informative annex describes the use of ultrasound Staged Protocol attributes within the following DICOM Services: Ultrasound Image, Ultrasound Multi-frame Image, and Key Object Selection Storage, Modality Worklist, and Modality Performed Procedure Step Services.
The support of ultrasound Staged Protocol Data Management requires support for the Ultrasound Image SOP Class or Ultrasound Multi-frame Image SOP Class as appropriate for the nature of the Protocol. By supporting some optional Elements of these SOP Classes, Staged Protocols can be managed. Support of Key Object Selection allows control of the order of View and Stage presentation. Support of Modality Worklist Management and Modality Performed Procedure Step allows control over specific workflow use cases as described in this Annex.
A "Staged Protocol Exam" acquires images in two or more distinct time intervals called "Stages" with a consistent set of images called "Views" acquired during each Stage of the exam. A View is of a particular cross section of the anatomy acquired with a specific ultrasound transducer position and orientation. During the acquisition of a Staged Protocol Exam, the modality may also acquire non-Protocol images at one or more Protocol Stages.
A common real-world example of an ultrasound Staged Protocol exam is a cardiac stress-echo ultrasound exam. Images are acquired in distinct time intervals (Stages) of different levels of stress and Views as shown in Figure K.3-1. Typically, stress is induced by means of patient exercise or medication. Typical Stages for such an exam are baseline, mid-stress, peak-stress, and recovery. During the baseline Stage the patient is at rest, prior to inducing stress through medication or exercise. At mid-stress Stage the heart is under a moderate level of stress. During peak-stress Stage the patient's heart experiences maximum stress appropriate for the patient's condition. Finally, during the recovery Stage, the heart recovers because the source of stress is absent.
At each Stage an equivalent set of Views is acquired. Examples of typical Views are parasternal long axis and parasternal short axis. Examination of wall motion between the corresponding Views of different Stages may reveal ischemia of one or more regions ("segments") of the myocardium. Figure K.3-1 illustrates the typical results of a cardiac stress-echo ultrasound exam.
The DICOM standard includes a number of attributes of significance to Staged Protocol Exams. This Annex explains how scheduling and acquisition systems may use these attributes to convey Staged Protocol related information.
Table K.4-1 lists all the attributes relevant to convey Staged Protocol related information (see PS3.3 for details about these attributes).
Table K.4-1. Attributes That Convey Staged Protocol Related Information
This annex provides guidelines for implementation of the following aspects of Staged Protocol exams:
The attributes Number of Stages (0008,2124) and Number of Views in Stage (0008,212A) are each Type 2C with the condition "Required if this image was acquired in a Staged Protocol." These two attributes will be present with values in image SOP Instances if the exam meets the definition of a Staged Protocol Exam stated in Section K.3. This includes both the Protocol View images as well as any extra-Protocol images acquired during the Protocol Stages.
The attributes Protocol Name (0018,1030) and Performed Protocol Code Sequence (0040,0260) identify the Protocol of a Staged Protocol Exam, but the mere presence of one or both of these attributes does not in itself identify the acquisition as a Staged Protocol Exam. If both Protocol Name and Performed Protocol Code Sequence attributes are present, the Protocol Name value takes precedence over the Performed Protocol Code Sequence Code Meaning value as a display label for the Protocol, since the Protocol Name would convey the institutional preference better than the standardized code meaning.
Display devices usually include capabilities that aid in the organization and presentation of images acquired as part of the Staged Protocol. These capabilities allow a clinician to display images of a given View acquired during different Stages of the Protocol side by side for comparison. A View is a particular combination of the transducer position and orientation at the time of image acquisition. Images are acquired at the same View in different Protocol Stages for the purpose of comparison. For these features to work properly, the display device must be able to determine the Stage and View of each image in an unambiguous fashion.
There are three possible mechanisms for conveying Stage and View identification in the image SOP Instances:
"Numbers" (Stage Number (0008,2122) and View Number (0008,2128) ), which number Stages and Views, starting with one.
"Names" (Stage Name (0008,2120) and View Name (0008,2127) ), which specify textual names for each Stage and View, respectively.
"Code sequences" (Stage Code Sequence (0040,000A) for Stage identification, and View Code Sequence (0054,0220) for View identification), which give identification "codes" to the Stage and View respectively.
The use of code sequences to identify Stage and View, using Context Group values specified in PS3.16 (e.g., CID 12002 “Ultrasound Protocol Stage Types” and CID 12226 “Echocardiography Image View”), allows a display application with knowledge of the code semantics to render a display in accordance with clinical domain uses and user preferences (e.g., populating each quadrant of an echocardiographic display with the user desired stage and view). The IHE Echocardiography Workflow Profile requires such use of code sequences for stress-echo studies.
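The three mechanisms might be populated as in the following informative pydicom sketch; the numeric values and names are illustrative, and the code items are local placeholders where real codes would be selected from CID 12002 and CID 12226.

from pydicom.dataset import Dataset

def code_item(value, scheme, meaning):
    item = Dataset()
    item.CodeValue = value
    item.CodingSchemeDesignator = scheme
    item.CodeMeaning = meaning
    return item

us = Dataset()                     # an Ultrasound Image SOP Instance under construction
us.NumberOfStages = 4              # present because this is a Staged Protocol Exam
us.NumberOfViewsInStage = 2

us.StageNumber = 1                 # "Numbers" start with one
us.ViewNumber = 1
us.StageName = "Baseline"          # "Names"
us.ViewName = "Parasternal long axis"

# "Code sequences": placeholder local codes; use codes from CID 12002 / CID 12226
us.StageCodeSequence = [code_item("STG-BASE", "99LOCAL", "Baseline Stage")]
us.ViewCodeSequence = [code_item("VW-PLAX", "99LOCAL", "Parasternal long axis")]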
Table K.5-1 provides an example of the Staged Protocol relevant attributes in images acquired during a typical cardiac stress-echo ultrasound exam.
Table K.5-1. Staged Protocol Image Attributes Example
At any Stage of a Staged Protocol exam, the operator may acquire images that are not part of the Protocol. These images are so-called "extra-Protocol images". Information regarding the performed Protocol is still included because such images are acquired in the same Procedure Step as the Protocol images. The Stage number and optionally other Stage identification attributes (Stage Name and/or Stage Code Sequence) should still be conveyed in extra-Protocol images. However, the View number should be omitted to signify that the image is not one of the standard Views in the Protocol. Other View identifying information, such as name or code sequences, may indicate the image location.
Table K.5-2. Comparison Of Protocol And Extra-Protocol Image Attributes Example
Ultrasound systems often acquire multiple images at a particular stage and view. If one image is difficult to interpret or does not fully portray the ventricle wall, the physician may choose to view an alternate. In some cases, the user may identify the preferred image. The Key Object Selection Document can identify the preferred image for any or all of the Stage-Views. This specific usage of the Key Object Selection Document has a Document Title of (113013, DCM, "Best In Set") and Document Title Modifier of (113017, DCM, "Stage-View").
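As an informative fragment (a complete Key Object Selection Document also requires the content items referencing the selected images, per its IOD and templates), the title and modifier might be encoded as follows with pydicom; the helper name is an assumption.

from pydicom.dataset import Dataset

def code_item(value, scheme, meaning):
    item = Dataset()
    item.CodeValue = value
    item.CodingSchemeDesignator = scheme
    item.CodeMeaning = meaning
    return item

kos = Dataset()                                   # root CONTAINER (fragment only)
kos.ValueType = "CONTAINER"
kos.ConceptNameCodeSequence = [code_item("113013", "DCM", "Best In Set")]

modifier = Dataset()                              # Document Title Modifier
modifier.RelationshipType = "HAS CONCEPT MOD"
modifier.ValueType = "CODE"
modifier.ConceptNameCodeSequence = [code_item("113011", "DCM", "Document Title Modifier")]
modifier.ConceptCodeSequence = [code_item("113017", "DCM", "Stage-View")]
kos.ContentSequence = [modifier]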
Modality Performed Procedure Step (MPPS) is the basic organizational unit of Staged Protocol Exams. It is recommended that a single MPPS instance encompass the entire acquisition of an ultrasound Staged Protocol Exam if possible.
There are no semantics assigned to the use of Series within a Staged Protocol Exam other than the DICOM requirements as to the relationship between Series and Modality Performed Procedure Steps. In particular, all of the following scenarios are possible:
There is no recommendation on the organization of images into Series because clinical events make such recommendations impractical. Figure K.5.5-1 shows a possible sequence of interactions for a protocol performed as a single MPPS.
A special case arises when the acquisition during a Protocol Stage is halted for some reason; for example, such a situation can occur if signs of patient distress are observed, such as angina in a cardiac stress exam. Staged Protocols generally include criteria for ending the exam, such as when a target time duration is reached or if signs of patient distress are observed. These criteria are part of the normal exam Protocol, and as long as the conditions defined for the Protocol are met the MPPS status is set to COMPLETED. Only if the exam terminates before meeting the minimum acquisition requirements of the selected Protocol would the MPPS status be set to DISCONTINUED. It is recommended that the reason for discontinuation be conveyed in the Modality Procedure Step Discontinuation Reason Code Sequence (0040,0281).
If a Protocol Stage is to be acquired at a later time with the intention of using an earlier completed Protocol Stage of a halted Staged Protocol then a new Scheduled Procedure Step may or may not be created for this additional acquisition. Workflow management recommendations vary depending on whether the care institution decides to create a new Scheduled Procedure Step or not.
Follow-up Stages must use View Numbers, Names, and Code Sequences identical to those in the prior Stages to enable automatically correlating images of the original and follow-up Stages.
K.5.5.2.1 Unscheduled Follow-up Stages
Follow-up Stages require a separate MPPS. Since follow-up Stages are part of the same Requested Procedure and Scheduled Procedure Step, all acquired image SOP Instances and generated MPPS instances specify the same Study Instance UID. If the Study Instance UID is different, systems will have difficulty associating related images; this creates a significant problem if Modality Worklist is not supported. Therefore, systems should assign the same Study Instance UID for follow-up Stages even if Modality Worklist is not supported. Figure K.5.5-2 shows a possible interaction sequence for this scenario.
In some cases a new Scheduled Procedure Step is created to acquire follow-up Stages. For example, a drug-induced stress-echo exam may be scheduled because an earlier exercise-induced stress-echo exam had to be halted due to patient discomfort. In such cases it would be redundant to reacquire earlier Stages, such as the rest Stage of a cardiac stress-echo ultrasound exam. One MPPS contains the Image instances of the original Stage and a separate MPPS contains the follow-up instances.
If Scheduled and Performed Procedure Steps for Staged Protocol Exam data use the same Study Instance UID, workstations can associate images from the original and follow-up Stages. Figure K.5.5-3 shows a possible interaction sequence for this scenario.
The Hemodynamics Report is based on TID 3500 “Hemodynamics Report”. The report contains one or more measurement containers, each corresponding to a phase of the cath procedure. Within each container may be one or more sub-containers, each associated with a single measurement set. A measurement set consists of measurements from a single anatomic location. The resulting hierarchical structure is depicted in Figure L-1.
The container for each phase has an optional subsidiary container for Clinical Context with a parent-child relationship of has-acquisition-context. This Clinical Context container allows the recording of pertinent patient state information that may be essential to understanding the measurements made during that procedure phase. It should be noted that any such patient state information is necessarily only a summary; a more complete clinical picture may be obtained by review of the cath procedure log.
The lowest level containers for the measurement sets are specialized by the class of anatomic location - arterial, venous, atrial, ventricular - for the particular measurements appropriate to that type of location. These containers explicitly identify the anatomic location with a has-acquisition-context relationship. Since such measurement sets are typically measured on the same source (e.g., pressure waveform), the container may also have a has-acquisition-context relationship with a source DICOM waveform SOP Instance.
The "atomic" level of measurements within the measurement set containers includes three types of data. First is the specific measurement data acquired from waveforms related to the site. Second is general measurement data that may include any hemodynamic, patient vital sign, or blood chemistry data. Third, derived data are produced from a combination of other data using a mathematical formula or table, and may provide reference to the equation.
The vascular procedure report partitions numeric measurements into section headings by anatomic region and by laterality. A laterality concept modifier of the section heading concept name specifies whether laterality is left or right. Therefore, laterally paired anatomy sections may appear twice, once for each laterality. Findings of unpaired anatomy are contained in a separate "unilateral" section container. Therefore, in vascular ultrasound, laterality is always expressed at the section heading level with one of three states: left, right, or unilateral (unpaired). There is no provision for anatomy of unknown laterality other than as a TEXT content item in the summary.
Note that expressing laterality at the heading level differs from OB-GYN Pelvic and fetal vasculature, which expresses laterality as concept modifiers of the anatomic containers.
The common vascular pattern is a battery of measurements and calculations repeatedly applied to various anatomic locations. The anatomic location is the acquisition context of the measurement group. For example, a measurement group may have a measurement source of Common Iliac Artery with several measurement instances and measurement types such as mean velocity, peak systolic velocity, acceleration time, etc.
There are distinct anatomic concepts that modify the base anatomy concept. The modification is expressed as a content item with a modifier concept name and a value selected from a Context Group, as shown in the table below.
The templates for ultrasound reports are defined in PS3.16. Figure N.1-1 is an outline of the echocardiography report.
The common echocardiography measurement pattern is a group of measurements obtained in the context of a protocol. Figure N.1-2 shows the pattern.
DICOM identifies echocardiography observations with various degrees of pre- and post-coordination. The concept name of the base content item typically specifies both anatomy and property for commonly used terms, or purely a property. Pure property concepts require an anatomic site concept modifier. Pure property concepts such as those in CID 12222 “Orifice Flow Properties” and CID 12239 “Cardiac Output Properties” use concept modifiers shown below.
Further qualification specifies the image mode and the image plane using HAS ACQ CONTEXT with the value sets shown below.
The content of this section provides recommendations on how to express the concepts from draft ASE guidelines with measurement type concept names and concept name modifiers.
The leftmost column is the name of the ASE concept. The Base Measurement Concept Name is the concept name of the numeric measurement content item. The modifiers column specifies a set of modifiers for the base measurement concept name. Each modifier consists of a modifier concept name (e.g., method or mode) and its value (e.g., Continuity). Where no Concept Modifier appears, the base concept matches the ASE concept.
Aortic Valve measurements appear in TID 5202 “Echo Section”, which specifies the Finding Site to be Aortic Valve with the concept modifier (G-C0E3, SRT, "Finding Site") = (T-35400, SRT, "Aortic Valve"). Therefore, the Finding Site modifier does not appear in the right column.
Measurements in the Left Ventricle section have the context of Left Ventricle and do not require a Finding Site modifier (G-C0E3, SRT, "Finding Site") = (T-32600, SRT, "Left Ventricle") to specify the site. The Finding Site modifier appears where more specificity is needed.
ASE Concept | Base Measurement Concept Name / Concept Name Modifiers
… | (G-C0E3, SRT, "Finding Site") = (T-32650, SRT, "Left Ventricular Outflow Tract")
Left Ventricular Outflow Tract Systolic Cross Sectional Area | (G-C0E3, SRT, "Finding Site") = (T-32650, SRT, "Left Ventricular Outflow Tract")
… | (G-C0E3, SRT, "Finding Site") = (T-32650, SRT, "Left Ventricular Outflow Tract")
Left Ventricular Outflow Tract Systolic Peak Instantaneous Gradient | (G-C0E3, SRT, "Finding Site") = (T-32650, SRT, "Left Ventricular Outflow Tract")
… | (G-C0E3, SRT, "Finding Site") = (T-32650, SRT, "Left Ventricular Outflow Tract")
… | (G-C0E3, SRT, "Finding Site") = (T-32650, SRT, "Left Ventricular Outflow Tract")
Left Ventricular Outflow Tract Systolic Velocity Time Integral | (G-C0E3, SRT, "Finding Site") = (T-32650, SRT, "Left Ventricular Outflow Tract")
Left Ventricular Mass by 2-D Method of Disks, Single Plane (4-Chamber) | (G-0373, SRT, "Image Mode") = (G-03A2, SRT, "2D mode"); (G-C036, SRT, "Measurement Method") = (125208, DCM, "Method Of Disks, single plane")
… | (G-0373, SRT, "Image Mode") = (G-03A2, SRT, "2D mode"); (G-C036, SRT, "Measurement Method") = (125207, DCM, "Method of disks, biplane")
(An ellipsis indicates an ASE concept name that is not reproduced here.)
Mitral Valve measurements appear in TID 5202 “Echo Section”, which specifies the Finding Site to be Mitral Valve with the concept modifier (G-C0E3, SRT, "Finding Site") = (T-35300, SRT, "Mitral Valve"). Therefore, the Finding Site modifier does not appear in the right column.
Pulmonic Valve measurements appear in TID 5202 “Echo Section”, which specifies the Finding Site to be Pulmonic Valve with the concept modifier (G-C0E3, SRT, "Finding Site") = (T-35200, SRT, "Pulmonic Valve"). Therefore, this Finding Site concept modifier does not appear in the right column.
Tricuspid Valve measurements appear in TID 5202 “Echo Section”, which specifies the Finding Site to be Tricuspid Valve with the concept modifier (G-C0E3, SRT, "Finding Site") = (T-35100, SRT, "Tricuspid Valve"). Therefore, the Finding Site modifier does not appear in the right column.
ASE Concept | Base Measurement Concept Name / Concept Name Modifiers
… | (29460-3, LN, "Thoracic Aorta Coarctation Systolic Peak Velocity")
Thoracic Aorta Coarctation Systolic Peak Instantaneous Gradient | (G-C0E3, SRT, "Finding Site") = (D4-32030, SRT, "Thoracic Aortic Coarctation")
… | (17995-2, LN, "Thoracic Aorta Coarctation Systolic Peak Instantaneous Gradient")
… | (G-C0E3, SRT, "Finding Site") = (D4-31150, SRT, "Ventricular Septal Defect")
Ventricular Septal Defect Systolic Peak Instantaneous Gradient | (G-C0E3, SRT, "Finding Site") = (D4-31150, SRT, "Ventricular Septal Defect")
… | (G-C0E3, SRT, "Finding Site") = (D4-31150, SRT, "Ventricular Septal Defect")
… | (G-C0E3, SRT, "Finding Site") = (D4-31150, SRT, "Ventricular Septal Defect")
… | (G-C0E3, SRT, "Finding Site") = (D4-31220, SRT, "Atrial Septal Defect")
Pulmonary-to-Systemic Shunt Flow Ratio by Doppler Volume Flow | (G-C036, SRT, "Measurement Method") = (125219, DCM, "Doppler Volume Flow")
(An ellipsis indicates an ASE concept name that is not reproduced here.)
The IVUS Report contains one or more vessel containers, each corresponding to the vessel (arterial location) being imaged. Each vessel is associated with one or more IVUS image pullbacks (Ultrasound Multi-frame Images), acquired during a phase of a catheterization procedure. Each vessel may contain one or more sub-containers, each associated with a single lesion. Each lesion container includes a set of IVUS measurements and qualitative assessments. The resulting hierarchical structure is depicted in Figure N.5-1.
These SOP Classes allow describing spatial relationships between sets of images. Each instance can describe any number of registrations as shown in Figure O.1-1. It may also reference prior registration instances that contribute to the creation of the registrations in the instance.
A Reference Coordinate System (RCS) is a spatial Frame of Reference described by the DICOM Frame of Reference Module. The chosen Frame of Reference of the Registration SOP Instance may be the same as that of one or more of the Referenced SOP Instances. In this case, the Frame of Reference UID (0020,0052) is the same, as shown by the Registered RCS in the figure. The registration information is a sequence of spatial transformations, potentially including deformation information. The composite of the specified spatial transformations defines the complete transformation from one RCS to the other.
Image instances may have no DICOM Frame of Reference, in which case the registration is to that single image (or frame, in the case of a multi-frame image). The Spatial Registration IOD may also be used to establish a coordinate system for an image that has no defined Frame of Reference. To do this, the center of the top left pixel of the source image is treated as being located at (0, 0, 0). Offsets from the first pixel are computed using the resolution specified in the Source IOD. Multiplying that coordinate by the Transformation matrix gives the patient coordinate in the new Frame of Reference.
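The convention just described can be sketched in a few lines of Python with numpy (informative; the assignment of rows and columns to the y and x axes below is an assumption of this sketch, and the spacing and matrix values are illustrative):

import numpy as np

def pixel_to_registered(row, col, row_spacing_mm, col_spacing_mm, matrix):
    # Center of the top-left pixel is (0, 0, 0); offsets come from the
    # pixel spacing stated in the source IOD
    source_point = np.array([col * col_spacing_mm,   # x offset (assumed)
                             row * row_spacing_mm,   # y offset (assumed)
                             0.0,
                             1.0])                   # homogeneous coordinate
    return (matrix @ source_point)[:3]

# Identity rotation with a 10 mm shift along x (illustrative matrix)
M = np.array([[1, 0, 0, 10],
              [0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
print(pixel_to_registered(2, 3, 0.5, 0.5, M))        # -> [11.5  1.   0. ]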
A special case is an atlas. DICOM has defined Well-Known Frame of Reference UIDs for several common atlases. There is not necessarily image data associated with an atlas.
When using the Spatial Registration or Deformable Registration SOP Classes there are two types of coordinate systems. The coordinate system of the referenced data is the Source RCS. The coordinate system established by the SOP instance is the Registered RCS.
The sense of the direction of transformation differs between the Spatial Registration SOP Class and the Deformable Spatial Registration SOP Class. The Spatial Registration SOP Class specifies a transformation that maps Source coordinates, in the Source RCS, to Registered coordinates, in the Registered RCS. The Deformable Spatial Registration SOP Class specifies transformations that map Registered coordinates, in the Registered RCS, to coordinates in the Source RCS.
The Spatial Fiducials SOP Class stores spatial fiducials as implicit registration information.
Multi-Modality Fusion: A workstation or modality performs a registration of images from independent acquisition modalities (PET, CT, MR, NM, and US) from multiple series. The workstation stores the registration data for subsequent visualization and image processing. Such visualization may include side-by-side synchronized display, or overlay (fusion) of one modality image on the display of another. The processes for such fusion are beyond the scope of the Standard. The workstation may also create and store a ready-for-display fused image, which references both the source image instances and the registration instance that describes their alignment.
Prior Study Fusion: Using post processing or a manual process, a workstation creates a spatial object registration of the current Study's Series from prior Studies for comparative evaluation.
Atlas Mapping: A workstation or a CAD device specifies fiducials of anatomical features in the brain such as the anterior commissure, posterior commissure, and points that define the hemispheric fissure plane. The system stores this information in the Spatial Fiducials SOP Instance. Subsequent retrieval of the fiducials enables a device or workstation to register the patient images to a functional or anatomical atlas, presenting the atlas information as overlays.
CAD: A CAD device creates fiducials of features during the course of the analysis. It stores the locations of the fiducials for future analysis in another imaging procedure. In the subsequent CAD procedure, the CAD device performs a new analysis on the new data set. As before, it creates comparable fiducials, which it may store in a Spatial Fiducials SOP Instance. The CAD device then performs additional analysis by registering the images of the current exam to the prior exam. It does so by correlating the fiducials of the prior and current exam. The CAD device may store the registration in Registration SOP Instance.
Adaptive Radiotherapy: A CT Scan is taken to account for variations in patient position prior to radiation therapy. A workstation performs the registration of the most recent image data to the prior data, corrects the plan, and stores the registration and revised plan.
Image Stitching: An acquisition device captures multiple images, e.g., DX images down a limb. A user identifies fiducials on each of the images. The system stores these in one or more Fiducial SOP Instances. Then the images are "stitched" together algorithmically by means that utilize the Fiducial SOP Instances as input. The result is a single image and optionally a Registration SOP Instance that indicates how the original images can be transformed to a location on the final image.
Figure O.3-1 shows the system interaction of storage operations for a registration of MR and CT using the Spatial Registration SOP Class. The Image Plane Module attributes of the CT Series specify the spatial mapping to the RCS of its DICOM Frame of Reference.
The receiver of the Registration SOP Instance may use the spatial transformation to display or process the referenced image data in a common coordinate system. This enables interactive display in 3D during interpretation or planning, tissue classification, quantification, or Computer Aided Detection. Figure O.3-2 shows a typical interaction scenario.
In the case of coupled acquisition modalities, one acquisition device may know the spatial relationship of its image data relative to the other. The acquisition device may use the Registration SOP Class to specify the relationship of modality B images to modality A images as shown below in Figure O.3-3. In the most direct case, the data of both modalities are in the same DICOM Frame of Reference for each SOP Class Instance.
A Spatial Registration instance consists of one or more instances of a Registration. Each Registration specifies a transformation from the RCS of the Referenced Image Set to the RCS of this Spatial Registration instance (see PS3.3), identified by the Frame of Reference UID (0020,0052).
Figure O.4-1 shows an information model of a Spatial Registration to illustrate the relationship of the attributes to the objects of the model. The DICOM attributes that describe each object are adjacent to the object.
Figure O.4-2 shows an information model of a Deformable Spatial Registration to illustrate the relationship of the attributes to the objects of the model. The DICOM attributes that describe each object are adjacent to the object.
Figure O.4-3 shows a Spatial Fiducials information model to illustrate the relationship of the attributes to the objects of the model. The DICOM attributes that describe each object are adjacent to the object.
A 4x4 affine transformation matrix describes spatial rotation, translation, scale changes and affine transformations that register referenced images to the Registration IE's homogeneous RCS. These steps are expressible in a single matrix, or as a sequence of multiple independent rotations, translations, or scaling, each expressed in a separate matrix. Normally, registrations are rigid body, involving only rotation and translation. Changes in scale or affine transformations occur in atlas registration or to correct minor mismatches.
Fiducials are image-derived reference markers of location, orientation, or scale. These may be labeled points or collections of points in a data volume that specify a shape. Most commonly, fiducials are individual points.
Correlated fiducials of separate image sets may serve as inputs to a registration process to estimate the spatial registration between similar objects in the images. The correlation may, or may not, be expressed in the fiducial identifiers. A fiducial identifier may be an arbitrary number or text string to uniquely identify each fiducial from others in the set. In this case, fiducial correlation relies on operator recognition and control.
Alternatively, coded concepts may identify the acquired fiducials so that systems can automatically correlate them. Examples of such coded concepts are points of a stereotactic frame, prosthesis points, or well-resolved anatomical landmarks such as bicuspid tips. Such codes could be established and used locally by a department, over a wider area by a society or research study coordinator, or from a standardized set.
The table below shows each case of identifier encoding. A and B represent two independent registrations: one to some image set A, and the other to image set B.
Fiducials may be a point or some other shape. For example, three or more arbitrarily chosen points might designate the inter-hemispheric plane for the registration of head images. Many arbitrarily chosen points may identify a surface such as the inside of the skull.
A fiducial also has a Fiducial UID. This UID identifies the creation of the fiducial and allows other SOP Instances to reference the fiducial assignment.
The Affine Transform Matrix is of the following form:

M = \begin{bmatrix} M_{11} & M_{12} & M_{13} & T_1 \\ M_{21} & M_{22} & M_{23} & T_2 \\ M_{31} & M_{32} & M_{33} & T_3 \\ 0 & 0 & 0 & 1 \end{bmatrix}

This matrix requires the bottom row to be [0 0 0 1] to preserve the homogeneous coordinates.
The matrix can be of type RIGID, RIGID_SCALE, or AFFINE. These types represent different conditions on the allowable values for the matrix elements.
This transform requires that the matrix obey the orthonormal transformation property:

\sum_{i=1}^{3} M_{ij} M_{ik} = \delta_{jk}

for all combinations of j = 1,2,3 and k = 1,2,3, where \delta_{jk} = 1 for j = k and zero otherwise.
The expansion into non-matrix equations is:

\begin{aligned}
M_{11}^2 + M_{21}^2 + M_{31}^2 &= 1 \\
M_{12}^2 + M_{22}^2 + M_{32}^2 &= 1 \\
M_{13}^2 + M_{23}^2 + M_{33}^2 &= 1 \\
M_{11}M_{12} + M_{21}M_{22} + M_{31}M_{32} &= 0 \\
M_{11}M_{13} + M_{21}M_{23} + M_{31}M_{33} &= 0 \\
M_{12}M_{13} + M_{22}M_{23} + M_{32}M_{33} &= 0
\end{aligned}
The Frame of Reference Transformation Matrix ^{A}M_{B} describes how to transform a point (B_x, B_y, B_z) with respect to RCS_B into (A_x, A_y, A_z) with respect to RCS_A.
The matrix above consists of two parts, a rotation and a translation, as shown below:

^{A}M_{B} = \begin{bmatrix} R & T \\ 0\;0\;0 & 1 \end{bmatrix}, \quad R = \begin{bmatrix} M_{11} & M_{12} & M_{13} \\ M_{21} & M_{22} & M_{23} \\ M_{31} & M_{32} & M_{33} \end{bmatrix}, \quad T = \begin{bmatrix} T_1 \\ T_2 \\ T_3 \end{bmatrix}
The first column [M_{11}, M_{21}, M_{31}] contains the direction cosines (projection) of the X-axis of RCS_B with respect to RCS_A. The second column [M_{12}, M_{22}, M_{32}] contains the direction cosines of the Y-axis of RCS_B with respect to RCS_A. The third column [M_{13}, M_{23}, M_{33}] contains the direction cosines of the Z-axis of RCS_B with respect to RCS_A. The fourth column [T_1, T_2, T_3] is the origin of RCS_B with respect to RCS_A.
There are three degrees of freedom representing rotation, and three degrees of freedom representing translation, giving a total of six degrees of freedom.
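Applying the matrix is then a single matrix-vector product on homogeneous coordinates, as in this informative numpy sketch (values illustrative: a 90-degree rotation about z plus a translation):

import numpy as np

M_AB = np.array([[0, -1, 0, 5],
                 [1,  0, 0, 2],
                 [0,  0, 1, 0],
                 [0,  0, 0, 1]], dtype=float)

b = np.array([1.0, 0.0, 0.0, 1.0])   # point (Bx, By, Bz) in RCS_B, homogeneous
a = M_AB @ b                         # point (Ax, Ay, Az) in RCS_A
print(a[:3])                         # -> [5. 3. 0.]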
The following constraint applies:

\sum_{i=1}^{3} M_{ij} M_{ik} = S_j^2 \, \delta_{jk}

for all combinations of j = 1,2,3 and k = 1,2,3, where \delta_{jk} = 1 for j = k and zero otherwise, and S_j is the spatial scaling factor of the j-th axis.
The expansion into non-matrix equations is:

\begin{aligned}
S_1^2 &= M_{11}^2 + M_{21}^2 + M_{31}^2 \\
S_2^2 &= M_{12}^2 + M_{22}^2 + M_{32}^2 \\
S_3^2 &= M_{13}^2 + M_{23}^2 + M_{33}^2
\end{aligned}

The above equations show a simple way of extracting the spatial scaling parameters S_j from a given matrix. The unit of S_j is the RCS unit dimension of one millimeter.
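Equivalently, each S_j is the Euclidean norm of the j-th direction-cosine column, as in this informative numpy sketch:

import numpy as np

def extract_scales(matrix):
    # S1, S2, S3 are the norms of the three 3x1 columns of the rotation part
    return np.linalg.norm(matrix[:3, :3], axis=0)

M = np.diag([2.0, 3.0, 4.0, 1.0])    # pure scaling, no rotation or translation
print(extract_scales(M))             # -> [2. 3. 4.]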
This type can be considered a simple extension of the type RIGID. A RIGID_SCALE matrix is easily created by pre-multiplying a RIGID matrix by a diagonal scaling matrix, as follows:

M_{RBWS} = \begin{bmatrix} S_1 & 0 & 0 & 0 \\ 0 & S_2 & 0 & 0 \\ 0 & 0 & S_3 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} M_{RB}

where M_{RBWS} is a matrix of type RIGID_SCALE and M_{RB} is a matrix of type RIGID.
No constraints apply to this matrix, so it contains twelve degrees of freedom. This type of Frame of Reference Transformation Matrix allows shearing in addition to rotation, translation and scaling.
For a RIGID type of Frame of Reference Transformation Matrix, the inverse is easily computed using the following formula (inverse of an orthonormal matrix):

M^{-1} = \begin{bmatrix} R^{T} & -R^{T} T \\ 0\;0\;0 & 1 \end{bmatrix}

where R is the upper-left 3x3 rotation submatrix and T is the translation column [T_1, T_2, T_3]^{T}.
For RIGID_SCALE and AFFINE types of Registration Matrices, the inverse cannot be calculated using the above equation, and must be calculated using a conventional matrix inverse operation.
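Both paths are shown in the following informative numpy sketch, which checks the closed-form RIGID inverse against a conventional matrix inverse:

import numpy as np

def invert_rigid(matrix):
    # Transpose the rotation part and rotate the negated translation
    R, t = matrix[:3, :3], matrix[:3, 3]
    inv = np.eye(4)
    inv[:3, :3] = R.T
    inv[:3, 3] = -R.T @ t
    return inv

M = np.array([[0, -1, 0, 5],
              [1,  0, 0, 2],
              [0,  0, 1, 0],
              [0,  0, 0, 1]], dtype=float)
assert np.allclose(invert_rigid(M), np.linalg.inv(M))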
The templates for the Breast Imaging Report are defined in PS3.16. Relationships defined in the Breast Imaging Report templates are by-value. This template structure may be conveyed using the Enhanced SR SOP Class or the Basic Text SR SOP Class.
As shown in Figure Q.1-1, the Breast Imaging Report Narrative and Breast Imaging Report Supplementary Data sub-trees together form the content tree of the Breast Imaging Report.
The Breast Imaging Procedure Reported sub-tree is a mandatory child of the Supplementary Data content item, to describe all of the procedures to which the report applies using coded terminology. It may also be used as a sub-tree of sections within the Supplementary Data sub-tree, for instances in which a report covers more than one procedure, but different sections of the Supplementary Data record the evidence of a subset of the procedures.
An instance of the Breast Imaging Report Narrative sub-tree contains one or more text-based report sections, with a name chosen from CID 6052 “Breast Imaging Report Section Title”. Within a report section, one or more observers may be identified. This sub-tree is intended to contain the report text as it was created, presented to, and signed off by the verifying observer. It is not intended to convey the exact rendering of the report, such as formatting or visual organization. Report text may reference one or more image or other composite objects on which the interpretation was based.
An instance of the Breast Imaging Report Supplementary Data sub-tree contains one or more of: Breast Imaging Procedure Reported, Breast Composition Section, Breast Imaging Report Finding Section, Breast Imaging Report Intervention Section, Overall Assessment. This sub-tree is intended to contain the supporting evidence for the Breast Imaging Report Narrative sub-tree, using coded terminology and numeric data.
The Breast Imaging Assessment sub-tree may be instantiated as the content of an Overall Assessment section of a report (see Figure Q.1-4), or as part of a Findings section of a report (see TID 4206 “Breast Imaging Report Finding Section”). Reports may provide an individual assessment for each Finding, and then an overall assessment based on an aggregate of the individual assessments.
The following are simple illustrations of encoding Mammography procedure based Breast Imaging Reports.
A screening mammography case, i.e., one with the typical four films and no suspicious abnormalities. The result is a negative mammogram with basic reporting. This example illustrates a report encoded as narrative text only:
Example Q.2-1. Report Sample: Narrative Text Only
Film screen mammography, both breasts.
Comparison was made to exam from 11/14/2001. The breasts are heterogeneously dense. This may lower the sensitivity of mammography. No significant masses, calcifications, or other abnormalities are present. There is no significant change from the prior exam.
BI-RADS® Category 1: Negative. Recommend normal interval follow-up in 12 months.
Table Q.2-1. Breast Imaging Report Content for Example 1
A screening mammography case, i.e., one with the typical four films and no suspicious abnormalities. The result is a negative mammogram with basic reporting. This example illustrates a report encoded as narrative text with minimal supplementary data, and follows BI-RADS® and MQSA:
Example Q.2-2. Report Sample: Narrative Text with Minimal Supplementary Data
Film screen mammography, both breasts.
Comparison was made to exam from 11/14/2001.
The breasts are heterogeneously dense. This may lower the sensitivity of mammography.
No significant masses, calcifications, or other abnormalities are present. There is no significant change from the prior exam.
BI-RADS® Category 1: Negative. Recommend normal interval follow-up in 12 months.
Table Q.2-2. Breast Imaging Report Content for Example 2
A diagnostic mammogram was prompted by a clinical finding. The result is a probably benign finding with a short interval follow-up of the left breast. This report provides the narrative text with more extensive supplementary data.
Example Q.2-3. Report Sample: Narrative Text with More Extensive Supplementary Data
Film screen mammography, left breast.
Non-bloody discharge left breast.
The breast is almost entirely fat.
Film screen mammograms were performed. There are heterogeneous calcifications regionally distributed in the 1 o'clock upper outer quadrant, anterior region of the left breast. There is an increase in the number of calcifications from the prior exam.
BI-RADS® Category 3: Probably Benign Finding. Short interval follow-up of the left breast is recommended in 6 months.
Table Q.2-3. Breast Imaging Report Content for Example 3
Following a screening mammogram, the patient was asked to return for additional imaging and an ultrasound on the breast, for further evaluation of a mammographic mass. This example demonstrates a report on multiple breast imaging procedures. This report provides the narrative text with some supplementary data.
Example Q.2-4. Report Sample: Multiple Procedures, Narrative Text with Some Supplementary Data
Film screen mammography, left breast; Ultrasound procedure, left breast.
Additional evaluation requested at current screening.
Comparison was made to exam from 11/14/2001.
Film Screen Mammography: A lobular mass with obscured margins is present measuring 7mm in the upper outer quadrant.
Ultrasound demonstrates a simple cyst.
BI-RADS® Category 2: Benign, no evidence of malignancy. Normal interval follow-up of both breasts is recommended in 12 months.
Table Q.2-4. Breast Imaging Report Content for Example 4
The following use cases are the basis for the decisions made in defining the Configuration Management Profiles specified in PS3.15. Where possible, protocols that are commonly used in IT system management are specifically identified.
When a new machine is added there need to be new entries made for:
The service staff effort needed for either of these should be minimal. To the extent feasible these parameters should be generated and installed automatically.
The need for some sort of ID is common to most of the use cases, so it is assumed that each machine has sufficient non-volatile storage to at least remember its own name for later use.
Updates may be made directly to the configuration databases or made via the machine being configured. A common procedure for large networks is for the initial network design to assign these parameters and create the initial databases during the complete initial network design. Updates can be made later as new devices are installed.
One step that specifically needs automation is the allocation of AE Titles. These must be unique. Their assignment has been a problem with manual procedures. Possibilities include:
Fully automatic allocation of AE Titles as requested. This interacts with the need for AE title stability in some use cases. The automatic process should permit AE Titles to be persistently associated with particular devices and application entities. The automatic process should permit the assignment of AE titles that comply with particular internal structuring rules.
Assisted manual allocation, where the service staff proposes AE Titles (perhaps based on examining the list of present AE Titles) and the system accepts them as unique or rejects them when non-unique.
These AE Titles can then be associated with the other application entity related information. This complete set of information needs to be provided for later uses.
The local setup may also involve searches for other AEs on the network. For example, it is likely that a search will be made for archives and printers. These searches might be by SOP class or device type. This is related to vendor specific application setup procedures, which are outside the scope of DICOM.
The network may have been designed in advance and the configuration specified in advance. It should be possible to pre-configure the configuration servers prior to other hardware installation. This should not preclude later updates or later configuration at specific devices.
The DHCP servers have a database that is manually maintained defining the relationship between machine parameters and IP parameters. This defines:
Hardware MAC addresses that are to be allocated specific fixed IP information.
Client machine names that are to be allocated specific fixed IP information.
Hardware MAC addresses and address ranges that are to be allocated dynamically assigned IP addresses and IP information.
Client machine name patterns that are to be allocated dynamically assigned IP addresses and IP information.
The IP information that is provided will be a specific IP address together with other information. The present recommendation is to provide all of the following information when available.
The manual configuration of DHCP is often assisted by automated user interface tools that are outside the scope of DICOM. Some people utilize the DHCP database as a documentation tool for documenting the assignment of IP addresses that are preset on equipment. This does not interfere with DHCP operation and can make a gradual transition from equipment presets to DHCP assignments easier. It also helps avoid accidental re-use of IP addresses that are already manually assigned. However, DHCP does not verify that these entries are in fact correct.
There are several ways that the LDAP configuration information can be obtained.
A complete installation may be pre-designed and the full configuration loaded into the LDAP server, with the installed attribute set to false. Then as systems are installed, they acquire their own configurations from the LDAP server. The site administration can set the installed attribute to true when appropriate.
When the LDAP server permits network clients to update the configuration, they can be individually installed and configured. Then after each device is configured, that device uploads its own configuration to the LDAP server.
When the LDAP server does not permit network clients to update configurations, they can be individually installed and configured. Then, instead of uploading their own configuration, they create a standard format file with their configuration objects. This file is then manually added to the LDAP server (complying with local security procedures) and any conflicts resolved manually.
The network may have been designed in advance and the configuration specified in advance. It should be possible to pre-configure the configuration servers prior to other hardware installation. This should not preclude later updates or later configuration at specific devices.
LDAP defines a standard file exchange format for transmitting LDAP database subsets in an ASCII format. This file exchange format can be created by a variety of network configuration tools. There are also systems that use XML tools to create database subsets that can be loaded into LDAP servers. It is out of scope to specify these tools in any detail. The use case simply requires that such tools be available.
When the LDAP database is pre-configured using these tools, it is the responsibility of the tools to ensure that the resulting database entries have unique names. The unique name requirement is common to any LDAP database and not just to DICOM AE Titles. Consequently, most tools have mechanisms to ensure that the database updates that they create do have unique names.
At an appropriate time, the installed attribute is set on the device objects in the LDAP configuration.
The "unconfigured" device start up begins with use of the pre-configured services from DHCP, DNS, and NTP. It then performs device configuration and updates the LDAP database. This description assumes that the device has been given permission to update the LDAP database directly.
DHCP is used to obtain IP related parameters. The DHCP request can indicate a desired machine name that DHCP can associate with a configuration saved at the DHCP server. DHCP does not guarantee that the desired machine name will be granted because it might already be in use, but this mechanism is often used to maintain specific machine configurations. The DHCP will also update the DNS server (using the DDNS mechanisms) with the assigned IP address and hostname information. Legacy note: A machine with pre-configured IP addresses, DNS servers, and NTP servers may skip this step. As an operational and documentation convenience, the DHCP server database may contain the description of this pre-configured machine.
The list of NTP servers is used to initiate the NTP process for obtaining and maintaining the correct time. This is an ongoing process that continues for the duration of device activity. See Time Synchronization below.
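As an informative sketch only, a one-shot NTP query can be made with the third-party ntplib package (a real device would typically run a full NTP daemon instead; the server name is a placeholder for an address obtained from DHCP):

import ntplib

client = ntplib.NTPClient()
# "ntp.example.org" stands in for an NTP server address delivered by DHCP
response = client.request("ntp.example.org", version=3)
print("clock offset (seconds):", response.offset)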
The list of DNS servers is used to obtain the address of the DNS servers at this site. Then the DNS servers are queried to get the list of LDAP servers. This utilizes a relatively new addition to the DNS capabilities that permit querying DNS to obtain servers within a domain that provide a particular service.
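For example, the SRV lookup for LDAP servers might be performed as in the following informative sketch using the third-party dnspython package ("example.org" is a placeholder for the locally configured domain):

import dns.resolver

# SRV records advertise hosts providing the LDAP service within the domain
answers = dns.resolver.resolve("_ldap._tcp.example.org", "SRV")
for srv in sorted(answers, key=lambda r: (r.priority, -r.weight)):
    print(srv.target.to_text(), srv.port)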
The LDAP servers are queried to find the server that provides DICOM configuration services, and then obtain a description for the device matching the assigned machine name. This description includes device specific configuration information and a list of Network AEs. For the unconfigured device there will be no configuration found.
Through a device-specific process, the device determines its internal AE structure. During initial device installation it is likely that the LDAP database lacks information regarding the device. Using some vendor-specific mechanism, e.g., service procedures, the device configuration is obtained. This device configuration includes all the information that will be stored in the LDAP database. The fields for "device name" and "AE Title" are tentative at this point.
Each of the Network AE objects is created by means of the LDAP object creation process. It is at this point that LDAP determines whether the AE Title is in fact unique among all AE Titles. If the title is unique, the creation succeeds. If there is a conflict, the creation fails and "name already in use" is given as a reason. LDAP uses propose/create as an atomic operation for creating unique items. The LDAP approach permits unique titles that comply with algorithms for structured names, check digits, etc. DICOM does not require structured names, but they are a commonplace requirement for other LDAP users. It may take multiple attempts to find an unused name. This multiple-probe behavior can be a problem if an "unconfigured device" is a common occurrence and name collisions are common. Name collisions can be minimized at the expense of name structure by selecting names such as "AExxxxxxxxxxxxxx", where "xxxxxxxxxxxxxx" is a truly randomly selected number. The odds of collision are then exceedingly small, and a unique name will be found within one or two probes.
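An informative sketch of such a randomly structured name (16 characters, the maximum AE Title length):

import secrets

def random_ae_title():
    # "AE" followed by 14 random digits, as in the AExxxxxxxxxxxxxx form above
    return "AE" + "".join(secrets.choice("0123456789") for _ in range(14))

print(random_ae_title())   # e.g., 'AE04718359262840'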
The device object is created. The device information is updated to reflect the actual AE titles of the AE objects. As with AE objects, there is the potential for device name collisions.
The network connection objects are created as subordinates to the device object.
The AE objects are updated to reflect the names of the network connection objects.
The "unconfigured device" now has a saved configuration. The LDAP database reflects its present configuration.
In the following example, the new system needs two AE Titles. During its installation another machine is also being installed and takes one of the two AE Titles that the first machine expected to use. The new system then claims a different AE Title that does not conflict.
Much of the initial start up is the same for restarting a configured device and for configuring a client first and then updating the server. The difference is two-fold.
The AE Title uniqueness must be established manually, and the configuration information saved at the client onto a file that can then be provided to the LDAP server. There is a risk that the manually assigned AE Title is not unique, but this can be managed and is easier than the present entirely manual process for assigning AE Titles.
The larger enterprise networks require prompt database responses and reliable responses during network disruptions. This implies the use of a distributed or federated database. These have update propagation issues. There is not a requirement for a complete and accurate view of the DICOM network at all times. There is a requirement that local subsets of the network maintain an accurate local view. For example, each hospital in a large hospital chain may tolerate occasional disconnections or problems in viewing the network information in other hospitals in that chain, but they require that their own internal network be reliably and accurately described.
LDAP supports a variety of federation and distribution schemes. It specifically states that it is designed and appropriate for federated situations where distribution of updates between federated servers may be slow. It is specifically designed for situations where database updates are infrequent and database queries dominate.
Legacy devices utilize some internal method for obtaining the IP addresses, port numbers, and AE Titles of the other devices. For legacy compatibility, a managed node must be controlled so that the IP addresses, port numbers, and AE Titles do not change. This affects DHCP because it is DHCP that assigns IP addresses. The LDAP database design must preserve port number and AE Title so that once the device is configured these do not change.
DHCP was designed to deal with some common legacy issues:
Documenting legacy devices that do not utilize DHCP. Most DHCP servers can document a legacy device with a DHCP entry that describes the device. This avoids IP address conflicts. Since this is a manual process, there still remains the potential for errors. The DHCP server configuration is used to reserve the addresses and document how they are used. This documented entry approach is also used for complex multi-homed servers. These are often manually configured and kept with fixed configurations.
Specifying fixed IP addresses for DHCP clients. Many servers have clients that are not able to use DNS to obtain server IP addresses. These servers may also utilize DHCP for start up configuration. The DHCP servers must support the use of fixed IP allocations so that the servers are always assigned the same IP address. This avoids disrupting access by the server's legacy clients. This usage is quite common because it gives the IT administrators the centralized control that they need without disrupting operations. It is a frequent transitional stage for machines on networks that are transitioning to full DHCP operation.
There are two legacy-related issues with time configuration:
The NTP system operates in UTC. The device users probably want to operate in local time. This introduces additional internal software requirements to configure local time. DHCP will provide this information if that option is configured into the DHCP server.
Device clock setting must be documented correctly. Some systems set the battery-powered clock to local time; others use UTC. Incorrect settings will introduce very large time transient problems during start up. Eventually NTP clients do resolve the huge mismatch between battery clock and NTP clock, but the device may already be in medical use by the time this problem is resolved. The resulting time discontinuity can then pose problems. The magnitude of this problem depends on the particular NTP client implementation.
Managed devices can utilize the LDAP database during their own installation to establish configuration parameters such as the AE Title of destination devices. They may also utilize the LDAP database to obtain this information at run time prior to association negotiation.
The LDAP server supports simple relational queries, so this lookup is a two-step process: first, a query that selects all devices of the desired device type; then, for each of those devices, a query that selects its Network AEs.
The result will be the Network AE entries that match those two criteria. The first criterion selects the device type match. There are LDAP scoping controls that determine whether the queries search the entire enterprise or just this server. LDAP does not support complex queries, transactions, constraints, nesting, etc. LDAP cannot provide the hostnames for these Network AEs as part of a single query. Instead, the returned Network AEs will include the names of the network connections for each Network AE. The application would then need to issue LDAP reads using the DNs of the NetworkConnection objects to obtain the hostnames.
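A minimal sketch of this two-step lookup is shown below, using the Python ldap3 library. The server address, base DN, and device type value are assumptions; the object class and attribute names follow the DICOM application configuration data model.

```python
# Two-step lookup sketch (Python "ldap3"); server address and base DN are
# assumptions, names follow the DICOM configuration data model.
from ldap3 import Server, Connection

BASE_DN = 'cn=DICOM Configuration,o=hospital'     # assumed configuration root

conn = Connection(Server('ldap.hospital.example'), auto_bind=True)

# Query 1: all devices of the desired device type.
conn.search(BASE_DN,
            '(&(objectClass=dicomDevice)(dicomPrimaryDeviceType=ARCHIVE))',
            attributes=['dicomDeviceName'])
device_dns = [entry.entry_dn for entry in conn.entries]

for device_dn in device_dns:
    # Query 2: the Network AEs of each matching device.
    conn.search(device_dn, '(objectClass=dicomNetworkAE)',
                attributes=['dicomAETitle', 'dicomNetworkConnectionReference'])
    network_aes = list(conn.entries)        # copy before issuing more searches
    for ae in network_aes:
        # Separate reads of the referenced NetworkConnection objects are
        # needed to obtain the hostnames, as described above.
        for ref_dn in list(ae.dicomNetworkConnectionReference):
            conn.search(ref_dn, '(objectClass=*)', search_scope='BASE',
                        attributes=['dicomHostname', 'dicomPort'])
            print(ae.dicomAETitle, conn.entries[0].dicomHostname)
```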
Normal start up of an already configured device will obtain IP information and DICOM information from the servers.
The device start up sequence is as follows (a minimal sketch of the DNS and LDAP discovery steps appears after the sequence):
DHCP is used to obtain IP related parameters. The DHCP request can indicate a desired machine name that DHCP can associate with a configuration saved at the DHCP server. DHCP does not guarantee that the desired machine name will be granted because it might already be in use, but this mechanism is often used to maintain specific machine configurations. The DHCP server will also update the DNS server (using the DDNS mechanisms) with the assigned IP address and hostname information. Legacy note: A machine with pre-configured IP addresses, DNS servers, and NTP servers may skip this step. As an operational and documentation convenience, the DHCP server database may contain the description of this pre-configured machine.
The list of NTP servers is used to initiate the NTP process for obtaining and maintaining the correct time. This is an ongoing process that continues for the duration of device activity. See Time Synchronization below.
The list of DNS servers is used to obtain the list of LDAP servers. This utilizes a relatively new addition to the DNS capabilities that permits querying DNS for the servers within a domain that provide a particular service (DNS SRV resource records).
The "nearest" LDAP server is queried to obtain a description for the device matching the assigned machine name. This description includes device specific configuration information and a list of Network AEs.
The AE descriptions are obtained from the LDAP server. Key information in the AE description is the assigned AE Title. The AE descriptions probably include vendor unique information in either the vendor text field or vendor extensions to the AE object. The details of this information are vendor unique. DICOM is defining a mandatory minimum capability because this will be a common need for vendors that offer dynamically configurable devices. The AE description may be present even for devices that do not support dynamic configuration. If the device has been configured with an AE Title and description that is intended to be fixed, then a description should be present in the LDAP database. The device can confirm that the description matches its stored configuration. The presence of the AE Title in the description will prevent later network activities from inadvertently re-using the same AE Title for another purpose. The degree of configurability may also vary. Many simple devices may only permit dynamic configuration of the IP address and AE Title, with all other configuration requiring local service modifications.
The device performs whatever internal operations are involved to configure itself to match the device description and AE descriptions.
At this point, the device is ready for regular operation, the DNS servers will correctly report its IP address when requested, and the LDAP server has a correct description of the device, Network AEs, and network connections.
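The DNS and LDAP steps of this sequence can be illustrated with a short sketch, assuming the Python dnspython and ldap3 libraries; the domain, device name, and base DN are invented for illustration (the DHCP and NTP steps are handled by the platform and are not shown).

```python
# Discovery sketch: find the LDAP servers via a DNS SRV query, then read
# this device's own description. The domain, device name, and base DN
# are assumptions.
import dns.resolver
from ldap3 import Server, Connection

# Which hosts in this domain provide the LDAP service?
records = sorted(dns.resolver.resolve('_ldap._tcp.hospital.example', 'SRV'),
                 key=lambda r: (r.priority, -r.weight))
nearest = records[0]

# Query the selected ("nearest") LDAP server for this device's description.
conn = Connection(Server(str(nearest.target).rstrip('.'), port=nearest.port),
                  auto_bind=True)
conn.search('cn=DICOM Configuration,o=hospital',
            '(&(objectClass=dicomDevice)(dicomDeviceName=ct_scanner_3))',
            attributes=['*'])
device_description = conn.entries[0]   # device configuration and AE list
```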
The lease timeouts eventually release the IP address at DHCP, which can then update DNS to indicate that the host is down. Clients that utilize the hostname information in the LDAP database will initially experience reports of connection failure, and then, after DNS is updated, they will get errors indicating the device is down when they attempt to use it. Clients that use the IP entry directly will experience reports of connection failure.
A device may be deliberately placed offline in the LDAP database to indicate that it is unavailable and will remain unavailable for an extended period of time. This may be utilized during system installation so that pre-configured systems can be marked as offline until the system installation is complete. It can also be used for systems that are down for extended maintenance or upgrades. It may be useful for equipment that is on mobile vans and only present for certain days.
For this purpose a separate Installed attribute has been given to devices, Network AEs, and Network Connections so that it can be manually managed.
Medical device time requirements primarily deal with synchronization of machines on a local network or campus. There are very few requirements for accurate time (synchronized with an international reference clock). DICOM time users are usually concerned with:
Other master clocks and time references (e.g., sidereal time) are not relevant to medical users.
High accuracy time synchronization is needed for devices like cardiology equipment. The measurements taken on various different machines are recorded with synchronization modules specifying the precise time base for measurements such as waveforms and multi-frame images. These are later used to synchronize data for analysis and display.
Synchronized to within approximately 10 milliseconds. This corresponds to a few percent of a typical heartbeat. Under some circumstances, the requirements may be stricter than this.
During the measurement period there should be no discontinuities greater than a few milliseconds. The time base rate should be within 0.01% of standard time rate.
International Time Synchronization
There are no special extra requirements. Note however that time base stability conflicts with time synchronization when UTC time jumps (e.g., leap seconds).
Ordinary medical equipment uses time synchronization to perform functions that were previously performed manually, e.g., record-keeping and scheduling. These were typically done using watches and clocks, with resultant stability and synchronization errors measured in seconds or longer. The most stringent time synchronization requirements for networked medical equipment derive from some of the security protocols and their record keeping.
Synchronized to within approximately 500 milliseconds. Some security systems have problems when the synchronization error exceeds 1 second.
Large drift errors may cause problems. Typical clock drift errors of approximately 1 second/day are unlikely to cause problems. Large discontinuities are permissible if rare or during start up. Time may run backwards, but only during rare large discontinuities.
International Time Synchronization
Some sites require synchronization to within a few seconds of UTC. Others have no requirement.
The local system time of a computer is usually provided by two distinct components.
There is a battery-powered clock that is used to establish an initial time estimate when the machine is turned on. These clocks are typically very inaccurate. Local and international synchronization errors are often 5-10 minutes. In some cases, the battery clock is incorrect by hours or days.
The ongoing system time is provided by a software function and a pulse source. The pulse source "ticks" at some rate between 1 and 1000 Hz. It has a nominal tick rate that is used by the system software. For every tick, the system software increments the current time estimate appropriately. E.g., for a system with a 100 Hz tick, the system time increments 10 ms each tick.
This lacks any external synchronization and is subject to substantial initial error in the time estimate and to errors due to systematic and random drift in the tick source. The tick sources are typically low cost quartz crystal based, with a systematic error of up to approximately 10⁻⁴ in the actual versus nominal tick rate, and with a variation due to temperature, pressure, etc. of up to approximately 10⁻⁵. This corresponds to drifts on the order of 10 seconds per day (10⁻⁴ of an 86,400 second day is about 8.6 seconds).
There is a well established Internet protocol (NTP) for maintaining time synchronization that should be used by DICOM. It operates in several ways.
The most common is for the computer to become an NTP client of one or more NTP servers. As a client it uses occasional ping-pong NTP messages to:
Estimate the network delays. These estimates are updated during each NTP update cycle.
Obtain a time estimate from the server. Each estimate includes the server's own statistical characteristics and accuracy assessment of the estimate.
Use the time estimates from the servers, the network delay estimates, and the time estimates from the local system clock, to obtain a new NTP time estimate. This typically uses modern statistical methods and filtering to perform optimal estimation.
The local applications do not normally communicate with the NTP client software. They normally continue to use the system clock services. The NTP client software adjusts the system clock. The NTP standard defines a nominal system clock service as having two adjustable parameters:
The clock frequency. In the example above, the nominal clock was 100Hz, with a nominal increment of 10 milliseconds. Long term measurement may indicate that the actual clock is slightly faster and the NTP client can adjust the clock increment to be 9.98 milliseconds.
The clock phase. This adjustment permits jump adjustments, and is the fixed time offset between the internal clock and the estimated UTC.
The experience with NTP in the field is that NTP clients on the same LAN as their NTP server will maintain synchronization to within approximately 100 microseconds. NTP clients on the North American Internet and utilizing multiple NTP servers will maintain synchronization to within approximately 10 milliseconds.
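As a rough illustration of the client side of this exchange, the following sketch obtains a single offset and delay estimate using the Python ntplib package; a real NTP daemon repeats this exchange continuously and disciplines the system clock rather than printing the result. The server name is an assumption.

```python
# One NTP client exchange (Python "ntplib"); server name is an assumption.
import ntplib

client = ntplib.NTPClient()
response = client.request('ntp.hospital.example', version=3)

# The offset is the estimated correction to the local clock; the delay is
# the estimated round-trip network delay used in computing that offset.
print(f'estimated local clock offset: {response.offset:+.6f} s')
print(f'estimated round-trip delay:   {response.delay:.6f} s')
```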
There are low cost devices with only limited time synchronization needs. NTP has been updated to include SNTP for these devices. SNTP eliminates the estimation of network delays and eliminates the statistical methods for optimal time estimation. It assumes that the network delays are nil and that each NTP server time estimate received is completely accurate. This reduces the development and hardware costs for these devices. The computer processing costs for NTP are insignificant for a PC, but may be burdensome for very small devices. The SNTP synchronization errors are only a few milliseconds in a LAN environment. They are very topology sensitive and errors may become huge in a WAN environment.
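The simplification is visible at the packet level. The following sketch sends a minimal 48-byte SNTP request and trusts the server's transmit timestamp outright, with none of NTP's delay estimation or statistical filtering; the server name is an assumption.

```python
# Minimal SNTP exchange over UDP; the server name is an assumption.
import socket
import struct
import time

NTP_TO_UNIX = 2208988800          # seconds between the 1900 and 1970 epochs
request = b'\x1b' + 47 * b'\0'    # LI=0, version 3, mode 3 (client)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2.0)
sock.sendto(request, ('ntp.hospital.example', 123))
reply, _ = sock.recvfrom(48)

# Transmit timestamp: seconds field at bytes 40-43 of the reply, taken
# at face value with no correction for network delay.
server_seconds = struct.unpack('!I', reply[40:44])[0]
print(time.ctime(server_seconds - NTP_TO_UNIX))
```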
Most NTP servers are in turn NTP clients to multiple superior servers and peers. NTP is designed to accommodate a hierarchy of server/clients that distributes time information from a few international standard clocks out through layers of servers.
The NTP implementations anticipate the use of three major kinds of external clock sources:
Many ISPs and government agencies offer access to NTP servers that are in turn synchronized with the international standard clocks. This access is usually offered on a restricted basis.
The US, Canada, Germany, and others offer radio broadcasts of time signals that may be used by local receivers attached to an NTP server. The US and Russia broadcast time signals from satellites, e.g., GPS. Some mobile telephone services broadcast time signals. These signals are synchronized with the international standard clocks. GPS time signals are popular worldwide time sources. Their primary problem is difficulties with proper antenna location and receiver cost. Most of the popular low cost consumer GPS systems save money by sacrificing the clock accuracy.
For extremely high accuracy synchronization, atomic clocks can be attached to NTP servers. These clocks do not provide a time estimate, but they provide a pulse signal that is known to be extremely accurate. The optimal estimation logic can use this in combination with other external sources to achieve sub-microsecond synchronization to a reference clock even when the devices are separated by the earth's diameter.
The details regarding selecting an external clock source and appropriate use of the clock source are outside the scope of the NTP protocol. They are often discussed and documented in conjunction with the NTP protocol and many such interfaces are included in the reference implementation of NTP.
In theory, servers can be SNTP servers and NTP servers can be SNTP clients of other servers. This is very strongly discouraged. The SNTP errors can be substantial, and the clients of a server using SNTP will not have the statistical information needed to assess the magnitude of these errors. It is feasible for SNTP clients to use NTP servers. The SNTP protocol packets are identical to the NTP protocol packets. SNTP differs in that some of the statistical information fields are filled with nominal SNTP values instead of having actual measured values.
There are several public reference implementations of NTP server and client software available. These are in widespread use and have been ported to many platforms (including Unix, Windows, and Macintosh). There are also proprietary and built-in NTP services for some platforms (e.g., Windows 2000). The public reference implementations include sample interfaces to many kinds of external clock sources.
There are significant performance considerations in the selection of locations for servers and clients. Devices that need high accuracy synchronization should probably be all on the same LAN together with an NTP server on that LAN.
Real time operating system (RTOS) implementations may have greater difficulties. The reference NTP implementations have been ported to several RTOSs. There were difficulties with the implementations of the internal system clock on the RTOS. The dual frequency/phase adjustment requirements may require the clock functions to be rewritten. The reference implementations also require access to a separate high resolution interval timer (with sub microsecond accuracy and precision). This is a standard CPU feature for modern workstation processors, but may be missing on low end processors.
An RTOS implementation with only ordinary synchronization requirements might choose to write their own SNTP only implementation rather than use the reference NTP implementation. The SNTP client is very simple. It may be based on the reference implementation or written from scratch. The operating system support needed for accurate adjustment is optional for SNTP clients. The only requirement is the time base stability requirement, which usually implies the ability to specify fractional seconds when setting the time.
The conflict between the user desire to use local time and the NTP use of UTC must be resolved in the device. DHCP offers the ability to obtain the offset between local time and UTC dynamically, provided the DHCP server supports this option. There remain issues such as service procedures, start up in the absence of DHCP, etc.
The differences between local time, UTC, summer time, etc. are a common source of confusion and errors setting the battery clock. The NTP algorithms will eventually resolve these errors, but the final convergence on correct time may be significantly delayed. The device might be ready for medical use before these errors are resolved.
There will usually be a period of time where a network will have some applications that utilize the configuration management protocols coexisting with applications that are only manually configured. The transition issues arise when a legacy Association Requester interacts with a managed Association Acceptor or when a managed Association Requester interacts with a legacy Association Acceptor. Some of these issues also arise when the Association Requester and Association Acceptor support different configuration management profiles. These are discussed below and some general recommendations made for techniques that simplify the transition to a fully configuration managed network.
The legacy Association Requester requires that the IP address of the Association Acceptor not change dynamically because it lacks the ability to utilize DNS to obtain the current IP address of the Association Acceptor. The legacy Association Requester also requires that the AE Title of the Association Acceptor be provided manually.
The DHCP server should be configurable with a database of hostname, IP, and MAC address relationships. The DHCP server can be configured to provide the same IP address every time that a particular machine requests an IP address. This is a common requirement for Association Acceptors that obtain IP addresses from DHCP. The Association Acceptor may be identified by either the hardware MAC address or the hostname requested by the Association Acceptor.
The IP address can be permanently assigned as a static IP address so that legacy Association Requester can be configured to use that IP address while managed Association Requester can utilize the DNS services to obtain its IP address.
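For illustration, such a fixed allocation might be expressed as follows, assuming the ISC dhcpd server (other DHCP servers provide equivalent mechanisms); the host name, MAC address, and IP address are invented.

```
host archive1 {
  hardware ethernet 00:a0:c9:12:34:56;   # MAC address of the acceptor
  fixed-address 192.0.2.10;              # same address granted on every lease
}
```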
No specific actions are needed, although see below regarding the possibility that the DHCP server does not perform DDNS updates.
Although the managed Association Acceptor may obtain information from the LDAP server, the legacy Association Requester will not. This means that the legacy mechanisms for establishing AE Titles and related information on the Association Requester will need to be coordinated manually. Most LDAP products have suitable GUI mechanisms for examining and updating the LDAP database. These are not specified by this standard.
An LDAP entry for the Association Requester should be manually created, although this may be a very abbreviated entry. It is needed so that the AE Title mechanisms can maintain unique AE Titles. There must be entries created for each of the AEs on the legacy Association Requester.
The legacy Association Requester will need to be configured based on manual examination of the LDAP information for the server and using the legacy procedures for that Association Requester.
The DHCP server may need to be configured with a pre-assigned IP address for the Association Requester if the legacy Association Acceptor restricts access by IP addresses. Otherwise no special actions are needed.
The legacy Association Acceptor hostname and IP address should be manually placed into the DNS database.
The LDAP server should be configured with a full description of the legacy Association Acceptor, even though the Association Acceptor itself cannot provide this information. This will need to be done manually, most likely using GUI tools. The legacy Association Acceptor will need to be manually configured to match the AE Titles and other configuration information.
In the event that the DHCP server or the DNS server does not support or permit DDNS updates, the DNS server database will need to be manually configured. Also, because these updates are not occurring, all of the machines should have fixed pre-assigned IP addresses. This is not strictly necessary for clients, since they will not have incoming DICOM connections, but it may be needed for other reasons. In practice, maintaining this file is very similar to the maintenance of the older hostname files. There is still a significant administrative gain, because only the DNS and DHCP configuration files need to be maintained, instead of maintaining files on each of the servers and clients.
It is likely that some devices will support only some of the system management profiles. A typical example of such partial support is a node that supports the DHCP Client, DNS Client, and NTP Client actors, but not the LDAP Client.
Configurations like this are common because many operating system platforms provide complete tools for implementing these clients. The support for LDAP Client requires application support and is often released on a different cycle than the operating system support. These devices will still have their DICOM application manually configured, but will utilize the DHCP, DNS, and NTP services.
The addition of the first fully managed device to a legacy network requires both server setup and device setup.
The managed node requires that servers be installed or assigned to provide the following actors: DHCP Server, DNS Server, NTP Server, and LDAP Server.
These may be existing servers that need only administrative additions, existing hardware that has new software added, or one or several different systems. DHCP, DNS, and NTP services are provided by a very wide variety of equipment.
The NTP server location relative to this device should be reviewed to be sure that it meets the timing requirements of the device. If it is an NTP client with a time accuracy requirement of approximately 1 second, almost any NTP server location will be acceptable. For SNTP clients and devices with high time accuracy requirements, it is possible that an additional NTP server or network topology adjustment may be needed.
If the NTP server is using secured time information, certificates or passwords may need to be exchanged.
There are advantages to documenting the unmanaged nodes in the DHCP database. This is not critical for operations, but it helps avoid administrative errors and simplifies gradual transitions. Most DHCP servers support the definition of pre-allocated static IP addresses, so the unmanaged nodes can be documented by including entries for their static IP addresses. These nodes will not be using the DHCP server initially, but the DHCP database can then be used to document the manually assigned IP addresses in a way that avoids unintentional duplication.
The managed node must be documented in the DHCP database. The NTP and DNS server locations must be specified.
If this device is an association acceptor it probably should be assigned a fixed IP address. Many legacy devices cannot operate properly when communicating with devices that have dynamically assigned IP addresses. The legacy device does not utilize the DNS system, so the DDNS updates that maintain the changing IP address are not available. So most managed nodes that are association acceptors must be assigned a static IP address. The DHCP system still provides the IP address to the device during the boot process, but it is configured to always provide the same IP address every time. The legacy systems are configured to use that IP address.
Most DNS servers have a database for hostname to IP relationships that is similar to the DHCP database. The unmanaged devices that will be used by the managed node must have entries in this database so that machine IP addresses can be found. It is often convenient to document all of the hostnames and IP addresses for the network into the DNS database. This is a fairly routine administrative task and can be done for the entire network and maintained manually as devices are added, moved, or removed. There are many administrative tools that expect DNS information about all network devices, and this makes that information available.
If DDNS updates are being used, the manually maintained portion of the DNS database must be adjusted to avoid conflicts.
There must be DNS entries provided for every device that will be used by the managed node.
The LDAP database should be configured to include device descriptions for this managed device, and there should be descriptions for the other devices that this device will communicate with. The first portion is used by this device during its start up configuration process. The second portion is used by this device to find the services that it will use.
The basic structural components of the DICOM information must be present on the LDAP server so that this device can find the DICOM root and its own entry. It is a good idea to fully populate the AE Title registry so that as managed devices are added there are no AE Title conflicts.
This device needs to be able to find the association acceptors (usually SCPs) that it will use during normal operation. These may need to be manually configured into the LDAP server. Their descriptions can be highly incomplete if these other devices are not managed devices. Only enough information is needed to meet the needs of this device. If this device is manually configured and makes no LDAP queries to find services, then none of the other device descriptions are needed.
There are some advantages to manually maintaining the LDAP database for unmanaged devices. This can document the manually assigned AE Titles. The service and network connection information can be very useful during network planning and troubleshooting. The database can also be useful during service operations on unmanaged devices as a documentation aid. The decision whether to use the LDAP database as a documentation aid often depends upon the features provided with the LDAP server. If it has good tools for manually updating the LDAP database and good tools for querying and reporting, it is often a good investment to create a manually maintained LDAP database.
During the transition period devices will be switched from unmanaged to managed. This may be done in stages, with the LDAP client transition being done at a different time than the DHCP, DNS, and NTP client. This section describes a switch that changes a device from completely unmanaged to a fully managed device. The device itself may be completely replaced or simply have a software upgrade. Details of how the device is switched are not important.
If the device was documented as part of an initial full network documentation process, the entries in the DHCP and DNS databases need to be checked. If the entry is missing, wrong, or incomplete, it must be corrected in the DHCP and DNS databases. If the entries are correct, then no changes are needed to those servers. The device can simply start using the servers. The only synchronization requirement is that the DHCP and DNS servers be updated before the device, so these can be scheduled as convenient.
If the device is going to be dynamically assigned an IP address by the DHCP server, then the DNS server database should be updated to reflect that DDNS is now going to be used for this device. This update should not be made ahead of time. It should be made when the device is updated.
The NTP server location relative to this device should be reviewed to be sure that it meets the timing requirements of the device. If it is an NTP client with a time accuracy requirement of approximately 1 second, almost any NTP server location will be acceptable. For SNTP clients and devices with high time accuracy requirements, it is possible that an additional NTP server or network topology adjustment may be needed.
If the NTP server is using secured time information, certificates or passwords may need to be exchanged.
The association acceptors may be able to simply utilize the configuration information from the LDAP database, but it is likely that further configuration will be needed. Unmanaged nodes probably have only a minimal configuration in the database.
The Diameter Symmetry of a Stenosis is a parameter describing the symmetry of the arterial plaque distribution.
The Symmetry Index is defined as a / b, where a is smaller than or equal to b. Both a and b are measured in the reconstructed artery at the position of the minimal luminal diameter.
Possible values of symmetry range from 0 to 1, where 0 indicates complete asymmetry and 1 indicates complete symmetry.
Reference: "Quantitative coronary arteriography: physiological aspects", pages 102-103, in: Reiber and Serruys, Quantitative Coronary Arteriography, 1991.
To compare the quantitative results with those provided by the usual visual interpretation, the left ventricular boundary is divided into 5 anatomical regions, denoted:
The computer-defined obstruction analysis calculates the reconstruction diameter based on the diameters outside the stenotic segment. This method is completely automated and user independent. The reconstructed diameter represents the diameters of the artery had the obstruction not been present.
The proximal and distal borders of the stenotic segment are automatically calculated.
The difference between the detected contour and the reconstructed contour inside the reconstructed diameter contour is considered to be the plaque.
Based on the reconstruction diameter at the Minimum Luminal Diameter (MLD) position a reference diameter for the obstruction is defined.
The interpolated reference obstruction analysis calculates a reconstruction diameter for each position in the analyzed artery. This reconstructed diameter represents the diameters of the artery if no disease were present. The reconstruction diameter is a line fitted through at least two user-defined reference markers by linear interpolation.
By default, two reference markers are used; they are automatically positioned at 5% and 95% of the artery length.
To calculate a percentage diameter stenosis the reference diameter for the obstruction is defined as the reconstructed diameter at the position of the MLD.
In cases where the proximal and distal part of the analyzed artery have a stable diameter during the treatment and long-term follow-up, this method will produce a stable reference diameter for all positions in the artery.
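A minimal sketch of this computation is given below, under the stated defaults (reference markers at 5% and 95% of the artery length). The function name is illustrative, and real QCA packages apply additional corrections.

```python
# Interpolated reference sketch; names are illustrative.
import numpy as np

def percent_diameter_stenosis(diameters_mm):
    """diameters_mm: measured luminal diameter at each midline position."""
    n = len(diameters_mm)
    markers = [int(0.05 * (n - 1)), int(0.95 * (n - 1))]   # default markers
    # Reconstruction diameter: linear interpolation through the markers
    # (held constant beyond the markers in this simple sketch).
    recon = np.interp(np.arange(n), markers,
                      [diameters_mm[i] for i in markers])
    mld = int(np.argmin(diameters_mm))    # minimum luminal diameter position
    reference = recon[mld]                # reference diameter at the MLD
    return 100.0 * (1.0 - diameters_mm[mld] / reference)
```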
A vessel segment length as seen in the image is not always indicated as the same X-axis difference in the graph.
The X-axis of the graph is based on pixel positions on the midline, and these points are not necessarily equidistant. This is caused by the fact that vessels do not run only perfectly horizontally or vertically, but also at angles.
When a vessel midline covers a number of pixel positions perfectly horizontally or vertically, it will cover less space in mm than a vessel that covers the same number of pixel positions at an angle. When a segment runs perfectly horizontally or vertically, the segment length is equal to the number of midline pixel points times the pixel separation (adjacent midline points are separated by exactly the pixel spacing in mm), and the points on the X-axis also represent exactly one pixel space. This is not the case when the vessel runs at an angle. For example, for an artery positioned at a 45° angle, the distance between two adjacent points on the midline is √2 (approximately 1.4) times the pixel spacing.
As an example, the artery consists of 10 elements (n = 10); each has a length of 1 mm (pixel size). If the MLD were exactly in the center of the artery, you would expect the length from 0 to the MLD to be 5 sub-segments long, thus 5 mm. This is true if the artery runs horizontally or vertically (assuming the aspect ratio is 1).
If the artery is positioned at a 45° angle, then the length of each element is √2 times the pixel size compared to the previous example. Thus the length depends on the angle of the artery.
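The arithmetic above can be checked with a short sketch that sums the Euclidean distances between successive midline points and scales by the pixel spacing; the midline coordinates are invented for illustration.

```python
# Segment length from midline pixel points; coordinates are invented.
import math

def segment_length_mm(midline_points, pixel_spacing_mm):
    """midline_points: list of (row, col) pixel positions along the midline."""
    return pixel_spacing_mm * sum(
        math.hypot(r2 - r1, c2 - c1)
        for (r1, c1), (r2, c2) in zip(midline_points, midline_points[1:]))

horizontal = [(0, i) for i in range(11)]    # 10 steps, horizontal
diagonal   = [(i, i) for i in range(11)]    # 10 steps, at 45 degrees
print(segment_length_mm(horizontal, 1.0))   # 10.0 mm
print(segment_length_mm(diagonal, 1.0))     # ~14.14 mm (sqrt(2) per step)
```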
The following use cases are examples of how the DICOM Ophthalmology Photography objects may be used. These use cases utilize the term "picture" or "pictures" to avoid using the DICOM terminology of frame, instance, or image. In the use cases, "series" means a DICOM Series.
An N-spot retinal photography exam means that "N" predefined locations on the retina are examined.
A routine N-spot retinal photography exam is ordered for both eyes. There is nothing unusual observed during the exam, so N pictures are taken of each retina. This healthcare facility also specifies that in an N-spot exam a routine external picture is captured of both eyes, that the current intraocular pressure (IOP) is measured, and that the current refractive state is measured.
2N pictures of the retina and one external picture. Each retinal picture is labeled in the acquisition information to indicate its position in the local N-spot definition. The series is not labeled; each picture is labeled OS or OD as appropriate.
In the acquisition information of every picture, the IOP and refractive state information is replicated.
Since there are no stereo pictures taken, there is no Stereometric Relationship IOD instance created.
A routine N-spot retinal photography exam is ordered for both eyes. During the exam a lesion is observed in the right eye. The lesion spans several spots, so an additional wide angle view is taken that captures the complete lesion. Additional narrow angle views of the lesion are captured in stereo. After completing the N-spot exam, several slit lamp pictures are taken to further detail the lesion outline.
2N pictures of the retina and one external picture, one additional wide angle picture of the abnormal retina, 2M additional pictures for the stereo detail of the abnormal retina, and several slit lamp pictures of the abnormal eye. The different lenses and lighting parameters are documented in the acquisition information for each picture.
One instance of a Stereometric Relationship IOD, indicating which of the stereo detail pictures above should be used as stereo pairs.
A routine fluorescein exam is ordered for one eye. The procedure includes:
Routine stereo N-spot pictures of both eyes, routine external picture, and current IOP.
Reference stereo picture of each eye using filtered lighting
Capture of 20 stereo pairs with about 0.7 seconds between pictures in a pair and 3-5 seconds between pairs.
Stereo pair capture of each eye at increasing intervals for the next 10 minutes, taking a total of 8 pairs for each eye.
Four pictures taken with filtered lighting (documented in acquisition information) that constitute a stereo pair for each eye.
40 pictures (20 pairs) for one eye of near term fluorescein. These include the acquisition information, lighting context, and time stamp.
32 pictures (8 pairs for each eye) of long term fluorescein. These include acquisition information, lighting context, and time stamp.
One Stereometric Relationship IOD, indicating which of the above OP instances should be used as stereo pairs.
The pictures of a) through d) may or may not be in the same series.
The patient presents with a generic eye complaint. Visual examination reveals a possible abrasion. The general appearance of the eyes is documented with a wide angle shot, followed by several detailed pictures of the ocular surface. A topical stain is applied to reveal details of the surface lesion, followed by several additional pictures. Due to the nature of the examination, no basic ophthalmic measurements were taken.
The result is a study with one or more series that contains:
The patient is suspected of a nervous system injury. A series of external pictures are taken with the patient given instructions to follow a light with his eyes. For each picture, the location of the light is indicated by the patient intent information (e.g., above, below, patient left, patient right).
The result is a study with one or more series that contains:
Patient is suspected of myasthenia gravis. Both eyes are imaged in the normal state. Then, after Tensilon® (edrophonium chloride) injection, a series of pictures is taken. The time, amount, and method of Tensilon® (edrophonium chloride) administration are captured in the acquisition information. The time stamps of the pictures are used in conjunction with the behavior of the eyelids to assess the state of the disease.
The result is a study with one or more series that contains:
A stereo optic disk examination is ordered for a patient with glaucoma. For this examination, the IOP does not need to be measured. The procedure includes:
Ophthalmic mapping usually occurs in the posterior region of the fundus, typically in the macula or the optic disc. However, this or other imaging may occur anywhere in the fundus. The mapping data has clinical relevance only in the context of its location in the fundus, so this must be appropriately defined. CID 4207 “Ophthalmic Image Position” codes and the ocular fundus locations they represent are defined by anatomical landmarks and are described using conventional anatomic references, e.g., superior, inferior, temporal, and nasal. Figure U.1.8-1 is a schematic representation of the fundus of the left eye, and provides additional clarification of the anatomic references used in the image location definitions. A schematic of the right eye is omitted since it is identical to the left eye, except horizontally reversed (Temporal→Nasal, Nasal→Temporal).
The spatial precision of the following location definitions varies depending upon the specific reference. Any location that is described as "centered" is assumed to be positioned in the center of the referenced anatomy. However, the center of the macula can be defined visually with more precision than that of the disc or a lesion. The locations without a "center" reference are approximations of the general quadrant in which the image resides.
The following are general definitions that explain the terminology used in the code definitions.
Central zone - a circular region centered vertically on the macula and extending one disc diameter nasal to the nasal margin of the disc and four disc diameters temporal to the temporal margin of the disc.
Equator - the border between the mid-periphery and periphery of the retina, corresponding to a circle approximately coincident with the ampullae of the vortex veins.
Superior - any region that is located superiorly to a horizontal line bisecting the macula
Inferior - any region that is located inferiorly to a horizontal line bisecting the macula
Temporal - any region that is located temporally to a vertical line bisecting the macula
Nasal - any region that is located nasally to a vertical line bisecting the macula
Mid-periphery - A circular zone of the retina extending from the central zone to the equator
Periphery - A zone of the retina extending from the equator to the ora serrata.
Ora Serrata - the most anterior extent and termination of the retina
Figure U.1.8-1 illustrates anatomical representation of defined regions of the fundus of the left eye according to anatomical markers. The right eye has the same representations but reversed horizontally so that temporal and nasal are reversed with the macula remaining temporal to the disc.
Modified after Welch Allyn: http://www.welchallyn.com/wafor/students/Optometry-Students/BIO-Tutorial/BIO-Observation.htm.
The following shows the proposed sequence of events using individual images that are captured for later stereo viewing, with the stereo viewing relationships captured in the stereometric relationship instance.
The instances captured are all time stamped so that the fluorescein progress can be measured accurately. The acquisition and equipment information captures the different setups that are in use:
Acquisition information A is the ordinary illumination and planned lenses for the examination.
Acquisition information B is the filtered illumination, filtered viewing, and lenses appropriate for the fluorescein examination.
Acquisition information C indicates no change to the equipment settings, but once the injection is made, the subsequent images include the drug, method, dose, and time of delivery.
Optical tomography uses the back scattering of light to provide cross-sectional images of ocular structures. Visible (or near-visible) light works well for imaging the eye because many important structures are optically transparent (cornea, aqueous humor, lens, vitreous humor, and retina - see Figure U.3-1).
By analogy to ultrasound imaging, the terms A-scan and B-scan are used to describe optical tomography images. In this setting, an A-scan is the image acquired by passing a single beam of light through the structure of interest. An A-scan image represents the optical reflectivity of the imaged tissue along the path of that beam - a one-dimensional view through the structure. A B-scan is then created from a collection of adjacent A-scan images - a two-dimensional image. It is also possible to combine multiple B-scans into a 3-dimensional image of the tissue.
When using optical tomography in the eye it is desirable to have information about the anatomic and physiologic state of the eye. Measurements like the patient's refractive error and axial eye length are frequently important for calculating magnification or minification of images. The accommodative state and application of pupil dilating medications are important when imaging the anterior segment of the eye as they each cause shifts in the relative positions of ocular structures. The use of dilating medications is also relevant when imaging posterior segment structures because a small pupil can account for poor image quality.
Ophthalmic tomography may be used to plan placement of a phakic intraocular lens (IOL). A phakic IOL is a synthetic lens placed in the anterior segment of the eye in someone who still has their natural crystalline lens (i.e., they are "phakic"). This procedure is done to correct the patient's refractive error, typically a high degree of myopia (near-sightedness). The exam will typically be performed on both eyes, and each eye may be examined in a relaxed and accommodated state. Refractive information for each eye is required to interpret the tomographic study.
A study consists of one or more B-scans (see Figure U.3-2) and one or more instances of refractive state information. There may be a reference image of the eye associated with each B-scan that shows the position of the scan on the eye.
The anterior chamber angle is defined by the angle between the iris and cornea where they meet the sclera. This anatomic feature is important in people with narrow angles. Since the drainage of aqueous humor occurs in the angle, a significantly narrow angle can impede outflow and result in increased intraocular pressure. Chronically elevated intraocular pressures can result in glaucoma. Ophthalmic tomography represents one way of assessing the anterior chamber angle.
B-scans are obtained of the anterior segment including the cornea and iris. Scans may be taken at multiple angles in each eye (see Figure U.3-2). A reference image may be acquired at the time of each B-scan(s). Accommodative and refractive state information are also important for interpretation of the resulting tomographic information.
Note in the Figure the ability to characterize the narrow angle between the iris and peripheral cornea.
As a transparent structure located at the front of the eye, the cornea is ideally suited to optical tomography. There are multiple disease states including glaucoma and corneal edema where the thickness of the cornea is relevant and tomography can provide this information using one or more B-scans taken at different angles relative to an axis through the center of the cornea.
Tomography is also useful for defining the curvature of the cornea. Accurate measurements of the anterior and posterior curvatures are important in diseases like keratoconus (where the cornea "bulges" abnormally) and in the correction of refractive error via surgery or contact lenses. Measurements of corneal curvature can be derived from multiple B-scans taken at different angles through the center of the cornea.
In both cases, a photograph of the imaged structure may be associated with each B-scan image.
The Retinal Nerve Fiber Layer (RNFL) is made up of the axons of the ganglion cells of the retina. These axons exit the eye as the optic nerve carrying visual signals to the brain. RNFL thinning is a sign of glaucoma and other optic nerve diseases.
An ophthalmic tomography study contains one or more circular scans, perhaps at varying distances from the optic nerve. Each circular scan can be "unfolded" and treated as a B-scan used to assess the thickness of the nerve fiber layer (see Figure U.3-3). A fundus image that shows the scan location on the retina may be associated with each B-scan. To detect a loss of retinal nerve fiber cells the exam might be repeated one or multiple times over some period of time. The change in thickness of the nerve fiber tissue or a trend (serial plot of thickness data) might be used to support the diagnosis.
In the Figure, the pseudo-colored image on the left shows the various layers of the retina in cross section with the nerve fiber layer between the two white lines. The location of the scan is indicated by the bright circle in the photograph on the right.
The macula is located roughly in the center of the retina, temporal to the optic nerve. It is a small and highly sensitive part of the retina responsible for detailed central vision. Many common ophthalmic diseases affect the macula, frequently impacting the thickness of different layers in the macula. A series of scans through the macula can be used to assess those layers (see Figure U.3-4).
A study may contain a series of B-scans. A fundus image showing the scan location(s) on the retina may be associated with one or more B-scans. In the Figure, the corresponding fundus photograph is in the upper left.
Figure U.3-4. Example of a macular scan showing a series of B-scans collected at six different angles
Some color retinal imaging studies are done to determine vascular caliber of retinal vessels, which can vary throughout the cardiac cycle. Images are captured while connected to an ECG machine or a cardiac pulse monitor allowing image acquisition to be synchronized to the cardiac cycle.
Angiography is a procedure that requires a dye to be injected into the patient for the purpose of enhancing the imaging of vascular structures in the eye. A standard step in this procedure is imaging the eye at specified intervals to detect the pooling of small amounts of dye and/or blood in the retina. For a doctor or technician to properly interpret angiography images, it is important to know how much time has elapsed between the dye being injected into the patient (time 0) and the image frame being taken. It is known that such dyes can have an effect on OPT tomographic images as well (and it may be possible to use such dyes to enhance vascular structure in the OPT images); therefore, time synchronization will be applied to the creation of the OPT images as well as any associated OP images.
The angiographic acquisition is instantiated as a multi-frame OPT Image. The variable time increments between frames of the image are captured in the Frame Time Vector of the OPT Multi-frame Module. For multiple sets of images, e.g., sets of retinal scan images, the Slice Location Vector will be used in addition to the Frame Time Vector. For 5 sets of 6 scans there will be 30 frames in the multi-frame image. The first 6 values in the Frame Time Vector will give the time from injection to the first set of scans, the second 6 will contain the time interval for the second set of 6 scans, and so on, for a total of 5 time intervals.
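The layout of such a Frame Time Vector can be sketched as follows. The timing values are invented for illustration, as is the assumption that each value is the increment relative to the previous frame (with the injection as time 0 for the first frame).

```python
# Frame Time Vector layout sketch for 5 sets of 6 scans (30 frames).
# All timing values below are invented for illustration.
seconds_from_injection_to_set = [30.0, 60.0, 120.0, 300.0, 600.0]  # assumed
scans_per_set = 6
intra_set_increment = 0.1          # assumed seconds between scans in a set

frame_time_vector = []
previous_end = 0.0
for set_start in seconds_from_injection_to_set:
    # First frame of the set: increment from the previous frame (or from
    # the injection, time 0, for the very first frame).
    frame_time_vector.append(set_start - previous_end)
    frame_time_vector.extend([intra_set_increment] * (scans_per_set - 1))
    previous_end = set_start + intra_set_increment * (scans_per_set - 1)

assert len(frame_time_vector) == 30   # 5 sets of 6 scans
```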
Another example of an angiographic study with related sets of images is a sequence of SLO/OCT/"ICG filtered" image triples (or SLO/OCT image pairs) that are time-stamped relative to a user-defined event. This user-defined event usually corresponds to the injection time of ICG (indocyanine green) into the patient's bloodstream. The resultant images form an angiography study where the patient's blood flow can be observed with the "ICG filtered" images and can be correlated with the pathologies observed in the SLO and OCT images, which are spatially related to the ICG image with a pixel-to-pixel correspondence on the X-Y plane.
The prognosis of some pathologies can be aided by a 3D visualization of the affected areas of the eye. For example, in certain cases the density of cystic formations or the amount of drusen present can be hard to ascertain from a series of unrelated two-dimensional longitudinal images of the eye. However, some OCT machines are capable of taking a sequence of spatially related two-dimensional images in a suitably short period of time. These images can either be oriented longitudinally (perpendicular to the retina) or transversely (near-parallel to the retina). Once such a sequence has been captured, it then becomes possible for the examined volume of data to be reconstructed for an interactive 3D inspection by a user of the system (see Figure U.3-5). It is also possible for measurements, including volumes, to be calculated based on the 3D data set.
A reference image is often combined with the OCT data to provide a means of registering the 3D OCT data-set with a location on the surface of the retina (see Figure U.3-6 and Figure U.3-7).
While the majority of ophthalmic tomography imaging consists of sets of longitudinal images (also known as B scans or line scans), transverse images (also known as coronal or "en face" images) can also provide useful information in determining the full extent of the volume affected by pathology.
Longitudinal images are oriented in a manner that is perpendicular to the structure being examined, while transverse images are oriented in an "en face" or near parallel fashion through the structure being examined.
Transverse images can be obtained directly as a single scan (as shown in Figure U.3-8 and Figure U.3-9), or they can be reconstructed from a 3D data set (as shown in Figure U.3-10 and Figure U.3-11). A sequence of transverse images can also be combined to form a 3D data set.
Figure U.3-9. Correlation between a Transverse OCT Image and a Reference Image Obtained Simultaneously
Figure U.3-8, Figure U.3-9, Figure U.3-10 and Figure U.3-11 are all images of the same pathology in the same eye, but the two different orientations provide complementary information about the size and shape of the pathology being examined. For example, when examining macular holes, determining the amount of surrounding cystic formation is an important aid in subsequent treatment. The extent of such cystic formation is much more easily ascertained using transverse images rather than longitudinal images. Transverse images are also very useful in locating micro-pathologies such as covered macular holes, which may be overlooked using conventional longitudinal imaging.
In Figure U.3-10, the blue, green, and pink lines show the correspondence of the three images. In Figure U.3-11, the transverse image is highlighted in yellow.
The Hanging Protocol Composite IOD contains information about user viewing preferences, related to image display station (workstation) capabilities. The associated Service Classes support the storage (C-STORE), query (C-FIND) and retrieve (C-MOVE and C-GET) of Hanging Protocol Instances between servers and workstations. The goal is for users to be able to conveniently define their preferred methods of presentation and interaction for different types of viewing circumstances once, and then to automatically layout image sets according to the users' preferences on workstations of similar capability.
The primary expectation is to facilitate the automatic and consistent hanging of images according to definitions provided by the users, sites or vendors of the workstations by providing the capability to:
Search for Hanging Protocols by name, level (single user, user group, site, manufacturer), user identification code, modality, anatomy, and laterality.
Allow automatic hanging of image sets to occur for all studies on workstations with sufficiently compatible capabilities by matching against user or site defined Hanging Protocols. This includes supporting automatic hanging when the user reads from different locations, or on different but similar workstation types.
How relevant image sets (e.g., from the current and prior studies) are obtained is not defined by the Hanging Protocol IOD or Service Classes.
Conformance with the DICOM Grayscale Standard Display Function and the DICOM Softcopy Presentation States in conjunction with the Hanging Protocol IOD allows the complete picture of what the users see, and how they interact with it, to be defined, stored and reproduced as similarly as possible, independent of workstation type. Further, it is anticipated that implementers will make it easy for users to point to a graphical representation of what they want (such as 4x1 versus 12x1 format with a horizontal alternator scroll mechanism) and select it.
User A sits down at workstation X, which has recently been installed and hence has no user specific Hanging Protocols defined; it has two 1024x1280 resolution screens (Figure V.1-1). The user brings up the list of studies to be read and selects the first study, a chest CT, together with the relevant prior studies. The workstation queries the Hanging Protocol Query SCP for instances of the Hanging Protocol Storage SOP Class. It finds none for this specific user, but matches a site specific Hanging Protocol Instance, which was set up when the workstation was installed at the site. It applies the site Hanging Protocol Instance, and the user reads the current study in comparison to the prior studies.
The user decides to customize the viewing style, and uses the viewing application to define what type of Hanging Protocol is preferred (layout style, interaction style) by pointing and clicking on graphical representations of the choices. The user chooses a 3-column by 4-row tiled presentation with a "vertical alternator" interaction, and a default scroll amount of one row of images. The user places the current study on the left screen, and the prior study on the right screen. The user requests the application to save this Hanging Protocol, which causes the new Hanging Protocol Instance to be stored to the Hanging Protocol Storage SCP.
When the same user comes back the next day to read chest CT studies at workstation X and a study is selected, the application queries the Hanging Protocol Query SCP to determine which Hanging Protocol Instances best match the scenario of this user on this workstation for this study. The best match returned by the SCP in response to the query is with the user ID matching his user ID, the study type matched to the study type(s) of the image set selected for viewing, and the screen types matching the workstation in use.
A list of matches is produced, with the Hanging Protocol Instance that the user defined yesterday for chest CT matching the best, and the current CT study is automatically displayed on the left screen with that Hanging Protocol. Alternative next best matches are available to the user via the application interface's pull-down menu list of all closely matching Hanging Protocol Instances.
Because this Hanging Protocol defines an additional image set, the prior year's chest CT study for the same patient is displayed next to the current study, on the right screen.
The next week, the same user reads chest CTs at a different site in the same enterprise on a similar type workstation, workstation Y, from a different vendor. The workstation has a single 2048x2560 screen (Figure V.1-1). This workstation queries the Hanging Protocol Query SCP, and retrieves matching Hanging Protocol Instances, choosing as the best match the Hanging Protocol Instance used on workstation X before by user A. This Hanging Protocol is automatically applied to display the chest CT study. The current chest CT study is displayed on the left half of the 2048x2560 screen, and the prior chest CT study is displayed on the right half of the screen, with 3 columns and 8 rows each, maintaining the same vertical alternator layout. The sequence of communications between the workstations and the SCP is depicted in Figure V.1-2.
The overall process flow of Hanging Protocols can be seen in Figure V.2-1, and consists of three main steps: selection, processing, and layout. The selection is defined in the Section C.23.1 “Hanging Protocol Definition Module” in PS3.3. The processing and layout are defined in the Section C.23.3 “Hanging Protocol Display Module” in PS3.3. The first process step, the selection of sets of images that need to be available from DICOM image objects, is defined by the Image Sets Sequence of the Section C.23.1 “Hanging Protocol Definition Module” in PS3.3. This is an N:M mapping, with multiple image sets potentially drawing from the same image objects.
The second part of the process flow consists of the filtering, reformatting, sorting, and presentation intent operations that map the Image Sets into their final form, the Display Sets. This is defined in the Section C.23.3 “Hanging Protocol Display Module” in PS3.3 . This is a 1:M relationship, as multiple Display Sets may draw their images from the same Image Set. The filtering operation allows for selecting a subset of the Image Set and is defined by the Hanging Protocol Display Module Filter Operations Sequence. Reformatting allows operations such as multiplanar reformatting to resample images from a volume (Reformatting Operation Type, Reformatting Thickness, Reformatting Interval, Reformatting Operation Initial View Direction, 3D Rendering Type). The Hanging Protocol Display Module Sorting Operations Sequence allows for ordering of the images. Default presentation intent (a subset of the Presentation State operations such as intensity window default setting) is defined by the Hanging Protocol Display Module presentation intent attributes. The Display Sets are containers holding the final sets of images after all operations have occurred. These sets contain the images ready for rendering to locations on the screen(s).
The rendering of a Display Set to the screen is determined by the layout information in the Image Boxes Sequence within a Display Sets Sequence Item in the Section C.23.3 “Hanging Protocol Display Module” in PS3.3. A Display Set is mapped to a single Image Boxes Sequence. This is generally a single Image Box (rectangular area on screen), but may be an ordered set of image boxes. The mapping to an ordered set of image boxes is a special case to allow the images to flow in an ordered sequence through multiple locations on the screen (e.g., newspaper columns). Display Environment Spatial Position specifies rectangular locations on the screen where the images from the Display Sets will be rendered. The type of interaction to be used is defined by the Image Boxes Sequence Item attributes. A vertically scrolling alternator could be specified by having Image Box Layout Type equal TILED and Image Box Scroll Direction equal VERTICAL.
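Sketched with the open-source pydicom library, such an image box item might look as follows. This is a fragment only, not a complete Hanging Protocol Instance; the 3x8 tiling echoes the vertical alternator example above, and the corner convention is assumed from PS3.3.

from pydicom.dataset import Dataset

# One Image Boxes Sequence Item: a tiled, vertically scrolling image box
# occupying the left half of the display environment.
image_box = Dataset()
image_box.ImageBoxNumber = 1
# Four normalized values: top-left then bottom-right corner of the box
# (assuming the bottom-left origin described in PS3.3).
image_box.DisplayEnvironmentSpatialPosition = [0.0, 1.0, 0.5, 0.0]
image_box.ImageBoxLayoutType = "TILED"
image_box.ImageBoxTileHorizontalDimension = 3  # columns
image_box.ImageBoxTileVerticalDimension = 8    # rows
image_box.ImageBoxScrollDirection = "VERTICAL"

display_set = Dataset()
display_set.ImageBoxesSequence = [image_box]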
An example of this processing is shown in Figure V.2-2. The figure is based on the Neurosurgery Planning Hanging Protocol Example contained in this Annex, and corresponds to the display sets for Display Set Presentation Group #1 (CT only display of current CT study).
Goal: A Hanging Protocol for Chest X-ray, PA & Lateral (LL, RL) views, current & prior, with the following layout:
The Hanging Protocol Definition does not specify a specific modality, but rather a specific anatomy (Chest). The Image Sets Sequence provides more detail, in that it specifies the modalities in addition to the anatomy for each image set.
Hanging Protocol Description: "Current and Prior Chest PA and Lateral"
Hanging Protocol Definition Sequence:
Item 1: (T-D3000, SRT, "Chest")
Hanging Protocol User Identification Code Sequence: zero length
Goal: A Hanging Protocol for MR & CT of Head, for a neurosurgery plan. The 1Kx1K screen on the left shows orthogonal MPR slices through the acquisition volume, and in one presentation group it has a 3D interactive volume rendering in the lower right quadrant. In all display sets the 1Kx1K screen is split into 4 512x512 quadrants. The 2560x2048 screen has a 4 row by 3 column tiled display area. There are 4 temporal presentation groups: CTnew, MR, combined CTnew and MR, combined CTnew and CTold.
Display Environment Spatial Position attribute values for image boxes are represented in terms of ratios in pixel space [(0/3072, 512/2560), (512/3072,0/2560)] rather than (0.0,0.0), (1.0,1.0) space, for ease of understanding the example.
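The conversion from pixel corners to the stored normalized values is simple arithmetic; a minimal sketch, using the 3072x2560 combined display area of this example:

def spatial_position(px1, py1, px2, py2, width=3072, height=2560):
    """Convert pixel corners on the combined display area to the
    normalized (0.0..1.0) values stored in Display Environment
    Spatial Position."""
    return (px1 / width, py1 / height, px2 / width, py2 / height)

# The pair quoted above: top-left (0, 512) and bottom-right (512, 0)
# in pixels, i.e., a 512x512 box.
print(spatial_position(0, 512, 512, 0))  # (0.0, 0.2, 0.1666..., 0.0)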
Hanging Protocol Description: "Neurosurgery planning, requiring MR and CT of head"
Hanging Protocol Definition Sequence:
Item 1: (T-D1100, SRT, "Head")
Hanging Protocol User Identification Code Sequence: zero length
Synchronized Scrolling Sequence: [Link up (synchronize) the MR and CT tiled scroll panes in Display Sets 15 and 16, and the CT new and CT old tiled scroll panes in Display Sets 21 and 22]
The following is an example of a general C-FIND Request for the Hanging Protocol Information Model - FIND SOP Class that is searching for all Chest related Hanging Protocols for the purpose of reading projection Chest X-ray. The user is at a workstation that has two 2Kx2.5K screens.
The following is an example of a set of C-FIND Responses for the Hanging Protocol Information Model - FIND SOP Class, answering the C-FIND Request listed above. There are a few matches for this general query. The application needs to select the best choice among the matches, which is the second response. The first response is for Chest CT, and the third response does not match the user's workstation environment as well as does the second.
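A request of the kind described above might be constructed as in the following pydicom sketch. This shows a plausible subset of matching keys only; association handling and response processing (e.g., via pynetdicom) are omitted.

from pydicom.dataset import Dataset

identifier = Dataset()
# Return keys (universal matching).
identifier.HangingProtocolName = ""
identifier.HangingProtocolLevel = ""

# Match on anatomy: Chest.
region = Dataset()
region.CodeValue = "T-D3000"
region.CodingSchemeDesignator = "SRT"
region.CodeMeaning = "Chest"
definition = Dataset()
definition.AnatomicRegionSequence = [region]
identifier.HangingProtocolDefinitionSequence = [definition]

# Describe the workstation environment: two 2Kx2.5K screens.
screen = Dataset()
screen.NumberOfVerticalPixels = 2560
screen.NumberOfHorizontalPixels = 2048
identifier.NumberOfScreens = 2
identifier.NominalScreenDefinitionSequence = [screen, screen]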
For Display Set Patient Orientation (0072,0700) with value "A\F", the application interpreting the Hanging Protocol will arrange sagittal images oriented with the patient's anterior toward the right side of the image box, and the patient's foot will be toward the bottom of the image box. An incoming sagittal MRI image as shown in Figure V.6-1 will require a horizontal flip before display in the image box.
The scenarios in which Digital Signatures would be used in DICOM Structured Reports include, but are not limited to, the following.
Case 1: Human Signed Report and Automatically Signed Evidence.
The archive, after receiving an MPPS complete and determining that it has the complete set of objects created during an acquisition procedure step, creates a signed Key Object Selection Document Instance with secure references to all of the DICOM composite objects that constitute the exam. The Document would include a Digital Signature according to the Basic SR Digital Signatures Secure Use Profile with the Digital Signature Purpose Code Sequence (0400,0401) of (14, ASTM-sigpurpose, "Source Signature"). It would set the Key Object Selection Document Title of that Instance to (113035, DCM, "Signed Complete Acquisition Content"). Note that the objects that are referenced in the MPPS may or may not have Digital Signatures. By creating the Key Object Selection Document Instance, the archive can in effect add the equivalent of Digital Signatures to the set of objects.
A post-processing system generates additional evidence objects, such as measurements or CAD reports, referring to objects in the exam. This post-processing system may or may not include Digital Signatures in the evidence objects, and these objects may or may not be included as secure references in a signed Key Object Selection Document.
Working at a reporting station, a report author gathers evidence from a variety of sources, including those referenced by the Key Object Selection Document Instance and the additional evidence objects generated by the post-processing system, and incorporates his or her own observations and conclusions into one or more reports.
It is desired that all evidence references from a DICOM SR be secure. The application creating the SR may:
create secure references by copying a verified Digital Signature from the referenced object, or by generating a MAC code directly from the referenced object (a rough illustration follows this list);
make a secure reference to a signed Key Object Selection Document that in turn securely references the SOP Instances; or
copy the secure reference information from a trusted Key Object Selection Document to avoid the overhead of recalculating the MAC codes or revalidating the referenced Digital Signatures.
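As a rough illustration of the MAC alternative in the first option, and only an illustration, since the normative MAC is calculated over selected Data Elements with algorithms negotiated per PS3.15 rather than over a raw file, a digest over a referenced object could be computed like this:

import hashlib

def reference_digest(path):
    """Illustrative only: a digest over the whole file. The actual
    Referenced SOP Instance MAC is calculated over the Data Elements
    selected per PS3.15, using the agreed MAC algorithm."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# A report creator could store such a value alongside the SOP Instance
# reference and recompute it later to detect modification.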
When the author completes a DICOM SR, the system, using the author's X.509 Digital Signature Certificate, generates a Digital Signature with the Digital Signature Purpose Code Sequence (0400,0401) of (1, ASTM-sigpurpose, "Author Signature") for the report.
The author's supervisor reviews the DICOM SR. If the supervisor approves of the report, the system sets the Verification Flag to "VERIFIED" and adds a Digital Signature with the Digital Signature Purpose Code Sequence (0400,0401) of (5, ASTM-sigpurpose, "Verification Signature") or (6, ASTM-sigpurpose, "Validation Signature") using the supervisor's X.509 certificate.
At some later time, someone who is reading the DICOM SR SOP Instance wishes to verify its authenticity. The system would verify that the Author Signature, as well as any Verification or Validation Signatures present, are intact (i.e., that the signed data has not been altered based on the recorded Digital Signatures, and that the X.509 Certificates were valid at the time that the report was created).
If the report reader wishes to inspect DICOM source materials referenced in a DICOM SR, the system can ensure that the materials have not been altered since the report was written by verifying the Referenced Digital Signatures or the Referenced SOP Instance MAC that the report creator generated from the referenced materials.
Case 2: Cross Enterprise Document Exchange
An application sends by any means a set of DICOM composite objects to an entity outside of the institutional environment (e.g., for review by a third party).
The application creates a signed Key Object Selection Document Instance with a Key Object Selection Document Title of (113031, DCM, "Signed Manifest") referencing the set of DICOM Data Objects that it sent outside the institutional environment, and sends that SR to the external entity as a shipping manifest.
The external entity may utilize the Key Object Selection SR SOP Instance to confirm that it received all of the referenced objects intact (i.e., without alterations). Because the signed Key Object Selection Instance must use secure references, it can verify that the objects have not been modified.
This Annex describes a use of Key Object Selection (KO) and Grayscale Softcopy Presentation State (GSPS) SOP Instances, in conjunction with a typical dictation/transcription process for creating an imaging clinical report. The result is a clinical report as a Basic Text Structured Report (SR) SOP Instance that includes annotated image references (see Section X.2). This report may also (or alternatively) be encoded as an HL7 Clinical Document Architecture (CDA) document (see Section X.3).
Similar but more complex processes that include, for instance, numeric measurements and Enhanced or Comprehensive SR, are not addressed by this Annex. This Annex also does not specifically address the special issues associated with reporting across multiple studies (e.g., the "grouped procedures" case).
During the softcopy reading of an imaging study, the physician dictates the report, which is sent to a transcription service or is processed by a voice recognition system. The transcribed dictation arrives at the report management system (typically a RIS) by some mechanism not specified here. The report management system enables the reporting physician to correct, verify, and "sign" the transcribed report. See Figure X.1-1. This data flow applies to reports stored in a proprietary format, reports stored as DICOM Basic Text SR SOP Instances, or reports stored as HL7 CDA instances.
The report management system has flexibility in encoding the report title. For example, it could be any of the following:
There are LOINC codes associated with each of these types of titles, if a coded title is used on the report (see CID 7000 “Diagnostic Imaging Report Document Titles”).
The transcribed dictation may be either a single text stream, or a series of text sections each with a title. Division of reports into a limited number of canonically named sections may be done by the transcriptionist, or automated division of typical free text reports may be possible with voice recognition or a natural language processing algorithm.
For an electronically stored report, the signing function may or may not involve a cryptographic digital signature; any such cryptographic signature is beyond the scope of this description.
To augment the basic dictation/transcription reporting use case, it is desired to select significant images to be attached (by reference) to the report. During the softcopy reading, the physician may select images from those displayed on his workstation (e.g., by a point-and-click function through the user interface). The selection of images is conveyed to the image repository (PACS) through a DICOM Key Object Selection (KO) document. When the report management system receives the transcribed dictation, it queries the image repository for any KO documents, and appends the image references from the KO to the transcription. In this process step, the report management system does not need to access the referenced images; it only needs to copy the references into the draft report. The correction and signature function potentially allows the physician to retrieve and view the referenced images, correct and change text, and to delete individual image references. See Figure X.1-2.
The transcribed dictation must have associated with it sufficient key attributes for the report management system to query for the appropriate KO documents in the image repository (e.g., Study ID, or Accession Number).
Each KO document in this process includes a specific title "For Report Attachment", a single optional descriptive text field, plus a list of image references using the SR Image Content Item format. The report management system may need to retrieve all KO documents of the study to find those with this title, since the image repository might not support the object title as a query return key.
Multiple KO instances may be created for a study report, e.g., to facilitate associating different descriptive text (included in the KO document) with different images or image sets. All KOs with the title "For Report Attachment" in the study are to be attached to the dictated report by copying their content into the draft report (see Section X.2 and Section X.3). (There may also be KOs with other titles, such as "For Teaching", that are not to be attached to the report.)
The nature of the image reference links will differ depending on the format of the report. A DICOM SR format report will use DICOM native references, and other formats may use a hyperlink to the referenced images using the Web Access to DICOM Persistent Objects (WADO) service (see PS3.18).
The KO also allows the referencing of a Grayscale Softcopy Presentation State (GSPS) instance for each selected image. A GSPS instance can be created by the workstation for annotation ("electronic grease pencil") of the selected image, as well as to set the window width/window level, rotation/flip, and/or display area selection of the image attached to the report. The created GSPS instances are transferred to the image repository (PACS) and are referenced in the KO document.
As with image references, the report management system may include the GSPS instance references in the report. When the report is subsequently displayed, the reader may retrieve the referenced images together with the referenced GSPS, so that the image is displayed with the annotations and other GSPS display controls. See Figure X.1-3.
Note that the GSPS display controls can also be included in WADO hyperlinks and invoked from non-DICOM display stations.
This section describes the use of transcribed dictation and Key Object Selection (KO) instances to produce a DICOM Basic Text SR instance. A specific SR Template, TID 2005 “Transcribed Diagnostic Imaging Report”, is defined to support transcribed diagnostic imaging reports created using this data flow.
The attributes of the Patient and Study Modules will be identical to those of the Study being reported. The following information is encoded in the SR Document General Module:
Identity of the dictating physician (observer context) in the Author Sequence
Identity of the transcriptionist or transcribing device (voice recognition) in the Participant Sequence
Identity of the report signing physician in the Verifying Observer Sequence
Identity of the institution owning the report in the Custodial Organization Sequence
Linkages to the order and requested procedures in the Referenced Request Sequence
A list of all images in the study in the Current Requested Procedure Evidence Sequence (from MPPS SOP Instances of the Study, or from query of the image repository)
A list of all images not in the study, but also attached to the report as referenced significant images, in the Pertinent Other Evidence Sequence
The transcribed dictation is used to populate one or more section containers in the content tree of the SR Instance. If the transcription consists of a single undifferentiated text stream, it will typically be encoded using a single CONTAINER content item with Concept Name "Findings", and the text encoded as the value in a subsidiary TEXT content item with Concept Name "Finding". When the transcription is differentiated into multiple sections with captions, e.g., using the concepts in CID 7001 “Diagnostic Imaging Report Headings”, each section may be encoded in a separate CONTAINER, with the concept from CID 7001 “Diagnostic Imaging Report Headings” as the container Concept Name, and the corresponding term from CID 7002 “Diagnostic Imaging Report Elements” as the Concept Name for a subsidiary TEXT content item. See Figure X.2-1.
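A sketch of the single-container case with pydicom follows; the Concept Name codes (121070, DCM, "Findings") and (121071, DCM, "Finding") are from the DICOM Controlled Terminology, and the enclosing SR Document modules are omitted.

from pydicom.dataset import Dataset

def code_item(value, designator, meaning):
    item = Dataset()
    item.CodeValue = value
    item.CodingSchemeDesignator = designator
    item.CodeMeaning = meaning
    return item

# Subsidiary TEXT content item holding the transcribed text.
text_item = Dataset()
text_item.RelationshipType = "CONTAINS"
text_item.ValueType = "TEXT"
text_item.ConceptNameCodeSequence = [code_item("121071", "DCM", "Finding")]
text_item.TextValue = "Transcribed dictation as a single text stream."

# Enclosing CONTAINER content item.
container = Dataset()
container.RelationshipType = "CONTAINS"
container.ValueType = "CONTAINER"
container.ContinuityOfContent = "SEPARATE"
container.ConceptNameCodeSequence = [code_item("121070", "DCM", "Findings")]
container.ContentSequence = [text_item]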
The content items from each associated KO object will be included in the SR in a separate CONTAINER with Concept Name (121180, DCM, "Key Images"). The text item "Key Object Description" and all image reference items shall be copied from the KO content tree to the corresponding SR container. See Figure X.2-2.
The KO and SR IMAGE content item format allows the encoding of an icon (image thumbnail) with the image reference, as well as a reference to a GSPS instance controlling image presentation. Whether or not to include icons or GSPS references is an implementation decision of the softcopy review station that creates the KO; the IMAGE content item as a whole may be simply copied by the report management system from the KO to the Basic Text SR instance.
The intended process is that all KOs "For Report Attachment" are to be automatically included in the draft report. Therefore, the correction and signature function of the report management system should allow the physician to delete image references that were included, perhaps unintentionally, by the automatic process.
This section describes the use of transcribed dictation and Key Object Selection (KO) documents to produce an HL7 Clinical Document Architecture (CDA) Release 2 document.
While this section describes encoding as CDA Release 2, notes are provided about encoding issues for CDA Release 1.
The header of the CDA instance includes:
Identity of the requested procedure ("documentationOf" act relationship)
Identity of the dictating physician ("author" participation)
Identity of the transcriptionist ("dataEnterer" participation)
Identity of the report signing physician ("legalAuthenticator" participation)
Identity of the institution owning the report ("custodian" participation)
Identity of the request/order ("inFulfillmentOf" act relationship)
Each transcription section can be encoded in a Section in the CDA document. The Section.Code and/or Section.Title can be derived from the corresponding transcription section title, if any. Although the transcription text can be encoded in the Section.Text without further markup, it is recommended that it be enclosed in <paragraph> tags.
Images are referenced using hypertext links in the narrative text. These links in CDA are not considered part of the attested content.
The primary use case for this Annex is the dictation/transcription reporting model. In the historical context of that model, the images (film sheets) are usually not considered part of the attested content of the report, although they are part of the complete exam record. I.e., the report is clinically complete without the images, and the referenced images are not formally part of the report. Therefore, this Annex discusses only the use of image references, not images embedded in the report.
Being part of the attested content would require the images to be displayed every time the report is displayed - i.e., they are integral to understanding the report. If the images are attested, they must also be encapsulated with the CDA package. I.e., the CDA document itself is only one part of the interchanged package; the referenced images must also always be sent with the CDA document. If the images are for reference only and not attested, the Image Content Item may be transformed to a simple hypertext link; it is then the responsibility of CDA document receiver to follow or not follow the hyperlink. Moreover, as the industry moves toward ubiquitous network access to a distributed electronic healthcare record, there will be less need to prepackage the referenced images with the report.
In the current use case, there will be one or more KO instances with image references. Each KO instance can be transformed to a Section in the CDA document with a Section.Title "Key Images", and a Section.Code of 121180 from the DICOM Controlled Terminology (see PS3.16). If the KO includes a TEXT content item, it can be transformed to <paragraph> data in the Section.Text of the CDA document. Each IMAGE content item can be transformed to a link item using the <linkHtml> markup.
Within the <linkHtml> markup, the value of the href attribute is the DICOM object reference as a Web Access to DICOM Persistent Objects (WADO) specified URI (see Table X.3-1).
When a DICOM object reference is included in an HL7 CDA document, it is presumed the recipient would not be a DICOM application; it would have access only to general Internet network protocols (and not the DICOM upper layer protocol), and would not be configured with the means to display a native DICOM image. Therefore, the recommended encoding of a DICOM Object Reference in the CDA narrative block <linkHtml> uses WADO for access by the HTTP/HTTPS network protocol (see PS3.18), using one of the formats broadly supported in Web browsers (image/jpeg or video/mpeg) as the requested content type.
In CDA Release 1, the markup tag for hyperlinks is <link_html> within the scope of a <link> tag.
Table X.3-1. WADO Reference in an HL7 CDA <linkHtml>
Literal strings are in normal typeface, while <italic typeface within angle brackets> indicates values to be copied from the identified source.
The default contentType for single frame images is image/jpeg, which does not need to be specified as a WADO component. However, the default contentType for multiple frame images is application/dicom, which needs to be overridden with the specific request for video/mpeg.
There is not yet a standard mechanism for minimizing the potential for staleness of the <scheme>://<authority>/<path> component.
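Following the conventions of Table X.3-1, a WADO reference might be assembled as in this sketch; the base URL and UIDs are hypothetical placeholders.

from urllib.parse import urlencode

def wado_uri(base, study_uid, series_uid, object_uid, multiframe=False):
    params = {
        "requestType": "WADO",
        "studyUID": study_uid,
        "seriesUID": series_uid,
        "objectUID": object_uid,
    }
    if multiframe:
        # The default contentType for multiple frame images is
        # application/dicom, so it is overridden explicitly (see note above).
        params["contentType"] = "video/mpeg"
    return base + "?" + urlencode(params)

# Hypothetical values:
print(wado_uri("https://pacs.example.org/wado",
               "1.2.3.4", "1.2.3.4.5", "1.2.3.4.5.6", multiframe=True))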
If the IMAGE content item includes an Icon Image Sequence, the report creation process may embed the icon in the Section.Text narrative. The Icon Image Sequence Pixel Data is converted into an image file, e.g., in JPEG or GIF format, and base64 encoded. The file is encoded in an ObservationMedia entry in the CDA instance, and a <renderMultimedia> tag reference to the entry is encoded in the Section.Text adjacent to the <linkHtml> of the image reference.
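A minimal sketch of that conversion, assuming an 8-bit grayscale icon and using the Pillow library to produce a GIF:

import base64
import io
from PIL import Image

def icon_to_base64_gif(pixel_bytes, rows, columns):
    """Convert Icon Image Sequence pixel data (assumed 8-bit grayscale)
    into a base64-encoded GIF suitable for a CDA ObservationMedia entry."""
    img = Image.frombytes("L", (columns, rows), pixel_bytes)
    buf = io.BytesIO()
    img.save(buf, format="GIF")
    return base64.b64encode(buf.getvalue()).decode("ascii")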
The Current Requested Procedure Evidence Sequence (0040,A375) of the KO instance lists all the SOP Instances referenced in the IMAGE content items in their hierarchical Study/Series/Instance context. It is recommended that this list be transcoded to CDA Entries in a Section with Section.Title "DICOM Object Catalog" and a Section.Code of 121181 from the DICOM Controlled Terminology (see PS3.16).
Since the image hypertext links in the Section narrative may refer to both an image and a softcopy presentation state, as well as possibly being constrained to specific frame numbers, in general there is not a simple mapping from the <linkHtml> to an entry. Therefore it is not expected that there would be ID reference links between the <linkHtml> and related entries.
The purpose of the Structured Entries is to allow DICOM-aware applications to access the referenced images in their hierarchical context.
The encoding of the DICOM Object References in CDA Entries is shown in Figure X.3-1 and Tables X.3-2 through X.3-6. All of the mandatory data elements for the Entries are available in the Current Requested Procedure Evidence Sequence; optional elements (e.g., instance DateTimes) may also be included if known by the encoding application.
The format of Figure X.3-1 follows the conventions of HL7 v3 Reference Information Model diagrams.
Table X.3-2. DICOM Study Reference in an HL7 V3 Act (CDA Act Entry)
Table X.3-3. DICOM Series Reference in an HL7 V3 Act (CDA Act Entry)
id: <Series Instance UID (0020,000E) as root property with no extension property>
code: <1.2.840.10008.2.16.4 as codeSystem property, DCM as codeSystemName property, "DICOM Series" as displayName property, Modality as qualifier property (see text and Table X.3-4)>
The code for the Act representing a Series uses a qualifier property to indicate the modality. The qualifier property is a list of coded name/value pairs. For this use, only a single list entry is used, as described in Table X.3-4.
Table X.3-4. Modality Qualifier for the Series Act.Code
name: <1.2.840.10008.2.16.4 as codeSystem property>
value: <Modality (0008,0060) as code property, 1.2.840.10008.2.16.4 as codeSystem property, DCM as codeSystemName property, Modality code meaning (from PS3.16) as displayName property>
Table X.3-5. DICOM Composite Object Reference in an HL7 V3 Act (CDA Observation Entry)
id: <SOP Instance UID (0008,0018) as root property with no extension property>
code: <SOP Class UID (0008,0016) as code property, 1.2.840.10008.2.6.1 as codeSystem property, DCMUID as codeSystemName property, SOP Class UID Name (from PS3.6) as displayName property>
text: <application/DICOM as mediaType property, WADO reference (see Table X.3-6) as reference property>
Table X.3-6. WADO Reference in an HL7 DGIMG Observation.Text
An application that receives a CDA with image references, and is capable of using the full services of DICOM upper layer protocol directly, can use the WADO parameters in either the linkHtml or in the DGIMG Observation.Text to retrieve the object using the DICOM network services. Such an application would need to be pre-configured with the hostname/IP address, TCP port, and AE Title of the DICOM object server (C-MOVE or C-GET SCP); this network address is not part of the WADO string. (Note that pre-configuration of this network address is typical for DICOM applications, and is facilitated by the LDAP-based DICOM Application Configuration Management Profile; see PS3.15.)
The application would open a Query/Retrieve Service Association with the configured server, and send a C-MOVE or C-GET command using the study, series, and object instance UIDs identified in the WADO query parameters. Such an application might also reasonably query the server for related objects, such as Grayscale Softcopy Presentation State.
When using the C-GET service, the retrieving application needs to specify and negotiate the SOP Class of the retrieved objects when it opens the Association. This information is not available in the linkHtml WADO reference; however, it is available in the DGIMG Observation.Code. It may also be obtained from the configured server using a C-FIND query on a prior Association.
The report may be created as both an SR instance and a CDA instance. In this case, the two instances are equivalent, and can cross-reference each other.
The CDA Document shall contain clinical content equivalent to the SR Document.
The HL7 CDA standard specifically addresses transformation of documents from a non-CDA format. The requirement in the CDA specification is: "A proper transformation must ensure that the human readable clinical content of the report is not impacted."
There is no requirement that the transform or transcoding between DICOM SR and HL7 CDA be reversible. In particular, some attributes of the DICOM Patient, Study, and Series IEs have no corresponding standard encoding in the HL7 CDA Header, and vice versa. Such data elements, if transcoded, may need to be encoded in "local markup" (in HL7 CDA) or private data elements (in DICOM SR) in an implementation-dependent manner; and some such data elements may not be transcoded at all. It is a responsibility of the transforming application to ensure clinical equivalence.
Many attributes of the SR Document General Module can be transcoded to CDA Header participations or related acts.
Due to the inherent differences between DICOM SR and HL7 CDA, a transcoded document will have a different UID than the source document. However, the SR Document may reference the CDA Document as equivalent using the Equivalent CDA Document Sequence (0040,A090) attribute, and the CDA Document may reference the SR Document with a relatedDocument act relationship.
Since the ParentDocument target of the relatedDocument relationship is constrained to be a simple DOCCLIN act, it is recommended that the reference to the DICOM SR be encoded per Table X.3-5, without explicit identification of the Study and Series Instance UIDs, and with classCode DOCCLIN (rather than DGIMG).
Digital projection X-ray images typically have a very high dynamic range due to the digital detector's performance. In order to display these images, various Values Of Interest (VOI) transformations can be applied to the images to facilitate diagnostic interpretation. The original description of the DICOM grayscale pipeline assumed that either the parameters of a linear LUT (window center and width) are used, or a static non-linear LUT is applied (VOI LUT).
Normally, a display application interprets the window center and width as parameters of a function following a linear law (see Figure Y-1).
A VOI LUT sequence can be provided to describe a non-linear LUT as a table of values, with the limitation that the parameters of this LUT cannot be adjusted subsequently, unless the application provides the ability either to scale the output of the LUT (and there is no way in DICOM to save such a change unless a new scaled LUT is built) or to fit a curve to the LUT data, which may then be difficult to parametrize or adjust, or may be a poor fit.
Digital X-ray applications all have their counterpart in conventional film/screen X-ray, and a critical requirement for such applications is an image "look" close to that of the film/screen applications. In the film/screen world the image dynamics are mainly driven by the H-D curve of the film, which is the plot of the resulting optical density (OD) of the film against the logarithm of the exposure. The typical appearance of an H-D curve is illustrated in Figure Y-2.
In digital applications, a straightforward way to mock up a film-like look would be to use a VOI LUT that has a similar shape to an H-D curve, namely a toe, a linear part and a shoulder instead of a linear ramp.
While such a curve could be encoded as data within a VOI LUT, DICOM defines an alternative that interprets the existing window center and width parameters as the parameters of a non-linear function.
Figure Y-3 illustrates the shape of a typical sigmoid as well as the graphical interpretation of the two LUT parameters, window center and window width. This figure corresponds to the equation defined in PS3.3 when the value of VOI LUT Function (0028,1056) is SIGMOID.
If a receiving display application does not support the SIGMOID VOI LUT Function, then it can successfully apply the same window center and window width parameters to a linear ramp and achieve acceptable results, specifically a similar perceived contrast but without the roll-off at the shoulder and toe.
A receiving display application that does support such a function is then able to allow the user to adjust the window center and window width with a more acceptable resulting appearance.
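A minimal sketch of the two functions, using the equations from PS3.3 with the output range taken as 0..255 for illustration:

import math

def sigmoid_voi(x, center, width, y_min=0.0, y_max=255.0):
    """SIGMOID VOI LUT Function per PS3.3:
    y = y_min + (y_max - y_min) / (1 + exp(-4*(x - center)/width))"""
    return y_min + (y_max - y_min) / (1.0 + math.exp(-4.0 * (x - center) / width))

def linear_voi(x, center, width, y_min=0.0, y_max=255.0):
    """Classical LINEAR window per PS3.3, for comparison: a clipped ramp
    with no roll-off at the shoulder and toe."""
    if x <= center - 0.5 - (width - 1.0) / 2.0:
        return y_min
    if x > center - 0.5 + (width - 1.0) / 2.0:
        return y_max
    return ((x - (center - 0.5)) / (width - 1.0) + 0.5) * (y_max - y_min) + y_min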
The Isocenter Reference System Attributes describe the 3D geometry of X-Ray equipment composed of an X-Ray positioner and an X-Ray table.
These attributes define three coordinate systems in 3D space: the positioner coordinate system, the table coordinate system, and the Isocenter coordinate system.
The Isocenter Reference System attributes describe the relationship between the 3D coordinates of a point in the table coordinate system and the 3D coordinates of such point in the positioner coordinate system (both systems moving in the equipment), by using the Isocenter coordinate system that is fixed in the equipment.
Any point of the Positioner coordinate system (PXp, PYp, PZp) can be expressed in the Isocenter coordinate system (PX, PY, PZ) by applying the following transformation:
And inversely, any point of the Isocenter coordinate system (PX, PY, PZ) can be expressed in the Positioner coordinate system (PXp, PYp, PZp) by applying the following transformation:
Where R1, R2 and R3 are defined as follows:
Any point of the table coordinate system (PXt, PYt, PZt) (see Figure Z-1) can be expressed in the Isocenter Reference coordinate system (PX, PY, PZ) by applying the following transformation:
And inversely, any point of the Isocenter coordinate system (PX, PY, PZ) can be expressed in the table coordinate system (PXt, PYt, PZt) by applying the following transformation:
Where R1, R2 and R3 are defined as follows:
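The matrices themselves are defined in PS3.3 and are not reproduced here. Purely as an illustration of the forward and inverse mappings, and assuming R1, R2 and R3 are orthonormal rotation matrices (so that the inverse of their product is its transpose), the transformations could be sketched as:

import numpy as np

def to_isocenter(p_local, r1, r2, r3):
    """Express a point of a moving (positioner or table) system in the
    Isocenter coordinate system. The composition order shown is
    illustrative; the exact matrices and order are defined in PS3.3."""
    r = r3 @ r2 @ r1
    return r @ np.asarray(p_local)

def from_isocenter(p_iso, r1, r2, r3):
    """Inverse mapping: for orthonormal rotation matrices, the inverse
    of the composed rotation is its transpose."""
    r = r3 @ r2 @ r1
    return r.T @ np.asarray(p_iso)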
This Annex describes the use of the X-Ray Radiation Dose SR Object. Multiple systems contributing to patient care during a visit may expose the patient to irradiation during diagnostic and/or interventional procedures. Each of these systems may record the dose in an X-Ray Radiation Dose information object. Radiation safety information reporting systems may take advantage of this information and create dose reports for a visit, for parts of a procedure performed, or as an accumulation for the patient in total, provided the information is completely available as structured content.
An irradiation event is the loading of X-Ray equipment caused by a single continuous actuation of the equipment's irradiation switch, from the start of the loading time of the first pulse until the trailing edge of the loading time of the final pulse. The irradiation event is the "smallest" information entity to be recorded in the realm of Radiation Dose reporting. Individual irradiation events are described by a set of accompanying physical parameters that are sufficient to understand the "quality" of the irradiation being applied. This set of parameters may differ for the various types of equipment that are able to create irradiation events. Any on-off switching of the irradiation source during the event does not constitute separate events; rather, the event includes the time between the start and stop of irradiation as triggered by the user. For example, a pulsed fluoro X-Ray acquisition is treated as a single irradiation event.
Irradiation events include all exposures performed on X-Ray equipment, independent of whether a DICOM Image Object is being created. That is why an irradiation event needs to be described with sufficient attributes to exchange the physical nature of the irradiation applied.
Accumulated Dose Values describe the integrated results of performing multiple irradiation events. The scope of accumulation is typically a study or a performed procedure step. Multiple Radiation Dose objects may be created for one Study or one Radiation Dose object may be created for multiple performed procedures.
The following use cases illustrate the information flow between participating roles and the possible capabilities of the equipment that is performing in those roles. Each case will include a use case diagram and denote the integration requirements. The diagrams will denote actors (persons in role or other systems involved in the process of data handling and/or storage). Furthermore, in certain cases it is assumed that the equipment (e.g., Acquisition Modality) is capable of displaying the contents of any dose reports it creates.
These use cases are only examples of possible uses for the Dose Report, and are by no means exhaustive.
This is the basic use case for electronic dose reporting. See Figure AA.3-1.
In this use case the user sets up the Acquisition Modality, and performs the study. The Modality captures the irradiation event exposure information, and encodes it together with the accumulated values in a Dose Report. The Modality may allow the user to review the dose report, and to add comments. The acquired images and Dose Report are sent to a Long-Term Storage system (e.g., PACS) that is capable of storing Dose Report objects.
A Display Station may retrieve the Dose Report from the Storage system, and display it. Because the X-Ray Radiation Dose SR object is a proper subset of the Enhanced SR object, the Display Station may render it using the same functionality as used for displaying any Enhanced SR object.
The Dose Report may also be used for image acquisitions using non-digital Acquisition Modalities. See Figure AA.3-2.
In this use case the user may manually enter the irradiation event exposure information into a Dose Reporting Station, possibly transcribing it from a dosimeter read-out display. The station encodes the data in a Dose Report and sends it to a Storage system. The same Dose Reporting Station may be used to support several acquisition modalities.
This case may be useful in film-only radiography environments, or in mixed film and digital environments, where the DICOM X-Ray Radiation Dose SR Object provides a standard format for recording and storing irradiation events.
Note that in a non-PACS environment, the Dose Reports may be sent to a Long-Term Storage function built into a Radiation Safety workstation or information system.
A specialized Radiation Safety workstation may contribute to the process of dose reporting in terms of more elaborate calculations or graphical dose data displays, or by aggregating dose data over multiple studies. See Figure AA.3-3. The Radiation Safety workstation may or may not be integrated with the Long-Term Storage function in a single system; such application entity architectural decisions are outside the scope of DICOM, but DICOM services and information objects do facilitate a variety of possible architectures.
The Radiation Safety workstation may be able to create specific reports to respond to dose registry requirements, as established by local regulatory authorities. These reports would generally not be in DICOM format, but would be generated from the data in DICOM X-Ray Radiation Dose SR objects.
The Radiation Safety workstation may also be used to generate more elaborate reports on patient applied dose. The workstation may retrieve the Dose Reports for multiple procedures performed on a particular patient. A report of the cumulative dose for a specified time period, or for a visit/admission, may be generated, encoded as a DICOM Dose Report, and stored in the Long-Term Storage system. Any such further reports will be stored in addition to the "basic report".
Note that such cumulative Dose Reports may describe irradiation events that are also documented in other Dose Reports. The assignment of a UID to each irradiation event allows the application to identify unique irradiation events that may be reported in multiple objects. The structure of the X-Ray Radiation Dose SR object also allows a cumulative report to reference the contributing report objects using the Predecessor Documents Sequence (0040,A360) attribute.
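As a sketch of how a consumer might exploit these UIDs when aggregating across overlapping Dose Reports (extraction of the per-event data from the SR content trees is assumed to have been done already):

def aggregate_events(dose_reports):
    """dose_reports: iterable of lists of (irradiation_event_uid, dose)
    tuples extracted from X-Ray Radiation Dose SR objects. Events that
    appear in several reports are counted only once, keyed by UID."""
    seen = {}
    for report in dose_reports:
        for event_uid, dose in report:
            seen.setdefault(event_uid, dose)
    return sum(seen.values())

# Event "1.2.3.1" appears in both a per-study report and a cumulative
# report, but contributes to the total only once.
total = aggregate_events([[("1.2.3.1", 0.4)],
                          [("1.2.3.1", 0.4), ("1.2.3.2", 0.7)]])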
An advanced application may be able to use the Dose Report data, potentially supplemented by the data in the image objects referenced in the Dose Report, to create a Dose Map that visualizes applied dose. Such a Dose Map may be sent to the Long-Term Storage system using an appropriate object format.
Other purposes of the Radiation Safety workstation may include statistical analyses over all Dose Report Objects in order to gain information for educational or quality control purposes. This may include searches for Reports performed in certain time ranges, or with specific equipment, or using certain protocols.
This example of a Print Management SCU Session is provided for informational purposes only. It illustrates the use of one of the Basic Print Management Meta SOP Classes.
Example BB.1-1. Simple Example of Print Management SCU Session
This Section and its sub-sections contain examples of ways in which the Storage Commitment Service Class could be used. This is not meant to be an exhaustive set of scenarios but rather a set of examples.
Figure CC.1-1 is an example of the use of the Storage Commitment Push Model SOP Class.
Node A (an SCU) uses the services of the Storage Service Class to transmit one or more SOP Instances to Node B (1). Node A then issues an N-ACTION to Node B (an SCP) containing a list of references to SOP Instances, requesting that the SCP take responsibility for storage commitment of the SOP Instances (2). If the SCP has determined that all SOP Instances exist and that it has successfully completed storage commitment for the set of SOP Instances, it issues an N-EVENT-REPORT with the status successful (3) and a list of the stored SOP Instances. Node A now knows that Node B has accepted the commitment to store the SOP Instances. Node A might decide that it is now appropriate for it to delete its copies of the SOP Instances. The N-EVENT-REPORT may or may not occur on the same Association as the N-ACTION.
If the SCP determines that committed storage cannot, for some reason, be provided for one or more SOP Instances referenced by the N-ACTION request, then instead of reporting success it would issue an N-EVENT-REPORT with a status of complete - failures exist. With the N-EVENT-REPORT it would include a list of the SOP Instances that were successfully stored and also a list of the SOP Instances for which storage failed.
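The Action Information carried by the N-ACTION in step (2) of Figure CC.1-1 might be assembled as in the following pydicom sketch; the referenced UIDs are placeholders, and transmitting the request over an association (e.g., with pynetdicom) is omitted.

from pydicom.dataset import Dataset
from pydicom.uid import generate_uid

# One referenced SOP Instance previously sent via the Storage Service.
ref = Dataset()
ref.ReferencedSOPClassUID = "1.2.840.10008.5.1.4.1.1.2"  # CT Image Storage
ref.ReferencedSOPInstanceUID = "1.2.3.4.5.6.7"           # placeholder

# Action Information for Action Type ID 1 (Request Storage Commitment).
action_info = Dataset()
action_info.TransactionUID = generate_uid()  # identifies this request
action_info.ReferencedSOPSequence = [ref]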
Figure CC.1-3 explains the use of the Retrieve AE Title. Using the push model a set of SOP Instances will be transferred from the SCU to the SCP. The SCP may decide to store the data locally or, alternatively, may decide to store the data at a remote location. This example illustrates how to handle the latter case.
Node A, an SCU of the Storage Commitment Push Model SOP Class, informs Node B, an SCP of the corresponding SOP Class, of its wish for storage commitment by issuing an N-ACTION containing a list of references to SOP Instances (1). The SOP Instances will already have been transferred from Node A to Node B (Push Model) (2). If the SCP has determined that storage commitment has been achieved for all SOP Instances at Node C specified in the original Storage Commitment Request (from Node A), it issues an N-EVENT-REPORT (3) as in the previous examples. However, to inform the SCU about the address of the location at which the data will be stored, the SCP includes in the N-EVENT-REPORT the Application Entity Title of Node C.
The Retrieve AE Title can be included in the N-EVENT-REPORT at two different levels. If all the SOP Instances in question were stored at Node C, a single Retrieve AE Title could be used for the whole data set. However, the SCP could also choose not to store all the SOP Instances at the same location. In this case the Retrieve AE Title Attribute must be provided at the level of each single SOP Instance in the Referenced SOP Instance Sequence.
This example also applies to the situation where the SCP decides to store the SOP Instances on Storage Media. Instead of providing the Retrieve AE Title, the SCP will then provide a pair of Storage Media File-Set ID and UID.
Figure CC.1-4 is an example of how to use the Push Model with Storage Media to perform the actual transfer of the SOP Instances.
Node A (an SCU) starts out by transferring the SOP Instances for which committed storage is required to Node B (an SCP) by off-line means on some kind of Storage Media (1). When the data is believed to have arrived at Node B, Node A can issue an N-ACTION to Node B containing a list of references to the SOP Instances contained on the Storage Media, requesting that the SCP perform storage commitment of these SOP Instances (2). If the SCP has determined that all the referenced SOP Instances exist (they may already have been loaded into the system or they may still reside on the Storage Media) and that it has successfully completed storage commitment for the SOP Instances, it issues an N-EVENT-REPORT with the status successful (3) and a list of the stored SOP Instances, as in the previous examples.
If the Storage Media has not yet arrived, or if the SCP determines that committed storage cannot, for some other reason, be provided for one or more SOP Instances referenced by the N-ACTION request, it would issue an N-EVENT-REPORT with a status of complete - failures exist. With the N-EVENT-REPORT it would include a list of the SOP Instances that were successfully stored and also a list of the SOP Instances for which storage failed. The SCP is not required to wait for the Storage Media to arrive (though it may choose to wait) but is free to reject the Storage Commitment request immediately. If so, the SCU may decide to reissue another N-ACTION at a later point in time.
These typical examples of Modality Worklists are provided for informational purposes only.
A Worklist consisting of Scheduled Procedure Step entities that have been scheduled for a certain time period (e.g., "August 9, 1995") and for a certain Scheduled Station AE Title (namely the modality where the Scheduled Procedure Step is going to be performed). See Figure DD.1-1; a query sketch for this scenario appears after the last of these examples.
A Worklist consisting of the Scheduled Procedure Step entities that have been scheduled for a certain time period (e.g., "August 9, 1995") and for a certain Modality type (e.g., CT machines). This is a scenario where scheduling is related to a pool of modality resources rather than to a single resource.
A Worklist consisting of the Scheduled Procedure Step entities that have been scheduled for a certain time period (e.g., "August 9, 1995") and for a certain Scheduled Performing Physician. This is a scenario where scheduling is related to human resources rather than to equipment resources.
A Worklist consisting of a single Scheduled Procedure Step entity that has been scheduled for a specific Patient. In this scenario, the selection of the Scheduled Procedure Step was done beforehand at the modality. The rationale to retrieve this specific worklist is to convey the most accurate and up-to-date information from the IS, right before the Procedure Step is performed.
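The first scenario above might be expressed as a C-FIND Identifier along the lines of this pydicom sketch; matching keys only, the AE Title is hypothetical, the date is the example's, and the set of return keys is abbreviated.

from pydicom.dataset import Dataset

# Scheduled Procedure Step level matching keys.
sps = Dataset()
sps.ScheduledStationAETitle = "CT_ROOM_1"         # hypothetical AE Title
sps.ScheduledProcedureStepStartDate = "19950809"  # "August 9, 1995"
sps.Modality = ""                                 # return key

identifier = Dataset()
identifier.ScheduledProcedureStepSequence = [sps]
# Higher-level return keys.
identifier.PatientName = ""
identifier.PatientID = ""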
The Modality Worklist SOP Class User may retrieve additional Attributes. This may be achieved by Services outside the scope of the Modality Worklist SOP Class.
The following is a simple and non-comprehensive example of a C-FIND Request for the Relevant Patient Information Query Service Class, specifically for the Breast Imaging Relevant Patient Information Query SOP Class, requesting a specific Patient ID, and requiring that any matching response be structured in the form of TID 9000 “Relevant Patient Information for Breast Imaging”.
The following is a simple and non-comprehensive example of a C-FIND Response for the Relevant Patient Information Query Service Class, answering the C-FIND Request listed above, and structured in the form of TID 9000 “Relevant Patient Information for Breast Imaging” as required by the Affected SOP Class.
The following is a simple, non-comprehensive illustration of a report for a morphological examination with stenosis findings.
Example FF.3-1. Presentation of Report Example #1
The JPIP Referenced Pixel Data transfer syntaxes allow transfer of image objects with a reference to a non-DICOM network service that provides the pixel data rather than encoding the pixel data in (7FE0,0010).
The use cases for this extension to the standard relate to an application's desire to gain access to a portion of DICOM pixel data without the need to wait for reception of all the pixel data. Examples are:
Stack Navigation of a large CT Study.
In this case, it is desirable to quickly scroll through this large set of data at a lower resolution; once the anatomy of interest is located, the full resolution data is presented. Initially, lower resolution images are requested from the server for the purpose of stack navigation. Once a specific image is identified, the system requests the rest of the detail from the server.
Large Single Image Navigation.
In cases such as microscopy, very large images may be generated. It is undesirable to wait for the complete pixel data to be loaded when only a small portion of the specific image is of interest. Additionally, this large image may exceed the display capabilities thus resulting in a decimation of the image when displayed. A lower resolution image (i.e., one that matches the resolution of the display) is all that is required, as additional data cannot be fully rendered. Once an area of interest is determined, the application can pan and zoom to this area and request additional detail to fill the screen resolution.
It is desirable to generate thumbnail representations for a study. This has been accomplished through various means, many of which require the client to receive the complete pixel data from the server to generate the thumbnail image. This uses significant network bandwidth.
The thumbnails can be considered low-resolution representations of the image. The application can request a low-resolution representation of the image for use as a thumbnail.
Multi-frame images may encode multiple dimensions. It is desirable for an application to access only the specific frames of interest in a particular dimension without the need to receive the complete pixel data set. By using the multi-dimensional description, applications using the JPIP protocol may request frames of the multi-frame image.
The association negotiation between the initiator and acceptor controls when this method of transfer is used. An acceptor can potentially accept both the JPIP Referenced Pixel Data transfer syntax and a non-JPIP transfer syntax on different presentation contexts. When an acceptor accepts both of these transfer syntaxes, the initiator chooses the presentation context.
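Sketched with the open-source pynetdicom library, an initiator could propose both alternatives as separate presentation contexts; the UIDs are written out for clarity, and the acceptor's address is hypothetical.

from pynetdicom import AE

CT_IMAGE_STORAGE = "1.2.840.10008.5.1.4.1.1.2"
JPIP_REFERENCED = "1.2.840.10008.1.2.4.94"  # JPIP Referenced Transfer Syntax
EXPLICIT_VR_LE = "1.2.840.10008.1.2.1"      # a non-JPIP alternative

ae = AE(ae_title="STORESCU")
# One presentation context per transfer syntax alternative.
ae.add_requested_context(CT_IMAGE_STORAGE, [JPIP_REFERENCED])
ae.add_requested_context(CT_IMAGE_STORAGE, [EXPLICIT_VR_LE])

assoc = ae.associate("pacs.example.org", 11112)  # hypothetical acceptor
if assoc.is_established:
    # The contexts the acceptor agreed to determine which syntax may be
    # used for the C-STORE operations.
    assoc.release()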
AE1 and AE2 each support both a JPIP Referenced Pixel Data Transfer Syntax and a non-JPIP Transfer Syntax
AE2 proposes two presentation contexts to AE1, one with a JPIP Referenced Pixel Data Transfer Syntax and the other with a non-JPIP Transfer Syntax
AE2 may choose either presentation context to send the object
AE1 must be able either to receive the pixel data in the C-STORE message or to obtain it from the provider URL
AE1 supports only the JPIP Referenced Pixel Data Transfer Syntax
AE2 supports both a JPIP Referenced Pixel Data Transfer Syntax and a non-JPIP Transfer Syntax
AE1 accepts only the presentation context with the JPIP Referenced Pixel Data Transfer Syntax, or only the JPIP Referenced Pixel Data Transfer Syntax within the single presentation context proposed
AE2 sends the object with the JPIP Referenced Pixel Data Transfer Syntax
AE1 must be able to retrieve the pixel data from the provider URL
AE1 and AE2 each support both a JPIP Referenced Pixel Data Transfer Syntax and a non-JPIP Transfer Syntax
In addition to the C-GET presentation context, AE2 proposes to AE1 two presentation contexts for storage sub-operations, one with a JPIP Referenced Pixel Data Transfer Syntax and the other with a non-JPIP Transfer Syntax
AE2 may choose either presentation context to send the object
AE1 must be able either to receive the pixel data in the C-STORE message or to obtain it from the provider URL
AE1 supports only the JPIP Referenced Pixel Data Transfer Syntax
AE2 supports both a JPIP Referenced Pixel Data Transfer Syntax and a non-JPIP Transfer Syntax
In addition to the C-GET presentation context, AE2 proposes to AE1 a single presentation context for storage sub-operations with a JPIP Referenced Pixel Data Transfer Syntax
AE2 sends the object with the JPIP Referenced Pixel Data Transfer Syntax
AE1 must be able to retrieve the pixel data from the provider URL
Figure HH-1 depicts an example of how the data is organized within an instance of the Segmentation IOD. Each item in the Segment Sequence provides the attributes of a segment. The source image used in all segmentations is referenced in the Shared Functional Groups Sequence. Each item of the Per-frame Functional Groups Sequence maps a frame to a segment. The Pixel Data classifies the corresponding pixels/voxels of the source Image.
Bar coding or RFID tagging of contrast agents, drugs, and devices can facilitate the provision of critical information to the imaging modality, such as the active ingredient, concentration, etc. The Product Characteristics Query SOP Class allows a modality to submit the product bar code (or RFID tag) to an SCP to look up the product type, active substance, size/quantity, or other parameters of the product.
This product information can be included in appropriate attributes of the Contrast/Bolus, Device, or Intervention Modules of the Composite SOP Instances created by the modality. The product information then provides key acquisition context data necessary for the proper interpretation of the SOP Instances.
This annex provides informative guidance on mapping from the Product Characteristics Module attributes of the Product Characteristics Query to the attributes of Composite IODs included in several Modules.
Within this section, if no Product Characteristics Module source for the attribute value is provided, the modality would need to provide local data entry or user selection from a pick list to fill in appropriate values. Some values may need to be calculated based on user-performed dilution of the product at the time of administration.
Table II-1. Contrast/Bolus Module Attribute Mapping
The mapping draws on the following Product Characteristics Module sources:
Product Type Code Sequence (0044,0007) 'Code Sequence Macro', via the Code Sequence Macro of Table 8.8-1 “Code Sequence Macro Attributes” in PS3.3
Product Parameter Sequence (0044,0013) > Numeric Value (0040,A30A), if contrast is administered without dilution and using the full contents of the dispensed product
Product Parameter Sequence (0044,0013) > Numeric Value (0040,A30A), if contrast is administered using the full contents of the dispensed product
Product Parameter Sequence (0044,0013) > Concept Code Sequence (0040,A168) > Code Meaning (0008,0104)
Product Parameter Sequence (0044,0013) > Numeric Value (0040,A30A), if contrast is administered without dilution
Table II-2. Enhanced Contrast/Bolus Module Attribute Mapping
The mapping draws on the following Product Characteristics Module sources:
Product Type Code Sequence (0044,0007) 'Code Sequence Macro', via the Code Sequence Macro of Table 8.8-1 “Code Sequence Macro Attributes” in PS3.3
Product Parameter Sequence (0044,0013) > Concept Code Sequence (0040,A168)
Product Parameter Sequence (0044,0013) > Numeric Value (0040,A30A), if contrast is administered without dilution and using the full contents of the dispensed product
Product Parameter Sequence (0044,0013) > Numeric Value (0040,A30A), if contrast is administered without dilution
Product Parameter Sequence (0044,0013) > Concept Code Sequence (0040,A168) > Code Meaning (0008,0104)
Table II-3. Device Module Attribute Mapping
The mapping draws on the following Product Characteristics Module sources:
Product Type Code Sequence (0044,0007) 'Code Sequence Macro', via the Code Sequence Macro of Table 8.8-1 “Code Sequence Macro Attributes” in PS3.3
Product Parameter Sequence (0044,0013) > Numeric Value (0040,A30A), for several numeric device attributes
Product Parameter Sequence (0044,0013) > Measurement Units Code Sequence (0040,08EA) > Code Meaning (0008,0104)
Product Name (0044,0008) and/or Product Description (0044,0009)
Table II-4. Intervention Module Attribute Mapping
The mapping draws on the following Product Characteristics Module sources:
Product Type Code Sequence (0044,0007) 'Code Sequence Macro', via the Code Sequence Macro of Table 8.8-1 “Code Sequence Macro Attributes” in PS3.3
For a general introduction to the underlying principles used in the Section C.27.1 “Surface Mesh Module” in PS3.3, see:
Foley, van Dam, et al., Computer Graphics: Principles and Practice, Second Edition, Addison-Wesley, 1990.
The dimensionality of the Vectors Macro (Section C.27.3 in PS3.3) is not restricted, in order to accommodate broader use of this macro in the future. Usage beyond 3-dimensional Euclidean geometry is possible. The Vectors Macro may be used to represent any multi-dimensional numerical entity, such as a set of parameters that are assigned to a voxel in an image or a primitive in a surface mesh.
In electroanatomical mapping, one or more tracked catheters are used to sample the electrophysiological parameters of the inner surface of the heart. Using magnetic tracking information, a set of vertices is generated according to the positions the catheter was moved to during the examination. In addition to its 3D spatial position, each vertex carries a 7-dimensional vector containing the time at which it was measured, the direction in which the catheter pointed, the maximal potential measured at that point, the duration of that potential, and the point in time (relative to the heart cycle) at which the potential was measured.
For biomechanical simulation, the mechanical properties of a vertex or voxel can be represented with an n-dimensional vector.
The following example demonstrates the usage of the Surface Mesh Module for a tetrahedron.
Surface points: 4 triplets; the points are marked a, b, c, d in Figure JJ.2-1.
Triangle primitives: the second triangle is the one marked green in Figure JJ.2-1.
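As a minimal sketch, the tetrahedron could be encoded as follows, assuming the pydicom library. The coordinate values and the byte packing are illustrative assumptions; the Triangle Point Index List (0066,0023) is shown in its 16-bit OW form (current editions define a 32-bit OL replacement).

# Illustrative sketch with pydicom; coordinates and packing are assumptions.
import struct
from pydicom.dataset import Dataset
from pydicom.sequence import Sequence

# Four vertices a, b, c, d as (x, y, z) triplets in mm.
points = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 10.0, 0.0), (0.0, 0.0, 10.0)]
# Four triangular faces, as 1-based point indices.
faces = [(1, 2, 3), (1, 2, 4), (1, 3, 4), (2, 3, 4)]

surface = Dataset()
surface.SurfaceNumber = 1
surface.FiniteVolume = "YES"   # a closed tetrahedron encloses a finite volume
surface.Manifold = "YES"

pts = Dataset()
pts.NumberOfSurfacePoints = len(points)
# Point Coordinates Data (0066,0016), VR OF: little-endian 32-bit floats.
pts.PointCoordinatesData = struct.pack("<12f", *(c for p in points for c in p))
surface.SurfacePointsSequence = Sequence([pts])

prim = Dataset()
# Triangle Point Index List: three 1-based point indices per triangle.
prim.TrianglePointIndexList = struct.pack("<12H", *(i for f in faces for i in f))
surface.SurfaceMeshPrimitivesSequence = Sequence([prim])

mesh = Dataset()
mesh.NumberOfSurfaces = 1
mesh.SurfaceSequence = Sequence([surface])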
The use cases fall into five broad groups:
A referring physician receives radiological diagnostic reports on CT or MRI examinations. These reports contain references to specific images. He chooses to review these specific images himself and/or show them to the patient. The references in the report point to particular slices. If the slices are individual images, then they may be obtained individually. If the slices are part of an enhanced multi-frame CT/MR object, then retrieval of the whole multi-frame object might take too long. The Composite Instance Root Retrieve Service allows retrieval of only the selected frames.
The source of the image and frame references in the report could be KOS, CDA, SR, presentation states or other sources.
Selective retrieval can also be used to retrieve 2 or more arbitrary frames, as may be used for digital subtraction (masking), and may be used with any multi-frame objects, including multi-frame ultrasound, XR etc.
Features of interest in many long "video" examinations (e.g., endoscopy) are commonly referenced as times from the start of the examination. The same benefits of reduced WAN bandwidth use could be obtained by shortening the MPEG-2, MPEG-4 AVC/H.264, HEVC/H.265 or JPEG 2000 Part 2 Multi-component based stream prior to transmission.
There are times when it would be useful to retrieve from a multi-frame image only those frames satisfying certain dimensionality criteria, such as those CT slices fitting within a chosen volume. Initial retrieval of the image using the Composite Instance Retrieve Without Bulk Data Retrieve Service allows determination and retrieval of a suitable sub-set of frames.
Given the massively enhanced amount of dimensional information in the new CT/MR objects, applications could be developed that would use this for statistical purposes without needing to fetch the whole (correspondingly large) pixel data set. The Composite Instance Retrieve Without Bulk Data Retrieve Service permits this.
There are many modules in DICOM that use the Image SOP Instance Reference Macro (Table 10-3 “Image SOP Instance Reference Macro Attributes” in PS3.3 ), which includes the SOP Instance UID and SOP class UID, but not the Series Instance UID and Study Instance UID. Using the Composite Instance Root Retrieval Classes however, retrieval of such instances is simple, as a direct retrieval may be requested, including only the SOP Instance UID in the Identifier of the C-GET request.
Where the frames to be retrieved and viewed are known in advance, e.g., when they are referenced by an Image Reference Macro in a structured report, then they may be retrieved directly using either of the Composite Instance Root Retrieval Classes.
If the image has been stored in MPEG-2, MPEG-4 AVC/H.264 or HEVC/H.265 format, and if the SCU has knowledge independent of DICOM as to which section of a "video" is required for viewing (e.g., from endoscopy notes), then the SCU can perform the following steps (a minimal example of such a request Identifier is sketched after the list):
Use known configuration information to identify the available transfer syntaxes.
If MPEG-2, MPEG-4 AVC/H.264, HEVC/H.265 or JPEG 2000 Part 2 Multi-component transfer syntaxes are available, then issue a request to retrieve the required section.
The data received may be slightly longer than that requested, depending on the position of key frames in the data.
If only other transfer syntaxes are available, then the SCU may need to retrieve most of the object using Composite Instance Retrieve Without Bulk Data Retrieve Service to find the frame rate or frame time vector, and then calculate a list of frames to retrieve as in the previous sections.
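A minimal sketch of a FRAME-level request Identifier for the Composite Instance Root Retrieve Service, assuming the pydicom library; the SOP Instance UID and frame numbers are placeholders.

# Illustrative sketch with pydicom; UID and frame numbers are placeholders.
from pydicom.dataset import Dataset

identifier = Dataset()
identifier.QueryRetrieveLevel = "FRAME"
# At the Composite Instance Root, only the SOP Instance UID is needed to
# address the instance; Study and Series UIDs are not required.
identifier.SOPInstanceUID = "1.2.3.4.5.6.7.8.9"
# Simple Frame List (0008,1161): the explicit frames to retrieve, e.g., two
# arbitrary frames for digital subtraction (masking).
identifier.SimpleFrameList = [1, 47]
# This Identifier would be sent in a C-GET request on an association that
# negotiated the Composite Instance Root Retrieve - GET SOP Class.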
The purpose of this annex is to aid those developing SCPs of the Composite Instance Root Retrieve Service Class. The behavior of the application when making any of the changes discussed in this annex should be documented in the conformance statement of the application.
There are many different aspects to consider when extracting frames to make a new object, to ensure that the new image remains a fully valid SOP Instance; the following is a non-exhaustive list of important issues:
Any attributes that refer to start and end times such as Acquisition Time (0008,0032) and Content Time (0008,0033) must be updated to reflect the new start time if the first frame is not the same as the original. This is typically the case where the multi-frame object is a "video" and where the first frame is not included. Likewise, Image Trigger Delay (0018,1067) may need to be updated.
The Frame Time (0018,1063) may need to be modified if frames in the new image are not a simple contiguous sequence from the original, and if they are irregular, then the Frame Time Vector (0018,1065) will need to be used in its place, with a corresponding change to the Frame Increment Pointer (0028,0009). This also needs careful consideration if non-consecutive frames are requested from an image with non-linearly spaced frames.
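For example, extracting frames 1, 3 and 4 from a 40 ms/frame acquisition yields a new Frame Time Vector of [0, 80, 40]. The following is a minimal sketch of that recalculation (the helper function is illustrative, not defined by the Standard):

# Recompute per-frame timing for a non-contiguous subset of frames.
def new_frame_time_vector(original_vector, extracted_frames):
    # original_vector: time increments in ms between successive original
    # frames, one value per frame (first entry conventionally 0).
    # extracted_frames: 1-based original frame numbers, ascending.
    absolute, t = [], 0.0
    for inc in original_vector:
        t += inc
        absolute.append(t)
    times = [absolute[f - 1] for f in extracted_frames]
    # First entry 0; then the increment from each frame's predecessor
    # in the newly created object.
    return [0.0] + [b - a for a, b in zip(times, times[1:])]

print(new_frame_time_vector([0, 40, 40, 40, 40], [1, 3, 4]))  # [0.0, 80.0, 40.0]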
Identifying the location of the requested frames within an MPEG-2, MPEG-4 AVC/H.264 or HEVC/H.265 data stream is non-trivial, but if achieved, then little else other than changes to the starting times are likely to be required for MPEG-2, MPEG-4 AVC/H.264 or HEVC/H.265 encoded data, as the use-cases for such encoded data (e.g., endoscopy) are unlikely to include explicit frame related data. See the note below however for comments on "single-frame" results.
An application holding data in MPEG-2, MPEG-4 AVC/H.264 or HEVC/H.265 format is unlikely to be able to create a range with a frame increment of greater than one (a calculated frame list with a 3rd value greater than one), and if such a request is made, it might return a status of AA02: Unable to extract Frames.
The approximation feature of the Time Range form of request is especially suitable for data held in MPEG-2, MPEG-4 AVC/H.264 or HEVC/H.265 form, as it allows the application to find the nearest surrounding key frames, which greatly simplifies editing and improves quality.
Similar issues exist as for MPEG-2, MPEG-4 AVC/H.264 and HEVC/H.265 data and similar solutions apply.
It is very important that functional groups for enhanced image objects are properly re-created to reflect the reduced set of frames, as they include important clinical information. The requirement in the standard that the resulting object be a valid SOP instance does make such re-creations mandatory.
Images of the Nuclear Medicine SOP class are described by the Frame Increment Pointer (0028,0009), which in turn references a number of different "Vectors" as defined in Table "NM Multi-frame Module" in PS3.3. Like the Functional Groups above, these Vectors are required to contain one value for each frame in the Image, and so their contents must be modified to match the list of frames extracted, ensuring that the values retained are those corresponding to the extracted frames.
The requirement that the newly created image object generated in response to a Frame Level retrieve request must be of the same SOP Class will frequently result in the need to create a single-frame instance of an object that is more commonly a multi-frame object, but this should not cause any problems with the IOD rules, as all such objects may quite legally have Number of Frames = 1.
However, a single frame may well cause problems for a transfer syntax based on "video" such as those using MPEG-2, MPEG-4 AVC/H.264 or HEVC/H.265, and therefore the SCU when negotiating a C-GET should consider this problem, and include one or more transfer syntaxes suitable for holding single or non-contiguous frames where such a retrieval request is being made.
Frame numbers are indexes, not identifiers for frames. In every object, the frame numbers always start at 1 and increment by 1, and therefore they will not be the same after extraction into a new SOP Instance.
A SOP Instance may contain internal references to its own frames such as mask frames. These may need to be corrected.
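A short sketch of the renumbering (variable names are illustrative):

# Frames are renumbered 1..N in the extracted object, so internal
# references (e.g., to mask frames) must be remapped.
extracted = [5, 9, 10]  # original frame numbers kept, in ascending order
new_number = {old: new for new, old in enumerate(extracted, start=1)}
assert new_number[9] == 2  # a mask reference to original frame 9 becomes 2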
There is no requirement in the Frame Level Retrieve Service for the SCP to cache or otherwise retain any of the information it uses to create the new SOP Instance, and therefore, an SCU submitting multiple requests for the same information cannot expect to receive the "same" object with the same Instance and Series UIDs each time. However, an SCP may choose to cache such instances, and if returning an instance identical to one previously created, then the same Instance and Series UIDs may be used. The newly created object is however guaranteed to be a valid SOP instance and an SCU may therefore choose to send such an instance to an SCP using C-STORE, in which case it should be handled exactly as any other Composite Instance of that SOP class.
The time base for the new composite instance should be the same as for the source image and should use the same time synchronization frame of reference. This allows the object to retain synchronization to any simultaneously acquired waveform data.
Where the original object is MPEG-2, MPEG-4 AVC/H.264 or HEVC/H.265 with interleaved audio data in the MPEG-2 System, and where the retrieved object is also MPEG-2, MPEG-4 AVC/H.264 or HEVC/H.265 encoded, then audio could normally be preserved and maintain synchronization, but in other cases, the audio may be lost.
As with all modifications to existing SOP instances, an application should remove any data that it cannot guarantee to make consistent with the modifications it is making. Therefore, an application creating new images from multi-frame images should remove any private attributes about which it lacks sufficient information to allow safe and consistent modification. This behavior should be documented in the conformance statement.
This annex explains the use of the Specimen Module for pathology or laboratory specimen imaging.
The concept of a specimen is deeply connected to analysis (lab) workflow, the decisions made during analysis, and the "containers" used within the workflow.
Typical anatomic pathology cases represent the analysis of (all) tissue and/or non-biologic material (e.g., orthopedic hardware) removed in a single collection procedure (e.g., surgical operation/event, biopsy, scrape, aspiration etc.). A case is usually called an "Accession" and is given a single accession number in the Laboratory Information System.
During an operation, the surgeon may label and send one or more discrete collections of material (specimens) to pathology for analysis. By sending discrete, labeled collections of tissue in separate containers, the surgeon is requesting that each discrete labeled collection (specimen) be analyzed and reported independently - as a separate "Part" of the overall case. Therefore, each Part is an important, logical component of the laboratory workflow. Within each Accession, each Part is managed separately from the others and is identified uniquely in the workflow and in the Laboratory Information System.
During the initial gross (or "eyeball") examination of a Part, the pathologist may determine that some or all of the tissue in a Part should be analyzed further (usually through histology). The pathologist will place all or selected sub-samples of the material that makes up the Part into labeled containers (cassettes). After some processing, all the tissue in each cassette is embedded in a paraffin block (or epoxy resin for electron microscopy); at the end of the process, the block is physically attached to the cassette and has the same label. Therefore, each "Block" is an important, logical component of the laboratory workflow, which corresponds to physical material in a container for handling, separating and identifying material managed in the workflow. Within the workflow and Laboratory Information System, each Block is identified uniquely and managed separately from all others.
From a Block, technicians can slice very thin sections. One or more of these sections is placed on one or more slides. (Note, material from a Part can also be placed directly on a slide bypassing the block). A slide can be stained and then examined by the pathologists. Each "Slide", therefore, is an important, logical component of the laboratory workflow, which corresponds to physical material in a container for handling, separating and identifying material managed in the workflow. Within the workflow and within the Laboratory Information Systems, each Slide is identified uniquely and managed separately from all others.
While "Parts" to "Blocks" to "Slides" is by far the most common workflow in pathology, it is important to note that there can be numerous variations on this basic theme. In particular, laser capture microdissection and other slide sampling approaches for molecular pathology are in increasing use. Such new workflows require a generic approach in the Standard to identifying and managing specimen identification and processing, not one limited only to "Parts", "Blocks", and "Slides". Therefore, the Standard adopts a generic approach of describing uniquely identified Specimens in Containers.
A physical object (or a collection of objects) is a specimen when the laboratory considers it a single discrete, uniquely identified unit that is the subject of one or more steps in the laboratory (diagnostic) workflow.
To say the same thing in a slightly different way: "Specimen" is defined as a role played by a physical entity (one or more physical objects considered as a single unit) when the entity is identified uniquely by the laboratory and is the direct subject of one or more steps in a laboratory (diagnostic) workflow.
It is worthwhile to expand on this very basic, high level definition because it contains implications that are important to the development and implementation of the DICOM Specimen Module. In particular:
A single discrete physical object or a collection of several physical objects can act as a single specimen as long as the collection is considered a unit during the laboratory (diagnostic) process step involved. In other words, a specimen may include multiple physical pieces, as long as they are considered a single unit in the workflow. For example, when multiple fragments of tissue are placed in a cassette, most laboratories would consider that collection of fragments as one specimen (one "block").
A specimen must be identified. It must have an ID that identifies it as a unique subject in the laboratory workflow. An entity that does not have an identifier is not a specimen.
Specimens are sampled and processed during a laboratory's (diagnostic) workflow. Sampling can create new (child) specimens. These child specimens are full specimens in their own right (they have unique identifiers and are direct subjects in one or more steps in the laboratory's (diagnostic) workflow). This property of specimens (that they can be created from existing specimens by sampling) extends a common definition of specimen, which limits the word to the original object received for examination (e.g., from surgery).
However, child specimens can and do carry some attributes from ancestors. For example, a tissue section cut from a formalin fixed block remains formalin fixed, and a tissue section cut from a block dissected from the proximal margin of a colon resection is still made up of tissue from the proximal margin. A description of a specimen therefore, may require description of its parent specimens.
A specimen is defined by decisions in the laboratory workflow. For example, in a typical laboratory, multiple tissue sections cut from a single block and placed on the same slide are considered a single specimen (a single unit identified by the slide number). However, if the histotechnologist had placed each tissue section on its own slide (and given each slide a unique number), each tissue section would be a specimen in its own right.
Specimen containers (or just "containers") play an important role in laboratory (diagnostic) processes. In most, but not all, process steps, specimens are held in containers, and a container often carries its specimen's ID. Sometimes the container becomes intimately involved with the specimen (e.g., a paraffin block), and in some situations (such as examining tissue under the microscope) the container (the slide and coverslip) becomes part of the optical path.
Containers have identifiers that are important in laboratory operations and in some imaging processes (such as whole slide imaging). The DICOM Specimen Module distinguishes the Container ID and the Specimen ID, making them different data elements. In many laboratories where there is one specimen per container, the value of the Specimen ID and the Container ID will be the same. However, there are use cases in which there is more than one specimen in a container. In those situations, the values of the Container ID and the Specimen IDs will be different (see Section NN.3.5).
Containers are often made up of components. For example, a "slide" is a container that is made up of the glass slide, the coverslip and the "glue" that binds them together. The Module allows each component to be described in detail.
The Specimen Module (see PS3.3) defines formal DICOM attributes for the identification and description of laboratory specimens when said specimens are the subject of a DICOM image. The Module is focused on the specimen and laboratory attributes necessary to understand and interpret the image. These include:
Attributes that identify (specify) the specimen (within a given institution and across institutions).
Attributes that identify and describe the container in which the specimen resides. Containers are intimately associated with specimens in laboratory processes, often "carry" a specimen's identity, and sometimes are intimately part of the imaging process, as when a glass slide and coverslip are in the optical path in microscope imaging.
Attributes that describe specimen collection, sampling and processing. Knowing how a specimen was collected, sampled, processed and stained is vital in interpreting an image of a specimen. One can make a strong case that those laboratory steps are part of the imaging process.
Attributes that describe the specimen or its ancestors (see Section NN.2.1) when these descriptions help with the interpretation of the image.
Attributes that convey diagnostic opinions or interpretations are not within the scope of the Specimen Module. The DICOM Specimen Module does not seek to replace or mirror the pathologist's report.
The Laboratory Information System (LIS) is critical to management of workflow and processes in the pathology lab. It is ultimately the source of the identifiers applied to specimens and containers, and is responsible for recording the processes that were applied to specimens.
An important purpose of the Specimen Module is to store specimen information necessary to understand and interpret an image within the image information object, as images may be displayed in contexts where the Laboratory Information System is not available. Implementation of the Specimen Module therefore requires close, dynamic integration between the LIS and imaging systems in the laboratory workflow.
It is expected that the Laboratory Information Systems will participate in the population of the Specimen Module by passing the appropriate information to a DICOM compliant imaging system in the Modality Worklist, or by processing the image objects itself and populating the Specimen Module attributes.
The nature of the LIS processing for imaging in the workflow will vary by product implementation. For example, an image of a gross specimen may be taken before a gross description is transcribed. A LIS might provide short term storage for images and update the description attributes in the module after a particular event (such as sign out). The DICOM Standard is silent on such implementation issues, and only discusses the attributes defined for the information objects exchanged between systems.
A pathology "case" is a unit of work resulting in a report with associated codified, billable acts. Case Level attributes are generally outside the scope of the Specimen Module. However, a case is equivalent to a DICOM Requested Procedure, for which attributes are specified in the DICOM Study level modules.
DICOM has existing methods to handle most "case level" issues, including accepting cases referred for other institutions, clinical history, status codes, etc. These methods are considered sufficient to support DICOM imaging in Pathology.
The concept of an "Accession Number" in Pathology has been determined to be sufficiently equivalent to an "Accession Number" in Radiology that the DICOM data element "Accession Number" at the Study level at the DICOM information model may be used for the Pathology Accession Number with essentially the existing definition.
It is understood that the value of the laboratory accession number is often incorporated as part of a Specimen ID. However, there is no presumption that this is always true, and the Specimen ID should not be parsed to determine an accession number. The accession number will always be sent in its own discrete attribute.
While created with anatomic pathology in mind, the DICOM Specimen Module is designed to support specimen identification, collection, sampling and processing attributes for a wide range of laboratory workflows. The Module is designed in a general way so as not to limit the nature, scope, scale or complexity of the laboratory (diagnostic) workflows that may generate DICOM images.
To provide specificity on the general process, the Module provides extendable lists of Container Types, Container Component Types, Specimen Types, Specimen Collection Types, Specimen Process Types and Staining Types. It is expected that the value sets for these "types" can be specialized to describe a wide range of laboratory procedures.
In typical anatomic pathology practice, and in Laboratory Information Systems, there are conventionally three identified levels of specimen preparation - part, block, and slide. These terms are actually conflations of the concepts of specimen and container. Not all processing can be described by only these three levels.
A part is the uniquely identified tissue or material collected from the patient and delivered to the pathology department for examination. Examples of parts include a lung resection, colon biopsy at 20 cm, colon biopsy at 30 cm, peripheral blood sample, cervical cells obtained via scraping or brush, etc. A part can be delivered in a wide range of containers, usually labeled with the patient's name, medical record number, and a short description of the specimen such as "colon biopsy at 20 cm". At accession, the lab creates a part identifier and writes it on the container. The container therefore conveys the part's identifier in the lab.
A block is a uniquely identified container, typically a cassette, containing one or more pieces of tissue dissected from the part (tissue dice). The tissue pieces may be considered, by some laboratories, as separate specimens. However, in most labs, all the tissue pieces in a block are considered a single specimen.
A slide is a uniquely identified container, typically a glass microscope slide, containing tissue or other material. Common slide preparations include:
Virtually all specimens in a clinical laboratory are associated with a container, and specimens and containers are both important in imaging (see "Definitions", above). In most clinical laboratory situations there is a one-to-one relationship between specimens and containers. In fact, pathologists and LIS systems routinely consider a specimen and its container as a single entity; e.g., the slide (a container) and the tissue sections (the specimen) are considered a single unit.
However, there are legitimate use cases in which a laboratory may place two or more specimens in the same container (see Section NN.4 for examples). Therefore, the DICOM Specimen Module distinguishes between a Specimen ID and a Container ID. However, in situations where there is only one specimen per container, the value of the Specimen ID and Container ID may be the same (as assigned by the LIS).
Some Laboratory Information Systems may, in fact, not support multiple specimens in a container; i.e., they manage only a single identifier used for the combination of specimen and container. This is not contrary to the DICOM Standard; images produced under such a system will simply always assert that there is only one specimen in each container. However, a pathology image display application that shows images from a variety of sources must be able to distinguish between Container and Specimen IDs, and handle the 1:N relationship.
In allowing for one container to have multiple specimens, the Specimen Module asserts that it is the Container, not the Specimen, that is the unique target of the image. In other words, one Container ID is required in the Specimen Module, and multiple Specimen IDs are allowed in the Specimen Sequence. See Figure NN.3-1.
If there is more than one specimen in a container, there must be a mechanism to identify and locate each specimen. When there is more than one specimen in a container, the Module allows various approaches to specify their locations. The Specimen Localization Content Item Sequence (0040,0620), through its associated TID 8004 “Specimen Localization”, allows the specimen to be localized by a distance in three dimensions from a reference point on the container, by a textual description of a location or physical attribute such as a colored ink, or by its location as shown in a referenced image of the container. The referenced image may use an overlay, burned-in annotation, or an associated Presentation State SOP Instance to specify the location of the specimen.
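As a sketch, a descriptive-text localization could be encoded as a single TEXT content item in the Specimen Localization Content Item Sequence, assuming the pydicom library; the concept name code shown is an assumption drawn from TID 8004, and the text value is illustrative.

# Illustrative sketch with pydicom; the concept name code is an assumption.
from pydicom.dataset import Dataset
from pydicom.sequence import Sequence

name = Dataset()
name.CodeValue = "111718"
name.CodingSchemeDesignator = "DCM"
name.CodeMeaning = "Location of Specimen"

item = Dataset()
item.ValueType = "TEXT"
item.ConceptNameCodeSequence = Sequence([name])
item.TextValue = "Left"   # e.g., the left of two tissue sections on a slide

specimen = Dataset()
specimen.SpecimenLocalizationContentItemSequence = Sequence([item])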
Because the Module supports one container with multiple specimens, the Module can be used with an image of:
However the Module is not designed for use with an image of:
Multiple specimens that are not associated with the same container, e.g., two gross specimens (two Parts) on a photography table, each with a little plastic label with their specimen number.
Multiple containers that hold specimens (e.g., eight cassettes containing breast tissue being X-Rayed for calcium).
Such images may be included in the Study, but would not use the Specimen Module; they would, for instance, be general Visible Light Photographic images. Note, however, that the LIS might identify a "virtual container" that contains such multiple real containers, and manage that virtual container in the laboratory workflow.
In normal clinical practice, when there is one specimen per container, the value of the specimen identifier and the value of the container identifier will be the same. In Figure NN.4-1, each slide is prepared from a single tissue sample from a single block (cassette).
Figure NN.4-2 shows more than one tissue item on the same slide coming from the same block (but cut from different levels). The laboratory information system considers two tissue sections (on the same slide) to be separate specimens.
Two Specimen IDs will be assigned, different from the Container (Slide) ID. The specimens may be localized, for example, by descriptive text "Left" and "Right".
If the slide is imaged, a single image with more than one specimen may be created. In this case, both specimens must be identified in the Specimen Sequence of the Specimen Module. If only one specimen is imaged, only its Specimen ID must be included in the Specimen Sequence; however, both IDs may be included (e.g., if the image acquisition system cannot determine which specimens in/on the container are in the field of view).
Figure NN.4-3 shows processing where more than one tissue item is embedded in the same block within the same Cassette, but coming from different clinical specimens (parts). This may represent different lymph nodes embedded into one cassette, or different tissue dice coming from different parts in a frozen section examination, or tissue from the proximal margin and from the distal margin placed in the same cassette. Because the laboratory wanted to maintain the samples as separate specimens (to maintain their identity), the LIS gave them different IDs, and the tissue from Part A was inked blue and the tissue from Part B was inked red.
The specimen IDs must be different from each other and from the container (cassette) ID. The specimens may be localized, for example, by descriptive text "Red" and "Blue" for Visual Coding of Specimen.
If a section is made from the block, each tissue section will include fragments from two specimens (red and blue). The slide (container) ID will be different from the specimen IDs (which will also be different from each other).
If the slide is imaged, a single image with more than one specimen may be created but the different specimens must be identified and unambiguously localized within the container.
Figure NN.4-4 shows the result of two tissue collections placed on the same slide by the surgeon. E.g., in gynecological smears the different directions of smears might represent different parts (portio, cervix).
The specimen IDs must be different from each other and from the container (slide) ID. The specimens may be localized, for example, by descriptive text "Short direction smear" and "Long direction smear".
Slides created from a TMA block have small fragments of many different tissues coming from different patients, all of which may be processed at the same time, under the same conditions by a desired technique. These are typically utilized in research. See Figure NN.4-5. Tissue items (spots) on the TMA slide come from different tissue items (cores) in TMA blocks (from different donor blocks, different parts and different patients).
Each Specimen (spot) must have its own ID. The specimens may be localized, for example, by X-Y coordinates, or by a textual column-row identifier for the spot (e.g., "E3" for fifth column, third row).
If the TMA slide is imaged as a whole, e.g., at low resolution as an index, it must be given a "pseudo-patient" identifier (since it does not relate to a single patient). Images created for each spot should be assigned to the real patients.
The Specimen Module content is specified as a Macro as an editorial convention to facilitate its use in both Composite IODs and in the Modality Worklist Information Model.
The Module has two main sections. The first deals with the specimen container. The second deals with the specimens within that container. Because more than one specimen may reside in single container, the specimen section is set up as a sequence.
The Container section is divided into two "sub-sections":
The Specimen Description Sequence contains five "sub-sections":
One deals with preparation of the specimen and its ancestor specimens (including sampling, processing and staining). Because of its importance in interpreting slide images, staining is distinguished from other processing. Specimen preparation is set up as a sequence of process steps (multiple steps are possible); each step is in turn a sequence of content items (attributes using coded vocabularies). This is the most complex part of the Module.
One deals with the original anatomic location of the specimen in the patient.
One deals with specimen localization within a container. This is used to identify specimens when there is more than one in a container. It is set up as a sequence.
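A minimal sketch of this container/specimen skeleton for a slide carrying a single specimen, assuming the pydicom library; the identifier values and the container type code are illustrative placeholders, not values mandated by the Module.

# Illustrative sketch with pydicom; identifiers and the container type code
# are placeholders.
from pydicom.dataset import Dataset
from pydicom.sequence import Sequence
from pydicom.uid import generate_uid

container_type = Dataset()
container_type.CodeValue = "433466003"          # illustrative code value
container_type.CodingSchemeDesignator = "SCT"
container_type.CodeMeaning = "Microscope slide"

ds = Dataset()
ds.ContainerIdentifier = "S07-100 A 1 1"        # the slide (container) ID
ds.ContainerTypeCodeSequence = Sequence([container_type])

specimen = Dataset()
specimen.SpecimenIdentifier = "S07-100 A 1 1"   # one specimen per slide: same value
specimen.SpecimenUID = generate_uid()
specimen.SpecimenPreparationSequence = Sequence([])  # sampling/processing/staining steps
ds.SpecimenDescriptionSequence = Sequence([specimen])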
This section includes examples of the use of the Specimen Module. Each example has two tables.
The first table contains the majority of the container and specimen elements of the Specimen Module. The second includes the Specimen Preparation Sequence (which documents the sampling, processing and staining steps).
In the first table, invocations of Macros have been expanded to their constituent attributes. The Table does not include Type 3 (optional) attributes that are not used for the example case.
The second table shows the Items of the Specimen Preparation Sequence and its subsidiary Specimen Preparation Step Content Item Sequence. That latter sequence itself has subsidiary Code Sequence Items, but these are shown in the canonical DICOM "triplet" format (see PS3.16), e.g., (T-28600, SRT, "Left Upper Lobe of Lung"). In the table, inclusions of subsidiary templates have been expanded to their constituent Content Items. The Table does not include Type U (optional) Content Items that are not used for the example case.
In the published rendering of the two tables, values in the colored columns are those that actually appear in the image object.
This is an example of how the Specimen Module can be populated for a gross specimen (a lung lobe resection received from surgery). The associated image would be a gross image taken in the gross room.
Table NN.6-1. Specimen Module for Gross Specimen
Container Identifier (0040,0512): The identifier for the container that contains the specimen(s) being imaged. Note that the container ID is required, even though the container itself does not figure in the image.
Container Type Code Sequence (0040,0518): Type of container that contains the specimen(s) being imaged; zero or one Items shall be permitted in this Sequence. This would likely be a default container value for all gross specimens. The LIS does not keep information on the gross container type, so this is an empty sequence.
Specimen Description Sequence (0040,0560): Sequence of identifiers and detailed description of the specimen(s) being imaged. One or more Items shall be included in this Sequence.
Specimen Identifier (0040,0551): A departmental information system identifier for the Specimen.
Issuer of the Specimen Identifier Sequence (0040,0562): The name or code for the institution that has assigned the Specimen Identifier.
Specimen Short Description (0040,0600): The LIS "Specimen Received" field is mapped to this DICOM attribute.
Specimen Detailed Description (0040,0602): "A: Received fresh for intraoperative consultation, labeled with the patient's name, number and "left upper lobe," is a pink-tan, wedge-shaped segment of soft tissue, 6.9 x 4.2 x 1.0 cm. The pleural surface is pink-tan and glistening with a stapled line measuring 12.0 cm. in length. The pleural surface shows a 0.5 cm. area of puckering. The pleural surface is inked black. The cut surface reveals a 1.2 x 1.1 cm, white-gray, irregular mass abutting the pleural surface and deep to the puckered area. The remainder of the cut surface is red-brown and congested. No other lesions are identified. Representative sections are submitted." This is a mapping from the LIS "Gross Description" field. Note that in Case S07-100 there were six parts, so the LIS gross description field will have six sections (A - F); the field would have to be parsed into those parts and only section "A" incorporated into this attribute. One could also consider listing all the Blocks associated with Part A; it would be easy to do and might give useful information.
Specimen Preparation Sequence (0040,0610): Sequence of Items identifying the process steps used to prepare the specimen for image acquisition. One or more Items may be present. This Sequence includes description of the specimen sampling step from a parent specimen, potentially back to the original part collection. (See Table NN.6-2.)
Specimen Preparation Step Content Item Sequence (0040,0612): Sequence of Content Items identifying the processes used in one preparation step to prepare the specimen for image acquisition. One or more Items may be present.
Primary Anatomic Structure Sequence (0008,2228): Original anatomic location in patient of specimen. This location may be inherited from the parent specimen, or further refined by modifiers depending on the sampling procedure for this specimen.
This is an example of how the Specimen Module can be populated for a slide (from a lung lobe resection received from surgery). The associated image would be a whole slide image.
Table NN.6-3. Specimen Module for a Slide
Container Identifier (0040,0512): The identifier for the container that contains the specimen(s) being imaged.
Container Type Code Sequence (0040,0518): Type of container that contains the specimen(s) being imaged; only a single Item shall be permitted in this Sequence. This would likely be a default container value for all slide specimens.
Container Component Sequence (0040,0520): Description of one or more components of the container (e.g., description of the slide and of the coverslip). One or more Items may be included in this Sequence.
Container Component Type Code Sequence (0050,0012): Type of container component. One Item shall be included in this Sequence.
Specimen Description Sequence (0040,0560): Sequence of identifiers and detailed description of the specimen(s) being imaged. One or more Items shall be included in this Sequence.
Specimen Identifier (0040,0551): A departmental information system identifier for the Specimen.
Issuer of the Specimen Identifier Sequence (0040,0562): The name or code for the institution that has assigned the Specimen Identifier.
Specimen Short Description (0040,0600): This attribute concatenates four LIS fields: 1. Specimen Received, 2. Cassette Summary, 3. Number of Pieces in Block, 4. Staining. This does not always work so nicely; often one or more of the fields is empty or confusing.
Specimen Detailed Description (0040,0602): "A: Received fresh for intraoperative consultation, labeled with the patient's name, number and "left upper lobe," is a pink-tan, wedge-shaped segment of soft tissue, 6.9 x 4.2 x 1.0 cm. The pleural surface is pink-tan and glistening with a stapled line measuring 12.0 cm. in length. The pleural surface shows a 0.5 cm. area of puckering. The pleural surface is inked black. The cut surface reveals a 1.2 x 1.1 cm, white-gray, irregular mass abutting the pleural surface and deep to the puckered area. The remainder of the cut surface is red-brown and congested. No other lesions are identified. Representative sections are submitted." This is a mapping from the LIS Gross Description field and the Block Summary. Note that in Case S07-100 there were six parts, so the LIS gross description field will have six sections (A - F); the field would have to be parsed into those parts and only section "A" incorporated into this attribute. The same would be true of the Blocks.
Specimen Preparation Sequence (0040,0610): Sequence of Items identifying the process steps used to prepare the specimen for image acquisition. One or more Items may be present. This Sequence includes description of the specimen sampling step from a parent specimen, potentially back to the original part collection. (See Table NN.6-4.)
Specimen Preparation Step Content Item Sequence (0040,0612): Sequence of Content Items identifying the processes used in one preparation step to prepare the specimen for image acquisition. One or more Items may be present.
Primary Anatomic Structure Sequence (0008,2228): Original anatomic location in patient of specimen. This location may be inherited from the parent specimen, or further refined by modifiers depending on the sampling procedure for this specimen.
The example Specimen Preparation Sequence first describes the most recent processing of the slide (staining), then goes back to show its provenance. Notice that there is no sampling process for the slide described here; the LIS did not record the step of slicing of blocks into slides.
Workflow management in the DICOM imaging environment utilizes the Modality Worklist (MWL) and Modality Performed Procedure Step (MPPS) services. Within the pathology department, these services support both human controlled imaging (e.g., gross specimen photography), as well as automated slide scanning modalities.
While this section provides an overview of the DICOM services for managing workflow, the reader is referred to the IHE Anatomic Pathology Domain Technical Framework for specific use cases and profiles for pathology imaging workflow management.
The contents of the Specimen Module may be conveyed in the Scheduled Specimen Sequence of the Modality Worklist query. This feature allows an imaging system (Modality Worklist SCU) to query for work items by Container ID. The worklist server (SCP) of the laboratory information system can then return all the necessary information for creating a DICOM specimen-related image. This information includes patient identity and the complete slide processing history (including stain applied). It may be used for imaging set-up and/or inclusion in the Image SOP Instance.
In addition to the Specimen Module attributes, the set-up of an automated whole slide scanner requires acquisition parameters such as scan resolution, number of Z-planes, fluorescence wavelengths, etc. A managed set of such parameters is called a Protocol (see PS3.3), and the MWL response may contain a Protocol Code to control scanning set-up. Additional set-up parameters can be passed as Content Items in the associated Protocol Context Sequence; this might be important when the reading pathologist requests a rescan of the slide with slightly different settings.
When scanning is initiated, the scanner reports the procedure step in a Modality Performed Procedure Step (MPPS) transaction.
Upon completion (or cancellation) of an image acquisition, the modality reports the work completed in an update to the MPPS. The MPPS can convey both the Container ID and the image UIDs, so that the workflow manager (laboratory information system) is advised of the image UIDs associated with each imaged specimen.
Intra-oral radiography typically involves acquisition of multiple images of various parts of the dentition. Many digital radiographic systems offer customized templates that are used for displaying the images in a study on the screen. These templates may also be referred to as mounts or view sets. The Structured Display Object represents a standard method of encoding and exchanging the layout and intended display of Structured Displays. A structured display object created in this manner could be stored with a study and exchanged with images to allow for complete reproduction of the original exam.
A patient visits a General Dentist where a Full Mouth Series Exam with 18 images is acquired. The dentist observes severe bone loss and refers the patient to a Periodontist. The 18 images from the Full Mouth Series along with a Structured Display are copied to a DICOM Interchange CD and sent with the patient to see the specialist. The Periodontist uses the CD to open the exam in his Dental Radiographic Software and consults via phone with the General Dentist. Both are able to observe the same exam showing the images on each user's display using the exact same layout.
A patient requests cosmetic surgery to enhance their facial appearance. The case requires consultation between an orthodontist in New York and an oral surgeon in California. The cephalometric series of 2D projections constructed from a volumetric CT data set that is used for the discussion is arranged by a Structured Display for transfer between the two practitioners.
A dental provider wishes to capture a series of DICOM intra-oral (IO) images of the patient's dentition. By tooth morphology, the teeth are divided into molars, premolars, canines and incisors, with a number of images for each jaw. The anatomic information is captured using the code triplet schema based on ISO 3950-2010, Dentistry - Designation system for teeth and areas of the oral cavity.
Every IO image should have anatomic information either through the primary or modifier sequence.
In most standard cases, images are arranged in structured layouts. It is useful to share these structured displays between providers for reference purposes.
Table OO.1.1-1 shows structured display standard templates, where the Viewset ID is based on the Japanese Society for Oral and Maxillofacial Radiology (JSOMR) classification provided by JIRA (Japan Medical Imaging and Radiological Systems Industries Association, www.jira-net.or.jp). The expected or typical teeth to be imaged, with location, region and designation codes, are based on ISO 3950-2010, Dentistry - Designation system for teeth and areas of the oral cavity. For all the hanging protocols listed in Table OO.1.1-1, the value to use for Hanging Protocol Creator (0072,0008) is "JSOMR" and the value to use for Hanging Protocol Name (0072,0002) does not include "JSOMR" (e.g., "DL-S001A", not "JSOMR DL-S001A").
Table OO.1.1-1. Hanging Protocol Names for Dental Image Layout based on JSOMR classification
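A short sketch of the naming rule above, assuming the pydicom library (the layout name is one of the JSOMR identifiers mentioned above):

# Illustrative sketch with pydicom: Hanging Protocol identification per the
# JSOMR naming rule.
from pydicom.dataset import Dataset

hp = Dataset()
hp.HangingProtocolName = "DL-S001A"    # the name does not include "JSOMR"
hp.HangingProtocolCreator = "JSOMR"    # the creator identifies the JSOMR source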
A patient in rural Canada visits a general ophthalmologist and is found to have diabetic macular edema. The general ophthalmologist would like to discuss the case with a retina specialist before performing laser surgery. A fluorescein angiogram is done with multiple retinal images taken in a timed series after an intravenous injection. The images along with a Structured Display are shared via a Health Information Exchange with a retina specialist in Calgary, who opens them using his Ophthalmology EMR software and consults via phone with the general ophthalmologist. Both physicians view the images in the same layout so the retina specialist can provide accurate guidance for treating the patient.
A patient in rural Iowa visits his primary care physician for management of diabetes. Three non-mydriatic (patient's eyes are not dilated) photographs are taken of the back of each eye, and forwarded electronically along with a Structured Display to an ophthalmologist in Iowa City. The ophthalmologist reads the photos in an agreed upon layout so there is no mistake about what portion of which eye is being viewed. The ophthalmologist is able to tell the primary care physician that his patient does not need to come to Iowa City for face to face ophthalmologic care, but that there is a particular view of the left eye that should be photographed again in 6 months.
A patient in rural Minnesota experiences sudden vision loss and goes to a general ophthalmologist, who acquires OCT images and forwards them electronically along with a Structured Display to a retina specialist six travel hours away. The retina specialist is able to view the images in the standard layout that he is comfortable with, and to confirm that the patient has a choroidal neovascular membrane. He determines that it would be worthwhile for the patient to travel for treatment.
Cardiac stress testing acquires images in at least two patient states, rest and stress, and typically with several different views of the heart to highlight function of different cardiac anatomic regions. Image review typically involves simultaneous display of the same anatomy at two patient states, or multiple anatomic views at one patient state, or even simultaneous display of multiple anatomic views at multiple states. This applies to all cardiac imaging modalities, including ultrasound, nuclear, and MR. The American College of Cardiology and American Society of Nuclear Medicine have adopted standard display layouts for nuclear cardiology rest-stress studies.
A radiologist on his PACS assembles a screen layout of a stack of CT images of a current lung study, a secondary capture of a 3-D rendering of the CT, and a prior chest radiograph for the patient. He adjusts the window width / window level for the CT images, and zooms and annotates the radiograph to clearly indicate the tumor. He saves a Structured Display object representing that screen layout, including Grayscale Softcopy Presentation State objects for the CT WW/WL and the radiograph zoom and annotation. During the weekly radiology department conference, on an independent (non-PACS) workstation, he accesses the Structured Display object, and the display workstation automatically loads and places the images on the display, and presents them with the recorded WW/WL, zoom settings, and annotations.
A mammographer reviews a screening exam on a mammo workstation. She wishes to discuss the exam with the patient's general practitioner, who does not have a mammo-specific workstation. She saves a structured display, with presentation states for each image that replicate the display rendered by the mammo workstation (scaling, horizontal and vertical alignment, view and laterality annotation, etc.).
The purpose of this annex is to identify the clinical use cases that drove the development of Enhanced US Volume Object Definition for 3D Ultrasound image storage. They represent the clinical needs that must be addressed by interoperable Ultrasound medical devices and compatible workstations exchanging 3D Ultrasound image data. The use cases listed here are reviewed by representatives of the clinical community and are believed to cover most common applications of 3D Ultrasound data sets.
The following use cases consider the situations in which 3D Ultrasound data is produced and used in the clinical setting:
An ultrasound scanner generates a Volume Data set consisting of a set of parallel XY planes whose positions are specified relative to each other and/or a transducer frame-of-reference, with each plane containing one or more frames of data of different ultrasound data types. Ultrasound data types include, but are not limited to, reflector intensity, Doppler velocity, Doppler power, Doppler variance, etc.
An ultrasound scanner generates a set of temporally related Volume Data sets, each as described in Case 1. This includes sets of volumes that are acquired sequentially, or acquired asynchronously and reassembled into a temporal sequence (such as through the "Spatial-Temporal Image Correlation" (STIC) technique).
Any Volume Data set may be operated upon by an application to create one or more Multi-Planar Reconstruction (MPR) views (as in Case 7).
Any Volume Data set may be operated upon by an application to create one or more Volume Rendered views (as in Case 8).
An ultrasound scanner generates 3D image data consisting of one or more 2D frames that may be displayed, including
An ultrasound scanner generates 3D image data consisting of one or more MPR Views that may be displayed as ordinary 2D frames, including
A loop of MPR Views representing different spatial positions and/or orientations relative to one another
A loop of MPR Views representing different spatial positions, orientations, and/or times relative to one another
A collection of MPR Views related to one another (example: 3 mutually orthogonal MPR Views around the point of intersection)
An ultrasound scanner generates 3D image data consisting of one or more Volume Rendered Views that may be displayed as ordinary 2D frames, including
Allow successive display of frames in multi-frame objects in cases 6, 7, and 8.
Separation of different data types allows for independent display and/or processing of image data (for example, color suppression to expose tissue boundaries, grayscale suppression for vascular flow trees, elastography, etc.)
Represent ECG and other physiological waveforms synchronized to acquired images.
Two-stage Retrieval: The clinician initially queries for and retrieves all the images in an exam that are directly viewable as sets of frames. Based on the review of these images (potentially on a legacy review application), the clinician may decide to perform advanced analysis of a subset of the exam images. Volume Data sets corresponding to those images are subsequently retrieved and examined.
An ultrasound scanner allows the user to specify qualitative patient orientation (e.g., Left, Right, Medial, etc.) along with the image data.
An ultrasound scanner may maintain a patient-relative frame of reference (obtained, for example, through a gantry device) along with the image data.
Fiducial markers that tag anatomical references in the image data may be specified along with the image data.
Key Images of clinical interest are identified and either the entire image, or one or more frames or a volume segmentation within the image must be tagged for later reference.
This section organizes the list of use cases into a hierarchy. Section PP.3 maps items in this hierarchy to specific solutions in the DICOM Standard.
This section maps the use case hierarchy in Section PP.2.2 to specific solutions in the DICOM Standard. As described in items 1a and 1b, there are two different types of data related to 3D image acquisition: the 3D volume data set itself and 2D images derived from the volume data set. See Figure PP.3-1.
The 3D volume data set is conveyed via the Enhanced US Volume SOP Class, which represents individual 3D Volume Data sets or collections of temporally-related 3D Volume Data sets using the 'enhanced' multi-frame features used by Enhanced Storage SOP Classes for other modalities, including shared and per-frame functional group sequences and multi-frame dimensions. The 3D Volume Data sets represented by the Enhanced Ultrasound IOD (the striped box in Figure PP.3-1) are suitable for Multi-Planar Reconstruction (MPR) and 3D rendering operations. Note that the generation of the Cartesian volume, its relationship to spatially-related 2D frames (whether the volume was created from spatially-related frames, or spatially-related frames extracted from the Cartesian volume), and the algorithms used for MPR or 3D rendering operations are outside the scope of this standard.
Functional Group Macros allow the storage of many parameters describing the acquisition and positioning of the image planes relative to the patient and external frame of references (such as a gantry or probe locating device). These macros may apply to the entire instance (Shared Functional Group) or may vary frame-to-frame (Per-Frame Functional Group).
Multi-frame Dimensions are used to organize the data type, spatial, and temporal variations among frames. Of particular interest is Data Type used as a dimension to relate frames of different data types (like tissue and flow) comprising each plane of an ultrasound image (item 1c in the use case hierarchy). Refer to Section C.8.24.3.3 for the use of Dimensions with the Enhanced US Volume SOP Class.
Sets of temporally-related volumes may have been acquired sequentially or acquired asynchronously and reassembled into a temporal sequence, such as through Spatial-Temporal Image Correlation (STIC). Regardless of how the temporal volume sequence was acquired, frames in the resultant volumes are marked with a temporal position value, such as Temporal Position Time Offset (0020,930D) indicating the temporal position of the resultant volumes independent of the time sequence of the acquisition prior to reassembly into volumes.
The 2D image types represent collections of frames that are related to or derived from the volume data set, namely Render Views (projections), separate Multi-Planar Reconstruction (MPR) views, or sets of spatially-related source frames, either parallel or oblique (the cross-hatched images in Figure PP.3-1). The Ultrasound Image and Ultrasound Multi-frame Image IODs are used to represent these related or derived 2D images. The US Image Module for the Ultrasound Image Storage and Ultrasound Multi-frame Image Storage SOP Classes has defined terms for "3D Rendering" (render or MPR views) and "Spatially Related Frames" in value 4 of the Image Type (0008,0008) attribute to specify that the object contains these views, while maintaining backwards compatibility with Ultrasound review applications for frame-by-frame display, whether as a sequential ("fly-through" or temporal) loop display or as a side-by-side ("light-box") display of spatially-related slices. Also, the optional Source Image Sequence (0008,2112) and Derivation Code Sequence (0008,9215) attributes may be included to more succinctly specify the type of image contained in the instance and the 3D Volume Data set from which it was derived.
2D Derived image instances should be linked to the source 3D Volume Data set through established DICOM reference mechanisms. This is necessary to support the "Two-Stage Review" use case. Consider the following examples:
In the case of a 3D Volume Data set created from a set of spatially-related frames within the ultrasound scanner,
the Enhanced US Volume instance should include
Referenced Image Sequence (0008,1140) to the source Ultrasound Image and/or Multi-frame Image instances
Referenced Image Purpose of Reference Code Sequence (0040,A170) using (121346, DCM, "Acquisition frames corresponding to volume")
and the Ultrasound Image and/or Multi-frame Image instances should include:
Referenced Image Sequence (0008,1140) to the 3D Volume Data set
Referenced Image Purpose of Reference Code Sequence (0040,A170) using (121347, DCM, "Volume corresponding to spatially-related acquisition frames")
In the case of an Ultrasound Image or Ultrasound Multi-frame Image instance containing one or more of the spatially-related frames derived from a 3D volume data set, the ultrasound image instance should include:
Source Image Sequence (0008,2112) referencing the Enhanced US Volume instance
Source Image Sequence Purpose of Reference Code Sequence (0040,A170) using (121322, DCM, "Source of Image Processing Operation")
Derivation Code Sequence (0008,9215) using (113091, DCM, "Spatially-related frames extracted from the volume")
In the case of separate MPR or 3D rendered views derived from a 3D Volume Data set, the image instance(s) should include:
Source Image Sequence (0008,2112) referencing the Enhanced US Volume instance
Source Image Sequence Purpose of Reference Code Sequence (0040,A170) using (121322, DCM, "Source of Image Processing Operation")
Derivation Code Sequence (0008,9215) using CID 7203 “Image Derivation” code(s) describing the specific derivation operation(s)
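As an informative sketch of the second case above, the following pydicom fragment (the pydicom library is assumed; UIDs are placeholders) populates the source and derivation references of a derived 2D instance:

from pydicom.dataset import Dataset

def dcm_code(value, meaning):
    # One item of a code sequence, using the DCM coding scheme
    item = Dataset()
    item.CodeValue = value
    item.CodingSchemeDesignator = "DCM"
    item.CodeMeaning = meaning
    return item

derived = Dataset()  # the derived Ultrasound (Multi-frame) Image instance

src = Dataset()
src.ReferencedSOPClassUID = "1.2.840.10008.5.1.4.1.1.6.2"  # Enhanced US Volume Storage
src.ReferencedSOPInstanceUID = "2.999.1.2.3.4"  # placeholder UID of the volume instance
src.PurposeOfReferenceCodeSequence = [dcm_code("121322", "Source of Image Processing Operation")]
derived.SourceImageSequence = [src]

derived.DerivationCodeSequence = [dcm_code("113091", "Spatially-related frames extracted from the volume")]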
ECG or other physiological waveforms associated with an Enhanced US Volume (item 1d in the use case hierarchy) are conveyed via one or more companion instances of Waveform IODs linked bidirectionally to the Enhanced US Volume instance. Physiological waveforms associated with ultrasound image acquisition may be represented using any of the Waveform IODs, and are linked with the Enhanced US Volume instance and to other simultaneous waveforms through the Referenced Instance Sequence in the image instance and each waveform instance. The Synchronization Module and the Acquisition DateTime (0008,002A) Attribute are used to synchronize the waveforms with the image and each other.
The use case of two-stage review (item 2a in the use case hierarchy) is addressed by the use of separate SOP Classes for 2D and 3D data representations. A review may initially be performed on the Ultrasound Image and Ultrasound Multi-frame Image instances created during the study. If additional operations on the 3D volume data set are desired, the Enhanced US Volume instance referenced in the Source Image Sequence of the derived object may be individually retrieved and operated upon by an appropriate application.
The 3D volume data set spatially relates individual frames of the image to each other using the Transducer Frame of Reference defined in Section C.8.24.2 in PS3.3 (item 2b in the use case hierarchy). This permits alignment of frames with each other in the common situation where a hand-held ultrasound transducer is used without an external frame of reference. However, the Transducer Frame of Reference may in turn be related to an external Frame of Reference through the Transducer Gantry Position and Transducer Gantry Orientation Attributes. This would permit the creation of optional Image Position and Orientation values relative to the Patient when this information is available. In addition to these frames of reference, the spatial registration, fiducials, segmentation, and deformation objects available for other Enhanced objects may also be used with Enhanced US Volume instances.
The Key Object Selection Document SOP Class may be used to identify specific Enhanced US Volume instances of particular interest (item 2d in the use case hierarchy).
This Annex contains a number of examples illustrating Ultrasound's use of the Blending and Display Pipeline. An overview of the examples included is found in Table QQ.1-1.
Table QQ.1-1. Enhanced US Data Type Blending Examples (Informative)
In the examples below, the following attributes are referenced:
Grayscale mapping only from 1 data frame:
Compared to Example 1, the perceived contrast of the displayed grayscale image will likely be different as a consequence of the use of PCS-Values as opposed to P-Values unless color management software interpreting the PCS-Values attempts to approximate the Grayscale Standard Display Function. This is true regardless of whether a color or grayscale display is used.
Each output value is either the grayscale tissue intensity value or the colorized flow velocity value based on the magnitude of the flow velocity sample value:
Each output value is either the grayscale tissue intensity value or a colorized flow/variance value, selected based on the flow/variance values. The colorized flow/variance value is determined by a 2-dimensional Secondary RGB Palette Color LUT:
Each output value is a combination of a colorized tissue intensity value and a colorized flow/variance value. The colorized flow/variance value is determined by a 2-dimensional Secondary RGB Palette Color Lookup Table, using the upper 5 bits of the FLOW_VELOCITY value and the upper 3 bits of the FLOW_VARIANCE value to allow the use of 256-entry Secondary Palette Color Lookup Tables. The blending proportion is based on values from both data paths. If the sum of the two RGB values exceeds 1.0, the value is clamped to 1.0:
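As an informative sketch only, the following numpy fragment illustrates one plausible realization of the index construction and clamped combination described above; the bit packing of the lookup index and the per-pixel blending weights are assumptions, not normative definitions:

import numpy as np

def blend_tissue_flow(tissue_rgb, flow_velocity, flow_variance, secondary_lut, w_tissue, w_flow):
    # tissue_rgb: (H, W, 3) colorized tissue values in [0.0, 1.0]
    # flow_velocity, flow_variance: (H, W) uint8 data frames
    # secondary_lut: (256, 3) Secondary RGB Palette Color LUT, values in [0.0, 1.0]
    # w_tissue, w_flow: (H, W) blending proportions derived from both data paths
    idx = (flow_velocity & 0xF8) | (flow_variance >> 5)  # upper 5 bits of velocity, upper 3 of variance (assumed packing)
    flow_rgb = secondary_lut[idx]  # colorized flow/variance value
    out = w_tissue[..., np.newaxis] * tissue_rgb + w_flow[..., np.newaxis] * flow_rgb
    return np.clip(out, 0.0, 1.0)  # clamp any sum exceeding 1.0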
Refractive instruments are the most commonly used instruments in eye care. At present many of them are capable of digital output, but their data is most often transferred by manual entry into a paper or electronic record.
Refractive instruments measure the power of a lens or of a patient's eye to bend light. In order for a patient to see well, light must be focused on the retina in the back of the eye. If the natural optics of a patient's eye do not accomplish this, corrective lenses can bend incident light so that it is focused on the retina after passing through the optics of the eye. The power of an optical system such as a spectacle lens or the eye is its ability to bend light, and is measured in diopters (D). In practical clinical applications, this is measured to three decimal places, in increments of 0.125 D. The power of a lens is measured in at least two major meridians. A lens power is spherical when the power is the same in all meridians (0-180 degrees), and cylindrical when the power differs across the various meridians. The shape of the anterior surface of the eye largely determines what type of correcting lens is needed. An eye that requires only spherical lens power is usually shaped spherically, more like a ball, while an eye that requires cylindrical lens power is ellipsoid, shaped more like a football.
Lenses can also bend light without changing its focal distance. This type of refraction simply displaces the position of the image laterally. The power of a prism to bend light is measured in prism diopters. In practical clinical applications this is measured to one decimal place, in increments of 0.5 prism diopters. Prism power is required in a pair of spectacles most commonly when the two eyes are not properly aligned with the object of regard. Clinical prisms are considered to bend all light passing through the lens either up, down, in toward the nose, or out away from the nose, in order to compensate for ocular misalignment.
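To illustrate these recording increments, a minimal Python sketch (the helper function is hypothetical, not part of the Standard) that snaps a measured value to the nearest increment:

def snap(value, increment):
    # Round a measured value to the nearest recording increment
    return round(value / increment) * increment

snap(2.3, 0.125)  # -> 2.25 D (lens power, 0.125 D increments)
snap(1.7, 0.5)    # -> 1.5 (prism power, 0.5 prism diopter increments)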
Visual acuity is measured in various scales, all of which indicate a patient's vision as a fraction of what a reference standard patient would see at any given distance. For example, if a patient has 20/30 vision, it means that he sees from a distance of 20 feet what a reference standard patient would see from a distance of 30 feet. These measurements are determined by presentation of standardized objects or symbols (optotypes) of varying sizes calibrated to reference standard vision (20/20). The smallest discernible optotype defines the patient's visual acuity, which is expressed in a variety of formats (letters, numbers, pictures, tumbling E, Landolt C, etc.).
Visual acuity is measured at two categories of viewing distance: distance and near. Distance visual acuity is measured at 20 feet or 6 meters; this distance is roughly equivalent to optical infinity for clinical purposes. The near viewing distance can vary from 30 cm to 75 cm depending on a variety of other conditions, but is most commonly 40 cm.
Visual acuity is measured under several common viewing conditions:
1) Uncorrected vision is measured using the autoprojector to project the above-mentioned optotypes for viewing, with no lenses in front of the patient's eyes. The line of smallest optotypes of which the patient can see more than half is determined, and that information is uploaded to a computer system.
2) Vision with habitual correction is measured in a similar fashion, using whichever vision correction the patient customarily wears.
3) Pinhole vision is measured in a similar fashion, with the patient viewing the optotypes through a pinhole occluder held in front of the eye. Pinhole visual acuity testing reduces retinal blur, providing an approximation of what the patient's vision should be with the best possible refractive correction (spectacles) in place.
4) Best corrected visual acuity is the visual acuity with the best refractive correction in place.
5) Crowding visual acuity measures the presence and amount of disparity in acuity between single-optotype and multiple-optotype presentations.
A patient's spectacle prescription may or may not represent the same lenses that provided best corrected visual acuity in his refraction. Subjective comfort plays a role in determining the final spectacle prescription.
Autolensometer: an autolensometer is used to measure the refractive power of a patient's spectacles. This is done by the automatic analysis of the effect of the measured lens upon a beam of light passing through it. Output from an autolensometer can be uploaded to a phoropter to provide a baseline for subjective refraction (discussed below), and it can be uploaded to a computerized medical record. Lenses may also be measured to confirm manufacturing accuracy.
Autorefractor: an autorefractor is used to automatically determine, without patient input, what refractive correction should provide best corrected visual acuity. Output from an autorefractor can be uploaded to a phoropter to provide a baseline for subjective refraction (discussed below), and it can be uploaded to a computerized medical record.
Phoropter (or phoroptor): an instrument containing multiple lenses that is used in the course of an eye exam to determine the individual's subjective response to various lenses (subjective refraction) and the need for glasses or contact lenses. The patient looks through the phoropter lenses at an eye chart that may be at 20 feet or 6 meters, or at a reading chart that may be at 40 cm. Information from the subjective refraction can be uploaded from an autophoropter to a computer. The best corrected vision that was obtained is displayed in an autoprojector, and that information can also be uploaded to a computer.
Autokeratometer: an autokeratometer is used to measure the curvature, and thus the refractive power, of a patient's cornea. Two measurements are generally taken, one at the steepest and one at the flattest meridian of the cornea. The meridian measured is expressed in whole degrees, in increments of 1 degree. If the measurement is expressed as power, the unit of measurement is diopters, to three decimal places, in increments of 0.125 D. If the measurement is expressed as radius of curvature, the unit of measurement is millimeters, to two decimal places, in increments of 0.01 mm.
Visual acuity is defined as the reciprocal of the ratio between the letter size that can just be recognized by a patient, relative to the size just recognized by a standard eye. If the patient requires letters that are twice as large (or twice as close), the visual acuity is said to be 1/2; if the letters need to be 5x larger, visual acuity is 1/5, and so on.
Note that the scales in the tables extend well above the reference standard (1.0, 20/20, the ability to recognize a letter subtending a visual angle 5 min. of arc), since normal acuity is often 1.25 (20/16), 1.6 (20/12.5) or even 2.0 (20/10).
Today, the ETDRS chart and ETDRS protocol, established by the National Eye Institute in the US, are considered the de facto gold standard for visual acuity measurements. The International Council of Ophthalmology's Visual Standard, Aspects and Ranges of Vision Loss (April, 2002) is a good reference document.
The full ETDRS protocol requires a wide chart, in the shape of an inverted triangle, on a light box, and cannot be implemented on the limited screen of a projector (or similar) chart.
For most routine clinical measurements projector charts or traditional charts with a rectangular shape are used; these non-standardized tools are less accurate than ETDRS measurements.
This annex contains two lookup tables, one for traditional charts and one for ETDRS measurements.
Various notations may be used to express visual acuity. Snellen (in 1862) used a fractional notation in which the numerator indicated the actual viewing distance; this notation has long been abandoned for the use of equivalent notations, where the numerator is standardized to a fixed value, regardless of the true viewing distance. In Europe the use of decimal fractions is common (1/2 = 0.5, 1/5 = 0.2); in the US the numerator is standardized at 20 (1/2 = 20/40, 1/5 = 20/100), while in Britain the numerator 6 is common (1/2 = 6/12, 1/5 = 6/30).
The linear scales on the right side of the tables are not meant for clinical records. They are required for statistical manipulations, such as calculation of differences, trends, and averages, and are preferred for graphical presentations. They convert the logarithmic progression of visual acuity values to a linear one, based on the Weber-Fechner law, which states that proportional stimulus increases lead to linear increases in perception.
The logMAR scale is calculated as logMAR = log10(MAR) = log10(1/V) = -log10(V). LogMAR notation is widely used in scientific publications. Note that it is a scale of vision loss, since higher values indicate poorer vision. The value "0" indicates "no loss", that is, visual acuity equal to the reference standard (1.0, 20/20). Normal visual acuity (which is better than 1.0 (20/20)) is represented by negative logMAR values.
The VAS scale (VAS = Visual Acuity Score) serves the same purpose. Its formula is: VAS = 100 - 50 x logMAR = 100 + 50 x log10(V). It is more user-friendly, since it avoids decimal values, and more intuitive, since higher values indicate better vision. The score is easily calculated on ETDRS charts, where 1 point is credited for each letter read correctly. The VAS scale also forms the basis for the calculation of visual impairment ratings in the AMA Guides to the Evaluation of Permanent Impairment.
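These relationships can be made concrete in a short informative Python sketch, where V denotes the decimal visual acuity value:

import math

def logmar(v):
    # logMAR = -log10(V); higher values indicate poorer vision
    return -math.log10(v)

def vas(v):
    # VAS = 100 - 50 x logMAR = 100 + 50 x log10(V)
    return 100.0 + 50.0 * math.log10(v)

# For V = 0.5 (20/40, 6/12): logmar(0.5) -> 0.301, vas(0.5) -> 84.95
# For the reference standard V = 1.0 (20/20): logmar(1.0) -> 0.0, vas(1.0) -> 100.0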
Data input: Determine the notation used in the device and the values of the lines presented. No device will display all the values listed in each of the traditional columns. Convert these values to the decimal DICOM storage values shown on the left of the same row. DICOM values are not meant for data display. In the table, they are listed in scientific notation to avoid confusion with display notations.
In the unlikely event that a value must be stored that does not appear in the lookup table, calculate the decimal equivalent and round to the nearest listed storage value.
Data display: If the display notation is the same as the input notation, convert the DICOM storage values back to the original values. If the notation chosen for the display is different from the input notation, choose the value on the same row from a different column. In certain cases this may result in an unfamiliar notation; unfortunately, this is unavoidable, given the differences in size progressions between different charts. If a suffix (see attribute "Visual Acuity Modifiers" (0046,0135) ) is present, that suffix will be displayed as it was recorded.
Suffixes: Suffixes may be used to indicate steps that are smaller than a 1 line difference. On traditional charts, such suffixes have no defined numerical value. Suffixes +1, +2, +3 and -1, -2, -3 may be encountered. These suffixes do not correspond to a defined number of rows in the table.
The Traditional charts used in clinical practice are not standardized; they have an irregular progression of letter sizes and a variable number of characters per line. Measurement accuracy may further suffer from hidden errors that cannot be captured by any recording device, such as an inconsistent, non-standardized protocol, inaccurate viewing distance, inaccurate projector adjustment and contrast loss from room illumination. Therefore, the difference between two routine clinical measurements should not be considered significant, unless it exceeds 5 rows in the table (1 line on an ETDRS chart).
Table RR-1 contains many blank lines to make the vertical scale consistent with that used in Table RR-2. Notations within the same gray band are interchangeable for routine clinical use, since their differences are small compared to the clinical variability, which is typically in the order of 5 rows (1 ETDRS line).
ETDRS charts feature Sloan letters with proportional spacing, 5 letters on each line, and a logarithmic progression of letter sizes with consistent increments of approximately 25% per line (10 lines equal a factor 10x). The ETDRS protocol specifies letter-by-letter scoring, viewing distance, illumination, use of different charts for right and left eye and other presentation parameters.
The full ETDRS protocol requires a wide chart on a light box, and cannot be implemented on the limited screen of a projector (or similar) chart. The logarithmic progression, however, can be implemented on any device. This progression was first proposed by John Green in 1868 and follows the standard series of preferred numbers of ISO 3 (1973) and its rounding preferences.
Use of ETDRS charts allows use of letter-by-letter scoring, which is more accurate than the line-by-line scoring used on traditional charts. Each row in the table is equivalent to 1 letter on an ETDRS chart (50 letters for a factor 10x). These steps are smaller than the just discernible difference; steps this small only become significant in statistical studies where a large number of measurements is averaged.
The smaller steps for letter-by-letter scoring may be expressed in two ways: either by using suffixes to a familiar (sometimes slightly rounded) set of values, or by using calculated values. For clinical use, suffixes have the advantage of using only familiar acuity notations and reverting to the nearest clinical notation when the suffix is omitted. Calculated values look less familiar, but are sometimes used in statistical studies. Note that suffixes used in the context of an ETDRS chart have a defined value and affect the DICOM storage value, whereas suffixes used in the context of traditional charts do not.
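Since each ETDRS line is 0.1 logMAR and contains 5 letters, each letter corresponds to 0.02 logMAR (one row of the table). A minimal sketch of suffix handling under that assumption (the sign convention for suffixes is assumed):

def etdrs_logmar(line_logmar, suffix_letters):
    # suffix_letters: +n for extra letters read beyond the line, -n for letters missed;
    # each letter shifts logMAR by 0.02 (lower logMAR means better vision)
    return line_logmar - 0.02 * suffix_letters

# A 20/40 line (logMAR 0.3) recorded with suffix +2:
# etdrs_logmar(0.3, 2) -> 0.26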
The templates for the Colon CAD SR IOD are defined in Colon CAD SR IOD Templates in PS3.16. All relationships defined in the Colon CAD SR IOD templates are by-value. Content items referenced from another SR object instance, such as a prior Colon CAD SR, are inserted by-value in the new SR object instance, with appropriate original source observation context. It is necessary to update Rendering Intent, and referenced content item identifiers for by-reference relationships, within content items duplicated from another source.
The Document Root, Image Set Properties, CAD Processing and Findings Summary, and Summaries of Detections and Analyses sub-trees together form the content tree of the Colon CAD SR IOD. See Annex E for additional explanation of the Summaries of Detections and Analyses sub-trees.
The identification of a polyp within an image set is considered to be a Detection. The temporal correlation of a polyp in two image sets taken at different times is considered Analysis. This distinction is used in determining whether to place algorithm identification information in the Summary of Detections or Summary of Analyses sub-trees.
Once a Single Image Finding or Composite Feature has been instantiated, it may be referenced by any number of Composite Features higher in the CAD Processing and Findings Summary sub-tree.
Any content item in the Content tree that has been inserted (i.e., duplicated) from another SR object instance has a HAS OBS CONTEXT relationship to one or more content items that describe the context of the SR object instance from which it originated. This mechanism may be used to combine reports (e.g., Colon CAD SR 1, Colon CAD SR 2, Human).
The CAD Processing and Findings Summary section of the SR Document Content tree of a Colon CAD SR IOD may contain a mixture of current and prior single image findings and composite features. The content items from current and prior contexts are target content items that have a by-value INFERRED FROM relationship to a Composite Feature content item. Content items that come from a context other than the Initial Observation Context have a HAS OBS CONTEXT relationship to target content items that describe the context of the source document.
In Figure SS.2-1, Composite Feature and Single Image Finding are current, and Single Image Finding (from Prior) is duplicated from a prior document.
The following is a simple and non-comprehensive illustration of an encoding of the Colon CAD SR IOD for colon computer aided detection results. For brevity, some mandatory content items are not included.
A colon CAD device processes a typical screening colon case, i.e., there are several hundred images and no polyp findings. Colon CAD runs polyp detection successfully and finds nothing.
The colon radiograph resembles:
The content tree structure would resemble:
A colon CAD device processes a screening colon case with several hundred images, and a colon polyp detected. The colon radiograph resembles:
The content tree structure in this example is complex. Structural illustrations of portions of the content tree are placed within the content tree table to show the relationships of data within the tree. Some content items are duplicated (and shown in boldface) to facilitate use of the diagrams.
The content tree structure would resemble:
The patient in Example 2 returns for another colon radiograph. A more comprehensive colon CAD device processes the current colon radiograph, and analyses are performed that determine some temporally related content items for Composite Features. Portions of the prior colon CAD report (Example 2) are incorporated into this report. In the current colon radiograph the colon polyp has increased in size.
The CAD processing and findings consist of one composite feature, comprising single image findings, one from each year. The temporal relationship allows a quantitative temporal difference to be calculated:
The Stress Testing Report is based on TID 3300 “Stress Testing Report”. The first part of the report contains sections (containers) describing the patient characteristics (height, weight, etc.), medical history, and presentation at the time of the exam.
The next part describes the technical aspects of the exam. It includes zero or more findings containers, each corresponding to a phase of the stress testing procedure. Within each container may be one or more sub-containers, each associated with a single measurement set. A measurement set consists of measurements at a single point in time. There are measurement sets defined for both stress monitoring and for imaging.
The final part of the report includes a summary of significant findings or measurements, and any conclusions or recommendations.
The resulting hierarchical structure is depicted in Figure TT-1.
Ophthalmologists use OPT data to diagnose and characterize tissues and abnormalities in transverse and axial locations within the eye. For example, an ophthalmologist might request an OPT of the macula, the optic nerve or the cornea in either or both eyes for a given patient. Serial reports can be compared to monitor disease progression and response to treatment. OPT devices produce two categories of clinical data: B-scan images and tissue measurements.
Prior to interpreting an OPT B-scan (or set of B-scans), users must first determine if the study is of adequate quality to answer the diagnostic question. Examples of inadequate studies include:
In some cases, inadequate images can be corrected by capturing another scan in the same area. However, in other cases, the patient's eye disease interferes with visualization of the tissues of interest making adequate image quality impossible. Ideally, when choosing between multiple scans of the same tissue area, physicians would have access to information about the above questions so they can select only the best scan(s).
The physician may then choose to view and assess each B-scan in the data set individually. When assessing OPT B-scans, ophthalmologists often identify normal or expected tissue boundaries first, then proceed to identify abnormal interfaces or structures next. The identification of pathology is both qualitative (i.e., does a structure exist) and quantitative (i.e., how thick is it). If previous scans are present for this patient, the physician may choose to compare the most recent scan data with prior visits. Due to workflow constraints, it may be difficult for B-scan interpretations to happen on the same machine that captures the images. Therefore, remote image assessment, such as image viewing in the examining room with the patient, is optimal.
In addition to viewing B-scan image data, clinicians also use quantitative measurements of tissue thicknesses or volumes extracted automatically from the OPT images. As with image quality, the accuracy of automated segmentation must be assessed prior to use of the numerical measurements based on these boundaries. This is typically accomplished by visual inspection of boundary lines placed on the OPT images but also can be inferred from analysis confidence measurements provided by the device software. In addition to segmentation accuracy, it is also important to determine if the region of interest has been aligned appropriately with the intended sampling area of the OPT.
The analysis software application segments OPT images using the raw data of the instrument to quantify tissue optical reflectivity and location in longitudinal scan or B-scan images. Many boundaries can be identified automatically with software algorithms, see Figure UU.3-1.
The innermost (anterior) layer of the retina, the internal limiting membrane (ILM) is often intensely hyperreflective and defines the innermost border of the nerve fiber layer. The nerve fiber layer (NFL) is bounded posteriorly by the ganglion cell layer and is not visible within the central foveal area. In high quality OPT scans, the sublamina of the inner plexiform layer may be identifiable. The external limiting membrane is the subtle interface between the outer nuclear layer and the photoreceptors. The junction between the photoreceptor inner segments and outer segments (IS/OS junction) is often intensely hyperreflective and in time domain OPT systems, was thought to represent the outermost boundary of the retina. Current thought, however, suggests that the photoreceptors extend up to the next bright interface, often referred to as the retinal pigment epithelium (RPE) interdigitation. This interface may be more than 35 micrometers beyond the IS/OS junction. When three high intensity lines are not present under the retina, however, this interdigitation area may not be visible. The next bright region typically represents the RPE cell bodies, which consist of a single layer of cuboidal cells with reflective melanosomes oriented at the innermost portion of the cells. Below the RPE cells is a structure called Bruch's membrane, which is contiguous with the outer RPE cell membrane.
The axial thickness and volume of tissue layers can be measured using the boundaries defined above. For example, the nerve fiber layer is typically measured from the innermost ILM interface to the interface of the NFL with the retina. Time domain OPT systems measure retinal thickness as the axial distance between the innermost ILM interface and the IS/OS junction. However, high resolution OPT systems now offer the potential to measure true retinal thickness (ILM to outermost photoreceptor interface) in addition to variants that include tissue and fluid that may intervene between the retina and the RPE. The RPE layer is measured from the innermost portion of the RPE cells, which is the hyper reflective melanin-containing layer to the outermost highly reflective interface. Pathologic structures that may intervene between normal tissue layers may obscure their appearance but often can be measured using the same methods as normal anatomic layers.
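As an informative sketch, a per-A-scan layer thickness can be derived from two segmented boundaries as follows (the boundary depths in pixels and the axial pixel spacing in micrometers are assumed inputs):

import numpy as np

def layer_thickness_um(anterior_px, posterior_px, axial_spacing_um):
    # anterior_px, posterior_px: per-A-scan boundary depths in pixels,
    # e.g., the ILM and the outermost photoreceptor interface for retinal thickness
    return (np.asarray(posterior_px) - np.asarray(anterior_px)) * axial_spacing_um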
The macular grid is based upon the grid employed by the Early Treatment of Diabetic Retinopathy Study (ETDRS) to measure area and proximity of macular edema to the anatomic center of the macula, also called the fovea. This grid was developed as an overlay for use with 35 mm film color transparencies and fluorescein angiograms in the seminal trials of laser photocoagulation for the treatment of diabetic retinopathy. Subsequently, this grid has been in common use at reading centers since the 1970s, has been incorporated into ophthalmic camera digital software, and has been employed in grading other macular disease in addition to diabetic retinopathy. This grid was slightly modified for use in Time Domain OPT models developed in the 1990s and early 2000s in that the dimensions of the grid were sized to accommodate a 6 mm diameter sampling area of the macula.
The grid for macular OPT is bounded by a circular area with a diameter of 6 mm. The center point of the grid is the center of the circle. The grid is divided into 9 standard subfields. The center subfield is a circle with a diameter of 1 mm. The grid is divided into 4 inner and 4 outer subfields by a circle concentric to the center with a diameter of 3 mm. The inner and outer subfields are each divided by 4 radial lines extending from the center circle to the outermost circle, at 45, 135, 225, and 315 degrees, transecting the 3 mm circle in four places. Each of the 4 inner and 4 outer subfields is labeled by its orientation relative to the center of the macula - superior, nasal, inferior, and temporal. For instance, the superior inner subfield is the region bounded by the center circle, the 3 mm circle, the 315 degree radial line, and the 45 degree radial line. The nasal subfields are those oriented toward the midline of the patient's face, nearest to the optic nerve head. The grids for the left and right eyes are reversed with respect to the positions of the nasal and temporal subfields - in viewing the grid for the left eye along the antero-posterior (Z) axis, the nasal subfields are on the left side, and in the right eye the nasal subfields are on the right side (nasal as determined by the location of the subfield closest to the nose).
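The subfield geometry lends itself to a simple point classification. The following informative Python sketch assumes +y toward superior and +x toward nasal, with laterality (which screen direction is nasal) resolved by the caller:

import math

def etdrs_subfield(x_mm, y_mm):
    # Classify a point, relative to the foveal center, into one of the 9 subfields
    r = math.hypot(x_mm, y_mm)
    if r > 3.0:
        return "outside grid"          # beyond the 6 mm diameter boundary
    if r <= 0.5:
        return "center"                # 1 mm diameter central circle
    ring = "inner" if r <= 1.5 else "outer"  # 3 mm circle separates inner/outer
    theta = math.degrees(math.atan2(y_mm, x_mm)) % 360.0
    if 45.0 <= theta < 135.0:
        quadrant = "superior"
    elif 135.0 <= theta < 225.0:
        quadrant = "temporal"          # opposite the nasal (+x) direction
    elif 225.0 <= theta < 315.0:
        quadrant = "inferior"
    else:
        quadrant = "nasal"
    return quadrant + " " + ring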
The OPT macula thickness report consists of the thickness at the center point of the grid, and the mean retinal thickness calculated for each of the 9 subfields of the grid. In the context of the macular disease considered for the diagnosis, and qualitative interpretation of morphology from examination and OPT and/or other modalities, the clinician uses the macula thickness report to determine if the center and the grid subfield averages fall outside the normative range. Monitoring of macular disease by serial grid measurements allows assessment of disease progression and response to intervention. Serial measurements are assessed by comparing OPT thickness or volume reports, provided that the grids are appropriately centered upon the same location in the macula for each visit.
The center point of the grid should be aligned with the anatomic center of the macula, the fovea. This can be approximated by having the patient fixate upon a target coincident with the center of the grid. However, erroneous retinal thickness measurements are obtained when the center of the grid is not aligned with the center of the macula. This may occur in patients with low vision that cannot fixate upon the target, or in patients that blink or move fixation during the study. To determine the expected accuracy of inter-visit comparisons, clinicians would benefit from knowing the alignment accuracy of the OPT data from the two visits. Ophthalmologists may also want to customize locations on the fundus to be monitored at each visit.
The following figure illustrates how the content items of the Macular Grid Thickness and Volume Report are related to the ETDRS Grid. The figure is not drawn to scale.
The process of evaluating diabetic macular edema (DME) helps illustrate the role of the OPT macula thickness report. In diabetic macular edema there is a breakdown in the blood-retina barrier, which can lead to focal and/or diffuse edema (or thickening) of the macula. The report of the thickness of each subfield area of the macula grid helps direct treatment. For instance, laser treatment to a specific thickened quadrant would be expected to reduce the thickness of retina in the treated zone. Serial comparisons of OPT thicknesses should demonstrate a reduction in thickness in the successfully treated zone. A zone that subsequently became thicker on follow-up scans may warrant further treatment. In addition to an expected local response to specific zonal treatment such as laser, there are treatments with drugs and biologics that are less localized. For instance, the injection of intravitreal drugs in a successfully treated eye would be expected to produce a global reduction of thickness in all zones with DME. Patients with severe retinal disease may lose the ability to fixate, making the acquisition of OPT images that represent a specific zone less reliable.
Figure VV.1-1 is an outline of the Pediatric, Fetal and Congenital Cardiac Ultrasound Reports.
The common Pediatric, Fetal and Congenital Cardiac Ultrasound measurement pattern is a group of measurements obtained in the context of a protocol. Figure VV.2-1 shows the pattern.
Because of the wide variety of congenital issues in fetal and pediatric cardiology, DICOM identifies these findings primarily with post-coordination. The concept name of the base content item typically specifies a property, which then requires an anatomic site concept modifier.
Further qualification specifies the image mode and the image plane using HAS ACQ CONTEXT with the value sets shown below.
This annex holds examples of audit messaging, as described by the Audit Trail Message Format Secure Use Profile in PS3.15.
An example of one of the DICOM Instances Transferred messages is shown in Example WW.1-1.
Example WW.1-1. Sample Audit Event Report
<?xml version="1.0" encoding="UTF-8"?>
<AuditMessage
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:noNamespaceSchemaLocation="D:\data\DICOM\security\audit-message.rnc">
<EventIdentification
EventActionCode="C"
EventDateTime="2001-12-17T09:30:47"
EventOutcomeIndicator="0">
<EventID code="110104"
codeSystemName="DCM"
displayName="DICOM Instances Transferred"/>
</EventIdentification>
<ActiveParticipant
UserID="123"
AlternativeUserID="AETITLE=AEFOO"
UserIsRequestor="false"
NetworkAccessPointID="192.168.1.2"
NetworkAccessPointTypeCode="2">
<RoleIDCode
code="110153"
codeSystemName="DCM"
displayName="Source Role ID"/>
</ActiveParticipant>
<ActiveParticipant
UserID="67562"
AlternativeUserID="AETITLE=AEPACS"
UserIsRequestor="false"
NetworkAccessPointID="192.168.1.5"
NetworkAccessPointTypeCode="2">
<RoleIDCode
code="110152"
codeSystemName="DCM"
displayName="Destination Role ID"/>
</ActiveParticipant>
<ActiveParticipant
UserID="smitty@readingroom.hospital.org"
AlternativeUserID="smith@nema"
UserName="Dr. Smith"
UserIsRequestor="true"
NetworkAccessPointID="192.168.1.2"
NetworkAccessPointTypeCode="2">
<RoleIDCode
code="110153"
codeSystemName="DCM"
displayName="Source Role ID"/>
</ActiveParticipant>
<AuditSourceIdentification
AuditEnterpriseSiteID="Hospital"
AuditSourceID="ReadingRoom">
<AuditSourceTypeCode code="1"/>
</AuditSourceIdentification>
<ParticipantObjectIdentification
ParticipantObjectID="1.2.840.10008.2.3.4.5.6.7.78.8"
ParticipantObjectTypeCode="2"
ParticipantObjectTypeCodeRole="3"
ParticipantObjectDataLifeCycle="1">
<ParticipantObjectIDTypeCode
code="110180"
codeSystemName="DCM"
displayName="Study Instance UID"/>
<ParticipantObjectDescription>
<MPPS UID="1.2.840.10008.1.2.3.4.5"/>
<Accession Number="12341234" />
<SOPClass UID="1.2.840.10008.5.1.4.1.1.2" NumberOfInstances="1500"/>
<SOPClass UID="1.2.840.10008.5.1.4.1.1.11.1" NumberOfInstances="3"/>
</ParticipantObjectDescription>
</ParticipantObjectIdentification>
<ParticipantObjectIdentification
ParticipantObjectID="ptid12345"
ParticipantObjectTypeCode="1"
ParticipantObjectTypeCodeRole="1">
<ParticipantObjectIDTypeCode code="2"/>
<ParticipantObjectName>John Doe</ParticipantObjectName>
</ParticipantObjectIdentification>
</AuditMessage>
The message describes a study transfer initiated at the request of Dr. Smith on the system at the IP address 192.168.1.2 to a system at IP address 192.168.1.5. The study contains 1500 CT SOP Instances and 3 GSPS SOP Instances. The audit report came from the audit source "ReadingRoom".
The following is an example of audit trail message use in a hypothetical workflow. It is not intended to be all-inclusive, nor does it cover all possible scenarios for audit trail message use. There are many alternatives that can be utilized by the system designer, or that could be configured by the local site security administrator to fit security policies.
As this example scenario begins, an imaging workstation boots up. During its start up process, a DICOM-enabled viewing application is launched by the start up sequence. This triggers an Application Activity message with the Event Type Code of (110120, DCM, "Application Start").
After start up, a curious, but unauthorized visitor attempts to utilize the reviewing application. Since the reviewing application cannot verify the identity of this visitor, the attempt fails, and the reviewing application generates a User Authentication message, recording the fact that this visitor attempted to enter the application, but failed.
Later, an authorized user accesses the reviewing application. Upon successfully identifying the user, the reviewing application generates a User Authentication message indicating a successful login to the application.
The user, in order to locate the data of a particular examination, issues a query, which the reviewing application directs to a DICOM archive. The details of this query are recorded by the archive application in a Query message.
The reviewing application, in delivering the results of the query to the user, displays certain patient related information. The reviewing application records this fact by sending a Patient Record message that is defined by some other standard. Audit logs will contain messages specified by a variety of different standards. The MSG-ID field is used to aid the recognition of the defining standard or proprietary source documentation for a particular message.
From the query results, the user selects a set of images to review. The reviewing application requests the images from the archive, and records this fact in a Begin Transferring Instances message.
The archive application locates the images, sends them back to the reviewing application, and records this fact in an Instances Transferred message.
The reviewing application displays the images to the user, recording this fact via an Instances Accessed message.
During the reviewing process, the user looks up details of the procedure from the hospital information system. The reviewing application performs this lookup using HL7 messaging, and records this fact in a Procedure Record message.
The user decides that a follow-up examination is needed, and generates a new order via HL7 messaging to the hospital information system. The reviewing application records this in an Order Record message.
The user decides that a second opinion is desirable, and selects certain images to send to a colleague in an e-mail message. The reviewing application records the fact that it packaged and sent images via e-mail in an Export message.
Many metabolic/contrast agents require more than just simple imaging to provide data for decision making. Rather than just detecting the presence or absence of the metabolic/contrast agents, calculations based on relative uptake rates, or decay rates, comparisons with previous or neighboring data, fusion of data from multiple sources or time points, etc. may be necessary to properly evaluate image data with these metabolic/contrast agents. Often the nature of this processing is closely related to the type of agent, the anatomy, and the disease process being targeted. The processing may be so specific that the general-purpose image processing features found on medical imaging workstations are inadequate to properly perform the procedure. The effective use of a particular agent for a particular procedure may depend on having properly tuned, targeted post-processing. Both the algorithms used, as well as the workflow in performing the analysis, may be customized for performing procedures with a particular agent.
The stakeholders interested in developing such agent- and exam-specific post-processing applications may have a vested interest in ensuring that such post-processing applications can run on a wide variety of systems. The standard post-processing software API outlined in PS3.19 could simplify the distribution of such agent-specific analysis applications. Rather than creating multiple versions of the same application, each version targeted to a particular medical imaging vendor's system, the application developer need only create a single version of the application, which would run on any system that implemented the standard API.
Differences in physical characteristics, acquisition technique and equipment, and user preference affect image quality and processing requirements. By allowing the sharing of applications based on device-independent (or conversely, device-specific) procedures, Hosted Application technology can help reduce these differences to a minimum.
A common API for Application Hosting facilitates multi-site research.
Site-specific problems: The development of molecular imaging applications can be accelerated by multi-site cooperation in the validation of new algorithms and software. However, the run-time environment and tools available at one site typically are not matched identically at other sites, hampering the sharing of applications between sites. One cannot simply take an application written at one site and make it run at another site without major software work involving the installation and configuration of multiple tool packages. Even after installing the needed tools and libraries, software developed at one site may try to access facilities that are unavailable at the other site, for example, facilities to store, access, and organize the image data. Often the data formats that applications from one site expect are incompatible with the data formats available at other sites. Having a standard API could help minimize these data incompatibilities.
Gap between research and clinical environments: The initial versions of agent-specific applications are typically created in a research environment, and are not easily accessible in the clinical environment. The early experimental work generally is done by exporting the image data out of the clinical environment to research workstations, and then importing the results back into the clinical system once the analysis is done. While exporting and importing the images may be sufficient for the early research work, clinical acceptance of an application can be significantly enhanced if that application can run in the same clinical environment where the images are collected, in order to better fit into the clinical workflow.
The problem of mismatched run-time environments becomes even more acute when attempting to run the typical research application on a production clinical workstation. Due to a variety of legal and commercial concerns, vendors of the systems utilized in the clinical environment generally do not support running unknown software, nor do most commercial vendors have the time or resources to assist the hundreds of researchers who may wish to port a particular application to that vendor's system. Even if researchers manage to load an experimental program onto a clinical system, the experimental program rarely has direct access to the data stored on that clinical system, nor can it directly store results back into the system's clinical database. Without a single standard interface, users have to resort to cumbersome and time-consuming export and import routines to be able to run research programs on clinical data. It is expected that the constrained environment that a standard API provides would be simpler to validate, particularly if it is universally deployed by multiple vendors, and could lessen the burden on any individual system vendor.
Computer Aided Diagnosis and Decision Making (CAD) is becoming more prevalent in radiology departments. Many classes of exams now routinely go through a computer screening process prior to reading. One potential barrier to more widespread use of CAD screening is that the various vendors of CAD applications typically only allow their applications to run on servers or workstations provided by those companies. A clinical site that wishes to utilize, for example, mammo CAD from one vendor and lung CAD from another often is forced to acquire two different servers or workstations from the two different vendors.
The Hosted Application concept described in PS3.19 could be used to facilitate the running of multiple CAD applications from multiple vendors on the same computer system.
As medical imaging technology progresses, new modalities are added to the standard. For example, vessel wall detection in intravascular ultrasound is often easier if the images are left in radial form. Unfortunately, most DICOM workstations would not know how to deal with images in such a format, even though the workstation might recognize that it is an image.
One possible solution is for a workstation to seek out an appropriate Hosted Application for handling Modalities or SOP classes that it does not recognize. This would allow for automatic handling of all image types by a generic imaging platform. Similarly, SOP Classes, even private SOP Classes, could be created that depend on particular Hosted Applications to prepare data for display.
Another natural use for such a standardized API is the creation of exam-specific analysis and measurement programs for the creation of Evidence Documents (Structured Reports). The standardized API would allow the same analysis program to run on a variety of host systems, reducing the amount of development needed to support multiple platforms.
Often the regulatory approval for CAD systems includes the method by which the CAD marks are presented to the user. Providers of CAD systems have used dedicated workstations for such display in the past in order to ensure that the CAD marks are presented as intended. If there were a suitable standardized API for launching hosted applications, a Hosted Application could handle the display of CAD results on any workstation that supports that standardized API.
Presentation States may contain Compound Graphics and combined graphic objects. Two illustrative examples are given in this informative annex to explain these two concepts.
First, an example of a Compound Graphic is given (an AXIS object); second, an example of a combined graphic object is given (a distance line).
The rendered appearance of the Compound Graphics (such as illustrated in Figure YY-1) is a recommendation and is not mandatory. For example, the Compound Graphic 'AXIS' may look slightly different on different viewing workstations.
The AXIS from Figure YY-1 is defined in the following Compound Graphic Sequence (0070,0209) (see the following Table YY-1). An AXIS object is typically used for measurement purposes.
The following table shows the simple graphic objects for an axis. The breakdown of the axis into simple graphics is up to the implementation. The Compound Graphic Instance ID (0070,0226) is used to relate the compound and the simple representation. To keep the example short only the first major tick is shown.
Now, a distance line is defined as a combined graphic object, i.e., grouping a text object with a polyline graphic object (see Figure YY-2). Distance lines are typically used for measurements and for computing the grayscale values along this line to build up a profile curve.
This simple example is intended to show how the Graphic Group ID (0070,0295) is used for grouping of graphic annotations.
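To make the grouping concrete, the following minimal pydicom sketch encodes a distance line as a polyline graphic object and a text object sharing one Graphic Group ID (0070,0295); the coordinates, layer name, and measured value are placeholders:

from pydicom.dataset import Dataset

annotation = Dataset()  # one item of the Graphic Annotation Sequence (0070,0001)
annotation.GraphicLayer = "MEASUREMENT"

line = Dataset()  # the polyline part of the distance line
line.GraphicAnnotationUnits = "PIXEL"
line.GraphicDimensions = 2
line.NumberOfGraphicPoints = 2
line.GraphicData = [100.0, 100.0, 200.0, 180.0]  # start and end point
line.GraphicType = "POLYLINE"
line.GraphicGroupID = 1  # ties the line to its label

label = Dataset()  # the text part showing the measured value
label.AnchorPointAnnotationUnits = "PIXEL"
label.UnformattedTextValue = "64.2 mm"
label.AnchorPoint = [150.0, 135.0]
label.AnchorPointVisibility = "N"
label.GraphicGroupID = 1  # same group as the polyline

annotation.GraphicObjectSequence = [line]
annotation.TextObjectSequence = [label]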
In this section, the usage of mating features for assembly of implants is declared.
These Attributes establish a Cartesian coordinate system relative to the Frame of Reference of the implant. When two implants are assembled using a pair of mating features, a rigid spatial registration can be established that transforms one Frame of Reference so that the mating features align. The figure below gives a simple 2D example of how two implants (symbolized by two rectangles) are matched according to a mating feature pair. For each 2D and 3D template present, a set of coordinates is assigned to each Mating Feature Sequence Item.
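As an informative sketch of how such a registration might be computed: if each mating feature's coordinates and axes are assembled into a 4x4 homogeneous matrix in its implant's Frame of Reference (an assumed convention, not a normative algorithm), the rigid transform that maps component B's Frame of Reference into component A's so that the two features align is:

import numpy as np

def mating_transform(feature_in_a, feature_in_b):
    # feature_in_a, feature_in_b: 4x4 homogeneous matrices giving each mating
    # feature's position and orientation in its own implant's Frame of Reference.
    # Returns T such that applying T to B's Frame of Reference aligns the features.
    return feature_in_a @ np.linalg.inv(feature_in_b)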
It is recommended to give Mating Features that are related in some way the same Mating Feature ID (0068,63F0) in different implant templates. This may help applications switch between components while keeping connections to other components. The example in Figure ZZ.1-2 shows that the first and the last hole in the plates get the same Mating Feature ID in each Template.
The Mating Features are organized in sets of alternative features: Only one feature of any set shall be used for assembly with other components in one plan. This enables the definition of variants for one kind of contact a component can make while ensuring consistent plans.
An example of Mating Feature Sets is shown in Figure ZZ.1-3. A hip stem template shows a set of five mating features, drawn as circles on the tip of its cone. Different head components use different mating points, depending on the base radius of the conic intake on the head.
For each Item of the Mating Feature Sequence (0068,63E0), degrees of freedom can be specified. A degree of freedom is defined by one axis, and can be either rotational or translational. For each 2D and 3D template present, the geometric specifications of the mating points can be provided.
Instances of the Implant Assembly Template IOD are utilized to define intended combinations of implant templates. An Implant Assembly Template consists of a sequence of component type definitions (Component Type Sequence (0076,0032)) that references Implant Template Instances and assigns roles to the referenced implants. In the example in Figure ZZ.1-4, the component types "Stems" and "Heads" are defined. Four different stems and two different heads are referenced. Both groups are flagged mandatory and exclusive, i.e., a valid assembly requires exactly one representative of each group.
The Component Assembly Sequence (0076,0060) declares possible connections between components referenced by the component groups. Each sequence item refers to exactly two implant templates that are part of at least one component group in the same Implant Assembly Template Instance. A Component Assembly Sequence Item references one mating feature in each of the templates, according to which the assembly is geometrically constrained. The double-pointed dashed lines in Figure ZZ.1-4 represent the Items of the Component Assembly Sequence.
Registration of implant templates with patient images according to anatomical landmarks is one of the major features of implantation planning. For that purpose, geometric features can be attached to Implant Template Instances. Three kinds of landmarks are defined: Points, lines, and planes. Each landmark consists of its geometric definition, which is defined per template, and a description.
When registering an Implant Template to a patient data set such as an Image or a Surface Segmentation, the planning software should establish a spatial transformation that matches the planning landmarks to corresponding geometric features in the patient data set.
In this section, an example is presented that shows the usage of Implant Templates together with an Implant Assembly Template to create an Implantation Plan with patient images. The example is in 2D but can easily be extended to 3D as well. The example looks at a simplified case of hip reconstruction planning, using a monoblock stem component and a monoblock cup component.
Planning consists of two steps. The first step is selection and placement of the best-fitting cup from the cups referenced by the Assembly Template, based on the dimensions of the patient's hip. With that done, a stem is selected that can be mated with the selected cup and has a neck configuration that leads to an optimal outcome with regard to leg length and other parameters. For this purpose, the available stems are placed so that the mating features align. The femoral planning landmarks are used to calculate the displacement of the femur this configuration would result in. The workflow is shown in the following set of figures.
In the first step, the planning landmarks marked with the green arrows in Figure ZZ.3-2 are aligned with the corresponding positions in the patient's x-ray.
In the second step, the femoral length axis is detected from the patient's x-ray and the stem template is aligned accordingly using the femoral axis landmark. The proximal and distal fixation boundary planes are used to determine the insertion depth of the stem along that axis.
In the third step, the image is split into a femoral and a pelvic part according to the proposed resection plane of the stem template. The mating features are used to calculate the spatial relation between the femoral and the pelvic component.
The hip joint has, of course, several degrees of freedom. The Implant Template should contain this information in the Mating Features. In the given 2D projections, the rotational freedom of the joint is expressed by a single rotation around the axis of projection intersecting the image plane at the 2D coordinate of the Mating Feature. Therefore, a Degree Of Freedom Sequence Item is added to the stem.
In planning, this information could be used to visualize the rotational capacities of the joint after implantation.
Technically, the degree of freedom could also have been added to the cup, or even (each with half the range of freedom) to both. But since we are used to seeing the femur rotate with respect to the pelvis and not the other way around, it seemed natural to assign it to the stem.
The Templates used in the example can be encoded as follows:
The Generic Implant Module contains several Attributes to express the relations between different versions of implant templates. These Attributes are
Different versions of Implant Templates reflect the changes a manufacturer makes to the Implant Templates it issues. The Implant Templates that are issued by a manufacturer (or a third party acting on behalf of the manufacturer) are always ORIGINAL. Software vendors, PACS integrators, or other stakeholders may add information to such templates for different purposes. This process is called derivation, and the resulting Instances are labeled DERIVED. Implantation Plans, i.e., electronic documents describing the result of implantation planning, are specified in an instance of the Implantation Plan SR Document; there, the implants that are relevant for one plan are included by reference. When such plans are exchanged between systems or organizations, it is likely that the receiving party has access to different versions of the templates than the sending party. In order to maintain readability of exchanged plans, the following is required:
All necessary information about an implant that is relevant to display and understand a plan is present in the ORIGINAL Implant Templates that were issued by a manufacturer. This is assured by these Attributes being Type 1 in the IOD.
When deriving Instances, information may only be added but not removed from the ORIGINAL Instance. This information may be encoded in standard or private Tags.
Derived Instances contain information about the source Instances they were derived from. All Instances contain a reference to the ORIGINAL Instance they were derived from. If an application receives a plan that references an implant template it does not have in its database, it will also find the UID of the ORIGINAL Instance in the plan. It can then query its database for an Instance that was derived from that ORIGINAL Instance and thereby find an Instance it can use to present the plan, as in the sketch below.
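The resulting fallback lookup might resemble this hypothetical Python sketch (the database interface is illustrative only, not defined by the Standard):

def resolve_template(plan_ref_uid, original_uid, db):
    # Prefer the exact Instance referenced by the plan; otherwise fall back to
    # any locally available Instance derived from the same ORIGINAL Instance
    template = db.get(sop_instance_uid=plan_ref_uid)
    if template is None:
        template = db.find_derived_from(original_uid=original_uid)
    return template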
Figure ZZ.5-1 shows an example of the relationships between two versions of a manufacturer's Implant Template and several different Implant Templates derived by software vendors from these versions.
For the implantation of bone-mounted implants, information that has been generated during the implantation planning phase is needed in the OR. To convey this information to the OR, this supplement to the DICOM Standard introduces the DICOM format for the results of an implantation planning activity referring to implant templates. An Implantation Plan SR Document can be used by surgeons and navigation devices, and for documentation purposes. The Plan contains relevant intraoperative information concerning the assembly of the implant components, resection lines, registration information, and relevant patient data. Thus, the Implantation Plan SR Document can help to enhance information logistics within the workflow. It does not contain any information about the planned surgical workflow. This information may be addressed by other DICOM Supplements. Nevertheless, this SR document may reference or be referenced by objects containing workflow information.
Additionally, once an implantation plan has been generated, it can be used as input for a planning application to facilitate adaptation of a plan in cases where this is necessary due to unforeseen situations.
The workflow is considered to be the following:
A planning application helps the user to perform implantation planning; the user can choose the optimal implant for a patient using implant templates from a repository, and aligns the implant template with patient data with or without the help of the application. (Planning without patient data can be stored in the Implantation Plan SR Document as well.)
Subsequently, an Implantation Plan SR Document Instance will be created that contains the results of the planning. No information about the process itself (previously chosen implant templates, methods, etc.) will be stored. However, an Implantation Plan SR Document is considered to contain the important parameters needed to retrace a planning result.
An Implantation Plan SR Document consists of two main components (see Figure AAA.1-1). The implant component selection is used to point to a selected implant template in the repository, whereas the assembly is used to describe the composition of the selected implant templates. Figure AAA.2-1 shows how the Implantation Plan SR Document parts make references to the implant templates. Each Implantation Plan SR Document can contain a single implant component selection and several assemblies, but it describes only one planning result for one particular patient.
The recipient of the Implantation Plan SR Document can decide whether to read only the "list" of used implants or to go into detail and read the compositions as well. In both cases, the recipient must have access to the repository of the Implant Templates to get detailed information about the implant (such as its geometry).
The following structure shows the main content of an Implantation Plan SR Document. As can be seen in Figure AAA.1-1, the Implantation Plan consists mainly of the selected Implant Components and their Assemblies.
The Implantation Plan SR Document is tightly related to Implantation Templates (see PS3.3 and PS3.16). The following Figure AAA.2-1 shows the relationship between the Implant Templates and the Implantation Plan.
The following example shows the planning result of a simple THR (Total Hip Replacement) without any registration information. One Patient Image was used and one visualization was produced. One Femoral Stem, one Femoral Head, one Acetabular Bearing Insert and one Acetabular Fixation Cup were selected to be implanted (see Figure AAA.3-1).
The following example shows the result of a planning activity for a dental implantation using a dental drilling template. The implant positioning is based on a CT scan during which the patient was wearing a bite plate with 3 markers. In this example the markers (visible in the patient's CT images) are detected by the planning application. After the implants have been positioned, the bite plate, in combination with the registration information of the implants, can be used to produce the dental drilling template.
In the following example, two implants are inserted that are not assembled using Mating Points.
The markers of the bite plate are identified and stored as 3 Fiducials in one Fiducial Set. This Fiducial Set has its own Frame of Reference (1.2.3.4.100).
The Registration Object created by the planning application uses the patient's CT Frame of Reference as main Frame of Reference (see Figure AAA.4-1).
This annex provides examples of message sequencing when using the Unified Procedure Step SOP Classes in a radiotherapy context. This section is not intended to provide an exhaustive set of use cases but rather an informative example. There are other valid message sequences that could be used to obtain an equivalent outcome and there are other valid combinations of actors that could be involved in the workflow management.
The current use cases assume that tasks are always scheduled by the scheduler prior to being performed. They do not address the use case of an emergency or otherwise unscheduled treatment, where the procedure step will be created by a different device. However, Unified Procedure Step does provide a convenient mechanism for doing this.
The use cases addressed in this annex are:
Treatment Delivery Normal Flow - Treatment Delivery System (TDS) performs the treatment delivery that was scheduled by the Treatment Management System (TMS). Both the "internal verification" and "external verification" flavors are modeled in these use cases.
Treatment Delivery - Override or Additional Information Required. Operating in the external verification mode, the Machine Parameter Verifier (MPV) detects an out-of-tolerance parameter or missing information, and requests the user to override the parameter or supply or correct the missing information. This use case addresses the situation where the 'verify' function is split from the TDS, but does not address verification of a subset of parameters by an external delivery accessory such as a patient positioner.
The following actors are used in the use cases below:
Archive: Stores SOP Instances (images, plans, structures, dose distributions, etc.).
Treatment Management System (TMS): Manages worklists and tracks performance of procedures. This role is commonly filled by a Treatment Management System (Oncology Information System) in the Oncology Department. Acts as a UPS Pull SCP. The TMS has a user interface that may potentially be located in the treatment delivery control area. In addition, TMS terminals may be located throughout the institution.
Treatment Delivery System (TDS): Performs the treatment delivery specified by the worklist, updating a UPS, and stores treatment records and related SOP Instances such as verification images. Acts as a UPS Pull SCU. The TDS user interface is dedicated to the safe and effective delivery of the treatment, and is located in the treatment control area, typically just outside the radiation bunker.
Machine Parameter Verifier (MPV): Oversees and potentially inhibits delivery of the treatment. This role is commonly filled by a Treatment Management System in the Oncology Department, when the TDS is in the external verification mode. The MPV does not itself act as a UPS Pull SCU, but communicates directly with the TDS, which acts as a UPS Pull SCU. The MPV user interface may be shared with the TMS (in the treatment delivery control area), or could be located on a separate console.
Figure BBB.3.1.1-1 illustrates a message sequence example in the case where a treatment procedure delivery is requested and performed by a delivery device that has internal verification capability. In the example, no 'setup verification' is performed, i.e., the patient is assumed to be in the treatment position. Unified Procedure Step (UPS) is used to request delivery of a session of radiation therapy (commonly known as a "fraction") from a specialized Application Entity (a "Treatment Delivery System"). That entity performs the requested delivery, completing normally. Further examples could be constructed for discontinued, emergency (unscheduled) and interrupted treatment delivery use cases, but are not considered in this informative section (see DICOM Part 17 for generic examples).
In this example the Treatment Delivery System conforms to the UPS Pull SOP Class as an SCU, and the Treatment Management System conforms to the UPS Pull SOP Class as an SCP. In alternative implementations requiring on-the-fly scheduling and notification, other UPS SOP classes could be implemented.
Italic text in Figure BBB.3.1.1-1 denotes messages that will typically be conveyed by means other than DICOM services.
This section describes in detail the interactions illustrated in Figure BBB.3.1.1-1.
'List Procedures for Delivery' on TDS console.
The User uses a control on the user interface of the TDS to indicate that he or she wishes to see the list of patients available for treatment.
The TDS queries the TMS for Unified Procedure Steps (UPSs) matching its search criteria. For example, all worklist items with a Unified Procedure Step Status of "SCHEDULED", and Input Readiness State (0040,4041) of "READY". This is conveyed using the C-FIND request primitive of the UPS Pull SOP Class.
The TDS receives the set of Unified Procedure Steps (UPSs) resulting from the Query UPS message. The Receive UPS is conveyed via one or more C-FIND response primitives of the UPS Pull SOP Class. Each response (with status pending) contains the requested attributes of a single Unified Procedure Step (UPS).
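Sketched with pynetdicom (an open-source implementation choice, not something the Standard mandates), Steps 2 and 3 together might look as follows. The TMS host name and the small set of matching and return keys are assumptions for illustration, and the association is assumed to stay open for the later steps.

```python
from pydicom.dataset import Dataset
from pynetdicom import AE
from pynetdicom.sop_class import UnifiedProcedureStepPull

ae = AE(ae_title="TDS")
ae.add_requested_context(UnifiedProcedureStepPull)

# Matching keys (SCHEDULED and READY) plus a few requested return attributes.
query = Dataset()
query.ProcedureStepState = "SCHEDULED"   # Unified Procedure Step Status
query.InputReadinessState = "READY"      # (0040,4041)
query.PatientName = ""
query.SOPInstanceUID = ""                # UID of each matching UPS

assoc = ae.associate("tms.example.org", 11112)   # hypothetical TMS address
worklist = []
if assoc.is_established:
    for status, identifier in assoc.send_c_find(query, UnifiedProcedureStepPull):
        if status and status.Status in (0xFF00, 0xFF01):   # one pending match
            worklist.append(identifier)
    # The association is kept open for the N-GET/N-ACTION/N-SET steps below.
```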
The TMS returns a list of one or more UPSs based on its own knowledge of the planned tasks for the querying device. Two real-world scenarios are common in this step:
There is no TMS Console located in the treatment area, and selection of the delivery to be performed has not been made. In this case, the TMS returns a list of potentially many UPSs (for different patients), and the User picks from the list the UPS that they wish to deliver.
The User has direct access to the TMS in the treatment area, and has already selected the delivery to be performed on the console of the TMS, located in the treatment room area. In this case, a single UPS is returned. The TDS may either display the single item for confirmation, or proceed directly to loading the patient details.
A returned set of UPSs may have more than one UPS addressing a given treatment delivery. For example, in the case where a patient position verification is required prior to delivery, there might be a UPS with Requested Procedure Code Sequence item having a Code Value of 121708 ("RT Patient Position Acquisition, CT MV"), another UPS with a Code Value of 121714 ("RT Patient Position Registration, 3D CT general"), another UPS with a Code Value of 121722 ("RT Patient Position Adjustment"), and a fourth UPS whose Requested Procedure Code Sequence item would have a Code Value of 121726 ("RT Treatment With Internal Verification").
'Select Procedure' on TDS console
The User selects one of the scheduled procedures specified on the TDS console. If exactly one UPS was returned from the UPS query described above, then this step can be omitted.
Get UPS Details and Retrieve Archive Objects
The TDS may request the details of one or more procedure steps. This is conveyed using the N-GET primitive of the UPS Pull SOP Class, and is required when not all necessary information can be obtained from the query response alone.
The TDS then retrieves the required SOP Instances listed in the Input Information Sequence of the returned UPS query response. In response to a C-MOVE Request on those objects (5a), the Archive then transmits to the TDS the SOP Instances to be used as input information during the task. These SOP Instances might include an RT Plan SOP Instance and verification images (CT Image or RT Image). They might also include RT Beams Treatment Record SOP Instances if the Archive is used to store these SOP Instances rather than the TMS. The TDS knows of the existence and whereabouts of these SOP Instances by virtue of the fully-specified locations in the N-GET response.
Although the TDS could set the UPS to 'IN PROGRESS' prior to retrieving the archive instances, this example shows the archive instances being retrieved prior to the UPS being 'locked' with the N-ACTION step. This avoids the UPS being set 'IN PROGRESS' if the required instances are not available, and therefore avoids the need to schedule another (different) procedure step in this case, as required by the Unified Procedure Step State Diagram (PS3.4). However, some object instances dynamically created to service performance of the UPS step can only be supplied after setting the UPS 'IN PROGRESS' (see Step 7).
Change UPS State to IN PROGRESS
The TDS sets the UPS (which is managed by the TMS) to have the Unified Procedure Step Status of 'IN PROGRESS' upon starting work on the item. The SOP Instance UID of the UPS will normally have been obtained in the worklist item. This is conveyed using the N-ACTION primitive of the UPS Pull SOP Class with an action type "UPS Status Change". This message allows the TMS to update its worklist and permits other Performing Devices to detect that the UPS is already being worked on.
The UPS is updated in this step before the required dynamic SOP Instances are obtained from the TMS (see Step 7). In radiation therapy, it is desirable to signal as early as possible that a patient is about to undergo treatment, to allow the TMS to begin other activities related to the patient delivery. If the TMS implements the UPS Watch SOP Class, other systems will be able to subscribe for notifications regarding the progress of the procedure step.
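Continuing the pynetdicom sketch above (reusing `assoc` and a `ups_instance_uid` obtained from the worklist), the state change might look like this; the Transaction UID generated here serves as the lock token for all later updates.

```python
from pydicom.dataset import Dataset
from pydicom.uid import generate_uid

# The Transaction UID acts as the lock token for this UPS and must be
# retained for every subsequent N-SET/N-ACTION on it.
transaction_uid = generate_uid()

action_info = Dataset()
action_info.TransactionUID = transaction_uid
action_info.ProcedureStepState = "IN PROGRESS"

# Action Type ID 1 = "Change UPS State" (UPS Status Change).
status, _ = assoc.send_n_action(
    action_info, 1, UnifiedProcedureStepPull, ups_instance_uid
)
```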
In response to a C-MOVE Request, the TMS transmits to the TDS the RT Beams Delivery Instruction and possibly RT Treatment Summary Record SOP Instances to be used as input information during the task. These SOP Instances may be created "on-the-fly" by the TMS (since it was the TMS itself that transmitted the UIDs in the UPS). The RT Treatment Summary SOP Instance may be required by the TDS to determine the delivery context, e.g., whether the UPS specifies a completion delivery (following a previous delivery interruption). RT Beams Treatment Record instances might also be retrieved from the TMS in this step if the TMS is used to manage these SOP Instances rather than the Archive.
'Start Treatment Session' on TDS console
The User uses a control on the user interface of the TDS to indicate that he or she wishes to commence the treatment delivery session. A Treatment Session may involve fulfillment of more than one UPS, in which case Steps 4-13 may be repeated.
Set UPS Progress and Beam Number, Verify, and Deliver Radiation
For each beam, the TDS updates the UPS on the TMS just prior to starting the radiation delivery sequencing. This is conveyed using the N-SET primitive of the UPS Pull SOP Class.
The completion percentage of the entire UPS is indicated in the Unified Procedure Step Progress attribute. The algorithm used to calculate this completion percentage is not specified here, but should be appropriate for user interface display.
The Referenced Beam Number of the beam about to be delivered is specified by encoding it as a string value in the Procedure Step Progress Description (0074,1006).
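A per-beam progress update of this kind might be sketched as follows, again reusing the association and lock token from the previous steps; the progress-module keywords are assumed from the data dictionary and the values are illustrative.

```python
progress_item = Dataset()
progress_item.ProcedureStepProgress = "25"            # percent complete
progress_item.ProcedureStepProgressDescription = "2"  # Referenced Beam Number as a string

updates = Dataset()
updates.TransactionUID = transaction_uid              # lock token from Step 6
updates.ProcedureStepProgressInformationSequence = [progress_item]

status, _ = assoc.send_n_set(updates, UnifiedProcedureStepPull, ups_instance_uid)
```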
The TDS then performs internal verifications to determine that the machine is ready to deliver the radiation, and then delivers the therapeutic radiation for the specified beam. In the current use case, it is assumed that the radiation completes normally, delivering the entire scheduled fraction. Other use cases, such as voluntary interruption by the User, or interruption by the TDS, will be described elsewhere.
If there is more than one beam to be delivered, the verification, UPS update, and radiation delivery are repeated once per beam.
This example does not specify whether treatment should be interrupted or terminated if a UPS update operation fails. The successful transmittal of updates is not intended as a gating requirement for continuation of the delivery, but could be used as such if the TDS considers that interrupting treatment is clinically appropriate at the moment the failure occurs.
Set UPS to Indicate Radiation Complete
The TDS may then update the UPS Progress Information Sequence upon completion of the final beam (although this is not required), and set any other attributes of interest to the SCP. This is conveyed using the N-SET primitive of the UPS Pull SOP Class.
The TDS stores any generated results to the Archive. This would typically be achieved using the Storage and/or Storage Commitment Service Classes and may contain one or more RT Beams Treatment Records or RT Treatment Summary Records, RT Images (portal verification images), CT Images (3D verification images), RT Dose (reconstructed or measured data), or other relevant Composite SOP Instances. References to the results and their storage locations are associated with the UPS in the Set UPS to Final State message (below). The RT Beams Treatment Record instances might be stored to the TMS instead, if the TMS is used to manage these SOP Instances rather than the Archive.
The required SOP Instances are stored to the Archive in this step before the UPS status is set to COMPLETED. In radiation therapy, it is desirable to ensure that the entire procedure is complete, including storage of important patient data, before indicating that the step completed successfully. For some systems, such as those using Storage Commitment, this may not be possible, in which case another service such as Instance Availability Notification (not shown here) would have to be used to notify the TMS of SOP Instance availability. For the purpose of this example, it is assumed that the storage commitment response occurs in a short time frame.
Set UPS Attributes to Meet Final State Requirements
The TDS then updates the UPS with any further attributes required to conform to the UPS final state requirements. Also, references to the results SOP Instances stored in Step 11 are supplied in the Output Information Sequence. This is conveyed using the N-SET primitive of the UPS Pull SOP Class.
The TDS changes the Unified Procedure Step Status of the UPS to COMPLETED upon completion of the scheduled step and storage of results. This is conveyed using the N-ACTION primitive of the UPS Pull SOP Class with an action type "UPS Status Change". This message informs the TMS that the UPS is now complete.
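In the sketch started above, this final state change mirrors the earlier one, with the same Transaction UID:

```python
final_state = Dataset()
final_state.TransactionUID = transaction_uid
final_state.ProcedureStepState = "COMPLETED"

status, _ = assoc.send_n_action(
    final_state, 1, UnifiedProcedureStepPull, ups_instance_uid
)
assoc.release()
```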
Indicate 'Treatment Session Completed' on TDS Console
The TDS then signals to the User via the TDS user interface that the requested procedure has completed successfully, and all generated SOP Instances have been stored.
Figure BBB.3.2.1-1 illustrates a message sequence example in the case where a treatment procedure delivery is requested and performed by a conventional delivery device requiring an external verification capability.
In the case where external verification is requested (i.e., where the UPS Requested Procedure Code Sequence item has a value of "RT Treatment With External Verification"), the information contained in the UPS and potentially other required delivery data must be communicated to the Machine Parameter Verifier (MPV). In many real-world situations the Oncology Information System fulfills both the role of the TMS and the MPV, hence this communication is internal to the device and not standardized. If separate physical devices perform the two roles, the communication may also be non-standard, since these two devices must be very closely coupled.
Elements in bold indicate the additional messages required when the Machine Parameter Verifier is charged with validating the beam parameters for each beam, prior to radiation being administered. These checks can be initiated by the User on a beam-by-beam basis ('manual sequencing', shown with the optional 'Deliver Beam x' messages), or can be performed by the Machine Parameter Verifier without intervention ('automatic sequencing'). The TDS would typically store an RT Treatment Record SOP Instance after each beam.
This example illustrates the case where photon or electron beams are being delivered. If ion beams are to be delivered, instances of the RT Conventional Machine Verification IOD will be replaced with instances of the RT Ion Machine Verification IOD.
Delivery of individual beams can be explicitly requested by the User (as shown in this example), or sequenced automatically by the TDS.
This section describes in detail the additional interactions illustrated in Figure BBB.3.2.1-1.
After the TDS has retrieved the necessary treatment SOP Instances (Step 7), the following step is performed:
7a. Communicate UPS and Required Delivery Data to MPV
The MPV must receive information about the procedure to be performed, and any other data required in order to carry out its role. This communication typically occurs outside the DICOM standard, since the TMS and MPV are tightly coupled (and may be the same physical device). In cases where standardized network communication of these parameters is required, this could be achieved using DICOM storage of RT Plan and RT Delivery Instruction SOP Instances, or alternatively by use of the UPS Push SOP Class.
After the User has initiated the treatment session on the TDS console (Step 8), the following steps are then performed:
8a. 'Deliver Beam x' on TDS console
In some implementations, parameter verification for each beam may be initiated manually by the User (as shown in this example). In other approaches, the TDS may initiate these verifications automatically.
8b. Create RT Conventional Machine Verification Instance
The TDS creates a new RT Conventional Machine Verification instance on the MPV prior to beam parameter verification of the first beam to be delivered. This is conveyed using the N-CREATE primitive of the RT Conventional Machine Verification SOP Class.
After the TDS has signaled the UPS current Referenced Beam Number and completion percentage for a given beam (9), the following sequence of steps is performed:
9a. Set 'Beam x' RT Conventional Machine Verification Instance
The TDS sets the RT Conventional Machine Verification SOP Instance to transfer the necessary verification parameters. This is conveyed using the N-SET primitive of the RT Conventional Machine Verification SOP Class. The Referenced Beam Number (300C,0006) attribute is used to specify the beam to be delivered. It is the responsibility of the SCU to keep track of the verification parameters such that the complete list of required attributes can be specified within the top-level sequence items.
The TDS sets the RT Conventional Machine Verification SOP Instance to indicate that the TDS is ready for external verification to occur. This is conveyed using the N-ACTION primitive of the RT Conventional Machine Verification SOP Class.
The MPV then attempts to verify the treatment parameters for 'Beam x'. The MPV sends one or more N-EVENT-REPORT signals to the TDS during the verification process. The permissible event types for these signals in this context are 'Pending' (zero or more times, not shown in this use case), and 'Done' when the verification is complete (successful or otherwise).
9d. Get RT Conventional Machine Verification (optional step)
The TDS may then request attributes of the RT Conventional Machine Verification instance. This is conveyed using the N-GET primitive of the RT Conventional Machine Verification SOP Class. If verification has occurred normally and the N-EVENT-REPORT contained a Treatment Verification Status of VERIFIED (this use case), then this step is not necessary unless the TDS wishes to record additional parameters associated with the verification process.
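The per-beam exchange of Steps 8b through 9f might be sketched with pynetdicom as follows. The attribute population is reduced to the Referenced Beam Number, the MPV address is hypothetical, and the N-ACTION Action Type ID shown is an assumption (PS3.4 defines the actual value for this SOP Class).

```python
from pydicom.dataset import Dataset
from pydicom.tag import Tag
from pydicom.uid import generate_uid
from pynetdicom import AE
from pynetdicom.sop_class import RTConventionalMachineVerification

ae = AE(ae_title="TDS")
ae.add_requested_context(RTConventionalMachineVerification)
assoc = ae.associate("mpv.example.org", 11112)   # hypothetical MPV address

verif_uid = generate_uid()   # instance persists for the whole beam session

# 8b. Create the verification instance on the MPV (attributes omitted here).
assoc.send_n_create(Dataset(), RTConventionalMachineVerification, verif_uid)

# 9a. Transfer the verification parameters for 'Beam x'; a real SCU supplies
# the complete set of required attributes, not just the beam number.
beam = Dataset()
beam.ReferencedBeamNumber = 1   # (300C,0006)
assoc.send_n_set(beam, RTConventionalMachineVerification, verif_uid)

# 9b. Signal readiness for verification. The Action Type ID used here is an
# assumption; see PS3.4 for the value defined for this SOP Class.
assoc.send_n_action(None, 1, RTConventionalMachineVerification, verif_uid)

# 9c arrives as N-EVENT-REPORTs from the MPV ('Pending' then 'Done'),
# handled by an event handler registered on the association (not shown).

# 9d. Optionally read back details of the verification.
status, attrs = assoc.send_n_get(
    [Tag(0x3008, 0x002C)],   # Treatment Verification Status
    RTConventionalMachineVerification, verif_uid,
)

# 9f. After the last beam, end the verification session.
assoc.send_n_delete(RTConventionalMachineVerification, verif_uid)
assoc.release()
```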
The TDS then delivers the therapeutic radiation. In the current use case, it is assumed that the radiation completes normally, delivering the entire scheduled fraction. Other use cases, such as voluntary interruption by the User, or interruption by the TDS or MPV, are not described here. If the delivery requires an override of additional information, a different message flow occurs. This is illustrated in the use case described in the next section.
9e. Store 'Beam x' RT Beams Treatment Record to Archive
The TDS stores an RT Beams Treatment Record to the Archive (or potentially the TMS, as described in Section BBB.3.1.2 Transactions and Message Flow). The RT Beams Treatment Record is therefore not stored in Step 11 for the external verification case (since it has already been stored in this step on a per-beam basis).
For each subsequent beam in the sequence of beams being delivered, steps 8a (optional), 9, 9a, 9b, 9c, 9d (optional), and 9e are then repeated, i.e., N-SET, N-ACTION, and N-GET operations are performed on the same instance of the RT Conventional Machine Verification SOP Class, which persists throughout the beam session.
9f. Delete RT Conventional Machine Verification Instance
When all beams have been processed, the TDS deletes the RT Conventional Machine Verification SOP Instance to indicate to the MPV that verification is no longer required. This is conveyed using the N-DELETE primitive of the RT Conventional Machine Verification SOP Class.
Figure BBB.3.3.1-1 illustrates a message sequence example for the external verification model in the case where the Machine Parameter Verifier (MPV) either detects that an override is required, or requires additional information (such as a bar code) before authorizing treatment.
The steps in this use case replace Steps 8a to 9f in Use Case BBB.3.2, for the case where only a single beam is delivered.
Figure BBB.3.3.1-1. Treatment Delivery Message Sequence - Override or Additional Information Required
This section describes in detail the interactions illustrated in Figure BBB.3.3.1-1.
'Deliver Beam x' on TDS console (optional step)
See use case BBB.3.2.
Create RT Conventional Machine Verification Instance
See use case BBB.3.2.
Set 'Beam x' RT Conventional Machine Verification Instance
See use case BBB.3.2.
See use case BBB.3.2.
The MPV then attempts to verify the treatment parameters for 'Beam x'. The MPV determines that one or more treatment parameters are out-of-tolerance, or that information such as a bar code is missing. It sends an N-EVENT-REPORT signal to the TDS with an Event Type of Done and an RT Machine Verification Status of NOT_VERIFIED. The MPV also shows the reason for the override/information request on its display (5a).
Supply Override Instruction or Bar Code
The User observes on the MPV console that an override or missing information is required, and supplies the override approval or missing information to the MPV via its user interface, or equivalent proxy.
The TDS performs another N-ACTION on the RT Conventional Machine Verification SOP Instance to indicate that the TDS is once again ready for treatment verification. See use case BBB.3.2. This may be initiated by the user (as shown in this example), or may be initiated automatically by the TDS using a polling approach.
The MPV verifies the treatment parameters, and determines that all parameters are now within tolerance and all required information is supplied. It sends an N-EVENT-REPORT signal to the TDS with an Event Type of Done and an RT Machine Verification Status of VERIFIED_OVR.
Get RT Conventional Machine Verification (optional step)
See use case BBB.3.2. If an N-GET is requested, the parameters that were overridden are available in Overridden Parameters Sequence (0074,104A).
Store 'Beam x' RT Beams Treatment Record to Archive
See use case BBB.3.2. Overridden parameters are ultimately captured in the treatment record.
Delete RT Conventional Machine Verification Instance
See use case BBB.3.2.
Figure BBB.3.4.1-1 illustrates a message sequence example for the external verification model in the case where the Machine Parameter Verifier (MPV) detects that one or more machine adjustments are required before authorizing treatment, and the TDS has been configured to retrieve the failure information and make the required adjustments.
The steps in this use case replace Steps 8a to 9f in Use Case BBB.3.2, for the case where only a single beam is delivered.
This section describes in detail the interactions illustrated in Figure BBB.3.4.1-1.
'Deliver Beam x' on TDS console (optional step)
See use case BBB.3.2.
Create RT Conventional Machine Verification Instance
See use case BBB.3.2.
Set 'Beam x' RT Conventional Machine Verification Instance
See use case BBB.3.2.
See use case BBB.3.2.
The MPV then attempts to verify the treatment parameters for 'Beam x'. The MPV determines that one or more treatment parameters are out-of-tolerance. It sends an N-EVENT-REPORT signal to the TDS with an Event Type of Done and an RT Machine Verification Status of NOT_VERIFIED. It may also display the verification status and information to the user (5a).
Get RT Conventional Machine Verification
The TDS then requests the failed verification parameters of the verification process. This is conveyed using the N-GET primitive of the RT Conventional Machine Verification SOP Class. The MPV replies with an N-GET-RESPONSE having a Treatment Verification Status of NOT_VERIFIED. The reason(s) for the failure are encoded in the Failed Parameters Sequence (0074,1048) attribute of the response.
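Assuming the association and verification instance from the earlier sketch, this retrieval might look like:

```python
from pydicom.tag import Tag

# Read back the failure details after the NOT_VERIFIED event; 'assoc' and
# 'verif_uid' are the association and instance from the sketch above.
status, attrs = assoc.send_n_get(
    [Tag(0x0074, 0x1048)],   # Failed Parameters Sequence
    RTConventionalMachineVerification, verif_uid,
)
if attrs is not None:
    for item in attrs.get("FailedParametersSequence", []):
        print(item)          # one item per failed parameter group
```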
As illustrated in this example, some implementations may require that the User observe the failed verification parameters on the MPV console and manually request the required machine adjustment. In this case the User makes the request to the TDS via its user interface. In other implementations the TDS makes the adjustments automatically and requests verification without User intervention.
Adjust TDS and Set 'Beam x' RT Conventional Machine Verification Instance
The TDS adjusts one or more of its parameters as requested, then sets the RT Conventional Machine Verification SOP Instance to indicate that the TDS is once again ready for treatment delivery. This is conveyed using the N-SET primitive of the RT Conventional Machine Verification SOP Class. The N-SET command provides values for all applicable parameters (not just those that have been modified), since if one or more parameters within a top-level sequence are supplied, then all the applicable parameters within that sequence must also be supplied (otherwise DICOM requires their values to be cleared).
The TDS performs another N-ACTION on the RT Conventional Machine Verification SOP Instance to request that the MPV re-perform treatment verification. See use case BBB.3.2.
As an optional step, the MPV may notify the TDS that the verification is in process at any time, by sending an N-EVENT-REPORT signal to the TDS with an Event Type of Pending (9a).
The MPV verifies the treatment parameters, and determines that the required adjustments have been made, i.e., all parameters are now within tolerance. It sends an N-EVENT-REPORT signal to the TDS with an Event Type of Done and an RT Conventional Machine Verification Status of VERIFIED.
Get RT Conventional Machine Verification (optional step)
See use case BBB.3.2.
Store 'Beam x' RT Beams Treatment Record to Archive
See use case BBB.3.2.
Delete RT Conventional Machine Verification Instance
See use case BBB.3.2.
An axial measurements device is used to take axial measurements of the eye, from the anterior surface of the cornea to either the surface of the retina (ultrasound) or the retinal photoreceptors (optical). The axial measurements are typically expressed in mm (Ophthalmic Axial Length (0022,1010)). Currently these measurements are taken using ultrasound or laser light. The measurements are used in calculation of intraocular lens power for cataract surgery. Axial measurements devices and software on other systems perform intraocular lens power calculations using the axial measurements in addition to measurements from other sources (currently by manual data entry, although importation from other software systems is expected in the future).
When the natural lens of the eye turns opaque it is called a cataract. The cataract is surgically removed, and a synthetic intraocular lens is placed where the natural lens was before. The power of the lens that is placed determines what the patient's refractive error will be, i.e., what power of glasses will be needed to maximize vision after surgery.
Axial measurements devices provide graphical displays that help clinicians to determine whether or not the probe used in taking the measurements is aligned properly. Annotations on the display provide information such as location of gates that assists the clinician in assessing measurement quality. High, fairly even waveform spikes suggest that the measurement producing a given graph is likely to be reliable. The quality of the graphical display is one of the factors that a clinician considers when choosing which axial length measurement to use in calculating the correct intraocular lens power for a given patient.
Axial measurements devices and software on other systems perform intraocular lens power calculations for cataract surgery patients. The power of the intraocular lens selected for placement in a patient's eye determines the refractive correction (e.g., glasses, contact lenses, etc.) the patient will require after cataract surgery.
The data input for these calculations consists of ophthalmic axial length measurements (one dimensional ultrasound scans that are called "A-scans" in the eye care domain) and keratometry (corneal curvature) measurements in addition to constants and sometimes other kinds of measurements. The data may come from measurements performed by the device on which the intraocular lens calculation software resides, or from manual data entry, or from an external source. There are a number of different formulas and constants available for doing these calculations. The selection of formula to use is based on clinician preference and on patient factors such as the axial length of the eye. The most commonly used constants, encoded by Concept Name Code Sequence (0040,A043) using CID 4237 “Lens Constant Type”, are a function of the model of intraocular lens to be used.
The most commonly used formulas, encoded by IOL Formula Code Sequence (0022,1029) using CID 4236 “IOL Calculation Formula”, for intraocular lens calculation are inaccurate in a patient who has had refractive surgery, and numerous other formulas are available for these patients. Since most of them have not been validated to date, they were not included in this document.
Intraocular lens calculation software typically provides tabular displays of intraocular lens power in association with each lens's predicted refractive error (e.g., glasses, contact lenses, etc).
Figure CCC.2-1. Sagittal Diagram of Eye Anatomy (when the lens turns opaque it is called a cataract)
Courtesy: National Eye Institute, National Institutes of Health; ftp://ftp.nei.nih.gov/eyean/eye_72.tif
Courtesy: National Eye Institute, National Institutes of Health; ftp://ftp.nei.nih.gov/eyedis/EDA13_72.tif
This file is licensed under the Creative Commons Attribution Share Alike 2.5 License, Author is Rakesh Ahuja, MD (http://en.wikipedia.org/wiki/Image:Posterior_capsular_opacification_on_retroillumination.jpg)
Figure CCC.3-1 demonstrates an A-scan waveform produced by an ultrasound device used for ophthalmic axial length measurement. This is referenced in the Ophthalmic Axial Measurements IOD in Referenced Ophthalmic Axial Length Measurement QC Image Sequence (0022,1033).
Time (translated into distance using an assumed velocity) is on the x-axis, and signal strength is on the y-axis. This waveform allows clinicians to judge the quality of an axial length measurement for use in calculating the power of intraocular lens to place in a patient's eye in cataract surgery. Figure CCC.3-1 above demonstrates a high quality scan, with tall, even spikes representing the ocular structures of interest. This tells the clinician that the probe was properly aligned with the eye. The first, double spike on the left represents anterior cornea followed by posterior cornea. The second two, more widely spaced spikes represent anterior and posterior lens. The first tall spike on the right side of the display is the retinal spike, and the next tall spike to the right is the sclera. Smaller spikes to the far right are produced by orbital tissues. Arrows at the bottom of the waveform indicate the location of gates, which may be manually adjusted to limit the range of accepted values. Note that in the lower right corner of the display two measurements are recorded. In the column labeled AXL is an axial length measurement, which on this device is the sum of the measurements for ACD (anterior chamber depth), lens, and VCD (vitreous chamber depth). The measured time value for each of the segments and a presumed velocity of sound for that segment are used to calculate the axial length for that segment. An average value for each column is displayed below along with the standard deviation of measurements in that column. The average axial length is the axial length value selected by this machine, although often a clinician will make an alternative selection.
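The per-segment calculation described above amounts to distance = velocity x transit time, halved for the echo round trip. The sketch below uses invented transit times together with the velocities typically presumed for aqueous/vitreous (1532 m/s) and the crystalline lens (1641 m/s); none of the numbers come from the figure.

```python
# Round-trip transit time (s) and presumed sound velocity (m/s) per segment;
# the times are invented for illustration, the velocities are typical values.
segments = {
    "ACD":  (4.2e-6, 1532.0),
    "lens": (4.9e-6, 1641.0),
    "VCD":  (21.3e-6, 1532.0),
}

axial_length_mm = 0.0
for name, (t, v) in segments.items():
    seg_mm = v * t / 2.0 * 1000.0   # halve for the echo round trip, m -> mm
    axial_length_mm += seg_mm
    print(f"{name}: {seg_mm:.2f} mm")

print(f"AXL: {axial_length_mm:.2f} mm")   # AXL = ACD + lens + VCD
```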
Figure CCC.4-1 demonstrates the waveform output of a partial coherence interferometry (PCI) device used for optical ophthalmic axial length measurement. This is referenced in the Ophthalmic Axial Measurements IOD in Referenced Ophthalmic Axial Length Measurement QC Image Sequence (0022,1033).
Physical distance is on the x axis, and signal strength is on the y axis. What is actually measured is phase shift, determined by looking at interference patterns of coherent light. Physical distance is calculated by dividing "optical path length" by the "refractive group index" - using an assumed average refractive group index for the entire eye. The "optical path length" is derived from the phase shift that is actually observed. Similar to ultrasound, this waveform allows clinicians to judge the quality of an axial length measurement.
Figure CCC.4-1 above demonstrates a high quality scan, with tall, straight spikes representing the ocular axial length. The corneal spike is suppressed (outside the frame on the left hand side) and represents the reference 0 mm. The single spike on this display represents the signal from the retinal pigment epithelium (RPE) and provides the axial length measurement value (position of the circle marker). Sometimes smaller spikes can be observed on the left or right side of the RPE peak. Those spikes represent reflections from the internal limiting membrane (ILM, 150-350 µm before the RPE) or from the choroid (150-250 µm behind the RPE), respectively.
Because all classical IOL power calculation formulas expect axial lengths measured to the internal limiting membrane (as provided by ultrasound devices), axial length measurements obtained with an optical device to the retinal pigment epithelium are converted to this convention by subtracting the retinal thickness.
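The two conversions just described, optical path length to physical distance and RPE-referenced to ILM-referenced axial length, are simple arithmetic. The numbers below, including the group index and the retinal thickness, are illustrative assumptions; real devices apply their own device-specific calibration.

```python
# Illustrative numbers only; group index and retinal thickness are assumed.
optical_path_length_mm = 31.9        # derived from the observed phase shift
group_refractive_index = 1.3549      # assumed average for the entire eye

# Physical distance to the RPE, as measured by the optical device.
al_to_rpe_mm = optical_path_length_mm / group_refractive_index

# Convert to the ultrasound (ILM) convention expected by classical IOL
# formulas by subtracting an assumed retinal thickness.
retinal_thickness_mm = 0.20
al_to_ilm_mm = al_to_rpe_mm - retinal_thickness_mm
```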
Figure CCC.4-1 above displays five axial length measurements obtained for each eye (one column for each eye) and the selected axial length value is shown below the line.
Figure CCC.5-1 demonstrates a typical display of IOL (intraocular lens) calculation results.
On the right the selected target refractive correction (e.g., glasses, contact lenses, etc.) is -0.25 diopters. At the top of the table three possible intraocular lens models are displayed, along with the constants (CID 4237 “Lens Constant Type”) specific to those lens models. Each row in that part of the table displays constants required for a particular formula. In this example the Holladay formula has been selected by the operator, and results are displayed in the body of the table below. Calculated intraocular lens powers are displayed with the predicted postoperative refractive error (e.g., glasses, contact lenses, etc.) for each lens. K1 and K2 on the right refer to the keratometry values (corneal curvature), in diopters, used for these calculations.
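The Holladay formula used in the figure is beyond the scope of this sketch, but the original SRK regression formula shows the general shape of such a calculation: a lens constant (the A-constant), the axial length, and the average keratometry combine linearly into an emmetropic lens power. The input values below are illustrative.

```python
def srk_emmetropia_power(a_constant, axial_length_mm, mean_k_diopters):
    """Original SRK regression formula: P = A - 2.5 * AL - 0.9 * K.

    Illustrates how a lens constant, axial length and average keratometry
    combine; it is not the Holladay formula used in the figure.
    """
    return a_constant - 2.5 * axial_length_mm - 0.9 * mean_k_diopters

# Illustrative inputs: A-constant 118.4, AL 23.5 mm, K1/K2 43.5/44.5 D.
power = srk_emmetropia_power(118.4, 23.5, (43.5 + 44.5) / 2.0)   # -> 20.05 D
```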
Automated visual fields are the most commonly used method to assess the function of the visual system. This is accomplished by sequentially presenting visual stimuli to the patient and requiring the patient to press a button whenever he/she perceives a stimulus. The stimuli are presented at a variety of points within the area expected to be visible to the patient and each of those points is tested with multiple stimuli of varying intensity. The result of this is a spatial map indicating how well the patient can see throughout his/her visual field.
The diagnosis and management of Glaucoma, a disease of the optic nerve, is the primary use of visual field testing. In this regard, automated visual fields are used to assess quantitatively the function of the optic nerve with the intent of detecting defects caused by glaucoma.
The first step in analyzing a visual field report is to confirm that it came from the correct patient. Demographic information including the patient's name, gender, date of birth, and perhaps medical record number are therefore essential data to collect. The patient's age is also important in the analysis of the visual field (see below) as optic nerve function changes with age. Finally, it is important to document the patient's refractive error as this needs to be corrected properly for the test to be valid.
Second, the clinician needs to assess the reliability of the test. This can be determined in a number of ways. One of these is by monitoring patient fixation during the test. To be meaningful, a visual field test assumes that the subject was looking at a fixed point throughout the test and was responding to stimuli in the periphery. Currently available techniques for monitoring this fixation include blind spot mapping, pupil tracking, and observation by the technician conducting the test. Blind spot mapping starts by identifying the small region of the visual field corresponding to the optic nerve head. Since the patient cannot detect stimuli in this area, any positive response to a stimulus placed there later in the test indicates that the patient has lost fixation and the blind spot has "moved". Both pupil tracking and direct observation by the technician are now easily carried out using a camera focused on the patient's eye.
Another means of assessing the reliability of the test is to count both false positive and false negative responses. False positives occur when the subject presses the button either in response to no stimulus or in response to a stimulus significantly less intense than one they had previously failed to detect. False negatives are recorded when the patient fails to respond to a stimulus significantly more intense than one they had previously seen. Taken together, fixation losses, false positives, and false negatives provide an indication of the quality of the test.
The next phase of visual field interpretation is to assess for the presence of disease. The first aspect of the visual field data used here are the raw sensitivity values. These are usually expressed as a function of the amount of attenuation that could be applied to the maximum possible stimulus such that the patient could still see it when displayed. Since a value is available at each point tested in the visual field, these values can be represented either as raw values or as a graphical map.
Figure DDD.2-4. Sample Output from an Automated VF Machine Including Raw Sensitivity Values (Left, Larger Numbers are Better) and an Interpolated Gray-Scale Image
Because the raw intensity values can be affected by a number of factors including age and other non-optic nerve problems such as refractive error or any opacity along the visual axis (cornea, lens, vitreous), it is helpful to also evaluate corrected values. One set of corrected intensity values is usually some indication of the difference of each tested point from its expected value based on patient age. Another set of corrected intensity values, referred to as "Pattern deviation" or "Corrected comparison", is normalized for age and also has a value subtracted from the deviation at each test point that is estimated to be due to diffuse visual field loss. This latter set is useful for identifying focal rather than diffuse defects in visual function. In the case of glaucoma and most other optic nerve disease, clinicians are more interested in focal defects, so this second set of normalized data is useful.
Figure DDD.2-5. Examples of Age Corrected Deviation from Normative Values (upper left) and Mean Defect Corrected Deviation from Normative Data (upper right)
For all normalized visual field sensitivity data, it is useful to know how a particular value compares to a group of normal patients. Vendors of automated visual field machines therefore go to great lengths to collect data on such "normal" subjects to allow subsequent analysis. Furthermore, the various sets of values mentioned above can be summarized further using calculations like a mean and standard deviation. These values give some idea about the average amount of field loss (mean) and the focality of that loss (standard deviation).
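A minimal, unweighted version of these two summary values is sketched below; real perimeters weight each test point by its normal inter-subject variance, so these numbers only illustrate the idea.

```python
import numpy as np

# Hypothetical deviations (dB) from age-corrected normal values, one per
# tested point; negative values indicate loss of sensitivity.
deviations = np.array([-0.8, -1.1, -7.9, -0.6, -9.4, -0.3, -1.2, -0.9])

mean_loss = deviations.mean()        # average amount of field loss
focality = deviations.std(ddof=1)    # spread (focality) of the loss
```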
A final step in the clinical assessment of a visual field test is to review any disease-specific tests that are performed on the data. One such test is the Glaucoma Hemifield Test, which has been designed to identify field loss consistent with glaucoma. These tests are frequently vendor-specific.
In addition to primary diseases of the optic nerve, like glaucoma, visual fields are useful for assessing damage to the visual pathway occurring between the optic chiasm and occipital cortex. There is the same need for demographic information, for assessment of reliability, and for the various raw and normalized sensitivity values. At this time, there are no well-established automated tests for the presence of neurological defects.
Figure DDD.2-6. Example of Visual Field Loss Due to Damage to the Occipital Cortex Because of a Stroke
The Diffuse Defect is an estimate of the portion of a patient's visual field loss that is diffuse, or spread evenly across all portions of the visual field, in dB. In this graphical display, deviation from the average normal value for each test point is ranked on the x axis from 1 to 59, with 59 being the test point that has the greatest deviation from normal. Deviations from normal at each test point are represented on the y axis, in dB. The patient's actual test point deviations are represented by the thin blue line. Age corrected normal values are represented by the light blue band. The patient's deviation from normal at the test point ranked 25% among his or her own deviations is then estimated to be his or her diffuse visual field loss, represented by the dark blue band. This provides a graphical estimate of the remaining visual field loss for this patient, which is then presumed to consist of local visual field defects, which are more significant in management of glaucoma than diffuse defects.
The Local Defect is an estimate of the portion of a patient's visual field loss that is local, or not spread evenly across all portions of the visual field. The x and y axes in this graphical display have the same meaning as in the diffuse defect. In this graphical display the top line/blue band represents age corrected normal values. This line is shifted downward by the amount estimated to be due to diffuse visual field loss for this patient, according to the calculation in Figure DDD.2-7 (Diffuse Defect). The difference between the patient's test value at each point in the ranking on the horizontal axis and the point on the lower curve at the 50% point is represented by the dark blue section of the graph. This accentuates the degree of local visual field defect, which is more significant in the management of glaucoma than diffuse defects. The Local Defect is an index that correlates highly with the square root of the loss variance (sLV) but is less susceptible to false positives. In addition to its usage in white/white perimetry it is especially helpful as an early identifier of abnormal results in perimetry methods with higher inter-subject variability such as blue/yellow (SWAP) or flicker perimetry. An example of Local Defect is shown in Figure DDD.2-8; it is expressed in dark blue in dB and is normalized to be comparable between different test patterns.
The purpose of this annex is to explain key IVOCT FOR PROCESSING parameters and to describe the relationship between IVOCT FOR PROCESSING and FOR PRESENTATION images. It also explains Intravascular Longitudinal Reconstruction.
When an OCT image is acquired, the path length difference between the reference and sample arms may vary, resulting in a shift along the axial direction of the image, known as the Z Offset. With FOR PROCESSING images, in order to convert the image to Cartesian coordinates and make measurements, this Z Offset should be corrected, typically on a per-frame or per-image basis. Z Offset is corrected by shifting the Polar data rows (A-lines) by OCT Z Offset Correction (0052,0030) pixels along the axial dimension of the image.
Z Offset correction may be either a positive or negative value. Positive values mean that the A-lines are shifted further away from the catheter optics. Negative values mean that the A-lines are shifted closer to the catheter optics. Figure EEE.2-1 illustrates a negative Z Offset Correction.
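A minimal numpy sketch of the correction on one Polar frame (rows = A-lines, columns = axial samples) follows; whether an implementation zero-fills, pads, or crops the vacated samples is a design choice.

```python
import numpy as np

def correct_z_offset(polar_frame, z_offset_correction):
    """Shift every A-line (row) along the axial (column) dimension.

    A positive OCT Z Offset Correction (0052,0030) moves the data away from
    the catheter optics (toward higher axial indices); a negative value
    moves it closer. Vacated samples are zero-filled here.
    """
    corrected = np.zeros_like(polar_frame)
    n = polar_frame.shape[1]
    z = int(z_offset_correction)
    if z >= 0:
        corrected[:, z:] = polar_frame[:, : n - z]
    else:
        corrected[:, : n + z] = polar_frame[:, -z:]
    return corrected
```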
The axial distances in an OCT image are dependent on the refractive index of the material that IVOCT light passes through. As a result, in order to accurately make measurements in images derived from FOR PROCESSING data, the axial dimension of the pixels should be globally corrected by dividing the A-line Pixel Spacing (0052,0014) value (in air) by the Effective Refractive Index (0052,0004) and setting the Refractive Index Applied (0052,003A) to YES. Although not recommended, if A-line Pixel Spacing (0052,0014) is reported in air (i.e., not corrected by dividing by Effective Refractive Index) then the Refractive Index Applied value shall be set to NO.
FOR PROCESSING Polar data is specified such that each column represents a subsequent axial (z) location and each row an angular (θ) coordinate. Following Z Offset and Refractive Index Correction, Polar data can be converted to Cartesian data by first orienting the seam line position so that it is at the correct row location. This can be accomplished by shifting the rows by Seam Line Index (0052,0036) pixels so that the Seam Line Location (0052,0033) is located at row "A-lines Per Frame * Seam Line Location / 360". Once the seam line is positioned correctly, the Cartesian data can be obtained by remapping the Polar (z, θ) data into Cartesian (x, y) space, where the leftmost column of the Polar image corresponds to the center of the Cartesian image. The scan-converted frames are constructed using the Catheter Direction of Rotation (0052,0031) attribute to determine the order in which the A-lines are acquired. Scan-converted frames are constructed using A-lines that contain actual data (i.e., not padded A-lines). Padded A-lines are added at the end of the frame and are contiguous. Figure EEE.2-2 illustrates the Polar to Cartesian conversion.
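The remapping can be sketched as an inverse mapping over the Cartesian grid, as below. The seam-line shift and output size are handled in the simplest possible way, nearest-neighbor lookup stands in for proper interpolation, and Catheter Direction of Rotation (0052,0031) would flip the sign of the angle.

```python
import numpy as np

def polar_to_cartesian(polar, seam_shift, out_size=512):
    """Remap a corrected Polar frame (rows = A-lines over 360 degrees,
    columns = axial samples) into a square Cartesian image whose center
    corresponds to the leftmost Polar column (the catheter).
    """
    polar = np.roll(polar, seam_shift, axis=0)    # position the seam line
    n_lines, n_samples = polar.shape
    half = out_size / 2.0

    y, x = np.mgrid[0:out_size, 0:out_size]
    radius = np.hypot(x - half, y - half) * (n_samples / half)   # axial index
    theta = np.mod(np.arctan2(y - half, x - half), 2.0 * np.pi)  # A-line angle

    rows = (theta / (2.0 * np.pi) * n_lines).astype(int) % n_lines
    cols = np.minimum(radius.astype(int), n_samples - 1)
    cartesian = polar[rows, cols]
    cartesian[radius >= n_samples] = 0   # outside the acquired depth
    return cartesian
```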
An Intravascular Longitudinal Image (L-Mode) is a constrained three-dimensional reconstruction of an IVUS or IVOCT multi-frame image. The Longitudinal Image can be reconstructed from either FOR PROCESSING or FOR PRESENTATION Images. Figure EEE.3-1 is an example of an IVUS cross-sectional image (on the left) with a reconstructed longitudinal view (on the right).
The Longitudinal reconstruction comprises a series of perpendicular cut planes, typically consisting of up to 360 slices spaced in one-degree increments. The cut planes are perpendicular to the cross-sectional plane, and rotate around the catheter axis (i.e., the center of the catheter) to provide a full 360 degrees of rotation. A longitudinal slice indicator is used to select the cut plane to display, and is normally displayed in the associated cross-sectional image (e.g., blue arrow cursor in Figure EEE.3-1). A current frame marker (e.g., yellow cursor located in the longitudinal view) is used to indicate the position of the corresponding cross-sectional image within the longitudinal slice.
When pullback rate information is provided, distance measurements are possible along the catheter axis. The Intravascular Longitudinal Distance (0052,0028) or IVUS Pullback Rate (0018,3101) attributes are used along with the Frame Acquisition DateTime (0018,9074) attribute to facilitate measurement calculations. This allows for lesion, calcium, stent and stent gap length measurements. Figure EEE.3-2 is an example of an IVOCT cross-sectional image (on the top), with a horizontal longitudinal view on the bottom. The following example also illustrates how the tint specified by the Palette Color LUT is applied to the OCT image.
Figure EEE.3-3 illustrates how the 2D cross-sectional frames are stacked along the catheter longitudinal axis. True geometric representation of the vessel morphology cannot be rendered, since only the Z position information is known. Position (X and Y) and rotation (X, Y and Z) information of the acquired cross-sectional frames is unknown.
This chapter describes the general concepts of the X-Ray Angiography equipment and the way these concepts can be encoded in SOP Instances of the Enhanced XA SOP Class. It covers the time relationships during the image acquisition, the X-Ray generation parameters, the conic projection geometry in X-Ray Angiography, the pixel size calibration as well as the display pipeline.
The following general concepts provide better understanding of the examples for the different application cases in the rest of this Annex.
The following figure shows the time-related attributes of the acquisition of X-Ray multi-frame images. The image and frame time attributes are defined as absolute times; the duration of the entire image acquisition can then be calculated.
This chapter illustrates the relationships between the geometrical models of the patient, the table, the positioner, the detector and the pixel data.
The following figure shows the different steps in the X-Ray acquisition that influence the geometrical relationship between the patient and the pixel data.
Figure FFF.1.2-1. Acquisition Steps Influencing the Geometrical Relationship Between the Patient and the Pixel Data
Refer to Annex A for the definition of the patient orientation.
A point of the patient is represented as: $P = (P_{left}, P_{posterior}, P_{head})$.
The table coordinates are defined in Section C.8.7.4.1.4 “Table Motion With Patient in Relation to Imaging Chain” in PS3.3 .
The table coordinate system is represented as $(O_t, X_t, Y_t, Z_t)$, where the origin $O_t$ is located on the tabletop and is arbitrarily defined for each system.
The position of the patient in the X-Ray table is described in Section C.7.3.1.1.2 “Patient Position” in PS3.3 .
The table below shows the direction cosines for each of the three patient directions (Left, Posterior, Head) related to the table coordinate system $(X_t, Y_t, Z_t)$, for each patient position on the X-Ray table:
The Isocenter coordinate system is defined in Section C.8.19.6.13.1.1 “Isocenter Coordinate System” in PS3.3 .
The table coordinate system is defined in Section C.8.19.6.13.1.3 “Table Coordinate System” in PS3.3, where the table translation is represented as $(T_X, T_Y, T_Z)$. The table rotation is represented as $(A_{t1}, A_{t2}, A_{t3})$.
A point $(P_{Xt}, P_{Yt}, P_{Zt})$ in the table coordinate system (see Figure FFF.1.2-7) can be expressed as a point $(P_X, P_Y, P_Z)$ in the Isocenter coordinate system by applying the following transformation:

$(P_X, P_Y, P_Z)^T = (R_3 \cdot R_2 \cdot R_1)^T \cdot (P_{Xt}, P_{Yt}, P_{Zt})^T + (T_X, T_Y, T_Z)^T$

And inversely, a point $(P_X, P_Y, P_Z)$ in the Isocenter coordinate system can be expressed as a point $(P_{Xt}, P_{Yt}, P_{Zt})$ in the table coordinate system by applying the following transformation:

$(P_{Xt}, P_{Yt}, P_{Zt})^T = (R_3 \cdot R_2 \cdot R_1) \cdot ((P_X, P_Y, P_Z)^T - (T_X, T_Y, T_Z)^T)$

where $R_1$, $R_2$ and $R_3$ are defined in Figure FFF.1.2-7.
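Both mappings, written out with numpy; the rotation matrices $R_1$, $R_2$, $R_3$ are built from the table rotation angles as defined in Figure FFF.1.2-7 and are taken here as inputs.

```python
import numpy as np

def table_to_isocenter(p_table, r1, r2, r3, t):
    """(PX, PY, PZ)^T = (R3.R2.R1)^T . (PXt, PYt, PZt)^T + (TX, TY, TZ)^T"""
    r = r3 @ r2 @ r1
    return r.T @ np.asarray(p_table, dtype=float) + np.asarray(t, dtype=float)

def isocenter_to_table(p_iso, r1, r2, r3, t):
    """(PXt, PYt, PZt)^T = (R3.R2.R1) . ((PX, PY, PZ)^T - (TX, TY, TZ)^T)"""
    r = r3 @ r2 @ r1
    return r @ (np.asarray(p_iso, dtype=float) - np.asarray(t, dtype=float))
```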
The positioner coordinate system is defined in Section C.8.19.6.13.1.2 “Positioner Coordinate System” in PS3.3, where the positioner angles are represented as $(A_{p1}, A_{p2}, A_{p3})$.

A point $(P_{Xp}, P_{Yp}, P_{Zp})$ in the positioner coordinate system can be expressed as a point $(P_X, P_Y, P_Z)$ in the Isocenter coordinate system by applying the following transformation:

$(P_X, P_Y, P_Z)^T = (R_2 \cdot R_1)^T \cdot (R_3^T \cdot (P_{Xp}, P_{Yp}, P_{Zp})^T)$

And inversely, a point $(P_X, P_Y, P_Z)$ in the Isocenter coordinate system can be expressed as a point $(P_{Xp}, P_{Yp}, P_{Zp})$ in the positioner coordinate system by applying the inverse transformation (the rotations are orthogonal, so the inverse is the transpose):

$(P_{Xp}, P_{Yp}, P_{Zp})^T = R_3 \cdot ((R_2 \cdot R_1) \cdot (P_X, P_Y, P_Z)^T)$
The following concepts illustrate the model of X-Ray cone-beam projection:
The X-Ray incidence represents the vector going from the X-Ray source to the Isocenter.
The receptor plane is the plane perpendicular to the X-Ray incidence, at distance SID from the X-Ray source. It applies to both the image intensifier and the digital detector; in the case of a digital detector, it is equivalent to the detector plane.
The image coordinate system is represented by (o, u, v), where "o" is the projection of the Isocenter on the receptor plane.
The source-to-isocenter distance is called ISO. The source-to-image-receptor distance is called SID.
The projection of a point (PXp, PYp, PZp) in the positioner coordinate system is represented as a point (Pu, Pv) in the image coordinate system.
A point (PXp, PYp, PZp) in the positioner coordinate system (Op, Xp, Yp, Zp) can be expressed as a point (Pu, Pv) in the image coordinate system by applying the following transformation:
Pu = PXp · SID / (ISO − PYp)
Pv = PZp · SID / (ISO − PYp)
The ratio SID / (ISO − PYp) is also called the magnification ratio of this particular point.
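The projection and its magnification ratio can be sketched as follows. This is a non-normative illustration; the correspondence of u with Xp and of v with Zp, and the source lying on the +Yp axis at distance ISO from the Isocenter, are assumptions consistent with the formulas above.

    def project_to_image(p_pos, ISO, SID):
        # Cone-beam projection of a point (Xp, Yp, Zp) given in positioner
        # coordinates onto the image plane (u, v).
        x, y, z = p_pos
        mag = SID / (ISO - y)     # magnification ratio of this point
        return mag * x, mag * z   # (Pu, Pv)

    # A point at the Isocenter projects onto the origin "o":
    print(project_to_image((0.0, 0.0, 0.0), ISO=750.0, SID=1000.0))  # (0.0, 0.0)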
The following concepts illustrate the model of the X-Ray detector:
Physical detector array (or physical detector matrix) is the matrix composed of physical detector elements .
Not all the detector elements are activated during an X-Ray exposure. The active detector elements are in the detector active area, which can be equal to or smaller than the physical detector area.
Physical detector element coordinates represented as (idet, jdet) are columns and rows of the physical detector element in the physical detector array.
Detector TLHC element is the detector element in the Top Left Hand Corner of the physical detector array and corresponds to (idet, jdet) = (0,0).
The attribute Detector Element Physical Size (0018,7020) represents the physical dimensions in mm of a detector element in the row and column directions.
The attribute Detector Element Spacing (0018,7022) contains the two values Djdet and Didet, which represent the physical distance in mm between the centers of each physical detector element:
The attribute Detector Element Physical Size (0018,7020) may be different from the Detector Element Spacing (0018,7022) due to the presence of spacing material between detector elements.
The attribute Position of Isocenter Projection (0018,9430) contains the point (ISO_Pidet, ISO_Pjdet), which represents the projection of the Isocenter on the detector plane, measured as the offset from the center of the detector TLHC element. It is measured in physical detector elements.
The attribute Imager Pixel Spacing (0018,1164) contains the two values Dj and Di, which represent the physical distance measured at the receptor plane between the centers of each pixel of the FOV image:
The zoom factor represents the ratio between Imager Pixel Spacing (0018,1164) and Detector Element Spacing (0018,7022). It may be different from the detector binning (e.g., when a digital zoom has been applied to the pixel data).
The following concepts illustrate the model of the field of view:
The field of view (FOV) corresponds to a region of the physical detector array that has been irradiated.
The field of view image is the matrix of pixels of a rectangle circumscribing the field of view. Each pixel of the field of view image may be generated by multiple physical detector elements.
The attribute FOV Origin (0018,7030) contains the two values (FOV idet, FOV jdet ), which represent the offset of the center of the detector element at the TLHC of the field of view image, before rotation or flipping, from the center of the detector TLHC element. It is measured in physical detector elements. FOV Origin = (0,0) means that the detector TLHC element is at the TLHC of a rectangle circumscribing the field of view.
The attribute FOV Dimension (0018,9461) contains the two values FOV row dimension and FOV column dimension, which represent the dimension of the FOV in mm:
FOV pixel coordinates represented as (i, j) are columns and rows of the pixels in the field of view image.
FOV TLHC pixel is the pixel in the Top Left Hand Corner of the field of view image and corresponds to (i, j) = (0,0).
As an example, the point (ISO_Pi, ISO_Pj) representing the projection of the Isocenter on the field of view image, and measured in FOV pixels as the offset from the center of the FOV TLHC pixel, can be calculated as follows:
ISO_Pi = (ISO_Pidet − FOVidet) · Didet / Di − (1 − Didet / Di) / 2
ISO_Pj = (ISO_Pjdet − FOVjdet) · Djdet / Dj − (1 − Djdet / Dj) / 2
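A minimal sketch of this conversion; the sample attribute values passed in are hypothetical.

    def isocenter_in_fov_pixels(iso_det, fov_origin, d_det, d_img):
        # iso_det    -- (ISO_Pidet, ISO_Pjdet), Position of Isocenter Projection
        # fov_origin -- (FOVidet, FOVjdet), FOV Origin (0018,7030)
        # d_det      -- (Didet, Djdet), Detector Element Spacing (0018,7022)
        # d_img      -- (Di, Dj), Imager Pixel Spacing (0018,1164)
        return tuple((iso - fov) * dd / di - (1 - dd / di) / 2
                     for iso, fov, dd, di in zip(iso_det, fov_origin,
                                                 d_det, d_img))

    # With a zoom factor of 2 (Di = 2 * Didet) and FOV Origin at (100, 100):
    print(isocenter_in_fov_pixels((612, 640), (100, 100),
                                  (0.2, 0.2), (0.4, 0.4)))  # (255.75, 269.75)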
The attribute FOV Rotation (0018,7032) represents the clockwise rotation in degrees of the field of view relative to the physical detector.
The attribute FOV Horizontal Flip (0018,7034) defines whether or not a horizontal flip has been applied to the field of view after rotation relative to the physical detector.
The attribute Pixel Data (7FE0,0010) contains the FOV image after rotation and/or flipping.
Pixel data coordinates are represented as the pair (c, r), where c is the column number and r is the row number.
The X-Ray Projection Pixel Calibration Macro of the Section C.8.19.6.9 “X-Ray Projection Pixel Calibration Macro” in PS3.3 specifies the attributes of the image pixel size calibration model in X-Ray conic projection, applicable to the Enhanced XA SOP Class.
In this model, the table plane is specified relative to the Isocenter. As a default value for the attribute Distance Object to Table Top (0018,9403), half of the patient thickness may be used.
Oblique projections are considered in this model by the encoding of the attribute Beam Angle (0018,9449), which can be calculated from Positioner Primary Angle (0018,1510) and Positioner Secondary Angle (0018,1511) as follows:
For Patient Positions HFS, FFS, HFP, FFP: Beam Angle = arccos(|cos(Positioner Primary Angle)| * |cos(Positioner Secondary Angle)|).
For Patient Positions HFDR, FFDR, HFDL, FFDL: Beam Angle = arccos(|sin(Positioner Primary Angle)| * |cos(Positioner Secondary Angle)|).
The resulting pixel spacing, defined as DPx · SOD / SID, is encoded in the attribute Object Pixel Spacing in Center of Beam (0018,9404). Its accuracy is practically limited to a beam angle range of +/- 60 degrees.
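The two Beam Angle formulas and the resulting pixel spacing can be sketched as follows; this is illustrative only, and the function names are hypothetical.

    import numpy as np

    def beam_angle(primary_deg, secondary_deg, patient_position):
        # Beam Angle (0018,9449) from the positioner angles, per the
        # formulas above (degrees in, degrees out)
        p, s = np.radians([primary_deg, secondary_deg])
        if patient_position in ("HFS", "FFS", "HFP", "FFP"):
            return np.degrees(np.arccos(abs(np.cos(p)) * abs(np.cos(s))))
        if patient_position in ("HFDR", "FFDR", "HFDL", "FFDL"):
            return np.degrees(np.arccos(abs(np.sin(p)) * abs(np.cos(s))))
        raise ValueError("patient position not covered by the formulas above")

    def object_pixel_spacing(d_px, SOD, SID):
        # Object Pixel Spacing in Center of Beam (0018,9404) = DPx * SOD / SID
        return d_px * SOD / SID

    print(beam_angle(30.0, 20.0, "HFS"))             # about 35.5 degrees
    print(object_pixel_spacing(0.2, 750.0, 1000.0))  # 0.15 mm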
This chapter illustrates the relationships between the X-Ray generation parameters:
Values per frame are represented by the following symbols in this section:
In the Frame Content Sequence (0020,9111):
· Frame Acquisition Duration (0018,9220) in ms of frame « i » = Dti
In the Frame Acquisition Sequence (0018,9417):
· KVP (0018,0060) of frame « i » = kVpi
· X-Ray Tube Current in mA (0018,9330) of frame « i » = mAi
The following shows an example of calculation of the cumulative and average values per image relative to the values per-frame:
This chapter describes the concepts of the display pipeline.
The X-Ray intensity (I) at the image receptor is inversely proportional to the exponential function of the product of the object's thickness (x) traversed by the X-Ray beam and its effective absorption coefficient (μ): I ~ e^(−μ·x).
The X-Ray intensity that comes into contact with the image receptor is converted to the stored pixel data by applying specific signal processing. As a first step in this conversion, the amplitude of the digital signal out of the receptor is linearly proportional to the X-Ray intensity. In further steps, this digital signal is processed in order to optimize the rendering of the objects of interest present on the image.
The Enhanced XA IOD includes attributes that describe the characteristics of the stored pixel data, making it possible to relate the stored pixel data to the original X-Ray intensity independently of whether the image is "original" or "derived".
When the attribute Pixel Intensity Relationship (0028,1040) equals LIN, the stored pixel values are linearly proportional to the X-Ray intensity.
When the attribute Pixel Intensity Relationship (0028,1040) equals LOG, the stored pixel values are proportional to the logarithm of the X-Ray intensity.
In order to ensure consistency of the displayed stored pixel data, the standard display pipeline is defined.
On the other hand, the stored pixel data is also used by applications for further analysis like segmentation, structure detection and measurement, or for display optimization like mask subtraction. For this purpose, the Pixel Intensity Relationship LUT described in Section C.7.6.16.2.13.1 “Pixel Intensity Relationship LUT” in PS3.3 defines a transformation LUT enabling the conversion from the stored pixel data values to a linear, logarithmic or other relationship.
For instance, if the image processing applied to the X-Ray intensity before storing the Pixel Data allows a return to a linear relationship, then a Pixel Intensity Relationship LUT with the function "TO_LINEAR" is provided. The following figure shows some examples of image processing, and the corresponding description of the relationship between the stored pixel data and the X-Ray intensity.
No solution is proposed in the Enhanced XA SOP Class to standardize the subtractive display pipeline. As the Enhanced XA image is not required to be stored in a LOG relationship, the Pixel Intensity Relationship LUT may be provided to convert the images to the logarithmic space before subtraction. The creation of subtracted data to be displayed is a manufacturer-dependent function.
As an example of subtractive display, the pixel values are first transformed to a LOG relationship, and then subtracted to bring the background level to zero and finally expanded to displayable levels by using a non-linear function EXP similar to an exponential.
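A schematic sketch of such a subtractive display chain is shown below. The TO_LOG conversion and the EXP expansion are manufacturer-dependent, so both the `to_log` callable and the expansion shape used here are placeholders, not normative definitions.

    import numpy as np

    def subtracted_display(contrast_frame, mask_frame, to_log):
        # 1) transform both frames to a LOG relationship (e.g., via a
        #    TO_LOG Pixel Intensity Relationship LUT); placeholder callable
        log_c = to_log(contrast_frame.astype(np.float64))
        log_m = to_log(mask_frame.astype(np.float64))
        # 2) subtract, bringing the background level to zero
        diff = log_c - log_m
        # 3) expand to displayable levels with an EXP-like non-linear
        #    function (placeholder shape only)
        expanded = np.sign(diff) * np.expm1(np.abs(diff))
        return np.clip(128 + 64 * expanded, 0, 255).astype(np.uint8)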
This chapter describes different scenarios and application cases organized by domains of application. Each application case is basically structured in four sections:
1) User Scenario : Describes the user needs in a specific clinical context, and/or a particular system configuration and equipment type.
2) Encoding Outline : Describes the specificities of the XA SOP Class and the Enhanced XA SOP Class related to this scenario, and highlights the key aspects of the Enhanced XA SOP Class to address it.
3) Encoding Details : Provides detailed recommendations of the key attributes of the object(s) to address this particular scenario.
4) Example : Presents a typical example of the scenario, with realistic sample values, and gives details of the encoding of the key attributes of the object(s) to address this particular scenario. In the values of the attributes, the text in bold face indicates specific attribute values; the text in italic face gives an indication of the expected value content.
This application case is related to the results of an X-Ray acquisition and parallel ECG data recording on the same equipment.
The image acquisition system records ECG signals simultaneously with the acquisition of the Enhanced XA multi-frame image. All the ECG signals are acquired at the same sampling rate.
The acquisition of both image and ECG data are not triggered by an external signal.
The information can be exchanged via Offline Media or Network.
Synchronization between the ECG Curve and the image frames allows synchronized navigation in each of the data sets.
The General ECG IOD is used to store the waveform data recorded in parallel to the image acquisition encoded as Enhanced XA IOD.
The Synchronization Module is used to specify a common time-base.
The option of encoding trigger information is not recommended in this case.
The solution assumes implementation on a single imaging modality; therefore, mutual UID references between the General ECG and Enhanced XA objects are recommended. This will allow faster access to the related objects.
This section provides detailed recommendations of the key attributes to address this particular scenario.
Table FFF.2.1-1. Enhanced X-Ray Angiographic Image IOD Modules
| C.7.3.1 | The General Series Module Modality (0008,0060) attribute description in PS3.3 enforces the storage of waveform and pixel data in different Series IEs. |
| C.7.4.2 | Specifies that the image acquisition is synchronized. Will have the same content as the General ECG SOP Instance. |
| C.7.5.1 | |
| C.7.6.18.1 | Contains information on the type of relationship between the ECG waveform and the image. |
| C.8.19.2 | Contains UID references to the related General ECG SOP Instance. |
Table FFF.2.1-2. Enhanced XA Image Functional Group Macros
| C.7.6.16.2.2 | Provides timing information to correlate each frame to the recorded ECG samples. |
| C.7.6.16.2.7 | Provides time relationships between the angiographic frames and the cardiac cycle. |
The usage of this Module is recommended to encode a "synchronized time" condition.
The specifics of synchronization triggers are not part of this scenario.
Table FFF.2.1-3. Synchronization Module Recommendations
The usage of this Module is recommended to assure that the image contains identical equipment identification information as the referenced General ECG SOP Instance.
The usage of this module is recommended to indicate that the ECG is not used to trigger the X-Ray acquisition, but rather to time-relate the frames to the ECG signal.
The usage of this module is recommended to reference from the image object to the related General ECG SOP Instance that contains the ECG data recorded simultaneously.
Table FFF.2.1-5. Enhanced XA/XRF Image Module Recommendations
| Reference to the "General ECG SOP Instance" acquired in conjunction with this image. Contains a single Item. |
| "1.2.840.10008.5.1.4.1.1.9.1.2", i.e., a reference to a General ECG SOP Instance. |
| CID 7004 “Waveform Purposes of Reference” is used to identify a clear reason for the reference. |
If there is a specific ECG analysis that determines the time between the R-peaks and the angiographic frames, the usage of this macro is recommended.
As the frames are acquired at a frame rate independent of cardiac phases, this macro is used in a "per frame functional group" to encode the position of each frame relative to its prior R-peak.
In this scenario the timing information is important to correlate each frame to the recorded ECG.
If there is a specific ECG analysis, this macro allows the encoding of the position in the cardiac cycle that is most representative of each frame.
The following table gives recommendations for usage in this scenario.
This IOD will encode the recorded ECG waveform data acquired by the image acquisition system. Since this is not a dedicated waveform modality device, appropriate defaults for most of the data have to be recommended to fulfill the requirements of PS3.3.
Table FFF.2.1-7. General ECG IOD Modules
A new Series is created to set the modality "ECG" for the waveform.
Most of the attributes are aligned with the contents of the related series level attributes in the image object.
The Related Series Sequence (0008,1250) is not recommended, because an instance-level relationship can be applied to reference the image instances.
The usage of this Module is recommended to encode a "synchronized time" condition, which was previously implicit when using the curve module.
Table FFF.2.1-9. Synchronization Module Recommendations
The usage of this Module is recommended to assure that the General ECG SOP Instance contains identical equipment identification information as the referenced image objects.
The usage of this module is recommended to relate the acquisition time of the waveform data to the image acquired simultaneously.
The module additionally includes an instance level reference to the related image.
Table FFF.2.1-10. Waveform Identification Module Recommendations
The usage of this module is a basic requirement of the General ECG IOD.
Any application displaying the ECG is recommended to scale the ECG contents to its output capabilities (especially the amplitude resolution).
If more than one ECG signal needs to be recorded, the grouping of the channels into multiplex groups depends on the ECG sampling rate. All the channels encoded in the same multiplex group have the same sampling rate.
Table FFF.2.1-11. Waveform Module Recommendations
In the two following examples, the Image Modality acquires a multi-frame image of the coronary arteries lasting 4 seconds, at 30 frames per second.
Simultaneously, the same modality acquires two channels of ECG from a 2-Lead ECG (the first channel on Lead I and the second on Lead II), starting one second before the image acquisition starts and lasting 5 seconds, with a sampling frequency of 300 Hz and 16-bit signed encoding, resulting in 1500 samples per channel. The first ECG sample is 10 ms after the nominal start time of the ECG acquisition. Both ECG channels are sampled simultaneously. The time skew of both channels is 0 ms.
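With both objects on the same time-base, a viewing application can relate any frame to the nearest ECG sample by simple arithmetic. A sketch using the timing values of this example (the variable names are illustrative):

    SAMPLING_FREQUENCY = 300.0   # Hz
    FIRST_SAMPLE_OFFSET = 0.010  # s: first sample 10 ms after nominal start
    IMAGE_START = 1.0            # s: image starts 1 s after the ECG
    FRAME_RATE = 30.0            # frames per second
    NUM_FRAMES = 120             # 4 s at 30 frames per second

    # ECG sample index corresponding to the acquisition time of each frame
    frame_to_sample = [
        round((IMAGE_START + f / FRAME_RATE - FIRST_SAMPLE_OFFSET)
              * SAMPLING_FREQUENCY)
        for f in range(NUM_FRAMES)
    ]
    # frame_to_sample[0] == 297, frame_to_sample[119] == 1487 (of 1500)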
In this example, the Enhanced XA image does not contain information of the cardiac cycle phases.
The attributes that define the two different SOP Instances (Enhanced XA and General ECG) of this example are described in Figure FFF.2.1-3.
In this example, the heart rate is 75 beats per minute. As the image is acquired during a period of four seconds, it contains five heartbeats.
The ECG signal is analyzed to determine the R-peaks and to relate them to the angiographic frames. Thus the Enhanced XA image contains information of this relationship between the ECG signal and the frames.
The attributes that define the two different SOP Instances (Enhanced XA and General ECG) of this example are described in the figures of the previous example, in addition to the attributes described in Figure FFF.2.1-5.
These application cases are related to the results of an X-Ray acquisition and simultaneous ECG data recording on different equipment. The concepts of synchronized time and triggers are involved.
The two modalities may share references on the various entity levels below the Study, i.e., Series and Image UID references, using non-standard mechanisms. Nothing in the workflow requires such references. For more details about UID referencing, refer to the previous application case "ECG Recording at Acquisition Modality" (see Section FFF.2.1.1).
If both modalities share a common data store, a dedicated post-processing station can be used for combined display of waveform and image information, and/or combined functional analysis of signals and pixel data to time relate the cardiac cycle phases to the angiographic frames. The storage of the waveform data and images to PACS or media will preserve the combined functional capabilities.
In these application cases, this post-processing activity is outside the scope of the acquisition modalities. For more details about the relationship between cardiac cycle and angiographic frames, refer to the previous application case "ECG Recording at Acquisition Modality" (see Section FFF.2.1.1).
Image runs are taken by the image acquisition modality. Waveforms are recorded by the waveform acquisition modality. Both modalities are time synchronized via NTP. The time server may be one of the modalities or an external server. The resulting objects will include the time synchronization concept.
Dedicated Waveform IODs exist to store captured waveforms. In this case, General ECG IOD is used to store the waveform data.
Depending on the degree of coupling of the modalities involved, the usage of references on the various entity levels can vary. While there is a standard DICOM service to share Study Instance UID between modalities (i.e., Worklist), there are no standard DICOM services for sharing references below the Study level, so any UID reference to the Series and Image levels is shared in a proprietary manner.
With the Synchronization Module information, the method to implement the common time-base can be documented.
The Enhanced XA IOD provides a detailed "per frame" timing to encode timing information related to each frame.
This section provides detailed recommendations of the key attributes to address this particular scenario.
Table FFF.2.1-12. Enhanced X-Ray Angiographic Image IOD Modules
Table FFF.2.1-13. Enhanced XA Image Functional Group Macros
| C.7.6.16.2.2 | Provides timing information to correlate each frame to any externally recorded waveform. |
This Module is used to document the synchronization of the two modalities.
Table FFF.2.1-14. Synchronization Module Recommendations
This module includes the acquisition date and time of the image, which is on the same time base as the acquisition date and time of the ECG in this scenario.
The ECG recording system will take care of filling in the waveform-specific contents in the General ECG SOP Instance. This section will address only the specifics for attributes related to synchronization.
Table FFF.2.1-16. Waveform IOD Modules
| C.7.4.2 | Specifies that the ECG acquisition is time synchronized with the image acquisition. Will have the same content as the Enhanced XA SOP Instance. See Section FFF.2.1.2.1.3.1.1. |
| C.10.8 | Provides timing information to correlate the waveform data to any externally recorded image. |
FFF.2.1.2.1.3.2.1 Waveform Identification Recommendations
The usage of this module is recommended to relate the acquisition time of the waveform data to the related image(s).
In this example, there are two modalities that are synchronized with an external clock via NTP. The Image Modality acquires three multi-frame images within the same Study and same Series. Simultaneously, the Waveform Modality acquires the ECG non-stop during the same period, leading to one single Waveform SOP Instance in a different Study.
In this example, there is no UID referencing capability between the two modalities.
The attributes that define the relevant content in the two different SOP Instances (Enhanced XA and General ECG) are described in Figure FFF.2.1-8.
Image runs are taken by the image acquisition modality. Waveforms are recorded by the waveform recording modality. Both modalities are time synchronized via NTP. The acquisition in one modality is triggered by the other modality. The resulting objects will include the time synchronization and trigger synchronization concepts.
There are two cases depending on the triggering modality:
1) At X-Ray start, the image modality sends a trigger signal to the waveform modality.
2) The waveform modality sends trigger signals to the image modality to start the acquisition of each frame.
Dedicated Waveform IODs exist to store captured waveforms. In this case, General ECG IOD is used to store the waveform data.
With the Synchronization Module information, the method to implement the triggers can be documented.
The Enhanced XA IOD provides per-frame encoding of the timing information related to each frame.
This section provides detailed recommendations of the key attributes to address this particular scenario.
The usage of this Module is recommended to document the triggering role of the image modality.
Table FFF.2.1-21. Synchronization Module Recommendations
This module includes the acquisition date and time of the image.
The recording system will take care of filling in the waveform-specific contents, based on the IOD relevant for the type of system (e.g., EP, Hemodynamic, etc.). This section will address only the specifics for attributes related to synchronization.
Table FFF.2.1-24. Waveform IOD Modules
The usage of this Module is recommended to document the triggering role of the waveform modality.
Table FFF.2.1-25. Synchronization Module Recommendations
This module includes the acquisition date and time of the waveform, which may be different from the acquisition date and time of the image in this scenario.
The usage of this module is recommended to encode the time relationship between the trigger signal and the ECG samples.
Table FFF.2.1-27. Waveform Module Recommendations
In this example, there are two modalities that are synchronized with an external clock via NTP. The Image Modality acquires three multi-frame images within the same Study and same Series. Simultaneously, the Waveform Modality acquires the ECG non-stop during the same period, leading to one single Waveform SOP Instance in a different Study. The ECG sampling frequency is 300 Hz with 16-bit signed encoding, resulting in 1500 samples per channel. The first ECG sample is acquired at the nominal start time of the ECG acquisition.
The image modality sends a trigger to the waveform modality at the start time of each of the three images. This signal is stored in one channel of the waveform modality, together with the ECG signal.
In this example, there is no UID referencing capability between the two modalities.
The attributes that define the relevant content in the two different SOP Instances (Enhanced XA and General ECG) are described in Figure FFF.2.1-11.
In this example, there are two modalities that are synchronized with an external clock via NTP.
The Image Modality starts the X-Ray image acquisition and simultaneously the Waveform Modality acquires the ECG and analyzes the signal to determine the phases of the cardiac cycles. At each cycle, the waveform modality sends a trigger to the image modality to start the acquisition of a frame. This trigger is stored in one channel of the waveform modality, together with the ECG signal.
The ECG sampling frequency is 300 Hz with 16-bit signed encoding, resulting in 1500 samples per channel. The first ECG sample is acquired 10 ms after the nominal start time of the ECG acquisition.
In this example, there is no UID referencing capability between the two modalities.
The attributes that define the relevant content in the two different SOP Instances (Enhanced XA and General ECG) are described in Figure FFF.2.1-13.
This section provides information on the encoding of the movement of the X-Ray Positioner during the acquisition of a rotational angiography.
The related image presentation parameters of the rotational acquisition that are defined in the Enhanced XA SOP Class, such as the mask information of subtracted display, are described in further sections of this annex.
The multi-frame image acquisition is performed during a continuous rotation of the X-Ray Positioner, starting from the initial incidence and acquiring frames in a given angular direction at variable angular steps and variable time intervals.
Typically such rotational acquisition is performed with the purpose of further 3D reconstruction. The rotation axis is not necessarily the patient head-feet direction, which may lead to images where the patient is not heads-up oriented.
There may be one or more rotations of the X-Ray Positioner during the same image acquisition, performed by following different patterns, such as:
The XA SOP Class encodes the absolute positioner angles as the sum of the angle of the first frame and the increments relative to the first frame. The Enhanced XA SOP Class encodes per-frame absolute angles.
In the XA SOP Class, the angles are always encoded with respect to the patient (so-called anatomical angles), and the image is assumed to be patient-oriented (i.e., heads-up display). In the case of a positioner rotation around an axis oblique to the patient (not aligned with the head-feet axis), it is not possible to encode the rotation of the image necessary for 3D reconstruction.
The Enhanced XA SOP Class encodes the positioner angles with respect to the patient as well as with respect to a fixed coordinate system of the equipment.
This section provides detailed recommendations of the key attributes to address this particular scenario.
Table FFF.2.1-29. Enhanced XA Image Functional Group Macros
| C.8.19.6.10 | |
| C.8.19.6.13 | Specifies the angles of the positioner per-frame in equipment coordinates for further applications based on the acquisition geometry (e.g., 3D reconstruction, registration…). |
The usage of this module is recommended to define the type of positioner.
This macro is used in the per-frame context in this scenario.
If the value of the C-arm Positioner Tabletop Relationship (0018,9474) is NO, the following macro may not be provided by the acquisition modality. This macro is used in the per-frame context in this scenario.
Table FFF.2.1-32. X-Ray Isocenter Reference System Macro Example
In this example, the patient is on the table, in position "Head First Prone". The table horizontal, tilt and rotation angles are equal to zero.
The positioner performs a rotation of 180 deg from the left to the right side of the patient, with the image detector going above the back of the patient, around an axis parallel to the head-feet axis of the patient.
The encoded values of the key attributes of this example are shown in Figure FFF.2.1-15.
This section provides information on the encoding of the movement of the X-Ray Table during the acquisition of a stepping angiography.
The related image presentation parameters of the stepping acquisition that are defined in the Enhanced XA SOP Class, such as the mask information of subtracted display, are described in further sections of this annex.
The multi-frame image acquisition is performed during a movement of the X-Ray Table, starting from the initial position and acquiring frames in a given direction along the Z axis of the table at variable steps and variable time intervals.
There may be one or more "stepping movements" of the X-Ray Table during the same image acquisition, leading to one or more instances of the Enhanced XA SOP Class. The stepping may be performed by different patterns, such as:
The XA SOP Class encodes table position as increments relative to the position of the first frame, while the position of the first frame is not encoded.
The Enhanced XA SOP Class encodes per-frame absolute table vertical, longitudinal and lateral position, as well as table horizontal rotation angle, table head tilt angle and table cradle tilt angle.
This allows registration between separate multi-frame images in the same table frame of reference, as well as accounting for the magnification ratio and other aspects of the geometry during registration. Issues of patient motion during the acquisition of the images are not addressed in this scenario.
This section provides detailed recommendations of the key attributes to address this particular scenario.
Table FFF.2.1-33. Enhanced X-Ray Angiographic Image IOD Modules
| C.8.19.3 | Specifies the relationship between the table and the positioner. |
Table FFF.2.1-34. Enhanced XA Image Functional Group Macros
| C.8.19.6.11 | |
| C.8.19.6.13 | Specifies the position and the angles of the table per-frame in equipment coordinates, for further applications based on the acquisition geometry (e.g., registration…). |
The usage of this module is recommended to specify the relationship between the table and the positioner.
This macro is used in the per-frame context in this scenario.
Table FFF.2.1-36. X-Ray Table Position Macro Example
If the value of the C-arm Positioner Tabletop Relationship (0018,9474) is NO, the following macro may not be provided by the acquisition modality. This macro is used in the per-frame context in this scenario.
Table FFF.2.1-37. X-Ray Isocenter Reference System Macro Example
In this example, the patient is on the table in position "Head First Supine". The table is tilted by -10 degrees, with the head of the patient below the feet, and the image detector is parallel to the tabletop plane. The table cradle and rotation angles are equal to zero.
The image acquisition is performed during a movement of the X-Ray Table in the tabletop plane, at constant speed over a distance of one meter, acquiring frames from the abdomen to the feet of the patient in one stepping movement for non-subtracted angiography.
The table is related to the C-arm positioner so that the coordinates of the table position are known in the Isocenter reference system. This allows the projection magnification of the tabletop plane with respect to the detector plane to be determined.
The encoded values of the key attributes of this example are shown in Figure FFF.2.1-18.
This section provides information on the encoding of the "sensitive areas" used for regulation control of the X-Ray generation, for an image that resulted from applying these X-Rays.
The user a) takes previously selected regulation settings, b) manually enters regulation settings, or c) automatically gets computer-calculated regulation settings from requested procedures.
Acquired images are networked or stored in offline media.
Later, image quality problems are identified, and the user wants to investigate their causes by assessing the positions of the sensing regions.
The Enhanced XA IOD includes a module to supply information about active regulation control sensing fields, their shape and position relative to the pixel matrix.
This section provides detailed recommendations of the key attributes to address this particular scenario.
Table FFF.2.1-38. Enhanced XA Image Functional Group Macros
| C.8.19.6.3 | Specifies the shape and size of the sensing regions in pixels, as well as their position relative to the top left pixel of the image. |
This macro is recommended to encode details about sensing regions.
If the position of the sensing regions is fixed during the multi-frame acquisition, this macro is used in the shared context.
If the position of the sensing regions was changed during the multi-frame acquisition, this macro is encoded per-frame to reflect the individual positions.
The same number of regions is typically used for all the frames of the image. However it is technically possible to activate or deactivate some of the regions during a given range of frames, in which case this macro is encoded per-frame.
In this section, two examples are given.
The first example shows how three sensing regions are encoded: 1) central (circular), 2) left (rectangular) and 3) right (rectangular).
The encoded values of the key attributes of this example are shown in Figure FFF.2.1-20.
The second example shows the same regions, but the field of view region encoded in the Pixel Data matrix has been shifted by 240 pixels to the right and 310 pixels down; thus the left rectangular sensing region is outside the Pixel Data matrix, and both rectangular regions overlap the top row of the image matrix.
Figure FFF.2.1-21. Example of X-Ray Exposure Control Sensing Regions partially outside the Pixel Data matrix
The encoded values of the key attributes of this example are shown in Figure FFF.2.1-22.
This section provides information on the encoding of the image detector parameters and field of view applied during the X-Ray acquisition.
The user selects a given size of the field of view before starting the acquisition. This size can be smaller than the size of the Image Detector.
The position of the field of view in the detector area changes during the acquisition in order to focus on an object of interest.
The acquired image is transferred over the network or stored in offline media; then the image is:
Displayed and reviewed in cine mode, and the field of view area needs to be displayed on the viewing screen;
Used for quality assurance, to relate the pixels of the stored image to the detector elements, for instance to understand the image artifacts due to detector defects;
Used to measure the dimension of organs or other objects of interest;
Used to determine the position in the 3D space of the projection of the objects of interest.
The XA SOP Class does not encode some of the information needed to fully characterize the geometry of the conic projection acquisition, such as the position of the projection of the Isocenter on the FOV area. Indeed, the XA SOP Class assumes that the Isocenter is projected onto the middle of the FOV.
The Enhanced XA SOP Class encodes the position of the projection of the Isocenter on the detector, as well as specific FOV attributes (origin, rotation, flip), per-frame or shared. It reuses existing attributes from DX to specify information on the digital detector and the FOV. It also allows the image intensifier to be differentiated from the digital detector, and defines conditions on attributes depending on which of the two is used.
This section provides detailed recommendations of the key attributes to address this particular scenario.
The usage of this module is recommended to specify the type and details of the receptor.
Distance Receptor Plane to Detector Housing (0018,9426) is a positive value, except in the case of an image intensifier, where the receptor plane is a virtual plane located outside the detector housing whose position depends on the magnification factor of the intensifier.
The Distance Receptor Plane to Detector Housing (0018,9426) may be used to calculate the pixel size of the plane in the patient when markers are placed on the detector housing.
When the X-Ray Receptor Type (0018,9420) equals "IMG_INTENSIFIER" this module specifies the type and characteristics of the image intensifier.
The Intensifier Size (0018,1162) is defined as the physical diameter of the maximum active area of the image intensifier. The active area is the region of the input phosphor screen that is projected on the output phosphor screen. The image intensifier device may be configured for several predefined active areas to allow different levels of magnification.
The active area is described by the Intensifier Active Shape (0018,9427) and the Intensifier Active Dimension(s) (0018,9428).
The field of view area is a region equal to or smaller than the active area, and is defined as the region that is effectively irradiated by the X-Ray beam when there is no collimation. The stored image is the image resulting from digitizing the field of view area.
There is no attribute that relates the FOV origin to the intensifier. It is commonly assumed that the FOV area is centered in the intensifier.
The position of the projection of the isocenter on the active area is undefined. It is commonly understood that the X-Ray positioner is calibrated so that the isocenter is projected in the approximate center of the active area, and the field of view area is centered in the active area.
When the X-Ray Receptor Type (0018,9420) equals "DIGITAL_DETECTOR" this module specifies the type and characteristics of the image detector.
The size and pixel spacing of the digital image generated at the output of the digital detector are not necessarily equal to the size and element spacing of the detector matrix. The detector binning is defined as the ratio between the pixel spacing of the detector matrix and the pixel spacing of the digital image.
If the detector binning is higher than 1.0, several elements of the detector matrix contribute to the generation of one single digital pixel.
The digital image may be processed, cropped and resized in order to generate the stored image. The schema below shows these two steps of the modification of the pixel spacing between the detector physical elements and the stored image.
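A minimal sketch of the resulting pixel spacing computation; the parameter names and sample values are illustrative only.

    def stored_pixel_spacing(detector_element_spacing, binning, resize):
        # Step 1 - detector binning: the digital image spacing grows
        # with the binning factor
        digital_spacing = detector_element_spacing * binning
        # Step 2 - digital resize: a resize factor < 1.0 (downsizing)
        # produces fewer, larger stored pixels
        return digital_spacing / resize

    # e.g., 0.2 mm elements, binning of 2, resizing of 0.5 -> 0.8 mm pixels
    print(stored_pixel_spacing(0.2, binning=2.0, resize=0.5))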
Table FFF.2.1-43. X-Ray Detector Module Recommendations
The usage of this macro is recommended to specify the characteristics of the field of view.
When the field of view characteristics change across the multi-frame image, this macro is encoded on a per-frame basis.
The field of view region is defined by a shape, origin and dimension. The region of irradiated pixels corresponds to the interior of the field of view region.
When the X-Ray Receptor Type (0018,9420) equals "IMG_INTENSIFIER", the intensifier TLHC is undefined. Therefore the field of view origin cannot be related to the physical area of the receptor. It is commonly understood that the field of view area corresponds to the intensifier active area, but there is no definition in the DICOM standard that forces a manufacturer to do so. As a consequence, it is impossible to relate the position of the pixels of the stored area to the isocenter reference system.
Table FFF.2.1-44. X-Ray Field of View Macro Recommendations
The usage of this macro is recommended to specify the Imager Pixel Spacing.
When the field of view characteristics change across the multi-frame image, this macro is encoded on a per-frame basis.
In the case of an image intensifier, the Imager Pixel Spacing (0018,1164) may be non-uniform due to pincushion distortion, and this attribute then corresponds to a manufacturer-defined value (e.g., an average, or the value at the center of the image).
This example illustrates the encoding of the dimensions of the intensifier device, the intensifier active area and the field of view in case of image intensifier.
In this example, the diameter of the maximum active area is 410 mm. The image acquisition is performed with an electron lens that focuses the photoelectron beam inside the intensifier so that an active area 310 mm in diameter is projected onto the output phosphor screen.
The X-Ray beam is projected onto an area of the input phosphor screen 300 mm in diameter, and the corresponding area of the output phosphor screen is digitized into a matrix of 1024 x 1024 pixels. This results in a pixel spacing of 0.3413 mm for the digitized matrix.
The distance from the Receptor Plane to the Detector Housing in the direction from the intensifier to the X-Ray tube is 40 mm.
The encoded values of the key attributes of this example are shown in Figure FFF.2.1-25.
The following examples show three different ways to create the stored image from the same detector matrix.
The blue dotted-line squares represent the physical detector pixels;
The blue square represents the TLHC pixel of the physical detector area;
The purple square represents the physical detector pixel in whose center the Isocenter is projected;
The dark green square represents the TLHC pixel of the region of the physical detector that is exposed to X-Ray when there is no collimation inside the field of view;
The light green square represents the TLHC pixel of the stored image;
The thick black straight-line square represents the stored image, which is assumed to be the field of view area. The small thin black straight-line squares represent the pixels of the stored image;
The blue dotted-line arrow represents the FOV Origin (0018,7030);
The purple arrow represents the Position of Isocenter Projection (0018,9430).
Note that the detector active dimension is not necessarily the FOV dimension.
In the first example, there is neither binning nor resizing between the detector matrix and the stored image.
The encoded values of the key attributes of this example are shown in Figure FFF.2.1-26.
In the second example, there is a binning factor of 2 between the detector matrix and the digital image. There is no resizing between the digital image (binned) and the stored image.
The encoded values of the key attributes of this example are shown in Figure FFF.2.1-27.
In the third example, in addition to the binning factor of 2 between the detector matrix and the digital image, there is a resizing of 0.5 (downsizing) between the digital image (binned) and the stored image.
The encoded values of the key attributes of this example are shown in Figure FFF.2.1-28.
Note that the description of the field of view attributes (dimension, origin) is the same in these three examples. The field of view definition is independent from the binning and resizing processes.
This section provides information on the encoding of the presence and type of contrast bolus administered during the X-Ray acquisition.
The user performs image acquisition with injection of contrast agent during the X-Ray acquisition. Some frames are acquired without contrast, some others with contrast.
The type of contrast agent can be radio-opaque (e.g., iodine) or radio-transparent (e.g., CO2).
The information of the type of contrast and its presence or absence in the frames can be used by post-processing applications to set up e.g., vessel detection or image quality algorithms automatically.
The Enhanced XA SOP Class encodes the characteristics of the contrast agent(s) used during the acquisition of the image, including the type of absorption (radio-opaque or radio-transparent).
The Enhanced XA SOP Class also allows encoding the presence of contrast in a particular frame or set of frames, by encoding the Contrast/Bolus Usage per-frame.
This section provides detailed recommendations of the key attributes to address this particular scenario.
The usage of this module is recommended to specify the type and characteristics of the contrast agent administered.
The usage of this macro is recommended to specify the characteristics of the contrast per-frame.
Table FFF.2.1-48. Contrast/Bolus Usage Macro Recommendations
In this example, the user starts the X-Ray acquisition at 4 frames per second at 3:35 pm. After one second, the user starts the injection of 45 milliliters of the contrast agent Iodipamide (350 mg/ml, Cholographin, Bracco) at a flow rate of 15 ml/sec for three seconds, via the intra-arterial route. When the injection of contrast agent is finished, the user continues the X-Ray acquisition for two seconds until wash-out of the contrast agent.
There are two ways to determine the presence of contrast agent on the frames:
1) The injector is connected to the X-Ray acquisition system, and the presence of contrast agent is determined based on the injector start/stop signals and a preconfigured delay to allow the contrast to reach the artery of interest; or
2) The X-Ray system processes the images in real time and detects the presence or absence of contrast agent on the images.
In this example, the image acquired contains 25 frames: From frames 5 to 17, the contrast is being injected. From frames 8 to 23, the contrast is visible on the pixel data.
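A sketch of how the per-frame Contrast/Bolus Usage Macro items of this example could be built, assuming pydicom is available; the helper function is hypothetical.

    from pydicom.dataset import Dataset

    def contrast_usage_item(agent_number, administered, detected):
        # One item of the per-frame Contrast/Bolus Usage Macro (sketch)
        item = Dataset()
        item.ContrastBolusAgentNumber = agent_number        # (0018,9337)
        item.ContrastBolusAgentAdministered = administered  # (0018,9342)
        item.ContrastBolusAgentDetected = detected          # (0018,9343)
        return item

    # Frames 5-17: contrast being injected; frames 8-23: contrast visible
    per_frame_usage = [
        contrast_usage_item(1,
                            "YES" if 5 <= f <= 17 else "NO",
                            "YES" if 8 <= f <= 23 else "NO")
        for f in range(1, 26)
    ]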
The figure below shows the attributes of this example in a graphical representation of the multi-frame acquisition.
The encoded values of the key attributes of this example are shown in Figure FFF.2.1-30.
This section provides information on the encoding of the parameters related to the X-Ray generation.
The user performs X-Ray acquisitions during the examination. Some of them are dynamic acquisitions where the positioner and/or the table have moved between frames of the multi-frame image; the acquisition parameters, such as kVp, mA and pulse width, may change per frame to adapt to the different anatomical characteristics.
Later, quality assurance staff want to assess the X-Ray generation techniques in order to understand possible degradation of image quality, or to estimate the level of irradiation at different skin areas and body parts examined.
The XA SOP Class encodes the attributes kVp, mA and pulse duration as a unique value for the whole multi-frame image. For systems that can provide only average values of these attributes, this SOP Class is more appropriate.
The Enhanced XA SOP Class encodes per-frame kVp, mA and pulse duration; thus the estimated dose per frame can now be correlated to the positioner angles and table position of each frame.
In order to accurately estimate the dose per body area, other attributes are needed such as positioner angles, table position, SID, ISO distances, Field of View, etc.
This section provides detailed recommendations of the key attributes to address this particular scenario.
Table FFF.2.1-49. Enhanced X-Ray Angiographic Image IOD Modules
| C.8.19.3 | Specifies average values for the X-Ray generation techniques. |
The usage of this module is recommended to specify the average values of time, voltage and current applied during the acquisition of the multi-frame image.
It gives general information on the X-Ray radiation during the acquisition of the image. In the case of dynamic acquisitions, this module is not sufficient to estimate the radiation per body area, and additional per-frame information is needed.
Table FFF.2.1-51. XA/XRF Acquisition Module Recommendations
Note that the three attributes X-Ray Tube Current in mA (0018,9330), Exposure Time in ms (0018,9328) and Exposure in mAs (0018,9332) are mutually conditional on each other, but all three may be present. In this scenario it is recommended to include all three attributes.
The usage of this macro is recommended to specify the duration of each frame of the multi-frame image.
Note that this macro is allowed to be used only in a per-frame context, even if the pulse duration is constant for all the frames.
The usage of this macro is recommended to specify the values of voltage (kVp) and current (mA) applied for the acquisition of each frame of the multi-frame image.
If the system can provide only average values of kVp and mA, the usage of the X-Ray Frame Acquisition macro is not recommended; only the XA/XRF Acquisition Module is recommended.
If the system predefines the values of the kVp and mA to be constant during the acquisition, the usage of the X-Ray Frame Acquisition macro in a shared context is recommended in order to indicate that the value of kVp and mA is identical for each frame.
If the system is able to change the kVp and mA dynamically during the acquisition, the usage of the X-Ray Frame Acquisition macro in a per-frame context is recommended.
For more details, refer to Section FFF.1.4.
This application case provides information on how X-Ray acquisitions with variable time between frames can be organized by groups of frames to be reviewed with individual group settings.
The image acquisition system performs complex acquisition protocols with groups of frames to be displayed at different frame rates and others to be skipped.
Allow frame rates in viewing applications to be different from the acquired rates.
The XA IOD provides only one group of frames between start and stop trim.
The Enhanced XA/XRF IOD allows encoding of multiple groups of frames (frame collections) with dedicated display parameters.
The Enhanced XA IOD provides an exact acquisition time for each frame.
This section provides detailed recommendations of the key attributes to address this particular scenario.
Table FFF.2.2-1. Enhanced X-Ray Angiographic Image IOD Modules
| C.8.19.7 | Specifies the groups of frames and their display parameters. |
An example of a 4-position peripheral stepping acquisition with different frame rates is provided. One group contains only 2 frames (e.g., due to a fast contrast bolus) and will be skipped for display purposes.
The whole image is reviewed in looping mode:
The first group, from frames 1 to 17, is to be reviewed at 4 frames per second;
The second group, from frames 18 to 25, is to be reviewed at 2 frames per second;
The third group, of frames 26 and 27, is not to be displayed;
The fourth group, from frames 28 to 36, is to be reviewed at 1.5 frames per second.
The encoded values of the key attributes of this example are shown in Figure FFF.2.2-1.
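As an illustration, the four groups above could be encoded as Frame Display Sequence (0008,9458) items roughly as follows, assuming pydicom; the helper function is hypothetical, and the handling of the DISPLAY/SKIP flag reflects the XA/XRF Multi-frame Presentation Module as understood here.

    from pydicom.dataset import Dataset

    def display_group(start, stop, rate=None, skip=False):
        # One item of the Frame Display Sequence (0008,9458) (sketch)
        item = Dataset()
        item.StartTrim = start                # (0008,2142)
        item.StopTrim = stop                  # (0008,2143)
        if skip:
            item.SkipFrameRangeFlag = "SKIP"  # (0008,9460)
        else:
            item.SkipFrameRangeFlag = "DISPLAY"
            item.RecommendedDisplayFrameRateInFloat = rate  # (0008,9459)
        return item

    frame_display_sequence = [
        display_group(1, 17, rate=4.0),
        display_group(18, 25, rate=2.0),
        display_group(26, 27, skip=True),
        display_group(28, 36, rate=1.5),
    ]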
This section provides information on the encoding of the density and geometry characteristics of the stored pixel data and the ways to display it.
The image acquisition may be performed with a variety of settings on the detector image pre-processing component that modifies the way the gray levels are stored in the pixel data.
In particular, it may impact the relationship between the X-Ray intensity and the gray level stored (e.g., non-linear function), as well as the geometry of the X-Ray beam (e.g., pincushion distortion).
Based on the characteristics of the stored pixel data, the acquisition system determines automatically an optimal way to display the pixel data on a frame-by-frame basis, which is expected to be applied by the viewing applications.
The XA SOP Class encodes the VOI settings to be common to all the frames of the image. It also restricts the Photometric Interpretation (0028,0004) to MONOCHROME2.
The Enhanced XA SOP Class encodes per-frame VOI settings. Additionally it allows the Photometric Interpretation (0028,0004) to be MONOCHROME1 in order to display low pixel values in white while using window width and window center VOI. Other characteristics and settings can be defined, such as:
This section provides detailed recommendations of the key attributes to address this particular scenario.
Table FFF.2.3-1. Enhanced X-Ray Angiographic Image IOD Modules
Table FFF.2.3-2. Enhanced XA Image Functional Group Macros
| C.7.6.16.2.10 | Specifies the VOI transformation to be applied during display. |
| C.7.6.16.2.13 | Specifies the different LUTs to transform the stored pixel values to a given function of the X-Ray intensity. |
| C.8.19.6.4 | |
The usage of this module is recommended to specify the sign of the slope of the VOI transformation to be applied during display of the multi-frame image.
Table FFF.2.3-3. Enhanced XA/XRF Image Module Recommendations
The usage of this module is recommended to specify some presentation settings:
The recommended filter percentage does not guarantee full consistency of the image presentation across applications; rather, it gives an indication of the user's sensitivity to such filtering so that it can be applied consistently. To optimize the consistency of the filtering perception, the applications sharing the same images should be customized to calibrate the highest filtering (i.e., 100%) to a similar perception by the users. Setting the application to the lowest filtering (i.e., 0%) means that no filter is applied at all.
The usage of this macro is recommended to specify the windowing to be applied to the pixel data in native mode, i.e., non-subtracted.
The usage of this macro is recommended to enable the applications to get the values of the stored pixel data back to a linear relationship with the X-Ray intensity.
When the value of Pixel Intensity Relationship (0028,1040) equals LOG, a LUT to get back to linear relationship (TO_LINEAR) is present to allow applications to handle linear pixel data.
Other LUTs can be added, for instance to transform to logarithmic relationship for subtraction (TO_LOG) in case the relationship of the stored pixel data is linear. Other LUTs with manufacturer-defined relationships are also allowed.
The LUTs of this macro are not used for the standard display pipeline.
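As an illustration, applying a TO_LINEAR LUT to the stored pixel data might look as follows; this sketch assumes the LUT Descriptor (0028,3002) first-mapped value and the LUT Data (0028,3006) entries have already been extracted from the sequence item.

    import numpy as np

    def apply_to_linear_lut(stored_pixels, first_value_mapped, lut_entries):
        # Stored values below/above the LUT range clamp to the first/last
        # entry, as for other DICOM LUTs
        lut = np.asarray(lut_entries)
        idx = np.clip(stored_pixels.astype(np.int64) - first_value_mapped,
                      0, len(lut) - 1)
        return lut[idx]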
In this example, two different systems perform an X-Ray Acquisition of the coronary arteries injected with radio-opaque contrast agent.
System A is equipped with a digital detector and stores the pixel data with the lower level corresponding to the lower X-Ray intensity. The user then creates two instances: one to display the injected vessels as black, and the other to display the injected vessels as white.
System B is equipped with an image intensifier configured to store the pixel data with the lower level corresponding to the higher X-Ray intensity. The user then creates two instances: one to display the injected vessels as black, and the other to display the injected vessels as white.
The figure below illustrates, for the two systems, the gray levels of the injected vessels on both the stored pixel data and the displayed pixels, which depend on the value of the attributes Pixel Intensity Relationship Sign (0028,1041), Photometric Interpretation (0028,0004), and Presentation LUT Shape (2050,0020).
This section provides information on the usage of attributes to encode an image acquisition in subtracted display mode.
A straightforward DSA acquisition is performed. The first few frames do not contain contrast; the remaining frames contain contrast. An "averaged mask" may be selected to average some of the first frames without contrast.
A peripheral stepping DSA acquisition is performed. The acquisition is running in N steps and is timed to perform a mask run (e.g., from feet to abdomen) and then perform contrast runs at the positions of each mask, as triggered by the user.
One or more ranges of contrast frames will be used for subtraction from the mask for loop display. During the display, some ranges are to be fully subtracted, some others may be partially subtracted, allowing a certain degree of visibility of the anatomical background present on the mask, and finally some ranges are to be displayed un-subtracted.
The Enhanced XA SOP Class allows the encoding of the mask attributes similar to what the XA SOP Class provides.
The Enhanced XA SOP Class allows specific display settings to be defined and applied to a subset of frames, for instance the recommended viewing mode and the degree of visibility of the mask.
This section provides detailed recommendations of the key attributes to address this particular scenario.
This module is used to specify the subtraction parameters. The number of Items depends on the number of Subtractions to be encoded. Typically, in case of AVG_SUB, the number of Items is at least the number of ranges of contrast frames to be subtracted from a different mask.
Table FFF.2.3-5. Mask Module Recommendations
The frame ranges of this module typically include all the masks and contrast frames defined in the Mask Module, and their presentation settings are consistent with the Mask Module definitions.
The mask frames are typically displayed non-subtracted, i.e., Recommended Viewing Mode (0028,1090) equals NAT.
If there is a frame range without mask association, the value "NAT" is used for Recommended Viewing Mode (0028,1090) in the item of the Frame Display Sequence (0008,9458) of that frame range.
In the case where Recommended Viewing Mode (0028,1090) equals "NAT", the display is expected to be un-subtracted even if the Recommended Viewing Mode (0028,1090) of the Mask Module equals "SUB".
The user performs an X-Ray acquisition in three steps:
First step of 5 frames for mask acquisition, without contrast agent injection;
Second step of 20 frames to assess the arterial phase, with contrast agent injection, to be displayed subtracted, using the average of the 5 mask frames acquired in the first step as mask;
Third step of 10 frames to assess the venous phase, without further contrast agent injection, to be displayed subtracted, using a new mask related to that phase and with 20% mask visibility.
In the three steps, the system automatically identifies the mask frame(s) to be associated with the contrast frames.
The encoded values of the key attributes of this example are shown in Figure FFF.2.3-2.
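As an illustration, a Mask Module for this example could be populated as in the following pydicom sketch. This is a sketch only: the frame numbering (mask frames 1-5, arterial frames 6-25, venous frames 26-35), the mask frame chosen for the venous phase, and the helper name are hypothetical, and the attribute keywords assume the standard pydicom data dictionary names for the tags cited in this section.

    from pydicom.dataset import Dataset
    from pydicom.sequence import Sequence

    def avg_sub_item(item_id, mask_frames, frame_range, visibility=None):
        # One item of the Mask Subtraction Sequence (0028,6100)
        item = Dataset()
        item.SubtractionItemID = item_id            # (0028,9416)
        item.MaskOperation = "AVG_SUB"              # (0028,6101)
        item.MaskFrameNumbers = mask_frames         # (0028,6110)
        item.ApplicableFrameRange = frame_range     # (0028,6102)
        if visibility is not None:
            # Degree of visibility of the mask, in percent (0028,9478)
            item.MaskVisibilityPercentage = visibility
        return item

    ds = Dataset()
    ds.RecommendedViewingMode = "SUB"               # (0028,1090)
    ds.MaskSubtractionSequence = Sequence([
        # Arterial phase: frames 6-25 subtracted against the average of masks 1-5
        avg_sub_item(1, [1, 2, 3, 4, 5], [6, 25]),
        # Venous phase: frames 26-35 against a new mask, with 20% mask visibility
        avg_sub_item(2, [25], [26, 35], visibility=20.0),
    ])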
This section provides information on the attribute encoding for use with image acquisitions that require subtracted display modes with multiple pixel shift ranges, e.g., multiple subtracted views of a DSA acquisition.
When performing DSA acquisitions, the acquisition system may choose a default subtraction pixel-shift to allow review of the whole multi-frame, as acquired.
With advanced post-processing functions, the user may add further subtraction pixel-shifts to bring out certain details or to improve contrast bolus visualization in a part of the anatomy that moved differently during the acquisition.
The Mask Module is used to encode the various subtractions applicable to a multi-frame image.
The Enhanced XA IOD allows creating groups of mask-contrast pairs in the Mask Module, each group identified by a unique Subtraction Item ID within the multi-frame image.
The Enhanced XA IOD, with per-frame macro encoding, supports multiple and different pixel-shift values per frame; each pixel-shift value is related to a given Subtraction Item ID.
It must be ensured that all the frames in the scope of a Subtraction Item ID have the pixel-shift values defined under that Subtraction Item ID.
If a frame does not belong to any Subtraction Item ID, that frame does not necessarily have a pixel shift value encoded.
This section provides detailed recommendations of the key attributes to address this particular scenario. The usage of the "Frame Pixel Shift" macro in a 'per frame' context is recommended. Only the usage of the Mask Module and the Frame Pixel Shift Macro is detailed further.
Table FFF.2.3-6. Enhanced X-Ray Angiographic Image IOD Modules
| Module | Reference | Usage |
| Mask | C.7.6.10 | Specifies the groups of mask-contrast pairs identified by a Subtraction Item ID. |
Table FFF.2.3-7. Enhanced XA Image Functional Group Macros
| Functional Group Macro | Reference | Usage |
| Frame Pixel Shift | C.7.6.16.2.14 | Specifies the pixel shift associated with the Subtraction Item IDs. |
This module is recommended to specify the subtraction parameters. The number of Items depends on the number of Subtractions to be applied (see Section FFF.2.3.2).
Table FFF.2.3-8. Mask Module Recommendations
The usage in this scenario is in a "per frame" context to allow individual pixel shift factors for each Subtraction Item ID.
The Subtraction Item ID specified in the Mask Subtraction Sequence (0028,6100) as well as in the Frame Pixel Shift Sequence (0028,9415) allows creating a relationship between the subtraction (mask and contrast frames) and a corresponding set of pixel shift values.
The Pixel Shift specified for a given frame in the Frame Pixel Shift Macro is the shift to be applied when the mask associated with the given Subtraction Item ID is subtracted from this frame.
Not all frames may have the same number of Items in the Frame Pixel Shift Macro, but all frames that are in the scope of a Subtraction Item ID and identified as "contrast" frames in the Mask module are recommended to have a Frame Pixel Shift Sequence item with the related Subtraction Item ID.
In this example, the pixel shift -0.3\2.0 is applied to frames 2 and 3 when mask frame 1, as defined in the Mask Subtraction Sequence, is subtracted from them.
The usage in a per-frame context is expected in a typical clinical scenario where the shift between the mask and the contrast frames is not constant across the frames of the multi-frame image to compensate for patient/organ movement.
The encoded values of the key attributes of this example are shown in Figure FFF.2.3-4.
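A per-frame encoding of this example could look as follows (a pydicom sketch; the keywords assume the standard data dictionary names for Frame Pixel Shift Sequence (0028,9415), Subtraction Item ID (0028,9416) and Mask Sub-pixel Shift (0028,6114), and the construction of the Per-Frame Functional Groups is abbreviated):

    from pydicom.dataset import Dataset
    from pydicom.sequence import Sequence

    def frame_pixel_shift(item_id, shift):
        # One item of the Frame Pixel Shift Sequence (0028,9415)
        item = Dataset()
        item.SubtractionItemID = item_id   # (0028,9416)
        item.MaskSubPixelShift = shift     # (0028,6114)
        return item

    # Illustrative Per-Frame Functional Groups items for frames 1 to 3
    per_frame_groups = [Dataset() for _ in range(3)]

    # Shift -0.3\2.0 applies to frames 2 and 3 when mask frame 1
    # (Subtraction Item ID 1) is subtracted from them
    for frame_item in per_frame_groups[1:3]:
        frame_item.FramePixelShiftSequence = Sequence(
            [frame_pixel_shift(1, [-0.3, 2.0])])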
The usage in a per-frame context is also appropriate to specify more than one set of shifts when more than one region of interest is affected independently by patient/organ movement, as in the case of the two legs imaged simultaneously.
In this example, two Subtraction Item IDs are defined in the Mask Subtraction Sequence.
The encoded values of the key attributes of this example are shown in Figure FFF.2.3-5.
This section provides information on the encoding of the projection pixel size calibration and the underlying geometry.
The user wants to measure the size of objects in the patient with a default system calibration based on the acquisition geometry and the default distance from the table to the object. To obtain more accurate measurements than this default calibration provides, the user may supply the distance from the table to the object to be measured.
The image is stored in an archive system and retrieved by a second user who wants to re-use the calibration and needs to know which object this calibration applies to.
This second user may need to re-calibrate based on another object at a different geometry.
In conic projection imaging, the pixel size in the patient is not constant. If a value of Pixel Spacing (0028,0030) is provided, it is most accurate at a given distance from the X-Ray source to the object of interest in the patient (patient plane), and less accurate for objects at other distances.
In addition, the distance from the X-Ray source to the object of interest may change per frame in case of gantry or table motion. In this case the Enhanced XA SOP Class allows the pixel size in the patient to be defined per-frame.
A macro provides a compound set of all relevant attributes.
The value "Table to Object Height" can be used for individual patient plane definition.
An automatic isocenter calibration method is supported.
Values of gantry and table positions are provided to complete all necessary attributes for a later re-calibration.
This section provides detailed recommendations of the key attributes to address this particular scenario. See Section C.8.19.6.9.1 in PS3.3 for detailed description of the attributes involved in the calculation of the calibration.
Table FFF.2.4-1. Enhanced X-Ray Angiographic Image IOD Modules
| Module | Reference | Usage |
| XA/XRF Acquisition | C.8.19.3 | Specifies system characteristics relevant for this scenario. |
In order to check if a calibration is appropriate, certain values have to be set in the XA/XRF Acquisition Module.
Table FFF.2.4-3. XA/XRF Acquisition Module Recommendations
This macro is recommended to provide the Pixel Spacing in the receptor plane. Typically the Imager Pixel Spacing is identical for all frames; future acquisition system techniques may result in individual per-frame values.
This macro contains the core inputs and results of calibration.
When there is no movement of the gantry and table, the macro is typically used in shared functional group context.
The attribute Beam Angle (0018,9449) is supplementary for the purpose of calibration; it is derived from the Primary and Secondary Positioner Angles but is not intended to replace them as they provide information for other purposes.
Table FFF.2.4-5. X-Ray Projection Pixel Calibration Macro Recommendations
The user performs an X-Ray acquisition with movement of the positioner during the acquisition. The patient is in Head First Supine position. During the review of the multi-frame image, a measurement of the object of interest in the frame "i" needs to be performed, which requires the calculation of the pixel spacing at the object location for that frame.
For the frame "i", the Positioner Primary Angle is -30.0 degrees, and the Positioner Secondary Angle is 20.0 degrees. According to the definition of the positioner angles and given the patient position, the Beam Angle is calculated as follows:
Beam Angle = arccos( |cos(-30.0)| * |cos(20.0)| ) = 35.53 degrees
The value of the other attributes defining the geometry of the acquisition for the frame "i" are the following:
ISO = 750 mm, SID = 983 mm, TH = 187 mm
ΔPx (Imager Pixel Spacing) = 0.2 mm/pix
The user provides, via the application interface, an estimated value of the distance from the object of interest to the tabletop: TO = 180 mm. This value can be encoded in the attribute Distance Object to Table Top (0018,9403) of the Projection Pixel Calibration Sequence (0018,9401) for further usage.
This results in an SOD of 741.4 mm (according to the equation SOD = 750 mm - [ (187 mm - 180 mm) / cos(35.53°) ]), and in a magnification ratio SID/SOD of 1.32587.
The resulting pixel spacing at the object location and related to the center of the X-Ray beam is calculated as ΔPx * SOD / SID = 0.150844 mm/pix. This value can be encoded in the attribute Object Pixel Spacing in Center of Beam (0018,9404) of the Projection Pixel Calibration Sequence (0018,9401) for further usage.
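The calculation above is easily reproduced in code. The following Python sketch (variable names are illustrative, not defined by the Standard) chains the Beam Angle, SOD, magnification ratio and object pixel spacing formulas with the values of this example:

    import math

    # Acquisition geometry for frame "i" (values from the example above)
    primary = -30.0    # Positioner Primary Angle, degrees
    secondary = 20.0   # Positioner Secondary Angle, degrees
    ISO = 750.0        # Distance Source to Isocenter, mm
    SID = 983.0        # Distance Source to Detector, mm
    TH = 187.0         # Table height, mm
    TO = 180.0         # Distance Object to Table Top (user input), mm
    delta_px = 0.2     # Imager Pixel Spacing, mm/pix

    # Beam Angle = arccos(|cos(primary)| * |cos(secondary)|) = 35.53 degrees
    beam_angle = math.degrees(math.acos(
        abs(math.cos(math.radians(primary))) *
        abs(math.cos(math.radians(secondary)))))

    # Source-to-object distance and magnification ratio
    SOD = ISO - (TH - TO) / math.cos(math.radians(beam_angle))  # 741.4 mm
    magnification = SID / SOD                                   # 1.32587

    # Object Pixel Spacing in Center of Beam (0018,9404)
    object_pixel_spacing = delta_px / magnification             # 0.150844 mm/pix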
The encoded values of the key attributes of this example are shown in Figure FFF.2.4-1.
This section provides information on the encoding of the derivation process and the characteristics of the stored pixel data.
An acquisition system performs several processing steps on an original image, and then it creates a derived image with the processed pixel data.
A viewing application applies post-processing algorithms to that derived image, e.g., measurements, segmentation etc. This application needs to know what kind of post-processing can or cannot be applied depending on the characteristics of the derived image.
The XA SOP Class does not encode any specific attribute values to characterize the type of derivation.
The Enhanced XA SOP Class encodes defined terms for processing applied to the Pixel Data, and allows restoring a linear relationship between pixel values and X-Ray intensity. Viewing applications can consistently interpret the stored pixel data and enable/disable applications like edge detection algorithms, subtraction, filtering, etc.
This section provides detailed recommendations of the key attributes to address this particular scenario.
Table FFF.2.4-7. Enhanced XA Image Functional Group Macros
| Functional Group Macro | Reference | Usage |
| Derivation Image | C.7.6.16.2.6 | Specifies the different derivation steps (including the latest step) that led to this instance. |
| Pixel Intensity Relationship LUT | C.7.6.16.2.13 | Specifies the relationship between the stored pixel data values and the X-Ray intensity of the resulting derived instance. |
| XA/XRF Frame Characteristics | C.8.19.6.1 | Specifies the latest derivation step that led to this instance. |
| XA/XRF Frame Pixel Data Properties | C.8.19.6.4 | Specifies the characteristics of the derived pixel data, both geometric and densitometric. |
The usage of this module is recommended to specify the image type.
The usage of this macro is recommended to encode the information of the different derivation processes and steps, as well as the source SOP Instance(s), when the image or frame is derived from other SOP Instance(s).
Table FFF.2.4-9. Derivation Image Macro Recommendations
If this image is not derived from source SOP Instances, the Derivation Image macro is not present, and the XA/XRF Frame Characteristics macro is used instead.
The usage of this macro is recommended to enable the applications to get the pixel values back to a linear relationship with the X-Ray intensity.
If readers of the image do not recognize the Pixel Intensity Relationship value, they can treat it as the value "OTHER" by default.
The number of bits in the LUT Data attribute (0028,3006) may be different from the value of Bits Stored attribute (0028,0101).
The usage of this macro is recommended to specify the derivation characteristics.
Table FFF.2.4-10. XA/XRF Frame Characteristics Macro Recommendations
If the image is derived from one or more SOP Instances, the XA/XRF Frame Characteristics Sequence always contains the same values as the last item of the Derivation Image Sequence.
If the image is derived but not from other SOP Instances, the derivation was performed on the acquisition system, and the Acquisition Device Processing Description (0018,1400) and the Acquisition Device Processing Code (0018,1401) contain the information about that derivation.
An image derived from a derived image will change the Derivation Description but not the Acquisition Device Processing Description.
The usage of this macro is recommended to specify the type of processing applied to the stored pixel data of the derived frames.
Table FFF.2.4-11. XA/XRF Frame Pixel Data Properties Macro Recommendations
In this example, the acquisition modality creates two instances of the Enhanced XA object: the instance "A" with mask frames and the instance "B" with contrast frames. A temporal filtering has been applied by the modality before the creation of the instances.
The workstation 1 performs a digital subtraction of the frames of the instance "B" by using the frames of the instance "A" as mask, then the resulting subtracted frames are stored in a new instance "C".
Finally the workstation 2 processes the instance "C" by applying a zoom and edge enhancement, and the resulting processed frames are stored in a new instance "D".
Figure FFF.2.4-3 shows the values of the attributes of the instance "D" in the corresponding modules and macros related to derivation information. The Source Image Sequence (0008,2112) of the Derivation Image Sequence (0008,9124) does not contain the attribute Referenced Frame Number (0008,1160) because all the frames of the source images are used to generate the derived images.
In this example, the acquisition modality creates the instance "A" of the Enhanced XA object with 14 bits stored where the relationship between the pixel intensity and the X-Ray intensity is linear.
A workstation reads the instance "A", transforms the pixel values of the stored pixel data by applying a square root function and stores the resulting frames on the instance "B" with 8 bits stored.
The following figure shows the values of the attributes of the instance "B" in the corresponding modules and macros related to derivation information.
Note that the Derivation Code Sequence (0008,9215) is present when the Derivation Image Sequence (0008,9124) includes one or more items, even if the derivation code is not defined in the CID 7203 “Image Derivation”.
The Pixel Intensity Relationship LUT Sequence (0028,9422) contains a LUT with the function "TO_LINEAR" to allow the calculation of the gray level intensity to be linear to the X-Ray intensity. Since the instance "B" has 8 bits stored, this LUT contains 256 entries (starting the mapping at pixel value 0) and is encoded in 16 bits.
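The content of such a LUT follows directly from the inverse of the derivation function. The following Python sketch is an illustration only, assuming the workstation scaled the square-root output to the full 8-bit range and that the original linear data used 14 bits:

    # LUT Descriptor (0028,3002): 256 entries, first input value mapped is 0,
    # 16 bits per LUT entry.
    lut_descriptor = [256, 0, 16]

    # Inverse of the square-root derivation: map each 8-bit stored value back
    # to a value linear to the X-Ray intensity, on a 14-bit output scale.
    max_in, max_out = 255.0, (1 << 14) - 1
    lut_data = [round((v / max_in) ** 2 * max_out) for v in range(256)]

    # The LUT Function (0028,9474) of this item would be set to "TO_LINEAR".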
The value of the Pixel Intensity Relationship (0028,1040) in the Frame Pixel Data Properties Sequence (0028,9443) could be "OTHER" as it is described in the defined terms. However, a more explicit term like "SQRT" is also allowed and will have the same effect in the reading application.
In the case of a modification of the pixel intensity relationship of an image, the value of the attribute Image Processing Applied (0028,9446) in the Frame Pixel Data Properties Sequence (0028,9443) can be "NONE" in order to indicate to the reading applications that there was no image processing applied to the original image that could modify the spatial or temporal characteristics of the pixels.
This section provides information on the encoding of the acquisition geometry in a fixed reference system.
The operator identifies the position of an object of interest projected on the stored pixel data of an image A, and estimates the magnification of the conic projection by a calibration process.
The operator wants to know the position of the projection of such object of interest on a second image B acquired under different geometry, assuming that the patient does not move between image A and image B (i.e., the images share the same frame of reference).
The XA SOP Class encodes the information in a patient-related coordinate system.
The Enhanced XA SOP Class additionally encodes the geometry of the acquisition system with respect to a fixed reference system defined by the manufacturer, so-called Isocenter reference system. Therefore, it allows encoding the absolute position of an object of interest and to track the projection of such object across the different images acquired under different geometry.
This section provides detailed recommendations of the key attributes to address this particular scenario.
Table FFF.2.5-1. Enhanced X-Ray Angiographic Image IOD Modules
Table FFF.2.5-2. Enhanced XA Image Functional Group Macros
| Functional Group Macro | Reference | Usage |
| X-Ray Field of View | C.8.19.6.2 | Specifies the dimension of the Field of View as well as the flip and rotation transformations. |
| X-Ray Isocenter Reference System | C.8.19.6.13 | Specifies the acquisition geometry in a fixed reference system. |
| X-Ray Geometry | C.8.19.6.14 | |
| XA/XRF Frame Pixel Data Properties | C.8.19.6.4 | Specifies the dimensions of the pixels at the image reception plane. |
The usage of this module is recommended to specify the number of rows and columns of the Pixel Data, as well as the aspect ratio.
The usage of this module is recommended to give the necessary conditions to enable the calculations of this scenario.
In the case where X-Ray Receptor Type (0018,9420) equals "IMG_INTENSIFIER", there are limitations that prevent the calculations described in this scenario.
As a consequence, in case of image intensifier it is impossible to relate the position of the pixels of the stored area to the isocenter reference system.
In case of X-Ray Receptor Type (0018,9420) equals "DIGITAL_DETECTOR" the usage of this module is recommended to specify the type and characteristics of the image detector.
The usage of this macro is recommended to specify the characteristics of the field of view.
The field of view characteristics may change per-frame across the multi-frame image.
The usage of this macro is recommended to specify the fixed reference system of the acquisition geometry.
In this example, the operator identifies the position (i, j) of an object of interest projected on the stored pixel data of an image A, and estimates the magnification of the conic projection by a calibration process.
The operator wants to know the position of the projection of such object of interest on a second image B acquired under different geometry.
The attributes that define the geometry of both images A and B are described in the following figure:
The following steps describe the process to calculate the position (i, j)B of the projection of the object of interest in the Pixel Data of the image B, assuming that (i, j)A is known and is the offset of the projection of the object of interest from the TLHC of the Pixel Data of the image A, measured in pixels of the Pixel Data matrix as a column offset "i" followed by a row offset "j". TLHC is defined as (0,0).
Step 1: Calculate the point (i, j)A in FOV coordinates of the image A.
Step 2: Calculate the point (i, j)A in physical detector coordinates of the image A.
Step 3: Calculate the point (Pu, Pv)A in positioner coordinates of the image A.
Step 4: Calculate the point (PXp, PYp, PZp)A in positioner coordinates of the image A.
Step 5: Calculate the point (PX, PY, PZ)A in Isocenter coordinates of the image A.
Step 6: Calculate the point (PXt, PYt, PZt)A in Table coordinates of the image A.
Step 7: Calculate the point (PXt, PYt, PZt)B in Table coordinates in mm of the image B.
Step 8: Calculate the point (PX, PY, PZ)B in Isocenter coordinates in mm of the image B.
Step 9: Calculate the point (PXp, PYp, PZp)B in positioner coordinates of the image B.
Step 10: Calculate the point (Pu, Pv)B in positioner coordinates of the image B.
Step 11: Calculate the point (i, j)B in physical detector coordinates of the image B.
Step 12: Calculate the point (i, j)B in FOV coordinates of the image B.
Step 13: Calculate the point (i, j)B in Pixel Data of the image B.
Step 1: Image A: Point (i, j)A in FOV coordinates
In this step, the FOV coordinates are calculated by taking into account the FOV rotation and Horizontal Flip applied to the FOV matrix when the Pixel Data were created:
1.1: Horizontal Flip: new i = (columns - 1) - i = 850 - 1 - 310 = 539
1.2: Image Rotation: 90 (clockwise): new i = j = 122; new j = (columns - 1) - i = 850 - 1 - 539 = 310
(i, j)A = (122, 310) in FOV coordinates.
Step 2: Image A: Point (i, j)A in physical detector coordinates
In this step, the physical detector coordinates are calculated by taking into account the FOV origin and the ratio between Imager Pixel Spacing and Detector Element Spacing:
Di = Imager Pixel Spacing (column) = 0.2 mm
Dj = Imager Pixel Spacing (row) = 0.2 mm
Didet = Detector Element Spacing between two adjacent columns = 0.2 mm
Djdet = Detector Element Spacing between two adjacent rows = 0.2 mm
Zoom Factor (column) = Di / Didet = 1.0
Zoom Factor (row) = Dj / Djdet = 1.0
FOV Origin (column) = FOVidet = 600.0
FOV Origin (row) = FOVjdet = 600.0
new i = FOVidet + (i + (1 - Didet / Di) / 2) * Di / Didet = 600 + 122 * 1.0 = 722
new j = FOVjdet + (j + (1 - Djdet / Dj) / 2) * Dj / Djdet = 600 + 310 * 1.0 = 910
(i, j)A = (722, 910) in detector elements.
Step 3: Image A: Point (Pu, Pv)A in positioner coordinates
In this step, the (Pu, Pv)A coordinates in mm are calculated from (i, j)A by taking into account the projection of the Isocenter in physical detector coordinates, and the Detector Element Spacing:
ISO_Pidet = Position of Isocenter Projection (column) = 1024.5
ISO_Pjdet = Position of Isocenter Projection (row) = 1024.5
Didet = Detector Element Spacing between two adjacent columns = 0.2 mm
Djdet = Detector Element Spacing between two adjacent rows = 0.2 mm
Pu = (i - ISO_Pidet) * Didet = (722 - 1024.5) * 0.2 = -60.5 mm
Pv = (ISO_Pjdet - j) * Djdet = (1024.5 - 910) * 0.2 = 22.9 mm
(Pu, Pv)A = (-60.5, 22.9) in mm.
Step 4: Image A: Point (PXp, PYp, PZp)A in positioner coordinates
In this step, the positioner coordinates (PXp, PYp, PZp)A are calculated from (Pu, Pv)A by taking into account the magnification ratio, the Distance Source to Detector and the Distance Source to Isocenter:
SID = Distance Source to Detector = 1300 mm
ISO = Distance Source to Isocenter = 780 mm
Magnification ratio = SID / (ISO - PYp) = 1.3
PYp = ISO - SID / 1.3 = 780 - 1300 / 1.3 = -220 mm
PXp = Pu / Magnification ratio = -60.5 / 1.3 = -46.54 mm
PZp = Pv / Magnification ratio = 22.9 / 1.3 = 17.62 mm
(PXp, PYp, PZp)A = (-46.54, -220, 17.62) in mm.
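The arithmetic of Steps 1 to 4 can be expressed compactly in code. The following Python sketch (illustrative variable names, values of image A above) goes from the stored pixel position to positioner coordinates:

    # Step 1: undo Horizontal Flip, then the 90 deg clockwise rotation (image A)
    columns = 850
    i, j = 310, 122                      # point in the stored Pixel Data
    i = (columns - 1) - i                # horizontal flip -> 539
    i, j = j, (columns - 1) - i          # 90 deg rotation -> (122, 310)

    # Step 2: FOV coordinates -> physical detector elements
    Di = Dj = 0.2                        # Imager Pixel Spacing, mm
    Didet = Djdet = 0.2                  # Detector Element Spacing, mm
    FOVidet = FOVjdet = 600.0            # FOV Origin
    i = FOVidet + (i + (1 - Didet / Di) / 2) * Di / Didet   # 722
    j = FOVjdet + (j + (1 - Djdet / Dj) / 2) * Dj / Djdet   # 910

    # Step 3: detector elements -> (Pu, Pv) in mm, relative to the
    # projection of the isocenter
    ISO_Pidet = ISO_Pjdet = 1024.5       # Position of Isocenter Projection
    Pu = (i - ISO_Pidet) * Didet         # -60.5 mm
    Pv = (ISO_Pjdet - j) * Djdet         #  22.9 mm

    # Step 4: (Pu, Pv) -> positioner coordinates via the magnification ratio
    SID, ISO, mag = 1300.0, 780.0, 1.3
    PYp = ISO - SID / mag                # -220 mm
    PXp = Pu / mag                       # -46.54 mm
    PZp = Pv / mag                       #  17.62 mm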
Step 5: Image A: Point (PX, PY, PZ)A in Isocenter coordinates
In this step, the isocenter coordinates (PX, PY, PZ)A are calculated from the positioner coordinates (PXp, PYp, PZp)A by taking into account the positioner angles of the image A in the Isocenter coordinate system:
Ap1 = Positioner Isocenter Primary Angle = 60.0 deg
Ap2 = Positioner Isocenter Secondary Angle = 20.0 deg
Ap3 = Positioner Isocenter Detector Rotation Angle = 0.0 deg
(PX, PY, PZ)^T = (R2 · R1)^T · (R3^T · (PXp, PYp, PZp)^T)
(PX, PY, PZ)A = (150.55, -65.41, 91.80) in mm.
Step 6: Image A: Point (PXt, PYt, PZt)A in Table coordinates
In this step, the table coordinates (PXt, PYt, PZt)A are calculated from the isocenter coordinates (PX, PY, PZ)A by taking into account the table position and angles of the image A in the Isocenter coordinate system:
Tx = Table X Position to Isocenter = 10.0 mm
Ty = Table Y Position to Isocenter = 30.0 mm
Tz = Table Z Position to Isocenter = 100.0 mm
At1 = Table Horizontal Rotation Angle = -10.0 deg
At2 = Table Head Tilt Angle = 0.0 deg
At3 = Table Cradle Tilt Angle = 0.0 deg
(PXt, PYt, PZt)^T = (R3 · R2 · R1) · ((PX, PY, PZ)^T - (TX, TY, TZ)^T)
(PXt, PYt, PZt)A = (136.99, -95.41, -32.48) in mm.
Step 7: Image B: Point (PXt, PYt, PZt)B in Table coordinates
In this step, the table has moved from image A to image B. The table coordinates of the object of interest are the same on image A and image B because it is assumed that the patient is fixed on the table.
(PXt, PYt, PZt)B = (136.99, -95.41, -32.48) in mm.
Step 8: Image B: Point (PX, PY, PZ)B in Isocenter coordinates
In this step, the isocenter coordinates (PX, PY, PZ)B are calculated from the table coordinates (PXt, PYt, PZt)B by taking into account the table position and angles of the image B in the Isocenter coordinate system:
Tx = Table X Position to Isocenter = 20.0 mm
Ty = Table Y Position to Isocenter = 100.0 mm
Tz = Table Z Position to Isocenter = 0.0 mm
At1 = Table Horizontal Rotation Angle = 0.0 deg
At2 = Table Head Tilt Angle = 10.0 deg
At3 = Table Cradle Tilt Angle = 0.0 deg
(PX, PY, PZ)^T = (R3 · R2 · R1)^T · (PXt, PYt, PZt)^T + (TX, TY, TZ)^T
(PX, PY, PZ)B = (156.99, -12.11, -48.55) in mm.
Step 9: Image B: Point (PXp, PYp, PZp)B in positioner coordinates
In this step, the positioner coordinates (PXp, PYp, PZp)B are calculated from the isocenter coordinates (PX, PY, PZ)B by taking into account the positioner angles of the image B in the Isocenter coordinate system:
Ap1 = Positioner Isocenter Primary Angle = -30.0 deg
Ap2 = Positioner Isocenter Secondary Angle = 0.0 deg
Ap3 = Positioner Isocenter Detector Rotation Angle = 0.0 deg
(PXp, PYp, PZp)^T = R3 · ((R2 · R1) · (PX, PY, PZ)^T)
(PXp, PYp, PZp)B = (142.01, 68.00, -48.55) in mm.
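Steps 5, 6, 8 and 9 are compositions of the rotation matrices R1, R2 and R3, built from the positioner angles (Steps 5 and 9) or from the table angles (Steps 6 and 8) as defined in PS3.3 Section C.8.19.6.13.1. Taking those matrices as given, the chain can be sketched with numpy as follows (function names are illustrative):

    import numpy as np

    def positioner_to_isocenter(p, R1, R2, R3):
        # Step 5: (PX, PY, PZ)^T = (R2 . R1)^T . (R3^T . (PXp, PYp, PZp)^T)
        return (R2 @ R1).T @ (R3.T @ p)

    def isocenter_to_table(p, R1, R2, R3, T):
        # Step 6: (PXt, PYt, PZt)^T = (R3 . R2 . R1) . ((PX, PY, PZ)^T - T)
        return (R3 @ R2 @ R1) @ (p - T)

    def table_to_isocenter(pt, R1, R2, R3, T):
        # Step 8: (PX, PY, PZ)^T = (R3 . R2 . R1)^T . (PXt, PYt, PZt)^T + T
        return (R3 @ R2 @ R1).T @ pt + T

    def isocenter_to_positioner(p, R1, R2, R3):
        # Step 9: (PXp, PYp, PZp)^T = R3 . ((R2 . R1) . (PX, PY, PZ)^T)
        return R3 @ ((R2 @ R1) @ p)

Here p, pt and T are numpy vectors of length 3; T is the table position (Tx, Ty, Tz) relative to the isocenter.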
Step 10: Image B: Point (Pu, Pv)B in positioner coordinates
In this step, the (Pu, Pv)B coordinates in mm are calculated from the positioner coordinates (PXp, PYp, PZp)B by taking into account the Distance Source to Detector and the Distance Source to Isocenter of the image B:
SID = Distance Source to Detector = 1000 mm
ISO = Distance Source to Isocenter = 800 mm
Magnification ratio = SID / (ISO - PYp) = 1000 / (800 - 68) = 1.366
Pu = PXp * Magnification ratio = 142.01 * 1.366 = 194.00 mm
Pv = PZp * Magnification ratio = -48.55 * 1.366 = -66.33 mm
(Pu, Pv)B = (194.00, -66.33) in mm.
Step 11: Image B: Point (i, j)B in physical detector coordinates
In this step, the physical detector coordinates (i, j)B are calculated from the positioner coordinates (Pu, Pv)B by taking into account the projection of the Isocenter in physical detector coordinates, and the Detector Element Spacing of the image B:
ISO_Pidet = Position of Isocenter Projection (column) = 1024.5
ISO_Pjdet = Position of Isocenter Projection (row) = 1024.5
Didet = Detector Element Spacing between two adjacent columns = 0.2 mm
Djdet = Detector Element Spacing between two adjacent rows = 0.2 mm
i = ISO_Pidet + Pu / Didet = 1024.5 + 194.00 / 0.2 = 1994.5
j = ISO_Pjdet - Pv / Djdet = 1024.5 - (-66.33) / 0.2 = 1356.2
(i, j)B = (1994.5, 1356.2) in detector elements.
Step 12: Image B: Point (i, j)B in FOV coordinates
In this step, the FOV coordinates are calculated from the physical detector coordinates by taking into account the FOV origin and the ratio between Imager Pixel Spacing and Detector Element Spacing of the image B:
Di = Imager Pixel Spacing (column) = 0.4 mm
Dj = Imager Pixel Spacing (row) = 0.4 mm
Didet = Detector Element Spacing between two adjacent columns = 0.2 mm
Djdet = Detector Element Spacing between two adjacent rows = 0.2 mm
Zoom Factor (column) = Di / Didet = 2.0
Zoom Factor (row) = Dj / Djdet = 2.0
FOV Origin (column) = FOVidet = 25.0
FOV Origin (row) = FOVjdet = 25.0
new i = (i - FOVidet) · Didet / Di - (1 - Didet / Di) / 2 = (1994.5 - 25.0) / 2.0 - 0.25 = 984.5
new j = (j - FOVjdet) · Djdet / Dj - (1 - Djdet / Dj) / 2 = (1356.2 - 25.0) / 2.0 - 0.25 = 665.35
(i, j)B = (984.50, 665.35) in FOV coordinates.
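Steps 10 to 12 mirror the arithmetic of Steps 2 to 4 in the reverse direction. The following Python sketch reproduces the values of image B above (illustrative names; results are rounded as in the text):

    # Step 10: positioner coordinates -> (Pu, Pv) in mm (image B)
    PXp, PYp, PZp = 142.01, 68.00, -48.55
    SID, ISO = 1000.0, 800.0
    mag = SID / (ISO - PYp)              # 1.366
    Pu = PXp * mag                       # ~194.00 mm
    Pv = PZp * mag                       # ~-66.33 mm

    # Step 11: (Pu, Pv) -> physical detector elements
    ISO_Pidet = ISO_Pjdet = 1024.5
    Didet = Djdet = 0.2
    i = ISO_Pidet + Pu / Didet           # ~1994.5
    j = ISO_Pjdet - Pv / Djdet           # ~1356.2

    # Step 12: detector elements -> FOV coordinates (zoom factor 2.0)
    Di = Dj = 0.4
    FOVidet = FOVjdet = 25.0
    i = (i - FOVidet) * Didet / Di - (1 - Didet / Di) / 2   # ~984.5
    j = (j - FOVjdet) * Djdet / Dj - (1 - Djdet / Dj) / 2   # ~665.35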
Step 13: Image B: Point (i, j)B in Pixel Data
In this step, the position (i, j)B of the projection of the object of interest in the Pixel Data of the image B is calculated from the FOV coordinates by taking into account the FOV rotation and Horizontal Flip applied to the FOV matrix when the Pixel Data were created:
Image Rotation: 180 (clockwise): new i = (columns - 1) - i = 1000 - 1 - 984.50 = 14.50
This section provides examples of different implementations and message sequencing when using the Unified Worklist and Procedure Step SOP Classes (UPS Push, UPS Pull, UPS Watch and UPS Event).
The examples are intended to provide a sense of how the UPS SOP Classes can be used to support a variety of workflow use cases. For the detailed specification of how the underlying DIMSE Services function, please refer to Annex CC “Unified Procedure Step Service and SOP Classes (Normative)” in PS3.4. For the detailed specification of how the RESTful services function, please refer to Section 6.9 “UPS-RS Worklist Service” in PS3.18 .
The Unified Worklist and Procedure Step Service Class combines the information that is conveyed separately by the Modality Worklist and Modality Performed Procedure Step into a single normalized object. This object is created to represent the planned step and then updated to reflect its progress from scheduled to complete and record details of the procedure performed and the results created. Additionally, the Unified Worklist supports subscription-based notifications of progress and completion.
The Unified Worklist and Procedure Step Service Class does not include support for complex internal task structures. It describes a single task to be performed in terms of the task request and the task results. Additional complexity is managed by the business logic.
The UPS SOP Classes define services so UPSs can be created, their status managed, notifications sent and their attributes set, queried, and retrieved. DICOM intentionally leaves open the many combinations in which these services can be implemented and applied to enact a variety of approaches to workflow.
Pull Workflow and Push Workflow
Similar to previous SOP Classes like Modality Worklist, UPS allows a performing system (using the UPS Pull SOP Class as a C-FIND SCU) to query a worklist manager (the SCP) for relevant tasks and choose which one to start working on. This is sometimes called "Pull Workflow" since the performer pulls down the list and selects an item.
UPS adds the ability for a scheduling system (using the UPS Push SOP Class as an N-CREATE SCU) to "push" a workitem onto the performing system (here an SCP). In "Push Workflow" the scheduler makes the choice of which system becomes responsible for the workitem.
Performing systems (again as an SCP) could also schedule/create their own workitems, while allowing other systems (using the UPS Watch and UPS Event SOP Classes as N-EVENT-REPORT SCUs and N-GET SCUs) to receive notifications of the activities of the performer and examine the results.
Push and Pull can also be combined in various ways. A high level departmental scheduler could break down orders and push tasks onto the acquisition worklist manager and reporting worklist manager from which modalities and reporting workstations could pull their tasks. In another scenario, a modality that has pulled an acquisition workitem off a worklist, could push a follow-up task onto a workstation to perform 3D processing or CAD on the results.
Reliable Watchers and Deletion Locks
Some UPS features (specifically the Deletion Lock - See Section CC.2.3.2, “Service Class User Behavior”) were introduced to support Reliable Watchers. By subscribing with a Deletion Lock, an SCU wishing to be a reliable watcher can signal the SCP to persist instances until the watcher has been able to retrieve final state information and remove the lock.
This means that network latency, slight delays in processing threads, or even the watcher being offline for a short time, will not prevent the watcher from reliably collecting the final state details from UPS instances it is interested in. This can be very important since the watcher may be responsible for monitoring completion of those instances, extracting details from them, and based on that and other internal logic, creating subsequent UPS Instances and populating the input data fields with information from the completed UPS. Without some form of persistence guarantee, UPS instances could disappear immediately upon entering a completed state.
Having established the Deletion Lock mechanism, it is possible that, due to equipment or processing errors, there could be cases where locks are not properly removed and some UPS instances might remain when they are no longer needed. Most SCP implementations will likely provide a way for such orphaned UPS instances to be removed under administrator control.
The following sections describe ways UPS workflows could be used to address some typical scenarios.
The decision of which SOP Classes to implement in which systems will revolve partly around where it makes the most sense for the business logic to reside, what information each system would have access to, and what kind of workflow is most effective for the users.
Table GGG.1-1 shows a number of hypothetical systems and the combination of SOP Classes they might implement. For example, a typical worklist manager would support all four SOP Classes as an SCP. A typical scheduling system might want to be a UPS Push SCU to submit work items to the worklist manager, a UPS Watch SCU to subscribe for notifications and get details of the results, and a UPS Event SCU to receive the progress notifications. A simple "pull performer" might only be a UPS Pull SCU, similar to modalities today.
Other examples are listed for:
"Minimal Scheduler", a requesting system that is not interested in monitoring progress or results.
"Watcher", a system interested in tracking the progress and/or results of Unified Procedure Steps.
"General Contractor", a system that accepts work items pushed to it, then uses internal business logic to subdivide/create work items that it pushes or makes available to systems that will actually perform the work.
"Push Performer", a system, for example a CAD system, that has work pushed to it, and provides status and results information to interested observers.
"Self-Scheduled Performer", which internally schedules it's own work, but supports notifications and N-GET so the details of the work can be made available to other departmental systems.
"Self-Scheduled Pull Performer", which pushes a workitem onto a worklist manager and then pulls it off to perform it. This allows it to work on "unscheduled" procedures without taking on the responsibility of being an SCP for notifications and events.
A system that implements UPS Watch as an SCP will also need to implement UPS Event as an SCP to be able to send Event Reports to the systems from whom it accepts subscriptions.
This example shows how a typical pull workflow could be used to manage the work of a 3D Lab. A group of 3D Workstations query a 3D Worklist Manager for work items that they perform and report their progress. In this example, the RIS would be a "Typical Scheduler", the 3D Workstation is a "Pull Performer" as seen in Table GGG.1-1 and the PACS and Modality do not implement any UPS SOP Classes.
We will assume the RIS decides which studies require 3D views and puts them on the worklist once the acquiring modality has reported its MPPS complete. The RIS identifies the required 3D views and lists the necessary input objects in the UPS based on the image references recorded in the MPPS.
Assume the RIS has subscribed globally for all UPS instances managed by the 3D Worklist Manager.
This example shows a reporting workflow with a "hand-off". Reporting Workstations query a RIS for work items to interpret/report. In this example, the RIS is a "Worklist Manager", the Reporting Workstation is both a "Pull Performer" and a "Minimal Scheduler" as shown in Table GGG.1-1 and the PACS and Modality do not implement any UPS SOP Classes. A reporting workstation claims Task X but can't complete it and "puts it back on the worklist" by canceling Task X and creating Task Y as a replacement, recording Task X as the Replaced Procedure Step.
Assume the RIS is picking up where example GGG.2.2 left off and was waiting for the 3D view generation task to be complete before putting the study on the reading worklist. The RIS identifies the necessary input objects in the UPS based on the image references recorded in the acquisition MPPS and the 3D UPS.
You could also imagine the 3D workstation is a Mammo CAD workstation. If the first radiologist completed the report, the RIS could easily schedule Task Y as the over-read by another radiologist.
For further discussion, refer to the Section GGG.2.7 material on Hand-offs, Fail-overs and Putting Tasks Back on the Worklist.
Cancel requests are always directed to the system managing the UPS instance since it is the SCP. When the UPS is being managed by one system (for example, a Treatment Management System, TMS) and performed by a second system (for example, a Treatment Delivery System, TDS), a third party would send the cancel request to the TMS and cancellation would take place as shown below.
Performing SCUs are not required to react to cancel requests, or even to listen for them, and in some situations would be unable to abort the task represented by the UPS even if they were listening. In the diagram below we assume the performing SCU is listening, willing, and able to cancel the task.
If the User had sent the cancel request while the UPS was still in the SCHEDULED state, the SCP (i.e., the TMS) could simply have canceled the UPS internally. Since the UPS state was IN PROGRESS, it was necessary to send the messages as shown. Note that since the TDS has no need for the UPS instance to persist, it subscribed without setting a Deletion Lock, and so it didn't need to bother unsubscribing later.
In this example, users schedule tasks to a shared dose calculation system and need to track progress. This example is intended as a demonstration of UPS and should not be taken as prescriptive of RT Therapy procedures.
Pushing the tasks avoids problems with a pull workflow such as the server having to continually poll worklists on (a large number of) possible clients; needing to configure the server to know about all the clients; reporting results to a user who might be at several locations; and associating the results with clients automatically. Also, when performing machines each have unique capabilities, the scheduling must target individual machines, and there can be advantages for integrating the scheduling and performing activities like this.
Although not shown in the diagram, the User could have gone to a User Terminal ("Watcher") and monitored the progress from there by doing a C-FIND and selecting/subscribing to Task X.
In a second example, the User monitors progress from another User Terminal ("Watcher") and decides to request cancellation after 3 beams.
In this example, arriving patients are admitted at the RIS and sent to a specific X-Ray room for their exam.
The RIS is shown here subscribing globally for events from each Room. Alternatively the RIS could subscribe individually to each Task right after the N-CREATE is requested.
It is left open whether the patient demographics have been previously registered and the patients scheduled on the RIS or whether they are registered on the RIS when they arrive.
A wide variety of workflow methods are possible using the UPS SOP Classes. In addition to those diagrammed in the previous sections, a few more are briefly described here. These include examples of ways to handle unscheduled tasks, grouped tasks, append cases, "event forwarding", etc.
Self-Scheduling Push & Pull: Unscheduled and Append Cases
In radiation therapy a previously unscheduled ("emergency") procedure may be performed on a Treatment Delivery System. Normally a TDS performs scheduled procedures as a Performing SCU in a Typical Pull Workflow like that shown in GGG.2.2. A TDS that might need to perform unscheduled procedures could additionally implement UPS Push (as an SCU) and push the "unscheduled" procedure to the departmental worklist server then immediately set it IN PROGRESS as a UPS Pull SCU. The initial Push to the departmental server allows the rest of the departmental workflow to "sync up" normally to the new task on the schedule.
A modality choosing to append some additional images after the original UPS was completed could use a similar method. Since the original UPS can no longer be modified, the modality could push a new UPS instance to the Worklist Manager and then immediately set it IN PROGRESS. Many of the attribute values in the new UPS would be the same as the original UPS.
Note that for a Pull Performer that wants to handle unscheduled cases, this Push & Pull approach is pretty simple to implement. Becoming a UPS Push SCU just requires N-CREATE and N-ACTION (Request Cancel) that are quite similar to the N-SET and N-ACTION it already supports as a UPS Pull SCU.
The alternative would be implementing both UPS Watch and UPS Event as an SCP, which would be more work. Further, potential listeners would have to be aware of and monitor the performing system to track the unscheduled steps, instead of just monitoring the departmental Pull SCP.
An example of an alternative method for handling unscheduled procedures is a CAD workstation that decides for itself to perform processing on a study. By implementing UPS Watch as an SCP and UPS Event as an SCP, the workstation can create UPS instances internally and departmental systems such as the RIS can subscribe globally to the workstation to monitor its activities.
The workstation might create the UPS tasks in response to having data pushed to it, or potentially the workstation could itself also be a Watch and Event SCU and subscribe globally to relevant modality or PACS systems and watch for appropriate studies.
Sometimes the performer of the current task is in the best position to decide what the next task should be.
An alternative to centralized task management is daisy-chaining where each system pushes the next task to the next performer upon completion of the current task. Using a workflow similar to the X-Ray Clinic example in GGG.6, a modality could push a task to a CAD workstation to process the images created by the modality. The task would specify the necessary images and perhaps parameters relevant to the acquisition technique. The RIS could subscribe globally with the CAD workstation to track events. Another example of push daisy chain would be for the task completed at each step in a reporting process to be followed by scheduling the next logical task.
Hand-offs, Fail-overs and Putting Tasks Back on the Worklist
Sometimes the performer of the current task, after setting it to IN PROGRESS, may determine it cannot complete the task and would like the task performed by another system. It is not permitted to move the task backwards to the SCHEDULED state.
One approach is for the performer to cancel the old UPS and schedule a new UPS to be pulled off the worklist by another system or by itself at some point in the future. The new UPS would be populated with details from the original. The details of the new UPS, such as the Input Information Sequence (0040,4021), the Scheduled Workitem Code Sequence (0040,4018), and the Scheduled Processing Parameters Sequence (0074,1210), might be revised to reflect any work already completed in the old UPS. By including the "Discontinued Procedure Step rescheduled" code in the Procedure Step Discontinuation Reason Code Sequence (0074,100e) of the old UPS, the performer can allow watchers and other systems monitoring the task to know that there is a replacement for the old canceled UPS. By referencing the UID of the old UPS in the Replaced Procedure Step Sequence (0074,1224) of the new UPS, the performer can allow watchers and other systems to find the new UPS that replaced the old. A proactive SCP might even subscribe watchers of the old UPS to the new UPS that replaces it.
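A sketch of this re-scheduling pattern, using pydicom to assemble the datasets (the helper name and variables are hypothetical; the coded entry for "Discontinued Procedure Step rescheduled" is defined in PS3.16 and is passed in here rather than spelled out):

    from pydicom.dataset import Dataset
    from pydicom.sequence import Sequence

    def reschedule(old_ups, reason_code):
        # Build the N-CREATE content for a new UPS replacing old_ups;
        # reason_code is the coded entry for
        # "Discontinued Procedure Step rescheduled" (see PS3.16).
        new_ups = Dataset()

        # Carry over (possibly revised) scheduling details from the old UPS
        new_ups.ScheduledWorkitemCodeSequence = \
            old_ups.ScheduledWorkitemCodeSequence          # (0040,4018)
        new_ups.InputInformationSequence = \
            old_ups.InputInformationSequence               # (0040,4021)
        new_ups.ScheduledProcessingParametersSequence = \
            old_ups.ScheduledProcessingParametersSequence  # (0074,1210)

        # Point back to the step this one replaces (0074,1224)
        ref = Dataset()
        ref.ReferencedSOPClassUID = old_ups.SOPClassUID
        ref.ReferencedSOPInstanceUID = old_ups.SOPInstanceUID
        new_ups.ReplacedProcedureStepSequence = Sequence([ref])

        # Record the discontinuation reason (0074,100e) in the old, canceled UPS
        old_ups.ProcedureStepDiscontinuationReasonCodeSequence = \
            Sequence([reason_code])
        return new_ups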
Alternatively, if the performer does not have the capability to create a new UPS, it could include the "Discontinued Procedure Step rescheduling recommended" code in the Procedure Step Discontinuation Reason Code Sequence (0074,100e). A very smart scheduling system could observe the cancellation reason and create the new replacement UPS as described above on behalf of the performer.
Another approach is for the performer to "sub-contract" to another system by pushing a new UPS onto that system and marking the original UPS complete after the sub-contractor finishes.
Yet another approach would be for the performer to deliver the Locking UID (by some unspecified mechanism) to another system allowing the new system to continue the work on the existing UPS. Coordination and reconciliation would be very important since the new system would need to review the current contents of the UPS, understand the current state, update the performing system information, etc.
The performing system for a UPS instance determines what details to put in the attributes of the Performed Procedure Information Module. It is possible that the procedure performed may differ in some details from the procedure scheduled. It is up to the performing system to decide how much the performed procedure can differ from the scheduled procedure before it is considered a different procedure, or how much must be performed before the procedure is considered complete.
In the case of cancellation, it is possible that some details of the situation may be indeterminable. Beyond meeting the Final State requirements, accurately reflecting in the CANCELED UPS instance the actual state of the task (e.g., reflecting partial work completed and/or any cleanup performed during cancellation), is at the discretion of the performing system.
In general it is expected that:
An SCU that completes a UPS differently than described in the scheduled details, but accomplishes the intended goal, would record the details as performed in the existing UPS and set it to COMPLETED. Interested systems may choose to N-GET the Performed Codes from the UPS and confirm whether they match the Scheduled Codes.
An SCU that completes part of the work described in a UPS, but does not accomplish the intended goal, would set the Performed Protocol Codes to reflect what work was fully or partially completed, set the Output Sequence to reflect the created objects and set the UPS state to CANCELED since the goal was not completed.
An SCU that completes a step with a different intent and scope in place of a scheduled UPS would cancel the original scheduled UPS, listing no work output products, and schedule a new UPS describing what was actually done, referencing the original UPS in the Replaced Procedure Step Sequence of the new UPS so that monitoring systems can "close the loop".
An SCU that completes multiple steps, scheduled as separate UPS instances (e.g., a dictation & a transcription & a verification), as a block would individually report each of them as completed.
An SCU that completes additional unscheduled work in the course of completing a scheduled UPS would either report additional procedure codes in the completed UPS, or create one or more new UPS instances to record the unscheduled work.
There are cases where it may be useful to schedule a complex procedure that is essentially a grouping of multiple workitems. Placing multiple workitem codes in the Scheduled Workitem Code Sequence is not permitted (partly due to the additional complexities that would result, related to sequencing, dependency, partial completion, etc.).
One approach is to schedule separate UPS instances for each of the component workitems and to identify the related UPS instances based on their use of a common Study UID or Order Number.
Another approach is for the site to define a single workitem code that means a pre-defined combination of what would otherwise be separate workitems, along with describing the necessary sequencing, dependencies, etc.
The UPS Subscription allows the Receiving AE Title to be different from the AE Title of the SCU of the N-ACTION request. This allows an SCU to sign up another interested party for a subscription. For example, a reporting workflow manager could subscribe the RIS to UPSs the reporting workflow manager creates for radiology studies, and subscribe the CIS to UPSs it creates for cardiology studies. Or a RIS could subscribe the MPPS broker or the order tracking system to the high-level UPS instances and save them from having independent business logic to determine which ones are significant.
This can provide an alternative to systems using global subscriptions to stay on top of things. It also has the benefit of providing a way to avoid having to "forward" events. All interested SCUs get their events directly from the SCP. Instead of SCU A forwarding relevant events to SCU B, SCU A can simply subscribe SCU B to the relevant events.
This annex discusses the design considerations that went into the definition of the WADO extension to Web and REST services.
The WADO-RS and STOW-RS requests have no parameters because data is requested through well-defined URLs and content negotiation through HTTP headers.
In the URI based WADO, the response is the single payload returned in the HTTP Get response. It may be the DICOM object in a DICOM format or in a rendered format.
The WADO-RS Service is a transport service, as opposed to a rendering service, which provides resources that enable machine to machine transfers of binary instances, pixel data, bulk data, and metadata. These services are not primarily intended to be directly displayable in a browser.
In the REST Services implementation:
For the "DICOM Requester", one or more multipart/related parts are returned containing PS3.10 binary DICOM instances of a Study, Series, or a single Instance.
For the "Frame Pixel Data Requester", one or more multipart/related parts are returned containing the pixel data of a multi-frame SOP Instance.
For the "Bulk Data Requester", one or more multipart/related parts are returned containing the bulk data of a Study, Series or SOP Instance.
For the "Metadata Requester", an item is returned containing the XML encoded metadata selected from the retrieved objects header as described in the Native DICOM Model defined in PS3.19.
The STOW-RS Service provides the ability to STore Over the Web using RESTful Services (i.e., HTTP-based functionality equivalent to C-STORE).
For the "DICOM Creator", one or more multipart/related parts are stored (posted to a STOW-RS Service) containing one or more DICOM Composite SOP Instances.
For the "Metadata and Bulk Data Creator", one or more multipart/related parts are stored (posted to a STOW-RS Service) containing the XML encoded metadata defined in PS3.19 and one or more parts containing the bulk data of a Study, Series or SOP Instance.
The implementation architecture has to maximize interoperability, preserve or improve performance and minimize storage overhead.
The Web Services technologies have been selected to meet these goals.
The WADO-RS response will be provided as a list of XML and/or binary instances in a multipart/related response. The type of response depends on the media types listed in the Accept header.
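For example, retrieving a whole Study as PS3.10 instances could look like the following Python sketch (the service root URL and Study Instance UID are hypothetical):

    import requests

    service = "https://server.example.org/dicom-web"  # hypothetical service root
    study_uid = "1.2.3.4.5"                           # hypothetical Study UID

    # Ask for the Study as PS3.10 parts in a multipart/related response
    r = requests.get(
        service + "/studies/" + study_uid,
        headers={"Accept": 'multipart/related; type="application/dicom"'},
    )
    # The Content-Type header of the response carries the part boundary
    content_type = r.headers["Content-Type"]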
The STOW-RS response is a standard HTTP status line and possibly an XML response message body. The meaning of the success, warning, or failure statuses are defined in PS3.18.
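A minimal STOW-RS store of a single PS3.10 instance might look like this sketch (service root and file name are hypothetical; the multipart/related body is assembled by hand for clarity):

    import requests

    boundary = "DICOM_BOUNDARY"
    with open("instance.dcm", "rb") as f:   # a PS3.10 file
        part = f.read()

    body = (("--%s\r\nContent-Type: application/dicom\r\n\r\n" % boundary).encode()
            + part
            + ("\r\n--%s--" % boundary).encode())

    r = requests.post(
        "https://server.example.org/dicom-web/studies",   # hypothetical
        data=body,
        headers={"Content-Type":
                 'multipart/related; type="application/dicom"; boundary=' + boundary},
    )
    # r.status_code carries the HTTP status; an XML message body, when present,
    # details per-instance success, warning, or failure as defined in PS3.18.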
Imaging information is important in the context of EMR/EHR. But EMR/EHR systems often do not support the DICOM protocol. The EMR/EHR vendors need access using web and web service technologies to satisfy their users.
Examples of use cases / clinical scenarios, as the basis to develop the requirements, include:
Providing access to images and reports from a point-of-service application e.g., EMR.
Following references to significant images used to create an imaging report and displaying those images.
Following references / links to relevant images and imaging reports in email correspondence or clinical reports e.g., clinical summary.
Providing access to anonymized DICOM images and reports for clinical research and teaching purposes.
Providing access to a DICOM encoded imaging report associated with the DICOM IE (patient/study/series/objects) to support remote diagnostic workflows e.g., urgent medical incidents, remote consultation, clinical training, teleradiology/telemedicine applications.
Providing access to summary or selected information from DICOM objects.
Providing access to complete studies for caching, viewing, or image processing.
Storing DICOM SOP Instances using HTTP over a Network from PACS to PACS, from PACS to VNA, from VNA to VNA, from clinical application to PACS, or any other DICOM SCP.
Web clients, including mobile ones, retrieving XML and bulk data from a WADO-RS Service and adding new instances to a study.
Examples of the first use case above (access from a point-of-service application such as an EMR) are:
The EMR displays in JPEG one image with annotations on it (patient and/or technique related), based upon information provided in a report.
The EMR retrieves from a "Manifest" document all the referenced objects in DICOM and launches a DICOM viewer for displaying them (use case addressed by the IHE XDS-I.b profile).
The EMR displays in JPEG one image per series with information describing every series (e.g., series description).
The EMR displays in JPEG all the images of a series with information describing the series as well as every image (e.g., instance number and slice location for scanner images).
The EMR populates in its database for all the instances referred in a manifest (KOS) the relevant information (study ID/UID/AccessionNumber/Description/DateTime, series UID/Modality/Description/DateTime, instance UID/InstanceNumber/SliceLocation).
The EMR displays patient demographics and image slices in a browser by accessing studies through URLs that are cached and rendered in a remote data center.
A hospital transfers a DICOM Study over a network to another healthcare provider without needing special ports opened in either firewall.
A diagnostic visualization client, during post-processing, adds a series of Instances containing measurements, annotations, or reports.
A healthcare provider transfers a DICOM Study to a Patient Health Record (PHR) at the request of the patient.
As an example, the 1c use case (the EMR displays one image per series) is decomposed into the following steps (all the other use cases can be implemented through a similar sequence of basic transactions):
The EMR sends to the DICOM server the list of the objects ("selection"), asking for the object content.
The DICOM server sends back the JPEG images corresponding to the listed objects.
The EMR sends to the DICOM server the "selection" information for obtaining the relevant information about the objects retrieved.
The DICOM server sends back the corresponding information in the form of a "metadata" document, converted in XML.
The use cases described above in terms of clinical scenarios correspond to the following technical implementation scenarios. In each case the use is distinguished by the capabilities of the requesting system:
Does it prefer URI-based requests or web-services-based requests?
Does it have the ability to decode and utilize the DICOM PS3.10 format?
Does it need the metadata describing the image and its acquisition, and/or does it need an image to be displayed?
These then become the following technical use cases.
The requesting system is a Web Browser or other application that can make simple HTTP/HTTPS requests,
Reference information is provided as URL or similar information that can be easily converted into a URL.
SOP instance, reformatted and subset as requested. This may be encoded as a DICOM PS3.10 instance, or rendered into a generic image format such as JPEG.
The requesting system is an application capable of making Web Service requests and able to process data encoded as a DICOM File, per DICOM PS3.10 encodings.
Reference information may come in a wide variety of forms. It is expected to include at least the Study UID, Series UID, and Individual SOP instance information. This may be encoded as part of an HL7 reference within a CDA document, a DICOM SOP Instance reference, or other formats.
SOP Instances, encoded per DICOM PS3.10.
The requesting system: application capable of making Web Service requests. System is not capable of decoding DICOM PS3.10 formats. The system is capable of processing images in JPEG or other more generic formats.
Reference information may come in a wide variety of forms. It is expected to include at least the Study UID, Series UID, and Individual SOP instance information. This may be encoded as part of an HL7 reference within a CDA document, a DICOM SOP Instance reference, or other formats.
The requesting system: application capable of making Web Service requests. The requesting System is not capable of decoding DICOM PS3.10 formats. The system is capable of processing metadata that describes the image, provided that the metadata is encoded in an XML format. The system can be programmed based upon the DICOM definitions for XML encoding and attribute meanings.
Reference information may come in a wide variety of forms. It is expected to include at least the Study UID, Series UID, and Individual SOP instance information. This may be encoded as part of an HL7 reference within a CDA document, a DICOM SOP Instance reference, or other formats.
The requesting system is an application capable of making HTTP Service requests and able to process data encoded as a DICOM File, per DICOM PS3.10 encodings.
Requesting information for DICOM Instances may come in a wide variety of forms. It is expected to include at least the Study UID. This may be encoded as part of an HL7 reference within a CDA document, a DICOM SOP Instance reference, or other formats.
SOP Instances, encoded per DICOM PS3.10.
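A minimal sketch of such a study retrieval, assuming Python's requests library and a hypothetical WADO-RS service root; the response is a multipart/related message whose parts are PS3.10 encoded SOP Instances, and multipart parsing is elided.

import requests

# Retrieve all instances of a study as PS3.10 encoded parts
r = requests.get("http://dicom.example.org/dicomweb/studies/1.2.3.1",
                 headers={"Accept": 'multipart/related; type="application/dicom"'})
r.raise_for_status()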
The requesting system is an application capable of making HTTP requests and able to process pixel data.
Requesting information for pixel data may come in a wide variety of forms. It is expected to include at least the Study UID, Series UID, Individual SOP Instance, and Frame List information. This may be encoded as part of an HL7 reference within a CDA document, a DICOM SOP Instance reference, or other formats.
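A minimal sketch of a frame-level retrieval under the same assumptions; the frame list in the URL is a comma-separated list of frame numbers.

import requests

# Retrieve frames 1 and 2 of one multi-frame instance as bulk pixel data
r = requests.get(
    "http://dicom.example.org/dicomweb/studies/1.2.3.1"
    "/series/1.2.3.1.1/instances/1.2.3.1.1.1/frames/1,2",
    headers={"Accept": 'multipart/related; type="application/octet-stream"'})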
The requesting system is an application capable of making HTTP requests and able to process bulk data.
Requesting information for bulk data may come in a wide variety of forms. It is expected to include the Bulk Data URL as provided by the RetrieveMetadata resource. This may be encoded as part of an HL7 reference within a CDA document, a DICOM SOP Instance reference, or other formats.
The requesting system is an application capable of making HTTP requests and able to process data encoded as XML, per DICOM PS3.19 encodings.
The Study UID may be obtained as part of an HL7 reference within a CDA document, a DICOM SOP Instance reference, or other formats.
The response provides full study metadata encoded in XML, encoded per DICOM PS3.19.
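A minimal sketch combining this metadata retrieval with the preceding bulk data use case, under the same assumptions; the regular-expression scan for BulkData uri attributes stands in for proper multipart and XML parsing.

import re
import requests

BASE = "http://dicom.example.org/dicomweb"

# Retrieve the full study metadata as PS3.19 XML
meta = requests.get(BASE + "/studies/1.2.3.1/metadata",
                    headers={"Accept": 'multipart/related; type="application/dicom+xml"'})

# Each BulkData element carries a uri attribute pointing at the bulk data
bulk_uris = re.findall(rb'<BulkData uri="([^"]+)"', meta.content)
if bulk_uris:
    bulk = requests.get(bulk_uris[0].decode(),
                        headers={"Accept": 'multipart/related; type="application/octet-stream"'})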
The requesting system is an application capable of making HTTP Service requests and able to process data encoded as PS3.10 binary instances.
Optionally, it may also specify a Study Instance UID, indicating that all POST requests are for the indicated study.
SOP Instances, per DICOM PS3.10 encoding.
The response is a standard HTTP status line and an XML response message body. The meanings of the success, warning, and failure statuses are defined in PS3.18.
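A minimal sketch of such a store request, assuming Python's requests library; the boundary string, file name, and service root are illustrative.

import requests

BOUNDARY = "DICOMwebBoundary"
with open("instance1.dcm", "rb") as f:   # a PS3.10 file to be stored
    part = f.read()

# Build a single-part multipart/related body containing the PS3.10 instance
body = b"".join([
    b"--" + BOUNDARY.encode() + b"\r\n",
    b"Content-Type: application/dicom\r\n\r\n",
    part,
    b"\r\n--" + BOUNDARY.encode() + b"--\r\n",
])

r = requests.post(
    "http://dicom.example.org/dicomweb/studies",   # or .../studies/{StudyInstanceUID}
    data=body,
    headers={
        "Content-Type": 'multipart/related; type="application/dicom"; boundary=' + BOUNDARY,
        "Accept": "application/dicom+xml",
    })
# r.status_code and the XML body carry the status semantics defined in PS3.18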
The requesting system is an application capable of making HTTP requests and able to process data encoded as PS3.19 XML metadata.
Optionally, it may also specify a Study Instance UID, indicating that all POST requests are for the indicated study.
XML metadata, per DICOM PS3.19 encodings, and bulk data.
The response is a standard HTTP status line and an XML response message body. The meanings of the success, warning, and failure statuses are defined in PS3.18.
Imaging information is important in the context of the EMR/EHR, but EMR/EHR systems often do not support DICOM service classes. EMR/EHR vendors need access using web and web service technologies to satisfy their users.
Examples of use cases / clinical scenarios, used as the basis for the development of the QIDO-RS requirements, include:
A General Practitioner (GP) in a clinic would like to check for imaging studies for the current patient. These studies are stored in a PACS, Vendor Neutral Archive (VNA) or HIE that supports QIDO functionality. The GP launches an Electronic Medical Record (EMR) application, and keys in the patient demographics to search for the patient record within the EMR. Once the record is open, the EMR, using QIDO, makes requests to the back-end systems, supplying Patient ID (including issuer) and possibly other parameters (date of birth, date range, modality, etc.). That system returns the available studies along with meta-data for each study that will help the GP select the study to open. The meta-data would include, but is not limited to, Study Description, Study Date, Modality, and Referring Physician.
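A minimal sketch of the QIDO-RS search in this scenario, assuming Python's requests library; the service root and all parameter values are placeholders.

import requests

r = requests.get(
    "http://dicom.example.org/dicomweb/studies",
    params={
        "PatientID": "12345",                      # plus issuer, where supported
        "StudyDate": "20200101-20201231",          # optional date range
        "ModalitiesInStudy": "CT",
        "includefield": ["StudyDescription", "ReferringPhysicianName"],
    },
    headers={"Accept": "application/dicom+json"})
studies = r.json()   # one JSON object per matching study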
HL7 has introduced FHIR (Fast Healthcare Interoperability Resources) as a means of providing access to healthcare informatics information using RESTful web services.
While FHIR will not replicate the information contained in a PACS or other medical imaging storage system, it is desirable for FHIR to present a view of the medical imaging studies available for a particular patient along with the means of retrieving the imaging data using other RESTful services.
A Radiologist is reading studies in the office, using software that maintains diagnostic orders for the facility. This system produces the radiology worklist of studies to be read and provides meta-data about each scheduled procedure, including the Study Instance UID. When the next study on the worklist is selected to be read, the system, using the Study Instance UID, makes a QIDO request to the local archive to discover the instances and relevant study meta-data associated with the procedure to display. Subsequent QIDO requests are made to the local archive and to connected VNA archives to discover candidate relevant prior studies for that patient.
For each candidate relevant prior, the full study metadata will be retrieved using WADO-RS and processed to generate the list of relevant priors.
A Radiologist is working in a satellite clinic, which has a system with QIDO functionality and a small image cache. The main hospital with which the clinic is affiliated has a system with QIDO functionality and a large historical image archive or VNA. The viewing software displays a worklist of patients, and a study is selected for viewing. The viewer checks for prior studies by making QIDO requests to both the local cache and the remote archive using the Patient ID, Name, and Date of Birth, if available. If the Patient Identifier is not available, other means (such as other demographics, or a Master Patient Index) could be utilized. Any studies that meet relevant prior criteria can be pre-fetched.
A Neurologist is preparing a surgical plan for a patient with a brain tumor using three-dimensional reconstruction software, which takes CT images and builds a 3D model of various structures. After supplying the patient demographics (or Patient Identifier), the software requests a list of appropriate studies for reconstruction (based on Study Date, Body Region and Modality). Once the user has selected a study and series, the software contacts the QIDO server again, requesting the SOP Instance UIDs of all images of a certain thickness (specified in specific DICOM tags) and frame of reference to be returned. The software then uses this information to retrieve, using the WADO-RS service, the appropriate DICOM objects needed to prepare the rendered volume for display.
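A hedged sketch of the instance-level QIDO-RS query in this scenario; whether a server supports matching on attributes such as Slice Thickness is implementation-dependent, and all values are placeholders.

import requests

r = requests.get(
    "http://dicom.example.org/dicomweb/studies/1.2.3.1/series/1.2.3.1.1/instances",
    params={
        "SliceThickness": "1.0",           # (0018,0050), assumed to be a supported match key
        "FrameOfReferenceUID": "1.2.3.9",  # (0020,0052)
        "includefield": "SOPInstanceUID",
    },
    headers={"Accept": "application/dicom+json"})
sop_uids = [m["00080018"]["Value"][0] for m in r.json()]   # SOP Instance UIDs to retrieve via WADO-RS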
A General Practitioner (GP) has left the medical ward for a few hours, and is paged with a request to look at a patient X-Ray image in order to grant a discharge. The GP carries a smart phone that has been pre-loaded with credentials and secured. The device makes a QIDO request to the server, to look for studies from the last hour that list the GP as the Referring Physician. The GP is able to retrieve and view the matching studies, and can make a determination whether to return to the ward for further review or to sign the discharge order using the phone.
The use cases described above in terms of clinical scenarios correspond to the following technical implementation scenarios. In each case the use case is distinguished by the capabilities of the requesting system:
These questions can be applied to the use cases:
All attributes required by the FHIR Imaging Study Resource (see http://www.hl7.org/implement/standards/fhir/imagingstudy.htm)
These then become the following technical use cases.
The requesting web-based application can make QIDO-RS requests, parse XML and then make WADO-RS requests
One PS3.19 XML NativeDicomModel element for each matching Study
The requesting system identifies the Studies of interest and uses WADO-RS to retrieve data
The requesting system is a simple web-based application that can make QIDO-RS requests and parse XML and then make WADO URL requests
One PS3.19 XML NativeDicomModel element for each matching Study
The requesting system identifies the Study of interest and uses Search For Series to identify a series of interest
The requesting system uses WADO URL to retrieve specific instances
The requesting system is a mobile application that can make QIDO-RS requests, parse JSON and then make WADO URL requests.
The requesting system identifies the Study of interest and uses Search For Series to identify a series of interest
The requesting system uses WADO URL to retrieve specific instances
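A minimal sketch of this mobile scenario, assuming Python's requests library; per the DICOM JSON model, each attribute in the response is keyed by its eight-digit hexadecimal tag, and all hosts and UIDs are placeholders.

import requests

BASE = "http://dicom.example.org"

# Search For Series within the selected study (QIDO-RS, JSON response)
series = requests.get(BASE + "/dicomweb/studies/1.2.3.1/series",
                      headers={"Accept": "application/dicom+json"}).json()
series_uid = series[0]["0020000E"]["Value"][0]   # SeriesInstanceUID of a series of interest

# Search For Instances in that series, then retrieve one via a WADO URL
instances = requests.get(
    BASE + "/dicomweb/studies/1.2.3.1/series/" + series_uid + "/instances",
    headers={"Accept": "application/dicom+json"}).json()
sop_uid = instances[0]["00080018"]["Value"][0]   # SOPInstanceUID

jpeg = requests.get(BASE + "/wado", params={
    "requestType": "WADO",
    "studyUID": "1.2.3.1",
    "seriesUID": series_uid,
    "objectUID": sop_uid,
    "contentType": "image/jpeg",
})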
Clients would like to be able to discover a list of devices that support DICOM RESTful services and query a DICOM RESTful service to determine which options are supported, such as:
The following WADL XML example contains all the required elements for an origin-server that supports WADO-RS, QIDO-RS and STOW-RS with all required services and parameters.
<application xsi:schemaLocation="http://wadl.dev.java.net/2009/02 wadl.xsd"
xmlns:xsd="http://www.w3.org/2001/XMLSchema"
xmlns="http://wadl.dev.java.net/2009/02">
<resources base="http://medical.examplehospital.org/dicomweb">
<resource path="studies">
<method name="GET" id="SearchForStudies">
<request>
<param name="Accept" style="header"
default="type=application/dicom+json">
<option value="multipart/related; type=application/dicom+xml" />
<option value="application/dicom+json" />
</param>
<param name="Cache-control" style="header">
<option value="no-cache" />
</param>
<param name="limit" style="query" />
<param name="offset" style="query" />
<param name="StudyDate" style="query" />
<param name="00080020" style="query" />
<param name="StudyTime" style="query" />
<param name="00080030" style="query" />
<param name="AccessionNumber" style="query" />
<param name="00080050" style="query" />
<param name="ModalitiesInStudy" style="query" />
<param name="00080061" style="query" />
<param name="ReferringPhysicianName" style="query" />
<param name="00080090" style="query" />
<param name="PatientName" style="query" />
<param name="00100010" style="query" />
<param name="PatientID" style="query" />
<param name="00100020" style="query" />
<param name="StudyInstanceUID" style="query" repeating="true" />
<param name="0020000D" style="query" repeating="true" />
<param name="StudyID" style="query" />
<param name="00200010" style="query" />
<param name="includefield" style="query" repeating="true">
<option value="all" />
</param>
</request>
<response status="200">
<param name="Warning" style="header"
fixed="299 {SERVICE}: The fuzzymatching parameter is not supported.
Only literal matching has been performed." />
<representation mediaType="multipart/related; type=application/dicom+xml" />
<representation mediaType="application/dicom+json" />
</response>
<response status="400 401 403 413 503" />
</method>
<method name="POST" id="StoreInstances">
<request>
<param name="Accept" style="header" default="application/dicom+xml">
<option value="application/dicom+xml" />
</param>
<representation mediaType="multipart/related; type=application/dicom" />
<representation mediaType="multipart/related; type=application/dicom;
transfer-syntax=1.2.840.10008.1.2.1" />
<representation mediaType="multipart/related; type=application/dicom+xml" />
</request>
<response status="202 409">
<representation mediaType="application/dicom+xml" />
</response>
<response status="400 401 403 503" />
</method>
<resource path="{StudyInstanceUID}">
<method name="GET" id="RetrieveStudy">
<request>
<param name="Accept" style="header"
default="multipart/related; type=application/dicom">
<option value="multipart/related; type=application/dicom" />
<option value="multipart/related; type=application/dicom;
transfer-syntax=1.2.840.10008.1.2.1" />
<option value="multipart/related; type=application/octet-stream" />
</param>
</request>
<response status="200 206">
<representation mediaType="multipart/related; type=application/dicom" />
<representation mediaType="multipart/related; type=application/dicom;
transfer-syntax=1.2.840.10008.1.2.1" />
<representation mediaType="multipart/related; type=application/octet-stream" />
</response>
<response status="400 404 406 410 503"></response>
</method>
<method name="POST" id="StoreStudyInstances">
<request>
<param name="Accept" style="header" default="application/dicom+xml">
<option value="application/dicom+xml" />
</param>
<representation mediaType="multipart/related; type=application/dicom" />
<representation mediaType="multipart/related; type=application/dicom;
transfer-syntax=1.2.840.10008.1.2.1" />
<representation mediaType="multipart/related; type=application/dicom+xml" />
</request>
<response status="202 409">
<representation mediaType="application/dicom+xml" />
</response>
<response status="400 401 403 503" />
</method>
<resource path="series">
<method name="GET" id="SearchForStudySeries">
<request>
<param name="Accept" style="header"
default="type=application/dicom+json">
<option value="multipart/related; type=application/dicom+xml" />
<option value="application/dicom+json" />
</param>
<param name="Cache-control" style="header">
<option value="no-cache" />
</param>
<param name="limit" style="query" />
<param name="offset" style="query" />
<param name="Modality" style="query" />
<param name="00080060" style="query" />
<param name="SeriesInstanceUID" style="query" repeating="true" />
<param name="0020000E" style="query" repeating="true" />
<param name="SeriesNumber" style="query" />
<param name="00200011" style="query" />
<param name="PerformedProcedureStepStartDate" style="query" />
<param name="00400244" style="query" />
<param name="PerformedProcedureStepStartTime" style="query" />
<param name="00400245" style="query" />
<param name="RequestAttributeSequence" style="query" />
<param name="00400275" style="query" />
<param name="RequestAttributeSequence.ScheduledProcedureStepID" style="query" />
<param name="00400275.00400009" style="query" />
<param name="RequestAttributeSequence.RequestedProcedureID" style="query" />
<param name="00400275.00401001" style="query" />
<param name="includefield" style="query" repeating="true">
<option value="all" />
</param>
</request>
<response status="200">
<param name="Warning" style="header"
fixed="299 {SERVICE}: The fuzzymatching parameter is not supported.
Only literal matching has been performed." />
<representation mediaType="multipart/related; type=application/dicom+xml" />
<representation mediaType="application/dicom+json" />
</response>
<response status="400 401 403 413 503" />
</method>
<resource path="{SeriesInstanceUID}">
<method name="GET" id="RetrieveSeries">
<request>
<param name="Accept" style="header"
default="multipart/related; type=application/dicom">
<option value="multipart/related; type=application/dicom" />
<option value="multipart/related; type=application/dicom;
transfer-syntax=1.2.840.10008.1.2.1" />
<option value="multipart/related; type=application/octet-stream" />
</param>
</request>
<response status="200 206">
<representation mediaType="multipart/related; type=application/dicom" />
<representation mediaType="multipart/related; type=application/dicom;
transfer-syntax=1.2.840.10008.1.2.1" />
<representation mediaType="multipart/related; type=application/octet-stream" />
</response>
<response status="400 404 406 410 503"></response>
</method>
<resource path="instances">
<method name="GET" id="SearchForStudySeriesInstances">
<request>
<param name="Accept" style="header"
default="type=application/dicom+json">
<option value="multipart/related; type=application/dicom+xml" />
<option value="application/dicom+json" />
</param>
<param name="Cache-control" style="header">
<option value="no-cache" />
</param>
<param name="limit" style="query" />
<param name="offset" style="query" />
<param name="SOPClassUID" style="query" repeating="true" />
<param name="00080016" style="query" repeating="true" />
<param name="SOPInstanceUID" style="query" repeating="true" />
<param name="00080018" style="query" repeating="true" />
<param name="InstanceNumber" style="query" />
<param name="00200013" style="query" />
<param name="includefield" style="query" repeating="true">
<option value="all" />
</param>
</request>
<response status="200">
<param name="Warning" style="header"
fixed="299 {SERVICE}: The fuzzymatching parameter is not supported.
Only literal matching has been performed." />
<representation mediaType="multipart/related; type=application/dicom+xml" />
<representation mediaType="application/dicom+json" />
</response>
<response status="400 401 403 413 503" />
</method>
<resource path="{SOPInstanceUID}">
<method name="GET" id="RetrieveInstance">
<request>
<param name="Accept" style="header"
default="multipart/related; type=application/dicom">
<option value="multipart/related; type=application/dicom" />
<option value="multipart/related; type=application/dicom;
transfer-syntax=1.2.840.10008.1.2.1" />
<option value="multipart/related; type=application/octet-stream" />
</param>
</request>
<response status="200 206">
<representation mediaType="multipart/related; type=application/dicom" />
<representation mediaType="multipart/related; type=application/dicom;
transfer-syntax=1.2.840.10008.1.2.1" />
<representation mediaType="multipart/related; type=application/octet-stream" />
</response>
<response status="400 404 406 410 503"></response>
</method>
<resource path="frames">
<resource path="{framelist}">
<method name="GET" id="RetrieveFrames">
<request>
<param name="Accept" style="header"
default="multipart/related; type=application/octet-stream">
<option value="multipart/related; type=application/octet-stream" />
</param>
</request>
<response status="200">
<representation mediaType="multipart/related; type=application/octet-stream" />
</response>
<response status="400 404 406 410 503"></response>
</method>
</resource>
</resource>
<resource path="metadata">
<method name="GET" id="RetrieveInstanceMetadata">
<request>
<param name="Accept" style="header"
default="type=application/dicom+json">
<option value="multipart/related; type=application/dicom+xml" />
<option value="application/dicom+json" />
</param>
</request>
<response status="200">
<representation mediaType="multipart/related; type=application/dicom+xml" />
</response>
<response status="400 404 406 410 503"></response>
</method>
</resource>
</resource>
</resource>
<resource path="metadata">
<method name="GET" id="RetrieveSeriesMetadata">
<request>
<param name="Accept" style="header"
default="type=application/dicom+json">
<option value="multipart/related; type=application/dicom+xml" />
<option value="application/dicom+json" />
</param>
</request>
<response status="200">
<representation mediaType="multipart/related; type=application/dicom+xml" />
</response>
<response status="400 404 406 410 503"></response>
</method>
</resource>
</resource>
</resource>
<resource path="instances">
<method name="GET" id="SearchForStudyInstances">
<request>
<param name="Accept" style="header"
default="type=application/dicom+json">
<option value="multipart/related; type=application/dicom+xml" />
<option value="application/dicom+json" />
</param>
<param name="Cache-control" style="header">
<option value="no-cache" />
</param>
<param name="limit" style="query" />
<param name="offset" style="query" />
<param name="SOPClassUID" style="query" />
<param name="00080016" style="query" />
<param name="SOPInstanceUID" style="query" repeating="true" />
<param name="00080018" style="query" repeating="true" />
<param name="Modality" style="query" />
<param name="00080060" style="query" />
<param name="SeriesInstanceUID" style="query" repeating="true" />
<param name="0020000E" style="query" repeating="true" />
<param name="SeriesNumber" style="query" />
<param name="00200011" style="query" />
<param name="InstanceNumber" style="query" />
<param name="00200013" style="query" />
<param name="PerformedProcedureStepStartDate" style="query" />
<param name="00400244" style="query" />
<param name="PerformedProcedureStepStartTime" style="query" />
<param name="00400245" style="query" />
<param name="RequestAttributeSequence" style="query" />
<param name="00400275" style="query" />
<param name="RequestAttributeSequence.ScheduledProcedureStepID" style="query" />
<param name="00400275.00400009" style="query" />
<param name="RequestAttributeSequence.RequestedProcedureID" style="query" />
<param name="00400275.00401001" style="query" />
<param name="includefield" style="query" repeating="true">
<option value="all" />
</param>
</request>
<response status="200">
<param name="Warning" style="header"
fixed="299 {SERVICE}: The fuzzymatching parameter is not supported.
Only literal matching has been performed." />
<representation mediaType="multipart/related; type=application/dicom+xml" />
<representation mediaType="application/dicom+json" />
</response>
<response status="400 401 403 413 503" />
</method>
</resource>
<resource path="metadata">
<method name="GET" id="RetrieveStudyMetadata">
<request>
<param name="Accept" style="header"
default="type=application/dicom+json">
<option value="multipart/related; type=application/dicom+xml" />
<option value="application/dicom+json" />
</param>
</request>
<response status="200">
<representation mediaType="multipart/related; type=application/dicom+xml" />
</response>
<response status="400 404 406 410 503"></response>
</method>
</resource>
</resource>
</resource>
<resource path="series">
<method name="GET" id="SearchForSeries">
<request>
<param name="Accept" style="header"
default="type=application/dicom+json">
<option value="multipart/related; type=application/dicom+xml" />
<option value="application/dicom+json" />
</param>
<param name="Cache-control" style="header">
<option value="no-cache" />
</param>
<param name="limit" style="query" />
<param name="offset" style="query" />
<param name="StudyDate" style="query" />
<param name="00080020" style="query" />
<param name="StudyTime" style="query" />
<param name="00080030" style="query" />
<param name="AccessionNumber" style="query" />
<param name="00080050" style="query" />
<param name="Modality" style="query" />
<param name="00080060" style="query" />
<param name="ModalitiesInStudy" style="query" />
<param name="00080061" style="query" />
<param name="ReferringPhysicianName" style="query" />
<param name="00080090" style="query" />
<param name="PatientName" style="query" />
<param name="00100010" style="query" />
<param name="PatientID" style="query" />
<param name="00100020" style="query" />
<param name="StudyInstanceUID" style="query" repeating="true" />
<param name="0020000D" style="query" repeating="true" />
<param name="SeriesInstanceUID" style="query" />
<param name="0020000E" style="query" />
<param name="StudyID" style="query" />
<param name="00200010" style="query" />
<param name="SeriesNumber" style="query" />
<param name="00200011" style="query" />
<param name="PerformedProcedureStepStartDate" style="query" />
<param name="00400244" style="query" />
<param name="PerformedProcedureStepStartTime" style="query" />
<param name="00400245" style="query" />
<param name="RequestAttributeSequence" style="query" />
<param name="00400275" style="query" />
<param name="RequestAttributeSequence.ScheduledProcedureStepID" style="query" />
<param name="00400275.00400009" style="query" />
<param name="RequestAttributeSequence.RequestedProcedureID" style="query" />
<param name="00400275.00401001" style="query" />
<param name="includefield" style="query" repeating="true">
<option value="all" />
</param>
</request>
<response status="200">
<param name="Warning" style="header"
fixed="299 {SERVICE}: The fuzzymatching parameter is not supported.
Only literal matching has been performed." />
<representation mediaType="multipart/related; type=application/dicom+xml" />
<representation mediaType="application/dicom+json" />
</response>
<response status="400 401 403 413 503" />
</method>
<resource path="{SeriesInstanceUID}">
<resource path="instances">
<method name="GET" id="SearchForSeriesInstances">
<request>
<param name="Accept" style="header"
default="type=application/dicom+json">
<option value="multipart/related; type=application/dicom+xml" />
<option value="application/dicom+json" />
</param>
<param name="Cache-control" style="header">
<option value="no-cache" />
</param>
<param name="limit" style="query" />
<param name="offset" style="query" />
<param name="SOPClassUID" style="query" repeating="true" />
<param name="00080016" style="query" repeating="true" />
<param name="SOPInstanceUID" style="query" repeating="true" />
<param name="00080018" style="query" repeating="true" />
<param name="InstanceNumber" style="query" />
<param name="00200013" style="query" />
<param name="includefield" style="query" repeating="true">
<option value="all" />
</param>
</request>
<response status="200">
<param name="Warning" style="header"
fixed="299 {SERVICE}: The fuzzymatching parameter is not supported.
Only literal matching has been performed." />
<representation mediaType="application/dicom+json" />
<representation mediaType="multipart/related; type=application/dicom+xml" />
</response>
<response status="400 401 403 413 503" />
</method>
</resource>
</resource>
</resource>
<resource path="instances">
<method name="GET" id="SearchForInstances">
<request>
<param name="Accept" style="header"
default="type=application/dicom+json">
<option value="multipart/related; type=application/dicom+xml" />
<option value="application/dicom+json" />
</param>
<param name="Cache-control" style="header">
<option value="no-cache" />
</param>
<param name="limit" style="query" />
<param name="offset" style="query" />
<param name="SOPClassUID" style="query" repeating="true" />
<param name="00080016" style="query" repeating="true" />
<param name="SOPInstanceUID" style="query" repeating="true" />
<param name="00080018" style="query" repeating="true" />
<param name="StudyDate" style="query" />
<param name="00080020" style="query" />
<param name="StudyTime" style="query" />
<param name="00080030" style="query" />
<param name="AccessionNumber" style="query" />
<param name="00080050" style="query" />
<param name="Modality" style="query" />
<param name="00080060" style="query" />
<param name="ModalitiesInStudy" style="query" />
<param name="00080061" style="query" />
<param name="ReferringPhysicianName" style="query" />
<param name="00080090" style="query" />
<param name="PatientName" style="query" />
<param name="00100010" style="query" />
<param name="PatientID" style="query" />
<param name="00100020" style="query" />
<param name="StudyInstanceUID" style="query" repeating="true" />
<param name="0020000D" style="query" repeating="true" />
<param name="SeriesInstanceUID" style="query" repeating="true" />
<param name="0020000E" style="query" repeating="true" />
<param name="SeriesNumber" style="query" />
<param name="00200011" style="query" />
<param name="InstanceNumber" style="query" />
<param name="00200013" style="query" />
<param name="PerformedProcedureStepStartDate" style="query" />
<param name="00400244" style="query" />
<param name="PerformedProcedureStepStartTime" style="query" />
<param name="00400245" style="query" />
<param name="RequestAttributeSequence" style="query" />
<param name="00400275" style="query" />
<param name="RequestAttributeSequence.ScheduledProcedureStepID" style="query" />
<param name="00400275.00400009" style="query" />
<param name="RequestAttributeSequence.RequestedProcedureID" style="query" />
<param name="00400275.00401001" style="query" />
<param name="includefield" style="query" repeating="true">
<option value="all" />
</param>
</request>
<response status="200">
<param name="Warning" style="header"
fixed="299 {SERVICE}: The fuzzymatching parameter is not supported.
Only literal matching has been performed." />
<representation mediaType="multipart/related; type=application/dicom+xml" />
<representation mediaType="application/dicom+json" />
</response>
<response status="400 401 403 413 503" />
</method>
</resource>
<resource path="{BulkDataURL}">
<method name="GET" id="RetrieveBulkData">
<request>
<param name="Accept" style="header"
default="multipart/related; type=application/octet-stream">
<option value="multipart/related; type=application/octet-stream" />
</param>
</request>
<response status="200">
<representation mediaType="multipart/related; type=application/octet-stream" />
</response>
<response status="400 404 406 410 503"></response>
</method>
</resource>
</resources>
</application>
Several ophthalmic devices produce thickness and/or height measurements of certain anatomical features of the posterior eye (e.g., optic nerve head topography, retinal thickness map, etc.). The measurements are mapped topographically as monochromatic images with pseudo color maps, and used extensively for diagnostic purposes by clinicians.
Quantitative ophthalmic OCT image analysis provides essential thickness measurement data for the retina. In clinical practice, two thickness parameters are commonly used: total retinal thickness (TR) in the macular region and retinal nerve fiber layer (RNFL) thickness in the optic nerve head (ONH) region. TR is widely applied to assess various retinal pathologies involving the macula (e.g., cystoid macular edema, age-related macular degeneration, macular hole, etc.). The RNFL thickness measurement is most commonly used for glaucoma assessment.
Figure III.2-1 is an example of a 2D TR map computed from 3D OCT cube data from a healthy eye. The color bar on the left provides a color-to-thickness representation to allow interpretation of the false color coded 2D thickness map in the middle. The image on the right shows one OCT frame representing a retinal cross section along the red line (across the middle of the thickness map). TR is defined as the thickness between the internal limiting membrane (white line on the OCT frame on the right) and the RPE/Choroid interface (blue line on the OCT frame). These two borders are automatically detected using a segmentation algorithm applied to the entire 3D volume.
Figure III.3-1 is an example of a 2D RNFL map computed from 3D OCT cube data from a healthy eye. The figure layout is the same as in the previous example. The RNFL thickness is limited to the thickness of this single layer of the retina, which is comprised of the ganglion cell axons that course to the optic nerve head and exit the eye as the optic nerve. Note that this image depicts a mask in the center of the map where the optic nerve head (ONH) exists and no RNFL measurements can be obtained. In this example, the mask is displayed as a black area, which does not contain any thickness information (not a zero micron thickness). Since the color bar representation is not relevant at the ONH, common practice is to mask it to avoid confusion or misinterpretation due to meaningless thickness data in this area.
A 48 year old Navajo male with diabetes, decreased visual acuity, and fundoscopic stigmata of diabetic retinopathy receives several tests to assess his likelihood of macular edema. Optical coherence tomography (OPT) is performed to assess the thickness of the retina in the macular area, with retinal thickness depicted by ophthalmic mapping. The result is an Ophthalmic Thickness Map SOP Instance with the Ophthalmic Thickness Mapping Type Code Sequence (0022,1436) set to "Absolute Ophthalmic Thickness" and the Measurements Units Code Sequence (0040,08EA) in the Real World Value Mapping Macro set to "micrometer". The OPT image is also referenced in the Referenced Instance Sequence (0008,114A).
Since the thickness of the macula varies normally based upon a number of dependencies such as age, gender, and race, interpretation of the retinal thickness in any given patient may be done in the context of normative data that accounts for these variables. The thickness data used to generate the thickness map is analyzed using a manufacturer-specific algorithm for comparison to normative data relevant to this specific patient. The results of this analysis are depicted on a second thickness "map" (second SOP Instance) showing each pixel's variation from normal in terms of confidence that the variation is real and not due to chance. Specific confidence levels are then depicted by arbitrary color mapping registered to the fundus photograph. This is typically noted as the percent probability that the variation is abnormal, e.g., p >5%, p <5%, p <1%, etc. The result is an Ophthalmic Thickness Map SOP Instance with the Ophthalmic Thickness Mapping Type Code Sequence (0022,1436) set to "Thickness deviation category from normative data". Mapping the "categories" to a code concept is accomplished via the Pixel Value Mapping to Coded Concept Sequence (0022,1450).
A patient presented with normal visual acuity OU (both eyes), intraocular pressures (IOP) of 18 mm Hg OU, and C/D ratios of 0.7 OD (right eye) and 0.6 OS (left eye). Corneal pachymetry showed slight thinning in both eyes at 523µ OD and 530µ OS. Static threshold perimetry testing showed nonspecific defects OU and was unreliable due to multiple fixation losses. Confocal scanning laser ophthalmoscopy produced OPM topographic representations of both optic nerves suggestive of glaucoma. The contouring of the optic nerve head (ONH) in the left eye showed a slightly enlarged cup with diffuse thinning of the superior neural rim. In the right eye, there was greater enlargement of the cup and sloping of the cup superior-temporally with a clear notch of the neural rim at the 12:30 position. Corneal compensated scanning laser polarimetry was performed bilaterally. Analysis of the OPM representation of the retinal nerve fiber layer (RNFL) thickness map showed moderate retinal nerve fiber loss with accentuation at the superior pole bilaterally. The patient was diagnosed with normal tension glaucoma and started on a glaucoma medication. Follow-up examinations showed stable reduction in his IOP to 11 mm Hg OU and no further progression of his ONH or RNFL defects.
Using OCT technology, two major highly reflective bands are generally visible: the inner and outer highly reflective bands (IHRB and OHRB).
The inner band corresponds to the inner portion of the retina, which consists of the ILM (internal limiting membrane), RNFL (retinal nerve fiber layer), GCL (ganglion cell layer), IPL (inner plexiform layer), INL (inner nuclear layer), and OPL (outer plexiform layer). In terms of reflectivity, they generally present a high-low-high-low-high pattern: presumably the RNFL, IPL, and OPL are the highly reflective layers, and the GCL and INL are of low reflectivity. The ILM itself may or may not be visible in OCT images (depending on the scanning beam incidence angle), but for convenience it is used to label the vitreo-retinal interface.
The outer band is considered to be the RPE (retinal pigment epithelium)/Choroid complex, which consists of a portion of the photoreceptors, the RPE, Bruch's membrane, and a portion of the choroid. Within the RPE/Choroid complex, three highly reflective interfaces are identifiable, presumably corresponding to the IS/OS (photoreceptor inner/outer segment junction), the RPE, and Bruch's membrane.
Clinically, three retinal thickness measurements are generally acknowledged and utilized: RNFL thickness, GCC (ganglion cell complex) thickness, and total retinal thickness.
RNFL thickness is defined as the distance between the ILM and the outer interface of the innermost highly reflective layer, presumably the RNFL.
GCC thickness is defined as the distance between the ILM and the outer interface of the second inner highly reflective layer, presumably the outer border of the inner plexiform layer (IPL).
Total retinal thickness definition varies among OCT manufacturers. The classic definition is the distance between the ILM and the first highly reflective interface (presumably the IS-OS) in the OHRB (Total retinal thickness (ILM to IS-OS)). A second definition is the distance between the ILM and the second highly reflective interface (presumably the RPE) in the OHRB (Total retinal thickness (ILM to RPE)). A third definition is the distance between the ILM and the third highly reflective interface (presumably Bruch's membrane) in the OHRB (Total retinal thickness (ILM to BM)).
When interpreting quantitative data obtained from imaging devices, comparability may be an issue. Measurements made on devices from different manufacturers are usually not comparable, because the devices use different optics and different algorithms to make the measurements.
Currently there are multiple SD-OCT devices independently manufactured, and data comparability has become problematic. When patients change doctors or otherwise receive care from more than one provider, previously acquired data may reside on different devices and become almost useless simply because the present doctor has no access to the same device. Another problem occurs with longitudinal assessments on the same device after it has undergone an upgrade to a newer generation. In this case new baseline measurements must be obtained due to incomparability of the data (this happens even between different generations of devices from the same manufacturer). Attempts to normalize the measurements have been unsuccessful.
The manufacturer, model, serial number, and software version information are available in the Equipment Module, and are very important when assessing the comparability of quantitative data between various SOP Instances.
When textures are supported within one acquisition process, multiple Series are generated: one Series contains the Surfaces and another contains the textures. References are used to link Instances in different Series together.
Use cases: A single surface record of a patient is made, for example teeth, nose, or breast. If third party software does the post-processing, only the point cloud needs to be stored.
The Surface Scan Point Cloud instance will be used because a point cloud is stored. A study with a single series is created.
Use cases: A scanner device providing triangulated objects with textures, e.g., for documentation of burns or virtual autopsy.
The Surface Scan Mesh instance will be used because a triangulated object is stored. A study with two series will be created. One series contains a Surface Mesh instance and the other series a VL Photographic Image instance. The latter stores the texture, which is mapped on the surface mesh and is linked to the Surface Scan Mesh instance via the UV Mapping Sequence (0080,0008).
Use cases: The surface of a textured object has been modified, for example artifacts have been manually removed after the study or surgery. The new result is stored.
In the study of the original Surface Scan Point Cloud instance, a Surface Scan Mesh instance containing the modified mesh is created in its own series. The Referenced Surface Data Sequence (0080,0013) is used to reference the original instance. The mesh, as well as the point cloud, points to the texture using the Referenced Textures Sequence (0080,0012).
Use-case: Objects that need to be scanned from multiple points of view, such as the nose.
After the acquired point clouds have been merged by a post-processing software application, the calculated surface mesh is stored in the same study in a new series. The Referenced Surface Data Sequence (0080,0013) points to all of the original Surface Scan Point Cloud instances that were used for the reconstruction. The Registration Method Code Sequence (0080,0003) is used to indicate that multiple point clouds have been merged.
Use-case: In the application field of dental procedures some products support switching between two different textures for the same surface.
In this case a number of VL Photographic Image instances are stored in the same series.
The UV Mapping Sequence (0080,0008) is used to associate the VL Photographic Image instances with the Surface Scan Point Cloud instance. The Texture Label (0080,0009) is used to identify the textures of one point cloud.
Use-case: A single surface record of a patient is made, for example teeth, nose, or breast. If third party software does the post-processing, only the point cloud needs to be stored. Gray or color values can be assigned to each point in the point cloud.
The point cloud is stored in a Surface Scan Point Cloud instance. A study with a single series is created. One or both of the attributes Surface Point Presentation Value Data (0080,0006), or Surface Point Color CIELab Value Data (0080,0007) may be used to assign gray or color values to each point in the point cloud.
Use-case: To replay a sequence of multiple 3D shots of different facial expressions of a patient before facial surgeries such as facial transplantation.
A time stamp for each shot is stored in the Acquisition DateTime attribute (0008,002A).
Traditionally, images from cross-sectional modalities like CT, MR and PET have been stored with one reconstructed slice in a single frame instance. Large studies with a large number of slices potentially pose a problem for many existing implementations, both for efficient transfer from the central store to the user's desktop for viewing or analysis, and for bulk transfer between two stores (e.g., between a PACS and another archive or a regional image repository).
Transporting large numbers of slices as separate single instances (files) is potentially extremely inefficient due to the overhead associated with each transfer (such as C-STORE acknowledgment and database insertion).
Replicating the Attributes describing the entire patient/study/series/acquisition in every separate single instance is also potentially extremely inefficient, and though the size of this information is trivial by comparison with the bulk data, the effort to repeatedly parse it and sort out what it means as a whole on the receiving end is not trivial.
The Enhanced family of modality-specific multi-frame IODs is intended to address both these concerns, but there is a large installed base of older equipment that does not yet support these, both on the sending and receiving end, and a large archive of single frame instances.
An interim step, a legacy transition strategy for a mixed environment containing older and newer modalities, PACS and workstations, is described here. It is predicated on the ability to "convert" single frame instances into new "enhanced multi-frame instances".
The Enhanced family of modality-specific multi-frame IODs contain many requirements that cannot be satisfied by the limited information typically available in the older single frame objects. A family of Multi-frame Secondary Capture IODs is available, but their use would mean that a recipient could not depend on the presence of important cross-sectional information like spacing, position and orientation. Accordingly, a new family of modality-specific Legacy Converted Enhanced Image Storage IODs has been defined that bridge the gap in conversion complexity and usability between these two extremes.
Figure KKK-1 illustrates the approach to enabling a heterogeneous environment with conversion from single to multi-frame objects as appropriate. In this figure, modalities that generate single or enhanced images peacefully co-exist with PACS or workstations that support either or both.
The following use-cases are explicitly supported:
A PACS that accepts single frame images, and converts them to multi-frame images for its own internal use.
A PACS that accepts single frame images, and converts them to multi-frame images for externalization via DICOM services (Query/Retrieval) so that they can be used by external workstations (or other processing applications) that support multi-frame images.
A PACS that accepts multi-frame images from a modality, and converts them to single frame images for its own internal use.
A PACS that accepts true and/or legacy converted enhanced multi-frame images, and converts them to single frame images for externalization via DICOM services (Query/Retrieval) so that they can be used by external workstations (or other processing applications) that do not support multi-frame images.
A modality that can create true enhanced multi-frame images, as well as receive true (+/- legacy converted) enhanced multi-frame images.
Return of results from workstations in either single frame or true or legacy converted enhanced multi-frame form.
The amount of standard information is the same in single frame and transitional legacy-converted multi-frame images, but greater in the true enhanced multi-frame images, and this affects the level of functionality obtainable within the PACS or with an external workstation (without depending on private information).
Since the transitional legacy-converted and true enhanced multi-frame images share a common structure and common functional group macros, this scalability can be implemented incrementally.
It is NOT the expectation that modalities will generate Legacy Converted Enhanced Image Storage SOP Instances; rather, they should create True Enhanced Image Storage SOP Instances fully populated with the appropriate standard attributes and codes.
This strategy is compatible with an approach commonly implemented on acquisition modalities when deciding which SOP Class to use to encode images.
Normally a modality will propose in the Association that images be transferred using the SOP Class for which the IOD provides the richest set of information (i.e., the True Enhanced Image Storage SOP Class), and will choose the corresponding Abstract Syntax for C-STORE Operations if the Association Acceptor accepts multiple choices of SOP Class.
Consider a modality that supports the appropriate modality-specific Enhanced Image Storage SOP Class, but which is faced with the dilemma of a PACS that does not. In this case, it will commonly "fall back" to sending images the "old" way as single-frame SOP Class Instances, either because it has been pre-configured that way by service personnel, or because it discovers this limitation during Association Negotiation. This strategy is also common amongst modalities for which there are different choices of single frame SOP Class (e.g., DX versus CR versus Secondary Capture, for Digital X-Rays). In some cases, this may be implemented formally using the ability during Association Negotiation to specify a Related General SOP Class (Section B.4.2.1 “SCU Fall-Back Behavior” in PS3.4 ).
If the PACS is upgraded to include multi-frame conversion capability, and no change is made in the configuration of the modality, or in the SOP Classes accepted by the PACS, then in this scenario, the PACS can potentially convert the single-frame instances into Legacy Converted Enhanced instances. The net result is continuing sacrifice of information compared to what the modality is actually capable of.
A better choice, since the PACS is now capable of handling multi-frame images, is to reconfigure it to also accept the "true" Enhanced Image rather than just the "transitional" Legacy Converted Enhanced Storage SOP Classes. Since the two SOP Class families use the same structure and common important Functional Groups, in all likelihood the PACS will be able to use either class of objects, and in a future upgrade take advantage of the additional information in the superior object (perhaps for more complex processing or annotation or rendering). In any case, storing the modality's best output in the archive will benefit future re-use as priors and may enable greater functionality in external workstations.
A special consideration is when prior images need to be displayed on the modality before starting a new study (perhaps to setup a comparable protocol or better understand the request). In this case, care needs to be taken with respect to which images are accessible to the modality (either pushed to it or retrieved by it), and the question of "round trip fidelity" of conversion arises.
The coexistence (either actually or logically) of two different representations of the same information creates a potential challenge in that the user must not be presented with both sets simultaneously.
A naïve conversion that added converted images to the study without an ability to distinguish or "filter" them from view would not only be confusing but would potentially result in twice as much data to transfer.
Accordingly, the Query/Retrieve mechanism is extended with an optional extended negotiation capability to specify which "view" of the information is required by the SCU:
A "classic" view, which includes either original (as received) classic single frame images or enhanced multi-frame images converted to single frame.
An "enhanced" view, which includes either original (as received) enhanced multi-frame images, or classic single frame images converted to true or legacy converted enhanced multi-frame.
Often instances within a Study will cross-reference each other. For example, a Presentation State or a Structured Report or an RT Structure Set will reference the images to which they apply, cross-sectional images may reference localizer images, and images that were acquired with annotations may contain references to Presentation States encoding those annotations.
Accordingly, when there are multiple "views" of the same study content (classic or enhanced), the instances will have different SOP Instance and Series Instance UIDs for converted content in each view. Hence any references within an instance to a converted instance needs to be updated as well. In doing such an update of references to UIDs, instances that might not otherwise have needed to be converted do need to be converted, and so on, until the entire set of instances within the scope of the conversion for the view has referential integrity.
In practice, the only instances that do not need to be converted (and assigned new UIDs) are those that contain no references and are not classic or enhanced images to be converted.
Whether or not assignment of a converted instance to a new Series triggers the need to convert all instances in that Series to the new Series, even if they would not otherwise be converted, is not defined (i.e., it is neither required nor prohibited, and hence a Series can be "split" as a consequence of conversion).
The scope of referential integrity required is defined to be the Patient, since Instances in one Study may be referenced from another (e.g., as prior images).
The rules for conversion specify that the SOP Instance and Series Instance UIDs of converted images be changed, and that the same UIDs be used each time that a query or retrieval is performed. The strict separation of the two "views" of the same information, coupled with the "determinism" that results in the same identification and organization of each view every time, are required for stability across successive operations.
Were this not to be the case, for example, the results of a query (C-FIND) might be different from the results of a subsequent retrieval (C-MOVE or C-GET), or for that matter, successive queries. Further, references to specific instance UID in either view may be recorded in external systems (e.g., in an EMR), hence it is important that these remain stable and accessible.
This places a burden on the Q/R SCP to either retain a record of the mapping of UIDs from one view to the other, or to use some deterministic process that results in the same UIDs (one could envisage some hashing scheme, for instance). How this is implemented is beyond the scope of the standard to define. The determinism requirement does not remove the uniqueness requirement; in particular it is not appropriate to attempt to derive new UIDs by adding a suffix to a UID generated by a different application, for example.
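As an illustration only, one such deterministic scheme might hash the source UID into a new UID under the converting system's own root. The following Python sketch uses assumed values throughout; it is not a normative algorithm.

import hashlib

PRIVATE_ROOT = "1.2.999.1"   # hypothetical UID root owned by the converting system

def converted_uid(source_uid: str, view: str) -> str:
    # The same (view, source UID) pair always yields the same UID, giving
    # determinism; the result lives under the converter's own root rather
    # than being a suffixed copy of another application's UID.
    digest = hashlib.sha256((view + "|" + source_uid).encode()).hexdigest()
    component = str(int(digest, 16))[:64 - len(PRIVATE_ROOT) - 1]   # keep total length <= 64
    return PRIVATE_ROOT + "." + component

# Example: the ENHANCED-view UID derived for a classic instance
print(converted_uid("1.2.3.1.1.1", "ENHANCED"))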
There is no time limit placed on the determinism; it is expected to be indefinite, at least within the control of the system. This is a factor that should be taken into account both in the design of federated Q/R SCPs that may integrate subsidiary SCPs that support this mechanism, and during migration to a new Q/R SCP, which ideally should support the mechanism and provide the same mapping from one view to another as the Q/R SCP being migrated. This may be non-trivial, since the algorithm for conversion may be different between the two systems. It may be necessary to define some persistent, standard, serialized mapping of one set of UIDs to the other.
It is also useful to save references in converted SOP instances to their source. Accordingly, converted instances are required to contain such references, both for image conversions as well as for ancillary instances that may be updated, such as Presentation States and Structured Reports.
Obviously, the references to the source instances for the conversion are excluded from conversion themselves. If the instances have been converted on different systems, however, there is a possibility that the source references will be "replaced" and a record of the "chain" of multiple conversions will not be persisted.
There is no mechanism to define forward references in the source to the converted instances, since that would imply changing the source instances from their original form, and while this is acceptable within the scope of the normal "coercion" that a Storage SCP is permitted to perform, it is probably not sufficiently useful to justify the effort. This does imply some asymmetry however, depending on the direction of conversion (classic to enhanced or vice versa); only one set will contain the references.
In performing round trip conversion, without access to the source instances, the referenced source UIDs can be used as the UIDs for the newly created converted instances.
When does a converted view come into existence? By definition, when it is "observed". However, a practical question is when to start conversion. A Study is never, theoretically, complete, yet the semantics for conversion and consistency are defined at the Study level.
Another practical question is whether or not to make the received instances available, even though the converted ones may not yet have been created.
In the absence of the concept of "study completion" in DICOM, no firm rules can be defined. However, in practice, most systems have an internal "completion" concept, which may or may not be related to the completion of the Performed Procedure Steps that are related to the sets of instances in question, or may be established through some other mechanism, such as operator intervention, possibly via a RIS message (e.g., after QC checks are signed off as complete, or after a Study has been declared as "ready to read").
A system may elect to "dynamically" begin conversion as instances arrive and update the information in the conversion as new instances are encountered, or it may wait until some state is established that allows it to perform the conversion "statically". In either case, the information in the converted view via the query/retrieval mechanisms should be immutable once made available. I.e., once a conversion has been "distributed", it would be desirable for the system to block subsequent changes to the Study, except to the extent that there is a need for correction and management of errors (in which case mechanisms such as IHE Image Object Change Management (IOCM) may be appropriate).
In this example, two consecutive transverse CT slices encoded as CT Image SOP Class Instances are shown, with a Grayscale Softcopy Presentation State reference to one of them, compared to the converted Legacy Converted Enhanced CT Image SOP Class Instance and a revised Grayscale Softcopy Presentation State that applies to it.
This Annex contains examples of query and retrieval when the images are supplied in one form, and both forms are accessible via the two alternative CLASSIC and ENHANCED views.
Baseline (non-extended negotiation) behavior is not illustrated, since the instances were supplied to the SCP in their Classic form, and hence the responses would be identical to those illustrated for the CLASSIC view, except for the presence of, or value returned in, Query/Retrieve View (0008,0053).
This example presumes that the Q/R SCP contains the same images and presentation states described in Annex LLL.
Study Root Study Level C-FIND Request with Patient ID and Accession Number as keys:
Study Root Study Level C-FIND Response:
Study Root Study Level C-MOVE Request with Study Instance UID as unique key:
Study Root Study Level C-MOVE Pending Responses illustrating SOP Instances retrieved:
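As a minimal illustration, the following sketch (assuming the pydicom library) shows how the C-FIND identifier used above might be constructed; all values are placeholders.

from pydicom.dataset import Dataset

ds = Dataset()
ds.QueryRetrieveLevel = "STUDY"
ds.PatientID = "12345"             # matching key
ds.AccessionNumber = "A10003245"   # matching key
ds.StudyInstanceUID = ""           # return key
ds.QueryRetrieveView = "CLASSIC"   # (0008,0053); ENHANCED selects the converted view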
Study Root Study Level C-FIND Request with Patient ID and Accession Number as keys:
This is exactly the same as for the CLASSIC view, except that Query/Retrieve View (0008,0053) has a value of ENHANCED rather than CLASSIC.
Study Root Study Level C-FIND Response:
This is the same as for the CLASSIC view, except that Query/Retrieve View (0008,0053) has a value of ENHANCED rather than CLASSIC, SOP Classes in Study (0008,0062) has a different value for the Image Storage SOP Class, and the Number of Study Related Instances (0020,1208) is smaller.
Study Root Study Level C-MOVE Request with Study Instance UID as unique key:
This is exactly the same as for the CLASSIC view, except that Query/Retrieve View (0008,0053) has a value of ENHANCED rather than CLASSIC. In particular, the same Study Instance UID is retrieved.
Study Root Study Level C-MOVE Pending Responses illustrating SOP Instances retrieved:
Several ophthalmic devices produce curvature and/or elevation measurements of corneal anterior and posterior surfaces (e.g., maps that display corneal curvatures, corneal elevations, and corneal power). The principal methods used include reflection of light from the corneal surface (e.g., Placido ring topography) and multiple optical sectioning or slit beam imaging (e.g., Scheimpflug tomography). The measurements are mapped topographically as pseudo-color maps, and used extensively for diagnostic purposes by clinicians and to fit contact lenses in difficult cases. The underlying data from these measurements is also used to guide laser sculpting in keratorefractive surgery.
The method for presenting corneal topography maps as pseudo-colored images has been studied extensively. Contour maps are effective for diagnostic purposes. Proper scaling is important so that clinically important detail is not obscured and irrelevant detail is masked. This can be done with a scale that has fixed dioptric intervals. The choice of color palette to represent different levels of corneal power is equally important: there must be enough contrast between adjacent contour colors to enable pattern recognition, since it is the corneal topography pattern that is used for clinical interpretation. A color palette can be chosen so that lower corneal powers are represented with cooler colors (blue shades), while higher corneal powers are represented with warmer colors (red shades). Green shades are used to represent corneal powers associated with normal corneas. The standard scale is shown in Figure NNN.2-1.
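As a rough illustration (not part of the Standard), the following Python sketch builds such a fixed-interval dioptric scale with cooler-to-warmer colors using matplotlib; the interval width, bounds and palette are illustrative assumptions, not the values defined by Figure NNN.2-1.

```python
# Fixed-interval pseudo-color scale sketch, assuming NumPy and matplotlib.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap, BoundaryNorm

# Hypothetical 1.5 D contour intervals spanning typical corneal powers.
bounds = np.arange(35.0, 51.5, 1.5)                       # dioptric contour edges
colors = plt.cm.jet(np.linspace(0, 1, len(bounds) - 1))   # blue -> green -> red
cmap = ListedColormap(colors)
norm = BoundaryNorm(bounds, cmap.N)

power = 38.0 + 8.0 * np.random.rand(256, 256)  # hypothetical power map (D)
plt.imshow(power, cmap=cmap, norm=norm)
plt.colorbar(label="Corneal power (D)")
plt.show()
```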
Quantitative measurements of anterior corneal surface curvature (corneal topography) are made with the Placido ring approach. Patterns on an illuminated target take the form of mires or a grid pattern. Their reflection from the anterior corneal surface tear film, shown in Figure NNN.3-1, is captured with a video camera. Their positions relative to the instrument axis are determined through image analysis and these data are used to calculate anterior corneal curvature distribution.
Corneal curvature calculations are accomplished with three different methods that provide corneal powers. The axial power map, shown in Figure NNN.3-2, is most useful clinically for routine diagnostic use, as the method of calculation presents corneal topography maps that match the transitions known for corneal shape: the cornea is relatively steep in its central area, flattening toward the periphery. This figure shows an example where the map is superimposed over the source image based upon the corneal vertex Frame of Reference. The Blending Presentation State SOP Class may be used to specify this superimposed processing.
The instantaneous power map, shown in Figure NNN.3-3, reveals more detail for corneas that have marked changes in curvature as with the transition zone that rings the intended optical zone of a refractive surgical procedure.
The refractive power map, shown in Figure NNN.3-4, uses Snell's Law of refraction to calculate corneal power to reveal, for example, uncompensated spherical aberration.
The height map, shown in Figure NNN.3-5, displays the height of the cornea relative to a sphere or ellipsoid.
Knowledge of the anterior corneal shape is helpful in the fitting of contact lenses particularly in corneas that are misshapen by trauma, surgery, or disease. A contact lens base curve inventory or user design criteria are provided and these are used to evaluate contact lens fit and wear tolerance using a simulated clinical fluorescein test, shown in Figure NNN.4-1. The fluorescein pattern shows the contact lens clearance over the cornea. Numbers indicate local clearance in micrometers.
Ocular wavefront measurement produces a measure of the optical path difference (OPD) between an ideal optical system and the one being measured. Typically the OPD is measured and displayed in units of microns. Wavefront maps can be produced from the corneal surfaces, most often the front surface, since this is the major refracting surface of the eye, accounting for about 80% of the ocular power.
Wavefront maps can be calculated directly from corneal elevation data most often using the Zernike polynomial fitting series. With this method, corneal optical characteristics such as astigmatism, spherical aberration, and coma can be calculated. Generally, the lower order (LO) aberrations (offsets, refractive error and prism) are eliminated from display, so that only the higher order (HO) aberrations remain, shown in Figure NNN.5-1.
Figure NNN.5-1. Corneal Axial Topography Map of keratoconus (left) with its Wavefront Map showing higher order (HO) aberrations (right)
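To make the LO/HO split described above concrete, the following NumPy sketch (not part of the Standard) fits a small set of Zernike terms to sampled OPD values by least squares and zeroes the lower-order coefficients so that only higher-order aberrations remain. The term selection and the sample data are illustrative assumptions.

```python
# Zernike least-squares fit sketch, assuming NumPy; unnormalized terms.
import numpy as np

def zernike_basis(rho, theta):
    """Columns: piston, tilts, defocus, astigmatism (LO); coma, spherical (HO)."""
    return np.column_stack([
        np.ones_like(rho),                       # piston (LO)
        rho * np.cos(theta),                     # tilt x (LO)
        rho * np.sin(theta),                     # tilt y (LO)
        2 * rho**2 - 1,                          # defocus (LO)
        rho**2 * np.cos(2 * theta),              # astigmatism 0 deg (LO)
        rho**2 * np.sin(2 * theta),              # astigmatism 45 deg (LO)
        (3 * rho**3 - 2 * rho) * np.cos(theta),  # coma x (HO)
        (3 * rho**3 - 2 * rho) * np.sin(theta),  # coma y (HO)
        6 * rho**4 - 6 * rho**2 + 1,             # spherical aberration (HO)
    ])

rho = np.sqrt(np.random.rand(2000))           # hypothetical pupil samples
theta = 2 * np.pi * np.random.rand(2000)
opd = 0.4 * (6 * rho**4 - 6 * rho**2 + 1)     # toy wavefront in microns

A = zernike_basis(rho, theta)
coeffs, *_ = np.linalg.lstsq(A, opd, rcond=None)
coeffs[:6] = 0.0                              # drop lower-order terms
opd_ho = A @ coeffs                           # higher-order aberrations only
```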
This Annex describes the use of the Radiopharmaceutical Radiation Dose (RRD) object. PET, Nuclear Medicine and other non-imaging procedures necessitate that radiopharmaceuticals be administered to patients. The RRD records the amount of activity and estimates of patient dose. Radiopharmaceuticals are often administered to patients several minutes before the imaging step begins. A dose management system records the amount of activity administered to the patients. Currently these systems can be configured to receive patient information from HIS/RIS systems via HL7 or DICOM messaging. Figure OOO-1 demonstrates a workflow for a "typical" Nuclear Medicine or PET department.
Figure OOO-2 demonstrates a Hot Lab management system as the RRD creator. It records the activity amount and the administration time. It creates the RRD report and sends it to the modality. Consistent time is required to accurately communicate activity amount. The consistent time region highlights systems and steps where accurate time reporting is essential. A DICOM Store moves the report to the modality.
Figure OOO-3 demonstrates RRD workflow where a radiopharmaceutical is administered to a patient for a non-imaging procedure. The report is sent to the image manager/image archive for storage and reporting.
Figure OOO-4 demonstrates when an infusion system or a radioisotope generator is the RRD creator.
Figure OOO-5 is a UML sequence diagram to illustrate steps for creation and downstream use case for Radiopharmaceutical Radiation Dose report and CT dose report for the PET-CT system. The RRD is stored to an image archive and retrieved by the PET-CT scanner.
Figure OOO-6 is a UML sequence diagram to illustrate steps for creation and downstream use for radiopharmaceutical that is administered when the modality starts acquisition. The diagram illustrates that the dose report is reconciled with the image at later time by an image processing step.
Figure OOO-6. UML Sequence Diagram for when Radiopharmaceutical and the Modality are Started at the Same Time
The Radiopharmaceutical Radiation Dose (RRD) template provides a means to report the radiopharmaceutical identification number and the identification numbers of its components.
A typical use case is when a radio-pharmacist elutes a radionuclide from a generator into a vial. The radionuclide elution is given an identification number (Radionuclide Vial Identifier). The pharmacist then draws some radionuclide from the vial to compound with a reagent (Reagent Vial Identifier), creating a multidose vial of a radiopharmaceutical. The multidose vial is given an identification number (Radiopharmaceutical Lot Identifier). Individual doses are drawn from the multidose vial for administration to patients. Each of the doses is given an identification number (Radiopharmaceutical Identifier).
A second use case is when a patient is prescribed 2 MBq of an oral radiopharmaceutical. The radio-pharmacist dispenses two 1 MBq capsules. Each capsule may have a different lot number (Radiopharmaceutical Lot Identifier). The two capsules are administered at the same time as one dose (Radiopharmaceutical Identifier). The report may contain two Radiopharmaceutical Lot Identifiers, one for each capsule, and one Radiopharmaceutical Identifier for the dose.
Figure OOO-7 is a diagram that displays the hierarchical relationship between the radiopharmaceutical dispense unit identifier, radiopharmaceutical lot identifier, reagent vial identifier and the radionuclide vial identifier.
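Purely as an informal illustration of this containment (the class names below are hypothetical and are not defined by the template), a Python sketch might model the hierarchy as follows.

```python
# Identifier hierarchy sketch; dataclass names are illustrative only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class RadionuclideVial:              # elution from the generator
    radionuclide_vial_identifier: str

@dataclass
class ReagentVial:
    reagent_vial_identifier: str

@dataclass
class RadiopharmaceuticalLot:        # multidose vial compounded from both
    radiopharmaceutical_lot_identifier: str
    radionuclide_vial: RadionuclideVial
    reagent_vial: ReagentVial

@dataclass
class DispensedDose:                 # one administered dose
    radiopharmaceutical_identifier: str
    # two capsules administered together -> two lots, one dose
    lots: List[RadiopharmaceuticalLot] = field(default_factory=list)
```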
The Display System SCU and the Display System SCP are peer application entities for the DICOM communication of display parameters. The application entity of the Display System SCP supports one or more display subsystems.
The Display System SCU and the SCP establish an association by using the association services of the OSI upper layer service.
While the association is being established, each application entity negotiates the supported SOP Classes.
This section provides an example of message sequencing when using the Display System SOP Class. It is not intended to provide an exhaustive set of use cases but rather an informative example. There are other valid message sequences that could be used to obtain an equivalent outcome.
A typical Display System is shown in Figure PPP.3.1-1.
The following is an example of an N-GET Request/Response pair for the Display System SOP Class.
This example is encoded with Undefined Sequence Length and Undefined Item Length, so it contains Sequence Delimitation Items and Item Delimitation Items.
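As an informal aside (not part of the example), such an N-GET could be issued from a test SCU with the pynetdicom toolkit roughly as sketched below. The SCP address and the requested attribute tags are hypothetical choices; the well-known Display System SOP Instance UID is used as the target.

```python
# N-GET sketch for the Display System SOP Class, assuming pynetdicom.
from pynetdicom import AE

DISPLAY_SYSTEM_SOP_CLASS = "1.2.840.10008.5.1.1.40"
WELL_KNOWN_INSTANCE = "1.2.840.10008.5.1.1.40.1"

ae = AE(ae_title="DISPLAY_SCU")
ae.add_requested_context(DISPLAY_SYSTEM_SOP_CLASS)

assoc = ae.associate("display-scp.example.org", 104)  # hypothetical SCP
if assoc.is_established:
    # Illustrative Attribute Identifier List, e.g., Manufacturer
    # (0008,0070) and Device Serial Number (0018,1000).
    status, attr_list = assoc.send_n_get(
        [0x00080070, 0x00181000],
        DISPLAY_SYSTEM_SOP_CLASS,
        WELL_KNOWN_INSTANCE,
    )
    if status and status.Status == 0x0000 and attr_list is not None:
        print(attr_list)
    assoc.release()
```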
Table PPP.3.1-1. N-GET Request/Response Example
[The full encoding of Table PPP.3.1-1 is not reproduced here. The surviving fragments show the Item and Sequence Delimiters for the Person Identification Code Sequence (nested within the Equipment Administrator Sequence), the Display Device Type Code Sequence, the Display Subsystem Configuration Sequence, the Measurement Equipment Sequence, and the three items of the Target Luminance Characteristics Sequence, together with a reference to Table PPP.3.1-2.]
This example is encoded with Undefined Sequence Length and Undefined Item Length, so it contains Sequence Delimitation Items and Item Delimitation Items.
Table PPP.3.1-2. Example of N-GET Request/Response for QA Result Module
A Tablet Display System is shown in Figure PPP.3.2-1.
The following is an example of an N-GET Request/Response pair for the Display System SOP Class.
This example is encoded with Undefined Sequence Length and Undefined Item Length, so it contains Sequence Delimitation Items and Item Delimitation Items.
Table PPP.3.2-1. N-GET Request/Response Example
This Annex contains examples of the use of the Parametric Map IOD.
This Section contains an example of the use of the Parametric Map IOD to encode Ktrans for a Dynamic Contrast Enhanced (DCE) MR.
The frames comprise a single traversal of a regularly sampled 3D volume, described as a single stack and a single quantity, with dimensions of Stack ID, In-Stack Position Number and Quantity. A reference is also provided to the (single entire multi-frame) MR image from which the parametric map was derived. Only the Frame Content Sequence and Plane Position Sequence vary per-frame; all other functional groups are shared in this example.
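A minimal pydicom sketch of this shared versus per-frame split follows (not part of the Standard); the attribute values are illustrative, and most shared functional groups are elided.

```python
# Shared vs. per-frame functional groups sketch, assuming pydicom.
from pydicom.dataset import Dataset
from pydicom.sequence import Sequence

shared = Dataset()
pms = Dataset()
pms.PixelSpacing = [0.5, 0.5]        # hypothetical values
pms.SliceThickness = 3.0
shared.PixelMeasuresSequence = Sequence([pms])
# ... other shared groups (orientation, real world value mapping, etc.)

per_frame = []
for i in range(4):                   # hypothetical 4-slice volume
    fg = Dataset()
    fc = Dataset()
    fc.StackID = "1"
    fc.InStackPositionNumber = i + 1
    # Dimensions: Stack ID, In-Stack Position Number, Quantity
    fc.DimensionIndexValues = [1, i + 1, 1]
    fg.FrameContentSequence = Sequence([fc])
    pp = Dataset()
    pp.ImagePositionPatient = [0.0, 0.0, 3.0 * i]
    fg.PlanePositionSequence = Sequence([pp])
    per_frame.append(fg)

ds = Dataset()
ds.SharedFunctionalGroupsSequence = Sequence([shared])
ds.PerFrameFunctionalGroupsSequence = Sequence(per_frame)
```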
This Annex contains examples of the use of ROI templates within Measurement Report SR Documents.
This CT example describes the minimum content necessary to encode a single measurement (volume) made from a single volumetric ROI encoded as a single segment that spans two source CT images.
This CT example describes a set of measurements (volume, long axis and mean attenuation coefficient) made from a single volumetric ROI encoded as a single segment that spans two source CT images, and includes a description of the measurement methods and the finding site, as well as an image library to describe characteristics of the images used, and categorical observations at the measurement group and entire subject level.
For a different modality than CT, the choice of measurement for the mean intensity would not be (122713, DCM, "Attenuation Coefficient").
For MR one might use (110852, DCM, "MR signal intensity"), or (110804, DCM, "T1 Weighted MR Signal Intensity"), etc. See also CID 7180 “Abstract Multi-dimensional Image Model Component Semantics” for various appropriate signal intensity types for MR and other modalities.
For PET one might use (110821, DCM, "Nuclear Medicine Tomographic Activity"), in which case the specific type of signal would be apparent from the units, e.g., ({SUVbw}g/ml, UCUM, "Standardized Uptake Value body weight") or for activity-concentration, (Bq/ml, UCUM, "Becquerels/milliliter"). See also CID 84 “PET Units”.
Care should be taken when selecting modifiers such as (G-C036, SRT, "Measurement Method") versus (121401, DCM, "Derivation").
The finding site and laterality within the measurement template (TID 1419 “ROI Measurements”) are factored out and shared by both measurements.
The pattern used for the image library follows TID 4020 “CAD Image Library Entry”, though common content may be factored out.
This DCE-MR example illustrates encoding measurements of mean and standard deviation Ktrans values in a planar ROI.
The measurement method and finding site and laterality within the measurement template (TID 1419) are factored out and shared by both measurements.
This FDG PET example illustrates encoding measurements of various SUVbw related measurements.
The real world value map reference (for intensity, not size measurements) and finding site within the measurement template (TID 1419) are factored out and shared by measurements.
The time point is described in this case only with a simple label.
This Annex contains examples of the use of Image Library templates within SR Documents.
This PET-CT example illustrates an Image Library in which attributes of images for two modalities are described, with common attributes factored out of the individual image references.
Only the attributes of relevance to SUV and spatial measurements are included, not a complete description of all aspects of acquisition.
Only two images for each modality are described, rather than all slices acquired, since it is usually only necessary to describe images that are referenced elsewhere in the SR content tree, e.g., on which a region of interest is specified from which measurements are made.
This chapter describes the general concepts of X-Ray 3D Angiography: the acquisition of the projection images, the 3D reconstruction, and the encoding of the X-Ray 3D Angiographic Image SOP Instances. These concepts provide a better understanding of the different application cases in the rest of this Annex.
Two main steps are involved in the process of creating an X-Ray 3D Angiographic Instance: The acquisition of 2D projections and the 3D reconstruction of the volume.
The X-Ray equipment acquires 2D projections at different angles. The Acquisition Context describes the technical parameters of a set of 2D projection acquisitions that are used to perform a 3D reconstruction. In the scope of the X-Ray 3D Angiographic SOP Class, all the projections of an Acquisition Context share common parameter values, such as:
If the value of one of these common parameters changes during the acquisition of the projections, then more than one Acquisition Context is defined.
Typically the projections of an Acquisition Context are the result of a rotational acquisition where the X-Ray positioner follows a circular trajectory. However, it is possible to define an Acquisition Context as the set of multiple projections at different X-Ray incidences without a particular spatial trajectory.
An Acquisition Context is characterized by a period of time in which all the projections are acquired. Some other parameters are used to describe the Acquisition Context: start and end DateTime, average exposure techniques (mA, kVp, exposure duration, etc.), positioner start, end and increment angles.
Additionally, other technical parameters that change at each projection can be documented in the X-Ray 3D Angiographic SOP Class on a per-projection basis:
The 3D Reconstruction Application performing the 3D Reconstruction can be located in the same X-Ray equipment or in another workstation.
A 3D Reconstruction in the scope of the X-Ray 3D Angiographic SOP Class is the creation of one X-Ray 3D Angiographic volume from a set of projections from one or more Acquisition Context(s). Therefore, one 3D Reconstruction in this scope refers to the resulting volume, and not to the application logic used to process the projections. This application logic is out of the scope of this SOP Class; the same encoding will result whether several 3D Reconstructions are performed in a single application step or in multiple application steps to create several volumes (e.g., low and high resolution volumes) from the same set of projections.
One 3D Reconstruction is characterized by some parameters like name, version, manufacturer, description and the type of algorithm used to process the projections.
The 3D Reconstruction can use one or more Acquisition Contexts to generate one single X-Ray 3D Angiographic Volume. Several 3D Reconstructions can be encoded in one single X-Ray 3D Angiographic Instance.
This section describes the relationships between the real world entities involved in X-Ray 3D Angiography.
The X-Ray equipment creates one or more acquisition contexts (i.e., one or more rotational acquisitions with different technical parameters). The projections can be kept internal to the equipment (i.e., not exported outside the equipment) or can be encoded as DICOM instances. In the scope of the X-Ray 3D Angiographic SOP Class, the projections can be encoded either as X-Ray Angiography SOP Class or Enhanced XA SOP Class.
If the projections are encoded as DICOM Instances, they can be referenced in the X-Ray 3D Angiographic image as Contributing Sources. Each Acquisition Context refers to all the DICOM instances involved in that context. If the projections are kept internal to the equipment, the X-Ray 3D Angiographic image can still describe the technical parameters of each acquisition context without referencing any DICOM instance.
The 3D Reconstruction Application creates one or more 3D Reconstructions, each 3D Reconstruction uses one or more Acquisition Contexts. One or more 3D Reconstructions can be encoded in one single X-Ray 3D Angiographic Instance.
As with other 3D modalities such as CT or MR, the X-Ray 3D Angiographic image is generated from original source data (i.e., the original projections), which can be kept internal to the equipment. In this sense, the 3D data set resulting from the reconstruction of the original projections is considered original (i.e., Value 1 of the Attributes Image Type (0008,0008) and Frame Type (0008,9007) equals ORIGINAL).
Note that the original 2D projections can be stored as DICOM instances, and the X-Ray 3D Angiographic image can be created by a later reconstruction on different equipment. In this case, since the source data is the same original set of projections, the 3D data set is still considered original.
This chapter describes different scenarios and application cases where the 3D volume is reconstructed from rotational angiography. Each application case is structured in four sections:
User Scenario : Describes the user needs in a specific clinical context, and/or a particular system configuration and equipment type.
Encoding Outline : Describes the X-Ray 3D Angiographic Image SOP Class related to this scenario, and highlights key aspects.
Encoding Details : Provides detailed recommendations of the key attributes of the Image IOD(s) to address this particular scenario. The tables are similar to the IOD tables of the PS3.3. Only attributes with a specific recommendation in this particular scenario have been included.
Example : Presents a typical example of the scenario, with realistic sample values, and gives details of the encoding of the key attributes of the Image IOD(s) to address this particular scenario. In the values of the attributes, the text in bold face indicates specific attribute values; the text in italic face gives an indication of the expected value content.
The first application case describes the most general reconstruction scenario, and can be considered as a baseline. The subsequent application cases describe only the specifics of each new scenario relative to the baseline.
This application case is related to the most general reconstruction of a 3D volume directly from all the frames of a rotational 2D projection acquisition.
The image acquisition system performs a rotational acquisition around the patient and a volume is reconstructed from the acquired data (e.g., through a "back-projection" algorithm). The reconstruction can occur either on the same system (e.g., Acquisition Modality) or on a secondary processing system (e.g., Co-Workstation).
The reconstructed volume needs to be encoded and saved for interchange with a 3D rendering application or other equipment involved in an interventional procedure.
This is the basic use case of X-Ray 3D Angiographic image encoding.
The rotational acquisition can be encoded either as a multi-frame XA Image with limited frame-specific attributes, or as an Enhanced XA Image with frame-specific attributes encoded that support the algorithms used to reconstruct a volume data set.
The volume data set is encoded as an X-Ray 3D Angiographic instance. The volume data set typically spans the complete region of the projected matrix size (in number of rows and columns).
All the projections of the original XA instance or Enhanced XA instance are used to reconstruct the volume.
The X-Ray 3D Angiographic instance references the original XA instance or Enhanced XA instance and uses attributes to define the context on how the original 2D image frames are used to create the volume.
These modules encode the Series relationship of the created volume.
Table TTT.2.1-1. General and Enhanced Series Modules Recommendations
This module encodes the identifier of the spatial relationship base of this volume. If the originating 2D images do not deliver a value, a new one has to be created for the reconstructed volume.
Table TTT.2.1-2. Frame of Reference Module Recommendations
This module encodes the equipment identification information of the system that reconstructed the volume data. Since the reconstruction is not necessarily performed by the same system that acquired the projections, the identification of the Equipment performing the reconstruction is recommended. Furthermore the Contributing Equipment Sequence (0018,A001) of the SOP Common Module is recommended to be used to preserve the identification of the system that created the projection image that was base for the reconstruction.
This module encodes the actual pixels of the volume slices. Each slice is encoded as one frame of the X-Ray 3D Angiographic instance. The order of the frames encoded in the pixel data is aligned with the Image Position (Patient) Attribute. The order of frames is optimal for simple 2D viewing if the x-, y- and z-values steadily increase or decrease.
This module encodes the contrast media applied. The minimum information that needs to be provided is related to the contrast agent and the administration route. In the reconstructed image, the contrast information comes either from the acquisition system in case of direct reconstruction without source DICOM instances, or from the projection images in case of reconstruction from source DICOM instances.
Table TTT.2.1-3. Enhanced Contrast/Bolus Module Recommendations
If the source instance is encoded as an Enhanced XA instance, the Enhanced Contrast/Bolus Module is specified in that IOD, and those values are copied from the source instance.
If the source instance is encoded as an XA Image, only the Contrast/Bolus Module is specified in that IOD. Although acquisition devices are encouraged to provide details of the contrast, most of the relevant attributes are type 3, so it is possible that if contrast was applied, the only indication will be the presence of Contrast/Bolus Agent (0018,0010) since that attribute is type 2. In that case, if the application is unable to get more specific information from the operator, it may populate the contrast details with the generic (C-B0300, SRT, "Contrast agent") code for contrast agent and the (R-41198, SRT, "Unknown") code for the administration route.
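A minimal pydicom sketch of this fallback follows (the helper function is hypothetical); it populates the generic agent and unknown-route codes named above.

```python
# Generic contrast fallback sketch, assuming pydicom.
from pydicom.dataset import Dataset
from pydicom.sequence import Sequence

def code(value, scheme, meaning):
    """Build a single code sequence item (hypothetical helper)."""
    item = Dataset()
    item.CodeValue = value
    item.CodingSchemeDesignator = scheme
    item.CodeMeaning = meaning
    return item

ds = Dataset()
agent = code("C-B0300", "SRT", "Contrast agent")       # generic agent code
agent.ContrastBolusAdministrationRouteSequence = Sequence(
    [code("R-41198", "SRT", "Unknown")]                # route unknown
)
ds.ContrastBolusAgentSequence = Sequence([agent])
```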
This module encodes a (default) presentation order of the image frames.
Table TTT.2.1-4. Multi-frame Dimensions Module Recommendations
This module encodes the orientation of the Patient for later use with the same or other equipment. The related coded terms can be derived from the Patient Position (0018,5100) according to the following table, where PO denotes the Patient Orientation coded term:
Table TTT.2.1-5. Patient Position to Orientation Conversion Recommendations
[The full mapping of Table TTT.2.1-5 is not reproduced here; in each surviving row the Patient Orientation (PO) is (F-10450, SRT, "recumbent").]
This module encodes the specific content of the reconstructed volume.
Table TTT.2.1-6. X-Ray 3D Image Module Recommendations
This module encodes the source SOP instances used to create the X-Ray 3D Angiographic instance.
This module encodes the important technical and physical parameters of the source SOP instances used to create the X-Ray 3D Angiographic Image instance.
The contents of the Enhanced XA Image IOD and XA Image IOD are significantly different. Therefore the contents of the X-Ray 3D Acquisition Sequence will vary depending on availability of encoded data in the source instance.
The content of the X-Ray 3D General Positioner Movement Macro provides a general overview of the Positioner data. In case a system does not support the Isocenter Reference System, it may still be advantageous to provide the patient-based Positioner Primary and Secondary Angles in the Per Projection Acquisition Sequence (0018,9538).
The contents of the Per Projection Acquisition Sequence (0018,9538) need to be carefully aligned with the list of frame numbers in the Referenced Frame Numbers (0008,1160) attribute in the Source Image Sequence (0008,2112).
This module encodes the detailed size of the volume element (Pixel Spacing for row/column dimension of each slice, and Slice Thickness for the distance between slices). It depends on the reconstruction algorithm and is not necessarily identical to the related sizes in the projection images.
For a single volume this macro is encoded "shared" as all the slices will have the same Pixel Spacing and Slice Thickness.
This module encodes the timing information of the frames, as well as dimension and stack index values.
In the reconstruction from rotational projections, Figure C.7.6.16-2 of Section C.7.6.16.2.2.1 “Timing Parameter Relationships” in PS3.3 should be interpreted carefully. All the frames forming one X-Ray 3D Angiographic volume have been reconstructed simultaneously; therefore all of them have the same time reference and the same acquisition duration.
The projections have been acquired over a period of time, all of them contributing to each 3D frame. Therefore, it is recommended to encode the 3D frame acquisition duration as the elapsed time from the first to the last projection frame time that contributed to that volume.
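A minimal sketch of this recommendation follows, with hypothetical first and last projection frame times for a 5-second rotational run.

```python
# Deriving the shared frame acquisition duration from projection times.
from datetime import datetime

first_projection = datetime(2024, 5, 1, 12, 0, 0)
last_projection = datetime(2024, 5, 1, 12, 0, 5)   # 5 s rotational run

# Frame Acquisition Duration (0018,9220) is expressed in milliseconds.
frame_acquisition_duration = (
    (last_projection - first_projection).total_seconds() * 1000.0
)  # -> 5000.0, shared by every frame of the reconstructed volume
```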
Table TTT.2.1-9. Frame Content Macro Recommendations
The volume is directly reconstructed from the original set of projections and therefore not "derived" in this sense. Thus this macro is not applicable in this scenario as the contents of the Contributing Sources Sequence (0018,9506) and the X-Ray 3D Acquisition Sequence (0018,9507) are sufficient to describe the relationship to the originating image.
This macro encodes the anatomical context. It can be important to parameterize the presentation of the volumes. For a single volume this macro is encoded "shared". Typically the anatomy of the volume is only available if the information is already provided within the originating projection image, either by detection algorithm or by user input.
This macro encodes the general characteristics of the volume slices like color information for presentation, volumetric properties for geometrical manipulations etc. In case of a single volume, this macro is encoded "shared" as each slice of the volume has identical characteristics. If multiple volumes are encoded in a single instance, this macro may be encoded "per frame".
This basic example is the reconstruction of a volume by a back-projection from all frames of a rotational acquisition which have been encoded as an Enhanced XA Instance. The rotational acquisition takes 5 seconds to acquire all the projections.
The dimension organization is based on the spatial position of the 3D frames. The frames are to be displayed in the same order as stored.
The UIDs of this example correspond to the diagram shown in Figure TTT.2.1-1
This application case is related to a reconstruction from a sub-set of projection frames.
The image acquisition system performs one rotational acquisition. Not all of the acquired frames, but every Nth frame is used to reconstruct the volume, e.g., to speed-up the reconstruction.
Only selected frames of the original XA instance or Enhanced XA instance are used to reconstruct the volume.
The X-Ray 3D instance references the original XA instance or Enhanced XA instance and uses attributes to define the context on how and which of the original image frames are used to create the volume.
This module encodes the important technical and physical parameters of the source SOP instances and the frames used to create the X-Ray 3D Angiographic instance.
Table TTT.2.2-1. X-Ray 3D Angiographic Acquisition Module Recommendations
This application case is related to a regular reconstruction of the full field of view of a rotational acquisition followed by a specific reconstruction of a sub-region that contains an object of interest (e.g., interventional device implanted, stent, coils etc.).
The image acquisition system performs one rotational acquisition after the intervention, on the region of the patient where an implant has been placed.
Two 3D volumes are reconstructed: one of the full field of view of the projection images, and one of a sub-region of each of the acquired frames, e.g., to extract the object of interest into a smaller volume data set. The second reconstruction is likely performed at higher resolution and likely applies different 3D reconstruction techniques, for instance to highlight the material of the implant. The purpose is to overlap the two volumes and enhance the visibility of the object of interest over the full field volume.
The rotational acquisition can either be encoded as XA Image or as Enhanced XA Image.
Each reconstruction is encoded in a different X-Ray 3D Angiographic instance.
Not all parts of each frame of the original XA instance or Enhanced XA instance are used to reconstruct the second volume.
The X-Ray 3D instance references the original XA instance or Enhanced XA instance and uses attributes to define the context on how and which part of the original image frames are used to create the Volume.
Since the two volumes are reconstructed from the same projections, the reconstruction application will use the same patient coordinate system on both volumes so that the spatial location of the object of interest in both volumes will be the same. Therefore the two X-Ray 3D Instances will have the same Frame of Reference (FoR) UID. If the originating 2D Instances do not deliver a value of FoR UID, a new FoR UID has to be created for the reconstructed volumes.
The detailed size of the volume element (Pixel Spacing for x/y dimension and Slice Thickness for z dimension) may be different between the full field of view reconstruction and the sub-region reconstruction.
The plane position of the first slice in the first volume may have a different value than in the second volume, as the sub-region volume can be smaller and shifted with respect to the full field of view volume.
The plane orientation could be different in the second volume depending on the application needs, e.g., to align the slices with the object of interest.
This module encodes the timing information of the frames, as well as dimension and stack index values.
The volume directly reconstructed from a sub-region of each of the original projection X-Ray frames does not necessarily reflect the same anatomy or laterality as the full field of view volume. Therefore the Frame Anatomy macro may point to a different anatomic context than the one documented for the originating frames.
In this example, the slices of the two volumes are reconstructed in the axial plane of the patient; the row direction is aligned in the positive x-direction of the patient (right-left) and the column direction is aligned in the positive y-direction of the patient (anterior-posterior).
The full field of view reconstruction is encoded with the Instance UID "Z1" and consists of a 512-cube volume with a voxel size of 0.2 mm. The sub-region reconstruction is encoded with the Instance UID "Z2" and consists of a 256-cube volume with a voxel size of 0.1 mm.
Both volumes share the same Frame of Reference UID.
Figure TTT.2.3-2. Attributes of 3D Reconstruction of the full field of view of the projection frames
This application case is related to a high resolution reconstruction from several rotations around the same anatomy.
The image acquisition system performs multiple 2D rotational acquisitions around the patient with movements in the same or opposite directions in the patient's transverse plane. A single volume is reconstructed from the acquired data (e.g., through "back-projection" algorithm). The reconstruction can either occur on the same system (e.g., Acquisition Modality) or a secondary processing system (e.g., Co-Workstation).
The reconstructed Volume needs to be encoded and saved for further use.
The rotational acquisitions can be encoded either as a single instance (e.g., "C") containing several rotations or as several instances (e.g., "C1", "C2", etc.) containing one rotation per instance. The rotational acquisitions can either be encoded as XA Image(s) with limited frame-specific attributes or as Enhanced XA Image(s), with frame-specific attributes encoded that inform the algorithms to reconstruct a volume data set.
The reconstructed volume data set is encoded as a single X-Ray 3D Angiographic instance. The reconstructed region covers typically the full field of view of the projected matrix size.
All frames of the original XA Images or Enhanced XA Images are used to reconstruct the volume.
The X-Ray 3D instance references the original acquisition instances and records attributes of the projections describing the acquisition context.
Figure TTT.2.4-1. Encoding of one 3D reconstruction from three rotational acquisitions in one instance
Figure TTT.2.4-2. Encoding of one 3D reconstruction from two rotational acquisitions in two instances
This scenario is based on the encoding of the different rotations in one or more 2D instance(s), which can be encoded either as X-Ray Angiography or Enhanced XA Images.
In the case of multiple source 2D Instances, the acquisition equipment assumes that the patient has not moved between the different rotations. This module encodes the same FoR UID in all the rotations, identifying a common spatial relationship between them, thus allowing the 3D reconstruction to use the projections of all the rotations to perform a single volume reconstruction.
If the source 2D Instances do not provide a value of FoR UID, it has to be created for the reconstructed volume.
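A minimal pydicom-based sketch of this rule follows (the helper name is hypothetical): reuse the common Frame of Reference UID of the source rotations when present, otherwise create one for the volume.

```python
# FoR UID propagation sketch, assuming pydicom.
from pydicom.uid import generate_uid

def frame_of_reference_for_volume(source_datasets):
    """Reuse the single common FoR UID of the sources, else create one."""
    uids = {getattr(ds, "FrameOfReferenceUID", None) for ds in source_datasets}
    uids.discard(None)
    if len(uids) == 1:
        return uids.pop()   # all rotations share one FoR UID
    return generate_uid()   # none delivered: create for the reconstructed volume
```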
Table TTT.2.4-1. Frame of Reference Module Recommendations
This module encodes the source SOP instance(s) used to create the X-Ray 3D Angiographic instance.
There are multiple acquisition contexts, one per rotation of the equipment. This module encodes the frame numbers of the source SOP instance that belong to each acquisition context, as well as the important technical and physical parameters of the source SOP instances used to create the X-Ray 3D Angiographic instance.
Table TTT.2.4-3. X-Ray 3D Angiographic Acquisition Module Recommendations
This module encodes the timing information of the frames, as well as dimension and stack index values.
Table TTT.2.4-4. Frame Content Macro Recommendations
This application case is related to a rotational acquisition of several cardiac cycles with related ECG signal information.
The image acquisition system performs one 2D rotational acquisition of the heart in a cardiac procedure. The gantry is continuously rotating at a constant speed. The ECG is recorded during the rotation, and the cardiac trigger delay time is known for each frame of the rotational acquisition allowing it to be assigned to a given cardiac phase.
Several 3D volumes are reconstructed, one for each cardiac phase.
The rotational acquisition can either be encoded as XA Image or as Enhanced XA Image. The XA instance (let's call it "C") is encoded in the Series "B" of the Study "A".
Each reconstruction is related to one cardiac phase corresponding to a sub-set of frames of the rotational acquisition. Therefore, each cardiac phase represents one acquisition context.
Each reconstruction leads to one volume, all volumes are encoded in one single X-Ray 3D Angiographic instance ("Z"). Each volume is for a different cardiac phase. All volumes share the same stack id.
This figure shows only the first three cardiac phases. An implementation may choose how many phases it will reconstruct.
Projection frames are assigned to a phase based on their cardiac trigger delay time. The rotation speed and acquisition pulse rate will not necessarily align uniformly with the cardiac cycle (especially if the heartbeat is irregular). Thus different phases may end up with different numbers of projections assigned to them. The reconstructed volumes will all span the same space.
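A minimal NumPy sketch of this phase assignment follows; the phase count, nominal cycle length and trigger delays are illustrative assumptions, not values from the Standard.

```python
# Binning projection frames into cardiac phases by trigger delay time.
import numpy as np

n_phases = 8
rr_interval_ms = 1000.0                                  # nominal cycle length
trigger_delays_ms = np.random.rand(80) * rr_interval_ms  # per projection frame

edges = np.linspace(0.0, rr_interval_ms, n_phases + 1)
phase_of_frame = np.digitize(trigger_delays_ms, edges[1:-1])  # 0..n_phases-1

# One acquisition context per phase; the counts need not be equal,
# especially with an irregular heartbeat. Frame numbers are one-based.
frames_in_phase = [np.flatnonzero(phase_of_frame == p) + 1
                   for p in range(n_phases)]
```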
This scenario is based on the encoding of a single rotational acquisition in one 2D instance, together with the information of the ECG and/or the cardiac trigger delay times of each frame of the rotational image.
This module encodes the description of the pixels of the slices of the volumes, each slice being one frame of the X-Ray 3D Angiographic instance. The pixel data encodes all the frames of the first cardiac phase followed by all the frames of the second cardiac phase and so on. Within one cardiac phase, the order of the frames is aligned with the Image Position (Patient) attribute.
This module encodes the dimensions for the presentation order of the image frames.
Table TTT.2.5-1. Multi-frame Dimension Module Recommendations
There are multiple acquisition contexts, one per cardiac phase. This module encodes the frame numbers of the source SOP instance that belong to each acquisition context and have the same cardiac phase.
Table TTT.2.5-2. X-Ray 3D Angiographic Acquisition Module Recommendations
One item is present for each acquisition context (i.e., each cardiac phase). Each item lists the frame numbers of the source SOP instance that belong to that acquisition context (i.e., that have the same cardiac phase).
Note: The number of projection frames may be different for each acquisition context. See Note 2 of Section TTT.2.5.2.
This module encodes the identification of the reconstructions performed to create the X-Ray 3D Angiographic Instance.
Table TTT.2.5-3. X-Ray 3D Reconstruction Module Recommendations
This module encodes the timing information of the frames, as well as dimension and stack index values. All frames forming a volume of one cardiac phase have the same time reference, and a single dimension index value for the first dimension. All volumes for all cardiac phases share the same stack id because they span the same space.
Table TTT.2.5-4. Frame Content Macro Recommendations
This module encodes a value representing the cardiac phase of the 3D frames (i.e., the time of the frame relative to the R-peak).
Table TTT.2.5-5. Cardiac Synchronization Macro Recommendations
In this example the gantry performs one single rotation around the heart at 20 degrees per second, covering an arc of 200 degrees in 10 seconds. Approximately 10 cardiac cycles are acquired. The frame rate is 8 frames per second, resulting in 8 projections acquired in each cardiac cycle, corresponding to 8 different cardiac phases.
Overall there will be 80 projections; 10 projections for each of the 8 cardiac phases. Each cardiac phase represents one acquisition context. The information of the cardiac trigger delay time is encoded for each projection. The projections are encoded as an XA Image with the Instance UID "C".
The reconstruction application creates 8 volumes, each volume is reconstructed by a back-projection from the 10 frames having the same cardiac trigger delay time, i.e., the frames acquired at the same cardiac phase. Each volume contains 256 frames. The 8 reconstructed volumes are encoded in one single X-Ray 3D Angiographic instance of Instance UID "Z".
This application case is related to two rotational acquisitions on the same anatomical region before and after the intervention, with table movement between the two acquisitions. The two reconstructed volumes are created and automatically registered on the same patient coordinate system.
The image acquisition system performs two different 2D rotational acquisitions at two different times of the interventional procedure: the first acquisition before the intervention (e.g., before placement of a stent) and the second one after the intervention.
Between the two acquisitions the table position has changed with respect to the Isocenter. The rotational acquisitions are performed with the same spatial trajectory of the X-Ray Detector relative to the Isocenter; therefore the second acquisition contains a slightly different region of the patient.
Two 3D volumes are reconstructed, one for each rotational acquisition. After the intervention, the two 3D volumes are displayed together on the same patient coordinate system. The user can visually assess the placement of the stent over the anatomy pre-intervention. The patient position on the table does not change during the procedure.
The rotational acquisitions can either be encoded as XA Image or as Enhanced XA Image. The two XA instances (let's call them "C1" and "C2") are encoded in two different Series ("B1" and "B2") of the same Study ("A").
The volume data sets are encoded as two X-Ray 3D Angiographic instances ("Z1" and "Z2"). The volumes are typically a full set (in number of rows, columns and slices) of the projected matrix size (in number of rows and columns).
Each reconstructed volume contains one acquisition context consisting of all the frames of the corresponding source 2D XA Image. To display the two volumes together, they share the same Frame of Reference UID.
Since the purpose of this scenario is to overlap the two volumes without additional spatial registration, the spatial location of the anatomy of interest in both volumes needs to be the same. To keep the two volumes spatially registered, the reconstruction application will use the table position of both rotations to correct the table movement with respect to the Isocenter, thus creating both volumes with the same spatial origin and axis, i.e., same patient coordinate system.
Therefore, it is recommended to encode both instances with the same FoR UID, equal to the Frame of Reference UID of the XA projection images. If the originating XA images do not contain a Frame of Reference UID, the reconstruction application will create the same new FoR UID for the two reconstructed volumes.
This module encodes the patient orientation with respect to the table. It is supposed to contain the same values in both 3D volumes, since the patient does not move between the two rotational acquisitions.
The detailed size of the volume element (Pixel Spacing for row/column dimension of each slice and Slice Thickness for the distance of slices) depends on the reconstruction algorithm and is not necessarily identical to the related sizes in the source (projection) image(s).
Table TTT.2.6-2. Pixel Measures Macro Recommendations
This macro encodes the position of the 3D slices relative to the patient.
It is assumed that the patient does not move on the table between the two rotational acquisitions, but the table moves with respect to the Isocenter. Although the spatial trajectory of the X-Ray Detector relative to the Isocenter of the two rotational acquisitions is the same, the two volumes contain a different region of the patient.
To allow spatial registration between the two volumes, the position of the slices of the two volumes need to be defined with respect to the same point of the patient. As the patient does not move on the table, the reconstruction application will define the patient origin as a fixed point on the table, so that the 3D slices of the two volumes are all related to the same fixed point on the table (i.e., same point of the patient) by the attribute Image Position (Patient) (0020,0032).
The volume is positioned in the spatial coordinates identified by the frame of reference, which is common to the two volumes. Therefore, the position of the slices of both volumes is defined with respect to the same patient origin.
In this example, two rotational images are acquired; the first one before the intervention and the second one after the intervention. They are encoded with the Instance UIDs "C1" and "C2" respectively.
In both rotational acquisitions, the patient position with respect to the table is head-first prone, and the table is neither rotated nor tilted with respect to the Isocenter. The patient coordinates and the Isocenter coordinates are then aligned on x, y and z.
The patient origin is defined by the application as a fixed point on the table.
During the first rotational acquisition, the table position with respect to the Isocenter in the lateral direction [x] is +20mm, in the vertical direction [y] is +40mm, and in the longitudinal direction [z] is +60mm.
During the second rotational acquisition, the table position with respect to the Isocenter in the lateral direction [x] is -10mm, in the vertical direction [y] is +80mm, and in the longitudinal direction [z] is +110mm.
The second acquisition is performed with a relative table movement of (-30,40,50) mm vs. the first acquisition in the patient coordinate system. Therefore, for a given 3D slice "i" of the two volumes, the Image Position (Patient) (0020,0032) of the second volume is translated by (+30,-40,-50) mm relative to the Image Position (Patient) (0020,0032) of the first volume.
The two reconstructions are performed with the same number of rows, columns and slices, and both at the same resolution of 0.2 mm/voxel. Note that if the resolution were different, the Image Position (Patient) (0020,0032) of the second volume would be additionally translated by the shift of the TLHC pixels relative to the center of the volume, because both volumes are centered at the Isocenter.
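As a worked check of the translation above (a sketch assuming NumPy; the slice position is a hypothetical value):

```python
# The table moves by (-30, +40, +50) mm in patient coordinates, so slice i
# of volume 2 has its Image Position (Patient) shifted by (+30, -40, -50) mm.
import numpy as np

table_move_patient = np.array([-30.0, 40.0, 50.0])    # second vs. first run
ipp_volume1_slice_i = np.array([-51.2, -51.2, 10.0])  # hypothetical value (mm)
ipp_volume2_slice_i = ipp_volume1_slice_i - table_move_patient
# -> array([-21.2, -91.2, -40. ]), i.e., translated by (+30, -40, -50) mm
```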
The reconstructions are encoded in two X-Ray 3D Angiographic instances of Instance UIDs "Z1" and "Z2" respectively.
This application case is related to the spatial registration of the X-Ray 3D volume with a static projection acquisition on the same anatomical region during the procedure.
The image acquisition system performs two different 2D acquisitions at two different times of the interventional procedure: one rotational acquisition with a 3D reconstruction, and one static acquisition.
Between the two acquisitions, the table position has changed with respect to the Isocenter. As the acquisitions are performed with the X-Ray Detector centered on the Isocenter, in the second static acquisition the anatomical region of the 3D volume is no longer centered at the Isocenter due to the table movement. It is assumed that part of the anatomy of the 3D volume is still projected in the static acquisition.
During the intervention the 3D volume is segmented to extract some anatomy that is less or not visible in the static acquisition (e.g., injected vessels, heart chambers). The user will want to display such 3D anatomy over the 2D static image to visually assess the placement of interventional devices like guide wires, needles etc. The patient position on the table does not change during the procedure.
The two 2D acquisitions are encoded as two Enhanced XA Images, and both contain the attributes of the X-Ray Isocenter Reference System Macro (see Section C.8.19.6.13 “X-Ray Isocenter Reference System Macro” in PS3.3 ). The two XA instances (let's call them "C1" and "C2") are encoded in two different Series ("B1" and "B2") of the same Study ("A"). They share the same Frame of Reference UID.
The volume data set is encoded as an X-Ray 3D Angiographic instance ("Z1").
The reconstructed volume contains one acquisition context consisting of all the frames of the corresponding source 2D XA Image. To display the volume over the projection image, both volume and projection image share the same Frame of Reference UID.
This scenario is based on the encoding of the 2D acquisition as an Enhanced XA Image, containing the attributes of the X-Ray Isocenter Reference System Macro (see Section C.8.19.6.13 “X-Ray Isocenter Reference System Macro” in PS3.3 ).
This module encodes the identifier for the spatial relationship, which will be the same for the volume and the projection image. The reconstruction application will assign the Frame of Reference UID to the reconstruction equal to the Frame of Reference UID of the Enhanced XA projection image.
This module encodes the patient position and orientation with respect to the table. It is supposed to contain the same values in the 3D volume and in the 2D static image.
This module encodes the coordinate transformation matrix to allow the spatial registration of the volume with the Isocenter reference system of the angiographic equipment.
The reconstruction application defines the patient origin as an arbitrary point on the equipment. The 3D slices of the volume are all related to the patient coordinate system by the attributes Image Position (Patient) (0020,0032) and Image Orientation (Patient) (0020,0037).
The patient is related to the Isocenter by the Attribute Image to Equipment Mapping Matrix (0028,9520), which indicates the spatial transformation from the patient coordinates to the Isocenter coordinates. A point in the Patient Coordinate System (Bx, By, Bz) can be expressed in the Isocenter Coordinate System (Ax, Ay, Az) by applying the Image to Equipment Mapping Matrix as follows:

 |Ax|   |M11 M12 M13 Tx|   |Bx|
 |Ay| = |M21 M22 M23 Ty| . |By|
 |Az|   |M31 M32 M33 Tz|   |Bz|
 |1 |   |0   0   0   1 |   |1 |
The terms (Tx,Ty,Tz) of this matrix indicate the position of the patient origin (i.e., a fixed point on the table) in the Isocenter coordinate system.
This module encodes the table position and angles used during the rotational acquisition to allow the spatial transformation of the volume points from the Isocenter coordinates to the table coordinates. See Section C.8.19.6.13.1 “Isocenter Reference System Attribute Description” in PS3.3 for further explanation about the spatial transformation from the Isocenter reference system to the table reference system.
Once the volume points are related to the table coordinate system, and assuming that the patient does not move on the table between the 2D acquisitions, the volume points can be projected onto the image plane of any further projection acquisition even if the table has moved between the acquisitions. See Section C.8.19.6.13.1 “Isocenter Reference System Attribute Description” in PS3.3 for further explanation about the projection on the image plane of a point defined in the table coordinate system.
In this example, one rotational image is acquired before the intervention. It is encoded with the Instance UID "C1". Then a second projection static image is acquired during the intervention. It is encoded with the Instance UID "C2". Both acquisitions are encoded as Enhanced XA SOP Class.
In both acquisitions, the patient position with respect to the table is head-first prone, and the table is neither rotated nor tilted with respect to the Isocenter. Therefore, the axes of the patient coordinate system and the Isocenter coordinate system are aligned, and the 3x3 matrix Mij of the Image to Equipment Mapping Matrix (0028,9520) is the identity.
In this example, the patient origin is defined by the application as a fixed point on the table; when the table position is zero, the patient origin is the point (0,0,200) in the Isocenter coordinates system (in mm).
During the rotational acquisition, the table position with respect to the Isocenter in the lateral direction [x] is +20mm, in the vertical direction [y] is +40mm, and in the longitudinal direction [z] is +60mm. Therefore, the terms (Tx,Ty,Tz) of the Image to Equipment Mapping Matrix (0028,9520) are (20,40,260).
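A worked sketch of this mapping follows (NumPy assumed): the rotation part is the identity and (Tx, Ty, Tz) = (20, 40, 260), so a point given in patient coordinates maps to Isocenter coordinates by one matrix product; the sample point is hypothetical.

```python
# Applying the Image to Equipment Mapping Matrix (0028,9520).
import numpy as np

M = np.array([[1.0, 0.0, 0.0,  20.0],
              [0.0, 1.0, 0.0,  40.0],
              [0.0, 0.0, 1.0, 260.0],
              [0.0, 0.0, 0.0,   1.0]])  # identity rotation, translation T

b = np.array([10.0, -5.0, 30.0, 1.0])   # hypothetical patient point (Bx,By,Bz,1)
a = M @ b                               # Isocenter coordinates (Ax,Ay,Az,1)
# -> array([ 30.,  35., 290.,   1.])
```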
During the second acquisition, the table position with respect to the Isocenter in the lateral direction [x] is +40mm, in the vertical direction [y] is +30mm, and in the longitudinal direction [z] is +20mm.
Consequently, the second acquisition is performed with a relative table translation of (30,-10,-50) mm vs. the first acquisition in the Isocenter coordinate system. The positioner primary angle is -30 deg. (RAO) and the secondary angle is 20 deg. (CRA). The distances from the source to the Isocenter and from the source to the detector are 780 mm and 1200 mm respectively.
The reconstruction is encoded in an X-Ray 3D Angiographic instance of Instance UID "Z1".
Any 2-dimensional representation of a 3-dimensional object must undergo some kind of projection or mapping to form the planar image. Within the context of imaging of the retina, the eye can be approximated as a sphere and mathematical cartography can be used to understand the impact of projecting a spherical retina on to a planar image. When projecting a spherical geometry on a planar geometry, not all metric properties can be retained at the same time; some distortion will be introduced. However, if the projection is known it may be possible to perform calculations "in the background" that can compensate for these distortions.
The example in Figure UUU.1-1 shows an ultra-wide field image of the human retina. The original image has been remapped to a stereographic projection according to an optical model of the scanning laser ophthalmoscope it was captured on. Two circles have been annotated with an identical pixel count. The circle focused on the fovea (A) has an area of 4.08 mm2 whereas the circle nasally in the periphery (B) has an area of 0.97 mm2, both as measured with the Area Measurement using the Stereographic Projection method. The difference in measurement is more than 400%, which indicates how measurements on large views of the retina can be deceiving.
The fact that correct measurement on the retina in physical units is difficult to do is acknowledged in the original DICOM OP SOP Classes in the description of the Pixel Spacing (0028,0030) tag.
These values are specified as nominal because the physical distance may vary across the field of the images and the lens correction is likely to be imperfect.
The following use cases are examples of how the DICOM Wide Field Ophthalmology Photography objects may be used.
On routine wide-field imaging for annual surveillance for diabetic retinopathy a patient is noted to have no retinopathy, but demonstrates a pigmented lesion of the mid-periphery of the right eye. Clinically this appears flat or minimally elevated, irregularly pigmented without lacunae, indistinct margins on two borders, and has a surface that is stippled with orange flecks. The lesion is approximately 3 X 5 DD. This lesion appears clinically benign, but requires serial comparison to rule out progression requiring further evaluation. Careful measurements are obtained in 8 cardinal positions using a standard measurement tool in the reading software that calculates the shortest distance in mm between these points. The patient was advised to return in six months for repeat imaging and serial comparison for growth or other evidence of malignant progression.
A patient with a history of high myopia has noted recent difficulties descending stairs. She believes this to be associated with a new onset blind spot in her inferior visual field of both eyes, right eye greater than left. On examination she shows a bullous elevation of the retina in the superior periphery of both eyes due to retinoschisis, OD>OS. There is no evidence of inner or outer layer breaks, and the maculae are not threatened, so a decision is made to follow closely for progression suggesting a need for intervention. Wide field imaging of both fundi is obtained, with clear depiction of the posterior extension of the retinoschisis. Careful measurements of the shortest distance in mm between the posterior edge of the retinal splitting and the fovea are made using the diagnostic display measurement tool, and the patient was advised to return in four months for repeat imaging and serial comparison of the posterior location of the retinoschisis.
Patients with diabetes are enrolled in a randomized clinical trial to prospectively test the impact of disco music on the progression of capillary drop out in the retinal periphery. The retinal capillary drop-out is demonstrated using wide-field angiography with expanse of this drop-out determined serially using diagnostic display measurement tools, and the area of the drop-out reported in mm2. Regional areas of capillary drop out are imaged such that the full expanse of the defect is captured. In some cases this involves eccentric viewing with the fovea positioned in other than the center of the image. Exclusion criteria for patient enrollment include refractive errors greater than 8D of Myopia and 4D of hyperopia.
Patients with ARMD and subfoveal subretinal neovascular membranes but refusing intravitreal injections are enrolled in a randomized clinical trial to test the efficacy of topical anti-VEGF (Vascular Endothelial Growth Factor) eye drops on progression of their disease. The patients are selected such that there is a wide range of lesion size (area measured in mm2) and retinal thickening. This includes patients with significant elevation of the macula due to subretinal fluid.
Every 2-dimensional image that represents the back of the eye is a projection of a 3-dimensional object (the retina) into a 2-dimensional space (the image). Therefore, every image acquired with a fundus camera or scanning laser ophthalmoscope is a particular projection. In ophthalmoscopy, part of the spherical retina (the back of the eye can be approximated by a sphere) is projected to a plane, i.e., a 2-dimensional image.
The projection used for a specific retinal image depends on the ophthalmoscope; its optical system comprising lenses, mirrors and other optical elements dictates how the image is formed. These projections are not well-characterized mathematical projections, but they can be reversed to return to a sphere. Once in spherical geometry, the image can then be projected once more. This time any mathematical projection can be used, preferably one that enables correct measurements. Many projections are described in the literature, so which one should be chosen?
Certain projections are more suitable for a particular task than others. Conformal projections preserve angles, a property that holds locally: at each point of the projection plane the projected meridian and parallel intersect at right angles and are equiscaled. Therefore, measuring angles on the 2-dimensional image yields the same results as measuring them on the spherical representation, i.e., the retina. Conformal projections are particularly suitable for tasks where the preservation of shapes is important. Therefore, the stereographic projection explained in Figure UUU.1.2-1 can be used for images on which to perform anatomically correct measurements. The stereographic projection has the projection plane intersect the equator of the eye, where the fovea and cornea are the poles. The points Fovea, p and q on the sphere (retina) are projected onto the projection plane (the image in stereographic projection) along lines through the cornea; where these lines intersect the projection plane they create the points F′, p′ and q′ respectively.
Note that in the definition of stereographic projection the fovea is conceptually in the center of the image. For the mathematics below to work correctly, it is critical that each image is projected such that conceptually the fovea is in the center, even if the fovea is not in the image. This is not difficult to achieve, as a similar result is achieved when creating a montage of fundus images; each image is re-projected relative to the area it covers on the retina. Most montages place the fovea in the center. Two images of the same eye taken from different angles are shown in Figure UUU.1.2-2 and Figure UUU.1.2-3; the same images transformed to adhere to this principle are shown in Figure UUU.1.2-4 and Figure UUU.1.2-5 respectively.
Furthermore, the mathematical "background calculations" are well known for images in stereographic projection. Given points (pixels) on a retinal image, these can be directly located as points on the sphere, and geometric measurements, i.e., area and distance measurements, can be performed on the sphere to obtain the correct values. The mathematical details behind the calculations for locating points on a sphere are presented in Section C.8.17.11.1.1 “Center Pixel View Angle” in PS3.3.
The shortest distance between two points on a sphere lies on a "great circle", which is a circle on the sphere's surface that is concentric with the sphere. The great circle section that connects the points (the line of shortest distance) is called a geodesic. There are several equations that approximate the distance between two points on the back of the eye along the great circle through those points (the arc length of the geodesic), with varying degrees of accuracy. The simplest method uses the "spherical law of cosines". Let λs, ϕs; λf, ϕf be the longitude and latitude of two points s and f, and ∆λ ≡ |λf−λs| the absolute difference of the longitudes, then the central angle is defined as
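Using the notation above, the spherical law of cosines gives (in LaTeX notation):

    \Delta\sigma = \arccos\left( \sin\varphi_s \sin\varphi_f + \cos\varphi_s \cos\varphi_f \cos\Delta\lambda \right)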
where the central angle is the angle between the two points via the center of the sphere, e.g., angle a in Figure UUU.1.2-6. If the central angle is given in radians, then the distance d, known as arc length, is defined as
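In LaTeX notation:

    d = R \, \Delta\sigma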
where R is the radius of the sphere.
This equation leads to inaccuracies both for small distances and if the two points are opposite each other on the sphere. A more accurate method that works for all distances is the use of the Vincenty formulae. Now the central angle is defined as
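For a sphere, the Vincenty formula for the central angle can be written in LaTeX notation as:

    \Delta\sigma = \operatorname{atan2}\left( \sqrt{ \left( \cos\varphi_f \sin\Delta\lambda \right)^2 + \left( \cos\varphi_s \sin\varphi_f - \cos\varphi_f \sin\varphi_s \cos\Delta\lambda \right)^2 } ,\; \sin\varphi_s \sin\varphi_f + \cos\varphi_s \cos\varphi_f \cos\Delta\lambda \right)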
Figure UUU.1.2-6 is an example of a polygon made up of three geodesics Ga, Gb, Gc, describing the shortest distances on the sphere between the polygon vertices x1, x2, x3. Angle γ is the angle on the surface between geodesics Ga and Gb. Angle a is the central angle (the angle via the sphere's center) of geodesic Ga.
If the length of a path on the image (e.g., tracing of a blood vessel) is needed, this can be easily implemented using the geodesic distance defined above, by dividing the traced path into sections with lengths of the order of 1-5 pixels, and then calculating and summing the geodesic distance of each section separately. This works because for short enough distances, the geodesic distance is equal to the on-image distance. Note that sub-pixel accuracy is required.
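A minimal Python sketch of this summation, assuming the traced path points have already been converted to latitude/longitude in radians (the function and variable names are illustrative only and are not defined by this Standard); the Vincenty form of the central angle is used since it remains accurate for the short sections involved:

    import math

    def central_angle(lat_s, lon_s, lat_f, lon_f):
        """Central angle between two points on a sphere via the Vincenty
        formula (stable for short arcs); all angles in radians."""
        d_lon = lon_f - lon_s
        y = math.hypot(
            math.cos(lat_f) * math.sin(d_lon),
            math.cos(lat_s) * math.sin(lat_f)
            - math.sin(lat_s) * math.cos(lat_f) * math.cos(d_lon))
        x = (math.sin(lat_s) * math.sin(lat_f)
             + math.cos(lat_s) * math.cos(lat_f) * math.cos(d_lon))
        return math.atan2(y, x)

    def traced_path_length(points, radius_mm):
        """Sum the geodesic lengths of the short sections of a traced path;
        points is a list of (latitude, longitude) tuples in radians."""
        return sum(radius_mm * central_angle(lat_s, lon_s, lat_f, lon_f)
                   for (lat_s, lon_s), (lat_f, lon_f) in zip(points, points[1:]))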
To measure an area A defined by a polygon on the surface of the sphere, where αi for i=1,…,n are the n surface angles internal to the polygon (such as γ in Figure UUU.1.2-6) and R is the radius of the sphere, we use the following formula, which makes use of the "angle excess".
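In LaTeX notation, with the surface angles \alpha_i expressed in radians:

    A = R^2 \left[ \left( \sum_{i=1}^{n} \alpha_i \right) - (n - 2)\,\pi \right]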
This yields a result in physical units (e.g., mm2 if R was given in mm), but if R2 is omitted in the above formula, a result is obtained in units relative to the sphere, in steradians (sr), the unit of solid angle.
In practice, if the length of the straight arms of the calipers used to measure a surface angle (such as γ in Figure UUU.1.2-6) is short, then the angle measured on the image is equivalent to its representation on the sphere; this is a direct result of using the stereographic projection, as it is conformal.
A 2D to 3D map assigns 3D coordinates to all or a subset of the pixels of the 2D image; the pixels with assigned coordinates are called coordinate points. Implementations choose the interpolation type used for the remaining pixels, but a spline-based interpolation is recommended. See Figure UUU.1.3-1.
The 3D coordinates of pixels can be used for various analyses and computations, e.g., measuring the length of a path, calculating the area of a region of interest, 3D computer graphics, registration, shortest-distance computation, etc. Some examples of methods using 3D coordinates are listed in the following subsections.
Let the path between points A and B be represented by a sequence of pixels P={pi}, i=0,…,N, with p0=A and pN=B. The length of this path can be computed from the partial lengths between successive path points by:
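One consistent form of this summation, in LaTeX notation (the segment length l_i is notation introduced here), is:

    L = \sum_{i=1}^{N} l_i \,, \qquad l_i = \sqrt{ (x_i - x_{i-1})^2 + (y_i - y_{i-1})^2 + (z_i - z_{i-1})^2 }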
where xi, yi, zi are the 3D coordinates of the point pi, which are either available in the 2D to 3D map (if pi is a coordinate point) or computed by interpolation. Here it is assumed that the sequence of path points is known and that the path is 4- or 8-connected (i.e., successive path points are neighbors at most one pixel apart in the horizontal, vertical, or diagonal direction). It is recommended to support sub-pixel processing by using interpolation.
The shortest distance between two points along the surface of a sphere, known as the great-circle or orthodromic distance, can be computed from:
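A numerically robust form, in LaTeX notation, is:

    \Delta\sigma = \operatorname{atan2}\left( \left\| \mathbf{n}_1 \times \mathbf{n}_2 \right\| ,\; \mathbf{n}_1 \cdot \mathbf{n}_2 \right), \qquad d = r \, \Delta\sigma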
where r is the radius of the sphere and the central angle Δσ, in radians, is computed from the Cartesian coordinates of the two points. Here n1 and n2 are the normals to the ellipsoid at the two positions. The above equations can also be expressed in terms of the longitudes and latitudes of the points.
However, in general the shortest distance can be computed by algorithms such as Dijkstra's algorithm, which computes shortest paths on graphs. In this case the image is represented as a graph in which the nodes correspond to the pixels and the edge weights are defined based on the connectivity of the points and their distance.
Let R be the region of interest on the 2D image, tessellated by a set of unit triangles T={Ti}. By unit triangle we refer to an isosceles right triangle whose two equal sides have a length of one pixel (4-connected neighbors). The area of the region of interest can be computed as the sum of the partial areas of the unit triangles in 3D. Let {ai, bi, ci} be the 3D coordinates of the three points of unit triangle Ti. The 3D area of this triangle is
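In LaTeX notation:

    A_i = \frac{1}{2} \left\| \left( \mathbf{b}_i - \mathbf{a}_i \right) \times \left( \mathbf{c}_i - \mathbf{a}_i \right) \right\|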
where ‖·‖ and × refer to the magnitude and the cross product, respectively.
Note that ai, bi and ci are the 3D coordinates of the corner points of the unit triangle, not their 2D indices on the image.
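A minimal Python sketch of this area computation, assuming the 3D corner coordinates (in mm) of each unit triangle have already been obtained from the 2D to 3D map (the names are illustrative only):

    import math

    def triangle_area_3d(a, b, c):
        """Area of a triangle from the 3D coordinates of its corners:
        half the magnitude of the cross product of two edge vectors."""
        u = (b[0] - a[0], b[1] - a[1], b[2] - a[2])
        v = (c[0] - a[0], c[1] - a[1], c[2] - a[2])
        cross = (u[1] * v[2] - u[2] * v[1],
                 u[2] * v[0] - u[0] * v[2],
                 u[0] * v[1] - u[1] * v[0])
        return 0.5 * math.sqrt(cross[0] ** 2 + cross[1] ** 2 + cross[2] ** 2)

    def region_area(unit_triangles):
        """Sum the 3D areas of the unit triangles tessellating the region
        of interest; each element is a tuple (a, b, c) of 3D points."""
        return sum(triangle_area_3d(a, b, c) for a, b, c in unit_triangles)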
If a Transformation Method Code Sequence (0022,1512) value of (111791, DCM, "Spherical projection") is used, then all coordinates in the Two Dimensional to Three Dimensional Map Sequence (0022,1518) are expected to lie on a sphere with a diameter equal to the Ophthalmic Axial Length (0022,1019).
The use of this model for representing the 3D retina enables the calculation of the shortest distance between two points using great circles as per Section UUU.1.3.2.
This Section provides examples of the relationship between the Ophthalmic Tomography Image SOP Instance(s) and Ophthalmic Optical Coherence Tomography B-scan Volume Analysis SOP Instance(s).
Ophthalmic Tomography Image SOP Instance UID is "1.2.3.4.5" and contains five frames.
Ophthalmic Optical Coherence Tomography B-scan Volume Analysis SOP Instance encodes five frames (e.g., one frame for each ophthalmic tomography frame).
References are encoded via the Per-frame Functional Groups Sequence (5200,9230) using Attributes Derivation Image Sequence (0008,9124) and Source Image Sequence (0008,2112).
Figure UUU.2-1. Ophthalmic Tomography Image and Ophthalmic Optical Coherence Tomography B-scan Volume Analysis IOD Relationship - Simple Example
Below is a more complex example.
Ophthalmic TomographyImage SOP Instance UID is "2.3.4.5" and contains 3 frames.
Ophthalmic TomographyImage SOP Instance UID is "1.6.7.8.9" and contains 2 frames.
Ophthalmic Optical Coherence Tomography B-scan Volume Analysis SOP Instance encodes five frames (e.g., one frame for each Ophthalmic TomographyFrame from the two Ophthalmic Tomography Image SOP Instances).
Figure UUU.2-2. Ophthalmic Tomography Image and Ophthalmic Optical Coherence Tomography B-scan Volume Analysis IOD Relationship - Complex Example
OCT en face images are derived from images obtained using OCT technology (i.e., structural OCT volume images plus angiographic flow volume information). With special image acquisition sequences and post hoc image processing algorithms, OCT-A detects the motion of the blood cells in the vessels to produce images of retinal and choroidal blood flow with capillary level resolution. En face images derived from these motion contrast volumes are similar to images obtained in retinal fluorescein angiography with contrast dye administered intravenously, though differences are observed when comparing these two modalities. This technology enables a high resolution visualization of the retinal and choroidal capillary network to detect the growth of abnormal blood vessels to provide additional insights in diagnosing and managing a variety of retinal diseases including diabetic retinopathy, neovascular age-related macular degeneration, retinal vein occlusion and others.
The following are examples of how the ophthalmic tomography angiography DICOM objects may be used.
A 54 year old female patient with an 18 year history of DM2 presents with unexplained painless decreased visual acuity in both eyes. The patient was on hemodialysis (HD) for diabetes related renal failure. She had a failed HD shunt in the right arm and a functioning shunt in the left. SD-OCT testing showed no thickening of the macula. Because of her renal failure and HD history, IVFA was deferred and OCT angiography of the maculae was performed. This showed significant widening of the foveal avascular zone (FAZ), explaining her poor visual acuity and excluding treatment opportunities.
A 71 year old male patient presents with a 3 month history of decreased visual acuity and distorted vision in the right eye. He demonstrates a well-defined elevation of the deep retina adjacent to the fovea OD by biomicroscopy that correlates to a small pigment epithelial detachment (PED) shown by SD-OCT. OCT angiography demonstrated a subretinal neovascular network (SRN) in the same area. This was treated with intravitreal anti-VEGF injection monthly for three months, with resolution of the PED and incremental regression of the subretinal neovascular membrane shown by point-to-point registration OCT angiography, and finally non-perfusion of the previous SRN.
A 59 year old male patient with hypertension and a long smoking history presents with a six week history of painless decrease in vision in the right eye. Ophthalmoscopy showed dilated and tortuous veins inferior temporally in the right eye with a superior temporal distribution of deep retinal hemorrhages that extended to the mid-periphery, but did not include the macula. SD-OCT showed thickening of the macula and OCT angiography showed rarefaction of the retinal capillaries consistent with ischemic branch retinal vein occlusion and macular edema.
Imaging of small animals used for preclinical research may involve acquiring images of multiple animals simultaneously (i.e., more than one animal is present in the same image).
This Annex describes methods of cross-referencing image and other composite SOP instances produced in the acquisition and segmentation process and how the provenance of each may be recorded.
Only backward references are described, allowing for a sequential workflow with processing performed by successive devices, without modification of earlier instances.
The relevant Attributes are described in Section C.7.6.1.1.3 “Derivation Description” in PS3.3 and Section C.7.6.1.1.4 “Source Image Sequence” in PS3.3 of the General Image Module. The same principles apply if the General Image Module is not used (e.g., for Enhanced Multi-frame images, in which the same Attributes are present, but nested in the appropriate Functional Group Macros).
For the purpose of illustration, three successive steps are assumed: acquisition of images containing the entire group of animals, segmentation identifying each individual animal, and derivation of new images that each extract a single animal from the group.
Various DICOM composite objects could be used to encode the segmented region:
If the form of the segmented region is rasterized (a bitmap), then the Segmentation Storage SOP Class is appropriate.
If the form is a surface (mesh), then the Surface Storage SOP Class is appropriate.
For 3D patient-relative coordinates, the RT Structure Set Storage SOP Class is appropriate.
For 2D or 3D coordinates (and geometric shapes), a Structured Report Storage SOP Class may be appropriate, if a template with the appropriate semantics (what the contours "mean") is defined.
For 2D coordinates (and geometric shapes), a Grayscale or Color Softcopy Presentation State Storage SOP Class may be appropriate, though there are no defined semantics for recognizing what to do with which graphic objects.
For illustrative purposes, the use of the Segmentation Storage SOP Class is assumed, and a consistent Frame of Reference is assumed.
If images from different modalities are acquired on separate devices, but with the same physical arrangement of animals, a more complex workflow might involve a segmentation derived from one modality being applied to images from a different modality with a different Frame of Reference. In that case, use of the Spatial Registration Storage SOP Class or the Deformable Spatial Registration Storage SOP Class as a persistent object might be appropriate, with appropriate references to it included. The same might apply if registration were necessary between images acquired on the same device, but given that research small animals are normally anesthetized, this is usually not required.
No references are present, since forward references are not used.
The Frame of Reference UID is present for cross-sectional modalities.
If the animals are not all aligned in the same direction, Patient Position (0018,5100) for each animal is present within Group of Patients Identification Sequence (0010,0027), a nominal Patient Position (0018,5100) is present in the General Series Module, and the coordinate system dependent position and orientation Attributes of the Image Plane Module (or corresponding Functional Groups) are relative to the nominal Patient Position (0018,5100) in the General Series Module.
Segmentations are Enhanced Multi-frame Images, so the Derivation Image Functional Group (Section C.7.6.16.2.6 in PS3.3) is used.
As required by the Segmentation IOD (Section A.51.5.1 in PS3.3):
the value of Purpose of Reference Sequence (0040,A170) within the Source Image Sequence (0008,2112) within Derivation Image Sequence (0008,9124) is (121322, DCM, "Source Image for Image Processing Operation")
the value of Derivation Code Sequence (0008,9215) within Derivation Image Sequence (0008,9124) is (113076, DCM, "Segmentation")
though not required, the value of Derivation Description (0008,2111) may contain additional detail describing the image processing operation
The Frame of Reference UID is the same as that for the images from which the segmentation was derived.
There is no requirement that application of the Segmentation be restricted to the image referenced in the Derivation Image Functional Group Macro, which describes the images that the segmentation was derived from, not the images to which it is applicable (potentially all of the images in the same Frame of Reference).
The Common Instance Reference Module is required to be present, which provides Study and Series Instance UIDs for all referenced instances.
A segmentation instance may contain multiple segments; thus each animal could be described in a separate segmentation instance, or each animal could be described in one of multiple segments within a single segmentation instance. The manner in which each segment is numbered, labeled and categorized is thus important. Each segment may be described as follows:
Segment Number (0062,0004) from 1 to the number of animals (since the Attribute definition requires starting at 1, incrementing by 1)
Segment Label (0062,0005) using a human-readable label that appropriately identifies each animal in the context of the experiment, e.g., it may have the same value as the Patient ID (0010,0020) used for each separate animal.
Segmented Property Category Code Sequence (0062,0003) value of (R-42018, SRT, "Spatial and Relational Concept")
Segmented Property Type Code Sequence (0062,000F) value of (113132, DCM, "Single subject selected from group")
The properties of (R-42018, SRT, "Spatial and Relational Concept") and (113132, DCM, "Single subject selected from group") are suggested instead of a more generic description, such as (T-D000A, SRT, "Anatomical Structure") and (T-D0010, SRT, "Entire Body"), since though the latter would be accurate, it would not convey the additional implication of selection of one from many. Further, in some cases, the entire body may not actually be imaged (e.g., just the head of multiple subjects may be imaged simultaneously for brain studies).
It is recommended that the source image(s) be referenced using Source Image Sequence (0008,2112), either in the top level data set or within the Derivation Image Functional Group (Section C.7.6.16.2.6 in PS3.3) as appropriate for the IOD, with:
the value of Purpose of Reference Sequence (0040,A170) within the Source Image Sequence (0008,2112) being (113130, DCM, "Predecessor containing group of imaging subjects")
the value of Derivation Code Sequence (0008,9215) being (113131, DCM, "Extraction of individual subject from group")
the value of Derivation Description (0008,2111) containing additional detail describing the image processing operation
It is recommended that the segmentation used be referenced using Referenced Image Sequence (0008,1140), either in the top level data set or within the Referenced Image Functional Group (Section C.7.6.16.2.5 in PS3.3) as appropriate for the IOD, with:
the value of Purpose of Reference Sequence (0040,A170) within Referenced Image Sequence (0008,1140) being (121321, DCM, "Mask image for image processing operation")
If instead of a segmentation (which is a form of image), a non-image object were used to encode the segmented regions, then use of Referenced Instance Sequence (0008,114A) instead of Referenced Image Sequence (0008,1140) would be appropriate.
The Frame of Reference UID is the same as the source images and the segmentation.
If all the animals are not aligned in the same direction (i.e., do not have the same value for Patient Position (0018,5100)), the coordinate system dependent position and orientation Attributes of the Image Plane Module (or corresponding Functional Groups) may have been recomputed. If the animals are aligned in different directions, and Patient Position (0018,5100) from within Group of Patients Identification Sequence (0010,0027) in the source images is compared against Patient Position (0018,5100) from the General Series Module in the source images, the difference may be used to recompute (rotate, flip and translate) new patient-relative vectors and offsets within the same Frame of Reference. The value of Patient Position (0018,5100) in the General Series Module of the derived images is appropriate for the selected animal.
It is recommended that the Common Instance Reference Module be present even if it is not required by the IOD, to provide Study and Series Instance UIDs for all referenced instances.
Propagation and replacement of the appropriate patient-level and study-level identifying and descriptive attributes is also required.
The issues related to the identification of the "patient" in such cases are addressed in Section C.7.1.4.1.1 “Groups of Subjects” in PS3.3.
New studies are required if the patient identifiers have changed.
New series are required for each of the derived (types) of objects, since they are created by different equipment and have different values for Modality.
The history of operations applied to a composite instance and its predecessors may be recorded in multiple items of Derivation Code Sequence (0008,9215). It is preferable, when creating a new derived object, to add to the end of the existing sequence of items, rather than to completely replace them. It is also common to add to the plain text that is contained in Derivation Description (0008,2111), rather than replacing it (maximum length permitting).
The history of which devices (and human operators) have operated on a composite instance and its predecessors may be recorded in Contributing Equipment Sequence (0018,A001). Again, it is preferable that the existing sequence of items be extended rather than replaced, if possible.
For both Derivation Code Sequence (0008,9215) and Contributing Equipment Sequence (0018,A001), if multiple predecessors are applicable (e.g., the source image and a segmentation mask), then the sequence of items of both predecessors may be merged.
MRI diffusion imaging is able to quantify the diffusion of water along certain directions. The diffusion tensor model is a simple model that is able to describe the statistical diffusion process accurately at most white matter positions. To calculate diffusion tensors, a baseline MRI without diffusion-weighting and at least six differently weighted diffusion MRIs have to be acquired. After some preprocessing of the data, a diffusion tensor can be calculated at each grid point. This gives rise to a tensor volume that is the basis for tracking. Refinements to the diffusion model and acquisition method such as HARDI, Q-Ball, diffusion spectrum imaging (DSI) and diffusion kurtosis imaging (DKI) are expanding the directionality information available beyond the simple tensor model, enhancing tracking through crossings, adjacent fibers, sharp turns, and other difficult scenarios.
A tracking algorithm produces tracks (i.e., fibers), which are collected into track sets. A track contains the set of x, y and z coordinates of each point making up the track. Depending upon the algorithm and software used, additional quantities such as Fractional Anisotropy (FA) values or color may be associated with the data, by track set, track or point, either to facilitate further filtering or for clinical use. Descriptive statistics of quantities such as FA may be associated with the data by track set or track.
Examples of tractography applications include:
Visualization of white matter tracks to aid in resection planning or to support image guided (neuro) surgery;
Determination of proximity and/or displacement versus infiltration of white matter by tumor processes;
Assessment of white matter health in neurodegenerative disorders, both axonal and myelin integrity, through sampling of derived diffusion parameters along the white matter tracks.
This section illustrates the usage of Section C.8.33.2 “Tractography Results Module” in PS3.3 in the context of the Tractography Results IOD.
Figure WWW-1. Two Example Track Sets. "Track Set Left" with two tracks, "Track Set Right" with one track.
Figure WWW-1 shows two example track sets. The example consists of:
Encoding of Measurement Values for Tracks "A" and "B"
To store measurement values such as Fractional Anisotropy or Apparent Diffusion Coefficient at specific points on a track, an overall view over all tracks of a given track set is needed: only tracks that share a specific type of measurement value shall be grouped into a track set.
Measurements Sequence (0066,0121) => each item describes one value type of all tracks in the track set (here: "Track Set Left" contains two value types: Fractional Anisotropy and Apparent Diffusion Coefficient).
Measurement Values Sequence (0066,0132) => one item for each track of a track set.
When used to store Fractional Anisotropy values: Since a Fractional Anisotropy value is stored for each point in both tracks of "Track Set Left", Floating Point Values (0066,0125) contains an array of Fractional Anisotropy values for tracks "A" and "B" respectively. Track Point Index List (0066,0129) is absent since there is a Fractional Anisotropy value associated with every point in Point Coordinates Data (0066,0016).
When used to store Apparent Diffusion Coefficient values: Since an Apparent Diffusion Coefficient value is stored only for a subset of points in both tracks of "Track Set Left", Track Point Index List (0066,0129) contains indices to the track points in Point Coordinates Data (0066,0016) and Floating Point Values (0066,0125) contains a measurement value for every track point referenced in Track Point Index List (0066,0129).
Table WWW-1 shows the encoding of the Tractography Results Module for the example above. In addition to the two example track sets, Table WWW-1 also encodes the following information:
Within "Track Set Left" the mean Fractional Anisotropy values for track "A" (0.475) and "B" (0.667).
For "Track Set Left" the maximum Fractional Anisotropy value (0.9).
Diffusion acquisition, model and tracking algorithm information.
Image instance references used to define the Tractography Results instance.
Table WWW-1. Example of the Tractography Results Module
Volume data may be presented through a variety of display algorithms, such as frame-by-frame viewing, multi-planar reconstruction, surface rendering and volume rendering. The Volumetric Source Information consists of one or more volumes (3D or 4D) used to form the presentation. When a volume Presentation View is created through the use of a Display Algorithm, it typically requires a set of Display Parameters that determine the specific presentation to be obtained from the volume data. Persistent storage of the Display Parameters used by a Display Algorithm to obtain a presentation from a set of volume-related data is called a Volumetric Presentation State (VPS):
Each Volumetric Presentation State describes a single view with optional animation parameters. A Volumetric Presentation State may also indicate that a particular view is intended to be displayed alongside the views from other Volumetric Presentation States. However, descriptions of how multiple views should be presented are not part of a Volumetric Presentation State and should be specified by a Structured Display, a Hanging Protocol or by another means.
The result of applying a Volumetric Presentation State is not expected to be exactly reproducible on different systems. It is difficult to describe the rendering algorithms in enough detail in an interoperable manner such that a presentation produced at a later time is indistinguishable from the original presentation. While Volumetric Presentation States use established DICOM concepts of grayscale and color matching (GSDF and ICC color profiles) and provide a generic description of the different types of display algorithms possible, variations in algorithm implementations within display devices are inevitable and an exact match of volume presentation on multiple devices cannot be guaranteed. Nevertheless, reasonable consistency is provided by specification of inputs, geometric descriptions of spatial views, type of processing to be used, color mapping and blending, input fusion, and many generic rendering parameters, producing what is expected to be a clinically acceptable result.
A Volumetric Presentation State is different from Softcopy Presentation States in several ways:
Unlike Softcopy Presentation States, a Volumetric Presentation State describes the process of creating a new image rather than parameters for displaying an existing one
A Volumetric Presentation State may not be displayed exactly the same way by all display systems due to differences in the implementations of rendering algorithms.
While both Volumetric Presentation States and Softcopy Presentation States reference source images, a display application applying a Volumetric Presentation State will not directly display the source images. Instead, it will use the source data to construct a volume and then create a new view of the volume data to be displayed. Depending on the specific Volumetric Presentation State parameters, it is possible that some portion of the inputs may not contribute to the generated view.
Some types of volumetric views may be significantly influenced by the hardware and software used to create them, and the industry has not yet standardized the volume rendering pipelines to any great extent.
While volume geometry is consistent, other display characteristics such as color, tissue opacity and lighting may vary slightly between display systems.
The use of the Rendered Image Reference Sequence (0070,1104) to associate the Volumetric Presentation State with a static rendering of the same view is encouraged to facilitate the assessment of the view consistency (see Section XXX.2.3).
A Volumetric Presentation State creator is likely to be capable of also creating a derived static image (such as a secondary capture image) representing the same view. Depending on the use case, either a Volumetric Presentation State or a Secondary Capture image or both may be preferred.
Volumetric Presentation States have the following advantages:
can be used to re-create the view and allow interactive creation of additional views
supporting artifacts, such as Segmentation instances, are preserved and can be re-used
allows collaboration between dissimilar clinical applications (e.g., a radiology application could create a view to be used as a starting point for a surgical planning application)
measurements and annotations can be linked to machine-readable structured context to allow integration with reporting and analysis applications
A Volumetric Presentation State (VPS) creator can create a static derived image at the same time and link it to the VPS by using the Rendered Image Reference Sequence (0070,1104). This approach yields most of the advantages of the individual formats. Additionally, it allows the static images to be used to assess the display consistency of the view.
This approach also allows for a staged review where the static image is reviewed first and the Volumetric Presentation State is only processed if further interactivity is needed.
The main disadvantage to this approach is that it may add a significant amount of data to an imaging study.
This section includes examples of volumetric views and how they can be described with the Volumetric Presentation States to allow recreation of those views on other systems. The illustrated use cases are examples only and are by no means exhaustive.
Each use case is structured in three sections:
User Scenario: Describes the user needs in a specific clinical context, and/or a particular system configuration and equipment type.
Encoding Outline: Describes the Volumetric Presentation States related to this scenario, and highlights key aspects.
Encoding Details: Provides detailed recommendations of the key attributes of the Volumetric Presentation States to address this particular scenario. The tables are similar to the IOD tables of PS3.3. Only attributes with specific recommendations in this particular scenario have been included.
A grayscale planar MPR view created from one input volume without cropping is the most basic application of the Planar MPR VPS.
To create this view, the Volumetric Presentation State Relationship Module refers to one input volume, and uses the Volumetric Presentation State Display Module with a minimum set of attributes, generating this simple pipeline:
The parameters for computing the Multi-Planar Reconstruction are defined in the Multi-Planar Reconstruction Geometry Module.
Planar MPR views are often displayed together with other spatially related Planar MPR views. For example, a very common setup is three orthogonal MPRs showing a lesion in transverse, coronal and sagittal views of the data set.
The storage of the view shown in Figure XXX.3.2-1 requires the generation of three Planar MPR VPS SOP instances and normally a Basic Structured Display SOP instance which references the Planar MPR VPS SOP instances.
In order to enable display applications which do not support the Basic Structured Display SOP Class to create similar views of multiple related Planar MPRs, the Planar MPR VPS SOP Class supports marking instances as spatially related in the Volumetric Presentation State Identification Module.
This allows display applications to identify Volumetric Presentation State instances for viewing together. Additionally, via the View Modifier Code Sequence (0054,0222) in the Presentation View Description Module, display applications can determine which Volumetric Presentation State instance to show at which position on the display depending on the user preferences. Refer to Section XXX.4 for display layout considerations.
Display applications may want to implement mechanisms for detecting when VPS SOP Instances reference exactly the same image instances within the Volumetric Presentation State Input Sequence items from which the volume is created. This saves memory, since the image instances that make up the volume are loaded only once.
The Volumetric Presentation States provide no mechanism to explicitly specify the sharing of a volume by multiple VPS SOP instances.
Table XXX.3.2-3. Presentation View Description Module Recommendations
For this particular example, CID 2 “Anatomic Modifier” provides applicable values.
In clinical routine, radiologists often create a set of derived images from Planar MPR views that cover a specific anatomic region. For example, from a head scan a range of oblique transverse Planar MPR views is defined. These views are rendered into separate derived CT or Secondary Capture SOP Class instances to convey the relevant information to the referring clinician.
Figure XXX.3.3-1. Definition of a range of oblique transverse Planar MPR views on sagittal view of head scan for creation of derived images
However, these derived images depicting the specific anatomy cannot be changed by the display application. The referring clinician cannot view other anatomy not shown by the derived images and cannot alter the orientation of the Planar MPR views.
Alternatively, a set of Planar MPR VPS SOP Instances can be created to depict the slices through the volume. To indicate that these Volumetric Presentation State instances are sequentially related, the Presentation Sequence Collection UID (0070,1102) is used to associate the instances, indicating that they are to be displayed in sequence, and each VPS instance is given a Presentation Sequence Position Index (0070,1103) value to indicate the order in which the instances occur in the collection (in this case, a spatial sequence). In this usage, no animation is specified and it is at the discretion of the display application how these views are to be presented, such as by frames in a light-box format or by a manual control stepping through the presentations in one display window.
For Planar MPR views that can be moved or rotated by the display application, no special encodings in the Planar MPR VPS SOP Instance are necessary.
Figure XXX.3.3-2. One Volumetric Presentation State is created for each of the MPR views. The VPS Instances have the same value of Presentation Display Collection UID (0070,1101)
In general, the individual VPS instances may have any orientation and be in any location.
Another technique for depicting a set of derived images is to have a single Planar MPR VPS SOP Instance that describes an initial Planar MPR view, and specify cross-curve animation to generate the other related views. A straight-line curve is specified that begins at the center of the initial Planar MPR view and ends at the intended center of the last Planar MPR view of the set. A step size is specified to be the distance between the first and last points of the line divided by the number of desired slices minus one. A Recommended Animation Rate (0070,1A03) is specified if the creator wishes to provide a hint to the display application to scroll through the slices in the set, or could be omitted to leave the animation method to the discretion of the display application.
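Expressed as a formula, in LaTeX notation (P_first, P_last and N are notation introduced here for the curve endpoints and the number of desired slices):

    \text{Animation Step Size} = \frac{ \left\| P_{\text{last}} - P_{\text{first}} \right\| }{ N - 1 }

For example, a 45 mm straight-line curve rendered as 10 slices yields a step size of 5 mm.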
Figure XXX.3.4-1. Additional MPR views are generated by moving the view that is defined in the VPS along the curve, perpendicular to the view plane, in Animation Step Size (0070,1A05) steps
For this case, the curve is a straight line. In general, however, the curve may have any form such as a circular curve to create radial MPR views.
Table XXX.3.4-1. Presentation Animation Module Recommendations
The Planar MPR Volumetric Presentation State makes it easy for the receiving display application to enable the user to modify the initial view for viewing nearby anatomy. This requires that display-relative annotations be removed when the initial view is manipulated; otherwise the annotations might point to the wrong anatomy.
To give the display application more control over when to show annotations, the Planar MPR Volumetric Presentation State defines annotations described by coordinates in the VPS-RCS.
As an example, during intervention trajectory planning one or more straight lines representing the trajectories of a device (e.g., needle) to be introduced during further treatment (e.g., cementoplasty, tumor ablation…) are drawn by the user on a planar MPR view.
The Planar MPR VPS does not define how to render the text associated with the annotation or how to connect it to the graphical representation of the annotation. This is an implementation decision.
The creating application derives the 3D coordinates of the needle trajectory from the Planar MPR view and creates a Planar MPR VPS SOP instance with the needle trajectory as Volumetric Graphic Annotation. When a user viewing the Presentation State manipulates the initial Planar MPR view, the display application could control the visibility of the needle trajectory based on the visibility of the part of the volume which is crossed by the needle trajectory (e.g., by fading the trajectory in and out, since the intersection of the graphic with the plane may only appear as one point). Annotation Clipping (0070,1907) controls whether the out-of-view portion of the 3D annotation is displayed or not; see Section C.11.28.1 “Annotation Clipping” in PS3.3 for details.
For handling multiple annotations in different areas of the volume, applications might provide a list of the annotations which are referenced in the Presentation State. When a user selects one of the annotations the Planar MPR view could automatically be adjusted to optimally show the part of the volume containing the annotation.
The Volumetric Presentation State provides the Volumetric Annotation Sequence (0070,1901) for defining annotations by coordinates in the VPS-RCS.
The needle trajectory is encoded as a line described by coordinates in the VPS-RCS. Optionally a Structured Report can be referenced in order to allow the display application to access additional clinical context.
Table XXX.3.5-1. Volumetric Graphic Annotation Module Recommendations
Lung nodules in a volume have been classified by a Computer Aided Detection mechanism into different categories, e.g., small, medium and large. In planar MPR views the nodules are colorized according to their classification.
The classification of the lung nodules is stored in one or multiple Segmentation IOD instances. For each lung nodule category one Segmentation marks the corresponding areas in the volume.
For example, to create a Planar MPR view which shows 3 lung nodule categories in different colors the Planar MPR VPS IOD instance defines via the Volumetric Presentation State Display Module a volumetric pipeline with 4 inputs.
The same volume data set of the lung is used as input for all sub-pipelines:
The first input to the Volumetric Presentation State (VPS) Display Module provides the full (uncropped) MPR view of the anatomy in the display, which will be left as grayscale in the VPS Display Module. This will provide the backdrop to the colorized segmented inputs to be subsequently overlaid by compositor components of the Volumetric Presentation State Display pipeline.
The same input data and a single set of MPR geometry parameters defined in the Multi-Planar Reconstruction Geometry Module are used to generate each VPS Display Module input; only the cropping is different. The Volume Cropping Module for each of the other inputs specifies the included segments used to crop away all parts of the volume which do not belong to a nodule of the corresponding nodule category.
From these cropped volumes Planar MPR views are generated, which are then colorized and overlaid on the grayscale background within the Volume Presentation State Display Module (see Section FF.2.3.2.1 “Classification Component Components” in PS3.4).
In the Volumetric Presentation State Display Module the Presentation State Classification Component Sequence (0070,1801) defines scalar-to-RGB transformations for mapping each MPR view to RGBA. The first MPR (anatomy) view is mapped to grayscale RGB by an RGB LUT Transfer Function (0028,140F) value of EQUAL_RGB. Alpha LUT Transfer Function (0028,1410) is set to NONE; i.e., the anatomy will be rendered as a completely opaque background.
For each of the three lung nodule MPR views an RGB transfer function maps the view to the color corresponding to the respective nodule category. Alpha is set to 0 for black pixels, making them completely transparent. Alpha for all other pixels is set to 1 (or a value between 0 and 1, if some of the underlying anatomy shall be visible through the nodule segmentation).
Presentation State Compositor Component Sequence (0070,1805) in the Volumetric Presentation State Display Module then creates a chain of three RGB Compositor Components which composite the four MPR views into one. The first RGB Compositor performs "Partially Transparent A over B" compositing as described in Section XXX.5.2 by passing through the Alpha of input 2 as Weight-2 and 1-Alpha of input 2 as Weight-1.
The remaining two Compositor Components then perform "Pass Through" compositing as described by Section XXX.5.3 by using Weighting LUTs which simply pass through Alpha-1 as Weight-1 and Alpha-2 as Weight-2, since the output of the previous Compositor Components contains no Alpha, and therefore Alpha-1 will automatically be set to one minus Alpha-2 by the Compositor.
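A per-pixel Python sketch of the "Partially Transparent A over B" operation described above, under simplified assumptions (values normalized to [0, 1], names illustrative only; a real renderer operates on whole frames and applies the Weighting LUTs encoded in the Presentation State):

    def composite(rgb_1, rgb_2, alpha_2):
        """'Partially Transparent A over B': Weight-2 is the Alpha of
        input 2 and Weight-1 is one minus that Alpha; the output is
        RGB without Alpha."""
        w2 = alpha_2
        w1 = 1.0 - alpha_2
        return tuple(w1 * c1 + w2 * c2 for c1, c2 in zip(rgb_1, rgb_2))

    # Example: a red nodule overlay (alpha 0.4) over mid-gray anatomy:
    # composite((0.5, 0.5, 0.5), (1.0, 0.0, 0.0), 0.4)
    # yields approximately (0.7, 0.3, 0.3).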
Figure XXX.3.6-3 shows the complete pipeline for the lung nodule example:
It is envisioned that display applications provide user interfaces for manipulating the Alpha LUT Transfer Functions for each input of the pipeline, allowing the user to control the visibility of the highlighting of each lung nodule category.
Table XXX.3.6-1. Volumetric Presentation State Relationship Module Recommendations
Table XXX.3.6-2. Volumetric Presentation State Cropping Module Recommendations
Table XXX.3.6-3. Volumetric Presentation State Display Module Recommendations
Set only one item in this sequence since the component has only one input.
Set only one item in this sequence since the component has only one input.
Set to "TABLE" to be able to set a transparency for the segmentation.
Set only one item in this sequence since the component has only one input.
Set to "TABLE" to be able to set a transparency for the segmentation.
Set only one item in this sequence since the component has only one input.
Set to "TABLE" to be able to set a transparency for the segmentation.
Include three items that define the chain of RGB Compositor components.
Contains the two Weighting LUTs from Section XXX.5.2 to create the "Partially Transparent A over B" compositing from two RGBA inputs.
Contains the two Weighting LUTs from Section XXX.5.3 to create the "A over B" compositing from one RGB and one RGBA input.
Contains the two Weighting LUTs from Section XXX.5.3 to create the "A over B" compositing from one RGB and one RGBA input.
Set to an ICC Profile describing the transformation of the resulting RGB image into PCS-Values.
Ultrasound images and volumes are able to depict both anatomic tissue information (usually shown as a grayscale image) and functional tissue motion or blood flow information (usually shown in colors representing motion towards or away from the ultrasound transducer).
The sample illustration in Figure XXX.3.7-1 is comprised of three Color Flow MPR presentations that are approximately mutually orthogonal in the VPS Reference Coordinate System. Each presentation is described by one Planar MPR Volumetric Presentation State instance, with layout and overlay graphics provided by a Hanging Protocol instance. The three VPS instances share the same value of Presentation Display Collection UID (0070,1101) indicating that they are intended to be displayed together.
Each of the planar MPR presentations in the display is specified by one Planar MPR Volumetric Presentation State instance. The source volume in this case is stored in an Enhanced US Volume instance, which uses two sets of frames to construct the volume: one set contains tissue intensity frames and one set contains flow velocity frames. Each set of frames comprises one input to the VPS instance and is referenced in one item of Volumetric Presentation State Input Sequence (0070,1201), wherein the Referenced Image Sequence (0008,1140) contains one item per frame of the Enhanced US Volume instance. Spatial Registration is not necessary since both frame sets share the same Volume Frame of Reference in the source instance. Cropping is usually not necessary for multi-planar reconstruction, and both inputs use the same MPR geometry specification.
Classification of the two data types is accomplished using Pixel Presentation (0008,9205) of TRUE_COLOR and two items in Presentation State Classification Component Sequence (0070,1801). The tissue intensity MPR frame is classified using Component Type (0070,1802) of ONE_TO_RGBA and RGB LUT Transfer Function (0028,140F) of EQUAL_RGB to create a grayscale image, while the flow velocity MPR frame is colorized by using Component Type (0070,1802) of ONE_TO_RGBA and RGB LUT Transfer Function (0028,140F) of TABLE and mapping to colors in an RGB color lookup table. Both inputs use Alpha LUT Transfer Function (0028,1410) of IDENTITY so that the alpha represents the magnitude of the input value.
Compositing of the two classified data streams is accomplished using one RGB compositor component, specified by one item in Presentation State Compositor Component Sequence (0070,1805). The Weighting Transfer Function Sequence (0070,1806) is used to accomplish "Threshold Compositing" as described in Section XXX.5.4, a common method used for ultrasound color flow compositing.
Figure XXX.3.7-2 shows the complete pipeline for Ultrasound Color Flow Planar MPR.
Table XXX.3.7-1. Volumetric Presentation State Relationship Module Recommendations
Two items in this sequence, referencing one volume for each data type.
Sequence of frames with Data Type (0018,9808) value of TISSUE_INTENSITY.
>>Table 10-3 “Image SOP Instance Reference Macro Attributes” in PS3.3
Sequence of frames with Data Type (0018,9808) value of FLOW_VELOCITY.
>>Table 10-3 “Image SOP Instance Reference Macro Attributes” in PS3.3
Table XXX.3.7-2. Presentation View Description Module Recommendations
Set to (T-32000, SRT, "Heart").
Table XXX.3.7-3. Multi-Planar Reconstruction Geometry Module Recommendations
Table XXX.3.7-4. Volumetric Presentation State Display Module Recommendations
Only one item in this sequence since the component has only one input.
Only one item in this sequence since the component has only one input.
Set to "TABLE" to be able to map to colors representing the flow velocities towards and away from the ultrasound transducer.
Set to one item that defines the threshold compositing of the two data types.
Contains the two Weighting LUTs from Section XXX.5.4 to create the threshold compositing from two RGBA inputs.
Set to an ICC Profile describing the transformation of the resulting RGB image into PCS-Values.
To aid in the exact localization of functional data, e.g., the accumulation of a radioactive tracer measured with a positron emission tomography (PET) scan, the colorized functional data is blended with, e.g., a CT scan which shows the corresponding anatomy in detail.
To create a Planar MPR view which shows the colorized PET data blended with the grayscale CT data the Planar MPR VPS IOD instance defines via the Volumetric Presentation State Display Module a volumetric pipeline with 2 inputs.
The first input to the Volumetric Presentation State (VPS) Display pipeline provides the MPR view of the anatomy in the display, which will be left as grayscale in the VPS Display pipeline. This will provide the backdrop to the colorized PET input to be subsequently overlaid in the second stage of the VPS pipeline.
Since PET and CT datasets usually have different resolutions and are not aligned (even if they share the same Frame of Reference), the datasets are spatially registered to the Volumetric Presentation State RCS. From these registered volumes grayscale Planar MPRs are generated using the same MPR geometry.
The Volume Presentation State Display pipeline then blends the MPRs into one view (see Section FF.2.3.2.1 “Classification Component Components” in PS3.4).
In the Volumetric Presentation State Display Module the Presentation State Classification Component Sequence (0070,1801) defines classification components for mapping the MPRs to RGBA.
The first MPR (CT) view is mapped to grayscale RGBA by an EQUAL_RGB RGB LUT Transfer Function (0028,140F). Alpha LUT Transfer Function (0028,1410) is set to NONE, since the anatomy will be rendered as completely opaque background.
For the second MPR (functional, PET) view an RGBA transfer function maps the tracer intensity values to a color range. Alpha-2 is set to 0 for black pixels, making them completely transparent. Alpha-2 for all other pixels is set to a single value between 0 and 1, depending on the intended transparency of the functional data.
It is envisioned that display applications provide mechanisms to the user for manipulating the Alpha-2 value which has been set in the Presentation State, thereby allowing the user to control the visibility of the anatomy vs. the functional data.
The RGB Compositor then performs "Partially Transparent A over B" compositing as described in Section XXX.5.2 by passing through Alpha-2 as Weight-2 and (1 - Alpha-2) as Weight-1.
Figure XXX.3.8-3 shows details of the classification and compositing for the blended PET/CT Planar MPR.
Table XXX.3.8-1. Volumetric Presentation State Relationship Module Recommendations
Table XXX.3.8-3. Volumetric Presentation State Display Module Recommendations
Contains two items, one for classifying the CT data and one for classifying the PET data.
Contains one item in this sequence since the component has only one input.
Contains one item in this sequence since the component has only one input.
Set to "TABLE" to be able to map functional data to a color range.
Set to "TABLE" to be able to set a transparency for the functional data.
Contains the two Weighting LUTs from Section XXX.5.2 to create the "Partially Transparent A over B" compositing of two RGBA inputs.
Set to an ICC Profile describing the transformation of the resulting RGB image into PCS-Values.
When evaluating the placement of a coronary artery stent, the stent is often viewed in each phase of a multiphase cardiac CT. An oblique Planar MPR MIP slab is typically used. Because of cardiac motion the oblique plane must be repositioned for each phase in order to yield the best view of the stent in that phase, resulting in a sequence of Planar MPR views with different geometry but identical display parameters.
The storage of the view shown in Figure XXX.3.9-1 requires the generation of one Planar MPR Volumetric Presentation State per cardiac phase in the input data. These presentation states form a Sequence Collection.
Table XXX.3.9-2. Volumetric Presentation State Relationship Module Recommendations
This Module is replicated in each of the created Volumetric Presentation States.
Table XXX.3.9-3. Presentation View Description Module Recommendations
For this particular example, CID 2 “Anatomic Modifier” provides applicable values.
The goal is the identification and annotation of a bilateral iliac stenosis in an acquired CT scan. The objective is the visualization of a leg artery with a three-dimensional annotation, supplemented by informative two-dimensional annotations.
By specifying the classification transfer function, it is possible to assign a color and adjust the opacity of the rendering for different Hounsfield Unit ranges. The Render Shading Module is used to adjust parameters such as the shininess and the different reflection components. The Volumetric Graphic Annotation is used to display the active vessel selection. A Volumetric Graphic Annotation is projected into the 3D voxel data, while the Graphic Annotation Module provides annotations made directly in the 2D pixel data. Both types of annotation specify a Graphic Layer in which the annotation is displayed.
Table XXX.3.10.3.1-1. Volume Presentation State Relationship Module Recommendations
Table XXX.3.10.3.2-1. Volume Render Geometry Module Recommendations
Table XXX.3.10.3.4-1. Render Display Module Recommendations
A tumor in a volume has been identified and segmented. In volume rendered views the tumor is highlighted while preserving information about its relationship to surrounding anatomy.
In this pipeline the different classifications for the segmented objects are shown, followed by the blending operations. To visualize the vessels, they are first classified with a dedicated transfer function and then blended over the background image. The segmented tumor is also classified and then blended over the vessels and bones. In general, the classified segmented volumes are blended in lowest-to-highest priority order using B-over-A blending of the RGB data and the corresponding opacity (alpha) data.
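The priority-ordered blending can be sketched as follows (a non-normative Python illustration; the layer names are illustrative and the RGB/alpha arrays are assumed normalized to [0, 1]):

import numpy as np

def blend_in_priority_order(background_rgb, layers):
    # Blend classified RGBA layers over the background in lowest-to-highest
    # priority order ("B over A"): each layer's alpha weights its own RGB,
    # and (1 - alpha) weights everything composited so far.
    out = background_rgb
    for rgb, alpha in layers:  # e.g., [(vessels_rgb, vessels_a), (tumor_rgb, tumor_a)]
        a = alpha[..., np.newaxis]
        out = (1.0 - a) * out + a * rgb
    return out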
Table XXX.3.11.3.1-1. Volumetric Presentation State Relationship Module Recommendations
Table XXX.3.11.3.4-1. Render Display Module Recommendations
Item #1 in Presentation State Classification Component Sequence
Item #2 in Presentation State Classification Component Sequence
Item #3 in Presentation State Classification Component Sequence
A patient has been imaged by CT at arterial and portal venous contrast phases in order to plan for a liver resection. The two phases are rendered together to visualize the relationship of the tumor to the portal vein, hepatic veins and hepatic arteries to ensure the resection avoids these structures.
In this pipeline, volume streams from two volume inputs are blended together. From the arterial phase volume input, segmented views of the liver, tumor and hepatic arteries are blended in sequence (B over A). From the venous phase volume input, segmented views of the hepatic veins and the portal vein are blended in sequence (B over A). Outputs from these operations are blended together with both given equal weight.
Table XXX.3.12.3.5-1. Render Display Module Recommendations
A Hanging Protocol Instance could select a set of orthogonal MPRs by use of the Image Sets Sequence (0072,0020).
Table XXX.4.1-1. Hanging Protocol Image Set Sequence Recommendations
Set to "1.2.840.10008.5.1.4.1.1.11.6", the SOP Class UID of the Planar MPR VPS SOP Class.
Set to (G-A138, SRT, "Coronal").
Set to "1.2.840.10008.5.1.4.1.1.11.6", the SOP Class UID of the Planar MPR VPS SOP Class.
Set to (G-A145, SRT, "Sagittal").
Set to "1.2.840.10008.5.1.4.1.1.11.6", the SOP Class UID of the Planar MPR VPS SOP Class.
Set to (G-A117, SRT, "Transverse").
The display application would look for three Planar MPR Volumetric Presentation States - one Coronal, one Sagittal and one Transverse - and associate them with Image Sets in the view.
A Structured Display Instance could select a set of one or more Volumetric Presentation States by defining an image box whose Image Box Layout Type (0072,0304) has a Value of "VOLUME_VIEW" or "VOLUME_CINE" and by specifying one or more Volumetric Presentation States using the Referenced Presentation State Sequence (see Table C.11.17-1 “Structured Display Image Box Module Attributes” in PS3.3).
Layout could be accomplished in a display application by using the Presentation Display Collection UID (0070,1101) to identify the presentations to be displayed together, and the View Code Sequence (0054,0220) to determine which presentation to display in each display slot. This requires some clinical context at the exam level, which can be obtained from the source images (for example, Performed Protocol Code Sequence (0040,0260) ).
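One possible (non-normative) sketch of such layout logic groups the presentation states by Presentation Display Collection UID (0070,1101) and assigns display slots by view code; the accessor attributes and the slot assignment table below are assumptions of this sketch, not a defined API:

from collections import defaultdict

# Hypothetical slot assignment; in practice it would come from the
# hanging protocol or structured display configuration.
SLOT_FOR_VIEW = {
    "G-A138": 0,  # Coronal
    "G-A145": 1,  # Sagittal
    "G-A117": 2,  # Transverse
}

def group_for_display(presentation_states):
    # presentation_states: objects with collection_uid (0070,1101) and
    # view_code (first code value of View Code Sequence (0054,0220)).
    collections = defaultdict(dict)
    for ps in presentation_states:
        slot = SLOT_FOR_VIEW.get(ps.view_code)
        if slot is not None:
            collections[ps.collection_uid][slot] = ps
    return collections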
The RGB Compositor described in Section FF.2.3.3.2 “Internal Structure of RGB and RGBA Compositor Components” in PS3.4 utilizes two weighting transfer functions of Alpha-1 and Alpha-2 to control the Compositor Function, allowing compositing functions that would not be possible if each weighting factor were based only on that input's Alpha value. These weighting transfer functions are implemented as Weighting Look-Up Tables (LUTs). Several examples of the use of these Weighting LUTs are described in this section. The format of the examples is in the form of a graph whose horizontal axis is the Alpha-1 input value and whose vertical axis is the Alpha-2 input value. The Weight output value is represented as a gray level, where 0.0=black and 1.0=white.
Section XXX.3 references these different weighting function styles from real clinical use cases.
In this example, a fixed proportion (in this case 2/3) of RGB-1 is added to a fixed proportion (in this case 1/3) of RGB-2. Note that the weighting factors are independent of Alpha values in this case:
In this example, the Compositor Component performs a Porter-Duff "Partially Transparent A over B" compositing.
In this example, the Alpha values are specified to be representative of the magnitude of the corresponding input data, and the weighting tables are designed such that the stronger of the two inputs is output at each point. If Alpha-2 is less than Alpha-1, the output consists solely of RGB-1, while if Alpha-2 is greater than Alpha-1, the output consists solely of RGB-2. This approach is common with ultrasound tissue intensity + flow velocity images, where each output pixel would be either a grayscale tissue value if the flow value is less than the tissue value or a colorized flow velocity value if the flow value is greater than the tissue value:
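The three weighting styles above can be illustrated by constructing the corresponding Weighting LUTs directly (a non-normative Python sketch; 256x256 tables indexed by the two Alpha inputs, with Alpha-1 on the horizontal axis and Alpha-2 on the vertical axis, as in the graphs):

import numpy as np

N = 256
a1 = np.linspace(0.0, 1.0, N)[np.newaxis, :]  # Alpha-1 (horizontal axis)
a2 = np.linspace(0.0, 1.0, N)[:, np.newaxis]  # Alpha-2 (vertical axis)

# Fixed proportions, independent of the Alpha values (here 2/3 and 1/3).
w1_fixed = np.full((N, N), 2.0 / 3.0)
w2_fixed = np.full((N, N), 1.0 / 3.0)

# Porter-Duff "Partially Transparent A over B".
w2_over = np.broadcast_to(a2, (N, N)).copy()  # Weight-2 = Alpha-2
w1_over = 1.0 - w2_over                       # Weight-1 = 1 - Alpha-2

# "Stronger input wins" (e.g., ultrasound tissue intensity + flow velocity).
w2_max = (a2 > a1).astype(float)  # RGB-2 only where Alpha-2 > Alpha-1
w1_max = 1.0 - w2_max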
With these components, blending operations such as the following are possible:
One input to PCS-Values output:
Two inputs to PCS-Values output:
Three inputs to PCS-Values output:
Presentation State Classification Component Sequence (0070,1801) has three items:
Presentation State Compositor Component Sequence (0070,1805) has two items:
RGB Compositor component that combines the outputs of the first two classification components into one RGB
RGB Compositor component that combines the outputs of the previous RGB Compositor and the third classification component into one RGB. This RGB Compositor internally sets the missing Alpha to (1 - Alpha-3) since there is no Alpha output from the previous RGB Compositor
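A non-normative sketch of this three-input chain follows; the weighting behavior of each compositor is abstracted as a callable standing in for its Weighting LUTs, which is an assumption of this sketch:

def rgb_compositor(rgb_a, rgb_b, w_a, w_b):
    # Weighted sum of two RGB inputs.
    return w_a * rgb_a + w_b * rgb_b

def three_input_chain(rgb1, alpha1, rgb2, alpha2, rgb3, alpha3, lut1, lut2):
    # lut1/lut2 map (Alpha-1, Alpha-2) -> (Weight-1, Weight-2).
    w1, w2 = lut1(alpha1, alpha2)
    mid = rgb_compositor(rgb1, rgb2, w1, w2)  # first compositor
    # The first compositor produces no Alpha output, so the second
    # compositor internally uses (1 - Alpha-3) as the missing Alpha.
    w1, w2 = lut2(1.0 - alpha3, alpha3)
    return rgb_compositor(mid, rgb3, w1, w2)  # second compositor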
The Volumetric Presentation State Display Module provides functionality equivalent to the Enhanced Blending and Display Pipeline defined in Section C.7.6.23 “Enhanced Palette Color Lookup Table Module” in PS3.3:
This Annex describes the use of Preclinical Small Animal Imaging Acquisition Context.
This Section contains examples for use cases involving imaging of a single animal in a hybrid PET-CT system.
The basic use case involves an animal, which:
lives in an individually ventilated home cage with several other animals in the same cage
is (briefly) transported (in its home cage) with its cage mates to the imaging facility, without heating, with an appropriate lid
is removed from its home/transport cage for preparation for imaging, involving insertion of a tail vein cannula, performed on an electrically heated pad
is induced by (a) placement in an induction chamber with more concentrated volatile anesthetic, or (b) intraperitoneal injection of Ketamine mixture
is placed in a PET-CT compatible imaging sled/carrier/chamber for imaging (of one animal at a time), with anesthesia with Isoflurane and Oxygen as the carrier gas, and heated with an electric pad regulated by feedback from a rectal probe
The content tree structure (when induction is by a volatile anesthetic) would resemble:
The content tree structure when induction is by intra-peritoneal injection might be different in the following way, in that the housing during the induction phase does not involve a chamber, and the injected agent is specified, as follows:
Only the exogenous substance information is included in this example and content describing animal handling, anesthesia information, etc. is excluded for clarity. Indeed, given the optionality of the other content, it would be possible to create an Acquisition Context SR instance that describes only the exogenous substance information and nothing else.
The content tree structure would resemble:
The following use cases exemplify the use of Content Assessment Results IOD.
An RT Plan SOP Instance is sent from a Treatment Planning System (TPS) to a Quality Assurance (QA) Application and to the Treatment Management System (TMS). The TMS decomposes the content for internal storage. At the time of treatment, the TMS recomposes the Instance and sends it to the operator console of the linear accelerator. However, during recomposition an error occurs: one jaw specification is omitted from the recomposed Instance, and the Beam Dose in the Fraction Scheme Module is set to 0.0.
The operator console requests the QA Application to perform an assessment comparing the copy of the Instance received from the operator console with the copy of the Instance received earlier from the TPS. The QA Application retrieves the Instance from the operator console. The QA Application also performs an assessment by re-calculating the dosimetric parameters in the assessed plan. Although the Beam Meterset in the assessed plan (from the operator console) is the same as the Beam Meterset in the comparison plan (from the TPS), the Beam Meterset re-calculated by the QA Application is different due to the missing jaw. It is further detected that all Beam Dose values are 0.0.
Beam Meterset for the current treatment device in this example is expressed in Monitor Units.
Table ZZZ.1-1. Content Assessment Results Module Example of a RT Plan Treatment Assessment
The following examples are provided to illustrate the usage of the CT Defined and Performed Procedure Protocol IODs. They do NOT represent recommended scanning practice. In some cases they have been influenced by published protocols, but the examples here may not fully encode those published protocols and no attempt has been made to keep them up-to-date.
The primary applications (use cases) considered during the development of the CT Procedure Protocol Storage IODs were the following:
Managing protocols within a site for consistency and dose management (Using Defined Protocols)
Recording protocol details for a performed study so the same or similar values can be used when performing follow-up or repeat studies, especially in oncology, where comparative measurements are performed (Using Performed Protocols)
Vendor troubleshooting image quality issues that may be due to poor protocol/technique (Using Performed Protocols, Defined Protocols)
Distributing departmental, "best practice" or reference protocols (such as AAPM) to modality systems (Using Defined Protocols)
Backing up protocols from a modality to PACS or removable media (e.g., during system upgrades or replacement); most vendors have a proprietary method for doing this which would essentially become redundant when Protocol Management is implemented (Using Defined Protocols)
Additional potential applications include:
Making more detailed protocol information available to rendering or processing applications that would allow them to select processing that corresponds to the acquisition protocol, to select parameters appropriate to the acquisition characteristics, and to select the right series to process/display (Using Performed Protocols)
Improving imaging consistency in terms of repeatable technique, performance, quality and image characteristics; this would benefit from associated image quality metrics and other physics work (Using Defined Protocols and Performed Protocols)
Distributing clinical trial protocols (general purpose or scanner model specific) to participating sites (Using Defined Protocols)
Recording protocol details for a performed study to submit with clinical trial images for technique validation (Using Performed Protocols)
Tracking/extracting details of Performed Protocol such as timestamps, execution sequence and technique for QA, data mining, etc. (Using Performed Protocols)
Making more detailed protocol information available to radiologists reviewing a study and priors, or comparing similar studies of different patients (Using Performed Protocols)
Using non-Patient-specific Protocols
In most cases, the scanner uses any protocol details in the modality worklist item to present to the technologist a list of matching Defined Protocols on this scanner.
Preparing and executing Patient-specific Protocols
In the simplest form, this process could be driven with a combination of the Modality Worklist and Defined Protocols.
In special cases, the radiologist might attend the scan and modify the protocol directly on the console.
Note that the primary record of adjustments is the Performed Protocol object (which can be compared to the referenced Defined Protocol object).
A new Defined Protocol is not typically saved unless the intent is to have a new Defined Protocol available in the Library.
The examples in this Annex are intended to illustrate the encoding mechanisms of the DICOM CT Protocol Storage IODs, not to suggest particular values for clinical use. Further, these examples do not contain the many detailed attributes one would expect from a fully executable defined protocol generated by a CT scanner, but they do demonstrate the usage of many common attributes.
This section includes Defined Protocol examples of a Routine Adult Head Protocol for several different scanner models. The protocol is presented as adjusted by a fictitious Mercy Hospital from a reference protocol referenced in the Predecessor Protocol Sequence. Although the examples in this section were originally derived from protocol documents previously published by the AAPM, some values here were modified and are likely out of date. Parties interested in the current AAPM protocols are encouraged to visit http://www.aapm.org/pubs/CTProtocols/
Table AAAA.2-1 is essentially the same for each scanner model, so it is shown once here rather than duplicated. The second half for two different scanner models is then shown below in Table AAAA.2-2 and Table AAAA.2-3.
Table AAAA.2-1. Routine Adult Head - Context
(24726-2, LN, "CT HEAD WITHOUT THEN WITH IV CONTRAST")
Suspected acute intracranial hemorrhage\ Immediate postoperative evaluation following brain surgery\ Suspected shunt malfunctions, or shunt revisions\ Increased intracranial pressure\ Evaluating psychiatric disorders\ When magnetic resonance imaging (MRI) is unavailable or contraindicated, or if the supervising physician deems CT to be most appropriate.
Detect brain edema or ischemia\ Identify shift in the normal locations of the brain structures, including in the cephalad or caudal directions\ Evaluate the location of shunt hardware and the size of the ventricles\ Evaluate the size of the sulci and relative changes in symmetry\ Detect calcifications in the brain and related structures
Tube Current Modulation (or Automatic Exposure Control) may be used, but is often turned off. According to ACR CT Accreditation Program guidelines:
- The diagnostic reference level (in terms of volume CTDI) is 75 mGy.
- The pass/fail limit (in terms of volume CTDI) is 80 mGy.
- These values are for a routine adult head scan and may be significantly different (higher or lower) for a given patient with unique indications.
NOTE: All volume CTDI values are for the 16-cm diameter CTDI phantom.
ACR-ASNR Practice Guideline For The Performance Of Computed Tomography (CT) Of The Brain, http://www.acr.org/Quality-Safety/Standards-Guidelines/Practice-Guidelines-by-Modality/CT. ACR CT Accreditation Program information, including Clinical Image Guide and Phantom Testing Instructions, http://www.acr.org/Quality-Safety/Accreditation/CT.
"Some indications require injection of intravenous or intrathecal contrast media during imaging of the brain. Intravenous contrast administration should be performed as directed by the supervising radiologist using appropriate injection protocols and in accordance with the ACR Practice Guideline for the Use of Intravascular Contrast Media. A typical amount would be 100 cc at 300 mg/cc strength, injected at 1 cc/sec. A delay of 4 minutes between contrast injection and the start of scanning is typical."
"To reduce or avoid ocular lens exposure, the scan angle should be parallel to a line created by the supraorbital ridge and the inner table of the posterior margin of the foramen magnum. This may be accomplished by either tilting the patient's chin toward the chest ("tucked" position) or tilting the gantry. While there may be some situations where this is not possible due to scanner or patient positioning limitations, it is considered good practice to perform one or both of these maneuvers whenever possible."
The first part of this example is shown above in Table AAAA.2-1.
Table AAAA.2-2. Routine Adult Head - Details - Scantech
See Table AAAA.2-2b “First Acquisition Protocol Element Specification”.
See Table AAAA.2-2c “Second Acquisition Protocol Element Specification”.
See Table AAAA.2-2d “First Reconstruction Protocol Element Specification”.
mAs Quality Point is a parameter for the proprietary tube current modulation algorithm.
The following tables reflect the semantic contents of constraint sequences but not the actual structure of the IOD. The centered rows in italics clarify the context of the constrained attributes that follow by indicating which sequence in the performed module contains the constrained attribute (as specified in the Selector Sequence Pointer).
The first part of this example is shown above in Table AAAA.2-1.
The author of this protocol chose to use the code for the vertex of the head rather than the skull as the basis for the plane defining the extent of the scan and reconstructions.
The Requested Series Description (0018,9937) is the same for both localizer acquisitions; however, DICOM does not mandate series organization behavior, so this does not guarantee that both localizers will be placed in the same series.
Table AAAA.2-3. AAPM Routine Brain Details - Acme
The following tables reflect the semantic contents of constraint sequences but not the actual structure of the IOD. The centered rows in italics clarify the context of the constrained attributes that follow by indicating which sequence in the performed module contains the constrained attribute (as specified in the Selector Sequence Pointer).
Table AAAA.2-3g. First Storage Protocol Element Specification
Table AAAA.2-3h. Second Storage Protocol Element Specification
Table AAAA.2-3i. Third Storage Protocol Element Specification
This section includes Defined Protocol examples of a CT Protocol for Tumor Volumetric Measurements for a clinical trial. These examples are intended to illustrate the encoding mechanisms of the DICOM CT Protocol Storage IODs, not to suggest particular values for clinical trials. Although the examples in this section were originally inspired by protocol documents previously published by ACRIN, some values here were modified and are likely out of date. Parties interested in the current ACRIN protocols are encouraged to visit https://www.acrin.org/.
Table AAAA.3-1 is essentially the same for each model, so it is shown once here rather than duplicated. The second half is then shown below in Table AAAA.3-2.
Table AAAA.3-1. CT Tumor Volumetric Measurement - Context
The first part of this example is shown above in Table AAAA.3-1.
The anatomical extent is defined in the reconstruction to represent the dataset of interest to the clinical trial. The extent was not defined in the localizer or acquisition. Sites are welcome to reflect their local practice in the localizer and acquisition extent as long as they permit production of the reconstruction as specified here.
Table AAAA.3-2. CT Tumor Volumetric Measurement - Details - Acme
The following tables reflect the semantic contents of constraint sequences but not the actual structure of the IOD. The centered rows in italics clarify the context of the constrained attributes that follow by indicating which sequence in the performed module contains the constrained attribute (as specified in the Selector Sequence Pointer).
Functional imaging can create Parametric Maps showing a functional relation between the anatomical region and the specific functional activity. For display purposes it is useful to show this functional activity with the use of a color LUT on the related anatomical image. To be able to do this it is necessary to include a Palette Color Lookup Table for the Parametric Map and define how to map the (floating point) values to a specific RGB value.
For a correct mapping it is important to know what range of continuous values needs to be mapped to the discrete range of RGB values of the LUT. For this the Minimum Stored Value Mapped (0028,1231) and the Maximum Stored Value Mapped (0028,1232) are defined. All values between the minimum and maximum will be distributed in a linear manner to the Palette Color Lookup Table that is supplied.
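For illustration, the linear distribution of stored values onto LUT entries can be sketched as follows (non-normative; the clamping of values outside the mapped range is an assumption of this sketch):

def lut_index(value, min_mapped, max_mapped, lut_entries=256):
    # Linearly map a stored (floating point) value onto [0, lut_entries - 1].
    t = (value - min_mapped) / (max_mapped - min_mapped)
    t = min(max(t, 0.0), 1.0)  # clamp (assumed behavior outside the range)
    return round(t * (lut_entries - 1))

With the values used in the example below, lut_index(0.0, -16.739, 21.434) gives entry 112 of a 256-entry LUT.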
The usage of floating point values for the stored values removes the need for a Real World Value transformation other than the identity transformation.
This example illustrates BOLD fMRI activation data for a bipolar motor paradigm stored as a floating point parametric map encoding ‘t’ (statistical) Real World Values. Each voxel’s value represents how well the BOLD time series information at that location of the brain fits the general linear model (GLM) of the fMRI block paradigm pattern (right or left versus control, no movement). Right and left have been encoded as positive and negative t values, respectively.
The Double Float Minimum Stored Value Mapped and Maximum Stored Value Mapped in this case are -16.739 and 21.434, respectively. This range will be mapped to the low and high ends of the LUT applied to this activation map. In this case the Minimum Stored Value Mapped and Maximum Stored Value Mapped are equal to the RWV Minimum and Maximum, respectively, in the activation map data. Note, however, that there are several compelling reasons why the range might differ from the RWV Minimum and RWV Maximum:
Centering the RWV zero value on some desired index of the LUT; e.g., choosing -21.434 to +21.434 to properly center RWV zero on the middle of the LUT (presumably to match the LUT design).
Choosing a narrower range of Minimum Stored Value Mapped and/or Maximum Stored Value Mapped (negative and/or positive), i.e., windowing, to maximize the dynamic range of the LUT for critical RWV range(s).
Specifying a predetermined Minimum Stored Value Mapped and Maximum Stored Value Mapped regardless of the actual RWV data, in order to have key RWV transitions match LUT color effects, e.g., generally accepted hyperperfusion and hypoperfusion transition points in cerebral blood flow (CBF) maps.
For the purpose of this example, the full RWV range of the activation map is appropriate to display with the full range of the Spring Color Palette.
As the activation map without threshold suggests, areas outside the brain have been masked off. These would be coded with Padding values in the parametric map.
Thresholding (not part of the parametric map) will be applied for positive and/or negative ranges. Note that this operation does not change the color mapping (i.e., RWV x corresponds to LUT entry j) but only the opacity of voxels outside the range (forcing A=0 or transparent).
Other visualization methods such as smoothing and overall opacity may be applied to the colored, thresholded activation map.
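A non-normative sketch of such a display-side threshold (applied to the opacity only, leaving the color mapping untouched; the function and parameter names are illustrative):

import numpy as np

def threshold_alpha(t_map, base_alpha, pos_thresh=None, neg_thresh=None):
    # Voxels within the significant positive/negative ranges keep base_alpha;
    # all others are forced fully transparent (A = 0). The RWV-to-color
    # mapping itself is unchanged.
    keep = np.zeros(t_map.shape, dtype=bool)
    if pos_thresh is not None:
        keep |= t_map >= pos_thresh
    if neg_thresh is not None:
        keep |= t_map <= -neg_thresh
    return np.where(keep, base_alpha, 0.0)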
This section illustrates the usage of the Color LUT in the context of a Parametric Map IOD with the Palette Color Lookup Table for the example described.
Table BBBB.2-1. Example data for the Floating Point Image Pixel Module
Table BBBB.2-7. Example data for the Real World Value Mapping Macro
The Palette Color Lookup Table used is the Spring Color Palette (see Figure BBBB.1-3 Resulting Color LUT Spring).
This can be described as follows through the Palette Color Lookup Table:
Red has a constant value of 255
Green has a linear segment that starts at 0 and ends at 255
Blue has a linear segment that starts at 255 and ends at 0
Using the Segmented Color Lookup Table all three can be described by a discrete segment with length 1 to specify the starting value (0,1,value) followed by a linear segment of length 255 with the end-value (1,255,end-value).
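A non-normative sketch of expanding such segmented descriptors (opcode 0 = discrete segment, opcode 1 = linear segment interpolating from the last generated entry) into the 256-entry channels of the Spring palette:

def expand_segmented_lut(segments):
    out = []
    for seg in segments:
        opcode, length = seg[0], seg[1]
        if opcode == 0:    # discrete: explicit entry values follow
            out.extend(seg[2:2 + length])
        elif opcode == 1:  # linear: interpolate from the last entry to the end value
            start, end = out[-1], seg[2]
            out.extend(round(start + (end - start) * i / length)
                       for i in range(1, length + 1))
    return out

red   = expand_segmented_lut([(0, 1, 255), (1, 255, 255)])  # constant 255
green = expand_segmented_lut([(0, 1, 0),   (1, 255, 255)])  # ramp 0 -> 255
blue  = expand_segmented_lut([(0, 1, 255), (1, 255, 0)])    # ramp 255 -> 0
# each channel is 1 + 255 = 256 entries long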
Table BBBB.2-8. Example data for the Palette Color Lookup Table Module
The values specifying the range to be mapped to the Color LUT are given by the Minimum Stored Value Mapped and the Maximum Stored Value Mapped.
This Annex provides guidance to understand and populate the TID 5300 “Simplified Echo Procedure Report” and its sub-templates. For implementers familiar with the TID 5200 “Echocardiography Procedure Report”, which is largely replaced by TID 5300, some relationships and differences are also explained.
Measurements in this template (except for the Wall Motion Analysis) are collected into one of three containers, each with a specific sub-template and constraints appropriate to the purpose of the container.
Pre-coordinated Measurements are fully standardized measurements (many taken from the ASE practice guidelines).
Each has a single pre-coordinated standard code that fully captures the semantics of the measurement.
The only modifiers permitted are to indicate coordinates where the measurement was taken, provide a brief display label, and indicate which of a set of repeated measurements is the preferred value. Other modifiers are not permitted.
Post-coordinated Measurements are measurements for which DICOM has not established pre-coordinated codes, but that are performed with enough regularity to merit configuration and capturing the full semantics of the measurement. For example, these measurements may include those configured on the Ultrasound System by the vendor or user site. Some of these may be variants of the Pre-coordinated Measurements.
A set of mandatory and conditional modifiers with controlled vocabularies capture the essential semantics in a uniform way.
A single pre-coordinated code is also provided so that when the same type of measurement is encountered in the future, it is not necessary to parse and evaluate the full constellation of modifier values. Since this measurement has not been fully standardized, the pre-coordinated code may use a private coding scheme (e.g., from the vendor or user site).
Adhoc Measurements are non-standardized measurements that do not merit the effort to track or configure all the details necessary to populate the set of modifiers required for a post-coordinated measurement.
The measurement code describes the elementary property measured.
Modifiers provide a brief display label and indicate coordinates where the measurement was taken. Other modifiers are not permitted.
The user wishes to perform measurements on the Ultrasound System, store them to the PACS and later have a specific measurement (say ABC) automatically displayed in the overlay or automatically inserted into a report page on the review system. This does not require the receiver to understand any of the semantics of the measurement.
The Ultrasound System is configured to encode a particular measurement using a specific pre-coordinated code (and code meaning).
In the case of measurements from the Core Set, it is a well-known pre-coordinated code (i.e., the code is in CID 12300 “Core Echo Measurements”), the full semantics are well-known and the measurement will be recorded in TID 5301 “Pre-coordinated Echo Measurement” . Likely most, if not all, of the Core Set measurements come pre-configured on the Ultrasound System.
In the case of vendor-specific or site-specific measurements, it is a pre-coordinated code managed by the site or the vendor which is entered and persisted on the Ultrasound System. Since the code is not well-known, the measurement will be recorded in TID 5302 “Post-coordinated Echo Measurement” along with the modifiers describing its semantics.
The receiver (i.e., the PACS display package or the reporting package) is configured to associate the specific pre-coordinated code with a location on the overlay or a slot in the report.
The form of the user interface for these capabilities is up to the implementer.
The user takes measurements on the Ultrasound System, including measurement ABC. All these measurements are recorded in the Simplified Adult Echo SR object. If multiple instances of measurement ABC are included, one of them may be flagged by the Ultrasound System by setting the Selection Status for that instance to the reason it was selected as the preferred value.
The Ultrasound System stores the SR object to the PACS.
The PACS or the reporting package retrieves the SR object and scans the contents looking for measurements with the pre-coordinated code for measurement ABC. If multiple instances are found, the receiver takes the one for which the Selection Status has been set.
The receiver renders the measurement value to the display or report, annotating it with the recorded Units, Code Meaning, and/or Short Label as appropriate.
Note that in this use case the receiver handles the measurement in a mechanical way. As long as the measurement can be unambiguously identified, the semantics do not need to be understood by the receiver.
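A non-normative sketch of the receiver side of this use case, using pydicom to walk the SR content tree; the code for measurement ABC and the matching of the Selection Status modifier by its code meaning are assumptions of this sketch:

from pydicom import dcmread

TARGET = ("ABC-001", "99MYSITE")  # hypothetical code for measurement ABC

def iter_content(item):
    # Depth-first walk over an SR content tree.
    yield item
    for child in getattr(item, "ContentSequence", []):
        yield from iter_content(child)

def find_preferred_measurement(path):
    ds = dcmread(path)
    hits = []
    for item in iter_content(ds):
        name = getattr(item, "ConceptNameCodeSequence", None)
        if (item.get("ValueType") == "NUM" and name and
                (name[0].CodeValue, name[0].CodingSchemeDesignator) == TARGET):
            hits.append(item)
    for item in hits:  # prefer the instance flagged with a Selection Status
        for mod in getattr(item, "ContentSequence", []):
            cn = getattr(mod, "ConceptNameCodeSequence", None)
            if cn and cn[0].CodeMeaning == "Selection Status":
                return item
    return hits[-1] if hits else None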
The user wishes to perform measurements on the Ultrasound System, store them to the PACS and later perform processing of some or all of the measurements on a CVIS (Cardiovascular Information System) or other system. Processing may include incorporating measurements into a database, performing trend analysis, plotting graphs, driving decision support, etc. One measurement taken at end systole may be compared to the "same" measurement that is taken at end diastole, etc. Measurements at the same Finding Site might be collected together.
As in Use Case 1, the Ultrasound System is configured to encode each measurement using a specific pre-coordinated code (and code meaning).
Again, measurements from the Core Set use a well-known pre-coordinated code and are recorded in TID 5301 “Pre-coordinated Echo Measurement” while vendor-specific or site-specific measurements use locally managed codes and are recorded in TID 5302 “Post-coordinated Echo Measurement” along with the modifiers describing their semantics.
The user again takes measurements on the Ultrasound System, which are recorded in the Simplified Adult Echo SR object. If multiple instances of a measurement are included, one of them may be flagged by the Ultrasound System by setting the Selection Status for that instance to the reason it was selected as the preferred value.
The Ultrasound System stores the SR object to the PACS.
The receiving database or processing system retrieves the SR object and parses the contents. The contents of TID 5301 “Pre-coordinated Echo Measurement” have known semantics and are processed accordingly.
On first encounter, measurements in TID 5302 “Post-coordinated Echo Measurement” will likely have unfamiliar pre-coordinated codes (since the pre-coordinated code in Row 1 of TID 5302 is not taken from CID 12300 “Core Echo Measurements”, but rather was likely produced by the vendor of the Ultrasound System). Depending on the sophistication of the receiver, parsing the modifiers may provide sufficient information for the receiver to automatically handle the new measurement. If not, the measurement can be put in an exception queue for a human operator to review the values of the modifiers and decide how the measurement should be handled. In between those two possibilities, the receiver may be able to compare the modifier values of known measurements and provide the operator with a partially categorized measurement.
In any case, once the semantics of the measurement are understood by the receiver, the corresponding pre-coordinated code can be logged so that future encounters with that measurement can be handled in an automated fashion.
The receiver may also make use of the Selection Status values or may database all the provided measurement values or allow the human to select from the provided set.
Note that in this use case the receiver handles the measurements based on the semantics associated with the measurement.
In TID 5200 “Echocardiography Procedure Report”, containers and headings were used to facilitate the layout of printed/displayed reports by collecting measurements into groups based on concepts like anatomical region. Further, TID 5200 permitted Ultrasound Systems to add new sections freely; TID 5300 “Simplified Echo Procedure Report” does not. Section usage was a source of problematic variability for receivers of TID 5200, so TID 5300 constrains it. When such groupings are useful, for example when printing reports, it makes more sense to configure them in one place (in the receiving database/reporting system) rather than configuring them independently (and possibly inconsistently) on each ultrasound device in a department. Receivers may choose to group measurements based on Finding Site or some other logic as they see fit. This avoids the problem of trying to keep many Ultrasound Systems in sync. SR objects are considered acquisition data/evidence. If the findings are transcoded into CDA reports, sections will likely be introduced in the CDA as appropriate.
The Finding Site is the location at which the measurement was taken. While some measurements will be an observation of the structure of the finding site itself, other measurements will be an observation of something like flow, in which case the Finding Site is simply the location, not the actual thing being observed/measured. To clarify this distinction, Finding Observation Type was introduced in TID 5302 “Post-coordinated Echo Measurement” . For example, when the measurement is a peak velocity and the Finding Site is a valve, to distinguish between a measurement of the velocity of the blood through the valve, and a measurement of the velocity of the valve tissue, the Finding Observation Type would be set to "Hemodynamic Measurement" or "Behavior of Finding Site" respectively.
Modifiers are not permitted on the Finding Site in TID 5302 “Post-coordinated Echo Measurement” since such modifiers resulted in different ways of encoding the same concept. TID 5302 requires the use of a single anatomical code that fully pre-coordinates the location details of the measurement. CID 12305 “Basic Echo Anatomic Sites” has proven to be sufficient to encode the ASE Core Set of measurements. Implementers are strongly encouraged to use codes from that list. If there is a truly significant location detail that needs to be captured, e.g., to identify a specific segment of the atrial wall or a specific leaflet of a valve as the location of the measurement, the implementer may introduce a new code (CID 12305 is extensible) or, better yet, propose adding new codes to CID 12305 through a DICOM Change Proposal.
The codes in CID 12304 “Echo Measured Properties” have also proven to be sufficient to encode the ASE Core Set of measurements. It is expected that the majority of vendor-specific or site-specific measurements can also be encoded using these properties, but it is understandable that some additional codes may be needed. When introducing new codes, implementers should be careful not to introduce elements of the other modifiers, such as Finding Site or Cardiac Cycle Point, into the Measured Property. For example, do not introduce a property for Diastolic Atrial Length to be used for the left and right atria, rather for such a measurement, use Property=Length, Cardiac Cycle Point=End Diastole and Finding Site=Left Atrium or Right Atrium respectively.
Implementers may use codes for image views beyond those listed in DCID 12226 “Echocardiography Image View” as needed, but note that Image View is only recorded if it is significant to the interpretation of the measurement. Inclusion of the Image View will likely isolate the measurement from other measurements of the same feature taken in different views.
Note that (SRT, F-32020, "Systole") is used here to refer to the entire duration of ventricular systole, while (SRT, R-FAB5B, "End Systole") is used to refer to the point in time where the aortic valve closes (or in the case of the right ventricle, the pulmonary valve). Therefore, a Vmax measurement for systole would mean the maximum velocity over the period of systole, and a Vmax measurement for end systole would mean the maximum velocity at the time point of end systole.
This distinguishes between two measurements that convey the same concept, but are obtained or derived in a different way. As with the Image View, this is only recorded if it is significant to the interpretation of the measurement.
This is used to flag the preferred value when multiple instances of the same measurement are recorded in the SR object. Using this to communicate the value preferred by the operator or the Ultrasound System is very useful for receivers that lack the logic to make a selection themselves. In cases where there is no need or value in sending multiple instances of the same measurement, the issue can be avoided by only sending a single instance of any given measurement in the SR object.
The concept modifiers in the template are sufficient to accurately encode all the best practice echo measurements recommended by the ASE. Although TID 5302 “Post-coordinated Echo Measurement” is extensible and adding new modifiers is not prohibited, the meaning and significance of such new modifiers will generally not be understood by receiving systems, delaying or preventing import of such measurements. Further, adding modifiers that replicate the meaning of an existing modifier is prohibited.
Real-world quantities of clinical interest are exchanged in DICOM Structured Reports. These real-world quantities are identified using concept codes of three different types:
Standard measurements that are defined by professional organizations such as the American Society of Echocardiography (ASE), and codified by vocabulary standards such as the Logical Observation Identifiers Names and Codes (LOINC) or Systematized Nomenclature of Medicine - Clinical Terms (SNOMED-CT) standards.
Non-Standard measurements that are defined by a medical equipment vendor or clinical institution and codified using a private or standard Coding Scheme.
Adhoc measurements are those measurements that are generally acquired one time to quantify some atypical anatomy or pathology that may be observed during an exam. These measurements are not codified, but rather are described by the image itself and a label assigned at the time the measurement is taken.
This Annex discusses the requirements for identifying measurements in such a manner that they are accurately acquired and correctly interpreted by medical practitioners.
Clinical organizations publish recommendations for standardized measurements that comprise a necessary and sufficient quantification of particular anatomy and physiology useful in obtaining a clinical diagnosis. For each measurement recommendation, the measurement definition is specific enough so that any trained medical practitioner would know exactly how to acquire the measurement and how to interpret the measurement. Thus, there would be a 1:1 correspondence between the intended measurement recommendation and the practitioner's understanding of the intended measurement and the technique used to measure it (anatomy and physiology, image view, cardiac/respiratory phase, and position/orientation of measurement calipers). This is illustrated in Figure DDDD.2-1.
The goal is for each recommended measurement to be fully specified such that every medical practitioner making the measurement on a given patient at a given time achieves the same result. However, if the recommendation were to be unclear or ambiguous, different qualified medical practitioners would achieve different results measuring the same quantity on the same patient, as illustrated in Figure DDDD.2-2.
There are a number of characteristics that should be included in a measurement recommendation in order to ensure that all practitioners making that measurement achieve the same results. Some characteristics are:
Anatomy being measured, specified to appropriate level of detail
Reference points (e.g., "OFD is measured in the same plane as BPD from the outer table of the proximal skull with the cranial bones perpendicular to the US beam to the inner table of the distal skull")
Type of measurement (distance, area, volume, velocity, time, VTI, etc.)
Sampling method (average of several samples, peak value of several samples, etc.)
The measurement definition should specify these characteristics in order that the definition is clear and unambiguous. Since the characteristics published by the professional society as part of the Standard measurement definition document are incorporated in the codes that are added to LOINC, a pre-coordinated measurement code is sufficient to specify the measurement in a structured report.
Because of the detail in the definition of each standard measurement, it is sufficient to represent such measurements with a pre-coordinated measurement code and a minimum of circumstantial modifiers. This approach is being followed by TID 5301, for example.
Non-Standard Measurements are defined by a particular vendor or clinical institution, and are not necessarily understood by users of other vendors' equipment or practitioners in other clinical institutions. A system producing such measurements cannot expect a consuming application to implicitly understand the measurement and its characteristics. Further, such measurements may not be fully understood by the medical practitioners who are acquiring the measurements, so there is some risk that the measurement acquired may not match the real-world quantity intended by the measurement definition, as illustrated by Figure DDDD.3-1.
It is important for all non-standard measurement definitions to include all the characteristics of the measurement as would have been specified for Standard (baseline) measurement definitions, such as:
Anatomy being measured, specified to appropriate level of detail
Reference points (e.g., "OFD is measured in the same plane as BPD from the outer table of the proximal skull with the cranial bones perpendicular to the US beam to the inner table of the distal skull")
Type of measurement (distance, area, volume, velocity, time, VTI, etc.)
Sampling method (average of several samples, peak value of several samples, etc.)
Fully specifying the characteristics of such measurements is important for several reasons:
Ensuring medical practitioners correctly measure the intended real-world quantity
Aiding receiving applications in correctly interpreting the non-standard measurement and mapping the non-standard measurement to the most appropriate internally-supported measurement.
Aiding in determining whether non-standard measurements from different sources are in fact equivalent measurements and could thus be described by a common measurement definition.
Each of these reasons is elaborated upon in the sections to follow. This is the justification for representing such non-standard measurements using both post-coordinated concepts and a pre-coordinated concept code for the measurement, such as is done in TID 5302 “Post-coordinated Echo Measurement”.
A medical practitioner can be expected to correctly acquire the real-world quantity intended by the non-standard measurement definition only if it is completely specified. This includes explicitly specifying all the essential clinical characteristics as are described for Standard measurements. While the resultant measurement value can be described by a pre-coordinated concept code, the characteristics of the intended real-world quantity must be defined and known.
The characteristics of the real-world measurement measured by the acquisition system and user are conveyed in the mandatory post-coordinated descriptors recorded alongside the measurement value.
The presence of such post-coordinated descriptors aids the consumer application in
Mapping the non-standard measurement to a corresponding internally-supported measurement. The full details provided by including the post-coordinated descriptors greatly simplifies the task of determining measurement equivalence.
Organizing the display of the non-standard measurement values in a report. It is clinically useful to structure written reports in a hierarchical manner by displaying all measurements that pertain to the same anatomical structure or physiological condition together.
Interpreting similar anatomical measurements differently depending on such characteristics as acquisition image mode (e.g., 2D vs. M-mode image). Since the clinical interpretation may depend on this information, it should be explicitly included along with the measurement concept code/code meaning.
Analyzing accumulated report data (trending, data mining, and big data analytics)
Some of these benefits are reduced if the context groups specified for each post-coordinated descriptor are extended with custom codes. A user should take great care when considering the extension of the standard context groups to minimize the proliferation of modifier codes.
The first time that a consumer application encounters a new post-coordinated measurement, it will need to evaluate it based on the values of the post-coordinated descriptors. To help the consumer application with subsequent encounters with the same type of measurement, the acquisition system can consistently populate the Concept Name of the measurement with a code that corresponds to the collection of post-coordinated descriptor values; effectively a non-standard, but stable, pre-coordinated measurement code. (See TID 5302 “Post-coordinated Echo Measurement”, Row 1)
The presence of the pre-coordinated code in addition to the post-coordinated descriptors allows subsequent receipt of the same measurement to utilize the mapping that was performed as described above and treat the measurement as an effectively pre-coordinated measurement.
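This "learn once, reuse later" behavior can be sketched as follows (non-normative; the resolver callable stands in for automatic descriptor matching or a human exception queue, and the internal measurement IDs are hypothetical):

known_codes = {}  # (code value, coding scheme) -> internal measurement ID

def classify_measurement(precoordinated_code, descriptors, resolve_by_semantics):
    # precoordinated_code: (code value, coding scheme) from TID 5302 Row 1.
    # descriptors: frozenset of (modifier concept, value) pairs.
    if precoordinated_code not in known_codes:
        # First encounter: evaluate the post-coordinated descriptors.
        known_codes[precoordinated_code] = resolve_by_semantics(descriptors)
    # Subsequent encounters reuse the stored mapping directly.
    return known_codes[precoordinated_code]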
If the acquisition system is aware of other pre-coordinated codes (e.g., those used by other vendor carts) that are also equivalent to the collection of post-coordinated descriptor values for a given measurement, those pre-coordinated codes may be listed as (121050, DCM, "Equivalent Meaning of Concept Name"). These "known mappings" provided by the acquisition system can also be useful for consumer applications trying to recognize or map measurements.
It is customary for individual vendors to provide tools to acquire measurements that are not currently defined in a Standard measurement template. In the normal evolution of the Standard, standard measurement sets are periodically updated to reflect the state of medical practice. Often, individual vendors and/or clinical users are first to implement the acquisition of new measurements.
Some measurements may be defined and used within a particular clinical institution. For maximum interoperability, if there exists a Standard or vendor-defined measurement concept code for that measurement, the Standard or vendor-defined concept code should be used instead of creating a custom measurement code unique to that institution.
Determining whether two or more different measurement definitions pertain to the same real-world quantity is a non-trivial task. It requires clinical experts to carefully examine alternative measurement definitions to determine if two or more definitions are equivalent. This task is greatly simplified if the distinct characteristics of the non-standard measurement are explicitly stated and conveyed. If two measurements differ in one or more critical characteristics then it can be concluded that the two measurement definitions describe different real-world quantities. Only those measurements that share all the critical clinical characteristics need to be carefully examined by clinical experts to see if they are equivalent.
It may be determined that two measurements that share all specified clinical characteristics are actually distinct real-world quantities. If this occurs, it may be an indication that not all relevant clinical characteristics have been isolated and codified. In this case, the convention for defining the measurement should be extended to include the unspecified clinical characteristic.
In the case of a measurement that is only being performed once, there is little value in incurring the overhead to specify all measurement characteristics and assign a code to the measurement as it will never be used again. Rather, the descriptive text associated with the measurement may provide sufficient clinical context. Association of the measurement with the source image (and/or particular points in the source image) can often provide additional relevant context so it is recommended to provide image coordinate references in the Structured Report (See TID 5303).
If a user finds that the same quantity is being measured repeatedly as an adhoc measurement, a non-standard measurement definition should be created for the measurement as described in Section DDDD.3.
This Annex contains examples of how to encode diffusion models and acquisition parameters within the Quantity Definition Sequence of Parametric Maps and in ROIs in Measurement Report SR Documents.
The approach suggested is to describe that an ADC value is being measured by using ADC (generic) as the concept name of the numeric measurement, and to add post-coordinated concept modifiers to describe:
the model (e.g., mono-exponential, bi-exponential or other multi-compartment models) (drawn from CID 7273 “MR Diffusion Models”)
the method of fitting the data points to that model (e.g., for mono-exponential models, log of ratio of two samples, linear least-squares for log-intensities of all b-values) (drawn from CID 7274 “MR Diffusion Model Fitting Methods”)
relevant numeric parameters, such as the b-values used during acquisition of the source images (drawn from CID 7275 “MR Diffusion Model Specific Methods”)
The model and method of fitting are encoded separately since even though the method of fitting is sometimes dependent on the model, the model may be known but not the method of fitting, or there may be no code for the method of fitting.
The generic concept of ADC, (113041, DCM, "Apparent Diffusion Coefficient"), is used, rather than the specific concept of ADCm, (113290, DCM, "Mono-exponential Apparent Diffusion Coefficient"), since the model is expressed in a post-coordinated manner. Most clinical users will not be concerned with which model was used, and so the ability to display and query for a single generic concept is preferred. However, model-specific pre-coordinated concepts for ADC are provided, as are concepts for other model parameters when a single ADC concept is inappropriate, e.g., for the fast and slow components of a bi-exponential model.
The generic concept of (G-C306, SRT, "Measurement Method") is used to describe the model, rather than to describe the fitting method, since the model is the more important aspect of the measurement to distinguish. This pattern is consistent with historical precedent (e.g., in PS3.17 RRR.3 the model (Extended Tofts) for DCE-MR measurements is described using the Measurement Method and the fitting method is not described).
Also illustrated is how the (121050, DCM, "Equivalent Meaning of Concept Name") can be used to communicate a single human readable textual description for the entire concept.
This example shows how to use the Table C.7.6.16-12b “Real World Value Mapping Item Macro Attributes” in PS3.3 to describe pixel values of an ADC parametric map obtained from a pair of B0 and B1000 images using a log-of-the-ratio-of-two-samples fit to a mono-exponential function (single compartment model). It elaborates on the simple example provided in PS3.3 Section C.7.6.16.2.11.1.2 by adding coded concepts that describe the model and the method of fitting, and by listing the b-values used.
Real World Value Mapping Sequence (0040,9096)
LUT Explanation (0028,3003) = "ADC mm2/s mono-exponential log ratio B0 and B1000"
Measurement Units Code Sequence (0040,08EA) = (mm2/s, UCUM, "mm2/s")
Quantity Definition Sequence (0040,9220):
CODE (G-C1C6, SRT, "Quantity") = (113041, DCM, "Apparent Diffusion Coefficient")
CODE (G-C306, SRT, "Measurement Method") = (113250, DCM, "Mono-exponential ADC model")
CODE (113241, DCM, "Model fitting method") = (113260, DCM, "Log of ratio of two samples")
NUMERIC (113240, DCM, "Source image diffusion b-value") = 0 (s/mm2, UCUM, "s/mm2")
NUMERIC (113240, DCM, "Source image diffusion b-value") = 1000 (s/mm2, UCUM, "s/mm2")
TEXT (121050, DCM, "Equivalent Meaning of Concept Name") = "ADC mono-exponential log ratio B0 and B1000"
In this usage, the text of the (121050, DCM, "Equivalent Meaning of Concept Name") is redundant with the value of LUT Explanation (0028,3003); either or both could be omitted.
The parameter describing a b-value of 0 is expected to be sent, and one should not assume that a b-value of 0 is used if it is absent, since some methods may use a low b-value (e.g., 50), which is not 0.
There is no consensus in the MR community or scientific literature as to the appropriate units to use to report diffusion coefficient values to the user, nor amongst the MR vendors as to how to encode them. In this example, the units are specified as "mm2/s". If the diffusion coefficient pixel values were encoded as integers with such a unit, they could then be encoded with a Rescale Slope of 1E-06, given the typical range of values encountered. Alternatively, the pixel values could be encoded as floating point pixel data values with identity rescaling. Or, if the units were specified as "um2/s" (or "10-6.mm2/s", which is the same thing), then integer pixels could be used with a Rescale Slope of 1. Application software can of course rescale the values for display and convert the units as appropriate to the user's preference, as long as they are unambiguously encoded.
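For illustration, the mono-exponential log-ratio computation and an integer encoding with a Rescale Slope of 1E-06 can be sketched as follows (non-normative; the sample signal values are illustrative):

import numpy as np

def adc_log_ratio(s_low, s_high, b_low=0.0, b_high=1000.0):
    # S(b) = S(0) * exp(-b * ADC)  =>  ADC = ln(S_low / S_high) / (b_high - b_low)
    # Result is in mm2/s when the b-values are in s/mm2.
    return np.log(s_low / s_high) / (b_high - b_low)

adc = adc_log_ratio(np.array([1000.0]), np.array([472.4]))  # ~7.5e-4 mm2/s
stored = np.round(adc / 1e-6).astype(np.int32)              # ~750 with Rescale Slope 1E-06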
This example shows how to describe the mean ADC value of a region of interest on a volume of ADC values obtained from a pair of B0 and B1000 images by fitting the log of the ratio of two samples to a mono-exponential function (single compartment model). In this case the template used is TID 1419 “ROI Measurements”.
NUM (113041, DCM, "Apparent Diffusion Coefficient") = 0.75E-3 (mm2/s, UCUM, "mm2/s")
HAS CONCEPT MOD CODE (G-C306, SRT, "Measurement Method") = (113250, DCM, "Mono-exponential ADC model")
HAS CONCEPT MOD CODE (113241, DCM, "Model fitting method") = (113260, DCM, "Log of ratio of two samples")
HAS CONCEPT MOD CODE (121401, DCM, "Derivation") = (R-00317, SRT, "Mean")
INFERRED FROM NUM (113240, DCM, "Source image diffusion b-value") = 0 (s/mm2, UCUM, "s/mm2")
INFERRED FROM NUM (113240, DCM, "Source image diffusion b-value") = 1000 (s/mm2, UCUM, "s/mm2")
HAS CONCEPT MOD TEXT (121050, DCM, "Equivalent Meaning of Concept Name") = "Mean ADC mono-exponential log ratio B0 and B1000"
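The numeric value above can be sanity-checked with a small numpy sketch of the log-of-ratio computation; the synthetic signal values and trivial ROI mask are assumptions for illustration only.

    import numpy as np

    def adc_log_ratio(s_b0, s_b1, b0=0.0, b1=1000.0):
        # Mono-exponential model: S(b) = S(b0) * exp(-(b - b0) * ADC),
        # so ADC = ln(S(b0) / S(b1)) / (b1 - b0), in mm2/s when b is in s/mm2.
        return np.log(s_b0 / s_b1) / (b1 - b0)

    # Synthetic tissue with a true ADC of 0.75E-3 mm2/s.
    s_b0 = np.full((4, 4), 1000.0)
    s_b1000 = s_b0 * np.exp(-1000.0 * 0.75e-3)

    adc_map = adc_log_ratio(s_b0, s_b1000)
    roi_mask = np.ones(adc_map.shape, dtype=bool)  # trivial ROI covering all voxels
    mean_adc = adc_map[roi_mask].mean()            # -> 0.00075, i.e., 0.75E-3 mm2/s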
This example illustrates how to describe the manner in which an ADC Parametric Map image was derived from B0 and B1000 images. The intent is to provide links to the images, not to replicate all the information that can be provided in the Quantity Definition Sequence.
This particular example illustrates the reference from an ADC Parametric Map to a pair of Enhanced MR images, one for each b-value (or a pair of subsets of frames of a single Enhanced MR image), but the same principle is applicable when single frame IODs are used as the source or derived images.
Since multiple items are permitted in the Derivation Code Sequence (0008,9215), both the general concept (calculation of ADC) and the specific method have been listed; alternatively, just one or the other could be provided.
A textual description has also been provided, which in this case provides more information than the structured content (i.e., about the b-values used).
A generic purpose of reference code has been used, since only a single code is permitted and there is no mechanism (other than creating pre-coordinated codes for every possible b-value) to convey which image (set) was acquired with which b-value; the more specific alternative of a coded concept for "source image for ADC calculation" would add no value over the concept already described in the Derivation Code Sequence.
The SOP Instance UID in the first and second items may be the same, but with a different range of frames referenced, e.g., if all of the source frames (all of the b-values) are in the same instance, as is required by the IHE Diffusion (DIFF) profile (http://wiki.ihe.net/index.php/MR_Diffusion_Imaging); if all of the frames in a single source image are used, then only a single item is necessary and the Referenced Frame Number can be omitted.
All of the images have been listed in a single item of Derivation Image Sequence (0008,9124); alternatively, multiple items of Derivation Image Sequence (0008,9124) could be sent, one for each of the different b-values used; this would allow Derivation Description (0008,2111) to communicate which set contained which b-value, but there is no structured way to communicate such numeric parameters (other than creating pre-coordinated codes for every possible b-value).
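A minimal pydicom sketch of one item of Derivation Image Sequence (0008,9124) follows; the SOP Instance UID, frame numbers, and description text are placeholders, and the items of Derivation Code Sequence (0008,9215) discussed above are elided rather than invented here.

    from pydicom.dataset import Dataset
    from pydicom.sequence import Sequence

    def code(value, scheme, meaning):
        item = Dataset()
        item.CodeValue = value
        item.CodingSchemeDesignator = scheme
        item.CodeMeaning = meaning
        return item

    derivation = Dataset()
    derivation.DerivationDescription = (
        "ADC calculated from b=0 and b=1000 s/mm2 images")  # placeholder text
    derivation.DerivationCodeSequence = Sequence([])  # general + specific concepts go here

    source = Dataset()
    source.ReferencedSOPClassUID = "1.2.840.10008.5.1.4.1.1.4.1"  # Enhanced MR Image Storage
    source.ReferencedSOPInstanceUID = "..."  # placeholder for the source instance UID
    source.ReferencedFrameNumber = [1, 2]    # frames acquired with one of the b-values
    source.PurposeOfReferenceCodeSequence = Sequence(
        [code("121322", "DCM", "Source image for image processing operation")])

    derivation.SourceImageSequence = Sequence([source])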
This example illustrates how to encode the Image and Frame Type values of an ADC Parametric Map image.
Parametric maps are of the enhanced multi-frame family, so they use the standard roles of Image Flavor for Value 3 and Derived Pixel Contrast for Value 4.
The specific requirements are defined in Section C.8.32.2 “Parametric Map Image Module” in PS3.3 and Section C.8.32.3.1 “Parametric Map Frame Type Macro” in PS3.3.
Since this is a derived diffusion image that contains ADC values, suitable values are DERIVED\PRIMARY\DIFFUSION\ADC.
This usage is consistent with the requirements for Image and Frame Type in the IHE Diffusion (DIFF) profile (http://wiki.ihe.net/index.php/MR_Diffusion_Imaging).
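Assuming (consistently with the description above and the IHE DIFF profile) that the four values are DERIVED\PRIMARY\DIFFUSION\ADC, the encoding in pydicom would simply be:

    from pydicom.dataset import Dataset

    ds = Dataset()  # the Parametric Map instance
    ds.ImageType = ["DERIVED", "PRIMARY", "DIFFUSION", "ADC"]

    shared = Dataset()  # item carrying the Parametric Map Frame Type Macro
    shared.FrameType = ["DERIVED", "PRIMARY", "DIFFUSION", "ADC"]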
This section lists useful references related to the taxonomy of ADC calculation methods.
[Burdette 1998] J Comput Assist Tomogr. 1998. 5. 792–4. “Calculation of apparent diffusion coefficients (ADCs) in brain using two-point and six-point methods”. http://journals.lww.com/jcat/pages/articleviewer.aspx?year=1998&issue=09000&article=00023&type=abstract .
[Barbieri 2016] Magnetic Resonance in Medicine. 2016. 5. 2175–84. “Impact of the calculation algorithm on biexponential fitting of diffusion-weighted MRI in upper abdominal organs”. http://dx.doi.org/10.1002/mrm.25765 .
[Bennett 2003] Magnetic Resonance in Medicine. 2003. 727–734. “Characterization of continuously distributed cortical water diffusion rates with a stretched-exponential model”. http://dx.doi.org/10.1002/mrm.10581 .
[Gatidis 2016] Journal of Magnetic Resonance Imaging. 2016. 4. 824–32. “Apparent diffusion coefficient-dependent voxelwise computed diffusion-weighted imaging: An approach for improving SNR and reducing T2 shine-through effects”. http://dx.doi.org/10.1002/jmri.25044 .
[Graessner 2011] MAGNETOM Flash. 2011. 84–87. “Frequently Asked Questions: Diffusion-Weighted Imaging (DWI)”. Siemens Healthcare. http://clinical-mri.com/wp-content/uploads/software_hardware_updates/Graessner.pdf .
[Merisaari 2016] Magnetic Resonance in Medicine. 2016. “Fitting methods for intravoxel incoherent motion imaging of prostate cancer on region of interest level: Repeatability and Gleason score prediction”. http://dx.doi.org/10.1002/mrm.26169 .
[Neil 1993] Magnetic Resonance in Medicine. 1993. 5. 642–7. “On the use of Bayesian probability theory for analysis of exponential decay data: An example taken from intravoxel incoherent motion experiments”. http://dx.doi.org/10.1002/mrm.1910290510 .
[Oshio 2014] Magn Reson Med Sci. 2014. 191–195. “Interpretation of diffusion MR imaging data using a gamma distribution model”. http://dx.doi.org/10.2463/mrms.2014-0016 .
[Toivonen 2015] Magnetic Resonance in Medicine. 2015. 4. 1116–24. “Mathematical models for diffusion-weighted imaging of prostate cancer using b values up to 2000 s/mm2: Correlation with Gleason score and repeatability of region of interest analysis”. http://dx.doi.org/10.1002/mrm.25482 .
[Yablonskiy 2003] Magnetic Resonance in Medicine. 2003. 4. 664–9. “Statistical model for diffusion attenuated MR signal”. http://dx.doi.org/10.1002/mrm.10578 .
This section illustrates the usage of the Advanced Blending Presentation State for a functional MRI study.
Quantitative imaging provides measurements of physical properties, in vivo and non-invasively, for research and clinical practice. DICOM support for parametric maps provides a structure for organizing these results as an extension of the already widely-used imaging standard. The addition of color LUT support for parametric maps bridges the gap between data handling and visualization.
An example of quantitative imaging in clinical practice today is the use of MRI, PET and other modalities in brain mapping for diagnostic assessment in pre-treatment planning for tumors, epilepsy, arterio-venous malformations (AVMs) and other conditions. MR Diffusion tensor imaging (DTI) results in fractional anisotropy (FA) and other parametric maps highlighting white matter structures. Task-based functional MRI (fMRI) highlights specific areas of eloquent cortex (gray matter) as expressed in statistical activation maps. Other parameters and modalities, including perfusion, MR spectroscopy, and PET, are often employed to locate and characterize lesions by means of their hyper- and hypo-metabolism and -perfusion in parametric maps.
The visualization of multiple parametric maps and sources of anatomical information in the same space requires the tools to highlight areas of interest (and hide irrelevant areas) in parametric maps. Two important tools provided in this supplement are thresholding of parametric maps by their real-world values, and blending of multiple image data sets in a single view.
In this example, Series 2 through 5 have a lower resolution and are expected to be resampled to the same resolution as Series 1, since Series 1 is identified as the series to be used for the target geometry.
The example describes the blending of five series:
Series 1: the anatomical series, which is stored as a single volume in an Enhanced MR Image object with no Color LUT attached. The Image will be displayed with a Relative Opacity of 0.7.
Series 2: the DTI series, which is stored as an Enhanced MR Color Image object, meaning that no RGB transformation is needed. The Image will be displayed with a Relative Opacity of 1 - 0.7.
Series 3: the reading task, captured in a Parametric Map with the Color LUT Winter attached to it. The Image will be displayed with a threshold range of 6% to 50%. The opacity will be divided equally among the three task maps.
Series 4: the listening task, captured in a Parametric Map with the Color LUT Fall attached to it. The Image will be displayed with a threshold range of 9% to 60%. The opacity will be divided equally among the three task maps.
Series 5: the silent word generation task, captured in a Parametric Map with the Color LUT Spring attached to it. The Image will be displayed with a threshold range of 7% to 75%. The opacity will be divided equally among the three task maps.
The result of the first blending operation (FOREGROUND) will be blended with the result of the second blending operation (EQUAL) through a FOREGROUND blending operation with a Relative Opacity of 0.6.
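The arithmetic of the two blending operations can be sketched as follows. This is a simplified numpy illustration: a real implementation operates on rendered RGB values after threshold-based opacity has been applied, and the random arrays here stand in for actual image content.

    import numpy as np

    def foreground(fg, bg, opacity):
        # FOREGROUND blending: linear mix of two RGB layers.
        return opacity * fg + (1.0 - opacity) * bg

    def equal(*layers):
        # EQUAL blending: each of the n layers contributes with weight 1/n.
        return sum(layers) / len(layers)

    shape = (64, 64, 3)  # one RGB slice, already resampled to the Series 1 geometry
    anatomy = np.random.rand(*shape)   # Series 1, after grayscale-to-RGB rendering
    dti = np.random.rand(*shape)       # Series 2, already color
    tasks = [np.random.rand(*shape) for _ in range(3)]  # Series 3-5, after color LUTs

    # Thresholding by real-world value (e.g., 6% to 50% for the reading task)
    # would zero the opacity of pixels outside the range before blending.
    first = foreground(anatomy, dti, 0.7)   # Series 1 over Series 2
    second = equal(*tasks)                  # Series 3-5 contribute equally
    final = foreground(first, second, 0.6)  # first result over second result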
Figure FFFF.2-6 shows the final result with patient information and the different blended image layers. The overlay of the patient and layer information is not described in the object but would be application-specific behavior.
Table FFFF.3-1. Encoding Example
This Annex contains examples of the use of Patient Radiation Dose templates within Patient Radiation Dose Structured Report Documents.
The following example shows the report of the skin dose map calculated from the dose delivered during an X-Ray interventional cardiology procedure.
The calculation uses a Radiation Dose SR provided by a Single Plane X-Ray Angiography equipment of the manufacturer "A". The Radiation Dose SR is created during one procedure step, corresponding to the coronary stenting of an adult male of 83 kg and 179 cm height.
The skin dose calculations are performed by an application on a separate workstation from manufacturer "B", operated by the medical physicist, who is logged into the workstation at the time of the creation of the Patient Radiation Dose Structured Report document.
The dose calculation application generates a Patient Radiation Dose Structured Report document and a Secondary Capture Image containing an image of the dose distribution over the deployed skin of the patient model.
The dose calculation application uses the following settings and assumptions:
The patient model is a combination of two elliptic cylinders to represent the chest and neck of the patient.
The actual dimensions of the model are determined by the age, gender, height, and weight of the patient.
In this example the exact height and weight of the patient are used to create the model. The resulting elliptic cylinder for the chest of the model is 31 cm in the AP dimension and 74 cm in the lateral dimension.
The application creates internally a 3D voxelized model that is stored in a DICOM SOP Instance.
The distance from the top of the patient's head to the head of the table (measured during the procedure) is known. The location of the patient head and table head are stored in a Spatial Fiducials SOP instance.
The application uses fiducials to register the patient model with the data of the source Radiation Dose SR.
A-priori knowledge of the distance from the table head to the system Isocenter at table zero position is calibrated offline.
The table tilt, cradle, and rotation angles are ignored because the description of the acquisition geometry is incomplete in the Radiation Dose SR. Only table translations relative to the Isocenter are considered in the calculations.
A-priori knowledge of the model of the table and mattress (i.e., shape, dimensions, and absorption material) is calibrated offline, and it is referenced internally by the application. The model uses the same coordinate system as the equipment referenced in the Radiation Dose SR, so there is no need for another registration SOP instance.
The X-Ray filter information from the source Radiation Dose SR is used by the application. There is no other a-priori knowledge of the X-Ray filtration.
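As an illustration of the kind of patient model described above, the following numpy sketch voxelizes an elliptic cylinder with the chest dimensions from this example; the cylinder length and voxel size are arbitrary assumptions for illustration only.

    import numpy as np

    # Chest dimensions from the example: 31 cm AP, 74 cm lateral.
    ap_mm, lat_mm = 310.0, 740.0
    length_mm, voxel_mm = 500.0, 5.0  # assumed values, not from the example

    y = np.arange(-ap_mm / 2, ap_mm / 2, voxel_mm)
    x = np.arange(-lat_mm / 2, lat_mm / 2, voxel_mm)
    yy, xx = np.meshgrid(y, x, indexing="ij")

    # Voxels inside the ellipse (y/a)^2 + (x/b)^2 <= 1, extruded along the long axis.
    cross_section = (yy / (ap_mm / 2)) ** 2 + (xx / (lat_mm / 2)) ** 2 <= 1.0
    n_slices = int(length_mm / voxel_mm)
    chest_model = np.broadcast_to(cross_section, (n_slices,) + cross_section.shape)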
Table GGGG.1-1. Skin Dose Map Example
The following example shows the report of the organ dose calculated for a dual-source CT scan.
The calculation uses a Radiation Dose SR provided by a CT system that has dual X-Ray tubes. The Radiation Dose SR is created during the acquisition of a Neck DE_CAROTID CT scan of an adult male of 75 kg and 165 cm height.
The dose calculations are performed on the CT system. The dose calculation application generates a Patient Radiation Dose Structured Report document and a Dose Point Cloud containing an image of the dose distribution for the patient model.
The dose calculation application uses the following settings and assumptions:
The patient model is a stylized anthropomorphic model of the patient.
Organs are represented by simple geometric shapes described by mathematical equations. The parameters of the equations describing the location, shape, and dimension of the organs are stored in a DICOM SOP Instance.
In this example the gender and age of the patient are used to select the appropriate phantom from the existing phantom library.
The following example is provided to illustrate the usage of the Protocol Approval IOD.
This example shows approval of a pair of CT Protocols for routine adult head studies. It is approved by the Chief of Radiology and by the Physicist. The Instance UIDs of the two CT Protocols are 1.2.3.456.7.7 and 1.2.3.456.7.8.
Note that the Institution Code Sequence (0008,0082) inside the Asserter Identification Sequence (0044,0103) communicates that Mercy Hospital is the organization to which Dr. Welby is responsible. The Institution Code Sequence (0008,0082) at the end of the first Approval Item communicates that Mercy Hospital is the institution for which the protocols are "Approved for use at the institution".
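A fragment showing how the two uses of Institution Code Sequence (0008,0082) might be encoded with pydicom is sketched below; the person name, local code value, and surrounding structure are hypothetical and abbreviated, and only the nesting described above is shown.

    from pydicom.dataset import Dataset
    from pydicom.sequence import Sequence

    def code(value, scheme, meaning):
        item = Dataset()
        item.CodeValue = value
        item.CodingSchemeDesignator = scheme
        item.CodeMeaning = meaning
        return item

    mercy = code("MH", "99LOCAL", "Mercy Hospital")  # hypothetical local code

    asserter = Dataset()
    asserter.PersonName = "Welby^^^Dr^"  # hypothetical spelling of the asserter's name
    # Organization to which the asserter is responsible:
    asserter.InstitutionCodeSequence = Sequence([mercy])

    approval = Dataset()  # abbreviated; other approval attributes omitted
    approval.AsserterIdentificationSequence = Sequence([asserter])
    # Institution for which the protocols are "Approved for use at the institution":
    approval.InstitutionCodeSequence = Sequence([mercy])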
Table HHHH-1. Approval by Chief Radiologist
The goal of encapsulating a Stereolithography (STL) 3D manufacturing model file inside a DICOM instance rather than transforming the data into a different representation is to facilitate preservation of the STL file in the exact form that it is used with extant manufacturing devices, while at the same time unambiguously associating it with the patient for whose care the model was created and the images from which the model was derived.
In this example, the patient requires a replacement implant for a large piece of skull on the left side of his head. A 3D manufacturing model (encoded in binary STL) was created by mirroring the corresponding section of the patient's right skull hemisphere, and then modified by trimming to fit the specific implantation area.
The model was derived from a series of CT images (CT-01). The STL data in this example is the first version, having no predecessor. The STL data was created on November 22, 2017 at 7:10:14 AM and then stored in a DICOM instance at 7:15:23 AM. The CT images were acquired weeks earlier.
The STL data was created in the coordinate system of CT-01; so they share the same Frame of Reference UID value.
A preview image (optional) showing the rendered 3D object was created and included with the encapsulated STL as an icon image.
No burned in annotation identifying the patient was included. The region of the skull reconstructed in the model contains no distinguishing facial features of the patient.
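A condensed pydicom sketch of selected attributes from this example follows; the file name and Frame of Reference UID placeholder are hypothetical, and many required attributes (patient identification, source image references, the preview icon, etc.) are omitted for brevity.

    from pydicom.dataset import Dataset
    from pydicom.uid import generate_uid

    ds = Dataset()
    ds.SOPClassUID = "1.2.840.10008.5.1.4.1.1.104.3"  # Encapsulated STL Storage
    ds.SOPInstanceUID = generate_uid()
    ds.Modality = "M3D"
    ds.BurnedInAnnotation = "NO"           # no identifying text in the model
    ds.MIMETypeOfEncapsulatedDocument = "model/stl"
    ds.FrameOfReferenceUID = "..."         # same value as the source CT-01 series
    with open("skull_implant.stl", "rb") as f:  # hypothetical file name
        ds.EncapsulatedDocument = f.read()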
Table IIII.1-1. CT Derived Encapsulated STL Example
Remarks from the table:
The purpose of reference codes are drawn from CID 7060 “Encapsulated Document Source Purposes of Reference”.
In this example, mirroring (from the right side) was performed to create the object.
In this example, the goal is to implant the object in the patient.
<Content of Table C.7-11b "Image Pixel Macro Attributes" not shown>
In this example, the patient will shortly be undergoing a complex cardiac surgery. A 3D manufacturing model (encoded in binary STL) was created to manufacture a surgical planning aid representing the patient's unique anatomy.
To begin, a series of CT images (CT-02) and a series of MR images (MR-01) were registered using CT-02's frame of reference as the base coordinate system and then fused. An initial version of the model was derived and reviewed by the surgical team, who requested that some of the anatomy surrounding the heart be removed. A second version of the model was created on July 16, 2017 at 1:04:34 PM and then stored in a DICOM instance at 1:33:01 PM. The CT and MR data were acquired at earlier dates.
The Encapsulated STL file shown in this example is the second version.
Both versions of the STL were created in the coordinate system of CT-02, so they all share the same Frame of Reference UID value.
Note: Mapping to other Frames of Reference of secondary source series would be handled via registration objects.
A preview image (optional) showing the rendered 3D object was created and included with the encapsulated STL as an icon image.
The creator of the model inscribed the patient's medical record number on a side of the model to avoid the possibility of a wrong patient error.
Table IIII.2-1. Fused CT/MR Derived Encapsulated STL Example
Remarks from the table:
A sequence referencing both the CT-02 and MR-01 source images is present, because both were used.
The purpose of reference codes are drawn from CID 7060 “Encapsulated Document Source Purposes of Reference”.
In this example, the goal is to help plan the surgery, so the value is "Planning Intent".
<Content of Table C.7-11b "Image Pixel Macro Attributes" not shown>