Year: 2020 | Volume: 6 | Issue: 2 | Page: 79-84
Design and development of simulation training system for positron emission tomography/computed tomography disassembly and maintenance
Yi Xie1, Jing Wei1, Xiaoxuan Xie1, Nianyu Zhang1, Yanming Han1, Xuan Xu1, Hao Cui1, Meng Yu2, Meilin Shi1
1 School of Medical Imaging, Xuzhou Medical University, Xuzhou, Jiangsu Province, China
2 Department of Nuclear Medicine, The Affiliated Hospital of Xuzhou Medical University, Xuzhou, Jiangsu Province, China
Date of Submission: 26-Apr-2021
Date of Decision: 22-Jun-2021
Date of Acceptance: 24-Jun-2021
Date of Web Publication: 19-Nov-2021
School of Medical Imaging, Xuzhou Medical University, Xuzhou 221004, Jiangsu Province, China
Source of Support: None, Conflict of Interest: None
In view of the shortage of equipment resources and the impossibility of on-site disassembly and assembly training in current positron emission tomography/computed tomography (PET/CT) teaching, we designed a PET/CT simulation training system based on the Unity 3D engine and the C# language, with models built in 3ds Max and Cinema 4D. The system provides simulated interaction with the equipment and thus a virtual environment for training in equipment disassembly and maintenance. In practical application, users showed strong interest in learning, and their overall operation and maintenance competence improved significantly, indicating that the system effectively improves teaching quality and reduces teaching costs. Taken together, the system is worth wider adoption.
Keywords: Positron emission tomography/computed tomography, Simulation interaction, Training system
|How to cite this article:|
Xie Y, Wei J, Xie X, Zhang N, Han Y, Xu X, Cui H, Yu M, Shi M. Design and development of simulation training system for positron emission tomography/computed tomography disassembly and maintenance. Digit Med 2020;6:79-84
Introduction
Positron emission tomography/computed tomography (PET/CT) is an integrated nuclear medicine imaging modality that combines functional metabolic imaging with anatomical structure imaging. As a new and promising imaging technology, PET/CT has seen rapidly increasing demand in recent years, and technical personnel in this field need to be proficient in its operation and maintenance. At present, practical PET/CT training in colleges usually has to be carried out in collaboration with hospitals, which imposes many limits: because of the high cost, hazardous radiation, and heavy clinical workload of the equipment, students can only observe their teacher's demonstration of disassembly and maintenance without gaining hands-on experience. This teaching paradigm leads to a disconnection between theory and practice, deprives learners of chances to practice, and greatly limits the cultivation of practical ability.
Taking PET/CT as the research object, we developed a simulation training system with the help of computer graphics, virtual reality, augmented reality, human-computer interaction techniques, etc. The system creates a highly realistic virtual environment, featuring immersion, interaction, imagination, and intelligence, that lets users experience the disassembly and maintenance training process. The system greatly stimulates users' interest and potential in learning and helps them apply theoretical knowledge in practice, making them more competent for work. Moreover, it achieves the transformation from inculcation-style education to experiential education.
Design of the System
First, we used 3ds Max and Cinema 4D to build a scaled-down 3D model of PET/CT; we then imported the model library into Unity 3D to develop the virtual reality interactive functions and implemented the augmented reality (AR) function with the Vuforia Software Development Kit (SDK). Finally, we compiled the project into an executable (*.exe) file [Figure 1].
Model Building and Mapping
Since the degree of realism directly affects the teaching effectiveness of the system, the 3D model is the foundation of the simulation training system throughout the whole process and should be given great attention. Therefore, with permission, our team conducted field research in the nuclear medicine department of the affiliated hospitals of our university. We photographed the equipment; observed the structure of the PET and CT systems, especially the arrangement of the detector components, the shape of the tube, and the design of the photomultiplier tubes and X-ray detectors; recorded the numbers of ring detectors, scintillation crystals, photomultiplier tubes, and CT detector acquisition units; and noted other valuable parameters. Moreover, we consulted professional engineers about the primary circuit, regular operation, essential maintenance, and the practical diagnosis and solution of common faults to verify the details relevant to modeling.
The 3D model of this system mainly includes the external frame, the detector system, and the rotating anode X-ray tube. To improve modeling efficiency, we used the software to classify the models before carrying out a series of operations such as lighting layout, scale adjustment, and rendering. Efficient storage and scheduling of the models are realized through a hierarchical structure and model optimization technology [Figure 2]. For authenticity, we applied the "U Tile-V Tile-W Tile Map (UVW Map)" modifier to unfold the texture coordinates; we then used Photoshop to design the metal texture map and adjusted the material properties.
The detector system is mainly composed of detector components, scintillation crystals, and photomultiplier tubes.
The detection scanner is usually circular, consisting of 12 detector electronics assemblies (DEAs) arranged clockwise through 360° in the frame. The challenge in modeling the detection ring lies in its complexity and regularity. To give a single DEA model a high degree of realism and a sense of hierarchy, we deliberately modified the edge contours of the model with "bevel" and "extrude." After completing the DEA model, we used "stitch and sew" to merge redundant surfaces and optimized the final rendered model without affecting the visual result. We then chose moderate "array" parameters, including quantity and radius, to ensure the uniformity of each unit of the detection ring before further elaboration [Figure 3].
The photomultiplier tube is an indispensable part of the detection system and generally comprises an entrance window, a photocathode, a dynode system, and an anode. Exploiting its symmetry, we modeled it from the local parts to the whole. The housing of the photomultiplier tube is a pseudocylinder; to avoid redundant render surfaces, we first used the "pen" tool to draw several dashed "circles" as molds and then applied the "loft" command to generate the model. The symmetrical structures of the photomultiplier tube, such as the needles and the multiplier poles, were modeled as single units, and the "array" command was then used to generate the subobjects. After modeling the shell and the panel, all parts were converted into editable polygons and refined with "bend," "extrude," and "weld" before materials were added and the model was rendered [Figure 4].
The structure of the rotating anode X-ray tube
The rotating anode X-ray tube mainly includes a rotating anode, a cathode, a target, a filament, rotors, and a glass wall. We adopted a nested structure: given the specific hierarchy, multiple inner structure models were placed inside the tube model, and we then inserted the cathode, the rotating anode, the rotor, and the other models in order [Figure 5].
Realization of the User-Friendly Interface and Interactive Training
As simulation training software, this system should have a clear, user-friendly, and maneuverable interface, and its interactive training must be immersive, intelligent, and effective for teaching. The user-friendly interface was built in Unity 3D by arranging components according to functional classification, striving to convey accurate information to users and avoid visual interference. Interactive functions are a meaningful way to realize training on the simulated equipment, so the system adopts the following interaction methods.
Scene switching interaction
This function works as follows: when the user clicks an action button, the current interface switches synchronously to the next interface. For example, when the user enters an account and password on the login screen, the system verifies them; if they match, the user is automatically transferred to the home screen. The system contains multiple interface switches, which require scene management to optimize loading time. We added the corresponding Unity Graphical User Interface (UGUI) components, such as buttons, to the relevant scenes and used C# scripts to switch scenes.
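The scene-switching step described above can be sketched as a small Unity C# script; this is a minimal illustration, not the authors' actual code, and the class and scene names are hypothetical placeholders:

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;
using UnityEngine.UI;

// Hypothetical sketch: attach to a UGUI Button to switch scenes on click.
// "HomeScene" is a placeholder name that must be listed in Build Settings.
public class SceneSwitcher : MonoBehaviour
{
    [SerializeField] private string targetScene = "HomeScene";

    private void Start()
    {
        // Wire the click handler of the Button on the same GameObject.
        GetComponent<Button>().onClick.AddListener(LoadTarget);
    }

    private void LoadTarget()
    {
        // Asynchronous loading keeps the UI responsive during the transition.
        SceneManager.LoadSceneAsync(targetScene);
    }
}
```

A login button would call such a script only after the credential check succeeds.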
Model movement and rotation interaction
This function works as follows: long-press the left mouse button to drag the model freely in space, and long-press the right button to rotate the model about a fixed point. We first added a collider to the model to be moved, then set its movable range, and finally attached the movement code to the model. After the mouse input is recognized, the code reads the offsets along the mouse X-axis and Y-axis and uses "transform.Rotate" to rotate the model.
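A minimal sketch of this drag-and-rotate behavior, assuming a model with a Collider attached; the class name and speed constants are illustrative, not from the original system:

```csharp
using UnityEngine;

// Hypothetical sketch: left-drag moves the model, right-drag rotates it in place.
public class DragRotate : MonoBehaviour
{
    [SerializeField] private float moveSpeed = 0.1f;
    [SerializeField] private float rotateSpeed = 5f;

    // Fires each frame while the left button is held over this collider.
    private void OnMouseDrag()
    {
        float dx = Input.GetAxis("Mouse X");
        float dy = Input.GetAxis("Mouse Y");
        transform.Translate(dx * moveSpeed, dy * moveSpeed, 0f, Space.World);
    }

    private void Update()
    {
        if (Input.GetMouseButton(1)) // right button: fixed-point rotation
        {
            float dx = Input.GetAxis("Mouse X");
            float dy = Input.GetAxis("Mouse Y");
            // Rotate around world axes using the mouse offsets, as in the text.
            transform.Rotate(Vector3.up, -dx * rotateSpeed, Space.World);
            transform.Rotate(Vector3.right, dy * rotateSpeed, Space.World);
        }
    }
}
```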
Interaction of video and sound effects
This function works as follows: after adding immediate sound effects to scenes such as character walking and animated principle displays, we inserted case analysis videos into the equipment maintenance training module. We then used the video player component and the "audio source" component, with mounted C# scripts, to realize system sound effects and video playback.
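A sketch of how such media playback is typically wired in Unity, using the built-in AudioSource and VideoPlayer components; the field names are assumptions, and the actual assets would be assigned in the Inspector:

```csharp
using UnityEngine;
using UnityEngine.Video;

// Hypothetical sketch: plays a footstep clip and a maintenance-analysis video.
public class MediaController : MonoBehaviour
{
    [SerializeField] private AudioSource footstepSource; // "audio source" component
    [SerializeField] private VideoPlayer lessonPlayer;   // video playback component

    public void PlayFootstep()
    {
        // Avoid restarting the clip every frame while the character walks.
        if (!footstepSource.isPlaying)
            footstepSource.Play();
    }

    public void PlayLessonVideo()
    {
        // Renders to the render target configured on the VideoPlayer.
        lessonPlayer.Play();
    }
}
```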
Mouse-over text display interaction
This function works as follows: related information about a model is displayed when the mouse hovers over it. We used "OnGUI" (the Unity immediate-mode Graphical User Interface callback) to set the text format and display position, then added and edited colliders and attached the code. The "GUIStyle" was set to a moderate size, and the display position was placed at coordinates (100, 40) with the mouse as the origin. For accuracy, each model name was stored in its own script, and the colliders were shrunk as far as possible to fit the models. The code added here does not conflict with the model movement interaction code, so the two can be used in parallel.
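One way to sketch this hover label in Unity C#, assuming a Collider fitted to the model as described; the class name and the sample model name are placeholders:

```csharp
using UnityEngine;

// Hypothetical sketch: shows the model's name near the cursor on hover.
public class HoverLabel : MonoBehaviour
{
    [SerializeField] private string modelName = "Photomultiplier tube"; // placeholder
    private bool hovered;

    private void OnMouseEnter() { hovered = true; }
    private void OnMouseExit()  { hovered = false; }

    private void OnGUI()
    {
        if (!hovered) return;
        GUIStyle style = new GUIStyle(GUI.skin.label);
        // GUI space has an inverted y-axis relative to mouse space.
        Vector3 m = Input.mousePosition;
        // Offset of (100, 40) from the cursor, matching the layout in the text.
        GUI.Label(new Rect(m.x + 100, Screen.height - m.y + 40, 200, 30),
                  modelName, style);
    }
}
```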
Interaction between the change of view and character walking
This function works as follows: the user controls a virtual character roaming the scene from the first-person perspective with the keyboard. First, we created a capsule at the starting location and renamed it FPScontroller to represent the user's body. Second, we adjusted the transform properties of the first-person controller (FPScontroller) appropriately. We then set the scene's main camera as a subobject of the FPScontroller, removed the capsule collider in the Inspector panel, and added the character controller component. After this setup, we repeatedly ran the simulation to adjust the camera height. Finally, we wrote the mouse view control code and the keyboard walking control code and attached them to the FPScontroller to complete the view interaction.
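The walking and view control described above can be sketched as a single Unity C# script; this is an illustrative reconstruction assuming a CharacterController component and a child camera, not the authors' actual code:

```csharp
using UnityEngine;

// Hypothetical sketch of the FPScontroller behavior: WASD walking plus
// mouse look, with the camera as a child of the body capsule.
[RequireComponent(typeof(CharacterController))]
public class FPSController : MonoBehaviour
{
    [SerializeField] private Transform viewCamera; // the Main Camera child object
    [SerializeField] private float walkSpeed = 3f;
    [SerializeField] private float lookSpeed = 2f;
    private float pitch;

    private void Update()
    {
        // Keyboard walk: translate in the body's local frame, with gravity applied.
        CharacterController cc = GetComponent<CharacterController>();
        Vector3 move = transform.right * Input.GetAxis("Horizontal")
                     + transform.forward * Input.GetAxis("Vertical");
        cc.SimpleMove(move * walkSpeed);

        // Mouse look: yaw the body, pitch the camera, clamped to avoid flipping.
        transform.Rotate(0f, Input.GetAxis("Mouse X") * lookSpeed, 0f);
        pitch = Mathf.Clamp(pitch - Input.GetAxis("Mouse Y") * lookSpeed, -80f, 80f);
        viewCamera.localEulerAngles = new Vector3(pitch, 0f, 0f);
    }
}
```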
Design and Realization of the Main Modules of the System
Disassembly and assembly training module
This module includes a structure display part [Figure 6] and an animation display part [Figure 7] according to the actual training needs.
On entering the virtual scene, the user can observe or disassemble the gantry in the operation room from the first-person perspective. Clicking the "gantry" button activates the internal structure learning module, where the model can be zoomed in and rotated for more detail, accompanied by text descriptions.
The animation display part comprises the imaging principle and the disassembly and assembly of the whole machine. The user can click the "disassembly" or "assembly" button to play the entire process of decomposition or restoration. To reduce the complexity of the animation work, we edited animation clips directly in Unity 3D during production; simpler animations were rendered in 3ds Max and then imported into Unity 3D.
Maintenance training module
Common PET/CT failures generally involve the gantry, image reconstruction, artifacts, the conveyor belt, and the cooling system. The system has built-in fault diagnosis test programs, indicator light state changes, normal and fault sounds, and the key points of multimeter and oscilloscope use in maintenance. In the virtual maintenance scene, when the user clicks the "fault" button, a dialog box pops up demonstrating the fault cause and its solutions. In addition, the operation training module contains a series of maintenance analysis videos captured on-site, such as "quality control" and "carbon brush lifespan judgment." This part of the system converts the clinical working experience of engineers into high-quality training resources, combining the virtual and the real to consolidate knowledge for a better training effect.
Augmented reality module
This module is adapted to mobile terminal devices. After downloading and installing the software, the user is asked to grant the camera permission. When the mobile camera captures the illustrations in the textbook, the screen shows the corresponding 3D model of the device.
Class assessment module
This module provides a training question bank of 300 multiple-choice questions, enabling users to test themselves and receive immediate feedback. Each time, the system randomly extracts 30 questions from the database for a training test. When the user answers, the system compares the answer with the key: if they match, it prompts "The answer is correct;" if not, it prompts "The answer is wrong" together with the right answer. Meanwhile, the correct rate is updated in the upper-right corner of the interface. The core of the answering function is its logical order and control code. Since the answer module does not involve character movement, the main camera interaction need not be considered; we used only the canvas, bgpanel, ques index text, accuracy text, tip correct text, question content, select toggles, and buttons.
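The core quiz logic (random extraction of 30 questions, answer checking, and a running correct rate) can be sketched in plain C#, independent of the UI components; the class and member names are assumptions for illustration:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch of the quiz logic: draw 30 random questions from the
// 300-question bank, check each answer, and track the correct rate.
public class QuizSession
{
    private readonly List<(string Text, int CorrectChoice)> drawn;
    private int answered, correct;

    public QuizSession(IList<(string Text, int CorrectChoice)> bank, int count = 30)
    {
        var rng = new Random();
        // Random extraction without replacement, as described in the text.
        drawn = bank.OrderBy(_ => rng.Next()).Take(count).ToList();
    }

    // Returns true if the user's choice matches the answer key.
    public bool Answer(int questionIndex, int choice)
    {
        answered++;
        bool ok = drawn[questionIndex].CorrectChoice == choice;
        if (ok) correct++;
        return ok;
    }

    // Correct rate shown in the upper-right corner of the interface.
    public double CorrectRate => answered == 0 ? 0 : (double)correct / answered;
}
```

The UI layer would then map each "select toggle" click to `Answer` and display the prompt text accordingly.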
Conclusion
We took PET/CT as the object and developed a simulation training system [Figure 8] using computer graphics, virtual reality, augmented reality, human-computer interaction, and other technologies. In the modeling process, the geometric model was kept unchanged while the physical model was simplified to balance realism and efficiency during interaction. In the interactive programming process, virtual scenes were established, modular development was adopted to save development resources, and intelligent interaction technology was used to realize intelligent training and interactive learning. Users can break through the limitations of conventional conditions such as venue, equipment, and time to undertake independent training. The system enables users to understand the equipment structure, master maintenance skills, and improve practical ability. It aims to use simulation technology to solve the difficulty of practical training on medical imaging equipment; guided by the industry's demand for talent, it focuses on skill improvement and quality training.
|Figure 8: Construction of simulation training system for positron emission tomography/computed tomography|
This work was supported by the College Students' Innovative Entrepreneurial Training Plan Program (202010313007Z), Jiangsu Qinglan Project, National Natural Science Foundation of China (81602533), Special funds for the Construction of First-class Specialties of Xuzhou Medical University (Xjyylzx202004), and National Demonstration Center for Experiment Education of Basic Medical Sciences (Xuzhou Medical University, 221002, China). We are grateful to the affiliated hospital of Xuzhou Medical University for their helpful technical assistance.
Financial support and sponsorship
Conflicts of interest
There are no conflicts of interest.