In recent years, with the continuous development of AI technology, deep-learning-based AI systems have focused mainly on the analysis of mammographic X-ray images; comparatively few studies address MRI, ultrasound, or digital breast tomosynthesis. Some domestic medical companies have also introduced a new generation of deep-learning mammography (molybdenum-target) AI systems, which achieve over 90% accuracy in detecting breast masses and calcifications, nearly matching the level of medical imaging experts. In discriminating benign from malignant lesions, these new models reach 87% sensitivity and over 90% specificity, in some cases exceeding expert performance.
Although AI systems for mammography have demonstrated good diagnostic performance in testing, in-depth research on breast imaging application scenarios is lacking. Algorithm engineers have not effectively identified the pain points and difficulties of clinical diagnosis, so mammography AI systems fail to accurately reflect the real needs of imaging physicians.
The auxiliary diagnostic system described here is built on mammography and realizes the diagnosis and treatment of breast disease through close "AI + clinical" cooperation. The protocol follows the daily working pattern of imaging physicians. Drawing on years of accumulated hospital experience, medical imaging experts accurately identify and label all breast masses, calcifications, architectural distortions, and other features on the images. The program produces a complete description of the shape, size, density, and nature of breast hyperplasia, lesions, and calcifications, preparing key decisions for clinical intervention; this greatly improves the work efficiency of imaging physicians and reduces the incidence of missed diagnosis and misdiagnosis.
1. Automatically parse patient and image details from uploaded image resources.
2. Provide a query function based on fields such as device type, examination site, image number, examination time period, and upload time period.
3. Allow adjustment of the image's greyscale, window width, window level, etc., as required.
4. Support step-by-step interpretation, with the ability to mark nodule positions and add remarks using different labelling tools.
5. Automatically calculate the size, density, pixel statistics, and other important measurements of each lesion, so that the physician can make an accurate judgement on its morphology, nature, and location.
6. Provide statistics on suspected lesions.
7. Record the uploaded images, their detailed information, and lesion findings in a database to support later lesion follow-up and clinical diagnosis.
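The query function in item 2 can be sketched as a simple filter over stored image records. This is an illustrative sketch only; the record fields and function names are assumptions, not the system's actual schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ImageRecord:
    # Hypothetical record fields mirroring the queryable attributes in item 2
    device_type: str   # e.g. "mammography"
    body_part: str     # examination site
    image_no: str      # image number
    exam_date: date
    upload_date: date

def query(records, device_type=None, body_part=None, exam_from=None, exam_to=None):
    """Return records matching every supplied filter; None means 'match any'."""
    out = []
    for r in records:
        if device_type is not None and r.device_type != device_type:
            continue
        if body_part is not None and r.body_part != body_part:
            continue
        if exam_from is not None and r.exam_date < exam_from:
            continue
        if exam_to is not None and r.exam_date > exam_to:
            continue
        out.append(r)
    return out

records = [
    ImageRecord("mammography", "breast", "IMG001", date(2023, 1, 5), date(2023, 1, 6)),
    ImageRecord("ultrasound", "breast", "IMG002", date(2023, 2, 1), date(2023, 2, 2)),
]
hits = query(records, device_type="mammography", exam_from=date(2023, 1, 1))
```

In a production system these filters would be pushed down to the database as a WHERE clause rather than applied in application code.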
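The window width/level adjustment in item 3 is a standard linear greyscale mapping. A minimal sketch, assuming raw pixel values in a NumPy array and illustrative function names (the actual viewer implementation is not specified in the source):

```python
import numpy as np

def apply_window(pixels: np.ndarray, center: float, width: float) -> np.ndarray:
    """Map raw greyscale values to 8-bit display values with a linear
    window: values below (center - width/2) display as 0, values above
    (center + width/2) as 255, and values in between scale linearly."""
    low = center - width / 2.0
    high = center + width / 2.0
    windowed = np.clip(pixels.astype(np.float64), low, high)
    return ((windowed - low) / (high - low) * 255.0).astype(np.uint8)

# Example: a window centred at 2048 with width 1024 on 12-bit data
raw = np.array([0, 1536, 2048, 2560, 4095])
display = apply_window(raw, center=2048, width=1024)
```

Narrowing the window width increases displayed contrast within the chosen intensity band, which is why physicians adjust it per image.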
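The automatic lesion measurements in items 5 and 6 could be computed from a binary lesion mask over the image, for example physical area from the pixel count and pixel spacing, and density from the mean grey value. A hedged sketch with hypothetical names; the source does not specify how the system derives these quantities:

```python
import numpy as np

def measure_lesion(image: np.ndarray, mask: np.ndarray, pixel_spacing_mm: float) -> dict:
    """Basic per-lesion measurements.

    image -- 2-D greyscale pixel values
    mask  -- boolean array, True inside the lesion
    pixel_spacing_mm -- physical edge length of one pixel, in millimetres
    """
    pixel_count = int(mask.sum())
    area_mm2 = pixel_count * pixel_spacing_mm ** 2   # physical area of the lesion
    mean_density = float(image[mask].mean())         # mean grey value inside the mask
    ys, xs = np.nonzero(mask)
    bbox = (xs.min(), ys.min(), xs.max(), ys.max())  # bounding box for localisation
    return {"area_mm2": area_mm2, "mean_density": mean_density, "bbox": bbox}

# Toy 3x3 image: the bright pixels (>100) form one lesion
img = np.array([[10, 10, 10],
                [10, 200, 220],
                [10, 210, 10]])
msk = img > 100
stats = measure_lesion(img, msk, pixel_spacing_mm=0.1)
```

Per-lesion dictionaries like this could then be aggregated to produce the suspected-lesion statistics of item 6 and stored with the record of item 7.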