Enlitic, a medical start-up selected by MIT Technology Review as one of the world’s “50 Smartest Companies,” and Capitol Health, the fastest-growing radiology service provider in Australia, recently announced a landmark milestone for human health.
For the first time, real patients will directly benefit from deep learning at scale, across a whole healthcare network, through diagnostics that are many times faster, more accurate, and less expensive than legacy approaches. Together, the two companies will bring deep learning-supported diagnostics to the Australian and Asian healthcare markets.
Capitol Health is also leading a $10 million Series B investment round in Enlitic, a modern machine learning company driving striking improvements in patient care – across everything from accurately detecting and classifying tumors in CT scans to quickly finding fractures in X-rays. CNN ran a short TV segment on a breakthrough Enlitic developed in early-stage lung cancer detection.
What is this deep learning all about?
Considered the next step toward artificial intelligence, deep learning is an approach to machine learning that is revolutionizing what is possible for computers to achieve.
“Deep learning is a way of doing machine learning – that is, programming computers by having them learn from examples rather than laying out the steps one at a time – which is loosely based on the biological processes in the brain, and continually improves as you provide it with more data or more computational power,” Enlitic’s founder and CEO Jeremy Howard told us.
In his 2014 TED Talk, entitled “The wonderful and terrifying implications of computers that can learn,” Jeremy, one of the top data scientists in his field, said, “Deep learning is an algorithm inspired by how the human brain works, and as a result it is an algorithm which has no theoretical limitations.”
“The more data you give it, and the more computation time you give it, the better it becomes.”
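As a concrete illustration of “learning from examples” rather than hand-written rules, here is a minimal, hypothetical sketch in Python using PyTorch. The toy data, network size and labels are all invented for demonstration; this is not Enlitic’s system or data.

```python
# Toy illustration: a tiny neural network that learns a labelling rule purely
# from examples, instead of being programmed with that rule step by step.
import torch
import torch.nn as nn

# Fake dataset: 200 "cases", each with 64 numeric features, labelled 0 (normal) or 1 (abnormal).
torch.manual_seed(0)
x = torch.randn(200, 64)
y = (x[:, 0] + 0.5 * x[:, 1] > 0).long()  # the hidden rule the network must discover

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Training loop: performance improves purely from exposure to labelled examples.
for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

accuracy = (model(x).argmax(dim=1) == y).float().mean().item()
print(f"training accuracy: {accuracy:.2f}")
```

In a real medical setting the inputs would be full images and the network far deeper, but the principle is the same: the model gets better as it is given more examples and more computation.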
Over the last three years, deep learning has been adopted in numerous fields, from giving Spanish and English speakers the ability to communicate in real time through automated translation, to improving auto safety and driving efficiency through driverless cars.
But until now, medical care has lagged behind
Enlitic has for the first time tailored deep learning to meet the immense challenges of medical data, partnering with a leading healthcare provider to ensure a broad and immediately positive effect on patient care. Unnecessary, expensive and dangerous procedures are avoided, saving both valuable dollars and precious lives.
In a world first, doctors can now use the predictive power of deep learning to directly improve people’s medical outcomes.
For example, using Enlitic’s deep learning technology, doctors can improve their accuracy by over 50 per cent while also delivering time-sensitive results faster (the algorithms are not only more accurate, but also 50,000 times faster than manual reads by humans).
How will deep learning be used in healthcare?
Enlitic is applying this tech to radiology with Capitol.
Their medical deep learning tools are commercially integrated end-to-end and will power the workflows of hundreds of radiologists across Capitol Health’s large-scale care network.
Radiology is their initial target because the data is available, well stored, and comes with a source of ground truth they can leverage, such as pathology results.
Capitol plans to expand to other data types too, as deep learning applies to broad diagnostic areas and to any kind of unstructured data, such as pathology and genetics.
Enlitic’s software thinks like the human brain. In the case of radiology and digital imaging, Capitol’s specialty, the plan is to hunt for diseases that might previously have gone undetected.
It’s about better patient care. Existing digital imaging databases can now be scanned to see what might have been “missed”. The computer will know what to look for.
Enlitic’s software leverages Capitol’s vast image archives across all radiology modalities (Ultrasound, CT, MRI, PET, X-Ray) to accelerate training of deep learning algorithms, and to help radiologists prepare more accurate diagnoses of thousands of diseases and afflictions.
This software is going to add a level of scrutiny that has previously been unavailable. Capitol now has the capability to map the entire body. They had the data; now they have the technology too.
How exactly is Enlitic’s tech used in radiology?
Lung cancer kills 80 to 90 percent of all patients diagnosed at a late stage, and it is one of the hardest cancers to detect in medical images. If the cancer is caught early, survival is nearly 10 times more likely.
For the first time ever, Enlitic adapted deep learning to automatically detect lung cancer nodules in chest CT images 50 percent more accurately than an expert panel of thoracic radiologists.
The reduction of false negatives and the ability to detect early-stage nodules saves lives. The simultaneous reduction of false positives leads to fewer unnecessary and often costly biopsies, and less patient anxiety.
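As a rough, hypothetical illustration of what cutting false negatives and false positives means in practice, the short Python sketch below computes sensitivity (the share of real cancers caught) and specificity (the share of healthy patients correctly cleared) from made-up screening counts; none of these numbers come from Enlitic’s studies.

```python
# Hypothetical screening counts, invented purely for illustration.
true_positives  = 45    # cancers correctly flagged
false_negatives = 5     # cancers missed (the errors that cost lives)
true_negatives  = 900   # healthy scans correctly cleared
false_positives = 50    # healthy scans incorrectly flagged (lead to unnecessary biopsies)

sensitivity = true_positives / (true_positives + false_negatives)   # ~0.90: share of real cancers caught
specificity = true_negatives / (true_negatives + false_positives)   # ~0.95: share of healthy patients cleared

print(f"sensitivity: {sensitivity:.2f}, specificity: {specificity:.2f}")
```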
Enlitic benchmarked its performance against the publicly available, NIH-funded Lung Image Database Consortium data set, demonstrating its commitment to transparency.
Enlitic has also achieved recent breakthroughs in detection of extremity (e.g. wrist) bone fractures, which are very common yet extremely difficult for radiologists to reliably detect. Errors can lead to improper bone healing, resulting in a lifetime of alignment issues.
These fractures are often represented by only a 4×4-pixel region in a 4,000×4,000-pixel X-ray image, pushing the limits of computer vision technology.
In detection of fractures, Enlitic achieved 0.97 AUC (the most common measure of predictive modeling accuracy), more than 3 times better than the 0.85 AUC achieved by leading radiologists and many times better than the 0.71 AUC achieved by traditional computer vision approaches.
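For readers unfamiliar with AUC, it measures how well a model ranks positive cases above negative ones: 1.0 is a perfect ranking and 0.5 is no better than guessing. The minimal Python sketch below shows the calculation with scikit-learn; the labels and scores are invented for illustration and are not Enlitic’s data.

```python
# Illustration of AUC (area under the ROC curve) with made-up data.
from sklearn.metrics import roc_auc_score

y_true  = [0, 0, 1, 1, 0, 1, 0, 1]                           # 1 = fracture present, 0 = no fracture
y_score = [0.10, 0.30, 0.80, 0.90, 0.65, 0.70, 0.20, 0.60]   # model's predicted probabilities

# AUC is the fraction of (positive, negative) pairs the model ranks correctly.
print(roc_auc_score(y_true, y_score))  # 0.9375 for these invented scores
```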
Enlitic was able to support analysis of thousands of image studies in a fraction of the time needed for a human to analyze a single study.
“We have not seen this level of improvement in radiology since Roentgen’s application of X-rays to medicine,” said Rodney Sappington, PhD, Enlitic’s VP of Radiology and a former executive of several leading radiology companies.
What lies in the future of deep learning?
We caught up with Jeremy for a quick interview, and here is more of what he had to share with Anthill about the fascinating phenomenon that is deep learning.
Lung cancer and fractures aside, what other afflictions will this deep learning help to better diagnose, and how?
What is unique about deep learning is that it can deal with any kind of data – image, audio, natural language – without any special hand engineering or manual programming for each data type.
Therefore, as soon as we have data about a disease or from a modality, deep learning immediately knows how to use it, so we can handle every affliction for which we have the images – and the information about which of those images have or don’t have that disease.
It’s for any disease for which medical imaging is an important diagnostic tool. For example, highly time-sensitive conditions like strokes, where you have a three-hour window before there’s severe permanent damage, could trigger an automatic alert as soon as the scan comes off the scanner.
There are also asymptomatic things, like aneurysms – things that you’re never looking for but that can kill you – which will be found automatically even though nobody was looking for them.
And every kind of tumor, such as breast cancer; prostate cancer, where MRIs have recently become important; and neuro-degenerative diseases like MS – these are all things we have looked at, and where medical imaging is a critical diagnostic component.
Beyond language, auto safety and healthcare, in what other fields do you think deep learning can be applied?
The opportunities for deep learning really are everywhere.
There is a company called Descartes Labs, which is doing agricultural crop management by using deep learning to analyze satellite imagery. You can use satellite imagery and deep learning for intelligence, to find signs of troop movements or terrorist activity.
And it’s being used by music streaming companies to automatically generate playlists or make music recommendations; by Google for speech recognition; and in the oil and gas industry to analyze seismic data.