Developing the next generation of blood tests
A few decades ago, researchers changed the course of photography by replacing film with digital sensors. The advantages quickly became evident: A photographer could see the results almost instantaneously, and cameras could be made far smaller than their film equivalents.
Now researchers developing health diagnostic tools are tackling another imaging challenge – removing the lens altogether and letting a sensor alone gather information. The work promises to have a similar effect in hematology: Replacing a slow system that relies on clinical laboratory analyzers with tiny chip-based devices that deliver comparable results in a matter of minutes.
Sensors that can resolve images down to the scale of a single molecule or smaller already exist. But unlike an ordinary photograph, the image of a blood sample can’t be read quickly and accurately by a human. That’s where Rene Vidal comes in.
Vidal, a professor in the Johns Hopkins Department of Biomedical Engineering and director of the Vision Dynamics and Learning Lab, is working with Robert Bollinger, professor of medicine, to spearhead an effort to, in essence, teach these specialized testing devices to detect and report on what’s contained in a few drops of blood. They are working with the Belgian research institute imec, which is responsible for the microchip-sized sensors, as part of a joint project funded by a company called miDIAGNOSTICS. Vidal works through the heavy computational problems critical to automatically analyzing the images the device captures.
The way that the sensors will eventually “see” what’s in the blood samples is similar to a process that the human brain employs to detect patterns. “There are millions of neurons in our brain, but for some tasks, only a subset of neurons is responsible,” says Vidal. “For example, some neurons in the visual cortex are dedicated to identifying horizontal edges, such as the boundaries of an object.”
Using a technique called “sparse coding,” researchers can build computational systems that detect a specific pattern and ignore others. Computer algorithms learn a collection of such patterns, forming what experts call a “dictionary.” But rather than defining words, these dictionaries define visual patterns: This is a white blood cell, that is a red blood cell. Then the dictionaries get more refined: This is a monocyte, that is a lymphocyte.
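For the technically curious, the core computation can be sketched in a few lines of Python. The sketch below uses the classic ISTA algorithm to find which dictionary patterns best explain a small image patch; the dictionary, the patch, and every parameter value here are illustrative assumptions, not details of the team’s actual system.

```python
import numpy as np

def sparse_code(x, D, lam=0.1, n_iter=200):
    """Find a sparse vector a with D @ a ~= x, via ISTA.

    x   : observed signal (e.g., a flattened image patch)
    D   : dictionary whose columns are learned visual patterns
    lam : sparsity weight -- larger values zero out more coefficients
    """
    a = np.zeros(D.shape[1])
    step = 1.0 / np.linalg.norm(D, 2) ** 2   # safe gradient step size
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)             # gradient of 0.5*||D a - x||^2
        a = a - step * grad
        a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)  # soft-threshold
    return a

# Toy usage: a random dictionary of 8 "patterns" for a 16-pixel patch.
rng = np.random.default_rng(0)
D = rng.standard_normal((16, 8))
D /= np.linalg.norm(D, axis=0)           # unit-norm columns
x = 0.8 * D[:, 2] + 0.3 * D[:, 5]        # a patch built from two patterns
print(np.round(sparse_code(x, D), 2))    # largest entries land at positions 2 and 5
```

The nonzero entries of the recovered vector say which dictionary patterns – which cell types, in the blood-testing setting – are present in the patch, which is exactly the “detect this, ignore that” behavior described above.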
The whole time, the sensor has to screen out irrelevant items – the computational equivalent of narrowing the depth of field and knocking out the entire background. At the same time, it needs to distinguish blood cells that sit very close to one another, which means reconstructing a holographic image as well.
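The article doesn’t specify how the device turns raw sensor data into a focused picture, but a common choice for lens-free holograms is the angular spectrum method, which numerically refocuses the recorded pattern to a chosen depth. The sketch below is one plausible version of that step; the wavelength, pixel pitch, and depth are made-up values.

```python
import numpy as np

def refocus_hologram(hologram, wavelength, dx, z):
    """Numerically refocus an in-line hologram to depth z
    (angular spectrum method -- an assumed, standard approach)."""
    ny, nx = hologram.shape
    fx = np.fft.fftfreq(nx, d=dx)                # spatial frequencies
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    k = 2 * np.pi / wavelength
    arg = k**2 - (2 * np.pi * FX)**2 - (2 * np.pi * FY)**2
    mask = arg >= 0                              # keep propagating waves only
    kz = np.sqrt(np.where(mask, arg, 0.0))
    H = np.where(mask, np.exp(1j * kz * z), 0)   # free-space transfer function
    return np.abs(np.fft.ifft2(np.fft.fft2(hologram) * H))

# Made-up numbers: 500 nm light, 1.1 um pixels, refocus 0.4 mm above the chip.
image = refocus_hologram(np.random.rand(512, 512), 500e-9, 1.1e-6, 0.4e-3)
```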
Finally, to scan an entire image at once efficiently, the team adds another computational technique called convolutional sparse coding. Although the mechanics are not identical, the effect is something like a photo application’s face recognition feature, which zips through pictures and identifies people based on small elements of their faces.
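In code, the change from the patch-based sketch above is that the dictionary becomes a bank of small filters sliding across the whole image, producing one “where does this pattern occur” map per filter. Here is a minimal ISTA-style version; the filters, step size, and other settings are again illustrative assumptions rather than the project’s actual implementation.

```python
import numpy as np
from scipy.signal import fftconvolve

def conv_sparse_code(image, filters, lam=0.05, step=0.05, n_iter=100):
    """Convolutional sparse coding via ISTA (a sketch, not the team's code).

    Finds one coefficient map per filter so that the sum of filter-map
    convolutions approximates the image; nonzero map entries mark where
    each learned pattern occurs.
    """
    maps = [np.zeros_like(image) for _ in filters]
    for _ in range(n_iter):
        recon = sum(fftconvolve(a, f, mode="same") for a, f in zip(maps, filters))
        resid = recon - image
        for k, f in enumerate(filters):
            # Gradient w.r.t. map k: residual correlated with filter k
            # (convolution with the flipped filter; assumes odd-sized filters).
            grad = fftconvolve(resid, f[::-1, ::-1], mode="same")
            a = maps[k] - step * grad        # fixed step size -- a rough choice
            maps[k] = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)
    return maps
```

Because every filter is checked at every pixel in a single pass, this is the piece that lets the system sweep a full image at once rather than patch by patch – the same economy of effort behind the face recognition comparison.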
The technology will dramatically speed up diagnosis compared with current practice. “You go to the doctor, they say you need a blood test and they send you off to the lab,” says Vidal. “Then it takes a day to get the results. The doctor gets a diagnosis, then you have to go back for another visit. With our device, the goal is to do it in a matter of minutes at the doctor’s office just by extracting some blood from a finger.”
It’s not just about convenience. Because hospitals and clinics are full of ill people, every additional visit exposes a patient to another risk of infection.
In underdeveloped nations, the low cost and portability of the lens-less sensors that Vidal and his partners are developing could allow millions of people lacking access to health centers to be tested and given appropriate treatment.
And that – no matter how it’s generated – is a tremendously compelling vision.
– Michael Blumfield