X-rays were initially used for research in atomic physics and for medical diagnostics and therapy. Their ability to reveal structures inside an object, even one with an opaque surface, was the driving feature of the scientific and technical development of X-rays. Nowadays, besides their proven medical usefulness, X-rays are used to examine technical structures, and there are telescopes that map X-rays from our Galaxy and the wider universe. Every radiological technician who starts in this profession learns to take X-rays of common subjects like flowers, animals or teddy bears.
In the digital era, X-ray images are obtained using sensors, where film was used historically. The sensors in the medical radiological field have dimensions such as 24 cm x 30 cm or 43 cm x 43 cm, with a corresponding spatial resolution between 70 µm and 140 µm. A typical high-end camera used by a professional or advanced amateur photographer has a pixel size between 4 and 8 µm. Photographers might therefore well wonder whether precise imaging is possible at all with such a coarse pixel size. Let’s look at this a little more closely.
X-rays, like visible light, can be characterized by their energy or wavelength. Shorter wavelengths correspond to higher energy. The capability to penetrate an opaque structure increases with energy. If you think of photons as particles, then the smaller, higher-energy particles penetrate an object more easily. An overview of this relationship is given in the table shown here:
To make this clearer, here is a series of X-ray images taken with increasing energy. The first image was obtained at 40 kV, which corresponds to a wavelength of 0.031 nm. Our eyes can only see wavelengths between 400 nm (blue) and 750 nm (red), so photons at 40 kV cannot be seen with the naked eye. The peripheral parts of the Nautilus shell are clearly depicted. A photographer would classify the circle at the center of the shell as „blown out“. In fact, it is not blown out: the radiation simply cannot penetrate the thick central structure, because the wavelength of the X-rays is too long here, that is, the energy is too low.
Let’s go to shorter wavelengths, implying higher energies. Using 50 kV, or a wavelength of 0.024 nm, gives more structure to the central parts. The photographic impression of a „blown out“ center is reduced. However, the peripheral parts of the Nautilus now show a loss of intensity and a more greyish impression. At this stage the effect is still subtle and might be regarded as acceptable overall.
Taking this further, we can go up to 60 kV, or down to 0.0207 nm. The center is now close to perfectly „exposed“, with some detail apparent, although some smaller structures are still not resolved. The intensity loss at the peripheral parts has increased and is now pronounced. A photographer would clearly regard the periphery as „underexposed“.
The last example in this direction of higher energy and shorter wavelength is 70 kV, or a wavelength of 0.0177 nm. The photons’ wavelength is now only 57% of that at 40 kV; you may think of this as „smaller“ photons. The result is a clearly depicted core with a complete loss of peripheral structure. A photographer would have every reason to worry about „underexposure“ and loss of detail everywhere but the center.
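The kV-to-wavelength values quoted above follow directly from the relation λ = hc/E, with hc ≈ 1239.84 eV·nm. The small sketch below, treating the tube voltage in kV as the peak photon energy in keV (an assumption made here for illustration), reproduces the numbers in the text:

```python
# Convert an x-ray tube voltage (kV) to the shortest emitted
# wavelength via lambda = h*c / E, with h*c ≈ 1239.84 eV·nm.
def kv_to_wavelength_nm(kilovolts: float) -> float:
    electron_volts = kilovolts * 1000.0  # peak photon energy in eV
    return 1239.84 / electron_volts      # wavelength in nm

for kv in (40, 50, 60, 70):
    print(f"{kv} kV -> {kv_to_wavelength_nm(kv):.4f} nm")
# 40 kV -> 0.0310 nm, 50 kV -> 0.0248 nm,
# 60 kV -> 0.0207 nm, 70 kV -> 0.0177 nm
```

These match the values given for the four Nautilus exposures, including the 57% ratio between 0.0177 nm and 0.031 nm.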
What we’ve seen here is that the capability of X-rays to penetrate an object depends on their energy. At shorter wavelengths the X-rays pass through the thin peripheral parts of the Nautilus shell almost without interaction, so the sensor is „blown out“ there; at 70 kV the central parts are much better resolved, but the periphery is too dark. The energy to use for an X-ray image therefore often depends on which portion of the subject is most important to capture.
In a certain sense, compositing images taken at different energies into one image is comparable to the HDR process in visible-light photography. In visible light, however, it is the intensity that plays the essential role, not the photon energy, which merely determines the color. When an object requires different energy levels for an accurate X-ray representation, I would therefore also speak of a High Dynamic Range image.
If combined, the four exposures shown provide a beautiful, nearly weightless image:
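One plausible way to composite such a stack of exposures, sketched here with numpy and synthetic stand-in arrays, is to weight each exposure per pixel by how close it is to mid-grey, in the spirit of exposure fusion in HDR photography. The mid-tone weighting is an assumption for illustration, not necessarily the compositing actually used for the Nautilus image:

```python
import numpy as np

def fuse(exposures: np.ndarray) -> np.ndarray:
    """Blend a stack of exposures, shape (n, height, width), values 0..1.

    Pixels near mid-grey carry the most structure, so they get the
    largest weight; near-black and near-white pixels contribute little.
    """
    weights = 1.0 - np.abs(exposures - 0.5) * 2.0  # favour mid-tones
    weights += 1e-6                                # avoid divide-by-zero
    return (weights * exposures).sum(axis=0) / weights.sum(axis=0)

# Four synthetic "exposures" standing in for the 40-70 kV images.
stack = np.stack([np.full((2, 2), v) for v in (0.1, 0.4, 0.6, 0.9)])
fused = fuse(stack)  # each pixel blends toward the mid-tone values
```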
Have you had enough already? You don’t want physics, because you didn’t like it at school? Then take a look at an easily understandable FAQ sheet on x-rays of flowers by Harold Davis. The doctor advises you to stop reading here!
Those who like some more background may read the following paragraphs.
Our eyes are sensitive to visible light, with wavelengths ranging from 400 nm to 750 nm. Digital photographic sensors are tuned in their sensitivity to produce images pleasing to human eyes; for example, we like green tones. Such a sensor can be made sensitive within the range of visible light, and even over a wider range of wavelengths. The energy of visible light ranges between 1.6 eV (750 nm) and 3.2 eV (400 nm). Typical spatial resolutions of consumer photographic sensors are between 4 µm and 8 µm.
A digital x-ray sensor works with spatial resolutions between approximately 70 µm and 140 µm. On a medical x-ray machine, the available energy levels depend on the purpose of the examination. Mammography systems operate between approximately 20 keV and 45 keV, depending on the manufacturer; conventional x-rays of bones or the chest use between approximately 80 keV and 125 keV. The corresponding wavelengths range from 0.06 nm (20 keV) down to 0.01 nm (125 keV).
As you may know, visible light and x-rays are both part of the electromagnetic spectrum; they differ in their energy. Higher energy means higher frequency and shorter wavelength. X-rays are thus a special kind of light, one that cannot be seen with our eyes – but can be seen with a digital sensor.
A substantial property of x-rays is their ability to pass through objects mostly without interaction. When an object is placed in front of the sensor, the sensor „sees“ only slightly less radiation from the x-ray source than it would without the object.
The left-hand image is what you probably picture when you think of an x-ray. Before the digital era, radiologists used film, an analog medium, to produce x-rays. As x-rays pass through an object mostly without interaction, the dark parts of the left image were fully exposed to radiation; a dark part in an x-ray image was therefore called „transparent“ by radiologists. The lighter grey or white parts were called „opaque“, „dense“ or „attenuated“ areas: the brighter parts result from the attenuation of the radiation by the object. As a matter of convention, digital x-ray images are shown like the left image. You can already see details of the inner structure of our flower, a Bird of Paradise.
The right-hand image is an inverted grey-scale image: black turns into white, 50% grey stays unchanged, and white turns into black; a 75% grey becomes a 25% grey. In any photo editor this is a simple, one-step operation. The inverted image is more pleasing to the perceptive habits of our eyes, and in our experience it is the preferable starting point for fusion imaging.
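For an 8-bit image, this inversion is nothing more than subtracting each pixel value from 255. A minimal sketch with numpy, using a small synthetic array in place of a real x-ray:

```python
import numpy as np

def invert(image: np.ndarray) -> np.ndarray:
    """Invert an 8-bit grey-scale image: black <-> white,
    50% grey (~128) stays essentially unchanged."""
    return 255 - image

# black, 50% grey, ~75% grey, white
img = np.array([[0, 128, 191, 255]], dtype=np.uint8)
print(invert(img))  # [[255 127  64   0]]
```

The 191 pixel (about 75% grey) becomes 64 (about 25% grey), exactly the behaviour described above.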
Explanation of the idea
Fusion imaging is a child of the digital era of mapping structures. Before image fusion was used in diagnostic radiology, astronomers used it to extract new insights about our universe. Fusion imaging of flowers can be beautiful. And, maybe, it is a starting point for research in new fields.
After its invention in the 1840s, photography was initially regarded as little more than a gadget. It became a serious matter only through astronomers, who used photography for the detection of asteroids. By comparing („blinking“) photographs, astronomers discovered moving objects within a field of fixed stars. In Heidelberg, Max Wolf (1863–1932) was a pioneer of astrophotography.
Imaging flowers is nothing new. But in the digital era of photography, the mapping possibilities changed fundamentally: it became possible to create the illusion of transparency or translucency by using a set of HDR images at the high-key end of the exposure range. The procedure was introduced by Harold Davis.
X-rays were initially used for medical diagnostics and therapy. Their ability to reveal structures inside an object with an opaque surface was the driving feature of technical development in this field. Nowadays x-rays are used to examine technical structures, and there are telescopes that map x-rays from our Galaxy. Every technician starting in this profession learns to take x-rays of interesting structures like flowers, animals or teddy bears. X-ray images of flowers are nothing new.
A transparent-looking flower and a transparent-looking x-ray of the same flower are each, on their own, already appealing to eye and mind. By combining two digital images of the same structure, one in visible light and one in x-ray, something new happens. We name this combined procedure „fusion imaging“ and the result of such a combination a „fusion image“.
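At its simplest, such a combination can be sketched as a weighted per-pixel blend of the two images. Note that this is a hypothetical illustration of the basic idea, not the authors’ actual procedure, and it assumes the two images have already been registered (aligned) to each other:

```python
import numpy as np

def fuse_images(visible: np.ndarray, xray_inverted: np.ndarray,
                alpha: float = 0.5) -> np.ndarray:
    """Blend a visible-light photo with the inverted x-ray of the
    same subject; `alpha` sets the x-ray's contribution (0..1)."""
    blended = (1.0 - alpha) * visible + alpha * xray_inverted
    return np.clip(blended, 0, 255).astype(np.uint8)

# Synthetic grey-scale stand-ins for two registered images.
visible = np.full((2, 2), 200, dtype=np.uint8)
xray = np.full((2, 2), 100, dtype=np.uint8)
result = fuse_images(visible, xray)  # mid-way blend of the two
```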
How it works in a nutshell