
by Andy Götz, Head of Software at the European Synchrotron Radiation Facility, and Armando Solé, Head of Data Analysis at the European Synchrotron Radiation Facility

The increasing use of algorithms to produce images that are easier for the human eye to interpret is perfectly illustrated by the progress being made with synchrotrons. Synchrotrons such as the European Synchrotron Radiation Facility (ESRF), commonly known as photon sources, are like huge microscopes that produce photons in the form of highly focused, brilliant x-rays for studying any kind of sample. We have come a long way from the first x-rays, which captured crude images of objects much like a camera, the first x-ray image of a hand having been taken by Wilhelm Röntgen 120 years ago (http://wilhelmconradroentgen.de/en/about-roentgen). Photon sources like synchrotrons did not always function in this way, however. Owing to the very small beams of x-rays produced by synchrotrons, the first uses were for deducing the properties of a sample’s microscopic structure. Computers have changed this. Thanks largely to computers and computational algorithms, the photons detected with modern-day detectors can be converted into 2D images, 3D volumes and even n-dimensional representations of the samples in the beam.

Tomography is the most widely used technique for imaging at photon sources and elsewhere (e.g., electron microscopes, neutron sources). The algorithms developed for different kinds of illumination, contrasts, geometries and detection types are very rich, as illustrated by some of the articles in this issue. With the help of innovative algorithms, scientists have invented new techniques based on the basic principle of tomography: any raster scan of the sample combined with the measurement of one or several properties yields an image. Therefore, in addition to conventional reconstructions from images obtained by measuring the transmission of the incident beam through the sample, we can use indirect images representing the distribution of variables such as chemical elements, crystalline phases and electron density. The list of tomography-based techniques is growing continuously and includes phase contrast tomography, diffraction tomography, micro-tomography, PDF tomography, XANES tomography, fluorescence tomography, grain tracking and many more.
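The basic reconstruction principle can be demonstrated in a few lines of Python: project a test object at many angles to form a sinogram, then invert it with filtered back-projection. This is a minimal sketch using the generic radon/iradon transforms from scikit-image, not the production reconstruction codes used at synchrotron facilities.

```python
# Minimal illustration of tomographic reconstruction: simulate projections
# of a test object (sinogram), then recover the slice by filtered
# back-projection. Requires numpy and scikit-image.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

phantom = shepp_logan_phantom()                      # 400 x 400 test slice
angles = np.linspace(0.0, 180.0, 180, endpoint=False)

# Forward model: each column of the sinogram is one projection,
# i.e. what a transmission detector records at one rotation angle.
sinogram = radon(phantom, theta=angles)

# Inverse problem: filtered back-projection recovers the slice.
reconstruction = iradon(sinogram, theta=angles, filter_name="ramp")

error = np.sqrt(np.mean((reconstruction - phantom) ** 2))
print(f"RMS reconstruction error: {error:.4f}")
```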

One reason for the emergence of this wide range of techniques is that photon sources like synchrotrons can measure energy (spectroscopy), momentum (diffraction), position and time. Any or all of these parameters can be used as the basis for a tomographic reconstruction. Using more than one of them allows multi-dimensional reconstructions that provide information about the chemical composition and its phase at each point in a 3D volume.
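As an illustration, a fluorescence-tomography style reconstruction simply repeats the scalar reconstruction once per measured channel (for example, one emission line per chemical element) and stacks the results into a multi-dimensional map. The sketch below assumes the per-channel sinograms have already been extracted from the raw spectra; the channel names and shapes are purely illustrative.

```python
# Hedged sketch: per-channel tomographic reconstruction.
# Each channel (e.g. the fluorescence counts of one element) has its own
# sinogram; reconstructing them independently yields one map per element.
import numpy as np
from skimage.transform import iradon

def reconstruct_channels(sinograms, angles):
    """sinograms maps a channel name (e.g. 'Fe-Ka') to a 2D sinogram
    of shape (n_detector_positions, n_angles)."""
    return {name: iradon(s, theta=angles, filter_name="ramp")
            for name, s in sinograms.items()}

# Example with synthetic data: two hypothetical element channels.
angles = np.linspace(0.0, 180.0, 90, endpoint=False)
fake = {"Fe-Ka": np.random.rand(128, 90), "Cu-Ka": np.random.rand(128, 90)}
maps = reconstruct_channels(fake, angles)
for element, image in maps.items():
    print(element, image.shape)   # one 2D elemental map per channel
```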

The continued development of synchrotrons and the increase in computational power have allowed the emergence of new imaging methods exploiting the coherence of the x-ray beam. Only a small fraction of the beam is coherent (currently less than 1% at an x-ray energy of 10 keV at the ESRF), but that small amount has opened a field in which algorithms play a crucial role. The relevance of these new techniques is such that increasing the coherent fraction of the beam is among the main reasons behind the huge investments to upgrade (or replace) existing synchrotrons. The coherent fraction is expected to increase by a factor of 30 after the current upgrade of the ESRF.

The coherent beam produces fluctuations in the measured intensity of a diffracted beam. As in a hologram with coherent light, one can record an interference pattern. With visible light, one recovers the image of the object simply by illuminating the interference pattern with the same coherent light: our eyes receive the same waves (amplitude and phase) as if they were coming from the original object. When we deal with x-rays, we need a computer to model the object and the wave that produce the interference pattern. From the recorded intensities we obtain the amplitudes of the waves, but we also need the phases, which cannot be measured directly. Basic algorithms assume an arbitrary phase and iterate by means of fast Fourier transforms until reaching a self-consistent solution. In ptychography, data sets from overlapping illumination positions are used to obtain a self-consistent solution for both the object and the wave [1]. Nanometric spatial resolutions are achieved. The main limitation is the large computing power required to achieve the reconstruction in a reasonable time; traditional MPI calculations are being progressively replaced by calculations on GPU clusters.
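The basic iterative scheme mentioned above can be sketched in a few lines: start from the measured Fourier amplitudes with an arbitrary phase, transform back and forth with FFTs, and at each step re-impose what is known (the measured modulus in Fourier space, a support constraint in real space). The following toy example illustrates this classic error-reduction approach; it is not the ptychographic engine of [1].

```python
# Toy error-reduction phase retrieval: recover an object from the
# modulus of its Fourier transform plus a known support constraint.
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth object confined to a small support region.
obj = np.zeros((64, 64))
obj[24:40, 24:40] = rng.random((16, 16))
support = obj > 0

measured_modulus = np.abs(np.fft.fft2(obj))   # only intensities are measurable

# Start from the measured modulus with an arbitrary (random) phase.
phase0 = rng.uniform(0.0, 2.0 * np.pi, obj.shape)
estimate = np.fft.ifft2(measured_modulus * np.exp(1j * phase0)).real

for _ in range(500):
    # Fourier-space constraint: keep the current phase, enforce the measured modulus.
    F = np.fft.fft2(estimate)
    F = measured_modulus * np.exp(1j * np.angle(F))
    estimate = np.fft.ifft2(F)
    # Real-space constraints: object is real, non-negative and inside the support.
    estimate = np.where(support, np.clip(estimate.real, 0, None), 0.0)

error = (np.linalg.norm(np.abs(np.fft.fft2(estimate)) - measured_modulus)
         / np.linalg.norm(measured_modulus))
print(f"Relative Fourier-modulus error after 500 iterations: {error:.3e}")
```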

In addition to the challenge of determining the phase, these new techniques raise further challenges, some of which are addressed by the articles in this issue. The first is algorithmic and computational: new software packages have been developed to speed up reconstruction. A second, related class of challenges is the increasing data volume. Generating multi-dimensional data sets at high resolution (nanometers) over large volumes (tens to hundreds of microns) is pushing the limits of data and metadata collection, data handling and storage. For example, a single diffraction computed tomography experiment at a recently constructed beamline at the ESRF can generate 10 petabytes in one week; the reduced data set of 10 terabytes is a 5-dimensional representation of the physical and chemical structure of the sample. Managing these huge data sets requires new algorithms and software to speed up the reconstruction so that it can follow the experiment in real time. It is essential for scientific applications that the software developed to address these challenges is open source, so it can be verified, and free of patents, so it can be used and improved freely. Some algorithms (e.g. [1]) are under patent, which restricts their usage for science.
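On the data-handling side, large multi-dimensional data sets of this kind are commonly stored as chunked, compressed HDF5 files so that reconstruction codes can stream one frame or slice at a time instead of loading everything into memory. The following is a generic sketch using h5py with illustrative, deliberately small dimensions and file/dataset names; it does not represent the actual ESRF data pipeline.

```python
# Hedged sketch: writing a multi-dimensional data set in chunks with HDF5,
# so downstream reconstruction can read one frame at a time.
import numpy as np
import h5py

n_angles, n_rows, n_cols = 180, 256, 256      # illustrative scan dimensions

with h5py.File("scan.h5", "w") as f:
    frames = f.create_dataset(
        "entry/data/frames",
        shape=(n_angles, n_rows, n_cols),
        dtype="uint16",
        chunks=(1, n_rows, n_cols),           # one detector frame per chunk
        compression="gzip",
    )
    frames.attrs["units"] = "counts"
    for i in range(n_angles):
        frame = np.random.randint(0, 65535, (n_rows, n_cols), dtype=np.uint16)
        frames[i] = frame                     # frames written (and compressed) one by one
```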

Reference:
[1] J. M. Rodenburg and H. M. L. Faulkner: “A phase retrieval algorithm for shifting illumination”, Applied Physics Letters, vol. 85, no. 20, pp. 4795–4797, 2004.
