by Joost Batenburg (CWI) and Tamas Sziranyi (MTA SZTAKI)

For the young generation it is hard to imagine that just two decades ago, seeing a photograph could take days or even weeks: we had to wait until the roll of film was full, then take it to a photo shop for development. These days, digital cameras are all around us, and they have revolutionised the way we deal with images. Digital sensors have followed a similar path in other disciplines within science and engineering, resulting in a broad range of detectors and sensors that can collect various types of high-dimensional data reflecting various properties of the world around us. This has fuelled the development of a new field of mathematics and computation that deals with interpreting such sensor data, applying algorithms to it, and generating new data.

by Michael Eickenberg, Gaël Varoquaux, Bertrand Thirion (Inria) and Alexandre Gramfort (Telecom Paris)

Recently, deep convolutional neural networks used for object recognition have shown impressive performance, rivalling the human visual system on many computer vision tasks. While the design of these networks was initially inspired by the primate visual system, it is unclear whether their computations remain comparable with those of biological systems. By comparing a deep network with human brain activity recordings, researchers of the Inria-CEA Parietal team (Saclay, France) show that the two systems behave similarly.

by Bernd Rieger and Sjoerd Stallinga (Delft University of Technology)

Standard fluorescent light microscopy is rapidly approaching resolutions of a few nanometers when computationally combining information from hundreds of identical structures. We have developed algorithms to combine this information, taking into account the specifics of fluorescent imaging.
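The gain from combining many copies of the same structure can be illustrated with a toy sketch (not the authors' algorithm): noisy, randomly shifted 1D copies of a template are registered by cross-correlation and averaged, which suppresses noise roughly by the square root of the number of copies. All signals and parameters here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n, copies = 256, 200
template = np.exp(-0.5 * ((np.arange(n) - n / 2) / 3.0) ** 2)  # one "structure"

# Simulate many noisy, randomly shifted observations of the same structure.
shifts = rng.integers(-20, 21, size=copies)
obs = [np.roll(template, s) + 0.5 * rng.standard_normal(n) for s in shifts]

# Register each copy to the template by circular cross-correlation, then average.
aligned = []
for y in obs:
    corr = np.real(np.fft.ifft(np.fft.fft(y).conj() * np.fft.fft(template)))
    aligned.append(np.roll(y, int(np.argmax(corr))))  # undo the estimated shift
avg = np.mean(aligned, axis=0)

noise_single = np.std(obs[0] - np.roll(template, shifts[0]))
noise_avg = np.std(avg - template)
print(noise_single / noise_avg)  # noticeably larger than 1
```

In a real super-resolution pipeline the registration is sub-pixel and in 2D or 3D, but the principle is the same: align, then average.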

by Erwan Zerhouni, Bogdan Prisacari, Maria Gabrani (IBM Zurich) and Qing Zhong and Peter Wild (Institute of Surgical Pathology, University Hospital Zurich)

Cognitive computing (in the sense of computational image processing and machine learning) helps address two of the challenges of histological image analysis: the high dimensionality of histological images, and the imprecise labelling. We propose an unsupervised method of generating representative image signatures that are robust to tissue heterogeneity. By integrating this mechanism in a broader framework for disease grading, we show significant improvement in terms of grading accuracy compared to alternative supervised feature-extraction methods.

by Christophe De Vleeschouwer (UCL) and Isabelle Migeotte (ULB)

Progress in imaging techniques has enabled the study of various aspects of cellular mechanisms. Individual cells in live imaging data can be isolated using an elegant image segmentation framework to extract cell boundaries, even when edge details are poor. Our approach works in two stages. First, we estimate interior/border/exterior class probabilities in each pixel, using binary tests on fluorescence intensity values in the pixel neighbourhood and semi-naïve Bayesian inference. Then we use an energy minimisation framework to compute cell boundaries that are compliant with the pixel class probabilities.
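This two-stage idea can be sketched in a toy 1D setting (an illustration, not the authors' implementation): per-pixel interior/border/exterior energies come from Gaussian likelihoods, and a unary-plus-smoothness energy is then minimised exactly by dynamic programming. All intensities, class means and the penalty weight are invented for the example.

```python
import numpy as np

# Toy 1D "image": a dark cell interior flanked by bright exterior, plus noise.
rng = np.random.default_rng(1)
n = 120
true = np.array([2] * 40 + [0] * 40 + [2] * 40)  # 0=interior, 1=border, 2=exterior
mu = np.array([1.0, 0.6, 0.1])                   # assumed mean intensity per class
signal = mu[true] + 0.08 * rng.standard_normal(n)

# Stage 1: per-pixel class energies (negative Gaussian log-likelihoods).
sigma = 0.1
unary = 0.5 * ((signal[:, None] - mu[None, :]) / sigma) ** 2

# Stage 2: energy minimisation by dynamic programming (Viterbi):
# total energy = sum of unaries + a penalty for every label change.
penalty = 4.0
cost = unary[0].copy()
back = np.zeros((n, 3), dtype=int)
for i in range(1, n):
    trans = cost[None, :] + penalty * (np.arange(3)[:, None] != np.arange(3)[None, :])
    back[i] = trans.argmin(axis=1)
    cost = unary[i] + trans.min(axis=1)
labels = np.empty(n, dtype=int)
labels[-1] = cost.argmin()
for i in range(n - 1, 0, -1):
    labels[i - 1] = back[i, labels[i]]
print((labels == true).mean())
```

In 2D the same trade-off between data fidelity and boundary smoothness is typically solved with graph cuts or similar combinatorial optimisers rather than a simple chain DP.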

by László R. Orzó, Zoltán Á. Varecza, Márton Zs. Kiss and Ákos Zarándy (MTA SZTAKI)

Digital holographic microscopy (DHM) provides a simple way to automate biological water monitoring. Using proper computational imaging algorithms, rather complicated measurement techniques can be replaced by simpler ones, and the subsequent data-processing steps are also simplified.

by Olivier Colliot (CNRS), Fabrizio De Vico Fallani and Stanley Durrleman (Inria)

Neurodegenerative diseases, such as Alzheimer’s and Parkinson’s disease, are complex multi-faceted disorders involving a mosaic of alterations that can be measured in the brain of living patients thanks to the tremendous progress of neuroimaging. A major challenge for computational imaging is to build efficient and meaningful models of disease from multimodal imaging datasets in order to better understand diseases, improve diagnosis and develop precision medicine.

by Pierre-Henri Tournier

Microwave tomography is a novel imaging modality holding great promise for medical applications and in particular for brain stroke diagnosis. We demonstrated on synthetic data the feasibility of a microwave imaging technique for the characterisation and monitoring of strokes. Using high performance computing, we are able to obtain a tomographic reconstruction of the brain in less than two minutes.

by Josiane Zerubia (Inria), Sebastiano B. Serpico and Gabriele Moser (University of Genoa)

In a joint project at Inria and the University of Genoa, we have designed novel multiresolution image processing approaches to exploit satellite image sources in order to classify the areas that have suffered the devastating impacts of earthquakes or floods.

by Marco Reggiannini and Marco Righi (ISTI-CNR)

In recent years, European maritime countries have had to deal with new situations involving the traffic of illegal vessels. In order to tackle such problems, systems are required that can detect relevant anomalies such as unauthorised fishing or irregular migration and related smuggling activity. The OSIRIS project [L1] aims to contribute to a solution to these problems with the use of large scale data provided by satellite missions (Sentinel, Cosmo-SkyMed, EROS).

by Tristan van Leeuwen (Utrecht University), Ajinkya Kadu (Utrecht University) and Wim A. Mulder (Shell Global Solutions International B.V. / Delft University of Technology)

For many decades, seismic studies have been providing useful images of subsurface layers and formations. The need for more accurate characterisation of complex geological structures from increasingly large data volumes requires advanced mathematical techniques and high performance algorithms.

by Nicola Viganò (CWI)

The vast majority of metallic and ceramic objects have a granular microstructure, which has a direct influence on their mechanical behaviour. Understanding the microstructure of these materials is especially important for nuclear reactors and other safety-critical applications in which they are used. Modern mathematical tools and recent developments in computed tomography can be used to study the evolution of these materials when they are being deformed or heated.

by Frédéric Sur (Université de Lorraine, Inria), Benoît Blaysat and Michel Grédiac (Université Clermont-Auvergne)

Experimental mechanics is currently experiencing a revolution: the rapid development and spread of camera-based measurement systems, which enable experimentalists to visualise the displacement and strain distributions occurring in structures or specimens subjected to a load. In order to obtain information that is as valuable as that provided by numerical models, we need to move from coarsely to highly resolved maps, and from qualitative to quantitative measuring tools. To this end, new mathematical results and algorithms are needed.

by Lorenzo Audibert (EDF) and Houssem Haddar (Inria)

Wave imaging, a very useful technique for identifying remote/inaccessible objects using waves, is at the centre of many widely used technologies such as RADAR, SONAR, medical imaging, nondestructive testing and seismic imaging. These techniques are well established for homogeneous backgrounds, such as air or water, for which a large variety of algorithms have been successfully used to solve the underlying imaging problem. However, most classical methods are inadequate for heterogeneous backgrounds, which arise in high-resolution seismic imaging, ultrasound of bones, radar in urban environments, and nondestructive testing of concrete or periodic nano-materials.

by Samuli Siltanen (University of Helsinki)

X-ray tomography is a wonderful tool that allows doctors to peek inside patients. However, since X-rays are harmful, a patient’s exposure to them should be limited. An obvious way to achieve this is to take fewer images, but unfortunately this causes trouble for classical image reconstruction methods. However, new mathematical methods, based on compressed sensing and the multi-scale shearlet transform, can save the day!
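The compressed-sensing principle behind reconstruction from few images can be sketched with a generic example (not the shearlet-based method itself): a sparse signal is recovered from far fewer linear measurements than unknowns by ℓ1-regularised least squares, solved here with the iterative soft-thresholding algorithm (ISTA). The random matrix, problem sizes and regularisation weight are arbitrary illustration choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 200, 60, 8                       # unknowns, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true                             # few, noiseless measurements

# ISTA: gradient step on the least-squares term, then soft-thresholding
# to promote sparsity (the l1 proximal operator).
lam = 0.01
L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(2000):
    z = x - A.T @ (A @ x - b) / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))  # small relative error
```

In tomography the role of `A` is played by the X-ray projection operator and sparsity is imposed in a transform domain (here, shearlets) rather than directly on the pixels, but the optimisation template is the same.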

by Rafael Kuffner dos Anjos, Carla Fernandes (FCSH/UNL) and João Madeiras Pereira (INESC-ID)

Viewpoint-free visualisation using sequences of point clouds can capture previously lost concepts in a contemporary dance performance.
The BlackBox project aims to develop a model for a web-based collaborative platform dedicated to documenting the compositional processes used by choreographers of contemporary dance and theatre. BlackBox is an interdisciplinary project spanning contemporary performing arts, cognition and computer science.

by Furqan M. Khan and Francois Bremond (Inria)

Computers now excel at face recognition in severely constrained environments; however, most surveillance networks capture unconstrained data, in which person (re)identification is a challenging task. The STARS team at Inria is making considerable progress towards solving the person re-identification problem in a traditional visual surveillance setup.

by Svorad Štolc, Reinhold Huber-Mörk and Dorothea Heiss (AIT Austrian Institute of Technology)

The Austrian Institute of Technology (AIT) is working on novel in-line methods to infer object and material properties in automated visual inspection. Here we describe a real-time method for concurrent extraction of light-field and photometric stereo data using a multi-line-scan acquisition and processing framework.

by Kostas Hatzigiannakis, Athanasios Zacharopoulos and Xenophon Zabulis (ICS-FORTH)

Multispectral imaging and spectroscopic analysis are valuable tools for the study of materials of cultural heritage objects. Accurate registration of spectral images into a spectral cube increases the precision and quality of spectral measurements, supporting portable and low cost multispectral imaging systems.

by Grigorios Tsagkatakis and Panagiotis Tsakalides

Within the EU-funded PHySIS project (Sparse Signal Processing Technologies for Hyper Spectral Systems), we are developing a novel approach to achieve high-speed and high-resolution spectral imaging by leveraging cutting-edge signal processing paradigms.
Spectral imaging aims at acquiring tens to hundreds of spectral bands - many more than the three bands acquired by colour imaging. From an application perspective, analysis of spectral data acquired by Earth observing satellites has greatly aided our understanding of global environmental and ecological phenomena, while its applications in manufacturing, food quality control, and biomedical imaging are also gaining momentum.

by Alessandro Danielis, Daniela Giorgi and Sara Colantonio (ISTI-CNR)

We present a lip segmentation method based on simulated Lambertian shadings. The input consists of hyper-spectral images generated by a prototype for medical applications. 

by Oliver Zendel, Markus Murschitz and Martin Humenberger (AIT Austrian Institute of Technology)

Do we really need to drive several million kilometres to test self-driving cars? Can proper test data validation reduce the workload? CV-HAZOP is a first step towards finding the answers: a guideline for evaluating existing datasets as well as for designing new test data, both with an emphasis on maximising test-case coverage and diversity while reducing redundancy.
