by Olivier Parisot, Pierrick Bruneau, Patrik Hitzelberger (Luxembourg Institute of Science and Technology), Gilles Krebs and Christophe Destruel (Vaonis)

Electronically Assisted Astronomy allows near-real-time generation of enhanced views of deep sky objects like nebulae and galaxies. This approach is ideal for people who have difficulty with direct observation through a telescope, especially those with poor visual acuity or physical difficulty positioning themselves correctly in front of the instrument.

Electronically Assisted Astronomy is widely applied today by astronomers to observe planets and faint deep sky objects like nebulae, galaxies, or globular clusters. By capturing images directly from a telescope coupled to a camera, this approach generates enhanced views of the observed targets that can be displayed in near real-time. While astrophotography aims to produce detailed and visually appealing images after many hours of post-processing of long-exposure images [1], Electronically Assisted Astronomy aims to obtain images quickly by stacking raw images on the fly, accumulating the faint signal and thereby reducing the inherent noise. All this is made possible by recent CMOS/CCD cameras, which are extremely sensitive and have very low read noise (i.e., the noise generated by the electronics) [2], so that satisfactory results can already be obtained with lightweight image processing.
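The core of this on-the-fly stacking can be illustrated in a few lines of Python. This is a minimal sketch, not the software used by the authors: it assumes the raw frames are stored as FITS files and are already registered (aligned), and it maintains a running mean so that an updated view is available after every new frame.

```python
import numpy as np
from astropy.io import fits  # FITS is a common raw-frame format in astronomy

def live_stack(frame_paths):
    """Incrementally average registered raw frames.

    The mean preserves the (faint) signal while uncorrelated noise
    shrinks roughly as 1/sqrt(N) with the number of frames N.
    """
    stack = None
    for n, path in enumerate(frame_paths, start=1):
        frame = fits.getdata(path).astype(np.float64)
        if stack is None:
            stack = frame
        else:
            # incremental mean: no need to keep all frames in memory
            stack += (frame - stack) / n
        yield stack  # the caller can stretch and display this after each frame
```

In practice, a simple histogram stretch is applied to each yielded stack before display, so the live view visibly improves as frames accumulate.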

Compared with direct visual observation through an eyepiece and an instrument (refractor or reflector), this approach has definite advantages for people with physical constraints that prevent them from enjoying astronomy to the fullest: poor eyesight requiring glasses (in particular for people with astigmatism), difficulty positioning themselves to look through the eyepiece of a telescope, etc. Not to mention the fact that most people cannot see colours during visual observation (with a few exceptions, the light from deep sky objects is too weak for colour to be perceived), which can make deep sky observing sessions frustrating for a novice.

Electronically Assisted Astronomy also makes it possible to observe in difficult outdoor conditions, for example in places heavily affected by light pollution. Deep sky objects that are almost invisible in the eyepiece under an urban or suburban sky become impressive and detailed.

Nevertheless, the practical implementation is not straightforward. Electronically Assisted Astronomy requires a complex hardware setup [L1]: a motorised alt-azimuth or equatorial mount to track targets (compensating for the Earth's rotation), a refractor or reflector with good-quality optics, dedicated CMOS/CCD cameras, light pollution filters, etc. Depending on the size of the targets, a Barlow lens (for planets and planetary nebulae) or a focal reducer (for large nebulae) is also required. Moreover, dedicated software such as SharpCap or AstroDMX is needed to control the camera and deliver the live images to a display device [L1].
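The choice between a Barlow lens and a focal reducer comes down to image scale: how many arcseconds of sky each camera pixel covers. Below is a minimal sketch using the standard plate-scale formula; the pixel size and focal length values are illustrative, not taken from the article.

```python
# Plate scale ["/pixel] = 206.265 * pixel size [um] / focal length [mm]
PLATE_SCALE_CONST = 206.265  # arcseconds per radian, scaled for um/mm units

def image_scale(pixel_um: float, focal_mm: float, amplifier: float = 1.0) -> float:
    """Arcseconds of sky per pixel for a given camera/telescope pair."""
    return PLATE_SCALE_CONST * pixel_um / (focal_mm * amplifier)

# A 2x Barlow halves the image scale (finer sampling, for small targets);
# a 0.5x reducer doubles it (wider field, for large nebulae).
print(image_scale(2.9, 400, amplifier=2.0))  # ~0.75"/pixel
print(image_scale(2.9, 400, amplifier=0.5))  # ~3.0"/pixel
```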

The easiest way to sidestep these difficulties when observing the deep sky is to use a remote telescope. Through a simple web interface, it is possible to control telescopes located on the other side of the planet (which can be interesting for viewing deep sky objects only visible from the other hemisphere) [L2]. Nevertheless, this mechanism is not well suited to live observation: the idea is rather to retrieve the images a few hours later.

During the MILAN research project (MachIne Learning for AstroNomy), funded by the Luxembourg National Research Fund, we use instruments provided by our partner Vaonis [L3] to collect images of deep sky objects. Vaonis provides fully automated telescopes that are controlled via smartphones and tablets. With these telescopes, all the critical steps are automated and transparent for the end user: tracking, focusing, capture, lightweight image processing, and display.

Figure 1: Live session of Electronically Assisted Astronomy on the night of 14 May 2022 from a village in the northeast of France.

Figure 2: Image of the M5 globular cluster (distance from Earth: 24,460 light years), as seen on the night of 9 May 2022 from a village in the northeast of France. 125 raw images of 10 s exposure time were stacked in near real-time to obtain this result.
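Under the idealised assumption that the noise is uncorrelated from frame to frame, the signal-to-noise ratio of a stack of N equal exposures grows as the square root of N:

\[
\mathrm{SNR}_N = \sqrt{N}\,\mathrm{SNR}_1, \qquad \sqrt{125} \approx 11.2
\]

so the 125 stacked frames above represent roughly an elevenfold SNR gain over a single 10 s exposure, for about 21 minutes of total integration.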

Electronically Assisted Astronomy allows us to plan and organise observation sessions without most of the technical barriers mentioned earlier. For the time being, we capture and visualise live images under different conditions (e.g., low or high light pollution) and with different parameters (exposure time and gain for each unit shot) to build a collection of images while controlling the results, as sketched below. In the near future, we plan to participate in events for the general public in Luxembourg and in the Greater Region, allowing young and old alike to discover the beauties of the deep sky.
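One simple way to organise such a collection is to enumerate the capture parameters systematically and record them alongside each stack. The specific exposure times, gains, and frame counts below are illustrative assumptions, not values from the project.

```python
from itertools import product

# Illustrative capture plan; the values are assumptions, not from the article.
EXPOSURES_S = [5, 10, 20]   # exposure time per unit shot, in seconds
GAINS = [100, 200, 300]     # camera gain settings

capture_plan = [
    {"exposure_s": exp, "gain": gain, "frames": 50}
    for exp, gain in product(EXPOSURES_S, GAINS)
]
# Recording this metadata with every stacked image keeps the collection
# usable for later machine learning experiments.
```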

Links:
[L1] https://agenaastro.com/articles/agena-beginners-guide-to-choosing-equipment-for-deep-sky-eaa.html
[L2] https://telescope.live
[L3] https://www.vaonis.com

References:
[1] G. Parker: “Making Beautiful Deep-Sky Images”, Springer, 2017, doi:10.1007/978-3-319-46316-2.
[2] P. Qiu et al.: “Research on performances of back-illuminated scientific CMOS for astronomical observations”, Research in Astronomy and Astrophysics, 21(10), 268, 2021, doi:10.1088/1674-4527/21/10/268.

Please contact:
Olivier Parisot
Luxembourg Institute of Science and Technology

Gilles Krebs, Vaonis, Luxembourg
