
Keeping an AI on astronauts’ emotions

The European Space Agency turned to the PDEng Software Technology program to develop an artificial intelligence system that can detect the emotions of astronauts on challenging deep-space missions.

Written by Bits & Chips

26 January 2021


With deep-space missions posing all kinds of new challenges under extreme conditions, preserving a healthy emotional state will be one of the main hurdles for astronauts. The European Space Agency (ESA) is looking into the possibility of assessing their mood and lifting their spirits when deemed necessary. All this should happen in real time and therefore autonomously, on board the spacecraft, because of the considerable delay in communicating with Earth – if there’s a connection at all.

The past decade has also witnessed the rise of artificial intelligence (AI), bringing massive improvements to data analysis. Sentiment analysis, for example, was traditionally used only for texts, but thanks to recent advancements in machine learning, deep learning and computer vision, it’s now also starting to be applied to audio and image data. This opens the door to a facial and voice sentiment analysis system as envisioned by ESA.

After two earlier successful collaborations, the European Space Agency again turned to the PDEng Software Technology (ST) program at Eindhoven University of Technology (TUE) to develop the system. The assignment comprised two parallel tasks: design, train and validate a machine learning model that determines an astronaut’s emotional state from audio input – ie the voice – and one that does so from video input – ie facial expressions. This was to be followed by a consolidation step, using the combination of audio and video for cross-validation.

Building an AI competence

TUE’s PDEng ST is a two-year salaried post-master technological designer program at doctorate level for top MSc graduates with a degree in computer science or a related field. It prepares the trainees for a career in industry by strengthening their theoretical basis and confronting them with challenging problems from industrial partners. Yanja Dajsuren, PDEng ST program director: “In our program, we aim to find a variety of clients that offer complex system/software architecture and design-related challenges. Our trainees learn to develop innovative solutions meeting industry standards while mastering all the aspects of teamwork, different roles and professional skills.”

AI is one of the program’s key topics. “A substantial part of our graduation projects is about data-driven architecture and intelligent systems, in collaboration with, to name a few, EIT Digital, Philips and Thermo Fisher Scientific,” states Dajsuren. “The rapidly expanding field of AI is estimated to reach 2.9 trillion dollars in business value this year. We’re confident that our program will contribute to the development of innovative AI platforms, tools and methods, and strengthen the collaboration between industry and academia.”

Few of the ST trainees in the ESA project, entitled “Astronaut emotion recognition” (STERN), had prior knowledge of AI. “We were no machine learning experts,” acknowledges project manager Juan van der Heijden. “Two of the 18 trainees have a master’s degree in data science, the rest have a background in software. So we really had to build an AI competence from scratch. In doing so, our two experts served as mentors for the others, helping them find information and master the subject.”

“As this domain was relatively new for almost all of us, we were sure to learn a lot,” adds Raha Sadeghi, one of the project’s engineers. “The experiences gained have indeed opened up new horizons for us in AI systems design. Trying to increase model accuracy while applying state-of-the-art technologies and creating pipelines to make the system robust and flexible enough to handle future changes was quite challenging, especially given the time constraint – we had only two months to analyze the domain, develop our knowledge of deep learning, search for the best practices, design a robust system, create an efficient and well-structured implementation, and verify and validate it.”

A histogram of emotions

As the main dataset for training, validating and testing their ML models, the STERN trainees used the “Ryerson Audio-Visual Database of Emotional Speech and Song” (RAVDESS). It contains 7356 files, totaling 26.5 GB of audio, video and combined audio-visual recordings. Each file is classified as one of eight emotions: neutral, calm, happy, sad, angry, fearful, surprise and disgust. “We were required to use open sources,” explains Van der Heijden. “RAVDESS is a public database that provides data from North American actors expressing particular emotions. Unfortunately, real astronaut data was unavailable – that would have been ideal.”
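
As a rough illustration of how such a dataset feeds a training pipeline: RAVDESS encodes its metadata in the hyphen-separated fields of each filename, with the emotion label in the third field. A minimal indexing sketch in Python – the directory layout and helper name are illustrative, not part of the STERN code:

```python
from pathlib import Path

# RAVDESS filenames carry hyphen-separated metadata fields;
# the third field encodes the emotion (01=neutral ... 08=surprise).
EMOTIONS = {
    "01": "neutral", "02": "calm", "03": "happy", "04": "sad",
    "05": "angry", "06": "fearful", "07": "disgust", "08": "surprise",
}

def index_ravdess(root: str) -> list[tuple[Path, str]]:
    """Pair every audio/video file under `root` with its emotion label."""
    samples = []
    for path in Path(root).rglob("*"):
        if path.suffix.lower() not in {".wav", ".mp4"}:
            continue
        fields = path.stem.split("-")
        if len(fields) == 7 and fields[2] in EMOTIONS:
            samples.append((path, EMOTIONS[fields[2]]))
    return samples

# e.g. samples = index_ravdess("RAVDESS/")  # hypothetical local path
```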

The team created two machine learning models, one for audio and one for video, with the model parameters tuned to the RAVDESS data. “We reduced the original eight emotions to the five most relevant and distinctive: neutral, happy, sad, angry and fearful. The model output is a histogram showing how strongly each of these five sentiments is present in the input. For example, 80 percent happy, 10 percent neutral and 10 percent spread across the other emotions,” illustrates Van der Heijden.
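
The article doesn’t spell out how the eight dataset labels were collapsed to five or how the histogram is produced; a plausible pattern, sketched below under those assumptions, is to fold or drop the extra labels and read the histogram straight off the classifier’s softmax output.

```python
import numpy as np

TARGET = ["neutral", "happy", "sad", "angry", "fearful"]

def reduce_label(label: str) -> str | None:
    """Map the eight RAVDESS labels onto the five target emotions.

    Folding 'calm' into 'neutral' and dropping 'surprise' and 'disgust'
    is an assumption; the article doesn't give the actual mapping.
    """
    if label == "calm":
        return "neutral"
    return label if label in TARGET else None  # None -> discard sample

def emotion_histogram(logits: np.ndarray) -> dict[str, float]:
    """Turn raw classifier scores over TARGET into a percentage histogram."""
    scores = np.exp(logits - logits.max())  # numerically stable softmax
    probs = scores / scores.sum()
    return {emo: round(float(100 * p), 1) for emo, p in zip(TARGET, probs)}

# A distribution concentrated on one emotion, like the 80/10/10 example
# above, is simply the softmax of the model's raw output scores.
```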

Using open-source libraries such as TensorFlow and Dlib for machine learning, OpenCV for computer vision, Librosa for audio analysis and OpenVINO for deep-learning optimization, the team neatly packaged the two models in a Python framework and deployed them on a Raspberry Pi 4, extended with an Intel Neural Compute Stick 2 for more inference power. Van der Heijden: “Running it on limited hardware, similar to what’s on board a spacecraft, was one of the requirements.”
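
What the audio half of such a pipeline could look like is sketched below – a minimal example assuming a Keras model trained on time-averaged MFCC features; the model file name, sample rate and feature choice are assumptions, not the actual STERN design:

```python
import librosa
import numpy as np
import tensorflow as tf

TARGET = ["neutral", "happy", "sad", "angry", "fearful"]

# Hypothetical model file; the trained STERN models are not public.
model = tf.keras.models.load_model("audio_emotion.h5")

def classify_clip(wav_path: str) -> dict[str, float]:
    """MFCC features -> emotion probabilities for one audio clip."""
    signal, sr = librosa.load(wav_path, sr=16_000, duration=3.0)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=40)
    features = mfcc.mean(axis=1)[np.newaxis, :]  # average over time: (1, 40)
    probs = model.predict(features, verbose=0)[0]
    return dict(zip(TARGET, (round(float(p), 3) for p in probs)))
```

On the Raspberry Pi itself, such a model would typically be converted to OpenVINO’s intermediate representation and executed on the Neural Compute Stick rather than through TensorFlow directly – the stick handles the inference, the Pi the feature extraction.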

Online communication

“We’ve developed an extensible and modular AI pipeline for both audio and video. ESA can use our models and retrain them, but they can also plug in and test their own,” says Van der Heijden, summarizing the main project result. “The consolidation step, ie cross-validating the two models with the audio-visual data, remains future work. We did give this a go and recorded our own dataset, but with 12 different nationalities in our team, it contained too many accents to get good accuracy. Another recommendation for the future is to use a wider variety of sensors, eg for EEG and heart rate monitoring.”
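
The actual STERN interfaces aren’t published, but the “plug in your own model” idea maps naturally onto a small Python contract, with the consolidation step as a fusion of the two per-modality histograms – the class name and the equal weighting below are assumptions:

```python
from abc import ABC, abstractmethod

TARGET = ["neutral", "happy", "sad", "angry", "fearful"]

class EmotionModel(ABC):
    """Contract a replacement model must satisfy to join the pipeline."""

    @abstractmethod
    def predict(self, sample) -> dict[str, float]:
        """Return one probability per TARGET emotion, summing to 1."""

def consolidate(audio: dict[str, float], video: dict[str, float],
                audio_weight: float = 0.5) -> dict[str, float]:
    """Fuse the audio and video histograms by weighted average."""
    return {
        emo: audio_weight * audio[emo] + (1 - audio_weight) * video[emo]
        for emo in TARGET
    }
```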

According to Van der Heijden, one of the biggest challenges for the team was the project’s research-centric nature. “We’re used to starting from requirements and a software architecture and developing a product that conforms to that architecture, meeting those requirements. Here, we only worked on a small piece of the puzzle, the models. For a large part, the deliverable was the information we gathered along the way. Despite the research-centric nature, however, we also managed to build an extensible and modular product.”

Another challenge was working in the midst of the Covid pandemic. “It’s much harder to share hardware components. And you miss face-to-face socializing with your teammates. For most of these 10 weeks, we were communicating online – even the occasional drinks and games were virtual. That’s not only much less fun, but it also magnifies the cultural differences – and not always in a good way. The Dutch are very direct, which foreigners can perceive as blunt. Their indirectness, in turn, comes across as very strange to us. As the project manager, I think I was able to find a good balance there.”

“The more team members, the harder it becomes to manage individuals and their expectations,” seconds Sadeghi. “One of the pros of this PDEng program is that it gives you the opportunity to pursue your goals. Sometimes, individual goals don’t align. Learning how to fulfill your personal ambitions while respecting the team goal is another valuable achievement of this project.”

Apollo 13

The team managed to impress ESA with their work. Van der Heijden: “They were positively surprised by our taking a broad scope in the beginning and homing in towards the end. They were very flexible and not overly demanding. All in all, a super nice client to work for.”

“They did an excellent job in a very short time,” says Christophe Honvault, head of ESA’s Software Technology Section and prime contact to the PDEng ST program. “They actually did two projects: an audio and a video pipeline – very impressive. We look forward to building on their results and implementing their recommendations.”

“We expect AI technologies to have a major impact in the mid to long term,” adds Luis Mansilla, software development engineer at ESA, who co-defined the assignment together with Honvault. “Fields like affective computing can also contribute to this technological breakthrough in space, and this type of collaboration helps us pave the way for future applications.”

The project’s final presentation was very well received. Among the attendees was Claude Nicollier, the first – and, to date, only – astronaut from Switzerland. Equally impressed with the results, he concluded: “After the explosion on Apollo 13, upon the utterance of those famous words, ‘Houston, we’ve had a problem,’ it would have been interesting to have had the ability to analyze the astronaut’s voice and find out his true emotional state.”