Soon we may no longer need the help function. The computer will recognize that we have a problem and come to our rescue on its own. This is one possible implication of new research from the University of Copenhagen and the University of Helsinki. "We can make a computer process images based entirely on human thoughts. The computer has absolutely no prior information about which features to process or how. Nobody has ever done this before," says associate professor Tuukka Ruotsalo of the Department of Computer Science, University of Copenhagen.
Brain activity as the only input
In the underlying study, 30 participants wore caps fitted with electrodes that record electrical brain signals (electroencephalography, EEG). All participants were shown the same 200 face images and were given a number of tasks, such as searching for female faces, for elderly people, or for blond hair. The participants performed no actions; they simply viewed each image briefly, for 0.5 seconds.
Based on their brain activity alone, the machine first infers the given preference and then edits the images accordingly. So if the task was to look for older people, the computer would alter the portraits of the younger people to make them look older. And if the task was to search for a specific hair color, everyone would get that color.
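The pipeline described above can be sketched in miniature. The following is an illustrative sketch only, not the researchers' actual method: it assumes each image has a latent vector (as in a generative face model) and a per-image relevance score standing in for the signal decoded from EEG. The function names and the toy 2-D latents are invented for this example.

```python
import numpy as np

def estimate_preference_direction(latents, relevance):
    """Estimate a preference direction in latent space from per-image
    relevance scores (here a stand-in for scores decoded from EEG)."""
    latents = np.asarray(latents, dtype=float)
    relevance = np.asarray(relevance, dtype=float)
    # Relevance-weighted mean of the latents minus the overall mean:
    # images flagged as relevant pull the direction toward the target trait.
    weights = relevance / relevance.sum()
    target_mean = weights @ latents
    return target_mean - latents.mean(axis=0)

def edit_latents(latents, direction, strength=1.0):
    """Shift every latent along the preference direction -- analogous to
    making all portraits 'older' or giving everyone the target hair color."""
    return np.asarray(latents, dtype=float) + strength * direction

# Toy demo: 2-D latents where dimension 0 encodes the target trait.
latents = np.array([[0.0, 1.0], [1.0, 1.0], [2.0, 1.0], [3.0, 1.0]])
relevance = np.array([0.0, 0.0, 1.0, 1.0])  # the "brain" flagged the last two
d = estimate_preference_direction(latents, relevance)
edited = edit_latents(latents, d)
```

In a real system the latents would come from a trained generative model and the relevance scores from a classifier over EEG epochs; the point of the sketch is only the two-step structure the article describes: infer the preference, then edit along it.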
"Crucially, the computer had no prior notion of facial recognition and knew nothing about gender, hair color, or any other relevant trait. Still, only the feature in question was edited, while other facial features were left untouched," comments PhD student Keith Davis of the University of Helsinki.
Some may argue that plenty of software can already manipulate facial features, but according to Keith Davis that misses the point: "All existing software was trained with labeled inputs. If you want an app that makes people look older, you feed it thousands of portraits and tell the computer which ones are young and which are old. Here, the subjects' brain activity was the only input. This is a whole new paradigm in artificial intelligence: using the human brain directly as an input source."
Possible applications in medicine
A possible application could be in medicine:
"Doctors already use artificial intelligence when interpreting scan images. However, mistakes happen. After all, the doctors are only supported in interpreting the images; the decisions are theirs. Perhaps certain features in the images are misinterpreted more often than others. Such patterns could be discovered by applying our research," says Tuukka Ruotsalo.
Another application could be to support certain groups of people with disabilities, for example by allowing a paralyzed person to operate a computer. "That would be fantastic," comments Tuukka Ruotsalo, adding: "But that's not the focus of our research. We take a broad approach and strive to improve machine learning in general. The range of possible applications will be wide. For example, in 10 or 20 years we won't need a mouse or typed commands to operate our computers. Maybe we can just use mind control!"
Demands for political regulation
However, according to Tuukka Ruotsalo, there is also a flip side: collecting individual brain signals raises ethical questions, because such data can potentially yield deep insights into a person's preferences. "We are already seeing some trends. People are buying smartwatches and similar devices that record heart rate and the like. But can we be sure that data is not being generated that gives private corporations knowledge we do not want to share?"
"I see this as an important aspect of scientific work. Our research shows what's possible, but we shouldn't do things just because they're possible. This is an area that I believe needs to be addressed through policy and public action. If those are not put in place, private companies will simply carry on."