

“We have to set the right course today”

Monday, 29 July 2019

Artificial intelligence is developing at a rapid pace. AI expert Toby Walsh has received an ERC grant to conduct research at TU Berlin. Here he shares his thoughts on the possibilities and risks for the future.

Human and machine. Toby Walsh's work focuses on the future of artificial intelligence. Visitors can also experience robots live at the OpenLab of junior research group MTI-engAge of Faculty IV Electrical Engineering and Computer Science.

Professor Walsh, cinema and books are teeming with anthropomorphic intelligent computers who suddenly seize power and out-of-control killer machines. Is artificial intelligence something we need to fear?
Two years ago, Times Higher Education asked 50 Nobel laureates what the greatest danger facing humankind was. Climate change was top of the list, but population explosion, atomic war, ignorance, and terrorism were all seen as representing a greater threat than artificial intelligence. Of course, we need to be very much aware of AI’s potential to change our environment due to its ubiquity: its presence in transportation, health systems, communication, finance, and the military. And we need to be aware that like any technology it can be used for good and evil. In 2062, the world will look completely different to today. As such, we have to set the right course today to ensure it will be a livable world. We have to ensure that people will still have work, an income, that they will be able to shape their own futures. Philosophers have to be involved in the further development of machines so that ethical issues can be successfully addressed. It will then be possible for AI to be of service to us. What’s wrong with working three days a week instead of five with the same productivity if we can use the time for sports, family life, and other things which are important to us?

The economy in particular has already been made more efficient by computers and robots. How will the intelligent machines of the future further impact the human species?
Robots will become more and more autonomous in how they undertake heavy, dirty, or dangerous work for us. If one robot treads on a mine in a crisis zone, the next is simply sent in its place. They can detect cancer cells or minimal narrowing of veins more reliably and more quickly than people. They can recognize faces, produce translations, and conduct simple conversations, such as making a doctor's appointment, without our really noticing the difference to how humans do this. And they can help us achieve a fair and efficient allocation of resources by conducting reliable and speedy calculations. The growing global population means this is becoming an ever more pressing challenge for society. We are giving special attention to this topic in our AMPLify project (see box below). We want to use mathematical game theory and behavior theory to develop a complete model for the efficient and fair allocation of goods and resources. Initial case studies already indicate potential savings for businesses of around 10 percent – that alone could amount to several tens of millions of dollars.

Machines are already able to accomplish many things better and quicker than humans due to the fact that they can calculate more quickly, with fewer errors and without having to take breaks. Will they, however, ever be able to match humans in decision-making? Or even exceed them?
That is a fascinating question. It is up to us to feed computers with data to allow them to make rational decisions. We can program computers so that they don't make decisions based on racism, sexism, and so on. Humans, however, don't necessarily make decisions on a rational basis. When allocating organs for transplantation, for example, many ethical and humanitarian elements are involved; one cannot simply tick off questions such as: What kind of insurance does a person have? How much longer is their natural life expectancy? Does the person have a criminal record? What economic contribution are they still able to make to society? The more decisions we delegate to machines, however, the higher the risk that fundamental ethical considerations will be infringed, that the computer will not distinguish between beneficial and counterproductive knowledge. We are the ones, then, responsible for setting the course for a better and safer digital future. And it is essential we do so now. Artificial intelligence is developing at a rapid pace. First, today we have more scientists than ever before – and there will be more and more in the future; second, robots are perfecting themselves through machine learning and sharing the latest knowledge in co-learning processes with other computers.

What would you like to see happen?
One very important point is to ban autonomous weapons worldwide. A machine cannot be allowed to decide over life and death. Even if machines at some point are able to decide ethically and on the basis of humanitarian considerations, it will never be possible to protect them from attacks by hackers. And such a machine manipulated for unethical purposes in the hands of terrorists or irresponsible individuals would represent a huge danger for humankind.
This is why I have organized "open-letter campaigns" in recent years to collect signatures from people with political influence, from academics, and from influential businesses. I was able to get the topic placed on the agenda at the United Nations. Twenty-eight countries have already signaled their approval; Germany is still deciding.

Many thanks!

Interviewer: Patricia Pätzold

Information available at: www.mti-engage.tu-berlin.de/openlab/

Book Recommendation - Will Homo digitalis come to equal Homo sapiens?


Will robots develop a consciousness? How can artificial intelligence help us solve future challenges for humankind? And: how will war be waged in the future? In his popular science book 2062 - The World that AI Made, Professor Toby Walsh, world-renowned Australian artificial intelligence scientist and visiting scholar at TU Berlin, explains why the development of AI represents a turning point in the history of humankind. He predicts that by 2062 we will have developed machines which are as intelligent as we are. Homo digitalis will have come to equal Homo sapiens. “It won’t happen tomorrow,” says Toby Walsh, “but it won’t take 1000 years either.” The year 2062 represents the average estimate for this development among 300 of his scientific peers. However, Walsh goes on to explain that as by this point autonomous machines will have already become a regular feature of our everyday lives, relieving us of dangerous, heavy, and tedious work, they will also have to be capable of ethical behavior.
“It wasn’t particularly difficult for me to write this book,” says Walsh. “It represents the answer to the many questions being asked by the public and journalists after the publication in 2017 of my first popular science book It’s Alive!: AI from the Logic Piano to Killer Robots.
“In that book I explained how AI has developed throughout the history of humankind, what it is capable of today and, above all, what it is not capable of.” The new book 2062 addresses the future and shows how superintelligence could develop and how it should develop. Above all, it adds weight to Toby Walsh's plea that the necessary political and social measures be taken today to enable us to benefit in the future from the advantages AI offers society and to remove potential risks.

Toby Walsh: 2062 - The World that AI Made, La Trobe University Press, 2018, ISBN 9781760640514

Efficiency and Fairness - In his ERC project, Toby Walsh explores how game and behavior theories can provide models for a fairer allocation of resources and costs.

“AMPLify – Allocation Made PracticaL” is the name of the project through which Professor Toby Walsh aims to create a basis for successfully addressing a pressing problem faced by society: the fair, global distribution of resources and costs. He is seeking to create a comprehensive computer-aided model incorporating behavior and game theory as a calculation tool for what is referred to as “allocation research”. Professor Walsh believes the current mechanisms for allocating resources and costs are limited to simple abstract models which do not take account of how people actually behave in reality. This includes the allocation and distribution worldwide of all kinds of goods, wealth, energy, foodstuffs, or – and what is particularly relevant at the moment – organs for transplantation. Among issues examined at the regional level are: the allocation of study places in schools and universities; the organization and joint use of large-scale equipment for researchers; the use of clinical equipment for operations; and transport options for the distribution of goods.
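To give a flavor of the kind of mechanisms studied in allocation research, here is a minimal sketch of the classic round-robin procedure for dividing indivisible goods: agents take turns picking their most-valued remaining item, which under additive valuations guarantees envy-freeness up to one good (EF1). This is a standard textbook mechanism offered for illustration only; it is not AMPLify's actual model, and the agent and item names are invented.

```python
def round_robin(valuations):
    """Allocate indivisible items by round-robin picking.

    valuations: dict mapping each agent to a dict of item -> value.
    Agents take turns (in dict order) choosing the remaining item
    they value most. Returns dict agent -> list of items picked.
    """
    agents = list(valuations)
    remaining = set(next(iter(valuations.values())))
    allocation = {a: [] for a in agents}
    turn = 0
    while remaining:
        agent = agents[turn % len(agents)]
        # Greedy step: this agent takes its most-valued remaining item.
        best = max(remaining, key=lambda item: valuations[agent][item])
        allocation[agent].append(best)
        remaining.remove(best)
        turn += 1
    return allocation

if __name__ == "__main__":
    # Hypothetical example: two agents dividing three items.
    vals = {
        "alice": {"car": 8, "bike": 3, "boat": 5},
        "bob":   {"car": 6, "bike": 7, "boat": 2},
    }
    print(round_robin(vals))  # alice takes car then boat; bob takes bike
```

Real allocation models of the kind the project pursues add strategic behavior, constraints, and empirical data about how people actually act; this sketch only shows the bare mechanism.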
Walsh, who is a member of the Australian Academy of Science, is Scientia Professor of Artificial Intelligence at the University of New South Wales (UNSW) in Sydney and leads the Algorithmic Decision Theory group at Data61, Australia's Centre of Excellence for ICT Research, where his work focuses on optimization, game theory, and social choice theory. In 2016, he received an Advanced Grant worth approximately 2.5 million euros from the European Research Council (ERC) as part of the “Excellent Science” program. He used this funding to set up the Chair of Algorithmic Decision Theory at TU Berlin's Institute of Software Engineering and Theoretical Computer Science, and it is here that he established the “AMPLify” project.
His teaching includes modules on artificial intelligence and social choice theory, while the 2019 “AI Summer of Research” focused on theories of fairer allocation of resources as well as on game theory, the economy, and machine learning. A program covering related issues will also be offered in the coming winter semester.


Patricia Pätzold, "TU intern", July 2019
