27 August 2018

Prague meeting brings together top researchers in artificial intelligence


By Jan Velinger


The Human-Level AI 2018 conference, which brought top researchers to Prague to discuss recent advances and different approaches to artificial intelligence, has just wrapped up. For anyone interested in the development of general AI, Prague was the place to be over the last few days.


GoodAI's chief operating officer (and Charles University graduate) Olga Afanasjeva


INTERVIEW


During the event, I met with one of the main organisers, Olga Afanasjeva, to discuss developments in AI as well as how the conference came together.


“The idea was to bring together three existing conferences under one umbrella in the Czech capital, called Human-Level AI. It is an advantage to have everyone meet in one place rather than at many. Prague made sense because our company, GoodAI, is based here. Our core focus is AGI, or artificial general intelligence, and we were honoured to be able to co-host this kind of event.”


When I spoke to CEO Marek Rosa a few years back, he said that far fewer companies and researchers were interested in AGI than in much narrower applications. Is that the same today, or has the situation changed?


“Researchers focusing on narrow AI still greatly outnumber those in AGI. But trends are changing very fast and more groups are now turning to the problem of trying to tackle human-level artificial intelligence than was the case before. Even those who focus on current real-world problems are now coming up with and using techniques similar to those we have been using in the search for AGI – techniques such as lifelong learning, transfer learning, and conditions where an AI learns in a similar way to humans.


“Some of these methods can have huge advantages when it comes to learning efficiency. Some of your readers or listeners will know that to design narrow AI systems or applied AI solutions you usually need a lot of data – many, many examples – and a lot of computing power. This is what has been behind the success of neural networks in the very recent past. It’s important to realize that the technology has been around since the 1940s, but thanks to today’s powerful graphics processing units (GPUs) and the ability to crunch a lot of data, we saw a boom. Of course, this is still narrow AI.


“But this is not the way humans learn. One thing that is remarkable is just how much we can do with our brains when you consider that we don’t have such computing power: the equivalent is something like powering a 30-watt lightbulb. Yet we can do so much, we can process so much, we don’t need millions of training examples to learn and understand something, and, most important of all, we can transfer our knowledge to new domains. We can learn something, understand complex associations, and employ meta-strategies when we encounter new but similar problems down the road.”


Are we at a kind of turning point when it comes to machine learning?


“Well, we are kind of seeing this change. Until now, solving computer games was more of a statistical problem for AI, and a good example is the software that recently defeated the world’s top Go player. There were actually two implementations of the AI: the one which succeeded first was fed, and studied, millions of examples of human play, while a second implementation, called AlphaGo Zero, started from scratch and taught itself how to play. It received no examples from humans. [Ed. note: The original AlphaGo has since been beaten repeatedly by its successor, according to The Verge, The Register, and other outlets.]



“This particular AI is still designed for one thing – to play and win at Go – and its capabilities are not immediately or easily transferable to other domains. At the same time, there has been a very interesting move towards algorithms that can improve themselves and learn on their own, and that is an interesting trend.”
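The self-play idea described above – an agent that starts from scratch and learns with no human examples – can be illustrated with a toy sketch. This is my illustration, not code from DeepMind or GoodAI: a tabular Q-learning agent teaching itself tic-tac-toe, whereas the real AlphaGo Zero combines deep neural networks with Monte Carlo tree search.

```python
import random
from collections import defaultdict

# Toy self-play learner for tic-tac-toe. Both sides share one value
# table and improve purely by playing against themselves: no human
# game records are ever consulted.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
             (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if a side has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == ' ']

def self_play_train(episodes=5000, alpha=0.3, epsilon=0.2, seed=0):
    """Learn move values from scratch via epsilon-greedy self-play."""
    rng = random.Random(seed)
    q = defaultdict(float)  # (board_string, move) -> value for the mover
    for _ in range(episodes):
        board, player, history = [' '] * 9, 'X', []
        while True:
            state = ''.join(board)
            legal = legal_moves(board)
            if rng.random() < epsilon:                       # explore
                move = rng.choice(legal)
            else:                                            # exploit
                move = max(legal, key=lambda m: q[(state, m)])
            history.append((state, move))
            board[move] = player
            if winner(board) or not legal_moves(board):
                # +1 for the winner's moves, -1 for the loser's, 0 for
                # a draw, propagated back with alternating sign.
                target = 1.0 if winner(board) else 0.0
                for s, m in reversed(history):
                    q[(s, m)] += alpha * (target - q[(s, m)])
                    target = -target
                break
            player = 'O' if player == 'X' else 'X'
    return q

table = self_play_train()
```

After training, the shared table holds value estimates for the positions both sides encountered; the same "learn by playing yourself" loop, scaled up enormously, is what let AlphaGo Zero dispense with human game records.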


In his book Life 3.0, the physicist Max Tegmark writes about that Go match, saying that the turning point was a move by the AI that would have been highly counter-intuitive to generation after generation of Go masters. People have been playing this game for several thousand years, yet in a relatively short period of time an AI found a move that no human being had.


“It’s exciting. And now they will use this AI to study how to play Go better! It is, however, not because it thinks like humans do – we are not close to that yet: it is still about optimization strategies. It is still a narrow AI, designed for a specific purpose. For general AI, we need the software to be far closer, at least initially, to how humans operate. It needs to understand our world and our values. A general AI will also have to react in similar and predictable ways. If you ask an AI-programmed robot to bring you something from the other room, you don’t want the machine breaking through the wall or any other barrier in its path. The ways of achieving goals have to be compatible with the way humans would do it.


“For potential general AIs to operate in the real world we have to be able to communicate with them without ambiguity and the AIs have to understand our world.”


It really depends on who you ask: some experts think achieving AGI – often associated with the singularity – is hundreds of years away or even impossible; others think a major breakthrough is just around the corner…


“During the conference we ran an online survey for attendees. Mid-way through, more than 40 percent had replied that they believe it will happen within the next five to ten years. So many people here are quite optimistic.”


That is kind of the Ray Kurzweil school of thought: he usually presents quite a near date, doesn’t he… [Ed. note: Kurzweil has predicted 2029 as the date AI will pass a valid Turing test and 2045 for the singularity].


“Yes, and when it comes to Ray Kurzweil, his predictions have often been correct. But we’ll see.”


Security is a big issue when it comes to developing AGI: does that mean it has to be kept in a box off the grid? Fed an enormous amount of information but without being connected directly to the internet?


“That is one safeguard. Sandboxing – keeping AIs which are under development in a safe testing environment – is important. It is the same with self-driving cars: you wouldn’t let one of those cars out onto the street without testing it over many hours on a closed test track. When humans are in the loop, you have to do a lot of testing in virtual environments and simulations. When it comes to AGI, the design has to be safe from the get-go. You can have sandboxes and emergency shutdown buttons, but more important is that the AI is taught human values and is compatible with them. We need any future AI to be modelled on our values.”




In a recent lecture in Prague, the well-known AI scientist Ben Goertzel talked about how used we are getting to things changing rapidly: 10 years ago the idea of self-driving cars would have seemed incredible, yet now they are just around the corner.


“And AGI could be next. I also think there is reason to be optimistic that it isn’t too far away: maybe someone will come up with a solution, an algorithm which pushes the research ahead significantly and helps us advance. We have encountered no physical limitations which would indicate that achieving AGI is impossible. We need good ideas, but I think the obstacles can be overcome.”


To come back to the point of the conference: it brings together many different researchers and many different disciplines and approaches. I suppose that when it comes to solving AGI, there are many different ways to skin the cat…


“Yes. Saturday’s program at the conference was a good example, divided into several blocks. One block was dedicated to AI inspired by Nature, so there we had a lecture by Irina Higgins, a senior research scientist from Google DeepMind, one of the biggest companies in the world focusing on general AI research. Her focus is transferring knowledge from neuroscience to designing AIs.


“Ryota Kanai, the CEO of a Tokyo-based company called Araya, is another very interesting guest. His firm's take on achieving AGI is through artificial consciousness. His background is also in neuroscience.


“There is also a block dedicated to how systems learn and there we have the Czech-born speaker Tomas Mikolov from Facebook AI Research. He is focusing on how machines learn and on communication aspects in creating AGI. He had quite a detailed scientific keynote presentation as well as a presentation for a more general audience.


“One really interesting area is Nature-inspired AI, which Kenneth O. Stanley of Uber AI Labs is looking at. He focuses on genetic algorithms and on using them to create AIs that are capable of creativity. He has a very interesting take on objectives and, in a way, the lack of them. For example, you may have a goal but cannot come up with a solution. For humans, a solution can come when you are not expecting it, when you are taking a shower or taking a break.


“So even an AI might benefit from some ‘time off’ or some time to explore, to come up with a solution in a partly random way.”
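The objective-less exploration Stanley describes is formalized in his novelty search work: candidates are selected for how different their behaviour is from anything seen before, rather than for progress toward a goal. A minimal, hypothetical sketch (the one-dimensional "behaviour" and all parameter values here are illustrative assumptions, not Stanley's actual code):

```python
import random

# Toy novelty search: instead of scoring candidates by closeness to an
# objective, score them by how *different* their behaviour is from
# everything explored so far. Here a genome's behaviour is just its
# value; real systems use much richer behaviour descriptors.

def novelty(behaviour, archive, k=5):
    """Mean distance to the k nearest behaviours seen so far."""
    if not archive:
        return float('inf')
    dists = sorted(abs(behaviour - b) for b in archive)
    return sum(dists[:k]) / min(k, len(dists))

def novelty_search(generations=50, pop_size=20, seed=0):
    """Evolve a population whose only 'fitness' is being novel."""
    rng = random.Random(seed)
    population = [rng.uniform(-1.0, 1.0) for _ in range(pop_size)]
    archive = []
    for _ in range(generations):
        # Rank by novelty, not by closeness to any objective.
        ranked = sorted(population, key=lambda g: novelty(g, archive),
                        reverse=True)
        parents = ranked[:pop_size // 2]
        archive.extend(parents)          # remember what has been explored
        # Each parent yields two mutated children for the next generation.
        population = [p + rng.gauss(0.0, 0.3)
                      for p in parents for _ in (0, 1)]
    return archive

explored = novelty_search()
```

Because selection rewards being unlike the archive, the population tends to spread out and stumble onto regions an objective-driven search might never visit – the algorithmic analogue of a solution arriving while you are taking a break.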



Coming from an arts background, that is a situation I recognise: you can be sitting in front of a blank page or screen and racking your brain or working hard but then you eventually hit a wall and need to take a break, to get some distance. But even if you go do something else, you are still thinking about the project on some level at the back of your mind and sometimes solutions present themselves.


“That’s it.”


Speaking of the arts, that is partly your background as well, isn’t it? You didn’t come to AI from computer science or IT…


“That’s right. I studied Czech and Fine Arts at the University of West Bohemia in Plzeň and then did a second Masters – in public and social policy – at Charles University in Prague. My social sciences background has been useful because AI is so interdisciplinary, both in the teams involved and in the approaches and various aspects of the research. And there are many important aspects beyond the research itself: the societal impact and so on.


“When designing algorithms, AI draws on so many fields: psychology, neuroscience, economics – fields well beyond IT. We have what is a multi-agent system, where the agents interact in different ways and have a certain ‘economy’ between them. So it is very important to look at the problem from different scientific perspectives.”


It also sounds like the more we strive to develop AGI, the better we have to become at understanding – and explaining – ourselves.


“Definitely. I would say so, yes.”

