[Picture of a platypus]

That’s a good question. And to be honest, not an easy one to answer. But hey, I like challenges! Another good question is: what the hell is a platypus doing in this article? Well, keep reading to find out!

A bit of history

Nothing too fancy, don’t worry. It is usually agreed that the term “Artificial Intelligence” and the research field itself date back to 1956 and the “Dartmouth Conference”. This conference brought together leading researchers from Mathematics, Computer Science, Engineering, Psychology and other fields, with the goal of studying “every aspect of learning and intelligence”. Or, as John McCarthy put it himself:

We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer. — J. McCarthy, Aug. 31 1955

Two months for modeling all the features of intelligence? Maybe a bit optimistic, this one. The whole proposal for the Dartmouth Conference is available online [1].

Since then, the field of Artificial Intelligence has gone through two major “winters”. The term “AI winter” appeared for the first time in 1984 during the AAAI Conference (the American Association of Artificial Intelligence, one of the leading conferences in the field, since renamed the Association for the Advancement of Artificial Intelligence). The first winter happened in the second half of the 1970s and the second one in the late 1980s.

One of the causes of the first AI winter was the unfulfilled promises of the field. Researchers promised tremendous successes and great changes for everyone thanks to AI, but failed to deliver, and the task started to seem impossible. The public, the media and researchers themselves started to lose faith in the face of the major challenges, and the goal no longer seemed worth the price. This resulted in heavy funding cuts and a loss of interest.

The second AI winter, in the late 1980s, started similarly: some very ambitious projects were not fulfilled, and governments and funding agencies lost faith in the commercialization of AI. This once again resulted in a lack of money for AI projects, slowing down the research.

In between, of course, AI saw some “summers”, with the appearance of Expert Systems after the first winter and the rise of embodied AI after the second. AI seems to have been in a constant summer since the early 2000s, and we are still experiencing the latest and probably hottest wave today with the improvements, great successes and commercialization of Machine Learning, especially Deep Learning technologies. However, it would be a mistake to think that the rest of the field has been twiddling its thumbs while Machine Learning progressed. Great successes have been achieved in various other subfields too, just probably less visible ones than learning.

<Insert Artificially Intelligent Title here>

Giving a definition of AI is a bit difficult, and so is finding a name for the section giving this definition. Artificial Intelligence is a very, very large field which encompasses technologies as varied as expert systems, reasoning, planning, machine perception, natural language processing, game theory… We also face the problem that the field of AI changes with its own achievements and improvements. Tesler’s theorem gives a good summary of this by stating that:

AI is whatever hasn’t been done yet.

It captures the fact that, in the past, each time an application of Artificial Intelligence techniques became mainstream, it stopped being considered AI, because it no longer reflected what we imagine “intelligence” to be.

Despite these difficulties in defining AI, when asked to do so, I usually go with the following: Artificial Intelligence is what gives an artificial agent the ability to reason, plan, predict and learn. In other words, it is what allows an agent to adapt.

I am well aware that this definition does not generalize to all fields of AI, especially as soon as we consider human-aware AI [2] or embodied AI [3], but it seems to me that it encompasses enough to be a satisfactory first definition. So let’s try to break it down and see what it means.

Agent and Environment

First of all, we introduced the term artificial agent (which I will often shorten to agent) in this article. You might have a good intuition of what an agent is, but as usual: let’s make it clear.

So here it is:

An agent is an entity (a program or a set of programs) that exists in an environment. This environment can be the physical world or an abstract one, and the agent can be embodied (i.e. having a body, like a robot) or abstract (i.e. existing only in a computer). An agent does not exist in isolation: it exists in, and in relation with, an environment. Note that the environment can contain other agents. Finally, the set Agent + Environment is called the System.

An agent has a model and a reactor. The model describes what the agent knows about itself and the environment. The reactor is a very generic word [4] to describe the fact that the agent reacts to things it perceives from the environment, using its model, and produces actions. Actions can be moving, putting a label on a picture, opening a door… you name it!

Robots evolving in the real world are embodied agents (i.e. they have a body, and controlling this body is an important part of their intelligence). However, having a body is not a necessity for an agent. A chatbot answering your questions on your bank’s website is also an agent. An agent by itself is not intelligent. There exist reactive and cognitive agents, and all the degrees in between. Purely reactive agents react to stimuli: something happens in the environment and the agent always reacts to it in the same way. Cognitive agents are capable of complex behaviors, which they decide on by reasoning and planning. In reality, most agents show a mix of reactive and cognitive behavior. In the rest of this article, we will focus on the cognitive part.
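To make this more concrete, here is a minimal sketch in Python of a purely reactive agent with its model and its reactor. The Thermostat class and its method names are my own invention for illustration, not any standard API:

```python
# A minimal sketch of a purely reactive agent: a tiny model of itself and
# the environment, plus a reactor mapping percepts to actions.
# All names here (Thermostat, perceive, react) are illustrative.

class Thermostat:
    def __init__(self, target_temp):
        # The "model": what the agent knows about itself and the world.
        self.model = {"target": target_temp, "last_reading": None}

    def perceive(self, temperature):
        # Sensing: store the latest percept in the model.
        self.model["last_reading"] = temperature

    def react(self):
        # The "reactor": same stimulus, same response, no deliberation.
        if self.model["last_reading"] is None:
            return "wait"
        if self.model["last_reading"] < self.model["target"]:
            return "heat"
        return "idle"

agent = Thermostat(target_temp=20.0)
for reading in [17.5, 19.0, 21.3]:        # percepts from the environment
    agent.perceive(reading)
    print(reading, "->", agent.react())    # heat, heat, idle
```

A cognitive agent would replace react() with deliberation, i.e. reasoning and planning over its model, which is exactly what the next sections are about.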

The fuel of intelligent decision-making: perception, data and knowledge

To be able to reason and plan their behavior, agents need to be able to perceive their environment. As humans [5], we use our sensors (eyes, ears, nose, skin…) to perceive the world around us in different ways. It is similar for agents, except that their sensors can be physical (cameras, microphones, temperature sensors…) or virtual. Virtual sensors are simply mechanisms that enable the agent to receive information about its environment, such as an input field in which you type your question, or the function that allows a negotiating bot to know the current price of the item it wants to buy.

As for us, sensors give data to agents: simple signals that, taken alone, have no meaning at all, just sequences of zeros and ones (much like our own sensors convey electric impulses). We need to transform this data into knowledge, i.e. give a meaning to this data, before even considering reasoning upon it.

Usually, humans are the ones who give the meaning. When a neural network classifies images of cats, dogs or platypuses, what it really classifies is images of Thing1, Thing2 and Thing3. The network has no idea what a cat, a dog or a platypus is. For instance, it does not know that all three walk on four legs, that cats and dogs have two external ears but the platypus does not, or that the dog has human masters while the cat has human slaves and the platypus is the proof that God has a sense of humor. All this information is something a human needs to encode so that the algorithm can reason (if needed) with the fact that it recognized a platypus.

[Picture of a platypus]

I really wanted to put a picture of a platypus. Credit: Klaus, Flickr: Wild Platypus 4

Adding semantics to data is the research field of Knowledge Representation and of all the semantics-related techniques (semantic perception, semantic mapping…). And there is a lot to do!
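To give an idea of what “encoding the meaning” can look like, here is a toy knowledge base as subject-predicate-object triples, a classic Knowledge Representation idiom. All facts and names below are made up for illustration:

```python
# A toy knowledge base as subject-predicate-object triples: the kind of
# semantics a human must hand the algorithm, since "platypus" is just
# Thing3 to the classifier. Facts and names are illustrative.

facts = {
    ("cat", "walks_on", "four_legs"),
    ("dog", "walks_on", "four_legs"),
    ("platypus", "walks_on", "four_legs"),
    ("cat", "has", "external_ears"),
    ("dog", "has", "external_ears"),
    ("dog", "serves", "human_masters"),
}

def query(subject=None, predicate=None, obj=None):
    # Return every triple matching the non-None fields.
    return [t for t in facts
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

print(query(predicate="walks_on"))                 # all known quadrupeds
print(query(subject="platypus", predicate="has"))  # [] : no external ears
```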

Inference: n. Deriving logical conclusions from premises known or assumed to be true

Getting knowledge from sensor data is a first step. The second one is to create new knowledge from known facts. This is the domain of Reasoning Techniques. There are dozens of ways to perform reasoning, and as many research fields. It all depends on what type of knowledge you want to consider and which parts of the system are important to you. You can count on Spatio-Temporal Reasoning if you want to consider the duration of events or actions as well as the entities’ locations in space, Reasoning under Uncertainty if you do not know everything about the world around you, Case-Based Reasoning if you want to solve new problems based on previously known solutions to different but related ones, or Common-Sense Reasoning [6] if you wish to reason about usual, everyday situations as humans do. And obviously, I did not list all of the existing reasoning sub-topics. And you know what? In each of these sub-topics, there are other sub-topics! For instance, for a while I have been working on Probabilistic Reasoning, which is a sub-class of Reasoning under Uncertainty. Who said there was nothing left to do in AI? No one? Great! Let’s continue.
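As a taste of the simplest possible reasoning technique, here is a minimal forward-chaining sketch: it applies hand-written if-then rules to known facts until no new conclusion can be drawn. Rules and facts are, again, made up:

```python
# A minimal forward-chaining sketch: derive new facts from known ones by
# repeatedly applying if-then rules until nothing new can be concluded.
# Rules and facts are illustrative.

facts = {"lays_eggs", "has_fur"}
rules = [
    ({"lays_eggs", "has_fur"}, "is_monotreme"),   # premises -> conclusion
    ({"is_monotreme"}, "is_mammal"),
]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # {'lays_eggs', 'has_fur', 'is_monotreme', 'is_mammal'}
```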

From Knowledge to Action: a little bit of planning

So now we are able to infer knowledge from data, and new knowledge from previous knowledge. That is great for a lot of applications [7]. But for a lot of others, it is not enough: at some point, we might need to plan. What is planning? In their book Automated Planning [8], Ghallab, Nau and Traverso [9] define it by saying that “planning is reasoning about actions”.

Wait, what? Planning is reasoning? So why is it not in the previous section? Well, indeed, planning is a form of reasoning, as it infers new knowledge (in this case, a set of actions to perform) from previous knowledge (about the environment, the planning agent…). But it is such a special and such a huge case of reasoning that it is nice to deal with it separately.

So, planning is deciding on a set of actions to perform in a certain order to reach a given goal. This set of actions is usually represented either as a plan or as a policy. A plan is a succession of actions. It is basically saying “first you do A, then you do B, then you do C”. There exist conditional plans, which say “first you do A, and if X happens you do B, otherwise you do C”, but, you get the idea, it is still a succession of actions. A policy, on the other hand, is a summary of the best action to perform in each possible state of the system. So a policy is saying “if the system is in state X, you do A. If it is in state Y, you do B, and if it is in state Z, you do C”. But it does not say whether the system is going to end up in state Y or Z after performing A in X.
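In code, the difference between the two representations is easy to see. A tiny sketch, with made-up states and actions:

```python
# A plan is just an ordered sequence of actions; a policy maps every
# state to the action to take there. States and actions are made up.

plan = ["A", "B", "C"]                   # "first do A, then B, then C"

policy = {"X": "A", "Y": "B", "Z": "C"}  # "in state X do A", etc.

# Executing a plan ignores the state; executing a policy reads it.
for action in plan:
    print("plan says:", action)

state = "Y"                              # wherever the system ended up
print("policy says:", policy[state])     # B, whatever happened before
```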

As for reasoning, the type of planning you will choose depends on the assumptions you make. There are a lot of possible assumptions, but three of them are the most important in deciding which type of planning to go for:

  1. Are your actions deterministic or stochastic? That is, if you perform the same action A twice in the same state X, are you 100% sure that you will end up in the same state Y? If yes, your actions are deterministic and you will usually use techniques that come up with plans (a tiny search-based sketch follows this list). If not, your actions are stochastic and you will usually use techniques that come up with policies or conditional plans.
  2. Do you know exactly the state of your system? If you can know at any time which state your system is in, then the system is said to be fully observable. It is the case for an agent playing chess, which knows exactly the state of the board after each move. Otherwise, the system is said to be partially observable, as in the case of an agent playing Tetris: it does not know which piece will come after the next one. It is also the case for a robot which can only sense the world with its sensors but does not see it entirely.
  3. Is your environment static or dynamic? Your environment is static if only the actions you perform can change its state. This would be the case for a one-player game such as solitaire. If its state can change without you acting, then it is dynamic. A two-player game such as chess would be a dynamic environment from the point of view of one agent.
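When all three answers fall on the “easy” side (deterministic actions, fully observable state, static environment), planning can be done by simply searching the graph of states. Here is a minimal breadth-first sketch on a toy domain of my own invention:

```python
from collections import deque

# Deterministic, fully observable, static toy domain: each action maps a
# state to exactly one successor. Breadth-first search then finds a plan
# (a sequence of actions) reaching the goal. The domain is illustrative.

transitions = {
    ("start", "open_door"): "hallway",
    ("hallway", "go_left"): "kitchen",
    ("hallway", "go_right"): "office",
    ("kitchen", "grab_snack"): "goal",
}

def bfs_plan(initial, goal):
    frontier = deque([(initial, [])])     # (state, plan so far)
    visited = {initial}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan
        for (s, action), nxt in transitions.items():
            if s == state and nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None                            # no plan reaches the goal

print(bfs_plan("start", "goal"))
# ['open_door', 'go_left', 'grab_snack']
```

Relax any of the three assumptions and this simple search is no longer enough; that is where policies, conditional plans and probabilistic techniques come in.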

And now you have an agent capable of perceiving its environment, reasoning about it and planning its future actions. Awesome, isn’t it?

And what about Machine Learning?

Machine Learning is a sub-topic of AI the same way perception, reasoning and planning are, but it is also a set of techniques used in those aforementioned fields. So it is both a study subject and a tool for other study subjects.

The idea behind Machine Learning is to give a gazillion [10] examples of something to an algorithm and let it find a way to do something with them. This second “something” depends on the goal given by the programmer.

Disclaimer: it is mostly wrong to say that “even researchers do not understand ML”. The problem with ML (and also its strength) is that it comes up with solutions so complex, taking into account amounts of data so huge, that no human could process them. However, the system designer still gives the machine its goal and its possible actions. What they do not understand is how the algorithm comes to say that this solution is the right one [11], but they know what the algorithm is supposed to achieve and what it can use to do so. It happens that the algorithm comes up with surprising and unexpected solutions, but the goal of the software is known. So from now on, be careful each time you read news headlines such as “Robots come up with a new artificial language without being instructed to”. The truth is usually much more subtle.

Anyhow, back to the topic. There exist three types of Machine Learning: Supervised Learning, Unsupervised Learning and Reinforcement Learning.

In Supervised Learning, we give a lot of examples to an algorithm, let’s say pictures of animals. We also give it a label for each example, let’s say “this is a cat, this is a dog, this is a platypus”. The algorithm then learns to recognize what, in a picture, characterizes a cat, a dog or a platypus, and becomes able to discriminate between them. Note that, as I wrote before, the algorithm doesn’t know what it means to be a cat, a dog or a platypus. It just learns to recognize the characteristics that allow it to distinguish them.
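To make this tangible, here is a toy supervised learner: a nearest-centroid classifier on made-up two-dimensional “features”. A real image classifier learns far richer features than two hand-picked numbers, of course:

```python
# Supervised learning in miniature: labeled examples in, a decision rule
# out. A nearest-centroid classifier on fake 2-D features; the data and
# feature values are entirely made up for illustration.

labeled = {
    "cat":      [(1.0, 2.0), (1.2, 1.8)],
    "dog":      [(4.0, 4.2), (3.8, 4.4)],
    "platypus": [(8.0, 1.0), (7.6, 1.2)],
}

# "Training": compute one centroid per label.
centroids = {
    label: (sum(x for x, _ in pts) / len(pts),
            sum(y for _, y in pts) / len(pts))
    for label, pts in labeled.items()
}

def classify(point):
    # Predict the label whose centroid is closest to the point.
    def dist2(c):
        return (point[0] - c[0]) ** 2 + (point[1] - c[1]) ** 2
    return min(centroids, key=lambda lbl: dist2(centroids[lbl]))

print(classify((7.9, 0.9)))  # 'platypus' (i.e. Thing3, as far as it knows)
```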

Unsupervised Learning is used to detect patterns and trends in amounts of data that a human cannot handle. Basically, you give your algorithm millions of samples of something and ask it to “find something”. It is excellent at finding correlations between seemingly unrelated data. However, as usual, the meaning to give to these correlations is left to the human mind.
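A classic unsupervised example is k-means clustering: no labels at all, just “group these points for me”. A minimal sketch on made-up data:

```python
# Unsupervised learning in miniature: k-means groups unlabeled points.
# Nobody tells the algorithm what the clusters *mean*; it only finds them.
# Points and initial centers are made up for illustration.

points = [(1.0, 1.1), (0.9, 1.3), (5.0, 5.2), (5.1, 4.8)]
centers = [(0.0, 0.0), (6.0, 6.0)]        # initial guesses

for _ in range(10):                        # a few Lloyd iterations
    clusters = [[], []]
    for p in points:                       # assign each point to the
        d = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centers]
        clusters[d.index(min(d))].append(p)
    centers = [                            # move centers to the means
        (sum(x for x, _ in cl) / len(cl), sum(y for _, y in cl) / len(cl))
        if cl else centers[i]
        for i, cl in enumerate(clusters)
    ]

print(centers)   # roughly [(0.95, 1.2), (5.05, 5.0)]
```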

Finally, Reinforcement Learning allows an agent to plan without knowing a model of the world in advance. This is the kind of learning AlphaGo uses, as do the algorithms learning to play Atari games. The idea is that you give a set of possible actions to a program and let it run for a certain number of episodes, one episode being a succession of actions, environmental changes and perceptions. At the end of the episode, you give a reward to the agent depending on how it performed, and the underlying algorithm adapts to maximize its reward and learn what a good behavior is.
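And here is a bare-bones sketch of that loop: tabular Q-learning on a made-up two-state world. AlphaGo’s machinery is vastly more sophisticated, but the reward-driven idea is the same:

```python
import random

# Tabular Q-learning in miniature: the agent knows its possible actions
# but not the world's model; it learns action values from rewards alone.
# Toy world: from state 0, action "right" reaches state 1 (reward 1.0);
# anything else stays put (reward 0). World and numbers are made up.

actions = ["left", "right"]
Q = {(s, a): 0.0 for s in (0, 1) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

def step(state, action):
    if state == 0 and action == "right":
        return 1, 1.0          # next state, reward
    return state, 0.0

for episode in range(200):
    state = 0
    for _ in range(5):                            # a short episode
        if random.random() < epsilon:             # explore...
            action = random.choice(actions)
        else:                                     # ...or exploit
            action = max(actions, key=lambda a: Q[(state, a)])
        nxt, reward = step(state, action)
        best_next = max(Q[(nxt, a)] for a in actions)
        Q[(state, action)] += alpha * (reward + gamma * best_next
                                       - Q[(state, action)])
        state = nxt

print(Q)  # Q[(0, 'right')] should dominate Q[(0, 'left')]
```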

This is all?

Nope. As I previously mentioned, there is a ton of other sub-domains of AI that we haven’t even started to consider. Just to name a few: Multi-Agent Systems (when several agents act in one environment), Natural Language Processing, Simultaneous Localization and Mapping (when a robot tries to determine where it is and map its surroundings… at the same time)… We will probably tackle some of these in the future.

As you might have gathered, AI is huge. It is a very complex and large field of study, which is also actually pretty badly defined. But now, I think you have the basics to follow the future articles of this blog a bit better! If you want more information on a topic, I give you a list of links, papers and books in the following section. I starred the ones I highly recommend. So stick around 🙂

See you later!

Additional Content

The Holy Grail of all books on Artificial Intelligence in general: Russell and Norvig, Artificial Intelligence: A Modern Approach

The bible of planning: Ghallab, Nau and Traverso, Automated Planning: Theory and Practice

The bible of Reinforcement Learning: Sutton and Barto, Reinforcement Learning: An Introduction, 2nd Edition.

David Silver’s course (DeepMind) is available on YouTube. A very good introduction to Reinforcement Learning.

Andrew Ng’s Coursera class, a good introduction to Machine Learning in general.

** Excellent article about Common-sense reasoning and Machine Learning on WIRED

*** Fantastic talk from Subbarao Kambhampati about human-aware AI

  1. http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html
  2. We’ll do an article later on this specific topic
  3. Yeah, there will be something in there too
  4. A word I invented myself
  5. Or any other creature on Earth, really.
  6. About Common-Sense Reasoning: I recently found a fantastic article from WIRED explaining why it is hard and why we would like machines to have it. The article is available here and in the “Additional Content” section at the end of this article. I very, very strongly recommend reading it.
  7. Like the guest detection I already talked about here
  8. The ultimate reference in Automated Planning (no kidding)
  9. Yes, in science we generally use only family names to refer to authors. It seems a bit odd at the beginning but you get used to it quickly. It still happens to me not to know the first name of some famous researchers or to realize that they are men when I thought they were women or the other way round.
  10. I am (slightly) exaggerating… Not that much though
  11. Even this is not completely true, but let’s accept this simplification for now