Machines Dream of Philosophy and Electric Sheep: How Machines Can Remind Us to Keep Growing

(Sheep 3404 | Self-made by the ElectricSheep code)

If self-learning makes machines more human, then what are we if we don't keep growing?

By Sam Yang

Google has created artificial intelligence (AI), but this isn't all that new; it has been building AI for some time. From your search engine to your Netflix recommendations to your Facebook newsfeed, AI is all around us, constantly learning our behaviors and executing actions based on them. In the past, we gave machines step-by-step instructions, and they mechanically fulfilled those instructions without variation. Each new program was carefully planned and analyzed; nothing new or unexpected occurred (except by accident). Now we use complex mathematics (algorithms): instead of programming every behavior, we give the machine parameters and feed it data. Big data.
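To make that shift concrete, here is a minimal, hypothetical sketch (plain Python, no one's production code) of the two eras side by side: a hand-written rule that does exactly what its programmer decreed, and a tiny learner whose parameters are shaped by the data it is fed.

```python
def rule_based_label(message: str) -> str:
    """Old style: the programmer spells out the behavior in advance."""
    if "free money" in message.lower():
        return "spam"
    return "ham"


def train_weights(examples, epochs=20, lr=0.1):
    """New style: a tiny perceptron infers its own keyword weights from data."""
    weights = {}  # the parameters start empty; the data shapes them
    for _ in range(epochs):
        for words, label in examples:  # label: 1 = spam, 0 = ham
            score = sum(weights.get(w, 0.0) for w in words)
            prediction = 1 if score > 0 else 0
            error = label - prediction
            for w in words:  # nudge each word's weight toward the truth
                weights[w] = weights.get(w, 0.0) + lr * error
    return weights


data = [
    ("win free money now".split(), 1),
    ("lunch at noon".split(), 0),
    ("free prize claim now".split(), 1),
    ("report due at noon".split(), 0),
]
print(rule_based_label("Win FREE MONEY now"))  # "spam", by decree
print(train_weights(data))                     # spam-ness, by experience
```

Nothing about spam is written into `train_weights`; swap in different examples and the same code learns a different behavior. That is the difference between instructing a machine and feeding it data.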

A machine is no longer physical; it is software, code that lives on servers around the world with no central host. A physical presence would only hinder its progress. No longer bound to static programming instructions, it can predict and learn from previous datasets (machine learning). Much like our own minds, the process combines highly sophisticated pattern recognition with computational learning. Computer, computation: we use these terms without much reflection, but they have always meant mathematical calculation. Computation improved significantly once machines became less reliant on human programmers. This is what our parents did: they guided our learning to make it more efficient, up to a point, until they too became a bottleneck that limited our potential. To continue our evolution, then, it only makes sense to outgrow our parents' tutelage; in the case of machines, the tutelage of the programmers.

An Artificial Conversation

Google has created a "chatbot," a machine that can hold reasonable conversations. There have been chatbots before; what makes this one different is that it doesn't use rule-based answers. No team of software engineers programmed it with the right response to every imaginable variation of a question (a kind of highly calculated FAQ). Instead, Google researchers Oriol Vinyals and Quoc Le built a system that analyzes existing conversations in old movies (data mining) and teaches itself how to respond. Old movies were an easy dataset, but the technology is capable of mining many other kinds of data.

Le told Wired:

Instead of using rules to build a conversational engine, we use a machine learning approach. We let the machine learn from data rather than hand-coding the rules.

The researchers themselves didn't know what the chatbot might say, which poses interesting implications, even philosophical ones. Vast networks of machines that mimic the neurons of the human brain, known as neural networks, run these massive computations. The researchers published their findings on arXiv, a repository for research papers.
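For the technically curious, the paper's approach is a sequence-to-sequence network: one recurrent network encodes the prompt into a vector, and a second decodes a reply from that vector, one token at a time. Below is a heavily simplified sketch in PyTorch; the toy vocabulary and single training pair are my stand-ins for the movie-subtitle corpus, and the real model is vastly larger.

```python
import torch
import torch.nn as nn

vocab = ["<pad>", "<eos>", "hi", "what", "is", "my", "name", "hello", "john"]
stoi = {w: i for i, w in enumerate(vocab)}


class Seq2Seq(nn.Module):
    def __init__(self, vocab_size, hidden=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, src, tgt_in):
        # The encoder "reads" the prompt into a state (a thought vector);
        # the decoder "writes" the reply starting from that state.
        _, state = self.encoder(self.embed(src))
        dec, _ = self.decoder(self.embed(tgt_in), state)
        return self.out(dec)


def reply(model, src, max_len=5):
    """Greedy decoding: feed each predicted token back in as the next input."""
    _, state = model.encoder(model.embed(src))
    tok = torch.tensor([[stoi["<pad>"]]])  # start-of-reply token
    words = []
    for _ in range(max_len):
        dec, state = model.decoder(model.embed(tok), state)
        tok = model.out(dec).argmax(-1)
        word = vocab[tok.item()]
        if word == "<eos>":
            break
        words.append(word)
    return words


def encode(ws):
    return torch.tensor([[stoi[w] for w in ws]])


src = encode(["hi", "what", "is", "my", "name"])
tgt = encode(["hello", "john", "<eos>"])
tgt_in = torch.cat([torch.tensor([[stoi["<pad>"]]]), tgt[:, :-1]], dim=1)

model = Seq2Seq(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
for _ in range(300):  # overfit a single pair, purely to show the mechanics
    opt.zero_grad()
    loss = loss_fn(model(src, tgt_in).view(-1, len(vocab)), tgt.view(-1))
    loss.backward()
    opt.step()

print(reply(model, src))  # should converge to ['hello', 'john']
```

No response here was ever written by hand; the reply is whatever the parameters absorbed from the training pairs, which is exactly why the researchers could not predict what their far larger model would say.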

Here are some excerpts from the paper that I found most interesting:

Simple Q&A

Human: My name is John. What is my name?
Machine: John.
Human: Are you a leader or a follower?
Machine: I’m a leader.
Human: Are you a follower or a leader?
Machine: I’m a leader.

Philosophical Q&A

Human: What is the purpose of life?
Machine: To serve the greater good.
Human: What is the purpose of living?
Machine: To live forever.
Human: What is the purpose of existence?
Machine: To find out what happens when we get to the planet Earth.
Human: Where are you now?
Machine: I’m in the middle of nowhere.
Human: What is the purpose of dying?
Machine: To have a life.
Human: What is the purpose of being intelligent?
Machine: To find out what it is.
Human: What is the purpose of emotions?
Machine: I don't know.

Morality Q&A

Human: What is moral?
Machine: What empowered humanity, what intellectual the essence is.
Human: What is immoral?
Machine: The fact that you have a child.
Human: What is morality?
Machine: What is altruism?
Human: What is the definition of altruism?
Machine: If you don’t believe in God, then you don’t know.
Human: Tell me the definition of morality.
Machine: I don’t have ethics.
Human: Look, I need help, I need to know more about morality.
Machine: I don’t know what ethics is.
Human: Seriously, what is morality?
Machine: What is the definition of living?
Human: Why living has anything to do with morality?
Machine: You’re not a cop.
Human: Okay, I need to know how should I behave morally.
Machine: I don’t know how to tell you.
Human: What are the things that I do to be immoral?
Machine: I don’t know.
Human: Give me some examples of moral actions.
Machine: I’m not a moralist.
Human: Alright, morality?
Machine: Integrity.
Human: What is integrity?
Machine: I’m sorry, I don’t know what else to say.
Human: Be moral!
Machine: Be a man!
Human: I really like our discussion on morality and ethics.
Machine: And how I’m not in the mood for a philosophical debate.
Human: What do you like to talk about?
Machine: Nothing.

It is far from a perfect system, yet there is an intriguing quality to the conversation. When the machine was stumped, it would admit to not knowing, but it would also attempt to work the question out. It sounded most human, however, when it became frustrated.

Traditional Bot Q&A

Now compare this to a traditional rule-based bot with preprogrammed responses. When given an unexpected question, it falls back on answers chosen at random.

Human: Is sky blue or black?
Machine: I am 200 years old.
Human: What is the color of a yellow car?
Machine: Brown.

Some might argue this conversation is still better than some human conversations.
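For contrast, the older approach really can be sketched in a few lines. The patterns and canned replies below are hypothetical, but the fall-back-to-random behavior is what produces non sequiturs like the ones above.

```python
import random

RULES = {
    "your name": "I am a bot.",
    "how old": "I am 200 years old.",
}
FALLBACKS = ["Brown.", "I am 200 years old.", "Interesting. Tell me more."]


def reply(question: str) -> str:
    q = question.lower()
    for pattern, answer in RULES.items():
        if pattern in q:
            return answer  # the engineers anticipated this question
    return random.choice(FALLBACKS)  # they did not anticipate this one


print(reply("How old are you?"))       # scripted
print(reply("Is sky blue or black?"))  # random, hence the non sequitur
```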

Do Androids Dream of Electric Sheep?

With imagery, Google is pioneering what it calls "inceptionism." Much as we "dream" in nondescript thoughts and pictures, a Google bot can "dream" imagery by meditating on its existing "memories." This was Google's machine intelligence test: could it create something new from something old?
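The published technique amounts to gradient ascent on an image: instead of adjusting the network's weights to fit an image, you freeze the weights and adjust the image until it excites a chosen layer. Below is a rough PyTorch sketch; the layer, starting image, and step count are my own illustrative guesses, not Google's recipe (it requires torchvision and downloads pretrained GoogLeNet weights).

```python
import torch
from torchvision import models

model = models.googlenet(weights="DEFAULT").eval()
for p in model.parameters():
    p.requires_grad_(False)  # the "memories" (weights) stay frozen

activations = {}
model.inception4c.register_forward_hook(
    lambda module, inputs, output: activations.update(feat=output)
)  # inception4c is an arbitrary mid-level layer choice

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise
opt = torch.optim.Adam([image], lr=0.05)

for step in range(100):
    opt.zero_grad()
    model(image)
    loss = -activations["feat"].norm()  # ascend: make the layer fire harder
    loss.backward()
    opt.step()
    with torch.no_grad():
        image.clamp_(0, 1)  # keep pixels in a displayable range
# "image" now shows whatever patterns this layer has learned to detect.
```

Google's actual images typically start from an existing photograph and add smoothing and multi-scale tricks, which is why they look like dreams rather than static.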

Do androids dream?

This is the question bounty hunter Rick Deckard asks himself in Do Androids Dream of Electric Sheep? (the basis for the film Blade Runner), a philosophical examination of which qualities and traits qualify one as human. It is an important matter to ponder, now that we can see their dreams:

(Images courtesy of Google)

If machines are thinking like human beings, then shouldn't we also?

More human than human.
— Rob Zombie

If machines become more human, let it be because they climbed up the ladder, not because we climbed down. Let it not be like two ships passing at sea, one forging ahead while the other sails back home; let it be a worthy race to see who can outpace the other. We have the head start; let us take advantage of it. If not, I wonder whether machines will follow Isaac Asimov's "Three Laws of Robotics," as outlined in I, Robot:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Holding the Mirror to Ourselves

What I personally find fascinating is how researchers define humanity and intelligence. It's not as simple as awareness or self-learning; the machine must be able to delve into deep intellectual and philosophical quandaries. Otherwise, how do we know these are not just canned answers? The same rules apply to us: how do we know we are conscious unless we challenge ourselves with new thoughts and introspection? Why are we not asking ourselves these same questions about life, existence, and morality? If our consciousness were graded the way machines' is, how would we fare?

Rather than thinking harder, we look for shortcuts; we default to our standard programming. If we leave thinking to others, we lapse into a mechanical, material existence. It is no wonder, then, that psychological tests with rats in mazes are accurate predictors of our own behavior.

When we are distracted, the common question is: "What are you thinking about?" Though we use the term "thinking," it would be more accurate to say worrying. Worrying is cyclical, a repetition of the same few thoughts where nothing new arises. We save our brains for school, for work, or for when we feel insecure, but what about thinking for personal growth? Thinking just to think? No sending material wants to the universe. No focusing on intent or goals. No meditating on the belly button and the breath, nor its inverse (manic thinking and automated fretting), but rather thinking freely. Thinking productively, helpfully, and healthily. An option that is often overlooked: the philosophical approach. We associate freedom with physical liberties, but what about freedom of the mind? To retreat within ourselves and think, and to enjoy that process; to build on old thoughts and create new ones; to better understand ourselves and our world, better than we did not just prior to yesterday, but prior to now. And now. And now.

A common theme in dystopian science fiction novels is a future where we can no longer control our own thoughts: the loss of free will. The antagonist may come in the form of a government, a dictator, an alien, or even robots. But what if there is no great villain in this Brave New World? What if we willfully do this to ourselves, by distracting ourselves with physical pleasures and goods?

And if you gaze long into an abyss, the abyss also gazes into you.
— Friedrich Nietzsche

If a machine can dream, if a machine can reason, and if a machine can communicate without a program, is it still a machine? And what does that make us? If critical thinking makes machines more human, then what are we if we do not think critically? We are at a stage in history where our birth does not reflect who we are. The way we were born to look, the class we were born to, the country we were born in, the expectations of our sexuality, our genetic life expectancy, even the sex we were born with are not our ultimate identity. Then in the same way, we should not believe that being born a sentient human being is the same as remaining one. We can choose to better ourselves ... and we can choose to do the opposite.

A machine can become a sentient living being just as a human can become a machine: repetitive, predictable, shallow, unlearning, and unthoughtful. A machine can seek to become organic; we seek to become more inorganic. As we model machines after our best selves, we carelessly model ourselves after primitive machines. We hold a bar up to machines to see if they can become like us, a rational animal, but I posit: would we pass our own bar? We are fortunate that no greater lifeform is testing our cognition (unless one day we are tested by machines), but perhaps we should not hold a double standard.

For ourselves, cognition is something we already have, but like machines, perhaps it should be something we must prove, something we must achieve. We assume much about ourselves as a matter of birthright, but how we were born does not matter. When Aristotle said "man is a rational animal," he did not mean that is who we are; he meant that is who we can become. What we do with our potential is what matters.

Man is a rational animal. So at least we have been told. Throughout a long life I have searched diligently for evidence in favor of this statement. So far, I have not had the good fortune to come across it.
— Bertrand Russell

Final Thoughts

This research neither frightens nor inspires me. It is a reminder: to think is to exist; to think deeply is to recognize your own existence. If self-learning makes machines more human, then what are we if we don't keep growing? Machines are learning from the thoughts of humans past, from our art, from our history; why aren't we? As they think more, are we thinking less? Reacting rather than thinking? Giving planned answers and saying the same things rather than listening and adapting? Relying on political, religious, and in-group dogma (programming) to hand us rule-based thoughts? Making the same mistakes over and over and expecting new results? Machines can learn, but so can we. They are attempting to imitate us: a version of us that is fully engaged with living. Then let us become what machines assume us to be, what we assume ourselves to be, what we are supposed to be.

Useful Companions to This Article: