Artificial Intelligence


By Alex Beahm, George Mahon, Shannon Nelson, Ben Nolan, and Cody Willhelm




What is Artificial Intelligence?


    Artificial intelligence is the theory and development of computer systems able to perform tasks that would normally require human intelligence.


What is Intelligence?

    When attempting to create a computer that can “think” on a human level, one of the most important considerations is what human intelligence is. Unfortunately, that debate still rages on, and a single, formal definition does not yet exist.

    Intelligence might be one’s ability to learn about and from the environment. It could be the amount of knowledge stored in one’s brain and the speed at which it can be accessed. Or perhaps it is step-by-step problem solving, choosing the most efficient and effective solutions to a presented challenge. One thing that most researchers agree on is that there are different kinds of intelligence – analytic, linguistic, and emotional, for example.

    At present, A.I. cannot hope to imitate all types of intelligence at once, so different applications use different methods according to what they need. A search engine need not wonder about the reason for its search, only find items tagged with the requested keywords, and a speech-recognition A.I. does not need to consider the emotional impact that the words it receives might have on a human being.


Branches of A.I.

    There are many branches of artificial intelligence, and surely several more yet to be discovered. Listed here are some of the more common areas of study.

Epistemology

    Epistemology is the branch of philosophy concerned with the study of knowledge. It attempts to answer the question, “What distinguishes true, or adequate, knowledge from false, or inadequate, knowledge?” The study of knowledge leads to the studies of learning and teaching, which in turn allow artificial intelligence to advance even more.

Expert Systems

    An expert system is a compilation of expert-level knowledge run by a reasoning engine to solve problems. The knowledge base contains both factual and heuristic information and is often written in plain language so that it can be easily edited, though some rule sets are written in forms only computer scientists can understand. Rules take an if… then structure so that, given a piece of information, the system can infer other, missing information about the situation.

    The reasoning, or inference, engine draws conclusions by applying the rules, and it must be grounded in logic in order to succeed. It can use whichever kind of logic serves its purpose best – propositional, predicate of first or higher order, epistemic, temporal, fuzzy, or any other. Humans’ basic logic is propositional, which is expressed in syllogisms; an engine that uses propositional logic is called zeroth-order.

    An expert system can run in batch or conversational style. With batch, the system has all the information it needs right from the beginning. Conversational style, on the other hand, is necessary for more complex problems, where all of the required information cannot be gathered from the user at the start and the system must ask for it as it reasons.
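
    As a minimal sketch of the if… then idea (the rules and facts below are invented purely for illustration, not drawn from any real expert system), a forward-chaining engine repeatedly applies rules to the known facts until nothing new can be inferred:

        # Minimal forward-chaining sketch; rules and facts are illustrative only.
        # Each rule is (conditions, conclusion): "if all conditions hold, conclude".
        rules = [
            ({"has_fever", "has_rash"}, "suspect_measles"),
            ({"suspect_measles"}, "recommend_specialist"),
        ]

        def infer(facts, rules):
            """Apply rules until no new facts can be derived (forward chaining)."""
            facts = set(facts)
            changed = True
            while changed:
                changed = False
                for conditions, conclusion in rules:
                    if conditions <= facts and conclusion not in facts:
                        facts.add(conclusion)
                        changed = True
            return facts

        print(infer({"has_fever", "has_rash"}, rules))
        # -> {'has_fever', 'has_rash', 'suspect_measles', 'recommend_specialist'}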

Default Reasoning

    It is often the case in reasoning that assumptions must be made from incompletely specified information. As more information is gathered, however, those assumptions may have to be retracted. A popular example is this: when told of a bird, you assume it can fly. When told that it is a penguin, however, you retract that assumption in favor of a new one stating that the bird cannot fly. Default logic is a non-monotonic logic that can express facts in the form “by default, this is true,” and it allows for cases where inferences must be revised to agree with new information.
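
    The bird example can be sketched in a few lines of code (the predicates here are made up for illustration): a default rule applies unless an exception is known, and the conclusion is withdrawn once new facts arrive.

        # Toy illustration of default reasoning: "birds fly by default,
        # unless they are known to be an exception, such as a penguin."
        def can_fly(facts):
            if "bird" in facts and "penguin" not in facts:
                return True   # the default assumption holds
            return False      # an exception overrides the default

        facts = {"bird"}
        print(can_fly(facts))   # True  (default conclusion)
        facts.add("penguin")
        print(can_fly(facts))   # False (new information retracts the conclusion)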

Vision Systems

    The field of vision systems, or computer vision, works to provide computers with the ability to process and understand visual data. An enormous amount of data passes through human eyes every second and is translated into sight – information and understanding about what is seen. A.I. equipped with computer vision has accomplished such things as vehicles that can safely navigate highways, computers that can interpret facial expressions, and a surveillance system that spots swimmers who are drowning. Just as a person navigating on a foggy day does, A.I. can fill in data from incomplete images based on knowledge of the environment.

Pattern Recognition

    The ability to recognize patterns is at the core of human learning. Sensations are received and organized, patterns are recognized, and those patterns become the examples and rules used to interpret new input.
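
    A tiny sketch of the idea (the points and labels below are made up): a nearest-neighbor classifier “recognizes” a new sample by matching it against previously seen, labeled examples.

        # Nearest-neighbor sketch: label a new point by its closest known example.
        import math

        examples = [
            ((1.0, 1.0), "circle"),
            ((5.0, 5.0), "square"),
            ((1.2, 0.8), "circle"),
        ]

        def classify(point, examples):
            _, label = min(examples, key=lambda ex: math.dist(point, ex[0]))
            return label

        print(classify((0.9, 1.1), examples))  # -> "circle"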

    The computer CogVis, developed at the University of Leeds in Yorkshire, UK, displayed its remarkable learning abilities in 2004 at a British Computer Society event, where it learned the rules of Rock, Paper, Scissors simply by observing. Derek Magee of the University of Leeds says, “A system that can observe events in an unknown scenario, learn and participate just as a child would is almost the Holy Grail of A.I.”


Applications of A.I.

    Artificial intelligence has been integrated into a wide range of fields – anywhere a computer is to be found, AI probably isn’t far away.

Computer Science


    Computer scientists have developed many tools and methods in their quest for artificial intelligence. Intelligent storage management systems and dynamic, object-oriented, and symbolic programming are all side effects of the AI quest, along with everyday applications such as graphical user interfaces and the computer mouse.

Finance

    Intelligent software applications screen and analyze financial data, detect and flag unusual charges and claims, and can help predict stock market trends. Some companies employ automated online assistants to help customers with such services as checking their balance or signing up for a new card.

Medicine

    A.I. can organize schedules, make staff rotations, and provide medical information to assist in diagnosis and treatment. Intelligent systems can scan digital images for abnormalities and point them out, helping doctors find and begin treating problems quickly.

Heavy Industry

    A common use for artificial intelligence is industrial robots that perform tasks too repetitive or dangerous for humans. Entire manufacturing processes can be fully automated – computer chip or machine tool production, for example. Robots also handle radioactive materials and maneuver unmanned spacecraft.


Objections to A.I.

    Naturally, as with any controversial field, artificial intelligence has its opponents and objectors. John Searle posed his famous Chinese Room argument, asking whether a machine can truly understand its task even though it carries out its instructions perfectly. The argument is as follows: A person who knows only English is inside a room with instructions on how to manipulate Chinese characters. From outside the room, a note in Chinese is slipped under the door. The English speaker looks at the note, consults the manual, and writes an appropriate response back, giving the impression that they speak fluent Chinese. Yet the person inside understands no Chinese at all; likewise, Searle argues, a computer that merely manipulates symbols according to rules does not truly understand them.

    Lady Lovelace also voiced a famous objection: artificial intelligence can have no original ideas. A computer receives input, manipulates it according to a set of rules, and gives the proper output, nothing more. If a computer cannot create, how can it truly think and grow? Stemming from this is the objection that computers only work in their specific domain. Feed a computer input that it has not been programmed to handle, and it can do nothing.

    Ethical and theological objections also stand against A.I. Will A.I. be granted rights, and if so, how many? What happens when artificial intelligence surpasses human intelligence? If robots decide to stop taking orders from humans because they are superior, could a war result? What if true “thinking” is the result not of proper programming but of a soul? Humans could never bestow a soul upon a machine, the objection goes, so artificial intelligence is impossible. Is A.I. against God’s will? All of these and more are questions that will only be answered when, or maybe if, we succeed.


History of Artificial Intelligence

A.I.’s Origins

    Artificial intelligence first came to life in stories and creative thinking. Several ancient Greek philosophers and scientists put forward thoughts and theories about a man-made intelligence. Folklore touching on the idea can be unearthed from even earlier, around 800 B.C.: in the ancient city of Napata, a statue of the great Amun, god of air, was constructed so that it could move its arms and speak to onlookers. People at that time believed the statue had some form of intelligence, while nowadays we know this not to be true.

Father of A.I.

    John McCarthy was a pioneer of computer science and is known as the father of Artificial Intelligence. McCarthy coined the term Artificial Intelligence, and his work helped define the field. As he put it, “The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”


    McCarthy was born in Boston, Massachusetts in 1927. He attended the California Institute of Technology and received his Ph.D. in mathematics from Princeton University in 1951. In 1962 he became a full-time professor at Stanford University, where he was credited with developing Lisp, one of the oldest programming languages still in use, based on the lambda calculus and used primarily in artificial intelligence research. He also proposed and popularized computer time-sharing, which made distributed computing much more efficient.

    One of McCarthy’s most notable contributions to A.I. was developing one of the first computer systems that could see 3D blocks through a camera and stack them in different arrangements. He also organized the first research conference on artificial intelligence, which firmly established A.I. as a separate field of computer science. McCarthy’s list of awards includes the Turing Award, for his work in artificial intelligence, and the United States National Medal of Science; he is also a Fellow of the Computer History Museum.

Turning Thought into Reality

    The idea of A.I. has been around for hundreds of years, but it wasn’t until computers became practical, around the 1950s, that artificial intelligence started to become a reality. Before then, it was nearly unthinkable to consider creating some form of intelligent being with machines that cost millions of dollars, had less computing power than today’s average wristwatch, and were big enough to fill entire rooms – a fair description of some of the world’s first electronic computers, which appeared around 1941.

    In 1950, the first leap toward A.I. came from an accomplished scientist, Norbert Wiener, who created the feedback theory: the idea that a machine can adjust its behavior based on feedback from its environment. The most popular example of this is the modern thermostat – not exactly intelligent, yet. Then, in late 1955, Allen Newell and Herbert Simon developed what is considered by many to be the first A.I. program, the Logic Theorist. Representing each problem as a tree, the program would attempt to solve it by selecting the branch most likely to lead to a correct conclusion. This program was a crucial step in the development of the A.I. field.
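
    The branch-selection idea behind programs like the Logic Theorist can be sketched as a best-first search (this is only a rough, modern illustration, not the original program): always expand the node whose heuristic score looks most promising.

        # Rough sketch of heuristic (best-first) tree search; toy problem only.
        import heapq

        def best_first_search(start, goal, successors, heuristic):
            """Expand the most promising node first until the goal is reached."""
            frontier = [(heuristic(start), start)]
            visited = set()
            while frontier:
                _, node = heapq.heappop(frontier)
                if node == goal:
                    return node
                if node in visited:
                    continue
                visited.add(node)
                for child in successors(node):
                    heapq.heappush(frontier, (heuristic(child), child))
            return None

        # Toy usage: walk the integers toward a goal value.
        found = best_first_search(
            0, 7,
            successors=lambda n: [n + 1, n + 2],
            heuristic=lambda n: abs(7 - n),
        )
        print(found)  # -> 7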

History at a Glance

    The history of artificial intelligence, broken down by decade as the technology advanced.

1950s – 1960s

    As stated above, Norbert Wiener created the feedback theory, and Newell and Simon developed the Logic Theorist. In addition, the “Father of A.I.”, John McCarthy, called a month-long conference at Dartmouth College in New Hampshire in 1956 to brainstorm with other scientists about the future of the artificial intelligence field, taking A.I. a couple of strides forward. A year later, Newell and Simon created another program, the General Problem Solver, which was capable of solving common-sense problems. Then, in 1958, John McCarthy developed LISP, the programming language most associated with artificial intelligence, while he was at the Massachusetts Institute of Technology (MIT). In the 1960s, after a lull in development, Marvin Minsky and other researchers at MIT showed that A.I. programs could solve spatial and logic problems.

1970s – 1980s

    The 1970s brought image recognition and expert systems – which work out the probabilities of certain solutions under set conditions – to the A.I. field. While the 70s did not bring much to the table, the 1980s were crucial in bringing artificial intelligence out of the labs and into homes around the world. Before this time, A.I. was cursed with the notion that it had almost no practical public use, which hurt funding and kept large companies from supporting its development. This changed when the first annual National Conference of the American Association for Artificial Intelligence (AAAI) was held at Stanford. The conference brought the best scientists in the field together and helped set the future direction of A.I. In the following years of the 1980s, artificial intelligence saw the creation of neural networks that could learn on their own, and expert systems – which solve problems within a specific domain of knowledge using a set of rules – were being implemented by companies around the world.

1990s – 2000s

    When the Gulf War, or Operation Desert Storm, began in 1990, it prompted the US military to try new tactics that would put soldiers further from the battlefield and technology at the forefront. Everything from packing transport vehicles to guiding cruise missiles used some form of artificial intelligence: expert systems, visual recognition, and variable analysis. After the war was over, A.I. was put to use in “toy pets” that would speak or perform an action on voice command. Video games have also adopted artificial intelligence to create harder and more lifelike enemies.



AI Projects

    Computers get more capable every day. They have been around since the early 1900s and have grown enormously since then, from simple calculators to supercomputers that combine many computer systems into one. Processors have gotten faster, graphics cards have become more powerful, and memory can store many times as much as earlier machines could.

    The world of technology changed drastically in the 1940s, when scientists began experimenting with artificial intelligence and the human mind. Their goal was to compare how computers performed calculations with how human minds do. A good way to do this was to get a machine to calculate and analyze the game of chess, so scientists programmed machines to play chess and, eventually, to challenge and beat human players.

    IBM had pursued the idea of computer chess since the 1950s. At Carnegie Mellon University, a graduate student named Feng-hsiung Hsu began work on an individual project he called ChipTest, a chess-playing machine, and one of his classmates, Murray Campbell, joined him on the project. After four years of work, IBM hired both of them to continue their research at the IBM Research Center, where computer scientists Joe Hoane, Jerry Brody, and C.J. Tan helped them carry the project forward. The team then decided to call the project Deep Blue.

    In 1996, while Deep Blue was still in development, IBM challenged world chess champion Garry Kasparov to play the prototype, and the human champion won the match. Deep Blue was then remodeled and reconfigured, and IBM challenged the champion to a rematch with the newer version. Millions watched the outcome as Deep Blue and Kasparov met at the Equitable Center in New York. No one knew whether Deep Blue could beat the human champion, but they did know it could calculate 200 million possible chess positions per second.

    All 500 tickets sold out at the theater for each of the match’s six games, and the match made a big impression on the world; reportedly, more than 3 billion viewers followed the televised event. Kasparov won the first game, and Deep Blue won the second, but the next three games ended in draws with neither side gaining a victory. In the final game, Deep Blue took a decisive victory against the champion.

    The result of this world chess match changed people’s perspective on computers and technology. It influenced many other technology companies and showed how computers could handle complex calculations, and researchers took that capability and put it to more practical use. Deep Blue’s architecture inspired applications in financial modeling, searching for patterns in large database systems, and helping researchers develop new tools to create new drugs.

    Deep Blue was then retired and placed in the Smithsonian Museum in Washington, DC. IBM did not stop at the Deep Blue project; it went on to bigger and better things, building new kinds of massive computers for even more complicated computations, such as IBM Watson.

    IBM Watson was a new project that IBM took on after retiring Deep Blue. Watson was built for IBM’s DeepQA project, whose principal investigator, David Ferrucci, named the computer after the first president of IBM, Thomas J. Watson. To challenge the human brain with this computer, IBM gave it the task of playing the game show Jeopardy.

    Watson was programmed to understand the definitions of words, puns, inferred hints, and double meanings. It could respond extremely quickly, and it could collect huge amounts of information and connect the pieces with logical reasoning. IBM knew that creating Watson would be a big challenge: instead of using keyword searches, the system had to take a question and analyze vast amounts of information to find the best answer. The IBM team built this capability on three abilities Watson had to have – natural language processing, hypothesis generation, and evidence-based learning – in order to answer questions and determine the best answer for each one.
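
    The general “generate hypotheses, then score them against evidence” idea can be sketched roughly as follows. This is only a toy illustration of the concept, not IBM’s actual DeepQA pipeline, and the knowledge base and scoring rule are invented for the example.

        # Toy question answering: generate candidate answers, score them
        # against supporting text, and return the most confident one.
        def answer(question, knowledge_base):
            words = set(question.lower().split())

            # Hypothesis generation: any fact sharing a word with the question
            # contributes its answer as a candidate (crude heuristic).
            candidates = {fact["answer"] for fact in knowledge_base
                          if words & set(fact["clue"].lower().split())}

            # Evidence-based scoring: how much supporting text overlaps the question.
            def score(candidate):
                support = " ".join(f["clue"] for f in knowledge_base
                                   if f["answer"] == candidate)
                return len(words & set(support.lower().split()))

            return max(candidates, key=score) if candidates else None

        kb = [
            {"clue": "first president of IBM", "answer": "Thomas J. Watson"},
            {"clue": "chess computer retired to the Smithsonian", "answer": "Deep Blue"},
        ]
        print(answer("Who was the first president of IBM?", kb))  # -> Thomas J. Watson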

    In February 2011, Watson was challenged to appear on the game show Jeopardy to test its capabilities. IBM picked Jeopardy because the show is an extraordinary challenge, given its wide range of topics and the pace at which the game is played. Jeopardy clues are full of language complexities that computers have a hard time understanding, whereas humans process them on a daily basis.

    When players are asked a question on the show, they have to use split-second knowledge and analysis to find the best answer. They learn to do this through education and by watching the show, training themselves to think at an extreme pace. Watson had none of those human traits, so the scientists had to program these abilities into the computer so it could make decisions in real time.

    The show has three contestants, and when a question is asked, each must respond with an answer they have worked out in a matter of seconds. Watson had to be programmed to do the same, so it was built to go through 200 million files of information in a span of seconds.

    After IBM Watson’s Jeopardy challenge, the world saw that computers could be programmed to reason through problems in ways that resemble human thinking. Watson beat two past champions and took the one-million-dollar first prize. Impressed with the way Watson performed, scientists are now using Watson’s technology to work on financial and healthcare problems, bringing better tools to those fields.


Artificial Intelligence’s Future

    Now that the past and current uses of A.I. have been covered, it is time to venture into the great wide unknown: the future. But is it really that unknown? Surely we have some idea of where we are going. As a roboticist at a South by Southwest Interactive panel explained, progress in A.I. will be made in terms similar to past improvements: hard-won but incremental advances in narrow fields. Why would the improvements come in such small steps? There is one answer, unfortunately: money. A PR2 robot, for example, costs $400,000, and even replicating IBM’s Jeopardy-winning supercomputer Watson would cost upwards of three million dollars. But let’s not get too pessimistic about advancement; instead, consider what should happen in the future. The number one thought that comes to mind when dealing with future A.I. comes from the movies, usually involving robots or evil computers taking over the world.

    The Terminator, 2001: A Space Odyssey, I, Robot, and WarGames all depict a world where A.I. is so advanced that it can think essentially as abstractly as a human but eventually decides that humanity is imperfect and therefore should be destroyed or enslaved. Fortunately for us, we should not expect any A.I. takeover in the near future, mainly for one reason: actual robots are not very good at moving around by themselves, at least not in a human way.

    The main development path for A.I., though, is for it to evolve and understand things the way a human does. Look at Cleverbot, for example; since launching in 1997, it has continuously been developing intelligence, with more than 65 million conversations passing through its artificial intelligence algorithm, and by its nature it will continue to gather information and advance. But then there is the problem that has plagued A.I. since its beginning: abstract human thought. The use of feelings, the use of different tones to give different meanings to the same set of words, and even facial recognition are the problems facing A.I. researchers. Just look at Siri, the popular voice-recognition software in the iPhone. While it is fairly functional, it is nowhere near perfect; with voice inflection, tone, and even the accents of those speaking to it, Siri can’t quite get things right every time. Getting A.I. to understand and operate like the human brain is the main end goal, and in the end the study of A.I. will meet biology in explaining how the human brain actually operates.

    Just imagine the possibilities of such advanced A.I. On the entertainment side there would be more technically advanced video games, but there are far wider possibilities. Look no further than our very own military: imagine a world where our soldiers don’t have to take as much of a hit on the front lines, where flying drones and certain kinds of mobile tactical robots can be sent in to limit our casualties. The future holds many things for us to improve on, and if time has told us anything, we may never be able to truly predict what lies ahead; we can only rely on our past accomplishments to show us the way. If A.I.’s troubled past is any guide, there will be problems such as the A.I. winters along the way. But if everything goes well, we should be able to stand on the shoulders of giants and improve our technology and capabilities for society as a whole.


Works Cited

"Applications of Artificial Intelligence." Buzzle. Web.

Curtis, Hank, and Kim Martin. "Computers and Artificial Intelligence." Http://peace.saumag.edu/faculty/kardas/Courses/CS/Student%20Pages/AI/ComputersAI2000.html. 24 Feb. 2000. Web. 10 Oct. 2012.

"Deep Blue." IBM.com. IBM. Web. 11 Oct. 2012.

Hertling, William. "The Future of Robotics and Artificial Intelligence Is Open." IEEE Spectrum. 05 Apr. 2012. Web. 11 Oct. 2012.

"The History of Artificial Intelligence." Oracle ThinkQuest. Web. 10 Oct. 2012.

Humphrys, Mark. "The Future of Artificial Intelligence." Robot Books. Web. 11 Oct. 2012.

"IBM Watson: Ushering in a New Era of Computing." IBM.com. IBM. Web. 10 Oct. 2012.

M, Anthony. "The History of Artificial Intelligence." Yahoo Voices. 02 June 2008. Web. 10 Oct. 2012.

McCartney, John. "Objections to Artificial Intelligence." Artificial Intelligence. 19 Oct. 2010. Web. 10 Oct. 2012.

"Stanford's John McCarthy, Seminal Figure of Artificial Intelligence, Dies at 84." Stanford. Stanford Report, 25 Oct. 2011. Web. 11 Oct. 2012.

"A Theological Objection to Artificial Intelligence." Http://www.mikestratton.net/2011/06/a-theological-objection-to-artificial-intelligence/. Web. 08 Oct. 2012.

"A Theological Objection to Artificial Intelligence." Http://www.mikestratton.net/2011/06/a-theological-objection-to-artificial-intelligence/. Web. 08 Oct. 2012.

Waltz, David L. "Artificial Intelligence: Realizing the Ultimate Promises of Computing." Http://homes.cs.washington.edu/~lazowska/cra/ai.html. Web.

"Watson." Wikipedia.com. Wikipedia. Web. 12 Oct. 2012.

"WHAT IS ARTIFICIAL INTELLIGENCE?" Computer Science Department Stanford University, 12 Nov. 2007. Web. 08 Oct. 2012. 
