
Discuss the practicality of the Chinese room and the Turing test. How have the Turing test and the Chinese room problem contributed to AI? How can they be improved to make them more practical and applicable?

Alan Turing pioneered artificial intelligence during the 1940s and 1950s and proposed the Turing test, a model for measuring intelligence in machines. Turing's test holds that a computer is deemed to have artificial intelligence if it can mimic human responses under specific conditions. In the 1980s, by contrast, the philosopher John Searle put forward the Chinese Room argument, which disputed the validity of the Turing test by claiming that a program cannot give a computer a "mind", "understanding", or "consciousness", regardless of how intelligently or human-like the program may make the computer behave.
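Searle's point can be made concrete with a toy sketch in Python (the rulebook entries below are invented for the example): however fluent its replies appear, the program only matches and emits symbols, understanding none of them.

# Toy illustration of Searle's argument: a program that answers
# Chinese questions purely by syntactic lookup, with no grasp of
# meaning. The rulebook entries are invented for this example.

RULEBOOK = {
    "你好吗?": "我很好。",          # "How are you?" -> "I am fine."
    "你叫什么名字?": "我叫房间。",    # "What is your name?" -> "My name is Room."
}

def chinese_room(symbols: str) -> str:
    """Match the incoming symbols against the rulebook and emit the
    prescribed reply; nothing here 'understands' Chinese."""
    return RULEBOOK.get(symbols, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗?"))  # fluent-looking output, zero understanding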
The Chinese Room and the Turing test are practical because of their usefulness to science: both have provided the field of AI with a strong and useful vision. The Chinese Room argument is a classic thought experiment on the question of AI, and the continued relevance of the Turing test indicates that it will remain a goal for the field for many years to come and a necessary marker for tracking the progress of AI as a whole. Both ideas are still widely discussed, and they have generated a large amount of productive philosophical debate, doing us the service of advancing a variety of different theories.
In the field of AI, expectations always seem to outpace reality. Without a vision of what AI could achieve, the field itself might never have formed, or might simply have remained a branch of mathematics or philosophy. Both theories have proven practical for the field of AI, but their applicability remains complex.
References
https://en.wikipedia.org/wiki/Chinese_room

Does a computer think? Are computers excellent chess players? If a computer consistently wins chess
games with a group of the best chess players in the world, how convincing is it that the computers are
actually intelligent and thinking? What will convince you that a computer is actually intelligent?

The complexities of the mind mirror the challenges of Artificial Intelligence. The
inevitable question has been asked for decades: Does a computer
think? Really think? According to Dictionary.com, thinking is defined as having a
conscious mind, capable to some extent of reasoning, remembering experiences, and
making rational decisions. A computer may mirror some of these attributes, but
ultimately it is merely a superficial imitation of human intelligence. It
has been designed by humans to obey certain commands, and then it has been
provided with programs composed of those commands. Because of this, the
computer has to obey those commands, but without any idea of what is happening.
Therefore, a computer does not really think, as it possesses no real intelligence of its
own. It merely executes what it is programmed to do.
In the realm of computer chess, the best chess programs have proven impossible for
even the grandmasters of chess to beat, because a computer possesses no intuition at
all: it analyzes the game using brute force, inspecting the pieces currently on the
board, calculating all options, and then making its move, and unlike humans it does
not make careless errors. These capabilities make it an excellent chess player, but
the machine's way of thinking is fundamentally un-human. In contrast, human chess
players learn by spending years studying the world's best opening moves and
endgames; they play thousands of games, slowly amassing a capacious, in-brain
library of which strategies triumphed and which flopped. They analyze their
opponents' strengths and weaknesses, as well as their moods. When they look at the
board, that knowledge manifests as intuition: a eureka moment when they suddenly
spy the best possible move.
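To make the brute-force idea concrete, here is a minimal minimax sketch in Python over the toy game of Nim rather than chess (the game choice, function names, and scoring are assumptions for the example; real chess engines add alpha-beta pruning and elaborate evaluation functions on top of such a search):

# Minimal minimax sketch over a toy game (Nim): the player to move
# removes 1-3 stones; whoever takes the last stone wins. Illustrative
# only; real chess engines refine this idea in many ways.

def minimax(stones, maximizing):
    """Return the best achievable score (+1 win, -1 loss) for the
    maximizing player by exhaustively exploring every line of play."""
    if stones == 0:
        # The previous player took the last stone and won.
        return -1 if maximizing else 1
    scores = []
    for take in (1, 2, 3):
        if take <= stones:
            scores.append(minimax(stones - take, not maximizing))
    return max(scores) if maximizing else min(scores)

def best_move(stones):
    """Brute force: score every legal move and pick the best one."""
    moves = [t for t in (1, 2, 3) if t <= stones]
    return max(moves, key=lambda t: minimax(stones - t, False))

print(best_move(10))  # 2: leaves a multiple of 4, a losing position for the opponent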
While some advances have been made in AI, a truly intelligent computer currently
remains in the realm of speculation. Though researchers have continually projected
that intelligent computers are imminent, progress in AI has been limited. Computers
with intentionality and self-consciousness, with fully human reasoning skills or the
ability to be in relationship, exist only in the realm of dreams and desires, a realm
explored in fiction and fantasy. It is therefore my thought that computers are not
innately intelligent but designed to emulate the perceived meaning of intelligence.
I would only be convinced that a computer is actually intelligent when it has
emotions, can reason, and breaks rules as well as makes them.
References
http://web.media.mit.edu/~minsky/papers/ComputersCantThink.txt
https://jakubmarian.com/will-computers-ever-be-smarter-than-humans/
http://smarterthanyouthink.net/excerpt/
http://www.encyclopedia.com/environment/encyclopedias-almanacs-transcripts-and-maps/artificial-intelligence
http://www.thekeyboard.org.uk/computers%20become%20self%20aware.htm


There is a claim that the Depth First Iterative Deepening Search (DFIDS) algorithm is very wasteful. Is
there any truth to this claim? Justify your argument. Is it possible to prevent the waste? If so, is it always
worth the effort? Which other search algorithm would you consider more efficient and effective than
DFIDS?

Depth-First Iterative Deepening (DFID) search combines the best features of
breadth-first search and depth-first search. DFID first performs a depth-first search
to depth one, then starts over, executing a complete depth-first search to depth two,
and continues to run depth-first searches to successively greater depths until a
solution is found.
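As a rough illustration, here is a minimal DFID sketch in Python (the example tree, node names, and depth cap are invented for the illustration):

# Minimal sketch of Depth-First Iterative Deepening (DFID).
# The tree below and the goal node are invented for illustration.

TREE = {                      # child lists for a small example tree
    "A": ["B", "C"],
    "B": ["D", "E"],
    "C": ["F", "G"],
    "D": [], "E": [], "F": [], "G": [],
}

def depth_limited_search(node, goal, limit):
    """Plain depth-first search that refuses to descend below `limit`."""
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for child in TREE[node]:
        path = depth_limited_search(child, goal, limit - 1)
        if path is not None:
            return [node] + path
    return None

def dfid(start, goal, max_depth=20):
    """Run depth-limited searches at depths 0, 1, 2, ... until the goal
    is found; the first success is necessarily a shallowest path."""
    for limit in range(max_depth + 1):
        path = depth_limited_search(start, goal, limit)
        if path is not None:
            return path
    return None

print(dfid("A", "F"))  # ['A', 'C', 'F']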
The DFID algorithm is considered very wasteful for the following reason:
it repeats work. Each iteration re-expands every node visited in all previous
iterations, so nodes near the root are generated over and over, and everything done
in the iterations prior to the one that finds the solution is, strictly speaking, extra
work. The other criticisms commonly raised here actually apply to plain depth-first
search rather than to DFID itself: depth-first search is not guaranteed to find an
optimal (shallowest) path, and it may explore the entire graph before finding the
target node. DFID, by contrast, always returns a shallowest solution and only
explores the whole graph when the distance between the start and goal nodes is the
maximum in the graph.
On the other hand, the DFID algorithm can find a shallow solution much faster than
depth-first search, since it will not go down a deep branch of the tree until it has
searched the shallow branches first. Its memory usage is the same as depth-first
search, while its time complexity is not much worse, since the breadth of the tree
increases exponentially with increasing depth and the final iteration therefore
dominates the total work. For this reason, DFID is worth the effort, as the small
calculation below suggests.
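A quick back-of-the-envelope calculation supports the "not much worse" claim: because the number of nodes grows exponentially with depth, repeating the shallow searches adds only a modest overhead (the branching factor and depth below are arbitrary example values):

# Compare total node expansions: one depth-d DFS vs. DFID running
# depth-limited searches at every depth 0..d. Example values only.

def nodes_to_depth(b, d):
    """Number of nodes in a complete b-ary tree down to depth d."""
    return (b ** (d + 1) - 1) // (b - 1)

b, d = 10, 5                      # arbitrary branching factor and depth
single_dfs = nodes_to_depth(b, d)
dfid_total = sum(nodes_to_depth(b, limit) for limit in range(d + 1))

print(single_dfs)                 # 111111
print(dfid_total)                 # 123456: only ~11% more work overall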
In a graph with cycles, however, breadth-first search may be much more efficient
than any DFID. The reason is that breadth-first search can check for duplicate
nodes, whereas a DFID cannot. Thus, the complexity of breadth-first search grows
only with the number of nodes at a given depth, while the complexity of DFID
depends on the number of paths of a given length.
References
http://wiki.cs.pdx.edu/cs543-spring2010/important_algorithms.html
http://intelligence.worldofcomputing.net/ai-search/depth-first-iterative-deepening.html#.WBb9GtUrLIU
https://en.wikipedia.org/wiki/Iterative_deepening_depth-first_search
http://stackoverflow.com/questions/7395992/iterative-deepening-vs-depth-first-search
