ARTIFICIAL INTELLIGENCE – PREPARED BY BALAMURUGAN


ARTIFICIAL INTELLIGENCE

What is Artificial Intelligence?


Artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated
by machines, in contrast to the natural intelligence displayed by humans and other animals.
Computer science defines AI research as the study of "intelligent agents": any device that perceives
its environment and takes actions that maximize its chance of successfully achieving its goals.

Statistical learning: Statistical learning theory is a framework for machine learning drawing from
the fields of statistics and functional analysis. Statistical learning theory deals with the problem of
finding a predictive function based on data. Statistical learning theory has led to successful
applications in fields such as computer vision, speech recognition, bioinformatics and baseball.
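The core idea, "finding a predictive function based on data", can be sketched with the simplest possible case: fitting a line to noisy observations by ordinary least squares. The numbers below are invented for illustration; this is a minimal pure-Python sketch, not any particular library's implementation.

```python
def fit_line(xs, ys):
    """Return slope a and intercept b of the least-squares line y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope = covariance(x, y) / variance(x); intercept makes the line
    # pass through the point of means.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Noisy observations of an underlying rule close to y = 2x + 1.
xs = [0, 1, 2, 3, 4]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]
a, b = fit_line(xs, ys)
print(round(a, 2), round(b, 2))  # → 1.99 1.04
```

The fitted (a, b) is the "predictive function": predictions for new inputs are simply a*x + b.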

Speech recognition: Speech recognition is the ability of a machine or program to identify words
and phrases in spoken language and convert them to a machine-readable format. Rudimentary
speech recognition software has a limited vocabulary of words and phrases, and it may only identify
these if they are spoken very clearly. More sophisticated software has the ability to accept natural
speech.

How does it work?
Speech recognition works through acoustic and language modelling algorithms. Acoustic
modelling represents the relationship between linguistic units of speech and audio signals; language
modelling matches sounds with word sequences to help distinguish between words that sound
similar. Often, hidden Markov models are used as well to recognize temporal patterns in speech to
improve accuracy within the system.
Speech recognition performance is measured by accuracy and speed. Accuracy is measured with
the word error rate (WER), which works at the word level and identifies inaccuracies in
transcription, although it cannot identify how the error occurred. Speed is measured with the
real-time factor. A variety of
factors can affect computer speech recognition performance, including pronunciation, accent, pitch,
volume and background noise.
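Both metrics are easy to make concrete. WER is conventionally computed as the word-level edit distance (substitutions, deletions and insertions) between the reference and the recognized transcript, divided by the reference length; the sample sentences below are invented for illustration.

```python
def word_error_rate(reference, hypothesis):
    """WER = edit distance between word sequences / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

def real_time_factor(processing_seconds, audio_seconds):
    """RTF below 1 means the recognizer runs faster than real time."""
    return processing_seconds / audio_seconds

print(word_error_rate("the cat sat on the mat", "the cat sat on mat"))
```

Here one reference word ("the") was dropped, so the WER is 1/6, roughly 0.167.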
It is important to note the terms speech recognition and voice recognition are sometimes used
interchangeably. However, the two terms mean different things. Speech recognition is used to
identify words in spoken language. Voice recognition is a biometric technology used to identify a
particular individual's voice or for speaker identification.

Natural language processing (NLP) is a branch of artificial intelligence that helps computers
understand, interpret and manipulate human language. NLP draws from many disciplines, including
computer science and computational linguistics, in its pursuit to fill the gap between human
communication and computer understanding.

Evolution of natural language processing


While natural language processing isn’t a new science, the technology is rapidly advancing thanks to
increased interest in human-to-machine communications, plus the availability of big data, powerful
computing and enhanced algorithms.
As a human, you may speak and write in English, Spanish or Chinese. But a computer’s native
language – known as machine code or machine language – is largely incomprehensible to most
people. At your device’s lowest levels, communication occurs not with words but through millions of
zeros and ones that produce logical actions.
Indeed, programmers used punch cards to communicate with the first computers 70 years ago. This
manual and arduous process was understood by a relatively small number of people. Now you can
say, “Alexa, I like this song,” and a device playing music in your home will lower the volume and reply,
“OK. Rating saved,” in a humanlike voice. Then it adapts its algorithm to play that song – and others
like it – the next time you listen to that music station.
Let’s take a closer look at that interaction. Your device activated when it heard you speak, understood
the unspoken intent in the comment, executed an action and provided feedback in a well-formed
English sentence, all in the space of about five seconds. The complete interaction was made possible
by NLP, along with other AI elements such as machine learning and deep learning.
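The "understood the intent" step can be caricatured with a rule-based matcher. The patterns and intent names below are invented purely for illustration; real assistants use statistical NLP models, not a handful of regular expressions.

```python
import re

# Hypothetical intent table: each pattern maps an utterance to an intent label.
INTENTS = [
    (re.compile(r"\bI (like|love) this song\b", re.I), "rate_track_up"),
    (re.compile(r"\bturn (the volume )?(up|down)\b", re.I), "adjust_volume"),
]

def detect_intent(utterance):
    """Map a transcribed utterance to an intent label, or None if no rule fires."""
    for pattern, intent in INTENTS:
        if pattern.search(utterance):
            return intent
    return None

print(detect_intent("Alexa, I like this song"))  # → rate_track_up
```

Once the intent label is known, the device can execute the matching action (save the rating, lower the volume) and generate a spoken confirmation.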

Symbolic learning theory


Symbolic learning theory attempts to explain how imagery works in performance enhancement. It
suggests that imagery develops and enhances a coding system that creates a mental blueprint of
what has to be done to complete an action.

Computer vision is a field of computer science that works on enabling computers to see, identify and
process images in the same way that human vision does, and then provide appropriate output. It is
like imparting human intelligence and instincts to a computer. In reality though, it is a difficult task
to enable computers to recognize images of different objects.

Computer vision is closely linked with artificial intelligence, as the computer must interpret what it
sees, and then perform appropriate analysis or act accordingly.
Computer vision's goal is not only to see, but also to process and provide useful results based on
the observation. For example, a computer could create a 3-D image from a 2-D image, such as those in
cars, and provide important data to the car and/or driver. For example, cars could be fitted with
computer vision which would be able to identify and distinguish objects on and around the road such
as traffic lights, pedestrians, traffic signs and so on, and act accordingly. The intelligent device could
provide inputs to the driver or even make the car stop if there is a sudden obstacle on the road.
When a human who is driving a car sees someone suddenly move into the path of the car, the driver
must react instantly. In a split second, human vision has completed a complex task, that of identifying
the object, processing data and deciding what to do. Computer vision's aim is to enable computers
to perform the same kind of tasks as humans with the same efficiency.
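One small, concrete piece of "identifying objects" is counting the distinct regions in a binary image, which can be done with a flood fill over connected pixels. The tiny 4x3 "image" below is made up for illustration; real systems work on camera frames with far more sophisticated models.

```python
def count_objects(grid):
    """Count connected regions of 1s (4-connectivity) in a binary image."""
    rows, cols = len(grid), len(grid[0])
    seen = set()

    def flood(r, c):
        # Iterative flood fill: mark every pixel reachable from (r, c).
        stack = [(r, c)]
        while stack:
            i, j = stack.pop()
            if (i, j) in seen or not (0 <= i < rows and 0 <= j < cols):
                continue
            if grid[i][j] == 0:
                continue
            seen.add((i, j))
            stack.extend([(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)])

    objects = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and (r, c) not in seen:
                objects += 1      # a pixel we haven't seen starts a new object
                flood(r, c)
    return objects

image = [
    [1, 1, 0, 0],
    [0, 1, 0, 1],
    [0, 0, 0, 1],
]
print(count_objects(image))  # → 2
```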

Image processing is a method to convert an image into digital form and perform operations on it,
in order to get an enhanced image or to extract useful information from it. It is a type of signal
processing in which the input is an image, such as a video frame or photograph, and the output may
be an image or characteristics associated with that image. An image processing system usually
treats images as two-dimensional signals and applies standard signal processing methods to them.

Purpose of Image processing
The purpose of image processing is divided into 5 groups.
They are:
1. Visualization - Observe objects that are not visible.
2. Image sharpening and restoration - Create a better image.
3. Image retrieval - Search for the image of interest.
4. Measurement of pattern - Measure various objects in an image.
5. Image recognition - Distinguish the objects in an image.
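Image sharpening (group 2 above) is commonly done by convolving the image with a small kernel that boosts each pixel relative to its neighbours. A minimal pure-Python sketch on a toy grayscale grid, with the usual 0-255 clamping:

```python
def convolve3x3(image, kernel):
    """Apply a 3x3 kernel to a grayscale image (list of row lists),
    clamping results to 0-255; edge pixels are left unchanged."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for ky in range(3):
                for kx in range(3):
                    acc += kernel[ky][kx] * image[y + ky - 1][x + kx - 1]
            out[y][x] = max(0, min(255, acc))
    return out

# Classic sharpening kernel: weight the centre pixel, subtract its neighbours.
SHARPEN = [[ 0, -1,  0],
           [-1,  5, -1],
           [ 0, -1,  0]]

image = [[10, 10, 10],
         [10, 80, 10],
         [10, 10, 10]]
print(convolve3x3(image, SHARPEN))
```

The bright centre pixel (80) is amplified to 5*80 - 4*10 = 360 and clamped to 255, exaggerating the contrast with its surroundings, which is exactly what sharpening does.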

Pattern recognition is the ability to detect arrangements of characteristics or data that yield
information about a given system or data set. In a technological context, a pattern might be recurring
sequences of data over time that can be used to predict trends, particular configurations of features
in images that identify objects, frequent combinations of words and phrases for natural language
processing (NLP), or particular clusters of behavior on a network that could indicate an attack --
among almost endless other possibilities.
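A toy version of "recurring sequences of data over time": count how often each length-n subsequence occurs in a stream and return the most frequent one. The stream below is invented for illustration.

```python
from collections import Counter

def most_common_ngram(sequence, n):
    """Return (subsequence, count) for the most frequent length-n subsequence."""
    grams = Counter(tuple(sequence[i:i + n]) for i in range(len(sequence) - n + 1))
    return grams.most_common(1)[0]

# The recurring pattern (1, 2, 3) is hidden in a noisy stream.
stream = [1, 2, 3, 9, 1, 2, 3, 7, 1, 2, 3]
print(most_common_ngram(stream, 3))  # → ((1, 2, 3), 3)
```

The same counting idea, scaled up and made statistical, underlies trend detection, n-gram language models, and anomaly detection on network traffic.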

Deep Learning: Deep learning is a machine learning technique that teaches computers to do what
comes naturally to humans: learn by example. Deep learning is a key technology behind driverless
cars, enabling them to recognize a stop sign, or to distinguish a pedestrian from a lamppost. It is the
key to voice control in consumer devices like phones, tablets, TVs, and hands-free speakers. Deep
learning is getting lots of attention lately and for good reason. It’s achieving results that were not
possible before.

In deep learning, a computer model learns to perform classification tasks directly from images, text,
or sound. Deep learning models can achieve state-of-the-art accuracy, sometimes exceeding human-
level performance. Models are trained by using a large set of labeled data and neural network
architectures that contain many layers.

How Does Deep Learning Work?


Most deep learning methods use neural network architectures, which is why deep learning models
are often referred to as deep neural networks. The term “deep” usually refers to the number of
hidden layers in the neural network. Traditional neural networks only contain 2-3 hidden layers, while
deep networks can have as many as 150. Deep learning models are trained by using large sets of
labeled data and neural network architectures that learn features directly from the data without the
need for manual feature extraction.
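The layered structure can be sketched as a plain forward pass: each layer is a weight matrix and bias vector, with a nonlinearity (ReLU here) between hidden layers. The weights below are random, purely to show the shape of the computation; a real network would learn them from labeled data.

```python
import random

def forward(x, layers):
    """Pass input vector x through a stack of (weights, biases) layers,
    applying ReLU after every layer except the last."""
    for i, (W, b) in enumerate(layers):
        # Matrix-vector product plus bias, one output per row of W.
        x = [sum(w * v for w, v in zip(row, x)) + bias for row, bias in zip(W, b)]
        if i < len(layers) - 1:
            x = [max(0.0, v) for v in x]  # hidden-layer nonlinearity
    return x

random.seed(0)
def layer(n_in, n_out):
    return ([[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)],
            [0.0] * n_out)

# 4 inputs -> three hidden layers -> 2 outputs. "Deep" just means many
# such layers stacked; deep networks may have over a hundred.
net = [layer(4, 8), layer(8, 8), layer(8, 8), layer(8, 2)]
print(forward([1.0, 0.5, -0.3, 0.8], net))
```

Training consists of adjusting every weight in `net` so the outputs match the labels, which is what lets the layers learn features directly from the data.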

What's the Difference Between Machine Learning and Deep Learning?
Deep learning is a specialized form of machine learning. A machine learning workflow starts with
relevant features being manually extracted from images. The features are then used to create a
model that categorizes the objects in the image. With a deep learning workflow, relevant features
are automatically extracted from images. In addition, deep learning performs “end-to-end learning”
– where a network is given raw data and a task to perform, such as classification, and it learns how
to do this automatically.

Another key difference is deep learning algorithms scale with data, whereas shallow learning
converges. Shallow learning refers to machine learning methods that plateau at a certain level of
performance when you add more examples and training data to the network.
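The "manually extracted features" step can be made concrete: before any model sees the data, each raw signal is summarised by a couple of hand-chosen statistics. The signals and feature choices below are invented for illustration; in a deep learning workflow the raw samples would be fed to the network directly instead.

```python
def extract_features(signal):
    """Hand-engineered features for a raw signal: mean and peak amplitude."""
    mean = sum(signal) / len(signal)
    peak = max(abs(v) for v in signal)
    return [mean, peak]

loud = [0.9, -0.8, 1.0, -0.95]
quiet = [0.05, -0.04, 0.06, -0.05]
# A classic ML model would be trained on these 2-number summaries,
# not on the raw samples themselves.
print(extract_features(loud), extract_features(quiet))
```

Designing such features well requires domain expertise; end-to-end deep learning removes that step by letting the network discover its own features.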

IBM Watson – An artificial intelligence computer

Watson is an IBM supercomputer that combines artificial intelligence (AI) and sophisticated analytical
software for optimal performance as a "question answering" machine. The supercomputer is named
for IBM's founder, Thomas J. Watson.
The Watson supercomputer processes at a rate of 80 teraflops (trillion floating point operations per
second). To replicate (or surpass) a high-functioning human's ability to answer questions, Watson
accesses 90 servers with a combined data store of over 200 million pages of information, which it
processes against six million logic rules. The system and its data are self-contained in a space that
could accommodate 10 refrigerators.
Watson's key components include:
 Apache Unstructured Information Management Architecture (UIMA) frameworks, infrastructure
and other elements required for the analysis of unstructured data.

 Apache's Hadoop, a free, Java-based programming framework that supports the processing of
large data sets in a distributed computing environment.
 SUSE Enterprise Linux Server 11, the fastest available Power7 processor operating system.
 2,880 processor cores.
 15 terabytes (TB) of RAM.
 500 gigabytes (GB) of pre-processed information.
 IBM's DeepQA software, which is designed for information retrieval and incorporates natural
language processing (NLP) and machine learning.
Applications for Watson's underlying cognitive computing technology are almost endless. Because
the device can perform text mining and complex analytics on huge volumes of unstructured data, it
can support a search engine or an expert system with capabilities far superior to any previously
existing.
In May 2016, Baker Hostetler, an Ohio-based law firm, signed a contract for a legal expert system
based on Watson to work with its 50-person bankruptcy team. That system, called Ross, can mine
data from about a billion text documents, analyze the information and provide precise responses to
complicated questions in less than three seconds. Natural language processing allows the system to
translate legalese to respond to the lawyers' questions.
As Ross' creators add more legal modules, similar expert systems are transforming medical research.

Watson in healthcare
Healthcare was one of the first industries to which Watson technology was applied. The first
commercial implementation of Watson came in 2013 when the Memorial Sloan Kettering Cancer
Center began using the system to recommend treatment options for lung cancer patients to ensure
they received the right treatment while reducing costs. Since that time, providers such as Cleveland
Clinic, Maine Center for Cancer Medicine and Westmed Medical Group have also implemented
Watson tools.

IBM's Watson Health is changing patient care.


However, not every implementation has gone smoothly. The MD Anderson Cancer Center in Houston
launched a project in 2013 to build a decision support system powered by Watson technology to help
doctors determine the best treatment options. But after spending more than $62 million on the
project over the course of four years, hospital administrators cancelled the project, saying it had
failed to meet its goals.
Healthcare remains a primary focal point for IBM as it tries to prove Watson technology, and the
company continues to forge partnerships with healthcare organizations. In May 2018, for example,
Apollo, one of India's largest specialty healthcare systems, agreed to adopt Watson for Oncology and
Watson for Genomics. The two IBM cognitive computing platforms will help doctors make decisions
for personalized cancer care.
IBM hopes that using Watson to solve some of the biggest problems in patient care, and applying
data-driven insights to recommend treatment options, will prove the value of Watson technologies.

Watson Analytics
Watson Analytics is one of the primary implementations of Watson technology. It is a platform for
exploring, visualizing and presenting data that utilizes Watson's cognitive capabilities to
automatically surface data-driven insights and recommend ways of presenting the data.
The platform is made up of an exploration component, which allows users to upload their data,
automatically recommends potentially correlated variables and builds comparisons; a prediction tool
that allows users to get answers to complex questions based on their data; and a reporting tool that
supports dashboard and report development.
Each component is accessed using a graphical user interface (GUI), which minimizes the need for
advanced data science training. The platform is intended to make advanced analytics accessible to
workers with limited technical knowledge. The cost of Watson Analytics depends on the version;
there is a free version which includes the ability to upload spreadsheets, get visualizations, get
insights and build dashboards. The "Plus" edition includes the capabilities of the free version along
with 2 GB of storage and additional data sources, including databases; it starts at $30 per user, per
month. A "Professional" edition with all of the above features, as well as a multiuser tenant for
collaboration, 100 GB of storage and more data sources, costs $80 or more per user, per month
(2018 pricing sourced from the IBM Watson Analytics website).

Watson APIs let businesses build AI applications
IBM has published a range of application program interfaces (APIs) on its cloud that allow users to
build their own AI applications that utilize Watson's core technology on the back end. There are APIs
that support popular development frameworks like Java, Python and others.
IBM also has API connectors to pre-trained deep learning algorithms that allow users to build
applications for things like natural language processing, image recognition and tone analysis. One API
supports the development of smart assistants using Watson technology on the back end.

IBM Watson's history


In a fall 2010 AI Magazine article, IBM researchers reported on their three-year journey to build a
computer system that could compete with humans in answering questions correctly in real time on
the TV show Jeopardy! This project led to the design of IBM's DeepQA architecture and Watson.
In 2011, Watson challenged two top-ranked players on Jeopardy! -- champions Ken Jennings and
Brad Rutter -- and famously beat them. The Watson avatar sat between the two contestants, as a
human competitor would, while its considerable bulk sat on a different floor of the building. Like the
other contestants, Watson didn't have internet access.
In the practice round, Watson demonstrated a human-like ability for complex wordplay, correctly
responding, for example, to the answer clue, "Classic candy bar that's a female Supreme Court
justice," with, "What is Baby Ruth Ginsburg?" Rutter noted that although the retrieval of information
is "trivial" for Watson and difficult for a human, the human is still better at the complex task of
comprehension. Nevertheless, machine learning allows Watson to examine its mistakes against the
correct answers to see where it erred and inform future responses.
IBM researchers concluded that DeepQA proved to be an effective and extensible architecture which
could be used to combine, deploy, evaluate and advance a wide range of algorithmic techniques in
the field of question answering.
