
Don Bosco Institute of Technology

Kurla, Mumbai, 400070
Computer Engineering

ARTIFICIAL INTELLIGENCE

Code : CPC703 Semester : VII

Name : Shruti Vellat


Roll no : 69
Subject : Artificial Intelligence
Assignment No. 1

Explain the various sub-areas of Artificial Intelligence

Section 1 List all the Areas


Section 2 Explain each as below:
e.g.
1.1 Neural Networks
1.2 Applications of Neural Networks

Q.1] List all the Areas


Ans:
Expert Systems
Natural Language Processing
Neural Networks
Reinforcement Learning
Machine Learning

1] Expert Systems
What are Expert Systems?
Expert systems are computer applications developed to solve complex problems in a
particular domain, at the level of extraordinary human intelligence and expertise.

Characteristics of Expert Systems


High performance
Understandable
Reliable
Highly responsive
Capabilities of Expert Systems
The expert systems are capable of

Advising
Instructing and assisting humans in decision making
Demonstrating
Deriving a solution
Diagnosing
Explaining
Interpreting input
Predicting results
Justifying the conclusion
Suggesting alternative options to a problem

They are incapable of

Substituting human decision makers


Possessing human capabilities
Producing accurate output from an inadequate knowledge base
Refining their own knowledge

Applications of expert systems:


The following table shows where ES can be applied.

Application              Description
Design Domain            Camera lens design, automobile design.
Medical Domain           Diagnosis systems to deduce the cause of disease from observed data; conducting medical operations on humans.
Monitoring Systems       Comparing data continuously with an observed system or with prescribed behavior, such as leakage monitoring in a long petroleum pipeline.
Process Control Systems  Controlling a physical process based on monitoring.
Knowledge Domain         Finding out faults in vehicles, computers.
Finance/Commerce         Detection of possible fraud, suspicious transactions, stock market trading, airline scheduling, cargo scheduling.
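At the core of many expert systems is a forward-chaining rule engine that repeatedly applies if-then rules to a set of known facts. The sketch below is a minimal, hypothetical illustration; the medical-style rules and fact names are invented for this example and are not taken from any real expert system.

```python
# Minimal forward-chaining inference sketch (hypothetical rules).
# Each rule maps a set of required facts to a new conclusion.

RULES = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "shortness_of_breath"}, "see_doctor"),
]

def infer(facts):
    """Repeatedly apply rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            # Fire the rule only if all its conditions hold and it adds something new.
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"fever", "cough", "shortness_of_breath"}))
```

Chaining is visible here: the second rule can only fire after the first has derived "flu_suspected", which is how an ES derives a solution and can later justify the conclusion by listing the rules it fired.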
2] Natural Language Processing
Natural Language Processing (NLP) refers to the AI method of communicating with
intelligent systems using a natural language such as English.

NLP is a way for computers to analyze, understand, and derive meaning from human language in a
smart and useful way. By utilizing NLP, developers can organize and structure knowledge to
perform tasks such as automatic summarization, translation, named entity recognition, relationship
extraction, sentiment analysis, speech recognition, and topic segmentation.

"Apart from common word processor operations that treat text like a mere sequence of symbols,
NLP considers the hierarchical structure of language: several words make a phrase, several phrases
make a sentence and, ultimately, sentences convey ideas," John Rehling, an NLP expert at
Meltwater Group, said in "How Natural Language Processing Helps Uncover Social Media
Sentiment." By analyzing language for its meaning, NLP systems have long filled useful roles, such
as correcting grammar, converting speech to text, and automatically translating between languages.

NLP is used to analyze text, allowing machines to understand how humans speak. This human-
computer interaction enables real-world applications like automatic text summarization, sentiment
analysis, topic extraction, named entity recognition, parts-of-speech tagging, relationship
extraction, stemming, and more. NLP is commonly used for text mining, machine translation,
and automated question answering.

NLP is characterized as a hard problem in computer science. Human language is rarely precise or
plainly spoken. To understand human language is to understand not only the words, but the concepts
and how they're linked together to create meaning. Despite language being one of the easiest things
for humans to learn, the ambiguity of language is what makes natural language processing a
difficult problem for computers to master.

Components of NLP
There are two components of NLP, as given below:

Natural Language Understanding (NLU)


Understanding involves the following tasks:
Mapping the given input in natural language into useful representations.
Analyzing different aspects of the language.

Natural Language Generation (NLG)


It is the process of producing meaningful phrases and sentences in the form of natural language
from some internal representation.
It involves:

Text planning – It includes retrieving the relevant content from the knowledge base.

Sentence planning – It includes choosing the required words, forming meaningful phrases, and setting the tone of the sentence.

Text realization – It is mapping the sentence plan into sentence structure.

NLU is harder than NLG.
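The three NLG steps listed above can be made concrete with a toy pipeline. The knowledge base, the chosen topic keys, and the phrase templates below are invented for illustration and do not come from any real NLG system.

```python
# Toy NLG pipeline sketch: text planning -> sentence planning -> text realization.
# All content (knowledge base, templates) is invented for illustration.

KNOWLEDGE_BASE = {"temperature": 32, "condition": "sunny", "wind": "calm"}

def text_planning(kb, topic):
    # Text planning: retrieve the relevant content from the knowledge base.
    return {key: kb[key] for key in topic if key in kb}

def sentence_planning(content):
    # Sentence planning: choose words and form meaningful phrases.
    return [f"the condition is {content['condition']}",
            f"the temperature is {content['temperature']} degrees"]

def text_realization(phrases):
    # Text realization: map the sentence plan into a final sentence.
    return "Today " + " and ".join(phrases) + "."

content = text_planning(KNOWLEDGE_BASE, ["condition", "temperature"])
print(text_realization(sentence_planning(content)))
```

Running the pipeline on this toy knowledge base produces a single fluent sentence from structured facts, which is exactly the internal-representation-to-language direction that makes NLG the easier half of the problem.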

Applications of NLP:
Machine Translation
As the world's information moves online, the task of making that data accessible becomes increasingly
important. The challenge of making the world's information accessible to everyone, across language
barriers, has simply outgrown the capacity for human translation. Innovative companies like
Duolingo are looking to recruit large numbers of people to contribute, by combining translation
efforts with learning a new language. But machine translation offers an even more scalable
alternative for harmonizing the world's information. Google is a company at the forefront of machine
translation, using a proprietary statistical engine for its Google Translate service. The challenge with
machine translation technologies is not in translating words, but in preserving the meaning of
sentences, a complex technological issue that is at the heart of NLP.

Fighting Spam
Spam filters have become important as the first line of defense against the ever-increasing problem
of unwanted email. But almost everyone who uses email extensively has experienced agony over
unwanted emails that still get through, or important emails that have been accidentally caught in
the filter. The false-positive and false-negative issues of spam filters are at the heart of NLP
technology, again boiling down to the challenge of extracting meaning from strings of text. A
technology that has received much attention is Bayesian spam filtering, a statistical technique in
which the incidence of words in an email is measured against its typical occurrence in a corpus of
spam and non-spam emails.
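The Bayesian idea described above can be sketched in a few lines: count how often each word occurs in labelled spam and non-spam corpora, smooth the counts, and assign a new message to the class under which it is more likely. The tiny corpora below are invented for illustration.

```python
import math
from collections import Counter

# Toy Bayesian spam filter sketch. The training corpora are invented.
spam = ["win money now", "free money offer", "win a free prize"]
ham = ["project meeting tomorrow", "lunch with the team", "see the report"]

def log_likelihood(message, counts, total, vocab_size):
    # Laplace-smoothed log-probability of the message's words under one class.
    return sum(math.log((counts[w] + 1) / (total + vocab_size))
               for w in message.split())

def is_spam(message):
    vocab = {w for d in spam + ham for w in d.split()}
    spam_counts = Counter(w for d in spam for w in d.split())
    ham_counts = Counter(w for d in ham for w in d.split())
    # Classify by comparing likelihoods under the two classes.
    return (log_likelihood(message, spam_counts, sum(spam_counts.values()), len(vocab))
            > log_likelihood(message, ham_counts, sum(ham_counts.values()), len(vocab)))

print(is_spam("free money"))      # True for this toy corpus
```

The Laplace smoothing (the "+ 1") is what keeps a single unseen word from zeroing out a whole class, which is one of the practical fixes behind the false-positive and false-negative trade-off mentioned above.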

Information Extraction
Many important decisions in financial markets are increasingly moving away from human oversight
and control. Algorithmic trading is becoming more popular, a form of financial investing that is
entirely controlled by technology. But many of these financial decisions are impacted by news, by
journalism which is still presented predominantly in English. A major task, then, of NLP has
become taking these plain text announcements, and extracting the pertinent info in a format that can
be factored into algorithmic trading decisions. For example, news of a merger between companies
can have a big impact on trading decisions, and the speed at which the particulars of the merger,
players, prices, who acquires who, can be incorporated into a trading algorithm can have profit
implications in the millions of dollars.

Summarization
Information overload is a real phenomenon in our digital age, and already our access to knowledge
and information far exceeds our capacity to understand it. This is a trend that shows no sign of
slowing down, and so an ability to summarize the meaning of documents and information is
becoming increasingly important. This matters not just for recognizing and absorbing the pertinent
information from vast amounts of data. Another desired outcome is to understand deeper emotional
meaning: for example, based on aggregated data from social media, can a company determine the
general sentiment around its latest product offering? This branch of NLP
will become increasingly useful as a valuable marketing asset.

Question Answering
Search engines put the world's wealth of information at our fingertips, but are still generally quite
primitive when it comes to actually answering specific questions posed by humans. Google has seen
the frustration this has caused in users, who often need to try a number of different search results to
find the answer they are looking for. A big focus of Google's efforts in NLP has been to recognize
natural language questions, extract the meaning, and provide the answer, and the evolution of
Google's results page has shown this focus. Though certainly improving, this remains a major
challenge for search engines, and one of the main applications of natural language processing
research.
Examples: the Google Now feature, speech recognition, automatic voice output.

3] Neural Networks
What are Artificial Neural Networks (ANNs)?
The inventor of the first neurocomputer, Dr. Robert Hecht-Nielsen, defines a neural network as
"...a computing system made up of a number of simple, highly interconnected
processing elements, which process information by their dynamic state response to
external inputs."

Basic Structure of ANNs


The idea of ANNs is based on the belief that the working of the human brain, by making the right
connections, can be imitated using silicon and wires as living neurons and dendrites.
The human brain is composed of about 100 billion nerve cells called neurons. Each neuron is connected
to thousands of other cells by axons. Stimuli from the external environment, or inputs from sensory organs,
are accepted by dendrites. These inputs create electric impulses, which quickly travel through the
neural network. A neuron can then either send the message on to other neurons to handle the issue,
or not send it forward.

ANNs are composed of multiple nodes, which imitate biological neurons of human brain. The
neurons are connected by links and they interact with each other. The nodes can take input data and
perform simple operations on the data. The result of these operations is passed to other neurons. The
output at each node is called its activation or node value.
Each link is associated with a weight. ANNs are capable of learning, which takes place by altering
weight values.
Examples: Pattern recognition systems such as face recognition, character recognition, and
handwriting recognition.
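The basic structure above (weighted links, an activation at each node, and learning by altering weight values) can be sketched as a single classic perceptron. The AND-gate training data and the learning rate below are illustrative choices, not part of any particular pattern-recognition system.

```python
# Minimal artificial neuron sketch: weighted inputs, a threshold activation,
# and learning by adjusting the weights (the perceptron rule).

def neuron(inputs, weights, bias):
    # Node value (activation): 1 if the weighted sum crosses the threshold.
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

def train(samples, epochs=20, lr=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            # Learning takes place by altering the weight values.
            error = target - neuron(inputs, weights, bias)
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Toy task: learn the AND gate from its truth table.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train(AND)
print([neuron(x, weights, bias) for x, _ in AND])   # [0, 0, 0, 1]
```

Real ANNs stack many such nodes into layers, but the mechanism is the same: the output at each node is its activation, and all learning happens in the link weights.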

Applications of Neural Networks


They can perform tasks that are easy for a human but difficult for a machine:

Aerospace – Autopilot aircraft, aircraft fault detection.

Automotive – Automobile guidance systems.

Military – Weapon orientation and steering, target tracking, object discrimination, facial
recognition, signal/image identification.

Electronics – Code sequence prediction, IC chip layout, chip failure analysis, machine
vision, voice synthesis.

Financial – Real estate appraisal, loan advising, mortgage screening, corporate bond rating,
portfolio trading programs, corporate financial analysis, currency value prediction, document
readers, credit application evaluators.

Industrial – Manufacturing process control, product design and analysis, quality inspection
systems, welding quality analysis, paper quality prediction, chemical product design
analysis, dynamic modeling of chemical process systems, machine maintenance analysis,
project bidding, planning, and management.

Medical – Cancer cell analysis, EEG and ECG analysis, prosthetic design, transplant time
optimization.

Speech – Speech recognition, speech classification, text-to-speech conversion.

Telecommunications – Image and data compression, automated information services, real-time
spoken language translation.

Transportation – Truck brake system diagnosis, vehicle scheduling, routing systems.

Software – Pattern recognition in facial recognition, optical character recognition, etc.

Time Series Prediction – ANNs are used to make predictions on stocks and natural
calamities.

Signal Processing – Neural networks can be trained to process an audio signal and filter it
appropriately in hearing aids.

Control – ANNs are often used to make steering decisions for physical vehicles.

Anomaly Detection – As ANNs are expert at recognizing patterns, they can also be trained
to generate an output when something unusual occurs that does not fit the pattern.

4] Reinforcement Learning
Concept: RL is a paradigm for learning by trial and error, inspired by the way humans learn
new tasks. In a typical RL setup, an agent is tasked with observing its current state in a digital
environment and taking actions that maximise the accrual of a long-term reward it has been set. The
agent receives feedback from the environment as a result of each action, so that it knows whether
the action promoted or hindered its progress. An RL agent must therefore balance exploring
its environment to find optimal strategies of accruing reward with exploiting the best strategy it has
found to achieve the desired goal.
Applications: Multiple agents learning in their own instance of an environment with a
shared model or by interacting and learning from one another in the same environment,
learning to navigate 3D environments like mazes or city streets for autonomous driving,
inverse reinforcement learning to recapitulate observed behaviours by learning the goal of a
task (e.g. learning to drive or endowing non-player video game characters with human-like
behaviours).
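The state/action/reward loop and the exploration-versus-exploitation balance described above can be sketched with tabular Q-learning on a toy one-dimensional corridor. The environment (five states, reward only at the rightmost) and the hyperparameters are invented for illustration; this is not any specific published setup.

```python
import random

# Tabular Q-learning sketch on a toy corridor: states 0..4, actions
# move left (-1) or right (+1), reward 1.0 only on reaching the goal state.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == GOAL else 0.0)

random.seed(0)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in (-1, 1)}
for _ in range(200):                       # episodes
    s = 0
    while s != GOAL:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < EPSILON:
            a = random.choice((-1, 1))
        else:
            a = max((-1, 1), key=lambda act: Q[(s, act)])
        nxt, r = step(s, a)
        # Q-learning update: move the estimate toward reward + discounted best next value.
        best_next = max(Q[(nxt, -1)], Q[(nxt, 1)])
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = nxt

policy = [max((-1, 1), key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)
```

After training, the greedy policy moves right in every non-goal state: the reward at the goal has been propagated backwards through the Q-values, which is exactly the "feedback from the environment" mechanism the concept paragraph describes.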

5] Machine Learning
Concept: Machine learning involves the analysis of data and trends, and the training of systems
to learn from that data. The system then makes decisions and adapts based on its learning.
Machine learning is a method of data analysis that automates analytical model building. Using
algorithms that iteratively learn from data, machine learning allows computers to find hidden
insights without being explicitly programmed where to look.
Every machine learning algorithm has three components:
Representation: how to represent knowledge. Examples include decision trees, sets of rules,
instances, graphical models, neural networks, support vector machines, model ensembles,
and others.
Evaluation: the way to evaluate candidate programs (hypotheses). Examples include
accuracy, precision and recall, squared error, likelihood, posterior probability, cost,
margin, entropy, K-L divergence, and others.
Optimization: the way candidate programs are generated, known as the search process.
Examples include combinatorial optimization, convex optimization, and constrained optimization.
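The three components can be made concrete with a toy sketch: a threshold rule as the representation, accuracy as the evaluation, and exhaustive search over candidate thresholds as the optimization. The one-dimensional dataset below is invented for illustration.

```python
# Representation / evaluation / optimization on a toy 1-D dataset.
data = [(1.0, 0), (2.0, 0), (3.0, 0), (6.0, 1), (7.0, 1), (8.0, 1)]

def predict(threshold, x):
    # Representation: the hypothesis "label 1 if x > threshold".
    return 1 if x > threshold else 0

def accuracy(threshold):
    # Evaluation: fraction of training points the hypothesis gets right.
    return sum(predict(threshold, x) == y for x, y in data) / len(data)

def fit():
    # Optimization: exhaustive search over candidate thresholds.
    candidates = [x for x, _ in data]
    return max(candidates, key=accuracy)

best = fit()
print(best, accuracy(best))
```

Swapping any one component changes the algorithm: replace the threshold rule with a decision tree (representation), accuracy with squared error (evaluation), or exhaustive search with gradient descent (optimization), and the same three-part structure still applies.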

Applications :

1. Web search: ranking pages based on what you are most likely to click on.
2. Computational biology: rational design of drugs in the computer based on past experiments.
3. Finance: deciding who to send which credit card offers to, evaluating the risk of credit
offers, and deciding where to invest money.
4. E-commerce: predicting customer churn; determining whether or not a transaction is fraudulent.
5. Space exploration: space probes and radio astronomy.
6. Robotics: handling uncertainty in new environments; autonomous, self-driving cars.
7. Information extraction: asking questions over databases across the web.
8. Social networks: using machine learning to extract value from data on relationships
and preferences.
9. Debugging: a labor-intensive computer science task where machine learning could
suggest where a bug might be.
