Statistical learning: Statistical learning theory is a framework for machine learning that draws from the fields of statistics and functional analysis. It deals with the problem of finding a predictive function based on data, and it has led to successful applications in fields such as computer vision, speech recognition, bioinformatics and baseball.
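As a toy illustration of "finding a predictive function based on data", the sketch below fits a straight line to noisy observations by ordinary least squares; the synthetic data and the choice of a linear model are assumptions made purely for the example.

    import numpy as np

    # Synthetic dataset: noisy observations of an underlying linear trend.
    rng = np.random.default_rng(0)
    x = np.linspace(0, 10, 50)
    y = 2.5 * x + 1.0 + rng.normal(scale=2.0, size=x.size)

    # "Find a predictive function": fit f(x) = a*x + b by least squares.
    A = np.column_stack([x, np.ones_like(x)])
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)

    print(f"learned f(x) = {a:.2f}*x + {b:.2f}")
    print("prediction at x = 12:", round(a * 12 + b, 2))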
How does it work?
Speech recognition relies on algorithms for acoustic and language modelling. Acoustic modelling represents the relationship between linguistic units of speech and audio signals; language modelling matches sounds with word sequences to help distinguish between words that sound similar.
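To make the language-modelling half concrete, here is a toy sketch in which a bigram model scores two acoustically similar transcriptions; the probabilities are invented for illustration and are not drawn from any real corpus.

    import math

    # Hypothetical bigram probabilities P(next word | previous word);
    # the values are made up for this example.
    bigram = {
        ("recognize", "speech"): 0.020,
        ("wreck", "a"): 0.001,
        ("a", "nice"): 0.010,
        ("nice", "beach"): 0.005,
    }

    def log_prob(words, floor=1e-6):
        # Score a word sequence by summing log bigram probabilities.
        return sum(math.log(bigram.get(p, floor)) for p in zip(words, words[1:]))

    # Two candidate transcriptions that sound nearly identical.
    for cand in (["recognize", "speech"], ["wreck", "a", "nice", "beach"]):
        print(" ".join(cand), "->", round(log_prob(cand), 2))

The higher (less negative) score for "recognize speech" is how the language model breaks a tie that the acoustics alone cannot.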
Natural language processing (NLP) is a branch of artificial intelligence that helps computers
understand, interpret and manipulate human language. NLP draws from many disciplines, including
computer science and computational linguistics, in its pursuit to bridge the gap between human communication and computer understanding.
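As a minimal sketch of turning raw human language into something a program can work with, the example below tokenizes a sentence and counts word frequencies using only the Python standard library; a real NLP system would add much more, such as parsing or statistical models.

    import re
    from collections import Counter

    text = ("Natural language processing helps computers understand, "
            "interpret and manipulate human language.")

    # Tokenize: lowercase the text and split it into words.
    tokens = re.findall(r"[a-z]+", text.lower())

    # A first structured view of unstructured text: word frequencies.
    print(Counter(tokens).most_common(3))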
Computer vision is a field of computer science that works on enabling computers to see, identify and
process images in the same way that human vision does, and then provide appropriate output. It is
like imparting human intelligence and instincts to a computer. In reality though, it is a difficult task
to enable computers to recognize images of different objects.
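One way to see why recognition is difficult: comparing raw pixels breaks down as soon as the same object moves slightly. In the toy sketch below (synthetic images, invented for illustration), an identical square shifted by two pixels barely overlaps with the original at the pixel level.

    import numpy as np

    def image_with_square(top, left, size=4, shape=(16, 16)):
        # Synthetic image: a white square on a black background.
        img = np.zeros(shape)
        img[top:top + size, left:left + size] = 1.0
        return img

    a = image_with_square(4, 4)
    b = image_with_square(6, 6)  # the same object, shifted two pixels

    # Naive pixel-by-pixel matching: fraction of "on" pixels that agree.
    overlap = np.logical_and(a > 0, b > 0).sum() / (a > 0).sum()
    print(f"pixel overlap between identical shifted squares: {overlap:.0%}")

A human sees the same square twice; the pixel comparison reports only 25% agreement, which is why practical vision systems need representations that are robust to such changes.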
Image processing is a method of converting an image into digital form and performing operations on it, in order to get an enhanced image or to extract useful information from it. It is a type of signal processing in which the input is an image, such as a video frame or photograph, and the output may be an image or characteristics associated with that image. An image processing system usually treats images as two-dimensional signals and applies established signal processing methods to them.
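A small sketch of the "two-dimensional signal" view: the code below applies a 3x3 box-blur kernel to a synthetic image by direct convolution. The loop is written out for clarity; a production system would use an optimized library routine.

    import numpy as np

    # Synthetic 8x8 image with one bright noise pixel in the middle.
    image = np.zeros((8, 8))
    image[4, 4] = 9.0

    # 3x3 box-blur kernel: each output pixel becomes the neighbourhood mean.
    kernel = np.full((3, 3), 1.0 / 9.0)

    def convolve2d(img, k):
        # Direct 2-D convolution over the valid region.
        kh, kw = k.shape
        out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
        return out

    smoothed = convolve2d(image, kernel)
    print("peak before:", image.max(), "peak after:", smoothed.max())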
Deep Learning: Deep learning is a machine learning technique that teaches computers to do what
comes naturally to humans: learn by example. Deep learning is a key technology behind driverless
cars, enabling them to recognize a stop sign, or to distinguish a pedestrian from a lamppost. It is the
key to voice control in consumer devices like phones, tablets, TVs, and hands-free speakers. Deep
learning is getting lots of attention lately and for good reason. It’s achieving results that were not
possible before.
In deep learning, a computer model learns to perform classification tasks directly from images, text,
or sound. Deep learning models can achieve state-of-the-art accuracy, sometimes exceeding human-
level performance. Models are trained by using a large set of labeled data and neural network
architectures that contain many layers.
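At toy scale, the whole learn-by-example loop fits in a few lines. The sketch below trains a small two-layer network on the XOR function with plain gradient descent; it is far too small to be called "deep", but the same pattern of forward pass, error gradient, and weight update is what scales up to many layers and large labeled datasets.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Labeled examples (the XOR function), with a constant 1 appended to
    # each input so the layers learn bias terms as ordinary weights.
    X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(scale=1.0, size=(3, 8))   # input -> hidden layer
    W2 = rng.normal(scale=1.0, size=(9, 1))   # hidden (+bias) -> output

    lr = 0.5
    for _ in range(10_000):
        # Forward pass: compute predictions from the current weights.
        h = sigmoid(X @ W1)
        h_aug = np.hstack([h, np.ones((len(X), 1))])  # bias unit
        out = sigmoid(h_aug @ W2)

        # Backward pass: gradients of the squared error via the chain rule.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2[:-1].T) * h * (1 - h)

        # Gradient descent: the network learns by example.
        W2 -= lr * h_aug.T @ d_out
        W1 -= lr * X.T @ d_h

    print(out.round(2).ravel())  # should approach [0, 1, 1, 0]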
Watson is an IBM supercomputer that combines artificial intelligence (AI) and sophisticated analytical
software for optimal performance as a "question answering" machine. The supercomputer is named
for IBM's founder, Thomas J. Watson.
The Watson supercomputer processes at a rate of 80 teraflops (trillion floating point operations per
second). To replicate (or surpass) a high-functioning human's ability to answer questions, Watson
accesses 90 servers with a combined data store of over 200 million pages of information, which it
processes against six million logic rules. The system and its data are self-contained in a space that
could accommodate 10 refrigerators.
Watson's key components include Apache Unstructured Information Management Architecture (UIMA) frameworks, along with the infrastructure and other elements required for the analysis of unstructured data.
Watson Analytics
Watson Analytics is one of the primary implementations of Watson technology. It is a platform for
exploring, visualizing and presenting data that utilizes Watson's cognitive capabilities to
automatically surface data-driven insights and recommend ways of presenting the data.
The platform is made up of an exploration component, which allows users to upload their data,
automatically recommends potentially correlated variables and builds comparisons; a prediction tool
that allows users to get answers to complex questions based on their data; and a reporting tool that
supports dashboard and report development.
Each component is accessed using a graphical user interface (GUI), which minimizes the need for
advanced data science training. The platform is intended to make advanced analytics accessible to
workers with limited technical knowledge. The cost of Watson Analytics depends on the version. A free version includes the ability to upload spreadsheets, get visualizations and insights, and build dashboards. The "Plus" edition includes the capabilities of the free version along with 2 GB of storage and additional data sources, including databases, and starts at $30 per user, per month. A "Professional" edition, with all of the above features as well as a multiuser tenant for collaboration, 100 GB of storage and more data sources, costs $80 or more per user, per month (2018 pricing sourced from the IBM Watson Analytics website).