
INTRODUCTION:-

A differential equation is a mathematical equation that relates some
function with its derivatives. In applications, the functions usually
represent physical quantities, the derivatives represent their rates of
change, and the equation defines a relationship between the two. Because
such relations are extremely common, differential equations play a
prominent role in many disciplines including engineering, physics,
economics, and biology.

In pure mathematics, differential equations are studied from several
different perspectives, mostly concerned with their solutions—the set of
functions that satisfy the equation. Only the simplest differential
equations are solvable by explicit formulas; however, some properties of
solutions of a given differential equation may be determined without
finding their exact form.

If a self-contained formula for the solution is not available, the solution
may be numerically approximated using computers. The theory of
dynamical systems puts emphasis on qualitative analysis of systems
described by differential equations, while many numerical methods have
been developed to determine solutions with a given degree of accuracy.
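As a sketch of how such numerical approximation works, the explicit Euler method steps along the tangent line of the unknown solution. The function names and the test problem y' = y below are illustrative, not from the text:

```python
import math

def euler(f, x0, y0, x_end, n):
    """Approximate the solution of y' = f(x, y) with Euler's method."""
    h = (x_end - x0) / n          # step size
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)          # follow the tangent line one step
        x += h
    return y

# y' = y with y(0) = 1 has the exact solution y = e^x.
approx = euler(lambda x, y: y, 0.0, 1.0, 1.0, 1000)
print(approx, math.e)             # the estimate approaches e as n grows
```

Halving the step size roughly halves the error, which is why more sophisticated higher-order methods are preferred in practice.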

History
Differential equations first came into existence with the invention of
calculus by Newton and Leibniz. In Chapter 2 of his 1671 work
"Methodus fluxionum et Serierum Infinitarum",[1] Isaac Newton listed
three kinds of differential equations:

dy/dx = f(x),
dy/dx = f(x, y),
x1 ∂y/∂x1 + x2 ∂y/∂x2 = y.

He solves these examples and others using infinite series and discusses
the non-uniqueness of solutions.
Jacob Bernoulli proposed the Bernoulli differential equation in 1695.[2]
This is an ordinary differential equation of the form

y′ + P(x)y = Q(x)y^n,

for which the following year Leibniz obtained solutions by simplifying
it.[3]

Historically, the problem of a vibrating string such as that of a musical
instrument was studied by Jean le Rond d'Alembert, Leonhard Euler,
Daniel Bernoulli, and Joseph-Louis Lagrange.[4][5][6][7] In 1746,
d'Alembert discovered the one-dimensional wave equation, and within
ten years Euler discovered the three-dimensional wave equation.[8]

The Euler–Lagrange equation was developed in the 1750s by Euler and
Lagrange in connection with their studies of the tautochrone problem.
This is the problem of determining a curve on which a weighted particle
will fall to a fixed point in a fixed amount of time, independent of the
starting point.

Lagrange solved this problem in 1755 and sent the solution to Euler. Both
further developed Lagrange's method and applied it to mechanics, which
led to the formulation of Lagrangian mechanics.

In 1822, Fourier published his work on heat flow in Théorie analytique
de la chaleur (The Analytic Theory of Heat),[9] in which he based his
reasoning on Newton's law of cooling, namely, that the flow of heat
between two adjacent molecules is proportional to the extremely small
difference of their temperatures. Contained in this book was Fourier's
proposal of his heat equation for conductive diffusion of heat. This partial
differential equation is now taught to every student of mathematical
physics.

Types:
Differential equations can be divided into several types. Apart from
describing the properties of the equation itself, these classes of differential
equations can help inform the choice of approach to a solution. Commonly
used distinctions include whether the equation is: Ordinary/Partial,
Linear/Non-linear, and Homogeneous/Inhomogeneous. This list is far
from exhaustive; there are many other properties and subclasses of
differential equations which can be very useful in specific contexts.

Ordinary differential equations

An ordinary differential equation (ODE) is an equation containing an
unknown function of one real or complex variable x, its derivatives, and
some given functions of x. The unknown function is generally represented
by a variable (often denoted y), which, therefore, depends on x. Thus x is
often called the independent variable of the equation. The term "ordinary"
is used in contrast with the term partial differential equation, which may
be with respect to more than one independent variable.

Linear differential equations are the differential equations that are linear
in the unknown function and its derivatives. Their theory is well
developed, and, in many cases, one may express their solutions in terms
of integrals.

Most ODEs that are encountered in physics are linear, and, therefore, most
special functions may be defined as solutions of linear differential
equations. As, in general, the solutions of a differential equation cannot
be expressed by a closed-form expression, numerical methods are
commonly used for solving differential equations on a computer.

Partial differential equations

A partial differential equation (PDE) is a differential equation that
contains unknown multivariable functions and their partial derivatives.
(This is in contrast to ordinary differential equations, which deal with
functions of a single variable and their derivatives.) PDEs are used to
formulate problems involving functions of several variables, and are
either solved in closed form, or used to create a relevant computer model.
PDEs can be used to describe a wide variety of phenomena in nature such
as sound, heat, electrostatics, electrodynamics, fluid flow, elasticity, or
quantum mechanics. These seemingly distinct physical phenomena can
be formalised similarly in terms of PDEs. Just as ordinary differential
equations often model one-dimensional dynamical systems, partial
differential equations often model multidimensional systems. PDEs find
their generalisation in stochastic partial differential equations.

Non-linear differential equations

A non-linear differential equation is a differential equation that is not
linear in the unknown function and its derivatives: products of the
unknown function and its derivatives may appear, or they may occur with
degree greater than 1.
There are very few methods of solving nonlinear differential equations
exactly; those that are known typically depend on the equation having
particular symmetries. Nonlinear differential equations can exhibit very
complicated behavior over extended time intervals, characteristic of
chaos. Even the fundamental questions of existence, uniqueness, and
extendability of solutions for nonlinear differential equations, and well-
posedness of initial and boundary value problems for nonlinear PDEs are
hard problems and their resolution in special cases is considered to be a
significant advance in the mathematical theory (cf. Navier–Stokes
existence and smoothness). However, if the differential equation is a
correctly formulated representation of a meaningful physical process, then
one expects it to have a solution.[10]
Linear differential equations frequently appear as approximations to
nonlinear equations. These approximations are only valid under restricted
conditions. For example, the harmonic oscillator equation is an
approximation to the nonlinear pendulum equation that is valid for small
amplitude oscillations.
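The small-amplitude claim can be checked numerically: the pendulum's restoring term is proportional to sin(θ), while the harmonic oscillator's is proportional to θ, and the gap between the two grows with amplitude. A minimal sketch (the helper name is mine):

```python
import math

# Compare the restoring terms of the pendulum equation
# theta'' = -(g/L) sin(theta) and its linearisation theta'' = -(g/L) theta.
def relative_error(theta):
    """Relative error of the small-angle approximation sin(theta) ~ theta."""
    return abs(theta - math.sin(theta)) / abs(math.sin(theta))

print(relative_error(0.1))   # roughly 0.17% at small amplitude
print(relative_error(1.0))   # roughly 19% at large amplitude
```
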

Equation order

Differential equations are described by their order, determined by the term
with the highest derivative. An equation containing only first derivatives
is a first-order differential equation, an equation containing the second
derivative is a second-order differential equation, and so on.[11][12]
Differential equations that describe natural phenomena almost always
have only first and second order derivatives in them, but there are some
exceptions, such as the thin film equation, which is a fourth order partial
differential equation.

Example

In the first group of examples, u is an unknown function of x, and c and
ω are constants that are supposed to be known, such as:

du/dx = cu + x² (an inhomogeneous first-order linear constant-coefficient ODE),
d²u/dx² − x du/dx + u = 0 (a homogeneous second-order linear ODE),
d²u/dx² + ω²u = 0 (the harmonic oscillator, a homogeneous second-order linear constant-coefficient ODE).

Two broad classifications of both ordinary and partial differential
equations consist of distinguishing between linear and nonlinear
differential equations, and between homogeneous differential equations
and inhomogeneous ones.

Existence of solutions

Solving differential equations is not like solving algebraic equations. Not
only are their solutions often unclear, but whether solutions are unique or
exist at all are also notable subjects of interest.

For first order initial value problems, the Peano existence theorem gives
one set of circumstances in which a solution exists. Given any point (a, b)
in the xy-plane, define some rectangular region Z, such that
Z = [l, m] × [n, p] and (a, b) is in the interior of Z. If we are given a
differential equation dy/dx = g(x, y) and the condition that y = b when
x = a, then there is locally a solution to this problem if g(x, y) and ∂g/∂x
are both continuous on Z. This solution exists on some interval with its
center at a. The solution may not be unique. (See Ordinary differential
equation for other results.)

However, this only helps us with first order initial value problems.
Suppose we had a linear initial value problem of the nth order:

f_n(x) d^n y/dx^n + … + f_1(x) dy/dx + f_0(x) y = g(x),

with y(x_0) = y_0, y′(x_0) = y′_0, and so on. For any nonzero f_n(x), if
{f_0, f_1, …, f_n} and g are continuous on an interval containing x_0, then
y exists and is unique.

Connection to difference equations

The theory of differential equations is closely related to the theory of
difference equations, in which the coordinates assume only discrete
values, and the relationship involves values of the unknown function or
functions and values at nearby coordinates. Many methods to compute
numerical solutions of differential equations or study the properties of
differential equations involve the approximation of the solution of a
differential equation by the solution of a corresponding difference
equation.

Applications
The study of differential equations is a wide field in pure and applied
mathematics, physics, and engineering. All of these disciplines are
concerned with the properties of differential equations of various types.
Pure mathematics focuses on the existence and uniqueness of solutions,
while applied mathematics emphasizes the rigorous justification of the
methods for approximating solutions. Differential equations play an
important role in modelling virtually every physical, technical, or
biological process, from celestial motion, to bridge design, to interactions
between neurons. Differential equations used to model real-life problems
may not necessarily be directly solvable, i.e. they may not have
closed-form solutions. Instead, solutions can be approximated using
numerical methods.

Many fundamental laws of physics and chemistry can be formulated as
differential equations. In biology and economics, differential equations
are used to model the behavior of complex systems. The mathematical
are used to model the behavior of complex systems. The mathematical
theory of differential equations first developed together with the sciences
where the equations had originated and where the results found
application. However, diverse problems, sometimes originating in quite
distinct scientific fields, may give rise to identical differential equations.
Whenever this happens, mathematical theory behind the equations can be
viewed as a unifying principle behind diverse phenomena. As an example,
consider the propagation of light and sound in the atmosphere, and of
waves on the surface of a pond. All of them may be described by the same
second-order partial differential equation, the wave equation, which
allows us to think of light and sound as forms of waves, much like familiar
waves in the water. Conduction of heat, the theory of which was
developed by Joseph Fourier, is governed by another second-order partial
differential equation, the heat equation. It turns out that many diffusion
processes, while seemingly different, are described by the same equation;
the Black–Scholes equation in finance is, for instance, related to the heat
equation.

Physics
Radioactive decay (also known as nuclear decay, radioactivity or
nuclear radiation) is the process by which an unstable atomic nucleus
loses energy (in terms of mass in its rest frame) by emitting radiation, such
as an alpha particle, beta particle with neutrino or only a neutrino in the
case of electron capture, or a gamma ray or electron in the case of internal
conversion. A material containing such unstable nuclei is considered
radioactive. Certain highly excited short-lived nuclear states can decay
through neutron emission, or more rarely, proton emission.

Radioactive decay is a stochastic (i.e. random) process at the level of
single atoms. According to quantum theory, it is impossible to predict
when a particular atom will decay,[1][2][3] regardless of how long the atom
has existed. However, for a collection of atoms, the collection's expected
decay rate is characterized in terms of their measured decay constants or
half-lives. This is the basis of radiometric dating. The half-lives of
radioactive atoms have no known upper limit, spanning a time range of
over 55 orders of magnitude, from nearly instantaneous to far longer than
the age of the universe.
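The expected behaviour of a large sample is governed by the differential equation dN/dt = −λN, whose solution is N(t) = N₀e^(−λt), with the decay constant λ related to the half-life by λ = ln(2)/t½. A short sketch (the function name and sample numbers, loosely based on carbon-14, are illustrative):

```python
import math

def remaining(n0, half_life, t):
    """Expected number of undecayed nuclei after time t.

    Solves dN/dt = -lam * N with lam = ln(2) / half_life,
    so N(t) = n0 * exp(-lam * t). t and half_life share units.
    """
    lam = math.log(2) / half_life      # decay constant from the half-life
    return n0 * math.exp(-lam * t)

# After one half-life, half of the sample is expected to remain.
print(remaining(1000.0, 5730.0, 5730.0))   # ~500 nuclei after 5730 years
```
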

LAPLACE EQUATION:-
In mathematics, Laplace's equation is a second-order partial differential
equation named after Pierre-Simon Laplace who first studied its
properties. This is often written as:

∆f = 0 or ∇²f = 0,

where ∆ = ∇² is the Laplace operator[1] (see below) and f is a scalar function.

Laplace's equation and Poisson's equation are the simplest examples of
elliptic partial differential equations. The general theory of solutions to
Laplace's equation is known as potential theory. The solutions of
Laplace's equation are the harmonic functions, which are important in
many fields of science, notably the fields of electromagnetism,
astronomy, and fluid dynamics, because they can be used to accurately
describe the behavior of electric, gravitational, and fluid potentials. In the
study of heat conduction, the Laplace equation is the steady-state heat
equation.
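The steady-state heat interpretation suggests a simple numerical scheme: on a grid, the discrete Laplace equation makes every interior point the average of its four neighbours, and Jacobi iteration approaches that state by repeated averaging. A minimal sketch (the grid size and boundary values are illustrative):

```python
def jacobi_laplace(grid, iters=500):
    """Relax interior points toward the discrete Laplace equation:
    each interior point becomes the average of its four neighbours."""
    rows, cols = len(grid), len(grid[0])
    for _ in range(iters):
        new = [row[:] for row in grid]
        for i in range(1, rows - 1):
            for j in range(1, cols - 1):
                new[i][j] = 0.25 * (grid[i-1][j] + grid[i+1][j]
                                    + grid[i][j-1] + grid[i][j+1])
        grid = new
    return grid

# 5x5 plate: top edge held at 100, the other edges at 0 (steady-state heat).
plate = [[100.0] * 5] + [[0.0] * 5 for _ in range(4)]
solution = jacobi_laplace(plate)
print(solution[2][2])   # the centre settles near 25, by symmetry
```

The centre value of 25 follows from linearity: summing the four rotated one-hot-edge problems gives the constant solution 100, so each contributes a quarter.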

Laplace equation in two dimensions

The Laplace equation in two independent variables has the form

∂²ψ/∂x² + ∂²ψ/∂y² = 0.

Analytic functions
The real and imaginary parts of a complex analytic function both satisfy
the Laplace equation. That is, if z = x + iy, and if

f(z) = u(x, y) + iv(x, y),

then the necessary condition that f(z) be analytic is that the Cauchy–
Riemann equations be satisfied:

u_x = v_y,   v_x = −u_y,

where u_x is the first partial derivative of u with respect to x. It
follows that

u_yy = (−v_x)_y = −(v_y)_x = −(u_x)_x = −u_xx,

so u_xx + u_yy = 0. Therefore u satisfies the Laplace equation. A similar
calculation shows that v also satisfies the Laplace equation. Conversely,
given a harmonic function, it is the real part of an analytic function, f(z)
(at least locally). If a trial form is

f(z) = φ(x, y) + iψ(x, y),

then the Cauchy–Riemann equations will be satisfied if we set

ψ_x = −φ_y,   ψ_y = φ_x.
Newton's law of cooling states that the rate of heat loss of a body
is directly proportional to the difference in the temperatures between the
body and its surroundings, provided the temperature difference is small
and the nature of the radiating surface remains the same. As such, it is
equivalent to a statement that the heat transfer coefficient, which
mediates between heat losses and temperature differences, is a
constant. This condition is generally true in thermal conduction (where
it is guaranteed by Fourier's law), but it is often only approximately true
in conditions of convective heat transfer, where a number of physical
processes make effective heat transfer coefficients somewhat
dependent on temperature differences. Finally, in the case of heat
transfer by thermal radiation, Newton's law of cooling is not true.

Heat transfer version of the law

The heat-transfer version of Newton's law, which (as noted) requires a
constant heat transfer coefficient, states that the rate of heat loss of a body
is proportional to the difference in temperatures between the body and its
surroundings.

The rate of heat transfer in such circumstances is derived below:[4]

Newton's cooling law in convection is a restatement of the differential
equation given by Fourier's law:

dQ/dt = −h·A·(T(t) − T_env) = −h·A·ΔT(t),

where

Q is the thermal energy (SI unit: joule),
h is the heat transfer coefficient (assumed independent of T here)
(SI unit: W/(m² K)),
A is the heat transfer surface area (SI unit: m²),
T is the temperature of the object's surface and interior (since these
are the same in this approximation) (SI unit: K),
T_env is the temperature of the environment; i.e. the temperature
suitably far from the surface (SI unit: K),
ΔT(t) = T(t) − T_env is the time-dependent thermal gradient between
environment and object (SI unit: K).
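For a body of constant heat capacity, this reduces to dT/dt = −k(T − T_env), whose exact solution is exponential decay of the temperature difference: T(t) = T_env + (T₀ − T_env)e^(−kt). A small sketch with illustrative numbers (the function name and the coffee-cup values are mine):

```python
import math

def cool(t0, t_env, k, t):
    """Temperature at time t under Newton's law of cooling,
    dT/dt = -k (T - t_env): the gap to the environment decays
    exponentially from its initial value (t0 - t_env)."""
    return t_env + (t0 - t_env) * math.exp(-k * t)

# Coffee at 90 C in a 20 C room with k = 0.1 per minute:
print(cool(90.0, 20.0, 0.1, 10.0))   # after 10 min the 70-degree gap shrinks by e^-1
```
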

WAVE EQUATION:-
A pulse traveling through a string with fixed endpoints as modeled by the
wave equation.

The wave equation is an important second-order linear partial differential
equation for the description of waves—as they occur in classical
physics—such as mechanical waves (e.g. water waves, sound waves and
seismic waves) or light waves. It arises in fields like acoustics,
electromagnetics, and fluid dynamics.


The wave equation is a hyperbolic partial differential equation. It typically
concerns a time variable t, one or more spatial variables x1, x2, …, xn, and
a scalar function u = u(x1, x2, …, xn; t), whose values could model, for
example, the mechanical displacement of a wave. The wave equation for
u is

∂²u/∂t² = c²∇²u,

where ∇² is the (spatial) Laplacian and c is a fixed constant. Solutions of
this equation describe propagation of disturbances out from the region at
a fixed speed in one or in all spatial directions, as do physical waves from
plane or localized sources; the constant c is identified with the propagation
speed of the wave. This equation is linear. Therefore, the sum of any two
solutions is again a solution: in physics this property is called the
superposition principle.

The wave equation alone does not specify a physical solution; a unique
solution is usually obtained by setting a problem with further conditions,
such as initial conditions, which prescribe the amplitude and phase of the
wave. Another important class of problems occurs in enclosed spaces
specified by boundary conditions, for which the solutions represent
standing waves, or harmonics, analogous to the harmonics of musical
instruments.
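One way to see that a traveling wave solves the equation is to evaluate the residual u_tt − c²u_xx with finite differences; for u(x, t) = sin(x − ct), which moves rigidly to the right at speed c, the residual should vanish. A sketch (the helper name and sample points are illustrative):

```python
import math

def wave_residual(u, x, t, c, h=1e-4):
    """Finite-difference check of u_tt = c^2 u_xx at the point (x, t)."""
    u_tt = (u(x, t + h) - 2 * u(x, t) + u(x, t - h)) / h**2
    u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2
    return u_tt - c**2 * u_xx

# A right-moving wave u(x, t) = sin(x - c t) solves the equation for any c.
c = 2.0
f = lambda x, t: math.sin(x - c * t)
print(wave_residual(f, 0.3, 0.7, c))   # residual is near zero
```

The same check passes for a left-moving wave sin(x + ct), and by linearity for any sum of the two, illustrating superposition.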

The wave equation, and modifications of it, are also found in elasticity,
quantum mechanics, plasma physics and general relativity.

Classical mechanics

So long as the force acting on a particle is known, Newton's second law
is sufficient to describe the motion of a particle. Once independent
relations for each force acting on a particle are available, they can be
substituted into Newton's second law to obtain an ordinary differential
equation, which is called the equation of motion.

Newton's second law

The second law states that the rate of change of momentum of a body is
directly proportional to the force applied, and this change in momentum
takes place in the direction of the applied force.

The second law can also be stated in terms of an object's acceleration.
Since Newton's second law is valid only for constant-mass
systems,[17][18][19] m can be taken outside the differentiation operator by
the constant factor rule in differentiation. Thus,

F = ma,

where F is the net force applied, m is the mass of the body, and a is the
body's acceleration. Thus, the net force applied to a body produces a
proportional acceleration. In other words, if a body is accelerating, then
there is a force on it.

Consistent with the first law, the time derivative of the momentum is non-
zero when the momentum changes direction, even if there is no change in
its magnitude; such is the case with uniform circular motion. The
relationship also implies the conservation of momentum: when the net
force on the body is zero, the momentum of the body is constant. Any net
force is equal to the rate of change of the momentum.

Any mass that is gained or lost by the system will cause a change in
momentum that is not the result of an external force. A different equation
is necessary for variable-mass systems (see below).

Newton's second law is an approximation that is increasingly worse at
high speeds because of relativistic effects.

Quantum mechanics

In quantum mechanics, the analogue of Newton's law is Schrödinger's
equation (a partial differential equation) for a quantum system (usually
atoms, molecules, and subatomic particles whether free, bound, or
localized). It is not a simple algebraic equation, but in general a linear
partial differential equation, describing the time-evolution of the system's
wave function (also called a "state function").[17]

The time-dependent Schrödinger equation described above predicts that
wave functions can form standing waves, called stationary states (also
called "orbitals", as in atomic orbitals or molecular orbitals). These states
are particularly important as their individual study later simplifies the task
of solving the time-dependent Schrödinger equation for any state.
Stationary states can also be described by a simpler form of the
Schrödinger equation, the time-independent Schrödinger equation
(TISE).

Time-independent Schrödinger equation (general)

ĤΨ = EΨ,

where E is a constant equal to the total energy of the system. This is only
used when the Hamiltonian itself is not dependent on time explicitly.
However, even in this case the total wave function still has a time
dependency.

In words, the equation states:

When the Hamiltonian operator acts on a certain wave function Ψ,
and the result is proportional to the same wave function Ψ, then Ψ
is a stationary state, and the proportionality constant, E, is the
energy of the state Ψ.

In linear algebra terminology, this equation is an eigenvalue equation and
in this sense the wave function is an eigenfunction of the Hamiltonian
operator.

As before, the most common manifestation is the nonrelativistic
Schrödinger equation for a single particle moving in an electric field (but
not a magnetic field):

Time-independent Schrödinger equation (single nonrelativistic particle)

[−(ħ²/2m)∇² + V(r)] Ψ(r) = E Ψ(r).
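As a quick numerical illustration of the eigenvalue property, the textbook particle-in-a-box solution ψ_n(x) = sin(nπx) (taking ħ = m = 1, box width 1, and V = 0 inside, all assumptions for illustration) satisfies −½ψ'' = E_nψ with E_n = n²π²/2. A finite-difference check (helper names are mine):

```python
import math

# Particle in a box (hbar = m = 1, width 1, V = 0 inside):
# psi_n(x) = sin(n pi x) satisfies -1/2 psi'' = E_n psi with E_n = n^2 pi^2 / 2.
def tise_residual(n, x, h=1e-4):
    """Residual of the time-independent equation at an interior point x."""
    psi = lambda x: math.sin(n * math.pi * x)
    psi_xx = (psi(x + h) - 2 * psi(x) + psi(x - h)) / h**2
    energy = n**2 * math.pi**2 / 2
    return -0.5 * psi_xx - energy * psi(x)

print(tise_residual(1, 0.3))   # near zero: psi_1 is an eigenfunction
```
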

Biology:-
Standard logistic sigmoid function, i.e. f(x) = 1/(1 + e^−x).

A logistic function or logistic curve is a common "S" shape (sigmoid
curve), with equation:

f(x) = L / (1 + e^(−k(x − x0))),

where

 e = the natural logarithm base (also known as Euler's number),
 x0 = the x-value of the sigmoid's midpoint,
 L = the curve's maximum value, and
 k = the logistic growth rate or steepness of the curve.[1]

For values of x in the domain of real numbers from −∞ to +∞, the S-curve
shown on the right is obtained, with the graph of f approaching L as x
approaches +∞ and approaching zero as x approaches −∞.
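The formula transcribes directly into code; the default parameter values below (the standard sigmoid L = 1, k = 1, x0 = 0) are illustrative:

```python
import math

def logistic(x, L=1.0, k=1.0, x0=0.0):
    """The logistic curve f(x) = L / (1 + e^(-k (x - x0)))."""
    return L / (1.0 + math.exp(-k * (x - x0)))

print(logistic(0.0))    # 0.5: the midpoint value is half the maximum L
print(logistic(10.0))   # approaches L = 1 as x grows
print(logistic(-10.0))  # approaches 0 as x decreases
```
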

The logistic function finds applications in a range of fields, including
artificial neural networks, biology (especially ecology), biomathematics,
chemistry, demography, economics, geoscience, mathematical
psychology, probability, sociology, political science, linguistics, and
statistics.

Replicator dynamics

In mathematics, the replicator equation is a deterministic monotone non-
linear and non-innovative game dynamic used in evolutionary game
theory. The replicator equation differs from other equations used to model
replication, such as the quasispecies equation, in that it allows the fitness
function to incorporate the distribution of the population types rather than
setting the fitness of a particular type constant. This important property
allows the replicator equation to capture the essence of selection. Unlike
the quasispecies equation, the replicator equation does not incorporate
mutation and so is not able to innovate new types or pure strategies.

Equational forms:-

The most general continuous form is given by the differential equation

x_i′ = x_i [ f_i(x) − φ(x) ],   where φ(x) = Σ_j x_j f_j(x),

in which x_i is the proportion of type i in the population, f_i(x) is the
fitness of type i, and φ(x) is the average population fitness.

Basic components of Hodgkin–Huxley-type models. Hodgkin–Huxley
type models represent the biophysical characteristic of cell membranes.
The lipid bilayer is represented as a capacitance (Cm). Voltage-gated and
leak ion channels are represented by nonlinear (gn) and linear (gL)
conductances, respectively. The electrochemical gradients driving the
flow of ions are represented by batteries (E), and ion pumps and
exchangers are represented by current sources (Ip).

The Hodgkin–Huxley model, or conductance-based model, is a
mathematical model that describes how action potentials in neurons are
initiated and propagated. It is a set of nonlinear differential equations that
approximates the electrical characteristics of excitable cells such as
neurons and cardiac myocytes. It is a continuous time model, unlike, for
example, the Rulkov map.

Alan Lloyd Hodgkin and Andrew Fielding Huxley described the model in
1952 to explain the ionic mechanisms underlying the initiation and
propagation of action potentials in the squid giant axon.[1] They received
the 1963 Nobel Prize in Physiology or Medicine for this work.

Basic components:-

The typical Hodgkin–Huxley model treats each component of an excitable
cell as an electrical element (as shown in the figure). The lipid bilayer is
represented as a capacitance (Cm). Voltage-gated ion channels are
represented by electrical conductances (gn, where n is the specific ion
channel) that depend on both voltage and time. Leak channels are
represented by linear conductances (gL). The electrochemical gradients
driving the flow of ions are represented by voltage sources (En) whose
voltages are determined by the ratio of the intra- and extracellular
concentrations of the ionic species of interest. Finally, ion pumps are
represented by current sources (Ip).[clarification needed] The membrane
potential is denoted by Vm.

Mathematically, the current flowing through the lipid bilayer is written as

I_c = C_m (dV_m/dt),

and the current through a given ion channel is the product

I_i = g_i (V_m − V_i),

where V_i is the reversal potential of the i-th ion channel. Thus, for a cell
with sodium and potassium channels, the total current through the
membrane is given by:

I = C_m (dV_m/dt) + g_K (V_m − V_K) + g_Na (V_m − V_Na) + g_l (V_m − V_l),

where I is the total membrane current per unit area, Cm is the membrane
capacitance per unit area, gK and gNa are the potassium and sodium
conductances per unit area, respectively, VK and VNa are the potassium and
sodium reversal potentials, respectively, and gl and Vl are the leak
conductance per unit area and leak reversal potential, respectively. The
time dependent elements of this equation are Vm, gNa, and gK, where the
last two conductances depend explicitly on voltage as well.
Prey

When multiplied out, the prey equation becomes

dx/dt = αx − βxy.

The prey are assumed to have an unlimited food supply and to reproduce
exponentially, unless subject to predation; this exponential growth is
represented in the equation above by the term αx. The rate of predation
upon the prey is assumed to be proportional to the rate at which the
predators and the prey meet; this is represented above by βxy. If either x
or y is zero, then there can be no predation.

With these two terms the equation above can be interpreted as follows:
the rate of change of the prey's population is given by its own growth rate
minus the rate at which it is preyed upon.

Predators

The predator equation becomes

dy/dt = δxy − γy.

In this equation, δxy represents the growth of the predator population.
(Note the similarity to the predation rate; however, a different constant
is used, as the rate at which the predator population grows is not
necessarily equal to the rate at which it consumes the prey.) γy
represents the loss rate of the predators due to either natural death or
emigration; it leads to an exponential decay in the absence of prey.

Hence the equation expresses that the rate of change of the predator's
population depends upon the rate at which it consumes prey, minus its
intrinsic death rate.
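The prey and predator equations can be integrated together numerically; a classical fourth-order Runge–Kutta step is a common choice. A sketch in which the coefficient values and initial populations are illustrative, not from the text:

```python
def lotka_volterra_step(x, y, alpha, beta, gamma, delta, dt):
    """One fourth-order Runge-Kutta step of the coupled system
    dx/dt = alpha*x - beta*x*y   (prey)
    dy/dt = delta*x*y - gamma*y  (predators)."""
    def f(x, y):
        return alpha * x - beta * x * y, delta * x * y - gamma * y
    k1x, k1y = f(x, y)
    k2x, k2y = f(x + dt * k1x / 2, y + dt * k1y / 2)
    k3x, k3y = f(x + dt * k2x / 2, y + dt * k2y / 2)
    k4x, k4y = f(x + dt * k3x, y + dt * k3y)
    x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
    y += dt * (k1y + 2 * k2y + 2 * k3y + k4y) / 6
    return x, y

# Integrate roughly one population cycle with illustrative coefficients.
x, y = 10.0, 5.0                       # initial prey and predator counts
for _ in range(10000):
    x, y = lotka_volterra_step(x, y, 1.1, 0.4, 0.4, 0.1, 0.001)
print(x, y)                            # both populations remain positive
```

The populations cycle rather than settle: a prey boom feeds a predator boom, which then crashes the prey.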

Chemistry
The rate law or rate equation for a chemical reaction is a differential
equation that links the reaction rate with concentrations or pressures of
reactants and constant parameters (normally rate coefficients and partial
reaction orders).[18] To determine the rate equation for a particular system
one combines the reaction rate with a mass balance for the system.[19] In
addition, a range of differential equations are present in the study of
thermodynamics and quantum mechanics.
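In the simplest case, a first-order reaction with rate = k[A], the rate equation integrates to an exponential decay of concentration, giving a half-life ln(2)/k that is independent of the starting amount. A small sketch (the function name and the value of k are illustrative):

```python
import math

def concentration(c0, k, t):
    """Integrated first-order rate law: for rate = k [A],
    the concentration decays as [A](t) = [A]0 * e^(-k t)."""
    return c0 * math.exp(-k * t)

# With k = 0.05 per second, the half-life is ln(2)/k regardless of c0.
half_life = math.log(2) / 0.05
print(concentration(1.0, 0.05, half_life))   # half of the initial concentration
```
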

A thermite reaction using iron(III) oxide. The sparks flying outwards are
globules of molten iron trailing smoke in their wake.
A chemical reaction is a process that leads to the chemical
transformation of one set of chemical substances to another.[1] Classically,
chemical reactions encompass changes that only involve the positions of
electrons in the forming and breaking of chemical bonds between atoms,
with no change to the nuclei (no change to the elements present), and can
often be described by a chemical equation. Nuclear chemistry is a sub-
discipline of chemistry that involves the chemical reactions of unstable
and radioactive elements where both electronic and nuclear changes can
occur.

The substance (or substances) initially involved in a chemical reaction
are called reactants or reagents. Chemical reactions are usually
characterized by a chemical change, and they yield one or more products,
which usually have properties different from the reactants. Reactions
often consist of a sequence of individual sub-steps, the so-called
elementary reactions, and the information on the precise course of action
is part of the reaction mechanism. Chemical reactions are described with
chemical equations, which symbolically present the starting materials,
end products, and sometimes intermediate products and reaction
conditions.

Chemical reactions happen at a characteristic reaction rate at a given
temperature and chemical concentration. Typically, reaction rates
increase with increasing temperature because there is more thermal
energy available to reach the activation energy necessary for breaking
bonds between atoms. Reactions may proceed in the forward or reverse
direction until they go to completion or reach equilibrium. Reactions that
proceed in the forward direction to approach equilibrium are often
described as spontaneous, requiring no input of free energy to go forward.
Non-spontaneous reactions require input of free energy to go forward
(examples include charging a battery by applying an external electrical
power source, or photosynthesis driven by absorption of electromagnetic
radiation in the form of sunlight).

Different chemical reactions are used in combinations during chemical
synthesis in order to obtain a desired product. In biochemistry, a
consecutive series of chemical reactions (where the product of one
reaction is the reactant of the next reaction) form metabolic pathways.
These reactions are often catalyzed by protein enzymes. Enzymes
increase the rates of biochemical reactions, so that metabolic syntheses
and decompositions impossible under ordinary conditions can occur at
the temperatures and concentrations present within a cell. The general
concept of a chemical reaction has been extended to reactions between
entities smaller than atoms, including nuclear reactions, radioactive
decays, and reactions between elementary particles, as described by
quantum field theory.
CONCLUSION:-
Differential equations play a major role in applications across the
sciences and engineering. They arise in a wide variety of engineering
applications, e.g. electromagnetic theory, signal processing,
computational fluid dynamics, etc. These equations can typically be
solved using either analytical or numerical methods. Many of the
differential equations arising in real-life applications cannot be solved
analytically; that is, their analytical solution does not exist. For such
problems, certain numerical methods exist in the literature. In this book,
our main focus is to present an emerging meshless method based on the
concept of neural networks for solving differential equations or
boundary value problems, of ODE as well as PDE type. Here we have
started with the fundamental concept of a differential equation, some
real-life applications where such problems arise, and an explanation of
some existing numerical methods for their solution. We have also
presented some basic concepts of neural networks that are required for
the study, along with the history of neural networks. Different neural
network methods based on multilayer perceptrons, radial basis functions,
multiquadric functions, finite elements, etc. are then presented for
solving differential equations. It has been pointed out that the
employment of a neural network architecture adds many attractive
features to the problem compared with the other existing methods in the
literature. Ease of preparation of input data, robustness of the methods,
and the high accuracy of the solutions make these methods highly
acceptable. The main advantage of the proposed approach is that once
the network is trained, it allows evaluation of the solution at any desired
number of points instantaneously, with negligible computing time.
Moreover, different hybrid approaches are also available, and work is in
progress to use better optimization algorithms. People are also working
on combining neural networks with other existing methods to propose
new methods for the construction of a better trial solution for all kinds of
boundary value problems. Such a collection could not be exhaustive;
indeed, we can hope to give only an indication of what is possible.
REFERENCES:-
https://www.math.psu.edu/tseng/class/Math251/Notes-PDE%20pt1.pdf

http://www.math.harvard.edu/archive/118r_spring_05/handouts/conclusion.pdf

https://www.khanacademy.org/math/differential-equations

https://www.khanacademy.org/math/ap-calculus-ab/ab-differential-equations-new/ab-7-1/v/differential

http://mathworld.wolfram.com/LaguerreDifferentialEquation.html

https://en.wikipedia.org/wiki/Differential_equation

http://www.analyzemath.com/calculus/Differential_Equations/applications.html

https://www.youtube.com/watch?v=fKHFbOeJrD0
