
International Journal of Mechanical Engineering and Technology (IJMET)

Volume 10, Issue 03, March 2019, pp. 1117-1126. Article ID: IJMET_10_03_114
Available online at http://www.iaeme.com/ijmet/issues.asp?JType=IJMET&VType=10&IType=3
ISSN Print: 0976-6340 and ISSN Online: 0976-6359

© IAEME Publication Scopus Indexed

TEMPERATURE CONTROL SYSTEM FOR RANGE OPTIMIZATION IN ELECTRIC VEHICLE

R. Angeline
Department of Computer Science and Engineering, Faculty of Computer Science and
Engineering, SRM Institute of Science and Technology, Chennai, India

Sahitya. P
Department of Computer Science and Engineering, SRM Institute of Science and Technology,
Chennai, India

Swathi. S
Department of Computer Science and Engineering, SRM Institute of Science and Technology,
Chennai, India

Chethan. T. S
Department of Computer Science and Engineering, SRM Institute of Science and Technology,
Chennai, India

Shivani. L
Department of Computer Science and Engineering, SRM Institute of Science and Technology,
Chennai, India

ABSTRACT
In this paper, we propose a mechanism to optimize the range of an electric vehicle (EV) by integrating some of the best methods for estimating rotor position, efficient temperature control modules, and machine learning algorithms that analyze the vehicle's environment and driving pattern. A simulation of an EV model with the above-mentioned modules is presented in Simulink. The result of this simulation is compared with the result of a simulation of the same modules with machine learning algorithms integrated. A comprehensive comparative analysis is then presented to show how the range of an EV improves as the machine learns.
Keywords: Simulink, Reinforcement Learning, Electric Vehicle, Q-Learning, TD(λ) Learning.


Cite this Article: R. Angeline, Sahitya. P, Swathi. S, Chethan. T. S and Shivani. L, Temperature Control System for Range Optimization in Electric Vehicle, International Journal of Mechanical Engineering and Technology, 10(3), 2019, pp. 1117-1126.
http://www.iaeme.com/IJMET/issues.asp?JType=IJMET&VType=10&IType=3

1. INTRODUCTION
The exponential depletion of petroleum and natural gas has prompted aggressive research and development into manufacturing electric vehicles (EVs). A major concern in EVs is range optimization, for the following reasons: 1. the infrastructure for charging batteries is insufficient; 2. it takes 30-40 minutes to charge a vehicle; 3. the resources required to manufacture the battery unit are limited. Optimizing range is therefore necessary to ensure the longevity of the EV's battery.
Several methods have been proposed to estimate rotor position, thus eliminating the need for heavy mechanical sensors. One of these methods uses machine learning to estimate the rotor position in an EV, with phase current and voltage as the data points [1]. Other methods concentrate on carrier-signal-injection-based sensorless control techniques at zero and low speeds, or on improved dynamic models of Permanent Magnet Synchronous Motor (PMSM) drives. The former methods are hindered by saturation effects, and signal injection leads to unwelcome torque ripple. In the latter methods, the dependency on back-EMF results in inaccurate estimation of rotor position at very low and zero speeds.
The influence of environment temperature on EV batteries has been studied extensively through tests such as the Electrochemical Impedance Spectroscopy test and the Dynamic Driving test, reaching the conclusion that at low temperature, DC internal resistance and the polarization effect are the main factors limiting battery performance [4]. Experiments on 50 Ah lithium iron phosphate batteries over a temperature range of -40 °C to 40 °C have been carried out to analyze the influence of environment temperature on the voltage, internal resistance, efficiency and life cycle of the battery during charging and discharging [5].
Research has also been conducted to dynamically equalize the temperature of all battery cells (especially Li-ion cells) in a pack, because focusing on the average temperature of the battery pack rather than on each individual cell has resulted in wear-out and uneven temperature distribution, and has compromised the safety standards of the battery cells [6]. Furthermore, temperature variation degrades the torque accuracy and efficiency of IPMSM machines; a compensation control algorithm has been shown to save energy and improve efficiency [3].
In this paper, we propose a mechanism to optimize the range of an electric vehicle by integrating some of the best methods for controlling battery temperature with ML algorithms that analyze road conditions, driving pattern, ambient temperature and wind speed to estimate an accurate torque for the EV. An ML algorithm is also proposed to estimate the power required to cool down the battery, which further increases the range of the EV. A simulation of an EV model that integrates the battery temperature control module is presented in Simulink. The result of this simulation is compared with that of a simulation with the ML algorithms integrated. A comprehensive comparative analysis is then presented to show how the range of an EV improves as the machine learns.


2. SYSTEM ARCHITECTURE

Figure 1 System Architecture

2.1. Input
The Input module includes the following parameters: Inclination (degrees), Wind Speed (m/s), Brake Pedal (on/off), Throttle ([0, 1]), Cruise Enable (on/off), Cruise Disable (on/off) and Environment Temperature (°C). These signals are generated by the Signal Builder block in MATLAB Simulink. Of these parameters, Inclination, Wind Speed and Brake Pedal are combined into a single bus called TD (Test Data), which is then sent to the Vehicle Dynamics module. Finally, the signals other than Inclination and Wind Speed are sent to the Vehicle Control Unit.
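As an illustration only (the actual signals are built graphically in Simulink's Signal Builder block), the input bundle and the TD bus described above can be sketched as a simple data structure; the field names here are our own and merely mirror the parameter list:

```python
from dataclasses import dataclass

@dataclass
class DriverInputs:
    """One sample of the signals produced by the Signal Builder block (illustrative names)."""
    inclination_deg: float      # road inclination in degrees
    wind_speed_mps: float       # head/tail wind in m/s
    brake_pedal: bool           # on/off
    throttle: float             # normalized to [0, 1]
    cruise_enable: bool
    cruise_disable: bool
    env_temperature_c: float    # ambient temperature in deg C

@dataclass
class TestDataBus:
    """The TD bus forwarded to the Vehicle Dynamics module."""
    inclination_deg: float
    wind_speed_mps: float
    brake_pedal: bool

def to_td_bus(u: DriverInputs) -> TestDataBus:
    # Only inclination, wind speed and brake pedal go onto the TD bus.
    return TestDataBus(u.inclination_deg, u.wind_speed_mps, u.brake_pedal)
```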

2.2. Vehicle Control Module


The Vehicle Control Unit consists of three main components: Cruise Control, Torque Demand Management and Feed Forward Torque. Cruise Control uses the cruise enable, cruise disable, brake pedal and motor speed signals; these are connected to a J-K flip-flop that determines whether cruise has been enabled or disabled. If cruise is enabled, a torque termed 'trqref' is generated. Torque Demand Management produces the torque obtained by combining the outputs of the accelerator pedal and the brake pedal. Feed Forward Torque is the torque obtained by combining positive and negative torque energies. Positive torque energy is obtained when the vehicle is moving downhill; in this case there is no need to explicitly provide energy by pressing the accelerator, since the vehicle moves under its own mass and the earth's gravity. Conversely, pressing the brake pedal while moving downhill creates negative torque energy. The resultant torque, a combination of the Feed Forward Torque and the reference torques from the other two units, is forwarded to the speed controller switch, which in turn generates a reference torque depending on the output of the Cruise Control unit. The parameter required to calculate the reference torques of all three units is the motor speed, which is obtained from the PMSM drive. Motor speed is also used in the calculation of maximum power.
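The following Python sketch mimics the arbitration described above under simplifying assumptions of ours (a software latch standing in for the J-K flip-flop, and a plain weighted sum of pedal torques); it is not the Simulink block's exact logic, only an illustration of the data flow:

```python
def cruise_latch(enabled: bool, cruise_enable: bool, cruise_disable: bool,
                 brake_pedal: bool) -> bool:
    """Stand-in for the J-K flip-flop: enable sets the latch, disable or braking resets it."""
    if cruise_disable or brake_pedal:
        return False
    if cruise_enable:
        return True
    return enabled  # hold the previous state

def reference_torque(throttle: float, brake_pedal: bool, downhill: bool,
                     cruise_on: bool, trq_cruise: float, trq_max: float) -> float:
    """Combine torque demand, feed-forward torque and the cruise reference (illustrative)."""
    trq_demand = throttle * trq_max - (0.5 * trq_max if brake_pedal else 0.0)
    trq_feed_forward = (-0.2 * trq_max if (downhill and brake_pedal)
                        else (0.2 * trq_max if downhill else 0.0))
    trq_total = trq_demand + trq_feed_forward
    # The speed-controller switch picks the cruise reference 'trqref' when cruise is latched on.
    return trq_cruise if cruise_on else trq_total
```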

http://www.iaeme.com/IJMET/index.asp 1119 editor@iaeme.com


R. Angeline, Sahitya. P, Swathi. S, Chethan. T. S and Shivani. L

Figure 2 Vehicle Control Module

2.3. Final Drive Ratio and Vehicle Dynamics


The final drive ratio is the last stage of gearing between the transmission and the driven wheels. Altering this ratio affects the performance of the vehicle, rather drastically in some cases. In general, a lower final drive ratio gives less torque at the wheels but a higher top speed, while a higher final drive ratio gives more torque at the wheels but a lower top speed. Since torque aids acceleration, a higher final drive ratio enhances acceleration. It also means the motor turns more revolutions for a given road speed, so the higher the final drive ratio, the higher the energy consumption. This module is connected to the Vehicle Dynamics unit, which uses parameters such as the gear ratio, wind, inclination, brake pedal and mass of the vehicle to calculate the load induced on the motor.
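As a quick illustration of the trade-off described above (the torque and ratio values are our own toy numbers, not values from the model):

```python
def wheel_torque_and_speed(motor_torque_nm: float, motor_speed_rpm: float,
                           final_drive_ratio: float) -> tuple[float, float]:
    """Ideal gearing: torque is multiplied and speed divided by the final drive ratio."""
    wheel_torque = motor_torque_nm * final_drive_ratio
    wheel_speed = motor_speed_rpm / final_drive_ratio
    return wheel_torque, wheel_speed

# A higher ratio gives more wheel torque (better acceleration) but a lower top speed.
print(wheel_torque_and_speed(250.0, 8000.0, final_drive_ratio=7.0))   # (1750.0, ~1143 rpm)
print(wheel_torque_and_speed(250.0, 8000.0, final_drive_ratio=9.7))   # (2425.0, ~825 rpm)
```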

Figure 3 Vehicle Dynamics Module

2.4. Cooling System


In electric cars, battery discharge generates heat; the deeper the battery is discharged, the more heat is produced. Batteries are manufactured to operate only within certain temperature
limits, so the moment the temperature strays outside the working range the battery stops performing properly; a cooling system is needed to prevent this. The cooling system must keep the battery pack within its temperature range, i.e. between the minimum and maximum battery temperatures, and must also keep the temperature difference within the pack to a minimum (no more than 5 °C). The output of this module, the battery temperature, is passed to the PMSM (DC) drive to cool down the motor.
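The temperature limits and hysteresis band below are placeholders of ours, not values taken from the Simulink model; the sketch only illustrates the kind of bang-bang control a battery cooling loop performs:

```python
def cooling_command(pack_temp_c: float, cell_temps_c: list[float],
                    t_max_c: float = 40.0, max_spread_c: float = 5.0,
                    hysteresis_c: float = 2.0, cooling_on: bool = False) -> bool:
    """Return True if the coolant pump/fan should run on this control step."""
    spread = max(cell_temps_c) - min(cell_temps_c)
    if pack_temp_c > t_max_c or spread > max_spread_c:
        return True                      # too hot or too uneven: cool
    if cooling_on and pack_temp_c > t_max_c - hysteresis_c:
        return True                      # keep cooling until safely below the limit
    return False                         # otherwise stay idle (heating handled elsewhere)
```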

Figure 4 Cooling System Module

2.5. Battery
The battery unit consists of the BattSoc, BattVol, BattCurr, BattAH, BattPwr, BattCrnt and BattTemp signals. BattVol and BattCurr refer to the battery voltage and battery current. BattAH refers to the amount of charge (in ampere-hours) remaining in the battery, i.e. the stored energy. The percentage representation of this value is referred to as BattSoc. BattPwr is the amount of power consumed by the vehicle at that instant. BattCrnt is the current passed to the battery unit by the PMSM drive. A thermometer block is used to estimate BattTemp, the battery temperature in kelvin. These signals are sent to the BatteryStats block, which displays the BattSoc, BattTemp and BattPwr values on screen.
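To make the relationship between these signals concrete, here is a small illustrative container; the field names are ours, chosen to mirror the Simulink signal names, and the SoC calculation is the usual charge-ratio definition rather than necessarily the exact block used in the model:

```python
from dataclasses import dataclass

@dataclass
class BatteryState:
    batt_vol: float     # terminal voltage, V
    batt_curr: float    # battery current, A
    batt_ah: float      # remaining charge, Ah
    capacity_ah: float  # rated capacity, Ah
    batt_temp_k: float  # battery temperature, K

    @property
    def batt_soc(self) -> float:
        """State of charge as a percentage of rated capacity."""
        return 100.0 * self.batt_ah / self.capacity_ah

    @property
    def batt_pwr(self) -> float:
        """Instantaneous power drawn from the pack, W."""
        return self.batt_vol * self.batt_curr
```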

Figure 5 Battery Module

2.6. Permanent Magnet Synchronous Motor (PMSM)


A permanent magnet synchronous motor (PMSM) is similar to a brushless DC motor and is used for propulsion of the vehicle. This module is connected to the battery and cooling circuits, which power and cool the motor respectively. The reference torque demand, i.e. the torque required at the rotor, is received from the VCM. The mechanical rotational conserving port of the motor is connected to the Vehicle Dynamics module, which applies the load torque acting on the rotor. The rotor speed from the PMSM is sent to the VCM for torque demand calculations.


Figure 6 PMSM Module

2.7. DC-DC Controller


The DC-DC controller regulates the potential difference across the motor and also contains voltage and current sensors.

Figure 7 DC-DC Controller Module

2.8. ML Module
2.8.1. Reinforcement Learning

Figure 8 Reinforcement Learning Module


In reinforcement learning, decisions are made by the agent; everything other than the agent is called the environment. The agent-environment interaction takes place at discrete time steps t = 0, 1, 2, .... At each time step t, the agent observes the environment's state s_t ∈ S and takes an action a_t ∈ A, where S and A are the sets of possible states and actions respectively. At the next step, the agent receives a numerical reward r_{t+1} as a consequence of the action taken.
A policy π of the agent is a mapping from each state s ∈ S to an action a ∈ A that specifies the action a = π(s) that the agent will choose when the environment is in state s. The ultimate goal of the agent is to find the optimal policy, i.e. the policy for which V^π(s) is maximized for each state s ∈ S.
V^π(s) = E_π { Σ_{k=0}^{∞} γ^k · r_{t+k+1} | s_t = s }
The value function V^π(s) is the expected return when the environment starts in state s at time step t and follows policy π thereafter. γ (0 < γ < 1) is called the discount rate. It ensures that the infinite sum Σ_{k=0}^{∞} γ^k · r_{t+k+1} converges to a finite value; more importantly, γ reflects the uncertainty about the future. r_{t+k+1} is the reward received at time step t+k+1.
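For readers who prefer code to notation, the discounted return inside the expectation can be computed from a finite reward trace as follows (a minimal sketch; a real episode would be truncated or bootstrapped rather than assumed finite, and γ = 0.95 is only an example value):

```python
def discounted_return(rewards: list[float], gamma: float = 0.95) -> float:
    """Sum of gamma**k * r_{t+k+1} over a finite trace of rewards starting at time t."""
    return sum((gamma ** k) * r for k, r in enumerate(rewards))

# Example: three steps of reward 1.0 discounted at 0.95.
print(discounted_return([1.0, 1.0, 1.0]))  # 1 + 0.95 + 0.9025 = 2.8525
```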

2.8.1.1. State spaces


The state space for ROEV (Range Optimization in Electric Vehicle) is defined as
S = { s = [p_dem, v_v, b_tem, b_soc]^T | p_dem ∈ P_dem, v_v ∈ V_v, b_tem ∈ B_tem, b_soc ∈ B_soc }
where p_dem is the power demand of the EV, v_v is the vehicle speed, b_tem is the temperature of the battery, and b_soc is the state of charge of the battery pack. Different actions are taken depending on the state. For example, if the power demand is high and the battery temperature is high, the action is to cool the battery; if the power demand is high and the battery temperature is low, the action is to heat the battery.
The reinforcement learning agent must observe these states. In the actual implementation of the inner-loop reinforcement learning, all the inputs can be obtained using sensors. P_dem, V_v, B_tem and B_soc are respectively the finite sets of power demand of the EV, vehicle speed, battery temperature and state of charge of the battery pack. Discretization is required when defining these finite sets. In particular, B_soc is defined by discretizing the range of charge stored in the battery pack, i.e. [B_soc_min, B_soc_max], into a finite number of charge levels
B_soc = { B_soc_1, B_soc_2, ..., B_soc_k }
where B_soc_min ≤ B_soc_1 < B_soc_2 < ... < B_soc_k ≤ B_soc_max, and B_soc_min and B_soc_max are 0% and 100% respectively.
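A minimal sketch of this discretization, assuming evenly spaced SoC levels (the number of levels k = 10 below is our own placeholder; the paper does not specify it):

```python
import numpy as np

def soc_levels(k: int = 10, soc_min: float = 0.0, soc_max: float = 100.0) -> np.ndarray:
    """Evenly spaced SoC levels B_soc_1 .. B_soc_k spanning [soc_min, soc_max] percent."""
    return np.linspace(soc_min, soc_max, k)

def soc_state(soc_percent: float, levels: np.ndarray) -> int:
    """Index of the discrete SoC level used as part of the RL state."""
    return int(np.argmin(np.abs(levels - soc_percent)))

levels = soc_levels()              # 10 evenly spaced levels from 0 to 100
print(soc_state(37.0, levels))     # -> 3 (nearest level is ~33.3%)
```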

2.8.1.2. Action space


The action space is defined as
A = { a = [O(t)]^T | O(t) ∈ O }
where the action a = [O(t)]^T is taken by the agent to cool or heat the battery by a temperature change t. Note that t < 0 and t > 0 denote cooling and heating the battery respectively, and t = 0 denotes the cooling system's idle mode. The set O contains the Cool, Heat and Idle modes.
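Expressed as code, the three modes reduce to a small enumeration (illustrative only; the mapping function and names are ours):

```python
from enum import Enum

class CoolingAction(Enum):
    COOL = -1   # t < 0: remove heat from the pack
    IDLE = 0    # t = 0: cooling system idle
    HEAT = 1    # t > 0: add heat to the pack

def action_for(delta_t: float) -> CoolingAction:
    """Map a requested temperature change t to one of the three modes."""
    if delta_t < 0:
        return CoolingAction.COOL
    if delta_t > 0:
        return CoolingAction.HEAT
    return CoolingAction.IDLE
```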

2.8.1.3. Reward
The reward should be related to the change in SoC (ΔB_soc) over a distance (ΔD) in a time step (ΔT), since we need the ML to optimize range. We therefore define the reward the agent receives for a state-action pair (s, a) as the increase in range (ΔR), given by
ΔR = R_rc + R_l + ΔD − R_r


where R_rc is the estimated range at time t after controlling the temperature of the battery, R_l is the loss of range incurred in controlling the battery temperature, ΔD is the distance travelled in one unit of time, and R_r is the estimated range at time t−1. The reward for a particular (s, a) is higher if the range increases and negative (lower) if the range decreases. The range is estimated from multiple factors such as wind speed, driving pattern, battery SoC, total battery power and current [7].
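A direct transcription of this reward into code (the range estimates are inputs here because the paper delegates range estimation to the method of [7]; the example numbers are our own):

```python
def range_reward(est_range_after_ctrl_km: float, range_loss_from_ctrl_km: float,
                 distance_step_km: float, est_range_prev_km: float) -> float:
    """Delta-R = R_rc + R_l + Delta-D - R_r, exactly as defined in the text."""
    return (est_range_after_ctrl_km + range_loss_from_ctrl_km
            + distance_step_km - est_range_prev_km)

# Example: range estimate improves from 180 km to 184 km while driving 1 km,
# with a 0.5 km range figure attributed to running the cooling system.
print(range_reward(184.0, 0.5, 1.0, 180.0))  # 5.5 -> positive reward
```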

2.8.2. TD(λ)-Learning Algorithm


Temporal difference lambda, or TD(λ), learning is a model-free reinforcement learning technique. It is used to find the optimal policy since it has a relatively high convergence rate and performs better in non-Markovian environments, i.e. environments where predictions cannot be made solely from the current state. λ is a constant between 0 and 1 called the trace decay.

2.8.3. Q-learning algorithm


In this algorithm, a Q value denoted by Q(s, a) is associated with each state-action pair (s, a). The Q(s, a) value approximates the expected discounted cumulative reward of taking action a in state s, thereby maximizing the total (future) reward. Initially the Q values are set arbitrarily. At each time step t, according to the state, the agent selects an action based on the Q value of that pair, i.e. Q(s, a). To explore all possible paths before committing to a single one, an exploration-exploitation policy is implemented; that is, the agent does not always select the action a that has the maximum Q(s_t, a) value for the current state s_t. After taking the selected action a_t, the agent observes the next state s_{t+1} and receives reward r_{t+1}. Then, based on s_{t+1} and r_{t+1}, the agent updates the Q values of all state-action pairs; the eligibility e(s, a) of each state-action pair is updated as part of this Q value update. The eligibility e(s, a) of a state-action pair reflects the degree to which that pair has been encountered in the recent past, and is used by the exploration-exploitation policy to explore all possible combinations.
Algorithm:
Initialize Q(s, a) arbitrarily for all state-action pairs.
for each time step t:
    Choose action a_t for state s_t using the exploration-exploitation policy.
    Take action a_t.
    Observe reward r_{t+1} and next state s_{t+1}.
    δ ← r_{t+1} + γ · max_a' Q(s_{t+1}, a') − Q(s_t, a_t)
    e(s_t, a_t) ← e(s_t, a_t) + 1
    for all state-action pairs (s, a):
        Q(s, a) ← Q(s, a) + α · e(s, a) · δ
        e(s, a) ← γ · λ · e(s, a)
    end for
end for
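A compact, runnable version of this update is sketched below, with an ε-greedy rule standing in for the unspecified exploration-exploitation policy; the state and action spaces, α, γ, λ and ε are placeholders of ours, not values from the paper:

```python
import random
from collections import defaultdict

def epsilon_greedy(Q, state, actions, eps=0.1):
    """Exploration-exploitation policy: random action with probability eps, else greedy."""
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def q_lambda_step(Q, E, s, a, r, s_next, actions, alpha=0.1, gamma=0.95, lam=0.8):
    """One time step of the tabular Q(lambda) update from the algorithm above."""
    delta = r + gamma * max(Q[(s_next, b)] for b in actions) - Q[(s, a)]
    E[(s, a)] += 1.0                      # bump the eligibility of the visited pair
    for key in list(E.keys()):            # update every pair with a nonzero trace
        Q[key] += alpha * E[key] * delta
        E[key] *= gamma * lam             # decay the eligibility trace
    return Q, E

# Usage sketch: Q and E default to 0 for unseen pairs; the environment step that would
# supply (r, s_next) for the battery-temperature problem is omitted here.
Q = defaultdict(float)
E = defaultdict(float)
```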

3. RESULTS
The whole system was simulated using MathWorks Simulink, with and without our range optimization mechanism integrated, and the results are compared here. The inputs used for the test are shown in the graph below.


Figure 9 Simulation input


The simulation results show that the battery discharges as the vehicle moves up an inclination, accompanied by an increase in battery temperature. The change in battery SoC is large, as the battery discharges quickly.

Figure 10 Battery stats (without TCSROEV)


The ML algorithm takes less than 2 hours to converge, which is far shorter than the life of an electric vehicle. The battery temperature is thereafter maintained for better range and faster charging, as shown in the graph below for the same input as above.

Figure 11 Battery stats (with TCSROEV)

4. CONCLUSION
TCSROEV features an ML-based mechanism for optimizing an electric vehicle's range by controlling its battery temperature.

battery temperature control strategy that integrates reinforced learning to improve the Electric
Vehicle’s range. One of the major advantages of this approach is that, it does not demand any
prior drive cycle data. The results from the comprehensive comparison generated using
Simulink that simulated a normal EV and one with range optimization mechanism integrated
shows that the range of an EV increases substantially by controlling the temperature of the EV’s
motor and battery, approximately about 15%-20% during the first drive cycle, which is bound
to increase as more data is processed.

REFERENCES
Journal Articles
[1] Wided Zine, Zaatar Makni, Eric Monmasson, Lahoucine Idkhajine and Bruno Condamin,
Interests and Limits of Machine Learning-Based Neural Networks for Rotor Position
Estimation in EV Traction Drives. IEEE Transactions on Industrial Informatics, 14(5), May
2018, pp. 1942-1951.
[2] Xue Lin, Yanzhi Wang, Paul Bogdan, Naehyuck Chang, and Massoud Pedram,
Reinforcement Learning Based Power Management for Hybrid Electric Vehicles. IEEE
Transactions on Industrial Electronics, 2014, 62(12), pp.32-38.

Conference Proceedings
[3] Silong Li, Di Han, Bulent Sarlioglu, Impact of Temperature Variation on Fuel Economy of
Electric Vehicles and Energy Saving by using Compensation Control. 2018 IEEE
Transportation Electrification Conference and Expo (ITEC), 2018, pp.702-707.
[4] Xianzhi Gong and Chunting Chris Mi, Temperature-Dependent Performance of Lithium
Ion Batteries in Electric Vehicles. 2015 IEEE Applied Power Electronics Conference and
Exposition (APEC), 2015, pp. 1065-1072.
[5] Mengyan Zang, Jinhong Xie, Jian Ouyang, Shuangfeng Wang, Xiaolan Wu, Investigation
of Temperature Performance of Lithium-ion Batteries for Electric Vehicles. 2014 IEEE
Conference and Expo Transportation Electrification Asia-Pacific (ITEC Asia-Pacific),
2014, pp. 1-8.
[6] B. Ji, X.G. Song, W.P. Cao and V. Pickert, Active Temperature Control of Li-ion Batteries
in Electric Vehicles. IET Hybrid and Electric Vehicles Conference 2013 (HEVC 2013),
2016, pp. 1-5.
[7] Joonki Hong, Sangjun Park and Naehyuck Chang, Accurate Remaining Range Estimation
for Electric Vehicles. 2016 21st Asia and South Pacific Design Automation Conference
(ASP-DAC), 2016, pp. 781-786.
