
IEEE INTERNET OF THINGS JOURNAL, VOLUME 30, NO. 99, APRIL 2016

Detecting Driver's Smartphone Usage via Non-intrusively Sensing Driving Dynamics

Cheng Bo, Xuesi Jian, Taeho Jung, Junze Han, Xiang-Yang Li, Fellow, IEEE, Xufei Mao, Member, IEEE, and Yu Wang, Member, IEEE

Abstract: In this work, we address the critical task of dynamically detecting the simultaneous behavior of driving and texting, using the smartphone as the sensor. We propose, design, and implement TEXIVE, which detects texting operations during driving by utilizing irregularities and rich micro-movements of users. Without relying on any external infrastructure or additional devices, and without requiring any modification to vehicles, TEXIVE successfully detects dangerous operations with good sensitivity, specificity, and accuracy by leveraging the inertial sensors integrated in regular smartphones. To validate our approach, we conduct extensive experiments involving a number of volunteers on various vehicles and smartphones. Our evaluation results show that TEXIVE has a classification accuracy of 87.18% and a precision of 96.67%.
Index Terms: Driving Detection, Smartphone Sensing, Mobile Applications, IoTs.

I. INTRODUCTION
One recent study indicates that at least 23% of all vehicle crashes in the US, about 1.3 million crashes, involve the use of cell phones (especially texting) during driving [1], although over 30 states and the District of Columbia have forbidden texting while driving [2]. Such a severe safety issue has stirred numerous research efforts and innovations on detecting and preventing driving-and-texting behaviors, and a number of detection approaches using different sensing or Internet of Things (IoT) devices have been proposed, e.g., mounting a camera to monitor the driver [3], [4], relying on acoustic ranging through car speakers [5], and leveraging sensors and cloud computing to recognize drivers' operations [6]. Meanwhile, some apps have been developed, such as Rode Dog [7] and TextBuster [8].
The work is partially supported by the US National Science Foundation under Grant No. NSF CNS-1319915, CNS-1343355, CNS-1526638, EECS-1247944, and CMMI-1436786, and the National Natural Science Foundation of China under Grant No. 61272426, 61379117, 61428203, 61520106007, and 61572347. (Co-corresponding authors: Xiang-Yang Li and Yu Wang.)
C. Bo is with the Department of Computer Science, University of North
Carolina at Charlotte, Charlotte, NC 28223, USA. E-mail: cbo1@uncc.edu.
X. Jian, T. Jung and J. Han are with the Department of Computer
Science, Illinois Institute of Technology, Chicago, IL 60616, USA. E-mail:
{xjian,tjung,jhan20}@hawk.iit.edu.
X.-Y. Li is with the School of Computer Science and Technology, University
of Science and Technology of China, Hefei, Anhui, 230026, China and the
Department of Computer Science, Illinois Institute of Technology, Chicago,
IL 60616, USA. E-mail: xli@cs.iit.edu.
X. Mao is with School of Software, Tsinghua University, Beijing, 100084,
China. E-mail: xufei.mao@gmail.com.
Y. Wang is with the Department of Computer Science, University of
North Carolina at Charlotte, Charlotte, NC 28223, USA and the College of
Information Engineering, Taiyuan University of Technology, Taiyuan, Shanxi,
030024, China. E-mail: yu.wang@uncc.edu.

Although these techniques have been shown to perform well under various circumstances based on certain assumptions, they share some common limitations, e.g., requiring extra infrastructure such as cameras or radio interceptors [3], [9], requiring specially equipped vehicles (Bluetooth and a special layout of speakers [5]), or requiring collaboration among multiple phones in the vehicle plus cloud computing [6].
In this work, we address this critical task of detecting driving-and-texting activities by treating regular smartphones as the smart things in the IoT. We propose TEXIVE, a system leveraging the inertial sensors integrated in regular smartphones to distinguish drivers from passengers by recognizing rich micro-movements of smartphone users, and to further detect the driving-and-texting activities of drivers. To achieve high performance without relying on external infrastructure, a number of challenges must be properly addressed. First, since the inertial sensors of smartphones carry unavoidable inherent noise due to their mechanical structure, one challenge is how to minimize the negative impact of such noisy data. Second, it is extremely difficult to guarantee the sensitivity, accuracy, and precision of driving-and-texting detection, since the potential pattern similarities and diversities of different user activities depend on many aspects, such as the way a user carries his/her smartphone, or whether the user sits in a driver seat. The last but not least challenge is how to achieve real-time activity detection and recognition with high accuracy and tolerable energy consumption, in other words, a good user experience.
To overcome these issues, our main idea is to let TEXIVE recognize micro-movements by fusing multiple pieces of evidence collected from the inertial sensors in smartphones, e.g., detecting whether a user is entering a vehicle, inferring which side of the vehicle he/she is entering from, and determining whether the user is sitting in a front or rear seat. Precisely, we collect sensory data while users perform various activities and observe unique patterns by converting the signal to the frequency domain using the DCT and wavelets to determine the user's behavior. For instance, to infer the side from which a user enters a vehicle, as well as the position of the seat he/she sits in, we exploit the unique patterns in both acceleration and magnetic field observed from the respective actions and the ambient environment, and finally make a cognitive decision based on machine learning techniques.
The main contributions of our work are summarized as follows.
- We propose and design TEXIVE based on regular smartphones, without assistance from any external infrastructure or additional devices. In addition, TEXIVE does not require any modification to vehicles.
- We conduct extensive experiments involving a number of volunteers on various vehicles and smartphones, and the results indicate that TEXIVE has a classification accuracy of 87.18% and a precision of 96.67%.
The rest of the paper is organized as follows. In Section II we briefly review existing work on distinguishing drivers from passengers and on general activity detection and recognition using the inertial sensors of smartphones. We present the system design in Section III, discussing the main steps in tackling several critical tasks for detecting the driving-and-texting operations of drivers. Section IV introduces the energy saving strategy of TEXIVE. We report our extensive evaluation results in Section V and conclude the paper in Section VI. A preliminary version of this work appeared as a poster [10] at ACM MobiCom'13.

II. RELATED WORKS

A number of innovative systems have been proposed and developed to distinguish drivers from passengers, or to prevent drivers from using smartphones while driving.
The first line of work concentrates on detecting whether a driver is distracted [11] or whether a driver is using a phone [5], [12] during driving. For instance, Bergasa et al. [13] designed a wearable circuit equipped with a camera to monitor the driver's vigilance in real time. Kutila et al. [14] developed another smart human-machine interface by fusing stereo vision and lane tracking data to measure the driver's momentary state. Although these two systems can detect the driver's distraction with approximately 77% accuracy on average, their strategies need support from external devices, rather than relying on common handsets (e.g., smartphones). Some other work, like Blind Sight [15], Negotiator [16], Quiet Calls [17], and the approach of Lindqvist and Hong [18], focuses on handling devices with less effort so as to reduce dangerous driving distraction. Most of the aforementioned designs require extra infrastructure or vehicle modification to detect drivers' activities, which increases the system cost and the coordination difficulty as well.
Other existing solutions for distinguishing drivers from passengers rely on specific mechanisms to determine the location of the smartphone. For example, Yang et al. [5] presented an innovative method that plays high frequency beeps from a smartphone over a Bluetooth connection through the car's stereo system, calculating the relative delays of the signal from the speakers to estimate the location of the smartphone. Wang et al. [19] designed a protocol to determine the location of a smartphone through an assistant adapter that provides vehicle speed reference readings to the phone over Bluetooth. However, the requirement of Bluetooth limits these applications, since Bluetooth equipment may not be available in some vehicles. Even with Bluetooth, the accuracy may be compromised to some extent because of varying cabin sizes and stereo configurations. Chu et al. [12] presented a phone-based sensing system capable of determining whether a user in a moving vehicle is the driver, without relying on additional wearable sensors or custom instrumentation in the vehicle. Their main idea is to utilize the collaboration of multiple phones to process in-car noise and to use a back-end cloud service to differentiate the front-seat driver from a back-seat passenger. Compared with these systems, our system achieves the goal using only the smartphone carried by the user and does not require special devices in the vehicle.
Some other work [20]-[30] utilizes the inertial sensors integrated in smartphones for a variety of purposes. For example, Bao et al. [20] performed activity recognition based on multiple accelerometers deployed on specific parts of the human body. In [21], Parkka et al. proposed a system embedded with 20 different wearable sensors to recognize activities. Tapia et al. [22] presented a real-time algorithm to recognize not only physical activities but also their intensities, using five accelerometers and a wireless heart rate monitor. Krishnan et al. [23] demonstrated that putting accelerometers on certain body parts is inadequate for identifying activities such as sitting, walking, or running. Mannini et al. [24] introduced multiple machine learning methods to classify human body postures and activities based on accelerometers attached to certain positions on the body. Lee and Mase [25] designed a novel system to identify a user's location and activities through an accelerometer and an angular velocity sensor in the pocket, combined with a compass on the waist. Ravi et al. [26] used an HP iPAQ to collect acceleration data from sensors wirelessly and recognize motion activities. Bo et al. [30] used inertial sensors to learn drivers' behaviors and further improve localization in urban environments. In summary, there is a clear trend for smartphones to become the major personal devices for mobile sensing [31] and smart urban sensing [32]. Unfortunately, these current systems and approaches cannot directly be used for distinguishing drivers from passengers and detecting driving-and-texting activities.
III. SYSTEM DESIGN

As we have mentioned before, one of the challenges of TEXIVE is to distinguish drivers from passengers efficiently. In this section, we present the design of the system and the solutions to each critical part of TEXIVE in detail.
A. Challenges and Design Goals
To distinguish drivers from passengers using smartphones, without any assistance from dedicated infrastructure or intrusive equipment in the vehicle, we must be able to recognize various activities and classify the phone (i.e., user) location by observing unique and distinguishable micro-movements and sequential patterns.
Diversity and Ambiguity of Human Activities: To identify the driver with high accuracy and low delay, real-time activity recognition is preferred. For example, we should start the algorithm that determines whether a user enters a vehicle as soon as a walking-towards-vehicle pattern is detected, after which the driver-passenger differentiation process launches. Simultaneously, an effective method for determining both the starting point and the duration of an activity begins, for the purpose of elevating the accuracy while accounting for the randomness of actions even by the same user. We assume that the behaviors of drivers and passengers usually differ, especially during the actions of entering the vehicle and driving.



Since even the activities of the same user may produce multiple irregular sensory traces because of differences in the smartphone's orientation, we clearly need to identify the signal and extract the patterns carefully, so they can be used for accurate differentiation.
Tradeoff between Efficiency and Accuracy: As smartphones have limited computation capability and limited energy supply, the proposed system should not be too complicated or resource-consuming. On the one hand, standard smartphone platforms should be able to execute our system online with a bounded delay. On the other hand, a carefully selected adaptive duty-cycle strategy should be adopted in the system so as to save energy. Thus, there is a tradeoff between efficiency and detection accuracy.
B. System Architecture Overview

We adopt a three-phase solution to accomplish the task: initial walking detection, in-vehicle recognition, and evidence fusion. Figure 1 illustrates the basic workflow of the system according to the three phases; the functionality of the components is as follows.
Fig. 1. The system overview: when the clock fires, the system checks for walking and then entering; side detection (left/right) and front-vs-back detection feed a decision combination that labels the user as driver or passenger.

Activity Identification: Driving usually follows a series of actions, including walking (towards the vehicle), entering (the vehicle), fastening (the seat belt), or even pressing (the gear and brake). Therefore, we treat walking as the start signal for the entire series of actions, and the system begins to monitor the entering action as soon as walking is periodically detected. Generally, most users are used to carrying their smartphones all day long, which facilitates observing multiple activities. One of our tasks is to identify the related activities within a rich set of potential daily activities, including walking, sitting, standing, or even ascending stairs. It is worth mentioning that TEXIVE does not require any interaction from the user. In addition, we study the temporal and spatial distribution of different activities as well, constructing a Hidden Markov Model (HMM) [33] to personalize the model, and further optimize the energy consumption by carefully adjusting the duty cycle. Although we believe that different activities will be reflected in individual micro-movements and motion patterns most of the time, we cannot neglect the cases where similar patterns are observed even for different activities. To overcome this, we exploit some subtle differences (e.g., the different frequency distributions when converting the time-series data to the frequency domain, and the variance of the amplitude of the time-series data) among the observed data from various sensors for further recognition.
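To make the HMM-based personalization concrete, the sketch below shows one plausible way to model a user's daily activity sequence with a Gaussian HMM. The hmmlearn library, the three-state layout, and the per-minute feature choice are our assumptions for illustration, not the authors' implementation.

```python
# A minimal sketch of HMM-based activity-routine modeling, assuming the
# hmmlearn library; states, features, and shapes are illustrative only.
import numpy as np
from hmmlearn import hmm

# Hypothetical per-minute feature vectors, e.g. [mean |acc|, var |acc|, hour/24].
rng = np.random.default_rng(0)
X = rng.normal(size=(1440, 3))          # one day of minute-level features

# Three hidden states, e.g., idle / walking / driving-related.
model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
model.fit(X)

states = model.predict(X)               # most likely state per minute
# Minutes predicted as the "active" state can be sampled at a high duty
# cycle, while the remaining minutes use a sparse schedule.
print(np.bincount(states, minlength=3))
```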
Detecting Boarding Side: Although the side from which a user enters a vehicle can often directly suggest whether the user is a driver, e.g., the driver usually boards from the left side in the US, we cannot guarantee the identity of the user precisely from this alone. Hence, it is still necessary to judge the boarding side of a user. To do so, the system recognizes the entering activity based on the direction of turning and the sequence of lifting legs. Fortunately, turning and lifting actions on different sides of the vehicle are reflected in the accelerometer and the gyroscope, and both sensors behave differently depending on the boarding side and the placement on the body.
Detecting Front or Back: The third piece of evidence helping the system make a decision is determining where the user is sitting in the car, i.e., in a front seat or a back seat. Our approach achieves this by relying on two separate signals. The first is based on our extensive tests, which show that starting the engine noticeably influences the magnetic field in the front half of the vehicle, which can be sensed by the magnetometer. The second is based on the unique and distinguishable acceleration patterns between the front and back seats when the vehicle crosses a bump or pothole. According to our preliminary tests, the bump signal, although not guaranteed to occur, accurately determines whether the phone (user) is in a front seat or a rear seat whenever it does.
Abnormal Texting: Empirically, people may present abnormal texting behavior in typing speed, input intervals, and typo frequency when they are not fully concentrated. Based on this observation, we analyze in detail the texting patterns under both focused and distracted conditions, so as to trigger a warning within a small amount of input and prevent further risk.
Further Improvement: Based on the three aforementioned strategies, which provide sufficient information about the activity and location of a user, we apply machine learning and evidence fusion to further improve the identification accuracy. In addition, the robustness of our system should be further improved to handle the diversity of user behaviors, considering the different locations where users put their phones according to individual habits [34].
Another thing worth mentioning is that the system runs in the background and performs the detection in real time, rather than continuously recording the sensory data into a local buffer and detecting the activities by rolling back, which would otherwise be the most common approach with respect to efficiency and cost. For instance, suppose a system starts to record the sensory data at time 0, as shown in Figure 2. At time $T_1$, the user starts entering a car, which lasts $T_2$. He starts driving after sitting inside the car for a delay of $T_3$. The system detects the driving behavior with detection delay $\Delta$, after the user has driven for time $T_4$. The whole sequence of actions lasts $T_1 + T_2 + T_3 + T_4 + \Delta$. However, since the exact duration of every $T_i$ is unknown and unpredictable, the amount of sensory data that has to be stored in the buffer will be quite large if we do offline detection. In contrast, in our real-time detection system TEXIVE, we can distinguish the driver at time $T_1 + T_2$ without buffering data.
Fig. 2. Real-time strategy vs. rolling back: entering the car at $T_1$ lasts $T_2$; driving starts after a further delay $T_3$ and is detected $\Delta$ after it begins.

C. Entering Vehicles?

In this section, we mainly focus on the detection of entering a vehicle, which is the initial stage of identification. We discuss it in three parts: activity recognition, entering detection, and accuracy improvement.
a) Activity Recognition: In our system, we set the sampling rate to 20 Hz, and successive sensory data are collected, representing continuous human activities. Two critical stages attract our attention. The first is to identify the starting point and the duration of an activity from a sequence of temporal data, while the other is employing effective feature extraction, which is helpful for the machine learning stage later. For the former, we store a small duration of sensory data in a buffer and set the sliding window size for recognition to 4.5 seconds, based on the extensive evaluation results presented later. For the latter, we adopt the Discrete Cosine Transform (DCT) [35] to extract features from the linear acceleration to represent a specific action.
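As an illustration of this pipeline, the following sketch segments a 20 Hz linear-acceleration stream into 4.5 s sliding windows and keeps the leading DCT coefficients of each window as features. The 0.5 s slide step and the number of coefficients kept are our assumptions for illustration.

```python
import numpy as np
from scipy.fft import dct

FS = 20                 # sampling rate (Hz), as in the paper
WIN = int(4.5 * FS)     # 4.5 s window = 90 samples
STEP = int(0.5 * FS)    # assumed 0.5 s slide, matching Section V

def dct_features(acc, n_coeff=16):
    """Slide a window over a 1-D acceleration trace and return the first
    n_coeff orthonormal DCT coefficients of each window."""
    feats = []
    for start in range(0, len(acc) - WIN + 1, STEP):
        window = acc[start:start + WIN]
        coeffs = dct(window, norm="ortho")   # energy-preserving DCT-II
        feats.append(coeffs[:n_coeff])
    return np.asarray(feats)

# Example: a synthetic 10 s trace in place of real sensor data.
trace = np.sin(np.linspace(0, 20 * np.pi, 10 * FS))
print(dct_features(trace).shape)            # (windows, 16)
```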
Recall that our system works in the background and monitors multiple activities, which may present similar patterns; our strategy is to analyze confusable activities from the perspective of machine learning and distinguish them from each other to improve the accuracy. Therefore, we additionally examine activities such as walking, ascending stairs, and sitting down, which are reflected in the acceleration along the direction of gravity, as shown in Figures 3(a) and 3(b). Although the difference between walking and ascending stairs is relatively small in the experimental results, machine learning can still distinguish one from the other with high accuracy. Sitting down (Figure 3(c)) is another activity we take into account, not only because it is recorded multiple times throughout a day, but also because its pattern is sometimes similar to sitting down in the vehicle. We employ a naive Bayesian classifier [36] to detect and identify our preliminary collected data, containing 20 cases of each activity mentioned above and 100 other behaviors such as running and jumping, with accuracy as high as 91.25%.
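A minimal sketch of such a classifier, assuming scikit-learn and the DCT features above; the training data and labels are placeholders, not the authors' dataset.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Placeholder training data: rows are DCT feature vectors per window,
# labels distinguish "entering" (1) from all other activities (0).
rng = np.random.default_rng(1)
X_train = rng.normal(size=(120, 16))
y_train = rng.integers(0, 2, size=120)

clf = GaussianNB().fit(X_train, y_train)

# At run time each new window is classified independently; repeated
# consecutive "entering" decisions raise the credibility of the detection.
X_new = rng.normal(size=(5, 16))
print(clf.predict(X_new), clf.predict_proba(X_new)[:, 1])
```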
b) Entering Detection: Empirically, the activity of getting into a vehicle consists of five basic actions: walking towards the vehicle, opening the door, turning the body, entering, and sitting down. We extract features from the linear acceleration, converted to both the horizontal plane and the gravity direction in the Earth frame coordinate system. A study [37] indicates that most users carry their phones in their trousers, so we put our emphasis mainly on the trouser-pocket case.


We first collected 200 samples of entering a vehicle from both the driver and passenger sides in a parking lot, gathered by a group of volunteers with smartphones in different trouser pockets. Because of the irregular and unpredictable positions of the smartphones in the pockets, as well as the orientation of the vehicle, we extract the linear acceleration and transform it to the Earth frame coordinate system. Thus, no matter which direction the vehicle is heading, the entering behavior, from the perspective of the head of the vehicle, is identical. We also calculate the magnitude of the joint linear acceleration in the horizontal plane as $\sqrt{a_{north}^2 + a_{east}^2}$, and present the activity of entering the vehicle in both the horizontal plane and the ground direction in two cases (shown in Figure 4), in which the difference is obvious.
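The joint horizontal acceleration above can be computed directly once the linear acceleration is rotated into the Earth frame; a minimal numpy sketch follows (the rotation itself, e.g., via the platform's rotation matrix, is assumed to have been applied already).

```python
import numpy as np

def horizontal_magnitude(acc_earth):
    """acc_earth: (N, 3) linear acceleration in the Earth frame,
    columns ordered (north, east, up). Returns sqrt(a_n^2 + a_e^2)."""
    a_north, a_east = acc_earth[:, 0], acc_earth[:, 1]
    return np.hypot(a_north, a_east)   # heading-independent magnitude

acc = np.array([[0.3, -0.4, 9.8], [1.0, 0.0, 9.7]])
print(horizontal_magnitude(acc))       # [0.5, 1.0]
```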

Fig. 4. Data extracted from the accelerometer in the horizontal plane and ground direction when people enter vehicles: (a) driver side, left pocket; (b) driver side, right pocket.

c) Accuracy Improvement: With the naive Bayesian classifier, the accuracy and precision of distinguishing 40 cases of entering from nearly 296 other activities are 84.46% and 45.24%, respectively, as shown in Table I. Although the entering activity itself is identified with high accuracy, a number of other activities (mostly sitting down) are misjudged as entering (the false positive rate is relatively high), which degrades the performance. After analysis, we conclude that the two main reasons for such misjudgment are the similarity between different behaviors and the multiple patterns of the same activity. To overcome this weakness, we propose a more comprehensive ambient-aware filter to improve the accuracy.
TABLE I
PRELIMINARY RESULTS ON ACTIVITY RECOGNITION

              Entering Vehicles   Other activities
Test True            38                  46
Test False            3                 250

The filter is based on the observation that, compared with a common sitting activity, the sensed ambient magnetic field fluctuates dramatically when approaching the vehicle, because of the special material of the vehicle and the engine. We conducted comprehensive tests and plot the changes of the magnetic field in the two scenarios (results shown in Figure 5). The figures indicate that the significant difference lies in the stability of the magnetic field value. Neglecting the absolute error of the values, the sensory data vibrate clearly throughout the activity in the first case, while the second case is much steadier. This sensitivity to the vehicle therefore motivates the ambient-aware filter, which helps elevate the identification accuracy.
Fig. 3. Sensory data extracted from the accelerometer for different behaviors: (a) in trouser pockets; (b) ascending stairs; (c) sitting down.

Fig. 5. The magnetic field in two scenarios: (a) toward a vehicle and sit; (b) toward a chair and sit.

D. Left or Right Side?
The system conducts side-detection operations simultaneously with entering detection, so that the detection delay is minimized. We call the leg close to the vehicle the inner leg, and by looking into the collected data of our experimental cases, we made an interesting observation: when the smartphone is located in a pocket of the inner leg, the boarding motion leads to a large fluctuation in the acceleration readings followed by a relatively smaller one. The observation is exactly the opposite when the smartphone is put in a pocket of the other leg. Unfortunately, it is still hard to judge the boarding side of a user from this alone, since a user with the phone in the left pocket boarding from the right side shows a pattern similar to that of a user with the phone in the right pocket boarding from the left side. Thus a purely acceleration-based determination may fail at this stage.

Fortunately, we found another key factor determining the direction of body rotation when entering from either side. Usually, in order to face front, the driver has to turn left while the passenger turns right before he/she enters the car (according to the car layout in the US), and this short action can be captured by the gyroscope in both pitch and roll, as shown in Figure 6. Although the orientation of the vehicle is unknown and unpredictable, the turning-based determination proved robust during our evaluation. We also adopt an extended Kalman filter to eliminate the internal mechanical noise of the sensors.

Fig. 6. Side detection: the observation of rotation along BFC: (a) left pocket, driver side; (b) right pocket, driver side.
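A hedged sketch of the turn-direction cue: integrating the gyroscope rotation rate over the entering window and thresholding the sign of the net angle. The axis convention, the threshold, and the simple rectangular integration are our assumptions, not the authors' exact procedure (which also fuses pitch and roll and applies an extended Kalman filter).

```python
import numpy as np

def boarding_side(gyro_z, fs=20, threshold=0.5):
    """gyro_z: rotation rate (rad/s) about the body vertical axis during
    the entering window. Integrate to a net angle: a left turn (driver
    side in the US) accumulates the opposite sign of a right turn."""
    angle = np.sum(gyro_z) / fs          # simple rectangular integration
    if angle > threshold:
        return "left-turn (likely driver side)"
    if angle < -threshold:
        return "right-turn (likely passenger side)"
    return "undecided"

# Example: a brief, mostly positive rotation burst.
print(boarding_side(np.r_[np.zeros(20), 0.8 * np.ones(30), np.zeros(20)]))
```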

E. Front or Back Seats?

Fig. 7. Changes of magnetic field in various scenarios: (a) front vs. back; (b) engine starts.

Although existing works can estimate the front or back position in the vehicle through the distance from the smartphone to the speakers [5] or the sound level of the turn signal [6], the additional costs of speaker placement and phone-to-phone comparison through a cloud server cannot be neglected. At this stage, we propose two independent approaches (magnetic variation when the engine starts, and vehicle vibration when driving) to determine whether the smartphone is located in the front row or the back row.

As mentioned in the previous section, the magnetometer is sensitive to variations of the magnetic field in the ambient environment, especially around the engine. After the user sits in the vehicle, we assume he/she will not move dramatically because of the limited space, so the magnetic data stay relatively stable. Based on our observation, when a user starts the engine, the magnetometer experiences an obvious pulse at that moment, which is a good signal for detecting whether the engine has started.

Fig. 8. Driving through bumps and potholes: (a) driving through a bump; (b) driving through a pothole.

The distance to the engine determines the amplitude of this pulse, with the amplitude reaching approximately 20 µT when the smartphone is mounted at the windshield. We also test the variation of the magnetic value in two extreme cases simultaneously, with one phone left in a pocket close to the backrest of the front seat and another behind the backrest, representing the front and back rows respectively. We plot the value of the magnetic field in Figure 7(a) over the same time span. An interesting phenomenon we notice is that even when the smartphone is located close to the backrest, the magnetic pulse can still be detected in the front row, but with a smaller amplitude change (around 3 µT; the zoomed-in view is shown in Figure 7(b)). The smartphone in the back, on the other hand, can hardly record anything. Thus, we exploit the instantaneous magnetic field variation when the engine starts to determine the row by fusing the readings from the magnetometer.
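To illustrate the engine-start cue, here is a sketch that flags an instantaneous magnetic pulse against a running baseline. The window length and the 3 µT threshold (the smallest front-row amplitude reported above) are our assumptions.

```python
import numpy as np

def detect_engine_spike(mag, fs=20, baseline_s=2.0, thresh_uT=3.0):
    """mag: (N,) magnetic field magnitude in uT. Returns the index of the
    first sample whose jump above the trailing-window median exceeds
    thresh_uT, or None."""
    w = int(baseline_s * fs)
    for i in range(w, len(mag)):
        baseline = np.median(mag[i - w:i])
        if mag[i] - baseline > thresh_uT:
            return i
    return None

mag = np.r_[24 + 0.2 * np.random.default_rng(2).normal(size=100),
            [31.0], 24 + np.zeros(20)]   # synthetic 7 uT pulse
print(detect_engine_spike(mag))          # -> 100
```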
The second method is inspired by both Nericell [38] and Pothole Patrol (P2) [39], in which researchers use acceleration sensors and GPS deployed on vehicles to detect potholes and bumps on the road. Empirically, when driving through a bump or a pothole, people sitting in the back row feel more bumpiness than those sitting in the front. This conjecture was verified after we collected a set of data by driving through bumps and potholes: the sensory data exactly match it. The results shown in Fig. 8 are from the following test case: two smartphones held by two passengers sitting in the front and back rows sample the acceleration at 20 Hz while the car is moving, and the linear acceleration in the gravity direction in both cases is plotted in Figure 8. Due to the shape of bumps and deceleration strips, each wheel experiences two consecutive large vibrations, when it first hits the bump and then hits the ground. The smartphone is sensitive enough that the jolts of both the front and back wheels can be observed, only with different intensities. Detailed analysis of the data indicates that, for both types of road feature, front-row users experience two jolts of similar intensity, while in the back row the front-wheel vibration is clearly smaller than the back-wheel one, as shown in Figures 8(a) and 8(b), respectively.
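A sketch of the front-vs-back cue from a single bump, assuming scipy's peak finder. The relative-intensity rule (front row: two comparable peaks; back row: markedly unequal peaks) follows the observation above, while the peak height and ratio thresholds are our guesses.

```python
import numpy as np
from scipy.signal import find_peaks

def row_from_bump(acc_g, fs=20):
    """acc_g: vertical linear acceleration while crossing one bump.
    Compares the two dominant peaks (front wheel, then back wheel)."""
    peaks, props = find_peaks(np.abs(acc_g), height=0.5, distance=fs // 4)
    if len(peaks) < 2:
        return "undecided"
    h = np.sort(props["peak_heights"])[-2:]      # two largest peaks
    ratio = h[0] / h[1]
    # Similar intensities -> front row; markedly unequal -> back row.
    return "front row" if ratio > 0.7 else "back row"

bump = np.zeros(60)
bump[[15, 25]] = [2.0, 1.9]          # comparable double jolt
print(row_from_bump(bump))           # -> front row
```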
F. Texting?

Accidents are prone to happen when the driver's attention is distracted, especially by texting, tweeting, and composing emails. However, completely blocking phone calls and texting seems inappropriate, since in an emergency such blocking could make the situation worse. In TEXIVE, we allow incoming phone calls and messages, but trigger the necessary warning when a dangerous texting activity is detected. Therefore, the key to satisfying this demand is detecting distracted behavior within a small amount of input, or before the user touches the screen. To accomplish this, we conducted a set of experiments in which a group of volunteers composed multiple sentences on smartphones in both normal and driving scenarios, focusing on both the time interval between two inputs and the frequency of typos. To avoid accidents, we conducted the driving tests in a large parking lot with few vehicles.
We plot the typing time interval between two inputs in Figure 9(a), which illustrates that 90% of typing inputs fall within 800 ms when the user is fully concentrated, while fewer than 70% do in the distracted scenario. The average typing intervals are approximately 536.55 ms and 742.42 ms, respectively, with a large difference in standard deviation as well. This mainly stems from the fact that the driver has to pause and watch the road after each word or phrase to stay alert. Typos are also more likely to happen in this case, as shown in Figure 9(b). The typo data are collected by monitoring the backspace key; we measure the number of inputs between two consecutive typos, with averages of 50 vs. 30 in the normal and driving scenarios, respectively.
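As a simple illustration, the sketch below thresholds the mean inter-keystroke interval, using the midpoint of the two reported averages as the decision boundary (matching the "standard classifier" line in Figure 14); the midpoint rule and the minimum number of keys are our assumptions.

```python
import numpy as np

# Reported average inter-key intervals: ~536.55 ms when concentrated and
# ~742.42 ms when driving; the midpoint threshold is our assumption.
THRESH_MS = (536.55 + 742.42) / 2.0

def texting_state(intervals_ms, min_keys=10):
    """Classify a short run of inter-keystroke intervals (ms)."""
    if len(intervals_ms) < min_keys:
        return "insufficient input"
    mean = float(np.mean(intervals_ms))
    return "distracted (warn)" if mean > THRESH_MS else "normal"

print(texting_state([520, 480, 610, 550, 590, 500, 530, 560, 540, 510]))
print(texting_state([820, 760, 900, 700, 780, 810, 740, 880, 790, 830]))
```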
Fig. 9. The information extracted from typing: (a) the probability of different time intervals between two inputs; (b) the typo frequency.

G. Making the Decision

Now we can make the decision from three parts of information: front or back, left or right, and auxiliary side information. Bump and magnetic field information are both included in the first part: the bump information yields a highly accurate front-or-back decision, but it is not available before driving starts, since meeting a bump may require several minutes of driving. So at the beginning we use the magnetic field information to decide front or back, and the bump information is used as soon as we meet the first bump. The left-or-right decision relies only on pattern comparison. After these two parts, we can tell whether the user is a driver. The remaining side information is used to confirm and correct our decision: if all of this side information indicates a different decision within several texting actions, the system changes the decision to the opposite.
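The rule-based fusion just described can be sketched as follows; the interface (booleans and a vote list) is our invention for illustration, not the authors' actual data structures.

```python
def fuse_decision(mag_front, bump_front, entered_left, side_votes):
    """Rule-based fusion following the description above.
    - mag_front:  front-row decision from the engine-start magnetic pulse
    - bump_front: front-row decision from bumps (None until the first bump)
    - entered_left: left-side boarding inferred from the turning pattern
    - side_votes: recent auxiliary evidence, True meaning 'driver-like'
    """
    front = bump_front if bump_front is not None else mag_front
    driver = front and entered_left            # US seating layout assumed
    # If all recent side evidence disagrees, flip the decision.
    if side_votes and all(v != driver for v in side_votes):
        driver = not driver
    return "driver" if driver else "passenger"

print(fuse_decision(mag_front=True, bump_front=None,
                    entered_left=True, side_votes=[True, True]))
```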
IV. REDUCING ENERGY CONSUMPTION
Energy consumption is a paramount concern for mobile devices, especially smartphones. TEXIVE is designed to monitor the driver's activity throughout the day, without missing any important signals while preserving normal daily usage.


Facing this dilemma, a dynamic sampling strategy is used, based on a self-learnt daily routine generated through a closed loop. Empirically, we notice that people often have to walk to a parking lot or garage to pick up the vehicle, so detecting walking becomes an effective way to predict whether the user will start driving. Since most people drive at relatively fixed times, especially on weekdays, TEXIVE conducts walking detection with a dynamic duty cycle to save energy (a high sampling rate at commute time, reduced to 10% for the rest of the day).
TEXIVE learns when the user drives during a day and adjusts the personalized commute time according to historical data. When the current time $T$ is close to the commute time $T_D$, i.e., $T = T_D \pm \epsilon T_{th}$, where $T_{th}$ is the variance of historical commute times and $\epsilon$ is a constant, we sample data at a high frequency, say $1/t_c$. For the rest of the time, the system detects the walking activity with a sampling frequency $1/t_d$, which is much smaller than $1/t_c$.
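A minimal sketch of this commute-aware duty cycling; the concrete values for $\epsilon$, $T_{th}$, and the fast/slow sampling periods are our assumptions.

```python
def sampling_period_s(t_hour, commute_hours, t_th=0.5, eps=1.0,
                      t_c=0.05, t_d=0.5):
    """Pick the sensor sampling period for the current time of day.
    commute_hours: learnt commute times T_D (hours); t_th their spread;
    eps a constant; t_c / t_d the fast / slow periods (values assumed)."""
    near = any(abs(t_hour - td) <= eps * t_th for td in commute_hours)
    return t_c if near else t_d

# Learnt commutes around 8:30 and 17:45.
for t in (8.4, 12.0, 17.9):
    print(t, "->", sampling_period_s(t, [8.5, 17.75]), "s")
```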
For each detection, on the other hand, the energy consumption depends on whether TEXIVE can capture usable bump or pothole information, and how soon. Suppose the vehicle is driving at a constant velocity, and the bump detection runs in a cycle of $w + s$, where $w$ is the duration of detection and $s$ denotes the sleep time. The system stops checking once it detects a bump or pothole. If bumps and potholes follow a Poisson process, the probability of detecting $k$ bumps or potholes in a time interval $[t, t+\tau]$ is $P(k) = \frac{e^{-\lambda\tau}(\lambda\tau)^k}{k!}$, where $\lambda$ is the rate parameter. Thus, the probability of successfully detecting the $k$th bump or pothole in the $i$th detection cycle is $P_i^k = P_i^{hit} \cdot P_{k-1}^{miss} = (1 - e^{-\lambda w})\frac{w}{s+w}$. Suppose the average power for sampling sensory data and running activity recognition per unit time is $C$; then the total energy consumption under the same circumstance is $C((i-1)(w+s) + t)$, where $t$ is the time for identifying a bump or pothole in the $i$th sampling, and the overall expected cost is $E(k) = (1 - e^{-\lambda w})\frac{w}{s+w} \cdot C((i-1)(w+s) + t)$. We tested a segment of road (over 5 miles) containing both local streets and highway. The bumps measured in our data are not only the regular speed bumps people experience; we treat any non-smooth part of a road segment that causes bump-like behavior as a bump, and record the time interval between driving through consecutive bumps or potholes, as shown in Figure 10. The figure shows that the probability of a vehicle driving through a bump within 50 seconds is over 80%, so the method is feasible and reliable.

Fig. 10. Bump on the road: CDF of the time interval between two bumps.
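One plausible numeric reading of these expressions, treating the per-cycle hit probability as $p = (1-e^{-\lambda w})\frac{w}{s+w}$ and summing the cost over cycles; the parameter values below are illustrative only.

```python
import math

def expected_cost(lam, w, s, C, t, max_cycles=200):
    """Expected detection cost, one reading of the expressions above:
    per-cycle hit probability p = (1 - exp(-lam*w)) * w / (s + w),
    cost of succeeding in cycle i: C * ((i-1)*(w+s) + t)."""
    p = (1.0 - math.exp(-lam * w)) * w / (s + w)
    total, miss = 0.0, 1.0
    for i in range(1, max_cycles + 1):
        total += miss * p * C * ((i - 1) * (w + s) + t)
        miss *= (1.0 - p)
    return total

# e.g., one bump per 50 s on average, 5 s sensing per 15 s cycle.
print(expected_cost(lam=1 / 50, w=5.0, s=10.0, C=1.0, t=2.0))
```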

V. EVALUATIONS

In this section, we describe the intensive evaluation of TEXIVE component by component. All experiments are conducted using two different smartphones (Samsung Galaxy S3 and Galaxy Note II) in two different vehicles (a Sedan and an SUV) by multiple users. The whole process is evaluated on streets in Chicago, except that the texting part is evaluated in a large parking lot to avoid accidents.

Fig. 11. Detecting entering vehicles: (a) detecting the first arriving signal; (b) recognition of entering vehicles.

A. Getting on Vehicle
We first evaluate the performance of entering-activity detection, more specifically, the capability of both extracting the entering pattern from a series of sensory data and distinguishing it from other activities. We deployed TEXIVE in the background of multiple users' smartphones in the office for one week, and collected the sensory data for analysis. Although a large amount of sensory data was gathered and many successful detections were made, the number of driving-related activities is much smaller than the others, because most of our volunteers only drive at commute time, which we also consider a credible observation.
Figure 11(a) illustrates the successful detection of the first signal, reflected in the acceleration from the perspective of both the horizontal and ground directions. The smartphone is put in the pocket; TEXIVE perceives a walking activity from the 112th time slot, and the entering signal arrives 2 seconds later (133rd time slot). Since the whole activity normally lasts approximately 5 to 6 seconds if the user is not interrupted, we slide the detection window with a 0.5 s step length to detect the action. In this case, multiple entering activities are perceived, and only in this circumstance do we increase the credibility that the user is indeed entering the vehicle.
We collected 41 entering behaviors in both an SUV and a Sedan, as well as 296 other activities in total, and we evaluate the precision, sensitivity, specificity, and accuracy with respect to different window sizes of the action. We set the window size ranging from 1.5 s to 5 s and plot the results in Figure 11(b), which indicates that the performance generally improves as the window size increases. The system reaches better performance when the window size is around 4 s to 4.5 s, with sensitivity over 90% in both cases. On the other hand, we have to admit that the precision in both cases is not as high as expected, mainly because the number of false positives is close to the number of true positives. However, the true negative rates (specificity) are above 85% in both cases, which represents the reliability of the detection, so the low precision is acceptable. Surprisingly, when we employ the ambient-aware filter, the number of false positives decreases to 0, so both the precision and the specificity rise to 100%.
Fig. 12. Side detection accuracy in different window sizes: (a) accuracy in left vs. right; (b) side detection.

Similarly, we also evaluate the influence of window size on side detection, as shown in Figure 12. For entering from either side, the accuracy climbs as the window size increases, and is around 90% except for the case with a window size of only 1.5 s in Figure 12(a). Figure 12(b) presents the precision, recall, specificity, and accuracy for the whole side-detection process. The precision reaches 90% when the window size is 3 s and keeps increasing with larger windows, peaking around 95% at a window size of 4.5 s. The total accuracy is approximately 85% at the largest window size. Considering both the entering activity and side detection, we conclude that feasible and reliable performance is achieved when the window size is set to 4.5 s.
B. Front vs. Back
Our system provides two independent approaches to front-back classification: engine-start-signal monitoring and bumpy-road-signal detection. To demonstrate the generality of both methods, we organize the experiments in both a Mazda Sedan and a Honda SUV, with the phone placed at multiple possible positions.

Although the embedded magnetometer can perceive the slight variation of the magnetic field when approaching vehicles with the phone in a trouser pocket, we have to account for the fact that some users are used to making a phone call or texting while entering, and then put the phone in the cup holder or under the dashboard. Thus our experiments mainly focus on the detection of the engine-start signal when the smartphone is held in the hand or put in other possible positions in the car.

Figure 13(a) shows the magnetic field changing at four different locations (the cup holder, held in hand, on the dashboard, and under the windshield, sorted by distance to the engine) when the engine starts. Obviously, the place closest to the engine experiences the largest fluctuation, with an amplitude of about 7 µT; as the distance to the engine increases, the amplitude of the magnetic fluctuation decreases slightly. When the smartphone is held in the hand or put in the cup holder, the amplitude is only half of the value at the windshield. Although the magnetic field in the vehicle fluctuates with the unpredictable motions of the human body and the orientation, position, and location of the smartphone, the magnetic field can be considered a feasible factor to distinguish the front from the back.
We also test the availability of engine-start detection at seven separate sampling locations in both vehicles. The location numbers indicate increasing distance to the engine, i.e., under the windshield, on the dashboard, in the driver's trouser pocket, in the cup holder, at the back of the front seat, on the back seat, and under the back windshield, respectively. Based on the experiments, the value of the magnetic field is determined by both the location and position of the smartphone, as well as its placement in the vehicle. We sample the original magnetic field values at different locations in both vehicles with the engine off in Figure 13(b). Although the readings, as shown, are irregular, we still observed instant spikes at the very moment the engine starts, as shown in Figure 13(c). The figure indicates that the closer the phone is to the engine, the more sensitive the magnetic field variation is; when the smartphone is in the back seat area, the sensor can hardly detect the changing magnetic field when the engine starts, which demonstrates that the spike from the engine is trustworthy.
We then conducted a number of comprehensive experiments in both a parking lot and on local roads to evaluate the efficiency of front-back distinguishing using bumps and potholes. Most parking lots contain deceleration strips to impel the driver to reduce driving speed, so the front-back decision can be resolved before driving out if this works, which also reduces the energy consumption as mentioned above. There are one deceleration strip and one bump in the parking lot, and we drove through each of them ten times with different driving speeds. The test results are shown in Table II; both detections are absolutely correct, and all 20 bumps are successfully detected in both locations.

TABLE II
BUMP ON THE ROAD

                Test in Front   Test in Back
Bump in Front         20              0
Bump in Back           0             20
When it comes to the street test, the results are slightly different. The experiment was conducted in a suburb at night; the total distance is approximately 5.2 miles, covering local roads and highway. Both the driver and a back-seat passenger turned on the system to estimate its exact location in the car from the sensory data while driving. The driver's smartphone detected 334 samples of readings and 23 bumps and potholes, while the back-seat passenger's detected only 286 samples but 58 bumps and potholes. The difference in the number of samples results from the fact that the passenger started later than the driver.
Fig. 13. Magnetic field fluctuations experienced at different places in the car: (a) spike when the engine starts; (b) spatial variations when idle; (c) spike amplitude when the engine starts.

In addition, although the numbers of bumps and potholes detected by the two smartphones are different, both smartphones report that they are in the right location, with an accuracy of 100%.
C. Texting Evaluation
The texting evaluation is conducted in the same parking lot as well, with 8 texting sessions under normal conditions and 20 under driving conditions. Each sentence is approximately 20 to 30 words; we collect the input time intervals and calculate the average value in real time. In Figure 14, we draw two pink lines identifying the average time interval of each scenario, and a green dashed line in the middle as the standard classifier. All the dots should be separated by the standard classifier, with the blue ones (normal texting) below and the red ones (texting while driving) above. The two misclassifications are denoted by black circles. The texting detection is reliable and feasible, with an accuracy of 90%.

Fig. 14. Texting: average time interval per text for normal texting vs. texting while driving; the dashed line is the standard classifier.

D. Driver Detection

The decision of driver detection is made by fusing the evidence from the previous sub-processes. When doing real-time recognition, the system slides the window with a 0.5 s step to match the stored activities through the naive Bayes classifier. Since an activity may be detected multiple times because of the sliding window, we require several consecutive identical recognitions before accepting one, to increase the credibility. Taking the acceleration into account as a filter, the recognition can provide a high level of confidence for the current decision.

Based on our experiments, we notice that the performance of TEXIVE mainly depends on the first two phases. We test the performance of driver detection based on the fusion of all the phases: the precision is 96.67% and the accuracy is 87.18%. Meanwhile, according to real evaluation on an Android smartphone, the recognition delay is only 0.2184 second.
E. Energy Consumption
The energy consumption of the system is determined by the running duration of the inertial sensors, as well as the sampling rates. The working strategy of the system is determined by the individual's living pattern, more specifically, the regularity of activities.

We conducted a group of experiments using a Galaxy S3 to test the energy consumption under high-density sampling. Without using any inertial sensors, the battery dropped 2% within half an hour, but 9% when the inertial sensors were triggered. However, when we reduce the detection to 10 s in every minute, with a sensor sampling period of 0.05 s, which matches the transition probability of transferring from walking to entering a car, the battery drops only 4% in half an hour. Compared with existing works utilizing GPS to determine whether the user is in a driving vehicle, we found that although such solutions do not require sensors to monitor the activities and adapt to user habits, the energy consumption of GPS is much larger than that of the inertial sensors. In addition, such a system has to keep GPS on and store sensory data for a certain duration once a driving activity is detected. In our experiment, the battery went from 84% to 70% over the same testing duration.
VI. CONCLUSION

This paper presented TEXIVE, a smartphone-based IoT application to detect driving-and-texting activity. Our system leverages the inertial sensors integrated in smartphones and accomplishes driver-passenger distinction without relying on any additional equipment. We evaluated each step of the detection, including activity recognition, and showed that our system achieves good sensitivity, specificity, accuracy, and precision, which leads to high classification accuracy. Through evaluation, the accuracy of successful detection is approximately 87.18%, and the precision is 96.67%. The evaluation of TEXIVE is based on the assumption that the smartphone is attached to the user's body, in a trouser pocket, most of the time.
REFERENCES

[1] National Highway Traffic Safety Administration (NHTSA), http://www.nhtsa.gov/.
[2] Insurance Institute for Highway Safety, http://www.iihs.gov/.
[3] E. Wahlstrom, O. Masoud, and N. Papanikolopoulos, "Vision-based methods for driver monitoring," in Proc. of IEEE Intelligent Transportation Systems, vol. 2, 2003, pp. 903-908.
[4] C.-W. You, N. D. Lane, F. Chen, R. Wang, Z. Chen, T. J. Bao, M. Montes-de Oca, Y. Cheng, M. Lin, L. Torresani, et al., "CarSafe app: Alerting drowsy and distracted drivers using dual cameras on smartphones," in Proc. of ACM MobiSys, 2013.
[5] J. Yang, S. Sidhom, G. Chandrasekaran, T. Vu, H. Liu, N. Cecan, Y. Chen, M. Gruteser, and R. Martin, "Detecting driver phone use leveraging car speakers," in Proc. of ACM MobiCom, 2011, pp. 97-108.
[6] H. Chu, V. Raman, J. Shen, R. Choudhury, A. Kansal, and V. Bahl, "In-vehicle driver detection using mobile phone sensors," in Proc. of ACM MobiSys, 2011.
[7] Rode Dog, http://www.rodedog.com/.
[8] TextBuster, http://www.textbuster.com/.
[9] H. A. Shabeer and R. Wahidabanu, "A unified hardware/software approach to prevent mobile phone distraction," EJSR, vol. 73, no. 2, pp. 221-234, 2012.
[10] C. Bo, X. Jian, X.-Y. Li, X. Mao, Y. Wang, and F. Li, "You're driving and texting: Detecting drivers using personal smart phones by leveraging inertial sensors," in Proc. of ACM MobiCom, Poster, 2013.
[11] A. Sathyanarayana, S. Nageswaren, H. Ghasemzadeh, R. Jafari, and J. H. Hansen, "Body sensor networks for driver distraction identification," in Proc. of IEEE ICVES, 2008, pp. 120-125.
[12] H. L. Chu, V. Raman, J. Shen, R. R. Choudhury, A. Kansal, and V. Bahl, "Poster: You driving? Talk to you later," in Proc. of ACM MobiSys, 2011, pp. 397-398.
[13] L. Bergasa, J. Nuevo, M. Sotelo, R. Barea, and M. Lopez, "Real-time system for monitoring driver vigilance," IEEE Transactions on Intelligent Transportation Systems, vol. 7, no. 1, pp. 63-77, 2006.
[14] M. Kutila, M. Jokela, G. Markkula, and M. Rue, "Driver distraction detection with a camera vision system," in Proc. of IEEE ICIP, vol. 6, 2007, pp. VI-201.
[15] K. Li, P. Baudisch, and K. Hinckley, "Blindsight: Eyes-free access to mobile phones," in Proc. of ACM SIGCHI, 2008, pp. 1389-1398.
[16] M. Wiberg and S. Whittaker, "Managing availability: Supporting lightweight negotiations to handle interruptions," ACM TOCHI, vol. 12, no. 4, pp. 356-387, 2005.
[17] L. Nelson, S. Bly, and T. Sokoler, "Quiet calls: Talking silently on mobile phones," in Proc. of ACM SIGCHI, 2001, pp. 174-181.
[18] J. Lindqvist and J. Hong, "Undistracted driving: A mobile phone that doesn't distract," in Proc. of ACM HotMobile, 2011, pp. 70-75.
[19] Y. Wang, J. Yang, H. Liu, Y. Chen, M. Gruteser, and R. P. Martin, "Sensing vehicle dynamics for determining driver phone use," in Proc. of ACM MobiSys, 2013.
[20] L. Bao and S. Intille, "Activity recognition from user-annotated acceleration data," Pervasive Computing, pp. 1-17, 2004.
[21] J. Parkka, M. Ermes, P. Korpipaa, J. Mantyjarvi, J. Peltola, and I. Korhonen, "Activity classification using realistic data from wearable sensors," IEEE Transactions on Information Technology in Biomedicine, vol. 10, no. 1, pp. 119-128, 2006.
[22] E. Tapia, S. Intille, W. Haskell, K. Larson, J. Wright, A. King, and R. Friedman, "Real-time recognition of physical activities and their intensities using wireless accelerometers and a heart rate monitor," in Proc. of ISWC, 2007, pp. 37-40.
[23] N. Krishnan, D. Colbry, C. Juillard, and S. Panchanathan, "Real time human activity recognition using tri-axial accelerometers," in Proc. of Sensors, Signals and Information Processing Workshop, 2008.
[24] A. Mannini and A. Sabatini, "Machine learning methods for classifying human physical activity from on-body accelerometers," Sensors, vol. 10, no. 2, pp. 1154-1175, 2010.
[25] S. Lee and K. Mase, "Activity and location recognition using wearable sensors," IEEE Pervasive Computing, vol. 1, no. 3, pp. 24-32, 2002.
[26] N. Ravi, N. Dandekar, P. Mysore, and M. Littman, "Activity recognition from accelerometer data," in AAAI, vol. 20, no. 3, 2005, p. 1541.
[27] X. Su, H. Tong, and P. Ji, "Activity recognition with smartphone sensors," Tsinghua Science and Technology, vol. 19, no. 3, pp. 235-249, June 2014.
[28] C. Bo, L. Zhang, X.-Y. Li, Q. Huang, and Y. Wang, "SilentSense: Silent user identification via touch and movement behavioral biometrics," in Proc. of ACM MobiCom, Poster, 2013.
[29] C. Bo, L. Zhang, T. Jung, J. Han, X.-Y. Li, and Y. Wang, "Continuous user identification via touch and movement behavioral biometrics," in Proc. of IEEE IPCCC, 2014.
[30] C. Bo, T. Jung, X. Mao, X.-Y. Li, and Y. Wang, "SmartLoc: Sensing landmarks silently for smartphone based metropolitan localization," EURASIP Journal on Wireless Communications and Networking, to appear.
[31] B. Guo, Z. Wang, Z. Yu, Y. Wang, N. Yen, R. Huang, and X. Zhou, "Mobile crowd sensing and computing: The review of an emerging human-powered sensing paradigm," ACM Computing Surveys, vol. 48, no. 1, Article 7, August 2015.
[32] Y. Wang, Y. Ge, W. Wang, and W. Fan, "Mobile big data meets cyber-physical system: Mobile crowdsensing based cyber-physical system for smart urban traffic control," in Proc. of the 2015 Workshop on Big Data Analytics in CPS: Enabling the Move From IoT to Real-Time Control, Seattle, Washington, April 2015.
[33] L. Rabiner and B. Juang, "An introduction to hidden Markov models," IEEE ASSP Magazine, vol. 3, no. 1, pp. 4-16, 1986.
[34] F. Ichikawa, J. Chipchase, and R. Grignani, "Where's the phone? A study of mobile phone location in public spaces," in Proc. of ACM MobiSys, 2005, pp. 1-8.
[35] N. Ahmed, T. Natarajan, and K. Rao, "Discrete cosine transform," IEEE Transactions on Computers, vol. 100, no. 1, pp. 90-93, 1974.
[36] P. Langley, W. Iba, and K. Thompson, "An analysis of Bayesian classifiers," in Proc. of AAAI, 1992, pp. 223-228.
[37] Y. Cui, J. Chipchase, and F. Ichikawa, "A cross culture study on phone carrying and physical personalization," in Proc. of UI-HCII, Springer, 2007, pp. 483-492.
[38] P. Mohan, V. Padmanabhan, and R. Ramjee, "Nericell: Rich monitoring of road and traffic conditions using mobile smartphones," in Proc. of ACM SenSys, 2008, pp. 323-336.
[39] J. Eriksson, L. Girod, B. Hull, R. Newton, S. Madden, and H. Balakrishnan, "The pothole patrol: Using a mobile sensor network for road surface monitoring," in Proc. of ACM MobiSys, 2008.

Cheng Bo is a PhD candidate in the Department of Computer Science at the University of North Carolina at Charlotte, and also a joint member of the Wireless Network Lab in the Department of Computer Science at Illinois Institute of Technology. He received both his M.S. and B.E. from the Department of Computer Science at Hangzhou Dianzi University. His research interests include localization, mobile networking and computing, wireless networks, and security and privacy.

Xuesi Jian is a Ph.D. student in the Department of Computer Science, Illinois Institute of Technology, USA. He received his B.E. degree in Software Engineering from Sun Yat-Sen University, P.R. China, in 2009, and his Master's degree in Computer Science from Illinois Institute of Technology. His research interests include mobile sensing and wireless communication.



Taeho Jung received the B.E degree in Computer Software from Tsinghua University, Beijing, in
2007, and he is working toward the Ph.D degree in
Computer Science at Illinois Institute of Technology
while supervised by Dr. Xiang-Yang Li. His research
area, in general, includes privacy & security issues
in mobile network and social network analysis.
Specifically, he is currently working on the privacypreserving computation in various applications and
scenarios.

Junze Han has been a Ph.D. student in Computer Science at Illinois Institute of Technology since 2011. He received the B.E. degree from the Department of Computer Science at Nankai University, Tianjin, in 2011. His research interests include data privacy, mobile computing, and wireless networking.

Xiang-Yang Li is a professor at the School of Computer Science and Technology, University of Science and Technology of China. He was previously a professor in the Computer Science Department, Illinois Institute of Technology. Dr. Li received an MS (2000) and a PhD (2001) from the Department of Computer Science at the University of Illinois at Urbana-Champaign, and a Bachelor's degree from the Department of Computer Science and a Bachelor's degree from the Department of Business Management at Tsinghua University, P.R. China. His research interests include wireless networking, mobile computing, security and privacy, cyber-physical systems, and algorithms. He and his students won five best paper awards (IEEE GlobeCom 2015, IEEE IPCCC 2014, ACM MobiCom 2014, COCOON 2001, IEEE HICSS 2001) and one best demo award (ACM MobiCom 2012). He published a monograph, "Wireless Ad Hoc and Sensor Networks: Theory and Applications," and co-edited several books, including the "Encyclopedia of Algorithms." Dr. Li is an editor of several journals, including IEEE Transactions on Mobile Computing and IEEE/ACM Transactions on Networking. He has served many international conferences in various capacities, including ACM MobiCom, ACM MobiHoc, and IEEE MASS. He is an IEEE Fellow and an ACM Distinguished Scientist.

Xufei Mao received the Bachelor's degree in Computer Science from Shenyang University of Technology and the MS degree in Computer Science from Northeastern University in 1999 and 2003, respectively. He received the Ph.D. degree in Computer Science from Illinois Institute of Technology, Chicago, in 2010. He is currently an associate research professor in the School of Software, Tsinghua University. His research interests span wireless ad hoc networks and the Internet of Things. He is a member of the IEEE.


Yu Wang is a full professor of computer science at the University of North Carolina at Charlotte and an adjunct professor of information engineering at Taiyuan University of Technology, China. He received his Ph.D. degree (2004) in computer science from Illinois Institute of Technology, and his B.Eng. degree (1998) and M.Eng. degree (2000) in computer science from Tsinghua University, China. His research interests include wireless networks, mobile crowd sensing, mobile social networks, mobile computing, complex networks, and algorithm design. He has published over 150 refereed papers. He has served as an Editorial Board Member of several international journals, including IEEE Transactions on Parallel and Distributed Systems. He is a recipient of the Ralph E. Powe Junior Faculty Enhancement Award from Oak Ridge Associated Universities in 2006 and a recipient of the Outstanding Faculty Research Award from the College of Computing and Informatics at UNC Charlotte in 2008. He won Best Paper Awards from IEEE HICSS-35 (2002), IEEE MASS (2013), IEEE IPCCC (2013), and Tsinghua Science and Technology (2015). He is a senior member of the ACM, IEEE, and IEEE Communications Society.
