
VIRTUAL TRY-ON SYSTEM USING KINECT
Abstract:
A virtual try-on system saves time and physical effort and provides a platform to see how clothes fit your body. In this paper we present a virtual try-on system using Kinect that enables the user to see himself in selected clothes without wearing them physically. The user can select various clothes to try on; the system maps the selected clothes onto the user's body and gives the user an idea of how well each garment fits. The contribution of this paper is that our system creates 3D clothes with different textures (designs) and, after selection, simulates them on the user's body. The system is based entirely on Kinect and is a .NET application. This paper describes the methodology of our system and its effectiveness. The system is also efficient because it operates on a live video stream. The paper also reviews previously implemented systems that differ slightly from ours.
Index Terms: Virtual try-on, 3D clothes, textures, Kinect, simulation.

1. INTRODUCTION:
In retail shopping, physically trying on clothes is very time-consuming: a shopper typically tries on several garments before deciding what to buy. The Virtual Try-On System is a virtual-reality-based application that helps the user save time, make decisions, and place orders. Using this application, the shopper can see clothes on his body without physically trying them on. Virtual reality and 3D modeling are now widely used technologies in fields such as medicine, the military, and gaming. With that in mind, we created this application, which gives the user a platform to try on several garments virtually. The system simulates the selected virtual clothes on the user's body in real time, and the user can see the virtual clothes on his body. The user interacts with the system through gestures; a gesture is a movement of a part of the body, especially a hand or the head, that expresses a meaning. Our system is built entirely on the .NET Framework as a WPF application, i.e. a Windows-based application that runs in a Windows environment.
Figure 1 shows the user interface of the virtual try-on system.
Figure 1
In Figure 1 you can see that the catalog on the left offers two options: select a shirt or select a pant. After that, another catalog appears with a collection of shirts or pants from which the user can select the desired garment. The menu offers several further options; Figure 2 shows the menu of the virtual try-on system.

Figure 2
In Figure 2 you can see the menu options: the user can take a screenshot, remove the virtual clothes, mute the application's sound, and order the selected clothes.
The application's entire methodology is explained in the remainder of this paper.
2. LITERATURE OVERVIEW:
Considerable work has been done on such systems, which we discuss in this section. The Mixed Reality Virtual Clothes Try-On System developed in [1] presents three scenarios: 1) virtual clothes on an avatar, 2) virtual clothes on the user's image, and 3) virtual clothes on an avatar blended with the user's face image. The system Virtual Try-on using Kinect and HD camera developed in [2] lets the user try on clothes in front of a desktop computer screen using a Kinect and an HD camera. Real Time Virtual Mirror Using Kinect, developed in [3], presents a platform where a user can try on clothes in a virtual-mirror environment using Kinect. A Real Time Virtual Dressing Room Application using Kinect, developed in [4], is a desktop application where the user tries on clothes in front of the Kinect camera. The virtual try-on system developed in [5] uses the Kinect sensor, speech recognition, and the Google API for uploading and downloading snapshots and videos to implement a virtual mirror where users can virtually try on clothes. Creation of 3D Human Avatar using Kinect, developed in [6], creates a 3D human avatar using Kinect, which is very helpful in building such applications; the system developed in [1] also uses a 3D human avatar. Image-based 3D Avatar for Virtual Try-on Applications, developed in [7], provides a 3D scanning system that scans a user easily with only a few manual tweaks; it also includes cloth fitting and simulation. Virtual Try-On, developed in [8], provides a platform for users to try on clothes using certain image-processing techniques.

3. METHODOLOGY:
Our virtual try-on system consists of a Microsoft Kinect sensor and a screen. Figure 3 shows the Microsoft Kinect sensor. The Kinect is an input device developed by Microsoft, mostly used with the Xbox 360 to sense motion in games. It consists of a depth camera, an RGB camera, and a microphone array, and it can tilt up and down within -27 to +27 degrees.

Figure 3

In our system the user virtually tries on clothes over the live video stream provided by the Kinect RGB camera. The depth camera works in both bright and dark environments, which suits our application. Figure 4 shows the complete architecture of our system.
To connect the Kinect sensor to our application we installed the Kinect SDK 1.8 provided by Microsoft.

Figure 4

3.1 Kinect Skeleton Tracking:


Kinect can construct a skeleton of the user, a feature that was very helpful for our application. Figure 5 shows the skeleton created by the Kinect sensor.

Figure 5
The skeleton created by Kinect is a combination of several joints of different types, such as shoulder center, shoulder left, and shoulder right. Figure 6 shows all the joints in a skeleton created by the Kinect sensor.

Figure 6

The joints that were useful for our application are the following:

 Shoulder center
 Shoulder left
 Shoulder right
 Hip center
 Hip right
 Hip left
 Knee right
 Knee left

Several tracking techniques are discussed in [9]; the technique we used in our application to track skeleton information is widely used. To draw the skeleton in our application we use the Polyline class, a built-in .NET class for drawing shapes. The skeleton shape is drawn with this class, and to enable skeleton tracking the Kinect SDK provides its Skeleton class: you open the skeleton frame to receive joint data. Figure 7 shows the Kinect skeleton tracked in our application.
Figure 7

As you can see in Figure 7, we draw only the joints that are useful for mapping the pant and shirt to the user's body; these are the main joints needed for proper mapping of the clothes.
In our system we have made the skeleton transparent for a better user interface: the joints and the mapping are the same, but the skeleton itself is not drawn visibly. Figure 8 shows the transparency of the skeleton in our system.

Figure 8
3.2 GESTURE INPUT:
In our system the user gives input in the form of gestures. This is done by first tracking the user's hands and then interpreting their movements as input.

3.2.1 Hand Tracking:
Hand tracking is done by finding the positions of the hand-right and hand-left joints shown in Figure 6. Microsoft provides several libraries and APIs to enable gestures in a Kinect-based application; for hand tracking we used the DepthImagePoint class in .NET. First we initialize a DepthImagePoint object from the left or right hand joint position using the Kinect depth camera. Then we compute (lx, ly) for the left hand position and (rx, ry) for the right hand position:
lx = left point x * screen width / depth frame width

ly = left point y * screen height / depth frame height

rx = right point x * screen width / depth frame width

ry = right point y * screen height / depth frame height
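The four formulas above amount to scaling depth-frame coordinates by the ratio of screen size to depth-frame size. The sketch below is an illustrative Python version of that arithmetic, not the system's actual .NET DepthImagePoint code, and it assumes a 640x480 depth frame:

```python
def to_screen(point_x, point_y, screen_w, screen_h, depth_w=640, depth_h=480):
    """Scale a hand joint's depth-frame coordinates to screen coordinates."""
    sx = point_x * screen_w / depth_w
    sy = point_y * screen_h / depth_h
    return sx, sy

# A hand at the centre of the depth frame maps to the centre of the screen.
lx, ly = to_screen(320, 240, 1280, 960)  # → (640.0, 480.0)
```

The same function is applied to the left and right hand points to obtain (lx, ly) and (rx, ry).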

3.2.2 Gesture Implementation:
For gesture recognition we used the RoutedEvent class in .NET, which provides methods to handle events in the system. Using this class we created four events: hand-cursor enter, hand-cursor move, hand-cursor leave, and hand-cursor click. Figure 9 shows gesture recognition in our system.

Figure 9
As you can see in Figure 9, whenever the user points at a button a bubble-shaped icon appears indicating the gesture on that button. The buttons in our system are built with the Prism library, which makes a Kinect-based application easier to operate with gesture input.
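The enter/leave/click behaviour of a hand-cursor button can be modelled in a few lines. This is a hypothetical Python sketch of the per-frame hit-testing logic, not the actual RoutedEvent code; the class and method names are illustrative:

```python
class HandButton:
    """Minimal model of a gesture button raising enter, move, leave, click."""
    def __init__(self, x, y, w, h):
        self.x, self.y, self.w, self.h = x, y, w, h
        self.hovered = False

    def contains(self, px, py):
        return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h

    def update(self, hand_x, hand_y, clicked):
        """Return the events raised by the hand cursor in this frame."""
        events = []
        inside = self.contains(hand_x, hand_y)
        if inside and not self.hovered:
            events.append("enter")
        elif not inside and self.hovered:
            events.append("leave")
        elif inside:
            events.append("move")
        self.hovered = inside
        if inside and clicked:
            events.append("click")
        return events
```

Calling `update` once per skeleton frame with the tracked hand position reproduces the four events described above.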

3.3 3D Clothes:
The clothes used in our application are 3D models; each garment is modeled and designed in Blender, software widely used for 3D modeling and very useful for creating 3D characters in games. The design on each garment is a texture image mapped onto the 3D model in Blender. Along with the model file, Blender creates a material file that specifies parameters such as length, width, and height. To map the texture onto the garment we choose a region such as the shoulder or chest and then provide the path of the texture image to be mapped onto the 3D model. For the 3D environment in our application we used the Helix Toolkit library for WPF applications, which provides methods for importing and using 3D models. Figure 10 shows a complete textured 3D model of a shirt, and Figure 11 shows a complete textured 3D model of a pant.

Figure 10 Figure 11

The models shown in Figures 10 and 11 are 3D models, which are difficult to store in a database directly, but SQL databases provide the varbinary(max) type, which can store any kind of file. We store these models in separate tables named shirt and pant.
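Storing a model file as a binary column can be sketched as follows. The system itself stores varbinary(max) columns; this Python sketch uses SQLite's BLOB type as a stand-in, and the shirt table schema here is illustrative:

```python
import sqlite3

def store_model(db, name, model_bytes):
    """Insert a 3D model file's raw bytes into the shirt table as a blob."""
    db.execute("INSERT INTO shirt (name, model) VALUES (?, ?)", (name, model_bytes))

def load_model(db, name):
    """Read the stored bytes back, ready to be written out as a model file."""
    row = db.execute("SELECT model FROM shirt WHERE name = ?", (name,)).fetchone()
    return row[0] if row else None
```

In practice the bytes come from reading the exported Blender model file and are written back to a temporary file before being loaded into the 3D viewport.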
3.4 Mapping Of The Clothes:
Mapping the clothes onto the user's body was the core module of our application and the one on which we spent the most time. The following methods were used for proper mapping of clothes onto the user's body.

3.4.1 Scaling:
Scaling is a technique widely used in computer graphics to change the size of an object. In our application we use it to resize the 3D clothes model to fit the user's body. Transform3DGroup, a built-in .NET class, lets you scale, translate, and rotate an object in the application. We extract the height and width of the user's body from the skeleton and scale the model accordingly. To obtain the width for the shirt we compute the distance between the shoulder-left and shoulder-right joints, and for the pant the distance between the knee-left and knee-right joints.
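The width calculation above reduces to a joint-to-joint distance divided by the model's native width. This is an illustrative Python sketch under that assumption (the real system applies the factor through .NET's Transform3DGroup); joints are given as (x, y) positions:

```python
import math

def joint_distance(a, b):
    """Euclidean distance between two joint positions."""
    return math.dist(a, b)

def garment_scale(left_joint, right_joint, model_width):
    """Scale factor that stretches the model to span the two joints,
    e.g. shoulder left/right for a shirt, knee left/right for a pant."""
    return joint_distance(left_joint, right_joint) / model_width
```

For example, shoulders 0.5 m apart and a model 0.25 units wide yield a scale factor of 2.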

3.4.2 Point 3D To Point 2D Mapping:
To map 3D points extracted from the 3D clothes onto 2D points extracted from the user's skeleton we use the Matrix3D class in .NET. We extract the particular point on which the user stands using the Point class in .NET, map the 3D points to 2D points, and then create the new point on which the 3D garment is mapped.
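The 3D-to-2D step amounts to a projection. Below is a simplified pinhole-camera sketch in Python, not the system's actual Matrix3D code; the focal length and screen centre values are assumptions:

```python
def project_to_screen(point3d, focal=525.0, cx=320.0, cy=240.0):
    """Project a 3D point (camera space, metres) to 2D screen coordinates
    using a simple pinhole model: divide by depth, scale, and re-centre."""
    x, y, z = point3d
    return (focal * x / z + cx, focal * y / z + cy)
```

A point directly in front of the camera lands on the screen centre regardless of its depth, which is why the garment stays anchored as the user steps forward or back.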

3.4.3 Track Joints Rotation:
In our system the user sees himself wearing clothes while facing the camera, so even a slight rotation of the user's body can change the garment's position. For this purpose we track a certain rotation angle between joints; when that angle appears between the joints, the user is not facing the camera squarely and cannot see himself wearing the clothes properly.
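The rotation check can be sketched by measuring the angle of the shoulder (or knee) line against the camera's x-axis. This is a hypothetical Python version of that idea, with joints as (x, y, z) camera-space positions and an assumed tolerance:

```python
import math

def rotation_angle(left, right):
    """Angle in degrees between the left-right joint line and the image plane.
    Zero means the user is facing the camera squarely."""
    dx = right[0] - left[0]
    dz = right[2] - left[2]
    return math.degrees(math.atan2(dz, dx))

def facing_camera(left, right, tolerance_deg=10.0):
    """True when the user is close enough to a frontal pose to map clothes."""
    return abs(rotation_angle(left, right)) <= tolerance_deg
```

When `facing_camera` is false the garment's position is no longer trustworthy, matching the behaviour described above.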

3.4.4 Main Joints For Clothes:
In our system five basic parameters map a garment to the user's body: the joint to track position, the left joint to track angle, the right joint to track angle, the left joint to track scale, and the right joint to track scale.

3.4.4.1 Joint to track position:
This is the basic joint for mapping a 3D garment; it anchors the model to the body. For the shirt we use the hip center, and for the pant we also use the hip center. The 3D garment is bound robustly to this joint.
3.4.4.2 Left joint to track angle:
This joint identifies the angle when the user rotates to the left, giving a complete picture of the user's rotation on that side. For the shirt we use the shoulder left, and for the pant we use the knee left.

3.4.4.3 Right joint to track angle:
This joint identifies the angle when the user rotates to the right, giving a complete picture of the user's rotation on that side. For the shirt we use the shoulder right, and for the pant we use the knee right.

3.4.4.4 Left joint to track scale:
This joint indicates whether the model requires scaling and is very helpful for mapping clothes properly onto the user's body. For the shirt we use the shoulder center, and for the pant we use the ankle left.

3.4.4.5 Right joint to track scale:
This joint indicates whether the model requires scaling and is very helpful for mapping clothes properly onto the user's body. For the shirt we use the bottom joint, and for the pant we use the ankle right.
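The five parameters above and the joints assigned to each garment can be summarised in one table. This Python dictionary restates the assignments from the text; the key and joint names are illustrative, not the Kinect SDK's identifiers:

```python
# Joint assignments per garment, as described in sections 3.4.4.1-3.4.4.5.
CLOTH_JOINTS = {
    "shirt": {
        "track_position": "hip_center",
        "angle_left": "shoulder_left",
        "angle_right": "shoulder_right",
        "scale_left": "shoulder_center",
        "scale_right": "bottom",
    },
    "pant": {
        "track_position": "hip_center",
        "angle_left": "knee_left",
        "angle_right": "knee_right",
        "scale_left": "ankle_left",
        "scale_right": "ankle_right",
    },
}
```

Both garments share the same anchor joint, while the angle and scale joints differ by garment.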

Visual representation of pant and shirt mapping:


Figure 12 shows the mapping of the shirt, and Figure 13 shows the mapping of both the shirt and the pant in our system.

Figure 12
Figure 13
3.5 Ordering Items:
Ordering is also a main module of our system: the user can order any item he wants. The user clicks the order button, and a window appears with four sizes: small, medium, large, and extra-large. When the user selects a size, another window appears for entering the quantity of the item; after entering the quantity, the user clicks the OK button and an order id is assigned to him so that he can collect the order. The order data is stored in the database, implemented using a MySQL database and .NET. Whenever a user clicks the blue order button in our system, a pop-up window with sizes appears. Figure 14 shows the order button being clicked in our system.
Figure 14
Figure 15 shows the pop-up window presenting the different sizes for the user to select.

Figure 15
After a size is selected, a window appears for entering the quantity of the item the user wants to order. Figure 16 shows the quantity-entry window in our system.
Figure 16
As you can see in Figure 16, there are two buttons: a plus button to increase the quantity and an OK button to finalize it. When the user clicks OK after entering the quantity, a window appears showing the user's order id so that he can collect his order from the counter.

Figure 17
As you can see in Figure 17, the user's order has been placed successfully, completing the ordering module.
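The ordering flow above can be sketched as a single insert that returns the generated order id. The system itself uses MySQL with .NET; this Python sketch uses SQLite as a stand-in, and the orders table schema is hypothetical:

```python
import sqlite3

def place_order(db, item_id, size, quantity):
    """Insert an order row and return the auto-generated order id
    that is shown to the user for collecting the order."""
    cur = db.execute(
        "INSERT INTO orders (item_id, size, quantity) VALUES (?, ?, ?)",
        (item_id, size, quantity),
    )
    db.commit()
    return cur.lastrowid
```

The returned id plays the role of the order id displayed in Figure 17.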
4. Overall Performance:
The overall performance of all the modules discussed above is shown in the graph in Figure 18.

Figure 18
As you can see in Figure 18, the total time spent on the different options specified in the graph is 40, which shows that the system saves the user considerable time.

5. References:
[1] Yuan, Miaolong, et al. "A mixed reality virtual clothes try-on system." IEEE Transactions on
Multimedia 15.8 (2013): 1958-1968.
[2] Giovanni, Stevie, et al. "Virtual try-on using kinect and HD camera." International Conference on
Motion in Games. Springer, Berlin, Heidelberg, 2012.
[3] Yolcu, G., S. Kazan, and C. Oz. "Real time virtual mirror using kinect." Balkan Journal of Electrical
and Computer Engineering 2.2 (2014).
[4] Isıkdogan, Furkan, and Gokcehan Kara. "A real time virtual dressing room application using
Kinect." CMPE537 Computer Vision Course Project (2012).
[5] Shaikh, Abrar I., et al. "Virtual Try-On System." National Conference "NCPCI". Vol. 2016. 2016.
[6] Aitpayev, Kairat, and Jaafar Gaber. "Creation of 3D human avatar using kinect." Asian Transactions
on Fundamentals of Electronics, Communication & Multimedia 1.5 (2012): 12-24.
[7] Yvain Tisserand, Nadia Magnenat-Thalmann and Geneve. “Image-based 3D Avatar for Virtual Try-on
Applications”.
[8] Divivier, A., et al. "Virtual try-on." (2004).
[9] Sinthanayothin, Chanjira, Nonlapas Wongwaen, and Wisarut Bholsithi. "Skeleton Tracking using Kinect Sensor & Displaying in 3D Virtual Scene." International Journal of Advancements in Computing Technology 4.11 (2012).
