
CHAPTER-1

INTRODUCTION
1.1 SURFACE ROUGHNESS

Characterization of surface topography is important in applications involving friction, lubrication, and wear (Thomas, 1999). In general, it has
been found that friction increases with average roughness. Roughness parameters are,
therefore, important in applications such as automobile brake linings, floor
surfaces, and tires. The effect of roughness on lubrication has also been studied to
determine its impact on issues regarding lubrication of sliding surfaces, compliant
surfaces, and roller bearing fatigue. Finally, some researchers have found a
correlation between initial roughness of sliding surfaces and their wear rate. Such correlations
have been used to predict failure time of contact surfaces.

Roughness is a measure of the texture of a surface. It is quantified by the vertical
deviations of a real surface from its ideal form. If these deviations are large, the surface is
rough; if they are small, the surface is smooth. Roughness is typically considered to be the
high-frequency, short-wavelength component of a measured surface.

Roughness plays an important role in determining how a real object will interact with
its environment. Rough surfaces usually wear more quickly and have higher friction
coefficients than smooth surfaces. Roughness is often a good predictor of the performance of
a mechanical component, since irregularities in the surface may form nucleation sites for cracks
or corrosion.

Although roughness is usually undesirable, it is difficult and expensive to control in
manufacturing. Decreasing the roughness of a surface usually increases its manufacturing
cost exponentially. This often results in a trade-off between the manufacturing cost of a
component and its performance in application.

1.1.1 PURPOSE OF MEASURING SURFACE ROUGHNESS

The size and shape of the irregularities on a machined surface have a major impact
on the quality and performance of that surface and on the performance of the end
product. The quantification and management of fine irregularities on the surface, which is to
say, measurement of surface roughness, is necessary to maintain high product performance.
Quantifying surface irregularities means assessing them by categorizing them by
height, depth, and interval. They are then analysed by a predetermined method and calculated
according to industrial standards. The form and size of surface irregularities and the way the
finished product will be used determine if the surface roughness acts in a favourable or an
unfavourable way. Surfaces to be painted should let paint adhere easily, while drive surfaces
should rotate easily and resist wear. It is important to manage surface roughness so that it is
suitable for the component in terms of quality and performance.
Many parameters have been established regarding the measurement and
assessment of surface roughness. As machining technologies progress and higher-quality
products are demanded, the performance of digital instruments continues to improve. The
surface roughness of more diverse surfaces can now be assessed.
1.1.2 TERMINOLOGY OF SURFACE ROUGHNESS

Surface: The boundary that separates an object from another object, substance, or space.
Real Surface: The actual boundary of an object. Its deviations from the
nominal surface stem from the processes that produce the surface.
Measured Surface: A representation of the real surface obtained by the use
of a measuring instrument.
Nominal Surface: The intended surface boundary (exclusive of any intended
surface roughness) the shape and extent of which is usually shown and dimensioned
on a drawing or descriptive specification.
Flaws: Flaws or defects are random irregularities such as scratches, cracks, holes,
depressions, seams, tears, or inclusions.
Lay: Lay, or directionality, is the direction of the predominant surface pattern and is
usually visible to the naked eye.
Roughness: It is defined as closely spaced, irregular deviations on a scale
smaller than that of waviness. Roughness may be superimposed on waviness.
Roughness is expressed in terms of its height, its width, and its distance on
the surface along which it is measured.

Fig 1.1 Various profiles of materials
Waviness: It is a recurrent deviation from a flat surface, much like waves on the surface
of water. It is measured and described in terms of the space between adjacent crests of the
waves (waviness width) and height between the crests and valleys of the waves
(waviness height). Waviness can be caused by:

Deflections of tools, dies, or the work piece,
Forces or temperature sufficient to cause warping,
Uneven lubrication,
Vibration, or
Any periodic mechanical or thermal variations in the system during
manufacturing operations.

1.1.3 PARAMETERS OF SURFACE ROUGHNESS


Ra is the arithmetic average of the absolute values of the collected roughness
profile data points, measured from the mean line:

Ra = (1/n) * Σ |y_i|,  i = 1 … n

The average roughness Ra is expressed in units of height. In the imperial system, Ra
is expressed in millionths of an inch. This is also referred to as micro-inches, or sometimes
just as micro.
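As a sketch of the definition above, Ra can be computed directly from sampled profile heights. The function name and data below are illustrative, not from the source:

```python
# Minimal sketch: average roughness Ra from sampled profile heights.
# Deviations are taken from the profile's mean line, and Ra is the
# arithmetic mean of their absolute values.

def average_roughness(profile):
    """Ra: arithmetic average of absolute deviations from the mean line."""
    n = len(profile)
    mean_line = sum(profile) / n
    return sum(abs(y - mean_line) for y in profile) / n

heights = [0.2, -0.1, 0.4, -0.3, 0.1, -0.3]  # illustrative heights, micrometres
print(average_roughness(heights))
```

For these heights the mean line is zero, so Ra is simply the mean of the absolute values, about 0.233 µm.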

1.1.4 MEASUREMENT TECHNIQUES
Main Measurement Methods of Surface Roughness
Inspection and assessment of the surface roughness of machined work pieces can be carried
out by means of different measurement techniques. These methods can be grouped into the
following classes:

1. Direct measurement methods

2. Comparison based techniques

3. Non-contact methods

4. On-process measurement

1. Direct Measurement Methods:

Direct methods assess surface finish by means of stylus type devices. Measurements are
obtained using a stylus drawn along the surface to be measured: the stylus motion perpendicular
to the surface is registered. This registered profile is then used to calculate the roughness
parameters. This method requires interruption of the machine process, and the sharp diamond
stylus may make micro-scratches on surfaces.

Fig 1.2 Stylus

2. Comparison Based Techniques:

Comparison techniques use specimens of surface roughness produced by the same
process, material, and machining parameters as the surface to be compared. Visual and tactile
senses are used to compare a specimen with a surface of known surface finish. Because of the
subjective judgement involved, this method is useful for surface roughness Rq > 1.6 micron.

3. Non-Contact Methods:

Some work has been done on measuring surface roughness using non-contact
techniques. An electronic speckle correlation method is given here as an example.

When coherent light illuminates a rough surface, the diffracted waves from each point
of the surface mutually interfere to form a pattern which appears as a grain pattern of bright
and dark regions. The spatial statistical properties of this speckle image can be related to the
surface characteristics. The degree of correlation of two speckle patterns produced from the
same surface by two different illumination beams can be used as a roughness parameter.

The figure below shows the measurement principle. A rough surface
is illuminated by a monochromatic plane wave at an angle of incidence with respect to the
normal to the surface; multi-scattering and shadowing effects are neglected. The photo sensor
of a CCD camera placed in the focal plane of a Fourier lens is used for recording speckle
patterns. Assuming Cartesian coordinates x, y, z, a rough surface can be represented by its
ordinates Z (x,y) with respect to an arbitrary datum plane having transverse coordinates (x,y).
Then the rms surface roughness can be defined and calculated.
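As a sketch of that calculation, the rms roughness Rq over a grid of ordinates Z(x, y) is the root-mean-square deviation from the mean (datum) plane. The function and values below are illustrative:

```python
# Minimal sketch: rms surface roughness Rq from ordinates Z(x, y)
# sampled on a grid, measured relative to the mean datum plane.

def rms_roughness(z_grid):
    """Rq: root-mean-square deviation of the ordinates from their mean plane."""
    flat = [z for row in z_grid for z in row]
    mean_plane = sum(flat) / len(flat)
    return (sum((z - mean_plane) ** 2 for z in flat) / len(flat)) ** 0.5

z = [[0.1, -0.2, 0.3],
     [-0.1, 0.2, -0.3]]  # illustrative ordinates, micrometres
print(rms_roughness(z))
```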

Fig 1.3 Measurement Principle

4. On-process measurement

Many methods have been used to measure surface roughness in process. For example:

Machine vision: In this technique, a light source is used to illuminate the surface while
a digital system views the surface and sends the data to a computer to be
analysed. The digitized data is then used with a correlation chart to obtain actual roughness
values.

Inductance method: An inductance pickup is used to measure the distance between the
surface and the pickup. This measurement gives a parametric value that may be used to
give a comparative roughness. However, this method is limited to measuring magnetic
materials.
Ultrasound: A spherically focused ultrasonic sensor is positioned above the surface at a
non-normal incidence angle. The sensor sends out an ultrasonic pulse; the returned signal is
passed to a personal computer for analysis and calculation of roughness parameters.

The direct measuring technique of surface roughness is also called the
Stylus Technique. The average roughness 'Ra' of the machined surface is obtained
directly by stylus instrument. The surfaces of the work pieces which were machined by
different machining processes are subjected to this test. The stylus is made to move on
the machined work piece and the average roughness value Ra is measured, which is
obtained as a digital reading.

Fig 1.4 Taly surf

Principle of Measurement:

Principle of a contacting stylus instrument profilometer:

A cantilever holds a small tip that slides horizontally over the
object's surface. Following the profile, the cantilever moves vertically. The vertical position
is recorded as the measured profile, shown in light green.

Fig.1.5 Principle of Stylus Instrument

Roughness may be measured using contact or non-contact methods. Contact methods
involve dragging a measurement stylus across the surface; these instruments include
profilometers. Non-contact methods include interferometry, confocal microscopy, electrical
capacitance, and electron microscopy.

For 2D measurements, the probe usually traces along a straight line on a flat surface or
in a circular arc around a cylindrical surface. The length of the path that it traces is called the
measurement length. The wavelength of the lowest frequency filter that will be used to analyse
the data is usually defined as the sampling length. Most standards recommend that the
measurement length should be at least seven times longer than the sampling length, and
according to the Nyquist-Shannon sampling theorem it should be at least ten times longer than
the wavelength of interesting features. The assessment length or evaluation length is the length
of data that will be used for analysis. Commonly one sampling length is discarded from each
end of the measurement length.
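The length relationships above can be sketched as a small helper. The function name and values are illustrative; the 7x rule and the convention of discarding one sampling length per end follow the text:

```python
# Minimal sketch: evaluation length from measurement and sampling lengths,
# discarding one sampling length at each end (convention assumed from the text).

def evaluation_length(measurement_length, sampling_length):
    """Usable evaluation length after discarding one sampling length per end."""
    if measurement_length < 7 * sampling_length:
        raise ValueError("measurement length should be at least 7x sampling length")
    return measurement_length - 2 * sampling_length

print(evaluation_length(4.0, 0.5))  # 4.0 mm traverse, 0.5 mm sampling length
```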

For 3D measurements, the probe is commanded to scan over a 2D area on the surface.
The spacing between data points may not be the same in both directions.

In some cases, the physics of the measuring instrument may have a large effect on the
data. This is especially true when measuring very smooth surfaces. For contact measurements,
the most obvious problem is that the stylus may scratch the measured surface.

Another problem is that the stylus may be too blunt to reach the bottom of deep valleys
and it may round the tips of sharp peaks. In this case the probe is a physical filter that limits
the accuracy of the instrument.

There are also limitations for non-contact instruments. For example, instruments that rely
on optical interference cannot resolve features smaller than some fraction of their operating
wavelength. This limitation can make it difficult to accurately measure
roughness even on common objects, since the interesting features may be well below the
wavelength of light. The wavelength of red light is about 650 nm, while the Ra of a ground
shaft might be 2000 nm.

1.2 MILLING

Milling is the process of cutting away material by feeding a work piece past a rotating
multiple tooth cutter. The cutting action of the many teeth around the milling cutter provides a
fast method of machining. The machined surface may be flat, angular or curved. The surface
may be milled to any combination of shapes. The machine holding the work piece, rotating the
cutter and feeding it is known as a milling machine. The machine used here is an automated
one and is called a CNC machine.

1.2.1 WORKING PRINCIPLE OF MILLING MACHINE


The work piece is fixed on the worktable of the machine. The table movement
controls the feed of the work piece against the rotating cutter. The cutter is mounted on a
spindle or arbor and revolves at high speed. Except for rotation, the cutter has no other motion.
As the work piece advances, the cutter teeth remove metal from the surface of the work piece
and the desired shape is produced.

1.2.2 PARTS OF MILLING MACHINE
Base and column
Table
Saddle
Knee
Arbor
1.2.3 TYPES OF CNC MILLING MACHINES
Knee type
Universal horizontal
Ram type
Universal ram type
Swivel cutter head ram type
CNC vertical milling type
KNEE TYPE

Knee-type milling machines are characterized by a vertically adjustable worktable resting
on a saddle which is supported by a knee. The knee is a massive casting that rides vertically on
the milling machine column and can be clamped rigidly to the column in a position where the
milling head and milling machine spindle are properly adjusted vertically for operation.

The plain vertical machines are characterized by a spindle located vertically, parallel
to the column face, and mounted in a sliding head that can be fed up and down by hand or
power. Modern vertical milling machines are designed so the entire head can also swivel to
permit working on angular surfaces.

The turret and swivel head assembly is designed for making precision cuts and can be
swung 360° on its base. Angular cuts to the horizontal plane may be made with precision by
setting the head at any required angle within a 180° arc.

The plain horizontal milling machine's column contains the drive motor and gearing and
a fixed position horizontal milling machine spindle. An adjustable overhead arm containing
one or more arbour supports projects forward from the top of the column. The arm and arbour
supports are used to stabilize long arbours. Supports can be moved along the overhead arm to
support the arbour where support is desired depending on the position of the milling cutter or
cutters.

The milling machine's knee rides up or down the column on a rigid track; a heavy, vertical
positioning screw beneath the knee raises and lowers it. The milling machine is excellent for
forming flat surfaces, cutting dovetails and keyways, forming and fluting milling cutters and
reamers, cutting gears, and so forth. Many special operations can be performed with the
attachments available for milling machine use. The saddle rests
upon the knee and supports the worktable. The saddle moves in and out on a dovetail to control
cross feed of the worktable. The worktable traverses to the right or left upon the saddle for
feeding the work piece past the milling cutter. The table may be manually controlled or power
fed.

Fig 1.6 Knee Type Milling Machine

UNIVERSAL HORIZONTAL MILLING MACHINE

The basic difference between a universal horizontal milling machine and a plain
horizontal milling machine is the addition of a table swivel housing between the table and the
saddle of the universal machine. This permits the table to swing up to 45° in either direction
for angular and helical milling operations. The universal machine can be fitted with various
attachments such as the indexing fixture, rotary table, slotting and rack cutting attachments,
and various special fixtures.

Fig 1.7 Universal horizontal type

RAM-TYPE MILLING MACHINE

The ram-type milling machine is characterized by a spindle mounted to a movable
housing on the column to permit positioning the milling cutter forward or rearward in a
horizontal plane. Two popular ram-type milling machines are the universal milling machine
and the swivel cutter head ram-type milling machine.

UNIVERSAL RAM-TYPE MILLING MACHINE

The universal ram-type milling machine is similar to the universal horizontal milling
machine, the difference being, as its name implies, the spindle is mounted on a ram or movable
housing.

SWIVEL CUTTER HEAD RAM-TYPE MILLING MACHINE

The cutter head containing the milling machine spindle is attached to the ram. The
cutter head can be swivelled from a vertical spindle position to a horizontal spindle position or
can be fixed at any desired angular position between vertical and horizontal. The saddle and
knee are hand driven for vertical and cross feed adjustment while the worktable can be either
hand or power driven at the operator's choice.

Fig 1.8 Universal Swivel head

CNC VERTICAL MILLING MACHINE

Most CNC milling machines (also called machining centers) are computer
controlled vertical mills with the ability to move the spindle vertically along the Z-axis. This
extra degree of freedom permits their use in die sinking, engraving applications, and 2.5D
surfaces such as relief sculptures. When combined with the use of conical tools or a ball nose
cutter, it also significantly improves milling precision without impacting speed, providing a
cost-efficient alternative to most flat-surface hand-engraving work.

CNC machines can exist in virtually any of the forms of manual machinery,
like horizontal mills. The most advanced CNC milling machines, the multi-axis machines, add
two more axes in addition to the three normal axes (XYZ). Horizontal milling machines also
have a C or Q axis, allowing the horizontally mounted work piece to be rotated, essentially
allowing asymmetric and eccentric turning. The fifth axis (B axis) controls the tilt of the tool
itself. When all of these axes are used in conjunction with each other, extremely complicated
geometries, even organic geometries such as a human head can be made with relative ease with
these machines. But the skill to program such geometries is beyond that of most operators.
Therefore, 5-axis milling machines are practically always programmed with CAM.

The operating system of such machines is a closed loop system and functions on
feedback. These machines have developed from the basic NC (NUMERIC CONTROL)
machines. A computerized form of NC machines is known as CNC machines. A set of
instructions (called a program) is used to guide the machine for desired operations.

Fig 1.9 CNC Milling Machine

1.2.4 ADVANTAGES

CNC machines can be used continuously 24 hours a day, 365 days a year and only
need to be switched off for occasional maintenance.
CNC machines are programmed with a design which can then be manufactured
hundreds or even thousands of times. Each manufactured product will be exactly the
same.

Less skilled/trained people can operate CNCs unlike manual lathes / milling machines
etc., which need skilled engineers.
CNC machines can be updated by improving the software used to drive the machines.
Training in the use of CNCs is available through the use of virtual software. This is
software that allows the operator to practice using the CNC machine on the screen of
a computer. The software is similar to a computer game.
A skilled engineer can make the same component many times. However, if each
component is carefully studied, each one will vary slightly. A CNC machine will
manufacture each component as an exact match.
Modern design software allows the designer to simulate the manufacture of his/her
idea. There is no need to make a prototype or a model. This saves time and money.
One person can supervise many CNC machines; once they are programmed, they can
usually be left to work by themselves. Only the cutting tools need replacing
occasionally.

1.2.5 CHARACTERISTICS OF CNC MILLING

CNC machining technology is an application technology that emerged alongside CNC
machine tools and has developed and improved continuously. The
direct objects of study are the numerical control equipment: CNC devices, control systems,
numerical control procedures, and preparation methods. The CNC machining process comes
from traditional processing technology, combining traditional processing technology,
computer numerical control technology, and computer-aided design and manufacturing
technology.

When the programmers receive an NC programming task for a component or product, the
main tasks include:

Study the feasibility of the CNC machining process in accordance with the design
drawings and related technical documents, and determine the machining methods for the parts;
Select the type of CNC machine tools and the specifications;
Select the fixture and its supporting tools;
Select the tool and tool clamping system;
CNC machining programs and process planning;
Determine the processing area;

Design of CNC machining process content;
Coding CNC programs;
NC program debugging and process validation;
Finally, complete all the NC process files and archive all the documents.

CNC programming thus spans from the initial comprehension of the design drawings to
the completion of coding the NC process.

CHAPTER-2

LITERATURE SURVEY
Researchers in the area of high-speed milling have implemented various
chatter recognition techniques. Professor Jiri Tlusty developed a method that
detects chatter during machining and, in turn, suggests a new speed for the same
depth of cut. Cobb found, after testing, that impact dampers served better in controlling
the vibrations; the types of impact dampers used were a spring/mass liquid impact
damper and a tapered impact damper. Keyvanmanesh did extensive research in
understanding the dynamic characteristics of the tool and spindle to control chatter during
machining.

Cook et al. developed damping mechanisms to control vibrations on
traffic signal structures. Traffic signal structures that are subjected to cyclic loading due to
wind and fast-moving vehicles sometimes suffer premature fatigue failures.
They investigated this problem and proposed devices to provide damping to the structures.
Their damper model was based on the work done by Slocum on damping bending
in beams. In his book, Slocum introduced the concept of friction damping between layered
elements. The book explains that when two cantilevered beams stacked on top of
each other undergo bending, there occurs a relative shear motion between the inner surfaces of
the layered elements, causing friction energy to be produced at the interface, which
in turn reduces the deflection of the layered beam. One of his patented works implements
this idea: he developed a method to damp bending vibrations in beams and similar
structures. T. Schmitz, J.C. Ziegert, and C. Stanislaus predicted that
stable cutting regions are a critical requirement for high-speed milling operations.
M. Alauddin, M.A. EL Baradie, and M.S.J. Hashmi revealed that when the cutting speed is
increased, productivity can be maximized and surface quality can be improved. F. Ismail and
E.G. Kubica noted that the maximum quantity of material that can be removed by the milling
operation is often limited by the stability of the cutting process, and not by the
power available on the machine. Smith and Dilio have described a control strategy
for chatter suppression by adjusting the spindle speed to operate in a high
stability lobe. Experimentally, they achieved a remarkable increase in metal removal rates.
Weck et al. attempted to assess the merits of using spindle speed modulation;
to evaluate that, or any other, chatter suppression technique, one needs to detect
the onset of chatter reliably. M. Liang, T. Yeap, and A. Hermansyah reported a fuzzy logic
approach for chatter suppression in end milling processes. Vibration energy and the peak value
of the vibration frequency spectrum are jointly used as chatter indicators and as inputs to the
proposed fuzzy controller.

Kosuke Nagaya, Jyoji Kobayasi, and Katuhito Imai gave a method of micro-
vibration control of milling machine heads by use of a vibration absorber. An auto-
tuning vibration absorber is presented in which the absorber creates an anti-resonance
state. Ziegert John C., Stanislaus Charles, Schmitz Tony L., and Streling Robert found
that the limiting chatter-free depth of cut in milling depends on the dynamic stiffness of the
tool or spindle system. N.H. Kim, K.K. Choi, J.S. Chen, and Y.H. Park proposed a continuum-
based shape design sensitivity formulation for a frictional contact problem with a rigid body
using a meshless method. Tony L. Schmitz, John C. Ziegert, and Charles
Stanislaus predicted that stable cutting regions are a critical requirement for high-speed
milling operations. Sridhar et al. presented the first detailed mathematical model
with time-varying cutting force coefficients. Budak and Altintas derived
the finite-order characteristic equation for the stability analysis in milling. A recent
investigation performed by Alauddin has revealed that when the cutting speed is increased,
productivity can be maximized, and surface quality can be improved.

CHAPTER-3

OPTIMIZATION TECHNIQUES

3.1 INTRODUCTION TO TAGUCHI METHOD


A competitive crisis in manufacturing during the 1970s and 1980s gave rise to
the modern quality movement, leading to the introduction of Taguchi methods to the U.S. in
the 1980s. Taguchi's method is a system of design engineering to increase quality. Taguchi
Methods refers to a collection of principles which make up the framework of a continually
evolving approach to quality. Taguchi Methods of quality engineering design are built around
three integral elements, the loss function, the signal-to-noise ratio, and orthogonal arrays,
which are each closely related to the definition of quality.
3.1.1 TAGUCHI DESIGN PHASES

To achieve economical product quality design, Taguchi proposed three phases:

System design
Parameter design
Tolerance design.
SYSTEMS DESIGN:

Systems design identifies the basic elements of the design which will produce the
desired output; the best combination of processes and materials, the selection of machine,
and the type of tool are considered.

PARAMETER DESIGN:

Parameter design determines the most appropriate, optimizing set of parameters


covering these design elements by identifying the "settings" of each parameter which will
minimize variation from the target performance of the product.

TOLERANCE DESIGN:

Tolerance design finally identifies the components of the design which are sensitive in
terms of affecting the quality of the product and establishes tolerance limits which will give
the required level of variation in the design.

3.1.2 TAGUCHI APPROACH

The objective of robust design is to find the controllable process
parameter settings for which noise or variation has a minimal effect on the product's or
process's functional characteristics. It is to be noted that the aim is not to find the parameter
settings for the uncontrollable noise variables, but the controllable design variables. To attain
this objective, the control parameters, also known as inner array variables, are systematically
varied as stipulated by the inner orthogonal array. For each experiment of the inner array, a
series of new experiments are conducted by varying the level settings of the uncontrollable
noise variables. The level combinations of noise variables are done using the outer orthogonal
array.

The influence of noise on the performance characteristics can be found using the S/N ratio,
where S is the standard deviation of the performance parameters for each inner-array
experiment and N is the total number of experiments in the outer orthogonal array. This ratio
indicates the functional variation due to noise. Using this result, it is possible to predict which
control parameter settings will make the process insensitive to noise.

The Taguchi method focuses on Robust Design through the use of

Signal-To-Noise ratio

Orthogonal arrays.

Signal-To-Noise Ratio

The signal-to-noise concept is closely related to the robustness of a product design. A
robust design or product delivers a strong signal: it performs its expected function and can
cope with variations (noise), both internal and external. In the signal-to-noise ratio, the signal
represents the desirable value and the noise represents the undesirable value.

USES

S/N ratios can be used to get closer to a given target value, or to reduce variation in the
product's quality characteristic(s).
The Signal-To-Noise ratio is used to measure controllable factors that can have a
negative effect on the performance of the design.
They lead to the optimum through a monotonic function.
They help improve the additivity of the effects.
They quantify the quality.

There are three Signal-to-Noise ratios of common interest for the optimization of static
problems. The formulae for the signal-to-noise ratio are designed so that an experimenter can
always select the largest factor level setting to optimize the quality characteristic of an
experiment. The method of calculating the Signal-To-Noise ratio therefore depends on the
quality characteristic.

They are:

1. Smaller-The-Better,

2. Larger-The-Better,

3. Nominal-The-Best.

The Smaller-The-Better:

Impurity in drinking water is critical to quality: the fewer impurities customers find
in their drinking water, the better it is. Vibration is critical to quality for a car: the
less vibration the customers feel while driving their cars, the better, and the more
attractive the cars are.

The Signal-To-Noise ratio for the Smaller-The-Better is:

S/N = -10 * log10 (mean square of the response)

S/N = -10 log10 ( (1/n) Σ y_i² )

The Larger-The-Better:

If the number of minutes per dollar customers get from their cellular phone service
provider is critical to quality, the customers will want to get the maximum number of minutes
they can for every dollar they spend on their phone bills.

If the lifetime of a battery is critical to quality, the customers will want their batteries to last
forever. The longer the battery lasts, the better it is.

The Signal-To-Noise ratio for the Larger-The-Better is:

S/N = -10 * log10 (mean of the inverses of the squared responses)

S/N = -10 log10 ( (1/n) Σ 1/y_i² )

Nominal-The-Best:

When a manufacturer is building mating parts, he would want every part to match
the predetermined target. For instance, when he is creating pistons that need to be anchored on
a given part of a machine, failure of the piston's length to match a predetermined size
will result in it being either too short or too long, lowering the quality of the
machine. In that case, the manufacturer wants all the parts to match their target. When a
customer buys ceramic tiles to decorate his bathroom, the size of the tiles is critical to quality,
having tiles that do not match the predetermined target will result in them not being correctly
lined up against the bathroom walls.

The S/N equation for the Nominal-The-Best is:

S/N = 10 * log10 (the square of the mean divided by the variance)

S/N = 10 log10 ( ȳ² / s² )
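The three static S/N ratios above can be computed directly from their definitions. The function names and replicate response data below are illustrative sketches, not part of the source:

```python
# Minimal sketches of the three static S/N ratios, each taking the
# replicate responses y_1..y_n of one experiment.
import math

def sn_smaller_the_better(y):
    # S/N = -10 log10( (1/n) * sum(y_i^2) )
    return -10 * math.log10(sum(v * v for v in y) / len(y))

def sn_larger_the_better(y):
    # S/N = -10 log10( (1/n) * sum(1 / y_i^2) )
    return -10 * math.log10(sum(1 / (v * v) for v in y) / len(y))

def sn_nominal_the_best(y):
    # S/N = 10 log10( mean^2 / s^2 ), with s^2 the sample variance
    n = len(y)
    mean = sum(y) / n
    variance = sum((v - mean) ** 2 for v in y) / (n - 1)
    return 10 * math.log10(mean * mean / variance)

ra_values = [1.2, 1.1, 1.3]  # e.g. measured Ra replicates, micrometres
print(sn_smaller_the_better(ra_values))
```

For a smaller-the-better characteristic such as Ra, the parameter setting giving the highest S/N value is preferred.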

3.1.3 Orthogonal Arrays

Introduction:
In order to reduce the total number of experiments, Sir Ronald Fisher developed
the solution: orthogonal arrays. The orthogonal array can be thought of as a distillation
mechanism through which the engineer's experiment passes (Ealey, 1998). The array allows the
engineer to vary multiple variables at one time and obtain the effects which that set of variables
has on the average and the dispersion.

Taguchi designs experiments using specially constructed tables,
known as Orthogonal Arrays (OA), to treat the design process, such that the quality is built
into the product during the product design stage. Orthogonal Arrays (OA) are a special set of
Latin squares, constructed by Taguchi to lay out the product design experiments.
An orthogonal array is a type of experiment where the columns for the
independent variables are orthogonal to one another. Orthogonal arrays are employed to
study the effect of several control factors.
Orthogonal arrays are used to investigate quality. Orthogonal arrays are not
unique to Taguchi. They were discovered considerably earlier (Bendell, 1998). However,
Taguchi has simplified their use by providing tabulated sets of standard orthogonal arrays and
corresponding linear graphs to fit specific projects (ASI, 1989; Taguchi and Kenishi, 1987).
A Typical Orthogonal Array
S.NO A B C
1 1 1 1
2 1 2 2
3 1 3 3
4 2 1 3
5 2 2 1
6 2 3 2
7 3 1 2
8 3 2 3
9 3 3 1
Table no 3.1 L9 Orthogonal Arrays

In this array the columns are mutually orthogonal: for any pair of columns,
all combinations of factor levels occur, and they occur an equal number of times. Here there are
three parameters, A, B and C, each at three levels. This is called an L9 design, with the 9
indicating the nine rows, configurations, or prototypes to be tested. Specific test settings for each
experimental evaluation are identified in the associated row of the table. Thus L9(3^4) means
that nine experiments are carried out to study up to four variables at three levels each.
The savings in testing are even greater for larger arrays.
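The balance property described above can be checked mechanically. This is an illustrative Python sketch (not part of the original report) that verifies every pair of columns of the L9 array in Table 3.1 contains each level combination exactly once.

```python
from itertools import product

# The L9 array from Table 3.1 (three factors A, B, C at three levels).
L9 = [
    (1, 1, 1), (1, 2, 2), (1, 3, 3),
    (2, 1, 3), (2, 2, 1), (2, 3, 2),
    (3, 1, 2), (3, 2, 3), (3, 3, 1),
]

def is_orthogonal(array, col_a, col_b):
    # For a pair of columns, each of the 9 level combinations must
    # appear the same number of times (here: exactly once each).
    counts = {}
    for row in array:
        pair = (row[col_a], row[col_b])
        counts[pair] = counts.get(pair, 0) + 1
    all_pairs = set(product((1, 2, 3), repeat=2))
    return set(counts) == all_pairs and len(set(counts.values())) == 1

# Every pair of columns in L9 is balanced.
assert all(is_orthogonal(L9, i, j) for i in range(3) for j in range(3) if i < j)
```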

3.1.4 Application of Orthogonal Array
Taguchi's OA analysis is used to produce the best parameters for the optimum
design process, with the least number of experiments.
OA is usually applied in the design of engineering products, test and quality
development, and process development.
3.1.5 Advantages and Disadvantages of Orthogonal Array:
Advantages:
Conclusions are valid over the entire region spanned by the control
factors and their settings.
Large saving in the experimental effort.
Analysis is easy.
Disadvantages:
OA techniques are not applicable to processes whose influencing
factors vary in time and cannot be quantified exactly.
3.1.6 Steps in Taguchi Methodology

The Taguchi method is a scientifically disciplined mechanism for evaluating and
implementing improvements in products, processes, materials, equipment, and facilities. These
improvements are aimed at improving the desired characteristics and simultaneously reducing
the number of defects by studying the key variables controlling the process and optimizing the
procedures or design to yield the best results. Taguchi proposed a standard procedure for
applying his method for optimizing any process:

1. Problem identification
2. Define the objectives of the project work
3. Select the quality characteristics
4. Select the process parameters that may influence the quality characteristics
5. Identify the control and noise factors
6. Select levels for the control factors
7. Select an orthogonal array and assign the factors
8. Conduct tests as per the trials in the orthogonal array
9. Analyse the results of the experimental trials
10. Conduct a confirmation experiment

3.2 INTRODUCTION TO ANOVA METHOD


Analysis of variance (ANOVA) is a collection of statistical models used to
analyze the differences among group means and their associated procedures (such as
"variation" among and between groups), developed by statistician and evolutionary biologist
Ronald Fisher. In the ANOVA setting, the observed variance in a particular variable is
partitioned into components attributable to different sources of variation. In its simplest form,
ANOVA provides a statistical test of whether or not the means of several groups are equal, and
therefore generalizes the t-test to more than two groups. ANOVAs are useful for comparing
(testing) three or more means (groups or variables) for statistical significance. It is conceptually
similar to multiple two-sample t-tests, but is more conservative (results in less type I error) and
is therefore suited to a wide range of practical problems.

3.2.1 TERMINOLOGY OF ANOVA METHOD

The terminology of ANOVA is largely from the statistical design of


experiments. The experimenter adjusts factors and measures responses in an attempt to
determine an effect. Factors are assigned to experimental units by a combination of
randomization and blocking to ensure the validity of the results. Blinding keeps the weighing
impartial. Responses show a variability that is partially the result of the effect and is partially
random error.

ANOVA is the synthesis of several ideas and it is used for multiple purposes. As a consequence,
it is difficult to define concisely or precisely.

"Classical ANOVA for balanced data does three things at once:

1. As exploratory data analysis, an ANOVA is an organization of an additive data
decomposition, and its sums of squares indicate the variance of each component of the
decomposition (or, equivalently, each set of terms of a linear model).
2. Comparisons of mean squares, along with an F-test ... allow testing of a nested sequence
of models.
3. Closely related to the ANOVA is a linear model fit with coefficient estimates and
standard errors.

In short, ANOVA is a statistical tool used in several ways to develop and confirm an
explanation for the observed data.

Additionally:

4. It is computationally elegant and relatively robust against violations of its assumptions.
5. ANOVA provides industrial-strength (multiple sample comparison) statistical analysis.
6. It has been adapted to the analysis of a variety of experimental designs.

As a result, ANOVA "has long enjoyed the status of being the most used (some would say
abused) statistical technique in psychological research", and ANOVA "is probably the most
useful technique in the field of statistical inference".

ANOVA is difficult to teach, particularly for complex experiments, with split-plot designs
being notorious. In some cases, the proper application of the method is best determined by
problem pattern recognition followed by the consultation of a classic authoritative text.

3.2.2 DESIGN-OF-EXPERIMENTS TERMS

Balanced design
An experimental design where all cells (i.e. treatment combinations) have the
same number of observations.
Blocking
A schedule for conducting treatment combinations in an experimental
study such that any effects on the experimental results due to a known change in raw
materials, operators, machines, etc., become concentrated in the levels of the blocking
variable. The reason for blocking is to isolate a systematic effect and prevent it from
obscuring the main effects. Blocking is achieved by restricting randomization.
Design
A set of experimental runs which allows the fit of a particular model and the
estimate of effects.
DOE (Design of experiments)
An approach to problem solving involving collection of data that will
support valid, defensible, and supportable conclusions.

Effect
How changing the settings of a factor changes the response. The effect of a
single factor is also called a main effect.

Error
Unexplained variation in a collection of observations. DOE's typically require
understanding of both random error and lack of fit error.
Experimental unit
The entity to which a specific treatment combination is applied.
Factors
Process inputs an investigator manipulates to cause a change in the output.
Lack-of-fit error
Error that occurs when the analysis omits one or more important terms or
factors from the process model. Including replication in a DOE allows separation of
experimental error into its components: lack of fit and random (pure) error.
Model
Mathematical relationship which relates changes in a given response to
changes in one or more factors.
Random error
Error that occurs due to natural variation in the process. Random error is
typically assumed to be normally distributed with zero mean and a constant variance.
Random error is also called experimental error.
Randomization
A schedule for allocating treatment material and for conducting treatment
combinations in a DOE such that the conditions in one run neither depend on the
conditions of the previous run nor predict the conditions in the subsequent runs.
Replication
Performing the same treatment combination more than once. Including
replication allows an estimate of the random error independent of any lack of fit error.
Responses
The output(s) of a process. Sometimes called dependent variable(s).
Treatment
A treatment is a specific combination of factor levels whose effect is to be
compared with other treatments.

3.2.3 MODELS OF ANOVA METHOD

There are three classes of models used in the analysis of variance, and these are outlined here.

Fixed-effects models

The fixed-effects model (class I) of analysis of variance applies to situations in
which the experimenter applies one or more treatments to the subjects of the experiment to see
whether the response variable values change. This allows the experimenter to estimate the
ranges of response variable values that the treatment would generate in the population as a
whole.

Random-effects models

Random effects model (class II) is used when the treatments are not fixed. This
occurs when the various factor levels are sampled from a larger population. Because the levels
themselves are random variables, some assumptions and the method of contrasting the
treatments (a multi-variable generalization of simple differences) differ from the fixed-effects
model.

Mixed-effects models

A mixed-effects model (class III) contains experimental factors of both fixed
and random-effects types, with appropriately different interpretations and analysis for the two
types.

Example: Teaching experiments could be performed by a college or university department to
find a good introductory textbook, with each text considered a treatment. The fixed-effects
model would compare a list of candidate texts. The random-effects model would determine
whether important differences exist among a list of randomly selected texts. The mixed-effects
model would compare the (fixed) incumbent texts to randomly selected alternatives.

Defining fixed and random effects has proven elusive, with competing definitions arguably
leading toward a linguistic quagmire.

3.2.4 TYPES OF ANOVA:

No-Way ANOVA:

No-Way ANOVA is the simplest situation to analyse and begins with a set of
experimental data. Analysis of variance is a mathematical technique which breaks total
variation down into accountable sources; total variation is decomposed into its
appropriate components. No-way ANOVA, the simplest case, breaks total variation
down into only two components:

1. The variation of the average (or mean) of all the data points relative to zero.

2. The variation of the individual data points around the average.

The ANOVA method used states that,

SSt = SSm + SSe

Where, SSt is total sum of squares

SSm is the sum of squares due to the mean

SSe is the error sum of squares

One-Way ANOVA:

One-way ANOVA is the next most complex ANOVA to conduct. This
situation considers the effect of one controlled parameter upon the performance of a
product or process, in contrast to no-way ANOVA, where no parameters were
controlled.

The total variation can be decomposed into its appropriate components,

1. The variations of the average (mean) of all observations relative to zero.

2. The variation of the average (mean) of observations under each factor level around the
average of all observations.

3. The variation of the individual observations around the average of observations under each
factor level.

The ANOVA method used states that,

SSt = SSm + SSa + SSe

Where, SSt = Total sum of squares

= Σ yi²

SSm = The sum of squares due to the mean

= T²/N

SSa = Variation due to factor A

= nA1(Ā1 - T̄)² + nA2(Ā2 - T̄)² + nA3(Ā3 - T̄)²

SSe = Error sum of squares

= Σ (yi - Āj)²

where T is the sum of all observations, N the total number of observations, T̄ the grand
average, Āj the average of the observations at level j of factor A, and nAj the number of
observations at that level.
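The decomposition above can be verified numerically. The sketch below uses small made-up data (three levels of factor A, three observations each) purely to illustrate that the sums of squares add up; none of these numbers come from the report.

```python
# One-way ANOVA decomposition on invented data:
# three levels of factor A, three observations each.
data = {1: [2.0, 2.2, 1.8], 2: [2.9, 3.1, 3.0], 3: [4.1, 3.9, 4.0]}

obs = [y for ys in data.values() for y in ys]
N = len(obs)
T = sum(obs)
grand_mean = T / N

SSt = sum(y * y for y in obs)        # total sum of squares, sum of yi^2
SSm = T * T / N                      # sum of squares due to the mean, T^2/N
SSa = sum(len(ys) * (sum(ys) / len(ys) - grand_mean) ** 2
          for ys in data.values())   # variation due to factor A
SSe = sum((y - sum(ys) / len(ys)) ** 2
          for ys in data.values() for y in ys)  # error sum of squares

# The identity SSt = SSm + SSa + SSe holds (up to rounding).
assert abs(SSt - (SSm + SSa + SSe)) < 1e-6
```

With these level means of 2.0, 3.0 and 4.0 around a grand mean of 3.0, SSa comes out to 6.0, and the within-level scatter supplies the remaining SSe.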

Two- Way ANOVA:

Two-way ANOVA is the next highest order of ANOVA to review; there are two
controlled parameters in this experimental situation. The graphical representations will be
discontinued although utilization is still possible.

Whereas One- Way analysis of variance (ANOVA) measure significant effects of one
factor only, two- Way analysis of variance tests (ANOVA) tests (also called two-factor analysis
of variance) measure the effects of two factors simultaneously. For example, an experiment
might be defined by two parameters, such as treatment and time point. One-Way ANOVA tests
would be able to assess only the treatment effect or the time effect. Two-Way ANOVA on the
other hand would not only be able to assess both time and treatment in the same test, but also
whether there is an interaction between the parameters. A Two-Way test generates three p-
values, one for each parameter independently, and one measuring the interaction between the
two parameters.

Two-Way ANOVA Prerequisites: Before running a Two-Way ANOVA test,
experimental data must meet these prerequisites,

1. The parameters tested need to be a part of the experimental design.

2. Parameters need to be designed appropriately in the Experiment Parameter window.


Experiment should contain at least 2 parameters, each set of replicates should share common
parameter values.

3. Two-Way ANOVA is most powerful when the experiment has the same number of replicates
in each group defined by the pair of parameters. This is called a balanced design. However,
two tests can also be applied to proportional design experiments, where the proportion of
samples across each parameter group is retained.

Experiments with mild deviations from a proportional design may still be analysed, but
experiments with highly disproportional design cannot be analysed using Two-Way ANOVA.
Two-Way tests can also be analysed on data with only one replicate per group or condition.
However, the interaction between the factors cannot be tested.

Two-Way ANOVA is best to use when the experiment is designed to measure two
different factors, or when we want to measure these factors simultaneously.

ANOVA total variation may be decomposed into more components:

1. Variation due to factor A.

2. Variation due to factor B.

3. Variation due to the interaction of factors of A and B.

4. Variation due to error.

An equation for total variation may be written as,

SSt = SSa + SSb + SSa*b + SSe

Where, SSt = The total sum of squares

= Σ yi² - T²/N

SSa = Variation due to factor A (two levels)

= (A1 - A2)²/N

SSb = Variation due to factor B (two levels)

= (B1 - B2)²/N

SSa*b = Variation due to the interaction of factors A and B

= Σ ((A×B)i²/n(A×B)i) - T²/N - SSa - SSb

SSe = Error sum of squares, found from

= SSt - SSa - SSb - SSa*b

Here A1 and A2 are the totals of the observations at the two levels of factor A (similarly for
B), and (A×B)i is the total of the observations in the i-th cell of the A×B combination.
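As a sketch of the two-way decomposition, the following Python fragment computes each term on invented data for a 2×2 design with two replicates per cell, recovering the error term by subtraction. The data and names are illustrative only.

```python
# Two-way decomposition SSt = SSa + SSb + SSa*b + SSe on invented data.
cells = {  # (A level, B level) -> observations
    (1, 1): [3.1, 2.9], (1, 2): [3.6, 3.4],
    (2, 1): [4.1, 3.9], (2, 2): [5.1, 4.9],
}
obs = [y for ys in cells.values() for y in ys]
N, T = len(obs), sum(obs)

A1 = sum(y for (a, _), ys in cells.items() for y in ys if a == 1)
A2 = T - A1
B1 = sum(y for (_, b), ys in cells.items() for y in ys if b == 1)
B2 = T - B1

SSt = sum(y * y for y in obs) - T * T / N      # total variation
SSa = (A1 - A2) ** 2 / N                       # factor A (two levels)
SSb = (B1 - B2) ** 2 / N                       # factor B (two levels)
SSab = (sum(sum(ys) ** 2 / len(ys) for ys in cells.values())
        - T * T / N - SSa - SSb)               # A x B interaction
SSe = SSt - SSa - SSb - SSab                   # error by subtraction
```

For this data the error term equals the within-cell scatter (each cell varies ±0.1 about its mean), which is the check that the subtraction formula is consistent.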

Three-Way ANOVA: Three-way ANOVA entails three controlled factors in an
experiment.

An equation for total variation may be written in this case:

SSt = SSa + SSb + SSc + SSa*b + SSa*c + SSb*c + SSa*b*c + SSe

SSt = Total sum of squares

= Σ yi² - T²/N

SSa = Variation due to factor A

= (A1²/nA1) + (A2²/nA2) + (A3²/nA3) - (T²/N)

SSb = Variation due to factor B

= (B1²/nB1) + (B2²/nB2) + (B3²/nB3) - (T²/N)

SSc = Variation due to factor C

= (C1²/nC1) + (C2²/nC2) + (C3²/nC3) - (T²/N)

SSa*b = Variation due to the interaction of factors A and B

= Σ ((A×B)i²/n(A×B)i) - T²/N - SSa - SSb

(SSa*c and SSb*c are obtained in the same way.)

SSa*b*c = Variation due to the interaction of factors A, B, and C

= Σ ((A×B×C)i²/n(A×B×C)i) - T²/N - SSa - SSb - SSc - SSa*b - SSa*c - SSb*c

SSe = The error can be found from

= SSt - SSa - SSb - SSc - SSa*b - SSa*c - SSb*c - SSa*b*c

CHAPTER-4

DESCRIPTION OF CUTTING TOOL AND MATERIAL

4.1 TOOL MATERIALS IN COMMON USE

High Carbon Steel:

Contains 1-1.4% carbon with some addition of chromium and
tungsten to improve wear resistance. The steel begins to lose its hardness at about 250 °C,
and is not favoured for modern machining operations, where high speeds and heavy cuts are
usually employed.

High Speed Steel (H.S.S.):

A steel which retains its hardness up to about 600 °C, and possesses good strength
and shock-resistance properties. It is commonly used for single-point lathe cutting tools and
multi-point cutting tools such as drills, reamers and milling cutters.

Fig no 4.1 HSS Tool


Applications of HSS
HSS is used in the manufacture of various cutting tools: drills, taps, milling
cutters, tool bits, gear cutters, saw blades, planer and jointer blades, router bits, etc.,
although its usage for punches and dies is increasing.

High speed steel tools are the most popular for use in woodturning.

Cemented Carbides:
An extremely hard material made from tungsten carbide powder. Carbide
tools are usually used in the form of brazed or clamped tips. High cutting speeds
may be used, and materials difficult to cut with HSS may be readily machined using a
carbide-tipped tool.

4.1.1 CUTTING PARAMETERS


As you proceed to the process of metal cutting, the relative speed of work piece
rotation and the feed rate of the cutting tool, coupled with the material to be cut, must be given
serious attention. This relationship is of paramount importance if items are to be
manufactured in a cost-effective way in the minimum time, in accordance with the laid-down
specifications for quality of surface finish and accuracy. You, as a potential
supervisory/management-level engineer, must take particular note of these
important parameters and ensure that you gain a fundamental understanding of the factors
involved.
Cutting Speed
All materials have an optimum cutting speed, defined as the speed at which
a point on the surface of the work passes the cutting edge or point of the tool; it is normally
given in metres/min. To calculate the spindle speed required,
N = (1000 × Vc)/(π × D)
where N is the spindle speed in rev/min, Vc is the cutting speed in m/min and D is the
diameter in mm.
Cutting feed:
The distance that the cutting tool or work piece advances during one revolution of
the spindle and tool, measured in inches per revolution (IPR). In some operations the tool
feeds into the work piece and in others the work piece feeds into the tool. For a
multi-point tool, the cutting feed is also equal to the feed per tooth, measured in inches per
tooth (IPT), multiplied by the number of teeth on the cutting tool.

Spindle speed:
The rotational speed of the spindle and tool in revolutions per minute (RPM).
The spindle speed is equal to the cutting speed divided by the circumference of the tool.

Feed rate:
The speed of the cutting tool's movement relative to the work piece as the tool makes a
cut. The feed rate is measured in inches per minute (IPM) and is the product of the
cutting feed (IPR) and the spindle speed (RPM).
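The spindle-speed and feed-rate relationships can be captured in two small helper functions. This is a minimal sketch using the standard metric relation N = 1000·Vc/(π·D); the example values (60 m/min, a 12 mm cutter, 0.1 mm/rev) are arbitrary illustrations, not settings from this project.

```python
import math

def spindle_speed_rpm(cutting_speed_m_per_min, diameter_mm):
    # N = (1000 * Vc) / (pi * D): converts the surface speed at the
    # cutter circumference (m/min) into revolutions per minute.
    return 1000 * cutting_speed_m_per_min / (math.pi * diameter_mm)

def feed_rate(feed_per_rev, spindle_rpm):
    # Feed rate = cutting feed (distance per revolution) * spindle speed.
    return feed_per_rev * spindle_rpm

n = spindle_speed_rpm(60, 12)    # 60 m/min with a 12 mm cutter
f = feed_rate(0.1, n)            # table feed at 0.1 mm/rev
# n is roughly 1591.5 rev/min, giving a feed rate of roughly 159 mm/min
```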

Axial depth of cut:
The depth of the tool along its axis in the work piece as it makes a cut. A large axial
depth of cut will require a low feed rate, or else it will result in a high load on the tool
and reduce tool life. Therefore, a feature is typically machined in several passes as the tool
moves to the specified axial depth of cut for each pass.

Radial depth of cut:

The depth of the tool along its radius in the work piece as it makes a cut. If the
radial depth of cut is less than the tool radius, the tool is only partially engaged
and is making a peripheral cut. If the radial depth of cut is equal to the tool
diameter, the cutting tool is fully engaged and is making a slot cut. A large radial depth of cut
will require a low feed rate, or else it will result in a high load on the tool and
reduce tool life. Therefore, a feature is often machined in several steps as the tool moves
over the step-over distance and makes another cut at the radial depth of cut.

4.2 MATERIAL USED


The material used in the process of determining the surface roughness is ALUMINIUM.
Aluminum is remarkable for the metal's low density and its ability to resist corrosion through
the phenomenon of passivation. Aluminum and its alloys are vital to the aerospace industry
and important in transportation and structures, such as building facades and window frames.
The oxides and sulfates are the most useful compounds of aluminum.

Fig no 4.2 Material used

4.2.1 CHARACTERISTICS OF MATERIAL
PHYSICAL
Aluminum is a relatively soft, durable, lightweight, ductile, and malleable metal with
appearance ranging from silvery to dull gray, depending on the surface roughness. It is
nonmagnetic and does not easily ignite. A fresh film of aluminum serves as a good reflector
(approximately 92%) of visible light and an excellent reflector (as much as 98%) of medium
and far infrared radiation. The yield strength of pure aluminum is 7-11 MPa, while aluminum
alloys have yield strengths ranging from 200 MPa to 600 MPa. Aluminum has about one-third
the density and stiffness of steel. It is easily machined, cast, drawn and extruded.

CHEMICAL
Corrosion resistance can be excellent because a thin surface layer of aluminum oxide
forms when the bare metal is exposed to air, effectively preventing further oxidation, in a
process termed passivation. The strongest aluminum alloys are less corrosion resistant due to
galvanic reactions with alloyed copper. This corrosion resistance is greatly reduced by aqueous
salts, particularly in the presence of dissimilar metals.

4.2.2 PROPERTIES OF MATERIAL


Weight:

One of the best-known properties of aluminum is that it is light, with a density one
third that of steel: 2,700 kg/m³. The low density of aluminum accounts for it being lightweight,
but this does not affect its strength.

Strength

Aluminum alloys commonly have tensile strengths of between 70 and 700 MPa.
The range for alloys used in extrusion is 150-300 MPa. Unlike most steel grades, aluminum
does not become brittle at low temperatures; instead, its strength increases. At high
temperatures, aluminum's strength decreases. At temperatures continuously above 100 °C,
strength is affected to the extent that the weakening must be taken into account.

Linear expansion

Compared with other metals, aluminum has a relatively large coefficient of linear
expansion. This has to be taken into account in some designs.

Machining

Aluminum is easily worked using most machining methods: milling, drilling,
cutting, punching, bending, etc. Furthermore, the energy input during machining is low.

Formability

Aluminum's superior malleability is essential for extrusion. With the metal either hot
or cold, this property is also exploited in the rolling of strips and foils, as well as in bending
and other forming operations.

Conductivity

Aluminum is an excellent conductor of heat and electricity. An aluminum conductor
weighs approximately half as much as a copper conductor having the same conductivity.

Joining

Features facilitating easy jointing are often incorporated into profile design. Fusion
welding, Friction Stir Welding, bonding and taping are also used for joining.

Reflectivity

Another of the properties of aluminum is that it is a good reflector of both visible light
and radiated heat.

4.3 APPLICATIONS OF ALUMINUM

Transportation (automobiles, aircraft, trucks, railway cars, marine vessels, bicycles,
spacecraft, etc.) as sheet, tube, and castings.
Packaging (cans, foil, etc.).
Food and beverage containers, because of its resistance to corrosion.
Construction (windows, doors, siding, building wire, sheathing, roofing, etc.).

CHAPTER-5

DESIGN OF EXPERIMENT

Here there are three parameters, A, B and C, each at three levels. This is called
an L9 design, with the 9 indicating the nine rows, configurations, or prototypes to be tested.
Specific test settings for each experimental evaluation are identified in the associated
row of the table. Thus L9(3^4) means that nine experiments are carried out to study up to four
variables at three levels each. The savings in testing are even greater for larger arrays.

S.NO A B C
1 1 1 1
2 1 2 2
3 1 3 3
4 2 1 3
5 2 2 1
6 2 3 2
7 3 1 2
8 3 2 3
9 3 3 1
Table no 5.1 L9 Orthogonal Arrays

The machining was carried out on a universal milling machine, varying one
parameter while holding the other two constant, and so on. Each work piece was 100 mm in
length, 50 mm in breadth and 6 mm in thickness; a 3-foot length of aluminium was therefore
needed to carry out the full factorial experiment (9 experiments).

The initial operations were filing and grinding. These were done so as not to spoil the
work surface of the milling machine and to avoid any inclination of the work piece. The
machining was carried out according to the parameter conditions. Each piece was marked with
the serial number of its machining run, so as to avoid confusion later.

Fig 5.1 CNC XL MILLING MACHINE Fig 5.2 CUTTING MACHINE

Before the milling operation, the work piece was cut into pieces of 100 mm length on a
power saw machine after marking lines; each piece was placed on the machine and the cutter
cut the work piece. These pieces were then subjected to grinding or filing to make the surfaces
smooth.

The pieces were taken to the metrology lab and tested for surface finish with the help of a
surface finish apparatus, the TR200 from the company TIME, as shown in the figure below.

Fig 5.3 Talysurf (TR200 surface roughness tester)

CHAPTER-6

PROBLEM STATEMENT

For the experimental plan, the Taguchi method for three levels was used, with
careful understanding of the levels taken by the factors. Table 6.1 indicates the factors to be
studied and the assignment of the corresponding levels. According to the Taguchi design
concept, an L9 orthogonal array was chosen for the experiments. The plan is made of 9 tests
(array rows), in which the first column was assigned to cutting velocity (Vc), the second
column to feed rate (f) and the third column to depth of cut (d); the remaining columns were
assigned to interactions. The output to be studied is surface roughness (Ra).

Levels Cutting velocity Vc (rpm) Feed rate f (mm/rev) Depth of cut d (mm)
1 800 40 0.25
2 1000 60 0.5
3 1200 80 0.75

Table no. 6.1 Parameters with Levels

EXPERIMENTAL SETUP

Machine type: CNC Milling Machine


Manufacture: MTLAB
Type: Column and knee
X axis: 225mm
Y axis: 150mm
Z axis: 115mm

MATERIALS USED:
Tool material: High speed steel
Work piece material: Aluminium (100x50x6mm)

PROPOSED TECHNIQUES

1.Taguchi Method
2.Analysis of Variance

CHAPTER-7

RESULTS AND DISCUSSIONS

7.1 TAGUCHI METHOD

S.NO SPEED (rpm) FEED (mm/rev) DEPTH OF CUT (mm) SR-1 SR-2 SR-3
1 800 40 0.25 0.200 0.209 0.142
2 800 60 0.5 0.381 0.430 0.276
3 800 80 0.75 0.274 0.294 0.557
4 1000 40 0.25 0.243 0.550 0.311
5 1000 60 0.5 0.250 0.239 0.253
6 1000 80 0.75 0.231 0.218 0.211
7 1200 40 0.25 0.187 0.227 0.453
8 1200 60 0.5 0.211 0.218 0.386
9 1200 80 0.75 0.229 0.414 0.321

Table no. 7.1 Experimental results of surface roughness

Experimental design using the L9 orthogonal array:

S.NO SPEED (rpm) FEED (mm/rev) DEPTH OF CUT (mm) MEAN SURFACE ROUGHNESS (µm) S/N RATIO
1 800 40 0.25 0.183 14.75
2 800 60 0.5 0.362 8.82
3 800 80 0.75 0.375 8.5
4 1000 40 0.25 0.368 8.68
5 1000 60 0.5 0.247 12.14
6 1000 80 0.75 0.22 13.15
7 1200 40 0.25 0.289 10.78
8 1200 60 0.5 0.271 11.34
9 1200 80 0.75 0.321 9.86

Table no.7.2 Experimental results of Mean Surface Roughness and S/N ratio

S/N = -10log10(y²)

where y is the mean surface roughness.

The S/N ratios for the individual control factors are calculated as given below:
FOR SPEED
Ss1 = (η1+η2+η3), Ss2 = (η4+η5+η6) and Ss3 = (η7+η8+η9)
FOR DEPTH OF CUT
St1 = (η1+η4+η7), St2 = (η2+η5+η8) and St3 = (η3+η6+η9)
FOR FEED
Sf1 = (η1+η5+η9), Sf2 = (η2+η6+η7) and Sf3 = (η3+η4+η8)
For selecting the values of η1, η2, η3, etc. and calculating Ss1, Ss2 and Ss3, check the
orthogonal array in the previous chapter.
ηk is the S/N ratio corresponding to experiment k.
Average S/N ratio corresponding to cutting speed at level 1 = Ss1/3
Average S/N ratio corresponding to cutting speed at level 2 = Ss2/3
Average S/N ratio corresponding to cutting speed at level 3 = Ss3/3
Here j is the corresponding level of each factor; Sfj and Stj are calculated in the same way
for feed and depth of cut. The averages of the signal-to-noise ratios are shown in the table
below.
LEVEL SPEED FEED DEPTH OF CUT

SUM Ssj Avg S/N SUM Sfj Avg S/N SUM Stj Avg S/N

1 32.07 10.69 36.75 12.25 34.21 11.40


2 33.97 11.32 32.75 10.91 32.3 10.76
3 31.98 10.6 28.52 9.50 31.51 10.5

Table no 7.3 S/N Ratio for Speed, Feed and Depth of Cut
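The SPEED column of Table 7.3 can be reproduced directly from the mean Ra values of Table 7.2. This Python sketch recomputes the sums and averages; small differences in the last digit arise because the table sums per-row S/N values that were already rounded.

```python
import math

# Mean Ra values from Table 7.2, in experiment order 1..9.
ra = [0.183, 0.362, 0.375, 0.368, 0.247, 0.22, 0.289, 0.271, 0.321]

# S/N = -10 log10(y^2) applied to each row's mean roughness.
eta = [-10 * math.log10(y * y) for y in ra]

# Speed level groupings: rows 1-3 (800 rpm), 4-6 (1000 rpm), 7-9 (1200 rpm).
speed_groups = [(0, 1, 2), (3, 4, 5), (6, 7, 8)]
Ss = [sum(eta[i] for i in g) for g in speed_groups]   # sums Ss1, Ss2, Ss3
avg = [s / 3 for s in Ss]                             # average S/N per level
```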

7.2 ANALYSIS OF VARIANCE(ANOVA)

Using the Orthogonal Array

Let T = sum of all observations

= 0.183+0.362+0.375+0.368+0.247+0.22+0.289+0.271+0.321

= 2.636

N = total no. of observations = 9

T²/N = 2.636²/9 = 0.772

A1 = 0.183+0.362+0.375 = 0.92

A2 = 0.368+0.247+0.22 = 0.835

A3 = 0.289+0.271+0.321 = 0.881

SSA = (A1²/nA1) + (A2²/nA2) + (A3²/nA3) - (T²/N)

= 0.92²/3 + 0.835²/3 + 0.881²/3 - 0.772

= 0.0012

SSb = Variation due to factor B

= (B1²/nB1) + (B2²/nB2) + (B3²/nB3) - (T²/N)

B1 = 0.183+0.368+0.289 = 0.84

B2 = 0.362+0.247+0.271 = 0.88

B3 = 0.375+0.22+0.321 = 0.916

SSb = 0.2352+0.2581+0.2796 - 0.772

= 0.0009

SSc = Variation due to factor C:

= (C1²/nC1) + (C2²/nC2) + (C3²/nC3) - (T²/N)

C1 = 0.183+0.22+0.271 = 0.674

C2 = 0.362+0.368+0.321 = 1.051

C3 = 0.375+0.247+0.289 = 0.991

SSc = 0.1514+0.3682+0.327 - 0.772

= 0.0746
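The hand calculation for SSA above can be checked with a few lines of Python using the mean Ra values from Table 7.2; to more decimal places the result is 0.001207, agreeing with the Adj SS for speed in the ANOVA table.

```python
# Mean Ra values from Table 7.2, in experiment order 1..9.
ra = [0.183, 0.362, 0.375, 0.368, 0.247, 0.22, 0.289, 0.271, 0.321]

T = sum(ra)        # total of all observations, 2.636
N = len(ra)        # number of observations, 9
cm = T * T / N     # correction for the mean, T^2/N, about 0.772

# Speed level totals: rows 1-3 (800 rpm), 4-6 (1000 rpm), 7-9 (1200 rpm).
A = [sum(ra[0:3]), sum(ra[3:6]), sum(ra[6:9])]   # 0.92, 0.835, 0.881

# Sum of squares for speed: sum of (level total)^2 / (level size) - T^2/N.
SSA = sum(a * a / 3 for a in A) - cm
```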

PARAMETERS Degrees of freedom Adj SS Adj MS F-Value P-Value
SPEED 2 0.001207 0.000603 0.10 0.905
FEED 2 0.000964 0.000482 0.08 0.923
DOC 2 0.024211 0.012105 2.09 0.323
Error 2 0.011558 0.005779
Total 8 0.037939
Table no 7.4 ANOVA Results of Surface Roughness

S = 0.0760183, R-Sq = 69.54%, R-Sq(adj) = 0.00%, R-Sq(pred) = 0.00%

Table 7.4 shows the ANOVA results for surface roughness. From the results it is
observed that the depth of cut is the most significant parameter, followed by speed and feed;
that is, among the three cutting parameters, depth of cut has the most significant influence on
the work piece surface roughness.
7.3 MAIN EFFECT PLOT ANALYSIS
The data were further analysed to study the effect of the cutting parameters on surface
roughness. From the response values given in tables 7.5 and 7.6, main effect plots were drawn
using MINITAB 17 software and are shown in figs 7.1 and 7.2 respectively. The plots show the
variation of the response with the change in cutting parameters. These main effect plots are used
to determine the optimal design conditions to obtain low surface roughness.

LEVEL SPEED FEED DEPTH OF CUT


1 0.3067 0.2800 0.2247
2 0.2783 0.2933 0.3503
3 0.2937 0.3053 0.3037
DELTA 0.0283 0.0253 0.1257
RANK 2 3 1

Table no 7.5 Response Table for Means of Ra

SMALLER IS BETTER

LEVEL SPEED FEED DEPTH OF CUT


1 10.699 11.405 13.081
2 11.327 10.771 9.126
3 10.664 10.514 10.482
DELTA 0.663 0.892 3.955
RANK 3 2 1

Table no 7.6 Response table for S/N ratios
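The DELTA and RANK rows of the response tables follow mechanically from the level averages: delta is the spread (max minus min) across levels, and rank 1 goes to the largest delta. This sketch recomputes them for Table 7.5; the dictionary keys are shorthand labels of my own, not Minitab output, and the last digit of the deltas may differ from the table by rounding.

```python
# Level averages of mean Ra from Table 7.5 (levels 1, 2, 3 per factor).
response = {
    "speed": [0.3067, 0.2783, 0.2937],
    "feed":  [0.2800, 0.2933, 0.3053],
    "doc":   [0.2247, 0.3503, 0.3037],
}

# Delta = max level average - min level average, per factor.
delta = {k: max(v) - min(v) for k, v in response.items()}

# Rank 1 is assigned to the largest delta (the most influential factor).
rank = {k: i + 1 for i, (k, _) in
        enumerate(sorted(delta.items(), key=lambda kv: -kv[1]))}
```

Running this recovers the ranking in the table: depth of cut first, then speed, then feed.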

Fig no 7.1 Main effect plot for means

Fig no.7.2 Line plot for Means

From the above figures and tables, it is observed that with an increase in speed and
feed levels there is little change in the response, but with an increase in the level of depth of
cut a significant change in the response can be observed.

PARAMETERS BEST LEVEL VALUES


SPEED (rpm) 1 800
FEED (mm/rev) 1 40
DEPTH OF CUT (mm) 1 0.25

Table no 7.7 Optimum conditions for surface roughness

CONCLUSION

In this work the surface roughness is minimised using the Taguchi and ANOVA
methods. From these methods the best combination of cutting parameters (speed, feed and
depth of cut) can be found. The project is aimed at developing quality parameters for
heavy-industry materials.
The L9 orthogonal array and S/N ratio are used to predict the best levels of the cutting
factors for minimising the surface roughness: Speed 800 rpm, Feed 40 mm/rev, Depth of cut
0.25 mm.
The ANOVA analysis is used to find the significance of the effect of spindle speed, feed
and depth of cut on surface roughness.

FUTURE SCOPE

The present work can be extended further for different conditions of process
parameters at different levels and for different materials.
The present work can be repeated using carbide tools rather than HSS, and by
taking an L27 orthogonal array better results can be obtained.
Techniques such as improved genetic algorithms and particle swarm optimisation
can be used. Many other materials and insert geometries can also be investigated.

REFERENCES

Ch. Maheshwara Rao and K. Venkatasubbaiah, "Optimization of Surface Roughness",
International Journal of Advanced Science and Technology, Vol. 93 (2016), pp. 1-14.
Srinivas Athreya and Dr. Y. D. Venkatesh, "Application of Taguchi Method for Optimization
of Process Parameters in Improving the Surface Roughness", International Refereed
Journal of Engineering and Science (IRJES), ISSN (Online) 2319-183X, (Print) 2319-1821,
Volume 1, Issue 3 (November 2012), pp. 13-19.
Journal of Scientific & Industrial Research, Vol. 68, August 2009.
"Surface Roughness Optimization Using Taguchi and ANOVA Method", Scribd.
Ali Riza Motorcu, "The Optimization of Machining Parameters Using the Taguchi
Method for Surface Roughness", Journal of Mechanical Engineering, 56 (2010) 6,
pp. 391-401.

