INTRODUCTION
1.1 SURFACE ROUGHNESS
Roughness plays an important role in determining how a real object will interact with
its environment. Rough surfaces usually wear more quickly and have higher friction
coefficients than smooth surfaces. Roughness is often a good predictor of the performance of
a mechanical component, since irregularities in the surface may form nucleation sites for cracks
or corrosion.
The size and shape of the irregularities on a machined surface have a major impact
on the quality and performance of that surface and on the performance of the end
product. The quantification and management of fine irregularities on the surface, which is to
say, measurement of surface roughness, is necessary to maintain high product performance.
Quantifying surface irregularities means assessing them by categorizing them by
height, depth, and interval. They are then analysed by a predetermined method and calculated
against industrial quantity standards. The form and size of surface irregularities and the way the
finished product will be used determine whether the surface roughness acts in a favourable or an
unfavourable way. A surface to be painted should allow paint to adhere easily, while a drive
surface should rotate easily and resist wear. It is important to manage surface roughness so that it
is suitable for the component in terms of quality and performance.
Many parameters have been established regarding the measurement and
assessment of surface roughness. As machining technologies progress and higher-quality
products are demanded, the performance of digital instruments continues to improve. The
surface roughness of more diverse surfaces can now be assessed.
1.1.2 TERMINOLOGY OF SURFACE ROUGHNESS
Surface: The boundary that separates an object from another object, substance, or space.
Real Surface: The actual boundary of an object. Its deviations from the
nominal surface stem from the processes that produce the surface.
Measured Surface: A representation of the real surface obtained by the use
of a measuring instrument.
Nominal Surface: The intended surface boundary (exclusive of any intended
surface roughness) the shape and extent of which is usually shown and dimensioned
on a drawing or descriptive specification.
Flaws: Flaws or defects are random irregularities such as scratches, cracks, holes,
depressions, seams, tears, or inclusions.
Lay: Lay, or directionality, is the direction of the predominant surface pattern and is
usually visible to the naked eye.
Roughness: It is defined as closely spaced, irregular deviations on a scale
smaller than that of waviness. Roughness may be superimposed on waviness.
Roughness is expressed in terms of its height, its width, and its distance on
the surface along which it is measured.
Fig 1.1 Various profiles of materials
Waviness: It is a recurrent deviation from a flat surface, much like waves on the surface
of water. It is measured and described in terms of the space between adjacent crests of the
waves (waviness width) and the height between the crests and valleys of the waves
(waviness height). Waviness can be caused by factors such as tool or workpiece deflection,
vibration, and warping.
1.1.4 MEASUREMENT TECHNIQUES
Main Measurement Methods of Surface Roughness
Inspection and assessment of the surface roughness of machined workpieces can be carried
out by means of different measurement techniques. These methods can be grouped into the
following classes:
1. Direct (stylus-based) methods
2. Comparison-based techniques
3. Non-contact methods
4. On-process measurement
Direct methods assess surface finish by means of stylus-type devices. Measurements are
obtained by drawing a stylus along the surface to be measured: the stylus motion perpendicular
to the surface is registered. This registered profile is then used to calculate the roughness
parameters. This method requires interruption of the machining process, and the sharp diamond
stylus may leave micro-scratches on the surface.
2. Comparison-Based Techniques:
These techniques assess the surface texture by comparing the machined surface, visually or
by touch, against reference specimens of known roughness.
3. Non-Contact Methods:
Some work has been done on measuring surface roughness using non-contact
techniques. An electronic speckle correlation method is given here as an example.
When coherent light illuminates a rough surface, the diffracted waves from each point
of the surface mutually interfere to form a pattern which appears as a grain pattern of bright
and dark regions. The spatial statistical properties of this speckle image can be related to the
surface characteristics. The degree of correlation of two speckle patterns produced from the
same surface by two different illumination beams can be used as a roughness parameter.
The measurement principle is shown in the figure below. A rough surface
is illuminated by a monochromatic plane wave at an angle of incidence with respect to the
normal to the surface; multi-scattering and shadowing effects are neglected. The photosensor
of a CCD camera placed in the focal plane of a Fourier lens is used for recording the speckle
patterns. Assuming Cartesian coordinates x, y, z, a rough surface can be represented by its
ordinates Z(x,y) with respect to an arbitrary datum plane having transverse coordinates (x,y).
The rms surface roughness can then be defined and calculated.
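As an illustrative sketch of these definitions, the arithmetic-average (Ra) and rms (Rq) roughness of a sampled profile can be computed from its ordinates. The function name and the triangular example profile below are illustrative choices, not taken from the report:

```python
import math

def roughness_params(z):
    """Compute Ra (arithmetic average) and Rq (rms) roughness from
    profile ordinates z measured relative to an arbitrary datum.
    The mean line is subtracted first, as in the standard definitions."""
    n = len(z)
    mean = sum(z) / n                             # mean line of the profile
    dev = [zi - mean for zi in z]                 # deviations from the mean line
    ra = sum(abs(d) for d in dev) / n             # arithmetic average roughness
    rq = math.sqrt(sum(d * d for d in dev) / n)   # rms roughness
    return ra, rq

# Example: a simple triangular-wave profile (units arbitrary)
profile = [0.0, 1.0, 2.0, 1.0, 0.0, -1.0, -2.0, -1.0]
ra, rq = roughness_params(profile)   # ra = 1.0, rq = sqrt(1.5)
```

For this symmetric profile the mean line is zero, so Ra is simply the average absolute ordinate and Rq the root-mean-square ordinate.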
4. On-process measurement
Many methods have been used to measure surface roughness in process. For example:
Machine vision: In this technique, a light source illuminates the surface, a digital system
views the surface, and the data are sent to a computer for analysis. The digitized data
are then used with a correlation chart to obtain actual roughness values.
Inductance method: An inductance pickup is used to measure the distance between the
surface and the pickup. This measurement gives a parametric value that may be used to
give a comparative roughness. However, this method is limited to measuring magnetic
materials.
Ultrasound: A spherically focused ultrasonic sensor is positioned above the surface at a
non-normal angle of incidence. The sensor emits an ultrasonic pulse at the surface; the
reflected signal is sent to a personal computer for analysis and calculation of the
roughness parameters.
Principle of Measurement:
A cantilever holds a small tip that slides horizontally over the object's surface.
Following the profile, the cantilever moves vertically, and this vertical position is
recorded as the measured profile.
For 2D measurements, the probe usually traces along a straight line on a flat surface or
in a circular arc around a cylindrical surface. The length of the path that it traces is called the
measurement length. The wavelength of the lowest-frequency filter that will be used to analyse
the data is usually defined as the sampling length. Most standards recommend that the
measurement length should be at least seven times longer than the sampling length, and
according to the Nyquist-Shannon sampling theorem it should be at least ten times longer than
the wavelength of the features of interest. The assessment length or evaluation length is the
length of data that will be used for analysis. Commonly, one sampling length is discarded from
each end of the measurement length.
For 3D measurements, the probe is commanded to scan over a 2D area on the surface.
The spacing between data points may not be the same in both directions.
In some cases, the physics of the measuring instrument may have a large effect on the
data. This is especially true when measuring very smooth surfaces. For contact measurements,
the most obvious problem is that the stylus may scratch the measured surface.
Another problem is that the stylus may be too blunt to reach the bottom of deep valleys
and it may round the tips of sharp peaks. In this case the probe is a physical filter that limits
the accuracy of the instrument.
There are also limitations for non-contact instruments. For example, instruments that rely
on optical interference cannot resolve features smaller than some fraction of their operating
wavelength. This limitation can make it difficult to accurately measure roughness even on
common objects, since the features of interest may be comparable to the wavelength of light.
The wavelength of red light is about 650 nm, while the Ra of a ground shaft might be 2000 nm.
1.2 MILLING
Milling is the process of cutting away material by feeding a workpiece past a rotating
multiple-tooth cutter. The cutting action of the many teeth around the milling cutter provides a
fast method of machining. The machined surface may be flat, angular, or curved, and may be
milled to any combination of shapes. The machine that holds the workpiece, rotates the cutter,
and feeds it is known as a milling machine. The machine used here is an automated one, called
a CNC machine.
1.2.2 PARTS OF MILLING MACHINE
Base and column
Table
Saddle
Knee
Arbor
1.2.3 TYPES OF CNC MILLING MACHINES
Knee type
Universal horizontal
Ram type
Universal ram type
Swivel cutter head ram type
CNC vertical milling type
KNEE TYPE
The plain vertical machines are characterized by a spindle located vertically, parallel
to the column face, and mounted in a sliding head that can be fed up and down by hand or
power. Modern vertical milling machines are designed so that the entire head can also swivel
to permit working on angular surfaces.
The turret and swivel head assembly is designed for making precision cuts and can be
swung 360° on its base. Angular cuts to the horizontal plane may be made with precision by
setting the head at any required angle within a 180° arc.
The plain horizontal milling machine's column contains the drive motor and gearing and
a fixed position horizontal milling machine spindle. An adjustable overhead arm containing
one or more arbour supports projects forward from the top of the column. The arm and arbour
supports are used to stabilize long arbours. Supports can be moved along the overhead arm to
support the arbour where support is desired depending on the position of the milling cutter or
cutters.
The milling machine's knee rides up or down the column on a rigid track; a heavy, vertical
positioning screw beneath the knee raises and lowers it. The milling machine is excellent for
forming flat surfaces, cutting dovetails and keyways, forming and fluting milling cutters and
reamers, cutting gears, and so forth. Many special operations can be performed with the
attachments available for milling machine use. The saddle rests upon the knee and supports the
worktable. The saddle moves in and out on a dovetail to control cross feed of the worktable.
The worktable traverses to the right or left upon the saddle, feeding the workpiece past the
milling cutter. The table may be manually controlled or power fed.
UNIVERSAL HORIZONTAL MILLING MACHINE
The basic difference between a universal horizontal milling machine and a plain
horizontal milling machine is the addition of a table swivel housing between the table and the
saddle of the universal machine. This permits the table to swing up to 45° in either direction
for angular and helical milling operations. The universal machine can be fitted with various
attachments such as the indexing fixture, rotary table, slotting and rack cutting attachments,
and various special fixtures.
The universal ram-type milling machine is similar to the universal horizontal milling
machine, the difference being, as its name implies, the spindle is mounted on a ram or movable
housing.
SWIVEL CUTTER HEAD RAM-TYPE MILLING MACHINE
The cutter head containing the milling machine spindle is attached to the ram. The
cutter head can be swivelled from a vertical spindle position to a horizontal spindle position or
can be fixed at any desired angular position between vertical and horizontal. The saddle and
knee are hand driven for vertical and cross feed adjustment while the worktable can be either
hand or power driven at the operator's choice.
Most CNC milling machines (also called machining centers) are computer
controlled vertical mills with the ability to move the spindle vertically along the Z-axis. This
extra degree of freedom permits their use in die sinking, engraving applications, and 2.5D
surfaces such as relief sculptures. When combined with the use of conical tools or a ball nose
cutter, it also significantly improves milling precision without impacting speed, providing a
cost-efficient alternative to most flat-surface hand-engraving work.
CNC machines can exist in virtually any of the forms of manual machinery,
like horizontal mills. The most advanced CNC milling machines, the multi-axis machines, add
two more axes in addition to the three normal axes (XYZ). Horizontal milling machines also
have a C or Q axis, allowing the horizontally mounted work piece to be rotated, essentially
allowing asymmetric and eccentric turning. The fifth axis (B axis) controls the tilt of the tool
itself. When all of these axes are used in conjunction with each other, extremely complicated
geometries, even organic geometries such as a human head can be made with relative ease with
these machines. But the skill to program such geometries is beyond that of most operators.
Therefore, 5-axis milling machines are practically always programmed with CAM.
The operating system of such machines is a closed-loop system and functions on
feedback. These machines developed from the basic NC (Numerical Control)
machines. A computerized form of NC machine is known as a CNC machine. A set of
instructions (called a program) is used to guide the machine through the desired operations.
1.2.4 ADVANTAGES
CNC machines can be used continuously 24 hours a day, 365 days a year and only
need to be switched off for occasional maintenance.
CNC machines are programmed with a design which can then be manufactured
hundreds or even thousands of times. Each manufactured product will be exactly the
same.
Less skilled/trained people can operate CNCs unlike manual lathes / milling machines
etc., which need skilled engineers.
CNC machines can be updated by improving the software used to drive the machines
Training in the use of CNCs is available through the use of virtual software. This is
software that allows the operator to practice using the CNC machine on the screen of
a computer. The software is similar to a computer game.
A skilled engineer can make the same component many times, but if each
component is carefully studied, each one will vary slightly. A CNC machine will
manufacture each component as an exact match.
Modern design software allows the designer to simulate the manufacture of an
idea, with no need to make a prototype or a model. This saves time and money.
One person can supervise many CNC machines; once they are programmed, they can
usually be left to work by themselves. Only the cutting tools need occasional
replacement.
When the programmers receive a NC programming task of one component or product, the
main works include:
CNC machining process feasibility study in accord with design drawings and related
technical documents to determine the CNC machining parts processing methods;
Select the type of CNC machine tools and the specifications;
Select the fixture and its supporting tools;
Select the tool and tool clamping system;
CNC machining programs and process planning;
Determine the processing area;
Design of CNC machining process content;
Coding CNC programs;
NC program debugging and process validation;
Finally complete all the NC process file and archive all the documents.
CNC programming thus runs from the initial comprehension of the design drawings to the
completion of the coded NC process files.
CHAPTER-2
LITERATURE SURVEY
Researchers in the area of high-speed milling have implemented various
chatter recognition techniques. Professor Jiri Tlusty developed a method that
detects chatter during machining and, in turn, suggests a new speed for the same
depth. Cobb found after testing that impact dampers served better in controlling
the vibrations; the types of impact dampers used were a spring/mass liquid impact
damper and a tapered impact damper. Keyvanmanesh did extensive research into
understanding the dynamic characteristics of the tool and spindle to control chatter during
machining.
Other work has aimed to detect the onset of chatter reliably. M. Liang, T. Yeap, and
A. Hermansyah reported a fuzzy logic approach for chatter suppression in end milling
processes. Vibration energy and the peak value of the vibration frequency spectrum are jointly
used as chatter indicators and as inputs to the proposed fuzzy controller.
CHAPTER-3
OPTIMIZATION TECHNIQUES
3.1.1 TAGUCHI DESIGN PHASES
System design
Parameter design
Tolerance design.
SYSTEMS DESIGN:
Systems design identifies the basic elements of the design that will produce the
desired output, such as the best combination of processes and materials, the selection of the
machine, and the type of tool to be used.
PARAMETER DESIGN:
Parameter design determines the levels of the control factors at which the product or
process is least sensitive to noise, so that quality is achieved without tightening tolerances.
TOLERANCE DESIGN:
Tolerance design finally identifies the components of the design which are sensitive in
terms of affecting the quality of the product and establishes tolerance limits which will give
the required level of variation in the design.
The influence of noise on the performance characteristics can be found using the
signal-to-noise ratio, where S is the standard deviation of the performance parameters for each
inner-array experiment and N is the total number of experiments in the outer orthogonal array.
This ratio indicates the functional variation due to noise. Using this result, it is possible to
predict which control-parameter settings will make the process insensitive to noise.
The Taguchi method relies on two major tools:
Signal-to-noise ratio
Orthogonal arrays.
Signal-To-Noise Ratio
USES
S/N ratios can be used to move closer to a given target value, or to reduce variation in the
product's quality characteristic(s). The signal-to-noise ratio measures how controllable factors
counteract the factors that have a negative effect on the performance of a design. S/N ratios
lead to the optimum through a monotonic function, improve the additivity of factor effects,
and quantify quality.
They are:
1. Smaller-The-Better
2. Larger-The-Better
3. Nominal-The-Best
The Smaller-The-Better:
The fewer impurities customers find in their drinking water, the better it is. Vibrations
are critical to quality for a car: the less vibration the customers feel while driving
their cars, the better, and the more attractive the cars are.
S/N = -10 log10( (1/n) Σ yi² )
The Larger-The-Better:
If the number of minutes per dollar customers get from their cellular phone service
provider is critical to quality, the customers will want to get the maximum number of minutes
they can for every dollar they spend on their phone bills.
If the lifetime of a battery is critical to quality, the customers will want their batteries to last
forever. The longer the battery lasts, the better it is.
Nominal-The-Best:
When a manufacturer is building mating parts, he wants every part to match a
predetermined target. For instance, when creating pistons that must be anchored on a given
part of a machine, a piston whose length misses the predetermined size will be either too short
or too long, lowering the quality of the machine. In that case, the manufacturer wants all the
parts to match their target. When a customer buys ceramic tiles to decorate a bathroom, the
size of the tiles is critical to quality: tiles that do not match the predetermined target will not
line up correctly against the bathroom walls.
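The three characteristics can be sketched as follows. The smaller-the-better formula matches the one given above; the larger-the-better and nominal-the-best formulas are the standard Taguchi forms, stated here as an assumption since the report does not list them in this section:

```python
import math

def sn_smaller_the_better(y):
    """S/N = -10 log10( (1/n) * sum(yi^2) ) -- minimize the response."""
    n = len(y)
    return -10 * math.log10(sum(v * v for v in y) / n)

def sn_larger_the_better(y):
    """S/N = -10 log10( (1/n) * sum(1/yi^2) ) -- maximize the response
    (standard Taguchi form, assumed here)."""
    n = len(y)
    return -10 * math.log10(sum(1 / (v * v) for v in y) / n)

def sn_nominal_the_best(y):
    """S/N = 10 log10( ybar^2 / s^2 ) -- hit a target with low variation
    (standard Taguchi form, assumed here)."""
    n = len(y)
    ybar = sum(y) / n
    s2 = sum((v - ybar) ** 2 for v in y) / (n - 1)  # sample variance
    return 10 * math.log10(ybar * ybar / s2)
```

In each case a larger S/N value is better, which is what allows the three very different objectives to be optimized with the same analysis.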
Introduction:
In order to reduce the total number of experiments, Sir Ronald Fisher developed
the solution: orthogonal arrays. The orthogonal array can be thought of as a distillation
mechanism through which the engineer's experiment passes (Ealey, 1998). The array allows
the engineer to vary multiple variables at one time and obtain the effects that set of variables
has on the average and the dispersion.
Taguchi designs experiments using specially constructed tables, known as
Orthogonal Arrays (OA), to treat the design process such that quality is built into the product
during the product design stage. Orthogonal Arrays are a special set of Latin squares,
constructed by Taguchi to lay out the product design experiments.
An orthogonal array is a type of experiment where the columns for the
independent variables are orthogonal to one another. Orthogonal arrays are employed to
study the effect of several control factors.
Orthogonal arrays are used to investigate quality. They are not unique to Taguchi; they
were discovered considerably earlier (Bendell, 1998). However, Taguchi simplified their use
by providing tabulated sets of standard orthogonal arrays and corresponding linear graphs to
fit specific projects (ASI, 1989; Taguchi and Konishi, 1987).
A Typical Orthogonal Array

Experiment   A   B   C
     1       1   1   1
     2       1   2   2
     3       1   3   3
     4       2   1   3
     5       2   2   1
     6       2   3   2
     7       3   1   2
     8       3   2   3
     9       3   3   1

Table no 3.1 L9 Orthogonal Array
In this array the columns are mutually orthogonal: for any pair of columns, all
combinations of factor levels occur, and they occur an equal number of times. Here there are 3
parameters, A, B, and C, each at three levels. This is called an L9 design, with the 9 indicating
the nine rows, configurations, or prototypes to be tested. Specific test characteristics for each
experimental evaluation are identified in the associated row of the table. Thus L9(3^4) means
that nine experiments are carried out to study up to four variables, each at three levels. The
savings in testing are greater for larger arrays.
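The balance property described above can be verified mechanically: for every pair of columns of the L9 array, each combination of levels must occur equally often. A minimal sketch:

```python
from itertools import combinations
from collections import Counter

# The L9 array from Table 3.1 (three factors A, B, C at levels 1-3)
L9 = [
    (1, 1, 1), (1, 2, 2), (1, 3, 3),
    (2, 1, 3), (2, 2, 1), (2, 3, 2),
    (3, 1, 2), (3, 2, 3), (3, 3, 1),
]

def is_orthogonal(array):
    """Check the balance property: for every pair of columns, each
    combination of levels occurs the same number of times."""
    ncols = len(array[0])
    for i, j in combinations(range(ncols), 2):
        counts = Counter((row[i], row[j]) for row in array)
        if len(set(counts.values())) != 1:   # unequal occurrence counts
            return False
    return True
```

Running `is_orthogonal(L9)` confirms that every pair of the three columns contains all nine level combinations exactly once.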
3.1.4 Application of Orthogonal Array
Taguchi's OA analysis is used to produce the best parameters for the optimum
design process with the least number of experiments.
OA is usually applied in the design of engineering products, test and quality
development, and process development.
3.1.5 Advantages and Disadvantages of Orthogonal Array:
Conclusions are valid over the entire region spanned by the control
factors and their settings.
Large savings in experimental effort.
Analysis is easy.
There are cases where OA techniques are not applicable, such as a process involving
influencing factors that vary in time and cannot be quantified exactly.
3.1.6 Steps in Taguchi Methodology
Problem identification
Objectives of the project work
Selecting quality characteristics
Selecting the process parameters that may influence the quality characteristics
Conduct confirmation experiment
ANOVA is the synthesis of several ideas and it is used for multiple purposes. As a consequence,
it is difficult to define concisely or precisely.
In short, ANOVA is a statistical tool used in several ways to develop and confirm an
explanation for the observed data.
Additionally:
As a result, ANOVA "has long enjoyed the status of being the most used (some would say
abused) statistical technique in psychological research," and it "is probably the most useful
technique in the field of statistical inference."
ANOVA is difficult to teach, particularly for complex experiments, with split-plot designs
being notorious. In some cases, the proper application of the method is best determined by
problem pattern recognition followed by consultation of a classic authoritative text.
Balanced design
An experimental design where all cells (i.e. treatment combinations) have the
same number of observations.
Blocking
A schedule for conducting treatment combinations in an experimental
study such that any effects on the experimental results due to a known change in raw
materials, operators, machines, etc., become concentrated in the levels of the blocking
variable. The reason for blocking is to isolate a systematic effect and prevent it from
obscuring the main effects. Blocking is achieved by restricting randomization.
Design
A set of experimental runs which allows the fit of a particular model and the
estimate of effects.
DOE (Design of experiments)
An approach to problem solving involving collection of data that will
support valid, defensible, and supportable conclusions.
Effect
How changing the settings of a factor changes the response. The effect of a
single factor is also called a main effect.
Error
Unexplained variation in a collection of observations. DOEs typically require
understanding of both random error and lack-of-fit error.
Experimental unit
The entity to which a specific treatment combination is applied.
Factors
Process inputs an investigator manipulates to cause a change in the output.
Lack-of-fit error
Error that occurs when the analysis omits one or more important terms or
factors from the process model. Including replication in a DOE allows separation of
experimental error into its components: lack of fit and random (pure) error.
Model
Mathematical relationship which relates changes in a given response to
changes in one or more factors.
Random error
Error that occurs due to natural variation in the process. Random error is
typically assumed to be normally distributed with zero mean and a constant variance.
Random error is also called experimental error.
Randomization
A schedule for allocating treatment material and for conducting treatment
combinations in a DOE such that the conditions in one run neither depend on the
conditions of the previous run nor predict the conditions in the subsequent runs.
Replication
Performing the same treatment combination more than once. Including
replication allows an estimate of the random error independent of any lack of fit error.
Responses
The output(s) of a process. Sometimes called dependent variable(s).
Treatment
A treatment is a specific combination of factor levels whose effect is to be
compared with other treatments.
3.2.3 MODELS OF ANOVA METHOD
There are three classes of models used in the analysis of variance, and these are outlined here.
Fixed-effects models
The fixed-effects model (class I) applies when the experimenter applies one or more
treatments to the subjects of the experiment to see whether the response variable changes;
inferences are restricted to the particular treatment levels used.
Random-effects models
The random-effects model (class II) is used when the treatments are not fixed. This
occurs when the various factor levels are sampled from a larger population. Because the levels
themselves are random variables, some assumptions and the method of contrasting the
treatments (a multi-variable generalization of simple differences) differ from the fixed-effects
model.
Mixed-effects models
A mixed-effects model (class III) contains experimental factors of both fixed and random
types. Defining fixed and random effects has proven elusive, with competing definitions
arguably leading toward a linguistic quagmire.
3.2.4 TYPES OF ANOVA:
No-Way ANOVA:
No-Way ANOVA is the simplest situation to analyse and begins with a set of
experimental data. Analysis of variance is a mathematical technique which breaks total
variation down into accountable sources; total variation is decomposed into its
appropriate components. No-way ANOVA, the simplest case, breaks total variation
down into only two components, the first of which is:
1. The variation of the average (or mean) of all the data points relative to zero.
One-Way ANOVA:
One-way ANOVA considers one controlled factor and adds two further components:
2. The variation of the average (mean) of observations under each factor level around the
average of all observations.
3. The variation of the individual observations around the average of observations under their
factor level.
T = Σ yi (grand total of all observations)
y-bar = T/N (grand average)
SS_T = Σ (yi − y-bar)² (total variation about the mean)
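The decomposition just described can be sketched numerically. The function below is a hypothetical helper, not part of the report; it splits the total variation about the grand mean into the between-level and within-level components of items 2 and 3:

```python
def one_way_anova_ss(groups):
    """Decompose total variation for a one-way layout.
    groups: list of lists, the observations under each factor level.
    Returns (SS_between, SS_within, SS_total) about the grand mean."""
    all_obs = [y for g in groups for y in g]
    n = len(all_obs)
    grand_mean = sum(all_obs) / n
    # Variation of each level average around the grand average (item 2)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Variation of individual observations around their level average (item 3)
    ss_within = sum(sum((y - sum(g) / len(g)) ** 2 for y in g) for g in groups)
    # Total variation about the grand mean
    ss_total = sum((y - grand_mean) ** 2 for y in all_obs)
    return ss_between, ss_within, ss_total
```

For any data set, SS_between + SS_within equals SS_total, which is the accounting identity that ANOVA rests on.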
Two-way ANOVA is the next-highest order of ANOVA to review; there are two
controlled parameters in this experimental situation. The graphical representations are
discontinued here, although their use is still possible.
Whereas one-way analysis of variance (ANOVA) measures the significant effects of one
factor only, two-way ANOVA tests (also called two-factor analysis of variance) measure the
effects of two factors simultaneously. For example, an experiment might be defined by two
parameters, such as treatment and time point. One-way ANOVA tests would be able to assess
only the treatment effect or the time effect. Two-way ANOVA, on the other hand, would not
only be able to assess both time and treatment in the same test, but also whether there is an
interaction between the parameters. A two-way test generates three p-values: one for each
parameter independently, and one measuring the interaction between the two parameters.
Two-Way ANOVA Prerequisites: Before running a two-way ANOVA test,
the experimental data must meet these prerequisites:
1. The populations from which the samples are drawn must be approximately normally
distributed.
2. The samples must be independent and have approximately equal variances.
3. Two-way ANOVA is most powerful when the experiment has the same number of replicates
in each group defined by the pair of parameters. This is called a balanced design. However,
two-way tests can also be applied to proportional-design experiments, where the proportion of
samples across each parameter group is retained.
Experiments with mild deviations from a proportional design may still be analysed, but
experiments with highly disproportional design cannot be analysed using Two-Way ANOVA.
Two-way tests can also be run on data with only one replicate per group or condition;
however, the interaction between the factors cannot then be tested.
Two-Way ANOVA is best to use when the experiment is designed to measure two
different factors, or when we want to measure these factors simultaneously.
SS_T = Σ yi² − T²/N
SS_A = (A1² + A2² + …)/n_A − T²/N
SS_B = (B1² + B2² + …)/n_B − T²/N
where A1, A2, … and B1, B2, … are the totals of the observations at each level of factors A
and B, and n_A, n_B are the numbers of observations per level.
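As a sketch, these sums of squares can be computed from the raw observations and the level totals. The function names are illustrative, and the factor formula assumes an equal number of observations per level:

```python
def ss_total(y):
    """SS_T = sum(yi^2) - T^2/N, with T the grand total of all N observations."""
    T = sum(y)
    N = len(y)
    return sum(v * v for v in y) - T * T / N

def ss_factor(level_totals, obs_per_level, T, N):
    """SS for one factor from its level totals (A1, A2, ...):
    SS_A = sum(Ai^2)/n_A - T^2/N, assuming n_A observations per level."""
    return sum(a * a / obs_per_level for a in level_totals) - T * T / N
```

For example, with observations [1, 2, 3, 4] (T = 10, N = 4) and factor A grouping them into levels with totals A1 = 3 and A2 = 7 (two observations each), SS_T = 5 and SS_A = 4.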
Three-Way ANOVA: Three-way ANOVA entails three controlled factors in an
experiment.
SS_T = Σ yi² − T²/N
SS_A×B = variation due to the interaction of factors A and B
CHAPTER-4
High-speed steel tools are the most popular for use in woodturning.
Cemented Carbides:
An extremely hard material made from tungsten carbide powder. Carbide
tools are usually used in the form of brazed or clamped tips. High cutting speeds
may be used, and materials difficult to cut with HSS may be readily machined using a
carbide-tipped tool.
Spindle speed:
The rotational speed of the spindle and tool in revolutions per minute (RPM).
The spindle speed is equal to the cutting speed divided by the circumference of the tool.
Feed rate:
The speed of the cutting tool's movement relative to the work piece as the tool makes a
cut. The feed rate is measured in inches per minute (IPM) and is the product of the
cutting feed (IPR) and the spindle speed (RPM).
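The two relations above can be sketched directly; the units simply have to be consistent (e.g. inches for the diameter and inches per minute for the cutting speed). The function names are illustrative:

```python
import math

def spindle_speed_rpm(cutting_speed, tool_diameter):
    """Spindle speed (RPM) = cutting speed / tool circumference, per the text.
    cutting_speed and tool_diameter must use consistent length units."""
    circumference = math.pi * tool_diameter
    return cutting_speed / circumference

def feed_rate_ipm(feed_per_rev_ipr, rpm):
    """Feed rate (IPM) = cutting feed (IPR) x spindle speed (RPM)."""
    return feed_per_rev_ipr * rpm

# Example: a 1-inch tool at a surface speed of 100*pi in/min runs at 100 RPM;
# a cutting feed of 0.01 in/rev then gives a feed rate of 1 in/min.
rpm = spindle_speed_rpm(math.pi * 100, 1.0)
feed = feed_rate_ipm(0.01, rpm)
```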
Axial depth of cut:
The depth of the tool along its axis in the workpiece as it makes a cut. A large axial
depth of cut requires a low feed rate; otherwise it will result in a high load on the tool
and reduce tool life. Therefore, a feature is typically machined in several passes, with the
tool moving to the specified axial depth of cut on each pass.
4.2.1 CHARACTERISTICS OF MATERIAL
PHYSICAL
Aluminum is a relatively soft, durable, lightweight, ductile, and malleable metal with an
appearance ranging from silvery to dull gray, depending on the surface roughness. It is
nonmagnetic and does not easily ignite. A fresh film of aluminum serves as a good reflector
(approximately 92%) of visible light and an excellent reflector (as much as 98%) of medium
and far infrared radiation. The yield strength of pure aluminum is 7 to 11 MPa, while aluminum
alloys have yield strengths ranging from 200 MPa to 600 MPa. Aluminum has about one-third
the density and stiffness of steel. It is easily machined, cast, drawn, and extruded.
CHEMICAL
Corrosion resistance can be excellent because a thin surface layer of aluminum oxide
forms when the bare metal is exposed to air, effectively preventing further oxidation, in a
process termed passivation. The strongest aluminum alloys are less corrosion resistant due to
galvanic reactions with alloyed copper. This corrosion resistance is greatly reduced by aqueous
salts, particularly in the presence of dissimilar metals.
One of the best-known properties of aluminum is that it is light, with a density one
third that of steel, 2,700 kg/m³. The low density of aluminum accounts for its light weight
without compromising its strength.
Strength
Aluminum alloys commonly have tensile strengths of between 70 and 700 MPa.
The range for alloys used in extrusion is 150 to 300 MPa. Unlike most steel grades, aluminum
does not become brittle at low temperatures; instead, its strength increases. At high
temperatures, aluminum's strength decreases. At temperatures continuously above 100 °C,
strength is affected to the extent that the weakening must be taken into account.
Linear expansion
Compared with other metals, aluminum has a relatively large coefficient of linear
expansion. This has to be taken into account in some designs.
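The effect can be estimated with the standard relation ΔL = α·L·ΔT. The coefficients below are typical handbook values, not figures from this report:

```python
# Linear thermal expansion: delta_L = alpha * L * delta_T
ALPHA_AL = 23e-6     # 1/K, typical handbook value for aluminum
ALPHA_STEEL = 12e-6  # 1/K, typical handbook value for steel, for comparison

def expansion_mm(length_mm, delta_T_K, alpha):
    return alpha * length_mm * delta_T_K

# A 1 m aluminum bar heated by 50 K grows about 1.15 mm,
# roughly twice the growth of the same bar in steel.
print(expansion_mm(1000, 50, ALPHA_AL))     # ~1.15
print(expansion_mm(1000, 50, ALPHA_STEEL))  # ~0.60
```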
Machining
Aluminum is easily worked by most machining methods, such as milling, drilling,
cutting and punching, and the energy input during machining is low.
Formability
Aluminum's superior malleability is essential for extrusion. With the metal either hot
or cold, this property is also exploited in the rolling of strips and foils, as well as in bending
and other forming operations.
Conductivity
Aluminum is an excellent conductor of both heat and electricity; an aluminum
conductor weighs roughly half as much as a copper conductor with the same conductivity.
Joining
Features facilitating easy jointing are often incorporated into profile design. Fusion
welding, Friction Stir Welding, bonding and taping are also used for joining.
Reflectivity
Another of the properties of aluminum is that it is a good reflector of both visible light
and radiated heat.
CHAPTER-5
DESIGN OF EXPERIMENT
Here there are three parameters, A, B, and C, each at three levels. This is called
an L9 design, with the 9 indicating the nine rows, configurations, or prototypes to be tested.
Specific test characteristics for each experimental evaluation are identified in the associated
row of the table. Thus L9(3^4) denotes an array of nine experiments that can accommodate up
to four three-level factors; here three are studied. The savings in testing grow with larger arrays.
S.NO A B C
1 1 1 1
2 1 2 2
3 1 3 3
4 2 1 3
5 2 2 1
6 2 3 2
7 3 1 2
8 3 2 3
9 3 3 1
Table no 5.1 L9 Orthogonal Arrays
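The balance property of the array in Table 5.1 can be verified programmatically: in every pair of columns, each combination of levels occurs exactly once. A minimal sketch:

```python
from itertools import combinations

# The three-column L9 array from Table 5.1 (levels coded 1-3).
L9 = [
    (1, 1, 1), (1, 2, 2), (1, 3, 3),
    (2, 1, 3), (2, 2, 1), (2, 3, 2),
    (3, 1, 2), (3, 2, 3), (3, 3, 1),
]

def is_orthogonal(array):
    """Every pair of columns must contain each (level, level)
    combination the same number of times (here, exactly once)."""
    n_cols = len(array[0])
    for c1, c2 in combinations(range(n_cols), 2):
        pairs = [(row[c1], row[c2]) for row in array]
        if len(set(pairs)) != len(pairs):
            return False
    return True

print(is_orthogonal(L9))  # True
```

This balance is what allows the effect of each factor to be estimated independently from only nine runs instead of the 27 of a full factorial.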
The machining was carried out on a universal milling machine, varying one
parameter while holding the other two constant, and so on. Each workpiece measured 100 mm
in length, 50 mm in breadth and 6 mm in thickness, so a 3-foot length of aluminium was
needed to carry out the full experimental plan (9 experiments).
The initial operations were filing and grinding. These were done so as not to spoil
the work surface of the milling machine and to remove any inclination of the faces. The
machining was then carried out according to the parameter conditions, and each piece was
marked with its serial number of machining to avoid confusion later.
Fig 5.1 CNC XL MILLING MACHINE Fig 5.2 CUTTING MACHINE
Before the milling operation, the workpiece was marked and cut into pieces of
100 mm length on a power saw machine. These pieces were then ground or filed to make the
surfaces smooth. Finally, the pieces were taken to the metrology lab and tested for surface
finish with the TR200 surface roughness tester from TIME, as shown in the figure below.
CHAPTER-6
PROBLEM STATEMENT
For the experimental plan, the Taguchi method for three levels was used, with
careful attention to the levels taken by each factor. Table 1 indicates the factors to be studied
and the assignment of the corresponding levels. Following the Taguchi design concept, an L9
orthogonal array was chosen for the experiments. The plan consists of 9 tests (array rows), in
which the first column was assigned to cutting velocity (Vc), the second column to feed
rate (f), the third column to depth of cut (d), and the remaining columns to interactions. The
output studied is surface roughness (Ra).
EXPERIMENTAL SETUP
MATERIALS USED:
Tool material: High speed steel
Work piece material: Aluminium (100 × 50 × 6 mm)
PROPOSED TECHNIQUES
1. Taguchi Method
2. Analysis of Variance
CHAPTER-7
Experimental design using L9 orthogonal array:
Table no.7.2 Experimental results of Mean Surface Roughness and S/N ratio
S/N = -10 log10(y²)
The S/N ratios for the individual control factors are calculated as given below:
FOR SPEED
Ss1 = (η1+η2+η3), Ss2 = (η4+η5+η6) & Ss3 = (η7+η8+η9)
FOR DEPTH OF CUT
St1 = (η1+η4+η7), St2 = (η2+η5+η8) & St3 = (η3+η6+η9)
FOR FEED
Sf1 = (η1+η5+η9), Sf2 = (η2+η6+η7) & Sf3 = (η3+η4+η8)
For selecting the values of η1, η2, η3, etc. and calculating Ss1, Ss2 & Ss3, check the
orthogonal array in the previous chapter.
ηk is the S/N ratio corresponding to Experiment k.
Average S/N ratio corresponding to Cutting Speed at level 1 = Ss1/3
Average S/N ratio corresponding to Cutting Speed at level 2 = Ss2/3
Average S/N ratio corresponding to Cutting Speed at level 3 = Ss3/3
Here j is the corresponding level of each factor. Similarly, Sfj and Stj are calculated for feed and
depth of cut. The average of the signal to noise ratios is shown in table below. Similarly,
S/N ratios can be calculated for other factors.
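The calculation above can be sketched in a few lines of Python. The Ra values are the nine run means tabulated in the ANOVA calculation of this chapter, and the smaller-is-better form S/N = -10 log10(y²) is assumed, with one observation per run:

```python
import math

# Mean surface roughness (Ra) for runs 1..9, as tabulated in the
# ANOVA calculation of this chapter.
ra = [0.183, 0.362, 0.375, 0.368, 0.247, 0.22, 0.289, 0.271, 0.321]

# Smaller-is-better S/N ratio, one observation per run.
eta = [-10 * math.log10(y ** 2) for y in ra]

# Group sums by cutting-speed level (runs 1-3, 4-6, 7-9 of the L9).
Ss = [sum(eta[0:3]), sum(eta[3:6]), sum(eta[6:9])]

# Average S/N ratio for each speed level.
avg_speed = [s / 3 for s in Ss]
print([round(a, 2) for a in avg_speed])
```

The same grouping pattern, applied with the run indices for feed and depth of cut, gives the remaining columns of Table 7.3.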
LEVEL   SPEED: Sum Ssj, Avg S/N   FEED: Sum Sfj, Avg S/N   DEPTH OF CUT: Sum Stj, Avg S/N
Table no 7.3 S/N Ratio for Speed, Feed and Depth of Cut
Sum of all nine responses, T = 0.183+0.362+0.375+0.368+0.247+0.22+0.289+0.271+0.321
= 2.636
SSa = Variation due to factor A
A1=0.183+0.362+0.375 = 0.92
A2=0.368+0.247+0.22 = 0.835
A3=0.289+0.271+0.321 = 0.881
SSa = 0.2821+0.2324+0.2587-0.772
= 0.0012
SSb = Variation due to factor B
B1=0.183+0.368+0.289 = 0.84
B2=0.362+0.247+0.271 = 0.88
B3=0.375+0.22+0.321 = 0.916
SSb = 0.2352+0.2581+0.2796-0.772
= 0.0009
SSc = Variation due to factor C
C1=0.183+0.22+0.271 = 0.674
C2=0.362+0.368+0.321 = 1.051
C3=0.375+0.247+0.289 = 0.911
SSc = 0.1514+0.3682+0.2766-0.772
= 0.0242
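The sums of squares above follow the standard decomposition SS = Σ(level total²)/n − T²/N. A sketch reproducing this chapter's calculation from the tabulated run means (the level groupings follow the formulas used above, with factor C recomputed directly from the run means):

```python
# Mean surface roughness (Ra) for runs 1..9, from this chapter.
ra = [0.183, 0.362, 0.375, 0.368, 0.247, 0.22, 0.289, 0.271, 0.321]

T = sum(ra)            # grand total, 2.636
CF = T ** 2 / len(ra)  # correction factor T^2/N

def ss_factor(groups):
    """Sum of squares for a factor whose levels group the runs
    as given (1-based run indices)."""
    return sum(sum(ra[i - 1] for i in g) ** 2 / len(g) for g in groups) - CF

SSa = ss_factor([(1, 2, 3), (4, 5, 6), (7, 8, 9)])  # factor A (speed)
SSb = ss_factor([(1, 4, 7), (2, 5, 8), (3, 6, 9)])  # factor B
SSc = ss_factor([(1, 6, 8), (2, 4, 9), (3, 5, 7)])  # factor C
print(round(SSa, 4), round(SSb, 4), round(SSc, 4))
```

Factor C yields by far the largest sum of squares, consistent with the conclusion below that depth of cut dominates the response.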
Table 7.4 shows the ANOVA results for surface roughness. From the results it is
observed that depth of cut is the most significant parameter, followed by speed and feed. That
is, among the three cutting parameters, depth of cut has the strongest influence on workpiece
surface roughness.
7.3 MAIN EFFECT PLOT ANALYSIS
The data were further analysed to study the effect of the cutting parameters on surface
roughness. From the S/N ratios given in tables 7.5 and 7.6, main effect plots were drawn
using MINITAB 17 software and are shown in figs 7.1 and 7.2 respectively. The plots show
the variation of the response with changes in the cutting parameters. These main effect plots
are used to determine the optimal design conditions for obtaining low surface roughness.
SMALLER IS BETTER
Fig no 7.1 Main effect plot for means
From the above figures and tables, it is observed that increasing the levels of speed and
feed produces little change in the response, whereas increasing the level of depth of cut
produces a significant change.
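Reading the optimum off the main effect plots amounts to choosing, for each factor, the level with the best (here, lowest) mean response. Using the level totals recomputed from the run means in the ANOVA section:

```python
# Level totals of mean Ra (recomputed from the nine run means in the
# ANOVA section; A = cutting speed, C = depth of cut per this chapter).
level_totals = {
    "A": [0.92, 0.835, 0.881],
    "B": [0.84, 0.88, 0.916],
    "C": [0.674, 1.051, 0.911],
}

# Smaller is better: the optimal level minimises the mean response.
optimum = {f: min(range(3), key=lambda i: totals[i]) + 1
           for f, totals in level_totals.items()}
print(optimum)  # {'A': 2, 'B': 1, 'C': 1}
```

Note that level 1 of factor C stands out most clearly, matching the observation that depth of cut drives the response.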
CONCLUSION
FUTURE SCOPE
The present work can be extended to different process parameter conditions at
different levels and to different materials.
The work can be repeated with carbide tools rather than HSS, and by using an
L27 orthogonal array better resolution can be obtained.
Techniques such as improved genetic algorithms and particle swarm optimisation
can be applied, and many other materials and insert geometries can be investigated.
REFERENCES