
Background

Preventive maintenance

Preventive maintenance is a schedule of planned maintenance actions aimed at the prevention of breakdowns and failures.

The primary goal of preventive maintenance is to prevent the failure of equipment before it actually occurs. Preventive maintenance activities include equipment checks, partial or complete overhauls at specified periods, oil changes, and lubrication. In addition, workers can record equipment deterioration so they know to replace or repair worn parts before they cause system failure. Preventive maintenance is part of equipment maintenance planning.

Equipment Maintenance Plan

The Equipment Maintenance Plan, or EMP as it is commonly called, is a document, in table format, that is used when developing the tasks needed to properly maintain facility, plant or process equipment. The EMP guides the person or persons developing the required maintenance tasks by ensuring that the development is done consistently for all equipment. Each EMP should include one or more maintenance tasks designed to ensure the continued operation and maintenance of an equipment item, process or system. Each of these tasks has the following characteristics:

- A descriptive title for the maintenance task to be performed
- A frequency assigned for performing the task
- Assignment of a specific craft or workgroup, and the number of each craft or workgroup required to perform the task
- The equipment condition required for performance of the task (i.e. running or shut down)
- The type of work: Preventive Maintenance (PM), Predictive Maintenance (PdM), Corrective Maintenance (CM), Situational Maintenance (SIT), etc.
- A procedure number: a unique identifier for the task, or a file name if linked to another document that gives the individual task instructions
- The estimated time to perform the task
- Special tools, materials and equipment required to perform the task

The EMP can also provide the following additional planning and budgeting information if set up properly in a spreadsheet format:

- Annualized hours for performing the task
- Annualized hours for shut-down of the equipment during performance of the task
- Annualized hours for performance of the task by craft
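Where a script is used instead of a spreadsheet, the same figures can be computed directly. The following minimal Python sketch is an illustration only; the field names and numbers are hypothetical rather than part of any standard EMP schema.

    # Hypothetical illustration: annualizing EMP task hours in code rather
    # than a spreadsheet. Field names are assumptions, not a standard schema.
    def annualized_hours(times_per_year, hours_per_task, crew_size):
        """Total labour hours per year for one EMP task."""
        return times_per_year * hours_per_task * crew_size

    # e.g. a monthly 2-hour lubrication task performed by a 2-person crew:
    print(annualized_hours(12, 2.0, 2))  # 48.0 labour hours per year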

What is involved in preventive maintenance?

PM involves maintaining equipment and facilities in satisfactory operating condition by providing for systematic inspection, detection, and correction of incipient failures either before they occur or before they develop into major defects. It includes tests, measurements, adjustments, and parts replacement. A good preventive maintenance plan involves the following:

- Identification of assets
- Standard operating procedures
- Scheduling of maintenance resources

This project focuses mainly on the last of these activities, i.e. the scheduling of maintenance resources. As a scheduling problem, the scheduling of maintenance resources is closely related to other scheduling problems such as school timetabling. For this reason, much of the emphasis in what follows is on timetabling problems and approaches to solving them.

Terminology

State space search: a process used in the field of computer science, including artificial intelligence (AI), in which successive configurations or states of an instance are considered, with the goal of finding a goal state with a desired property. Problems are often modelled as a state space, a set of states that a problem can be in. The set of states forms a graph in which two states are connected if there is an operation that can be performed to transform the first state into the second.

Schedule: a plan for carrying out a process or procedure, giving lists of intended events and times. It is synonymous with a timetable.

Constraint satisfaction problems (CSP): problems in computing that involve finding values for problem variables subject to constraints on which combinations are acceptable.

Partial solution: a transformation to a state that could lead to a full solution (or goal state in this respect).

Full solution: a state in the problem world that is considered the goal state or solution to the problem we seek to solve. In this context, a goal state is an optimized, workable machine maintenance schedule.

Backtracking algorithm: a search algorithm that is capable of returning to a previous state when a choice of states leads to a dead end.

Local optimum: a solution to a problem in the state space that is not the best solution attainable in the search space.

Global optimum: the best solution attainable in the search space. There is usually only one global optimum.

Problem Solving as state space search

This involves solving a problem by searching. NASA has explored this approach in space exploration, where a complex mission must:

- Plan complex sequences of actions
- Schedule actions
- Allocate tight resources
- Monitor and diagnose behaviour
- Repair or reconfigure hardware

Intelligent agents are supposed to act in such a way that the environment goes through a sequence of states that maximizes the performance measure. In its full generality, this specification is difficult to translate into a successful agent design. A goal is a set of world states: just those states in which the goal is satisfied. Actions can be viewed as causing transitions between world states, so the agent has to find out which actions will get it to a goal state. Before it can do this, it needs to decide what sorts of actions and states to consider. Problem solving involves four main steps:

a) Goal formulation: coming up with a definition of a goal state, i.e. deciding which state(s) are considered the solution to the problem.

b) Problem formulation: the process of deciding what actions and states to consider; it follows goal formulation.

c) Search for a solution: an agent with several immediate options of unknown value can decide what to do by first examining different possible sequences of actions that lead to states of known value, and then choosing the best one. This process of looking for such a sequence is called search. A search algorithm takes a problem as input and returns a solution in the form of an action sequence.

d) Execution phase: once a solution is found, the actions it recommends can be carried out.

Thus, we have a simple "formulate, search, execute" design for the agent, as sketched below. After formulating a goal and a problem to solve, the agent calls a search procedure to solve it. It then uses the solution to guide its actions, doing whatever the solution recommends as the next thing to do, and then removing that step from the sequence. Once the solution has been executed, the agent will find a new goal.
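The loop described above can be sketched in a few lines of Python. This is an illustrative skeleton, not a full agent: formulate_goal, formulate_problem, search, and execute are placeholders the reader would supply for a concrete domain such as maintenance scheduling.

    # A minimal sketch of the "formulate, search, execute" design above.
    # All four callables are hypothetical placeholders for a concrete domain.
    def simple_problem_solving_agent(percept, formulate_goal,
                                     formulate_problem, search, execute):
        goal = formulate_goal(percept)               # goal formulation
        problem = formulate_problem(percept, goal)   # problem formulation
        seq = search(problem)                        # search for an action sequence
        if seq is None:
            return False                             # search failed
        while seq:
            action = seq.pop(0)                      # next recommended action
            execute(action)                          # carry it out, then drop it
        return True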

The success of finding a solution to the problem relies heavily on phases b and c of the above steps, and the following two subtopics therefore illustrate these two steps in depth, with examples.

Problem formulation

As mentioned earlier, this involves the process of deciding which states and actions to consider. This problem-solving technique is only possible if the problem and the solution are well defined. A problem (in the context of well-defined problems and solutions) is really a collection of information that the agent will use to decide what to do. This approach begins by specifying the information needed to define a single-state problem. The basic elements of a problem definition are the states and actions. To capture these formally, we need the following:

- The initial state that the agent knows itself to be in.
- The set of possible actions available to the agent. The term operator is used to denote the description of an action in terms of which state will be reached by carrying out the action in a particular state. (An alternate formulation uses a successor function S: given a particular state x, S(x) returns the set of states reachable from x by any single action.)

Together, these define the state space of the problem: the set of all states reachable from the initial state by any sequence of actions. A path in the state space is simply any sequence of actions leading from one state to another. The next element of a problem is the following:

- The goal test, which the agent can apply to a single state description to determine if it is a goal state. Sometimes there is an explicit set of possible goal states, and the test simply checks to see if we have reached one of them. Sometimes the goal is specified by an abstract property rather than an explicitly enumerated set of states. For example, in chess, the goal is to reach a state called "checkmate", where the opponent's king can be captured on the next move no matter what the opponent does.

Finally, it may be the case that one solution is preferable to another, even though they both reach the goal. For example, we might prefer paths with fewer or less costly actions.

- A path cost function assigns a cost to a path. In all cases we will consider, the cost of a path is the sum of the costs of the individual actions along the path. The path cost function is often denoted by g.

Together, the initial state, operator set, goal test, and path cost function define a problem. Naturally, we can then define a data type with which to represent problems.

Example problem: the 8-puzzle

The 8-puzzle, an instance of which is shown in the figure below, consists of a 3x3 board with eight numbered tiles and a blank space. A tile adjacent to the blank space can slide into the space. The object is to reach the configuration shown on the right of the figure. One important trick is to notice that rather than using operators such as "move the 3 tile into the blank space," it is more sensible to have operators such as "the blank space changes places with the tile to its left," because there are fewer of the latter kind of operator. This leads us to the following formulation:

- States: a state description specifies the location of each of the eight tiles in one of the nine squares. For efficiency, it is useful to include the location of the blank.
- Operators: the blank moves left, right, up, or down.

- Goal test: the state matches the goal configuration shown in the figure below.
- Path cost: each step costs 1, so the path cost is just the length of the path.
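The formulation translates directly into code. The following Python sketch assumes a particular goal layout (the figures are not reproduced here) and represents a state as a tuple of nine entries with 0 marking the blank.

    # A sketch of the 8-puzzle formulation above. The GOAL layout is an
    # assumption, since the original figure is not shown here.
    GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

    def successors(state):
        """Return the states reachable from `state` by one blank move."""
        i = state.index(0)                # position of the blank
        row, col = divmod(i, 3)
        moves = []
        if col > 0: moves.append(i - 1)   # blank moves left
        if col < 2: moves.append(i + 1)   # blank moves right
        if row > 0: moves.append(i - 3)   # blank moves up
        if row < 2: moves.append(i + 3)   # blank moves down
        result = []
        for j in moves:
            s = list(state)
            s[i], s[j] = s[j], s[i]       # swap blank with adjacent tile
            result.append(tuple(s))
        return result

    def goal_test(state):
        return state == GOAL              # path cost: each move costs 1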

Searching for solution

A solution is found by searching through the state space. The idea is to maintain and extend a set of partial solution sequences.

Generating action sequences

To solve the 8-puzzle from start to goal state, for example, we start off with just the initial state, as shown above. The first step is to test whether this is a goal state. Clearly it is not, but it is important to check so that we can handle trick problems such as a situation in which the puzzle is already solved, i.e. start state = goal state. Because this is not a goal state, we need to consider some other states. This is done by applying the operators to the current state, thereby generating a new set of states. The process is called expanding the state. The choice of which state to expand first is determined by the search strategy. The figure below gives an illustration of the expansion process.

In state space search, the search for a solution is accomplished using a search tree. As the figure above shows, the expansion of the states resembles a tree structure. The root of the search tree is a search node corresponding to the initial state. The leaf nodes of the tree correspond to states that do not have successors in the tree, either because they have not been expanded yet or because they were expanded but generated the empty set. At each step, the search algorithm chooses one leaf node to expand. The following general search algorithm is used in the search for the goal state.

function GENERAL-SEARCH(problem, strategy) returns a solution, or failure
    initialize the search tree using the initial state of problem
    loop do
        if there are no candidates for expansion then return failure
        choose a leaf node for expansion according to strategy
        if the node contains a goal state then return the corresponding solution
        else expand the node and add the resulting nodes to the search tree
    end
end function
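As an illustration, the following Python sketch renders GENERAL-SEARCH concretely. The fringe representation, the successors function, and the choose strategy are assumptions of this sketch rather than a fixed interface; varying choose yields different search strategies.

    # A minimal Python rendering of GENERAL-SEARCH. The strategy is modelled
    # as a function that picks which fringe node to expand next; `successors`
    # returns (action, state) pairs. A sketch, not a production search.
    def general_search(initial, goal_test, successors, choose):
        fringe = [(initial, [])]                    # (state, actions so far)
        while fringe:
            index = choose(fringe)                  # strategy picks a leaf node
            state, path = fringe.pop(index)
            if goal_test(state):
                return path                         # solution: action sequence
            for action, nxt in successors(state):   # expand the node
                fringe.append((nxt, path + [action]))
        return None                                 # no candidates left: failure

    # choose=lambda fringe: 0 gives breadth-first search;
    # choose=lambda fringe: len(fringe) - 1 gives depth-first search.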
Using the above approach, timetabling problems have been formulated in terms of a problem, a problem space, and search.

Scheduling problem as a Constraint Satisfaction Problem

A constraint logic programming (CLP) system (Jaffar and Lassez 1987) is a tool for modeling a specific search problem, which provides the ability to declare variables and their domains, and to place constraints. In order to search for a solution, a CLP system generates values for the variables, propagating values through the constraints in order to prune parts of the solution space where inconsistencies are discovered. The basic method is therefore a backtrack search in which the constraints allow the system to look ahead to the consequences of decisions and spot failure earlier, as sketched below. For optimization problems, CLP systems provide a solution technique based on a form of depth-first branch-and-bound search. Azevedo and Barahona (1994) deal with the timetabling problem using a CLP language called DOMLOG. DOMLOG extends CHIP (Van Hentenryck 1991), a popular CLP language, with features such as user-defined heuristics and flexible look-ahead constraint solving. In particular, DOMLOG allows the user to specify a finite domain for the variables. In addition, the user can specify the heuristics for selecting which value of a domain to assign first to a given variable. Several other authors have recently employed constraint logic programming for course timetabling with good success. Frangouli et al. (1995) and Gueret et al. (1995) solve their course timetabling problems using the finite domain libraries of the logic programming languages ECLiPSe (ECRC 1995) and CHIP, respectively. Henz and Wurtz (1995) rely on the OZ system (Smolka 1995), which is a multi-paradigm concurrent constraint language.
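The backtrack-with-early-failure idea underlying these CLP systems can be sketched in plain Python, without any CLP machinery. The domains, constraints, and variables below are hypothetical placeholders; constraints are modelled as predicates over the partial assignment so that failures are spotted as soon as they occur.

    # A sketch of backtrack search over a CSP: assign variables one at a
    # time, and abandon a value as soon as any constraint on the partial
    # assignment fails, so failure is spotted early.
    def backtrack(assignment, domains, constraints, variables):
        if len(assignment) == len(variables):
            return assignment                         # all variables assigned
        var = next(v for v in variables if v not in assignment)
        for value in domains[var]:
            assignment[var] = value
            if all(c(assignment) for c in constraints):
                result = backtrack(assignment, domains, constraints, variables)
                if result is not None:
                    return result
            del assignment[var]                       # undo and try next value
        return None                                   # dead end: backtrack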

Search Control Strategies


Search is the systematic examination of paths to find a path from the start state to the goal state. Search is usually needed where knowledge alone is insufficient, and it explores alternatives to arrive at the best answer. The output of a search is a solution, usually a path from the initial state to a state that satisfies the goal state. For general-purpose problem solving, search is the standard approach. Search deals with finding goals that have certain properties in the search space. Search methods explore the search space intelligently, evaluating possibilities without investigating every possible node. The following diagram gives the available search optimization techniques in a problem space:

Search Algorithms That Have Been Used in Timetabling Problems

A. Hill Climbing Algorithm

Hill climbing is a mathematical optimization technique belonging to the family of local search methods. It is an iterative algorithm that starts with an arbitrary solution to a problem, then attempts to find a better solution by incrementally changing a single element of the solution. If the change produces a better solution, an incremental change is made to the new solution, and the process repeats until no further improvements can be found. Hill climbing is good for finding a local optimum (a solution that cannot be improved by considering a neighbouring configuration) but it is not guaranteed to find the best possible solution (the global optimum) out of all possible solutions (the search space). The fact that only local optima are guaranteed can be cured by using restarts (repeated local search); by more complex iteration-based schemes such as iterated local search; by memory-based schemes such as reactive search optimization and tabu search; or by memory-less stochastic modifications such as simulated annealing. The relative simplicity of the algorithm makes it a popular first choice amongst optimizing algorithms. It is used widely in artificial intelligence for reaching a goal state from a starting node. The choice of next node and starting node can be varied to give a list of related algorithms. Although more advanced algorithms such as simulated annealing or tabu search may give better results, in some situations hill climbing works just as well. Hill climbing can often produce a better result than other algorithms when the amount of time available to perform a search is limited, such as with real-time systems. It is an anytime algorithm: it can return a valid solution even if it is interrupted at any time before it ends.
Discrete Space Hill Climbing Algorithm

currentNode = startNode;
loop do
    L = NEIGHBORS(currentNode);
    nextEval = -INF;
    nextNode = NULL;
    for all x in L
        if (EVAL(x) > nextEval)
            nextNode = x;
            nextEval = EVAL(x);
    if nextEval <= EVAL(currentNode)
        // Return current node since no better neighbors exist
        return currentNode;
    currentNode = nextNode;

Continuous Space Hill Climbing Algorithm

currentPoint = initialPoint;     // the zero-magnitude vector is common
stepSize = initialStepSizes;     // a vector of all 1's is common
acceleration = someAcceleration; // a value such as 1.2 is common
candidate[0] = -acceleration;
candidate[1] = -1 / acceleration;
candidate[2] = 0;
candidate[3] = 1 / acceleration;
candidate[4] = acceleration;

loop do
    before = EVAL(currentPoint);
    for each element i in currentPoint do
        best = -1;
        bestScore = -INF;
        for j from 0 to 4    // try each of 5 candidate locations
            currentPoint[i] = currentPoint[i] + stepSize[i] * candidate[j];
            temp = EVAL(currentPoint);
            currentPoint[i] = currentPoint[i] - stepSize[i] * candidate[j];
            if (temp > bestScore)
                bestScore = temp;
                best = j;
        if candidate[best] is not 0
            currentPoint[i] = currentPoint[i] + stepSize[i] * candidate[best];
            stepSize[i] = stepSize[i] * candidate[best];    // accelerate
    if (EVAL(currentPoint) - before) < epsilon
        return currentPoint;

Problems of hill-climbing algorithms

1. Local maxima: a problem with hill climbing is that it will find only local maxima. Unless the heuristic is convex, it may not reach a global maximum. Other local search algorithms, such as stochastic hill climbing, random walks, and simulated annealing, try to overcome this problem, as does the random-restart scheme sketched below.

2. Ridges and valleys: ridges are a challenging problem for hill climbers that optimize in continuous spaces. Because hill climbers only adjust one element in the vector at a time, each step will move in an axis-aligned direction. If the target function creates a narrow ridge that ascends in a non-axis-aligned direction (or, if the goal is to minimize, a narrow alley that descends in a non-axis-aligned direction), then the hill climber can only ascend the ridge (or descend the alley) by zig-zagging. If the sides of the ridge (or alley) are very steep, the hill climber may be forced to take very tiny steps as it zig-zags toward a better position, and it may therefore take an unreasonable length of time to ascend the ridge (or descend the alley).

3. Plateaux: another problem that sometimes occurs with hill climbing is that of a plateau. A plateau is encountered when the search space is flat, or sufficiently flat that the value returned by the target function is indistinguishable from the value returned for nearby regions, due to the precision used by the machine to represent its value. In such cases, the hill climber may not be able to determine in which direction it should step, and may wander in a direction that never leads to improvement.
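The first of these problems motivates the restart cure mentioned earlier. The following Python sketch assumes hypothetical random_solution, neighbours, and evaluate functions for the problem at hand, such as a maintenance schedule and its neighbourhood moves.

    # A sketch of hill climbing with random restarts, one cure for local
    # maxima. random_solution, neighbours, and evaluate are placeholders.
    def hill_climb(start, neighbours, evaluate):
        current = start
        while True:
            best = max(neighbours(current), key=evaluate, default=None)
            if best is None or evaluate(best) <= evaluate(current):
                return current             # no better neighbour: local optimum
            current = best

    def restart_hill_climb(random_solution, neighbours, evaluate, restarts=20):
        candidates = (hill_climb(random_solution(), neighbours, evaluate)
                      for _ in range(restarts))
        return max(candidates, key=evaluate)   # best local optimum found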

B. Simulated annealing

Simulated annealing (SA) is a random-search technique which exploits an analogy between the way in which a metal cools and freezes into a minimum-energy crystalline structure (the annealing process) and the search for a minimum in a more general system; it forms the basis of an optimization technique for combinatorial and other problems. SA approaches the global maximisation problem similarly to using a bouncing ball that can bounce over mountains from valley to valley. It begins at a high "temperature" which enables the ball to make very high bounces, allowing it to bounce over any mountain to access any valley, given enough bounces. As the temperature declines the ball cannot bounce so high, and it can also settle to become trapped in relatively small ranges of valleys. A generating distribution generates possible valleys or states to be explored. An acceptance distribution is also defined, which depends on the difference between the function value of the present generated valley to be explored and the last saved lowest valley. The acceptance distribution decides probabilistically whether to stay in a new lower valley or to bounce out of it. Both the generating and acceptance distributions depend on the temperature. It has been proved that by carefully controlling the rate of cooling of the temperature, SA can find the global optimum; however, this requires infinite time. In the simulated annealing method, each point s of the search space is analogous to a state of some physical system, and the function E(s) to be minimized is analogous to the internal energy of the system in that state. The goal is to bring the system from an arbitrary initial state to a state with the minimum possible energy. At each step, the SA heuristic considers some neighbouring state s' of the current state s, and probabilistically decides between moving the system to state s' or staying in state s. These probabilities ultimately lead the system to move to states of lower energy. Typically this step is repeated until the system reaches a state that is good enough for the application, or until a given computation budget has been exhausted.

The algorithm
s ← s0; e ← E(s)                     // Initial state, energy.
sbest ← s; ebest ← e                 // Initial "best" solution.
k ← 0                                // Energy evaluation count.
while k < kmax and e > emax          // While time left & not good enough:
    T ← temperature(k/kmax)          // Temperature calculation.
    snew ← neighbour(s)              // Pick some neighbour.
    enew ← E(snew)                   // Compute its energy.
    if P(e, enew, T) > random() then // Should we move to it?
        s ← snew; e ← enew           // Yes, change state.
    if e < ebest then                // Is this a new best?
        sbest ← snew; ebest ← enew   // Save 'new neighbour' to 'best found'.
    k ← k + 1                        // One more evaluation done.
return sbest                         // Return the best solution found.
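For concreteness, here is a runnable Python rendering of the loop above, minimizing a toy function. The exponential acceptance rule and the geometric cooling schedule are common choices assumed for this sketch, not the only possibilities.

    import math
    import random

    # A runnable sketch of the SA loop above. The exp acceptance rule and
    # geometric cooling are assumptions of this sketch.
    def simulated_annealing(s0, energy, neighbour,
                            kmax=10000, t0=1.0, cooling=0.999):
        s, e = s0, energy(s0)
        s_best, e_best = s, e
        t = t0
        for _ in range(kmax):
            s_new = neighbour(s)
            e_new = energy(s_new)
            # Always accept improvements; accept worse moves with probability
            # exp(-(e_new - e) / t), which shrinks as the temperature falls.
            if e_new < e or random.random() < math.exp(-(e_new - e) / t):
                s, e = s_new, e_new
                if e < e_best:
                    s_best, e_best = s, e
            t *= cooling
        return s_best

    # Example: minimize (x - 3)^2 over the reals.
    result = simulated_annealing(
        0.0,
        energy=lambda x: (x - 3) ** 2,
        neighbour=lambda x: x + random.uniform(-0.5, 0.5),
    )
    print(round(result, 2))  # approximately 3.0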

C. Genetic algorithm

A genetic algorithm (GA) is a search heuristic that mimics the process of natural evolution. This heuristic is routinely used to generate useful solutions to optimization and search problems. In a genetic algorithm, a population of strings (called chromosomes, or the genotype of the genome), which encode candidate solutions (called individuals, creatures, or phenotypes) to an optimization problem, evolves toward better solutions. Traditionally, solutions are represented in binary as strings of 0s and 1s, but other encodings are also possible. The evolution usually starts from a population of randomly generated individuals and happens in generations. In each generation, the fitness of every individual in the population is evaluated, multiple individuals are stochastically selected from the current population (based on their fitness) and modified (recombined and possibly randomly mutated) to form a new population. The new population is then used in the next iteration of the algorithm. Commonly, the algorithm terminates when either a maximum number of generations has been produced or a satisfactory fitness level has been reached for the population. If the algorithm has terminated due to reaching the maximum number of generations, a satisfactory solution may or may not have been found.

A typical genetic algorithm requires:

1. A genetic representation of the solution domain;
2. A fitness function to evaluate the solution domain.

A standard representation of the solution is an array of bits. Arrays of other types and structures can be used in essentially the same way. The main property that makes these genetic representations convenient is that their parts are easily aligned due to their fixed size, which facilitates simple crossover operations. Variable-length representations may also be used, but crossover implementation is more complex in this case. Tree-like representations are explored in genetic programming and graph-form representations are explored in evolutionary programming; a mix of both linear chromosomes and trees is explored in gene expression programming.

The fitness function is defined over the genetic representation and measures the quality of the represented solution. The fitness function is always problem dependent. For instance, in the knapsack problem one wants to maximize the total value of objects that can be put in a knapsack of some fixed capacity. A representation of a solution might be an array of bits, where each bit represents a different object, and the value of the bit (0 or 1) represents whether or not the object is in the knapsack. Not every such representation is valid, as the size of objects may exceed the capacity of the knapsack. The fitness of the solution is the sum of values of all objects in the knapsack if the representation is valid, or 0 otherwise; a sketch is given below. In some problems, it is hard or even impossible to define the fitness expression; in these cases, interactive genetic algorithms are used. Once the genetic representation and the fitness function are defined, a GA proceeds to initialize a population of solutions (usually randomly) and then to improve it through repetitive application of the mutation, crossover, inversion and selection operators.
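The knapsack fitness function just described can be written down directly. The item values, weights, and capacity below are made-up illustration data.

    # The knapsack fitness described above, for a bit-array chromosome.
    # VALUES, WEIGHTS, and CAPACITY are hypothetical illustration data.
    VALUES = [60, 100, 120, 30]
    WEIGHTS = [10, 20, 30, 5]
    CAPACITY = 50

    def fitness(chromosome):
        """Sum of values if the selection fits in the knapsack, else 0."""
        weight = sum(w for w, bit in zip(WEIGHTS, chromosome) if bit)
        if weight > CAPACITY:
            return 0                      # invalid: items exceed capacity
        return sum(v for v, bit in zip(VALUES, chromosome) if bit)

    print(fitness([1, 1, 0, 1]))  # 190 (weight 35 <= 50)
    print(fitness([1, 1, 1, 1]))  # 0   (weight 65 > 50)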

1. Initialization

Initially, many individual solutions are (usually) randomly generated to form an initial population. The population size depends on the nature of the problem, but typically contains several hundred or thousands of possible solutions. Traditionally, the population is generated randomly, covering the entire range of possible solutions (the search space). Occasionally, the solutions may be "seeded" in areas where optimal solutions are likely to be found.

2. Selection

During each successive generation, a proportion of the existing population is selected to breed a new generation. Individual solutions are selected through a fitness-based process, where fitter solutions (as measured by a fitness function) are typically more likely to be selected. Certain selection methods rate the fitness of each solution and preferentially select the best solutions; a fitness-proportionate sketch appears after this list. Other methods rate only a random sample of the population, as the former process may be very time-consuming.

3. Reproduction (crossover and mutation)

The next step is to generate a second-generation population of solutions from those selected, through the genetic operators crossover (also called recombination) and/or mutation. For each new solution to be produced, a pair of "parent" solutions is selected for breeding from the pool selected previously. By producing a "child" solution using crossover and mutation, a new solution is created which typically shares many of the characteristics of its "parents". New parents are selected for each new child, and the process continues until a new population of solutions of appropriate size is generated. Although reproduction methods based on two parents are more "biology inspired", some research suggests that more than two "parents" generate higher-quality chromosomes. These processes ultimately result in a next-generation population of chromosomes that is different from the initial generation. Generally the average fitness of the population will have increased by this procedure, since only the best organisms from the first generation are selected for breeding, along with a small proportion of less fit solutions, for reasons already mentioned above. Although crossover and mutation are known as the main genetic operators, it is possible to use other operators such as regrouping, colonization-extinction, or migration in genetic algorithms.

4. Termination

This generational process is repeated until a termination condition has been reached. Common terminating conditions are:

- A solution is found that satisfies minimum criteria
- A fixed number of generations is reached
- The allocated budget (computation time/money) is reached
- The highest-ranking solution's fitness is reaching or has reached a plateau such that successive iterations no longer produce better results
- Manual inspection
- Combinations of the above
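The fitness-proportionate sketch referred to under step 2 follows; the population is assumed to be any sequence of individuals with a non-negative fitness function.

    import random

    # A sketch of fitness-proportionate ("roulette wheel") selection, one of
    # the fitness-based selection methods described under step 2 above.
    def roulette_select(population, fitness):
        weights = [fitness(ind) for ind in population]
        total = sum(weights)
        if total == 0:
            return random.choice(population)  # degenerate: all fitness zero
        pick = random.uniform(0, total)
        running = 0.0
        for ind, w in zip(population, weights):
            running += w
            if running >= pick:
                return ind                    # fitter individuals chosen more often
        return population[-1]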

A general genetic algorithm procedure


- Choose the initial population of individuals
- Evaluate the fitness of each individual in that population
- Repeat on this generation until termination (time limit, sufficient fitness achieved, etc.):
    - Select the best-fit individuals for reproduction
    - Breed new individuals through crossover and mutation operations to give birth to offspring
    - Evaluate the individual fitness of the new individuals
    - Replace the least-fit individuals in the population with the new individuals
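Putting the pieces together, the following Python sketch follows the procedure above for bit-string chromosomes, reusing the knapsack fitness sketched earlier. The population size, rates, elitism, and truncation selection are illustrative choices, not prescribed by the general procedure.

    import random

    # A compact sketch of the GA procedure above for bit-string chromosomes.
    # Rates and sizes are illustrative assumptions.
    def genetic_algorithm(fitness, n_bits, pop_size=50, generations=100,
                          crossover_rate=0.9, mutation_rate=0.02):
        pop = [[random.randint(0, 1) for _ in range(n_bits)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            next_pop = pop[:2]                       # elitism: keep the two best
            while len(next_pop) < pop_size:
                p1, p2 = random.sample(pop[:pop_size // 2], 2)  # fitter half
                if random.random() < crossover_rate: # one-point crossover
                    cut = random.randrange(1, n_bits)
                    child = p1[:cut] + p2[cut:]
                else:
                    child = p1[:]
                for i in range(n_bits):              # bit-flip mutation
                    if random.random() < mutation_rate:
                        child[i] = 1 - child[i]
                next_pop.append(child)
            pop = next_pop
        return max(pop, key=fitness)

    best = genetic_algorithm(fitness, n_bits=4)      # knapsack fitness from above
    print(best, fitness(best))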

Problems with GA

- State encoding is the real ingenuity, not the decision to use a genetic algorithm.
- Lack of diversity can lead to premature convergence and a non-optimal solution.
- There is not much that can be said theoretically.
- Crossover (sexual reproduction) is much more efficient than mutation (asexual reproduction).

Comparisons drawn from search algorithms

Any efficient optimization algorithm must use two techniques to find a global maximum: exploration, to investigate new and unknown areas in the search space, and exploitation, to make use of knowledge found at points previously visited to help find better points. These two requirements are contradictory, and a good search algorithm must find a tradeoff between the two.

Neural nets vs simulated annealing

The main difference compared with neural nets is that neural nets learn (how to approximate a function) while simulated annealing searches for a global optimum. Neural nets are flexible function approximators, while SA is an intelligent random search method. The adaptive characteristics of neural nets are a huge advantage in modeling changing environments. However, the computational cost of SA limits its use in real-time applications.

Genetic algorithms vs simulated annealing

Direct comparisons have been made between ASA/VFSR (versions of SA) and publicly available genetic algorithm codes, using a test suite already adapted and adopted for GA. In each case, ASA outperformed the GA. GA is a class of algorithms that are interesting in their own right; GA was not originally developed as an optimization algorithm, and basic GA does not offer any statistical guarantee of global convergence to an optimal point. Nevertheless, it should be expected that GA may be better suited to some problems than SA.

Gradient methods

A number of methods for optimizing well-behaved continuous functions have been developed which rely on using information about the gradient of the function to guide the direction of search. If the derivative of the function cannot be computed, because it is discontinuous, for example, these methods often fail. Such methods are generally referred to as hill climbing. They can perform well on functions with only one peak (unimodal functions). But on functions with many peaks (multimodal functions), they suffer from the problem that the first peak found will be climbed, and this may not be the highest peak. Having reached the top of a local maximum, no further progress can be made.

Iterated search

Random search and gradient search may be combined to give an iterated hill-climbing search. Once one peak has been located, the hill climb is started again, but from another, randomly chosen, starting point. This technique has the advantage of simplicity, and can perform well if the function does not have too many local maxima. However, since each random trial is carried out in isolation, no overall picture of the "shape" of the domain is obtained. As the random search progresses, it continues to allocate its trials evenly over the search space, which means it will still evaluate just as many points in regions found to be of low fitness as in regions found to be of high fitness. Both SA and GAs, by comparison, start with an initial random population and allocate increasing numbers of trials to regions of the search space found to have high fitness. This is a disadvantage if the maximum is in a small region surrounded on all sides by regions of low fitness. This kind of function is difficult to optimize by any method, and here the simplicity of the iterated search usually wins.
