
Data Mining

Frequent Itemset Mining

Assist.Prof.Dr. Burcu YILMAZ


Data Mining:
Concepts and Techniques
(3rd ed.)

— Chapter 6 —

Jiawei Han, Micheline Kamber, and Jian Pei


University of Illinois at Urbana-Champaign &
Simon Fraser University
©2013 Han, Kamber & Pei. All rights reserved.
Chapter 6: Mining Frequent Patterns, Association
and Correlations: Basic Concepts and Methods

 Basic Concepts

 Frequent Itemset Mining Methods

 Which Patterns Are Interesting?—Pattern Evaluation Methods

 Summary

What Is Frequent Pattern Analysis?
 Frequent pattern: a pattern (a set of items, subsequences, substructures,
etc.) that occurs frequently in a data set
 First proposed by Agrawal, Imielinski, and Swami [AIS93] in the context
of frequent itemsets and association rule mining
 Motivation: Finding inherent regularities in data
 What products were often purchased together?— Beer and diapers?!
 What are the subsequent purchases after buying a PC?
 What kinds of DNA are sensitive to this new drug?
 Can we automatically classify web documents?
 Applications
 Basket data analysis, cross-marketing, catalog design, sale campaign
analysis, Web log (click stream) analysis, and DNA sequence analysis.
Why Is Freq. Pattern Mining Important?

 Freq. pattern: An intrinsic and important property of datasets
 Foundation for many essential data mining tasks
 Association, correlation, and causality analysis
 Sequential, structural (e.g., sub-graph) patterns
 Pattern analysis in spatiotemporal, multimedia, time-series, and stream data
 Classification: discriminative, frequent pattern analysis
 Cluster analysis: frequent pattern-based clustering
 Data warehousing: iceberg cube and cube-gradient
 Semantic data compression: fascicles
 Broad applications

Basic Concepts: Frequent Patterns

Tid   Items bought
10    Beer, Nuts, Diaper
20    Beer, Coffee, Diaper
30    Beer, Diaper, Eggs
40    Nuts, Eggs, Milk
50    Nuts, Coffee, Diaper, Eggs, Milk

 itemset: A set of one or more items
 k-itemset X = {x1, …, xk}
 (absolute) support, or support count, of X: the frequency or number of occurrences of itemset X
 (relative) support, s: the fraction of transactions that contains X (i.e., the probability that a transaction contains X)
 An itemset X is frequent if X’s support is no less than a minsup threshold

Basic Concepts: Association Rules

Tid   Items bought
10    Beer, Nuts, Diaper
20    Beer, Coffee, Diaper
30    Beer, Diaper, Eggs
40    Nuts, Eggs, Milk
50    Nuts, Coffee, Diaper, Eggs, Milk

 Find all the rules X → Y with minimum support and confidence
 support, s: the probability that a transaction contains X ∪ Y
 confidence, c: the conditional probability that a transaction having X also contains Y

Let minsup = 50%, minconf = 50%
Frequent patterns: Beer:3, Nuts:3, Diaper:4, Eggs:3, {Beer, Diaper}:3
 Association rules: (many more!)
 Beer → Diaper (60%, 100%)
 Diaper → Beer (60%, 75%)
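
These numbers can be checked mechanically. Below is a minimal Python sketch (illustrative, not part of the original slides) that recomputes support and confidence for the two rules over the five-transaction table above:

# Transactions from the example above.
transactions = [
    {"Beer", "Nuts", "Diaper"},
    {"Beer", "Coffee", "Diaper"},
    {"Beer", "Diaper", "Eggs"},
    {"Nuts", "Eggs", "Milk"},
    {"Nuts", "Coffee", "Diaper", "Eggs", "Milk"},
]

def support(itemset):
    # Fraction of transactions containing every item of `itemset`.
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(lhs, rhs):
    # Conditional probability that a transaction with `lhs` also contains `rhs`.
    return support(lhs | rhs) / support(lhs)

print(support({"Beer", "Diaper"}))        # 0.6  -> s = 60%
print(confidence({"Beer"}, {"Diaper"}))   # 1.0  -> c = 100% for Beer -> Diaper
print(confidence({"Diaper"}, {"Beer"}))   # 0.75 -> c = 75%  for Diaper -> Beer
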
Closed Patterns and Max-Patterns
 A long pattern contains a combinatorial number of sub-patterns, e.g., {a1, …, a100} contains C(100,1) + C(100,2) + … + C(100,100) = 2^100 − 1 ≈ 1.27×10^30 sub-patterns!
 Solution: Mine closed patterns and max-patterns instead
 An itemset X is closed if X is frequent and there exists no super-pattern Y ⊃ X with the same support as X (proposed by Pasquier, et al. @ ICDT’99)
 An itemset X is a max-pattern if X is frequent and there exists no frequent super-pattern Y ⊃ X (proposed by Bayardo @ SIGMOD’98)
 Closed patterns are a lossless compression of frequent patterns
 Reducing the # of patterns and rules
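
As a concrete illustration (not from the original slides), the following Python sketch derives the closed and maximal itemsets from a hypothetical table of frequent itemsets and their support counts:

# Hypothetical frequent itemsets with absolute support counts.
freq = {
    frozenset("A"): 4, frozenset("B"): 5, frozenset("C"): 3,
    frozenset("AB"): 4, frozenset("AC"): 2, frozenset("BC"): 2,
    frozenset("ABC"): 2,
}

def is_closed(x):
    # Closed: no proper superset has the same support.
    return all(not (x < y) or freq[y] < freq[x] for y in freq)

def is_max(x):
    # Maximal: no proper superset is frequent at all.
    return all(not (x < y) for y in freq)

closed_sets = [set(x) for x in freq if is_closed(x)]   # supports of all frequent itemsets are recoverable from these
max_sets = [set(x) for x in freq if is_max(x)]         # an even smaller family, but supports are lost
print(closed_sets, max_sets)
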
Chapter 6: Mining Frequent Patterns, Association
and Correlations: Basic Concepts and Methods

 Basic Concepts

 Frequent Itemset Mining Methods

 Which Patterns Are Interesting?—Pattern Evaluation Methods

 Summary

Scalable Frequent Itemset Mining Methods

 Apriori: A Candidate Generation-and-Test Approach

 Improving the Efficiency of Apriori

 FPGrowth: A Frequent Pattern-Growth Approach

 ECLAT: Frequent Pattern Mining with Vertical Data Format

The Downward Closure Property and Scalable Mining Methods
 The downward closure property of frequent patterns
 Any subset of a frequent itemset must be frequent
 If {beer, diaper, nuts} is frequent, so is {beer, diaper}
 i.e., every transaction having {beer, diaper, nuts} also contains {beer, diaper}
 Scalable mining methods: Three major approaches
 Apriori (Agrawal & Srikant @ VLDB’94)
 Freq. pattern growth (FPgrowth—Han, Pei & Yin @ SIGMOD’00)
 Vertical data format approach (Charm—Zaki & Hsiao @ SDM’02)

Apriori: A Candidate Generation & Test Approach

 Apriori pruning principle: If there is any itemset which is infrequent, its superset should not be generated/tested! (Agrawal & Srikant @ VLDB’94; Mannila, et al. @ KDD’94)
 Method:
 Initially, scan DB once to get the frequent 1-itemsets
 Generate length-(k+1) candidate itemsets from length-k frequent itemsets
 Test the candidates against the DB
 Terminate when no frequent or candidate set can be generated

The Apriori Algorithm—An Example  (min_sup = 2)

Database TDB:
Tid   Items
10    A, C, D
20    B, C, E
30    A, B, C, E
40    B, E

1st scan → C1: {A}:2, {B}:3, {C}:3, {D}:1, {E}:3
L1 (prune {D}): {A}:2, {B}:3, {C}:3, {E}:3

C2 (from L1): {A,B}, {A,C}, {A,E}, {B,C}, {B,E}, {C,E}
2nd scan → counts: {A,B}:1, {A,C}:2, {A,E}:1, {B,C}:2, {B,E}:3, {C,E}:2
L2: {A,C}:2, {B,C}:2, {B,E}:3, {C,E}:2

C3 (from L2): {B,C,E}
3rd scan → L3: {B,C,E}:2

The Apriori Algorithm (Pseudo-Code)
Ck: candidate itemset of size k
Lk: frequent itemset of size k

L1 = {frequent items};
for (k = 1; Lk != ∅; k++) do begin
    Ck+1 = candidates generated from Lk;
    for each transaction t in database do
        increment the count of all candidates in Ck+1 that are contained in t;
    Lk+1 = candidates in Ck+1 with min_support;
end
return ∪k Lk;
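
The pseudo-code maps almost line for line onto the following Python sketch (an illustrative implementation, not the book's code); `transactions` is a list of item sets and `min_support` an absolute count:

from itertools import combinations

def apriori(transactions, min_support):
    """Return {itemset: support count} for all itemsets with count >= min_support."""
    transactions = [frozenset(t) for t in transactions]
    # L1: frequent 1-itemsets from one scan of the database.
    counts = {}
    for t in transactions:
        for item in t:
            key = frozenset([item])
            counts[key] = counts.get(key, 0) + 1
    L = {s: c for s, c in counts.items() if c >= min_support}
    result, k = dict(L), 1
    while L:
        # Generate C_{k+1} by self-joining L_k, then prune by the Apriori property.
        prev = list(L)
        candidates = {a | b for a in prev for b in prev if len(a | b) == k + 1}
        candidates = {c for c in candidates
                      if all(frozenset(s) in L for s in combinations(c, k))}
        # One DB scan: count candidates, keep those meeting min_support.
        counts = {c: sum(c <= t for t in transactions) for c in candidates}
        L = {s: c for s, c in counts.items() if c >= min_support}
        result.update(L)
        k += 1
    return result

# The TDB example above: reproduces L1, L2, and L3 = {B, C, E}: 2.
print(apriori([{"A","C","D"}, {"B","C","E"}, {"A","B","C","E"}, {"B","E"}], 2))
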
Implementation of Apriori

 How to generate candidates?
 Step 1: self-joining Lk
 Step 2: pruning
 Example of candidate generation:
 L3 = {abc, abd, acd, ace, bcd}
 Self-joining: L3*L3
 abcd from abc and abd
 acde from acd and ace
 Pruning:
 acde is removed because ade is not in L3
 C4 = {abcd}
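
The same join-and-prune steps, written as a small Python sketch (illustrative; it assumes each itemset is a string whose items are kept in sorted order, as in the example):

def gen_candidates(Lk):
    """Self-join Lk and prune candidates that have an infrequent k-subset."""
    k = len(next(iter(Lk)))
    joined = set()
    for a in Lk:
        for b in Lk:
            if a[:k-1] == b[:k-1] and a[k-1] < b[k-1]:   # join on the first k-1 items
                joined.add(a + b[k-1])
    # Prune: every k-subset of a surviving candidate must itself be in Lk.
    return {c for c in joined
            if all(c[:i] + c[i+1:] in Lk for i in range(len(c)))}

print(gen_candidates({"abc", "abd", "acd", "ace", "bcd"}))   # {'abcd'}; acde is pruned via ade
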
Scalable Frequent Itemset Mining Methods

 Apriori: A Candidate Generation-and-Test Approach

 Improving the Efficiency of Apriori

 FPGrowth: A Frequent Pattern-Growth Approach

 ECLAT: Frequent Pattern Mining with Vertical Data Format

 Mining Closed Frequent Patterns and Max-Patterns

Further Improvement of the Apriori Method

 Major computational challenges
 Multiple scans of the transaction database
 Huge number of candidates
 Tedious workload of support counting for candidates
 Improving Apriori: general ideas
 Reduce passes of transaction database scans
 Shrink the number of candidates
 Facilitate support counting of candidates

Read the book!!!


Partition: Scan Database Only Twice
 Any itemset that is potentially frequent in DB must be frequent in at least one of the partitions of DB
 Scan 1: partition the database and find local frequent patterns
 Scan 2: consolidate global frequent patterns
 A. Savasere, E. Omiecinski and S. Navathe, VLDB’95

DB1 ∪ DB2 ∪ … ∪ DBk = DB
If sup1(i) < σ·|DB1|, sup2(i) < σ·|DB2|, …, supk(i) < σ·|DBk|, then sup(i) < σ·|DB|
(an itemset infrequent in every partition is infrequent in the whole DB)

• Partition size is set so that each partition can fit into main memory and is read only once in each phase.
Read the book!!!
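
A minimal two-scan sketch of the partition idea (illustrative; it takes any in-memory miner, such as the Apriori sketch earlier, as the `mine_local` argument, and assumes each partition fits in memory):

def partition_mine(transactions, rel_minsup, num_parts, mine_local):
    """Scan 1: mine each partition locally; Scan 2: recount the union of local results."""
    n = len(transactions)
    size = (n + num_parts - 1) // num_parts
    parts = [transactions[i:i + size] for i in range(0, n, size)]
    # Scan 1: any globally frequent itemset must be locally frequent somewhere,
    # so the union of local frequent itemsets is a complete candidate set.
    candidates = set()
    for p in parts:
        local_minsup = max(1, int(rel_minsup * len(p)))
        candidates |= set(mine_local(p, local_minsup))
    # Scan 2: count the candidates once over the full database.
    counts = {c: sum(c <= frozenset(t) for t in transactions) for c in candidates}
    return {c: s for c, s in counts.items() if s >= rel_minsup * n}
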
Sampling for Frequent Patterns

 Select a sample of the original database, mine frequent patterns within the sample using Apriori
 Scan the database once to verify the frequent itemsets found in the sample; only borders of the closure of frequent patterns are checked
 Example: check abcd instead of ab, ac, …, etc.
 Scan the database again to find missed frequent patterns
 H. Toivonen. Sampling large databases for association rules. VLDB’96
 The sample size S is chosen so that the search for frequent itemsets in S can be done in main memory in one pass.

Read the book!!!

Reduce Number of Scans

 A transaction that does not contain any frequent k-itemsets cannot contain any frequent (k+1)-itemsets.
 Such a transaction can be marked or removed from further consideration.
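
One way to realize this in code (a hypothetical sketch; `db` is a list of transaction sets and `Lk` holds the frequent k-itemsets as frozensets):

from itertools import combinations

def shrink_db(db, Lk, k):
    """Drop transactions with no frequent k-itemset; they cannot support any frequent (k+1)-itemset."""
    return [t for t in db
            if any(frozenset(c) in Lk for c in combinations(sorted(t), k))]
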

Scalable Frequent Itemset Mining Methods

 Apriori: A Candidate Generation-and-Test Approach

 Improving the Efficiency of Apriori

 FPGrowth: A Frequent Pattern-Growth Approach

 ECLAT: Frequent Pattern Mining with Vertical Data Format

 Mining Closed Frequent Patterns and Max-Patterns

Pattern-Growth Approach: Mining Frequent Patterns Without Candidate Generation
 Bottlenecks of the Apriori approach
 Breadth-first (i.e., level-wise) search
 Candidate generation and test
 Often generates a huge number of candidates
 The FPGrowth approach (J. Han, J. Pei, and Y. Yin, SIGMOD’00)
 Depth-first search
 Avoids explicit candidate generation
 Major philosophy: Grow long patterns from short ones using local frequent items only
 “abc” is a frequent pattern
 Get all transactions having “abc”, i.e., project the DB on abc: DB|abc
 If “d” is a local frequent item in DB|abc, then abcd is a frequent pattern
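
The projection idea can be illustrated without building an FP-tree. The sketch below (illustrative, not the FP-growth algorithm itself) grows patterns depth-first by repeatedly projecting a plain list-of-sets database on each locally frequent item; FP-growth performs the same recursion over a compressed FP-tree instead:

def grow(db, min_support, suffix=()):
    """Depth-first pattern growth over projected databases; returns {itemset: count}."""
    # Count locally frequent items in the (projected) database.
    counts = {}
    for t in db:
        for item in t:
            counts[item] = counts.get(item, 0) + 1
    result = {}
    for item, c in counts.items():
        if c < min_support:
            continue
        pattern = suffix + (item,)
        result[frozenset(pattern)] = c
        # DB|pattern: transactions containing `item`, restricted to items that
        # come after `item` (a fixed order avoids enumerating a pattern twice).
        projected = [{x for x in t if x > item} for t in db if item in t]
        result.update(grow(projected, min_support, pattern))
    return result

print(grow([{"a", "b", "c"}, {"a", "b", "c", "d"}, {"a", "c", "d"}], 2))
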
Benefits of the FP-tree Structure

 Completeness
 Preserves complete information for frequent pattern mining
 Never breaks a long pattern of any transaction
 Compactness
 Reduces irrelevant info—infrequent items are gone
 Items in frequency-descending order: the more frequently occurring, the more likely to be shared
 Never larger than the original database (not counting node-links and the count field)

Further Improvements of Mining Methods

 AFOPT (Liu, et al. @ KDD’03)
 A “push-right” method for mining condensed frequent pattern (CFP) tree
 Carpenter (Pan, et al. @ KDD’03)
 Mine data sets with few rows but numerous columns
 Construct a row-enumeration tree for efficient mining
 FPgrowth+ (Grahne and Zhu, FIMI’03)
 Efficiently Using Prefix-Trees in Mining Frequent Itemsets, Proc. ICDM'03 Int. Workshop on Frequent Itemset Mining Implementations (FIMI'03), Melbourne, FL, Nov. 2003
 TD-Close (Liu, et al, SDM’06)

Extension of Pattern Growth Mining Methodology

 Mining closed frequent itemsets and max-patterns
 CLOSET (DMKD’00), FPclose, and FPMax (Grahne & Zhu, FIMI’03)
 Mining sequential patterns
 PrefixSpan (ICDE’01), CloSpan (SDM’03), BIDE (ICDE’04)
 Mining graph patterns
 gSpan (ICDM’02), CloseGraph (KDD’03)
 Constraint-based mining of frequent patterns
 Convertible constraints (ICDE’01), gPrune (PAKDD’03)
 Computing iceberg data cubes with complex measures
 H-tree, H-cubing, and Star-cubing (SIGMOD’01, VLDB’03)
 Pattern-growth-based clustering
 MaPle (Pei, et al., ICDM’03)
 Pattern-growth-based classification
 Mining frequent and discriminative patterns (Cheng, et al, ICDE’07)

Computational Complexity of Frequent Itemset Mining
 How many itemsets are potentially to be generated in the worst case?
 The number of frequent itemsets to be generated is sensitive to the minsup threshold
 When minsup is low, there exist potentially an exponential number of frequent itemsets
 The worst case: M^N, where M: # distinct items, and N: max length of transactions
 The worst-case complexity vs. the expected probability
 Ex. Suppose Walmart has 10^4 kinds of products
 The chance to pick up one product: 10^-4
 The chance to pick up a particular set of 10 products: ~10^-40
 What is the chance that this particular set of 10 products is frequent (appears 10^3 times) in 10^9 transactions?
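
A rough back-of-the-envelope answer, under the slide's independence assumption: the expected number of transactions containing that particular 10-product set is about 10^9 × 10^-40 = 10^-31, so the probability of seeing it even once, let alone 10^3 times, is vanishingly small. In practice only a tiny fraction of the astronomically many possible itemsets can ever be frequent.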

Chapter 6: Mining Frequent Patterns, Association
and Correlations: Basic Concepts and Methods

 Basic Concepts

 Frequent Itemset Mining Methods

 Which Patterns Are Interesting?—Pattern Evaluation Methods

 Summary

Interestingness Measure: Correlations (Lift)
 play basketball ⟹ eat cereal [40%, 66.7%] is misleading
 The overall % of students eating cereal is 75% > 66.7%.
 Measure of dependent/correlated events: lift

lift(A, B) = P(A ∪ B) / (P(A) · P(B))

              Basketball   Not basketball   Sum (row)
Cereal        2000         1750             3750
Not cereal    1000         250              1250
Sum (col.)    3000         2000             5000

lift(B, C)  = (2000/5000) / ((3000/5000) × (3750/5000)) = 0.89
lift(B, ¬C) = (1000/5000) / ((3000/5000) × (1250/5000)) = 1.33

If lift < 1, B is negatively correlated with C; if lift > 1, positively correlated; if lift = 1, B and C are independent.
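
The two lift values can be reproduced directly from the contingency table; a minimal Python check (illustrative):

n = 5000.0
p_b = 3000 / n          # P(basketball)
p_c = 3750 / n          # P(cereal)
p_bc = 2000 / n         # P(basketball and cereal)
p_not_c = 1250 / n      # P(not cereal)
p_b_not_c = 1000 / n    # P(basketball and not cereal)

print(round(p_bc / (p_b * p_c), 2))           # 0.89 -> negatively correlated
print(round(p_b_not_c / (p_b * p_not_c), 2))  # 1.33 -> positively correlated
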
Are lift and 2 Good Measures of Correlation?

 “Buy walnuts  buy


milk [1%, 80%]” is
misleading if 85% of
customers buy milk
 Support and confidence
are not good to indicate
correlations
 Over 20 interestingness
measures have been
proposed (see Tan,
Kumar, Sritastava
@KDD’02)
 Which are good ones?

62
Which Null-Invariant Measure Is Better?
 IR (Imbalance Ratio): measures the imbalance of two itemsets A and B in rule implications
 Kulczynski and Imbalance Ratio (IR) together present a clear picture for all three datasets D4 through D6
 D4 is balanced & neutral
 D5 is imbalanced & neutral
 D6 is very imbalanced & neutral
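
For reference, these measures are commonly defined as Kulc(A, B) = ½·(P(A|B) + P(B|A)) and IR(A, B) = |sup(A) − sup(B)| / (sup(A) + sup(B) − sup(A ∪ B)); a small Python sketch with hypothetical counts:

def kulc(sup_a, sup_b, sup_ab):
    # Average of the two rule confidences; null-invariant.
    return 0.5 * (sup_ab / sup_a + sup_ab / sup_b)

def imbalance_ratio(sup_a, sup_b, sup_ab):
    # How skewed the two directions A => B and B => A are.
    return abs(sup_a - sup_b) / (sup_a + sup_b - sup_ab)

# Hypothetical counts: sup(A) = sup(B) = 1000, sup(A,B) = 500 -> balanced and neutral.
print(kulc(1000, 1000, 500), imbalance_ratio(1000, 1000, 500))   # 0.5 0.0
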


Chapter 6: Mining Frequent Patterns, Association
and Correlations: Basic Concepts and Methods

 Basic Concepts

 Frequent Itemset Mining Methods

 Which Patterns Are Interesting?—Pattern Evaluation Methods

 Frequent Graph Mining

 Summary
Pattern Mining Problems
• Again, there are many kinds of pattern mining problems, according to the classes of patterns and databases
- string, tree, path, cycle, graph, vectors, sequences of itemsets, graphs with itemsets on each vertex/edge, …

Questions:
- Is the isomorphism check easy?
- Does a canonical form exist?
- Frequency: is the inclusion check easy?
Frequent Graph Mining
• A labeled graph is a graph with labels on its vertices and/or edges
- chemical compounds
- networks of maps
- graphs of organizations, relationships
- XML

Frequent graph mining: find labeled graphs which are subgraphs of many graphs in the data

• Checking the inclusion (subgraph isomorphism) is NP-complete, and checking for duplicates is graph isomorphism (in NP). How do we cope with this?
Straightforward Approach
• Start from the empty graph (it's frequent)
• Generate graphs by adding one vertex or one edge to the previously obtained graphs (generation)
• Check whether we already obtained each graph or not (isomorphism)
• Compute their frequencies (inclusion)
• Discard those that are not frequent

Too slow, if all steps are done in straightforward ways
Frequent Subgraph Mining

 Extracting frequent subgraphs from many graphs

 Extracting frequent subgraphs from a single graph

Extracting Frequent Subgraphs from Many Graphs
 Theoretically, mining in graph databases can be modeled as a search in the lattice of all possible subgraphs.
 All graph mining algorithms have in common that they search the subgraph lattice.
 All possible subgraphs of this small graph are listed in the lattice. At the top, the empty graph, modeled with *, is shown. In the next row all possible fragments containing just one atom (i.e., zero bonds) are listed. The second row contains subgraphs with one bond.
 Only graphs that are real subgraphs (having support greater than a threshold) are listed in the lattice.
 They are interested in finding a subgraph (or several subgraphs) that can be embedded as often as possible in the graph to be mined.

Source: Fischer, I., Meinl, T. (2004), “Graph Based Molecular Data Mining – An Overview”
Lattice Structure



The lattice of all subgraphs in cyclin

 A small molecule named cyclin



Building the Lattice Structure
 When mining real-life graph databases, not only one but many graphs are analyzed, leading to a very large lattice.
 Searching this lattice can be done depth first or breadth first.
 When searching depth first, the first discovered subgraph will be N, followed by N-C, N-C-C, and so forth. So first all subgraphs containing N are found; in the next branch, all containing O.
 If the lattice is traversed breadth first, all subgraphs in one level of the lattice, i.e., structures that have the same number of edges, are searched before the next level is started.
 The main disadvantage of breadth-first search is its larger memory consumption, because in the middle of the lattice a large number of subgraphs has to be stored. With depth-first search, only structures whose number is proportional to the size of the biggest graph in the database have to be recorded during the search.



Building the Lattice Structure
 Building this lattice of frequent subgraphs involves two main steps:
 Candidate generation, where new subgraphs are created out of smaller ones, and
 Support computation, where the frequency or support of the new subgraphs in the database is determined.
 Both steps are highly complex, and thus various algorithms and techniques have been developed to find frequent subgraphs in finite time with reasonable resource consumption.



Candidate Generation
 First method: combine small graphs that include common subgraphs.
 Have to determine the graphs that have common subgraphs.
 Second method: at each level of the lattice structure, subgraphs are constructed from the graphs one level higher by adding an edge.
 The lattice structure grows too much!!! (Unnecessary candidate graphs are generated.)
 Have to calculate the support value of each candidate graph.
 Increases processing time.
 Third method (Apriori principle): if the support value of a graph is below the threshold, the support values of other graphs that include it will also be below the threshold, so they won't be added to the lattice.



Computing the Support of New Candidates
 Computing the support of new candidates can also be done in two different ways.
 First method: when combining two graphs, to calculate the support value of the new subgraph, only scan the graphs that include the two combined graphs (upper-level graphs in the lattice structure).
 Decreases the search space
 Have to keep all this information in memory.
 Second method: use embedding lists. An embedding can be thought of as a stored subgraph isomorphism, i.e., a map from the nodes and edges in the subgraph to the corresponding nodes and edges in the graph. Now, if the support of a new, larger subgraph has to be determined, the positions in the graph where it can occur are already known and only the new nodes and edges have to be checked.
 Reduces the time to find the isomorphism
 But has the drawback of enormous memory requirements, as all embeddings of a subgraph in the database have to be stored.



Extracting Frequent Subgraphs from a Single Graph


Chapter 6: Mining Frequent Patterns, Association
and Correlations: Basic Concepts and Methods

 Basic Concepts

 Frequent Itemset Mining Methods

 Which Patterns Are Interesting?—Pattern Evaluation Methods

 Summary

Summary

 Basic concepts: association rules, support-confidence framework, closed and max-patterns
 Scalable frequent pattern mining methods
 Apriori (candidate generation & test)
 Projection-based (FPgrowth, CLOSET+, ...)
 Vertical format approach (ECLAT, CHARM, ...)
 Which patterns are interesting?
 Pattern evaluation methods

Ref: Basic Concepts of Frequent Pattern Mining

 (Association rules) R. Agrawal, T. Imielinski, and A. Swami. Mining association rules between sets of items in large databases. SIGMOD'93.
 (Max-pattern) R. J. Bayardo. Efficiently mining long patterns from databases. SIGMOD'98.
 (Closed-pattern) N. Pasquier, Y. Bastide, R. Taouil, and L. Lakhal. Discovering frequent closed itemsets for association rules. ICDT'99.
 (Sequential pattern) R. Agrawal and R. Srikant. Mining sequential patterns. ICDE'95.

Ref: Apriori and Its Improvements

 R. Agrawal and R. Srikant. Fast algorithms for mining association rules. VLDB'94.
 H. Mannila, H. Toivonen, and A. I. Verkamo. Efficient algorithms for discovering association rules. KDD'94.
 A. Savasere, E. Omiecinski, and S. Navathe. An efficient algorithm for mining association rules in large databases. VLDB'95.
 J. S. Park, M. S. Chen, and P. S. Yu. An effective hash-based algorithm for mining association rules. SIGMOD'95.
 H. Toivonen. Sampling large databases for association rules. VLDB'96.
 S. Brin, R. Motwani, J. D. Ullman, and S. Tsur. Dynamic itemset counting and implication rules for market basket analysis. SIGMOD'97.
 S. Sarawagi, S. Thomas, and R. Agrawal. Integrating association rule mining with relational database systems: Alternatives and implications. SIGMOD'98.

Ref: Depth-First, Projection-Based FP Mining

 R. Agarwal, C. Aggarwal, and V. V. V. Prasad. A tree projection algorithm for generation of frequent itemsets. J. Parallel and Distributed Computing, 2002.
 J. Han, J. Pei, and Y. Yin. Mining frequent patterns without candidate generation. SIGMOD'00.
 J. Liu, Y. Pan, K. Wang, and J. Han. Mining frequent item sets by opportunistic projection. KDD'02.
 J. Han, J. Wang, Y. Lu, and P. Tzvetkov. Mining top-K frequent closed patterns without minimum support. ICDM'02.
 J. Wang, J. Han, and J. Pei. CLOSET+: Searching for the best strategies for mining frequent closed itemsets. KDD'03.
 G. Liu, H. Lu, W. Lou, and J. X. Yu. On computing, storing and querying frequent patterns. KDD'03.
 G. Grahne and J. Zhu. Efficiently using prefix-trees in mining frequent itemsets. Proc. ICDM'03 Int. Workshop on Frequent Itemset Mining Implementations (FIMI'03), Melbourne, FL, Nov. 2003.

Ref: Vertical Format and Row Enumeration Methods

 M. J. Zaki, S. Parthasarathy, M. Ogihara, and W. Li. Parallel algorithms for discovery of association rules. DAMI, 1997.
 M. J. Zaki and C.-J. Hsiao. CHARM: An efficient algorithm for closed itemset mining. SDM'02.
 C. Bucila, J. Gehrke, D. Kifer, and W. White. DualMiner: A dual-pruning algorithm for itemsets with constraints. KDD'02.
 F. Pan, G. Cong, A. K. H. Tung, J. Yang, and M. Zaki. CARPENTER: Finding closed patterns in long biological datasets. KDD'03.
 H. Liu, J. Han, D. Xin, and Z. Shao. Mining interesting patterns from very high dimensional data: A top-down row enumeration approach. SDM'06.

Ref: Mining Correlations and Interesting Rules

 M. Klemettinen, H. Mannila, P. Ronkainen, H. Toivonen, and A. I. Verkamo. Finding interesting rules from large sets of discovered association rules. CIKM'94.
 S. Brin, R. Motwani, and C. Silverstein. Beyond market basket: Generalizing association rules to correlations. SIGMOD'97.
 C. Silverstein, S. Brin, R. Motwani, and J. Ullman. Scalable techniques for mining causal structures. VLDB'98.
 P.-N. Tan, V. Kumar, and J. Srivastava. Selecting the right interestingness measure for association patterns. KDD'02.
 E. Omiecinski. Alternative interest measures for mining associations. TKDE'03.
 T. Wu, Y. Chen, and J. Han. Association mining in large databases: A re-examination of its measures. PKDD'07.

Ref: Freq. Pattern Mining Applications

 Y. Huhtala, J. Kärkkäinen, P. Porkka, and H. Toivonen. Efficient discovery of functional and approximate dependencies using partitions. ICDE'98.
 H. V. Jagadish, J. Madar, and R. Ng. Semantic compression and pattern extraction with fascicles. VLDB'99.
 T. Dasu, T. Johnson, S. Muthukrishnan, and V. Shkapenyuk. Mining database structure; or how to build a data quality browser. SIGMOD'02.
 K. Wang, S. Zhou, and J. Han. Profit mining: From patterns to actions. EDBT'02.

