

AVIVA

MODEL SUMMARY

[0] INITIATION: CHALLENGE ADDRESSED WITH WHITE SPACES (WS) APPROACH

WS Posit 0: Formal mathematical representation of product model
(i) Product model
Product specs vector $S := (s_1, \ldots, s_n)$, where $s_i$ is the i-th specification and $s_i \in S_i$, the i-th specifications set $S_i = \{v_1, \ldots, v_{k_i}\}$, where $v_t$ is the t-th value of the i-th spec, $t \in \{1, \ldots, k_i\}$. The cardinality of the i-th specifications set is $k_i$.
The product space is defined as the set of all variants of product spec vectors over all specification values, i.e. the Cartesian product of all the specification sets: $S_1 \times \ldots \times S_n$. The dimension of the product space is n (the number of specifications).
Product space magnitude: $m = \prod_{i=1}^{n} k_i$
Actual product space magnitude $m_a$ is the magnitude of the subspace of the product space actually existing (either offered/supplied or demanded/purchased) on the market.
Distinct subgrades are the distinguished levels within a given grade. Let the number of distinct subgrades be L.
Intra-grade intensity: $IGI = m_a / L$
IGI can be defined on grades or on subgrades according to the need.
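As an illustration only, a minimal Python sketch of the above definitions; the specification sets, the "actually existing" subset and the subgrade count are all hypothetical examples, not values from the model:

from itertools import product

# Hypothetical specification sets S_i for a product with n = 3 specifications
spec_sets = {
    "colour":   ["white", "ivory", "grey"],   # k_1 = 3
    "moisture": ["<=10%", "10-12%"],          # k_2 = 2
    "packing":  ["25kg bag", "1t big-bag"],   # k_3 = 2
}

# Product space magnitude m = product of the cardinalities k_i
m = 1
for values in spec_sets.values():
    m *= len(values)                          # m = 3 * 2 * 2 = 12

# The product space itself: Cartesian product of all S_i
product_space = list(product(*spec_sets.values()))

# Assume only some variants actually exist on the market (offered or demanded)
actual_variants = {v for v in product_space if v[0] != "grey"}  # hypothetical subset
m_a = len(actual_variants)                    # actual product space magnitude

# Intra-grade intensity IGI = m_a / L, with L distinct subgrades (assumed here)
L = 2
igi = m_a / L
print(m, m_a, igi)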
(ii) metaSPECS model
Meta specifications are the sub- or intra-grade specifications which are critical in the sense
that it is they that make a decision-maker (buying or selling) take a specific decision for a
specific product from a specific counter-party (beyond his/her already taken decision to buy
that product). In other words, meta specifications are the concrete values of generally stated
grade specifications.
It must be noted here that the meta specifications for a given coupling of domain and industry
capture the ontology of that domain on one hand and the actualities, specificities and
multiplicities of the real world on the other. At the same time, such capture is, as a rule,
effective and complete (at least to a feasible extent).
In mathematical terms, using the notion of a fuzzy membership function, we can formulate this as
follows. Let the positive decision be assigned 1 and the negative decision 0. All the continuous
numbers in between, i.e. on the continuous interval [0, 1], represent the fuzzy membership value
of the decision to be made. This function depends on the meta specifications (and not on the
other intra-grade specifications or the regular grade specifications). If the meta specifications
are the subset of specifications designated as $\mu_j$, $j = 1, \ldots, m$, then
$D = f(\mu_1, \mu_2, \ldots, \mu_m)$
Since the $\mu_j$ are discrete for virtually all j, we can write the first partial differences
to ultimately understand what changes are to be made to each of the meta specs so that the
resulting D exceeds a sought threshold value.
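A minimal sketch of this first-difference analysis, assuming a hypothetical membership function f over three discrete meta specs; the levels, weights and threshold are illustrative assumptions, not part of the model:

# Hypothetical discrete levels for three meta specs mu_1..mu_3 (0 = worst, 1 = best)
levels = [0.0, 0.25, 0.5, 0.75, 1.0]

def f(mu):
    # Illustrative membership function D = f(mu_1, mu_2, mu_3) in [0, 1]
    return 0.5 * mu[0] + 0.3 * mu[1] + 0.2 * mu[2]

current = [0.25, 0.5, 0.25]   # current meta spec values of an agent
threshold = 0.6               # sought threshold for D

# First partial differences: effect of moving each meta spec one level up
for j in range(len(current)):
    stepped = list(current)
    idx = levels.index(stepped[j])
    if idx + 1 < len(levels):
        stepped[j] = levels[idx + 1]
    delta = f(stepped) - f(current)
    print(f"spec {j + 1}: first difference = {delta:.3f}, "
          f"crosses threshold: {f(stepped) > threshold}")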

On the other hand, we can think of grade specification as a mapping or function taking the
particular agent as an argument and rendering the respective meta specification. So meta
specification is the specific value of grade specification function for a particular agent.
So for each of the meta specifications analogous functions exist, i.e.
$\mu_j = \mu_j(x)$, where x is the agent variable. Hence
$D = f(\mu_1(x), \ldots, \mu_m(x))$
Investigation of this functional is key in maximization of membership function D, which
underlies the following OMME system.
In reality, in the search mode, the $\mu_j$ are discrete values established through agent
(supplier/buyer) screening. So it comes to the following matrix of meta specification values
by agents (suppliers/buyers):
$D = \begin{pmatrix} p_{1,x_1} & \dots & p_{m,x_1} \\ \vdots & \ddots & \vdots \\ p_{1,x_r} & \dots & p_{m,x_r} \end{pmatrix}$
In the following Optimal Match Mining (OMM) mode the task is to establish the column vectors
(out of this matrix) of each meta specification for all relevant agents (supplier/buyer):
$P_j = \begin{pmatrix} p_{j,x_1} \\ \vdots \\ p_{j,x_r} \end{pmatrix}$
I.e. the task is to choose the best $x_q$, where $q = 1, \ldots, r$.
(iii) OMME (Optimal Match Mining and Engineering) model
OMME has two levels as obvious Optimal Match Mining (OMM) and Optimal Match Engineering
(OME). The elementary step of the former is covered in the previous paragraph.
OMM is itself a combinatorial optimization exercise taking the mentioned elementary
constructions ordered by a scale of preference weights expressed by the beneficiary party. So
the task is to maximize the following over $x_q$:
$D(x_q) = \sum_{j=1}^{m} e_j \, p_{j,x_q} \rightarrow \max$, where the $e_j$ are the respective weights.
Please note that in this case the choices of $x_q$ might be different from the elementary
(single-specification) choices due to the weights involved.
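A minimal sketch of this weighted OMM selection, assuming a hypothetical p-matrix of meta spec values by agents and hypothetical weights (none of the numbers come from the model):

# Rows = agents x_1..x_r, columns = meta specs 1..m; entries p[q][j] in [0, 1]
p = [
    [0.9, 0.4, 0.7],   # agent x_1
    [0.6, 0.8, 0.5],   # agent x_2
    [0.7, 0.7, 0.9],   # agent x_3
]
e = [0.5, 0.3, 0.2]    # preference weights e_j of the beneficiary party

# Weighted score D(x_q) = sum_j e_j * p[q][j]; pick the agent maximizing it
scores = [sum(ej * pj for ej, pj in zip(e, row)) for row in p]
best_q = max(range(len(p)), key=lambda q: scores[q])
print(f"best agent: x_{best_q + 1}, score = {scores[best_q]:.2f}")

Note how, with these illustrative numbers, agent x_3 comes out best overall even though agent x_1 is the single-spec winner on the first meta spec; this is exactly the weight effect mentioned above.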
Now, proceeding to OME. In the OME method, first the best $x_q$ (usually one, two or at most
three) are selected through the mentioned OMM exercise. Then these are processed along the
following lines.
In OME the investigation goes into specification function analysis for the given agent(s). In
particular, based on several values reported by the q-th agent on the j-th meta specification,
the respective function is constructed (approximated) through interpolation. The goal is to
maximize (in the general case, optimize), as far as cost elasticity and affordability for the
beneficiary permit, also the suboptimal meta specifications (in addition to the one(s) already
established as optimal in the OMM stage), with the ultimate goal of having as many meta
specifications as possible be as optimal as possible. By optimal we mean either maximal in a
direct sense or optimal in terms of value contribution.

For each critical meta specification, its cost function $C_j(\cdot)$ and value contribution
function $V_j(\cdot)$ are developed. Then their difference is optimized. Weighting is involved
likewise. So at the end we have the following setup:
$D_q = f[e_1(V_1(\cdot) - C_1(\cdot)), \ldots, e_m(V_m(\cdot) - C_m(\cdot))] \rightarrow \max$
This is the general case, in which the function D is non-linear, at least because of
interdependencies among the meta specification variables. It is evident that both the value
functions and the cost functions are non-linear in the general case. But that is not essential;
what is essential is that the function D is monotonically increasing in the value functions,
monotonically decreasing in the cost functions, and monotonically increasing in the indicated
differences thereof. That means we could optimize these differences separately and then simply
combine the results (value functions are concave and cost functions are convex). However,
because in reality, in the general case, important interdependencies can exist between the meta
specification variables in the cost functions and in the value functions as well, necessitating
full account of them, the problem does not reduce to a simply decomposable one. Rather it is
solved with dynamic programming techniques or, in simple cases, with network techniques.
In the simplest (and not rare) case, of course, the function D (for the given q-th agent) will
be linear in the sense of the absence of variable interdependencies, so the problem can be
considered as broken into separable stages. In each stage the following is sought to be maximized:
$t_{q,j} = D_{q,j}(\cdot) = V_j(\cdot) - C_j(\cdot) \rightarrow \max$
The difference here is designated as the profit margin for the j-th meta specification, which is
of course not quite strict in economic terms. So, given the concavity of the value function and
the convexity of the cost function, this becomes a relatively easy convex optimization problem.
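A minimal sketch of one such separable stage, assuming hypothetical concave value and convex cost functions over a single continuous meta spec (the functions and bounds are illustrative assumptions); since V is concave and C is convex, their difference is concave and hence unimodal, so a simple ternary search suffices:

import math

def V(x):
    return 10 * math.sqrt(x)   # hypothetical concave value contribution

def C(x):
    return 0.5 * x ** 2        # hypothetical convex cost

def stage_max(lo, hi, tol=1e-6):
    # Ternary search for the maximum of the concave difference V(x) - C(x)
    while hi - lo > tol:
        a = lo + (hi - lo) / 3
        b = hi - (hi - lo) / 3
        if V(a) - C(a) < V(b) - C(b):
            lo = a
        else:
            hi = b
    return (lo + hi) / 2

x_star = stage_max(0.0, 10.0)
print(f"optimal spec value ~ {x_star:.3f}, margin ~ {V(x_star) - C(x_star):.3f}")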
For the hard case of presence of interdependencies among the argument variables (meta specs),
the critical interdependency relationships are established and incorporated into the function $D_q$.
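Where interdependencies between meta specs matter, the dynamic programming route mentioned above can be sketched as follows, under the simplifying assumption of a chain of specs with discrete candidate levels, separable margins V_j - C_j and pairwise interaction adjustments between adjacent specs only; all numbers and the chain structure are illustrative assumptions, not the model's own formulation:

# margin[j][v]       : V_j - C_j for choosing level v of spec j (hypothetical)
# interact[j][u][v]  : adjustment for choosing level u of spec j and level v of spec j+1
margin = [
    [1.0, 2.5, 3.0],
    [0.5, 2.0, 2.8],
    [1.5, 1.8, 2.2],
]
interact = [
    [[0.0, -0.5, -1.0], [0.2, 0.0, -0.5], [0.5, 0.2, 0.0]],
    [[0.0, 0.3, -0.8], [-0.2, 0.0, 0.4], [-1.0, 0.1, 0.0]],
]
m, n_levels = len(margin), len(margin[0])

# best[j][v] = best total over specs 0..j given spec j is set to level v
best = [margin[0][:]]
choice = []
for j in range(1, m):
    row, back = [], []
    for v in range(n_levels):
        cands = [best[j - 1][u] + interact[j - 1][u][v] for u in range(n_levels)]
        u_star = max(range(n_levels), key=lambda u: cands[u])
        row.append(cands[u_star] + margin[j][v])
        back.append(u_star)
    best.append(row)
    choice.append(back)

# Backtrack the optimal assignment of levels
v = max(range(n_levels), key=lambda k: best[-1][k])
levels = [v]
for j in range(m - 2, -1, -1):
    v = choice[j][v]
    levels.append(v)
levels.reverse()
print("optimal levels per spec:", levels, "total margin:", round(best[-1][levels[-1]], 2))

The same chain-structured case can equivalently be read as a longest-path problem on a layered network, which is one way to interpret the "network techniques" mentioned above.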
WS Posit 1: The VLM tackle continuum
In the context of Value Level Management (VLM) as the dyad conceptual framework for
application in purchasing/supply management and marketing/demand management in general, and in
particular from the viewpoint of purchasing decision-making analysis (PDMA), goods and services
are categorized into a continuum starting with the most differentiated product categories and
running through to the most standardized and commodity-like (fungible) goods and services
(irrespective of the number of accepted known grades of the latter, which are effectively
considered as individual commodity categories).
One important note. PDMA is fundamental for any business function in that, along with the
organization/configuration and processing functions of an enterprise, it makes up the core of
any economic value creation in business. This in particular means that PDMA is critical not only
for the analysis of the procurement function but also for the analysis of the marketing function
and generally for any business decision, as the latter in a sense resorts back to a purchasing
decision. In other words, the basic decision (choice) is the purchasing decision, not the
selling or marketing decision; this is a buyer's world. This is why PDMA is a basic concept
also for VLM and the AVIVA model.
However, at the same time, in PDMA there is a second dimension (axis), which is a given
product's intensity of varieties of make within a grade or, put simply, intra-grade
intensity (IGI). The level of IGI depends on different aspects related to the factors and
facts of production, peculiarities of production, quality management practices specific to the
given product category, etc. The task of AVIVA, and of VLM in a broader context, is to
structure, model and formalize the IGI for procurement and supply value improvement and
procurement quality risk management.
Another representation of IGI vs. common differentiation/homogeneity is interpretation of IGI
as informal or unique/specific differentiation which varies from producer to producer i.e.
from context to context or case to case (micro level), while common
differentiation/homogeneity is aggregate differentiation on an industry level (meso level).

A useful tool for product analysis for PDMA purposes is the IGI matrix where the horizontal
axis is the level of differentiation/homogeneity of a product and the vertical axis is the IGI
itself.

[Figure: IGI graph. Horizontal axis: level of differentiation vs. standardness/homogeneity; vertical axis: IGI of different products and services (bars on the graph). The VLM-validness line, the VLM-valid points and the VLM-valid region are marked.]

Reservation: the graph depicts not the actual IGI but the VLM-normalized IGI, so that IGIs can
be compared for different products having different IGI structures and densities. In reality,
the VLM-valid points of actual IGI can vary for different products and services, as the level
of appropriateness of VLM depends on a number of factors, including product/service-specific
ones (for the sake of simplicity, the VLM-valid points of IGI for different products are
assumed equal for concept illustration purposes).
This way, AVIVA/VLM considers this entire horizontal continuum of products and services for
which the magnitude of IGI is above a minimum threshold necessitating and justifying VLM
engagement and application.
The horizontal axis (differentiation vs. homogeneity) marks the continuum of all known
products and services. Leftwards along the line are located many of the known products and
services, most of which are known as value-added products. In PDMA, as well as in consumer
buying decisions, these products undergo a relatively straightforward evaluation algorithm,
largely in a common-sense and intuitive mode. The reason is that with these products usually a
great deal of information, awareness and knowledge is furnished and disseminated in order to
enable, facilitate and often also bias the customer into taking buying decisions favoring the
vendor. At the same time, beyond the communicated value differential as per the declared and
publicized information, there is usually no further resultant effective differential value for
the consumer, and what is there beyond it is the bundle of features and characteristics of the
given product intended for inducing/seeding specific buying and consumption patterns and
habits in favor of the vendor.
As for products and services located rightwards along the axis, they are increasingly described
and recognized as commodities (at the extreme right are the pure commodities). PDMA for these
products is performed with concepts and tools such as factor analysis, industry-wide analysis
and modeling, technical analysis, fundamental analysis, etc. The PDMA-relevant value level in
this case is much dependent on a variety of factors and aspects which have a time structure.
However, one principal difference here is that the meaning and performance of PDMA in the case
of commodity trading/merchant business models is not limited to itself alone but also embraces
the marketing/selling function, as the value of purchases is relationally interconnected with
the value of sales, which is dependent on the time of sale. This is the case whenever the value
structure of the traded goods is not affected/modified. If the value structure is changed, i.e.
through value-adding in e.g. a manufacturing setup rendering value-added goods as output
(produced from commodity raw materials), the PDMA cycle in economic terms sufficiently locks on
itself. However, even in the first instance (commodity trading setup), as long as an assumption
about the situation of the trader, namely that s/he possesses readily available sales leads to
which the purchases can be sold instantly with no or immaterial transactional costs, is at least
relatively valid, the segregation of the PDMA function as a closed system can feasibly be made.

The value level in the case of commodity products is defined as the mathematical/numeric value
of value shaped by the time of purchase. In other words, the value level of commodity products
is shaped by time, versus the value level of IGI (intra-grade intense, i.e. high-IGI) products,
which is shaped by individual specificity. Looked at in systemic scope, because of homogeneity
the individual productions go into an aggregation shift and make up the industry aggregate,
which transforms due to different factors and developments in a part-affecting-the-whole chain.
However, these two frameworks do not contradict each other but reside independently and
objectively, overlapping in many instances and forming the content of PDMA from different
angles. Moreover, as knowledge and approach they quite often not only complement/add to but
also help each other (only for a commodity product case, of course). For example, valuable
knowledge of a material or critical change that has taken place with a (major) player/producer
is relevant and significant not only for the context of PDMA in the first, IGI, case, but also
in the second, the commodity case.
The conceptual framework and practice of managing the value level for the first scope of PDMA
i.e. concerned with high IGI products is the VLM1 or simply VLM, and for the second scope of
PDMA, i.e. concerned with quasi-commodity products [1], is VLM2 (together generally referred to as
VLM-x). VLM2 is still a largely intuitive field of knowledge and practice and is still open
for further systematic conceptual, analytic and practical elaborations (for its part, taking
advantage from the known theories and methodologies for management of and trading in financial
instruments (foreign exchange/currency, securities, other capital market instruments,
derivatives) and tangible commodities e.g. primary agricultural (soft) commodities, hard
commodities like base metals and energy commodities like electricity, oil, coal, gas). In
short, the concepts of VLM1 and VLM2 effectively embrace the two basic praxeological concepts:
causality and time. Finally, VLM1 is the approach relevant to markets with structure of
monopolistic competition (competition between differentiated products, i.e. with predominantly
non-price competition), while VLM2 is the approach relevant to markets with structure of pure
competition (competition between homogenous/standard commodity products, i.e. with
predominantly price competition).
In VLM2, in particular, the time structure of value level is examined and managed for the
commercial benefit of the incumbent party. The idea is that the value level (or worth) is
different for different delivery dates connected with the dynamics and processes of the
beneficiary business be it a producer/manufacturing company or a real sector trader (e.g.
importer, exporter, wholesaler, distributor, etc.). The notion partially draws on the concept
of value/needs scale as well as value function. Similarly, in the context of VLM1 the value
level is examined in terms of and on the scale of actual intra-grade differentiation (i.e.
IGI), which makes up at the end of the day the value level of the given differentiated
product.
Summing up, VLM1 is the approach/methodology of management of value level as defined for high
intra-grade intensity products i.e. VLM1-fit/relevant products (covering marketing and
purchasing thereof). For quasi-commodity products VLM2 is applied, in particular for the part
of selling and buying thereof. Within the scope of VLM2, value level is defined as the value
(price) of sold or bought product(s) in relation to the possibilities of its values (prices)
over the planning horizon for such sale or buy (please note that here an assumption is made
that the notions of value and price are equivalent due to the fact/posit that for quasi-
commodities the quality is standard and products are homogenous). This means that value level
within VLM2 is the value without (most of) uncertainty associated with the time structure of
value/price. This value level is hence defined only in the context of VLM2 services by TANGENT
Management, namely through the mechanism of qualification.
However, one important thing must be noted here: VLM2 is not an approach or methodology for
speculative/arbitrage trading but for selling and/or buying for regular business purposes,
either for production businesses or for commerce trading, i.e., on a descending geographic
scale, domestic/regional/local trading in the sense of channel distribution/marketing (e.g.
importing, wholesale, distribution, dealership, brokerage, etc.). Moreover, notions such as
convenience yield, cost of carry and the like (from the dealing/trading context) do not apply
in this context (i.e. the context of commerce). The time value of money can also be ignored, as
the planning horizon in regular commerce settings, unlike trade settings, is usually
sufficiently short/limited. Another important remark: VLM2 covers, applies and is relevant only
to consumption soft commodities (not investment or hard commodities).

[1] Quasi-commodities, here in this context, are near-commodity goods that are not fungible from lot to lot in the medium run and from producer to
producer, but are fungible in the short run from lot to lot with a given producer.

Summarizing, the IGI matrix takes the following form.

[Figure: the IGI matrix. Vertical axis: IGI; horizontal axis: differentiation vs. standardness/homogeneity. The four quadrants are: differentiated/value-added products with high IGI; differentiated/value-added products with low IGI; commodity/homogeneous products with high IGI; commodity/homogeneous products with low IGI. The VLM-valid and VLM2-valid regions are marked on the matrix.]

And in terms of examples of products (illustrated on 3 broad product groups: food, stone and
chemicals):

LIGI-D (low IGI, differentiated):
  Food: coffee/tea, confectionary, jams & preserves, juices, fish snacks, ice cream, milk products
  Stone: natural stone articles for garden/funerary art
  Chemicals: home care chemicals, fine chemicals, hygiene & cosmetics
HIGI-D (high IGI, differentiated):
  Food: processed, canned and smoked food & fish products, fruit comfiture & marmalade
  Stone: natural stone articles for interior, pavestone/sett, kerbstone (curbstone)
  Chemicals: paints, coatings, adhesives, construction/DIY chemicals
HIGI-C (high IGI, commodity):
  Food: flour, vegetable oil, butter, tomato/fruit paste, quick-frozen fruits & vegetables, herbs/spices, frozen fish products
  Stone: natural stone slabs, flooring and facing tiles (for int. & ext.)
  Chemicals: most inorganic chemicals, pigments and dyes, fertilizers
LIGI-C (low IGI, commodity):
  Food: honey, nuts, frozen fish whole, sugar, cereals and corn, fresh fruits
  Stone: natural stone raw-cut blocks, crushed stone for construction
  Chemicals: most organic chemicals (petrochemicals, oleochemicals)

In most industry value chains one can find products representing all four quadrants lining up
vertically and acting as inputs and outputs for each other. In terms of flows on the IGI
matrix, there can be different directions to and from all four quadrants, but the most typical
movement on average would be LIGI-C > HIGI-C > HIGI-D > LIGI-D; however, organic and legitimate
variations can frequently take place (within a single value chain).
An important remark on the above. There are two situations with branded products. The first,
non-principal branding, is when the product line is branded exclusively for marketing and
identification purposes, with no major product-specific uniqueness. The second type, principal
branding, is when the brand is engaged primarily or mainly to communicate the message of
production/technology-related distinctiveness and/or advantages by the manufacturer/supplier.
The markets relevant and apt for VLM implementation are markets characterized by a reasonable
level of competition at the supply side (non-price competition) and where products are at
least reasonably differentiated.
The mentioned types of markets entail uncertainties and risks both for suppliers and buyers.
For handling uncertainties and managing risks both parties typically try and prefer to engage
in trade in product-inherent (product level) unambiguous (PIU) settings. This means for
example in the case with buyers that the latter prefer those purchasing opportunities that do
not entail significant uncertainties as to sub-grade variations of products sourced from
different suppliers. In other words, they, other things held equal, prefer sourcing and
purchasing options whereby the sub-grade variance of key/critical (to them) specs and features
of products among individual suppliers is minimal. This approach, while being less risky, offers
limited value and profitability opportunities.
What VLM provides is a move from product-level unambiguity to supplier-level or supplier-inherent
unambiguity (SIU), in the case of purchasing management for instance. In this case, while
uncertainty and risks are managed and mitigated properly and in a more structural way,
opportunities and partner-end resources are meanwhile not missed but taken advantage of and
capitalized on to the maximum possible extent. This way the buyer, for example, is empowered to
reach structural alignment with supplier(s) instead of mere formal alignment. The same goes
for a supplier. (The general case of SIU is partner-inherent unambiguity, PrIU.)
The distance from PIU to SIU is where the untapped potential lies, often left overlooked and
consequently lost under the excuse of its qualification as highly uncertainty/risk-bearing. That
distance is what VLM covers, turning uncertainty into a potential for value realization and
improvement.

WS Posit 2: IGI modeling
(a priori methodological parametric determinism through deductive and abductive inferencing)
In PDMA the buyer passes through a sequence of product evaluation steps called evaluation
landing classes (ELCs) in a hierarchical structure: general product category >
quality/feature-based grade (standard) > intra-standard actual make (diversity thereof = IGI).
The length of this sequence, in terms of the number of steps, i.e. relevant ELCs, can be
different, both shorter and longer, for different product cases.
Usually, when it comes to the last/lowest ELC, i.e. IGI, the buyer faces radically more
uncertainty and confusion, and the less positive record of dealing in, and tacit knowledge of,
the category he/she has, the stronger such uncertainty/confusion is. In other words, the
adequate formal coverage and capture of the category (subcategory) weakens downward, so the
need is to formalize, model and process-ize this lowest ELC.
AVIVA provides the toolset to model, handle and manage IGI in a way as to improve recognition,
predictability, risk avoidance and mitigation, and thereby total value improvement of supply.
Among the tools of AVIVA there is the method of introducing a valid and instrumental
classification within the IGI. It is the custom proprietary taxonomy of products (CPTP), which
helps the decision-making function navigate and facilitates PDMA by, inter alia, considering
fully defined intra-grade grades rather than indefinite common grade aggregations. The AVIVA
inventory tool called the benchmark specification grid is the materialization of the constructed
and validated CPTP for a particular product category.
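As an illustration, a minimal sketch of what such a benchmark specification grid might look like as a data structure; the category, grades, intra-grade classes and spec values are all hypothetical placeholders, not an actual CPTP:

# Hypothetical benchmark specification grid: category > grade > intra-grade class > benchmark specs
benchmark_grid = {
    "natural stone tiles": {
        "grade A": {
            "A-1 (calibrated, honed)": {"thickness_mm": (10, 12), "water_absorption_pct": "<= 0.5"},
            "A-2 (calibrated, polished)": {"thickness_mm": (10, 12), "gloss_units": ">= 85"},
        },
        "grade B": {
            "B-1 (uncalibrated)": {"thickness_mm": (9, 14), "water_absorption_pct": "<= 1.0"},
        },
    },
}

def lookup(category, grade, subclass):
    """Navigate the CPTP-style hierarchy down to the benchmark specs of an intra-grade class."""
    return benchmark_grid.get(category, {}).get(grade, {}).get(subclass)

print(lookup("natural stone tiles", "grade A", "A-1 (calibrated, honed)"))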
More specifically, for modeling the IGI the following approach is applied. The actual IGI is
split into two subsets: verifiable factor-conditional and express (the latter cannot be feasibly
and satisfactorily mapped and molded with regard to their conditionality in terms of factors);
see the last section of this article for more thorough coverage of this aspect. Then data
collection, mining, validation and inferencing tasks/processes are undertaken and applied to
each subset. The ultimate goal is to arrive at the highest possible confidence and reliability
of the estimates of target parameters and of the overall value level.
One important consideration here, as a reservation. The IGI does not embrace those sub-grade
specifications, features and parameters, and values thereof, which are not consistent with the
standards and specifications accepted and required in the industry and constituting minimum
general quality and safety (e.g. one specific ingredient must not exceed X% by weight, or one
specification is not to be touched at all as it comes given and set under regulations,
technology, a standard, etc.). While producers can change and play with these specs and
parameters, they do not do so and are not encouraged to either (instead, they are encouraged or
even required to administer these parameters and their values in an unequivocal manner). In
other words, IGI deals with intra-grade specs and in particular with individual variations
thereof from producer to producer, other than those inconsistent with minimum general quality
and/or safety requirements (often covered by GMP/ISO/TQM/6sigma and HACCP plans respectively).
The job of VLM thus described refers, in particular, to generating General Critical Factors
(GCFs, a subset of IGI) shaping a value level. However, for addressing the specific needs
and expectations of an individual buyer/supplier it is highly useful to establish the range of
factors (generally as a subset of GCFs) which are critical to that specific instance. These
are the Specific Critical Factors (SCFs) in relation to an individual buyer/supplier. This is
a bit close to the notion of Critical to Quality Characteristics (aka CTQs), but there are two
major differences. First, SCFs look into the factors of the value, while CTQs consider
the value itself, or more exactly the quality part of the value. Second, CTQs are used in
product development and marketing contexts only.
The notion of GCFs & SCFs is twofold: it is of use both in the marketing and in the procurement
frames. The rationale for doing this also for the procurement function (whereas doing it for
marketing is apparent) is that criticality analysis is a way to optimize and streamline the
procurement management process, as it largely saves time and enhances critical focus and hence
adds to risk alleviation and stability.
The important output of IGI analysis is the construction of the so-called meta specifications.
Meta specifications are the ELC coming after the standard grade specifications. More coverage
of what meta specifications are can be found in the metaSPECS model description above.

WS Posit 3: Non-linear analysis and engineering of specifications (NLAES) and mathematical
programming and optimization on the basis of NLAES
The third-order task of value level analysis (in addition to the first-order task, which is
regular inbound and outbound specs match and harmonization, and the second-order task, covered
above, which is IGI analysis) is the examination of inbound and outbound specifications with the
purpose of establishing, streamlining and managing the binary (threshold/basic), monotonic
(performance) and marginal (attraction/excitement) groups of specifications. This is non-linear
analysis and engineering of specifications (NLAES). Now more on this.
Binary attributes are those attributes which are either present or absent meaning respectively
that the counterparty (partner) will either accept or reject the product featuring these
attributes. By the way, binary attributes can also be reverse (negative), i.e. not having an
attribute will mean the partner will accept the product(s) lacking that attribute and vice
versa.
A subclass within the class of binary attributes, are the precision attributes which are the
attributes the value(s) of which are required to be precise - to match exactly a given value,
to be exactly lower than a given value or to be exactly higher than a given value. And
respectively tolerances are set as acceptable deviations from the respective target values of
these attributes. Here, the better the match (the smaller the actual deviation) the higher the
value arising from the specs side of the product.
Monotonic attributes are those attributes of which the more is present, the more the partner
likes and approves the product(s) featuring them. The marginal degree to which the partner will
additionally like and approve the respective product normally decreases as the degree of
presence of the monotonic attribute(s) increases, but overall monotonicity is ensured.
Marginal attributes are those attributes which, in the general case, affect the decision-making
at the partner's end in such a way that the latter evaluates the costs and benefits of having
such attributes with a product under consideration. In particular, the often discussed case is
when such a positive marginal attribute is already available to a buyer and plays as a "wow"
point/proposition for the latter. This is the case when the cost of having such an attribute is
not material or at least is not known to the customer. However, in a more general context,
marginal attributes can still be assessed as positive, and in particular even in the "wow"
sense, if the party making the proposition forecasts that the benefits of having a particular
marginal attribute can outweigh the cost of having it (as exposed to the counterparty), so that
the whole business of having it implemented and proposed to the partner makes good sense. On the
other hand, the attribute must be assessed and examined properly by the recipient party so as to
establish all explicit and inherent costs and benefits of having such attribute(s) with a
product. However, it must still be noted that in many cases such analysis, both at the end of
the proposition-maker and at the end of the proposition-receiver, can prove highly useful,
effective and enabling.
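A minimal sketch of how an offer could be scored against binary, monotonic and marginal specification groups in the NLAES spirit; the attribute names, targets and scoring rules are illustrative assumptions, not the model's prescribed formulas:

def score_offer(offer, binary, monotonic, marginal):
    # Binary (threshold/basic) attributes: any miss means rejection
    for attr, required in binary.items():
        if offer.get(attr) != required:
            return None                              # offer rejected outright
    score = 0.0
    # Monotonic (performance) attributes: more is better, weighted linearly here
    for attr, (weight, best) in monotonic.items():
        score += weight * min(offer.get(attr, 0.0) / best, 1.0)
    # Marginal (attraction) attributes: net benefit minus exposed cost, if present
    for attr, (benefit, cost) in marginal.items():
        if offer.get(attr):
            score += max(benefit - cost, 0.0)
    return score

offer = {"food_grade_certified": True, "purity_pct": 98.5, "traceability_portal": True}
binary = {"food_grade_certified": True}
monotonic = {"purity_pct": (1.0, 99.5)}          # weight, best achievable value
marginal = {"traceability_portal": (0.3, 0.1)}   # benefit, cost exposed to the buyer
print(score_offer(offer, binary, monotonic, marginal))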
The major use of NLAES is the analysis of meta specifications for finding and optimizing
specifications match; for more coverage of this topic please refer to our blog:
http://tangentmanagement.wordpress.com

Finally, mathematical programming and optimization methods and techniques are applied to NLAES
constructions to optimize and engineer the specifications, with the aim of arriving at the
maximum possible value level for each case concerned. Read more in the blog post on the
application and use of mathematical programming and optimization in specs management (in
particular engineering).

WS Posit 4. Holistic congregation - Value Level defined
Buying decisions are often taken without proper and full inquiry into the value dimension on
the IGI latitude, and without inquiry into the indirect, or more correctly peripheral,
implications of the value embodied in the relevant IGI node. The consequences are unexpected
losses and missed opportunities alike. This problem is known as bounded rationality or
cognitive limitations.
Value is defined as the full bundle of features and qualities divided by the total cost of
ownership. The common understanding of quality short of a full IGI inquiry (analysis) is,
unfavorably, quite straightforward, as is the case with the cost short of hidden costs. However,
the trick lies in the peripheral and nebulous connections between the IGI-level quality
adversities and the hidden costs appearing eventually at different stages of the asset cycle. At
the same time, in addition to this negative chain of cause and effect, there is also the missed
opportunity side, when, due to inadequate practice, positive peripheral links are overlooked and
hence benefits are missed.
The full and holistic consideration of the quality and cost ladders and the interconnections
and interdependencies in between is the core of the analysis of the value level (a term coined
in similarity with "service level", which stands for a measure/indicator of performance of a
system, whereas value level is the measure/indicator of performance of an asset/object). Thus,
value level analysis aims at discerning and conceiving the value levels fully. In doing that,
value level analysis and management transcends the ordinary value management / engineering
/ analysis discipline. In the meantime, it must be noted that value level features ordinality
rather than cardinality.
In summary, the value level is the value in the common understanding plus certainty, i.e. with
uncertainty largely lifted, in the following senses. In purchasing/supply management, the
uncertainty exists with regard to possible partial or full value failure arising from broadly
defined inconsistency of the procured goods with the requirements (either explicitly set, or
expected, or presumed/sought). In sales, marketing and demand management, the uncertainty
exists in connection with possible full or partial value failure arising from broadly
defined inconsistency of the supplied/sold goods with the requirements (either explicitly set,
or expected, or presumed/sought). In other words, in these two instances the value level looks
at the pictures of causes and effects/purposes underlying an object (product, service, work,
solution, asset, etc.), established as fully and inclusively as possible. In praxeological
terms, value level looks into and investigates means and ends.
Last but not least is the resemblance of the notion of value level to the notion of
knowledge by acquaintance vs. knowledge by description, known from the theory of knowledge. This
is another interpretation of knowledge vis-a-vis the uncertainty contained in that knowledge.
"Knowledge by description" is the nominal account of a fact, object, etc., whereas "knowledge
by acquaintance" combines the knowledge of the particular fact(s) on the subject with the
universal facts embodying, inter alia, the cause and effect relationships underlying, and
drawing on, respectively, that particular fact, thus principally extending the resultant
knowledge about that subject and ultimately making it holistic. In such a circuit, the resulting
knowledge accounts for, contains and effectively implicates all superpositions of the facts/the
subject, thereby precluding a limited, "spot" examination of the subject/fact and hence the
generation of incomplete knowledge.
One important aspect of the notion of value level is that it is defined for a pair of
counter-parties and, respectively, counter-specifications. This task of best match finding and
match optimization is solved with the help of mathematical programming and optimization; see below.

WS Posit 6. Semantic modeling introduced for data representation, search and mining
Semantic modeling is applied for better data, information and knowledge representation, search
and mining. The model first posits hierarchical structure of ELCs of specifications of
differentiated products. It is described to follow the chain [top-level specifications] >
[grade level specifications] > [intra-grade level specifications, the critical subset of which
are meta specifications]. Meta specifications are used for semantic structuring and management
of complexity of the content of commerce in a given product domain.
Please read a synopsis on this in the blog post on metaSPECS heuristic (as quasi-algorithm)
for commerce content (space) complexity semantic organization and management.

WS Posit 7. Management of uncertainty for benefit
Economic agents (market players) are often, if not always, risk-averse. They are such not
inevitably but largely due to not being equipped with the right approach and methodology to
manage uncertainty and risks. In reality, uncertainty in economic/business situations covers
both positive and negative subsets. Businesses have for centuries tried to handle this
uncertainty and risk in a number of ways. Within the concept of the present model it is
posited with confidence that a method can be derived and considered for economic/business
application which allows one, at the end of the day, to take advantage of the positive subset
without touching the negative. Application of the present model allows the user/beneficiary to
benefit from this positive subset of uncertainty without encountering the negative subset. So
it allows one, in fact, to benefit from uncertainty.
Now more specifically on how all this works. The major part of the job of uncertainty
reduction/management is the stage in specifications management called specifications oversight,
which comes after specifications planning (on the basis of NLAES) and specifications
implementation (conducted virtually solely by the partner), and before specifications
surveillance, which is an on-going strategic exercise aimed at taking advantage of favorable
spots on the timeline on one hand and at knowing the counter-partner better for the benefit of
the principal partner and the whole commerce relationship(s) on the other.
Under specifications oversight the counter-specifications (inbound for the outbound and vice
versa) are controlled by a third party, which is TANGENT Management. The control is done through
counter-partner reporting and complex inquiry into the actual picture of specs, using
cross/vertical checking and multi-factor authentication techniques.
However, one major plus of such third-party oversight is the following. Apart from the benefit
of having double control, etc., experience has shown that demonstration and corroboration of
the actual picture at a distance to a third party carries stronger value and gravity than if the
same were extended in conventional simple bilateral commerce relationships. The common-sense
rationale is that if a party succeeds in convincing such a third party, and at a distance (i.e.
with lesser capabilities and possibilities for doing that), then the accuracy and reliability
of such information/knowledge is normally higher than when the same is accomplished in usual
business settings, all things held equal.
So, at the end of the day, such specifications control has a high value in terms of reliability
and accuracy. This is thanks not only to the methods and technical faculties of TANGENT
Management but also to the very framework of such control.

[1] THE THREE PIVOTAL PILLARS UNDERLYING THE MODEL

I. White spaces in product trade analysis, decision-making and management
The lowest ELC, i.e. the IGI level, represents a level which ordinary business process management
concepts, mechanisms and methodologies often either miss or fail to account for adequately. So
in that sense the IGI level can be thought of as a "quantum" level where other, "quantum", rules
and patterns apply. IGI modeling addresses that task.
The second tool is NLAES, which allows the proposition-maker and the proposition-receiver to
best optimize and streamline their outbound/inbound specs offered/bid.
IGI analysis in conjunction with NLAES allows introducing a key white-space concept: meta
specifications. Consideration and analysis of this marginal or differential class of
description/identification of differentiated products allows the best and most optimal matching
and development of commerce in a given differentiated product domain.


II. Focus on Value Level
The value level, defined as the totality of all links, including peripheral and other amorphous
ones, between the quality and cost dimensions on one side and the resultant value on the other,
is taken into the focus of examination. It can be thought of as an extension of the common
notion of value which also looks at the value implications of quality/cost beyond direct object
relationships, on the fronts of causes and effects/purposes, and by getting rid of uncertainty.
So effectively all is about beholding and fathoming the very value level, which is the value in
the common understanding with uncertainty lifted in the sense narrated in the previous sentence.
The value level, while accounting for and managing the uncertainty inherent structurally in the
commerce flows, looks at the same time at the specific pairedness of counter-partners and
specifications, and that task is handled with mathematical programming for best match
optimization and construction.

III. Inferencing (as a process/method)
Inferencing, being the uncertainty management technique for value level analysis, allows and
leads to prototyping an object/event a priori, for risk management and predictability
enhancement purposes. The approach is that, through data mining and similar exercises,
several/many parts of the situation/case under consideration at the end of the day coalesce
and eventually make up the actual whole with a satisfactory level of approximation, confidence
and reliability.
Multi-factor authentication, as an extension of Bayesian inferencing, is a technique borrowed
from the domain of identity and security management and is applied, in conjunction with
cross/vertical and complexity checking, in the context of supply and demand management.
Meanwhile, intense communication in specifications oversight (reporting, inquiry, etc.) with the
counter-partner in commerce significantly increases the accuracy and reliability of
information/knowledge, hence reducing uncertainty/ambiguity.

[2] THE METHOD
VLM in its conceptual part is based on the Arrayed Value Inferential Validation & Accord (AVIVA)
model. The notion underlying AVIVA is an original one, developed with the white-space technique
and capitalizing on the following disciplines, theories, domains, systems and pools of knowledge:
information theory, theory of knowledge, set theory, microeconomics, welfare economics,
information economics, uncertainty theory, game theory, agency theory, competency/capabilities
theory, contract theory, decision theory, expected utility theory, prospect theory/cumulative
prospect theory and behavioural economics, search theory, matching theory, value
theory/science and axiology, Austrian School of Economics and Praxeology, state-preference
theory, theory of parametric determinism and overdetermination, Bayesian inferential
(subjective) probability and its generalized version, Dempster-Shafer theory, possibility
theory, multi-factor authentication, conjoint analysis, Kano Model, TQM, Lean, Six Sigma,
Value Management/Engineering/Analysis, Value Network Analysis, root cause analysis, Quality
Function Deployment (QFD) and House of Quality (HOQ), Critical to Quality Characteristics
(CTQs), Analytic Hierarchy Process (AHP), Failure Mode and Effect Analysis (FMEA) framework,
Fault Tree Analysis (FTA) framework, Capability Maturity Model Integration (CMMI) model,
Business Process Reengineering/Business Process Improvement (BPR/BPI), Business Intelligence
(BI), data-mining, Knowledge Management (KM), management science and operations research.
AVIVA is a largely aprioristic (deductive) approach and method (methodological apriorism) of
evaluation and management of value level and value level failure risks in commerce. In
particular, the quality and risks associated with it are established by inter alia
constructing and validating
(A) a maximum number of relevant, incl. uncorrelated, factors (in wide sense) shaping the
quality and the risks,
(B) estimates of the values of those factors with maximum accuracy and reliability, and
(C) estimates of interrelationships and linkages in between the factors (parameters, causes,
etc.) and the principal variables (specs, features, characteristics, indicators, etc.)

For different markets/industries and products the content of the model varies. The factors
(GCFs, see above) relevant to any specific market or product are constructed with the help of
deep technical category knowledge/expertise and industry marketing foresight. The overall
model for any specific market/product passes the stages of development, testing, adjustment
and validation before its launch.
In the application phase (for a given case), the method of data collection is screening. Then
come business intelligence, data mining and inferencing. Then SCFs are put together to
facilitate, optimize and streamline action and alignment. For data verification, multi-factor
authentication is used (incl. cross-, double- and time-checking). For model construction, as
well as data collection, compilation and verification, the Delphi method may be used.

[3] AVIVA MODEL LIFECYCLE FOR A DOMAIN (CATEGORY)

[Diagram: Domain Research > Model Construction > Model Validation > Implementation for VLM > Continuous Adjustments]

[4] AVIVA MODEL FUNCTIONAL DIAGRAM (BASED ON SADT/IDEF0 [2])

[Diagram: FUNCTION: VLM. INPUTS: data. CONTROLS: specification benchmarks, GCFs. OUTPUTS: report (incl. SCFs). MECHANISM 1: AVIVA model engine & inventory. MECHANISM 2: implementation methodology. MECHANISM 3: TANGENT core competencies.]

[2] SADT = Structured Analysis and Design Technique, IDEF = Integration Definition or ICOM (Inputs, Controls, Outputs, Mechanisms) Definition

[5] AVIVA MODEL'S ENGINE
The model uses the following inventory of tools:
1) IGI model
2) Benchmark specification grid
3) General critical factors (GCFs) - for the context of marketing
4) Vendor metrics & scorecard
5) Ad hoc metrics & scorecard
6) Specific critical factors (SCFs) - for the context of marketing
7) Specification verification sheet (for factor-conditionals) - for the context of
procurement
The model produces the following reports:
I. IGI model, if needed
II. GCF and SCF report - for the context of marketing
III. Vendor metrics & scorecard
IV. Ad hoc metrics & scorecard
V. Specification verification sheet (for factor-conditionals) - for the context of procurement
VI. Value Level Report (VLR) and its comments - for the context of procurement as well as, in
certain cases, marketing

[6] DATA VERIFICATION AND TECHNIQUES USED
Multi-factor authentication

SV(K+H+I) = Something the vendor knows; Something the vendor has; Something the vendor is

including
- acceptance sampling
- internal cross-checking
- external/reference cross-checking
- double/time checking
- documental and log substantiation
- business intelligence
- data mining
- photo- and video-reporting
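
A minimal sketch of how several of these verification checks might be combined into a single confidence indication for a reported spec value; the check names, weights and threshold are illustrative assumptions, not TANGENT's actual scoring scheme:

# Hypothetical weights for individual verification techniques
checks = {
    "acceptance_sampling":         0.25,
    "internal_cross_check":        0.15,
    "external_reference_check":    0.20,
    "double_time_check":           0.10,
    "document_log_substantiation": 0.15,
    "photo_video_reporting":       0.15,
}

def verification_confidence(results):
    """results maps check name -> True (passed), False (failed) or None (not performed)."""
    performed = {k: v for k, v in results.items() if v is not None}
    if not performed:
        return 0.0
    total_w = sum(checks[k] for k in performed)
    passed_w = sum(checks[k] for k, v in performed.items() if v)
    return passed_w / total_w      # share of performed, weight-adjusted checks that passed

results = {
    "acceptance_sampling": True,
    "internal_cross_check": True,
    "external_reference_check": False,
    "double_time_check": None,
    "document_log_substantiation": True,
    "photo_video_reporting": True,
}
conf = verification_confidence(results)
print(f"confidence = {conf:.2f}", "accept" if conf >= 0.75 else "re-verify")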

[7] VLM IMPLEMENTATION & APPLICATION WORKFLOW
The workflow of application of the model to an assignment looks the following way:

[Workflow: AVIVA model implemented (i.e. developed, tested and validated) for a given market/product (category) > Vendor arrangements (RFX + consent) > Data collection, mining & verification (screening) > Analysis & modeling (inferencing) > Reporting to buyers (buyer communication) > Buyer decision-making (value testing) > Feedback to supplier and closure (value realization)]

The methodology more specifically takes the following path:
1) Constructing the IGI for the domain
2) Constructing verifiable factor-conditional subset of IGI and the explicit subset
of IGI
3) Designing and carrying out the methodology for casting the two subsets of IGI
4) Verifying: data collection, mining, validation (inferencing & MFA)
5) Gathering/establishing the full Value Level (VL = individual IGI + relevant
peripheral/amorphous dimensions and factors)
6) Reporting on VL


More specifically the research & modeling take the following steps (making up the full
cycle of VLM implementation for a domain and VLM application to a case):
1. Establishing the grades for a domain
2. Establishing the scope of parameters (attributes/features), causes and factors, and
other conditions
3. Establishing the intra-grades and modeling the IGI
4. Categorizing into: overt attributes/features (IGI), expressly verifiable
parameters (B0), expressly verifiable factors (Bf1), and obliquely demonstrable
ambiguous factors (Bf2) (as subset boundaries)
5. Case application: B0 - inferential analysis (deductive validation)
6. Case application: Bf - inferential analysis
a. Deductive validation for Bf1
b. Abductive validation for Bf2
7. Optional: casting the specific (individual) IGI as the variance/risk degree
8. Accounting for peripheral & amorphous factors/dimensions (roles, functions, uses,
effects, purposes)
9. Producing the actual value level


Copyright 2011 TANGENT Management. All rights reserved.
TANGENT and AVIVA are trademarks of TANGENT Management (Adventour Explorer LLC).

TANGENT Management
Address: 125a Arshakunyats str., P.O. Box 0007, Yerevan, Armenia
Tel.: (+ 374) 91 437021
Fax: (+ 374) 10 482271
E-Mail: info@tangentmanagement.net
www.tangentmanagement.net
