INTERNATIONAL JOURNAL OF MECHANICAL ENGINEERING AND TECHNOLOGY (IJMET)
ISSN 0976 – 6340 (Print), ISSN 0976 – 6359 (Online)
Volume 4, Issue 6, November - December (2013), pp. 138-160
© IAEME: www.iaeme.com/ijmet.asp
Journal Impact Factor (2013): 5.7731 (Calculated by GISI), www.jifactor.com
INTEGRATED BIZ TO BIZ PREDICTIVE COLLABORATION
PERFORMANCE EVALUATION FRAMEWORK
Anantha Keshava Murthy¹, R Venkataram², S G Gopalakrishna³
¹ Associate Professor, Dept of ME, EPCET, Bangalore, India
² Director Research, EPCET, Bangalore, India
³ Principal, NCET, Bangalore, India
INTRODUCTION
Collaboration between Supply Chain partners has been covered extensively in the strategic
management literature [01]. In fact, several research surveys have shown that process improvement at
the inter-enterprise level is at the core of supply chain management, of Biz to Biz supply chains, and of
the performance evaluation systems that enable predictive collaborative performance evaluation and
decision making with leading collaborative capabilities [02]. Some researchers have examined the
theoretical implications of supply chain collaboration through unilateral supply policies [03]. Some
recent studies are interested in a better characterization of the collaborative supply chain.
In recent times, most competitiveness improvements in many organizations have concentrated on
performance measurement (PM) systems, which have come to be recognized in terms of their
interrelationships [04]. PM has often emphasized KPI improvement on individual financial aspects
without considering collaboration [05]. It is time to re-orient PM toward long-term collaborative
value. Performance tendency forecasting has been based on learning systems built from the in-depth
experience of supply chain managers and experts rather than on mere comparison of historical data.
Several aspects have been proposed for addition to PM systems, such as the degree of trust between
partners, the degree of information system sharing, long-term orientation and involvement of the
partners. There are many approaches to constructing the most suitable PM for one's own supply
chain; still, only a small body of research addresses future performance planning capacity, that is,
providing the right direction after measurement rather than merely evaluating past KPI results.
One of the major approaches to analytics is to identify an impending change of trend in any
Key Performance Indicator (KPI) before it accelerates. Such early warning systems are very
important and useful in various scenarios such as vendor management for services and service
quality enhancement. However, using only a scalar value to compare multiple series, rank them
and project the series into the future is not appropriate, even though it is practically possible.
Recently, many firms have been exposed to a sophisticated environment constituted by open
markets [06], globalization of sourcing, intensive use of information technologies, and decreasing
product lifecycles. Moreover, such complexity is intensified by consumers who are becoming
increasingly demanding in terms of product quality and service. Globalization has increased firms'
internationalization, shifting them from local to global markets with increasing competitive pressure
[07]. Furthermore, the dynamic environment (consisting of competitors, suppliers' capacity, product
variability and customers) complicates business processes. To that end, many enterprises are often
forced to cooperate within a Supply Chain (SC) by forming a virtual enterprise, a network of agents
typically consisting of suppliers, manufacturers, distributors, retailers and consumers. Previous
research [08] posits that an SC can be considered as a network of autonomous and semi-autonomous
business entities associated with one or more families of related products.
Consumers (i.e., end or industrial) often require different types of products and services, ranging
from ordering batches to maintaining final products. This process requires suppliers to manage their
demand chain activities, which are often based on customer demand [09]. Previous research in this
area posits the importance of internet-based tools in aiding this process [10]. It has been reported that
a number of challenges have to be faced while fulfilling demand management through supply chain
collaboration.
The ever-changing market, with prevailing volatility in the business environment and constantly
shifting and increasing customer expectations, causes two types of timeframe-based uncertainties
that can affect the system: (i) short-term uncertainties and (ii) long-term uncertainties. Short-term
uncertainties include day-to-day processing variations, such as cancelled/rushed orders and
equipment failure. Long-term uncertainties include material and product unit price fluctuations and
seasonal demand variations. Understanding uncertainties can inform planning decisions so that the
company can safeguard against threats and avoid the effects of uncertainties. As a result, any
failure in recognizing demand fluctuations often has unpredictable consequences such as losing
customers, decreased market share and increased costs associated with holding inventories [11].
In order to achieve competitive advantage, manufacturers are forced to rely on agile supply
chain capabilities in the contemporary scenario of changing customer requirements and expectations
as well as changing technological requirements. SC integration is often considered a vital tool for
achieving competitive advantage [12]. Previous research has shown implementation difficulties due to
certain factors such as lack of trust among partners and depending solely on technology. People,
processes and technology must be considered in BI, analytics and PM initiatives from the perspective
of three groups of participants: analysts, users and IT staff.
Analysts define and explore business models, mine and analyze data and events, produce
reports and dashboards, provide insights into the organization's performance and support the
decision-making processes. They need a deep understanding of business issues and related
performance measures together with good communication skills, a tricky balance to achieve.
Technological trends in collaboration and social software, combined with trends in the business
world for more transparent and fact-based decision making, will lead to a new style of decision
support model and system that will give further leverage to the work of analysts. It will be necessary
to put in place collaborative processes and infrastructure to help analysts get their analytical insights
consumed more broadly by the user community and to have their analysis available and/or embedded
in other business and analytic applications.
Users “consume” the information, analysis and insight produced by applications and tools to
make decisions or take other actions that help the enterprise achieve its goals. Some users may be
more than just consumers, such as the top executives who will help craft the performance metric
framework. Users may also include operational workers, in addition to executives and managers. The
users determine how well BI, analytics and PM initiatives succeed. Consider users' requirements
from several perspectives:
• What roles do they need to play in analytic, business and decision processes? For example,
finance executives responsible for managing corporate budgets and plans will need different
analytic applications from the operations manager of a highly automated manufacturing
environment.
• What metrics, data and applications do they have and/or need? Analytic applications turn data
into the information the users need to make the appropriate decisions and support their
management processes. And every user wants timely, relevant, accurate, and consistent data and
analysis, but each user may define those terms differently and need data from different domains,
one seeking product data, another focusing on customer data, and so on.
• How do the metrics and needs change over time? Any of the factors that determine a user's
needs at a given moment can change at any time, including business strategy, processes, roles,
goals and available data. Even if all these factors remain the same, the insights delivered to
users will lead them to ask new questions.
IT enablers help design, build and maintain the systems that users and analysts use (see
Note 1). Traditional IT roles such as project managers, data and system architects, and developers
remain important. But BI, analytics and PM initiatives require more than simply building applications
to fit a list of requirements. Those applications also have to deliver business results. Users have to
want to use them. They have to support analytic, business and decision processes. Thus, IT enablers
need business knowledge and the ability to work collaboratively outside their traditional area of
expertise.
With the growing number of large data warehouses [13] for decision support applications,
efficiently executing aggregate queries is becoming increasingly important. Aggregate queries are
frequent in decision support applications, where large history tables are often joined with other tables
and aggregated. Because the tables are large, better optimization of aggregate queries has the
potential to result in huge performance gains. Unfortunately, aggregation operators behave
differently from standard relational operators like select, project, and join. Thus, existing rewrite
rules for optimizing queries almost never involve aggregation operators. To reduce the cost of
executing aggregate queries in a data warehousing environment, frequently used aggregates are often
precomputed and materialized. These materialized aggregate views are commonly referred to as
summary tables. Summary tables can be used to help answer aggregate queries other than the view
they represent, potentially resulting in huge performance gains. However, no algorithms exist for
replacing base relations in aggregate queries with summary tables, so the full potential of using
summary tables to help answer aggregate queries has not been realized.
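To make the idea concrete, the following minimal pandas sketch (not from the paper; the table and column names are hypothetical) materializes a summary table at (month, item) granularity and then answers a coarser monthly aggregate from it instead of rescanning the base fact table.

import pandas as pd

# Hypothetical base fact table (names are illustrative, not from the paper).
sales = pd.DataFrame({
    "month":  ["2013-01", "2013-01", "2013-02", "2013-02"],
    "item":   ["bolt", "nut", "bolt", "nut"],
    "branch": ["B1", "B2", "B1", "B2"],
    "rupees_sold": [12000, 8000, 15000, 9000],
})

# Materialize a summary table ("aggregate view") at (month, item) granularity.
summary = sales.groupby(["month", "item"], as_index=False)["rupees_sold"].sum()

# A coarser aggregate query (total rupees per month) can be answered from the
# smaller summary table instead of the base fact table.
per_month = summary.groupby("month", as_index=False)["rupees_sold"].sum()
print(per_month)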
In this research work an attempt has been made to develop an integrated Biz to Biz supply
chain predictive performance evaluation system for analyzing multidimensional data, ranking time
series and projecting the time series into the future. The idea is to predict future collaborative
performance in a systematic manner using a crosstab query that provides transposed time series
data for selected dimensions, a similarity search module which ranks the time series data, and a
prediction module based on a moving average/regression forecasting model.
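As a hedged illustration of the crosstab step (not the paper's actual implementation; the data and column names below are hypothetical), a pandas pivot can transpose long-format KPI records into one time series per dimension value:

import pandas as pd

# Hypothetical long-format KPI records: one row per (item, quarter).
kpi = pd.DataFrame({
    "item":    ["bolt", "bolt", "nut", "nut", "gear", "gear"],
    "quarter": ["Q1", "Q2", "Q1", "Q2", "Q1", "Q2"],
    "sales":   [120.0, 150.0, 80.0, 70.0, 200.0, 260.0],
})

# Crosstab/pivot query: quarters become columns, giving the transposed time
# series that the ranking and prediction modules consume.
series = kpi.pivot_table(index="item", columns="quarter", values="sales")
print(series)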
BACKGROUND AND LITERATURE REVIEW
In this section, we explore some basic concepts and the literature most closely related
and essential to this work, such as: supply chain management, the bullwhip effect, collaborative CRM
processes, knowledge management, key findings of the analytics and performance management
framework, an overview of the KPI analysis methodology, data warehousing and analytical
processing, data mining, performance measurement, optimizing aggregations, the Autoregressive
Integrated Moving Average model, and the Granger causality test as an exploratory tool.
Supply Chain Management
Supply Chain Management focuses on managing internal aspects of the supply chain. SCM is
concerned with the integrated process of design, management and control of the supply chain for the
purpose of providing business value to organisations by lowering cost and enhancing customer
reachability. Further, SCM is the management of upstream and downstream relationships among
suppliers to deliver superior value at less cost to the supply chain as a whole. Many factors such as
globalization and demand uncertainty pressures have forced companies to concentrate their efforts on
their core business [14]. This is a process that leads many companies to outsource less profitable
activities so that they gain cost savings as well as increased focus on core business activities. As a
result, most of these companies have opted for specialization and differentiation strategies. Moreover,
many companies are attempting to adopt new business models around the concept of networks in
order to cope with such complexity in planning and predicting [15]. The new changes in the business
environment have shifted the concentration of many companies towards adopting mass-customization
instead of mass-production, and have directed the attention of many companies to focus their effort on
markets and customer value rather than on the product [16]. As noted in the International Journal of
Managing Value and Supply Chains, any single company often cannot satisfy all customer
requirements such as fast-developing technologies, a variety of product and service requirements and
shortened product lifecycles. Such new business environments have made companies look to the
supply chain as an 'extended enterprise' to meet the expectations of end-customers.
Bullwhip Effect
In a Supply Chain (SC), the uncertain market demands of individual firms are usually driven
by macro-level, industry-related or economy-related environmental factors. Demand forecasts are
individually managed, which causes the SC to become inefficient in three ways: (i) supply chain
partners invest repeatedly in acquiring highly correlated demand information, which increases the
overall cost of demand forecasting; (ii) the quality of individual forecasts is generally sub-optimal,
since individual companies have only limited access to information sources and limited ability to
process them, resulting in less accurate forecasts and inefficient decision making; and (iii) firms vary
in their capability to produce good-quality forecasts.
The phenomenon of the bullwhip effect is related to SCM. It is often described as magnified and
varied order volumes observed at upstream nodes in the supply chain [17]. The term bullwhip was
coined by Procter & Gamble managers who empirically observed the increase in variability of
vendors' and distributors' orders with respect to customer demand.
Strategies for Counteracting the Bullwhip Effect
Companies can reduce uncertainty by sharing information along the whole supply chain,
providing complete information related to customer demand at each stage. Other countermeasures
against the bullwhip effect include channel alignment [e.g., alignment of Point-Of-Sale (POS) data
with Electronic Data Interchange (EDI)] and operational efficiency (e.g., everyday low pricing) [18].
Collaborative CRM Processes
CRM entails all aspects of the relationships a company has with its customers [19], from initial
contact, presales and sales to after-sales service and support. Collaboration between firms can
improve the involved intra-organizational business processes. The identification and definition of
collaborative CRM core processes is still ambiguous. Collaborative business processes that can be
found in the literature are marketing campaigns, sales management, service management, complaint
management, retention management, customer scoring, lead management, customer profiling,
customer segmentation, product development, feedback and knowledge management.
Knowledge Management
Business benefits of these investments included transactional efficiency, internal process
integration, back-office process automation, transactional status visibility, and reduced information
sharing costs. While some enterprises have started to think about acquiring and preserving
knowledge, the primary motivation for many of these investments was to achieve better control over
day-to-day operations.
The concept of knowledge management: just like knowledge itself, knowledge management is
difficult to define [20]. However, it is believed that defining what is understood by knowledge
management may be somewhat simpler than defining knowledge on its own. The idea of
'management' gives us a starting point when considering, for example, the activities that make it up,
explaining the processes of creation and transfer or showing its main goals and objectives without the
need to define what is understood by knowledge. Consequently, in the literature there are more ideas
and definitions on knowledge management than on knowledge itself, although these are not always
clear as there are numerous terms connected with the concept.
KPI Analysis Methodology
To improve supply chain management performance in a systematic way [21], we propose a
methodology of analyzing iterative KPI accomplishments. The framework consists of the following
steps (see Fig. 1). First, the managers identify and define KPIs and their relationships. Then, the
accomplishment costs of these KPIs are estimated, and their dependencies are surveyed. Optimization
calculation (e.g., structure analysis, computer simulation) is used to estimate the convergence of the
total KPI accomplishment cost, and to find the critical KPIs and their improvement patterns. Then the
performance management strategy can be adjusted by interpreting the analysis results. The following
sections discuss the details of this methodology. Identifying KPIs and modelling their relationships:
managers in supply chains usually identify KPIs according to their objective requirements and
practical experience. But to get a systematic or balanced performance measurement, they often turn
[22] to some widely recognized models, such as BSC and SCOR.
Fig.1- A research framework of improving supply chain KPIs accomplishment
Conventional wisdom tells us a few things about establishing key performance indicators. It
goes something like this: Determine corporate goals. Identify metrics to grade progress against those
goals. Capture actual data for those metrics. Jam metrics into scorecards. Jam scorecards down the
throats of employees. Cross fingers. Hope for the best.
A Data Warehouse and Analytical Processing
Construction of data warehouses involves data cleaning and data integration [23]. This can
be viewed as an important pre-processing step for data mining. Moreover, data warehouses provide
analytical processing tools for the interactive analysis of multidimensional data of varied
granularities, which facilitates effective data mining. Furthermore, many other data mining functions
such as classification, prediction, association, and clustering, can be integrated with analytical
processing operations to enhance interactive mining of knowledge at multiple levels of abstraction.
Subject-oriented: A data warehouse [24] is organized around major subjects, such as customer,
vendor, product, and sales. Rather than concentrating on the day-to-day operations and transaction
processing of an organization, a data warehouse focuses on the modelling and analysis of data for
decision makers. Hence, data warehouses typically provide a simple and concise view around
particular subject issues by excluding data that are not useful in the decision support process.
Integrated: A data warehouse is usually constructed by integrating multiple heterogeneous sources,
such as relational databases, flat files, and on-line transaction records. Data cleaning and data
integration techniques are applied to ensure consistency in naming conventions, encoding structures,
attribute measures, and so on.
A data cube: A data cube [25] allows data to be modelled and viewed in multiple dimensions. It is
defined by dimensions and facts. In general terms, dimensions are the perspectives or entities with
respect to which an organization wants to keep records. For example, consider sales data from the
XYZ company, where a data warehouse keeps records of the store's sales with respect to the
dimensions time, item, branch, and location.
Fact table: Sales (facts are numerical measures, e.g., sales amount and number of units sold). The fact
table contains the names of the facts, or measures, as well as keys to each of the related dimension
tables.
Dimension tables: Time, item, branch and location (these dimensions allow the store to keep track of
things like monthly sales of items, and the branches and locations at which the items were sold).
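As a hedged sketch of this star-schema idea (the tables, keys and values below are hypothetical, not taken from the paper), a fact table can be joined to a dimension table and aggregated per dimension attribute:

import pandas as pd

# Hypothetical dimension table (item) and fact table (sales) in a star schema.
item_dim = pd.DataFrame({
    "item_key": [1, 2],
    "item_name": ["bolt", "nut"],
    "brand": ["Acme", "Acme"],
    "type": ["fastener", "fastener"],
})
sales_fact = pd.DataFrame({
    "time_key": [201301, 201301, 201302],
    "item_key": [1, 2, 1],
    "branch_key": [10, 11, 10],
    "location_key": [100, 101, 100],
    "rupees_sold": [12000, 8000, 15000],
    "units_sold": [300, 400, 350],
})

# Join facts to the item dimension and aggregate units sold per item name.
joined = sales_fact.merge(item_dim, on="item_key")
print(joined.groupby("item_name")["units_sold"].sum())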
Data Mining (DM)
Data mining has been broadly utilized and accepted in business and production since the
1990s [26]. Currently, data mining is used not only in business but also in many different areas of
supply chain and logistics engineering. A few examples are demand forecasting system modelling,
SC improvement roadmap rule extraction, quality assurance, scheduling, and decision support
systems. Data mining techniques can normally be categorized into four sorts, i.e., association rules,
clustering, classification, and prediction. Around the turn of the century, such decision-making
techniques began to be used in production management to choose suitable and agile solutions in real
production. Data warehouse systems allow for the integration of a variety of application systems.
They support information processing by providing a solid platform of consolidated, historical data
for analysis.
Performance Measurement (PM)
In another research stream, multi-criteria decision attribute (MCDA) methods are the most
commonly accepted in the PM context [27]. The classifications are as follows: hierarchical
techniques, deployment approaches, scoring methods and objective programming. For example,
performance improvement of the selection of a freight logistics hub in Thailand was developed by
coordinated simulation. K. A. Associates [28] found that PM, among collaborative SC networks, is
vital for management. There have been many attempts to deploy and explore AI and data mining
techniques to complement the typical techniques in optimizing PM in SCM with a better development
roadmap.
Optimizing Aggregations
Viewing aggregation as an extension of duplicate-eliminating (distinct) projection provides
very useful intuition for reasoning about aggregation operators in query trees. Rewrite rules for
duplicate-eliminating projection can often be used as building blocks to derive rules for the more
complex case of aggregation. In addition to the intuition obtained by viewing aggregation as
extended duplicate-eliminating projection, modelling both with one operator makes sense from an
implementation point of view. Typically, in existing query optimizers both aggregations and
duplicate-eliminating projections are implemented in the same module. We present a set of
query rewrite rules for moving aggregation operators in a query tree. Other authors have previously
given rewrite rules for pulling aggregations up a query tree and for pushing aggregations down a
query tree [29]. This work unifies their results in a single intuitive framework, and using this
framework more powerful rewrite rules can be derived. We present new rules for pushing aggregation
operators past selection conditions (and vice versa) and show how selection conditions with
inequality comparisons can cause aggregate functions to be introduced into or removed from a
query tree. We also present rules for coalescing multiple aggregation operators in a query tree into a
single aggregation operator and, conversely, rules for splitting a single aggregation operator into
two operators.
Autoregressive Integrated Moving Average (ARIMA)
The ARIMA model was introduced by Box and Jenkins (hence it is also known as the Box-Jenkins
model) in the 1960s for forecasting a variable [30]. The ARIMA method is an extrapolation method
for forecasting and, like any other such method, it requires only the historical time series data on the
variable being forecast. Among the extrapolation methods, it is one of the most sophisticated, for it
incorporates the features of all such methods, does not require the investigator to choose the initial
values of any variable or the values of various parameters a priori, and is robust enough to handle any
data pattern. As one would expect, it is quite a difficult model to develop and apply, as it involves
transformation of the variable, identification of the model, estimation through non-linear methods,
verification of the model and derivation of forecasts.
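As a hedged illustration of the Box-Jenkins workflow (not part of the paper; the series values and the (1,1,1) order are illustrative assumptions), a small ARIMA model can be fitted and used to forecast with statsmodels:

import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical quarterly KPI series (e.g., rupees sold); values are illustrative.
y = pd.Series(
    [120, 132, 128, 140, 151, 149, 160, 172, 168, 181, 190, 187],
    index=pd.period_range("2010Q1", periods=12, freq="Q"),
)

# Fit a simple ARIMA(1,1,1) model; in practice the order is chosen through the
# identification/estimation/verification cycle described above.
model = ARIMA(y, order=(1, 1, 1))
fit = model.fit()
print(fit.forecast(steps=4))   # forecast the next four quarters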
Granger Causality Test as an Exploratory Tool
Testing for causality in the sense of Granger involves using statistical tools to test
whether lagged information on a variable u provides any statistically significant information about
the variable y. If not, then u does not Granger-cause y. The Granger causality test [31] compares the
residuals of an autoregressive model (AR model) with the residuals of an autoregressive moving
average model (ARMA). If the Granger Causality Index (GCI) g is greater than the specified critical
value for the F-test, then the null hypothesis that u does not Granger-cause y is rejected. As g
increases, the p-value associated with the pair ({u(k)}, {y(k)}) decreases, lending more evidence that
the null hypothesis is false. In other words, high values of g are to be understood as representing
strong evidence that u is causally related to y [32].
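A minimal sketch of such a test (not from the paper; the simulated series and lag choice are assumptions for illustration) using the statsmodels implementation, which performs the F-tests on lagged regressions described above:

import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

# Hypothetical example: does series u help predict series y?
rng = np.random.default_rng(0)
u = rng.normal(size=200)
y = np.r_[0.0, 0.8 * u[:-1]] + 0.1 * rng.normal(size=200)  # y lags u by one step

data = pd.DataFrame({"y": y, "u": u})
# Tests whether the second column (u) Granger-causes the first (y); small
# p-values reject the null hypothesis "u does not Granger-cause y".
results = grangercausalitytests(data[["y", "u"]], maxlag=2)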
Analytics and Performance Management Framework
This framework defines the people, processes and technologies [33] that need to be integrated
and aligned to take a more strategic approach to business intelligence (BI), analytics and performance
management (PM) initiatives.
• Most organizations use a combination of vendors, products and services to provide BI, analytics
and PM solutions.
• Successful managers recognize the diversity and interrelationships of the analytic processes
within the enterprise and can address the needs of a diverse set of users without creating silos.
• A strategic view requires defining the business and decision processes, the analytical processes,
as well as the processes that define the information infrastructure, independently from the
technology.
• The PM, technology and complexity of skills associated with the strategic use of BI, analytics
and PM increase dramatically as the scope of the initiative widens across multiple business
processes.
• There is no single or right instantiation of the framework; different configurations can be
supported by the framework based on business objectives and constraints.
Proposals:
• Use this framework to develop a strategy to surface key decisions, integration points, gaps,
overlaps and biases that business and program managers may not have otherwise prepared for.
• A portfolio of BI, analytic and PM technologies will be needed to meet the diversity of
requirements of a large organization.
• Seek advice from program management specialists so as to balance investments across
multiple projects, and consider bringing BI, analytics and PM initiatives within a formal program
management framework.
OBJECTIVE AND METHODOLOGY
The objective of the research is "to develop an integrated Biz to Biz supply chain predictive
collaboration performance evaluation framework based on multiple decision making".
The methodology, shown in Fig. 2, is based on multiple decision-making components: (a) an
analytical query system based on an advanced column-store database server, which provides
aggregated time series data from a star-schema data mart together with Classification and Regression
Tree (C&RT) and K-means models; (b) a time series ranking module which ranks the time series with
an adaptive algorithm; and (c) a prediction module, which provides simple but effective parametric
model building capabilities. The C&RT and K-means models for Biz to Biz collaborative performance
prediction were built from two data sources: a training set and a testing set. Both worst and best
relationship types were selected in the collaborative performance learning data files to demonstrate
the framework's potential. Before C&RT model construction, Pearson feature selection was applied to
identify the significant inputs which have an effect on the collaborative performance score. Next, the
model components using these files were defined by the general configuration from domain experts
concerning their related SC context. Finally, the results were interpreted by the domain experts to
forecast the overall collaborative performance and plan their collaborative performance improvement
direction.
Fig. 2- The Methodology of B to B - SC Predictive Collaborative Performance Evaluation Model
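The following sketch illustrates the kind of C&RT plus K-means modelling described above using scikit-learn; the synthetic data, the weights used to generate the response and the 80/20 split are illustrative assumptions, not the paper's data set, although the maximum tree depth of 7 and the three clusters follow the text.

import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.cluster import KMeans

# Hypothetical learning data: rows are partner relationships, columns are
# sub-KPI scores (e.g., engagement, compatibility, power exerted, confidence).
rng = np.random.default_rng(42)
X = rng.uniform(1, 5, size=(100, 4))
# Illustrative collaborative performance score: weighted sum plus noise.
y = X @ np.array([0.4, 0.3, 0.2, 0.1]) + rng.normal(0, 0.1, size=100)

# C&RT-style regression tree with a maximum depth of 7, as in the paper.
cart = DecisionTreeRegressor(max_depth=7).fit(X[:80], y[:80])
print("test MAE:", np.abs(cart.predict(X[80:]) - y[80:]).mean())

# K-means clustering of the relationships into three performance clusters.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(clusters))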
Building Dimensions Tables
A dimension table, for example the table Item, contains the attributes item name, brand, and
type. Consider a simple 2-D data cube, which is in fact a table or spreadsheet of sales data for items
sold per quarter in the city of Surat.
Table-1 represents a 2-D view of sales data for the XYZ company according to the
dimensions time and item, where the sales are from branches located in the city of Surat. The
measure displayed is rupees sold.
Viewing things in 4-D becomes tricky. However, we can think of a 4-D cube as being a series
of 3-D cubes, as shown in Fig. 3. If we continue in this way, we may display any n-D data as a series
of (n-1)-D "cubes". The important thing to remember is that data cubes are n-dimensional, and do
not confine data to 3-D.
Table-1: A 2-D View of sales data for XYZ company
Fig. 3 - A 3-D data cube representation of the data in Table- 2
Fig. 3 above represents a 3-D data cube representation of the data in Table-2, according to the
dimensions time, item, and location, individually for each location. The measure displayed is rupees
sold for each location.
Fig. 4 - A 3-D data cube representation of the data in Table 2 combined
Fig. 4 above represents a 3-D data cube representation of the data in Table-2, according to
the dimensions time, item, and location. The measure displayed is rupees sold. The 0-D cuboid,
which holds the highest level of summarization, is called the apex cuboid. The apex cuboid is
typically denoted by all.
Fig. 5: Lattice of cuboids, making up a 4-D data cube
Fig. 5 above represents the lattice of cuboids making up a 4-D data cube for the dimensions
time, item, location, and supplier. Each cuboid represents a different degree of summarization.
Stars, snowflakes, and fact constellations are schemas for multidimensional databases. The
entity-relationship data model is commonly used in the design of relational databases.
A multidimensional data model: a compromise between the star schema and the snowflake schema
is to adopt a mixed schema where only the very large dimension tables are normalized. Normalizing
large dimension tables saves storage space, while keeping small dimension tables unnormalized may
reduce the cost and performance degradation due to joins on multiple dimension tables. Doing both
may lead to an overall performance gain. However, careful performance tuning could be required to
determine which dimension tables should be normalized and split into multiple tables.
Fact constellation: Sophisticated applications may require multiple fact tables to share dimension
tables. This kind of schema can be viewed as a collection of stars, and hence is called a galaxy
schema or a fact constellation.
Fig. 6: Fact constellation schema of a data warehouse for sales and shipping
Examples for defining star, snowflake, and fact constellation schemas: just as a relational query
language like SQL is used to specify relational queries [34], a data mining query language can be used
to specify data mining tasks. In particular, we examine an SQL-based data mining query language
called DMQL, which contains language primitives for defining data warehouses and data marts.
Language primitives for specifying other data mining tasks, such as the mining of concept/class
descriptions, associations, classifications, and so on, are introduced elsewhere.
Data warehouses and data marts can be defined using two language primitives, one for cube
definition and one for dimension definition. The cube definition statement has the following syntax:
define cube {cube_name} [{dimension_list}]: {measure_list}
The dimension definition statement has the following syntax:
define dimension {dimension_name} as ({attribute_or_subdimension_list})
The star, snowflake and fact constellation schemas of Examples 2.1 to 2.3 can be defined using
DMQL. DMQL keywords are displayed in sans serif font.
Finally, a fact constellation schema can be defined as a set of interconnected cubes. Below is
an example. Example 2.6 The fact constellation schema of Example 2.3 and Fig.6 is defined in
DMQL as follows.
define cube sales [time, item, branch, location]: rupees sold =
sum(sales in rupees), units sold = count(*)
define dimension time as (time_key, day, day of week, month, quarter, year)
define dimension item as (item_key, item name, brand_type, supplier_type)
define dimension branch as (branch_key, branch_name, branch_type)
define dimension location as (location_key, street, city, state, country)
define cube shipping [time, item, shipper, from_location, to_location]: rupees cost =
sum(cost in rupees), units shipped = count(*)
define dimension item as item in cube sales
define dimension shipper as (shipper_key, shipper_name, location as location in cube
sales, shipper_type)
define dimension from_location as location in cube sales
define dimension to_location as location in cube sales
A define cube statement is used to define data cubes for sales and shipping, corresponding to
the two fact tables of the schema of Example 2.3. Note that the time, item, and location dimensions
of the sales cube are shared with the shipping cube. This is indicated for the item dimension, for
example, as follows: under the define cube statement for shipping, the statement "define dimension
item as item in cube sales" is specified. Instead of having users or experts explicitly define data cube
dimensions, dimensions can be automatically generated or adjusted based on the examination of data
distributions. DMQL primitives for specifying such automatic generation or adjustments are also
possible.
Data Preparation and Analysis of a Dyadic Relation
After gathering the questionnaire data set on the relationship between an enterprise and its
direct customer, following the model of R. Derrouiche et al. [35] shown in Fig. 7 for analyzing a
dyadic relation and evaluating its performance, the attribute ranking algorithm using information gain
based on ranker search was applied to the two types of relationships.
Fig. 7: Model to analyze relations and evaluate their performance
Sub-KPI impact results from the attribute ranking algorithm: these results are shown in Fig. 8. In
addition, the questionnaire from R. Derrouiche et al. is able to characterize the collaborative relation
between two or more partners in a supply chain, evaluating their related performances accordingly.
The former level is the common perspective, as follows: relation climate, relation structure, IT used
and relation lifecycle; the latter level consists of the perceived satisfaction of the relation and its
perceived effectiveness.
Fig. 8: The sub-KPI impact results from the attribute ranking algorithm using information gain,
based on ranker search
These represent the macro views of the model. For example, the macro view of relation climate
has six micro views, and each micro view also has two sub-micro views.
Next, data cleaning and conversion to the input-output format following the C&RT and K-means
structure were conducted to prepare the learning data. The primary impact of each sub-KPI (i) for
each relationship type (j) was calculated, and the corresponding weights were defined, according to
equation (1).
(1)
C&RT Decision Tree of predictive collaborative performance construction
The simple additive weight of each sub macro view was constructed using the weights from
the results of the primary impact analysis in Fig. 9. At this stage, the collaborative performance was
also calculated as the future response of the C&RT model with 7 as the maximum tree depth. Then,
Pearson feature selection was applied to identify the significant inputs which have an effect on the
collaborative performance score. The significant inputs are the inputs which pass the 95 percent
confidence interval (red line).
Preparation algorithm for computing the ranking of time series
A simple pattern-based approach has been used in this research work to compare the time
series data [36]. The ranking of time series is done through automated sorting of patterns. In
order to sort the time series values, the spread of each series is computed and compared with the
spread of all the series. Large variances suggest a very different development, while small
variances indicate a similar development pattern. Since the values for each series are very
different, it is not possible to compare the series values directly. In order to make the series
comparable, the series are normalized by dividing the individual values of each series by the
series mean. Once the data is normalized, the square of the sum of differences of individual values
in the time series from the overall mean vector values is computed, which results in a scalar
value for each series. Ranking these scalars provides statistically valid ranks for the time series.
The algorithm for computing the ranking of time series is shown in Fig. 10.
Fig. 10: An algorithm for computing the ranking of time series
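A minimal sketch of this ranking idea (not the paper's exact algorithm from Fig. 10; here the deviation is interpreted as a sum of squared differences from the mean vector, and the data values are hypothetical):

import numpy as np

def rank_time_series(series: np.ndarray) -> np.ndarray:
    """Rank time series by deviation from the overall (average) pattern.

    series: array of shape (n_series, n_periods).
    Returns series indices ordered from most similar to the average trend
    to most different.
    """
    # Normalize each series by its own mean so series of different scales
    # become comparable.
    normalized = series / series.mean(axis=1, keepdims=True)
    # Overall mean vector (the "average trend") across all series.
    mean_vector = normalized.mean(axis=0)
    # Scalar score per series: sum of squared differences from the mean vector.
    scores = ((normalized - mean_vector) ** 2).sum(axis=1)
    return np.argsort(scores)

# Hypothetical quarterly sales for three series.
data = np.array([
    [100, 110, 120, 130],
    [200, 220, 240, 260],
    [150, 90, 300, 60],
])
print(rank_time_series(data))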
Prediction algorithm for forecasting the time series
A standard approach to model-based forecasting is to use techniques like ARIMA or
neural networks. However, several problems limit their usefulness when dealing with analysis in a
practical situation, such as:
• Simple models have proved to be effective in replicating complex models like ARIMA for
time series forecasting [37].
• Non-parametric models like kernel regression, though simple, require human evaluation,
which limits their usage in a dynamic setting like interactive analytics.
• Though non-parametric methods like neural networks show promising results, their
computational complexity is prohibitive.
The prediction of the future value of a KPI is an important function in analytics. In this research
we adopt the models proposed by Toskos and integrate them with our analytical system. The
equation used is the k-th moving average:
(2)
The details of the derivations and the comparison of the effectiveness of these models with
standard as well as best-fitting ARIMA models are discussed in Toskos. For brevity, the proposed set
of models using the k-th series is referred to as KMV (K-th series Moving average Variants). The
actual algorithm is given below in Fig. 11.
Fig. 11: Prediction with KMV model.
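A hedged sketch of the prediction step (a simple rolling k-th order moving average standing in for the KMV family described above, not Toskos' exact formulation; the series values and parameters are hypothetical):

import numpy as np

def kmv_forecast(series: np.ndarray, k: int, horizon: int) -> np.ndarray:
    """Forecast future values with a simple k-th order moving average.

    Each new point is the mean of the last k observed/predicted values,
    and the window rolls forward as forecasts are appended.
    """
    history = list(series)
    forecasts = []
    for _ in range(horizon):
        next_value = float(np.mean(history[-k:]))
        forecasts.append(next_value)
        history.append(next_value)  # roll the window forward
    return np.array(forecasts)

# Hypothetical quarterly KPI values.
y = np.array([120.0, 132.0, 128.0, 140.0, 151.0, 149.0])
print(kmv_forecast(y, k=3, horizon=4))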
This analysis can be done in two modes: manual and automatic. In both modes the
process remains the same; only the space in which the analysis is carried out differs. In
manual mode, the user selects the dimensions of interest, whereas in the automatic mode a
predefined structure for hierarchical analysis is followed. The actual process consists of:
• Selection of dimensional values and facts
• Forecasting the KPIs using the KMV model
• Ranking the time series of predicted values
Collaborative Process Performance Management
Proposed Collaborative Key Performance Indicators
In this research, we first propose collaborative Key Performance Indicators (cKPIs), which
can be used to measure the collaboration of multiple manufacturing partners based on the SCOR
standard model. cKPIs were developed to consolidate the important KPIs of individual partners. They
are calculated from the values of KPIs leveraging the collaborative processes. The SCOR model
provides several levels of performance metrics in the supply chain, which are good candidates for
collaboration performance indicators. Also, we developed a modified sigmoid function to reflect and
check the characteristics of Service Level Agreements (SLA), which are often contracted between
participating companies. To achieve the synthetic satisfaction of collaboration results, the modified
sigmoid function can partially overcome the limits of the desirability functions which are used to
combine multiple responses into one response from 0 to 1.
Collaboration Performance Indicators
Wheelwright and Bowen presented SCM performance indicators including cost, quality,
delivery period, and flexibility. Min and Park introduced performance measurement of the supply
chain and systemized the measures by using the SCOR model [38].
Desirability Function: Desirability functions convert the satisfaction of measured values into values
from 0 to 1. Kim and Lin proposed a non-linear desirability function based on an exponential function
[39]. The desirability can be calculated from the response variables as follows:
(3)
It is assumed that the response variables are classified into LTB (Larger-The-Better), STB
(Smaller-The-Better), and NTB (Nominal-The-Best). To measure the desirability of the three types of
response variables, the z value is first calculated by using the response value Y with the maximum,
minimum, and target response values, denoted Ymax, Ymin, and T, respectively.
(4)
Unfortunately, it is difficult to adopt this desirability function as a performance measure
of the supply chain for two reasons. First, typical performance indicators do not have Ymax, Ymin,
and T values, or the values are hardly meaningful. Second, the desirability function cannot reflect
the criteria of performance indicators, which are often used in contracts with partners, the so-called
SLA. For these reasons, we devised a new desirability function for the collaboration in Section 4.
Methodology used for Collaborative Process Performance Management
The framework of collaborative process performance management includes the development of
cKPIs, real-time collaboration process monitoring and reporting, and collaborative process
performance analysis. A cKPI is a collaborative performance indicator that combines the KPIs of
individual companies.
Development of cKPIs: product data is often shared in a collaborative manufacturing
environment. However, performance indicators and their improvement are generally managed only
within individual companies. To overcome these limits, we propose the notion of cKPIs for
collaboration performance management. If all parties in the collaboration effectively measure and
manage the shared cKPIs in their manufacturing collaboration, they can continuously improve and
strengthen the competitiveness of their collaboration towards their common goals.
(1) Real-time collaboration process monitoring and reporting: Based on the derived cKPIs, real-time
collaboration can be monitored and reported to maintain their ongoing collaboration processes.
(2) Collaborative process performance analysis: It provides functions of process analysis to mutually
improve the performances of the partners in collaboration by analyzing the measured values of
cKPIs.
Supply Chain Performance Management
The SCOR model provides a reference for supply chain processes and metrics. It contains five
generic processes for the supply chain (plan, source, make, deliver and return), and it also provides
structured performance indicators for each process. In the model, supply chain performances are
measured to balance high-level performance indicators, which include reliability, flexibility and
responsiveness, cost, and assets. The SCOR model designs the supply chain by adjusting the overall
goals of the supply chain and measures performance [35]. The SCOR model organizes the process
reference model hierarchically from Level 1 to Level 3. For example, Level 2 includes forty
performance indicators. In this research, we derived cKPIs of manufacturing collaboration processes
from the performance indicators of Level 2 in SCOR model version 8.
Table- 2: cKPIs derived from metrics in SCOR model
Table-3: cKPIs and Their Calculation
Desirability Function of cKPI
An SLA describes the level of service quality. If the service provider does not satisfy the
agreement, a penalty is imposed on the provider on the basis of the SLA. In this research, a modified
sigmoid function is introduced to reflect sensitivity around the critical value s, which is the criterion of
the service level described in the agreement.
The logistic or Gompertz function is generally used to represent the sigmoid function, as follows:
(5)
(6)
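For reference, the standard logistic and Gompertz sigmoid forms (a hedged stand-in for the expressions labelled (5) and (6) above; the paper's exact parameterization is not reproduced here, and the centring on s is an assumption for illustration) can be written as:

\[ f_{\text{logistic}}(y) = \frac{1}{1 + e^{-a\,(y-s)}}, \qquad f_{\text{Gompertz}}(y) = e^{-b\,e^{-c\,(y-s)}} \]

where a, b and c are shape parameters and s is the SLA critical value.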
In this research, the logistic function is chosen because it does not require the
Ymax and Ymin values and it can also transform the desirability to values from 0 to 1. Focusing on
the critical value s of the SLA, a new desirability function was developed to differentiate the effect
when the measured value is above the criterion from when it is below.
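A minimal sketch of this kind of SLA-centred logistic desirability (the slope parameters and the asymmetric treatment above/below the criterion are illustrative assumptions, not the paper's exact function):

import numpy as np

def logistic_desirability(y: float, s: float, a_over: float = 2.0,
                          a_under: float = 6.0) -> float:
    """Map a measured cKPI value to a desirability in (0, 1).

    A logistic curve centred on the SLA critical value s, with a steeper
    slope below s than above it, so falling short of the agreed service
    level is penalized more strongly than exceeding it.
    """
    a = a_over if y >= s else a_under
    return 1.0 / (1.0 + np.exp(-a * (y - s)))

# Hypothetical delivery-performance cKPI with an SLA criterion of 0.95.
for value in (0.90, 0.95, 0.99):
    print(value, round(logistic_desirability(value, s=0.95), 3))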
Total Performance Satisfaction Measure
To measure the performance of collaboration with the proposed cKPIs, we gathered the parameters
of the performance indicators and the measured values of each partner. Table-4 shows the values of
the KPIs, which are used to calculate the cKPIs.
Table-4: KPIs and their values for the partners in the example collaboration
Table-5: cKPIs and the satisfaction of the example collaboration
Table-2 shows the procedure for calculating the values of the cKPIs and measuring the satisfaction
through interviews with experts. The values of the cKPIs are calculated from those of the KPIs in
Table-4 by using the equations in Table-2. The total performance satisfaction of the manufacturing
collaboration is obtained using the desirability function derived in Section 4.1. Finally, the values of
the satisfaction functions of the cKPIs are combined to obtain the satisfaction of the collaboration D
by considering the weights wi. The weight values can be acquired through interviews with the experts.
(7)
For the example process, the satisfaction of the manufacturing collaboration is calculated to be 0.720.
RESULTS AND DISCUSSION
Integrated Biz to Biz predictive collaboration performance evaluation
Here we analyse the outcomes, at the macro level, of the C&RT decision tree model and the
performance clustering based on K-means construction.
C&RT Decision Tree model performance analysis
The learning dataset of the C&RT decision tree comes from the prepared dataset; the training
set makes up 80 percent. The performance prediction capacity of this model is expressed in terms
of the error of the predictive output. The mean absolute error on the testing set is 0.027, which can be
regarded as acceptable error according to the domain experts' assumptions.
C&RT Decision Tree model deployment
In practical deployment, the domain users proceeded as follows:
• Prepare their collaborative performance data according to the input-output model format and
then feed it into the model.
• Put in performance improvement scenarios and analyze the results in terms of real usage
feasibility and how to take advantage of the KPI sensitivity analysis related to their
expected collaborative performance.
• Analyze the impact of sub-KPIs on their expected collaborative performance.
• Form the sub-KPI improvement plan based on the model results and their long-term
strategy.
For instance, one of the domain experts exercised his performance improvement road map,
based on in-depth study of the Biz to Biz SC case. His assumption stated that most of the best
relation types start only in the maturity phase of the product life cycle. The two partners make the
effort to develop a cooperative climate with a long-term orientation and very high participation.
These come from a high degree of engagement and commitment, compatibility and solidarity, power
exerted and confidence; besides, this can improve satisfaction with their partner. To test this
assumption, its statement was converted to estimated values of the sub-KPIs, and all of these values
were then fed into the C&RT decision tree model.
As a result, it is found that the predicted collaborative performance is very high,
greater than 75 percent of collaborative performance, under the main improvement
condition (engagement and commitment > 3.425, compatibility and solidarity > 3.084, power
exerted > 4.253 and confidence > 4.245); this corresponds to his assumption.
Performance Clustering based on K-Means Construction
Before performing the K-means algorithm, the number of clusters was set to three. The
percentage membership volumes in the clusters were 42, 14 and 55 percent.
Outcome from the ranking of time series module
The trend i.e. time series can have some prominent patterns which are of interest to
business analysts. Some of them are like, Vary considerably over the past few periods, Increase
greatly, Drop drastically, Increase greatly and then drop drastically, Perform differently than the
total trend, A typical comparison of the time series is given in The following graph-2 is the
outcome of time series ranking module which ranks the time series with an adoptive algorithm for
the quarterly sales chosen for 5 years. the red colour represent the typical patterns of average trend,
where as the Series 1 to 6 represent the sales pattern from past five years from various quarters.
Graph-1: Typical pattern of trend in time series for sales
The following graph is the outcome of the time series ranking module for the quarterly unit sales
chosen over 5 years, similar to the one explained above, for variations in patterns; the red colour
represents the average trend, whereas Series 1 to 6 represent the sales patterns from the past five
years for the various quarters.
Graph-2: Typical pattern of trend in time series for unit of sales
Conclusion: Proposed Collaborative Key Performance Indicators
The existing studies on performance management mainly focus on the tasks of a single company
or on outsourcing from the viewpoint of a client company. Such performance measures cannot be
adopted for collaborative work, since performance indicators often conflict between service providers
and clients. As a result, we propose a methodology for measuring the collaborative performance and
the satisfaction of manufacturing collaboration.
In this research, the SCOR model is used to derive cKPIs, which are calculated from the
KPIs of each partner. The desirability function for measuring the satisfaction of cKPIs is devised
by combining the logistic function and the exponential function so that the Ymax or Ymin of a cKPI does
not need to be considered. Finally, a method of obtaining the satisfaction of collaboration is proposed
from the values of the cKPIs. The proposed methodology of calculating the collaborative performances
and the satisfaction of collaboration can be utilized for the purpose of maintaining and improving a
collaboration performed by multiple partners.
CONCLUSION
In the present work, B to B supply chain performance evaluation systems, data mining and
multi-criteria decision attribute techniques were integrated to develop a predictive collaborative
performance evaluation model and a performance clustering model. After the methodology
implementation and deployment, the results prove the model's advantage in terms of long-term
planning based on expected performance. Moreover, the proposed model allows users to combine
human perception and judgment with the C&RT decision tree of predictive collaborative performance
related to their SC context. This results in innovative decision making that reconciles computational
results and human intuition. There are certain limitations, such as the users having to know some data
mining and multi-criteria decision attribute techniques. This work is focused on a quantitative
approach to how managers consider the role of each sub-KPI and its impact on the long-term
collaborative performance.
Illustrating the results on a dashboard or with simplified graphics is another way to communicate
among stakeholders from the different units. Moreover, some uncertainty factors from the SC
environment, or even from management, should be handled using fuzzy MCDA as a first step before
constructing the predictive collaborative performance evaluation model. On the other hand, some
measurement metrics from the SCOR model might be added to the PM framework.
REFERENCES
[1] Simatupang Togar M., and Sridharan Ramaswami, An integrative framework for supply
chain collaboration, The International Journal of Logistics Management, 2005, 16(2), 257 – 274.
[2] Boyson, S., Corsi, T.M., Dresner, M.E., Harrington, L.H., Benchmarks and Best Practices
for the Manufacturing Professional, Wiley, New York, NY, 1999.
[3] Taylor, A.T., 2002, Supply chain coordination under channel rebates with sales effort
effects, Management Science, 48(8), 992-1007.
[4] G. J. C. Da Silveira, "Improving trade-offs in manufacturing: Method and illustration,"
International Journal of Production Economics, vol. 95, pp. 27-38, 2005.
[5] A. Gunasekaran and E. W. T. Ngai, "Information systems in supply chain integration and
management," European Journal of Operational Research, vol. 159, pp. 269-295, 2004.
[6] International Journal of Managing Value and Supply Chains (IJMVSC), Vol. 3, No. 4,
December 2012.
[07] Alotaibi, K.F., Fawcett, S. E. and Birou, L. (1993) “Advancing Competitive Position
Through Global And JIT Sourcing: Review And Directions”, 3(1/2):4-37.
[08] Lee, H. L. & Billington, C. (1995). “The Evolution of Supply-Chain-Management Models
and Practice at Hewlett-Packard”, Interfaces, 25:42-63. Vol. 3, No. 4, December 2012.
[09] Hoover, Eloranta & Katiluttunen (2001) Managing the demand chain: Value innovations
for supplier excellence, USA: John Wiley and Sons,
[10] Raffaella, C., Federico, C. & Gianluca, S. (2003) “E-business strategy: How companies are
shaping their supply chain through the Internet”, 23(10):1142 – 1162.
[11] Gupta, A. and Maranas, C. D. (2003) “Managing demand uncertainty in supply chain
planning”, Computer and Chemical Engineering, 27:1219-1227.
[12] Yusuf, Y. Y., Gunsekaran, A. Adeleye, E. O. & Sivayoganathan, K. (2004) “Agile supply
chain capabilities: Determinants of competitive objectives”, 1(59):379-392.
[13] Aggregate-Query Processing in Data Warehousing Environments- Ashish Gupta Venky
Harinarayan Dallan Quass -IBM Almaden Research Center.
[14] Sanchez-Rodrigues, V., Potter, A. & Naim, M. M. (2010) “Evaluating the causes of
uncertainty in logistics operations”, International Journal of Logistics Management,
21(1):45 – 64.
[15] Saikouk, T., Zouaghi, I. & Spalanzani, A. (2012) “Mitigating Supply Chain System
Entropy by the Implementation of RFID”, CERAG, Vienna, Austria.
[16] Grenci, R. T. & Watts, C. A. (2007) “Maximizing customer value via mass customized
econsumer services”, Business horizons, 50 (2):123-132.
[17] Lee H, Padmanabhan, P. & Whang, S. (1997) “Information Distortion in a Supply Chain:
The Bullwhip Effect”, Management Science, 43(4): 546-558.
[18] Ravichandran, N. (2008) “Managing bullwhip effect: two case studies”, Journal of
Advances in Management Research, 5(2):77 – 87.
[19] Olaf Reinhold, Usability of CRM Systems as Collaboration Infrastructures in Business
Networks, Germany.
[20] Earl M (2001) Knowledge management strategies: toward a taxonomy. Journal of
Management Information Systems 18(1), 215–233.
[21] B.J. Angerhofer, M.C. Angelides, A model and a performance measurement system for
collaborative supply chains, Decision Support Systems 42 (2006) 283–301.
[22] F.T.S. Chan, H.J. Qi, An innovative performance measurement method for supply chain
management, Supply Chain Management: An International Journal 8 (3–4) (2003) 209–223.
[23] A. Berson and S. J. Smith. Data Warehousing, Data Mining, and OLAP. New York:
McGraw-Hill, 1997.
[24] S. Chaudhuri and U. Dayal. An overview of data warehousing and OLAP technology. ACM
SIGMOD Record1997.
[25] P. Deshpande, J. Naughton, K. Ramasamy, A. Shukla, K. Tufte, and Y. Zhao. Cubing
algorithms, storage estimation, and storage and processing alternatives for olap, 1997.
[26] J. Han and M. Kamber, Data mining: concepts and techniques.Morgan Kaufmann
Publishers, 2001.
[27] P.C. Fishburn, Method for estimating addtive utilities, Management Science,vol.13-17,
pp.435-453, 1997.
[28] K. A. Associates, A Guidebook for Developing a Transit Performance-measurement
System. Washington, DC., 2003.
[29] W. P. Yan and P. A. Larson. Performing Group-By Before Join. In ICDE, 1994.
[30] Box, G.E.P., and G. M. Jenkins. 1970. Time series analysis: forecasting and control. Holden
Day, San Francisco, CA.
[31] C. W. J. Granger. Investigating causal relations by econometric models and cross-spectral
methods. Econometrica, 34:424–438, 1969.
[32] J. Hamilton. Time Series Analysis. Princeton University Press, 1994.
[33] Gartner RAS Core Research Note G00166512 Gartner’s Business Intelligence, Analytics
and Performance Management Framework, 19 October 2009
[34] E. Thomsen. OLAP Solutions: Building Multidimensional Information Systems. John
Wiley & Sons, 1997.
[35] Derrouiche, R. Neubert G. and Bouras A., Supply chain management: a framework to
characterize the collaborative strategies, Vol. 21, Issue 4, June 2008 , pp. 426-439.
[36] M. N. Rao and Jay B. Simha, KPI based analytics in e-Governance – A prototype using
segmentation and trend analysis.
[37] Toskos C.P, “K-th, weighted and exponential moving averages for time series forecasting
models”, European Journal of Pure and Applied Mathematics, Vol.3, No.3, 2010, 406-416
[38] Min, D.G., and Park, J. D., 2003, “Development of a Performance-Based Supply Chain
Management System,” IE Interface, 16(3), 167-173.
[39] Kim, K. J., and Dennis K. J. Lin, 2000, “Simultaneous optimization of mechanical
properties of steel by maximizing exponential desirability functions,” 49(3), 211-325.
[40] C. P. Aruna Kumari and Dr. Y. Vijaya Kumar, “An Effective Way to Optimize Key
Performance Factors of Supply Chain Management (SCM)”, International Journal of
Management (IJM), Volume 4, Issue 3, 2013, pp. 8 - 13, ISSN Print: 0976-6502,
ISSN Online: 0976-6510.
[41] D. Siva Kumar and Dr. Jayshree Suresh, “Optimization of Supply Chain Logistics Cost”,
International Journal of Management (IJM), Volume 4, Issue 1, 2013, pp. 130 - 135,
ISSN Print: 0976-6502, ISSN Online: 0976-6510.
[42] Amit Raj Varshney, Sanjay Paliwal and Yogesh Atray, “A Systematic Review of Existing
Supply Chain Management: Definition, Framework and Key Factor”, International Journal of
Mechanical Engineering & Technology (IJMET), Volume 4, Issue 2, 2013, pp. 298 - 309,
ISSN Print: 0976 – 6340, ISSN Online: 0976 – 6359.