Marketing Research: Approaches, Methods and Applications in
Europe
Self-assessment questions (answers follow below)
Chapter 15 Alternative methods of data analysis
1. What are the key limitations of mainstream statistics?
2. What are the problems associated with focusing on covariation?
3. A non-linear relationship might take what forms?
4. What are the main alternatives to mainstream statistics that are covered in this text?
5. What is neural network analysis?
6. What are the main types of nodes in a neural network analysis?
7. What are the key steps in doing a neural network analysis?
8. What is data mining?
9. What is a data warehouse?
10. What are the main tasks that data mining can perform?
11. In what ways does data mining differ from mainstream statistics?
12. What techniques are specific to data mining?
13. What are Bayesian statistics?
14. In mainstream statistics a case is an entity whose variable characteristics are being recorded
in the process of data construction. How are cases treated in combinatorial logic?
15. What is a truth table?
16. What question is it possible to ask to establish necessary conditions?
17. What question is it possible to ask to establish sufficient conditions?
18. What are the limitations of qualitative comparative analysis (QCA) as developed by Ragin
(1987)?
19. What are fuzzy sets?
20. How are fuzzy sets created?
21. According to Smith and Fletcher (2004), what are the key steps for holistic analysis?
ANSWER SECTION
1.
- The focus of the analysis is on variables and the relationships between variables
- The patterns sought in terms of the relationships between variables are limited largely to establishing differences, covariation or fit with a theoretical expectation
- The analysis of causality, where it is sought or implied, depends on the establishment of constant conjunction and linear thinking
2.
- Covariation assumes that patterns are symmetrical: for example, if high values on one variable are associated with high values on another, it is implied that low values also go together
- Covariation implies nothing about which variables are dependent and which are independent
- Covariation offers no evidence of causality, necessity or sufficiency
3.
- Curvilinear
- Interdependent
- Networked
- Contingent
- Parallel
- Chaotic
4.
- Neural network analysis
- Some data mining
- Bayesian statistics
- Set theory and combinatorial logic
- Fuzzy set analysis
- Holistic approaches
5.
An alternative to multivariate statistical techniques that tries to mimic the way the human brain
works by learning to solve problems by recognizing patterns in the data.
6.
- Input nodes
- Output nodes
- Hidden nodes
7.
- Decide on a training sample and a validation sample
- Examine the data for skewness
- Define the model structure
- Estimate the model
- Evaluate the model against the predictions made from it
- Apply the model to a new set of data
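The steps above can be sketched in a few lines of Python/NumPy. This is a minimal illustration, not a production implementation: the toy data, the network size (2 input, 4 hidden, 1 output node) and the learning rate are all invented, and the skewness check and real data preparation are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented toy data: 2 input variables -> 1 binary outcome
X = rng.normal(size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

# Decide on a training sample and a validation sample
X_train, X_val = X[:80], X[80:]
y_train, y_val = y[:80], y[80:]

# Define the model structure: 2 input nodes, 4 hidden nodes, 1 output node
W1 = rng.normal(scale=0.5, size=(2, 4))
W2 = rng.normal(scale=0.5, size=(4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(X, y):
    return float(np.mean((sigmoid(sigmoid(X @ W1) @ W2) - y) ** 2))

loss_start = loss(X_train, y_train)

# Estimate the model: plain gradient descent on squared error
for _ in range(2000):
    hidden = sigmoid(X_train @ W1)       # hidden-node activations
    out = sigmoid(hidden @ W2)           # output-node activations
    grad_out = (out - y_train) * out * (1 - out)
    grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out / len(X_train)
    W1 -= 0.5 * X_train.T @ grad_hid / len(X_train)

loss_end = loss(X_train, y_train)

# Evaluate the model against data it has not seen
val_pred = sigmoid(sigmoid(X_val @ W1) @ W2) > 0.5
accuracy = float((val_pred == (y_val > 0.5)).mean())
```

The same fitted weights would then be applied to a new set of data, exactly as in the final step.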
8.
A range of techniques for extracting actionable information from large databases, usually stored
in a data warehouse, and applying it to business models.
9.
A very large database in which data are gathered from disparate sources and converted into a
consistent format that can be used to support management decision-making and customer
relationship management.
10.
- Classification
- Estimation
- Prediction
- Affinity grouping
- Clustering
- Description and profiling
11.
- There are lots of data, very often on every case or every transaction, so sampling and the use of statistical inference are less important
- Data miners tend to ignore measurement error
- Almost all data used for data mining have time-dependency associated with them
- Data used for data mining are often incomplete or truncated
12.
- Structured queries
- Association rules
- Market basket analysis
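The association-rule and market basket techniques listed above can be illustrated with a few lines of Python. The transactions here are invented; the example simply computes the support and confidence of one rule.

```python
from itertools import combinations
from collections import Counter

# Hypothetical point-of-sale transactions (invented for illustration)
baskets = [
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"bread", "milk"},
    {"butter", "milk"},
    {"bread", "butter", "jam"},
]

n = len(baskets)
pair_counts = Counter()
item_counts = Counter()
for b in baskets:
    item_counts.update(b)
    pair_counts.update(combinations(sorted(b), 2))

# Rule "bread -> butter":
# support    = share of all baskets containing both items
# confidence = share of bread-baskets that also contain butter
support = pair_counts[("bread", "butter")] / n
confidence = pair_counts[("bread", "butter")] / item_counts["bread"]
```

Here three of five baskets contain both items (support 0.6), and three of the four bread-baskets also contain butter (confidence 0.75).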
13.
The incorporation of prior probabilities, which may be subjectively determined, into the current
data or evidence to reassess the probability that a hypothesis is true.
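A worked example of that updating, using Bayes' rule with invented numbers:

```python
# Bayes' rule: posterior is proportional to likelihood x prior
prior_h = 0.3                 # subjectively assessed prior that hypothesis H is true
p_evidence_given_h = 0.8      # probability of observing the evidence if H is true
p_evidence_given_not_h = 0.2  # probability of observing it if H is false

p_evidence = (p_evidence_given_h * prior_h
              + p_evidence_given_not_h * (1 - prior_h))
posterior_h = p_evidence_given_h * prior_h / p_evidence  # about 0.63
```

The evidence raises the probability that the hypothesis is true from 0.3 to roughly 0.63.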
14.
As configurations – combinations of characteristics such that only cases with identical sets of
characteristics are treated as ‘the same’.
15.
It shows the number of cases that possess each logically possible combination of potential causal characteristics and the outcome.
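A truth table of this kind is easy to build by counting cases per configuration. The cases and condition names below are invented for illustration.

```python
from collections import Counter

# Hypothetical cases: (has_loyalty_card, saw_advert) conditions -> repurchased outcome
cases = [
    ((1, 1), 1),
    ((1, 1), 1),
    ((1, 0), 1),
    ((0, 1), 0),
    ((0, 0), 0),
    ((1, 0), 1),
]

# Truth table: number of cases for each combination of conditions and outcome
table = Counter(cases)
for (conditions, outcome), count in sorted(table.items()):
    print(conditions, outcome, count)
```

Only cases with identical configurations are counted together, mirroring the treatment of cases in combinatorial logic.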
16.
“What characteristics are found in all instances where the outcome is present?”
17.
“Is the outcome always present when particular characteristics or combinations of characteristics arise?”
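Both questions translate directly into checks over a set of cases. The cases and characteristic labels below are invented for illustration.

```python
# Hypothetical cases: set of characteristics present, plus whether the outcome occurred
cases = [
    ({"A", "B"}, True),
    ({"A"}, True),
    ({"A", "C"}, True),
    ({"B"}, False),
    ({"C"}, False),
]

# Necessary: is the characteristic found in ALL instances where the outcome is present?
def necessary(char):
    return all(char in chars for chars, outcome in cases if outcome)

# Sufficient: is the outcome ALWAYS present when the characteristic arises?
def sufficient(char):
    return all(outcome for chars, outcome in cases if char in chars)
```

In this toy data, characteristic A is both necessary and sufficient for the outcome, while B is neither.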
18.
- It is limited to binary variables, that is, ‘crisp’ sets of absence-presence characteristics
- It can cope with only a limited number of variables in one ‘pass’ at the data: with more than about 12 variables the number of combinations and groupings gets very large, so that, for example, for 15 variables there are 3^15 − 1, or over 14 million, groupings
- There needs to be a clear outcome or event that is being investigated, and the key variables in its possible explanation need to be known and understood
- If the cases are a random sample from a wider population, there is no procedure for testing the statistical significance of the results
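The combinatorial explosion is simple arithmetic: each condition can be present, absent, or left out of a grouping, giving 3^k − 1 non-empty groupings for k variables.

```python
# Each of k conditions can be present, absent, or excluded from a grouping,
# so one pass must consider 3**k - 1 non-empty groupings.
def qca_groupings(k):
    return 3 ** k - 1

print(qca_groupings(12))  # 531,440
print(qca_groupings(15))  # 14,348,906 -- over 14 million
```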
19.
They extend crisp sets by permitting membership scores in the interval between 0 and 1.
20.
By taking binary categories and overlaying them with carefully calibrated measures of the extent
to which cases are ‘in’ or ‘out’ of a set (for example, a ‘satisfied’ customer) or, for continuous
metric scales, overlaying the scale with conceptually appropriate criteria of what ‘full
membership’, ‘partial membership’ and ‘non-membership’ of a set entails (for example, how many
units of alcohol per week classify a person as a ‘heavy’ drinker).
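As a sketch of such a calibration, the function below maps weekly alcohol units onto membership of the ‘heavy drinker’ set. The anchor points (10 units for non-membership, 30 for full membership) and the simple linear interpolation between them are invented for illustration; in practice the calibration criteria must be conceptually justified.

```python
# Linear calibration of weekly alcohol units into fuzzy membership of the
# 'heavy drinker' set. Anchors are hypothetical: <=10 units = non-member (0.0),
# >=30 units = full member (1.0), partial membership in between.
def heavy_drinker_membership(units, non_member=10.0, full_member=30.0):
    if units <= non_member:
        return 0.0
    if units >= full_member:
        return 1.0
    return (units - non_member) / (full_member - non_member)
```

For example, someone drinking 20 units a week would receive a membership score of 0.5, i.e. partial membership of the set.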
21.
- Analyzing the right problem
- Understanding the big information picture
- Compensating for imperfect data
- Developing an analysis strategy
- Establishing interpretation boundaries
- Applying knowledge filters
- Reframing the data
- Integrating the evidence and telling the story
- Decision-facilitation
- Completing the feedback loop