Managerial Economics/Demand estimation
Firms often wish to predict how demand for their good or service will change in the future. Demand estimation is any means of modeling how consumer behavior changes in response to changes in the price of the product, consumer income, or any other variable that affects demand. In practice, demand functions for a specific market must be estimated using empirical data. Demand estimation provides information about the prices and respective quantities that consumers are willing to purchase. Forecasting based on it helps managers avoid sunk costs and ensure growth is accounted for. There are five key methods, outlined below, which can be used to estimate demand.
Challenges to Demand Estimation[edit | edit source]
Ceteris Paribus[edit | edit source]
Ceteris paribus is the most fundamental assumption affecting managerial economics. When managers are determining the most profitable pricing strategy for a product, they face difficulty when equipped only with information about current purchasing volumes. At any given time, the quantity sold is observed at one particular price point when, in reality, there are multiple combinations of price and quantity which lie along the demand curve. This presents an issue when determining a consumer's maximum willingness to pay and hence what posted price to decide on. Goods which are less elastic, such as necessity goods (food, medicine, petrol, etc.) and goods that are addictive (e.g. alcohol), may have the potential for higher prices, which may currently be unrealised. Additionally, when observing trends over time, different price-quantity combinations may represent changes in the firm's production techniques, different input prices or supplier competition, which reflect a shift in the supply curve as opposed to changes in the market demand curve.
Therefore, when estimating changes in market price and quantity, ceteris paribus - all else being equal - is often assumed. That is to say, only the effect of a change in one particular variable, such as price or advertising, is observed, in order to isolate its impact on consumer demand. In reality, however, this rarely provides accurate results, as there are several limitations to this method:
- Selection Bias - This occurs when the data collected on price and quantity are not representative of the market a firm is acting in. It is the result of an improper selection of the sample group. An example would be comparing two different markets: a manager may incorrectly apply the findings of a price-quantity change in one market to the one they are competing in, when in reality the two markets differ for reasons such as different elasticity to a price change or different consumer preferences.
- Unobserved Variables - This is the result of failing to consider the number of consumers that did not buy at a certain price. The willingness to pay of consumers who do not purchase a good is generally not accounted for.
- Measurement Error - This often occurs in larger markets where the observed price and quantity are not an accurate representation of the demand curve, due to noise (inaccurate information) or a lack of data from certain firms or in certain periods.
These are all threats to internal validity - the firm's own ability to calculate the demand curve accurately.
Managers also need to consider how closely the characteristics of the market they observe reflect those of the actual market the firm is competing in. Differences in population, consumer preferences, income, and market competition all need to be scrutinised, and can lead to external validity issues - the extent to which a firm can apply the conclusions of a study outside of its context. Determining whether these conclusions are representative often requires further studies that vary in context.
Additionally, it is important to evaluate the validity of analysis across markets when determining whether variables are endogenous or exogenous. Correlation between variables does not necessarily equal causation when observing trends between markets, and this can skew managers' decision making. This may be a case of simultaneity - when trends occur concurrently with one another - or of omitted variable bias, where a statistical model leaves out one or more key variables.
To overcome these errors, managers can estimate demand by conducting a study with one or more of the 'Furious Five': Randomised Trials/Laboratory Experiments, Regression Analysis, Instrumental Variables, Difference-in-Differences, and Regression Discontinuity Design.
Ceteris Paribus and Econometrics[edit | edit source]
The difference between ceteris paribus in economic theory and in econometrics is that in economic theory it is imposed as an assumption, whereas in econometrics it is used as a thought experiment that facilitates the interpretation of estimation results. The Law of Demand takes ceteris paribus as its main assumption: quantity demanded depends negatively on price, ceteris paribus (as price increases, quantity demanded decreases). The ceteris paribus assumption is applied straightforwardly in theoretical models, but its impact is significant for empirical models and estimation. By assuming ceteris paribus in an empirical model, important variables that affect the model can be ignored, which can change the model's results. Because of this, it is important to identify exogenous variables in econometrics. An exogenous variable is determined outside the model.
Market Experiments[edit | edit source]
Endogeneity[edit | edit source]
Endogeneity is a statistical problem where an explanatory variable is correlated with the error term of the model being tested. Common sources include:
- Simultaneity: where an explanatory variable is jointly determined with the dependent variable. That is, X causes Y, but Y also causes X.
For example, X could be smoking and Y depression: it is difficult to determine which came first for a given person - the depression or the smoking.
- Omitted Variable Bias: Occurs when there are uncontrolled confounding variables - variables that are correlated with both an independent variable and the dependent variable, but are not captured in the model tested by the experiment.
- Measurement Error: when there is measurement error in an explanatory variable (caused by measurement noise in an independent variable or systematic error)
As a result, variables of interest may be influenced by endogenous factors.
Note: correlation does not mean causation. An example of this would be if the number of sunglasses sold were strongly correlated with ice cream sales. While the two are correlated, there is obviously no causation between the two outputs; both are instead driven by a common cause, such as sunny weather, shared by the customers of the sunglasses shop and the ice cream shop. This outlines that correlation does not imply causation.
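The point can be illustrated with a short simulation - a minimal sketch in Python, where all numbers are hypothetical and temperature is assumed to be the common cause of both sales series:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: daily temperature drives BOTH sunglasses and
# ice cream sales; neither causes the other.
temperature = rng.normal(25, 5, size=500)
sunglasses = 3.0 * temperature + rng.normal(0, 5, size=500)
ice_cream = 2.0 * temperature + rng.normal(0, 5, size=500)

# The two sales series are strongly correlated...
r = np.corrcoef(sunglasses, ice_cream)[0, 1]

# ...but controlling for the confounder (temperature) removes the
# association: regress each on temperature and correlate the residuals.
resid_s = sunglasses - np.polyval(np.polyfit(temperature, sunglasses, 1), temperature)
resid_i = ice_cream - np.polyval(np.polyfit(temperature, ice_cream, 1), temperature)
r_partial = np.corrcoef(resid_s, resid_i)[0, 1]

print(f"raw correlation:     {r:.2f}")       # strong
print(f"partial correlation: {r_partial:.2f}")  # near zero
```

Once the common cause is held fixed, the apparent relationship between the two sales series disappears.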
Laboratory experiment[edit | edit source]
These experiments seek to test how consumers react to changes in variables in the demand function in a hypothetical situation. Laboratory experiments allow all variables to be controlled or adjusted at the experimenter's discretion. For example, consumers are given a limited amount of money and encouraged to decide how to spend it on goods, at prices which are changed by the experimenter. From this information the researcher is able to compare the relative demands for products and what these products are worth to consumers. These experiments can be used to test consumer behavior in many ways and don't necessarily have to concern the demand for a particular product: a researcher may want to test consumer behavior while interacting with a prototype checkout system, for example, to see where the pain points will be when the system goes live.
Test marketing[edit | edit source]
Test marketing involves using real markets in different locations to test how consumers react to changes in variables in the demand function. This type of experiment allows observation of actual consumer spending, leading to more reliable results. It is most commonly used in an online environment, where firms run an advertisement and adjust variables such as audience, interests or location to observe how each set of consumers reacts. Firms can then readjust or refocus their campaigns accordingly. Amazon's online store is famous for continuously running this type of experiment on its customers to optimize their experience with its services.
Advantages of market experiments[edit | edit source]
- Control of the demand variables - the experimenter can observe consumer responses to a change in a single variable, keeping other factors constant. Each variable can be controlled on its own or in different combinations to study possible results for a product, theory or idea.
- Counterfactual analysis - The experimenter can use random changes in demand variables to compare observed consumer reactions to their reactions in the absence of the intervention.
- Replicability - Due to the high internal validity of experiments, replicating consumer responses to a change in a demand variable is highly likely to succeed. By contrast, happenstance field data are hard to replicate, due to undetected variables that change over time and place and abstruse data-collection procedures.
- Unlimited subjects or industries - Researchers can choose any industry or topic for study. Market experiments can be used in a variety of situations and industries. Moreover, researchers can combine research methods according to specific characteristics: some experimental methods are more suitable for one type of population than for others, so the researcher needs to choose another method for that specific population. This allows more accurate information to be obtained.
Disadvantages of market experiments[edit | edit source]
- Less control - unlike laboratory experiments, test marketing allows extraneous variables (variables that are not being tested but influence the results) to exist, which can skew results and lower control. Such variables could be personal reasons affecting consumer spending (which may cause some consumers to be lost) or weather, which affects market goods (such as fruits and vegetables). Customers lost at this stage may be difficult to recover.
- Limited market segmentation - Firms can only observe limited variations in the variables across different markets. When conducting market experiments, it can be difficult for researchers to decide correctly where one market ends and another begins.
- High costs - Market experiments can be expensive, especially if multiple markets are being tested. Costs can include paying consumers in laboratory experiments or hiring employees during test marketing. Other costs can include producing beta products and testing them under market conditions, advertising, distribution etc.
- Time costs - Experiments can require a long period of time to reveal a reliable indication of consumer behavior, especially if the effects that they are testing are long-term effects on behavior.
- Ethical concerns - consumers do not appreciate being unfavourably discriminated against. One example is an experimental medicine to cure Alzheimer's disease: to test the drug's effectiveness, patients are split into two groups, with the treatment group provided with the medicine and the control group denied it. Another example is simply testing different prices. A firm may test different prices for different customers, particularly in the age of the online store; if customers were then to compare the different prices that they are asked to pay, there could be significant backlash against the firm.
Experimental design[edit | edit source]
Online platforms are often used to conduct experimental designs, for example, a company may use Facebook to conduct an experiment based on their social media advertisements.
- The experimenter is able to control key variables by either keeping them constant or varying them.
- Allows for randomised treatments to eliminate biases.
- Replication - companies on Facebook can repeat the experiments with new samples to confirm the results.
Between subjects design[edit | edit source]
Each subject of the experiment participates in one and only one experimental treatment (condition of the experiment). For example, two students are asked to score a sample essay from A* to D. The two participants differ only in that one student is told the essay was marked by the teacher as A* and the other student is told that the essay was marked by the teacher as D. The independent variable is the mark given by the teacher and has two values, A* and D. This is a between-subjects experiment because different participants were used for the two different values of the independent variable (mark A* and mark D).
Within subjects design[edit | edit source]
Each subject of the experiment participates in all experimental treatments (conditions of the experiment). This is often called a repeated-measures study. The results from before the treatment is applied are compared to the results after the treatment is applied.
This, however, may cause the hysteresis effect, which occurs when the prior treatment of subjects influences their response to future treatments. This could occur because the subject has become aware of the experiment and adjusts their actions accordingly.
An example of a within-subjects design experiment is one testing the effectiveness of four types of painkiller (namely A, B, C and D). In this experiment, the independent variable is the painkiller type and its levels are A, B, C and D. One day the subject is given painkiller A and the experimenter measures the time it takes for the painkiller to be effective. Another day the same subject is given painkiller B and the experimenter measures the time it takes to be effective, and so on. Hence, the same subjects test all levels of the independent variable.
Challenges of experimental design[edit | edit source]
Firstly, because experiments are controlled and their accuracy depends on the way they are conducted, a number of intentional and unintentional biases may be introduced by the subjects or the experimenter. Secondly, the resulting data are usually the private information of the experimenter and hardly ever in the public domain. Lastly, there are ethical concerns involved in discriminating between people for the purpose of grouping them.
Ensuring unbiasedness[edit | edit source]
- Double-blind procedure - neither the participants nor the experimenters know who is receiving which particular treatment. This reduces experimenter prejudices and unintentional cues. A double-blind procedure is usually done in clinical settings.
An example of a double-blind procedure is the trialling of a new drug, where neither the doctor nor the patients know which drug is the real one and which is the placebo.
- Single-blind procedure - the participants do not know who is receiving a particular treatment, but the experimenters do. Certain information that could introduce bias is withheld. This primarily acts to prevent subjects from exhibiting the Placebo Effect.
An example of a single-blind procedure is a researcher trying to determine a performance-enhancing drug's effects. If the participants of the experiment knew they were taking performance-enhancing drugs, this might affect their perception and lead them to increase their performance levels.
Avoiding confounds[edit | edit source]
- Between subjects who belong to the control group (baseline group which does not receive the treatment) and the treatment group there should be only one variable that is different. This will allow differences between the two groups to be attributed to the change in that one particular variable.
- If two variables are changed, it is impossible to know whether the effect was caused by one variable or the other. Also, if no effect is found, it might be because the opposite effects of the two variables cancel each other out.
Randomised Trial[edit | edit source]
This type of experiment or trial can be done to eliminate certain biases that may otherwise be present, such as selection bias and allocation bias. Randomised trials are done by allocating the test subjects randomly into two or more groups. One group, called the 'control group', does not have any intervention or 'testing' conducted on it, while the other group undergoes the experiment. At the end of the trial, the data from the different groups are compared. An example is the testing of pharmaceutical drugs, where subjects are randomly assigned to two groups: one group takes the drug and the other (control) group takes no drug or a placebo, with both groups being unaware of which group they are in. Both are tested and have their results compared. This better evaluates the effect of the drug, as there is no selection bias and the results have more power. From such trials, a decentralised managerial approach can be employed; this approach is effective for reducing information overload and improving a firm's decision-making abilities.
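A minimal sketch of how random assignment recovers a treatment effect (Python; the data and the assumed true effect of +2 units are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000

# Hypothetical outcome for each subject without any intervention.
baseline = rng.normal(10, 3, size=n)

# Randomly allocate subjects to a treatment and a control group.
shuffled = rng.permutation(n)
treat_ids, control_ids = shuffled[: n // 2], shuffled[n // 2:]

# Assume the true treatment effect is +2 units.
outcome = baseline.copy()
outcome[treat_ids] += 2.0

# Because assignment is random, the two groups are comparable, so the
# difference in mean outcomes estimates the treatment effect.
effect = outcome[treat_ids].mean() - outcome[control_ids].mean()
print(f"estimated treatment effect: {effect:.2f}")  # close to 2
```

Randomisation removes selection bias because no characteristic of a subject influences which group they end up in.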
Linear Regression[edit | edit source]
A linear regression analysis is a statistical method to analyse the relationship between two variables. A regression measures the degree to which a change in some dependent variable is explained by one or more independent variables. It fits a linear equation to observed data and measures how well the equation fits, i.e. how much of the variation in the dependent variable can be explained by the independent variables. A linear regression is a potential means to estimate how demand (the dependent variable) changes with any independent variable. However, modelling how demand changes with respect to price requires comprehensive information regarding all agents' willingness to pay and willingness to sell; without it, linear regression is of limited use for measuring demand. An example of a linear regression is predicting an outcome of interest such as sales: using advertising as a predictor variable, a linear regression could estimate the amount by which sales are expected to increase with every $ of increased advertising.
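The advertising example can be sketched in Python; the data and coefficients below are hypothetical, chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: weekly advertising spend ($) and sales (units).
advertising = rng.uniform(0, 1000, size=200)
sales = 500 + 0.8 * advertising + rng.normal(0, 50, size=200)

# Fit sales = intercept + slope * advertising by ordinary least squares.
slope, intercept = np.polyfit(advertising, sales, 1)
print(f"each extra $1 of advertising ~ {slope:.2f} extra units sold")

# Goodness of fit (R^2): share of sales variation explained by advertising.
fitted = intercept + slope * advertising
r2 = 1 - np.sum((sales - fitted) ** 2) / np.sum((sales - sales.mean()) ** 2)
print(f"R^2 = {r2:.2f}")
```

The slope is the marginal effect of advertising on sales; the caveat in the text still applies - without exogenous variation in price, such a fit describes association, not the demand curve itself.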
Instrumental variables[edit | edit source]
Instrumental variables can be applied when discrepancies, or 'error terms', are correlated with the explanatory variables. Instrumental variables are used to identify unexpected behaviour between variables and hidden (unobserved) correlation, allowing the true correlation to be seen, which helps to mitigate threats to internal validity. The method replaces the actual values of the explanatory variable with predicted values that are related to the actual explanatory variable but uncorrelated with the error term. For example, if demand were an explanatory variable, it would be replaced with a determinant of demand. Instrumental variables are best used when you want to identify long-term trends.
For example: bad weather affects the supply of fish but does not affect the demand for fish, so price variation caused by the weather can be used to trace out the demand curve.
Good examples of IV's include:
- Natural experiments (Sudden Policy Changes, Sudden Technological changes). This type of experiment needs to be unanticipated.
- Experiments of Nature (floods, rainfall - things outside of mankind's control)
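The fish-market example can be sketched with a hand-rolled two-stage least squares in Python. All numbers here are hypothetical, and stormy weather is assumed to shift supply only:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000

# Hypothetical fish-market data. Stormy weather (the instrument) shifts
# supply and hence the price, but does not shift demand directly.
stormy = rng.binomial(1, 0.5, size=n).astype(float)
demand_shock = rng.normal(0, 1, size=n)

# Price is endogenous: it responds to the weather AND the demand shock.
price = 10 + 2.0 * stormy + 1.0 * demand_shock + rng.normal(0, 1, size=n)
# True demand curve: quantity = 100 - 3*price + 5*demand_shock.
quantity = 100 - 3.0 * price + 5.0 * demand_shock

# Naive OLS is biased because price is correlated with the demand shock.
ols_slope = np.cov(price, quantity)[0, 1] / np.var(price, ddof=1)

# Two-stage least squares by hand:
# stage 1 - predict price from the instrument (weather);
fitted_price = np.polyval(np.polyfit(stormy, price, 1), stormy)
# stage 2 - regress quantity on the predicted (exogenous) part of price.
iv_slope = np.cov(fitted_price, quantity)[0, 1] / np.var(fitted_price, ddof=1)

print(f"OLS slope: {ols_slope:.2f} (biased)")
print(f"IV slope:  {iv_slope:.2f} (close to the true -3)")
```

Because the weather moves price but not demand, the second-stage slope isolates the demand response to price.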
Difference-in-Difference (DID)[edit | edit source]
Difference-in-difference compares the outcomes of two groups over two different time periods. It relies on the parallel trend assumption: any existing or future trends will affect both groups equally and simultaneously. The end goal is to estimate the counterfactual - what the outcome would have been for the treated group had it not undergone treatment. To achieve this, the outcomes of both the treated and untreated groups are observed in the period after treatment (Varian, 2016).
Group A is exposed to no treatments in the first time period but is exposed to treatments in the second time period.
Group B Is exposed to no treatments at all to act as a control group.
In order to remove biases, the following equation is applied:
DID estimate = [Average gain in Group A] - [Average gain in Group B]
             = [(Group A year 2 - Group A year 1)] - [(Group B year 2 - Group B year 1)]
Doing this removes any biases that may arise from permanent differences between the two groups, or biases over time. Furthermore, it removes biases from comparisons over time in the treatment group that could be the result of trends.
However, DID relies on the parallel trend assumption holding true. The parallel trend assumption states that differences between the treatment and control groups (unrelated to the treatment) are constant over time. If this assumption fails, the estimators may be biased.
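A minimal numerical sketch of the difference-in-differences formula (Python; the group averages are hypothetical):

```python
# Hypothetical average outcomes for each group in each period.
group_a_year1, group_a_year2 = 20.0, 27.0   # Group A: treated in year 2
group_b_year1, group_b_year2 = 22.0, 25.0   # Group B: never treated

# Gain in each group over time.
gain_a = group_a_year2 - group_a_year1      # trend + treatment effect
gain_b = group_b_year2 - group_b_year1      # trend only

# Under the parallel trend assumption, the control group's gain is the
# trend the treated group would have experienced without treatment.
did_estimate = gain_a - gain_b
print(f"estimated treatment effect: {did_estimate:.1f}")  # 4.0
```

Subtracting the control group's gain strips out the common trend, leaving only the effect of the treatment.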
Example:[edit | edit source]
A famous example of difference-in-difference experimental design was conducted by Card and Krueger in 1994 on the employment effects of minimum wage increases in the United States. To account for omitted variables, employment in the fast-food industry was measured both in a state where the minimum wage was increased and in one where there was no change at all. In April 1992, the minimum wage in New Jersey rose from $4.25 to $5.05, and fast-food employment was measured in New Jersey and Pennsylvania in February (before the treatment) and November (after the treatment). By measuring employment changes in both states, biases between the treatment and control groups that could result from uncontrollable permanent differences or trends are accounted for. Common omitted variables in this particular study could be changes in macroeconomic conditions, weather and health. Thus, assuming both states follow parallel trends, the change in Pennsylvania's employment is essentially the change New Jersey would have experienced had there been no wage increase. The study found that, in fact, employment increased after the minimum wage was raised.
Regression Discontinuity Design[edit | edit source]
Regression discontinuity design is used to measure the impact of a treatment that is assigned on the basis of an eligibility index. A cut-off point has to be clearly defined so that the difference in results can be measured between those just above and just below it, within a chosen bandwidth. The bandwidth identifies which participants are statistically comparable, so that differences in results can be attributed to the applied treatment.
- Set Bandwidth where individuals are statistically similar
- Compare on the observable characteristic
- Associate any difference to the difference in the observable characteristic
Suppose that, to combat unemployment, the government is set to roll out a training program to support the job search of those unemployed under the age of 25. To deduce how effective the program is in increasing their job prospects, you must:
- Split participants into two groups: those slightly below the age of 25 (Group A) and those slightly above the age of 25 (Group B). On average these two groups are very similar, apart from participation in the program.
- Group A is allowed to participate in the program while Group B is excluded
- The difference in the unemployment rate can, therefore, be associated with the effectiveness of the program.
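The steps above can be sketched in Python. The data are hypothetical, and the program is assumed to cut the probability of remaining unemployed by 10 percentage points:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20000

# Hypothetical job-training example: only those under the cutoff age
# of 25 receive the program.
age = rng.uniform(18, 32, size=n)
treated = age < 25
# Baseline unemployment probability falls slowly with age; the program
# (for under-25s) is assumed to cut it by 10 percentage points.
p_unemployed = 0.40 - 0.005 * (age - 18) - 0.10 * treated
unemployed = rng.random(n) < p_unemployed

# Compare only individuals within a narrow bandwidth of the cutoff,
# where the two groups are statistically similar apart from treatment.
bandwidth = 1.0
group_a = unemployed[(age >= 25 - bandwidth) & (age < 25)]   # treated
group_b = unemployed[(age >= 25) & (age < 25 + bandwidth)]   # untreated

effect = group_a.mean() - group_b.mean()
print(f"estimated effect at the cutoff: {effect:.3f}")  # around -0.10
```

Narrowing the bandwidth makes the two groups more comparable but leaves fewer observations, which is the central trade-off of RDD in practice.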
Common assignment variables:
- Exam scores: a cut-off score can be set as, for example, 50%, then students who just fail (49%) and who just pass (50%) the course will be two separate groups.
- Age: for example, have the school date as a cut-off birthdate, so children who were born before the school date (30th June) and who were born right after it (1st July) will be in two groups.
- Geographic location: for example, Tweed Heads (NSW) and Coolangatta (QLD) are just one street apart and geographically very similar, but when the law in NSW changes, the effect on businesses in Tweed Heads can be very different from that on businesses in Coolangatta.
- Employment/unemployment duration: set working hours as a cut-off point. For example, full-time workers who work exactly 38 hours a week and part-time workers who work slightly less than 38 hours, say, 37 hours a week, can be set as two groups.
Note, to enable fair randomness of treatment assignment,
- assignment variables should be random, and
- treatment status cannot be perfectly manipulated (e.g. for kids born in June, parents might hold them back and wait another year before starting school, or give them extra schooling support - in these cases, treatment status is affected)
- Too few variables might lead to estimation bias and
- Too many variables might lead to a loss in efficiency
Main points from 'Regression Discontinuity Designs in Economics' paper[edit | edit source]
- RDD can be invalid if individuals can manipulate the assignment variable
When there is a reward for achieving above the cutoff point, participants will be incentivised to improve their performance. If this occurs, then this may lead to a difference between participants who achieved just below the cutoff and those who achieved just above it. This would, therefore, invalidate the RDD experiment.
- If individuals can imprecisely manipulate the assignment variable, the results around the threshold value do not reflect RDD
The less control that participants have over the assignment variable, the more 'random' the results of the RDD will be for individuals around the cutoff point. The results around the threshold will begin to reflect a randomized experiment, rather than RDD.
- RDD can be analyzed and tested like random experiments
Following from point 2, if the results from RDD become more random around the cutoff point, then the assignment variables just below and above the threshold will have the same distribution.
- Graphical presentation of an RDD should not be the determining factor of whether the cutoffs affect the results
While visual representations of the data can help show basic relationships between variables and outcome, in the case of RDD it should not be used to determine effects due to manipulation of the presentation.
- Nonparametric estimation is not a 'solution' to issues raised by RDD
Analysts should rely more on parametric functions to analyse the results of the experiment rather than nonparametric functions. Due to the presence of biases, the two analyses should be compared and any conclusions should be consistent across both methods.
- Trendlines and other tests can rule out restrictive specifications for RDD
While some specifications are necessary to produce a regression model, lower-order polynomial trend lines can help identify which specifications are too restrictive and do not help with analysis.
Sharp vs. Fuzzy[edit | edit source]
RD designs can be either “sharp” or “fuzzy”. A sharp RD is where the discontinuity in the treatment at the cutoff is deterministic. In other words, if an observation is situated below the cutoff, treatment is impossible; if an observation is situated above the cutoff, treatment is received with certainty. On the other hand, a fuzzy RD is used when the discontinuity is not deterministic. An example of this would be where there is imperfect compliance. Crossing the cutoff point simply increases the probability of receiving the treatment, but does not guarantee it. Moreover, it may be possible for observations below the cutoff to receive the treatment as well. Because this may give rise to endogeneity, fuzzy RD may be implemented using an IV approach.
Stages Involved in Demand Estimation[edit | edit source]
The objective of demand estimation is to collect information that will enable the firm to predict the behavior of consumers in relation to the firm's products.
- Statement of a Theory or Hypothesis: The first step in demand estimation is understanding why consumers purchase particular goods. The motivation of individual consumers is a complex psychological matter involving the development of consumer preferences and tastes, habit formation, the desire to conform to social norms, etc. The objective for which the demand estimation is to be done must be clearly specified, and it must be decided before the forecasting process starts, as it will give direction to the whole research.
- Model Specification: Model specification refers to the determination of which independent variable should be included or excluded from a regression equation. This is not an easy decision- formal criteria are useful but not perfect.
- Too few variables → estimation bias
- Too many variables → efficiency loss
- Data Collection: Data collection is the process of gathering and measuring information on targeted variables in an established system, which in turn enables answers to relevant questions and helps to effectively evaluate outcomes.
- Estimation of Parameters: Estimation of parameters refers to the process of using sample data to estimate the parameters of the selected distribution.
- Checking Goodness of Fit: Correlation analysis examines the strength of the relationship, or goodness of fit. This refers to how closely the points fit the line, taking into consideration the units of measurement.
- Hypothesis Testing: Hypothesis testing is used to infer the result of a hypothesis performed on sample data from a larger population. The test tells analysts whether or not their primary hypothesis is supported.
- Forecasting: Forecasting means estimating a specific value of a variable, as opposed to estimating a whole relationship.
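Several of these stages - parameter estimation, goodness of fit, hypothesis testing and forecasting - can be sketched together in Python. The demand data and the true relationship below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical demand data: quantity sold at various posted prices.
price = rng.uniform(2, 10, size=100)
quantity = 200 - 12.0 * price + rng.normal(0, 8, size=100)

# Estimation of parameters: fit quantity = a + b*price by OLS.
b, a = np.polyfit(price, quantity, 1)

# Checking goodness of fit: R^2, how closely the points fit the line.
fitted = a + b * price
r2 = 1 - np.sum((quantity - fitted) ** 2) / np.sum((quantity - quantity.mean()) ** 2)

# Hypothesis testing (sketch): is the price coefficient distinguishable
# from zero? Compare b to its approximate standard error.
resid_var = np.sum((quantity - fitted) ** 2) / (len(price) - 2)
se_b = np.sqrt(resid_var / np.sum((price - price.mean()) ** 2))
t_stat = b / se_b

# Forecasting: predict the specific quantity demanded at a price of $7.
forecast = a + b * 7.0
print(f"slope={b:.1f}, R^2={r2:.2f}, t={t_stat:.1f}, forecast at $7: {forecast:.0f}")
```

The final line illustrates the distinction made above: estimation recovers the whole relationship, while forecasting plugs in one specific value.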
Generally, we can say that goods are purchased for the satisfaction or utility they yield, and the satisfaction from a particular good is generated by the various characteristics of that good.
The most dominant factor influencing consumers is the price of the product. There is an inverse relationship between demand and price, so the demand curve slopes down from left to right - this means that at a lower price, the market as a whole will buy a larger quantity.
Machine Learning Methods For Demand Estimation[edit | edit source]
While machine learning methods are gaining popularity, they are still considered new technology in most industries. Examples include LASSO, stepwise regression, forward stagewise regression, support vector machines, bagging and random forests.
Their advantages are superior goodness of fit, flexibility, ease of use, scalability, and open-sourced, archived software. Machine learning's main disadvantage is that, even though it can be good for prediction, it is unsuitable for understanding causality.
Stepwise regression begins with a set of demeaned covariates and an intercept as the base model. From the set of covariates, the model selects the variable with the highest correlation with the residual. That variable is added to the model, ordinary least squares (OLS) is estimated on the selected subset of covariates, and the procedure is repeated to find the next covariate with the highest correlation with the new residual. Stepwise regression continues until a series of nested models has been produced and no remaining covariate has a sufficiently high correlation with the residual.
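The selection loop just described can be sketched in Python. This is a minimal, hypothetical illustration; the stopping threshold and all variable names are assumptions for the sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 2000, 8

# Hypothetical demand data: only the first two of eight candidate
# covariates actually drive the outcome.
X = rng.normal(size=(n, k))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(0, 1, size=n)

# Forward stepwise selection: start from the intercept-only base model,
# repeatedly add the covariate most correlated with the residual, and
# re-estimate OLS on the selected subset each time.
Xd = X - X.mean(axis=0)                    # demeaned covariates
selected = []
residual = y - y.mean()                    # intercept-only residual
for _ in range(k):
    corrs = [0.0 if j in selected
             else abs(np.corrcoef(Xd[:, j], residual)[0, 1])
             for j in range(k)]
    best = int(np.argmax(corrs))
    if corrs[best] < 0.1:                  # no sufficiently high correlation
        break
    selected.append(best)
    Xs = np.column_stack([np.ones(n), Xd[:, selected]])
    coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    residual = y - Xs @ coef               # OLS residual for the next step

print("selected covariates (in order):", selected)
```

With these settings the loop picks out the two true drivers and then stops, because no remaining covariate is sufficiently correlated with the residual.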
Forward stagewise regression is a variant of stepwise regression that updates only one coefficient at each step, whereas stepwise regression re-estimates all the included coefficients at each step. The coefficient of the covariate with the highest correlation with the residual is incremented, and this process is repeated until there is no remaining correlation between the residual and the covariates.
Support vector machines (SVM) can be viewed as a penalized regression method whose objective combines a loss function with a tuning parameter that controls how errors are penalized. Typically, only a partial set of covariates is assigned a non-zero value, and errors that are sufficiently small are treated as zeros in the SVM regression.
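A minimal sketch of the loss just described, assuming the standard epsilon-insensitive form used in SVM regression, where the parameter `eps` sets the error size below which residuals are treated as zero:

```python
def epsilon_insensitive_loss(residual, eps=0.5):
    """SVM regression loss: zero for residuals inside the eps-tube,
    linear in the excess beyond it, so sufficiently small errors
    are treated as zeros."""
    return max(abs(residual) - eps, 0.0)

print(epsilon_insensitive_loss(0.2))  # 0.0 (error inside the tube is ignored)
print(epsilon_insensitive_loss(1.5))  # 1.0 (penalised beyond the tube)
```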
The LASSO is similar to SVM regression in that both are penalized regression models. Its tuning parameter controls how strongly additional regressors are penalized, and as a result the LASSO typically assigns zero weight to many of the covariates (Patrick Bajari, 2015).
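The zero-weighting behaviour can be illustrated with the soft-thresholding operator, which is the coordinate-wise LASSO solution under an orthonormal design; the coefficients and tuning parameter below are made-up values for illustration.

```python
import numpy as np

def soft_threshold(beta_ols, lam):
    """Coordinate-wise LASSO solution under an orthonormal design:
    shrink each OLS coefficient toward zero by `lam`, and set it to
    exactly zero when its magnitude is below `lam`."""
    return np.sign(beta_ols) * np.maximum(np.abs(beta_ols) - lam, 0.0)

# Made-up OLS coefficients: small ones get zero weight, large ones shrink.
ols_betas = np.array([2.5, -0.3, 0.1, -1.8])
print(soft_threshold(ols_betas, lam=0.5))  # zeros for the two small coefficients
```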
Prelaunch Demand Estimation[edit | edit source]
Accurate demand estimation is essential for new products and services to succeed. Several tools are useful when assessing demand for a product that does not yet exist, and when assessing the viability of a start-up company. These include focus groups, hypothetical approaches, and test marketing.
- Focus Groups - a focus group may assist a start-up in finding its relevant target market.
- Hypothetical Consumer Survey or Choice Experiment - ask participants to either state their product valuation or make hypothetical product choices (for example, a survey question such as "If you had these options, which one would you buy?")
- Test Marketing - Fully incentive-aligned choice experiments/selling product in trial markets to gather consumer choice data in real purchase environments.
- Crowdfunding (e.g. Kickstarter) - show the crowd, via a drawing or video, that a certain product will be produced, and invite consumers to pre-purchase something that does not yet exist, since the producer does not yet have the funds;
- Actual Product - produce the actual product, place it in the market, and observe the demand for it;
- Minimum Viable Product - the product is not fully built; it is a prototype that possesses only the main functions rather than full functionality.
Summary[edit | edit source]
The section below briefly summarises the key takeaways from this topic:
Demand Estimation: can be difficult because ceteris paribus is hard to achieve in reality. There is often selection bias, omitted-variable bias, and measurement error. It is incorrect to assume that a model estimated under one context allows for generality; further data samples need to be taken. Additionally, correlation does not imply causation.
Methods of Demand Estimation:
- Linear Regression: estimates demand as a function of a single control variable.
- Random Sample: takes a random sample from a population.
- Instrumental Variables (IV): uses a variable that is correlated with the explanatory variable of interest but uncorrelated with the error term, removing the effect of omitted variables. In the context of an exam question, it is generally a variable with only a weak direct correlation with the dependent variable.
- Difference in Differences (DID): observe two similar groups over two periods, treat one group at the end of period 1, and compare the groups' changes over period 2; influences common to both groups are differenced out.
- Regression Discontinuity Design (RDD): use an eligibility criterion to define a 'cut-off' and impose a bandwidth around the cut-off, comparing observations just above and just below it (e.g. age or GPA cut-offs).
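The difference-in-differences calculation can be sketched with made-up group averages (illustrative numbers only):

```python
def did_estimate(treat_pre, treat_post, control_pre, control_post):
    """Difference-in-differences: the change in the treated group's
    outcome minus the change in the control group's outcome, which
    nets out time trends common to both groups."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# Hypothetical average sales: the treated stores received the
# intervention between the two periods; the control stores did not.
effect = did_estimate(treat_pre=100, treat_post=130,
                      control_pre=90, control_post=100)
print(effect)  # 20
```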
Double-Blind Procedure: neither the participants nor the experimenter knows who is in the control and treatment groups.
Single-Blind: information is withheld from participants to prevent a placebo effect.
Steps for demand estimation:
1. Statement of a theory or hypothesis
2. Model Specification
a. Too few variables cause biases and too many variables result in efficiency loss
3. Estimation of parameters
4. Checking goodness of fit
5. Hypothesis testing
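Steps 2-4 can be illustrated with a log-log demand specification, in which the price elasticity of demand is the slope coefficient. All numbers below are simulated under an assumed true elasticity of -1.5, not real data.

```python
import numpy as np

# Step 2 (model specification): ln Q = a + e * ln P + noise,
# simulated with a true price elasticity e = -1.5 (an assumed value).
rng = np.random.default_rng(1)
price = rng.uniform(1.0, 10.0, size=500)
log_q = 5.0 - 1.5 * np.log(price) + rng.normal(scale=0.1, size=500)

# Step 3 (estimation of parameters): OLS on the log-log model.
X = np.column_stack([np.ones(500), np.log(price)])
beta, *_ = np.linalg.lstsq(X, log_q, rcond=None)

# Step 4 (checking goodness of fit): R-squared.
resid = log_q - X @ beta
r_squared = 1 - resid.var() / log_q.var()

print(round(beta[1], 2))   # estimated elasticity, close to -1.5
print(r_squared > 0.9)     # good fit on this simulated data
```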
Machine learning: desirable for goodness of fit and scalability; however, it is poor for understanding causality.
Estimating Demand for Something That Doesn’t Exist:
- Surveys and focus groups
- Test marketing
- Crowdfunding, actual product, minimum viable product
Reference/s[edit | edit source]
- Bierens, H. J. and Swanson, N. R. (2018). The Econometric Consequences of the Ceteris Paribus Condition in Economic Theory.
- Varian, H. R. (2016). Causal inference in economics and marketing. PNAS. Available at: https://www.pnas.org/content/pnas/113/27/7310.full.pdf [Accessed 21 Oct. 2019].
- Levitt, M., Thomas, C., Schoning, F., & Lovells, H. (2019). Cartel leniency in EU: overview. Retrieved from Thomson Reuters Practical Law: https://uk.practicallaw.thomsonreuters.com/0-517-4976?transitionType=Default&contextData=(sc.Default)&firstPage=true&bhcp=1
- Hill, S. (2016). Managerial Economics. London: Macmillan Education, Limited, pp.100-132.
- Png, I. (2016). Managerial economics. 4th ed. Abingdon, Oxon: Routledge, pp.17-35.
- Bajari, P., Nekipelov, D., Ryan, S. P. and Yang, M. (2015). Machine Learning Methods for Demand Estimation. American Economic Review, 105(5), 481-485.
- Cao, X. and Zhang, J. (2013). Prelaunch Demand Estimation [Online]. Available at: https://www.gsb.stanford.edu/sites/gsb/files/mkt_02_18_cao.pdf