Semiparametric Bayesian Estimation of Discrete Choice Models


Semiparametric Bayesian Estimation of Dynamic Discrete Choice Models

Author: Andriy Norets
Release: 2022


We propose a tractable semiparametric estimation method for dynamic discrete choice models. The distribution of additive utility shocks is modeled by location-scale mixtures of extreme value distributions with varying numbers of mixture components. Our approach exploits the analytical tractability of extreme value distributions and the flexibility of location-scale mixtures. We implement Bayesian inference using Hamiltonian Monte Carlo and an approximately optimal reversible jump algorithm. For a binary dynamic choice model, our approach delivers estimation results consistent with the previous literature. We also apply the proposed method to multinomial choice models, for which the previous literature does not provide tractable estimation methods in general settings without distributional assumptions on the utility shocks. In simulation experiments, we show that the standard dynamic logit model can deliver misleading results, especially about counterfactuals, when the shocks are not extreme value distributed. Our semiparametric approach delivers reliable inference in these settings. We develop theoretical results on approximation by location-scale mixtures in an appropriate distance, and on posterior concentration for the set-identified utility parameters and the distribution of the shocks.
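As a concrete illustration of the shock model described above, the density of a location-scale mixture of type-I extreme value (Gumbel) distributions can be evaluated directly. The weights, locations, and scales below are illustrative values, not estimates from the paper:

```python
import numpy as np

def gumbel_pdf(z):
    # Density of the standard type-I extreme value (Gumbel) distribution.
    return np.exp(-(z + np.exp(-z)))

def mixture_pdf(x, weights, locs, scales):
    # Location-scale mixture: f(x) = sum_k w_k / s_k * g((x - m_k) / s_k)
    x = np.atleast_1d(np.asarray(x, dtype=float))[:, None]
    z = (x - np.asarray(locs)) / np.asarray(scales)
    return (np.asarray(weights) / np.asarray(scales) * gumbel_pdf(z)).sum(axis=1)

# Illustrative three-component mixture (hypothetical parameter values).
w = np.array([0.5, 0.3, 0.2])
m = np.array([-1.0, 0.0, 2.0])
s = np.array([0.8, 1.0, 1.5])

grid = np.linspace(-15.0, 25.0, 4001)
dens = mixture_pdf(grid, w, m, s)
# Trapezoid check that the mixture density integrates to (about) 1.
mass = float(np.sum(0.5 * (dens[1:] + dens[:-1]) * np.diff(grid)))
```

Varying the number of components (and placing a prior over it, which is where a reversible jump step comes in) is what gives the mixture its flexibility, while each component retains the logit-style analytical tractability.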


Bayesian Estimation of Dynamic Discrete Choice Models

Author: Susumu Imai
Release: 2009


We propose a new methodology for structural estimation of infinite-horizon dynamic discrete choice models. We combine the dynamic programming (DP) solution algorithm with the Bayesian Markov chain Monte Carlo algorithm into a single algorithm that solves the DP problem and estimates the parameters simultaneously. As a result, the computational burden of estimating a dynamic model becomes comparable to that of a static model. Another feature of our algorithm is that, although the number of grid points on the state variable is small in each solution-estimation iteration, the number of effective grid points increases with the number of estimation iterations. This is how we help ease the "curse of dimensionality." We simulate and estimate several versions of a simple model of entry and exit to illustrate our methodology. We also prove that, under standard conditions, the parameter draws converge in probability to the true posterior distribution, regardless of the starting values.
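A minimal sketch of the single-loop idea for a toy infinite-horizon binary choice model with logit shocks: each iteration performs one Bellman update at the current parameter value, and continuation values are approximated by a kernel-weighted average over recently stored pseudo-value functions, so past solution work is reused rather than discarded. The model, discount factor, bandwidth, and window size are illustrative assumptions, not the paper's specification:

```python
import math

BETA = 0.9                       # assumed discount factor
EULER = 0.5772156649015329       # Euler-Mascheroni constant

def emax(v0, v1):
    # Expected maximum under i.i.d. type-I extreme value shocks
    # (the logit "inclusive value"), computed with the log-sum-exp trick.
    m = max(v0, v1)
    return m + math.log(math.exp(v0 - m) + math.exp(v1 - m)) + EULER

def approx_value(theta, history, bw=0.3):
    # Kernel-weighted average of stored (parameter, pseudo-value) pairs:
    # values computed at nearby past parameter draws stand in for a
    # full re-solution of the DP problem.
    if not history:
        return 0.0
    wsum = vsum = 0.0
    for th, v in history:
        wgt = math.exp(-0.5 * ((theta - th) / bw) ** 2)
        wsum += wgt
        vsum += wgt * v
    return vsum / wsum

def ijc_step(theta, history, window=25):
    # ONE Bellman update at the current parameter draw; the continuation
    # value comes from the most recently stored iterations.
    v_cont = approx_value(theta, history[-window:])
    v0 = BETA * v_cont             # action 0: zero flow utility
    v1 = theta + BETA * v_cont     # action 1: flow utility theta
    v_new = emax(v0, v1)
    history.append((theta, v_new))
    return v_new
```

Iterating `ijc_step` at a fixed theta converges to the fixed point of the Bellman operator; inside an actual MCMC run, theta changes every iteration and the kernel weights pick out stored values computed at nearby parameter draws.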


A Practitioner's Guide to Bayesian Estimation of Discrete Choice Dynamic Programming Models

Author: Andrew T. Ching
Release: 2012


This paper provides a step-by-step guide to estimating infinite-horizon discrete choice dynamic programming (DDP) models using a new Bayesian estimation algorithm (Imai, Jain, and Ching, Econometrica 77:1865-1899, 2009; hereafter IJC). In the conventional nested fixed point algorithm, most of the information obtained in past iterations remains unused in the current iteration. In contrast, the IJC algorithm extensively reuses the computational results from past iterations to help solve the DDP model at the current parameter values. Consequently, it has the potential to significantly alleviate the computational burden of estimating DDP models. To illustrate this estimation method, we use a simple dynamic store choice model in which stores offer "frequent-buyer" type reward programs. We show that the parameters of this model, including the discount factor, are well identified. Our Monte Carlo results demonstrate that the IJC method recovers the true parameter values of this model quite precisely. We also show that the IJC method can reduce estimation time significantly when estimating DDP models with unobserved heterogeneity, especially when the discount factor is close to 1.
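For contrast, a conventional nested fixed point inner loop re-solves the Bellman equation to convergence at every candidate parameter value and then throws that work away. The toy single-state binary logit model and discount factor below are illustrative assumptions, not the paper's store choice model:

```python
import math

EULER = 0.5772156649015329  # Euler-Mascheroni constant

def solve_dp(theta, beta=0.9, tol=1e-10, max_iter=10_000):
    # Nested-fixed-point style inner loop: iterate the Bellman operator
    # to convergence for a SINGLE candidate parameter value. In a full
    # estimation run this entire loop is repeated at every optimizer or
    # MCMC step, which is the repeated work the IJC algorithm avoids.
    v = 0.0
    for _ in range(max_iter):
        v0 = beta * v            # action 0: zero flow utility
        v1 = theta + beta * v    # action 1: flow utility theta
        m = max(v0, v1)
        v_new = m + math.log(math.exp(v0 - m) + math.exp(v1 - m)) + EULER
        if abs(v_new - v) < tol:
            return v_new
        v = v_new
    return v
```

The IJC algorithm replaces this inner loop with a single Bellman update per estimation iteration, recovering the continuation value from stored past iterations instead.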


Discrete Choice Methods with Simulation

Author: Kenneth Train
Publisher: Cambridge University Press
Total Pages: 346
Release: 2003-01-13
Genre: Business & Economics
ISBN: 9780521017152




Bayesian Estimation of Finite-Horizon Discrete Choice Dynamic Programming Models

Author: Masakazu Ishihara
Total Pages: 29
Release: 2016


We develop a Bayesian Markov chain Monte Carlo (MCMC) algorithm for estimating finite-horizon discrete choice dynamic programming (DDP) models. The proposed algorithm has the potential to reduce the computational burden significantly when some of the state variables are continuous. In a conventional approach to estimating such a finite-horizon DDP model, researchers achieve a reduction in estimation time by evaluating value functions at only a subset of state points and applying an interpolation method to approximate value functions at the remaining state points (e.g., Keane and Wolpin 1994). Although this approach has proven to be effective, the computational burden could still be high if the model has multiple continuous state variables or the number of periods in the time horizon is large. We propose a new estimation algorithm to reduce the computational burden for estimating this class of models. It extends the Bayesian MCMC algorithm for stationary infinite-horizon DDP models proposed by Imai, Jain and Ching (2009) (IJC). In our algorithm, we solve value functions at only one randomly chosen state point per time period, store those partially solved value functions period by period, and approximate expected value functions nonparametrically using the set of those partially solved value functions. We conduct Monte Carlo exercises and show that our algorithm is able to recover the true parameter values well. Finally, similar to IJC, our algorithm allows researchers to incorporate flexible unobserved heterogeneity, which is often computationally infeasible in the conventional two-step estimation approach (e.g., Hotz and Miller 1993; Aguirregabiria and Mira 2002).
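A rough sketch of the scheme just described, for a toy finite-horizon model with one continuous state: each iteration solves the value function at a single random state point per period, working backward from the terminal period, and expected value functions are approximated nonparametrically (here by Nadaraya-Watson kernel regression) over the stored partial solutions. The model, horizon, discount factor, and bandwidth are illustrative assumptions, not the paper's specification:

```python
import math
import random

EULER = 0.5772156649015329  # Euler-Mascheroni constant

def emax(v0, v1):
    # Logit expected maximum, via the log-sum-exp trick.
    m = max(v0, v1)
    return m + math.log(math.exp(v0 - m) + math.exp(v1 - m)) + EULER

def kernel_ev(x, points, bw=0.1):
    # Nonparametric approximation of next period's expected value
    # function from stored (state, value) pairs. A persistent state is
    # assumed, so the expectation reduces to evaluating V_{t+1} at x.
    if not points:
        return 0.0
    wsum = vsum = 0.0
    for xs, vs in points:
        wgt = math.exp(-0.5 * ((x - xs) / bw) ** 2)
        wsum += wgt
        vsum += wgt * vs
    return vsum / wsum

def one_iteration(theta, store, T=5, beta=0.9):
    # Solve the value function at ONE random state point per period,
    # backward from the terminal period, and append the partially solved
    # values to the period-by-period store for reuse in later iterations.
    for t in range(T - 1, -1, -1):
        x = random.random()                # state drawn from [0, 1]
        ev = 0.0 if t == T - 1 else kernel_ev(x, store[t + 1])
        v0 = beta * ev                     # action 0: zero flow utility
        v1 = theta * x + beta * ev         # action 1: flow utility theta*x
        store[t].append((x, emax(v0, v1)))
```

After many iterations, `store[t]` accumulates enough (state, value) pairs that `kernel_ev` recovers the period-t value function over the whole state space, without ever constructing a fixed grid or an explicit interpolation scheme.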