Bayesian Logit Regression with logit.bayes.

Use the logit regression model for binary dependent variables specified as a function of a set of explanatory variables. The model is estimated using a Gibbs sampler. For other models suitable for binary response variables, see Bayesian probit regression, maximum likelihood logit regression, and maximum likelihood probit regression.


z.out <- zelig(Y ~ X1 + X2, model = "logit.bayes", weights = w, data = mydata)
x.out <- setx(z.out)
s.out <- sim(z.out, x = x.out)

Additional Inputs

Use the following arguments to monitor the Markov chain:

  • burnin: number of initial MCMC iterations to be discarded (defaults to 1,000).

  • mcmc: number of MCMC iterations after burnin (defaults to 10,000).

  • thin: thinning interval for the Markov chain. Only every thin-th draw from the Markov chain is kept. The value of mcmc must be divisible by this value. The default value is 1.

  • verbose: defaults to FALSE. If TRUE, the progress of the sampler (every \(10\%\)) is printed to the screen.

  • seed: seed for the random number generator. The default is NA, which corresponds to a random seed of 12345.

  • beta.start: starting values for the Markov chain, either a scalar or vector with length equal to the number of estimated coefficients. The default is NA, such that the maximum likelihood estimates are used as the starting values.
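For instance, these monitoring arguments can be passed directly through zelig(). A minimal sketch, reusing the turnout example from below; the tuning values are illustrative, not recommendations:

```r
# Sketch: a longer burn-in, more iterations, keeping every 2nd draw,
# and a fixed seed for reproducibility (all values illustrative)
z.out <- zelig(vote ~ race + educate, model = "logit.bayes",
               data = turnout, burnin = 5000, mcmc = 20000,
               thin = 2, verbose = TRUE, seed = 12345)
```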

Use the following parameters to specify the model’s priors:

  • b0: prior mean for the coefficients, either a numeric vector or a scalar. If a scalar value, that value will be the prior mean for all the coefficients. The default is 0.

  • B0: prior precision parameter for the coefficients, either a square matrix (with the dimensions equal to the number of the coefficients) or a scalar. If a scalar value, that value times an identity matrix will be the prior precision parameter. The default is 0, which leads to an improper prior.
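As a sketch, an informative prior can be specified by passing b0 and B0 together; the values below are illustrative only:

```r
# Prior mean 0 and prior precision 0.1 for every coefficient,
# i.e. independent Normal(0, 10) priors on each element of beta
z.out <- zelig(vote ~ race + educate, model = "logit.bayes",
               data = turnout, b0 = 0, B0 = 0.1)
```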

Use the following arguments to specify optional output for the model:

  • bayes.resid: defaults to FALSE. If TRUE, the latent Bayesian residuals for all observations are returned. Alternatively, users can specify a vector of observations for which the latent residuals should be returned.
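For example (a sketch; the observation numbers are arbitrary), the latent residuals for the first ten observations can be requested with:

```r
# Return posterior draws of the latent residuals for observations 1-10
z.out <- zelig(vote ~ race + educate, model = "logit.bayes",
               data = turnout, bayes.resid = 1:10)
```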

Zelig users may wish to refer to help(MCMClogit) for more information.


Basic Example

Attaching the sample dataset:

data(turnout)

Estimating the logit regression using logit.bayes:

z.out <- zelig(vote ~ race + educate, model = "logit.bayes",
               data = turnout, verbose = FALSE)
## How to cite this model in Zelig:
##   Ben Goodrich, and Ying Lu. 2013.
##   logit-bayes: Bayesian Logistic Regression for Dichotomous Dependent Variables
##   in Christine Choirat, Christopher Gandrud, James Honaker, Kosuke Imai, Gary King, and Olivia Lau,
##   "Zelig: Everyone's Statistical Software," http://zeligproject.org/

You can check for convergence before summarizing the estimates with three diagnostic tests; see the section Diagnostics for Zelig Models for examples of the output with interpretation. Summarize the posterior draws with:

summary(z.out)

## Model: 
## Iterations = 1001:11000
## Thinning interval = 1 
## Number of chains = 1 
## Sample size per chain = 10000 
## 1. Empirical mean and standard deviation for each variable,
##    plus standard error of the mean:
##                Mean      SD  Naive SE Time-series SE
## (Intercept) -1.2196 0.21972 0.0021972      0.0073115
## racewhite    0.5078 0.13934 0.0013934      0.0047843
## educate      0.1606 0.01687 0.0001687      0.0005484
## 2. Quantiles for each variable:
##                2.5%     25%     50%     75%   97.5%
## (Intercept) -1.6679 -1.3632 -1.2134 -1.0758 -0.7968
## racewhite    0.2326  0.4139  0.5071  0.5978  0.7889
## educate      0.1289  0.1490  0.1599  0.1719  0.1948
## Next step: Use 'setx' method
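The three diagnostic tests mentioned above come from the coda package. A sketch, assuming coda is installed and that the posterior draws are stored in z.out$coefficients (as described under Output Values):

```r
library(coda)                     # convergence diagnostics for MCMC output
geweke.diag(z.out$coefficients)   # Geweke z-scores for mean stationarity
heidel.diag(z.out$coefficients)   # Heidelberger-Welch stationarity test
raftery.diag(z.out$coefficients)  # Raftery-Lewis run-length diagnostic
```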

Setting values for the explanatory variables to their sample averages:

x.out <- setx(z.out)

Simulating quantities of interest from the posterior distribution given x.out:

s.out1 <- sim(z.out, x = x.out)
##  sim x :
##  -----
## ev
##           mean         sd       50%      2.5%     97.5%
## [1,] 0.7729175 0.01057453 0.7731894 0.7518301 0.7937312
## pv
##          0     1
## [1,] 2.259 7.741

Simulating First Differences

Estimating the first difference (and risk ratio) in an individual’s probability of voting when education is set to be low (25th percentile) versus high (75th percentile) while all the other variables are held at their default values:

x.high <- setx(z.out, educate = quantile(turnout$educate, prob = 0.75))
x.low <- setx(z.out, educate = quantile(turnout$educate, prob = 0.25))
s.out2 <- sim(z.out, x = x.high, x1 = x.low)
##  sim x :
##  -----
## ev
##           mean         sd       50%    2.5%     97.5%
## [1,] 0.8226728 0.01062264 0.8227271 0.80211 0.8443141
## pv
##          0     1
## [1,] 1.792 8.208
##  sim x1 :
##  -----
## ev
##           mean         sd       50%      2.5%     97.5%
## [1,] 0.7095318 0.01331671 0.7097369 0.6823693 0.7350053
## pv
##          0     1
## [1,] 2.951 7.049
## fd
##           mean         sd        50%       2.5%       97.5%
## [1,] -0.113141 0.01189608 -0.1123246 -0.1377785 -0.09033748


Let \(Y_{i}\) be the binary dependent variable for observation \(i\) which takes the value of either 0 or 1.

  • The stochastic component is given by

\[ \begin{aligned} Y_{i} & \sim & \textrm{Bernoulli}(\pi_{i})\\ & = & \pi_{i}^{Y_{i}}(1-\pi_{i})^{1-Y_{i}},\end{aligned} \]

where \(\pi_{i}=\Pr(Y_{i}=1)\).

  • The systematic component is given by

\[ \begin{aligned} \pi_{i} = \frac{1}{1 + \exp(-x_{i} \beta)},\end{aligned} \]

where \(x_{i}\) is the vector of \(k\) explanatory variables for observation \(i\) and \(\beta\) is the vector of coefficients.

  • The prior for \(\beta\) is given by

\[ \begin{aligned} \beta \sim \textrm{Normal}_k \left( b_{0}, B_{0}^{-1} \right)\end{aligned} \]

where \(b_{0}\) is the vector of prior means for the \(k\) coefficients and \(B_{0}\) is the \(k \times k\) precision matrix (the inverse of the prior variance-covariance matrix).

Quantities of Interest

  • The expected values (qi$ev) for the logit model are the predicted probabilities of a success:

\[ \begin{aligned} E(Y \mid X) = \pi_{i} = \frac{1}{1 + \exp(-x_{i} \beta)},\end{aligned} \]

given the posterior draws of \(\beta\) from the MCMC iterations.

  • The predicted values (qi$pr) are draws from the Bernoulli distribution with mean equal to the simulated expected value \(\pi_{i}\).
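The mapping from posterior draws to qi$ev and qi$pr can be sketched in a few lines of base R; the draws and covariate vector below are simulated stand-ins, not output from the model:

```r
set.seed(1)
# 1000 fake posterior draws of beta for a 3-coefficient model
beta.draws <- matrix(rnorm(3000, sd = 0.1), ncol = 3)
x.vec <- c(1, 1, 12)  # illustrative covariates: intercept, race, educate
# Expected values (qi$ev): inverse-logit of the linear predictor per draw
pi.draws <- 1 / (1 + exp(-beta.draws %*% x.vec))
# Predicted values (qi$pr): one Bernoulli draw per expected value
y.draws <- rbinom(nrow(pi.draws), 1, pi.draws)
```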

  • The first difference (qi$fd) for the logit model is defined as

\[ \begin{aligned} \text{FD}=\Pr(Y=1\mid X_{1})-\Pr(Y=1\mid X).\end{aligned} \]

  • The risk ratio (qi$rr) is defined as

\[ \begin{aligned} \text{RR}=\Pr(Y=1\mid X_{1})\ /\ \Pr(Y=1\mid X).\end{aligned} \]

  • In conditional prediction models, the average expected treatment effect (qi$att.ev) for the treatment group is

\[ \begin{aligned} \frac{1}{\sum t_{i}}\sum_{i:t_{i}=1}[Y_{i}(t_{i}=1)-E[Y_{i}(t_{i}=0)]],\end{aligned} \]

where \(t_{i}\) is a binary explanatory variable defining the treatment (\(t_{i}=1\)) and control (\(t_{i}=0\)) groups.

  • In conditional prediction models, the average predicted treatment effect (qi$att.pr) for the treatment group is

\[ \begin{aligned} \frac{1}{\sum t_{i}}\sum_{i:t_{i}=1}[Y_{i}(t_{i}=1)-\widehat{Y_{i}(t_{i}=0)}],\end{aligned} \]

where \(t_{i}\) is a binary explanatory variable defining the treatment (\(t_{i}=1\)) and control (\(t_{i}=0\)) groups.

Output Values

The output of each Zelig command contains useful information which you may view. For example, if you run:

z.out <- zelig(y ~ x, model = "logit.bayes", data = mydata)

then you may examine the available information in z.out by using names(z.out), see the draws from the posterior distribution of the coefficients by using z.out$coefficients, and view a default summary of information through summary(z.out). Other elements available through the $ operator are listed below.

  • From the zelig() output object z.out, you may extract:

  • coefficients: draws from the posterior distributions of the estimated parameters.

  • zelig.data: the input data frame if save.data = TRUE.

  • bayes.residuals: when bayes.resid is TRUE or a set of observation numbers, this object contains the posterior draws of the latent Bayesian residuals for all observations or for the specified observations.

  • seed: the random seed used in the model.

  • From the sim() output object s.out:

  • qi$ev: the simulated expected values (probabilities) for the specified values of x.

  • qi$pr: the simulated predicted values for the specified values of x.

  • qi$fd: the simulated first difference in the expected values for the values specified in x and x1.

  • qi$rr: the simulated risk ratio for the expected values simulated from x and x1.

  • qi$att.ev: the simulated average expected treatment effect for the treated from conditional prediction models.

  • qi$att.pr: the simulated average predicted treatment effect for the treated from conditional prediction models.

See also

Bayesian logit regression is part of the MCMCpack library by Andrew D. Martin and Kevin M. Quinn. The convergence diagnostics are part of the CODA library by Martyn Plummer, Nicky Best, Kate Cowles, and Karen Vines.

Martin AD, Quinn KM and Park JH (2011). “MCMCpack: Markov Chain Monte Carlo in R.” Journal of Statistical Software, 42(9), pp. 22. <URL: http://www.jstatsoft.org/v42/i09/>.