# Variations on Logistic Regression


## ABSTRACT

Suppose that students answer questions on a test and that a specific student has an aptitude T. A particular question might have difficulty d and the student will get the answer correct only if T > d. Now if we consider d fixed and T as a random variable with density f and distribution function F, then the probability that the student will get the answer wrong is:

p = P(T ≤ d) = F(d)

T is called a latent variable. Suppose that the distribution of T is logistic:

F(y) = exp((y − µ)/σ) / (1 + exp((y − µ)/σ))

So, since p = F(d),

logit(p) = (d − µ)/σ = −µ/σ + d/σ

If we set β0 = −µ/σ and β1 = 1/σ, we now have a logistic regression model. We can illustrate this in the following example where we set d = 1 and let T have mean −1 and σ = 1:

```r
x <- seq(-6, 4, 0.1)
y <- dlogis(x, location = -1)           # logistic density with mean -1, scale 1
plot(x, y, type = "l", ylab = "density", xlab = "t")
ii <- (x <= 1)                          # region where the answer is wrong: t <= d = 1
polygon(c(x[ii], 1, -6), c(y[ii], 0, 0), col = "gray")
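As a numeric check of the latent-variable argument, the sketch below (a Python translation of the computation; the values µ = −1, σ = 1, d = 1 are those used above) evaluates the shaded probability directly from the logistic CDF and confirms that logit(p) matches β0 + β1 d:

```python
import math

# The chapter's values: latent aptitude T ~ logistic(mu, sigma), difficulty d.
mu, sigma, d = -1.0, 1.0, 1.0

# Logistic CDF: p = P(T <= d), the probability of a wrong answer.
p = 1.0 / (1.0 + math.exp(-(d - mu) / sigma))

# logit(p) should equal beta0 + beta1*d with beta0 = -mu/sigma, beta1 = 1/sigma.
logit_p = math.log(p / (1.0 - p))
beta0, beta1 = -mu / sigma, 1.0 / sigma

print(p)                           # about 0.881: well above one half
print(logit_p, beta0 + beta1 * d)  # both equal 2.0
```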

The plot in Figure 4.1 shows a logistically distributed latent variable. We can see that this distribution is apparently very similar to the normal distribution. The shaded area represents the probability of getting an answer wrong. As the mean aptitude of this student is somewhat less than the difficulty of the question, this probability is substantially greater than one half.
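The similarity to the normal distribution can be quantified. The sketch below (an illustrative Python check, not from the text) rescales the logistic distribution to unit variance, using the fact that a logistic with scale s has variance s²π²/3, and measures the largest discrepancy between the two CDFs over a grid:

```python
import math

# Logistic scale giving variance 1, so both distributions are comparable.
s = math.sqrt(3) / math.pi

def logistic_cdf(x):
    return 1.0 / (1.0 + math.exp(-x / s))

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2)))

# Largest absolute gap between the two CDFs on [-6, 6].
grid = [i / 10 for i in range(-60, 61)]
max_diff = max(abs(logistic_cdf(x) - normal_cdf(x)) for x in grid)
print(round(max_diff, 3))  # roughly 0.02: the two curves nearly coincide
```

The gap of about 0.02 explains why the plotted logistic density looks so close to a normal one.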