ABSTRACT

By applying the logarithmic transformation to the odds, p_i/(1 − p_i), we obtain the logit function, as shown in the following:

\[
g(x_i) = \operatorname{logit}(p_i) = \ln\!\left(\frac{p_i}{1 - p_i}\right) = \beta_0 + \beta_1 X
\]

where:
β_0 indicates the value of logit(p_i) when X = 0
β_1 indicates the change in logit(p_i) per unit change in X
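To make the transformation concrete, the following is a minimal Python sketch (the coefficient values and the helper names logit and inverse_logit are illustrative, not from the text) that evaluates the linear predictor β_0 + β_1 X on the logit scale and maps it back to a probability.

```python
import numpy as np

def logit(p):
    """Log-odds transformation: ln(p / (1 - p))."""
    return np.log(p / (1 - p))

def inverse_logit(eta):
    """Map a value on the logit scale back to a probability in (0, 1)."""
    return 1 / (1 + np.exp(-eta))

# Illustrative coefficients (not from the text)
beta0, beta1 = -1.5, 0.8
X = np.array([0.0, 1.0, 2.0, 3.0])

eta = beta0 + beta1 * X   # linear predictor on the logit scale
p = inverse_logit(eta)    # implied probabilities

print(p)                  # approximately [0.18, 0.33, 0.52, 0.71]
print(logit(p) - eta)     # ~0, confirming logit and inverse_logit are inverses
```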

8.2 Parameter Estimation

The logistic regression model is fitted by estimating the unknown parameters β_0 and β_1. One procedure for estimating these parameters is maximum-likelihood estimation (MLE), which is based on the likelihood function. This function is given by the joint probability of observing the sample data, as shown in the following:

\[
L = \Pr(Y_1, Y_2, \ldots, Y_n)
\]

where n is the number of observations, or the sample size. The likelihood function quantifies the support for a particular value of the parameter β_i, given the observed data. If the observed data provide more support for one value of the parameter than for another, then the likelihood is higher for the former parameter value (Marschner, 2015).
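The sketch below illustrates this idea under the assumption of a single predictor X and independent Bernoulli outcomes: the joint probability of the observed Y values is written as a log-likelihood, and the estimates of β_0 and β_1 are found by maximizing it (here by minimizing its negative with a general-purpose optimizer). The simulated data and the function name neg_log_likelihood are illustrative, not from the text.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulated data for illustration only (true beta0 = -1.0, beta1 = 0.5)
n = 500
X = rng.normal(size=n)
p_true = 1 / (1 + np.exp(-(-1.0 + 0.5 * X)))
Y = rng.binomial(1, p_true)

def neg_log_likelihood(beta, X, Y):
    """Negative Bernoulli log-likelihood of (beta0, beta1) given the observed data."""
    eta = beta[0] + beta[1] * X
    p = 1 / (1 + np.exp(-eta))
    return -np.sum(Y * np.log(p) + (1 - Y) * np.log(1 - p))

# Maximize the likelihood by minimizing its negative
result = minimize(neg_log_likelihood, x0=np.zeros(2), args=(X, Y), method="BFGS")
beta0_hat, beta1_hat = result.x
print(beta0_hat, beta1_hat)   # estimates should be close to -1.0 and 0.5
```

Parameter values that make the observed Y more probable yield a smaller negative log-likelihood, so the minimizer returns the values best supported by the data, which is exactly the MLE principle described above.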