logistic-regression

See also: Linear regression

Source: google-ml-course

Logistic regression

  • Used for predicting probabilities (values between 0 and 1)
  • Simply capping a linear model's predictions at 0 and 1 would be an alternative, but it would introduce bias into the model
  • Hence the need for a different loss function and prediction model, one that outputs predicted probabilities directly

Sigmoid function

Gives a bounded value between zero and one. $$ y = \dfrac{1}{1 + e^{-z}} $$

| Variable | Description |
| --- | --- |
| $y$ | Logistic regression output |
| $z$ | Original model output, the ‘log-odds’ |

$$ z = \log \left( \dfrac{y}{1 - y} \right) $$
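The sigmoid and the log-odds above are inverses of each other; a minimal sketch in plain Python (function names are illustrative, not from the course):

```python
import math

def sigmoid(z: float) -> float:
    """Map log-odds z to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def log_odds(y: float) -> float:
    """Inverse of the sigmoid: recover z from a probability y."""
    return math.log(y / (1.0 - y))
```

For example, `sigmoid(0.0)` gives `0.5`, and `log_odds(sigmoid(2.0))` recovers `2.0` up to floating-point error.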

Log loss

$$ L = \sum_{(x,\, y) \in D} -y \log (y') - (1 - y) \log (1 - y') $$
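A direct translation of the formula, assuming $y \in \{0, 1\}$ and $y'$ is the predicted probability (the pair ordering in `examples` is my own convention):

```python
import math

def log_loss(examples):
    """Sum of per-example log loss over (y, y_pred) pairs,
    where y is the 0/1 label and y_pred the predicted probability."""
    return sum(-y * math.log(p) - (1 - y) * math.log(1 - p)
               for y, p in examples)
```

Note that only one of the two terms is nonzero per example: $-\log(y')$ when $y = 1$, and $-\log(1 - y')$ when $y = 0$.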

Regularisation for logistic regression

  • Regularisation is very important!
  • Due to the asymptotes of the log loss function, the model will try to drive the loss to zero but can never reach it, so the weights grow towards infinity
  • The fix: use L$_2$ regularisation or early stopping
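A minimal sketch of how the L$_2$ penalty enters a gradient step for a single-feature model (learning rate and penalty strength are illustrative defaults, not values from the course):

```python
import math

def train_step(w, b, examples, lr=0.1, l2=0.01):
    """One gradient-descent step on log loss plus an L2 penalty.

    Without the l2 term, perfectly separable data would keep
    pushing |w| towards infinity; the penalty gradient l2 * w
    pulls the weight back towards zero, keeping it finite.
    """
    grad_w, grad_b = l2 * w, 0.0          # gradient of (l2/2) * w^2
    for x, y in examples:
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # predicted probability
        grad_w += (p - y) * x             # dL/dw for log loss with sigmoid
        grad_b += (p - y)
    return w - lr * grad_w, b - lr * grad_b
```

Early stopping achieves a similar effect by halting training before the weights have grown large.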