mini-batch-gradient-descent
Parent: Variations of gradient descent
Source: google-ml-course
Mini-Batch Gradient Descent
- Computes the gradient and loss on a small random batch, typically 10 to 1,000 examples (see the sketch below)
- Produces less noisy gradient estimates than stochastic gradient descent (SGD), which uses a single example per step
- Still far cheaper per update than full-batch gradient descent, which processes the entire dataset every step
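A minimal NumPy sketch of the idea: shuffle the data each epoch, then update the parameters from the gradient of one small batch at a time. The function name, the linear-regression/MSE setup, and all hyperparameters here are illustrative assumptions, not part of the course material.

```python
import numpy as np

def minibatch_gd(X, y, lr=0.01, batch_size=32, epochs=100, seed=0):
    """Mini-batch gradient descent for linear regression with MSE loss.

    Hypothetical example setup; batch_size in the typical 10-1,000 range.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        order = rng.permutation(n)          # reshuffle once per epoch
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            Xb, yb = X[idx], y[idx]
            err = Xb @ w + b - yb           # residuals on this batch only
            # Gradient of the batch MSE: (2/m) * X^T err and (2/m) * sum(err)
            w -= lr * 2 * Xb.T @ err / len(idx)
            b -= lr * 2 * err.mean()
    return w, b

# Tiny demo: recover y = 3x + 1 from noisy samples.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 1))
y = 3 * X[:, 0] + 1 + 0.1 * rng.normal(size=500)
w, b = minibatch_gd(X, y)
print(w, b)  # approximately [3.0] and 1.0
```

Each epoch performs n / batch_size updates instead of one, which is where the efficiency over full-batch gradient descent comes from, while averaging over 32 examples keeps each step far less noisy than SGD's single-example estimate.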