The Cohere For AI open science community’s ML Theory Learning Group was pleased to welcome Yaroslav Bulatov to present Generating functions approach to gradient descent analysis (as in https://arxiv.org/abs/2206.11124).

Classical optimization theorems characterise behaviour in the worst case, while a generating-functions approach can describe how an algorithm behaves in the average case.
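
As a purely illustrative sketch of that worst-case vs. average-case distinction (not the talk's or the paper's generating-function machinery), the snippet below runs gradient descent on a random quadratic and compares the loss averaged over random initializations with the classical worst-case O(1/t) bound for smooth convex functions. The Wishart-style Hessian, step size, dimensions, and number of runs are all assumptions chosen for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 200

# Random (Wishart-like) Hessian; f(x) = 0.5 * x^T H x, with minimum f* = 0 at x = 0.
A = rng.standard_normal((d, d)) / np.sqrt(d)
H = A.T @ A
L_smooth = np.linalg.eigvalsh(H).max()   # smoothness constant = largest eigenvalue
eta = 1.0 / L_smooth                     # step size used in the classical analysis

T = 100
runs = []
for _ in range(20):                      # average over random initializations
    x = rng.standard_normal(d)
    losses = []
    for t in range(1, T + 1):
        x = x - eta * (H @ x)            # gradient step: grad f(x) = H x
        losses.append(0.5 * x @ H @ x)   # f(x_t)
    runs.append(losses)
avg_loss = np.mean(runs, axis=0)         # average-case behaviour

# Classical worst-case guarantee for L-smooth convex f with step 1/L:
#   f(x_t) - f* <= L * ||x_0||^2 / (2 t);  here E||x_0||^2 = d for standard normal init.
worst_case = L_smooth * d / (2.0 * np.arange(1, T + 1))

print(f"t=100  average-case loss: {avg_loss[-1]:.4f}   worst-case bound: {worst_case[-1]:.4f}")
```

On a typical run the averaged loss sits well below the worst-case curve, which is the gap that an average-case analysis aims to explain.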

Yaroslav's blog: https://machine-learning-etc.ghost.io/
