My talk will be devoted to the convergence of stochastic interacting particle systems, in the mean-field limit, to solutions of conservative SPDEs. We will discuss the optimal convergence rate and derive a quantitative central limit theorem for such SPDEs. The results apply, in particular, to the mean-field scaling limit of stochastic gradient descent dynamics in overparametrized neural networks. We will see that including the noise in the limiting equation improves the convergence rate and retains information about the fluctuations of stochastic gradient descent in the continuum limit. The talk is based on joint work with Benjamin Gess and Rishabh S. Gvalani.