Published March 21, 2023 | Version v1
Publication

Faster Training and Improved Performance of Diffusion Models via Parallel Score Matching

Description

Modeling the score evolution with a single time-dependent neural network in Diffusion Probabilistic Models (DPMs) requires long training periods and can limit modeling flexibility and capacity. To mitigate these shortcomings, we propose leveraging the independence of the learning tasks at different time points in DPMs. Concretely, we split the learning task by employing independent networks, each of which learns the evolution of scores only within a time sub-interval. Furthermore, motivated by residual flows, we take this approach to the limit by employing separate networks that independently model the score at each single time point. As demonstrated empirically on synthetic and image datasets, our approach not only greatly speeds up training, but also improves density estimation performance compared to the standard training approach for DPMs.
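The splitting strategy described above can be sketched in a toy form: partition the diffusion time horizon into sub-intervals and fit one score model per sub-interval via denoising score matching, with no shared state between fits. This is a minimal illustrative sketch only, not the paper's implementation; the function names (`partition_times`, `train_interval_score`), the 1-D standard-normal data, the linear score model s(x) = theta * x, and the simplification of evaluating each sub-interval at its midpoint noise level are all assumptions made here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def partition_times(t_max=1.0, num_intervals=4):
    """Split [0, t_max] into equal sub-intervals, one per independent network."""
    edges = np.linspace(0.0, t_max, num_intervals + 1)
    return list(zip(edges[:-1], edges[1:]))

def train_interval_score(t_lo, t_hi, steps=300, batch=512, lr=0.1):
    """Fit a tiny linear score model s(x) = theta * x on one sub-interval via
    denoising score matching. Toy setup (our assumption, not the paper's):
    1-D standard-normal data, perturbation kernel q(x_t | x_0) = N(x_0, sigma^2)
    with sigma fixed at the sub-interval midpoint."""
    sigma = 0.5 * (t_lo + t_hi)  # noise level at the interval midpoint
    theta = 0.0
    for _ in range(steps):
        x0 = rng.standard_normal(batch)        # clean samples
        eps = rng.standard_normal(batch)       # perturbation noise
        xt = x0 + sigma * eps                  # noised samples
        # DSM target is -eps / sigma; take a gradient step on the
        # mean squared residual with respect to theta.
        resid = theta * xt + eps / sigma
        theta -= lr * float(np.mean(2.0 * resid * xt))
    return theta

intervals = partition_times(t_max=1.0, num_intervals=4)
# Each fit is fully independent of the others, so this loop is trivially
# parallel (e.g. one process or device per sub-interval).
thetas = [train_interval_score(lo, hi) for lo, hi in intervals]
```

Because the per-interval objectives share no parameters, the fits can run concurrently, which is the source of the training speed-up the abstract refers to; the limiting case of one network per single time point corresponds to shrinking each sub-interval to a point.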

Additional details

Created:
March 25, 2023
Modified:
November 30, 2023