Dmitry Vetrov
Dmitry Vetrov (graduated from Moscow State University in 2003, PhD in 2006) is a professor of Computer Science at Constructor University, Bremen. He is the founder and head of the Bayesian Methods Research Group. Three of his recent PhD students became researchers at DeepMind. His research focuses on combining the Bayesian framework with Deep Learning models. His group is also actively involved in developing more efficient algorithms for diffusion models, studying the properties of the loss landscape in deep neural networks, building scalable tools for stochastic optimization, applying tensor decomposition methods to large-scale Machine Learning, and improving conditional text generation models.
Recent Publications
- Where Do Large Learning Rates Lead Us? NeurIPS 2024
- Neural Flow Diffusion Models: Learnable Forward Process for Improved Diffusion Modelling. NeurIPS 2024
- HairFastGAN: Realistic and Robust Hair Transfer with a Fast Encoder-Based Approach. NeurIPS 2024
- Guide-and-Rescale: Self-Guidance Mechanism for Effective Tuning-Free Real Image Editing. ECCV 2024
- Neural Diffusion Models. ICML 2024
- Gradual Optimization Learning for Conformational Energy Minimization. ICLR 2024
- The Devil is in the Details: StyleFeatureEditor for Detail-Rich StyleGAN Inversion and High Quality Image Editing. CVPR 2024
- Differentiable Rendering with Reparameterized Volume Sampling. AISTATS 2024
- Generative Flow Networks as Entropy-Regularized RL. AISTATS 2024
- Regularized Distribution Matching Distillation for One-step Unpaired Image-to-Image Translation. ICML Workshop 2024
- Improving GFlowNets with Monte Carlo Tree Search. ICML Workshop 2024
- To Stay or Not to Stay in the Pre-train Basin: Insights on Ensembling in Transfer Learning. NeurIPS 2023
- Star-Shaped Denoising Diffusion Probabilistic Models. NeurIPS 2023
- Entropic Neural Optimal Transport via Diffusion Processes. NeurIPS 2023
- StyleDomain: Efficient and Lightweight Parameterizations of StyleGAN for One-shot and Few-shot Domain Adaptation. ICCV 2023