Time: 1:00 PM, April 18th (Friday), 2025
Location: POTR 234
Coffee and bagels will be provided.
Generative Dynamic Scene Reconstruction in Challenging Scenarios with Constrained Camera Motions
Dr. Sarah Ostadabbas, Associate Professor, Electrical and Computer Engineering Department, Northeastern University
In this talk, I will present two of our recent generative visual AI algorithms that address critical challenges in dynamic scene reconstruction, novel view synthesis, and physics-consistent video generation for autonomous vehicles.

Our Expanded Dynamic NeRF (ExpanDyNeRF) enhances dynamic 3D scene reconstruction by overcoming the limitations of current Neural Radiance Fields (NeRF) and Gaussian splatting techniques. While these approaches have advanced 3D modeling, they struggle with significantly deviated camera angles and dynamic environments. ExpanDyNeRF bridges these gaps by combining the strengths of NeRF and Gaussian splatting, using a novel-view pseudo-ground-truth strategy that leverages DreamGaussian to dynamically define object contours and colors across frames, ensuring high-quality, temporally consistent scene reconstructions. A key part of this project is the Synthetic Dynamic Multi-view (SynDM) dataset, designed specifically to evaluate dynamic camera motions and rotated side views, providing a valuable benchmark for novel view synthesis.

Our Autonomous Temporal Diffusion Model (AutoTDM), in turn, focuses on generating physics-consistent driving videos for autonomous vehicle research. Existing datasets for training autonomous systems are often limited, costly, and lacking in the realism required for real-world scenarios. AutoTDM addresses these issues by conditioning generation on multiple data inputs, such as depth maps, edge maps, and camera positions, to produce high-quality, temporally and spatially consistent driving scenes.

Together, these two models push the frontiers of AI in dynamic scene reconstruction and autonomous systems, offering scalable, high-quality solutions for applications ranging from autonomous driving to mixed reality environments.
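As a rough illustration of the kind of multi-modal conditioning AutoTDM relies on, the sketch below shows one way per-frame depth, edge, and camera-pose inputs could be fused into a single conditioning tensor for a conditional video generator. This is a minimal, hypothetical PyTorch sketch, not the authors' implementation; the ConditionEncoder class, channel counts, and fusion scheme are all assumptions.

```python
# Hypothetical sketch: fusing depth, edge, and camera-pose conditions
# into one feature map that a conditional video generator could consume.
import torch
import torch.nn as nn

class ConditionEncoder(nn.Module):
    """Fuses per-frame depth, edges, and camera pose into a conditioning map."""
    def __init__(self, out_channels: int = 16):
        super().__init__()
        # 1 depth channel + 1 edge channel; the 3x4 pose is broadcast as 12 channels
        self.conv = nn.Conv2d(2 + 12, out_channels, kernel_size=3, padding=1)

    def forward(self, depth, edges, pose):
        # depth, edges: (B, 1, H, W); pose: (B, 3, 4) camera extrinsics
        b, _, h, w = depth.shape
        pose_map = pose.reshape(b, 12, 1, 1).expand(b, 12, h, w)
        cond = torch.cat([depth, edges, pose_map], dim=1)
        return self.conv(cond)

if __name__ == "__main__":
    enc = ConditionEncoder()
    depth = torch.rand(2, 1, 64, 64)
    edges = torch.rand(2, 1, 64, 64)
    pose = torch.rand(2, 3, 4)
    print(enc(depth, edges, pose).shape)  # torch.Size([2, 16, 64, 64])
```

In a full system, the resulting conditioning map would be injected into the denoising network at each diffusion step so that generated frames stay consistent with the given geometry and camera trajectory.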
Dr. Ostadabbas is an associate professor in the Electrical and Computer Engineering Department at Northeastern University (NU) in Boston, MA. She joined NU in 2016 after postdoctoral research at Georgia Tech and earning her PhD from the University of Texas at Dallas in 2014. She directs the Augmented Cognition Laboratory (ACLab) and serves as Director of Women in Engineering (WIE). Her research focuses on computer vision and machine learning, especially representation learning in visual perception problems. She models human and animal behavior through visual motion and biomechanical analysis, and develops deep learning frameworks for small-data domains in medical and military fields. She has co-authored over 130 peer-reviewed articles and received awards from NSF, DoD, Sony, Amazon AWS, Oracle, and others. Her honors include the NSF CAREER Award (2022), the Sony Faculty Innovation Award (2023), the Cade Prize (2024), and NU's Translational Research Award (2025). She has served in leadership roles for workshops at top conferences such as CVPR, ECCV, ICCV, and ICIP.