Drag-guided Diffusion Models for Vehicle Image Generation
dc.contributor.author | Arechiga, Nikos | en |
dc.contributor.author | Permenter, Frank | en |
dc.contributor.author | Song, Binyang | en |
dc.contributor.author | Yuan, Chenyang | en |
dc.date.accessioned | 2024-02-22T18:27:49Z | en |
dc.date.available | 2024-02-22T18:27:49Z | en |
dc.date.issued | 2023-12-15 | en |
dc.description.abstract | Denoising diffusion models trained at web-scale have revolutionized image generation. The application of these tools to engineering design holds promise but is currently limited by their inability to understand and adhere to concrete engineering constraints. In this paper, we take a step toward incorporating quantitative constraints into diffusion models by proposing physics-based guidance, which enables the optimization of a performance metric (as predicted by a surrogate model) during the generation process. As a proof-of-concept, we add drag guidance to Stable Diffusion, which allows this tool to generate images of novel vehicles while simultaneously minimizing their predicted drag coefficients. | en |
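The abstract describes physics-based guidance: during sampling, a differentiable surrogate predicts the drag coefficient of the current image estimate, and its gradient steers the denoising update toward lower-drag designs. Below is a minimal sketch of how such a guided step could look, assuming a hypothetical noise-prediction network `denoiser`, a hypothetical differentiable surrogate `drag_surrogate`, and a DDIM-style update with classifier-guidance-style gradient injection; none of these names or details are taken from the paper's code.

```python
# Illustrative sketch only: `denoiser` and `drag_surrogate` are assumed
# interfaces, not the authors' implementation.
import torch

def drag_guided_ddim_step(x_t, t, alpha_bar_t, alpha_bar_prev,
                          denoiser, drag_surrogate, guidance_scale=1.0):
    """One deterministic DDIM step with drag guidance from a surrogate model."""
    x_t = x_t.detach().requires_grad_(True)

    # Predict the noise and the corresponding clean-image estimate.
    eps = denoiser(x_t, t)
    x0_hat = (x_t - (1.0 - alpha_bar_t) ** 0.5 * eps) / alpha_bar_t ** 0.5

    # Surrogate model predicts the drag coefficient of the estimated clean image.
    drag = drag_surrogate(x0_hat).mean()

    # Gradient of predicted drag w.r.t. the noisy sample; adding it to the
    # noise estimate (classifier-guidance style) pushes toward lower drag.
    grad = torch.autograd.grad(drag, x_t)[0]
    eps_guided = eps + guidance_scale * (1.0 - alpha_bar_t) ** 0.5 * grad

    # Standard DDIM update using the guided noise estimate.
    x0_guided = (x_t - (1.0 - alpha_bar_t) ** 0.5 * eps_guided) / alpha_bar_t ** 0.5
    x_prev = (alpha_bar_prev ** 0.5 * x0_guided
              + (1.0 - alpha_bar_prev) ** 0.5 * eps_guided)
    return x_prev.detach()
```

For a latent diffusion model such as Stable Diffusion, the surrogate would presumably score decoded images (or latents) rather than pixel-space samples; that detail is omitted from this sketch.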
dc.description.notes | Peer reviewed: yes; full paper | en |
dc.description.version | Accepted version | en |
dc.format.mimetype | application/pdf | en |
dc.identifier.uri | https://hdl.handle.net/10919/118110 | en |
dc.language.iso | en | en |
dc.rights | In Copyright | en |
dc.rights.uri | http://rightsstatements.org/vocab/InC/1.0/ | en |
dc.title | Drag-guided Diffusion Models for Vehicle Image Generation | en |
dc.type | Conference proceeding | en |
dc.type.dcmitype | Text | en |
pubs.finish-date | 2023-12-16 | en |
pubs.organisational-group | /Virginia Tech | en |
pubs.organisational-group | /Virginia Tech/Engineering | en |
pubs.organisational-group | /Virginia Tech/Engineering/Industrial and Systems Engineering | en |
pubs.start-date | 2023-12-10 | en |