AI has advanced to a point where images can be created with a few simple text commands. What if the same could be done for 3D-generated graphics? This is the task that the recently graduated Hyunwoo (Brian) Kim ‘25 sought to address in his published paper titled MeshUp: Multi-Target Mesh Deformation via Blended Score Distillation, which won an honorable mention for the Best Paper Award at the 2025 International Conference on 3D Vision.

Kim graduated from the University of Chicago with a major in Statistics, though he was originally an economics major. His path took a turn when, as a Korean citizen, he was drafted into South Korea’s mandatory military service. By chance, he was placed into the Signal School, a branch of the military focused on signal processing and computer science.
“They just sent me down to the library to organize the military books, and most of the books there were related to artificial intelligence, signal processing, and other computer science,” Kim recalls. “If you get nothing to do other than organize boring textbooks, then you build up an interest in the boring textbooks.”
This exposure sparked an interest, particularly in the work of Assistant Professor Rana Hanocka, who was then an incoming professor at the UChicago Department of Computer Science. He cold-emailed her, and she gave him several coding assignments, including translating her earlier Ph.D. work, ALIGNet, to a newer codebase. He went on to join her lab, 3DL (pronounced Threedle). Under Hanocka’s mentorship, Kim developed MeshUp.
MeshUp was inspired by Neural Jacobian Fields, a framework from one of Kim’s collaborators for smoothly deforming meshes, the three-dimensional surface representation used in most graphics processing tasks. Building on this framework, MeshUp lets users turn an input mesh into something else based on the text (or image) descriptions they provide.
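For readers curious about the underlying mechanics, here is a minimal numpy sketch of the idea behind Jacobian-field-style deformation, not the authors’ code: prescribe a target Jacobian (deformation gradient) for each triangle, then recover vertex positions with a Poisson-style least-squares solve. The two-triangle mesh and the rotation used as the per-face target are hypothetical placeholders for what, in Neural Jacobian Fields, a neural network would predict.

```python
# Minimal sketch of Jacobian-field-style deformation on a toy mesh (not the authors' code).
import numpy as np

# Toy mesh: two triangles sharing an edge (4 vertices).
V = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0]])
F = np.array([[0, 1, 2],
              [0, 2, 3]])

# Hypothetical per-face target Jacobians: a 30-degree rotation about z,
# standing in for the matrices a network would predict.
theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
target_jacobians = [R for _ in F]

# Each face asks its deformed edges to match J_f applied to its rest edges;
# stacking these constraints gives a linear least-squares (Poisson-style) system.
rows, rhs = [], []
for (i, j, k), J in zip(F, target_jacobians):
    for a, b in [(i, j), (i, k)]:
        row = np.zeros(len(V))
        row[b], row[a] = 1.0, -1.0       # deformed edge = V_new[b] - V_new[a]
        rows.append(row)
        rhs.append(J @ (V[b] - V[a]))    # target edge = J_f @ rest edge

# Pin vertex 0 to remove the translational null space.
pin = np.zeros(len(V))
pin[0] = 1.0
rows.append(pin)
rhs.append(V[0])

A = np.stack(rows)   # (2*|F| + 1, |V|)
B = np.stack(rhs)    # (2*|F| + 1, 3)
V_new, *_ = np.linalg.lstsq(A, B, rcond=None)
print(V_new)         # recovered deformed vertex positions
```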
“On top of Neural Jacobian Fields, we used a model called diffusion, which is the state-of-the-art model used for 2D-based images, to guide the deformation process,” Kim says. “By deforming meshes rather than generating 3D assets from scratch, it would allow users to be more interactive and creative with the generative process, rather than just having the AI model do all the generation.” MeshUp also introduces a novel technique called Blended Score Distillation (BSD), which helps users generate 3D assets in even more creative ways by “blending” a variety of concepts. For example, users can generate creative assets that look like a 70% turtle and a 30% bulldog, a purely imaginative creature that even the most advanced AI models struggle to produce.
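To make the blending idea concrete, the toy PyTorch sketch below, which is not the paper’s implementation, mixes the noise residuals predicted for several text prompts with user weights before back-propagating into the deformation parameters. ToyScoreModel, the random prompt embeddings, and the rendering placeholder are all assumptions standing in for a real text-conditioned diffusion model and a differentiable renderer.

```python
# Toy sketch of blending score-distillation gradients across concepts (not the paper's code).
import torch
import torch.nn as nn

class ToyScoreModel(nn.Module):
    """Stand-in for a text-conditioned diffusion model: predicts noise from a noisy input and a prompt embedding."""
    def __init__(self, dim=64, text_dim=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + text_dim, 128), nn.ReLU(), nn.Linear(128, dim))

    def forward(self, x, prompt_emb):
        return self.net(torch.cat([x, prompt_emb], dim=-1))

score_model = ToyScoreModel()
prompt_embs = {"turtle": torch.randn(8), "bulldog": torch.randn(8)}  # placeholder text embeddings
blend_weights = {"turtle": 0.7, "bulldog": 0.3}                      # the user's "70% turtle, 30% bulldog"

deform_params = torch.zeros(64, requires_grad=True)  # stands in for mesh deformation parameters
optimizer = torch.optim.Adam([deform_params], lr=1e-2)

for step in range(200):
    rendering = deform_params * 1.0    # placeholder for differentiably rendering the deformed mesh
    noise = torch.randn_like(rendering)
    noisy = rendering + noise          # toy forward-diffusion step

    # Score-distillation-style update: each concept's predicted noise residual
    # acts as a gradient on the rendering, mixed with the user's blend weights.
    blended_grad = torch.zeros_like(rendering)
    for name, emb in prompt_embs.items():
        with torch.no_grad():
            pred_noise = score_model(noisy, emb)
        blended_grad = blended_grad + blend_weights[name] * (pred_noise - noise)

    optimizer.zero_grad()
    rendering.backward(gradient=blended_grad)  # pushes the parameters toward the blended concepts
    optimizer.step()
```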
These modifications markedly improved both the quality of 3D-generated assets and the degree of control users have over the generative process. To push user control further, Kim added localized editing: when changing a dog mesh into a turtle, for example, a user can select a specific region so that only that part of the dog becomes a turtle rather than the entire mesh. Localized editing pairs naturally with MeshUp’s concept blending, since the two can be combined to specify both where and how strongly each idea is expressed, such as making a dog 70% turtle mainly on the back and 30% bulldog on the head, as sketched below. These features open the tool to a broader audience, enabling even novice users unfamiliar with AI to accomplish edits that previously required expert skill.
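One way to picture the interplay between region selection and blend weights is as per-vertex weight maps that gate each concept’s gradient. The snippet below is a toy illustration under that assumption: the masks are hypothetical user selections and concept_gradient is a random stand-in for the real score-distillation gradient, not MeshUp’s actual interface.

```python
# Toy illustration of localized, weighted concept blending over mesh vertices.
import torch

num_vertices = 1000
vertex_offsets = torch.zeros(num_vertices, 3, requires_grad=True)  # stands in for deformation parameters

# Hypothetical user-selected regions (boolean masks over vertices).
back_region = torch.zeros(num_vertices, dtype=torch.bool)
back_region[:400] = True
head_region = torch.zeros(num_vertices, dtype=torch.bool)
head_region[600:] = True

# Per-concept weight maps: blend strength restricted to the selected region.
weight_maps = {
    "turtle": 0.7 * back_region.float().unsqueeze(-1),   # 70% turtle, mainly on the back
    "bulldog": 0.3 * head_region.float().unsqueeze(-1),  # 30% bulldog, on the head
}

def concept_gradient(name):
    """Placeholder for the score-distillation gradient computed for one text concept."""
    return torch.randn(num_vertices, 3)

# Combine the localized, weighted gradients into a single update direction.
blended = sum(weight_maps[name] * concept_gradient(name) for name in weight_maps)
vertex_offsets.backward(gradient=blended)

# Only vertices inside a selected region receive a nonzero update.
print((vertex_offsets.grad.abs().sum(dim=1) > 0).sum().item())
```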
“I wanted to find a way to frame this differently,” Kim emphasizes, “and convince the users that our model allows a lot more user-friendly editing capabilities, or, as I call it, democratization of graphics.”
Looking ahead, Kim is exploring a slightly different project with the same focus on making 3D generative models more interactive for users. Given two meshes, he is investigating whether one can be “worn” on top of the other, for instance a hat mesh placed on a face mesh. This adds the challenge of modeling a physically accurate “shrink-wrapping” deformation of the hat around the head, instead of simply combining one mesh with another. Having graduated this year, Kim is headed to Columbia University for his Ph.D. He plans to work over the summer to complete this project before starting the next chapter of his career.
To learn more about the 3DL lab’s work, please visit their lab page.
This article was originally written for and posted to the CS department website.