Kan Jen Cheng

I'm a student at UC Berkeley, where I do audio research in the Berkeley Speech Group.

My current research interests center on auditory perception, generation, and texture editing.

Email  /  Github

profile photo

Research

I'm interested in deep learning, generative AI, and audio processing. Most of my research is about inferring the physical world (speech, sound, etc.) from audio. Some papers are highlighted.

Audio Texture Manipulation by Exemplar-Based Analogy
Kan Jen Cheng, Tingle Li, Gopala Anumanchipalli
ICASSP, 2025  
project page / arXiv

An exemplar-based analogy model for audio texture manipulation that learns transformations from paired examples.


Last updated:

Template from Jon Barron.