• PhD Student, Physics Department

dkarkada@berkeley.edu

Current Research

Deep learning is humanity’s most successful attempt thus far to imitate aspects of human intelligence. Despite fundamental differences between the architectures of deep learning systems and biological brains, deep learning remains a playground, accessible both theoretically and experimentally, for understanding learning as a general, emergent phenomenon. The overarching goal of my research is to probe deep learning systems (using both theoretical tools and numerical experiments) to elucidate and characterize the general properties of systems that learn.

But even as a toy model of learning, deep learning is itself mysterious in many ways. Experiments reveal many interesting behaviors (e.g. feature learning, double descent, neural scaling laws) that are not well understood from a theory standpoint. On the other hand, idealized models of large neural networks have provable behaviors that are tantalizingly similar to those of real neural networks. My work bridges the gap between experiment and theory. I’m currently using percolation theory to characterize the texture of neural net outputs in input space.
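To give a flavor of the percolation-style analysis mentioned above, here is a minimal sketch (my own illustrative toy, not the actual research code): threshold the output of a small random network on a 2D input grid, then measure the sizes of the connected same-sign clusters, the basic observable in percolation theory. The network widths, scales, and grid resolution are arbitrary choices.

```python
import math, random
from collections import deque

random.seed(0)

# A tiny random two-layer tanh network f: R^2 -> R, standing in for a
# real neural net (hypothetical; widths and weight scales are arbitrary).
W1 = [[random.gauss(0, 2) for _ in range(2)] for _ in range(16)]
b1 = [random.gauss(0, 1) for _ in range(16)]
W2 = [random.gauss(0, 1) for _ in range(16)]

def f(x, y):
    h = [math.tanh(w[0] * x + w[1] * y + b) for w, b in zip(W1, b1)]
    return sum(wo * hi for wo, hi in zip(W2, h))

# Record the sign of f on an N x N grid over the input square [-1, 1]^2.
N = 64
grid = [[f(-1 + 2 * i / (N - 1), -1 + 2 * j / (N - 1)) > 0
         for j in range(N)] for i in range(N)]

# Flood-fill (4-connectivity) to collect the cluster sizes of the
# positive-sign phase.
seen = [[False] * N for _ in range(N)]
sizes = []
for i in range(N):
    for j in range(N):
        if grid[i][j] and not seen[i][j]:
            q, size = deque([(i, j)]), 0
            seen[i][j] = True
            while q:
                a, b = q.popleft()
                size += 1
                for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    na, nb = a + da, b + db
                    if (0 <= na < N and 0 <= nb < N
                            and grid[na][nb] and not seen[na][nb]):
                        seen[na][nb] = True
                        q.append((na, nb))
            sizes.append(size)

print(len(sizes), max(sizes))  # cluster count and largest-cluster size
```

How the largest cluster scales with grid size (and with network width or training) is the kind of question percolation theory is built to answer.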

Bio

I grew up in Texas and studied physics, astronomy, and computer science at UT Austin. I joined Berkeley as a physics PhD student in 2021, excited to use ideas from physics to understand information processing in neural networks. In my free time, I enjoy hanging out with friends, cooking, playing with synthesizers, making generative art, and scrolling Twitter.