Nudge

I recently co-founded Nudge with Fred Ehrsam. Our goal is to create hardware that dramatically improves the daily lived experience of people everywhere. We're using ultrasound to safely and non-invasively measure and modulate brain activity at high resolution — we see huge promise both for those struggling with mental health challenges and, eventually, for people who are generally healthy and want to enhance their everyday life. Ultimately, practically everything we care about is in some way tied to the quality of our state of mind, and few technologies can interface with it directly, especially in a way that's beneficial over the long run.

While it's hard to predict the full range of possibilities we'll see from our device, we think the highest-impact capabilities will be increasing wellbeing, agency over how we choose to spend our time, and the ability to think clearly and learn quickly. We have a long way to go to deliver these capabilities consistently and at scale, but we're starting to see real traction on the science and engineering that would enable such a device and are working in earnest to make it a reality.

Over the past five years, non-invasive brain stimulation (no surgical implant required) has already demonstrated the capacity to change people's lives for the better. Magnetic stimulation, namely an optimized rTMS protocol first developed at Stanford, has shown ~80% efficacy at treating intractable forms of depression in a randomized controlled trial, with only a week of intensive treatment. There are reports of the treatment being transformative for many who participate, a true "before and after" moment. Ultrasound is starting to show signs of a similar inflection point: we're seeing results where a single treatment of under 30 minutes can enable those addicted to opiates (who have tried many other forms of standard treatment) to stop using for a month or more. In an article on the first results of the study, patients report long-lasting effects. While we still need to see how the results hold up in a fully randomized, controlled trial, it's an exciting first look at what may be to come.

Part of the power of this technology is that once it demonstrates efficacy for one treatment, it implies a more generalized set of effects — not only can we target the brain area responsible for the reward learning implicated in addiction, but also others that may help treat depression or pain, simply by changing the input parameters and targeting. In fact, in a controlled trial, targeting a neighboring region has already shown strong effectiveness against chronic pain after a single 40-minute treatment.

There are many ways to interface with the brain, both invasively and non-invasively, but very few are truly safe, scalable, and backed by compelling use cases. The primary physical ways to interface with the brain are optical (using light to measure the properties of, or to affect, a neuron's state), magnetic (as mentioned above), electrical (with brain implants like the current brain-computer interfaces used in patients with movement disorders), and acoustic (imaging and stimulating with sound, i.e. ultrasound). Optical approaches are limited by how far light can penetrate into the brain, restricting them to implants or surface-level imaging; potent electrical interfacing requires implantation; and magnetic approaches, while non-invasive, can't be miniaturized or used to target deep brain regions because of physical constraints on focusing magnetic fields. Only ultrasound combines all of the advantages: it is remarkably safe at low intensities (there are decades of research on using low-intensity ultrasound to image fetuses in the womb), it can be focused via beamforming onto deep brain regions millimeters in size, it can be used for both structural and functional brain imaging, and its hardware can be miniaturized and scaled without an implant.
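To make the beamforming point concrete, here's a minimal sketch of the geometric idea behind focusing a phased array: each element fires with a delay chosen so that all wavefronts arrive at the target at the same instant. It assumes a uniform speed of sound and ignores the skull's aberration, which real transcranial systems have to correct for; the function and parameter names are mine, purely for illustration.

```python
import numpy as np

SPEED_OF_SOUND_M_PER_S = 1500.0  # rough speed of sound in soft tissue

def focusing_delays(element_positions_m: np.ndarray, focus_m: np.ndarray) -> np.ndarray:
    """Per-element transmit delays (seconds) that align wavefronts at the focus.

    element_positions_m: (N, 3) transducer element coordinates.
    focus_m: (3,) target point, e.g. a deep brain region.
    """
    # Distance from each element to the focal point.
    distances = np.linalg.norm(element_positions_m - focus_m, axis=1)
    travel_times = distances / SPEED_OF_SOUND_M_PER_S
    # Elements farther from the focus fire earlier, so every wavefront
    # arrives at the target simultaneously and interferes constructively.
    return travel_times.max() - travel_times

# Example: a 64-element linear array with 0.5 mm pitch, focusing 60 mm deep.
elements = np.zeros((64, 3))
elements[:, 0] = (np.arange(64) - 31.5) * 0.5e-3  # x positions, centered on zero
delays = focusing_delays(elements, np.array([0.0, 0.0, 60e-3]))
```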

While ultrasound's potential as a treatment for mental health diagnoses is just starting to become apparent, we think everyone could eventually benefit from the technology. Over the years I've been blown away by the progress on brain-computer interfaces, but faster control of a computer cursor or typing with my thoughts isn't nearly as important to me as my ability to focus and learn effectively, regulate my sleep or stress levels, or explore new states of mind. On the imaging side, I'm excited for brain interfaces that create new motifs for communication, enabled only by understanding circuit-level activity — revolutions in computing happen when the medium creates new gestures for interfacing, rather than just speeding up old ones.

We have a lot of work to do, both scientifically and technically, to make the vision of this technology a reality. It's something of an open secret in the field that there are still many hard problems which have plagued researchers for years: incomplete models of the mechanism of action, inaccuracies in acoustic simulation, bulky and/or imprecise transducer hardware, a lack of clear feedback from brain imaging, differences in neuroanatomy and function across individuals, and a lack of definitive stimulation parameters. Alongside the tantalizing recent human results for addiction, chronic pain, and more, over the last six months we've started to gain a foothold on many of these problems — some solutions coming from the broader research community and some already at Nudge. I ultimately believe that for a compelling product to emerge from this approach, we'll need to solve all of these problems at once, and within a single company that also has the capacity to scale the product that comes out. If you're an exceptional researcher or engineer and want to work on technically challenging problems and a meaningful mission, please reach out.

The Neurotech Development Kit

Over the past couple of years a number of new neurotechnology hardware platforms have been developed, from semi-invasive to fully invasive (Neuralink recently reached its first human implantation), with capabilities spanning everything from high-bandwidth interfacing with the retina to whole-brain imaging to targeted neuromodulation. While this is incredibly exciting for the field, there's still a group of people with the potential to impact it who have almost no good route to contribute: early-career software engineers.

I've met a number of ambitious young engineers with primarily software skills who want to start a company in neurotech, build a portfolio project to demonstrate ability and interest in the area, or just make something cool using brain signals. There is basically one option available to them, which is consumer EEG (or in some cases surface EMG, a fairly indirect way of measuring brain activity), and it's a really limited one. Even research-grade EEG, with hundreds of contacts in controlled lab conditions, tops out at a few centimeters of spatial resolution (signals that average over tens of thousands of neurons or more) and is only reliable for collecting cortical data. There have been some impressive demos using EEG for control, but studying the deep brain, getting high-bandwidth input and output for high-fidelity computer control, and discovering new modalities for interfacing with AI would all either require or greatly benefit from new hardware. That hardware is both years away and likely to come, more and more, from companies without open API access.

In the meantime, I think it would be extremely valuable to make simulation environments more available to outside contributors. That was the inspiration behind the Neurotech Development Kit (NDK), which I worked on with AE Studio, Milan Cvitkovic, and Sumner Norman. Developing hardware and experimental designs within neurotech companies often starts with, or heavily uses, simulation, and well-documented open packages allow anyone with software skills to contribute to the space. Furthermore, given the speed of iteration in simulation, I could imagine these packages having an impact on the state of the art similar to what we've seen in robotics and reinforcement learning, where simulation has been a key part of making faster progress. In fact, part of the original impetus for NDK came from the success of OpenAI Gym in offering a standard environment for benchmarking RL algorithms, which greatly accelerated the field.

I think some of the most impactful future neurotech devices will be completely non-invasive, and the first use case for NDK is modeling transcranial ultrasound for neuromodulation. It's open to contributions from anyone who wants to work on neurotech but doesn't have a lab or the hardware, and we hope to see what the world can build with it!
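For a flavor of what getting started looks like, here's a minimal sketch based on NDK's early documented quickstart; treat the exact scenario name and method names as illustrative, since they may differ in the current release.

```python
# pip install neurotechdevkit  (plus its simulation backend; see the NDK docs)
import neurotechdevkit as ndk

# Load a built-in transcranial ultrasound scenario: a 2D setup with the
# source, materials, and target already defined.
scenario = ndk.make('scenario-0-v0')

# Run a steady-state acoustic simulation and render where the ultrasound
# energy ends up, no lab or transducer hardware required.
result = scenario.simulate_steady_state()
result.render_steady_state_amplitudes()
```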

Connectome Harmonics

During 2021 I worked with researchers at the Johns Hopkins CPCR and UNC to develop a new tool for connectome-specific harmonic waves (CSHW), a technique first introduced by Selen Atasoy at Oxford for measuring whole-brain network states via a fairly simple decomposition into around 100 structural/functional 'modes'. We wanted to make the technique usable by a larger number of scientists in the field by open-sourcing the code and analysis pipelines and making use of modern practices from the software development community, like containerization. It was fascinating, although not that surprising, to see how much cloud computing, open-source neuro data, and improved code made collaboration on the project smoother and its reach broader. The long-term impact, I hope, is that more technical talent in other fields starts to see neuro data as a resource for building magical applications, in addition to understanding what the brain is doing. The best example I've seen of this recently is "Mind's Eye".

So-called 'connectome harmonics', introduced in 2016 in this publication of Atasoy's in Nature Communications, are a beautiful way of describing oscillations of activity throughout the brain and have proven useful for characterizing states of neural activity. There are plenty of descriptions of the math underneath connectome harmonics, which borrows from graph theory and shows up in other unexpected places, but I often feel the significance of the approach gets lost between the visualizations and the jargon-filled language of the formal process. In my mind, CSHW and related techniques provide a way to simplify the representation of the brain from the activity of 100 billion neurons (plus other emergent phenomena like local field potentials) down to roughly 100 dimensions that still give a rich understanding of the brain's state at a given moment in time. Once a brain state is "just" a point in these 100 or so dimensions, we can start to think about distance functions between points (comparing one brain state to another mathematically), trajectories through this space (comparing brain states over time), and, critically, benchmarking states across different interventions (when I do X to the brain, what trajectory does it follow?).
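To ground what that decomposition looks like, here's a minimal sketch of the core computation as I understand it, heavily simplified relative to Atasoy's full pipeline (which builds the graph from cortical surface meshes plus long-range tractography): the harmonics are the low-frequency eigenmodes of the connectome's graph Laplacian, an activity pattern is projected onto them, and brain states can then be compared as points in that coefficient space. All names here are illustrative.

```python
import numpy as np
from scipy.linalg import eigh

def connectome_harmonics(adjacency: np.ndarray, n_modes: int = 100) -> np.ndarray:
    """Low-frequency eigenmodes of the graph Laplacian of a structural connectome.

    adjacency: (N, N) symmetric connectivity matrix (e.g. from tractography).
    Returns an (N, n_modes) matrix whose columns are the harmonics.
    """
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency
    # eigh returns eigenvalues in ascending order; the smallest-eigenvalue
    # modes are the smoothest, largest-scale patterns over the connectome.
    _, eigenvectors = eigh(laplacian, subset_by_index=[0, n_modes - 1])
    return eigenvectors

def to_harmonic_coefficients(activity: np.ndarray, harmonics: np.ndarray) -> np.ndarray:
    """Express a spatial activity pattern (N,) as ~100 harmonic coefficients."""
    return harmonics.T @ activity

def state_distance(coeffs_a: np.ndarray, coeffs_b: np.ndarray) -> float:
    """One simple choice of distance between two brain states in harmonic space."""
    return float(np.linalg.norm(coeffs_a - coeffs_b))
```

A trajectory over time is then just a sequence of coefficient vectors, one per imaging frame, and "what does intervention X do to the brain?" becomes a question about how that sequence moves through the space.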

I spend much of my time now working on new technologies that will modulate brain activity towards some desired end, be that something relatively mundane like sleep or focus, or something more exotic like 'advanced meditation'. A conventional "functional localization" view of neuroscience might tell us to target the hypothalamus to modulate sleep or the locus coeruleus to modulate arousal, but I imagine that far better metrics for the endpoint (what is "sleep", or "good sleep", actually?) and more refined targeting strategies will fall out of a better understanding of these endpoints. Ultimately, I hope a union of mental models from science, engineering, and math will all contribute to directly improving people's state of mind.