R&D / AI Alignment

Advancing AI Alignment Through Neglected Approaches

Pioneering AI research by exploring underinvestigated strategies to secure humanity’s future.


AGI will be the most powerful force on earth. Alignment is not optional.

Progress in AI is accelerating. Alignment research isn’t keeping up. Most solutions focus on narrow paths. We believe the space of possibilities is larger — and largely unexplored.

Exploring what is often overlooked.

We investigate strategies at the edges of mainstream AI safety thinking.


Reverse-engineering prosociality.

Study how humans naturally cooperate. Apply it to design more aligned AI behaviors.


BCI-driven alignment methods.

Leverage brain-computer interfaces to better model decision-making and agency.


Bridging political divides on AI risk.

Build bipartisan consensus to create durable AI safety policies.


Empowering AI whistleblowers.

Protect insiders who raise concerns. Surface critical information early.


Growing funding for neglected ideas.

Channel capital toward underexplored, high-leverage research directions.


Consciousness-informed safety research.

Investigate how awareness and self-modeling might guide safer AI.

Why AE Studio?

Built different by design.



We've been focused on this for a decade with no external distractions.

Independent and profitable

No outside investors pushing for speed over safety.

Cross-disciplinary by default

Developers, neuroscientists, and AI researchers solving hard problems together.

Optimized for impact

We prioritize progress over publication. Building over posturing.

How we've contributed so far.

Modern society is too focused on near-term gains instead of long-term consequences. We've invested time and money into something we believe is crucial to our species in the long run.

'Neglected Approaches' Agenda
Self-Other Overlap
SXSW Panel on Conscious AI
ICLR Paper on Reason-Based Deception
Educational Content
Attention Schema Theory
The Alignment Survey
AI Safety Startups Initiative
PromptInject: Vulnerability Study on Language Models
Research Funding

Early BCI work to help alignment efforts.

Our original theory of change involved enhancing human cognitive capabilities to address challenges like AI alignment. While we're now exploring multiple approaches to AI safety, we continue to see potential in BCI technology. If AI-driven scientific automation progresses safely, we anticipate increased investment in BCI research. We're also advocating for government funding to be directed towards this approach, as it represents an opportunity to augment human intelligence alongside AI development.

While our emphasis has shifted towards AI alignment, our work in Brain-Computer Interfaces (BCI) remains an important part of our mission to enhance human agency:

Open-Source Tools

We've developed and open-sourced several tools to propel and democratize BCI development, like the Neural Data Simulator, which facilitates the development of closed-loop BCIs, and the Neurotech Development Kit, which models transcranial brain stimulation technologies. These tools have helped lower barriers in BCI research and development.

Neural Latent Benchmark Challenge

We won first place in this challenge to develop the best ML models for predicting neural data, topping the best research labs in the field.

Neuro Metadata Standards

We led the development of widely accepted neuro metadata standards and tools, supporting open-source neuro-analysis software projects like MNE, OpenEphys, and Lab Streaming Layer.

Collaboration with top companies in the space

We've joined forces with leading BCI companies like Forest Neurotech and Blackrock Neurotech, helping to bridge the gap between academic research and industry applications.

Privacy-Preserving ML

We've developed secure methods for analyzing neural data and training privacy-preserving machine learning models, addressing crucial ethical considerations in BCI development.

Hosted a major panel at SXSW on the path to Conscious AI.
Regularly collaborating with top thinkers in AI consciousness research.
Previously hosted SXSW panels on brain-computer interfaces.

We're already experts in building AI. Now we're focused on solving something much bigger.

We're focused on the future.

At AE Studio, we tackle ambitious, high-impact challenges using neglected approaches.

Starting with Brain-Computer Interfaces (BCI), we bootstrapped a consulting business, launched startups, and reinvested into frontier research—leading to collaborations with Forest Neurotech and Blackrock Neurotech.

Today, we’re 160 strong—engineers, designers, and data scientists—focused on increasing human agency.

Now, we’re applying our proven model to AI alignment, accelerating safety startups like Goodfire AI and NotADoctor.ai to tackle existential risks.

A team full of experience.

Our data scientists - from places like Stanford, Caltech, and MIT - are highly collaborative, efficient, and pragmatic.

Diogo
Research scientist with 9+ years of experience developing (soft-)robotic systems for rehabilitation after stroke and spinal cord injury. During his PhD at UC Irvine and postdoctoral fellowship at Harvard, Diogo led interdisciplinary teams of engineers, designers, and clinicians to develop systems (including intuitive user interfaces, embedded firmware, hardware/software integration, and data processing pipelines) to advance the science and technology of on-demand rehabilitation. Previously, he worked in the automotive industry developing engine control algorithms and test automation software for Renault and Volvo.
DIOGO, CHIEF SCIENTIST
Mike
Experienced data scientist and avid problem solver. While earning his PhD in Computational Data Science from the University at Buffalo, he created new machine learning techniques that he applied to better understand brain activity in the moments before an epileptic seizure.

He has a proven track record of delivering end-to-end machine learning solutions for a Fortune 500 bank, including state-of-the-art natural language processing models for monitoring customer feedback and an AI system for real-time fraud prevention, among many others.
MIKE, SENIOR DATA SCIENTIST
Ed
25+ years of experience using statistics and machine learning to analyze datasets in physics, finance, and online petitions. Expert in machine learning, deep learning, and distributed computing as a means of processing and analyzing large datasets. A.B. Magna Cum Laude in Chemistry & Physics from Harvard College and PhD from Caltech in Experimental High Energy Particle Physics.
ED CHEN, HEAD OF DATA SCIENCE

How we've contributed so far.

Humanity is too focused on near-term competition instead of long-term consequences. We've invested time and money into something we believe is crucial to our species in the long run.

The 'Neglected Approaches' Approach

We described our alignment research agenda, focused on neglected approaches. It received significant positive feedback from the community and has shifted the broader alignment ecosystem towards embracing neglected approaches. Notably, some of the approaches we propose could carry a negative alignment tax, a concept we elaborate on in our LessWrong post "The case for a negative alignment tax", which challenges traditional assumptions about the relationship between AI capabilities and alignment.

We also discussed our approach to alignment, AI x-risks, and many other topics in a couple of podcasts.



We implement agency-increasing technology around the world.

More about us.

Learn more about all the members of our team and why we do what we do.