The Great Unlock
Feb 23, 2026

2025 felt like the year the pieces finally lined up for brain and behavior research. Consumer EEG headsets dropped below $300. Signal processing improved. Remote studies stopped feeling like a workaround and started feeling normal. The tools to study the mind are no longer limited to well-funded labs with specialized hardware and a lot of patience.
We built NeuroFusion around a simple bet: research gets better when people can run studies outside the lab and repeat them often enough to learn from individuals, not only averages. That is the shift we care about, and it is why this moment matters so much to us.
The space is getting crowded. Incumbents already own parts of the workflow, open-source tools keep improving, and new startups are racing to become the default research platform. We call this moment The Great Unlock.
The bottleneck is still experimentation speed
Brain and behavior research has no shortage of questions. What it lacks is throughput. Good ideas still move too slowly from hypothesis to data to iteration.
lab visit -> rich snapshot
clinic visit -> long gaps between checks
repeatable home studies -> change over time
That gap shows up everywhere:
- In the lab, teams can collect high-quality data, but setup is heavy and follow-up is rare.
- In the clinic, useful assessments happen too infrequently to capture what changes between visits.
- At home, people can collect lighter-weight data much more often, but the workflow is still fragmented.
That is the gap we care about: not lab versus home, but snapshot research versus repeatable measurement.
Three shifts make this moment different:
- Consumer neurotechnology is usable now. Devices like the Muse and the Neurosity Crown are noisy compared with lab rigs, but they are already good enough for a growing set of questions.
- Signal processing is faster and more accessible. Better tooling means researchers can adjust a study and run it again instead of waiting weeks to learn whether a setup worked.
- Remote research infrastructure has matured. Participant pools exist. Distributed studies are normal. What is still missing is a product that combines tasks, prompts, and recordings in one manageable workflow.
For incumbents, this shift extends existing products. For small teams, it lowers the cost of shipping something new. Both trends are real.
Why the current stack still falls apart
If the parts exist, why build another platform?
survey tool
+ task tool
+ EEG tool
+ storage tool
+ custom scripts
= fragile study operations
Because research teams still spend too much time stitching tools together. A typical study can involve one system for surveys, another for task delivery, another for EEG acquisition, another for storage, and a pile of scripts to align timestamps and formats later. Remote studies make this worse because participants get bounced between links, apps, upload flows, and instructions that were never designed to feel like one study.
We have watched teams lose months to that integration work. Data gets dropped at the joins. Participants disappear when the workflow feels fragile. File formats drift. The study becomes harder to trust because the pipeline is harder to understand.
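To make the integration tax concrete, here is a minimal sketch of the kind of glue script teams end up writing today, assuming CSV exports from a hypothetical task tool and a hypothetical EEG recorder. The file names and column names are illustrative, not from any specific product:

```python
# Typical glue code: align behavioral events to EEG markers by timestamp.
# File names and columns are illustrative, not from any specific tool.
import pandas as pd

events = pd.read_csv("task_tool_export.csv")    # one row per stimulus/response
markers = pd.read_csv("eeg_marker_export.csv")  # one row per hardware trigger

# Tools disagree on epoch and units; normalize both to UTC datetimes.
events["ts"] = pd.to_datetime(events["unix_ms"], unit="ms", utc=True)
markers["ts"] = pd.to_datetime(markers["unix_s"], unit="s", utc=True)

# Nearest-match join with a tolerance; anything outside 50 ms silently
# becomes NaN. This is where data gets dropped at the joins.
aligned = pd.merge_asof(
    events.sort_values("ts"),
    markers.sort_values("ts"),
    on="ts",
    direction="nearest",
    tolerance=pd.Timedelta("50ms"),
)
print(f"{aligned['trigger_id'].isna().mean():.1%} of trials lost at the join")
```

Multiply that by every pair of tools in the stack, and the months disappear.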
NeuroFusion exists to collapse that sprawl into a single product. Researchers can design a study with prompts, onboarding, consent, experiments, and recordings in one place. Participants can join from mobile, web, or tablet. Behavioral data and brain data land in the same system. Exports come out clean. Analysis can run as the study runs.
That is the product bet: durable value sits with the platform that owns the whole workflow.
What we think matters most
one study container
-> behavior
-> brain data
-> wearable context
-> analysis-ready outputs
Four things matter most to us:
- Scope. Prompts, jsPsych experiments, consumer EEG, and wearable health data should live inside the same study container.
- Privacy. Brain data is sensitive. Participation should not require an identity-heavy workflow by default.
- Community research. Useful studies should be able to happen outside formal institutions when the tooling is good enough.
- Reproducibility. Teams should understand how data was collected and analyzed instead of trusting a black box.
The infrastructure layer we care about
collection -> context -> storage -> analysis -> reuse
Good science and good models already exist. The problem is that they often sit as isolated assets: a dataset in cloud storage, a model behind a paper, a preprocessing library that takes too much setup, or a notebook that never leaves one lab.
We want that stack to feel usable.
A recording should come through one API whether it was collected at home on a Muse or in a community session on a Neurosity Crown. The same study container should hold participant metadata, prompts, experiment context, and device provenance. Researchers should not have to manage manual uploads and email chains just to keep a dataset coherent.
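To make that concrete, here is an illustrative sketch of what a device-agnostic recording interface could look like. The names below are hypothetical, not a published NeuroFusion SDK; the point is that device differences stay below one recording type:

```python
# Illustrative sketch only -- hypothetical names, not a published SDK.
# The idea: one recording type regardless of which headset produced it.
from dataclasses import dataclass, field

@dataclass
class Recording:
    participant_id: str          # pseudonymous by default
    quest_id: str                # the study container it belongs to
    device: str                  # e.g. "muse-2" or "neurosity-crown"
    sampling_rate_hz: float
    channels: list[str]
    samples: list[list[float]]   # channels x time, simplified here
    context: dict = field(default_factory=dict)  # prompt, task, session notes

def ingest(recording: Recording) -> None:
    """One entry point whether the data came from a Muse at home or a
    Crown at a community session; device provenance travels with it."""
    assert recording.sampling_rate_hz > 0
    # ... validate, timestamp, and store alongside the quest's metadata ...

ingest(Recording(
    participant_id="anon-7f3a",
    quest_id="sleep-and-alpha",
    device="muse-2",
    sampling_rate_hz=256.0,
    channels=["TP9", "AF7", "AF8", "TP10"],
    samples=[[0.0]] * 4,
    context={"condition": "eyes_closed"},
))
```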
Analysis should also be extensible without turning every project into a platform-engineering effort. We ship built-in pipelines for spectral decomposition, ERP extraction, and common comparisons such as eyes-open versus eyes-closed alpha. Researchers can also attach their own Python scripts to a quest and run them on incoming data, nightly aggregates, or combined datasets across multiple experiments.
That matters because the interesting work rarely lives in one table. The platform should support multi-modal analysis without forcing researchers to rebuild the collection layer every time.
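For a flavor of what an attached analysis script might look like, here is a minimal eyes-open versus eyes-closed alpha comparison built on standard scientific Python. The placeholder arrays stand in for the segments the platform would hand the script:

```python
# Minimal sketch of an attachable analysis: eyes-open vs eyes-closed alpha.
# The input arrays are placeholders for EEG segments the platform provides.
import numpy as np
from scipy.signal import welch

FS = 256  # Hz, typical for consumer headsets like the Muse

def alpha_power(segment: np.ndarray, fs: int = FS) -> float:
    """Mean power in the 8-12 Hz alpha band via Welch's method."""
    freqs, psd = welch(segment, fs=fs, nperseg=fs * 2)
    band = (freqs >= 8) & (freqs <= 12)
    return float(psd[band].mean())

rng = np.random.default_rng(0)
eyes_open = rng.normal(size=FS * 60)    # placeholder: 60 s of signal
eyes_closed = rng.normal(size=FS * 60)

ratio = alpha_power(eyes_closed) / alpha_power(eyes_open)
print(f"closed/open alpha ratio: {ratio:.2f}")  # > 1 expected in real data
```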
What the progress already looks like
2021 question
|
2022 personal experiments
|
2023 early community studies
|
2024 research collaborations + platform depth
|
2025 live multi-modal quest system
The roots of Fusion go back to a simple question from late 2021: if we already generate so much data across our apps, could we use it to understand a little better how we work and feel?
That turned into early experiments correlating music, sleep, and work patterns. Very quickly we hit the same wall each time: Spotify knew what someone listened to. The phone knew sleep duration. Wearables knew steps and heart rate. None of those systems knew what was happening in the brain while the rest of life unfolded.
Since then, the work has moved across Lagos, Accra, San Diego, Vancouver, Toronto, Istanbul, Tampa, and Telford. We ran community events, cold plunge experiments, BrainHack Toronto sessions, and early collaborations around cognitive assessment. The pattern was consistent: once setup friction dropped, more experiments became possible.
Today the platform is live on iOS, Android, and the web. Research groups at the University of Toronto, the University of Plymouth, and the University of Port Harcourt are using it. Quests already support jsPsych 8 experiments, media uploads, Prolific recruitment, organization billing, automated Python analysis scripts, and multi-experiment workflows. The data layer already includes self-reports, cognitive task performance, resting-state EEG, event-related potentials, FOOOF-based frequency analysis, steps, sleep, and heart rate.
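For readers who have not met it, FOOOF decomposes a power spectrum into an aperiodic background and periodic peaks. A minimal usage sketch with the open-source fooof package and synthetic data (this is not our internal pipeline code):

```python
# Minimal FOOOF usage sketch with synthetic data -- not NeuroFusion's
# internal pipeline, just what "FOOOF-based frequency analysis" means.
from fooof import FOOOF
from fooof.sim import gen_power_spectrum

# Synthetic spectrum: an aperiodic component plus one 10 Hz alpha peak
freqs, spectrum = gen_power_spectrum([1, 40], [0, 1], [10, 0.3, 1])

fm = FOOOF(peak_width_limits=[1, 8], max_n_peaks=4)
fm.fit(freqs, spectrum, [1, 40])

print(fm.aperiodic_params_)  # [offset, exponent]
print(fm.peak_params_)       # rows of [center freq, power, bandwidth]
```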
We have also released open datasets because we want the data commons we wished existed when we started.
What we are building toward
single visit
-> repeated measures
-> individual baseline
-> useful prediction
The long-term goal has not changed since Entry #00: increase the rate of experimentation enough that we can build useful predictive models for a single person.
Most brain research still produces group-level statements. Those results matter, but they do not yet tell an individual what changed for them this week, what usually happens next, or what intervention helped the last time a similar pattern showed up. To get there, you need repeated measurements from the same person across time.
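One way to make "individual baseline" concrete: score each new session against that person's own history rather than a group mean. A minimal sketch, with hypothetical column names:

```python
# Sketch: score each session against a person's own rolling baseline,
# not a group average. Column names are hypothetical.
import pandas as pd

sessions = pd.DataFrame({
    "participant": ["p1"] * 6,
    "alpha_power": [4.1, 4.3, 4.0, 4.2, 4.1, 2.9],  # last session dips
})

g = sessions.groupby("participant")["alpha_power"]
baseline_mean = g.transform(lambda s: s.shift(1).expanding().mean())
baseline_std = g.transform(lambda s: s.shift(1).expanding().std())

# z-score of each session relative to that person's prior history
sessions["z_vs_self"] = (sessions["alpha_power"] - baseline_mean) / baseline_std
print(sessions.tail(1))  # the dip stands out against the individual baseline
```

None of that works with one snapshot per person; it only works with repetition.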
A few areas feel especially ready: remote cognitive assessment, community intervention studies, longitudinal baselines for individuals, longer-form mobile recording, and closed-loop experiments that adapt in real time.
Foundation models for brain activity are also coming into view faster than we expected. What is still missing is a large body of longitudinal, multi-modal data collected from the same people over time. That is exactly the kind of data our quest system is designed to produce.
The move from "30 participants, one snapshot each" to repeated measurements from thousands of people is the change we care about. Lab research gave us the science. We are building the infrastructure that can make that science more frequent, more personal, and easier to use.
If you want to run a study, this is your moment
narrow question
-> repeatable measurement
-> quest
-> usable dataset
The Great Unlock only matters if more people actually use it. If you are a researcher, clinician, builder, community organizer, or simply someone with a question you want to test, we would love to help you turn it into a quest on NeuroFusion.
A good starting point is usually a narrow question with a repeatable measurement:
- Does sleep duration change next-day Stroop performance, resting-state alpha, or self-reported focus?
- Does a two-week breathwork, meditation, or cold exposure practice shift mood, calm, or spectral power in a measurable way?
- Can a community run the same short EEG + prompt protocol before and after a shared event or intervention weekend?
- Do subjective energy, heart rate, steps, and short cognitive tasks move together during stress, recovery, or burnout?
Those are all quest-shaped studies. A quest can combine onboarding, consent, recurring prompts, jsPsych experiments, wearable data, media uploads, and EEG recordings inside one flow that participants can repeat over time.
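As a sketch of the shape (not the platform's actual schema; the field names here are illustrative), a quest for the first question above might be outlined like this:

```python
# Hypothetical quest outline -- field names are illustrative, not the
# platform's actual schema. The point is one container for the whole flow.
quest = {
    "title": "Sleep and next-day focus",
    "onboarding": ["welcome", "consent", "device_pairing"],
    "schedule": {"duration_days": 14, "prompt_time": "09:00"},
    "prompts": [
        {"id": "focus_rating", "type": "slider", "text": "How focused do you feel?"},
    ],
    "experiments": [
        {"id": "stroop", "runner": "jspsych", "version": 8},
    ],
    "recordings": [
        {"id": "resting_eeg", "devices": ["muse-2", "neurosity-crown"],
         "conditions": ["eyes_open", "eyes_closed"], "minutes": 3},
    ],
    "wearables": ["sleep", "steps", "heart_rate"],
    "analysis": ["alpha_power.py"],  # attached script, run nightly
}
```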
If you want to pilot a study, run a community challenge, or test a protocol with your own group, reach out to us at contact@usefusion.app or start exploring the NeuroFusion Explorer quest dashboard.
The lab showed what is measurable. The clinic showed what is at stake. Home and community settings are where this becomes routine.
The unexamined life is not worth living. We want to build better tools for examining it.
It is time to build.
By NEUROFUSION Research, Inc.

