Currently, there are no participants in our dataset with >100 hrs -- intentionally so; we've been optimizing heavily for dataset diversity up to this point. We've explored the idea of fine-tuning on a particular participant's data, and we expect that this will be pretty impactful.
Yeah -- we have the participants use chinrests as well, which reduces head motion artifacts for typing but less so for speaking (because they have to move their heads for that, of course). So a lot of the data is with them keeping their heads quite still, although the model is becoming much more robust to head motion over time.
Yeah, I think the way we trained the embedding model focused a lot on making it as data-efficient as possible, since this is such a data-limited regime. So based on (early) scaling results, I think it'll be closer to 50-70k hours, which we should be able to get in the next few months now that we've already scaled up a lot.
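For anyone curious what extrapolating from early scaling results looks like mechanically, here's a minimal sketch: fit a power law with an irreducible floor to (hours, val loss) points and invert it for a target. The data points, functional form, and target below are all made up for illustration, not our actual numbers:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical (hours of data, val loss) points -- illustrative only,
# not our actual scaling results.
hours = np.array([500.0, 1000.0, 2500.0, 5000.0, 10000.0])
val_loss = np.array([1.90, 1.72, 1.55, 1.43, 1.33])

# Assume loss follows a power law with an irreducible floor:
#   L(N) = a * N**(-b) + c
def power_law(n, a, b, c):
    return a * n ** (-b) + c

(a, b, c), _ = curve_fit(power_law, hours, val_loss, p0=(10.0, 0.3, 1.0))

# Invert the fit: how many hours to reach a (made-up) target loss?
target = 1.20
needed_hours = ((target - c) / a) ** (-1.0 / b)
print(f"fit: a={a:.2f}, b={b:.3f}, c={c:.2f}")
print(f"~{needed_hours:,.0f} hours to reach val loss {target}")
```

The floor term c matters a lot here: with a pure power law you'd underestimate how much data the last bit of improvement costs.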
That said, the way to 10-20x data collection would be to open a couple of other data collection centers outside SF, in high-population cities. Right now, there's a big advantage in having data collection totally in-house, because it's so much easier to debug/improve while we're so small. But now that we've mostly worked out the process, it should be very straightforward to replicate the entire ops/data pipeline in 3-4 parallel data collection centers.
1. The predictions get better with more data - and we don't seem to be anywhere near diminishing returns.
2. The thing we care about is generalization between people. For this, less data from more people is much better.
I noticed you track sessions per person, implying that a subset of people have many hours of data collected on them. Are predictions for this subset better than the median?
For a given amount of data, is it better to have more people with less data per person or fewer people with more data per person?
Yes, the predictions are much better for people with more hours of data in the training set. Normally we keep the train and val sets totally separate, so no individual with any sessions in the train set is ever used for evals. When we instead evaluate on someone with 10+ hours in the train set, predictions get ~20-25% better.
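To be concrete about what "totally separate" means: the split is by participant, not by session. A minimal sketch (the field names are just for illustration):

```python
import random

random.seed(0)

# Hypothetical session records -- in reality each would point at hours
# of recorded neural data.
sessions = [
    {"participant_id": pid, "session_id": s}
    for pid in range(100)
    for s in range(random.randint(1, 12))
]

# Split by participant, not by session: every session from a given
# person lands entirely in train or entirely in val, so each eval
# participant is always zero-shot for the model.
participants = list({s["participant_id"] for s in sessions})
random.shuffle(participants)
val_ids = set(participants[: len(participants) // 10])

train = [s for s in sessions if s["participant_id"] not in val_ids]
val = [s for s in sessions if s["participant_id"] in val_ids]
assert not {s["participant_id"] for s in train} & {s["participant_id"] for s in val}
```

If you instead split by session, the model can memorize individual-specific signal features and the val numbers look much better than true zero-shot performance.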
For a given amount of data, whether you want more or less data per person really depends on what you're trying to do. What we want is for the model to be good zero-shot, that is, to decode well on people who have zero hours in the train set. For that, we want less data per person. If instead we wanted it to do as well as possible on one individual, then we'd want way more data from that one person. (So, e.g., when we first make it into a product, we'll probably fine-tune on each user for a while, as sketched below.)
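In outline, that per-user fine-tuning would look roughly like this. The model, feature shapes, and hyperparameters here are invented stand-ins, not anything from our actual stack; the point is just that you start from the general pretrained decoder and take a few low-learning-rate passes over one person's sessions:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical stand-ins: a small "pretrained" decoder and one user's
# sessions as (neural_window, label) pairs. All shapes are invented.
model = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 30))
neural_windows = torch.randn(2048, 256)   # fake neural features
labels = torch.randint(0, 30, (2048,))    # fake decoding targets
loader = DataLoader(TensorDataset(neural_windows, labels),
                    batch_size=32, shuffle=True)

# Small learning rate: we're adapting a general model to one person,
# not training from scratch.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
model.train()
for epoch in range(3):  # a few passes over the user's data
    for x, y in loader:
        optimizer.zero_grad()
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()
        optimizer.step()
```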
I wonder if there will be medical applications for this tech, for example identifying people with brain or neurological disorders based on how different their "neural imaging" looks from normal.
The second most useful by far is Indeed, where we post an internship opportunity for participants interested in doing 10 sessions over 10 weeks. Other things that work pretty well: asking professors to send out emails to students at local universities, putting up ~300-500 fliers (mostly around universities and public transit), and posting on Nextdoor. We also texted a lot of group chats, posted on LinkedIn, and gave out fliers and the signup link to pretty much everyone we talked to in cafes and similar. We take on some participants as ambassadors as well, and pay them to refer their friends.
We tried Google/Facebook/Instagram ads, and we tried paying for some video placements. Basically none of the explicit advertising worked at all, and it wasn't worth the money. Though for what it's worth, none of us are experts in advertising, so we might have been going about it wrong -- we didn't put loads of effort into iterating once we realized it wasn't working.
Hey, I'm Nick, and I originally came to Conduit as a data participant! After my session, I started asking the people working there questions about the setup, and apparently I asked good questions, so they hired me.
Since I joined, we've gone from <1k hours to >10k hours, and I've been really excited by how much our whole setup has changed. I've been implementing lots of improvements to the whole data pipeline and the operations side. Now that we train lots of models on the data, the model results also inform how we collect data (e.g. we care a lot less about noise now that we have more data).
We're definitely still improving the whole system, but at this point, we've learned a lot that I wish someone had told us when we started, so we thought we'd share it in case any of you are doing human data collection. We're all also very curious to get any feedback from the community!
I have dreamed many times about the same story, but with Apple or Epic Games. But they have millions of human beings testing their products FOR FREE all over the world, hahahaha