Brain Computer Interface w/ Python and OpenBCI for EEG data
- Published: Oct 14, 2024
- Learning how to read EEG data in Python for the purposes of creating a brain computer interface with hopes of doing things like controlling characters in a game and hopefully much more!
openbci.com/
#BCI #EEG #Python
Hello sentdex,
Usually, sampling rates are chosen using something called the Nyquist criterion: the sampling frequency must be at least double the highest frequency in the signal you're sampling. Brain waves are not pure sinusoids; they're a sum of multiple pure sinusoids of differing frequency and phase (i.e., a Fourier series), so you'd want your sampling rate to be higher than twice the frequency of the highest sinusoid in that series. The problem with sampling below double the signal's frequency is that when you try to reconstruct the signal (synthesis), the reconstructed frequency reflects around half the sampling rate. For example, the maximum frequency of gamma waves is 100 Hz; if you used a device to sample the wave at 150 Hz, the reconstructed signal would appear at 50 Hz instead of the original 100 Hz. This is called aliasing, which is bad and results in noise; it's also why anti-aliasing is a big deal in video graphics :D. The system you are using samples at 250 Hz, as someone else mentioned in the comments, so the acquired signals can be reconstructed without aliasing.
Just an addition to this: with highly aperiodic signals you need a sampling rate that is much higher than the Nyquist rate.
Hey could you help me out? I'm having a hard time trying to figure out what's sampling and how do we differentiate gamma waves from beta and so on.
I don't even know why he's resampling it; he should have a low-pass filter over the signal to get the different waveforms.
@@ceasazee6649 Is it because the OpenBCI GUI already sampled the data? Is there a better way to read the data from the GUI?
@@josephhubbard4332 You need twice the rate of the highest frequency component. If the signal is irregular/aperiodic, that means there'll be a higher sine-wave component in there somewhere.
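The aliasing effect described in this thread is easy to demonstrate with a short NumPy sketch; the frequencies are chosen to match the 100 Hz / 150 Hz example above:

```python
import numpy as np

fs = 150.0        # sampling rate (Hz): below the 200 Hz needed for a 100 Hz tone
f_sig = 100.0     # true signal frequency (Hz)
t = np.arange(0, 1.0, 1.0 / fs)        # one second of sample times
x = np.sin(2 * np.pi * f_sig * t)      # sample the 100 Hz sine at only 150 Hz

spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
peak = freqs[np.argmax(spectrum)]
print(peak)  # 50.0: the 100 Hz tone reflects around fs/2 = 75 Hz down to 50 Hz
```

The spectral peak lands exactly where the reply predicted: 150 Hz − 100 Hz = 50 Hz.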
Siraj: watch me build a brain-reading machine in a week
Startup*
where's the kickstarter? :p
HAHAHAHAHAAHHA
Don't forget the paper he wrote on BCI, where his synonym switcher accidentally changed BCI to "brain computer interaction".
@@sentdex Complicated Brain Space
Now use an RNN to predict your next thought 💭 😋
what is RNN ? can you tell me more
@@alibadr123 Thank you sir
@@will1337 please elaborate
William, do you have an email I may contact? I am a university student, Schuster, looking for some guidance in this field. Thank you!
Wait... This is amazing why has nobody done this?
you can bypass the print lag by writing it inline:
print("each line", end="\r")
It's an easy way to see if you have the bursting going on.
This reminds me a project of a friend, where he was playing pong with similar system. Can't wait to see what you will do with that !
This is why I'm glad I clicked the subscribe button 3 years ago. Looking forward to the next video. And now I have the best job in the world, as someone pays me to work from home and I sit there watching you :)
The docs have the sample rate: "For OpenBCI Cyton the sampling rate is 250Hz and it sends data to buffer every half second." This buffer behavior explains the bursting you saw in the raw data dump, but it doesn't affect the sample rate.
Also, just in my opinion, I'd probably FFT the data to get frequency features for ML to ingest, rather than trying to use resampling as a low-pass filter.
Rob Petti - interesting that you mention FFT-ing the data. I work with a similar kind of data but in a different domain: electromagnetic spectrum measurements using off-the-shelf scanners.
What caught my attention is where you mention feature extraction from FFT data to feed into ML. Say I already have FFT data gathered by sensors that are connected via a 4G dongle to send the results back to my server; rather than sending raw IQ data or FFT results, I get the average/max/min values of many channels per band. How can I extract features from this aggregated data? I would appreciate it if you could point me in the right direction.
I intend to use ML to -
1. detect anomalies in transmissions,
2. Look for patterns in order to locate any interference
Among other stuff....
Thanks.
@@anantharamaniyer9135 I am by no means an expert, I just know that the frequency domain -- at least for Sent's purposes -- would be more useful than the raw IQ data. The frequencies from the individual probes would give a better indication as to what is going on at any particular moment than an instantaneous readout of amplitude. In his case, I was thinking he could bucket the FFT data into discrete frequency ranges for each probe and use those as the input to the network. This is what I meant by frequency features, but I may be using the wrong terminology here.
I don't know nearly enough about ML to know how to train something to look for anomalies in time series data, or even to be able to understand time series data. I was always under the impression that such systems used statistical analysis rather than deep learning.
@@anantharamaniyer9135 Feature extraction is a lossy process, so you have to know beforehand what kinds of information you really care about, and then apply a suitable feature-extraction process that removes the components you don't need while preserving the significant parts. That reduces the information being transported without affecting your main purpose too much. I noticed that you only send "the average/max/min values of many channels per band"; sure, that is one kind of feature extraction, but think about the physical meaning of those features. I'd suggest including the mean, standard deviation, etc., since those values better capture the distribution of the samples than outlier values do. Furthermore, you could preserve only the significant part of the FFT result, as many audio/video compression algorithms do (e.g., sort the output values from the FFT and keep only the top ten, annotated with their frequency indices). There are still many other options for extracting features, like PCA, autoencoders, and ANN models. For instance, you could train another, smaller ML model to transform the raw data into feature vectors, like word2vec does, and make that model small enough to run on the local capturing device.
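The "frequency features" idea from this thread can be sketched as binning FFT power into the conventional EEG bands; the band edges below are common conventions, not anything from the video:

```python
import numpy as np

def band_powers(signal, fs, bands=None):
    """Average FFT power inside each named frequency band: one feature per band."""
    if bands is None:
        bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
                 "beta": (13, 30), "gamma": (30, 45)}
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    return {name: power[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in bands.items()}

# A 10 Hz test tone should dominate the alpha band.
fs = 250
t = np.arange(0, 2, 1 / fs)
features = band_powers(np.sin(2 * np.pi * 10 * t), fs)
print(max(features, key=features.get))  # alpha
```

Computed per channel and per time window, the resulting dict values make a compact, fixed-size feature vector for a network to ingest.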
Sentdex in a week: uploading my brain to the singularity w/ python
Couple weeks for that*
That’s nothing to joke about
@@nathanthomas5430 yes it is
You're amazing. Every time trying something new.
That was on a whole new level, to be honest, and I am eagerly waiting for your next uploads in this context. You could use unsupervised learning to cluster the types of waveforms into activities. Though you would have to create a dataset, I believe, with you thinking about different things, capturing those particular impulses from this wonderful machine, and training a model on it.
Good Luck mate!!!
This is one of my ideas for doing this. It shouldn't take too long to have a dataset that will at least show me whether or not there's something here.
I talked with a friend about something along these lines. Two days later sentdex just uploads an awesome video, you're the best!
I just recently started learning python. My wife has a friend that due to a car accident hasn't had speech or mobility save her eyes and some in one hand for the last 15 years. She recently lost her mother to make matters worse. Thank you so much for posting this! I'll be working towards seeing if I can get her the ability to send messages and browse the internet.
I was reading to see where BCI technology is at today. So excited to see what you'll do with it.
Saw your post on Reddit about how you started this channel and received job offers from it. Great thinking, and I will keep that in mind as I am learning to code. Peace!
Reddit link??
I've been offline for a couple of weeks travelling
U can use unsupervised learning to cluster this data and control some sort of robot or automate some task.
This is definitely something I want to try
When I was designing fNIR brain-computer interfaces for DARPA, we used the motor cortex area for control. When you think about moving your right hand or your left hand, that area activates the same as when you actually move your hands. I think you would have better results if you located two sensors on the motor cortex for the hands; the hand area is the largest...
It doesn't work that way with EEG though, right? You need to get the ERPs correlated with the right-hand movements and then average them over time. Ideally, a neural net would be able to identify the ERPs itself or increase the signal strength by amplifying ERPs that are likely from right-hand movements. You also need a reference electrode, so two sensors per se are not enough.
Commercial fNIRS would be great though lol
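The ERP averaging described a couple of comments up can be sketched with synthetic data; the trial count, noise level, and 300 ms peak latency below are illustrative, not from any real recording:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 250                                  # Cyton sample rate (Hz)
n_trials = 500
epoch = np.arange(-0.2, 0.8, 1 / fs)      # -200 ms to +800 ms around each event

# Synthetic single trials: a small ERP-like bump buried in much larger noise.
erp = 2.0 * np.exp(-((epoch - 0.3) ** 2) / (2 * 0.05 ** 2))  # peak near 300 ms
trials = erp + rng.normal(0.0, 5.0, size=(n_trials, epoch.size))

# Any single trial looks like pure noise; averaging across trials cancels the
# noise (which has zero mean) while the time-locked ERP survives.
average = trials.mean(axis=0)
print(epoch[np.argmax(average)])          # peak latency lands near 0.3 s
```

With real EEG you'd extract the epochs around event markers instead of synthesizing them, but the averaging step is the same.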
LOVE THIS PLEASE KEEP IT UP!
I always had an interest in BCI, would love it if you make it a long series!
Michael Reeves did a pretty good video on making his car drive forward using his thoughts. He used one of those toys you mentioned. All he did for that was check if the reading went above a certain threshold. So it'll be really cool what kind of control you can get using some proper ML.
I really look forward to this :)
You do the community such an amazing service by providing the content you do, and presenting it how you do. Amazing amazing work.
YESSSSSSSSSSSSS!! So pumped for this!
P.S. - What is that font you're using in your command prompt? Looks great!
It's called "unispace"
This is a fantastic channel! The manner with which you explain concepts makes them super easy to digest. Thank you for this video!
Nice job. I was looking at playing around with an EEG headset (I think a different company, maybe the same) last year, but the number of available readings seemed limited. Your video has inspired me to do some research. I dream of hands-free computer interaction.
The bursting in the signal is probably caused by the Bluetooth stream over the usb dongle, which is slow. There is a separate OpenBCI WiFi Shield available, which can be slipped onto the cyton board, which provides a fast wifi stream. It is also possible to connect the Cyton board directly via cable to a usb port of a PC.
For frequency-based data like this, I'd recommend using pywt to do a wavelet transform in real time, so that you can get the time series of the different frequency bands. Then you can do your machine learning on that.
You'll still be limited to at most 50 Hz from a 100 Hz sampling rate, but you can get all the frequencies below that with a continuous wavelet transform.
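As a rough NumPy-only illustration of the idea (pywt implements this properly), convolving the signal with a complex Morlet wavelet per frequency gives band-limited power over time; the frequencies and cycle count here are arbitrary:

```python
import numpy as np

def morlet_power(signal, fs, freqs, n_cycles=5):
    """Time-frequency power via convolution with complex Morlet wavelets."""
    out = np.empty((len(freqs), len(signal)))
    for i, f in enumerate(freqs):
        sigma = n_cycles / (2 * np.pi * f)           # wavelet width in seconds
        t = np.arange(-4 * sigma, 4 * sigma, 1 / fs)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma**2))
        wavelet /= np.abs(wavelet).sum()
        out[i] = np.abs(np.convolve(signal, wavelet, mode="same")) ** 2
    return out

fs = 250
t = np.arange(0, 2, 1 / fs)
sig = np.sin(2 * np.pi * 10 * t)                     # alpha-band test tone
tfr = morlet_power(sig, fs, freqs=[6, 10, 20, 40])
print(np.argmax(tfr.mean(axis=1)))  # row 1: the 10 Hz wavelet responds strongest
```

Each row of `tfr` is one frequency's power over time, which is exactly the kind of time series you could feed into a model.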
You might be interested in some digital signal processing. High pass and low pass filters could be useful for viewing the data in specific frequency ranges. That way you could choose to only look at something like the gamma waves and filter out everything else. Python should have some good dsp libraries.
I can't remember if you mentioned it, but the Nyquist frequency is the highest frequency you can capture at a given sampling rate. So, if this is polling at 105 Hz, then you would only be able to capture frequencies up to 52.5 Hz.
Oh my goodness Harrison, seems the next series will be how to fly an UFO with python.
From your 1-second and 5-second graphs, it is clear that it is picking up mains interference. You should be able to use a digital band-stop filter to completely remove its effect and give you the non-modulated output. If you want, you could use a correlation filter to just subtract it from the original data, if you really wanted to preserve what data you find at exactly 50/60 Hz.
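A band-stop (notch) filter like the one suggested here could be sketched with SciPy; the 60 Hz mains frequency, Q value, and synthetic signal are assumptions for illustration:

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 250.0
f_mains = 60.0                                   # use 50.0 in Europe
b, a = iirnotch(f_mains, Q=30, fs=fs)            # narrow band-stop around 60 Hz

t = np.arange(0, 2, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t)                 # pretend alpha activity
noisy = eeg + 0.8 * np.sin(2 * np.pi * f_mains * t)   # mains hum on top

clean = filtfilt(b, a, noisy)                    # zero-phase filtering
# Away from the edges, the hum is removed while the 10 Hz component survives.
```

`filtfilt` runs the filter forward and backward so the result has no phase shift, which matters if you care about timing in the waveform.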
This is so cool! Python is so cool for interfacing with things easily
I didn't know Edward Snowden had a YouTube channel
Happy with what OpenBCI has done for the community. Glad to see this tech become mainstream.
I'm very sceptical when it comes to these sorts of things, to be honest. These EEG readers are oversimplifying things, which to me makes them pretty much novelty items. I am hoping to be proven wrong here, so I'm really, really looking forward to seeing more of this! :D
By these EEG readers, do you mean the commercially available ones? Because research-grade EEGs are incredibly insightful into the underlying processes of human behavior. The main issue with this commercial application is that users will likely have no idea about temporal/spatial resolution, noise, waveforms, event-related potentials, and other EEG-related topics.
This is amazing... Feeling excited for this ... You always amaze us 🤩🤩🤩
I love OpenBCI stuff. A lot of those high-frequency, high-amplitude oscillations you are getting may be 60 Hz noise from the power line and nearby electronics. I would try gathering the signal far from the computer or any cables, as well as notch filtering. Without it, I don't think the data is useful for a brain-computer interface. You should be able to filter up to 125 Hz, as the board's sampling frequency is 250. When I run the board for 1 second I get 250 samples, plus one or two extra, typically.
This upload is uncanny; yesterday I began reading about BCI.
Hey, in terms of sampling you need at least double the upper-bound frequency of what you want...
What I mean is that if you need the gamma band, which is 30-100 Hz in frequency, the minimum sampling frequency will be 200 Hz, which is 100 × 2!
Before I read the title, I thought “why tf is he wearing a shower cap” after seeing the thumbnail. Amazing video 👍
Nyquist frequency: you need a sample rate of at least twice the frequency you're trying to capture (so for gamma waves between 38-42 Hz, it would be 84 Hz bare minimum, but more in practice, like a 100 Hz sample rate or higher)
You need to apply a bandpass filter, say from 0.5 Hz to 45 Hz. This will remove both mains noise and the DC offset present with DC-coupled amplifiers such as the OpenBCI Cyton.
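Such a bandpass could be sketched with a SciPy Butterworth filter; the filter order and the synthetic signal below are illustrative, not from the video:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 250.0
sos = butter(4, [0.5, 45.0], btype="bandpass", fs=fs, output="sos")

t = np.arange(0, 8, 1 / fs)                   # 8 seconds of fake data
raw = (50.0                                   # DC offset from a DC-coupled amp
       + np.sin(2 * np.pi * 10 * t)           # EEG-like 10 Hz component
       + 0.5 * np.sin(2 * np.pi * 60 * t))    # mains noise
clean = sosfiltfilt(sos, raw)                 # zero phase; strips DC and 60 Hz
# Away from the edges, what remains is close to the 10 Hz component alone.
```

Second-order sections (`output="sos"`) are numerically safer than `(b, a)` coefficients for higher-order filters.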
That was very interesting! Making a video for EEG data was a very creative idea!
Very interesting stuff. I imagine OpenBCI can't handle artefacts like blinking and speech, so it's worth recording data without speaking for best results!
Might I suggest redoing the latter part of this video after the clarifications I suggested. Live "learn while you shoot video" is a fun concept, but it may also confuse new users of OpenBCI. Regards, William
Thank you so much for the video! Just about to start an EEG related project in the lab.
holy crap ive been wanting to do something like this forever. Subbed and really looking forward to the next vid on this topic! I may even buy one of these kits myself.
Hello, Charles Xavier... how many mutants have you got?
you should get 200k subscribers for this series alone!
Leggoooo 1 million :D
@sentdex, Hey man, great content!
Why aren't you using Fourier transform to filter raw data into different frequency bands?
Hopefully the sky is not the limit for Sentdex. It is absolutely amazing! State-of-the-art.
The OpenBCI foundation has an official Python module called Brainflow which has built in downsampling.
Very good video, I hope there is more coming about BCI from now on.
This is just awesome. You are taking educational vids to next level. Keep going.
Awesome video! Well done! If you do more of these videos, please share with NeuroTechX so they can share with their community of BCI enthusiasts.
Wow, so amazing! From experience with previous projects, it might be faster to use C++ (or equivalent) for the FFT or high-pass filter to get the frequency components of the signal you want. I also wonder if it would be easier to start by mapping muscle movements to in-game actions, like stomping your foot or hitting your side, rather than thoughts, so you can repeat them easily.
Why don't you try it out and tell us instead of asking these useless questions?
Sentdex, thank you so much for this video. That's what I was needing for my undergrad research!
welp, fingers crossed this is the point where Harrison just crosses over and decides to go full super-villain - with a BCI cap and pug-mug, he strikes fear into the hearts of governments everywhere...
Sweeeeeeet. I'm looking forward to this series so much!
There are some Kaggle datasets around this type of thing that you might want to look at, such as "Grasp and Lift EEG Detection" and "Emotions using brain wave data". Get your LSTMs initialized, as it's time to backpropagate Sentdex's brain.
"AI Researchers Pave the Way For Translating Brain Waves Into Speech" - news.developer.nvidia.com/translating-brain-waves-into-speech/
Always a great pleasure watching your video
You can make a project to control a DJI Tello using this just by looking at the video stream from Tello (DJI Tello can be controlled using python) and that'd be super cool !
Super interested in more EEG stuff! Really wanna see how much you can get out of the 16 channels.
Skipped to 11:52; this is the most realistic, futuristic shit I've seen LMAO
There it is! Was waiting for this after seeing your Instagram post!👌
Amazing Video as ever!
However, I have a small question; maybe I misunderstood it.
You said that the low-granularity wave obtained refers to the theta and delta waves. However, the recording of the BCI data was done while you were awake. Shouldn't that imply that the deltas and thetas should be more or less constant, if not zero?
"Some People Can Have Delta And Theta While Awake"!"
@@whoisabishag3433 Thanks for the clarification Abi
@@HT79 no problem. That said most people don't. I do
Thanks for sharing this... Do you plan to do more?
Wow! I'm so happy you're exploring so many interesting stuff. Best youtuber ever!! ♥️🙌
your eeg lit up when you mentioned machine learning
this is so freaking cool
They are probably using a buffered serial reader or even writer. If you open the file descriptor directly, you could potentially get the whole uninterrupted steady stream of data
I've solved the issue since recording this. Stay tuned :d
Would definitely like to see what you do next.
Just a note on sampling rate: it needs to be at least twice the max frequency of the signal, so in this case we'd need 200 Hz for gamma waves.
I eventually found the Nyquist theorem. Gamma is more like 30+ Hz; you don't need 100 Hz. 100 Hz would still be gamma waves, but I don't think I'll need those.
Anyway, I've solved quite a few things since recording this. We could still get 100 Hz after sampling if needed. Stay tuned!
Record data while watching porn. See what gets lit up.
interesting idea
Occipital lobe for vision and reward centers (nucleus accumbens) because dopamine. Maybe the motor areas depending on if you're jacking it.
Nothing can really "light up" though; that's more fMRI and fNIRS. From EEG we can't really determine the brain areas unless we use source-localization algorithms.
@@acidtears It would be great if we could have those technologies as small gadgets like these for the general public to test, but unfortunately not for fMRI.
Bursting is such a cool term.
You should use some digital frequency filters before resampling the raw signal to minimize aliasing.
nice "Not a Real" Flamethrower!
really looking forward into these video series.
Very interesting video!
Very important: in general, it is a very bad idea to low-pass filter your data by decreasing your sampling frequency, as it will lead to aliasing problems (google "aliasing").
Instead, use a discrete-time filter; a higher-order Butterworth filter is probably a good choice.
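SciPy's `decimate` does essentially this, applying an anti-aliasing low-pass filter (Chebyshev by default) before subsampling; a small sketch comparing it with naive subsampling, with illustrative frequencies:

```python
import numpy as np
from scipy.signal import decimate

fs = 250
t = np.arange(0, 2, 1 / fs)
# 10 Hz signal plus a 100 Hz component that aliases to 25 Hz if naively subsampled
x = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 100 * t)

naive = x[::2]         # plain subsampling to 125 Hz: 100 Hz folds down to 25 Hz
safe = decimate(x, 2)  # low-pass filters below the new Nyquist, then subsamples

def power_at(sig, fs_sig, f):
    """Magnitude of the FFT bin nearest frequency f."""
    spec = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs_sig)
    return spec[np.argmin(np.abs(freqs - f))]

print(power_at(naive, 125, 25) > 10 * power_at(safe, 125, 25))  # True: alias gone
```

The naive version shows a phantom 25 Hz peak that was never in the brain signal; the filtered version keeps the 10 Hz content and suppresses the alias.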
Going to save this and hopefully recreate it someday, such an awesome project
Just found this channel. This is amazing. Subbed
this is simultaneously hilarious and awesome...and i'm not sure why on either one.
You're crazy dude :D Keep going :)
This is so awesome. Thank you and keep up the good work 😊
Dude, this is awesome! I used to daydream about this type of stuff a decade ago! What a cool video. I'm ambidextrous and I draw a lot, and I mess with neural nets as a hobby. Can't wait to play with one of these and see what my brain is doing when I draw.
I once worked in a startup where we predicted the emotion of a person given the brain waves, so how about you try something like that, use your ML for the brain waves :D
You are revealing all your secrets here, but only few can translate them. Secret services can decode all thoughts, and animate them on screen. This is also how all worlds memories can be searched, and feelings can be used as variables.
As a neuroinformatics student: i love u Dood
This is great! Is the unit you received the "All-in-One Biosensing R&D Bundle" for $2500?
I think so, though I didn't get a pulse sensor which comes with that kit. Might be some other difference that I'm missing too.
@@sentdex The pulse sensor is an already-copied design; you can buy them very cheap on Amazon.
It sounds so good. As you said, this could be your best tutorial ever. Hopefully I'm able to keep up 😅
Can't wait to go along with this project.
Same :D
Awesome!!! I've had this idea for years, just haven't had the time to play around. I'm glad someone is showing their foray into the subfields.
Wow, so much to learn from this series. Excited to see further tutorials in this series. 🤩🤩👌😁
Could you do a tutorial on CMA-ES for reinforcement learning?
I have not heard of CMA-ES, but I can peek into it and maybe cover it.
@@sentdex okay thanks ✌️💪👐
Is this just a coincidence, Mr. Kinsley, that you are wearing a HAWKINS t-shirt while trying to turn into ELEVEN?
Nothing to see here, move along people!
@sentdex
Do you have any document or video going into more detail about the whole process so we can follow along? For example, I don't know how you converted the TXT file output from OpenBCI into an NPY file, and there are many other things I am missing; in the second video you have completely different code. Do you have a git repository of the code that we can use to follow along and learn how to do this? Thanks for your time.
Kind regards,
Nafis
You've got to go a step further and check the polarity with a multimeter; they might have switched the sleeve colors, not the connector position. I've had LEDs where the long leg was negative (unless that's how all 485 P infrared LEDs are made).
Hahaha too late now, but yeah that's a fantastic point. After seeing things were awry, I should not have assumed the issue was polarity flipped at all. Probably more likely wrong color wiring used. Great point, thank you. You might have just saved me lots of money in the future!
You can read waves through time. I have seen people on old movies put up their hands when I asked. Have fun with that.
Man you're wild. Love it!
Wait... So I can just think about a game and train your network? You don't even have to actually play. Awesome!
What are you referring to when you say "resampling"? Are you talking about decimating? That won't yield the results you want without some pretty good filters. The better bet is to take a lot of data and just compute an FFT. I suspect that would perform much better than filtering and decimating the raw data several times.
You can use gumpy toolbox for processing your raw data in python :)
(5:26, 5:30, and a few more) Benign rolandic discharges? What is the reference, linked ears? Can the software calculate an average reference?
This is really interesting, im excited for new videos
Me too!
I doubt the channels are continuously streamed; my guess is that the channels are sampled at a reasonable rate (say 1 kHz), batched, and sent via the serial port. Thus each batch can be analysed (FFT) at a workable bandwidth (i.e., up to 100 Hz) without too much filtering. 16 channels of 8-10 characters each over Bluetooth is going to make for low average bandwidth (find the BT baud rate and do the maths).
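The back-of-the-envelope maths suggested here might look like this; all the numbers are assumptions for illustration, not actual Cyton specs:

```python
# Rough bandwidth check for a hypothetical EEG stream over a serial link.
channels = 16
bytes_per_sample = 3          # assuming 24-bit ADC readings
fs = 250                      # samples per second per channel
payload_bps = channels * bytes_per_sample * fs * 8   # bits/s of raw sample data
print(payload_bps)            # 96000 bits/s

baud = 115200                 # a common serial-over-Bluetooth rate
print(payload_bps < baud)     # True, but with little headroom for framing overhead
```

Under these assumptions the raw payload fits a 115200 baud link, which is consistent with data arriving in bursts rather than a perfectly smooth stream once framing and radio overhead eat into the margin.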
What are your thoughts on full-dive VR? Do you think we could improve the resolution of BCIs to the point of simulating real-life vision, sensation, smell, taste, etc.? I'm not even including balance sensation and visualisation reading.