Getting Somewhere! - Brain Computer Interface w/ Python, OpenBCI, and EEG data p.3

  • Published: 14 Jan 2025

Comments • 116

  • @LikeAndFavBF3
    @LikeAndFavBF3 5 years ago +104

    Please continue this video series, it is amazing and truly an inspiration. Great job!

  • @ThisIsHatman
    @ThisIsHatman 4 years ago +43

    Drop a like if you're still waiting for part 4.

  • @willdawiz0
    @willdawiz0 5 years ago +31

    Cool mug! I have a bit of experience in EEG and ML, specifically learning to classify sleep phases -- you've chosen a super difficult but interesting problem. Not sure where you'd ultimately like to take this but a few recommendations:
    1. agree with others that wavelets might be best transform
    2. absolutely should be using a time window of data, you're throwing out a crucial aspect! brains are dynamic systems, how things are changing is arguably more important than their current value state, preprocessing into frequencies doesn't mean you can't take advantage of the time-series nature of the data.
    3. 'Left' vs 'Right' is intuitively simple but thorny given what you're getting out of an EEG, these mental concepts are distributed all over your brain in different activity patterns, and they're actually pretty similar concepts. If you're willing to change your task but want to stick to 3 classes, you could try rest and two wildly different concepts that will evoke different brain regions and rhythms, even emotions -- MRI has way more spatial/temporal resolution and still benefits from this: ( www.theguardian.com/science/2010/feb/03/vegetative-state-patient-communication ) . Otherwise I saw a suggestion to play flappy bird with it on the last video, and binary event classification is way easier, as you've deduced. You can use this principle even in multi-class, training multiple binary classifiers and then using your favorite ensemble/arbiter method.
    4. Definitely don't watch the computer / graphics output as you create training data unless you've really thought the experiment through, Neurofeedback does some cool stuff in a therapy setting but might corrupt your "Left" thought with "the square is going right!" thoughts
    5. I'm a bit out of practice in conv nets but consider a way bigger model / param number, or generally get creative with architecture, you're trying to predict the output of a biological neural net with something like quintillions of parameters
    Loving the series so far, good luck ;)
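
    Point 2 above can be sketched in a few lines of numpy: keep a sliding window of raw samples per channel, then run a one-level Haar wavelet split on each window. The 16-channel layout matches the video, but the sample counts and window sizes here are assumptions for illustration:

```python
import numpy as np

def sliding_windows(eeg, win, step):
    """Cut a (channels, samples) recording into overlapping (channels, win) windows."""
    starts = range(0, eeg.shape[1] - win + 1, step)
    return np.stack([eeg[:, s:s + win] for s in starts])

def haar_level(x):
    """One level of a Haar wavelet transform along the last axis:
    returns approximation and detail coefficients at half the length."""
    even, odd = x[..., ::2], x[..., 1::2]
    return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

rng = np.random.default_rng(0)
eeg = rng.standard_normal((16, 1000))             # 16 channels, 1000 samples (assumed)
windows = sliding_windows(eeg, win=128, step=64)  # (14, 16, 128): time kept, not averaged away
approx, detail = haar_level(windows)              # each (14, 16, 64)
```

    The windows keep the time-series dynamics the comment argues for, instead of collapsing each moment to a single FFT frame.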

  • @engineerhealthyself
    @engineerhealthyself 5 years ago +27

    Me: I wonder what happens if I stack a blueberry and strawberry poptart and eat them together.
    Sentdex: *building a brain computer interface with python*

  • @tarmiziizzuddin337
    @tarmiziizzuddin337 5 years ago +4

    Actually i'm doing research on BCI specifically on decoding movement intention using deep learning, your channel is very informative, thanks for the effort!

  • @Larry321ness
    @Larry321ness 2 years ago +1

    I have just bought a headset and am so glad this series exists

  • @danielf3623
    @danielf3623 5 years ago +21

    Still recommend using stationary wavelet transforms on the raw data instead of FFT. FFT doesn't give you relative phase (unless you're pulling the real/imaginary values too) and has nonlocal artifacts, since it has to transform a fixed-length block of unweighted time series data.
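
    For reference, the complex output of numpy's rfft does carry both the magnitude and the relative phase this comment mentions; it is only lost if you keep np.abs alone. A quick sketch on a synthetic 10 Hz tone (the 250 Hz rate is an assumed OpenBCI-style value):

```python
import numpy as np

fs = 250                              # assumed sample rate (Hz)
t = np.arange(fs) / fs                # one second of samples
x = np.sin(2 * np.pi * 10 * t + 0.5)  # 10 Hz tone with a known phase offset

spec = np.fft.rfft(x)                 # complex spectrum: keeps BOTH magnitude and phase
freqs = np.fft.rfftfreq(len(x), 1 / fs)
magnitude = np.abs(spec)
phase = np.angle(spec)
peak = int(np.argmax(magnitude))      # dominant frequency bin
```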

  • @lokthar6314
    @lokthar6314 3 years ago +3

    Thanks for this video series, I'd wish you continued this and showed maybe a demo of you moving boxes with your Jedi power

  • @TheHackysack
    @TheHackysack 5 years ago +1

    It's about time you released another video that I'm not _quite_ ready for. Side note: Four months ago, your Python tutorial series helped to give me the head start I needed to quickly pick up programming finally. Now I'm obsessed with coding. Like, I don't do anything else anymore. I used to be miserable, damnit. Now I feel as if I need to do something productive with my life now. So, like, thanks, I guess, for giving me a purpose. Asshole.

  • @ethandickson9490
    @ethandickson9490 5 years ago +2

    Consider the common spatial patterns approach ( en.wikipedia.org/wiki/Common_spatial_pattern ); it is used specifically for this task and is relatively simple to implement. Using wavelets, as mentioned, is also a good idea.
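
    A minimal, numpy-only sketch of the CSP idea (whiten the summed class covariances, then eigendecompose one class's whitened covariance). The two-class synthetic data and all shapes below are made up for illustration:

```python
import numpy as np

def csp_filters(trials_a, trials_b):
    """Common Spatial Patterns: whiten the composite covariance, then
    eigendecompose class A's whitened covariance. trials_*: arrays of
    shape (n_trials, n_channels, n_samples). Returns W whose first row
    maximizes class-A variance and last row maximizes class-B variance."""
    def mean_cov(trials):
        return np.mean([t @ t.T / np.trace(t @ t.T) for t in trials], axis=0)

    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    evals, evecs = np.linalg.eigh(Ca + Cb)
    P = np.diag(evals ** -0.5) @ evecs.T   # whitening transform
    d, B = np.linalg.eigh(P @ Ca @ P.T)    # eigenvalues ascending
    order = np.argsort(d)[::-1]            # descending: class A first
    return B[:, order].T @ P

rng = np.random.default_rng(0)
# synthetic 2-class data: class A is louder on channel 0, class B on channel 1
a = rng.standard_normal((20, 4, 256)); a[:, 0] *= 3
b = rng.standard_normal((20, 4, 256)); b[:, 1] *= 3
W = csp_filters(a, b)
```

    Log-variances of the first and last CSP components are then the usual features fed to a simple classifier.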

  • @Mohamm-ed
    @Mohamm-ed 4 years ago +1

    Please more videos about BCI you are amazing

  • @mathematicalninja2756
    @mathematicalninja2756 5 years ago

    This is unbelievable man hats off

  • @DanielWilday
    @DanielWilday 3 years ago +1

    I just stumbled upon this, now slightly old, video series and it's really great. I have a question (which may never get answered as this is so old) While I know the BCI FFT data has the shape of (16, 60)... for a linear interpolation of this wouldn't you want to swap the axes so that the new shape would be (60, 16)?
    I'm new to Machine Learning so I'm likely wrong here, but in my limited understanding a Conv1D kernel travels linearly along the x axis, which would represent time (or in this case frequency), with the y axis acting like channels. If that's the case, I'd assume you'd want to travel along the waveform of a single FFT, across all channels.
    Again, I'm new to this and ok with being wrong. Any clarification on this would be super welcome.
    Thanks!
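
    For what it's worth, this intuition matches how a channels-last Conv1D works: the kernel slides along the first (steps) axis and mixes the last (channels) axis, so a (16, 60) FFT array would indeed be transposed to (60, 16). A toy numpy sketch of what one filter computes (random data, hypothetical kernel width):

```python
import numpy as np

def conv1d_valid(x, kernel):
    """What one Conv1D filter computes: x is (steps, in_channels),
    kernel is (width, in_channels). The kernel slides along the steps
    axis only; channels are mixed inside each window."""
    width = kernel.shape[0]
    return np.array([np.sum(x[i:i + width] * kernel)
                     for i in range(x.shape[0] - width + 1)])

rng = np.random.default_rng(0)
fft_data = rng.standard_normal((16, 60))  # (channels, frequency bins), as in the video
x = fft_data.T                            # -> (60 steps, 16 channels)
out = conv1d_valid(x, rng.standard_normal((5, 16)))
```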

  • @masternobody1896
    @masternobody1896 5 years ago +1

    Thanks for the knowledge sentdex

  • @danieladelodun9547
    @danieladelodun9547 5 years ago +2

    I wonder if it'd be helpful/possible to train the computer to know when you're thinking 'ohh yes, that is what I wanted to happen'

  • @alexandremarcil498
    @alexandremarcil498 5 years ago

    Hi Sentdex, nice project! Moving a mouse with your hand will not contaminate the EEG signal with EMG. To capture EMG, you would need EMG electrodes on your arm to capture the electrical activity of muscles moving your arm. This electrical activity is not recorded by the EEG (maybe only the intention or signal coming from the brain telling the arm to move, but that is not EMG). Your heart beats regularly (I hope), yet an EEG does not capture EKG data. To do that, you need electrodes positioned close to the heart, or on both arms. Moving your eyes or blinking your eyelids will give rise to some EMG artifacts, as these muscles are close enough to the electrodes on your head to be picked up (and also the fact that the EMG signal is much stronger than the EEG one). Hope this clarifies things a bit. What you are saying about EMG doesn't make sense. Thanks for sharing your code and data!

  • @kobaltauge
    @kobaltauge 5 years ago

    I love this series. Thank you very much.
    Since the first episode I've been asking myself: how do you think of "left" and "right"? Do you "say" the word? Do you visualize the word? Do you think about moving your body or head in that direction? Do you look in that direction? All these actions could create different waves. It would probably help to have the same visual reference when collecting the data, so your thoughts of the direction are the same. And when "predicting" you could mentally recall the visual reference and it would probably match better.

  • @memoai7276
    @memoai7276 5 years ago +5

    Do you use any automated hyper parameter optimization? If not, do you use any method for hyper parameter testing?

    • @sentdex
      @sentdex  5 years ago +2

      hyperparms for the network or what? If you mean for the network, I spend time manually fiddling, but I am not running any specific algorithm for tuning them.

    • @memoai7276
      @memoai7276 5 years ago +2

      @@sentdex Yeah, I meant the network hyperparameters. In my own work I have realized the importance of having a way to converge to the best settings. I have used an evolutionary algo in which you save all the parameters of different training runs to a csv and, after a pre-determined population size is reached, say 100, crossbreeding occurs: each new trial chooses a random parameter out of the best 100, ad infinitum, plus a mutation chance every so often. I have also been experimenting with Bayesian optimization, which definitely works but seems less efficient than the evo algo. In any case, if you are interested you can check out the optuna library for Bayesian hyperparameter tuning.
      I am currently running Bayesian optimization on your BCI data. If I get better acc than you had achieved I will let you know.
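
      The crossbreed-plus-mutation scheme described here can be sketched as a tiny loop over a toy search space; the mock_score function stands in for a real training run, and the space, sizes, and rates are all made up:

```python
import random

SPACE = {"lr": [1e-4, 1e-3, 1e-2], "units": [32, 64, 128], "layers": [1, 2, 3]}

def mock_score(params):
    """Stand-in for a real training run; peaks at lr=1e-3, units=128, layers=2."""
    return (-abs(params["units"] - 128) / 128
            - abs(params["layers"] - 2)
            - abs(params["lr"] - 1e-3) * 100)

def evolve(generations=30, pop_size=20, elite=5, mutate_p=0.1, seed=0):
    rng = random.Random(seed)
    sample = lambda: {k: rng.choice(v) for k, v in SPACE.items()}
    pop = [sample() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=mock_score, reverse=True)
        parents = pop[:elite]                 # keep the elites
        children = []
        for _ in range(pop_size - elite):
            # each child takes each parameter from a random elite parent...
            child = {k: rng.choice(parents)[k] for k in SPACE}
            # ...plus an occasional mutation
            if rng.random() < mutate_p:
                k = rng.choice(list(SPACE))
                child[k] = rng.choice(SPACE[k])
            children.append(child)
        pop = parents + children
    return max(pop, key=mock_score)

best = evolve()
```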

  • @semiclean
    @semiclean 5 years ago +1

    @sentdex, is there a way to do FFT rolling averages? In your first video you showed us that what was happening on different time scales was totally different. We could see the curve going up or down for a specific channel once the time series was averaged.
    Also, maybe start recording your FFT when you are in a "thinking about nothing" state, if your wonderland of a brain can be in such a state ;-), then think left for 5 minutes, stop recording, and see if the general FFT superposition of all channels looks different or if there is any response from specific channels.
    That's the best video series so far. I was in the middle of a computer vision notebook and I dropped everything to watch this!
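
    A rolling average over successive FFT magnitude frames is a few lines with a cumulative sum; the frame and bin counts below are arbitrary:

```python
import numpy as np

def rolling_fft_average(frames, k):
    """Average each FFT bin over a sliding window of the last k frames.
    frames: (n_frames, n_bins) -> (n_frames - k + 1, n_bins)."""
    csum = np.cumsum(frames, axis=0)
    csum = np.vstack([np.zeros((1, frames.shape[1])), csum])
    return (csum[k:] - csum[:-k]) / k

rng = np.random.default_rng(0)
frames = np.abs(rng.standard_normal((100, 60)))   # 100 hypothetical FFT frames, 60 bins
smoothed = rolling_fft_average(frames, k=10)
```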

  • @blackadam7078
    @blackadam7078 5 years ago +4

    So, my question is, how many cups do you really have?

  • @TheRelul
    @TheRelul 3 years ago +1

    well these were 80 minutes well spent

  • @diasakishev8897
    @diasakishev8897 5 years ago +4

    Just imagine him watching the monitor as all those graphs go down. And he realizes that he is dying.
    Sry. 2 o'clock in the morning.

    • @anonyme103
      @anonyme103 5 years ago +1

      Damn you that's horrible thinking xd you Evil

  • @LastVoyage
    @LastVoyage 4 years ago +1

    Waiting for part 4

  • @ab185
    @ab185 5 years ago +2

    Need Part 4!

  • @ThePinkoe
    @ThePinkoe 5 years ago

    You can do zero-phase FFT if you are set on doing FFTs. I worked on a project almost exactly like this, but with motor imagery instead of some abstract "left or right", and the largest problem for me was what you call the "none" class. Try transfer learning if you have not already, and definitely try wavelet transforms --> scalogram for your images.

  • @readdaily5680
    @readdaily5680 4 years ago +1

    Can I follow along with 8 Channel?

  • @pasquale-s5g
    @pasquale-s5g 5 years ago +2

    Not related: have you ever done a series on text generation?

  • @thedosiusdreamtwister1546
    @thedosiusdreamtwister1546 5 years ago +1

    You might get better data if you show yourself flash cards. There's tons of good research that says the same neurons are activated in the same sequence whether the stimulus is internal (think left/right) or external (respond left/right) and it should be less affected by the fatigue you described.

  • @manukhurana483
    @manukhurana483 5 years ago +8

    I was just watching your video and then suddenly a notification popped up: new video uploaded.

  • @teneshvignesan6227
    @teneshvignesan6227 5 years ago

    where can i donate for you to keep up this series?????????????????

  • @qwisacz
    @qwisacz 5 years ago

    Just some thoughts. I believe that thinking "left" might not be enough, how about writing a simple game where you move some character with keys and generate data from it (pressed key may be a classification result). This correlation might be useful when taking some predefined delay into account (as human reactions are not instant). Great series, keep up the good work :)

  • @Praxss
    @Praxss 4 years ago +1

    Pls continue

  • @drewgi7543
    @drewgi7543 2 years ago

    Have you considered thinking about actions instead of words? Like, imagining yourself to move your left hand vs right hand. You would be more readily accessing the motor functions of your brain.

  • @realityheadquarters4956
    @realityheadquarters4956 4 years ago

    Did you try thinking left or right in different spoken languages?

  • @kristoferkrus
    @kristoferkrus 5 years ago

    Maybe your brain patterns are actually changing the more you use the device and do the right/left/none "exercise," hence causing a "drift" in the patterns. You could perhaps try adding an extra input parameter for every example, which is the total time you have been using the device for, and see if that explains some of the uncertainty in your predictions.

  • @angrymurloc7626
    @angrymurloc7626 5 years ago

    In response to your point, that this should be applicable for people with a disability: Unless fully paralyzed for a very long time, even people with a disability will have some neural connection from brain to muscle, and since you wouldn't be learning from the data at the muscle itself, that shouldn't be an issue.
    I would suggest the following points, which I'm sure would make your BCI more effective, for any person using it:
    Try to see the BCI as some imaginary limb, for which you have to learn movement, like you would with an arm or a leg.
    DON'T try to equate verbal thought to movement, as that is nonsensical from a neural standpoint. Your neural representation of the word left is very complicated. For example, since it's a direction, one of the closest associations with the word 'left' will be the word 'right'. Your data will be very hard to read if you deliberately mix it.
    Also try to think about how to continually generate data. Like for example, you could first try to learn the difference between correct and incorrect, by simply generating data with headset keyboard and BCI output, and having a model learn, what your neural representation of failure would be. If you have that, the next step is only to set the loss of the model to the amount of failure that your brain detects, and you can have the BCI adjust to you while you use it. Obviously you'd need to stop to let the network train for a while, but I'm pretty sure that the results will be much better than what you're getting here.
    All of this information is partly taken from the TED talk "Can we create new senses for humans?", which is kinda the reverse of what you're doing here. It will probably be a very interesting watch.

  • @rextlfung
    @rextlfung 5 years ago

    Hi Sentdex!
    Great video as always, what you're doing is really interesting.
    What would you say are the programming/math concepts I need to learn before doing stuff in this video?
    I am studying neuroscience and am really interested in self-learning how to interpret brain signals to create useful outputs.
    Thanks!

  • @gomenaros
    @gomenaros 5 years ago

    You do everything I wanna do... Awesome

  • @santiagoarroyob
    @santiagoarroyob 3 years ago

    You know what you could do? You could set up three sensors that measure what's happening, then link the signals to what you're thinking, so we can know what you're thinking just from the data of your signal. Then you communicate rapidly with the computer and it gives you instant feedback.

  • @atrumluminarium
    @atrumluminarium 5 years ago

    I'm still halfway through, so apologies if you say something about this later: are you planning to experiment with recording data from other people (i.e. a friend or family member that you can bring over to get readings over a couple of sittings)? In principle this should increase variety and make the data less anecdotal.
    Also maybe consider hand selecting which parts of the brain you read from if at all possible. For example the cerebellum is responsible for motor signals to the muscles so to eliminate that as much as possible one might want to experiment with not mapping it. Same goes for the other parts of the brain.

  • @juusokorhonen1628
    @juusokorhonen1628 4 years ago

    How are you doing the 'thinking of moving to the right/left'? Could it be that you're missing the visual response of the thing actually moving to the right? And I've been wondering could we use the EEG signaling of when you're thinking 'nope, it is going in the wrong way' as the error for the model, and then continuously train the model.

  • @iiWicked
    @iiWicked 4 years ago

    I did a quick search through your videos and this one seems to be the last on the subject but I still have some thoughts on how to proceed.
    Someone mentioned simplifying the task to binary classification. My immediate reaction is to do the opposite and introduce more data: like training on the feeling of not wanting to move right, not left, and not moving at all. Also doing training while being distracted, or thinking left while watching the block not move or move right. I'm wondering if this might help the model differentiate between moving and not moving. I'm not an expert in the field or anything, just putting down some thoughts I had.

  • @tanelhelmik
    @tanelhelmik 5 years ago

    What if you used only images of things going left or turning left to train the left data, and then switched all the stimuli to right for the right data? I think having something move right when you are thinking left would make you think your thought of left is not the correct "left".

  • @wktodd
    @wktodd 5 years ago

    So, there's no temporal content to your data? Just an instantaneous FFT? I wonder if that is missing the actual thought process, which by the nature of wet neural networks is slow. And, as a matter of interest, how would you present temporal/sequential data to a NN?

  • @macarrony00
    @macarrony00 5 years ago

    I am not sure how the brain network works for people without an arm, for example. But I think people who lose an arm can still try to move it, and the brain signals generated should be the same as before, so pressing buttons on a keyboard or moving a part of your body shouldn't be a problem but a solution. Those people already have the brain control to move their new arm.

  • @jtdyalEngineer
    @jtdyalEngineer 4 years ago

    Think left, none, right, none, repeat... Then train the model to look for the change?

  • @ab185
    @ab185 5 years ago

    Is it possible you have too many input channels and that’s creating accidental noise? Maybe try a sample of some channels (e.g. sensors over Broca’s or Wernicke’s Area) and see if the ability to predict accurately improves?

  • @IAmOxidised7525
    @IAmOxidised7525 5 years ago

    Nice!!!! This is cool...
    I would get high, model the behavior,
    then find the music that creates the same trip....
    Can we do that?

  • @CrimsonTheOriginal
    @CrimsonTheOriginal 5 years ago +4

    You should add a key-logger to your system that timestamps keystrokes and then aligns them with your readings.
    I'm an Analytics engineer, I specialize in time series data in manufacturing, but I would love to talk to you about some ideas I have around this.

    • @atrumluminarium
      @atrumluminarium 5 years ago +2

      Mapping EEG readings to the keystrokes might be kinda cool for a typing with your brain project.
      That being said he might want to experiment with different parts of the brain. I feel that for such a task the cerebellum should be ignored to weed out motor signals to the fingers

    • @CrimsonTheOriginal
      @CrimsonTheOriginal 5 years ago +2

      @@atrumluminarium My thought was: once he had a continuous flow of keystrokes, he could partition his readings by time slices of individual letters or words. Then, with that data, he would have the opportunity to overlay all of the readings from, say, a specific word, use a correlation formula to see whether any specific signals repeat over the set of occurrences, and then run machine learning on those targeted signals instead.
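
      The partitioning idea reduces to matching keylogger timestamps against the EEG sample clock; a sketch with np.searchsorted, where the sample rate, clock values, and epoch bounds are all assumed:

```python
import numpy as np

fs = 250                                       # assumed EEG sample rate (Hz)
t0 = 1000.0                                    # assumed stream start time (s), shared clock
sample_times = t0 + np.arange(fs * 60) / fs    # one minute of sample timestamps

# (timestamp, key) pairs from a hypothetical keylogger on the same clock
key_events = [(1002.37, "left"), (1010.81, "right"), (1030.05, "left")]

def epoch_indices(ts, before=0.2, after=0.8):
    """Sample-index range covering [ts - before, ts + after)."""
    lo = int(np.searchsorted(sample_times, ts - before))
    hi = int(np.searchsorted(sample_times, ts + after))
    return lo, hi

epochs = [(key, epoch_indices(ts)) for ts, key in key_events]
```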

  • @lautarodapin
    @lautarodapin 5 years ago

    are you filtering out low-frequency FFT components and higher ones?

  • @TroubleMakery
    @TroubleMakery 3 years ago

    Thinking about getting one of these. Any other BCI recommendations or is the OpenBCI still the best?

  • @gm4984
    @gm4984 5 years ago +1

    What if you used a visual program that randomises the position of a cube (between left, right, up, down)? You follow it with your eyes and the program learns from that. Should I explain it better?

    • @gutzbenj
      @gutzbenj 5 years ago

      I also thought of something like an eye tracker to get the position of the eyes and would expect you to think of where your eyes are looking.

    • @gutzbenj
      @gutzbenj 5 years ago

      And probably an empty "clean" room

  • @MrBoubource
    @MrBoubource 5 years ago

    I was thinking that you could apply some sort of error correction code to keep only the most predicted output over a small period of time (the simplest implementation would be the most common over the last 4 guesses).
    That will add a little lag but the experience might be more enjoyable.
    I don't know if there are many error correction codes for a 3-state information unit (here left, none and right, instead of 0 and 1 for bits), since their main purpose is to correct errors in binary data.
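
    The "most common over the last 4 guesses" smoother is a few lines with a deque and a Counter (under CPython's insertion-ordered Counter, ties fall to the value seen earliest in the window):

```python
from collections import Counter, deque

class VoteSmoother:
    """Keep the last k raw classifier outputs and emit the most common one:
    a simple majority filter over left / none / right predictions."""
    def __init__(self, k=4):
        self.window = deque(maxlen=k)

    def update(self, prediction):
        self.window.append(prediction)
        return Counter(self.window).most_common(1)[0][0]

smoother = VoteSmoother(k=4)
raw = ["left", "left", "right", "right", "right"]
smoothed = [smoother.update(p) for p in raw]   # trades a little lag for stability
```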

  • @mantasr7715
    @mantasr7715 5 years ago

    Sorry if you already touched on this, but wouldn't it be simpler to train on just 'right' and 'none'? I imagine left and right would have more similarities than 'go' and 'stop'.

  • @wktodd
    @wktodd 5 years ago

    Thinking about this (always a dangerous thing to do :-)) ...
    How about combining the sequential FFT sets into a 3D cascade diagram (frequency/time, with level as colour)? This would combine the FFT sets and give temporal info to the CNN.
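
    The cascade amounts to stacking windowed FFT magnitudes into a (time, frequency) array, i.e. a spectrogram, which a CNN can take as an image; the window and hop sizes below are arbitrary:

```python
import numpy as np

def fft_cascade(signal, win=128, step=32):
    """Stack magnitude FFTs of successive (Hann-windowed) slices into a
    (time, frequency) array -- the data behind a 3D cascade/waterfall plot,
    with level as the value at each (time, freq) cell."""
    taper = np.hanning(win)
    rows = [np.abs(np.fft.rfft(signal[s:s + win] * taper))
            for s in range(0, len(signal) - win + 1, step)]
    return np.array(rows)

fs = 250                                              # assumed sample rate
sig = np.sin(2 * np.pi * 10 * np.arange(1000) / fs)   # 10 Hz test tone
cascade = fft_cascade(sig)                            # (n_slices, win // 2 + 1)
```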

  • @xiubinzheng7
    @xiubinzheng7 5 years ago

    how do you like recording on Ubuntu vs Windows?

  • @pranjal86able
    @pranjal86able 5 years ago

    you are amazing!

  • @Evan-bv8bh
    @Evan-bv8bh 4 years ago

    Hi! Is there any way to replay EEG data using eeglab/bcilab as well?

  • @rushibhatt1201
    @rushibhatt1201 5 years ago

    Inspirational

  • @jond532
    @jond532 4 years ago

    Maybe instead of perfecting its ability to predict whether we are thinking left or right... we should change how we actually think about left or right? Maybe if we found certain thoughts that the AI found easy to predict, we could use them as a way of triggering left or right. Basically, test different thoughts, see which are most easily detectable, and substitute them for actually thinking left or right. Use an AI to find the most effective thoughts and label each thought.
    Like making a mental language.

  • @allurbase
    @allurbase 5 years ago

    I'd do a sequence with the hidden representation of this.

  • @castme.z
    @castme.z 5 years ago

    Hey i need a quick help

  • @x1expert1x
    @x1expert1x 5 years ago +1

    I wanted to do this shit so baaaad. I'm trying to convince the Computer Club at my university to invest in a brain EEG reader; it would get so many more people to try out machine learning.

  • @maxitube30
    @maxitube30 4 years ago

    no more video here?

    • @sentdex
      @sentdex  4 years ago +5

      Nope, not yet. I want to come back, but got sidetracked with other things.

    • @maxitube30
      @maxitube30 4 years ago

      @@sentdex oh :(. Hope to see you back soon

    • @ryanventurino3578
      @ryanventurino3578 4 years ago

      @@sentdex Interested as well!

  • @devgupta9469
    @devgupta9469 5 years ago

    @sentdex please continue with pytorch tutorials. Concepts like RNN and LSTM

  • @blitzer658
    @blitzer658 5 years ago +3

    Yo Sentdex
    I'm finding that 90% of the YT "software" community is filled with uninspired, unmotivated idiots who lack passion for what they do. All they do is complain about their job whilst making unrealistic day-in-the-life videos where they flex on you, hoping their YT channel will become their new career.
    So, my question is: which YouTubers do you recommend that ACTUALLY CODE AND SHOWCASE COOL PROJECTS,
    and not just mediocre front end tutorials?

    • @blitzer658
      @blitzer658 5 years ago

      edit phone crashed lol

    • @sentdex
      @sentdex  5 years ago +5

      Have you heard of.... Sentdex ?
      Sorry, but you set me up too easily. As for other channels like mine, Corey Schafer is great, Tech With Tim is good... uhh... running out. Code Bullet is cool for projects, but he doesn't really dive into code or teach. Just shows cool stuff.
      The problem mainly is channels like this are really hard to create/exist.

    • @blitzer658
      @blitzer658 5 years ago +1

      @@sentdex Sentdex is the littest coder on YT fam
      I see your point. It's just a shame really. I seriously love programming and would do it even if it wasn't economically viable, so it's sad to see that passion is quite scarce and the market saturated by
      GET RICH QUICK WITH HTML AND CSS FRONT END CODE BOOT CAMP, COLLEGE IS TRASH, SELF TAUGHT COURSE FOR ONLY $997
      Meh, at least I have your channel to watch.
      PS. I'd like to recommend Devon Crawford. He's the first one to combine "hip" film making / vlogs with programming, though he stopped uploading cuz, as you said, it's hard to make these types of videos.
      That concludes my Ted Talk

  • @starviptv6544
    @starviptv6544 3 years ago

    🎬 Great Tutorials 💾💻📚

  • @MaxTechEngineering
    @MaxTechEngineering 5 years ago

    You need to check out HTM (Hierarchical Temporal Memory), dude. It's naturally resistant to noise.

  • @connormilliken8347
    @connormilliken8347 5 years ago

    Hey Sentdex,
    Just wanted to say I've always really enjoyed your videos, so much so that I reference your channel in the book I just wrote. It was released recently; it helps beginner and intermediate level programmers learn Python and some analytics libraries. Feel free to check it out. You can find the shout-out on page 322.
    Thanks!

  • @julianray6802
    @julianray6802 4 years ago

    Hi there Sentdex. I think all your videos are brilliant! On BCI: I was listening to a podcast (www.abc.net.au/radionational/programs/allinthemind/brains-old,-new,-and-augmented/12063992) where a quadriplegic patient managed to control a Formula 1 car with a BCI using a novel approach. Rather than thinking "go left... go left" etc., he would think of eating ice cream for turning left, i.e. activating very different brain regions, which would probably improve your signal-to-noise ratio... would be interested to see if you could replicate this technique. Thanks again for your great shows!

  • @DanielVeazey
    @DanielVeazey 5 years ago

    I used to have a computer.

  • @alexandremarcotte7368
    @alexandremarcotte7368 5 years ago +1

    I started a GUI to visualize data Live from OpenBci if someone wants to branch from it:
    github.com/AlexandreMarcotte/PolyCortex_Gui

  • @Jeacom
    @Jeacom 5 years ago +1

    Please try driving a car with your thoughts.
    I mean a game car, not a real one.

  • @anhnguyen-ik6dj
    @anhnguyen-ik6dj 5 years ago

    hello, I learned about DDoS attacks in Python 3 with the urllib.request module. I wrote a program similar to Hulk.py but using an extra proxy. Can you give me more ideas with this module to write a DDoS program better than Hulk, or could you make a DDoS video with this module? I look forward to it. Please answer me, thank you!

  • @brubrudsi
    @brubrudsi 5 years ago

    Hey Sentdex I sent you an email idk if you saw it

  • @Bladermishal10
    @Bladermishal10 1 year ago

    bro pt 4

  • @name1483
    @name1483 5 years ago

    Use VS code

  • @eitanas85
    @eitanas85 5 years ago

    Hi,
    Could someone please refer me to the Sentdex videos
    where he explains how to choose which model to use, how many layers, which activation functions are appropriate for the problem, etc.?
    It would be of great help!

  • @thetechegg8859
    @thetechegg8859 5 years ago

    Hiiiiiiii

  • @mr.beelzebub888
    @mr.beelzebub888 5 years ago

    hmmmmmmm

  • @phillipotey9736
    @phillipotey9736 5 years ago +1

    Stop using FFT, or increase the FFT sample size. Our brains are way ahead of our reaction/conscious thinking time. I'll look at gh later, but I'm interested.

  • @nictanghe98
    @nictanghe98 4 years ago

    Plz don't use it for lame stuff like directions; start using it to code with your brain.

    • @sentdex
      @sentdex  4 years ago

      I don't find this lame.

    • @davidson2727what
      @davidson2727what 4 years ago

      I want this to code with my brain, then wear an Oculus to display my coding environments in 3D. I'm livin in 252525!