BRAINPOWER: THE FUTURE OF COMMUNICATING WITH TECHNOLOGY
WRITTEN BY: RYAN ZERNACH
SUMMARY: The user is able to create an account, connect their EEG device, and control a digitally-animated light switch and thermostat with their brain's neuroelectrical signals.
TECH STACK: Brainflow, Scikit-Learn, Flask, Heroku
TEAM & TIME: One Data Scientist (Myself), One Front-End Developer (Ellen Weng), One Back-End Developer (Roenz Aberin), and an EEG Device User (Milecia McGregor); Two Weeks
PERSONAL CONTRIBUTIONS:
• Trained a machine learning algorithm to predict what the user is thinking
• Built a back-end RESTful Python API to return the predicted command: up, down, left, right, yes, or no
SIXTY-SECOND VIDEO DEMO: PRESENTED BY ME
MVP/Proof-of-Concept Journal Updates, March 13th, 2020:
Milecia recorded our EEG data using her OpenBCI device, which has four electrodes. She recorded 100 one-second bursts of EEG data, which Ryan used to train a predictive model with 97% accuracy! We built a back-end Python API which, when called upon, returns a number from 0 to 5. Each number corresponds to one of six commands the user may be thinking: no, yes, up, down, left, or right.
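The post doesn't show the training code or name the algorithm, but a minimal Scikit-Learn sketch of that step could look like the following. The CSV file name, the classifier choice, and the feature layout are all assumptions for illustration; only the six command labels come from the project itself.

# Rough training sketch (classifier, file name, and feature layout are assumed).
import joblib
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical CSV: one row per labeled EEG example, one column per electrode,
# plus a "label" column holding the command index (0 = no ... 5 = right).
df = pd.read_csv("eeg_recordings.csv")
X = df.drop(columns=["label"]).values
y = df["label"].values

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print("Held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Persist the model so the back-end API can load it later.
joblib.dump(model, "eeg_model.joblib")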
Ellen & Roenz built both the front-end React JS user interface and the back-end connectivity to our database for securely storing users' data. The back-end Python API is called when the user clicks the record-EEG button on the front end, pictured at the bottom of this page with the pink brain and rotating blue/white circles. When that button is clicked, the following steps are executed (a code sketch follows the list):
1. Connect to the local EEG device
2. Collect EEG data for one second
3. Compile the data into a pandas DataFrame
4. Run predictions on those instances of EEG "screenshots"
5. Of those predictions, return the command that was most frequently predicted
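Here is a rough sketch of those five steps in Python using Brainflow. The OpenBCI Ganglion board ID, serial port, saved model path, and column names are placeholders I've assumed, since the post doesn't include the actual code.

# Sketch of the five steps above (device, port, and model path are assumptions).
import time
import joblib
import numpy as np
import pandas as pd
from brainflow.board_shim import BoardIds, BoardShim, BrainFlowInputParams

COMMANDS = ["no", "yes", "up", "down", "left", "right"]  # indices 0-5

def predict_command(model_path="eeg_model.joblib", serial_port="/dev/ttyUSB0"):
    # 1. Connect to the local EEG device (an OpenBCI Ganglion board is assumed).
    params = BrainFlowInputParams()
    params.serial_port = serial_port
    board = BoardShim(BoardIds.GANGLION_BOARD.value, params)
    board.prepare_session()

    # 2. Collect EEG data for one second.
    board.start_stream()
    time.sleep(1)
    raw = board.get_board_data()
    board.stop_stream()
    board.release_session()

    # 3. Compile the EEG channels into a pandas DataFrame (one row per sample).
    eeg_channels = BoardShim.get_eeg_channels(BoardIds.GANGLION_BOARD.value)
    frame = pd.DataFrame(raw[eeg_channels].T,
                         columns=[f"ch{i}" for i in range(len(eeg_channels))])

    # 4. Run predictions on each of those EEG "screenshots".
    model = joblib.load(model_path)
    predictions = model.predict(frame.values).astype(int)

    # 5. Return the command that was most frequently predicted.
    winner = int(np.bincount(predictions, minlength=len(COMMANDS)).argmax())
    return winner, COMMANDS[winner]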
Then, with that returned command, the front end is hard-coded to react in a certain way, depending on which command the user wearing the EEG device was "thinking of" during that one second in time. It's a bit like recording a voice command for Siri or Google Assistant, except you're recording your brain's neuroelectrical transmissions.
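Since Flask is part of the stack, the endpoint the front end hits when the record-EEG button is clicked could be as small as the sketch below. The route name, module name, and JSON response shape are illustrative assumptions, not the project's actual code.

# Hypothetical Flask wrapper around the prediction routine sketched above.
from flask import Flask, jsonify

from eeg_predictor import predict_command  # hypothetical module holding the sketch above

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    # Record one second of EEG data, classify it, and return the result as JSON
    # so the React front end can react to the command the user was thinking of.
    index, label = predict_command()
    return jsonify({"command": index, "label": label})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)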
Eventually, we'd like to record continuously and make predictions in real time, filtering out every other thought unless the user is thinking of one of the specific commands: no, yes, up, down, left, or right. We'd also, of course, like to add more commands to the list of available actions.
However, these six commands are enough for a proof-of-concept. Currently, our front-end is equipped with the following functionalities, using just these six commands.
THANKS FOR READING!
CHECK OUT ANOTHER PROJECT OR BLOG POST OF MINE…