BRAINPOWER: THE FUTURE OF COMMUNICATING WITH TECHNOLOGY
WRITTEN BY: RYAN ZERNACH
SUMMARY — The user can create an account, connect their EEG device, and control a digitally animated light switch and thermostat with their brain’s neuroelectrical signals.
TECH STACK — BrainFlow, scikit-learn, Flask, Heroku
TEAM & TIME — One Data Scientist (Myself), One Front-End Developer (Ellen Weng), One Back-End Developer (Roenz Aberin), and an EEG Device User (Milecia McGregor) — Two Weeks
PERSONAL CONTRIBUTIONS —
▻ Trained machine learning algorithm to predict what the user is thinking
▻ Built a back-end RESTful Python API to return the predicted command: no, yes, up, down, left, or right
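A minimal sketch of how such a classifier could be trained with scikit-learn. The synthetic data below stands in for the real OpenBCI recordings, and the model choice, sampling rate, and feature layout (flattened one-second windows) are all assumptions, since the project's actual training code isn't shown here:

```python
# Hypothetical training sketch: synthetic data in place of real EEG recordings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
N_SAMPLES = 100          # 100 one-second bursts, as described in the post
N_CHANNELS = 4           # four electrodes on the OpenBCI device
SAMPLES_PER_SEC = 200    # assumed sampling rate

# Each training example: one second of EEG, flattened into a feature vector.
X = rng.normal(size=(N_SAMPLES, N_CHANNELS * SAMPLES_PER_SEC))
# Labels 0-5 map to the six commands: no, yes, up, down, left, right.
y = rng.integers(0, 6, size=N_SAMPLES)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
preds = model.predict(X_test)
```

With real, non-random EEG data, the same pipeline is what would produce the accuracy figure reported below.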
SIXTY SECOND VIDEO DEMO — PRESENTED BY ME
MVP/Proof-of-Concept Journal Updates, March 13th, 2020 —
Milecia recorded our EEG data using her OpenBCI device, which has four electrodes. She recorded 100 one-second bursts of EEG data, which I used to train a predictive model with 97% accuracy! We built a back-end Python API that, when called, returns a number 0, 1, 2, 3, 4, or 5. Those numbers correspond to one of six commands the user may be thinking: no, yes, up, down, left, or right.
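The shape of that API can be sketched as a small Flask endpoint. The route name is hypothetical, and the prediction index is hard-coded here for illustration; in the real app, a trained model would produce it:

```python
# Hypothetical sketch of the prediction endpoint (route name assumed).
from flask import Flask, jsonify

app = Flask(__name__)

# Index -> command mapping used throughout the project.
COMMANDS = ["no", "yes", "up", "down", "left", "right"]

@app.route("/predict")
def predict():
    # In the real app, a trained scikit-learn model would classify one
    # second of EEG data here; we hard-code an index for illustration.
    predicted_index = 1
    return jsonify({"index": predicted_index,
                    "command": COMMANDS[predicted_index]})
```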
Ellen & Roenz built both the front-end React JS user interface and the back-end connectivity to our database for securely storing users’ data. The back-end Python API is called when the user clicks the “Record EEG” button on the front-end, which is pictured at the bottom of this page with the pink brain and rotating blue/white circles. When that button is clicked, the following code is executed:
1. Connect to the local EEG device
2. Collect EEG data for one second
3. Compile the data into a pandas DataFrame
4. Run predictions on those instances of EEG “screenshots”
5. Of those predictions, return the command that was most frequently predicted
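The steps above can be sketched end to end in Python. The BrainFlow device read is replaced by simulated data (since the sketch can't assume connected hardware), and the function names and fallback per-row labeling are illustrative, not the project's actual code:

```python
# Illustrative sketch of the five steps; the device read is simulated.
from collections import Counter

import numpy as np
import pandas as pd

COMMANDS = ["no", "yes", "up", "down", "left", "right"]

def read_one_second_of_eeg(n_channels=4, sampling_rate=200):
    # Stand-in for reading one second of data from the board via BrainFlow;
    # returns simulated 4-channel EEG samples.
    rng = np.random.default_rng(0)
    return rng.normal(size=(sampling_rate, n_channels))

def predict_command(model=None):
    raw = read_one_second_of_eeg()                 # steps 1-2: connect, collect
    df = pd.DataFrame(raw, columns=[f"ch{i}" for i in range(raw.shape[1])])  # step 3
    if model is None:
        # Hypothetical stand-in: derive a pseudo-label per row for the demo.
        predictions = (df.abs().sum(axis=1) % 6).astype(int)
    else:
        # Step 4: classify each row ("screenshot") with the trained model.
        predictions = model.predict(df.values)
    # Step 5: majority vote over the per-row predictions.
    most_common_index, _ = Counter(predictions).most_common(1)[0]
    return COMMANDS[most_common_index]
```

The majority vote in step 5 is what smooths per-sample noise into a single returned command for the whole one-second window.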
With that returned command, the front-end is hard-coded to react in a specific way, depending on which command the user wearing the EEG device was “thinking of” during that one second — much like recording your voice to issue a command to Siri or Google Assistant, except you’re recording your brain’s neuroelectrical transmissions.
Eventually, we’d like to record continuously and make predictions in real time, filtering out all of the user’s other thoughts unless they’re thinking of a specific command: no, yes, up, down, left, or right. And, of course, we’d also like to add more commands to our list of available actions.
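One speculative way that “filter out all other thoughts” idea could work is a confidence threshold: only act when the classifier’s top class probability (e.g. from scikit-learn’s `predict_proba`) is high enough, and otherwise treat the window as “no command.” This is a sketch of the idea, not the project’s plan:

```python
# Speculative sketch: reject low-confidence windows instead of acting on them.
COMMANDS = ["no", "yes", "up", "down", "left", "right"]

def filter_prediction(probabilities, threshold=0.8):
    # probabilities: per-class scores for one window, e.g. model.predict_proba(x)[0]
    best = max(range(len(probabilities)), key=lambda i: probabilities[i])
    if probabilities[best] < threshold:
        return None  # no command confidently detected; keep listening
    return COMMANDS[best]
```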
However, these six commands are enough for a proof of concept. Currently, our front-end supports the following functionalities using just those six.
THANKS FOR READING!
CHECK OUT ANOTHER PROJECT OR BLOG POST OF MINE…
AWS offers certification programs to demonstrate platform mastery. I earned my AWS Cloud Practitioner Certification!
I’ve been working hard towards my official TensorFlow Certification! What tools have I been using to study?
Which machine learning algorithms did I learn to use in Lambda School and when are the most appropriate times to use them?
What projects did I build using DataRobot? What problems does DataRobot’s AutoML platform solve?
What are the building blocks of SQL and how is this language used to manipulate data that’s stored in relational databases?
Django is a high-level Python Web framework that encourages rapid development and clean, pragmatic design.
I’ve built a couple dozen WordPress sites over the last 5+ years. Here are the steps I take to configure them with endless customizability.