🧠 BRAINPOWER: THE FUTURE OF COMMUNICATING WITH TECHNOLOGY

WRITTEN BY: RYAN ZERNACH

SUMMARY — The user can create an account, connect their EEG device, and control a digitally animated light switch and thermostat with their brain's neuroelectrical signals.


TECH STACK — Brainflow, Scikit-Learn, Flask, Heroku


TEAM & TIME — One Data Scientist (Myself), One Front-End Developer (Ellen Weng), One Back-End Developer (Roenz Aberin), and an EEG Device User (Milecia McGregor) — Two Weeks


PERSONAL CONTRIBUTIONS —

▻ Trained a machine learning model to predict what the user is thinking

▻ Built a back-end RESTful Python API to return the predicted command: up, down, left, right, yes, or no

SIXTY SECOND VIDEO DEMO β€” PRESENTED BY ME

DEPLOYED WEB APP
BACK-END PYTHON API
FRONT-END G.U.I. CODE
DATA EXPLORATION

MVP/Proof-of-Concept Journal Updates, March 13th, 2020 —


Milecia recorded our EEG data using her OpenBCI device, which has four electrodes. She recorded 100 one-second bursts of EEG data, which I used to train a predictive model with 97% accuracy! We built a back-end Python API — which, when called upon, returns a number from 0 to 5. Those numbers correspond to one of six commands that the user may be thinking: no, yes, up, down, left, or right.
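The training setup can be sketched roughly as follows. This is a hypothetical reconstruction, not our actual training script: the feature extraction, the choice of RandomForestClassifier, and the synthetic data standing in for the real OpenBCI recordings are all assumptions. The one part taken directly from the project is the mapping of labels 0-5 onto the six commands.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Label indices 0-5 map onto the six commands (from the project).
COMMANDS = ["no", "yes", "up", "down", "left", "right"]
N_CHANNELS = 4  # the OpenBCI device used had four electrodes

# Synthetic stand-in for the real recordings: 100 one-second bursts,
# each reduced to a few summary features per channel (an assumption).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, N_CHANNELS * 2))      # e.g. mean + std per channel
y = rng.integers(0, len(COMMANDS), size=100)    # one label per burst

# Hold out a test split, fit a classifier, and map a prediction
# back from its numeric label to a command string.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

predicted_index = int(model.predict(X_test[:1])[0])
predicted_command = COMMANDS[predicted_index]
```

Because the labels here are random noise, this sketch will not reproduce the 97% accuracy we saw on real EEG data; it only illustrates the label-to-command plumbing.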


Ellen & Roenz built both the front-end React JS user interface and the back-end connectivity to our database for securely storing users' data. The back-end Python API is called when the user clicks the record-EEG button on the front-end, which is pictured at the bottom of this page with the pink brain and rotating blue/white circles. When that button is clicked, the following code is executed:


βš™οΈ Connect to the local EEG device

βš™οΈ Collect EEG data for one second

βš™οΈ Compile the data into a pandas dataframe

βš™οΈ Run predictions on those instances of EEG "screenshots"

βš™οΈ Of those predictions, return the command that was most frequently predicted


Then, with that returned command, the front-end is hard-coded to react in a certain way, depending on which command the user wearing the EEG device was "thinking of" during that one second in time — sort of like recording a voice command for Siri or Google Assistant, except you're recording your brain's neuroelectrical transmissions.
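Since the stack includes Flask, the hand-off to the front-end can be pictured as a small JSON endpoint. The route name, the `record_and_predict` helper, and its hard-coded return value are all hypothetical; only the idea of returning one of the six command strings comes from the project.

```python
from flask import Flask, jsonify

app = Flask(__name__)
COMMANDS = ["no", "yes", "up", "down", "left", "right"]

def record_and_predict():
    """Stand-in for the collect-and-classify pipeline described above."""
    return 2  # e.g. the majority-vote label for this one-second window

@app.route("/predict")  # route name is an assumption
def predict():
    idx = record_and_predict()
    # The front-end switches on this command string to animate its widgets.
    return jsonify({"command": COMMANDS[idx]})
```

The front-end only ever sees the command string, so the model and signal processing can change behind the API without touching the React code.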


We'd eventually like to continuously record and make predictions in real time, filtering out all the other thoughts the user is thinking unless they're thinking of a specific command: no, yes, up, down, left, or right. And, of course, we'd also like to eventually add more commands to our list of available actions.


However, these six commands are enough for a proof-of-concept. Currently, our front-end is equipped with the following functionality, using just these six commands.

THANKS FOR READING!

CHECK OUT ANOTHER PROJECT OF MINE: