National Science Foundation (NSF) funded centers like the Pervasive Personalized Intelligence (PPI) Center not only help industry solve today's real-world problems, but they also help students find their way and become leaders of the future.
Take, for example, fourth-year PhD student Julia Romero from the University of Colorado Boulder (CU), one of PPI's academic sites. Julia received her B.S. in biomedical engineering from the University of Texas at Austin in 2020. A competitive athlete who races trail and mountain ultramarathons and also enjoys strenuous outdoor activities like soccer, biking, and skiing, Julia is passionate about the knowledge that can be gathered from wearable devices with sensors. She wanted to work on research related to exercise, health, and medicine: research that could help athletes assess their nutritional needs, predict injury, measure clinical biomarkers, track training goals, and support other health applications. She wanted to take the data being collected by wearable devices like the Apple Watch and Google's Fitbit and put it to use in all kinds of health applications.
But she encountered a major problem: little research is currently being done on the impact of wearable sensors on health and fitness. She embarked on the research regardless, but working in such a sparse field left Julia feeling isolated, unchallenged, and unfulfilled. She wasn't making the impact she had hoped for.
Still, she continued to follow her passion, and as often happens when one does, things changed for Julia. Her determination led her to add a co-advisor, Morteza Karimzadeh, who suggested that she submit a project proposal to the PPI Center for possible funding. She did, the Center was interested in her work … and the proposal was accepted.
Today, with supervision from Morteza and her industry collaborators at Intel Labs (a PPI member), Julia is working on groundbreaking research titled "Learning Spatiotemporal Graphs for Detection of Human Activities in Multimodal Data." Her work helps automate understanding of the flood of data currently being collected by multi-modal, multi-sensor systems in smart manufacturing, patient monitoring, safety monitoring, and other applications. Specifically, she is developing techniques that leverage spatiotemporal and multimodal information while running on smaller datasets, with fewer computational resources and faster runtimes.
Her work will benefit applications based on Virtual Reality (VR) and Augmented Reality (AR), human action recognition from sensor data, multi-modal learning, and understanding human behavior in completing complex tasks. The ultimate goal is for her work to enable technology that helps individuals through assisted living, health monitoring, and learning new skills. For example, it could assist the elderly in living independently: sensors within the home could monitor activity, issue reminders, detect falls, and help with daily tasks. Another application uses VR and AR, with AI acting as a personal coach. Embedded in a VR headset, for instance, the AI could teach someone a new skill, such as cooking, dancing, or playing a musical instrument.
However, these are downstream applications. At this point in the process, Julia and her team are still working on the core technology of understanding human actions with sensor data. There are nuances they want to understand, such as the temporal relations between actions (i.e., the order in which certain actions occur) and how different people perform the same actions.
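To make the idea of temporal relations between actions concrete, here is a minimal, purely illustrative sketch (not Julia's actual method) of how a sequence of already-labeled actions could be turned into a directed graph whose edges count how often one action immediately follows another. The function name and the salad-preparation labels are hypothetical, loosely inspired by cooking datasets like 50 Salads.

```python
from collections import defaultdict


def build_temporal_graph(runs):
    """Count how often each action immediately follows another,
    across several recordings of the same task.

    runs: a list of action-label sequences (one per recording).
    Returns a dict mapping (previous_action, next_action) -> count.
    """
    edges = defaultdict(int)
    for run in runs:
        # Pair each action with the one that follows it in this run.
        for prev, nxt in zip(run, run[1:]):
            edges[(prev, nxt)] += 1
    return dict(edges)


# Two hypothetical recordings of the same task by different people.
run_a = ["cut_lettuce", "add_dressing", "mix_salad", "serve"]
run_b = ["add_dressing", "cut_lettuce", "mix_salad", "serve"]

graph = build_temporal_graph([run_a, run_b])
# The edge ("mix_salad", "serve") occurs in both runs, while the
# ordering of cutting vs. dressing differs from person to person.
```

A structure like this captures both kinds of nuance mentioned above: consistent orderings show up as high-count edges, while person-to-person variation shows up as alternative edges between the same actions.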
Julia is currently working on her project in close collaboration with Intel Labs, and with the assistance of their computational resources she has been able to work with a newly released dataset from Meta called Ego-Exo4D, in addition to 50 Salads, her original dataset. Released in December 2023, Ego-Exo4D is a very large dataset of videos accompanied by highly detailed annotations. It is rich and intended to advance VR/AR research, since each sample contains an egocentric (first-person) point-of-view video as well as one or more exocentric (third-person) views.
According to Julia, “None of this would have been possible without the support of the PPI Center and NSF. I wouldn’t have been able to partner with Intel and leverage the knowledge of other industry and academic professionals. It might have taken me years to make the progress I am currently making. I am so grateful for this opportunity.”
Danny Dig, executive director of the PPI Center, also offered his praises. “I am very grateful for wonderful students like Julia who are contributing to the research we are doing at the PPI Center. I met Julia early on in her PhD program when she was still searching for a way to pursue her dream. Seeing the transformation in her and her excitement about the new research direction she is working with our industry members at Intel on makes me very happy. Fulfilling students' dreams and giving them a pathway for practical impact in society is the reason why we exist as a Center. Tomorrow will be better because people like Julia make our community better.”
The assistance that NSF provides is invaluable. Julia’s story is just one example of how NSF-funded centers not only provide avenues for important research, but also support the intellectual curiosity of students, and shape them into future leaders and innovators.