
Aura Data Powers First Large-Scale Study on U.S. Kids’ GenAI Use, Highlighting Early Adoption and Safety Gaps

Aura has leveraged usage data from its parental-monitoring app to underpin a peer-reviewed study in JAMA Network Open, positioning the company at the center of the evidence-based debate over how children use generative AI and what that means for online safety. In collaboration with the University of North Carolina at Chapel Hill's Winston Center on Technology and Brain Development, Aura analyzed device activity from 6,488 U.S. users aged 4–17, collected between September 1, 2024, and April 1, 2025. The findings show that 26% of the sample had used GenAI apps for at least three minutes, with a smaller cohort of heavy users spending more than 30 minutes per day and, in extreme cases, nearly three hours. Adoption rises sharply with age, but usage among younger children is already material: 9% of school-aged kids and 6% of younger children had accessed GenAI, often via platforms designed for social companionship. ChatGPT dominated the category, accounting for 79% of GenAI users, and 41% of frequently used GenAI apps were marketed around companionship features.

For Aura, the study reinforces both the relevance and the differentiation of its platform as AI exposure becomes a core family risk vector. The data show that GenAI use peaks on weekdays after school and that 12.5% of children access AI tools at night. Usage is also meaningful among children aged 4–13 despite U.S. privacy rules intended to limit access, underscoring a gap between regulation and practice that heightens demand for robust parental controls and monitoring. Aura's Chief Medical Officer, Dr. Scott Kollins, noted that AI tools now mimic social interaction, introducing new family dynamics that can be difficult to manage without visibility into how and when kids engage with these platforms. By deidentifying the data, securing IRB approval, and adhering to STROBE reporting standards, Aura demonstrated that its data assets can support rigorous academic research, strengthening its credibility with regulators, healthcare professionals, and policymakers. Strategically, the research underscores Aura's potential to expand from a consumer security tool into a data-and-insights partner on child digital behavior: it supports product development around AI-specific safety features, informs future policy discussions, and could improve customer acquisition among parents and institutions focused on mitigating tech-driven mental-health and safety risks.
