APA calls for guardrails and education to protect adolescent AI users

by Finn Patraic


The effects of artificial intelligence on adolescents are nuanced and complex, according to a report by the American Psychological Association, which calls on developers to prioritize features that protect young people from exploitation, manipulation, and the erosion of real-world relationships.

“AI offers new efficiencies and opportunities, but its deeper integration into daily life requires careful consideration to ensure that AI tools are safe, especially for adolescents,” according to the report, titled “Artificial Intelligence and Adolescent Well-being: An APA Health Advisory.” “We urge all stakeholders to ensure that youth safety is considered relatively early in the evolution of AI. It is critical that we do not repeat the same harmful mistakes that were made with social media.”

The report was written by an expert advisory panel and follows two other APA reports, on adolescent social media use and on healthy video content recommendations.

The AI report notes that adolescence, which it defines as ages 10 to 25, is a long developmental period and that age is “not an infallible marker for maturity or psychological competence.” It is also a period of critical brain development, which argues for special safeguards for young users.

“Like social media, AI is neither inherently good nor bad,” said APA Chief Science Officer Mitch Prinstein, PhD, who led the development of the report. “But we have already seen instances where adolescents have developed unhealthy and even dangerous ‘relationships’ with chatbots, for example. Some adolescents may not even know they are interacting with AI, which is why it is crucial that developers put guardrails in place.”

The report makes a number of recommendations to ensure that adolescents can use AI safely. These include:

Ensuring healthy boundaries with simulated human relationships. Adolescents are less likely than adults to question the accuracy and intent of information offered by a bot rather than a human.

Creating age-appropriate defaults in privacy settings, interaction limits, and content. This will require transparency, human oversight, and rigorous support and testing, according to the report.

Encouraging uses of AI that can promote healthy development. AI can help with brainstorming, creating, summarizing, and synthesizing information, which can aid understanding and retention of key concepts, the report notes. But it is essential that students be aware of AI's limitations.

Limiting access to and engagement with harmful and inaccurate content. AI developers should build in protections to prevent adolescents' exposure to harmful content.

Protecting adolescents' data privacy and likenesses. This includes limiting the use of adolescents' data for targeted advertising and the sale of their data to third parties.

The report also calls for comprehensive AI literacy education, integrating it into core curricula and developing national and state guidelines for literacy instruction.

“Many of these changes can be made immediately by parents, educators and adolescents themselves,” Prinstein said. “Others will require more substantial changes from developers, policymakers and other technology professionals.”

In addition to the report, resources and advice for parents on AI and keeping adolescents safe, and for teens on AI literacy, are available on APA.org.
