
Showing posts with label artificial intelligence. Show all posts

Saturday, May 31, 2025

MAHA AI BS


Caitlin Gilbert, Emily Wright, Fenit Nirappil and Lauren Weber at WP:
The White House’s “Make America Healthy Again” report, which issued a dire warning about the forces responsible for Americans’ declining life expectancy, bears hallmarks of the use of artificial intelligence in its citations. That use appears to have garbled citations and invented studies that underpin the report’s conclusions.

Trump administration officials have been repeatedly revising and updating the report since Thursday as news outlets, beginning with NOTUS, have highlighted the discrepancies and evidence of nonexistent research.

Department of Health and Human Services spokesman Andrew Nixon said that “minor citation and formatting errors have been corrected, but the substance of the MAHA report remains the same — a historic and transformative assessment by the federal government to understand the chronic disease epidemic afflicting our nation’s children.”
...
  • The referenced report is real. But the inclusion of “oaicite,” a marker of the use of OpenAI, in the URL offers a definitive sign that artificial intelligence was used to collect research.
  • This dead URL was live as recently as 2017, according to archived versions of the site. AI experts said chatbots can produce outdated links in response to queries because they were trained using older material.


Margaret Manto and Emily Kennard at NOTUS:

The Trump administration’s cleanup of the “Make America Healthy Again” Commission’s hallmark, error-riddled report is opening new questions about how the report’s authors drew some of its sweeping conclusions about the state of Americans’ health.

At least 18 of the original report’s citations have been edited or completely swapped out for new references since NOTUS first revealed the errors Thursday morning. While some of the original report’s inconsistencies have been changed, a few of the new updated citations continue to misinterpret scientific studies.

Thursday, June 13, 2024

AI Companionship

In The Politics of Autism, I discuss the day-to-day challenges facing autistic people and their families.

Webb Wright at Scientific American:

Many mental health experts have serious concerns about people who are socially isolated—autistic or not—relying on AI companionship apps as a means of self-treatment or escapism. The problem is “not the inherent content of the AI,” says Catherine Lord, a clinical psychologist in Los Angeles who specializes in autism. But she worries that AI can exacerbate a user’s isolation if the technology is used without the guidance of trained therapists. (Replika and WithFeeling.AI, Paradot’s parent company, have not responded to Scientific American’s requests for comment.)

The open-ended interactions provided by such apps present a double-edged sword for autistic users. Personalized avatars that respond to user behavior with encouraging, humanlike language could help autistic people open up about themselves, especially in ways they may not be able to with other individuals. But these avatars—unlike real people—are always available and very rarely criticize anyone’s opinions. “You end up in this circuit where you have an algorithm dressed up as a human telling you that you’re right and maybe pushing you towards bad choices,” says Valentina Pitardi, an associate professor of marketing at Surrey Business School in England, who has studied the emotional impacts of AI companionship apps.

...

Lord also points to what she regards as a lack of real data that show any kind of therapeutic benefit of AI-powered apps for autistic users. She draws a comparison to prescription drugs: new medications must pass rigorous human trials before legal approval, and the same should be true of AI for autistic users, in her view. “It should be clear what the risks are and what the true value is,” she says. But many companion apps are only a few years old, and autism research is often a painstakingly slow process. For more than three decades, Lord has been running a single longitudinal study of autistic people, for example. It will take some time before she and other autism experts fully understand the technology’s potential consequences.

Sunday, April 28, 2024

AI and IEPs

These meetings can turn nasty, and many autism parents have “IEP horror stories.”  One parent told me that she tried to ease tensions by bringing cookies to the meeting.  The principal then shouted to his staff, “Nobody touch those cookies!”  Another parent writes of asking for a sensory diet, a personalized activity plan that helps the student stay focused (e.g., low noise levels for those with a sensitivity to sound).  “After just proclaiming she is extremely knowledgeable about Asperger’s Syndrome, from the mouth of a school psychologist after we suggested our son needed a sensory diet: ‘Our cafeteria does not have the ability to provide this.’”
Maybe AI can help.

Rakap, S., Balikci, S. Enhancing IEP Goal Development for Preschoolers with Autism: A Preliminary Study on ChatGPT Integration. J Autism Dev Disord (2024). https://doi.org/10.1007/s10803-024-06343-0.  Abstract:
Purpose

The impact of well-crafted IEP goals on student outcomes is well-documented, but creating high-quality goals can be a challenging task for many special education teachers. This study aims to investigate the potential effectiveness of using ChatGPT, an AI technology, in supporting the development of high-quality, individualized IEP goals for preschool children with autism.
Methods

Thirty special education teachers working with preschool children with autism were randomly assigned to either the ChatGPT or control groups. Both groups received written guidelines on how to write SMART IEP goals, but only the ChatGPT group was given a handout on how to use ChatGPT during the IEP goal-writing process. The quality of IEP goals written by the two groups was compared using a two-sample t-test, and the categorization of goals by developmental domains was reported using frequency counts.
Results

Results indicate that using ChatGPT significantly improved the quality of IEP goals developed by special education teachers compared to those who did not use the technology. Teachers in the ChatGPT group had a higher proportion of goals targeting communication, social skills, motor/sensory, and self-care skills, while teachers in the control group had a higher proportion of goals targeting preacademic skills and behaviors.
Conclusion

The potential of ChatGPT as an effective tool for supporting special education teachers in developing high-quality IEP goals suggests promising implications for improving outcomes for preschool children with autism. Its integration may offer valuable assistance in tailoring individualized goals to meet the diverse needs of students in special education settings.

Wednesday, December 20, 2023

AI, Retinal Photographs, and Autism Screening



Question Can deep learning models screen individuals for autism spectrum disorder (ASD) and symptom severity using retinal photographs?

Findings In this diagnostic study of 1890 eyes of 958 participants, deep learning models had a mean area under the receiver operating characteristic curve of 1.00 for ASD screening and 0.74 for symptom severity. The optic disc area was also important in screening for ASD.

Meaning These findings support the potential of artificial intelligence as an objective tool in screening for ASD and possibly for symptom severity using retinal photographs.

Abstract

Importance Screening for autism spectrum disorder (ASD) is constrained by limited resources, particularly trained professionals to conduct evaluations. Individuals with ASD have structural retinal changes that potentially reflect brain alterations, including visual pathway abnormalities through embryonic and anatomic connections. Whether deep learning algorithms can aid in objective screening for ASD and symptom severity using retinal photographs is unknown.

Objective To develop deep ensemble models to differentiate between retinal photographs of individuals with ASD vs typical development (TD) and between individuals with severe ASD vs mild to moderate ASD.

Design, Setting, and Participants This diagnostic study was conducted at a single tertiary-care hospital (Severance Hospital, Yonsei University College of Medicine) in Seoul, Republic of Korea. Retinal photographs of individuals with ASD were prospectively collected between April and October 2022, and those of age- and sex-matched individuals with TD were retrospectively collected between December 2007 and February 2023. Deep ensembles of 5 models were built with 10-fold cross-validation using the pretrained ResNeXt-50 (32×4d) network. Score-weighted visual explanations for convolutional neural networks, with a progressive erasing technique, were used for model visualization and quantitative validation. Data analysis was performed between December 2022 and October 2023.

Exposures Autism Diagnostic Observation Schedule–Second Edition calibrated severity scores (cutoff of 8) and Social Responsiveness Scale–Second Edition T scores (cutoff of 76) were used to assess symptom severity.

Main Outcomes and Measures The main outcomes were participant-level area under the receiver operating characteristic curve (AUROC), sensitivity, and specificity. The 95% CI was estimated through the bootstrapping method with 1000 resamples.

Results This study included 1890 eyes of 958 participants. The ASD and TD groups each included 479 participants (945 eyes), had a mean (SD) age of 7.8 (3.2) years, and comprised mostly boys (392 [81.8%]). For ASD screening, the models had a mean AUROC, sensitivity, and specificity of 1.00 (95% CI, 1.00-1.00) on the test set. These models retained a mean AUROC of 1.00 using only 10% of the image containing the optic disc. For symptom severity screening, the models had a mean AUROC of 0.74 (95% CI, 0.67-0.80), sensitivity of 0.58 (95% CI, 0.49-0.66), and specificity of 0.74 (95% CI, 0.67-0.82) on the test set.

Conclusions and Relevance These findings suggest that retinal photographs may be a viable objective screening tool for ASD and possibly for symptom severity. Retinal photograph use may speed the ASD screening process, which may help improve accessibility to specialized child psychiatry assessments currently strained by limited resources.

Sunday, October 22, 2023

Retracted Article on Private Equity

The Politics of Autism includes an extensive discussion of autism service providers.  Private equity firms now own many of them.  There are critics.

Retraction Watch:

An article that proposed potential benefits of private equity firms investing in autism service providers has been removed from the journal in which it was published.

The article, “Private equity investment: Friend or foe to applied behavior analysis?” was originally published in the International Electronic Journal of Elementary Education as part of a January 2023 special issue devoted to applied behavior analysis (ABA) for autism.

... 

The sole author of the article, Sara Gershfeld Litvak, “decided to retract the article due to her commitment to scientific integrity and ethical values,” following “a rigorous review process,” according to the undated retraction notice on page 266 of the special issue. Litvak is founder and CEO of the Behavioral Health Center of Excellence (BHCOE), a company that offers accreditation for organizations that provide ABA services, and she co-founded the Autism Investor Summit, an annual meeting focused on the business side of autism services. She is also an advisory board member for Calex Partners, a firm that provides advice on mergers and acquisitions for autism-related businesses.

The original retraction notice did not mention any specific issues with the article, which is no longer available on the journal’s website. A correction notice to the issue’s introduction, published 4 October, says that the editors retracted Litvak’s article “due to the use of Artificial Intelligence (AI) that led to numerous inaccuracies within the reference and the body of the paper.” A close examination of a PDF copy of the article obtained by Spectrum and Retraction Watch revealed that nearly two-thirds of the article’s references appear to not exist.

Friday, July 21, 2023

Autistic Language Patterns and a Problem with AI Detection

In The Politics of Autism, I discuss the day-to-day challenges facing autistic people and their families.

Richard Pollina at The New York Post:
An assistant professor at Purdue University, who has been diagnosed with autism, said that they were accused by a fellow researcher of being an AI bot after sending an email that allegedly lacked “warmth.”

Rua Mea Williams, 37, warned that people with disabilities might be confused with artificial intelligence because fellow professors are not accounting for those who have neurological issues or are not native English speakers.

“Kids used to make fun of me for speaking robotically. That’s a really common complaint with Autistic children,” Williams told The Post on Thursday about the misconception.

“Chat GPT detectors are flagging non-native English speakers as using Chat GPT when really it’s just that they have an idiosyncratic way of pulling together words that’s based on translating their native language into English.”

Williams, who uses they/them pronouns, holds a Ph.D. in human-centered computing. They chose to share the interaction on Twitter to illustrate how the mistake could happen to anyone with disabilities.

“The AI design of your email is clever, but significantly lacks warmth,” the researcher replied to Williams’ email, followed by a request to speak with a “human being.”

“It’s not an AI. I’m just Autistic,” the professor replied, telling The Post it was “probably” not the first time they’ve been accused of “roboticness,” but is the first time they received the “bot implication.”


Tuesday, May 30, 2023

AI Chatbots and Autistic People

In The Politics of Autism, I discuss the day-to-day challenges facing autistic people and their families.

Amanda Hoover and Samantha Spengler at Wired:
Autism affects people in many different ways, and individuals can have varying needs. ChatGPT may not work for some or even most, but a common feature of autism is that social interactions can be difficult or confusing.

Using a chatbot to help with communication may seem unconventional, but it’s in line with some established ideas used in social work to help people become more independent. “We talk about empowering people and helping people to be fully autonomous and experience success on their own terms,” says Lauri Goldkind, a professor in Fordham University’s Graduate School of Social Service who focuses on the marriage of social work and technology. An accessible tool like a generative AI bot can often help bridge the gap left by intermittent access to mental health services like therapy, Goldkind says.

But the true impact of ChatGPT for therapeutic reasons is largely unknown. It’s too new—WIRED reached out to four clinical therapists and counselors for input. Each of them declined to comment, saying that they have yet to explore the use of ChatGPT as a therapeutic tool or encounter it in their sessions.

But ChatGPT still does not reason very well.  I asked it why Senator Rick Santorum sponsored the Combating Autism Act of 2006.  Among the reasons that it cited: “Santorum has a child with a developmental disability. His daughter, Bella, was born with Trisomy 18, a rare genetic condition that causes severe developmental delays. This personal experience likely influenced his interest in issues related to disabilities and special needs, including autism.”  But Bella Santorum was born in 2008, two years after the bill passed.