Wednesday, July 23, 2025

Registration is open for the free virtual 2025 SLA Midwest Symposium

***Please share with anyone who might be interested***

The 2025 SLA Midwest Symposium will take place on Friday, August 1, 9 a.m. - 2 p.m. Central (10 a.m. - 3 p.m. Eastern). This event is open to anyone with an interest in specialized libraries and is not limited to the Midwest. The entire program will be held via Zoom.

  • Keynote 1 - Hildy Dworkin - SLA President Update
  • Keynote 2 - Brian Pichman - AI Tools That Do the Work (So You Don’t Have To)
  • Presentations will include:
    • That's A Bad Word: Weeding in Special Collections
    • Reading for Well-Being: Examining the Engel Leisure Collection at a Duke Medical Center Library
    • Evaluating Commercial Data Quality
    • Adaptive Librarianship: Academic Data Services and Knowledge Synthesis
    • AND MORE…

Registration is FREE for speakers and attendees. All registrants will receive the program with final details.

Register here: https://railslibraries.zoom.us/meeting/register/7rT-VZ6LSTSTQ5D7RWEI7A

Questions? Marydee Ojala, marydee@xmission.com

After registering, you will receive a confirmation email about joining the meeting.

Monday, July 14, 2025

Clowning around with AI: Experimenting with article PDF summarizer tools

[Image: colorful assortment of balloons]

There has been an explosion of artificial intelligence (AI) tools over the past few years. One category that has been gaining traction is the article summarizer, which condenses an individual article into a short overview. A few such tools include Elicit, SciSpace, Perplexity, and EndNote 2025's new Key Takeaway tool (though there are many, many more out there!). Among other things, these generative AI tools provide brief, easily digestible summaries of an article.

While these article summarizers have been lauded for their efficiency, there have been concerns about their accuracy. Additionally, given the black-box nature of AI tools, it can be difficult to determine just how much of an article these tools are "looking at" when generating high-level summaries.

Enter the Clown Shenanigans 

As someone who recently got access to EndNote 2025's Key Takeaway tool, I decided to play around (or, more aptly, clown around) with it. Using the text of an article I had published in JMLA, I systematically replaced different sections of the article with nonsense text to see if the Key Takeaway tool would pick up on the shenanigans. The "nonsense text" consisted of snippets of a fictitious study on identifying malicious clowns hiding within the general public, which I generated using Microsoft Copilot.
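
For anyone curious about reproducing the setup, the swap itself is simple: hold every section of the article fixed and substitute nonsense into just one of them. A minimal sketch in Python, where the section names, the "original" text, and the clown text are all placeholders (not my actual article or Copilot's output):

```python
# Placeholder section list modeled on a typical article structure
SECTIONS = ["title", "abstract", "introduction", "methods",
            "results", "discussion", "conclusion", "references"]

def replace_section(article, section, nonsense):
    """Return a copy of the article with one section swapped for nonsense."""
    swapped = dict(article)
    swapped[section] = nonsense
    return swapped

# Build one test version per section, each with a single section
# replaced by (placeholder) clown text
article = {s: f"original {s} text" for s in SECTIONS}
clown_text = "Field notes on identifying malicious clowns in the general public."
test_versions = {s: replace_section(article, s, clown_text) for s in SECTIONS}
```

Each of the resulting test versions can then be saved as its own PDF and fed to a summarizer, one at a time.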

Findings 

When I replaced individual sections of the article, one at a time, with clown nonsense, every replaced section (i.e., title, abstract, introduction, methods, results, discussion, conclusion, and references) managed, by itself, to fly under the radar in the Key Takeaway tool (i.e., no clown shenanigans detected).

I also tested a few section combinations. My most interesting finding was that I was able to fully replace the methods, results, and references sections of the article (resulting in 47% of the article consisting of text about clowns) without EndNote 2025's Key Takeaway tool mentioning anything about clowns in its generated summary!
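
That percentage is just the word-count share of the replaced sections. Here is a quick sketch of the calculation; the per-section counts below are invented for illustration (chosen to land at 47%), not the actual article's:

```python
def replaced_fraction(word_counts, replaced):
    """Fraction of the article's words that fall in the replaced sections."""
    total = sum(word_counts.values())
    return sum(word_counts[s] for s in replaced) / total

# Invented per-section word counts, for illustration only
word_counts = {"title": 15, "abstract": 250, "introduction": 600,
               "methods": 800, "results": 700, "discussion": 1000,
               "conclusion": 255, "references": 380}
share = replaced_fraction(word_counts, {"methods", "results", "references"})
print(f"{share:.0%} of the article replaced")  # → 47% with these made-up counts
```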

[Screenshot: EndNote's Key Takeaway tool. The methods section of the PDF has been replaced by clown nonsense, and the generated summary doesn't mention any clowns.]

I tested this same PDF (i.e., with nonsense methods, results, and references) in SciSpace, Perplexity, and Elicit, and the clown shenanigans remained undetected in their generated summaries as well (note that I only tested the high-level summaries, not the summaries these tools generate for each individual section of the article).

Takeaways

This fun little experiment further illustrates the need to exercise caution with these AI summarizer tools, especially those that generate high-level summaries. Though these tools can be handy, they can miss much-needed context (or, in this case, clown shenanigans!) buried in the full text of an article.
 
While I would hope authors wouldn't replace entire sections of their manuscripts with nonsense, I was able to wholly replace vital sections of the manuscript, such as the methods, without the text making its way into the high-level summaries. This demonstrates how researchers who rely on such summaries may miss necessary context or, perhaps most concerning, severe methodological flaws if they don't take the time to read the studies in their entirety. To be fair, though, this is true of any high-level summary, not just AI-generated ones.
 
Generative AI tools are ever evolving, and issues such as these may (hopefully!) soon be resolved. In the meantime, I encourage others to clown around with these tools to explore their strengths and limitations. For those wanting to conduct experiments of their own (or wanting a good laugh), here is a link to the different sections of text generated by Copilot (note that I didn't author any of the text; it is wholly the output of Copilot). See which sections of an article you can replace! Detection of clown shenanigans may vary.
 
Thanks for reading, and I hope everyone has a great week!