This article was originally posted to the Infinitus monthly AI roundup blog post.
The world of AI and large language models (LLMs) is in a state of constant flux, change, and improvement.
It is my hope that the research papers and articles below will provide some insight into AI and machine learning (ML). August’s theme revolves around how humans will be augmented by AI, not replaced by it.
Here are six pieces of AI research that piqued my interest this month.
Catching Up On The Weird World of LLMs by Simon Willison
If you only have time to read one piece this August about the great big world of large language models (LLMs) and AI, "Catching Up on the Weird World of LLMs" is that piece.
In one (very, very long) blog post, Simon Willison summarizes the last few years of development in the space of LLMs, the technology behind tools like ChatGPT, Claude, Bard, and more. In the words of the author himself, it covers a lot of ground: what LLMs are, what you can use them for, what you can build on them, how they're trained, and many of the challenges involved in using them safely, effectively, and ethically. Whether you have been working with generative AI for years or are completely new to the subject, I think this is the perfect piece to read.
AI and The Automation of Work by Benedict Evans
What's going to happen to the concept of work as AI becomes smarter and can do more of what humans can traditionally do? More importantly, how should we address the worries of automation and resulting job loss?
Mr. Evans, a former partner at Andreessen Horowitz, argues that we actually shouldn't panic. AI and LLMs are the latest wave of automation, similar to previous innovations like the typewriter and the spreadsheet. He postulates that history shows total employment doesn't decrease with more automation (see the Jevons Paradox); instead, new kinds of jobs get created. Furthermore, adoption of new technology tends to be slower than we expect, so while AI will impact work, it won't be an immediate disruption.
What we should worry about, instead, is developing ethical and responsible applications of AI that augment humans rather than replace them. Essentially, our work at Infinitus should guide AI development positively, in a way that provides maximum benefit to society.
Super Mario Meets AI: Experimental Effects of Automation and Skills on Team Performance and Coordination by Harvard University and Columbia University researchers
On the topic of human + AI collaboration, there is a burning question: are we actually more productive with AI? And in what areas should we be more cautious in our AI usage?
This paper provides insights we should consider when developing AI systems meant to collaborate with humans. The experiments conducted by Columbia Business School found that introducing automated agents to teams actually decreased performance, especially for low and medium-skilled teams. The paper argues this is because humans prefer working with other humans - those paired with AI reported lower trust and effort.
So while AI can excel at individual tasks, when it comes to collaboration, there are motivational downsides we must address. As an AI healthcare company, it reminds us to focus not just on technical capabilities, but the social experience of working with AI teammates. Rather than fully automating teams, we should aim for AI that supports and motivates human workers. The paper is a call to design AI systems that account for the nuances of human collaboration and team dynamics. Our AI needs to complement and empower people, not just replace them.
An Internet Veteran's Guide to Not Being Scared of Technology by Mike Masnick of Techdirt
Requires subscription to The New York Times. Here’s an archive link (shh).
If you don't know who Mike Masnick is, he is the founder and editor of Techdirt, one of the longest-running tech blogs. Mike is a pragmatic optimist at heart, and has a deep understanding of technology's impact. That perspective matters, because a common reaction to the rise of AI and LLMs is anxiety and fear of the new technology.
He recently advised Hollywood professionals on the rise of AI, emphasizing the "AI plus human" synergy and urging them to harness AI's potential rather than resist it. Drawing from his vast experience in the tech industry since 1998, Mr. Masnick's core message is consistent: embrace technological change but be wary of hasty decisions that might have unintended consequences.
Personally, I find Masnick's approach invaluable for humanity as a whole. His insights remind us that while innovation is inevitable, it's crucial to ensure that our advancements align with human interests. His advocacy for "protocols, not platforms" emphasizes interoperability and decentralization, which could redefine how we approach AI integration in various industries.
Capabilities of GPT-4 in Medical Challenge Problems by Microsoft and OpenAI
I discovered this article when trying to find scientific sourcing on GPT-4 with regard to healthcare and medical knowledge for drug and therapy research, a project I was working on at Infinitus. That said, this article is written by Microsoft and OpenAI about their own technology, so take it with a small grain of salt. While GPT-4 scored well on medical licensing exams, real clinical application still requires more work.
The risks around accuracy and bias mean we must be extremely cautious about any medical applications, even with human oversight. However, the qualitative examples show the potential for AI to aid physicians - explaining diagnoses, generating clinical scenarios, etc. If we can address the risks, AI augmentation could one day help provide more personalized instruction for medical students.
For Infinitus' use of LLMs in our work, the paper is a reminder that benchmarks only tell part of the story. Before deploying any high-stakes application, we need rigorous real-world testing, focused on safety and accuracy. Medicine also requires special attention to fairness - we must ensure our systems don't propagate inequities in care. While research like this is promising, it's again a reminder that responsible development of AI is crucial, especially for our field, where our work has very real, direct human impacts.
The Role of GPT-4 in Drug Discovery by Andrew White of Vial.com
Andrew White is one of the people who devised the scientific examples in the GPT-4 technical report. He writes that GPT-4 has shown potential in assisting the drug discovery process. While it cannot directly discover new drugs, it can propose new compounds for further study.
For instance, when targeting the protein TYK2 for psoriasis treatment, GPT-4 can conduct literature searches, identify related drugs, check for patents, and propose modifications to create novel compounds. It can then verify the novelty of these compounds and suggest synthesis routes for those that aren't purchasable. However, the real-world application of these compounds requires extensive testing and clinical trials, which GPT-4 cannot automate (duh). So while GPT-4 shows promise in augmenting drug discovery, as with the pieces above, AI currently complements, not replaces, the expertise of professionals in the field.
What AI research or articles are you reading this month?
Feel free to share them with me on Threads, LinkedIn, in the comments below, or just by replying to this newsletter!