Teaching Students AI Literacy

UC Davis Workshop Explains How Large Language Models Work

Carl Stahmer, director of the UC Davis Library's DataLab, stands in front of a group of students at the first "AI Literacy and Logic Workshop." (Greg Watry/UC Davis)

Professionals and students alike are contending with the rise of artificial intelligence, or AI. In our modern, technological world, AI literacy is no longer optional; it’s foundational.

And that’s why the College of Letters and Science at UC Davis and the UC Davis Library’s DataLab are offering a unique opportunity for undergraduate students to learn the skills required to be at the forefront of this revolution.

At a one-day intensive workshop, the first in a series being held through May, more than 30 College of Letters and Science students gathered to learn how AI systems work, the ethics of their use and strategies for getting the best results from them.

Why AI literacy matters for students today

“AI is an essential part of a growing number of workplaces, and you’re expected to be able to work with these tools,” said Carl Stahmer, director of the DataLab and an associate adjunct professor of English. “What sets you apart is understanding the tools so that you can adapt as new tools come out and be an efficient user, so understanding when to use it, when not to use it, when it is actually going to make me more efficient, when it’s going to make me less efficient, which is the primary drive of business.” 

“No matter where you go in life,” he added, “that ability to be able to use what's here now and adapt to a rapidly evolving future is a differentiator, both for career success and, more importantly, for life success.”   

During the “AI Literacy and Logic Workshop,” which was designed for students across the liberal arts, social sciences and STEM disciplines, students participated in four deep-dive learning modules. 

Inside large language models: how AI actually works 

Stahmer first gave students a peek beneath the proverbial hood of AI, explaining the history of large language models, such as OpenAI’s ChatGPT and Google’s Gemini, and the technologies and systems underlying them. Such large language models employ statistical and mathematical methods such as vector models and maximum likelihood estimation.

“What lives underneath a large language model is a giant spreadsheet where you have each word,” Stahmer said. “You’ve got millions, if not billions, of columns.”

Large language models are prediction algorithms, and words with similar meanings or close associations are grouped into the same semantic space. This means that the more context and detail given in a prompt, the better the output from the machine. Specificity is key.
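Stahmer’s “giant spreadsheet” intuition can be sketched in a few lines of Python: each word becomes a row of numbers (a vector), and words used in similar contexts end up with similar vectors, placing them in the same semantic space. The three-number vectors below are invented purely for illustration; real models learn hundreds or thousands of dimensions from enormous amounts of text.

```python
import math

# Toy word vectors (invented for illustration; real models learn
# hundreds or thousands of dimensions, not three).
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """Score how closely two word vectors point in the same direction.

    Returns a value near 1.0 for words in the same semantic space
    and a lower value for unrelated words.
    """
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Related words score much higher than unrelated ones.
print(cosine_similarity(vectors["king"], vectors["queen"]))  # close to 1
print(cosine_similarity(vectors["king"], vectors["apple"]))  # much lower
```

This is also why specificity helps: a detailed prompt narrows the region of semantic space the model draws its predictions from.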

A presenter points at a projected flowchart slide; audience seated in foreground

Why you should be skeptical of AI outputs

Stahmer emphasized that students should operate from a skeptic’s perspective when using AI.

“Assume your AI is lying to you,” he said.  

Because many large language models are trained on the vast amount of content on the World Wide Web, where misinformation can proliferate, users should remember that the models aren’t necessarily trained on truth. They’re trained on everything that’s been said and written, regardless of its factuality. 

While language models excel at tasks that hinge on the relationships between words, Stahmer said that students should vet any information they glean from a large language model. After all, in both university and work environments, the user is responsible for the work they produce, including work produced in collaboration with a large language model.       

The ethics of generative AI

The workshop also covered the ethics of AI usage. Reynolds, associate director of the DataLab, reviewed the environmental and societal impacts of generative AI.

“Every time we’re hitting ‘Enter,’ we’re actually engaging with a global network of data, energy and human labor,” Reynolds said. “That’s all happening in the background.” 

Reynolds’ section offered a sobering look at the AI space. In addition to reiterating the importance of skepticism, she recommended students think of themselves as “truth auditors” and seek out generative AI tools that align with their moral values. 

How was the large language model built? What are the environmental and energy costs of a single prompt? What tasks are worth that cost and what tasks aren’t? These are all things people should consider when using generative AI. 

“You don’t have to use a sledgehammer to crack a nut,” Reynolds said, noting that an AI prompt uses roughly 10 times more energy than a Google search query. 

So for small tasks, such as writing a 100-word email, it’s probably not worth using generative AI. 

Reynolds also reviewed privacy concerns and urged students to consider, “Would you be comfortable having your prompt on a billboard?”   

How to use AI effectively (and responsibly)

Stahmer followed the ethics discussion with a lecture on getting the best answers from generative AI. He cautioned students not to farm out their thinking to AI, but to use it as a collaborator, knowing that its answers aren’t always correct. 

So how does a user get the best answers from an AI? By learning how to inhabit the way AI thinks and writing prompts that fit that mold. It’s not like talking to another human, Stahmer said. 

To illustrate this, Stahmer urged students to experiment with markdown prompting. Unlike conversational prompting, which resembles a chat session between humans, markdown prompting uses hashtags and asterisks to break the prompt into sections that can include things such as role, objective, context, specific instructions and expected outputs. It’s a detailed methodology that takes time to master. 
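A markdown-style prompt of the kind Stahmer described might look like the following. The task and details here are invented for illustration; the point is the sectioned structure, not the specific wording.

```markdown
# Role
You are an editor for a university science newsletter.

# Objective
Summarize the attached research abstract for a general audience.

# Context
Readers are undergraduates with no specialist background in the field.

# Specific Instructions
* Keep the summary under 150 words.
* Define any technical term the first time it appears.
* Do not speculate beyond what the abstract states.

# Expected Output
One paragraph of plain-language summary, followed by a one-sentence takeaway.
```

Breaking a prompt into labeled sections like these gives the model the context and constraints that, as Stahmer noted earlier, lead to better outputs than a casual one-line request.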

“We shouldn’t be looking at AI as a way to save time,” Stahmer said. “Look at it as a way to produce a better product.” 

If a user’s sole mission is to use AI to get something done fast, that’s a recipe for failure, he said.  

Student perspectives on AI learning

At the end of the workshop, students received a certificate of completion for participating in the day’s events and submitting a short reflection paper. 

Anthropology student Nathalie Mvondo said she found the workshop to be a useful way to get ahead of the curve when it comes to AI. 

“I’m learning a lot,” Mvondo said. “I came with the mindset of, ‘There is a lot that I don't know.’” 

Suahn Cho, a student majoring in psychology and in neurobiology, physiology and behavior, added that she was eager to let her professors and fellow students know about the workshop.  

“I especially liked the ethics part,” Cho said. “This kind of workshop was a deep enough dive for us to learn how to use AI, and the good things and bad things associated with it.”    
