There and Back Again: Of Tails and Tubes

By Lew Ludwig

Recently, I was pleased to run into two former students collaborating at a whiteboard table in our library. They were deep in thought—or perhaps confusion (are these really so different?)—over something written on the table. After our initial pleasantries, I asked what they were working on: sequences.

They were puzzling over the phrase, “the tail needs to be entirely in the tube.” Before I continue, a bit about these students—let's call them Ari and Blair. As individuals, they are wonderful: pleasant to talk to, friendly, and genuinely sincere. As math majors, they have to work hard. I never like the phrase “math comes easy to them,” as it implies some inherent gift. When we use this phrase, we should consider the prior experiences and opportunities that have led students to this point—think of Gladwell’s ideas about outliers. I suspect that the phrase “it comes easy to them” was said of many of us once or twice during undergrad.

Math does not come easy to Ari and Blair. With this in mind, I examined what they had written on the whiteboard. There were plenty of aₙ’s, epsilons, and absolute values—yet, curiously, no mention of “tails” or “tubes.” They shuffled these symbols around, struggling to grasp their underlying meaning. What they really needed was to visualize the sequence.

I asked, “What is your favorite number?”

Ari replied, “Three.”

I continued, “Create a sequence that converges to three.”

Thinking, thinking, thinking…

Blair offered, “Three minus one over n.”

“Awesome,” I said.

What followed was a Socratic back-and-forth that guided them to plot the sequence on the xy-plane. This led to two horizontal lines—one slightly above three and the other slightly below—with numerous dots between them. Triumphantly, they had found their “tail in the tube.”
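For readers who want the symbols behind the picture, here is a sketch of the computation that Ari and Blair's scratch work was circling around (the use of ε and the ceiling of 1/ε is my notation, not theirs): with aₙ = 3 − 1/n, the “tube” is the band between the lines y = 3 − ε and y = 3 + ε, and the tail lands entirely inside it as soon as n exceeds 1/ε:

```latex
\[
\left| a_n - 3 \right|
  = \left| \left( 3 - \tfrac{1}{n} \right) - 3 \right|
  = \frac{1}{n}
  < \varepsilon
\quad \text{whenever} \quad
n > N = \left\lceil \tfrac{1}{\varepsilon} \right\rceil .
\]
```

For instance, with ε = 0.01, every term beyond a₁₀₀ stays inside the tube: the tail from that point on never leaves.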

Back in my office, this interaction sparked a thought: Could generative AI coach students through a similar dialogue? Eager to explore this idea, I experimented with the new Strawberry version of OpenAI's ChatGPT (details to follow). I set up the scenario with the bot playing the role of a hypothetical student, including details like academic year, major, course load, past performance, and even doubts about continuing the major. Crucially, I sought not answers but guidance towards understanding, through the Socratic method.

Within just five interactions, the bot facilitated a structured approach to visualizing sequences on the xy-plane. This included drawing horizontal lines and placing numerous dots within a tube. Remarkably, the bot guided my hypothetical student to the same conceptual understanding that I had helped Ari and Blair achieve! (My entire exchange can be found here.)

So, what does this mean for us? Am I suggesting that chatbots replace human instructors? Certainly not. However, consider how frequently we encounter students like Ari and Blair, who drift away from a math major because the material becomes too daunting or stops making sense. Often, they don't receive the support they need.

This reflection leads me to consider the broader spectrum of my math majors. Some students naturally “just get it”—they thrive without much intervention and are a delight to teach and challenge. Others may struggle more but often benefit from friendships with those who grasp the concepts more quickly. Through late-night study sessions, those friendships deepen as one student guides the other toward understanding.

What can we do for students like Ari and Blair? While they were fortunate to have each other, many students lack such support and encouragement. Notably, in my interactions with the bot, I found it to be highly encouraging and nonjudgmental—essential qualities that I wish I possessed more of.

Again, I am not advocating for bots to replace humans. The interactions I have with my students are invaluable—we cherish these moments. However, it's undeniable that the math community often loses too many students like Ari and Blair. Previously, we might have simply accepted this as part of the academic landscape. Now, facing budget cutbacks and a demographic cliff, it becomes crucial to retain as many students as possible. Could chatbots serve as a tool to bridge the gaps when a student can’t make office hours due to work, when a faculty member isn't available at 1 AM, or when a student feels too embarrassed to seek help?

No one asked for this “arrival technology” known as generative AI, yet here it is, reshaping our academic and ethical landscapes. While I acknowledge the moral reservations some may have about its use, we must ask ourselves: Is resisting this technology a battle worth fighting? Instead of letting the perception of generative AI as merely a cheating tool widen the divide between students and instructors, we should proactively harness its potential to enhance learning and ethical understanding. My preference is clear: I aim to educate students like Ari and Blair on using this technology ethically and responsibly rather than risk losing them from our programs or, worse, unjustly accusing them of misconduct.

What’s New?

In my last post, I shared three resources that might prove valuable as you navigate this evolving technological landscape. I also cautioned that "you get what you pay for," noting that the paid versions of generative AI are significantly superior to the free versions.

Since then, OpenAI, the creator of ChatGPT, has released “o1-preview,” sometimes referred to as Strawberry. Designed to “think” or “reason” before providing answers, o1 sets itself apart from previous models that primarily relied on predictive text. It employs chain-of-thought reasoning, breaking down complex problems into manageable components. The developers claim that “this makes it particularly adept at tackling intricate challenges in fields like science, physics, and programming.” However, it does take a bit longer to respond—typically 10 to 20 seconds—compared to earlier models.

I initially used the 4o Legacy version for my “tail in the tube” exploration, but o1’s explanations far exceeded 4o’s. While I haven’t fully explored this new model yet, it promises to be a formidable force. And yes, access to this preview version requires a paid account, reinforcing the adage that, indeed, you get what you pay for. As this technology continues to evolve, it's clear that o1 is not just a step forward—it's a leap into the future of educational technology.


Lew Ludwig is a professor of mathematics and the Director of the Center for Learning and Teaching at Denison University. An active member of the MAA, he recently served on the project team for the MAA Instructional Practices Guide and was the creator and senior editor of the MAA’s former Teaching Tidbits blog.