
Assumed within the term “artificial intelligence” is a difference between that kind of intelligence and the real thing—actual, true, non-artificial intelligence, whatever that is. It’s like the difference between artificial flowers and real ones; artificial flowers lack certain key characteristics of real flowers (notably, the things we most appreciate—subtle or powerful scents, the silky way they feel to the touch, their delicate combination of grandeur and fragility). But artificial ones do possess other features that real flowers cannot claim (durability, sustainability, and manufactured consistency). Still photographs taken with careful attention to lighting and setting may make it hard to differentiate real from artificial flowers at a distance, but all it takes is a closer look and a quick touch to remove any uncertainty.
Wait, though, what do we mean by photographs? It may be hard to tell the difference between the old kind that required actual film and developing in darkrooms and the digital “photos” (such as the ones we take too many of with our smartphones). Digital pictures can be very easily adjusted and manipulated, so it’s not as easy to tell whether an image is real as it is with real and artificial flowers. Distinguishing a digital photo from an old-style one can be challenging, and now artificial intelligence applications have the capacity to generate images that look enough like film or digital photographs that we can be pretty easily surprised, fooled, or misled.
Taking another, and deeper, step: it’s even harder to differentiate real from artificial statements, never mind feelings, though it’s essential for us, as human beings living in community, to be able to do so. Just a few painful, tedious minutes with today’s social media or television news immerse us in statements, reports, and claims that may or may not be “true”—so much so that the concept of “true” has itself become uncertain and debatable; hence “fake news.” We are not well equipped to tell the difference between real and fake news; “facts” have various degrees of relativity and reliability and are often presented in ways that weaken our ability to determine their credibility.
As for feelings, well, you might remember Billy Joel’s song, “Honesty,” from his album “52nd Street,” released almost exactly 47 years ago, on October 12, 1978:
If you search for tenderness
It isn't hard to find
You can have the love you need to live
But if you look for truthfulness
You might just as well be blind
It always seems to be so hard to give
Honesty is such a lonely word
Everyone is so untrue
Honesty is hardly ever heard
And mostly what I need from you
And then Emily Dickinson, the belle of Amherst, takes on the same dilemma in her terse poem, “I like a look of agony”:
I like a look of Agony,
Because I know it's true -
Men do not sham Convulsion,
Nor simulate a Throe -
The Eyes glaze once - and that is Death -
Impossible to feign…
Interpreting feelings is a distinctly human problem, and one with a long history. Where are the objective criteria—the key performance indicators, perhaps—to be used to measure affection, friendship, or trust? Is there any way to tell whether what someone says they feel is what they actually feel? Authors and artists over many centuries continue to show how human beings wrestle, mostly unsuccessfully, with that question.
More generally, is telling the difference between what’s real and what isn’t part of what students should learn in college? Should they become not just digitally literate, but also cautious analytical thinkers, increasingly able to distinguish fact from fiction? Some colleges and universities have formulated and started to teach “digital hygiene” to help students protect themselves from online harm and manage the torrent of information that slips and slides into their online consciousness; that is a new dimension added to the traditional “written, oral, and digital communications skills” in resumes and position descriptions.
II
But back to differentiating real and artificial intelligence: In higher education there has long been great concern about the validity and trustworthiness of students’ products, from journals to poems, essays, stories, laboratory reports, and research papers. The underlying question is just this: “Is this the student’s own work?” Which means, did the student create this as an original product themself, without assistance from anyone else and without using sources or materials or tools that provided some or all of it for them? If a student affirms falsely that a term paper is their own work, they violate prohibitions on both lying and cheating; that is the fundamental basis for honor codes in colleges and universities everywhere.
Telling the difference between a student’s own original work and something the student used other resources to prepare is, however, often not easy—hence the development and marketing of software applications designed to detect such misrepresentations. The point is to distinguish what the student’s own intelligence fashioned from what the student imported and passed off as their own work—which was, in a somewhat primitive sense, a kind of artificial intelligence. Except for this: it required some industry and intelligence on the student’s part to know where to look and what to find and how to embed it in a paper or other creation that had to look and sound sufficiently real to avoid claims of plagiarism in the sphere of academic misconduct.
Into that historical and perennially uncomfortable crucible of fact, truth, and prevarication arrived what we now know as artificial intelligence, or AI, which differs from its various predecessors (such as the ordinary copying of someone else’s work) by its increasingly impressive capacity to close the perceptible gap between artificial and real. No need to go find some obscure volume in the stacks of the library that contains the perfect paragraph for tonight’s thousand-word essay—instead, just ask the AI software application to do most or all of the assignment. The large language models that provide the intelligence in artificial intelligence draw upon unfathomable numbers of words, sentences, paragraphs, and whole written or spoken pieces to come up with answers to more questions than we can think to ask. In less time than “seconds” conveys, they can prepare whole outlines, letters, essays, stories, and reports that—and here lies one of the greatest concerns—are not only hard to distinguish from a human being’s original work but also better than what many, and soon most, human beings can do. Identifying papers written by AI might mean focusing on the better ones, not the worst ones.
The ways in which we have come to name and talk about artificial intelligence applications have made telling the difference between non-artificial (which we might call “human”) intelligence and the artificial kind more complicated and confusing. When an AI application absorbs enormous amounts of information and data and starts to categorize and use those resources, AI engineers say it is “learning,” or being “trained.” If it produces errors or makes up things that do not actually exist (citations of court cases that never happened, for example), they say it has had a “hallucination.” The problem, of course, is that training and learning and hallucinating are things human beings do; the way we understand what those terms mean is far richer and more nuanced than what they mean when applied to AI. Can we stretch “learning” to embrace what a software application does with new information? Is making that “learning” intentional by feeding certain types and amounts of information to the application to improve its results actually “training” it?
Sleep with one eye open, perhaps; some theorists and advocates for AI predict that AI will soon not only be able to learn and hallucinate, but also to have consciousness, intentionality, and free will (subject only, perhaps, to unplugging it). In any event, it’s time to watch the movie “2001: A Space Odyssey” again; it’s been almost 60 years since “HAL,” the “HAL 9000” computer on the spaceship “Discovery,” decided to kill its human colleagues. Meanwhile, fact has been gaining on fantasy.
III
Would you dare to enter a contest with an artificial intelligence application to write, say, a 500-word essay on the development of the atomic bomb? And, even if you did, could you do it in less than two seconds? If one of us used artificial intelligence to prepare an operational document—a summary, an agenda, a routine letter or notice—would we be concerned about doing so, or just relieved by the reduction in our task list? What, then, is the problem if a student uses one of the commonly available artificial intelligence software applications to write a paper for a class?
There would be more than one problem. Let’s start with the easy ones. First, by definition, the student would be submitting something that was not their own original creative work, though many students (and perhaps others) would argue that the paper, when ready for submission, would at least reflect how the student understood the assignment and how they used AI to generate some, most, or all of it. Nonetheless, submitting work that was not original would violate both the lying and cheating standards mentioned earlier. That said, it is clear that many (honestly, most) undergraduates use AI to assist in, draft, or fully write something required in their classes—as well as the personal essay they prepared for their admission application.
The second problem is thornier. Very briefly summarized, it is that the student did not have to do the work of researching, thinking about, drafting, reviewing, revising, and finalizing the paper in question; the AI application did most, or all, of that for them. Were that true for one paper or another class assignment out of many the student would prepare, it might be a brief distraction, an aberration. But if the student is using AI to prepare and submit assignments on a routine basis, they sacrifice an important opportunity that is built into the heart of liberal education—learning how to think, analyze, write, revise, and refine their work. If AI does the conceptual and thought work for them, what do they get for completing the assignment? They know little more about analysis, critical thinking, and improving the quality of their own work than they did before AI offered new pathways to less cerebral intensity. They may have learned how to find material to satisfy an assignment, but they didn’t learn much more about the content, or meaning, of the assignment itself. And the application’s writing may be better than theirs, but using it did not improve their own writing at all. They might not even recognize their own submission if asked to claim it from a small collection of other students’ papers.
There is, though, a third and more serious problem: artificial intelligence substitutes (imperfectly) for the human interactions—conversations, critiques, office hours, shared experiences—that are the source of so much learning in college. Having to do research on, think about, and write something regarding a topic or question assigned for a class might send a student into the library, across the hall to talk to a friend, or to the office of a faculty member or student affairs professional. In the process of investigating that topic or question, the student might encounter people whose views differed from their own, or who had personal or family knowledge related to the assignment. That kind of experience, multiplied and amplified over four years, contributes to the development and maturation of the student in ways not sufficiently measured by the time required or the grade on any assignment, or in any of their courses. But it matters a great deal, because it is how the student comes to know themself, as they learn about the world around them; it is the process of cultivating humanity. Learning and student development are collaborating partners; each occurs in the penumbra of the other.
IV
In 2019, in an essay for the Journal of College and Character, I wrote that “Learning about what it means to be human depends above all else on significant, substantial, and ongoing personal engagement of students with faculty and staff who serve as mentors, advisors, and guides; learning about being human requires time with, and investments by, more seasoned humans. Such engagement is as expensive as it is essential.” Artificial intelligence was just a toddler then; six years later, it has become a force—and higher education is unprepared for its consequences for our mission-centered, human-oriented responsibility of cultivating humanity.
The list of factors that limit students’ personal access to and comfort with members of the faculty and staff has grown wildly, even explosively, since then. Think of just a few—(1) the pandemic and its legacy of isolation and distancing, (2) the challenges to enrollment and budgets inherent in the demographic cliff (which is here, full on, in 2025), (3) the continuing losses of positions, especially among the professional staff and contingent faculty, (4) the financial challenges facing institutions, families, and students themselves that require students to work more on and off campus, (5) the seemingly unending stream of attacks on higher education under the current presidential administration, and the anxiety, fear, and instability that those attacks continue to cause, and (6) the progressive infusion of a business, or corporate, mindset into institutional governance, decision-making, and resource allocation. Remote classes, remote office hours, remote counseling sessions, and remote administrative tasks further de-humanize the university experience and have made students’ relationships with administrators, service providers, and faculty far too transactional, turning too many professional staff into concierges.
Once again, from my essay in 2019: “Higher learning is not simply the acquisition of a credential, and a degree is not a deliverable. To understand an academic credential as a deliverable — purchased by tuition and provided in return for the accumulation of a required number of credit hours — is to see higher education not only as a business, but as a consumer service; it turns a degree from something earned and commodifies it into a product.” Humanity, empathy, and wisdom are not commodities; none of them can be purchased, and none of them can be created in isolation. Each of them is essential to students’ ability to flourish despite the uncertainties, turbulence, and doubts of the world they have inherited. And developing each of them requires substantial, ongoing 1:1 contact between students and mentors, advisors, professors, graduate assistants, and student affairs professionals.
The challenge of this moment in higher education then is not just monitoring or limiting or controlling artificial intelligence in some way; that technology—which, remember, is what it is; it is not human, and it does not learn or hallucinate as humans do—is here to stay and will be with us, faculty, staff, and students, for the proverbial duration. Faculty certainly need to create the conditions in which students can actually and personally learn to think, analyze, write, create, and assess their own work and identify ways to improve it; student affairs professionals have an analogous challenge. All of us, as educators and colleagues, must understand both the opportunities AI creates and the dangers it allows but too often obscures. And all of us, as educators and colleagues, must remember the strength that arises from cultivating humanity, and the resources and responsibilities required to elevate and sustain that process in students’ lives.
In response to a question I asked about the value and importance of true personal contact between educators and students, ChatGPT (the most commonly used artificial intelligence application in higher education) sent this remarkable statement: “Education is, at its core, a human relationship. Beyond curriculum and grades, what truly shapes learning is the genuine personal connection between teachers and students. When teachers take the time to know their students as individuals, education becomes more than instruction—it becomes transformation.” There, ChatGPT seems to argue for exactly what is needed to balance its influence and the effects of its utilization. It goes on to list seven key factors related to that personal connection between students and educators:
- Foundation of trust and respect
- Motivation and engagement
- Individualized learning and support
- Emotional and social development
- Equity and inclusion
- Academic success and long-term impact
- The human core of education
And it concludes that “In the end, what students remember most are not the facts they memorized but the teachers who believed in them. True personal contact is not an extra—it’s the heart of meaningful education.”
There it is. Even ChatGPT “knows” that artificial intelligence does not offer what students most need, just like that vase of artificial flowers will never replace the real thing.