60 seconds with carm taglienti: “use the machine to be consistent, so the human has the bandwidth to be profound.”
hot take: people – not computers – still say the coolest stuff. this series is dedicated to the soundbites, aha! moments & stories that are undeniably human.
Carmen “Carm” Taglienti, Ed.D. has worked with and around AI for decades, long before ChatGPT was running TV ads.
And it shows.
Every time I talk with Carm, whether online or in person, I walk away with an “ah, I hadn’t thought of that” moment. In fact, I usually walk away with two: one about the technology and one about us, the humans who use it.
Today, he brings that mix of technical expertise and a human-centered approach to his role as Chief Technology Officer, Public Sector, for Insight Enterprises, a Fortune 500 solutions integrator. At the same time, he supports the next generation of leaders as academic director of the online Master of AI Strategy and Innovation program at Wake Forest School of Professional Studies.
He’s got some thoughts on all things ethics, humanity, and AI – and we’re excited to share them with you.
– steph
toth shop (ts): In 10 words or less, what does “ethical AI” mean?
Carm Taglienti (CT): Leverage AI techniques with fairness, safety, transparency, and accountability.
ts: Can you tell us about a difficult human conversation you’ve had to navigate while implementing advanced technology?
CT: When people ask me about the hardest conversation I’ve had to navigate while building an AI program, they usually expect a story about data privacy or budget committees. They are surprised when I tell them about "Dr. E.", a higher education professional who lives and breathes the Wake Forest motto, Pro Humanitate (For Humanity). When I showed him the beta version of our AI assessment tool (a system designed to provide instant, granular feedback on student problem sets), he looked betrayed.
"If I outsource my grading to a machine, I am breaking the covenant. Teaching isn't just delivering content; it's evaluating the mind," he argued.
(He wasn't wrong. If we use AI just to be lazy, we fail. But that wasn't what we were building.)
I had to take a breath and ask the uncomfortable question—the one nobody in academia likes to answer out loud: “Be honest with me. Does the student whose paper you grade at 9:00 AM on Saturday, after a fresh cup of coffee, get the exact same evaluation as the student whose paper you grade at 11:30 PM on Sunday?”
He sighed. "No. They don't."
"That," I told him, "is why this is ethical. That is why this is Pro Humanitate."
I explained that we weren't trying to replace the teacher; we were trying to cure the inconsistency. By letting the AI handle the diagnostics (the "X-Ray" of the paper—checking logic flow, citation consistency, and grammar with a thoroughness that never gets tired), we ensure every student gets the same baseline of excellence.
"The AI handles the mechanics," I said. "Which frees you up to handle the mentorship. Instead of being a copy editor for 20 hours, you can be a coach."
In the end, we agreed on a new definition for our program: We use the machine to be consistent, so the human has the bandwidth to be profound.
ts: What question(s) about AI and humanity do you think we’re still avoiding?
CT: Here are the three questions I think we are avoiding:
1. Are we confusing "inefficiency" with "learning"? (This is the big one.) We are selling AI as the ultimate friction-remover. It removes the friction of writing code, summarizing texts, scheduling, etc. But in education, friction is the point. The "waste of time" spent staring at a blank page, wrestling with a thesis that won't form, is not a bug; it’s the feature. That struggle is where the neural pathways are built.
If we use AI to smooth out every bump in the road, are we raising a generation of students who have never had to climb? We need to ask: Which struggles are useless drudgery, and which struggles are essential for the soul?
2. If the AI is the "Average of Everything," do we lose the "Edge of Anything"? Large Language Models work by predicting the most probable next word based on the average of human knowledge. They are, by definition, the Ultimate Consensus. They are brilliant at the conventional. So, my fear is that if we use AI as our primary lens for the world, we begin to think in averages. We might lose the spiky, weird, inefficient outliers—the very things that drive innovation.
The question becomes: How do we teach students to use the tool without becoming the tool? How do we ensure they treat the AI as a floor to stand on, not a ceiling to hit?
3. Who are we when we are no longer the "Smartest"? For centuries, we have defined human value by cognitive processing power. We are the ones who calculate, who remember, who synthesize. Now, we have built something that can calculate faster, remember more, and synthesize broader than we can. If our value isn't "being smart" anymore, what is it?
I love this question, because I think it forces us back to our humanity. If we can't hang our hats on raw intelligence, we have to hang them on judgment, empathy, ethics, and curation. We have to pivot from being "Processors of Information" to "Stewards of Meaning."
ts: Every person we interview answers this same question last – Mile 18 is generally considered to be one of the hardest miles in a marathon. You’re hitting a wall. You’re forced to dig deep. What’s that look like in your line of work (or at a point in your career), and what do you tell yourself when you find yourself in the middle of a Mile 18?
CT: I’ve run six marathons in my life and qualified twice for Boston, so I know what Mile 18 feels like!
In the world of AI strategy and education, 'Mile 18' is usually the moment of implementation. It’s when the excitement of the initial idea has worn off, the complexities of the infrastructure are piling up, and the 'trough of disillusionment' sets in.
When I hit that wall, I rely on two things: my conditioning as a lifelong learner, and my family’s philosophy on grit.
First, I have trained myself not to believe in 'failure' in the traditional sense. In my line of work, a setback isn't a stop sign; it is a data point. It’s an opportunity to iterate. So, when I’m at Mile 18, I don't tell myself, 'I'm failing.' I tell myself, 'I am currently acquiring the data I need to fix the problem.'
Second, I go back to my roots. My parents instilled a very specific rhythm in me: strategize, plan, execute. We didn’t wallow in the difficulty; our family motto was simply, 'We get things done.' It’s a mindset that has trickled down to the next generation. My son, who is a rocket scientist at NASA, took that family motto and elevated it to Ad Astra, per Aspera—'To the stars, through difficulties.'
That is what I repeat to myself. I remind myself that the 'Aspera'—the difficulty—is the only path to the stars.
So, how do I push through? I focus on tenacity and resourcefulness. I take an honest inventory of my own limitations, and instead of letting them stop me, I use them as a trigger to collaborate. I find the person who knows what I don't know.
Mile 18 is just the point where strategy meets stamina. And if you stay curious and keep executing, you always finish the race. Of the six marathons I’ve run in my life, I have ALWAYS finished!