60 seconds with christie burris: “AI is a thought partner for learning, creating, and solving.”
- toth shop

- 6 min read
hot take: people – not computers – still say the coolest stuff. this series is dedicated to the soundbites, aha! moments & stories that are undeniably human.
The first time we met Christie Burris was at the Wake Forest School of Professional Studies in-person Wake360 event, and it was immediately clear she’s someone who sees both the system and the people moving inside it. After listening to our thought leadership presentation, she came to us with insightful and important questions – questions about protecting people and intellectual property, about keeping human insight WITH humans rather than plugging it into AI to train AI.
In that moment, she became a thought partner with us.
Christie has spent her career at the intersection of data, policy, people, and public service – most recently as Chief Data Officer for the State of North Carolina – helping complex institutions make smarter, more human decisions. She brings a rare combination of strategic rigor and calm clarity, the kind that comes from having actually built things that work at scale.
In reading Christie’s responses below, you’ll learn more about AI – and, we hope, more about how you want to use AI moving forward.
– meg & steph
toth shop (ts): In 10 words or less, what does artificial intelligence (AI) mean to you?
Christie Burris (CB): AI is a thought partner for learning, creating, and solving.
ts: What's one piece of advice you'd give an everyday user interacting with AI that you don’t think they’ve heard or read?
CB: In the communications field, we often guide folks to “know their audience” when crafting their message. Everyday users should do the same when writing prompts. Never assume the AI knows who your target audience is; you must tell it explicitly, even for simple requests.
Because Large Language Models (LLMs) generally default to a generic and slightly formal tone (assuming the audience is the average internet reader), it can take several prompts to coax the answer and/or context you’re seeking. This wastes power. By defining the reader (aka audience), you unlock the model's ability to adjust vocabulary, complexity, tone, and formatting instantly.
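Christie’s point about defining the reader can be made concrete. Here is a minimal sketch – in Python, with hypothetical function and field names chosen purely for illustration – of an audience-first prompt template that states up front what an LLM would otherwise have to guess:

```python
def build_prompt(task, audience, tone="plain and friendly"):
    """Combine a task with an explicit audience and tone,
    so the model doesn't default to 'average internet reader'."""
    return (
        f"Audience: {audience}\n"
        f"Tone: {tone}\n"
        f"Task: {task}"
    )

# Example: the same question, now with the reader defined explicitly.
prompt = build_prompt(
    task="Explain how a context window works.",
    audience="a curious reader with no technical background",
)
print(prompt)
```

One well-specified prompt like this often replaces the several rounds of coaxing Christie describes.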
And, while we’re on the subject of power, I often remind my friends and family that every prompt entered into an LLM requires significant computing power – the equivalent of pouring out a bottle of water. Be intentional about your use of AI: consolidate questions into one well-thought-out prompt instead of many short ones; use smaller models; avoid excessive playing around; and keep or reuse earlier prompts.
ts: What are the biggest misconceptions people have about AI, the messages people put into AI, and/or a human’s data privacy?
CB: A few misconceptions come to mind. First, people often think of the AI as a super brain that understands the words you type, knows facts, and learns in real-time.
AI is best described as a Super Pattern Machine. Its main brain is built on advanced math and probability—it guesses the most likely next word based on the trillions of books and websites it has read (also known as model training). This is why it can be so confident when it gives you a fake fact (also known as a "hallucination"). It's just choosing a pattern that looks correct, even if it's completely wrong.
This “super pattern machine” has two hidden abilities that make it seem much smarter than it is:
The AI model’s main knowledge is frozen and does not update when you chat. However, it has short-term memory: it uses a context window (like a scratchpad) to remember everything you say in the current conversation, but only until you close the chat.
The AI model is only predicting the most probable words. However, it simulates logic. If you ask the AI to "think step-by-step" before answering, it forces itself to build a logical pathway. Its math is so powerful that it can fake human reasoning and problem-solving perfectly, making it incredibly useful.
A second misconception is that when you close the chat window, everything you typed is erased, like closing an incognito browser tab.
Unless you have specified “private” mode or “history off” when interacting with your favorite LLM, most companies default to keeping all of the messages you type in the chat. This is valuable data to help them study how the AI did and to make future versions smarter through training. You control what you type in the prompt.
Never type anything into an AI that you wouldn't feel comfortable posting publicly on a website or your social media page (considered released to the public). This means no passwords, confidential information, or highly personal information such as names and addresses. And think long and hard about sharing your own creative thoughts and images for refinement without the settings mentioned above.
Pro tip: If you don’t know how to change your LLM to private or history off, just ask it, and it will be glad to step you through it.
ts: What concerns you and/or excites you about the future of humanity and AI?
CB: What concerns me: Proliferation of Misinformation.
AI's ability to create convincing, realistic content (deepfakes, fake articles, fake voices) is becoming a real threat to our online interactions. We risk a future where the average person can no longer trust what they see, hear, or read online. This erodes the shared foundation of facts necessary for debate, the democratic process, and even human connection.
I urge my family and friends to fact check with another source all news/articles/videos/advertisements that seem sensationalized, a little off, or too good to be true. Often they are fake. Additionally, all of this misinformation is also data and data is what trains models. If the pool of online information becomes saturated with convincing lies, AI’s ability to provide reliable, grounded answers is fundamentally compromised.
What excites me: Curation of Knowledge With an Individualized Personal Assistant
AI can act as a thought partner, organizing vast amounts of information into personalized learning experiences for users who understand how to use it well and responsibly (see above answers). It can increase productivity, bridge knowledge gaps and make learning accessible to anyone with a smartphone.
What’s more, AI is already helping to automate online tasks that can be frustrating for many: planning travel, paying bills, cancelling subscriptions, comparing pricing of goods and services, searching for jobs, etc.
ts: Every person we interview answers this same question last – Mile 18 is generally considered to be one of the hardest miles in a marathon. You’re hitting a wall. You’re forced to dig deep. What’s mile 18 in your line of work or at a point in your career? What do you tell yourself when you find yourself in the middle of a mile 18?
CB: I’ve spent the last ten years working in the data and AI space. Data for good is an amazing thing – whether AI is leveraging vast amounts of health care data to improve how we deliver care or advancing our ability to predict dangerous weather patterns through use of geospatial data. But we are creating more and more data – so much so that the world’s annual amount of data is expected to increase to nearly 400 zettabytes by 2028 (see IDC Global DataSphere forecast). To put this in perspective, it would take roughly 250 billion brand-new iPhones to hold 400 ZB of data. The thought of managing all of this data responsibly is incredibly overwhelming. My dig-deep mantra in the middle of mile 18 is the age-old question: “How do you eat an elephant? One bite at a time.” Just like in the marathon scenario, set goals and then break them down into smaller, manageable, time-realistic steps. Celebrate all the small wins along the way and don’t be afraid to FAIL (also known as First Attempt Iterative Learning!).
For those of us who work in this space, I am reminded of one of my favorite Bible verses (Luke 12:48): “To whom much is given, much will be required.” Data and technology leaders have an enormous responsibility to ensure good data governance and ethical use of AI so that these technological advancements benefit, not harm, humanity.