Engaging Thought

Education, Society & Culture

AI as Your Public Service Partner

Explore a hands-on approach to using AI tools effectively and responsibly in local government. This episode demystifies prompt writing, ethical rules, and practical know-how for public sector professionals. Hosts Andre and Eric blend expert advice with relatable scenarios for everyday use.

This show was created with Jellypod, the AI Podcast Studio.


Chapter 1

Setting the Mindset for Government AI Use

Andre

Alright, welcome back to Engaging Thought. I’m Andre, and as always, I’m here with Eric. Today we’re diggin’ into something that’s been on my mind for a while—AI in government. You know, Eric, I keep hearing folks talk about AI like it’s about to take over their jobs, or even run the show at city hall. I mean, no. That’s not what this is. We gotta start with the right mindset. AI? It’s a tool, y’all. It’s like, well, it’s like the calculator or that old city zoning map. It’s helpful, but it’s not deciding anything for you.

Eric Marquette

Right. It’s funny you say that, Andre, because I remember early on, even in media, people thought automation would just do all the heavy lifting. But in government, especially—man, the stakes are higher. You have privacy, public trust, yeah, and that extra layer of transparency the public expects. And you’re accountable for everything that comes out of those tools. That’s a lot.

Andre

Absolutely. Accountability is everything. You can have the fanciest tech, but you—meaning the person, the professional—are still responsible for the judgment and the values behind every choice. I always go back to HR. I’ve been in plenty of situations where we used tech to sift through resumes or recommend candidates. But no algorithm can tell you, like, "Hey, how’s this gonna play with our multi-lingual team?" Or, "What about equity in this promotion decision?" The AI can suggest, but only a human can weigh those bigger consequences and balance equity with policy, you know?

Eric Marquette

Exactly. And public trust? It’s fragile. If someone thinks a faceless program picked their application or decided a service, the goodwill we talked about last episode—boom, gone. That’s why, like you just said, AI can support, not replace, the real expertise and values it takes to serve the community the right way.

Chapter 2

Mastering Prompting: Practical Skills and Pitfalls

Eric Marquette

Okay, let’s get practical here. This whole “how do you talk to AI” thing trips people up. I swear, the first couple times I used chatbots for scriptwriting, I’d type something like “Help me write a script”—and get total nonsense, really bland stuff. I thought the tech was broken. I just didn’t know how to ask.

Andre

You’re not alone on that, Eric. That’s a classic move. The prompt you give—it's like setting a destination in your GPS. A weak prompt? That’s just, “Take me somewhere.” A strong prompt is more like, “Take me to 123 Main Street, avoiding highways, and play some Prince on the way.” The more specific you are—context, tone, who’s reading, what format you want—the better the answer.

Eric Marquette

Yeah, and that applies anywhere. Let’s do a quick compare. If I say, “Summarize this report,” I might get a wall of text with no breaks. But if I say, “Give me three bullet points, plain language, for a city council update,” now I’m getting somewhere. I learned that the hard way through lots of trial and error. Seriously, it was like AI improv class in my inbox.

Andre

That’s it, right there. And prompt engineering, or whatever you wanna call it, doesn’t have to be high-tech magic. It’s: give context—like “I’m writing to a resident”; be specific—“three sentences, no jargon”; and show examples—say, “Here’s how I’d write the intro.” Then you iterate. Ask for a rewrite: “Make it simpler,” “Add an equity lens.” That’s how you work with your assistant—not let it run wild.
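Andre's recipe (context, specificity, an example, then iteration) can be sketched as a small helper that assembles prompt text. This is a minimal illustration, not a real AI integration: the function names and fields are hypothetical, and no chatbot API is called; it only builds the text you would paste into a tool like Copilot or ChatGPT.

```python
# Hypothetical sketch of the prompt recipe from the episode:
# give context, be specific, show an example, then iterate.
# No AI service is contacted; this just assembles prompt text.

def build_prompt(context, task, constraints, example=None):
    """Assemble a structured prompt from the pieces Andre lists."""
    parts = [
        f"Context: {context}",
        f"Task: {task}",
        f"Constraints: {'; '.join(constraints)}",
    ]
    if example:
        parts.append(f"Example of the tone I want: {example}")
    return "\n".join(parts)

def refine(prompt, revision_request):
    """Iterate: append a follow-up instruction like 'Make it simpler.'"""
    return f"{prompt}\nRevision: {revision_request}"

prompt = build_prompt(
    context="I'm writing to a resident about a water-main repair.",
    task="Draft a short notification email.",
    constraints=["three sentences", "no jargon", "plain language"],
    example="Hi neighbors, quick update from the city...",
)
prompt = refine(prompt, "Add an equity lens and simplify the wording.")
print(prompt)
```

The point of the sketch is the shape, not the code: a weak prompt is just the `task` line alone, while a strong one carries context, constraints, and a sample of the tone, then gets refined in a second pass.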

Eric Marquette

And don’t forget, the AI will try to please you even when it’s totally making things up! You gotta be ready to spot those “hallucinations,” right? Especially if it’s referencing a law or policy—fact-check, always. That applies in city work more than anywhere.

Chapter 3

Safe and Strategic AI Adoption in Daily Tasks

Andre

So once you get that prompting under control, the next step is using AI in the right places. This is, in a way, about drawing bright lines—what’s a green-light, what’s, uh, “no-go.” It’s tempting to push the envelope, but city work? There are guardrails for a reason.

Eric Marquette

There’s a real risk if you get careless. Like, using AI to draft emails or brainstorm outreach—awesome. Summarizing a giant document after a long city planning meeting? Say hello to your new best friend. But feeding in resident info or confidential case details? That’s “absolutely do not.” I mean, you could leak private data, and you probably wouldn’t even know it until it hit the news.

Andre

Right! I remember a situation with the city’s comms team—Minneapolis, actually. They were deciding which tool to use for a press statement draft. The project manager used Copilot in Outlook, which is locked down, and not ChatGPT, because that’s where you risk data getting out. By sticking to green-light tasks—drafts, summaries, organizing notes—they kept things efficient without risking resident info. That’s the difference. Pull ChatGPT in for brainstorming or rewriting plain language, but keep Perplexity for fact-checking and research—never for handling legal docs or sensitive HR records.

Eric Marquette

Makes sense. Each tool’s got its lane: Copilot for the office stuff, ChatGPT for creativity, Perplexity for research. But there’s no replacement for your own review—especially if you’re going to hit send or share it out to the public.

Chapter 4

Building Trust and Ensuring Transparency in AI Use

Eric Marquette

Let’s be real—none of this works if people don’t trust what’s happening with AI inside government. And there’s a lot of skepticism out there. So how do you show the public you’re using these tools responsibly?

Andre

It starts with clear communication, like open statements on your website or during council meetings—explain what AI is doing, and what it’s not doing. That’s what folks need: transparency, not more mystery. It’s not “the robots are coming”; it’s “here’s how we draft bulletins, here’s how staff get quick research, but humans always have the final word.”

Eric Marquette

And, you gotta train staff, right? Not just hand them a login and say, “Good luck!” Give real-world examples, like what tasks are safe, what’s off-limits, and how to spot when AI might be stretching the truth. It’s about building their confidence, so they don’t over-rely—or get spooked by—all this new tech.

Andre

Feedback channels matter too. Make it easy for people—staff or residents—to flag problems, ask questions, or report a weird AI suggestion. That loop keeps the system honest and actually shows you’re listening and willing to course-correct. That’s how you build trust and make sure innovation serves everyone, not just the tech-minded folks.

Eric Marquette

Exactly. So—if you remember one thing from today, it’s this: AI is a partner, not the boss. You stay in charge, you tell the public what’s happening, and you listen if something goes wrong. Thanks for walking through it, Andre—always sharp insights from you.

Andre

Thank you, and thank you all for listening. Keep those questions coming, and remember—next time, we’ll go even deeper into how to make tech work for, not against, our communities. Be well, Eric. Everybody take care!

Eric Marquette

You too, Andre. Until next time on Engaging Thought!