That kind of situation can spiral fast if it isn't caught early. A while ago, we built a chatbot for internal HR queries, and even though it wasn't public-facing, we had to rethink how we handled sensitive inputs like salaries or ID numbers. We ended up implementing data masking and setting strict retention policies from the start. If you're looking for ideas,
https://agileengine.com/ai-studio/ has a section that walks through how to handle privacy and security in AI apps. That helped us set up boundaries early in the design process instead of patching things later. Worth checking out for frameworks and real examples.
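For the masking piece, the basic idea was just to scrub obvious patterns (ID numbers, salary figures) out of the text before anything gets logged or sent to the model. Here's a minimal sketch of that kind of pre-processing, not what we actually shipped; the patterns and names are made up, and real ID or salary formats will vary by country and payroll system:

```python
import re

# Illustrative patterns only -- adjust to whatever formats your HR data actually uses.
MASK_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[ID]"),                 # SSN-style ID numbers
    (re.compile(r"\$\s?\d{1,3}(,\d{3})*(\.\d{2})?"), "[AMOUNT]"),   # dollar amounts
]

def mask_sensitive(text: str) -> str:
    """Replace likely-sensitive substrings before logging or sending to the model."""
    for pattern, placeholder in MASK_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

print(mask_sensitive("My ID is 123-45-6789 and my salary is $85,000.00"))
# -> My ID is [ID] and my salary is [AMOUNT]
```

The point is to do this at the boundary, before the text touches logs, prompts, or storage, so retention policies only ever apply to already-masked data.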