Navigating AI and Data Privacy: The Department of Homeland Security Can Help Get You Started
If you’re a project manager like me, you are likely evaluating solutions to help your clients leverage generative AI. During your research, you’ve probably come across a warning like this: “Be careful with data privacy. The product maker could be sharing the data or training their bots.” This is, of course, good advice. But where should you begin? How do you actually avoid those risks? The truth is, it’s not just about being careful; it’s about knowing what to look for and how to have the right conversations with vendors.
Fortunately, you don’t need to start from scratch. Agencies like the Department of Homeland Security (DHS) have already done some of the legwork, publishing policies you can use with vendors to guard against data breaches and unwanted data sharing.
If you’re guiding a client through the process of evaluating AI tools, here’s a practical, step-by-step guide to get started.
1. Review the AI Product’s Privacy Policy
This is the first step—and an important one. Every AI product has a privacy policy that outlines what data it collects, stores, and shares. When you dive into it, look for:
- What specific types of data are collected (e.g., personal, sensitive, third-party data).
- How the data is shared with other companies or partners.
- What security measures are in place to protect the data.
After reviewing, compare the vendor’s privacy practices with your organization’s policies. If there are gaps or red flags, ask the vendor questions.
2. Use DHS Policies as a Guide
DHS has published policies for using AI tools, like Policy 139-07 and Policy 139-06 for generative AI products. These guidelines are great conversation starters when working with vendors. Here’s how you can use them:
- Ask vendors how their practices align with DHS policies.
- Use these policies to build internal procurement processes and identify key privacy and security requirements.
- Treat them as a checklist to assess whether a vendor’s product fits your needs.
3. This Is Key: Privacy Laws Still Apply to AI
Don’t forget: existing privacy laws still apply, even when AI is involved. AI may feel like new territory, but laws like HIPAA and FERPA don’t change just because you’re using advanced tech. Here are a few examples:
- HIPAA: Any AI tool that touches patient health information must comply with HIPAA’s strict privacy and security requirements.
- FERPA: Schools must protect the privacy of student education records, and that obligation doesn’t go away just because an AI tool is handling the data.
- CCPA: If you’re collecting data from California residents, the AI product must comply with the California Consumer Privacy Act.
Ask vendors to confirm compliance with these laws, and make sure they have the proper safeguards in place.
4. Ask the Right Questions During Procurement and Lean on DHS and Other Federal Policies
When evaluating AI products, ask targeted questions to uncover risks and ensure the tool aligns with your organization’s goals. Start with:
- What data does the product collect, and how is it shared?
- How does the product ensure compliance with HIPAA, FERPA, or CCPA?
- What security measures are in place to protect the data from breaches?
- Can the product be customized to meet your organization’s unique privacy needs?
These questions will help you spot any issues early in the process and make more informed decisions.
Final Thoughts: Lean on DHS and Existing Privacy Policies to Build Confidence
Data privacy in AI can feel overwhelming, but you don’t need to do it all from scratch. Start with small steps: review privacy policies, ask the right questions, and use DHS guidelines as a framework. With each step, you’ll build confidence and create a solid foundation for responsible, secure AI products in your organization.
-Lyria Hojnacki, Project Manager