Artificial Intelligence is changing how we work, live, and interact with technology. In Fort Collins, as in the rest of the world, it’s no longer a futuristic concept. AI is embedded in tools we use daily — from customer service chatbots, to document drafting assistants, to photo apps that suggest tags. These systems are powerful, but they’re also reshaping the privacy and security landscape in ways that most people haven’t considered.
On August 18, I spoke with the Fort Collins AI community about how to navigate this new environment responsibly. This post captures the main ideas from that talk, so you can revisit them — or catch up if you weren’t there.
Our Approach: Risk Management First
One of the most important points to understand is this: AI is evolving too quickly for one-off fixes to keep up. A new tool today may be outdated in six months. Threats change as fast as the technology. If our approach to security is chasing tools, we’ll always be behind.
Instead, we need to ground ourselves in risk management practices — the same disciplines that keep businesses safe in areas like finance, compliance, and cybersecurity. Risk management focuses on principles that remain relevant no matter how technology shifts.
When we use this approach for AI, it gives us three big advantages:
- Resilience – We don’t have to reinvent the wheel every time something new comes out.
- Clarity – We know where to focus: privacy, security, and accountability.
- Sustainability – Good governance makes AI safer to adopt, easier to scale, and less likely to create surprises later.
At Fenix Cyber, this is the philosophy we apply for our clients. It’s not about slowing AI down — it’s about making sure AI adoption is done in a way that’s safe and sustainable.
The AI Landscape: How Your Data Is Used
When most people think about AI, they picture chatbots like ChatGPT. But the reality is much broader. There are different categories of AI tools, and the way each one handles your data is crucial to understanding the risks.
Free tools that train on your data
If you’re using a free AI platform, chances are you are paying with your data. These tools often improve their models by learning from the prompts and uploads of users. Your information may also be aggregated and sold as part of analytics packages.
- Example: Public, free versions of chatbots or browser-based “AI content” generators.
- Risk: If you paste sensitive information — say, a client contract or medical note — it may end up feeding future model updates or being stored in ways you don’t control.
Paid tools that promise not to train on your data
Some companies have responded to privacy concerns by offering paid subscriptions. Their pitch is: “We make money from your subscription, not your data.” That’s an improvement, but it isn’t the full picture.
- Example: Microsoft Copilot for Business, enterprise versions of ChatGPT.
- Risk: “No training” does not mean “no storage.” Your inputs may still be logged, retained, or reviewed under certain conditions. The key is to verify exactly how your data is handled.
Local or offline AI tools
At the other end of the spectrum are AI applications that run entirely on your device. They don’t send data back to the cloud.
- Example: SureLink AI, LM Studio.
- Benefit: Maximum control and privacy, because your information never leaves your system (a short sketch of a local-only request follows this list).
- Trade-off: You need the hardware power to run them, and you may have to manage updates or support yourself.
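To make the “never leaves your system” point concrete, here is a minimal sketch of what a local-only request can look like. It assumes LM Studio’s built-in local server is running on its default port (http://localhost:1234) and exposing its OpenAI-compatible API; the model name is a placeholder for whatever model you have loaded, not a recommendation of a specific setup.

```python
# Minimal sketch of talking to a locally hosted model.
# Assumes LM Studio's local server is running on its default port and speaking
# its OpenAI-compatible API; "local-model" is a placeholder name.
import requests

def ask_local_model(prompt: str) -> str:
    """Send a prompt to the model running on this machine.

    The request goes to localhost, so the prompt, the response, and any
    sensitive text in between stay on your own device.
    """
    response = requests.post(
        "http://localhost:1234/v1/chat/completions",
        json={
            "model": "local-model",  # placeholder; LM Studio uses whatever model is loaded
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0.2,
        },
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_local_model("Summarize the key points of this internal memo: ..."))
```

The same pattern applies to other local runtimes: because the request never leaves localhost, the privacy boundary is your own machine rather than a vendor’s cloud.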
AI embedded in other companies’ systems
One of the most overlooked risks is that many companies are adding AI features into their existing services. You may not even realize you’re interacting with AI when you use them.
- Example: Customer service chatbots on company websites, AI call center assistants, or automated email responders.
- Risk: If these are built poorly, or if the third-party provider handling the AI doesn’t have strong controls, your information could be stored without consent, forwarded to vendors you never agreed to, or leaked if the system is compromised.
Key question for everyone to ask: Do you know where the AI you use stores your data?
Privacy, Security, and Reputational Risks
Why does all this matter? Because the way AI systems handle data creates risk on multiple levels.
Privacy risks
AI tools can capture and retain far more than people expect. This creates risk if sensitive personal details or confidential business data are processed without protections.
Business risks
Companies face potential compliance violations. Think of HIPAA in healthcare, GDPR in Europe, or PCI DSS in payments. Feeding sensitive data into an AI tool without the right controls could easily trigger regulatory problems.
Reputational risks
Even if no law is technically broken, public trust can evaporate overnight if customers believe their data is being handled carelessly. Once lost, reputation is difficult to regain.
Emerging threats
AI itself creates new kinds of attacks:
- AI-assisted phishing – more convincing scams generated at scale.
- Data inference attacks – pulling sensitive facts from a model’s outputs.
- Model supply chain compromises – vulnerabilities in the AI system itself.
- Over-automation risk – allowing AI to run workflows end-to-end without human checks.
Regulatory trend
The legal environment is changing fast.
- Here in Colorado, the Colorado Artificial Intelligence Act was signed into law in 2024. When it goes into effect on February 1, 2026, it will require developers and deployers of “high-risk AI systems” to demonstrate reasonable care, perform impact assessments, disclose AI use, and ensure human oversight.
- Globally, the EU AI Act and international standards like ISO/IEC 42001 are setting the direction: organizations must be able to prove that their AI is fair, transparent, and accountable.
The message is clear: getting ahead of regulation now means avoiding costly compliance problems later.
Practical Steps for Safer AI Adoption
So what can you do? The answers look different for individuals, small businesses, and larger organizations — but the principles are the same.
For Individuals
- Know where your data goes. Don’t assume tools are private.
- Limit sensitive inputs. Don’t paste confidential documents into free chatbots.
- Prefer paid or local tools when handling sensitive work.
- Keep a human in the loop. Never trust AI output blindly. Use it to draft, summarize, or speed up work — but always verify before you act.
For Small Businesses
- Inventory AI use cases. Know which tools are in use and why.
- Classify your data. Decide what is safe for AI and what is not (a short screening sketch follows the example below).
- Create guidelines and training. Staff need clear rules for safe use.
- Define your process first. Map your workflow, then add AI to enhance steps — with checks and verification points.
Example: A support team might let AI draft responses, but a human should always review tone, accuracy, and compliance before sending.
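One practical way to act on “classify your data” is a lightweight screening step that checks text for obviously sensitive patterns before it ever reaches an external AI tool. The sketch below is illustrative only: the patterns are examples, the send_to_ai call is a hypothetical stand-in, and your own data handling policy should define what counts as sensitive.

```python
# Illustrative sketch: screen text for obviously sensitive patterns before it
# leaves for an external AI tool. The patterns below are examples, not a
# complete or authoritative classification scheme.
import re

# Hypothetical "never send" patterns; tune these to your own policy.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify(text: str) -> list[str]:
    """Return the sensitive categories detected in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

def safe_to_send(text: str) -> bool:
    """Only let text go to an external AI tool if nothing sensitive was found."""
    findings = classify(text)
    if findings:
        print(f"Blocked: found {', '.join(findings)}. Route to a human or a local tool instead.")
        return False
    return True

# Usage: gate any call to an external service behind the check.
draft = "Customer asked about invoice 4821; card 4111 1111 1111 1111 was declined."
if safe_to_send(draft):
    pass  # send_to_ai(draft)  <- hypothetical call to your external AI tool
```

A check like this will not catch everything, which is why the human review in the example above still matters; it simply makes accidental disclosure harder.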
For Larger Organizations
- Adopt frameworks. NIST AI RMF, ISO/IEC 42001, IEEE 7000-series.
- Conduct ongoing assessments. Regular audits and testing of models.
- Integrate into continuity planning. AI risks belong in disaster recovery and business continuity.
- Formalize human oversight. Make it a policy: no AI output reaches a customer without review (one way to structure that gate is sketched after this list).
- Anticipate regulation. Align practices with emerging laws like Colorado’s AI Act and the EU AI Act.
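One way to make that oversight structural rather than habitual is to route every AI-generated draft into a review queue that only a human approval can release. Below is a minimal sketch of the idea; the class and function names are hypothetical, not a specific product or API.

```python
# Minimal sketch of a human-review gate: AI drafts are queued, and nothing is
# eligible to reach a customer until a named reviewer approves it.
# All names here are hypothetical; the structure is the point, not the API.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Draft:
    customer_id: str
    ai_text: str
    approved_by: Optional[str] = None

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)

    def submit(self, draft: Draft) -> None:
        """AI output always enters the queue; it is never sent directly."""
        self.pending.append(draft)

    def approve(self, draft: Draft, reviewer: str, final_text: str) -> str:
        """A human edits and approves before anything goes out."""
        draft.ai_text = final_text
        draft.approved_by = reviewer
        self.pending.remove(draft)
        return final_text  # only approved text is eligible to be sent

queue = ReviewQueue()
queue.submit(Draft(customer_id="C-1042", ai_text="Hi, your refund status is..."))
# A send function would only ever be called with text returned by approve().
```

The detail that matters is that approve() is the only path to sendable text, which turns “no AI output reaches a customer without review” from a guideline into a property of the system.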
Resources for Deeper Learning
Here are some reliable starting points if you want to dive deeper:
- NIST AI RMF Playbook: https://www.nist.gov/itl/ai-risk-management-framework/nist-ai-rmf-playbook
- ISO/IEC 42001 – AI Management Systems: https://www.iso.org/standard/81230.html
- IEEE 7000-Series – Ethical AI Standards: https://standards.ieee.org/initiatives/autonomous-intelligence-systems
- OECD AI Principles: https://oecd.ai/en/ai-principles
Scaling Security for Small Teams
It’s easy to assume that only Fortune 500 companies need to worry about frameworks and audits. But in reality, small businesses face the same risks — with fewer resources to manage them.
At Fenix Cyber, our mission is to make enterprise-grade risk management and security practices accessible to small organizations. We simplify what large organizations do, and adapt it into processes that small teams can actually use.
To make it easier to get started, we offer a free Risk & Best Practices Assessment. It’s a simple, no-cost way to:
- Understand your current AI security posture
- Spot your biggest risks
- Identify high-impact improvements you can make right away
Closing Thoughts
AI is not just a tool for the future. It’s here, now, shaping how we work and live. But with that power comes responsibility. If we adopt AI without thought for privacy, security, and oversight, we risk creating problems that could outweigh the benefits.
By taking a risk-first approach, building in human review, and getting ahead of emerging regulations, individuals and organizations can make AI not only useful, but trustworthy.
And that’s the ultimate goal: AI that helps us work faster, smarter, and safer — without compromising our values or our data.