Over 90% of employees now use personal AI tools at work, but only 40% of companies officially license them. This gap between sanctioned use and real-world behavior has given rise to what experts call the “shadow AI economy” — where professionals rely on consumer tools like ChatGPT, Claude, and Copilot to get work done, often outside the view of IT and compliance teams.
In financial advisory, this reality is already reshaping daily workflows. Advisors lean on AI for drafting emails, summarizing complex KYC/AML documents, or brainstorming client planning scenarios. The appeal is obvious: speed, efficiency, and relief from repetitive tasks. But so are the risks: client data moving into unsecured tools, no audit trail for outputs, and communications that may not meet regulatory standards.
Shadow AI refers to the use of artificial intelligence tools outside of sanctioned enterprise systems. It emerges because employees perceive consumer AI as faster, easier, and more useful than official platforms. Research from KPMG and Menlo Security shows employees adopt shadow AI when enterprise tools feel inflexible — and often paste sensitive data into those tools without safeguards.
For financial advisors, shadow AI shows up in predictable places:
Drafting portfolio summaries or client letters late at night.
Summarizing 50-page KYC/AML reports into a digestible one-pager.
Using AI to brainstorm “what-if” scenarios for tax or estate planning.
Organizing reminders and to-do lists for compliance tasks.
These uses often seem harmless, but they involve sensitive information and bypass the oversight mechanisms that firms depend on to protect clients and themselves.
Multiple studies confirm what’s happening beneath the surface:
Shadow AI is pervasive. Menlo Security reports a 68% surge in generative AI traffic from enterprises, with 57% of employees admitting they paste sensitive data into personal AI accounts.
Pilots stall, but shadow AI thrives. MIT’s GenAI Divide study found that 95% of enterprise AI pilots fail to create measurable business value, yet employees using personal AI tools reported significant productivity gains.
Advisory-specific findings. A SIGIR 2025 study showed generative AI advisors can capture client preferences as effectively as humans but often fail with complexity or conflicting needs. MIT researchers also caution that AI in financial advice requires strong oversight to maintain trust and regulatory compliance.
[Figure: Conceptual illustration of an LLM-advisor with two stages: (1) Preference Elicitation and (2) Advisory Discussion]
Institutional signals. JPMorgan recently rolled out an in-house AI assistant for wealth managers to draft and summarize documents — effectively formalizing behaviors advisors were already doing in the shadows.
Investor sentiment. Despite growing advisor use of AI, 82% of investors say they still trust human advisors over AI for financial planning decisions, underscoring that AI should support, not replace, advisory roles.
Regulators in both the U.S. and Canada are moving quickly to clarify expectations around the use of artificial intelligence in financial services. While there is not yet a single binding “AI law” for advisors, securities regulators and self-regulatory bodies are making it clear that AI use must comply with existing obligations on suitability, supervision, conflicts, and recordkeeping. The list below is not exhaustive; it briefly summarizes some of the key points we are monitoring at SideDrawer.
Canada
CSA: The Canadian Securities Administrators issued a consultation paper in late 2024 highlighting risks and responsibilities for AI in capital markets. The notice calls for governance, transparency, testing, and suitability standards when firms use AI in advisory or client-facing roles.
CIRO: The Canadian Investment Regulatory Organization’s 2025 compliance report emphasizes that firms deploying emerging technologies (including AI) must demonstrate effective supervision and controls over sales, communications, and operations.
OSFI: While directed at federally regulated financial institutions, the Office of the Superintendent of Financial Institutions has proposed Guideline E-23 on Model Risk Management, setting out robust expectations for validation, monitoring, and governance of AI/ML systems — principles likely to influence best practices across the sector.
United States
SEC: The Securities and Exchange Commission has proposed rules requiring broker-dealers and advisers to eliminate or neutralize conflicts of interest when using AI or predictive analytics in client interactions. If adopted, this would directly impact how firms deploy AI tools for recommendations or nudges.
FINRA: The Financial Industry Regulatory Authority has issued guidance stressing that AI-generated communications are subject to the same rules as any client communication. Firms must test, validate, and supervise AI outputs, control for hallucinations or misleading content, and maintain books and records.
NIST: The U.S. National Institute of Standards and Technology’s AI Risk Management Framework (AI RMF) and its 2024 Generative AI Profile are increasingly being adopted as reference models for governance, testing, monitoring, and transparency.
Bottom line: Financial advisory firms are expected to treat AI like any other supervised technology:
Eliminate or mitigate conflicts of interest.
Supervise and archive AI-assisted client communications.
Validate models and monitor outputs for accuracy and fairness.
Maintain an inventory of models and third-party AI tools.
Provide transparent governance and evidence of controls.
In practice, this means firms can experiment with AI, but only if they can demonstrate the same level of oversight and compliance that applies to all other tools and communications in the advisory process.
Forward-looking firms are starting to ask: How do we capture the value of AI without the risks of shadow usage? The answer is not banning tools — that only drives usage deeper underground. Instead, the opportunity is to:
Map actual shadow AI usage. Understand what advisors are already doing. Most shadow AI tasks are low-stakes but high-friction (summarization, drafting, reminders).
Provide sanctioned alternatives. Introduce compliant, auditable ways to do the same work — with clear guidelines for what’s safe.
Leverage standards like MCP (Model Context Protocol). MCP provides a secure way to connect AI assistants directly to systems like SideDrawer, so advisors can use AI to draft, summarize, and automate within governed, auditable workflows (see the sketch after this list).
Educate and guide. Offer training and cultural framing: AI is here to assist, not replace.
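To make the MCP point more concrete, here is a minimal sketch of what a governed integration could look like. It assumes the MCP Python SDK’s FastMCP helper; the server name, tool names, the sidedrawer_api module, and the audit-logging calls are hypothetical placeholders for illustration, not SideDrawer’s actual API.

```python
# Minimal MCP server sketch: exposes advisory tasks as tools an AI assistant
# can call, so drafting and summarizing happen inside an auditable workflow
# rather than in a personal chatbot session.
# Assumes the MCP Python SDK (pip install "mcp"); everything referencing
# SideDrawer (sidedrawer_api, tool behavior, audit logging) is hypothetical.

from mcp.server.fastmcp import FastMCP

# import sidedrawer_api  # hypothetical client for the firm's document vault

mcp = FastMCP("advisory-workflows")


@mcp.tool()
def summarize_kyc_document(document_id: str) -> str:
    """Return the text of a KYC/AML document so the assistant can summarize it.

    The document is retrieved inside the governed environment rather than
    pasted into a consumer tool; access is tied to the advisor's credentials.
    """
    # text = sidedrawer_api.get_document_text(document_id)        # hypothetical
    # sidedrawer_api.audit_log("kyc_summary_requested", document_id)
    text = f"[contents of document {document_id}]"  # placeholder for the sketch
    return text


@mcp.tool()
def file_client_letter_draft(client_id: str, draft: str) -> str:
    """Store an AI-assisted draft in the client's record for supervisory review."""
    # sidedrawer_api.save_draft(client_id, draft)                  # hypothetical
    # sidedrawer_api.audit_log("draft_filed", client_id)
    return f"Draft filed for client {client_id}; pending compliance review."


if __name__ == "__main__":
    # Runs the server over stdio so a desktop AI assistant can connect to it.
    mcp.run()
```

An assistant connected to a server like this can still draft and summarize, but every document retrieval and every filed draft flows through tools the firm controls and records — which is what turns shadow usage into a supervised workflow.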
If you lead a financial advisory firm, here’s where to start:
Survey anonymously. Ask advisors how they use AI in their daily workflows. Confidentiality will reveal hidden adoption.
Identify high-friction workflows. Focus pilots on tasks like KYC renewal summaries or collaborator onboarding.
Run short pilots. Test AI-enhanced workflows in a 90-day cycle. Measure time saved, compliance risk reduced, and advisor satisfaction.
Build social proof. Package early wins into case studies. Enterprise procurement is driven by peer referrals more than by flashy demos.
Shadow AI use by advisors is proof of demand. Professionals are signaling, through their behavior, that they need faster, smarter ways to work. Firms that ignore this reality risk unmanaged exposure. Firms that embrace it can turn shadow AI into safe AI, strengthening compliance, protecting client trust, and building a foundation for future innovation.
At SideDrawer, we believe this is where the future of advisory technology lies: not in resisting change, but in designing pathways where innovation and compliance move together.