Credit unions respond to AI-driven attacks with adaptive detection tools, stronger authentication, and targeted education for employees and members. By Marc Rapport, Contributing Editor Remember when phishing emails were easy to spot, riddled with typos and sent in clumsy batches by amateurish attackers copying scripts from online forums? Josh Bartolomie Now artificial intelligence has done to phishing what early generative tools did to novelty content: it has democratized it, accelerated it, and turned what was once crude and sporadic into something polished, scalable, and constant. A new report from cybersecurity platform Cofense says it’s now documenting a malicious email attack every 19 seconds, more than doubling last year’s pace of one every 42 seconds among the company’s 35 million end users. The shift is not just about volume but about capability, as attackers now use AI as core infrastructure rather than an add-on convenience. “AI has fundamentally changed the economics and effectiveness of phishing,” says Josh Bartolomie, chief security officer at Northern Virginia-based Cofense. He says threat actors now use AI to dynamically adapt phishing pages, generate thousands of unique variants, and manage infected systems at scale. For credit unions, that operational shift shows up in day-to-day fraud activity that feels more realistic and harder to contain. Odene James “We’re seeing fraudsters increasingly leverage AI to make phishing and impersonation attempts more convincing, faster to produce, and harder for both members and controls to detect,” says Odene James, vice president of risk management at $5.8 billion Coastal Credit Union in Raleigh, NC. She says the most notable tactics include highly personalized phishing messages generated at scale, with rapid variations designed to evade filtering, as well as voice‑based impersonation using synthetic audio paired with spoofed caller ID to make fraudulent outreach appear legitimate. 
Attacks Evolve From Volume to Precision

The defining change is that phishing is no longer a blunt instrument aimed at anyone who might click, but a precision tool engineered around context and timing. Nicole Jiang, co-founder and CEO of Fable Security in San Francisco, says AI has changed “the speed, scale, and personalization of attacks.” Employees and members are now targeted across email, phone, Slack, WhatsApp, and even Zoom with lures tailored to their roles and current activities. Attackers are not just blasting out messages; they’re researching individuals and running parallel campaigns optimized for response rates.

“Gone are the days of Nigerian prince scams,” says Jiang, whose 2024 startup focuses on human risk management and AI-driven security training. Criminals now cite real vendors, mimic internal formatting, spoof executives, and construct plausible regulatory or operational scenarios that feel authentic to lending officers, call center staff, IT administrators, or members who are innocently initiating transactions.

That precision exposes weaknesses that traditional metrics fail to capture, especially in lean organizations, Jiang warns. She says leadership teams can find themselves “mistaking activity for outcomes,” relying on training completion rates or phishing click percentages, when in reality those are what she calls “window dressing” that fails to measure whether risky behaviors are actually decreasing or whether exposure is shrinking.

Technical controls remain essential, but they cannot close every gap created by human decision-making under pressure. “Modern phishing succeeds not because companies don’t have the defenses, but because cybercriminals are able to find just the right people and manipulate them in just the right way — and do it at scale,” Jiang says, underscoring that when an employee or member willingly shares credentials, systems often treat that action as safe and legitimate.
Hyper-Personalized Attacks Call for Adaptive Defenses

To keep pace, credit unions are investing in layered, intelligence-driven controls designed to evolve as quickly as attacker tactics. James says Coastal is expanding partnerships with vendors innovating in AI-based detection, prioritizing tools that identify emerging attack patterns in real time and provide early warning signals before members are impacted.

Adaptation also requires internal visibility and disciplined reporting that connects risk to measurable outcomes. James notes that her team briefs leadership using auditable data focused on member impact, financial exposure, attack activity by channel, and control effectiveness, reinforcing that authentication strategies and step-up triggers must continuously evolve as fraudsters pivot across email, SMS, and voice.

At $21.1 billion Golden 1 Credit Union, Paul Sidhu, vice president of payments and fraud, says they’re watching several trends that are reshaping how fraud happens.

“AI is increasing both the scale and sophistication of attacks, and we expect that pace to accelerate in the years ahead,” he says. “Hyper-personalized phishing is becoming more common as fraudsters train models on publicly available information and breached data sources from the dark web. We’re also seeing more coordinated cross-channel social engineering that blends email, SMS, voice, and digital banking activity into a single attack sequence,” Sidhu says.

Meanwhile, he says, synthetic identity techniques and AI-generated voice and text are making impersonation more accessible, and attack cycles are accelerating: fraud schemes can shift in days or even hours.

In response, Golden 1 is shifting toward behavioral analytics and adaptive monitoring embedded across the fraud ecosystem.
The organization evaluates device, session, and transaction patterns to detect anomalies linked to phishing or account takeover attempts, while integrating threat intelligence from emerging campaigns into updated detection rules.

Sidhu believes leaders often underestimate how dramatically AI has lowered the barrier to entry for sophisticated social engineering. Phishing, he says, is no longer just a volume problem but “a precision problem,” where branding, tone, and timing are optimized to exploit trust, urgency, and emotion rather than purely technical vulnerabilities.

Balancing that vigilance with member experience requires contextual, risk-based controls rather than blanket friction. Sidhu says Golden 1 uses dynamic authentication that escalates only when risk thresholds are exceeded, allowing low-risk activity to proceed uninterrupted while higher-risk behavior triggers additional verification informed by behavioral and device-level indicators.

And turnabout is fair play. “AI is helping to enhance the skills of our fraud experts,” says Sidhu, who’s been with the Sacramento-based credit union for more than 25 years, the last six in his current role overseeing payments and fraud. “These investments are designed to increase speed, consistency, and scalability while ensuring expert decision-making remains the center of our fraud-prevention strategy.”

Trust, Behavior, and What to Do Now

Even the most advanced detection systems cannot prevent every socially engineered interaction, which is why education and behavior management have become strategic priorities rather than afterthoughts in many shops. “We prioritize member education and communication,” Sidhu says. He notes that when members understand what protections are in place and why they exist, they’re more likely to see security measures as protective rather than restrictive, strengthening both trust and fraud prevention outcomes among the giant credit union’s nearly 1.2 million members.
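The dynamic authentication approach Sidhu describes, where low-risk activity proceeds uninterrupted and higher-risk behavior triggers step-up verification informed by behavioral and device signals, follows a widely used risk-scoring pattern. Here is a minimal sketch; the specific signals, weights, and thresholds are illustrative assumptions, not any credit union's actual model:

```python
# Illustrative sketch of risk-based step-up authentication.
# Signal names, weights, and thresholds are hypothetical examples,
# not any institution's production logic.

from dataclasses import dataclass

@dataclass
class SessionSignals:
    new_device: bool              # device fingerprint not seen before
    geo_velocity_flag: bool       # implausible location jump since last login
    txn_amount: float             # current transaction amount, dollars
    typical_amount: float         # member's typical transaction size
    recent_credential_reset: bool # password/OTP channel changed recently

def risk_score(s: SessionSignals) -> float:
    """Combine behavioral and device-level indicators into a 0-100 score."""
    score = 0.0
    if s.new_device:
        score += 30
    if s.geo_velocity_flag:
        score += 35
    if s.typical_amount > 0 and s.txn_amount > 3 * s.typical_amount:
        score += 20  # transaction far outside the member's normal pattern
    if s.recent_credential_reset:
        score += 25
    return min(score, 100.0)

def auth_action(score: float) -> str:
    """Escalate friction only when risk thresholds are exceeded."""
    if score < 30:
        return "allow"            # low risk: no added friction
    if score < 60:
        return "step_up_otp"      # medium risk: one-time passcode challenge
    return "hold_and_verify"      # high risk: out-of-band verification

# Familiar device, routine amount: proceeds uninterrupted.
routine = SessionSignals(False, False, 120.0, 100.0, False)
# New device plus an unusually large transfer: step-up triggered.
risky = SessionSignals(True, False, 5000.0, 150.0, False)
print(auth_action(risk_score(routine)))  # allow
print(auth_action(risk_score(risky)))    # step_up_otp
```

The design point is the one Sidhu makes about member experience: because friction is applied only above a threshold, the majority of routine sessions see no challenge at all, while the weights and cutoffs can be retuned as attack patterns shift.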
That education, of course, must be practical and confidence-building rather than alarmist, especially as synthetic voice and cross-channel impersonation attempts increase. James, who joined Coastal in 2023, says they use targeted messaging within digital banking channels to highlight current spoofing trends and clearly outline what the credit union will and will not ask for, helping members recognize red flags without undermining trust in legitimate communications.

“By clearly outlining what we will — and will not — ask for, and providing simple steps to verify legitimacy, we empower members to recognize AI-enhanced phishing attempts in a calm, constructive way,” James says.

On the employee side, Jiang argues that effective human risk management begins with measurement and segmentation. She advises CEOs to quantify human risk across roles, identify high-risk cohorts such as employees with payment authority or privileged access, and replace generic awareness training with targeted, just-in-time interventions designed to drive measurable behavior change.

The broader defense also depends on collaboration that extends beyond any single institution’s perimeter. James stresses that information-sharing with peers, vendors, and industry groups provides early visibility into emerging tactics, while Sidhu notes that building strong partnerships and cross-team coordination across fraud, cybersecurity, and frontline operations is essential as attack cycles compress from weeks to days or even hours.

AI has shifted phishing from a periodic nuisance to a continuous, adaptive threat that blends psychology, automation, and speed in ways that test both systems and people. For credit unions, the response will not be a single tool or policy update, but a sustained commitment to adaptive technology, measurable human risk reduction, and the trust-centered relationships that have always defined the cooperative model.