Microsoft security specialist urges 'assume breach' approach in secure-coding briefing to ETS
Loading...
Summary
Rich Antonow of Microsoft briefed Enterprise Technology Services staff on secure-coding practices, zero-trust design, secrets management and the practical limits and uses of generative AI, urging developers to "assume breach," validate inputs and avoid hard-coded credentials.
Rich Antonow, a Microsoft security and identity technical specialist, told Enterprise Technology Services staff that developers must write code with the expectation that networks and systems may already be compromised. "Assume breach," Antonow said, identifying that mindset as the core zero-trust principle developers should use when deciding authentication, authorization and data-storage practices.
Antonow framed secure coding as an environmental problem, not only a matter of individual functions. He listed three foundational developer responsibilities from zero-trust doctrine: "assume breach," "verify explicitly" and implement "least-privileged access." He argued that code should avoid elevated test accounts and that applications must rely on proper identity providers for authentication while performing their own validation and authorization checks.
Why it matters: Antonow said attackers increasingly target organizations for scale and data value, not individual users, and noted the business case for security: "We invest, every year, we invest $5,000,000,000 specifically on security," he told attendees, and he pointed to Microsoft's family of generative-AI tools (branded Copilot) as both a capability and a responsibility for careful deployment.
Technical takeaways: Antonow warned against embedding usernames, passwords or test credentials in source code and emphasized managed identities and key rotation to reduce the risk of long-lived secrets. He described common attack patterns—phishing as the primary entry vector, followed by lateral movement and privilege escalation—and cited an observation that intruders can remain in an environment for long periods (he referenced a figure of about 200 days in the briefing).
On verification and release, Antonow recommended staged testing (prerelease, private preview, public preview) and robust verification steps including user-acceptance testing. He also urged developers to gather clear requirements and to use whiteboarding to uncover integration points and hidden inputs that otherwise create vulnerabilities.
Generative AI and prompt risks: Responding to questions about prompt injection and APIs, Antonow described prompt pre-validation, responsible-AI guardrails and layered API design. "AI is a toddler," he said, stressing that guardrails and validation are required and that human review remains essential. He described Copilot for security as a SOC-analyst tool that aggregates alerts, builds incident timelines and produces human-readable summaries to help triage and response.
Authentication nuance: Antonow clarified that multifactor authentication (MFA) occurs after initial authentication and therefore cannot by itself prevent an initial credential compromise; he recommended combining MFA with conditional access policies that check device, location or domain membership.
Audience interaction and next steps: The presentation included multiple audience questions on app risk models, low-code/no-code tools and practical tips for testing and patching. Antonow concluded by saying he would share the slides with attendees and remained available for follow-up.
Antonow's central message to developers: design with the environment in mind, minimize privileges, rotate secrets and validate inputs. "Secure your own code best to your ability, and don't count on others," he said.

