Predictions are fun, but they rarely turn out the way we expect. So, rather than speculating about what might happen, let’s examine what shaped AppSec and the Cybersecurity industry in 2025 and what is likely to keep us busy in 2026. This is my personal point of view, which is strongly influenced by my focus on application security and the associated compliance challenges.
Architecture Complexity: Cloud, APIs, LLMs and Agents
The complexity of modern application environments increased again in 2025. With cloud-native services, expanding API ecosystems and AI embedded in everything from internal tools to customer-facing systems, it has become increasingly challenging to keep pace. We try to develop a better understanding of worthwhile use cases and AI integrations while we continue to explore the ‚alien tool‘ (Andrej Karpathy) of LLM technology, still wondering whether Prompt Injection is a feature or a vulnerability. But autonomous agents and the MCP stack are already introducing additional layers of automation and security concerns. Developers, with the help of AI, are moving fast and occasionally break things. Unfortunately, security practices have not fully adapted and remain an afterthought. Efforts to retrofit ’security by design‘ into these fast-moving environments are ongoing. Managing security across multi-cloud and API-centric architectures, with the added complexity of AI, and the access to backend systems and large datasets it provides, is a significant challenge, even for well known market players. In 2026, we can expect this complexity to persist. Perhaps focusing on better visibility, not blindly trusting every new technology and improving integration and communication between architecture and security will help tackle it, until tools improve and security processes mature. Also we should not forget to keep in mind all our basic security practices (after all, AI is just software, isn’t it?) and the good guidance that is already availablefrom different sources. So what can possibly go wrong?
Regulatory pressure in the EU (CRA, NIS-2 and the AI Act)
Throughout 2025, regulatory momentum continued to build, particularly within the EU. The NIS-2 Directive, in effect since 2024, introduced stricter cybersecurity governance requirements for essential sectors finally also in Germany, while the Cyber Resilience Act (CRA) set out baseline security expectations for digital products. NIS2 targets a significant number of organizations in EU and Germany deemed relevant in terms of size or sector (plus special ‚essential‘ cases). The CRA is a product regulation that will have a severe impact on a wide range of products with digital elements on the EU market, including IoT devices, desktop and mobile apps. It will also affect open-source projects with a commercial background, although „pure“““ open-source projects remain exempt. Meanwhile, the EU AI Act, which is set to come into full effect in 2026, will impose significant obligations on providers and deployers of high-risk AI systems (and transparency obligations on others). Application and product security are central to all of these regulations. They can be used as business cases to advocate for better application security, but AppSec teams also need to take compliance with these regulations on their roadmap.
Open-source supply chain issues are nothing new, but the sophistication of the impact and maturity of the attacks are increasing. Automated, AI-driven attacks, such as Shai Hulud, provide an indication of what might be to come. By November 2025, Shai-Hulud 2.0 had compromised 796 NPM packages (with over 20 million weekly downloads) in order to steal credentials from developers and CI/CD environments. Shortly afterwards, a critical vulnerability in the React framework, dubbed ‚React2Shell‚ was exploited on a large scale by both opportunistic cybercriminals and state-linked espionage groups within days of its disclosure. Although the adoption of SBOMs and improved dependency controls has increased and tools are readily available, the overall threat level remains high. As an industry, we have started to adopt processes and tools to mitigate the increased risks, but we are far from having them under control.
Developer Velocity vs. Security Debt (or: Vibe Coding)
In 2025, AI tools are used daily by nearly 50% of developer to increase productivity (- maybe not in every scenario, and with a bit of decreasing confidence in the tools). The definition of ‚developer productivity‘ remains vague but AI now generates 41% of code or even more. Vibe coding, whereby developers use AI prompts to iterate code until something functional emerges, becomes more common. However, this workflow often prioritises functionality over security, leading to vulnerabilities or insecure defaults being introduced. The number of CVEs surpassed 48,000 in 2025, a 20% increase from the year before, indicating that software security quality remains a systemic challenge. Developers may accept code suggestions without fully understanding them, resulting in inadequate oversight of the actual functionality of the code and a ‚shaky foundations‘, as even Cursor’s CEO warned. Therefore, despite AI being deployed for vulnerability detection and automated code fixes (potentially introducing new issues), developer training and the establishment of a sustainable security culture remain essential (excluding, perhaps, phishing training). In terms of development culture, security should enable, rather than control, but it must keep up with the increased development speed through scaling, automation, and prioritization.
AI changing both attack and defense
AI continues to be integrated into security processes. Workflow automation tools are entering the security space, but AppSec SOAR and ASPM has yet to be established. AI oriented use cases such as automatic fixes for vulnerabilities, ticket enrichment and AI-supported vulnerability triage, as well as support for manual processes such as threat modelling, are continuously being explored. AI-assisted systems aim to help teams prioritize vulnerabilities based on real exploit risk, correlate code and runtime data for richer context and filter out false positives to reduce alert fatigue. However, AI assisted software vulnerability management tools that live up to the high expectations have not yet fully arrived. Automated penetration testing tools are improving, but attackers are also using AI as a weapon. More sophisticated phishing campaigns continue to erode user trust and prompt changes to IAM mechanisms (MFA, of course, and Passkeys). Threat actors have weaponized AI to scale up their campaigns, using generative AI to automate malware development, produce convincing phishing lures and generate increasingly convincing deepfake content for social engineering attacks. This ‚AI augmentation‘ of attacks enables even less-skilled adversaries to carry out sophisticated operations by letting AI handle the heavy lifting, from writing exploit code to solving problems on the fly. The time it takes attackers to exploit vulnerabilities before patches are available has shortened once again, falling into the negative at -1 day.
So, amidst all these slightly unpredictable and rapidly changing events, what’s next? I suppose, it will be a continuation and intensification of what we’ve already seen. Organizations have to manage ever-growing volumes of security-relevant data, including architectural diagrams, threat models, cloud configurations, runtime telemetry, compliance artifacts and AI outputs. At the same time, AI is beginning to actively and independently steer processes, even though we do not yet fully understand the risks involved. Opportunity lies in integrating knowledge silos more effectively to provide a clearer view and analysis of relevant risks to focus on. The common thread — and threat — is, however, complexity: in terms of technology, regulation and adversaries. We must all navigate an environment in which complex, rapidly changing architectures and technologies require constant attention. We must avoid failure while moving forward at an ever-increasing speed. Let’s work together to maintain balance and stay in control.
A good security team is implicitly present in a multitude of aspects in an organization and its culture. However, often we see security teams proposing security requirements without taking care or even encouraging proper implementation of the proposed security concepts and requirements. But if security teams consider themselves as a mere function that is just setting the guardrails without business or users in mind, they will be reduced to that and will have no long-lasting effect on the overall security of an organization.
In traditional organizations, security teams often start with compliance, mainly defining standards, policies, and processes and expecting everyone to adhere to those. In smaller organizations, security activities might start on a technical level, trying to follow general best practices and avoiding issues in operation (incidents) that might result from a lack of secure configuration. In both cases, teams usually lack the perception of end-users who do not understand security well. Therefore security teams do not see the need to guide employees through a proper implementation and adoption of the proposed security best practices and requirements – leading to either frustration or ignorance on the part of the users. This neglect is to a degree understandable – a security specialist or sysadmin is not a project manager, nor can they be put in a spot where they must shoulder all the security responsibility for everyone else. In addition, the role of a security function can be severely limited by the amount of how much a business is willing to spend for its security capabilities. However, a security team not driving any projects and not feeling responsible for the actual state of security in an organization across all layers will likely fail in creating anything more than security on paper.
To form a successful security team, it is important to understand that security is more than a function. It is a living organism; it is water; it is a meme – it is whatever creeps into and influences every process, every design, every product of the organization increasing its resiliency in the best case – and what stays a bunch of documents no one wants to read and cumbersome processes no one wants to follow in the worst case.
Luckily, in recent years, certain areas of security – and especially product security – have transformed from a compliance role into an enabler role. The necessity of connecting security activities to business goals and thinking about user adoption has become more important. Their mission: Enable business to make business, to stay afloat by preventing hacks and incidents, and help product teams to deliver secure by default products with a reasonable effort which their users can use without getting frustrated by security processes. A good security team will look for ways to optimize processes and achieve security and compliance without spending endless time and resources on it. One key sentence formulated in CISA’s “Security by Design” principles is that (software) manufacturers should take responsibility for customer security outcomes. This, we believe, is the key sentence for a successful product security program and potentially even for a whole enterprise security program in general. What it means is simple: Feel responsible for implementation and not just formulation or configuration of good security practice. Feel responsible for the outcome – that includes bad user passwords, insecure authentication choices, careless sharing of profile information, post-it notes with passwords on the computer screen, etc. Take ownership of these issues and consider and prevent them before they happen. The work of a good security team may start with the decision on adequate security controls, followed by implementation or configuration, but it does not end there. A security team needs to ensure the user is made aware of the implications of their security-related decisions, grasp the reasons why security processes benefit the company, and encourage decisions that will protect the user – internal or external – of a company’s products and services.
If we widen the perception of “user” a bit, we can take the sentence from the CISA paper as a key attitude also toward an organization’s employees: A security team should feel responsible for the phishing rates, for the hardcoded passwords, and that users might avoid your security tools because they are so difficult to use and no one ever cared to do a proper rollout or onboarding. If people don’t talk to you about security concerns because they are afraid you will cause them more work without resolving their concern, you are ultimately failing as a security team.
In the same way that product security is moving to an enabling culture and the clear understanding of shared responsibility, traditional enterprise security needs to move to care about implementation and culture. It is no secret that there usually is a big compliance gap: A company has their ISO 27001 / TISAX / NIST, etc., audit done, but the experts inside know that the technical security level is still lacking, and good security practices are limited to a small scope of the systems and processes. Documents and process definitions exist but they are not followed, and if they are followed, their outcome does not really contribute to a better security posture of the enterprise. This is what we see as the most dangerous form of a compliance gap: Not the gap between documents that an auditor requires and which a company might or might not have, but the gap that exists between the compliance stated on paper and the actual security of a company’s systems and processes. This is the gap that matters when you are talking about security and this gap might not even be properly reflected in your risk management activities, which are usually based on paper and checklists as well. And ultimately, this is a dangerous gap between a security team and its users. The paper is just a transport mechanism. If the ideas and concepts formulated in the security team fail to reach the company’s workforce, then there is no communication, and no actual security improvement will be achieved.
Coming back to the title of this paper, the Doorman Fallacy. The Doorman Fallacy, as mentioned by Rory Sutherland, is this:
Business, technology and, to a great extent, government have spent the last several decades engaged in an unrelenting quest for measurable gains in efficiency. However, what they have never asked, is whether people like efficiency as much as economic theory believes they do.
The ‘doorman fallacy’, as I call it, is what happens when your strategy becomes synonymous with cost-saving and efficiency; first you define a hotel doorman’s role as ‘opening the door’, then you replace his role with an automatic door-opening mechanism.
The problem arises because opening the door is only the notional role of a doorman; his other, less definable sources of value lie in a multiplicity of other functions, in addition to door-opening: taxi-hailing, security, vagrant discouragement, customer recognition, as well as in signalling the status of the hotel. The doorman may actually increase what you can charge for a night’s stay in your hotel.
Alchemy: The Dark Art and Curious Science of Creating Magic in Brands, Business and Life, Rory Sutherland, p. 126 f.
The Doorman fallacy is not only relevant from a business perspective (e.g. a company investing in tools to save costs on it’s security team); it is also relevant from a security team’s perspective. A good security team is like the doorman in a hotel: They guard people on their journey to use the hotel’s services. They recognize customer needs, are helpful, provide useful guidance and pathways. They need to be concerned about the end-to-end service the Hotel provides. Their notional function might be to take care of systems, policies, or standards or do proper system configuration. But if they stop there, they are not only missing out on opportunities to show the value of their work but run into the danger that they will never have a real impact on the security of their organization. Taking care of the end-to-end process, security can become a business advantage instead of a mere cost driver. If a security team and the organization start to understand that the security team’s work is not only about formulating requirements, documenting and following processes, but that they are part of a business context and that their users matter – then we will see significant improvements in the security of an organization, the breaking of “security silos” and the start of a real security culture with security as a doorman and its own, valuable contributions to the business.