The rapid evolution of artificial intelligence has presented modern engineering organizations with a paradox: while AI offers the potential for unprecedented productivity gains, many teams find themselves unable to translate these technological investments into meaningful business outcomes. As organizations strive to increase their delivery speed, the focus is shifting away from isolated AI tools toward a comprehensive AI operating model. Recent data and industry analysis suggest that the primary bottleneck in software development is no longer the act of writing code itself, but rather the fragmented workflows that surround the development lifecycle.
According to the 2025 State of Developer Experience report, which surveyed over 3,500 developers and engineering managers, professional developers spend only 16% of their time actually writing code. The remaining 84% of their work week is consumed by "outer loop" activities, including clarifying requirements, documenting decisions, reviewing changes, searching for internal information, and attending meetings. This data highlights a critical disconnect: while many companies have invested in AI-powered coding assistants to optimize the 16% of time spent coding, they have neglected the vast majority of the software development lifecycle (SDLC) where time is most frequently lost.
Industry experts argue that teams rarely fall behind because a single phase of the SDLC is inherently slow. Instead, delays occur because the flow of work is broken and context is lost as a project moves between different stages and tools. For tech leaders, the challenge is no longer just about adopting AI features, but about building a connected, measurable system of work that allows AI to function across the entire end-to-end workflow.
Many engineering organizations currently treat AI as a localized coding accelerator, piloting various code assistants and model providers within a fragmented toolchain. However, this approach limits the potential for compounding value. When AI is applied only within a code editor, it cannot bridge the gaps between stakeholders, product managers, and engineers. To overcome this, leaders are being urged to move toward an AI operating model that unifies tools and teams.
The necessity of this shift is underscored by Gartner research, which indicates that while 73% of tech leaders identify AI expansion as their top priority, 77% struggle with integration. Furthermore, McKinsey reports that to gain connected visibility across a business, leaders must rewire their data and operating models rather than continuing to run isolated AI pilots. For analytical, outcome-focused leaders, these integration struggles manifest as slower delivery times, increased firefighting, and difficulty in proving the return on investment (ROI) for AI expenditures.
To demonstrate the efficacy of a connected AI system, Atlassian recently detailed how its Confluence engineering team utilized its "Teamwork Collection" and "Rovo" AI agent to accelerate a major product rollout. The team was tasked with moving from an initial concept to a fully shipped cross-surface AI experience—spanning pages, whiteboards, and databases—within a four-month window. Achieving this "AI speed" required maintaining high architectural quality while eliminating manual handoffs.
The success of the project was attributed to the use of the Teamwork Graph, a unified platform that provides context-aware AI experiences. By grounding AI in both "left of code" context (requirements and designs) and "right of code" context (incidents and support tickets), the team was able to maintain a continuous flow of information.
A common point of failure in engineering is the "siloing" of design assets. While the Confluence team used Figma for UX design, they integrated it into their primary workflow using Rovo connectors. This allowed engineers and product managers to query Figma content directly from their documentation or chat interfaces. Instead of manually checking for updates, team members could ask the AI to summarize recent design changes or identify specific entry points.
Furthermore, the team utilized Loom for asynchronous design reviews. By recording walkthroughs of Figma designs, the team avoided the delays associated with scheduling synchronous meetings, enabling faster consensus and more inclusive feedback loops.
During the discovery phase, the team utilized digital whiteboards to brainstorm user journeys and technical constraints. In traditional workflows, these brainstorms often result in "dead" documentation that must be manually transcribed. However, by using AI to cluster and prioritize ideas directly on the whiteboard, the team was able to automatically convert brainstormed notes into structured Confluence pages and Jira issues. This transition from exploration to execution reduced the risk of losing nuance during the handoff from product to engineering.
When the project reached the implementation stage, engineers utilized Rovo Dev, an AI-powered development agent. Unlike generic coding assistants, Rovo Dev is grounded in the organization’s specific Teamwork Graph, which includes Jira issues, Confluence docs, and third-party data. This allowed developers to ask architectural questions, such as how existing AI entry points were wired or which services were relevant to a specific task. By acting as an "architecture service," the AI helped engineers maintain consistency with established patterns and navigate the codebase more efficiently.
The AI also played a role in synthesizing historical knowledge. By analyzing past customer feedback and internal architecture decisions, the AI identified themes and trade-offs that influenced the final product design, such as the adoption of a single creation agent and a streaming flow.
The impact of connected workflows extends beyond the engineering department. After the product launch, research teams collected qualitative feedback from customers. Historically, research notes have often lived in isolation from the engineering backlog. In this model, however, research recordings and Loom clips were fed back into the system of work. AI was used to synthesize recurring themes from customer sessions and link them directly back to the original specifications and Jira issues, allowing engineers to incorporate learnings into subsequent iterations.
This connectivity also improved collaboration between technical and business teams. Product and marketing teams were able to move from a Product Requirements Document (PRD) to launch-ready campaigns without losing context. Marketing teams used the AI to generate campaign briefs and messaging frameworks grounded in the original product intent documented in the PRD. This "single source of truth" ensured that sales and customer-facing teams remained aligned with the technical capabilities of the product.
Atlassian’s internal data suggests that the integration of Loom, Rovo, Jira, and Confluence has led to measurable improvements in engineering efficiency. Reported outcomes include a 2x increase in the speed of creating PRDs and specifications, a 50% reduction in time spent on design reviews via asynchronous video, and a 40% increase in efficiency for creating and refining Jira issues.
For CEOs and tech leaders, these metrics are vital. KPMG research indicates that 71% of CEOs rank AI as a top investment priority, with most expecting an ROI within one to three years. To achieve these results, leaders are encouraged to follow a pragmatic playbook rather than running scattered pilots.
The shift toward engineering at AI scale represents a transition from viewing AI as a collection of point solutions to viewing it as a leadership strategy and a connected system of work. By unifying goals, knowledge, and execution on a single platform, organizations can unlock developer focus and shorten delivery cycles. The ultimate goal of an AI-native operating model is to provide a common language across teams, ensuring that the velocity gained in coding is not lost in the administrative and communicative "outer loop" of the software development lifecycle.