Artificial intelligence has become part of daily operations for a growing number of workers, yet the promised organization-level performance gains remain stubbornly out of reach. A new survey from Atlassian points to an explosion in personal AI use with little to show for it at the enterprise level, highlighting a widening divide between experimentation and measurable results.
What the Atlassian Report Reveals About AI at Work
For its research study, “Atlassian State of Work 2021,” the company surveyed 12,000 knowledge workers across six countries and 180 Fortune 1000 executives. It found that daily AI use has nearly doubled over the past year, while the share of workers describing AI as “useless” has dropped by 78%. Yet all that momentum has mostly sputtered: 96% of businesses said there had been no “substantive improvements” in the efficiency, quality, or innovation of their work.
Just 3% of executives said AI has meaningfully improved how efficiently their organizations run, and only 2% reported that work quality has improved significantly.
The qualitative feedback echoed that finding: many teams are still working the same way they were, “only now with extra bells and whistles.” In other words, individuals are faster, but organizations do not appear to be substantially better.
Why Increasing Usage Isn’t Converting to ROI
Most deployments sit at the edge of the workflow (drafting a document, summarizing material, answering queries) rather than in the systems of record where value gets created and measured. Without redesigning processes or embedding AI in core tooling, time saved rarely aggregates into gains in throughput, cost, or quality.
Measurement is another barrier. Organizations count prompts, pilots, or self-reported “time saved,” but not the business results leaders care about: cycle time, case resolution, error rates, cost to serve, and revenue velocity. When the metrics don’t connect to money, ROI stays elusive, if it exists at all.
Returns are also limited by data access and context. Models starved of authoritative, timely enterprise data become glorified assistants. Integrations, retrieval pipelines, and permissioning are hard, and plenty of programs stall before they pair AI with the right knowledge. Culture compounds the problem: managers further down the ranks are reluctant to rewire processes, and workers lack guidance on where AI use is permitted or worthwhile.
External research reinforces the pattern. An MIT review found that about 95% of companies’ internal AI projects yielded no real business results. Vendors keep demoing agents and copilots, but without changes to operating models, the gains remain local and ephemeral.
Where Outliers Are Gaining Real, Measurable Advantages
Atlassian cites “AI-powered coordination” as the differentiator for the minority reaping benefits: using AI as connective tissue across projects, teams, and systems, not just as an individual tool. In practice, that means centralizing AI on a shared platform, connecting it to common goals and data, and feeding its actions back into systems like Jira, Confluence, ServiceNow, or CRM so that work moves forward, not just text.
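As a rough illustration of that last step, here is a minimal sketch of writing an AI-generated summary back into the system of record, assuming a Jira Cloud instance reachable over its v2 REST API with basic token auth; the instance URL, issue key, and the summarize_thread helper are hypothetical placeholders.

```python
import os
import requests

JIRA_BASE = "https://your-company.atlassian.net"  # hypothetical instance URL
AUTH = (os.environ["JIRA_EMAIL"], os.environ["JIRA_API_TOKEN"])

def post_ai_summary(issue_key: str, summary: str) -> None:
    """Attach an AI-generated summary to a Jira issue as a comment,
    so the output lands in the system of record rather than a chat window."""
    resp = requests.post(
        f"{JIRA_BASE}/rest/api/2/issue/{issue_key}/comment",
        json={"body": f"AI summary (review before acting):\n{summary}"},
        auth=AUTH,
        timeout=30,
    )
    resp.raise_for_status()

# Usage: the summary would come from whatever model the team has approved.
# post_ai_summary("OPS-123", summarize_thread(thread_text))  # summarize_thread is a placeholder
```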
Good programs combine top-down governance with bottom-up experimentation. Leaders establish norms around data, security, and evaluation; teams iterate on specific use cases within existing workflows. Retrieval-augmented generation unlocks institutional knowledge for practical use; human-in-the-loop checkpoints keep quality high; and product-style ownership ensures the models and prompts evolve with the process.
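To ground the retrieval piece, the sketch below pairs a simple TF-IDF retriever from scikit-learn with a prompt that constrains the model to the retrieved context; the corpus, the call_llm function, and the choice of k are placeholders for whatever retrieval stack and approved model an organization actually runs.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Stand-in corpus; in practice these would be permission-checked chunks
# pulled from Confluence pages, runbooks, or policy documents.
documents = [
    "Refunds over $500 require manager approval and a ticket in the finance queue.",
    "Severity 1 incidents page the on-call engineer and open a bridge within 15 minutes.",
    "New vendors must pass a security review before any data is shared.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(documents)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_matrix)[0]
    ranked = scores.argsort()[::-1][:k]
    return [documents[i] for i in ranked]

def answer(query: str) -> str:
    """Assemble a context-constrained prompt from retrieved documents."""
    context = "\n".join(retrieve(query))
    prompt = (
        "Answer using only the context below. If the context is insufficient, say so.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return call_llm(prompt)  # call_llm is a placeholder for the approved model endpoint
```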
Everyday high-yield patterns include:
- Ticket triage and summarization in IT and customer service (sketched below)
- Automated handoffs between sales and post-sales teams
- Automating the creation of technical docs from merged pull requests
- Financial close accelerators
The common theme: AI is embedded, auditable, and connected to a measurable bottleneck.
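To make the first pattern concrete, here is a minimal triage sketch built on the OpenAI Python SDK; the model name, category list, and output format are assumptions, and in practice the result would be written back into the ticketing system and reviewed by a human rather than acted on blindly.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set; any governed model endpoint works similarly

CATEGORIES = ["billing", "outage", "access request", "how-to", "other"]  # hypothetical taxonomy

def triage(ticket_text: str) -> dict:
    """Classify and summarize a support ticket so it can be routed automatically."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: substitute whatever model the organization has approved
        messages=[
            {
                "role": "user",
                "content": (
                    f"Categorize this ticket as one of {CATEGORIES} and summarize it in one sentence.\n"
                    f"Reply as: category | summary\n\nTicket:\n{ticket_text}"
                ),
            }
        ],
    )
    category, _, summary = response.choices[0].message.content.partition("|")
    return {"category": category.strip(), "summary": summary.strip()}
```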
Metrics That Actually Demonstrate AI’s Business Impact
Leaders who are moving the needle make ROI visible with a short list of recurring metrics, baseline-to-post comparisons, and control groups (see the sketch after this list). Useful measures include:
- Core workflow lead time and throughput (e.g., incident resolution, case closure, quote-to-cash)
- First-contact resolution and deflection rates in support
- Content and code rework/defect rates
- Cost to serve and utilization for AI-assisted teams
- Improvements in employee experience and manager span of control
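As a sketch of what a baseline-to-post comparison with a control group might look like, the snippet below applies a simple difference-in-differences to cycle times in plain Python; the numbers are invented, and a real analysis would use the organization's own workflow data with proper statistical testing.

```python
from statistics import median

# Hypothetical cycle times in hours, before and after an AI-assisted workflow change.
baseline_pilot   = [52, 47, 61, 55, 49, 58]   # pilot team, pre-rollout
post_pilot       = [41, 38, 45, 40, 36, 44]   # pilot team, post-rollout
baseline_control = [50, 53, 48, 57, 51, 54]   # control team, same period, no AI change
post_control     = [49, 52, 50, 55, 48, 53]

pilot_delta   = median(post_pilot) - median(baseline_pilot)
control_delta = median(post_control) - median(baseline_control)

# Difference-in-differences: change attributable to the intervention,
# net of whatever shifted for everyone (seasonality, staffing, demand).
attributable = pilot_delta - control_delta
print(f"Pilot change: {pilot_delta:+.1f} h, control change: {control_delta:+.1f} h, "
      f"attributable: {attributable:+.1f} h per item")
```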
Just counting prompts, or tallying some generic “hours saved,” isn’t good enough; organizations must lock in the savings through process changes, workload shifts, or capacity redeployment, or the value simply evaporates.
Security and Governance Are Still the Brakes
Risk concerns slow scale. Multiple data-protection vendors, including Cyberhaven and Netskope, have found that a significant share of workers attempt to paste sensitive information into public AI tools at least once. Concerns about privacy and IP exposure are legitimate, and many companies respond by banning these services outright, stifling legitimate use in the process.
A stronger position is to anchor the program in familiar models, such as the NIST AI Risk Management Framework: classify data, layer guardrails and DLP on top of it, log access, and prefer policy-governed enterprise instances from providers like Microsoft, Google, and OpenAI. Practical guidance and red-teaming reduce fear and clear the way for responsible experimentation.
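As one small piece of the guardrails-and-DLP layer, here is a sketch of a pre-send check that blocks obviously sensitive strings before a prompt leaves the network; the patterns are illustrative only, and real deployments rely on dedicated DLP tooling rather than a handful of regexes.

```python
import re

# Illustrative patterns only; production DLP uses far richer detection.
SENSITIVE_PATTERNS = {
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "possible card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

def guarded_send(prompt: str, send_fn):
    """Refuse to forward prompts that trip the checks; pass the rest through."""
    hits = check_prompt(prompt)
    if hits:
        raise ValueError(f"Prompt blocked: detected {', '.join(hits)}")
    return send_fn(prompt)  # send_fn is a placeholder for the approved model client
```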
What Leaders Can Do Now to Turn AI Use Into Value
Put AI to work on the bottlenecks that matter, not on generic productivity. Stand up a single source of truth and governance, but let teams iterate where the work actually happens. Invest in data pipelines and system integrations so that models operate with context. Measure outcomes, not enthusiasm. Train managers to reengineer workflows and lock in the savings, not simply applaud demos.
The lesson from Atlassian’s research is sobering but valuable: more use does not always mean more value. The companies turning the hype into real advantages treat AI as a coordination layer, integrate it into the flow of work, and judge it by the same measures they would any operational change.