JetBrains Research Reveals AI Is Reshaping Developer Workflows in Ways Developers Don't Fully Perceive
Key Takeaways
- AI redistributes and reshapes developer workflows in ways developers often don't fully perceive, creating a gap between perception and actual behavioral change
- Mixed-method research combining objective log data with subjective self-reports compensates for individual blind spots, revealing deeper insights than either method alone
- Two years of anonymized telemetry from 800 developers provides unprecedented scale and depth compared to existing short-term, smaller-scale studies on developer-AI interaction
Summary
JetBrains' Human-AI Experience (HAX) team has completed a comprehensive two-year mixed-method study examining how AI coding assistants are transforming developer workflows. The research analyzed anonymized log data from 800 software developers alongside survey responses and interviews, comparing objective behavioral changes with developers' subjective perceptions of how they work. A key finding is that AI tools redistribute and reshape developers' workflows in ways that often elude their own awareness—revealing a gap between what developers perceive has changed and what the data actually shows. The study, being presented at ICSE 2026 in Rio de Janeiro, combines telemetry data with qualitative feedback to provide a more complete picture of AI's real-world impact on development practices.
The researchers note that AI coding assistants have moved from novelty features to standard components of daily development workflows, warranting long-term investigation of their genuine impact.
Editorial Opinion
JetBrains' methodologically rigorous approach—combining two years of objective behavioral data with subjective perceptions—sets a new standard for studying AI's workplace impact. The finding that developers' perceived changes don't match actual behavioral shifts is particularly valuable, suggesting that many organizations may be overestimating or misunderstanding how AI tools are genuinely transforming their teams' productivity. This research should prompt deeper reflection across the industry about distinguishing hype from reality when evaluating AI coding assistants.