{"id":5481,"date":"2026-01-15T23:12:45","date_gmt":"2026-01-15T23:12:45","guid":{"rendered":"http:\/\/codeguilds.com\/?p=5481"},"modified":"2026-01-15T23:12:45","modified_gmt":"2026-01-15T23:12:45","slug":"jetbrains-research-unveils-subtle-yet-profound-shifts-in-developer-workflows-driven-by-ai-coding-assistants","status":"publish","type":"post","link":"https:\/\/codeguilds.com\/?p=5481","title":{"rendered":"JetBrains Research Unveils Subtle Yet Profound Shifts in Developer Workflows Driven by AI Coding Assistants"},"content":{"rendered":"<p>JetBrains Research, a leading name in developer tools, has published groundbreaking findings revealing that AI coding assistants are fundamentally reshaping software development workflows in ways often imperceptible to developers themselves. This comprehensive, mixed-method study, spanning two years of intricate data analysis and qualitative insights, challenges prevailing assumptions about AI&#8217;s impact, highlighting a significant divergence between developers&#8217; perceptions and their actual behavioral changes. The results, presented this week at the prestigious International Conference on Software Engineering (ICSE 2026) in Rio de Janeiro, underscore the critical need for objective measurement in evaluating the long-term effects of AI integration into daily programming tasks.<\/p>\n<p>The proliferation of AI coding assistants, propelled into the mainstream by tools like ChatGPT and integrated IDE solutions, has rapidly transformed them from novel add-ons into standard components of the developer toolkit. Initial enthusiasm, supported by short-term surveys, suggested increased productivity and reduced time on mundane tasks. However, the long-term, real-world impact on development processes has remained largely unexplored. 
JetBrains&#8217; Human-AI Experience (HAX) team embarked on this ambitious investigation to bridge this knowledge gap, scrutinizing two years of anonymized log data from 800 software developers, complemented by surveys and follow-up interviews designed to capture self-reported perceptions.<\/p>\n<figure class=\"article-inline-figure\"><img src=\"https:\/\/blog.jetbrains.com\/wp-content\/uploads\/2026\/04\/JB-social-BlogSocialShare-1280x720-1-7.png\" alt=\"Understanding AI&#039;s Impact on Developer Workflows | The Research Blog\" class=\"article-inline-img\" loading=\"lazy\" decoding=\"async\" \/><\/figure>\n<p><strong>The Rise of AI in Software Development: A Brief Chronology<\/strong><\/p>\n<p>The journey towards AI-assisted coding has been swift and transformative. While rudimentary code completion tools have existed for decades, the advent of large language models (LLMs) marked a paradigm shift.<\/p>\n<ul>\n<li><strong>Late 2022:<\/strong> The public release of generative AI models like ChatGPT ignited widespread interest in AI&#8217;s potential across various domains, including software development. 
This period saw a rapid acceleration in the development and adoption of AI coding assistants.<\/li>\n<li><strong>Early 2023:<\/strong> Dedicated AI coding tools and integrations began appearing more frequently within Integrated Development Environments (IDEs), promising to revolutionize how developers write, debug, and manage code.<\/li>\n<li><strong>October 2022 &#8211; October 2024:<\/strong> This two-year window formed the core of the JetBrains HAX study, specifically chosen to capture the evolution of developer behavior from the nascent stages of widespread AI adoption.<\/li>\n<li><strong>April 2024:<\/strong> This month served as a crucial demarcation point for the study&#8217;s classification of &quot;AI users,&quot; as by this time, AI assistants were considered widely available and stable within IDEs, ensuring that users had genuinely integrated these tools into their workflows for a sustained period.<\/li>\n<li><strong>ICSE 2026, Rio de Janeiro:<\/strong> The culmination of the HAX team&#8217;s extensive research is being presented at this key industry conference, signifying the academic and practical importance of their findings.<\/li>\n<\/ul>\n<p><strong>Methodology: Triangulating Perception and Practice<\/strong><\/p>\n<figure class=\"article-inline-figure\"><img src=\"https:\/\/blog.jetbrains.com\/wp-content\/uploads\/2025\/05\/IMG_0369-e1748595631231-200x200.png\" alt=\"Understanding AI&#039;s Impact on Developer Workflows | The Research Blog\" class=\"article-inline-img\" loading=\"lazy\" decoding=\"async\" \/><\/figure>\n<p>Understanding the true impact of AI on developer workflows necessitates a multifaceted approach. JetBrains Research&#8217;s HAX team employed a mixed-method design, leveraging both quantitative telemetry data and qualitative self-reported insights. This strategy was crucial for compensating for the inherent blind spots of any single method. 
While telemetry can reveal <em>what<\/em> is changing in workflows, it often cannot explain <em>why<\/em>. Conversely, self-reports, though offering valuable context and motivations, are susceptible to biases and may overlook subtle behavioral shifts. By triangulating these data sources, the study aimed to construct a more complete and grounded picture of AI&#8217;s influence.<\/p>\n<p><strong>Telemetry: A Behavioral Lens on Developer Workflows<\/strong><\/p>\n<p>At the heart of the quantitative analysis was telemetry data \u2013 fine-grained, anonymized event streams automatically recorded by JetBrains IDEs. This method, rooted in principles of remote data collection dating back to the 19th century, provides continuous, real-time insights into user actions without constant manual measurement. In the context of this study, telemetry captured events such as the number of characters typed, debugging session starts, code deletions, paste operations, and window focus changes. Importantly, this data focused on <em>actions<\/em> and <em>sequences<\/em> rather than specific content, ensuring user privacy while offering a robust behavioral record. Previous research has demonstrated the efficacy of telemetry in uncovering interesting developer patterns, such as the finding that developers spend approximately 70% of their time on comprehension activities like reading and navigating code.<\/p>\n<figure class=\"article-inline-figure\"><img src=\"https:\/\/blog.jetbrains.com\/wp-content\/uploads\/2026\/03\/WEL06749.jpg\" alt=\"Understanding AI&#039;s Impact on Developer Workflows | The Research Blog\" class=\"article-inline-img\" loading=\"lazy\" decoding=\"async\" \/><\/figure>\n<p>For this study, the HAX team analyzed anonymized usage logs from popular JetBrains IDEs, including IntelliJ IDEA, PyCharm, PhpStorm, and WebStorm. The dataset was meticulously filtered to include 800 devices active throughout the entire two-year study period (October 2022 to October 2024). 
These devices were then categorized into two distinct groups:<\/p>\n<ul>\n<li><strong>AI Users (400 devices):<\/strong> Defined as those that interacted with JetBrains AI Assistant at least once a month from April to October 2024, ensuring consistent integration of AI into their routines.<\/li>\n<li><strong>AI Non-Users (400 devices):<\/strong> Devices that never used the AI assistant during the entire study period, serving as a control group.<\/li>\n<\/ul>\n<p>The team extracted well-defined user actions representing five key workflow dimensions, aggregating these metrics per user per month. This allowed for the tracking of evolutionary patterns over time, resulting in a colossal dataset of 151,904,543 logged events.<\/p>\n<p><strong>Qualitative Insights: Developers&#8217; Perceptions<\/strong><\/p>\n<figure class=\"article-inline-figure\"><img src=\"https:\/\/blog.jetbrains.com\/wp-content\/uploads\/2026\/04\/evolving-ai_table-results.png\" alt=\"Understanding AI&#039;s Impact on Developer Workflows | The Research Blog\" class=\"article-inline-img\" loading=\"lazy\" decoding=\"async\" \/><\/figure>\n<p>To complement the behavioral data, the HAX team conducted an online survey involving 62 professional developers, focusing on their perceived changes in productivity, code quality, editing habits, reuse patterns, and context switching due to AI assistants. The survey utilized 5-point Likert scales to gauge the degree of change, ranging from &quot;significantly decreased&quot; to &quot;significantly increased.&quot; Following the survey, a smaller group participated in semi-structured interviews, providing deeper qualitative narratives on their day-to-day AI usage, trust in suggestions, and perceived workflow fragmentation. 
These interviews were critical for interpreting the telemetry curves, particularly when developers&#8217; self-reports diverged from objective data.<\/p>\n<p><strong>Key Dimensions of Workflow Evolution: A Detailed Analysis<\/strong><\/p>\n<p>The study meticulously examined five critical dimensions of developer workflow, comparing the behavioral data from telemetry with the self-reported perceptions from surveys and interviews.<\/p>\n<figure class=\"article-inline-figure\"><img src=\"https:\/\/blog.jetbrains.com\/wp-content\/uploads\/2026\/04\/rq1_logs.png\" alt=\"Understanding AI&#039;s Impact on Developer Workflows | The Research Blog\" class=\"article-inline-img\" loading=\"lazy\" decoding=\"async\" \/><\/figure>\n<ol>\n<li>\n<p><strong>Productivity: More Code, Faster<\/strong><\/p>\n<ul>\n<li><strong>Telemetry:<\/strong> The log data revealed a clear and sustained increase in typed characters for AI users. Over the two-year period, AI users increased their average typed characters by almost 600 per month, a stark contrast to AI non-users, who showed only an average increase of 75 characters per month. This sustained gap suggests a fundamental shift in coding output.<\/li>\n<li><strong>Perception:<\/strong> Developers&#8217; perceptions largely aligned with the behavioral data. Over 80% of survey respondents reported a slight or significant increase in productivity due to AI coding tools. Furthermore, more than half indicated a decrease in the overall time spent coding, while only 15% reported an increase. One developer, with 3-5 years of experience, articulated this sentiment: &quot;When I get stuck on naming or documentation, I immediately turn to AI, and it really helps.&quot;<\/li>\n<li><strong>Analysis:<\/strong> This dimension demonstrated a rare alignment between objective behavior and subjective perception. 
AI coding assistants appear to genuinely augment code generation, enabling developers to produce more code, consistent with industry expectations of increased efficiency.<\/li>\n<\/ul>\n<\/li>\n<li>\n<p><strong>Code Quality: Perceived Gains vs. Stable Debugging<\/strong><\/p>\n<ul>\n<li><strong>Telemetry:<\/strong> The study used the frequency of debugging session starts as a proxy for code quality, indicating instances where developers felt the need to investigate or fix issues. Interestingly, AI users showed no statistically significant change in their debugging behavior over the two years. AI non-users, however, exhibited a slight decrease in debugging starts during the same period. The debugging frequencies for both groups remained relatively close.<\/li>\n<li><strong>Perception:<\/strong> Despite the stable debugging behavior, almost half of the AI users surveyed reported a slight or significant <em>increase<\/em> in their code quality and readability due to AI tools. Only about 10% reported a decrease in quality, and 6.5% a decrease in readability (with 50% observing no change in readability). This perceived improvement, however, was tempered by a lingering distrust, as one developer with 3-5 years of experience admitted: &quot;I triple-check it, and even then, I still feel a bit uneasy.&quot;<\/li>\n<li><strong>Analysis:<\/strong> Here, a notable divergence emerges. Developers <em>perceive<\/em> AI as improving code quality, yet their objective debugging behavior does not reflect this. This could imply that AI assists in producing initially cleaner code, reducing the <em>need<\/em> for debugging, or that developers are spending more time <em>curating<\/em> AI suggestions, leading to a perception of higher quality without a change in the frequency of debugging entry points. 
The underlying issues might be different, or the confidence in AI-generated code isn&#8217;t absolute.<\/li>\n<\/ul>\n<\/li>\n<li>\n<p><strong>Code Editing: A Surge in Refinement<\/strong><\/p>\n<figure class=\"article-inline-figure\"><img src=\"https:\/\/blog.jetbrains.com\/wp-content\/uploads\/2026\/04\/rq2_logs.png\" alt=\"Understanding AI&#039;s Impact on Developer Workflows | The Research Blog\" class=\"article-inline-img\" loading=\"lazy\" decoding=\"async\" \/><\/figure>\n<ul>\n<li><strong>Telemetry:<\/strong> This dimension presented one of the most striking divergences. The log data showed a statistically significant increase of approximately 100 deletions per month for AI users. In stark contrast, AI non-users increased their deletions by only about 7 deletions per month during the same period. This indicates a substantial rise in editing and reworking activities among those using AI.<\/li>\n<li><strong>Perception:<\/strong> Developers, however, reported little change in their editing habits. Half of the respondents perceived no significant shift, while about 40% reported a slight or significant increase, and only 7% a decrease. This gap between behavior and perception suggests that the increased editing might be so integrated into the AI workflow that it feels like a natural part of the process, rather than a distinct change. A system architect with over 15 years of experience offered a nuanced perspective: &quot;AI is like a second pair of eyes, offering pair programming benefits without social pressure \u2013 especially helpful for neurodivergent people. It\u2019s not always watching, but I can call on it for code review and feedback when needed.&quot;<\/li>\n<li><strong>Analysis:<\/strong> This finding is critical. While AI generates more code, it also necessitates more active curation and refinement from developers. 
The increased deletion and undo actions suggest that AI acts as a prolific, but not always perfect, first-pass generator, requiring human oversight and iterative adjustment. The developers&#8217; lack of perceived change indicates how deeply this &quot;curation&quot; aspect has become embedded in their routine. This aligns with other studies that found developers spend significant time double-checking and editing AI suggestions, with almost a fifth of accepted AI code later deleted and 7% heavily rewritten.<\/li>\n<\/ul>\n<\/li>\n<li>\n<p><strong>Code Reuse: A Nuanced Picture<\/strong><\/p>\n<ul>\n<li><strong>Telemetry:<\/strong> The study measured external paste operations (content not copied from the same IDE session) as a proxy for code reuse. While AI users generally exhibited a higher frequency of external pastes compared to non-users, neither group showed a significant change in this behavior over time.<\/li>\n<li><strong>Perception:<\/strong> Survey responses were inconclusive, with about a third reporting increased external code use, a fifth reporting decreased use, and 44% observing no change. One experienced developer expressed a preference for self-authored solutions: &quot;For me, it\u2019s better to take responsibility for what I did myself rather than adopt a third-party solution.&quot;<\/li>\n<li><strong>Analysis:<\/strong> The expected surge in external code reuse through AI-generated boilerplate or patterns was not clearly evident in either the behavioral or perceptual data. This suggests that while AI might provide snippets, it doesn&#8217;t necessarily translate into a dramatic shift in how developers acquire and integrate code from outside their immediate project. 
It might be that AI-generated code is often &quot;internalized&quot; and edited rather than treated as a direct &quot;paste&quot; from an external source, or that developers maintain their existing reuse habits despite AI&#8217;s assistance.<\/li>\n<\/ul>\n<\/li>\n<li>\n<p><strong>Context Switching: A Different Form of Fragmentation<\/strong><\/p>\n<ul>\n<li><strong>Telemetry:<\/strong> AI tools are often promoted as a means to reduce context switching by keeping developers &quot;in flow&quot; within the IDE. However, the log data presented a more complex reality. AI users showed a slight <em>increase<\/em> of about 6 IDE activations per month, while AI non-users experienced a decrease of approximately 7 activations per month. This indicates that AI users are switching contexts at least as much, if not more, than their non-AI counterparts.<\/li>\n<li><strong>Perception:<\/strong> Survey responses were largely ambiguous, with roughly a quarter reporting an increase in context switching, a fifth a decrease, and about half observing no change. While some developers felt AI reduced external searches (&quot;I stopped switching contexts, saving a few seconds every time I would have googled something&quot;), others indicated a new form of internal fragmentation. Previous studies have indeed shown that interacting with AI assistants can add cognitive overhead, as developers alternate between coding, interpreting suggestions, and managing the AI dialogue.<\/li>\n<li><strong>Analysis:<\/strong> The promise of AI reducing context switching appears to be partially unfulfilled, or at least reconfigured. While AI might reduce switching to external browsers for certain tasks, it seems to introduce a new form of internal context switching within the IDE, as developers engage in a continuous dialogue with the AI, evaluate suggestions, and adapt their focus. 
This &quot;AI-assisted flow&quot; may still involve frequent shifts in attention, trading one type of interruption for another.<\/li>\n<\/ul>\n<\/li>\n<\/ol>\n<p><strong>AI&#8217;s Impact on Effort and Attention: Broader Implications<\/strong><\/p>\n<figure class=\"article-inline-figure\"><img src=\"https:\/\/blog.jetbrains.com\/wp-content\/uploads\/2026\/04\/rq3_logs.png\" alt=\"Understanding AI&#039;s Impact on Developer Workflows | The Research Blog\" class=\"article-inline-img\" loading=\"lazy\" decoding=\"async\" \/><\/figure>\n<p>The JetBrains HAX study offers invaluable insights into the evolving landscape of software development. The overarching conclusion is that AI coding assistants are quietly, yet fundamentally, reshaping developer workflows in ways that often elude their own conscious perception. The significant gap between what developers <em>feel<\/em> is changing and what objective data <em>shows<\/em> is changing highlights the power of mixed-method investigations.<\/p>\n<p><strong>Implications for Developers:<\/strong><\/p>\n<ul>\n<li><strong>Enhanced Productivity, Increased Curation:<\/strong> While AI undoubtedly boosts code generation, it also introduces a greater need for critical evaluation, editing, and refinement. Developers are becoming more active curators of AI-generated content.<\/li>\n<li><strong>Invisible Cognitive Load:<\/strong> The increased editing and nuanced context switching suggest a potential for increased cognitive load, even if it&#8217;s not always perceived as such. 
Developers might be performing more subtle verification and interaction steps that become habitual.<\/li>\n<li><strong>Skill Evolution:<\/strong> The role of a developer is shifting to include new competencies in prompt engineering, AI suggestion evaluation, and efficient integration of AI outputs.<\/li>\n<\/ul>\n<p><strong>Implications for AI Tool Builders:<\/strong><\/p>\n<figure class=\"article-inline-figure\"><img src=\"https:\/\/blog.jetbrains.com\/wp-content\/uploads\/2026\/04\/rq4_logs.png\" alt=\"Understanding AI&#039;s Impact on Developer Workflows | The Research Blog\" class=\"article-inline-img\" loading=\"lazy\" decoding=\"async\" \/><\/figure>\n<ul>\n<li><strong>Beyond Raw Output:<\/strong> AI tools need to evolve beyond simply generating code to actively assist in the <em>curation<\/em> and <em>verification<\/em> phases. Features that help developers efficiently review, compare, and integrate AI suggestions could be crucial.<\/li>\n<li><strong>Understanding &quot;Flow&quot; More Deeply:<\/strong> The findings on context switching suggest that &quot;staying in flow&quot; with AI is more complex than just reducing external tab switching. AI integration needs to be designed to minimize internal cognitive fragmentation.<\/li>\n<li><strong>Transparency and Trust:<\/strong> The persistent distrust in AI-generated code underscores the need for greater transparency in how AI suggestions are formed and for tools that facilitate easier verification.<\/li>\n<\/ul>\n<p><strong>Implications for the Industry:<\/strong><\/p>\n<ul>\n<li><strong>Objective Measurement is Key:<\/strong> Relying solely on self-reported data can lead to an incomplete or even misleading understanding of AI&#8217;s impact. 
Robust, long-term behavioral studies are essential for accurate assessments.<\/li>\n<li><strong>Training and Adaptation:<\/strong> Organizations adopting AI tools must consider training programs that not only introduce AI functionality but also educate developers on the potential subtle shifts in their workflows and the necessary skills for effective AI collaboration.<\/li>\n<li><strong>The Future of Human-AI Collaboration:<\/strong> The study paints a picture of a truly collaborative future, where AI acts as a powerful co-pilot, but the human remains firmly in control, engaging in a dynamic process of generation, evaluation, and refinement.<\/li>\n<\/ul>\n<p>In conclusion, the JetBrains HAX study delivers a powerful message: to truly understand and harness the potential of AI in software development, we must look beyond superficial perceptions and delve into the objective realities of evolving developer behavior. The subtle shifts uncovered by this research will undoubtedly inform the next generation of AI coding tools and redefine the human-AI partnership in the quest for innovation.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>JetBrains Research, a leading name in developer tools, has published groundbreaking findings revealing that AI coding assistants are fundamentally reshaping software development workflows in ways often imperceptible to developers themselves. 
This comprehensive, mixed-method study, spanning two years of intricate data analysis and qualitative insights, challenges prevailing assumptions about AI&#8217;s impact, highlighting a significant divergence between &hellip;<\/p>\n","protected":false},"author":25,"featured_media":5480,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[986,985,155,5,362,4,295,984,3,228,812,983,337,888],"newstopic":[],"class_list":["post-5481","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-software-engineering","tag-assistants","tag-coding","tag-developer","tag-development","tag-driven","tag-engineering","tag-jetbrains","tag-profound","tag-programming","tag-research","tag-shifts","tag-subtle","tag-unveils","tag-workflows"],"_links":{"self":[{"href":"https:\/\/codeguilds.com\/index.php?rest_route=\/wp\/v2\/posts\/5481","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/codeguilds.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/codeguilds.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/codeguilds.com\/index.php?rest_route=\/wp\/v2\/users\/25"}],"replies":[{"embeddable":true,"href":"https:\/\/codeguilds.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=5481"}],"version-history":[{"count":0,"href":"https:\/\/codeguilds.com\/index.php?rest_route=\/wp\/v2\/posts\/5481\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/codeguilds.com\/index.php?rest_route=\/wp\/v2\/media\/5480"}],"wp:attachment":[{"href":"https:\/\/codeguilds.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=5481"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/codeguilds.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=5481"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/codeguilds.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=
5481"},{"taxonomy":"newstopic","embeddable":true,"href":"https:\/\/codeguilds.com\/index.php?rest_route=%2Fwp%2Fv2%2Fnewstopic&post=5481"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}