<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[John Wilger]]></title><description><![CDATA[Dynamic and strategic Senior Engineering Leader with over 20 years of experience in driving technical innovation and leading high-performing engineering teams.]]></description><link>https://johnwilger.com</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1767302865821/b103e345-4a2d-4a4c-a1fb-3c0c725102e1.png</url><title>John Wilger</title><link>https://johnwilger.com</link></image><generator>RSS for Node</generator><lastBuildDate>Thu, 23 Apr 2026 00:32:25 GMT</lastBuildDate><atom:link href="https://johnwilger.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[The Tools You Build Are More Important Than The Tools You Use]]></title><description><![CDATA[There's a pattern I've noticed among developers working with AI coding assistants. Some are having transformative experiences—shipping features faster, tackling problems they'd previously avoided, genuinely enjoying their work more. 
Others are frustr...]]></description><link>https://johnwilger.com/the-tools-you-build-are-more-important-than-the-tools-you-use</link><guid isPermaLink="true">https://johnwilger.com/the-tools-you-build-are-more-important-than-the-tools-you-use</guid><category><![CDATA[AI]]></category><category><![CDATA[llm]]></category><category><![CDATA[claude-code]]></category><category><![CDATA[plugins]]></category><dc:creator><![CDATA[John Wilger]]></dc:creator><pubDate>Mon, 29 Dec 2025 05:32:22 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/MCnzt2Udz_w/upload/e77fa8491d088ac1481652a88dfa65f7.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>There's a pattern I've noticed among developers working with AI coding assistants. Some are having transformative experiences—shipping features faster, tackling problems they'd previously avoided, genuinely enjoying their work more. Others are frustrated, producing buggy code, and increasingly skeptical that these tools offer anything beyond fancy autocomplete.</p>
<p>The difference isn't the model they're using. It's not their prompting technique. It's not even their underlying programming skill, though that certainly matters.</p>
<p>The developers succeeding with LLM-augmented development have stopped waiting for the perfect out-of-the-box experience and started building their own tools to solve their own problems.</p>
<h1 id="heading-the-myth-of-the-perfect-configuration">The Myth of the Perfect Configuration</h1>
<p>When Claude Code, Cursor, Windsurf, and similar tools started gaining traction, a cottage industry of "optimal configurations" emerged. GitHub repositories full of system prompts. Blog posts about the "ultimate" rules file. YouTube videos promising 10x productivity if you just copy these exact settings.</p>
<p>I tried many of them. Some helped marginally. Most did nothing. A few actively made things worse because they were optimized for someone else's workflow, someone else's codebase, someone else's pain points.</p>
<p>This shouldn't have surprised me. I've spent two decades watching the same pattern play out with every development tool and methodology. Cargo-culting someone else's Agile process doesn't make you agile. Copying another team's CI/CD pipeline doesn't give you their deployment confidence. Using the same text editor as a famous programmer doesn't make you write better code.</p>
<p>The tool is never the thing. The thing is understanding your own problems deeply enough to know what tool you need.</p>
<h1 id="heading-a-problem-worth-solving">A Problem Worth Solving</h1>
<p>I'd been using Claude Code heavily for several months, and one friction point kept recurring: GitHub issue management.</p>
<p>Not the basic stuff—creating issues, closing them, adding labels. The <code>gh</code> CLI handles that fine, and Claude can drive it without much trouble. The friction was in the relationships between issues.</p>
<p>I work with hierarchical issue structures. Epics contain stories. Stories contain tasks. Tasks might have sub-tasks. When you're trying to keep Claude oriented on what you're building and why, being able to say "this task is part of story #42, which is part of epic #15, which is about the payment system redesign" provides crucial context.</p>
<p>GitHub added sub-issues and blocking relationships over the past year, but they're only accessible through the web UI or the GraphQL API. Every time I needed Claude to help me restructure issue hierarchies or track dependencies, we'd end up in a frustrating dance: Claude would attempt some <code>gh api graphql</code> call, get the syntax wrong, try again, get the escaping wrong, try again, finally succeed, and by then I'd lost my train of thought on the actual problem I was trying to solve.</p>
<p>Worse, if I wanted to grant Claude permission to manage issues autonomously, I'd have to approve <code>Bash(gh api:*)</code> in my settings—which is far too broad. That pattern would let Claude make arbitrary API calls to GitHub, not just issue management operations.</p>
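<p>Concretely, Claude Code permissions are allow-listed per tool pattern in its settings file. A rule broad enough to cover raw API calls looks something like this (a sketch of a <code>settings.json</code> fragment; check the current Claude Code documentation for the exact schema):</p>

```json
{
  "permissions": {
    "allow": ["Bash(gh api:*)"]
  }
}
```

<p>Anything matching <code>gh api ...</code> gets through that rule, including arbitrary REST and GraphQL calls. A wrapper command with its own namespace lets the rule be narrowed to <code>Bash(gh issue-ext:*)</code> instead.</p>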
<p>I could have lived with this friction. Most people do. They work around limitations, accept the rough edges, wait for someone else to build a better solution.</p>
<p>Instead, I decided to build the tool I needed.</p>
<h1 id="heading-the-development-process-with-an-ai-partner">The Development Process (With an AI Partner)</h1>
<p>Here's where things get interesting. I didn't just sit down and write a GitHub CLI extension from scratch. I used Claude Code to help me build the tool that would make Claude Code more effective.</p>
<p>The process started with research. I asked Claude to investigate the GitHub GraphQL API, find the relevant mutations for sub-issues and blocking relationships, and figure out what operations were possible. This took some trial and error—we created test issues, experimented with API calls, discovered that sub-issues require a special <code>GraphQL-Features: sub_issues</code> header, learned that blocking relationships use <code>issueId</code> and <code>blockingIssueId</code> parameters (not the more intuitive names I initially guessed).</p>
<p>Every failed API call taught us something. Every error message refined our understanding. Claude kept notes, I asked questions, and gradually a clear picture emerged of what was possible and what syntax was required.</p>
<p>Then came the key insight: we could wrap all these GraphQL operations in a <code>gh</code> CLI extension. Instead of Claude needing to construct complex API calls every time, it could use simple commands like <code>gh issue-ext sub add 10 42</code>. And critically, I could grant permission for <code>Bash(gh issue-ext:*)</code> without opening the door to arbitrary API access.</p>
<p>The extension itself took maybe 20 minutes to write—a bash script that handles argument parsing, constructs the appropriate GraphQL queries, and presents results in both human-readable and JSON formats. Claude did most of the implementation work while I reviewed, asked questions, and occasionally corrected course.</p>
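<p>To make the wrapper idea concrete, here is a minimal sketch of what one subcommand of such an extension might look like. The mutation name, feature header, and parameter names follow what the experimentation above surfaced, but treat them as illustrative rather than as the published <code>gh-issue-ext</code> source:</p>

```shell
#!/usr/bin/env bash
# Sketch of one subcommand of a gh extension like the one described
# here -- illustrative, not the published gh-issue-ext code.

# The GraphQL mutation that attaches a child issue to a parent.
sub_add_query() {
  cat <<'EOF'
mutation($parent: ID!, $child: ID!) {
  addSubIssue(input: {issueId: $parent, subIssueId: $child}) {
    issue { number title }
  }
}
EOF
}

# Resolve two issue numbers to node IDs, then run the mutation with the
# feature header sub-issues require. Needs gh and a checked-out repo.
sub_add() {
  local parent child
  parent=$(gh api "repos/{owner}/{repo}/issues/$1" --jq .node_id)
  child=$(gh api "repos/{owner}/{repo}/issues/$2" --jq .node_id)
  gh api graphql \
    -H "GraphQL-Features: sub_issues" \
    -f query="$(sub_add_query)" \
    -f parent="$parent" -f child="$child"
}
```

<p>Because every operation lives behind one command namespace, the model never has to construct GraphQL by hand, and the permission grant stays scoped to issue management.</p>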
<p>But the extension was only half the solution. I also needed Claude to <em>know</em> how to use it effectively. So we built a Claude Code plugin: a skill document that teaches Claude about GitHub issue management, comprehensive reference documentation for every command, a setup command that installs the extension, and a session-start hook that reminds users if they haven't installed the extension yet.</p>
<p>The whole thing—research, experimentation, extension development, plugin creation, documentation—took an hour. And now I have a tool that solves my specific problem in exactly the way I need it solved.</p>
<h1 id="heading-why-this-matters">Why This Matters</h1>
<p>Let me be clear about something: the plugin I built isn't revolutionary. It's a wrapper around existing APIs with some documentation. Anyone could have built it.</p>
<p>But almost no one does.</p>
<p>Most developers treat their AI coding tools as fixed artifacts. The tool does what it does; your job is to figure out how to work within its constraints. If something is frustrating, you either live with it or switch to a different tool and hope it's better.</p>
<p>This mindset made sense when tools were expensive to modify. Writing an IDE plugin used to be a significant undertaking. Customizing your build system required deep expertise. The cost of building your own tools was high enough that it was usually better to adapt your workflow to existing solutions.</p>
<p>But that equation has changed. When you have an AI assistant that can help you build tools, the cost of custom solutions drops dramatically. That time I spent building the GitHub issue extension? I couldn't have done it that quickly five years ago. The research alone would have taken longer than the entire project did with Claude's help.</p>
<p>This creates a flywheel effect. You use AI to build tools. Better tools make your AI assistant more effective. A more effective AI assistant helps you build better tools faster. Each iteration compounds.</p>
<h1 id="heading-the-meta-skill">The Meta-Skill</h1>
<p>The developers I see thriving with AI coding assistants have developed a specific meta-skill: they notice friction, investigate root causes, and build solutions—rather than accepting friction as the cost of using new technology.</p>
<p>This requires a particular mindset:</p>
<p><strong>Treat configuration as code.</strong> Your rules files, system prompts, custom commands, and plugins are part of your development environment. They deserve the same attention you'd give any other code you maintain. Version control them. Iterate on them. Share them when they might help others, but don't expect others' configurations to solve your problems.</p>
<p><strong>Pay attention to repeated friction.</strong> Every time you find yourself working around a limitation, every time Claude makes the same mistake twice, every time you have to manually intervene in something that should be automatic—that's a signal. You've found a problem worth solving.</p>
<p><strong>Invest in understanding.</strong> Before building a solution, make sure you understand the problem deeply. My GitHub issue extension works because I spent time learning exactly how the GraphQL API behaves, what parameters it expects, what errors it returns. That understanding is embedded in the tool and the documentation.</p>
<p><strong>Build incrementally.</strong> You don't need to solve everything at once. I started with just sub-issue management because that was my most pressing pain point. Blocking relationships and linked branches came later. Each addition was motivated by a real problem I'd encountered.</p>
<p><strong>Document for your AI partner.</strong> Half of building effective tooling is teaching your AI assistant how to use it. The skill documentation I wrote for Claude isn't just for human readers—it's optimized for Claude to consume and apply. Clear examples, explicit command syntax, common patterns and workflows.</p>
<h1 id="heading-the-uncomfortable-truth">The Uncomfortable Truth</h1>
<p>There's an uncomfortable truth in all of this: the people who benefit most from AI coding assistants are the people who needed them least.</p>
<p>Experienced developers who already understand their problem domains deeply can direct AI assistants effectively. They know what questions to ask. They can evaluate generated solutions. They can identify when the AI is confidently wrong. And crucially, they have the skills to build custom tools when off-the-shelf solutions fall short.</p>
<p>Less experienced developers often struggle because they're trying to use AI assistants as a shortcut past understanding. They want the tool to just work, to give them correct answers without requiring them to evaluate those answers critically. When friction appears, they lack the context to even recognize that a solution might exist.</p>
<p>I don't have a neat resolution for this tension. AI coding tools genuinely do lower the barrier to building software. But they lower it most for people who've already climbed over that barrier.</p>
<p>Perhaps the best thing experienced developers can do is model this tool-building behavior openly. When you solve a problem by building a custom extension or plugin, share not just the artifact but the process. Show how you identified the friction, how you investigated solutions, how you iterated toward something that worked.</p>
<p>The tools we build are artifacts of our understanding. Sharing them helps. But sharing how we built them helps more.</p>
<h1 id="heading-getting-started">Getting Started</h1>
<p>If you've read this far and want to try building your own Claude Code tooling, here's what I'd suggest:</p>
<p><strong>Start with your actual problems.</strong> Don't go looking for things to optimize. Instead, pay attention over the next week. When do you feel friction? When does Claude struggle with something that should be straightforward? Write these down.</p>
<p><strong>Pick the smallest valuable problem.</strong> You don't need to build a comprehensive solution. Find something specific and bounded. Maybe it's a single command that automates a repetitive task. Maybe it's a snippet of documentation that helps Claude understand your project's conventions.</p>
<p><strong>Use Claude to build it.</strong> This is genuinely effective. Describe the problem you're trying to solve, work through potential solutions, iterate until you have something that works. You'll learn about Claude's capabilities and limitations in the process.</p>
<p><strong>Test it in real work.</strong> The best tools emerge from actual use. Build something minimal, use it for a few days, notice what's missing or awkward, improve it.</p>
<p><strong>Share what you learn.</strong> Not just the finished tool—the process, the problems, the failed approaches. Someone else has the same friction you do. They might not have realized yet that building a solution is possible.</p>
<hr />
<p>The GitHub issue management plugin I built is available at <a target="_blank" href="https://github.com/jwilger/claude-code-plugins">https://github.com/jwilger/claude-code-plugins</a>, and the gh extension is at <a target="_blank" href="https://github.com/jwilger/gh-issue-ext">https://github.com/jwilger/gh-issue-ext</a>. You're welcome to use them if they solve problems you have.</p>
<p>But more than that, I hope this piece encourages you to notice your own friction and do something about it. The tools you build for yourself will always fit better than the tools you borrow from others.</p>
]]></content:encoded></item><item><title><![CDATA[The Hidden Pitfalls of AI Software Development]]></title><description><![CDATA[Before we start, I want to make it clear that I'm not going to "blame the controller" for losing this game. I've always believed that a software engineer using AI assistants to write code is still responsible for reviewing and understanding every lin...]]></description><link>https://johnwilger.com/the-hidden-pitfalls-of-ai-software-development</link><guid isPermaLink="true">https://johnwilger.com/the-hidden-pitfalls-of-ai-software-development</guid><category><![CDATA[AI]]></category><category><![CDATA[copilot]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[software]]></category><category><![CDATA[lessons learned]]></category><dc:creator><![CDATA[John Wilger]]></dc:creator><pubDate>Tue, 11 Mar 2025 15:22:51 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/mQTTDA_kY_8/upload/d7c503048890c755f33d92530c7d7736.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Before we start, I want to make it clear that I'm not going to "blame the controller" for losing this game. I've always believed that a software engineer using AI assistants to write code is still responsible for reviewing and understanding every line of that code, and this situation is no different. Even though the issue I faced was unexpected, it's still completely my responsibility. I'm just relieved that I'm the only one affected and that it didn't happen in a situation involving a client.</em></p>
<p>I've been trying out different AI-assisted software development workflows recently so I can guide clients and colleagues on which tools and methods to use or avoid. One setup I find quite promising is <a target="_blank" href="https://block.github.io/goose/">Goose</a>, an on-machine AI agent that interacts with the system through extensions using an MCP server. When used with a <code>.goosehints</code> file that instructs it to follow a ping-pong-pairing, TDD approach to development, I've been quite impressed with its capabilities and how it can be adjusted to fit specific workflow preferences.</p>
<p>As part of my experimentation, I used Goose to help create a small TypeScript CLI utility. This tool takes a CSV file, where each record is a latitude/longitude coordinate pair, as input and outputs a new CSV file that includes the associated business name, website, and telephone number from the Google Places API. Since I had never used the Places API before and am not a regular TypeScript author, this task would likely have taken me 2-3 hours to complete on my own. With AI assistance, it was done in about an hour. It was a complete, working solution that easily processed a test file with about 10,000 rows. It had full test coverage, and the code quality was decent, if not perfect. Success, right?</p>
<p>The problem is that bugs and low-quality code aren't the only issues that can cause trouble.</p>
<p>In my first attempt at creating this utility, I only extracted the business names from the Places API for each record. A quick check of the Places API pricing showed that I could make 10,000 requests for free, and additional requests up to 100,000 would cost $5 per 1,000 requests. Running an input file with 20,000 rows would cost about $50. I thought this expense was reasonable for the experiment, but I also wanted to avoid unnecessary spending. So, I decided to use a 10,000-row input file to stay closer to the free tier and maybe only spend $5-10 on the extra requests needed during testing.</p>
<p>Once I got the basic version working by refining the solution with Goose, I then asked Goose if I could also include the website URL and phone number for the business in the output. Goose was happy to help and did a great job updating the existing solution, including the tests, without unnecessarily rewriting major parts of the code.</p>
<p>At this point, anyone familiar with the Google Places API is probably shaking their head and laughing at my mistake.</p>
<p>The pricing I was looking at was for their "Place Details Essentials" level of requests. By simply asking for these two extra fields, URL and phone number, the request automatically moved to their "Place Details Enterprise" tier, which is much more expensive. At this tier, you only get 1,000 requests per month for free, and you pay <strong>$20 per 1,000 requests</strong> for anything beyond that! You can imagine my shock and horror when I received emails from Google later that day saying they had received payments from me totaling just under $2,000.00!</p>
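<p>The tier math is worth spelling out, because the jump is steep. Here is a small helper using the prices quoted above (verify them against Google's current pricing page before relying on them; <code>tier_cost</code> is an illustrative name, not anything from the Places API):</p>

```shell
# Whole-dollar cost of n requests, given a free allowance and a price
# per 1,000 requests. The figures used below are the ones quoted in
# this post; check Google's current pricing before relying on them.
tier_cost() {
  local n=$1 free=$2 rate_per_1000=$3
  local billed=$(( n > free ? n - free : 0 ))
  echo $(( billed * rate_per_1000 / 1000 ))
}

tier_cost 20000 10000 5    # Essentials: a 20,000-row file -> about $50
tier_cost 10000 1000 20    # Enterprise: one 10,000-row run -> about $180
```

<p>At $20 per 1,000 requests, a single pass over the 10,000-row test file already dwarfs the $5-10 originally budgeted for the experiment, and repeated runs during development compound the bill quickly.</p>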
<p>This probably wouldn't have happened if I had been writing this code without AI assistance. I would have needed to review the API documentation for the Places API to find out how to get those two additional fields of data. At that point, I likely would have noticed that this would move me into a different pricing tier. However, since using AI assistance meant I didn't need to look up this information myself, I didn't notice the change. I happily ran my tests and then the entire test file, and I was quite pleased with the output. It was a job well done, and it probably took about an hour less than it would have if I had written the code myself.</p>
<p>This post is not a criticism of Goose specifically; the issue I faced could happen with any LLM-based coding assistant. In fact, I would argue that the better the tool (and the more you trust it), the more likely you are to encounter a similar issue. The problem wasn't with the code itself. I reviewed the code before running it and made sure I understood what each line was doing. What I didn't do was thoroughly consider other factors that a professional software engineer should evaluate, such as the financial cost of running the solution. Thankfully, this was just a personal project and not a client system, so I'm the only one dealing with the consequences. Now imagine if a less-experienced developer made a similar mistake in a production system that processed many more records over a month. The consequences of that decision could be much worse.</p>
<p>AI-assisted coding tools like Goose can significantly enhance productivity and streamline development, but they also introduce new responsibilities. Software engineers must remain vigilant and review not just the code, but the broader implications of their work, such as financial costs and potential impacts on production systems. This experience underscores the importance of balancing the leverage these tools provide with the professional diligence needed to avoid costly oversights. As AI tools continue to evolve, developers must adapt their practices to use them effectively and responsibly.</p>
]]></content:encoded></item><item><title><![CDATA[The Language Model Is Just Another User]]></title><description><![CDATA[The first time I worked on an application that heavily relied on OpenAI's chat completion API, my years of experience managing APIs within an extensive service infrastructure shaped my approach. It seemed straightforward: it was just another JSON API...]]></description><link>https://johnwilger.com/the-language-model-is-just-another-user</link><guid isPermaLink="true">https://johnwilger.com/the-language-model-is-just-another-user</guid><category><![CDATA[AI]]></category><category><![CDATA[generative ai]]></category><category><![CDATA[llm]]></category><category><![CDATA[openai]]></category><dc:creator><![CDATA[John Wilger]]></dc:creator><pubDate>Thu, 21 Mar 2024 20:30:19 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1710488977293/f0de341d-eb7f-446f-bf7e-275578260c49.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The first time I worked on an application that heavily relied on OpenAI's chat completion API, my years of experience managing APIs within an extensive service infrastructure shaped my approach. It seemed straightforward: it was just another JSON API where you send a request to a known endpoint and get back data in a specific format. However, as the development progressed, we encountered problems due to the unpredictable nature of the generative AI responses. Features that worked one day would suddenly cause errors the next, leading us into a repetitive cycle of tweaking the application code and the prompts we were using. This situation was unsustainable; I couldn't in good conscience tell my client that their application was "complete" when it could break down at any moment. Is building anything more sophisticated than a fancy chatbot using this technology even possible?</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1710518866017/e8877cff-7fb0-4e49-ae69-1f47a2847c94.webp" alt class="image--center mx-auto" /></p>
<p>The system architecture we were using led me to a realization that has completely changed how I now integrate generative AI features into an application. I've always appreciated event-sourcing and CQRS (Command Query Responsibility Segregation) architectures. We were developing this application in Elixir using the Commanded library. In this architecture, whenever a user performs an action that changes the system's state (like submitting an order form), this action is captured as a "command" that shows the intent to change the state. This command is validated and executed, leading to either an error message or a state-change event. These events are the source of truth for the entire application's state, and no changes to the system's data happen without a matching event. The system records events in response to the language model's changes, just like those made by human users. Although it's possible to write an event directly to the event stream, the Commanded library makes it much easier to create a simple command that the system can execute, which results in the recording of an event.</p>
<p>It took me longer than I'd like to admit to see how everything fit together. The human user and the language model change the system's state using the same basic process: the command. Once I realized this, I wondered if we were approaching this from the wrong perspective. What are we using generative AI for? In most situations, a language model is creating results that a skilled user could also achieve on their own; we use these models as an aid to help us bridge a gap in either knowledge or efficiency. They aren't a <em>part</em> of the application as much as they are an assistant in <em>working with</em> it.</p>
<p>As I mentioned in <a target="_blank" href="https://johnwilger.com/generative-ai-is-a-ux-revolution">an earlier piece about how generative AI is changing UX paradigms</a>, we successfully let the language model control many aspects of an application that humans would previously have handled themselves. When creating <a target="_blank" href="https://apex.artium.ai/">APEX</a>, a generative-AI-integrated application we made at Artium to help produce product plans that are well-grounded in the needs of the business, we could have taken a different approach to the interaction. We could have had the user click on nodes, click an edit button, fill out a form with the information they wanted in each section, and click "save." To incorporate generative AI, we could have put a little AI-sparkle button next to the text fields and used the model to fill in just that one piece.</p>
<p>Instead, the primary means of interacting with APEX is via a back-and-forth conversation with "the Artisan." The Artisan isn't just a chatbot that gives you the text to put into a form field; it can also update that text for you in the right spot.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1710490780687/caa42080-902b-4113-aa17-a01b8be68c7e.png" alt class="image--center mx-auto" /></p>
<p>Imagine you're on a call with someone and you’re collaborating on a Google Doc. You read through the text they've just written and suggest changes. Right before your eyes, you see the text change as your writing partner edits the document on another computer. Once upon a time, that felt magical; today, it's mundane.</p>
<p>Using APEX to build a product plan is similar to this style of collaborative editing. The difference is that your writing partner is a language model that understands how to use the application. Sometimes, it offers its own opinions, but it also does what you say verbatim when you are explicit. Sometimes, it produces results you will love; sometimes, you need to iterate on its suggestions.</p>
<p>Witnessing this approach come together, I realized that this is precisely where generative AI can shine when integrated into our applications. I also learned how we can make language models a much more reliable part of our systems.</p>
<p>Rather than treating the language model as an internal component of your system, a better approach is to consider it as just another user sitting at their computer and sending inputs to the system. By treating the output from the language model <em>precisely</em> as if it had been typed into a form by a human user, the solutions to non-deterministic inputs suddenly become clear. It's just form validation.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1710490806221/24df3557-4a7a-46ca-8616-59286bca8716.webp" alt class="image--center mx-auto" /></p>
<p>Build your application as though multiple humans can concurrently and collaboratively edit the same entities. Then, teach your model to control your application. For example, your two human users might have a text chat about the documents they are working on. Treat text responses from the language model the same way; your application code processes the response by executing the same SendMessage command that a human user invokes if they click the send button on a message they typed. If the language model determines it needs to change the title of a document, it must send a message back that your client code can translate into an application command such as UpdateTitle. This command, again, is the same UpdateTitle command that the system will execute if a user clicks an "edit" link next to the title, changes the text, and then hits "save."</p>
<p>How you handle an invalid attempt to change the system's data is an essential aspect of this approach. Language models work best with plain, human language. While their ability to work with computer code is helpful and still improving, typical language models are better at working with good old-fashioned prose. Because of this, you should respond to an incorrect function call the same way you'd respond to a human user: show it a natural-language description of the errors.</p>
<p>For example, we have a rule that a document title must be unique in the system. We enforce this invariant at the command execution layer. When a title is not unique, instead of recording a TitleUpdated event, the system responds with an error message: "Another document already uses the title Foo Bar Baz." If the language model sees this error in your response, it will have enough information to correct itself and attempt to execute the UpdateTitle command again with a different title. If, instead, the model receives a machine-friendly error message like "dup_title," there is less of a chance that it will interpret that to mean an error occurred or that it will sufficiently address the mistake.</p>
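<p>The system described here was Elixir with Commanded, but the shape of the pattern fits in a few lines of anything. Reduced to an illustrative shell sketch (the function name and in-memory state are stand-ins, not the real system): one command entry point shared by every caller, one invariant check, and a prose error the model can act on.</p>

```shell
# One command path for every caller -- a UI click or a model tool call.
# TITLES is a stand-in for the event-sourced state; names illustrative.
TITLES="Foo Bar Baz"

update_title() {
  local new_title=$1
  # Enforce the uniqueness invariant at the command layer.
  if printf '%s\n' "$TITLES" | grep -Fxq "$new_title"; then
    # A prose error the language model can read and recover from,
    # rather than a machine code like "dup_title".
    echo "Another document already uses the title $new_title."
    return 1
  fi
  TITLES=$(printf '%s\n%s' "$TITLES" "$new_title")
  echo "Title updated to $new_title."
}
```

<p>Whether the caller is a person clicking "save" or a model emitting a tool call, the command path and the error language are identical, which is exactly what makes the model's mistakes correctable.</p>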
<p>Treating generative AI as just another user of your application means that you'll also be well-situated to add human/human or human/human/AI collaboration, should that be desired. It becomes more apparent how to handle errors when you receive invalid data or when the model tries to interact with your application in ways that aren't available to it, such as hallucinating functionality that doesn't exist or isn't permitted. Additionally, much knowledge and experience exists in designing UX for collaborative-editing applications. Rather than reinventing the wheel, we can rely on this existing body of knowledge to guide how we approach UX for generative-AI-enhanced applications.</p>
<p>By changing your perspective on the role that generative AI plays in your application, you can both delight your users and create an application that is more fault-tolerant and easier to maintain.</p>
]]></content:encoded></item><item><title><![CDATA[Generative AI is a UX Revolution]]></title><description><![CDATA[When I first engaged with computers, the landscape was predominantly shaped by command-line interfaces, a stark contrast to the GUI-based systems like Windows or MacOS that later emerged and democratized computing. This transformation was monumental,...]]></description><link>https://johnwilger.com/generative-ai-is-a-ux-revolution</link><guid isPermaLink="true">https://johnwilger.com/generative-ai-is-a-ux-revolution</guid><category><![CDATA[AI]]></category><category><![CDATA[generative ai]]></category><category><![CDATA[UX]]></category><dc:creator><![CDATA[John Wilger]]></dc:creator><pubDate>Tue, 12 Mar 2024 18:47:59 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1709947871106/d67141a9-cb94-4a05-aaca-4cbade411abd.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When I first engaged with computers, the landscape was predominantly shaped by command-line interfaces, a stark contrast to the GUI-based systems like Windows or MacOS that later emerged and democratized computing. This transformation was monumental, making technology accessible to a broader audience. Yet, it also introduced a dichotomy: while GUIs simplified interactions for the majority, they often fell short for expert users who valued the precision and efficiency of command-line interfaces. This divide underscored a technological gap where neither GUIs nor command-line interfaces could entirely meet the diverse needs of users. Attempts to redesign expert interfaces into graphical formats frequently resulted in a confusing mishmash of buttons and menus, complicating the user experience for novices without offering significant advantages to experts over the command line. 
Consequently, we find ourselves in a world where applications are often neither intuitive nor efficient, highlighting a significant challenge in the quest for inclusive and effective user interfaces.</p>
<p>In my role as a software engineer, I prefer using command-line and keyboard-driven tools. I avoid using the mouse while programming, not because I think "real programmers don't use mice," but because reaching for the mouse is significantly slower for tasks like navigating files, editing text, and running programs. Therefore, you'll often find me in a full-screen terminal window, using tmux (a tool for managing multiple terminal sessions in one window) and vim (a highly customizable text editor). Over time, I've developed a set of configuration tweaks and shell scripts that help me work efficiently. I use a minimalist UI that keeps the code I'm working on in focus, without the distraction of unnecessary UI elements that require mouse clicks.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709939093205/b1a60861-5844-4855-8ec2-84377132c4e2.png" alt="This is my entire main screen when programming." class="image--center mx-auto" /></p>
<p>However, for those unfamiliar with these tools, becoming immediately productive in this environment can be quite challenging. I've been working this way for over 20 years, and I’m still finding new ways to boost my efficiency with vim. Given that I use this application almost daily for long hours as a crucial tool of my trade, the time invested in learning to use it as efficiently as possible is highly worthwhile. That said, I wouldn't suggest that all user interfaces should require this level of expertise to be effective. When I come across a system I'm not familiar with, I really value a simple, intuitive interface that helps me do the right thing.</p>
<p>On the other side of the spectrum, there are people who aren't as familiar with computer technology and can get frustrated by systems, even when those systems are meant to be used with a mouse or trackpad. My wife is one such person. She's not computer illiterate, but she often gets frustrated with websites and applications when she knows what she wants to do but can't find the right buttons to press to make it happen.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709941595065/2804c54e-2218-4d1d-928c-f38cdce625f0.webp" alt class="image--center mx-auto" /></p>
<p>Recently, at <a target="_blank" href="https://artium.ai/">Artium</a>, I've been involved in building <a target="_blank" href="https://apex.artium.ai/">a new product called APEX</a>, where we are exploring a new approach to how users interact with applications. APEX is a tool to help you take an idea for a software product and bring that idea to life by walking you through the process of defining the problem space, refining the product vision and value proposition, and then focusing on the key users of the application and how they will use it. The goal is to arrive at an actionable plan for building the initial version of your product and provide you with the information you need to create a proposal and business plan that will help you launch your product. The primary interaction with APEX involves exchanging messages with a generative AI assistant. However, this is not just an AI chatbot that is generating text in a chat window for you to copy and paste into your plan. Our assistant is actively updating the representation of your product on-screen as your conversation progresses.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1710267850978/5b83104c-e624-4469-99be-6325a60c1830.png" alt class="image--center mx-auto" /></p>
<p>This style of interaction represents a pivotal shift from the traditional dichotomy of GUI and command-line interfaces. By adopting a conversational, AI-driven approach, we can create a unified interface that intuitively adapts to the user's expertise level—simplifying the learning curve for novices while providing the depth and efficiency that experts crave. This adaptability showcases how generative AI can transcend the limitations of previous technologies, offering a seamless, inclusive user experience that traditional interfaces have struggled to provide.</p>
<p>While getting ready to launch our open beta for APEX, I asked my wife to try using the application to plan out a product. She was able to get through the entire process, and amazingly, I didn't hear her get frustrated with it even once. She is a registered nurse by trade, and as I mentioned before, she is often frustrated with computer programs; this is not someone who has experience with product design or comes to the table with a high level of knowledge about computer systems. When I asked her about the experience, she told me that she much preferred this way of interacting with a computer because it was just like having a conversation with another person. Rather than having to feel overwhelmed by options or frustrated that she couldn't figure out how to change something, she was able to simply converse with the assistant and watch the updates happen in the display of the product. And there are no "wrong answers" when talking to the assistant. If you misunderstand a question or simply choose to focus on a different aspect of the product, the assistant is able to adjust, guide, and redirect to help you complete your task.</p>
<p>What is amazing to me is that this style of interaction is equally appealing to me as an expert user of the system. While the conversation and guidance of the assistant can be essential for a novice user who doesn't necessarily understand how to approach product planning, as someone who does understand both the process and the types of information that APEX manages, I am able to very quickly create a product plan without ever touching my computer's mouse by simply being explicit with the assistant with my directions. I can directly say "set the product title to MyAwesomeProduct", and it does it. If I say "use Elixir, Phoenix, and LiveView with a Postgres data store", it updates the Technology section without me having to wait for the assistant to ask me about that. If I already have text-based documentation about a product idea (typed notes from a client planning session, for example), I can simply paste those notes into the assistant window and tell it to create a product plan based on those notes.</p>
<p>Despite the promise of generative AI in enhancing UX, several challenges loom. One significant concern is the potential for AI to misunderstand user intent, leading to frustrating experiences or, in worst-case scenarios, harmful outcomes. Ensuring that AI systems can accurately interpret and act on a wide range of human inputs is crucial for their success. Additionally, ethical considerations are at the heart of deploying generative AI in any user-facing application. Issues such as bias in AI responses, transparency in how AI decisions are made, and the autonomy of users in guiding the AI's actions are critical. As we integrate AI more deeply into our lives, ensuring these systems enhance rather than undermine human autonomy, respect user privacy, and promote fairness and inclusivity will be essential. By addressing these ethical challenges head-on, we can harness the benefits of generative AI while minimizing potential harms. Addressing these challenges requires new approaches to software testing to account for the non-deterministic nature of generative AI responses. At Artium, we have also been pioneering the use of <a target="_blank" href="https://artium.ai/insights/test-driving-ai-applications">Continuous Alignment Testing</a> to ensure that products using generative AI are able to function with a high degree of confidence and safety.</p>
<p>The impact of generative AI on UX design, as shown by our work on APEX, is clear. By enabling a more natural, conversational interaction with technology, we're doing more than just making applications more accessible and efficient; we're changing how humans interact with computers. This move towards intuitive, adaptive interfaces suggests a future where technology truly serves everyone, no matter their background or level of expertise. As we keep improving and broadening the abilities of generative AI, the potential for innovation in UX seems endless. The evolution from command lines to conversational interfaces is just the start of a UX revolution that will keep growing and surprising us.</p>
<p><em>I did not and could not have produced APEX on my own in the time that we were able to bring this together. APEX would not have been possible without the team that created it:</em></p>
<ul>
<li><p><em>Cauri Jaye</em></p>
</li>
<li><p><em>Ross Hale</em></p>
</li>
<li><p><em>Serena Epstein</em></p>
</li>
<li><p><em>Michael McCormick</em></p>
</li>
<li><p><em>Ryan Durling</em></p>
</li>
<li><p><em>Nick Mahoney</em></p>
</li>
<li><p><em>George Wambold</em></p>
</li>
<li><p><em>Nafisa Rawji</em></p>
</li>
<li><p><em>Mark Whaley</em></p>
</li>
<li><p><em>James Lenhart</em></p>
</li>
<li><p><em>Chay Landaverde</em></p>
</li>
<li><p><em>Gene Gurvich</em></p>
</li>
<li><p><em>Randy Lutcavich</em></p>
</li>
<li><p><em>John Wilger</em></p>
</li>
<li><p><em>and the rest of the Artium team who supported our work</em></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Complex Unique Constraints with PostgreSQL Triggers in Ecto]]></title><description><![CDATA[Ecto makes it easy to work with typical uniqueness constraints in your database; you just
define your table like this:
defmodule MyApp.Repo.Migrations.CreateFoos do
  use Ecto.Migration

  def change do
    create table(:foos) do
      add :name, :te...]]></description><link>https://johnwilger.com/complex-unique-constraints-with-postgresql-triggers-in-ecto</link><guid isPermaLink="true">https://johnwilger.com/complex-unique-constraints-with-postgresql-triggers-in-ecto</guid><category><![CDATA[ecto]]></category><category><![CDATA[Elixir]]></category><category><![CDATA[PostgreSQL]]></category><category><![CDATA[database]]></category><dc:creator><![CDATA[John Wilger]]></dc:creator><pubDate>Sun, 16 Feb 2020 00:40:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/lRoX0shwjUQ/upload/33be72f043a35f9b4b8257720d5a6a75.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><a target="_blank" href="https://hexdocs.pm/ecto">Ecto</a> makes it easy to work with typical uniqueness constraints in your database; you just
define your table like this:</p>
<pre><code class="lang-elixir"><span class="hljs-class"><span class="hljs-keyword">defmodule</span> <span class="hljs-title">MyApp.Repo.Migrations.CreateFoos</span></span> <span class="hljs-keyword">do</span>
  <span class="hljs-keyword">use</span> Ecto.Migration

  <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">change</span></span> <span class="hljs-keyword">do</span>
    create table(<span class="hljs-symbol">:foos</span>) <span class="hljs-keyword">do</span>
      add <span class="hljs-symbol">:name</span>, <span class="hljs-symbol">:text</span>, <span class="hljs-symbol">null:</span> <span class="hljs-keyword">false</span>
    <span class="hljs-keyword">end</span>

    create unique_index(<span class="hljs-symbol">:foos</span>, <span class="hljs-symbol">:name</span>)
  <span class="hljs-keyword">end</span>
<span class="hljs-keyword">end</span>
</code></pre>
<p>and a module with a <a target="_blank" href="https://hexdocs.pm/ecto/Ecto.Changeset.html#unique_constraint/3">changeset validation for the uniqueness constraint</a>,
perhaps like:</p>
<pre><code class="lang-elixir"><span class="hljs-class"><span class="hljs-keyword">defmodule</span> <span class="hljs-title">MyApp.Foo</span></span> <span class="hljs-keyword">do</span>
  <span class="hljs-keyword">use</span> Ecto.Schema

  <span class="hljs-keyword">import</span> Ecto.Changeset

  schema <span class="hljs-string">"foos"</span> <span class="hljs-keyword">do</span>
    field <span class="hljs-symbol">:name</span>, <span class="hljs-symbol">:string</span>
  <span class="hljs-keyword">end</span>

  <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">changeset</span></span>(%__MODULE__{} = foo, %{} = changes) <span class="hljs-keyword">do</span>
      foo
      |&gt; cast(changes, [<span class="hljs-symbol">:name</span>])
      |&gt; unique_constraint(<span class="hljs-symbol">:name</span>)
  <span class="hljs-keyword">end</span>
<span class="hljs-keyword">end</span>
</code></pre>
<p>Then, when you run:</p>
<pre><code class="lang-elixir">result = MyApp.Foo.changeset(%MyApp.Foo{}, %{<span class="hljs-symbol">name:</span> <span class="hljs-string">"bar"</span>})
         |&gt; MyApp.Repo.insert()
</code></pre>
<p>The Ecto library will attempt to insert your record, and if there is already a record where the
name column is set to "bar", Ecto will see the uniqueness constraint violation error produced by
the database and turn it into a validation error on your changeset, so that it would look
something like:</p>
<pre><code class="lang-elixir">{<span class="hljs-symbol">:error</span>, %Ecto.Changeset{<span class="hljs-symbol">errors:</span> [<span class="hljs-symbol">name:</span> {<span class="hljs-string">"has already been taken"</span>, _}]}} = result
</code></pre>
<h3 id="heading-something-more-complex">Something More Complex</h3>
<p>I recently needed to enforce a database constraint similar in spirit to a unique index: if a
record were in violation, I wanted the same behavior on the Elixir end of things (the changeset
should report that the value for the field "has already been taken"); however, the criteria for what
should be considered "unique" were more complex than what a simple unique index in PostgreSQL
could handle.</p>
<p>Let's pretend we have a project that is managing meeting room reservations. The process for
reserving a meeting room is as follows:</p>
<ol>
<li>The customer chooses a room and selects the time range for the reservation.</li>
<li>Assuming the room is available at that time, the system places a hold on the room for 24 hours.</li>
<li>Within that 24-hour period, the customer pays for the room and completes some contractual
information that must be signed by both the customer and the facility manager.</li>
<li>Once the payment is complete and the contracts are signed, the reservation is confirmed.</li>
</ol>
<p>Here, then, are the business rules for creating a reservation:</p>
<ul>
<li><p>A room reservation cannot be made for a given room and time-period if there exists another room
reservation for that same room with an overlapping time-period and the existing reservation is
in a "confirmed" status.</p>
</li>
<li><p>A room reservation cannot be made for a given room and time-period if there exists another room
reservation for that same room with an overlapping time-period, the existing reservation is in a
"hold" status, and the existing reservation was created within the last 24 hours.</p>
</li>
</ul>
<p>We decide to store the room reservations in a table defined as:</p>
<pre><code class="lang-elixir"><span class="hljs-class"><span class="hljs-keyword">defmodule</span> <span class="hljs-title">Meetings.Repo.Migrations.CreateRoomReservations</span></span> <span class="hljs-keyword">do</span>
  <span class="hljs-keyword">use</span> Ecto.Migration

  <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">change</span></span> <span class="hljs-keyword">do</span>
    create table(<span class="hljs-symbol">:room_reservations</span>) <span class="hljs-keyword">do</span>
      add <span class="hljs-symbol">:customer_id</span>, <span class="hljs-symbol">:binary_id</span>, <span class="hljs-symbol">null:</span> <span class="hljs-keyword">false</span>
      add <span class="hljs-symbol">:room_id</span>, <span class="hljs-symbol">:binary_id</span>, <span class="hljs-symbol">null:</span> <span class="hljs-keyword">false</span>
      add <span class="hljs-symbol">:reservation_starts_at</span>, <span class="hljs-symbol">:timestamp</span>, <span class="hljs-symbol">null:</span> <span class="hljs-keyword">false</span>
      add <span class="hljs-symbol">:reservation_ends_at</span>, <span class="hljs-symbol">:timestamp</span>, <span class="hljs-symbol">null:</span> <span class="hljs-keyword">false</span>
      add <span class="hljs-symbol">:status</span>, <span class="hljs-symbol">:text</span>, <span class="hljs-symbol">null:</span> <span class="hljs-keyword">false</span>, <span class="hljs-symbol">default:</span> <span class="hljs-string">"hold"</span>
      timestamps()
    <span class="hljs-keyword">end</span>
  <span class="hljs-keyword">end</span>
<span class="hljs-keyword">end</span>
</code></pre>
<p>and ideally, we'd like to be able to model the <code>RoomReservation</code> in Elixir as:</p>
<pre><code class="lang-elixir"><span class="hljs-class"><span class="hljs-keyword">defmodule</span> <span class="hljs-title">Meetings.RoomReservation</span></span> <span class="hljs-keyword">do</span>
  <span class="hljs-keyword">use</span> Ecto.Schema

  <span class="hljs-keyword">import</span> Ecto.Changeset

  schema <span class="hljs-string">"room_reservations"</span> <span class="hljs-keyword">do</span>
    field <span class="hljs-symbol">:customer_id</span>, <span class="hljs-symbol">:binary_id</span>
    field <span class="hljs-symbol">:room_id</span>, <span class="hljs-symbol">:binary_id</span>
    field <span class="hljs-symbol">:reservation_starts_at</span>, <span class="hljs-symbol">:utc_datetime</span>
    field <span class="hljs-symbol">:reservation_ends_at</span>, <span class="hljs-symbol">:utc_datetime</span>
    field <span class="hljs-symbol">:status</span>, <span class="hljs-symbol">:string</span>
    timestamps()
  <span class="hljs-keyword">end</span>

  <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">create</span></span>(%{} = res_data) <span class="hljs-keyword">do</span>
    %__MODULE__{}
    |&gt; cast(res_data, [<span class="hljs-symbol">:customer_id</span>, <span class="hljs-symbol">:room_id</span>, <span class="hljs-symbol">:reservation_starts_at</span>, <span class="hljs-symbol">:reservation_ends_at</span>])
    |&gt; validate_required([<span class="hljs-symbol">:customer_id</span>, <span class="hljs-symbol">:room_id</span>, <span class="hljs-symbol">:reservation_starts_at</span>, <span class="hljs-symbol">:reservation_ends_at</span>])
    |&gt; put_change(<span class="hljs-symbol">:status</span>, <span class="hljs-string">"hold"</span>)
    |&gt; unique_constraint(<span class="hljs-symbol">:room_id</span>, <span class="hljs-symbol">message:</span> <span class="hljs-string">"has already been reserved"</span>)
    |&gt; Meetings.Repo.insert()
  <span class="hljs-keyword">end</span>
<span class="hljs-keyword">end</span>
</code></pre>
<p>with the interesting bit being the <code>|&gt; unique_constraint(:room_id)</code> line. When a customer tries to
reserve a room that is already reserved for the given time period, we want the
<code>Meeting.RoomReservation.create/1</code> function to return with a validation error:</p>
<pre><code class="lang-elixir">{<span class="hljs-symbol">:error</span>, %Ecto.Changeset{<span class="hljs-symbol">errors:</span> [<span class="hljs-symbol">room_id:</span> {<span class="hljs-string">"has already been reserved"</span>, _}]}} =
  Meetings.RoomReservation.create(%{
    <span class="hljs-symbol">customer_id:</span> <span class="hljs-string">"0af5716a-c3d2-4d4c-87f5-9fed9b2515d4"</span>,
    <span class="hljs-symbol">room_id:</span> <span class="hljs-string">"2c0d6242-3585-40cb-be92-a1818c0f4a73"</span>,
    <span class="hljs-symbol">reservation_starts_at:</span> DateTime.from_naive!(<span class="hljs-string">~N[2020-01-01 08:00:00]</span>, <span class="hljs-string">"Etc/UTC"</span>),
    <span class="hljs-symbol">reservation_ends_at:</span> DateTime.from_naive!(<span class="hljs-string">~N[2020-01-01 17:00:00]</span>, <span class="hljs-string">"Etc/UTC"</span>)
  })
</code></pre>
<p>Clearly, according to the business rules for the system, we can't just put a unique index on the
<code>room_id</code> column. To the best of my knowledge, an exclusion constraint also won't work here,
because of the need to <em>not</em> check against any rows that have a status of "hold" and an
<code>inserted_at</code> timestamp that is more than 24 hours ago relative to the current time. It <em>is</em>
however possible to use a trigger function to enforce the constraint in the database:</p>
<pre><code class="lang-elixir"><span class="hljs-class"><span class="hljs-keyword">defmodule</span> <span class="hljs-title">Meetings.Repo.Migrations.CreateRoomReservations</span></span> <span class="hljs-keyword">do</span>
  <span class="hljs-keyword">use</span> Ecto.Migration

  <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">change</span></span> <span class="hljs-keyword">do</span>
    create table(<span class="hljs-symbol">:room_reservations</span>) <span class="hljs-keyword">do</span>
      add <span class="hljs-symbol">:customer_id</span>, <span class="hljs-symbol">:binary_id</span>, <span class="hljs-symbol">null:</span> <span class="hljs-keyword">false</span>
      add <span class="hljs-symbol">:room_id</span>, <span class="hljs-symbol">:binary_id</span>, <span class="hljs-symbol">null:</span> <span class="hljs-keyword">false</span>
      add <span class="hljs-symbol">:reservation_starts_at</span>, <span class="hljs-symbol">:timestamp</span>, <span class="hljs-symbol">null:</span> <span class="hljs-keyword">false</span>
      add <span class="hljs-symbol">:reservation_ends_at</span>, <span class="hljs-symbol">:timestamp</span>, <span class="hljs-symbol">null:</span> <span class="hljs-keyword">false</span>
      add <span class="hljs-symbol">:status</span>, <span class="hljs-symbol">:text</span>, <span class="hljs-symbol">null:</span> <span class="hljs-keyword">false</span>, <span class="hljs-symbol">default:</span> <span class="hljs-string">"hold"</span>
      timestamps()
    <span class="hljs-keyword">end</span>

    execute(
      <span class="hljs-comment"># up</span>
      <span class="hljs-string">~S"""
      CREATE FUNCTION room_reservations_check_room_availability() RETURNS TRIGGER AS
      $$
      BEGIN
      IF EXISTS(
        SELECT 1
        FROM room_reservations rr
        WHERE rr.room_id = NEW.room_id
          AND tsrange(rr.reservation_starts_at, rr.reservation_ends_at, '[]') &amp;&amp;
              tsrange(NEW.reservation_starts_at, NEW.reservation_ends_at, '[]')
          AND (
            rr.status = 'confirmed'
              OR rr.inserted_at &gt; CURRENT_TIMESTAMP - interval '24 hours'
          )
      )
      THEN
        RAISE EXCEPTION 'room already reserved';
      END IF;
      RETURN NEW;
      END;
      $$ language plpgsql;
      """</span>,

      <span class="hljs-comment"># down</span>
      <span class="hljs-string">"DROP FUNCTION room_reservations_check_room_availability;"</span>
    )

    execute(
      <span class="hljs-comment"># up</span>
      <span class="hljs-string">~S"""
      CREATE TRIGGER room_reservations_room_availability_check
      BEFORE INSERT ON room_reservations
      FOR EACH ROW
      EXECUTE PROCEDURE room_reservations_check_room_availability();
      """</span>,

      <span class="hljs-comment"># down</span>
      <span class="hljs-string">"DROP TRIGGER room_reservations_room_availability_check ON room_reservations;"</span>
    )
  <span class="hljs-keyword">end</span>
<span class="hljs-keyword">end</span>
</code></pre>
<p>The problem is that instead of the nice validation error on the changeset that we <em>want</em>,
we end up with an unhandled <code>Postgrex.Error</code> exception. We could, of course, simply rescue
that exception in our <code>Meetings.RoomReservation.create/1</code> function and add the error to the
changeset ourselves, but that starts to look a little ugly:</p>
<pre><code class="lang-elixir"><span class="hljs-class"><span class="hljs-keyword">defmodule</span> <span class="hljs-title">Meetings.RoomReservation</span></span> <span class="hljs-keyword">do</span>
  <span class="hljs-comment"># ...</span>

  <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">create</span></span>(%{} = res_data) <span class="hljs-keyword">do</span>
    changeset =
      %__MODULE__{}
      |&gt; cast(res_data, [<span class="hljs-symbol">:customer_id</span>, <span class="hljs-symbol">:room_id</span>, <span class="hljs-symbol">:reservation_starts_at</span>, <span class="hljs-symbol">:reservation_ends_at</span>])
      |&gt; validate_required([<span class="hljs-symbol">:customer_id</span>, <span class="hljs-symbol">:room_id</span>, <span class="hljs-symbol">:reservation_starts_at</span>, <span class="hljs-symbol">:reservation_ends_at</span>])
      |&gt; put_change(<span class="hljs-symbol">:status</span>, <span class="hljs-string">"hold"</span>)

    <span class="hljs-comment"># An explicit try is needed so that `changeset` remains in scope</span>
    <span class="hljs-comment"># in the rescue clause.</span>
    <span class="hljs-keyword">try</span> <span class="hljs-keyword">do</span>
      Meetings.Repo.insert(changeset)
    <span class="hljs-keyword">rescue</span>
      error <span class="hljs-keyword">in</span> Postgrex.Error -&gt;
        <span class="hljs-keyword">case</span> error <span class="hljs-keyword">do</span>
          %{<span class="hljs-symbol">postgres:</span> %{<span class="hljs-symbol">message:</span> <span class="hljs-string">"room already reserved"</span>}} -&gt;
            {<span class="hljs-symbol">:error</span>, add_error(changeset, <span class="hljs-symbol">:room_id</span>, <span class="hljs-string">"has already been reserved"</span>)}
        <span class="hljs-keyword">end</span>
    <span class="hljs-keyword">end</span>
  <span class="hljs-keyword">end</span>
<span class="hljs-keyword">end</span>
</code></pre>
<h3 id="heading-tldr-its-all-about-the-raise">TL;DR - It's All About the Raise</h3>
<p>Knowing that <code>Ecto.Changeset.unique_constraint/3</code> works by intercepting an error raised by the
database, I set out to see if I could implement the complex unique constraint logic in the
database and still be able to use the <code>Ecto.Changeset.unique_constraint/3</code> validation without
needing to modify any Elixir code. Looking at <a target="_blank" href="https://github.com/elixir-ecto/ecto_sql/blob/0359b7ce5155974d566fcd0b127f8695aed8b3a9/lib/ecto/adapters/postgres/connection.ex#L18-L19">the relevant code in ecto_sql</a>, we
can see that the trick to getting the <code>room_reservations_check_room_availability</code> trigger to
work with <code>unique_constraint/3</code> is to raise the PostgreSQL exception with the correct error code as
well as a constraint name that is linked to the changeset validation. We can redefine the
PostgreSQL function as:</p>
<pre><code class="lang-sql"><span class="hljs-keyword">CREATE</span> <span class="hljs-keyword">FUNCTION</span> room_reservations_check_room_availability() <span class="hljs-keyword">RETURNS</span> <span class="hljs-keyword">TRIGGER</span> <span class="hljs-keyword">AS</span>
$$
<span class="hljs-keyword">BEGIN</span>
<span class="hljs-keyword">IF</span> <span class="hljs-keyword">EXISTS</span>(
  <span class="hljs-keyword">SELECT</span> <span class="hljs-number">1</span>
  <span class="hljs-keyword">FROM</span> room_reservations rr
  <span class="hljs-keyword">WHERE</span> rr.room_id = NEW.room_id
    <span class="hljs-keyword">AND</span> tsrange(rr.reservation_starts_at, rr.reservation_ends_at, <span class="hljs-string">'[]'</span>) &amp;&amp;
        tsrange(NEW.reservation_starts_at, NEW.reservation_ends_at, <span class="hljs-string">'[]'</span>)
    <span class="hljs-keyword">AND</span> (
      rr.status = <span class="hljs-string">'confirmed'</span>
        <span class="hljs-keyword">OR</span> rr.inserted_at &gt; <span class="hljs-keyword">CURRENT_TIMESTAMP</span> - <span class="hljs-built_in">interval</span> <span class="hljs-string">'24 hours'</span>
    )
)
<span class="hljs-keyword">THEN</span>
  <span class="hljs-keyword">RAISE</span> unique_violation
    <span class="hljs-keyword">USING</span> <span class="hljs-keyword">CONSTRAINT</span> = <span class="hljs-string">'room_reservations_room_reserved'</span>;
<span class="hljs-keyword">END</span> <span class="hljs-keyword">IF</span>;
RETURN NEW;
<span class="hljs-keyword">END</span>;
$$ language plpgsql;
</code></pre>
<p>and then add the <code>:name</code> option to our <code>unique_constraint</code> in the changeset validation:</p>
<pre><code class="lang-elixir"><span class="hljs-class"><span class="hljs-keyword">defmodule</span> <span class="hljs-title">Meetings.RoomReservation</span></span> <span class="hljs-keyword">do</span>
  <span class="hljs-comment">#...</span>

  <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">create</span></span>(%{} = res_data) <span class="hljs-keyword">do</span>
    %__MODULE__{}
    |&gt; cast(res_data, [<span class="hljs-symbol">:customer_id</span>, <span class="hljs-symbol">:room_id</span>, <span class="hljs-symbol">:reservation_starts_at</span>, <span class="hljs-symbol">:reservation_ends_at</span>])
    |&gt; validate_required([<span class="hljs-symbol">:customer_id</span>, <span class="hljs-symbol">:room_id</span>, <span class="hljs-symbol">:reservation_starts_at</span>, <span class="hljs-symbol">:reservation_ends_at</span>])
    |&gt; put_change(<span class="hljs-symbol">:status</span>, <span class="hljs-string">"hold"</span>)
    |&gt; unique_constraint(<span class="hljs-symbol">:room_id</span>,
                         <span class="hljs-symbol">message:</span> <span class="hljs-string">"has already been reserved"</span>,
                         <span class="hljs-symbol">name:</span> <span class="hljs-string">"room_reservations_room_reserved"</span>)
    |&gt; Meetings.Repo.insert()
  <span class="hljs-keyword">end</span>
<span class="hljs-keyword">end</span>
</code></pre>
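<p>To sanity-check the end-to-end behavior, a sketch of an ExUnit test might look like the following. This is illustrative rather than from the project: it assumes a standard <code>Ecto.Adapters.SQL.Sandbox</code> test setup and that the migration above has been run; the module names and IDs are taken from the earlier examples.</p>
<pre><code class="lang-elixir">defmodule Meetings.RoomReservationTest do
  use ExUnit.Case

  @res_data %{
    customer_id: "0af5716a-c3d2-4d4c-87f5-9fed9b2515d4",
    room_id: "2c0d6242-3585-40cb-be92-a1818c0f4a73",
    reservation_starts_at: DateTime.from_naive!(~N[2020-01-01 08:00:00], "Etc/UTC"),
    reservation_ends_at: DateTime.from_naive!(~N[2020-01-01 17:00:00], "Etc/UTC")
  }

  test "an overlapping reservation is rejected with a changeset error" do
    # The first reservation succeeds and is stored with status "hold".
    assert {:ok, _reservation} = Meetings.RoomReservation.create(@res_data)

    # A second reservation for the same room and overlapping time range trips
    # the trigger, and unique_constraint/3 turns it into a changeset error.
    assert {:error, changeset} = Meetings.RoomReservation.create(@res_data)
    assert {"has already been reserved", _} = changeset.errors[:room_id]
  end
end
</code></pre>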
]]></content:encoded></item><item><title><![CDATA[Living Intentionally]]></title><description><![CDATA[A couple of weeks ago, I was thrown yet another curve-ball in a life
that has seen quite a few changes lately. I've spent the last four years
wearing the hats of senior developer, technical manager, process-coach,
and mentor at LivingSocial. It was e...]]></description><link>https://johnwilger.com/living-intentionally</link><guid isPermaLink="true">https://johnwilger.com/living-intentionally</guid><category><![CDATA[life]]></category><dc:creator><![CDATA[John Wilger]]></dc:creator><pubDate>Thu, 07 Apr 2016 00:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/U3C79SeHa7k/upload/7944625b1c9d3e741bb5daed64017b86.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A couple of weeks ago, I was thrown yet another curve-ball in a life
that has seen quite a few changes lately. I've spent the last four years
wearing the hats of senior developer, technical manager, process-coach,
and mentor at LivingSocial. It was exactly the right place for me to be
during a time when the rest of my life was in upheaval due to a divorce,
a (re)discovery of my passion for racing bicycles, a new marriage, and
the many ups and downs of trying to facilitate the bonding of a large,
blended family. In many ways, the last four years have been a second
adolescence for me; I've had the chance to try out some new things―both
professionally and in my personal life―and become reacquainted with the
idea of what makes me tick. My position at LivingSocial offered a <em>lot</em>
of flexibility both in terms of schedule and the ability to do valuable
work that fit well with the level of emotional effort I was capable of
at any given time. I am grateful to have worked with an engineering
organization led by people who cared enough about me as an individual to
stand by me through some fairly tough times.</p>
<p>Inevitably, all things <em>do</em> come to an end, and I, along with many other
very talented coworkers, was laid off a couple of weeks ago as my (now
former) employer struggles to build enough runway to make a pivot into a
new area of business. I have no complaints about the action: it was a
clearly necessary move, and LivingSocial was <strong>very</strong> generous with a
severance package for everyone affected. In fact, for the first time in
my adult life, I have the opportunity to spend a considerable amount of
time examining my life and making a considered decision about my next
steps without needing to simultaneously worry about earning today's
paycheck.</p>
<p>Because I needed to return to LivingSocial the company-provided laptop
that I've been using for both work and personal computing for the last
several years, one of the first things I did after finding out about the
lay-off was to purchase a new laptop. I've been growing more-and-more
frustrated with Apple's "walled-garden" approach to OS X lately, so I
decided I'd revisit an old love, and ordered a laptop from
<a target="_blank" href="https://system76.com">System76</a>, an <a target="_blank" href="https://system76.com/laptops/oryx">Oryx Pro</a> running Linux. For <em>most</em>
things, I find the modern Linux desktop perfectly adequate, but there
are two pieces of software that I rely upon heavily and have not yet
found a suitable replacement: Quicken for personal finance management,
and <a target="_blank" href="https://www.omnigroup.com/omnifocus">OmniFocus</a> for managing my (rather expansive) to-do
list.</p>
<p>It is while searching for a replacement for OmniFocus that I was turned
on to the process that I'm going to share with you for figuring out what
truly matters in your life <em>and</em> ensuring that you align the way you
spend your time with the person whom you really want to be. I can't
claim to have invented this method, but hopefully an example of how I've
been applying it recently will inspire you to do the same and take
charge of your life in a way that brings true clarity to everything you
will do from here on out. If you live every moment <em>intentionally</em>, you
can transform from a state of constant background-worries about whether
you are missing something important into a state of relaxed-action where
you can be confident that you are putting your energy into the right
work while also remaining flexible enough to change direction as new
events transpire.</p>
<h2 id="heading-a-system-for-intentional-living">A System for Intentional Living</h2>
<p>In my search for a new task-management system that would work outside of
Apple's walled-garden of OS X and iOS, one of the products I evaluated
is a web application called <a target="_blank" href="https://facilethings.com">FacileThings</a>. This
application provides an extremely railroaded implementation of the
popular <a target="_blank" href="http://gettingthingsdone.com">Getting Things Done</a> methodology, which can be nice when
you are a person (like me) who would otherwise be inclined to spend
nearly as much time fiddling with the <em>configuration</em> of a
task-management system as you would...well...actually getting things
done. Ultimately, I decided not to use FacileThings, but only because I
did not feel like it presented the tasks in a way that really helped me
narrow my view down to "exactly what should I be working on <strong>right
now</strong>?" (This is an area where the implementation of <em>Perspectives</em> in
OmniFocus has really spoiled me, and I can't find any other system that
does this well.)</p>
<p>The one feature in FacileThings that I found <strong>extremely</strong> useful,
however, is the "Perspective" section (not the same use of the word as
in OmniFocus). This section is what turned me on to the method I'm
describing here via its guided approach to walking the user through
three fairly basic―but extremely effective―steps to help align your
effort with the things that matter most to you:</p>
<ol>
<li>Define your purpose</li>
<li>Describe your vision</li>
<li>Set your goals</li>
</ol>
<h3 id="heading-step-1-define-your-purpose">Step 1: Define Your Purpose</h3>
<p>The first question is, what is <em>your</em> purpose? In other words: why do
you exist; what is your unique contribution to this world?</p>
<p>I've often thought about what it is I want to do with my career (I
really enjoy the work of building and coaching high-functioning software
development teams). I've certainly put thought into what is important to
me in raising my children (I want them to be kind, independent adults
who are ultimately content with their lives). I've thought about what I
want to do with my personal time (I like to race bicycles, go
backpacking, and travel). But this, honestly, is the first time I've
been asked the question, "what is your purpose," in a way that caused me
to step back and examine that question not in terms of how it applies to
any one aspect of my life, but instead to find the true, underlying
<em>"thing"</em> about which I am passionate across <em>all</em> areas of my life.</p>
<p>After some reflection, this is what I came up with for myself:</p>
<blockquote>
<p>My passion is leading groups of people in collaborative efforts to
improve the world around us—for ourselves, for future generations, and
for other species—and to help develop other leaders who can
effectively do the same.</p>
</blockquote>
<p>The fundamentally important part of this "Purpose" is that it is
immediately applicable to <strong>every single action</strong> that I might take. I
want everything into which I put my effort to result in a tangible
improvement to the world. Now, that doesn't mean I am constantly working
on, say, "world peace" or "stop the polar ice caps from melting"; even
actions as simple as "clean the kitchen", "pick up that piece of trash
that someone left laying about", or "smile at the person whose path you
cross on the sidewalk" meet this litmus test. The way it is stated, I've
even allowed for actions that simply improve the world around me <em>for
me</em> (because it's important to take care of yourself, too!)</p>
<p>"Leading," of course, means "doing" in a way that encourages and
empowers others to do the same. In this sense, my purpose applies well
to both my career and my family-life. As an individual-contributor to
software projects, I want the work I do to exemplify
software-craftsmanship. As an agile coach or team manager, I want to
show people how an environment of respect and consensus can improve the
quality of our <em>product</em> as well as our lives. As a parent, I want to
demonstrate the kindness and stewardship of our world that I hope my
children will exhibit as adults.</p>
<p>Furthermore, I want to help create more leaders who will also do these
things. As a consultant or team-manager, it is not enough for me to
engage with a team and lead them in my way of thinking without also
developing the team's ability to continue and evolve after I am no
longer involved. As a parent, I want my children to pass these ideas on
to <em>their</em> children and their peers, so that society can benefit at a
scale that I could never reach on my own.</p>
<p>So, what is <em>your</em> purpose? Take some time to reflect on it. Write it
down somewhere! See if you can summarize it in just one or two sentences
that you can apply as a test whenever you are unsure what direction to
take with decisions big or small.</p>
<p>Importantly, while your purpose should be broad enough to cover all
aspects of your life and so fundamental that it is <em>unlikely</em> to change,
<strong>don't be afraid to change it!</strong> If, in a month, a year, or a decade
you feel like your purpose statement no longer fits you, spend the time
to reflect and write a new one that does. <em>Nothing</em> in this world will
guarantee failure so much as forcing oneself to work against one's true
purpose.</p>
<h3 id="heading-step-2-describe-your-vision">Step 2: Describe Your Vision</h3>
<blockquote>
<p>If you can visualize the whole of spring and see Paradise with the eye
of belief, you may understand the utter majesty of everlasting Beauty.
If you respond to that Beauty with the beauty of belief and worship,
you will be a most beautiful creature. ―Said Nursi</p>
</blockquote>
<p>Now that you understand your purpose, how does it shape the person that
you will be in five to ten years? Take a moment, close your eyes, and
visualize your life in the future. What do you see? What are you doing
for a living? What kind of lifestyle do you lead? What is your family
like?</p>
<p>When I think about what <em>my</em> life will look like five to ten years from
now, here's what <em>I</em> see (more-or-less in order of importance to me):</p>
<ul>
<li><p><strong>I am a parent of successful children.</strong> Our children are either
currently or well on their way to becoming independent young adults
who are empowered to find their own successes and be content with
their lives.</p>
</li>
<li><p><strong>I am prepared for the future.</strong> We are prepared for the future with
a financial plan that puts us on the path to a long and enjoyable
retirement and have ensured that there will not be disastrous
financial or legal issues in the event of Amber's or my unexpected
death or inability to work or live independently.</p>
</li>
<li><p><strong>I have a beautiful, well-maintained home.</strong> Our home is
well-maintained, has enough space for everyone in the family, is
nicely landscaped, and well-decorated. It is a place where we are
comfortable living and where we are proud to entertain guests.</p>
</li>
<li><p><strong>I am a competitive masters cyclist.</strong> I am competitive within my
age group and category as a racing cyclist and am on track to remain
competitive until well-into my old age.</p>
</li>
<li><p><strong>I am a successful entrepreneur.</strong> I have built a successful
consulting business with a solid base of customers and referral
business that provides lasting financial security for my family and
allows me to partially fulfill my purpose via my career.</p>
</li>
<li><p><strong>I am socially/politically involved.</strong> I actively take part in
social, civic, and political activities that have a positive impact on
the world around me.</p>
</li>
</ul>
<p>There is no "correct" number of vision statements to have. Just try to
describe what your life looks like to you. Check that each vision is in
line with your purpose; if it doesn't directly contribute to your
purpose, it must at least not be in conflict with your purpose. If it is
in conflict, check that this is really <em>your</em> vision for your life and
not someone else's vision for your life! You only get one shot at this
life; be your own person.</p>
<h3 id="heading-step-3-set-your-goals">Step 3: Set Your Goals</h3>
<blockquote>
<p>It's OK to have a plan, to invest in your future―for your financial
security, your love life, your personal fulfillment, and even your
happiness. To have personal happiness as a stated goal doesn't detract
from it if you get there. ―Karen Finerman</p>
</blockquote>
<p>It's no good seeing where you want to be without a plan to get there.
Now that you've visualized the life you want to be living ten years from
now, what concrete achievements can help get you there? By setting
<a target="_blank" href="https://en.wikipedia.org/wiki/SMART_criteria">SMART goals</a> to help realize your vision, you will have a
tool to measure your progress towards it. While vision statements
reflect the long term outcomes of your life, your goals should generally
be things that are achievable within the next few years; they won't
necessarily get you all the way to your vision, but they materially move
you closer to it. Your goals should be even less set in stone than your
vision. As you achieve some goals, and as other life-events occur, you
will discover that your goals need to change as well. Never be afraid to
change your plan as you get more information.</p>
<p>Important to the "R" (relevant) part of a SMART goal: make sure your
goals are related to one (or more) of your vision statements. If you
can't find a way to relate a stated goal to a vision, check whether you
have a vision that you haven't made explicit and get it written down. If
the goal really doesn't apply to your vision of your life, <em>drop it
now</em>. This is where being explicit about your purpose and vision really
pays off: living your life intentionally means <em>not</em> spending your
limited time on things that are not important to you.</p>
<p>As of right now, here are my goals as they relate to each of my vision
statements:</p>
<ul>
<li><p><strong>I am a parent of successful children.</strong></p>
<ul>
<li><p><strong>Children all have defined Purpose, Vision, Goals</strong> Just like me,
each of our children will establish a habit of defining and
reviewing their Purpose, Vision, and Goals, so that I can better
help them. Timeframe: end of 2016</p>
</li>
<li><p><strong>Children are Financially Responsible</strong> Each of our children will
understand the basics of managing their finances, including using
bank accounts, creating and following a budget, and earning money.
Timeframe: end of 2018</p>
</li>
<li><p><strong>Children have effective, well-established study/homework habits</strong>
Children will have effective study and work habits that allow them
to achieve their full potential in school. Timeframe: end of
2016/2017 school year</p>
</li>
<li><p><strong>Take a family vacation each year</strong> While kids are still young,
take the entire family on a week-long (or more) vacation each year
to a different destination to expose them to different cultures and
environments. Timeframe: by end of each year</p>
</li>
</ul>
</li>
<li><p><strong>I am prepared for the future.</strong></p>
<ul>
<li><p><strong>Have Investments to Fund Retirement</strong> Set up and <em>understand</em>
investment accounts to fund our retirement. Timeframe: end of 2017</p>
</li>
<li><p><strong>Pay off all debt other than mortgage</strong> Timeframe: end of 2020</p>
</li>
<li><p><strong>Set up Proper Legal Documentation</strong> Ensure that appropriate legal
documentation is in place in the event that Amber or I die.
Timeframe: end of 2016</p>
</li>
</ul>
</li>
<li><p><strong>I have a beautiful, well-maintained home.</strong></p>
<ul>
<li><p><strong>Complete Home Remodel</strong> Have extensive remodeling work done on our
home to provide enough bedrooms for everyone, update the kitchen,
add a home-office and home-gym, and add a MiL suite/apartment.
Timeframe: end of 2017</p>
</li>
<li><p><strong>House is in a good state of repair</strong> The backlog of items needing
repair (major and minor) has been cleared out, so that we can focus
our attention on new issues as they arise. Timeframe: June 2017</p>
</li>
</ul>
</li>
<li><p><strong>I am a competitive masters cyclist.</strong></p>
<ul>
<li><strong>Upgrade to a Category 3 Road Racer</strong> Timeframe: end of 2018</li>
</ul>
</li>
<li><p><strong>I am a successful entrepreneur.</strong></p>
<ul>
<li><strong>Build a stable consulting business</strong> Create a business that will
provide for our financial needs in the near term. Timeframe: end of
2016</li>
</ul>
</li>
<li><p><strong>I am socially/politically involved.</strong></p>
<ul>
<li><p><strong>Develop and Run Programming Course at FGCS</strong> Build on the work
already done by Markus Roberts to develop and successfully run a
full-year, introductory computer programming course for the students
at FGCS. Timeframe: end of 2016/2017 school year</p>
</li>
<li><p><strong>Hold a local public office</strong> Serve my community by being elected
or appointed to and effectively running a local public office.
Timeframe: end of 2020.</p>
</li>
</ul>
</li>
</ul>
<h3 id="heading-living-each-moment-with-intent">Living Each Moment with Intent</h3>
<p>Now that you understand your purpose, have visualized your future, and
have planned some concrete goals to help get you there, you have the
tools you need to live intentionally. The next time you have a task on
your plate, and you have that nagging feeling that you don't want to do
it, think about how it relates to this hierarchy of goals, vision, and
purpose. If it works against your purpose, drop it like a hot brick.
Don't waste a moment of your precious time doing something that isn't
<em>you</em>. Why would you put effort into making it <em>harder</em> to achieve your
purpose in life?</p>
<p>If the task simply doesn't help you meet one of your established goals,
consider whether it is telling you there is another goal. To which of your
visions does it relate? If there is another goal, write it down as such,
but consider whether it's something that you should act on right now, or
if this is something you should set aside for the future, once more
immediate goals have been achieved. Maybe that nagging feeling is just
telling you this isn't the right time.</p>
<p>If, after considering your purpose and your goals, the task at hand
still doesn't relate to something you are trying to achieve, don't do
it! If someone else is asking you to do it, explain to them why it isn't
something you are going to spend time on. Perhaps they'll offer some
additional perspective to show you how it fits with your goals, or
perhaps you will simply have to insist they find someone else to take
care of it. Remembering to say "no" to the activities that don't support
your goals is the most powerful tool at your disposal to ensure you have
the time and energy for the things that really matter to <em>you</em>.</p>
<p>This system has already provided me with a great deal of clarity in my
day-to-day planning. Yes, there is still more I'd <em>like</em> to do than what
I have time for in any given day, but by being able to quickly evaluate
which of my goals any given task supports, I find that I am much better
able to focus my attention on the task at hand, safe in the knowledge
that it is the right thing for me to work on at that moment.</p>
]]></content:encoded></item><item><title><![CDATA[Acceptance and Integration Testing with Kookaburra]]></title><description><![CDATA[UPDATE (2012-01-22): I realized this morning that the credit I gave to Sam
Livingston-Gray below may not have
adequately shown how instrumental he was in getting this project off the ground;
especially since much of his work was done in the private r...]]></description><link>https://johnwilger.com/acceptance-and-integration-testing-with-kookaburra</link><guid isPermaLink="true">https://johnwilger.com/acceptance-and-integration-testing-with-kookaburra</guid><dc:creator><![CDATA[John Wilger]]></dc:creator><pubDate>Sat, 21 Jan 2012 20:57:00 GMT</pubDate><content:encoded><![CDATA[<p><strong>UPDATE (2012-01-22):</strong> <em>I realized this morning that the credit I gave to <a target="_blank" href="http://resume.livingston-gray.com/">Sam
Livingston-Gray</a> below may not have
adequately shown how instrumental he was in getting this project off the ground;
especially since much of his work was done in the private repository from which
this was extracted. So, thanks, Sam. This might not have gone anywhere if you
hadn't worked to put the idea in practice in our application and helped everyone
on our team learn how to use the approach. I made a few minor changes below to
reflect this a bit better.</em></p>
<p>We've been using <a target="_blank" href="http://cukes.info/">Cucumber</a> for acceptance testing at
<a target="_blank" href="http://renewfund.com">Renewable Funding</a> since back when it was still part of
the <a target="_blank" href="http://rspec.info">RSpec</a> project (indeed, since before we were even
Renewable Funding). While we've always liked the ability to have plain-language
feature documentation that we could automatically test against, after years of
adding to and maintaining a fairly large set of Cucumber scenarios, the cost of
that maintenance was starting to really slow us down. The test suite began to
grow fragile, and it seemed like every time one of our UX designers changed
anything about the application's interface, the development team would spend a
bunch of time just babysitting Cucumber tests to get them passing again.</p>
<p>Last year, as I was reading Jez Humble's excellent <a target="_blank" href="http://www.amazon.com/gp/product/0321601912?tag=contindelive-20">Continuous
Delivery</a> book,
I was inspired when I came across the section titled "The Application Driver
Layer" (p. 198). This section describes an approach to acceptance testing where
the specification and the test implementation are isolated from the details of
the application's user interface by inserting a layer between the two that uses
good old OOP to abstract the user interface components. Martin Fowler describes
it as the <a target="_blank" href="http://martinfowler.com/eaaDev/WindowDriver.html">Window Driver</a> pattern on his website.</p>
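<p>To make the pattern concrete, here is a minimal, self-contained Ruby sketch (all names here are illustrative inventions, not Kookaburra's API): the test drives an intent-revealing <code>SignInScreen</code> object, and only that object knows the CSS selectors, so interface changes are absorbed in one place.</p>

```ruby
# Window Driver pattern sketch. FakeBrowser stands in for a real driver
# (e.g. Capybara); SignInScreen is the "window driver" that hides selectors.
class FakeBrowser
  attr_reader :actions

  def initialize
    @actions = []
  end

  def fill_in(selector, value)
    @actions << [:fill_in, selector, value]
  end

  def click(selector)
    @actions << [:click, selector]
  end
end

class SignInScreen
  def initialize(browser)
    @browser = browser
  end

  # Tests call this intent-revealing method; if the markup changes,
  # only the selectors below need to be updated.
  def sign_in(email, password)
    @browser.fill_in('#email', email)
    @browser.fill_in('#password', password)
    @browser.click('#sign-in-button')
  end
end

browser = FakeBrowser.new
SignInScreen.new(browser).sign_in('me@example.com', 'secret')
puts browser.actions.length # => 3
```

<p>If the sign-in button later moves or is renamed, only <code>SignInScreen</code> changes; every test that calls <code>sign_in</code> is untouched.</p>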
<p>I started a proof-of-concept implementation of this pattern last summer, then my
coworker, <a target="_blank" href="http://resume.livingston-gray.com/">Sam Livingston-Gray</a> and I
started pulling it into a new project at work. After Sam and the rest of the
Renewable Funding team helped improve on my original attempt while putting it to
use for the last six or so months, we extracted a library to make it easier for
other Ruby developers to implement this pattern in their testing. I'm happy to
introduce <a target="_blank" href="http://github.com/projectdx/kookaburra">Kookaburra</a> to the world.</p>

<p>The following comes from Kookaburra's README as of version 0.7.1.</p>
<hr />
<h1 id="heading-kookaburra">Kookaburra</h1>
<p>Kookaburra is a framework for implementing the <a target="_blank" href="http://martinfowler.com/eaaDev/WindowDriver.html">Window Driver</a> pattern in
order to keep acceptance tests maintainable.</p>
<h2 id="heading-setup">Setup</h2>
<p>Kookaburra itself abstracts some common patterns for implementing the Window
Driver pattern for tests of Ruby web applications built on <a target="_blank" href="http://rack.rubyforge.org/">Rack</a>. You will need
to tell Kookaburra which classes contain the specific Domain Driver
implementations for your application as well as which driver to use for running
the tests (currently only tested with <a target="_blank" href="https://github.com/jnicklas/capybara">Capybara</a>). The details of setting up your
Domain Driver layer are discussed below, but in general you will need the
following in a location such as <code>lib/my_application/kookaburra.rb</code> (replace
<code>MyApplication</code> with a module name suitable to your actual application):</p>
<pre><code class="lang-ruby"><span class="hljs-class"><span class="hljs-keyword">module</span> <span class="hljs-title">MyApplication</span></span>
  <span class="hljs-class"><span class="hljs-keyword">module</span> <span class="hljs-title">Kookaburra</span></span>
    <span class="hljs-symbol">:</span><span class="hljs-symbol">:Kookaburra</span>.adapter = Capybara

    <span class="hljs-comment"># Note: the following assigned classes are defined under your</span>
    <span class="hljs-comment"># application's namespace, e.g. MyApplication::Kookaburra::APIDriver</span>
    <span class="hljs-symbol">:</span><span class="hljs-symbol">:Kookaburra</span>.api_driver = APIDriver
    <span class="hljs-symbol">:</span><span class="hljs-symbol">:Kookaburra</span>.given_driver = GivenDriver
    <span class="hljs-symbol">:</span><span class="hljs-symbol">:Kookaburra</span>.ui_driver = UIDriver

    <span class="hljs-symbol">:</span><span class="hljs-symbol">:Kookaburra</span>.test_data_setup <span class="hljs-keyword">do</span>
      provide_collection <span class="hljs-symbol">:accounts</span>
      <span class="hljs-comment"># See section on Test Data for more examples of what can go here.</span>
    <span class="hljs-keyword">end</span>
  <span class="hljs-keyword">end</span>
<span class="hljs-keyword">end</span>
</code></pre>
<h3 id="heading-rspec">RSpec</h3>
<p>For <a target="_blank" href="http://rspec.info">RSpec</a> integration tests, just add the following file to your project:</p>
<pre><code class="lang-ruby"># spec/support/kookaburra_setup.rb
require 'my_application/kookaburra'

RSpec.configure do |c|
  c.include(Kookaburra, :type =&gt; :request)
end
</code></pre>
<h3 id="heading-cucumber">Cucumber</h3>
<p>For Cucumber, add the following file to your project:</p>
<pre><code class="lang-ruby"># features/support/kookaburra_setup.rb
require 'my_application/kookaburra'

Kookaburra.adapter = Capybara
World(Kookaburra)

Before do
  # Ensure that there isn't state-leakage between scenarios
  kookaburra_reset!
end
</code></pre>
<p>This will make the #api, #given, and #ui methods available in your
Cucumber step definitions.</p>
<h2 id="heading-defining-your-testing-dsl">Defining Your Testing DSL</h2>
<p>Kookaburra attempts to extract some common patterns that make it easier to use
the Window Driver pattern along with various Ruby testing frameworks, but you
still need to define your own testing DSL. An acceptance testing stack using
Kookaburra has the following four layers:</p>
<ol>
<li>The <strong>Business Specification Language</strong> (Cucumber scenarios and step definitions)</li>
<li>The <strong>Domain Driver</strong> (Kookaburra::GivenDriver and Kookaburra::UIDriver)</li>
<li>The <strong>Window Driver</strong> (Kookaburra::UIDriver::UIComponent)</li>
<li>The <strong>Application Driver</strong> (Capybara and Rack::Test)</li>
</ol>
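<p>As a rough sketch of how these layers hang together (hypothetical classes for illustration only, not Kookaburra's actual implementation), a Domain Driver method delegates to Window Driver objects, so step definitions never touch selectors or URLs:</p>

```ruby
# Layering sketch: a Domain Driver (UIDriver) exposes business-level actions
# and hands back Window Driver objects (here, OrderSummary) for assertions.
class OrderSummary
  def initialize(visible)
    @visible = visible
  end

  def visible?
    @visible
  end
end

class UIDriver
  def initialize
    @checked_out = false
  end

  # Business-level action; a real implementation would drive the browser.
  def choose_to_check_out
    @checked_out = true
  end

  # Window Driver object representing the order summary component.
  def order_summary
    OrderSummary.new(@checked_out)
  end
end

ui = UIDriver.new
ui.choose_to_check_out
puts ui.order_summary.visible? # => true
```

<p>A Cucumber step like "When I choose to check out" then collapses to a one-line call on <code>ui</code>, keeping the step definitions free of interface detail.</p>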
<h3 id="heading-the-business-specification-language">The Business Specification Language</h3>
<p>The business specification language consists of the highest-level descriptions
of a feature that are suitable for sharing with the non/less-technical
stakeholders on a project.</p>
<p>Gherkin is the external DSL used by Cucumber for this purpose, and you might
have the following scenario defined for an e-commerce application:</p>
<pre><code class="lang-gherkin"># purchase_items_in_cart.feature
Feature: Purchase Items in Cart

  Scenario: Using Existing Billing and Shipping Information

    Given I have an existing account
    And I have previously specified default payment options
    And I have previously specified default shipping options
    And I have an item in my shopping cart

    When I sign in to my account
    And I choose to check out

    Then I see my order summary
    And I see that my default payment options will be used
    And I see that my default shipping options will be used
</code></pre>
<p>Note that the scenario is focused on business concepts versus interface details,
i.e. you "choose to check out" rather than "click on the checkout button". If
for some reason your e-commerce system was going to be a terminal application
rather than a web application, you would not need to change this scenario at
all, because the actual business concepts described would not change.</p>
<h3 id="heading-the-domain-driver">The Domain Driver</h3>
<p>The Domain Driver layer is where you build up an internal DSL that describes the
business concepts of your application at a fairly high level. It consists of
three top-level drivers: the <code>APIDriver</code> (available via <code>#api</code>) for interacting
with your application's external API, the <code>GivenDriver</code> (available via <code>#given</code>)
which really just wraps the <code>APIDriver</code> and is used to set up state for your
tests, and the <code>UIDriver</code> (available via <code>#ui</code>) for describing the tasks that a
user can accomplish with the application.</p>
<p>Given the Cucumber scenario above, the step definitions call into the Domain
Driver layer to interact with your application:</p>
<pre><code class="lang-ruby"># step_definitions/various_steps.rb
Given "I have an existing account" do
  given.existing_account(:my_account)
end

Given "I have previously specified default payment options" do
  given.default_payment_options_specified_for(:my_account)
end

Given "I have previously specified default shipping options" do
  given.default_shipping_options_specified_for(:my_account)
end

Given "I have an item in my shopping cart" do
  given.an_item_in_my_shopping_cart(:my_account)
end

When "I sign in to my account" do
  ui.sign_in(:my_account)
end

When "I choose to check out" do
  ui.choose_to_check_out
end

Then "I see my order summary" do
  ui.order_summary.should be_visible
end

Then "I see that my default payment options will be used" do
  ui.order_summary.payment_options.should be_account_default_options
end

Then "I see that my default shipping options will be used" do
  ui.order_summary.shipping_options.should be_account_default_options
end
</code></pre>
<p>The step definitions contain neither explicitly shared state (instance
variables) nor any logic branches; they are simply wrappers around calls into
the Domain Driver layer. There are a couple of advantages to this approach.
First, because step definitions are so simple, it isn't necessary to force <em>Very
Specific Wording</em> on the business analyst/product owner who is writing the
specs. For instance, if she writes "I see a summary of my order" in another
scenario, it's not a big deal to have the following in your step definitions (as
long as the author of the spec confirms that they really mean the same thing):</p>
<pre><code class="lang-ruby">    Then <span class="hljs-string">"I see my order summary"</span> <span class="hljs-keyword">do</span>
      ui.order_summary.should be_visible
    <span class="hljs-keyword">end</span>

    Then <span class="hljs-string">"I see a summary of my order"</span> <span class="hljs-keyword">do</span>
      ui.order_summary.should be_visible
    <span class="hljs-keyword">end</span>
</code></pre>
<p>The step definitions are nothing more than a natural language reference to an
action in the Domain Driver; there is no overwhelming maintenance cost to the
slight duplication, and it opens up the capacity for more readable Gherkin
specs. The fewer false road blocks you put between your product owner and a
written specification, the easier it becomes to ensure her participation in this
process.</p>
<p>The second advantage is that by pushing all of the complexity down into the
Domain Driver, it's now trivial to reuse the exact same code in
developer-centric integration tests. This ensures you have parity between the
way the automated acceptance tests run and any additional testing that the
development team needs to add in. You could write the same test using just
RSpec as follows:</p>
<pre><code class="lang-ruby"># spec/integration/purchase_items_in_cart_spec.rb
describe "Purchase Items in Cart" do
  example "Using Existing Billing and Shipping Information" do
    given.existing_account(:my_account)
    given.default_payment_options_specified_for(:my_account)
    given.default_shipping_options_specified_for(:my_account)
    given.an_item_in_my_shopping_cart(:my_account)

    ui.sign_in(:my_account)
    ui.choose_to_check_out

    ui.order_summary.should be_visible
    ui.order_summary.payment_options.should be_account_default_options
    ui.order_summary.shipping_options.should be_account_default_options
  end
end
</code></pre>
<pre><code>
Whether in Cucumber step definitions or developer integration tests, you will
usually interact only with the GivenDriver and the UIDriver.

#### TestData ####

`Kookaburra::TestData` is the component via which the `GivenDriver` and the
`UIDriver` share information. For instance, if you create a user account via the
`GivenDriver`, you would store the login credentials for that account in the
`TestData` instance, so the UIDriver knows what to use when you tell it to
`#sign_in`. This is what allows the Cucumber step definitions to remain free
from explicitly shared state.

The <span class="hljs-string">`TestData`</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">can</span> <span class="hljs-title">be</span> <span class="hljs-title">configured</span> <span class="hljs-title">to</span> <span class="hljs-title">contain</span> <span class="hljs-title">both</span> <span class="hljs-title">collections</span> <span class="hljs-title">of</span> <span class="hljs-title">test</span> <span class="hljs-title">data</span>
<span class="hljs-title">as</span> <span class="hljs-title">well</span> <span class="hljs-title">as</span> <span class="hljs-title">default</span> <span class="hljs-title">data</span> <span class="hljs-title">that</span> <span class="hljs-title">can</span> <span class="hljs-title">be</span> <span class="hljs-title">used</span> <span class="hljs-title">as</span> <span class="hljs-title">a</span> <span class="hljs-title">starting</span> <span class="hljs-title">point</span> <span class="hljs-title">for</span> <span class="hljs-title">creating</span> <span class="hljs-title">new</span>
<span class="hljs-title">resources</span> <span class="hljs-title">in</span> <span class="hljs-title">the</span> <span class="hljs-title">application</span>. <span class="hljs-title">To</span> <span class="hljs-title">configure</span> `<span class="hljs-title">TestData</span>`, <span class="hljs-title">call</span>
`<span class="hljs-title">Kookaburra</span>.<span class="hljs-title">test_data_setup</span>` <span class="hljs-title">with</span> <span class="hljs-title">a</span> <span class="hljs-title">block</span> (<span class="hljs-title">usually</span> <span class="hljs-title">in</span> <span class="hljs-title">your</span>
`<span class="hljs-title">lib</span>/<span class="hljs-title">my_application</span>/<span class="hljs-title">kookaburra</span>.<span class="hljs-title">rb</span>` <span class="hljs-title">file</span>):

``` <span class="hljs-title">ruby</span> <span class="hljs-title">lib</span>/<span class="hljs-title">my_application</span>/<span class="hljs-title">kookaburra</span>.<span class="hljs-title">rb</span>
<span class="hljs-title">module</span> <span class="hljs-title">MyApplication</span>
  <span class="hljs-title">module</span> <span class="hljs-title">Kookaburra</span>
    # ...
    ::<span class="hljs-title">Kookaburra</span>.<span class="hljs-title">test_data_setup</span> <span class="hljs-title">do</span>
      <span class="hljs-title">provide_collection</span> :<span class="hljs-title">animals</span>
      <span class="hljs-title">set_default</span> :<span class="hljs-title">animal</span>,
        :<span class="hljs-title">name</span> </span>=&gt; <span class="hljs-string">'horse'</span>
        :<span class="hljs-function"><span class="hljs-params">size</span> =&gt;</span> <span class="hljs-string">'large'</span>,
        :<span class="hljs-function"><span class="hljs-params">number_of_legs</span> =&gt;</span> <span class="hljs-number">4</span>
    end
  end
end
</code></pre><p>Then, in any context where you have an instance of <code>TestData</code> (such as in
<code>GivenDriver</code> or <code>UIDriver</code>), you can add/retrieve items to/from collections and
access default data:</p>
<pre><code># lib/my_application/kookaburra/given_driver.rb
class MyApplication::Kookaburra::GivenDriver &lt; Kookaburra::GivenDriver
  def existing_account(nickname)
    default_account_data = test_data.default(:account)

    # do something to create account in application
    # ...

    # make the details of the new account available to the rest of the test
    test_data.accounts[nickname] = account
  end
end
</code></pre>
<pre><code># lib/my_application/kookaburra/ui_driver.rb
class MyApplication::Kookaburra::UIDriver &lt; Kookaburra::UIDriver
  def sign_in(account_nickname)
    # pull stored account details from TestData
    account_info = test_data.accounts[account_nickname]

    # do something to log in using that account_info
  end
end
</code></pre><h4 id="heading-apidriver">APIDriver</h4>
<p>The <code>Kookaburra::APIDriver</code> is used to interact with an application's external
web services API. You tell Kookaburra about your API by creating a subclass of
<code>Kookaburra::APIDriver</code> for your application:</p>
<pre><code># lib/my_application/kookaburra/api_driver.rb
class MyApplication::Kookaburra::APIDriver &lt; Kookaburra::APIDriver
  def create_account(account_data)
    post_as_json 'Account', 'api/v1/accounts', :account =&gt; account_data
    hash_from_response_json[:account]
  end
end
</code></pre>
<h4 id="heading-givendriver">GivenDriver</h4>
<p>The <code>Kookaburra::GivenDriver</code> is used to create a particular "preexisting"
state within your application's data and ensure you have a handle to that data
(when needed) prior to interacting with the UI. Like the <code>APIDriver</code>, you will
create a subclass of <code>Kookaburra::GivenDriver</code> in which you will create part of
the Domain Driver DSL for your application:</p>
<pre><code># lib/my_application/kookaburra/given_driver.rb
class MyApplication::Kookaburra::GivenDriver &lt; Kookaburra::GivenDriver
  def existing_account(nickname)
    # grab the default account details and add a unique username and
    # password
    account_data = test_data.default(:account)
    account_data[:username] = "test-user-#{`uuidgen`.strip}"
    account_data[:password] = account_data[:username] + "-password"

    # use the API to create the account in the application
    account_details = api.create_account(account_data)

    # merge in the password (since the API doesn't return it) and store
    # the details in the TestData instance
    account_details.merge!(:password =&gt; account_data[:password])
    test_data.accounts[nickname] = account_details
  end
end
</code></pre><h4 id="heading-uidriver">UIDriver</h4>
<p><code>Kookaburra::UIDriver</code> provides the necessary tools for driving your
application's user interface using the Window Driver pattern. You will subclass
<code>Kookaburra::UIDriver</code> for your application and implement your testing DSL
within your subclass:</p>
<pre><code># lib/my_application/kookaburra/ui_driver.rb
class MyApplication::Kookaburra::UIDriver &lt; Kookaburra::UIDriver
  # makes an instance of MyApplication::Kookaburra::UIDriver::SignInScreen
  # available via the instance method #sign_in_screen
  ui_component :sign_in_screen

  def sign_in(account_nickname)
    account = test_data.accounts[account_nickname]
    navigate_to :sign_in_screen
    sign_in_screen.submit_login(account[:username], account[:password])
  end
end
</code></pre>
<h3 id="heading-the-window-driver-layer">The Window Driver Layer</h3>
<p>While your <code>GivenDriver</code> and <code>UIDriver</code> provide a DSL that represents actions
your users can perform in your application, the Window Driver layer describes
the individual user interface components that the user interacts with to perform
these tasks. By describing each interface component using an OOP approach, it is
much easier to maintain your acceptance/integration tests, because the
implementation details of each component are captured in a single place. If/when
that implementation changes, you can, for example, fix every single test that
needs to log a user into the system just by updating the <code>SignInScreen</code> class.</p>
<p>You describe the various user interface components by sub-classing
<code>Kookaburra::UIDriver::UIComponent</code>:</p>
<pre><code># lib/my_application/ui_driver/sign_in_screen.rb
class MyApplication::Kookaburra::UIDriver::SignInScreen &lt; Kookaburra::UIDriver::UIComponent
  component_locator '#new_user_session'
  component_path '/session/new'

  def username
    in_component { browser.find('#session_username').value }
  end

  def username=(new_value)
    fill_in('#session_username', :with =&gt; new_value)
  end

  def password
    in_component { browser.find('#session_password').value }
  end

  def password=(new_value)
    fill_in('#session_password', :with =&gt; new_value)
  end

  def submit!
    click_on('Sign In')
    no_500_error!
  end

  def submit_login(username, password)
    self.username = username
    self.password = password
    submit!
  end
end
</code></pre>]]></content:encoded></item><item><title><![CDATA[Production Release Workflow with Git]]></title><description><![CDATA[After growing the ProjectDX team from three to
eight software developers, our release process was a complete pain, and it
typically took two to three hours to get a good build on the production branch
(and even then some insidious issues would sneak ...]]></description><link>https://johnwilger.com/production-release-workflow-with-git</link><guid isPermaLink="true">https://johnwilger.com/production-release-workflow-with-git</guid><dc:creator><![CDATA[John Wilger]]></dc:creator><pubDate>Sat, 08 Jan 2011 20:54:00 GMT</pubDate><content:encoded><![CDATA[<p>After growing the <a target="_blank" href="http://www.projectdx.com">ProjectDX</a> team from three to
eight software developers, our release process was a complete pain, and it
typically took two to three hours to get a good build on the production branch
(and even then some insidious issues would sneak through). By making a few
changes to our development and acceptance process, we were able to turn it
into a five-minute, low-stress job.</p>

<p>A couple of years ago, our team was much smaller. We had around three
full-time developers working on the code, and we would all commit our
work-in-progress to the master branch in our <a target="_blank" href="http://git-scm.com/">git</a>
repository and push those commits out when we had a set that made sense to
share with the other developers. With only a few developers, we would usually
be working on one feature at a time, and once a small number of features were
accepted and ready, we could go ahead and release them to production. This
worked pretty well at the time, and when we <em>did</em> have a merge conflict, we
were all familiar enough with what was being worked on to be able to work it
out reasonably.</p>
<p>In 2010, the ProjectDX team was acquired by <a target="_blank" href="http://www.renewfund.com">Renewable
Funding</a>. Among other things, this gave us both the
resources and the need to grow the team beyond just a few developers. I took
over management of the software development team, and within several months
had hired three new developers and kept two to three contract developers on the team
at any given time. It's no big surprise that increasing the size of a team by
a factor of three in a six month period is going to have some challenges, and
one of those was how the flow of work through the system needed to
change.</p>
<p>At first, we didn't change anything about that process (I prefer not to change
things before I have any idea what the real problems are going to be).
Developers still committed work to the master branch, and that branch was
deployed to our internal alpha server by our automated CI system if and when
all of the tests passed. When the developers working on a feature felt that it
was complete, they put it up for acceptance, and the product owner verified
the feature on alpha before either accepting or rejecting the work. However,
our increased capacity meant that more individual features were worked on at
any given time, and this parallel development ran up against some shortcomings
in our process.</p>
<p>Our release cycle is once every two weeks. We plan our work into two-week
sprints, and a feature isn't considered "done" until it is accepted by the
product owner <em>and</em> actually released to production. The problem we ran into
was that—on occasion—a feature planned for a sprint would not be accepted in
time for that sprint's production deployment. This caused problems putting
together the production release. For example, let's say the team worked on
Feature A and Feature B during the sprint. Feature A gets accepted and is
ready to release, but there are some issues with Feature B that mean we are
not going to put it in production this time around. Since we worked on both
features at the same time, the commit log on the master branch contained a mix
of commits for both features and looked like:</p>
<ul>
<li>A</li>
<li>A</li>
<li>B</li>
<li>A</li>
<li>B</li>
<li>A</li>
<li>A</li>
<li>B</li>
<li>B</li>
</ul>
<p>When it came time to put together the release, we only wanted to take the
commits for the accepted feature and merge them into the production branch. This
meant that someone would have to go through the commit logs, find the relevant
commits, and cherry-pick them into production one at a time. First problem:
that's a mind-numbing chore for whoever is tasked with it.</p>
<p>Now imagine that—even if there are no inherent dependencies between Feature A
and Feature B—there are changes made in one of the early commits for Feature B
to a commonly used area of the code. Then, in the further work on Feature A,
the code is written in such a way that depends on this new behavior added by
Feature B's commits. Not a problem if both features get approved for the
release, but it does become an issue when only Feature A makes it to
production. At that point, the person tasked with putting together the
production branch ends up with failing tests and—possibly—a merge conflict.
It's especially fun when that person didn't actually work on the changes
involved.</p>
<p>That's bad enough; but it gets worse. It's not as if we stop working on
Feature B just because it doesn't get accepted for this release. The people
working on it continue to do so, and the rest of the team starts work on
Feature C. By the end of the sprint, we have a commit log on the master branch
that looks something like:</p>
<ul>
<li>A</li>
<li>A</li>
<li>B</li>
<li>A</li>
<li>B</li>
<li>A</li>
<li>A</li>
<li>B</li>
<li>B</li>
<li>C</li>
<li>B</li>
<li>C</li>
<li>B</li>
<li>B</li>
<li>C</li>
<li>C</li>
</ul>
<p>Luckily, both B and C get accepted this sprint, so we can cherry-pick both of
these into production, and everything should be fine, right?</p>
<p>Well, the production branch now looks like this:</p>
<ul>
<li>A</li>
<li>A</li>
<li>A</li>
<li>A</li>
<li>A</li>
<li>B</li>
<li>B</li>
<li>B</li>
<li>B</li>
<li>C</li>
<li>B</li>
<li>C</li>
<li>B</li>
<li>B</li>
<li>C</li>
<li>C</li>
</ul>
<p>Notice an issue there? The production branch contains all of the same commits
as the master branch, but now they are applied in a different order. In
practice, this led to some <em>very interesting</em> merge conflicts. In
some cases, it didn't lead to conflicts that git was able to detect, but it
caused some defects to appear in production that were not present during our
internal acceptance process (performed against the master branch).</p>
<p>After a couple of "releases from hell", it was apparent to the whole team
that this was a problem that we needed to solve. We put our heads together and
came up with a solution that has been working great for us in the several
months since adopting it.</p>
<p>The first change is that we no longer have the whole team committing
work-in-progress to master. Whereas we used to have a master branch that
contained the main line of development and a production branch that only
contained work that was in (or ready to go to) production, we decided to turn
that on its head a bit. We made the decision that the master branch wasn't
really all that important, and that the production branch was the most
critical part of the system. Regardless of what undeployed work might be in
commits on the master branch, the production branch is what is "real" and
"permanent". If you want to keep feature development independent from work on
other features, the production branch is the one thing you can rely on not
changing prior to the next production release.</p>
<p>Now, that's all well and good, but I'd have to cut a corner off my agile
practitioner's card if I suggested that we based each feature off of the
production branch and didn't integrate them until the end of the sprint.
On the other hand, we don't want to change the production branch until we're
ready for a release (in case we need to push out an emergency, mid-cycle
release to fix a critical defect.)</p>
<p>The challenge was to come up with a workflow that would protect the "actually
in production" nature of the production branch while making sure we didn't
fall into the trap of "big bang integration" at the end of the sprint.
In order to meet these goals, at the beginning of every sprint we
create a new branch based off of the production branch called sprint-N (where
N is the number of that sprint). Each feature that is being worked on during
that sprint also gets its own branch, and the feature branch is based off of the
sprint branch.</p>
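<p>As a concrete sketch (in a throwaway repository, with invented branch names;
the article doesn't prescribe exact commands), the start of a sprint looks
something like this:</p>

```shell
# scratch repository standing in for the real project
git init -q /tmp/sprint-demo
cd /tmp/sprint-demo
git config user.email "dev@example.com"
git config user.name "Example Dev"
git commit -q --allow-empty -m "production baseline"
git branch -M production

# start of the sprint: cut the sprint branch directly from production
git checkout -q -b sprint-12 production

# each feature gets its own branch based off the sprint branch
git checkout -q -b feature-a sprint-12
git checkout -q -b feature-b sprint-12
```

<p>Until a feature is accepted, nothing moves: the sprint branch stays identical
to production, so every feature branch starts from code that is known to be in
(or ready for) production.</p>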
<p>When a feature is ready for acceptance, the developer in charge of the feature
will deploy the code from the feature branch to the alpha environment for the
product owner to review. The product owner is able to review that feature in
isolation from all of the other work in progress. This gives him confidence
that—even if this were the only thing we got done this sprint—it would behave
the same way in production.</p>
<p>If the product owner rejects the feature, the developers go back to working on
it, and no other features are directly impacted. If the product owner accepts
the feature, then and only then does the feature branch get merged into the
sprint branch (we use <code>git merge --squash</code> so that the changes for the feature
show up as a single commit in the sprint branch). After the changes are merged
in, the rest of the team is notified that the sprint branch has been updated,
and all other in-progress feature branches are rebased onto the new HEAD of
the sprint branch. (For those not familiar with git, rebasing—in layman's
terms—means rewriting history such that all of the commits made only to the
feature branch will be applied <em>after</em> any changes that appear on the sprint
branch, regardless of the actual chronological order of those commits.)</p>
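<p>Sketched as git commands (again in a throwaway repository, with invented
branch and file names), the acceptance step looks roughly like this:</p>

```shell
# scratch repository: one sprint branch, two features in progress
git init -q /tmp/accept-demo
cd /tmp/accept-demo
git config user.email "dev@example.com"
git config user.name "Example Dev"
git commit -q --allow-empty -m "production baseline"
git branch -M production
git checkout -q -b sprint-12 production

git checkout -q -b feature-a sprint-12
echo "first slice" | tee a.txt
git add a.txt
git commit -q -m "A: first slice"

git checkout -q -b feature-b sprint-12
echo "work in progress" | tee b.txt
git add b.txt
git commit -q -m "B: work in progress"

# Feature A is accepted: squash all of its commits into a single
# commit on the sprint branch
git checkout -q sprint-12
git merge --squash feature-a
git commit -q -m "Feature A"

# remaining in-progress branches rebase onto the new sprint HEAD
git checkout -q feature-b
git rebase sprint-12
```

<p>After the rebase, feature-b's history is the accepted "Feature A" commit plus
its own work-in-progress commits, which is exactly what the sprint branch will
contain if and when feature-b is accepted.</p>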
<p>With this system, we make sure that no feature has any unintentional
dependencies on work that may not be accepted, because the sprint branch
<em>only</em> contains code that is actually in production plus already accepted work
that will go out in the next release. At the same time, because changes are
merged into the sprint branch as soon as they are accepted, we are able to
test integration at the earliest practical point in time. When the developers
working on other features rebase their branches onto the new sprint branch,
they deal with any actual or semantic merge issues at that time. That way, the
people most familiar with the changes are able to resolve the conflicts while
those changes are still fresh in their minds.</p>
<p>We use TeamCity to run our automated, continuous integration. TeamCity runs
tests whenever it detects changes to either the sprint branch or the
production branch. Additionally, we can easily clone the build configuration
and have it run against the branch for a particular feature. We don't do so
for every feature, but we use that capability for larger features that need to
integrate the work of multiple pair-programming teams.</p>
<p>When the end of the sprint arrives, updating the production branch is as
simple as <code>git checkout production &amp;&amp; git merge sprint-N</code>. Since the sprint
branch was created directly from the production branch and only contains
accepted work, this is always a simple, fast-forward merge. We tag the
production branch with the release number and deploy that to our staging
environment for a final once-over before delivering to production.</p>
]]></content:encoded></item><item><title><![CDATA[What it Really Means to be "Agile"]]></title><description><![CDATA[Yesterday, Elizabeth Hendrickson posted her Agile Acid
Test in which she asks
three questions to determine whether or not a team is truly "agile". There is
also the Agile Manifesto which describes the
values that an agile team should adhere to. While...]]></description><link>https://johnwilger.com/what-it-really-means-to-be-agile</link><guid isPermaLink="true">https://johnwilger.com/what-it-really-means-to-be-agile</guid><dc:creator><![CDATA[John Wilger]]></dc:creator><pubDate>Wed, 15 Dec 2010 10:00:00 GMT</pubDate><content:encoded><![CDATA[<p>Yesterday, Elizabeth Hendrickson posted her <a target="_blank" href="http://testobsessed.com/2010/12/14/the-agile-acid-test/">Agile Acid
Test</a> in which she asks
three questions to determine whether or not a team is truly "agile". There is
also the <a target="_blank" href="http://agilemanifesto.org/">Agile Manifesto</a> which describes the
values that an agile team should adhere to. While there is nothing that I
disagree with in either Elizabeth's post or the Agile Manifesto, there
are two simpler questions you can ask that get to the heart of whether or
not your team is agile: <em>Can you react immediately and without panic when
external constraints on your project are changed</em>, and <em>does your team regularly
and frequently review its processes to ensure the answer to the previous
question is always "yes"?</em></p>

<p>If you can answer yes to both questions, then your team is agile. Whether you
use XP, Scrum, Lean, Waterfall or any other process or mix of processes doesn't
matter at all for the purpose of determining agility; these processes may help
you deliver software that meets stakeholder needs in a timely fashion (or they
may not), but no single one of them is "the right way".</p>
<p>Note that I don't ask about delivered value or sustainable pace. It's not my
intent to discount the importance of either of these things <em>in general</em>; I do
not, however, see them as a strict requirement of agility, per se. Granted, if a
team isn't able to do both of these things, that team probably will not be
successful in the long run, but if the team asks itself the two questions I
propose, then the other things are likely to fall into place naturally. (When I
say team, I am talking about the whole organization—it is critical that the team
is empowered to make changes that can affect its answers to these questions, so
the management needs to be part of the process.) Both the Agile Manifesto and
Elizabeth's questions address qualities that are typically evident in truly
agile teams, but the essence is that a team is able to <em>react quickly</em> to whatever
is thrown at it and is able to <em>react differently</em> to changing situations.</p>
<p>Most software development methodologies start out as experiments that worked for
one team, got introduced to others and managed to gain successes and proponents.
Then somebody puts a label on it and writes down the definition of what it means
to do, say, Scrum. Now we have a neat list of things that a consultant can point
to and say "yes, you are doing Scrum" or "no, that's not Scrum". This recipe for
Scrum is now ready to ship off to other development organizations, and if only
they perform the recipe just the way it's written down, everything will be fine.</p>
<p>Right...</p>
<p>A tangent: My wife and I both love to cook. One of my favorite celebrity chefs
is Alton Brown—he is, in fact, singularly responsible for my move from viewing
cooking as "something you had to do occasionally if you wanted to eat" to being
something I actually enjoy doing. The reason for this is simple: he doesn't just
give you a recipe, he gives you the tools you need to reflect on what you've
done and understand why a dish did or did not turn out the way you expected.
That way you know what to do differently next time as opposed to assuming you
can't make that particular recipe work and giving up on it.</p>
<p>The promotion of various "agile" methodologies as connect-the-dots exercises is
the biggest failing of the agile software development movement. A methodology
gains enough popularity that an organization's management (or often the
developers themselves) decide to adopt that methodology in order to take
advantage of the benefits of "agile". They follow the process to the letter for
one or two releases, but they don't meet the goals of the organization, write
off "agile" as something that won't work for them and go back to old patterns
that are perceived to work at least a little bit better for them. Meanwhile, the
consultants and trainers have moved on to greener pastures with the occasional
tweet about "#notscrum".</p>
<p>I much prefer the approach of those like Diana Larsen of <a target="_blank" href="http://futureworksconsulting.com/">FutureWorks
Consulting</a> and co-author of the excellent
book <a target="_blank" href="http://pragprog.com/titles/dlret/agile-retrospectives">Agile
Retrospectives</a> who—while
promoting agile methodologies—stresses the importance of an organization
reflecting on what it has achieved, what it has failed at, and in both cases
<em>how and why</em> it has done so. She will then challenge that organization to find
the next steps it can take to get closer to meeting its goals.</p>
<p>With such an approach, a team that is new to agile software development may
start out with a by-the-book Scrum implementation, but after several iterations
of delivery, retrospective and process modification, that team may or may not
have a process that still resembles Scrum. As long as that team is able to
answer yes to my two questions, they may be "#notscrum", but they can hardly be
called "#notagile".</p>
]]></content:encoded></item></channel></rss>