Working on a site built in .NET, using Cucumber to test user stories, and Watir to script the browser (Chrome, via Selenium's WebDriver). Here is an example of how I have been using it.
Thoughts on the process: I was initially pretty excited about this. There was a lot of quick initial progress getting everything working right, but it slowed after that. Using Firefox required me to kill Firefox and restart it from the terminal with special flags every time I wanted to run the tests. Since that is my primary browser, and I leave about 50 tabs open at a time, it was beginning to interrupt the rest of my life.
So I switched to Chrome, but there were some serious bugs (not sure of the source; I suspect WebDriver): Chrome would frequently lock up and get into these "funks" where it was very slow to respond, and I would have to kill all the Chrome processes and then let the machine sit for a while without doing anything (I suspect it was a heat-related issue).
In these situations, or when my connection was poor, huge numbers of scenarios would begin to fail, because they would check something before the browser finished updating. The results were nondeterministic: a scenario could fail when it should have passed. And, in fact, it did this more often than not.
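The usual fix for check-before-update failures is to poll for the expected state instead of asserting against the page immediately. Here is a minimal sketch of that polling idea as a plain Ruby helper; the `wait_until` name is mine, not from my test suite (watir-webdriver also ships its own waiting helpers such as `Watir::Wait.until`):

```ruby
# Hypothetical polling helper: retry a block until it returns truthy,
# or raise once the timeout expires. This replaces "check it right now"
# with "check it until the browser has had time to catch up".
def wait_until(timeout: 10, interval: 0.5)
  deadline = Time.now + timeout
  loop do
    return true if yield
    raise "condition not met within #{timeout}s" if Time.now > deadline
    sleep interval
  end
end

# Usage sketch (element names hypothetical):
#   wait_until { browser.div(id: 'results').present? }
#   # ...only then assert on the contents of the results div.
```

The trade-off is that a genuinely failing scenario now takes the full timeout to report, but passing scenarios stop failing just because the page was a half-second behind.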
Naming conventions in the HTML were inconsistent; for example, the same tab might have different ids on different pages. Or the data was displayed in a table row and I wanted to click a button based on one of the elements in that row, so I had to query all the rows and experimentally determine which one I wanted by inspecting the data each contained (there were tables nested within tables, so I couldn't just say "give me the row containing the thing I want").
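The row hunt boils down to a search by content. A hedged sketch, with the Watir row objects abstracted away: each row is reduced to an array of the text in its own cells (nested-table text already stripped out), and we look for the first row that contains the target value.

```ruby
# Hypothetical helper: rows is an array of arrays of cell strings
# (one inner array per table row, nested tables excluded).
# Returns the index of the first row containing target, or nil.
def row_index_containing(rows, target)
  rows.index { |cells| cells.include?(target) }
end

# Against the real page this index would pick out the Watir row whose
# button I actually wanted to click (identifiers hypothetical):
#   rows[row_index_containing(cell_texts, 'Widget')].button(value: 'Edit').click
```

With stable ids on the rows none of this would be necessary; the search is a workaround for markup you cannot address directly.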
Because it is a .NET site, I can't just put it into some known configuration for my test suite; I literally have to use it like a user. This means all the scenarios are dependent on the scenarios before them, since each scenario affects state. Often an early scenario would fail, causing a cascading chain such that almost all the later ones would fail too.
That means I can't work on just the one piece I want to test; I have to run the entire suite each time. If the thing I am working on is at the end of the suite, that is potentially a minute just to see whether the most recent change worked. Between the experimental nature of the scripting, the lack of orthogonality between scenarios, and the frequency of failures, it was taking hours to get each additional scenario to pass.
Even after getting a scenario to pass, the whole thing was very fragile: if they changed anything, even some layout detail, it could break the tests.
Ideally, you use this to drive development, a practice known as behaviour-driven development. But the process was just too cumbersome. Because I am in a different place and they can't run my user stories, they have to push their changes to GitHub, then I merge them and put them on the server, then run the test suite, see that it is not quite right, and have them do it again. The feedback loop is too drawn out and painful.
In conclusion, I had a lot of fun with most of this stuff, but the combination of all these different things made it an unviable testing solution. I would definitely try Cucumber with Webrat for a Rack-based site, though; I expect that would resolve all the factors contributing to the issues I had. But for this site, I'm going to do the rest of the testing by hand.
Every good project needs a good setup. In this episode, I set up a GitHub repo, create a new Rails application, hook in Cucumber and RSpec, write a Cucumber feature, and write the code to make it pass.
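A Cucumber feature of the kind the episode writes is a plain-text Gherkin file; a minimal sketch (the feature, page, and expected text here are hypothetical, not from the episode):

```gherkin
Feature: Greeting
  In order to feel welcome
  As a visitor
  I want to see a greeting on the home page

  Scenario: Visiting the home page
    Given I am on the home page
    Then I should see "Hello, world"
```

Each `Given`/`Then` line is then matched to a Ruby step definition, and making those steps pass is what drives the application code.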