
Tricky testing

In theory, testing is pretty easy: prepare some data, perform some operations on it, check the result. In practice, that description often doesn't paint the full picture. For instance, how do you test:

  • Deploy scripts, like a bunch of tasks you've built on top of Capistrano?
  • Networking code: sockets, asynchronous streams?
  • Inter-process communication?
  • GUIs?
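
For contrast, the easy case from the opening paragraph fits in a single file. Here's a minimal RSpec sketch of that prepare/act/check shape; the ShoppingCart class is made up purely for illustration:

```ruby
require "rspec/autorun"

# A made-up class, only there to give the test something to exercise.
class ShoppingCart
  def initialize
    @prices = []
  end

  def add(price)
    @prices << price
  end

  def total
    @prices.sum
  end
end

RSpec.describe ShoppingCart do
  it "sums the prices of the added items" do
    cart = ShoppingCart.new       # prepare some data
    cart.add(3)                   # perform some operations on it
    cart.add(4)

    expect(cart.total).to eq(7)   # check the result
  end
end
```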

I don't have easy testing solutions for problems like these. There are options, but I think we can agree there's a category of programming problems that can be tricky to test. And when faced with such problems, I think it's common to feel strongly inclined to avoid testing altogether.

So is it okay to skip tests in these cases? Or should we put effort into testing every little thing, even if it takes weeks to set up and ends up breaking randomly?

I'm going to give my thoughts on the matter, taking examples from Spork and from some of my own projects, like a Vimscript test runner and a tool that runs a rails command with music in the background. I'll also demonstrate how Vim plugins can be tested with RSpec. These projects need more work than just setting up data and performing method calls, but once you've built a good toolkit, the specs flow quite nicely. Whether you should invest the time and energy is a question I'll try to address.
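
To give a flavour of that, here's a rough sketch of driving Vim from RSpec with the vimrunner gem, which controls a real Vim process. The plugin layout and the :SortLines command are invented for the example, and the exact wiring is my assumption of how such a toolkit might look rather than the talk's actual code:

```ruby
require "tmpdir"
require "vimrunner"

RSpec.describe "the :SortLines command" do
  # Boot a single Vim instance for the whole group; a fresh Vim per example
  # works too, but is noticeably slower.
  before(:all) do
    @vim = Vimrunner.start
    # Assumes this spec file lives in spec/ inside the plugin's repository.
    @vim.add_plugin(File.expand_path("..", __dir__), "plugin/sort_lines.vim")
  end

  after(:all) { @vim.kill }

  it "sorts the lines of the current buffer" do
    Dir.mktmpdir do |dir|
      file = File.join(dir, "fruits.txt")
      File.write(file, "banana\ncherry\napple\n")

      @vim.edit file
      @vim.command "SortLines"   # hypothetical command defined by the plugin
      @vim.write

      expect(File.read(file)).to eq("apple\nbanana\ncherry\n")
    end
  end
end
```

Run it with the regular rspec command; since every example talks to a separate Vim process, reusing one instance across the group keeps the suite reasonably fast.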