There's one difficult question that keeps coming up in scrum teams, and it never seems to go away. How do you complete all the PBIs in a sprint without having devs idle while the last bit of testing is done?
Scrum doesn't have any concept of a tester being separate from a dev - there is no separate tester role. This is partly to keep the framework simple, but also partly because (like a lot of software development thinking from the early 21st century) it has roots in the open source movement, where there are no strictly defined roles (beyond, arguably, the PO). That's a topic for another day; for the moment I'm interested in inefficient sprints, bored devs and frustrated testers.
The problem looks like this:
At the beginning of the sprint, the testers have nothing to do. At the end of the sprint, the devs have nothing to do.
There are dozens of suggested solutions to this out there, and they mostly involve doing lots of little things to improve the situation. Testers can prepare test cases at the beginning of the sprint, and devs can pick up non-testable infrastructure work at the end. There are a couple of good posts dealing with this here:
That second one caught my eye: 'Manual testing in scrum is hard, but not impossible'. That's a massive alarm bell. Development processes shouldn't be hard; that just discourages people and stops development being fun. If a process is hard, people won't do it, and they certainly won't do it consistently. Eliminate obstacles, don't build a staircase over them.
One of the most common suggestions for addressing this problem is to make your PBIs smaller. While that's worthwhile anyway, and it can help a bit, it still doesn't fix the problem. Even a PBI that takes 3 hours to develop and an hour to test still leaves slack time at each end, and requires a lot of extra effort on the part of the team to break down & refine**.
There are some simple ways to reconcile manual testing with scrum without making it painful; I'm going to talk about two of them:
Simultaneous testing
In theory a PBI follows a linear path: write the code, commit, merge, test, done.
In practice it usually looks a lot more like a loop:
Code is written, committed, tested and bugs are found. Once that cycle is complete it's merged, and quite frequently more bugs are found (usually when it gets migrated into another environment).
Each of those transitions from 'write code' through to 'test' takes time. Tickets move around the Jira board, people look for something ready to work on, and very often a tester won't actually start work on a PBI until it's formally moved into a 'testing' column on the sprint backlog.

What might work better is to have the tester pick up in-progress work. We talk about 'commit early and often' - what's to stop a tester reading the commit messages, working out what can be tinkered with and starting to test it? Yes, it's not finished, but bugs can still be found, and if we're going to have that cycle back and forth between the dev and the tester anyway, why not do it without the overhead?
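To make that concrete, here's a minimal sketch of what 'reading the commit messages' could look like if you wanted to take the Jira board out of the loop. It assumes a local clone that's been fetched, and feature branches named after PBIs - the branch names and the helper function are hypothetical, not a prescription:

```python
import subprocess

def recent_commits(branch: str, n: int = 5) -> list[str]:
    """Return the last n commit subjects on a branch, e.g. 'a1b2c3d Fix null check'."""
    result = subprocess.run(
        ["git", "log", branch, f"-{n}", "--pretty=format:%h %s"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.splitlines()

# A tester scanning in-progress branches for something testable.
for branch in ["feature/PBI-123", "feature/PBI-124"]:  # hypothetical branch names
    print(f"--- {branch} ---")
    for line in recent_commits(f"origin/{branch}"):
        print(line)
```

The point isn't the script; it's that the signal for 'there's something to test' becomes the commit stream rather than a column on the board.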
This approach means stopping thinking about PBIs as 'done, then tested' and making the process more iterative and collaborative. But that's exactly what agile is all about - having that 'dev, then test' approach looks a lot like mini waterfall.
Test the increment
Testing a PBI against requirements and marking it as 'OK' in-sprint has one massive problem - you're still changing the code. We all work hard to make code modular and loosely coupled, but the nature of sprint goals, epics and business requirements means that we're generally working in the same approximate area of code throughout the sprint. So you complete a PBI, the tester signs it off, and 5 minutes later someone commits a change that breaks it. The tester tests the new functionality, runs a regression set that may not cover your earlier change, and a broken increment comes out of the end of the sprint.
Testing the increment means separating formal testing from the development process and making it a QA process instead. Normally you're going to want to reflect that in the organisational structure, which helps to make the testers independent again. You can still have strong relationships between testers and devs - that "is it supposed to do this?" conversation still needs to happen.
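In practice, increment-level validation tends to look less like per-ticket sign-off and more like a suite run against a deployed build. A minimal sketch, assuming a hypothetical staging environment (the URL and endpoints are placeholders):

```python
# smoke_test.py - a sketch of increment-level checks run with pytest against a
# deployed build, rather than against an individual PBI.
import requests

BASE = "https://staging.example.com"  # hypothetical staging environment

def test_homepage_is_up():
    response = requests.get(f"{BASE}/", timeout=10)
    assert response.status_code == 200

def test_login_page_renders():
    response = requests.get(f"{BASE}/login", timeout=10)
    assert response.status_code == 200
```

Because these run against the whole deployed increment, they catch the 'someone broke it 5 minutes after sign-off' problem that per-PBI testing misses.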
But if the code isn't being formally tested in the sprint, how do we know it's right? Good automated testing is part of a dev's toolkit; it's as important as writing the code itself.
Testing the change is the developer's job. The tester's job is validating the product.
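For the 'testing the change' half of that, here's a minimal sketch of the kind of automated test a dev might write alongside their change - the apply_discount function and its pricing module are hypothetical stand-ins for whatever the PBI touches:

```python
# test_pricing.py - a dev-owned test committed with the change it covers.
import pytest
from pricing import apply_discount  # hypothetical module under test

def test_discount_reduces_price():
    assert apply_discount(100.0, 10) == 90.0

def test_discount_cannot_go_negative():
    assert apply_discount(10.0, 150) == 0.0

def test_invalid_percentage_is_rejected():
    with pytest.raises(ValueError):
        apply_discount(100.0, -5)
```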
The big drawback that gets raised against 'testing the increment' is that it slows down bug fixing, because a fix doesn't get tested until the sprint after it's made. That's sometimes true, but that increment is also stable, so the risk of other bugs being introduced alongside the fix is lower, and nothing stops you sidestepping the system for an emergency fix. In any case, you very often find bugs well after a change has been merged anyway.
Conclusion
Testing the increment is my preferred approach - I think there are still massive inefficiencies in the simultaneous testing method, and it's probably not enough on its own to fix the problem. Testing the increment is a significant change in approach, but it separates the workflows and gives you more confidence that the increment is OK.
So if you're running into this problem with slack time in sprints and testing is getting pushed into the last couple of days, it might be worth experimenting with one of these approaches to see if that makes your process smoother.
** There's also a tension between breaking things down into the smallest components you can and what actually counts as a user story. It's pretty hard to say 'as a user, I want the small piece of prep work that, combined with six other things, will enable me to...'.