The Project Board (and how not to screw it up)
Kanban board, scrum board, project board - whatever you call it, it will be vital to both your project organization this semester and your grade. Its purpose is to track all of your requirements (user stories) and tasks as they are created, fleshed out, assigned, completed, and closed. While we now use a scrum board built into PMTool, that site is modeled on Trello, an industry-standard tool that became too expensive for us to use. As a Project Manager, you are expected to be familiar with all of the Process Standards that are communicated to your teams, but you also have additional oversight responsibilities for the way the board is maintained.
Requirements/Stories and Tasks
Each item on the project board will be either a User Story, a Requirement, or a Task. This is indicated by applying the appropriate label to the item. There should be no items anywhere on the board that are not labeled as one of these three things.
Note that requirements are only applicable to projects in ELR - they cannot be used in other courses. The requirement tag may only be used if the functionality in question does not involve any user interaction (for example, a performance requirement that involves code optimization, or a security requirement that involves encryption; these are not testable by an end user and are not part of any specific user-facing functionality). If there is any user interaction, the item must use the "User Story" tag and format, and the tests must be written for an untrained user. If you have questions about whether or not the requirement tag is appropriate, you should discuss it with the 404/ELR instructors.
If your team has items on the board that do NOT have one of these three tags, the team will lose points during sprint grading, and you should flag the issue by adding a "needs revision" tag to the card.
User Stories
User Stories are requirements that address a particular function desired by the users of the software. They follow a simple pattern:
As a <type of user> I want <to do some thing> so that <I can achieve some testable goal>
Note that the type of user should not be "user" (that tells us nothing about why this person is using the program), and that the goal must be testable. Any story that cannot be tested by the type of user it is written for is not a valid story. The tests for a story should be put in the description section of that card. As a PM, it's not your job to create user stories (although you can certainly help guide the team when it comes to features), but it is your responsibility to verify that the stories the team creates follow the format above, are written for a valid user of the system, and align with the vision for the product. Ideally, you will have enough user stories defined by the end of the first sprint to deliver the MVP based on the design the team creates. As you execute the project, you may find that some stories no longer make sense, or that alternate features would make a better product; that's part of the process. Stories that no longer make sense will stay in the backlog, and new ones can be created at any time, but you will need to be able to explain why they are needed.
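As a purely hypothetical example (a recipe app, not any specific project): "As a home cook, I want to filter recipes by ingredients I already have so that I can make dinner without a trip to the store." The acceptance tests in the card's description, written for that home cook, might be: enter "eggs, spinach" in the ingredient filter and verify that every recipe shown uses only those ingredients; enter an ingredient with no matches and verify that a "no recipes found" message appears.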
Tasks
Tasks are the steps required for the developers to implement the User Stories. A user story with no tasks can't be implemented, and a task with no story rarely makes sense - tasks should support a requirement. Tasks should support one and only one User Story; while there are instances where one task is necessary to more than one story, in our organization we will simplify the relationships by assigning it to only one of those stories (your team can choose which seems the most appropriate - usually the one being worked on first). A task must also be testable, but it is testable by the developers, rather than by the type of user for whom its associated story is written. The tests for a task should be put in the description section of that card.
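Continuing the hypothetical recipe-app story above, one supporting task might be "Implement the ingredient-filter query on the recipes endpoint," with a developer-level test such as: call the endpoint with a known set of ingredients from the test database and verify that the returned recipe IDs match the expected list, and that an empty ingredient list returns an empty result rather than an error.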
Research Tasks
In some cases (especially in ELR, but in other classes as well) you will need to make decisions that require some investigation to find the right way forward. The team may also need to become familiar with a new tech stack, or try out potential APIs to see if they can support the desired app functionality. Since the answer may be no on the API, and since doing a tutorial on React doesn't really belong to a single requirement, you as a PM have the ability to designate a task as a Research task. Research tasks do not need to be blocking a User Story, but they DO need to have tests - how will you verify that the research has been completed? If you are looking at an API, this may be in the form of some proof of concept calls that are documented, and a writeup of the terms and conditions and limitations of the API. If you are running a tutorial, it may be creating a proof of concept app that does "Hello World", showing that the team can successfully create and deploy something using that technology. There may not be any code related to these tasks, so it is also possible that there is no branch in the repo linked to them - but if there IS code, it should follow the same repo rules as any other task. Research tasks may also lack "alternate path" tests, since there is not necessarily an alternate path for reading a tutorial.
Research tasks must be tagged on the scrumboard with the "Research" label.
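For the API investigation case above, the documented proof of concept might be as small as the following sketch (the endpoint, parameters, and header name here are hypothetical placeholders, and Python is just one option - use whatever language your team is evaluating):

    import requests

    # Hypothetical endpoint and query - replace with the API under evaluation.
    response = requests.get(
        "https://api.example.com/v1/search",
        params={"q": "test query"},
        timeout=10,
    )

    print(response.status_code)                           # expected: 200
    print(response.headers.get("X-RateLimit-Remaining"))  # note any usage limits
    print(response.json())                                # confirm the fields the app needs are present

The task's writeup would then record the observed status codes, the fields returned, and any rate limits or terms that affect whether the API can support the feature.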
Design Tasks
In the first sprint, there is likely to be little coding - the sprint is focused on creating a design that represents the vision of the completed app. These tasks will belong to a "bad" user story (see the FAQ below), but they still need to have tests - those tests will just be related to the Figma prototypes rather than the code. As such, they will not have branches in the repo. They may or may not have alternate path tests, depending on what part of the application they represent. If the working app will have an alternate path for the feature that is perceivable by the user, the design task should include that element - for example, if you are prototyping the login, there should be a test of the prototype for what feedback will be given, and where it will appear in the app, if a user enters invalid credentials. If the task is related to designing a top-level navigation bar, however, there may not be a realistic alternate path.
Design tasks must be tagged on the scrumboard with the "Design" label.
Lists
A scrum board consists of multiple lists. The purpose of the lists is to capture the state of your stories and tasks. Each organization determines the number and naming of those lists for its projects, since there may be different needs in each organization, but within an organization, all boards should be the same. This eliminates the need to guess what the meaning of each state is; once you understand the process for your organization, you should be able to read the project board for any project and gain an accurate understanding of its state.
For our organization, there are six lists, named Backlog, Planned, In Progress, Testing, Complete, and Closed. Stories and tasks will (generally) move from left to right through these lists as they progress. They should not "skip" stages as they move - for example, you should not move from In Progress straight to Complete without stopping in Testing along the way.
Backlog
The Backlog contains any unfinished story or task that is not planned or in progress for the current sprint. While they are in the Backlog, it's OK if the stories and tasks are incomplete or incorrect. Stories and tasks should be added to the board during the initial brainstorming process and as new ideas arise during development, even if they are not fully fleshed out, don't yet have tests, or are still a bit vague. User stories in the Backlog may not (yet) be broken down into tasks, and the tests that are written may not be complete. Backlog is a big pile of possibility.
Planned
At the end of each sprint, you and your team will select the set of stories to be developed during the upcoming sprint (note that you have the final say on this, as you are responsible for the product). You need to assign the user stories to your team for decomposition into tasks, and if there are not yet acceptance tests, they need to be added. Once the stories (and their associated tasks) have the required tests, you will move them into the Planned list. This is the place where you will review the work for the sprint to ensure that
- All user stories follow the format
- All user stories have acceptance tests that ensure the stories meet the established expectations
- All user stories have one or more associated tasks
- All (non-research) tasks have exactly one associated user story
- All tasks have task tests that adequately test the task
- All tasks in Planned also have their User Story in Planned (or later, if the User Story was started in a previous sprint)
- All User Stories have at least one task in Planned
You should not move the stories to Planned if they do not meet these criteria. It is your team's responsibility to make sure that they do, and this is a place where you need to establish accountability if they do not. You should also verify that the work in Planned aligns with the roadmap that you have defined.
In Progress
Once a task has been assigned to a developer (note that you should reflect this assignment in the appropriate status report, by task number), the task and the associated story (assuming the story has not already been started) will be moved by the developer to In Progress. This indicates that they are actively working on that story and task. Note that not all the tasks need to move along with the story; it's OK to have some tasks for a story in Planned and some in later stages. The developer should add themselves to the task card, but they do not need to add themselves to the associated story.
Testing
Once a task has been completed by the developer, the developer moves it to Testing. While the task is in this list, the developer will execute the task tests. Once all the tasks for a user story are complete, the user story will move into Testing, and its acceptance tests will be run. As a project manager, you want to make sure that your team is doing a good job testing their application, to avoid bugs that both cost the team points and require overhead to address. It is frequently a good idea to assign another developer to run the tests as well, as a cross-check.
Complete
After running the tests and seeing that they pass, the developer will move the task or user story to Complete. Once a task has moved to Complete, the task's branch can be merged back into dev for the sprint release.
Closed
Once the developer has moved a card into Complete, you will review the task or story for correctness and move the card to Closed. Moving a task or user story from Complete to Closed provides you with a final opportunity for quality checks, and it should not be done by team members. You should not move a card into Closed if that card is invalid or if the tests do not pass. Moving the card is your stamp of approval that the team has followed the process standards correctly, which means that
- All the checks required to move into Planned are still true
- All tests pass
- There are sufficient tests (including alternate path tests) to make a convincing case that the task or story is correct
- Tests have specific inputs and explicit results
- User stories support the product vision
- User stories are developed in a way that produces a consistent product (UI/UX)
FAQ
What about tasks that are not related to a user story? How do I capture (for example) design tasks like wireframes?
This is one area in which the artificial nature of the classroom intrudes. There are a few different ways to address this. One is to have those tasks exist outside of the board; this is frequently the case in agile, where the board is supposed to be all about the code. Unfortunately, that doesn't give us anything to grade. A second way is to have them be tasks associated with a user story that will actually be coded in sprint 2, but that creates the problem of not being able to make it all the way through the process (closing out tasks and stories) in sprint 1. A third option is to "make up bad stories". In order to give you experience moving through the pipelines, we can create stories for invalid users like "As a project investor, I want to see a prototype of the screens before investing so I can verify that the project looks promising". We will use approach three, which has the least impact on the overall SDLC and normalizes grading.
What if a story/task doesn't get finished during a sprint?
If that happens, the story and tasks should be tagged with BOTH the sprint in which they were started AND each subsequent sprint until they are complete. They should also contain a note about why they ended up crossing sprint boundaries. This will help highlight areas where you may be running into trouble, where the feature may be more complex than anticipated, or where some team members may need additional support.
What if a story or task turns out to be impossible?
Especially when you are experimenting with new (or new-to-you) technology, it is possible that something you planned turns out not to be feasible to implement. Those stories should have comments outlining what was done and why it is not feasible, and you (the PM) can move them back to the Backlog in the time between the end-of-sprint review and the team's first meeting of the next sprint - it's always possible that future advances will make them feasible!
What if there are bugs that are discovered after a story or task is closed?
If bugs are found (and they will be!), the associated story and task should be moved from Closed back to In Progress and tagged with the current sprint. The bugs should be fixed, and the cards should move back through the process (now with additional tests to catch the case that led to the bug).
What if we need to update our look and feel, but are not changing any functionality?
Especially towards the end of the semester, you may want to add consistency, polish your UI, add more user-friendly messaging, or even redesign sections to make them more usable based on feedback. If this happens, re-open the original story for the pages that you are changing, and create a new task for the upgrade. The new task should outline all language and visual changes (ideally with a wireframe or Figma prototype to indicate the new look and feel) and have tests to ensure that the new design is implemented correctly. Don't forget to make it responsive!
But what about something that's really tiny, like changing the text of an error message to fix a spelling error, or changing a button from green to darker green?
If it were me, that change would somehow magically appear when a different task was merged in, and I would be shocked, shocked I say, that the spelling was now correct. Naturally, if this sort of thing happens frequently or on a large scale, it will likely be apparent during the evaluation, and the team will lose points because work was done that is not reflected in the scrumboard.