Update code insights planning process #4477
Conversation
Thanks for drafting this! Most of my "important" comments were trying to add places to define that done = also tested, and where/how we define test plans!
handbook/engineering/developer-insights/code-insights/processes.md
- The issue should also have its **_Estimate_ column filled out**, so that it can be evaluated whether it fits into the iteration. If the proposer lacks the information to estimate the issue, they reply on the issue in GitHub or raise it in our Slack channel to get the missing information or an estimate from the appropriate person. Teammates may also discuss this in ad-hoc synchronous meetings if beneficial. An assignee may also already volunteer or be proposed, but this may still be changed at the [Monday sync](#weekly-sync) to distribute workload.
- If **technical exploration** is needed to get more information, a _spike_ (a time-boxed investigation task meant to facilitate more granular planning) can be proposed for the next iteration instead to get that information.
- As much as possible, the proposer **involves the necessary stakeholders _asynchronously_ to get agreement** on whether the issue should be worked on in the next iteration before the [Monday sync](#weekly-sync). For example, the PM or EM might ping engineers in GitHub or Slack on whether an issue seems feasible, or engineers might ping their EM and PM to get buy-in whether the issue fits into our goals.<br>
👍
3. **Implementation and testing** (usually 1-4 weeks)<br>
Engineers execute on the implementation plan, putting a set of issues from the tracking issue into each iteration.
This also means that each sub-implementation task is sufficiently tested, so that by the end of this phase the project is ready to ship to customers with confidence.
I would like us to specify where/how the testing goes. Do individual teammates add it to issues and does one of us review it? Do I/we include broader test plans earlier in this process? Either could work, curious what you were thinking.
Good question (I hadn't thought about this as part of this PR). I would say both: The higher-level expectations could be documented as part of the scope agreement in the product RFC. Concrete testing for each sub-task (issue) I think should be included by the issue author (engineers, usually) when the issues are filed in the planning phase.
Co-authored-by: Joel Kwartler <LoJoel3+gh@gmail.com>
Notifying subscribers in CODENOTIFY files for diff 38e4bb3...c1b5ea2.
Just a couple of typos. I think it all makes sense to me, but I will probably have more of an opinion once I've gone through a couple of iterations! 😄
I left some thoughts - just challenging some points to vet them.
I will say in general I think this process works fine for loosely defined features, and I think it works less well for ambiguous problem statements. For example, "build dashboards" versus "improve performance". In the past I've been on teams that try to fit work that clearly doesn't match the model into the process, and it's very awkward (the square-peg-round-hole thing). I'd love to see some flexibility so that if we take on a project that doesn't fit the mold very well, we are flexible enough to adapt.
- During an iteration, teammates **work on their assigned issues for the iteration in the order they are listed** in the ["Current iteration" view](https://github.com/orgs/sourcegraph/projects/200/views/1) of the board. When starting work on a task, the teammate **updates its status column to "In Progress"** to communicate it to their team. This gives a good overview in the ["Current iteration" view](https://github.com/orgs/sourcegraph/projects/200/views/1), which can also be viewed in [Kanban layout](https://github.com/orgs/sourcegraph/projects/200/views/1?layout=board), of how the iteration is tracking.
- If one or more issues that were planned for an iteration look like they will **not get finished** (which includes testing) in the [current iteration](https://github.com/orgs/sourcegraph/projects/200/views/1) (while maintaining sustainable work practices), the assignee **raises this as soon as possible asynchronously** to the team (including the PM and EM), e.g. on the GitHub issue or in Slack. These issues then **become _proposed_ issues for the next iteration** (meaning nothing carries over automatically, but we also don't just drop and forget missed issues).
but we also don't just drop and forget missed issues
What does this mean? As I read this, it implies that we are going to do something with issues that do not get pushed into a new iteration. As I understand it, the proposal is that the issue is placed back into the same priority as other proposed issues – so what is preventing us from dropping this issue?
What is preventing us from "just dropping and forgetting" missed issues is that the process says we discuss every "Proposed" issue before it gets added to "Todo". The outcome of that discussion can be moving the issue to the Backlog, but the point is that it is only done after discussion and making an explicit choice to do so. The other possible outcome is that we do put it on the next iteration plan.
This is contrary to a model where all missed issues get added to the Backlog by default.
We sequence and parallelize projects so that we can _plan_ one project (steps 1 and 2) while another project is being _implemented_, meaning we always have the next project ready to be implemented by the time a project has finished implementation.
We will, however, make sure never to have multiple projects in the planning phase at the same time, as this leads to cognitive overload while also delivering on the implementation of another project.
We will, however, make sure never to have multiple projects in the planning phase at the same time, as this leads to cognitive overload while also delivering on the implementation of another project.
I wonder if this is practical and/or feasible. Not every project is going to involve the entire team for all of its duration, while some projects will require significantly more architectural overhead than others. We've had multiple workstreams throughout Q2 – I don't really see how scaling up the team would reduce the need for this.
The goal ultimately is to avoid cognitive overload on individuals, so you can think of it as "not more than one project going on at the same time per person". It is very likely that as our team grows, we'll need to loosen this up to allow multiple groups of people working on different projects simultaneously. But at the same time, we prefer to operate together as a team rather than a working group where everyone is working on different things. At least right now, most big projects we do will involve backend and frontend together, and we're just 2+2 engineers. We also have only one designer and one PM. So I believe at the moment if we do multiple large projects at the same time, it will result in increased cognitive load for the team, which I want to avoid. And again, this may change in the future!
#### Tracking issues
To plan projects that span multiple iterations and need to be broken up into sub-tasks, we make use of [tracking issues](../../tracking_issues.md).
The tracking issue is created by one of the teammates with a descriptive accompanying label, e.g. <span class="badge bg-info">insights-dashboards-v1</span> (milestones are not used for this, as they are used for iterations).
milestones are not used for this, as they are used for iterations
Just curious - why?
We could technically also use labels for iterations and milestones for projects. We can't use both, because GitHub only allows one milestone per issue, but issues need to be possible to associate with an iteration AND a project at the same time – so we just need to align on one. Iterations are gonna be created more frequently, and they have a workflow of "closing" and due dates that labels don't have and are a bit more natural to display and edit in the new GitHub board (as it is a dedicated column). You also can't group by labels (or label "pattern") in the new boards, but you can group by milestone, which is used for the "All issues" view, which would not be possible if iterations were tracked by labels.
Despite following two-week iterations, our [releases are monthly on the 20th](../../releases.md#releases), and we may sometimes need to order tasks in a way that gets important projects into the next release.
We also intentionally plan so we can avoid merging significant work less than two days before a release (if a release is on the 20th, our last day to merge is the 18th). Exceptions require explicit approval of both the PM and EM.
In general I don't like this code freeze. I think it hasn't been particularly effective at stopping issues, and I think it leaves a lot of unresolvable ambiguity around what is "significant". I also think it's a relatively low-agency constraint that anyone in any form of leadership role should need to be an approver to merge code, as well as the more practical problem that leaving branches open for many days in a very active repo can make merging unnecessarily complicated and risky.
What were the original motivations behind this freeze?
@coury-clark As I remember, that was introduced to avoid the last-minute changes before the release cut which we had pretty often (and I guess we still have). I think this rule isn't bad, but because we have been violating it for a long time already (as I remember, literally the last 3 releases had these last-minute changes), we need to accept that this rule just doesn't work for us.
But it is also important to note that we usually violate this rule not because we want to ship another big thing, but because we usually find some problems on staging right before the release cut. We probably need to reconsider this rule. For a true code freeze before the release cut, I think we would have to end active development 2-3 days before the freeze and then actively start the testing process to check things we haven't covered by unit/integration tests – but with that system, we reduce our time for developing something for the next release. So I don't have a good solution to remove or replace this rule with something else, just mixed feelings about it.
I would be okay replacing this with an alternative proposal, but I would otherwise keep this as the "default in lieu of a better solution." As the team grows I would expect our process to naturally mature (and as we grow we should have fewer moments where we have to choose between building in testing, since we should have a capacity to do both).
I consider "significant" to be roughly anything that modifies more than one directional flow or that is not just a copy change (copy changes often land close to release due to needing more stakeholders to review + catching things on dogfood, and I'm okay with that). You can also think of "significant" as "how many possible paths to test this are there?" and anything with more than 1-2 paths is likely significant.
Thank you for the feedback everyone, looks like everyone is happy with it now! I'll merge, and don't forget we can keep iterating on it (feel free to also open PRs for any tweaks!)
This is a proposal for how we can mature the planning process we used through FQ2, facilitated by the new GitHub project Beta, so it is easier to repeat and scales to our new team size and product maturity.
It aims to set clearer expectations than we had previously documented for when and how planning happens in iterations. One of the goals is to be as asynchronous as possible, while still allowing synchronous discussion (in the weekly sync and ad-hoc).
As before, all teammates are given agency in planning (planning does not happen top-down).
It also includes the expectation for maximum workload and how to deal with overflowing issues to avoid overloading teammates or "crunching" in iterations (we want to work sustainably – that's the main reason we're planning at all).
It also adds some notes on how we deal with the mismatch between iteration and release dates.
Additionally, this PR documents a part of our process/philosophy that we had been practicing but hadn't documented yet, namely how we approach longer-term (multi-iteration) projects: the process of @Joelkw's product RFCs, the following design/planning phase, and finally the implementation phase – which we parallelize to plan the next project ahead, but constrain so we never have too many things going on at the same time.