Throughout this project, effort estimation and tracking turned out to be way more useful than I expected. At first, I honestly thought this whole process was just another requirement we had to complete. But once we got deeper into Milestones 2 and 3, I realized estimating my tasks and tracking my time actually helped me manage my workload, especially since I’m balancing school, work, and this project at the same time.
My estimates were mostly based on comparing each new task to similar tasks I had already completed. For example, when creating the Login Page issue, I estimated around one hour because the Registration Page earlier took me about an hour and a half. Database logic was another pattern: those tasks always took longer because debugging is unpredictable.
Every estimate had a comment like:
“This issue is similar to Issue #12 which took around 90 minutes, so I’m estimating the same.”
Even when the estimates were wrong, having a reason behind them made the process feel organized instead of random guessing.
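The estimate-by-analogy habit described above can be sketched as a tiny helper. This is an illustrative sketch only, not tooling from the actual project; the function name and durations are hypothetical.

```python
# Sketch of estimation by analogy: base a new estimate on the
# average duration of similar, already-completed issues.
# All durations here are hypothetical examples.

def estimate_minutes(similar_durations):
    """Estimate a new task as the average of similar past tasks (in minutes)."""
    return round(sum(similar_durations) / len(similar_durations))

# e.g. estimating a new page from one comparable issue that took ~90 minutes
print(estimate_minutes([90]))       # 90
# ...or from several comparable issues
print(estimate_minutes([60, 120]))  # 90
```

Averaging over several comparable issues, when they exist, smooths out one-off outliers in the history.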
Beyond keeping things organized, estimating ahead benefited the project in two important ways:
First, it helped with planning: seeing that a task might take 2–3 hours helped me plan my week more realistically. With physics labs, ICS work, Chinese assignments, job shifts, and family responsibilities, this was surprisingly helpful.
Second, it surfaced hidden scope early. One example: we thought one feature would be a quick fix, maybe 45 minutes. After estimating and talking through the details, we realized it would take 3–4 hours of UI, routing, and validation work. If we hadn't estimated first, we would have been caught off guard later.
Tracking actual effort ended up being extremely useful.
The biggest surprise was discovering how much time non-coding work takes. For example, I estimated 45 minutes for a "Profile Editing Page" but ended up spending noticeably more once design and debugging were included. This helped me understand why my estimates were always too low: I wasn't accounting for design time and debugging time. In future issues, I increased my estimates to be more realistic.
For coding effort, I used WakaTime inside VSCode, which tracked my editing time automatically. Because WakaTime only counts active time, it felt pretty accurate.
For non-coding effort, I relied on my own tracking, which I believe was about 85–90% accurate. Not perfect, but definitely better than guessing.
Some issues I created were too broad, such as “Complete User Directory Feature.” Breaking them into smaller tasks would have improved estimate accuracy and made progress easier to track.
I used AI throughout the project, but early on I didn’t always record how long I spent prompting or verifying responses. Later I improved, but next time I’d track from day one.
I underestimated debugging almost every time. In future projects, I would automatically add buffer time for testing and debugging.
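That buffer idea can be sketched as a one-line adjustment applied to every raw estimate. The 50% buffer below is an assumed illustrative figure, not a number from the project.

```python
# Pad a raw coding estimate with a testing/debugging buffer.
# The 50% buffer is an assumed illustrative value.

DEBUG_BUFFER = 0.5  # assume debugging and testing add ~50% on top

def padded_estimate(raw_minutes, buffer=DEBUG_BUFFER):
    """Return the raw estimate plus a debugging/testing buffer (in minutes)."""
    return round(raw_minutes * (1 + buffer))

print(padded_estimate(60))  # 90
```

Tuning the buffer against the tracked actual-vs-estimate history from earlier milestones would make it more than a guess.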
I also separated AI usage into two categories: coding-related AI, counted as coding effort, and non-coding AI, counted as non-coding effort.