Project Timeline

Planning Timelines

Important:

  • This is the Development ("Realize") portion only of the full example timeline.
  • See the full timeline here: Full example Project timeline
  • By this point, significant planning and design work has already been done: a Functional Specifications document and the Systems Architecture have been completed and approved.

Realize (to 6-9 months in)

Day | Tech/Systems Tasks                     | Business Planning Tasks
----|----------------------------------------|--------------------------------------
 90 | Realize, schedule w/ tech lead         | Marketing/Business misc.
 75 | Dev 1: UI                              | Marketing/Business misc.
 60 | Dev 2: UI/BE setup                     | Marketing/Business misc.
 45 | Dev 3: UI/APIs/DB Backend/QA           | UI Approval, Support docs
 30 | Dev 4: APIs/Backend/QA                 | Backend Approval, Marketing docs
 15 | Dev 5: DevOps/QA                       | DevOps
 10 | UAT/Final Validation, launch checklist | Final Validation, launch coordination
 00 | Launch                                 | Launch approval

Post-launch - Limited Availability

Day | Tech/Systems Tasks                  | Business Planning Tasks
----|-------------------------------------|--------------------------
 00 | Limited Availability                | Launch approval
 +2 | Advise - Logging, Fixes             | User feedback/testing
 +5 | Advise - Small iterations           | Marketing/Business misc.
+10 | Full availability                   | Full approval
+10 | Advise - Alerts, logging dashboards | Marketing/Business misc.
+15 | Advise - Stretch release work       | Marketing/Business misc.
+30 | Advise - Stretch release            | Marketing/Business misc.

Typical systems by risk-level

For informal reference (not a strict definition):

Major Features

  • Multiple grouped UI/backend functionality changes involving a significant amount of code and risk.
  • May be promoted to leadership, internal teams, or users, and so may carry both technical and user-facing risk.
  • Standalone; often only one or two developers, no full-time UX designer, and a single stakeholder.
  • If grouped with other features, the risk increases and the work may amount to a full application.
  • Usually only brief initial consultations with an architect.
  • Risk level varies case by case, from high to low: high for system-critical features, but often lower priority.

Small Risk Application

  • Multiple features grouped into a distinct, unified UX/backend system.
  • Typically smaller, less complex, and lower-profile than larger applications.
  • Usually a few developers or fewer.
  • One shared architect, or none.
  • Risk is moderate, mainly to the team/department.

Medium-Large Risk Application

  • Multiple major features grouped into a distinct, unified UX/backend system.
  • Has many dependent people/groups, internal or external.
  • Company contracts, other projects, etc. may be impacted.
  • High visibility: important internally and/or with user awareness/dependencies.
  • One shared or dedicated architect.
  • Risk is high to the team, the department, or a small company.

Large-scale High Risk Application

  • May involve multiple applications and a dozen to dozens of features.
  • There could be 100 or more people across multiple departments working on it in some way.
  • May involve substantial company investment: $5 million+ in staff alone could be the low end, while hundreds of millions or even billions could be on the line.
  • Normally heavily promoted, and so has company-wide reputation impact.
  • Risk is very high.

Testing Pre-launch

Testing

Testing is a critical part of the development process.

Recommended breakdown of testing:

  1. Unit tests
  2. Component tests
  3. Integration tests
  4. API tests
  5. UI tests

Ideally, you should have automated tests for each of these categories. It also helps to involve multiple teams: the system should not be tested only by the developers who worked on it.
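
For illustration, here is a minimal sketch of what an automated unit-level test and an automated API-level test could look like in pytest. The `calculate_total` function and the `/api/orders` endpoint are hypothetical placeholders, not part of any specific system described here.

```python
# Minimal pytest sketch: one unit-level test and one API-level test.
# Assumes `pip install pytest requests`; names and endpoints are hypothetical.
import requests


def calculate_total(prices, tax_rate):
    """Hypothetical business function used to illustrate a unit test."""
    return round(sum(prices) * (1 + tax_rate), 2)


def test_calculate_total_unit():
    # Unit test: exercises a single function in isolation.
    assert calculate_total([10.00, 5.50], tax_rate=0.10) == 17.05


def test_orders_api():
    # API test: hits a running instance of the (hypothetical) service.
    response = requests.get("http://localhost:8000/api/orders", timeout=5)
    assert response.status_code == 200
    assert isinstance(response.json(), list)
```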

However, if you cannot automate tests, you should have a manual test plan and checklist for each of these categories.

It is very important to develop a test plan and checklist, as these can later be adapted into automated tests.
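
One way to make that adaptation easier is to keep the manual checklist as structured data from the start, so each item can later be promoted to an automated check. A minimal sketch, with purely illustrative checklist items:

```python
# Sketch: a manual test checklist kept as data, so each item can later be
# promoted to an automated (here, parametrized pytest) check.
import pytest

MANUAL_CHECKLIST = [
    ("login page loads", "/login"),
    ("orders page loads", "/orders"),
    ("settings page loads", "/settings"),
]


@pytest.mark.parametrize("description,path", MANUAL_CHECKLIST)
def test_checklist_item(description, path):
    # Placeholder assertion: replace with a real HTTP or browser check
    # once the checklist item is automated.
    assert path.startswith("/"), description
```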

Other testing

Stress Testing: determines the maximum load the system can handle before it fails.
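
As one way to run this, a load tool such as Locust can ramp simulated users up across runs until the system degrades. A minimal sketch, assuming `pip install locust` and a hypothetical `/api/orders` endpoint:

```python
# Sketch of a Locust user for stress testing.
# Run with, for example:
#   locust -f stress_test.py --headless --users 200 --spawn-rate 10 --run-time 10m
# and increase --users across runs until the system starts to fail.
from locust import HttpUser, task, between


class ShopUser(HttpUser):
    host = "http://localhost:8000"  # hypothetical base URL
    wait_time = between(1, 3)       # seconds between simulated user actions

    @task
    def browse_orders(self):
        # Hypothetical endpoint; substitute a real high-traffic path.
        self.client.get("/api/orders")
```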

Spike Testing: rapidly increases the load on the system to determine how it behaves under sudden, high-load conditions.
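
A spike can be simulated with a custom Locust load shape that jumps the user count suddenly and then drops it back. A sketch, reusing the hypothetical user class from the stress-testing example above (defined in the same file):

```python
# Sketch of a Locust load shape for spike testing: a low baseline load,
# then a sudden jump in users, then back down.
from locust import LoadTestShape


class SpikeShape(LoadTestShape):
    stages = [
        (60, 10),    # first 60 s: baseline of 10 users
        (120, 500),  # next 60 s: sudden spike to 500 users
        (180, 10),   # final 60 s: back to baseline
    ]

    def tick(self):
        run_time = self.get_run_time()
        for end_time, users in self.stages:
            if run_time < end_time:
                return users, 100  # (user count, spawn rate per second)
        return None  # stop the test
```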

Endurance Testing: verifies the system's behavior over an extended period of time under a constant load.

Scalability Testing: evaluates the system's ability to handle increased loads as more resources are added, such as more servers or users.

Volume Testing: determines the system's behavior when a large volume of data is processed.
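
As a small illustration, volume behavior can be probed by loading a large synthetic dataset and timing a representative operation. A sketch using an in-memory SQLite table (the table and query are placeholders):

```python
# Sketch: crude volume test using an in-memory SQLite table.
# Loads a large synthetic dataset and times a representative query.
import sqlite3
import time

ROW_COUNT = 1_000_000  # adjust to the data volume you need to validate

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany(
    "INSERT INTO orders (amount) VALUES (?)",
    ((float(i % 100),) for i in range(ROW_COUNT)),
)
conn.commit()

start = time.perf_counter()
total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
elapsed = time.perf_counter() - start
print(f"Summed {ROW_COUNT} rows in {elapsed:.3f}s (total={total:.2f})")
```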

Configuration Testing: assesses the system's behavior when different configurations are used, such as different hardware or software configurations.
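
One lightweight way to cover this is to run the same checks parametrized over several configurations. A sketch with purely illustrative configuration values and a hypothetical helper:

```python
# Sketch: running the same check against several (hypothetical) configurations
# using pytest parametrization. Assumes `pip install pytest`.
import pytest

CONFIGURATIONS = [
    {"db": "postgres", "cache": "redis"},
    {"db": "postgres", "cache": None},
    {"db": "sqlite", "cache": None},
]


def build_connection_string(config):
    # Hypothetical helper standing in for real configuration handling.
    return f"{config['db']}://app" + ("?cache=redis" if config["cache"] else "")


@pytest.mark.parametrize("config", CONFIGURATIONS)
def test_connection_string_per_configuration(config):
    conn = build_connection_string(config)
    assert conn.startswith(config["db"])
```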

Compatibility Testing: checks the system's compatibility with different environments, such as different operating systems or browsers.

Disaster Recovery Testing: assesses the system's ability to recover from failures and resume normal operation.