01 What is a process?

Whenever we provide a service or build a product, we follow a sequence of steps to accomplish a set of tasks. We do not paint the walls of a house before the wiring is installed. That ordered series of activities is what we call a process.

Defining characteristics of any process
  • It prescribes all the major activities
  • It uses resources and produces intermediate and final products
  • It may include sub-processes, with entry and exit criteria
  • The activities are organised in a sequence
  • Constraints or controls may apply (budget, availability of resources)

From process to software life cycle

When the process involves the building of a product, we refer to it as a life cycle. The software development process, also called the software life cycle, is a coherent set of activities for specifying, developing (designing and implementing), and validating (testing) software systems.

The four fundamental activities

Every software process, regardless of model, contains four core activities. Different models arrange them differently and revisit them at different cadences, but the activities themselves are universal.

I. Specification
Establishing what functions are required and the constraints on the system's operation and development. Often called requirements specification when the focus is on the document produced. Treated by Sommerville and Bruegge in the chapter on requirements engineering.
II. Development
Converting the specification into an executable system. Includes design (architectural, interface, component, data, algorithm) and implementation (also called coding and programming) along with debugging.
III. Validation
Showing that the system conforms to its specification and meets the customer's needs. Almost always called testing in industry, although strictly testing is one technique within validation. Includes verification (does the system implement the requirements correctly?) and evaluation (does it meet design goals such as performance and usability?).
IV. Evolution
Modifying the system after it is in use. Traditionally called maintenance, although the modern term evolution better captures that systems keep changing rather than just being repaired. As business circumstances change, the software must change too.

These four activities are studied in depth in the third companion in this series. Here we focus on how the different process models organise them.

Key vocabulary used throughout this companion
  • Stakeholder. Anyone affected by, or who can affect, the system: customers, end users, regulators, the development team, support staff. The plural matters – different stakeholders often want different things.
  • Requirement. A statement of what the system must do, or a constraint on how it must do it. Split into user requirements (high-level, in the customer's language) and system requirements (detailed, technical, used by developers).
  • Component. A self-contained unit of software with well-defined interfaces. May be developed in-house, bought as commercial off-the-shelf (COTS), or taken from an open-source library.
  • Architecture. The high-level structure of a system: which components exist, how they are arranged, how they communicate. The decisions hardest to change later.
  • Increment. A working slice of the final system, delivered partway through development, that adds value on top of what was delivered before.
  • Iteration. A repeated pass through the development activities. Each iteration may produce an increment, refine an existing increment, or both.
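The "well-defined interfaces" in the component definition above can be made concrete with a small sketch. Everything below — the `PaymentGateway` interface, both implementations, and the vendor client — is a hypothetical illustration, not any real library: the point is that the rest of the system depends only on the interface, so an in-house component and a COTS wrapper are interchangeable.

```python
from typing import Protocol

class PaymentGateway(Protocol):
    """The well-defined interface: all the rest of the system ever sees."""
    def charge(self, amount_cents: int, card_token: str) -> bool: ...

class InHousePayments:
    """A component developed in-house."""
    def charge(self, amount_cents: int, card_token: str) -> bool:
        # real bookkeeping would go here; decline non-positive amounts
        return amount_cents > 0

class CotsPaymentsAdapter:
    """A thin wrapper that adapts a bought (COTS) product to our interface."""
    def __init__(self, vendor_client):
        self.vendor = vendor_client  # the third-party library object
    def charge(self, amount_cents: int, card_token: str) -> bool:
        # the vendor's (hypothetical) API takes amounts in whole currency units
        return self.vendor.pay(card_token, amount_cents / 100)

def checkout(gateway: PaymentGateway, total_cents: int) -> str:
    # The caller neither knows nor cares which implementation it was given.
    return "paid" if gateway.charge(total_cents, "tok_123") else "declined"
```

Swapping `InHousePayments` for `CotsPaymentsAdapter` changes nothing in `checkout` — which is exactly what makes the buy-versus-build decision a local one.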

What is a process model?

A software process model is an abstract representation of a process. It describes a process from a particular perspective. The same software project can be described by different models that emphasise different things – the order of phases, the flow of artifacts, the management of risk, the role of feedback. No model is the truth; each is a useful lens.

Why we need process models at all

The classic cartoon below is funny because it is true. Without a process that brings the customer, the developer, and the deliverable together at the same level of understanding, projects routinely produce something nobody actually wanted.

[Figure: the tree-swing cartoon – the same swing project shown in six panels, from requirements through delivery to what the customer wanted]
Figure 1. A long-running cartoon in software engineering, sometimes called the tree-swing cartoon. The customer wanted a simple tyre swing (panel 6). The requirements specification described an over-elaborate stack of tyres (panel 1). The developer built something that did not work (panel 2). The previous solution had been a perfectly fine plank swing (panel 3). What was actually delivered was a piece of rope with nothing on it (panel 4). Marketing described it as a luxury chair-swing with a canopy (panel 5). The point of a process model is to bring these six pictures into alignment.

02 The five generic process models

The first split: plan-driven or evolutionary

Before naming individual models, it is worth seeing the highest-level division. Almost every software process can be classified along one axis: how much of the work is planned upfront versus how much is discovered as we go.

Plan-driven

Requirements are settled, then design, then code, then test. Each stage produces formal documents that are signed off before the next begins. Change is allowed but expensive. Strong on predictability and audit; weak on responsiveness.

Examples in this companion: waterfall, formal systems development, the V-model, reuse-oriented development when applied in a phased way.

Evolutionary

An initial version is built quickly, then evolved through repeated cycles in conversation with the customer. Specification, development, and validation overlap in time rather than running in sequence. Strong on responsiveness and learning; weak on predictability and document-heavy sign-off.

Examples in this companion: evolutionary development, prototype-driven development, and the whole Agile family covered in detail in the second companion.

[Figure: the hierarchy of process models – Software process splits into Plan-driven (waterfall, V-model, formal, reuse when phased) and Evolutionary (evolutionary development, the Agile family of Scrum, XP, and Kanban, reuse when iterative); prototyping sits across both branches as a technique used inside whichever model is chosen]
Figure 2. The hierarchy that frames everything that follows. The first decision is plan-driven or evolutionary; the named models are answers to that decision. Reuse-oriented and prototyping appear in both branches because they can be applied either way.

The five named models we will examine

Within the hierarchy above, Sommerville names five generic models. We will examine each in turn:

Waterfall. Separate and distinct phases of specification and development, in sequence. The classical plan-driven model.
Evolutionary / Agile development. Specification and development are interleaved. The system grows by adding features as the customer proposes them.
Software prototyping. Strictly a technique rather than a process model, but treated on its own here because of its importance. Used inside either plan-driven or evolutionary processes.
Formal systems development. A mathematical system specification is formally transformed, step by step, into an implementation. Plan-driven by nature.
Reuse-oriented development. The system is assembled from existing components or commercial off-the-shelf (COTS) systems. Can be plan-driven or evolutionary depending on how the integration is organised.

Two further mechanisms – incremental development and spiral development – describe how iteration is layered on top of any of these. They are covered in the second companion in this pair, on iteration and Agile process models.

03 1. Waterfall model

The idea

Waterfall partitions a project's development into distinct sequential stages. Each stage produces a definitive output that is the input to the next. The model is named for the way work flows downward from one stage to the next, with little expectation of going back upstream once a stage is complete.

The model is based on hardware engineering practice and was widely used in military and aerospace industries – environments where requirements are typically well defined early and change is minimal.

[Figure: waterfall – five stages cascade in sequence: requirements definition → system and software design → implementation and unit testing → integration and system testing → operation and maintenance, with feedback arrows between adjacent stages]
Figure 3. The waterfall model after Sommerville. Solid arrows in both directions between adjacent stages show that feedback exists, but going back is rare and expensive once a stage has been signed off.

The problems

Waterfall partitions projects rigidly. The drawback is the difficulty of accommodating change once the process is underway. More fundamentally, software is unlike hardware in ways that strain the model:

No fabrication step. Program code is itself another design level. There is no "commit" point – software can always be changed, so the discipline that makes waterfall work in hardware does not apply.
Insufficient design analysis. Most analysis can only be done on the running code, so problems are not detected until late in the process.
Static view of requirements. Slow and expensive to respond to changing needs. User involvement is minimal once the specification is written.
Unrealistic separation of specification from design. In practice, design decisions clarify what the requirements should have said.
Cannot easily use prototyping or reuse. The model assumes everything is designed from scratch in sequence.

Where waterfall still fits

Despite its limitations, waterfall is the right choice in some circumstances:

Requirements are well understood at project start
If the customer truly knows what they want and that will not change, the predictability of waterfall is a strength, not a weakness.
Large and complex critical systems
For systems where formal sign-off and exhaustive documentation are required at each stage (regulated, safety-critical), waterfall's stage gates are appropriate. It is too expensive to use for small systems.

Strengths

  • Simple, well-known structure
  • Each stage produces formal documentation
  • Easy to manage and audit
  • Suitable for stable, well-understood domains

Limitations

  • Inflexible to change once stages are committed
  • Working software not visible until late
  • Risks discovered late are expensive to fix
  • Poor fit for most modern software contexts

04 2. Evolutionary / Agile development

The idea

Evolutionary development reverses waterfall's central assumption. Rather than fixing requirements upfront, the system evolves toward a final form by repeatedly producing versions that are shown to the customer. The customer reacts; the team adjusts; the next version goes further. Specification and development are interleaved rather than sequential.

Sommerville distinguishes two kinds:

Exploratory development
Work with customers to evolve a final system from an initial outline specification. Start with some well-understood requirements; the system grows as new features are proposed by the customer.
Evolutionary and incremental development
Use evolutionary and incremental techniques (such as prototyping) to manage changing customer requirements. This style aims to support the twelve principles of the Agile Manifesto, which centre on the customer and their changing requirements, on the development team and its interactions, and on early and continuous delivery.
[Figure: evolutionary development – from an outline description, the concurrent activities of specification, development, and validation produce an initial version, intermediate versions, and a final version]
Figure 4. Evolutionary development. The three core activities run concurrently and feed each other. Each pass produces a version that goes back to the customer for feedback. Adapted from Sommerville (2016).

Problems

Lack of process visibility. Without distinct stages, managers cannot easily track progress through traditional milestones.
Systems are often poorly structured. Continuous change tends to erode architectural integrity unless the team actively refactors.
Special skills may be required. Rapid prototyping, continuous integration, and similar techniques demand experienced developers.
Higher communication overhead. Continuous interaction with the customer and within the team can be expensive.

Applicability

Small or medium-size interactive systems
Where requirements are not possible to detail at the start, and rapid feedback from real users provides the clearest signal.
Parts of large systems
For example, the user interface of a large system, where the form needs to evolve in response to user testing.
Short-lifetime systems
When the system will not be in use long enough to justify heavyweight specification.
Where powerful tools are available
Visual development environments, frameworks, and prototyping tools make the cost of evolutionary change much lower than in raw coding.

Examples of agile process models

Evolutionary / Agile is itself a family. Specific named models that fit under it include:

Extreme Programming (XP)
Engineering-focused; pair programming, TDD, refactoring, simple design.
Scrum
Lightweight management framework with sprints, accountabilities, and inspect-and-adapt events.
Adaptive Software Development (ASD)
Speculate-collaborate-learn cycle; emphasises adapting to complexity rather than controlling it.
Dynamic Systems Development Method (DSDM)
Time-boxed delivery with prioritisation by MoSCoW (Must, Should, Could, Won't).
Crystal
A family of methods (Crystal Clear, Crystal Yellow, etc.) calibrated to team size and criticality.
Feature Driven Development (FDD)
Plan and build by feature; emphasises a feature list and class ownership.
Lean Software Development (LSD)
Adapts Lean manufacturing principles (eliminate waste, build quality in, defer commitment) to software.
Agile Modeling (AM) and Agile Unified Process (AUP)
Lightweight modelling practices, and a streamlined version of the Rational Unified Process.

The second companion in this pair, on iteration and Agile process models, covers Scrum, XP, Kanban, and the Spiral model in detail. The other names listed here are mentioned for completeness – students may encounter them in industry but they are less commonly used than Scrum or XP.

05 Software prototyping

What it is

Software prototyping is a development technique, not a process model. It is used to help understand system requirements, especially when the requirements are poorly understood at the start. The pattern is simple: develop a quick-and-dirty version of the system; expose it to user feedback; refine and re-develop. Repeat until an adequate system is developed, or until enough has been learned to specify the real system properly.

A prototype is

an initial version of a system used to demonstrate concepts and try out design options. It is not the system that ships – it is a tool for learning.

Where prototyping is used

In the requirements engineering process – to help with elicitation and validation. Showing customers something concrete is often more effective than asking them to imagine.
In the design process – to explore options and develop the user interface design.
In the testing process – to run back-to-back tests against an existing reference system.

The four-step prototyping process

[Figure: the four-step prototyping process – establish prototype objectives (producing a prototyping plan) → define prototype functionality (outline definition) → develop prototype (executable prototype) → evaluate prototype (evaluation report)]
Figure 5. Prototyping is a four-step technique. Each step produces an artifact that feeds the next, ending with an evaluation report that informs whether to iterate, refine, or move on.

Benefits

Improved system usability. Real users react to a real prototype.
A closer match to users' real needs. Prototypes surface unstated requirements.
Improved design quality. Design choices are tested against a working artifact rather than speculation.
Improved maintainability. Issues caught early are cheaper to fix.
Reduced development effort. Counter-intuitively, throwing away a prototype is often cheaper than building the wrong thing the first time.

A teaching note on KISS

Prototyping work is often described with the principle KISS – Keep It Simple, Stupid. The point is not that the prototype must be unsophisticated; it is that the goal of a prototype is to learn, and any complexity beyond what that learning needs is waste.

In your project

Phase 1 of the project asks the customer group to write a 1–2 page business outline, followed by an interview workshop with the developer group. That interview is, in effect, a chance to use prototyping techniques in their lightest form. If the developer group finds the customer's outline ambiguous, drawing a quick paper sketch of a screen, or describing a concrete user scenario, often surfaces hidden requirements faster than re-reading the document. You do not need to build software to prototype; sketches, mock screens, and scenario walk-throughs all count.

06 3. Formal systems development

The idea

Formal systems development is based on the transformation of a mathematical specification through different representations to an executable program. Each transformation is correctness-preserving, so showing that the program conforms to its specification is straightforward – the proof is built in.

This approach is embodied in the Cleanroom process, originally developed by IBM. Some authors consider formal systems development a variant of the waterfall model, with the development phase replaced by a chain of formal transformations. (The classic waterfall itself has a closely related variant, the V-model, in which test planning is mirrored onto each specification stage.)

[Figure: formal systems development – requirements definition → formal specification → formal transformation → integration and system testing; the formal specification passes through transformations T1–T4 and intermediate representations R1–R3 to an executable program, with proofs of correctness P1–P4 at each step]
Figure 6. Formal systems development. The formal specification is transformed step by step (T1, T2, T3, T4) into an executable program. Each transformation is accompanied by a proof of correctness (P1, P2, P3, P4). Adapted from Sommerville (2016).

What T1, T2, T3, T4 and P1, P2, P3, P4 actually mean: a worked example

The figure is abstract. To make it concrete, consider a small system: a traffic-light controller for a single intersection where two perpendicular roads meet. The safety property that must hold is simple: at no time may both roads show green simultaneously. Here is what the chain of refinements would look like.

FS
Formal specification. A mathematical description of what the controller must do. For our example: a state machine with four states (NS-green, NS-amber, EW-green, EW-amber), a clock, and the safety invariant not (NS-green and EW-green). Often written in Z, B, VDM, or TLA+.
T1
First transformation. Refine the abstract state machine into a more concrete one. The clock becomes a discrete tick counter; the four abstract states get explicit timer durations; the transitions become guarded rules. The output is the next-level representation, R1.
P1
First proof. Show that R1 preserves what FS specified. Concretely: prove that under the new tick-based representation, no reachable state has both NS-green and EW-green. The safety invariant must survive the refinement.
T2
Second transformation. Refine R1 toward an executable shape. The state machine becomes a structured-program skeleton with variables for the current direction and timer, and a main loop that advances on each tick. The output is R2.
P2
Second proof. Show that R2 preserves R1's behaviour. The structured-program version reaches the same states in the same order and obeys the same safety invariant.
T3
Third transformation. Add concrete data types and resource bindings. The timer becomes a 16-bit integer; the lights become writes to specific hardware addresses; the loop becomes an interrupt handler. The output is R3.
P3
Third proof. Show that R3 preserves R2. In particular, prove that integer overflow on the timer cannot cause an unsafe transition, and that hardware writes happen in an order that prevents both lights being green during the switch.
T4
Fourth transformation. Translate R3 into the target programming language (typically C, sometimes Ada SPARK or Rust for safety-critical work) and compile to executable code. The output is the executable program.
P4
Fourth proof. Show that the compiled code preserves the semantics of R3. In a Cleanroom-style process this might be done by a verified compiler; in less rigorous variants by careful inspection. Either way, by the time we reach this proof, the safety invariant has survived every step from the original mathematical specification down to the binary.

The point of all this work is the chain of guarantees: if FS is correct, and every Tᵢ–Pᵢ pair is correct, then by construction the executable cannot violate the safety property. This is what we mean by correctness-preserving transformations. The cost is real – every Pᵢ is a proof obligation that someone has to discharge – but for safety-critical systems the alternative is testing alone, and tests can only show the presence of bugs, never their absence.
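The formal specification FS in the example can itself be sketched as executable code. Below is a minimal model of the four-state machine with the safety invariant checked by exhaustive exploration of reachable states. The state names and the `lights()` mapping are illustrative assumptions; a real formal development would express this in a specification language such as Z or TLA+ and let a tool discharge the proof, rather than writing Python.

```python
# The abstract state machine: each state has exactly one successor.
TRANSITIONS = {
    "NS_GREEN": "NS_AMBER",
    "NS_AMBER": "EW_GREEN",
    "EW_GREEN": "EW_AMBER",
    "EW_AMBER": "NS_GREEN",
}

def lights(state):
    """Map the controller state to the (NS, EW) light colours."""
    return {
        "NS_GREEN": ("green", "red"),
        "NS_AMBER": ("amber", "red"),
        "EW_GREEN": ("red", "green"),
        "EW_AMBER": ("red", "amber"),
    }[state]

def check_safety(initial="NS_GREEN"):
    """Visit every state reachable from `initial`; fail if the safety
    invariant not(NS-green and EW-green) is ever violated."""
    seen, frontier = set(), [initial]
    while frontier:
        state = frontier.pop()
        if state in seen:
            continue
        seen.add(state)
        ns, ew = lights(state)
        assert not (ns == "green" and ew == "green"), f"unsafe state: {state}"
        frontier.append(TRANSITIONS[state])
    return seen
```

Because the state space here is finite and tiny, the "proof" is a complete enumeration; each refinement step Tᵢ would enlarge this model (ticks, timers, hardware writes) and each Pᵢ would re-establish the same invariant over the enlarged state space.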

Why the example is small on purpose

A real traffic-light controller has more states (yellow flashing for failure modes, pedestrian crossings, emergency-vehicle preemption), more invariants, and far more proof work. The example here has been deliberately stripped to its bare bones so the structure of the T-and-P chain is visible. Each Tᵢ refines the representation toward implementation; each Pᵢ rebuilds the safety guarantee at the new level. The same shape applies to airbag controllers, infusion-pump dosing logic, and railway interlocking systems.

Problems

Specialised skills required. Formal methods demand training in mathematical logic and specification languages (Z, B, VDM, TLA+).
Difficult to formally specify some aspects. User interfaces and other behaviour involving humans are hard to capture mathematically.

Applicability

Critical systems
Especially those where a safety or security case must be made before the system is put into operation. Avionics, nuclear control systems, secure cryptographic protocols.
Small systems or parts of large systems
Formal methods scale poorly. They are typically applied to the safety-critical core of a larger system rather than the whole.

07 4. Reuse-oriented development

The idea

Reuse-oriented development is based on systematic reuse: systems are integrated from existing components or commercial off-the-shelf (COTS) systems. Rather than build from scratch, the team finds, evaluates, and assembles. This approach is becoming more important and popular, but we still have limited experience with its wide use across different domains.

Process stages

1
Component analysis. Given the requirements specification, search for components that could provide the needed functionality. There is rarely an exact match – the components found will provide some of what is needed.
2
Requirements modification. Adjust the requirements to reflect what the available components actually provide. Where adjustment is impossible, the search for components must be repeated or a build-versus-buy decision made.
3
System design with reuse. Design the system framework around the chosen components. New software must be designed if reusable components are not available.
4
Development and integration. Software not bought is developed; components are integrated to form the system. Integration is part of development, not an addition to it.
[Figure: reuse-oriented development – requirements specification → component analysis → requirements modification → system design with reuse → development and integration → system validation]
Figure 7. Reuse-oriented development. Requirements drive a search for components, the requirements then adjust to fit what is available, design wraps the components, and the system is integrated and validated. Adapted from Sommerville (2016).
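Component analysis and requirements modification (stages 1 and 2) can be sketched as a matching exercise. The catalogue entries and feature names below are invented for illustration: each candidate component covers some of the required features, and whatever no component covers is exactly what the team must build itself or negotiate out of the requirements.

```python
# Hypothetical component catalogue: component name -> features it provides.
CATALOGUE = {
    "AuthLib":    {"login", "password_reset"},
    "PayCo COTS": {"card_payment", "refunds"},
    "MapWidget":  {"map_display"},
}

def component_analysis(required: set[str]) -> dict[str, set[str]]:
    """Stage 1: which catalogue components cover which required features?"""
    return {name: feats & required
            for name, feats in CATALOGUE.items() if feats & required}

def requirements_gap(required: set[str]) -> set[str]:
    """Input to stage 2: features no component provides. Build these,
    or modify the requirements to live without them."""
    covered = set().union(*CATALOGUE.values())
    return required - covered
```

For `required = {"login", "card_payment", "loyalty_points"}`, the analysis matches `AuthLib` and `PayCo COTS`, and the gap is `loyalty_points` — the build-versus-buy (or drop) decision of stage 2 in miniature.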

Problems

Specialised analysis and integration skills. Selecting components for both functionality and quality requires real expertise.
Some aspects do not reuse easily. User interfaces, in particular, are highly project-specific.
Maintainability concerns. Reused components may not be supported by their suppliers indefinitely.
Evolution constrained by component suppliers. The team's ability to change the system is limited by what the component suppliers do.

Applicability

Non-critical systems with common functionality
Where reusable components exist for the common parts (authentication, payment, mapping, document handling), reuse is a strong default.
Large systems
Component analysis and integration may be too expensive for small or mid-size projects, but pay off when the project is large.

In your project

The course project does not reach the implementation phase – it ends with system modelling and design (component, architecture, deployment diagrams). Even so, when you draw your component diagram in Phase 4, you should think about which components you would write yourself and which you would buy or take from a library: a payment component, an authentication component, a mapping component. The component diagram is more honest – and more realistic – when it acknowledges what would actually be reused. The point of reuse-oriented thinking is to make that reuse a deliberate design decision, not an afterthought.

08 Side by side

A single-page comparison of the four generic models, with prototyping shown as a technique that can be applied within any of them.

|                      | Waterfall                                             | Evolutionary / Agile                              | Formal                                                              | Reuse-oriented                                     |
| Core idea            | Distinct sequential phases                            | Specification and development interleaved         | Specification transformed into code by correctness-preserving steps | System assembled from existing components          |
| Best when            | Requirements are stable and well understood           | Requirements will evolve during the project       | Failure is unacceptable; correctness must be proven                 | Common functionality already exists as components  |
| Worst when           | Requirements will change                              | Process visibility and formal sign-off are needed | System has heavy human-interface content                            | Project is small; integration costs dominate       |
| Documentation        | Heavy, per stage                                      | Lean, evolves with the system                     | Heavy, formal specifications and proofs                             | Component contracts, integration design            |
| Customer involvement | Mostly at start (specification) and end (acceptance)  | Continuous                                        | At specification; less during transformation                        | At requirements and at adjustment of requirements  |
| Risk profile         | Late discovery of problems                            | Drift if not disciplined                          | Specialist skill bottleneck                                         | Component suppliers' future                        |
| Typical domain       | Aerospace, defence, regulated                         | Most modern commercial software                   | Safety-critical, security-critical kernels                          | Enterprise applications, ERP, integration projects |

How to choose at the broadest level

The honest answer is that real projects rarely use a pure form. A safety-critical project will use formal methods for the kernel, waterfall-like discipline for the structure, and prototyping for the user-facing parts. A start-up will use evolutionary development for the application, reuse-oriented development for non-differentiating parts, and prototyping for the user interface. The model is a lens, not a cage.

The factors that drive the choice:

How stable are the requirements? Stable → waterfall-friendly. Volatile → evolutionary.
How critical is the system? Safety-critical → formal methods for the core. Routine → reuse and evolutionary.
How much already exists? Lots of components → reuse-oriented. Greenfield → build from scratch.
How available is the customer? Continuously available → evolutionary. Hard to reach → waterfall, with all the risks that implies.
What does the contract say? Fixed-price, fixed-scope, fixed-date → some kind of plan-driven model. Time-and-materials with discovery → evolutionary.
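The factors above can be turned into a rough decision sketch. The mapping from answers to models below is an illustrative assumption, not a published method: real projects weigh these factors with judgment and usually end up with a blend, as the worked examples show.

```python
def suggest_process(stable_requirements: bool,
                    safety_critical: bool,
                    components_exist: bool,
                    customer_available: bool,
                    fixed_contract: bool) -> list[str]:
    """Map the five choice factors to a (blended) process suggestion."""
    suggestion = []
    if safety_critical:
        suggestion.append("formal methods for the critical core")
    if components_exist:
        suggestion.append("reuse-oriented for non-differentiating parts")
    # Stable requirements, a fixed-price contract, or an unreachable
    # customer all pull toward a plan; otherwise evolve.
    if stable_requirements or fixed_contract or not customer_available:
        suggestion.append("plan-driven (waterfall-like) backbone")
    else:
        suggestion.append("evolutionary / agile development")
    return suggestion

# A start-up building a delivery app (cf. Example 2 in the next section):
print(suggest_process(stable_requirements=False, safety_critical=False,
                      components_exist=True, customer_available=True,
                      fixed_contract=False))
```

Note that the function can return several entries at once — which is the point: the choice is a blend, not a single pick from the table.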

The next section makes these factors concrete with four worked examples.

09 What process for what software?

To make the choice tangible, here are four examples drawn from systems you may know. Each example states the problem, names the dominant factors, and proposes a process – usually a blend of models, not a pure one. The reasoning matters more than the conclusion.

Example 1 – A university course registration portal

The system

A web application where students browse the catalogue, enrol in courses, drop or add up to the deadline, and view their schedule. Faculty manage class lists; the registrar approves overrides. Replaces a paper-and-spreadsheet system.

Dominant factors. Requirements are well understood (every university already has a registration process); the organisation is bureaucratic and prefers to sign off documents; the system must integrate with existing student records; security and audit are important; the user community is captive (students must use it).

Proposed process. A largely plan-driven approach with a waterfall-like backbone for the integration and audit work, but with prototyping heavily used for the student-facing interface. The reason: the back end (catalogue, enrolment rules, transcript integration) maps cleanly onto a stable specification, but the user interface is where students will actually judge the system, and there the only reliable way to get it right is to put a prototype in front of real students. Reuse-oriented thinking applies to authentication, payment, and notification components – none of which the team should build from scratch.

What you would not do. Pure agile with no upfront documentation – the registrar will not sign off on anything that has not been specified. Pure waterfall – the user interface will be wrong on the first try.

Example 2 – A food-delivery mobile app start-up

The system

A mobile app and back end where customers order food from local restaurants, pay in-app, and track delivery. Drivers use a separate app to accept jobs and navigate. Restaurants use a dashboard to manage their menu and incoming orders.

Dominant factors. Requirements are not well understood at the start – the team needs to discover what users will actually pay for, what makes drivers stay, and what restaurants tolerate. The market changes faster than any specification document can keep up with. The team is small and co-located. There is no regulator to satisfy. Speed to market is critical.

Proposed process. Strongly evolutionary / agile, with two-week iterations and continuous customer feedback. Prototyping is used at the start to validate the core flows before any real engineering. Reuse-oriented for the parts that are not differentiators – maps, payments, push notifications, identity – buy or use third-party APIs rather than build. Waterfall would actively damage this project; the team needs to learn from real users in weeks, not produce a specification document over months.

What you would not do. Lock the specification before the first release. Build in-house what already exists as a paid API. Skip user testing because "we know what users want".

Example 3 – A medical infusion-pump controller

The system

Embedded software that controls the rate at which a hospital infusion pump delivers medication to a patient. A failure can kill. The software must be approved by a regulator before deployment.

Dominant factors. Failure consequences are catastrophic. Requirements come from a regulator and from clinical experts, and they are stable but voluminous. Documentation must be exhaustive – every requirement must be traceable to a design decision, to code, and to a test. Cost is a secondary concern; correctness is primary.

Proposed process. A formal systems development approach for the safety-critical control loop (drug-rate calculation, alarm logic), surrounded by a waterfall-like structure with full V&V (verification and validation) at every stage for the rest of the system. Prototyping may be used for the operator interface (which is where most clinical errors actually originate), but the prototype is then re-specified and re-built rigorously, not handed over as a finished product. Agile and Spiral are unsuitable here: the regulator does not accept "we will refine the requirements as we go".

What you would not do. Use evolutionary development for the safety kernel. Skip the formal proof obligations. Treat documentation as overhead.

Example 4 – A national tax-filing system being modernised

The system

A government department wants to replace a thirty-year-old mainframe tax-filing system with a modern web-based one. The new system must integrate with banks, employers, and existing audit databases. It must handle millions of submissions during the filing window.

Dominant factors. Requirements are mostly fixed by tax law (and change yearly when the law changes). Integration with legacy systems dominates the technical effort. The timeline is constrained by the tax year. The user base is enormous and politically visible – any failure makes the news. The procurement process favours fixed-price contracts and formal sign-off.

Proposed process. A primarily reuse-oriented approach – very little of this system should be written from scratch when commercial tax-engine components, identity-verification services, and government-cloud platforms exist. Plan-driven structure to satisfy procurement and political accountability. Incremental rollout (covered in detail in the second companion): launch first to a small pilot region, then expand. Prototyping for the citizen-facing forms, where errors translate into a flood of help-desk calls. Pure agile would not survive the procurement process; pure waterfall would deliver too late, and the result would be too rigid to absorb annual tax-law changes.

What you would not do. Build a tax engine from scratch when several mature ones exist. Promise a single big-bang launch on a fixed date for the whole country.

The pattern across the four examples

Notice that none of the four answers was a single pure model. Every real project blends. The dominant model usually comes from the dominant factor – criticality drives the medical pump toward formal methods; market uncertainty drives the start-up toward agile; integration cost drives the tax system toward reuse; institutional process drives the registration portal toward plan-driven. But every project also borrows from the others where it makes sense.

In your project

The course project follows a structure that is closest to a plan-driven approach – fixed phases (Initiation, Business Definition, Requirements Engineering, Requirements Analysis and Modelling, System Modelling and Design), each with defined outputs, and a milestone interview between phases. This is deliberate: it gives you the chance to practise each activity carefully. But the milestone interviews themselves are evolutionary in spirit – they exist to surface misunderstandings and let the customer-developer pair correct course before the next phase. So even your project, at the activity level, is a small blend.

10 What is next

The four generic models above describe how a project is organised at the highest level. They do not yet say how iteration is structured, how risk is managed, or how Agile teams actually work day to day.

The second companion in this pair, "Iteration and Agile process models – Agile, Scrum, XP, Kanban and Spiral", takes that next step. It covers:

Process iteration: the difference between iterative and incremental development, and how iterations are structured.
The Agile philosophy: the four values and twelve principles of the 2001 Manifesto.
Scrum: the most-used Agile framework, with its accountabilities, events, and artifacts.
XP: the engineering practices that make iterative development actually work – pair programming, TDD, continuous integration, refactoring.
Kanban: a flow-based alternative for work that does not fit fixed sprints.
The Spiral model: Boehm's risk-driven process model that pre-dates Agile and remains the right choice for some large, risky projects.

Beyond this companion, the four core activities – Specification, Design, Validation, and Evolution – each become the focus of subsequent topics in the course. Requirements engineering covers specification in depth. The system modelling block covers design through UML. Validation is interleaved throughout and revisited in the design block.

11 References

Primary sources for this lecture

Foundational papers and books

Recommended further reading