Software Development Lifecycle Crash Course

Master SDLC from Planning to Deployment

Lesson 1: SDLC Basics and Models

What is SDLC?

The Software Development Life Cycle (SDLC) is a structured process used by software development teams to design, develop, test, and deploy high-quality software. It provides a systematic approach to building software that meets customer requirements.

Key Benefit: SDLC provides a framework that ensures consistent quality, reduces development costs, shortens development time, and improves project management and control.

The SDLC Phases

Most SDLC models include these core phases:

  1. Planning: Define project scope, objectives, and feasibility
  2. Requirements Analysis: Gather and document what the software should do
  3. Design: Create architecture and detailed design specifications
  4. Implementation (Development): Write the actual code
  5. Testing: Verify the software works correctly
  6. Deployment: Release the software to users
  7. Maintenance: Fix bugs, add features, and provide support

SDLC Models Overview

Different projects require different approaches. Here are the main SDLC models:

1. Waterfall Model

The traditional, linear sequential approach where each phase must be completed before the next begins.

Waterfall Flow:
Requirements → Design → Implementation → Testing → Deployment → Maintenance

Characteristics:
✓ Sequential and linear
✓ Each phase has specific deliverables
✓ No overlap between phases
✓ Extensive documentation
✓ Changes are difficult and costly

Best For:
• Well-defined requirements
• Stable technology
• Short projects
• Regulated industries requiring documentation

Waterfall Limitation: Cannot go back to previous phases easily. If requirements change mid-project, it can be very costly to adapt.

2. Agile Model

Iterative and incremental approach that emphasizes flexibility, collaboration, and customer feedback.

Agile Cycle (Iterative):
Plan → Design → Develop → Test → Review → Deploy
  ↑______________________________________________|
  (Repeat for each sprint/iteration)

Characteristics:
✓ Short iterations (1-4 weeks)
✓ Continuous customer involvement
✓ Flexible to changes
✓ Working software over documentation
✓ Cross-functional teams

Best For:
• Evolving requirements
• Customer-facing applications
• Projects requiring fast delivery
• Innovative products

3. Iterative Model

Build software through repeated cycles, each producing a more complete version.

Iterative Approach:
Iteration 1: Core features       → Test → Release v1.0
Iteration 2: Additional features → Test → Release v2.0
Iteration 3: Enhanced features   → Test → Release v3.0

Characteristics:
✓ Software grows incrementally
✓ Early versions are functional
✓ Feedback incorporated each iteration
✓ Risk reduced through testing cycles

Best For:
• Large projects
• Projects with clear core requirements
• When early market entry is valuable

4. Spiral Model

Combines iterative development with systematic risk management.

Spiral Phases (Repeated):
1. Planning: Define objectives and constraints
2. Risk Analysis: Identify and resolve risks
3. Engineering: Build and test
4. Evaluation: Customer review and planning of the next iteration

Characteristics:
✓ Risk-driven approach
✓ Suitable for large, complex projects
✓ Extensive risk analysis
✓ Flexible and adaptable

Best For:
• High-risk projects
• Large-scale systems
• Projects with unclear requirements
• Mission-critical applications

5. V-Model (Validation and Verification)

Extension of waterfall with emphasis on testing at each development stage.

V-Model Structure:
Requirements   ←→ Acceptance Testing
    ↓                   ↑
Design         ←→ System Testing
    ↓                   ↑
Architecture   ←→ Integration Testing
    ↓                   ↑
Implementation ←→ Unit Testing

Characteristics:
✓ Testing planned in parallel with development
✓ Highly disciplined
✓ Clear milestones
✓ Works well for small projects

Best For:
• Projects with clear, stable requirements
• Safety-critical systems
• Medical/automotive software
• Where testing is paramount

6. DevOps Model

Integrates development and operations for continuous delivery.

DevOps Cycle:
Plan → Code → Build → Test → Release → Deploy → Operate → Monitor
  ↑_____________________________________________________________|
  (Continuous Loop)

Characteristics:
✓ Automation emphasis (CI/CD)
✓ Continuous integration and delivery
✓ Collaboration between dev and ops
✓ Fast feedback loops
✓ Infrastructure as code

Best For:
• Cloud-native applications
• Microservices architectures
• Continuous deployment needs
• High-frequency releases

Choosing the Right SDLC Model

| Factor | Waterfall | Agile | DevOps |
|---|---|---|---|
| Requirements | Clear and fixed | Evolving | Continuous change |
| Project Size | Small to medium | Any size | Medium to large |
| Customer Involvement | Limited | High | Medium |
| Delivery Speed | Slow | Fast | Very fast |
| Documentation | Extensive | Minimal | Automated |
| Risk | High (late testing) | Low (frequent testing) | Very low (continuous) |

SDLC Best Practices

  • Choose the model that fits your project context
  • Involve stakeholders throughout the process
  • Document key decisions and requirements
  • Plan for testing from the beginning
  • Use version control for all code
  • Automate repetitive tasks
  • Conduct regular code reviews
  • Maintain clear communication channels
Modern Trend: Many organizations use hybrid approaches, combining elements of multiple models (e.g., Agile development with DevOps deployment) to best fit their needs.

Test Your Knowledge - Lesson 1

1. Which SDLC model follows a strict sequential approach where each phase must be completed before the next?

2. What is the primary focus of the DevOps model?

3. Which model emphasizes testing at each development stage with a corresponding validation phase?

Lesson 2: Requirements and Analysis

Requirements Engineering

Requirements engineering is the process of defining, documenting, and maintaining software requirements. It's the foundation of successful software development.

Critical Fact: Studies show that fixing a requirements defect found after deployment can cost up to 100x more than fixing it during the requirements phase. Getting requirements right is crucial!

Types of Requirements

1. Functional Requirements

Define what the system should do - specific behaviors and functions.

Functional Requirements Examples:

E-commerce System:
• User shall be able to add items to the shopping cart
• System shall calculate the total price including tax
• User shall be able to apply discount codes
• System shall send an order confirmation email
• User shall be able to track order status

Banking Application:
• System shall allow users to transfer funds between accounts
• System shall display transaction history for the last 90 days
• System shall require two-factor authentication for login
• System shall generate monthly account statements

2. Non-Functional Requirements

Define how the system should perform - quality attributes and constraints.

Non-Functional Requirements Categories:

Performance:
• System shall respond to user requests within 2 seconds
• System shall handle 10,000 concurrent users
• Database queries shall complete in under 500ms

Security:
• All data shall be encrypted in transit using TLS 1.3
• User passwords shall be hashed with bcrypt
• System shall log all authentication attempts

Usability:
• New users shall complete registration in under 3 minutes
• Interface shall comply with WCAG 2.1 AA accessibility standards
• System shall support English, Spanish, and French

Reliability:
• System uptime shall be 99.9% (except planned maintenance)
• System shall back up data every 6 hours
• System shall recover from failures within 5 minutes

Scalability:
• System shall scale to support 1 million users
• System shall handle 100,000 transactions per hour

Requirements Gathering Techniques

1. Interviews

  • One-on-one or group discussions with stakeholders
  • Prepare questions in advance
  • Use open-ended questions to explore needs
  • Document and verify understanding

2. Workshops

  • Collaborative sessions with multiple stakeholders
  • Facilitate consensus building
  • Use techniques like brainstorming and voting
  • Produce immediate results

3. Surveys and Questionnaires

  • Gather input from large user base
  • Quantify preferences and priorities
  • Reach geographically distributed users
  • Analyze results statistically

4. Observation

  • Watch users perform tasks in their environment
  • Identify unstated needs and workarounds
  • Understand actual vs. perceived workflows
  • Discover edge cases

5. Prototyping

  • Build mockups or working prototypes
  • Get early feedback on design
  • Validate understanding of requirements
  • Reduce risk of misunderstanding

User Stories

In Agile development, requirements are often written as user stories:

User Story Format:

As a [role/persona],
I want to [goal/action],
So that [benefit/value].

Examples:

As a customer,
I want to save items to a wishlist,
So that I can purchase them later.

As an administrator,
I want to view the system analytics dashboard,
So that I can monitor application performance.

As a mobile user,
I want to use fingerprint authentication,
So that I can log in quickly and securely.

Acceptance Criteria:

Given [context/precondition]
When [action/event]
Then [expected outcome]

Example:
Given I am logged in
When I click the "Add to Wishlist" button
Then the item is saved to my wishlist
And I see a confirmation message
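Acceptance criteria in Given/When/Then form translate almost mechanically into automated tests. Below is a minimal sketch of that mapping for the wishlist story above; the `createSession` and `addToWishlist` functions are invented for illustration, not a real API.

```javascript
// Hypothetical wishlist module; names are assumptions for illustration only.
function createSession(loggedIn) {
  return { loggedIn, wishlist: [], messages: [] };
}

function addToWishlist(session, item) {
  if (!session.loggedIn) {
    throw new Error('Must be logged in');
  }
  session.wishlist.push(item);
  session.messages.push(`"${item}" added to your wishlist`);
}

// Given I am logged in
const session = createSession(true);
// When I click the "Add to Wishlist" button
addToWishlist(session, 'laptop');
// Then the item is saved to my wishlist, and a confirmation message exists
console.log(session.wishlist, session.messages);
```

Each clause of the criteria becomes one step of the test: the Given sets up state, the When performs the action, and the Then becomes an assertion.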

Use Cases

Detailed descriptions of how users interact with the system:

Use Case: Process Online Order
Actor: Customer
Precondition: Customer has items in cart
Postcondition: Order is placed and confirmed

Main Flow:
1. Customer reviews items in shopping cart
2. Customer clicks "Checkout"
3. System displays shipping address form
4. Customer enters shipping address
5. System displays payment options
6. Customer selects payment method
7. Customer enters payment details
8. System validates payment information
9. System processes payment
10. System creates order record
11. System sends confirmation email
12. System displays order confirmation page

Alternative Flows:
A1: Payment Declined (at step 9)
  1. System displays error message
  2. Customer updates payment details
  3. Return to step 8
A2: Item Out of Stock (at step 10)
  1. System notifies customer
  2. Customer removes item or waits
  3. Return to step 1

Requirements Documentation

Software Requirements Specification (SRS)

A comprehensive document describing what the software will do:

SRS Template:

1. Introduction
   1.1 Purpose
   1.2 Scope
   1.3 Definitions and Acronyms
   1.4 References
   1.5 Overview
2. Overall Description
   2.1 Product Perspective
   2.2 Product Functions
   2.3 User Characteristics
   2.4 Constraints
   2.5 Assumptions and Dependencies
3. Functional Requirements
   3.1 User Management
   3.2 Order Processing
   3.3 Reporting
   [etc.]
4. Non-Functional Requirements
   4.1 Performance Requirements
   4.2 Security Requirements
   4.3 Usability Requirements
   4.4 Reliability Requirements
5. External Interface Requirements
   5.1 User Interfaces
   5.2 Hardware Interfaces
   5.3 Software Interfaces
   5.4 Communication Interfaces
6. Other Requirements
   6.1 Database Requirements
   6.2 Legal and Regulatory Requirements

Requirements Prioritization

Not all requirements are equally important. Common prioritization methods:

MoSCoW Method

  • Must Have: Critical requirements, project fails without them
  • Should Have: Important but not vital, can be delayed if necessary
  • Could Have: Desirable but not necessary, nice to have
  • Won't Have (this time): Explicitly out of scope for this release

Requirements Validation

Ensure requirements are:

  • Complete: Nothing is missing
  • Consistent: No contradictions
  • Unambiguous: Clear and precise
  • Verifiable: Can be tested
  • Feasible: Can be implemented with available resources
  • Traceable: Can be tracked through development
Common Requirement Pitfalls:
  • Vague language: "The system shall be fast" (How fast?)
  • Gold plating: Adding unnecessary features
  • Scope creep: Uncontrolled requirement additions
  • Conflicting requirements: Different stakeholders want opposite things
  • Unstated assumptions: "Everyone knows that..."

Requirements Traceability

Track requirements throughout the SDLC:

| Requirement ID | Description | Priority | Design Ref | Code Module | Test Case | Status |
|---|---|---|---|---|---|---|
| REQ-001 | User login | Must | DES-001 | auth.js | TC-001 | Implemented |
| REQ-002 | Password reset | Should | DES-002 | auth.js | TC-002 | In Progress |

Best Practice: Involve end users in requirements gathering. They often know what they need better than intermediaries. Validate requirements with working prototypes whenever possible.

Test Your Knowledge - Lesson 2

1. Which type of requirement describes WHAT the system should do?

2. In the MoSCoW method, what does the "M" stand for?

3. What is the primary purpose of a Software Requirements Specification (SRS)?

Lesson 3: Design and Architecture

Software Design Principles

Good software design follows fundamental principles that make systems maintainable, scalable, and robust.

SOLID Principles

S - Single Responsibility Principle
    A class should have only one reason to change.
    Example: Separate data access from business logic

O - Open/Closed Principle
    Software entities should be open for extension, closed for modification.
    Example: Use interfaces and inheritance to extend functionality

L - Liskov Substitution Principle
    Derived classes must be substitutable for their base classes.
    Example: Subclasses should work wherever the parent class is expected

I - Interface Segregation Principle
    Clients shouldn't depend on interfaces they don't use.
    Example: Create specific interfaces rather than one large interface

D - Dependency Inversion Principle
    Depend on abstractions, not concrete implementations.
    Example: Use dependency injection, depend on interfaces
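Two of these principles can be sketched together in a few lines. The classes below are invented for illustration: `UserRepository` keeps persistence as its single responsibility, and `UserService` receives its repository through the constructor (dependency injection), so it depends on "anything with save/findById" rather than a concrete database class.

```javascript
// SRP: persistence logic lives in its own class.
class UserRepository {
  constructor() { this.rows = new Map(); }
  save(user) { this.rows.set(user.id, user); }
  findById(id) { return this.rows.get(id); }
}

// DIP: the service depends on an injected abstraction, not a concrete store,
// so an in-memory fake can replace a real database in tests.
class UserService {
  constructor(repository) { this.repository = repository; }
  register(id, name) {
    if (!name) throw new Error('Name is required'); // business rule stays here
    const user = { id, name };
    this.repository.save(user);
    return user;
  }
}

const service = new UserService(new UserRepository());
console.log(service.register(1, 'Ada'));
```

Because the service never names a concrete storage class, swapping in a SQL-backed repository later requires no change to `UserService`.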

DRY - Don't Repeat Yourself

Avoid code duplication. Each piece of knowledge should have a single, authoritative representation.

Bad (Repetitive):

function calculateOrderTotal(order) {
  return order.subtotal + (order.subtotal * 0.08) + 5.99;
}

function calculateInvoiceTotal(invoice) {
  return invoice.subtotal + (invoice.subtotal * 0.08) + 5.99;
}

Good (DRY):

function calculateTotal(subtotal) {
  const TAX_RATE = 0.08;
  const SHIPPING = 5.99;
  return subtotal + (subtotal * TAX_RATE) + SHIPPING;
}

KISS - Keep It Simple, Stupid

Simplicity should be a key goal. Avoid unnecessary complexity.

YAGNI - You Aren't Gonna Need It

Don't add functionality until it's necessary. Avoid over-engineering.

Software Architecture Patterns

1. Layered (N-Tier) Architecture

Organizes the application into horizontal layers, each with specific responsibilities.

Typical 3-Tier Architecture:

┌─────────────────────────────┐
│     Presentation Layer      │  UI, Views, Controllers
│      (User Interface)       │  Handles user interaction
└─────────────────────────────┘
              ↓↑
┌─────────────────────────────┐
│    Business Logic Layer     │  Services, Business Rules
│     (Application Logic)     │  Core application logic
└─────────────────────────────┘
              ↓↑
┌─────────────────────────────┐
│      Data Access Layer      │  Repositories, DAOs
│         (Database)          │  Database operations
└─────────────────────────────┘

Benefits:
✓ Clear separation of concerns
✓ Easy to understand and maintain
✓ Layers can be tested independently
✓ Can replace layers without affecting others

Use Cases:
• Traditional web applications
• Enterprise applications
• Line-of-business applications

2. MVC (Model-View-Controller)

MVC Pattern:

┌─────────┐          ┌──────────────┐
│  View   │ ←─────── │  Controller  │
│  (UI)   │          │   (Logic)    │
└─────────┘          └──────────────┘
     ↑                      ↓
     │      ┌─────────┐     │
     └───── │  Model  │ ←───┘
            │ (Data)  │
            └─────────┘

Components:
• Model: Data and business logic
• View: UI and presentation
• Controller: Handles user input, updates model/view

Benefits:
✓ Separation of concerns
✓ Parallel development
✓ Multiple views for the same model
✓ Easier testing

Examples:
• Ruby on Rails
• ASP.NET MVC
• Django (MVT variant)

3. Microservices Architecture

Microservices Structure:

┌─────────────┐  ┌─────────────┐  ┌─────────────┐
│    User     │  │    Order    │  │   Payment   │
│   Service   │  │   Service   │  │   Service   │
│    (DB)     │  │    (DB)     │  │    (DB)     │
└─────────────┘  └─────────────┘  └─────────────┘
       ↑                ↑                ↑
       └────────────────┴────────────────┘
                        │
               ┌────────────────┐
               │  API Gateway   │
               └────────────────┘
                        ↑
                  ┌─────────┐
                  │ Clients │
                  └─────────┘

Characteristics:
✓ Small, independent services
✓ Each service has its own database
✓ Communicate via APIs (REST, gRPC)
✓ Deployed independently
✓ Technology agnostic

Benefits:
✓ Scalability (scale services independently)
✓ Flexibility (different tech stacks)
✓ Resilience (service isolation)
✓ Fast deployment

Challenges:
✗ Distributed system complexity
✗ Network latency
✗ Data consistency
✗ Testing complexity

Best For:
• Large, complex applications
• High scalability needs
• Multiple development teams
• Cloud-native applications

4. Event-Driven Architecture

Event-Driven Pattern:

Producer → Event → Event Bus → Subscribers
   (1)      (2)       (3)          (4)

Flow:
1. Service produces an event (e.g., "Order Placed")
2. Event is published with data
3. Event bus routes it to interested subscribers
4. Multiple services consume and react

Example:
Order Placed Event
→ Inventory Service (reduce stock)
→ Email Service (send confirmation)
→ Analytics Service (track sale)
→ Shipping Service (prepare shipment)

Benefits:
✓ Loose coupling
✓ Scalability
✓ Asynchronous processing
✓ Easy to add new consumers

Use Cases:
• Real-time systems
• IoT applications
• Streaming data processing
• Complex workflows
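The flow above can be sketched as a tiny in-process event bus. This is a simplified single-process stand-in for a real broker (Kafka, RabbitMQ, etc.), and the subscriber names are invented; the point is only to show producers and consumers staying decoupled.

```javascript
// Minimal publish/subscribe bus: producers and subscribers only share
// the event name, never references to each other.
class EventBus {
  constructor() { this.handlers = {}; }
  subscribe(eventName, handler) {
    if (!this.handlers[eventName]) this.handlers[eventName] = [];
    this.handlers[eventName].push(handler);
  }
  publish(eventName, payload) {
    (this.handlers[eventName] || []).forEach((h) => h(payload));
  }
}

const bus = new EventBus();
const log = [];

// Independent "services" react to the same event.
bus.subscribe('OrderPlaced', (e) => log.push(`inventory: reserve ${e.sku}`));
bus.subscribe('OrderPlaced', (e) => log.push(`email: confirm order ${e.orderId}`));

bus.publish('OrderPlaced', { orderId: 42, sku: 'ABC-1' });
console.log(log);
```

Adding a new consumer (say, an analytics handler) is one more `subscribe` call; the producer is untouched, which is the loose coupling the pattern promises.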

Design Patterns

Reusable solutions to common software design problems:

Creational Patterns

  • Singleton: Ensure only one instance of a class exists
  • Factory: Create objects without specifying exact class
  • Builder: Construct complex objects step by step
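As an example of the creational group, here is a minimal Factory sketch; the notifier classes and channel names are invented for illustration. Callers ask the factory for "a notifier" without ever naming a concrete class.

```javascript
// Concrete products (invented for this sketch).
class EmailNotifier {
  send(msg) { return `email: ${msg}`; }
}
class SmsNotifier {
  send(msg) { return `sms: ${msg}`; }
}

// Factory: the only place that knows which concrete class to construct.
function createNotifier(channel) {
  switch (channel) {
    case 'email': return new EmailNotifier();
    case 'sms':   return new SmsNotifier();
    default: throw new Error(`Unknown channel: ${channel}`);
  }
}

console.log(createNotifier('email').send('Order shipped'));
```

Supporting a new channel means adding one case to the factory; calling code that holds "a notifier" does not change.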

Structural Patterns

  • Adapter: Make incompatible interfaces work together
  • Decorator: Add functionality to objects dynamically
  • Facade: Provide simplified interface to complex system

Behavioral Patterns

  • Observer: Notify multiple objects of state changes
  • Strategy: Encapsulate algorithms for easy swapping
  • Command: Encapsulate requests as objects
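From the behavioral group, the Strategy pattern is the easiest to show in a few lines. The shipping-cost strategies below are invented for illustration: each algorithm is a plain function, and the calculator swaps between them without any conditional logic of its own.

```javascript
// Interchangeable algorithms behind a common shape (weightKg -> cost).
const strategies = {
  standard: (weightKg) => 5 + weightKg * 0.5,
  express:  (weightKg) => 15 + weightKg * 1.0,
};

// Context: holds a strategy but knows nothing about its internals.
class ShippingCalculator {
  constructor(strategy) { this.strategy = strategy; }
  cost(weightKg) { return this.strategy(weightKg); }
}

const calc = new ShippingCalculator(strategies.standard);
console.log(calc.cost(10)); // 5 + 10 * 0.5 = 10

calc.strategy = strategies.express; // swap the algorithm at runtime
console.log(calc.cost(10)); // 15 + 10 * 1.0 = 25
```

Compared with an `if (shippingType === ...)` chain, new pricing rules are added as new entries in `strategies` rather than edits to the calculator.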

Database Design

Relational Database Design

Normalization Forms:

1NF (First Normal Form):
• Eliminate repeating groups
• Each column contains atomic values
• Each column contains values of a single type

2NF (Second Normal Form):
• Must be in 1NF
• All non-key attributes fully dependent on the primary key
• Remove partial dependencies

3NF (Third Normal Form):
• Must be in 2NF
• No transitive dependencies
• Non-key attributes depend only on the primary key

Example:

Bad (unnormalized):
Orders: OrderID, CustomerName, CustomerAddress, Items

Good (normalized):
Customers: CustomerID, Name, Address
Orders: OrderID, CustomerID, OrderDate
OrderItems: OrderItemID, OrderID, ProductID, Quantity
Products: ProductID, Name, Price
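To make the payoff concrete, here is the normalized layout above modeled as in-memory tables; the sample rows are invented. Because each fact lives in exactly one place, reassembling an order is a set of key lookups rather than copies of customer data on every row.

```javascript
// The four normalized "tables" as arrays of plain objects (sample data).
const customers  = [{ customerId: 1, name: 'Ada', address: '1 Main St' }];
const products   = [{ productId: 10, name: 'Laptop', price: 999 }];
const orders     = [{ orderId: 100, customerId: 1, orderDate: '2024-01-15' }];
const orderItems = [{ orderItemId: 1000, orderId: 100, productId: 10, quantity: 2 }];

// A "join": follow the foreign keys instead of duplicating data.
function orderSummary(orderId) {
  const order = orders.find((o) => o.orderId === orderId);
  const customer = customers.find((c) => c.customerId === order.customerId);
  const items = orderItems
    .filter((i) => i.orderId === orderId)
    .map((i) => ({
      ...products.find((p) => p.productId === i.productId),
      quantity: i.quantity,
    }));
  return { customer: customer.name, items };
}

console.log(orderSummary(100));
```

If Ada's address changes, only one row in `customers` is updated; in the unnormalized layout every one of her orders would have to be rewritten.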

API Design

RESTful API Principles

REST API Design Best Practices:

1. Use nouns for resources, not verbs
   Good: GET /users/123
   Bad:  GET /getUser/123

2. Use HTTP methods correctly
   GET    /users      - Get all users
   GET    /users/123  - Get specific user
   POST   /users      - Create new user
   PUT    /users/123  - Update user (full)
   PATCH  /users/123  - Update user (partial)
   DELETE /users/123  - Delete user

3. Use plural nouns for collections
   Good: /products
   Bad:  /product

4. Use proper HTTP status codes
   200 OK                    - Successful GET, PUT, PATCH
   201 Created               - Successful POST
   204 No Content            - Successful DELETE
   400 Bad Request           - Client error
   401 Unauthorized          - Authentication required
   404 Not Found             - Resource doesn't exist
   500 Internal Server Error - Unexpected server failure

5. Version your API
   /v1/users
   /v2/users

6. Use filtering, sorting, pagination
   /products?category=electronics&sort=price&page=2&limit=20
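These conventions can be sketched without any framework as a tiny in-memory route handler; the paths, data, and `handle` function are invented for illustration, not a real server. It shows the method/status-code pairings from the list above in executable form.

```javascript
// In-memory "users" resource, versioned under /v1.
const users = new Map([[123, { id: 123, name: 'Ada' }]]);
let nextId = 124;

// Dispatch on HTTP method + resource path, returning { status, body }.
function handle(method, path, body) {
  const match = path.match(/^\/v1\/users(?:\/(\d+))?$/);
  if (!match) return { status: 404, body: { error: 'Not Found' } };
  const id = match[1] ? Number(match[1]) : null;

  if (method === 'GET' && id === null) {
    return { status: 200, body: [...users.values()] }; // collection
  }
  if (method === 'GET') {
    return users.has(id)
      ? { status: 200, body: users.get(id) }
      : { status: 404, body: { error: 'Not Found' } };
  }
  if (method === 'POST' && id === null) {
    const user = { id: nextId++, ...body };
    users.set(user.id, user);
    return { status: 201, body: user }; // 201 Created for successful POST
  }
  if (method === 'DELETE' && id !== null) {
    return users.delete(id)
      ? { status: 204 } // 204 No Content for successful DELETE
      : { status: 404, body: { error: 'Not Found' } };
  }
  return { status: 400, body: { error: 'Bad Request' } };
}

console.log(handle('POST', '/v1/users', { name: 'Grace' }).status); // 201
```

In a real service the same routing table would live in a framework like Express, but the resource naming and status-code choices carry over unchanged.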
Best Practice: Design for maintainability. Code is read far more often than it's written. Make your design easy for others (including future you) to understand.

Test Your Knowledge - Lesson 3

1. What does the "S" in SOLID stand for?

2. In MVC architecture, which component handles user input?

3. What HTTP method should be used to create a new resource in a RESTful API?

Lesson 4: Development and Implementation

Coding Best Practices

Writing clean, maintainable code is essential for long-term project success.

Clean Code Principles

1. Meaningful Names
   Bad:  let d = 5; // elapsed time in days
   Good: let elapsedTimeInDays = 5;

2. Functions Should Do One Thing
   Bad:  function processUserAndSendEmail(user) { ... }
   Good: function processUser(user) { ... }
         function sendEmail(user) { ... }

3. Keep Functions Small
   • Ideally 5-15 lines
   • Single level of abstraction
   • Easy to understand at a glance

4. Comment Why, Not What
   Bad:  // Increment i
         i++;
   Good: // Skip invalid entries
         i++;

5. Consistent Formatting
   • Use a linter (ESLint, Prettier)
   • Follow the team style guide
   • Consistent indentation and naming

6. Error Handling
   • Don't ignore exceptions
   • Provide meaningful error messages
   • Fail fast and clearly

Version Control with Git

Essential for team collaboration and code management.

Basic Git Workflow:

# Clone repository
git clone https://github.com/team/project.git

# Create feature branch
git checkout -b feature/user-authentication

# Make changes and stage
git add src/auth.js
git add tests/auth.test.js

# Commit with meaningful message
git commit -m "Add user authentication with JWT tokens"

# Push to remote
git push origin feature/user-authentication

# Create pull request for review
# After approval, merge to main

# Update local main branch
git checkout main
git pull origin main

# Delete feature branch
git branch -d feature/user-authentication

Git Best Practices

  • Commit Often: Small, atomic commits
  • Write Good Commit Messages: Clear, descriptive, explain why
  • Use Branches: Feature branches, never commit directly to main
  • Pull Before Push: Stay up to date with team changes
  • Review Code: Use pull requests for code review
  • Don't Commit Secrets: Use .gitignore for credentials
Good Commit Messages:

Bad:
"Fixed bug"
"Updated code"
"Changes"

Good:
"Fix null pointer exception in user login"
"Add pagination to product listing API"
"Refactor database connection pooling for better performance"

Format:
[Type]: Short summary (50 chars max)

Detailed explanation if needed
- What changed
- Why it changed
- References to tickets/issues

Types: feat, fix, refactor, docs, test, chore

Code Review

Peer review improves code quality and shares knowledge.

What to Look for in Code Review

  • Correctness: Does the code do what it's supposed to?
  • Design: Is the approach sound and maintainable?
  • Readability: Can others understand the code?
  • Style: Does it follow team conventions?
  • Tests: Are there adequate tests?
  • Performance: Any obvious performance issues?
  • Security: Any security vulnerabilities?
Review Etiquette:
  • Be constructive, not critical
  • Explain the "why" behind suggestions
  • Ask questions rather than make demands
  • Appreciate good code
  • Review promptly

Development Environments

Environment Types

| Environment | Purpose | Who Uses |
|---|---|---|
| Development (Dev) | Active development and debugging | Developers |
| Testing/QA | Quality assurance testing | QA team, testers |
| Staging | Pre-production testing, mirrors production | QA, stakeholders |
| Production (Prod) | Live system used by end users | End users |

Continuous Integration (CI)

Automatically build and test code changes frequently.

CI Pipeline Example (GitHub Actions):

name: CI Pipeline
on: [push, pull_request]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Setup Node.js
        uses: actions/setup-node@v2
        with:
          node-version: '16'
      - name: Install dependencies
        run: npm install
      - name: Run linter
        run: npm run lint
      - name: Run tests
        run: npm test
      - name: Build application
        run: npm run build
      - name: Run security scan
        run: npm audit

Benefits:
✓ Catch bugs early
✓ Ensure code quality
✓ Automated testing
✓ Faster feedback
✓ Reduce integration issues

Continuous Deployment (CD)

Automatically deploy code changes to production after passing tests.

CD Pipeline Stages:

1. Source → 2. Build → 3. Test → 4. Deploy to Staging → 5. Deploy to Production

Stage Details:
1. Source: Code committed to repository
2. Build: Compile code, create artifacts
3. Test: Run automated tests
4. Stage: Deploy to staging environment
5. Production: Deploy to production (manual approval or automatic)

Tools:
• Jenkins
• GitLab CI/CD
• GitHub Actions
• CircleCI
• Travis CI
• Azure DevOps

Configuration Management

Manage environment-specific settings separately from code.

Environment Configuration:

# .env.development
DATABASE_URL=localhost:5432
API_KEY=dev_key_12345
LOG_LEVEL=debug
CACHE_ENABLED=false

# .env.production
DATABASE_URL=prod-db.company.com:5432
API_KEY=prod_key_secure_67890
LOG_LEVEL=error
CACHE_ENABLED=true

Code:
require('dotenv').config();
const dbUrl = process.env.DATABASE_URL;
const apiKey = process.env.API_KEY;

Security:
✓ Never commit .env files to version control
✓ Add .env* to .gitignore
✓ Use secrets management (AWS Secrets Manager, HashiCorp Vault)
✓ Rotate credentials regularly

Documentation

Good documentation is crucial for maintenance and onboarding.

Types of Documentation

  • README: Project overview, setup instructions, getting started
  • API Documentation: Endpoint descriptions, parameters, examples
  • Code Comments: Explain complex logic, why not what
  • Architecture Docs: System design, component interactions
  • User Guides: How to use the application
  • Runbooks: Operational procedures, troubleshooting
README.md Template:

# Project Name

Brief description of what this project does.

## Features
- Feature 1
- Feature 2
- Feature 3

## Prerequisites
- Node.js 16+
- PostgreSQL 13+
- Redis

## Installation
```bash
git clone https://github.com/team/project.git
cd project
npm install
cp .env.example .env
# Edit .env with your settings
```

## Usage
```bash
npm start
```

## Testing
```bash
npm test
```

## API Documentation
See [API.md](docs/API.md)

## Contributing
See [CONTRIBUTING.md](CONTRIBUTING.md)

## License
MIT License
Documentation Anti-Patterns:
  • Outdated docs are worse than no docs
  • Documenting obvious code
  • Too much documentation (maintenance burden)
  • Documentation separate from code (gets out of sync)

Pair Programming

Two developers work together at one workstation.

  • Driver: Writes the code
  • Navigator: Reviews, suggests, thinks ahead
  • Switch roles regularly (every 15-30 minutes)
  • Benefits: Knowledge sharing, fewer bugs, better design
  • Works well for complex problems or knowledge transfer

Test Your Knowledge - Lesson 4

1. What does CI stand for in software development?

2. Which environment should mirror production for pre-release testing?

3. What is the recommended practice for committing to the main branch?

Lesson 5: Testing and Quality Assurance

Why Testing Matters

Testing ensures software quality, catches bugs early, and gives confidence in releases.

Cost of Bugs: A bug found in production can cost up to 100x more to fix than one found during development. Testing saves time and money!

Testing Pyramid

Testing Pyramid (Bottom to Top):

         ╱╲
        ╱E2E╲          ← Few (Slow, Expensive, Brittle)
       ╱──────╲
      ╱Integration╲    ← Some (Medium Speed/Cost)
     ╱────────────╲
    ╱  Unit Tests  ╲   ← Many (Fast, Cheap, Stable)
   ╱────────────────╲

Distribution:
• 70% Unit Tests
• 20% Integration Tests
• 10% End-to-End Tests

Rationale:
✓ Unit tests are fast and pinpoint issues
✓ Integration tests verify components work together
✓ E2E tests validate complete user workflows

Types of Testing

1. Unit Testing

Test individual components or functions in isolation.

Example Unit Test (Jest/JavaScript):

// Function to test
function calculateTotal(price, quantity, taxRate) {
  if (price < 0 || quantity < 0 || taxRate < 0) {
    throw new Error('Values must be non-negative');
  }
  const subtotal = price * quantity;
  const tax = subtotal * taxRate;
  return subtotal + tax;
}

// Unit tests
describe('calculateTotal', () => {
  test('calculates total with tax correctly', () => {
    expect(calculateTotal(10, 2, 0.08)).toBe(21.6);
  });

  test('handles zero tax rate', () => {
    expect(calculateTotal(10, 2, 0)).toBe(20);
  });

  test('throws error for negative price', () => {
    expect(() => calculateTotal(-10, 2, 0.08))
      .toThrow('Values must be non-negative');
  });

  test('handles zero quantity', () => {
    expect(calculateTotal(10, 0, 0.08)).toBe(0);
  });
});

Benefits:
✓ Fast execution
✓ Easy to debug
✓ Document code behavior
✓ Enable refactoring with confidence

2. Integration Testing

Test how multiple components work together.

Integration Test Example:

// Test database integration
describe('User Repository', () => {
  beforeAll(async () => {
    // Setup test database
    await database.connect();
  });

  afterAll(async () => {
    // Cleanup
    await database.disconnect();
  });

  test('creates and retrieves user', async () => {
    const user = { name: 'John Doe', email: 'john@example.com' };

    // Create user
    const userId = await userRepository.create(user);

    // Retrieve user
    const retrieved = await userRepository.findById(userId);

    expect(retrieved.name).toBe('John Doe');
    expect(retrieved.email).toBe('john@example.com');
  });

  test('handles duplicate email', async () => {
    const user = { name: 'Jane', email: 'jane@example.com' };
    await userRepository.create(user);

    await expect(userRepository.create(user))
      .rejects.toThrow('Email already exists');
  });
});

3. End-to-End (E2E) Testing

Test complete user workflows from start to finish.

E2E Test Example (Cypress):

describe('E-commerce Checkout Flow', () => {
  it('completes purchase successfully', () => {
    // Visit homepage
    cy.visit('https://example.com');

    // Search for product
    cy.get('[data-test="search-input"]').type('laptop');
    cy.get('[data-test="search-button"]').click();

    // Add to cart
    cy.get('[data-test="product-1"]').click();
    cy.get('[data-test="add-to-cart"]').click();

    // Go to checkout
    cy.get('[data-test="cart-icon"]').click();
    cy.get('[data-test="checkout-button"]').click();

    // Fill shipping info
    cy.get('[data-test="shipping-name"]').type('John Doe');
    cy.get('[data-test="shipping-address"]').type('123 Main St');

    // Complete payment
    cy.get('[data-test="card-number"]').type('4242424242424242');
    cy.get('[data-test="submit-order"]').click();

    // Verify success
    cy.get('[data-test="order-confirmation"]')
      .should('contain', 'Order placed successfully');
  });
});

4. Other Testing Types

  • Regression Testing: Ensure new changes don't break existing functionality
  • Performance Testing: Verify system handles expected load
  • Security Testing: Identify vulnerabilities
  • Usability Testing: Evaluate user experience
  • Acceptance Testing: Verify requirements are met (UAT)
  • Smoke Testing: Quick check that major features work

Test-Driven Development (TDD)

Write tests before writing code.

TDD Cycle (Red-Green-Refactor):

1. Red: Write a failing test
   • Write a test for the desired functionality
   • Run the test (it fails - no implementation yet)

2. Green: Make the test pass
   • Write minimal code to make the test pass
   • Don't worry about perfection

3. Refactor: Improve the code
   • Clean up code
   • Optimize
   • Tests still pass

Repeat for each new feature or requirement

Benefits:
✓ Better code design
✓ Higher test coverage
✓ Prevents over-engineering
✓ Living documentation
✓ Confidence in changes

Code Coverage

Measure how much code is tested.

Coverage Metrics:
• Line Coverage: % of code lines executed
• Branch Coverage: % of decision branches taken
• Function Coverage: % of functions called
• Statement Coverage: % of statements executed

Example Report:

File           | Line % | Branch % | Function %
---------------|--------|----------|------------
auth.js        |   95%  |   88%    |   100%
user.js        |   87%  |   75%    |    90%
payment.js     |   60%  |   50%    |    75%
---------------|--------|----------|------------
Total          |   81%  |   71%    |    88%

Target: 80%+ coverage (100% not always necessary)
Coverage Caveat: High coverage doesn't guarantee quality tests. You can have 100% coverage with poor tests. Focus on meaningful tests that validate behavior.

Performance Testing

Ensure system meets performance requirements.

Types of Performance Tests

  • Load Testing: Test system under expected load
  • Stress Testing: Test beyond normal capacity to find breaking point
  • Spike Testing: Test sudden increase in load
  • Endurance Testing: Test sustained load over time
  • Scalability Testing: Verify system scales appropriately
Load Test Example (k6):

import http from 'k6/http';
import { check, sleep } from 'k6';

export let options = {
  stages: [
    { duration: '2m', target: 100 }, // Ramp up to 100 users
    { duration: '5m', target: 100 }, // Stay at 100 users
    { duration: '2m', target: 0 },   // Ramp down to 0 users
  ],
  thresholds: {
    http_req_duration: ['p(95)<500'], // 95% of requests under 500ms
    http_req_failed: ['rate<0.01'],   // Less than 1% failures
  },
};

export default function () {
  let response = http.get('https://api.example.com/products');
  check(response, {
    'status is 200': (r) => r.status === 200,
    'response time < 500ms': (r) => r.timings.duration < 500,
  });
  sleep(1);
}

Security Testing

Identify and fix security vulnerabilities.

Common Security Tests

  • SQL Injection: Test database query vulnerabilities
  • XSS (Cross-Site Scripting): Test for script injection
  • CSRF (Cross-Site Request Forgery): Test unauthorized actions
  • Authentication/Authorization: Test access controls
  • Dependency Scanning: Check for vulnerable libraries
  • Penetration Testing: Simulate real attacks
Security Testing Tools:
• OWASP ZAP - Web application security scanner
• Snyk - Dependency vulnerability scanner
• npm audit - Check npm packages for vulnerabilities
• SonarQube - Static code analysis
• Burp Suite - Web security testing
• Nmap - Network security scanner

Example:
$ npm audit
found 3 vulnerabilities (1 moderate, 2 high)
run `npm audit fix` to fix them

Test Automation

Automate repetitive testing tasks.

Benefits of Test Automation

  • Faster feedback on code changes
  • Consistent test execution
  • Reduced manual testing effort
  • Enable continuous integration/deployment
  • Better test coverage
  • Regression safety net

When to Automate

Automate:
  • Repetitive tests
  • Regression tests
  • Critical workflows
  • Data-driven tests
  • Performance tests

Keep Manual:
  • Exploratory testing
  • Usability testing
  • Ad-hoc testing
  • Subjective evaluation
  • One-time tests
Testing Best Practices:
  • Test early and often
  • Write tests that are independent and repeatable
  • Use descriptive test names
  • Keep tests fast
  • Maintain tests like production code
  • Fix failing tests immediately

Test Your Knowledge - Lesson 5

1. According to the testing pyramid, which type of test should you have the most of?

2. In TDD, what is the first step?

3. What does code coverage measure?

Lesson 6: Deployment and Maintenance

Deployment Strategies

How you deploy software impacts risk, downtime, and rollback capability.

1. Blue-Green Deployment

Blue-Green Strategy:

Blue Environment (Current Production)
        ↓
Deploy to Green Environment (New Version)
        ↓
Test Green Environment
        ↓
Switch Traffic: Blue → Green
        ↓
Green is now Production
(Blue kept as backup for quick rollback)

Benefits:
✓ Zero downtime
✓ Easy rollback (switch back to blue)
✓ Full testing before switch

Drawbacks:
✗ Requires double infrastructure
✗ Database migrations can be tricky

2. Canary Deployment

Canary Strategy:

Release new version to a small subset of users (e.g. 5%)
        ↓
Monitor metrics (errors, performance, user feedback)
        ↓
If successful, gradually increase traffic (25%, 50%, 75%)
        ↓
Eventually 100% of traffic on new version
        ↓
If issues detected at any point, roll back

Benefits:
✓ Lower risk (limited exposure)
✓ Real user testing
✓ Gradual rollout

Drawbacks:
✗ Requires traffic routing capability
✗ Longer deployment time
✗ Managing two versions simultaneously

3. Rolling Deployment

Rolling Strategy:

Server 1: Update → Test → In Service
        ↓
Server 2: Update → Test → In Service
        ↓
Server 3: Update → Test → In Service
        ↓
Continue until all servers updated

Benefits:
✓ No additional infrastructure needed
✓ Gradual update
✓ Can pause if issues arise

Drawbacks:
✗ Multiple versions running simultaneously
✗ Slower than instant deployment
✗ Rollback requires updating all servers back

4. Feature Flags

Feature Flag Pattern:

Code deployed with features disabled
        ↓
Enable features selectively:
  • By user (beta testers)
  • By percentage (10% of users)
  • By region (US only first)
  • By time (scheduled release)
        ↓
Monitor and adjust
        ↓
Eventually enable for all users

Implementation Example:

if (featureFlags.isEnabled('new-checkout', user)) {
  // New checkout flow
  return renderNewCheckout();
} else {
  // Old checkout flow
  return renderOldCheckout();
}

Benefits:
✓ Deploy code separately from release
✓ A/B testing capability
✓ Instant rollback (just disable flag)
✓ Gradual rollout

Tools:
• LaunchDarkly
• Feature flags in CI/CD tools
• Custom implementation

Deployment Checklist

Pre-Deployment:
☐ All tests passing (unit, integration, E2E)
☐ Code reviewed and approved
☐ Security scan completed
☐ Performance testing done
☐ Database migrations tested
☐ Rollback plan ready
☐ Monitoring and alerts configured
☐ Documentation updated
☐ Stakeholders notified
☐ Backup of current production taken

During Deployment:
☐ Follow runbook/deployment guide
☐ Monitor logs and metrics
☐ Verify health checks passing
☐ Test critical user flows
☐ Check error rates

Post-Deployment:
☐ Smoke tests completed successfully
☐ Monitor for errors and performance issues
☐ Verify analytics and tracking
☐ Confirm backups are working
☐ Document any issues encountered
☐ Notify stakeholders of completion

Containerization and Orchestration

Docker

Package applications with dependencies in containers.

Dockerfile Example:

# Use official Node.js runtime
FROM node:16-alpine

# Set working directory
WORKDIR /app

# Copy package files
COPY package*.json ./

# Install dependencies
RUN npm ci --only=production

# Copy application code
COPY . .

# Expose port
EXPOSE 3000

# Define environment variable
ENV NODE_ENV=production

# Run application
CMD ["node", "server.js"]

Build and Run:
$ docker build -t my-app:1.0 .
$ docker run -p 3000:3000 my-app:1.0

Benefits:
✓ Consistent environments (dev = prod)
✓ Isolation from host system
✓ Easy to scale and distribute
✓ Version control for infrastructure

Kubernetes

Orchestrate containers at scale.

Kubernetes Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:1.0
          ports:
            - containerPort: 3000
          resources:
            requests:
              memory: "256Mi"
              cpu: "500m"
            limits:
              memory: "512Mi"
              cpu: "1000m"

Capabilities:
• Auto-scaling based on load
• Self-healing (restart failed containers)
• Load balancing
• Rolling updates and rollbacks
• Service discovery

Monitoring and Observability

Track system health and performance in production.

Three Pillars of Observability

1. Metrics

Key Metrics to Monitor:

Application Metrics:
• Request rate (requests/sec)
• Error rate (errors/total requests)
• Response time (p50, p95, p99)
• Throughput

Infrastructure Metrics:
• CPU utilization
• Memory usage
• Disk I/O
• Network traffic

Business Metrics:
• Active users
• Conversion rate
• Revenue
• Feature usage

Tools: Prometheus, Grafana, CloudWatch, DataDog

2. Logs

Structured Logging Example:

// Good: Structured, searchable
logger.info('User logged in', {
  userId: '12345',
  email: 'user@example.com',
  ip: '192.168.1.1',
  timestamp: new Date().toISOString()
});

// Bad: Unstructured
console.log('User user@example.com logged in from 192.168.1.1');

Log Levels:
• ERROR: Application errors
• WARN: Warning conditions
• INFO: Informational messages
• DEBUG: Detailed debugging info

Tools: ELK Stack (Elasticsearch, Logstash, Kibana), Splunk

3. Traces

Distributed Tracing:

User Request → API Gateway → Auth Service → Database
                    ↓
              Product Service → Cache
                    ↓
              Payment Service → External API

Trace shows:
• Request flow through services
• Time spent in each service
• Where bottlenecks occur
• Failed service calls

Tools: Jaeger, Zipkin, AWS X-Ray

Alerting

Get notified when things go wrong.

Alert Examples:

Critical Alerts (Page Immediately):
• Service is down (multiple health check failures)
• Error rate > 5%
• Response time p95 > 5 seconds
• Database connection pool exhausted

Warning Alerts (Notify):
• Error rate > 1%
• CPU usage > 80%
• Disk space < 20%
• Memory usage > 85%

Best Practices:
✓ Alert on symptoms, not causes
✓ Make alerts actionable
✓ Avoid alert fatigue (too many alerts)
✓ Include context in alert messages
✓ Define on-call rotation
✓ Document runbooks for common issues

Incident Management

Process for handling production issues.

Incident Response Process:

1. Detect: Alert triggers or user reports
2. Acknowledge: On-call engineer acknowledges
3. Assess: Determine severity and impact
4. Respond:
   - Communicate to stakeholders
   - Start incident channel/war room
   - Assign roles (incident commander, scribe)
5. Mitigate: Stop the bleeding
   - Roll back deployment
   - Scale resources
   - Route traffic away
6. Resolve: Fix root cause
7. Document: Write incident report
8. Learn: Post-mortem meeting

Severity Levels:
• SEV1: Critical - System down, major features broken
• SEV2: High - Significant degradation, some users affected
• SEV3: Medium - Minor issues, workaround available
• SEV4: Low - Cosmetic issues, no user impact

Post-Mortems

Learn from incidents without blame.

Post-Mortem Template:

Incident Summary:
• What happened
• Impact (users affected, duration)
• Root cause

Timeline:
• 14:05 - Deployment started
• 14:12 - Error rate increased to 15%
• 14:15 - Alert triggered
• 14:18 - Incident declared
• 14:25 - Rollback initiated
• 14:30 - Service recovered

Root Cause:
• Detailed explanation
• Contributing factors

Resolution:
• What fixed it
• Why it worked

Action Items:
• [ ] Add validation for X (Owner: Alice, Due: 1 week)
• [ ] Improve monitoring for Y (Owner: Bob, Due: 2 weeks)
• [ ] Update deployment checklist (Owner: Carol, Due: 3 days)

Lessons Learned:
• What went well
• What could be improved
• What we learned

Software Maintenance

Ongoing care and improvement of software.

Types of Maintenance

  • Corrective: Fix bugs and defects
  • Adaptive: Update for new environments (OS updates, new browsers)
  • Perfective: Enhance features and performance
  • Preventive: Refactor to prevent future issues

Technical Debt

Technical debt is the accumulated cost of shortcuts taken during development that will require rework later.

Managing Technical Debt:
  • Track technical debt items
  • Allocate time each sprint for debt reduction
  • Don't let debt accumulate indefinitely
  • Balance new features with code quality
  • Refactor continuously, not in big rewrites

End-of-Life Planning

Eventually, software needs to be retired.

  • Notify users well in advance
  • Provide migration path to replacement
  • Archive data appropriately
  • Maintain security patches during transition
  • Document lessons learned
Deployment Best Practices:
  • Deploy during low-traffic periods
  • Deploy small changes frequently
  • Automate deployment process
  • Always have a rollback plan
  • Monitor closely after deployment
  • Test in staging that mirrors production
  • Use infrastructure as code
  • Document everything

Test Your Knowledge - Lesson 6

1. Which deployment strategy involves running two identical environments and switching traffic between them?

2. What are the three pillars of observability?

3. What is the purpose of a post-mortem?