Architecture Considerations with AWS Lambda

Building Serverless-Typi: Learning Serverless Architecture Through a Typing Practice App

When I decided to build a typing practice application, I saw it as an opportunity to dive deep into serverless architecture. What started as a simple project became Serverless-Typi, a typing exercise platform that helped me explore AWS Lambda, DynamoDB, and TypeScript in ways I hadn't before. Along the way, I learned a lot about serverless patterns, data modeling, and performance optimization that I'd like to share.

The Learning Goal: More Than Just Typing Practice

My goal was to create a typing practice app that could track user progress, store custom exercises, and maintain typing history, but I also wanted to use it as a learning vehicle for serverless architecture. I was particularly curious about exploring both microservices and monolithic approaches within the same codebase to see how they compared in practice.

What I ended up with was a platform that includes:

  • Custom typing exercises with full CRUD operations

  • Performance tracking including strokes per minute, error rates, and time spent

  • User progress analytics with key statistics and historical data

  • Multi-environment deployments with different architectural patterns

  • Testing strategies from unit tests to load testing

The Dual Architecture: An Interesting Experiment

One of the most interesting decisions I made was implementing both microservices and monolithic patterns, switching between them based on the deployment environment. This wasn't planned from the start; it evolved from trying to solve the slow build times I was experiencing during development:

Microservices Approach (Production/Test)

In production and test environments, each endpoint is handled by its own specialized Lambda function:

  • GetExercises (512MB) - Retrieves all typing exercises with optimized projection

  • GetExercisesAttribute (128MB) - Fetches specific exercise attributes

  • PostExercise (1024MB) - Creates new exercises with full validation

  • DeleteExercise (128MB) - Removes exercises and associated data

  • GetHistories (128MB) - Retrieves typing session history

  • PutHistory (128MB) - Records new typing sessions with intelligent fallback

  • DeleteHistory (256MB) - Removes specific typing sessions

  • GetUser (512MB) - Fetches complete user profiles

  • GetUserAttribute (128MB) - Retrieves specific user attributes

  • PatchUserKeyStats (128MB) - Updates user statistics

  • Register (128MB) - Handles user registration via Cognito triggers

I spent time tuning the memory allocation for each function based on usage patterns, which helped with cost optimization.

Monolithic Approach (Development)

For development, I route everything through a single Monolith Lambda that implements a route dispatcher:

export const handler: MonolithType = async (event, context, callback) => {
  // Dispatch on the HTTP API's routeKey ("METHOD /path")
  switch (event.routeKey) {
    case "GET /exercises":
      return await handlers.GetExercises(event, context, callback);
    case "POST /exercise":
      return await handlers.PostExercise(event, context, callback);
    // ... handles all 10 endpoints
    default:
      // No matching route: respond 404, echoing the request body
      return { statusCode: 404, body: JSON.stringify(event.body) };
  }
};

This approach helped reduce build times during development (from ~10 minutes to ~2 minutes) while keeping the same API behavior.
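The switch itself lives in the CDK app. A hedged sketch of how it could look, assuming the environment comes in via CDK context and using the LambdaFactory described below (the project's exact wiring may differ):

// e.g. `cdk deploy -c env=prod`; memory sizes here are illustrative
const env = app.node.tryGetContext("env") ?? "dev";
if (env === "dev") {
  // one bundle, one function; every route points at it
  factory.addLambda({ name: "Monolith", memorySize: 1024 });
} else {
  // one specialized function per endpoint, as listed above
  factory.addLambda({ name: "GetExercises", memorySize: 512 });
  factory.addLambda({ name: "PostExercise", memorySize: 1024 });
  // ...and so on for the remaining endpoints
}

Because both branches register the same routes, the API contract stays identical across environments.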

Technical Implementation

Infrastructure as Code with AWS CDK

I defined the infrastructure using AWS CDK with TypeScript, which let me create reusable constructs:

Lambda Factory Pattern

I built a LambdaFactory class to standardize how Lambda functions are created:

export class LambdaFactory {
  // scope, layers, table and env are injected once via the constructor,
  // so every function created here shares the same base wiring
  addLambda({ name, memorySize = 128, folder, props }: {
    name: string;
    memorySize?: number;
    folder?: string;
    props?: NodejsFunctionProps;
  }) {
    return new LambdaConstruct(
      this.scope, this.layers, this.table, this.env,
      name, memorySize, props, folder
    );
  }
}

Each Lambda is configured with the following (a sketch of this wiring follows the list):

  • Node.js 14.x runtime for consistency

  • Lambda layers for shared utilities and dependencies

  • Environment-specific bundling (minification in production only)

  • Reserved concurrency limits (100 per function) for cost control

  • Comprehensive environment variables for configuration
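Put together, the per-function wiring inside LambdaConstruct plausibly looks something like this; a sketch under those assumptions, not the project's exact code:

import { Construct } from "constructs";
import { ITable } from "aws-cdk-lib/aws-dynamodb";
import { LayerVersion, Runtime } from "aws-cdk-lib/aws-lambda";
import { NodejsFunction, NodejsFunctionProps } from "aws-cdk-lib/aws-lambda-nodejs";

// Illustrative helper showing the configuration listed above
export const makeLambda = (
  scope: Construct, name: string, memorySize: number,
  layers: LayerVersion[], table: ITable, env: string,
  props?: NodejsFunctionProps,
) =>
  new NodejsFunction(scope, name, {
    entry: `src/${name}.ts`,               // assumed file layout
    runtime: Runtime.NODEJS_14_X,          // consistent runtime everywhere
    memorySize,                            // per-function tuning (128-1024MB)
    layers,                                // shared utilities and dependencies
    reservedConcurrentExecutions: 100,     // cost-control ceiling per function
    bundling: { minify: env === "prod" },  // minify in production only
    environment: { TABLE_NAME: table.tableName, ENV: env }, // assumed variable names
    ...props,
  });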

Dynamic Route Registration

The UserRoutesFactory handles IAM permissions based on HTTP method, which was handy:

addPOST(path: string, lambda: NodejsFunction) {
  const integration = new HttpLambdaIntegration(path + "-integration", lambda);
  this.api.addRoutes({ path, methods: [HttpMethod.POST], integration });
  this.table.grantWriteData(lambda); // Auto-assigns write permissions
}
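By the same pattern, a GET registration would grant read access instead; this variant is my illustration of the idea, not copied from the project:

addGET(path: string, lambda: NodejsFunction) {
  const integration = new HttpLambdaIntegration(path + "-integration", lambda);
  this.api.addRoutes({ path, methods: [HttpMethod.GET], integration });
  this.table.grantReadData(lambda); // Read-only permissions for GET routes
}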

Data Architecture: Learning Single-Table Design

I decided to use DynamoDB's single-table design pattern, which was a learning curve but helped me understand NoSQL data modeling better:

Primary Key Patterns

  • Users: pk: "USER-{userId}", sk: "INFO"

  • Exercises: pk: "USER-{userId}", sk: "EX-{exerciseTitle}"

  • Exercise History: Stored as nested objects within exercises using DynamoDB's map data type
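Small key-builder helpers keep these patterns in one place; the two below are hypothetical but mirror the formats above:

// Hypothetical helpers mirroring the key patterns above
const userKey = (userId: string) => ({ pk: `USER-${userId}`, sk: "INFO" });
const exerciseKey = (userId: string, exerciseTitle: string) => ({
  pk: `USER-${userId}`,
  sk: `EX-${exerciseTitle}`,
});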

Repository Pattern Implementation

I built a generic BaseRepository class to handle common CRUD operations:

export abstract class BaseRepository<T extends pkType> {
  async getAttributesFromAll<attrType extends keyof T & string>(
    pk: string, sk: searchKeyPrefix, attrs: attrType[]
  ) {
    // Map attribute names to placeholders, e.g. ["title"] -> { "#title": "title" }
    const attrTransformed = Object.fromEntries(attrs.map((a) => [`#${a}`, a]));
    const command = new QueryCommand({
      TableName: this.tableName,
      KeyConditionExpression: "pk = :userId AND begins_with(sk, :searchKey)",
      ExpressionAttributeValues: { ":userId": pk, ":searchKey": sk },
      ProjectionExpression: Object.keys(attrTransformed).toString(),
      ExpressionAttributeNames: attrTransformed,
    });
    const { Items } = await this.client.send(command);
    return Items as { [key in attrType]: T[key] }[];
  }
}

The UserExerciseRepository extends this with domain-specific methods like updateUserExerciseHistory().
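The post doesn't show that method in full; given the map-typed histories attribute, a plausible sketch (the key and history shapes are my assumptions) is:

import { UpdateCommand } from "@aws-sdk/lib-dynamodb";

// Hedged sketch: write one entry into the histories map, and fail if the
// exercise row doesn't exist yet (that failure drives the fallback shown later)
async updateUserExerciseHistory(
  key: { pk: string; sk: string },   // assumed key shape
  newHistory: { createdOn: number }, // assumed history shape
) {
  await this.client.send(new UpdateCommand({
    TableName: this.tableName,
    Key: key,
    ConditionExpression: "attribute_exists(sk)",
    UpdateExpression: "SET histories.#id = :history",
    ExpressionAttributeNames: { "#id": String(newHistory.createdOn) },
    ExpressionAttributeValues: { ":history": newHistory },
  }));
}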

Type Safety with Zod Schema Validation

I used Zod throughout the application for runtime type validation, which was really helpful for catching errors early:

export const history = z.object({
  createdOn, // shared timestamp schema defined alongside these
  secondsSpent: z.number().positive().lt(Number(process.env.MAX_SECONDS_SPENT)),
  strokesTotal: z.number().positive().lt(Number(process.env.MAX_STROKES_TOTAL)),
  errorsTotal: z.number().nonnegative().lt(Number(process.env.MAX_ERRORS_TOTAL)),
});

export const userExercise = z.object({
  pk: exercisePrimaryKey.shape.pk,
  sk: exercisePrimaryKey.shape.sk,
  title: z.string().min(1).max(Number(process.env.MAX_TITLE_LENGTH)),
  text: z.string().min(1).max(Number(process.env.MAX_TEXT_LENGTH)),
  createdOn,
  histories: z.record(history).default({}), // one entry per typing session
});

This provides:

  • Runtime validation of all incoming data

  • Compile-time type safety throughout the application

  • Environment-based constraints (configurable limits)

  • Custom error messages for better user experience
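In a handler, that validation sits right at the edge; a hedged usage example:

// Hedged usage: validate before touching the database; Zod's issues
// array carries field-level messages back to the client
const parsed = history.safeParse(JSON.parse(event.body ?? "{}"));
if (!parsed.success) {
  return { statusCode: 400, body: JSON.stringify(parsed.error.issues) };
}
const validHistory = parsed.data; // fully typed via inference from the schema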

Lambda Layer Architecture

I organized shared utilities in a Lambda layer (/opt/nodejs/util) to avoid code duplication:

Core Utilities

  • handlerHelper - Standardizes request/response handling across all functions

  • createReq - Extracts and validates request data from API Gateway events

  • handleError - Centralized error handling with Zod integration

  • Repository instances - Pre-configured DynamoDB clients

Request Processing Pipeline

Every Lambda follows the same pattern:

const fn = async (req: requestType) => {
  // Business logic here
  const result = await someOperation(req.userId, req.body);
  return { body: JSON.stringify(result) };
};

export const handler: HandlerType = async (event) => {
  return handlerHelper(event, fn);
};

The handlerHelper provides consistent error handling, request parsing, and response formatting across all functions.
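The real handlerHelper lives in the layer; a minimal sketch of its shape, assuming the createReq and handleError utilities above, would be:

import type {
  APIGatewayProxyEventV2WithJWTAuthorizer,
  APIGatewayProxyResultV2,
} from "aws-lambda";
import { createReq, handleError } from "/opt/nodejs/util";

// Minimal sketch, not the project's exact implementation
export const handlerHelper = async (
  event: APIGatewayProxyEventV2WithJWTAuthorizer,
  fn: (req: ReturnType<typeof createReq>) => Promise<{ statusCode?: number; body: string }>,
): Promise<APIGatewayProxyResultV2> => {
  try {
    const req = createReq(event);                     // extract userId, body, route
    const { statusCode = 200, body } = await fn(req); // run the business logic
    return { statusCode, body };
  } catch (e) {
    return handleError(e);                            // centralized, Zod-aware errors
  }
};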

Authentication & Security

Cognito Integration

I used AWS Cognito for authentication with automatic user registration:

userPool.addTrigger(
  env === "prod" 
    ? UserPoolOperation.POST_CONFIRMATION 
    : UserPoolOperation.POST_AUTHENTICATION,
  register
);

In production, users are registered after email confirmation. In development, registration happens on every authentication for easier testing.
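The Register function itself receives Cognito's trigger event; a hedged sketch (userRepo and the default attributes are my assumptions):

import { PostConfirmationTriggerHandler } from "aws-lambda";
import { userRepo } from "/opt/nodejs/util"; // assumed layer export

// Hedged sketch of the Register trigger: create the user's INFO row on sign-up
export const handler: PostConfirmationTriggerHandler = async (event) => {
  await userRepo.create({
    pk: `USER-${event.request.userAttributes.sub}`, // Cognito user id
    sk: "INFO",
    // ...default key stats would be initialized here
  });
  return event; // Cognito triggers must return the event to continue the flow
};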

JWT Token Processing

The createReq utility extracts user information from Cognito JWT tokens:

export const createReq = (event: Parameters<HandlerType>[0]) => {
  const userId = event.requestContext.authorizer.jwt.claims.sub as string;
  const body = JSON.parse(event.body || "{}");
  return { userId, body, pathParameters: event.pathParameters, routeKey: event.routeKey };
};

This ensures every function has access to the authenticated user's ID without additional lookups.

Performance Optimization Learnings

Memory Allocation Strategy

One thing that really helped with costs was tuning memory allocation for each function. Using AWS Lambda Power Tuning, I found better memory settings:

  • Data Retrieval Functions (512MB): GetExercises, GetUser - Handle larger datasets

  • Heavy Processing (1024MB): PostExercise - Complex validation and storage operations

  • Lightweight Operations (128MB): Most other functions - Simple CRUD operations

  • Moderate Processing (256MB): DeleteHistory - Handles nested data updates

Database Query Optimization

I tried to use DynamoDB's query capabilities efficiently:

Projection Expressions

async getAllExercises(pk: string): Promise<Pick<userExerciseType, "title" | "text" | "sk">[]> {
  const command = new QueryCommand({
    TableName: this.tableName,
    KeyConditionExpression: "pk = :pk AND begins_with(sk, :prefix)",
    ExpressionAttributeValues: { ":pk": pk, ":prefix": "EX-" },
    ProjectionExpression: "#title, #text, #sk", // fetch only what the client needs
    ExpressionAttributeNames: { "#title": "title", "#text": "text", "#sk": "sk" },
  });
  const { Items } = await this.client.send(command);
  return Items as Pick<userExerciseType, "title" | "text" | "sk">[];
}

This way I only fetch the attributes I actually need, which helps with performance and costs.

Conditional Updates

I used conditional expressions in write operations to prevent race conditions:

async create(item: T): Promise<void> {
  const command = new PutCommand({
    TableName: this.tableName,
    Item: item,
    ConditionExpression: "attribute_not_exists(sk)", // Prevents overwrites
  });
  await this.client.send(command);
}

Fallback Logic

The PutHistory function includes some fallback handling for edge cases:

const fn = async (req: requestType) => {
  // primaryKey, newHistory, pk, sk, histories and title are derived from req
  try {
    await exerciseRepo.updateUserExerciseHistory(primaryKey, newHistory);
  } catch (e) {
    // Fallback: create the exercise if it doesn't exist (for public exercises)
    const exercise = userExercise.parse({ pk, sk, histories, title, text: "default" });
    await exerciseRepo.create(exercise);
  }
};

This handles the case where users practice on public exercises that aren't yet in their personal collection.

Testing Strategy

Unit Testing with Jest and DynamoDB Local

I set up tests using @shelf/jest-dynamodb to test against a local DynamoDB instance:

describe("Test modifying exercise data", function () {
  it("tests if an exercise can be added to the db", async () => {
    const result = await handler(postExercise);
    expect(result.statusCode).toBe(200);
  });

  it("tests if a duplicate exercise can be added to the db", async () => {
    const result = await handler(postExercise);
    expect(JSON.parse(result.body)).toBe("ConditionalCheckFailedException");
  });
});

The tests cover the complete lifecycle of operations and error scenarios.
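Wiring this up takes two small config files; the table definition below mirrors the single-table keys, though the exact names are my placeholders:

// jest.config.js - pull in the DynamoDB Local preset
module.exports = { preset: "@shelf/jest-dynamodb" };

// jest-dynamodb-config.js - table the preset creates before the test run
module.exports = {
  tables: [
    {
      TableName: "serverless-typi-test", // placeholder name
      KeySchema: [
        { AttributeName: "pk", KeyType: "HASH" },
        { AttributeName: "sk", KeyType: "RANGE" },
      ],
      AttributeDefinitions: [
        { AttributeName: "pk", AttributeType: "S" },
        { AttributeName: "sk", AttributeType: "S" },
      ],
      BillingMode: "PAY_PER_REQUEST",
    },
  ],
};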

Load Testing with Artillery

I set up load testing with Artillery to simulate real usage patterns:

config:
  phases:
    - duration: 60
      arrivalRate: 1
      rampTo: 20
      name: Warm up
scenarios:
  - name: "Create and delete Exercise"
    flow:
      - function: "generateRandomExercise"
      - post: { url: "/exercise", json: { title: "{{ title }}", text: "{{ text }}" }}
      - get: { url: "/exercises" }
      - delete: { url: "/exercise/{{ exerciseId }}" }

This tests the complete user workflow under realistic load conditions.
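The generateRandomExercise hook referenced above is an Artillery processor function; a hedged sketch (the config would point at the compiled file via a processor entry):

// processor.ts (compiled to processor.js for Artillery) - hedged sketch;
// variables set on context.vars become {{ title }} / {{ text }} above
export function generateRandomExercise(
  context: { vars: Record<string, string> },
  events: unknown,
  done: () => void,
) {
  context.vars.title = "load-test-" + Math.random().toString(36).slice(2, 10);
  context.vars.text = "the quick brown fox jumps over the lazy dog";
  return done();
}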

Memory Optimization Testing

I keep an individual JSON payload file for each function so it can be fed into AWS Lambda Power Tuning (an example input follows the list):

  • get-exercises.json - Tests exercise retrieval performance

  • post-exercises.json - Tests exercise creation with various payloads

  • get-user.json - Tests user data retrieval patterns
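Each file is essentially an execution input for the power-tuning state machine: the function to test, the memory values to try, and an invocation payload. A hedged example (the ARN and payload are placeholders):

{
  "lambdaARN": "arn:aws:lambda:eu-central-1:123456789012:function:GetExercises",
  "powerValues": [128, 256, 512, 1024],
  "num": 50,
  "payload": { "routeKey": "GET /exercises" },
  "parallelInvocation": false,
  "strategy": "cost"
}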

Deployment & Environment Management

Multi-Environment Strategy

The application supports three distinct environments:

Development Environment

  • Single monolithic Lambda for fast iteration

  • Unminified bundles for better debugging

  • Post-authentication triggers for easier testing

  • Relaxed security constraints

Test Environment

  • Full microservices architecture matching production

  • Comprehensive logging for debugging

  • Pre-production validation environment

Production Environment

  • Optimized microservices with memory tuning

  • SSL certificate integration via Certificate Manager

  • Post-confirmation triggers for security

  • Minified bundles for performance

Performance Metrics & Results

Build Time Optimization

The dual architecture approach cut development build times from roughly 10 minutes to roughly 2 by bundling a single monolithic function instead of every microservice on each change.

Memory Optimization Results

Function-specific memory allocation helped reduce costs compared to uniform memory settings across all functions.

Query Performance

  • Efficient projection queries reducing data transfer

  • Conditional operations preventing race conditions

What I Learned

1. Dual Architecture Benefits

Having both patterns in one codebase turned out to be really useful:

  • Fast development iteration with monolithic approach

  • Production optimization with microservices

  • Identical API contracts across environments

  • Easy A/B testing of architectural approaches

2. TypeScript + Zod = Powerful Combination

The combination of TypeScript and Zod provides:

  • Compile-time type safety throughout the stack

  • Runtime validation with detailed error messages

  • Schema-driven development with automatic type inference

  • Environment-specific constraints without code duplication

3. DynamoDB Single-Table Design Mastery

Proper single-table design enables:

  • Cost-effective operations with minimal read/write units

  • Flexible query patterns using GSIs when needed

  • Atomic operations for related data

  • Scalable architecture without joins

4. Lambda Layer Strategy

Well-designed Lambda layers provide:

  • Code reuse across multiple functions

  • Consistent utilities and error handling

  • Reduced bundle sizes for individual functions

  • Centralized dependency management

5. Memory Tuning Impact

Function-specific memory allocation delivers:

  • Cost savings compared to uniform allocation

  • Better performance through right-sizing

  • Better resource utilization across the platform

  • Data-driven optimization through power tuning

Future Roadmap & Enhancements

Real-Time Features

Planning WebSocket integration for:

  • Live typing competitions between users

  • Real-time progress tracking during exercises

  • Collaborative typing sessions

  • Live leaderboards and social features

Advanced Analytics

Expanding analytics capabilities:

  • Machine learning insights for typing improvement

  • Personalized exercise recommendations

  • Progress prediction models

  • Advanced performance metrics

Platform Scalability

Considering architectural enhancements:

  • GraphQL API layer for flexible client queries

  • Event-driven architecture with EventBridge

  • Multi-region deployment for global performance

  • CDN integration for static content delivery

Conclusion: What This Project Taught Me

Building Serverless-Typi has been a great way to learn serverless development in depth. The project helped me understand that serverless architecture isn't just about individual functions: it's about creating systems that can adapt to different requirements and environments.

The dual architecture approach showed me that you don't always have to choose between development speed and production optimization. By keeping consistent APIs, you can switch between patterns based on what works best for each environment.

What I accomplished:

  • 10+ Lambda functions with tuned memory allocation

  • Type safety throughout the stack with TypeScript and Zod

  • Single-table design with DynamoDB (which was a learning curve!)

  • Multi-environment deployment with different optimizations

  • Testing coverage from unit tests to load testing

  • Cost optimization through performance tuning

The project scales well and provides a good developer experience. Most importantly, it taught me a lot about building serverless applications that can grow and change over time.

This project convinced me that with some planning and the right tools, you can build serverless applications that are both powerful and maintainable. There's still so much to learn in this space, but this was a great starting point.