Step 1: project skeleton - go mod, directory structure, placeholder main.go
diff --git a/cmd/matheval/main.go b/cmd/matheval/main.go
new file mode 100644
index 0000000..b66f2f7
--- /dev/null
+++ b/cmd/matheval/main.go
@@ -0,0 +1,7 @@
+package main
+
+import "fmt"
+
+func main() {
+	fmt.Println("matheval - math expression evaluator")
+}
diff --git a/docs/design.md b/docs/design.md
new file mode 100644
index 0000000..9b9ffd5
--- /dev/null
+++ b/docs/design.md
@@ -0,0 +1,194 @@
+# Math Expression Evaluator — Design Document
+
+## Requirements Summary
+- Language: Go
+- Operators: `+`, `-`, `*`, `/` with parentheses
+- Numbers: floating point (e.g. `3.14`, `42`, `0.5`)
+- Variables: none
+- Interface: CLI REPL
+- Error handling: print error message, continue REPL
+
+## Approaches Considered
+
+### 1. Recursive-Descent with AST (chosen)
+- **Lexer → Parser → AST → Evaluator → REPL**
+- Clean separation: each stage is independently testable
+- AST is a reusable intermediate representation
+- Easy to extend (new operators, pretty-printing, optimization)
+- Well-suited to the two precedence levels plus parentheses required here
+
+### 2. Recursive-Descent with Direct Evaluation
+- Parser evaluates inline — no AST
+- Fewer types, less code
+- Couples parsing and evaluation — harder to test, extend
+
+### 3. Shunting-Yard Algorithm
+- Converts to RPN then evaluates
+- Good for many precedence levels; overkill here
+- Harder to produce clear error messages
+
+**Decision:** Approach 1. The AST adds minimal overhead but provides clean boundaries.
+
+## Architecture
+
+```
+Input string
+    β”‚
+    β–Ό
+ β”Œβ”€β”€β”€β”€β”€β”€β”€β”
+ β”‚ Lexer β”‚  string → []Token
+ β””β”€β”€β”€β”¬β”€β”€β”€β”˜
+     β”‚
+     β–Ό
+ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”
+ β”‚ Parser β”‚  []Token → AST (Node)
+ β””β”€β”€β”€β”¬β”€β”€β”€β”€β”˜
+     β”‚
+     β–Ό
+ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
+ β”‚ Evaluator β”‚  Node → float64
+ β””β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”˜
+     β”‚
+     β–Ό
+ β”Œβ”€β”€β”€β”€β”€β”€β”
+ β”‚ REPL β”‚  read line → eval → print result or error
+ β””β”€β”€β”€β”€β”€β”€β”˜
+```
+
+## Component Interfaces
+
+### Token (data type)
+
+```go
+package token
+
+type Type int
+
+const (
+    Number Type = iota
+    Plus        // +
+    Minus       // -
+    Star        // *
+    Slash       // /
+    LParen      // (
+    RParen      // )
+    EOF
+)
+
+type Token struct {
+    Type    Type
+    Literal string  // raw text, e.g. "3.14", "+"
+    Pos     int     // position in input (for error messages)
+}
+```
+
+### Lexer
+
+```go
+package lexer
+
+// Tokenize converts an input string into a slice of tokens.
+// Returns an error if the input contains invalid characters.
+func Tokenize(input string) ([]token.Token, error)
+```
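+
+A possible shape for `Tokenize` (a sketch under the interface above, not the final implementation; the `singleChar` table and the error wording are illustrative):
+
+```go
+package lexer
+
+import (
+    "fmt"
+
+    "matheval/token"
+)
+
+var singleChar = map[byte]token.Type{
+    '+': token.Plus, '-': token.Minus, '*': token.Star,
+    '/': token.Slash, '(': token.LParen, ')': token.RParen,
+}
+
+// Tokenize converts an input string into a slice of tokens,
+// always ending with an EOF token.
+func Tokenize(input string) ([]token.Token, error) {
+    var toks []token.Token
+    for i := 0; i < len(input); {
+        c := input[i]
+        switch {
+        case c == ' ' || c == '\t':
+            i++ // skip whitespace
+        case c >= '0' && c <= '9' || c == '.':
+            // Greedily consume digits and dots; malformed literals like
+            // "1.2.3" are caught later by strconv.ParseFloat in the parser.
+            start := i
+            for i < len(input) && (input[i] >= '0' && input[i] <= '9' || input[i] == '.') {
+                i++
+            }
+            toks = append(toks, token.Token{Type: token.Number, Literal: input[start:i], Pos: start})
+        default:
+            t, ok := singleChar[c]
+            if !ok {
+                return nil, fmt.Errorf("invalid character %q at position %d", c, i)
+            }
+            toks = append(toks, token.Token{Type: t, Literal: string(c), Pos: i})
+            i++
+        }
+    }
+    return append(toks, token.Token{Type: token.EOF, Pos: len(input)}), nil
+}
+```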
+
+### AST (data types)
+
+```go
+package ast
+
+// Node is the interface all AST nodes implement.
+type Node interface {
+    node() // sealed marker method
+}
+
+// NumberLit represents a numeric literal.
+type NumberLit struct {
+    Value float64
+}
+
+// BinaryExpr represents a binary operation (e.g. 1 + 2).
+type BinaryExpr struct {
+    Op    token.Type  // Plus, Minus, Star, Slash
+    Left  Node
+    Right Node
+}
+```
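+
+For example, `1 + 2 * 3` would become the following tree (assuming the node types satisfy `Node` via pointer receivers); precedence places the multiplication deeper:
+
+```go
+// AST for 1 + 2 * 3: the root is the lower-precedence '+'.
+expr := &ast.BinaryExpr{
+    Op:   token.Plus,
+    Left: &ast.NumberLit{Value: 1},
+    Right: &ast.BinaryExpr{
+        Op:    token.Star,
+        Left:  &ast.NumberLit{Value: 2},
+        Right: &ast.NumberLit{Value: 3},
+    },
+}
+```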
+
+### Parser
+
+```go
+package parser
+
+// Parse converts a slice of tokens into an AST.
+// Returns an error for malformed expressions (mismatched parens, etc.).
+func Parse(tokens []token.Token) (ast.Node, error)
+```
+
+Grammar (recursive-descent):
+```
+expr   → term (('+' | '-') term)*
+term   → factor (('*' | '/') factor)*
+factor → NUMBER | '(' expr ')'
+```
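+
+Each grammar rule maps onto one function. A sketch of that structure (the unexported `parser` struct matches the plan's position-tracking note; the `peek`/`next` helper names are illustrative):
+
+```go
+package parser
+
+import (
+    "fmt"
+    "strconv"
+
+    "matheval/ast"
+    "matheval/token"
+)
+
+type parser struct {
+    toks []token.Token
+    pos  int
+}
+
+func (p *parser) peek() token.Token { return p.toks[p.pos] }
+func (p *parser) next() token.Token { t := p.toks[p.pos]; p.pos++; return t }
+
+// Parse runs the expr rule, then rejects any trailing tokens.
+func Parse(toks []token.Token) (ast.Node, error) {
+    p := &parser{toks: toks}
+    n, err := p.parseExpr()
+    if err != nil {
+        return nil, err
+    }
+    if t := p.peek(); t.Type != token.EOF {
+        return nil, fmt.Errorf("unexpected token %q at position %d", t.Literal, t.Pos)
+    }
+    return n, nil
+}
+
+// expr → term (('+' | '-') term)*
+func (p *parser) parseExpr() (ast.Node, error) {
+    left, err := p.parseTerm()
+    if err != nil {
+        return nil, err
+    }
+    for p.peek().Type == token.Plus || p.peek().Type == token.Minus {
+        op := p.next().Type
+        right, err := p.parseTerm()
+        if err != nil {
+            return nil, err
+        }
+        left = &ast.BinaryExpr{Op: op, Left: left, Right: right}
+    }
+    return left, nil
+}
+
+// term → factor (('*' | '/') factor)*  (same shape, one level down)
+func (p *parser) parseTerm() (ast.Node, error) {
+    left, err := p.parseFactor()
+    if err != nil {
+        return nil, err
+    }
+    for p.peek().Type == token.Star || p.peek().Type == token.Slash {
+        op := p.next().Type
+        right, err := p.parseFactor()
+        if err != nil {
+            return nil, err
+        }
+        left = &ast.BinaryExpr{Op: op, Left: left, Right: right}
+    }
+    return left, nil
+}
+
+// factor → NUMBER | '(' expr ')'
+func (p *parser) parseFactor() (ast.Node, error) {
+    switch t := p.next(); t.Type {
+    case token.Number:
+        v, err := strconv.ParseFloat(t.Literal, 64)
+        if err != nil {
+            return nil, fmt.Errorf("bad number %q at position %d", t.Literal, t.Pos)
+        }
+        return &ast.NumberLit{Value: v}, nil
+    case token.LParen:
+        n, err := p.parseExpr()
+        if err != nil {
+            return nil, err
+        }
+        if r := p.next(); r.Type != token.RParen {
+            return nil, fmt.Errorf("expected ')' at position %d", r.Pos)
+        }
+        return n, nil
+    default:
+        return nil, fmt.Errorf("unexpected token %q at position %d", t.Literal, t.Pos)
+    }
+}
+```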
+
+### Evaluator
+
+```go
+package evaluator
+
+// Eval evaluates an AST node and returns the result.
+// Returns an error on division by zero.
+func Eval(node ast.Node) (float64, error)
+```
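+
+A sketch of the recursive walk (assuming nodes are handled via pointers; the zero check must be explicit, since plain `float64` division by zero would silently yield ±Inf rather than an error):
+
+```go
+package evaluator
+
+import (
+    "errors"
+    "fmt"
+
+    "matheval/ast"
+    "matheval/token"
+)
+
+// Eval evaluates an AST node bottom-up.
+func Eval(node ast.Node) (float64, error) {
+    switch n := node.(type) {
+    case *ast.NumberLit:
+        return n.Value, nil
+    case *ast.BinaryExpr:
+        l, err := Eval(n.Left)
+        if err != nil {
+            return 0, err
+        }
+        r, err := Eval(n.Right)
+        if err != nil {
+            return 0, err
+        }
+        switch n.Op {
+        case token.Plus:
+            return l + r, nil
+        case token.Minus:
+            return l - r, nil
+        case token.Star:
+            return l * r, nil
+        case token.Slash:
+            if r == 0 {
+                return 0, errors.New("division by zero")
+            }
+            return l / r, nil
+        }
+    }
+    return 0, fmt.Errorf("unknown node %T", node)
+}
+```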
+
+### REPL
+
+```go
+package repl
+
+// Run starts the read-eval-print loop, reading from r and writing to w.
+func Run(r io.Reader, w io.Writer)
+```
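+
+A sketch of the loop (the `>` prompt and the unexported `eval` helper are illustrative choices, not settled API):
+
+```go
+package repl
+
+import (
+    "bufio"
+    "fmt"
+    "io"
+
+    "matheval/evaluator"
+    "matheval/lexer"
+    "matheval/parser"
+)
+
+// Run reads lines until EOF, printing each result or error and continuing.
+func Run(r io.Reader, w io.Writer) {
+    sc := bufio.NewScanner(r)
+    fmt.Fprint(w, "> ")
+    for sc.Scan() {
+        if result, err := eval(sc.Text()); err != nil {
+            fmt.Fprintln(w, "error:", err)
+        } else {
+            fmt.Fprintln(w, result)
+        }
+        fmt.Fprint(w, "> ")
+    }
+}
+
+// eval pipes one line through all three stages.
+func eval(line string) (float64, error) {
+    toks, err := lexer.Tokenize(line)
+    if err != nil {
+        return 0, err
+    }
+    node, err := parser.Parse(toks)
+    if err != nil {
+        return 0, err
+    }
+    return evaluator.Eval(node)
+}
+```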
+
+## Package Layout
+
+```
+matheval/
+β”œβ”€β”€ cmd/
+β”‚   └── matheval/
+β”‚       └── main.go          # entry point, calls repl.Run
+β”œβ”€β”€ token/
+β”‚   └── token.go             # Token type and constants
+β”œβ”€β”€ lexer/
+β”‚   β”œβ”€β”€ lexer.go             # Tokenize function
+β”‚   └── lexer_test.go
+β”œβ”€β”€ ast/
+β”‚   └── ast.go               # AST node types
+β”œβ”€β”€ parser/
+β”‚   β”œβ”€β”€ parser.go            # Parse function
+β”‚   └── parser_test.go
+β”œβ”€β”€ evaluator/
+β”‚   β”œβ”€β”€ evaluator.go         # Eval function
+β”‚   └── evaluator_test.go
+β”œβ”€β”€ repl/
+β”‚   β”œβ”€β”€ repl.go              # REPL loop
+β”‚   └── repl_test.go
+β”œβ”€β”€ docs/
+β”‚   β”œβ”€β”€ design.md
+β”‚   └── plan.md
+β”œβ”€β”€ go.mod
+└── README.md
+```
+
+## Error Handling
+- Lexer: returns error for invalid characters (e.g. `@`, `#`)
+- Parser: returns error for syntax errors (unexpected token, mismatched parens)
+- Evaluator: returns error for division by zero (checked explicitly, since plain float64 division would silently yield ±Inf)
+- REPL: catches any error, prints it, prompts for next input
+
+## Key Design Decisions
+1. **Functional API over structs** — `Tokenize()`, `Parse()`, `Eval()` are stateless functions. No need for struct receivers since there's no configuration or state to carry.
+2. **Sealed AST interface** — unexported marker method prevents external implementations, keeping the node set closed.
+3. **Position tracking in tokens** — enables precise error messages ("error at position 5").
+4. **REPL takes io.Reader/io.Writer** — makes it testable without stdin/stdout.
diff --git a/docs/plan.md b/docs/plan.md
new file mode 100644
index 0000000..112d4bb
--- /dev/null
+++ b/docs/plan.md
@@ -0,0 +1,57 @@
+# Math Expression Evaluator — Implementation Plan
+
+## Phase: Implement
+
+Steps are ordered. Each step includes writing the code and its unit tests (TDD).
+
+### Step 1: Project Skeleton
+- `go mod init matheval`
+- Create directory structure: `cmd/matheval/`, `token/`, `lexer/`, `ast/`, `parser/`, `evaluator/`, `repl/`
+- Create placeholder `main.go`
+
+### Step 2: Token Package
+- Define `Type` enum constants
+- Define `Token` struct
+- Add `String()` method on `Type` for debugging
+
+### Step 3: Lexer
+- Implement `Tokenize(input string) ([]Token, error)`
+- Handle: whitespace skipping, number literals (integers and decimals), operators `+-*/`, parentheses `()`, EOF, invalid characters
+- **Tests:** valid expressions, decimal numbers, invalid chars, empty input, whitespace-only
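+
+These tests could follow Go's table-driven convention (a sketch; the exact case list is illustrative):
+
+```go
+func TestTokenize(t *testing.T) {
+    tests := []struct {
+        input   string
+        want    []token.Type // expected token types, including EOF
+        wantErr bool
+    }{
+        {"1 + 2", []token.Type{token.Number, token.Plus, token.Number, token.EOF}, false},
+        {"3.14", []token.Type{token.Number, token.EOF}, false},
+        {"", []token.Type{token.EOF}, false},
+        {"   ", []token.Type{token.EOF}, false},
+        {"2 @ 3", nil, true},
+    }
+    for _, tt := range tests {
+        toks, err := lexer.Tokenize(tt.input)
+        if (err != nil) != tt.wantErr {
+            t.Fatalf("Tokenize(%q) error = %v, wantErr %v", tt.input, err, tt.wantErr)
+        }
+        for i, tok := range toks {
+            if tok.Type != tt.want[i] {
+                t.Errorf("Tokenize(%q)[%d] = %v, want %v", tt.input, i, tok.Type, tt.want[i])
+            }
+        }
+    }
+}
+```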
+
+### Step 4: AST Package
+- Define `Node` interface with sealed marker
+- Define `NumberLit` struct
+- Define `BinaryExpr` struct
+
+### Step 5: Parser
+- Implement recursive-descent parser following grammar:
+  - `expr → term (('+' | '-') term)*`
+  - `term → factor (('*' | '/') factor)*`
+  - `factor → NUMBER | '(' expr ')'`
+- Internal parser struct to track position in token slice
+- Return error on: unexpected token, mismatched parens, trailing tokens
+- **Tests:** single number, simple binary, precedence, parentheses, nested parens, error cases
+
+### Step 6: Evaluator
+- Implement `Eval(node ast.Node) (float64, error)`
+- Recursively walk AST
+- Return error on division by zero
+- **Tests:** literals, all 4 operators, nested expressions, division by zero
+
+### Step 7: REPL
+- Implement `Run(r io.Reader, w io.Writer)`
+- Read line, tokenize, parse, evaluate, print result or error
+- Loop until EOF
+- **Tests:** successful expression, error expression, multi-line session
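+
+Because `Run` takes `io.Reader`/`io.Writer`, a session test needs no stdin/stdout plumbing (a sketch; the expected substrings depend on the final output format):
+
+```go
+func TestRun(t *testing.T) {
+    // One valid expression, then a malformed one; the loop must survive both.
+    in := strings.NewReader("1 + 2\n(3\n")
+    var out bytes.Buffer
+    repl.Run(in, &out)
+    got := out.String()
+    if !strings.Contains(got, "3") {
+        t.Errorf("expected result 3 in output, got %q", got)
+    }
+    if !strings.Contains(got, "error") {
+        t.Errorf("expected an error line for malformed input, got %q", got)
+    }
+}
+```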
+
+### Step 8: main.go
+- Wire `repl.Run(os.Stdin, os.Stdout)`
+
+### Step 9: Integration Test
+- End-to-end test: feed expression string through all stages, verify result
+- Test edge cases: deeply nested parens, long expressions
+
+### Step 10: Final Commit & README
+- Write README.md with usage instructions
+- Final commit
diff --git a/go.mod b/go.mod
new file mode 100644
index 0000000..3612a82
--- /dev/null
+++ b/go.mod
@@ -0,0 +1,3 @@
+module matheval
+
+go 1.23.1