Resurrect base evaluator plan; move function-definitions plan to plan_02.md
diff --git a/docs/plan.md b/docs/plan.md
index dff29d6..112d4bb 100644
--- a/docs/plan.md
+++ b/docs/plan.md
@@ -1,73 +1,57 @@
-# Implementation Plan: Function Definitions
+# Math Expression Evaluator — Implementation Plan
-## Overview
-Bottom-up implementation through the stack: token → ast → lexer → parser → evaluator → repl → integration tests. Each step maintains backward compatibility and follows TDD.
+## Phase: Implement
-## Steps
+Steps are ordered. Each step includes writing the code and its unit tests (TDD).
-### Step 1: Token layer (`token/token.go`)
-- Add `Ident`, `Comma`, `Equals` constants to `Type` enum
-- Update `String()` for new types
-- No tests needed — pure data types
+### Step 1: Project Skeleton
+- `go mod init matheval`
+- Create directory structure: `cmd/matheval/`, `token/`, `lexer/`, `ast/`, `parser/`, `evaluator/`, `repl/`
+- Create placeholder `main.go`
-### Step 2: AST layer (`ast/ast.go`)
-- Add `Ident` struct: `Name string`; implements `Node`
-- Add `FuncCall` struct: `Name string`, `Args []Node`; implements `Node`
-- Add `Statement` interface with sealed `stmt()` marker
-- Add `ExprStmt` struct: `Expr Node`; implements `Statement`
-- Add `FuncDef` struct: `Name string`, `Params []string`, `Body Node`; implements `Statement`
-- No tests needed — pure data types
+### Step 2: Token Package
+- Define `Type` enum constants
+- Define `Token` struct
+- Add `String()` method on `Type` for debugging
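
The token step could look roughly like this. A minimal sketch, assuming the operator and paren names (`Plus`, `LParen`, …) and a single-file layout; the real `token` package may pick different constant names:

```go
package main

import "fmt"

// Type identifies the kind of a lexed token.
type Type int

const (
	Number Type = iota // numeric literal, e.g. "3.14"
	Plus               // '+'
	Minus              // '-'
	Star               // '*'
	Slash              // '/'
	LParen             // '('
	RParen             // ')'
	EOF                // end of input
)

// Token pairs a Type with the literal text it was lexed from.
type Token struct {
	Type    Type
	Literal string
}

// String renders a Type for debugging output.
func (t Type) String() string {
	names := [...]string{"Number", "Plus", "Minus", "Star", "Slash", "LParen", "RParen", "EOF"}
	if int(t) < len(names) {
		return names[int(t)]
	}
	return "Unknown"
}

func main() {
	fmt.Println(Token{Type: Number, Literal: "42"}.Type) // prints "Number"
}
```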
-### Step 3: Lexer (`lexer/lexer.go`)
-- Add `isLetter(ch byte) bool` helper
-- Before the single-char switch, add branch: if `isLetter(ch)`, scan identifier (letter then letters/digits), emit `Ident` token
-- Add `','` → `Comma` and `'='` → `Equals` to single-char switch
-- **Tests:** identifiers (`x`, `foo`, `f1`), comma, equals, full definition `f(x) = x + 1`, call `f(1, 2)`, mixed with numbers
+### Step 3: Lexer
+- Implement `Tokenize(input string) ([]Token, error)`
+- Handle: whitespace skipping, number literals (integers and decimals), operators `+-*/`, parentheses `()`, EOF, invalid characters
+- **Tests:** valid expressions, decimal numbers, invalid chars, empty input, whitespace-only
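
A possible shape for `Tokenize`, sketched as a standalone program. To stay short it collapses the four operators into one `Op` kind and inlines simplified token types; the real lexer would use the `token` package's distinct constants:

```go
package main

import (
	"fmt"
	"strings"
)

type Type int

const (
	Number Type = iota
	Op          // any of + - * /
	LParen
	RParen
	EOF
)

type Token struct {
	Type    Type
	Literal string
}

// Tokenize scans input left to right, skipping whitespace and
// grouping digit runs (with optional decimal points) into Number tokens.
func Tokenize(input string) ([]Token, error) {
	var toks []Token
	for i := 0; i < len(input); {
		ch := input[i]
		switch {
		case ch == ' ' || ch == '\t':
			i++
		case strings.ContainsRune("+-*/", rune(ch)):
			toks = append(toks, Token{Op, string(ch)})
			i++
		case ch == '(':
			toks = append(toks, Token{LParen, "("})
			i++
		case ch == ')':
			toks = append(toks, Token{RParen, ")"})
			i++
		case ch >= '0' && ch <= '9':
			j := i
			for j < len(input) && (input[j] >= '0' && input[j] <= '9' || input[j] == '.') {
				j++
			}
			toks = append(toks, Token{Number, input[i:j]})
			i = j
		default:
			return nil, fmt.Errorf("unexpected character %q at %d", ch, i)
		}
	}
	return append(toks, Token{EOF, ""}), nil
}

func main() {
	toks, _ := Tokenize("1 + 2.5")
	for _, t := range toks {
		fmt.Printf("%q ", t.Literal)
	}
	fmt.Println()
}
```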
-### Step 4: Parser (`parser/parser.go`)
-- Extend `factor()`:
- - `Ident` followed by `LParen` → parse `FuncCall`: consume `(`, parse args as comma-separated exprs, consume `)`
- - `Ident` not followed by `LParen` → return `&ast.Ident{Name}`
-- Add `parseFuncDef()`: expects `Ident(` params `) = expr`
-- Add `ParseLine(tokens) (Statement, error)`:
- - Scan for `Equals` token (not inside parens)
- - If found → `parseFuncDef()` → `*ast.FuncDef`
- - If not → `expr()` → `*ast.ExprStmt{Expr}`
-- Keep `Parse()` unchanged for backward compat
-- **Tests:** ParseLine for defs and exprs, factor for ident and func call, error cases
+### Step 4: AST Package
+- Define `Node` interface with sealed marker
+- Define `NumberLit` struct
+- Define `BinaryExpr` struct
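
The AST types might look like this sketch, using an unexported `node()` marker to seal the interface (field names here are assumptions, not settled API):

```go
package main

import "fmt"

// Node is the sealed expression interface: only types in this
// package can implement it, because node() is unexported.
type Node interface{ node() }

// NumberLit is a numeric literal leaf.
type NumberLit struct{ Value float64 }

// BinaryExpr is an operator applied to two sub-expressions.
type BinaryExpr struct {
	Op          byte // '+', '-', '*', '/'
	Left, Right Node
}

func (NumberLit) node()  {}
func (BinaryExpr) node() {}

func main() {
	// (1 + 2) * 3 as a tree
	tree := BinaryExpr{'*', BinaryExpr{'+', NumberLit{1}, NumberLit{2}}, NumberLit{3}}
	fmt.Printf("%+v\n", tree)
}
```

Sealing the interface means the evaluator's type switch can treat its cases as exhaustive.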
-### Step 5: Evaluator (`evaluator/evaluator.go`)
-- Add `Evaluator` struct with `funcs map[string]*ast.FuncDef`
-- `New() *Evaluator`
-- `Define(def *ast.FuncDef) error` — error on redefinition
-- `Eval(node ast.Node, env map[string]float64) (float64, error)`:
- - `*ast.NumberLit` → return value
- - `*ast.BinaryExpr` → recurse left/right with same env
- - `*ast.Ident` → lookup in env, error if not found
- - `*ast.FuncCall` → lookup func, eval args in caller env, bind params, eval body in new env
-- Keep package-level `Eval(node) (float64, error)` as backward-compat wrapper
-- **Tests:** all existing tests still pass, new tests for Ident, FuncCall, Define, errors
+### Step 5: Parser
+- Implement recursive-descent parser following grammar:
+ - `expr → term (('+' | '-') term)*`
+ - `term → factor (('*' | '/') factor)*`
+ - `factor → NUMBER | '(' expr ')'`
+- Internal parser struct to track position in token slice
+- Return error on: unexpected token, mismatched parens, trailing tokens
+- **Tests:** single number, simple binary, precedence, parentheses, nested parens, error cases
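
The grammar above translates almost mechanically into one method per rule. A self-contained sketch, with two simplifications that are mine, not the plan's: tokens are plain strings, and evaluation is folded into the parser so no AST types are needed (the real parser would return `ast.Node` values instead):

```go
package main

import (
	"fmt"
	"strconv"
)

// parser tracks a position in the token slice.
type parser struct {
	toks []string
	pos  int
}

func (p *parser) peek() string {
	if p.pos < len(p.toks) {
		return p.toks[p.pos]
	}
	return "" // acts as EOF
}

// expr → term (('+' | '-') term)*
func (p *parser) expr() (float64, error) {
	v, err := p.term()
	if err != nil {
		return 0, err
	}
	for p.peek() == "+" || p.peek() == "-" {
		op := p.peek()
		p.pos++
		r, err := p.term()
		if err != nil {
			return 0, err
		}
		if op == "+" {
			v += r
		} else {
			v -= r
		}
	}
	return v, nil
}

// term → factor (('*' | '/') factor)*
func (p *parser) term() (float64, error) {
	v, err := p.factor()
	if err != nil {
		return 0, err
	}
	for p.peek() == "*" || p.peek() == "/" {
		op := p.peek()
		p.pos++
		r, err := p.factor()
		if err != nil {
			return 0, err
		}
		if op == "*" {
			v *= r
		} else {
			v /= r
		}
	}
	return v, nil
}

// factor → NUMBER | '(' expr ')'
func (p *parser) factor() (float64, error) {
	if p.peek() == "(" {
		p.pos++
		v, err := p.expr()
		if err != nil {
			return 0, err
		}
		if p.peek() != ")" {
			return 0, fmt.Errorf("missing ')'")
		}
		p.pos++
		return v, nil
	}
	v, err := strconv.ParseFloat(p.peek(), 64)
	if err != nil {
		return 0, fmt.Errorf("unexpected token %q", p.peek())
	}
	p.pos++
	return v, nil
}

// Parse runs expr and rejects trailing tokens.
func Parse(toks []string) (float64, error) {
	p := &parser{toks: toks}
	v, err := p.expr()
	if err == nil && p.pos != len(toks) {
		return 0, fmt.Errorf("trailing token %q", p.peek())
	}
	return v, err
}

func main() {
	v, _ := Parse([]string{"1", "+", "2", "*", "3"})
	fmt.Println(v) // 7 — precedence: 2*3 binds tighter
}
```

Note how precedence falls out of the call structure: `term` calls `factor` before `expr` sees the result, so `*` and `/` bind tighter without any explicit precedence table.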
-### Step 6: REPL (`repl/repl.go`)
-- In `Run()`: create `evaluator.New()` before loop
-- Replace `evalLine()` with inline logic using `ParseLine()`
-- `*ast.FuncDef` → `ev.Define(def)`, print `"defined <name>"`
-- `*ast.ExprStmt` → `ev.Eval(stmt.Expr, nil)`, print result
-- **Tests:** define + call across lines, redefine error, undefined func error
+### Step 6: Evaluator
+- Implement `Eval(node ast.Node) (float64, error)`
+- Recursively walk AST
+- Return error on division by zero
+- **Tests:** literals, all 4 operators, nested expressions, division by zero
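
The evaluator walk can be sketched as a type switch over the node kinds. Minimal copies of the AST types are inlined so the sketch stands alone; the real code would switch on `*ast.NumberLit` etc.:

```go
package main

import (
	"errors"
	"fmt"
)

// Simplified stand-ins for the ast package's types.
type Node interface{ node() }
type NumberLit struct{ Value float64 }
type BinaryExpr struct {
	Op          byte
	Left, Right Node
}

func (NumberLit) node()  {}
func (BinaryExpr) node() {}

// Eval walks the tree bottom-up, returning an error for
// division by zero or an unknown node type.
func Eval(n Node) (float64, error) {
	switch n := n.(type) {
	case NumberLit:
		return n.Value, nil
	case BinaryExpr:
		l, err := Eval(n.Left)
		if err != nil {
			return 0, err
		}
		r, err := Eval(n.Right)
		if err != nil {
			return 0, err
		}
		switch n.Op {
		case '+':
			return l + r, nil
		case '-':
			return l - r, nil
		case '*':
			return l * r, nil
		case '/':
			if r == 0 {
				return 0, errors.New("division by zero")
			}
			return l / r, nil
		}
	}
	return 0, fmt.Errorf("unknown node %T", n)
}

func main() {
	// (1 + 2) * 3
	v, _ := Eval(BinaryExpr{'*', BinaryExpr{'+', NumberLit{1}, NumberLit{2}}, NumberLit{3}})
	fmt.Println(v) // 9
}
```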
-### Step 7: Integration tests (`integration_test.go`)
-- Update `eval()`/`evalErr()` helpers to use `Evaluator` struct
-- Add tests:
- - Define and call single-param function
- - Define and call multi-param function
- - Cross-function calls
- - Nested function calls in expressions
- - Error: undefined function
- - Error: wrong argument count
- - Error: function redefinition
- - Error: undefined variable
+### Step 7: REPL
+- Implement `Run(r io.Reader, w io.Writer)`
+- Read line, tokenize, parse, evaluate, print result or error
+- Loop until EOF
+- **Tests:** successful expression, error expression, multi-line session
-## Backward Compatibility
-- `Parse()` remains unchanged — returns `ast.Node`
-- Package-level `Eval()` remains — wraps `New().Eval(node, nil)`
-- Existing tests must continue to pass at every step
+### Step 8: main.go
+- Wire `repl.Run(os.Stdin, os.Stdout)`
+
+### Step 9: Integration Test
+- End-to-end test: feed expression string through all stages, verify result
+- Test edge cases: deeply nested parens, long expressions
+
+### Step 10: Final Commit & README
+- Write README.md with usage instructions
+- Final commit
diff --git a/docs/plan_02.md b/docs/plan_02.md
new file mode 100644
index 0000000..dff29d6
--- /dev/null
+++ b/docs/plan_02.md
@@ -0,0 +1,73 @@
+# Implementation Plan: Function Definitions
+
+## Overview
+Bottom-up implementation through the stack: token → ast → lexer → parser → evaluator → repl → integration tests. Each step maintains backward compatibility and follows TDD.
+
+## Steps
+
+### Step 1: Token layer (`token/token.go`)
+- Add `Ident`, `Comma`, `Equals` constants to `Type` enum
+- Update `String()` for new types
+- No tests needed — pure data types
+
+### Step 2: AST layer (`ast/ast.go`)
+- Add `Ident` struct: `Name string`; implements `Node`
+- Add `FuncCall` struct: `Name string`, `Args []Node`; implements `Node`
+- Add `Statement` interface with sealed `stmt()` marker
+- Add `ExprStmt` struct: `Expr Node`; implements `Statement`
+- Add `FuncDef` struct: `Name string`, `Params []string`, `Body Node`; implements `Statement`
+- No tests needed — pure data types
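
The sealed `Statement` side of this step could look like the following sketch (pointer receivers for `stmt()` are an assumption):

```go
package main

import "fmt"

type Node interface{ node() }
type NumberLit struct{ Value float64 }

func (NumberLit) node() {}

// Statement is sealed by the unexported stmt() marker: only this
// package's types can satisfy it, so a type switch over
// *ExprStmt and *FuncDef covers every case.
type Statement interface{ stmt() }

type ExprStmt struct{ Expr Node }
type FuncDef struct {
	Name   string
	Params []string
	Body   Node
}

func (*ExprStmt) stmt() {}
func (*FuncDef) stmt()  {}

func main() {
	var s Statement = &ExprStmt{Expr: NumberLit{1}}
	switch s.(type) {
	case *ExprStmt:
		fmt.Println("expression")
	case *FuncDef:
		fmt.Println("definition")
	}
}
```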
+
+### Step 3: Lexer (`lexer/lexer.go`)
+- Add `isLetter(ch byte) bool` helper
+- Before the single-char switch, add branch: if `isLetter(ch)`, scan identifier (letter then letters/digits), emit `Ident` token
+- Add `','` → `Comma` and `'='` → `Equals` to single-char switch
+- **Tests:** identifiers (`x`, `foo`, `f1`), comma, equals, full definition `f(x) = x + 1`, call `f(1, 2)`, mixed with numbers
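
The identifier branch can be sketched in isolation; `isLetter` is from the plan, while `scanIdent` and `isDigit` are hypothetical helper names:

```go
package main

import "fmt"

func isLetter(ch byte) bool {
	return 'a' <= ch && ch <= 'z' || 'A' <= ch && ch <= 'Z'
}

func isDigit(ch byte) bool { return '0' <= ch && ch <= '9' }

// scanIdent assumes isLetter(input[i]) holds, consumes a letter
// followed by letters/digits, and returns the lexeme plus the
// position just past it.
func scanIdent(input string, i int) (string, int) {
	j := i + 1
	for j < len(input) && (isLetter(input[j]) || isDigit(input[j])) {
		j++
	}
	return input[i:j], j
}

func main() {
	lex, next := scanIdent("f1(x)", 0)
	fmt.Println(lex, next) // "f1" 2
}
```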
+
+### Step 4: Parser (`parser/parser.go`)
+- Extend `factor()`:
+ - `Ident` followed by `LParen` → parse `FuncCall`: consume `(`, parse args as comma-separated exprs, consume `)`
+ - `Ident` not followed by `LParen` → return `&ast.Ident{Name}`
+- Add `parseFuncDef()`: expects `Ident(` params `) = expr`
+- Add `ParseLine(tokens) (Statement, error)`:
+ - Scan for `Equals` token (not inside parens)
+ - If found → `parseFuncDef()` → `*ast.FuncDef`
+ - If not → `expr()` → `*ast.ExprStmt{Expr}`
+- Keep `Parse()` unchanged for backward compat
+- **Tests:** ParseLine for defs and exprs, factor for ident and func call, error cases
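
The "Equals not inside parens" scan that drives `ParseLine`'s dispatch amounts to tracking paren depth. A sketch over plain string tokens (the real code would inspect `token.Type` values):

```go
package main

import "fmt"

// hasTopLevelEquals reports whether a '=' token appears outside
// any parentheses — the test for "this line is a definition".
func hasTopLevelEquals(toks []string) bool {
	depth := 0
	for _, t := range toks {
		switch t {
		case "(":
			depth++
		case ")":
			depth--
		case "=":
			if depth == 0 {
				return true
			}
		}
	}
	return false
}

func main() {
	fmt.Println(hasTopLevelEquals([]string{"f", "(", "x", ")", "=", "x", "+", "1"})) // true
	fmt.Println(hasTopLevelEquals([]string{"f", "(", "1", ")"}))                     // false
}
```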
+
+### Step 5: Evaluator (`evaluator/evaluator.go`)
+- Add `Evaluator` struct with `funcs map[string]*ast.FuncDef`
+- `New() *Evaluator`
+- `Define(def *ast.FuncDef) error` — error on redefinition
+- `Eval(node ast.Node, env map[string]float64) (float64, error)`:
+ - `*ast.NumberLit` → return value
+ - `*ast.BinaryExpr` → recurse left/right with same env
+ - `*ast.Ident` → lookup in env, error if not found
+ - `*ast.FuncCall` → lookup func, eval args in caller env, bind params, eval body in new env
+- Keep package-level `Eval(node) (float64, error)` as backward-compat wrapper
+- **Tests:** all existing tests still pass, new tests for Ident, FuncCall, Define, errors
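
The environment and call-binding logic is the subtle part of this step, so here is a sketch of it. To stay short it inlines simplified AST types and omits `BinaryExpr`; the arg-evaluation-in-caller-env, param-binding-in-fresh-env sequence is the point:

```go
package main

import "fmt"

// Simplified stand-ins for the ast package's types.
type Node interface{ node() }
type NumberLit struct{ Value float64 }
type Ident struct{ Name string }
type FuncCall struct {
	Name string
	Args []Node
}
type FuncDef struct {
	Name   string
	Params []string
	Body   Node
}

func (NumberLit) node() {}
func (Ident) node()     {}
func (FuncCall) node()  {}

type Evaluator struct{ funcs map[string]*FuncDef }

func New() *Evaluator { return &Evaluator{funcs: map[string]*FuncDef{}} }

// Define registers a function, rejecting redefinition.
func (e *Evaluator) Define(def *FuncDef) error {
	if _, ok := e.funcs[def.Name]; ok {
		return fmt.Errorf("function %q already defined", def.Name)
	}
	e.funcs[def.Name] = def
	return nil
}

func (e *Evaluator) Eval(n Node, env map[string]float64) (float64, error) {
	switch n := n.(type) {
	case NumberLit:
		return n.Value, nil
	case Ident:
		v, ok := env[n.Name]
		if !ok {
			return 0, fmt.Errorf("undefined variable %q", n.Name)
		}
		return v, nil
	case FuncCall:
		def, ok := e.funcs[n.Name]
		if !ok {
			return 0, fmt.Errorf("undefined function %q", n.Name)
		}
		if len(n.Args) != len(def.Params) {
			return 0, fmt.Errorf("%s: want %d args, got %d", n.Name, len(def.Params), len(n.Args))
		}
		// Evaluate args in the caller's env, then bind params in a fresh one.
		local := map[string]float64{}
		for i, a := range n.Args {
			v, err := e.Eval(a, env)
			if err != nil {
				return 0, err
			}
			local[def.Params[i]] = v
		}
		return e.Eval(def.Body, local)
	}
	return 0, fmt.Errorf("unknown node %T", n)
}

func main() {
	ev := New()
	// id(x) = x
	ev.Define(&FuncDef{Name: "id", Params: []string{"x"}, Body: Ident{"x"}})
	v, _ := ev.Eval(FuncCall{"id", []Node{NumberLit{42}}}, nil)
	fmt.Println(v) // 42
}
```

Evaluating the body in a fresh env (rather than layering `local` over the caller's) is what gives functions lexical isolation: a body can only see its own parameters.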
+
+### Step 6: REPL (`repl/repl.go`)
+- In `Run()`: create `evaluator.New()` before loop
+- Replace `evalLine()` with inline logic using `ParseLine()`
+- `*ast.FuncDef` → `ev.Define(def)`, print `"defined <name>"`
+- `*ast.ExprStmt` → `ev.Eval(stmt.Expr, nil)`, print result
+- **Tests:** define + call across lines, redefine error, undefined func error
+
+### Step 7: Integration tests (`integration_test.go`)
+- Update `eval()`/`evalErr()` helpers to use `Evaluator` struct
+- Add tests:
+ - Define and call single-param function
+ - Define and call multi-param function
+ - Cross-function calls
+ - Nested function calls in expressions
+ - Error: undefined function
+ - Error: wrong argument count
+ - Error: function redefinition
+ - Error: undefined variable
+
+## Backward Compatibility
+- `Parse()` remains unchanged — returns `ast.Node`
+- Package-level `Eval()` remains — wraps `New().Eval(node, nil)`
+- Existing tests must continue to pass at every step