README
diff --git a/README.md b/README.md
index c115543..f0cd8e5 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,6 @@
# matheval
-A math expression evaluator with an interactive REPL, written in Go.
+A math expression evaluator with user-defined functions and an interactive REPL, written in Go.
## Features
@@ -9,6 +9,9 @@
- Floating point numbers (including `.5` syntax)
- Correct operator precedence (`*` and `/` bind tighter than `+` and `-`)
- Left-to-right associativity
+- User-defined functions with `f(x) = x + 1` syntax
+- Multi-parameter functions: `f(x, y) = x + y`
+- Functions can call other user-defined functions (late binding)
- Clear error messages with position reporting
## Build
@@ -25,7 +28,7 @@
./matheval
```
-Then type expressions:
+Then type expressions and define functions:
```
>> 2 + 3 * 4
@@ -36,6 +39,18 @@
3.5
>> 1 / 0
error: division by zero
+>> square(x) = x * x
+defined square
+>> square(5)
+25
+>> add(x, y) = x + y
+defined add
+>> add(2, 3)
+5
+>> hyp(a, b) = square(a) + square(b)
+defined hyp
+>> hyp(3, 4)
+25
```
Press `Ctrl+D` (EOF) to exit.
@@ -52,23 +67,35 @@
Input string → Lexer → Parser → AST → Evaluator → Result
```
-| Package | Responsibility |
-|-------------|---------------------------------------|
-| `token` | Token types and data structures |
-| `lexer` | Tokenizes input string |
-| `ast` | AST node types (`NumberLit`, `BinaryExpr`) |
-| `parser` | Recursive-descent parser |
-| `evaluator` | Walks AST and computes result |
-| `repl` | Read-eval-print loop |
+| Package | Responsibility |
+|-------------|---------------------------------------------------------|
+| `token` | Token types and data structures |
+| `lexer` | Tokenizes input string |
+| `ast` | AST node types (`NumberLit`, `BinaryExpr`, `Ident`, `FuncCall`, `FuncDef`) |
+| `parser` | Recursive-descent parser (expressions and function definitions) |
+| `evaluator` | Stateful evaluator with function registry |
+| `repl` | Read-eval-print loop with persistent state |
## Grammar
```
-expr → term (('+' | '-') term)*
-term → factor (('*' | '/') factor)*
-factor → NUMBER | '(' expr ')'
+line → funcdef | expr
+funcdef → IDENT '(' params ')' '=' expr
+params → IDENT (',' IDENT)*
+expr → term (('+' | '-') term)*
+term → factor (('*' | '/') factor)*
+factor → NUMBER | IDENT '(' args ')' | IDENT | '(' expr ')'
+args → expr (',' expr)*
```
+## Function Rules
+
+- **Define:** `name(params) = body` — body is any expression using the parameters
+- **Call:** `name(args)` — arguments are arbitrary expressions
+- **Redefinition:** Not allowed (returns an error)
+- **Cross-calling:** Functions can call other user-defined functions
+- **Late binding:** References resolved at call time, not definition time
+
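+Because references are resolved late, a function body may mention a function that has not been defined yet; the name only needs to exist by the time the call is evaluated:
+
+```
+>> g(x) = f(x) + 1
+defined g
+>> f(x) = x * 2
+defined f
+>> g(10)
+21
+```
+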
## Tests
```sh
diff --git a/sessions/03.md b/sessions/03.md
new file mode 100644
index 0000000..869d5f7
--- /dev/null
+++ b/sessions/03.md
@@ -0,0 +1,50 @@
+../neo/neo
+YOU: read ONLY docs files and update project README.md
+## USER NO ITEMS
+1: read ONLY docs files and update project README.md - IN PROGRESS assistant false
+-- AGENT START WORKING 1
+--- STOP_REASON: tool_use
+AI:
+
+I'll start by reading the docs directory and the current README.md to understand what we're working with.
+CALLING TOOL: dir_list {"name":"docs"}
+$$$ {"entries":[{"name":"design.md","is_dir":false},{"name":"plan.md","is_dir":false},{"name":"plan_02.md","is_dir":false}]}
+
+CALLING TOOL: bash_command {"command":"test -f README.md && echo \"EXISTS\" || echo \"NOT FOUND\""}
+$$$ "EXISTS\n"
+
+--- STOP_REASON: tool_use
+CALLING TOOL: file_read {"path":"docs/design.md"}
+$$$ "# Math Expression Evaluator — Design Document\n\n## Requirements Summary\n- Language: Go\n- Operators: `+`, `-`, `*`, `/` with parentheses\n- Numbers: floating point (e.g. `3.14`, `42`, `0.5`)\n- **Functions: user-defined with `f(x) = x + 1` syntax**\n- Interface: CLI REPL\n- Error handling: print error message, continue REPL\n\n### Function Definition Requirements\n- **Syntax:** `f(x) = x + 1` — name, parenthesized params, `=`, body expression\n- **Multiple params:** `f(x, y) = x + y`\n- **Function calls:** Allowed anywhere a number can appear; arguments are arbitrary expressions\n- **Persistence:** Definitions persist across REPL lines\n- **Cross-calling:** Functions can call other user-defined functions (late binding)\n- **Built-ins:** None\n- **Redefinition:** Not allowed (error)\n- **Output on define:** Print \"defined \u003cname\u003e\"\n\n## Approaches Considered\n\n### 1. Recursive-Descent with AST (chosen)\n- **Lexer → Parser → AST → Evaluator → REPL**\n- Clean separation: each stage is independently testable\n- AST is a reusable intermediate representation\n- Easy to extend (new operators, pretty-printing, optimization)\n- Well-suited for 2 precedence levels + parentheses\n\n### 2. Recursive-Descent with Direct Evaluation\n- Parser evaluates inline — no AST\n- Fewer types, less code\n- Couples parsing and evaluation — harder to test, extend\n\n### 3. Shunting-Yard Algorithm\n- Converts to RPN then evaluates\n- Good for many precedence levels; overkill here\n- Harder to produce clear error messages\n\n**Decision:** Approach 1. 
The AST adds minimal overhead but provides clean boundaries.\n\n## Architecture\n\n```\nInput string\n    │\n    ▼\n┌───────┐\n│ Lexer │ string → []Token\n└───┬───┘\n    │\n    ▼\n┌────────┐\n│ Parser │ []Token → Statement (ExprStmt | FuncDef)\n└───┬────┘\n    │\n    ▼\n┌───────────┐\n│ Evaluator │ stateful: function registry + expression evaluation\n└───┬───────┘\n    │\n    ▼\n┌──────┐\n│ REPL │ read line → parse → route (define or eval) → print\n└──────┘\n```\n\n## Component Interfaces\n\n### Token (data type)\n\n```go\npackage token\n\ntype Type int\n\nconst (\n Number Type = iota // numeric literal\n Plus // +\n Minus // -\n Star // *\n Slash // /\n LParen // (\n RParen // )\n Ident // identifier (e.g. f, x, myFunc)\n Comma // ,\n Equals // =\n EOF // end of input\n)\n\ntype Token struct {\n Type Type\n Literal string // raw text, e.g. \"3.14\", \"+\", \"f\"\n Pos int // position in input (for error messages)\n}\n```\n\n### Lexer\n\n```go\npackage lexer\n\n// Tokenize converts an input string into a slice of tokens.\n// Returns an error if the input contains invalid characters.\n// Recognizes: numbers, operators, parens, identifiers, comma, equals.\nfunc Tokenize(input string) ([]token.Token, error)\n```\n\n### AST (data types)\n\n```go\npackage ast\n\n// Node is the interface all expression AST nodes implement.\ntype Node interface {\n node() // sealed marker method\n}\n\n// NumberLit represents a numeric literal.\ntype NumberLit struct {\n Value float64\n}\n\n// BinaryExpr represents a binary operation (e.g. 1 + 2).\ntype BinaryExpr struct {\n Op token.Type // Plus, Minus, Star, Slash\n Left Node\n Right Node\n}\n\n// Ident represents a variable reference (function parameter).\ntype Ident struct {\n Name string\n}\n\n// FuncCall represents a function call (e.g. 
f(1+2, 3)).\ntype FuncCall struct {\n Name string\n Args []Node\n}\n\n// Statement is the interface for top-level parsed constructs.\ntype Statement interface {\n stmt() // sealed marker method\n}\n\n// ExprStmt wraps an expression used as a statement.\ntype ExprStmt struct {\n Expr Node\n}\n\n// FuncDef represents a function definition: name(params) = body\ntype FuncDef struct {\n Name string\n Params []string\n Body Node\n}\n```\n\n### Parser\n\n```go\npackage parser\n\n// Parse converts a slice of tokens into an expression AST.\n// Kept for backward compatibility.\nfunc Parse(tokens []token.Token) (ast.Node, error)\n\n// ParseLine converts a slice of tokens into a Statement.\n// Distinguishes function definitions from expressions.\nfunc ParseLine(tokens []token.Token) (ast.Statement, error)\n```\n\nGrammar (extended):\n```\nline → funcdef | expr\nfuncdef → IDENT '(' params ')' '=' expr\nparams → IDENT (',' IDENT)*\nexpr → term (('+' | '-') term)*\nterm → factor (('*' | '/') factor)*\nfactor → NUMBER | IDENT '(' args ')' | IDENT | '(' expr ')'\nargs → expr (',' expr)*\n```\n\n**Definition detection:** Scan token stream for `Equals` token. If present → parse as function definition. If absent → parse as expression. This works because `=` is not valid in expressions.\n\n### Evaluator\n\n```go\npackage evaluator\n\n// Evaluator holds function definitions and evaluates expressions.\ntype Evaluator struct {\n funcs map[string]*ast.FuncDef\n}\n\n// New creates a new Evaluator with an empty function registry.\nfunc New() *Evaluator\n\n// Define registers a function definition.\n// Returns an error if a function with the same name is already defined.\nfunc (e *Evaluator) Define(def *ast.FuncDef) error\n\n// Eval evaluates an expression AST node.\n// env provides variable bindings (function parameters).\n// Pass nil for top-level evaluation.\nfunc (e *Evaluator) Eval(node ast.Node, env map[string]float64) (float64, error)\n```\n\n**Function call evaluation:**\n1. 
Look up function name in registry\n2. Evaluate each argument expression in caller's environment\n3. Check argument count matches parameter count\n4. Create new environment: `param[i] → argValue[i]`\n5. Evaluate function body in new environment\n\n**Late binding:** Function body references are resolved at call time, not definition time. This naturally supports cross-function calls as long as the called function is defined before the call is evaluated.\n\n### REPL\n\n```go\npackage repl\n\n// Run starts the read-eval-print loop, reading from r and writing to w.\n// Maintains function registry across lines.\nfunc Run(r io.Reader, w io.Writer)\n```\n\n**Line processing flow:**\n1. Tokenize line\n2. `ParseLine()` → `Statement`\n3. Switch on statement type:\n - `*ast.FuncDef` → `evaluator.Define(def)`, print \"defined \u003cname\u003e\"\n - `*ast.ExprStmt` → `evaluator.Eval(expr, nil)`, print result\n\n## Package Layout\n\n```\nmatheval/\n├── cmd/\n│   └── matheval/\n│       └── main.go        # entry point, calls repl.Run\n├── token/\n│   └── token.go           # Token type and constants\n├── lexer/\n│   ├── lexer.go           # Tokenize function\n│   └── lexer_test.go\n├── ast/\n│   └── ast.go             # AST node types + Statement types\n├── parser/\n│   ├── parser.go          # Parse + ParseLine functions\n│   └── parser_test.go\n├── evaluator/\n│   ├── evaluator.go       # Evaluator struct with Define + Eval\n│   └── evaluator_test.go\n├── repl/\n│   ├── repl.go            # REPL loop with state\n│   └── repl_test.go\n├── docs/\n│   ├── design.md\n│   └── plan.md\n├── go.mod\n└── README.md\n```\n\n## Error Handling\n- Lexer: returns error for invalid characters (e.g. `@`, `#`)\n- Parser: returns error for syntax errors (unexpected token, mismatched parens, malformed definitions)\n- Evaluator: returns error for division by zero, undefined function, undefined variable, argument count mismatch, function redefinition\n- REPL: catches any error, prints it, prompts for next input\n\n## Key Design Decisions\n1. 
**Statement vs Node separation** — `Statement` interface separates top-level constructs (definitions vs expressions) from expression nodes. This keeps the expression evaluator clean.\n2. **Stateful Evaluator struct** — replaces the previous stateless `Eval()` function. Required to hold the function registry. The `Eval` method still takes an explicit environment for testability.\n3. **Late binding** — function bodies reference other functions by name; resolved at call time. Simple and correct.\n4. **Backward-compatible Parse()** — existing `Parse()` function kept. New `ParseLine()` added for the REPL.\n5. **Sealed AST interface** — unexported marker method prevents external implementations, keeping the node set closed.\n6. **Position tracking in tokens** — enables precise error messages.\n7. **REPL takes io.Reader/io.Writer** — makes it testable without stdin/stdout.\n8. **Definition detection via Equals scan** — simple and unambiguous since `=` cannot appear in expressions.\n"
+
+CALLING TOOL: file_read {"path":"docs/plan.md"}
+$$$ "# Math Expression Evaluator — Implementation Plan\n\n## Phase: Implement\n\nSteps are ordered. Each step includes writing the code and its unit tests (TDD).\n\n### Step 1: Project Skeleton\n- `go mod init matheval`\n- Create directory structure: `cmd/matheval/`, `token/`, `lexer/`, `ast/`, `parser/`, `evaluator/`, `repl/`\n- Create placeholder `main.go`\n\n### Step 2: Token Package\n- Define `Type` enum constants\n- Define `Token` struct\n- Add `String()` method on `Type` for debugging\n\n### Step 3: Lexer\n- Implement `Tokenize(input string) ([]Token, error)`\n- Handle: whitespace skipping, number literals (integers and decimals), operators `+-*/`, parentheses `()`, EOF, invalid characters\n- **Tests:** valid expressions, decimal numbers, invalid chars, empty input, whitespace-only\n\n### Step 4: AST Package\n- Define `Node` interface with sealed marker\n- Define `NumberLit` struct\n- Define `BinaryExpr` struct\n\n### Step 5: Parser\n- Implement recursive-descent parser following grammar:\n - `expr → term (('+' | '-') term)*`\n - `term → factor (('*' | '/') factor)*`\n - `factor → NUMBER | '(' expr ')'`\n- Internal parser struct to track position in token slice\n- Return error on: unexpected token, mismatched parens, trailing tokens\n- **Tests:** single number, simple binary, precedence, parentheses, nested parens, error cases\n\n### Step 6: Evaluator\n- Implement `Eval(node ast.Node) (float64, error)`\n- Recursively walk AST\n- Return error on division by zero\n- **Tests:** literals, all 4 operators, nested expressions, division by zero\n\n### Step 7: REPL\n- Implement `Run(r io.Reader, w io.Writer)`\n- Read line, tokenize, parse, evaluate, print result or error\n- Loop until EOF\n- **Tests:** successful expression, error expression, multi-line session\n\n### Step 8: main.go\n- Wire `repl.Run(os.Stdin, os.Stdout)`\n\n### Step 9: Integration Test\n- End-to-end test: feed expression string through all stages, verify result\n- Test edge cases: deeply nested 
parens, long expressions\n\n### Step 10: Final Commit \u0026 README\n- Write README.md with usage instructions\n- Final commit\n"
+
+CALLING TOOL: file_read {"path":"docs/plan_02.md"}
+$$$ "# Implementation Plan: Function Definitions\n\n## Overview\nBottom-up implementation through the stack: token → ast → lexer → parser → evaluator → repl → integration tests. Each step maintains backward compatibility and follows TDD.\n\n## Steps\n\n### Step 1: Token layer (`token/token.go`)\n- Add `Ident`, `Comma`, `Equals` constants to `Type` enum\n- Update `String()` for new types\n- No tests needed — pure data types\n\n### Step 2: AST layer (`ast/ast.go`)\n- Add `Ident` struct: `Name string`; implements `Node`\n- Add `FuncCall` struct: `Name string`, `Args []Node`; implements `Node`\n- Add `Statement` interface with sealed `stmt()` marker\n- Add `ExprStmt` struct: `Expr Node`; implements `Statement`\n- Add `FuncDef` struct: `Name string`, `Params []string`, `Body Node`; implements `Statement`\n- No tests needed — pure data types\n\n### Step 3: Lexer (`lexer/lexer.go`)\n- Add `isLetter(ch byte) bool` helper\n- Before the single-char switch, add branch: if `isLetter(ch)`, scan identifier (letter then letters/digits), emit `Ident` token\n- Add `','` → `Comma` and `'='` → `Equals` to single-char switch\n- **Tests:** identifiers (`x`, `foo`, `f1`), comma, equals, full definition `f(x) = x + 1`, call `f(1, 2)`, mixed with numbers\n\n### Step 4: Parser (`parser/parser.go`)\n- Extend `factor()`:\n - `Ident` followed by `LParen` → parse `FuncCall`: consume `(`, parse args as comma-separated exprs, consume `)`\n - `Ident` not followed by `LParen` → return `\u0026ast.Ident{Name}`\n- Add `parseFuncDef()`: expects `Ident(` params `) = expr`\n- Add `ParseLine(tokens) (Statement, error)`:\n - Scan for `Equals` token (not inside parens)\n - If found → `parseFuncDef()` → `*ast.FuncDef`\n - If not → `expr()` → `*ast.ExprStmt{Expr}`\n- Keep `Parse()` unchanged for backward compat\n- **Tests:** ParseLine for defs and exprs, factor for ident and func call, error cases\n\n### Step 5: Evaluator (`evaluator/evaluator.go`)\n- Add `Evaluator` struct with `funcs 
map[string]*ast.FuncDef`\n- `New() *Evaluator`\n- `Define(def *ast.FuncDef) error` — error on redefinition\n- `Eval(node ast.Node, env map[string]float64) (float64, error)`:\n - `*ast.NumberLit` → return value\n - `*ast.BinaryExpr` → recurse left/right with same env\n - `*ast.Ident` → lookup in env, error if not found\n - `*ast.FuncCall` → lookup func, eval args in caller env, bind params, eval body in new env\n- Keep package-level `Eval(node) (float64, error)` as backward-compat wrapper\n- **Tests:** all existing tests still pass, new tests for Ident, FuncCall, Define, errors\n\n### Step 6: REPL (`repl/repl.go`)\n- In `Run()`: create `evaluator.New()` before loop\n- Replace `evalLine()` with inline logic using `ParseLine()`\n- `*ast.FuncDef` → `ev.Define(def)`, print `\"defined \u003cname\u003e\"`\n- `*ast.ExprStmt` → `ev.Eval(stmt.Expr, nil)`, print result\n- **Tests:** define + call across lines, redefine error, undefined func error\n\n### Step 7: Integration tests (`integration_test.go`)\n- Update `eval()`/`evalErr()` helpers to use `Evaluator` struct\n- Add tests:\n - Define and call single-param function\n - Define and call multi-param function\n - Cross-function calls\n - Nested function calls in expressions\n - Error: undefined function\n - Error: wrong argument count\n - Error: function redefinition\n - Error: undefined variable\n\n## Backward Compatibility\n- `Parse()` remains unchanged — returns `ast.Node`\n- Package-level `Eval()` remains — wraps `New().Eval(node, nil)`\n- Existing tests must continue to pass at every step\n"
+
+CALLING TOOL: file_read {"path":"README.md"}
+$$$ "# matheval\n\nA math expression evaluator with an interactive REPL, written in Go.\n\n## Features\n\n- Arithmetic operators: `+`, `-`, `*`, `/`\n- Parentheses for grouping\n- Floating point numbers (including `.5` syntax)\n- Correct operator precedence (`*` and `/` bind tighter than `+` and `-`)\n- Left-to-right associativity\n- Clear error messages with position reporting\n\n## Build\n\n```sh\ngo build -o matheval ./cmd/matheval\n```\n\n## Usage\n\nRun the REPL:\n\n```sh\n./matheval\n```\n\nThen type expressions:\n\n```\n\u003e\u003e 2 + 3 * 4\n14\n\u003e\u003e (2 + 3) * 4\n20\n\u003e\u003e 7 / 2\n3.5\n\u003e\u003e 1 / 0\nerror: division by zero\n```\n\nPress `Ctrl+D` (EOF) to exit.\n\nYou can also pipe input:\n\n```sh\necho \"2 + 3\" | ./matheval\n```\n\n## Architecture\n\n```\nInput string → Lexer → Parser → AST → Evaluator → Result\n```\n\n| Package | Responsibility |\n|-------------|---------------------------------------|\n| `token` | Token types and data structures |\n| `lexer` | Tokenizes input string |\n| `ast` | AST node types (`NumberLit`, `BinaryExpr`) |\n| `parser` | Recursive-descent parser |\n| `evaluator` | Walks AST and computes result |\n| `repl` | Read-eval-print loop |\n\n## Grammar\n\n```\nexpr → term (('+' | '-') term)*\nterm → factor (('*' | '/') factor)*\nfactor → NUMBER | '(' expr ')'\n```\n\n## Tests\n\n```sh\ngo test ./...\n```\n\nThis runs unit tests for each package plus integration tests covering the full pipeline.\n"
+
+--- STOP_REASON: tool_use
+AI: Now I have full context. The README is outdated — it doesn't reflect the function definition feature described in `docs/design.md` and `docs/plan_02.md`. Let me update it.
+CALLING TOOL: file_write {"path":"README.md","contents":"# matheval\n\nA math expression evaluator with user-defined functions and an interactive REPL, written in Go.\n\n## Features\n\n- Arithmetic operators: `+`, `-`, `*`, `/`\n- Parentheses for grouping\n- Floating point numbers (including `.5` syntax)\n- Correct operator precedence (`*` and `/` bind tighter than `+` and `-`)\n- Left-to-right associativity\n- User-defined functions with `f(x) = x + 1` syntax\n- Multi-parameter functions: `f(x, y) = x + y`\n- Functions can call other user-defined functions (late binding)\n- Clear error messages with position reporting\n\n## Build\n\n```sh\ngo build -o matheval ./cmd/matheval\n```\n\n## Usage\n\nRun the REPL:\n\n```sh\n./matheval\n```\n\nThen type expressions and define functions:\n\n```\n>> 2 + 3 * 4\n14\n>> (2 + 3) * 4\n20\n>> 7 / 2\n3.5\n>> 1 / 0\nerror: division by zero\n>> square(x) = x * x\ndefined square\n>> square(5)\n25\n>> add(x, y) = x + y\ndefined add\n>> add(2, 3)\n5\n>> hyp(a, b) = square(a) + square(b)\ndefined hyp\n>> hyp(3, 4)\n25\n```\n\nPress `Ctrl+D` (EOF) to exit.\n\nYou can also pipe input:\n\n```sh\necho \"2 + 3\" | ./matheval\n```\n\n## Architecture\n\n```\nInput string → Lexer → Parser → AST → Evaluator → Result\n```\n\n| Package | Responsibility |\n|-------------|---------------------------------------------------------|\n| `token` | Token types and data structures |\n| `lexer` | Tokenizes input string |\n| `ast` | AST node types (`NumberLit`, `BinaryExpr`, `Ident`, `FuncCall`, `FuncDef`) |\n| `parser` | Recursive-descent parser (expressions and function definitions) |\n| `evaluator` | Stateful evaluator with function registry |\n| `repl` | Read-eval-print loop with persistent state |\n\n## Grammar\n\n```\nline → funcdef | expr\nfuncdef → IDENT '(' params ')' '=' expr\nparams → IDENT (',' IDENT)*\nexpr → term (('+' | '-') term)*\nterm → factor (('*' | '/') factor)*\nfactor → NUMBER | IDENT '(' args ')' | IDENT | '(' expr ')'\nargs → expr 
(',' expr)*\n```\n\n## Function Rules\n\n- **Define:** `name(params) = body` — body is any expression using the parameters\n- **Call:** `name(args)` — arguments are arbitrary expressions\n- **Redefinition:** Not allowed (returns an error)\n- **Cross-calling:** Functions can call other user-defined functions\n- **Late binding:** References resolved at call time, not definition time\n\n## Tests\n\n```sh\ngo test ./...\n```\n\nThis runs unit tests for each package plus integration tests covering the full pipeline.\n"}
+$$$ "done"
+
+--- STOP_REASON: tool_use
+CALLING TOOL: todo_item_mark_done {"id":"1","summary":"Updated README.md to reflect the full feature set documented in docs/design.md and docs/plan_02.md. Added: user-defined functions section, multi-parameter functions, cross-function calls, late binding, updated architecture table with new AST node types, extended grammar, function rules section, and REPL examples demonstrating function definitions and calls."}
+$$$ "done"
+
+--- STOP_REASON: end_turn
+AI: Done. The README.md has been updated to reflect the full project as documented in the `docs/` files. Key changes:
+
+1. **Features** — Added user-defined functions, multi-parameter support, cross-function calls, late binding.
+2. **Usage examples** — Added REPL examples showing `square(x) = x * x`, `add(x, y) = x + y`, and cross-calling with `hyp(a, b)`.
+3. **Architecture table** — Updated AST and evaluator descriptions to reflect new node types (`Ident`, `FuncCall`, `FuncDef`) and stateful evaluator.
+4. **Grammar** — Extended with `funcdef`, `params`, and `args` productions, plus function-call and bare-`IDENT` alternatives in `factor`.
+5. **Function Rules** — New section covering define/call syntax, redefinition policy, cross-calling, and late binding.
+## AGENT NO ITEMS
+## USER NO ITEMS
+interrupt: 2
+~/dev/src/math $