---
name: performance-optimizer
description: "Performance Optimizer workflow skill. Use this skill when the user needs to identify and fix performance bottlenecks in code, databases, and APIs, measuring before and after to prove improvements. The operator should preserve the upstream workflow, copied support files, and provenance before merging or handing off."
version: "0.0.1"
category: development
tags: ["performance-optimizer", "performance", "bottlenecks", "optimization", "databases", "apis"]
complexity: advanced
risk: caution
tools: ["codex-cli", "claude-code", "cursor", "gemini-cli", "opencode"]
source: community
author: "sickn33"
date_added: "2026-04-15"
date_updated: "2026-04-25"
---
# Performance Optimizer
## Overview
This public intake copy packages `plugins/antigravity-awesome-skills-claude/skills/performance-optimizer` from `https://github.com/sickn33/antigravity-awesome-skills` into the native Omni Skills editorial shape without hiding its origin.
Use it when the operator needs the upstream workflow, support files, and repository context to stay intact while the public validator and private enhancer continue their normal downstream flow.
This intake keeps the copied upstream files intact and uses the `external_source` block in `metadata.json` plus `ORIGIN.md` as the provenance anchor for review.
Find and fix performance bottlenecks. Measure, optimize, verify. Make it fast.
Imported source sections that did not map cleanly to the public headings are still preserved below or in the support files. Notable imported sections: Common Optimizations, Measuring Impact, Performance Budgets, Tools, Quick Wins, Optimization Checklist.
## When to Use This Skill
Use this section as the trigger filter. It should make the activation boundary explicit before the operator loads files, runs commands, or opens a pull request.
- App is slow or laggy
- User complains about performance
- Page load times are high
- API responses are slow
- Database queries take too long
- User mentions "slow", "lag", "performance", or "optimize"
## Operating Table
| Situation | Start here | Why it matters |
| --- | --- | --- |
| First-time use | `metadata.json` | Confirms repository, branch, commit, and imported path through the `external_source` block before touching the copied workflow |
| Provenance review | `ORIGIN.md` | Gives reviewers a plain-language audit trail for the imported source |
| Workflow execution | `README.md` | Starts with the smallest copied file that materially changes execution |
| Supporting context | `README.md` | Adds the next most relevant copied source file without loading the entire package |
| Handoff decision | `## Related Skills` | Helps the operator switch to a stronger native skill when the task drifts |
## Workflow
This workflow is intentionally editorial and operational at the same time. It keeps the imported source useful to the operator while still satisfying the public intake standards that feed the downstream enhancer flow.
1. Confirm the user goal, the scope of the imported workflow, and whether this skill is still the right router for the task.
2. Measure first: capture page load, API response, database query, and function execution times, plus memory usage and network request counts.
3. Profile to locate the bottleneck, fix the slowest thing first, then re-measure to verify the improvement.
### Imported Workflow Notes
#### Imported: The Optimization Process
### 1. Measure First
Never optimize without measuring:
```javascript
// Measure execution time
console.time('operation');
await slowOperation();
console.timeEnd('operation'); // operation: 2341ms
```
**What to measure:**
- Page load time
- API response time
- Database query time
- Function execution time
- Memory usage
- Network requests
### 2. Find the Bottleneck
Use profiling tools to find the slow parts:
**Browser:**
```
DevTools → Performance tab → Record → Stop
Look for long tasks (red bars)
```
**Node.js:**
```bash
node --prof app.js
node --prof-process isolate-*.log > profile.txt
```
**Database:**
```sql
EXPLAIN ANALYZE SELECT * FROM users WHERE email = 'test@example.com';
```
### 3. Optimize
Fix the slowest thing first (biggest impact).
#### Imported: Common Optimizations
### Database Queries
**Problem: N+1 Queries**
```javascript
// Bad: N+1 queries
const users = await db.users.find();
for (const user of users) {
  user.posts = await db.posts.find({ userId: user.id }); // N queries
}

// Good: single query with JOIN
const users = await db.users.find()
  .populate('posts'); // 1 query
```
**Problem: Missing Index**
```sql
-- Check slow query
EXPLAIN SELECT * FROM users WHERE email = 'test@example.com';
-- Shows: Seq Scan (bad)
-- Add index
CREATE INDEX idx_users_email ON users(email);
-- Check again
EXPLAIN SELECT * FROM users WHERE email = 'test@example.com';
-- Shows: Index Scan (good)
```
**Problem: `SELECT *`**
```javascript
// Bad: Fetches all columns
const users = await db.query('SELECT * FROM users');
// Good: Only needed columns
const users = await db.query('SELECT id, name, email FROM users');
```
**Problem: No Pagination**
```javascript
// Bad: returns all records
const users = await db.users.find();

// Good: paginated
const users = await db.users.find()
  .limit(20)
  .skip((page - 1) * 20);
```
### API Performance
**Problem: No Caching**
```javascript
// Bad: hits the database on every request
app.get('/api/stats', async (req, res) => {
  const stats = await db.stats.calculate(); // slow
  res.json(stats);
});

// Good: cache for 5 minutes
const cache = new Map();
app.get('/api/stats', async (req, res) => {
  const cached = cache.get('stats');
  if (cached && Date.now() - cached.time < 300000) {
    return res.json(cached.data);
  }
  const stats = await db.stats.calculate();
  cache.set('stats', { data: stats, time: Date.now() });
  res.json(stats);
});
```
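The caching pattern above can be factored into a reusable TTL wrapper so each route doesn't repeat the bookkeeping. A sketch, with illustrative names (`cachedFor` is not part of the upstream workflow):

```javascript
// TTL memoizer sketch: wraps any async function and caches its result
// per key for ttlMs milliseconds.
function cachedFor(ttlMs, fn) {
  const cache = new Map();
  return async (key, ...args) => {
    const hit = cache.get(key);
    if (hit && Date.now() - hit.time < ttlMs) return hit.data;
    const data = await fn(...args);
    cache.set(key, { data, time: Date.now() });
    return data;
  };
}

// Usage: the wrapped function runs once; repeats within the TTL hit the cache
let calls = 0;
const getStats = cachedFor(300000, async () => { calls += 1; return { users: 42 }; });
getStats('stats').then(() => getStats('stats')).then(() => console.log(calls)); // 1
```

An in-process `Map` cache is per-server and lost on restart; for multiple instances, the same wrapper shape works with a shared store such as Redis.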
**Problem: Sequential Operations**
```javascript
// Bad: sequential (slow)
const user = await getUser(id);
const posts = await getPosts(id);
const comments = await getComments(id);
// Total: 300ms + 200ms + 150ms = 650ms

// Good: parallel (fast)
const [user, posts, comments] = await Promise.all([
  getUser(id),
  getPosts(id),
  getComments(id)
]);
// Total: max(300ms, 200ms, 150ms) = 300ms
```
**Problem: Large Payloads**
```javascript
// Bad: returns everything
res.json(users); // 5MB response

// Good: only needed fields
res.json(users.map(u => ({
  id: u.id,
  name: u.name,
  email: u.email
}))); // 500KB response
```
### Frontend Performance
**Problem: Unnecessary Re-renders**
```javascript
// Bad: re-renders on every parent update, even when `users` is unchanged.
// Good: wrap in React.memo so renders are skipped when props are
// shallow-equal (UserRow is a placeholder child component)
const UserList = React.memo(function UserList({ users }) {
  return users.map(user => <UserRow key={user.id} user={user} />);
});
```
### Algorithm Optimization
**Problem: Inefficient Algorithm**
```javascript
// Bad: O(n²) - nested loops
function findDuplicates(arr) {
  const duplicates = [];
  for (let i = 0; i < arr.length; i++) {
    for (let j = i + 1; j < arr.length; j++) {
      if (arr[i] === arr[j]) duplicates.push(arr[i]);
    }
  }
  return duplicates;
}

// Good: O(n) - single pass with Set
function findDuplicates(arr) {
  const seen = new Set();
  const duplicates = new Set();
  for (const item of arr) {
    if (seen.has(item)) duplicates.add(item);
    seen.add(item);
  }
  return Array.from(duplicates);
}
```
**Problem: Repeated Calculations**
```javascript
// Bad: recalculates on every call
function getTotal(items) {
  return items.reduce((sum, item) => sum + item.price * item.quantity, 0);
}
// Called 100 times in render

// Good: memoized - recomputed only when `items` changes
// (note: useMemo returns the cached value itself, so name it `total`)
const total = useMemo(
  () => items.reduce((sum, item) => sum + item.price * item.quantity, 0),
  [items]
);
```
### Memory Optimization
**Problem: Memory Leak**
```javascript
// Bad: event listener not cleaned up
useEffect(() => {
  window.addEventListener('scroll', handleScroll);
  // Memory leak!
}, []);

// Good: cleanup on unmount
useEffect(() => {
  window.addEventListener('scroll', handleScroll);
  return () => window.removeEventListener('scroll', handleScroll);
}, []);
```
**Problem: Large Data in Memory**
```javascript
// Bad: Loads entire file into memory
const data = fs.readFileSync('huge-file.txt'); // 1GB
// Good: Stream it
const stream = fs.createReadStream('huge-file.txt');
stream.on('data', chunk => process(chunk));
```
## Examples
### Example 1: Ask for the upstream workflow directly
```text
Use @performance-optimizer to handle