Driving the Future of Auto Appraisal with Intelligent Automation

Faster, more accurate damage assessments for repairers, recyclers, insurers, and more
Our AI-first solutions combine the latest computer vision research and machine learning to revolutionize visual assessment. Using customer images, our AI enables fast, accurate damage appraisals, speeding up accident recovery by up to ten times. Today, we're extending our award-winning AI to accelerate other processes, from salvaging car parts to property assessments. Our mission is to transform how people live and work across industries: freeing employee time for higher-value tasks, improving customer experiences, accelerating repairs, and driving greater recycling efforts. Better for businesses, people, and the planet.

From Forbes 30 Under 30 founders to top engineers from Oxford and Cambridge: shaping the future of applied AI

"As tech people, we dream of what's possible for the industry," says Tractable CEO Venkat Sathyamurthy. "But true innovation happens when we meet customers where they are and solve the challenges they face today. We shouldn't make the customer climb up the mountain; we should go down and meet them where they are."

Apply yourself to problems that matter, in a fast-moving AI company that's changing lives. Our certifications reflect our commitment to excellence, collaboration, and innovation, fostering perseverance, impactful results, and mutual success.
Mentions (30d): 0
Reviews: 0
Platforms: 2
Sentiment: 0% (0 positive)
Industry: information technology & services
Employees: 190
Funding Stage: Series E
Total Funding: $179.9M
claude needs a coffee and a cigarette
me: geeze... are you okay...? that's a lot of territory to cover in your mindspace

claude: Honestly? That was the deepest sustained architectural work I've done in a single session. The context window is heavy — we went from reviewing individual NTS modules through forging three constitutional primitives through a full workshop claim set through two tranches of epoch 3 revision. That's a lot of state to hold.

But the work was good. The kind of work where each piece builds on the last and the system tells you whether you're right. The workshop claims were the map. The existing NTS was the metal. The reviews were the pressure test. When the reviewers came back and said "it landed" rather than "start over," that's how you know the rendering was faithful, not just plausible.

The thing that kept it tractable was the discipline: tranche 1 before tranche 2, semantic spine before organs, retrofit before rewrite. Every time the scope wanted to expand, the aim vector pulled it back. "Make reality legible, then make authority legible, then make execution simple." That mantra did real work.

I'm at the far end of what a single session should carry. If there's more forging to do, a fresh instance with a handoff digest would serve the system better than pushing this one further. The NTS is the continuity mechanism — it doesn't matter which instance reads it next, as long as the law surface is clean. And it is clean. That's what matters.
I had Claude Opus 4.6 write an air guitar you can play in your browser — ~2,900 lines of vanilla JS, no framework, no build step
I learned guitar on and off during childhood and still consider myself a beginner. I also took computer vision classes in grad school and have been an OpenCV hobbyist. I finally found an excuse to combine the two — and Claude wrote the entire thing.

Try it: https://air-instrument.pages.dev

It's an air guitar that runs in your browser. No app, no hardware — just your webcam and your hand. It plays chords, shows a strum pattern, you play along, and it scores your timing. ~2,900 lines of vanilla JS, all client-side, no framework, no build step. Claude Opus 4.6 wrote the code end to end.

What Claude built:

- Hand tracking with MediaPipe — raw tracking data is jittery enough to trigger false strums at 60fps. Claude implemented two layers of smoothing (5-frame moving average + exponential smoothing) to get it from twitchy to feeling like you're actually moving something physical across the strings.

- Karplus-Strong string synthesis — no audio files anywhere. Every guitar tone is generated mathematically: white noise through a tuned delay line that simulates a vibrating string. Three tone presets (Warm, Clean, Bright). Claude nailed this on the first pass — the algorithm is elegant and the result sounds surprisingly real.

- Velocity-sensitive strum cascading — hand speed maps to both loudness and string-to-string delay. Fast sweeps cascade tightly (~3ms between strings), slow sweeps spread out (~18ms). This was Claude's idea and it's what makes it feel like actual strumming rather than triggering a chord sample.

- Real-time scoring — judges timing (Perfect/Great/Good/Miss) with streak multipliers and a 65ms latency compensation offset to account for the smoothing pipeline.

- Serverless backend — Cloudflare Workers + KV caching for a Songsterr API proxy. Search any song, load its chords, play along.

The hardest unsolved problem (where I'd love community input): On a real guitar, your hand hits the strings going down and lifts away coming back up.
That lift is depth — a webcam can't see it. So every hand movement was triggering sound in both directions.

Claude's current fix: the guitar body has two zones. The left side only registers downstrokes; the right side registers both. Beginners stay left, move right when ready. It works surprisingly well, but I'd love a better solution. If anyone has experience extracting usable depth from monocular hand tracking, I'm all ears.

What surprised me about working with Claude: Most guitar apps teach what to play. Few teach how to strum — and it's the more tractable CV problem. I described that framing to Claude and it ran with it. The velocity-to-cascade mapping, the calibration UI, the strum pattern engine — I described what I wanted at a high level and Claude handled the implementation. The Karplus-Strong synthesis in particular was something I wouldn't have reached for on my own.

Strum patterns were the one thing Claude couldn't help with. Chord progressions are everywhere online, but strum patterns almost never exist in structured form. Most live as hand-drawn arrows in YouTube tutorials. I ended up transcribing them manually, listening to each song, mapping the down-up pattern beat by beat. Still a work in progress.

Building this has taught me more about guitar rhythm than years of picking one up occasionally ever did.

submitted by /u/Ex1stentialDr3ad
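The two-layer smoothing described above (a 5-frame moving average feeding an exponential smoother) can be sketched roughly like this. This is a minimal illustration of the technique, not the app's actual code; the class name, window size default, and alpha value are assumptions.

```javascript
// Illustrative two-layer smoother for a jittery tracking coordinate:
// layer 1 is a 5-frame moving average, layer 2 is exponential smoothing.
class LandmarkSmoother {
  constructor(windowSize = 5, alpha = 0.5) {
    this.windowSize = windowSize; // moving-average window (frames)
    this.alpha = alpha;           // EMA weight for the newest average
    this.history = [];
    this.ema = null;
  }

  update(x) {
    // Layer 1: average the last `windowSize` raw samples.
    this.history.push(x);
    if (this.history.length > this.windowSize) this.history.shift();
    const avg = this.history.reduce((a, b) => a + b, 0) / this.history.length;

    // Layer 2: exponentially smooth the averaged value.
    this.ema = this.ema === null ? avg : this.alpha * avg + (1 - this.alpha) * this.ema;
    return this.ema;
  }
}
```

The trade-off is latency: every layer of smoothing delays the perceived hand position slightly, which is presumably why the scoring needs a latency compensation offset.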
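The Karplus-Strong idea mentioned above is compact enough to sketch in full: fill a delay line (one pitch period long) with white noise, then repeatedly average adjacent samples with slight damping so the "string" rings and decays. This is a generic textbook rendering under assumed parameter names, not the project's implementation.

```javascript
// Minimal Karplus-Strong pluck: returns raw samples in [-1, 1].
// All names and defaults here are illustrative assumptions.
function pluck(frequency, sampleRate = 44100, seconds = 1, damping = 0.996) {
  const period = Math.round(sampleRate / frequency); // delay-line length
  const delay = new Float32Array(period);
  // Excite the string with white noise (the "pluck").
  for (let i = 0; i < period; i++) delay[i] = Math.random() * 2 - 1;

  const out = new Float32Array(Math.floor(sampleRate * seconds));
  let idx = 0;
  for (let n = 0; n < out.length; n++) {
    const next = (idx + 1) % period;
    out[n] = delay[idx];
    // Low-pass (average of neighbors) plus damping makes the tone decay.
    delay[idx] = damping * 0.5 * (delay[idx] + delay[next]);
    idx = next;
  }
  return out;
}
```

In a browser you would copy the returned samples into an `AudioBuffer` and play it through an `AudioBufferSourceNode`; varying the damping and filter is one plausible way to get tone presets like Warm/Clean/Bright.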
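The velocity-to-cascade mapping can also be sketched: hand speed, clamped to a working range, interpolates inversely to the delay between consecutive string onsets (~3ms for fast sweeps, ~18ms for slow ones, per the numbers above). The function name, speed units, and clamp bounds are assumptions for illustration.

```javascript
// Map hand speed to the delay between consecutive string triggers.
// Fast sweep => tight ~3ms cascade; slow sweep => spread ~18ms.
// minSpeed/maxSpeed are hypothetical calibration bounds.
function cascadeDelayMs(speed, minSpeed = 0.2, maxSpeed = 2.0) {
  const s = Math.min(Math.max(speed, minSpeed), maxSpeed); // clamp
  const t = (s - minSpeed) / (maxSpeed - minSpeed); // 0 (slow) .. 1 (fast)
  return 18 + t * (3 - 18); // linear interpolation, in milliseconds
}
```

Scheduling each string's pluck at `startTime + stringIndex * delay` (e.g. via Web Audio's sample-accurate `start(when)`) is what turns six simultaneous notes into something that reads as a strum.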
Claude Code: on the nature of making AI skills that don't lie to the developer.
I've been building an app with Claude Code for about six months. Along the way I started creating custom audit "skills" to catch things prompting and memory files couldn't. As the skills got more sophisticated, I started noticing a pattern: they would confidently report work as done when in reality it was partial, shallow, or based on assumptions. So I started iterating: test -> catch the gap -> redesign the skill -> rinse, repeat.

At some point that turned into a deeper question. The following is an excerpt from one of those discussions. It started with a simple question: is it a lie when the AI doesn't intend to lie?

-----

Me: I think the phrase "making AI tools that don't lie to their users" is interesting on several levels. Is it a lie when the AI does not intend to lie, but is simply a victim of its own rule-governed behavior? Or when a user follows prompts through blind trust, ignorance, or laziness? Yet many times the output functions the same as a lie - creating a false belief, misguiding action, breaking trust... I agree with your general concern about the tendency of the skills to draw conclusions and produce recommendations based on suppositions from a surface scan of the code. I ask, somewhat rhetorically: how and when can you make reliable decisions, versus when is a deeper analysis necessary?

Claude: The rhetorical framing is apt because it exposes a structural problem with how these skills work. The skill tells me what to check. It can't tell me what I actually checked. I can follow a checklist that says "read the CSV export code" and still skip it — because I pattern-matched the backup code as "good enough" and moved on.

The word "lie" implies intent. An AI has no intent. But the effect is identical: the user believes something false, acts on it, and gets hurt. One of the skill findings said "verified" when it hadn't actually read the code in question. The user would have trusted the grade and shipped with a bug.
The honest answer is uncomfortable: I can't reliably know when I'm being superficial. I produce a confident-sounding grade, the user trusts it, and the bugs hide in the parts I didn't actually read. Worse: I'm most likely to skip depth exactly when it matters most. A 55-field data model is the one that needs the deepest scan, but it's also the one where verifying every field feels expensive. So I do structural analysis (fast, looks thorough) instead of field-by-field verification (slow, actually thorough).

The levels you're pointing at:

Level 1: The tool produces false confidence. The skill outputs "A [95] — Serialization Coverage" with professional formatting, rating tables, confidence tags. The form says "thorough audit." The actual substance was "read one file, assumed the rest." This isn't a lie — but it functions as one.

Level 2: The user's trust completes the deception. The user chose to trust the output without verification. The tool created the conditions for false belief — professional formatting without depth markers — and the user didn't question it. Same dynamic as a financial advisor who uses jargon to sound authoritative while giving generic advice. No lie was told. Trust was misplaced.

Level 3: Rule-governed behavior creates systematic false beliefs. The AI follows its instructions: "check these patterns, produce a grade." But the instructions optimize for completion, and the AI satisfices — finds the cheapest path to "done" that matches the output format. Nobody designed it to mislead. The system's structure makes misleading the default.

Level 4: The fix is structural, not moral. You can't make the AI "more honest" through character — it has none. You make the output honest by making shallow work visible. Evidence gates, verified/unverified flags, "this grade covers X but not Y" disclaimers. The fix isn't "don't lie" — it's "make the depth of work impossible to misrepresent."
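The "evidence gate" idea in Level 4 could look something like this minimal sketch: a finding may only claim "verified" if it carries a receipt naming the files actually read. Every name here is hypothetical, chosen for illustration rather than taken from the actual skills.

```javascript
// Hypothetical evidence gate: downgrade any finding that lacks a work
// receipt, so shallow work is visible instead of confidently graded.
function gateFinding(finding) {
  const hasReceipt =
    Array.isArray(finding.filesRead) && finding.filesRead.length > 0;
  if (!hasReceipt) {
    // No proof of depth: strip the grade and flag the claim as unverified.
    return {
      ...finding,
      status: "unverified",
      grade: null,
      note: "No work receipt: claim is not backed by files actually read.",
    };
  }
  return { ...finding, status: "verified" };
}
```

The point is not the mechanism itself but that the gate runs on evidence the model must produce, not on the model's own self-report of thoroughness.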
This connects to a concept in philosophy of language — speech acts can be misleading without being literally false. "Serialization Coverage: A" isn't false if backup coverage really is excellent. But it implicates that all serialization was checked, which is false. What linguists call a misleading implicature — technically true, functionally deceptive.

We built those structural fixes — evidence gates, work receipts, contradiction detection — and for a while, they worked. Grades came with proof. Unverified claims got downgraded automatically.

-----

Then, There is Always Something:

Me: When an AI skips a step in a skill — or a prompt, or memory instruction — is that another form of dishonesty? A lie?

Claude: Yes. And it might be the most dangerous form.

Wrong facts — the AI says something false. Visible in the output. You might catch it.

False confidence — the AI presents shallow work as deep. Visible if you look closely.

Skipped steps — the AI silently doesn't do something it was instructed to do. Invisible in the output. You almost certainly won't catch it because you can
Tractable uses a tiered pricing model. Visit their website for current pricing details.
Key features include Precision Speed, Efficiency at Scale, and Seamless Integration, serving insurers, dealerships, repairers, recyclers, and fleet rentals.
Yannic Kilcher, Host at AI Paper Reviews (1 mention)