
I’ve been working on [Vizzly](https://vizzly.dev) -- my attempt to make visual quality part of the development workflow, not an after-the-fact testing phase.

Most visual testing tools spin up a recreated environment (copy the DOM + assets, render in their own infra) and then compare diffs. The problem is: that’s not what your users actually see. You end up debugging rendering quirks in their browser instead of yours. I wrote about it here if you’re curious: [Why Visual Testing Needed a Different Approach.](https://vizzly.dev/blog/why-visual-testing-needed-a-differen...)
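For context, the "compare diffs" step these tools perform boils down to a pixel-level comparison between a baseline screenshot and a new one. A minimal sketch of that idea (not Vizzly's actual implementation; images are modeled as 2D lists of RGB tuples to keep it self-contained):

```python
def diff_ratio(baseline, current, tolerance=0):
    """Return the fraction of pixels differing beyond `tolerance` per channel."""
    if len(baseline) != len(current) or len(baseline[0]) != len(current[0]):
        raise ValueError("screenshot dimensions differ")
    total = changed = 0
    for row_a, row_b in zip(baseline, current):
        for px_a, px_b in zip(row_a, row_b):
            total += 1
            # A pixel counts as changed if any channel moves beyond tolerance.
            if any(abs(a - b) > tolerance for a, b in zip(px_a, px_b)):
                changed += 1
    return changed / total

# 2x2 images: one pixel out of four has changed.
base = [[(255, 255, 255), (255, 255, 255)],
        [(255, 255, 255), (255, 255, 255)]]
curr = [[(255, 255, 255), (255, 255, 255)],
        [(255, 255, 255), (200, 200, 200)]]
print(diff_ratio(base, curr))  # -> 0.25
```

The catch the post points at: this comparison is only as trustworthy as the environment that produced `current` — which is exactly what a recreated rendering environment changes out from under you.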

Vizzly flips that around by focusing on developer workflows first:

- Local TDD: `vizzly tdd` lets you iterate on UI changes locally, instantly, against real screenshots.
- Bring your own screenshots: works with whatever infra you already have (Playwright, Puppeteer, BrowserStack, CI runners, etc.).
- Team review: when you push, every commit builds a visual review dashboard with position-based comments, review rules, deep links, etc.

The CLI/SDK are open; you can use just the local TDD bits without ever touching Vizzly’s hosted service. The goal is to close the developer ↔ designer gap by making visual review a normal part of shipping features, not a separate QA checklist.

Would love feedback from folks who’ve wrestled with visual diffs before. Does this workflow resonate with you, or do you see gaps I should be thinking about?


