Features

Everything you need to catch visual regressions before they reach production.

Pixel by Pixel Comparison

Detect every single-pixel change between screenshots with an exact XOR comparison. Ideal for catching regressions that subtler methods might miss, from shifted elements to color value drift.
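The core idea of an exact comparison can be sketched in a few lines. This is an illustrative sketch, not PixelEagle's actual implementation: it XORs each channel of corresponding pixels (given as flat lists of RGBA tuples) and counts any pixel with a nonzero difference as changed.

```python
def exact_pixel_diff(a, b):
    """Count pixels that differ between two equally sized images.

    a, b: flat lists of (r, g, b, a) tuples. A pixel counts as changed
    if XOR-ing any channel pair yields a nonzero value, so even a
    one-unit color drift is detected.
    """
    if len(a) != len(b):
        raise ValueError("images must have the same dimensions")
    changed = 0
    for pixel_a, pixel_b in zip(a, b):
        if any(ch_a ^ ch_b for ch_a, ch_b in zip(pixel_a, pixel_b)):
            changed += 1
    return changed
```

Because the comparison is exact, a result of zero means the screenshots are byte-for-byte identical at the pixel level.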


ꟻLIP Comparison

Use NVIDIA's ꟻLIP algorithm to evaluate differences the way a human eye would perceive them. Filter out changes that are technically present but visually imperceptible, and focus on the ones your users will actually notice.


Metadata Based Comparison

Tag each test run with metadata like platform, resolution, or graphics API, and let PixelEagle automatically pair runs for comparison. No manual selection needed: just upload and let metadata matching do the work.
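Metadata pairing can be thought of as a join on the metadata keys. A minimal sketch under assumed data shapes (the run dicts, key names, and function are illustrative, not PixelEagle's API):

```python
def pair_runs(runs_a, runs_b, keys=("platform", "resolution", "gpu_api")):
    """Pair runs from two uploads whose metadata agrees on every key.

    runs_a, runs_b: lists of dicts, each with a "meta" dict mapping
    key names to values. Returns (run_a, run_b) pairs whose metadata
    matches exactly on all of the given keys.
    """
    index = {tuple(run["meta"][k] for k in keys): run for run in runs_b}
    pairs = []
    for run in runs_a:
        key = tuple(run["meta"][k] for k in keys)
        if key in index:
            pairs.append((run, index[key]))
    return pairs
```

Runs with no metadata match are simply left unpaired, which is why no manual selection step is needed.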


Software Forge Integration

Feed comparison results back into your CI pipeline so pull requests show pass/fail status based on visual diffs. Upload screenshots from CI with the PixelEagle CLI and get automated feedback on every commit.


Public Projects

Make any project publicly visible so contributors, QA teams, or the open-source community can browse screenshots and comparison results without needing an account.


Sensitivity Settings

Set a difference threshold per project to control how much change counts as a regression. Tune it to ignore sub-pixel rendering variations or anti-aliasing noise while still catching meaningful visual changes.
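The threshold check itself is simple: compare the share of changed pixels against the project's configured limit. A hypothetical sketch (names and the percent-based convention are assumptions for illustration):

```python
def exceeds_threshold(changed_pixels, total_pixels, threshold_percent):
    """Flag a comparison as a regression when the percentage of
    changed pixels exceeds the project's threshold.

    A threshold of 0 means any change at all fails; a small value
    like 0.1 tolerates anti-aliasing or sub-pixel rendering noise.
    """
    changed_share = 100.0 * changed_pixels / total_pixels
    return changed_share > threshold_percent
```

With this convention, a screenshot where 0.05% of pixels changed would pass under a 0.1% threshold but fail under a strict 0% one.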


Screenshot History

Track how any screenshot evolves over time across runs. Select a screen name and browse every version side by side, with full visual comparison between consecutive changes. Quickly spot when a regression was introduced and how the UI has drifted.


Coming Soon

We're building more tools to make visual testing even better.

Ready to catch visual regressions?