Hello everyone,
I want to share a new milestone in my FailTale journey: a new Proof of Concept (PoC) focused on agentic test failure review for Uyuni test environments.
Project repository: FailTale for Uyuni on GitHub
This is the second PoC after my original FailTale experiment, and it is built around a multi-agent workflow powered by crewAI.
When an automated test fails in a large environment, the hardest part is not seeing the failure itself. The hardest part is collecting the right context quickly and turning it into a useful root cause hint.
This PoC focuses on automating exactly that path: from failure signal, to targeted data collection, to guided analysis.
Data collection is restricted to a small set of read-only commands (such as tail and grep) to reduce noise and exposure. The flow is collaborative instead of linear.
Agent roles and tasks are configured in:
- src/failtale/config/agents.yaml
- src/failtale/config/tasks.yaml

The project uses Python and UV tooling, with a few important runtime prerequisites:
- GOOGLE_API_KEY configured in .env (Gemini is the default provider).
- docker available in PATH for the Uyuni MCP tool.
- npx available in PATH for the SSH MCP tool.
- nomic-embed-text for knowledge embeddings.

A practical detail I like in this PoC is that inputs can be pointed to one concrete failure through .env variables (CONFIG_PATH, TEST_REPORT_PATH, TEST_FAILURE_PATH, SCREENSHOT_PATH), which makes it easier to run focused analysis sessions.
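For readers unfamiliar with crewAI's configuration style, an agents.yaml entry generally follows the format below. This is a purely illustrative sketch: the agent name and wording are invented, not taken from the repository, whose actual definitions live in src/failtale/config/agents.yaml.

```yaml
# Hypothetical agent definition in crewAI's YAML configuration format.
# The real FailTale agents are defined in src/failtale/config/agents.yaml.
log_analyst:
  role: >
    Test failure log analyst
  goal: >
    Collect targeted logs for the failing scenario and summarize anomalies
  backstory: >
    You inspect Uyuni test environments and extract only the evidence
    relevant to the current failure, keeping noise to a minimum.
```

tasks.yaml follows the same pattern, with each task referencing one of the configured agents.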
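As a minimal sketch, a focused analysis session could be configured with a .env file like the one below. The variable names come from the PoC; all values are placeholders you would adapt to your own environment.

```shell
# .env — illustrative values only; adjust the paths to your setup.
GOOGLE_API_KEY=your-gemini-api-key
CONFIG_PATH=./config.yml
TEST_REPORT_PATH=./output/test_report.html
TEST_FAILURE_PATH=./output/failure.txt
SCREENSHOT_PATH=./output/failure.png
```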
One of the most useful integrations is triggering FailTale in a Cucumber After hook only when a scenario fails. The hook can persist failure artifacts and call crewai run automatically, making post-failure debugging far more consistent.
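A hook along these lines could be sketched as follows. This is a minimal, hypothetical example, not the PoC's actual implementation: the helper name, artifact directory, and JSON layout are all assumptions, and only the `crewai run` hand-off comes from the post above.

```ruby
require 'fileutils'
require 'json'
require 'time'

# Persist minimal failure context to a directory FailTale can read later.
# Helper name, directory, and JSON fields are illustrative assumptions.
def persist_failure_artifacts(scenario_name, error_message, dir: 'failtale_artifacts')
  FileUtils.mkdir_p(dir)
  report = {
    'scenario'    => scenario_name,
    'error'       => error_message,
    'captured_at' => Time.now.utc.iso8601
  }
  path = File.join(dir, 'failure.json')
  File.write(path, JSON.pretty_generate(report))
  path
end

# In the Cucumber support code (e.g. features/support/hooks.rb), the hook
# would only fire on failure and then hand off to the FailTale crew:
#
# After do |scenario|
#   next unless scenario.failed?
#   persist_failure_artifacts(scenario.name, scenario.exception&.message)
#   system('crewai run')
# end
```

Keeping the artifact writer separate from the hook itself makes it easy to test the persistence logic without running Cucumber.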
Compared to the first PoC, this version moves toward a cleaner and more modular architecture.
The result is not just “more AI”, but a workflow that is easier to evolve and reason about.
I plan to keep improving this PoC in three directions.
If you are working with Uyuni, Cucumber-based suites, or large acceptance test environments, I would love your feedback.