Attending SeleniumConf 2025 was an incredible experience for me. I arrived on Wednesday around noon, and in the evening, we had a speakers’ dinner. The food was excellent, and I had the opportunity to start meeting some really interesting people.
Thursday marked the official start of the conference. The breakfast at the Hotel Las Arenas was amazing, and upon arriving at the venue, there was even more breakfast and coffee available. I was quite nervous, unsure if my talk would go well.
As soon as I arrived, the first thing I did was check if everything was working properly with my laptop, ensuring I could present smoothly while viewing my speaker notes. Everything was set up correctly, allowing me to relax and start enjoying the talks.
The first session was a keynote explaining that Selenium and Appium are aiming to collaborate on their respective projects as much as possible. This is why the conference was held jointly for both technologies.
The first talk I attended was The Ultimate Swiss Army Knife of UI Software Testing, Accelerating Workflow-Based Automation. It was a bit difficult to follow since the room was too bright, making it hard to see the code on the screen. The session focused on an implementation to test graphical interfaces.
Next, I attended TestOps: A Journey to Story-Based Releases. The presenter set the bar incredibly high with an amazing presentation and visually engaging slides. The talk followed the journey of a team transitioning from merging everything into a single branch to working with multiple development, testing, and staging environments. This microservices-based approach highlighted the challenges of integrating different implementations that are developed simultaneously.
After this, it was time for my talk: Speed Up PR Tests with Smart Code Coverage-Based Selection. It went quite well: I was able to explain the design and showcase the code we implemented for Smart Test Selection. There were many questions, and even after the Q&A session, people approached me throughout the two days to discuss implementations and share ideas.
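To give a flavor of the idea, here is a minimal, illustrative sketch of coverage-based test selection, not our actual implementation: it assumes a pre-built coverage map (source file to the tests that exercise it) stored in a hypothetical coverage_map.json, and uses git to find the files a PR touches.

```python
# Minimal sketch of coverage-based test selection (illustrative only).
# Assumes a pre-built coverage map: source file -> tests that exercise it.
import json
import subprocess


def changed_files(base_branch: str = "main") -> list[str]:
    """Return the files touched by the current branch, via git diff."""
    out = subprocess.run(
        ["git", "diff", "--name-only", base_branch],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]


def select_tests(coverage_map_path: str, base_branch: str = "main") -> set[str]:
    """Pick only the tests whose covered files were modified."""
    with open(coverage_map_path) as fh:
        coverage_map = json.load(fh)  # e.g. {"src/foo.py": ["tests/test_foo.py::test_x"]}
    selected: set[str] = set()
    for path in changed_files(base_branch):
        selected.update(coverage_map.get(path, []))
    return selected


if __name__ == "__main__":
    tests = select_tests("coverage_map.json")
    print("\n".join(sorted(tests)) or "No matching tests; consider running the full suite.")
```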
One key question was how to measure the effectiveness of the recommended tests, essentially whether the suggested tests were truly the best ones. This is something we should investigate further to develop relevant metrics. Another idea was whether this approach could be applied to unit tests, though given their fast execution time, I am not sure it would be necessary.
A Mozilla engineer introduced me to BugBug, an AI-powered tool that selects the best tests for bug coverage based on bug reports. Additionally, I learned about Cluecumber, a project by Trivago for visualizing test reports, which seems worth exploring.
After lunch, I attended João Proenza’s talk, Decoding Synthetic Monitoring: A Journey from E2E UI Tests to Service-Level Probes. This was particularly interesting because it aligns with something we have already started implementing. I call it “Fitness Metrics,” but they refer to it as “Synthetic Monitoring.” The concept involves deriving a metric from end-to-end tests to observe system performance trends. João’s example focused on API calls generated from user actions within the application, decoupling the UI layer and focusing on backend validation.
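As a rough illustration of that kind of service-level probe (the endpoint, metric names, and threshold below are placeholders, not João’s actual example), the idea is to call the backend API behind a key user action and record latency and status as a health signal:

```python
# Minimal sketch of a service-level probe (illustrative; URL and threshold are placeholders).
import time
import requests


def probe_orders_api(base_url: str = "https://example.com") -> dict:
    """Call the backend endpoint behind a key user action and record latency/status."""
    start = time.monotonic()
    response = requests.get(f"{base_url}/api/v1/orders", timeout=10)
    latency = time.monotonic() - start
    return {
        "probe": "orders_api",
        "status_code": response.status_code,
        "latency_seconds": round(latency, 3),
        "healthy": response.ok and latency < 2.0,
    }


if __name__ == "__main__":
    print(probe_orders_api())
```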
Later, I partially attended Lauro Moura’s session, Advancing WebDriver BiDi support in WebKit, which explained the benefits of BiDi, a technology that modern testing frameworks are increasingly adopting.
Another engaging talk was Beyond Logs: Achieving Observability with Selenium and Grafana by Giannis Papadakis. It covered test observability using a Prometheus Push Gateway to monitor test results in Grafana and send alerts via email or Slack. The talk also introduced Loki for better test log management, reinforcing my plan to explore Loki further. During the Q&A, I shared insights about our Jenkins-Exporter as an alternative approach to processing test reports for Grafana.
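For context, pushing test-run metrics to a Prometheus Pushgateway so Grafana can chart and alert on them looks roughly like the sketch below; the gateway address, job name, and metric names are my own placeholders, not the speaker’s setup.

```python
# Minimal sketch: push test results to a Prometheus Pushgateway
# (gateway address and metric/job names are placeholders).
from prometheus_client import CollectorRegistry, Gauge, push_to_gateway


def publish_test_results(passed: int, failed: int, duration_s: float,
                         gateway: str = "localhost:9091") -> None:
    registry = CollectorRegistry()
    Gauge("ui_tests_passed", "Passed UI tests", registry=registry).set(passed)
    Gauge("ui_tests_failed", "Failed UI tests", registry=registry).set(failed)
    Gauge("ui_tests_duration_seconds", "Suite duration", registry=registry).set(duration_s)
    # Grafana dashboards and alert rules (email/Slack) can then read these series.
    push_to_gateway(gateway, job="ui_test_suite", registry=registry)


if __name__ == "__main__":
    publish_test_results(passed=120, failed=3, duration_s=845.2)
```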
The first day’s closing keynote was delivered by Angie Jones, focusing on AI and MCPs. She demonstrated an MCP executing Selenium commands and shared various AI-driven automation techniques. It was amusing when she mentioned my talk, using it as an example of how AI could help enhance test selection.
On the second day, the opening keynote was delivered by my former colleague, Almudena Vivanco, a Performance Engineer at Lidl. She emphasized the importance of performance testing as a team-wide responsibility rather than just a task for a specialized group of engineers.
The first session I attended was Flakiness in Your Tests Isn’t Down to the Test Framework by David Burns, which provided strategies for better managing waits in test automation.
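I did not capture David’s exact examples, but the general direction of such advice is to replace fixed sleeps with explicit, condition-based waits; a minimal Selenium (Python) sketch, with a placeholder locator, looks like this:

```python
# Minimal sketch: prefer explicit, condition-based waits over fixed sleeps
# (locator, URL, and timeout are placeholders).
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.com")

# Instead of time.sleep(5), wait only as long as the condition actually needs.
submit_button = WebDriverWait(driver, timeout=10).until(
    EC.element_to_be_clickable((By.ID, "submit"))
)
submit_button.click()
driver.quit()
```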
Next, I attended Reviving Windows App Automation: NovaWindows Driver for Appium 2 by Teodor Nikolov. He introduced a new Appium driver for Windows, which primarily relies on command-line executions but also integrates UI automation at times. His demonstration showed that this new implementation significantly improves test execution speed. Hopefully, it gains more contributions and visibility.
After enjoying a fantastic paella for lunch, I attended Unleash Synthetic Monitoring with Supercharged Test Automations by Leandro Meléndez, also known as Señor Performo. This talk reiterated concepts similar to our Fitness Metrics, confirming that we are on the right track with our implementation.
Following this, I had the chance to record a short podcast episode with Leandro, where we discussed my professional journey and my talk. He was a great host.
The last session I attended was the Lightning Talks, but nothing particularly stood out.
In summary, I left the conference highly motivated about test observability, although AI seemed to be the main theme of the event.