Talk details to be announced

More details coming soon!

Simon Stewart

Lead Committer, Selenium Project & Creator of WebDriver

Screenshots in Automated Testing: When? How? Why?

We use screenshots more and more in our day-to-day lives and, of course, in our automation processes. When do we need to take screenshots? How do we capture them as part of automated tests, and why do we do it? This talk is dedicated to answering these questions.
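
As a concrete starting point, here is a minimal sketch (not from the talk) of capturing screenshots with Selenium's Python bindings, for example so an image can be attached to a test report:

```python
# Minimal sketch: take a screenshot during an automated test.
from selenium import webdriver

driver = webdriver.Firefox()
try:
    driver.get("https://example.com")
    # Viewport screenshot saved to disk.
    driver.save_screenshot("homepage.png")
    # The same image as raw bytes, handy for embedding in an HTML report.
    png_bytes = driver.get_screenshot_as_png()
finally:
    driver.quit()
```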

Anna Tsibulskaya

Front-end Developer at Wix

The Bumpy Road Towards Continuous Delivery

Learn how the QA and test automation teams are set up for trivago's main product, how they work, and how we gradually introduced the test automation topic. This is not mainly about the technological aspects and challenges (yes, we use Selenium!) but more about how and why we established certain processes.

This talk will touch on a range of topics:

  • Who should automate
  • What should be automated
  • How to ensure that tests follow certain guidelines
  • How QA teams are empowered to use automation, and more

Benjamin Bischoff

Web Test Automation Engineer at trivago

Using Selenium in the Verification and Validation Process for Medical Device Software

This talk will introduce the Medical Device Software V&V process as used by most FDA-approved medical devices that include software. We'll then walk through how automation can help create higher-quality software within the constraints set for medical device software. This will be followed by some of the issues the team I work with has experienced in building a full end-to-end suite of automated tests, along with our solutions to them.

Lydia Tripp

QA/Release Manager at Trov

Your Tests Aren't Flaky, You Are!

Flaky this, flaky that; the only things I like flaky are my puff pastry and a Cadbury’s Flake. However, a week doesn’t go by without some activity in the community on the subject of flaky automated checks. Most recently there was a lengthy post, with masses of analysis, by a team at Google on where they believe their flaky checks come from. There are some useful insights in there, mostly around the size of the check and the tools used. But I feel they missed the all-important part: you, me, the person who actually created the automated check in the first place. All flaky automated checks come from us.

Automated checks have become an essential part of most teams' approach to testing and trying to build a quality product. They’re important for many, many reasons, which we’ll discuss during the talk. But it’s important to remember where these checks come from: they come from us. More specifically, from the knowledge we have of the tools being used and, most importantly, of our applications and their architectures. I view automated checks as algorithms - algorithms that are designed and implemented by us. There are two important parts to avoiding flakiness in the final automated check: the design and the implementation. I feel far too much effort and focus is on the implementation, and we all know what happens if you implement a bad design, right?

In this talk I intend to break down the process of designing and implementing automated checks, going deep into the areas that I believe are critical to creating automation that returns real value to the team: checks that aren’t flaky, checks that don’t result in some poor person continuously playing the role of broken flaky automated check fixer! I’ve played that role, and it sucks!

Takeaways:

  • An appreciation of the skills required to design good automated checks
  • An appreciation of the skills required to implement a good automated check
  • How these skills differ, and how the whole team needs to be involved
  • The importance of continuously reviewing our automated checks for implementation, risks and value added

Richard Bradshaw

FriendlyBoss at Ministry of Testing

REST APIs and WebDriver: In Perfect Harmony

A common pattern that automators fall into is trying to execute every action of a test via the UI: logging in, creating required data, navigating to that specific data, and then running assertions on it before logging out. This can lead to tests that are slow to run and likely to break due to the reliance on many web elements.

This talk will demonstrate how participants can use HTTP request libraries and WebDriver in harmony. We'll cover how HTTP request libraries can take care of state manipulation and data setup so that WebDriver can focus on the areas where it is strongest.
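
As an illustration of the pattern (a sketch, not the speaker's code; the API endpoints and page details are made up), data setup happens over HTTP and WebDriver only exercises the screen under test:

```python
# Sketch: arrange state via a REST API, then use WebDriver only for the UI check.
import requests
from selenium import webdriver

API = "https://app.example.com/api"  # hypothetical application API

# Arrange: create test data over HTTP instead of clicking through the UI.
session = requests.Session()
session.post(f"{API}/login", json={"user": "tester", "password": "secret"})
article = session.post(f"{API}/articles", json={"title": "Hello"}).json()

# Act + assert: drive only the page we actually care about.
driver = webdriver.Chrome()
try:
    driver.get(f"https://app.example.com/articles/{article['id']}")
    assert "Hello" in driver.title
finally:
    driver.quit()
```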

We'll look at:

  • How we design a test and what actions are involved in the execution of a test
  • How we can break up the different actions of a test and assign different tasks to different libraries
  • A practical demonstration of how to add an HTTP request library into a current WebDriver based framework to create data for WebDriver to use
  • An approach participants can use to organise HTTP request code to make it DRY and reliable
  • Tips and tricks for participants to use to help them determine what HTTP requests and WebDriver can help them with

Participants will leave with a deeper appreciation of the strengths of WebDriver and of how to effectively improve their framework's reliability and speed.

Mark Winteringham

Consultant Test Lead

Build a Successful Team: Motivate Your Software Tester

The same complaints with no answers all over again.

  • Junior Tester: ‘Testing is a monotonous and repetitive job. Why should I want to be a tester?’
  • Senior Tester: ‘I’m getting no respect, no appreciation, and I always have to deal with developers’ egos. I’m leaving.’
  • Test Manager: ‘Why do good testers leave my team? How do I keep them? Why do I always have to build a team from the very beginning?’

On a global scale, these questions are part of a bigger problem many companies face today, regardless of their size or field of business. This talk will show participants a practical way to deal with it, using the following structure.

  1. Mr. Software Tester
    • Boosting testers' pride and importance
    • Quick answers to all testers' tricky questions and doubts
    • Different approaches to pouring energy into Junior or Senior testers
  2. Communication Expert Software Tester
    • Discuss how to engage parties such as Stakeholders, Business Department, Developers, Team Leaders and Analysts
    • Learning from experts: tomorrow's test leads
  3. Career Beast Software Tester
    • Using a modernised ‘carrot and stick’ approach to motivation in today’s world
    • Finding individual motivational targets (e.g. career progression, certifications, conferences, networking)

Participants will leave motivated and with the know-how to motivate others.

Petra Bouskova

Test Coordinator at tesena

Committers Panel

Selenium Committers

My Story of Microservices Testing

Once monolithic applications were broken into microservices, QA teams faced new challenges. It's no longer enough to just run tests; now you have to invest in the operational cost of test execution.

To avoid wasting build-server time, the QA framework should fail fast if the microservice deployment is incomplete, or becomes so during test execution. Since not every microservice is integrated with all the others, it isn't worth executing all tests when a single service changes: the QA framework has to dynamically choose which tests to run based on what changed. I believe that mocked services in a microservice architecture can be more than simple request-response mocks; they can be fake services with custom logic. I am also going to describe the joys and pains of using Swagger widely in practice, and present an overview of the architecture of my QA framework with some useful features. All examples and solutions will be illustrated with code snippets.
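
As an illustration of the fail-fast idea (my sketch, not the speaker's framework; service names and health-check URLs are assumptions), a pre-flight check can abort the build before any UI test runs:

```python
# Sketch: abort the test run early if a required microservice is not healthy.
import sys
import requests

SERVICES = {
    "accounts": "http://accounts.internal/health",
    "catalog": "http://catalog.internal/health",
}

def deployment_is_healthy() -> bool:
    for name, url in SERVICES.items():
        try:
            if requests.get(url, timeout=3).status_code != 200:
                print(f"{name} is unhealthy, aborting test run")
                return False
        except requests.RequestException:
            print(f"{name} is unreachable, aborting test run")
            return False
    return True

if __name__ == "__main__":
    if not deployment_is_healthy():
        sys.exit(1)  # fail the build quickly instead of timing out test by test
```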

Aleksandr Martiushov

Senior QA Automation Engineer at Bitplaces

Scalable Selenium Cluster: Up & Running

This talk will cover:

  • Big Selenium cluster requirements
  • Standard Selenium architecture and why it is not suitable for big clusters
  • Client-side load balancing solution
  • Server-side load balancing (Ggr server): requirements, algorithms, an open-source implementation, how to launch the balancer
  • What is inside worker nodes
  • How to decrease resource consumption: creating lightweight nodes and using Docker
  • Selenoid server: how it works, how to use it inside a big Selenium cluster, and where else it can be used

Ivan Krutov

Developer at Aerokube

Ex Machina: The Framework that Knows its Bugs

Too often when discussing test automation we focus on how time-consuming it is to set up. Indeed, there is a significant effort required to implement a framework, add test cases and maintain all that as requirements evolve. However, a very important component remains overlooked: the daily routine of monitoring test results, including detecting defects in either the automation framework or the product, logging them in a tracking system, and removing the false positives caused by known issues. On a practical level, this presentation will explore ways to make the framework familiar with the defects and then “teach” it to get the boring work done.
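
One possible shape of such “teaching”, sketched with pytest (an illustration, not the speaker's framework; the bug IDs and the tracker lookup are assumptions): a test linked to an open bug reports its failure as a known issue rather than as a new defect.

```python
# Sketch: suppress false positives from known, still-open bugs.
import functools
import pytest

KNOWN_OPEN_BUGS = {"PROJ-1234", "PROJ-2345"}  # e.g. fetched from the bug tracker's API

def known_issue(bug_id):
    """Report assertion failures of tests linked to an open bug as expected failures."""
    def decorator(test_fn):
        @functools.wraps(test_fn)
        def wrapper(*args, **kwargs):
            try:
                test_fn(*args, **kwargs)
            except AssertionError:
                if bug_id in KNOWN_OPEN_BUGS:
                    pytest.xfail(f"Known issue {bug_id} is still open")
                raise  # bug is closed: this is a real, new failure
        return wrapper
    return decorator

@known_issue("PROJ-1234")
def test_checkout_total():
    assert 2 + 2 == 5  # placeholder for the real check
```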

Aneta Petkova

Automation QA Lead at Honeywell

Adding Performance Assertion to Standard Functional Testing

The performance engineering team at GoDaddy has used many different tools and methods to prevent application performance regression, ranging from full-blown (load-)testing in test labs to server-side application performance monitoring to Real User Monitoring (RUM). Despite all of these efforts, the data keeps revealing that slow applications end up going live in production. The team realised that they had to somehow insert a safeguard right into the build process of the applications.

Most teams at GoDaddy follow a CI/CD process where Selenium is commonly used for test automation. When we realised that Selenium WebDriver provides access to the same APIs as real browsers, including the widely supported W3C performance API, the concept for the cicd-perf-api webservice was born! By injecting some JavaScript code through the WebDriver object, performance data can be collected and posted back to the webservice. The response from the webservice includes a boolean field that testers can use for assertion - just like they would with functional checks! The field indicates whether performance was above or below the baseline.
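
A rough sketch of the idea (the verdict endpoint and response field below are assumptions, not GoDaddy's actual cicd-perf-api interface):

```python
# Sketch: read W3C Navigation Timing data through WebDriver and assert on a verdict.
import requests
from selenium import webdriver

driver = webdriver.Chrome()
try:
    driver.get("https://www.example.com")

    # Navigation Timing is exposed to JavaScript, so WebDriver can read it too.
    page_load_ms = driver.execute_script(
        "return performance.timing.loadEventEnd - performance.timing.navigationStart"
    )

    # Hypothetical verdict service compares the measurement against a baseline.
    verdict = requests.post(
        "https://cicd-perf.example.com/api/check",
        json={"page": "home", "loadTimeMs": page_load_ms},
    ).json()

    # Assert on the boolean verdict, just like a functional check.
    assert verdict["withinBaseline"], f"Page load regressed: {page_load_ms} ms"
finally:
    driver.quit()
```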

The cicd-perf-api webservice has been successfully implemented by many test teams at GoDaddy and we feel it is beneficial to anybody who is interested in adding performance assertion to their regular test cycles. This talk will explain the concept of the webservice and includes live demos of the webservice and the resulting data in a Kibana dashboard.

Marcel Verkerk

Lead Performance Engineer at GoDaddy

Automating Multi-Factor Authentication

The use of Multi-Factor Authentication is becoming more and more common online, especially in e-commerce. I believe that a true end-to-end monitoring system should be able to cover MFA steps without special tweaks.

This talk will describe the 3 most common methods used today to implement MFA:

  • SMS code verification
  • An automated phone call that either reads out an X-digit code or requires you to dial one yourself
  • Time-based One Time Password (TOTP) algorithm using dedicated apps such as Google Authenticator / 1Password / Okta /etc.

After understanding the differences between the above methods, we'll walk through one way to automate each form of MFA. While SMS and TOTP are relatively easy to automate, automating phone calls and speech-to-text is more complicated. In order to address that challenge, this talk will introduce a new technology: Asterisk - an open-source telecommunications engine.
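
For the TOTP case, a minimal sketch (assuming the pyotp package and a dedicated test account whose shared secret is known to the test; the page URL and locators are made up):

```python
# Sketch: generate the current TOTP code and submit it through the MFA form.
import pyotp
from selenium import webdriver
from selenium.webdriver.common.by import By

TOTP_SECRET = "JBSWY3DPEHPK3PXP"  # base32 secret the test account was enrolled with

driver = webdriver.Chrome()
try:
    driver.get("https://login.example.com/mfa")
    code = pyotp.TOTP(TOTP_SECRET).now()  # current 6-digit one-time password
    driver.find_element(By.NAME, "otp").send_keys(code)
    driver.find_element(By.NAME, "submit").click()
finally:
    driver.quit()
```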

The talk will feature 3 live demos, one for automating each MFA form:

  • How to use Twilio's API to automate receiving an SMS with a verification code
  • How to use a Python library and a pre-configured user account to automate TOTP
  • How to use Asterisk and Amazon's ASR (automatic speech recognition) to automate receiving or dialling a verification code during an automated phone call

All the demos and code samples (including a dedicated Asterisk Dockerfile with the relevant configuration) will be open-sourced before the conference starts.

Or Polaczek

Research Engineer / Mobile Lead at Forter

The Death of Liberal Arts

Liberal arts, humanities, and critical thinking subjects are dying in education. WHY? Today, so much emphasis is put on Science, Technology, Engineering, and Math (STEM) in education, but I think that's excluding a critical part of learning! Let's explore the most utilised skills essential to my career in testing, all a result of my Liberal Arts background. Join me as I dive into why defunding these areas is detrimental to the automation field. We’ll go over each role I've encountered to investigate the skills they use most. How can we apply these skills to our teams? What do we do when someone doesn't have that background? Let's take a look at how the humanities can help shape the future of automation. Don’t let the Liberal Arts die - embrace them, and see the value they add to your team.

Ashley Hunsberger

Architect at Blackboard

Readable. Stable. Maintainable. E2E Testing @ Facebook

At Facebook we use end-to-end tests to ensure the quality of our website and suite of apps. Most of those tests have their roots in WebDriver; however, at that scale, with multiple people writing and maintaining tests, some issues with WebDriver emerged. Tests were becoming long and hard to read; flaky due to explicit waiting and stale element references, especially with React and highly dynamic UIs; and hard to maintain, since the tests were very raw (get this select, click it, and so on). The intention of the tests wasn't clear.

Facebook's E2E Automation team in London built a framework around WebDriver, inspired by WebDriver page objects and React components, to make end-to-end tests declarative, stable, and maintainable. By declarative, we mean that tests are easier to read and write: a test should represent what it is testing, and a person should be able to figure out what the test is doing in a descriptive way. The framework also provides stability by inherently dealing with waits and stale elements: you describe what you expect on the screen, and the framework waits for it.

Finally, through the notion of components, which represent pieces of UI, you can reuse them across pages and tests to assert on their properties or interact with them. You essentially never deal directly with web elements; you describe your interactions in terms of components instead.
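
To make the idea concrete, here is a generic sketch of a component with waiting built in - emphatically not Facebook's framework, just an illustration of the concept using Selenium's Python bindings:

```python
# Sketch: a component locates itself and waits before every interaction,
# so individual tests never touch raw web elements.
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

class Component:
    def __init__(self, driver, locator, timeout=10):
        self._driver = driver
        self._locator = locator
        self._timeout = timeout

    def _element(self):
        # Waiting lives inside the component, not sprinkled through tests.
        return WebDriverWait(self._driver, self._timeout).until(
            EC.visibility_of_element_located(self._locator)
        )

    def click(self):
        self._element().click()

    def text(self):
        return self._element().text

# A test then reads declaratively, in terms of components (locator is hypothetical):
#   composer = Component(driver, (By.CSS_SELECTOR, "[data-testid='composer']"))
#   composer.click()
```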

Archit Pal Singh Sachdeva

Software Engineer at Facebook

The Digital Divide and Test Automation

More details coming soon!

Katrina Clokie

Test Practice Manager at Bank of New Zealand

Automated Accessibility Testing: Web is for Everyone

Most companies are moving to a Continuous Delivery and DevOps model, with quick delivery cycles and Continuous Integration tools being implemented. To match this speed, a fast feedback mechanism - continuous (automated) testing - is used to validate the build. But why validate only functionality? Let's also validate the accessibility of an app, to check whether the roughly 20% of the population with impairments (approx. 1 billion people globally) are catered for.

In this talk:

  • What is accessibility testing?
  • Which tools are available to automate accessibility testing? (one option is sketched below)
  • Do we need separate efforts to automate accessibility testing?
  • Do we need expertise on accessibility?
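
One option (my example; the talk does not prescribe a tool) is to run the axe-core accessibility engine inside an existing Selenium test, here assuming the axe-selenium-python package:

```python
# Sketch: add an accessibility audit to an existing Selenium check.
from axe_selenium_python import Axe
from selenium import webdriver

driver = webdriver.Firefox()
try:
    driver.get("https://example.com")
    axe = Axe(driver)
    axe.inject()         # load the axe-core script into the page
    results = axe.run()  # run the accessibility audit
    assert len(results["violations"]) == 0, axe.report(results["violations"])
finally:
    driver.quit()
```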

Manoj Kumar

Principal Consultant at Assertify Consulting

Good, Cheap, and Fast: Scaling Your Selenium Grid in the Cloud

Recently, Amplify Education transitioned from a traditional data centre to utilise Amazon Web Services for its hosting needs. As part of this transition, my team had to handle the movement of our Selenium Grid infrastructure to the cloud. This presented a number of challenges but also yielded many rewards. We moved from a static, perpetually undersized Selenium Grid to one that dynamically resizes to fit our needs.

Challenges:

  • How do we keep our Selenium Grid up to date?
  • How do we handle “cleaning” “dirty” Selenium Grid nodes?
  • How do we actually discover our Selenium Grid nodes?
  • How do we automatically scale our Selenium Grid?
  • How do we handle scaling infinitely?
  • How do we do this cheaply?

Our ultimate solution utilises a number of AWS Services (EC2/Lambda/DynamoDB) to achieve a dynamically sizing Selenium Grid. We wrote an autoscaling service in Lambda using metrics from Datadog and wrote a proxy service for multiple Selenium Grid Hubs that handles both traditional Selenium Grid Hubs as well as seamlessly sending sessions to Sauce Labs. Finally, we took advantage of AWS EC2 Spot Instances to significantly reduce costs via Spotinst. The combination of Lambda and Spotinst has significantly reduced the costs of running our Selenium Grid infrastructure while at the same time making our Selenium Grid more capable by allowing us to run more powerful instances when we need them.
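
As a heavily simplified sketch of the autoscaling idea (not Amplify's actual Lambda, which uses Datadog metrics, DynamoDB and Spotinst; the hub URL, Auto Scaling group name and sizing rule here are assumptions):

```python
# Sketch: a Lambda-style handler that sizes an EC2 Auto Scaling group of grid
# nodes from the Selenium Grid 3 hub status (queued + busy sessions).
import math
import boto3
import requests

HUB_STATUS = "http://selenium-hub.internal:4444/grid/api/hub"
ASG_NAME = "selenium-grid-nodes"
SESSIONS_PER_NODE = 4

def handler(event, context):
    status = requests.get(HUB_STATUS).json()
    queued = status.get("newSessionRequestCount", 0)
    slots = status.get("slotCounts", {})
    busy = slots.get("total", 0) - slots.get("free", 0)

    desired = math.ceil((queued + busy) / SESSIONS_PER_NODE)
    boto3.client("autoscaling").set_desired_capacity(
        AutoScalingGroupName=ASG_NAME,
        DesiredCapacity=max(desired, 1),  # keep at least one warm node
    )
```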

Johnathan Constance

Senior Software Engineer at Amplify Education

What The Doc?!?! How to Write and Read Documentation That Allows You To Get Sh*t Done!

Documentation is often treated as an afterthought and an undesirable part of the creative process, but it doesn’t have to be. Documentation can be an effective and efficient way to tell your story in a comprehensive and engaging manner. What the Doc?!?! will help both the writer and the reader understand the value of documentation and the role it, consciously or unconsciously, plays in communicating your story to the world.

In this talk, Kim will cover various aspects of the documentation process:

  • The importance of documentation, not just from a code perspective but as a tool for building strong, diverse, and inclusive communities around the project
  • Basic writing strategies for effective communication
  • Tips on how to make documentation newbie friendly
  • And even working through an audience example or two!

Kim Crayton

Community Engineer

The What, Why and How of Web Analytics Testing

What is Web Analytics and why is it important? We'll walk through techniques for manually testing your data and automating the validation process.
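
One hedged example of automating that validation (assuming a site that pushes events to a Google Tag Manager style window.dataLayer; the event name and page locator are made up):

```python
# Sketch: after a tracked action, read the page's analytics queue via WebDriver
# and assert the expected event was recorded.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://shop.example.com/product/42")
    driver.find_element(By.ID, "add-to-cart").click()

    data_layer = driver.execute_script("return window.dataLayer || []")
    events = [entry.get("event") for entry in data_layer if isinstance(entry, dict)]
    assert "addToCart" in events, f"Analytics event missing, got: {events}"
finally:
    driver.quit()
```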

Just knowing about Web Analytics is not sufficient for business any more. There are new kids in town - IoT and Big Data - two of the most used and well-known buzzwords in the software industry! With a creative mindset looking for opportunities to add value, the possibilities for IoT are infinite. With each such opportunity, there's a huge volume of data being generated which, if analysed and used correctly, can feed into creating more opportunities and increased value propositions.

There are 2 types of analysis that one needs to think about:

  1. How is the end-user interacting with the product? - This will give some level of understanding into how to re-position and focus on the true value add features for the product.
  2. What are the patterns in the data? - With the huge volume of data being generated by the end-user interactions, and the data being captured by all devices in the food-chain of the offering, it is important to identify patterns and find out new product and value opportunities based on these.

Anand Bagmar

Director of Quality at Vuclip

Beyond Performance using Taurus

Learn to optimise performance scripts for APIs using Taurus. This talk will cover:

  • How to test the application by hiding the complexities of running performance tests
  • A simple way to create, run and analyse performance tests making the process of test configuration and execution as simple as possible
  • A demo of how to write and run test automation using Gatling and Taurus
    • We will walk through our journey of writing a script for an enterprise application and how it helped us account for API bottlenecks. Reporting gave a clear picture of the business, helped map results back to the APIs' SLAs, and helped measure and fine-tune the application's performance during peak hours.

Key takeaways: the audience will be able to write a script using Taurus and get to know BlazeMeter reporting.

Varuna Srivastava

Quality Analyst at ThoughtWorks

Appium for Couch Potatoes: An HbbTV Driver

Almost 13 years ago we started with Selenium to automate websites. With Appium we generalised that concept to mobile, and just recently entered the Windows and Mac space by adding Windows and macOS drivers to the Appium family. Let’s continue our StarDriver quest and enter a (not quite) new sphere: the television. Within the last few years, a new standard called Hybrid Broadcast Broadband TV (HbbTV) has evolved, and the latest generation of Smart TVs is equipped with it. This standard allows broadcasters to build web apps for their broadcast channels to provide additional context information for the TV stream or videos on demand.

The number of HbbTV apps being developed keeps increasing as the standard is rolled out worldwide. By now almost all TV manufacturers support the standard, and due to the high number of TVs on the market the fragmentation is extreme. Different TVs run different proprietary rendering engines with different levels of JavaScript support. Until today, the only way to test an HbbTV app has been to take the remote control and walk through the app manually; this has to change.

This talk will introduce a new Appium driver that allows running automated tests, based on the WebDriver protocol, against HbbTV apps on Smart TVs. It will explain not only how this driver works but also how other drivers in the Selenium and Appium world do their job in general. We will look into the challenges that automating an app for a TV device brings, and talk about how anyone can build a driver for anything.
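
To see what “how drivers do their job” means in practice: every driver is essentially an HTTP server speaking the WebDriver protocol. A minimal sketch of that handshake against a locally running chromedriver (capabilities simplified):

```python
# Sketch: drive a browser by speaking the WebDriver protocol directly over HTTP.
import requests

DRIVER = "http://localhost:9515"  # chromedriver's default port

# Create a session (W3C-style capabilities); the response carries the session id.
resp = requests.post(
    f"{DRIVER}/session",
    json={"capabilities": {"alwaysMatch": {"browserName": "chrome"}}},
)
session_id = resp.json()["value"]["sessionId"]

# Navigate, read back the current URL, then end the session.
requests.post(f"{DRIVER}/session/{session_id}/url", json={"url": "https://example.com"})
current = requests.get(f"{DRIVER}/session/{session_id}/url").json()["value"]
print(current)
requests.delete(f"{DRIVER}/session/{session_id}")
```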

Christian Bromann

Software Engineer at Sauce Labs

Zero to Test: How to Run Your First Beta Testing Program

Similar to attending regular checkups to prevent an ER visit, a successful partnership between testing and UX teams is preventative medicine for building great software products. Beyond getting out of the building and talking to users, some of the most valuable insights you can gather come from beta testing: putting a prototype in a user's hand and learning more about how they use it over time.

In this talk, we'll cover the basics for running your first beta testing program as well as two detailed case studies (launching the FiscalNote iOS app and the beta version of the Real Talk app):

  • How to sell a manager or client on the idea of beta testing, defining the difference between automated testing and beta testing, and explaining how a combination of the two can lead to better products
  • The various decisions for how to integrate testing into the product development lifecycle & effectively partner with stakeholders
  • Best practices for working effectively with client success/account management (for B2B) or user support (for B2C)
  • How the FiscalNote Product and QA teams partner to use feedback from beta testing with end users to guide crafting automated regression testing plans that are grounded in customer data and insights

Crystal Yan

Product Manager and Design Lead at FiscalNote

Codeless Visual Testing with Selenium Builder

Test automation folklore is full of horror stories of failed attempts to apply record-playback tools to perform UI-based functional testing. In this talk we’ll take an objective look at record-playback tools and compare them with programming-based automation tools in order to evaluate their applicability to visual test automation. We will show that record-playback tools are very effective as visual testing drivers and implement a visual test for a responsive website using Selenium Builder without writing a single line of code.

Doron Zavelevsky

Front-end Team Leader at Applitools

Keeping Your Tests Lean

We've come to realise that automation provides an immense amount of value in preventing regressions and helping to deliver quality software. As your automation grows and grows, it requires continuous maintenance so that tests remain fast, reliable, and valuable. If you're not scaling efficiently, your automation suite will turn into a messy, uncontrollable beast. Having a lean test suite will help to combat this.

In this session, Meaghan will present methods that you can use to keep your automated test suites lean and mean, so they always provide quick and accurate feedback to your software delivery team. Using a few examples, we'll discuss a wide range of ideas including evaluating a test's value, parallelizing tests, and producing consistent results!

Session attendees will walk away with strategies and practices to scale their test automation over time in a highly efficient and maintainable way.

Meaghan Lewis

QA Engineer at Lever

Selenium and the Four Rules of Simple Design

We're automating web application testing using Selenium WebDriver. It's easy to get started with automated tests but harder to maintain an automated test system. Entropy increases with time and with different developers and testers; your once beautifully crafted test code may end up unrecognisable.

The cure for this is to apply the Four Rules of Simple Design by Kent Beck while you maintain the tests.

  1. Tests pass
  2. Express intent
  3. No duplication
  4. Small

Thomas will take a test suite for a web application and, by following the Four Rules of Simple Design, transform it step by step into something that is easier to maintain.
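
As a tiny before-and-after taste of that kind of transformation (my own example, not taken from the talk), applying “express intent” and “no duplication” to a WebDriver test:

```python
# Sketch: the same check, first as raw element manipulation, then with the
# mechanics extracted into an intention-revealing helper.
from selenium.webdriver.common.by import By

# Before: the test is a list of low-level element interactions.
def test_login_raw(driver):
    driver.get("https://example.com/login")
    driver.find_element(By.ID, "user").send_keys("anna")
    driver.find_element(By.ID, "pass").send_keys("secret")
    driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
    assert "Dashboard" in driver.title

# After: the duplicated mechanics live in one intention-revealing helper.
def log_in(driver, username, password):
    driver.get("https://example.com/login")
    driver.find_element(By.ID, "user").send_keys(username)
    driver.find_element(By.ID, "pass").send_keys(password)
    driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()

def test_login_expresses_intent(driver):
    log_in(driver, "anna", "secret")
    assert "Dashboard" in driver.title
```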

This will be a live coding session with a lot of refactoring. All steps will be small, and you will therefore be able to follow along even if your main profession isn’t writing code. After this session, you will know that even the most horrible test code can be cleaned up just by slowly transforming it in small steps.

Thomas Sundberg

Developer at Think Code AB

Partner with SeleniumConf

Become a sponsor of SeleniumConf Berlin 2017. Email us to request a copy of our sponsor pack.

Sauce Labs

Premier Sponsor

Hewlett Packard Enterprise

Platinum Plus

Applitools

Gold Sponsor & Video Sponsor

Sahabt

Gold Sponsor

SmartBear

Gold Sponsor

Ministry of Testing

Bronze Sponsor

Sticker Mule

Bronze Sponsor

Join the mailing list for announcements