We're using screenshots more and more often in our day-to-day life and, of course, in our automation processes. When do we need to take screenshots? How do we take them as part of automated tests, and why do we do it? This talk is dedicated to answering these questions.
Learn how the QA and test automation teams are set up for trivago's main product, how they work, and how we gradually introduced the test automation topic. This is not mainly about the technological aspects and challenges (yes, we use Selenium!) but more about how and why we established certain processes.
This talk will touch on a range of topics:
This talk will introduce the Medical Device Software V&V process as used by most FDA-approved medical devices that include software. We'll then walk through how automation can help create higher-quality software within the constraints set for medical device software. This will be followed by some of the issues the team I work with has experienced in trying to create a full end-to-end suite of automated tests, as well as solutions to these issues.
Flaky this, flaky that; the only things I like flaky are my puff pastry and a Cadbury’s Flake. However, a week doesn’t go by without seeing some activity in the community on the subject of flaky automated checks. Most recent was a lengthy post, with masses of analysis, by a team at Google on where they believe their flaky checks come from. There are some useful insights in there, mostly around the size of the check and the tools used, but I feel they missed the all-important part: you and me, the people who actually created the automated check in the first place. All flaky automated checks come from us.
Automated checks have become an essential part of most teams' approach to testing and building a quality product. They’re important for many reasons, which we’ll discuss during the talk. But it’s important to remember where these checks come from: they come from us. More specifically, from the knowledge we have of the tools being used and, most importantly, of our applications and their architectures. I view automated checks as algorithms: algorithms that are designed and implemented by us. There are two important parts to avoiding flakiness in the final automated check: the design and the implementation. I feel far too much effort and focus is on the implementation, and we all know what happens when you implement a bad design, right?
In this talk I intend to break down the process of designing and implementing automated checks, going deep into the areas that I believe are critical to creating automation that returns real value to the team, checks that aren’t flaky, checks that don’t result in some poor person continuously playing the role of broken flaky automated check fixer! I’ve played that role, it sucks!
A common pattern that automators fall into is trying to execute every action of a test via the UI: logging in, creating required data, navigating to that specific data, running assertions on it, and then logging out. This can lead to tests that are slow to run and likely to break due to their reliance on many web elements.
This talk will demonstrate to participants how they can use HTTP request libraries and WebDriver in harmony. We'll cover how HTTP request libraries can take care of state manipulation and data setup, so that WebDriver can focus on the areas where it is strongest.
We'll look at:
Participants will leave with a deeper appreciation for the strengths of WebDriver and how to effectively improve their framework's reliability and speed.
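As a flavour of the approach, here is a minimal sketch of logging in over HTTP once and handing the resulting session to WebDriver, instead of driving the login form in every test. The endpoint, JSON shape, and `session` cookie name are all hypothetical:

```python
import json
import urllib.request

API = "https://app.example.test/api"  # hypothetical API root

def api_login(user, password):
    """Authenticate over HTTP once, instead of driving the login form in every test."""
    body = json.dumps({"user": user, "password": password}).encode()
    req = urllib.request.Request(
        f"{API}/login", data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["token"]

def session_cookie(token):
    """Shape the API token as a cookie that WebDriver's add_cookie() accepts."""
    return {"name": "session", "value": token, "path": "/"}

# In a test this would look roughly like:
#   driver.get(BASE_URL)                      # must be on the domain first
#   driver.add_cookie(session_cookie(api_login("qa", "secret")))
#   driver.get(BASE_URL + "/dashboard")       # already logged in
```

Every UI step skipped this way is one fewer set of web elements that can break the test.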
The same complaints with no answers all over again.
On a global scale, these questions are part of a bigger problem which many companies face today, regardless of their size or field of business. This talk will show participants a practical way to deal with it, using the following structure.
Participants will leave motivated and with the know-how to motivate others.
Breaking monolithic applications into microservices brought new challenges to QA teams. Since then, it's not enough to just run tests; now you have to invest in the operational costs of test execution.
To avoid wasting build-server time, a QA framework should fail fast if the deployment of microservices is insufficient, or becomes insufficient during test execution. Since not every microservice is integrated with the others, it is not worth executing all tests when a single microservice changes; the QA framework has to dynamically choose which tests to execute based on the changed service. I believe that mocked services in a microservice architecture can be not just request-response mocks, but fake services with custom logic. I will also describe the happiness and pain of Swagger in real, wide usage, and present an overview of the architecture of my QA framework with some useful features. All examples and solutions will be accompanied by code snippets.
This talk will cover:
Too often when discussing test automation we focus on how time-consuming it is to set up. Indeed, there is a significant effort required to implement a framework, add test cases, and maintain all of that as requirements evolve. However, a very important component remains overlooked: the daily routine of monitoring test results, including detecting defects in both the automation framework and the product, logging them in a tracking system, and removing the false positives caused by known issues. On a practical level, this presentation will explore ways to make the framework familiar with the defects and then “teach” it to get the boring work done.
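One way such "teaching" can start (a sketch, not the presenter's actual approach) is a registry that matches failure signatures against known open tickets; the ticket IDs and patterns below are invented:

```python
import re

# Invented registry: regex over the failure message -> open tracking-system ticket.
KNOWN_ISSUES = {
    r"TimeoutError: /checkout": "SHOP-1234",
    r"500 Internal Server Error: /search": "SHOP-1301",
}

def triage(failure_message):
    """Return the known-issue ticket for a failure, or None if it needs a human."""
    for pattern, ticket in KNOWN_ISSUES.items():
        if re.search(pattern, failure_message):
            return ticket
    return None
```

Failures that map to a ticket can be annotated automatically in the report; only the `None` cases land on someone's desk.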
The performance engineering team at GoDaddy has used many different tools and methods to prevent application performance regression, ranging from full-blown (load-)testing in test labs to server-side application performance monitoring to Real User Monitoring (RUM). Despite all of these efforts, the data keeps revealing that slow applications end up going live in production. The team realised that they had to somehow insert a safeguard right into the build process of the applications.
The cicd-perf-api webservice has been successfully implemented by many test teams at GoDaddy and we feel it is beneficial to anybody who is interested in adding performance assertion to their regular test cycles. This talk will explain the concept of the webservice and includes live demos of the webservice and the resulting data in a Kibana dashboard.
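The core idea of a performance assertion inside the build can be reduced to comparing measured metrics against budgets; the metric names and budgets below are invented, and the real cicd-perf-api webservice will differ:

```python
def perf_gate(metrics, budgets):
    """Compare measured metrics (ms) against budgets; a non-empty result fails the build."""
    return {name: value
            for name, value in metrics.items()
            if name in budgets and value > budgets[name]}

# Example run: only visuallyComplete blows its budget here.
violations = perf_gate(
    {"ttfb": 180, "domInteractive": 900, "visuallyComplete": 3400},
    {"ttfb": 500, "visuallyComplete": 3000},
)
```

Wiring a check like this into CI is what turns "the data keeps revealing slow applications in production" into a failed build before release.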
The use of Multi-Factor Authentication is becoming more and more common online, especially in E-commerce. I believe that a true end-to-end monitoring system should be able to cover MFA steps without special tweaks.
This talk will describe the 3 most common methods used today to implement MFA:
After understanding the differences between the above methods, we'll walk through one way to automate each form of MFA. While SMS and TOTP are relatively easy to automate, automating phone calls and speech-to-text is more complicated. In order to address that challenge, this talk will introduce a new technology: Asterisk - an open-source telecommunications engine.
The talk will feature 3 live demos, one for automating each MFA form:
All the demos and code samples (including a dedicated Asterisk Dockerfile with the relevant configuration) will be open-sourced before the conference starts.
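TOTP is indeed the easy one to automate: it is fully specified by RFC 6238, so a sketch needs nothing beyond the standard library. The secret below is the RFC's own test key, not a production credential:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, digits=6, period=30, at=None):
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, dynamically truncated."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // period)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: at t=59s the 8-digit code for this key is 94287082.
RFC_KEY = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
```

A test can call `totp(shared_secret)` and type the result into the MFA prompt with WebDriver; SMS and voice, as the talk notes, need more machinery.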
Liberal arts, humanities, and critical thinking subjects are dying in education. WHY? Today, so much emphasis is put on Science, Technology, Engineering, and Math (STEM) in education, but I think that's excluding a critical part of learning! Let's explore the most utilised skills essential to my career in testing, all a result of my Liberal Arts background. Join me as I dive into why defunding these areas is detrimental to the automation field. We’ll go over each role I've encountered to investigate skills they use most. How can we apply these skills to our teams? What do we do when someone doesn't have that background? Let's take a look at how the humanities can help shape the future of automation. Don’t let Liberal Arts die - embrace them, and see the value it adds to your team.
At Facebook we use end-to-end tests to ensure the quality of our website and suite of apps. Most of those tests have their roots in WebDriver; however, at that scale, with multiple people writing and maintaining tests, some issues with WebDriver emerged. Tests were becoming long and hard to read; flaky due to explicit waiting and stale element references, especially with React and highly dynamic UIs; and hard to maintain, since the tests were very raw (i.e., get this select, click it, etc.). The intention of the tests wasn't clear.
Facebook's E2E Automation team in London built a framework around WebDriver, inspired by the concept of WebDriver page objects and that of React components, to make end-to-end tests declarative, stable, and maintainable. By declarative, we mean making tests easier to read and write: a test should represent what it's testing, and a person should be able to figure out what the test is doing in a descriptive way. The framework also facilitates stability by inherently dealing with waits and stale elements. You describe what you expect on the screen, and the framework will wait for it.
Finally, through the notion of components, which represent pieces of UI, you can reuse them across pages and tests to assert on their properties or interact with them. You essentially never deal directly with web elements; you describe your interactions in terms of components.
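A toy illustration of the idea (this is not Facebook's actual framework): a component describes what should be on screen, and the lookup itself waits and re-resolves the element on every access, sidestepping staleness. The `driver.find` adapter returning `None` when nothing matches is an assumption of the sketch:

```python
import time

class Component:
    """A declarative piece of UI: tests say what they expect, not how to poll for it."""
    def __init__(self, driver, locator, timeout=5.0, poll=0.1):
        self.driver, self.locator = driver, locator
        self.timeout, self.poll = timeout, poll

    def _resolve(self):
        # Re-find on every access so a re-rendered (stale) element is never reused.
        deadline = time.monotonic() + self.timeout
        while True:
            element = self.driver.find(self.locator)  # hypothetical adapter over find_element
            if element is not None:
                return element
            if time.monotonic() > deadline:
                raise TimeoutError(f"nothing matched {self.locator!r}")
            time.sleep(self.poll)

    def text(self):
        return self._resolve().text
```

A test then reads `assert Greeting(driver).text() == "Welcome"` with no explicit waits anywhere in the test body.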
More details coming soon!
Most companies are moving into Continuous Delivery and DevOps model, with quick delivery cycles and Continuous Integration tools being implemented. To match this speed a fast feedback mechanism, continuous (automated) testing, is used to validate the build. But why just validate functionality? Let's also validate the accessibility of an app to check if the 20% of the population with impairments (approx. 1 billion people globally) are catered for.
In this talk:
Recently, Amplify Education transitioned from a traditional data centre to utilise Amazon Web Services for its hosting needs. As part of this transition, my team had to handle the movement of our Selenium Grid infrastructure to the cloud. This presented a number of challenges but also yielded many rewards. We moved from a static, perpetually undersized Selenium Grid to one that dynamically resizes to fit our needs.
Our ultimate solution utilises a number of AWS Services (EC2/Lambda/DynamoDB) to achieve a dynamically sizing Selenium Grid. We wrote an autoscaling service in Lambda using metrics from Datadog and wrote a proxy service in Lambda for multiple Selenium Grid Hubs that handles both traditional Selenium Grid Hubs as well as seamlessly sending sessions to Sauce Labs. Finally, we took advantage of AWS EC2 Spot Instances to significantly reduce costs via Spotinst. The combination of Lambda and Spotinst has significantly reduced the costs of running our Selenium Grid infrastructure while at the same time making our Selenium Grid more capable by allowing us to run more powerful instances when we need them.
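The scaling decision at the heart of such an autoscaling service can be sketched as a pure function; the sessions-per-node capacity and the floor/ceiling bounds below are invented, not Amplify's actual numbers:

```python
import math

def desired_nodes(queued, running, per_node=5, floor=2, ceiling=40):
    """Size the grid for current session demand, clamped to cost/safety bounds."""
    wanted = math.ceil((queued + running) / per_node)
    return max(floor, min(ceiling, wanted))
```

A Lambda on a timer can read the queued/running session counts from the hub (or from Datadog), call a function like this, and adjust the spot-instance group accordingly.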
Documentation is often treated as an afterthought and an undesirable part of the creative process, but it doesn’t have to be. Documentation can be an effective and efficient method for you to tell your story in a comprehensive and engaging way. What the Doc?!?! will help both the writer and reader understand the value of documentation and the role it, consciously or unconsciously, plays in communicating your story to the world.
In this talk, Kim will cover various aspects of the documentation process:
What is Web Analytics and why is it important? We'll walk through techniques for manually testing your data and automating the validation process.
Just knowing about Web Analytics is not sufficient for business now. There are new kids in town - IoT and Big Data - two of the most used and well-known buzzwords in the software industry! With a creative mindset looking for opportunities to add value, the possibilities for IoT are infinite. With each such opportunity, there's a huge volume of data being generated which, if analysed and used correctly, can feed into creating more opportunities and increased value propositions.
There are 2 types of analysis that one needs to think about:
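Automating the validation step can start as simply as checking each captured analytics event against an expected specification. A minimal sketch, with invented field names (a real harness would capture events from the page's data layer via WebDriver or a proxy):

```python
def validate_event(event, spec):
    """Return human-readable problems; an empty list means the event matches the spec."""
    problems = []
    for field, expected in spec.items():
        if field not in event:
            problems.append(f"missing field: {field}")
        elif expected is not None and event[field] != expected:
            problems.append(f"{field}: got {event[field]!r}, expected {expected!r}")
    return problems

# A None value in the spec means "must be present, any value is fine".
```

Running every captured event through a validator like this replaces eyeballing network calls in the browser's developer tools.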
Learn to optimise API performance scripts using Taurus. This talk will cover:
We will walk through our journey of writing a script for an enterprise application and how it helped us identify API bottlenecks. Reporting gave a clear picture of the business, helped map results back to API SLAs, and helped measure and fine-tune the application's performance during peak hours. Key takeaways: the audience will be able to write a script using Taurus and get to know BlazeMeter reporting.
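For orientation, a minimal Taurus scenario has roughly this shape; the URL, load figures, and assertion are placeholders, not the enterprise application from the talk:

```yaml
execution:
- concurrency: 25        # virtual users
  ramp-up: 1m
  hold-for: 5m
  scenario: api-baseline

scenarios:
  api-baseline:
    requests:
    - url: https://api.example.test/v1/orders   # placeholder endpoint
      method: GET
      assert:
      - subject: http-code
        contains: [200]

reporting:
- module: blazemeter     # streams results to a BlazeMeter dashboard
```

Running `bzt config.yml` executes the scenario with Taurus's default JMeter executor and pushes the results to the configured report.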
Almost 13 years ago we started with Selenium to automate websites. With Appium we generalised that concept to mobile, and just recently entered the Windows and Mac space by adding Windows and macOS drivers to the Appium family. Let’s continue our StarDriver quest and enter a (not quite) new sphere: the television. In recent years, a new standard called Hybrid Broadcast Broadband TV (HbbTV) has evolved, with which the latest generation of Smart TVs is equipped. This standard allows broadcasters to build web apps for their broadcast channels to provide additional context information alongside the TV stream or videos on demand.
This talk will introduce a new driver to Appium that allows running automated tests based on the WebDriver protocol against HbbTV apps on Smart TVs. It will explain not only how this driver works but also how other drivers in the Selenium and Appium world do their job in general. We will look into the challenges that automating an app on a TV device brings and talk about how anyone can build a driver for anything.
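At its core, every driver in the Selenium/Appium family is an HTTP service that maps WebDriver protocol commands onto platform-specific actions. A toy routing table (a stub for illustration, not the real HbbTV driver) shows the shape:

```python
class HbbTvDriverStub:
    """Toy driver core: route (method, path) protocol commands to handlers."""
    def __init__(self):
        self.routes = {
            ("POST", "/session"): self.create_session,
            ("GET", "/status"): self.status,
        }

    def status(self, body=None):
        # Real drivers report readiness of the underlying device/browser here.
        return {"ready": True, "message": "stub driver"}

    def create_session(self, body):
        # Real drivers would launch the app on the TV and negotiate capabilities.
        return {"sessionId": "1", "capabilities": (body or {}).get("capabilities", {})}

    def handle(self, method, path, body=None):
        return self.routes[(method, path)](body)
```

Everything a client like Selenium sends, from clicks to screenshots, travels through a dispatcher of exactly this kind; implementing a new driver means filling in the handlers for a new platform.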
Similar to attending regular checkups to prevent an ER visit, a successful partnership between testing and UX teams is preventative medicine for building great software products. Beyond getting out of the building and talking to users, some of the most valuable insights you can gather come from beta testing: putting a prototype in a user's hand and learning more about how they use it over time.
In this talk, we'll cover the basics for running your first beta testing program as well as two detailed case studies (launching the FiscalNote iOS app and the beta version of the Real Talk app):
Test automation folklore is full of horror stories of failed attempts to apply record-playback tools to perform UI-based functional testing. In this talk we’ll take an objective look at record-playback tools and compare them with programming-based automation tools in order to evaluate their applicability to visual test automation. We will show that record-playback tools are very effective as visual testing drivers and implement a visual test for a responsive website using Selenium Builder without writing a single line of code.
We've come to realise that automation provides an immense amount of value in preventing regressions and helping to deliver quality software. As your automation grows and grows, it requires continuous maintenance so that tests remain fast, reliable, and valuable. If you're not scaling efficiently, your automation suite will turn into a messy, uncontrollable beast. Having a lean test suite will help to combat this.
In this session, Meaghan will present methods that you can use to keep your automated test suites lean and mean, so they always provide quick and accurate feedback to your software delivery team. Using a few examples, we'll discuss a wide range of ideas including evaluating a test's value, parallelizing tests, and producing consistent results!
Session attendees will walk away with strategies and practices to scale their test automation over time in a highly efficient and maintainable way.
We're automating web application testing using Selenium WebDriver. It's easy to get started with automated tests but harder to maintain an automated test system. The entropy increases with time and different developers/testers; your once beautifully crafted test code may end up unrecognisable.
The cure for this is to apply the Four Rules of Simple Design by Kent Beck while you maintain the tests.
Thomas will take a test suite for a web application and, by following the Four Rules of Simple Design, transform it step by step into something that is easier to maintain.
This will be a live coding session with a lot of refactoring. All steps will be small and you will, therefore, be able to follow even if your main profession isn’t writing code. After this session, you will know that even the most horrible test code is possible to clean up just by slowly transforming the code in small steps.
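A tiny before/after in the spirit of the session, applying rule 2 (reveal intent) and rule 3 (remove duplication) to an imagined snippet of test code; the `fill`/`click` wrappers are hypothetical helpers, not Selenium's API:

```python
# Before: every test repeated the raw recipe, burying the intent.
#   driver.find_element(By.ID, "user").send_keys("anna")
#   driver.find_element(By.ID, "pass").send_keys("s3cret")
#   driver.find_element(By.ID, "submit").click()

# After: one intent-revealing helper; tests now read as *what*, not *how*.
def log_in(driver, user, password):
    """Single place to change when the login form changes."""
    driver.fill("user", user)      # hypothetical thin wrappers over find/send_keys
    driver.fill("pass", password)
    driver.click("submit")
```

Each such small step is safe on its own, which is why even horrible test code can be cleaned up gradually.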
You could also use a robotics challenge to engage your team around testing ideas. Sam shares her insights from running multiple testing challenges that engaged over 100 software engineers. From a lunch time robotics challenge to a company-wide bug bash, Sam has run many events that help enhance a company's testing culture.
Become a sponsor of SeleniumConf Berlin 2017. Email us to request a copy of our sponsor pack.