Today my manager Jay is leaving. It was a massive shock when I found out. As he’s the CTO I had two thoughts:
“Oh no, I’m sad now”
“Is the company in trouble?”
He's actually making the decision to go purely because he wants a new challenge, which is completely understandable, and I wish him the best of luck.
He has been the best manager I have had so far because he’s been supportive, encouraging, and an ear when I need to moan (which can happen often, as I am very dedicated to delivering a great product). He also doesn’t just tell me things I want to hear (which is pretty rare in people). I get honest answers and opinions.
Being a woman in tech, I didn’t have many people to look up to in order to steer me in the right direction, getting me to where I am now. I have done it all by myself (which makes me proud now that I think about it) but it’s been exhausting.
I haven’t had someone else to pass on their wisdom, which undoubtedly would have meant fewer mistakes along the way. But from the beginning, Jay has given advice, encouraged me to research everything that could improve my team and the area I work in for the company, given me autonomy, let me attend the events I want (through which I have met some great people), and sent me Twitter links to awesome women in tech (like @TheAmyCode) for further encouragement. In general, he has made me feel more confident about being in this role, and that I deserve this role because I am awesome.
Find A Career Mentor
It’s important to find people like this to guide you through your career, because whether you’re male or female, careers are hard and a little guidance goes a long, long way.
If you do find these people, make sure that it’s not just a one-sided relationship. Make sure there’s something you can give them too, whether it’s something small like being self-sufficient so they don’t need to worry about you, or something tangible that could help make their lives easier or better.
So, bye Jay. See you soon, it’s been great working with you and remember BBQ!
This post talks about the ways you can make sure that you stay employable by learning about different areas of the tech industry.
Earlier this year, I applied for a scholarship to use the A Cloud Guru video tutorials platform, and luckily I was selected. Since I use the AWS cloud computing platform daily at my current job, it was a great opportunity to learn more about it. So this year, I intend to future-proof myself by learning about cloud computing.
For those that don’t know, Appium is a mobile testing framework built upon WebDriver technology. It was created in 2013 by Dan Cuellar, then a Test Manager at Zoosk, who was finding that the length of the test passes on the iOS product was getting out of hand. Appium allowed him to write automated tests for iOS as easily as WebDriver was used for automating websites.
Five years later, Appium has a massive community building a successful open source project that can be used on Android, iOS, and Windows in a variety of programming languages.
This year was the first Appium Conference.
There was only one track which was great because I got to see everything and didn’t have to pick between two talks that were probably going to be beneficial to me.
Interesting Things to Note
There was a lot of variety of languages being used with Appium (part of the appeal of using the product I suspect).
Lots of people were using Jenkins with it as their continuous integration/continuous delivery tool. There were a couple of mentions of CircleCI but none of TeamCity. Because of this, I think I’ll be looking into Jenkins more than TeamCity for my app projects.
All the talks were highly interesting and not too difficult to follow for a beginner like me. So I want to share my biggest takeaways from each of the talks.
Keynote – Appium: The Untold Story
The day began with a keynote from Dan Cuellar and Jason Huggins. They spoke about the history of Appium and where it is now, briefly touched upon where they want it to be in the future, and mentioned their vision of StarDriver.
They want to see Appium grow its user base and the platforms it can test, particularly into the internet of things and various hardware.
The best takeaway for me from this keynote was the phrase “Challenge everything you see”.
Appium: A Better way to Play the Game
This first talk was given by Charlene Granadosin & Charlotte Bersamin. What I found interesting within their talk was how they were integrating their release and exit reports within Jira using Xray. They used a curl command to upload their latest test results to Jira so these results are clearly visible to the Product Owners or Managers of the team.
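The idea of pushing results into Jira programmatically is easy to sketch. Below is a minimal, hypothetical version of that upload in Python using only the standard library; the Xray endpoint path, auth scheme, and payload type are assumptions to verify against your own Jira/Xray version's REST documentation.

```python
import urllib.request

def xray_import_url(jira_base_url: str) -> str:
    # Endpoint path assumed for Xray's JUnit import API -- verify against your Xray docs.
    return jira_base_url.rstrip("/") + "/rest/raven/1.0/import/execution/junit"

def build_upload_request(jira_base_url: str, token: str, junit_xml: bytes) -> urllib.request.Request:
    # Wrap the latest JUnit XML results in a POST so they show up against the Jira issue.
    return urllib.request.Request(
        xray_import_url(jira_base_url),
        data=junit_xml,
        headers={
            "Authorization": "Bearer " + token,
            "Content-Type": "application/xml",
        },
        method="POST",
    )

# Sending is then one call (needs a reachable Jira instance and a valid token):
# urllib.request.urlopen(build_upload_request("https://jira.example.com", token, xml_bytes))
```

Hooking a call like this into the end of a CI test run is what keeps the reports visible to Product Owners without any manual steps.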
My biggest takeaway from this talk was to investigate whether the tools we currently use for test case management can integrate with Jira to give such detailed reports, and to try and get the automation up and running.
Deep Hacking Appium for Fun and Profit
Daniel Puterman‘s talk explained how he had contributed to the Appium project by creating a new endpoint to gather native application screenshots.
Because the company Daniel worked for was Applitools, my biggest takeaway from this was to figure out whether visual testing tools would be as useful for testing virtual reality (VR) applications as they are for websites or mobile applications.
Why the h# should I use Appium with ReactNative?
Wim Selles delved into a comparison of mobile automation frameworks and explained why his team chose Appium (which was extremely useful, as I’ve also been debating which automation tool to use for mobile apps).
Out of all the frameworks he mentioned, they went with Appium because it fit a lot of their requirements for testing ReactNative apps.
There were a lot of takeaways for me from this talk.
Consider your own project requirements when you pick your automation tools
What are your requirements?
What should your app do (now/future)?
Which tool supports your needs/expectations?
Do a proof of concept test
Do research into competitive tools
He also gave a couple of good ways that you can speed up app testing:
Remove animations on screens
Utilise deep-linking to get directly to screens
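Deep-linking means navigating straight to a screen via a URI instead of tapping through the UI. A tiny sketch of the idea in Python (the `myapp://` scheme, screen names, and parameters here are purely hypothetical):

```python
def deep_link(scheme: str, screen: str, **params) -> str:
    """Build a deep-link URI like myapp://checkout?item=42."""
    query = "&".join(f"{key}={value}" for key, value in params.items())
    return f"{scheme}://{screen}" + (f"?{query}" if query else "")

# In an Appium session, opening the link skips all the navigation steps:
#   driver.get(deep_link("myapp", "checkout", item=42))
```

Skipping the login-and-navigate preamble for every test is where most of the speed-up comes from.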
Layout Automation Testing (GUI)
Prachi Nagpal explained how she was using Galen to perform browser-based UI testing for mobile and desktop devices. Galen does this by measuring the distance between the elements being tested. You can also produce heatmap results from this tool.
It was a good talk and interesting to see a tool that I had never heard about.
Can you please provide the full Appium Server logs? A Brief Tour of the Logs
Isaac Murchie next walked us through the Appium logs. The takeaway here was that some lines are bolded to highlight their importance.
He also showed how to recognise which lines in the logs are requests and which are responses.
Interaction with Native Components in Real Devices
In his talk, Telmo Cardoso told us how he tested native components of mobile operating systems. He explained the challenges that he faced (some tasks were difficult on one platform but easier on the other) and ways he and his team had got around them.
The areas he found challenging were:
Pushing files to a device
Simulating low battery
The biggest takeaway from this talk was that he successfully used Cucumber as his automation framework along with Appium to test the native applications and features of smartphones, not just the applications running on them.
Using Appium for Unity Games and Apps
Because of my daily work with Unity projects, I was particularly looking forward to Ru Cindrea‘s talk on how she used Appium for her Unity games and apps.
She first explained how she used OpenCV, an image recognition library, with Appium to try and test her Unity games.
The positives were:
Works for simple scenarios
No changes to game required
Found real issues, like performance problems and out-of-memory crashes
The negatives were:
Wasn’t fast enough
Not for games with lots of text
So she decided to create a component called AltUnityTester to help her with her issues.
AltUnityTester comes with Python bindings. It opens a socket connection and waits for commands on a specific port.
When the AltDriver is added into the Appium project, it retrieves a list of the Unity scene’s objects and knows everything about them. It can then send commands to that port to get information back from the scene to perform tests, e.g. checking the end position of elements or text output.
This solution is useful because it’s real-time but it does require changes to the project and it only works with Unity.
So my biggest takeaway was to investigate whether this AltUnityTester could be extended or something similar made in order to test VR applications using Unity.
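The socket round-trip described above is simple to sketch. This is not the real AltUnityTester protocol; the command name, message format, and port handling are stand-ins just to show the shape of the exchange, with a fake in-process "scene server" playing the part of the component inside the game:

```python
import json
import socket
import threading

# Stand-in for the socket that the in-game test component listens on.
server = socket.socket()
server.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def fake_scene_server():
    conn, _ = server.accept()
    command = conn.recv(1024).decode().strip()
    if command == "findAllObjects":    # hypothetical command name
        scene = [{"name": "Player", "x": 0.0, "y": 1.5}]
        conn.sendall(json.dumps(scene).encode())
    conn.close()

threading.Thread(target=fake_scene_server, daemon=True).start()

def query_scene(command="findAllObjects"):
    # Open the socket, send a command, and parse the scene data that comes back.
    with socket.create_connection(("127.0.0.1", port), timeout=5) as sock:
        sock.sendall((command + "\n").encode())
        return json.loads(sock.recv(65536).decode())

objects = query_scene()  # e.g. assert on an object's name or end position
```

The real component answers with live scene data, which is why the approach is real-time, but it also shows why the game project itself has to be modified to embed the listener.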
Docker-Android: Open-source UI Test Infrastructure for Mobile Website and Android
Budi Utomo next talked about his Docker-Android image to test Android projects and websites on Android devices.
His plan for project development was to:
Create UI tests for Android devices
Write unit tests on Android
Create UI tests on Android apps
Implement Monkey/Stress tests
The biggest takeaway was the demo that showed how Appium can be used easily within Docker containers.
Application Backdoor via Appium
Rajdeep Varma explained how you can use Appium scripts to call development code methods from test code.
He used Appium in this way because he was having a number of problems when trying to write tests:
System pop-ups were called and were not needed when running tests
Driver limitations, e.g. mocking that the device has been shaken, or changing time limits
Tests were slow to run
These are the ways he is using backdoors, and where he thinks they could also be used:
Changing backend URLs
Changing app locale
Getting session ids from the app
Disabling “What’s new” pop-ups
Disabling client side A/B tests
Faking SIM card for payments
Getting analytics data from the app to validate it
The biggest takeaway from this talk was to be careful not to use backdoors for every test case, or to call incorrect methods just to make tests pass.
Mobile Peer 2 Peer Communication Testing
Canberk Akduygu gave us a talk about his challenges when automating the BIP app (it’s like the Turkish Whatsapp).
He was building an extended grid solution to switch to the right version of Appium and set the desired capabilities within their testing framework according to properties set in a JSON config file.
His demo was the biggest takeaway: it showed two phones messaging and even calling each other. It was one scenario running different steps on each device, and it showed that the test steps needed to be synchronised.
From a Software Tester to an Entrepreneur: What I’ve Learned
Kristel Kruustuk next came on stage and walked us through why she founded Testlio, and her struggles with the company despite it being so successful and growing at a fast pace since she began.
My takeaway from this talk was to investigate and get in touch with the Testlio team and see if they had any future plans for expanding from manual and automation testing into VR testing.
Appium: The Next Five Years
Jonathan Lipps gave the final talk and began again with the history of Appium but then spoke more in depth about the vision of the product over the next five years and hopeful milestones. Some of these things were:
Node.js base classes and libraries for easily writing new drivers
Lastly, we were treated to a performance featuring Jonathan Lipps, Appium and Selenium. I believe four or five instruments were being played by Appium, Selenium was outputting the lyrics and Jonathan was singing and playing the ukulele.
My biggest takeaway from this is that Appium can be used in a variety of ways to perform a number of impressive tasks.
The after party was held at a bar a short walk from St Paul’s Cathedral. There I managed to talk to Ru Cindrea in more detail about the project I wanted to use AltUnityTester for, and whether she thought it would work. I also managed to talk the ears off both Charlene and Charlotte, the speakers from the very first talk of the day.
I had a great day, met loads of wonderful people (including attendees and speakers) and I hope that next year, I’ll have begun using Appium for something that I do so I can share my own experiences with the community.
It seems that containers are the new technology within IT that everyone is trying to incorporate into their infrastructure. And why? Containers not only benefit those in DevOps, but have positive implications for all teams involved in product delivery.
At Immerse, we’ve recently moved our infrastructure to a containerised solution. It’s too early to analyse the impact this has had on our development teams, but I thought it would be a good opportunity to deepen my understanding of the technology.
What Are Containers?
Containers give us the ability to run a number of different systems virtually within the same location. This means that if an environment needs a website front-end, a server, and a database in order to function, all of these can live in the same place.
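As a concrete (and purely hypothetical) illustration, a Docker Compose file can declare that whole front-end/server/database environment together; the images, versions, and ports below are illustrative only:

```yaml
# docker-compose.yml -- one file describing three cooperating containers,
# started together with a single `docker-compose up`.
version: "3"
services:
  web:
    image: nginx:alpine        # serves the front-end
    ports:
      - "8080:80"
  api:
    image: node:8-alpine       # runs the application server
    depends_on:
      - db
  db:
    image: postgres:10         # backing database
```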
What sort of systems can be run within containers? Well, that’s where container images come in.
What Are Images?
A container image is a stand-alone, executable package of software that has everything needed to run it, including code, runtime, system tools, system libraries, and settings.
An image is required in order to build a container, otherwise it will be empty when created.
So where did all this new tech come from? It seems like it’s come out of nowhere but spread fast (kinda like Bitcoin, right?). Well, it was all started by a small company now called Docker Inc. They created Docker, the system that containers run on.
What is Docker?
Docker is a computer program that allows you to perform operating-system-level virtualization, known as containerization: the creation of containers.
Docker allows independent containers to run within a single Linux instance. This reduces the overhead of starting and maintaining virtual machines (VMs).
Since Docker began, other tools have been developed that can perform containerization.
The World Before Containers
Before we used containers, there were virtual machines (VMs). VMs remove the need for dedicated physical hardware and allow one server to be turned into multiple servers. All of this is possible because of a hypervisor.
A hypervisor (also considered a VM monitor), is software that creates and runs VMs. It is the reason why you can run many VMs on a single machine. Each VM will have a full copy of the required operating system, one or more applications and the needed binaries and libraries. All of this can take up tens of gigabytes of space!
Some companies have made the switch to containers from VMs because:
VMs can be slow to boot, while you can spin up a container in moments, providing you have the right image. And if you don’t, obtaining one usually only takes a few minutes.
You can pack a lot more of your company’s applications into a single physical server using containers than you can fit with VMs.
VMs take up a lot of system resources, as they run not only a full copy of an operating system, but also a virtual copy of all the hardware that the operating system needs. All a container needs is enough of an operating system, supporting programs and libraries, and system resources to run a specific program.
With containers you can create a portable, consistent operating environment for development, testing, and deployment.
So How Do You Manage Containers?
Container orchestration frameworks are used to integrate and manage containers. They’re not necessary for everyone using containers; usually enterprise-level organisations are more likely to use orchestration tools, as they manage a large range of containers and their images. Examples of these tools are Kubernetes, ECS, and Ansible.
These tools help to simplify container management from the initial deployment to managing multiple containers, scaling for load, availability, or networking.
The Benefits and Drawbacks
Like any new piece of technology or tool, containers have their own list of benefits and drawbacks that will determine whether you integrate them into your development pipeline. So, what do containers offer?
The ability to spin up whole environments consisting of all the systems you need within minutes.
The ability to change the configuration and deploy those changes quickly.
Containers allow all users of the system to be self-contained. This means that the rogue developer who builds a new feature, doesn’t run the tests locally, pushes to the test environment, and then leaves to go home, only for QA to find that the test environment is broken, is no longer a daily issue (wow, that sounded like a rant!).
Features can be tested in isolation easily.
Testers can be in control of checking out and pushing features to the test environment.
Differences between environments are no longer an issue, because they’re all spun up from the same set of stable images.
Initially, there may be some complexity to setting up containers.
There are some security issues that you need to be aware of when using containers. For example, if a user or application has superuser privileges within the container, the underlying operating system could, in theory, be cracked.
It’s time-intensive to set up decent security measures for containers. There’s no default, out-of-the-box solution yet.
Everyone is making container images and it could be easy to download something malicious into your system.
Breaking deployments into more functional discrete parts is smart, but that means we have more parts to manage. There’s an inflection point between separation of concerns and container sprawl!
Containers tend to lock you into a particular operating system version.
Are containers the future of development?
Given the fast adoption of containers, it seems they may eventually replace VMs once their issues have been overcome. Because it’s a new technology, there will be drawbacks at this early stage, so don’t let these deter you from experimenting with the tech yourself on your own projects.
However, technology has changed extremely fast over the last 30 years, so it may be that containers are superseded by a new emerging technology that solves the drawbacks of containers and gives us a whole load of other benefits too.
For more information on containers, especially if you’re learning the basics, please check out the Docker videos by Nigel Poulton on Pluralsight. I found these videos extremely helpful in delivering information and background about a brand new technology. The concepts were also broken down into easy-to-understand topics, which is perfect for beginners. After watching them, I understood a lot more and felt more confident speaking to the DevOps team at Immerse about how they had implemented containers and why they made the decisions they did.
What’s your experience with containers? Do you love them? Are they growing on you? Or, have you not yet made the leap into using them? Whatever your experience, I hope this article has given you a better insight into the background of containers.
For more information about containers, please feel free to view the references I used:
Last year, the hype around cryptocurrencies and the word Blockchain was everywhere. Friends and family members were all jumping in and investing, even when they hadn’t previously shown any interest in the stock market. I’ve been following the ups and downs of certain stocks for a couple of years now, but even I was intrigued.
More than the urge to invest, I was interested in how the technology these currencies are built upon was going to affect me as a QA Test Professional. After all, someone’s got to test the software behind it all. So I decided to delve into Blockchain, the technology behind cryptocurrencies, and look into what skills I’d need to test these types of applications.
Now that my QA team are adding to the test coverage by writing integration tests directly into the project code base, it’s finally time that I start to embrace rebasing (I sort of feel that lightning should crack when you read rebasing)!
I survive on the basics of Git, and because Irina and I usually only push to our own repos, there’s not much activity going on apart from branching and merging back to master. But once you work on a repo where at least two people are active, merging their code into master and creating branches weekly, your local changes can fall behind quite quickly. This is where rebasing comes in.
Rebasing essentially rewinds your branch back to the point where you created it from master (if that’s where you branched from), moves that point forward to the tip of master, and then replays your own commits on top. If there are conflicts between the commits on the master branch and yours, at each conflicting commit you can either:
fix them, then run
[sourcecode language="bash"]git rebase --continue[/sourcecode]
, or you can keep the other branch’s changes and drop your conflicting commit with
[sourcecode language="bash"]git rebase --skip[/sourcecode]
The concept is quite clear to me (now), but what I struggled with was doing this all on the command line. I know I could use a tool like SourceTree to do all the heavy lifting for me, but despite being quite a visual person, I actually like using the command line for Git. So I vowed that this will be the time to learn, and to do it often enough that it’s solidified in my brain.
So after being walked through the process, I learned that these are the steps you need in order to rebase master onto your own branch:
git checkout master
git pull
git checkout MyBranchName
git rebase master
git rebase --continue (after you fix each conflict) or git rebase --skip (to skip those changes)
So those are the steps: pretty simple, and after a couple of times I reckon I’ll be able to do this from memory… providing the conflicts are few and far between!
The steps here are what I found works, but if you spot anything wrong, don’t be afraid to let me know.
I was brought into my current role at Immerse.io to uphold a high level of quality for the company’s awesome products. But where to start?
After I put together my test strategy I needed to know how much test coverage was currently in place.
Plenty of manual testing was taking place, but it needed refining, as not everything needed to be tested manually. Also, no test cases for the new system were documented or being run, so there was no way to know how much had been tested or which tests were last run, let alone what their results were.
With no test cases to look at, I turned my attention to the developers and their unit tests. The good news was that they had unit tests; the bad news was that they didn’t know how many. There were no code coverage tools integrated into their builds, so they couldn’t tell.
So I first turned my attention to increasing unit tests in the projects and adding code coverage tools. Once we had the code coverage levels, we could work to increase the code coverage of the project codebases. If our code coverage level was high, it would give us more confidence about the quality of the system at a low level. But how many test cases should we create?
With this question in mind, I started thinking how I could use the Test Pyramid concept and apply it across our products.
What is the Test Pyramid?
The Test Pyramid is a concept used to determine what type of tests, and how many of each type, should be used at different levels of your product’s development. There are usually three levels to the pyramid (although some depictions contain up to five). The image below is referenced from Martin Fowler’s blog post regarding the Test Pyramid.
As you go up the levels, there should be fewer tests. Also, as you progress up the pyramid, the tests become more complex because they involve connecting more than one component or system, so they take longer to run.
The Different Levels
The bottom level contains the majority of your tests. Unit tests are used for this level because they run fast, have a small focus, are easy to add to, and require little maintenance. Overall, they provide fast feedback on changes to the system at a low level.
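To make "small focus, fast feedback" concrete, here is a minimal sketch of a base-level unit test; the function under test is hypothetical:

```python
# A hypothetical function under test: one small, pure piece of behaviour.
def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# The unit test is tiny, isolated, and takes milliseconds -- exactly what
# the base of the pyramid should be made of.
def test_apply_discount():
    assert apply_discount(100.0, 20) == 80.0
    assert apply_discount(19.99, 0) == 19.99

test_apply_discount()
```

No browser, no database, no network: that isolation is what keeps this level cheap to run and easy to maintain.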
The middle level contains your integration tests and the top level is for UI tests. The tests in the middle and top levels tend to be more susceptible to breaking because they connect with a number of components and systems.
UI tests are particularly brittle because they rely on web page elements, which can change easily. The automation code is tightly coupled to the page elements, so if one is updated without the other, the tests break. And as these two codebases are usually maintained by different teams, sometimes the teams won’t remember to update one another, leading to exactly that situation. These tests usually require more maintenance than unit and integration tests.
Fix issues as early as possible
The Test Pyramid supports the idea that a lot of thorough testing at the low levels and early phases of development helps to prevent bugs reaching the production environment.
Catching issues in your product during the later stages, when the project is completed and ready for consumers, is costly and can be more time-consuming to fix. Catching issues before they become defects, i.e. production bugs, during the early phases of development is preferred because:
It involves fewer people to fix, so it can be presumed to be less time-consuming
There is no impact to the customers so it’s less costly to the business
In your project, try and do the following when focusing on building your base level:
You should try to do the majority of your testing during the earlier phases of development with tests that are the cheapest to run.
It’s important to make sure that the amount of unit tests in your project is large enough to ensure that issues are caught and fixed before shipping.
Make sure you monitor the level of code coverage you have in your project to make sure you have the right amount to reach your target level of quality that will provide you with confidence in your product.
Follow the Test Pyramid and see what results you get.
Integrating Code Coverage
In order to follow the Test Pyramid concept, we first needed to build up the amount of tests in the base level.
Now that tests were being added, we needed them integrated with the CI tools we were using. So for one team, I set about getting these running within TeamCity. Using the Unity command-line test runner, I managed to get this working when we were using Unity 5.4. Unfortunately, we found that this setup broke when we upgraded to Unity 5.6. Luckily, this has now been rectified in Unity 2017.2, and we have unit tests running in TeamCity on every build once again.
Unfortunately, there’s no way to measure code coverage on Unity projects, so we’re just going to have to be content with increasing the number of tests and making sure those tests are meaningful. I’ll continue to monitor this, though.
I next looked at getting a code coverage tool installed for the other team. Because of the tech stack and how simple it was to set up on a sample project, I chose Istanbul. It has now been set up in a CI tool that runs on every push, so all code is checked and a code coverage report is produced. This gives us an idea of where we need to focus our efforts to increase the number of unit tests.
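For reference, wiring Istanbul into a project can be as small as one npm script using nyc, Istanbul's command-line client (the choice of mocha as the test runner here is an assumption, not necessarily what our project uses):

```json
{
  "scripts": {
    "test": "nyc --reporter=text --reporter=lcov mocha"
  }
}
```

Running `npm test` then prints a coverage summary and writes an lcov report that the CI tool can pick up on every push.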
Our Next Steps
As the production of unit tests continues and the developers are actively doing this themselves, I’m turning my attention to the next level of the Test Pyramid: the integration tests.
Reading The Power of Habit by Charles Duhigg made me realise that people’s habits can be used to change behaviours for the good and, yes, sometimes the bad. These changes in behaviour can either positively or negatively impact your life.
So, in 2018, I’m going to try and do this for myself. Why not try and improve my development skills by building a new positive habit into my daily or weekly routine?
But building a habit is time consuming. It can also be difficult if what you’re trying to learn is brand new and interrupts other, more longstanding habits whether they’re good or bad.
Building a new positive habit into your routine is like when you first start going to the gym. It will take a conscious decision to keep it up in the beginning (it may hurt a bit too if what you’re doing is physical). The trick is to try and work the new behaviour into your routine little by little. Try and fit your new behaviour into times where it’s easy for you to implement.
Making sure your new behaviour fits into the SMARTER acronym will help you to get onto the right path building a new behaviour into a routine that will eventually become a habit.
If you’re unfamiliar with this well-known guideline for setting targets and goals, each of the letters stands for a particular criterion that the goal you set needs to meet.
The traditional acronym has always been just “SMART”; however, I recently came across this extended version. The addition of the “ER” is to make sure that you learn from the targets you set and make changes depending on the results. Retrospectives are now used a lot within development teams, as they’re a vital meeting in the widely used agile process of Scrum, so if you’re part of such an organisation you’re probably well acquainted with them: you analyse what you did well, what you didn’t do so well, and how to improve next time. That is essentially what these last two letters add to your targets, making them grow with you.
S is for Specific
Make sure you are clear and concise about your behaviour.
M is for Measurable
Your progress should be tangible. You should be able to clearly see how much you have progressed with quantifiable results.
A is for Achievable
You are only human with so much time, so ensure you can actually reach your target of the behaviour you want to incorporate.
R is for Realistic
Again, that silly little human aspect means that we can’t state that we’ll be able to learn a new programming language in a day because the scope is too broad and there’s just not enough time. Make sure you set yourself goals that are realistic.
T is for Time-bound
In order to measure your progress, it’s good to set a date to work towards. That way you can tell how much you have learned between a set amount of time.
E is for Evaluate
After you have measured your progress, look closer at your results and think why you’ve achieved those specific results. Questions like these will help you think more deeply:
What was the approach you took to learning your new behaviour?
How much time did you dedicate?
Was it the time of day that you chose?
How have these particular decisions affected your behaviour being adopted? Could small changes make it easier to adopt other behaviours, or even turn them into habits faster? Give this stage a bit of time to come to useful conclusions.
R is for Re-evaluate
The last stage is Re-evaluate. After analysing how your “behaviour to habit” building exercise has worked you must apply anything you have learned to your next behaviour to try and achieve the best results possible.
Start learning your new behaviour
As well as working to the SMARTER guide above, I would recommend you try and keep in mind the points below. This should help you turn your behaviour into a habit.
Limit your chances to be distracted
Set yourself up in environments where you’re less likely to procrastinate or get distracted. Unfortunately, due to the invention of the smartphone, we have a device capable of distracting or keeping us entertained most of the day. But at the times where you should be doing something productive, actively put your device on silent, hide it away or turn it off for the duration that you need to ensure you stay focused.
Make the sessions short
When you’re trying something new, you should first introduce it in small increments. Depending on what you’re trying to achieve, think of a sensible and small timescale to start with. Starting with small increments means it’s not so intimidating and will seem easier to accomplish. You are more likely to perform new activities if they’re in small chunks.
Ensure it can be done daily (or at least the majority of the week)
In order to build your new tasks into a habit, you must perform the action daily. It’s been said that it takes about 21 days for new behaviours to become habits, so performing the task every day will make sure you build the new habit as fast as possible.
Building a new positive habit is a challenging goal, but one well worth it. Good luck setting your 2018 goals!