There’s another breakdown: those who do back up divide further into subgroups.
As for the second group, everything depends on the assumptions made and the choice of tools. What may once have seemed a good way to store data can be exposed as insufficient when a security breach occurs. Protecting resources from intrusions, equipment failures, and catastrophes is crucial to preventing data loss.
How do you prepare data so it can be restored when something goes wrong?
The 3-2-1 rule is a desirable, yet hard-to-achieve approach in which data is thoroughly secured by various means: keep at least 3 copies of your data, store them on 2 different types of media, and keep 1 copy off-site.
The perfect setup

What storage types do we mean by “various means”?
Where can the backup storage be located?
*Not ideal, as the extensive fire at OVH’s data center in Strasbourg, France, which took millions of websites offline in March 2021, reminded us.
Protecting data from malicious actions and other unfortunate events doesn’t end with keeping it out of the hands of unauthorized users.
Further steps entail:
(This applies to private, professional, and business data of critical importance; it isn’t necessary for backup copies of, e.g., legally purchased music, movies, or games.)
As the name suggests, full-disk encryption applies to entire disks. Covering the whole asset is the easiest way to protect the data it carries.
Windows
BitLocker – built-in
VeraCrypt
macOS
FileVault2
Linux
VeraCrypt
Dm-crypt (LUKS)
7-zip – a file archiver with a high compression ratio
Cryptomator – great for encrypting files in the cloud
GnuPG – recommended for encrypting communication
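Whichever tool you pick, the underlying idea of file-level encryption is the same: encrypt files before they leave your machine. As a minimal illustration, independent of the tools above, here is a Python sketch using the cryptography package’s Fernet recipe; the file names are assumptions.

```python
from cryptography.fernet import Fernet

# Generate a key once and keep it somewhere safe (never next to the backups).
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt a backup archive before it leaves the machine.
with open("backup.tar.gz", "rb") as src:
    ciphertext = fernet.encrypt(src.read())
with open("backup.tar.gz.enc", "wb") as dst:
    dst.write(ciphertext)

# Later, the same key decrypts the archive.
plaintext = fernet.decrypt(ciphertext)
```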
Maximum security thanks to end-to-end encryption
MEGA
Filen
While we vouch for the solutions above, when it comes to critical data we still recommend further security measures. Tools like Restic and BorgBackup support backup and deduplication, which helps keep stored data manageable (see the sketch below). Backup Ninja and UrBackup help achieve not only safety, but also fast and easy restoration and management of automated database backups.
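To make this concrete, here is a hedged sketch of a Restic workflow driven from Python. The S3 repository location, backup path, and password handling are assumptions for illustration; in practice the password should come from a secret manager, and AWS credentials are expected in the environment.

```python
import os
import subprocess

# Hypothetical S3-backed Restic repository.
REPO = "s3:s3.amazonaws.com/example-backup-bucket/restic"
env = {**os.environ, "RESTIC_PASSWORD": "replace-with-a-managed-secret"}

# One-time repository initialization.
subprocess.run(["restic", "-r", REPO, "init"], env=env, check=True)

# Create an encrypted, deduplicated snapshot of a directory.
subprocess.run(["restic", "-r", REPO, "backup", "/var/opt/gitlab/backups"],
               env=env, check=True)

# Verify repository integrity.
subprocess.run(["restic", "-r", REPO, "check"], env=env, check=True)
```

Restic encrypts snapshots by default, so the repository contents are unreadable without the password.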
We use an AWS S3 bucket to store data, alongside at least 2 copies of each of the 2 most recent backups stored on our own infrastructure. Both the infrastructure and the bucket resources are backed up daily. Once a month, we transfer the most recent backup of the most critical assets, including GitLab, YouTrack, Mattermost, and Bitwarden, to an external drive.
AWS S3 provides an Object Lock mechanism, which protects stored backups from being deleted or overwritten.
The mechanism requires setting retention periods during which the selected objects remain locked and under WORM protection. WORM stands for Write Once, Read Many, and describes storage where it is critical that stored data never changes.
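As an illustration, here is a hedged boto3 sketch of uploading a backup under Object Lock; the bucket (which must have been created with Object Lock enabled), key, file name, and retention period are assumptions.

```python
import boto3
from datetime import datetime, timedelta, timezone

s3 = boto3.client("s3")

# Upload a backup under WORM protection. In COMPLIANCE mode the object cannot
# be deleted or overwritten until the retain-until date passes.
with open("gitlab-backup.tar.gz", "rb") as body:
    s3.put_object(
        Bucket="example-backup-bucket",  # must have Object Lock enabled
        Key="daily/gitlab-backup.tar.gz",
        Body=body,
        ObjectLockMode="COMPLIANCE",
        ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=30),
    )
```

COMPLIANCE is the stricter of the two lock modes; GOVERNANCE allows specially privileged users to lift the lock early.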
Another useful mechanism provided by AWS is Bucket Versioning, which allows users to store multiple versions of a backed-up resource in the bucket. As a result, each attempt to overwrite a backup creates (and stores) another version of the object.
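Enabling versioning is a one-off bucket setting; a minimal boto3 sketch (the bucket name is an assumption):

```python
import boto3

s3 = boto3.client("s3")

# Once versioning is on, overwriting a key creates a new version
# instead of replacing the stored object.
s3.put_bucket_versioning(
    Bucket="example-backup-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)
```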
A standard way of storing data, billed per set time period, irrespective of how long the data is in fact preserved on AWS servers
A file archive with lower costs, but with a minor inconvenience, as access to stored resources takes time (up to 12 hours from request to access to the desired backups)
Once a day, we upload a daily backup of every tool we use. This data is available right away for 30 days; after this period, it is marked Expired and queued for deletion.
Every week, we upload a collective backup to Glacier Deep Archive. After 180 days, these copies are marked Expired and deleted.
Each backup is marked with a relevant tag (Daily, Weekly) for easier management and is secured by an Object Lock protecting it from deletion and overwriting. A sketch of this retention scheme follows below.
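Here is a hedged boto3 sketch of such a scheme; the bucket name, tag key, object keys, and file names are assumptions, and the rules mirror the periods described above.

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "example-backup-bucket"  # hypothetical bucket

# Weekly backups go straight to Glacier Deep Archive, tagged accordingly.
with open("weekly-backup.tar.gz", "rb") as body:
    s3.put_object(
        Bucket=BUCKET,
        Key="weekly/all-tools.tar.gz",
        Body=body,
        StorageClass="DEEP_ARCHIVE",
        Tagging="backup-type=Weekly",
    )

# Lifecycle rules expire Daily objects after 30 days and Weekly after 180.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-daily",
                "Status": "Enabled",
                "Filter": {"Tag": {"Key": "backup-type", "Value": "Daily"}},
                "Expiration": {"Days": 30},
            },
            {
                "ID": "expire-weekly",
                "Status": "Enabled",
                "Filter": {"Tag": {"Key": "backup-type", "Value": "Weekly"}},
                "Expiration": {"Days": 180},
            },
        ]
    },
)
```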
Every 2-3 months, we test our safely stashed resources. Using a separate server and the Docker service, we make sure our backups are valid and ready to restore data whenever necessary.
This step should give the final answer to the “do we really have backups” question.
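A restore drill along these lines might look like the following Python sketch; the Postgres image, dump file, and sanity query are all assumptions for illustration.

```python
import subprocess
import time

# Spin up a throwaway Postgres container on the test server.
subprocess.run(["docker", "run", "-d", "--name", "restore-test",
                "-e", "POSTGRES_PASSWORD=test", "postgres:16"], check=True)

# Wait until Postgres accepts connections.
while subprocess.run(["docker", "exec", "restore-test",
                      "pg_isready", "-U", "postgres"]).returncode != 0:
    time.sleep(1)

# Load the restored dump and run a sanity query (names are hypothetical).
with open("restored-dump.sql", "rb") as dump:
    subprocess.run(["docker", "exec", "-i", "restore-test",
                    "psql", "-U", "postgres"], stdin=dump, check=True)
subprocess.run(["docker", "exec", "restore-test", "psql", "-U", "postgres",
                "-c", "SELECT count(*) FROM users;"], check=True)

# Clean up the scratch container.
subprocess.run(["docker", "rm", "-f", "restore-test"], check=True)
```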
There’s no good answer other than yes, you should back up. Securing data critical to your operations is necessary to maintain continuity and avoid operational downtime. Better safe than sorry, no matter how cliché it sounds.
Recovering from unplanned events is critical to maintaining operations and, as such, shouldn’t be overlooked in favor of seemingly easier, careless approaches. While backups may not seem the most exciting part of software development, their importance is impossible to overstate.
There are a lot of ongoing debates around this topic. Automated testing is often regarded as a cure-all that solves every QA problem. Many development teams claim they do great without QA engineers. Companies follow this approach without giving it publicity, for obvious reasons, but some of them even boast about it.
Yahoo bragged about successfully eliminating QA at some point. Microsoft and Salesforce have reportedly worked without dedicated QA teams too. For Salesforce, that practice led to a multi-instance core and communities service disruption, but that’s another story. And as for Windows… well, you know the case:

We are not among those who stick to redundant roles. However, we see no reason to talk about eliminating the QA role; we would rather talk about transforming it.
Indeed, in some projects, due to their scale, domain, or context, a full-time QA specialist might not be necessary. But this is rather an exception. Developers, and even more so automated tests, cannot replace QA engineers. Moreover, there are many benefits to separating the dev and QA roles. Here is why:
There is a crucial difference between developers’ and testers’ mentalities.
Developers have special feelings about the code they write. They know it inside and out. They know the logic behind it. And they can never be impartial about it.

That’s why developers’ tests are often limited. Programmers already know how the software should perform and stick to this familiar scope and these scenarios while testing. The result is code that is perfectly accurate in standard situations, but sometimes error-prone in non-standard ones (missing edge cases).
QA engineers, on the other hand, are not attached to the code they test. They have no interest in testing it gently. They try out all the creative ways of using the software, just like future users will, and find bugs that stay out of developers’ reach.
As a rule, testers understand business assumptions better and follow specifications more precisely, which is necessary for delivering software that meets business needs.
All this doesn’t mean that one mentality is better than the other. Both are great and needed, as the true superpower comes from the synergy between testers and developers.

Whereas programmers are usually responsible for a certain feature, system, microservice, or simply a piece of code, QA engineers are responsible for the product as a whole. They make sure tests stay up to date and that new bugs and vulnerabilities do not appear after modifications (to different parts of the software! by different development teams!) have been applied.
A great example here is the responsibility for performing end-to-end testing of a product based on microservices distributed among various dev teams. Having a QA specialist dedicated to such complex tasks, which require coordinating different teams, is crucial.
We are big fans of automation, but let’s be honest: automation cannot cover all tests, as some of them require human cognitive abilities and common sense.
Exploratory testing is a good example here. Testers need to use their creativity, experience, analytical skills and be familiar with the logic behind the software in order to find all possible issues. Sometimes it’s intuition and simply a human way of thinking that helps testers accomplish their task. You cannot expect it to be done by a computer.
Another example is the already-mentioned black-box testing, which isn’t possible without a “fresh” pair of human eyes.
Sorry if this comparison hurts someone’s feelings, but we couldn’t explain the idea better. We all know that software development is mentally taxing, and talking to someone about your programming problems is a great way to find a solution. And let’s be honest, the rubber duck’s role in such discussions is extremely exaggerated.

The situation with QA is similar to safety at work. Safety isn’t the sole responsibility of a Safety Officer; everyone should follow the safety rules. The fact that the workplace is safe doesn’t mean you can fire the Safety Officer, and people who follow safety instructions cannot replace the Safety Officer.
QA is especially crucial for the ongoing development of products that have already been launched and adopted by real users. Eliminating the QA role or handing it to someone else means transferring the responsibility for finding bugs onto existing users, which is fraught with reputational risk.
We hope you found our arguments interesting. If you disagree or have something else to say, don’t hesitate to leave a message in the comment section below or contact us directly.
Indeed! With the exception that the hassle is usually bigger than many companies expect.
To avoid getting bogged down in all the troubles connected with this change, we suggest approaching it strategically and thinking about what might go wrong before your team starts the transition:
It’s a common misconception that automated testing is the next logical career step for manual testers. Yes, it happens. But no, it’s not as easy as a vertical promotion, where an employee fits a higher position thanks to accumulated experience alone.
In fact, the transition to automation requires a lot of effort and learning that takes many specialists out of their comfort zone. This is why not all manual testers opt for that direction or manage to convert successfully in the end.
So before deciding to train your whole manual testing team and turn them into automation engineers, take a closer look and assess your team members carefully: Do they really want to convert (or do they just fear losing their jobs)? How many of them know how to code? Do they have the discipline to learn new things? Do they have the right skills for the new role?
And here is a small hint: if the testers on your team are ready to convert to automation engineering, they are most probably already doing so by learning how to code and taking online courses in their free time, as well as bombarding their managers with suggestions to try test automation practices. If you haven’t spotted such activity and conversations, chances are your testers chose a manual testing career to avoid coding, and thus automation engineering is not their thing. In that case, your efforts to convert them will most probably be in vain.

If you do see potential in your testing team and want to proceed with training them, be ready to face transition costs that are often higher than estimated. Here is why:
1) Training takes longer than expected
Even when it seems the transition can be done rather quickly, the odds of getting autonomous specialists who don’t need to be looked after are pretty low. In practice, testers who convert too quickly still lack knowledge and experience. The code they produce is often buggy and randomly copy-pasted, and thus creates more problems in production than benefits. A proper transition needs time and patience.
2) An in-house transition carries opportunity costs
Training testers on your own leaves a few questions unanswered: Who will handle testing while the testers are busy learning automation? Who will teach them?
Involving developers in teaching testers how to code might look like a cheaper solution, but it will take developers away from their main tasks and thus lead to opportunity costs. Not to mention that “a good developer” doesn’t equal “a good teacher”.
3) Life is simply unfair
Keep in mind that your freshly trained automation engineers might also recognize their higher market value once the training is done and leave you shortly after the transition. Sadly, no one is safe from this scenario.
Hiring QA professionals instead of growing your own is often considered a luxury that only tech giants like Google or Amazon can afford. However, at the end of the day, it’s not such a bad decision considering all the costs and risks related to training testers in-house.
Hiring a good automation engineer is hard. And finding someone who will agree to oversee the team during the transitional period is even harder. Luckily, hiring is not the only option you have. Companies often opt for more cost-effective solutions, like hiring a freelancer or a software house. The latter can not only ‘land’ you a QA specialist or a QA team for a specific time or project, but also help organize the transition and take care of training without jeopardizing production. Check out how this works and what such services cost by contacting us here.
We hope you found this article useful. We’d love to hear about your experience of converting to automation and to help you with your testing struggles. So don’t hesitate to drop us a line in the comment section below.