Dev – NeuroSYS: AI & Custom Software Development (https://dev.neurosys.com)

What are HATEOAS links? (https://dev.neurosys.com/blog/hateoas-links)

Hypermedia as the Engine of Application State (HATEOAS)

HATEOAS is a constraint of REST application architecture, distinguishing it from other network application builds. 

The term sounds intricate, but the idea behind it is simple. 

Thanks to HATEOAS, users interact with network applications entirely through responses provided by the server. The hypermedia returned by the server is sufficient. In simpler terms, no additional documentation is required when using a REST API.

A few words before we focus on HATEOAS itself.

An Application Programming Interface (API) can be considered a contract binding two applications. The agreement defines how both sides communicate using requests and responses.

The API maturity model by Leonard Richardson distinguishes the following stages:

  1. HTTP RPC – Remote Procedure Call over HTTP: the client calls operations on the server, typically through a single endpoint, using HTTP merely as a transport for requests and responses.
  2. Resources – data corresponding to models in the application. API design calls for using different URLs to interact with various resources within the app.
  3. HTTP verbs – help to perform CRUD operations on existing resources and share actions between them, without creating new resources for each process to be carried out.
  4. HATEOAS – provides the client with information on the actions they can perform and the resources they can download.
HATEOAS links

HATEOAS serves as a guide, leading users through the interface and letting them find what they came for (resources and potential actions that the client can undertake using the API). The approach mirrors ordinary websites, which use HTML links and buttons to provide users with the same kind of guidance. In HATEOAS, this breadcrumb trail lives in the API itself.

HATEOAS is commonly implemented with HAL (Hypertext Application Language). HAL strictly defines what the links in question should look like, and it also uses embedded resources to enrich the delivered responses. For example, when a user downloads an ebook, the embedded resources can contain additional information, such as author details and a link to their website.
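
As a rough sketch, a HAL-style response for such an ebook resource could look like the snippet below, shown here as a TypeScript object. The field names, IDs and URLs are hypothetical, not taken from any particular API.

```typescript
// A minimal sketch of a HAL-style resource with hypermedia links and an embedded resource.
// All names and URLs are illustrative assumptions.
interface HalLink {
  href: string;
}

interface HalResource {
  _links: Record<string, HalLink>;     // hypermedia controls ("what can I do next?")
  _embedded?: Record<string, unknown>; // related resources delivered inline
}

const ebook: HalResource & { id: number; title: string } = {
  id: 42,
  title: "Example Ebook",
  _links: {
    self: { href: "/ebooks/42" },
    download: { href: "/ebooks/42/download" },
  },
  _embedded: {
    // extra information served with the resource, e.g. author details and their website
    author: {
      name: "Jane Doe",
      _links: { website: { href: "https://example.com" } },
    },
  },
};

console.log(Object.keys(ebook._links)); // ["self", "download"]
```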

HATEOAS use cases

Pagination – supporting navigation through the API with page and limit parameters. 

HATEOAS links pagination

The back-end accepts the page and limit parameters.

Pagination with HAL allows for easy implementation of the infinite scroll on visited websites. 

HAL pagination in HATEOAS links

Links are optional; the infinite scroll is driven by the next link.
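
A sketch of what following such pagination links could look like on the client, assuming hypothetical /articles endpoints and a runtime with fetch available:

```typescript
// Sketch of HAL-style pagination: the client only follows links, it never builds URLs itself.
// The endpoint, page size and item shape are assumptions for illustration.
interface Page<T> {
  items: T[];
  _links: {
    self: { href: string };
    prev?: { href: string };
    next?: { href: string }; // absent on the last page, which ends the infinite scroll
  };
}

async function loadAllPages<T>(firstPageUrl: string): Promise<T[]> {
  const collected: T[] = [];
  let url: string | undefined = firstPageUrl;
  while (url) {
    const page: Page<T> = await (await fetch(url)).json();
    collected.push(...page.items);
    url = page._links.next?.href; // follow "next" as long as the server provides it
  }
  return collected;
}

loadAllPages<{ id: number; title: string }>("/articles?page=1&limit=20")
  .then((articles) => console.log(`Loaded ${articles.length} articles`));
```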

Authorization and business logic

Business logic in HATEOAS links

An article can be published if it is not removed, not yet published, the user is an editor, and it has received a minimum of 3 positive reviews.
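
A server-side sketch of how such a rule could be expressed as a link instead of being re-implemented by every client; the resource shape, role names and URLs are assumptions:

```typescript
// Sketch: the server evaluates the business rule and exposes the result as a "publish" link.
// Clients only check whether the link is present; they never duplicate the rule.
interface Article {
  id: number;
  removed: boolean;
  published: boolean;
  positiveReviews: number;
}

interface User {
  role: "editor" | "author" | "reader";
}

function buildArticleLinks(article: Article, user: User): Record<string, { href: string }> {
  const links: Record<string, { href: string }> = {
    self: { href: `/articles/${article.id}` },
  };

  const canPublish =
    !article.removed &&
    !article.published &&
    user.role === "editor" &&
    article.positiveReviews >= 3;

  if (canPublish) {
    links.publish = { href: `/articles/${article.id}/publish` };
  }
  return links;
}

console.log(buildArticleLinks(
  { id: 7, removed: false, published: false, positiveReviews: 3 },
  { role: "editor" },
)); // contains both "self" and "publish"
```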

What does HATEOAS provide?

  • No URL concatenation or splicing on the client-side
  • No need to know all URLs on the client-side
  • Freedom to change URLs on the API side without having to change the client
  • No duplication of logic – logic is encapsulated on the API side

What HATEOAS doesn’t provide

The method doesn’t include information or context on

  • Why the action is unavailable
  • Why the resource is not available

When is HATEOAS useful?

  • Feature flags – to hide or disable components in a UI
  • Roles and permissions
  • Business logic
  • Business process steps
  • Planned refactoring
  • Work carried out by separate people/teams
  • Separate implementations

When are HATEOAS links not useful?

  • When the system's scope is too narrow
  • When front-end and back-end are developed by the same person (in a monorepo – when code for multiple projects is stored in the same repository)
  • When the API client does not understand the intent behind the links

Advantages of HATEOAS

HATEOAS allows control logic to be fully defined on the client-side. The constraint supports scenarios where multiple implementations of the same service exist and one client needs to access more than one of them. Additionally, HATEOAS facilitates moving content to other servers and allows the creation of explorable APIs, making it easier for developers to create the interface and its data structures. Using HATEOAS helps to guide users to the necessary information and navigates them through the API.

HATEOAS navigation through the API

The example shows the back-end placing the image in the “public/covers” directory. The file name follows the “<book-id>.png” format.

HATEOAS links replace the need to change the API in cases where certain services and features need to be temporarily disabled. If, for example, a button should no longer be displayed or redirect to a linked action, the corresponding HATEOAS link can simply be withheld to hide or disable it. As a result, the front-end needs no significant changes, which saves developers’ work.
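
For example, a front-end could decide whether to show the button purely from the presence of the corresponding link, along the lines of this sketch (link and action names are made up for illustration):

```typescript
// Sketch: the UI renders actions based on the links returned by the API,
// so temporarily disabling a feature on the server automatically hides the button.
interface ResourceLinks {
  publish?: { href: string };
}

function visibleActions(links: ResourceLinks): string[] {
  const actions = ["Back"];
  if (links.publish) {
    actions.push("Publish"); // shown only when the server currently offers the action
  }
  return actions;
}

console.log(visibleActions({ publish: { href: "/articles/42/publish" } })); // ["Back", "Publish"]
console.log(visibleActions({}));                                            // ["Back"]
```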

Disadvantages of HATEOAS

The REST application constraint has no established standards. As a result, HATEOAS libraries use various representations of links. Similar outputs may be achieved with differing syntax, but the overall build of the API can be more difficult to implement due to the varying link representations.

Higher latency and more bandwidth – these downsides mostly affect users of mobile devices. Payloads grow with the number of links provided in a response. To reduce the negative impact on devices with higher latency than PCs, APIs developed for mobile devices should carry smaller payloads.

Limited library support. Before reaching widespread popularity, HATEOAS was overtaken by other approaches (e.g. GraphQL). As a result, finding suitable libraries for every programming language may be challenging.

The takeaway

HATEOAS enables a clear definition of control logic on the client-side of the API. As a result, clients can conveniently use API resources by following embedded links. Separating client interaction from the URL structure reduces the risk that future changes will break the integration.

HATEOAS compliance is necessary for APIs to be truly RESTful but is not required for all API integrations. The actual needs vary depending on business conditions and requirements. As a result, there is no unambiguous answer to whether HATEOAS should be used or not.

Do you think your software project may require implementing HATEOAS methods? Let’s talk about the idea and see if the solution will be the most beneficial for your next great product or service. Book your one-hour free consultation and let us know about your needs – our team will take care of the REST (pun intended).

Why People Say Unity Engine Is Bad & What Is It Good For? (https://dev.neurosys.com/blog/why-people-say-unity-engine-is-bad)


    What is Unity engine? (quick overview)

    Unity is a popular development platform that can be used for free (with some exceptions). It provides a set of tools (assets, plugins, libraries), the majority of which can be found on GitHub, Bitbucket, the Unity Asset Store and other similar platforms. 

    A large community around Unity with its rich knowledge base allows finding ready solutions for common errors in no time. There are also lots of guidelines for new users, which make Unity a good starting point for beginners, learners and amateur game developers. The wide pool of possibilities that this engine provides opened the way to new professions such as technical artists, gameplay programmers, UI programmers, and network engineers.

Business-wise, Unity is a good free option for small and medium-sized companies whose gross annual income is not higher than USD 100,000. Only when a company exceeds this amount is it required to switch to the paid version of the tool.

Is the Unity engine really as bad as people say?

Many developers don’t consider Unity to be an ideal engine for creating mobile applications, in particular utility applications. Indeed, this engine was created mainly for game development, although there are also many sceptics who question the quality of games built on the Unity engine.

    In our opinion, it doesn’t have to be the fault of the engine itself. More often it’s a result of the very low entry threshold for creating applications on Unity. This factor in combination with the lack of programming experience leads to a high number of poor applications and games created on Unity engine. 

For this reason, Unity is often referred to as an “easy to learn, hard to master” platform. The application development process in Unity can be compared to an avalanche: the more experience the programmer has, the faster the application grows and gains quality.

    Common pitfalls in Unity

    Building UI with Unity

Even though the engine is generally very friendly for beginners, there are many bad programming practices that an inexperienced developer can easily step into. One such pitfall lies in creating a user interface (UI).

Overall, working on UI in Unity is easy, but only to a certain extent. When creating UI for the needs of most games, you most probably won’t encounter any trouble. However, more comprehensive and scalable solutions will require a lot of work, since UI tools in Unity are not particularly developed, standardized or automated.

You won’t find any structured UI in this engine like in others (e.g. Xcode for iOS), but we believe it will evolve over time and switch to something like HTML/CSS.

    The UI tool in Unity is flexible but not suitable for every project. All in all, it is a gaming engine and is therefore not universal. For more complex tasks a custom approach is usually required, such as creating UI on your own or using external solutions for that.

    Challenges in achieving clean code and architecture using Unity

    When working with Unity, achieving clean architecture and code might be a challenge. Without programming experience, you easily end up with chaotic architecture or no clear thought-through application structure.

The Unity environment itself does not encourage or help programmers apply good programming practices. Often newbies take shortcuts using primitive solutions such as operating mainly on static classes, abusing singletons, or storing logic in one file consisting of thousands of lines of code, trying to justify such behavior by the “done is better than perfect” principle.

Creating effective architecture and advanced functionalities in Unity, similar to UI creation, requires learning certain conventions (e.g. design patterns) and interest in available plugins, unless you want to waste your time reinventing the wheel.

In a situation when, let’s say, you want to do a good job with InputField, you still have to write your own plugin that will handle it well. Let’s take the example of building a mobile app (e.g. on Android). Unity does not properly support InputField objects by default. When you select such a field, you lose “focus” and you cannot change the cursor position, for instance, because the system keyboard has appeared. A typical workaround is to use an additional InputField that is built into the keyboard. So if you want to avoid it, you need to write a native plugin that will handle the field in the right way.

    Difficulties in applying SOLID principles in Unity

    Due to its architecture, Unity makes it difficult to apply SOLID principles*.

    *Five principles of Object-Oriented Design: SRP – Single Responsibility Principle; OCP – Open/Closed Principle; LSP – Liskov Substitution Principle; ISP – Interface Segregation Principle; DIP – Dependency Inversion Principle.

This mainly applies to references between objects: without the typical "static void Main() { … }" entry point in which an application starts, we can face the problem of sharing references between objects. A typical approach would be adding references from the editor, but in the long run, as the project develops, this can become a very bulky solution that causes many errors.

An especially bad practice here is the abuse of the UnityEvent class, which allows you to subscribe to events from the editor. This seemingly convenient approach can quickly make the information flow in the code illegible, while juggling dependencies leads to classic spaghetti code.

    Threading in Unity

Unity and its native objects are not thread-safe, which means we cannot use them outside the main thread. On one hand, this protects programmers from poor use of multi-threading. On the other – it cuts off several important ways of optimization.

Parallelization of calculations is useful, for example, in the case of heavy mathematical calculations. However, if we want to break down work on a specific native object into several frames, Coroutine – a way to write asynchronous and delayed tasks – comes in handy.

Anyway, multi-threading should be approached with great caution because it can be counterproductive. For example, creating more threads than there are processor cores is pointless, and switching between threads also costs additional processor cycles.

A large part of the operations depend on each other, but it makes sense to separate particularly heavy operations, such as procedural model generation, and transfer them to other threads.

    Using modules and Unity version control problems

Unity allows you to use modules typically associated with devices, such as IMU (sensors), but only at a certain level of abstraction, which is what enables multi-platform support. If we want to use services specific to a given platform (for example, Android or iOS), then native plugins come in handy.

    An example here would be getting detailed information from a GPS module (like the number of satellites available) or from a fully functional WebView. For devices using the Android system, we may be interested in classes such as AndroidJavaClass and AndroidJavaObject, which allow creating simple plugins at the project code level.

Many developers also complain about Unity version control problems. Luckily, this problem has already been solved. The engine is getting better and better, and subsequent versions appear every year.

What is Unity good for?

There are many opinions on the Internet regarding the advantages of using Unity, such as cross-platform support. In our opinion, this is not the most important feature, as many other environments offer it too.

    What is worth emphasizing is that Unity is a very well made engine that works perfectly on both iOS and Android. Just remember that in the case of iOS applications, you need to have a Mac to build a project for testing.

Overall, Unity is a good and convenient tool with a very low entry threshold. It is accompanied by a lot of well-described tutorials and its own learning platform.

The engine works with C#, but you can also use many languages, such as C++, Python, and Java, in the form of libraries, and it is compatible with various operating systems (Windows, Linux) and devices (such as Chromebook and Mac). Since we are basically programming in C#, it is easy to find solutions to problems, not only within Unity resources but also in the documentation and articles related to the language itself. Besides, thanks to the garbage collector in C#, we don’t have to worry about memory leaks in the engine.

The platform works great when it comes to prototyping. An app prototype can be sketched very quickly* without the need to create the architecture first. However, the very fast, ad-hoc script flow can later become a problem.

*Due to its approach to objects and built-in classes, Unity works great for rapid prototyping of applications and games. It is especially important for AR/VR application development, where we can test the concept of some functionality on the device practically right away. However, you should keep in mind that rapid prototyping might lead to poor project architecture.

Easy debugging (especially when it comes to logic and UI, e.g. rendering threads) is another advantage of Unity over other engines such as Godot or Unreal. A great tool for assessing the status and optimization of your application is the built-in Profiler, which allows finding bottlenecks.

    Despite the unstructured UI, the engine has well-resolved UI responsiveness. It allows us to define the arrangement, scaling and behaviour of containers from the editor level easily (e.g. stretching to the size of the window).

Unity is a multi-platform environment and works great for creating cross-platform AR/VR applications, mobile games, and console games, because it makes it possible to create 3D graphics with relatively little work and without extensive programming knowledge. This is not the case when programming in Android Studio, for instance.

    The resources available in the ML-Agents module reduce the barriers faced by the developers of machine learning applications. Learn more about our machine learning application cases here.  

    Conclusion: so should I use Unity?

    It might look like Unity is not the best tool for creating mobile applications and it is a rather common opinion. However, more and more game studios use the Unity engine, as well as offer job positions for Unity engineers. 

The reasons for that, among others, are: the dynamically developing Unity has a low entry threshold, a large pool of free learning assets, and quite pleasant documentation.

Despite the fact that the engine is not perfect for creating utility and native applications, there are applications for which it works great, e.g. cross-platform AR/VR applications running on multiple AR/VR devices. Prototyping applications and creating 3D games on the Unity engine also has a lot of advantages.

    Real case of using Unity for creating a cross-platform AR application

At NeuroSYS, we’ve chosen Unity for building one of the core components of nsFlow – the platform for creating AR applications. nsFlow, generally speaking, is an Industry 4.0 solution that streamlines knowledge management for industry, using augmented reality to provide industrial workers with the necessary knowledge anywhere and at any time.

    nsFlow offers two functional modules: Workflows and Remote Support. The Workflows module allows clients to model real-world processes (hands-on training, service or maintenance procedures, etc.) using a dedicated Workflow Creator tool, and then deliver them to workers equipped with AR glasses, so they can use them as a training tool or assistance in their day-to-day work. 

    The Remote Support module breaks the barrier of the distance separating line workers and technicians from experts who could help them in unusual situations happening every day, giving both the possibility to consult over technical problems remotely using audio-video communication and handy visual tools for experts to guide technicians step-by-step straight to the desired solution. 

From the very beginning, one of the core principles of nsFlow was to be hardware-agnostic. The market of AR devices changes rapidly every year, so we didn’t want to stick to specific hardware and chase the market by implementing newer and newer versions of our AR app every time a new device is announced. In order to achieve high independence from hardware, we made two strategically critical decisions:

    • the architecture of the platform must limit the responsibility of the AR device and AR app to an absolute minimum, making it just a thin client, commanded by the central engine installed on a server,
    • the AR app must be built in a cross-platform technology that will allow us to onboard new devices with the least possible efforts and costs.

Unity met the second assumption perfectly. It gave us a strong foundation for all nsFlow AR and mobile apps, allowing us to share most of the mechanisms and, as a result, maintain a single source code for six different devices (Microsoft HoloLens, Vuzix M300, RealWear HMT-1, Epson Moverio BT-300, Google Glass Enterprise Edition 2019 and Android smartphone).

Moreover, Unity, as a games-oriented platform, provides a lot of tools and solutions that are extremely useful when building AR applications, which have a lot in common with games (i.e. 3D scenes, physics, graphics, rendering, etc.). In our case, Unity was a perfect fit; however, it was a long and bumpy road until we tamed that technology together with all its specificities.

You can read more about nsFlow and see what other challenges we, and Unity itself, met and how we faced them.

    To sum up…

In the world of technologies, there are no purely “good” or “bad” decisions. Every choice should be made based on a wide range of project and business needs and considerations, and it’s often a compromise.

Despite all its pitfalls, we wouldn’t advise filing Unity away in storage, as it has lots of advantages for specific applications such as those we described above. However, using this technology should always be a conscious architectural decision, backed by strong arguments.

    We hope you enjoyed this reading. Feel free to leave your opinion below regarding Unity or go directly to our Unity service page to get to know more. We will be happy to continue the discussion. 

7 test types that will ensure your software quality (https://dev.neurosys.com/blog/tests-software-quality)

But do I need quality, huh?

    I can imagine an unlikely but nonetheless possible scenario when code quality isn’t on your priority list. You need to build your MVP or proof of concept as quickly as possible, not planning to develop it further. At least not in a given technology and current shape. It could happen, right? 

    Apart from similar cases, yes, you need quality. You need it for your digital product to work (simple as that), to be able to develop it further without technical constraints, and to avoid technical debt that would be costly in the long run. And this is where we come in with a list of tests that will ensure your software quality. 

    First things first: a plan 

    Boooooring, huh? Plan, strategy, starting with why – they all are big words these days. And there’s a reason behind it. Sorry not sorry, it is a test plan that allows your team (managers, testers, and developers) to test implemented solutions, in terms of not only code quality but also business requirements. With a good test plan, everyone knows what to do and nothing is left to chance.

    Your test plan should collate the information on:

    • strategies for particular solutions testing 
    • test cases and requirements 
    • testing environment
    • test execution and showing results
    • metrics for testing-related activities

    If you conduct tests for your client, the plan should be handed to them, so they know you put your money where your mouth is.

    1. Code Quality Assurance

    To ensure the high quality of the source code, we apply Code Quality Assurance procedures. They relate to the way of working with Git repository, performing code review, and following best coding practices. Their final look depends on the very project – its size, client, and needs. 

    The basic rules and procedures worth mentioning here are: 

    • All code is hosted on the source control management (SCM) system, preferably on de facto industry standard Git. 
    • The branching strategy has to be adjusted to the individual project’s needs (Gitflow, GitHub flow, etc.). 
    • The Continuous Integration process ensures that each code contribution is independently verified by automatic systems. This method guarantees that all existing tests pass, the code style is followed, and the established project’s code principles are met. 
    • All changes introduced by the team are subject to the verification process (so-called code review). To become a part of the product’s official code base, they need to be reviewed by fellow developers.

    2. Unit Testing

    Unit tests are used to examine the smallest pieces of source code that can be separated logically within a system, commonly individual functions or classes. Unit testing verifies if they work according to the design – and are ready for use. 

    This method allows us to find problems early in the development process. Components that didn’t pass should be fixed right away because they can negatively impact the whole application. However, in the end, it is a whole functionality that has to work efficiently. 

    Unit testing should be an inherent part of the development cycle – the first level of testing to be precise – to keep high code coverage. It leads to project predictability, process repeatability, and error elimination. They can be run by developers (which applies to all test types but manual) but performing them automatically during Continuous Integration (CI) pipeline runs has become an industry standard.

    3. Integration Testing

    Integration testing is the next phase that comes after unit testing. This time, system components (that have been covered by unit tests individually) are examined in groups. The process verifies that implemented components work together correctly in various scenarios. 

    Integration testing verifies that implemented components work correctly together.

    Integration tests are a part of regular development. They take place when all components/dependencies of a given functionality have been designed and their interfaces allow us to test interactions between them. 

Again, they can be run both by devs and automatically during the Continuous Integration process, the latter being advised. If we’re testing ready modules that have already been implemented, then we drift towards functional or end-to-end testing.

    4. Performance Testing

    Performance tests evaluate the application response time, stability, reliability, and resource usage under an assumed load. Their main aim is to identify and remove performance bottlenecks. 

We do it by simulating actual, average user traffic in the system, so it reflects real use. We increase the load only up to a critical number of users, because we don’t want performance testing to turn into stress testing. Performance testing is an effective way to detect system performance problems that need improvement.

Nowadays, performance testing is carried out on a system deployed in an environment as similar to production as possible.

    5. Stress Testing

    Stress tests are a particular case of performance testing. They verify that the system is operational and all functionalities work well even when it’s used under significant load – multiple users launching it at the same time and/or performing complex, long-running operations. The process is deliberately intense, often reaching breaking point, to identify performance bottlenecks and estimate the upper limit of users that can access the system at once.

    6. User Acceptance Testing

    UAT aims to ensure that the software can handle real-world scenarios and its end-users can operate the system as defined in the User Requirements document before it gets into the market.

    In the case of smaller apps, user acceptance testing can be performed manually by a tester and business representatives not being members of the development team. However, usually, the process is automated. User actions are recorded as macros and replayed over and over again. Automation is highly recommended here, otherwise more and more errors would arise after a while, because human errors are inevitable.

UAT can take place independently from other test types. Moreover, performance and stress testing can draw from scenarios written for it. In this case, a selected group of UAT macros is repeated more frequently, e.g. 1,000 times per minute.

    7. Regression Testing

    Regression tests are performed to verify that the implemented changes and updates haven’t affected already existing parts of the system, which had worked perfectly before. This testing practice ensures that everything functions as expected (if it doesn’t, we call it regression). When a regression is found, corrective actions should be carried out in no time. 

    Regression testing performed manually in every Sprint would mean never-ending work. Thus, we do it automatically by running all user acceptance tests once again after a change has been implemented. 

    Summary: 7 test types to ensure IT project quality

    1. Code Quality Assurance
    2. Unit Testing
    3. Integration Testing
    4. Performance Testing 
    5. Stress Testing
    6. User Acceptance Testing
    7. Regression Testing 

As you can see, quality is the result of a habit: regular, many-sided testing that involves developers, testers, and business representatives, and that wouldn’t be possible without automation. For more on testing and its benefits, go to our Quality Assurance service site.

CI/CD for Unity applications (https://dev.neurosys.com/blog/ci-cd-for-unity-applications)

Is Unity a platform facilitating Continuous Integration and Continuous Development? How do developers handle the practices of automating software building?

    Unity, a versatile cross-platform development environment, consists of a powerful graphics engine and an editor providing all features necessary to create real-time digital experiences. The tool enables efficient multi-platform development, easy and fast prototyping, and the thriving community behind it supports creators. For more information about the Unity engine, check out the article giving a bigger picture. The engine, which most remarkably facilitates game development, can be employed with equal success in industrial and business AR/VR applications.


      Continuous Integration/Continuous Development

      Continuous Integration (CI) is an approach in software development, involving immediate testing and reporting of frequent, stand-alone changes. The approach is aimed at early detection of possible malfunctions and allowing rapid response, improving the process flow. 

      Continuous Development (CD) is an umbrella term, covering topics such as CI, continuous testing, continuous delivery, and continuous deployment. 

      The CI/CD pipeline is a set of actions performed to deliver a digital product or its new version. The process requires easy access to single shared repositories, where the code can be integrated continuously. It’s used to automate the software development process, facilitating code creation and testing (that’s the CI part), and in the end – a secure deployment of the brand new version of the product (CD).

      CI/CD is important to quality software development as it helps accelerate the process and eliminate errors and defects, delivering a better product faster. Intuitively, the CI/CD, as a widely adopted set of practices aimed at improving the software creation process, should be commonly enabled without obstacles. Theoretically, it is, but the reality is a bit harsh. 

      The benefits of Unity

Projects built with the engine are compatible with various platforms, devices, and operating systems. The compliance doesn’t occur automatically but stems from skillful development and implementation. Older app releases often can’t be seamlessly transferred to, for example, iOS, as their builds are based on too many solutions depending on the Android system (or any other platform, being their “home port”, for which they were primarily developed). Meticulous choice of libraries dedicated to certain platforms, or – to an extent – accepting the limitations of the engine is an inseparable part of Unity developers’ life. Otherwise, a significant amount of work would apply to only one system.

The engine shows a give-and-take attitude, where every process has its price and requires additional actions. Developers can create multi-platform applications but need to avoid solutions not supported on every platform, limit native plugins, or at least ensure such plugins or substitute features are available on each supported platform. The alternative would be maintaining several projects in various technologies, one for each necessary platform.

      Jack of all trades, master of some

Unity’s ability to facilitate applications working in nearly all environments is a huge work- and time-saver. Thanks to the engine, companies do not need to hire developer teams for every operating system. Nevertheless, familiarity with how other platforms work comes in handy – Unity developers don’t need to be experts in Android, iOS, or UWP development, but should know the platforms and, most preferably, be able to write simple code in their native technologies.

Unity enables the relatively easy creation of outstanding 3D experiences without specialized programming skills due to its built-in tools and functionalities. Its emergence was a revolution in game dev, arming creators with a life-changing tool and allowing them to build a plenitude of projects, also in the indie field. Used for years primarily in game development, the engine caught the interest of AR/VR developers, who appreciate its potential in serving 3D models in spatial projects. For more benefits and features of Unity, see the overview.

      The challenges of the CI/CD pipeline in Unity

      Unity is a well-seasoned, already established solution (released originally in 2005), but the dynamically growing demand causes growing pains. While the engine has many advantages, it still has some flaws, and the difficulty of establishing an efficient CI/CD pipeline is one of the major troubles. 

      While Unity is cross-platform and nearly universal, the nearly part is a challenge. Working well with applications developed for Android, iOS, and Linux, the engine lacks convenient free customizable CI/CD solutions, and even the paid official Unity solution still has some limitations and won’t support all platforms.

The engine is also problematic in the scope of version control (Git, GitLab, etc.). With the latest versions, the engine’s creators started releasing working solutions; until then, Git service providers offered facilitations on their side, while on Unity’s side versioning required specific preparation before submitting work to the repository.

      Solutions

      1. Unity Cloud Build

Released in 2015, Unity Cloud Build brought change to the build process, enabling continuous integration services for Unity projects. It’s easy to set up, user-friendly, and helps avoid licensing issues, but it is no longer free. However, UCB is not Unity’s wunderkind, as using it is associated with performance loss and limited access to features. Currently, the supported platforms include applications developed for iOS, macOS, Android, WebGL, Windows Desktop, and Linux.

      The UCB streamlined CI/CD execution in Unity, reducing the need for unnecessary workarounds. The solution monitors source control repositories, automatically updating detected changes. Automation with UCB covers code compilation, deployment, and testing, enabling rapid iterations of developed projects. 

      Automation services covered by UCB result in

      • faster distribution, thanks to the cloud-based build compiling infrastructure, executed in parallel for multiplatform works. The builds are downloadable for all team members without restriction
      • shorter time-to-market achieved through reduction of manual work and intervention, and, of course, the necessity of only one build process for projects meant for various platforms
      • improved quality, since the process detects changes continuously, spotting potential errors at the same time as they are compiled into the build
Pros:
• Easy configuration
• Official Unity support
• Still evolving

Cons:
• Lack of support for some platforms
• Paid
• Low performance
• Low flexibility
• Lack of integration with Google Play, App Store, etc.

      2. Cloud Services / Cloud platforms (Azure, AWS, etc.)

      Cloud platforms are highly versatile tools for performing cloud computing and data processing without utilizing the user’s computing power. Thanks to their flexibility, it is possible to install a selected system, dependencies, and tools, including Unity. An environment prepared in the cloud can serve as a platform for CI/CD, which, due to the characteristics of cloud solutions, will be flexible and scalable, enabling continuous and stable access to services. Unfortunately, it is probably the most expensive solution, not only because of the sole pricing of cloud services. The size of cloud platforms and the vastness of the possibilities means that creating the environment requires highly skilled DevOps followed by expertise in Unity and its specific requirements. These skills also come at a cost. Cost optimization of cloud solutions is a broad topic, deserving a separate article.

Pros:
• Flexibility
• Good performance
• Scalability
• Availability/stability

Cons:
• High costs
• Considerable effort and knowledge required for configuration
• Potential problems with the configuration of some platforms

      3. Game CI

A free solution based on an Ubuntu docker image with the Unity client installed. Thanks to the tutorials prepared by the developers, integration with the Git version control system and CI/CD execution is extraordinarily easy. The extensive documentation leaves few questions, reducing the tech threshold. Unfortunately, this solution requires a separate physical or virtual machine, since none is integral to it. While developers can use GitLab or GitHub servers, their capabilities are limited, especially in the free version. Still, already having access to a computing platform, developers can easily configure a docker image on it and integrate Game CI into their version control system with relatively little effort. However, utilizing Ubuntu means that not all Unity features are available (including the il2cpp scripting backend); therefore it becomes problematic or nearly impossible to prepare CI/CD for UWP, as well as for macOS and iOS platforms using il2cpp.

Pros:
• Relatively easy configuration
• Dockerization enables building on GitHub/GitLab servers
• Free solution
• Documentation with examples

Cons:
• Lack of support for some platforms
• Increasing performance still requires your own virtual/local machine
• The amount of free minutes available in GitHub Actions/GitLab CI is limited

      4. Automation servers

      A type of CI/CD-dedicated servers, designed to make building, testing, and deploying software easier. The solution that works most effectively for applications developed in the Unity ecosystem, and that seems to handle the associated problems best, is Jenkins. It is a relatively flexible tool that is easy to customize to particular needs. Jenkins also requires a virtual or physical machine, facing the same issues as Game CI. However, it is more scalable, allowing integration of multiple devices, both virtual and physical, and is not dependent on a single system. 

      Other solutions worth mentioning are:

      • TeamCity – offering a dedicated Unity plugin and a free On-Premise tier
      • CodeMagic – offering a manual for integration with Unity, although with limited capabilities in its free version
Pros:
• High flexibility
• Relatively easy configuration
• High performance
• Free Jenkins and free third-party tiers
• Possibility to use both local and virtual servers
• Full control over pipelines

Cons:
• Requires own servers
• Requires vast DevOps and Unity expertise

      5. Own local computer-server

      A physical machine prepared on-site, adapted to actual needs, allowing cost reduction, particularly in smaller projects. The team can prepare their solutions or use previously mentioned options, such as Game CI or Jenkins. In the case of a proprietary solution and own machine, developers have full control over development and costs, are immune to unfavorable changes, and can adjust the direction of development to own needs. At the same time, the team needs to take full responsibility for the solutions’ development, as well as the stability and availability of the service. The biggest problem is the systems’ very limited scalability. If not prepared properly, it won’t allow extending the computational power of the machine beyond what is possible to achieve with a single computer, and creating a satisfactory scalable system requires huge amounts of work. Every extension of computing power requires physical intervention in the machine. 

Pros:
• Easy access to the machine
• Unlimited configuration possibilities
• Possibility of integration with ready-made solutions
• Relatively low costs

Cons:
• Problematic increase of performance
• Vulnerability to external threats (lack of power, fire on the premises, etc.)
• Vast amounts of work and knowledge needed for configuration

      What did we choose?

For our AR app, developed under the nsFlow project, we are testing the solution on a local server running Windows 10, with a configured GitLab Runner dedicated to our application. The computer has good specifications, sufficient for our needs. Below, we have described a quick step-by-step guide to prepare the CI/CD server in its most basic version:

1. In the first step, we configured GitLab Runner ➡ Install GitLab Runner on Windows on the local server and linked it to a given repository on GitLab – as the operating system is Windows, we set PowerShell as the shell in config.toml.
2. Then, we installed the necessary software: Unity in the appropriate version, Fastlane ➡ Getting started with Fastlane for Android along with AWS CLI dependencies, and all dependencies needed to build applications on Hololens augmented reality glasses, including the Windows SDK in the appropriate version, and configured them appropriately (similar to local builds on the developer’s hardware).
3. The next step was writing a script in C# on the Unity side, which can be called later using the Unity CLI ➡ https://docs.unity3d.com/Manual/EditorCommandLineArguments.html. This allows us to run a build for each platform with the proper configuration – for Hololens we used ready-made methods from the UnityPlayerBuildTools class found in the Mixed Reality Toolkit. These methods automatically trigger the second part of the build in Microsoft Visual Studio.
4. Next, we created gitlab-ci.yaml and included the appropriate Unity CLI commands to run the previously written code. We also included rules that respond to certain actions. The most important rule is building the application as:
• Development version – with release of the application package on AWS S3 using AWS CLI for testers, and deployment of the application for internal testing on Google Play using Fastlane
• Production version – with deployment of the application to the production environment in Google Play and the appropriate place on AWS S3

      Each released change is preceded by tests available also from the Unity CLI.

      Basic CI/CD architecture with local computer usage

      We chose this approach for several reasons:

      We faced numerous troubles with building applications for Hololens 2, as very few off-the-shelf solutions support UWP and the very peculiar process of building applications for the Microsoft platform. The chosen solution is the most flexible, as we have full control over

      • what operating system the machine is running on
      • what version of Unity is installed on it (including the necessary SDK and platform tools)
      • what dependencies and additional software will be installed on it (AWS CLI, Fastlane, Windows SDK, etc.).

      This allows us to build applications for Hololens without any obstacles, which is often impossible using off-the-shelf solutions. The choice was also driven by the immediately available computer with the appropriate specifications (GPU + CPU), which significantly speeds up development time for both UWP and Android, compared to the builds that we were previously forced to do on our computers. Our current demand for builds is relatively small, and a single machine can meet it. Since it is a physical device, which we can easily access either locally or via “remote desktop”, analyzing and solving problems is much easier than in the case of ready-made solutions.

      Unfortunately, there are no perfect solutions, and the one we chose also has its limitations. 

What issues do we need to tackle? The major problem is undoubtedly the scalability of this solution. The only possibility to scale our projects is to exchange computer components or extend the system with additional units. The machine must also be connected to a power source and network at all times, so any network connection issues or power supply downtimes prevent us from building applications. We must take full responsibility for the availability and reliability of our solution and its development. We cannot rely on third-party warranties. In addition, using Windows, we are not able to provide application support for Apple devices, so extending the system to support iOS or macOS will involve purchasing and integrating another physical computer running macOS, or using an additional macOS virtual machine.

      Still, even facing the listed limitations, our choice is easier than providing support for Hololens 2 using the other proposed solutions. We plan to deploy Jenkins to the current system whenever the need arises to add another operating system or to increase application building performance. This should solve the problems mentioned above. 

      Is Unity good? Or does it have a justified bad reputation?

      Taking into consideration all of the above, do developers form a love-hate relationship with the engine? Or do they all more-or-less secretly wish for a new, more user- and CI/CD-friendly solution to enter the market?

We neither intend to pass judgments nor deal in absolutes. As you’ve already seen above, some of our specialized projects are built using Unity, and despite some inconvenience (or taking detours), the advantages outweigh the disadvantages. If you asked us for predictions on Unity’s future and our thoughts on it, we wouldn’t hesitate to admit that the engine is – and will remain – one of the best tools for developers of (nearly) all platforms working with real-time 3D content.

Hopefully, its creators will stay on the path towards easier automation, enabling efficient, fluent development and integration of cutting-edge applications. As a solution offering unmatched opportunities that come in handy in augmented, virtual, and assisted reality application development, Unity needs to incorporate better facilitation of CI/CD pipelines. Only then will the engine accommodate the needs of developers and ultimately grow beyond the game creation field, becoming the perfect solution for AR/VR applications that it has the potential to be.

      Are you looking for experienced developers to help you with an AR/VR application or other solutions, utilizing 3D models and real-time spatial data? Book your free consultation here. Let’s meet to discuss your idea and see how Unity can help bring your idea to life. 

Why use React JS? Top reasons and advantages (https://dev.neurosys.com/blog/react-js-library-for-scalable-web-mobile-apps)

React overview

      Initially released in 2013 and maintained since then by Facebook and a strong community, React.js is a popular open-source front-end JavaScript library for building user interfaces. Necessity is truly the mother of invention, as its creation answered a challenge that Facebook faced with creating highly efficient and dynamic interfaces. 

      What made it one of the most popular libraries in this field? Hint: most probably it wasn’t the social media-related origin. So, what was it?


      Why React.js?

      Components and one-way data flow

React components are reusable, standalone pieces of code, similar to JavaScript functions. Nested components are the basis of React applications. Data flows between them in only one direction, and interactions are sparked by e.g. clicking an icon. Components’ state can be changed through performed actions, making an application less prone to errors, easier to debug, and more efficient, as relations between elements are easier to define.
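
A minimal sketch of this pattern; the component and prop names are made up for illustration:

```tsx
// Sketch of one-way data flow: state lives in the parent, data flows down via props,
// and interactions flow back up through a callback prop.
import React, { useState } from "react";

function LikeButton({ count, onLike }: { count: number; onLike: () => void }) {
  // A reusable, standalone component: it only knows what it receives as props.
  return <button onClick={onLike}>Likes: {count}</button>;
}

export function Post() {
  const [likes, setLikes] = useState(0);
  // No two-way binding: the child cannot change the parent's state directly.
  return <LikeButton count={likes} onLike={() => setLikes(likes + 1)} />;
}
```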

      The virtual DOM

The Document Object Model (DOM) is a model created by the browser each time the website is loaded. The DOM enables browsing for particular elements and altering their properties to introduce more dynamic behavior. On the downside, working on an extensive DOM is CPU-intensive and burdens the browser, as the whole HTML tree is re-rendered each time an element is changed. The virtual DOM represents the HTML elements behind websites and applications. Keeping a virtual DOM record, React doesn’t need to reload the whole model: it generates a virtual representation, compares elements, and makes changes in the real DOM only where updates are required. This enables hot reloading (real-time interface reloads) and improves applications’ efficiency.

      React Hooks

Hooks are functions in React that “hook into” the lifecycle features of React components, letting developers use function components in most cases that used to require class components.

This enables access to components’ lifecycle functionalities, previously exclusive to class components. In pre-hooks times, function components couldn’t support more advanced logic and were excluded from most React features.
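
A sketch of a function component using hooks; the endpoint and data shape are assumptions for illustration:

```tsx
// Sketch: useState and useEffect give a function component state and lifecycle behaviour
// that previously required a class component.
import React, { useEffect, useState } from "react";

export function UserBadge({ userId }: { userId: number }) {
  const [name, setName] = useState<string | null>(null);

  // Runs after render and re-runs whenever userId changes.
  useEffect(() => {
    fetch(`/api/users/${userId}`)
      .then((response) => response.json())
      .then((user) => setName(user.name));
  }, [userId]);

  return <span>{name ?? "Loading..."}</span>;
}
```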

      Developer tools

      The dedicated browser extension for React, React Developer Tools, is available for Chrome and Firefox users, facilitating developers’ work. React DevTools provides additional React-specific tools for better design and debugging. Developers can examine individual components in the virtual DOM, edit their state and properties, and improve the applications’ efficiency and security. 

      JSX

React’s sidekick, JSX, is a syntax extension to JavaScript. JSX simplifies writing HTML-like markup directly in JavaScript code, making it easier to convert mock-ups into React components. By using JSX, developers gain a tool for creating their own components to answer particular needs, resulting in customized, high-volume applications.

      React Native

      React Native is a cross-platform framework created by Facebook for efficient mobile application development. It is widely used by developers creating apps for the most popular operating systems, combining React.js and native platforms features. React Native emerged two years later than React.js, as a response to the rapidly growing mobile development market. Users globally are turning mobile, and missing the opportunity to fully adapt websites to various devices can cause irreversible damage to business results. Incorporating React Native enables preserving app relevance through increased flexibility, live reload, great performance, and intuitive architecture. Building cross-platform applications for the most popular operating systems, reusing code and methodology from React-build applications made it easy for developers, resulting in time savings and improved mobile user experience. 

      Test-Driven Development

Test-driven development is a software development approach in which automated tests are written before the code they verify, enabling teams to build more stable, reliable, and safe applications. Incorporating TDD practices in React.js projects, developers can create convenient test suites and reduce redundancies in the code, achieving higher test coverage and a better, less error-prone architecture, and shortening time-to-market.
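
A small test-first sketch of the idea, assuming a Jest-style test runner and a made-up helper function:

```typescript
// Test-driven sketch: in practice the test below is written first,
// then the implementation is added to make it pass.
// The helper name and behaviour are illustrative assumptions.
export function slugify(title: string): string {
  return title.trim().toLowerCase().replace(/\s+/g, "-");
}

test("slugify turns a title into a URL-friendly slug", () => {
  expect(slugify("  Why use React JS ")).toBe("why-use-react-js");
});
```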

      Progressive Web Apps (PWA)

      Intended to work on every device using standard browsers, progressive web applications are built with common technologies (HTML, CSS, JavaScript). What distinguishes them is the progressive feature, meaning enhancing user experience based on browser capabilities. PWA is a term describing the particular approach to app design and development, utilizing the newest technologies to improve user experience, as long as the user’s browser is capable of supporting them. 

      The term includes 9 core characteristics, defined by the Google Chrome engineer who introduced it in 2015, among which are: remaining up-to-date, responsiveness (adjusting to various screen sizes) and safety. The PWA approach leads to building efficient, responsive applications that look, feel and behave great, contributing to user’s satisfaction. React.js, equipped with developer tools and other add-ons, facilitates PWA development to bring out the best from the code for mobile web app users. 

      Maturity and stability 

Having been around for a few years already, React has matured and become a reliable tool. Each successive release has fixed issues previously hampering work, proving how well it is maintained and how ready it is for large-scale projects.

      The relatively low entry threshold

      React.js is a welcoming tool for users of JavaScript-related languages, enabling them to swiftly master the library. Starting work with React.js is possible even with a relatively small knowledge base, as users can hone their skills as the project develops. Hiring frontend developers who master React.js or adopting the library among existing teams is fairly easy, as the technology ranks among the most eagerly used tools.

      The community behind it

      Apart from React.js features making it a user-friendly tool with a simple syntax, one of its advantages is the active, supporting community behind it. React users and developers contribute to the library’s growth, creating one of the biggest online communities gathered around technology, providing support and solutions to the most commonly encountered issues. The community-supported library is more resilient to issues, and its daily use is secured by extensive expertise. 

      Is it worth it, after all?

      React’s popularity didn’t come out of the blue. Being fast, relatively simple and scalable, the library became a tool of choice among many Fortune 500 companies. LinkedIn, Apple, and Udemy are just a few of the companies building their services with React. The global usage among websites and market position in terms of traffic and popularity is still growing. Forecasts for the next few years are optimistic, as the demand for React.js powers isn’t coming to an end anytime soon. The library is far from being a silver bullet, but its advantages justifiably made React a huge player in web development. 

      Providing features aimed at simplifying web development, React.js ensures faster rendering, improves the process of building components, streamlining overall productivity and contributing to easier maintenance of written code. From the business’ point of view, React comes with a number of advantages, resulting in better UI/UX, faster development, more user- and search engine-friendly features, contributing to better market results. 

      Read more on React.js development.

      ]]>
      What is Node.js used for? Application and use cases https://dev.neurosys.com/blog/what-is-node-js-used-for-application-and-use-cases Wed, 23 Jun 2021 13:07:14 +0000 https://dev.neurosys.com/?post_type=article&p=8017 Node.js who? Overview

Node.js is a popular JavaScript runtime environment used in both web and mobile application development. The environment is a cross-platform, open-source and highly effective solution, ranked as the most widely used non-language tool in Stack Overflow’s 2020 Developer Survey. The idea behind the creation of Node.js was to provide a tool allowing two-way connectivity between the client and server, breaking the mold of the stateless request-response paradigm. Node.js is a game-changer, enabling developers to use JavaScript both client-side and server-side.

      Benefits of Node.js include its light weight, scalability and the potential of full-stack JavaScript. Released in 2009, the JavaScript runtime is built on Chrome’s V8 JavaScript engine and allows incorporating code written in other languages. Constant growth of libraries supports the development of powerful apps – Express, Passport, Sequelize, Mongoose, and countless others supplement the Node.js environment for efficient creation of applications. 

      Where to use Node.js

Node.js is a favored choice for website development and back-end APIs, handling extensive datasets in real time. npm, Node’s package manager backed by the world’s largest registry of JavaScript packages, also enables efficient mobile app development with the environment. Global enterprises and startups alike use Node.js to develop scalable, fully functional applications.

      Real-time chats

Real-time, multi-user applications are the place for Node.js to shine. Here’s where the environment puts its core qualities to use – speed, the ability to handle high traffic, a single-threaded event loop, and asynchronous processing. Asynchronous I/O methods allow it to process heavy traffic, resulting in real-time, efficient communication. The environment handles concurrent connections and assures fluidity on the user’s side.
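
A minimal chat server sketch built on Socket.IO illustrates the idea; the 'chat message' event name and the port are illustrative choices, not a prescribed setup:

```javascript
// chat-server.js – broadcast every incoming message to all connected clients
const http = require('http');
const { Server } = require('socket.io');

const httpServer = http.createServer();
const io = new Server(httpServer);

io.on('connection', (socket) => {
  socket.on('chat message', (msg) => {
    io.emit('chat message', msg); // fan the message out to every connected client
  });
});

httpServer.listen(3000, () => console.log('Chat server listening on port 3000'));
```

The single event loop can juggle thousands of such concurrent sockets without spawning a thread per connection.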

      Real-time collaboration tools

When it comes to serving a number of users who not only communicate but also cooperate on a common task, Node.js is again cleared for duty. Handling extensive I/O requests and actions happening simultaneously (editing, commenting, uploading files) is backed by the WebSocket protocol and Node’s Events API. Processing various actions in real time with immediate updates, together with the event-driven, non-blocking architecture, made Node.js a solution fit to meet the server-side needs of Trello, the widely known online collaboration application.
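
A rough sketch of the Events API side of this, with illustrative event names and handlers (not Trello’s actual implementation), could look like this:

```javascript
// board-events.js – several listeners react to one user action, event-driven and non-blocking
const { EventEmitter } = require('events');

const board = new EventEmitter();

board.on('card:edited', (card) => {
  console.log(`Persisting change to card ${card.id}`);          // e.g. write to the database
});
board.on('card:edited', (card) => {
  console.log(`Notifying collaborators about card ${card.id}`); // e.g. push over WebSockets
});

// Emitting once triggers every listener without blocking the code that emitted it
board.emit('card:edited', { id: 42, title: 'Review pull request' });
```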

      Games

Coupling Node.js with the Socket.IO library brings out the best results. While the environment handles the back-end side of the game, including the game logic running on the server, Socket.IO provides a real-time communication channel between the web browser and the server. Node.js works best in games requiring high responsiveness, while most point-and-click games won’t fully unleash its potential. As a non-blocking, event-driven solution, Node is well suited for browser-based games.

      Streaming services

The native Stream API, with its built-in interfaces for readable and writable streams, makes Node.js a go-to solution for streaming services. Streams in Node.js are a data-handling method used to process input and output sequentially. Unlike traditional methods, the program handles data pieces one by one, which reduces memory consumption. Since apps built with Node.js don’t need to store whole files as temporary data, downloading is steady and efficient.

Streams enable reading extensive files without using up the device’s memory. The textbook example of Node.js-powered streaming is YouTube, where users watch videos composed of smaller data chunks, processed continuously by the app. Additionally, streams contribute to code composability: piping data between code pieces makes it easy to connect components with a common result in mind.
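
Here is a minimal sketch of that piping approach, assuming a hypothetical big-video.mp4 file sitting next to the script:

```javascript
// stream-server.js – serve a large file chunk by chunk instead of loading it into memory
const http = require('http');
const fs = require('fs');

http.createServer((req, res) => {
  const readStream = fs.createReadStream('./big-video.mp4'); // hypothetical large file
  res.writeHead(200, { 'Content-Type': 'video/mp4' });
  readStream.pipe(res);                    // data flows piece by piece; memory use stays flat
  readStream.on('error', () => res.end()); // close the response if the file can't be read
}).listen(3000);
```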

      Highly scalable applications

Fast, lightweight and good at handling a constantly growing stream of requests, Node.js is a solution to app-scaling challenges. Even though it is single-threaded and not suited to CPU-intensive applications, Node handles scaling admirably. Not only does it make full use of the machine’s available CPU power through the cluster module, but it also comes with adequate scaling strategies. Cloning and splitting the application results in more “hands on deck” to handle workload segments, while decomposing breaks the application down into multiple apps with dedicated databases.
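
The cloning strategy can be sketched in a few lines with the built-in cluster module (on older Node versions the flag is cluster.isMaster rather than cluster.isPrimary):

```javascript
// cluster-server.js – clone the app across all available CPU cores
const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isPrimary) {
  for (let i = 0; i < os.cpus().length; i++) {
    cluster.fork();                         // one worker per core
  }
  cluster.on('exit', () => cluster.fork()); // replace a worker if it dies
} else {
  // Each worker runs its own copy of the server; incoming connections are shared between them
  http.createServer((req, res) => {
    res.end(`Handled by worker ${process.pid}`);
  }).listen(3000);
}
```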

      Web development/Complex Single Page Applications

Combining the Node.js environment with the Express framework and suitable packages brings a number of benefits for web developers. Together they solve nearly every common problem: configuring web app settings, integrating with view-generating engines, creating request-processing functions for various HTTP methods, and routing requests to different URLs.
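
A minimal Express sketch covering those pieces might look like this; the /articles routes and the response data are illustrative, and the view engine line assumes a template engine such as pug is installed:

```javascript
// app.js – basic settings, JSON parsing and routes for different HTTP methods
const express = require('express');
const app = express();

app.set('view engine', 'pug'); // integration with a view-generating engine (assumed dependency)
app.use(express.json());       // parse JSON request bodies

app.get('/articles', (req, res) => {
  res.json([{ id: 1, title: 'What is Node.js used for?' }]); // illustrative data
});

app.post('/articles', (req, res) => {
  res.status(201).json({ id: 2, ...req.body });
});

app.listen(3000, () => console.log('Web app listening on port 3000'));
```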

Node.js helped change web application development by moving beyond the previously typical response paradigm. For some 20 years of the stateless request-response model, it was the client that initiated communication. With Node, web apps can exchange data freely in real-time, two-way communication with clients, databases, or other external services.

      Microservices

Microservices, or the microservice architecture, is an approach to providing services to clients. Instead of building one extensive service, this distributed computing architecture offers a collection of small, loosely bundled services. Switching from monoliths to microservices helps to build and expand products through optimized scaling, while ensuring compatibility and letting each service run its processes independently.

With its non-blocking, event-driven I/O, Node.js speeds up development for low-CPU workloads, e.g. database queries. Node.js features like a low memory footprint, a simple initial code setup and easy configuration make it a popular choice for microservices. One of the scaling strategies mentioned above, decomposition, is particularly useful in microservices and, when implemented right, can be beneficial to the project.
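
As a rough illustration of how small such a service can be, here is a hypothetical orders microservice exposing a single resource and a health check; the data, routes and port are made up for the example:

```javascript
// orders-service.js – one small service, one responsibility, its own port
const express = require('express');
const app = express();

// In-memory stand-in for the service's dedicated datastore
const orders = [{ id: 1, item: 'keyboard', status: 'shipped' }];

app.get('/orders/:id', (req, res) => {
  const order = orders.find((o) => o.id === Number(req.params.id));
  order ? res.json(order) : res.status(404).json({ error: 'Not found' });
});

app.get('/health', (req, res) => res.json({ status: 'ok' })); // used by an orchestrator's probes

app.listen(4001, () => console.log('Orders microservice listening on port 4001'));
```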

      Is Node.js the bee’s knees in web development solutions?

The above fields are shining examples of how Node.js streamlines the development of various demanding projects. Node.js has grown into one of the most popular solutions among developers, used in extremely high-traffic websites and quickly growing applications. Its JavaScript origin allows pieces of code to be shared between the front end and back end. Faster loading times and the ability to process large numbers of requests simultaneously, keeping up with the pace of dynamically growing products, are some of the reasons why the world’s biggest service providers like Uber, PayPal and Netflix chose Node.js to improve their applications.

The revolutionary “JavaScript everywhere” approach that followed Node’s appearance is a milestone in creating robust applications built to bring value to the business. Despite some challenges, and with newly emerging technologies nipping at its heels, Node.js offers more advantages than disadvantages for developing real-time, relatively easy-to-deploy applications that handle vast amounts of data.

      Read more on Node.js development.

      ]]>
      MAGDA – our open-source solution for spaghetti code https://dev.neurosys.com/blog/magda-our-open-source-solution-for-spaghetti-code Mon, 12 Apr 2021 08:15:56 +0000 https://dev.neurosys.com/?post_type=article&p=6389 Introduction

We would like to introduce you to our latest open-source library: MAGDA. The name is an abbreviation for “Modular Asynchronous Graphs with Directed and Acyclic edges”, which fully describes the idea behind it. The library enables building modular data pipelines with asynchronous processing in e.g. machine learning and data science projects. It is dedicated to Python projects and is available on the NeuroSYS GitHub, as well as in the PyPI repository. It aids our R&D teams not only by introducing some abstraction (classes and functions) but also by imposing an architectural pattern onto the project.

      Example of coffee brewing pipeline with MAGDA

      How does it get rid of spaghetti code?

      As described above, MAGDA is composed of a few features:

1. Modular – code should be divided into small logical blocks (modules) with explicit input and output. A module could be a simple filter, a database connector or a wrapper around a huge deep learning model. Just remember: one module – one role.
      2. Asynchronous – the library is based on asyncio and ray, which allows it to run modules simultaneously. This gives us a simple optimization out of the box.
      3. Graphs – modules are joined together into one connected pipeline/stream. During the design stage, we can think of modules as graph nodes and focus solely on their role and how they connect with each other.
4. Directed – the modules’ dependencies (and the graph’s connections) are asymmetric. Since the graph always “flows” in the same direction, we can easily determine the ancestors and descendants of a module. Therefore, we can clearly point out where the pipeline begins and ends.
      5. Acyclic – each module is always processed just once during every run. This means that there is no path in the graph (modules’ dependencies) which starts and ends at the same module.

By combining all of these features, MAGDA creates a concrete project template, where each part of the project is enclosed in a module with a specific input and output. Each module’s behavior can also be modified by providing custom, module-specific parameters.

      Application flow is created by joining modules into a pipeline, where each part of the pipeline can be replaced by another module with a corresponding interface. Finally, the whole pipeline can be easily written to and automatically loaded from a single YAML file.

When correctly applied, you obtain a project with clearly defined boundaries and interfaces. When modifying a module, you rely only on the information provided by the accepted interfaces and parameters, regardless of the rest of the system – similar to the “inversion of control” design pattern.

      Use-cases

The library can be used in any Python project that can be described as an instruction with a set of well-defined steps. Our R&D team is making use of MAGDA in various services: from small solutions with only a few modules to a complete Question-Answering pipeline. The most valuable aspect is how easily any part of the pipeline can be replaced without worrying about the rest of the system. Creating a modular application is especially important when performing reliable and repeatable experiments, where only certain parts or parameters are modified. Apart from that, you can also gain from asynchronous processing of several subparts at the same time.

      Summary

      Since MAGDA is our brand new project (current version: 0.1), some features might still be missing. Feel free to create an issue, share a feature request, or post a question and contribute!

      Project co-financed from European Union funds under the European Regional Development Funds as part of the Smart Growth Operational Programme.
      Project implemented as part of the National Centre for Research and Development: Fast Track.

      ]]>
      Explain IT To Your Grandparents At The Christmas Dinner https://dev.neurosys.com/blog/explain-it-to-your-grandparents Mon, 23 Dec 2019 09:17:16 +0000 https://dev.neurosys.com/?post_type=article&p=2690 You cannot keep coding for too long because your fingers get cold…?

      You have to bypass the city centre because of the crowded marketplace…?

      You cannot plan Sprints normally because everyone around is going for vacations…?

      Well, these are undeniable signs of the Christmas season, which among all the above-mentioned inconveniences will also bring you to the dinner table with IT-ignorants (aka your family) and their hilarious questions: “So I’ve heard you’re no slouch on computers, hah? Could you take a look at my PC after dinner maybe? It’s getting impossible to shop on AliExpress. I’m sure it will take you only a minute…”

      We’ve all been in that position trying to clarify what our work in IT is about, being misunderstood and unappreciated, and there is no escape from it this year again. 

But instead of accepting your bad fortune and agreeing to format the disk, or reinstall Windows XP, or clean up a browser for your aunty without a fight, we suggest making things clear this time. At the end of the day, there is nothing like having rapport and finding common ground with your family (let alone escaping a Windows installation 😉). So here you go – an explanation of how IT works that even your grandparents will get:

      *                 *                 *

      Me: So, folks, I guess you all went through the renovation of your house at some point, right? How did you do that?

      Grandpa: (sighs in relief) Oh finally you got interested in something besides your computers!

I tried once to renovate the house on my own, but your grandma is so hard to please that I had to ask your uncle Mike, who works in a construction company, to help me out. He recommended guys who could change the heating system to a more modern one and also those who specialize in those fancy-schmancy decoration works. They were renovating the house and Mike – (mumbles irritably) at least something he is suited for – was bringing them lunch and at the same time was keeping an eye on them. 

      Me: Okay, so people in IT work in a very similar way, just we call everything funny names. 

The thing we are working on, like the house in your case, we call a Product. The person who is interested in its success, similar to you during the renovation, we call a Product Owner. And the nagging grandma, whom the whole hassle is for and who would scold you severely if something doesn’t go the way she wants, we call a Stakeholder.

Similar to you hiring someone to do a renovation in your house, companies outsource or hire developers like me to make software (=products) for them. Some developers work on the core of the product (like the heating) – they are called backend developers. And others take care of the product’s fancy facade – we call them frontend developers.

      And then we also have someone like uncle Mike, whose role is to make developers happy but also keep an eye on their work. If uncle Mike would work in IT, we would call him a Scrum Master or a Project Manager.

      Grandpa (chuckles): Mike – a Master? If he is a Master, why can’t he do the renovation on his own?

      Me: All developers ask the same question. But who would know without him whether the team works and uses building supplies effectively? 

      Grandpa reluctantly agrees.

      Me: And how often did you talk to the workers?

      Grandpa: Every few weeks we were meeting to see what should be done next (you know, your grandma has new ideas every week) and what materials they need for that. And every morning I was passing by just to make sure they have all they need. Besides, your grandma wanted to see every chunk of work when it was done…

      Grandma with displeasure: Why don’t you tell how you were getting drunk with them every second Friday?

      Grandpa squirms: These were business meetings. We were checking results and I was showing them how to do things better… 

      Me: We work exactly the same way in IT: 

      • We plan and estimate every few weeks, which we call a Sprint.
      • Then every day we quickly meet in the morning for a Daily to be sure everyone in the team has what they need.
      • And after a new chunk of work is done, we show it to stakeholders and call it a Sprint Review.
      • And we also gather for a Retrospective after every Sprint to see how the Sprint went and what could be improved. We don’t drink during the Retrospective though, only smoothies. 

Grandma interferes: I’ve heard that our neighbours did it better. They just went on a long vacation, and when they returned, the renovation was done. Why didn’t we do it like them?

      Grandpa: Oh, please…! Then you would have to live with that olive and not sage green wall colour and would need to accept 1000 other things that you wanted to change during the renovation. You know you cannot change much when the work is done.

Me: Exactly! Working in Sprints and being involved allows you to get exactly what you want in the end and makes changes along the way easier (we call it Agile). And this way you can better predict when the renovation will be done, instead of coming back from vacation to a windowless house.

I hope it’s clearer now what I do for a living?

      Aunty: So you say IT is similar to house renovation?

      Me (frightenedly): Yeah…

      Aunty: I think my PC needs a renovation. Can you come tomorrow and take a look at it?

      Me: 

      frustrated man in IT

      We hope your story will have a better end. In any case, if you enjoy working in agile and want to learn more about it, check out these (way more serious!) blog posts:

      ]]>
Will The QA Role Become Redundant Any Time Soon? https://dev.neurosys.com/blog/will-the-qa-role-become-redundant-any-time-soon Fri, 13 Dec 2019 11:10:18 +0000 https://dev.neurosys.com/?post_type=article&p=2668 As many manual tests have been automated and more programmers write tests on their own these days, a fair question arises: Will the QA role become redundant any time soon?

      There are a lot of ongoing debates around this topic. Automated testing is often regarded as a cure-all that solves all QA problems. Many development teams claim that they do great without QA engineers. Companies follow this approach without giving it publicity for obvious reasons, but some of them even boast about it. 

Yahoo bragged about successfully eliminating QA at some point. Microsoft and Salesforce were reportedly working without dedicated QA teams too. For Salesforce, though, this practice led to a multi-instance core and Communities service disruption – but that’s another story. And for Windows… well, you know the case:

      We are not among those who stick to redundant roles. However, we don’t see any reason to talk about eliminating the QA role, rather about transforming it. 

Indeed, in some projects, due to their scale, domain or context, having a full-time QA specialist might not be necessary. But this is rather an exception. Developers, and even more so automated tests, cannot replace QA engineers. Moreover, there are many benefits in separating the dev and QA roles. Here is why:

      Different mindsets: Developers write tests to prove their code is correct. Testers – to find how code may fail.

      There is a crucial difference in developers’ and testers’ mentality. 

      Developers have special feelings about the code they write. They know it inside and out. They know the logic behind it. And they can never be impartial about it.

      Precious Jared Padalecki

That’s why developers’ tests are often limited. Programmers already know how the software should perform and stick to these familiar scopes and scenarios while testing. This leads to code that is perfectly accurate in familiar situations but sometimes prone to errors in non-standard ones (missing edge cases). 

      QA engineers, on the other hand, are not attached to code they test. They have no interest in testing it gently. They try out all creative ways of using the software, just like the future users, and find bugs that stay outside of developers’ reach. 

      As a rule, testers understand business assumptions better and follow specifications more precisely, which is necessary for delivering software that meets business needs. 

All this doesn’t mean that one mentality is better than the other. Both are great and needed, as the true superpower comes from the synergy between testers and developers.

      Distribution of responsibility

      work conan

Whereas programmers are usually responsible for a certain feature, system, microservice, or simply a piece of code, QA engineers are responsible for the product as a whole. They make sure tests stay up to date and that new bugs and vulnerabilities do not appear after modifications (to different parts of the software! by different development teams!) have been applied. 

      A great example here is the responsibility of performing end-to-end testing of a product based on microservices distributed among various dev teams. Having a QA specialist dedicated to such complex tasks that require the coordination of different teams is crucial.

      Not all types of testing can be automated or performed by developers

      We are big fans of automation, but let’s be honest – automation cannot cover all the tests as some of them require human cognitive ability and common sense. 

      Exploratory testing is a good example here. Testers need to use their creativity, experience, analytical skills and be familiar with the logic behind the software in order to find all possible issues. Sometimes it’s intuition and simply a human way of thinking that helps testers accomplish their task. You cannot expect it to be done by a computer. 

Another example is black-box testing, which isn’t possible without a “fresh” pair of human eyes. 

A smart QA tester is the best kind of rubber duck for a developer

Sorry if this comparison hurts someone’s feelings, but we couldn’t explain the idea better. We all know that software development is mentally demanding, and talking to someone about your programming problems is a great way to find a solution. And let’s be honest, the rubber duck’s role in such discussions is extremely exaggerated. 

      coding

The situation with QA is similar to safety at work. Safety isn’t the sole responsibility of a Safety Officer – everyone should follow safety rules. But if the workplace is safe, it doesn’t mean you can fire the Safety Officer. And people who follow safety instructions cannot replace the Safety Officer.

QA is especially crucial for the ongoing development of products that have already been launched and adopted by real users. Eliminating the QA role or shifting it to someone else means transferring the responsibility of finding bugs onto existing users, which is fraught with reputational risks.

      We hope you found our arguments interesting. If you disagree or have something else to say – don’t hesitate to leave a message in a comment section below or contact us directly. 

      ]]>
Dream teams need trust and transparency (5 ways to make it work) https://dev.neurosys.com/blog/teams-need-trust-and-transparency-5-ways Mon, 21 Oct 2019 06:03:19 +0000 https://dev.neurosys.com/?post_type=article&p=2518 Trust, as well as open and honest communication, is key to any project’s success. That is a simple idea which is hard to disagree with. But when someone starts talking about transparency in projects, it’s often treated as something so vague and far removed from exact science that the topic doesn’t get due attention.

      “Project transparency: definition

      By transparency in project management, we understand the culture where all the information is visible and easily accessible, and communication among the stakeholders is open and honest.”

      At NeuroSYS we think it’s important not only to talk about transparency in project management but strive to achieve it. And to make this conversation more practical, we want to share some specific methods and tools we use for making our projects more transparent.

      1. Solid client onboarding

As a rule of thumb, the best projects come out of a trustworthy partnership with the client. That’s why we take all measures to help the client become part of the team from day zero. And ensuring project transparency here is key.

      We secure smooth cooperation by building and strengthening the bond within the team from the very beginning. Our onboarding is engaging, honest and personal (we always try to meet in person and introduce the whole dedicated team offline whenever possible; if not – cameras during our conversations are always on).

We don’t impose our rules during onboarding. Instead, we offer to set up a way of cooperating that is convenient for both sides.

      The onboarding aims to make it clear: what is expected from every party and how they can participate; how to track the progress of the project; what unforeseen circumstances there can be and how to deal with them; etc. None of the questions or possible scenarios should stay undiscussed after onboarding.

Last but not least, it’s also crucial to define during onboarding who will be the main person responsible for communication and making decisions on the client’s side. Having the communication flow defined and under control ensures that nothing gets lost or overlooked.

      2. Enhancing software transparency

Let’s be realists: no matter how attached you are to the software projects you’re working on, one day you’ll most probably hand them over to someone else. And the last thing you want is your code becoming a black box or the mysterious inscriptions of some ancient tribe in the hands of a newly joined IT team or team member.

      To make sure that the software we create is easy to take over, we accompany it with clear and informative documentation, as well as generous wiki descriptions. Sorry, but there is no other way out of it. Simply:

We have made a habit of taking care of documentation for many years now, and as a result, the code we create can easily be taken over at any moment.

      3. Being always true and loyal to agile

      Transparency is one of the main pillars of agility, and agile frameworks are packed with procedures and habits that will help you improve it.

      Following agile ceremonies and making sure that all the stakeholders attend them regularly is a great way to make a project glass-clear. Sprint planning makes it explicit what is expected from the team, when and why. Daily meetings help to keep everyone up-to-date and react fast if something goes wrong. And retros can be a great source of lessons learnt for the whole organisation, not only the team involved.

      4. Using tools for improving transparency in a project

      There are plenty of tools and instruments on the market that can help you visualize information better, make communication more efficient, and thus improve the transparency of your project.

But in our humble opinion, it’s not so important which exact tool (or set of tools) you use. What matters is that people don’t get lost among too many tools and that they feel comfortable using them. We also appreciate the possibility of integrating tools with each other and applying customizations to create one centralized flow of information.

      Here are some of the tools we are using:

YouTrack by JetBrains: an issue tracker for developers with a broad range of functionalities, allowing teams to visualise the agile board, organize user stories, do planning, time tracking and reporting, and track bugs. Our clients get full access to all the information in YouTrack, so it’s easier for them to follow the status of the project and stay involved.

      Mattermost / Slack

These are tools that are great for communication overall. Our teams use dedicated channels there, which are also integrated with YouTrack, so they get notifications when something happens that requires their attention (e.g. a pull request is merged, or pull requests are waiting for review).

      Confluence

Confluence is another team collaboration tool that allows teams to upload, manage and share project-related documentation and work on it together. We like it for its wiki section and the overall documentation format, which helps us keep the most important information about a project in one place.

You can find a more detailed overview of IT project management tools here.

      5. Transparent culture

Trust and transparency depend on the culture inside the company, or how people feel about sharing information and asking for it. And we believe there are specific things you can do to improve it:

      Flat organisational structure
Between our CEO and a junior, there are just a few steps in the management hierarchy. This means that everyone feels comfortable approaching the CEO or other higher-ranked colleagues regarding any matter, which makes communication less entangled. It also helps people in any position influence company decisions more easily.

      No shaming for making mistakes
People tend to hide information if they believe they can be punished or criticised for it. That’s why it’s important to reassure them of the opposite. Here agility helps again. Thanks to short sprints, it’s almost impossible to make a mistake that is hard to fix. People aren’t afraid to make mistakes and are more eager to communicate openly.

      * * *

We could keep talking on this topic for longer, but these were the main points for today. We hope you enjoyed the read and got something useful out of it this time. For more updates, stay tuned by following us on LinkedIn and Facebook!

      ]]>