Refactoring a front end app – a journey with Protractor

By: Clement Pinchedez / 14 Jun 2016

Abstract

Some months ago, we had to refactor an internal web-based tool called the Design Tool. We radically redesigned the back end and the front end, and that could not have been done without solid end-to-end tests to prevent regressions.

In this article, we share our experience with Protractor, the framework we chose to set up this testing phase. In particular, we will tell you how important it is to stick to best practices and to understand what is going on under the hood when you launch “protractor conf.js” in your console. We will especially cover the tricky use case of testing a file upload drag-and-drop element: how to simulate a file drag and drop in the browser and assert that it worked correctly.

In the world of online advertising, the visual aspect of the final banners is traditionally the responsibility of designers, the people we call the creative services. With some Photoshop magic, they choose the colors, design the layouts, create the logo images, write the texts… and voilà! Here is the set of pretty banners to display around the World Wide Web.

The Design Tool is an internal Criteo tool that our creative services use to set up the visual elements of the banners. It enables the designers to do things like choosing the logos, the coupon slides, the fonts, the corner radius, the colors… They can also preview their work and create demo pages to send to our advertiser customers for validation.

The UI before refactoring.


Developed by the R&D team, the Design Tool is a .NET web application with a front end partly based on Angular.

However, a few months ago we faced some maintainability issues. In particular, it was getting harder and harder to develop new features for this tool because of its lack of modularity. We also had to go through a tedious one-week-long manual validation phase on a test environment before each release.

That could not last any longer, so, in agreement with the creative and product teams, we decided to freeze the development of new features for some time and focus exclusively on a redesign of the application.

The refactoring

On the back-end side, we migrated from the legacy asmx endpoints to more recent Web API REST services (with Swagger for API documentation). We also simplified the code and added logs and metrics to Kibana and Graphite for auditing and bug investigation.

But the most radical change was on the front-end side, where we completely rebuilt the architecture into a full Angular application. We used Angular UI Router to manage the page flow and Angular directives to componentize our UI elements. We also set up a front-end build and task automation pipeline using npm, bower and grunt. Last but not least, to give a more unified user experience, we rebooted the complete look and feel of the UI by switching to Angular Material.

UI after refactoring.


Testing

We chose Protractor for that purpose. Since the application was now fully Angular, it was the natural choice.

Creating tests for the Design Tool was actually straightforward. The Design Tool is basically a set of distinct panels; there is no really complex workflow. A typical test navigates to a panel, ticks some options, fills some text fields and checks that we get the expected result. We loved the syntactic sugar introduced by Protractor, and especially the way it spares us from setting timeouts before every assertion, since it waits for Angular to settle on its own.
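For illustration, here is roughly what such a spec looks like. The panel name, route and selectors are invented for the example; `describe`, `it`, `expect`, `browser`, `element` and `by` are globals that Protractor provides at run time, which is why the suite is wrapped in a plain function in this sketch:

```javascript
// Sketch of a typical panel spec. The "fonts" panel, its route and its
// selectors are invented for this example; Protractor provides describe,
// it, expect, browser, element and by when the suite actually runs.
function fontsPanelSuite() {
    describe("fonts panel", function () {
        it("applies the title text to the banner preview", function () {
            // navigate to the panel
            browser.get("/#/design/fonts");
            // tick an option
            element(by.model("panel.useBoldTitle")).click();
            // fill a text field
            element(by.model("panel.titleText")).sendKeys("Big Sale");
            // no manual timeout needed: Protractor waits for Angular to settle
            expect(element(by.css(".banner-preview .title")).getText())
                .toBe("Big Sale");
        });
    });
}
```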

We quickly realized, however, that we had to stick to a test coding style. End-to-end tests should be useful, easy to maintain and reliable, especially in the context of a refactoring where the layout of the UI elements may change a lot across commits. So we adopted guidelines like avoiding XPath element accessors, and creating PageObjects not only to wrap our own code, but also the various Angular Material components. The goal was to decouple our test implementation from the graphical library as much as possible. These guidelines are explained on the Protractor website, and I think they helped us a lot to write relevant tests.
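As an illustration, a PageObject for a hypothetical colors panel could look like the sketch below (the route, selectors and model names are invented for the example; `element` and `by` are the globals Protractor exposes to spec files, and they are only accessed when a method is called):

```javascript
// PageObject for a hypothetical "colors" panel. All selectors live here,
// so specs never reference the DOM or the Angular Material markup directly.
var ColorsPanel = function () {
    this.open = function () {
        return browser.get("/#/design/colors");
    };
    // wrap the Angular Material checkbox behind an intention-revealing method:
    // if the md-checkbox markup changes, only this file is impacted
    this.toggleGradient = function () {
        return element(by.css("md-checkbox[aria-label='Gradient']")).click();
    };
    this.setHexCode = function (value) {
        return element(by.model("panel.hexCode")).sendKeys(value);
    };
    this.previewColor = function () {
        return element(by.css(".banner-preview")).getCssValue("background-color");
    };
};

module.exports = ColorsPanel;
```

A spec then only talks to the PageObject (`panel.open()`, `panel.setHexCode("FF8800")`, …), so it reads like a user scenario and survives layout changes.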

Nevertheless, we had some trouble testing one of the important features of the Design Tool: uploading files through a drag-and-drop operation.

In other words, many changes were made to build a better Design Tool, but this could not have been done without testing, and especially end-to-end tests, to prevent regressions during that work. We already had a sandbox environment where the newest builds of the application were deployed; this is where we execute these tests.

Testing file Drag & Drop upload with Protractor

Uploading media is a common use case in the Design Tool. Designers upload things like logos, for example, and we cannot afford UI glitches on this feature.

The protractor library is a Node.js program wrapping the WebDriverJS library to access a Selenium server, which itself pilots the browser. The issue with the file upload use case is that we cannot really ask a browser to drop a file from the filesystem onto an upload zone. We have two separate execution contexts, the Protractor (Node.js) context and the browser context, and we cannot easily switch between them.

To overcome this problem, we tried to simulate a “drop” mouse event in the browser. The trick is to inject JavaScript code into the browser using the executeAsyncScript function of the browser object.

Here is the code we use to drop files onto a given file drop zone. Note that the code transforming the file data, encoded in base64, into objects to attach to the mouse event is a bit tricky (it is greatly inspired by this Stack Overflow answer: http://stackoverflow.com/questions/16245767/creating-a-blob-from-a-base64-string-in-javascript).

module.exports = (function ()
{
	// use fs and path Node modules to access the test files on the disk
    var fs = require("fs");
    var path = require("path");
    var out = {};

    // this is the exposed function
    // inputs are a list of file paths to upload and the selector of the file drop zone
    out.dropMedia = function (filePaths, selector)
    {
        var filePath;
        var mediaList = {};
 
        // get the file contents in base64
        for (var index = 0, max = filePaths.length; index < max; index++) {
            filePath = filePaths[index];
            mediaList[path.basename(filePath)] = fs.readFileSync(filePath).toString("base64");
        }
		
		// select drop zone with the selectors
        element(selector).getWebElement().then(function (selectedElement) {
			// use executeAsyncScript to execute Javascript code in the browser context
            browser.executeAsyncScript(function (mediaList, domElement, callback)
            {
				// here we are in the browser context
				// decode the media content with window.atob and create Blob objects
                var b64toBlob = function (b64Data, contentType, sliceSize)
                {
                    contentType = contentType || '';
                    sliceSize = sliceSize || 512;
                    var byteCharacters = window.atob(b64Data);
                    var byteArrays = [];
                    for (var offset = 0; offset < byteCharacters.length; offset += sliceSize)
                    {
                        var slice = byteCharacters.slice(offset, offset + sliceSize);
                        var byteNumbers = new Array(slice.length);
                        for (var i = 0; i < slice.length; i++)
                        {
                            byteNumbers[i] = slice.charCodeAt(i);
                        }
                        var byteArray = new Uint8Array(byteNumbers);
                        byteArrays.push(byteArray);
                    }
                    var blob = new Blob(byteArrays, { type: contentType });
                    return blob;
                }
				// create the array of file objects to attach to the mouse drop event
                var files = [];
                var fileNameList = Object.getOwnPropertyNames(mediaList);
                for (var index = 0, max = fileNameList.length; index < max; index++)
                {
                    var fileName = fileNameList[index];
                    var imageData = mediaList[fileName];
                    var blob = b64toBlob(imageData, 'image/jpeg');
                    var file = new File([blob], fileName, {
                        lastModified: Date.now(),
                        type: "image/jpeg"
                    });
                    files.push(file);
                }
				// create the mouse drop event
                var event = new MouseEvent("drop");
                event.dataTransfer = { "files": { "item": function (i) { return files[i]; }, "length": files.length } }
				// dispatch the event
                domElement.dispatchEvent(event);
                // invoke the callback to signal executeAsyncScript that we are done
                callback();
            }, mediaList, selectedElement).then(function (result)
            {
                console.log("Media uploaded...");
            });
        });
    }
    return out;
}());

… and assert it was uploaded

But after the mouse event has been dispatched, how do we check that the file was correctly uploaded?

In our case, we wanted to do a simple HEAD request to check that the image was actually uploaded to a given URL. To do that, we had to understand the mechanism of Protractor promises (http://www.protractortest.org/#/control-flow) well. That was tricky because Protractor has its own way of stacking all operations and executing them asynchronously, returning promises which are eventually resolved during the Jasmine assertions. This is what is called the control flow.

So, to assert on the result of a HEAD request, we created a hook into this Protractor control flow to return the result of an HTTP request as a promise. This is how it is done:

module.exports = (function ()
{
    // import the request module to make the HTTP requests
    var request = require("request");
    var out = {};
    // returns, as a promise, the HTTP status code of a HEAD request on a media URL
    out.getStatusCode = function (resourceUrlPromise) {
        // hook into the protractor control flow
        return protractor.promise.controlFlow().execute(function() {
            // create a protractor promise
            var deferred = protractor.promise.defer();
            resourceUrlPromise.then(function (resourceUrl)
            {
                // the actual HTTP HEAD request to the resource
                request({
                    method: "HEAD",
                    followAllRedirects: true,
                    url: resourceUrl
                }, function (error, response, body) {
                    // fulfill the promise in the callback of the HTTP response
                    deferred.fulfill(response.statusCode);
                });
            });
            return deferred.promise;
        });
    }
    return out;
}());

Then we use it like this in a Jasmine assert:

expect(httpHelper.getStatusCode(myMediaUrl)).toBe(200);

Conclusion

Protractor is great for writing e2e tests as long as you stick to best practices to keep them reliable and readable. But to use Protractor really well, it is very useful to know how the framework works under the hood: the browser and Protractor JavaScript contexts, and the asynchronous control flow.

Finally, the refactoring of the Design Tool was quite a success. We delivered the refactored Design Tool progressively, and with minimal pain.

We now have a more robust application with an extensible architecture. In particular, we now enjoy how easy it is to develop new directives and design panel layouts with Angular Material.

We owe a lot to Protractor, but that refactoring definitely could not have been done without the support of users and product owners, who were willing to cope with a feature development freeze while we reduced the technical debt.