Wednesday, 28 April 2021

Kitura: Asynchronous Server-Side Swift Programming with GCD and OperationQueue

Originally published in 2018 on the IBM Swift initiative blog. The initiative was discontinued and all of its articles, including this one, were removed. I am republishing the archive here to keep my content alive. In 2021, this content is no longer particularly relevant.


 

In enterprise systems, microservices are designed with sound architectural practices and implemented to deliver business functionality as services, which are typically consumed through HTTP REST API calls. In this blog, I explain a better programming approach for server-side asynchronous APIs using GCD (Grand Central Dispatch) and Operation.

 

In general, backend microservices execute heavy processes such as DB CRUD operations, component-level communication, media file processing, and so on. In iOS Swift programming, it is common practice to use closures and callback methods to make asynchronous calls. Server-side Swift lets developers use the same closure and callback techniques for asynchronous tasks without any limitation. However, compared to iOS programming, server-side API code needs plenty of asynchronous calls inside a single module block, which leads to multiple callback closures and deeply nested async code. An alternative solution is to use GCD and Operation.

 

I will explain the solution and its benefits with the simple example below, using Kitura. In the example, each heavy process is given a 'time to execute' weight in seconds: P1 = 3 sec, P2 = 6 sec, P3 = 4 sec, P4 = 2 sec, P5 = 1 sec.

 

These functions are simulated with a sleep call that consumes a few seconds of execution time, and are defined as below.

 

func p1(_ onCompletion: @escaping (_ output: String) -> Void) {
    sleep(3)    // P1 = 3 sec
    onCompletion("p1")
}

func p2(_ onCompletion: @escaping (_ output: String) -> Void) {
    sleep(6)    // P2 = 6 sec
    onCompletion("p2")
}

 

 

Implementation Scenarios

 

In our example, we construct an HTTP GET API, say /dataIntensiveJob, that requires all the above processes, P1 - P5. These processes can be dependent or independent, and the module can be programmed either with nested async closures or with GCD. That gives us four implementation variants.

1.    "/dataIntensiveJobAsync/independent"
2.    "/dataIntensiveJobGCD/independent"
3.    "/dataIntensiveJobAsync/dependant"
4.    "/dataIntensiveJobGCD/dependant"

 

 

 

 

1.     Independent Task with Traditional Nested Async Closure Blocks

 

Typically, the module here requires a set of independent tasks to be completed. To avoid blocking the main thread, they can be programmed to run in multiple async blocks, irrespective of order. However, since there is no mechanism in place to be notified when all the tasks have completed, the operations are sequenced in a nested async block, and completion of all the tasks is signalled by the completion of the innermost block. The code snippet is given below.

 

func executeIndependentHeavyProcesses(_ onCompletion: @escaping (_ outputMessage: [String]) -> Void) {
    self.p1 { (output) in
        self.output.append(output)
        self.p2({ (output) in
            self.output.append(output)
            self.p3({ (output) in
                self.output.append(output)
                self.p4({ (output) in
                    self.output.append(output)
                    self.p5({ (output) in
                        self.output.append(output)
                        onCompletion(self.output)
                    })
                })
            })
        })
    }
}

 

The only advantage of doing it this way is that the code is quick to write. However, it forms a pyramid structure as the nesting grows, and it becomes complex as the number of lines of code increases. It also ends up with many open and close brackets, which makes the code difficult to read. Here, the processes execute in a fixed sequence, so the total response time is the sum of the individual process execution times.

 

Execution Order: P1 -> P2 -> P3 -> P4 -> P5

API Execution Total Response Time: 16.015 sec

 

 

 

2.     Independent Task with Operation Class

 

The alternative approach for executing the independent tasks is to use Operation. Here, an instance of OperationQueue is created. Operation queues are concurrent by default, but they can also be made serial through optional attributes. The independent tasks are added to the queue as block operations. At the end of the module, a single call, self.operationQueue.waitUntilAllOperationsAreFinished(), ensures that the completion callback on the next line is invoked only after all the submitted operations have finished. We can create multiple operation queues if required. Below is the equivalent module code using Operation.

 

let operationQueue = OperationQueue()
var output = [String]()

func executeIndependentHeavyProcesses(_ onCompletion: @escaping (_ outputMessage: [String]) -> Void) {
    self.operationQueue.addOperation {
        self.p1({ (output) in
            self.output.append(output)
        })
    }
    self.operationQueue.addOperation {
        self.p2({ (output) in
            self.output.append(output)
        })
    }
    …. // Other processes
    …..
    …..
    self.operationQueue.waitUntilAllOperationsAreFinished()
    onCompletion(self.output)
}

 

Although the number of lines of code is slightly higher than with the nested async approach, this version is much better than the first for the following reasons.

-    Independent tasks are executed concurrently on multiple worker threads, giving a faster response time.

-    The code has better readability and control. Each task lives in its own sub-block, so the brackets are easy to follow. The tasks can also be assigned to Operation variables and added to the same or different queues for reusability.

-    QoS and thread priority can be set as attributes on these queues, unlike an async closure block that runs on the system's default background queue (see the short sketch below).
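
For illustration, here is a minimal sketch (not from the original project) of tuning those queue attributes; the values are arbitrary examples.

import Foundation

let operationQueue = OperationQueue()
// Hint to the system that this work is user-initiated rather than background work.
operationQueue.qualityOfService = .userInitiated
// Cap the number of operations running at once; set this to 1 for a serial queue.
operationQueue.maxConcurrentOperationCount = 4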

 

Here, the execution order depends on when each task is submitted to the queue, and the tasks run in parallel. Hence, the total response time is roughly the duration of the longest-running task rather than the sum of all of them.

 

Execution Order:  P5 -> P3 -> P1 -> P2 -> P4

API Execution Total Response Time:  6.19 sec

 

 

 

3.     Dependent Task with Nested Async Closure Blocks

 

Here, the module requires a defined set of subtasks to be completed, where some or all of the subtasks depend on other subtasks within the same module. The module therefore expects the subtasks to complete in a defined execution order, and the nesting must be done carefully to preserve that order. Even here, to get notified when the last task completes, both the dependent and the independent tasks have to be chained together. Let's say the module requires four tasks, P1, P2, P3 and P4, to be completed, where P1 and P2 are mutually dependent and P3 and P4 are mutually dependent. The code then looks similar to the first use case.

 

func executeDependentHeavyProcesses(_ onCompletion: @escaping (_ outputMessage: [String]) -> Void) {
    self.p1 { (output) in
        self.output.append(output)
        self.p2({ (output) in
            self.output.append(output)
            self.p3({ (output) in
                self.output.append(output)
                self.p4({ (output) in
                    self.output.append(output)
                    onCompletion(self.output)
                })
            })
        })
    }
}

 

This approach is identical to the first one (independent tasks with nested async closures), except that the execution order within the dependent subtasks must be preserved; it could be P1 -> P2 -> P3 -> P4 or P3 -> P4 -> P1 -> P2. It carries all the disadvantages of the first use case, and the total response time is again the sum of the individual process execution times.

 

Execution Order: P1 -> P2 -> P3 -> P4

API Execution Total Response Time: 15.28 sec

 

 

 

4.     Dependent Task with Operation and GCD

 

Again, the alternative solution for complex use cases is to use GCD in addition to the OperationQueue explained in scenario 2. When the subtasks are dependent, maintaining the order of execution becomes critical, and OperationQueue's default concurrent execution might not work well on its own. The implementation can then be extended with GCD, a serialized OperationQueue, a simple async block, and so on. The variations are listed below.

 

a.    When the module contains a few dependent tasks that can be grouped

Here, we group the dependent tasks and run each group in a nested block. In our example, P1 and P2 form one group and P3 and P4 form another. Since the dependencies are between subtasks within a group and the groups are independent of each other, we submit each group block to the operation queue. To know when every block has completed, we create a GCD DispatchGroup object called 'dispatchGroup': we enter the group once for each submitted block and leave it when that block's final subtask completes. The dispatchGroup.wait() call at the end of the module blocks further execution of that module, but not the main queue.

Here, the OperationQueue acts more like a simple background GCD queue, so as an alternative we could also submit the group blocks to a plain GCD concurrent queue.

 

b.    When the module contains only dependent subtasks that cannot be grouped

In this case, we can still use an OperationQueue, but with a notification or dependency mechanism to control the sequence of execution. When the number of subtasks is small, it is better to stick with nested completion blocks to keep the code simple; otherwise, consider sequencing the operations when the subtasks are complex (a rough sketch of sequencing with dependencies is given below).
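
As a rough illustration of that idea (not code from the original project), dependent subtasks can be sequenced with Operation dependencies instead of nested completion blocks; the operations below are simple placeholders for P1-P4.

import Foundation

let queue = OperationQueue()

// Placeholder operations standing in for the P1...P4 subtasks.
let op1 = BlockOperation { Thread.sleep(forTimeInterval: 3); print("p1 done") }
let op2 = BlockOperation { Thread.sleep(forTimeInterval: 6); print("p2 done") }
let op3 = BlockOperation { Thread.sleep(forTimeInterval: 4); print("p3 done") }
let op4 = BlockOperation { Thread.sleep(forTimeInterval: 2); print("p4 done") }

// P2 must run after P1, and P4 after P3; the two chains remain independent.
op2.addDependency(op1)
op4.addDependency(op3)

queue.addOperations([op1, op2, op3, op4], waitUntilFinished: true)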

 

Below is the code snippet that uses subtask grouping and a GCD DispatchGroup.

let dispatchGroup = DispatchGroup()

func executeDependentHeavyProcesses(_ onCompletion: @escaping (_ outputMessage: [String]) -> Void) {
    // Enter the group before submitting each block, so that wait() below cannot
    // return before the operations have had a chance to start.
    self.dispatchGroup.enter()
    self.operationQueue.addOperation {
        self.p1({ (output) in
            self.output.append(output)
            self.p2({ (output) in
                self.output.append(output)
                self.dispatchGroup.leave()
            })
        })
    }

    self.dispatchGroup.enter()
    self.operationQueue.addOperation {
        self.p3({ (output) in
            self.output.append(output)
            self.p4({ (output) in
                self.output.append(output)
                self.dispatchGroup.leave()
            })
        })
    }

    self.dispatchGroup.wait()
    onCompletion(self.output)
}

 

 

The major advantage of using the GCD DispatchGroup here is that we get a scalable, easy-to-read and simplified implementation. We also get a performance boost, because concurrency is achieved at the group level.

 

Execution Order: P3 -> P1 -> P2 -> P4

API Execution Total Response Time: 8.0094 sec

 

 

 

Edge Case: Iteration over a Dependent (or) Independent Module

Let's consider an edge case where we need to iterate over and execute the entire dependent or independent module several times; a good example is bulk deletion or addition of users. We could achieve this the ugly way, with a 'for loop' and a counter variable that runs the module repeatedly. Really a bad idea! A better approach would be a recursive callback closure: on completion, call the same block again until the count condition is satisfied. Even then it runs sequentially and becomes hard to debug when a bug arises. Operation and GCD really do the magic here, providing a clean and scalable implementation with maximum concurrency. So, if five user records need to be added, all five 'add user' modules and their subtasks run on the best possible number of parallel threads. I am skipping the details of the example code as it is pretty straightforward, but it is included in my source code (Git) for reference; a rough sketch of the idea follows below.
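
Purely as a sketch of that idea (the addUser function and all names below are hypothetical, not the repository code), each iteration can be submitted as an operation and tracked with a DispatchGroup:

import Foundation
import Dispatch

let queue = OperationQueue()
let group = DispatchGroup()
let lock = NSLock()          // protects the shared results array across threads
var results = [String]()

// Hypothetical stand-in for the 'add user' module and its subtasks.
func addUser(_ id: Int, onCompletion: @escaping (String) -> Void) {
    Thread.sleep(forTimeInterval: 1)      // simulate the heavy work
    onCompletion("user-\(id) added")
}

for id in 1...5 {
    group.enter()
    queue.addOperation {
        addUser(id) { message in
            lock.lock(); results.append(message); lock.unlock()
            group.leave()
        }
    }
}

group.wait()                 // returns only after all five iterations finish
print(results)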

 

 

Performance Comparison: 

 

We can categorize the advantages discussed so far into (1) better performance and (2) ease of coding and maintenance. While ease of coding matters from the development and maintainability perspective, performance is something that cannot be compromised in a lightweight microservice architecture: we want API calls to respond as quickly as possible. For the different implementation scenarios explained so far, I ran the code and measured the total response time using the Postman REST client.

 

It is a well-known fact that concurrency gives a better turnaround time and a performance boost. But it is interesting to see the results below, because they show how drastically performance suffers when we fail to follow the right approach. This reiterates the importance of incorporating concurrent programming in a Swift-based microservice API implementation.

 

 

Use Case                             | Response Time (Nested Async Closures) | Response Time (GCD and Operations)
Dependent Task                       | 15029 ms                              | 8330 ms
Independent Task                     | 16025 ms                              | 6025 ms
Iteration (3x) on Independent Task   | 48061 ms                              | 6019 ms

 

The results clearly show the need to pick the right implementation approach for the use case. For instance, in the iteration case we see a large difference in response time with just a three-iteration loop. One may argue that we do not have to make the client wait until the operation is complete; we have options such as returning 202 Accepted. But we must realize that the processing and task turnaround time would still take a significant hit. In a real production use case like user management, the difference in processing time could be substantial.

 

 

Conclusion:

In a typical development environment, constraints such as the expected output and the time to deliver push developers towards a simple approach like nested async closures. I have personally seen how a PoC that was coded quickly gets polished and pushed straight into production because of time constraints. In the initial development stage, it is quite common to focus on the expected output and ignore performance. However, refactoring the code later for performance becomes cumbersome, so the best practice is to write code whose performance can be improved and tuned with minimal effort. This blog is not intended to compare the raw performance of concurrent programming with nested async code, but to highlight the significance and advantage of choosing the right approach for a given scenario.

 

Whether to use Operation and GCD depends heavily on the requirements of the use case; the same approach cannot be applied everywhere, as that makes the code inconsistent and cumbersome. While designing the code structure, developers should think about factors such as scalability, the ability to modularize code blocks, the scope of requirement changes, and the number of lines of code.

 

The sample project source code is uploaded to my Git repository (link given below) for reference. Feel free to add comments or reach out to me for any discussion.

 

Happy Coding!!!

Core ML with iOS/Kitura Swift - A comparison study with Watson Service

 

Originally published in 2018 on the IBM Swift initiative blog. The initiative was discontinued and all of its articles, including this one, were removed. I am republishing the archive here to keep my content alive. In 2021, this content is no longer particularly relevant.

 

At WWDC 2017, Apple announced iOS 11, which includes a new AI-based framework, Core ML. Core ML helps app developers take iOS app development to the next level for enterprise businesses that demand AI and decision-making capabilities.

 

With server-side Swift's ability to run on different OS platforms, including iOS, I did a quick PoC to explore and understand the Core ML capabilities. I integrated Kitura into an iOS app and exposed the native iOS Core ML framework as an open external API. In other words, Core ML is not a service tightly bound to its built-in native iOS SDK; by embedding Kitura in the iOS app, you can make the service available to any external device. A single iPhone or an Xcode simulator can then serve as a mini server that provides the AI service to other devices.

 

The first part of this article is an introduction to Core ML and shows how to integrate Core ML with Kitura server-side Swift. The second part is a detailed comparison of the Watson Visual Recognition service and the Core ML service, which should help you understand how enterprise-ready these services are.

 

Core ML for iOS 

Core ML is a framework that enables devices to run and process machine learning models. At the heart of the execution process is the mlmodel file, the trained model. Apple's developer documentation website includes links to several trained machine learning models in a Core ML-compatible format for you to download and try.

 

Image classification and object identification are interesting applications of deep learning, and their growing use in today's enterprise development is a key reason for their popularity. I created a simple iOS application that takes an image as input and uses the Core ML API with a trained mlmodel file to recognize and classify the image. I then converted this app into a hybrid, Kitura-integrated application. For more information about this, read Kitura/iOS: Running a Web Server on your iPhone.
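
For context, the classification step on the iOS side might look roughly like the sketch below. This is only an illustrative approximation, not the article's actual code: it assumes an Xcode-generated VGG16 model class bundled with the app and uses the Vision framework to wrap the Core ML request.

import CoreML
import Vision
import UIKit

// Sketch: classify a UIImage with a bundled Core ML model. "VGG16" is the
// assumed name of the generated model class; adapt it to your own model.
func classify(_ image: UIImage, onCompletion: @escaping (String) -> Void) {
    guard let cgImage = image.cgImage,
          let visionModel = try? VNCoreMLModel(for: VGG16().model) else { return }

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // Take the top classification observation and report label + confidence.
        guard let best = (request.results as? [VNClassificationObservation])?.first else { return }
        onCompletion("\(best.identifier) (confidence: \(best.confidence))")
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    do {
        try handler.perform([request])
    } catch {
        print("Vision request failed: \(error)")
    }
}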

 

The idea behind integrating server-side Swift with the iOS app is to expose the Core ML service like any other standard REST API that accepts image data and returns a JSON result. The section below describes how to achieve this.

 

 

Delegation protocol and server-side Swift multiple router handlers

 

Delegation is a design pattern that enables a class or structure to hand off (or delegate) some of its responsibilities to an instance of another type; it is one of the most commonly used patterns in iOS programming and is implemented with protocols. The server-side Swift router, in turn, is designed to handle a request and response with multiple handlers. Combining delegation with the router makes it possible to divide the request handling into subtasks and delegate any iOS-specific subtask (the API hit counting task, in this example) to class instance methods. This combination is a quick way to expose iOS features as a Kitura Swift API service.

 

A few examples to think about include:

  • API that does Core ML operations and returns JSON response
  • API that sends email using iPhone device’s registered account
  • API that returns reverse Geo Coordinates using Map Kit
  • API that uses Siri's intelligence and Siri Kit to process request

 

In the code snippet below, the Kitura post request module uses three different handlers. The notifyRequest and responseProcessed methods delegate their work to the main view controller class for API request and response notifications, while processRequest calls the Core ML image processing module and returns the JSON result.

 

 

Code snippet: Kitura post request module uses three different handlers 

mainRouter!.post("/data/analyzeImage", handler: self.notifyRequest, self.processRequest, self.responseProcessed)

func responseProcessed(request: RouterRequest, response: RouterResponse, next: @escaping () -> Void) {
    DispatchQueue.main.async {
        self.delegate!.didHitApi()
    }
    next()
}

func notifyRequest(request: RouterRequest, response: RouterResponse, next: @escaping () -> Void) {
    DispatchQueue.main.async {
        self.delegate!.didReceiveRequest(info: request.originalURL)
    }
    next()
}

func processRequest(request: RouterRequest, response: RouterResponse, next: @escaping () -> Void) {
    var dataa = Data()
    do {
        let data = try request.read(into: &dataa)
…….
…..
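
For completeness, the delegate protocol implied by these handlers might look roughly like the following. Only the two method names appear in the snippet above; the protocol name, the class name, and the weak property are assumptions for illustration.

// Hypothetical delegate protocol adopted by the main view controller.
// ServerEventDelegate and ServerController are assumed names; only
// didReceiveRequest and didHitApi appear in the original snippet.
protocol ServerEventDelegate: AnyObject {
    func didReceiveRequest(info: String)
    func didHitApi()
}

class ServerController {
    weak var delegate: ServerEventDelegate?
    // ... router setup and the handlers shown above live here ...
}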

 

Figure 1 shows the integrated app that runs the Core ML service and exposes it through Kitura. For every request, the API hits count increases; the delegate methods, dispatched on the main thread, handle the update. It's a success: a working prototype of a Kitura Core ML service running on an iOS device.

 

 

Figure 1. Integrated app that runs the Core ML service

Read on to learn about the other, native-only iOS app I built to consume this service, and a comparison with the Watson Visual Recognition service.

 

Comparison of visual recognition services

 

Core ML is Apple’s iOS based framework to process machine learning models. Watson is IBM's platform built with AI and cognitive capabilities. Watson provides multiple AI and decision-making services for enterprise business systems and is available as a set of open APIs and SaaS products on Bluemix. 

 

Curious about the accuracy and maturity of the systems, I extended the native-only PoC app to run and compare a set of images against:

  • The Core ML API built with Kitura Swift, which uses the pre-trained VGG16 Keras Python model
  • The Watson Visual Recognition Service API, the default service that comes pre-configured with a trained example classifier engine to recognize images

 

I chose 15 digitally processed images for the comparison, using processed images deliberately to test the accuracy of the trained models when the input loses natural color and pixel information. Because of file size limits, the images were further compressed to JPEG format.

 

Figure 2 shows the app running against each service for the comparison. Two parameters of each service response are shown in the screenshot: the classified/identified string and the confidence factor (on a scale of 0 to 1).

 

Figure 2. The images and comparisons

 

 

 


 

Results

The result set below shows that Core ML with VGG16 identified 6 out of 15 images correctly, and of those six, two had a confidence above 90%. Watson recognized 11 out of 15 images and had a 90%-plus confidence for seven of them. There were some instances where the Core ML VGG16 model did better, for example the image of tablet pills on the right (shown in Figure 2 above), but on the whole Watson predicted accurately more often. It is important to remember that the samples used are artificially processed and compressed images.

 

Figure 3. Results of the comparison


 

Conclusion

There is more to this than just the accuracy rate; another consideration is the trained model itself. Both Watson and Core ML let you customize the classifier or swap in another model to improve accuracy. However, the effort to create and convert a better trained classifier model for iOS is much larger than with a simple Watson service. For Watson, you only need to provide sets of positive and negative images and train the engine; for Core ML, you need to build a Python model and convert it into an mlmodel file.

The effort to create and convert a better classifier may come down with time. For now, though, Watson is a mature and stable service for the enterprise world, and I consider Core ML an experimental platform until Apple releases significant improvements.

 

This article demonstrated how to convert any core iOS feature into a working Kitura Swift server-side service and compared the maturity and enterprise development readiness of Core ML and a Watson Service.

 

All the source code for this article is in my Git repository; please fork and reuse it. My next article will analyze the possibility of turning the Core ML framework into a full-fledged server-side framework… stay tuned.

 


Disclaimer

This is not a peer-reviewed or expert-reviewed article. The content is based on my own knowledge and experience, and my intent is to share my learnings and findings with fellow developers. The information might not be 100% accurate.

 

@Developers

The enterprise demands powerful and intuitive frameworks to transform applications to the next level of intelligence. It is important to learn, understand, and adopt the trending technologies early. Keep coding and keep learning. If you have any questions or comments, you can reach me at @sangesiv@in.ibm. 

 

Happy Coding!!!!