Operating System

Sharing my very old notes on operating systems.

Reliability: can be defined as the probability that a system will produce correct outputs up to some given time t. Reliability is enhanced by features that help to avoid, detect and repair hardware faults. A reliable system does not silently continue and deliver results that include uncorrected corrupted data. Instead, it detects and, if possible, corrects the corruption.

Availability: means the probability that a system is operational at a given time, i.e. the amount of time a device is actually operating as the percentage of total time it should be operating. High-availability systems may report availability in terms of minutes or hours of downtime per year. Availability features allow the system to stay operational even when faults do occur.

Scalability: The ability to retain performance levels when adding additional processors.

Clock Speed: The clock speed of a CPU determines the maximum amount of work the CPU can perform in a given unit of time. Loosely speaking, we say our system executes our programs at a clock speed of x.

Thread: From a technical standpoint, a thread is a combination of the kernel-level and application-level data structures needed to manage the execution of code or a task. The kernel-level structures coordinate the dispatching of events to the thread and the preemptive scheduling of the thread on one of the available cores. The application-level structures include the call stack for storing function calls and the structures the application needs to manage and manipulate the thread’s attributes and state.

Difference between Task, Process and Threads

Threads are used for small tasks, whereas processes are used for more ‘heavyweight’ tasks – basically the execution of applications. Another difference between a thread and a process is that threads within the same process share the same address space, whereas different processes do not.

Process
Each process provides the resources needed to execute a program. A process has a virtual address space, executable code, open handles to system objects, a security context, a unique process identifier, environment variables, a priority class, minimum and maximum working set sizes, and at least one thread of execution. Each process is started with a single thread, often called the primary thread, but can create additional threads from any of its threads.

Thread
A thread is the entity within a process that can be scheduled for execution. All threads of a process share its virtual address space and system resources. In addition, each thread maintains exception handlers, a scheduling priority, thread local storage, a unique thread identifier, and a set of structures the system will use to save the thread context until it is scheduled. The thread context includes the thread’s set of machine registers, the kernel stack, a thread environment block, and a user stack in the address space of the thread’s process. Threads can also have their own security context, which can be used for impersonating clients.

Cloud computing, also called on-demand computing, is a kind of Internet-based computing that provides shared processing resources and data to computers and other devices on demand.

Virtual memory is a feature of an operating system (OS) that allows a computer to compensate for shortages of physical memory by temporarily transferring pages of data from random access memory (RAM) to disk storage.

Critical Section: In concurrent programming, a critical section is a part of a multi-process program that may not be concurrently executed by more than one of the program’s processes.

  • In other words, It is a piece of a program that requires mutual exclusion of access.
  • Typically, the critical section accesses a shared resource, such as a data structure, a peripheral device, or a network connection, that does not allow multiple concurrent accesses.
  • A critical section may consist of multiple discontiguous parts of the program’s code. For example, one part of a program might read from a file that another part wishes to modify. These parts together form a single critical section, since simultaneous readings and modifications may interfere with each other.
  • Since critical sections may execute only on the processor on which they are entered, synchronization is only required within the executing processor.

Concurrency issues

  • Resource Starvation: In computer science, starvation is a problem encountered in concurrent computing where a process is perpetually denied necessary resources to process its work. Starvation may be caused by errors in a scheduling or mutual exclusion algorithm, but can also be caused by resource leaks
  • Mutual Exclusion: Mutual exclusion is in many ways the fundamental issue in concurrency. It is the requirement that when a process P is accessing a shared resource R, no other process should be able to access R until P has finished with R. Examples of such resources include files, I/O devices such as printers, and shared data structures.

A Mutex (Lock) is a locking mechanism used to synchronize access to a resource. Only one task (a thread or a process, depending on the OS abstraction) can acquire the mutex at a time. This means there is ownership associated with a mutex, and only the owner can release the lock (mutex).
– very good answer http://stackoverflow.com/a/346678
A Semaphore is a signaling mechanism (an “I am done, you can carry on” kind of signal). For example, if you are listening to songs (treat it as one task) on your mobile and your friend calls you at the same time, an interrupt is triggered, upon which an interrupt service routine (ISR) signals the call-processing task to wake up.
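To make the distinction concrete, here is a small illustrative Objective-C sketch (the demo function, the balance variable, and the queue usage are inventions for this example, not part of the original notes): an NSLock plays the role of the ownership-based mutex, while a dispatch semaphore provides pure signaling between two tasks.

#import <Foundation/Foundation.h>

static void mutexVersusSemaphoreDemo(void) {
    // Mutex: ownership-based. The thread that locks must be the one that unlocks.
    NSLock *lock = [[NSLock alloc] init];
    __block NSInteger balance = 100;
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        [lock lock];           // acquire ownership of the mutex
        balance -= 10;         // critical section: exclusive access to the shared resource
        [lock unlock];         // only the owner releases it
    });

    // Semaphore: pure signaling ("I am done, you can carry on").
    dispatch_semaphore_t ready = dispatch_semaphore_create(0);
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // ... the producer / ISR finishes its work ...
        dispatch_semaphore_signal(ready);                  // wake the waiting task
    });
    dispatch_semaphore_wait(ready, DISPATCH_TIME_FOREVER); // consumer proceeds once signaled
}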

Deadlock / Livelock (how to avoid?) http://wikipedia.moesalih.com/Deadlock#Livelock
http://stackoverflow.com/questions/6155951/whats-the-difference-between-deadlock-and-livelock

DEADLOCK: Deadlock is a condition in which a task waits indefinitely for conditions that can never be satisfied:
  • the task claims exclusive control over shared resources,
  • the task holds resources while waiting for other resources to be released,
  • tasks cannot be forced to relinquish resources, and
  • a circular waiting condition exists.

  • Deadlock: A situation in which two or more processes are unable to proceed because each is waiting for one of the others to do something.
  • For example, consider two processes, P1 and P2, and two resources, R1 and R2. Suppose that each process needs access to both resources to perform part of its function. Then it is possible to have the following situation: the OS assigns R1 to P2, and R2 to P1. Each process is waiting for one of the two resources. Neither will release the resource that it already owns until it has acquired the other resource and performed the function requiring both resources. The two processes are deadlocked, as the sketch below illustrates.
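A hedged Objective-C recreation of this P1/P2, R1/R2 scenario (the names and the demo function are invented for illustration): two tasks acquire the same pair of locks in opposite order, so each ends up waiting on the lock the other holds.

#import <Foundation/Foundation.h>

static void deadlockDemo(void) {
    NSLock *r1 = [[NSLock alloc] init];
    NSLock *r2 = [[NSLock alloc] init];
    dispatch_queue_t bg = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

    dispatch_async(bg, ^{   // P1
        [r2 lock];          // the OS assigns R2 to P1
        [r1 lock];          // blocks forever once P2 holds R1
        // ... work that needs both resources ...
        [r1 unlock];
        [r2 unlock];
    });

    dispatch_async(bg, ^{   // P2
        [r1 lock];          // the OS assigns R1 to P2
        [r2 lock];          // blocks forever once P1 holds R2
        // ... work that needs both resources ...
        [r2 unlock];
        [r1 unlock];
    });
    // Each task now waits on a lock the other holds: a circular wait, hence deadlock.
}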

Livelocked threads are unable to make further progress. However, the threads are not blocked — they are simply too busy responding to each other to resume work.

  • Livelock: A situation in which two or more processes continuously change their states in response to changes in the other process(es) without doing any useful work:
  • Starvation: A situation in which a runnable process is overlooked indefinitely by the scheduler; although it is able to proceed, it is never chosen.
  • Example: Suppose that three processes (P1, P2, P3) each require periodic access to resource R. Consider the situation in which P1 is in possession of the resource, and both P2 and P3 are delayed, waiting for that resource. When P1 exits its critical section, either P2 or P3 should be allowed access to R. Assume that the OS grants access to P3 and that P1 again requires access before P3 completes its critical section. If the OS grants access to P1 after P3 has finished, and subsequently alternately grants access to P1 and P3, then P2 may indefinitely be denied access to the resource, even though there is no deadlock situation.
  • A live lock is similar to a deadlock, except that the states of the processes involved in the livelock constantly change with regard to one another, none progressing. Livelock is a special case of resource starvation; the general definition only states that a specific process is not progressing.
  • A real-world example of livelock occurs when two people meet in a narrow corridor, and each tries to be polite by moving aside to let the other pass, but they end up swaying from side to side without making any progress because they both repeatedly move the same way at the same time.
  • Livelock is a risk with some algorithms that detect and recover from deadlock. If more than one process takes action, the deadlock detection algorithm can be repeatedly triggered. This can be avoided by ensuring that only one process (chosen randomly or by priority) takes action.

Context switch: http://wikipedia.moesalih.com/Context_switch
In computing, a context switch is the process of storing and restoring the state (more specifically, the execution context) of a process or thread so that execution can be resumed from the same point at a later time. This enables multiple processes to share a single CPU and is an essential feature of a multitasking operating system.

Process Scheduling: The job scheduler selects processes from the queue and loads them into memory for execution; the loaded processes then become candidates for CPU scheduling. The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O-bound and processor-bound.
http://www.tutorialspoint.com/operating_system/os_process_scheduling.htm

Race Condition: A race condition is an undesirable situation that occurs when a device or system attempts to perform two or more operations at the same time, but because of the nature of the device or system, the operations must be done in the proper sequence in order to be done correctly.
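A minimal Objective-C sketch of a race (the counter and loop counts are arbitrary choices for this example): two blocks perform an unsynchronized read-modify-write on the same variable, so the final value depends on how their operations interleave.

#import <Foundation/Foundation.h>

static void raceConditionDemo(void) {
    __block NSInteger hits = 0;
    dispatch_queue_t bg = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_group_t group = dispatch_group_create();

    for (int i = 0; i < 2; i++) {
        dispatch_group_async(group, bg, ^{
            for (int j = 0; j < 100000; j++) {
                hits += 1;   // read-modify-write with no lock: updates can be lost
            }
        });
    }
    dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
    NSLog(@"hits = %ld (frequently less than 200000)", (long)hits);
}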

Green Thread: Green threads are threads that are scheduled by a runtime library or virtual machine (VM) instead of natively by the underlying operating system. Green threads emulate multithreaded environments without relying on any native OS capabilities, and they are managed in user space instead of kernel space, enabling them to work in environments that do not have native thread support

Multiprocessing — multiple CPUs executing concurrently
Multitasking — the operating system simulates concurrency on a single CPU by interleaving the execution of different tasks.

Interrupt-ability – Concurrency is when two or more tasks can start, run, and complete in overlapping time periods. It doesn’t necessarily mean they’ll ever both be running at the same instant. Eg. multitasking on a single-core machine.

  • A condition that exists when at least two threads are making progress. A more generalized form of parallelism that can include time-slicing as a form of virtual parallelism.

Independent-ability Parallelism is when tasks literally run at the same time, eg. on a multicore processor.

  • A condition that arises when at least two threads are executing simultaneously.

Conditions for Deadlock: Three policy conditions are necessary for deadlock to be possible.

  • Mutual exclusion Only one process may use a resource at one time.
  • Hold and wait A process may hold some resources while waiting for others.
  • No preemption No process can be forced to release a resource.
A fourth condition is required for deadlock to actually occur.
  • Circular wait A closed chain of processes exists, such that each process is blocked waiting for a resource held by another process in the set.
Three approaches exist for dealing with deadlock.
  • Prevention involves adopting a static policy that disallows one of the four conditions above.
  • Avoidance involves making dynamic choices that guarantee prevention.
  • Detection and recovery involves recognising when deadlock has occurred, and trying to recover.

Deadlock Prevention: We can prevent deadlock from occurring by adopting either an indirect policy that disallows one of the three necessary conditions, or a direct policy that disallows sets of processes that can exhibit a circular wait.
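One common direct policy is to impose a global ordering on the resources. As a rough Objective-C sketch (the helper below is hypothetical, not a standard API), if every task acquires r1 before r2, a circular wait can never form:

#import <Foundation/Foundation.h>

// Illustrative direct-prevention policy: all callers take the locks in the same order.
static void withBothResources(NSLock *r1, NSLock *r2, void (^work)(void)) {
    [r1 lock];      // always first
    [r2 lock];      // always second, so no cycle of waiting tasks can form
    work();
    [r2 unlock];
    [r1 unlock];
}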

Scalability Issues with synchronization

Scalability: The ability to retain performance levels when adding additional processors.
In the context of a mutual exclusion (ME) lock:

  1. Latency: To acquire a lock, a thread has to do some work: go to memory, fetch the lock, and make sure no one else is competing for it. The time a thread spends acquiring the lock when it is free is the latency. In other words, latency asks: the lock is currently not being used, so how long does it take for me to go and get it?
  2. Waiting Time: The second scalability concern with synchronization is the waiting time: if I want the lock but someone else currently holds it, how long do I wait before I get it?
  3. Contention: The un-scalability of a lock shows up as contention: many threads wait for a particular lock and only one of them can win at a time. That is the contention part of implementing a synchronization primitive.

Distributed System / Cloud Computing

Definition:
Distributed processing involves multiple processes on multiple systems.
A distributed system is a collection of nodes interconnected by a Local Area Network (LAN) or a Wide Area Network (WAN). The LAN may be implemented using twisted pair, coaxial cable, or optical fiber; a WAN could be implemented using satellite communication, microwave links, and so on. That is the basic picture of what a distributed system is.

Memory
There is no physical memory shared between the nodes of a distributed system, so the only way nodes can communicate with one another is by sending messages over the network.

Time
Event computation time is the time it takes a single node to do some meaningful processing. A node may also communicate with other nodes in the system; that is the communication time, or messaging time. The third property of a distributed system is that the message communication time is significantly larger than the event computation time on a single node.

Reason
A system is distributed if the message transmission time is not negligible compared to the time between events in a single process.

Example
What is the implication of this definition? Interestingly, even a cluster is a distributed system by this definition. Processors have become blazingly fast, so the event computation time has shrunk quite a bit. Message communication time has also improved, but not as quickly as single-processor computation, and therefore even on a cluster contained in a single rack in a data center, the message transmission time is significantly larger than the event computation time. So even a cluster is a distributed system by this definition.

Benefits of Cloud Computing

  • Reduced Cost
  • Automatic Updates
  • Green Benefits of Cloud computing
  • Remote Access
  • Disaster Relief
  • Self-service provisioning
  • Scalability
  • Reliability and fault-tolerance
  • Ease of Use
  • Skills and Proficiency

Happy Reading 🙂


Queue in swift

Here, we’ll be creating a simple data structure, Queue, to demonstrate some of the features of the Swift language, including:

  • Value-type generics,
  • Secure memory pointers,
  • Mutating functions on struct,
  • Sequence Type and Generator Types protocols, and
  • Copy-on-write feature on swift

Queue is an abstract data type or a linear data structure, in which elements are inserted at one end called the REAR (also called the Tail), and existing elements are removed from the other end called the FRONT (also called the Head). This makes a queue a FIFO data structure, which means that the element inserted first will also be removed first. Like a Stack, a Queue is an ordered list of elements of the same type.

 


The process to add an element into queue is called Enqueue and the process of removal of an element from queue is called Dequeue.

Queue Protocol

It defines the functionality of a Queue, with three functions: `enqueue:`, `dequeue`, and `peek`. All functions are marked as `mutating` because they will be mutating the queue properties. Since it will be a homogeneous queue, we’ve associated the protocol with a type `Element`, which is the type of value being enqueued or dequeued.

protocol QueueType {
  typealias Element
  mutating func enqueue(element: Element)
  mutating func dequeue() -> Element?
  func peek() -> Element?
}

Mutating

By default, the properties of a value type (like struct or enum) cannot be modified from within its instance methods. However, if you need to modify the properties of your structure or enumeration within a particular method, you can opt in to mutating behavior for that method.
References: Apple Documentation, A blog post by @NatashaTheRobot

Queue Storage:

final class Storage<Element> {

 private var pointer: UnsafeMutablePointer<Element>
 private let capacity: Int

 init(capacity: Int) {
   pointer = UnsafeMutablePointer<Element>.alloc(capacity)
   self.capacity = capacity
 }

 static func copy(storage: Storage) -> Storage<Element> {
   let storageNew = Storage<Element>(capacity: storage.capacity)
   storageNew.pointer.initializeFrom(storage.pointer, count: storage.capacity)
   return storageNew
 }

 func add(element: Element, at index: Int) {
   (pointer + index).initialize(element)
 }

 func removeAt(index: Int) {
   (pointer + index).destroy()
 }

 func itemAt(index: Int) -> Element {
   return (pointer + index).memory
 }

 deinit {
   pointer.destroy(capacity)
   pointer.dealloc(capacity)
 }
}

We create a generic class to store the queue elements. This class initializes a mutable pointer with the given capacity and provides functions to add, remove, and fetch an element at a given index.

Functions:

init(capacity: Int);
It will allocate memory with capacity for pointer, which will keep queue elements. Here is a good explanation about Memory Pointers in swift

func add(element: Element, at index: Int)
func removeAt(index: Int)
func itemAt(index: Int) -> Element
These functions will let us add, remove and return an element at a given position in the storage.

Note: (pointer + index).memory works because + is overloaded in Swift’s stdlib to return the pointer advanced by the given number of elements.

`public func +<Memory>(lhs: UnsafeMutablePointer<Memory>, rhs: Int) -> UnsafeMutablePointer<Memory>`

deinit
Swift doesn’t provide any option to do cleanup when a value type is removed from memory, so we use a class for the storage: a class gives us a deinit function where we can write the cleanup. deinit is called when a reference type is deallocated (which happens when there are zero references to it), and here we release all the memory we asked for when creating this storage, so it is freed when our queue goes out of scope.

static func copy(storage: Storage) -> Storage<Element>
We’ll need to copy the storage when our queue is passed around and is mutated.

The Queue:

struct Queue<Element> : QueueType {

  private var storage: Storage<Element>
  private var rear: Int = 0
  private var front: Int = 0
  private var count: Int = 0
  private let capacity: Int

  init(capacity: Int) {
    self.capacity = capacity
    storage = Storage<Element>(capacity: capacity)
  }

  private mutating func makeUnique() {
    if !isUniquelyReferencedNonObjC(&storage) {
      storage = Storage.copy(storage)
    }
  }

  mutating func enqueue(element: Element) {
    guard count < capacity else {
      print("Queue is full.")
      return
    }
    makeUnique()
    storage.add(element, at: rear)
    rear = (rear + 1) % capacity
    count = count + 1
  }

  mutating func dequeue() -> Element? {
    guard count > 0 else {
      print("Queue is empty.")
      return nil
    }

    makeUnique()
    let item = storage.itemAt(front)
    storage.removeAt(front)
    front = (front + 1) % capacity
    count = count - 1
    return item
  }

  func peek() -> Element? {
    guard count > 0 else {
      print("Queue is empty.")
      return nil
    }
    return storage.itemAt(front)
  }
}

Queue is implemented as a struct containing five properties: storage keeps the buffer, rear and front keep track of the Tail and Head of the queue respectively, count keeps the total number of elements, and capacity defines the total size of the queue buffer. The init method initializes the storage with the provided capacity.

Functions:

private mutating func makeUnique()
It factors out functionality common to the struct’s mutating methods. It calls isUniquelyReferencedNonObjC() (a function defined in the stdlib), which tells us whether a non-ObjC object, i.e. a pure Swift object, has a reference count of exactly one.

`public func isUniquelyReferencedNonObjC<T : AnyObject>(inout object: T) -> Bool`

if !isUniquelyReferencedNonObjC(&storage) {
  storage = Storage.copy(storage)
}

So when one queue instance is assigned to another variable, both share the same storage instance until enqueue or dequeue is called on the new instance (or the old one); at that point the mutated instance detaches from the shared storage and creates a new copy for itself by calling the copy method on the storage. This is how copy-on-write is achieved. Thanks @aciidb0mb3r for the clarification. Here is the doc and an example code which simplifies the same.

enqueue() will insert an element at end of the queue.
dequeue() will remove first element from the queue.
peek() will return the first element of the queue.

In order for the queue to support for..in looping just like an array, it needs to implement the SequenceType protocol. The key here is the generate() function, which returns a GeneratorType. Thus we also need to implement the GeneratorType protocol, whose next() implementation simply calls dequeue(). Reference: Swift Collection Protocols

extension Queue: SequenceType {
  func generate() -> QueueGenerator<Element> {
    return QueueGenerator<Element>(queue: self)
  }
}

struct QueueGenerator<Element> : GeneratorType {
  var queue: Queue<Element>
  mutating func next() -> Element? {
    return queue.dequeue()
  }
}

Examples

Let’s try out the queue with Int and also call SequenceType functions to perform operations on the queue.

var intQueue = Queue<Int>(capacity: 20)
intQueue.enqueue(11)
intQueue.enqueue(12)
intQueue.dequeue() // Remove from front ie 11
intQueue.enqueue(13)
print("Print elements in queue")
for i in intQueue {
 print(i)
}

let queueValuesMultipliedByTwo = intQueue.map { $0 * 2 }
print(queueValuesMultipliedByTwo)

Storing reference type in Queue

Now, let’s try the queue with reference types. We’ll create a simple class which conforms to the CustomStringConvertible protocol to print a class description, and add a few instances to the queue.

class Foo : CustomStringConvertible {
 let tag: Int
 init(_ tag: Int) {
  self.tag = tag
 }
 deinit {
  print("Removing...\(tag)")
 }
 var description: String {
  return "#\(tag)"
 }
}
var queueClass = Queue<Foo>(capacity: 20)
queueClass.enqueue(Foo(1))
queueClass.enqueue(Foo(2))
queueClass.dequeue()
print(queueClass)

The entire playground friendly code is available here.
Happy coding, cheers 🙂

Interview Stuff

A list of interview topics for developers who are looking to hire or get hired for iOS work. This stuff is based on my experience giving interviews, and taking some, for an iOS developer job. It also includes some self-test questions which have not been asked.

__kindOf (Avoid compiler warnings for subclass access or to avoid casting)
http://stackoverflow.com/a/31399395/559017

Garbage Collector in Objective-C
No, Objective-C does not have a GC: there is no scanning of the heap for unused objects and no whole-app pause; instead, NSObjects are released deterministically via reference counting.
Objective-C used Manual Reference Counting (‘retain’, ‘release’) until iOS 5, and Automatic Reference Counting (‘strong’, ‘weak’) after that.
https://mikeash.com/pyblog/friday-qa-2010-07-16-zeroing-weak-references-in-objective-c.html

weak v/s assign
The only difference between weak and assign is that if the object a weak property points to is deallocated, then the value of the weak pointer will be set to nil, so that you never run the risk of accessing garbage. If you use assign, that won’t happen, so if the object gets deallocated from under you and you try to access it, you will access garbage. weak does not work with primitive types like int or double; it only works with Objective-C objects.
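A small illustrative snippet (compiled under ARC; the demo function is made up) shows the difference once the referenced object goes away:

#import <Foundation/Foundation.h>

static void weakVersusAssignDemo(void) {
    __weak NSObject *weakRef = nil;
    __unsafe_unretained NSObject *assignRef = nil;   // 'assign' semantics for an object
    @autoreleasepool {
        NSObject *obj = [[NSObject alloc] init];
        weakRef = obj;
        assignRef = obj;
    }   // obj is deallocated here; neither reference kept it alive
    NSLog(@"weak: %@", weakRef);         // logs (null): the weak reference was zeroed
    // NSLog(@"assign: %@", assignRef);  // dangling pointer: undefined behaviour if used
}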

Compiler directive?
http://nshipster.com/at-compiler-directives/

instancetype?
http://nshipster.com/instancetype/ and http://stackoverflow.com/a/8976920

NULL (void* 0), Nil (class 0), nil (id 0) and NSNull [NSNull null]
http://nshipster.com/nil/

Differentiate #import, #include and @class?
#import: An improved version of #include; it brings the entire header file into the current file and guarantees it is included only once.
#include: With Objective-C there is a performance hit with #include; the compiler must open each header file just to notice the include guard.
@class: A forward-declaration compiler directive. It tells the compiler that the class exists but nothing else about it, which minimizes the amount of code seen by the compiler and linker.

Objective-C Variable Length Argument (var args).
https://izeeshan.wordpress.com/2015/03/24/217/

Method swizzling / Aspect Oriented Programming
http://nshipster.com/method-swizzling/
https://github.com/steipete/Aspects

Objective-C Collection Types and An array of weak objects.
https://mikeash.com/pyblog/friday-qa-2010-05-28-leopard-collection-classes.html
https://mikeash.com/pyblog/friday-qa-2012-03-09-lets-build-nsmutablearray.html

Design patterns.

  1. Thread Safe Singleton: https://izeeshan.wordpress.com/2014/08/31/threadsafe-singleton/
  2. Facade – Wrapper on a complex system or apis.
  3. Decorator – Dynamically adds behaviors and responsibilities to an object without modifying its code, Ex. Category and Delegation (Modifies the behavior of an object instance.)
  4. Adapter – An Adapter allows classes with incompatible interfaces to work together. It wraps itself around an object and exposes a standard interface to interact with that object. Ex. NSCopying protocol implemented by so many, to provide standard copy method. Delegates and Datasources are examples of Adapter design pattern.
  5. Observer / Broadcasting Listeners – One object notifies other objects of any state changes. The objects involved don’t need to know about one another – thus encouraging a decoupled design. This pattern’s most often used to notify interested objects when a property has changed. Ex. NSNotification (Asynchronous message passing between two different objects) and KVO
  6. Memento – It captures and externalizes an object’s internal state. In other words, it saves your stuff somewhere. Later on, this externalized state can be restored without violating encapsulation; that is, private data remains private. Ex. NSUserDefaults and Archiving, unarchiving.
  7. Command – (Ex. Target-Action of a button) It encapsulates a request or action as an object. The encapsulated request is much more flexible than a raw request and can be passed between objects, stored for later, modified dynamically, or placed into a queue.
  8. Dependency Injection: A software design pattern that implements inversion of control for software libraries. Caller delegates to an external framework the control flow of discovering and importing a service or software module specified or “injected” by the caller.

NSObject: the Class and the Protocol
https://mikeash.com/pyblog/friday-qa-2013-10-25-nsobject-the-class-and-the-protocol.html

  1. Multiple root classes in objective c, need common rules.
  2. Protocol inheritance (can access all NSObject protocol functions if inherited).

Explain: [self self], [self class], [self release]

Objective-C Category, dealloc and +load methods in a category, categories with libraries and frameworks.
https://izeeshan.wordpress.com/2012/09/06/ios-category/
Properties: http://nshipster.com/associated-objects/
Dealloc: http://stackoverflow.com/questions/14708905/how-do-i-access-the-dealloc-method-in-a-class-category
Linker Flags: http://stackoverflow.com/questions/2567498/objective-c-categories-in-static-library

+load / +initialize
http://stackoverflow.com/a/13326633/559017
https://www.mikeash.com/pyblog/friday-qa-2009-05-22-objective-c-class-loading-and-initialization.html

IPA size: https://developer.apple.com/news/?id=02122015a

NSAutoreleasePool: https://mikeash.com/pyblog/friday-qa-2011-09-02-lets-build-nsautoreleasepool.html

  • If a pool is destroyed which is not at the top of the stack of pools, it also destroys the other pools which sit above it. In short, NSAutoreleasePool instances nest.
  • drain vs release, NSAutoreleasePool: drain is same as release except it signals to the collector that this might be a good time to run a collection cycle.

NSZombie: It’s a memory debugging aid. Specifically, when you set NSZombieEnabled, then whenever an object reaches retain count 0, rather than being deallocated it morphs itself into an NSZombie instance. Whenever such a zombie receives a message, it logs a warning rather than crashing or behaving in an unpredictable way.

Why retainCount should never be used in shipping code. http://stackoverflow.com/a/4636477

  • Invoking a method on nil returns a zero (0 or nil) value.
  • Best practices are implicit instructions to compiler.

Core Data & Migration: https://izeeshan.wordpress.com/2014/11/10/core-data-migration/

XML Parsers

  • SAX: Simple API for XML; it’s one where our code gets notified as the parser reads the XML data, and we construct the objects and keep track of state ourselves.
  • DOM: Document Object Model; it reads the entire document and builds an object tree that we can query for different elements.
  • NSXMLParser: It’s a SAX parser written in Objective-C and is quite straightforward to use.
  • libxml2: A C-based API and an open-source library which supports both SAX and DOM processing. One cool feature of this library is that it can parse the data as it’s being read from the network.

Differences

  • In SAX we get notified as elements are parsed, but in DOM we have to query the built object for them.
  • In SAX we construct the objects and keep track of state ourselves, but in DOM we receive fully built objects and query elements by key.
  • Memory: DOM usually requires more memory than SAX, because it reads all the data at once and keeps it in memory; this is something to consider when we deal with large documents.

Web Services (WSDL / SOAP / AJAX)
A web service is program-to-program communication over the Internet through XML. It provides a service, based on operations defined in its interface, and it communicates across different languages and platforms.

  • WSDL: A web service provides a way to describe its interface in enough detail, and that description is usually provided in an XML document called a Web Service Description Language (WSDL) document. In a web service we send XML and receive XML as the response.
  • SOAP: A web service exposes useful functionality to web users through standard web protocols. In most cases, the protocol used is SOAP, which stands for Simple Object Access Protocol.
  • JWS: Java Web Service – JAX-WS stands for Java API for XML Web Services. JAX-WS is a technology for building web services and clients that communicate using XML.
  • AJAX: AJAX stands for Asynchronous JavaScript and XML. It allows asynchronous communication between a client and a server, so the end user does not have to wait while a request is being processed.

REST APIs? How is it differ from SOAP?

  1. Representational State Transfer v Simple Object Access Protocol.
  2. REST permits many different data formats where as SOAP only permits XML.
  3. REST better browser support (coz of JSON), better performance, scalability.
  4. REST reads can be cached, SOAP can not be.
  5. SOAP used for WS-Security, WS-Atomic Transaction, WS-Reliable Messaging, Supports ACID transactions.

Opaque v Alpha: http://stackoverflow.com/a/8520656
A struct is a special C data type that encapsulates other pieces of data into a single cohesive unit. Like an object, but built into C.

Coordinates v/s Drawing
The coordinates of Core Graphics drawings start from the bottom-left corner, while UIKit global coordinates start from the top-left.

NSPredicate v NSScanner: http://nshipster.com/nsscanner/

Frame v Bound v Center
Frame: A view’s frame (CGRect) is the position of its rectangle in the superview’s coordinate system. By default it starts at the top left.
Bounds: A view’s bounds (CGRect) expresses a view rectangle in its own coordinate system.
Center: A center is a CGPoint expressed in terms of the superview’s coordinate system and it determines the position of the exact center point of the view.
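To make these three properties concrete, here is an illustrative UIKit snippet (the view controller and the frame values are arbitrary choices for the example):

#import <UIKit/UIKit.h>

@interface DemoViewController : UIViewController   // hypothetical view controller for the demo
@end

@implementation DemoViewController
- (void)viewDidLoad {
    [super viewDidLoad];
    UIView *box = [[UIView alloc] initWithFrame:CGRectMake(50, 50, 100, 100)];
    [self.view addSubview:box];

    NSLog(@"frame:  %@", NSStringFromCGRect(box.frame));    // {{50, 50}, {100, 100}} - position in superview coordinates
    NSLog(@"bounds: %@", NSStringFromCGRect(box.bounds));   // {{0, 0}, {100, 100}}   - the view's own coordinate system
    NSLog(@"center: %@", NSStringFromCGPoint(box.center));  // {100, 100}             - midpoint of the frame, superview coordinates
}
@end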

Application States:

  • Not Running: The app has not been launched or it has been terminated.
  • Inactive: The app is in the foreground but not receiving events, Ex. User has locked the device with the app active.
  • Active: The normal state of “In Use” for an app.
  • Background: The app is no longer on-screen, but still executing the code.
  • Suspended: The app is still resident in memory but is not executing code.

BLOCK:

dispatch barrier: https://izeeshan.wordpress.com/2014/09/03/dispatch-barriers/
dispatch_group:
http://amro.co/gcd-using-dispatch-groups-for-fun-and-profit
https://www.mikeash.com/pyblog/friday-qa-2009-09-04-intro-to-grand-central-dispatch-part-ii-multi-core-performance.html
http://www.raywenderlich.com/63338/grand-central-dispatch-in-depth-part-2

Multithreading: Sync v Async, Concurrency v Non-Concurrency, Parallel v Concurrency.
https://izeeshan.wordpress.com/2014/08/17/multi-threading-using-nsoperation/

Operating System: https://izeeshan.wordpress.com/2016/06/04/operating-system/

Difference between +alloc and +allocWithZone : When one object creates another, it’s sometimes a good idea to make sure they’re both allocated from the same region of memory. The zone method (declared in the NSObject protocol) can be used for this purpose; it returns the zone where the receiver is located. (2014) Apple documentation says that the zone parameter is ignored, and “This method exists for historical reasons; memory zones are no longer used by Objective-C.”

#TODO
Network layers, SSL pinning
Cryptography
Public key v Private key:

Other References
  1. NEVER MISS What’s New: https://developer.apple.com/library/ios/releasenotes/General/WhatsNewIniOS/Articles/iOS9.html
  2. Cameron Banga
  3. Raywenderlich Blog
  4. NSHipster Quizzes
  5. http://bizzydevapps.weebly.com/
  6. http://huntmyideas.weebly.com/blog/category/ios-interview-questions-and-answers
  7. https://horseshoe7.wordpress.com/2012/07/25/what-to-ask-an-ios-developer-at-their-interview/
Thanks for reading.
Feel free to add other links or questions in comments, Cheers!!

why Swift

Why did Apple introduce a new programming language, Swift, instead of embracing an existing one like Java, Python, or C++, one that already has a community, existing developers, and lots of resources?

First of all, Apple is notorious for ignoring what everyone else is doing and heading off in their own direction if they think it’s the best solution.

Objectives:

  • To update Objective-C so that we can have a faster, easier, safer language without destroying Cocoa.
  • To take advantage of the headway they have made with compiler technology.
  • To achieve low level work, all the way down to the kernel if needed.
  • WatchKit development.
  • Last but not least, the most important goal is interoperation: any replacement would have to work naturally with the existing frameworks.

Currently on the plate:

  • Java belongs to someone else and needs a virtual machine.
  • Apple already supports C++ to some extent, including making a connection between C++ and Cocoa. But replacing one complicated C-based language with an even more complicated C-based language would not result in progress.
  • Ruby 2.1, on the other hand, does support keyword arguments that would work with Objective-C. In fact, Apple wrote this at one point—it was called MacRuby. Apple abandoned MacRuby, though, so apparently they weren’t happy with it.
  • Neither Python nor Ruby is compiled, which rules them out.

So they created Swift, which not only replaces Objective-C but also works with all existing frameworks naturally. And it isn’t particularly hard to learn. By creating their own language from the ground up, Apple is constrained by nothing but their own requirements and resources. Since their requirements are rather specific and their resources include one of the world’s best compiler teams, that’s a pretty good place to be.

How swift is swift?

  1. Static Typing
  2. Value Types
  3. No pointer aliasing
  4. Constants
  5. Copy on write
https://developer.apple.com/swift/

 

Custom NSLog

ZKLog

Writing our own NSLog as a C function with variable-length arguments.

I’ve used the fprintf() function to write to the console; it sends formatted output to the given stream (printf() internally invokes fprintf() with stdout).

stderr stands for the standard error stream. The benefit of writing to it is that anything written to standard error is not buffered, so it appears on the console immediately, which is useful for debugging.

void ZKLog(NSString*msg,...) {
  va_list arguments;
  va_start(arguments, msg);
  NSString* message = [[NSString alloc] initWithFormat:msg arguments:arguments];
  va_end(arguments);
  fprintf(stderr, "\n%s\n", [message UTF8String]);
  [message release]; message = nil;
}
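A quick usage example (the message text is arbitrary):

ZKLog(@"Fetched %d documents for user %@", 3, @"zeeshan");
// Printed immediately (unbuffered) on the console:
// Fetched 3 documents for user zeeshan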

Core Data Migration

Core Data

A framework which manages where data is stored, how it is stored, data caching, and memory management. It provides APIs to handle our data (insertion, update, deletion) and for data validation. Basically, these Core Data APIs take care of all the data-management rules on top of SQLite.
When did it come? It was ported to the iPhone from Mac OS X with the 3.0 iPhone SDK release.

Why Core Data?

Why we might want to use Core Data with SQLite for storage over property lists, a custom XML format, or direct SQLite database access? Here we have some reasons:
  1. It allows developers to create and use a database and perform queries using SQL-less conditions (without writing SQL).
  2. We interact with SQLite in Objective-C, in an object-oriented way, and we don’t have to worry about connections or managing the database schema. It’s basically a fully object-oriented API to store data in a database; we can think of it as an object-oriented database on top of a SQL database.
  3. The main benefit of this approach is that it reduces development time and simplifies the process. It can reduce the memory overhead of our app, increase responsiveness, and save us from writing a lot of boilerplate code; writing complex SQL queries and handling SQLite operations by hand is quite difficult.

How does it work?

There is a tool in Xcode which we use to create a visual mapping of the objects (actually NSManagedObject subclasses) that we are going to store in our database, and then Core Data manages all the interaction in between.
Once we have this visual mapping created in Xcode, we can create new objects, put them in the database, and query for objects in the database, which has a SQL database behind it. Core Data manages all the communication behind the scenes; all we see is the object-oriented side of it.

Core Data Terms


A Model (Managed Object Model / xcdatamodeld)

A model that holds all the table information. It’s the schema, or database schema: a collection of all the tables (entities, data models) that we use in our application. NSManagedObjectModel is a class that contains definitions for each of the table objects (also called “Entities”) that we have in our database. Usually we use the visual editor (xcdatamodeld) to set up which tables are in the database, what their columns are, and how they relate to each other. However, we can do this with code too!

A Store (Persistent Store) – sqlite file

A file or a database file (with .sqlite extension) stored in our application’s document directory.

A Coordinator (Persistent Store Coordinator)

A coordinator which first associates the store (sqlite file) with the model, and then coordinates (mediates) between store(s) and context(s).
It’s the database connection, where we set up the actual names and locations of the databases that will be used to store the objects; any time a managed object context needs to save something, it goes through this single coordinator.

A Context (Managed Object Context)

The job of a context is to use the coordinator to first retrieve model information and then save our data into the store. A context without a coordinator is of no use, as it cannot access a model except through a coordinator. Its primary responsibility is to manage a collection of managed objects.
We can think of this as a “scratch pad” for objects that come from the database. It’s also the most important of the three for us, because we’ll be working with this the most. Basically, whenever we need to get objects, insert objects, or delete objects, we call methods on the managed object context (or at least most of the time!)
Only a coordinator can access the model, not the context.
Only a context can access the coordinator, not us.
So, we access the context.

Problem & Solution

The problem is that we can’t add, remove, or modify any column in any table at runtime; we can’t alter the schema in any way once the app has shipped. To do that we need something called migration. In other words, when we change our data model, we also need to move the data in existing stores to the new version; changing the store format is known as migration.

When is migration required?

The easiest answer to this common question is “when we need to make changes to the data model.” When the Managed Object Model (mom/xcdatamodel) no longer matches the store, a migration (performed by an instance of NSMigrationManager) is REQUIRED.

How does it work? 

The migration process updates data created with a previous version of the Data Model to match the current Data Model. Before going into the migration process, let’s see how Core Data initialisation works.

Initializing Core Data

We initialize our Core Data stack by creating a Context (NSManagedObjectContext), and then the context initiates the rest of the work (a code sketch of these steps appears at the end of this subsection).
  1. Initialise the model with the momd (compiled xcdatamodeld).
  2. Initialise the persistent store coordinator with the model.
  3. Add a store to the persistent store coordinator. (Here Core Data checks for versioning.)
  4. Allocate the context, then set the persistent store coordinator on it.
When we encounter these steps, Core Data does a few things in-between just prior to adding the store to the coordinator.
  1. Core Data analyzes the model version of the store we are passing in to be added to the coordinator.
  2. It compares this version to the coordinator’s configured data model, if one was already configured.
  3. If the store’s model version and the coordinator’s model version don’t match, then Core Data will perform a migration, when enabled.
  4. If migrations aren’t enabled, and the store is incompatible with the model, Core Data will simply not attach the store to the coordinator and specify an error with an appropriate reason code.
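Here is a hedged Objective-C sketch of those four steps as a single helper method (the model name "Model", the store file name, and the method name are placeholders, not exact code from any template):

- (NSManagedObjectContext *)setUpCoreDataStack {
    // 1. Initialise the model from the compiled momd.
    NSURL *modelURL = [[NSBundle mainBundle] URLForResource:@"Model" withExtension:@"momd"];
    NSManagedObjectModel *model = [[NSManagedObjectModel alloc] initWithContentsOfURL:modelURL];

    // 2. Initialise the persistent store coordinator with the model.
    NSPersistentStoreCoordinator *coordinator =
        [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:model];

    // 3. Add a store to the coordinator (this is where versioning is checked).
    NSURL *docsURL = [[[NSFileManager defaultManager] URLsForDirectory:NSDocumentDirectory
                                                             inDomains:NSUserDomainMask] lastObject];
    NSURL *storeURL = [docsURL URLByAppendingPathComponent:@"Model.sqlite"];
    NSError *error = nil;
    if (![coordinator addPersistentStoreWithType:NSSQLiteStoreType
                                   configuration:nil
                                             URL:storeURL
                                         options:nil
                                           error:&error]) {
        NSLog(@"Failed to add store: %@", error);
    }

    // 4. Allocate the context and attach the coordinator to it.
    NSManagedObjectContext *context =
        [[NSManagedObjectContext alloc] initWithConcurrencyType:NSMainQueueConcurrencyType];
    context.persistentStoreCoordinator = coordinator;
    return context;
}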

The Migration Process

To start the migration process, Core Data needs the original data model (ver 1) and the destination model (ver 2). It uses these two versions to load or create a mapping model for the migration. It then uses the mapping model to convert data in ver 1 (in the original store) into data it can store as ver 2 (in the new, or destination, store). Once Core Data determines the mapping model, the migration process can start in earnest.

During migration, Core Data creates two stacks, one for the source store and one for the destination store. Core Data then fetches objects from the source stack and inserts the appropriate corresponding objects into the destination stack.

Basically, migrations happen in three steps: first, Core Data copies over all the objects from one data store to the next; next, Core Data connects and relates all the objects according to the relationship mapping; finally, it enforces any data validation in the destination model (validation is disabled during the data copy itself).

Changes that do not require Migration:

Basically anything that doesn’t change the underlying SQLite backing store, including:

  1. Changing the name of an NSManagedObject subclass
  2. Adding or removing a transient property
  3. Making changes to the user info dictionary
  4. Changing validation rules.

Requirements for the Migration Process: 

Migration of a persistent store is performed by an instance of NSMigrationManager. To migrate a store, the migration manager requires several things:

  1. The managed object model for the destination store. (This is the persistent store coordinator’s model.)
  2. A managed object model that it can use to open the existing store.
  3. Typically, a mapping model that defines a transformation from the source (the store’s) model to the destination model.

You don’t need a mapping model if you’re able to use lightweight migration.

Core Data Migration Ways

Primary ways to create a migration:

  • Automatic (aka lightweight),
  • Manual, and
  • Custom code.
But in reality, the migration process may involve one or more of these techniques.

The golden rule when it comes to Core Data migrations is, choose lightweight whenever possible. Manual migrations and migrations requiring custom code are a magnitude more complex and memory intensive.

NOTE – Core Data does not perform migration version-by-version; it migrates directly from whatever version the store is at to the current one, e.g. from ver 1 straight to ver 3.

Types of Migrations:

These are not official categories of migration.

  1. Lightweight migrations
  2. Manual migrations
  3. Custom manual migrations
  4. Fully manual migrations

Lightweight migrations: A lightweight migration is Apple’s term for the migration with the least amount of work involved on your part. If you just make simple changes to your model (such as adding a new attribute to an entity), Core Data can perform automatic data migration, referred to as lightweight migration.

Lightweight migration is fundamentally the same as ordinary migration, except that instead of you providing a mapping model, Core Data infers one from differences between the source and destination managed object models.

Manual migrations: Manual migrations involve a little more work on our part. We need to specify how to map the old set of data onto the new set, but we get the benefit of an explicit mapping model file to configure. Setting up a mapping model in Xcode is much like setting up a data model, with similar GUI tools and some automation.

Custom manual migrations: If the transformation (add/remove properties or entities to your existing model) is more complex, however, you might need to create a subclass of NSEntityMigrationPolicy to perform the transformation.

This is level 3 of the migration complexity index. You still use a mapping model, but add to that custom code with the ability to also specify custom transformation logic on data. In this case, custom entity transformation logic involves creating an NSEntityMigrationPolicy subclass and performing custom transformations there.

Fully manual migrations: Fully manual migrations are for those times when even specifying custom transformation logic isn’t enough to fully migrate data from one model version to another. In this case, custom version detection logic and custom handling of the migration process are necessary.

Lightweight Migration

What lightweight migration can do.

Lightweight migrations can handle the following changes:

  1. Adding or removing an entity, attribute, or relationship
  2. Making an attribute non-optional with a default value
  3. Making a non-optional attribute optional
  4. Renaming an entity or attribute using a renaming identifier

Enable Lightweight Migrations –

You need to pass a dictionary containing two keys in the options parameter of the method that adds the persistent store to the coordinator (a snippet follows the list below). These keys are:

  • NSMigratePersistentStoresAutomaticallyOption – attempt to automatically migrate versioned stores
  • NSInferMappingModelAutomaticallyOption – attempt to create the mapping model automatically
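For instance (assuming the coordinator and storeURL variables from your existing Core Data stack setup):

NSDictionary *options = @{ NSMigratePersistentStoresAutomaticallyOption : @YES,
                           NSInferMappingModelAutomaticallyOption       : @YES };

NSError *error = nil;
if (![coordinator addPersistentStoreWithType:NSSQLiteStoreType
                               configuration:nil
                                         URL:storeURL
                                     options:options
                                       error:&error]) {
    NSLog(@"Failed to add store with lightweight migration: %@", error);
}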

Steps –

  1. Open your .xcdatamodeld file
  2. Click on Editor in Menu Bar
  3. Select Add Model Version….
  4. Add a new version of your model (the new group of datamodels added)
  5. Select the main file, open File Inspector (Right-Hand Panel)
  6. Under Model Version, select your new data model version as the current model from the drop-down.
  7. That’s not all; we also need to enable this so-called “lightweight migration” in code.
  8. Go to your Core Data stack manager and find where the persistentStoreCoordinator is being created.
  9. Find this line if (![_persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType configuration:nil URL:storeURL options:nil error:&error])
  10. Replace nil options value with @{NSMigratePersistentStoresAutomaticallyOption:@YES, NSInferMappingModelAutomaticallyOption:@YES} (actually provided in the commented code in that method)

Here you go, have fun!

Manual Migration

If you want to make a change to the new model that’s not supported by lightweight migrations, you need to create a mapping model. A mapping model needs a source and a destination data model. When you add a new version of your data model, you are asked to select the model on which it should be based.

Adding a version to data models:

  1. Open your .xcdatamodeld file
  2. Click on Editor in Menu Bar
  3. Select Add Model Version….
  4. Add a new version of your model (the new group of datamodels added)
  5. Select the main file, open File Inspector (Right-Hand Panel)
  6. Under Model Version, select your new data model version as the current model from the drop-down.
  7. Make the desired changes in Newly created Data Models (xcdatamodel file).

Adding a mapping model for versions:

  1. Select New, File… from File menu on menu bar.
  2. Select Core Data from left panel and Mapping Model from right options then click Next.
  3. Select Source Data Model (Version 1) from list of xcdatamodel files then click Next.
  4. Choose Target Data Model (Destination or Next Version) from same list of xcdatamodel files then click Next.

Now, let’s perform Manual Migration with Code

  1. Go to your Core Data stack manager and find where the persistentStoreCoordinator is being created.
  2. Find this line if (![_persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType configuration:nil URL:storeURL options:nil error:&error])
  3. Replace nil options value with @{NSInferMappingModelAutomaticallyOption:@YES}.

Is migration needed? Migration is needed if destinationModel is NOT compatible with store meta data. Let’s check the same.

  • Get metadata for source store from its url with given type (NSSQLiteStoreType).
  • Get the destination managed object model from xcdatamodeld url.
  • Call ‘compatible with store meta data’ function on destination managed object model. It returns bool value.
  • We need migration if above function returns false.
- (BOOL)isMigrationNeeded {
    NSError *error = nil;
    NSDictionary *sourceMetadata = [NSPersistentStoreCoordinator metadataForPersistentStoreOfType:NSSQLiteStoreType URL:[self sourceStoreURL] error:&error];
    BOOL isMigrationNeeded = NO;
    if (sourceMetadata != nil) {
        NSManagedObjectModel *destinationModel = [self managedObjectModel];
        // Migration is needed if destinationModel is NOT compatible
        isMigrationNeeded = ![destinationModel isConfiguration:nil compatibleWithStoreMetadata:sourceMetadata];
    }
    NSLog(@"isMigrationNeeded: %@", (isMigrationNeeded == YES) ? @"YES" : @"NO");
    return isMigrationNeeded;
}

The Migration Method

  1. Get metadata for source store from its url with given type (NSSQLiteStoreType).
  2. Create managed object model from source meta data.
  3. Get the destination managed object model from xcdatamodeld url.
  4. Get Mapping Model from bundle with source and destination managed object model.
  5. Create a destination store url (a different sqlite file path).
  6. Initialise a Migration Manager with source and destination managed object model.
  7. Now, NSMigrationManager can infer the mapping model between two models. So, call a migrate store function on manager with mapping model from source store url to destination store url.
  8. Remove old store file(s) (shm or wal) and then copy the destination store url to old source store location.
- (BOOL)migrate {

    NSURL *sourceUrl = [self sourceStoreURL];

    // 1. Get metadata for source store from its URL with given type.
    NSError *error = nil;
    NSDictionary *sourceMetadata = [NSPersistentStoreCoordinator metadataForPersistentStoreOfType:NSSQLiteStoreType URL:sourceUrl error:&error];
    if (sourceMetadata == nil) {
        NSLog(@"FAILED to create source meta data");
        return NO;
    }

    // 2. Create model from source store meta deta.
    NSManagedObjectModel *sourceModel = [NSManagedObjectModel mergedModelFromBundles:@[[NSBundle mainBundle]] forStoreMetadata:sourceMetadata];
    if (sourceModel == nil) {
        NSLog(@"FAILED to create source model, something wrong with source xcdatamodel.");
        return NO;
    }

    // 3. Get the destination managed object model from xcdatamodeld url.
    NSManagedObjectModel *destinationModel = [self managedObjectModel];

    // 4. Get Mapping model from bundle with source and destination managed object model.
    NSMappingModel *mappingModel = [NSMappingModel mappingModelFromBundles:@[[NSBundle mainBundle]] forSourceModel:sourceModel destinationModel:destinationModel];

    // 5. Create the destination store url (a different sqlite/database file path)
    NSString *fileName = @"ZKManualMigration_V2.sqlite";
    NSURL *destinationStoreURL =  [[[[NSFileManager defaultManager] URLsForDirectory:NSDocumentDirectory inDomains:NSUserDomainMask] lastObject] URLByAppendingPathComponent:fileName];

    // 6. Migrate from source to latest matched destination model,
    NSMigrationManager *manager = [[NSMigrationManager alloc] initWithSourceModel:sourceModel destinationModel:destinationModel];
    BOOL didMigrate = [manager migrateStoreFromURL:sourceUrl
                                              type:NSSQLiteStoreType
                                           options:nil
                                  withMappingModel:mappingModel
                                  toDestinationURL:destinationStoreURL
                                   destinationType:NSSQLiteStoreType
                                destinationOptions:nil
                                             error:&error];
    if (!didMigrate) {
        return NO;
    }
    NSLog(@"Migrating from source: %@ ===To=== %@", sourceUrl.path, destinationStoreURL.path);

    // 7. Delete old sqlite file
    NSError *err = nil;
    NSFileManager *fm = [NSFileManager defaultManager];
    if (![fm removeItemAtURL:sourceUrl error:&err]) {
        NSLog(@"File delete failed.");
        return NO;
    }
    NSString *str1 = [NSString stringWithFormat:@"%@-shm",sourceUrl.path];
    [fm removeItemAtURL:[NSURL fileURLWithPath:str1] error:&err];
    str1 = [NSString stringWithFormat:@"%@-wal",sourceUrl.path];
    [fm removeItemAtURL:[NSURL fileURLWithPath:str1] error:&err];

    // Move the migrated store into the old location
    if (![fm moveItemAtURL:destinationStoreURL toURL:sourceUrl error:&err]) {
        NSLog(@"File move failed.");
        return NO;
    }

    NSLog(@"Migration successful");
    return didMigrate;
}
 Sample project on Manual Migration.

Thanks for reading, cheers… 👍

Handling Shared Resources using Dispatch Barriers

Handling Thread-Unsafe Shared Resource

In the last post, we discussed creating thread-safe singleton objects. But creating a thread-safe singleton does not solve all the issues. If your singleton uses any mutable object like NSMutableArray, then you need to consider whether that object is itself thread-safe. There is a misconception that the Foundation framework is thread-safe and the Application Kit / UIKit frameworks are not. Unfortunately, this is somewhat misleading: each framework has areas that are thread-safe and areas that are not. Apple maintains a helpful list of the numerous Foundation framework classes which are not thread-safe.

The Problem

Let's take an example: we have a class DocumentManager, which handles all our document reading and writing, and which has been implemented as a singleton. This singleton uses an NSMutableArray property to keep all the document names, and NSMutableArray is a thread-unsafe class. DocumentManager has two methods, addDocumentName: and allDocs. Although many threads can read an instance of NSMutableArray simultaneously without issue, it's not safe to let one thread modify the array while another is reading it, and our singleton does nothing to prevent this in its current state. To see the problem, have a look at the two methods, reproduced after the sketch below.
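
For reference, the DocumentManager used throughout this post can be assumed to look roughly like the following sketch; the exact declarations live in the sample project, and the names here simply mirror the ones used in the rest of the post.

#import <Foundation/Foundation.h>

@interface DocumentManager : NSObject

+ (instancetype)sharedManager;

- (void)addDocumentName:(NSString *)docName;  // write
- (NSArray *)allDocs;                         // read

@end

// Class extension: the private, thread-unsafe backing store.
@interface DocumentManager ()
@property (nonatomic, strong) NSMutableArray *arrDocs;
@end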

- (void)addDocumentName:(NSString*)docName {
    if (docName)
       [_arrDocs addObject:docName];
}

This is a write method: it modifies the mutable array by adding a document name to it. Now take a look at the getter method:

- (NSArray*)allDocs { 
    return [NSArray arrayWithArray:_arrDocs];
}

This is a read method, as it reads the mutable array property. It hands the caller an immutable copy in order to defend against the caller mutating the array inappropriately, but none of this protects against one thread calling the write method addDocumentName: while another thread simultaneously calls the read method allDocs. This is the classic Readers-Writers Problem of software development. GCD provides an elegant solution: building a readers-writer lock out of dispatch barriers.

Dispatch Barriers

A dispatch barrier lets you make a thread-unsafe object thread-safe by creating a synchronisation point for a block executing in a concurrent dispatch queue. Dispatch barriers are a group of functions that act like a serial-style choke point when used with concurrent queues. Using GCD's barrier API ensures that the submitted block is the only item executing on the specified queue at that moment: every item submitted to the queue before the barrier must complete before the barrier block runs, and while the barrier block runs, the queue executes nothing else. Once it finishes, the queue returns to its default behaviour. GCD provides both synchronous and asynchronous barrier functions. The diagram below illustrates the effect of a barrier on various asynchronous blocks:

[Diagram: Execution of a Dispatch Barrier]
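
As a minimal, self-contained illustration of that ordering (the queue label is made up for this sketch and is not part of the DocumentManager example):

dispatch_queue_t queue = dispatch_queue_create("com.example.barrierDemo", DISPATCH_QUEUE_CONCURRENT);

// These two blocks may run concurrently, in any order.
dispatch_async(queue, ^{ NSLog(@"read 1"); });
dispatch_async(queue, ^{ NSLog(@"read 2"); });

// The barrier waits for everything above to finish, runs alone, then lets the queue resume.
dispatch_barrier_async(queue, ^{ NSLog(@"exclusive write"); });

// Submitted after the barrier, so it runs only once the barrier block has completed.
dispatch_async(queue, ^{ NSLog(@"read 3"); });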

Notice how in normal operation the queue acts just like a normal concurrent queue. But when the barrier is executing, it essentially acts like a serial queue. That is, the barrier is the only thing executing. After the barrier finishes, the queue goes back to being a normal concurrent queue. Here’s when you would – and wouldn’t – use barrier functions:

  • Custom Serial Queue: A bad choice here; barriers won’t do anything helpful since a serial queue executes one operation at a time anyway.
  • Global Concurrent Queue: Use caution here; this probably isn’t the best idea since other systems might be using the queues and you don’t want to monopolise them for your own purposes.
  • Custom Concurrent Queue: This is a great choice for atomic or critical areas of code. Anything you’re setting or instantiating that needs to be thread safe is a great candidate for a barrier.

Since the only decent choice above is the custom concurrent queue, you’ll need to create one of your own to handle your barrier function and separate the read and write functions. The concurrent queue will allow multiple read operations simultaneously. Creating a custom concurrent queue is easy: just pass DISPATCH_QUEUE_CONCURRENT to the dispatch_queue_create function. Open your singleton class, and add the following private property to the class extension category:

@property (nonatomic, strong) dispatch_queue_t concurrentDocumentQueue;

Now replace the addDocumentName: method with the implementation below:

- (void)addDocumentName:(NSString*)docName {
    if (docName) { // 1
        dispatch_barrier_async(self.concurrentDocumentQueue, ^{ // 2
            [_arrDocs addObject:docName]; // 3
        });
    }
}

Here’s how your new write function works:

  1. Check that there’s a valid document name before performing all the following work.
  2. Add the write operation using your custom queue. When the critical section executes at a later time this will be the only item in your queue to execute.
  3. This is the actual code which adds the object to the array. Since it’s a barrier block, this block will never run simultaneously with any other block in concurrentDocumentQueue.

This takes care of the write method, but we also need to implement the allDocs read method and instantiate concurrentDocumentQueue. To ensure thread safety with the writer side of things, you need to perform the read on the concurrentDocumentQueue queue. Since the function has to return a value, you can't dispatch asynchronously to the queue; an asynchronous block wouldn't necessarily run before the reader function returns. In this case, dispatch_sync is an excellent candidate: dispatch_sync() synchronously submits work and waits for it to complete before returning. Use dispatch_sync to keep track of your work with dispatch barriers, or when you need to wait for the operation to finish before you can use the data processed by the block. In the second case, you'll sometimes see a __block variable declared outside the dispatch_sync scope so the processed object can be used after the dispatch_sync call returns.

You need to be careful, though. Imagine you call dispatch_sync and target the queue you're already running on. This results in a deadlock: the call waits until the block finishes, but the block can't finish (it can't even start!) until the currently executing task finishes, and that task can't finish because it's waiting on the block. This should force you to be conscious of which queue you're calling from as well as which queue you're passing in; a short sketch of the deadlock follows the list below. Here's a quick overview of when and where to use dispatch_sync:

  • Custom Serial Queue: Be VERY careful in this situation; if you’re running in a queue and call dispatch_sync targeting the same queue, you will definitely create a deadlock.
  • Main Queue (Serial): Be VERY careful for the same reasons as above; this situation also has potential for a deadlock condition.
  • Concurrent Queue: This is a good candidate to sync work through dispatch barriers or when waiting for a task to complete so you can perform further processing.
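
To make the deadlock warning concrete, here is a minimal sketch (the queue label is illustrative) of the pattern to avoid: calling dispatch_sync on the queue you are already running on.

dispatch_queue_t serialQueue = dispatch_queue_create("com.example.deadlockDemo", DISPATCH_QUEUE_SERIAL);

dispatch_async(serialQueue, ^{
    // We are now running on serialQueue.
    // dispatch_sync targets the same queue, so it waits for a block that can
    // never start until this outer block finishes: a guaranteed deadlock.
    dispatch_sync(serialQueue, ^{
        NSLog(@"never reached");
    });
});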

Now replace the allDocs method with the following implementation:

- (NSArray*)allDocs {
    __block NSArray *array; // 1
    dispatch_sync(self.concurrentDocumentQueue, ^{ // 2
        array = [NSArray arrayWithArray:_arrDocs]; // 3
    });
    return array;
}

Here’s your read function. Taking each numbered comment in turn, you’ll find the following:

  1. The __block storage qualifier allows array to be assigned to inside the block. Without it, array would be read-only inside the block and the assignment wouldn't even compile.
  2. Dispatch synchronously onto the concurrentDocumentQueue to perform the read.
  3. Store the document array in array and return it.

Finally, you need to instantiate your concurrentDocumentQueue property. Since we are working with a singleton class, we make this change in the sharedManager method so that the queue is created only once. Change sharedManager to instantiate the custom concurrent queue like so:

+ (instancetype)sharedManager {
    static DocumentManager *sharedDocumentManager = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
      sharedDocumentManager = [[DocumentManager alloc] init];
      sharedDocumentManager->_arrDocs = [NSMutableArray array];
      // ADD THIS:
      sharedDocumentManager->_concurrentDocumentQueue = dispatch_queue_create("com.mycompany.ThreadySafety.documentQueue",DISPATCH_QUEUE_CONCURRENT); 
    });
    return sharedDocumentManager;
}

This initialises concurrentDocumentQueue as a concurrent queue using dispatch_queue_create. The first parameter is a label, conventionally in reverse-DNS style; make it descriptive, since it can be helpful when debugging. The second parameter specifies whether the queue is serial or concurrent. With this in place, your mutable property is now thread-safe: no matter where or how you read or write it, you can be confident the access happens safely, with no unpleasant surprises. Note: when searching for examples on the web, you'll often see people pass 0 or NULL as the second parameter of dispatch_queue_create. That is a dated way of creating a serial dispatch queue; it's always better to be explicit with your parameters, as the sketch below shows.
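
For clarity, the two forms that note refers to look like this (the queue labels are illustrative):

// Serial queue: blocks run one at a time, in FIFO order.
// Passing NULL (or 0) as the second argument is the older, implicit way of asking for this.
dispatch_queue_t serialQueue = dispatch_queue_create("com.example.serialQueue", DISPATCH_QUEUE_SERIAL);

// Concurrent queue: blocks may run simultaneously; barriers provide exclusive access when needed.
dispatch_queue_t concurrentQueue = dispatch_queue_create("com.example.concurrentQueue", DISPATCH_QUEUE_CONCURRENT);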

You can download the source code, which contains all the code snippets used in this post. Some parts of this post are derived from the Ray Wenderlich tutorials. Thanks for reading. 🙂