iOS Image and Graphics Best Practices You Should Take Seriously

UIKit - Images and Graphics Best Practices - Techniques and strategies for efficiently using graphic content in your app


Problem Solving - How to integrate advanced CPU and GPU technology into your app
When an app uses more CPU, battery life and responsiveness suffer
What may be less obvious is that when your app, and other apps on the system, consume more memory, CPU usage also rises, which further hurts battery life and performance
So we will focus on how to use these resources more efficiently
What better context for this discussion than an app that handles a lot of graphic content, like the Photos app?
UIImage is UIKit's high-level class for working with image data
We tend to divide the graphic content in apps into two categories: rich content, such as photos, and iconography, such as an icon displayed in a button


UIImageView is the class UIKit provides for displaying a UIImage
If we use the classic MVC pattern as an analogy, UIImage can be regarded as the model object and UIImageView as the view
Each has its own responsibility in this model: UIImage is responsible for loading the image content, and UIImageView is responsible for displaying and rendering it
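
As a minimal sketch of this division of responsibilities (the asset name "photo" and the frame are hypothetical, for illustration only):

```swift
import UIKit

// UIImage (the model) loads and holds the image content.
// "photo" is a hypothetical asset name used only for this example.
let image = UIImage(named: "photo")

// UIImageView (the view) is responsible for displaying and rendering it.
let imageView = UIImageView(frame: CGRect(x: 0, y: 0, width: 200, height: 200))
imageView.contentMode = .scaleAspectFit
imageView.image = image
```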


Image Rendering Pipeline

Rendering is a continuous process, not a one-time event
There is actually a hidden stage that is crucial to your app's performance; this stage is called decoding
In order to discuss decoding, we first need to introduce a concept called a "buffer"
Buffer - a contiguous region of memory, often viewed as a sequence of elements

A buffer is just a contiguous region of memory, but we tend to use the term for memory made up of a sequence of elements that are the same size and usually share the same internal structure. Our focus here is on one particularly important kind of buffer: the image buffer

We use that term to refer to a buffer that holds an in-memory representation of an image; each element of the buffer describes the color and transparency of a single pixel
So the size of this buffer in memory is proportional to the size of the image it contains
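
As a rough back-of-the-envelope sketch of that proportionality (assuming the common 4-bytes-per-pixel SRGB format; the 3840x2400 dimensions match the example measured later in this article):

```swift
// Estimated size of the decoded image buffer for a 3840x2400 image,
// assuming 4 bytes per pixel (one byte each for red, green, blue, alpha).
let width = 3_840
let height = 2_400
let bytesPerPixel = 4

let bufferSizeInBytes = width * height * bytesPerPixel   // 36,864,000 bytes
let bufferSizeInMB = Double(bufferSizeInBytes) / 1_048_576
print(bufferSizeInMB)   // ≈ 35.2 MB, regardless of how small the image view is
```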
A particularly important example of a buffer is the framebuffer
The framebuffer

The framebuffer is responsible for holding the actual rendered output in your app, so when your app updates its view hierarchy, UIKit will re-render the app's window and all of its subviews into the framebuffer

This framebuffer provides color information for each pixel, which the display hardware will read in order to light up the corresponding pixel on the display

That last step happens at a regular interval

It might happen at 60 frames per second, i.e. once every 1/60th of a second
Or, on an iPad with a ProMotion display, once every 1/120th of a second

If nothing has changed in your app, the display hardware pulls the same data it last saw from the framebuffer

But when you change the content of a view, for example by assigning a new UIImage to an image view, UIKit re-renders your app's window and puts the result into the framebuffer


The next time the display hardware pulls from the framebuffer, it will get your new content


Now we can contrast the image buffer with another kind of buffer: the data buffer

A data buffer is simply a buffer that contains a sequence of bytes; in our case we care about data buffers that contain image files, perhaps downloaded from the web or loaded from disk
A data buffer containing an image file usually starts with some metadata describing the size of the image stored in it
It then contains the image data itself, encoded in some form such as JPEG compression or PNG
This means that the bytes following the metadata do not directly describe anything about the pixels in the image
With that in mind, we can take a deeper look at the pipeline we set up
Here's an image view, and we've highlighted the region of the framebuffer that will be rendered and filled by the image view

We have assigned a UIImage to this image view; it has a data buffer representing the contents of the image file, which may have been downloaded from the network or read from disk, but we need to fill the framebuffer with per-pixel data
In order to do this, UIImage allocates an image buffer
Pipeline in Action

Its size is equal to the size of the image contained in the data buffer, and UIImage performs an operation called decoding, which converts the JPEG, PNG, or other encoded image data into per-pixel image information

Then, depending on the content mode of our image view, when UIKit asks the image view to render, it copies and scales the data from the image buffer as it copies it into the framebuffer
Now, the decoding stage is CPU intensive, especially for large images
So rather than doing that work every time UIKit asks the image view to render,
UIImage hangs on to the image buffer so that the decode only happens once

As a result, your app may keep a large memory allocation around for every decoded image
As mentioned earlier, this allocation is proportional to the size of the input image, not to the size of the view the framebuffer actually renders, which can have some fairly adverse performance consequences

Large allocations in your app's address space can force other related content away from the content it wants to reference; this is called fragmentation (a consequence of heavy memory use)
If your app's memory usage keeps growing, the OS will step in and start transparently compressing the contents of physical memory
The CPU has to take part in that compression on top of your app's own CPU usage, so you may drive up global CPU usage in ways you can't control
Eventually your app may consume so much physical memory that the OS needs to start terminating processes, beginning with low-priority background processes
If your app's memory consumption grows large enough, your app itself may be terminated; and a terminated background process may have been doing important work on behalf of the user, so it may be relaunched shortly after being killed
So even if your app only consumes a large amount of memory for a short period of time, it can have a profound impact on CPU usage, which is why we want to reduce your app's memory usage
We can achieve this with a technique called downsampling
Now let's look at a few more details of the image rendering pipeline, in particular the case where the image view in which we want to display the image is actually smaller than the image being displayed

Usually the Core Animation framework takes care of shrinking the image during the rendering phase, but we can save memory by using this downsampling technique instead
Essentially, we capture that scaled-down version up front and put it into a thumbnail object

We end up with lower memory usage because the decoded image buffer is smaller
To do this, we set up an image source, create a thumbnail, and capture the decoded image buffer in a UIImage, which we then assign to our image view
We can then discard the data buffer containing the original image

The result is that our app has a smaller long-term memory footprint
There are several steps in code to perform this operation. Let's briefly walk through the process and take a first look at the effect.
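
A sketch of what such a downsample function can look like, using the ImageIO framework (the function name, parameters, and defaults here are assumptions for illustration, not necessarily the exact code measured below):

```swift
import UIKit
import ImageIO

// Downsample a large image on disk to roughly the size of the view that
// will display it, decoding only the smaller thumbnail into memory.
func downsample(imageAt imageURL: URL,
                to pointSize: CGSize,
                scale: CGFloat = UIScreen.main.scale) -> UIImage? {
    // Create the image source without decoding the full image up front.
    let sourceOptions = [kCGImageSourceShouldCache: false] as CFDictionary
    guard let imageSource = CGImageSourceCreateWithURL(imageURL as CFURL, sourceOptions) else {
        return nil
    }

    // The thumbnail only needs to be as large as the view, in pixels.
    let maxDimensionInPixels = max(pointSize.width, pointSize.height) * scale
    let downsampleOptions = [
        kCGImageSourceCreateThumbnailFromImageAlways: true,
        kCGImageSourceShouldCacheImmediately: true,       // decode now, at this smaller size
        kCGImageSourceCreateThumbnailWithTransform: true,
        kCGImageSourceThumbnailMaxPixelSize: maxDimensionInPixels
    ] as CFDictionary

    guard let downsampledImage =
            CGImageSourceCreateThumbnailAtIndex(imageSource, 0, downsampleOptions) else {
        return nil
    }
    return UIImage(cgImage: downsampledImage)
}
```

With this in place, the image view can be given the downsampled image instead of a full-size UIImage, for example `imageView.image = downsample(imageAt: fileURL, to: imageView.bounds.size)`.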
We load a 1.2 MB, 3840x2400 image

Loading the image without any strategy consumes 112 MB of memory

Using the downsample function consumes 66.7 MB of memory, saving nearly half
