iOS GCD: what does "disk I/O is throttled" mean in the DISPATCH_QUEUE_PRIORITY_BACKGROUND documentation?

As the doc says:

DISPATCH_QUEUE_PRIORITY_BACKGROUND: Items dispatched to the queue will run at background priority; the queue will be scheduled for execution after all higher-priority queues have been scheduled, and the system will run items on this queue on a thread with background status as per setpriority(2) (i.e. disk I/O is throttled and the thread's scheduling priority is set to the lowest value).

What does "disk I/O is throttled" mean in the last part of that documentation? Does it mean that tasks running at the DISPATCH_QUEUE_PRIORITY_BACKGROUND level cannot access the disk at all?
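For reference, this is roughly how I am using the queue (a minimal C sketch of the GCD API; the work inside the block is just a placeholder for my real file reads and writes):

```c
#include <dispatch/dispatch.h>
#include <stdio.h>

int main(void) {
    // The global concurrent queue that runs at background priority.
    dispatch_queue_t bg =
        dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0);

    dispatch_semaphore_t done = dispatch_semaphore_create(0);

    dispatch_async(bg, ^{
        // Placeholder work; the real code reads and writes files on disk.
        printf("running at background priority\n");
        dispatch_semaphore_signal(done);
    });

    // Keep main() alive until the background block has finished.
    dispatch_semaphore_wait(done, DISPATCH_TIME_FOREVER);
    return 0;
}
```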





1 answer


From the documentation we can conclude that DISPATCH_QUEUE_PRIORITY_BACKGROUND runs its items on a thread with "background status as per setpriority(2)". setpriority(2) has a prio parameter that can be set either to 0 or to PRIO_DARWIN_BG. I guess this means that PRIO_DARWIN_BG is used, and the documentation describes it as:

When a thread or process is in a background state, the scheduling priority is set to the lowest value, disk I/O is throttled (with behavior similar to using setiopolicy_np(3) to set the throttle policy), and network I/O is throttled for any sockets opened after going into the background state. Any previously opened sockets are not affected.
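To make that concrete, this is roughly how a thread can be put into (and taken out of) that background state by hand. It is only a sketch of the public call using the Darwin constants from sys/resource.h; PRIO_DARWIN_THREAD scopes it to the calling thread (PRIO_DARWIN_PROCESS would cover a whole process), and I am not claiming this is what libdispatch does internally:

```c
#include <sys/resource.h>
#include <stdio.h>

int main(void) {
    // Move the calling thread into the Darwin "background" state:
    // lowest scheduling priority, throttled disk I/O, throttled network I/O.
    if (setpriority(PRIO_DARWIN_THREAD, 0, PRIO_DARWIN_BG) != 0) {
        perror("setpriority");
        return 1;
    }

    // ... low-priority disk work would go here ...

    // Passing 0 instead of PRIO_DARWIN_BG takes the thread back out of
    // the background state.
    setpriority(PRIO_DARWIN_THREAD, 0, 0);
    return 0;
}
```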



setiopolicy_np(3) can set a thread's I/O policy to IOPOL_IMPORTANT, IOPOL_STANDARD, IOPOL_UTILITY, IOPOL_THROTTLE, or IOPOL_PASSIVE. Its man page describes the effect of throttled disk I/O as:

If a throttleable request occurs within a small time window of a request of higher priority, the thread that issued the throttleable I/O is forced to sleep for a short period. (Both this window and the sleep period depend on the policy of the throttleable I/O.) This slows down the thread that issues the throttleable I/O so that higher-priority I/O can complete with low latency and receive a greater share of the disk bandwidth. Furthermore, an IMPORTANT I/O request can bypass a previously issued throttleable I/O request in kernel or driver queues and be sent to the device first. In some circumstances, very large throttleable I/O requests will be broken into smaller requests which are then issued serially.
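For illustration, this is how a thread could opt into that throttled policy directly via setiopolicy_np(3). Again, this is a sketch of the public C call and not a claim about what libdispatch does internally:

```c
#include <sys/resource.h>
#include <stdio.h>

int main(void) {
    // Apply the throttled disk I/O policy to the calling thread only
    // (IOPOL_SCOPE_PROCESS would cover the whole process instead).
    if (setiopolicy_np(IOPOL_TYPE_DISK, IOPOL_SCOPE_THREAD, IOPOL_THROTTLE) != 0) {
        perror("setiopolicy_np");
        return 1;
    }

    // Disk reads and writes issued from this thread are now throttleable:
    // they may be forced to sleep briefly while higher-priority I/O runs.

    // Read the policy back to confirm it took effect.
    printf("disk I/O policy is now %d\n",
           getiopolicy_np(IOPOL_TYPE_DISK, IOPOL_SCOPE_THREAD));
    return 0;
}
```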

This basically means that reading from and writing to the disk may be slowed down or delayed when higher-priority work is also accessing the disk. So no, it does not prevent tasks running on DISPATCH_QUEUE_PRIORITY_BACKGROUND from accessing the disk.
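If you want to observe this yourself, one rough experiment is to time the same disk read on the default and background global queues. This is only a sketch: /tmp/testfile is a placeholder for any reasonably large file, and the numbers are only meaningful when other higher-priority I/O is keeping the disk busy (the file cache will also skew them):

```c
#include <dispatch/dispatch.h>
#include <stdio.h>
#include <time.h>

// Monotonic wall-clock time in seconds.
static double now_sec(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

// Read the whole file and print how long it took.
// "/tmp/testfile" is a placeholder; substitute any reasonably large file.
static void timed_read(const char *label) {
    double start = now_sec();
    FILE *f = fopen("/tmp/testfile", "rb");
    if (f == NULL) { perror("fopen"); return; }
    char buf[65536];
    while (fread(buf, 1, sizeof buf, f) > 0) { /* just consume the data */ }
    fclose(f);
    printf("%s: %.3f s\n", label, now_sec() - start);
}

int main(void) {
    dispatch_semaphore_t done = dispatch_semaphore_create(0);

    // Same read, once at default priority and once at background priority.
    // Both complete; the background one is merely eligible for throttling.
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        timed_read("default priority");
        dispatch_semaphore_signal(done);
    });
    dispatch_semaphore_wait(done, DISPATCH_TIME_FOREVER);

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
        timed_read("background priority");
        dispatch_semaphore_signal(done);
    });
    dispatch_semaphore_wait(done, DISPATCH_TIME_FOREVER);
    return 0;
}
```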


