MPP Scheduling

This section describes MPP scheduling flows, task types, and their usage in SOF based on the Zephyr API.

MPP Scheduling defines four task categories:

  • Low Latency audio data processing tasks (LL) - high priority,

  • Tasks with Budget (TwB) - medium priority,

  • Audio Data Processing tasks (DP) - low priority,

  • background (idle) tasks - lowest priority.

NOTE: As of today, only LL tasks have been integrated with Zephyr. TwB, DP and idle tasks are work in progress (WIP).

The role of MPP Scheduling is limited to task thread definition, configuration, and state management. The thread scheduling itself is handled by Zephyr.

MPP Scheduling is designed to:

  • address strict real-time requirements,

    • i.e. to avoid under/overflows on isochronous interfaces such as I2S,

  • provide predictable latency,

  • reduce the amount of buffering needed.

Task category characteristics:

  • LL tasks for latency sensitive audio data processing,

    • LL tasks are organized in queues shared between component instances,

    • there is one non-preemptive, high-priority LL Thread assigned to exactly one core; for example, a HW configuration with 4 cores will have 4 LL Threads,

    • each queue is statically linked to one LL Thread, and all tasks in that queue are processed on the core that the LL Thread is assigned to,

    • there are multiple queues per LL Thread, which represent priorities and guarantee task execution order,

  • TwB for medium priority processing (e.g., IPC/IDC message handling),

    • each TwB is scheduled as a separate preemptive thread,

    • each TwB has an assigned processing budget that is refreshed on each sys tick (Zephyr thread time slicing),

    • the TwB priority is dropped to low when the budget is consumed,

  • DP tasks for low priority audio processing,

    • DP tasks are scheduled based on the earliest deadline first (EDF) algorithm,

    • each DP task is scheduled as a separate preemptive thread,

    • DP tasks can be assigned to one of the available cores,

  • idle tasks for background processing,

    • idle tasks are scheduled as separate preemptive threads,

    • they have the lowest priority and are scheduled when all other tasks have completed their processing,

    • they are used in Fast Mode, for example to drain data from firmware to the host.

NOTE: Each task is assigned by MPP Scheduling to one core. Tasks are executed by the assigned core until termination.

NOTE: For Earliest Deadline First (EDF) algorithm description, please refer to link: Wikipedia.

NOTE: For Zephyr Scheduling description, please refer to link: Zephyr Scheduling.
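
As an illustration, the four task categories above might map onto Zephyr thread priorities roughly as sketched below. The macro names and numeric values are illustrative assumptions, not the actual SOF Kconfig options.

#include <zephyr/kernel.h>

/* Illustrative priority mapping - real values come from SOF/Zephyr Kconfig. */
#define MPP_LL_THREAD_PRIO   K_PRIO_COOP(0)     /* LL: cooperative (non-preemptive), high */
#define MPP_TWB_MEDIUM_PRIO  K_PRIO_PREEMPT(1)  /* TwB while budget remains */
#define MPP_TWB_LOW_PRIO     K_PRIO_PREEMPT(10) /* TwB after the budget is consumed */
#define MPP_DP_THREAD_PRIO   K_PRIO_PREEMPT(10) /* DP: preemptive, ordered by EDF deadline */
#define MPP_IDLE_TASK_PRIO   K_LOWEST_APPLICATION_THREAD_PRIO /* background, e.g. Fast Mode */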

@startuml
allowmixing

scale max 1280 width

package "RTOS layer" {

	package "SOF kernel extension" as KERNEL_EXTENSION {
		package "MPP Scheduling" as MPP_SCHEDULING {
			component "LL Tasks" as LL_TASKS
			component "DP Tasks" as DP_TASKS
			component "Tasks with Budget" as TWB
			component "Idle Tasks" as IDLE_TASKS

			LL_TASKS -[hidden]right- DP_TASKS
			DP_TASKS -[hidden]right- TWB
			TWB -[hidden]right- IDLE_TASKS
		}
	}

	package "Zephyr" as ZEPHYR_LAYER {
		package "Services" as SERVICES {
			component "Timing" as TIMING
			component "Interrupts" as INTERRUPTS
		}

		package "Scheduling" as SCHEDULING {
			component "Threads" as THREADS
			component "EDF Scheduler" as EDF
			component "Time-Slice Scheduler" as TIME_SLICE_SCHEDULING

			THREADS -[hidden]right- EDF
			EDF -[hidden]right- TIME_SLICE_SCHEDULING
		}

		package "Drivers" as DRIVERS {
			component "Timer" as TIMER_DRV
			component "Watchdog" as WATCHDOG_DRV
		}

		package "SoC HAL" as SOC_HAL {
			component "OEM SoC 1" as OEM_SOC_1
			component "OEM SoC 2" as OEM_SOC_2
			component "Other SoCs" as OTHER_SOCS
		}

		component "XTHAL" as XTHAL

		SERVICES -[hidden]right- SCHEDULING
		SERVICES -[hidden]down- XTHAL
		SCHEDULING -[hidden]down- SOC_HAL
		SCHEDULING -[hidden]down- DRIVERS
		DRIVERS -[hidden]right- SOC_HAL
		DRIVERS -[hidden]right- XTHAL
	}

	KERNEL_EXTENSION -[hidden]down- ZEPHYR_LAYER
}

@enduml

Figure 45 SOF MPP Scheduling based on Zephyr

LL Tasks

Low Latency tasks are executed within one of the non-preemptive, high-priority LL Threads, which runs all ready-to-run tasks to completion during a single cycle. There is one LL Thread scheduled per core, each with its own queues and LL tasks to execute.

MPP Scheduling adds ready tasks to LL queues at the beginning of each scheduling period. There are a number of queues to add tasks to. The LL Thread iterates over the queues and runs all tasks from one queue before moving to the next one. Therefore, it is possible to guarantee that some tasks always run before others during a cycle.

There are also two special queues: the pre-run queue and the post-run queue. Tasks from the pre-run queue are run at the beginning of each cycle (they may be considered to have the highest priority).

Tasks from the post-run queue are run at the end of each cycle (they may be considered to have the lowest priority).

An example of a pre-run task is a task registered by the sink driver that starts the sink at the very beginning of the cycle if data was supplied during the previous cycles and the link has been stopped.
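
A minimal sketch of a single LL cycle is shown below: the pre-run queue, then the per-priority queues in order, then the post-run queue. The ll_task/ll_sched types and LL_NUM_QUEUES are hypothetical; the actual SOF implementation (zephyr_ll) differs in detail.

#include <zephyr/kernel.h>
#include <zephyr/sys/slist.h>

#define LL_NUM_QUEUES 4 /* illustrative number of per-priority queues */

struct ll_task {
	sys_snode_t node;
	void (*run)(void *arg);
	void *arg;
};

struct ll_sched {
	sys_slist_t pre_run;                /* always run first in a cycle */
	sys_slist_t queues[LL_NUM_QUEUES];  /* run in queue (priority) order */
	sys_slist_t post_run;               /* always run last in a cycle */
};

static void ll_run_queue(sys_slist_t *queue)
{
	struct ll_task *task;

	/* Run every task in this queue to completion before the next queue. */
	SYS_SLIST_FOR_EACH_CONTAINER(queue, task, node) {
		task->run(task->arg);
	}
}

static void ll_cycle(struct ll_sched *sched)
{
	ll_run_queue(&sched->pre_run);
	for (int i = 0; i < LL_NUM_QUEUES; i++) {
		ll_run_queue(&sched->queues[i]);
	}
	ll_run_queue(&sched->post_run);
}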

DP Tasks

Data processing components are represented as DP tasks that are scheduled as separate preemptive threads. DP thread scheduling is done according to the EDF (Earliest Deadline First) algorithm that is part of Zephyr.

To meet real-time processing criteria, the algorithm operates by choosing the component task that is closest to its deadline (the time when its output data is required).

For the playback case, the algorithm starts from the sink and, going backward, calculates the deadline for data delivery (see the sketch after this list):

  • The time required by a component to process data depends on its processing period and compute requirements.

  • The goal is to process data through the chain before the real-time sink deadline.

EDF scheduling example

The capture pipelines operate in the same way.

It is important to note that EDF assumes preemptive scheduling of the DP tasks and no dependencies between them.
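
The backward deadline calculation for a playback chain could look roughly like the sketch below, applying the deadlines via Zephyr's k_thread_deadline_set() (CONFIG_SCHED_DEADLINE). The dp_comp structure and its fields are hypothetical; the sink deadline and processing times are expressed in timer cycles here.

#include <zephyr/kernel.h>

/* Hypothetical DP component descriptor for a playback chain. */
struct dp_comp {
	k_tid_t thread;          /* the component's DP thread */
	uint32_t proc_cycles;    /* worst-case processing time, in cycles */
	struct dp_comp *source;  /* upstream component, NULL at the chain head */
};

/* Walk backward from the sink: each component must finish early enough
 * for everything downstream to still meet the real-time sink deadline. */
static void dp_assign_deadlines(struct dp_comp *sink, uint32_t sink_deadline)
{
	uint32_t deadline = sink_deadline;

	for (struct dp_comp *c = sink; c != NULL; c = c->source) {
		k_thread_deadline_set(c->thread, (int)deadline);
		deadline -= c->proc_cycles;
	}
}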

Task With Budget

This is a specialized version of a DP task that has a pre-allocated MCPS budget renewed with every system tick. When the task is ready to run, either MEDIUM_PRIORITY or LOW_PRIORITY is assigned to the task thread, depending on the budget left in the current system tick. The latter allows for opportunistic execution when the budget is already spent but no other ready task has a higher priority.

Examples of tasks with budget: IPC task, IDC task.

Task with Budget (TWB) has two key parameters assigned:

  • cycles granted: the budget per system tick,

  • cycles consumed: the number of cycles consumed for task execution in a given system tick.

The number of cycles consumed is reset to 0 at the beginning of each system tick, renewing the TWB budget. When the number of cycles consumed exceeds the cycles granted, the task is switched from MEDIUM to LOW priority. When the task with budget thread is created, MPP Scheduling is responsible for setting the thread time slice equal to the task budget and for registering a callback on time slice timeout. Thread time slicing guarantees that the Zephyr scheduler will interrupt execution when the budget is spent, so the MPP Scheduling timeout callback can re-evaluate the task priority.

If there is budget left at the end of a system tick (the task spent less time than granted or started executing close to the next system tick, which preempts execution), it is reset and not carried over to the next tick.

NOTE: The Zephyr scheduler tracks the time slice budget of the TWB when it is preempted and logs a warning if the budget is significantly exceeded (a long critical section inside the task's code might be responsible for this).

NOTE: MPP Scheduling must be notified by the TWB on processing completion and must update the cycles consumed in the current system tick. This allows the TWB to be scheduled more than once (if necessary) within a single system tick with MEDIUM_PRIORITY. The second TWB schedule should use a modified time slice value, equal to the delta between the budget and the cycles consumed.
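
The mechanism above could be sketched on top of Zephyr's per-thread time slicing (CONFIG_TIMESLICE_PER_THREAD) and runtime statistics (CONFIG_THREAD_RUNTIME_STATS) roughly as follows. The twb structure and the priority values are assumptions for illustration only.

#include <zephyr/kernel.h>

#define TWB_MEDIUM_PRIO K_PRIO_PREEMPT(1)   /* assumed priority values */
#define TWB_LOW_PRIO    K_PRIO_PREEMPT(10)

struct twb {
	k_tid_t thread;
	int32_t budget_ticks;       /* cycles granted, expressed as slice ticks */
	uint64_t cycles_consumed;   /* cycles consumed in the current sys tick */
	uint64_t ref_cycles;        /* runtime-stats reference point */
};

/* Time slice expiry callback: the per-tick budget is spent, so drop the
 * thread to low priority until the next sys tick renews the budget. */
static void twb_slice_expired(struct k_thread *thread, void *data)
{
	struct twb *twb = data;

	k_thread_priority_set(twb->thread, TWB_LOW_PRIO);
	k_thread_time_slice_set(thread, twb->budget_ticks, twb_slice_expired, twb);
}

/* Called on every sys tick: renew the budget, restore medium priority and
 * take a new runtime-stats reference for cycle accounting. */
static void twb_sys_tick(struct twb *twb)
{
	k_thread_runtime_stats_t stats;

	twb->cycles_consumed = 0;
	k_thread_priority_set(twb->thread, TWB_MEDIUM_PRIO);
	k_thread_time_slice_set(twb->thread, twb->budget_ticks, twb_slice_expired, twb);
	k_thread_runtime_stats_get(twb->thread, &stats);
	twb->ref_cycles = stats.execution_cycles;
}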

Scheduling flows

Zephyr scheduling

The presented Zephyr scheduling flow takes place on each core that has MPP tasks scheduled.
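
The resume/suspend handshake between the sys tick callback and the per-core LL thread in the flow below could be sketched with a per-core semaphore as follows. NUM_CORES, the semaphore array and the thread entry function are illustrative; zephyr_ll_run() is the LL run routine referenced in the diagram.

#include <zephyr/kernel.h>

#define NUM_CORES 4 /* illustrative core count */

static struct k_sem ll_sem[NUM_CORES];

void zephyr_ll_run(void); /* LL task-queue processing, see the diagram below */

/* Timer sys_tick callback: resume the LL thread on every core, then
 * update the DP and TwB threads (shown in the next diagram). */
static void mpp_sys_tick(void)
{
	for (int core = 0; core < NUM_CORES; core++) {
		k_sem_give(&ll_sem[core]);
	}
}

/* Per-core LL thread body: suspend between ticks, run LL queues on resume. */
static void ll_thread_fn(void *p1, void *p2, void *p3)
{
	int core = (int)(uintptr_t)p1;

	ARG_UNUSED(p2);
	ARG_UNUSED(p3);

	while (1) {
		k_sem_take(&ll_sem[core], K_FOREVER); /* suspend (k_sem_take) */
		zephyr_ll_run();                      /* run all LL task queues */
	}
}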

@startuml

scale max 1280 width

skinparam maxMessageSize 400
skinparam BoxPadding 4

box "SOF" #LightBlue
	participant "MPP Scheduling"
	participant "Zephyr Scheduler"
	participant "Zephyr Thread"
	participant "Timer"
end box

"Timer" -> "MPP Scheduling": sys_tick callback
activate "MPP Scheduling"
	loop for each core
		"MPP Scheduling"-> "Zephyr Scheduler": resume LL Zephyr Thread\n(k_sem_give)
		activate "Zephyr Scheduler"
	end

	"MPP Scheduling"-> "MPP Scheduling": DP and Task with Budget\nZephyr Threads update

"Zephyr Scheduler"-> "Zephyr Thread": schedule LL Zephyr Thread\n(context switch)
 deactivate "MPP Scheduling"
	activate "Zephyr Thread"
	"Zephyr Thread"-> "Zephyr Thread": zephyr_ll_run
	activate "Zephyr Thread"

		loop for each LL pending task
		note left: LL pending tasks are scheduled operations\nthat are waiting for certain circumstances\n(like data arrival) to start processing
			opt if task is ready for processing
				"Zephyr Thread"-> "Zephyr Thread": move task \nto LL run queue
			end
		end

		loop for each task in LL queues
			"Zephyr Thread"-> "Zephyr Thread": run LL task callback
		end
	return

	"Zephyr Thread"-> "Zephyr Thread": suspend LL Zephyr Thread\n(k_sem_take)
	return

loop for each Task With Budget (TwB) Zephyr Thread
	"Zephyr Scheduler"-> "Zephyr Thread": schedule TwB Zephyr Thread\n(context switch)
		activate "Zephyr Thread"
		"Zephyr Thread"-> "Zephyr Thread": run

		alt if time_slice (budget) timeout
			"Zephyr Thread"-> "Zephyr Scheduler": time_slice timeout
				"Zephyr Scheduler"-> "MPP Scheduling": time_slice callback(thread)
				activate "MPP Scheduling"
					"MPP Scheduling"-> "Zephyr Thread": k_thread_priority_set(thread, LOW_PRIO)
					note right: when budget is consumed\nreset time_slice to default\nand lower priority
					"MPP Scheduling"-> "Zephyr Thread": k_thread_time_slice_set(thread, slice_ticks = budget)
				deactivate "MPP Scheduling"

		else if processing complete (no time_slice timeout)
			"Zephyr Thread"-> "MPP Scheduling": on processing complete (thread)
			activate "MPP Scheduling"
				"MPP Scheduling"-> "Zephyr Thread": k_thread_runtime_stats_get(thread)
				activate "Zephyr Thread"
					return return thread_cycles - absolute number of cycles consumed by thread
				"MPP Scheduling"->"MPP Scheduling": update thread\ncycles_consumed_in_sys_tick += (thread_cycles - thread_ref_cycles)
				note right: thread_ref_cycles is a reference number of cycles consumed by thread\nupdated on each sys_tick start and processing complete
				"MPP Scheduling"->"MPP Scheduling": update thread_ref_cycles = thread_cycles
				return
			deactivate "MPP Scheduling"

			"Zephyr Thread" -> "Zephyr Thread": suspend TwB Zephyr Thread\n(k_sem_take)
			note left: TwB Threads are expected to be resumed when there is new data for processing\nfor example IPC TwB Thread will be resumed on IPC interrupt
			"Zephyr Thread" --> "Zephyr Scheduler"
			deactivate "Zephyr Thread"
		end
end

loop for each DP Zephyr Thread
	"Zephyr Scheduler"-> "Zephyr Thread": schedule DP Zephyr Thread with earlieast deadline\n(context switch)
	note right: TwB Threads with low priority are treated\nas threads with max deadline and will be\nscheduled after DP threads complete processing
	activate "Zephyr Thread"
		"Zephyr Thread"-> "Zephyr Thread": run
		note right: DP thread runs until completion\nor until a thread with an earlier deadline\nor higher priority becomes available
		return
	deactivate "Zephyr Thread"
end

@enduml

Figure 46 Zephyr scheduling of MPP threads flow

MPP Data Processing and Task with Budget threads periodic update

This is a closer look at the Data Processing (Earliest Deadline First) and Task with Budget thread periodic update operations performed at the start of each system tick.
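
The DP side of this per-tick update could be sketched as follows (the dp_task descriptor and its callbacks are illustrative, and CONFIG_SCHED_DEADLINE is assumed); the TwB side is the budget renewal sketched earlier in the Task With Budget section.

#include <zephyr/kernel.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical DP task descriptor used by the per-tick update. */
struct dp_task {
	k_tid_t thread;
	struct k_sem sem;                           /* thread blocks here between runs */
	bool (*is_ready)(struct dp_task *task);     /* enough input data queued? */
	uint32_t (*deadline)(struct dp_task *task); /* cycles until output is due */
};

/* Called from the sys tick callback on each core that has DP tasks. */
static void dp_sys_tick(struct dp_task *tasks, size_t count)
{
	for (size_t i = 0; i < count; i++) {
		struct dp_task *t = &tasks[i];

		if (!t->is_ready(t)) {
			continue;
		}

		/* Re-calculate the EDF deadline and resume the DP thread. */
		k_thread_deadline_set(t->thread, (int)t->deadline(t));
		k_sem_give(&t->sem);
	}
}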

@startuml

scale max 1280 width

skinparam maxMessageSize 400
skinparam BoxPadding 4

box "SOF Firmware" #LightBlue
	participant "MPP Scheduling"
	participant "Zephyr Thread"
	participant "Timer"
end box

"Timer" -> "MPP Scheduling": sys_tick callback
activate "MPP Scheduling"

loop for each Task with Budget
	"MPP Scheduling"-> "MPP Scheduling": reset task with budget\ncycles_consumed_in_sys_tick
	"MPP Scheduling" -> "Zephyr Thread": k_thread_priority_set(thread, MEDIUM_PRIO)
	"MPP Scheduling" -> "Zephyr Thread": k_thread_time_slice_set(thread, slice_ticks = budget)
	note right: Reset priority and budget\nto default value
	"MPP Scheduling"-> "Zephyr Thread": k_thread_runtime_stats_get(thread)
	activate "Zephyr Thread"
		return return thread_cycles - absolute number of cycles consumed
	"MPP Scheduling"-> "MPP Scheduling": save thread_ref_cycles = thread_cycles as a reference
end

loop for each DP task
	opt if DP task is ready for processing
		"MPP Scheduling"-> "MPP Scheduling": re-calculate task deadline
		"MPP Scheduling" -> "Zephyr Thread": k_thread_deadline_set(thread, deadline)
		"MPP Scheduling" -> "Zephyr Thread": resume thread
	end
end

deactivate "MPP Scheduling"

@enduml

Figure 47 DP and TWB threads sys tick update flow

Task with budget scheduling

@startuml

skinparam maxMessageSize 400
skinparam BoxPadding 4

box "SOF Firmware" #LightBlue
	participant "Firmware Manager"
	participant "MPP Scheduling"
	participant "Zephyr Scheduler"
	participant "Zephyr Thread"
end box

activate "Zephyr Scheduler"

"Zephyr Scheduler"-> "Zephyr Thread": schedule IPC Task with Budget (TWB) thread\n(MEDIUM_PRIO)
activate "Zephyr Thread"

	"Zephyr Thread"-> "Zephyr Thread": run
	"Zephyr Thread"-> "MPP Scheduling": on processing complete
	activate "MPP Scheduling"
		"MPP Scheduling"-> "Zephyr Thread": k_thread_runtime_stats_get
		activate "Zephyr Thread"
		return
		"MPP Scheduling"-> "MPP Scheduling": update IPC Task with budget\ncycles_consumed_in_sys_tick
	return
	"Zephyr Thread"-> "Zephyr Thread": suspend TWB Zephyr Thread\n(k_sem_take)
return

"Zephyr Scheduler"-> "Zephyr Thread": schedule EDF thread\n(LOW_PRIO)
activate "Zephyr Thread"
	"Zephyr Thread"-> "Zephyr Thread": run

	activate "Firmware Manager"
	"Firmware Manager"-> "Firmware Manager": Host IPC message received
		"Firmware Manager"-> "MPP Scheduling": request IPC processing
		activate "MPP Scheduling"
		"MPP Scheduling"-> "Zephyr Thread": resume IPC TWB Zephyr Thread\n(k_sem_give)
		"MPP Scheduling" --> "Firmware Manager"
		deactivate "MPP Scheduling"
	deactivate "Firmware Manager"

"Zephyr Thread" --> "Zephyr Scheduler": EDF thread gets preempted
deactivate "Zephyr Thread"

"Zephyr Scheduler"-> "Zephyr Thread": schedule IPC task with budget thread\n(MEDIUM_PRIO)
	activate "Zephyr Thread"
		"Zephyr Thread"-> "Zephyr Thread": run
	return

@enduml

Figure 48 Task with budget example scheduling flow

Example timeline of MPP Scheduling on a DSP core

The diagram below shows how scheduling looks on a DSP core. At the timer interrupt, the LL scheduler runs first, and then the DP scheduler is executed.

@startuml

Title Task scheduling on DSP core

legend
Assumptions:
1) 1ms scheduling
2) LL task execution takes 0.1ms
3) Execution of all DP tasks takes 0.5ms
end legend

scale 1 as 200 pixels

concise "DP Tasks Processing" as DP_Processing
concise "LL Tasks Processing" as LL_Processing
robust "DSP" as DSP
concise "Timer Interrupt" as Interrupt

DSP is Idle

@DSP
@1.2 <-> @2: Time available for\nDP tasks execution
@2.2 <-> @2.7: Actual execution time\nof DP tasks
@3 <-> @3.2: Actual execution time\nof LL tasks

@Interrupt
@0 <-> @1 : Schedule period

@0
Interrupt -> DSP
DSP -> LL_Processing
DSP is "Scheduling tasks"
LL_Processing is Busy
DP_Processing is {-}

@+0.2
DSP -> DP_Processing
LL_Processing is {-}
DP_Processing is Busy

@+0.5
DP_Processing -> DSP
DP_Processing is {-}
DSP is Idle

@1
Interrupt -> DSP
DSP -> LL_Processing
DSP is "Scheduling tasks"
LL_Processing is Busy

@+0.2
DSP -> DP_Processing
LL_Processing is {-}
DP_Processing is Busy

@+0.5
DP_Processing -> DSP
DP_Processing is {-}
DSP is Idle

@2
Interrupt -> DSP
DSP -> LL_Processing
DSP is "Scheduling tasks"
LL_Processing is Busy

@+0.2
DSP -> DP_Processing
LL_Processing is {-}
DP_Processing is Busy

@+0.5
DP_Processing -> DSP
DP_Processing is {-}
DSP is Idle

@3
Interrupt -> DSP
DSP -> LL_Processing
DSP is "Scheduling tasks"
LL_Processing is Busy

@+0.2
DSP -> DP_Processing

LL_Processing is {-}
DP_Processing is Busy

@+0.5
DP_Processing -> DSP
DP_Processing is {-}
DSP is Idle

@enduml

Figure 49 Example timeline of MPP Scheduling on DSP core with LL and DP tasks scheduling

Example timeline of DP tasks scheduling on secondary DSP core

The diagram below shows a detailed example of how DP tasks are scheduled on a secondary DSP core.

@startuml

Title DP tasks scheduling on secondary DSP core

legend
Assumptions:
1) 1ms scheduling
2) No LL tasks assigned to example secondary DSP core
3) DP Task B does not depend on Task A completion
(otherwise, Task B would start on the next timer interrupt
after A completes)
end legend

scale 1 as 150 pixels

concise "Task B" as Task_B
concise "Task A" as Task_A

concise "DP task processing" as DP_Processing
robust "DSP" as DSP
concise "Timer interrupt" as Interrupt


@Task_A
0 is Busy
1.5 is {-}

4 is Busy
5.5 is {-}

8 is Busy
9.5 is {-}

@0 <-> @4: Task A schedule period (4ms)
@4 <-> @5.5: Task A execution time (1.5ms)

DP_Processing@0 -[#Orange]> Task_A@0
DP_Processing@1 -[#Orange]> Task_A@1
DP_Processing@1.5 -[#Orange]> Task_A@1.5


@Task_B
0 is Busy
2 is {-}

6 is Busy
8 is {-}

@0 <-> @6: Task B schedule period (6ms)
@6 <-> @8: Task B execution time (2ms)

DP_Processing@1.5 -[#Brown]> Task_B@0
DP_Processing@2 -[#Brown]> Task_B@0.5
DP_Processing@3 -[#Brown]> Task_B@1.5
DP_Processing@3.5 -[#Brown]> Task_B@2

DSP is Idle
DP_Processing is {-}

@0
DP_Processing is "A"

@0
Interrupt -[#DarkViolet]> DSP
DSP -> DP_Processing
DSP is "Scheduling"
DP_Processing is "A"

@1
Interrupt -[#DarkViolet]> DSP
DSP -> DP_Processing
DP_Processing is "A"

@1.5
DP_Processing -> DSP
DSP -> DP_Processing
DP_Processing is "B"

@2
Interrupt -[#DarkViolet]> DSP
DSP -> DP_Processing
DP_Processing is "B"

@3
Interrupt -[#DarkViolet]> DSP
DSP -> DP_Processing
DP_Processing is "B"

@3.5
DP_Processing -> DSP
DSP is Idle
DP_Processing is {-}

@4
Interrupt -[#DarkViolet]> DSP
DSP is "Scheduling"
DSP -> DP_Processing
DP_Processing is "A"

@5
Interrupt -[#DarkViolet]> DSP
DSP -> DP_Processing
DP_Processing is "A"

@5.5
DP_Processing -> DSP
DSP is Idle
DP_Processing is {-}

@6.001
Interrupt -[#DarkViolet]> DSP
DSP -> DP_Processing
DSP is "Scheduling"
DP_Processing is "B"

@7.001
Interrupt -[#DarkViolet]> DSP
DSP -> DP_Processing
DP_Processing is "B"

@8.001
Interrupt -[#DarkViolet]> DSP
DSP -> DP_Processing
DP_Processing is "A"

@9.001
Interrupt -[#DarkViolet]> DSP
DSP -> DP_Processing
DP_Processing is "A"

@9.5
DP_Processing -> DSP
DSP is Idle
DP_Processing is {-}

@enduml

Figure 50 Example of DP tasks scheduling on secondary DSP core

Example timeline of MPP scheduling on multiple DSP cores

The diagram below shows how scheduling looks on multiple DSP cores. The DP task deadlines are re-evaluated on each core in the timer sys tick callback.

@startuml

Title Tasks scheduling on multiple DSP cores

legend
Assumptions:
1) 1ms system tick

Notes:
2) Core #0 has only LL tasks assigned, scheduled with a 1ms period
3) Core #1 has one DP task assigned that depends on Core #0 LL task data, scheduled with a 1ms period
(e.g. a multicore pipeline with a DP module scheduled on a separate core)
4) Core #2 has LL tasks scheduled with a 1ms period and a DP task scheduled with a 2ms period
(e.g. pipeline processing with LL and DP components where the DP component has a 2ms scheduling period)
end legend

scale 1 as 300 pixels

concise "DSP #2" as DSP_2
concise "DSP #1" as DSP_1
concise "DSP #0" as DSP_0

concise "Timer interrupt" as Interrupt

@DSP_0
0 is "LL proc."
0.5 is {-}

1 is "LL proc."
1.5 is {-}

2 is "LL proc."
2.5 is {-}

3 is "LL proc."
3.5 is {-}

4 is "LL proc."
4.5 is {-}

@0 <-> @1: DSP#0 LL schedule period (1ms)

@DSP_1
0 is {-}

1 is "DP proc."
1.6 is {-}

2 is "DP proc."
2.6 is {-}

3 is "DP proc."
3.6 is {-}

4 is "DP proc."
4.6 is {-}
5 is {-}

@0 <-> @1: delay one period (waiting for first DSP#0 LL data)
@1 <-> @2: DSP#1 DP schedule period (1ms)

@DSP_2

0 is "LL proc."
0.3 is {-}

1 is "LL proc."
1.3 is {-}

2 is "LL proc."
2.3 is "DP proc."

3 is "LL proc."
3.3 is "DP proc."
3.7 is {-}

4 is "LL proc."
4.3 is "DP proc."

@0 <-> @1: DSP#2 LL schedule period (1ms)
@2.3 <-> @4.3: DSP#2 DP schedule period (2ms)

@0
Interrupt -[#DarkViolet]> DSP_0
Interrupt -[#DarkViolet]> DSP_1
Interrupt -[#DarkViolet]> DSP_2

@1
Interrupt -[#DarkViolet]> DSP_0
Interrupt -[#DarkViolet]> DSP_1
Interrupt -[#DarkViolet]> DSP_2

@2
Interrupt -[#DarkViolet]> DSP_0
Interrupt -[#DarkViolet]> DSP_1
Interrupt -[#DarkViolet]> DSP_2

@3
Interrupt -[#DarkViolet]> DSP_0
Interrupt -[#DarkViolet]> DSP_1
Interrupt -[#DarkViolet]> DSP_2

@4
Interrupt -[#DarkViolet]> DSP_0
Interrupt -[#DarkViolet]> DSP_1
Interrupt -[#DarkViolet]> DSP_2

@enduml

Figure 51 Example of MPP Scheduling on many cores - LL and DP tasks scheduling

Fast Mode

Fast Mode is used to process data faster than real time. Faster-than-real-time processing is only needed for a short period, for example when the firmware performs low-power Wake on Voice. In that case, the SOF firmware works in low power mode, running e.g. a key phrase detection algorithm and accumulating the last few seconds of audio samples in a history buffer. When a key phrase is detected, the accumulated history needs to be streamed to the Host as quickly as possible, with optional additional processing on the DSP. This is only possible when the sink interface to the Host can transfer bursts of data from the DSP.

The Fast Mode task is an idle, low priority task. It is only executed when all other DP tasks with deadlines have completed their processing and there are still enough DSP cycles left before the next system tick.

When the Fast Mode task is created by a component instance, e.g. the History Buffer, that instance needs to provide a list of LL component instances that will be executed within the Fast Mode thread, similar to the LL task queues and the LL Thread. When the Fast Mode thread is executed, it triggers processing of the LL components in a similar way as the LL Thread does. The Fast Mode task executes in a critical section. It checks whether there is data available in the input queue and enough space in the output queue, and only then executes an LL component. Importantly, the Fast Mode task does not call processing on DP components directly.

As described in the previous sections, processing on DP components is invoked according to the EDF algorithm. The periodicity of a component's processing is determined by the time needed to fill its input queue from a real-time source of data. When the input queue has a sufficient amount of data, processing on the DP component can be invoked. The input queues of DP components that are on the Fast Mode task path fill much faster than real time as a side effect of the Fast Mode task execution - LL components move data into the DP component input queue and out of the DP component output queue. As a result, a DP component can be executed much earlier than in real time - a DP task reports "ready to run" as soon as it has a sufficient amount of data in its input queue and enough space in its output queue for the produced frame. That could lead to starvation of other tasks; to prevent this, Fast Mode tasks must be scheduled as idle tasks in the background.
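
A minimal sketch of a Fast Mode idle task body is shown below, assuming a hypothetical fm_comp descriptor for the LL components on the Fast Mode path; the real component and queue APIs in SOF differ.

#include <zephyr/kernel.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical descriptor for an LL component on the Fast Mode path. */
struct fm_comp {
	bool (*input_ready)(struct fm_comp *c);  /* enough data in the input queue? */
	bool (*output_free)(struct fm_comp *c);  /* enough space in the output queue? */
	void (*copy)(struct fm_comp *c);         /* LL-style copy/process step */
};

static struct k_spinlock fm_lock;

/* Runs at idle priority, so only when the LL, TwB and DP work for the current
 * tick is done and DSP cycles remain before the next system tick. */
static void fast_mode_task(struct fm_comp **comps, size_t count)
{
	while (1) {
		for (size_t i = 0; i < count; i++) {
			struct fm_comp *c = comps[i];
			k_spinlock_key_t key = k_spin_lock(&fm_lock);

			/* Execute the LL component only if both queues allow it.
			 * DP components are never called directly: they become
			 * ready through EDF as their input queues fill faster
			 * than real time. */
			if (c->input_ready(c) && c->output_free(c)) {
				c->copy(c);
			}

			k_spin_unlock(&fm_lock, key);
		}
		k_yield(); /* give any newly ready thread a chance to run */
	}
}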

Watchdog timer

Depending on the HW configuration, there can be a single watchdog timer, a watchdog per DSP core, or none.

All DSP cores shall enable the watchdog when they are active to monitor the health of the subsystem. When one of the watchdogs expires, the entire subsystem will be reset by the Host.

Watchdog shall be enabled when:

  • DSP core is enabled,

  • tasks are assigned to the DSP core.

Watchdog shall be disabled when:

  • DSP core is disabled,

  • no tasks are assigned to DSP core,

  • DSP core goes to a low power state.

The watchdog timer shall be programmed to a value of a few scheduling periods.

When enabled, the watchdog timer shall be updated at every system tick: on the primary DSP core after running the LL tasks, and on secondary HP DSP cores at the end of the system tick.
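
A sketch of how this could look with the Zephyr watchdog driver API is shown below; the devicetree alias, timeout value and flags are illustrative assumptions.

#include <zephyr/device.h>
#include <zephyr/drivers/watchdog.h>

#define SCHED_PERIOD_MS 1 /* assumed 1ms scheduling period */
#define WDT_PERIODS     4 /* "a few" scheduling periods */

static const struct device *const wdt_dev = DEVICE_DT_GET(DT_ALIAS(watchdog0));
static int wdt_channel;

/* Enable when the DSP core is powered up and has tasks assigned. */
static int core_watchdog_enable(void)
{
	const struct wdt_timeout_cfg cfg = {
		.window.min = 0,
		.window.max = WDT_PERIODS * SCHED_PERIOD_MS,
		.flags = WDT_FLAG_RESET_SOC, /* expiry leads to subsystem reset */
	};

	wdt_channel = wdt_install_timeout(wdt_dev, &cfg);
	if (wdt_channel < 0) {
		return wdt_channel;
	}

	return wdt_setup(wdt_dev, WDT_OPT_PAUSE_HALTED_BY_DBG);
}

/* Disable when the core is disabled, has no tasks, or enters low power state. */
static int core_watchdog_disable(void)
{
	return wdt_disable(wdt_dev);
}

/* Update every system tick: on the primary core after the LL tasks,
 * on secondary cores at the end of the system tick. */
static void core_watchdog_update(void)
{
	wdt_feed(wdt_dev, wdt_channel);
}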