Embedded Middleware
National Chiao Tung University
Chun-Jen Tsai
3/21/2011
Embedded Middleware
 “Middleware” is a very popular term in embedded system software design
 Just like “cloud computing,” it is merely a new name for old technologies
 We can further divide embedded middleware into two layers:
 System middleware – an API library that provides system abstraction to application software to increase application portability
 Hardware abstraction middleware – an API specification that acts as a hardware abstraction layer for chip-level integration
2/48
Why System Middleware
 The purpose of system middleware is to provide a homogeneous environment for applications
 An operating system does the same thing, but due to the variety of embedded hardware and the competition among OS companies, we need something beyond the OS
[Figure: an application runs on a middleware API defined by an international standards organization; underneath, middleware designed by each system manufacturer adapts it to different OS/hardware combinations (OS 1 + Hardware 1, OS 2 + Hardware 1, OS 1 + Hardware 2)]
3/48
System Middleware Components
 A system middleware must provide the following abstractions for (multimedia) applications:
 Application model
 GUI subsystem
 Graphics, video, audio rendering
 User event handling
 Timing control
 File system
[Figure: menu screen of an example DVB-MHP application]
4/48
DVB-MHP Architecture Revisited
 Multimedia Home Platform functional blocks:
[Figure: MHP software stack. Xlets (based on GEM) run on the MHP middleware: Sun Java-TV (application model), JMF 1.1, HAVi (UI abstraction), DAVIC (FS abstraction), DVB (event and services abstraction), and the Application Manager (Navigator), all on top of the Java Standard Classes (CDC/PBP). Platform-dependent SW modules include a graphics library (e.g. Microwin), an audio codec (HE-AAC), a video codec (H.264, MPEG-2), a transport demux (MPEG-2 TS, RTSP/RTP), and native applications over the operating system. The HW is an SoC with a RISC processor (< 300 MHz), graphics, audio, and video accelerators, and a Java processor, plus I/O devices]
5/48
Java-based Middleware
 DVB-MHP 1.0 (aka DVB-J) system abstraction is based on the Java RE, just like many other middleware
 In particular, CDC/PBP is adopted for MHP
 Google Android is also based on the Java model
 Google defined their own “profile” and VM (not JVM bytecode compatible)
 Rationale: Java is “virtually a computer plus an OS!”
[Figure: layer-by-layer comparison of the real computer model and the Java model: applications correspond to Applets, Xlets, MIDlets, etc.; the OS to the class loader, GC, API classes, etc.; the CPU to the VM execution engine]
6/48
Native vs. Java-based Middleware
 We can use native code to design system middleware. For example, BREW from Qualcomm is a native middleware for mobile handsets:
 A C/C++ development environment for ARM-based platforms
 Application: BREW “Applet” model; cooperative multitasking (IThread API)
 Graphics API: OpenGL/ES
 Multimedia API: IMedia
 Supposedly, BREW is faster than Java, but with a properly designed system, this may not be true
7/48
MHP Application Model Abstraction
 An application implements the javax.tv.xlet.Xlet interface
 Similar to java.applet.Applet, but simpler
 Methods are called in the following sequence:
 initXlet()
 startXlet() : resource allocation begins here!
 destroyXlet()
 Xlet life cycle (initiated by the MHP application manager, which sets up an XletContext for the Xlet)
[State diagram: Start leads to Loaded; initXlet() moves the Xlet to Paused; startXlet() to Active; pauseXlet() back to Paused; destroyXlet() from Loaded, Paused, or Active leads to Destroyed]
8/48
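The life cycle above can be sketched as a minimal Xlet. Since the javax.tv classes are not on a standard classpath, the Xlet and XletContext types are stubbed here for illustration; the method names and state transitions follow the slide.

```java
// Minimal sketch of the MHP Xlet life cycle. The javax.tv.xlet types are not
// available off an MHP stack, so equivalent interfaces are stubbed here.
interface XletContext { }            // stub of javax.tv.xlet.XletContext

interface Xlet {                     // stub of javax.tv.xlet.Xlet
    void initXlet(XletContext ctx);
    void startXlet();
    void pauseXlet();
    void destroyXlet(boolean unconditional);
}

public class HelloXlet implements Xlet {
    String state = "loaded";         // Loaded -> Paused -> Active -> Destroyed

    public void initXlet(XletContext ctx) {
        // one-time setup; keep this cheap, heavy allocation waits for startXlet()
        state = "paused";
    }
    public void startXlet() {
        // resource allocation begins here, per the slide
        state = "active";
    }
    public void pauseXlet() {
        // release scarce resources; the manager may call startXlet() again later
        state = "paused";
    }
    public void destroyXlet(boolean unconditional) {
        state = "destroyed";
    }

    public static void main(String[] args) {
        HelloXlet x = new HelloXlet();
        x.initXlet(null);
        x.startXlet();
        x.pauseXlet();
        x.destroyXlet(true);
        System.out.println(x.state);  // prints "destroyed"
    }
}
```

Note that the MHP application manager, not the Xlet itself, drives these calls through the XletContext it sets up.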
Middleware Architecture for DVB-J
 MHP middleware layers†:
[Figure: on top of the Java platform (Lang, AWT) sit the MHP blocks: the Application Manager and Xlet API over Service Selection, JMF, Conditional Access, Return Channel, and Tuning on one side, and DVB UI, HAVi, Inter-Xlet Comm., and UI Events on the other; DSM-CC, SI, and Section Filtering feed these from the MPEG transport stream below]
†S. Morris and A. Smith-Chaigneau, Interactive TV Standards, Elsevier, 2005
9/48
DVB-J APIs
 Java Language API
 Application Model (Life Cycle) API
 Based on Java TV Xlets
 Graphical API
 Based on AWT and HAVi UI level 2
 Data Access API
 Multimedia API
 Based on JMF
 Service and User Settings API
10/48
MHP Application UI Construction
 An MHP application can allocate a screen resource, called an HScene, using the HAVi API
 An HScene represents a region on the TV display
[Figure: an HScene rectangle occupying part of the full HScreen area]
 UI components are mainly drawn using
 java.awt.Graphics primitives
 HAVi widget classes
11/48
MHP Display Model Abstraction
 The MHP display model is defined in the Sun Java-TV middleware:
[Figure: three planes are composed onto the HScreen. Background data passes through the background decoder and format conversion to the HBackgroundDevice. Video data passes through the video decoder, scaling, and JMF manipulation to the HVideoDevice, steered by broadcast metadata, user preferences, display info, and TV/application behavior control. Graphics data (primitives created using java.awt and the HAVi level 2 UI) is composed onto the HGraphicsDevice, with format signaling to the display. Implementation?]
12/48
MHP Display Model Implementation
 If MHP adopts Sun’s PBP reference implementation:
[Figure: org.havi.ui.HScene extends java.awt.Container and java.awt.Component from the Java PBP class libraries, and widgets are attached with add(). The PBP libraries render through a portable graphics library (MicroWindows) onto a native graphics library (X11) over the OS (Linux) fbdev driver; these layers should be merged for efficiency! On the device side, an HScreen has an HVideoDevice, an HGraphicsDevice, and an HBackgroundDevice (all HScreenDevices), backed by the HW video frame buffer and the VGA/DVI controller]
13/48
Building a GUI Application
 During initXlet()
 Configure the required HScene and retrieve the handle to the HScene object
 During startXlet()
 Create widget components under the HScene
 Display the HScene using setVisible(true)
 Request the focus
 During destroyXlet()
 Hide the HScene
 Release the allocated resources (components, images, etc.)
14/48
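The three phases above can be sketched as follows. HScene here is a stand-in stub for org.havi.ui.HScene (the dispose() method and the String widgets are invented for the sketch); the call sequence mirrors the bullets above.

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for org.havi.ui.HScene, stubbed so the sketch is self-contained.
class HScene {
    final List<String> widgets = new ArrayList<>();
    boolean visible = false, focused = false;
    void add(String widget)    { widgets.add(widget); }
    void setVisible(boolean v) { visible = v; }
    void requestFocus()        { focused = true; }
    void dispose()             { widgets.clear(); visible = false; }
}

public class GuiXletSketch {
    HScene scene;

    // initXlet(): configure and keep the handle to the HScene
    void initXlet() { scene = new HScene(); }

    // startXlet(): populate, show, and focus the scene
    void startXlet() {
        scene.add("button:OK");
        scene.setVisible(true);
        scene.requestFocus();
    }

    // destroyXlet(): hide the scene and release the allocated resources
    void destroyXlet() {
        scene.setVisible(false);
        scene.dispose();
    }

    public static void main(String[] args) {
        GuiXletSketch app = new GuiXletSketch();
        app.initXlet();
        app.startXlet();
        System.out.println(app.scene.visible);        // prints "true"
        app.destroyXlet();
        System.out.println(app.scene.widgets.size()); // prints "0"
    }
}
```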
Controlling Media from Xlets
 Xlets can control audio/video media using the Java Media Framework
 Based on JMF 1.0
 Use org.davic.net.dvb.DVBLocator to select what to show
 MHP provides extra JMF controls for
 Video scaling
 Subtitle control and service component monitoring
 Audio playback fine control (for local playback)
 Notification of changes in incoming TV signals
15/48
JMF Class Architecture
 JMF is used as the media playback interface in MHP
 Only the JMF 1.1 APIs are used:
[Figure: class diagram. Manager creates a Player from a DataSource; Player extends Controller and MediaHandler; Controller extends Clock and has Controls and a TimeBase; Duration is implemented alongside Clock; MediaHandler has a DataSource. The JMF package sits in the class library next to AWT, Networking, and Lang, above JNI and the JVM]
16/48
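A toy rendering of these relationships can make the diagram concrete. The interfaces below are stubs that keep only the extends/has-a shape of the figure, not the real javax.media signatures; the dummy duration and media-time values are invented for the sketch.

```java
// Toy model of the JMF 1.x class shape: Player extends Controller (a Clock
// with a Duration) and MediaHandler (which has a DataSource); Manager-style
// factory code creates Players. Stubs only; not the javax.media API.
interface Clock { long getMediaTime(); }
interface Duration { long getDuration(); }
interface Controller extends Clock, Duration { }

class DataSource {
    final String locator;
    DataSource(String locator) { this.locator = locator; }
}

abstract class MediaHandler {
    DataSource source;                           // "has a" DataSource
    void setSource(DataSource s) { source = s; }
}

class Player extends MediaHandler implements Controller {
    private long mediaTime = 0;
    public long getMediaTime() { return mediaTime; }
    public long getDuration()  { return 1000; }  // dummy fixed duration
    void start() { mediaTime = 1; }              // pretend playback advanced
}

public class JmfSketch {
    // Plays the role of Manager: build a Player around a DataSource.
    static Player createPlayer(DataSource src) {
        Player p = new Player();
        p.setSource(src);
        return p;
    }

    public static void main(String[] args) {
        Player p = createPlayer(new DataSource("dvb://1.2.3"));
        p.start();
        System.out.println(p.getMediaTime() > 0);  // prints "true"
    }
}
```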
Video-Graphics Integration†for MHP
getBestConfiguration()
HScreen
Java TV
getSource()
locator
1
1
1
ServiceContext
1
1
1
HVideoDevice
1
1
1 ServiceContext
1
HAVi L2 UI
getSource()
1
1
MediaLocator
HVideoConfiguration
getVideoDevice()
HVideoComponent
1
extends
1
Implements
Constructed
with new
MediaSelector
(locator)
HScreenConfiguration
1 getDefaultVideoDevice()
n
1 n
select()
Constructed
with new
DataSource
(MediaLocator)
n
MediaPlayer
getVideoComponent()
Implements
1
Player
Java Media Framework
†iTV Handbook, Prentice Hall 2003
17/48
DVB File System Model†
[Figure: at the head station, video, audio, and data encoders for each program (TV 1 through TV 4) feed an MPEG program MUX; the MPEG transport MUX combines the programs, and channel coding and modulation produce a 38 Mbps up-link to the cable network, with a remote file system attached. At the receiver, the tuner, demodulation, and channel decoding feed the MPEG transport DEMUX and then the MPEG program DEMUX, which splits the video, audio, and data streams, driven by user input, program selection, and conditional access, and backed by a local file system]
†L. Atzori et al., “Multimedia Information Broadcasting Using Digital TV Channels,” IEEE T-Broadcasting, Sep. 1997
18/48
DVB File System Abstraction
 DVB applications (Xlets, etc.) and data can be transmitted to STBs using the DSM-CC Object Carousel†
[Figure: the broadcast carousel appears to the receiver as a local virtual file system]
†D.-H. Park et al., “Real-Time Carousel Caching and Monitoring in Data Broadcasting,” IEEE T-Consumer Electronics, Feb. 2006
19/48
Extraction of Xlets from DSM-CC
 One possible implementation of an Xlet extractor:
[Figure: the incoming data stream passes a section filter into the DSM-CC parser, which maintains DII caches and an object cache in the DSM-CC cache space. A real-time thread pool of monitor threads and download threads, coordinated by a version manager, fills the Xlet cache; the application manager moves Xlets into the Xlet running space in response to user input]
20/48
Synchronization
 Synchronization of applications with media is facilitated by the following API functions:
 DSM-CC stream events
 org.dvb.dsmcc.DSMCCStreamEvent.subscribe()
 org.davic.media.StreamEventControl.subscribeStreamEvent()
 Media Time (NPT)
 org.davic.media.MediaTimeEventControl
 Private section events
 org.davic.mpeg.sections
21/48
Java for 2.5/3G Mobile Services
 3GPP adopts the concept of middleware for CPE
 For application middleware, the Java RE has been selected (CLDC/MIDP 2.0)
 For multimedia device integration middleware, OpenMAX has been proposed (by ARM, Motorola, Nokia, TI, etc.)
 Google also selects the Java RE for the G-phone, but uses their own virtual machine and profile libraries
 This avoids Sun’s J2ME licensing fee
22/48
Google Android Architecture
[Figure: Android software stack. Applications: Home, Contacts, Phone, Browser, etc. Application Framework: Activity Manager, Window Manager, Content Providers, View System, Notification Manager, Package Manager, Telephony Manager, Resource Manager, Location Manager, XMPP Service. System libraries: Surface Manager, Media Framework, SQLite, OpenGL/ES, FreeType, WebKit, SGL, SSL, C library. Android Runtime: core libraries and the Dalvik Virtual Machine. All on the Linux kernel]
23/48
Linux’s Role in Android
 The Java model does not have an I/O subsystem; the Linux kernel is only (hopefully) used to provide
 Hardware abstraction layer (for different drivers)
 Multi-threading management
 Shared library interface
 The following “infamous” Linux components are not included
 Native windowing system (e.g. X11)
 glibc
24/48
Android Applications
 An Android application package (APK) is a collection of components which usually run in one thread
 Services or ContentProviders sometimes need another thread (process)
[Figure: two APK packages, each grouping its Activities, Services, and ContentProviders into one or more processes]
25/48
Android Application Model
 An Android application has four building blocks
 Activity – an operation optionally associated with a UI component (window), similar to an execution context in MHP
 Service – background task
 Content Provider – file system abstraction
 Intent Receiver – event handler
[Figure: a thread runs a Looper over a message queue that dispatches UI events, system events, intent-receiver events, and local service calls to Activities; external service calls arrive from other threads]
26/48
Activities Life Cycle
 Application activities (e.g. UI objects) have the following entry points
 The life cycle begins with onCreate(), ends with onDestroy()
 The visible life cycle begins with onStart(), ends with onStop()
 The foreground (in-focus) life cycle begins with onResume(), ends with onPause()
[State diagram: activity starts, then onCreate(), onStart(), onResume(), and the activity is running. Another activity coming in front triggers onPause(); when the activity is no longer visible, onStop(); onDestroy() at activity shutdown. When the user navigates back, onRestart() leads to onStart() again; after onPause() or onStop() the process may be killed if other applications need memory, re-entering at onCreate()]
27/48
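The diagram can be exercised with a toy model. Activity and its callbacks are stubbed here (the real android.app.Activity is unavailable off-device); only the callback ordering from the slide is modeled.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the Android activity life cycle; each callback records itself
// so the call ordering from the state diagram can be inspected.
class Activity {
    final List<String> log = new ArrayList<>();
    protected void onCreate()  { log.add("onCreate"); }
    protected void onStart()   { log.add("onStart"); }
    protected void onResume()  { log.add("onResume"); }
    protected void onPause()   { log.add("onPause"); }
    protected void onStop()    { log.add("onStop"); }
    protected void onDestroy() { log.add("onDestroy"); }
}

public class LifecycleDemo {
    // Drive one full pass: launch, lose the foreground, regain it, shut down.
    public static List<String> run() {
        Activity a = new Activity();
        a.onCreate(); a.onStart(); a.onResume();  // activity running
        a.onPause();                              // another activity in front
        a.onResume();                             // back to the foreground
        a.onPause(); a.onStop(); a.onDestroy();   // activity shutdown
        return a.log;
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```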
Booting an Android Platform
 The init process starts the zygote process:
 Zygote spawns and initializes Dalvik VM instances based on requests; it also loads classes
[Figure: init starts daemons, the runtime, and Zygote; Zygote forks Dalvik VM instances, including the System Server, which hosts the Android managed services: Window Manager, Activity Manager, Package Manager, Audio Flinger, Surface Flinger, and Service Manager]
28/48
Invoking Media Accelerators
 Invocation of an audio device from a Java application:
[Figure: the application calls MediaPlayer in the application framework (Java), which crosses JNI into the native MediaPlayer C library; Binder IPC forwards to the media framework’s Audio Flinger, which dynamically loads libaudio.so and drives the ALSA device driver in the Linux kernel]
29/48
System Middleware Issues
 Although system middleware can ideally solve the application portability problem, it does not (yet)
 Mobile phone Java games may not work on all Java handsets
 MHP Xlets may not run on all MHP set-top boxes
 However, as a system designer, the holy grail of middleware should still be honored
30/48
HW/SW Integration Issues
 In the past, IC/IP designers tended to define their own HW interfaces, including IC pinout specs, IC control registers, and subsystem behaviors
 For complex systems, the partition of a system into sub-behaviors is not standardized
[Figure: a video decoding pipeline takes the video bitstream through VLD, inverse DC/AC prediction (DC/AC-1), inverse quantization (Q-1), and IDCT, then adds bilinear motion compensation (MC) over a reference image when MC is used, producing the decoded image for output (display). Should this be combined into one circuit, or broken down as three circuits?]
31/48
Example Code to Control ICs/IPs
 When a software function needs to interact with the hardware circuit, how do we invoke† the circuit from a program?

/* HW accelerator interface */
short *input     = (short *) 0x0F0000;
short *output    = (short *) 0x0F0200;
short *call_idct = (short *) 0x0F0400;

void idct(short *block)
{
    int unit = sizeof(short);
    memcpy(input, block, 64*unit);
    *call_idct = 1;
    memcpy(block, output, 64*unit);
}

/* HW accelerator interface, polling for completion */
short *input     = (short *) 0x0F0000;
short *output    = (short *) 0x0F0200;
volatile short *call_idct = (short *) 0x0F0400;

void idct(short *block)
{
    int unit = sizeof(short);
    memcpy(input, block, 64*unit);
    *call_idct = 1;
    while (*call_idct != 0) /* idle */;
    memcpy(block, output, 64*unit);
}

†Here, memory-mapped I/O is used
32/48
Chip vs. System Designers
 Large IC design houses want to sell chips to as many system manufacturers as possible
 Small IC design houses want to compete with large IC design houses without stepping on patents/copyrights
 System manufacturers want to use the lowest-priced chip available (given the same specification) without changing their application software
[Figure: applications / middleware / HW (chips); an open standard at this boundary enables a third-party chip market!]
33/48
OpenMax Middleware
 The OpenMAX† standard was originally conceived as a method of enabling portability of components and media applications throughout the mobile device landscape
 The proposal of OpenMAX was brought to the Khronos Group in mid-2004 by a handful of key mobile hardware companies
 The Khronos Group is a member-funded industry consortium focused on the creation of open standard APIs for multimedia applications
†Check http://www.khronos.org/openmax for the spec.
34/48
Khronos Multimedia Middlewares†
[Figure: applications or middleware libraries (JSR 184 engines, Flash players, media players, etc.) sit on OpenMAX AL (playback and recording interfaces), a small-footprint 3D API for embedded systems, a low-level vector 2D acceleration API, an enhanced audio (sound) API, and EGL (graphics surface management for efficient mixed-mode 2D/3D/video rendering), alongside platform media frameworks. OpenMAX IL provides component interfaces for codec integration (image libraries, video codecs, sound libraries), and OpenMAX DL provides accelerated media primitives for codec development, all running on the media engines: CPUs, DSPs, hardware accelerators, etc.]
†Jim van Welzan, “OpenMax Overview”, 2007
35/48
Why OpenMax?
 Goal:
 Solve the problem of supporting expanding multimedia standards and the increasing complexity of solutions for system and chip manufacturers
 Solution:
 Standardize an acceleration API for multimedia kernels
 Focused on key ‘hotspot’ functions and algorithms
 Standardize an acceleration API for multimedia codecs
 Abstract system architecture for increased OS portability
 Provide reference designs across a wide variety of architectures and OSs
 Question:
 Who shall write (port) the middleware?
36/48
OpenMax Layers†
 OpenMAX DL – Development Layer
 For codec authors that need to leverage low-level hardware-accelerated primitives
 OpenMAX IL – Integration Layer
 For system integrators that need to build solutions from mid-level component pieces (e.g. codecs), potentially from multiple sources
 OpenMAX AL – Application Layer
 For application writers that need high-level access to multimedia functionality (e.g. player, recorder, input, and output objects)
†Jim van Welzan, “OpenMax Overview”, 2007
37/48
Integration and Development Layers†
[Figure not reproduced in this transcript]
†Neil Trevett, “Introduction to OpenMax”, SIGGRAPH presentation, 2004
38/48
OpenMax DL Domains
 Audio Coding
 MP3, AAC
 Image Coding
 JPEG (encode and decode)
 Image Processing
 Color space conversion
 Pixel packing/unpacking
 De-blocking / de-ringing
 Rotation, scaling, compositing, etc.
 Signal Processing
 FIR, IIR, FFT, Dot Product
 Video Coding
 MPEG-4 SP/H.263 BL (encode and decode)
 H.264 (encode and decode)
39/48
Example: Video DL
 Provide a common building-block API to handle
 MPEG-4 SP/H.263 BL (encode and decode)
 H.264 (encode and decode)
 Java Video DL (under discussion)
 Hardware accelerators (under discussion)
 The building blocks include:
 8x8 SAD and 16x16 SAD
 8x8 DCT+Q and 8x8 IDCT+Q
 MPEG-4 Variable Length Decode
 Multiple 8x8 SADs in one function
 Multiple 8x8 DCTs in one function
 ...
40/48
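To make the kernel granularity concrete, this is what an 8x8 SAD (sum of absolute differences) building block computes; a plain Java sketch for illustration, not the OpenMAX DL C API itself.

```java
// 8x8 SAD kernel: the basic block-matching cost used by motion estimation.
public class Sad8x8 {
    // src and ref hold 8x8 pixel blocks; stride is the row pitch, so the same
    // code works on blocks embedded inside a larger frame buffer.
    public static int sad8x8(int[] src, int[] ref, int stride) {
        int sad = 0;
        for (int y = 0; y < 8; y++)
            for (int x = 0; x < 8; x++)
                sad += Math.abs(src[y * stride + x] - ref[y * stride + x]);
        return sad;
    }

    public static void main(String[] args) {
        int[] a = new int[64];
        int[] b = new int[64];
        for (int i = 0; i < 64; i++) { a[i] = i; b[i] = i + 2; }
        // every pixel differs by 2, and there are 64 pixels
        System.out.println(sad8x8(a, b, 8));  // prints "128"
    }
}
```

A motion-estimation loop calls such a kernel thousands of times per frame, which is why DL standardizes it as an acceleration target.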
Example: Image DL
 Provide a common building-block API to handle
 JPEG / JPEG 2000 (encode and decode)
 Color space conversion and packing/unpacking
 De-blocking / de-ringing
 Simple rotation and scaling
 The building blocks include:
 JPEG 8x8 DCT and 8x8 IDCT
 JPEG quantization
 JPEG Huffman encoding and decoding
 Color conversion
 Multiple 8x8 DCT and quantization in one function
 Multiple 8x8 DCTs in one function
 ...
41/48
OpenMax Integration Layer (IL)
 Abstracts the hardware/software architecture
 Provides a uniform interface for framework integration across many architectures
 Integrates with major frameworks (OSs, systems):
 Microsoft DirectShow, Symbian MDF, GStreamer, Java MMAPI
[Figure: three deployment scenarios under an application/framework stack: a codec running on an accelerator reached through IL over IPC; a codec running entirely on the host processor behind IL; and a codec split between the host processor and the accelerator, with IPC inside the IL component]
42/48
OpenMax IL Architecture
[Figure: IL clients in the multimedia framework issue commands through the IL core API and receive callbacks; the IL core instantiates OMX components, whose ports are connected by data tunnels on the platform]
43/48
OpenMax IL Concept
 OpenMAX IL is based on the visual programming model (similar to DirectShow)
 Example: a DVD player can use the OpenMAX IL core API to set up a component network:
[Figure: a file reader/demux feeds data to an audio decoder and audio renderer chain, and to a video decoder, video scheduler, and video renderer chain; a clock component supplies time to the audio renderer and the video scheduler]
 The system behavior must specify†
 Data flow
 Timing (scheduling of processing)
†More on this topic will be discussed in “system formalism” in future classes
44/48
OpenMax Core API
 The core API is used to build a “component graph”
 Initialization: OMX_Init(), OMX_Deinit()
 Component enumeration: OMX_ComponentNameEnum(), OMX_GetComponentsOfRole(), OMX_GetRolesOfComponent()
 Component invocation: OMX_GetHandle(), OMX_FreeHandle()
 Component connection: OMX_SetupTunnel()
 There are many types of components:
 Source, filter, sink, mixer, scheduler, …, etc.
45/48
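The graph-building flow can be mimicked with a toy model. The static methods below mirror the shape of OMX_GetHandle() and OMX_SetupTunnel(); the Component class and all names here are illustrative, not the real C API.

```java
import java.util.ArrayList;
import java.util.List;

// Toy component with one output port that can be tunneled to another
// component's input; delivered buffers are recorded and forwarded.
class Component {
    final String name;
    Component downstream;                 // output port tunneled to next input
    final List<String> received = new ArrayList<>();

    Component(String name) { this.name = name; }

    void deliver(String buffer) {         // consume, then forward on the tunnel
        received.add(buffer);
        if (downstream != null) downstream.deliver(buffer);
    }
}

public class IlCoreSketch {
    // Analogous to OMX_GetHandle(): instantiate a named component.
    static Component getHandle(String name) { return new Component(name); }

    // Analogous to OMX_SetupTunnel(): connect an output port to an input port.
    static void setupTunnel(Component out, Component in) { out.downstream = in; }

    public static void main(String[] args) {
        Component demux    = getHandle("file_reader_demux");
        Component decoder  = getHandle("video_decoder");
        Component renderer = getHandle("video_renderer");
        setupTunnel(demux, decoder);
        setupTunnel(decoder, renderer);

        demux.deliver("frame-0");               // data flows along the tunnels
        System.out.println(renderer.received);  // prints "[frame-0]"
    }
}
```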
Relationship Between IL and DL
[Figure: applications call a multimedia framework/middleware and system media driver, which sit on the OpenMAX Integration Layer; IL exposes component interfaces wrapping codecs, which are built on OpenMAX Development Layer primitives running on the hardware]
46/48
OpenMAX AL Concept
 OpenMAX IL is too complex for many developers
 AL is designed for simplified streaming media applications
 AL adopts an object-oriented media approach (similar to the concept in MPEG-21)
 Media objects enable PLAY and RECORD of media
[Figure: an AL media object takes input from a camera, audio input, URI, or memory (via content pipes) and renders to a display window, audio mix, URI, or memory]
47/48
Discussions
 OpenMax is not the only portable framework for multimedia applications
 MPEG has MPEG-21, MPEG-B/C Reconfigurable Video Coding, and MPEG Multimedia Middleware (M3W)
 Microsoft has the DirectX/DirectShow framework
 Many, many others …
 If there are too many “standard” frameworks for the developers to support, these “standards” are as bad as no standard at all
48/48