Literature Review
Shane Haw
The X170 Protocol as a Vehicle for 3D Sound Control
Table of Contents
1.0 Introduction
2.0 Real-time Multimedia Networking
2.1 FireWire (IEEE 1394)
2.1.1 FireWire Motivations
2.1.2 FireWire Communication Model
2.1.3 IPv4 over IEEE 1394
2.1.4 Possible FireWire configurations for the Surround Sound System
2.2 Ethernet AVB
2.2.1 Timing and Synchronisation
2.2.2 Stream Reservation Protocol
2.2.3 Forwarding and Queuing Enhancements for Time-Sensitive Streams
2.2.4 Possible Ethernet AVB Configurations for the Surround Sound System
2.3 AES-X170
2.3.1 Parameter Description
2.3.2 Types of Messages
2.3.3 Parameter Creation and Access
3.0 Graphics
3.1 Google Sketchup
3.2 Blender
3.3 3DCrafter
4.0 Human Computer Interaction
4.1 Three Dimensional Input Device
4.2 Optical Hand Tracking
4.3 Nintendo Wii
4.3.1 Johnny Chung Lee's Wii Projects
4.3.2 Wii3D
4.4 Microsoft Xbox 360 Kinect
5.0 Summary
References
1.0 Introduction
This literature review investigates three main technologies and their application within a three dimensional sound control system. The three technologies are real-time multimedia networking, computer graphics and human-computer interaction. This review provides an overview of how the three technologies work and shows how they fit together to accomplish three dimensional sound control. It looks at each technology in turn and presents the options available for use in this research.
2.0 Real-time Multimedia Networking
There are two main standards-based technologies for transmitting audio and video data across a network deterministically, namely FireWire (IEEE 1394) [2] and Ethernet AVB (IEEE 802.1(x) and IEEE 1722(x)) [11]. In order for the surround sound system to control the speakers in a room, these speakers will need to be networked. The surround sound control system will need to stream audio data deterministically across the network to these speakers at high speeds with constant transfer rates, as well as control the individual volume level of each speaker.
2.1 FireWire (IEEE 1394)
Development of FireWire was initiated in the mid-1980s by Apple Computer. As other manufacturers gained interest in FireWire, a working committee was formed in order to create a formal standard. This resulted in the IEEE 1394-1995 specification. The specification has since been refined and improved upon; its latest version, IEEE 1394b, defines higher throughput while retaining backwards compatibility [2].
2.1.1 FireWire Motivations
Some of the motivations behind FireWire’s development include high speed media
streaming, auto configuration and control, and support for deterministic streaming [2]. These
motivations and their development within the IEEE 1394 specification fit with the
requirements of the surround sound control system.
2.1.1.1 High Speed Media Streaming
The speakers, which will convert the audio samples within the isochronous packets sent to them on the network back into the original analogue signal, require those packets to reach them deterministically, within a prescribed time frame. IEEE 1394b supports bus speeds from 100 Mb/s up to 800 Mb/s [2].
2.1.1.2 Auto Configuration and Control
The IEEE 1394 serial bus supports automatic configuration; when a 1394 node is attached to the bus it automatically participates in the configuration process without intervention from the user. This makes setting up the FireWire network significantly easier and eliminates the need for the processor and the memory to be involved in the transfer of data between devices [2]. The IEEE 1394 standard also supports the implementation of a control protocol on top of it, which allows control messages to be sent and received across the FireWire network and devices to be discovered for streaming [14].
2.1.1.3 Support for Deterministic Streaming
The IEEE 1394 serial bus supports applications such as audio and video, both of which require constant transfer rates. Its isochronous transfer support reduces the amount of buffering needed by isochronous applications, thereby reducing cost [2]. This is the most important characteristic of FireWire for the surround sound control system, as the network must be able to move audio samples deterministically from the audio input to the speakers.
2.1.2 FireWire Communication Model
The IEEE 1394 serial bus supports two data transfer types, namely asynchronous transfers and isochronous transfers. Asynchronous transfers target a particular node based on a unique 48-bit address. Figure 1 demonstrates the Request/Response protocol that asynchronous transfers use. They do not require a constant data rate and therefore do not need regular use of the serial bus; however, they do require fair access over time. Fair access to the serial bus is gained through the arbitration mechanism. This mechanism ensures that isochronous transfers have priority over asynchronous transactions and that, within isochronous transfers, priority is granted to nodes closest to the root node [2, 14].
Figure 1: Request/Response Protocol (Asynchronous Transfers)
Isochronous transfers require that their packets be delivered at constant intervals. Instead of
making use of a unique address, isochronous packets use a channel number which allows the
isochronous data stream to be broadcast to one or more nodes listening to that channel
number. These packets require regular use of the serial bus and therefore have a higher
priority than asynchronous transfers. Any nodes that wish to perform isochronous transfers
must request the bandwidth that they need from the isochronous resource manager based on
the number of desired allocation units. Figure 2 demonstrates an isochronous transfer, which consists only of a request transaction [2].
Figure 2: Isochronous Transfer
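To make the distinction concrete, the following Python sketch models the two transfer types as plain data structures. The class and field names are purely illustrative and are not part of any IEEE 1394 API; they simply mirror the description above, in which an asynchronous transfer targets one node by address while an isochronous packet is broadcast on a channel to every listening node.

```python
from dataclasses import dataclass

# Illustrative models only; these are not IEEE 1394 API types.

@dataclass
class AsynchronousRequest:
    """Targets a single node via its unique address (node ID plus 48-bit offset)."""
    destination_node: int     # node identifier on the bus
    destination_offset: int   # 48-bit offset into that node's address space
    payload: bytes

@dataclass
class IsochronousPacket:
    """Broadcast at constant intervals on a channel; every listener receives it."""
    channel: int              # isochronous channel number
    payload: bytes            # e.g. a block of audio samples

def receivers(packet: IsochronousPacket, listeners: dict[int, set[str]]) -> set[str]:
    """Return the nodes subscribed to the packet's channel number."""
    return listeners.get(packet.channel, set())

listeners = {3: {"speaker-front-left", "speaker-front-right"}}
print(receivers(IsochronousPacket(channel=3, payload=bytes(128)), listeners))
```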
2.1.3 IPv4 over IEEE 1394
RFC 2734 defines the methods, data structures and codes required for the transport of Internet Protocol version 4 (IPv4) datagrams over an IEEE 1394 network. The standard additionally defines methods for the Address Resolution Protocol (ARP) [12]. Being able to transport IPv4 packets is important to the surround sound control system because the system requires the ability to set the volume level of each individual speaker. This requires the stacking of another protocol on top of IEEE 1394. IPv4 allows this stacking and, in particular, it allows the use of the AES-X170 protocol.
IP datagrams can be transported within the payload of a 1394 block write request or within the payload of a global asynchronous stream packet (GASP). GASP packets have the format and characteristics of an isochronous packet but are placed in the same transmission queue as other asynchronous transfers, and are hence named asynchronous stream packets. When a block write request is used it is directed towards the specific memory address of the node, 0xFFFF000D0000. At this address there will be a handler which receives the packets and, upon parsing the frame, passes the IP datagram to the IP stack [12]. Figure 3 demonstrates the ARP mechanism [14].
Figure 3: IEEE 1394 ARP request and response
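As a rough sketch of the encapsulation just described, and not of the full RFC 2734 mechanism, the code below prefixes an IPv4 datagram with an unfragmented encapsulation header before it would be handed to the link layer as the payload of a block write request. The header layout assumed here is a 2-bit lf field of zero, reserved bits, and the 16-bit EtherType (0x0800 for IPv4); fragmentation and the GASP header are omitted.

```python
import struct

ETHER_TYPE_IPV4 = 0x0800

def encapsulate_unfragmented(ip_datagram: bytes) -> bytes:
    """Prefix an IPv4 datagram with an assumed unfragmented encapsulation
    header: lf = 0 in the top two bits of the first 16-bit word, the rest
    reserved, followed by the EtherType. Fragmentation is omitted."""
    lf = 0
    first_word = lf << 14              # lf occupies the two most significant bits
    return struct.pack(">HH", first_word, ETHER_TYPE_IPV4) + ip_datagram

# Placeholder datagram; a real one would come from the host's IP stack.
payload = encapsulate_unfragmented(bytes(20))
print(payload[:4].hex())               # '00000800'
```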
2.1.4 Possible FireWire configurations for the Surround Sound System
There are two possible configurations that the surround sound control system could take. The first is to daisy-chain the individual speakers, each with its own IEEE 1394 node, to each other and then to a central control node; the second is to connect each speaker to a breakout box which is in turn connected to a control node. The main advantage of the daisy-chaining configuration is that the speakers are connected to each other, so there is no need to run long cables to each speaker. However, due to hardware restrictions, this option is not available and so the surround sound system will use the breakout box to connect each speaker individually.
2.2 Ethernet AVB
As mentioned above, there are two main standards-based technologies for the deterministic transportation of audio data across a network [14]. The other main technology is Ethernet AVB (Audio/Video Bridging) [11]. Ethernet AVB consists of a suite of standards:
- 802.1AS Timing and Synchronization [15];
- 802.1Qat Stream Reservation Protocol [16];
- 802.1Qav Forwarding and Queuing Enhancements for Time-Sensitive Streams [17]; and
- 802.1BA Audio Video Bridging (AVB) Systems [11].
2.2.1 Timing and Synchronisation
This standard provides the protocols and procedures used to ensure that the synchronisation requirements of time-sensitive applications, such as audio and video, are met. It describes how devices can minimize jitter, wander, and time synchronisation problems so as to meet these requirements. Time synchronisation is performed by a grandmaster sending information, including the current synchronised time, to all directly attached time-aware systems. Each of these attached time-aware systems must correct the received synchronised time by adding the propagation time needed for the information to transit from the grandmaster to itself. If the device is a time-aware bridge, it is required to forward the corrected time, including an additional correction for the forwarding delay, to all other attached time-aware systems. For this mechanism to work, two time intervals need to be precisely known: the forwarding delay within a time-aware bridge, known as the residence time, and the time taken for the synchronised time information to transit the link between two time-aware systems. The standard also specifies the generalized Precision Time Protocol (gPTP), which includes the procedure for measuring these delays [15]. Timing and synchronisation is important in a surround sound system as audio packets need to be presented to the speakers at the right time, in the right order.
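The arithmetic of this correction is straightforward, and the sketch below shows how a time-aware end station could compute the synchronised time from the grandmaster's timestamp plus the accumulated link propagation delays and bridge residence times. The function and parameter names are illustrative and are not taken from IEEE 802.1AS.

```python
def corrected_time(grandmaster_time_ns: int,
                   link_delays_ns: list[int],
                   residence_times_ns: list[int]) -> int:
    """Synchronised time as seen by the receiving time-aware system: the
    grandmaster's timestamp plus every link propagation delay on the path
    and the residence time spent inside every time-aware bridge."""
    return grandmaster_time_ns + sum(link_delays_ns) + sum(residence_times_ns)

# Example: grandmaster -> bridge -> end station, with example delays in ns.
t = corrected_time(
    grandmaster_time_ns=1_000_000_000,
    link_delays_ns=[350, 420],        # measured by the gPTP delay procedure
    residence_times_ns=[6_500],       # forwarding delay inside the bridge
)
print(t)                              # 1000007270
```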
2.2.2 Stream Reservation Protocol
This standard provides the protocols, procedures, and managed objects, usable by existing higher-layer mechanisms, that allow network resources to be reserved for specific traffic streams across a bridged local area network. It characterizes the resource requirements of traffic streams in enough detail that bridges can determine the required resources and dynamically maintain them. The standard specifies the use of Dynamic Reservation Entries in the filtering database of bridges for the forwarding of frames associated with a particular stream. It also specifies the Stream Reservation Protocol (SRP) itself, which facilitates the registration, de-registration and maintenance of stream reservation information in the relevant bridges along a stream's path from one end to the other [16].
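To give an idea of the resource calculation a bridge performs, the sketch below estimates the bandwidth to set aside for one stream from a talker's traffic specification (maximum frame size and frames sent per class measurement interval). The 125 microsecond interval and the per-frame wire overhead used here are assumptions made for illustration; they are not taken from the text above.

```python
def reserved_bandwidth_bps(max_frame_size: int,
                           max_interval_frames: int,
                           class_interval_s: float = 125e-6,
                           per_frame_overhead: int = 42) -> float:
    """Rough bandwidth (bits/s) a bridge might reserve for one stream.

    max_frame_size       -- largest frame the talker will send (bytes)
    max_interval_frames  -- frames sent per class measurement interval
    class_interval_s     -- 125 us is assumed here as the class interval
    per_frame_overhead   -- assumed per-frame overhead on the wire (preamble,
                            headers, FCS, interframe gap), in bytes
    """
    bits_per_interval = (max_frame_size + per_frame_overhead) * 8 * max_interval_frames
    return bits_per_interval / class_interval_s

# Example: an audio stream sending one 192-byte frame every interval.
print(f"{reserved_bandwidth_bps(192, 1):,.0f} bit/s")   # about 15 Mbit/s
```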
2.2.3 Forwarding and Queuing Enhancements for Time-Sensitive Streams
This standard allows bridges to provide performance guarantees for the transmission of time-sensitive, loss-sensitive real-time audio/video data streams. It specifies priority regeneration and controlled bandwidth draining algorithms. The standard defines status parameters that allow the boundaries of the SRP domain to be identified and maintained; it specifies how priority information received at SRP domain boundary ports is regenerated; it specifies how priority information is used to determine the traffic classes to be used by time-sensitive streams; and lastly, it defines a credit-based shaper algorithm to shape traffic in accordance with stream reservations [17].
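The credit-based shaper mentioned above can be summarised in a few lines: credit accumulates at the reserved idle slope while a frame is waiting, drains while a frame is being transmitted, and a queued frame may only start transmission when the credit is non-negative. The following sketch is a simplified simulation of that rule for back-to-back frames, not an implementation of IEEE 802.1Qav.

```python
def credit_based_shaper(frame_bits: list[int],
                        idle_slope_bps: float,
                        port_rate_bps: float) -> list[float]:
    """Start time (s) of each queued frame under a simplified credit-based
    shaper: credit rises at idle_slope while frames wait, falls at
    (idle_slope - port_rate) during transmission, and a frame may start
    only once credit >= 0."""
    send_slope = idle_slope_bps - port_rate_bps   # negative
    credit, now, starts = 0.0, 0.0, []
    for bits in frame_bits:
        if credit < 0:                            # wait for credit to recover
            now += -credit / idle_slope_bps
            credit = 0.0
        starts.append(now)
        tx_time = bits / port_rate_bps            # time the frame is on the wire
        credit += send_slope * tx_time            # credit drains while sending
        now += tx_time
    return starts

# Three 12 kbit frames on a 100 Mbit/s port with 10 Mbit/s reserved:
print(credit_based_shaper([12_000] * 3, idle_slope_bps=10e6, port_rate_bps=100e6))
# the frames are spread out so the stream averages the reserved 10 Mbit/s
```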
These standards work together to form the Ethernet AVB protocol. The surround sound
control system needs deterministic transport of audio and video data and the ability to carry
control messages such as the AES-X170 messages. Ethernet AVB provides this required
functionality and so is an option in the implementation of the surround sound system.
2.2.4 Possible Ethernet AVB Configurations for the Surround Sound System
The only possible configuration for the surround sound control system using Ethernet AVB is to have a central Ethernet AVB bridge which connects to several individual Ethernet AVB enabled speakers and to a central control PC. Currently, there are no Ethernet AVB enabled speakers and so this option is not available.
2.3 AES-X170
The AES-X170 protocol is an IP-based peer to peer network protocol through which
connected devices can send and receive connection management, control and monitoring
messages. Every device must be addressable via a hierarchical structure that reflects the
natural layout of the device. An AES-X170 message shall access a parameter on a device by
providing the hierarchical address that models the parameter within the device. In order for
this address to be parsed and the parameter to be located an AES-X170 stack must be
implemented on each device. Apart from addressing the parameter via its hierarchical
position in the device, it is also possible to address it via a unique identifier. Once this identifier has been obtained, the parameter can from then on be referenced by it instead of by the hierarchical address, which reduces bandwidth [14]. In the context of a surround sound
control system the AES-X170 messages will relate to the volume parameter of each speaker.
Every AES-X170 message is contained within a user datagram protocol (UDP) datagram.
Within the datagram is the 104-bit AES-X170 address block which is used to address the 7-level hierarchy.
2.3.1 Parameter Description
Parameters are positioned at the lowest level of the hierarchy. The seven levels of the hierarchy are as follows:
- Section Block
- Section Type
- Section Number
- Parameter Block
- Parameter Block Index
- Parameter Type
- Parameter Index
The section block is the highest functional group. Any device can be considered to be composed of a number of sections [14]. The volume parameters of the surround sound system would be considered part of the Output section block as they describe an aspect of the output that the speakers provide.
The section type is used to differentiate between the different subgroups within the section
block [14]. As the speakers only produce one type of output the section type is not needed in
the description of the volume parameters.
The section number is used to indicate the interface or the channel number [14]. The audio
stream that goes to each speaker will be on a specific channel. This channel will be specified
in the hierarchical description of the volume parameters for each speaker.
The parameter block is used to describe which cluster or parameter group the parameter being
described falls into [14]. This level of the hierarchy is not needed for the description of the
volume parameter of each speaker.
The parameter block index allows for the further grouping of similar components within a
parameter block [14]. This level of the hierarchy is also not needed in the description of the
volume parameter of each speaker.
The parameter type describes the type of parameter being accessed [14]. This level will
describe a gain or a volume parameter as the parameter type of the output being produced by
the speaker.
Lastly, the parameter index allows for the addressing of individual parameters of the same type [14]. As each speaker produces only one level of output, there is only one gain or volume parameter, and so this level will also not be needed in the description of the volume parameter.
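As an illustration of how the volume parameter of one speaker might be described by these seven levels, the sketch below builds a hierarchical address using placeholder codes. The actual level identifiers are defined by the AES-X170 specification and are not reproduced in this review, so every numeric value in this example is hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class X170Address:
    """Seven-level AES-X170 hierarchical address (values are placeholders)."""
    section_block: int          # highest functional group, e.g. an output block
    section_type: int           # subgroup within the block (not needed here)
    section_number: int         # interface or channel number
    parameter_block: int        # parameter cluster (not needed here)
    parameter_block_index: int  # grouping within the cluster (not needed here)
    parameter_type: int         # e.g. a gain/volume parameter type
    parameter_index: int        # index within the type (only one gain, so unused)

# Hypothetical codes, invented purely for illustration.
OUTPUT_SECTION_BLOCK = 0x05
GAIN_PARAMETER_TYPE = 0x21

# Volume parameter of the speaker fed by audio channel 3; unused levels are 0.
speaker3_volume = X170Address(OUTPUT_SECTION_BLOCK, 0, 3, 0, 0,
                              GAIN_PARAMETER_TYPE, 0)
print(speaker3_volume)
```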
2.3.2 Types of Messages
Every AES-X170 message is either a request sent from one device to another, or a response from a device after a request has been made. Whether a message is a request or a response is indicated by the Message Type field of the AES-X170 message. The nature of a request is indicated by the Command Executive and Command Qualifier fields. Table 1 shows the various AES-X170 message types [14].
Table 1: Message Type Definitions
Message Type                          Value
Full Address block with response      0x00
Full Address block with no response   0x01
Indexed message with response         0x02
Indexed message with no response      0x03
Response message                      0x04
The Command Executive component describes the fundamental nature of the command
whilst the Command Qualifier allows the command to be directed at a particular attribute of
the parameter. Table 2 describes the different Executive Commands and Table 3 describes the
different Command Qualifiers [14].
Table 2: Executive Commands
Hex ID   Command   Description
0x00     GET       Get one or more data values
0x01     SET       Set one or more data values
0x02     ACT       Perform an action
0x03     JOIN      Join a parameter to a group
0x04     UNJOIN    Detach a parameter from a group
0x05     CREATE    Create a structure such as a list
0x06     SAVE      Save a structure such as a list
Table 3: Command Qualifiers
Hex ID   Command Qualifier   Description
0x00     VAL                 Refers to a parameter’s value
0x01     VTBL                Refers to the value names of a parameter
0x02     CLA                 Refers to the child level alias
0x03     FLAG                Refers to the various flags of a device
0x04     SEC                 Refers to the user access level
0x05     PUSH                Adds a parameter to a Push list
0x06     PUSH_OFF            Removes from the Push list of a parameter
0x07     DATA_BLOCK          Refers to a set of values pushed by a parameter
0x08     MASTERS             Refers to the master group of a slave parameter
0x09     SLAVES              Refers to the slave group of a master parameter
0x0A     MASTER_OFF          Refers to the removal of a master group parameter
0x0B     SLAVE_OFF           Refers to the removal of a slave group parameter
0x0C     PEER_OFF            Refers to the removal of a peer group parameter
0x0D     MSTGRP              Refers to the master group associated with a parameter
0x0E     PTPGRP              Refers to the peer to peer group of a parameter
0x0F     GRPVAL              Refers to the value of a parameter within a group
0x10     PTP                 Used with the JOIN command executive
0x11     MSTSLV              Used with the JOIN command executive
0x12     SNP                 Refers to a snapshot of a device’s parameters
The surround sound system will simply need to make use of the GET and the SET Executive
commands in conjunction with the VAL Command Qualifier. These will be used to get and to
set the individual volume or gain parameters for each speaker in the network.
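The sketch below shows how such a request could be assembled and sent in a UDP datagram. The Message Type, Command Executive and Command Qualifier codes come from Tables 1 to 3 above, but the byte layout, the port number and the use of a raw 13-byte (104-bit) address block are assumptions made purely for illustration; they do not reproduce the actual AES-X170 packet format.

```python
import socket
import struct

# Codes taken from Tables 1-3 above.
MSG_FULL_ADDRESS_WITH_RESPONSE = 0x00
CMD_GET, CMD_SET = 0x00, 0x01
QUAL_VAL = 0x00

X170_PORT = 49152            # assumed port number, for illustration only

def build_set_val(address_block: bytes, value: int) -> bytes:
    """Assemble an illustrative SET/VAL request: message type, command
    executive, command qualifier, the 104-bit (13-byte) address block and
    a signed 32-bit value. This is NOT the real AES-X170 wire format."""
    if len(address_block) != 13:
        raise ValueError("address block must be 104 bits (13 bytes)")
    header = struct.pack(">BBB", MSG_FULL_ADDRESS_WITH_RESPONSE, CMD_SET, QUAL_VAL)
    return header + address_block + struct.pack(">i", value)

def send_volume(device_ip: str, address_block: bytes, volume: int) -> None:
    """Send the request to the device inside a single UDP datagram."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(build_set_val(address_block, volume), (device_ip, X170_PORT))

if __name__ == "__main__":
    # Set the (hypothetical) volume parameter of one speaker to -12.
    send_volume("192.168.0.31", bytes(13), -12)
```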
2.3.3 Parameter Creation and Access
Each AES-X170 device will have associated with it an application, an AES-X170 Stack with its associated hierarchical tree structure, and a parameter store containing all the parameters for that particular device. On start-up an application will request the creation of parameters via the AES-X170 Stack by making use of the AES-X170 Stack API. Within this request will be the hierarchical address that describes the parameter. There will also be a pointer to a callback function that contains code to process the parameter request. The AES-X170 Stack will incorporate this hierarchical address into the AES-X170 tree by ensuring that there is a node in the tree for each identifier at each level of the description. Finally, the AES-X170 Stack creates a parameter store entry for the parameter, and adds at the bottom of the tree a pointer to this parameter as well as a pointer to the callback function. Figure 4 provides a diagram of this conceptual model [14].
Figure 4: Conceptual Model of parameter creation
When a device receives an AES-X170 message it passes this message to the AES-X170 Stack for processing. The AES-X170 Stack extracts the address block from the AES-X170 message and uses the successive level identifiers of the address block to traverse the nodes of the tree. Once the appropriate leaf of the tree is found, the callback associated with that leaf is called, passing it the application data and the value received from the AES-X170 message. Finally, the callback performs the required operation on the parameter. Figure 5 provides a diagram of this process [14].
Figure 5: AES-X170 messaging and parameter access
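The tree traversal and callback mechanism described above can be captured in a compact sketch. The class below is not the AES-X170 Stack API; it is a toy model of the conceptual diagrams: parameter creation inserts one node per level of the hierarchical address and stores the parameter entry and callback at the leaf, and an incoming message walks the same levels and invokes the callback with the received value.

```python
class X170Stack:
    """Toy model of the stack's tree: each level identifier of an address
    becomes a dictionary key, and the leaf holds the parameter store entry
    and the callback registered by the application."""

    def __init__(self):
        self.tree = {}

    def create_parameter(self, address: tuple, initial_value, callback) -> None:
        """Insert a node for each level of the address and attach the
        parameter entry and callback at the bottom of the tree."""
        node = self.tree
        for level in address[:-1]:
            node = node.setdefault(level, {})
        node[address[-1]] = {"value": initial_value, "callback": callback}

    def dispatch(self, address: tuple, value) -> None:
        """Use the successive level identifiers to traverse the tree, then
        call the leaf's callback with the parameter entry and the value."""
        node = self.tree
        for level in address:
            node = node[level]
        node["callback"](node, value)

def set_speaker_gain(entry, value):
    entry["value"] = value               # the callback applies the request
    print("gain set to", value)

stack = X170Stack()
volume_addr = ("output", 0, 3, 0, 0, "gain", 0)     # placeholder identifiers
stack.create_parameter(volume_addr, initial_value=0, callback=set_speaker_gain)
stack.dispatch(volume_addr, -12)                    # prints: gain set to -12
```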
3.0 Graphics
The surround sound control system requires a way for the user to interact with a virtual representation of itself. The human-computer interaction component and the computer graphics component make up this virtual representation. The user will interact with the virtual world through a three dimensional input device; these devices are discussed in a later section. Within this virtual world there needs to be a component representing each speaker in the system, as well as a single crosshair component that is moved around the world by input from the user. A sketch of the desired virtual world is shown in Figure 6. Several software packages were considered for creating this virtual world.
Figure 6: Sketch of Virtual World
3.1 Google Sketchup
Google Sketchup gives the user freedom to create any three dimensional models that they
might require [8]. It provides intuitive tools for the creation of these models as well as a Ruby
API (Application Programming Interface) which provides for external interaction [9]. For the
surround sound system the external interaction required would be in the form of providing 3D
coordinates from the user input device to move the crosshair component around the world.
The Google Sketchup Ruby API provides an animation class which implements methods such as nextFrame, pause, resume and stop [10]. These methods allow for the required animation.
3.2 Blender
Blender is an open source 3D content creation suite which is available for all major operating systems under the GNU General Public Licence [3]. Blender provides a wide range of functionality and has been used in many applications such as three dimensional movies, computer games and three dimensional pictures [4, 5, 6]. Blender allows for the
loading and running of external scripts through its Python API [7]. Blender will provide all
the functionality needed for the surround sound system.
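As a small example of the kind of external control the surround sound system needs, the snippet below uses Blender's Python API to move an object to a new position. It has to be run inside Blender, and the object name "Crosshair" and the coordinates are assumptions made for illustration.

```python
import bpy  # Blender's Python API, available only inside Blender

def move_crosshair(x: float, y: float, z: float) -> None:
    """Move the crosshair object to the coordinates supplied by the three
    dimensional input device (the object name "Crosshair" is assumed)."""
    crosshair = bpy.data.objects["Crosshair"]
    crosshair.location = (x, y, z)

move_crosshair(1.5, -0.5, 2.0)
```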
3.3 3DCrafter
3D Crafter is a real-time 3D modelling and animation tool. It provides an intuitive drag-and-drop approach and allows the creation of animations by placing shapes within the three dimensional scene at each unit in time. This is similar to the approach taken by Google Sketchup. The free version of 3DCrafter does not allow scripting or the use of third party plugins, and so it does not provide all the functionality that the surround sound system requires; however, the 3DCrafter Pro suite does allow for these [1].
4.0 Human Computer Interaction
4.1 Three Dimensional Input Device
For the surround sound control system the user must be able to move the sound within the room by making use of some form of three dimensional input device. This device must capture the movements of the user's hand, so as to give the effect of the user moving the sound around the room manually, and produce three dimensional coordinates for the crosshair component within the graphics component of the surround sound system; these coordinates will in turn produce volume changes in each of the speakers.
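One simple way to turn the crosshair position into per-speaker volume levels is inverse-distance panning: each speaker's gain falls off with its distance from the crosshair and the gains are then normalised. The sketch below is offered only as one possible mapping, with invented speaker positions; it is not the method the system is committed to.

```python
import math

def speaker_gains(crosshair: tuple[float, float, float],
                  speakers: dict[str, tuple[float, float, float]],
                  rolloff: float = 1.0) -> dict[str, float]:
    """Map a 3D crosshair position to a normalised gain per speaker using
    inverse-distance weighting (one of many possible panning laws)."""
    weights = {}
    for name, position in speakers.items():
        distance = math.dist(crosshair, position)
        weights[name] = 1.0 / (distance ** rolloff + 1e-6)   # avoid divide-by-zero
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

room = {
    "front-left":  (-2.0,  2.0, 1.2),
    "front-right": ( 2.0,  2.0, 1.2),
    "rear-left":   (-2.0, -2.0, 1.2),
    "rear-right":  ( 2.0, -2.0, 1.2),
}
print(speaker_gains((1.0, 1.0, 1.2), room))   # loudest at front-right
```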
4.2 Optical Hand Tracking
Hand tracking is predominantly performed through three main methods: optical tracking, magnetic tracking, and acoustic tracking [18]. For the surround sound control system only optical tracking will be considered, as there will not be enough time to research and build a new hand tracking device. There are two main methods of optical tracking. In the first, LEDs or small infra-red reflecting dots are placed on the body or on a glove, and a series of cameras surrounding the subject pick out the markers in their visual field. Software then correlates the multiple viewpoints and uses the different perspectives to calculate a three dimensional coordinate for each marker. The second method uses a single camera to capture the silhouette image of the subject to determine the positions of the various parts of the body and user gestures [18].
In considering which device would provide the type of three dimensional input that the surround sound system requires, two devices were examined, one from each type of optical hand tracking method. These devices were the Nintendo Wii Controller, implemented in research by Johnny Chung Lee [13] and João Lourenço [19], and the Microsoft Xbox 360 Kinect [21].
4.3 Nintendo Wii
4.3.1 Johnny Chung Lee’s Wii Projects
Johnny Chung Lee has undertaken three main projects involving human computer interaction and the Nintendo Wii. These projects are: “Tracking Your Fingers with the Wiimote”, “Low-Cost Multi-point Interactive Whiteboards Using the Wiimote”, and “Head Tracking for Desktop VR Displays using the Wii Remote”. The two projects most applicable to the surround sound control system are “Tracking Your Fingers with the Wiimote” and “Head Tracking for Desktop VR Displays using the Wii Remote”. The first project demonstrates the ability of the Nintendo Wii Remote to track individual fingers accurately; however, this tracking is limited to two dimensions. The second project demonstrates the ability of the Nintendo Wii Remote, when used in conjunction with the sensor bar, to track a person’s head in three dimensional space [13]. This research has been extended by João Lourenço in his Computer Science Honours thesis: “Wii3D: Extending the Nintendo Wii Remote into 3D” [19].
4.3.2 Wii3D
João Lourenço made use of two Nintendo Wii Remotes to track the movement of infra-red lights attached to the user. He correlated the viewpoints from the two Nintendo Wii Remotes to resolve the points into three dimensional space; stereoscopic triangulation was used to resolve the viewpoints into a set of three dimensional coordinates [19]. This approach fulfils the requirements of the surround sound control system. The other possibility for tracking hand movements optically is through capturing the silhouette of the user, and the Microsoft Xbox 360 Kinect makes use of this approach [19].
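For a rectified, parallel two-camera arrangement the triangulation reduces to a few lines: the horizontal disparity between the two image positions of the same infra-red point gives its depth, and the remaining coordinates follow by back-projection. The sketch below assumes such an idealised set-up with a known baseline and focal length and image coordinates measured from each camera's optical centre; it is not taken from the Wii3D implementation.

```python
def triangulate(left_px: tuple[float, float],
                right_px: tuple[float, float],
                baseline_m: float,
                focal_px: float) -> tuple[float, float, float]:
    """Recover a 3D point from its pixel coordinates in two parallel,
    rectified cameras separated by baseline_m metres. Depth follows from
    the disparity, Z = f * b / (x_left - x_right), and X, Y follow by
    back-projecting the left camera's pixel."""
    xl, yl = left_px
    xr, _ = right_px
    disparity = xl - xr
    if disparity <= 0:
        raise ValueError("point must have positive disparity")
    z = focal_px * baseline_m / disparity
    return (xl * z / focal_px, yl * z / focal_px, z)

# Example: cameras 0.30 m apart with a focal length of 1300 px (values assumed).
print(triangulate((120.0, 40.0), (80.0, 40.0), baseline_m=0.30, focal_px=1300.0))
```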
4.4 Microsoft Xbox 360 Kinect
The Microsoft Xbox 360 Kinect has a feature that Microsoft has named skeletal tracking, whereby one or two people within the Kinect's field of view are tracked; this allows for the easy development of gesture-driven applications [22]. The Kinect SDK (Software Development Kit) provides the programmer with depth information, video and skeleton data with which to develop applications. The skeletal data includes the tracking of hands. This hand tracking, used in conjunction with the depth information, allows the programmer to determine the three dimensional coordinates of one or both of the hands, and so could be used within the surround sound control system. This hand tracking does not require the use of any external device besides the Microsoft Xbox 360 Kinect [20].
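Assuming the hand joint position has already been read from the SDK's skeleton data (in metres in the sensor's camera space, with X to the right, Y up and Z away from the sensor), a small mapping step is still needed to place the crosshair in the listening room. The sketch below shows one such mapping; the room dimensions and the direct linear mapping are assumptions for illustration, and no Kinect API calls are shown.

```python
def hand_to_room(hand_camera_m: tuple[float, float, float],
                 room_extent_m: tuple[float, float, float] = (4.0, 4.0, 2.5)
                 ) -> tuple[float, float, float]:
    """Map a hand position in sensor camera space onto room coordinates,
    clamped to the room's (width, depth, height). The sensor is assumed to
    sit centred on one wall at half the room height."""
    def clamp(v, lo, hi):
        return max(lo, min(hi, v))
    x, y, z = hand_camera_m
    width, depth, height = room_extent_m
    return (clamp(x + width / 2, 0.0, width),    # sensor centred on room width
            clamp(z, 0.0, depth),                # distance from sensor = depth
            clamp(y + height / 2, 0.0, height))  # height relative to sensor

print(hand_to_room((0.2, 0.1, 1.5)))             # (2.2, 1.5, 1.35)
```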
5.0 Summary
The surround sound control system will make use of three technologies: real-time multimedia networking, computer graphics and human-computer interaction. Within each of these technologies there are several options as to which piece of software or which device the surround sound system will use. The system will use FireWire (IEEE 1394) as its networking protocol. The system will use the AES-X170 protocol to send and receive control messages to and from each of the devices connected to the network. The system will have a virtual representation of itself which will be displayed within a virtual world. This virtual world will be created and run within Google Sketchup and its Ruby interface. The user will interact with the system via Microsoft's Xbox 360 Kinect, which will track the user's hands and enable the user to move sound around the room by moving their hands.
References
1 Amabilis. Products Page. Available from http://www.amabilis.com/products.htm ; accessed 22 June 2011.
2 Anderson, D. FireWire System Architecture. Addison Wesley Longman, Inc., Reading, 1999.
3 Blender. Blender Home Page. Available from http://www.blender.org/ ; accessed 22 June 2011.
4 Blender. Blender Features. Available from http://www.blender.org/features-gallery/features/ ; accessed 22 June 2011.
5 Blender. Blender Movies Page. Available from http://www.blender.org/features-gallery/movies/ ; accessed 22 June 2011.
6 Blender. Blender Art Gallery. Available from http://www.blender.org/features-gallery/gallery/art-gallery/ ; accessed 22 June 2011.
7 Blender. Blender Documentation Contents. Available from http://www.blender.org/documentation/blender_python_api_2_57_release/ ; accessed 22 June 2011.
8 Google. Google Sketchup Home Page. Available from http://sketchup.google.com/intl/en/product/gsu.html ; accessed 22 June 2011.
9 Google. Google Sketchup Ruby API. Available from http://code.google.com/apis/sketchup/ ; accessed 22 June 2011.
10 Google. Google Sketchup Ruby API Animation Interface. Available from http://code.google.com/apis/sketchup/docs/ourdoc/animation.html ; accessed 22 June 2011.
11 IEEE. Audio Video Bridging (AVB) Systems. Institute of Electrical and Electronics Engineers, 2010.
12 Johansson, P. IPv4 over IEEE 1394. Request for Comments 2734, 1999.
13 Lee, J. C. Johnny Chung Lee – Projects – Wii. Available from http://johnnylee.net/projects/wii/ ; accessed 22 June 2011.
14 Foss, R., Foulkes, P., Laubscher, R., Gurdan, R., Klinkradt, B., Chigwamba, N. Layer 2 Multimedia. Grahamstown, 2011.
15 IEEE. Timing and Synchronization for Time-Sensitive Applications in Bridged Local Area Networks. Institute of Electrical and Electronics Engineers, New York, 2011.
16 IEEE. Stream Reservation Protocol (SRP). Institute of Electrical and Electronics Engineers, 2010.
17 IEEE. Forwarding and Queuing Enhancements for Time-Sensitive Streams. Institute of Electrical and Electronics Engineers, New York, 2009.
18 Sturman, D. J., Zeltzer, D. A Survey of Glove-based Input. IEEE Computer Graphics & Applications (January 1994), 30-39.
19 Lourenço, J. Wii3D: Extending the Nintendo Wii Remote into 3D. Rhodes University, Grahamstown, 2010.
20 Microsoft. SkeletalViewer Walkthrough. Microsoft Research, 2011.
21 Microsoft. Kinect for Windows. Available from http://research.microsoft.com/en-us/um/redmond/projects/kinectsdk/ ; accessed 22 June 2011.
22 Microsoft. About Kinect for Windows. Available from http://research.microsoft.com/en-us/um/redmond/projects/kinectsdk/about.aspx ; accessed 22 June 2011.