'''
a hypothetical example / mockup
of a real time pattern dispensation system
for developing & improvising with
databases of atomic / motivic materials
'''
# the "pitch class universe"
# active for the current composition
# (typically would be derived from Parseval / Athena or the like)
PCsets = {}
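By way of illustration, the block below stands in for the empty dictionary above with a couple of hypothetical entries (the labels and sets are invented here, not drawn from Parseval / Athena), plus a minimal transposition helper of the kind such sets would be passed to:

```python
# hypothetical PCsets contents: label -> tuple of pitch classes (ints mod 12)
PCsets = {
    "minor_triad": (0, 3, 7),
    "octatonic": (0, 1, 3, 4, 6, 7, 9, 10),
}

def transpose(pcset, interval):
    """Return the pitch-class set transposed by `interval` semitones."""
    return tuple((pc + interval) % 12 for pc in pcset)

transpose(PCsets["minor_triad"], 2)  # -> (2, 5, 9)
```

Rotations, equivalence and difference tests mentioned below would operate on the same tuple representation.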
# atomic elements in composite combinations creating patterns
# defined as instances of Pattern Class below
patternbook = {}
# rhythmic motifs that can drive patterns
# can be dispersed in realtime
# or recorded from real time rhythmic gestures
# driving defined patterns
# one step at a time
rhythmbook = {}
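One hypothetical shape for rhythmbook (the keys and figures are invented): each motif is a list of inter-onset durations in beats, one entry per metronomic advance of the pattern it drives.

```python
# hypothetical rhythm motifs: each list entry advances a driven pattern one step
rhythmbook = {
    "tresillo": [1.5, 1.5, 1.0],
    "even_eighths": [0.5] * 8,
}

def motif_length(key):
    """Total duration of a rhythm motif, in beats."""
    return sum(rhythmbook[key])
```

A motif recorded from a real time gesture would simply append its measured inter-onset times as another entry.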
# midikeynumber<int>:(rhythmkey (or rhythmclock advance),patternkey)
# the real time triggerable part
# MIDI is assumed to be directed from Csound
# into python via pyops
keymappings = {}
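Following the layout in the comment above, a hypothetical mapping and lookup (key numbers and names are invented); "advance" here marks a manual rhythm-clock step rather than a named motif:

```python
# midikeynumber<int> -> (rhythmkey or rhythm-clock advance, patternkey)
keymappings = {
    60: ("tresillo", "themeA"),
    61: ("advance", "themeA"),   # manual rhythm-clock advance
    62: ("even_eighths", "themeB"),
}

def on_midi_key(keynum):
    """Resolve an incoming MIDI key number to its mapping, or None."""
    return keymappings.get(keynum)
```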
# "targetinstrumentdict"
# i have an existing instrument class definition in Parseval
# that "realises itself" as csound score
# in non real time
# based on the instrument objects variables
# many of which are synonymous
# with "atomic elements"
# the only difference between
# what is to happen here in real time
# & the existing Parseval implementation
# is that conditional duration statements
# are not collated & resolved as in Parseval
# in one, totalistic functional iteration
# but rather are here held pending conditional termination
# (***see Pattern.targets below)
# in orchestral / live performance terms,
# this targetinstrumentdict would be the "stage"
# where performers are assembled
# to perform their organ grinding tricks
# i could cut down the existing, non real time Parseval class
# to make a lightweight version
# facilitating real time use
targetinstrumentdict = {}
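As a sketch of what the cut-down, real time "stage" might minimally hold: each target (an abstract performer) tracks at most its currently sounding note, which is the only state the pending conditional durations need. The target names and the displacement behaviour are assumptions for illustration, not the existing Parseval instrument class.

```python
# hypothetical stage state: target name -> currently held note (None = silent)
targetinstrumentdict = {
    "violin_1": None,
    "horn_1": None,
}

def assign_note(target, note):
    """Give a target a new note, returning whichever note it displaces."""
    displaced = targetinstrumentdict[target]
    targetinstrumentdict[target] = note
    return displaced
```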
# *********************************
# the ultimate aim in all this therefore
# is not to simply deliver final audio file
# representations of the real time performance
# but to generate score output
# (in my case Parseval score)
# that enables further processing & editing to take place
# yet is reverse engineerable / self analysing
# in terms of motivic content
# & relationships
# a nestable chain of pattern derivations
# could be applied to the class
# to extract a full developmental unravelling
# or thematic / motivic history
# i also have experiments within Parseval
# to this end accordingly
# they would not be relevant to any real time usage
# but could be useful in more advanced tree structure GUI
# representations of thematic materials
class Pattern():
    def __init__(self):
        # index back to the PCsets dictionary above,
        # allowing relevant functions to be called upon
        # these PC sets where applicable
        # (rotations, transpositions, equivalence, difference etc. etc.)
        self.PCkey = ""
        # how many unique pitch integer elements in the set
        # (could be derived from PCkey above)
        self.cardinality = 0
        # ********************************
        # 1 == a vertical / chordal pcset
        # >1 == "horizontal / melodic" distribution
        # (can still contain vertical coincidences however)
        self.rhythmsteps = 1
        # ********************************
        # every metronomic advance upon the pattern
        # will trigger if it matches
        # this associated index
        # i.e. a 3-voice chord would == [1,1,1]
        # triggering all on the 1st rhythmstep
        self.triggerorder = []
        # ********************************
        # don't have to be unique
        self.pitches = []
        # supplied to pitches at the matching index value
        self.octivation = []
        # ********************************
        # conditional duration statements
        # determining termination of currently held notes
        # available options:
        # "n" - hold until next note
        # "k" - hold until next note of same pitch
        # (implements keyboard / harp based polyphonic instrument models)
        # "t" - hold over (if next note to target is same pitch, else release on next)
        # "l" - legato tie to next regardless of pitch (use for MONO only)
        # (& utilise Yi tiestatus opcode appropriately)
        # (or send an appropriate sustain switch MIDI message, such as in Garritan)
        # "." - rest
        # "d" (real time use only) - "play for defined duration" (i.e. release on release trigger)
        # (otherwise an actual duration value will need to be specified..
        # how? - with a +ve duration value..)
        # REDUNDANT STATEMENT: "r" - release trigger on next (with no follow-up note)
        # NOT REQUIRED as == to "n" followed by "."
        # i presently have working prototypes
        # for 99% of this duration handling
        # in both a real time Csound
        # & a non real time Parseval / Python
        # based implementation
        # consolidation of these methods
        # should yield all desired outcomes
        # (Python being the more manageable & intuitive means to implement)
        # which is what i'm sketching here
        # as a real time possibility
        self.terminators = []
        # ********************************
        # pfield presets
        # most instr p-fields (other than pitch)
        # will be separately defined & used like "presets"
        # this approach will support
        # 1) unique p-field values for every associated score event
        # or alternately enable
        # 2) recall of the same tonal colour or articulation
        # as defined by p-field input
        # deemed useful in separating heavy p-field instr dependency
        # from basic rhythmic & tonal
        # compositional operations
        # suggests a similar handling
        # of any GUI representations
        # these pfield presets
        # could be written & saved & loaded
        # in csound itself via ftables
        # or databased in python also
        # to contribute to the wxGUI possibilities
        # (preferred option if achievable)
        self.pfieldsets = []
        # a target is an abstraction
        # of a "physical instrumentalist or voice"
        # the abstraction is relevant
        # to the resolution of
        # conditional durations
        # i.e. - this is like saying "first violins"
        # or "1st horn player only"
        # it has nothing to do
        # with physical instrument architectures or types
        # or uniquely numbered
        # csound instr statements
        # it could, in some cases,
        # even refer to individual monophonic strings
        # on a violin or guitar model for example...
        # polyphonic / keyboard / harp based architectures
        # can still be supported however
        # (refer to terminators above)
        self.targets = []
# ADDITIONAL NOTES:
# whilst duplication of Pattern variables
# occurs from Pattern to Pattern
# sorting & equivalence of atomic elements
# will be applicable to all defined patterns
# with custom display of sorted output
# which generally would only be of value when sorting by
# PCset
# rhythmsteps ("how long"?)
# pfield presets ("what articulation or expression"?)
# targets (same "physical player / performance identity")
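To make the pending-termination idea concrete, here is a minimal, self-contained resolver for two of the terminator codes listed above ("n" and ".") on a single target, taking the Pattern fields as plain lists. The (start, end, midinote) event format, the choice to let a rest release a held "n" note, and the one-beat final release are all illustrative assumptions, not the Parseval behaviour.

```python
def resolve_steps(pitches, octivation, terminators, step_times):
    """Walk one target through its steps, resolving "n" (hold until next)
    and "." (rest) terminators into (start, end, midinote) events."""
    events = []
    held = None  # (start_time, midinote) awaiting its release point
    for pitch, octave, term, now in zip(pitches, octivation, terminators, step_times):
        if held is not None:
            # an "n" note is released when the next step arrives
            # (here a rest also releases it - one possible reading of "n")
            start, note = held
            events.append((start, now, note))
            held = None
        if term == ".":
            continue  # rest: this step sounds nothing
        if term == "n":
            held = (now, pitch + 12 * octave)
    if held is not None:
        # nothing follows: release one beat after the final step
        start, note = held
        events.append((start, step_times[-1] + 1.0, note))
    return events

# e.g. a three-step figure - a note, a rest, a note:
resolve_steps([0, 0, 4], [5, 0, 5], ["n", ".", "n"], [0.0, 1.0, 2.0])
# -> [(0.0, 1.0, 60), (2.0, 3.0, 64)]
```

The other codes ("k", "t", "l", "d") would extend the same held-state bookkeeping with per-pitch and per-target conditions, which is exactly the state the targetinstrumentdict "stage" is meant to carry.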