The Many Faces of a Temporal Recursion
[Andrew Sorensen - 2013]
For some people, a temporal recursion is a variation on recursively timed events/tasks, for others it is a concurrency vehicle closely related to co-routines, and for others again it is a vehicle for managing on-the-fly code hot-swapping at runtime. And yet, temporal recursion, as implemented in Extempore/Impromptu, draws together these ideas to become greater than the sum of its parts. By providing explicit temporal semantics, and explicit management of control-state, temporal recursion supports the end-user development of powerful time and concurrency abstractions. Temporal recursion is a unique tool that deserves an identity of its own.
This post will attempt to provide a whirlwind guide to temporal recursions, from the basics through to more advanced use scenarios, such as the temporal recursion of continuations. The document is provided as an introduction to the principle of temporal recursion rather than as a tutorial for end users. As such the examples, while provided in valid Extempore code, are kept very simple in an effort to make them more readable, even for those with no background in Extempore/Impromptu/Scheme.
These ideas have been presented in several publications, most recently "Programming with Time: Cyber-Physical Programming with Impromptu", A. Sorensen and H. Gardner, ACM SIGPLAN Notices 45(10), 822-834.
Table of Contents
- 1 Entree
- 2 Main Course
- 2.1 Recursion vs Temporal Recursion
- 2.2 Cooperative Concurrency
- 2.3 Temporal Encapsulation
- 2.4 On-the-fly code substitution
- 2.5 Periodic, Aperiodic, Sporadic
- 2.6 Temporal Constraints
- 2.7 Temporal Deadlines
- 2.8 Temporal Exceptions
- 2.9 Nested Temporal Recursions
- 2.10 Temporal Continuations
- 2.11 Synchronous Temporal Recursion?
- 2.12 Distributed and strongly timed and concurrent and …
- 2.13 Timed Petri-Nets
- 2.14 Timeless vs Timefull concurrency
- 3 Dessert
1 Entree
As a brief prelude, my own interest in temporal recursion started with a discovery. A C++ framework that I was working on needed the ability to schedule method calls with strong temporal accuracy, so I added a real-time scheduler with the ability to schedule the execution of C++ method calls. Of course, within a few days of using this new functionality, the obvious idea struck: make the scheduling recursive. Such a simple and obvious idea, but boy was I excited when I stumbled onto it.
So temporal recursion was born in early 2005, at least for me. Of course the idea was so obvious that others must be using the same technique, right? So I began trawling the archives. At which point I began to realise that the technique was far from common; indeed I struggled to find any references at all. After a lot of searching I discovered that Roger Dannenberg had used a similar technique in his CMU MIDI Toolkit from the early 1990s, and in turn that he had borrowed the idea from Douglas Collinge's extremely interesting "Moxie" language (early 1980s). Neither Dannenberg nor Collinge used the term "temporal recursion" but the concept, albeit in a simplified form, was definitely there. James McCartney also made use of temporal recursion in his SuperCollider language from around 1996. Generally though, the use of temporal recursion, in practice and in theory, appears to be quite rare.
What struck me more, however, was that in the broader computer science literature I discovered even less material directly related to temporal recursion. There is a wealth of literature relating to co-routines and also to recursively timed events/tasks. However, although these techniques share similar properties with temporal recursion, they differ in some important respects. One suggested reason for the rarity of temporal recursion in-the-wild is that system designers prefer to offer more declarative temporal solutions. This certainly does appear to be the case, and aligns with other trends in CS, which may go some way towards explaining the lack of CS literature. Attempting to demonstrate the utility of a more imperative approach to time is a large motivation for writing this post.
Finally, a shout out! If anyone knows of any other evidence of early work specifically related to the timed scheduling of recursive functions (methods, closures etc..), or the use of the term "temporal recursion", then please let me know! I would be particularly grateful for any formal CS references.
2 Main Course
Concurrency in Extempore (and previously in Impromptu) is a multi-layered affair. Extempore supports both asynchronous cooperative (i.e. non-preemptive) concurrency and synchronous preemptive concurrency. Asynchronous concurrency comes in the form of temporal recursions, a technique that supports concurrency although not parallelism (i.e. only a single CPU core). Synchronous concurrency in Extempore comes in the form of Extempore processes, which support full parallel preemptive concurrency (i.e. multiple CPU cores as well as preemptive scheduling on a single core).
Each Extempore process is capable of running on an independent CPU core, and indeed on an independent host machine, as Extempore processes are network addressable. As a general rule Extempore processes have independently addressable memory (i.e. not shared), although there is an escape hatch to allow shared memory access for performance-intensive applications. Communication between Extempore processes is therefore primarily via message passing - i.e. the actor model.
2.1 Recursion vs Temporal Recursion
Temporal recursion is a deceptively simple design pattern based upon the idea of a timed event system that runs in a given Extempore process. Deceptively simple in so far as its simplicity belies its utility. Any system that is able to support the scheduled execution of code (function application, method call etc..) is capable of supporting a temporal recursion model - although as we will see there are degrees of what supporting might mean.
A temporal recursion is most simply defined as any block of code (function, method, etc..) that schedules itself to be called back at some precise future point in time. In theory a standard recursive function is a temporally recursive function that calls itself back immediately - i.e. without any temporal delay.
;; A standard recursive function
(define my-func
  (lambda (i)
    (println 'i: i)
    (if (< i 5)
        (my-func (+ i 1)))))

;; A temporally recursive function with 0 delay
;; (callback (now) my-func (+ i 1)) ~= (my-func (+ i 1))
;; (now) here means immediately - straight away
(define my-func
  (lambda (i)
    (println 'i: i)
    (if (< i 5)
        (callback (now) my-func (+ i 1)))))
In the preceding example (callback (now) my-func (+ i 1)) serves a similar function to (my-func (+ i 1)) - both are responsible for calling back into my-func immediately, passing an incremented value for i. However, the way in which these two recursive calls operate is substantially different. The temporal recursion formed by the recursive call (callback (now) my-func (+ i 1)) is implemented as an event distinct from the current control-state. In other words, while the call (my-func (+ i 1)) maintains the control flow, and potentially (assuming no tail optimisation) the call stack, the call (callback (now) my-func (+ i 1)) schedules my-func and then returns control flow to the real-time scheduler.
One implication of passing control flow back to the scheduler is that multiple temporal recursions (even immediate temporal recursions) can be interleaved, thus providing a simple concurrency model.
Another important distinction between a standard recursion and an immediate (no temporal delay) temporal recursion is that a temporal recursion blows away the stack, much like a standard tail optimised recursive call. Because the stack is blown away the temporal recursion must also maintain any values that are needed as arguments to the next recursive call (i.e. the arguments will not be available on the stack). The arguments must therefore be maintained in some way by the temporal recursion event.
2.2 Cooperative Concurrency
The distinction between a standard recursion and a temporal recursion becomes more obvious once the temporal recursion chooses to schedule itself into the future, i.e. (now) plus some number of ticks into the future. Because callback returns control-state to the scheduling engine, it is possible to interleave the execution of multiple temporal recursions concurrently. Consider this simple example of two interleaved temporal recursions. First a runs, then b, then a, then b, etc..
(define a
  (lambda (time)
    (println "I am running a")
    (callback (+ time 40000) a
              (+ time 40000)))) ;; argument time

(define b
  (lambda (time)
    (println "I am running b")
    (callback (+ time 40000) b
              (+ time 40000)))) ;; argument time

(let ((time (now)))
  (a time)
  (b (+ time 20000)))
time is passed as a parameter to both function a and function b, with both a and b incrementing time at the same rate. The interleaving therefore is defined by the initial offset of 20000 ticks. a and b are substantially the same, and can be easily abstracted into c.
(define c
  (lambda (time name)
    (println "I am running: " name)
    (callback (+ time 40000) c
              (+ time 40000) name))) ;; arguments time and name

(let ((time (now)))
  (c time "a")
  (c (+ time 20000) "b"))
It is worth considering what would happen if no offset was provided in the example above. The scheduling engine resolves the ambiguity that arises when two (or more) scheduled calls share an identical start-time by selecting the oldest waiting call. In other words, if scheduled calls cannot be resolved by time alone, they are resolved FIFO. This ensures that scheduled recursions sharing a start-time are resolved round-robin style. In the previous example then, if "a" and "b" shared the same start-time (i.e. no offset for "b"), they would continue to interleave successfully, and would appear to execute in parallel to the degree that the performance of the system matches/exceeds human perception. The print lines would always be ordered "a" then "b", but would appear to the end-user to print at the same time.
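To make the FIFO tie-break concrete, here is a small Python sketch - again not Extempore, and the stable heap via a sequence counter is only an assumption about one way to implement the behaviour described. Two recursions share every start-time, yet interleave round-robin because equal times resolve oldest-first:

```python
import heapq
import itertools

events, seq = [], itertools.count()
log = []

def callback(time, fn, *args):
    # a sequence counter makes the heap stable: calls sharing a
    # start-time resolve FIFO (oldest first), i.e. round-robin
    heapq.heappush(events, (time, next(seq), fn, args))

def c(time, name, stop):
    log.append(name)
    if time < stop:
        callback(time + 40000, c, time + 40000, name, stop)

callback(0, c, 0, "a", 120000)  # no offset for "b": both recursions
callback(0, c, 0, "b", 120000)  # share every start-time from here on
while events:
    _, _, fn, args = heapq.heappop(events)
    fn(*args)

print("".join(log))  # "a" always lands before "b" at each shared time
```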
2.3 Temporal Encapsulation
The previous example code spawns two temporal recursions with names "a" and "b". An important, and not immediately obvious, benefit of temporal recursion is state encapsulation. In the example above we run two concurrent temporal recursions - one with a name value of "a" and the other with a name value of "b". name and time are maintained independently by their respective temporal recursions. Of course we could run as many independent temporal recursions over function c as we might wish to start, each with its own independent state for time and name.
As well as independent state it is also possible to introduce shared state for temporal recursions. In Extempore, closures provide the vehicle for introducing shared, but still encapsulated, state for temporal recursions. Consider a variation to function c that introduces a captured variable count.
(define c
  (let ((count 0))
    (lambda (time name)
      (println "I am running: " name " count: " count)
      (set! count (+ count 1))
      (callback (+ time 40000) c
                (+ time 40000) name)))) ;; time and name

(let ((time (now)))
  (c time "a")
  (c (+ time 20000) "b"))
In this example count is shared between both temporal recursions but is still encapsulated. Most importantly, because temporal recursions are non-preemptive, access to the shared variable count is strictly ordered - temporally ordered. This makes shared temporal recursion state easy to reason about.
2.4 On-the-fly code substitution
Another benefit of a temporal recursion that may not be immediately obvious is on-the-fly hot-swapping of code. The temporal recursion process allows the programmer to redefine the behaviour of a function between scheduled calls from the scheduling engine. Any system that supports dynamic symbol binding/rebinding and on-the-fly code compilation/interpretation, is capable of modifying the behaviour of a temporal recursion in situ. This is a powerful technique that is used extensively in Extempore (and Impromptu).
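The mechanism can be sketched in Python - a hypothetical stand-in, not Extempore's implementation. If the scheduler stores the name of the function and resolves the binding only when each event fires, redefining that name between scheduled calls hot-swaps the behaviour of the live recursion:

```python
import heapq
import itertools

events, seq = [], itertools.count()
log = []

def callback(time, fn_name, *args):
    # schedule by *name*: the binding is resolved only when the event
    # fires, so redefining the name hot-swaps the live recursion
    heapq.heappush(events, (time, next(seq), fn_name, args))

def pulse(time, n):
    log.append("tick %d" % n)
    if n < 4:
        callback(time + 40000, "pulse", time + 40000, n + 1)

def run_until(t_limit):
    while events and events[0][0] <= t_limit:
        _, _, name, args = heapq.heappop(events)
        globals()[name](*args)   # late binding: look the name up now

callback(0, "pulse", 0, 0)
run_until(40000)                 # two calls of the original definition

def pulse(time, n):              # redefine between scheduled callbacks
    log.append("TOCK %d" % n)
    if n < 4:
        callback(time + 40000, "pulse", time + 40000, n + 1)

run_until(200000)                # the in-flight recursion picks it up
print(log)
```

The recursion scheduled under the old definition continues seamlessly under the new one - the essence of hot-swapping code between scheduled calls.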
2.5 Periodic, Aperiodic, Sporadic
It is worth briefly noting at this point that there is no requirement for the callback time to be periodic. By adjusting the increment to time a temporal recursion can be aperiodic or sporadic.
;; an example of aperiodic temporal recursion
;; random duration of 1000, 10000 or 100000 ticks
(define c
  (lambda (time name duration)
    (println "I am running: " name)
    (callback (+ time duration) c
              (+ time duration) name ;; time and name
              (random '(1000 10000 100000))))) ;; duration

(c (now) "a" 1000)
2.6 Temporal Constraints
Up until this point we have been focusing on the scheduled start-time of a temporal recursion without much consideration for the execution-time of the code. For the simple code examples demonstrated thus far the execution time has been so short as to be effectively instantaneous. However, as the work load, and thus the execution-time, increases, there is the risk of having a temporal recursion's scheduled start-time fall behind the actual system time. In other words time < (now). To test for this we can query the current relationship between the scheduled start-time time and the current system time (now).
(define c
  (let ((count 0))
    (lambda (time name)
      (println "I am running: " name " time lag: " (- (now) time))
      (set! count (+ count 1))
      (callback (+ time 40000) c
                (+ time 40000) name))))

(let ((time (now)))
  (c time "a")
  (c (+ time 20000) "b"))
A slight variance in the (- (now) time) relationship is effectively rounding error. What we are really looking to avoid is a situation where the difference between (now) and time continues to expand unbounded. We can easily induce this unrecoverable lag by increasing the execution time of c beyond the inter-offset (the time between each scheduled start-time) of each temporal recursion.
(define c
  (let ((count 0))
    (lambda (time name)
      (println "I am running: " name " time lag: " (- (now) time))
      (set! count (+ count 1))
      ;; waste some time
      (dotimes (i 200000) (* 1 2 3 4 5))
      (callback (+ time 40000) c
                (+ time 40000) name))))

(let ((time (now)))
  (c time "a")
  (c (+ time 20000) "b"))
In the example above the difference between (now) and time continues to expand. The only possible recovery from this situation is to schedule function c's start-time further into the future to compensate for the additional execution-time required by function c. It should be clear from this example that management of temporal constraints requires the balancing of a temporal recursion's start-time with its execution-time. This situation becomes increasingly complex as the number of interleaved temporal recursions increases.
2.7 Temporal Deadlines
Extempore provides the programmer with some assistance in this endeavour by supporting execution-time semantics along with start-time semantics. An execution deadline constraint can be added to any callback to provide an exception handling pathway for code that executes beyond its scheduled deadline. Suppose that we take a single temporal recursion that uses more than its allocated time, as below:
(define d
  (lambda (time)
    (println "time lag: " (- (now) time))
    ;; waste some time
    (dotimes (i 4000) (* 1 2 3 4 5))
    (callback (+ time 1000) d (+ time 1000))))

(d (now))
In the above example the time lag continues to increase because the dotimes loop is chewing up more time than we have allocated (1000 ticks). One answer to this problem would be to modulate the callback rate - that is, the inter-offset of the temporal recursion's start-times - to automatically adjust to changing execution-times. Something like the example below will provide a variable and dynamic inter-offset to adjust for changing execution times.
(define d
  (lambda (time)
    (println "time lag: " (- (now) time))
    ;; waste some time
    (dotimes (i 5000) (* 1 2 3 4 5))
    ;; self regulate optimal callback time
    ;; by passing new revised time rather than static time.
    (callback (+ time 1000) d
              (+ time 1000 (- (now) time)))))

(d (now))
In the example above, whatever the number of iterations we set for i, the temporal recursion will self-regulate. Now, if concurrency were all we cared about then this might be an option. However, temporal recursions are not only about providing concurrency but also about accurate real-time temporal constraint. We don't just care about running many things, we care about running many things at the right times! So the above self-regulation option is out.
2.8 Temporal Exceptions
Instead of modulating the timing accuracy of a temporal recursion to suit the performance profile of the hardware, we would prefer to be informed that our precise timing was not supported by the hardware - at least not with the current performance profile of our code.
Extempore (and Impromptu before it) supports this idea by providing an explicit timing deadline constraint for each temporal recursion. An optional argument to callback provides a maximum execution time constraint, after which an exception will be thrown to alert the programmer to the temporal recursion's inability to meet its execution deadlines.
Most importantly this deadline time is respected by the Extempore compiler/interpreter to ensure that the exception is thrown when the deadline is breached - rather than when the late function completes. This ensures that a single misbehaving temporal recursion will not adversely interfere with other ongoing temporal recursions.
(define d
  (lambda (time)
    (println "time lag: " (- (now) time))
    ;; waste some time
    (dotimes (i 500) (* 1 2 3 4 5))
    ;; added execution constraint deadline of 900
    (callback (cons (+ time 1000) 900) d (+ time 1000))))

(d (now))
The example above adds an execution deadline constraint of 900 ticks.
Ideally of course we would have the system provide these temporal constraints (i.e. the 900 ticks) based on some form of static code analysis, or dynamic runtime temporal analysis. Forcing the programmer to calculate the complex timing interrelationships between all possible temporal recursions is complex and error prone. In practice this is a non-trivial problem, made extremely difficult in livecoding scenarios where the overall behaviour of the system is completely runtime modifiable. This runtime modifiability makes both static and dynamic temporal analysis very challenging.
Overall though, it is worth mentioning that although temporal constraint issues are a very real concern, they are less problematic in practice than in theory. Don't be frightened :-). A combination of patterns and tools helps to make most common usage quite safe.
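By way of illustration only, here is a rough Python analogue of a deadline-carrying callback (all names are hypothetical, and the 44100 ticks-per-second rate is an assumption). Note an important simplification: unlike Extempore, which throws at the moment the deadline is breached, this sketch can only check after the late call returns:

```python
import heapq
import itertools
import time as clock

TICKS_PER_SEC = 44100  # assumed tick rate for this sketch
events, seq = [], itertools.count()

def callback(when, fn, *args):
    # mirror the (cons start-time deadline) convention with a tuple:
    # an optional second element is a max execution time in ticks
    start, deadline = when if isinstance(when, tuple) else (when, None)
    heapq.heappush(events, (start, next(seq), deadline, fn, args))

def run():
    while events:
        _, _, deadline, fn, args = heapq.heappop(events)
        t0 = clock.perf_counter()
        fn(*args)  # simplification: the call cannot be interrupted
        elapsed = (clock.perf_counter() - t0) * TICKS_PER_SEC
        if deadline is not None and elapsed > deadline:
            raise RuntimeError("missed deadline of %d ticks" % deadline)

def d(t):
    clock.sleep(0.05)  # ~2205 ticks of "work", well over the deadline
    callback((t + 1000, 900), d, t + 1000)

callback((0, 900), d, 0)  # deadline constraint of 900 ticks
try:
    run()
except RuntimeError as e:
    print("temporal exception:", e)
```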
2.9 Nested Temporal Recursions
As an introduction to some more complex temporal relationships let us consider the problem of nested temporal recursions. Nested temporal recursions are easy to define but can be tricky to reason about, requiring the programmer to line up all of the temporal relationships.
(define bottom (lambda (time duration r2) (println " I am the bottom -> " time) (if (> r2 1) (callback (+ time duration) bottom (+ time duration) duration (- r2 1))))) (define middle (lambda (time duration r1 r2) (println " I am the middle -> " time) (bottom time duration r2) (if (> r1 1) (callback (+ time (* r2 duration)) middle (+ time (* r2 duration)) duration (- r1 1) r2)))) (define top (lambda (time duration r1 r2) (println "I am the top -> " time) (middle time duration r1 r2) (callback (+ time (* r1 r2 duration)) top (+ time (* r1 r2 duration)) duration r1 r2))) ;; 2 middles and 3 bottoms duration 20000 ticks (top (now) 20000 2 3)
The example above contains three levels of temporal recursion - top, middle, bottom. Here is an excerpt from my console when running this example code.
.I am the top -> 912285184
..I am the middle -> 912285184
...I am the bottom -> 912285184
...I am the bottom -> 912305184
...I am the bottom -> 912325184
..I am the middle -> 912345184
...I am the bottom -> 912345184
...I am the bottom -> 912365184
...I am the bottom -> 912385184
.I am the top -> 912405184
..I am the middle -> 912405184
...I am the bottom -> 912405184
...I am the bottom -> 912425184
etc..
2.10 Temporal Continuations
Implementing nested recursions is pretty straightforward. However, the temporal accounting is a bit of a pain. Luckily Extempore also provides a handy function called sys:wait. For an insight into sys:wait, consider that callback could, and in Extempore can, schedule not only closures (functions), but also continuations. One of the many ways in which this can be useful is to make asynchronous code behave synchronously.
;; implementation of sys:wait is trivial
(define sys:wait
  (lambda (until-time)
    (call/cc (lambda (cont)
               (callback until-time cont #t)
               (*sys:toplevel-continuation* 0)
               #t))))
Note that this is all that is required to implement sys:wait on top of Extempore's existing temporal recursion infrastructure. Although simple, sys:wait is powerful. The following example resembles ChucK's 'holding back the world' synchronous style, but is semantically still implemented using the standard Extempore asynchronous callback architecture.
(define wait-test
  (lambda (time name)
    (dotimes (i 10)
      (println name '-> i (now))
      (set! time (+ time 40000))
      (sys:wait time))))

;; start fred
(wait-test (now) 'fred)
;; start concurrent john
(wait-test (now) 'john)
2.11 Synchronous Temporal Recursion?
This code looks like code using a blocking sleep, however it is completely asynchronous, and strongly timed. This is important because sys:wait will not block other temporal recursions from running, and is scheduled with the same temporal accuracy as other temporal recursions. sys:wait makes it extremely simple to write nested temporal recursions by taking care of all the accounting for us. Here is the nested example from above rewritten using sys:wait. Note that all of the accounting for time is greatly simplified because we just increment as we go.
(define bottom
  (lambda (time duration r2)
    (dotimes (i r2)
      (println " I am the bottom -> " time)
      (set! time (+ time duration))
      (sys:wait time))
    time)) ;; return incremented time

(define middle
  (lambda (time duration r1 r2)
    (dotimes (i r1)
      (println " I am the middle -> " time)
      (set! time (bottom time duration r2)))
    time)) ;; return incremented time

(define top
  (lambda (time duration r1 r2)
    (println "I am the top -> " time)
    (set! time (middle time duration r1 r2))
    (callback time top time duration r1 r2)))

;; 2 middles and 3 bottoms duration 20000 ticks
(top (now) 20000 2 3)
Of course we could also run many of these nested stacks concurrently. Just call top a few times! This is a simple example but hopefully goes some way to demonstrating what a powerful abstraction temporal continuations are. Indeed even the small amount of time accounting in the above example can be abstracted away into temporal-appends and temporal-joins - for which there is a short code example in the dessert section of this document by way of demonstration.
It is worth emphasising that much of the utility here is in the ability to build these temporal abstractions easily in user space. Consider how easy it is to add deadline checking to sys:wait.
;; addition of optional deadline is trivial
(define sys:wait
  (lambda (until-time . args)
    (call/cc (lambda (cont)
               (if (null? args)
                   (callback until-time cont #t)
                   (callback (cons until-time (car args)) cont #t))
               (*sys:toplevel-continuation* 0)
               #t))))
In summary, scheduled continuations are used in many places in Extempore to make what would otherwise be blocking calls behave cooperatively in a temporally recursive world.
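The flavour of this can be sketched with Python generators standing in for continuations - a loose analogy, and the names spawn, run, and wait_test here are hypothetical. Yielding a wake-up time parks the "continuation" in the same event queue that drives everything else:

```python
import heapq
import itertools

events, seq = [], itertools.count()

def spawn(gen, at):
    heapq.heappush(events, (at, next(seq), gen))

def run():
    # resuming a generator plays the role of invoking a scheduled
    # continuation; the yielded value is the requested wake-up time
    while events:
        _, _, gen = heapq.heappop(events)
        try:
            wake = next(gen)
            spawn(gen, wake)
        except StopIteration:
            pass  # this "temporal recursion" has finished

def wait_test(time, name):
    for i in range(3):
        print(name, "->", i, "at", time)
        time += 40000
        yield time  # sys:wait analogue: park until time, don't block

spawn(wait_test(0, "fred"), 0)  # start fred
spawn(wait_test(0, "john"), 0)  # start concurrent john
run()
```

Like the sys:wait example, the body reads as synchronous code, yet neither "fred" nor "john" ever blocks the other.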
2.12 Distributed and strongly timed and concurrent and …
Ultimately there are still situations - long running functions or blocking calls to external libraries, for example - that will require the use of multiple cores, and Extempore attempts to make it very simple to spawn temporal recursions in new processes to support heavy and distributed workloads. Consider this multiprocess map example, which spawns multiple distributed processes and then distributes a worker function to each, compiling natively at runtime on whatever architecture and operating system the process happens to be running on (ARM or x86; Windows, OSX or Linux).
;; a silly work function
(bind-func work
  (lambda (a:i64 b:i64)
    (let ((i:i64 0))
      ;; 1 billion iterations
      (dotimes (i 1000000000) (* 2 3 4 5 6)))
    (* a b)))

;; start 5 new processes
;; on 5 host machines
;; ipc:bind-func compiles 'work' on each new process (i.e. host)
(define processes
  (map (lambda (host name port)
         (ipc:new host name port)
         (ipc:bind-func name 'work)
         name) ;; return list of names to define as processes
       (list "192.168.1.10" "192.168.1.11" "192.168.1.12"
             "192.168.1.13" "192.168.1.14")
       (list "proc-a" "proc-b" "proc-c" "proc-d" "proc-e")
       (list 7097 7096 7095 7094 7093)))

;; ipc:mapcall calls 'work' on all 'processes'
;; passing required arguments (1 1) = (a b) for each call
;; ipc:mapcall will then 'block' waiting for all results
;; which it assembles into a list that is the return
;; value of ipc:mapcall.
;;
;; call 'work' on 'processes' returning a list of results
(println 'result:
         (ipc:mapcall 'work processes
                      '(1 1) '(2 2) '(3 3) '(4 4) '(5 5)))
ipc:mapcall is another example of a scheduled continuation which can block quite harmlessly in an otherwise temporally recursive environment. Neat stuff!
2.13 Timed Petri-Nets
Using callback with a stochastic, or otherwise conditional, target makes it extremely easy to build a variety of temporal graph based systems, such as timed petri nets. The example below outlines a simple timed petri net with two independent play heads.
(define node1
  (lambda (head)
    (println (string-append "I am play head " head " in node1"))
    (callback (+ (now) (* 10000 (random 1 6))) ;; random time
              (random '(node1 node2 node3))    ;; random callback
              head)))

(define node2
  (lambda (head)
    (println (string-append "I am play head " head " in node2"))
    (callback (+ (now) (* 10000 (random 1 6))) ;; random time
              (random '(node1 node2 node3))    ;; random callback
              head)))

(define node3
  (lambda (head)
    (println (string-append "I am play head " head " in node3"))
    (callback (+ (now) (* 10000 (random 1 6))) ;; random time
              (random '(node1 node2 node3))    ;; random callback
              head)))

;; start two play-heads running around
;; the 'timed' petri-net.
(let ((time (now)))
  (callback time node3 "one")
  (callback time node3 "two"))

And finally, just to keep your brains active, there is nothing to stop you from building a temporally recursive tree. I will leave you to have a think about how this one turns out.
(define fork
  (let ((idx 0))
    (lambda (time duration dir cnt)
      (set! idx (+ idx 1))
      (println dir 'depth: cnt 'index: idx 'time: time)
      (callback (+ time (* duration 2)) fork
                (+ time (* duration 2)) (* duration 2)
                "left" (+ cnt 1))
      (callback (+ time duration (* duration 2)) fork
                (+ time duration (* duration 2)) (* duration 2)
                "right" (+ cnt 1)))))

(fork (now) 1000 "start" 0)
2.14 Timeless vs Timefull concurrency
Hopefully this post has gone some way towards demonstrating how powerful and flexible temporal recursions can be. The concept of a temporal recursion is based on the simple principle that timed events can be scheduled recursively. However, a suitably powerful implementation of temporal recursion (one supporting closures and continuations, for example) can go well beyond this simple theory to support a rich temporal framework and an extended set of practical time and concurrency tooling. Most importantly, temporal recursion makes it relatively straightforward to build many of these tools in user space.
Finally, from a pedagogical perspective, temporal recursions can be a useful vehicle for reasoning in concurrent environments, as they make the often hidden temporal aspects more explicit. Unlike the current trend of pure-functional/immutable approaches to concurrency, which we might consider to be a timeless approach, temporal recursion makes mutability tractable by making the temporal concerns first class. If we consider the status quo of concurrency (locks, mutexes etc..) to sit somewhere in the middle, pure-functional sits to the left by attempting to remove time, whereas temporal recursion sits to the right by making time first class.
3 Dessert
;;
;; A final code example, demonstrating some
;; more 'sophisticated' temporal appends and joins
;; for those still reading :)
;;
;; The example assumes a valid 'inst' instrument
;; from either Impromptu or Extempore
;;
(define chorus-p (vector 60 67 63 68 72))
(define chorus-d (vector 1 1/2 1/2 1 2))
(define verse-p (vector 58 60 62 63 72 70))
(define verse-d (vector 1/3 1/3 1/3 1 2 1))

(define-macro (temporal-append . args)
  `(begin ,@(map (lambda (a)
                   (cons 'set! (cons 'beat (list a))))
                 args)))

(define-macro (temporal-join . args)
  `(let ((lst '()))
     (set! lst (cons (call/cc (lambda (k)
                                ,@(map (lambda (a)
                                         `(callback (*metro* (- beat 1/8)) ,@a k))
                                       args)
                                (*sys:toplevel-continuation* 0)))
                     lst))
     (if (< (length lst) (length ',args))
         (*sys:toplevel-continuation* 0)
         (set! beat (apply max lst)))))

;; args is now a continuation if coming from a join
;; null if coming from an append
(define player
  (lambda (beat pitches durations . args)
    (dotimes (i (vector-length pitches))
      (let ((p (vector-ref pitches i))
            (d (vector-ref durations i)))
        (play-note (*metro* beat) inst p 80 (*metro* 'dur d))
        (set! beat (+ beat d))
        (sys:wait (*metro* (- beat 1/2)))))
    (if (null? args)
        beat
        ((car args) beat))))

(define runner
  (lambda (beat)
    (temporal-append (player beat verse-p verse-d)
                     (player beat chorus-p chorus-d)
                     (temporal-join (player beat chorus-p chorus-d)
                                    (player beat '#(36) '#(7))
                                    (player beat '#(84 82 80) '#(3 3 3))
                                    (player beat verse-p verse-d))
                     (player beat verse-p verse-d)
                     (temporal-join (player beat '#(36 24 57) '#(5 14/3 1/3))
                                    (player beat '#(79 77 75) '#(3 3 3))))
    (callback (*metro* (- beat 1/8)) runner beat)))

(runner (*metro* 'get-beat 4))