~amirouche/sink-kernel

1eaa72f8cfc788a7a49b53f8825e376611e4b139 — Amirouche 7 months ago d8264e0 master
wip
1 files changed, 351 insertions(+), 356 deletions(-)

M R-1RK.md
M R-1RK.md => R-1RK.md +351 -356
@@ 307,7 307,7 @@ Kernel also has a second type of combiners, applicatives, which act on
their evaluated arguments. Applicatives are roughly equivalent to
Scheme procedures. However, an applicative is nothing more than a
wrapper to induce operand evaluation, around an underlying operative
(or, in principle, around another applicative, though that isn't
usually done); applicatives themselves are mere facilitators to
computation.
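This wrapper relationship is easy to model. Below is a minimal sketch in Python (not Kernel itself; the class names, the `combine` method, and the toy `evaluate` helper are all illustrative assumptions):

```python
# Toy model of Kernel combiners (illustrative sketch, not Kernel):
# an applicative is just a wrapper that evaluates the operands
# before handing them to its underlying combiner.

class Operative:
    """Acts directly on its unevaluated operands."""
    def __init__(self, fn):
        self.fn = fn  # fn(operands, env) -> result

    def combine(self, operands, env):
        return self.fn(operands, env)

class Applicative:
    """Wrapper inducing operand evaluation around an underlying combiner."""
    def __init__(self, underlying):
        self.underlying = underlying

    def combine(self, operands, env):
        args = [evaluate(o, env) for o in operands]
        return self.underlying.combine(args, env)

def evaluate(obj, env):
    # Symbols are modeled as strings and looked up; everything else
    # self-evaluates in this simplified sketch.
    return env[obj] if isinstance(obj, str) else obj

# A $quote-like operative: returns its first operand unevaluated.
quote_op = Operative(lambda operands, env: operands[0])
# Wrapping it yields an applicative whose operands get evaluated first.
applied = Applicative(quote_op)

env = {"x": 42}
print(quote_op.combine(["x"], env))  # the symbol itself: x
print(applied.combine(["x"], env))   # its value: 42
```

Wrapping the same underlying operative changes only whether its operands are evaluated first, which is exactly the sense in which applicatives are "mere facilitators" of computation.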



@@ 317,7 317,7 @@ implementation of Kernel, criteria for an implementation of Kernel to
qualify as supporting or excluding an optional module of the language;
and criteria for an implementation of Kernel to qualify as
comprehensive and/or robust; and it also documents the derivation of
Kernel's design from basic principles.

The remainder of Section 0 discusses Kernel design principles, and
briefly explains the status of the report itself. Section 1 provides an


@@ 325,7 325,7 @@ overview of the language, and describes conventions used for
describing the language. Section 2 describes the lex- emes used for
writing programs in the language. Section 3 explains basic semantic
elements of Kernel, notably the Kernel evaluator algorithm. Sections
4–15 describe the various modules exhibited by Kernel's ground
environment. Section 16 provides a formal syntax and semantics for
Kernel. Appendix A summarizes past, and suggests possible future,
evolution of Kernel. Appendix B discusses first-class objects in depth.


@@ 355,7 355,7 @@ From this supposition, it follows that crystalization of style can
only be fully effective if the pure style is one that can be reconciled
with practical concerns without compromise, neither to the style nor
to the practicalities. We claim that the Kernel language model is a
pure style of this kind, i.e., one that needn't be compromised.
Embracing this claim in the language design process means, directly,
that we should focus entirely on pure articulation of the style; and
indirectly, that when we find ourselves being led into compromise, we


@@ 371,11 371,11 @@ can be reconciled without compromise, it follows that any given facet
of the language design will eventually be right and never require
further adjustment (up to our choice of paradigm/pure style,
anyway). The empirical perception of software design tinkering as an
endless process is, in this context, an artifact of founding one's
software design on a programming platform that already contains
compromises that are not open for reassessment.

If it isn't clear what constitutes a compromise, then even if
compromise can be avoided, there is little chance that it will be
avoided. Therefore, as this report defines the Kernel language, it also
extensively documents the motivations for the design, from high-level


@@ 399,12 399,12 @@ effect, the Kernel supposition re purity does not depend on that
suspicion. (A formal criterion for abstractive power is proposed in
[Sh08]; for a general discussion of abstractive power, see [Sh09,
§1.1].) Kernel is not, in its current revision, a full general-purpose
language (what is often —lamentably, from Kernel's perspective— called a “full-featured”
language); nor does it particularly aspire to become such a language
in any particular time-frame, although it does seek to evolve in that
direction. Pressure to provide additional functionality promptly is a
significant vector for compromise, and so cannot be reconciled with the
Kernel design's no-compromise policy. Each possible addition to the
language must be thoroughly vetted for subtle inconsistency with the
design principles, or the design principles cannot survive.


@@ 457,7 457,7 @@ apply directly to specific tactical design decisions (as called for in
  difficult to program by accident.

  Guideline G3 was formulated specifically to protect the ideal of
  'removing weaknesses and restrictions' from devolving into mere
  amorphism. It has proven to be a particularly useful heuristic in
  practice, being explicitly invoked in the report more often (at last
  count) than all the other guidelines put together.


@@ 488,8 488,8 @@ apply directly to specific tactical design decisions (as called for in
  made to achieve it.

  This is a refinement of the no-compromise policy described above in
  §0.1.1. It really has two aspects — that elegance shouldn't be
  compromised for efficiency; and that it doesn't have to be, because
  efficiency will be a natural beneficiary of design decisions made for
  other reasons. A key example of the latter, efficiency benefiting from
  other factors, is the ability to restrict mutation of ancestral


@@ 536,7 536,7 @@ Krishnamurthi, Jim Miller, Marijn Schouten, and Mitchell Wand.

### 1.1 Semantics

This subsection gives an overview of Kernel's semantics. A detailed
informal semantics is the subject of §§3–15. For reference purposes,
§16.2 provides a formal semantics of the Kernel evaluator. The main
semantic differences from Scheme are summarized in §A.1.


@@ 613,10 613,10 @@ exception-handling. See §7.

4Stability of this sort manifests formally as strength of equational
theory. There has been some popular misconception, based on
overgeneralization of the formal result in Mitchell Wand's (broadly
titled) 1998 paper “The Theory of Fexprs is Trivial”, [Wa98], that all
calculi of fexprs necessarily have trivial equational
theories. Actually, Wand's formal result applies only to calculi
constructed in a certain way; see Appendix C.

Kernel environments are first-class objects as well. Their first-class


@@ 646,7 646,7 @@ standard quasiquotation facilities.5 Most Lisps, including Scheme, use
quasiquotation. MetaML ([Ta99]) is a prominent example of a non-Lisp
language using quasiquotation.

Kernel's model of arithmetic is designed to provide useful access to
different ways of representing numbers within a computer, while
minimizing intrusion of those representation choices into the way
arithmetic is performed by a program. Every integer is a rational,


@@ 688,7 688,7 @@ decomposition of the data it reads; see §15.1.7. Since all objects are
evaluable, there is no such thing as a syntactically invalid
program. Once a program text has been converted to an object (as by
read ), any further errors —such as an unbound variable, or a
combination whose car doesn't evaluate to a combiner— are semantic
errors, and consequently will not occur until and unless the object is
evaluated.



@@ 791,9 791,9 @@ language to be standardized without requiring them to be implemented
in situations where those aspects are irrelevant.

Optionality of modules should be reserved for that purpose: not to
make the implementor's life easier, nor because the design of some
module is tentative (either it's worth including in the report or it
isn't), but because requiring the module simply wouldn't make sense in
some of the situations where Kernel might reasonably be
implemented. Which modules ought to be optional is thus a reflection of
the range of situations toward which the language is targeted.


@@ 860,11 860,11 @@ Rationale:

Deliberately useful behavior in a no-signal-required error situation
is literally an invitation to write one of the most anti-portable
kinds of code: implementation-dependent code that isn't readily
identifiable because the nonstandard feature it uses is camouflaged
under the name of a standard feature. Of course, all behavior can be
made use of; the point here is that since it's bad programming
practice, the implementor shouldn't be promoting it.

Serious consideration was given to simply requiring all errors to be
signaled. This extreme measure was not taken only because it was


@@ 971,22 971,22 @@ Unless otherwise stated, there are four pathological cases excepted:
correct types for the combiners on the left side of the equivalence
(or the first side, if the sides are presented on separate lines). The
right/second side of the equivalence may place weaker constraints on
the subexpression types, but the equivalence isn't guaranteed under
the weaker constraints.


• The expressions themselves are assumed not to be mutated before
their structure is used in evaluation. For example, the second
equivalence above for list* doesn't cover situations where evaluation
of arg1 causes mutation to more-args.

• If the expression evaluations involve subsidiary argument
evaluations, then the equivalence only holds if either (1) the
argument evaluations don't have side-effects, or (2) the argument
evaluations are performed in the same order for both sides of the
equivalence. This is significant because, when an expression contains
nested combinations, Kernel's eager argument evaluation may prohibit
some orderings. For example, in the second equivalence above for list*
, the arguments to list* on the left may be evaluated in any order at
all, while in the nested expression on the right, arg1 will be


@@ 1257,7 1257,7 @@ follows it:
- [ ] { } | Left and right square brackets and curly braces and
  vertical bar are reserved for possible future use in the language.

- ' ` , ,@ Single-quote character, backquote character, comma, and the
  sequence comma at-sign are reserved as illegal lexemes in Kernel, to
  avoid confusion with their use in other Lisps for quasiquotation.



@@ 1327,16 1327,16 @@ combiner call.
The ability to construct an environment with multiple parents (§4.8.4)
could be used to merge the exported facilities of several separate
modules, and make them all visible to the internal environment of
another module that depends on them (similarly to Java's import).


Kernel symbol lookup uses depth-first search because (1) depth-first
search respects the encapsulation of the parent environments, not
requiring the child to know its parents' ancestry, and (2) exactly
because depth-first search respects the encapsulation of the parents,
it is the simplest algorithm for searching a tree, in accordance with
Kernel's design philosophy.
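The lookup rule just described can be sketched as follows (an illustrative Python model; the `Env` class and its `lookup` method are assumptions for the sketch, not Kernel API):

```python
class Env:
    """Simplified sketch of a Kernel-style environment: local bindings
    plus an ordered list of parents, searched depth-first."""
    def __init__(self, bindings=None, parents=()):
        self.bindings = dict(bindings or {})
        self.parents = list(parents)

    def lookup(self, symbol):
        # Local bindings shadow everything inherited.
        if symbol in self.bindings:
            return self.bindings[symbol]
        # Depth-first: fully search the first parent (whose own ancestry
        # stays hidden behind its own lookup) before trying the next.
        for parent in self.parents:
            try:
                return parent.lookup(symbol)
            except KeyError:
                pass
        raise KeyError(symbol)

grandparent = Env({"x": "from-grandparent"})
parent1 = Env({}, [grandparent])      # exhibits x via its own ancestry
parent2 = Env({"x": "from-parent2"})
child = Env({}, [parent1, parent2])   # multiple parents, as with imports

print(child.lookup("x"))  # from-grandparent: parent1 is searched fully first
```

Note that the child never inspects parent1's ancestry directly; it only asks parent1 to perform its own lookup, which is the sense in which depth-first search respects the parents' encapsulation.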

### 3.3 The evaluator



@@ 1366,12 1366,12 @@ environment to do so.

The Kernel evaluator uses the following algorithm to evaluate an
object o in an environment e. The algorithm is simplified here only in
that it doesn't mention continuations (§7). Top-level expressions,
such as those input to an interactive Kernel interpreter, are
evaluated in an initially standard environment ('initially' because,
as evaluations proceed, it may be mutated).

1. If o isn't a symbol and isn't a pair, the evaluator returns o.

2. If o is a symbol, the evaluator returns the value bound by o in e.
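Steps 1 and 2 (together with the pair-combination step that follows them in the report) can be sketched in Python; `Pair`, the string-modeled symbols, and the toy `$quote` operative are illustrative assumptions:

```python
# Sketch of the (continuation-free) Kernel evaluator algorithm.
# Symbols are modeled as strings, combinations as chains of Pair.

class Pair:
    def __init__(self, car, cdr):
        self.car, self.cdr = car, cdr

def evaluate(o, e):
    # Step 1: anything that is neither a symbol nor a pair self-evaluates.
    if not isinstance(o, (str, Pair)):
        return o
    # Step 2: a symbol evaluates to the value it is bound to in e.
    if isinstance(o, str):
        return e[o]
    # Step 3 (the step following this excerpt): for a pair, evaluate the
    # car to obtain a combiner, then combine it with the operands.
    combiner = evaluate(o.car, e)
    return combiner(o.cdr, e)

# A toy operative returning its single operand unevaluated:
def quote_op(operands, env):
    return operands.car

env = {"x": 7, "$quote": quote_op}
print(evaluate(3.14, env))  # 3.14 (step 1)
print(evaluate("x", env))   # 7    (step 2)
print(evaluate(Pair("$quote", Pair("x", ())), env))  # x (step 3)
```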



@@ 1434,7 1434,7 @@ for §3.9.)
This report specifies that certain of the object types presented here
are “encapsulated”. This means, in essence, that implementations are
not permitted to support any operation on objects of that type that
wouldn't be supported by a comprehensive implementation without
extensions.

The concept of encapsulation is rather slippery, especially when


@@ 1459,7 1459,7 @@ object, as is done when

storing it in a data structure (see §3.1 and Appendix B), does not
“involve” the referent's type, only the fact that the referent is
first-class; so mere reference is always permitted. (Note that
extension types must also satisfy the partitioning requirements of
§3.5.)


@@ 1468,7 1468,7 @@ extension types must also satisfy the partitioning requirements of
using only features covered under (1) and (2), without such
implementation causing modification of any non-error behavior, nor any
required error-signaling behavior, of any feature covered under
(1). The features that would be used to implement it needn't be
included in the actual implementation; however, even if the features
that would be used are themselves omitted, the implementation cannot
claim to exclude the module that contains them (§1.3.2).


@@ 1545,7 1545,7 @@ environment, it provides ready means for the programmer to create
variant languages that do provide direct support, in whatever way the
programmer chooses. Creation of such variant languages is quite
straightforward in Kernel (and, BTW, incurs little overhead even in a
fairly naive implementation), because of Kernel's mixture of
first-class environments and first-class operatives. With exclusively
first-class operatives, nearly all the language semantics are derived
from the ground environment; and with articulate support for


@@ 1575,7 1575,7 @@ predicates to partition the objects possible under the extension.

Rationale:

Kernel's typing policy is an exercise in balancing the two halves of
guideline G3 of §0.1.2 — that dangerous behaviors should be tolerated
but not invited. Tolerance rules



@@ 1584,7 1584,7 @@ but not invited. Tolerance rules
out traditional manifest typing, whose basic purpose is to
proactively exclude from the language behaviors that might later lead
to illegal acts. Partitioning latently typed objects by primitive type
promotes the 'not invited' half of the equation, by clarifying the
role of each object at the level of the base language, so that
ambiguities of purpose only occur if they are deliberately introduced
by the programmer.


@@ 1678,7 1678,7 @@ by read , but never misparsed.
The representations generated by write are also subject to a
constraint similar to (but, in this instance, stronger than) the rules
governing encapsulation from §3.4: the representation of an object o
by write cannot reveal any information about o that couldn't be
determined without write .

A clear distinction must be observed between the object represented by


@@ 1701,8 1701,8 @@ descriptions of the types in the modules that support them, in §§4–15.

Another important concept in Kernel is that of limiting what
information is accessible from within a Kernel program. Such limits
are imposed notably by Kernel's type encapsulation (§3.4), and the
limits are in turn used to constrain Kernel's treatment




@@ 1711,7 1711,7 @@ limits are in turn used to constrain Kernel’s treatment

Because implementations must only match the abstract behavior
described in this report, there is nothing to keep them from
maintaining additional information internally, as long as it isn't
accessible from within a program. Moreover, by the same reasoning,
this additional information can be shared with the user of the Kernel
system, as long as it remains unavailable to the program. For example,


@@ 1730,7 1730,7 @@ conceivably wish to do), it could in principle capture diagnostic
messages from the subsidiary process and extract encapsulated
information from them.  Such a practice might be admissible, under the
Kernel design guideline that 'dangerous things should be difficult to do
by accident' (G3 of §0.1.2), but admitting it without undermining the
definition of type encapsulation would be a neat trick. Since no such
facility is currently provided, for the moment this subsection merely
points out admissible uses of internal information.


@@ 1750,7 1750,7 @@ object (§3.1); other actions described in §§4–15 constitute mutations
only when explicitly specified.

Note that mutation of the object referred to by a reference does not,
in itself, constitute mutation of the referring object (unless they're
the same object, of course).  For example, suppose p is a list (1 2
3). Then (set-car! p 4) mutates object p.  However, while (set-car!
(cdr p) 5) mutates the object referred to by the cdr of p, it does not


@@ 1854,7 1854,7 @@ structure qualifies as self-referencing. For example, the following is
an acyclic, a.k.a.  finite, list exactly because the car references in
a list are external. The same objects with different internal reference
designations could constitute a cyclic data structure; but that
structure wouldn't be a list.

($define! bar (list 1)) (set-car! bar bar)
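A rough Python analogue of this example (using a hypothetical `Cons` class in place of Kernel pairs):

```python
# Analogue of ($define! bar (list 1)) followed by (set-car! bar bar):
# a one-element list whose car refers back to the cell itself.
# Illustrative sketch only.

class Cons:
    def __init__(self, car, cdr):
        self.car, self.cdr = car, cdr

NIL = None
bar = Cons(1, NIL)   # ($define! bar (list 1))
bar.car = bar        # (set-car! bar bar)

# The structure is self-referencing through its car, but the list spine
# (followed via cdr, the internal references of a list) is still finite:
# bar remains an acyclic list of length 1.
length = 0
cell = bar
while cell is not NIL:
    length += 1
    cell = cell.cdr
print(length)  # 1
```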



@@ 1951,11 1951,11 @@ argument list:

Rationale:

The R5RS specifically allows Scheme's equal? predicate to not terminate
when comparing self-referencing structures. This presents a danger
when working with self-referencing structures, since forgetfully
using equal? may cause the program to diverge.  Such pitfalls inhibit
the programmer's free use of self-referencing structures, degrading
the practical status of self-referencing structures as first-class
objects (G1a of §0.1.2).
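One standard way to make such a predicate terminate on self-referencing structures is to track pairs of nodes already under comparison; here is a sketch (illustrative Python, not the algorithm the report specifies):

```python
class Cons:
    def __init__(self, car, cdr):
        self.car, self.cdr = car, cdr

def equal(a, b, seen=None):
    """Structural equality that terminates even on self-referencing
    structures, by treating any pair of nodes already being compared
    as equal rather than recursing into them again."""
    seen = seen if seen is not None else set()
    if isinstance(a, Cons) and isinstance(b, Cons):
        key = (id(a), id(b))
        if key in seen:      # already comparing this pair: don't diverge
            return True
        seen.add(key)
        return equal(a.car, b.car, seen) and equal(a.cdr, b.cdr, seen)
    return a == b

# Two structurally identical cyclic "lists":
x = Cons(1, None); x.cdr = x
y = Cons(1, None); y.cdr = y
print(equal(x, y))               # True, and the comparison terminates
print(equal(x, Cons(2, None)))   # False
```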



@@ 1982,8 1982,8 @@ risk, there is an additional risk, and the guideline is perturbed.

The second compromise is more elemental. The basic design philosophy
(top of §0.1.2) precludes unnecessary features.10 But the introduction
of a naive-traversal feature isn't necessary. In those cases where a
tightly coded robust traversal isn't fast enough, and the compiler is
unable to prove acyclicity and substitute a naive traversal, the
programmer can hand-code naive traversals; as explicitly coded
algorithms go, naive depth-first traversals are quite simple — in


@@ 2022,7 2022,7 @@ therefore, error-signaling is limited to cases where the action of the
combiner would be undefined, rather than merely unlikely to be useful
(so as not to second-guess a deliberate decision by the
programmer). On the other hand, accidents with cyclic lists may still
occur if a combiner's behavior contradicts the programmer's
expectation; hence, surprises are avoided by ensuring that all
combiners in the report adhere strictly to the simple policy presented
above.


@@ 2036,20 2036,20 @@ Kernel must refrain from endorsing any particular order of processing
for the combiner, regardless of whether they use any particular
order. (This is in contrast to Scheme, where particular
implementations routinely “extend” the RxRS by specifying order of
processing in cases where the RxRS doesn't.)

10In effect, the design philosophy is itself a refinement of the
principle of necessity, the medieval scholastic principle that
“Entities should not be multiplied unnecessarily.” Toward the end of
the scholastic period, William of Occam used the principle of
necessity so effectively in cutting his opponents' arguments to
ribbons, that today the principle is commonly called Occam's Razor.


The behavior of the evaluator algorithm on a cyclic operand list to
an applicative is largely determined by the general policy: arguments
don't have to be evaluated in any particular order (§3.3), so a cyclic
operand list must be handled in finite time. (So too, in accordance
with the above reasoning, implementations should refrain from
endorsing any particular order of argument evaluation.) Details of the


@@ 2102,7 2102,7 @@ properly tail-recursive implementation returns to that continuation
directly.

Proper tail recursion was one of the central ideas in Steele and
Sussman's original version of Scheme. Their first Scheme interpreter
implemented both functions and actors.  Control flow was expressed
using actors, which differed from functions in that they passed



@@ 2134,8 2134,8 @@ modules. §§5–6 describe the associated library features.
Rationale:

The rough criteria for a module to be considered core are that (1)
nontrivial Kernel programming can't be done without it, or (2) the
Kernel evaluator algorithm can't be understood without it. Optional
modules Pair mutation and Environment mutation were judged to fall
within the close penumbrae of their respective core types. The two
most notable omissions from the core are modules Continuations and


be verified especially quickly in any particular implementation. In
language design terms, Scheme introduces a third equivalence predicate
for the express purpose of promoting implementation-dependent
intrusion of concrete performance issues on the abstract semantics
of the language — which directly violates Kernel's principles on
simplicity and generality as well as its guideline on efficiency (G5 of
§0.1.2).



@@ 2221,12 2221,12 @@ but Environment mutation does not.
For cross-implementation compatibility, the behavior of eq? is defined
in terms of a comprehensive implementation of Kernel. For example, two
pairs returned by different calls to cons are not eq?, even if they
have the same car and cdr and the implementation doesn't support
pair mutation; and two empty environments returned by different calls
to make-environment are not eq?, even if the implementation doesn't
support environment mutation. The latter case shows how the
implementation-independence can impact implementations even if they
don't support eq?, since the behavior of required predicate equal? on
environments is tied to that of eq? (which is, in turn, why module
Environment mutation does not require this module).
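The distinction being drawn has a loose analogue in Python's identity test (illustrative only; eq? is defined by Kernel semantics, not by any host language):

```python
# Loose Python analogue: two pairs built by separate constructor calls
# are distinct objects even when their contents are identical — just as
# two calls to cons yield objects that are not eq?, regardless of
# whether the implementation happens to support pair mutation.

p1 = [1, 2]      # stand-in for one (cons 1 2)
p2 = [1, 2]      # a second, separate construction

print(p1 == p2)  # True: same contents (equal?-like comparison)
print(p1 is p2)  # False: distinct identities (eq?-like comparison)
```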



@@ 2300,7 2300,7 @@ description of the type.

3 If the two objects (object1 and object2 ) are non-interchangeable
in any way that could affect the behavior of a Kernel program that (a)
performs no mutation and (b) doesn't use eq? (neither directly nor
indirectly), then equal? must return false. For example:

– If the two objects are observably not of the same type, equal? must


@@ 2314,14 2314,14 @@ false.
false.

– If the objects are both numbers, and numerically equal, but have
different inexactness bounds (e.g., one is exact and the other isn't;
§12.2), equal?  must return false.

4 If equal? is not required to return false by the preceding rule, and
this fact can be determined with certainty by a (correct) Kernel
program that (a) is independent of the objects (they don't refer to it
and it only refers to them as parameters), (b) examines the objects
only passively (doesn't use them or parts of them as combiners in
evaluation), (c) performs no mutation, and (d) always terminates
(provided the quantity of actual data within the runtime system is
finite), then equal? must return true. For example,


@@ 2372,15 2372,15 @@ in §6.6.1.
Rationale:

The Kernel predicate equal?, unlike its Scheme counterpart, has to
terminate for all possible arguments (since it isn't given
dispensation to do otherwise). The set of cases in


which equal? is required, by Rule 3 above, to return false is
formally undecidable; that doesn't interfere with termination of
equal?, but does guarantee that the terminating predicate returns
false in some cases where it isn't required to. (Terminating predicate
eq?  must similarly return false in some unrequired cases; see §4.10.)
The set of cases in which equal? is required to return true, by Rule 4
above, might appear at first glance to be undecidable but, in practice,


@@ 2388,7 2388,7 @@ it is unproblematically decidable. Because Rule 4 stipulates that the
determining program can only examine the objects passively, the
determining program cannot get bogged down in comparing the formally
undecidable active behavior of algorithms; degree of encapsulation
doesn't actually matter to this point, as, for example, if the body of
a compound combiner were made publicly visible so that it could be
used in the determination, different algorithms that do the same thing
would be un-equal?  (hence un-eq? ) exactly because the combiners


@@ 2433,7 2433,7 @@ Rationale:
Some combiners are called for their side effects, not their results. In
the C family of languages, functions called for effect have return type
void . The later Scheme reports describe the results of for-effect
procedures as 'unspecified', which is a politically necessary hedge
because different Scheme implementations already in place follow a
variety of



@@ 2445,7 2445,7 @@ procedures to return useful information, which creates a temptation
for programmers to write anti-portable code by using the
result. Kernel avoids this regrettable turn of events by explicitly
requiring the result of each for-effect combiner to be inert. Since the
inert type is encapsulated, its one instance doesn't carry any usable
information beyond its identity and type, which are isomorphic. (But
see §3.7.)



@@ 2482,14 2482,14 @@ On the exclusion of non-boolean results from conditional tests, see
the rationale under

In R5RS Scheme, the ⟨alternative⟩ operand to if is optional; and if it
is omitted, and ⟨test⟩ evaluates to false, the result is 'unspecified'
— which would mean, in Kernel, that the result would be inert. For
consistency with the design purpose of `#inert` — which is to convey no
information— two-operand $if ought to return `#inert` regardless of
whether ⟨consequent⟩ is evaluated; but at that point, it becomes
evident that the two- and three-operand operations are really
separate, and by rights ought not to be lumped into a single operative
(which lumping doesn't square well with the uniformity guideline, G1
of §0.1.2, anyway); instead, if both operations are supported they
should be given different names. The two-operand form, though, is just
a specialized shorthand; so both clarity (thus accident-avoidance, G3


@@ 2557,7 2557,7 @@ respectively object1 and object2 .

Note that the general laws governing mutation (§3.8: constructed
objects are mutable unless otherwise stated) and the general laws
governing eq? (§4.2.1: independently mutable objects aren't eq?)
conspire to guarantee that the objects returned by two different calls
to cons are not eq? .



@@ 2633,7 2633,7 @@ algorithms, however, are intended by the programmer to be immutable,
and therefore, when an object is primarily meant to represent an
algorithm, mutating it is a dangerous activity that ought to be
difficult to do by accident (G3 of §0.1.2). The notion of the
'evaluation structure' of an object is meant to correspond to the
algorithm that the object represents. Combiners that are used
particularly to construct representations of algorithms acquire
immutable copies of the given evaluation structures: $vau (§4.10.3)


@@ 2644,7 2644,7 @@ an argument for mutation, as to set-car! and set-cdr!,
§4.7.1). Applicative copy-es-immutable empowers the programmer to
duplicate these built-in Kernel facilities (G1b of §0.1.2).

Symbols aren't copied by the applicative because, although they
clearly play a direct role in specifying algorithms, they are
immutable so there is never any need to make immutable
copies. Alternatively, one could claim that they are copied, but


@@ 2803,7 2803,7 @@ objects returned by two different calls to make-environment are not eq?
Rationale:

Symbol lookup in an environment is by depth-first search of the
environment's improper ancestors (§3.2). If there is no local
binding for the symbol, the parents are searched; and if at least one
of the parents exhibits a binding for the symbol, the binding is used
whose exhibiting parent occurs first on the list of parents. Because


@@ 2825,12 2825,12 @@ mutation module,

because environments are eq? iff they are equal? .

It isn't clear to what extent one can do serious Kernel programming
without mutating environments; but separating the mutators into an
optional module allows language im- plementors to explore this
question within the bounds of Kernel specified by this report.  In the
absence of environment mutators as such, the programmer would
presumably fall back on Kernel’s rich vocabulary of environment
presumably fall back on Kernel's rich vocabulary of environment
constructors (notably the $let family, as §5.10.1), which the report
does not class as mutators, although environment initializa- tion is
routinely described in terms of adding bindings. (See especially the


@@ 2909,9 2909,9 @@ attempt was made in Kernel to imitate this context-sensitivity, as it
was considered philosophically incompatible with making the $define!
combiner first-class.

Formal parameter trees were first developed for Kernel's $vau operative
(§4.10.3), as a generalization of the formal parameter lists of
Scheme's lambda operative. Formal parameter trees are permitted
uniformly in every situation (G1 of §0.1.2) where a definiend is given,
i.e., where binding is specified.11 By empowering versatile interaction
between the separately versatile devices of pair-based data structures


@@ 2952,7 2952,7 @@ return”. However, this Scheme scenario has another arbitrary
restriction built into it, because Scheme requires the argument tree
of c to be a list. Since Kernel eliminates this restriction (rationale
in §4.10.5), one must then ask what would happen if c were
given an argument tree that isn't a list at all — and in that case,
one is back to passing just one value, an argument tree. Since one is
passing a single value to c either way, the question of whether to
allow multiple-value returns is effectively reduced to whether the


@@ 2971,7 2971,7 @@ value, a data structure containing however many values are to be
returned; but this argument has often failed to convince because, in
most languages, it is syntactically clumsy to return a data structure
and then decompose it to get at the multiple values within. By the use
of Kernel's generalized formal parameter trees, especially in
conjunction with operatives of the $let family (§5.10.1), one can bind
variables directly to the parts of a structure as it is returned,
rather than returning it first and then decomposing it in a separate


@@ 3079,7 3079,7 @@ The stipulation that combiners are equal? iff eq? avoids a loophole in
the general rules for predicate equal?. The general rules do not
require the predicate to distinguish objects of the same type unless
they can be otherwise observably distinguished by a program that
doesn't perform any mutation (Rule 3 of §4.3.1); but the only way to
distinguish between two operatives is to call them, so under the
general rules, if each of several operatives would immediately cause
mutation when called, predicate equal?  would be permitted to equate


@@ 3121,7 3121,7 @@ signaled.
A vau expression evaluates to an operative; an operative created in
this way is said to be compound. The environment in which the vau
expression was evaluated is remembered as part of the compound
operative, called the compound operative's




@@ 3161,7 3161,7 @@ Without the ability to ignore the dynamic environment, every compound
combiner application would create a local environment containing a
reference to the dynamic environment of the
combination. Consequently, the dynamic environment of a tail call
wouldn't become garbage (in a straightforward implementation) until
the call returned.  Proper tail recursion would thus be undermined.

The central constructor of compound combiners for Kernel was named


@@ 3172,7 3172,7 @@ and also in part because, in comparison to other classical Greek
letters, it is relatively unencumbered by competing uses. Oddly, it
was not observed until long after the Kernel nomenclature had
stabilized that, since the $ prefix was originally a stylized s as in
special form, the entire name $vau of Kernel's “constructor of special
forms” is itself a roundabout acronym for special form.

When the $vau operative constructs a compound operative, it stores in


@@ 3220,7 3220,7 @@ compound.

Rationale:

It's common practice to limit access to privileged information by
exporting combiners from a local environment (§6.8.2). The exported
combiners then have exclusive access to the information, because there
is no way for anyone else to determine their static environment. This


@@ 3231,81 3231,77 @@ encapsulation of operative protects the called from the caller, while
encapsulation of environment protects the caller from the called.)

For an example of the use of static environments to hide local state,
see the rationale in §6.8.2.

[12] MIT Scheme provides a primitive procedure procedure-environment
for extracting the static environment of a compound procedure.

#### 4.10.4 `wrap`

```scheme
(wrap combiner)
```

The `wrap` applicative returns an applicative whose underlying
combiner is combiner.

> Rationale:
>
> As the primitive constructor for type applicative, wrap has the
> virtue of orthogonality with the primitive constructor for type
> operative ($vau, §4.10.3); whereas the more commonly used library
> constructor $lambda (§5.3.2) mixes concerns from both types.

#### 4.10.5 `unwrap`

```scheme
(unwrap applicative)
```

If applicative is not an applicative, an error is signaled. Otherwise,
the `unwrap` applicative returns the underlying combiner of
applicative.

> Rationale:
>
> It is almost possible to simulate the behavior of type applicative
> using only operatives, thus bypassing `wrap` and `unwrap`
> altogether. Given an operative O, one would simulate (wrap O) by
> constructing a new operative O' that requires a list of operands,
> evaluates the operands in its dynamic environment, and passes them
> on to O.
>
> The flaw in the simulation arises when one then attempts to simulate
> (unwrap O').  One could construct an operative O'' that requires a
> list of operands, quotes them all, and passes them on to O'. When O'
> evaluates its operands, their evaluation removes the quotes, and the
> operands passed from O' to O are the same ones that were originally
> passed in to O''.
>
> If O requires a list of operands, then O'' has the same behavior as
> O, modulo eq?-ness of the pairs in the operand list. However,
> operatives do not necessarily require a list of operands. An
> applicative combination must have a list of operands, but that is
> only because the operands must be evaluated singly to produce
> arguments; there is no reason it should be inherent to the
> underlying operative, and Kernel maintains orthogonality between
> these two issues — evaluator handling of an applicative combination,
> and operative handling of its operand tree. The difference between
> the two is evident in the following example using the apply
> applicative (whose principal advantage is, as noted in §5.5.1, that
> it overrides the usual rule for evaluating arguments).
>
> ```scheme
> (apply ($lambda x x) 2)
> ```
>
> In Kernel, this expression evaluates to the value 2. Scheme
> disallows the expression (using lambda instead of $lambda, of
> course); but to do so it must artificially constrain either `apply`
> or the procedure type itself, neither of which has any inherent
> interest in the form of the second argument to apply, diverting
> responsibility for the constraint away from the rule for applicative
> combination evaluation, where the limitation really is inherent.
> (Why would `($lambda x x)`, internally, care about the form of the
> value bound to `x`?)
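This orthogonality between evaluator handling and operand-tree handling is also what lets `apply` itself be a library feature; a sketch along the lines of the report's own derivation in §5.5.1 (details there may differ):

```scheme
;; Evaluate the underlying operative of appv on arg, bypassing the
;; usual argument-evaluation rule; the environment defaults to a
;; fresh empty one.
($define! apply
  ($lambda (appv arg . opt)
    (eval (cons (unwrap appv) arg)
          ($if (null? opt)
               (make-environment)
               (car opt)))))
```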

## 5 Core library features (I)



@@ 3314,24 3310,21 @@ core modules.  They are presented in an order that allows each to be
derived from those that precede it. The resulting order does not well
respect the grouping of features into modules by type. Once these
features have been derived, the remaining majority of core library
features can and will be ordered by module in §6. On the division of
the language core into sections, see the rationale discussion at the
beginning of §4.

### 5.1 Control

#### 5.1.1 `$sequence`

```scheme
($sequence . <objects>)
```

The `$sequence` operative evaluates the elements of the list <objects>
in the dynamic environment, one at a time from left to right. If
<objects> is a cyclic list, element evaluation continues indefinitely,
with elements in the cycle being evaluated repeatedly. If <objects> is


@@ 3352,7 3345,7 @@ operand, as opposed to, e.g., prog2 which also evaluates the operands
left-to-right but then returns the result from the 2nd operand. The
mnemonic value of progn is thus largely dependent on its belonging to
a set of similar names; and Kernel, which prefers to minimize its set
of primitives, doesn't include any of the other operatives in the
set. The name sequence was used in the first edition of the Wizard Book
([AbSu85]; but it was changed to begin in the second edition,
[AbSu96], for compatibility with standard Scheme).


@@ 3608,7 3601,7 @@ head (cons head (apply list* tail)))))
Rationale:

Like many library derivations in the report, this one (written either
way) isn't robust, because it fails to signal a type error; in this
case, if the argument list is cyclic the compound applicative will
loop through it indefinitely (until the implementation runs out of
memory, or, perhaps, until somebody decides the program has hung and


@@ 3618,14 3611,16 @@ kills it).

#### 5.3.1 `$vau`

```scheme
($vau <formals> <eformal> . <objects>)
```

This operative generalizes primitive operative $vau, §4.10.3, so that
the constructed compound operative will evaluate a sequence of
expressions in its local environment, rather than just one.

<formals> and <eformal> should be as for primitive operative $vau. If
<objects> has length exactly one, the behavior is identical to that of
primitive $vau. Otherwise, the expression

($vau <x> <y> . <z>)


@@ 3675,7 3670,7 @@ env))))

#### 5.3.2 `$lambda`

($lambda <formals> . <objects>)

§4.9.1.



@@ 3686,9 3681,9 @@ is equivalent to
Rationale:

```
($lambda <formals> . <objects>)

(wrap ($vau <formals> #ignore . <objects>))
```
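Read concretely, the equivalence means a hypothetical square applicative can be written either way; the numeric applicative * (§12) is assumed:

```scheme
($define! square ($lambda (x) (* x x)))
;; by the expansion above, the same as:
($define! square (wrap ($vau (x) #ignore (* x x))))
(square 5)   ; => 25
```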

<formals> should be a formal parameter tree as described for operative


@@ 3768,9 3763,9 @@ only previously defined features.

(caar pair) · · · (cddddr pair)

These applicatives are compositions of car and cdr, with the a's and
d's in the same order as they would appear if all the individual car's
and cdr's were written out in prefix order. Arbitrary compositions up
to four deep are provided. There are twenty-eight of these
applicatives in all.



@@ 3876,7 3871,7 @@ well-behavedness of apply, the principle advantage of apply is that it
provides a convenient way to override the usual rule for evaluating
arguments (§3.3) with an arbitrary alternative computation.  There is
no similar advantage to an analogous operate applicative for use with
operatives, because the operands of an operative aren't evaluated
anyway, so there is no usual rule to override. Looking at it from
another angle, if there were an operate applicative, its litmus test
would be equivalence of expressions


@@ 3901,7 3896,7 @@ The general equivalence is true for all combiners c, and the natural
behavior for operate would make the litmus equivalence true for all
combiners as well. So nothing about the behavior of operate is specific
to operatives, and it really ought to be called combine ; but at that
point, why bother with it at all? It isn't the analog for operatives
of apply for applicatives, and its implementation is so simple that
using it would only serve to slightly obscure what is actually being
done.


@@ 3957,7 3952,7 @@ omits it. The same effect can be achieved at least as clearly, and more
uniformly, by specifying #t as the <test> on the last clause.

In the no-clause base case (which comes into play whenever <clauses>
is acyclic and all the <test>'s evaluate to false), the Kernel analog
of traditional Lisp (particularly, Scheme) behavior is to return
`#inert`; while the most straightforward alternative would be to signal
an error. When $cond is called for effect, the programmer would prefer


@@ 3971,7 3966,7 @@ However, if $cond were split in two for effect/value, uniformity of
design would suggest splitting every standard operative that performs
a `<body>`, starting with $vau , and notably including the entire $let
family, bloating the language vocabulary. Returning `#inert` for value
is dangerous, and relatively easy to do if it's the base case for
$cond (hence, disfavored by G3 of §0.1.2); but that base-case behavior
does have the virtue of admitting both for-effect and for-value
use. So, until some major new insight recommends itself, we prefer to


@@ 4065,21 4060,21 @@ programmer can be insulated from the danger of cycles by
intermediate-level tools, such as map (§5.9.1), that handle cyclic
lists robustly (as in the derivation of combiner?,
§6.2.1). get-list-metrics is a lower-level tool for those
contingencies that the intermediate-level tools don't cover. It
provides a complete characterization of the shape of the list, in a
format suitable for detailed analysis (as in the derivation of length,
§6.3.1, or, most elaborately, in the derivation of map).

Actually, most of the (by intent, very few) contingencies that escape
the intermediate- level tools will only need to bound their list
traversal by the number of pairs in the list, and won't even care
whether the list is cyclic. Some will care about the prefix/cycle
breakdown, though; and in the acyclic case, it's trivial for
get-list-metrics to determine whether or not the terminator is nil,
whereas fetching the same information later would require more effort.

In theory, get-list-metrics doesn't provide any capability that the
programmer wouldn't be able to reproduce without it (i.e., it's a
library feature); but in practice, the programmer caught without it
would find it troublesome to reimplement from scratch (easy to get
wrong, thus contrary to the spirit of G3 of §0.1.2).
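A sketch of the metrics themselves, assuming the (p n a c) result format of §5.7.1 — number of pairs, nil count, acyclic prefix length, cycle length:

```scheme
(get-list-metrics (list 1 2 3))   ; => (3 1 3 0), acyclic and nil-terminated
(get-list-metrics (cons 1 2))     ; => (1 0 1 0), improper list
(get-list-metrics 4)              ; => (0 0 0 0), not a pair at all
```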


@@ 4117,7 4112,7 @@ using previously defined features, and number primitives (§12).

Rationale:

For expository purposes, the above derivation has two merits: it's
(relatively) simple, and it works. Unfortunately, it also takes
quadratic time in the number of pairs in the list, because it compares
each pair to all of its predecessors before moving on to the next


@@ 4134,7 4129,7 @@ first entered.

Other kinds of cycle-handling traversal may achieve asymptotic
speed-up by temporarily marking visited objects, but list traversal
doesn't need marking to achieve linear time because the structure
being traversed is innately linear. The non-marking algorithm outlined
above might even be faster than marking, since cache considerations
can make writing memory much more expensive than reading it.


@@ 4190,14 4185,14 @@ returned by encycle! is inert.  Cf. get-list-metrics, §5.7.1.
Rationale:

This tool complements get-list-metrics . The general idiom for using
them is to measure the input with get-list-metrics, perform one's
operations (whatever they are) robustly by counting pairs rather than
expecting a null terminator, assemble an acyclic list of output
elements, and encycle the output list just before returning it. If it
were really that simple, of course, the client programmer would be
using map, §5.9.1, instead of fussing with list metrics. encycle!
isn't provided to the programmer to manage an anticipated situation;
it's provided in the hope that, by naturally complementing
get-list-metrics, it will help to manage unanticipated situations.
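The idiom might be sketched as a hypothetical copy-list that preserves the cycle structure of its input (error handling omitted):

```scheme
($define! copy-list
  ($lambda (ls)
    ($let (((p n a c) (get-list-metrics ls)))
      ;; copy exactly k pairs, counting rather than looking for nil
      ($define! copy-k
        ($lambda (k ls)
          ($if (>? k 0)
               (cons (car ls) (copy-k (- k 1) (cdr ls)))
               ())))
      ($let ((result (copy-k p ls)))
        (encycle! result a c)   ; reimpose the input's cycle, if any
        result))))
```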

Derivation


@@ 4238,7 4233,7 @@ order of their arguments in the original lists.

If lists is a cyclic list, each argument list to which applicative is
applied is structurally isomorphic to lists. If any of the elements of
lists is a cyclic list, they all must be, or they wouldn't all
have the same length. Let a1 . . . an be their acyclic prefix lengths,
and c1 . . . cn be their cycle lengths. The acyclic prefix length a of
the resultant list will be the maximum of the ak, while the cycle


@@ 4282,7 4277,7 @@ readily achieved by simply wrapping the operative, thus:
The treatment of dynamic environments differs from that of apply . Both
behaviors are based on the governing principle of accident avoidance
(G3 of §0.1.2); in differentiating the two, the decisive factor was
whether there would be a danger of contradicting the programmer's
expectation. map is conceptually a way of constructing a series of
ordinary applicative combinations, and ordinary applicative
combinations can access their dynamic environments. apply, on the


@@ 4518,7 4513,7 @@ map has been implemented, everything else is “easy”.

#### 5.10.1 $let

($let <bindings> . <objects>)




@@ 4530,29 4525,29 @@ more than one of the <formals>.

The expression

($let ((<form1> <exp1>) ... (<formn> <expn>)) . <objects>)

is equivalent to

(($lambda (<form1> ... <formn>) . <objects>) <exp1> ... <expn>)

Thus, the <expk> are first evaluated in the dynamic environment, in any
order; then a child environment e of the dynamic environment is
created, with the <formk> matched in e to the results of the
evaluations of the <expk>; and finally the subexpressions of
<objects> are evaluated in e from left to right, with the last (if
any) evaluated as a tail context, or if <objects> is empty the result
is inert.
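A small instance of the equivalence, using a formal parameter tree as one of the definiends:

```scheme
($let ((x 2)
       ((a . b) (cons 3 4)))   ; pair-shaped definiend
  (list x a b))                ; => (2 3 4)
```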

Rationale:

Because the <expk> are evaluated in the dynamic environment of the
call to $let but matched in a child of that environment, the <expk>
can't see each other's bindings; and if any <expk> is a $vau or
$lambda expression, the resulting combiner can't be recursive because
it can't see its own binding.

These constraints may sometimes be intended, or at least
unproblematic. For occasions when they are not wanted, three


@@ 4570,7 4565,7 @@ $letrec), the structure of <bindings> is mapped onto a single formal
parameter tree, so the acyclicity constraint on <bindings> follows
from the acyclicity constraint on formal parameter trees (§4.9.1). In
the ordered variants ($let* and $letrec*), the ordered sequence
of bindings is followed by expression sequence <objects>, and their
chronological concatenation is undefined when the former sequence is
cyclic by the same reasoning that forbids cyclic non-final arguments to
applicative append (§6.3.3).


@@ 4645,7 4640,7 @@ of its argu-

Rationale:

Because and? doesn't process its arguments in any particular order, it
must terminate in finite time even if arguments is cyclic (per the rationale


@@ 4711,16 4706,16 @@ previously defined features.

#### 6.1.4 `$and?`

($and? . <objects>)

The $and? operative performs a “short-circuit and” of its operands: It
evaluates them from left to right, until either an operand evaluates
to false, or the end of the list is reached. If the end of the list is
reached (which is immediate if <objects> is nil), the operative
returns true. If an operand evaluates to false, no further operand
evaluations are performed, and the operative returns false. If
<objects> is acyclic, and the last operand is evaluated, it is
evaluated as a tail context (§3.10). If <objects> is cyclic, an
unbounded number of operand evaluations may be performed.

If any of the operands is evaluated to a non-boolean value, it is an


@@ 4731,7 4726,7 @@ Cf. the and? applicative, §6.1.2.
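For instance, short-circuiting means a later operand need not even be evaluable; the predicates =? and <? are from §12:

```scheme
($and?)                       ; => #t, empty operand list
($and? (=? 1 1) (<? 1 2))     ; => #t
($and? #f no-such-binding)    ; => #f, the unbound symbol is never evaluated
```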

Rationale:

Because the behavior of $and? is still definable when <objects> is
cyclic, and the operands are processed in a fixed order, and the
processing of any operand may have side-effects, $and? continues
processing operands in a cyclic list indefinitely (per the rationale


@@ 4750,7 4745,7 @@ being the meaning of the predicate suffix on the name “$and?” of the
operative. However, we do not require implementations to signal a
dynamic type error on the result of a tail context. If error signaling
were sufficiently important to the design to require the error to be
signaled (which it isn't, here), we would lift the tail-context
requirement; although a dynamic boolean type check certainly can be
imposed on a tail context without compromising its tail-context
status, doing so is a burden that we would not lightly impose on


@@ 4778,16 4773,16 @@ Derivation

#### 6.1.5 `$or?`

($or? . <objects>)

The $or? operative performs a “short-circuit or” of its operands: It
evaluates them from left to right, until either an operand evaluates
to true, or the end of the operand list is reached. If the end of the
operand list is reached (which is immediate if <objects> is nil), the
operative returns false. If an operand evaluates to true, no further
operand evaluations are performed, and the operative returns true. If
<objects> is acyclic, and the last operand is evaluated, it is
evaluated as a tail context (§3.10). If <objects> is cyclic, an
unbounded number of operand evaluations may be performed.  If any of
the operands is evaluated to a non-boolean value, it is an error; and
if


@@ 5019,13 5014,13 @@ Rationale:

Helper applicative aux2 really ought to signal an error when its first
argument is a cyclic list, because that could happen by accident; so,
even though we don't usually do non-required error signaling in our
expository library derivations, we would be tempted to do so here, if
not that it would depend on details of error handling that haven't
been finalized yet in this revision of the report.

#### 6.3.4 `list-neighbors`


@@ 5047,7 5042,7 @@ For example,

Rationale:

This applicative is one of Kernel's intermediate-level tools for
handling potentially cyclic lists (as opposed to the lower-level tools
that use explicit list metrics). It addresses list handling that
involves considering consecutive elements two at a time (whereas most


@@ 5115,7 5110,7 @@ no such elements, the result is acyclic.

Rationale:

Because filter doesn't process the list elements in any particular
order, it must terminate in finite time even if list is cyclic (per the
rationale discussion in §3.9; the possibility that applicative could
have side-effects would only matter to the policy if the elements were


@@ 5124,7 5119,7 @@ to be processed in order).
The two paradigmatic examples of a standard applicative that takes an
applicative argument are apply and map (§§5.5.1, 5.9.1). apply allows
the caller to specify an environment to be used when calling its
applicative argument; and this makes sense for apply, because apply's
purpose is to facilitate general calls to its argument; but the
purpose of filter is list processing, so such a general interface
would be tangential. map calls its applicative argument using the


@@ 5197,8 5192,8 @@ alist)))

Rationale:

This isn't a particularly efficient way to implement assoc, but it is
interesting in that it emphasizes that assoc doesn't depend on
processing the list in any particular order. Because of this
order-independence, it must handle cyclic lists in finite time (per the
rationale discussion in §3.9).


@@ 5224,7 5219,7 @@ by assoc on failure, one can write
<consequent> <alternative>))

and have <consequent> executed if key is found in alist, <alternative>
if it isn't found. Accordingly, assoc in Scheme returns #f on failure
([KeClRe98]), while in more traditional Lisps (as [Pi83]) it returns
nil. However, Kernel forbids non-boolean values for conditional tests
(§3.5 rationale). The above expression in Kernel (replacing let/if


@@ 5251,8 5246,8 @@ therefore only be admitted after a very compelling case had been made
for it; and it seems unlikely that such a case could be made, since
there are already two significant facilities in the language that
address much the same purpose: the exception handling supported by
Kernel's continuation type (§7), and the data structure parsing
supported by Kernel's generalized matching algorithm (§4.9.1).

Generalized matching bears on the general problem of return codes by
effectively supporting simultaneous return of multiple values. If a


@@ 5261,9 5256,9 @@ caller can immediately decompose the returned structure into its
constituent parts. In the case of assoc , one wants to return either
one or two values: always, a boolean indicating whether object was
found in alist ; and on success, also the alist element that was
found. It's undesirable to return a second value on failure, since
that would make it easier to accidentally use the second value as if it
had been found. Let's call the combiner that behaves this way
assoc*. One could define it by:

($define! assoc*


@@ 5294,7 5289,7 @@ Here, formal parameter tree (found . result) is locally matched to the
data structure returned by assoc*, binding found to #f or #t, and
result to () or the matching element of alist . An optimizing Kernel
compiler may recognize that the returned data structure cannot
actually be accessed, and therefore needn't actually be constructed as
long as its two parts are properly exported from assoc* .
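The decomposition described here would look roughly as follows; key, alist, and the two handlers are hypothetical stand-ins:

```scheme
($let (((found . result) (assoc* key alist)))
  ($if found
       (use-element result)       ; result is the matching element of alist
       (recover-from-failure)))   ; result is () and should not be used
```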

The entire technique works because of the special status that the


and the redundancy that caused it, by providing the simpler assoc
rather than the needlessly elaborate assoc* . The use of nil to
represent nothing in a return value is thus seen to be neither
arbitrary nor (in itself) ad hoc, but rather a natural consequence of
the special status afforded to types pair and null by Kernel's matching
algorithm (§4.9.1).



Rationale:
Because reduce uses an unspecified associative order of binary
operations, the guidelines in §3.9 require it to reduce a cyclic
list in finite time if the reduction behavior can be naturally defined
in the cyclic case. However, an arbitrary binary operation can't be
automatically generalized to reduce an infinite sequence of elements in
finite time. So, either the client must provide additional information
on how to handle cycles, or cyclic-list reduction must be an error.
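The dilemma can be sketched outside Kernel. In this hypothetical Python analogue (names are illustrative; the report's own reduce takes extra cycle-handling information not modeled here), the cycle is detected in finite time and reduction is refused unless the caller has said how to handle it:

```python
class Pair:
    """Minimal mutable cons cell, so lists can actually be cyclic."""
    def __init__(self, car, cdr=None):
        self.car, self.cdr = car, cdr

def is_cyclic(head):
    """Floyd's tortoise-and-hare cycle detection, finite time."""
    slow = fast = head
    while fast is not None and fast.cdr is not None:
        slow, fast = slow.cdr, fast.cdr.cdr
        if slow is fast:
            return True
    return False

def reduce_list(op, head, init):
    """Reduce an acyclic list; refuse a cyclic one, since an arbitrary
    binary operation cannot be generalized to an infinite sequence of
    elements in finite time."""
    if is_cyclic(head):
        raise ValueError("cyclic list: caller must say how to handle cycles")
    acc = init
    while head is not None:
        acc = op(acc, head.car)
        head = head.cdr
    return acc
```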


nonempty list, and all of its elements except the last element (if
any) must be acyclic lists. The append! applicative sets the cdr of
the last pair in each nonempty list argument to refer to the next
non-nil argument, except that if there is a last non-nil argument, it
isn't mutated.
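The splicing rule above can be sketched with explicit cons cells. This is a hypothetical Python analogue (the names `Pair`, `append_bang`, and helpers are illustrative, not from the report); note that the final argument's cells are shared, not copied, and a shared last pair is rejected as an error:

```python
class Pair:
    """Minimal mutable cons cell."""
    def __init__(self, car, cdr=None):
        self.car, self.cdr = car, cdr

def from_py(xs):
    head = None
    for x in reversed(xs):
        head = Pair(x, head)
    return head

def to_py(head):
    out = []
    while head is not None:
        out.append(head.car)
        head = head.cdr
    return out

def last_pair(head):
    while head.cdr is not None:
        head = head.cdr
    return head

def append_bang(*args):
    """Analogue of append!: set the cdr of the last pair of each
    nonempty argument to the next non-nil argument; the final non-nil
    argument is left unmutated, its cells shared with the result."""
    nonempty = [a for a in args if a is not None]
    lasts = [last_pair(a) for a in nonempty]
    if len(set(map(id, lasts))) != len(lasts):
        raise ValueError("two arguments share the same last pair")
    for a, b in zip(nonempty, nonempty[1:]):
        last_pair(a).cdr = b
    return None  # stands in for Kernel's inert result
```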

It is an error for any two of the list arguments to have the same last
pair.  The result returned by this applicative is inert.  The



Rationale:

Because the applicative doesn't process its arguments in any
particular order, it must

terminate in finite time even if objects is cyclic (per the rationale


Derivation

The following expression defines the $binds? operative, using
previously defined features, and features from the Continuations module
(§7).  (The latter features aren't all primitive, but they don't use
binds? directly or indirectly, so there's no circularity in the
derivation.)

```


ss))
```
Rationale:

Presenting this predicate as a library feature drives home the point
that it doesn't introduce any capability that wasn't already provided
by the language.  In particular, for purposes of type encapsulation
(§3.4), there is still no way for a Kernel program to generate a
complete list of the variables exhibited in an arbitrary


The get-current-environment applicative returns the dynamic
environment in

Operatives should be used only when there is a specific reason to do
so, so that the programmer can assume that $'s always flag out
exceptions to the usual rules of argument evaluation. Accordingly,
throughout this report zero-ary combiners, such as this one, are
always wrapped: get-current-environment rather than
$get-current-environment .  Some combiner names are nouns, while
others are verbs. When a combiner acts on one or more operands, it's
clear that it describes action, so we consider it acceptably clear to
name the combiner for its result (e.g., lcm , §12.5.14, which returns
the lcm of its arguments). Often such a combiner can be called with no
operands, but usually isn't, so the degenerate case shouldn't
interfere with understanding the nomenclature. However, when the
combiner is primarily, or even exclusively, called without operands,
there is some danger that because its name is a noun, the programmer


alternative derivation in §6.4.2.) However, each of these evaluations
takes place in a child of the environment of the previous one, and
bindings for the previous evaluation take place in the child, too. So,
if one of the binding expressions is a $vau or $lambda expression, the
resulting combiner still can't be recursive; and only the first binding
expression is evaluated in the dynamic environment, so if the dynamic
environment is to be bound, only the first binding can do it.
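The chaining described here can be sketched with explicit environment objects. In this hypothetical Python analogue (the `Env` class and `let_star` are illustrative, not the report's derivation), each binding expression is evaluated in the environment produced so far, and its result is bound in a fresh child, so only the first expression sees the dynamic environment directly:

```python
class Env:
    """Minimal environment: a local table plus a single parent."""
    def __init__(self, parent=None):
        self.table, self.parent = {}, parent
    def lookup(self, name):
        env = self
        while env is not None:
            if name in env.table:
                return env.table[name]
            env = env.parent
        raise NameError(name)

def let_star(dynamic_env, bindings, body):
    """Sketch of $let*-style chaining: evaluate each binding
    expression in the environment of the previous step, then bind the
    result in a fresh child of that environment."""
    env = dynamic_env
    for name, expr in bindings:
        child = Env(env)
        child.table[name] = expr(env)  # evaluated before the child binds
        env = child
    return body(env)
```

Because each evaluation happens before its own binding exists, a combiner bound this way cannot see itself, which is the non-recursive behavior noted above.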



evaluated in any order. None of them are evaluated in the dynamic
environment, so there is no way to capture the dynamic environment
using $letrec ; and none of the bindings are made until after the
expressions have been evaluated, so the expressions cannot see each
others' results; but since the bindings are in the same environment as
the evaluations, they can be recursive, and even mutually recursive,
combiners.



The R5RS requires its special form letrec to provide dummy bindings
for the `<symk>` (bindings to “undefined values”) while the `<expk>` are
being evaluated, but then goes on to say that it is an error for the
evaluation of any `<expk>` to actually look up any of the `<symk>` in
e'. So the bindings have to be there, and you're supposed to pretend
they aren't.

Derivation



latter dependence: the binding expressions are evaluated in the
dynamic environment, but the local environment is a child of some
other environment specified by the programmer. This promotes semantic
stability, by protecting the meaning of expressions in the body from
unexpected changes to the client's environment (much as static scoping
protects explicitly constructed compound combiners).

In the interests of maintaining clarity and orthogonality of


`<bindings>` . `<body>`)

Rationale:

This is a common case of $let-redirect ; providing a shorthand for it
unclutters one's source code.


constructed combination
`eval`), `$define!` evaluates its second operand in its dynamic
environment, and then matches `<formals>` to the result, again in its
dynamic environment. Here, its dynamic environment is the result of
`$set!`'s local evaluation of `(eval exp1 env)`, call it `e1`; but
because the second operand `($eval <exp2> env)` is an operative
combination, `<exp2>` is evaluated only in `env`, the dynamic
environment of the call to `$set!` — not in `e1`.


result returned by for-each is inert.
Rationale:

The Scheme procedure for-each is defined to perform its applications
from left to right ([KeClRe98, §6.4]). By Kernel's general policy on
list handling (rationale for §3.9), though, if for-each were required
to process from left to right, it would have to loop forever on cyclic
lists. Handling cyclic lists in finite time was judged a more useful


processing is pointedly unspecified.
Derivation

The following expression defines for-each using only previously defined
features.  (Since we're only trying to show derivability, we can take
the mathematician's way out by reducing it to a previously solved
problem.)

```


```

First-class continuations can be used to implement a wide variety of
advanced control structures. Their inclusion in Kernel may therefore
be justified from the generality design goal. (Justification from
first-class-ness Guideline G1a of §0.1.2 has a suggestion of
circularity about it, because it's problematic whether continuations
would qualify as 'manipulable entities' if they weren't capturable.)
However, the ability to capture and invoke first-class continuations
leads to a subtle partial undermining of the ability of algorithms to
regulate themselves (which falls under the aegis of


of the continuation for the call to $if.
evaluations of expressions in the body of the operative (via $sequence
, §5.1.1) are children of the continuation for the call to the
operative (except that, if the body is acyclic, the last
expression-evaluation has no distinct continuation because it's a tail
context).

• When the evaluator evaluates the operands to an applicative
combination, the continuation for the evaluation of each operand is a
child of the continuation for the evaluation of the combination. It
doesn't matter whether the applicative happens to be a converted
continuation (via continuation->applicative , §7.2.5), whose
underlying operative will abnormally pass the argument list; the
operand evaluations take place before that, and are part of the


language has manifest types, the type hierarchy is used for the
purpose; in languages with no manifest type hierarchy, a special
type-like logical hierarchy of exceptions may be introduced. While the
introduction of an entirely new hierarchy for exceptions would not sit
well with Kernel's simplicity design goal, the natural ordering of
continuations in a Kernel computation provides a pre-existing
hierarchy suitable for the purpose. The ability to extend the
hierarchy explicitly (i.e., by specifying the extension without


underlying combiner of applicative, rather than as a single operand,
parallels the design of applicative continuation->applicative ,
§7.2.5.

In order to wield Kernel's continuation hierarchy effectively for
exception handling, the programmer must be able to extend the
hierarchy of possible destination continuations without triggering
entry/exit guards, which are meant to intercept actual exceptions, and
oughtn't get in the way of defining potential destinations when no
exceptional condition yet exists. Applicative call/cc cannot do this,
because it must be used from inside the dynamic extent of the
continuation that it captures.


should be evaluated, to avoid accidents its evaluation should be
performed within the jurisdiction of the interceptor. So: when
constructing an interceptor call we expect to unwrap the interceptor,
and, in order to handle all interceptors in a uniform way, we unwrap
each interceptor exactly once (which means the interceptor can't have
been operative to begin with); but, if the underlying combiner of an
interceptor were itself applicative, the value passed to it would
still be evaluated outside the jurisdiction of the interceptor; so we
don't allow the interceptor to be multiply wrapped, and leave it to
the interceptor to explicitly call eval at its discretion.



received by a continuation, via a list-structured definiend (see the
rationale of §4.9.1); the latter by passing an atomic value in place
of an argument list (see the rationale of §4.10.5).

In Scheme, however, the mismatch isn't mild, because neither
expectation is readily overcome. Deconstructing a list received by a
continuation is laborious, since formal parameter lists are only
supported for deconstructing the argument list received by a


associated inner continuation (that is, if the source is within that
dynamic extent, and the destination is not within it); and an
entry-guard list is considered iff the abnormal pass enters the dynamic
extent of the associated outer continuation (the destination is within
that extent and the source isn't). Exit-guard lists are considered
first, proceeding from smallest to largest dynamic extent (thus,
outward from the source), followed by entry-guard lists proceeding
from largest to smallest dynamic extent (thus, inward to the


within the extent of the source, and no extents are entered if the
source is within the extent of the destination.

For each exit-guard list considered, the first interceptor (if any) is
selected whose selector's dynamic extent contains the
destination. Symmetrically, for each entry- guard list considered, the
first interceptor (if any) is selected whose selector's dynamic extent
contains the source. Thus, at most one interceptor is selected from
each list.
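The selection rule can be sketched concretely. In this hypothetical Python analogue (all names are illustrative), continuations are identified by their path from the root, so "lies within the dynamic extent of c" becomes a tuple-prefix test; exit-guard lists are consulted outward from the source, entry-guard lists inward to the destination, and at most one interceptor is chosen per consulted list:

```python
def within(extent_root, cont):
    """cont lies within the extent of extent_root iff extent_root is
    an ancestor (a path prefix) of cont."""
    return cont[:len(extent_root)] == extent_root

def select_interceptors(source, destination, exit_guards, entry_guards):
    """exit_guards: (guarded_cont, [(selector_cont, name), ...]) pairs
    ordered from smallest to largest extent; entry_guards ordered
    largest to smallest. From each consulted list, pick the first
    interceptor whose selector's extent contains the destination
    (exits) or the source (entries)."""
    chosen = []
    for guarded, guards in exit_guards:
        if within(guarded, source) and not within(guarded, destination):
            for selector, name in guards:
                if within(selector, destination):
                    chosen.append(name)
                    break          # at most one per list
    for guarded, guards in entry_guards:
        if within(guarded, destination) and not within(guarded, source):
            for selector, name in guards:
                if within(selector, source):
                    chosen.append(name)
                    break
    return chosen
```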

Rationale:

It doesn't matter whether a guard list is cyclic, because testing a
selector has no

side-effects; cf. the handling of cyclic parent lists by


barriers are passed through — exit from successively larger extents,
followed by entrance to successively smaller extents. Since all
relevant guard lists are considered along the path from source to
destination, selecting at most one match from each list maximizes the
programmer's control over the interception process — analogously to
taking a conjunction of disjunctions in boolean algebra.

In the case of exit guards, this conjunction-of-disjunctions algorithm


rethrown) at no more than one catch clause for each try.

The handling of entry guards is ruthlessly symmetric to that of exit
guards. Languages with type-based exception hierarchies have no clear
analog to Kernel's entry guards, which


occur as a concept only because the hierarchy of exception
destinations is alike in kind to the hierarchy of exception sources;
there is therefore, within the author's experience, no precedent for
the entry-guard facility, and its design is based entirely on the
strength of the exit/entry symmetry.

There was some deliberation over the possibility that, when specifying
a nontrivial selector for an entry guard (i.e., a selector that
doesn't simply intercept everything), the guarded destination extent
will usually have a trusted source extent from which it allows
unobstructed abnormal passing, and will want to intercept just those
abnormal entries that do not come from the trusted extent. One could
imagine reversing the polarity of selection on entry guards, so that
an exit guard is selected if the destination belongs to the selector's
extent, but an entry guard is selected if the source does not belong
to the selector's extent. However, even if this does turn out to be
the most common usage pattern for entry guards, it was judged more
symmetrical and straightforward (thus, less accident-prone) to use the
same selection algorithm for entry as for exit. Complementary behavior


then carefully arranged so that, when an interceptor does call its
second argument, no interceptions will occur (external to the
interceptor call).

If an interceptor doesn't simply ignore its second argument, its
interest will almost always be to pass an object to the outer
continuation. At need, the interceptor could construct a continuation
with equivalent behavior to the outer continuation, by evaluating the


outer continuation.

R5RS Scheme supports a limited form of extent guarding through its
procedure dynamic-wind , which unconditionally intercepts all
entry/exit of its dynamic extent.  The R5RS doesn't fully define how
dynamic-wind interacts with first-class continuations (specifically,
what happens when a continuation is captured from inside an
interceptor),


normally receives a value, it terminates the Kernel session. (For
example, if the system is running a read-eval-print loop, it exits
the loop.)

If the hierarchy of continuations didn't have a maximal element, or if
that element weren't made available to the programmer, there would be
no way to create a guard clause that is always selected. See for
example the implementation of dynamic-wind in the rationale discussion
of §7.2.5.


using only previously defined features.

#### 7.3.2 $let/cc

($let/cc <symbol> . <objects>)

A child environment e of the dynamic environment is created,
containing a binding of <symbol> to the continuation to which the
result of the call to $let/cc should normally return; then, the
subexpressions of <objects> are evaluated in e from left to right,
with the last (if any) evaluated as a tail context, or if <objects> is
empty the result is inert. That is,

($let/cc <symbol> . <objects>)

=== (call/cc ($lambda (<symbol>) . <objects>))
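Python has no first-class continuations, but the binding pattern of $let/cc can be illustrated with an escape-only analogue (a hypothetical sketch using an exception as the escape; real Kernel continuations are first-class and re-invocable, which this does not model):

```python
def let_cc(body):
    """Escape-only analogue of ($let/cc k . body): body receives a
    callable k; invoking k(v) immediately returns v as the result of
    the whole let_cc form. Each call mints its own exception class,
    so an inner let_cc never catches an outer escape."""
    class Tag(Exception):
        def __init__(self, value):
            super().__init__(value)
            self.value = value
    def k(value):
        raise Tag(value)
    try:
        return body(k)
    except Tag as e:
        return e.value
```

If k is never invoked, the body's own result is returned, mirroring normal return to the bound continuation.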

Rationale:



Rationale:

The main advantage of exit over root-continuation is that it is an
applica- tive, rather than a continuation. An obvious benefit is that
combiners are easy to call (it's their reason for being, after all),
whereas a continuation can only be invoked through an auxiliary call
to another combiner (typically apply-continuation , occasionally
continuation->applicative). A more subtle benefit (though it comes into


type, it signals an error from within the dynamic extent of the call
to the operative. If a continuation receives a value of the wrong
type, it too signals an error — but by the time the continuation
actually receives a value, the source dynamic extent of the abnormal
pass has been lost, reducing Kernel's ability to provide useful
diagnostic information about the source of the problem. It is
therefore often desirable, for safety, to build known type
constraints into an applicative shell around a continuation, and call


Encapsulations module, §8), there is no need to prohibit the operand
to $lazy from evaluating to a non-promise object; so we remove that
restriction (per the core statement of design philosophy,
§0.1.2). Since the operand to $lazy can then yield an arbitrary value,
it has the same type signature as Scheme's delay , eliminating the
possible type-signature motive for a Kernel constructor $delay
(compatibility with Scheme being already a non-starter).

History shows that promises are exceedingly easy to mis-implement, and
the detailed description of Kernel's $lazy operative is chosen to
preclude several problems that have occurred in published
implementations.



promise during the evaluation. While this does curtail needless
iteration (in case the result of evaluation is an undetermined lazy
promise), its primary purpose is to guarantee that any given promise
will always determine the same value. The R3RS implementation of
promises didn't check for a previous determination before storing the
result of evaluation, and consequently violated this invariant; the
R4RS corrected this, but accompanied its bug-fix with test code that
did not actually test for the bug, and the bug reappeared in the April


y ), the code that orchestrates the forcing process makes a tail
call. In the implementation of promises in the R5RS , the
orchestrating code failed to make a tail call, resulting in the memory
leak that motivated SRFI-45. (Correcting the problem involved a fairly
drastic internal rearrangement of the implementation, so one can't
point to any one place in the R5RS implementation where a tail call
should have occurred.) Here is a sequence of expressions that test
iterative forcing.


this sequence should iteratively force ten billion and one promises,
finally producing a cons cell whose car is ten billion, and whose cdr
is a stream. (Forcing the cdr stream would never terminate, since none
of the integers greater than ten billion are equal to ten billion —
but shouldn't run out of memory, either, since the implementation is
properly tail-recursive.)
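The iterative-forcing discipline being tested can be sketched in Python (a hypothetical analogue; names and representation are illustrative, and only the looping and memoization behavior of the report's description is modeled):

```python
class Promise:
    """Memoized promise: 'done' marks it determined."""
    def __init__(self, thunk):
        self.thunk, self.done, self.value = thunk, False, None

def lazy(thunk):
    return Promise(thunk)

def force(p):
    """Iterative forcing: when a thunk yields another promise, loop
    rather than recurse -- the analogue of the required tail call --
    so arbitrarily long promise chains force in constant control
    space. Re-checking 'done' after running the thunk preserves the
    invariant that a promise, once determined, always yields the same
    value, even if forcing it forced it."""
    while isinstance(p, Promise):
        if not p.done:
            v = p.thunk()
            if not p.done:  # the thunk may already have determined p
                p.done, p.value, p.thunk = True, v, None
        p = p.value
    return p

def countdown(n):
    """A chain of promises; forcing steps through n links iteratively."""
    return lazy(lambda: n if n == 0 else countdown(n - 1))
```

Forcing a chain of tens of thousands of links succeeds without deep recursion, which is the property the ten-billion-promise test above probes at scale.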

A subtlety, BTW, of the last example is that it only works in bounded



then we would eventually run out of memory, because the entire stream
of integers from zero on up would have to be retained in memory. (Even
the “bounded space” version isn't really bounded, because the
increasing integers themselves take space logarithmic in their
magnitude; but since occupying all of memory that way would take
longer than the age of the universe, we choose to overlook it.)


general computation

(§B); and promises are supposed to represent the potential to do
general computation.  So if there were some particular type t of
objects that could be the result of general computation, but couldn't
be the result of forcing a promise, one would have to conclude either
that type promise wasn't entirely living up to its obligations, or (by
extrapolation of the usual right of first-class objects) that type t
wasn't quite as first-class as it ought to be. (Moreover, the inability
to return objects of type t from lazy computations would in turn
weaken their right to be stored in lazily expanded data structures (as
illustrated below) — again casting doubt on their first-class status.)


Kernel. The name $delay was rejected on the grounds that that name
would tend to somewhat obscure the relationship of what is being done
to the underlying iterative-forcing mechanism (thus hiding
information that the programmer should be aware of, rather than
information that the programmer shouldn't be bothered by). A more
explicit name, such as $lazy-memoize, was considered, but seems
scarcely more readable than the idiomatic nested calls, so would smack
of feature bloat. There would be no feature bloat in including
$lazy-memoize if memoize itself were omitted from the language; but
guaranteeing that iterative forcing won't pass a certain point is
logically distinct from postponing computation, so it seems that
memoize ought to be provided.



default if the primitive accessor fails:
> than in an empty environment.  However, our expectation is that the
> combiner argument will usually be customized for the particular
> binder-combination; in such a case, the combiner is constructed
> during the combination's argument-evaluation, at which time the
> dynamic environment could be deliberately captured. We therefore
> prefer the more hygienic, hence less accident-prone, behavior; also
> cf. call/cc , §7.2.2. The caller's dynamic environment would be
> preferred if general call semantics were fundamental to the
> operation (as with apply , §5.5.1, or, even more so, map,
> §5.9.1). Also, the dynamic environment would naturally be provided


> since in that case there would be no argument-evaluation during
> which to deliberately capture the dynamic environment.

> As long as Kernel doesn't support concurrency, it would be possible
> to implement dynamic variables as a library facility; for each
> dynamic variable, one would simply maintain a global stack of
> values, which at any given moment would simulate the calls to `b`


Cf. module Keyed dynamic variables, §10.
> being publicly accessible keys, they are a natural basis for
> general-purpose information distribution.
>
> A binding (of a symbol or other key) is shadowed when it isn't
> visible in an environment `e` despite being contained in an ancestor `a`
> of `e`, because another binding for the same key is located in some
> ancestor `a'` of `e` and, during lookup in `e`, `a'` is visited before


> A factory applicative (here, `make-keyed-static-variable`) generates
> unique matching sets of tools (here, a binder and an accessor). The
> tools themselves are then subject to being statically bound by
> symbols in a local environment, and are thus subject to Kernel's
> general-purpose access-regulation techniques.

### 11.1 Primitive features


returns different `b` and `a`.
> an effectively non-destructive way whether any given environment is
> an ancestor of another given environment. Given candidate ancestor a
> and candidate descendant d, one could reliably find a symbol s that
> isn't bound in d (using string->symbol and $binds? ; §§13.1.1,
> 6.7.1); then, if s is bound in a, a isn't an ancestor of d,
> otherwise mutate a by binding s in it, and look to see whether the
> new binding is visible in d. The process is destructive because
> there is then no way to get rid of the binding of s in a; however,


would be evaluated in e, so could capture e by

but then, in order to grant c access to e', one would have to capture
e' with a binding that would be visible throughout e. A general
keyed-static-variable facility oughtn't have to wrestle with this
access issue at all (neither to choose between e and e', nor to
provide both constructors and thus bloat the feature interface); so,
instead, we model the constructor on applicative make-environment


The numerical sublanguage described in this section is designed to be
largely independent of both what internal number formats are used,
and how large a domain of mathematical numbers is modeled. It draws
freely on its R5RS Scheme counterpart ([KeClRe98, §6.2]), but modifies
its approach for Kernel's design policies, and incorporates two
significant extensions to the R5RS Scheme notion of number: infinities,
and bounds on inexact real numbers.



suggested for doing so is that, while attempting to be independent
of internal number formats, the abstract Scheme treatment of numbers
allowed so many variant behaviors that it left itself open to rabid
non-portability. The Kernel design agrees with this objection (on
grounds that non-portable number support degrades the programmer's
ability to use numbers freely, contrary to first-class-ness Guideline
G1a of §0.1.2); but the R6RS alternative multiplies the number of
numeric features, violating Kernel's core design philosophy (top of
§0.1.2). There is a profound difference between constraining low-level
behavior, which promotes portability; and inflating the programmer
interface with low-level details, which, besides the feature bloat


positive infinity, and lower bound negative infinity.
When an arithmetic operation involves an infinity, either among its
arguments or as its result, it should be understood as a limit, as
with the arctangent of positive infinity, which is exactly pi over two
(though an implementation of module Real doesn't have to support exact
modeling of that mathematical number). This principle also implies
that certain operations are not defined, such as division by zero,
which has no determinate primary value since it has different one-sided


Complex assumes module
that trigonometry is involved even if only Gaussian integers are
constructed. It would seem a daunting task to implement module Real
without module Inexact, but in case someone has a reason to do so, the
report doesn't preclude it, i.e., module Real doesn't assume module
Inexact.

### 12.2


to capture a mathematical number that the client wants to reason
about, either because the intended mathematical number cannot be
represented by an internal number (as with exclusively rational
internal number formats confronted with an irrational mathematical
number), or because the client doesn't know exactly which mathematical
number to model (as with an experimental measurement, or the result
of arithmetic on numbers that were already inexact). Optional module
Inexact associates to each real Kernel number a mathematical upper


values of the arguments, taking into account the internal number
formats of those primary values. (Therefore, the primary value of the
result cannot depend on the bounds, robustness, nor exactness, of the
arguments.) If any of the arguments doesn't have a primary value, the
result cannot have a primary value.

If the result doesn't have a primary value (and even if all of the
arguments do have primary values), an error is or is not signaled
depending on the current value of the strict-arithmetic keyed dynamic
variable (§12.6.6).
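The propagation rule can be sketched numerically. In this hypothetical Python analogue (names are illustrative; "no primary value" is modeled as a NaN, and strict-arithmetic as a plain flag rather than a keyed dynamic variable), absence of a primary value propagates to the result, and strictness decides between an error and a no-primary-value result:

```python
import math

def kernel_add(args, strict=False):
    """Sketch: if any argument lacks a primary value (modeled as NaN),
    the result cannot have one; under strict arithmetic this is an
    error, otherwise a no-primary-value result is returned."""
    if any(isinstance(x, float) and math.isnan(x) for x in args):
        if strict:
            raise ArithmeticError("result has no primary value")
        return math.nan
    return sum(args)
```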


false (§12.6.6), and the numeric result of an operation depends on the
value of a numeric argument that has no primary value, the operation
has the option of returning a number with no primary value instead of
signaling an error. However, non-numeric types generally do not have
states signifying 'probably, but not necessarily, non-existent'; so
when a non-numeric result depends on the value of a numeric argument
with no primary value, there is no non-strict alternative to an error
signal. In the case of predicates on numbers, such as numeric


@@ 8827,7 8822,7 @@ Internal numbers
Internal numbers are, as stated earlier, presumed data structures of
the Kernel implementation. Because implementations only have to
match the abstract behavior described in this report, internal numbers
needn't be realized as concrete data structures, and even if they
are, the format of the concrete data structures is not constrained
by the report. However, under various circumstances implementations
are constrained to behave as if internal numbers were concrete data


@@ 8860,10 8855,10 @@ coordinates is finite. In point-projective form, an exact finite
non-zero complex number (the point) is specified, which determines the
direction from zero of the represented infinite complex; that is, the
represented number is exact positive infinity times the point. The
point doesn't have to be stored internally in any normal form (it only
has to be exact, finite, and non-zero), but any given implementation
must normalize the point when writing the infinity, so as not to
reveal anything about eq? numbers that couldn't be determined without
write (per §3.6).
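One possible normalization is sketched below. Since two points denote the same complex infinity iff one is a positive rational multiple of the other, scaling the stored point to a canonical representative before writing suffices; `normalize_point` is an illustrative name and this particular normal form (first non-zero coordinate scaled to ±1) is an assumption, not mandated by the report.

```python
from fractions import Fraction

# Hypothetical sketch: scale the stored 'point' of a point-projective
# exact complex infinity to a canonical representative before writing.

def normalize_point(re, im):
    """Scale (re, im) by a positive rational so the first non-zero
    coordinate becomes +1 or -1; one possible normal form."""
    re, im = Fraction(re), Fraction(im)
    assert (re, im) != (0, 0), "the point must be non-zero"
    s = abs(re) if re != 0 else abs(im)  # positive, so direction is preserved
    return (re / s, im / s)

# (2, 4) and (3, 6) denote the same infinity and normalize identically:
print(normalize_point(2, 4), normalize_point(3, 6))
```

Because the scale factor is positive, the direction from zero is preserved, so distinct infinities never collide under this normalization.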



@@ 8883,7 8878,7 @@ constraint is to disambiguate the behavior of predicate exact? without
causing turbulence between exact finite arithmetic and internal real
formats. From §12.2, any exact arithmetic operation —i.e., an
operation in module Number or Rational— given only exact arguments
must return an exact result; but we don't mean to require any
implementation to support exact irrational reals. Consider the complex
number z = 1i, where the radix 1 and exponent i are both exact. When
we construct z, we know exactly which mathematical number we are


@@ 8893,7 8888,7 @@ number exactly in polar coordinates, even though both of its
rectangular coordinates are irrational. However, adding exact one to z
gives a complex number z + 1 whose rectangular and polar coordinates
are all irrational. (Of course, an implementation that does support
exact irrationals won't have a problem with that.)

An exact complex infinity is only internally representable in
rectangular coordinates if one of its rectangular coordinates is


@@ 8932,7 8927,7 @@ Rationale:

The report is intended to specify the behavior of each operation
sufficiently that, in principle, given exact arguments it has just one
possible exact result (or is undefined). The operation usually isn't
required to return this exact result, the largest class of exceptions



@@ 9132,7 9127,7 @@ integer. However, given only primary value and upper and lower bounds,
the only ways to be certain of this would be (1) the implementation
only supports integers, so that every finite number computed is sure to
be an integer; or (2) the upper and lower bounds are identical to the
primary value. We don't want the behavior of the predicate on inexact
numbers to change when additional modules are supported; and we do
want to support inexact integers. Another approach would be to
maintain an additional integer tag on each inexact real number, akin


@@ 9223,7 9218,7 @@ Applicative + returns the sum of the elements of numbers.  If numbers
is empty, the sum of its elements is exact zero.  If a positive
infinity is added to a negative infinity, the result has no primary
value.  If a complex number with magnitude infinity is added to another
complex number with magnitude infinity, and they don't have the same
angle, the result has no primary value.



@@ 9240,7 9235,7 @@ Rationale:

We view the sum of a cycle as the limit sum of an infinite series. When
the acyclic sum of the elements of the cycle is zero, but some of the
elements are non-zero, the sum of the series isn't convergent.
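This limit-of-a-series view can be sketched as below; `cycle_sum` and `NO_PRIMARY` are illustrative names, and the list is represented as a finite prefix plus the cycle's elements, rather than as an actual cyclic structure.

```python
import math

# Hypothetical sketch of the rule above: the sum over a cyclic list is
# taken as the limit of the infinite series obtained by repeating the
# cycle's elements forever after the acyclic prefix.

NO_PRIMARY = object()  # stands for a result with no primary value

def cycle_sum(prefix, cycle):
    """Sum of a list with finite prefix, then 'cycle' repeated forever."""
    per_pass = sum(cycle)
    if per_pass > 0:
        return math.inf        # partial sums grow without bound
    if per_pass < 0:
        return -math.inf
    if any(x != 0 for x in cycle):
        return NO_PRIMARY      # partial sums oscillate: no limit
    return sum(prefix)         # all cycle elements zero: constant tail

print(cycle_sum([1, 2], [3]))                  # inf
print(cycle_sum([5], [1, -1]) is NO_PRIMARY)   # True
```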

#### 12.5.5 *



@@ 9448,11 9443,11 @@ finite integer (by multiplying that finite integer by zero). Therefore,
the gcd of any non-zero finite integer n with zero is abs(n), and the
gcd of any non-zero finite integer n with any real infinity is
abs(n). However, the gcd of zero with an infinity is indeterminate: it
can't be positive infinity, because there isn't anything that
multiplied by zero gives infinity, but it can't be any particular finite
integer either, because every finite positive integer can be multiplied
by zero to give zero, and by an infinity to give positive
infinity. Therefore, if gcd has a zero argument, but doesn't have a
non-zero finite argument, its result has no primary value.  The
behaviors on nil argument list preserve equivalences
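The gcd rules described above can be sketched as follows. `kernel_gcd` and `NO_PRIMARY` are illustrative names, not Kernel's; the all-infinite case (no zero, no finite argument) is an assumption beyond the excerpt.

```python
import math
from functools import reduce

# Hypothetical sketch of the gcd behavior described above. Arguments
# are finite integers, zero, or real infinities.

NO_PRIMARY = object()  # stands for a result with no primary value

def kernel_gcd(*args):
    # Zeros and infinities are absorbed: gcd(n, 0) = gcd(n, inf) = |n|
    # for non-zero finite n, so only non-zero finite arguments matter.
    finite_nonzero = [abs(int(a)) for a in args
                      if a != 0 and not math.isinf(a)]
    if not finite_nonzero:
        # A zero argument with no non-zero finite argument is
        # indeterminate; the all-infinite case is an assumption here.
        return NO_PRIMARY if 0 in args else math.inf
    return reduce(math.gcd, finite_nonzero)

print(kernel_gcd(6, 0), kernel_gcd(6, math.inf))  # 6 6
print(kernel_gcd(0, math.inf) is NO_PRIMARY)      # True
```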



@@ 9501,8 9496,8 @@ every element of numbers is undefined.
Rationale:

Although none of these predicates are type predicates (because they
don't take arbitrary objects as arguments), none of them depend on
their arguments having primary values, either, so they don't signal an
error on no-primary-value (per §12.2).

#### 12.6.2 get-real-internal-bounds, get-real-exact-bounds


@@ 9607,7 9602,7 @@ If this applicative used the general rule for internal representations
for arithmetic op- erations (§12.3.3), the programmer would have to
regulate the internal representations of all three arguments in order
to regulate the internal representation of the result. Adopting the
internal representation of real2 simplifies the programmer's regulation
task.

A seemingly more straightforward behavior for this applicative would


@@ 9616,7 9611,7 @@ base the bounds of the result on the primary values of real1 and real3
this was deemed potentially error-prone, since any error accumulated
in calculating real1 and real3 probably should be applied to the number
constructed from them — and in the presumably less usual case that
it shouldn't, the programmer can stipulate the primary values of real1
and real3 via applicative get-real-internal-primary (§12.6.3).

It would be possible to derive get-real-internal-primary from this


@@ 9790,7 9785,7 @@ Rationale:

The results of these operations are, in principle, distributed
arbitrarily over the entire specified interval, with no preference
given to any 'central' part of the interval. Therefore, if the bounds
of the arguments were included in the interval used for the operation,
the primary values of the arguments would be completely irrelevant to
the operation; and unintendedly large intervals would tend to have


@@ 9823,9 9818,9 @@ real iff all its imaginary parts have zero primary values and bounds.)

Rationale:

If we're told that a number is real, we shouldn't have to think about
imaginaries at all; we shouldn't have to ask about bounds on the
imaginary part of a complex number, and we certainly shouldn't have to
know what non-complex kinds of numbers might be supported by the
implementation.



@@ 9915,11 9910,11 @@ the current case, a reference to a closed port is simply a means by
which the programmer could possibly cause an i/o error by trying to
use it. Therefore, when the internal state of a data type is
administrative, a dominant concern in the design of the type support
is to minimize the programmer's dependence on explicit references to
objects of the type.

The port-based i/o tools in R5RS Scheme are already well suited to
this design goal, and so Kernel adopts Scheme's port tools
substantially intact. The opening and closing of ports can be handled
in three ways:



@@ 9975,7 9970,7 @@ Every port must be admitted by at least one of these two predicates.

Rationale:

A port has no purpose if it can't be used for input or
output. Although this report does not allow a single port to be used
for both, it would not be unreasonable for an extension to want to
support that, and therefore input port and output port were not made


@@ 10143,12 10138,12 @@ one or more occurrences of ⟨thing⟩.

Rationale:

Kernel expression syntax is much simpler than Scheme's, because there
is no need to go any further than simple expressions — no special
forms, no syntax transformers, and no syntactic sugar for quotation or
quasiquotation. However, Kernel lexical structure is nearly identical
to that of Scheme because, for the safety of Scheme programmers, it
includes Scheme lexemes such as ' and ,@ that are not actually legal
tokens in Kernel.  This common Scheme/Kernel lexical structure is
fairly simple, but the simplicity is not evident from a pure
extended-BNF grammar for tokens. We therefore present the lexical


@@ 10169,7 10164,7 @@ is only needed after the token class is already known.
#### 16.1.1 Lexemes

Here we describe an algorithm for dividing a sequence of characters
into lexemes. The algorithm doesn't classify lexemes by type, not even
by which lexemes are legal and which are illegal; that is accomplished
by a second algorithm, below in §16.1.2.



@@ 10191,23 10186,23 @@ quotes). All three kinds of right lexemes are also left
14Lookahead of one character means that, when scanning a lexeme from
left to right, we can process characters one at a time, without
worrying about what, if anything, comes after a given character until
after we've decided whether to include it in the current lexeme.


lexemes, because it's perfectly clear when the right lexeme has
ended, so if the following character is non-atmosphere (not a space,
tab, newline, etc.), it must be the beginning of a new lexeme.

In addition to the three kinds of right lexemes, there are also five
other left lexemes:

` ' #( , ,@

(In words: backquote, quote, hash-left-paren, comma, and comma-at.)
Kernel defines these to be lexemes to avoid accidents that could
otherwise result from lexical incompatibilities with Scheme; but the
language syntax doesn't use any of them, so they will all be classified
as illegal in §16.1.2.
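The one-character-lookahead splitting described above can be sketched as below. This covers only a few lexeme shapes (parentheses, strings, the five Scheme-compatibility left lexemes, and runs of ordinary characters); it is not the full algorithm of §16.1.1, and `split_lexemes` does not classify lexemes as legal or illegal.

```python
# Hypothetical sketch of one-character-lookahead lexeme splitting.

ATMOSPHERE = " \t\n"
DELIMITERS = ATMOSPHERE + "()`'\","

def split_lexemes(text):
    lexemes, i = [], 0
    while i < len(text):
        c = text[i]
        if c in ATMOSPHERE:
            i += 1                       # atmosphere separates lexemes
        elif c in "()`'":
            lexemes.append(c); i += 1    # single-character lexemes
        elif c == ",":
            # one character of lookahead decides between , and ,@
            if i + 1 < len(text) and text[i + 1] == "@":
                lexemes.append(",@"); i += 2
            else:
                lexemes.append(","); i += 1
        elif c == "#" and i + 1 < len(text) and text[i + 1] == "(":
            lexemes.append("#("); i += 2
        elif c == '"':
            j = i + 1
            while text[j] != '"':        # no escape handling in this sketch
                j += 1
            lexemes.append(text[i:j + 1]); i = j + 1
        else:
            j = i                        # run of ordinary characters
            while j < len(text) and text[j] not in DELIMITERS:
                j += 1
            lexemes.append(text[i:j]); i = j
    return lexemes

print(split_lexemes("($lambda (x) x)"))
# ['(', '$lambda', '(', 'x', ')', 'x', ')']
```

Each branch commits to a lexeme by looking at most one character ahead, which is the property the footnote above is describing.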

...


@@ 10242,7 10237,7 @@ on their mutation (mitigating hygiene concerns).

• Generalized formal parameter trees for binding constructs
(effectively supporting simultaneous return of multiple values
without Scheme's extra-functional constructs values and
call-with-values; see §4.9.1).

• Object `#inert` (to be returned in lieu of useful information).


@@ 10272,7 10267,7 @@ existing incomplete draft of the R-1RK should be made public, and
updated from time to time thereafter.

The incompleteness of the draft was most prominently because portions
simply hadn't been written yet, only planned in rough
outline. However, some existing material required further rationale
discussion, which could sometimes lead to amendments; the development
of omitted materials might also occasionally induce changes to pre-


@@ 10386,7 10381,7 @@ in Algol are second class citizens — they always have to appear in
person and can never be represented by a variable or expression
(except in the case of a formal parameter)

Strachey's detailed enumeration of rights and privileges is specific to
Algol, but the concept generalizes naturally. In any given programming
language, first-class objects may be freely manipulated, according to
certain broad classes of manipulations that are characteristic of the


@@ 10411,11 10406,11 @@ in Scheme.
3. They may be returned by procedures.

15Christopher Strachey originally evolved the notion of first-class
value from W.V. Quine's principle To be is to be the value of a variable ([La00]).

Quine's principle is a criterion for what a statement assumes to exist
(not a criterion for what actually does exist). He proposes
reformulating any given statement using quantified variables and
predicates; in the reformulation, whenever a variable must have a


@@ 10425,7 10420,7 @@ assumed to exist. Thus the sentence “Pegasus does not exist,” in which
“¬(exists x)(is-Pegasus(x))”, in which x is not bound to anything and so no
existence is assumed. ([Qu61].)

16This could also be taken as an example of why one shouldn't compromise on the elegance of a language design.


@@ 10438,7 10433,7 @@ A significant omission from this list may be illustrated by the
following thought experiment. Suppose Scheme were slightly modified, by
granting these four properties to, say, the built-in special-form
combiners and and or . There would then be two new “first-class”
objects in the Scheme programmer's repertory. However, there would be
little to gain by it. The new objects would remain anomalies, because
they would always be the only two “first-class” examples of a larger
type, with its own peculiar abstract properties; in fact, conceptually


@@ 10452,7 10447,7 @@ problem, here is an additional criterion for first-class objects17
(still without claiming sufficiency, of course):

5. There are no arbitrary restrictions on the set of objects, in the
   given object's naturally arising value domain, that satisfy criteria 1–4.



@@ 10473,7 10468,7 @@ of nonnegative integers fails the criterion. (Any strictly first-class
representation of the integers would have to be infinite-precision,
i.e., Lisp-style bignums.)

Under Criterion 5, the Scheme special-form combiners can't be made
first-class in isolation; a conceptually complete domain of such
creatures must be identified, and given first-class status all at
once. Thus, type operative would not be first-class without its


@@ 10504,7 10499,7 @@ rationale discussion of §4.9.1), and cyclic list and tree structures
The use of a term-reduction calculus to model any Lisp with fexprs
has, in recent years, acquired a folk reputation for theoretical
inadmissibility, due to an overgeneralization based on the (quite
correct) central theoretical result of Mitchell Wand's 1998 paper “The
Theory of Fexprs is Trivial” ([Wa98]). The paper is precise in stating
its formal result:



@@ 10529,7 10524,7 @@ http://mitpress.mit.edu/sicp/sicp.html
The second edition of the Wizard Book [Ra03, “Wizard Book”].

[Ba78] John Backus, “Can Programming Be Liberated from the von Neumann
Style?  A Functional Style and its Algebra of Programs”,
Communications of the ACM 21 no. 8 (August 1978), pp. 613–641.

Augmented form of the 1977 ACM Turing Award Lecture, which proposes


This is the first of the RxRSs with a huge pile of authors (for two
perspectives, see [ReCl86, Introduction], [SteGa93, §2.11.1]). There
is, as one might expect from a committee, almost nothing in the way of
motivation; however, there is also —as one would not normally expect
from a committee— a verse about lambda modeled on J.R.R. Tolkien's
verse about the Rings of Power.

[Cl98] William Clinger, “Proper Tail Recursion and Space Efficiency”,


@@ 10572,7 10567,7 @@ Report on the Algorithmic Language Scheme”, Lisp Pointers 4 no. 3
http://www.cs.indiana.edu/scheme-repository/doc.standards.html

[CrFe91] Erik Crank and Matthias Felleisen, “Parameter-Passing and the
Lambda Calculus”, POPL '91 : Conference Record of the Eighteenth
Annual ACM Symposium on Principles of Programming Languages,
Orlando, Florida, January 21–23, 1991, pp. 233–244. Available (as of
October 2009) at URL: http://www.ccs.neu.edu/scheme/pubs/#popl91-cf


@@ 10595,10 10590,10 @@ correct type.

(2) The object can be used in any expression of the correct type.

(3) The lifetime of the object is unbounded (I'm talking high-level semantics here, so I don't have to mention garbage collection.)

(4) It is not (in general) decidable that a given object is produced
by no expression in a program. (In other words, the object may be


@@ 10639,7 10634,7 @@ and Their Computation by Machine”, Communications of the ACM 3 no. 4
The original reference for Lisp.

[McC+62] John McCarthy, Paul W. Abrahams, Daniel J. Edwards, Timothy
P. Hart, and Michael I. Levin, LISP 1.5 Programmer's Manual,
Cambridge, Massachusetts: The MIT Press, 1962. Available (as of
October 2009) at URL:
http://www.softwarepreservation.org/projects/LISP/book/


@@ 10650,12 10645,12 @@ The second edition was 1965, same authors.

http://www.gnu.org/software/mit-scheme/

[Mor73] James H. Morris Jr., “Types are not Sets”, POPL '73:
Conference Record of ACM Symposium on Principles of Programming
Languages, Boston, Massachusetts, October 1–3, 1973, pp. 120–124.

[Mos00] Peter D. Mosses, “A Foreword to ‘Fundamental Concepts in
Programming Languages’ ”, Higher-Order and Symbolic Computation 13
no. 1/2 (April 2000), pp. 7–9.

[Pi83] Kent M. Pitman, The Revised Maclisp Manual (Saturday evening


@@ 10715,9 10710,9 @@ Design”, SIG- PLAN Notices 10 no. 7 (July 1975) [Special Issue on
Programming Language Design], pp. 18–21.

A retrospective survey of the subject, somewhat in the nature of a
post-mortem. The essence of Standish's diagnosis is that the
extensibility features required an expert to use them. He notes that
when a system is complex, modifying it is complex. (He doesn't take
the next step, though, of suggesting that some means should be sought
to reduce the complexity of extended systems.)



@@ 10725,7 10720,7 @@ He classifies extensibility into three types: paraphrase (defining a new
feature by showing how to express it with pre-existing features —
includes ordinary procedures as well as macros); orthophrase (adding
new facilities that are orthogonal to what was there — think of adding
a file system to a language that didn't have one); and metaphrase
(roughly what would later be called “reflection”).

[Ste90] Guy Lewis Steele Jr., Common Lisp: The Language, 2nd Edition,


@@ 10766,7 10761,7 @@ clearly expressed by the notes.
Languages”, Higher-Order and Symbolic Computation 13 no. 1/2 (April
2000), pp. 11–49.

This is Strachey's paper based on his lectures, [Str67]. On the
history of the

paper, see [Mos00].