32fdcfbf8396eb1fc80172c6c8b4ecbeed0d832a — ilmu a month ago 42341ca master
put some notes on model approach wip
1 files changed, 43 insertions(+), 0 deletions(-)

@@ -98,3 +98,46 @@ w.r.t. reality, so oracle problem.
The operation graph can invoke arbitrary programs as provided by the metaprogram; the metaprogram also
provides compatible data (modulo context-specific guarantees) via the places.


Objects are hashes. Properties are names. Properties can be related logically and given causality.
That is not completely correct; I suppose objects are just any height-0 cxp (so just netstrings).
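A rough sketch of the object encoding above, assuming SHA-256 as the hash and classic netstring framing (both are illustrative choices, not fixed by these notes):

```python
import hashlib

def netstring(data: bytes) -> bytes:
    # classic netstring framing: "<length>:<data>,"
    return str(len(data)).encode() + b":" + data + b","

def object_id(data: bytes) -> str:
    # an object is identified by the hash of its netstring encoding
    return hashlib.sha256(netstring(data)).hexdigest()

# netstring(b"hello") == b"5:hello,"
```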

In order to associate an object with many properties we use an OP-graph, which is itself a property of
the associated data: namely, the adjacencies between elements in the list of objects (hashes) and the
list of properties (names / references).

Mostly this involves answering queries from the front end about which properties are accessible
for the object currently being displayed in the interface.
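A hypothetical sketch of such an OP-graph: adjacencies between a list of object hashes and a list of property names, answering the front-end query, with the graph itself content-addressable so it can be attached as a property of the associated data (the class and method names here are illustrative):

```python
import hashlib
import json

class OPGraph:
    def __init__(self):
        self.objects = []      # list of object hashes
        self.properties = []   # list of property names / references
        self.edges = set()     # adjacencies: (object_index, property_index)

    def associate(self, obj_hash: str, prop_name: str) -> None:
        # record an adjacency between an object and a property
        if obj_hash not in self.objects:
            self.objects.append(obj_hash)
        if prop_name not in self.properties:
            self.properties.append(prop_name)
        self.edges.add((self.objects.index(obj_hash),
                        self.properties.index(prop_name)))

    def properties_of(self, obj_hash: str) -> list:
        # answers the front-end query: which properties does this object have?
        if obj_hash not in self.objects:
            return []
        i = self.objects.index(obj_hash)
        return [self.properties[j] for (k, j) in sorted(self.edges) if k == i]

    def graph_hash(self) -> str:
        # the adjacency structure is itself hashable, so the OP-graph
        # can be attached as a property of the associated data
        canon = json.dumps([self.objects, self.properties, sorted(self.edges)])
        return hashlib.sha256(canon.encode()).hexdigest()
```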

The fundamental TOP-graph (trust object property graph) has trust thresholds associated with its
edges. Something of this kind is necessary, though the details are still being worked out. In any case,
the core trusted definitions will include the interface property definitions (because you must be able
to trust that your interface is telling you the truth about your data and that no code runs without
permission).
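One minimal reading of trust thresholds on edges, sketched under the assumption that trust is a per-source score in [0, 1] and that a property is only surfaced to the interface when the peer's trust in its source meets the edge's threshold (the tuple shape and names are invented for illustration):

```python
def visible_properties(edges, trust):
    # edges: list of (object_hash, property_name, source, threshold)
    # trust: mapping source -> trust level in [0, 1]
    # a property is surfaced only when trust in its source meets the threshold
    return [(obj, prop) for (obj, prop, src, thr) in edges
            if trust.get(src, 0.0) >= thr]
```

Under this reading, untrusted sources simply cannot cause the interface to display a property (or run code), which matches the requirement that nothing runs without permission.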

The immutable objects are associated mutably with the properties, but we try to converge on property
definitions so that we can achieve confluence. The metatheory of progress is clarity. Information
sources which contribute improvements to clarity somewhat reliably are more trustworthy and are
therefore treated with higher priority. For this reason, when convergence happens it can be assumed
to be censorship resistant in the same way as mathematical results. Therefore we can safely (in the
societal sense) rely on anonymity to protect vulnerable parties with interesting perspectives. This
motivates peers to run mixnets, or to use broadcasts based on (zk) proofs-of-membership rather than
proofs-of-identity.
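The "somewhat reliably more trustworthy" prioritization could be as simple as an exponential moving average over observed clarity contributions; this is only one possible rule, sketched here with an invented learning rate:

```python
def update_trust(trust: float, improved_clarity: bool, rate: float = 0.1) -> float:
    # move the source's trust score toward 1 when it contributed a clarity
    # improvement, toward 0 otherwise (exponential moving average)
    target = 1.0 if improved_clarity else 0.0
    return trust + rate * (target - trust)
```

Sources with higher scores would then be consulted (or relayed) first, which is all that "higher priority" needs to mean here.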

Privacy must always be considered in relation to the censorship-resistance properties, because privacy
predicated on a censor's broadcast is equivalent to just having the censor (since the private party has
no recourse when censored except to give up anonymity). Because we aim to achieve confluence we will
depend on common vocabularies, and therefore legitimate improvements to definitions will probably be
censorship resistant enough to motivate anonymity (since peers benefit from the information, they will
share it with those they trust).

Different representations (of the confluent concepts) are unavoidable for the foreseeable future.
Therefore we must rely on logic and probability theory to make sense of these things. However, in
order for these webs of knowledge to remain convincing, the individual must remain sovereign and
have the benefit of the doubt. That way we continuously reorient the viewpoints to cover different
cases of perspective mismatch (the canonical / most trustworthy presentation vs. some human's cognition).

Clearly whatever works will be used and therefore we can assume that those who understand will be
able to explain and somehow keep the knowledge alive. Good luck.