M pages/03.blog/01.getting-started-with-rizin-radare2/item.en.md => pages/03.blog/01.getting-started-with-rizin-radare2/item.en.md +6 -6
@@ 61,17 61,17 @@ First, let's figure out the basics of Radare2. [Download the `first-binary`](htt
**You should not run any binary that you don't trust!**
-[hl=console]
+```console
$ ./first-binary
Hello World!
-[/hl]
+```
That was simple. Let's take a deeper look at the binary itself.
-[hl=console]
+```console
$ file first-binary
first-binary: ELF 32-bit LSB pie executable, Intel 80386, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux.so.2, BuildID[sha1]=977c5cd4724f1e78c23e1edbaa01f8ae149890fd, for GNU/Linux 4.4.0, not stripped
-[/hl]
+```
Let's dissect this. We know that the file is a 32-bit ELF binary for Linux and that it is [not stripped](https://en.wikipedia.org/wiki/Stripped_binary).
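
If you want the same details from within the Radare2 toolchain, `rabin2` can print the binary's metadata (architecture, bits, whether it is stripped); a minimal example, output omitted:

```console
$ rabin2 -I first-binary
```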
@@ 282,10 282,10 @@ This is where knowing your assembly really helps. The code does the following: c
First, we have to open up visual mode with `V` (enter), and switch to the disassembly view with `p`. "Scroll" down to the line we want to patch at `0x1202` and press `A`. This will put us into edit mode, and we can directly type in the Assembly code. We want to change this to `push 0xa`, so let's do that. After saving the changes, we can quit Radare2 with `q` to exit the visual mode and `^D` to exit the app. Running our modified program gives us another result:
-[hl=console]
+```console
$ ./patching
Nice
-[/hl]
+```
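
As an aside, the same patch can be applied without visual mode by opening the binary in write mode and using the assembler directly; a rough sketch, assuming the same `0x1202` offset:

```console
$ r2 -w -q -c "wa push 0xa @ 0x1202" ./patching
```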
# Closing Thoughts
M pages/03.blog/05.setting-up-matrix-dendrite/item.en.md => pages/03.blog/05.setting-up-matrix-dendrite/item.en.md +12 -12
@@ 44,13 44,13 @@ Installing Go is simple on Debian 10: `sudo apt install go-1.15`
[And installing the latest PostgreSQL is easy too](https://wiki.postgresql.org/wiki/Apt#Quickstart).
-[hl=console]
+```console
$ sudo apt install curl ca-certificates gnupg
$ curl https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
$ sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
$ sudo apt update
$ sudo apt install postgresql-13
-[/hl]
+```
You also need your own SSL certificate. I'm assuming that you already have Apache or Nginx running with a combined certificate for your domain and the Matrix subdomain.
@@ 83,7 83,7 @@ Ensure that PostgreSQL is started and running with `sudo systemctl enable --now
Create a systemd service file `/etc/systemd/system/dendrite.service` with the following content:
-[hl=toml]
+```toml
[Unit]
Description=Dendrite Matrix Server
After=network.target
@@ 97,7 97,7 @@ Restart=on-abort
[Install]
WantedBy=multi-user.target
-[/hl]
+```
Don't start it just yet! We still have to configure Apache first.
@@ 109,15 109,15 @@ First, install the requirements: `sudo apt install build-essential python3-dev l
Then, as the Matrix user, create the virtual environment
-[hl=toml]
+```console
$ python3 -m venv sydent
$ source sydent/bin/activate
$ pip3 install matrix-sydent
-[/hl]
+```
Next, tie this into systemd: create a file `/etc/systemd/system/sydent.service` with the following content:
-[hl=toml]
+```toml
[Unit]
Description=Sydent Identity Server
After=network.target
@@ 131,7 131,7 @@ Restart=on-abort
[Install]
WantedBy=multi-user.target
-[/hl]
+```
Enable and start the service with `sudo systemctl enable --now sydent.service`. Check to see if it's running with `sudo systemctl status sydent.service`.
@@ 143,7 143,7 @@ Edit the file `/etc/apache2/ports.conf` and add `Listen 0.0.0.0:8448` under all
Next, add a file `/etc/apache2/sites-available/matrix.conf` with the following content (replace where needed):
-[hl=apache]
+```apache
<VirtualHost *:80>
ServerName matrix.YOURDOMAIN.tld
ServerAdmin webmaster@localhost
@@ 183,7 183,7 @@ Next, add a file `/etc/apache2/sites-available/matrix.conf` with the following c
</Location>
</VirtualHost>
</IfModule>
-[/hl]
+```
This configuration assumes that you have an SSL certificate already. If not, set up a combined certificate for YOURDOMAIN.tld and matrix.YOURDOMAIN.tld.
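
If you do not have one yet, Certbot can issue a certificate that covers both names in one go; a sketch, assuming the Apache plugin is installed and DNS for both names already points at this server:

```console
$ sudo certbot --apache -d YOURDOMAIN.tld -d matrix.YOURDOMAIN.tld
```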
@@ 200,13 200,13 @@ Download the tarball for Element from [the GitHub releases page](https://github.
Now, add this folder to Apache: edit `/etc/apache2/apache2.conf` and add the following near the bottom:
-[hl=apache]
+```apache
<Directory "/var/www/element">
Options -Indexes +FollowSymLinks
AllowOverride All
Require all granted
</Directory>
-[/hl]
+```
Reload Apache with `apachectl -t && systemctl reload apache2`.
M pages/03.blog/06.hashing-data-with-chess/item.en.md => pages/03.blog/06.hashing-data-with-chess/item.en.md +10 -10
@@ 26,7 26,7 @@ As I was feeling lazy, I decided to use a [PyPi package for the chess movements]
I started out with some very simple code to read every byte of the input data, select a move from the list of legal moves, and repeat.
-[hl=python]
+```python
import chess
data = b'Sample data to hash'
board = chess.Board()
@@ 35,7 35,7 @@ for byte in data:
index = byte % len(moves)
board.push(moves[index])
print(board)
-[/hl]
+```
This gave me the following chessboard:
@@ 45,7 45,7 @@ Anyone with a trained eye for chess can tell you that this is a bad position for
Now, let's try hashing something larger, like a file. For this example, I'm going to use the SVG of the board above.
-[hl=python]
+```python
import chess
with open("board0.svg", "rb") as f:
data = f.read()
@@ 56,7 56,7 @@ for byte in data:
index = byte % len(moves)
board.push(moves[index])
print(board)
-[/hl]
+```
This will give us a new board:
@@ 70,7 70,7 @@ Of course, that would lead to just encoding a SHA sum as a base13 (six unique pi
Creating a new game is easy enough. Just check if the game is finished and then append the board to a list.
-[hl=python]
+```python
board = chess.Board()
games = []
for byte in data:
@@ 82,11 82,11 @@ for byte in data:
index = byte % len(moves)
board.push(moves[index])
games.append(board)
-[/hl]
+```
So now we have many boards! All that's left to do is to merge the boards together. We know that there are thirteen possible states for every square, so we can just XOR the values together.
-[hl=python]
+```python
chars = ['.', 'B', 'K', 'N', 'P', 'Q', 'R', 'b', 'k', 'n', 'p', 'q', 'r']
def merge(a, b):
@@ 100,11 100,11 @@ def merge(a, b):
]
)
return "".join(output)
-[/hl]
+```
This will return an ASCII board with no newlines or spaces. Let's put together the code we have, and make it able to read a file from `sys.argv`.
-[hl=python]
+```python
import chess
import sys
@@ 148,7 148,7 @@ if __name__ == "__main__":
data = f.read()
f.close()
print(hash(data))
-[/hl]
+```
Hashing the first board again gives us this as our hash: `qnQpKQN.pnKQBkkrBNpnnpKbpN.NBkKNK.QKPkrp.KQkpbpBqb.QKpkKB.NNKnQR`. Represented as something visible, we have the following board:
M pages/03.blog/10.multi-device-adb-with-internet-over-wifi/item.en.md => pages/03.blog/10.multi-device-adb-with-internet-over-wifi/item.en.md +4 -4
@@ 32,7 32,7 @@ After scouring the internet, we learned that ADB is an example of the classic [c
There is a simple(ish) Batch program that can be used to automate this for Windows.
-[hl=dos]
+```dos
@echo off
echo.
@@ 71,7 71,7 @@ GOTO noparams
:end
echo.
-[/hl]
+```
Other devices, if they are connected to the same network, can connect to the ADB server through a command-line argument (i.e. `adb -H [IP of server] -P [Port] [commands that you want to run]`). The port option does not have to be used if the ADB server is running on the default port of 5037.
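
For example, listing the devices attached to a remote ADB server could look like this (the IP address here is just a placeholder):

```console
$ adb -H 192.168.1.50 devices
```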
@@ 79,7 79,7 @@ The better solution is to set an environment variable instead: `ANDROID_ADB_SERV
However, Android Studio on Windows does not like to play nice with another device acting as an ADB server. The solution is both extremely stupid and smart. We forward all requests to `localhost` port 5037 to the other device with Windows commands. Running this command: `netsh interface portproxy add v4tov4 listenaddress=127.0.0.1 listenport=5037 connectaddress=[IP of ADB server] connectport=5037` will forward all requests from `localhost` port 5037 to the IP address of the ADB server. Turning this off is another Admin command away: `netsh interface portproxy delete v4tov4 listenaddress=127.0.0.1 listenport=5037`. This is only a Windows issue! MacOS and Linux work fine with the environment variable set. A simple Batch program can also be used to automate this.
-[hl=dos]
+```dos
@echo off
echo.
@@ 125,7 125,7 @@ GOTO help
:end
echo.
-[/hl]
+```
The next issue is how to have both an internet connection and a connection to the robot. The solution is simple: the ADB server needs a second WiFi adapter so it can connect to both the robot's WiFi and the internet-enabled WiFi.
M pages/03.blog/11.recovering-postgresql-hard-drive-fail/item.en.md => pages/03.blog/11.recovering-postgresql-hard-drive-fail/item.en.md +6 -6
@@ 31,7 31,7 @@ As some of you can guess, I'm a [big fan of RSS](https://ersei.net/en/blog/rss-w
First things first, I shut down the Postgres server. Then, I made a full backup of the current state. However, some files did not copy over from the old disk to the new disk properly. So, step one was to use [ddrescue](https://www.gnu.org/software/ddrescue/) to get as much of the broken file as possible from the old hard drive.
-[hl=console]
+```console
# ddrescue 278344 ~/278344
GNU ddrescue 1.23
Press Ctrl-C to interrupt
@@ 42,7 42,7 @@ non-tried: 0 B, bad-sector: 32768 B, error rate: 256 B/s
pct rescued: 99.73%, read errors: 67, remaining time: 0s
time since last successful read: 35s
Finished
-[/hl]
+```
Great! I got most of the file!
@@ 60,7 60,7 @@ First, because I'm wonderful at following directions, I reindexed the toast tabl
I knew the toasted values in the rows were the message, or the body of the RSS item. So, I used the script on the website (with a few modifications) to locate broken lines.
-[hl=pgsql]
+```pgsql
DO $f$
declare
curid BIGINT := 0;
@@ 85,7 85,7 @@ FOR badid IN SELECT id FROM fr_me_entry LOOP
end loop;
end;
$f$;
-[/hl]
+```
I got the output of the script.
@@ 132,7 132,7 @@ The website said to unlink the rows by updating the values to no longer use the
Rerunning the script gave the exact same results, but without the line `NOTICE: data for message 1621857039614111 is corrupt`. Wonderful. I modified the script slightly to automatically unlink the information.
-[hl=pgsql]
+```pgsql
DO $f$
declare
curid BIGINT := 0;
@@ 158,7 158,7 @@ FOR badid IN SELECT id FROM fr_me_entry LOOP
end loop;
end;
$f$;
-[/hl]
+```
Let's open FreshRSS.
M pages/03.blog/12.to-ssg-or-not-to-ssg/item.en.md => pages/03.blog/12.to-ssg-or-not-to-ssg/item.en.md +2 -2
@@ 67,11 67,11 @@ If configured right, a CMS is (almost as) fast as and (almost as) light as an SS
The command used to test latencies:
-[hl=bash]
+```bash
for i in {0..10}; do
curl 'https://example.com/test.html' -H 'Accept-Encoding: gzip, deflate, sdch' -H 'Accept-Language: en-US,en;q=0.8,ja;q=0.6' -H 'Upgrade-Insecure-Requests: 1' -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.86 Safari/537.36' -H 'Connection: keep-alive' --compressed -s -o /dev/null -w "%{time_starttransfer}\n"
done
-[/hl]
+```
The results (done once targeted at my Grav CMS and done once targeted at a static HTML file with the same response as the Grav CMS) are as follows:
M pages/03.blog/26.git-bisect/item.en.md => pages/03.blog/26.git-bisect/item.en.md +16 -16
@@ 27,69 27,69 @@ I ran into a bug with my CMS [Grav](https://getgrav.org). RSS feeds just _broke_
Let's start off by getting the Grav [git source](https://github.com/getgrav/grav). Luckily, there are [instructions](https://learn.getgrav.org/17/basics/installation) for deploying from a Git repository. Let's do that.
-[hl=console]
+```console
$ git clone https://github.com/getgrav/grav
-[/hl]
+```
Now, we can start bisecting the repository. I know that the website was working at [1.7.38](https://github.com/getgrav/grav/releases/tag/1.7.38). At the time of writing, there have been 29 commits to the develop branch since then. It shouldn't be too difficult to find the culprit. I also know that the [1.7.39](https://github.com/getgrav/grav/releases/tag/1.7.39) version is buggy. So, let's go ahead and get bisecting!
-[hl=console]
+```console
$ git bisect start
$ git bisect good 95aa57c
status: waiting for bad commit, 1 good commit known
$ git bisect bad 4dd9861
Bisecting: 9 revisions left to test after this (roughly 3 steps)
[4c762c0ac3b36ddc45ab3b3b8fba8c96a3208b0c] add php 8.2 to test (#3662)
-[/hl]
+```
That's pretty good! Let's go! First, we're going to install the dependencies and get the CMS running.
-[hl=console]
+```console
$ composer install --no-dev -o
$ bin/grav install
$ rm -rf user
$ cp -r ../working-grav/user .
-[/hl]
+```
We're using my website's configuration, plugins, pages, and themes so that we can find the issue more easily. Let's check if the commit [`4c762c0ac`](https://github.com/getgrav/grav/commit/4c762c0ac) is good.
-[hl=console]
+```console
$ bin/grav server
$ curl http://localhost:8000/en/blog.atom
<HTML here>
-[/hl]
+```
Hm. Seems like this commit is bad. We should have gotten an [XML-based Atom](https://en.wikipedia.org/wiki/Atom_(web_standard)) feed, but instead we got the blog's HTML page. Let's tell Git that this commit is bad.
-[hl=console]
+```console
$ git bisect bad 4c762c0ac
Bisecting: 4 revisions left to test after this (roughly 2 steps)
[d99c84d9f8d54cdfe6aeb07fbfad1bf776197ade] empty date to avoid confusion
-[/hl]
+```
Let's run Composer again, reinstall, redo the user setup from earlier (just to be sure there's nothing left over), and run that cURL command...
-[hl=console]
+```console
$ curl http://localhost:8000/en/blog.atom
<Atom XML here>
-[/hl]
+```
Good! Let's tell Git that this commit is good.
-[hl=console]
+```console
$ git bisect good d99c84d9f
Bisecting: 2 revisions left to test after this (roughly 1 step)
[ea010f19f0194540aefc9b06a1e1e3fbbb2b2781] Fix for bad rendering of modules
-[/hl]
+```
And one more time... and [`ea010f19f`](https://github.com/getgrav/grav/commit/ea010f19f) is bad. Tell Git, blah blah blah, and now we have something else in the bisect output:
-[hl=console]
+```console
$ git bisect bad ea010f19f
Bisecting: 0 revisions left to test after this (roughly 0 steps)
[1fae4504a2c17a32a91f20c29b6e33d8956f9869] move account info under account section
-[/hl]
+```
So that means that [`1fae4504a`](https://github.com/getgrav/grav/commit/1fae4504a) is the last good commit! Let's see what's right after it (I didn't use any fancy commands, I just looked at the `git log` and saw what was right after this commit). Seems like the bug was introduced in [`ea010f19f`](https://github.com/getgrav/grav/commit/ea010f19f0194540aefc9b06a1e1e3fbbb2b2781). The change was like, one line. I'm not familiar enough with Grav to find out why that commit broke it, but it's enough to help the maintainers out a lot. I've created a GitHub issue [here](https://github.com/getgrav/grav/issues/3689).
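
One small follow-up: once the offending commit is found, `git bisect reset` returns the repository to the branch you started from:

```console
$ git bisect reset
```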
M pages/03.blog/27.rsa-basics/item.en.md => pages/03.blog/27.rsa-basics/item.en.md +64 -64
@@ 33,35 33,35 @@ Cryptography is the foundation of modern online communications. Without it, onli
Modern asymmetric cryptography (also known as public key cryptography) relies on each party generating two linked “keys”: a public key, which can be shared with everyone, and a private key, which must be kept secret. Using the example in the original RSA paper, if Alice wants to send a document to Bob, then Alice will encrypt that document with Bob’s public key, and Bob can decrypt that document with his own private key. It is possible to derive the private key from the public key, but it is computationally difficult to do so. Assuming an RSA cryptosystem with a key length of 4096 bits (the number in binary is 4096 digits long), it is possible to estimate how long it would take to brute-force this key. According to the prime number theorem, there are roughly
-[tex]
+$$
\frac{x}{\ln{x}}
-[/tex]
+$$
-primes up to [texi]x[/texi]. Therefore, with a prime length of exactly 2048 bits (modern implementations vary the prime length by a few bits to make this even more computationally difficult), there are
+primes up to $x$. Therefore, with a prime length of exactly 2048 bits (modern implementations vary the prime length by a few bits to make this even more computationally difficult), there are
-[tex]
+$$
\frac{2^{2049}}{\ln{2^{2049}}} - \frac{2^{2048}}{\ln{2^{2048}}}=2.2743 \times 10^{613}
-[/tex]
+$$
-possible keys. Therefore, the number of distinct prime pairs (p and q, required for RSA key generation) is
+possible primes. Therefore, the number of distinct prime pairs ($p$ and $q$, required for RSA key generation) is
-[tex]
+$$
\frac{(2.2743 \times 10^{613})^{2}}{2} - 2.2743 \times 10^{613} = 2.5863 \times 10^{1226}
-[/tex]
+$$
-pairs. Assuming that every atom in the observable universe (roughly [texi]10^{80}[/texi]) checks one pair per nanosecond ([texi]10^{-9}[/texi] seconds), it would take
+pairs. Assuming that every atom in the observable universe (roughly $10^{80}$) checks one pair per nanosecond ($10^{-9}$ seconds), it would take
-[tex]
+$$
\frac{2.5863 \times 10^{1226}\text{ns}}{10^{80}\text{processors}}=2.5863 \times 10^{1146}\text{ns}
-[/tex]
+$$
-or [texi]8.19555 \times 10^{1129}[/texi] years. For reference, the universe is about [texi]10^{10}[/texi] years old. Although there are algorithms that exist that can factor the public key much faster (such as Shor’s algorithm and quantum computing technology, or the general number field sieve using typical classical computing approaches), it is still not feasible for these key lengths for the foreseeable future.
+or $8.19555 \times 10^{1129}$ years. For reference, the universe is about $10^{10}$ years old. Although algorithms exist that can factor the public key much faster (such as Shor’s algorithm on a quantum computer, or the general number field sieve on classical computers), doing so is still not feasible at these key lengths for the foreseeable future.
-The principle of RSA encryption is that it is computationally difficult to factor numbers. If two very large primes are multiplied together, it is difficult to find the original two numbers from the product. However, it has not yet been proven whether a classical algorithm can factor numbers in polynomial time[^1] exists or does not exist. Mathematician Peter Shor discovered an algorithm in 1994 that can factor numbers in polynomial time, but it only works on a large quantum computer. According to Martin Roetteler, a quantum computing scientist at Microsoft, it would take "[texi]2n+2[/texi] qubits which leads to a quantum circuit implementation that has less than [texi]448n^{3} \log_{2}n[/texi] number of T-gates. For a bit-size of [texi]n=1024[/texi], this would work out to be 2050 logical qubits and [texi]4.81 \times 10^{12}[/texi] T-gates." As of the end of 2021, the largest quantum computer in the world, IBM’s Eagle Quantum computer has 127 logical qubits. If Rose’s Law[^2] is applicable here, it would take
+The principle of RSA encryption is that it is computationally difficult to factor numbers. If two very large primes are multiplied together, it is difficult to find the original two numbers from the product. However, it has not yet been proven whether a classical algorithm that can factor numbers in polynomial time[^1] exists. Mathematician Peter Shor discovered an algorithm in 1994 that can factor numbers in polynomial time, but it only works on a large quantum computer. According to Martin Roetteler, a quantum computing scientist at Microsoft, it would take "$2n+2$ qubits which leads to a quantum circuit implementation that has less than $448n^{3} \log_{2}n$ number of T-gates. For a bit-size of $n=1024$, this would work out to be 2050 logical qubits and $4.81 \times 10^{12}$ T-gates." As of the end of 2021, the largest quantum computer in the world, IBM’s Eagle quantum processor, has 127 qubits. If Rose’s Law[^2] is applicable here, it would take
-[tex]
+$$
5(\log_{2}2050 - \log_{2}127) = 20\;\text{years}
-[/tex]
+$$
to reach the number of qubits needed to break a weak 1,024 bit RSA key. Hopefully, the internet has moved to a stronger form of encryption by then.
@@ 79,23 79,23 @@ The easiest way to grasp RSA is by an analogy of paint: it is easy to mix togeth
This example with paint is not trivial to convert into something computers can use. However, Ron Rivest, Adi Shamir, and Leonard Adleman published how to do so in 1978, changing communication forever[^3]. The process of generating a shared secret number is similar to the paint experiment, but still not trivial to discover:
-1. Alice and Bob each generate two large prime numbers [texi]p[/texi] and [texi]q[/texi]. [texi]p[/texi] and [texi]q[/texi] must differ by a few orders of magnitude to make factorization by Fermat’s factorization method[^4] difficult.
-2. Alice and Bob each generate and share [texi]n = pq[/texi]. This is the public key.
-3. Alice and Bob each calculate Carmichael’s totient function of [texi]n[/texi] as [texi]\lambda (n)[/texi].
-4. Alice and Bob both agree on an integer e such that [texi]e < \lambda (n)[/texi] and [texi]e[/texi] is coprime to [texi]\lambda (n)[/texi].
-5. Alice and Bob each calculate [texi]d[/texi] as being the modular multiplicative inverse of [texi]e[/texi] modulo [texi]\lambda (n)[/texi].
+1. Alice and Bob each generate two large prime numbers $p$ and $q$. $p$ and $q$ must differ by a few orders of magnitude to make factorization by Fermat’s factorization method[^4] difficult.
+2. Alice and Bob each generate and share $n = pq$. This is the public key.
+3. Alice and Bob each calculate Carmichael’s totient function of $n$ as $\lambda (n)$.
+4. Alice and Bob both agree on an integer $e$ such that $e < \lambda (n)$ and $e$ is coprime to $\lambda (n)$.
+5. Alice and Bob each calculate $d$ as being the modular multiplicative inverse of $e$ modulo $\lambda (n)$.
-Once these values are generated and [texi]n[/texi] and [texi]e[/texi] are shared, Alice and Bob can encrypt messages to each other that can only be decrypted by the other person. If Alice wishes to send a message to Bob, she would take her message and turn it into a number. This is possible to do so by converting the message into binary then into an integer. Then, Alice will split the message up so that each chunk is less than [texi]n[/texi]. Finally, Alice will encrypt the message [texi]m[/texi] with Bob’s public key [texi]n[/texi] and the shared exponent [texi]e[/texi]:
+Once these values are generated and $n$ and $e$ are shared, Alice and Bob can encrypt messages to each other that can only be decrypted by the other person. If Alice wishes to send a message to Bob, she would take her message and turn it into a number. This can be done by converting the message into binary and then into an integer. Then, Alice will split the message up so that each chunk is less than $n$. Finally, Alice will encrypt the message $m$ with Bob’s public key $n$ and the shared exponent $e$:
-[tex]
+$$
c \equiv m^{e} \bmod n
-[/tex]
+$$
Alice can transmit the ciphertext c to Bob, and Bob can obtain the original message m by solving for m:
-[tex]
+$$
m \equiv c^{d} \bmod n
-[/tex]
+$$
With the information transmitted, Eve—who is only listening to the communication between Alice and Bob—can not easily find p and q, and thus can not decrypt the message.
@@ 103,93 103,93 @@ A practical example would be Bob attempting to send Alice a secret number: 1234.
| Value | Alice | Bob |
| ------------------------ | --------------------------------------------------- | ------------------------------------------------- |
-| [texi]p[/texi] | [texi]74177[/texi] | [texi]13613[/texi] |
-| [texi]q[/texi] | [texi]99817[/texi] | [texi]99817[/texi] |
-| [texi]n[/texi] | [texi]74177 \times 99817 = 7404125609[/texi] | [texi]13613 \times 66293 = 902446609[/texi] |
-| [texi]\lambda (n)[/texi] | [texi]7403951616[/texi] | [texi]902366704[/texi] |
-| [texi]e[/texi] | [texi]127[/texi] | [texi]127[/texi] |
-| [texi]d[/texi] | [texi]127^{-1} \bmod 7403951616 = 2623447423[/texi] | [texi]127^{-1} \bmod 902366704 = 412104479[/texi] |
+| $p$                      | $74177$                                             | $13613$                                           |
+| $q$                      | $99817$                                             | $66293$                                           |
+| $n$                      | $74177 \times 99817 = 7404125609$                   | $13613 \times 66293 = 902446609$                  |
+| $\lambda (n)$            | $7403951616$                                        | $902366704$                                       |
+| $e$                      | $127$                                               | $127$                                             |
+| $d$                      | $127^{-1} \bmod 7403951616 = 2623447423$            | $127^{-1} \bmod 902366704 = 412104479$            |
Now, Bob would encrypt 1234 with Alice’s public key and e:
-[tex]
+$$
1234^{127} \bmod 7404125609 = 6062692598
-[/tex]
+$$
-Bob’s secret message is [texi]c=6062692598[/texi]. Alice would receive that message and decrypt it like so:
+Bob’s secret message is $c=6062692598$. Alice would receive that message and decrypt it like so:
-[tex]
+$$
6062692598^{2623447423} \bmod 7404125609=1234
-[/tex]
+$$
In real life, p and q would be much larger, e would typically be 65537, and the message would first be compressed to increase entropy and then encrypted with a shared password that was agreed upon using RSA (since it is faster to share a secret password with RSA and then encrypt the message using that password than to encrypt the entire message with RSA). However, this demonstration shows the encryption and decryption process using RSA with about 40 bits of strength.
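
If you want to double-check the arithmetic above yourself, here is a short Python sketch using the toy numbers from the table (an illustration only, not a safe RSA implementation):

```python
# Toy RSA with the numbers from the table above -- not secure!
p, q, e = 74177, 99817, 127

n = p * q                 # 7404125609, Alice's public modulus
phi = (p - 1) * (q - 1)   # 7403951616, used here in place of lcm(p - 1, q - 1)
d = pow(e, -1, phi)       # 2623447423, Alice's private exponent (Python 3.8+)

m = 1234                  # Bob's secret number
c = pow(m, e, n)          # encrypt: c = m^e mod n
assert pow(c, d, n) == m  # decrypt: c^d mod n recovers 1234
```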
RSA has also been proved in the original paper to be correct with Fermat’s little theorem[^5]. Proving
-[tex]
+$$
(m^{e})^{d} \equiv m \bmod n
-[/tex]
+$$
-for any integer [texi]m[/texi] where [texi]n[/texi] is the product of two primes [texi]p[/texi] and [texi]q[/texi], and [texi]e[/texi] and [texi]d[/texi] are positive integers that are the modular multiplicative inverse of Carmichael’s totient function is trivial.
+for any integer $m$ where $n$ is the product of two primes $p$ and $q$, and $e$ and $d$ are positive integers that are modular multiplicative inverses of each other modulo Carmichael’s totient function, is trivial.
Since Carmichael’s totient function is defined as
-[tex]
+$$
\lambda(pq) = \text{lcm}(p-1, q-1)
-[/tex]
+$$
-then it is divisible by both [texi]p-1[/texi] and [texi]q-1[/texi].
+then it is divisible by both $p-1$ and $q-1$.
-Since [texi]\lambda ( pq )[/texi] is divisible by both [texi]p-1[/texi] and [texi]q-1[/texi], then
+Since $\lambda (pq)$ is divisible by both $p-1$ and $q-1$, then
-[tex]
+$$
ed - 1 = h(p-1)=k(q-1)
-[/tex]
+$$
where h and k are two positive integers. This statement is true for any e and d that satisfy
-[tex]
+$$
ed \equiv 1 \bmod ((p-1)(q-1))
-[/tex]
+$$
-because [texi](p-1)(q-1)[/texi] is divisible by [texi]\lambda (pq)[/texi], and therefore also by [texi]p-1[/texi] and [texi]q-1[/texi].
+because $(p-1)(q-1)$ is divisible by $\lambda (pq)$, and therefore also by $p-1$ and $q-1$.
-By the properties of the modulus operator, checking if [texi]a \equiv b \bmod pq[/texi] is equivalent to checking if [texi]a \equiv b \bmod p[/texi] and [texi]a \equiv b \bmod q[/texi] separately.
+By the properties of the modulus operator, checking if $a \equiv b \bmod pq$ is equivalent to checking if $a \equiv b \bmod p$ and $a \equiv b \bmod q$ separately.
Given that $m$ is not zero, we want to show that
-[tex]
+$$
m^{ed} \equiv m \bmod p
-[/tex]
+$$
To see this, note that
-[tex]
+$$
m^{ed} = m^{ed-1} \times m = m^{h(p-1)} \times m = (m^{p-1})^{h} \times m
-[/tex]
+$$
By Fermat's Little Theorem,
-[tex]
+$$
(m^{p-1})^{h} \times m \equiv 1^{h} \times m \equiv m \bmod p
-[/tex]
+$$
-If [texi]m[/texi] is zero, then a special case occurs. If [texi]m[/texi] is congruent to [texi]0 \bmod p[/texi], then [texi]m[/texi] is a multiple of [texi]p[/texi].
+If $m$ is zero, then a special case occurs. If $m$ is congruent to $0 \bmod p$, then $m$ is a multiple of $p$.
-Therefore, [texi]m^{ed}[/texi] is also a multiple of [texi]p[/texi]. The logical conclusion for that special case of [texi]m=0[/texi] is the following:
+Therefore, $m^{ed}$ is also a multiple of $p$. The logical conclusion for that special case of $m=0$ is the following:
-[tex]
+$$
m^{ed} \equiv 0 \equiv m
-[/tex]
+$$
-Proving that [texi]m^{ed}[/texi] proceeds in the same way as proving that [texi]m^{ed} \equiv m \bmod p[/texi]. Therefore, for any positive integer [texi]m[/texi], and positive integers [texi]e[/texi], [texi]d[/texi] such that [texi]ed \equiv 1 \bmod \lambda (pq)[/texi], then [texi]m^{ed} \equiv m \bmod pq[/texi].
+Proving that $m^{ed} \equiv m \bmod q$ proceeds in the same way as proving that $m^{ed} \equiv m \bmod p$. Therefore, for any positive integer $m$, and positive integers $e$, $d$ such that $ed \equiv 1 \bmod \lambda (pq)$, then $m^{ed} \equiv m \bmod pq$.
Understanding RSA is essential for understanding how the internet operates securely. The advent of RSA has revolutionized modern communication. Because it is computationally difficult to factor a large semiprime[^6] but it is very easy to multiply large numbers together, it is difficult to break an RSA key but it is easy to generate one. Because it is near impossible to break a long RSA key with the equipment currently available, malicious actors must move to alternate ways to break encryption. From the US Government’s backdoor (and frontdoor) efforts against cryptography, such as lobbying to reduce key sizes, banning the export of cryptography software, arresting software developers, and so on and so forth, to the movement away from encrypted communications platforms toward centralized ones for the sake of convenience (such as Discord and Slack), it seems that RSA—and all other forms of encryption—are only as strong as those who implement them.
-[^1]: Computational complexity describes the amount of computer time to run an algorithm. Polynomial time is denoted in Big-O notation: [texi]O(b^k)[/texi], where k is some integer. Big-O notation describes the behavior of the function as it approaches infinity. As the size of the input of the algorithm approaches infinity, the O-notation approaches some function: [texi]f(x) = O(g(x))[/texi] as [texi]x[/texi] approaches infinity, where [texi]f[/texi] is a real or complex valued function and [texi]g[/texi] is a real-valued function. Functions can scale linearly with the input ([texi]O(n)[/texi]), which means that as the input size increases, then the time also increases linearly. The “best” O-notation could be [texi]O(1)[/texi], where regardless of input, the algorithm takes the same time to perform computations. Likewise, [texi]O(n!)[/texi] would be disastrously slow as the input size increases.
+[^1]: Computational complexity describes the amount of computer time it takes to run an algorithm. Polynomial time is denoted in Big-O notation as $O(n^k)$, where $k$ is some integer. Big-O notation describes the behavior of the function as it approaches infinity. As the size of the input of the algorithm approaches infinity, the O-notation approaches some function: $f(x) = O(g(x))$ as $x$ approaches infinity, where $f$ is a real or complex valued function and $g$ is a real-valued function. Functions can scale linearly with the input ($O(n)$), which means that as the input size increases, the time also increases linearly. The “best” O-notation could be $O(1)$, where regardless of input, the algorithm takes the same time to perform computations. Likewise, $O(n!)$ would be disastrously slow as the input size increases.
[^2]: Considered the "Moore’s Law of Quantum Computers" and represents the growth of quantum computers over time.
[^3]: British mathematician Clifford Cocks discovered the same cryptosystem earlier in 1973 while working for the GCHQ (the British signal intelligence agency). It was kept a secret until it was declassified in 1997.
-[^4]: Fermat’s factorization method works on the basis that odd integers can be represented in the format [texi]n = a^2 - b^2 = (a+b)(a-b)[/texi], where [texi](a+b)[/texi] and [texi](a-b)[/texi] are factors of the number [texi]n[/texi]. Prime numbers are odd, so Fermat’s method is applicable here. Taking the square root of [texi]n[/texi] can determine [texi]p[/texi] and [texi]q[/texi] easily if [texi]p[/texi] and [texi]q[/texi] are close together.
-[^5]: If [texi]p[/texi] is a prime number, then for any integer [texi]a[/texi], the number [texi]a^p - a[/texi] is a multiple of [texi]p[/texi]. If [texi]a[/texi] is not divisible by [texi]p[/texi], Fermat’s little theorem states that [texi]ap-1 \equiv 1 \bmod p[/texi].
+[^4]: Fermat’s factorization method works on the basis that odd integers can be represented in the format $n = a^2 - b^2 = (a+b)(a-b)$, where $(a+b)$ and $(a-b)$ are factors of the number $n$. The product of two odd primes is odd, so Fermat’s method is applicable here. Taking the square root of $n$ can determine $p$ and $q$ easily if $p$ and $q$ are close together.
+[^5]: If $p$ is a prime number, then for any integer $a$, the number $a^p - a$ is a multiple of $p$. If $a$ is not divisible by $p$, Fermat’s little theorem states that $a^{p-1} \equiv 1 \bmod p$.
[^6]: Two prime numbers multiplied together
## Sources
@@ 206,7 206,7 @@ Editor, Csrc Content. “Man-in-the-Middle Attack (MitM) - Glossary.” Nist.Gov
Graeme. “Understanding Public Key Cryptography with Paint – modulo Errors.” Straylight.Co.Uk, https://maths.straylight.co.uk/archives/108. Accessed 11 Feb. 2022.
-“How Many Logical Qubits Are Needed to Run Shor’s Algorithm Efficiently on Large Integers ([texi]n > 2^{1024}[/texi])?” Quantum Computing Stack Exchange, https://quantumcomputing.stackexchange.com/questions/5048/how-many-logical-qubits-are-needed-to-run-shors-algorithm-efficiently-on-large/5056. Accessed 12 Jan. 2022.
+“How Many Logical Qubits Are Needed to Run Shor’s Algorithm Efficiently on Large Integers ($n > 2^{1024}$)?” Quantum Computing Stack Exchange, https://quantumcomputing.stackexchange.com/questions/5048/how-many-logical-qubits-are-needed-to-run-shors-algorithm-efficiently-on-large/5056. Accessed 12 Jan. 2022.
“IBM Unveils Breakthrough 127-Qubit Quantum Processor.” IBM Newsroom, https://newsroom.ibm.com/2021-11-16-IBM-Unveils-Breakthrough-127-Qubit-Quantum-Processor. Accessed 12 Jan. 2022.
M pages/03.blog/28.updates-2023-03/item.en.md => pages/03.blog/28.updates-2023-03/item.en.md +4 -4
@@ 47,7 47,7 @@ The TL;DR is that I have two servers. One in the cloud that is publicly availabl
This is the `wg0.conf` configuration on `ersei.net`:
-[hl=toml]
+```toml
[Interface]
PrivateKey = [REDACTED]
Address = 192.168.77.2/32
@@ 61,11 61,11 @@ PublicKey = [REDACTED]
Endpoint = [PROXY]
AllowedIPs = 0.0.0.0/0
PersistentKeepalive = 30
-[/hl]
+```
and here's the `wg0.conf` on `proxy`:
-[hl=toml]
+```toml
[Interface]
Address = 192.168.77.1/24
ListenPort = [REDACTED]
@@ 81,7 81,7 @@ PostDown = iptables -t nat -D PREROUTING -p tcp --dport 443 -j DNAT --to-destina
PublicKey = [ERSEI.NET]
AllowedIPs = 192.168.77.2/32
PersistentKeepalive = 30
-[/hl]
+```
Works perfectly. Almost like magic. I still use Rathole for services that don't need to know the connecting IP address (such as a private Minecraft server or three), but this is a much better solution for HTTP(S).
M pages/03.blog/30.diy-programming-language/item.en.md => pages/03.blog/30.diy-programming-language/item.en.md +40 -40
@@ 166,15 166,15 @@ And then [email me](/contact-me) to show me what you've made ❤️
*None of this actually made it to the end project. Go down to the [Designing the New Language](#designing-the-new-languag) section to see something more similar to the end result.*
-[hl=console]
+```console
$ cargo new cane-lang
-[/hl]
+```
Let's also add in the lexer dependency into our `Cargo.toml`.
-[hl=console]
+```console
$ cargo add pest
-[/hl]
+```
Now let's waste precious time for Cargo to update the Crates index.
@@ 190,12 190,12 @@ Let's do all of that.
I am also now realizing that my Rust skills are… rusty. I forget how to do everything. Maybe Rust wasn't the best language to pick, but here I am.
-[hl=console]
+```console
$ rm -rf cane-lang
$ cargo new cane-lang --lib
$ cd cane-lang
$ mkdir src/bin
-[/hl]
+```
Now that we have some structure in our project, I can get started on actually doing work! Let's figure out what our programming language will look like (spoiler alert: it's gonna look like C). Yes, there will be semicolons and you can't stop me.
@@ 493,7 493,7 @@ After spending a few minutes trying to figure out how to do Rust development on
In `src/stdtypes.rs`:
-[hl=rust]
+```rust
#[derive(Clone)]
pub struct List(pub LinkedList<Data>);
@@ 514,13 514,13 @@ pub struct Data {
val_number: Number,
val_list: List,
}
-[/hl]
+```
Next, we can implement the basic `Action`s: `add` and `print`. This is the minimum to implement a basic "Hello world" program. We will worry about the rest later.
A snippet of the `add` action:
-[hl=rust]
+```rust
impl Data {
pub fn add(&mut self) {
if self.get_kind() == Types::LIST {
@@ 566,7 566,7 @@ impl std::ops::Add<&mut Data> for Data {
}
}
}
-[/hl]
+```
The remainder of the code is left as an exercise for the reader.
@@ 587,7 587,7 @@ For this interpreter, we will need to keep track of a few things:
A struct that can keep track of this information could look something like this:
-[hl=rust]
+```rust
struct Interpreter<R: Read> {
reader: BufReader<R>,
call_stack: Vec<SeekFrom>,
@@ 606,7 606,7 @@ impl<R: Read> Interpreter<R> {
}
}
}
-[/hl]
+```
Now to make the actual interpreter… this might take a while.
@@ 646,7 646,7 @@ That does bring into question how functions will be handled. We can worry about
The big question is how am I going to handle recursion? After distracting myself and reading a random article on RSS, I came to the conclusion that the interpreter should hold a reference to a data object in the interpreter state. This should be done through the call stack. In addition to the previous position, it will also contain the location of the previous data object as a reference. The interpreter will also keep track of a "working data" state. Easy enough, right? Let's get started.
-[hl=rust]
+```rust
pub fn execute(&mut self) -> Result<(), io::Error> {
let mut buf: [u8; 1] = [0];
loop {
@@ 758,7 758,7 @@ pub fn execute(&mut self) -> Result<(), io::Error> {
_ => return Err(Error::new(ErrorKind::UnexpectedEof, "Unexpected EOF!".to_string())),
}
}
-[/hl]
+```
This (and the supporting data structures) took me like another eight hours to put together, mostly since Rust got in the way (but good thing it did otherwise hard-to-find errors would have popped up).
@@ 779,7 779,7 @@ The *relief*.
We are at 18.5 hours. Time to implement the other basic parts. Next up is numbers. It's not too hard, I just have to patch the token parsing to check for numbers.
-[hl=rust]
+```rust
let mut reset = true;
match self.call_stack.last_mut().unwrap().get_token().as_str() {
"add" => self.call_stack.last_mut().unwrap().get_data().add(), // Add data
@@ 802,7 802,7 @@ match self.call_stack.last_mut().unwrap().get_token().as_str() {
}
},
}
-[/hl]
+```
That took less than ten minutes. I'm making good time here! What's next? Unwraps!
@@ 828,7 828,7 @@ First `1` and `2` will be added, then `3 4` will be added, then the contents of
which will act as expected.
-[hl=rust]
+```rust
if buf[0] == Symbols::Unwrap as u8 {
let last_val = self
.call_stack
@@ 859,11 859,11 @@ if buf[0] == Symbols::Unwrap as u8 {
}
}
}
-[/hl]
+```
As I am writing code demos for the language, I am running into issues with newlines. There are no string escapes! I can't write in `\n`, I have to put in a newline. Let's fix that.
-[hl=rust]
+```rust
Some(State::StringEscape) => {
self.call_stack.pop();
match buf[0] {
@@ 885,7 885,7 @@ Some(State::StringEscape) => {
}
};
}
-[/hl]
+```
It should be pretty easy to add in more escape characters later on (like if I want to print a single quote). Another forty-five minutes have passed. I am now at 21 hours. I have three hours left to make this language useful.
@@ 909,7 909,7 @@ Here's the plan. When `defact` is hit, then the last item in the data is popped
To load the data into the hashmap (this is alongside `print`):
-[hl=rust]
+```rust
"defvar" => {
let key = self
.call_stack
@@ 941,11 941,11 @@ To load the data into the hashmap (this is alongside `print`):
.insert(key.unwrap().to_string(), var.unwrap());
println!("{:?}", self.variables);
}
-[/hl]
+```
and to retrieve it (this is after the token to number check):
-[hl=rust]
+```rust
match self
.variables
.get(self.call_stack.last_mut().unwrap().get_token())
@@ 969,7 969,7 @@ match self
}
None => (),
}
-[/hl]
+```
That was just one more hour spent. Two hours remain.
@@ 989,7 989,7 @@ I think I have enough time. Let's do it. I've already implemented the code to sa
Maybe the best (worst) way to do this is to start another interpreter with a shared variable state and to use a `Cursor` to read the function data.
-[hl=rust]
+```rust
Types::ACTION => {
let mut c = Cursor::new(Vec::new());
@@ 1022,7 1022,7 @@ Types::ACTION => {
Err(e) => return Err(e),
}
}
-[/hl]
+```
It has now been two hours. I have hit the deadline.
@@ 1036,7 1036,7 @@ Implementing `eq`, `lt`, `gt`, and so on will be pretty easy. Each of these will
Furthermore, these comparisons will be made strictly. Casting will be implemented pretty easily:
-[hl=rust]
+```rust
"tolist" => {
self.call_stack.last_mut().unwrap().get_data().as_list();
let data = self.call_stack.last_mut().unwrap().get_data();
@@ 1056,7 1056,7 @@ Furthermore, these comparisons will be made strictly. Casting will be implemente
changeme.as_number();
data.get_list().push(changeme);
}
-[/hl]
+```
We are popping the last value in the list, converting it, then pushing it back.
@@ 1066,7 1066,7 @@ Now to finish off what we started: conditionals.
Here's how it's going to work: if the conditional is found, pop the last two items (for the true and false conditions), then check the data. Based on the check, execute the true or false condition.
-[hl=rust]
+```rust
"eq" | "lt" | "gt" | "lte" | "gte" => {
let token = self.call_stack.last_mut().unwrap().get_token().clone();
@@ 1112,7 1112,7 @@ Here's how it's going to work: if the conditional is found, pop the last two ite
Err(e) => return Err(e),
}
}
-[/hl]
+```
I have also realized that the syntax for conditionals is ugly. If you want to return a string, then it looks so gross:
@@ 1166,7 1166,7 @@ Maybe it was a mistake to have the call stack and the parse stack merged...
Anyway, two hours later I have this:
-[hl=rust]
+```rust
Some(State::LazyEval) => {
if buf[0] == Symbols::StringStart as u8 {
self.call_stack
@@ 1225,7 1225,7 @@ Some(State::LazyEval) => {
.push_str(from_utf8(&buf).unwrap());
}
}
-[/hl]
+```
Of course, the rest of the code was tweaked to get this working too.
@@ 1249,7 1249,7 @@ This should get rid of the recursion issue with loops!
For example, `flatten` is implemented like so:
-[hl=rust]
+```rust
pub fn make_flat(&mut self) {
let mut flattened: LinkedList<Data> = LinkedList::new();
if self.kind == Types::LIST {
@@ 1264,7 1264,7 @@ pub fn make_flat(&mut self) {
self.set_list(List::new(flattened));
}
}
-[/hl]
+```
mmmm recursion. Tasty. Can you tell I'm losing my sanity over this project?
@@ 1288,7 1288,7 @@ It has been another hour and a half. I should probably add the last two features
Join was easy:
-[hl=rust]
+```rust
pub fn join_string(&mut self, joiner: &str) {
if self.kind != Types::LIST {
return;
@@ 1306,11 1306,11 @@ pub fn join_string(&mut self, joiner: &str) {
self.set_string(new_string);
self.kind = Types::STRING;
}
-[/hl]
+```
along with the interpreter side of things:
-[hl=rust]
+```rust
"join" => {
let joiner = self
.call_stack
@@ 1334,11 1334,11 @@ along with the interpreter side of things:
}
data.join_string(joiner.unwrap().get_string());
}
-[/hl]
+```
Splitting was much the same. This post is already very code-heavy, but here goes anyway:
-[hl=rust]
+```rust
"split" => {
let splitter = self
.call_stack
@@ 1393,7 1393,7 @@ Splitting was much the same. This post is already very code-heavy, but here goes
.push(data);
}
}
-[/hl]
+```
I have now hit 24 hours for the second time. I should be done now.
M pages/03.blog/32.its-nixin-time/item.en.md => pages/03.blog/32.its-nixin-time/item.en.md +28 -28
@@ 53,16 53,16 @@ I follow one of the lead Fish devs on Mastodon and he talks about the Fish shell
I just had to put the following code in my NixOS configuration:
-[hl=nix]
+```nix
programs.fish.enable = true;
users.defaultUserShell = pkgs.fish;
-[/hl]
+```
I'm pretty happy with the defaults. Let's load in my aliases from Zsh:
In `~/.config/fish/conf.d/aliases.fish`:
-[hl=bash]
+```bash
alias ls="exa --color=auto -a -g"
alias cp="cp -v"
alias sl="ls"
@@ 82,7 82,7 @@ alias nvidia-off="sudo nvidia-off"
alias feh="echo imv"
alias paru="sudo sh -c 'nix-channel --update && nixos-rebuild switch'"
-[/hl]
+```
[This](https://git.sr.ht/~fd/nix-configs/tree/f4b1270b71e6ffd72c21ea4d78c6eb13ab13cddd) is what my NixOS configuration looked like before I decided to make everything harder for myself.
@@ 103,7 103,7 @@ There were a couple of options for installing home-manager: standalone, or as a
I started off with using home-manager to configure the Fish shell (I just copy-pasted from the wiki):
-[hl=nix]
+```nix
programs.fish = {
enable = true;
interactiveShellInit = ''
@@ 126,7 126,7 @@ programs.fish = {
# }
];
};
-[/hl]
+```
…and it worked? Kinda. I still had to install `fish` globally (in my NixOS configuration along with the home-manager configuration) so I could change my shell properly (foreshadowing).
@@ 134,11 134,11 @@ programs.fish = {
I wanted the [done](https://github.com/franciscolourenco/done) plugin to work. It's pretty simple: it'll notify you when your terminal is unfocused and a long-running command finishes. I find it pretty cool. I set it up to use `notify-send`, and it worked just fine!
-[hl=nix]
+```nix
programs.fish.shellInit = ''
set __done_notification_command '${pkgs.libnotify}/bin/notify-send \$title \$message'
'';
-[/hl]
+```
Forgetting to put the `/bin/` bit of the path really messed me up, and it kinda drove me insane trying to figure out why it wasn't working. Building the home-manager generation did not throw an error, and neither did Fish.
@@ 150,7 150,7 @@ So, I had to do a little bit of trickery to get the variables to load in right.
Home-manager keeps the Fish plugins in `~/.config/fish/conf.d` named `plugin-pluginname.fish`. Naturally, we just need to save a file that'll load variables before those plugins are loaded. Home-manager can do that for you, with [`xdg.configFile`](https://nix-community.github.io/home-manager/options.html#opt-xdg.configFile)!
-[hl=nix]
+```nix
xdg.configFile = {
"fish/conf.d/00-home-manager-vars.fish" = {
enable = true;
@@ 161,7 161,7 @@ xdg.configFile = {
[...]
'';
}
-[/hl]
+```
Because `00` comes before `pl`, the text in that file will be loaded first! Of course, I'm sure there's a better way to do this, but I'm not familiar enough with Nix to figure it out. Yet.
@@ 173,9 173,9 @@ Day in, day out, I pieced together my configuration for Sway, Waybar, and Neovim
And then I ran the incantation.
-[hl=bash]
+```bash
home-manager switch
-[/hl]
+```
I did not realize that it would reload Sway. That was unexpected. I thought everything that I had open would just keep going until I reloaded it to use the new config. But home-manager did that for me.
@@ 232,10 232,10 @@ Remember the `done` plugin we installed for Fish [earlier](#a-little-fishy-diver
But I had an idea. In the course of configuring my terminal, [Foot](https://codeberg.org/dnkl/foot), I saw something in the configuration files:
-[hl=toml]
+```toml
[main]
notify=notify-send -a ${app-id} -i ${app-id} ${title} ${body}
-[/hl]
+```
That's weird. I've never gotten a notification from my terminal. Opening the [README](https://codeberg.org/dnkl/foot/src/branch/master/README.md) showed me that the terminal supports something called OSC777.
@@ 245,10 245,10 @@ Implementing OSC777 to send a notification from Fish shouldn't be too hard, righ
I'll spare you that pain.
-[hl=bash]
+```bash
set __done_notification_command 'echo -e "\e]777;notify;$title;$message\e\\ "'
set __done_allow_nongraphical 1
-[/hl]
+```
Yes, that space at the end before the first double quote is important. Yes, the order of the quotes matters. Yes, I learned the hard way that this has to be in the XDG-managed Fish configuration. No, I don't even know what backslashes are anymore.
@@ 260,7 260,7 @@ Absolutely.
Oh yeah, the (relevant) terminal configuration:
-[hl=nix]
+```nix
programs.foot = {
enable = true;
settings = {
@@ 270,7 270,7 @@ programs.foot = {
};
};
};
-[/hl]
+```
## Home-Manager Where No Home Has Been Managed Before
@@ 284,28 284,28 @@ So, I went on down to the [NixOS wiki page](https://nixos.wiki/wiki/Nix_Installa
The first item on the list was [nix-user-chroot](https://github.com/nix-community/nix-user-chroot). The university's server supported [user namespaces](https://www.man7.org/linux/man-pages/man7/user_namespaces.7.html), so I went ahead and ran the command on the wiki:
-[hl=console]
+```console
mkdir -m 0755 ~/.nix
./nix-user-chroot ~/.nix bash -c 'curl -L https://nixos.org/nix/install | sh'
-[/hl]
+```
And it worked! I now had Nix! Now to have it load in the chroot by default:
In `.profile`:
-[hl=bash]
+```bash
if [[ ! -e ~/.nix-profile ]]; then exec ~/nix-user-chroot ~/.nix bash; fi
-[/hl]
+```
This checks if the symlink is not working (which means we are not in the chroot yet), then loads the chroot.
And to finish it off, we add this to the `.bashrc` so we can get proper Nix paths:
-[hl=bash]
+```bash
if [ -f ~/.nix-profile/etc/profile.d/nix.sh ]; then
source ~/.nix-profile/etc/profile.d/nix.sh
fi
-[/hl]
+```
Now to move my dotfiles over! Let's first install home-manager the normal way (standalone) like we did earlier (and like how it's [documented](https://nix-community.github.io/home-manager/index.html#sec-install-standalone)).
@@ 370,7 370,7 @@ And then there was a glimmer of recognition in my eye. As I was reading through
In response to the [fractureiser malware](https://prismlauncher.org/news/cf-compromised-alert), I wrote [a post](/blog/isolate-minecraft) describing how I can isolate Minecraft. I used `bwrap`. This time, instead of isolating the program, I wanted to isolate as little as possible while still messing with the program-facing filesystem.
-[hl=bash]
+```bash
#!/usr/bin/env bash
if [ -z ${NIXDIR+x} ]; then
@@ 431,11 431,11 @@ devbind \
/var
exec bwrap "${args[@]}" "$@"
-[/hl]
+```
And to use the Nix environment by default, I put the following into my `.profile`:
-[hl=bash]
+```bash
if [ -e $HOME/.nix-profile/etc/profile.d/nix.sh ]; then
source $HOME/.nix-profile/etc/profile.d/nix.sh
else
@@ 443,7 443,7 @@ else
exec env NIXDIR=$HOME/scratch/nix $HOME/nix-configs/bwrap.sh $HOME/.nix-profile/bin/fish
fi
fi
-[/hl]
+```
The good: Everything works. Yes really.
M pages/03.blog/34.nix-all-the-way-down/item.en.md => pages/03.blog/34.nix-all-the-way-down/item.en.md +54 -54
@@ 36,7 36,7 @@ You have likely heard of [Wine](https://www.winehq.org), a project that aims to
What if you took that concept, but with macOS instead of Windows? Enter [Darling](https://www.darlinghq.org), a "translation layer that lets you run macOS software on Linux".
-[hl=console]
+```console
$ sudo darling shell
Bootstrapping the container with launchd...
@@ 45,11 45,11 @@ To update your account to use zsh, please run `chsh -s /bin/zsh`.
For more details, please visit https://support.apple.com/kb/HT208050.
Darling# uname
Darwin
-[/hl]
+```
Of course, in true Ersei fashion, I have to install Nix on it.
-[hl=console]
+```console
Darling# sh <(curl -L https://nixos.org/nix/install)
...
/unpack/nix-2.16.1-x86_64-darwin/install-darwin-multi-user.sh: line 221: xmllint: command not found
@@ 71,7 71,7 @@ https://github.com/NixOS/nix/issues/new?labels=installer&template=installer.md
Or get in touch with the community: https://nixos.org/community
Darling#
-[/hl]
+```
Honestly, I don't know what I expected.
@@ 81,7 81,7 @@ The Nix installer fails because the directory services commands (for modifying u
They all seem like simple enough fixes, though. Now that the installer script has been sufficiently mutilated to allow installing in single-user mode as root, let's go ahead and install Nix!
-[hl=console]
+```console
Darling# mkdir /etc/nix
Darling# echo "build-users-group =" > /etc/nix/nix.conf
Darling# ./install --no-daemon
@@ 92,13 92,13 @@ installing 'nix-2.13.4'
building '/nix/store/07rs9myg8yh532ii196qk5xvfmy2wk9c-user-environment.drv'...
error: clearing flags of path '/nix/store/rghh1vgdc9zd044651054b86y2zsz0lh-user-environment/bin': Invalid argument
./install: unable to install Nix into your default profile
-[/hl]
+```
…what does `clearing flags of path: Invalid` mean? At this point, I was feeling a little bit concerned, but it should not be that hard to track down and fix, right?
Time to do some digging. I grepped through the [Nix source](https://github.com/NixOS/nix) to find `clearing flags of path`. It led me to this code block:
-[hl=cpp]
+```cpp
#if __APPLE__
/* Remove flags, in particular UF_IMMUTABLE which would prevent
the file from being garbage-collected. FIXME: Use
@@ 108,7 108,7 @@ Time to do some digging. I grepped through the [Nix source](https://github.com/N
throw SysError("clearing flags of path '%1%'", path);
}
#endif
-[/hl]
+```
Okay, so it seems like `lchflags` is failing to run. I hope that this isn't a herald of predicaments to come! (foreshadowing!)
@@ 124,12 124,12 @@ Okay, so what is causing the problem? I don't want to compile Nix from source ju
After adding `set -x` to the install script, the broken step is made apparent: running `nix-env`.
-[hl=shell]
+```shell
Darling# /nix/store/51sbkakrhiq8lms2lijkbm947yq9s4y2-nix-2.13.4/bin/nix-env -i /nix/store/51sbkakrhiq8lms2lijkbm947yq9s4y2-nix-2.13.4
installing 'nix-2.13.4'
building '/nix/store/07rs9myg8yh532ii196qk5xvfmy2wk9c-user-environment.drv'...
error: clearing flags of path '/nix/store/rghh1vgdc9zd044651054b86y2zsz0lh-user-environment/bin': Invalid argument
-[/hl]
+```
Maybe I can reach out to the Darling folks and see if they have a solution? At the time of writing, I have still not gotten a response on their Discord.
@@ 139,17 139,17 @@ I guess it's time to compile a fixed Nix.
Luckily for me, Nix has [a list of prerequisites](https://nixos.org/manual/nix/stable/installation/prerequisites-source.html) available. I installed [Brew](https://brew.sh) to download the prerequisites and save me from compiling the dependencies from scratch (foreshadowing!!!).
-[hl=console]
+```console
Darling# /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
==> Checking for `sudo` access (which may request your password)...
Don't run this as root!
-[/hl]
+```
Thank you, Brew, for saving me.
Although Darling does not yet have support for creating new accounts, I can trick Brew by changing my UID and GID.
-[hl=console]
+```console
Darling# sudo -u 1000 -g 1000 bash
Darling$ /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
...
@@ 162,7 162,7 @@ Twitter or any other official channels. You are responsible for resolving any
issues you experience while you are running this old version.
...
Floating point exception: 8 (core dumped)
-[/hl]
+```
Not ominous at all! I also learn that Darling is pretending to be macOS 10.15 (Catalina). After waiting *an hour* for Brew to install and fail (did I mention that Darling is pretty slow?), I get a `Floating point exception: 8 (core dumped)`.
@@ 203,14 203,14 @@ Error: Empty installation
I'm sure it's fine. It did say "Installation successful", so let's keep chugging!
-[hl=console]
+```console
Darling$ brew install automake autoconf autoconf-archive libpthread-stubs pkg-config libtool make jq boost gettext libtool curl intltool
...
==> Running `brew cleanup autoconf`...
==> ./configure --prefix=/usr/local/Cellar/autoconf-archive/2023.02.20
==> make install
Error: Empty installation
-[/hl]
+```
`Error: Empty installation` makes its second (and hopefully last) appearance on this show.
@@ 228,7 228,7 @@ I ran the configure script, installed the dependencies, and repeated and compile
With days of pain and frustration etched on my granite face, I had a compiled Nix binary.
-[hl=console]
+```console
Darling$ ./src/nix
dyld: dyld cache load error: shared cache file open() failed
dyld: Symbol not found: __ZTINSt3__14__fs10filesystem16filesystem_errorE
@@ 240,7 240,7 @@ Symbol not found: __ZTINSt3__14__fs10filesystem16filesystem_errorE
Referenced from: src/nix
Expected in: /usr/lib/libc++.dylib
in src/nix; code: 4
-[/hl]
+```
Would you look at that. None of it mattered in the end anyway.
@@ 254,10 254,10 @@ I know that the prebuilt Nix binary at least *starts* without complaining about
Let's leave the Darling environment for now and grab a copy of the macOS Nix files. I know that the `nix` binary is downloaded alongside the installer, so let's poke around in it.
-[hl=console]
+```console
$ strings nix | grep "clearing flags of path"
$
-[/hl]
+```
…no results? That's weird. It should be somewhere, right? Are the strings obfuscated for no reason? Maybe I need to put my tax dollars to good use and poke around the file in [Ghidra](https://en.wikipedia.org/wiki/Ghidra).
@@ 279,38 279,38 @@ Failed to identify external linkage address? Does that mean that I've been looki
Oh, the joys of dynamic linking. Maybe I should look at the other files before I spiral into despair.
-[hl=console]
+```console
$ grep -rn "clearing flags of path"
grep: lib/libnixstore.dylib: binary file matches
-[/hl]
+```
There you are! I thank the NSA (just a little) as I curse myself for not looking at the other files too. I open up the library in the slightly-more-familiar [Rizin](https://rizin.re)—time to poke around the raw bits and bytes of compiled code!
Because the folks over at Nix release their binaries optimised for production use, they have most symbols stripped. Hopefully it's not optimised to the point where it's impossible to grok what the instructions mean. Due to the optimisation, I can't just tell Rizin to go to the `canonicalisePathMetaData` function. I can, however, look for `lchflags`.
-[hl=console]
+```console
[0x00000000]> fl | grep lchflags
0x001d1e90 6 sym.imp.lchflags
0x0021a0a0 8 reloc.lchflags
-[/hl]
+```
We don't care about the [relocation](https://en.wikipedia.org/wiki/Relocation_%28computing%29) bits. Let's find out where `lchflags` is being used:
-[hl=console]
+```console
[0x00000000]> axm 0x001d1e90
0x0012713f CALL
[0x00000000]> s 0x0012713f
-[/hl]
+```
Now let's see what's going on and print a couple lines of disassembly.
-[hl=console]
+```console
[0x0012713f]> pd 4
0x0012713f call sym.imp.lchflags
0x00127144 test eax, eax
0x00127146 je 0x127156
0x00127148 call sym.imp.__error
-[/hl]
+```
What's going on here? It's not too complicated. First, `lchflags` is called, and the result is stored in the register (think variable) `eax`. Next, we are checking if `eax` is `0`. If it is zero (if `lchflags` worked), then the program will jump to the address `0x127156`. If not, then we will call the error function.
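To make that control flow concrete, here is a minimal, self-contained C++ sketch of the same check (my own illustration based on the Nix snippet and the disassembly, not the project's actual code):

```cpp
// A self-contained sketch (not Nix's literal code) of the check we just read in
// the disassembly: call lchflags, test the return value, and either continue or
// take the error branch. The patch turns the conditional jump into an
// unconditional one, so the error branch below is simply never reached.
#include <cerrno>
#include <cstdio>
#include <cstring>
#if defined(__APPLE__)
#include <sys/stat.h>   // chflags/lchflags live here on BSD-like systems
#endif

int main(int argc, char **argv) {
    const char *path = argc > 1 ? argv[1] : ".";
#if defined(__APPLE__)
    if (lchflags(path, 0) != 0) {   // the "test eax, eax" in the disassembly
        // "call sym.imp.__error" is how errno is fetched on macOS;
        // under Darling it reports "Invalid argument"
        std::fprintf(stderr, "clearing flags of path '%s': %s\n", path, std::strerror(errno));
        return 1;
    }
#endif
    std::puts("flags cleared (or the check was skipped)");
    return 0;
}
```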
@@ 320,15 320,15 @@ Why `jmp` instead of removing the `lchflags` call entirely? Simple. Binaries are
Let's edit it out[^1]!
-[hl=console]
+```console
[0x00127146]> pd 2
0x00127146 jmp 0x127156
0x00127148 call sym.imp.__error
-[/hl]
+```
Fingers crossed…
-[hl=console]
+```console
$ sudo darling shell
Darling# cd /Volumes/SystemRoot/home/ersei/workspaces/nix-macos/nix-2.13.4-x86_64-darwin
Darling# mkdir /etc/nix
@@ 351,11 351,11 @@ variables are set, please add the line
. /Users/root/.nix-profile/etc/profile.d/nix.sh
to your shell profile (e.g. ~/.profile).
-[/hl]
+```
Did it… did it just work? You know what I have to do now, right? Install [home-manager](https://github.com/nix-community/home-manager)!
-[hl=console]
+```console
Darling# nix-shell '<home-manager>' -A install
this derivation will be built:
/nix/store/skd8jyfq47cw73s3h5a1jlgjj1dsmr1l-home-manager.drv
@@ 363,7 363,7 @@ building '/nix/store/skd8jyfq47cw73s3h5a1jlgjj1dsmr1l-home-manager.drv'...
...
error: executing '/nix/store/krj9mlsrl8l544z26680ailir2jxzfqs-bash-5.2-p15/bin/bash': Bad file descriptor
error: builder for '/nix/store/skd8jyfq47cw73s3h5a1jlgjj1dsmr1l-home-manager.drv' failed with exit code 1
-[/hl]
+```
Ouch. I really thought that home manager would just work. I guess I never learn.
@@ 373,17 373,17 @@ Considering that I will never use this system for anything, it's probably fine t
Maybe it is time to throw off the shackles of the sunk cost fallacy. This system is unstable, lacks features, and the software that doesn't crash outright will likely not work properly anyway.
-[hl=console]
+```console
Darling# nix-env -i fish
...
Darling# fish
Something called the old LKM API (nr = 4158)
Illegal instruction: 4 (core dumped)
-[/hl]
+```
But I *really have to know* what's causing the issue. Does building anything work in Nix?
-[hl=console]
+```console
Darling$ nix-build --expr 'derivation {name = "test"; builder = "/bin/bash"; system = "x86_64-darwin";}'
this derivation will be built:
/nix/store/i7fn0rj86xdx7mi49wsn4k79ncx89abg-test.drv
@@ 391,7 391,7 @@ building '/nix/store/i7fn0rj86xdx7mi49wsn4k79ncx89abg-test.drv'...
error: executing '/bin/bash': Bad file descriptor
error: builder for '/nix/store/i7fn0rj86xdx7mi49wsn4k79ncx89abg-test.drv' failed with exit code 1
note: build failure may have been caused by lack of free disk space
-[/hl]
+```
Can Nix tell me more? It can, with the `--debug` flag!
@@ 442,7 442,7 @@ And now I have three million lines of debug output. Maybe that was a tad too muc
It looks like `sandbox-exec` is being called, and it so happens that `/usr/bin/sandbox-exec` doesn't exist on Darling! Can we get around it? Where is `sandbox-exec` in the code?
-[hl=cpp]
+```cpp
if (getEnv("_NIX_TEST_NO_SANDBOX") != "1") {
builder = "/usr/bin/sandbox-exec";
args.push_back("sandbox-exec");
@@ 459,18 459,18 @@ if (getEnv("_NIX_TEST_NO_SANDBOX") != "1") {
builder = drv->builder;
args.push_back(std::string(baseNameOf(drv->builder)));
}
-[/hl]
+```
So if I set `_NIX_TEST_NO_SANDBOX=1`, then everything should work?
-[hl=console]
+```console
Darling# export _NIX_TEST_NO_SANDBOX=1
Darling# nix-shell '<home-manager>' -A install
...
mv: cannot move 'build.xml' to '/nix/store/svnf2aiagc98l1222ni7qycqa2i160r8-docbook-xsl-ns-1.79.2/share/xml/docbook-xsl-ns/build.xml': Function not implemented
Unimplemented syscall (488)
...
-[/hl]
+```
Another error. Wonderful. At least Nix is building things now!
@@ 480,7 480,7 @@ It seems like the version of `mv` that Nix downloads is too new and is using a s
Yes, modifying the contents of a Nix store directly is a bad idea. I just can't bring myself to care.
-[hl=console]
+```console
Darling# nix-shell '<home-manager>' -A install
this derivation will be built:
/nix/store/424sw82b28ip4v59cyvxsx813lnkc7ic-options-docbook.xml.drv
@@ 488,7 488,7 @@ building '/nix/store/424sw82b28ip4v59cyvxsx813lnkc7ic-options-docbook.xml.drv'..
/nix/store/j7lvji2dg1bkr7j8das3fak85hbxjzpx-python3-3.10.12/lib/python3.10/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 6 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
error: builder for '/nix/store/424sw82b28ip4v59cyvxsx813lnkc7ic-options-docbook.xml.drv' failed due to signal 15 (Terminated: 15)
-[/hl]
+```
Is the Python stuff causing an error? Peeking at the code, it's just a warning message and does not cause any error.
@@ 496,7 496,7 @@ Let's find the smallest command that causes the issue so debugging is a little f
So what's the issue? Can I just skip building the docs?
-[hl=console]
+```console
Darling# nix-env -i git
these 31 paths will be fetched (9.51 MiB download, 62.77 MiB unpacked):
/nix/store/j0hpv9n6fbnj1qaz4vzvd0zq9m30ss5l-git-2.41.0
@@ 507,22 507,22 @@ Darling# git clone https://github.com/nix-community/home-manager
Cloning into 'home-manager'...
...
Resolving deltas: 100% (21733/21733), done.
-[/hl]
+```
I edited `modules/modules.nix` to remove `./manual.nix`, and removed all mention of the docs from `default.nix`. That should do it!
-[hl=console]
+```console
Darling# nix-shell -A install
these 6 derivations will be built:
...
Activating onFilesChange
Activating setupLaunchAgents
Segmentation fault: 11 (core dumped)
-[/hl]
+```
The sporadic crashes return. If this doesn't work when I run it again, I'll be pretty sad.
-[hl=console]
+```console
Darling# nix-shell -A install
...
Activating setupLaunchAgents
@@ 537,17 537,17 @@ All done! The home-manager tool should now be installed and you can edit
to configure Home Manager. Run 'man home-configuration.nix' to
see all available options.
-[/hl]
+```
Victory is mine! Now I just need to get home-manager to use my configs. I cloned my [nix repo](https://git.sr.ht/~fd/nix-configs), copied in the new `home.nix` that was generated for me, and added the imports:
-[hl=nix]
+```nix
imports = [
./nvim.nix
./shell.nix
./common.nix
];
-[/hl]
+```
I linked the new config, ran `nix-shell -A install` and prayed. I'm not superstitious, but if praying would help then I would do it.
@@ 581,7 581,7 @@ Of course it couldn't be that easy. What am I going to do, delete `nvim`? At thi
Time to debug *again*, I guess. Let's open the derivation in a shell so we can step through the issues.
-[hl=console]
+```console
Darling# nix-shell /nix/store/c5n06d7jd1hmsykampx34gc7qj1yyjsz-neovim-0.9.1.drv
Darling-nix# source $stdenv/setup
Darling-nix# set -x
@@ 591,7 591,7 @@ Generating remote plugin manifest
++ touch /nix/store/b8cxllw65bmk8yyj2qhczkxsh6vsd94j-neovim-0.9.1/rplugin.vim
Segmentation fault: 11 (core dumped)
...
-[/hl]
+```
Not only was `mv` broken, but so is `touch`! I commit another Nix Crime™ and rebuild home-manager.
M pages/03.blog/36.typst/item.en.md => pages/03.blog/36.typst/item.en.md +4 -4
@@ 86,23 86,23 @@ After [adding](https://git.sr.ht/~fd/nix-configs/commit/a81b364ee21d26e9d43ad285
As is now apparently custom, I decide to do the last homework of the year for my linear algebra class in Typst, but this time without a web app in the way. If you are unaware, my linear algebra homework consists of a lot of matrices. Previously, in LaTeX/Obsidian, I would have to do something like this:
-[hl=tex]
+```tex
\left(\begin{array}{ccc|c}
-2 & 3 & 1 & 0 \\
1 & 0 & 1 & 3
\end{array}\right)
-[/hl]
+```
Which would make a matrix like this:
-[tex]
+$$
\left(
\begin{array}{ccc|c}
-2 & 3 & 1 & 0 \\
1 & 0 & 1 & 3
\end{array}
\right)
-[/tex]
+$$
The equivalent Typst code looks something like this:
M pages/03.blog/37.srht-time/item.en.md => pages/03.blog/37.srht-time/item.en.md +8 -8
@@ 57,7 57,7 @@ Make a note of the UUID generated!
Then, in the repository, I created the Sourcehut build file `.build.yml`:
-[hl=yaml]
+```yaml
image: alpine/edge
secrets:
- 4df836f8-5313-40b1-bc4e-b7b20cfd147e # Set your UUID here!
@@ 69,13 69,13 @@ tasks:
cd ~/"${REPO}"
git config --global credential.helper store
git push --mirror "https://github.com/${GH_USER}/${REPO}"
-[/hl]
+```
This file assumes that the repositories have the same name on both Sourcehut and GitHub. If the names differ, keep the `REPO` variable as the GitHub repository name and change the `cd` line to `cd ~/sourcehut-repository-name`.
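For example, here is a hypothetical variant for a Sourcehut repository named `my-project-srht` mirrored to a GitHub repository named `my-project`. The `environment:` block is an assumption on my part (the full manifest has to define `GH_USER` and `REPO` somewhere, and an `environment:` section is the usual place); only the relevant parts are shown:

```yaml
environment:
  GH_USER: your-github-username
  REPO: my-project              # the GitHub repository name
tasks:
  - mirror: |
      cd ~/my-project-srht      # the Sourcehut repository name, as cloned by the build
      git config --global credential.helper store
      git push --mirror "https://github.com/${GH_USER}/${REPO}"
```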
As you push the repository with the new changes, Sourcehut should notify you that the build has started, like so:
-[hl=shell]
+```shell
$ git push
Enumerating objects: 5, done.
Counting objects: 100% (5/5), done.
@@ 87,7 87,7 @@ remote: Build started:
remote: https://builds.sr.ht/~fd/job/1143073 [.build.yml]
To git.sr.ht:~fd/grav-plugin-staticmath
2d70c2f..622bbf0 main -> main
-[/hl]
+```
Go to the URL to ensure the build succeeded, and take a look at the GitHub repository to check if the mirror process worked.
@@ 105,7 105,7 @@ Unfortunately, you cannot disable pull requests on GitHub. To prevent pull reque
In the `.github/pull_request_template.md`, I added the following:
-[hl=markdown]
+```markdown
# Your Pull Request Will Not Be Merged
This is a mirror of the [Sourcehut](https://git.sr.ht/~fd/grav-plugin-staticmath) repository.
@@ 118,18 118,18 @@ Yes, it requires sending an email.
It's easier than it sounds.
[Direct all attention to the Mailing List](https://lists.sr.ht/~fd/grav-plugin-staticmath)
-[/hl]
+```
This is an imperfect solution at best. At worst, people will ignore the message and create the pull request anyway. In that case, a GitHub action that automatically closes all pull requests may be in order (along with unwatching the repository so you are not spammed with emails for each individual who does not read the instructions).
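A minimal sketch of such an action (untested; the file name, comment text, and trigger are my own choices) could use `actions/github-script` to comment on and close anything that gets opened. Note that the workflow file has to live in the mirrored repository itself, so it would be committed on the Sourcehut side and pushed along with everything else:

```yaml
# .github/workflows/close-prs.yml -- hypothetical example, not part of this repository
name: Close pull requests
on:
  pull_request_target:
    types: [opened]
permissions:
  issues: write
  pull-requests: write
jobs:
  close:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/github-script@v7
        with:
          script: |
            // leave a pointer to the real home of the project, then close the PR
            await github.rest.issues.createComment({
              ...context.repo,
              issue_number: context.payload.pull_request.number,
              body: "This repository is a read-only mirror; please send patches to the mailing list.",
            });
            await github.rest.pulls.update({
              ...context.repo,
              pull_number: context.payload.pull_request.number,
              state: "closed",
            });
```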
Worse still, it is now harder (if not impossible) for people to find the mailing lists and issue tracker. I've resolved to place a set of links at the top of the README that point to the relevant places on Sourcehut:
-[hl=markdown]
+```markdown
- [Grav StaticMath Plugin](https://git.sr.ht/~fd/grav-plugin-staticmath)
- [StaticMath Server](https://git.sr.ht/~fd/staticmath-server)
- [Issues](https://todo.sr.ht/~fd/grav-plugin-staticmath)
- [Mailing List](https://lists.sr.ht/~fd/grav-plugin-staticmath)
-[/hl]
+```
This has the wonderful side-effect of linking the Sourcehut Git repository to the Sourcehut issue tracker (a link that does not otherwise exist, as the main project page is the only place that lists these resources side-by-side).
M pages/03.blog/40.fuse-root/item.en.md => pages/03.blog/40.fuse-root/item.en.md +26 -26
@@ 60,14 60,14 @@ The initramfs needs to have both network support as well as the proper FUSE bina
I decide to build this on top of Arch Linux because it's relatively lightweight and I'm familiar with how it works, as opposed to something like Alpine.
-[hl=console]
+```console
$ git clone https://github.com/dracutdevs/dracut
$ podman run -it --name arch -v ./dracut:/dracut docker.io/archlinux:latest bash
-[/hl]
+```
In the container, I installed some packages (including the `linux` package because I need a functioning kernel), compiled `dracut` from source, and wrote a simple module script in `modules.d/90fuse/module-setup.sh`:
-[hl=bash]
+```bash
#!/bin/bash
check() {
require_binaries fusermount fuseiso mkisofs || return 1
@@ 82,11 82,11 @@ install() {
inst_multiple fusermount fuseiso mkisofs
return 0
}
-[/hl]
+```
That's it. That's all the code I had to write. Buoyed by my newfound confidence, I powered ahead, building the EFI image.
-[hl=console]
+```console
$ ./dracut.sh --kver 6.9.6-arch1-1 \
--uefi efi_firmware/EFI/BOOT/BOOTX64.efi \
--force -l -N --no-hostonly-cmdline \
@@ 111,11 111,11 @@ reboot with "rd.debug" added to the kernel command line.
Dropping to debug shell.
dracut:/#
-[/hl]
+```
*Hacker voice* I'm in. Now to enable networking and mount a test root. I have already extracted an Arch Linux root into an S3 bucket hosted locally, so this should be pretty easy, right? I just have to manually set up networking routes and load the drivers.
-[hl=console]
+```console
dracut:/# modprobe fuse
dracut:/# modprobe e1000
dracut:/# ip link set lo up
@@ 132,11 132,11 @@ dracut:/# switch_root /sysroot /sbin/init
switch_root: failed to execute /lib/systemd/systemd: Input/output error
dracut:/# ls
sh: ls: command not found
-[/hl]
+```
Honestly, I don't know what I expected. Seems like everything is just... *gone*. Alas, not even tab completion can save me. At this point, I was stuck. I had no idea what to do. I spent days just looking around, poking at the `switch_root` source code, all for naught, until I remembered a link [Anthony](https://a.exozy.me) had sent me: [How to shrink root filesystem without booting a livecd](https://unix.stackexchange.com/questions/226872/how-to-shrink-root-filesystem-without-booting-a-livecd/227318#227318). In it, there was a command called `pivot_root`, which `switch_root` seems to call internally. Let's try that out.
-[hl=console]
+```console
dracut:/# logout
...
[ 430.817269] ---[ end Kernel panic - not syncing: Attempted to kill init! exitcode=0x00000100 ]---
@@ 145,7 145,7 @@ dracut:/# cd /sysroot
dracut:/sysroot# mkdir oldroot
dracut:/sysroot# pivot_root . oldroot
pivot_root: failed to change root from `.' to `oldroot': Invalid argument
-[/hl]
+```
Apparently, `pivot_root` is [not allowed](https://unix.stackexchange.com/a/455224) to pivot roots if the root being switched is in the initramfs. Unfortunate. The Stack Exchange answer tells me to use `switch_root`, which doesn't work either. However, part of that answer sticks out to me:
@@ 153,20 153,20 @@ Apparently, `pivot_root` is [not allowed](https://unix.stackexchange.com/a/45522
Would it be possible to manually switch the root *without* a specialized system call? What if I just chroot?
-[hl=console]
+```console
...
dracut:/# mount --rbind /sys /sysroot/sys
dracut:/# mount --rbind /dev /sysroot/dev
dracut:/# mount -t proc /proc /sysroot/proc
dracut:/# chroot /sysroot /sbin/init
Explicit --user argument required to run as user manager.
-[/hl]
+```
Oh, I need to run the `chroot` command as PID 1 so Systemd can start up properly. I can actually tweak the initramfs's init script, put my startup commands in there, and replace the `switch_root` call with `exec chroot /sysroot /sbin/init`.
I put this in `modules.d/99base/init.sh` in the Dracut source, after the udev rules are loaded, and bypassed the `root` variable checks earlier in the script.
-[hl=bash]
+```bash
modprobe fuse
modprobe e1000
ip link set lo up
@@ 177,7 177,7 @@ s3fs -o url=http://192.168.2.209:9000 -o use_path_request_style fuse /sysroot
mount --rbind /sys /sysroot/sys
mount --rbind /dev /sysroot/dev
mount -t proc /proc /sysroot/proc
-[/hl]
+```
I also added `exec chroot /sysroot /sbin/init` at the end instead of the `switch_root` command.
@@ 191,16 191,16 @@ Nobody stopped me, so I kept going.
I log in with the very secure password `root` as `root`, and it unceremoniously drops me into a shell.
-[hl=console]
+```console
[root@archlinux ~]# mount
s3fs on / type fuse.s3fs (rw,nosuid,nodev,relatime,user_id=0,group_id=0)
...
[root@archlinux ~]#
-[/hl]
+```
At last, Linux booted off of an S3 bucket. I was compelled to share my achievement with others—all I needed was a fetch program to include in the screenshot:
-[hl=console]
+```console
[root@archlinux ~]# pacman -Sy fastfetch
:: Synchronizing package databases...
core.db failed to download
@@ 212,7 212,7 @@ error: failed retrieving file 'core.db' from mirror.leaseweb.net : Could not res
warning: fatal error from mirror.leaseweb.net, skipping for the remainder of this transaction
error: failed to synchronize all databases (invalid url for server)
[root@archlinux ~]#
-[/hl]
+```
Uh, seems like DNS isn't working, and I'm missing `dig` and other debugging tools.
@@ 220,7 220,7 @@ Wait a minute! My root filesystem is on S3! I can just mount it somewhere else w
Some debugging later, it seems like systemd-resolved doesn't want to run because it fails with `Failed to connect stdout to the journal socket, ignoring: Permission denied`. I'm not about to try to debug systemd because it's too complicated and I'm lazy, so instead I'll just use Cloudflare's DNS.
-[hl=console]
+```console
[root@archlinux ~]# echo "nameserver 1.1.1.1" > /etc/resolv.conf
[root@archlinux ~]# pacman -Sy fastfetch
:: Synchronizing package databases...
@@ 228,7 228,7 @@ Some debugging later, it seems like systemd-resolved doesn't want to run because
extra is up to date
...
[root@archlinux ~]# fastfetch
-[/hl]
+```
![Fastfetch showing the system running in QEMU](fastfetch.png)
@@ 259,14 259,14 @@ In the meantime, I added the token files generated from my laptop into the initr
[^2]: I set `acknowledge_abuse=true` and `root_folder=fuse-root`.
-[hl=bash]
+```bash
...
inst ./gdfuse-config /.gdfuse/default/config
inst ./gdfuse-state /.gdfuse/default/state
find /etc/ssl -type f -or -type l | while read file; do inst "$file"; done
find /etc/ca-certificates -type f -or -type l | while read file; do inst "$file"; done
...
-[/hl]
+```
![A screenshot of Google Drive showing the root of a typical Linux filesystem](google-drive-root.png)
@@ 280,21 280,21 @@ Perhaps they did not bother to stop me because they knew I would fail.
I know the file exists since, well, it *exists*, so why is it not found? Simple: Linux is kinda weird and if the binary you call depends on a library that's not found, then you'll get "File not found".
-[hl=console]
+```console
dracut:/# ldd /sysroot/bin/bash
linux-vdso.so.1 (0x00007e122b196000)
libreadline.so.8 => /usr/lib/libreadline.so.8 (0x00007e122b01a000)
libc.so.6 => /usr/lib/libc.so.6 (0x00007e122ae2e000)
libncursesw.so.6 => /usr/lib/libncursesw.so.6 (0x00007e122adbf000)
/lib64/ld-linux-x86-64.so.2 => /usr/lib64/ld-linux-x86-64.so.2 (0x00007e122b198000)
-[/hl]
+```
However, these symlinks don't actually exist! Remember how earlier we noted that relative symlinks don't work? Well, that's come back to bite me. The Kernel is looking for files in `/sysroot` inside `/sysroot/sysroot`. Luckily, this is an easy enough fix: we just need to make `/sysroot` available at `/sysroot/sysroot` without relying on symlinks:
-[hl=console]
+```console
dracut:/# mkdir /sysroot/sysroot
dracut:/# mount --rbind /sysroot /sysroot/sysroot
-[/hl]
+```
Now time to boot!
M plugins/highlight-php/README.md => plugins/highlight-php/README.md +2 -0
@@ 1,3 1,5 @@
+MODIFIED TO NOT USE SHORTCODES
+
# Highlight PHP Plugin
The **Highlight PHP** Plugin is an extension for [Grav
M plugins/highlight-php/blueprints.yaml => plugins/highlight-php/blueprints.yaml +0 -1
@@ 16,7 16,6 @@ license: MIT
dependencies:
- { name: grav, version: '>=1.7.0' }
- - { name: shortcode-core, version: '>=4.2.2' }
form:
fields:
M plugins/highlight-php/highlight-php.php => plugins/highlight-php/highlight-php.php +124 -14
@@ 8,10 8,9 @@ use Grav\Common\Filesystem\Folder;
use Grav\Common\Inflector;
use Grav\Common\Plugin;
use Grav\Framework\File\File;
-use InvalidArgumentException;
-use Pimple\Exception\FrozenServiceException;
-use Pimple\Exception\UnknownIdentifierException;
use RocketTheme\Toolbox\Event\Event;
+use DomainException;
+use Exception;
/**
* Class HighlightPhpPlugin
@@ 35,7 34,8 @@ class HighlightPhpPlugin extends Plugin
{
return [
'onPluginsInitialized' => ['onPluginsInitialized', 0],
- 'onGetPageBlueprints' => ['onGetPageBlueprints', 0]
+ 'onGetPageBlueprints' => ['onGetPageBlueprints', 0],
+ 'onMarkdownInitialized' => ['onMarkdownInitialized', 0],
];
}
@@ 49,6 49,105 @@ class HighlightPhpPlugin extends Plugin
return require __DIR__ . '/vendor/autoload.php';
}
+ public function onMarkdownInitialized(Event $event)
+ {
+ $markdown = $event['markdown'];
+
+ // $page = $this->grav['page'];
+ // $config = $this->mergeConfig($page);
+ // if (!($config->get('enabled') && $config->get('active'))) {
+ // return;
+ // }
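+        // Register a handler for ``` fences at index 0 so it is consulted before
+        // Parsedown's built-in fenced-code block type.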
+ $markdown->addBlockType('`', 'Highlight', true, true, 0);
+
+ // Taken entirely from Parsedown with a few modifications to add the relevant classes and such
+ $markdown->blockHighlight = function($Line) {
+ if (preg_match('/^['.$Line['text'][0].']{3,}[ ]*([^`]+)?[ ]*$/', $Line['text'], $matches))
+ {
+ $Element = array(
+ 'name' => 'code',
+ 'handler' => 'line',
+ 'text' => '',
+ );
+
+ $language = "";
+
+ if (isset($matches[1]))
+ {
+ /**
+ * https://www.w3.org/TR/2011/WD-html5-20110525/elements.html#classes
+ * Every HTML element may have a class attribute specified.
+ * The attribute, if specified, must have a value that is a set
+ * of space-separated tokens representing the various classes
+ * that the element belongs to.
+ * [...]
+ * The space characters, for the purposes of this specification,
+ * are U+0020 SPACE, U+0009 CHARACTER TABULATION (tab),
+ * U+000A LINE FEED (LF), U+000C FORM FEED (FF), and
+ * U+000D CARRIAGE RETURN (CR).
+ */
+ $language = substr($matches[1], 0, strcspn($matches[1], " \t\n\f\r"));
+
+ $class = 'hljs language-'.$language;
+
+ $Element['attributes'] = array(
+ 'class' => $class,
+ );
+ }
+
+ $Block = array(
+ 'char' => $Line['text'][0],
+ 'lang' => $language,
+ 'element' => array(
+ 'name' => 'pre',
+ 'handler' => 'element',
+ 'text' => $Element,
+ 'attributes' => [
+ 'class' => "hljs"
+ ]
+ ),
+ );
+
+ return $Block;
+ }
+ };
+
+ $markdown->blockHighlightContinue = function($Line, $Block) {
+ if (isset($Block['complete']))
+ {
+ return;
+ }
+
+ if (isset($Block['interrupted']))
+ {
+ $Block['element']['text']['text'] .= "\n";
+
+ unset($Block['interrupted']);
+ }
+
+ if (preg_match('/^'.$Block['char'].'{3,}[ ]*$/', $Line['text']))
+ {
+ $Block['element']['text']['text'] = substr($Block['element']['text']['text'], 1);
+
+ $Block['complete'] = true;
+
+ return $Block;
+ }
+
+ $Block['element']['text']['text'] .= "\n".$Line['body'];
+
+ return $Block;
+ };
+
+ $markdown->blockHighlightComplete = function($Block) {
+ $text = $Block['element']['text']['text'];
+
+ $Block['element']['text']['text'] = $this->render($text, $Block['lang']);
+
+ return $Block;
+ };
+ }
+
/**
* Initialize the plugin
*/
@@ 61,7 160,6 @@ class HighlightPhpPlugin extends Plugin
// enable other required events
$this->enable([
- 'onShortcodeHandlers' => ['onShortcodeHandlers', 0],
'onPageInitialized' => ['onPageInitialized', 0]
]);
@@ 117,15 215,6 @@ class HighlightPhpPlugin extends Plugin
}
/**
- * Register shortcodes
- */
- public function onShortcodeHandlers()
- {
- // FYI: `onShortCodeHandlers` is fired by the shortcode core at the `onThemesInitialized` event
- $this->grav['shortcode']->registerAllShortcodes(__DIR__ . '/shortcodes');
- }
-
- /**
* Helper function to make other code's intent clearer
* @param string $styleName basename of a CSS file
* @return bool true if $styleName is not the string 'None'
@@ 220,4 309,25 @@ class HighlightPhpPlugin extends Plugin
$types = $event->types;
$types->scanBlueprints('plugin://highlight-php/blueprints');
}
+
+ /**
+ * Helper method to produce processed, syntax-highlightable HTML
+     * @param string $code the code to tokenize and syntax highlight
+     * @param string $lang language or alias supported by highlight.php
+     * @return string the HTML with the appropriate classes to be rendered as highlighted in the browser
+ * @throws Exception
+ */
+ private function render(string $code, string $lang)
+ {
+ try {
+ $hl = new \Highlight\Highlighter();
+ $highlighted = $hl->highlight($lang, $code);
+ return $highlighted->value;
+ } catch (DomainException $e) {
+ // if someone uses an unsupported language, we don't want to break the site
+ return $code;
+ }
+ }
}
D plugins/highlight-php/shortcodes/HighlightPhpShortcode.php => plugins/highlight-php/shortcodes/HighlightPhpShortcode.php +0 -49
@@ 1,49 0,0 @@
-<?php
-
-namespace Grav\Plugin\Shortcodes;
-
-use DomainException;
-use Exception;
-use Thunder\Shortcode\Shortcode\ShortcodeInterface;
-
-class HighlightPhpShortcode extends Shortcode
-{
- public function init()
- {
- $rawHandlers = $this->shortcode->getRawHandlers();
-
- $rawHandlers->add('hl', function (ShortcodeInterface $sc) {
- $lang = $sc->getBbCode();
- $content = $sc->getContent();
- $isInline = is_null($content);
- $code = $isInline ? $sc->getParameter('code') : $content;
- $code = trim($code, "\n\r");
- return $this->render($lang, $code, $isInline);
- });
- }
-
- /**
- * Helper method to produce processed, syntax-highlightable HTML
- * @param string $lang language or alias supported by highlight.php
- * @param string $code the code to tokenize and syntax highlight
- * @param bool $isInline true if the snippet is to be rendered inline, false if block
- * @return string the HTML with the appropriate classes to be rendered as highlighted in the browser
- * @throws DomainException
- * @throws Exception
- */
- private function render(string $lang, string $code, bool $isInline)
- {
- try {
- $hl = new \Highlight\Highlighter();
- $highlighted = $hl->highlight($lang, $code);
- $output = $highlighted->value;
- $display = $isInline ? 'inline' : 'block';
- $codeElement = "<code class='hljs language-$highlighted->language' style='display: $display'>$output</code>";
- return $isInline ? $codeElement : "<pre class='hljs'>$codeElement</pre>";
- } catch (DomainException $e) {
- // if someone uses an unsupported language, we don't want to break the site
- $codeElement = "<code class='hljs whoops-$lang-unknown-language'>$code</code>";
- return $isInline ? $codeElement : "<pre class='hljs'>$codeElement</pre>";
- }
- }
-}
M plugins/staticmath/CHANGELOG.md => plugins/staticmath/CHANGELOG.md +7 -0
@@ 1,3 1,10 @@
+# v2.0.1
+## 30-10-2024
+1. [](#improved)
+ * Clean up documentation, blueprints, and language strings.
+    * Clarify license for bundled font files
+ * Improve plugin metadata (icon, tags)
+
# v2.0.0
## 27-10-2024
1. [](#improved)
M plugins/staticmath/LICENSE => plugins/staticmath/LICENSE +458 -1
@@ 1,6 1,10 @@
+All PHP code is licensed under the MIT license. Parts of the code are derived
+from Sommerregen's Grav MathJax plugin. That code has been relicensed to MIT at
+my request.
+
The MIT License (MIT)
-Copyright (c) 2023 Ersei Saggi
+Copyright (c) 2024 Ersei Saggi
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
@@ 19,3 23,456 @@ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
+
+---
+
+This plugin bundles a copy of the Latin Modern font converted to WOFF2 format
+for convenience. The font is under the GUST Font License, reproduced in full
+below. The GUST license references the LPPL v1.3c, which is also included.
+
+This is version 1.0, dated 22 June 2009, of the GUST Font License.
+(GUST is the Polish TeX Users Group, http://www.gust.org.pl)
+
+For the most recent version of this license see
+http://www.gust.org.pl/fonts/licenses/GUST-FONT-LICENSE.txt
+or
+http://tug.org/fonts/licenses/GUST-FONT-LICENSE.txt
+
+This work may be distributed and/or modified under the conditions
+of the LaTeX Project Public License, either version 1.3c of this
+license or (at your option) any later version.
+
+Please also observe the following clause:
+1) it is requested, but not legally required, that derived works be
+ distributed only after changing the names of the fonts comprising this
+ work and given in an accompanying "manifest", and that the
+ files comprising the Work, as listed in the manifest, also be given
+ new names. Any exceptions to this request are also given in the
+ manifest.
+
+ We recommend the manifest be given in a separate file named
+ MANIFEST-<fontid>.txt, where <fontid> is some unique identification
+ of the font family. If a separate "readme" file accompanies the Work,
+ we recommend a name of the form README-<fontid>.txt.
+
+The latest version of the LaTeX Project Public License is in
+http://www.latex-project.org/lppl.txt and version 1.3c or later
+is part of all distributions of LaTeX version 2006/05/20 or later.
+
+---
+
+The LaTeX Project Public License
+=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
+
+LPPL Version 1.3c 2008-05-04
+
+Copyright 1999 2002-2008 LaTeX3 Project
+ Everyone is allowed to distribute verbatim copies of this
+ license document, but modification of it is not allowed.
+
+
+PREAMBLE
+========
+
+The LaTeX Project Public License (LPPL) is the primary license under
+which the LaTeX kernel and the base LaTeX packages are distributed.
+
+You may use this license for any work of which you hold the copyright
+and which you wish to distribute. This license may be particularly
+suitable if your work is TeX-related (such as a LaTeX package), but
+it is written in such a way that you can use it even if your work is
+unrelated to TeX.
+
+The section `WHETHER AND HOW TO DISTRIBUTE WORKS UNDER THIS LICENSE',
+below, gives instructions, examples, and recommendations for authors
+who are considering distributing their works under this license.
+
+This license gives conditions under which a work may be distributed
+and modified, as well as conditions under which modified versions of
+that work may be distributed.
+
+We, the LaTeX3 Project, believe that the conditions below give you
+the freedom to make and distribute modified versions of your work
+that conform with whatever technical specifications you wish while
+maintaining the availability, integrity, and reliability of
+that work. If you do not see how to achieve your goal while
+meeting these conditions, then read the document `cfgguide.tex'
+and `modguide.tex' in the base LaTeX distribution for suggestions.
+
+
+DEFINITIONS
+===========
+
+In this license document the following terms are used:
+
+ `Work'
+ Any work being distributed under this License.
+
+ `Derived Work'
+ Any work that under any applicable law is derived from the Work.
+
+ `Modification'
+ Any procedure that produces a Derived Work under any applicable
+ law -- for example, the production of a file containing an
+ original file associated with the Work or a significant portion of
+ such a file, either verbatim or with modifications and/or
+ translated into another language.
+
+ `Modify'
+ To apply any procedure that produces a Derived Work under any
+ applicable law.
+
+ `Distribution'
+ Making copies of the Work available from one person to another, in
+ whole or in part. Distribution includes (but is not limited to)
+ making any electronic components of the Work accessible by
+ file transfer protocols such as FTP or HTTP or by shared file
+ systems such as Sun's Network File System (NFS).
+
+ `Compiled Work'
+ A version of the Work that has been processed into a form where it
+ is directly usable on a computer system. This processing may
+ include using installation facilities provided by the Work,
+ transformations of the Work, copying of components of the Work, or
+ other activities. Note that modification of any installation
+ facilities provided by the Work constitutes modification of the Work.
+
+ `Current Maintainer'
+ A person or persons nominated as such within the Work. If there is
+ no such explicit nomination then it is the `Copyright Holder' under
+ any applicable law.
+
+ `Base Interpreter'
+ A program or process that is normally needed for running or
+ interpreting a part or the whole of the Work.
+
+ A Base Interpreter may depend on external components but these
+ are not considered part of the Base Interpreter provided that each
+ external component clearly identifies itself whenever it is used
+ interactively. Unless explicitly specified when applying the
+ license to the Work, the only applicable Base Interpreter is a
+ `LaTeX-Format' or in the case of files belonging to the
+ `LaTeX-format' a program implementing the `TeX language'.
+
+
+
+CONDITIONS ON DISTRIBUTION AND MODIFICATION
+===========================================
+
+1. Activities other than distribution and/or modification of the Work
+are not covered by this license; they are outside its scope. In
+particular, the act of running the Work is not restricted and no
+requirements are made concerning any offers of support for the Work.
+
+2. You may distribute a complete, unmodified copy of the Work as you
+received it. Distribution of only part of the Work is considered
+modification of the Work, and no right to distribute such a Derived
+Work may be assumed under the terms of this clause.
+
+3. You may distribute a Compiled Work that has been generated from a
+complete, unmodified copy of the Work as distributed under Clause 2
+above, as long as that Compiled Work is distributed in such a way that
+the recipients may install the Compiled Work on their system exactly
+as it would have been installed if they generated a Compiled Work
+directly from the Work.
+
+4. If you are the Current Maintainer of the Work, you may, without
+restriction, modify the Work, thus creating a Derived Work. You may
+also distribute the Derived Work without restriction, including
+Compiled Works generated from the Derived Work. Derived Works
+distributed in this manner by the Current Maintainer are considered to
+be updated versions of the Work.
+
+5. If you are not the Current Maintainer of the Work, you may modify
+your copy of the Work, thus creating a Derived Work based on the Work,
+and compile this Derived Work, thus creating a Compiled Work based on
+the Derived Work.
+
+6. If you are not the Current Maintainer of the Work, you may
+distribute a Derived Work provided the following conditions are met
+for every component of the Work unless that component clearly states
+in the copyright notice that it is exempt from that condition. Only
+the Current Maintainer is allowed to add such statements of exemption
+to a component of the Work.
+
+ a. If a component of this Derived Work can be a direct replacement
+ for a component of the Work when that component is used with the
+ Base Interpreter, then, wherever this component of the Work
+ identifies itself to the user when used interactively with that
+ Base Interpreter, the replacement component of this Derived Work
+ clearly and unambiguously identifies itself as a modified version
+ of this component to the user when used interactively with that
+ Base Interpreter.
+
+ b. Every component of the Derived Work contains prominent notices
+ detailing the nature of the changes to that component, or a
+ prominent reference to another file that is distributed as part
+ of the Derived Work and that contains a complete and accurate log
+ of the changes.
+
+ c. No information in the Derived Work implies that any persons,
+ including (but not limited to) the authors of the original version
+ of the Work, provide any support, including (but not limited to)
+ the reporting and handling of errors, to recipients of the
+ Derived Work unless those persons have stated explicitly that
+ they do provide such support for the Derived Work.
+
+ d. You distribute at least one of the following with the Derived Work:
+
+ 1. A complete, unmodified copy of the Work;
+ if your distribution of a modified component is made by
+ offering access to copy the modified component from a
+ designated place, then offering equivalent access to copy
+ the Work from the same or some similar place meets this
+ condition, even though third parties are not compelled to
+ copy the Work along with the modified component;
+
+ 2. Information that is sufficient to obtain a complete,
+ unmodified copy of the Work.
+
+7. If you are not the Current Maintainer of the Work, you may
+distribute a Compiled Work generated from a Derived Work, as long as
+the Derived Work is distributed to all recipients of the Compiled
+Work, and as long as the conditions of Clause 6, above, are met with
+regard to the Derived Work.
+
+8. The conditions above are not intended to prohibit, and hence do not
+apply to, the modification, by any method, of any component so that it
+becomes identical to an updated version of that component of the Work as
+it is distributed by the Current Maintainer under Clause 4, above.
+
+9. Distribution of the Work or any Derived Work in an alternative
+format, where the Work or that Derived Work (in whole or in part) is
+then produced by applying some process to that format, does not relax or
+nullify any sections of this license as they pertain to the results of
+applying that process.
+
+10. a. A Derived Work may be distributed under a different license
+ provided that license itself honors the conditions listed in
+ Clause 6 above, in regard to the Work, though it does not have
+ to honor the rest of the conditions in this license.
+
+ b. If a Derived Work is distributed under a different license, that
+ Derived Work must provide sufficient documentation as part of
+ itself to allow each recipient of that Derived Work to honor the
+ restrictions in Clause 6 above, concerning changes from the Work.
+
+11. This license places no restrictions on works that are unrelated to
+the Work, nor does this license place any restrictions on aggregating
+such works with the Work by any means.
+
+12. Nothing in this license is intended to, or may be used to, prevent
+complete compliance by all parties with all applicable laws.
+
+
+NO WARRANTY
+===========
+
+There is no warranty for the Work. Except when otherwise stated in
+writing, the Copyright Holder provides the Work `as is', without
+warranty of any kind, either expressed or implied, including, but not
+limited to, the implied warranties of merchantability and fitness for a
+particular purpose. The entire risk as to the quality and performance
+of the Work is with you. Should the Work prove defective, you assume
+the cost of all necessary servicing, repair, or correction.
+
+In no event unless required by applicable law or agreed to in writing
+will The Copyright Holder, or any author named in the components of the
+Work, or any other party who may distribute and/or modify the Work as
+permitted above, be liable to you for damages, including any general,
+special, incidental or consequential damages arising out of any use of
+the Work or out of inability to use the Work (including, but not limited
+to, loss of data, data being rendered inaccurate, or losses sustained by
+anyone as a result of any failure of the Work to operate with any other
+programs), even if the Copyright Holder or said author or said other
+party has been advised of the possibility of such damages.
+
+
+MAINTENANCE OF THE WORK
+=======================
+
+The Work has the status `author-maintained' if the Copyright Holder
+explicitly and prominently states near the primary copyright notice in
+the Work that the Work can only be maintained by the Copyright Holder
+or simply that it is `author-maintained'.
+
+The Work has the status `maintained' if there is a Current Maintainer
+who has indicated in the Work that they are willing to receive error
+reports for the Work (for example, by supplying a valid e-mail
+address). It is not required for the Current Maintainer to acknowledge
+or act upon these error reports.
+
+The Work changes from status `maintained' to `unmaintained' if there
+is no Current Maintainer, or the person stated to be Current
+Maintainer of the work cannot be reached through the indicated means
+of communication for a period of six months, and there are no other
+significant signs of active maintenance.
+
+You can become the Current Maintainer of the Work by agreement with
+any existing Current Maintainer to take over this role.
+
+If the Work is unmaintained, you can become the Current Maintainer of
+the Work through the following steps:
+
+ 1. Make a reasonable attempt to trace the Current Maintainer (and
+ the Copyright Holder, if the two differ) through the means of
+ an Internet or similar search.
+
+ 2. If this search is successful, then enquire whether the Work
+ is still maintained.
+
+ a. If it is being maintained, then ask the Current Maintainer
+ to update their communication data within one month.
+
+ b. If the search is unsuccessful or no action to resume active
+ maintenance is taken by the Current Maintainer, then announce
+ within the pertinent community your intention to take over
+ maintenance. (If the Work is a LaTeX work, this could be
+ done, for example, by posting to comp.text.tex.)
+
+ 3a. If the Current Maintainer is reachable and agrees to pass
+ maintenance of the Work to you, then this takes effect
+ immediately upon announcement.
+
+ b. If the Current Maintainer is not reachable and the Copyright
+ Holder agrees that maintenance of the Work be passed to you,
+ then this takes effect immediately upon announcement.
+
+ 4. If you make an `intention announcement' as described in 2b. above
+ and after three months your intention is challenged neither by
+ the Current Maintainer nor by the Copyright Holder nor by other
+ people, then you may arrange for the Work to be changed so as
+ to name you as the (new) Current Maintainer.
+
+ 5. If the previously unreachable Current Maintainer becomes
+ reachable once more within three months of a change completed
+ under the terms of 3b) or 4), then that Current Maintainer must
+ become or remain the Current Maintainer upon request provided
+ they then update their communication data within one month.
+
+A change in the Current Maintainer does not, of itself, alter the fact
+that the Work is distributed under the LPPL license.
+
+If you become the Current Maintainer of the Work, you should
+immediately provide, within the Work, a prominent and unambiguous
+statement of your status as Current Maintainer. You should also
+announce your new status to the same pertinent community as
+in 2b) above.
+
+
+WHETHER AND HOW TO DISTRIBUTE WORKS UNDER THIS LICENSE
+======================================================
+
+This section contains important instructions, examples, and
+recommendations for authors who are considering distributing their
+works under this license. These authors are addressed as `you' in
+this section.
+
+Choosing This License or Another License
+----------------------------------------
+
+If for any part of your work you want or need to use *distribution*
+conditions that differ significantly from those in this license, then
+do not refer to this license anywhere in your work but, instead,
+distribute your work under a different license. You may use the text
+of this license as a model for your own license, but your license
+should not refer to the LPPL or otherwise give the impression that
+your work is distributed under the LPPL.
+
+The document `modguide.tex' in the base LaTeX distribution explains
+the motivation behind the conditions of this license. It explains,
+for example, why distributing LaTeX under the GNU General Public
+License (GPL) was considered inappropriate. Even if your work is
+unrelated to LaTeX, the discussion in `modguide.tex' may still be
+relevant, and authors intending to distribute their works under any
+license are encouraged to read it.
+
+A Recommendation on Modification Without Distribution
+-----------------------------------------------------
+
+It is wise never to modify a component of the Work, even for your own
+personal use, without also meeting the above conditions for
+distributing the modified component. While you might intend that such
+modifications will never be distributed, often this will happen by
+accident -- you may forget that you have modified that component; or
+it may not occur to you when allowing others to access the modified
+version that you are thus distributing it and violating the conditions
+of this license in ways that could have legal implications and, worse,
+cause problems for the community. It is therefore usually in your
+best interest to keep your copy of the Work identical with the public
+one. Many works provide ways to control the behavior of that work
+without altering any of its licensed components.
+
+How to Use This License
+-----------------------
+
+To use this license, place in each of the components of your work both
+an explicit copyright notice including your name and the year the work
+was authored and/or last substantially modified. Include also a
+statement that the distribution and/or modification of that
+component is constrained by the conditions in this license.
+
+Here is an example of such a notice and statement:
+
+ %% pig.dtx
+ %% Copyright 2008 M. Y. Name
+ %
+ % This work may be distributed and/or modified under the
+ % conditions of the LaTeX Project Public License, either version 1.3
+ % of this license or (at your option) any later version.
+ % The latest version of this license is in
+ % https://www.latex-project.org/lppl.txt
+ % and version 1.3c or later is part of all distributions of LaTeX
+ % version 2008 or later.
+ %
+ % This work has the LPPL maintenance status `maintained'.
+ %
+ % The Current Maintainer of this work is M. Y. Name.
+ %
+ % This work consists of the files pig.dtx and pig.ins
+ % and the derived file pig.sty.
+
+Given such a notice and statement in a file, the conditions
+given in this license document would apply, with the `Work' referring
+to the three files `pig.dtx', `pig.ins', and `pig.sty' (the last being
+generated from `pig.dtx' using `pig.ins'), the `Base Interpreter'
+referring to any `LaTeX-Format', and both `Copyright Holder' and
+`Current Maintainer' referring to the person `M. Y. Name'.
+
+If you do not want the Maintenance section of LPPL to apply to your
+Work, change `maintained' above into `author-maintained'.
+However, we recommend that you use `maintained', as the Maintenance
+section was added in order to ensure that your Work remains useful to
+the community even when you can no longer maintain and support it
+yourself.
+
+Derived Works That Are Not Replacements
+---------------------------------------
+
+Several clauses of the LPPL specify means to provide reliability and
+stability for the user community. They therefore concern themselves
+with the case that a Derived Work is intended to be used as a
+(compatible or incompatible) replacement of the original Work. If
+this is not the case (e.g., if a few lines of code are reused for a
+completely different task), then clauses 6b and 6d shall not apply.
+
+
+Important Recommendations
+-------------------------
+
+ Defining What Constitutes the Work
+
+ The LPPL requires that distributions of the Work contain all the
+ files of the Work. It is therefore important that you provide a
+ way for the licensee to determine which files constitute the Work.
+ This could, for example, be achieved by explicitly listing all the
+ files of the Work near the copyright notice of each file or by
+ using a line such as:
+
+ % This work consists of all files listed in manifest.txt.
+
+ in that place. In the absence of an unequivocal list it might be
+ impossible for the licensee to determine what is considered by you
+ to comprise the Work and, in such a case, the licensee would be
+ entitled to make reasonable conjectures as to which files comprise
+ the Work.
M plugins/staticmath/blueprints.yaml => plugins/staticmath/blueprints.yaml +7 -7
@@ 1,18 1,18 @@
-name: staticmath
-slug: StaticMath
+name: StaticMath
+slug: staticmath
type: plugin
-version: 2.0.0
-description: Converts LaTeX to static math
-icon: plug
+version: 2.0.1
+description: "Compiles LaTeX to static and accessible MathML using an [external rendering server](https://git.sr.ht/~fd/staticmath-server)."
+icon: square-root-alt
author:
name: Ersei Saggi
email: contact@ersei.net
homepage: https://sr.ht/~fd/grav-plugin-staticmath
demo: https://ersei.net/en/blog/rsa-basics
-keywords: grav, plugin, etc
+keywords: math, latex
bugs: https://todo.sr.ht/~fd/grav-plugin-staticmath
docs: https://git.sr.ht/~fd/grav-plugin-staticmath/tree/main/item/README.md
-license: MIT
+license: MIT, LPPL-1.3c
dependencies:
- { name: grav, version: '>=1.6.0' }
M plugins/staticmath/languages.yaml => plugins/staticmath/languages.yaml +0 -4
@@ 11,7 11,3 @@ en:
NO: "No"
ENABLED: "Enabled"
DISABLED: "Disabled"
- OUTPUT_MODE: "Output Mode"
- HTML: "HTML Only"
- MATHML: "MathML Only"
- HTML_AND_MATHML: "HTML and MathML"
M plugins/staticmath/staticmath.php => plugins/staticmath/staticmath.php +98 -21
@@ 1,6 1,6 @@
<?php
/**
- * Grav StaticMath plugin v2.0.0
+ * Grav StaticMath plugin v2.0.1
*
* This plugin renders math server-side and displays it to the client with
* Temml.
@@ 8,7 8,7 @@
* Based on the code from the Grav MathJax plugin: https://github.com/sommerregen/grav-plugin-mathjax
*
* @package StaticMath
- * @version 2.0.0
+ * @version 2.0.1
* @link <https://sr.ht/~fd/grav-plugin-staticmath>
* @author Ersei Saggi <contact@ersei.net>
* @copyright 2024, Ersei Saggi
@@ 20,7 20,7 @@ use Composer\Autoload\ClassLoader;
use Grav\Common\Plugin;
use RocketTheme\Toolbox\Event\Event;
use Grav\Common\Page\Page;
-use Grav\Common\Data\Blueprints;
+use Grav\Common\Grav;
/**
* Class StaticmathPlugin
@@ 49,9 49,10 @@ class StaticmathPlugin extends Plugin
public static function getSubscribedEvents(): array
{
return [
- 'onPluginsInitialized' => ['onPluginsInitialized', 0],
- 'onGetPageBlueprints' => ['onGetPageBlueprints', 0]
- ];
+ 'onPluginsInitialized' => ['onPluginsInitialized', 0],
+ 'onGetPageBlueprints' => ['onGetPageBlueprints', 0],
+ 'onMarkdownInitialized' => ['onMarkdownInitialized', 0],
+ ];
}
/**
@@ 65,12 66,12 @@ class StaticmathPlugin extends Plugin
}
/**
- * Register shortcodes
- */
- public function onShortcodeHandlers()
- {
- $this->grav['shortcode']->registerAllShortcodes(__DIR__ . '/shortcodes');
- }
+ * Register shortcodes
+ */
+ public function onShortcodeHandlers()
+ {
+ $this->grav['shortcode']->registerAllShortcodes(__DIR__ . '/shortcodes');
+ }
/**
* Initialize the plugin
@@ 83,10 84,68 @@ class StaticmathPlugin extends Plugin
return;
}
- $this->enable([
- 'onShortcodeHandlers' => ['onShortcodeHandlers', 0],
- 'onPageInitialized' => ['onPageInitialized', 0]
- ]);
+ $this->enable([
+ 'onShortcodeHandlers' => ['onShortcodeHandlers', 0],
+ 'onPageInitialized' => ['onPageInitialized', 0]
+ ]);
+ }
+
+ public function onMarkdownInitialized(Event $event)
+ {
+ $markdown = $event['markdown'];
+
+ $page = $this->grav['page'];
+ $config = $this->mergeConfig($page);
+ if (!($config->get('enabled') && $config->get('active'))) {
+ return;
+ }
+ $markdown->addBlockType('$', 'Staticmath', true, false);
+ $markdown->addInlineType('$', 'Staticmath');
+
+ $markdown->blockStaticmath = function($Line) {
+ if (preg_match('/^\$\$$/', $Line['text'], $matches)) {
+ $Block = [
+ 'element' => [
+ 'name' => 'div',
+ 'handler' => 'lines',
+ 'text' => [],
+ ],
+ ];
+
+ return $Block;
+ }
+ };
+
+ $markdown->blockStaticmathContinue = function($Line, array $Block) {
+ if (isset($Block['interrupted'])) {
+ return;
+ }
+ if (!preg_match('/^\$\$$/', $Line['text'])) {
+ $Block['element']['text'][] = $Line['text'];
+ } else {
+ $text = implode(
+ "\n",
+ $Block['element']['text']
+ );
+
+ $Block['element']['text'] = (array) $this->render($text);
+ }
+ return $Block;
+ };
+
+ $markdown->inlineStaticmath = function($Line) {
+ if (preg_match('/\$(.+?)\$/', $Line['text'], $matches)) {
+ $Block = [
+ 'extent' => strlen($matches[0]),
+ 'element' => [
+ 'name' => 'span',
+ 'handler' => 'lines',
+ 'text' => (array) $this->render($matches[1], true),
+ ],
+ ];
+ return $Block;
+ }
+ };
}
/**
@@ 108,10 167,28 @@ class StaticmathPlugin extends Plugin
}
}
- public function onGetPageBlueprints($event)
- {
- $types = $event->types;
- $types->scanBlueprints('plugin://staticmath/blueprints');
- }
+ public function onGetPageBlueprints($event)
+ {
+ $types = $event->types;
+ $types->scanBlueprints('plugin://staticmath/blueprints');
+ }
+
+ private function render($content, $inline = false) {
+ $mode = $inline ? "inline" : "block";
+ $staticmath_server = Grav::instance()['config']->get('plugins.staticmath.server');
+ $postfield = "mode=" . urlencode($mode) . "&data=" . urlencode($content);
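+        // POST the TeX source to the configured rendering server; on any
+        // transport failure, fall back to echoing the raw source in a <pre>.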
+ $ch = curl_init($staticmath_server);
+ curl_setopt($ch, CURLOPT_CUSTOMREQUEST, "POST");
+ curl_setopt($ch, CURLOPT_POSTFIELDS, $postfield);
+ curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
+ curl_setopt($ch, CURLOPT_HTTPHEADER, [
+ 'Content-Length: ' . strlen($postfield)
+ ]);
+ $result = curl_exec($ch);
+ if (!$result) {
+ return "<pre>" . $content . "</pre>";
+ }
+ return $result;
+ }
}
M plugins/staticmath/staticmath.yaml => plugins/staticmath/staticmath.yaml +0 -1
@@ 1,4 1,3 @@
enabled: true
built_in_css: true
-output: "htmlAndMathml"
server: "http://localhost:3000"