#linkers.ooo

Logs a chat room and previews links posted to it.

It might be running (maybe nsfw) at https://linkers.ooo

#todo

  • Scrolling is still janky. Jumping to the end removes the location #anchor, but it still auto-scrolls. So if you link to the bottom result, like ?stalk=randomferret#most-recent-id, and visit that on a new page, the #anchor part gets removed when the page loads.

  • Only check whether to auto-page up/down when the start or end becomes visible? The issue: with a short list (or all links collapsed), both the start and end are visible, and paging up jumps the start above the viewport so only the end is visible, which immediately pages back down.

  • Increase the limit of items returned in view results?

  • Document some stuff like the configuration or the WebSocket view requests?

#applications

There are three services that run on the bAcKeNd.

  1. coggers; logs chats
  2. linkers; previews links that coggers logs
  3. mmmm; allows users to remove their logs and opt out of future logging

#coggers

Makes a WebSocket connection to log chat messages. Writes everything to a SQLite database.

Also listens for new TCP connections, upgrades them to WebSockets, and handles requests from them for filtering and seeking through the logs.
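
The request format isn't documented yet (see the todo above), but you can poke at the listener by hand with something like websocat. The address here is made up; the real one and the actual view request format are in the coggers source.

```
# Connect and type requests interactively (address is hypothetical).
websocat ws://127.0.0.1:8000/
```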

Doesn't create its own database, so run `sqlite3 some-database-file.sqlite < coggers/coggers.sql` to make one.

Doesn't use tokio, so the event loop is a bit confusing.

#linkers

Connects to coggers over a WebSocket and filters on links without preview data. For each link it sees, it fetches a preview, trying to read OpenGraph data.

For images or videos, it will try to generate a preview using ffmpeg if one isn't already cached. Generating a preview writes a webp thumbnail to the filesystem and an entry to a SQLite database with information about the target, like video duration, file size, and whether the media is animated (has more than one frame).
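
The exact invocation is in the linkers source, but the generated thumbnails are roughly the shape of this ffmpeg/ffprobe sketch (file names made up):

```
# Probe the target for metadata like duration, in seconds.
ffprobe -v error -show_entries format=duration \
    -of default=noprint_wrappers=1:nokey=1 target.mp4

# Write a single-frame webp thumbnail, scaled down to fit.
ffmpeg -i target.mp4 -vf "thumbnail,scale=320:-2" -frames:v 1 thumb.webp
```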

Does use tokio, so the event loop is a bit confusing.

#mmmm

Does an OAuth flow so users can verify their nick, opt out of future logging, and delete their logs.

But there's a limitation: I used an HTTP library that doesn't ship TLS support, I guess, so this program can't use TLS (https://) to talk to the OAuth provider it gets user information from. This happens to work for me because I have a weird nginx config that lets it initiate TLS connections to proxy cleartext HTTP, but it's weird and dumb and not generally useful, so idk...

#web

The webshit that runs in browsers visiting https://linkers.ooo

Most of this is Rust compiled to a WebAssembly program. Uses mogwai to build the DOM and respond to events.

Note about the development workflow: Two compile-time environment variables affect the behaviour of the WebAssembly target.

  • `WS_URL` (default `"wss://linkers.ooo/ws"`) The URI that the WebSocket connects to.
  • `STATICS` (default `""`) A prefix added to src attributes on thumbnail images.

While developing, I'll typically run ninja with STATICS="https://linkers.ooo" so the page loads the thumbnail images hosted there.
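
So a development build ends up looking something like this; the WS_URL value here is made up, point it at wherever your coggers is listening:

```
# Both variables are read at compile time and baked into the wasm.
WS_URL="ws://127.0.0.1:8000/ws" STATICS="https://linkers.ooo" ninja
```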

#woofers

A Rust library of shared stuff: logging, ANSI colors, argument parsing, and cursor/timestamp data types.

#making a release

coggers, linkers, and mmmm builds can be done locally in chroots/environments using rpmbuild.

web builds use bwrap via a build script.

Considerations:

  • The spec file for rpmbuild might not work outside of Fedora.

  • The webshit may require Rust/cargo tooling to be installed on the system. The build does mount ~/.local/bin into the build environment for wasm-bindgen, so maybe it can find rustc and cargo there too, if you happen to use nightly from rustup or whatever people do these days.

(Note to future self; idfk how anybody is supposed to remember this: `env -C /srv/www/coggers createrepo_c --deltas ./` glhf.)
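
For whatever that's worth, publishing is roughly this; the rpmbuild output path is the usual default and an assumption here:

```
# Copy freshly built rpms next to the served repo, then rebuild
# the repository metadata in place.
cp ~/rpmbuild/RPMS/x86_64/*.rpm /srv/www/coggers/
env -C /srv/www/coggers createrepo_c --deltas ./
```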

#coggers linkers mmmm

Building these normally is just the usual cargo build, but making a release uses some scripts to tar up the sources and then compile a binary rpm.

  • First, commit. The build script packages & builds sources from HEAD.

    FYI: Cargo.lock isn't committed to master, so the build script will make one. This is not great, since release builds will commonly build against different dependencies than development builds.

    Instead, the lock file should be committed in a commit tagged for the release. And the sources should be taken from that tagged commit instead of HEAD. But I am too lazy to do that.

  • Run `./rpm-build.sh NN`, where NN is a release number bigger than the last (see the sketch after this list).

    I haven't been bumping the versions specified in Cargo.toml, because nobody cares; instead I just pick a natural number higher than the last one I used.

  • mmmm's templates depend on a couple of files, like mmmm.css, that need to be manually put in the right place depending on how it's being hosted. It also depends on ptsans.css from webshit. Some of that is hard-coded in the templates, so glhf.
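
Put together, a release is roughly this sketch; the commit message and release number are made up:

```
# Commit first; the build script packages sources from HEAD.
git commit -am "whatever changed"
# Pick a release number bigger than the last one you used.
./rpm-build.sh 14
```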

#webshit

  • ./bwrap-build [tmpdir]

    This should build everything into tmpdir, or into a new temporary directory if you didn't pass one. On success, the output directory should be the only thing written to stdout.

  • Then rsync it into the cloud or whatever, something like the sketch below.
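
The rsync destination here is made up:

```
# bwrap-build prints the output directory on stdout on success,
# so capture it and ship the contents somewhere public.
out="$(./bwrap-build)" || exit
rsync -rv "$out"/ you@yourhost:/srv/www/linkers.ooo/
```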

#license

Apache-2.0
