- Jul 16, 2024
- Jul 02, 2024
Gigadoc 2 authored
These are supposed to go into a standalone collection, but for now they are here.
Gigadoc 2 authored
Still acme-dns-tiny, but updated (and used for wildcard certificates now)
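Wildcard names can only be validated with the DNS-01 challenge, which is exactly what acme-dns-tiny handles; on the certificate side the wildcard just has to be part of the CSR. A minimal sketch, with placeholder key, domain, and file names:

```sh
# Hypothetical CSR covering the apex and a wildcard; acme-dns-tiny is then
# pointed at this CSR and completes the DNS-01 challenges for both names.
openssl req -new -key example.org.key \
  -subj "/CN=example.org" \
  -addext "subjectAltName=DNS:example.org,DNS:*.example.org" \
  -out example.org.csr
```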
- Jun 24, 2024
- Oct 10, 2020
- Oct 06, 2020
- Sep 19, 2020
Gigadoc 2 authored
I have not used (or updated) it in quite a while, and now Mastodon just looks more attractive. Apart from spambots, all my users (including myself) have not used their accounts in years, so I am just going to delete it all. Sorry, Diaspora devs :(
Gigadoc 2 authored
Let's not go over the Let's Encrypt rate limits.
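One way to stay under those limits is to only talk to the ACME endpoint when the deployed certificate actually needs renewing. A minimal sketch; the paths, variable names, and the separate issue-certificate tasks file are assumptions for illustration, not necessarily this repo's layout:

```yaml
# Sketch, names assumed: skip issuance while the deployed certificate is still
# valid for more than 30 days, so repeated playbook runs don't re-issue
# certificates and eat into the Let's Encrypt rate limits.
- name: Check whether the current certificate is still good for 30 days
  community.crypto.x509_certificate_info:
    path: /etc/pki/tls/certs/example.org.crt
    valid_at:
      renewal: "+30d"
  register: current_cert
  ignore_errors: true            # first run: the certificate does not exist yet

- name: Issue a new certificate
  ansible.builtin.include_tasks: issue-certificate.yml   # hypothetical tasks file
  when: current_cert is failed or not current_cert.valid_at.renewal
```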
Gigadoc 2 authored
Defining same-named variables in includes is kind of stupid, and it broke the v6-only variant. Make the entire thing two functions now (forward ipv4 and forward ipv6, now separate because they didn't share any common configuration anyway), and call the functions with everything as an argument. The internal IPv4 network definitions are now moved up to the top-level file, as the container networks are routed over the main bridge and not their own bridges anymore.
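In ferm terms the shape is roughly the following; the function names, interface, and networks here are made up for illustration, not taken from this repo:

```
# Hypothetical ferm sketch: one function per address family, called with every
# parameter passed explicitly instead of relying on variables set in an include.
@def &FORWARD_IPV4($iface, $net) = {
    domain ip table filter chain FORWARD {
        interface $iface saddr $net ACCEPT;
        outerface $iface daddr $net ACCEPT;
    }
}

@def &FORWARD_IPV6($iface, $net) = {
    domain ip6 table filter chain FORWARD {
        interface $iface saddr $net ACCEPT;
        outerface $iface daddr $net ACCEPT;
    }
}

# the internal network definitions now live in the top-level file:
&FORWARD_IPV4(br0, 10.0.10.0/24);
&FORWARD_IPV6(br0, 2001:db8:10::/64);
```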
Gigadoc 2 authored
The image should always be read-only, but using low-numbered ports is not a problem in this network.
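As a rough illustration (image name, volume path, and port are placeholders): the image stays immutable via `--read-only`, while the service is published directly on its well-known low port, which is acceptable on this internal network.

```sh
# Hypothetical invocation: the root filesystem stays read-only, so writable
# state has to live in explicit volumes; publishing port 80 directly is fine
# here because the network is internal.
podman run --read-only --name example \
    -v /srv/containers/example/data:/data:Z \
    -p 80:80 \
    registry.example.org/example-app:latest
```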
- Sep 18, 2020
Gigadoc 2 authored
More of a workaround right now, as this information is a statically configured variable…
- Sep 13, 2020
Gigadoc 2 authored
The built-in podman volumes don't work very well with ansible and have other downsides (see the content of this commit). This means the existing containers will be moved over to this structure as well, at some point.
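A sketch of what the replacement structure can look like on the Ansible side (paths and variable names are illustrative): a plain, per-container directory owned by the container's dedicated UID, handed to podman as a bind mount instead of a named volume.

```yaml
# Hypothetical: one directory per container under a fixed prefix, owned by the
# dedicated UID/GID, so Ansible can template files into it and manage
# permissions directly instead of going through "podman volume".
- name: Create the container's data directory
  ansible.builtin.file:
    path: "/srv/containers/{{ container_name }}/data"
    state: directory
    owner: "{{ container_uid }}"
    group: "{{ container_gid }}"
    mode: "0750"
```

The directory is then mounted with something like `-v /srv/containers/example/data:/data:Z`, which keeps all container state visible and editable from the host.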
Gigadoc 2 authored
For now, we just take literal commands, to be flexible. If that turns out to be too much trouble later on, it could be made into an "in-container" command. On the other hand, the podman role could also just export the binary location.
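For illustration only (the parameter name is invented), that currently means the role is handed the complete command line verbatim:

```yaml
# Hypothetical parameter: the literal command is passed through unchanged. An
# "in-container" variant would instead take a container name plus a command
# and wrap it in "podman exec" itself.
container_job_command: "podman exec example-db pg_dump --username=app appdb"
```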
Gigadoc 2 authored
A bridge between Matrix rooms and Discord channels.
Gigadoc 2 authored
A bridge connecting Matrix with either personal IM networks via libpurple, or XMPP chatrooms via xmpp.js. Only half-shot knows why this is one project instead of two. I use the latter functionality.
Gigadoc 2 authored
This adds Rocket.Chat with its dependency MongoDB as very simple podman containers. I am probably going to remove them again soon, it was more of an experiment, and both MongoDB and Rocket.Chat are weird.
Gigadoc 2 authored
This role is the main ingredient for the application container setup. It can be used as a normal role, in which case it will just pull in the container-router role and install podman. More importantly though, it can be included from other roles with the `per-container.yml` tasks file. In that case it will "install" a container with the given parameters.

The usual wisdom is that you don't need to install anything, as everything is bundled in the container image, but that is not quite true for real-world deployments. Here, installation means that we create a UID/GID pair for the container to use (please don't just run containers as root, or with a UID that is also used by other containers and/or the host system itself), a CNI network configuration (because app container people have never heard of running stuff without a load balancer, or of services that connect to the internet themselves), and a systemd service file to launch it with dependencies (somewhat) respected (after all, this is why I use podman in the first place).
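Used from another role, that could look roughly like this; the role name and parameter names are assumptions for illustration, only the `per-container.yml` tasks file is from the commit itself:

```yaml
# Hypothetical invocation of the per-container tasks from an application role.
- name: Install the example-app container
  ansible.builtin.include_role:
    name: podman
    tasks_from: per-container.yml
  vars:
    container_name: example-app
    container_image: registry.example.org/example-app:latest
    container_uid: 2001               # dedicated UID/GID, shared with nothing else
    container_network: 10.0.20.0/24   # becomes the routed CNI network config
```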
Gigadoc 2 authored
All my container networks are routed; there is no bridging or overlay networking going on. As such, the container host also acts as a router, and it also firewalls the containers. Keeping the firewall close to the container namespaces themselves (as opposed to just having the central router firewall, like I am doing with the VMs for now) helps make the internal network less of a "trusted zone", hopefully aiding damage control in case of a compromised service. Currently the firewall is realised with ferm; I will probably have to switch to nftables soon though, as Fedora has already dropped ferm. Note that the container routers don't do NAT: I want to avoid NAT as much as possible, so the idea is that only the VM router itself does NAT. As a consequence, the VM router needs to know any and all subnets in use, but that is what the automation is for, right?
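For the firewall that boils down to something like the following ferm sketch (interface name and port are made up): plain filtering and forwarding on the container host, and deliberately no masquerading, since address translation only happens on the VM router.

```
# Hypothetical: the container host filters and forwards, but does not NAT.
domain (ip ip6) table filter chain FORWARD {
    policy DROP;
    mod state state (ESTABLISHED RELATED) ACCEPT;

    # containers may open connections towards the rest of the network themselves
    interface cni-example ACCEPT;
    # only the published service is reachable from outside
    outerface cni-example proto tcp dport 443 ACCEPT;
}
# intentionally no "table nat chain POSTROUTING MASQUERADE" on this host
```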
- Sep 10, 2020