Little bit of everything!

Avid Swiftie (come join us at !taylorswift@poptalk.scrubbles.tech)

Gaming (Mass Effect, Witcher, and too much Satisfactory)

Sci-fi

I live for 90s TV sitcoms

  • 10 Posts
  • 204 Comments
Joined 3 years ago
Cake day: June 2nd, 2023

  • Sure! I use Kaniko (although I see now that it’s not maintained anymore). I’ll probably pull the image in locally to protect it…

    Kaniko handles the Docker-in-Docker part, and I found an action that I use, but it looks like that was taken down… Luckily I archived it! Make an action in Forgejo (I have an infrastructure group that I add public repos to for actions). Mine is called action-koniko-build, and all it has is this action.yml file in it:

    name: Kaniko
    description: Build a container image using Kaniko
    inputs:
      Dockerfile:
        description: The Dockerfile to pass to Kaniko
        required: true
      image:
        description: Name and tag under which to upload the image
        required: true
      registry:
        description: Domain of the registry. Should be the same as the first path component of the tag.
        required: true
      username:
        description: Username for the container registry
        required: true
      password:
        description: Password for the container registry
        required: true
      context:
        description: Workspace for the build
        required: true
    runs:
      using: docker
      image: docker://gcr.io/kaniko-project/executor:debug
      entrypoint: /bin/sh
      args:
        - -c
        - |
          # Write the registry credentials where Kaniko expects them
          mkdir -p /kaniko/.docker
          echo '{"auths":{"${{ inputs.registry }}":{"auth":"'$(printf "%s:%s" "${{ inputs.username }}" "${{ inputs.password }}" | base64 | tr -d '\n')'"}}}' > /kaniko/.docker/config.json
          # Debug only - note this prints your (base64) credentials to the build log
          echo Config file follows!
          cat /kaniko/.docker/config.json
          # Build and push; --insecure allows pushing to a registry without TLS
          /kaniko/executor --insecure --dockerfile ${{ inputs.Dockerfile }} --destination ${{ inputs.image }} --context dir://${{ inputs.context }}
    

    Then you can use it directly like this:

    name: Build and Deploy Docker Image
    
    on:
      push:
        branches:
          - main
      workflow_dispatch:
    
    jobs:
      build:
        runs-on: docker
    
        steps:
        # Checkout the repository
        - name: Checkout code
          uses: actions/checkout@v3
    
        - name: Get current date # This is just how I label my containers, do whatever you prefer
          id: date
          run: echo "date=$(date '+%Y%m%d-%H%M')" >> "$GITHUB_OUTPUT" # ::set-output is deprecated, use $GITHUB_OUTPUT
    
        - uses: path.to.your.forgejo.instance:port/infrastructure/action-koniko-build@main # This is what I said above - it references your infrastructure action, on the main branch
          with:
            Dockerfile: cluster/charts/auth/operator/Dockerfile
            image: path.to.your.forgejo.instance:port/group/repo:${{ steps.date.outputs.date }}
            registry: path.to.your.forgejo.instance:port/v1
            username: ${{ env.GITHUB_ACTOR }}
            password: ${{ secrets.RUNNER_TOKEN }} # I haven't found a good secret option that works well, I should see if they have fixed the built-in token
            context: ${{ env.GITHUB_WORKSPACE }}
    

    I run my runners in Kubernetes in the same cluster as my Forgejo instance, so this all hooks up pretty easily. Let me know if you want to see more of that setup if it’s relevant. The big things are that the runners need to be privileged, and that there’s some complicated stuff where you have to run the runner and the “dind” container together.
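
    A stripped-down sketch of that pattern is below - not my exact manifests; the image versions and names are placeholders, and the runner registration config is omitted:

    # Sketch only: a Forgejo runner with a privileged docker:dind sidecar.
    # Runner registration/secrets omitted; image versions are placeholders.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: forgejo-runner
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: forgejo-runner
      template:
        metadata:
          labels:
            app: forgejo-runner
        spec:
          containers:
            - name: runner
              image: code.forgejo.org/forgejo/runner:6.0.1 # placeholder version
              env:
                # Point the runner's Docker client at the dind sidecar
                - name: DOCKER_HOST
                  value: tcp://localhost:2376
                - name: DOCKER_TLS_VERIFY
                  value: "1"
                - name: DOCKER_CERT_PATH
                  value: /certs/client
              volumeMounts:
                - name: docker-certs
                  mountPath: /certs
            - name: dind
              image: docker:dind
              securityContext:
                privileged: true # this is the unavoidable part
              env:
                - name: DOCKER_TLS_CERTDIR
                  value: /certs
              volumeMounts:
                - name: docker-certs
                  mountPath: /certs
          volumes:
            - name: docker-certs
              emptyDir: {}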

  • “This dance to get access is just a minor annoyance for me, but I question how it proves I’m not a bot. These steps can be trivially and cheaply automated.”

    I don’t think the author understands the point of Anubis. The point isn’t to block bots from your site completely; bots can still get in. The point is to put up a problem at the door to the site. That problem, as the author states, is relatively trivial for the average device to solve - it’s meant to be solved by a phone or any consumer device.

    The actual protection mechanism is scale: solving the problem once is cheap, but solving it at scale is expensive. Bot farms aren’t a single host or machine; they’re thousands, tens of thousands of VMs running in clusters, constantly trying to scrape sites. For them, calculating something that trivial is simple once, but very, very costly at scale. Say calculating the hash once takes about 5 seconds - easy for a phone. Now multiply that by 1,000 scrapes of your site: that’s 5,000 seconds of compute, roughly an hour and a half, and now we’re talking about real dollars and cents lost. Scraping does have a cost, and having worked at a company that scrapes content professionally, I can tell you they know this. Most companies will back off from a page that takes too long to load or is too intensive - and that’s why we see the dropoff in bot attacks: it’s simply not worth it for them to scrape the site anymore.
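
    To make that concrete, the puzzle is in the spirit of hashcash - here’s a toy sketch in shell (not Anubis’s actual implementation, which does its SHA-256 proof-of-work in the browser; the challenge and difficulty here are made up):

    # Toy proof-of-work: find a nonce so that sha256(challenge + nonce)
    # starts with $difficulty zero hex digits.
    challenge="example-challenge-from-server"
    difficulty=4
    nonce=0
    until printf '%s%s' "$challenge" "$nonce" | sha256sum | grep -q "^0\{$difficulty\}"; do
      nonce=$((nonce + 1))
    done
    # Cheap for one page view; multiplied across thousands of requests it adds up.
    echo "solved: nonce=$nonce"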

    So Anubis is “judging your value” by asking: “Are you willing to put your money where your mouth is to access this site?” For a consumer it’s a fraction of a fraction of a penny in electricity for that one page load, barely noticeable. For large bot farms it’s real dollars wasted on my little Lemmy instance/blog, and thankfully they’ve stopped caring.

  • It’s out of date and in desperate need of a rewrite. PHP might have been an okay choice 15 years ago, but no one in their right mind should be using it for modern server development. (Yes, I’m calling out Pixelfed too.) With so many languages and frameworks available, it’s probably one of the worst choices right now.

    Then they proved they don’t really get modern infrastructure either: their Docker containers depend on stateful code, with combinations of environment variables and PHP files that have to be stored in volumes, plus plugins that are also stateful - meaning every new release has to go through an “update” process. That’s the direct opposite of good practice: Docker containers should be 100% immutable and able to run with nothing but a docker run. They also require volume mounts scattered throughout the OS. It was just never designed with containers in mind.
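
    For contrast, this is what “immutable” looks like in practice - a made-up example where all config comes from the environment and all state lives in one declared volume, so the container itself is disposable and upgrading is just swapping the tag:

    # Made-up image and values: config via env vars, state in one named volume.
    docker run -d \
      --name app \
      -e DATABASE_URL=postgres://db:5432/app \
      -v appdata:/var/lib/app \
      example/app:1.2.3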

    I can’t recommend nextcloud right now, it’s incredibly brittle and slow.


  • Agree with others: if you try to do a live replica, it’s going to be very inefficient and your costs will be high. What you’re looking for is a backup - just perform your backups nightly/weekly. Any blob storage will do; work out what pricing works for you. And plan out how you’d do a restore in case everything came crashing down: from the ground up, how would you bring your services back online?
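
    For example, something like restic pointed at any S3-compatible blob storage covers it - a sketch with placeholder bucket, paths, and schedule (restic expects RESTIC_PASSWORD and the storage credentials in the environment):

    # Nightly at 03:00: back up service data to S3-compatible blob storage
    0 3 * * * restic -r s3:s3.example.com/my-backups backup /srv/services --tag nightly
    # And know your restore path before you need it:
    # restic -r s3:s3.example.com/my-backups restore latest --target /srv/restore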