Tuesday, March 7, 2023

Reading downloaded logs from Quay.io

Quay is the container registry service sponsored by Red Hat, Inc., based on the projectquay.io free/open source project.

It also supports building images. One issue, though, is that build logs are downloaded in a custom JSON format.

A simple example is:

> {"logs":[{"data":{"datetime":"2023-03-07 15:19:36.159268"},"message":"build-scheduled","type":"phase"}]}


In this short post I give you a very simple way to read those logs in the terminal:

> jq -r '.logs[].message' < /tmp/98afc879-8ef1-4cc6-9425-cf5e77712a5f.json
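
If you also want the timestamps next to the messages, a slightly longer jq filter does the trick (assuming the same log layout as the example above):

> jq -r '.logs[] | "\(.data.datetime) \(.message)"' < /tmp/98afc879-8ef1-4cc6-9425-cf5e77712a5f.json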

Wednesday, April 28, 2021

Updating UEFI boot record on Fedora

This is more of a personal note.

Basically, grub-install is deemed unnecessary now. If you use it, you will break Secure Boot. To restore your boot record, you can do

sudo dnf reinstall shim-* grub2-*
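
To double-check that Secure Boot is still in order afterwards, mokutil can report its current state (assuming the mokutil package is installed):

sudo mokutil --sb-state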

Then, if you also need to regenerate your grub config (which you normally should not need):

Fedora 33 and older:

sudo grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg

Fedora 34 and newer:

sudo grub2-mkconfig -o /boot/grub2/grub.cfg

In case you have upgraded to Fedora 34 from an earlier version, you had better use sudo rpmconf -a to restore /boot/efi/EFI/fedora/grub.cfg to the new default version.


While at it, I also learned about the efibootmgr utility. It appeared interesting, although at the time I didn't know what use one could have of it ¯\_(ツ)_/¯

Actually efibootmgr can help in case you have a messed up boot entry that doesn't actually boot to grub or whatever boot manager (or UKI image) you desire.

# list current entries
sudo efibootmgr
# remove existing entry
sudo efibootmgr -B -b 0
# create a new entry
sudo efibootmgr --create --disk /dev/nvme0n1 --part 1 -L Fedora -l '\EFI\fedora\grubx64.efi'
# change boot order (you see current order with the first command)
sudo efibootmgr -o 0000,0001,001C,001D,001E,001F,0020,0021,0022,0023,0024,0025

Important: when performing the fixes above, make sure to use a Fedora live image or a netinst image in recovery mode, lest you mess up the SELinux labeling and the machine fails to start with "Failed to mount API filesystem" (as it happened to me). Then you will have to boot with the enforcing=0 kernel cmdline argument and run fixfiles relabel to fix that up.
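
For reference, the recovery in my case boiled down to roughly this (a sketch; adjust to your setup):

# boot once with enforcing=0 added to the kernel command line, then:
sudo fixfiles relabel
sudo reboot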


Monday, April 26, 2021

Rsync between volumes on two different OpenShift clusters

This is a short HOWTO about rsync-ing data between 2 distinct OpenShift clusters.

You always have the option to oc rsync the data from the source OpenShift cluster to your local workstation and then oc rsync from your workstation to the target cluster. But if you have half a terabyte of data, you may not have enough space, or it may take several days because of network bandwidth limitations.

The method I describe below avoids such inefficiencies, and the rsync process is restarted in case some network or system glitch kills it.

It basically works by having:

  • a kubeconfig file with access to the target OpenShift cluster inside a secret on the source OpenShift cluster
  • a pod on the target OpenShift cluster with the target volume mounted
  • a pod on the source OpenShift cluster with the source volume and the kubeconfig secret mounted, and an entrypoint running oc rsync

So let's start with generating a proper kubeconfig secret.

$ touch /tmp/kubeconfig
$ chmod 600 /tmp/kubeconfig
$ oc login --config=/tmp/kubeconfig # make sure to use target cluster API endpoint
$ oc project my-target-cluster-namespace --config=/tmp/kubeconfig 
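Before switching back, it doesn't hurt to sanity-check that the file actually works against the target cluster (an optional check, using the same --config flag as above):
$ oc get pods --config=/tmp/kubeconfig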
Note that the commands below will run against the source OpenShift cluster.
$ oc login # use source cluster API endpoint
$ oc create secret generic kubeconfig --from-file=config=/tmp/kubeconfig

I will assume that you have your target pod already running inside the target cluster. Otherwise you can create one similar to the pod in the source cluster below; just use some entrypoint command to keep it permanently running, for example /bin/sleep 1000000000000000000.

Now all we need to do is run a proper pod in source cluster to do the rsync task. Here is an example pod YAML with comments to make clear what to use in your situation:

apiVersion: v1
kind: Pod
metadata:
  name: rsync-pod
  namespace: my-namespace-on-source-cluster
spec:
  containers:
    # use client version ±1 of target OpenShift cluster version
    - image: quay.io/openshift/origin-cli:4.6
      name: rsync
      command:
      - "oc"
      args:
      - "--namespace=my-target-cluster-namespace"
      - "--kubeconfig=/run/secrets/kube/config"
      # insecure TLS is not recommended but is a quick hack to get you going
      - "--insecure-skip-tls-verify=true"
      - "rsync"
      - "--compress=true"
      - "--progress=true"
      - "--strategy=rsync"
      - "/path/to/data/dir/"
      - "target-pod-name:/path/to/data/dir/"
      volumeMounts:
        - mountPath: /path/to/data/dir
          name: source-data-volume
        - mountPath: /run/secrets/kube
          name: kubeconfig
          readOnly: true
  # restart policy will keep restarting your pod until rsync completes successfully
  restartPolicy: OnFailure
  terminationGracePeriodSeconds: 30
  volumes:
    - name: source-data-volume
      persistentVolumeClaim:
        claimName: source-persistent-volume-claim-name
    - name: kubeconfig
      secret:
        defaultMode: 420
        secretName: kubeconfig
The last needed command is to create this pod inside the source cluster:
$ oc create -f rsync-pod.yaml
Now check what state your pod is in:
$ oc describe pod rsync-pod
If it started properly, then monitor the progress:
$ oc logs -f rsync-pod
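
Once the rsync finishes successfully, the pod will show status Completed and stop being restarted. Then you can clean up (an obvious follow-up, using the names from this example):

$ oc delete pod rsync-pod
$ oc delete secret kubeconfig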

Friday, December 4, 2020

Why Linux sucks with drivers?

I just found this blog post as an unpublished draft with only one line in it:

I never liked Microsoft in particular. But no

I honestly can't remember what I intended to write here. My guess is that I was frustrated with ROCm and the state of GPU drivers and frameworks on Linux. And the situation is still quite frustrating.

But I recently saw the situation with Windows drivers, and I'm now convinced Windows is no better unless you're buying the latest hardware.

Last week I upgraded my home Wi-Fi router to the latest OpenWrt with more secure settings, including optional WPA3 support. And while everything else started to work better, one Windows 10 laptop started disconnecting from the network very often, and network performance was not enough to play YouTube videos.

The machine is a pretty decent one, an HP Inspiron 15 3000 series with an i7 CPU and a decent amount of RAM, though a few years old. So I thought that the old Atheros/Qualcomm Wi-Fi card needed a driver update.

What I found on the HP website was from 2017 and didn't yield any better results. Then, with some fear, I tried ath-drivers.eu as an unofficial driver source and the latest driver for the card, from 2019. No luck either.

Now I had the option to keep the old router configured just for this laptop. But this didn't sound right and would still compromise the whole local network. So I decided to find a second-hand mPCIe Wi-Fi card. Choices basically boil down to old Intel, Broadcom and Realtek models. Realtek is the one I did *not* try, due to lack of reputation.

I found a guy who had both Intel and Broadcom models, so I could take both home and see which one works better. The Intel model only had drivers from 2013 in Windows Update (looking at what is available, the latest mPCIe Intel model I found is discontinued and has 2019 drivers). The Broadcom had something from 2016.

I wanted to try the Broadcom first due to its Bluetooth 4.0 LE support; some new mice and other devices only support that version. It performed well, but the computer crashed a few times within a single day.

Finally I tried the Intel with its 2013 driver. Now that worked rock solid and fast. The downside is only Bluetooth 3.0 support, but the mouse can also be used with a receiver, so I guess it should be good enough. I see Bluetooth 4 USB dongles for $5, so it is not a big deal to add such support if needed in the future.

Unfortunately this card will never get WPA3 support, and I have no idea whether the recent WPA2 vulnerabilities have been fixed for it somehow or not.

In conclusion, I see that for older hardware that is not ancient, just a little old but perfectly fine, Linux still has much better support.

I'm sorry if you didn't expect just another Windows rant with a click-bait title. Still, I needed to express my frustration with the state of computing. And I don't mention Apple here; it is such a closed ecosystem that no amount of polish can fix it.

Wednesday, April 1, 2020

Back to third grade with SSH or how to setup ~/.ssh/authorized_keys

Very often I am asked to SSH to a machine, just to hit "access denied". A few roundtrips are then needed until the issue is resolved. Here are my commands (run as root, in the user's home directory) to get it working the first time.

# mkdir .ssh
# vi .ssh/authorized_keys # add user's public key here
# chown -R user:user .ssh
# chmod 700 .ssh
# chmod 600 .ssh/authorized_keys
# restorecon -R .ssh

The last one is for SELinux-enabled distributions.
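
One more pitfall worth checking if access is still denied (not in the list above, but sshd is picky about it): the user's home directory must not be group- or world-writable.

# chmod go-w /home/user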

Hope you find it useful.

Thursday, November 21, 2019

Using authenticated proxy with Selenium / Packaging Chrome extensions with Ruby

Overview

Recently I got a request to implement authenticated proxy support for our product test framework. The problem is that recent browsers do not allow the widely popular http://username:password@proxy.example.com syntax and still ask you to enter credentials manually.

The next problem is that Selenium does not let you interact with these basic auth dialogs [1][2]. So how should one go about this?

Chrome allows you to do this with a custom extension that you can insert with selenium/watir.


One additional complication is that we can use a different proxy server each time. Thus the extension needs to be packaged on the fly.

Chrome extension

This is the proxy extension as I use it. See it as an example for whatever you'll be trying to do. It consists of only 2 files you can put in an empty directory.

manifest.json

{
    "version": "0.0.1",
    "manifest_version": 2,
    "name": "Authenticated Proxy",
    "permissions": [
        "<all_urls>",
        "proxy",
        "unlimitedStorage",
        "webRequest",
        "webRequestBlocking",
        "storage",
        "tabs"
    ],
    "background": {
        "scripts": ["background.js"]
    },
    "minimum_chrome_version":"23.0.0"
}

background.js.erb

var config = {
  mode: "fixed_servers",
  rules: {
    singleProxy: {
      scheme: "<%= proxy_proto %>",
      host: "<%= proxy_host %>",
      port: parseInt(<%= proxy_port %>)
    },
    bypassList: <%= proxy_bypass.split(/[ ,]/).delete_if(&:empty?).to_json %>
  }
};

chrome.proxy.settings.set({value: config, scope: "regular"}, function() {});

function callbackFn(details) {
  return {
    authCredentials: {
      username: "<%= proxy_user %>",
      password: "<%= proxy_pass %>"
    }
  };
}

chrome.webRequest.onAuthRequired.addListener(
  callbackFn,
  {urls: ["<all_urls>"]},
  ['blocking']
);

Protocol Buffers

As you can see on the project web site, Protocol Buffers is a method of serializing structured data. For the CRX3 format (unlike CRX2), it is used for the required header of the extension.

I decided to use the ruby-protobuf project instead of the Google ruby library because it appeared well maintained and is pure Ruby. I assume the Google ruby library would work well too.

The Packager

A CRX v3 file consists of:
  • Cr24 - ASCII 8bit magic string
  • 3 - protocol version in unsigned 32bit little endian
  • header length in bytes in unsigned 32bit little endian
  • header itself - the protobuf serialized object
    • crx3.proto - the protobuf descriptor
    • as a rule of thumb
      •  all lengths inside are given as unsigned 32bit little-endian integers
      • key files are inserted in PKCS#8 binary encoding (Ruby's key.to_der worked fine)
  • ZIP archive of the extension files

Generating protobuf stub

We need to install the Google protobuf compiler protoc. You can save the protocol file in the directory where you want the stub to live. Then generate it with:

protoc --plugin=protoc-gen-ruby-protobuf=`ls ~/bin/protoc-gen-ruby` --ruby-protobuf_out=./ path/chrome_crx3/crx3.proto
This will create a file crx3.pb.rb in the same directory as the protocol file. All you need is to require 'path/crx3.pb.rb' wherever you want to use that format.

Actual packager

At this point the packager is straightforward to implement. I'm pasting the whole logic here.

We have one ::zip method to generate a ZIP archive in memory. If an ERB binding is provided by the caller, any .erb files are processed. That's how the above background.js.erb works.

The method ::header_v3_extension generates the signature and constructs the whole file header.

Finally ::pack_extension just glues the two methods above to generate the final extension.

chrome_extension.rb

require 'erb'
require 'find'
require 'openssl'
require 'zip'

require_relative 'resource/chrome_crx3/crx3.pb.rb'

class ChromeExtension
  def self.gen_rsa_key(len=2048)
    OpenSSL::PKey::RSA.generate(len)
  end

  #  @note file format spec pointers:
  #    https://groups.google.com/a/chromium.org/d/msgid/chromium-extensions/977b9b99-2bb9-476b-992f-97a3e37bf20c%40chromium.org
  def self.header_v3_extension(data, key: nil)
    key ||= gen_rsa_key()

    digest = OpenSSL::Digest.new('sha256')
    signed_data = Crx_file::SignedData.new
    signed_data.crx_id = digest.digest(key.public_key.to_der)[0...16]
    signed_data = signed_data.encode

    signature_data = String.new(encoding: "ASCII-8BIT")
    signature_data << "CRX3 SignedData\00"
    signature_data << [ signed_data.size ].pack("V")
    signature_data << signed_data
    signature_data << data

    signature = key.sign(digest, signature_data)

    proof = Crx_file::AsymmetricKeyProof.new
    proof.public_key = key.public_key.to_der
    proof.signature = signature

    header_struct = Crx_file::CrxFileHeader.new
    header_struct.sha256_with_rsa = [proof]
    header_struct.signed_header_data = signed_data
    header_struct = header_struct.encode

    header = String.new(encoding: "ASCII-8BIT")
    header << "Cr24"
    header << [ 3 ].pack("V") # version
    header << [ header_struct.size ].pack("V")
    header << header_struct

    return header
  end

  # @param file [String] to write result to
  # @param dir [String] to read extension from
  # @param key [OpenSSL::PKey]
  # @param crxv [String] version of CRX file to create
  # @param erb_binding [Binding] optional if you want to process ERB files
  # @return undefined
  def self.pack_extension(file:, dir:, key: nil, crxv: "v3", erb_binding: nil)
    zip = zip(dir: dir, erb_binding: erb_binding)

    File.open(file, 'wb') do |io|
      io.write self.send(:"header_#{crxv}_extension", zip, key: key)
      io.write zip
    end
  end

  # @param dir [String] to read extension from
  # @param erb_binding [Binding] optional if you want to process ERB files
  # @return [String] the zip file content
  def self.zip(dir:, erb_binding: nil)
    dir_prefix_len = dir.end_with?("/") ? dir.length : dir.length + 1
    zip = StringIO.new
    zip.set_encoding "ASCII-8BIT"
    Zip::OutputStream::write_buffer(zip) do |zio|
      Find.find(dir) do |file|
        if File.file? file
          if erb_binding && file.end_with?(".erb")
            zio.put_next_entry(file[dir_prefix_len...-4])
            erb = ERB.new(File.read file)
            erb.location = file
            zio.write(erb.result(erb_binding))
            Kernel.puts erb.result(erb_binding)
          else
            zio.put_next_entry(file[dir_prefix_len..-1])
            zio.write(File.read(file))
          end
        end
      end
    end
    return zip.string
  end
end

Using the packager

Packing the extension is as simple as:
require 'chrome_extension'

ChromeExtension.pack_extension(file: "/path/of/target/extension.crx", dir: "/path/of/proxy/extension")
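
If you want the ERB template above (background.js.erb) rendered with actual proxy settings, pass a binding as well. A minimal sketch with made-up values:

proxy_proto  = "http"
proxy_host   = "proxy.example.com"
proxy_port   = "3128"
proxy_user   = "myuser"
proxy_pass   = "mypass"
proxy_bypass = "localhost,127.0.0.1"

# the local variables above become visible to the .erb templates through this binding
ChromeExtension.pack_extension(file: "/path/of/target/extension.crx",
                               dir: "/path/of/proxy/extension",
                               erb_binding: binding)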

Using the extension with Watir

proxy_proto, proxy_user, proxy_pass, proxy_host, proxy_port = <...>
chrome_caps = Selenium::WebDriver::Remote::Capabilities.chrome()
chrome_caps.proxy = Selenium::WebDriver::Proxy.new(http: "#{proxy_proto}://#{proxy_host}:#{proxy_port}", ssl: "#{proxy_proto}://#{proxy_host}:#{proxy_port}")
# there is a bug in Watir where providing an object here results in an error 
# options = Selenium::WebDriver::Chrome::Options.new
# options.add_extension proxy_chrome_ext_file if proxy_chrome_ext_file
options = {}
options[:extensions] = [proxy_chrome_ext_file] if proxy_chrome_ext_file
browser = Watir::Browser.new :chrome, desired_capabilities: chrome_caps, switches: chrome_switches, options: options

Bonus content - CRX2 method


  #  @note original crx2 format description https://web.archive.org/web/20180114090616/https://developer.chrome.com/extensions/crx
  def self.header_v2_extension(data, key: nil)
    key ||= gen_rsa_key()
    digest = OpenSSL::Digest.new('sha1')
    header = String.new(encoding: "ASCII-8BIT")

    # it is exactly the same signature as `ssh_do_sign(data)` from net/ssh produces
    signature = key.sign(digest, data)
    signature_length = signature.length
    pubkey_length = key.public_key.to_der.length

    header << "Cr24"
    header << [ 2 ].pack("V") # version
    header << [ pubkey_length ].pack("V")
    header << [ signature_length ].pack("V")
    header << key.public_key.to_der
    header << signature

    return header
  end


Monday, April 15, 2019

Accessing namespaces of a docker/podman container (nsenter)

There is a nice utility `nsenter` that allows you to switch to the namespaces of another process. It took me considerable time to find it today, so I thought to write a short blog post about it.

Now I have a Podman container (for Docker, just use the `docker` command instead of `podman` below). I started that container with:

$ sudo podman run -t -a STDIN -a STDOUT -a STDERR --rm=true --entrypoint /bin/bash quay.io/example/image:version

I've been running some testing in it, but it turned out I wanted to increase limits without destroying my preparations by exiting the process. So the first thing is to figure out the PID namespace of my container:

$ sudo podman ps --ns
CONTAINER ID  NAMES                PID   CGROUPNS    IPC         MNT         NET         PIDNS       USERNS      UTS
a147a3a5b35f  fervent_stonebraker  1408  4026531835  4026532431  4026532429  4026532360  4026532432  4026531837  4026532430

I see the different namespaces, but `nsenter` requires a file name to switch to a PID namespace. So I will use the PID information in the above output.

$ sudo nsenter --pid=/proc/1408/ns/pid

The above starts a shell for me in the PID namespace of my container. Now I want to change the limits. The interesting thing to note here is that I target PID 1, as that is the PID of my bash shell inside the container:

$ sudo prlimit --rss=-1 --memlock=33554432 --pid 1

Finally verify limits in my container shell:

bash-4.2$ ulimit -a
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 23534
max locked memory       (kbytes, -l) 32768
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1048576
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 16384
cpu time               (seconds, -t) unlimited
max user processes              (-u) 1048576
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

One interesting thing is `ps` inside the namespace. If I run these two:

$ ps -ef
$ sudo nsenter --pid=/proc/1408/ns/pid ps -ef

They will show exactly the same output. That is because I still have the same `/proc` mounted even though my PID namespace is changed, and that is what `ps` looks at.
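
If I also enter the mount namespace, `ps` sees the container's own `/proc` and lists only the container's processes (provided the image ships a `ps` binary). A quick sketch, reusing PID 1408 from the output above:

$ sudo nsenter --target 1408 --pid --mount ps -ef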

With `nsenter` you can switch to any namespace, not only the PID one. I hope this is a useful short demonstration of how to do fun things with Linux namespaces.

Some links:
  • https://lwn.net/Articles/531114/ - namespaces overview series