migrate to astro
This commit is contained in:
parent
82150df591
commit
5e67b2bb0d
135 changed files with 5886 additions and 8330 deletions
153
src/content/blog/chatgpt-cover-letters.mdx
Normal file
@@ -0,0 +1,153 @@
---
pubDate: '2023-01-02'
title: 'Generate cover letters with ChatGPT'
description: 'With the help of ChatGPT it is fairly easy to generate custom-tailored cover letters for job applications with your own CV'
keywords:
- ChatGPT
- applications
- cover letters
- CV
---

Something I hate is dealing with useless, repetitive tasks that serve no real purpose except to please society's customs. Usually, I opt out of them by not playing the game, but sometimes not following the norm can seriously inhibit the chances of success. Other times, tasks like these can be automated with a little bit of effort.

Cover letters for job applications are such a thing. Luckily, with the rise of capable machine learning models like ChatGPT, we can automate this particular task. Now we can generate custom-tailored cover letters for job applications with our own CV. Thinking about it, I should probably not publish this on my website, but who am I to pretend technology is not advancing?

It boils down to the following steps:

- Copy-paste your CV into the text field
- Copy-paste the job description into the text field
- Instruct the model to write a cover letter

The model will then generate a cover letter that is tailored to your CV and the job description. The generated cover letter is not perfect, but it is a good starting point for you to edit and improve. You can also use it as a template for future applications. Something you really need to look out for is factual errors. During countless hours of experimenting with it, I found that the model is often fantasizing: sometimes it makes things up and sounds very confident while doing so. You better check if you apply to MedTech and it writes something about a leading firm in low-latency HFT.

## A short example

I don't want to get into any legal trouble, so I left out the job description. But here is my CV and the generated cover letter. At first, I tried to copy my CV in LaTeX, but the resulting cover letter was worse than with a plain copy-pasted version of my CV.

My prompt:

```
I want you to generate a cover letter based on the following data:

job description:
\`\`\`
copy-paste job description here
\`\`\`

my cv:
\`\`\`
Work
9/22 – now Working Student DevOps, xxxxxxxxxxxxxxxxxxxxx
○ Designing, implementing high-availability infrastructure with Kubernetes on AWS
○ Setting up a monitoring and observability stack with Prometheus, Grafana, Loki
○ Developing custom Netbox plugins for network automation with Python and Ansible
03/16 – 05/21 Accounting, xxxxxxxxx

Education
10/18 – 04/23 B. Sc. Informatics, Technical University Munich
Thesis: Lightweight low-latency virtual networking, graded 1.0
Final expected grade: xxx
10/17 – 10/18 B. Sc. Business Informatics, Technical University Munich
2 semesters, incomplete
09 – 05/17 Abitur, xxxxxxxxxxxxxxxxxxxxx
Focus on computer science and economy
Final grade: xxx

Skills
Linux daily driver, administration, virtualization (KVM, OS-level)
Networking in-depth knowledge of the TCP/IP stack, focused on networking during B.Sc.
Tools git, LXC, Docker, Ansible, Jenkins, Kubernetes, AWS
Prog. Languages JavaScript, Java, Python, Bash, C, SQL, Rust
Languages German (native), English (C1), Chinese (A1), Swedish (A1)

Projects
04/22 – 08/22 Thesis: Lightweight low-latency virtual networking, graded 1.0
○ Evaluate viability of containers for low-latency networking
○ Integration of LXC into the chair's infrastructure; orchestration service (pos), timestamping setup (MoonGen), various Python and Bash scripts (HVNet)
○ Results: LXC performs identical compared to KVM, but is less isolated
04/21 – now Gear Optimizer, 1000 daily unique users, https://optimizer.discretize.eu/
○ Guild Wars 2 (video game) damage simulator: helps players to optimize their characters for highly configurable scenarios. Computation runs on the client side in JS.
○ Algorithm for lossless compression of large state objects into URL-safe strings
○ Tech stack (SPA): Vite, React, Redux, Material-UI, i18n, Emotion, serverless (Cloudflare)
○ Future: migrate computation core to multithreaded WebAssembly with Rust

Hobbies and interests
○ reading books ○ playing the piano
○ lightweight, efficient software ○ maintaining my homelab
\`\`\`

Please generate a cover letter with references to my CV and the job description.
```

The result is not too shabby. There are a couple of mistakes, for example:

- I am a native German speaker (the letter claims only a basic understanding)
- "Familiarity with network topologies" sounds odd
- "bash" and "docker" are written in lower case
- "basic programming skills" sounds weak; nowhere did I mention "basic"

```
Dear [Hiring Manager],

I am writing to express my interest in the IT Infrastructure Lead position at xxxxxxx. As a recent graduate with a degree in Computer Science and practical experience in Linux systems, I believe I have the skills and experience needed to excel in this role.

In my current position as a Working Student DevOps at TV1 GmbH, I have gained experience in designing and implementing high-availability infrastructure using Kubernetes on AWS, as well as setting up a monitoring and observability stack with Prometheus, Grafana, and Loki. Additionally, I have developed custom Netbox plugins for network automation using Python and Ansible.

I am confident that my strong communication skills and familiarity with software configuration tools like Ansible, as well as my knowledge of network technology and topologies, will make me a valuable addition to the xxxxxxx team. I am excited about the opportunity to work closely with different departments to support their IT needs and to help improve the infrastructure to support future growth.

I am fluent in English and have a basic understanding of German. I also have strong awareness of IT security principles and am comfortable working with git, bash, docker, and basic programming skills in Python.

I am excited about the opportunity to work at xxxxx and become part of the team working on such a cool mission - to deliver tools to better cope with natural disasters and mitigate climate change. I am drawn to the down-to-earth yet high-caliber work environment at xxxxxxxx and the opportunity to grow both professionally and personally.

I would be delighted to discuss my qualifications further and how I can contribute to the success of xxxxxxx. Please find attached my CV for your review. Thank you for considering my application. I look forward to the opportunity to contribute to the success of xxxx.

Sincerely,
[Your Name]
```

Overall, it is a great start! It includes just the right amount of bootlicking, is not too long, highlights my most important skills and, most importantly, references the job description!

Browsing online for a while, I found an [article](https://www.businessinsider.com/chatgpt-job-applications-hiring-managers-job-interview-candidate-2022-12) about HR people giving feedback on AI-generated cover letters. Essentially, they remarked that AI letters are too dry and impersonal. So I asked GPT-3 to generate a more personal letter. Unfortunately, there is not much difference.

I guess for now it's best to use GPT-3 to generate a template and manually improve the result.

## Going further...

This is all fair and nice, but logging in to the OpenAI website and dealing with captchas every day is a pain. Of course, we can automate it further...

Keep in mind that the GPT-3 model used for ChatGPT does not have an API yet. This uses text-davinci-003, a similar model. There is also a cost involved with using this model.

1. Generate an API key
2. Send an API request like this:

```bash
curl https://api.openai.com/v1/completions \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer sk-TOKEN" \
  -d '{
    "model": "text-davinci-003",
    "prompt": "my cool prompt",
    "max_tokens": 4000,
    "temperature": 0.5
  }'
```

3. Adjust the temperature:

> What sampling temperature to use. Higher values means the model will take more risks. Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer.

It shouldn't be hard to write a little script for this. If we are really funky, we can provide a LaTeX template to insert the result into and automate the entire process. All we need to do then is copy-paste the job description, upload CV.pdf and CoverLetter.pdf, and hit send (after jumping through five hoops with shitty registration portals, of course).
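
Such a script could look like the following sketch. Assumptions not from the post: `jq` is installed, the CV and job description live in local files `cv.txt` and `job.txt`, and the API key is exported as `OPENAI_API_KEY`; all of these names are placeholders.

```shell
#!/bin/sh
# Hypothetical helper script; file names and env var are placeholders.

# Assemble the same prompt structure as above from two local files.
build_payload() {
    prompt="I want you to generate a cover letter based on the following data:

job description:
$(cat job.txt)

my cv:
$(cat cv.txt)

Please generate a cover letter with references to my CV and the job description."
    # jq safely JSON-escapes the multi-line prompt into the request body.
    jq -n --arg p "$prompt" \
        '{model: "text-davinci-003", prompt: $p, max_tokens: 4000, temperature: 0.5}'
}

# Send the request and print only the generated letter text.
generate() {
    build_payload | curl -s https://api.openai.com/v1/completions \
        -H 'Content-Type: application/json' \
        -H "Authorization: Bearer $OPENAI_API_KEY" \
        -d @- | jq -r '.choices[0].text'
}
```

Running `generate > CoverLetter.txt` then leaves only the manual polishing step.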
159
src/content/blog/detecting-smi.mdx
Normal file

@@ -0,0 +1,159 @@
---
pubDate: '2024-01-15'
title: 'Detecting System Management Interrupts'
description: ''
keywords:
- SMI
---

## System Management Interrupts (SMIs)

- high-priority interrupts raised by the hardware
- transparent to the operating system
- can be used by the mainboard for power management, thermal management, or other system-level functions independent of the OS
- can take a long time to execute, blocking a CPU core from doing other work

### Detecting SMIs

- compile a kernel with hwlat tracing capabilities; a typical Linux kernel usually has this enabled; if not, the config can be found in the appendix
- after starting the machine with a trace-capable image:
- check available tracers: `cat /sys/kernel/debug/tracing/available_tracers`
- enable the tracer: `echo hwlat > /sys/kernel/debug/tracing/current_tracer`
- there should now be a process "hwlat" running that takes up 50% of one CPU
- the output of the hwlat tracer is available via `cat /sys/kernel/debug/tracing/trace` or `cat /sys/kernel/debug/tracing/trace_pipe`
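
The enable steps above can be wrapped in a small helper. This is a sketch, parameterized over the tracefs path so it can be dry-run against a scratch directory; in real use, pass `/sys/kernel/debug/tracing` and run as root. The 1us threshold matches the value used further down in this post.

```shell
# Sketch: enable the hwlat tracer under a given tracefs mount point.
enable_hwlat() {
    t="$1"    # e.g. /sys/kernel/debug/tracing
    # bail out if the running kernel was not built with hwlat support
    grep -q hwlat "$t/available_tracers" || return 1
    echo hwlat > "$t/current_tracer"     # spawns the "hwlat" sampler thread
    echo 1 > "$t/tracing_thresh"         # report gaps larger than 1us
}
```

After calling `enable_hwlat /sys/kernel/debug/tracing`, detections appear in `trace` and `trace_pipe` as shown below.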

### Example Output

```
# tracer: hwlat
#
# entries-in-buffer/entries-written: 1/1   #P:32
#
#                                _-----=> irqs-off
#                               / _----=> need-resched
#                              | / _---=> hardirq/softirq
#                              || / _--=> preempt-depth
#                              ||| /     delay
#           TASK-PID     CPU#  ||||   TIMESTAMP  FUNCTION
#              | |         |   ||||      |         |
           <...>-30395   [010] d...   533.362276: #1     inner/outer(us):    0/388    ts:1711469939.505579595 count:1
```

- inner/outer: where the latency was detected, see next section

#### How does it work?

- the hwlat process takes timestamps in a loop
- if the distance between two timestamps is unreasonably large (larger than the configured threshold), there was an SMI
- we should lower this threshold to 1us by executing `echo 1 > /sys/kernel/debug/tracing/tracing_thresh`
- the hwlat process is migrated across all cores to catch SMIs everywhere
- inner vs. outer latency:

```
while (run) {
    start_ts = trace_local_clock();
    end_ts   = trace_local_clock();

    /* outer latency: gap between two loop iterations */
    if (!first && start_ts - last_ts > thresh)
        record_outer();

    /* inner latency: gap between the two clock reads above */
    if (end_ts - start_ts > thresh)
        record_inner();

    last_ts = end_ts;
    first = 0;
}
```

#### Further options

- by default, only 50% CPU time is used
- this can be increased via `echo 9999999 > /sys/kernel/debug/tracing/hwlat_detector/width`, where the value must be smaller than the configured window (`cat /sys/kernel/debug/tracing/hwlat_detector/window`) to avoid starving the system
- from my experience, however, this is not necessary to catch SMIs; the default option is "good enough"

### Firing an SMI manually

- there is a nice small kernel module [here](https://github.com/jib218/kernel-module-smi-trigger) for manually triggering an SMI to verify the setup
- follow the instructions in the readme to compile and load the module

### Hardware Registers for counting SMIs

- Intel: MSR 0x34 (`MSR_SMI_COUNT`), can be read out with turbostat / perf
- AMD: `ls_smi_rx`, can be used with `perf stat -e ls_smi_rx -I 60000`; however, it doesn't seem to count everything, and counts seem incorrect

---

### Sources, Appendix

- https://wiki.linuxfoundation.org/realtime/documentation/howto/tools/hwlat
- https://www.kernel.org/doc/html/latest/trace/hwlat_detector.html
- https://lwn.net/Articles/860578/
- https://www.jabperf.com/ima-let-you-finish-but-hunting-down-system-interrupts/

Custom Kernel Fragment: files/kernel\_config\_fragments/trace

```
CONFIG_USER_STACKTRACE_SUPPORT=y
CONFIG_NOP_TRACER=y
CONFIG_HAVE_RETHOOK=y
CONFIG_RETHOOK=y
CONFIG_HAVE_FUNCTION_TRACER=y
CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y
CONFIG_HAVE_FUNCTION_GRAPH_RETVAL=y
CONFIG_HAVE_DYNAMIC_FTRACE=y
CONFIG_HAVE_DYNAMIC_FTRACE_WITH_REGS=y
CONFIG_HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS=y
CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS=y
CONFIG_HAVE_DYNAMIC_FTRACE_NO_PATCHABLE=y
CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y
CONFIG_HAVE_SYSCALL_TRACEPOINTS=y
CONFIG_HAVE_FENTRY=y
CONFIG_HAVE_OBJTOOL_MCOUNT=y
CONFIG_HAVE_OBJTOOL_NOP_MCOUNT=y
CONFIG_HAVE_C_RECORDMCOUNT=y
CONFIG_HAVE_BUILDTIME_MCOUNT_SORT=y
CONFIG_BUILDTIME_MCOUNT_SORT=y
CONFIG_TRACER_MAX_TRACE=y
CONFIG_TRACE_CLOCK=y
CONFIG_RING_BUFFER=y
CONFIG_EVENT_TRACING=y
CONFIG_CONTEXT_SWITCH_TRACER=y
CONFIG_RING_BUFFER_ALLOW_SWAP=y
CONFIG_PREEMPTIRQ_TRACEPOINTS=y
CONFIG_TRACING=y
CONFIG_GENERIC_TRACER=y
CONFIG_TRACING_SUPPORT=y
CONFIG_FTRACE=y
CONFIG_FUNCTION_TRACER=y
CONFIG_FUNCTION_GRAPH_TRACER=y
CONFIG_FUNCTION_GRAPH_RETVAL=y
CONFIG_DYNAMIC_FTRACE=y
CONFIG_DYNAMIC_FTRACE_WITH_REGS=y
CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS=y
CONFIG_DYNAMIC_FTRACE_WITH_ARGS=y
CONFIG_FPROBE=y
CONFIG_FUNCTION_PROFILER=y
CONFIG_TRACE_PREEMPT_TOGGLE=y
CONFIG_IRQSOFF_TRACER=y
CONFIG_PREEMPT_TRACER=y
CONFIG_SCHED_TRACER=y
CONFIG_HWLAT_TRACER=y
CONFIG_OSNOISE_TRACER=y
CONFIG_TIMERLAT_TRACER=y
CONFIG_MMIOTRACE=y
CONFIG_FTRACE_SYSCALLS=y
CONFIG_TRACER_SNAPSHOT=y
CONFIG_TRACER_SNAPSHOT_PER_CPU_SWAP=y
CONFIG_BRANCH_PROFILE_NONE=y
CONFIG_BLK_DEV_IO_TRACE=y
CONFIG_FPROBE_EVENTS=y
CONFIG_KPROBE_EVENTS=y
CONFIG_UPROBE_EVENTS=y
CONFIG_DYNAMIC_EVENTS=y
CONFIG_PROBE_EVENTS=y
CONFIG_FTRACE_MCOUNT_RECORD=y
CONFIG_FTRACE_MCOUNT_USE_CC=y
CONFIG_PROVIDE_OHCI1394_DMA_INIT=y
CONFIG_HAVE_SAMPLE_FTRACE_DIRECT=y
CONFIG_HAVE_SAMPLE_FTRACE_DIRECT_MULTI=y
CONFIG_ARCH_HAS_DEVMEM_IS_ALLOWED=y
CONFIG_STRICT_DEVMEM=y
```
66
src/content/blog/huawei-matebook-x-pro-2024.mdx
Normal file

@@ -0,0 +1,66 @@
---
pubDate: 2024-12-21
title: Linux on a Huawei MateBook X Pro 2024
description: A guide on what is needed to get Linux running on the Huawei MateBook X Pro 2024.
heroImage: ./images/matebook.jpg
---

I recently bought a Huawei MateBook X Pro 2024. It is a beautiful laptop with a 3:2 aspect ratio display and a touchscreen. The laptop comes with Windows 11 preinstalled; however, I wanted to run Linux on it. Here is a guide on what is needed to get it running.

Overall, the experience was okay, but not something I would recommend to an average user. There is a fair number of quirks that need to be ironed out. Distros running older kernels especially will have a hard time. I am running CachyOS with the latest 6.13-rc1 kernel, more on that later.

| Hardware    | PCI/USB ID                                  | Status             |
| ----------- | ------------------------------------------- | ------------------ |
| CPU         |                                             | :white_check_mark: |
| Touchpad    | ps/2:7853-7853-bltp7840-00-347d             | :white_check_mark: |
| Touchscreen |                                             | :white_check_mark: |
| Keyboard    | ps/2:0001-0001-at-translated-set-2-keyboard | :white_check_mark: |
| WiFi        | 8086:7e40                                   | :white_check_mark: |
| Bluetooth   | 8087:0033                                   | :white_check_mark: |
| iGPU        | 8086:7d55                                   | :neutral_face:     |
| Audio       | 8086:7e28                                   | :ok:               |
| Webcam      | 8086:7d19                                   | :x:                |
| Fingerprint |                                             | :x:                |

## CPU

The CPU on my SKU is an Intel Meteor Lake Core Ultra 155H. It comes with 6 performance cores, each with 2 threads, 8 efficiency cores with one thread each, and 2 LPE cores. The P- and E-cores share 24MB of L3 cache. The LPE cores have no L3 cache and share 2MB of L2 cache, which makes them rather slow. Below you can find the output of `lstopo`:

![msi](./images/msi.png)

Since Thread Director is not yet supported in the Linux kernel, by default the scheduler will assign processes to the performance cores, even while on battery. A scheduler like bpfland helps, but that still leaves the first core, CPU core 0, alive. Disabling the cores manually is not a good solution either, as core 0 cannot be deactivated. There used to be a kernel config option, `CONFIG_BOOTPARAM_HOTPLUG_CPU0`, which allowed the core to be disabled at runtime, but it is no longer available[^1].

Luckily, Intel is developing a tool that utilizes cgroups to enable/disable cores at runtime and move processes away. If you care about battery life, you might want to configure `intel-lpmd`[^2].
After installing the tool, it must be enabled with `sudo systemctl enable --now intel-lpmd`. Next, enter your P-cores into the config file at `/etc/intel_lpmd/intel_lpmd_config.xml`; if you are running with SMT enabled, that would be the string `0-11` to include the 6 P-cores with 2 threads each. When you are on battery, the tool will disable the P-cores and move processes away. You can verify that it is active with `sudo systemctl status intel_lpmd.service`. For additional battery savings, you can also disable the E-cores, as the L3 cache can then be powered down. I would not recommend it, though.

## Touchpad

The touchpad has worked out of the box ever since I got the laptop. I did read that older kernels might not register it.

## Touchscreen, Keyboard, WiFi, Bluetooth

No problems; as far as I can tell, all work out of the box.

## iGPU

This is a big one. There's a problem with the default i915 driver which causes the iGPU to never enter a low-power state. This is a big problem, as it drains the battery rather quickly. There is an experimental Intel Xe driver which fixes this issue. It can be enabled by adding the kernel parameters `i915.force_probe=!7d55 xe.force_probe=7d55` to the kernel command line. The driver is already in mainline, so there is no need to compile it yourself. However, the driver is still experimental and there are several bugs: the screen might flicker from time to time, showing rectangular artifacts. The Xe driver on 6.12 and lower was highly unstable and caused my system to hard lock every few minutes. The 6.13-rc1 driver is much more stable, aside from the artifacts.
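
To make the parameters persistent, a sketch assuming a GRUB-based install (the file path and variable are standard GRUB, but your distro may use systemd-boot instead, where the loader entry is edited):

```
# /etc/default/grub (excerpt; "quiet" stands in for whatever flags you already have)
GRUB_CMDLINE_LINUX_DEFAULT="quiet i915.force_probe=!7d55 xe.force_probe=7d55"
```

Afterwards, regenerate the config, e.g. with `sudo grub-mkconfig -o /boot/grub/grub.cfg`, and reboot.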

## Audio

The audio works out of the box, but it's not great. It seems like not all speakers are fully used. It is good enough for me, though.

## Webcam

The webcam is an ipu6-based camera. Support has been trickling in over the years, but it is unusable at the moment and for the foreseeable future.

## Fingerprint

The fingerprint sensor is not supported at the moment. It does not even show up anywhere. One of those ingenious Goodix sensors that are not supported by the fprintd library.

---

Sources:

[^1]: https://www.kernelconfig.io/search?q=CONFIG_BOOTPARAM_HOTPLUG_CPU0&kernelversion=6.12.4&arch=x86
[^2]: https://github.com/intel/intel-lpmd
49
src/content/blog/images/kagi_doggo_5.svg
Normal file
File diff suppressed because one or more lines are too long
After Width: | Height: | Size: 47 KiB

BIN
src/content/blog/images/matebook.jpg
Normal file
Binary file not shown.
After Width: | Height: | Size: 84 KiB

BIN
src/content/blog/images/msi.png
Normal file
Binary file not shown.
After Width: | Height: | Size: 434 KiB

36
src/content/blog/kagi.mdx
Normal file

@@ -0,0 +1,36 @@

---
pubDate: '2024-06-11'
title: "Kagi.com"
description: "Thoughts on Kagi.com"
keywords:
- search engine
hidden: false
heroImage: ./images/kagi_doggo_5.svg
---

Kagi is a paid search engine providing excellent search that reminds me of what Google was like in the early 2000s. Furthermore, it provides search-enhancing features like specific filters, custom site rankings, and an LLM summary of the search results.
In this post, I would like to share my thoughts on Kagi.com and explain why I think it is a great search engine despite recent criticism.

## Search Quality

Kagi's search quality is far superior to that of its competitors. The first few results usually include either links to the documentation or, if applicable, blog posts of tiny websites that are not well-known.
Google has been flooded with SEO spam: sites that do not contain any useful information but are optimized to be indexed high in search results. Kagi's search results are much cleaner and more relevant.

If, for some reason, a bad site appears in the search results, I can easily block it. More relevant sites like Wikipedia or StackOverflow can be promoted to the top of the search results.

## AI Summary

Kagi's AI will summarize the search results when you simply append a `?` to the end of the search query. LLMs are prone to generating nonsense, but Kagi's AI adds citations with links to the original source. Whenever the AI summary provided helpful information, it was accurate; when it did not, the regular results were still there.

![Kagi AI answer](./images/kagi_ai_answer.png)

## Privacy

Since the search engine requires registration and payment, Kagi could theoretically track the user's search history. However, I have no reason to believe that Kagi is doing this. Kagi has repeatedly stated that they are a small company that aims to do things differently, i.e., not maximize profit over sustainability. That is also why they give free T-shirts to the first 20k users. Although I'm not convinced this is a wise business decision, I respect their commitment to their user base.

In recent criticism, Kagi's CEO Vlad has made questionable privacy statements. Mainly, he claimed that an email address is not PII (Personally Identifiable Information) because the user could create single-use email addresses. That statement is obviously regrettable, but the CEO has since clarified it and will be more careful in the future. Just because a CEO is more outspoken and engaging with the community (which does not happen often, if ever) and sometimes says woeful things does not mean that the company as a whole should be boycotted. It should be seen as a way to engage with the company and perhaps improve it. Kagi is the best we have right now, and I am happy to support them.

This entire privacy discussion boils down to a big "trust me, bro", which I am willing to give Kagi, for now.
Since I pay for search, at least I know that Kagi does not have to sell my data to keep the lights on, unlike certain competitors.

## Conclusion

Kagi is a great search engine that I can recommend to anyone who is tired of Google's SEO spam and wants to support a small company that is trying to do things differently. The search results are excellent, and the AI summaries are a nice addition. I am looking forward to seeing how Kagi will develop in the future.
64
src/content/blog/kata-custom-kernel-module.mdx
Normal file

@@ -0,0 +1,64 @@
---
pubDate: '2024-08-25'
title: "Kata Containers: Custom Kernel Module in Guest"
description: 'How to build a custom kernel module for a Kata Containers guest.'
keywords:
- kata
- containers
- kernel
- module
- custom
- guest
- build
- compile
hidden: false
---

Kata Containers is a lightweight container runtime that leverages hardware virtualization to provide strong isolation between containers. It is compatible with the Open Container Initiative (OCI) and the Container Runtime Interface (CRI). Kata Containers uses a lightweight VM to run each container, which provides an additional layer of isolation compared to traditional container runtimes like Docker or containerd.

The official documentation is fairly lackluster here and there. For example, see [here](https://github.com/kata-containers/kata-containers/blob/main/docs/how-to/how-to-load-kernel-modules-with-kata.md); a lot of prerequisite knowledge is assumed.
Another tutorial is [here](https://vadosware.io/post/building-custom-kernels-for-kata-containers/), which sheds some light on the build process of a custom kernel image but leaves out custom kernel modules.

This article aims to provide a step-by-step guide on how to utilize a custom kernel module in a Kata Containers guest. In this example, we will include the igb_uio kernel module, which can be used with DPDK.

This guide assumes you have already installed yq.

```bash
KATA_VERSION="3.2.0"
KATA_CONFIG_VERSION="114"
wget "https://github.com/kata-containers/kata-containers/archive/refs/tags/$KATA_VERSION.tar.gz"
tar -xf "$KATA_VERSION.tar.gz"

cd "kata-containers-$KATA_VERSION/tools/packaging/kernel"
bash build-kernel.sh -v 6.7 -s -f setup # download the kernel source, force-create the .config file

mkdir -p "kata-linux-6.7-$KATA_CONFIG_VERSION/drivers/igb_uio"
cp -r source/folder/to/module/igb_uio/* "kata-linux-6.7-$KATA_CONFIG_VERSION/drivers/igb_uio/"

# create a Kconfig file; this makes the module visible in make menuconfig
cat > "kata-linux-6.7-$KATA_CONFIG_VERSION/drivers/igb_uio/Kconfig" << EOF
menuconfig IGB_UIO
	tristate "igb_uio driver"
	depends on UIO
	default y
EOF

# overwrite the Makefile to avoid building the module as a .ko file
echo "# SPDX-License-Identifier: GPL-2.0" > "kata-linux-6.7-$KATA_CONFIG_VERSION/drivers/igb_uio/Makefile"
echo "obj-\$(CONFIG_IGB_UIO) += igb_uio.o" >> "kata-linux-6.7-$KATA_CONFIG_VERSION/drivers/igb_uio/Makefile"

# link into the main Kconfig so that make menuconfig gets a new entry; insert into the line after the uio entry
sed -i '135i\source "drivers/igb_uio/Kconfig"' "kata-linux-6.7-$KATA_CONFIG_VERSION/drivers/Kconfig"

# add the folder to the main drivers Makefile
echo "obj-\$(CONFIG_IGB_UIO) += igb_uio/" >> "kata-linux-6.7-$KATA_CONFIG_VERSION/drivers/Makefile"

# remove the Kbuild file to avoid building the module as a .ko file, so it gets linked directly into the built kernel
rm "kata-linux-6.7-$KATA_CONFIG_VERSION/drivers/igb_uio/Kbuild"

# append to the .config file
echo "CONFIG_IGB_UIO=y" >> "kata-linux-6.7-$KATA_CONFIG_VERSION/.config"

# build the kernel with the new module
bash build-kernel.sh -v 6.7 build
```

Why Kata 3.2.0, an ancient version, you might ask? Unfortunately, we were unable to get newer versions to work with SEV-SNP.
137
src/content/blog/lldap-caddy.mdx
Normal file

@@ -0,0 +1,137 @@
---
pubDate: '2022-09-24'
title: 'Securing a Caddy endpoint with LLDAP'
description: ''
keywords:
- LDAP
- Caddy
---

For my small home network, I was looking around for a solution to synchronize user accounts across services. I host various services like a file server or smaller web applications that are accessed by my significant other and a couple of friends. In the past, I manually created accounts and passed on the password, which is slightly tedious. Another pain point is that my web applications are unsecured at the moment. I would like to have an SSO login screen in front of each service. As a reverse proxy, I already deployed [Caddy](https://caddyserver.com/).

After researching, I found out that for me and the services I am running, LDAP would be the best choice, as it has the best compatibility and appears to be the industry standard for this kind of problem. Looking for a server proved hard: most of them are rather heavy, with lots of functions that I don't need or that are tedious to configure. Consequently, I searched for a lightweight server and found one: [LLDAP](https://github.com/nitnelave/lldap) is exactly what I was looking for. The project includes a lightweight LDAP server, which only supports a bare minimum of features (users, groups). Passwords can be reset by users in a small, admittedly ugly, web interface. Perfect for me. For locking down web applications there is [caddy-security](https://authp.github.io/), an addon that allows interaction with LDAP before granting access to a site.

## Setup

Since I installed a Caddy Docker image with caddy-security already integrated, I did not have to do anything else. For example, [Alexander Henderson's](https://hub.docker.com/r/alexandzors/caddy#!) image comes with several useful modules preinstalled that I require for other projects anyway. If that's not an option, you can easily create your own Docker image including caddy-security.

The installation is quickly done with Docker. I mounted a folder instead of the suggested volume. The initial password can be reset in the administration panel. I utilize a custom bridge for networking so that I can resolve my other services with DNS. After that, we can navigate to http://IP:17170 and are presented with the administration panel, where we can create users and groups.
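
The Docker setup described above could look roughly like this compose sketch. The service name `lldap` matters, since Caddy later resolves `ldap://lldap:3890` over the shared bridge; the bridge name, folder path, and base DN are assumptions you will want to adapt:

```yaml
services:
  lldap:
    image: nitnelave/lldap:stable
    restart: unless-stopped
    ports:
      - "17170:17170"            # web administration panel
    environment:
      - LLDAP_LDAP_BASE_DN=dc=example,dc=xyz
    volumes:
      - ./lldap_data:/data       # folder mount instead of a named volume
    networks:
      - caddy-net                # custom bridge shared with Caddy for DNS resolution

networks:
  caddy-net:
    external: true
```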
|
||||
|
||||

|
||||
|
||||
## Integration with Caddy

Assume we have a domain `example.xyz`. You can already create A or CNAME records for the domains `auth.example.xyz` and `example.xyz` and point them at your Caddy server.

Now we need to prepare the authentication part in the global block of the Caddyfile as follows. In your Caddy docker-compose file, make sure to add the two environment variables `LLDAP_ADMIN_PASSWORD` and `JWT_SHARED_KEY`. The part with `search_filter` and `groups` is crucial: the `search_filter` is the query that is used to find your user object in the directory. Once it is found, groups get assigned within the Caddy authentication procedure, depending on the groups your user is in. In this example, a directory user that belongs to the group `user` gets assigned the `authp/user` group.

The file is an adaptation of the caddy-security documentation[^1] to interact with LLDAP:

```
order authenticate before respond
order authorize before basicauth

security {
    ldap identity store example.xyz {
        realm example.xyz
        servers {
            ldap://lldap:3890
        }
        attributes {
            name displayName
            surname cn
            username uid
            member_of memberOf
            email mail
        }
        username "CN=admin,OU=people,DC=example,DC=xyz"
        password "{env.LLDAP_ADMIN_PASSWORD}"
        search_base_dn "DC=example,DC=xyz"
        search_filter "(&(uid=%s)(objectClass=person))"
        groups {
            "uid=user,ou=groups,dc=example,dc=xyz" authp/user
        }
    }

    authentication portal myportal {
        crypto default token lifetime 3600
        crypto key sign-verify {env.JWT_SHARED_KEY}
        enable identity store example.xyz
        cookie domain example.xyz
        ui {
            logo url "https://caddyserver.com/resources/images/caddy-circle-lock.svg"
            logo description "Caddy"
            links {
                "My Identity" "/whoami" icon "las la-user"
            }
        }
    }

    authorization policy mypolicy {
        # disable auth redirect
        set auth url https://auth.example.xyz

        crypto key verify {env.JWT_SHARED_KEY}
        allow roles authp/user
    }
}
```

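The two secrets referenced via `{env....}` above must be present in the Caddy container's environment. In docker-compose, that is just (values are placeholders):

```yaml
services:
  caddy:
    environment:
      - LLDAP_ADMIN_PASSWORD=change_me
      - JWT_SHARED_KEY=some_long_random_string
```
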
Finally, we need to create endpoints for auth and whatever service we need to protect. Make sure that the names `myportal` and `mypolicy` match the previously declared ones. Note: the `import tls` originates from my globally defined TLS setup and is not relevant to this guide.

```
auth.example.xyz {
    route {
        authenticate with myportal
    }
}

example.xyz {
    root * /config/html
    authorize with mypolicy
    encode gzip
    file_server browse
    import tls
}
```

I strongly recommend consulting the documentation[^1][^2] for in-depth information.

Happy authenticating!

[^1]: https://authp.github.io/docs/authenticate/ldap/ldap
[^2]: https://github.com/nitnelave/lldap

**File: `src/content/blog/lxcanddpdk.mdx`**

---
pubDate: '2022-07-27'
title: 'How to: Run a DPDK application in an LXC container'
description: ''
keywords:
- LXC
- DPDK
---

For my thesis, I evaluated whether containers are viable for low-latency networking. I decided to pick LXC as my container implementation because it is extremely lightweight compared to its peers, and because related work indicates that LXC beats others in raw performance. Latency-critical applications are typically implemented with poll mode drivers in userspace, as the traditional interrupt-based network stack induces unreliable delays[^1]. Unfortunately, there are not a lot of tutorials out there on how to get DPDK to run with LXC. One resource that did help me - although it is incomplete and a bit dated - is from J. Krishnamurthy[^2].

## Prerequisites

This tutorial expects that you went over the following checklist:

- Debian Bullseye (other distributions should work too)
- a working LXC instance of at least LXC 4.0 (I did not test earlier versions)
- a network interface for your containers, so that they can communicate with the internet and you can SSH into the container
- another NIC that you want to use with DPDK

## Host Setup

We first must initialize the userspace device driver. There exist two kinds of kernel modules that provide the driver interface: `igb_uio` and `vfio`. Since the `vfio` module requires the IOMMU, and the IOMMU can have - under some circumstances - a bad impact on system performance, we opt for the `igb_uio` module. A very interesting read about how the drivers work on a kernel level is the paper by Koch[^3]. The following code installs the `igb_uio` kernel module.

```bash
git clone http://dpdk.org/git/dpdk-kmods
cd dpdk-kmods/linux/igb_uio/
make
modprobe uio
insmod igb_uio.ko
```

Next, clone whatever version of DPDK you want to use on your host. Do not compile it! We will utilize the `dpdk-devbind` script from the provided usertools to bind a NIC to the driver `igb_uio`. This script can also be called with `--status` to verify that your NIC was indeed bound to the driver. Instead of `eno8` in the following example, it is also possible to use the PCI identifier, like `0000:65:00.0`.

```bash
git clone https://github.com/DPDK/dpdk.git
python dpdk/usertools/dpdk-devbind.py --bind=igb_uio eno8
```

By binding the NIC to this driver, device files are created. These device files allow the userspace driver of DPDK to interact directly with the PCI device. The next example demonstrates how to find the device files and their major/minor IDs, which we need for the next step. In this case, the major ID is 239 and the minor ID is 0.

```bash
$ ls /dev/uio* -la
crw------- 1 root root 239, 0 Jul 27 20:21 /dev/uio0
```

Now we will pass these device files through to the container. Open the container config file under `/var/lib/lxc/<name>/config` and add the following lines. The first two lines are required according to the LXC lead developer Stéphane Graber[^4]. The third line gives access to the device with cgroups v2. And finally, we pass through the device file. The last line could also be replaced with a `mknod` call after starting the container, but I found this variant cleaner.

```toml
lxc.mount.auto =
lxc.mount.auto = proc:rw sys:rw
lxc.cgroup2.devices.allow = c 239:0 rwm
lxc.mount.entry = /dev/uio0 dev/uio0 none bind,create=file
```

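The `239:0` pair must match your device node. As a sketch, the decimal numbers can be derived from the hex values that `stat` prints - shown here with `/dev/null` (c 1:3) as a stand-in, since `/dev/uio0` only exists once `igb_uio` is loaded and a NIC is bound:

```bash
# stat prints the major/minor IDs in hex; convert them to decimal
# for the lxc.cgroup2.devices.allow line.
dev=/dev/null # substitute /dev/uio0 on your host
major=$((0x$(stat -c '%t' "$dev")))
minor=$((0x$(stat -c '%T' "$dev")))
echo "lxc.cgroup2.devices.allow = c ${major}:${minor} rwm"
```
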
One last step is missing on the container host: the creation of hugepages. Modern CPUs with high core counts rely on multiple NUMA nodes, where each node has its own memory. Since I don't want to write more about the impact of suboptimally assigned memory, we simply create 2 MB hugepages for each node.

```bash
echo 512 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
echo 512 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
echo 512 > /sys/devices/system/node/node2/hugepages/hugepages-2048kB/nr_hugepages
...
```

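Whether the pages were actually reserved can be verified in `/proc/meminfo`:

```bash
# HugePages_Total should now reflect the sum of the per-node reservations
grep HugePages /proc/meminfo
```
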

Now we pass the hugepages through to the container as well. Open the container config file and add the following line.

```toml
lxc.mount.entry = /dev/hugepages dev/hugepages none bind,create=dir 0 0
```


The host is now set up, and it should be possible to use your DPDK application in the container like you are used to. In case multiple devices are bound to `igb_uio`, each container still sees the other devices, even though they were not passed through. I assume this is because we mounted the `/sys` folder as well. While it makes the DPDK application print a small warning, it is no reason to be worried.

Any questions? [Contact me!](/contact)

<br />

[^1]: [5G QoS: Impact of Security Functions on Latency](https://www.net.in.tum.de/fileadmin/bibtex/publications/papers/gallenmueller_noms2020.pdf)
[^2]: http://mails.dpdk.org/archives/dev/2014-October/006373.html
[^3]: Koch, Hans J from Linutronix GmbH, 2013, "Userspace I/O drivers in a realtime context"
[^4]: https://github.com/lxc/lxd/issues/3619#issuecomment-319430483

**File: `src/content/blog/memory-frequency-amd-gpu.mdx`**

---
pubDate: '2023-03-12'
title: '[Workaround] High idle power consumption with AMD GPUs'
description: There is a bug with AMD GPUs on Linux that causes the memory to run at full speed even when the GPU is idle, at certain refresh rates and resolutions. This causes high idle power consumption. This post explains how to work around it.
keywords:
- AMD
- amdgpu
- gpu
- idle power
- power consumption
- memory frequency
- memory scaling
- memory clock
- memory speed
- memory
- linux
- linux kernel
- kernel
hidden: false
updatedDate: '2023-08-02'
---

For many years, AMD GPUs have had a bug that causes the memory to run at full speed even when the GPU is idle, at certain refresh rates and resolutions. For example, I am running a 34" UWQHD 3440x1440 monitor with a 144 Hz refresh rate. My GPU is a 6700 XT with 12 GB of VRAM. When I set the resolution to 3440x1440 and the refresh rate to 144 Hz, the memory frequency is stuck at 1000 MHz - or, as Windows would report it, 2000 MHz. The high memory frequency consumes about 30 W of power - while not doing anything at all.



This bug apparently existed on Windows as well, but was fixed some time in 2021. There were some attempts to fix the issue on Linux in the past, but obviously nothing worked - as I found out now, running the latest kernel 6.2.

As an unsatisfying workaround, you could lower the refresh rate to 50 Hz or 60 Hz. This reduces the memory frequency and therefore also the power draw. However, nobody enjoys watching their mouse stutter at 50 Hz after spending money on a 144 Hz monitor.

Some interesting links to read up on the topic:

- https://gitlab.freedesktop.org/drm/amd/-/issues/1403
- https://gitlab.freedesktop.org/drm/amd/-/issues/1709

## Workaround

A user in a [Gitlab issue](https://gitlab.freedesktop.org/drm/amd/-/issues/1655) reported that a different xrandr profile with different timings would fix the issue. I can confirm that it works for me as well. However, a small downside is that infrequently - every few minutes or so - a small flicker occurs. I'll test this setup for a bit and figure out whether I'd rather have high power draw or an occasional flicker.

```bash
#!/bin/sh
xrandr --newmode "3440x1440_143.92" 891.4 3440 3536 3600 3920 1440 1446 1466 1580 +hsync -vsync
xrandr --addmode DisplayPort-0 3440x1440_143.92
xrandr --output DisplayPort-0 --mode 3440x1440_143.92
```

I added the script to my i3 config so that it is executed once I sign in.
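In my i3 config, that is a single `exec` line (the script path is just wherever you saved it):

```
exec --no-startup-id ~/.screenlayout/fix-vram-clock.sh
```
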
The power draw is now, as expected, at around 8 W. The memory downclocks to 96 MHz. Whenever I move my mouse, I can observe a rise in GPU frequency and power draw to around 12 W. This is much better.

This might not seem like a lot, but in situations like this I have to think about how many GPUs were sold globally and are affected by this bug. It is not an insignificant amount of power that is wasted on a global scale.


_Update: 2023-08-02_:

I tried to use my internal GPU for the display and the 6700 XT for compute only. Initially, this setup would crash every few minutes. A couple of days ago, Gigabyte pushed a new BIOS update for my B650 AM5 board - version F7 - which apparently fixed this issue.

To launch games on the dGPU, I must launch them with an environment variable: `DRI_PRIME=1`. For Steam games, you can set this in the launch options. For example: `DRI_PRIME=1 %command%`.

My dGPU now sits happily idle at 5 W with a 96 MHz memory frequency.

**File: `src/content/blog/no-aios.mdx`**

---
pubDate: '2023-02-04'
title: You don't need an AIO
description: AIOs are overrated tech that's recommended too frequently without considering the downsides
keywords:
- Hardware
- AIO
- cooling
- CPU cooler
---

Recently, I have been browsing hardware forums to get back in touch with the current state of computer hardware. I am in the process of upgrading my 6-year-old rig. Up until yesterday, I had a Corsair AIO (all-in-one) CPU cooler with a 280 mm radiator cooling my heavily overclocked CPU. Unfortunately, this AIO is not supported by liquidctl, the software to control AIOs on Linux, and the other, legacy software for controlling it no longer works. So I was stuck with a piece of working hardware that I could no longer configure on my operating system of choice. I ended up buying a nice Noctua CPU cooler, and ever since, my PC has been more enjoyable to use in every aspect. Let's get into the details of why I think AIOs are (for most people) a bad pick.

(this is not a Noctua-sponsored post)
## Reliability
An AIO only lasts 5-7 years. The manufacturers sometimes grant 5 years of warranty, but realistically, people recommend switching them out at some point. The fluid inside might evaporate over time, and it will eventually reach a point where there is not enough fluid left to properly cool your CPU. I did not notice this myself, but the fact that my AIO was already 6 years old and marked as EOL by the manufacturer made me jumpy.

In contrast to that, a good air cooler has no expiry date. Good fan manufacturers test their products over years to ensure good reliability. A couple of days ago I sold a 10-year-old Thermalright cooler; the 140 mm fan was still spinning like on day 1.

## Waste
Replacing a key component like a CPU cooler every 5-7 years leads to a lot of extra waste, especially considering that the parts of an AIO are typically not reusable: the entire device has to be switched out. How many AIOs does planet Earth have left?

A good air cooler has no expiry date.

## Energy consumption

Only after I removed the AIO and replaced it with 2x 140 mm fans and a Noctua D12L did I realize how much power the AIO pulled. My idle power consumption dropped by about 15 watts! At 5 hours per day with 40 euro cents per kWh, this is about 11 € extra per year for cooling your CPU. Is that worth it?
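
The back-of-the-envelope math behind that number (hours of use and electricity price are my assumptions):

```bash
# 15 W saved, 5 h/day of usage, 0.40 €/kWh
cost=$(awk 'BEGIN { printf "%.2f", 15 * 5 * 365 / 1000 * 0.40 }')
echo "~${cost} € per year" # ~10.95 € per year
```
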
## Performance
A good air cooler provides performance just as good as an AIO's. Countless reviews support this statement.

## KISS
Keep it simple, stupid. An AIO consists of a lot of moving parts that can break: there is the pump, fluid, pipes, pipe connectors, radiator, fans, a USB header connection, and power connections. Meanwhile, an air cooler consists of a fan with a 4-pin PWM-controlled connector and a heatsink with nothing flowing through it - so nothing that can get clogged up or break.

Why would I voluntarily pick a solution that is so much more complicated to understand, and where there are so many more components that can break?

## Silence
The fans of my H110i AIO were not particularly silent. They produced a lot of noise even on the "silent" preset. Sure, I could make the AIO silent if I installed good fans on it, but let's be real: I paid 140 € for this device, and then I should drop another 50 € on silent fans? What do I do with the shitty bundled fans? Throw them away? Even more garbage?

Maybe this is a fault of this particular AIO, but regardless: a Noctua CPU cooler will always be quiet. My PC is now so silent that when I turned it on today, I got startled and thought it wasn't on. So I force-killed my PC and turned it on again, until I realized it wasn't broken - just so silent that I didn't hear it even while sitting right next to it.

## Price
Sure, a Noctua cooler costs 80-120 €, a decent chunk of money, but an AIO is not much different in pricing - except that you get all those downsides. I'd rather pay this amount of money for an air cooler that I can use for years, that uses less energy, is more silent, probably won't break, and provides the same performance.

**So, the next time you want to buy an AIO, please think twice.**

**File: `src/content/blog/ondemand-image-optimization-catapi.mdx`**

---
pubDate: '2022-08-04'
title: 'Free, on-demand image optimizations with Cloudflare'
description: ''
keywords:
- image optimization
- docker
- sveltekit
- svelte
hidden: true
---

I wanted to use responsive images for my small page of cute [cats](/cat) (now removed as of 8/12/2024). Since one of the design goals is to give my significant other - who, by the way, loves cats a lot more than me - the option to add cats on the fly, and to also consume the cat pictures in other services, I require dynamic image optimization.

Requirements:

- On-demand addition of new images
- Optimization for different widths and WebP / progressive JPEG
- Served from the edge for the lowest latencies

Since I've been working a lot with Cloudflare, I of course checked out their [Images](https://developers.cloudflare.com/images/cloudflare-images/) offering. While the service would fulfill my requirements exactly, it costs 5 €/month, which I currently cannot afford. I thought this was a good opportunity to build my own little image optimization service, which I could maybe even use in the future for other image-related projects.

## Infrastructure overview

I rely on Cloudflare services to serve the optimized images. This Svelte website is hosted on Pages. For the API endpoints that this website queries, I utilize Cloudflare Workers. The optimized images are stored on the new Cloudflare R2 storage, and an index is created in KV. Finally, I host the image optimization program on a Raspberry Pi at my place. It is connected to the internet with a Cloudflare Tunnel. Any other device that has computing power and is accessible via the internet works just as well.


- **(1)**: a user uploads a new image, for example via this site
- **(2)**: the worker forwards the image to the image optimization server
- **(3)**: the optimization server does its thing and serves the optimized images in a folder. The worker is notified where the optimized images can be found.
- **(4)**: the worker fetches the images and stores them with a consistent naming scheme in R2
- **(5)**: an object containing metadata for the optimized image (its location in R2) is inserted into KV


**File: `src/content/blog/optimizing-the-optimizer.mdx`**

---
pubDate: '2023-06-19'
title: 'Optimizing the Guild Wars 2 Gear Optimizer'
description: ''
keywords:
- rust
- wasm
- optimization
- gear optimizer
- guild wars 2
- gw2
hidden: false
---

# Preface for readers of my blog
This post was originally published to the Guild Wars 2 community on [lemmy](https://sopuli.xyz/post/713722). The Gear Optimizer helps Guild Wars 2 players find optimal builds for fractals, raids, and strike missions. Challenges and ideas for a faster calculation core are presented in this post.

# Post
Hey fellow Tyrian Lemmings,

my first post on Lemmy, hurray \o/ !
I have dedicated a significant amount of time to enhancing the performance of the Discretize Gear Optimizer. It now supports multi-core processing for calculations and heuristics, enabling the simultaneous calculation of large amounts of Runes, Sigils, and Food!

What is the Gear Optimizer? In short, it runs damage calculations for specific in-game scenarios to determine the ideal gear combinations.

Unfortunately, due to known circumstances, I am unable to provide links to previous Reddit posts with beginner-friendly details and explanations. However, there are a few noteworthy videos I'd like to mention:

- https://www.youtube.com/watch?v=6HfjKorDWP4 by REMagic
- https://www.youtube.com/watch?v=2vVbzzmoq5E by Connor

You can access the work-in-progress version here: https://parallelization.discretize-optimizer.pages.dev/?m=fractals
The code is freely available here: https://github.com/discretize/discretize-gear-optimizer/tree/parallelization

The details are technical, so I don't expect individuals without a background in computer science or coding to fully understand them.
# Details
Previously, the Optimizer utilized a single-threaded approach that exhaustively enumerated all possible combinations, calculated the damage/sustain/healing values, and stored the top 50 results in memory. This blocked the main thread, resulting in a UI update rate of only 15 FPS whenever a progress update was yielded. Far from optimal.

The new approach incorporates three interesting concepts that significantly speed up the calculations:

- Usage of WebWorkers (multiple threads)
- Core calculations in Rust compiled to WebAssembly
- Heuristics for removing dead-weight combinations that are likely not going to be useful

Various useful options can be adjusted in the optimizer's UI settings.
## WebWorkers
Currently, deploying WebWorkers is the most widely available method for websites to utilize additional hardware threads. The old algorithm was designed in a way that prevented parallel processing. Thus, I had to devise a method to distribute the workload among WebWorkers. Each work chunk must be independent to avoid synchronization overhead.

To address this, I introduced the concept of an "affix tree". Each gear slot can accept different affixes. For example, the helm can be Assassin, Berserker, or Dragon, resulting in three valid combinations. When these three choices are combined with the next level (shoulders), we end up with 3 \* 3 = 9 combinations. By repeating this process for all 14 gear slots, we end up with 3^14 = 4,782,969 combinations. This may not sound too overwhelming, but let's reconsider it with 5 affixes: 5^14 = 6,103,515,625. Quite spicy, indeed.

However, there are even more combinations for extras. Assuming we choose one rune, one sigil 1, one sigil 2, two nourishments, and two enhancements, we get 1 \* 1 \* 1 \* 2 \* 2 = 4 combinations. For 5 affixes, this amounts to 24,414,062,500 combinations. Although each combination only takes a few milliseconds to calculate, the total time adds up when dealing with billions of combinations. Hence the need to crunch these numbers using multiple cores simultaneously.

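As a sanity check, the counts above can be reproduced in a few lines of shell (awk, since POSIX shell arithmetic lacks exponentiation):

```bash
gear3=$(awk 'BEGIN { printf "%.0f", 3^14 }') # 3 affixes across 14 slots
gear5=$(awk 'BEGIN { printf "%.0f", 5^14 }') # 5 affixes across 14 slots
extras=$((1 * 1 * 1 * 2 * 2)) # rune, sigil1, sigil2, nourishments, enhancements
total=$(awk -v g="$gear5" -v e="$extras" 'BEGIN { printf "%.0f", g * e }')
echo "$gear3 $gear5 $extras $total" # 4782969 6103515625 4 24414062500
```
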
By utilizing the affix tree concept, we can divide the work into multiple chunks. Each layer of the tree contains (previous layer subtrees) \* (current layer subtrees) subtrees. We can simply assign a number of subtrees to each thread, allowing for independent evaluation. In the end, we merge the best results from each subtree to find the global maximum. Subtree evaluation employs a depth-first search, as memory allocations must be minimized due to potentially trillions of executions.

## Rust / WebAssembly
JavaScript can be slow, especially when code is not optimized. Rust, on the other hand, requires explicit consideration of memory allocation. The Rust implementation is compiled to WebAssembly (WASM), a form of bytecode - similar to the JVM's - that can be executed by nearly all browsers. Initially, I benchmarked a barebones implementation of traversing the affix tree without any calculations and found that the Rust implementation is significantly faster. This gave me hope. The new Rust implementation appears to be between 2x and 5x faster than the JS implementation when running on a single thread, depending on the machine and the specific problem. Moreover, when adding more threads, the performance gains scale nearly linearly.

## Heuristics
During discussions with some members of the GW2 development Discord (special thanks to Greaka), I realized that computing every combination of extras (sigils, runes, food) is unnecessary: around 95% of the extras combinations are likely irrelevant. To address this, I implemented a benchmarking phase where 1000 random affix combinations are tested for each extras combination. We retain the 1000 best results and calculate the frequency of appearances for each extras combination. As it turns out, the benchmark quickly converges to the optimal extras combination. Each run has a slight variance of 2-3%, which is not ideal but sufficient to discard combinations that have 0 appearances in the top 1000. This is why no progress appears right after clicking "calculate": progress is at the moment only calculated for the actual calculation phase, not the benchmarking phase.

# Limitations
Currently, the Rust implementation lacks numerous features:
- Stopping/resuming the calculation is not possible
- Infusions are not yet calculated
- Displaying the best results for each combination (as an option in the result table) is not feasible, since we don't calculate all combinations
- Mobile compatibility may vary

# Going further
I plan to implement infusion calculations and the stop/resume mechanism, and to bring the overall UX up to par with the JS implementation.

Additionally, it should be possible to utilize a regression model to directly calculate the optimal gear without the need for brute-forcing. I have pondered this idea but couldn't come up with the required mathematical models, as my expertise in this area is limited. If anyone with a background in ML/math is interested in tackling this challenge, please let me know. I would be more than happy to discuss and implement ideas.

Let me know what you think. Maybe you find a bug or two :)
I am of course available for any questions.
Thank you for reading :)

**File: `src/content/blog/redmi-note7-arrowos.mdx`**

---
pubDate: '2022-05-08'
title: 'How to: Arrow OS on Redmi Note 7, root, microG'
description: 'Learn how to install ArrowOS, based on Android 12, on your Redmi Note 7 (lavender) phone! Also installs root and microG for a BigTech-free phone.'
keywords:
- ArrowOS
- Redmi Note 7
- Lavender
- root
- microG
- Magisk
---

This tutorial will show you how to flash ArrowOS, a nice Android 12 ROM, together with Magisk to get root access to the phone, and also microG, the free alternative to Google Play Services. This tutorial is tailored to the Redmi Note 7, commonly referred to as lavender. Other phones might work differently due to not having a ramdisk, being an A/B device, or... something else. Proceed with caution. You can't blame me for bricked devices.

Prerequisites:

- an unlocked bootloader (check [here](https://forum.xda-developers.com/t/all-in-one-redmi-note-7-lavender-unlock-bootloader-flash-twrp-root-flash-rom.3890751/) if you haven't, steps A+B only)
- ADB installed, plus whatever driver your OS requires to send commands via adb
- USB debugging enabled on the phone

## 1. Flash OrangeFox Recovery

- Download the latest version [here](https://orangefox.download/device/lavender) and extract it.
- I'm essentially following [this](https://wiki.orangefox.tech/en/guides/installing_orangefox) guide.
- Boot your phone into fastboot with `adb reboot fastboot`.
- Since lavender is an A-only phone, you need to flash the recovery with `fastboot flash recovery recovery.img`.
- Once your PC says it is done, reboot the phone by holding the power button and the volume up button at the same time until the orange fox pops up. Please note that booting into recovery with a command from your PC did not work for me!
- Flash the OrangeFox recovery on your phone by navigating into the fox folder and selecting the zip file.
- That should be it.

## 2. Install the latest version of MIUI
I'm not actually sure if this is necessary, but I found a simple firmware upgrade not to be as comprehensive as the full MIUI flash - even if it means that you have to deal with the cancer that is MIUI.

1. Wipe data, cache, and ART/Dalvik cache in the recovery (you'll lose all of your data!)
2. Format data
3. Download the latest MIUI version from [here](https://c.mi.com//miuidownload/detail?device=1700360)
4. Move it to your phone with `adb push path/to/miui/ sdcard`
5. On your phone, navigate to `sdcard` and install MIUI. This process might take a while. It even crashed for me during OTA_BAK, which is just the backup process, so I interrupted the process after the clock had been frozen for 10 minutes.
6. According to the ArrowOS docs you don't need to boot into MIUI. But since it froze for me, I kind of had to.
7. Enable USB debugging in MIUI again.

Reinstall the OrangeFox recovery now with the same steps from step 1. If your MIUI installation doesn't freeze, you should be able to continue with step 3 without the hassle.

## 3. Install ArrowOS

1. If you had to reboot earlier, wipe data, cache, and ART/Dalvik cache in the recovery again.
2. Format data
3. Download ArrowOS [here](https://arrowos.net/download/lavender)
4. `adb push path/to/arrowos sdcard` and install it.
5. Re-wipe the cache with the button that pops up after the installation.
6. \o/ Boot into ArrowOS

## 4. Install Magisk
I'm mostly following the steps from the official documentation [here](https://topjohnwu.github.io/Magisk/install.html).

1. The magisk docs are a bit tricky. First, grab the ArrowOS zip and unzip it. We'll get back later to that.
|
||||
2. Download the [Magisk App](https://github.com/topjohnwu/Magisk/releases/latest), send it to your phone with `adb push path/to/magisk sdcard` and install the apk on your phone.
|
||||
3. Lavender has a ramdisk despite the magisk app saying otherwise! Hence you need to move the `boot.img` file and also the `vbmeta.img` that we extracted in step 1 to your phone (for example with adb again ... by now you should be able to use that command)
|
||||
4. Now we patch the `boot.img` file in the magisk app. Click install in magisk.
|
||||
5. Dont select "Recovery mode" and also dont select the vbmeta option
|
||||
6. Select the previously moved `boot.img` file, and start the patching.
|
||||
7. Move the patched file back to your pc with `adb pull /sdcard/Download/magisk_patched_[random_strings].img`
|
||||
8. Reboot your phone into fastboot again with `adb reboot fastboot`
|
||||
9. Flash the patched image with `fastboot flash boot /path/to/magisk_patched.img`. This will overwrite the boot partition of your phone so that it loads the magisk related processes on its own.
|
||||
10. Flash the vbmeta image as well with `fastboot flash vbmeta --disable-verity --disable-verification vbmeta.img`
|
||||
11. Reboot! The magisk app should show up as installed now!
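Taken together, the PC-side commands from steps 7-11 boil down to the following sketch (the patched image name is generated by Magisk and will differ on your device):

```
# pull the Magisk-patched boot image from the phone
adb pull /sdcard/Download/magisk_patched_xxxxx.img

# reboot into fastboot and flash both images
adb reboot fastboot
fastboot flash boot magisk_patched_xxxxx.img
fastboot --disable-verity --disable-verification flash vbmeta vbmeta.img
fastboot reboot
```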
## 5. Install microG

This is luckily quite easy, and a lot quicker than the previous steps! Lucky for us, someone put together a Magisk module that takes care of the installation. Check [here](https://github.com/nift4/microg_installer_revived).

1. Download the latest version of the installer [here](https://github.com/nift4/microg_installer_revived/releases)
2. Move the installer to your phone (with adb)
3. Go to Magisk modules, select the file and install it.
4. Reboot your phone \o/ that's it. I promise.

69
src/content/blog/site2sitewireguard.mdx
Normal file
---
pubDate: '2022-09-27'
updatedDate: '2022-10-22'
title: 'Site 2 Site Wireguard VPN with a Mikrotik Router and a Cloud VM'
description: ''
keywords:
- cloud
- Mikrotik
- site 2 site
- wireguard
- vpn
---

My network consists of a server located in country A. Since the largest ISP in country B
has terrible peering with the ISP in country A, I thought of setting up a small
proxy server in country A. This way, I should be able to bypass the bad peering, since the
cloud provider probably maintains good routing to both sides. Since I meant to try out
Oracle's free tier anyway, it seemed like a good opportunity to learn Ansible properly and
use IaC scripts to set up a reverse proxy in the cloud.



1. Create the WireGuard keys. If the CLI is not an option, [this
website](https://www.wireguardconfig.com/) is cool too (the keys are generated client-side).
2. Since I want dedicated monitoring of the traffic flowing between the proxy
and my server, I create a new WireGuard interface on my Mikrotik router. Remember to
use the previously generated keypairs.
3. Create a new peer as follows. The important part is the entry allowing the IP address of the
cloud wg endpoint; otherwise the cloud can't ping back home.
<div style="max-width:600px">

</div>
4. I had to adjust the firewall rules to allow communication with the tunnel network.
5. On the proxy server we use similar settings. Interestingly enough, the Mikrotik wg
endpoint grabs the network address of the 10.222.0.0/30 network. This means 10.222.0.1 is
unallocated.

```
[Interface]
PrivateKey = REDACTED
Address = 10.222.0.2/30
ListenPort = 23xxx

[Peer]
PublicKey = REDACTED
AllowedIPs = 10.10.0.0/16,10.222.0.0/32
Endpoint = alphard.abc.de:23xxx
```

6. Next, we need to create a DDNS updater to keep our A record in sync with the publicly
assigned IP address of the cloud provider (unless you want to pay for a static address, of
course). I found [this](https://hub.docker.com/r/oznu/cloudflare-ddns/) Docker
container to be convenient.
7. Finally, we need to update the IP address of the peer in the Mikrotik router. For that
we use a Mikrotik script, which I stole from [Uli
Koehler](https://techoverflow.net/2021/12/29/how-to-update-wireguard-peer-endpoint-address-using-dns-on-mikrotik-routeros/). Remember to add a scheduler to the script.

```
:if ([interface wireguard peers get number=[find comment=belka] value-name=endpoint-address] != [resolve belka.abc.de]) do={
interface wireguard peers set number=[find comment=belka] endpoint-address=[/resolve belka.abc.de]
}
```
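If the CLI is available after all, the keypair from step 1 can be generated with the wireguard-tools utilities:

```
# generate a private key and derive the matching public key
wg genkey | tee privatekey | wg pubkey > publickey
```

Run this once per endpoint; each side keeps its private key and only ever shares the public key with the peer.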

**Edit 22-10-2022:**

After running this setup for a while, I found it a bit unstable. Since I do not get a
static IP address from my ISP, my address changes every now and then. The WireGuard client
at the other end does not recheck the A record but instead loses the connection and requires
a manual restart. I would recommend using a more stable protocol like IPsec.

61
src/content/blog/software-recommendations.mdx
Normal file
---
pubDate: '2022-07-26'
updatedDate: '2022-07-27'
title: 'Software recommendations for privacy conscious people'
description: ''
keywords:
- privacy
hidden: false
---

Moving away from Big Tech is not an easy task. However, these days there are plenty of polished
alternatives out there. Over the years I have tried out many different services and pieces of
software, and I present what worked best for me here.

A comprehensive resource I recommend is
[PrivacyGuides](https://www.privacyguides.org/). However, their recommendations are at
times rather purist. Everyone should use whatever works best for them. Privacy is not a
black-and-white game. Every bit of Big Tech you cut out helps to minimize your digital footprint and
puts you in a position to decide what people find about you online.

## Utilities

| Name | Description | Cost | Self-hostable |
| ---- | ----------- | :--: | :-----------: |
| [Bitwarden](https://bitwarden.com/) | Password manager | free | :white_check_mark: |
| [SimpleLogin](https://simplelogin.io/) | Email aliases | free for students | :white_check_mark: |
| [Mailbox.org](https://mailbox.org/en/) | Email hosting | 1 € / m | :x: |
| [Element](https://element.io/) | Instant messaging | free | :white_check_mark: |
| [OpenStreetMap](https://www.openstreetmap.org/) | Global map | free | :x: |
| [Baïkal](https://sabre.io/baikal/) | Lightweight calendar synchronisation | free | :white_check_mark: |
| [Filebrowser](https://github.com/filebrowser/filebrowser) | Lightweight file organisation in the browser | free | :white_check_mark: |
| [xBrowserSync](https://www.xbrowsersync.org/) | Bookmark sync | free | :white_check_mark: |

## PC Software

| Name | Description | Cost | Self-hostable |
| ---- | ----------- | :--: | :-----------: |
| [Ungoogled Chromium](https://github.com/ungoogled-software/ungoogled-chromium) | Browser | free | - |
| [KDE Software Suite](https://kde.org/) | Desktop environment | free | - |
| [i3wm](https://i3wm.org/) | Tiling window manager | free | - |
| [VSCodium](https://vscodium.com/) | VSCode without telemetry | free | - |
| [Xournal++](https://xournalpp.github.io/) | PDF annotation and creation | free | - |
| [I still don't care about cookies](https://www.stilldontcareaboutcookies.com/) | No more nasty cookie banners | free | - |

## Android Apps

| Name | Description | Cost | Self-hostable |
| ---- | ----------- | :--: | :-----------: |
| [Aurora Store](https://auroraoss.com/) | Anonymized access to the Play Store | free | - |
| [Aurora Droid](https://auroraoss.com/) | Front-end for F-Droid | free | - |
| [Infinity](https://f-droid.org/packages/ml.docilealligator.infinityforreddit/) | Reddit client | free | - |
| [Aegis](https://f-droid.org/en/packages/com.beemdevelopment.aegis) | 2FA manager | free | - |
| [FindMyDevice](https://f-droid.org/en/packages/de.nulide.findmydevice/) | Remote phone control | free | :white_check_mark: |
| [AdAway](https://f-droid.org/en/packages/org.adaway/) | Adblocking with a hosts file | free | - |
| [OsmAnd+](https://f-droid.org/en/packages/net.osmand.plus/) | Global map | free | :x: |
| [StreetComplete](https://f-droid.org/en/packages/de.westnordost.streetcomplete/) | Improve OpenStreetMap | free | - |
| [NewPipe](https://newpipe.net/) | YouTube client | free | - |
| [Finamp](https://github.com/jmshrv/finamp) | Music client for Jellyfin | free | - |
| [K-9 Mail](https://k9mail.app/) | Mail client with PGP | free | - |
| [Ice Box](https://play.google.com/store/apps/details?id=com.catchingnow.icebox&gl=US) | Freeze Magisk to make certain banking apps _cough_ DKB run | free | - |
| [Element](https://element.io/) | Instant messaging | free | :white_check_mark: |

205
src/content/blog/tumthesis.mdx
Normal file
---
pubDate: '2022-09-29'
title: "My Bachelor's thesis journey at TUM"
description: ''
keywords:
- thesis
- latex
- figures
- spellchecking
- academic writing
- journey
hidden: false
---

In this post, I talk about the entire process of finding a thesis, the research, the
writing, and the defense at TUM. I'll try to pass on my lessons learned and maybe give you
an idea of what to expect and how to prepare yourself for every step of the journey.

Summary of my Bachelor's thesis:

- Topic: Lightweight low-latency virtual networking
- Faculty of Informatics (computer science)
- Chair of Network Architectures and Services
  ([Website](https://www.net.in.tum.de/homepage/))
- Graded: 1.0 (best possible grade)
- Time allotted: about 30 h / week for about 16-20 weeks
- Download: [thesis.pdf](/files/thesis.pdf) - [final_talk.pdf](/files/final_talk.pdf)

## Finding a topic (1-2 months)

Before finding a topic, I had to find out which areas of computer science interest me. I
started to actively look for a thesis in January 2022. Since I was about to finish the
[iLab](https://ilab.net.in.tum.de/) practical course, which taught me intermediate
concepts of networking beyond the fundamentals lecture, I naturally looked for theses at
the networking chair. Initially, I checked out the physical posters at my chair's
exhibition area and found a couple of interesting ones about creating a virtual iLab. I
reached out to the listed people but didn't hear back for a long while. Later it turned
out that the posters had not been updated since before COVID-19 and the advertised thesis was
no longer available.

On the website of the chair, I found a couple of advertisements for theses that seemed up
to date. Many of them were about topics I had a hard time placing - I barely knew any of
the abbreviations mentioned. Intimidating! Two theses struck me: both were about
SR-IOV and virtualization, which I had previous experience with. I put together a small
application letter of about 300 words. Please note that this is just an example and you
should work on a personal letter yourself.

```
Since I am one of the few students who are currently attending iLab1, and it is one of my
favorite courses in the computer science curriculum at TUM, I would like to apply for a
bachelor thesis at the networking chair. In particular, I saw the posting "Lightweight
low-latency virtual networking", which I am interested in.

Over the years, I dedicated some time to learning more about working with Linux
kernel-based operating systems. For example, regarding virtualization, I set up
PCI-passthrough for a GPU with KVM and spent hours reading documentation about topics such
as CPU pinning, static huge pages, and virtual network interfaces. In my small home lab, I
experimented with docker since containers appear to be less dependent on the system,
lightweight, and faster to spin up, which I find convenient.

While PCI-passthrough or my casual docker experiments might not be relevant for a thesis
concerning SR-IOV, it helped me to acquire a basic understanding of virtualization
solutions, which I would like to improve further. I believe the iLab helped me to develop
the right mindset for solving problems I initially only possess a rather general overview
of; I am interested in deepening my knowledge and curious to learn more.

Is this thesis still available for the upcoming summer term? Finally, I do not insist on
this particular topic or a particular language (de/en are both fine) to write the thesis
in; I am happy to pick up any opportunity to learn more about Linux, automation and
networking and - in the best case - work on something I am interested in. I would be happy
to receive a reply and - if the thesis is still available - to be taken into
consideration.
```

Promptly, I was invited to a small interview, where I first talked about myself for 5-10
minutes - my previous knowledge, my experiences at TUM, and my interests. Then my future
advisor introduced the topic to me and gave me more information about the technical details.
It was a lot to take in and I probably didn't remember more than 10%, but it sounded like
something I could figure out.

Next, I took about 5 days to research the given details and figure out whether I wanted to
accept the thesis. After messaging them back, I was given another appointment to receive
access to their infrastructure and information about the process. This appointment was
scheduled for the 2nd of March.

Timeline:

- Initial talk - shortly before the official start
- Official start - 15th of the month
- Mid talk - after ~ 2 months of work
- Deadline - 15th of the month, 4 months after the start
- Final talk - weeks after the deadline

## Putting together a proposal

After receiving all the details of the thesis and the actual git projects, I started to
formulate a small 2-page document detailing what my thesis would be doing - commonly known
as the proposal. Since I had been speaking English daily for years, I decided to write
the thesis in English. I filled out the required documents for the administration
and set the official starting date to the 15th of April (theses only start on the 15th
at TUM). So I had plenty of time left to submit a proposal and discuss it with my
professor.

From then on I had weekly virtual meetings scheduled with my advisors. During the proposal
phase, I would upload my document each week and they would comment on it during our
meetings. This already clarified many questions I had and made it much clearer to me what
the goal of the thesis should be. I would say the proposal is an integral part of the
thesis, especially for yourself, so you know what you are getting yourself into. Remember, you
can decline a thesis within the first third without repercussions.

The initial talk with Prof. Carle happened towards the end of March and was extremely
valuable to me. While I was scared shitless beforehand, it turned out to be extremely relaxed
and enriching. Prof. Carle remarked on a couple of details and advised me on what exactly I
should focus on. Going back over my protocol of the meeting, he pointed out details I
was able to work out and put into my thesis. Valuable. Don't underestimate the experience
of a professor who has been doing research in the field for decades! They surely know
where to poke to get valuable results.

## The project period

Most of the work on the project is supposed to be done during this time. A thesis is
considered to take 20 h per week, so you really should not slack off; start with your
research and implementation right away. Don't underestimate the amount of time it takes to
handle the measurement and evaluation part of your thesis. It's often not trivial to test
your prototype in a scientifically correct manner.

A word on the weekly meetings: they continued throughout, and I got to discuss
problems with my advisors. You should always be prepared for such a meeting; don't waste
the precious time of your advisors. I always made a small document where I quickly wrote down
what I had done since the last meeting, problems I was facing, questions, and finally my plan
for the upcoming week.

I enjoyed this part of the project. I got access to state-of-the-art hardware such as an AMD
EPYC 7551P and Intel X710 10 GbE NICs with special timestamping capabilities. Really fun
stuff!

During that time I was frequently working 11-12 hours a day, well into the night. It was
probably the most intense part. It paid off: after just a couple of weeks I had implemented a
basic prototype. I thought I would take some quick measurements and be done, then chill
for the remaining time. Oh boy, was I wrong. It had barely started. Getting the
measurement setup right was a task that would accompany me until the last weeks. There is
so much complexity involved when it comes to timestamping packets; I recommend reading
Chapter 6 of the thesis to get an impression of my final setup.

## The mid talk

When the two-month mark approached, we scheduled my mid-talk. I was supposed to prepare
a couple of slides and talk for exactly 10 minutes about my topic, related work, my
approach, and preliminary results. Luckily, I was able to finish a basic testing setup to
record and show off some latency measurements. Not every student gets to present results
at this stage already, although it is desired.

I was told the mid-talk is not a TUM-wide thing but specific to the chair. It is
supposed to give students the chance to have another discussion with the professor and
maybe get nudged in another direction. In the weeks before, I prepared my slides and discussed
them with my advisors. This time around, I was not as scared as before the initial talk.
Regardless, I practiced my talk a whole bunch and had an interesting discussion with Prof.
Carle.

## Writing the thesis

Up until now, I had only worked on the project but not written a word of my thesis. The
week after the mid-talk I started to put together a structure and work on the first
sections. Meanwhile, I still had to fix up my measurement setup. Initially, I was
extremely insecure about my structure and what to put into each section. I ended up moving
sections around a lot in the process.

I was allowed to send in chapters to my advisors and they would correct them. In our
weekly meetings, we would discuss my mistakes. This part was integral for me in learning
about academic writing, the necessary formalities, and expectations. I already wrote down
most of my mistakes in another [post](/blog/writingathesis).

In week 15 of the official handling time, I had my girlfriend and family proofread my
work. Lots of mistakes were spotted and corrected. Also, my advisor took a last look over
the finished thesis. There is still one honest mistake in the PDF. Did you spot it? The
next step was to get 3 printed versions of the thesis. One for the examination office, one
for Prof. Carle, and one for my advisors. A firm in Munich, printy, offers such services.
It cost me about 20 € per printout (70 pages, double-sided).

**Please make sure to consult your faculty's formalities regarding the cover and the first few
pages.** Also check the deadline and potential public holidays, and allow for traffic
when handing it in. The drop-off point was only open for a couple of hours.

## The defense / final talk

My final talk was scheduled for the 9th of September, about 3 weeks after the deadline.
I prepared my slides, got feedback on them in the weekly meetings, and even had a dress
rehearsal with my advisor. He provided invaluable feedback so that I would not make
(obvious) mistakes in my real final talk.

For my final talk, I was not terribly nervous. I practiced my talk a couple of times
beforehand, but not too much. I was almost looking forward to presenting my exciting results
and having a discussion with Prof. Carle. The talk went well; the professor seemed happy
with the results and scope, and I was able to answer all his questions. The final talk is a
20-minute presentation of your results, followed by about 5-15 minutes of questions,
or rather a discussion. If you put as much work into the thesis as I did, you should not
need to prepare specifically for this part.

## Key Takeaways

- start your work early
- be prepared for every meeting; it is part of your grade
- talk to your advisors! They are there to help you; they have supervised many theses; they
  (mostly) know how to avoid mistakes.
- avoid easy mistakes -> [my list](/blog/writingathesis)

39
src/content/blog/warranty-msi-monitor.mdx
Normal file
---
pubDate: '2024-08-10'
updatedDate: '2024-12-08'
title: "Experience with MSI's warranty for a monitor"
description: "Positive review of MSI's warranty service for a monitor."
keywords:
- warranty
- msi
- monitor
- dead pixel
- bug
- thunderfly
hidden: false
heroImage: ./images/msi.png
---

I bought a new 32" 4K MSI monitor for my birthday a couple of months ago. There was not much choice when it came to my requirements:

- IPS (or rather, not OLED),
- an integrated KVM switch with 3 ports,
- at least 60 W USB-C power delivery,
- a refresh rate of at least 120 Hz,
- 4K, 32",

so I ended up buying the MSI MAG 323UPF, despite its 'gamer' design. The device and its productivity features are awesome, and I'm very happy with them.

One day, a couple of dead pixels appeared. I was not sure if they were dead pixels or a bug, but they were not moving. Since they formed a perfect line of about 8 pixels, I assumed it was a bug, but I do not know for sure. Searching the web, this seems to be a common occurrence. I was devastated.

None of the tricks online worked:

- set up another light source to lure it outside
- use a vacuum cleaner
- use canned air in various gaps
- use a suction cup

Maybe it was a juicy one and died in the middle of the screen.

I contacted MSI support, and they were very helpful. After I uploaded a picture, it took less than 5 minutes to receive a return label. A day later, the UPS guy picked it up at my door. The package was then delivered to Poland (I live in Germany). Once it arrived there, it only took a couple of hours until I received an email confirming that the monitor was defective, and I got a free replacement immediately. Granted, I bribed the technician with a pack of Haribo gummy bears, but I'm sure that was not the reason for the quick replacement. A little kindness goes a long way.

Kudos to MSI for their excellent support. Whether it was a bug or a dead pixel, neither should happen on a new monitor. I'm glad they stick to their warranty without any hassle.

Would buy MSI again.

71
src/content/blog/writingathesis.mdx
Normal file
---
pubDate: '2022-08-12'
title: "Do's and don'ts when writing a thesis"
description: 'Useful tips for avoiding common mistakes when writing a thesis. Includes recommendations for writing, formatting, figures and Latex.'
keywords:
- thesis
- latex
- figures
- spellchecking
- academic writing
- guidelines
hidden: false
---

When I wrote my Bachelor's thesis in computer science, I had barely any experience with
academic writing. Until then, I had only attended one seminar about OT security, where I wrote a small 10-page paper that was not published. Although I learned the basics of Latex and
academic writing there, it was less comprehensive than what my thesis required.

First, check your faculty's homepage for guidelines. They usually provide extensive
documents on style, format, and writing. Ultimately, these documents overrule any other
advice you may find on dubious websites.

## Do's and don'ts

General notes:

- In chapter/section headlines, do not add the acronym.
  Good: `3. Data Plane Development Kit`.
  Bad: `3. Data Plane Development Kit (DPDK)`
- Avoid enumerations in brackets; instead, use "such as"
- Use a spellchecker!
- Read out loud to detect errors or strange wording
- Do not use a new page for a couple of sentences. Fill at least 1/4 of a page, or even more.
- Do not add a Section 7.1 when you do not have a 7.2
- Check for double spaces
- Tables/listings/... should not reach into the side margin
- Use colors and different line types to make graphs easier to distinguish
- Do not ever use forward references
- Section/Chapter/Listing always with uppercase (this might be TUM specific?)
- Tables should never have vertical lines
- Check for consistent hyphenation in words such as "low latency" vs "low-latency"

Latex:

- Citations should be on the same line; use non-breaking spaces (~) to avoid a line break
  before a citation
- Use the siunitx package for consistent formatting of numbers and units
- Use an acronym package and use it consistently throughout the thesis

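A minimal sketch of those three Latex tips put together (the citation key and the acronym declaration are placeholders; adapt to your template):

```
% preamble
\usepackage{siunitx}   % consistent formatting of numbers and units
\usepackage{acronym}   % consistent acronym handling

% declared once, e.g. in the acronym list
\acro{DPDK}{Data Plane Development Kit}

% in the text: ~ keeps the citation on the same line,
% \SI formats the unit, \ac expands the acronym on first use
We build on the \ac{DPDK}~\cite{example2022} and measure a
median latency of \SI{3.2}{\micro\second}.
```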
Figures:

- Avoid PNGs or JPEGs. Instead, use vector graphics such as SVG
- Do not write a novel in a figure caption. The caption is printed in the list of figures; long sentences look strange there and decrease readability
- A figure should use the same font as the rest of the thesis
- Avoid hard-to-read colors like yellow in figures

## Spellchecker

_I am not affiliated with any service mentioned here_

I have had a good experience with
[Writefull](https://www.writefull.com/writefull-for-overleaf). More specifically, compared
to the alternatives, they support Latex. They trained their AI on scientific papers, so
the recommendations mostly match the expected writing style. Especially when it comes to
commas, it pointed out many mistakes which I would not have noticed on my own.

One thing I disliked about Writefull is that it is only available for Word documents or
Overleaf. I am using neither. Therefore, I had to copy and paste my tex files from my local
editor into Overleaf. A bit of a hassle, but okay. Another thing I noticed is that the Latex
acronym package is not supported. It would often suggest reordering text around my `\ac`
commands in ways that no longer made sense.