3 Incorrect Things New Runners Believe About Running

Conversations I have with newbie runners always seem to cover these three things.

1. You need to run the entire time.

No, you don’t. In fact, most endurance runners have some kind of walk/run cadence they use, or at least a lighter/harder run pattern depending on the workout for the day.

As you get going, try running for 1-2 minutes, walking for 30 seconds, and then running for another 1-2 minutes. Extend the running intervals as your fitness improves.

2. You need to push through the pain.

No, this is almost never the correct answer. If you mean pushing through the discomfort of working hard, sure. If you mean pain, then you are ignoring something your body is trying to tell you.

And for beginning runners it’s usually the same few things:

  • You got the wrong shoes. Like, for real, you did. Go to Fleet Feet or your local running store and get properly fitted. It will make a much bigger difference than you think.
  • You’re running too much. Start at 3-5 miles per week and increase total volume by about 10% per week.
  • You’re running wrong. You are lifting your feet too high off the ground, landing in front of your body instead of under it, and landing too hard, sending that force back up your body. Expect pain in your ankles, knees, hips, and maybe your back.

I’m not the expert on proper running form; find a PT, or spend some time on YouTube.

3. All runs are the same.

Nope. Sure aren’t.

When you are just starting, sure, do whatever is enjoyable. As you increase your volume, you’ll want to spend relatively more time in zone two than zones four and five, and none in zone three.

What is zone two? The zone where you can carry on a conversation with a friend but you’re still working. “Talk but can’t sing.”

Zone two is where you build a cardio base, burn fat, and teach your body endurance.

Zone three is where you will naturally tend to run: too fast to get the benefits of zone two, but too slow to get the benefits of zone four.

Zone four is short answers: speaking in words, not sentences. This is where you burn more carbs than fat and start doing more to increase overall heart and lung capacity.

Zone five is one- to two-word answers, or grunting. You can only spend two to three minutes at a time in zone five, or you’re not actually in zone five. This is where you increase VO2 max.

4. BONUS: Running Sucks

¯\_(ツ)_/¯ 

Maybe you’ll still feel that way after you’ve fixed the first three items on this list, but probably not.

ChatGPT’s User-Agent… Obfuscation

If you ask ChatGPT to “please fetch https://whatmyuseragent.com/” in regular mode, it gives an answer like:

Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko); compatible; ChatGPT-User/1.0; +https://openai.com/bot

Notice it clearly labels itself ChatGPT.
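
Since it labels itself, spotting (or rate-limiting) it server-side is a one-liner. A minimal sketch, assuming an nginx-style access log (the path is hypothetical):

#!/bin/bash
# Count requests from the self-identified ChatGPT-User agent in a web
# server access log. The log path is an assumption for illustration.
grep -c "ChatGPT-User" /var/log/nginx/access.log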

However, if you ask it the same thing in agent mode, it lies:

[Screenshot: WhatMyUserAgent.com reporting a Mac OS X Chrome user agent string]

That is my user-agent, not ChatGPT’s!

This comment on Hacker News tries to find a gray area:

I find this problem quite difficult to solve:

1. If I as a human request a website, then I should be shown the content. Everyone agrees.

2. If I as the human request the software on my computer to modify the content before displaying it, for example by installing an ad-blocker into my user agent, then that’s my choice and the website should not be notified about it. Most users agree, some websites try to nag you into modifying the software you run locally.

3. If I now go one step further and use an LLM to summarize content because the authentic presentation is so riddled with ads, JavaScript, and pop-ups, that the content becomes borderline unusable, then why would the LLM accessing the website on my behalf be in a different legal category as my Firefox web browser accessing the website on my behalf?

But I really don’t think there is one. While I would be equally annoyed to find my requests to ChatGPT to do research stymied, that doesn’t give ChatGPT the right to lie to other online businesses about ‘who’ it is.

Creating A Timelapse with bash, sshfs, imagemagick and ffmpeg in 2007

This is the story of how I created a million+ image timelapse with absolutely no knowledge of how to do it correctly.

One day in the late 2000s I’m sitting in the windowless dungeons of Bethel University. A new building is being constructed next door. I can hear the construction happening and I want a window.

“What are the odds they put up a webcam?”

They did! Cool.

“Can I cron this?”

I sure could.

Over the course of a couple of years I saved millions of jpgs from the construction, and then needed to figure out how to put them into a timelapse.

It wasn’t as easy as just stringing them all together, because when it got dark at night you’d end up with a long black spell in the video. How to get around that?

Simple time-of-day comparisons (e.g., keep only 8am-5pm) wouldn’t work, especially in Minnesota, where day length changes dramatically over the year.

Solution: imagemagick.

IM would give me the darkness/lightness of an image. So for months my workflow was to compute the relative brightness of every single image, every time I wanted to update the timelapse.

Something like:

#!/bin/bash

mkdir -p frames

i=0

# Define the brightness threshold (10% of the maximum brightness value, which is 1.0)
BRIGHTNESS_THRESHOLD=0.1

# Read from process substitution, not a pipe, so the $i counter survives
# the loop; sort keeps the frames in order (assuming filenames sort
# chronologically).
while read -r image; do
    # Get brightness (mean pixel value, typically 0.0 to 1.0)
    brightness=$(identify -format "%[fx:mean]" "$image")

    # Perform the brightness comparison using bc for floating-point arithmetic
    if (( $(echo "$brightness > $BRIGHTNESS_THRESHOLD" | bc -l) )); then
        i_filename=$(printf "%04d.jpg" "$i")
        ln -s "$(readlink -f "$image")" "frames/$i_filename"
        ((i++))
    fi
done < <(find . -maxdepth 1 -type f -name "*.jpg" | sort)

# Check if any frames were linked before attempting to create the video
if [ "$i" -gt 0 ]; then
    ffmpeg -r 25 -i frames/%04d.jpg -c:v libx264 -vf "fps=25,format=yuv420p" output.mp4
    echo "Processing complete. Symbolic links created in 'frames/' for images over ${BRIGHTNESS_THRESHOLD} brightness."
    echo "Video 'output.mp4' generated from ${i} selected frames."
else
    echo "No images met the brightness threshold of ${BRIGHTNESS_THRESHOLD}. No symbolic links created or video generated."
fi

Except the original was far less pretty; I had Gemini clean that one up for me.

So every time I generated a new video, I recomputed the brightness of every frame and used symlinks to give ffmpeg sequential frame numbers.

Yes, I was creating tens of thousands and then hundreds of thousands of symlinks to get ffmpeg to pick up on them as individual frames.

Eventually I figured out how to not re-process everything, I think by moving processed images to a different folder. Something very high tech like that.

Was going great until some idiot started leaving a light on overnight.

Completely threw my heuristic out the window.

BUT, I soon found that counting the number of unique colors in an image was an even better signal than overall brightness. So: same loop, but get the count of unique colors, as in the sketch below.
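
Something like this, give or take (the threshold here is invented; tuning it against real frames would be the actual work):

#!/bin/bash
# Same loop as before, but keyed on %k, ImageMagick's count of unique
# colors in an image. COLOR_THRESHOLD is made up for illustration.
COLOR_THRESHOLD=1000

for image in *.jpg; do
    colors=$(identify -format "%k" "$image")
    if [ "$colors" -gt "$COLOR_THRESHOLD" ]; then
        echo "keep $image"    # daylight frame: plenty of distinct colors
    fi
done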

Problem: we were now at millions of images and still in the Pentium age.

What to do?

What any self-respecting bash guy does: get more computers, then write bash scripts that create a mysql database and load up every image into the database.

I created a job queueing system — of sorts — that required bash scripts to loop over mysql SELECT statements and write multiple imagemagick commands to a single sh file, and execute that.

Something like:

brightness=$(identify -format "%[fx:mean]" "$image")
sql=$(printf "INSERT INTO images_and_brightness (image_path, brightness_value) VALUES ('%s', %.4f);" "$image" "$brightness")
echo "$sql" | mysql -h "$MYSQL_HOST" -u "$MYSQL_USER" -p"$MYSQL_PASSWORD" "$MYSQL_DB"

But put ten of those into a sh file at a time, and create thousands of sh files.

The main server wrote those and the “workers” picked up a file over an sshfs filesystem and ran it locally. When they were done, they deleted the file and that’s how it was removed from the “queue.”
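
A worker, roughly, reconstructed from memory (the mount point and the rename-based claiming are guesses; the real scripts were surely cruder):

#!/bin/bash
# Worker loop: grab the first job file off the shared sshfs mount,
# run it, then delete it. QUEUE_DIR is an assumed path.
QUEUE_DIR=/mnt/queue

while true; do
    job=$(ls "$QUEUE_DIR"/*.sh 2>/dev/null | head -n 1)
    if [ -z "$job" ]; then
        sleep 10
        continue
    fi
    # Claim the job by renaming it so another worker doesn't grab it too
    mv "$job" "$job.running" 2>/dev/null || continue
    bash "$job.running"
    rm -f "$job.running"    # deleting it is what removes it from the "queue"
done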

So lots and lots and lots of bash scripts now running in a distributed computing environment over ssh on old Pentium 3s my employer had no use for.

I don’t know if I spent more time figuring this out than I would have spent just letting my Pentium run it.

And it still symlinked to the original images. Ain’t nobody got enough space for two copies of those jpegs.

At least nobody still eating ramen twice a day.

To this day, it’s the only distributed computing system I ever made, and, I believe, one of the more unique systems a person could have come up with.

AI Small Wins; Real World Difference

I’m deeply skeptical of most commercial claims about AI “personalizing” education—at least in the short term. My doubt isn’t about the underlying technology, but about how poorly it’s likely to be executed. (I ought to write another post on this, because it also shows great promise.)

But there is room for real-world wins in knowledge and education.

One of the most useful tools I’ve built is a system that turns any article or website into a podcast episode with high-quality voice narration. I use it daily—catching up on news while driving or listening to long-form Tolkien essays before bed.
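
The tool itself is more involved, but the core idea is small: extract the article text and hand it to a text-to-speech API. A minimal sketch of that idea only (lynx and OpenAI’s /v1/audio/speech endpoint are stand-ins here, not necessarily what the real tool uses; long articles would also need chunking, since the API caps input length):

#!/bin/bash
# Sketch: turn a URL into an MP3. Everything here is illustrative;
# a real pipeline would chunk long text and publish into an RSS feed.
URL="$1"
text=$(lynx -dump -nolist "$URL")

curl -s https://api.openai.com/v1/audio/speech \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d "$(jq -n --arg t "$text" '{model: "tts-1", voice: "alloy", input: $t}')" \
  -o episode.mp3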

Today, I showed Gavrel how to use it.

Gav devours audio. He’s only 14 and has already logged over 13 months on Audible. We can’t get him books fast enough.

He’s not a fan of reading text—but he loves to learn. So today, I taught him how to ask AI to generate a custom article and add it to his podcast feed.

Prompt (I do not know what these things are):

Please write a story on the v1v2 mouse-tank (mauz?) and the rat-tank landship concept.

Give me a long story article that is very detailed, uses trustworthy and historical information that you can find on the web and is optmized for a podcast, then add it to my podcast feed.

Result several minutes later: a 12-minute podcast on… whatever those WWII German weapons were.

This is a small win at home: a way for my 14-year-old to generate articles tailored to the way he learns best. It’s not without risk; the AI might get some things wrong. But because it’s pulling from general web knowledge, it will be roughly as accurate as what he could Google himself, in a fraction of the time.

There are so many ways that use-cases like this can be beneficial.

The real catch is this: for it to be tailored, it also can’t be a mass-market product; that isn’t how customization works…

P.S. Listen to more examples:

  • On Gondolin, via Tolkien Gateway
  • On Larry the Chief Mouser
  • My morning report, dynamically generated for me each day
