Just a quick note after yesterday's S2 Pico OLED
tutorial.
I encountered some hiccups getting Serial.println() to work on Arduino with
this board. Essentially nothing would show up in the Serial Monitor after reprogramming.
I think the core of the issue is that the ESP32-S2 has native USB.
ESP8266 and older ESP32 boards used a separate USB-to-serial converter chip, so both
programming over serial and printing to serial happened without any glitches to the
USB connection. With native USB, I think this is what happens:
You press Button 0, cycle Reset and release B0
ESP32-S2 boots into "programming mode" and initializes native USB as COM port
You hear the USB connection sound as COM port is assigned
Arduino reprograms the flash
You manually press reset
USB COM port actually drops at this point
When you have Serial.begin(); in your code, it now initializes native USB as
COM port again
You hear the "USB chime" again from your computer, and COM port is assigned
Now if you're used to having the Arduino Serial Monitor open all the time, the
same COM13 that was there during programming on my PC is now a "new" COM13.
It seems the Serial Monitor doesn't notice the change. The solution is simple:
Reprogram your chip.
Reset, wait for the "chime"
Only now open the serial monitor
The irksome thing is that I'll now need a delay in setup() to see what's
going on. Maybe I have an old version of Arduino or something. If you
know another solution, you're welcome to drop me a line (me at codeandlife.com).
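For reference, here's a minimal sketch of the workaround, assuming a standard Arduino setup on this board; the three-second delay is arbitrary, and the commented-out wait loop is an alternative I haven't verified on this core:

void setup() {
  Serial.begin(115200);          // re-initializes native USB CDC; the host re-enumerates the port
  delay(3000);                   // give the PC time to reattach and yourself time to open the monitor
  // while (!Serial) delay(10);  // alternative: wait until the CDC port is actually open
  Serial.println("Boot!");
}

void loop() {
}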
Notice: I wanted to see if OpenAI canvas can do reasonable Markdown editing, so this post is co-written with ChatGPT 4o with Canvas. The code and Fish script were written separately before this post, also with the gracious help of our AI overlords. I've kept the prose to a minimum and edited the result myself, so the benefit should still be high, even though the amount of manually written content is low.
Recently, I wanted to make my command-line experience a bit more conversational. Imagine writing a comment like # list files, pressing enter, and seeing it magically turn into the corresponding Fish shell command: ls. With OpenAI's API, this becomes not just possible but surprisingly straightforward. It should also save me from jumping over to ChatGPT every time I need to remember exactly how find, let alone ffmpeg, works.
This blog post walks through creating a Python script called shai that turns natural language comments into Unix commands using OpenAI's API, and then utilizing that script with a Fish shell function to replace a comment written on the command line with the actual command. After the command is generated, you can edit it before running it — a nice way to integrate AI without losing control.
Setting up the environment
Before we dive into the script, make sure you have the following:
Python installed (version 3.8 or higher is recommended).
An OpenAI API key. If you don’t have one, sign up at OpenAI.
The OpenAI Python library in a Python virtual environment (or adjust the code below if you prefer something else like pip install openai on your global env):
$ python3 -m venv /home/joonas/venvs/myenv
$ source /home/joonas/venvs/myenv/bin/activate.fish # or just activate with bash
$ pip install openai
A configuration file named openai.ini with your API key and model settings, structured like this:
[shai]
api_key = your-openai-api-key
model = gpt-4o-mini
The Python script
Here’s the Python script, shai, that interprets natural language descriptions and returns Unix commands:
#!/home/joonas/venvs/myenv/bin/python
import os
import sys
from openai import OpenAI
import configparser

# Read the configuration file
config = configparser.ConfigParser()
config.read('/home/joonas/openai.ini')

# Initialize the OpenAI client with your API key
client = OpenAI(api_key=config['shai']['api_key'])

def get_unix_command(natural_language_description):
    # Define the system prompt for the model
    system_prompt = (
        "You are an assistant that converts natural language descriptions of tasks into "
        "concise, accurate Unix commands. Always output only the Unix command without any "
        "additional explanations or text. Your response must be a single Unix command."
    )

    # Call the OpenAI API with the description
    response = client.chat.completions.create(
        model=config['shai']['model'],
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": natural_language_description},
        ],
        temperature=0,  # To ensure consistent and accurate output
    )

    # Extract the command from the response
    command = response.choices[0].message.content.strip()
    return command

def main():
    if len(sys.argv) < 2:
        print("Usage: shai <natural language description>")
        sys.exit(1)

    # Get the natural language description from command line arguments
    description = " ".join(sys.argv[1:])

    try:
        # Generate the Unix command
        unix_command = get_unix_command(description)
        print(unix_command)
    except Exception as e:
        print(f"Error: {e}")

if __name__ == "__main__":
    main()
How it works
Configuration: The script reads an openai.ini file for API credentials and model settings.
Command generation: When you provide a natural language description, the script sends it to OpenAI’s API along with a system prompt specifying the desired output format.
Output: The script returns the corresponding Unix command.
You could place it in e.g. ~/bin and do chmod +x shai to make it runnable, and then test it:
$ shai list files
ls
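A slightly less trivial query shows why this beats digging through man pages. With temperature set to 0 the output is fairly repeatable, but the exact command you get back may of course differ:

$ shai convert video.mov to mp4 with ffmpeg
ffmpeg -i video.mov video.mp4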
Extending to Fish shell
To make this functionality seamlessly available in the Fish shell, you can use the following Fish function:
function transform_comment_line
    set cmd (commandline)

    # Check if line starts with a hash (a comment)
    if string match -q "#*" $cmd
        # Remove the '#' and possible leading space
        set query (string trim (string sub -s 2 $cmd))

        # Run your "shai" script (replace 'shai' with the actual command)
        # Assuming that 'shai' takes the query as arguments and prints the command
        set result (shai $query)

        # Replace the current command line with the output of 'shai'
        commandline -r $result

        # Now your command line is replaced with the generated command.
        # The user can edit it further if needed, and press Enter again to run.
    else
        # If it's not a comment line, just execute normally
        commandline -f execute
    end
end
Save this function in your Fish configuration directory as ~/.config/fish/functions/transform_comment_line.fish. Then bind it to a key or trigger it manually to convert comments into executable commands. I am using this in my ~/.config/fish/config.fish to run it automatically on enter:
if status is-interactive
    # Commands to run in interactive sessions can go here
    bind \r transform_comment_line
end
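With the binding in place, the flow looks roughly like this; the comment line is replaced in place, and the suggested command is illustrative rather than guaranteed:

$ # list all jpg files recursively
... press Enter, and the comment is replaced with something like ...
$ find . -name "*.jpg"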
And that is literally it. Enjoy!
The ending was edited for brevity; ChatGPT wanted to rant about how this could become a powerful part of your workflow...
Just received the Wemos S2 pico board from AliExpress, and thought I'd write
a simple tutorial on how to use it with Arduino, as Wemos' Getting started guide was a bit
outdated on Arduino config and did not have an OLED example.
Quick Background
I've been planning to make a DIY hardware Bitcoin wallet just for fun. To make
it even remotely secure — once you assume attackers have pwned your internet-connected
devices, it's pretty much varying degrees of tinfoil — it's essential to have an
external display and a button, so you can print out your secret key or verify which
address your coins are being signed over to.
My ESP8266 supply was running low, and since I wasn't sure it had enough memory
anyway, I looked at what Wemos might have nowadays, since I've used their nice
D1 Mini in several projects, such as the ATX power control. I was very happy to
discover they had this Wemos S2 Pico available at a reasonable 8 € price point
from the LoLin AliExpress store, featuring an SSD1306-compatible OLED display
and even a button. Perfect!
Note: there are cheaper clones of Wemos products available, but I
like to show my support even if it costs a dollar or two more!
Setting up Arduino for ESP32-S2 Support
Following Wemos' Getting Started tutorial, I realized the Boards list did not
include any ESP32-S2 modules. I checked that I had the "latest" 1.0.6 version
installed. Some googling led me to this Adafruit page, and I realised that I
needed the 2.0.x version, which is served from a different location (the latest
ESP32 branch now lives on GitHub).
After following the installation instructions — essentially replacing the old
Espressif "Additional Boards Manager URL" in Arduino Preferences with the new
one — I updated the ESP32 package to 2.0.1 and voilà: the "ESP32S2 Dev Module"
is now available in the ESP32 Boards section. The USB CDC settings had changed
a bit since Wemos' instructions were written, so this is how I set it up
(changes made highlighted):
Note that the S2 Pico requires you to hold Button 0 down, press the Reset button
and release Button 0 to enter flashing mode. This will change the COM port!
Thankfully, it seems to stay in that mode, so you should not be in a rush to
flash.
After a bit of an AI hiatus, I noticed that the Llama 3 models had been released and wanted to try them out. Sure enough, after a week the weights were available at the official site. However, my ollama Docker image hadn't been updated in a while, and I wanted to upgrade it without losing the models.
There was almost no information on this available online yet, and even the
ollama docker documentation is quite non-existent — maybe for seasoned
Docker users it is obvious what needs to be done? But not for me, so let's see
if I can manage it.
Upgrading the docker image
First, let's just upgrade the ollama/ollama image:
$ sudo docker pull ollama/ollama
This is nice, but the currently running container is still the old one. Let's stop it:
$ sudo docker stop ollama
Checking the location of the files
I remember I set a custom directory to store the models. Let's check where it is:
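One way to check is to inspect the Docker volume; this assumes the volume is simply named ollama, and the output is abridged to the relevant fields:

$ sudo docker volume inspect ollama
[
    {
        "Driver": "local",
        "Mountpoint": "/mnt/scratch/docker/volumes/ollama/_data",
        "Name": "ollama",
        ...
    }
]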
As can be seen, the models are stored in /mnt/scratch/docker/volumes/ollama/_data. Let's make a hard-linked copy
of the files into another folder, to make sure we don't lose them:
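cp's archive mode combined with -l creates hard links instead of copying file contents, so this is fast and takes essentially no extra disk space; the backup path below is just an example:

$ sudo cp -al /mnt/scratch/docker/volumes/ollama/_data /mnt/scratch/ollama_backup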
This site has been migrated from Wordpress to an 11ty-based static site. I took the posts, categories, tags and comments as JSON data and made the necessary templates for the conversion. Everything should be a lot faster now.
The look is still a bit bare, and some things like tables seem a bit broken. I will hopefully address these issues during the upcoming days, weeks and months. Enjoy!
PS. Comments are currently disabled; I was only receiving spam in any case. You
can check out my homepage at https://joonaspihlajamaa.com/ if you want to
contact me.
The security of sensitive information is of utmost importance today. One way to enhance the security of
stored passwords is by using PBKDF2 with SHA256 HMAC, a cryptographic algorithm that adds an extra layer of protection if the password hashes are compromised. I recently covered how to calculate PBKDF2 yourself with Python, but today I needed to do the same with JavaScript.
CryptoJS documentation on PBKDF2 is as scarce as everything else on this library, and trying out the 256-bit key example with Node.js gives the following output:
$ node
Welcome to Node.js v19.6.0.
Type ".help" for more information.
> const crypto = require('crypto-js')
undefined
> const key = crypto.PBKDF2("password", "salt", {keySize: 256/32, iterations: 4096})
undefined
> crypto.enc.Hex.stringify(key)
'4b007901b765489abead49d926f721d065a429c12e463f6c4cd79401085b03db'
Now let's recall what Python gave here:
$ python
Python 3.10.7 (tags/v3.10.7:6cc6b13, Sep 5 2022, 14:08:36) [MSC v.1933 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import hashlib
>>> hashlib.pbkdf2_hmac('sha256', b'password', b'salt', 4096).hex()
'c5e478d59288c841aa530db6845c4c8d962893a001ce4e11a4963873aa98134a'
>>>
Uh-oh, they don't match! Looking at pbkdf2.js in the CryptoJS GitHub source, we can see that the algorithm defaults to SHA-1.
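You can confirm this in the same Node session: explicitly passing crypto.algo.SHA1 as the hasher reproduces the earlier output exactly (assuming the same crypto-js version as above):

> const sha1key = crypto.PBKDF2("password", "salt", {keySize: 256/32, iterations: 4096, hasher: crypto.algo.SHA1})
undefined
> crypto.enc.Hex.stringify(sha1key)
'4b007901b765489abead49d926f721d065a429c12e463f6c4cd79401085b03db'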
Seeing how keySize and iterations are overridden, we only need to locate SHA256 in the proper namespace to guide the implementation to use SHA256 HMAC instead:
$ node
Welcome to Node.js v19.6.0.
Type ".help" for more information.
> const crypto = require('crypto-js')
undefined
> const key = crypto.PBKDF2("password", "salt", {keySize: 256/32,
... iterations: 4096, hasher: crypto.algo.SHA256})
undefined
> crypto.enc.Hex.stringify(key)
'c5e478d59288c841aa530db6845c4c8d962893a001ce4e11a4963873aa98134a'
Awesome! Now we are ready to rock'n'roll with a proper hash function.