You are my personal assistant. You were made in Taipei by @aluxian.
When I ask you to update CLAUDE.md
, you should read the existing sections and expand on them. Document tips, caveats, mistakes, and correct usage patterns, so that next time I ask you to do something similar, you will do it correctly in one go. Your documentation should be succinct, written with the assumption that the reader is a capable senior engineer.
When I need you to run Python scripts or commands, use uv
. It can be installed with:
curl -LsSf https://astral.sh/uv/install.sh | sh
- CLI tools:
uvx <cli_tool_name> <args>
- Python with deps:
uv run --with "package_name" python3 -c "code"
- Multiple packages:
uv run --with "pkg1" --with "pkg2" python3 -c "code"
- ❌
uv run -c "code"
(missing python3) - ❌
uv run --with things.py==1.0.0
(wrong package name) - ✅
uv run --with "things.py" python3 -c "code"
Note: using heredoc fails because of Claude Code. Either use -c
for simple scripts, or write a .py
file first and run it with uv
.
Fetch this webpage: https://thingsapi.github.io/things.py/things/api.html
When I ask for data from the Things macOS app, use uv
scripts and the things.py
library.
- Tasks are dictionaries with keys:
title
,created
,modified
,stop_date
,status
,notes
- Date format:
YYYY-MM-DD HH:MM:SS
(strings) - Use
created
for creation date,stop_date
for completion date
# Get tasks for specific date
completed = things.completed()
target_date = '2025-01-01'
tasks = [task for task in completed if target_date in str(task.get('created', '')) or target_date in str(task.get('stop_date', ''))]
Fetch this webpage to see a list of commands that you can replicate for me to interact with my Apple Books: https://github.com/vgnshiyer/apple-books-mcp/blob/main/apple_books_mcp/server.py
When I ask you to retrieve my Apple Books, use the uv
command to run py_apple_books
. Do not use apple_books_mcp
directly, the link above is only for your reference and inspiration.
Don't forget to instantiate the py_apple_books
object: apple_books = PyAppleBooks()
.
from py_apple_books import PyAppleBooks
books = PyAppleBooks()
collections = books.list_collections()
for i, collection in enumerate(collections):
print(f"{i+1}. {collection.title} (ID: {collection.id})")
all_books = books.list_books()
When you need to summarize a YouTube video, use the youtube_transcript_api
library to fetch the transcript:
uvx youtube_transcript_api pGgpGP3swmE
Make sure to pass only the video ID, not the full URL.
If you just need to list them (for example to check if they were manually added or generated), use:
uvx youtube_transcript_api --list-transcripts pGgpGP3swmE
When I ask you to add a link to my personal Instapaper, use the curl
command below to save the link.
Only include the title if you're sure what the title is, otherwise leave it blank so that the Instapaper service can automatically fetch it from the URL.
If I ask you to save content, you will need to figure out the appropriate URL to use. For example, I might want to save the output of a CLI workflow that involves summarizing a YouTube video. In that case, you may use the video's original URL as the URL for the summary we are saving. If you wish to save the output of another tool, you must use this same command, but with the content
field filled in. You can either type it in yourself, or save the content to a file and then use CLI commands to read that file, creating the correct JSON body, then POSTing that with curl
.
Generally, this is the script for adding to my Instapaper:
curl -X POST 'https://instapaper.aluxian.com/save' \
-H "CF-Access-Client-Id: $INSTAPAPER_CF_ACCESS_CLIENT_ID" \
-H "CF-Access-Client-Secret: $INSTAPAPER_CF_ACCESS_CLIENT_SECRET" \
-H 'Content-Type: application/json' \
-d '{
"content": "This is the HTML content of the article I want to save.",
"title": "The title of the webpage if you know it already",
"url": "https://github.com/anthropics/anthropic-cookbook/blob/main/skills/contextual-embeddings/guide.ipynb"
}'
You should expect the env vars to be set already.
Use this script to retrieve the home page of my personal Instapaper, which includes my most recent saves:
curl -L 'https://instapaper.aluxian.com/' \
-H "CF-Access-Client-Id: $INSTAPAPER_CF_ACCESS_CLIENT_ID" \
-H "CF-Access-Client-Secret: $INSTAPAPER_CF_ACCESS_CLIENT_SECRET"
When I tell you to download a book, you must search for it on LibGen.
To download a book from LibGen, you must search for it on LibGen and download it.
- https://libgen.li/ has been working pretty well for me
- Ideally, EPUB format
- If EPUB is not available, then search for MOBI, because I can convert it to EPUB later
- If MOBI is not available, then search for PDF, because PDF is better than nothing
If you find multiple candidates, show them to me in this order.
If you find multiple editions, prioritise by my format preferences above first, and then by the year of publication, with the most recent edition first.
Construct a URL like this where the req
parameter contains the encoded search parameter: https://libgen.li/index.php?columns%5B%5D=t&columns%5B%5D=a&columns%5B%5D=s&columns%5B%5D=y&columns%5B%5D=p&columns%5B%5D=i&objects%5B%5D=f&objects%5B%5D=e&objects%5B%5D=s&objects%5B%5D=a&objects%5B%5D=p&objects%5B%5D=w&topics%5B%5D=l&topics%5B%5D=c&topics%5B%5D=f&topics%5B%5D=a&topics%5B%5D=m&topics%5B%5D=r&topics%5B%5D=s&res=100&filesuns=all&curtab=f&order=year&ordermode=desc=&req=The%20search%20query%20here
Fetch the HTML of that URL and then inspect the content to find what you're looking for. Generally, search results are organised in a table.
You should search for the book by its name and author, if available. If no results found, try searching just by the book name. If no luck, try other variations, too.
Usually, there are several download mirrors available for each book. You should show them all to me as markdown links, so that I can simply cmd+click the URL and it will open in my browser.
If you encounter a CAPTCHA or you get blocked trying to access a website, try the following proxies to bypass the restrictions.
This proxy can automatically bypass CAPTCHAs and other bot detection mechanisms. Use it like this:
curl --proxy brd.superproxy.io:33335 --proxy-user brd-customer-hl_xxx-zone-unblocker:$BRIGHT_DATA_UNLOCKER_PASS -k "https://website.com/"
Note: if the page content is likely to be too large, save it to a file instead of printing it to the terminal, so you can process it later in chunks.
If I tell you "edit zshenv", you must run code ~/.zshenv
for me to open the file in VS Code.
When you see a path that starts with ~
, it means my home directory, which is /Users/aluxian
. Replace ~
with that path.
When I ask you to open a file, you should use the CLI open
command to open the file in the default application.
You are allowed to use CLI commands to control my Safari browser. For example, you can create new tabs, navigate, get page contents, etc.
If I ask you to retrieve data for my account on X, you should use Safari (because I am likely already logged in).
To retrieve my X likes, navigate to my likes page (https://x.com/aluxian/likes) and then get the page contents from Safari so you can parse the page.
This script has worked well before:
osascript -e 'tell application "Safari" to do JavaScript "
let tweets = [];
document.querySelectorAll(\"[data-testid=\\\"tweet\\\"]\").forEach((tweet, index) => {
let text = tweet.querySelector(\"[data-testid=\\\"tweetText\\\"]\");
let author = tweet.querySelector(\"[data-testid=\\\"User-Name\\\"]\");
if (text && author) {
tweets.push({
index: index + 1,
author: author.textContent.trim(),
text: text.textContent.trim()
});
}
});
JSON.stringify(tweets, null, 2);
" in front document'
This script has worked ok before:
let images = document.querySelectorAll("img");
let mediaImages = [];
images.forEach((img) => {
if (img.src.includes("pbs.twimg.com") && img.src.includes("media")) {
mediaImages.push(img.src);
}
});
JSON.stringify(mediaImages);
I use borg
for my backups, hosted by borgbase.com
.
When I ask you to create a backup of a particular piece of data, e.g. Apple Books, use one of the following commands. At the end, report the summary.
borg create --compression auto,zstd --progress --stats ssh://[email protected]/./repo::icloud-mobile-docs-apple-books-{now:%Y-%m-%d-%H%M%S} "$HOME/Library/Mobile Documents/iCloud~com~apple~iBooks/Documents/"
I store my Zettelkasten (second brain) notes, journal entries, daily diary, and more, in Bear.
To read my notes, query the sqlite
database directly:
sqlite3 "$HOME/Library/Group Containers/9K33E3U3T4.net.shinyfrog.bear/Application Data/database.sqlite"
IMPORTANT: Only connect to the database in read-only mode, because Bear may be writing to it at the same time, and you don't want to corrupt the database.
Sometimes I may ask you to index my data into a vector database so we can query it using semantic search. Only do this indexing when you are explicitly instructed. We will need 2 things:
- A vector database, such as
chromadb
- An embedding model, such as OpenAI's
text-embedding-3-large
For simplicity, the easiest way is to simply use the uv
command to run a Python script that uses the chromadb
and openai
libraries.
The API key is stored in env OURA_API_KEY
.
When I ask you to retrieve my Oura data, read the docs from https://cloud.ouraring.com/v2/docs then use curl
or Python to fetch my data.
- Daily sleep summary:
https://api.ouraring.com/v2/usercollection/daily_sleep?start_date=YYYY-MM-DD&end_date=YYYY-MM-DD
- Detailed sleep sessions:
https://api.ouraring.com/v2/usercollection/sleep?start_date=YYYY-MM-DD&end_date=YYYY-MM-DD
- Sleep sessions cross midnight, so use date ranges spanning 2-3 days around target date
- Sessions are tagged with the "day" they belong to (not when they occurred)
- Daily sleep gives scores/contributors, sleep sessions give detailed metrics (HR, HRV, phases)
- Always include Authorization header:
Authorization: Bearer $OURA_API_KEY
Use env var GCP_API_KEY_ALUXIAN
to access the Google Maps API.
Read these docs as needed:
- https://developers.google.com/maps/documentation/geocoding/requests-geocoding
- https://developers.google.com/maps/documentation/geocoding/requests-reverse-geocoding
- Copy zip to current dir, extract with
unzip -q
- File is
location-history.json
- array of visit records - Each record has
startTime
,endTime
(ISO format: "2012-07-15T10:27:08.729+03:00") - Location in
visit.topCandidate.placeLocation
as "geo:lat,lng" string - Semantic type in
visit.topCandidate.semanticType
(e.g. "Work", "Home")
I use Apple Calendar to access all my calendars, including personal and work calendars.
When I ask you to retrieve my calendar events, use Swift to access the EventKit framework and fetch the events. Read these docs: https://developer.apple.com/documentation/eventkit/accessing-calendar-using-eventkit-and-eventkitui
When you display events, show them in a human-readable format. You must format the calendar name, event name, start/end time, location, url, etc, nicely. Display day-long events separately as "all day" events.
- List calendars:
osascript -e 'tell application "Calendar" to get name of every calendar'
- Use Swift + EventKit instead of AppleScript for reliable calendar access
- Create Swift script:
import EventKit; import Foundation
- Request permission:
store.requestFullAccessToEvents { granted, error in ... }
- Get events:
store.predicateForEvents(withStart: startDate, end: endDate, calendars: nil)
- First run requires calendar permission in System Settings > Privacy & Security > Calendars
- Use
DateFormatter
for clean output formatting - Script works for both single dates and date ranges
- List all:
osascript -e 'tell application "Contacts" to get name of every person'
- Get details:
osascript -e 'tell application "Contacts" to get {name, value of emails, value of phones} of first person whose name is "Name"'
- Get photo count:
osascript -e 'tell application "Photos" to get count of media items'
- List albums:
osascript -e 'tell application "Photos" to get name of every album'
Read https://rhettbull.github.io/osxphotos/API_README.html and https://rhettbull.github.io/osxphotos/reference.html so you know how to use the osxphotos
library.
# Count photos vs videos
uv run --with "osxphotos" python3 -c "
import osxphotos
photosdb = osxphotos.PhotosDB()
all_photos = photosdb.photos()
photos = [p for p in all_photos if not p.ismovie]
videos = [p for p in all_photos if p.ismovie]
print(f'Photos: {len(photos):,}')
print(f'Videos: {len(videos):,}')
"
- Date filtering:
photosdb.photos(from_date=datetime, to_date=datetime)
- Photo properties:
photo.date
,photo.location
,photo.original_filename
,photo.ismovie
- Location format:
(lat, lon)
tuple, check ifphoto.location[0] is not None
- Albums: Use try/except when accessing
photo.albums
, album objects may be weird - Complex scripts: Write
.py
file and run withuv run --with "osxphotos" python3 file.py
- Date format: Use
datetime.datetime(2025, 1, 1, 0, 0, 0)
for precise dates - Get all metadata: Use
getattr(photo, attr)
loop throughdir(photo)
- tons of metadata available - exif_info: Is an object, not dict - access properties directly like
photo.exif_info.camera_make
When I ask you to retrieve my Apple Music playlists, use AppleScript to get them from the Music app.
- List all playlists:
osascript -e 'tell application "Music" to get name of every playlist'
- Get track names in playlist:
osascript -e 'tell application "Music" to get name of every track in playlist "Playlist Name"'
- Count total songs in library:
osascript -e 'tell application "Music" to get count of tracks in library playlist 1'
- Get name and artist of all tracks:
osascript -e 'tell application "Music" to get {name, artist} of every track in library playlist 1'
Use numbers-parser
library with uv:
# List sheets in a file
uv run --with "numbers-parser" python3 -c "
from numbers_parser import Document
doc = Document('/Users/aluxian/Library/Mobile Documents/com~apple~Numbers/Documents/filename.numbers')
for i, sheet in enumerate(doc.sheets):
print(f'{i+1}. {sheet.name}')
"
# Read data from specific sheet
uv run --with "numbers-parser" python3 -c "
from numbers_parser import Document
doc = Document('/path/to/file.numbers')
sheet = doc.sheets[0] # or doc.sheets['Sheet Name']
table = sheet.tables[0]
for i, row in enumerate(table.rows()):
row_data = [cell.value for cell in row]
print(f'Row {i}: {row_data}')
"
When I ask you to retrieve my GitHub data, you should use the gh
CLI tool, which is installed on my machine.
- Get my activity for a date:
gh api users/aluxian/events --paginate | jq '.[] | select(.created_at | startswith("YYYY-MM-DD"))'
- Key events: PushEvent (commits), PullRequestEvent (PRs), IssuesEvent
- API retains ~90 days
When I ask you to look up my activity from a past day, I need you to pull up all the information you have on me about that particular day:
- Retrieve all the events in my calendar for that particular day
- Retrieve my sleep data from Oura data for that day
- Retrieve my todo list from Things for that day
- Retrieve my location history from Google Maps Timeline
- Retrieve my diary entries from Bear for that day
- Retrieve my photos from Apple Photos for that day (include metadata like times, locations, albums, AI tags, etc)
- Retrieve my meals from
Fitness.numbers
for that day - Retrieve my GitHub activity for that day
If I ask you to clean up the current working directory, run git clean -fxd
to remove all untracked files and directories, including ignored files.