[For Beginners] Getting Started with Personal Knowledge Management Using Claude Code × Obsidian × Vertex AI

2025.07.10

Introduction

I am kasama from the Data Business Division.
I saw the following article and thought the idea was really good, so I customized the settings in my own way, used it for a week, and wanted to record my experience.
(I'm a beginner with both Obsidian and Claude Code, so if there are better, simpler methods, please let me know...)

https://www.m3tech.blog/entry/2025/06/29/110000

Prerequisites

I wanted to transcribe mp3/mp4 recordings of my private English lessons to get feedback, and also transcribe regular meeting videos into minutes, so I created a system that uses Vertex AI for transcription and summarization. The diagram is as follows:

.claude/format-md.sh is executed using Claude Code's hooks feature. It formats markdown files using prettier when the file extension is md.
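To see what the hook actually receives: Claude Code pipes a JSON payload to the hook's stdin, and `jq -r '.tool_input.file_path'` extracts the edited file's path from it. A minimal Python sketch of the same extraction (the payload values here are illustrative, not a full Claude Code payload):

```python
import json

# Illustrative PostToolUse payload; the real payload from Claude Code carries
# more fields, but format-md.sh only needs tool_input.file_path.
payload = '{"tool_input": {"file_path": "02_Inbox/note.md"}}'

# Equivalent of: jq -r '.tool_input.file_path'
file_path = json.loads(payload)["tool_input"]["file_path"]
print(file_path)  # 02_Inbox/note.md
```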

.claude/format-md.sh

```bash
#!/bin/bash
FILE_PATH=$(jq -r '.tool_input.file_path')
if [[ "$FILE_PATH" == *.md ]]; then
    echo "📝 Formatting markdown file: $FILE_PATH"
    if [ -f node_modules/.bin/prettier ]; then
        npx prettier --write "$FILE_PATH" && echo "✅ Prettier formatting completed for $FILE_PATH" || echo "❌ Prettier formatting failed for $FILE_PATH"
    elif command -v prettier >/dev/null 2>&1; then
        prettier --write "$FILE_PATH" && echo "✅ Prettier formatting completed for $FILE_PATH" || echo "❌ Prettier formatting failed for $FILE_PATH"
    else
        echo "⚠️  Warning: prettier not found, skipping formatting for $FILE_PATH"
    fi
fi
```

.claude/settings.json configures the basic allowed and denied commands, as well as hooks that call the formatting script and play notification sounds.

.claude/settings.json

```json
{
  "env": {
    "TF_LOG": "WARN",
    "CLAUDE_CODE_ENABLE_TELEMETRY": "0",
    "BASH_DEFAULT_TIMEOUT_MS": "120000"
  },
  "permissions": {
    "allow": [
      "Bash(ls ./)",
      "Bash(ls ./*)",
      "Bash(cat ./*)",
      "Bash(grep * ./)",
      "Bash(rg * ./)",
      "Bash(find ./)",
      "Bash(tree ./)",
      "Bash(head ./*)",
      "Bash(tail ./*)",
      "Bash(echo *)",
      "Bash(pwd)",
      "Bash(cd ./)",
      "Bash(mkdir ./)",
      "Bash(cp ./* ./)",
      "Bash(mv ./* ./)",
      "Bash(touch ./)",
      "Bash(which *)",
      "Bash(env)",
      "Bash(whoami)",
      "Bash(date)",
      "Read(./**)",
      "Edit(./**)",
      "Grep(./**)",
      "Glob(./**)",
      "LS(./**)",
      "Write(./**)",
      "MultiEdit(./**)",
      "TodoRead(**)",
      "TodoWrite(**)",
      "Task(**)"
    ],
    "deny": [
      "Bash(rm -rf*)",
      "Bash(rm /*)",
      "Bash(cp /* *)",
      "Bash(cp * /*)",
      "Bash(mv /* *)",
      "Bash(mv * /*)",
      "Bash(mkdir /*)",
      "Bash(sudo*)",
      "Write(.git/**)"
    ]
  },
  "enabledMcpjsonServers": [],
  "disabledMcpjsonServers": [],
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit|MultiEdit",
        "hooks": [
          {
            "type": "command",
            "command": "./.claude/format-md.sh"
          }
        ]
      }
    ],
    "Notification": [
      {
        "matcher": "",
        "hooks": [
          {
            "type": "command",
            "command": "afplay /System/Library/Sounds/Funk.aiff"
          }
        ]
      }
    ],
    "Stop": [
      {
        "matcher": "",
        "hooks": [
          {
            "type": "command",
            "command": "afplay /System/Library/Sounds/Funk.aiff"
          }
        ]
      }
    ]
  }
}
```

I created the Obsidian folder structure by referencing the following article.
The folder structure is set up quite broadly, so it's likely that I'll modify it as I use the system.
https://zenn.dev/game8_blog/articles/0e50c36cd63b98

```bash
├── 00_Configs
│   ├── Extra → Image file storage location
│   └── Templates → Template file storage location
│       └── Daily.md
├── 01_Daily → Daily Note storage location
├── 02_Inbox → Memo storage location
│   └── 雑メモ.md
├── 03_eng_study → English learning notes storage location
└── 04_Meetings → Meeting minutes storage location
```

I'm using the following Python script for transcribing mp4/mp3 files.

audio_video_to_text/audio_video_to_text.py

```python
import os
import logging
import vertexai
from vertexai.generative_models import GenerativeModel, Part
from dotenv import load_dotenv
import ffmpeg

# Loading .env file
load_dotenv()

# ---------- Environment Variables ----------
PROJECT_ID = os.getenv("PROJECT_ID")
REGION = os.getenv("REGION")
FILE_NAME = os.getenv("FILE_NAME")  # Example: "meeting_audio.mp4" or "meeting_audio.mp3"
OUTPUT_DIR = "output"  # Output destination
MODEL_NAME = "gemini-2.5-pro"
GOOGLE_APPLICATION_CREDENTIALS = os.getenv("GOOGLE_APPLICATION_CREDENTIALS")
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = GOOGLE_APPLICATION_CREDENTIALS

# ---------- Logging ----------
logging.basicConfig(level=logging.INFO, format="%(asctime)s - %(message)s")
logger = logging.getLogger(__name__)

# Initialize Vertex AI
vertexai.init(project=PROJECT_ID, location=REGION)

def convert_mp4_to_mp3(mp4_path: str, mp3_path: str) -> None:
    """Convert MP4 file to MP3"""
    logger.info("Starting conversion from MP4 to MP3: %s -> %s", mp4_path, mp3_path)
    try:
        (
            ffmpeg.input(mp4_path)
            .output(mp3_path)
            .global_args("-loglevel", "quiet")
            .run(overwrite_output=True)
        )
        logger.info("MP4 to MP3 conversion completed")
    except Exception as e:
        logger.error("Error occurred during MP4 conversion: %s", e)
        raise

def transcribe_audio(audio_path: str) -> str:
    """Transcribe audio file to text"""
    logger.info("Starting transcription: %s", audio_path)

    model = GenerativeModel(MODEL_NAME)

    # Determine MIME type based on file extension
    if audio_path.lower().endswith('.mp4'):
        mime_type = "video/mp4"
    elif audio_path.lower().endswith('.mp3'):
        mime_type = "audio/mp3"
    else:
        raise ValueError(f"Unsupported file format: {audio_path}")

    with open(audio_path, "rb") as f:
        audio_part = Part.from_data(f.read(), mime_type=mime_type)

    prompt = (
        "Please transcribe the following audio.\n"
        "1. Start a new line whenever the speaker changes.\n"
        "2. Add punctuation where possible.\n"
    )

    response = model.generate_content([audio_part, prompt])
    logger.info("Transcription completed")
    return response.text

if __name__ == "__main__":
    try:
        # Separate filename and extension
        input_file_path = os.path.join("input", FILE_NAME)
        file_name_without_ext, file_extension = os.path.splitext(FILE_NAME)

        # Check if input file exists
        if not os.path.exists(input_file_path):
            raise FileNotFoundError(f"Input file not found: {input_file_path}")

        audio_file = None
        temp_mp3_file = None

        if file_extension.lower() == ".mp4":
            # For MP4 files, convert to MP3
            temp_mp3_file = os.path.join("input", f"{file_name_without_ext}_converted.mp3")
            convert_mp4_to_mp3(input_file_path, temp_mp3_file)
            audio_file = temp_mp3_file
        elif file_extension.lower() == ".mp3":
            # For MP3 files, use as is
            audio_file = input_file_path
        else:
            raise ValueError(f"Unsupported file format: {file_extension}")

        # Execute transcription
        transcript = transcribe_audio(audio_file)

        # Save output (using filename without extension)
        os.makedirs(OUTPUT_DIR, exist_ok=True)
        out_path = os.path.join(OUTPUT_DIR, f"{file_name_without_ext}_transcript.txt")
        with open(out_path, "w", encoding="utf-8") as f:
            f.write(transcript)
        logger.info("Transcription text saved: %s", out_path)

        # Delete temporary file
        if temp_mp3_file and os.path.exists(temp_mp3_file):
            os.remove(temp_mp3_file)
            logger.info("Temporary file deleted: %s", temp_mp3_file)

    except Exception as e:
        logger.exception("Error occurred during processing: %s", e)
        raise
```
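The script hard-codes the extension-to-MIME mapping; Python's standard mimetypes module offers a more general alternative. This is a sketch, not part of the original script (note that guess_type maps .mp3 to audio/mpeg rather than the audio/mp3 string used above):

```python
import mimetypes

def guess_media_mime(path: str) -> str:
    """Guess an audio/video MIME type from the file extension,
    as a general alternative to hard-coded .mp3/.mp4 branches."""
    mime, _ = mimetypes.guess_type(path)
    if mime is None or not mime.startswith(("audio/", "video/")):
        raise ValueError(f"Unsupported file format: {path}")
    return mime

print(guess_media_mime("meeting.mp4"))  # video/mp4
print(guess_media_mime("lesson.mp3"))   # audio/mpeg
```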

`.prettierrc` is used to set up automatic formatting configuration for markdown files.

```txt: .prettierrc
{
  "tabWidth": 4,
  "useTabs": false,
  "proseWrap": "preserve",
  "printWidth": 120,
  "endOfLine": "lf"
}
```

Setup

Obsidian

Let's proceed with setup.
First, download the repository I shared earlier to your local machine using git clone.

Next, if you haven't set up Obsidian yet, please install it. There are various methods for installing Obsidian available on the web, so I'll omit that here.

https://qiita.com/hann-solo/items/22bcaa81b695ddb47238

Once installed, launch Obsidian and from the following screen, select "Open folder as vault" and open the folder you just cloned.

Screenshot 2025-07-09 at 21.43.08

After opening it, in the settings screen, specify the templates and note creation location highlighted in the red frame.

Files & Links
Screenshot 2025-07-09 at 21.46.47
Daily Notes
Screenshot 2025-07-09 at 21.51.08
Templates
Screenshot 2025-07-09 at 21.51.55

Currently, I'm proceeding with these minimal settings. I do feel like I want to research and do more, though.

Vertex AI

Next, set up to run scripts using Vertex AI.

For enabling Vertex AI, please refer to the Google Cloud Setup section in the blog below:
https://dev.classmethod.jp/articles/generating-meeting-minutes-from-mp4-with-vertex-ai-gemini/

Python Environment

Next, we'll install uv for Python version management, virtual environments, and package management.

```bash
# macOS/Linux
curl -LsSf https://astral.sh/uv/install.sh | sh

# Windows
powershell -c "irm https://astral.sh/uv/install.ps1 | iex"
```

After installation, navigate to the project directory, create a uv virtual environment and install dependencies.
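For reference, requirements.txt presumably lists something like the following; the package names are inferred from the script's imports (the vertexai module ships in google-cloud-aiplatform, dotenv in python-dotenv, and ffmpeg in ffmpeg-python), so check the repository's actual file:

```txt
google-cloud-aiplatform
python-dotenv
ffmpeg-python
```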

```bash
# Navigate to project directory
cd audio_video_to_text

# Create virtual environment (automatically installs Python 3.13)
uv venv --python 3.13

# Activate virtual environment
# macOS/Linux
source .venv/bin/activate
# Windows
.venv\Scripts\activate

# Install dependencies
uv pip install -r requirements.txt
```

Create a .env file in the audio_video_to_text/ directory. For FILE_NAME, specify the input file name with extension. Include the other Google Cloud configuration values as well.
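One pitfall: the script assigns GOOGLE_APPLICATION_CREDENTIALS straight into os.environ, so a variable missing from .env surfaces as an unhelpful TypeError. A small up-front check you could add (a sketch, not part of the repository; the variable names come from the script):

```python
# Variables the script reads from .env.
REQUIRED_VARS = ["PROJECT_ID", "REGION", "GOOGLE_APPLICATION_CREDENTIALS", "FILE_NAME"]

def missing_env(env) -> list:
    """Return the required variable names that are missing or empty."""
    return [name for name in REQUIRED_VARS if not env.get(name)]

# Example with an incomplete configuration:
print(missing_env({"PROJECT_ID": "my-project"}))
# In the script itself you would call missing_env(os.environ) right after load_dotenv().
```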

```txt: .env
PROJECT_ID=your-gcp-project-id
REGION=your-region
GOOGLE_APPLICATION_CREDENTIALS=path/to/your/service-account-key.json
FILE_NAME=eng_record.mp3
```

Claude Code

For Claude Code, please refer to the official setup instructions available here:
https://docs.anthropic.com/en/docs/claude-code/setup

After setting up, run npm install to install Prettier from package.json, since the hooks use it.

```bash
@obsidian-claude-feedback-sample % npm install

added 1 package, and audited 2 packages in 706ms

1 package is looking for funding
  run `npm fund` for details

found 0 vulnerabilities
```

You can confirm the installation using npm list:

```bash
@obsidian-claude-feedback-sample % npm list
obsidian-claude-feedback-sample@1.0.0 /git/obsidian-claude-feedback-sample
└── prettier@3.6.2
```

For hooks, you can verify that the settings are applied by using the Claude command /hooks.

Screenshot 2025-07-09 at 22.27.50
Screenshot 2025-07-09 at 22.28.03

Using it in Practice

Now, let's actually try using it.

Place the MP3 or MP4 file you want to convert in the audio_video_to_text/input/ folder and set FILE_NAME in the .env file to the filename (with extension).
Make sure the virtual environment we created earlier is activated and run the script.

```bash
cd audio_video_to_text
python audio_video_to_text.py
```

The file was about 30 minutes long, but it finished in just over a minute.

```bash
@audio_video_to_text % python audio_video_to_text.py
2025-07-09 22:41:02,151 - Transcription started: input/eng_record.mp3
2025-07-09 22:42:15,686 - Transcription completed
2025-07-09 22:42:15,688 - Transcription text saved: output/eng_record_transcript.txt
```

I'll ask Claude to create feedback based on the generated txt file.

To verify that Prettier is running, I'll run in debug mode with claude --debug.

I'll paste the path to the transcribed file in the recording text section of .claude/prompt/english_lesson.md and pass it to Claude Code.

Screenshot 2025-07-10 at 7.28.02
Screenshot 2025-07-10 at 7.18.55

I confirmed in the DEBUG logs that the hooks ran and Prettier was activated.

Screenshot 2025-07-10 at 7.19.41

Completed.

Screenshot 2025-07-10 at 7.20.31

Although the date in the feedback file is somehow January, I received appropriate feedback.

Screenshot 2025-07-10 at 7.21.12

For meeting minutes, the process is almost the same except for different prompts, so please try it yourself.
For mp4 files, you could also use Gemini Web to transcribe the text (I implemented the script because it didn't support mp3 files...).

Final Thoughts

Since I just started managing my personal knowledge base a week ago, I plan to improve the structure daily, so please consider this as just a reference.
