Notes on getting started in vibe-coding with Cursor…

Prep

  1. Clone Ryan Carson’s AI Dev Tasks repo as described in How I AI: A 3-step AI coding workflow
    • also copy relevant cursor.directory rules from https://cursor.directory/rules/spring-boot into the local project.
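
    A sketch of the clone step (the exact URL is in the article; the placeholder below follows the <username> convention used later in these notes):

     # clone the AI Dev Tasks repo alongside (not inside) the project
     $ git clone https://github.com/<username>/ai-dev-tasks.git
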
  2. Create a Spring Initializr project with the required dependencies
    • Get the build to succeed (add spring.datasource.url and other required properties)
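
    For example, a minimal datasource block in src/main/resources/application.properties (property names are standard Spring Boot keys; the values are illustrative and assume a local PostgreSQL database):

     # src/main/resources/application.properties
     spring.datasource.url=jdbc:postgresql://localhost:5432/<projectname>
     spring.datasource.username=<username>
     spring.datasource.password=<password>
     spring.jpa.hibernate.ddl-auto=update
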
  3. Create a new folder called rules in the project
    • Copy the create-prd.md, generate-tasks.md, process-task-list.md task files from AI Dev Tasks into the project
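
    Something like the following, assuming the cloned ai-dev-tasks repo sits next to the project:

     # in project root directory
     $ mkdir rules
     $ cp ../ai-dev-tasks/create-prd.md ../ai-dev-tasks/generate-tasks.md ../ai-dev-tasks/process-task-list.md rules/
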
  4. Initialize local Git repo and commit

     # in project root directory 
     $ git init; git add .; git commit -m "Initial commit"
    
  5. Create an online (remote) repo and link it to the local one
$ git remote add origin https://github.com/<username>/<projectname>.git

$ git branch  # check if branch is master

$ git branch -M main # rename the branch to main (the default moved from master to main in the wake of Black Lives Matter)
$ git push -u origin main  # `-u` sets the upstream for future `git push` and `git pull`
  6. Activate Cursor
    • install IDE, create account, log in to IDE

Step 1: Create a PRD

  1. In the Cursor Agent chat window, type:
Use @create-prd.md
Here's the feature I want to build: [Describe your feature in detail]
  2. Answer the clarifying questions to generate the PRD

Step 2: Use the PRD to generate the tasks

Now take @MyFeature-PRD.md and create tasks using @generate-tasks.md

Step 3: Instruct the AI to work through the tasks

Please start on task 1.1 and use @process-task-list.md

(The three steps above are direct lifts from Ryan Carson’s repo.)


Claude Code (anthropic.com/claude-code): a command-line tool that:

  • runs in the terminal and integrates with VS Code and JetBrains IDEs
  • is optimized for code understanding and generation with Claude Opus 4.1
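
Install and launch sketch (the npm package name below is Anthropic’s published one at the time of writing; requires Node.js):

$ npm install -g @anthropic-ai/claude-code   # global install
$ cd <projectname> && claude                 # start an interactive session in the project root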

Google

  • Gemini: reasoning models
  • Gemma 3n models: designed for efficient execution on low-resource devices (knowledge cutoff date for the training data was June 2024)
    • Mobile devices and laptops

CodeGemma is released in Pretrained (PT) and Instruction-Tuned (IT) variants.

  • CodeGemma PT (Pretrained): 

    for code completion within an existing code structure, or for generating code based on surrounding code snippets (aka code prefixes and/or suffixes). Trained on fill-in-the-middle (FIM) tasks, for scenarios where you have partial code and need the model to complete it intelligently from context. Useful for IDE extensions and similar tools that accelerate coding.

  • CodeGemma IT (Instruction-tuned): 

    for generating code from natural language descriptions, building conversational coding interfaces, or following complex instructions (natural-language-to-code chat and instruction following). Fine-tuned to understand and respond to human instructions, making it suitable for tasks like answering questions about code or powering conversational coding assistants.
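
To make the PT fill-in-the-middle behaviour concrete, a sketch using a locally pulled CodeGemma model via Ollama (the model tag and the FIM control tokens follow CodeGemma’s documentation, but check the model card for your version):

$ ollama run codegemma:2b '<|fim_prefix|>public int add(int a, int b) {<|fim_suffix|>}<|fim_middle|>'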

Gemma 3n models use selective parameter activation technology to reduce resource requirements, allowing them to operate at an effective size of 2B or 4B parameters, lower than the total number of parameters they contain.

Downloading the Gemma 3n e4b model from Ollama: ollama run gemma3n
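
To pick a variant explicitly (tag names assume Ollama’s current gemma3n listing):

$ ollama run gemma3n:e2b   # effective 2B parameters
$ ollama run gemma3n:e4b   # effective 4B parameters (the default tag)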


Working in offline mode

Install a Local Runtime: to run Large Language Models (LLMs) locally, you can use

  • Ollama: runs LLMs on your computer (can plug Gemma models in)
  • Hugging Face Transformers: a library to load and run models.
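
A sketch of the Ollama route: pull the model once while online; after that, prompts are served entirely from the local machine:

$ ollama pull gemma3n                                   # one-time download while online
$ ollama run gemma3n "Explain this stack trace: ..."    # runs fully offline afterwards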

The imperative to work in offline mode

  • secure work environment – code / corporate data can’t leave the network
  • no Internet

Agent mode: AI code generation where you direct the assistant to:

  • create new files & directories
  • implement features across multiple files
  • execute terminal commands
  • refactor code
  • work out complex implementations

When you run Agent mode locally on your machine with Ollama, responses are free of network latency and API rate limits.
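
You can confirm the model is being served locally (and that no external API is involved) with Ollama’s HTTP API; the model name matches the one pulled above and the prompt is illustrative:

$ curl http://localhost:11434/api/generate \
    -d '{"model": "gemma3n", "prompt": "Write a JUnit 5 test skeleton for UserService", "stream": false}'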

Guardrails & preparation: prepare PRDs (Product Requirements Documents) before touching any code.

PRDs are simple markdown files that outline:

  • The specific problem you are solving
  • Success metrics for the implementation
  • Clear boundaries (anything else becomes a new issue)
  • Step-by-step tasks broken into bite-sized chunks
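
A minimal markdown skeleton along those lines (saved as, say, MyFeature-PRD.md; the headings are illustrative, not a prescribed template):

     # Feature: <short name>
     ## Problem
     One paragraph on the specific problem being solved.
     ## Success metrics
     ## Out of scope (anything else becomes a new issue)
     ## Tasks
     - [ ] 1.1 ...
     - [ ] 1.2 ...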

Add rules: Rules are pre-emptive additions to your system prompt that teach the AI how your team writes code, cuing it on your preferred approach.

For example, rules to:

  • Specify that we use JUnit & Mockito for testing
  • Define our file structure conventions
  • Outline error handling patterns
  • Set code style preferences

They prevent the agent from making assumptions that would require internet lookups or external dependencies.
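
For instance, a rule file along these lines (location and wording are illustrative; put it wherever your setup reads rules from, e.g. the rules folder created earlier or Cursor’s rules settings):

     # rules/spring-boot-conventions.md
     - Use JUnit 5 and Mockito for all tests; mirror the main package structure under src/test/java.
     - Keep controllers thin; business logic lives in @Service classes, persistence in Spring Data repositories.
     - Handle errors with @ControllerAdvice and return RFC 7807 ProblemDetail responses.
     - Follow the existing code style: 4-space indentation, no wildcard imports, constructor injection only.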

Reference: https://blog.continue.dev/go-offline-with-context-engineering/