From DeepSeek Copy-Paste to Claude Code

Introduction
In March 2025, the HotelByte project officially launched. As a hotel API distribution platform, we needed to rapidly build complex backend systems, management backends, and frontend applications. At the start of the project, the team faced significant development pressure and tight delivery timelines.
This article describes our journey from initial "copy-paste coding" with DeepSeek to a complete AI coding system: the problems we hit, the challenges along the way, and the solutions that ultimately worked.
Phase 1: DeepSeek Copy-Paste Coding
Initial Scenario
When the project launched, we faced several practical issues:
- Insufficient personnel: Small team size, but complex functionality required
- Tight timeline: Need to deliver MVP in a short time
- Diverse tech stack: Backend Go, frontend Vue3/React, database design, etc.
Under these circumstances, we decided to try AI-assisted development. DeepSeek was our first choice because:
- Relatively low cost
- Good Chinese language support
- Certain code generation capabilities
Typical Workflow
Our initial AI coding workflow looked like this:
1. Describe requirements in DeepSeek chat interface
"Help me write a user login Go function"
2. Copy generated code
```go
func UserLogin(username, password string) error {
    // Generated code
}
```
3. Paste into IDE
4. Manually modify to fit project structure
5. Test (mostly fails)
6. Modify → Test → Modify again (loop)
Typical Code Example
This was a user login function generated by DeepSeek at that time:
```go
// Original code generated by DeepSeek (many issues)
func UserLogin(db *sql.DB, username, password string) (int, error) {
    var userID int
    var storedPassword string

    query := "SELECT id, password FROM users WHERE username = ?"
    row := db.QueryRow(query, username)

    err := row.Scan(&userID, &storedPassword)
    if err != nil {
        if err == sql.ErrNoRows {
            return 0, errors.New("user not found")
        }
        return 0, err
    }

    // Plain-text password comparison (security risk)
    if password != storedPassword {
        return 0, errors.New("invalid password")
    }

    return userID, nil
}
```
Issue Checklist:
- ❌ Directly uses `sql.DB` (violates project standards)
- ❌ Plain-text password comparison (security risk)
- ❌ Non-standard error handling
- ❌ Missing `context.Context` (doesn't align with the project's DDD architecture)
- ❌ Doesn't use the project's logging and utility functions
Main Challenges
1. Lack of Context
DeepSeek couldn’t understand our project structure, coding standards, and business logic. Each generation required extensive manual modification.
Actual Problem Example:
```go
// ❌ Generated by DeepSeek: doesn't follow project naming conventions
func GetUserInfo(userId int) (*User, error) {
    // ...
}

// ✅ Project standard: CamelCase, semantic naming, context first
func GetEntity(ctx context.Context, entityID int64) (*Entity, error) {
    // ...
}
```
2. Non-Compliant Coding Standards
The project has strict coding standards (see CLAUDE.md and .github/code_review_rules.md), but code generated by DeepSeek required modification on almost every line:
```go
// ❌ Generated by DeepSeek
fmt.Printf("User logged in: %s\n", username) // fmt.Printf is forbidden

// ✅ Project standard
log.Info("User logged in", log.Field("username", username)) // use the unified logger
```
3. Test Coverage Issues
DeepSeek rarely generated test code, forcing us to spend significant time writing unit tests and E2E tests.
4. Repetitive Work
Each time, we had to repeat project context:
- “We use go-zero framework”
- “Logging uses hotel/common/log”
- “ID generation uses idgen.GenID()”
- “Forbidden to use json.Marshal, use utils.ToJSON”
Phase 2: Structured Attempts
Identifying Problems
After several months of practice, we realized: AI coding without project context is inefficient.
We needed:
- Context Injection: Make AI understand project specifications
- Structured Process: Standardize AI-assisted development workflow
- Quality Assurance: Ensure generated code meets standards
Initial Improvements
We began manually organizing project specification documents and tried to provide more detailed context before each request:
```markdown
# Project Context Template

## Tech Stack
- Backend: Go 1.25.6, go-zero, MySQL, Redis
- Frontend: Vue 3, React 18, TypeScript

## Coding Standards
- Logging: hotel/common/log
- JSON: utils.ToJSON/FromJSON*
- ID: idgen.GenID()
- Testing: mockey + goconvey

## Architecture
- DDD: domain → protocol/mysql → service
- Routing: httpdispatcher
```
But this approach was still inefficient, requiring manual input of this context each time.
Phase 3: Transition to Claude Code
Why Choose Claude Code
After evaluating multiple AI coding tools, we chose Claude Code for the following reasons:
| Feature | DeepSeek | Claude Code |
|---|---|---|
| Project context understanding | ❌ Weak | ✅ Strong |
| File system access | ❌ None | ✅ Native support |
| Rules system integration | ❌ None | ✅ .cursor/ directory |
| Multi-model switching | ❌ Fixed | ✅ Supported |
| Test generation | ❌ Poor | ✅ Good |
| Cost | ✅ Low | ⚠️ Medium |
Core Advantages
Claude Code allows us to manage AI coding configuration through the .cursor/ directory structure:
- Define project rules: `.cursor/rules/`
- Configure skills: `.cursor/skills/`
- Custom commands: `.cursor/commands/`
- Team configuration: `.cursor/team.json`
This enables AI to “understand” our project, not just generate code.
Phase 4: Establishing a Complete System
.claude/ Directory Structure
We created the .claude/ directory in the project to manage AI coding configuration:
```
.claude/
├── agents/                    # AI agent definitions
│   ├── hotel-api-architect.json
│   ├── golang-tech-lead.json
│   ├── frontend-ux-expert.json
│   └── team-coordinator.json
├── commands/                  # Custom commands
│   ├── openspec/proposal.md
│   ├── openspec/apply.md
│   ├── openspec/archive.md
│   └── speckit.plan.md
├── skills/                    # Skill definitions
│   ├── e2e-test-design.md
│   └── troubleshoot-uat-network-and-git.md
└── settings.json              # Claude Code settings
```
CLAUDE.md Core Rules
We created the CLAUDE.md file to define core rules for AI coding assistants:
```markdown
# AI Programming Assistant Unified Rules

## Core Requirements

### Completion Definition 🎯
**Requirement completion = functional code + unit tests (UT) + E2E tests, all passing!**

### Test Coverage Requirements 🧪
- PR mandatory check: incremental code test coverage ≥ 50%
- domain/: 100% coverage
- mysql/: 80%+ coverage
- service/: 70%+ coverage
```
OpenSpec Workflow Integration
We introduced OpenSpec spec-driven development workflow:
Proposal → Spec → Implementation → Archive
This ensures every feature change has clear specifications and implementation standards.
Results Comparison
Efficiency Improvement
| Metric | Using DeepSeek | Using Claude Code + OpenSpec |
|---|---|---|
| Feature development time | 3-5 days | 1-2 days |
| Test coverage | 30-40% | 60-70% |
| Code review pass rate | 40% | 85% |
| Bug rate (post-launch) | 15% | 5% |
Code Quality
Previous Code (DeepSeek Generated):
```go
// ❌ Problematic code
func ProcessOrder(order *Order) error {
    if order == nil {
        return errors.New("order is nil")
    }
    // Directly returns nil: no real processing, no error handling
    return nil
}
```
Current Code (Claude Code + Standards):
```go
// ✅ Standards-compliant code
func (s *OrderService) ProcessOrder(ctx context.Context, req *protocol.ProcessOrderRequest) (*protocol.ProcessOrderResponse, error) {
    // 1. Parameter validation
    if req == nil {
        return nil, errors.New("request is nil")
    }

    // 2. Call domain logic
    order, err := s.domain.ProcessOrder(ctx, req)
    if err != nil {
        log.Error("process order failed", log.Field("error", err))
        return nil, err
    }

    // 3. Convert to the protocol layer
    resp := convert.ToProcessOrderResponse(order)
    return resp, nil
}
```
Lessons Learned
Key Lessons
1. Context is more important than code
   - ❌ Letting the AI generate code without understanding the project
   - ✅ First making the AI understand the project's specifications and architecture
2. Process is more important than tools
   - ❌ Relying on AI "magic"
   - ✅ Establishing a standardized development process
3. Testing is essential
   - ❌ Manually testing after the AI generates code
   - ✅ Requiring the AI to generate tests alongside the code
4. Quality cannot be compromised
   - ❌ Lowering quality standards for speed
   - ✅ Strictly enforcing test coverage requirements
Next Steps
- Improve .cursor/ skill library
- Expand AI agent roles
- Optimize multi-model switching strategy
- Establish automated CI/CD checks
Series Navigation
This is the first article in this series. The complete series includes:
- From DeepSeek Copy-Paste to Claude Code ✅ (This article)
- Deep Claude Code Integration
- Multi-Model and Toolchain Integration
- OpenSpec-Driven Development
- AI Coding Best Practices