SKILL.md

name: languaging
description: Use when writing or rewriting ANY content for GeoVerity (homepage, services, insights, contact pages) before generating text - establishes mandatory register stratification with plain professional language (B2-C1) for service pages targeting administrators/project managers, and academic register (C1-C2) for Insights journal posts only

Languaging: GeoVerity Register Stratification Framework

MANDATORY: Examine this skill before ALL writing and rewriting attempts.

Overview

Register stratification framework for GeoVerity content surfaces, grounded in systemic functional linguistics (Halliday) and audience design theory (Bell).

Core Principle: Register selection must be audience-driven and genre-appropriate, not uniformly applied.

Critical Rule: Service pages (Homepage, Services Hub, Pillar/Spoke pages) use plain professional language (B2-C1 CEFR) for administrators and project managers. Academic register (C1-C2) is ONLY for Insights Journal posts.


What "Languaging" Means in This Skill (Do Not Skip)

Operational Definition: Languaging is the active process of using language to do things — to build meaning with an audience, to frame expertise, to guide decisions, and to position yourself socially. It treats language as action, not just text. Under this view (Swain; Halliday; Vygotsky), wording is never neutral: every lexical choice, sentence shape, and tone choice performs work. It establishes authority, clarifies a problem, signals who the intended reader is, and marks who "belongs" in the conversation.

Why This Matters for GeoVerity: For GeoVerity, languaging is how we align voice, cognitive load, and social positioning with each audience segment. We do not just "write copy." We actively produce a social relationship with administrators, project managers, or researchers through register. That's why a Services page and an Insights Post cannot share the same style — because they are performing different social actions.

1. Languaging as Action (What are we doing with this text?)

In practice: Every page has a job. The language on that page must directly serve that job.

Homepage / Services Page Job

Goal: Reduce friction, build trust fast, and move a decision-maker toward contact.

Example (GOOD for Services):

"We evaluate your AI models across 120+ languages and tell you where they're risky."

Action performed: Reassures an administrator that we solve a concrete operational problem.

Example (BAD for Services):

"The epistemic instability introduced by multilingual generative systems demands institutional recalibration."

Action performed: Signals academic debate, not operational support. Violates B2-C1 service register.

Insights Post Job

Goal: Contribute to an expert conversation and shape thought leadership.

Example (GOOD for Insights Post):

"While multilingual LLMs expand institutional reach, they also destabilize traditional assumptions about authorship, assessment, and evidence. Institutions cannot outsource those judgments to detectors alone."

Action performed: Argues, theorizes, reframes policy. This belongs in C1-C2 Insights, not Services.

Enforcement Hook: Ask: "What is this page trying to cause in the reader?"

  • If the answer is "book a call / request support" → Use Services register (B2-C1)
  • If the answer is "rethink a policy or framework" → Use Insights Post register (C1-C2)

2. Languaging as Meaning-Making (Are we co-constructing understanding?)

Languaging assumes meaning is negotiated with the audience, not dumped on them. We must "speak in the reader's world first," then layer complexity only if allowed by that surface's register.

Services Page Pattern (B2-C1)

Structure:

  1. State the reader's problem in their terms
  2. Name what we do
  3. State the outcome

Example:

"Your faculty are worried about AI-written student work. We build integrity frameworks that acknowledge where AI is actually being used, instead of pretending it isn't. That lets you update policy without starting an arms race around detection tools."

Why this is compliant:

  • Plain professional language
  • Short clauses, main-clause first
  • Directly names their world ("your faculty," "AI-written student work")

Insights Post Pattern (C1-C2)

We are allowed to interrogate assumptions and use theoretical framing.

Example:

"Faculty anxiety around AI text is not only about plagiarism; it reflects a deeper loss of epistemic trust in the act of submitting written work as evidence of learning."

Why this is compliant:

  • Concept-building, not service positioning
  • High lexical density ("epistemic trust," "evidence of learning")
  • Acceptable only in Insights

Enforcement Hook:

  • If the copy frames a shared operational scenario and walks the reader toward an outcome → Service register
  • If the copy problematizes concepts and reframes the discourse → Insights register

3. Languaging as Identity Work (Who are we telling the reader they are?)

Languaging marks identity. The same company can position the reader as a decision-maker ("You set policy") or as a peer in a research community ("We, as a field, need to reconsider…"). That shift in identity is not allowed to drift across surfaces.

Services Page Identity Move

Example:

"You are the person responsible for protecting academic integrity at your institution. We give you a defensible policy you can stand behind when you're challenged."

Reader position: Accountable leader who needs immediate, defensible, auditable solutions. Register: B2-C1 (correct for administrators/PMs)

Insights Post Identity Move

Example:

"Our current models of authorship assume linguistic stability, but multilingual fine-tuning has already eroded that assumption."

Reader position: Co-analyst of system-level change. Register: C1-C2 (scholarly stance, must not leak into Services)

Enforcement Hook: Ask: "Am I speaking to them as an operations owner, or as a fellow theorist?"

  • Operations owner → Services register (B2-C1)
  • Fellow theorist → Insights register only (C1-C2)

4. Languaging as Cognitive Tool (Are we letting the reader think through the problem?)

In applied linguistics, "languaging" refers to using language to think through complex problems (externalized reasoning, self-explanation). That maps to two different behaviors in GeoVerity:

On a Services Page: We do the reasoning for them so they don't have to

Example (GOOD for Services):

"AI detectors generate false accusations. We train faculty to assess process, not just output, so you reduce conflict and keep trust in the classroom."

Why compliant: We surface the reasoning as a finished, usable policy move.

Anti-example (BAD for Services):

"Because authorship verification remains epistemically unstable in multilingual assessment contexts, universities must reconceptualize evaluation itself."

Why wrong: This forces them to theorize. That belongs in an Insights Post.

In an Insights Post: We invite them into the reasoning process

Example (GOOD for Insights):

"If we admit that authorship can no longer be verified by output inspection alone, then assessment must shift toward supervised process evidence — drafting traces, revision logs, oral defenses. That shift has legal and labor implications."

Enforcement Hook:

  • If you're making them think through institutional redesign → Insights mode (C1-C2)
  • If you're promising them an implementable fix → Services mode (B2-C1)

5. Languaging as Multimodal Meaning (How do we handle examples, definitions, and jargon density per surface?)

Languaging is not just about words; it's about how explanation, definition, and framing are delivered for the specific audience.

Services (B2-C1)

We are allowed to use technical terms like "LLM fine-tuning," "governance," "integrity review," but we immediately ground them in effect.

Example:

"We audit your LLM fine-tuning pipeline to surface bias before deployment."

We do not unpack theory unless it supports a decision.

Insights Posts (C1-C2)

We are allowed (and expected) to introduce theoretical constructs without immediate operationalization.

Example:

"Institutional trust in assessment cannot survive if epistemic warrant is outsourced to probabilistic detectors."

This matches the existing register table on lexical density, passive voice tolerance, hedging, and nominalization: those aren't just style preferences; they are different languaging acts.

Enforcement Hook:

  • If you define a term in plain language and tie it to an outcome → Services
  • If you elaborate a term to situate it in a field-level debate → Insights

When to Use

Triggering Conditions - Use this skill when:

  • ✅ Writing homepage hero copy
  • ✅ Creating service descriptions (pillar/spoke pages)
  • ✅ Drafting Insights journal posts
  • ✅ Writing contact page microcopy
  • ✅ Reviewing or editing ANY user-facing text
  • ✅ Translating English content to Spanish (maintain register parity)
  • ✅ About to use subordinate-initial clauses in service copy
  • ✅ About to write "Given X, Y..." or "While X, Y..." in marketing pages

Symptoms You Need This Skill:

  • 🚨 Service page reads like academic paper
  • 🚨 Using complex subordination for administrators
  • 🚨 Starting sentences with "Because...", "While...", "Given that..." on service pages
  • 🚨 Heavy nominalization ("the verification of data quality via methodological frameworks")
  • 🚨 Passive voice >15% on non-Insights pages
  • 🚨 Graduate-level vocabulary on homepage/services pages
  • 🚨 Jargon without definitions for non-specialist audiences

When NOT to use:

  • ❌ Internal documentation (not user-facing)
  • ❌ Code comments or technical specs
  • ❌ Git commit messages

Quick Reference: Register by Content Surface

| Surface | CEFR Level | Avg Sentence Length | Subordinate-Initial | Nominalization | Passive Voice | Target Audience |
|---|---|---|---|---|---|---|
| Homepage | B1-B2 | 12-18 words | ❌ Never | Minimal | <10% | All segments |
| Services Hub/Pillar/Spoke | B2-C1 | 15-25 words | ⚠️ Rare | Low-Moderate | <15% | Administrators, PMs |
| Insights Hub | B2-C1 | 18-25 words | ⚠️ Rare | Moderate | <20% | All segments (scannable) |
| Insights Posts | C1-C2 | 20-35 words | ✅ Common | High | 30-40% | Researchers, scholars |
| Contact Page | A2-B1 | 8-15 words | ❌ Never | None | 0% | All segments (max clarity) |
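
These targets can also be captured as data so templates and automated audits reference one source of truth. Below is a minimal sketch in Python; the values mirror the table above, but `RegisterSpec` and `REGISTER_SPECS` are illustrative names assumed for this example, not part of an existing GeoVerity codebase.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RegisterSpec:
    """Register targets for one content surface (values from the table above)."""
    cefr: str                       # target CEFR band
    sentence_len: tuple[int, int]   # average sentence length range, in words
    subordinate_initial: str        # "never", "rare", or "common"
    passive_voice_max: float        # maximum share of passive clauses (0.15 = 15%)

# Hypothetical registry keyed by content surface.
REGISTER_SPECS = {
    "homepage":      RegisterSpec("B1-B2", (12, 18), "never",  0.10),
    "services":      RegisterSpec("B2-C1", (15, 25), "rare",   0.15),
    "insights_hub":  RegisterSpec("B2-C1", (18, 25), "rare",   0.20),
    "insights_post": RegisterSpec("C1-C2", (20, 35), "common", 0.40),
    "contact":       RegisterSpec("A2-B1", (8, 15),  "never",  0.00),
}
```

A pre-publication check could then compare a draft's measured statistics against `REGISTER_SPECS[surface]` (see the audit sketch under Pre-Publication Register Audit).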

Target Audience Profiles

Segment 1: Higher Education Administrators

Roles: Deans, Associate Provosts, Program Directors, IRB Chairs

Language Expectations:

  • Standard Academic English WITHOUT technical AI/NLP jargon
  • Active voice, SVO order, main clause initial
  • Concrete examples over abstract theorization
  • Problem → Solution → Outcome structure
  • Avoid: Dense nominalization, subordinate-clause-initial sentences, discipline-specific jargon

Example:

  • ✅ "GeoVerity helps institutions maintain epistemic integrity."
  • ❌ "Given the epistemological challenges posed by generative AI in pedagogical contexts, institutions require..."

Segment 2: Enterprise ML/AI Project Managers

Roles: ML Product Managers, Data Science Team Leads, AI Ethics Officers

Language Expectations:

  • Industry-standard terminology (LLM, RAG, fine-tuning) without excessive academic framing
  • Actionable language ("Deploy," "Configure," "Evaluate")
  • Quantitative precision (numbers, metrics, benchmarks)
  • Avoid: Theoretical background without application, academic citation styles

Example:

  • ✅ "We evaluate LLM performance across 120+ languages."
  • ❌ "The phenomenological dimensions of LLM-generated text in cross-linguistic contexts..."

Segment 3: Academic Researchers & Thought Leaders

Roles: Faculty researchers, graduate students, policy scholars

Language Expectations (Insights Posts ONLY):

  • Discipline-specific terminology, theoretical frameworks, citations
  • Complex syntax with embedding, hedging, epistemic modality
  • Argumentation structure (claim → evidence → warrant → counterargument)
  • Expected: Complex syntax, nominalization, disciplinary vocabulary

Register Specifications by Content Surface

1. Homepage (/ and /es/)

Register: B1-B2 (Intermediate-Upper Intermediate)
Genre: Promotional landing page
Tenor: Professional-to-peer (B2B marketing)

| Feature | Specification | Example |
|---|---|---|
| Lexical Density | 40-50% (conversational-professional) | "GeoVerity provides verifiable AI training data across 120+ languages" |
| Sentence Length | 12-18 words average | Short, punchy sentences |
| Syntactic Complexity | Simple + compound (minimal subordination) | "We verify data quality. You build trustworthy AI." |
| Clause Structure | Main clause initial, SVO order | "GeoVerity helps institutions maintain epistemic integrity" ✅<br>"Epistemic integrity, which institutions must maintain, is supported by GeoVerity" ❌ |
| Voice | Active voice (>90%) | "We verify" not "Data is verified" |
| Nominalization | Minimal (prefer verbs) | "We verify data" ✅<br>"Data verification processes" ⚠️ |
| Jargon Tolerance | Low (define technical terms) | "AI training data (the text, images, and code used to teach AI systems)" |

Prohibited Structures:

  • ❌ Subordinate clause initial: "Because AI systems require verified data, GeoVerity..."
  • ❌ Heavy nominalization: "The verification of data quality via methodological frameworks..."
  • ❌ Passive + abstract agent: "Data quality is ensured through processes..."
  • ❌ Academic hedging: "Our services arguably contribute to..."

Approved Structures:

  • ✅ Main clause initial: "GeoVerity verifies AI training data across 120+ languages."
  • ✅ Active voice + concrete agent: "Our linguists verify data quality."
  • ✅ Parallel structure: "Verify data. Build trust. Deploy confidently."

2. Services Hub & Service Pillar Pages

Register: B2-C1 (Upper Intermediate-Advanced)
Genre: Service description (informational-promotional)
Tenor: Professional consultant-to-client

| Feature | Specification | Example |
|---|---|---|
| Lexical Density | 50-60% (professional) | "Our multilingual data infrastructure supports 120+ languages with verified native-speaker annotations" |
| Sentence Length | 15-22 words average | Moderate complexity |
| Syntactic Complexity | Compound + moderate subordination | "We verify data quality so you can deploy models confidently" |
| Clause Structure | Main clause initial, purpose clauses acceptable | "GeoVerity provides verified training data [main] to ensure model trustworthiness [purpose]" ✅ |
| Voice | Active voice (>85%) | "Our team evaluates models" not "Models are evaluated" |
| Nominalization | Low-moderate (only for established terms) | "model evaluation" ✅, "the evaluation of model performance metrics" ❌ |
| Jargon Tolerance | Moderate (industry-standard terms OK) | "LLM fine-tuning" ✅, "decontextualized lemma frequency distributions" ❌ |

Problem-Solution-Outcome Structure:

1. **The Problem** (1-2 paragraphs, B2 register)
   - State the challenge administrators/PMs face
   - Use concrete scenarios, not abstract theorization
   - "Graduate programs face declining trust in student work due to undetectable AI authorship."

2. **Our Approach** (2-3 paragraphs, B2-C1 register)
   - Describe GeoVerity's solution
   - Use active voice, process verbs
   - "We partner with institutions to establish epistemic integrity frameworks."

3. **What We Offer** (Bulleted list, B2 register)
   - Service deliverables in scannable format
   - "✓ Faculty training on AI detection limitations"

4. **Why This Matters** (1 paragraph, B2 register)
   - Value proposition, outcomes-focused
   - "Institutions maintain accreditation standards while adapting to AI realities."

Prohibited Structures:

  • ❌ Academic subordination: "Given the epistemological challenges posed by generative AI in pedagogical contexts, institutions require..."
  • ❌ Excessive nominalization: "The implementation of verification processes through methodological rigor..."
  • ❌ Passive + vague agent: "Data quality is ensured through processes conducted by teams..."

Approved Structures:

  • ✅ Problem-first: "Graduate programs struggle with AI detection. We provide training on epistemic integrity frameworks."
  • ✅ Process verbs: "We train faculty, evaluate models, and verify data quality."
  • ✅ Concrete outcomes: "Institutions maintain accreditation while adopting AI tools responsibly."

3. Service Spoke Pages

Register: B2-C1
Genre: Detailed service specification
Tenor: Consultant-to-informed-client

| Feature | Specification |
|---|---|
| Lexical Density | 55-65% |
| Sentence Length | 18-25 words average |
| Syntactic Complexity | Compound-complex (controlled subordination) |
| Voice | Active voice (>80%) |
| Nominalization | Moderate (technical terms) |
| Jargon Tolerance | Moderate-high (domain-specific) |

Feature-Benefit Structure:

**Key Features**
- Feature 1: [What it is] → [Why it matters]
- Feature 2: [What it is] → [Why it matters]

**Who This Serves**
- Administrators responsible for [X]
- Teams managing [Y]

4. Insights Hub & Category Pages

Register: B2-C1 (Hub), C1-C2 (Posts)
Genre: Thought leadership portal
Tenor: Professional-to-professional

CRITICAL: Hub uses B2-C1 (scannable), Posts use C1-C2 (scholarly)

Hub Page Register:

  • Scannable post previews
  • Category descriptions in B2-C1 register
  • Post titles in plain language (avoid jargon-heavy titles)
  • "Exploring graduate education, epistemic responsibility, and AI use in academia." ✅
  • NOT: "Investigating the phenomenological dimensions of generative AI's impact on epistemic warrant in pedagogical praxis." ❌

5. Insights Journal Posts (Individual Articles)

Register: C1-C2 (Advanced-Proficient)
Genre: Scholarly argumentation / thought leadership essay
Target Audience: Academic Researchers & Thought Leaders ONLY

| Feature | Specification | Example |
|---|---|---|
| Lexical Density | 65-75% (academic prose) | "The epistemic collapse induced by LLM-generated text in graduate pedagogy necessitates institutional recalibration of authorship verification frameworks." |
| Sentence Length | 20-35 words average | Complex ideas require complex syntax |
| Syntactic Complexity | High (embedding, subordination, nominalization) | "While detection tools claim accuracy [concessive], empirical studies reveal failure rates exceeding 40% [main], suggesting institutions must adopt alternative frameworks [result]." |
| Clause Structure | Subordination acceptable (argument-driven) | Thematic progression, given-new structure |
| Voice | Mixed (passive acceptable for academic hedging) | "It has been argued..." "The data suggest..." |
| Nominalization | High (disciplinary norms) | "epistemic warrant," "authorship verification," "institutional recalibration" |
| Jargon Tolerance | High (discipline-specific terminology) | "phenomenological," "deontological," "hermeneutic," "corpus-driven," "fine-tuning," "RLHF" |
| Hedging | High (epistemic modality) | "may," "suggests," "arguably," "potentially," "appears to" |

Approved Structures (Insights Posts ONLY):

  • ✅ Subordinate clause initial: "While AI detection tools proliferate, their empirical accuracy remains contested."
  • ✅ Heavy nominalization: "The institutionalization of epistemic integrity frameworks requires faculty buy-in."
  • ✅ Passive + hedging: "It has been argued that generative AI undermines traditional authorship models."
  • ✅ Discipline-specific jargon: "Bakhtinian dialogism," "Vygotskian ZPD," "transformer architectures"

Argumentation Structure:

1. **Introduction** (2-3 paragraphs)
   - Contextualize the problem (field, significance)
   - State thesis/claim
   - Preview argumentation structure

2. **Background/Literature Review** (2-4 paragraphs)
   - Engage with scholarly literature
   - Establish theoretical framework

3. **Analysis/Argument** (3-6 paragraphs)
   - Present evidence (data, case studies, citations)
   - Address counterarguments

4. **Implications** (1-2 paragraphs)
   - Practical applications
   - Policy recommendations

5. **Conclusion** (1 paragraph)
   - Restate thesis
   - Link to related GeoVerity service (if applicable)

6. Contact Page

Register: A2-B1 (Elementary-Intermediate)
Genre: Transactional (form-based), maximum accessibility

| Feature | Specification | Example |
|---|---|---|
| Lexical Density | 35-45% (instructional clarity) | "Tell us about your needs. We'll respond within 24 hours." |
| Sentence Length | 8-15 words average | Short, direct |
| Syntactic Complexity | Simple sentences | Imperative mood |
| Voice | Active voice (100%) | "Contact us" not "We may be contacted" |
| Jargon Tolerance | Zero | Plain language only |

Syntactic Feature Matrix

| Feature | Homepage | Services | Spokes | Insights Hub | Insights Posts | Contact |
|---|---|---|---|---|---|---|
| Subordinate-initial clauses | ❌ Never | ⚠️ Rare | ⚠️ Rare | ⚠️ Rare | ✅ Common | ❌ Never |
| Nominalization density | Low | Low-Mod | Moderate | Moderate | High | Minimal |
| Passive voice % | <10% | <15% | <20% | <20% | 30-40% | 0% |
| Average sentence length | 12-18 | 15-22 | 18-25 | 18-25 | 20-35 | 8-15 |
| Lexical density | 40-50% | 50-60% | 55-65% | 55-65% | 65-75% | 35-45% |
| Embedding depth | 0-1 | 1-2 | 1-2 | 1-2 | 2-4 | 0 |
| Technical jargon | Minimal | Moderate | Moderate-High | Moderate | High | None |
| Hedging | Minimal | Low | Low-Mod | Moderate | High | None |

Lexical Stratification Guidelines

Tier 1: Universal Vocabulary (All Surfaces)

Criteria: General Service List (GSL) 2000 most frequent English words
Examples: "data," "quality," "verify," "trust," "help," "service," "system"
Usage: Homepage, Services, Contact

Tier 2: Professional Vocabulary (Services + Insights)

Criteria: Academic Word List (AWL) + industry-standard terms
Examples: "methodology," "framework," "evaluation," "governance," "compliance," "integrity"
Usage: Services Hub, Pillar/Spoke pages, Insights Hub

Tier 3: Technical Vocabulary (Spokes + Insights Posts)

Criteria: Domain-specific terminology (AI/ML, Education, Policy)
Examples: "fine-tuning," "RLHF," "epistemic warrant," "IRB compliance," "transformer architecture"
Usage: Service Spoke pages, Insights Posts (defined on first use in Services)

Tier 4: Disciplinary Jargon (Insights Posts ONLY)

Criteria: Specialized scholarly terminology
Examples: "phenomenological," "Bakhtinian dialogism," "deontological," "hermeneutic," "sociolinguistic variation"
Usage: Insights Journal Posts ONLY (not defined, assumes expert audience)
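
Because each tier maps to a fixed set of surfaces, the vocabulary gate can be expressed as a simple lookup. A minimal sketch follows, assuming the same hypothetical surface keys used in the register-spec sketch above; real term inventories (GSL, AWL, domain glossaries) would have to be supplied separately.

```python
# Which surfaces may use each vocabulary tier (per the tier definitions above).
TIER_SURFACES = {
    1: {"homepage", "services", "spokes", "insights_hub", "insights_post", "contact"},  # universal
    2: {"services", "spokes", "insights_hub", "insights_post"},
    3: {"spokes", "insights_post", "services"},  # on Services only when defined on first use
    4: {"insights_post"},
}

def tier_allowed(term_tier: int, surface: str) -> bool:
    """Return True if a term of the given vocabulary tier may appear on this surface."""
    return surface in TIER_SURFACES.get(term_tier, set())

# Tier 4 disciplinary jargon is only legitimate in Insights Posts.
assert tier_allowed(4, "insights_post")
assert not tier_allowed(4, "homepage")
```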


Information Structure Principles

Given-New Contract (Halliday)

All content surfaces should follow given-before-new information structure:

  • Place known/contextual information in theme position (sentence-initial)
  • Place new/focal information in rheme position (sentence-final)

Example (Services Page):

  • ✅ "Graduate programs [given] face new challenges from AI authorship [new]. These challenges [given] require updated integrity frameworks [new]."
  • ❌ "Updated integrity frameworks are required by challenges that graduate programs face."

Thematic Progression

Homepage: Constant theme (GeoVerity as repeated subject)

  • "GeoVerity provides... GeoVerity verifies... GeoVerity helps..."

Services Pages: Linear theme (previous rheme becomes next theme)

  • "AI systems require verified data [rheme]. Verified data [theme] enables trustworthy models [rheme]. Trustworthy models [theme] build institutional confidence."

Insights Posts: Split theme (complex thematic development)

  • Academic argumentation allows non-linear thematic progression

Code-Switching & Bilingual Parity

English-Spanish Register Alignment

Critical: Spanish translations must match the register of English source text.

Register-Appropriate Translation:

| English (source) | Spanish (same register) |
|---|---|
| Services Page (B2-C1): "GeoVerity helps institutions maintain epistemic integrity." | "GeoVerity ayuda a las instituciones a mantener la integridad epistémica."<br>NOT: "GeoVerity auxilia a instituciones en el mantenimiento de la integridad epistémica." (too formal) |
| Insights Post (C1-C2): "The epistemic collapse induced by LLM-generated text necessitates institutional recalibration." | "El colapso epistémico inducido por texto generado por LLM necesita una recalibración institucional." |

Register Calibration by Variety:

  • Latin American Spanish: Prefer slightly more direct/informal register than Peninsular Spanish
  • Peninsular Spanish: Acceptable for formal Insights Posts
  • Avoid: Overly formal constructions in service pages ("se ruega," "a la mayor brevedad posible")

Common Mistakes

❌ Mistake 1: Academic Register on Service Pages

Symptom: Subordinate-initial clauses, heavy nominalization, passive voice on Homepage/Services pages

Wrong:

"Given the epistemological challenges posed by generative AI in pedagogical contexts, institutions require comprehensive frameworks for the maintenance of epistemic integrity through methodological rigor."

Right:

"Graduate programs face new challenges from AI-generated student work. GeoVerity helps institutions maintain academic integrity with proven frameworks."

Why: Administrators and project managers need clear, actionable language, not academic prose.


❌ Mistake 2: Plain Language in Insights Posts

Symptom: Oversimplified syntax, no disciplinary terminology in scholarly articles

Wrong (for Insights Post):

"AI tools are changing how students write. This is a problem for universities. We need new ways to check student work."

Right (for Insights Post):

"While generative AI proliferates across pedagogical contexts, its epistemic implications for authorship verification remain contested. Institutional frameworks must recalibrate beyond detection-based models toward process-oriented integrity assessment."

Why: Academic audiences expect scholarly argumentation with complex syntax and disciplinary terminology.


❌ Mistake 3: Register Mismatch in Spanish Translations

Symptom: Spanish translation is more formal than English source

Wrong:

  • English (B2): "We help you build trustworthy AI."
  • Spanish (C1): "Auxiliamos en la construcción de inteligencia artificial fidedigna."

Right:

  • English (B2): "We help you build trustworthy AI."
  • Spanish (B2): "Te ayudamos a construir IA confiable."

Why: Register must match across languages for bilingual parity.


❌ Mistake 4: Jargon Without Context

Symptom: Technical terms undefined on service pages for non-specialist audiences

Wrong:

"Our RLHF pipelines optimize decontextualized lemma frequency distributions across polyglot corpora."

Right:

"We optimize language model training using human feedback and multilingual datasets."

Why: Administrators/PMs need industry-standard terms, not research jargon.


❌ Mistake 5: Subordinate-Initial Clauses on Service Pages

Symptom: Starting sentences with "Because...", "While...", "Given that..." on Homepage/Services pages

Wrong:

"Because AI detection tools fail 40% of the time, institutions need alternative approaches to academic integrity."

Right:

"AI detection tools fail 40% of the time. Institutions need alternative approaches to academic integrity."

Why: Main-clause-initial structure is clearer for busy professionals scanning content.


Register Swap Test (Mandatory Self-Check)

Before publishing ANY content, run this diagnostic:

If Drafting Homepage / Services Content

STOP and revise if you hear yourself doing ANY of this:

Starting with subordinate clauses:

  • "While X...", "Because X...", "Given that X...", "Although X..."

Using academic terminology without operational grounding:

  • "epistemic instability," "hermeneutic framing," "phenomenological pressure," "deontological imperatives"

Asking the reader to rethink policy foundations instead of telling them what we do:

  • "Institutions must reconceptualize..."
  • "We need to problematize..."
  • "Traditional frameworks require epistemological revision..."

Performing identity work as "fellow theorist" instead of "operational partner":

  • "As a field, we must reconsider..."
  • "Our shared disciplinary assumptions..."

Diagnosis: You are doing Insights languaging. Stop and rewrite in plain professional language (B2-C1).

Fix checklist:

  • Rewrite with main clause first
  • Replace academic terms with industry-standard terms
  • Frame as operational problem → GeoVerity solution → outcome
  • Position reader as decision-maker, not co-theorist

If Drafting Insights Post

STOP and revise if you hear yourself doing ANY of this:

Promising operational outcomes directly:

  • "We help you..."
  • "This lets you implement..."
  • "You can deploy this framework next week..."

Avoiding theoretical terms because you think they're "too academic":

  • Writing "problem with checking" instead of "epistemic warrant"
  • Writing "power issues" instead of "deontological constraints"
  • Refusing to engage with disciplinary literature

Writing only in short main-clause-first sentences:

  • Refusing to use subordination for argumentation
  • Avoiding complex syntax even when ideas require it

Performing identity work as "service provider" instead of "intellectual peer":

  • "We can solve this for you..."
  • "Our clients need..."

Diagnosis: You are doing Services languaging. Stop and escalate to academic register (C1-C2).

Fix checklist:

  • Rewrite to argue/theorize/reframe, not promise outcomes
  • Use disciplinary terminology without immediate operationalization
  • Use complex syntax to match complex ideas
  • Position reader as co-analyst, not client

Swap Test Summary (Copy-Paste Diagnostic)

Services content should:

  • Answer: "What problem do you have? What do we do? What outcome do you get?"
  • Use main-clause-first sentences
  • Ground technical terms in operational effect
  • Position reader as decision-maker

Insights content should:

  • Answer: "What assumptions are we interrogating? What evidence challenges them? What reframing do we propose?"
  • Use subordination for argumentation
  • Introduce theoretical constructs without immediate application
  • Position reader as fellow theorist

If content does the opposite of its surface type, you have violated register stratification.


Quality Assurance Checklist

Before publishing ANY content, verify:

Register Compliance

  • Identified content surface type (Homepage, Services, Insights, etc.)
  • Applied correct CEFR level (B1-B2 for Homepage, B2-C1 for Services, C1-C2 for Insights Posts)
  • Verified sentence length matches target range
  • Checked passive voice percentage
  • Confirmed no subordinate-initial clauses on service pages

Syntactic Rules

  • Main clause initial on Homepage/Services pages (no "Because X, Y..." or "While X, Y...")
  • Active voice >85% on service pages
  • Minimal nominalization on service pages
  • Technical terms defined on first use (service pages)

Audience Alignment

  • Language matches target audience (administrators, PMs, researchers)
  • Problem-Solution-Outcome structure for service pages
  • Claim-Evidence-Warrant structure for Insights Posts

Bilingual Parity

  • Spanish translation matches English register level
  • No register shift between languages (e.g., B2 EN → C1 ES)

Integration with Other Skills

MANDATORY SKILL COMBINATIONS:

  1. Always combine with building-pages for accessibility/performance compliance
  2. Use with templating-pages for Astro-specific implementation
  3. Reference branding for voice/tone alignment (branding specifies brand voice, languaging specifies register)
  4. Follow making-skill-decisions for skill discovery workflows
  5. CRITICAL: Always run checking-crappy-writing v1.3.0 after generating content
    • This skill checks register stratification
    • checking-crappy-writing v1.3.0 AUTO-FIXES AI artifacts (hallucinated citations, puffery, chatbot meta-language, formatting leaks, anti-detection evasion)
    • User reviews FIXES (not violations) and updates provenance

Execution Order (v1.3.0 AUTO-FIX workflow):

  1. languaging → Generate register-compliant content
  2. checking-crappy-writing → AUTO-FIX violations
  3. Claude Code assistant reports fixes to user (structured format)
  4. User reviews auto-fixes (accepts/rejects/edits)
  5. User updates provenance to "human-edited"
  6. User sets _meta.contentStatus to "approved"
  7. Iterate until PASS

IMPORTANT: Content you generate will be automatically scanned and fixed by checking-crappy-writing. User will review FIXES, not violations. See .claude/skills/checking-crappy-writing/SKILL.md for auto-fix output format.


Pre-Publication Register Audit

Automated Checks

  1. Sentence length analysis: Flag sentences >30 words on Homepage/Services pages
  2. Passive voice detection: Flag >15% passive on non-Insights pages
  3. Lexical density calculation: Flag mismatches with target range
  4. Readability scores: Flesch-Kincaid Grade Level, CEFR alignment
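
The first three checks above can be approximated in a few lines. The sketch below is illustrative only: the passive-voice cue and function-word list are rough heuristics, the `audit` name is assumed for this example, and the thresholds would in practice come from the register spec for the surface being checked (see the hypothetical `REGISTER_SPECS` sketch earlier in this document).

```python
import re

# Very rough passive cue: a form of "be" followed by a word ending in -ed/-en.
PASSIVE_CUE = re.compile(r"\b(is|are|was|were|be|been|being)\s+\w+(ed|en)\b", re.IGNORECASE)
WORD = re.compile(r"\b\w+\b")
FUNCTION_WORDS = {
    "the", "a", "an", "and", "or", "but", "of", "to", "in", "on", "for", "with",
    "by", "at", "from", "that", "this", "it", "is", "are", "was", "were", "be",
}

def audit(text: str, max_sentence_len: int = 30, max_passive: float = 0.15) -> list[str]:
    """Flag over-long sentences and heavy passive voice; report approximate lexical density."""
    flags = []
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    for s in sentences:
        if len(s.split()) > max_sentence_len:
            flags.append(f"Sentence over {max_sentence_len} words: {s[:60]}...")
    passive_hits = sum(bool(PASSIVE_CUE.search(s)) for s in sentences)
    if sentences and passive_hits / len(sentences) > max_passive:
        flags.append(f"Passive constructions in {passive_hits}/{len(sentences)} sentences exceed the target.")
    tokens = [t.lower() for t in WORD.findall(text)]
    content_words = [t for t in tokens if t not in FUNCTION_WORDS]
    if tokens:
        flags.append(f"Lexical density ≈ {len(content_words) / len(tokens):.0%} (compare with the surface target).")
    return flags
```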

Human Review

  1. Linguistic review: PhD-level linguist reviews register appropriateness
  2. Audience testing: Segment representatives review drafts (administrators, PMs, researchers)
  3. Cross-linguistic review: Native Spanish speaker reviews register parity

Glossary of Linguistic Terms

CEFR (Common European Framework of Reference for Languages): Standardized scale of language proficiency (A1-C2)

Lexical Density: Ratio of content words (nouns, verbs, adjectives, adverbs) to total words; higher density = more information-packed

Nominalization: Converting verbs/adjectives into nouns (e.g., "verify" → "verification"); increases abstraction and formality

Embedding Depth: Number of subordinate clauses nested within a sentence; higher depth = greater syntactic complexity

Thematic Structure: Division of clause into theme (sentence-initial, given information) and rheme (new information)

Subordinate Clause Initial: Sentence structure where dependent clause precedes main clause (e.g., "While X, Y...")

Hedging: Use of epistemic modality to express uncertainty or tentativeness (e.g., "may," "suggests," "arguably")

Register: Contextual variety of language associated with particular situations, audiences, and purposes

Tenor: Social relationship between discourse participants (formal ↔ informal, expert ↔ novice)

Field: Subject matter or domain of discourse (technical, academic, everyday)

Mode: Channel of communication (spoken, written, digital)


References (Linguistic Framework)

  • Halliday, M.A.K., & Matthiessen, C.M.I.M. (2014). Halliday's Introduction to Functional Grammar (4th ed.). Routledge.
  • Bell, A. (1984). Language style as audience design. Language in Society, 13(2), 145-204.
  • Bernstein, B. (1971). Class, Codes and Control, Volume 1: Theoretical Studies Towards a Sociology of Language. Routledge.
  • Biber, D., & Conrad, S. (2009). Register, Genre, and Style. Cambridge University Press.
  • Martin, J.R., & Rose, D. (2008). Genre Relations: Mapping Culture. Equinox.
  • Council of Europe (2001). Common European Framework of Reference for Languages. Cambridge University Press.

Document Control:

  • Version: 1.0.0
  • Date: 2025-10-27
  • Owner: GeoVerity Content Strategy + Linguistics PhD Stakeholder
  • Full Language Register Plan: See docs/Language-Register-Plan.md for complete specifications

Red Flags - STOP Before Publishing

If you catch yourself doing ANY of these, STOP and revise:

Languaging Violations (Action/Identity)

  • 🚨 Services page does academic languaging: Arguing/theorizing instead of solving/promising
  • 🚨 Insights post does service languaging: Promising outcomes instead of interrogating assumptions
  • 🚨 Wrong identity positioning: Services page treats reader as co-theorist; Insights post treats reader as client

Syntactic Violations

  • 🚨 Using subordinate-initial clauses on Homepage/Services pages
  • 🚨 Starting sentences with "Because...", "While...", "Given that..." on marketing pages
  • 🚨 Heavy nominalization on service pages ("the implementation of verification processes...")
  • 🚨 Passive voice >15% on service pages
  • 🚨 Only short simple sentences in Insights posts (refusing complex syntax for complex ideas)

Lexical Violations

  • 🚨 Academic jargon undefined on service pages ("epistemic warrant" without grounding in effect)
  • 🚨 Plain language in Insights journal posts (oversimplified for academic audience)
  • 🚨 Industry-standard terms avoided in Insights because "too technical"

Cross-Linguistic Violations

  • 🚨 Spanish translation more formal than English source
  • 🚨 Register shift between EN → ES (B2 English becomes C1 Spanish)

Swap Test Failures

  • 🚨 Services content asks: "What assumptions are we interrogating?" (Should ask: "What problem do you have?")
  • 🚨 Insights content promises: "We help you implement X" (Should argue: "Traditional models require reconsideration")

All of these mean: Revise before publishing. No exceptions.

Quick diagnostic: Run the Register Swap Test above before finalizing any content.