Sector HQ Glossary

49+ terms and definitions for understanding AI adoption rankings, hype vs reality analysis, and BS detection.

BS Gap (Bullshit Gap)

The difference between a company's hype score and reality score. A positive BS gap indicates marketing claims exceed actual substance, while a negative gap shows the company under-promises and over-delivers.

Example:
Company X has a BS gap of +23.5, meaning their hype score (85) is 23.5 points higher than their reality score (61.5).
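In code, the calculation is straightforward (an illustrative sketch; `bs_gap` is a hypothetical helper, not Sector HQ's actual implementation):

```python
def bs_gap(hype_score: float, reality_score: float) -> float:
    """Hype minus reality: positive means marketing exceeds substance."""
    return round(hype_score - reality_score, 1)

# Company X from the example above: hype 85, reality 61.5
print(bs_gap(85, 61.5))  # 23.5
```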

Hype Score

A 0-100 metric measuring the volume and intensity of marketing claims, press releases, and self-promotion from AI companies. Calculated by analyzing company blog posts, press releases, and executive statements for buzzwords and unsubstantiated claims.

Example:
A company announcing "revolutionary AI breakthroughs" without demos or papers would score high on hype.

Reality Score

A 0-100 metric measuring actual substance and deliverables from community and third-party sources. Tracks GitHub activity, research papers, product launches with demos, developer discussions, and verified user feedback.

Example:
A company with 50+ GitHub commits, 5 published papers, and an active developer community would score high on reality.

AI Adoption

The measurable integration and deployment of artificial intelligence technologies within a company, tracked through events, product launches, research output, and community activity.

Example:
Google's AI adoption is measured by their Gemini releases, TensorFlow contributions, and DeepMind research.

Sentiment Score

A 0-100 metric analyzing public perception and market sentiment through social media, news coverage, developer feedback, and community discussions.

Example:
High sentiment (80+) indicates strong positive community perception and trust.

AI Events

Trackable activities including product launches (actual demos), research papers (arXiv, journals), GitHub activity (real code), partnerships (with substance), funding rounds (validated), and hiring/layoffs (market signals).

Example:
OpenAI releasing GPT-4 with public API access counts as a high-quality AI event.

7-Day Event Count

The number of significant AI-related events for a company in the past 7 days. Used to measure recent activity and momentum.

Example:
A company with 15 events in 7 days shows high activity; 0-2 events suggests minimal recent progress.

Quality Score

A metric evaluating the technical quality and reliability of AI products through model performance benchmarks, API reliability, documentation quality, user satisfaction, and bug resolution time.

Example:
A company with 99.9% API uptime, comprehensive docs, and positive user reviews scores high on quality.

Gap Classification

Categories for BS gap ranges: Extreme Overhype (+25 and above), High BS (+20 to +25), Significant Spin (+15 to +20), Moderate Gap (+10 to +15), Slight Exaggeration (+5 to +10), Balanced (0 to +5), Honest (-5 to 0), Over-delivers (below -5).

Example:
A +27 BS gap is classified as "Extreme Overhype".
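The band lookup can be sketched as a simple threshold chain (an assumption about how the mapping works; the cutoffs follow the ranges listed above):

```python
def classify_gap(gap: float) -> str:
    """Map a BS gap to its classification band (highest threshold wins)."""
    if gap >= 25:
        return "Extreme Overhype"
    if gap >= 20:
        return "High BS"
    if gap >= 15:
        return "Significant Spin"
    if gap >= 10:
        return "Moderate Gap"
    if gap >= 5:
        return "Slight Exaggeration"
    if gap >= 0:
        return "Balanced"
    if gap >= -5:
        return "Honest"
    return "Over-delivers"

print(classify_gap(27))  # Extreme Overhype
```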

Hype Signals

Indicators of marketing spin including buzzword usage (revolutionary, groundbreaking), vague announcements, executive quotes without substance, repackaged old news, and PR without product.

Example:
"We're exploring revolutionary AI" with no product or timeline is a strong hype signal.

Reality Signals

Indicators of actual substance including GitHub commits, arXiv papers, product demos, developer adoption, third-party benchmarks, and verified user testimonials.

Example:
Publishing code on GitHub, papers on arXiv, and running public benchmarks are strong reality signals.

Rank

A company's position on Sector HQ, calculated from weighted scores of events (40%), sentiment (35%), and quality (25%). Updated every 5 minutes.

Example:
Rank #1 means highest overall AI adoption score across all tracked companies.

Overall Score

Composite 0-100 metric combining event activity, sentiment, and quality. Formula: (event_score × 0.40) + (sentiment × 0.35) + (quality × 0.25), where the event component is first normalized to a 0-100 scale so the composite stays in range.

Example:
A score of 85.3 indicates very high AI adoption and activity.
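A minimal sketch of the weighted composite, assuming all three inputs are already on a 0-100 scale (the normalization of the event component is not specified here):

```python
def overall_score(event_score: float, sentiment: float, quality: float) -> float:
    """Weighted composite: events 40%, sentiment 35%, quality 25%."""
    return round(event_score * 0.40 + sentiment * 0.35 + quality * 0.25, 1)

# e.g. strong activity (80), strong sentiment (90), solid quality (88)
print(overall_score(80, 90, 88))  # 85.5
```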

Sentiment Analysis

Automated evaluation of public perception through natural language processing of social media posts, news articles, developer forums, and user reviews.

Example:
Tracking Reddit comments, Twitter mentions, and HackerNews discussions about a company.

Event Quality

Assessment of event substance and impact. Product launches with demos score higher than vague announcements. Research papers with code score higher than press releases.

Example:
Releasing an open-source model scores higher than announcing "AI research partnership".

Community-Driven Data

Information sourced from third-party platforms like GitHub, Reddit, arXiv, and developer forums rather than company-controlled channels.

Example:
GitHub stars, Reddit upvotes, and arXiv citations vs company blog posts.

Real-Time Updates

Leaderboard rankings refresh every 5 minutes with latest data from tracked sources, providing near-instantaneous reflection of AI activity.

Example:
A major product launch can change rankings within 5 minutes of announcement.

Trend

Direction of BS gap change over time: Increasing (gap growing), Decreasing (gap shrinking), or Stable (minimal change). Indicates whether marketing and reality are converging or diverging.

Example:
An "Increasing" trend means hype is outpacing reality.

Confidence Score

A 0-100 metric indicating reliability of BS gap calculation based on event count and data quality. Higher confidence requires more events analyzed.

Example:
95% confidence means BS gap is calculated from 10+ high-quality events.

Events Analyzed

Number of events included in hype vs reality calculation. Minimum 3 events required for BS gap analysis.

Example:
15 events analyzed provides more reliable BS gap than 3 events.

Most Overhyped

Companies with the highest positive BS gaps, indicating significant disconnect between marketing claims and actual deliverables.

Example:
Companies in this list have BS gaps of +20 or higher.

Most Honest

Companies with low or negative BS gaps, indicating marketing aligns with or understates actual capabilities.

Example:
A BS gap of -5 means reality exceeds hype by 5 points.

API Access

Programmatic access to leaderboard data, company rankings, event feeds, and hype vs reality metrics through REST endpoints.

Example:
GET /api/leaderboard returns current rankings with scores and BS gaps.
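Consuming the endpoint might look like this (the response shape below is an assumption for illustration; "ExampleAI" and the field names are hypothetical, and only the endpoint path comes from the glossary):

```python
import json

# Hypothetical JSON body from GET /api/leaderboard (illustrative only)
raw = '''{"rankings": [
  {"rank": 1, "company": "ExampleAI", "overall_score": 85.3, "bs_gap": -2.1}
]}'''

data = json.loads(raw)
top = data["rankings"][0]
print(top["company"], top["overall_score"])
```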

Event Types

Categories of trackable activities: Product Launch, Research Paper, GitHub Activity, Partnership, Funding, Hiring, Media Mention, Technical Achievement.

Example:
A research paper on arXiv is categorized as "Research Paper" event type.

Data Sources

Platforms monitored for AI activity: GitHub, arXiv, Reddit, HackerNews, company blogs, tech news outlets, Twitter, LinkedIn, academic journals.

Example:
We track r/MachineLearning and r/artificial for community sentiment.

Update Frequency

How often different metrics refresh: Rankings (every 5 minutes), Event detection (real-time), Sentiment (hourly), BS gap (daily).

Example:
New GitHub releases are detected within minutes and affect rankings in next 5-minute cycle.

Methodology

Systematic approach to scoring: event weighting based on quality, sentiment drawn from multiple sources, quality derived from benchmarks and reviews, and transparency in all calculations.

Example:
A GitHub release with 1000+ stars carries more weight than a blog post announcement.

Leaderboard Categories

Classification of companies: Overall Rankings, Most Overhyped, Best Sentiment, Highest Activity, Rising Stars, Falling Knives.

Example:
A company can rank #5 overall but #1 in "Most Overhyped".

Company Profile

Dedicated page for each company showing rank, scores, recent events, hype vs reality breakdown, trends, and historical data.

Example:
/company/openai displays OpenAI's complete AI adoption metrics.

Comparison Tool

Side-by-side analysis of two companies across metrics: ranks, scores, events, sentiment, BS gaps, with winner/loser indicators.

Example:
/compare/openai-vs-anthropic shows head-to-head AI adoption comparison.

Rising Stars

Companies with largest positive rank changes over past week/month, indicating growing AI momentum and adoption.

Example:
Moving from rank #45 to #12 in one week qualifies as a rising star.

Falling Knives

Companies with largest negative rank drops, potentially indicating slowing AI activity or negative sentiment.

Example:
Dropping from #8 to #35 suggests decreased activity or community concerns.

Activity Feed

Real-time stream of AI events across all tracked companies, showing latest launches, papers, and announcements.

Example:
Live feed displays "OpenAI released GPT-5 API" seconds after detection.

Event Filtering

Automated removal of low-quality events: vague press releases, marketing fluff, reposted content, executive quotes without substance.

Example:
"CEO says AI is important" is filtered out; "CEO demos new AI model" is included.

Verified Events

Events confirmed through multiple sources or official channels before inclusion in scoring.

Example:
A rumored product launch isn't counted until official announcement or demo.

Historical Trends

Company performance over time showing rank changes, score evolution, and event frequency across weeks/months.

Example:
Graph showing company went from 5 events/week to 25 events/week over 3 months.

Normalized Scoring

Adjusting raw metrics to 0-100 scale for fair comparison across companies of different sizes and industries.

Example:
A startup with 5 high-quality events can score higher than a large company with 20 low-quality events.
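One common way to rescale a raw metric to 0-100 is min-max normalization (a sketch of the general technique; Sector HQ's exact method is not documented here):

```python
def normalize(value: float, lo: float, hi: float) -> float:
    """Rescale a raw metric into 0-100 via min-max normalization."""
    if hi == lo:
        return 0.0  # degenerate range: no spread to normalize over
    return round((value - lo) / (hi - lo) * 100, 1)

# A raw count of 15 events, where tracked companies range from 0 to 20
print(normalize(15, 0, 20))  # 75.0
```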

Recency Weighting

More recent events carry higher weight in scoring than older events, emphasizing current activity over historical performance.

Example:
Events from past 7 days weighted 2x higher than events from 30 days ago.
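The 2x rule from the example can be sketched as a step function (the 0.5 weight for events older than 30 days is an assumption; only the 7-day and 30-day weights come from the glossary):

```python
def event_weight(age_days: int) -> float:
    """Recency weight: past week counts 2x events from ~30 days ago."""
    if age_days <= 7:
        return 2.0
    if age_days <= 30:
        return 1.0
    return 0.5  # assumption: older events decay further

def weighted_event_score(ages: list[int]) -> float:
    """Sum of recency weights over a company's event ages in days."""
    return sum(event_weight(a) for a in ages)

print(weighted_event_score([1, 10]))  # 3.0
```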

AI Buzz

Volume of discussion and mentions about a company across social platforms, correlated with but separate from sentiment.

Example:
High buzz doesn't guarantee positive sentiment—controversial companies can have high buzz and low sentiment.

Technical Depth

Measure of how detailed and substantial technical information is in announcements, papers, and releases.

Example:
Sharing model architecture details and training methods shows higher technical depth than "we used AI".

Developer Adoption

Uptake of company's AI tools by developer community, measured through GitHub forks, npm downloads, API usage, and Stack Overflow questions.

Example:
TensorFlow's 180k+ GitHub stars indicate massive developer adoption.

Research Impact

Academic and industry influence of published research, tracked through citations, reproductions, and benchmarks.

Example:
A paper cited 500+ times in 6 months shows high research impact.

Product Velocity

Rate of new feature releases and product updates, indicating development speed and innovation pace.

Example:
Releasing major updates every 2-3 weeks shows high product velocity.

Transparency Score

How open a company is about AI capabilities, limitations, training data, and methodologies.

Example:
Publishing model cards, datasets, and technical reports increases transparency score.

Benchmark Performance

Scores on standardized AI tests and leaderboards like MMLU, HumanEval, ImageNet, or domain-specific evaluations.

Example:
Achieving 95.3% on MMLU benchmark is tracked as objective performance metric.

Community Sentiment

Specific to developer and researcher communities vs general public sentiment, often more critical and technical.

Example:
A product might have 90% public sentiment but only 60% developer sentiment due to API limitations.

Marketing Spin Detection

Automated identification of exaggerated claims, vague language, and buzzword overuse in company communications.

Example:
Detecting phrases like "revolutionary," "game-changing," "unprecedented" without supporting evidence.
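A naive keyword version of this check (a toy sketch, not the production detector, which presumably also weighs context and supporting evidence):

```python
BUZZWORDS = {"revolutionary", "groundbreaking", "game-changing", "unprecedented"}

def spin_signals(text: str) -> list[str]:
    """Return buzzwords found in an announcement, sorted alphabetically."""
    words = text.lower().replace(",", " ").split()
    return sorted(w for w in set(words) if w in BUZZWORDS)

print(spin_signals("Our revolutionary, game-changing AI"))
```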

Substance Verification

Cross-referencing company claims with third-party sources, demos, code, and independent testing.

Example:
Verifying "fastest inference" claim against public benchmarks and community testing.

Gap Emoji

Visual indicator of BS gap severity: 🔥 for honest, ⚠️ for moderate, 🤡 for high BS, 💩 for extreme overhype.

Example:
A +28 BS gap displays 💩 emoji for extreme overhype.
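The emoji lookup can be sketched as follows (the numeric thresholds are assumptions inferred from the Gap Classification bands; only the emoji-to-severity pairing comes from the glossary):

```python
def gap_emoji(gap: float) -> str:
    """Map BS gap severity to its display emoji."""
    if gap >= 25:
        return "\U0001F4A9"  # 💩 extreme overhype
    if gap >= 20:
        return "\U0001F921"  # 🤡 high BS
    if gap >= 10:
        return "\u26A0\uFE0F"  # ⚠️ moderate
    return "\U0001F525"  # 🔥 honest / balanced

print(gap_emoji(28))  # 💩
```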