Sector HQ Glossary
49+ terms and definitions for understanding AI adoption rankings, hype vs reality analysis, and BS detection.
- Hype Gap
- The difference between a company's hype score and reality score. A positive Hype Gap indicates marketing claims exceed actual substance, while a negative gap shows the company under-promises and over-delivers. Example: Company X has a Hype Gap of +23.5, meaning their hype score (85) is 23.5 points higher than their reality score (61.5).
- Hype Score
- A 0-100 metric measuring the volume and intensity of marketing claims, press releases, and self-promotion from AI companies. Calculated by analyzing company blog posts, press releases, and executive statements for buzzwords and unsubstantiated claims. Example: A company announcing "revolutionary AI breakthroughs" without demos or papers would score high on hype.
- Reality Score
- A 0-100 metric measuring actual substance and deliverables from community and third-party sources. Tracks GitHub activity, research papers, product launches with demos, developer discussions, and verified user feedback. Example: A company with 50+ GitHub commits, 5 published papers, and an active developer community would score high on reality.
- AI Adoption
- The measurable integration and deployment of artificial intelligence technologies within a company, tracked through events, product launches, research output, and community activity. Example: Google's AI adoption is measured by their Gemini releases, TensorFlow contributions, and DeepMind research.
- Sentiment Score
- A sentiment metric stored on a -1 to 1 scale and displayed as 0-100%, analyzing community perception from Reddit (r/MachineLearning, r/artificial, r/LocalLLaMA, r/singularity, r/OpenAI), company blogs, arXiv, and tech news. Calculated per event, then time-weighted and aggregated. Display ranges: 80-100% (strong positive, ≥0.6 in DB), 60-80% (generally positive, 0.2 to 0.6), 40-60% (mixed, -0.2 to 0.2), below 40% (negative, <-0.2). Example: A sentiment of 85% indicates strong positive community perception and high trust among developers and users.
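A minimal sketch of how a stored sentiment value could be converted to the displayed percentage and label, assuming the linear mapping display = (value + 1) × 50 implied by the thresholds above (the exact conversion function is not documented here):

```python
def sentiment_display(db_value: float) -> tuple[float, str]:
    """Map a stored sentiment value (-1 to 1) to a display percentage and label.

    Assumes display = (value + 1) * 50, which reproduces the documented
    thresholds (0.6 -> 80%, 0.2 -> 60%, -0.2 -> 40%).
    """
    pct = (db_value + 1) * 50
    if db_value >= 0.6:
        label = "strong positive"
    elif db_value >= 0.2:
        label = "generally positive"
    elif db_value >= -0.2:
        label = "mixed"
    else:
        label = "negative"
    return pct, label

print(sentiment_display(0.7))  # (85.0, 'strong positive')
```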
- AI Events
- Trackable activities including product launches (actual demos), research papers (arXiv, journals), GitHub activity (real code), partnerships (with substance), funding rounds (validated), and hiring/layoffs (market signals). Example: OpenAI releasing GPT-4 with public API access counts as a high-quality AI event.
- 7-Day Event Count
- The number of significant AI-related events for a company in the past 7 days. Used to measure recent activity and momentum. Example: A company with 15 events in 7 days shows high activity; 0-2 events suggests minimal recent progress.
- Quality Score
- A 40%-weighted metric evaluating content quality and technical substance through comprehensive AI analysis. Includes technical depth (working demos, code releases, API docs), benchmark performance, research quality (arXiv papers, peer review), production readiness, and developer experience. The highest-weighted factor in the scoring formula. Example: A company with 99.9% API uptime, comprehensive docs, GitHub releases, and published benchmarks scores high on quality (40% of the overall score).
- Gap Classification
- Categories for Hype Gap ranges: Under-delivers (−5 or lower), Honest (0 to +5), Slight Exaggeration (+5 to +10), Moderate Overhype (+10 to +20), High BS (+20 or higher). Example: A +23 Hype Gap is classified as "High BS", indicating marketing far exceeds substance.
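A minimal sketch of computing the Hype Gap and applying the classification above; treating the −5 to 0 range as "Honest" is an assumption, since only the listed boundaries are given:

```python
def classify_hype_gap(hype_score: float, reality_score: float) -> tuple[float, str]:
    """Compute the Hype Gap (hype minus reality) and classify it.

    Thresholds follow the Gap Classification ranges above; mapping the
    -5 to 0 range to "Honest" is an assumption.
    """
    gap = hype_score - reality_score
    if gap >= 20:
        label = "High BS"
    elif gap >= 10:
        label = "Moderate Overhype"
    elif gap >= 5:
        label = "Slight Exaggeration"
    elif gap >= -5:
        label = "Honest"
    else:
        label = "Under-delivers"
    return gap, label

print(classify_hype_gap(85, 61.5))  # (23.5, 'High BS')
```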
- Hype Patterns
- AI-detected patterns in ALL events indicating marketing spin (5-7 points each). Includes buzzwords ("revolutionary", "groundbreaking", "game-changing"), vague promises ("coming soon", "stay tuned"), hyperbole ("world-changing", "transformative"), superlatives without proof, AGI/superintelligence claims. Example: "We're exploring revolutionary AI" with no product or timeline contains multiple hype patterns worth 15-20 points.
- Reality Patterns
- AI-detected patterns in ALL events indicating technical substance (4-6 points each). Includes concrete deliverables ("now available", "released", "launched"), technical substance (GitHub links, API docs, pricing), measurable metrics (benchmarks, performance numbers, SLA guarantees). Example: Publishing code on GitHub with benchmarks, pricing, and API docs contains multiple reality patterns worth 20-30 points.
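The real detection is AI-based, but a minimal keyword-matching sketch shows the shape of the scoring; the phrases and point values below are illustrative assumptions drawn from the examples above:

```python
# Illustrative phrase lists and point values; the production system uses AI analysis.
HYPE_PATTERNS = {"revolutionary": 6, "groundbreaking": 6, "game-changing": 6,
                 "coming soon": 5, "world-changing": 7, "transformative": 5}
REALITY_PATTERNS = {"now available": 5, "released": 5, "launched": 5,
                    "github.com": 6, "benchmark": 5, "pricing": 4}

def pattern_points(text: str, patterns: dict[str, int]) -> int:
    """Sum points for every pattern phrase found in the text (case-insensitive)."""
    lowered = text.lower()
    return sum(points for phrase, points in patterns.items() if phrase in lowered)

announcement = "Our revolutionary model is now available, with benchmarks and pricing."
print(pattern_points(announcement, HYPE_PATTERNS))     # 6
print(pattern_points(announcement, REALITY_PATTERNS))  # 5 + 5 + 4 = 14
```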
- Rank
- A company's position on the Sector HQ leaderboard, calculated from the weighted overall score: Quality (40%), Sentiment (30%), and Urgency (30%), plus frequency and recency bonuses. Rankings update every 5 minutes based on real-time data; events decay 10% per week. Example: Rank #1 means the highest overall AI adoption score across all 500+ tracked companies.
- Overall Score
- Composite 0-100 metric combining three factors with bonuses. Formula: Score = Σ[(Quality × 40% + Sentiment × 30% + Urgency × 30%) × Time Decay] + Frequency Bonus (up to +10) + Recency Bonus (up to +5). Events decay 10% per week (exp(-0.1 × age_in_weeks)). Example: A score of 85.3 indicates very high AI adoption with strong quality, positive sentiment, high urgency events, plus recent activity bonuses.
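A minimal sketch of the scoring formula as written above, assuming per-event quality/sentiment/urgency values on a 0-100 scale; how the summed total is normalized back to a 0-100 display range and how the frequency/recency bonuses are derived are not specified here, so the bonuses are simply passed in and capped:

```python
import math

def overall_score(events: list[dict], frequency_bonus: float, recency_bonus: float) -> float:
    """Sum time-decayed weighted event scores and add capped bonuses.

    Each event dict carries quality, sentiment, urgency (0-100) and age_in_weeks.
    Normalization of the sum to a 0-100 display range is not modeled here.
    """
    total = 0.0
    for e in events:
        base = e["quality"] * 0.40 + e["sentiment"] * 0.30 + e["urgency"] * 0.30
        decay = math.exp(-0.1 * e["age_in_weeks"])  # ~10% decay per week
        total += base * decay
    return total + min(frequency_bonus, 10) + min(recency_bonus, 5)
```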
- Sentiment Analysis
- Automated evaluation of public perception through natural language processing of social media posts, news articles, developer forums, and user reviews. Example: Tracking Reddit comments, Twitter mentions, and HackerNews discussions about a company.
- Event Quality
- Assessment of event substance and impact. Product launches with demos score higher than vague announcements. Research papers with code score higher than press releases. Example: Releasing an open-source model scores higher than announcing "AI research partnership".
- Community-Driven Data
- Information sourced from third-party platforms like GitHub, Reddit, arXiv, and developer forums rather than company-controlled channels. Example: GitHub stars, Reddit upvotes, and arXiv citations vs company blog posts.
- Real-Time Updates
- Leaderboard rankings refresh every 5 minutes with latest data from tracked sources, providing near-instantaneous reflection of AI activity. Example: A major product launch can change rankings within 5 minutes of announcement.
- Trend
- Direction of Hype Gap change over time: Increasing (gap growing), Decreasing (gap shrinking), or Stable (minimal change). Indicates whether marketing and reality are converging or diverging. Example: An "Increasing" trend means hype is outpacing reality.
- Confidence Score
- A 0-100 metric indicating reliability of the Hype Gap calculation based on event count and data quality. Higher confidence requires more events analyzed. Example: 95% confidence means the Hype Gap is calculated from 10+ high-quality events.
- Events Analyzed
- The number of events included in the hype vs reality calculation. A minimum of 3 events is required for Hype Gap analysis. Example: 15 events analyzed provides a more reliable Hype Gap than 3 events.
- Most Overhyped
- Companies with the highest positive Hype Gaps, indicating significant disconnect between marketing claims and actual deliverables. Example: Companies in this list have Hype Gaps of +20 or higher.
- Most Honest
- Companies with low or negative Hype Gaps, indicating marketing aligns with or understates actual capabilities. Example: A Hype Gap of -5 means reality exceeds hype by 5 points.
- API Access
- Programmatic access to leaderboard data, company rankings, event feeds, and hype vs reality metrics through REST endpoints. Example: GET /api/leaderboard returns current rankings with scores and BS gaps.
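A minimal sketch of fetching the leaderboard endpoint with Python's requests library; the base URL is a placeholder and the response field names are assumptions, since only the endpoint path is documented here:

```python
import requests

BASE_URL = "https://sectorhq.example"  # placeholder; substitute the real host

# GET /api/leaderboard returns current rankings with scores and BS gaps.
resp = requests.get(f"{BASE_URL}/api/leaderboard", timeout=10)
resp.raise_for_status()

for entry in resp.json():
    # Field names are assumptions; check the actual API response shape.
    print(entry.get("rank"), entry.get("company"), entry.get("overall_score"))
```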
- Event Types
- Categories of trackable activities: Product Launch, Research Paper, GitHub Activity, Partnership, Funding, Hiring, Media Mention, Technical Achievement. Example: A research paper on arXiv is categorized as "Research Paper" event type.
- Data Sources
- Platforms monitored for AI activity (71 active sources): Reddit (r/MachineLearning, r/artificial, r/singularity, r/LocalLLaMA, r/OpenAI - 60 events in last 7 days), RSS Feeds (45 sources), Playwright RSS (9 sources), GitHub API (8 sources), Web Scrapers (6 sources), arXiv, company blogs, tech news. Example: We track r/MachineLearning and r/artificial for community sentiment, with 60 Reddit events collected in the last 7 days.
- Update Frequency
- How often different metrics refresh: Rankings (every 5 minutes, 300s cache), Hype Gap (every 5 minutes, 300s cache), Event detection (varies by source: RSS ~15min, Reddit ~30min), Sentiment (per-event, instant), Frontend cache (30s leaderboard, 60s company pages). Example: New events are detected within 15-30 minutes, then affect rankings within the next 5-minute update cycle. Total latency up to ~20 minutes worst case.
- Methodology
- 3-factor weighted scoring model: Quality (40%) evaluates content quality and technical substance, Sentiment (30%) tracks community perception, Urgency (30%) measures event importance. Plus bonuses: Frequency (up to +10), Recency (up to +5). Formula: Score = Σ[(Quality × 40% + Sentiment × 30% + Urgency × 30%) × Time Decay] + Bonuses. Events decay 10% per week. Example: A company with high quality events (90/100), strong sentiment (85/100), and high urgency (80/100), plus 15 events (frequency bonus +1.5) and recent activity (recency bonus +3) scores: [(90×0.40)+(85×0.30)+(80×0.30)] × time_decay + 1.5 + 3 = 85.5 × time_decay + 4.5.
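The worked example can be checked with a few lines of arithmetic; the 1-week event age is an assumption, and the bonus values are taken directly from the example above:

```python
import math

# One event: quality 90, sentiment 85, urgency 80, assumed to be 1 week old.
base = 90 * 0.40 + 85 * 0.30 + 80 * 0.30   # 36 + 25.5 + 24 = 85.5
decay = math.exp(-0.1 * 1)                  # ~0.905 (10% decay per week)
score = base * decay + 1.5 + 3              # frequency +1.5, recency +3
print(round(score, 1))                      # ≈ 81.9
```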
- Leaderboard Categories
- Classification of companies: Overall Rankings, Most Overhyped, Best Sentiment, Highest Activity, Rising Stars, Falling Knives. Example: A company can rank #5 overall but #1 in "Most Overhyped".
- Company Profile
- Dedicated page for each company showing rank, scores, recent events, hype vs reality breakdown, trends, and historical data. Example: /company/openai displays OpenAI's complete AI adoption metrics.
- Comparison Tool
- Side-by-side analysis of two companies across metrics: ranks, scores, events, sentiment, BS gaps, with winner/loser indicators. Example: /compare/openai-vs-anthropic shows head-to-head AI adoption comparison.
- Rising Stars
- Companies with the largest positive rank changes over the past week/month, indicating growing AI momentum and adoption. Example: Moving from rank #45 to #12 in one week qualifies as a rising star.
- Falling Knives
- Companies with the largest rank drops, potentially indicating slowing AI activity or negative sentiment. Example: Dropping from #8 to #35 suggests decreased activity or community concerns.
- Activity Feed
- Real-time stream of AI events across all tracked companies, showing latest launches, papers, and announcements. Example: Live feed displays "OpenAI released GPT-5 API" seconds after detection.
- Event Filtering
- Automated removal of low-quality events: vague press releases, marketing fluff, reposted content, executive quotes without substance. Example: "CEO says AI is important" is filtered out; "CEO demos new AI model" is included.
- Verified Events
- Events confirmed through multiple sources or official channels before inclusion in scoring. Example: A rumored product launch isn't counted until official announcement or demo.
- Historical Trends
- Company performance over time showing rank changes, score evolution, and event frequency across weeks/months. Example: A graph showing a company going from 5 events/week to 25 events/week over 3 months.
- Normalized Scoring
- Adjusting raw metrics to a 0-100 scale for fair comparison across companies of different sizes and industries. Example: A startup with 5 high-quality events can score higher than a BigCo with 20 low-quality events.
- Recency Weighting
- More recent events carry higher weight in scoring than older events, emphasizing current activity over historical performance. Example: Events from the past 7 days are weighted 2x higher than events from 30 days ago.
- AI Buzz
- Volume of discussion and mentions about a company across social platforms, correlated with but separate from sentiment. Example: High buzz doesn't guarantee positive sentiment; controversial companies can have high buzz and low sentiment.
- Technical Depth
- Measure of how detailed and substantial technical information is in announcements, papers, and releases. Example: Sharing model architecture details and training methods shows higher technical depth than "we used AI".
- Developer Adoption
- Uptake of a company's AI tools by the developer community, measured through GitHub forks, npm downloads, API usage, and Stack Overflow questions. Example: TensorFlow's 180k+ GitHub stars indicate massive developer adoption.
- Research Impact
- Academic and industry influence of published research, tracked through citations, reproductions, and benchmarks. Example: A paper cited 500+ times in 6 months shows high research impact.
- Product Velocity
- Rate of new feature releases and product updates, indicating development speed and innovation pace. Example: Releasing major updates every 2-3 weeks shows high product velocity.
- Transparency Score
- How open a company is about AI capabilities, limitations, training data, and methodologies. Example: Publishing model cards, datasets, and technical reports increases transparency score.
- Benchmark Performance
- Scores on standardized AI tests and leaderboards like MMLU, HumanEval, ImageNet, or domain-specific evaluations. Example: Achieving 95.3% on the MMLU benchmark is tracked as an objective performance metric.
- Community Sentiment
- Sentiment specific to developer and researcher communities, as opposed to general public sentiment; often more critical and technical. Example: A product might have 90% public sentiment but only 60% developer sentiment due to API limitations.
- Marketing Spin Detection
- Automated identification of exaggerated claims, vague language, and buzzword overuse in company communications. Example: Detecting phrases like "revolutionary," "game-changing," "unprecedented" without supporting evidence.
- Substance Verification
- Cross-referencing company claims with third-party sources, demos, code, and independent testing. Example: Verifying "fastest inference" claim against public benchmarks and community testing.
- Gap Emoji
- Visual indicator of Hype Gap severity: 🔥 for honest, ⚠️ for moderate, 🤡 for high hype, 💩 for extreme overhype. Example: A +28 Hype Gap displays the 💩 emoji for extreme overhype.