Sector HQ Glossary
49+ terms and definitions for understanding AI adoption rankings, hype vs reality analysis, and BS detection.
BS Gap (Bullshit Gap)
The difference between a company's hype score and reality score. A positive BS gap indicates marketing claims exceed actual substance, while a negative gap shows the company under-promises and over-delivers.
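Conceptually the gap is a single subtraction. A minimal sketch, with illustrative names (Sector HQ's internal code is not published):

```python
# Minimal sketch of the BS gap as defined above; names are illustrative.
def bs_gap(hype_score: float, reality_score: float) -> float:
    """Positive: marketing outruns substance. Negative: under-promise, over-deliver."""
    return hype_score - reality_score

print(bs_gap(hype_score=82, reality_score=55))  # 27  -> overhyped
print(bs_gap(hype_score=48, reality_score=61))  # -13 -> over-delivers
```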
Hype Score
A 0-100 metric measuring the volume and intensity of an AI company's marketing claims and self-promotion. Calculated by analyzing company blog posts, press releases, and executive statements for buzzwords and unsubstantiated claims.
Reality Score
A 0-100 metric measuring actual substance and deliverables from community and third-party sources. Tracks GitHub activity, research papers, product launches with demos, developer discussions, and verified user feedback.
AI Adoption
The measurable integration and deployment of artificial intelligence technologies within a company, tracked through events, product launches, research output, and community activity.
Sentiment Score
A 0-100 metric analyzing public perception and market sentiment through social media, news coverage, developer feedback, and community discussions.
AI Events
Trackable activities including product launches (actual demos), research papers (arXiv, journals), GitHub activity (real code), partnerships (with substance), funding rounds (validated), and hiring/layoffs (market signals).
7-Day Event Count
The number of significant AI-related events for a company in the past 7 days. Used to measure recent activity and momentum.
Quality Score
A metric evaluating the technical quality and reliability of AI products through model performance benchmarks, API reliability, documentation quality, user satisfaction, and bug resolution time.
Gap Classification
Categories for BS gap ranges: Extreme Overhype (above +25), High BS (+20 to +25), Significant Spin (+15 to +20), Moderate Gap (+10 to +15), Slight Exaggeration (+5 to +10), Balanced (0 to +5), Honest (-5 to 0), Over-delivers (below -5).
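These bands translate directly into a lookup. A sketch assuming half-open band edges (the exact edge handling is not specified above):

```python
# Maps a BS gap value to its classification band; strict-vs-inclusive
# threshold handling is an assumption.
def classify_gap(gap: float) -> str:
    if gap > 25:   return "Extreme Overhype"
    if gap > 20:   return "High BS"
    if gap > 15:   return "Significant Spin"
    if gap > 10:   return "Moderate Gap"
    if gap > 5:    return "Slight Exaggeration"
    if gap >= 0:   return "Balanced"
    if gap >= -5:  return "Honest"
    return "Over-delivers"

print(classify_gap(27))   # Extreme Overhype
print(classify_gap(-8))   # Over-delivers
```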
Hype Signals
Indicators of marketing spin including buzzword usage (revolutionary, groundbreaking), vague announcements, executive quotes without substance, repackaged old news, and PR without product.
Reality Signals
Indicators of actual substance including GitHub commits, arXiv papers, product demos, developer adoption, third-party benchmarks, and verified user testimonials.
Rank
A company's position on Sector HQ, calculated from weighted scores of events (40%), sentiment (35%), and quality (25%). Updated every 5 minutes.
Overall Score
Composite 0-100 metric combining event activity, sentiment, and quality. Formula: (event score × 0.40) + (sentiment × 0.35) + (quality × 0.25), where the event score is 7-day event activity normalized to 0-100 (see Normalized Scoring).
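As code, the formula is a straight weighted sum. A sketch assuming all three inputs are already on a 0-100 scale:

```python
# The weighted composite from the formula above; all inputs assumed 0-100.
def overall_score(event_score: float, sentiment: float, quality: float) -> float:
    return event_score * 0.40 + sentiment * 0.35 + quality * 0.25

print(overall_score(event_score=70, sentiment=80, quality=60))  # 71.0
```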
Sentiment Analysis
Automated evaluation of public perception through natural language processing of social media posts, news articles, developer forums, and user reviews.
Event Quality
Assessment of event substance and impact. Product launches with demos score higher than vague announcements. Research papers with code score higher than press releases.
Community-Driven Data
Information sourced from third-party platforms like GitHub, Reddit, arXiv, and developer forums rather than company-controlled channels.
Real-Time Updates
Leaderboard rankings refresh every 5 minutes with the latest data from tracked sources, providing a near-real-time view of AI activity.
Trend
Direction of BS gap change over time: Increasing (gap growing), Decreasing (gap shrinking), or Stable (minimal change). Indicates whether marketing and reality are converging or diverging.
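One plausible way to derive the trend label from two gap snapshots; the ±1-point tolerance for "Stable" is an assumption:

```python
# Trend label from two BS gap snapshots; the stability tolerance is assumed.
def gap_trend(gap_now: float, gap_before: float, tol: float = 1.0) -> str:
    delta = gap_now - gap_before
    if delta > tol:
        return "Increasing"   # marketing and reality diverging
    if delta < -tol:
        return "Decreasing"   # converging
    return "Stable"
```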
Confidence Score
A 0-100 metric indicating reliability of BS gap calculation based on event count and data quality. Higher confidence requires more events analyzed.
Events Analyzed
Number of events included in hype vs reality calculation. Minimum 3 events required for BS gap analysis.
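Sector HQ does not publish its confidence formula; the sketch below only illustrates the stated minimum-3-events rule, with a made-up saturating curve:

```python
# Hypothetical confidence curve: grows with events analyzed, capped at 100.
# Only the 3-event minimum is stated above; everything else is assumed.
def confidence(events_analyzed: int, saturation: int = 50) -> float | None:
    if events_analyzed < 3:
        return None  # too few events for a BS gap analysis
    return min(100.0, 100.0 * events_analyzed / saturation)

print(confidence(2))    # None
print(confidence(25))   # 50.0
```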
Most Overhyped
Companies with the highest positive BS gaps, indicating significant disconnect between marketing claims and actual deliverables.
Most Honest
Companies with low or negative BS gaps, indicating marketing aligns with or understates actual capabilities.
API Access
Programmatic access to leaderboard data, company rankings, event feeds, and hype vs reality metrics through REST endpoints.
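A hedged sketch of what consuming such an API might look like; the host, route, query parameters, and response fields below are assumptions, not documented Sector HQ endpoints:

```python
import requests  # third-party: pip install requests

# Placeholder URL and fields; substitute the real documented endpoint.
resp = requests.get(
    "https://api.example.com/v1/companies",
    params={"sort": "rank", "limit": 10},
    timeout=10,
)
resp.raise_for_status()
for company in resp.json():
    print(company["name"], company["rank"], company["overall_score"])
```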
Event Types
Categories of trackable activities: Product Launch, Research Paper, GitHub Activity, Partnership, Funding, Hiring, Media Mention, Technical Achievement.
Data Sources
Platforms monitored for AI activity: GitHub, arXiv, Reddit, HackerNews, company blogs, tech news outlets, Twitter, LinkedIn, academic journals.
Update Frequency
How often different metrics refresh: Rankings (every 5 minutes), Event detection (real-time), Sentiment (hourly), BS gap (daily).
Methodology
Systematic approach to scoring: Event weighting based on quality, Sentiment from multiple sources, Quality from benchmarks and reviews, Transparency in calculations.
Leaderboard Categories
Classification of companies: Overall Rankings, Most Overhyped, Best Sentiment, Highest Activity, Rising Stars, Falling Knives.
Company Profile
Dedicated page for each company showing rank, scores, recent events, hype vs reality breakdown, trends, and historical data.
Comparison Tool
Side-by-side analysis of two companies across metrics: ranks, scores, events, sentiment, BS gaps, with winner/loser indicators.
Rising Stars
Companies with largest positive rank changes over past week/month, indicating growing AI momentum and adoption.
Falling Knives
Companies with largest negative rank drops, potentially indicating slowing AI activity or negative sentiment.
Activity Feed
Real-time stream of AI events across all tracked companies, showing latest launches, papers, and announcements.
Event Filtering
Automated removal of low-quality events: vague press releases, marketing fluff, reposted content, executive quotes without substance.
Verified Events
Events confirmed through multiple sources or official channels before inclusion in scoring.
Historical Trends
Company performance over time showing rank changes, score evolution, and event frequency across weeks/months.
Normalized Scoring
Adjusting raw metrics to 0-100 scale for fair comparison across companies of different sizes and industries.
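Min-max scaling is one common way to do this; the sketch below assumes that method (the site's exact normalization is not specified):

```python
# Min-max normalization of a raw metric onto a 0-100 scale (assumed method).
def normalize(value: float, lo: float, hi: float) -> float:
    if hi == lo:
        return 0.0  # degenerate range: no spread to normalize over
    return 100.0 * (value - lo) / (hi - lo)

raw_event_counts = [2, 9, 27, 41]
lo, hi = min(raw_event_counts), max(raw_event_counts)
print([round(normalize(v, lo, hi), 1) for v in raw_event_counts])
# [0.0, 17.9, 64.1, 100.0]
```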
Recency Weighting
More recent events carry higher weight in scoring than older events, emphasizing current activity over historical performance.
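Exponential decay is a standard way to implement this; the 7-day half-life below is an illustrative assumption:

```python
# Recency weight with exponential decay: weight halves every half-life.
def recency_weight(age_days: float, half_life_days: float = 7.0) -> float:
    return 0.5 ** (age_days / half_life_days)

print(recency_weight(0))    # 1.0   (today's event at full weight)
print(recency_weight(7))    # 0.5
print(recency_weight(14))   # 0.25
```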
AI Buzz
Volume of discussion and mentions about a company across social platforms, correlated with but separate from sentiment.
Technical Depth
Measure of how detailed and substantial technical information is in announcements, papers, and releases.
Developer Adoption
Uptake of company's AI tools by developer community, measured through GitHub forks, npm downloads, API usage, and Stack Overflow questions.
Research Impact
Academic and industry influence of published research, tracked through citations, reproductions, and benchmarks.
Product Velocity
Rate of new feature releases and product updates, indicating development speed and innovation pace.
Transparency Score
How open a company is about AI capabilities, limitations, training data, and methodologies.
Benchmark Performance
Scores on standardized AI tests and leaderboards like MMLU, HumanEval, ImageNet, or domain-specific evaluations.
Community Sentiment
Sentiment measured specifically within developer and researcher communities, as opposed to the general public; typically more critical and more technical.
Marketing Spin Detection
Automated identification of exaggerated claims, vague language, and buzzword overuse in company communications.
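A toy version of one such signal, buzzword density; the word list and the simple ratio are illustrative assumptions, not the production detector:

```python
import re

# Assumed buzzword list; a real detector would be far broader.
BUZZWORDS = {"revolutionary", "groundbreaking", "game-changing", "paradigm"}

def buzzword_density(text: str) -> float:
    """Fraction of words that are buzzwords (0.0 to 1.0)."""
    words = re.findall(r"[a-z\-]+", text.lower())
    return sum(w in BUZZWORDS for w in words) / len(words) if words else 0.0

print(buzzword_density("Our revolutionary, groundbreaking platform"))  # 0.5
```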
Substance Verification
Cross-referencing company claims with third-party sources, demos, code, and independent testing.
Gap Emoji
Visual indicator of BS gap severity: 🔥 for honest, ⚠️ for moderate, 🤡 for high BS, 💩 for extreme overhype.
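The emoji mapping as a function; the exact cutoffs are assumptions made consistent with the Gap Classification bands, since the entry above does not pin them down:

```python
# Emoji per the mapping above; cutoffs assumed from "Gap Classification".
def gap_emoji(gap: float) -> str:
    if gap > 25:
        return "💩"  # extreme overhype
    if gap > 20:
        return "🤡"  # high BS
    if gap > 5:
        return "⚠️"  # moderate
    return "🔥"      # honest / balanced
```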