Avoiding Placebo Tech: How to Spot Overpromised Energy Products Before You Buy
Avoid costly 'placebo tech' in energy procurement: demand independent testing, define M&V and lock in acceptance criteria before you buy.
Stop Buying Placebo Tech: A Practical Guide for Procurement Teams and SMEs
High energy bills, opaque vendor claims and the pressure to decarbonise make any promising new product look irresistible. But not every claim survives scrutiny. In January 2026, critics flagged a 3D‑scanned insole as a clear example of “placebo tech” — a product that delivers no measurable benefit beyond the belief that it will work. That story is a useful warning for anyone evaluating new solar, battery, generator or IoT devices: if you accept bold claims without independent testing and clear acceptance criteria, you may pay for placebo results.
The short answer: how to avoid placebo tech
Demand evidence, design pilots with control groups and blinded measurements where possible, require independent testing (laboratory and field), and contractually lock in clear acceptance criteria and M&V steps before you pay the balance. Below is a field‑tested playbook that procurement teams and small business operators can apply immediately.
Why the 3D‑insole story matters for energy procurement
On Jan 16, 2026, The Verge published a critique of a 3D‑scanned custom insole product whose claimed benefits rested largely on subjective user feedback. The piece highlighted how design, marketing and plausible‑sounding science can create perceived benefits even when objective measures show none.
“This 3D‑scanned insole is another example of placebo tech.” — The Verge, Jan 16, 2026
Energy suppliers and vendors can deploy the same tactics: clever dashboards, selective data visualisations, or short pilot periods that highlight a lucky week of good weather. For businesses, the consequences are real: wasted capital expenditure, disrupted operations, and missed carbon and cost savings.
2026 trends that increase placebo risk — and how to respond
- Explosion of IoT claims: In 2025–26 thousands of energy IoT devices flooded the market claiming AI optimisation, demand response and predictive maintenance. Many rely on heuristics that aren’t validated in commercial settings. Response: insist on field M&V and cybersecurity attestations.
- Shorter pilots to sell faster: Vendors often present 2–4 week pilots — too short to adjust for weather and operational noise. Response: set minimum pilot durations tied to product type (see guidance below).
- More marketing, less data transparency: Visual dashboards and selective KPIs can hide poor baseline comparisons. Response: demand raw data export, meter-level granularity and third‑party audits.
- Regulator and funder scrutiny: By late 2025, UK grant programmes and corporate energy buyers were increasingly requiring independent M&V. Use that leverage in procurement.
Core concepts: what to require before you trial a product
- Independent testing — lab and third‑party field testing from reputable bodies (BSI, TÜV, Intertek, UL or an accredited university lab). See field reviews like the microinverters field tests for the sort of lab+field approach that reveals real-world performance.
- Measurement & Verification (M&V) — a plan aligned with the International Performance Measurement and Verification Protocol (IPMVP), or equivalent, describing meters, baseline methodology, adjustment rules and statistical confidence intervals.
- Pre‑defined acceptance criteria — quantitative thresholds for success and clear remedies if the pilot fails (refund, extended trial, fixed price); see the sketch after this list for one way to make these machine‑checkable.
- Pilot design with controls — A/B or randomized matched sites where possible, and operational blinds when subjective outcomes are claimed.
- Data integrity and security — signed attestations for raw data access, chain of custody and device firmware controls. If your deployment includes edge telemetry, follow best practices from edge backend playbooks and observability patterns.
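One way to keep pre‑defined acceptance criteria honest is to encode them as machine‑checkable thresholds that both parties sign off before the pilot starts. The sketch below is illustrative only (the metric names and numbers are assumptions, not a standard), but it shows how pilot results can be scored against pre‑agreed limits rather than a vendor dashboard.

```python
# Minimal sketch: pre-agreed acceptance criteria evaluated against pilot results.
# Metric names and thresholds are illustrative assumptions, not a standard.
from dataclasses import dataclass

@dataclass
class Criterion:
    metric: str        # name of the measured KPI
    minimum: float     # lowest acceptable value
    unit: str

CRITERIA = [
    Criterion("performance_ratio", 0.75, "fraction"),
    Criterion("inverter_availability", 0.995, "fraction"),
    Criterion("data_completeness", 0.99, "fraction"),
]

def evaluate(results: dict) -> list:
    """Return (metric, passed, measured, required) for every criterion."""
    report = []
    for c in CRITERIA:
        measured = results.get(c.metric)
        passed = measured is not None and measured >= c.minimum
        report.append((c.metric, passed, measured, c.minimum))
    return report

# Example pilot results (hypothetical numbers)
pilot = {"performance_ratio": 0.78, "inverter_availability": 0.991, "data_completeness": 0.997}
for metric, passed, measured, required in evaluate(pilot):
    print(f"{metric}: measured={measured} required>={required} -> {'PASS' if passed else 'FAIL'}")
```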
Practical due diligence checklist for vendors
Use this checklist in RFPs and vendor interviews to expose placebo products quickly.
- Can you provide independent lab test reports for the claimed performance metrics? (If yes, request the full test protocol.)
- Do you consent to an independent field M&V plan with a third‑party M&V provider? (If no, red flag.) Consider asking vendors to fund a neutral provider as seen in smart‑device field playbooks like the smart plug field playbook.
- Will you provide raw, timestamped meter data in an open format (CSV/JSON) and grant audit access to a third party? Raw exports are the difference between marketing KPIs and verifiable results — see observability guidance in Cloud‑Native Observability and the quick validation sketch after this checklist.
- What baseline methodology do you propose? Provide historical data and adjustment rules for weather, occupancy and operations.
- List all algorithms used to generate outputs. Will you agree to interim algorithm audits and version control checks?
- What warranties, performance guarantees and remedies do you offer if KPIs are not met?
- Do you perform firmware signing, secure boot and encryption for IoT devices? Provide a cybersecurity assessment.
- Provide references for at least two independent customers with similar load profiles and operational constraints.
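A raw data clause is only useful if someone actually inspects what comes back. The sketch below, which assumes a hypothetical CSV export with timestamp and kwh columns at 15‑minute resolution, shows the kind of quick completeness and duplicate check an analyst can run before accepting vendor‑reported KPIs.

```python
# Quick integrity check on a raw 15-minute interval meter export.
# Assumes a hypothetical CSV with 'timestamp' and 'kwh' columns; adapt to the agreed format.
import pandas as pd

def check_export(path: str, freq: str = "15min") -> dict:
    df = pd.read_csv(path, parse_dates=["timestamp"])
    df = df.sort_values("timestamp")

    duplicates = int(df["timestamp"].duplicated().sum())
    start, end = df["timestamp"].min(), df["timestamp"].max()
    expected = pd.date_range(start, end, freq=freq)
    missing = expected.difference(df["timestamp"])
    completeness = 1 - len(missing) / len(expected)

    return {
        "rows": len(df),
        "duplicates": duplicates,
        "missing_intervals": len(missing),
        "completeness": round(completeness, 4),   # compare against the >99% requirement
        "negative_readings": int((df["kwh"] < 0).sum()),
    }

# Example: print(check_export("store_meter_export.csv"))
```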
Pilot design: avoid short, noisy tests
Design pilots to separate signal from noise. A robust pilot answers three questions: does the product work as claimed, is the benefit persistent, and does it operate reliably within our environment?
Key pilot elements
- Objective: precise, measurable outcomes (e.g., kWh saved, demand reduction in kW, battery round‑trip efficiency, uptime percentage).
- Duration: minimum 3 months for most solar/energy optimisation pilots; 6–12 months for batteries and seasonal systems. Shorter pilots may be acceptable only with strong normalisation and a larger sample size.
- Controls: A/B testing (control site without the tech), or randomized rollouts. For IoT‑based occupant comfort claims, use blinded controls when feasible.
- Instrumentation: NMI/utility‑grade meters for energy, calibrated environmental sensors for weather/irradiance, and data loggers with tamper evidence.
- Sample size: single-site pilots are high risk. Prefer clusters (3–5 comparable sites) or portfolio pilots to reduce variance.
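Whether a single site or a small cluster can actually detect the claimed saving depends on how noisy daily consumption is. The rough check below uses the standard two‑sample normal approximation; the numbers are illustrative assumptions and no substitute for a statistician designing the pilot.

```python
# Rough sample-size check for an A/B pilot using the normal approximation.
# Numbers are illustrative assumptions; a proper design should involve a statistician.
from scipy.stats import norm

def days_per_group(baseline_kwh: float, claimed_saving: float,
                   daily_std: float, alpha: float = 0.10, power: float = 0.80) -> int:
    """Approximate days of data needed per group to detect the claimed saving."""
    delta = baseline_kwh * claimed_saving          # expected absolute daily saving
    z_alpha = norm.ppf(1 - alpha / 2)              # two-sided significance
    z_beta = norm.ppf(power)
    n = 2 * ((z_alpha + z_beta) * daily_std / delta) ** 2
    return int(round(n))

# Example: 500 kWh/day baseline, vendor claims a 10% saving, day-to-day std of 80 kWh
print(days_per_group(baseline_kwh=500, claimed_saving=0.10, daily_std=80))
```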
Measurement & Verification (M&V) essentials
M&V is your legal and technical guardrail. An M&V plan specifies how savings or improvements are measured and adjusted, ensuring unbiased results. In practice:
- Use IPMVP principles (Options A–D) to select the right approach for your project.
- Spell out the baseline period, normalization rules (weather, occupancy) and statistical confidence levels (e.g., 90% CI).
- Agree on data sampling frequency (e.g., 15‑minute intervals for solar and battery controllers, 1‑minute for fast‑acting UPS events).
- Include an independent M&V provider in the contract or require vendor funding of a neutral M&V third party. See approaches used in device playbooks and field programmes such as the smart‑plugs powering microgrids studies.
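For context, a common Option C‑style calculation regresses baseline consumption against a weather driver such as heating degree days, then compares the weather‑adjusted prediction with measured use in the reporting period. The sketch below shows only that core calculation, assuming you supply daily kWh and degree‑day series; it is not a complete M&V plan.

```python
# Minimal IPMVP Option C-style sketch: weather-normalised baseline and estimated saving.
# Input arrays are assumed daily kWh and heating degree days; not a full M&V plan.
import numpy as np

def estimated_saving(base_kwh, base_hdd, report_kwh, report_hdd):
    base_kwh, base_hdd = np.asarray(base_kwh, float), np.asarray(base_hdd, float)
    report_kwh, report_hdd = np.asarray(report_kwh, float), np.asarray(report_hdd, float)

    # Fit baseline model: kWh = a + b * HDD (ordinary least squares)
    X = np.column_stack([np.ones_like(base_hdd), base_hdd])
    coef, *_ = np.linalg.lstsq(X, base_kwh, rcond=None)

    # Predict what the baseline would have used under reporting-period weather
    predicted = coef[0] + coef[1] * report_hdd
    daily_savings = predicted - report_kwh

    # Rough 90% confidence interval on the mean daily saving (normal approximation)
    mean = daily_savings.mean()
    half_width = 1.645 * daily_savings.std(ddof=1) / np.sqrt(len(daily_savings))
    return mean, (mean - half_width, mean + half_width)

# Example: mean_saving, ci90 = estimated_saving(base_kwh, base_hdd, report_kwh, report_hdd)
```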
Acceptance criteria: examples by product
Examples below are starting points. Tailor thresholds to your business context and risk tolerance, and always specify test protocols.
Solar PV
- Performance Ratio (PR) within ±5–10% of the vendor model after temperature and soiling adjustments.
- Energy yield within ±8% of baseline model over a 6‑month normalised period.
- Inverter availability >99.5% during the pilot; manufacturer must supply error logs and remote access for diagnostics. For inverter and panel performance examples, see independent field work such as the microinverters field review.
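Performance Ratio compares metered output with what the array's nameplate rating would have produced under the measured in‑plane irradiation. A minimal sketch of the headline calculation, deliberately ignoring the temperature and soiling corrections that the agreed vendor model should define:

```python
# Minimal Performance Ratio sketch for a solar PV pilot.
# Ignores temperature/soiling corrections; the agreed vendor model should define those.
def performance_ratio(measured_kwh: float, poa_irradiation_kwh_m2: float,
                      dc_rating_kwp: float, g_stc_kw_m2: float = 1.0) -> float:
    """PR = measured AC energy / (nameplate DC rating * in-plane irradiation / STC irradiance)."""
    reference_yield_kwh = dc_rating_kwp * poa_irradiation_kwh_m2 / g_stc_kw_m2
    return measured_kwh / reference_yield_kwh

# Example: 100 kWp array, 450 kWh/m2 of in-plane irradiation over the period, 36,500 kWh metered
pr = performance_ratio(measured_kwh=36_500, poa_irradiation_kwh_m2=450, dc_rating_kwp=100)
print(f"PR = {pr:.2f}")   # compare against the vendor model and the agreed tolerance band
```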
Batteries
- Round‑trip efficiency at or above the manufacturer's claim minus 5 percentage points, averaged across 10 full charge–discharge cycle tests under the specified state‑of‑charge ranges.
- Depth of discharge and capacity fade measured across calendar weeks; minimum delivered capacity within warranty band.
- Guaranteed cycle life demonstration or prorated remedial payments if capacity falls below agreed thresholds. For broader battery and device profiling techniques see work on device power profiles and optimization in 2026.
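Round‑trip efficiency is energy out divided by energy in across complete cycles, averaged over the agreed number of tests rather than quoted from a single favourable run. A minimal sketch, assuming you log metered charge and discharge energy for each test cycle (the figures below are hypothetical):

```python
# Round-trip efficiency across repeated cycle tests.
# Assumes per-cycle (kwh_in, kwh_out) pairs logged at the meter, under the agreed SoC window.
def round_trip_efficiency(cycles: list[tuple[float, float]]) -> dict:
    effs = [kwh_out / kwh_in for kwh_in, kwh_out in cycles]
    return {
        "per_cycle": [round(e, 3) for e in effs],
        "mean": round(sum(effs) / len(effs), 3),
        "worst": round(min(effs), 3),   # the worst single cycle is often the contractual test
    }

# Example: ten hypothetical cycles
cycles = [(105.0, 94.5), (104.2, 93.1), (106.1, 95.0), (105.5, 93.8), (104.8, 94.0),
          (105.9, 94.6), (104.4, 92.9), (105.2, 94.1), (106.0, 94.4), (104.9, 93.5)]
print(round_trip_efficiency(cycles))
```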
Generators & UPS
- Acceptable start time and load‑pickup within vendor specs during defined engine/UPS tests.
- Mean time between failures (MTBF) and availability targets (e.g., >99.9% for critical UPS) validated over the pilot.
- Fuel consumption and emissions tested to independent standards when efficiency claims are made.
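Availability and MTBF fall straight out of a timestamped outage log, which is a good reason to make the log itself, not a summary slide, a pilot deliverable. A minimal sketch using a hypothetical list of outage start and end times:

```python
# Availability and MTBF from a timestamped outage log during the pilot window.
# The outage list is a hypothetical example; the real log should come from the device/UPS itself.
from datetime import datetime

def availability_report(pilot_start: datetime, pilot_end: datetime,
                        outages: list[tuple[datetime, datetime]]) -> dict:
    pilot_hours = (pilot_end - pilot_start).total_seconds() / 3600
    downtime_hours = sum((end - start).total_seconds() / 3600 for start, end in outages)
    uptime_hours = pilot_hours - downtime_hours
    failures = len(outages)
    return {
        "availability_pct": round(100 * uptime_hours / pilot_hours, 3),  # compare vs >99.9% target
        "mtbf_hours": round(uptime_hours / failures, 1) if failures else float("inf"),
        "failures": failures,
    }

# Example: 90-day pilot with two short outages
report = availability_report(
    datetime(2026, 1, 1), datetime(2026, 4, 1),
    [(datetime(2026, 2, 3, 10, 0), datetime(2026, 2, 3, 10, 20)),
     (datetime(2026, 3, 12, 2, 0), datetime(2026, 3, 12, 2, 45))],
)
print(report)
```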
IoT energy platforms / optimisation
- Measured energy savings vs control group with statistical significance (p < 0.1 or pre‑agreed confidence level).
- Algorithm change control: no undisclosed model updates during the pilot without notification and revalidation. When algorithm provenance matters, review approaches in operational provenance discussions.
- Data completeness >99% and raw data export on demand. Raw, timestamped exports separate marketing dashboards from audit‑grade evidence — see cloud observability patterns in Cloud‑Native Observability.
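The headline question for an optimisation platform is whether treated sites used measurably less energy than matched controls over the same period. The sketch below applies a Welch t‑test to daily consumption; it is illustrative only, and the agreed M&V plan should fix the exact test and confidence level before the pilot starts.

```python
# Treated vs control comparison for an optimisation pilot: Welch t-test on daily kWh.
# Illustrative only; the agreed M&V plan should fix the test and confidence level up front.
import numpy as np
from scipy.stats import ttest_ind

def compare_sites(control_daily_kwh, treated_daily_kwh, alpha: float = 0.10) -> dict:
    control = np.asarray(control_daily_kwh, float)
    treated = np.asarray(treated_daily_kwh, float)
    stat, p_value = ttest_ind(treated, control, equal_var=False)  # Welch's t-test
    saving_pct = 100 * (control.mean() - treated.mean()) / control.mean()
    return {
        "estimated_saving_pct": round(saving_pct, 1),
        "p_value": round(float(p_value), 4),
        "significant": bool(p_value < alpha),   # against the pre-agreed threshold
    }

# Example usage with daily kWh arrays from matched control and treated stores:
# print(compare_sites(control_daily_kwh, treated_daily_kwh))
```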
Contract language to lock in accountability
Put critical commitments in the contract:
- Defined KPIs and test protocols: reference the M&V plan, data formats, and acceptance tests.
- Payment milestones tied to independent acceptance: hold back a portion of payment until third‑party M&V signs off. This is a common clause in device and installer playbooks such as the smart plug field playbook.
- Remedies and liquidated damages: clear refund, replacement, or service extension clauses when KPIs fail.
- Data ownership and audit rights: you must own pilot data and have the right to audit firmware, models and raw logs. Auditability is a feature stressed in edge and observability guidance like edge observability.
- Escrow of algorithms for critical claims: where claims rest on proprietary models, use a neutral escrow or audit mechanism.
Independent testing: labs, field trials and accreditation
Independent validation has two parts: controlled lab tests and real‑world field tests. Lab tests validate physical claims under repeatable conditions; field tests validate the vendor’s solution in your operations.
- Reliable labs and cert bodies: BSI, TÜV, Intertek, UL, and university labs with traceable instruments.
- Field provers: accredited M&V consultancies or ESCOs experienced in commercial portfolios.
- Accreditation: check ISO/IEC 17025 for lab competence and look for M&V providers that have worked under IPMVP or similar programmes.
Red flags that suggest placebo tech
- Vendor refuses independent M&V or raw data export.
- Performance claims are only demonstrated over very short or cherry‑picked time windows.
- Results rely exclusively on subjective surveys or “comfort scores” without instrumented validation.
- Dashboards show aggregated or smoothed KPIs without access to underlying timestamps.
- Claims that require an implausible mix of improvements simultaneously (e.g., immediate 50% reduction in energy with no operational changes).
Case study (anonymised): how a retail chain avoided a costly mistake
A UK retail chain was offered an AI‑driven HVAC optimisation box that promised 20% energy savings. The procurement team required:
- An independent baseline study across 10 stores.
- An A/B pilot across matched stores for 6 months, with an independent M&V provider contracted up front.
- Raw data export and signed cybersecurity attestations.
- Payment holdback until savings were validated with 90% confidence.
Result: the pilot showed only a 3–6% reduction under comparable operating conditions. The vendor offered a pro‑rated refund and an extended trial to try to reach the 20% claim, which the retailer declined. The chain saved capital and avoided rolling the device out across 150 stores.
Actionable takeaways — your step‑by‑step playbook
- At RFP stage, require independent lab reports and consent to third‑party M&V.
- Design pilots with a control group, minimum 3 months (6–12 months for batteries/seasons) and instrument to utility‑grade meters.
- Define acceptance criteria numerically and include them in the contract. Tie final payment to third‑party sign‑off.
- Insist on raw data access, algorithm change logs and cybersecurity attestations.
- If claims seem subjective, require blinded tests or independent user studies paired with instrumented measures.
- Prioritise vendors who provide open APIs, third‑party references and independent lab credentials.
Final note on risk, innovation and procurement strategy
Innovation requires pilots, but pilots should not be sales tools. Treat every novel product — especially IoT and AI‑driven energy solutions — as a hypothesis to be tested, not a delivered fact. Use the 3D‑insole story as a reminder: good marketing can create perceived benefit, but rigorous M&V, independent testing and ironclad acceptance criteria reveal the real value. If you’re unsure how to scope lab vs field validation, the microinverter field testing model is a useful reference.
Start now: template checklist to include in your next RFP
- Independent lab test report attached (date, scope, lab name).
- Third‑party M&V provider appointed or vendor agrees to fund a neutral M&V.
- Defined pilot duration and control group methodology.
- Data export requirements and audit rights (format, frequency, retention period).
- Payment milestones linked to independent acceptance criteria.
- Cybersecurity and firmware attestations included.
- Remedies on KPI failure (refunds, replacement, extended trial) clearly stated.
Call to action
If you’re specifying a pilot this quarter, we can help turn these principles into procurement documents, M&V scopes and contract clauses you can use immediately. Contact our team at powersuppliers.uk for tailored RFP templates, independent M&V vendor recommendations and a free pilot readiness checklist to protect your budget from placebo tech.
Related Reading
- Hands-On Review: Top Microinverters for Rooftop Systems (2026 Field Test)
- Field Playbook 2026: Safety, Certification and Resilient Power Practices for Smart Plug Installers and Retailers
- How Smart Plugs Are Powering Neighborhood Microgrids in 2026
- Cloud-Native Observability for Trading Firms: Protecting Your Edge (2026)
- Operationalizing Provenance: Designing Practical Trust Scores for Synthetic Images in 2026