Why I publish this
If you don't agree with my scoring, this page is where you can audit it. I want you to know exactly how I weight the dimensions, what I tested, and what I deliberately didn't score. Re-weight things if you want. Re-rank for your situation. The goal is that you have enough information to disagree with me on purpose rather than by accident.
It also keeps me honest. If I publish that I score on cost and setup time and daily usability and support, I can't quietly tilt the rankings toward whoever pays the best affiliate rate. The numbers have to fall out of those four dimensions.
What I actually tested
I provisioned a fresh account on each platform. Set up a test landing page running a real Google Ads campaign for one of my own rank-and-rent properties. Routed real inbound calls through every system: calls from a panel of test prospects (friends and family who agreed to call), plus calls generated by actual organic and paid traffic on properties I own. Then I scored each platform on the same task list.
The task list
Every platform had to do the following in front of me:
- Provision a US local tracking number (target: under 5 minutes after signup)
- Set up dynamic number insertion (DNI) on a test landing page (see the sketch after this list)
- Tag the source as a Google Ads campaign and verify the routing
- Route a real call and confirm it showed up in reporting tagged correctly
- Sync the call as a Google Ads conversion event
- Export a 7-day source-attribution report as CSV
- Configure a basic call-flow rule (route after-hours calls to voicemail)
- Verify HubSpot sync of inbound calls as activities
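If DNI is new to you, here is roughly what it does, as a minimal sketch. This is not any vendor's actual snippet; the tracking numbers, the `data-phone` attribute, and the UTM/gclid source matching are made up for illustration, and every platform's hosted script handles this its own way.

```typescript
// Minimal DNI sketch (illustrative only, not a vendor snippet).
// Idea: read the traffic source from the URL, then swap the displayed
// phone number for the tracking number assigned to that source.

// Hypothetical source-to-number map; real platforms pull this from a number pool.
const trackingNumbers: Record<string, string> = {
  "google-ads": "(555) 010-0001",
  "organic":    "(555) 010-0002",
};
const fallbackNumber = "(555) 010-0000"; // the business's real number

function resolveSource(): string {
  const params = new URLSearchParams(window.location.search);
  // Paid clicks usually carry a gclid or utm_source=google; everything else
  // is treated as organic here for simplicity.
  if (params.has("gclid") || params.get("utm_source") === "google") {
    return "google-ads";
  }
  return "organic";
}

function swapNumbers(): void {
  const number = trackingNumbers[resolveSource()] ?? fallbackNumber;
  // Replace every element marked as a phone-number placeholder.
  document.querySelectorAll<HTMLElement>("[data-phone]").forEach((el) => {
    el.textContent = number;
    if (el instanceof HTMLAnchorElement) {
      el.href = `tel:${number.replace(/\D/g, "")}`;
    }
  });
}

document.addEventListener("DOMContentLoaded", swapNumbers);
```

The vendor's real script also reports which number was shown, so the call that comes in on it can be attributed back to the source. That attribution is what the "verify the routing" and reporting tasks above are checking.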
The four dimensions
I score each platform on cost, setup time, daily usability, and support. Equal weighting. Each dimension on a 10-point scale. Total score is the average.
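Concretely, a platform's total is just a weighted average of the four dimension scores, with equal weights by default. Here's a small sketch (the example scores are made up, not any platform's real numbers) that also shows where you'd plug in your own weights if you want to re-rank:

```typescript
// Four dimensions scored out of 10, combined with explicit weights.
// Equal weights (0.25 each) reproduce my ranking; change them to re-rank.
type Scores = { cost: number; setupTime: number; dailyUsability: number; support: number };

const equalWeights: Scores = { cost: 0.25, setupTime: 0.25, dailyUsability: 0.25, support: 0.25 };

function total(scores: Scores, w: Scores = equalWeights): number {
  return (
    scores.cost * w.cost +
    scores.setupTime * w.setupTime +
    scores.dailyUsability * w.dailyUsability +
    scores.support * w.support
  );
}

// Made-up example scores:
console.log(total({ cost: 7, setupTime: 9, dailyUsability: 8, support: 6 })); // 7.5 out of 10
```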
Cost (25%)
Plan fees plus per-number rental at the volume I actually run. The published $50/month entry plan looks fine until you add 40 tracking numbers at $3 each; that's $50 + (40 × $3) = $170/month, more than triple the headline price. I score on what the realistic monthly bill looks like for a small agency or rank-and-rent operator. Per-number cost is the biggest factor here because it's the line item that compounds.
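If you want to run that arithmetic for your own volume, it's nothing more than this sketch; the $50 plan fee and $3 per-number rate are the example figures above, not any specific vendor's current pricing:

```typescript
// Realistic monthly bill: plan fee plus per-number rental at your real volume.
function monthlyBill(planFee: number, numberCount: number, perNumberRate: number): number {
  return planFee + numberCount * perNumberRate;
}

// The example above: a $50/month entry plan with 40 numbers at $3 each.
console.log(monthlyBill(50, 40, 3)); // 170 -- more than triple the headline price
```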
Setup time (25%)
Minutes from signup to a working setup with one tracking number, DNI live on a landing page, and a Google Ads conversion event firing. I time myself on each platform.
Daily usability (25%)
Can I log in, find what I need, and get out, or am I clicking through six menus to mark a call as a qualified lead? This dimension is the most subjective. I score it on what the day-to-day flow feels like for someone running multiple clients or properties without IT support.
Support (25%)
When something breaks at 6 PM on a Friday, how fast does someone pick up? I judge support based on response times, channel availability, and the few times I've actually had to use it on each platform.
What I didn't score
Conversation intelligence depth
I don't use deep conversation intelligence in my work. If you're a Fortune 1000 buyer who needs ML-driven call scoring, my list is wrong for you. Read an Invoca review instead.
Raw integration count
The major integrations are what matters (HubSpot, Salesforce, Google Ads, GA4, Zapier). Long-tail count is a vanity metric. I noted CallRail's deeper library where it's relevant.
Contact-center features
I don't run a contact center, so I can't credibly score these. CTM and Invoca have them; my list isn't built around them.
Vendor-supplied benchmarks
If a vendor sent me numbers, I didn't use them. I only counted what I could measure.
Refresh cadence
I re-test the platforms in this guide twice a year, and re-run the task list whenever a platform ships a meaningful release that would change my scoring. Pricing is checked monthly. The "Updated" date on each page reflects the most recent edit.
If you want to verify
The test campaigns, landing pages, and per-platform setup logs are available on request. Email me and I'll send what I can share. I'd rather you check my work than take it on faith.