"Ship analytics to your customers in weeks" is how embedded analytics vendors — including us — describe the buy path. It's true. It's also worth being specific about what those weeks actually look like, what your team owns, and where the realistic sticking points are.
This chapter maps a typical Yurbi integration from trial to production-ready. Your specific timeline will vary based on your stack, your data model, and how many tenants you need to support at launch — but this is what most ISV teams actually experience.
Before Week 1 — What You Need in Place
The integration timeline starts when your team is ready, not when you download the trial. Before week one, you need:
A clear data model. Which database holds the data your analytics will query? What are the key tables and relationships? If the answer to this is "it's complicated," that's fine — but you need to know the complication before the integration starts, not during it.
A defined first-use case. The fastest integrations start with one specific use case — a specific set of reports for a specific customer segment — and expand from there. Teams that start with "all the reporting we'll ever need" take longer and launch less successfully than teams that start with "these five dashboards for our top 20 customers."
A decision on your tenancy architecture. Shared DB, separate DB per tenant, or hybrid? This determines how the data source configuration works and should be resolved before the integration begins.
One developer with integration ownership. Not a committee. One person who knows the codebase, can work with the Yurbi API documentation, and has authority to make integration decisions without a two-week approval cycle.
Week 1 — Installation, Connection, and First Data
The first week is installation and data connectivity. Yurbi runs on Windows, Linux, or Docker — your developer downloads the trial, installs it in your dev environment, and connects it to your database. The installation itself is typically measured in hours, not days.
The meaningful work in week one is the data connection: defining the data sources, configuring the semantic layer to represent your data model in terms your reports will use, and confirming that queries return the right data. If your data model is straightforward, you'll have working queries against real data by the end of the first week.
The sticking point here, when it occurs, is usually data model complexity — many-to-many relationships, denormalized schemas, or data spread across multiple tables in ways that require semantic layer configuration to surface correctly in reports. This isn't Yurbi-specific; any analytics platform will require this work. It's just worth knowing that a complex data model extends week one.
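To make the many-to-many case concrete, here is a minimal sketch of the kind of flattening work the semantic layer does for you. This is a generic SQL illustration using hypothetical table names, not Yurbi's actual configuration syntax: a link table between customers and projects gets joined into a single reportable view, one row per customer/project pair.

```python
import sqlite3

# Hypothetical schema: customers and projects joined through a link table --
# the many-to-many shape that needs flattening before it can surface
# cleanly in a report.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE projects  (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE customer_projects (customer_id INTEGER, project_id INTEGER);

    INSERT INTO customers VALUES (1, 'Acme'), (2, 'Globex');
    INSERT INTO projects  VALUES (10, 'Rollout'), (11, 'Audit');
    INSERT INTO customer_projects VALUES (1, 10), (1, 11), (2, 10);

    -- One view per reportable grain: each row is a customer/project pair,
    -- so charts can group by either side without duplicate-count surprises.
    CREATE VIEW customer_project_report AS
    SELECT c.name AS customer, p.title AS project
    FROM customer_projects cp
    JOIN customers c ON c.id = cp.customer_id
    JOIN projects  p ON p.id = cp.project_id;
""")

rows = conn.execute(
    "SELECT customer, project FROM customer_project_report"
    " ORDER BY customer, project"
).fetchall()
print(rows)  # [('Acme', 'Audit'), ('Acme', 'Rollout'), ('Globex', 'Rollout')]
```

Whether this shaping happens in a database view or in the platform's semantic layer, it's the same decision: pick the grain each report needs before week one, and week one stays short.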
Week 2 — First Reports and the Embedding Layer
With data connectivity working, week two covers two parallel tracks: building the first set of reports and dashboards, and implementing the embedding integration in your application.
Report building. Yurbi includes a no-code report builder — your developer (or a technical non-developer) can build charts, tables, and dashboards without writing SQL. If your team wants to build reports in SQL directly, that's also supported. The first set of pre-built reports for your customers is typically ready within a few days of having a working data connection.
Embedding integration. This is the SSO handshake and iframe integration in your application — passing the authenticated user's identity to Yurbi so it knows which tenant to show and which permissions to apply. Yurbi's DoLogin API handles the token exchange; your developer adds a server-side call that generates a session token and passes it to the client. Most developers familiar with their stack complete this in a day or two. If your application uses a framework with documented Yurbi integration examples, it goes faster.
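The token flow above can be sketched in a few lines. This is an illustrative server-side sketch, not Yurbi's documented contract: the endpoint path, payload field names, and query parameters are assumptions to check against the Yurbi API documentation, and the host URL is a placeholder.

```python
import json
import urllib.request

# Placeholder for your self-hosted Yurbi server.
YURBI_HOST = "https://analytics.example.com"


def fetch_session_token(username: str, password: str, *, opener=urllib.request.urlopen) -> str:
    """Server-side call to the DoLogin endpoint to exchange credentials
    for a short-lived session token. Path and payload shape here are
    illustrative assumptions, not the real API schema."""
    req = urllib.request.Request(
        f"{YURBI_HOST}/api/DoLogin",
        data=json.dumps({"UserName": username, "Password": password}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with opener(req) as resp:
        return json.load(resp)["token"]


def embed_url(dashboard_id: str, token: str) -> str:
    """Build the iframe src your client renders. The client only ever
    sees the token, never the Yurbi credentials. Parameter names are
    hypothetical."""
    return f"{YURBI_HOST}/embed/{dashboard_id}?token={token}"


print(embed_url("revenue-overview", "abc123"))
# https://analytics.example.com/embed/revenue-overview?token=abc123
```

The key design point survives any difference in endpoint details: the credential exchange happens server-side, and the browser receives only a scoped, expiring token.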
| Week | What gets done | Who owns it | Typical sticking points |
|---|---|---|---|
| Pre-work | Define use case, confirm data model, assign integration owner | Your team | Undefined scope or data model ambiguity |
| Week 1 | Install Yurbi, connect to database, configure semantic layer, confirm working queries | Your developer + Yurbi support | Complex data models; multi-schema setups |
| Week 2 | Build first reports and dashboards; implement SSO embedding in your app | Your developer + report builder | SSO edge cases; custom auth systems |
| Week 3 | Configure branding, set up multi-tenant data source routing, test with dev tenants | Your developer | Multiple tenant data source configurations at scale |
| Week 4 | Internal QA, user permissions, scheduled exports, staging environment validation | Your team | Permission model complexity; export formatting |
| Week 5–6 | Beta with select customers, feedback, refinement, production deployment | Your team + Yurbi support | Customer-specific data edge cases |
Week 3 — Branding, Tenancy, and Permissions
Week three is when the multi-tenant configuration comes together. This includes configuring the branding policies for each tenant (logo, color scheme, report set), setting up dynamic data source routing if your architecture requires it, and configuring row-level security and user permissions.
For ISVs with a small number of tenants at launch, this can be completed in a few days. For teams launching with many tenants who each have their own database, this phase takes longer — the configuration work scales with tenant count, though Yurbi's API allows it to be automated as part of your tenant provisioning workflow rather than done manually for each one.
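A sketch of what that automation looks like, under stated assumptions: the config fields, naming scheme, and the idea of a per-tenant "provision" payload below are illustrative, not Yurbi's actual API schema. In production, each payload would be posted to the provisioning API during tenant onboarding instead of collected in a list.

```python
def tenant_datasource_config(tenant_id: str, db_host: str) -> dict:
    """One data source definition per tenant database
    (separate-DB-per-tenant layout; field names are hypothetical)."""
    return {
        "name": f"{tenant_id}-reporting",
        "host": db_host,
        "database": f"app_{tenant_id}",
        "branding": {"logo": f"/logos/{tenant_id}.png"},
    }


def provision_all(tenants: dict) -> list:
    """Generate a config per tenant, sorted for deterministic output."""
    return [
        tenant_datasource_config(tenant, host)
        for tenant, host in sorted(tenants.items())
    ]


configs = provision_all({"acme": "db1.internal", "globex": "db2.internal"})
print([c["database"] for c in configs])  # ['app_acme', 'app_globex']
```

The point of wiring this into your provisioning workflow is that tenant 200 costs the same engineering effort as tenant 2.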
Week 4 — QA, Permissions, and Staged Rollout Preparation
Week four is quality assurance: confirming that tenant isolation is working correctly, that users see only the data they should see, that scheduled exports run on the right schedule and deliver to the right addresses, and that the embedded experience looks correct across the browsers and devices your customers use.
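The tenant-isolation check is worth automating rather than eyeballing. A minimal sketch, with hypothetical stand-in data and a stub in place of a real report fetch: for each test user, run a representative report and assert that no returned row belongs to another tenant.

```python
# Hypothetical report output keyed by test user; in a real QA suite,
# run_report would fetch an embedded report as that user.
SAMPLE_REPORT_ROWS = {
    "alice@acme": [
        {"tenant": "acme", "revenue": 120},
        {"tenant": "acme", "revenue": 80},
    ],
    "bob@globex": [
        {"tenant": "globex", "revenue": 45},
    ],
}


def run_report(user: str) -> list:
    """Stand-in for fetching a report as the given user."""
    return SAMPLE_REPORT_ROWS[user]


def leaks(user: str, tenant: str) -> list:
    """Rows belonging to some other tenant -- must always be empty."""
    return [row for row in run_report(user) if row["tenant"] != tenant]


for user, tenant in [("alice@acme", "acme"), ("bob@globex", "globex")]:
    assert leaks(user, tenant) == [], f"tenant isolation broken for {user}"
print("tenant isolation checks passed")
```

Run this against staging with one user per tenant archetype, and a row-level-security misconfiguration shows up in week four instead of in a customer email.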
This is also the week to set up your production server environment and validate the configuration against production data volumes. Performance testing with realistic data sizes — not dev environment sample data — surfaces any caching or query optimization needs before customers do.
Weeks 5–6 — Beta and Production
Most teams do a beta rollout with a handful of existing customers before opening the feature to everyone. This serves two purposes: it catches customer-specific data edge cases (the one tenant with a schema variation you didn't account for), and it generates early feedback that informs what to build next.
With beta feedback incorporated, production deployment follows. For a straightforward deployment — one server, standard configuration — the production go-live is operationally simple. For larger multi-server deployments, Yurbi's Priority Support team works directly with your developer on the production configuration.
What "Weeks" Actually Means — Realistic Ranges
| Deployment scenario | Realistic timeline to production |
|---|---|
| Simple deployment — shared DB, small tenant count, pre-built reports only | 2–3 weeks |
| Standard ISV deployment — dynamic data sources, per-tenant branding, SSO, scheduled exports | 4–6 weeks |
| Complex deployment — many tenants with separate DBs, self-service report builder, custom permission model | 6–10 weeks |
| Enterprise deployment — multiple production servers, large user count, volume discount configuration | 8–12 weeks |
Compare these to the build timelines from Chapter 2: a custom-built reporting layer for a standard ISV deployment typically takes 8–14 weeks for the initial version alone — without multi-tenant routing, self-service report building, or a caching layer for performance.
What You Get That You Didn't Have to Build
At the end of the integration, your customers have access to features that would have taken your team months to build from scratch — and that your team never has to maintain:
- A no-code report builder for tenant admins.
- Scheduled report delivery with configurable frequency and format.
- Row-level and column-level security enforced at the platform layer.
- Per-tenant branding with your logo and color scheme.
- In-memory caching for fast query response at scale.
- 20+ chart types, including responsive mobile layouts.
- PDF and CSV export.
- A full API for programmatic report generation.
- Weekly platform updates — new features without engineering work on your side.
Your team owns the integration — the data connection, the SSO configuration, the tenant provisioning workflow. The platform owns everything else. That's the actual division of labor in a well-executed embedded analytics buy decision.
Ready to see what the integration actually looks like for your stack? Download the trial and have your developer start — or book a technical demo and we'll walk through your specific architecture.
Next Steps
If you've read this guide through, you have the framework to make the build vs. buy decision with real numbers and real criteria — not assumptions. Here's where to go from here depending on where you landed:
If you're leaning toward buying: Download the Yurbi trial. It includes a full sandboxed environment — connect your data, build a report, test the SSO integration. No consultants required, no sales process to get through. If you want a guided walkthrough of your specific use case, book a technical demo.
If you're still comparing costs: Use the build vs. buy calculator with your actual team numbers — engineers, time allocation, fully-loaded salary. The output is a specific cost comparison against Yurbi's published pricing for your user count.
If you're evaluating other platforms too: Use the evaluation checklist from Chapter 6. The questions apply to any vendor, not just us. We'd rather you make the right decision than the Yurbi decision — they're usually the same thing, but not always.
Stop rebuilding your reporting layer.
Embed Yurbi into your product and ship analytics to your customers in weeks — not quarters. Self-hosted, white-labeled, flat annual pricing.
Download Free Trial