
How a Product Manager Optimized 2,400 Games in 25 Minutes
FluxPlay is a tech-driven iGaming platform headquartered in Tel Aviv, built from the ground up by engineers who run A/B tests on everything from button placement to bonus mechanics. With roughly twenty-two thousand monthly active users, a multi-currency stack spanning USD, EUR, and BTC, and $6M per week in gross gaming revenue, the platform punches well above its MAU weight. Its catalog of 2,400 titles — split across crash games, provably fair slots, live casino, and table games — is simultaneously FluxPlay's biggest competitive asset and its most unmanaged cost center.
Products used: Game Performance Analytics, Player Preference Modeling, Catalog Intelligence
25 minutes | full portfolio review time
180 | underperforming titles flagged for removal
3 | high-potential catalog gaps identified for provider negotiation
Challenge
Every month, Yael Cohen is responsible for a portfolio review that is, in theory, straightforward: look at 2,400 games, figure out which ones are earning their slot and which ones are dead weight, and make decisions accordingly. In practice, this review had become one of the most dreaded items on her calendar.
The raw numbers were never the problem — FluxPlay had plenty of game-level data, fragmented across a provider reporting portal, an internal BI dashboard, and a spreadsheet that one of the data analysts maintained manually. The problem was synthesis. Tying GGR contribution to session behavior to repeat play rate to RTP performance across 2,400 titles required exporting three files, matching them on game IDs that used different naming conventions in each source, and then making judgment calls on assets that had no natural peer group to benchmark against. A single provider's catalog alone might have two hundred titles spanning four genres. Making a delisting decision on any one of them meant understanding how it fit into the full picture, not just how it ranked on a sortable spreadsheet.
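The reconciliation step described here (three exports joined on game IDs with mismatched naming conventions) can be sketched in a few lines of pandas. The column names and the normalization rule below are illustrative assumptions, not FluxPlay's actual schema:

```python
import pandas as pd

def normalize_game_id(raw: str) -> str:
    """Collapse naming-convention differences: case, spacing, punctuation."""
    return "".join(ch for ch in raw.lower() if ch.isalnum())

def merge_catalog_exports(provider: pd.DataFrame,
                          bi: pd.DataFrame,
                          analyst: pd.DataFrame) -> pd.DataFrame:
    """Join the three sources on a normalized game key so that
    'Crash-Master PRO', 'crash master pro', and 'CRASH_MASTER_PRO'
    all land on the same row."""
    for df in (provider, bi, analyst):
        df["game_key"] = df["game_id"].map(normalize_game_id)
    return (provider
            .merge(bi, on="game_key", suffixes=("", "_bi"))
            .merge(analyst, on="game_key", suffixes=("", "_an")))
```

Even a sketch like this only solves the mechanical join; the judgment calls on titles with no peer group still need the synthesis layer the rest of this story describes.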
The review had a real cost beyond Yael's time. Underperforming games consumed lobby real estate, slowed page load on mobile, and cluttered the discovery experience for players who were bouncing between thirty tabs. FluxPlay's A/B testing culture meant the team ran experiments constantly — but you can't A/B test your way to a lean catalog if nobody has the bandwidth to actually pull the dead weight.
"We had data on every game. What we didn't have was a view. When you're looking at 2,400 titles, the question isn't whether the data exists — it's whether you can see the shape of the problem in under an hour. Before Gaming Mind AI, the answer was no."
— Yael Cohen, Product Manager, FluxPlay
The month before Yael started using Gaming Mind AI, the review took her four and a half days and resulted in a list of twenty-two games to delist — a list she was not particularly confident in, because it had been built by eliminating anything with fewer than ten sessions in the last thirty days, a heuristic so blunt it almost certainly missed games with high ARPU and low volume.
Solution
Yael now opens Gaming Mind AI on the first Tuesday of every month and runs the full portfolio review as a single conversation. The AI has access to the complete catalog performance data — GGR, sessions, session duration, repeat play, RTP actuals versus theoretical — and understands FluxPlay's player segments well enough to distinguish a low-volume, high-value niche title from a genuinely dead one.
Here is what last month's session looked like:
Yael: "Give me the overall shape of our catalog. Which games are actually doing the work?"
| Catalog Tier | Title Count | Share of Catalog | GGR Share | Cumulative GGR |
|---|---|---|---|---|
| Top 5% | 120 | 5% | 61% | 61% |
| Mid tier (next 15%) | 360 | 15% | 28% | 89% |
| Long tail (bottom 80%) | 1,920 | 80% | 11% | 100% |
| Total catalog | 2,400 | 100% | 100% | — |
⚠️ Gaming Mind flags: The top 5% of titles (120 games) generate 61% of GGR. The bottom 80% (1,920 titles) split just 11% between them. This long tail is both a catalog efficiency problem and a player experience issue — for every player who finds a niche title they love in the bottom 80%, four others are bouncing off a stale discovery experience without finding anything worth staying for.
The distribution confirmed what Yael suspected but had never been able to quantify cleanly: the top 5% of the catalog — 120 titles — accounted for 61% of GGR. The next 15%, another 360 titles, contributed 28%. The remaining 1,920 titles split the final 11% between them. Gaming Mind flagged the long tail as both a catalog efficiency problem and a player experience issue — for every player who finds a niche title they love in that bottom 80%, four others are bouncing off a stale discovery experience without finding anything worth staying for.
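The tiering in the table above reduces to a cumulative-share computation over per-title GGR. A minimal sketch, assuming only a flat list of GGR figures:

```python
def tier_catalog(ggr_by_title):
    """Report the GGR share captured by the top 5%, the next 15%,
    and the bottom 80% of a catalog, ranked by GGR descending."""
    ranked = sorted(ggr_by_title, reverse=True)
    total = sum(ranked)
    n = len(ranked)
    cut_top, cut_mid = n * 5 // 100, n * 20 // 100
    top = sum(ranked[:cut_top])          # top 5% of titles
    mid = sum(ranked[cut_top:cut_mid])   # next 15%
    return {
        "top_5pct_share": top / total,
        "mid_15pct_share": mid / total,
        "long_tail_share": (total - top - mid) / total,
    }
```

On FluxPlay's numbers the three shares come out to 61%, 28%, and 11%; any catalog with a heavily skewed distribution will show the same shape.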
Yael: "Show me the top performers. Not just by GGR — I want to see engagement too."
| GGR Rank | Title | GGR Share | Sessions | Avg Session Length | Repeat Play Rate | Flag |
|---|---|---|---|---|---|---|
| 1 | Crash Master Pro | 8.4% | 41,200 | 6 min | 21% | 🟡 Low repeat — volume-driven |
| 2 | Slots Blitz | 7.1% | 38,700 | 3 min | 17% | 🔴 Low repeat — not loyalty-driving |
| 3 | Live Roulette VIP | 6.8% | 19,100 | 9 min | 42% | 🟢 Solid retention |
| 4 | Plinko Galaxy | 5.9% | 34,400 | 4 min | 16% | 🔴 Low repeat — volume-driven |
| 5 | Dragon Tiger | 5.2% | 22,800 | 7 min | 38% | 🟢 Good retention |
| ... | ... | ... | ... | ... | ... | |
| 14 | Aviator Classic | 3.1% | 12,600 | 11 min | 74% | 🟢 Loyalty anchor — #1 repeat |
| Catalog avg | — | — | — | 4 min | ~28% | — |
⚠️ Gaming Mind flags: Three of the top-ten GGR titles have repeat play rates below 18% — revenue is volume-driven, not loyalty-driven. Aviator Classic ranks 14th on GGR but 1st on repeat play at 74% with an 11-minute average session (vs 4-minute catalog average). Treat it as a loyalty anchor and use it as a reference point for future acquisition decisions.
The top GGR list and the top engagement list had only partial overlap, and Gaming Mind highlighted the divergence immediately. Three of the top-ten GGR titles had repeat play rates below 18%, signaling that players were trying them, not loving them, and moving on — the revenue was volume-driven, not loyalty-driven. By contrast, a crash game called Aviator Classic sat fourteenth on GGR but first on repeat play at 74%, with an average session length of eleven minutes versus a catalog average of four. Gaming Mind recommended treating it as a loyalty anchor and considering it as a reference point for future acquisition decisions.
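The divergence check is a comparison of two signals per title. A sketch using the thresholds from this session (the 18% repeat-play floor comes from the flag above; flagging loyalty anchors at double the roughly 28% catalog-average repeat rate is an illustrative assumption, not Gaming Mind's documented rule):

```python
def flag_engagement(titles, low_repeat=0.18, catalog_avg_repeat=0.28):
    """Partition titles into 'volume-driven' (top-10 GGR rank but repeat
    play below the loyalty floor) and 'loyalty anchors' (repeat play at
    least double the catalog average). Each title is a dict with
    'name', 'ggr_rank', and 'repeat' keys."""
    volume_driven = [t["name"] for t in titles
                     if t["ggr_rank"] <= 10 and t["repeat"] < low_repeat]
    anchors = [t["name"] for t in titles
               if t["repeat"] >= 2 * catalog_avg_repeat]
    return volume_driven, anchors
```

Run against the rows in the table, this separates Slots Blitz and Plinko Galaxy (volume-driven) from Aviator Classic (anchor) exactly as the flag describes.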
Yael: "What does the bottom 10% look like? I want to understand what we're dealing with before I make any delisting calls."
| Sub-group | Title Count | Sessions (90 days) | Distinct Players | GGR | Classification |
|---|---|---|---|---|---|
| Clean delisting candidates | 180 | < 15 per title | < 40 per title | ~0 (rounds to zero in weekly reports) | 🔴 Remove |
| Niche loyalty titles | 60 | Thin but non-zero | Small, returning cohort | Low but non-trivial | 🟡 Separate review |
| Bottom 10% total | 240 | — | — | — | — |
⚠️ Gaming Mind flags: The bottom 10% contains two distinct populations, not one. 180 titles have minimal sessions, minimal players, and near-zero GGR across 90 days — clean delisting candidates. 60 titles have thin GGR but non-trivial engagement, with a small cohort of players returning consistently and above-average session lengths. Do not bundle them — the 60 niche titles warrant a targeted player segment analysis before any delisting call.
The bottom 10% — 240 titles — contained two different types of underperformers, and Gaming Mind separated them rather than bundling them together. The first group, 180 titles, had fewer than fifteen sessions in ninety days, fewer than forty distinct players, and combined GGR so low it rounded to zero in most weekly reports. These were clean delisting candidates. The second group, 60 titles, had thin GGR but non-trivial engagement metrics — a handful of players returning consistently, session lengths above average. Gaming Mind flagged these as potential niche loyalty titles and recommended a separate review rather than automatic removal. Yael marked them for a targeted player segment analysis before making a call.
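The two-population split rests on three thresholds. A sketch of the rule as stated (under 15 sessions and under 40 distinct players over 90 days come from the table above; the near-zero GGR cutoff `ggr_eps` is an assumed value standing in for "rounds to zero in weekly reports"):

```python
def classify_tail_title(sessions_90d, players_90d, ggr_90d, ggr_eps=1.0):
    """Classify a bottom-10% title: 'delist' when sessions, distinct
    players, and GGR are all negligible over 90 days; otherwise
    'observe' pending a targeted player-segment analysis."""
    if sessions_90d < 15 and players_90d < 40 and ggr_90d < ggr_eps:
        return "delist"
    return "observe"
```

The point of the rule is the conjunction: a title must fail on volume, reach, and revenue simultaneously before it lands in the removal bucket, which is what protects the 60 niche titles.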
Yael: "Are the RTP actuals on our top titles tracking close to theoretical? I want to see if anything is off."
| Title | Provider | Theoretical RTP | Actual RTP (3-month avg) | Variance | Duration of Gap | Status |
|---|---|---|---|---|---|---|
| Top 48 titles | Various | Various | Tracks theoretical | Within ±0.5pp | — | 🟢 Normal |
| Slot Title A | Smaller provider | 96.5% | 94.2% | −2.3pp | 3 consecutive months | 🔴 Flag — possible feed misreport |
| Slot Title B | Different provider | 96.0% | 97.8% | +1.8pp | 3 consecutive months | 🟡 Flag — unusual positive variance |
⚠️ Gaming Mind flags: Two RTP anomalies detected across the top 50 titles. Slot Title A is running 230 basis points below theoretical — a gap that has persisted for three months and likely represents a provider data feed issue silently misreporting. Slot Title B running 180bp above theoretical may indicate an unusual outcome distribution or provider reporting error. Both should be raised on the next provider call. This is the first time RTP reconciliation has been run as part of the monthly review.
Across the top fifty titles, most were tracking within half a percentage point of theoretical RTP — exactly what you'd expect with sufficient volume. Two exceptions surfaced. A slot from one of FluxPlay's smaller providers was running 94.2% actual RTP against a theoretical 96.5%, a gap that had persisted for three consecutive months. A second title from a different provider was running 97.8% actual against 96.0% theoretical. Gaming Mind noted that persistent positive variance at that magnitude either indicates an unusual distribution of outcomes or a provider reporting issue worth raising directly. Yael added a line item to the upcoming provider call. This was not something she had ever checked systematically before — the RTP reconciliation had simply never made it onto anyone's review checklist.
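The reconciliation itself is a rolling variance check against theoretical RTP. A sketch, assuming monthly actual-RTP observations per title and the half-point tolerance and three-month persistence window used above:

```python
def rtp_flag(theoretical, monthly_actuals, tolerance_pp=0.5, persistence=3):
    """Flag a title whose actual RTP has sat more than tolerance_pp
    percentage points away from theoretical for `persistence`
    consecutive months. RTP values are percentages, e.g. 96.5."""
    recent = monthly_actuals[-persistence:]
    if len(recent) < persistence:
        return None  # not enough history to call persistence
    variances = [actual - theoretical for actual in recent]
    if all(v < -tolerance_pp for v in variances):
        return "below-theoretical"   # candidate feed misreport
    if all(v > tolerance_pp for v in variances):
        return "above-theoretical"   # unusual outcome distribution
    return None
```

Requiring every month in the window to breach the tolerance in the same direction is what separates a persistent gap, like Slot Title A's −2.3pp, from ordinary month-to-month variance.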
Yael: "Compare provider performance. I want to know which suppliers are carrying their weight."
| Provider | Titles in Catalog | GGR Contribution | GGR per Title (index) | Avg Session Duration | Repeat Play Rate | RTP Compliance | Catalog Access |
|---|---|---|---|---|---|---|---|
| Provider A | 380 | 31% | 0.82x avg | 3.8 min | 24% | 🟡 1 anomaly | 100% |
| Provider B | 290 | 24% | 0.83x avg | 4.1 min | 27% | 🟢 Clean | 100% |
| Provider C | 210 | 18% | 0.86x avg | 4.3 min | 26% | 🟢 Clean | 100% |
| Provider D | 120 | 11% | 0.92x avg | 4.6 min | 31% | 🟡 1 anomaly | 100% |
| Provider E | 48 | 9% | 1.88x avg | 5.9 min | 40% | 🟢 Cleanest record | 40% of library |
| Others | 1,352 | 7% | 0.52x avg | 3.4 min | 21% | Mixed | — |
⚠️ Gaming Mind flags: Provider E generates GGR per title at 1.88x the portfolio average — nearly double — with repeat play 9 percentage points above the next closest provider and the cleanest RTP compliance record. Critical gap: FluxPlay's current contract grants access to only 40% of Provider E's full library. This is the highest-leverage negotiation on the table — expanding access is the most direct path to portfolio performance improvement.
FluxPlay's top three providers by raw GGR were no surprise — they were the ones with the biggest catalogs and the most marketing support. The more interesting view was GGR per title and player engagement per title. Provider E, with forty-eight titles in the catalog, was generating GGR per title at nearly double the portfolio average (1.88x), with a repeat play rate across its catalog nine percentage points above the next closest provider. Provider E also had the cleanest RTP compliance record. Gaming Mind noted that Provider E's current contract gave FluxPlay access to only 40% of their full library and suggested this was the highest-leverage negotiation on the table.
Yael: "Where are the gaps in our catalog? What are players looking for that we're not delivering well?"
| Segment | MAU Share | GGR Share | Preferred Category | Current FluxPlay Supply | Estimated Saturation Point | Gap Status |
|---|---|---|---|---|---|---|
| High-volatility slots (bonus buy) | ~14% | ~18% | High-vol slots with bonus buy | 7 titles | 18–22 titles | 🔴 Underserved |
| Live casino (speed variants) | ~6% | ~11% | Speed baccarat, lightning roulette | 2 titles (same provider) | 8–10 titles | 🔴 Underserved |
| BTC-native / provably fair crash | ~8% | ~19% | Provably fair crash games | 6 titles (3 in bottom tier) | 10–12 active titles | 🔴 Underserved |
| Core recreational (slots) | ~52% | ~38% | Mainstream video slots | Well supplied | — | 🟢 Adequate |
| Table games enthusiasts | ~12% | ~9% | Blackjack, baccarat | Adequately supplied | — | 🟢 Adequate |
| Casual crash | ~8% | ~5% | Basic crash | Adequately supplied | — | 🟢 Adequate |
⚠️ Gaming Mind flags: Three high-priority catalog gaps identified. The BTC-native cohort contributes 19% of GGR from only 8% of MAU — the highest-ARPU segment — yet three of FluxPlay's six provably fair titles are in the low-session bottom tier. Two providers actively developing in this space have no current FluxPlay contract. Initiate negotiations immediately.
This was the analysis Yael had been waiting for. Gaming Mind's player preference model clustered FluxPlay's user base into behavioral segments based on what they actually played, how long, and what they tried next. Three gaps emerged clearly. First, players with a strong preference for high-volatility slots with bonus buy features — a segment representing roughly 14% of the active base — were cycling through the available titles quickly and exiting to competitors. FluxPlay had seven titles in this category; the preference model suggested the segment could sustain eighteen to twenty-two before saturation. Second, a smaller but high-ARPU cluster of live casino players was showing strong preference signals for speed baccarat and lightning roulette variants, categories where FluxPlay had only two titles, both from the same provider. Third, the growing BTC-native player cohort — around 8% of MAU but contributing 19% of GGR — had game consumption patterns that clustered heavily around provably fair crash games, but three of FluxPlay's six provably fair titles were in the low-session bottom tier. Gaming Mind identified two providers actively developing in this space with whom FluxPlay had no current contract.
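The gap call itself reduces to comparing current supply against the preference model's estimated saturation range. A sketch using the figures from the table (the saturation estimates come from the model, which this story does not specify; the segment keys are shorthand):

```python
SEGMENT_SUPPLY = {
    # segment: (current FluxPlay titles, estimated saturation range)
    "high-volatility bonus buy": (7, (18, 22)),
    "speed live casino": (2, (8, 10)),
    "provably fair crash": (6, (10, 12)),
}

def gap_status(current_titles, saturation_range):
    """'underserved' when supply sits below the low end of the
    model's estimated saturation range, else 'adequate'."""
    low, _high = saturation_range
    return "underserved" if current_titles < low else "adequate"
```

All three segments above come back underserved, which is exactly the shortlist Yael took into provider negotiations; the hard analytical work is producing the saturation estimates, not applying the threshold.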
Yael: "Give me the short version — what do I take into Thursday's product sync?"
| Decision | Action | Supporting Data | Timeline |
|---|---|---|---|
| Remove 180 dead-weight titles | Delist immediately | < 15 sessions, < 40 players, ~0 GGR across 90 days | This week |
| Expand Provider E library | Negotiate 40% → 75% catalog access | 1.88x GGR/title avg, +9pp repeat play, cleanest RTP record | Next QBR |
| Address RTP anomalies | Raise with Slot A and Slot B providers | 3-month persistent variance (−230bp and +180bp) | Next provider call |
| Open catalog negotiations | Contact 2 provably fair crash providers | BTC cohort = 19% GGR / 8% MAU; 3 of 6 titles in bottom tier | This month |
| Preserve 60 niche titles | 60-day observation period | Non-trivial repeat engagement; insufficient data for delisting | Review in 60 days |
⚠️ Gaming Mind flags: Three provider conversations should be initiated immediately — expanding Provider E's library access, resolving the two RTP anomalies, and opening negotiations with the two provably fair crash game providers currently absent from the catalog. The 180-title delistment carries no material GGR risk based on 90-day trailing data.
Gaming Mind produced a clean summary that Yael pasted directly into the product sync agenda. Remove 180 dead-weight titles immediately — catalog streamlining with no material GGR risk based on ninety-day trailing data. Initiate three provider conversations: expand Provider E's library access, address the two RTP anomalies with their respective suppliers, and open negotiations with the two provably fair crash game providers currently absent from the catalog. Keep the sixty low-GGR niche titles under observation for another sixty days before making delistment calls. The whole session, from first question to copy-pasted summary, had taken twenty-five minutes.
Results
180 underperformers queued for removal with confidence
In previous months, Yael had avoided aggressive delisting decisions because the blunt heuristics available to her created too much uncertainty. A game with twelve sessions in thirty days might be a dead title or a niche loyalty driver — without the engagement breakdown, the distinction wasn't visible. Gaming Mind separated the two populations cleanly. The 180 flagged for removal had both minimal session volume and zero meaningful engagement signal across ninety days, making the delisting call straightforward. The sixty titles in the observation group were preserved for the right reasons, not out of analytical paralysis.
Three provider negotiations initiated with data-backed positioning
Yael went into the Provider E call with specific data: the GGR per title differential, the catalog access gap versus competitors, and a player preference model showing demand headroom for at least fifteen additional titles in Provider E's wheelhouse. The conversation moved in twenty minutes from a routine check-in to a concrete discussion about expanding the catalog access from 40% to 75% of Provider E's library. The two RTP anomaly conversations were similarly focused — Yael arrived with ninety days of actuals versus theoretical, not a vague concern.
RTP compliance gap surfaced for the first time
The systematic RTP comparison had never been part of the monthly review before. The two anomalous titles would have continued running indefinitely without anyone noticing the divergence. One of the gaps, the title running 230 basis points below theoretical, turned out to be a provider data feed issue that had been silently misreporting for eleven weeks. It was corrected within a day of Yael raising it.
Catalog review time cut from four and a half days to 25 minutes
The most direct outcome was simply the time. Four and a half days of export-reconcile-analyze became a twenty-five-minute conversation, and the output was demonstrably more reliable — separating niche loyalty titles from dead weight, surfacing RTP anomalies, and producing a preference-driven gap analysis that no amount of spreadsheet work would have generated.
Catalog intelligence became a recurring strategic input
Before Gaming Mind AI, the monthly portfolio review was a defensive exercise — remove the obvious dead weight and move on. Now it feeds directly into provider strategy, acquisition decisions, and lobby merchandising. The preference model gaps are shared with the marketing team to inform which game categories to promote in acquisition creatives. The provider performance comparison is reviewed in every supplier QBR. The catalog has become a managed asset rather than an inherited one.
"The review used to be something I endured. Now it's one of the most useful sessions in my month. I walk out of it with three concrete negotiations and a delistment list I can actually defend. That shift happened because the analysis finally matched the complexity of the catalog."
— Yael Cohen, Product Manager, FluxPlay
Want to see how Gaming Mind AI can help your operation?
Get a Demo