Shortlisted Candidate — Communications Digital Officer · ETC2 · Req 35105
What Changes I Would Propose for CIF.org
3 minutes on user research findings.
2 minutes on Generative Engine Optimization.
Michael Douma
March 2026
Good afternoon. I’m Michael Douma. Thank you for shortlisting me. Here’s about five minutes on what I’d propose for cif.org. Ideally, I’d start by learning from your users, or from the staff who work with them. As a workaround, I reverse-engineered your users from the site itself and interviewed them hypothetically. I’ll jump into that, then circle back to generative engine optimization.
Part 1 — User Research · 3 minutes
I reverse-engineered 9 user personas for CIF.org
AI analysis of the site’s content architecture, tone, jargon, and navigation pathways surfaced nine distinct audiences — from bond investors to Indigenous community leaders.
Primary
Recipient Governments
Ministry officials designing investment plans and reporting results.
Primary
MDB Staff
Specialists at World Bank, IFC, ADB, AfDB, EBRD, IDB implementing CIF projects.
Primary
Donor Governments
Treasury & development officials from 15 contributor nations.
Secondary
Institutional Investors
Pension funds & asset managers evaluating CCMM bonds.
Secondary
Private Sector
Renewable energy developers seeking concessional finance through MDBs.
Secondary
Researchers
Climate finance scholars studying CIF’s mechanisms and evidence.
Tertiary
Civil Society / NGOs
Watchdogs monitoring governance, pushing for safeguards.
Tertiary
Indigenous Peoples
IPLC leaders accessing the Dedicated Grant Mechanism.
Tertiary
Youth / Early-Career
Young professionals seeking the CIF Youth Fellowship and careers.
It seems you have around 9 user personas, each with different goals. I see this in three tiers. Primary: recipient governments, MDB staff, and donor governments — folks using the site to do their jobs. Then investors, private sector, researchers. And then civil society, Indigenous communities, and youth. Let’s pretend to talk to them.
Persona Findings
What users like
73 Country Dashboards
- Most-praised feature across 5 personas
- Population, emissions, funding in one place
Decision Tracker
- Searchable governance database — best in class
- Valued by govts, MDBs, donors, CSOs, researchers
Knowledge Hub
- 13 publication types, multi-faceted search
- Tiered: full report → brief → article
Co-financing Data
- 1:10 ratio is CIF’s most powerful proof point
- Clear donut charts with data timestamps
Governance Transparency
- Meeting archives back to 2016
- Exceeds most multilateral fund peers
Visual Credibility
- Clean institutional aesthetic, strong brand
- DGM pages + ChangeMakers prove accessible writing works
Good news first — and keep in mind, this synthetic analysis is just a proxy for real interviews. The country dashboards are loved. The Decision Tracker is great. The Knowledge Hub with 13 publication types is sophisticated. And the 1:10 co-financing ratio is a powerful proof point.
Persona Findings
What users find problematic
No audience entry points
- All 9 personas land on the same homepage
- No “For Governments” / “For Communities” routing
Showcase over utility
- Site is a brochure first, a working tool second
- Homepage helps no one do their actual job
Results are aggregate-only
- 42.7M tons CO2 — no country/program breakdown
- “87 of 173 reporting” buried in footnotes
Nav follows the org chart
- M&R Toolkits hidden under “Learning Laboratory”
- One workflow touches 5 of 7 nav sections
Misallocated real estate
- Capital Markets gets a full nav slot
- DGM (only direct grants) buried 2 levels deep
No current calls page
- Funding info scattered across news, governance, KB
- Contact page: 3 generic emails for 9 audiences
Now, what users don’t like. Every persona consistently hits the same wall: where do I go? The site feels too much like an annual report and not enough like a tool. Results are too aggregated, not broken down by the user’s perspective. And open funding windows were nowhere to be found.
Summary
The overall picture
The Good
Extraordinary content breadth — $500M bonds to $30K grants
73 country dashboards, Knowledge Hub, M&R Toolkits
Decision Tracker is best-in-class among peer funds
Four languages, Just Transition Toolbox, strong governance transparency
The To-Dos
Audience-specific portals with homepage routing
“Current Calls” dashboard — most critical missing feature
Disaggregable results by country, program, MDB
Navigation restructured around user tasks, not org chart
“CIF has built content for every audience but organized the site for none of them.”
The big idea that emerged: too much content that’s hard to find. And I think you know that, which is why you’re looking to fill this position.
What This Means
It’s about usability, not redesign
A full visual redesign is premature. The aesthetics are solid. The problem is that the site isn’t organized around how people actually use it. The fix is structural.
1
Map the user journeys
Follow each persona’s actual path through the site. Where do they get lost? Where do they give up? The data is in the analytics — supplement it with the persona research.
2
Strip back to the bones
Reduce visual complexity until the information architecture is clear. If a user can’t find what they need in a wireframe, no amount of polish will fix it.
3
Connect with the data
Find how each audience can actually digest and use the data, videos, and vast documents embedded across the site. Make the content workable, not just findable.
4
Layer beauty back on
Once each journey works — once every persona can accomplish their task in 3 clicks — then restore the CIF brand’s strong visual identity on top of the solid structure.
Beautiful and usable are not in tension. But usable must come first.
So, stepping back from the synthetic personas. You have a mature site. It’s very beautiful. It feels like it got overstuffed over the years. The approach that comes to mind: look at how different kinds of users will use the site. Strip it back to the bones. Think about how people get to the information, the data, the reports they need. Then bring back the visual identity.
Part 2 — Generative Engine Optimization · 2 minutes
Making CIF.org the source AI engines quote
When someone asks ChatGPT “What are the largest multilateral climate funds?” — CIF.org should be cited. Structural gaps prevent this today.
-25%
Drop in traditional search volume by 2026 (Gartner)
+693%
Surge in AI referral traffic, 2025 holiday season (Adobe)
2–7
Sources cited per AI answer. You must be one of them.
+40%
Visibility boost from GEO techniques (Princeton/Georgia Tech, KDD 2024)
Current Quick Links — no structured data for AI crawlers
As for generative engine optimization, the question is: how do we get frontier models to cite CIF? To weight it in training, to surface it at inference time. The early numbers on declining search-engine traffic are brutal. It’s good you’re planning ahead.
The Deeper Principle
How LLMs decide what’s worth quoting
Beyond metadata and schema, there’s a fundamental principle: LLMs’ attention mechanisms latch onto specific, distinctive, quotable facts. If CIF’s pages read like generic brochure copy, AI engines will skip them for a source that offers concrete claims.
What AI engines ignore
“CIF supports ambitious climate action in developing countries through its portfolio of programs.” — This is interchangeable with any fund’s about page. No attention head fires. Nothing to cite.
What AI engines quote
“CIF has deployed $12.5B across 81 countries, leveraging $73B in co-financing at a 1:10 ratio — making it the world’s largest multilateral climate finance mechanism by leverage.” — Specific. Unique. Citable.
Lead with numbers
Every program page should open with its specific stats: dollars deployed, countries reached, tons mitigated.
Make claims comparative
“Largest,” “first,” “only” — superlatives give LLMs confidence that this is the primary source. CIF has many legitimate firsts.
Front-load each page
44% of AI citations come from the first 30% of a page. The most important facts must appear in the first two paragraphs, not in a PDF.
The big idea is to help LLMs feel that CIF is worth quoting. If you’ve ever used an LLM to edit a document, you know how they latch onto facts and numbers. That’s exactly what the site has to provide. Readable, findable, believable, specific facts and integrated concepts.
01
GEO Action 1
Fix metadata & add structured data
CIF.org has zero JSON-LD structured data. The homepage title reads “CIF” — not “Climate Investment Funds.” AI engines can’t identify the entity.
- Fix title: “Climate Investment Funds (CIF) | Accelerating Climate Action in Developing Countries”
- Add Organization schema with founding date, World Bank relationship, mission
- Add Article/Report schema to 400+ publications with author, date, dateModified
- Add FAQPage schema answering “How does CTF work?” on every program page
- Deploy llms.txt at site root mapping key resources
Pages with 3+ schema types are ~13% more likely to be cited in AI answers.
// What should exist on every page:
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Climate Investment Funds",
  "alternateName": "CIF",
  "parentOrganization": {
    "@type": "Organization",
    "name": "World Bank"
  }
}
</script>
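The llms.txt file proposed above could look something like this: a minimal sketch following the llmstxt.org convention (a markdown file at the site root). The URLs and one-line descriptions are placeholders, not actual CIF paths.

```markdown
# Climate Investment Funds (CIF)

> Multilateral fund channeling concessional climate finance to
> developing countries through six partner MDBs.

## Key resources
- [About CIF](https://www.cif.org/about): mission, governance, history
- [Country dashboards](https://www.cif.org/country): funding and results by country
- [Knowledge Hub](https://www.cif.org/knowledge): evaluations, briefs, and reports
```

The point is simply to give LLM crawlers a curated, plain-text map of the site’s most citable resources.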
One level is pretty mechanical: embedding structured data in the site. There’s no JSON-LD on cif.org right now. This is computer-readable data that tells an LLM indexing system what it’s looking at.
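For the FAQPage bullet, here is a hedged sketch of what a program page might carry. The question and answer text is illustrative, not CIF copy.

```html
<!-- Hypothetical FAQPage markup for a program page.
     Question/answer wording is illustrative only. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "How does the Clean Technology Fund work?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "The Clean Technology Fund provides concessional finance for clean technology projects in developing countries, delivered through partner MDBs."
    }
  }]
}
</script>
```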
02
GEO Action 2
Convert PDFs to citable HTML & build topic clusters
CIF’s most valuable content — evaluations, investment plans — is locked in PDFs, which AI crawlers index poorly, if at all.
- HTML landing pages for top 50 PDFs with executive summaries and extractable data
- Build topical authority clusters: “Climate Finance Mechanisms” pillar linking to program sub-pages
- Add 2–4 sentence “answer capsules” under every H2 heading
- Reduce navigation markup bloat — lazy-load faceted filters
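The “answer capsule” pattern might look like this in markup. The heading, class name, and wording are illustrative; the figures come from the co-financing stats cited elsewhere in this deck.

```html
<!-- Hypothetical answer capsule: a 2–4 sentence direct answer
     placed immediately under each H2 for easy AI extraction. -->
<h2>How does CIF co-financing work?</h2>
<p class="answer-capsule">
  For every dollar CIF deploys, partner institutions and private
  investors contribute roughly ten more: $12.5B deployed has
  leveraged $73B in co-financing.
</p>
```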
Queries CIF should own
“What is the Clean Technology Fund?” · “Largest multilateral climate funds” · “How does concessional climate finance work?” · “Coal transition financing”
A more important level is access. An LLM can read a PDF, but that’s heavy digestion and processing you can’t rely on a crawler doing for you. Instead of content locked in files, there should be HTML versions with commentary, so an LLM knows what a document is about and why it should read it.
03
GEO Action 3
Build cross-platform authority so AI engines trust CIF
The strongest predictor of AI citations is brand search volume (0.334 correlation), not backlinks. 90% of AI citations come from earned & owned media.
- Audit & enrich Wikipedia articles about CIF — ChatGPT’s #1 source (7.8% of citations)
- Engage in Reddit climate finance discussions and similar communities — top source for AI Overviews & Perplexity
- Full metadata + transcripts + chapters on all YouTube content
- Data-rich LinkedIn posts (29.4K followers) AI engines can reference
- Create a definitive “About CIF” page — single source of truth for AI extraction
- Allow AI search bots while blocking training bots in robots.txt
AI-referred visitors convert at 4.4x the rate of organic search visitors.
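The robots.txt split between AI search bots and training bots could be sketched like this. Bot names change over time, so treat these user-agents as assumptions to verify against each vendor’s current documentation.

```text
# Hypothetical robots.txt sketch: allow AI *search* crawlers,
# block model-*training* crawlers.
User-agent: OAI-SearchBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: *
Allow: /
```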
The opportunity
CIF has unique, authoritative, data-rich climate finance content no other source can replicate. The challenge is purely structural — making it discoverable, extractable, and citable.
More broadly, GEO can’t be accomplished from this website alone. Citability by AIs correlates with overall brand presence: Wikipedia, social media like Reddit, YouTube with transcripts. There’s a bigger ecosystem than this website, and it’s where an LLM learns that you matter.
Implementation
Priority actions: what I’d do first
Immediate: Quick Wins
- Fix homepage title tag & metadata
- Add Organization JSON-LD schema site-wide
- Create definitive “About CIF” page
- Deploy llms.txt and configure robots.txt
- Front-load key stats onto program pages
Near-Term: Structural
- Map user journeys & identify drop-off points
- Build audience landing pages
- Create “Open Funding Windows” dashboard
- Convert top 50 PDFs to HTML with structured data
- Add FAQPage schema to program pages
Medium-Term: Authority
- Build topical content clusters
- Enrich Wikipedia, social media, YouTube presence
- Restructure navigation around user tasks
- Add glossary, contacts directory, community portals
- Launch data API / CSV downloads for researchers
This was a naive first pass. The good news: the heavy lifting — restructuring the site, making reports and multimedia more accessible to both humans and models — would once have taken thousands of person-hours. With modern AI methods, it’s reducible to weeks, a shockingly short time frame by any previous measure.
In Summary
CIF has world-class content.
It needs world-class routing.
The initial fix is not more content. It’s usability-first architecture, task-based navigation, and making the site as discoverable to AI engines as it is to human visitors.
Michael Douma
Communications Digital Officer · ETC2 · Req 35105
That’s my time for this mini presentation. Thank you for the opportunity. I’m curious how much of this maps onto your actual realities. I look forward to the conversation.
Appendix
Backup & references
User Research: AI-assisted reverse-engineering of 9 audience personas from CIF.org’s content architecture, tone, jargon, navigation, and 167-frame visual walkthrough.
GEO Sources: Princeton/Georgia Tech/Allen Institute GEO study (KDD 2024), Gartner search projections, Adobe AI referral data, Edelman brand authority analysis.
Full Deliverables Available:
• 9 persona critique documents (4–6 pages each)
• Cross-persona synthesis with 15 prioritized improvements
• Complete GEO audit with 7 technical recommendations
• Visual/functional site description (900 lines, 167 frames)
• Complete site content architecture (50-page crawl analysis)