Model the risk before
the tide decides.
A pattern common among development-cooperation and environmental NGOs: prioritizing stretches of coast for mitigation interventions under a limited budget. Here's how we'd address the problem: with a predictive model and auditable metrics.
Too much coast. Too little budget.
The classic pattern: an organization has resources to intervene on only a fraction of the coast with flood-mitigation works. And the decision of which sections to prioritize has been made from qualitative reports, field visits, and expert judgment, without a reproducible metric any auditor could review.
Serious donors require quantitative justification for each prioritization, especially because coastal interventions have immediate political consequences: the chosen neighborhood gets the works; the excluded ones protest.
What's needed: a model that (1) integrates observable and verifiable variables, (2) produces a defensible output with reported metrics, and (3) is maintainable by the client's technical team after the contract.
Hex grid + satellite variables + predictive model.
The pattern we'd apply: discretize the coast into a hexagonal grid and enrich each cell with three families of observable variables.
Satellite variables: coastline, terrain, exposure.
Census variables (DANE): exposed population, social vulnerability indicators, housing quality.
Hydrological variables: distance to the sea, current elevation, documented historical flood records.
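The discretization step above can be sketched with plain hexagon math. This is a minimal planar sketch that assumes points are already in a projected coordinate system; a production pipeline would more likely use a dedicated hex-grid library (such as H3) and real coastline data. All names and sample coordinates here are illustrative.

```python
import math
from collections import defaultdict

def hex_round(q, r):
    """Round fractional axial coordinates to the nearest hex cell
    using cube rounding (the constraint q + r + s == 0)."""
    s = -q - r
    rq, rr, rs = round(q), round(r), round(s)
    dq, dr, ds = abs(rq - q), abs(rr - r), abs(rs - s)
    if dq > dr and dq > ds:
        rq = -rr - rs       # q had the largest rounding error: recompute it
    elif dr > ds:
        rr = -rq - rs       # r had the largest rounding error: recompute it
    return rq, rr

def point_to_hex(x, y, size):
    """Map a planar (x, y) point to the axial coordinates of the
    pointy-top hexagon of circumradius `size` that contains it."""
    q = (math.sqrt(3) / 3 * x - y / 3) / size
    r = (2 / 3 * y) / size
    return hex_round(q, r)

# Group sample (projected) coastal points by the hex cell they fall in;
# each cell would then be enriched with the three variable families.
points = [(0.0, 0.0), (0.1, 0.1), (5.0, 5.0)]
cells = defaultdict(list)
for x, y in points:
    cells[point_to_hex(x, y, size=1.0)].append((x, y))
```

Hexagonal cells are a common choice here because, unlike squares, all six neighbors of a hexagon sit at the same distance, which simplifies spatial smoothing of risk scores.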
We'd train several ML models in parallel, compare AUC and interpretability (via SHAP), and pick the one that best balances predictive performance and traceability.
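The comparison step can be sketched as follows. The data is synthetic, standing in for the real satellite, census, and hydrological variables, and the two scikit-learn estimators are illustrative candidates, not the actual models used; a SHAP analysis of the fitted tree model would run as a separate interpretability step.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000
# Synthetic stand-ins for per-cell features, e.g. elevation,
# distance to sea, population density, housing quality.
X = rng.normal(size=(n, 4))
# Synthetic label: flooding more likely at low elevation and
# short distance to the sea (columns 0 and 1).
logits = -1.5 * X[:, 0] - 1.0 * X[:, 1] + 0.3 * X[:, 2]
y = (logits + rng.normal(scale=1.0, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

candidates = {
    "logistic": LogisticRegression(max_iter=1000),
    "gbdt": GradientBoostingClassifier(random_state=0),
}
aucs = {}
for name, model in candidates.items():
    model.fit(X_tr, y_tr)
    aucs[name] = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {aucs[name]:.3f}")
```

In practice the held-out split would be replaced by spatial cross-validation (leaving out contiguous stretches of coast), since neighboring hex cells are strongly correlated and a random split overstates the AUC.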
The model identifies and predicts the morphology of risk.
A predictive model. Open data. Inheritable code.
At close, the client receives: a georeferenced risk map with classified hexagonal cells, the trained model with reported metrics (AUC, validation against historical events where they exist), the dataset released under a liberal open license where the data allows, and the notebooks any analyst can run.
The public report makes explicit: which variables were used, how it was trained, which model was chosen and why, what the model does not predict.
And the client's technical team can retrain the model when new data arrives — without asking our permission.
A risk model describes the morphology of the problem; it does not predict individual events. That distinction is not semantic: it's the difference between a defensible tool and an impossible promise.
Methodological note · part of every delivery
Public data, open tools.
The entire pipeline runs with public data and open-source tools. The client's team inherits the repository, the dataset, and the notebooks.
Predictive models for politically sensitive decisions?
Tell us the problem and the data you have available. We respond with questions, not a closed proposal.