A full-stack MVP that ingests UK planning application data, enriches records with AI summaries, and matches them against user-defined postcode and radius alerts.
PropTech Intel is a full-stack MVP for UK planning application intelligence, built to help property professionals monitor planning activity around specific postcodes and areas.
The idea behind the project was to turn fragmented public planning data into a more useful alert system for landlords, developers, estate agents, architects, and property investors. Instead of manually checking council portals or raw planning datasets, users can create saved postcode and radius alerts, browse planning records, view details, and receive matched planning updates.
The backend is built with FastAPI and SQLAlchemy, with PostgreSQL/PostGIS provisioned as the intended data layer. The frontend is built with Next.js, React, TypeScript, and Tailwind CSS. The app supports user registration, login, saved alert rules, planning search, planning feed views, dashboard workflows, and planning detail pages.
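To make the shape of the stack concrete, here is a minimal sketch of how a saved-alert endpoint could look in FastAPI. The route paths, the AlertRule fields, and the in-memory store are illustrative assumptions, not the project's actual code.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="PropTech Intel API")

class AlertRule(BaseModel):
    postcode: str               # e.g. "NW1 8QL"
    radius_km: float            # match applications within this radius
    keywords: list[str] = []    # optional keyword filter

# Stand-in for the PostgreSQL-backed store in the real app.
_rules: list[AlertRule] = []

@app.post("/alerts")
def create_alert(rule: AlertRule) -> dict:
    _rules.append(rule)
    return {"id": len(_rules), "rule": rule}

@app.get("/alerts")
def list_alerts() -> list[AlertRule]:
    return _rules
```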
The ingestion layer pulls records from Camden OpenData and the Planning Data Platform. Records are normalised into a planning application model, stored with raw payloads, content hashes, source references, source URLs, and provenance metadata. This was important because planning data needs to remain traceable back to the original public source.
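As a rough illustration of that normalised-plus-provenance shape, a SQLAlchemy model could look like the sketch below. The field names are assumptions for illustration, not the project's actual schema.

```python
from sqlalchemy import Column, DateTime, Float, Integer, JSON, String, Text
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class PlanningApplication(Base):
    __tablename__ = "planning_applications"

    id = Column(Integer, primary_key=True)
    reference = Column(String, index=True)   # council's own reference
    description = Column(Text)               # normalised proposal text
    latitude = Column(Float)
    longitude = Column(Float)

    # Provenance: keep the record traceable to its public source.
    source = Column(String)                  # e.g. "camden_opendata"
    source_url = Column(String)              # link back to the original record
    raw_payload = Column(JSON)               # untouched source response
    content_hash = Column(String, index=True)  # detects upstream changes
    ingested_at = Column(DateTime)
```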
The project also includes AI classification and summarisation. Planning records can be processed with OpenAI to generate a category, summary, relevance reason, risk score, and opportunity score. When the AI provider is unavailable or no key is configured, the system falls back to rule-based classification so the pipeline can still function.
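The fallback pattern is simple to sketch. Assuming a hypothetical enrich() helper (the model name, prompt, and rule set below are placeholders), an OpenAI call is wrapped so that a missing key or a provider error degrades to rule-based classification:

```python
import json
import os

RULES = {"extension": "householder", "demolition": "demolition",
         "change of use": "change_of_use"}

def rule_based(description: str) -> dict:
    # Crude keyword rules so the pipeline still produces a category.
    text = description.lower()
    category = next((c for k, c in RULES.items() if k in text), "other")
    return {"category": category, "summary": description[:200],
            "risk_score": None, "opportunity_score": None}

def enrich(description: str) -> dict:
    api_key = os.getenv("OPENAI_API_KEY")
    if not api_key:
        return rule_based(description)      # no key: degrade gracefully
    try:
        from openai import OpenAI
        client = OpenAI(api_key=api_key)
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content":
                       f"Classify and summarise this UK planning proposal "
                       f"as JSON with category, summary, risk_score, "
                       f"opportunity_score:\n{description}"}],
            response_format={"type": "json_object"},
        )
        return json.loads(resp.choices[0].message.content)
    except Exception:
        return rule_based(description)      # provider down: fall back
```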
The alert-matching logic compares stored planning applications against user-defined postcode and radius alerts using geocoded latitude/longitude data. It also supports keyword and category matching, creates alert events, avoids duplicate deliveries, and includes SendGrid email delivery logic for matched alerts.
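A simplified version of that check, assuming geopy for the geodesic distance and an in-memory set standing in for the alert-event table, might look like this:

```python
from geopy.distance import geodesic

def matches(app_lat: float, app_lon: float, description: str,
            rule: dict, alert_lat: float, alert_lon: float) -> bool:
    # Radius check: geodesic distance between alert centre and application.
    within = geodesic((alert_lat, alert_lon),
                      (app_lat, app_lon)).km <= rule["radius_km"]
    # Optional keyword filter on the proposal text.
    keywords = rule.get("keywords") or []
    keyword_ok = not keywords or any(k.lower() in description.lower()
                                     for k in keywords)
    return within and keyword_ok

_delivered: set[tuple[int, int]] = set()   # (alert_id, application_id)

def record_event(alert_id: int, application_id: int) -> bool:
    # Deduplication: one alert event per (alert, application) pair.
    key = (alert_id, application_id)
    if key in _delivered:
        return False                        # already notified, skip
    _delivered.add(key)
    return True
```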
This is not yet a fully production-ready SaaS. Stripe checkout is not wired up, Meilisearch is configured but not yet integrated, PostGIS is provisioned but spatial querying is currently handled with latitude/longitude and geodesic distance checks, and some monitoring/cron scripts still need cleanup. The real value today is the MVP architecture: ingestion, normalisation, provenance, AI enrichment, user alerts, matching, and notification logic.
I built the full-stack MVP structure, including the FastAPI backend, Next.js frontend, authentication flow, planning APIs, saved alert rules, planning feed/search/detail views, and dashboard workflows.
I implemented the planning ingestion foundation for Camden and the Planning Data Platform, including raw payload retention, normalised planning records, source references, content hashes, and provenance metadata.
I also built the AI processing layer for planning classification and summarisation, the postcode geocoding flow, the radius and keyword alert-matching logic, and the SendGrid email delivery path for matched alerts.
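The delivery step itself is thin. A minimal SendGrid send, with placeholder sender address and subject, looks roughly like this:

```python
import os

from sendgrid import SendGridAPIClient
from sendgrid.helpers.mail import Mail

def send_alert_email(to_email: str, subject: str, html: str) -> int:
    message = Mail(
        from_email="alerts@example.com",    # placeholder sender
        to_emails=to_email,
        subject=subject,
        html_content=html,
    )
    sg = SendGridAPIClient(os.environ["SENDGRID_API_KEY"])
    response = sg.send(message)
    return response.status_code             # 202 when accepted for delivery
```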
I worked on the deployment and service structure, including Dockerfiles, Railway/Netlify configuration, Redis/Celery setup, monitoring configuration, and tests covering key backend and frontend paths.
One of the main challenges was working with fragmented planning data. Public planning records can come from different sources, with different structures, field names, quality levels, and missing information. The system needed a normalised planning application model while still preserving the raw source payload and provenance.
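In practice this meant one normaliser per source. A sketch for a Camden-style record, where the source field names are assumptions for illustration, could look like:

```python
import hashlib
import json

def normalise_camden(raw: dict) -> dict:
    # Map source-specific field names into the common model,
    # keeping the raw payload and a content hash for provenance.
    return {
        "reference": raw.get("application_number"),
        "description": raw.get("development_description", ""),
        "latitude": float(raw["latitude"]) if raw.get("latitude") else None,
        "longitude": float(raw["longitude"]) if raw.get("longitude") else None,
        "source": "camden_opendata",
        "raw_payload": raw,                  # preserved verbatim
        "content_hash": hashlib.sha256(
            json.dumps(raw, sort_keys=True).encode()).hexdigest(),
    }
```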
Another challenge was making the alerts useful. A simple list of planning records is not enough. Users need to monitor specific areas, so the project had to support postcode geocoding, radius matching, keyword filters, category filters, and deduped alert events.
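The geocoding step can be sketched against the free postcodes.io API; the project's actual provider may differ.

```python
import requests

def geocode_postcode(postcode: str) -> tuple[float, float] | None:
    # postcodes.io accepts postcodes with or without the space.
    resp = requests.get(
        f"https://api.postcodes.io/postcodes/{postcode.replace(' ', '')}")
    if resp.status_code != 200:
        return None                         # invalid or unknown postcode
    result = resp.json()["result"]
    return result["latitude"], result["longitude"]
```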
The AI layer also needed to be practical rather than cosmetic. Planning proposals can be hard to scan quickly, so the system uses AI to create summaries, classifications, relevance reasons, and risk/opportunity scores while keeping the original source data separate.
A current limitation is that some planned production features are not fully wired up yet. Stripe checkout, Meilisearch indexing, true PostGIS spatial querying, Telegram notifications, and some monitoring routes are either incomplete or configured but not yet in active use.
PropTech Intel turns scattered public planning records into a more focused monitoring workflow for property professionals.
The MVP helps users create location-based alerts, view relevant planning activity, and understand applications faster through AI summaries and classification.
As a portfolio project, it shows my ability to build a practical data-driven product: backend APIs, frontend dashboards, user authentication, planning ingestion, source normalisation, provenance tracking, AI enrichment, geocoding, alert matching, and email notification logic.
It also gives me a strong foundation for a future PropTech SaaS product focused on planning alerts, local development intelligence, housing demand signals, and property opportunity monitoring.
This project taught me that data products need provenance from the beginning. When working with public planning records, it is not enough to store a cleaned version. The system also needs source references, raw payloads, URLs, content hashes, and licence/attribution information.
I also learnt that AI is most useful when it improves interpretation rather than replacing the official record. The AI summary can help users understand a planning application faster, but the original proposal and source data still need to remain visible.
The project helped me think more deeply about MVP boundaries. It is possible to build a useful planning intelligence workflow before the full SaaS layer is complete, as long as the core loop works: ingest records, normalise them, enrich them, match them to alerts, and notify users.
Another key learning was that geospatial products need careful technical choices. The current implementation uses postcode geocoding and geodesic filtering, but true scale would require proper PostGIS spatial queries and indexing.
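For reference, the PostGIS version of the radius filter would look something like the sketch below, using ST_DWithin over a geography cast; the table and column names follow the earlier sketches and are assumptions.

```python
from sqlalchemy import text

# ST_DWithin on geography measures in metres and, with a stored geometry
# column plus a GiST index, turns the radius filter into an index scan
# instead of a per-row Python distance check.
NEARBY_SQL = text("""
    SELECT id, reference
    FROM planning_applications
    WHERE ST_DWithin(
        geography(ST_MakePoint(longitude, latitude)),
        geography(ST_MakePoint(:lon, :lat)),
        :radius_m
    )
""")
```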
I help founders and teams turn messy ideas into reliable systems — from MVPs and APIs to AI-enabled automation workflows.