PHP Reverse Auction Platform
A real-time auction platform where prices drop automatically and buyers compete to snag items at the lowest price before someone else claims them.
Platform Showcase
The Challenge
Traditional auction platforms have a fundamental flaw: prices only move in one direction. Up. Buyers watch helplessly as items climb beyond their reach. Sellers often leave money on the table when anticipated bidding wars never materialize. The client wanted to flip this model on its head.
The vision was bold and technically ambitious: build a reverse auction platform where items start at a maximum price and automatically decrease every hour without bids. The psychology is simple but effective. Buyers face constant strategic tension between waiting for a better price and risking that someone else claims the item first. This creates genuine urgency rather than artificial scarcity.
But this inverted model introduced complexity I hadn't encountered in traditional auction systems:
- Distributed State Synchronization: Prices change hourly, bids reset timers, and auction closures happen in real-time. Every connected client needs to see the exact same state at the exact same moment. Otherwise, the entire system loses credibility.
- Concurrency Under Pressure: When multiple users bid in the final seconds of a window, the system must serialize these requests correctly, determine winners atomically, and broadcast updates without race conditions.
- Reliable Background Processing: Price drops must happen exactly on schedule, even during deployments or server restarts. A missed price drop breaks the core promise of the platform.
- Dual Payment Rails: Canadian users needed e-transfer support alongside Stripe. This meant building manual verification workflows without compromising the real-time experience.
- Phone-First Authentication: SMS OTP login for a non-technical user base required aggressive rate limiting to prevent abuse while keeping friction minimal for legitimate users.
"We looked at existing auction platforms, but none offered the reverse pricing model we wanted. Building from scratch was the only way to get the exact user experience we envisioned."
— Project Stakeholder
The Solution
I architected a distributed system built for reliability first, real-time performance second. The stack (Next.js 16, Socket.IO, BullMQ, PostgreSQL, and Redis) wasn't chosen for trendiness. Each piece solved a specific problem that alternatives couldn't match.
System Architecture
The platform separates concerns across distinct layers, each independently scalable and failure-isolated:
Architecture Overview
- Client Layer → Next.js React components with Socket.IO client
- API Layer → Next.js API routes (40+ endpoints)
- Real-Time → Socket.IO server with Redis pub/sub
- Job Queue → BullMQ + Redis for background workers
- Database → PostgreSQL 16 + Prisma ORM
- Payments → Stripe Checkout + webhook handling
- Notifications → Twilio SMS for alerts
Core Features
Reverse Auction Engine
Items start at maximum price and drop 7% every hour without bids. When price hits minimum, it recycles back to start. Creates strategic tension for buyers.
Real-Time Updates
Socket.IO with Redis pub/sub delivers price changes, new bids, and auction closures to all connected clients in under 100ms. No page refreshes needed.
Target Bidding
Users set target prices with auto-bid or notify options. Background workers monitor prices and trigger actions automatically when thresholds are reached.
Flexible Payments
Stripe Checkout for credit cards plus manual e-transfer verification. Membership system with annual fees credited toward first won auction.
SMS Notifications
Twilio-powered alerts for outbid notifications, auction wins, price drops on watched items, and target price reached events.
Phone OTP Auth
NextAuth.js v5 with phone-based OTP login. Redis-backed rate limiting prevents abuse: 3 OTP requests/hour, 5 verification attempts before lockout.
Technical Deep Dive
Building the Reverse Auction Engine
The core auction logic was the most intellectually challenging part of this project. Unlike traditional auctions where price discovery happens upward through competitive bidding, reverse auctions require careful orchestration of automatic price decay, bid-triggered price increases, and timer management. All while maintaining absolute consistency across distributed clients.
The fundamental algorithm works like this: items start at a maximum price and drop by 7% every hour if no bids occur. When a bid is placed, the price jumps up by 7% and a 7-minute competition window opens. Each subsequent bid resets this timer. If the window closes with a winning bid, that user claims the item. If the price ever drops to the minimum threshold, the auction "recycles", resetting to the starting price and beginning the cycle again.
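The decay-and-recycle rules above can be sketched as pure functions. This is an illustrative model, not the production code: field names like `recycleCount` and the round-to-cents step are assumptions, and the real system persists these transitions through Prisma transactions rather than in-memory objects.

```typescript
// Price state for one listing (illustrative shape, not the real schema).
interface PriceState {
  startingPrice: number;
  minimumPrice: number;
  currentPrice: number;
  recycleCount: number;
}

// Hourly tick with no bids: drop 7%, or recycle to the start once the
// price would reach the minimum threshold.
function applyHourlyDrop(s: PriceState): PriceState {
  const dropped = Math.round(s.currentPrice * 0.93 * 100) / 100; // round to cents
  if (dropped <= s.minimumPrice) {
    return { ...s, currentPrice: s.startingPrice, recycleCount: s.recycleCount + 1 };
  }
  return { ...s, currentPrice: dropped };
}

// A bid pushes the price back up 7% (capped at the starting price here;
// the cap is an assumption for the sketch).
function applyBidBump(s: PriceState): PriceState {
  const bumped = Math.min(
    s.startingPrice,
    Math.round(s.currentPrice * 1.07 * 100) / 100
  );
  return { ...s, currentPrice: bumped };
}
```

Keeping the transition logic pure like this makes it trivial to unit-test the recycle edge case without touching the database.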
The tricky part wasn't implementing these rules. It was ensuring they held true under concurrent load. When three users place bids in the same second, the system must serialize these operations correctly, determine the winning bid atomically, and broadcast the updated state to all connected clients without anyone seeing stale data. I solved this with database-level transactions and careful use of row-level locking in PostgreSQL, ensuring that bid placement is always serialized correctly even under heavy contention.
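The serialization idea can be illustrated without a database. In production this is done with a PostgreSQL transaction and row-level locking on the listing row; the single-process sketch below shows the same "read, decide, write without interleaving" guarantee using a per-listing promise chain. All names here are hypothetical.

```typescript
// One promise chain per listing: handlers for the same listing run
// strictly one after another, never interleaved.
const chains = new Map<string, Promise<unknown>>();

function withListingLock<T>(listingId: string, fn: () => Promise<T>): Promise<T> {
  const prev = chains.get(listingId) ?? Promise.resolve();
  const next = prev.then(fn, fn); // run after the previous holder, success or failure
  chains.set(listingId, next.catch(() => undefined)); // keep the chain alive on errors
  return next;
}

// Toy in-memory listing store for the demonstration.
interface Listing { currentPrice: number; winningBidder: string | null }
const listings = new Map<string, Listing>([
  ["item-1", { currentPrice: 93, winningBidder: null }],
]);

async function placeBid(listingId: string, userId: string): Promise<boolean> {
  return withListingLock(listingId, async () => {
    const listing = listings.get(listingId);
    if (!listing) return false;
    // Inside the "lock": check-then-write is atomic relative to other bids.
    listing.currentPrice = Math.round(listing.currentPrice * 1.07 * 100) / 100;
    listing.winningBidder = userId;
    return true;
  });
}
```

An in-memory mutex like this only works on a single server instance, which is exactly why the real platform pushes the serialization down to PostgreSQL, where every instance shares the same lock.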
Real-Time Architecture: Socket.IO + Redis Pub/Sub
Real-time updates were non-negotiable for this product. Users watching an auction need to see price drops, new bids, and timer changes the moment they happen. Any perceptible lag erodes trust in the system.
I chose Socket.IO over alternatives like raw WebSockets or Server-Sent Events for a few specific reasons. First, its automatic reconnection with buffered messages meant users on unstable connections wouldn't miss important updates. Second, the room-based broadcasting model mapped perfectly to auction rooms. Each auction gets its own channel, and users join/leave as they navigate. Third, Socket.IO's Redis adapter enables horizontal scaling: multiple server instances can share the same Redis pub/sub backend, so a user connected to server A receives updates triggered by actions on server B.
The architecture is straightforward but effective: API routes that modify auction state (placing bids, triggering price drops) emit events to Socket.IO, which broadcasts to all clients subscribed to that auction room. The Redis adapter handles cross-server communication transparently. I measured end-to-end latency at under 100ms in production, fast enough that users perceive updates as instant.
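The room-based fan-out pattern is simple enough to show as an in-memory stand-in (no network, no Redis; the types and function names are illustrative only):

```typescript
// A room maps to a set of subscriber callbacks, one per connected client.
type Handler = (event: string, payload: unknown) => void;
const rooms = new Map<string, Set<Handler>>();

// Join a per-auction room; returns a function to leave it on navigation.
function join(room: string, handler: Handler): () => void {
  const set = rooms.get(room) ?? new Set<Handler>();
  set.add(handler);
  rooms.set(room, set);
  return () => set.delete(handler);
}

// Broadcast an event only to clients subscribed to that auction's room.
function broadcast(room: string, event: string, payload: unknown): void {
  for (const h of rooms.get(room) ?? []) h(event, payload);
}
```

In the actual stack, `join` corresponds to `socket.join("auction:" + id)` and `broadcast` to `io.to(room).emit(event, payload)`, with the Redis adapter transparently extending the fan-out across server instances.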
Reliable Background Processing with BullMQ
The platform relies heavily on scheduled jobs: hourly price drops, auction closure checks, SMS notifications, and target bid triggers. These can't be best-effort. If a price drop fails to execute, users lose trust in the system. If an auction closure is delayed, the winner might change.
BullMQ was the clear choice over alternatives like cron jobs or native Node.js timers. The main advantages: jobs persist in Redis, so they survive server restarts and deployments; failed jobs automatically retry with configurable backoff; and the delayed job API lets me schedule auction closure checks exactly 7 minutes after each bid without managing complex timer state.
I structured the worker architecture around three separate queues: price drops (scheduled hourly, checked every minute), auction closures (scheduled dynamically based on bid window expirations), and notifications (triggered by various auction events). Each queue runs independent workers with appropriate concurrency limits. Price drops can process in parallel since they're independent, but auction closures need careful serialization to prevent race conditions.
One lesson learned: BullMQ's delayed jobs aren't guaranteed to fire at exact millisecond precision. For the 7-minute bid windows, I added a secondary check that runs every few seconds to catch any auctions whose windows should have closed. This belt-and-suspenders approach ensures no auction gets stuck in limbo.
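The secondary sweep amounts to a frequent scan for windows that should already have closed. A minimal sketch, with illustrative field names:

```typescript
// An open auction either has an active bid window or none at all.
interface OpenAuction {
  id: string;
  windowExpiresAt: Date | null;
}

// Return the auctions whose bid window has expired as of `now`; the
// sweep worker then closes them even if the delayed job never fired.
function findOverdueClosures(auctions: OpenAuction[], now: Date): string[] {
  return auctions
    .filter((a) => a.windowExpiresAt !== null && a.windowExpiresAt.getTime() <= now.getTime())
    .map((a) => a.id);
}
```

The closure handler itself must be idempotent, since an auction can be picked up by both the delayed job and the sweep.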
Authentication: Phone OTP with Aggressive Rate Limiting
The target user base isn't particularly technical, and the client wanted to minimize friction. No email/password to remember, just phone number authentication. This is convenient for users but opens significant abuse vectors: SMS costs money, OTP codes can be brute-forced, and phone numbers can be spoofed.
I implemented multi-layered rate limiting backed by Redis. At the OTP request level: maximum 3 codes per phone number per hour, with the counter resetting on successful authentication. At the verification level: 5 failed attempts before a 15-minute lockout kicks in. I also added IP-based limits to prevent attackers from rotating phone numbers to bypass per-phone restrictions.
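The fixed-window counter logic looks roughly like this. In production the counters live in Redis (so limits hold across server instances and survive restarts); this in-memory version is a sketch of the same rule, using the 3-requests-per-hour limit described above:

```typescript
interface Window { count: number; resetAt: number }
const otpRequests = new Map<string, Window>();

const OTP_LIMIT = 3;
const WINDOW_MS = 60 * 60 * 1000; // one hour

// Returns true if this phone number may receive another OTP right now.
function allowOtpRequest(phone: string, now: number): boolean {
  const w = otpRequests.get(phone);
  if (!w || now >= w.resetAt) {
    // New window: first request always passes.
    otpRequests.set(phone, { count: 1, resetAt: now + WINDOW_MS });
    return true;
  }
  if (w.count >= OTP_LIMIT) return false;
  w.count += 1;
  return true;
}
```

The verification-attempt lockout and the IP-based layer follow the same shape with different keys and limits.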
NextAuth.js v5 handles the session management side. The phone OTP flow integrates as a custom credentials provider, with sessions stored in the database rather than JWTs to enable server-side revocation if suspicious activity is detected. The architecture also supports future expansion to social logins without restructuring the auth flow.
Database Design: Modeling Complex Auction State
The data model needed to capture several interconnected concepts: listings with their current price state and history, bids with their relationship to listings and users, target bids (users' desired price points), watchlists, and the payment/membership system.
The core Listing model tracks not just current price, but the entire lifecycle: starting price, minimum price, current price, bid window expiration, and recycle count (how many times the auction has reset). This last field turned out to be valuable for analytics. The client can now see which items are cycling frequently (suggesting the starting price is too high) versus getting snatched quickly (suggesting underpricing).
Bids are linked to listings with a composite index on listing ID and creation time, enabling efficient "show me all bids for this auction in chronological order" queries. The isWinning flag on each bid simplifies the common query "who's currently winning this auction" without needing complex subqueries.
Target bids (a feature allowing users to say "notify me when this item drops below $50") required careful design to prevent race conditions. When a price drop occurs, the system queries for all target bids where the new price is at or below the threshold. For auto-bid targets, the system needs to place the bid atomically with the price drop to prevent another user from sniping in between. I handled this with database transactions that lock the relevant rows during the check-and-trigger operation.
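The trigger-resolution step can be sketched as a pure function. The shape below is illustrative: it encodes the oldest-first tiebreaker (only the earliest matching auto-bid places a bid; notify-mode targets all fire), while the atomicity itself comes from the surrounding database transaction.

```typescript
interface TargetBid {
  userId: string;
  threshold: number;       // trigger when price drops to or below this
  createdAt: number;       // epoch ms; earliest target wins ties
  mode: "auto-bid" | "notify";
}

function resolveTriggers(newPrice: number, targets: TargetBid[]) {
  // Targets hit by this price drop, oldest first.
  const hit = targets
    .filter((t) => newPrice <= t.threshold)
    .sort((a, b) => a.createdAt - b.createdAt);

  const autoBidder = hit.find((t) => t.mode === "auto-bid") ?? null;
  return {
    autoBidWinner: autoBidder ? autoBidder.userId : null, // at most one auto-bid fires
    notify: hit.filter((t) => t.mode === "notify").map((t) => t.userId),
  };
}
```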
Payment Integration: Stripe + Manual E-Transfer
The Canadian market requirement for e-transfer support meant building two parallel payment flows. Stripe Checkout handles the standard credit card experience. The platform redirects to Stripe's hosted page, handles the webhook confirmation, and updates the auction status accordingly.
E-transfers required more creativity. The flow I built: after winning an auction, users can select e-transfer as their payment method. This creates a pending payment record and displays instructions (the recipient email and a unique reference code). An admin dashboard shows all pending e-transfers; when the client confirms receipt in their bank account, they manually mark the payment complete in the admin panel, which then finalizes the auction.
One UX challenge: users expected immediate confirmation after winning, but e-transfers can take hours to arrive. I added clear messaging throughout the flow explaining the delay, plus automatic email reminders after 24 hours for unpaid auctions. The system also enforces a 72-hour payment window before the auction reopens to the second-place bidder.
Lessons from Production
Several issues only surfaced after deployment. The first was timezone handling. I initially stored all times in UTC but displayed them in the server's local timezone. Users in different regions saw different "auction closes at" times, which was confusing. The fix was to store UTC consistently but convert to the user's detected timezone on the frontend, with an explicit timezone selector for users who wanted to override.
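The frontend side of that fix is a one-liner with `Intl.DateTimeFormat`: store UTC, format in the viewer's zone. A minimal sketch (the locale and style options here are assumptions):

```typescript
// Format a UTC timestamp in a given IANA timezone for display.
function formatClosingTime(utcIso: string, timeZone: string): string {
  return new Intl.DateTimeFormat("en-CA", {
    timeZone,
    dateStyle: "medium",
    timeStyle: "short",
  }).format(new Date(utcIso));
}
```

The detected zone comes from `Intl.DateTimeFormat().resolvedOptions().timeZone`, with the explicit selector overriding it when set.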
The second was Socket.IO connection management. Some users on corporate networks or restrictive ISPs experienced frequent disconnections due to WebSocket blocking. I enabled Socket.IO's HTTP long-polling fallback mode, which degrades gracefully when WebSockets are blocked. It's higher latency but keeps the real-time experience functional.
The third was a subtle race condition in the target bid system. When multiple users had target bids at the same price point, and the price dropped to that threshold, all their auto-bids would fire simultaneously. That led to multiple bids at the same price. I added a "first target bid wins" rule with a timestamp-based tiebreaker, processing target bids in order of creation rather than letting them race.
Building this platform reinforced my belief that the hardest problems in web development aren't about writing clever algorithms. They're about managing distributed state, handling failure modes gracefully, and designing systems that remain correct under concurrent load. The reverse auction model, with its combination of time-sensitive operations and real-time requirements, was an excellent exercise in thinking through these concerns systematically.
The Results
The platform launched successfully and has been running in production without significant incidents. The architecture has proven itself through multiple high-traffic auction events, and the modular design has made ongoing feature development straightforward. But the metrics that matter most aren't technical. They're about user engagement and business outcomes.
Business Impact
- User Engagement: The reverse auction model created genuine engagement. Users return repeatedly to check prices and time their bids. Average session duration is 3x longer than the client's previous e-commerce platform.
- Price Discovery: The 7% hourly drop rate proved optimal through iteration. Faster drops created anxiety without strategy; slower drops reduced urgency. The recycling mechanism ensures items don't stagnate at minimum prices indefinitely.
- Payment Flexibility: Dual payment rails (Stripe + e-transfer) removed friction for the Canadian user base. Roughly 40% of transactions now use e-transfer, a segment that would have been lost with credit-card-only checkout.
- Target Bidding Adoption: The target price feature drove significant engagement. Users set targets and return when notified, rather than constantly monitoring. This reduced the need for real-time attention while maintaining platform stickiness.
Key Learnings
- Concurrency is Harder Than It Looks: The bid window mechanism seemed straightforward until multiple users bid simultaneously. Database-level transactions with proper locking are essential. Application-level mutexes aren't enough when you're running multiple server instances.
- Job Queues Need Supervision: BullMQ is reliable, but I should have added monitoring from day one. Setting up alerts for queue depth and failed job counts would have caught issues earlier. Now I consider observability a first-class citizen in any architecture.
- Real-Time Has Failure Modes: Not all users can maintain WebSocket connections. The HTTP long-polling fallback isn't an afterthought. It's essential for production reliability. Test your real-time features on restrictive networks before launching.
- TypeScript Saved Weeks of Debugging: The auction engine's complex state transitions (price drops, bid increases, timer resets, auction recycling) would have been a nightmare without type safety. Prisma's generated types caught countless bugs at compile time that would have been production incidents otherwise.
- User Expectations Differ from Technical Reality: E-transfers taking hours to process confused users expecting instant confirmation. Clear UX writing and expectation-setting throughout the flow prevented support tickets. Technical correctness isn't enough if users don't understand what's happening.
What I'd Do Differently
In hindsight, I'd start with a more robust testing infrastructure. I wrote integration tests, but they came later in the development cycle. Writing tests alongside features (not after) would have caught edge cases earlier and made refactoring less nerve-wracking. I'd also add structured logging from the start rather than retrofitting it when debugging production issues.
The database schema evolved organically as requirements changed, which created some awkward migrations. Next time, I'd spend more upfront time on data modeling, particularly thinking through how the schema will need to grow. The current schema works, but a few early decisions made later features more complex than necessary.
Tech Stack Summary
Frontend
Next.js 16 App Router, React 19, TypeScript 5, Tailwind CSS 4, Framer Motion
Backend
Next.js API Routes, Socket.IO, BullMQ, Redis, PostgreSQL 16, Prisma ORM
Auth & Security
NextAuth.js v5, Phone OTP, Redis Rate Limiting, Input Sanitization
Payments
Stripe Checkout, Webhook Handling, E-transfer Verification
Observability
Sentry Error Tracking, PostHog Analytics, Structured Logging
Testing
Vitest, React Testing Library, MSW for API Mocking