Most software conversations focus on getting to launch. Far fewer focus on what comes after — which is where software either holds up or quietly starts to fail.
Launch day tends to get treated as a finish line. The project is delivered, the team celebrates, and the software gets handed off to whoever will operate it going forward. In reality, launch is closer to the starting line for a different kind of work — the ongoing operational effort required to keep software running reliably, securely, and in alignment with how a business actually evolves.
This post covers what that work looks like in practice: the categories of post-launch activity, why they're necessary, and what happens when they're neglected.
Dependencies: the software inside your software
Almost no application is built entirely from scratch. Modern software is assembled from a combination of custom code and third-party packages — libraries, frameworks, and services maintained by other teams and organizations. A typical web application might rely on dozens of these dependencies, some of which have their own dependencies.
This creates a maintenance surface that exists independently of any features you build or change.
Why it matters after launch: Dependencies get updated. Some updates are performance improvements. Some are bug fixes. Some are security patches for vulnerabilities that have been publicly disclosed, which means anyone running the older version is exposed until they update. A piece of software that ships with a clean, up-to-date dependency tree in January may have several known vulnerabilities in its stack by July — not because anything in the codebase changed, but because the ecosystem around it did.
Dependency management is routine work, but it requires ongoing attention. Automated tooling can flag when updates are available and surface known vulnerabilities, but someone needs to review, test, and apply them. In applications without active maintenance, dependency trees drift further and further from current versions until updating becomes a significant project rather than routine upkeep.
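The core of that review step is mechanical: compare what's installed against what's current. A minimal sketch of the idea, with illustrative package names and versions (real tooling also cross-references vulnerability databases):

```python
# Minimal sketch: flag dependencies whose installed version lags the
# latest known release. Package names and versions are illustrative.

def parse_version(v: str) -> tuple:
    """Turn a version string like '2.31.0' into (2, 31, 0) for comparison."""
    return tuple(int(part) for part in v.split("."))

def find_outdated(installed: dict, latest: dict) -> list:
    """Return names of packages where the installed version trails the latest."""
    return [
        name for name, version in installed.items()
        if name in latest and parse_version(version) < parse_version(latest[name])
    ]

installed = {"web-framework": "4.1.0", "http-client": "2.28.0", "crypto-lib": "41.0.7"}
latest = {"web-framework": "4.2.3", "http-client": "2.28.0", "crypto-lib": "42.0.1"}

print(find_outdated(installed, latest))  # → ['web-framework', 'crypto-lib']
```

The interesting part is what the sketch leaves out: each flagged package still needs a human to read the changelog, assess breaking changes, and run the test suite — which is exactly the ongoing attention the automation can't replace.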
Security: the attack surface changes over time
The security posture of an application isn't static. Several things change after launch that affect how exposed an application is:
New vulnerability classes are discovered. Security research is ongoing. Vulnerabilities that didn't exist (or weren't known) when your application was built get discovered and published. The CVE database — the public record of known vulnerabilities — adds thousands of entries per year. Some of those will apply to your stack.
Your own data becomes a more valuable target. An application processing a small volume of transactions on day one presents a different risk profile than the same application processing a much higher volume two years later. As the value of the data and transactions your application handles grows, so does the incentive for attacks.
Infrastructure configuration drifts. Servers, databases, and cloud services have configuration options that affect security — encryption settings, access controls, network rules, authentication requirements. These configurations are set at deployment time, but over time they may drift from best practices as updates change defaults or as team members make changes without a full security review.
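Drift detection is usually a diff against a known-good baseline. A sketch of the idea, with illustrative setting names rather than any particular cloud provider's configuration keys:

```python
# Sketch of a configuration drift check: compare a live configuration
# snapshot against a security baseline. Setting names and values here
# are illustrative, not tied to any specific platform.

def find_drift(baseline: dict, current: dict) -> dict:
    """Return settings that differ from the baseline, with both values."""
    return {
        key: {"expected": expected, "actual": current.get(key)}
        for key, expected in baseline.items()
        if current.get(key) != expected
    }

baseline = {"encryption_at_rest": True, "public_access": False, "tls_min_version": "1.2"}
current = {"encryption_at_rest": True, "public_access": True, "tls_min_version": "1.2"}

drift = find_drift(baseline, current)
print(drift)  # → {'public_access': {'expected': False, 'actual': True}}
```

Run on a schedule, a check like this turns silent drift into an alert; the harder organizational work is keeping the baseline itself current as best practices evolve.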
A secure application at launch can become a vulnerable one through neglect rather than any active change. Regular security reviews, penetration testing at appropriate intervals, and dependency auditing are the mechanisms that prevent this.
Performance degradation
Applications tend to perform well at launch because they're starting fresh: small databases, few users, low transaction volume. Over time, several factors cause performance to degrade if left unaddressed.
Data accumulation. Queries that run efficiently against a database with ten thousand records may run significantly slower against the same database with ten million. Indexes that were appropriate for initial data volumes may need to be reconsidered as data grows. Archiving strategies for historical data that isn't needed in real-time queries should be put in place before they become urgent, not after.
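The index point can be made concrete with SQLite's query planner, which will report whether a query scans the whole table or uses an index. Table and column names here are illustrative:

```python
import sqlite3

# Sketch: the same query with and without an index, using SQLite's
# EXPLAIN QUERY PLAN output. A full-table SCAN gets slower as data
# accumulates; an index SEARCH does not grow linearly with table size.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")

# Without an index, the planner scans every row.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42"
).fetchone()
print(plan[-1])  # e.g. 'SCAN orders'

# With an index on the filtered column, the planner does a keyed lookup.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42"
).fetchone()
print(plan[-1])  # e.g. 'SEARCH orders USING INDEX idx_orders_customer ...'
```

The exact plan wording varies by SQLite version, but the SCAN-versus-SEARCH distinction is the point: at ten thousand rows both are fast, and only one stays fast at ten million.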
Load changes. An application architected for 50 concurrent users may not behave the same way under 500. Caching strategies, connection pool sizing, infrastructure scaling configuration, and architectural decisions about where computation happens all become relevant as load increases.
Third-party service degradation. Applications that integrate with external APIs and services inherit their performance characteristics. A service that adds 200 milliseconds of latency to a request might be acceptable initially; the same latency multiplied across several integrations, or increasing over time, compounds into noticeable user-facing slowness.
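The compounding is simple arithmetic, but it's worth seeing. A back-of-envelope sketch with illustrative integrations and latencies, assuming they're called sequentially on one request path:

```python
# Back-of-envelope sketch: per-integration latencies add up on a request
# path that calls them one after another. Names and numbers are illustrative.
integration_latency_ms = {"payments": 200, "geocoding": 150, "email": 120, "analytics": 80}

total_ms = sum(integration_latency_ms.values())
print(f"Added latency per request: {total_ms} ms")  # → 550 ms

# If every service slows by 25% over time, the slowdown compounds
# across all of them rather than staying a single service's problem.
degraded_ms = sum(v * 1.25 for v in integration_latency_ms.values())
print(f"After 25% degradation everywhere: {degraded_ms:.0f} ms")
```

Each individual service still looks "acceptable" in isolation; only measuring the whole request path reveals the user-facing total.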
Performance monitoring — tracking response times, error rates, database query performance, and infrastructure metrics — is what surfaces these issues before they become user-facing problems. Without it, degradation is invisible until it's significant.
Monitoring and observability
A production application running without monitoring is essentially a black box. You know it's there, but you have no visibility into what it's doing or whether it's doing it correctly.
The categories of monitoring that matter in production:
Uptime monitoring — Is the application responding? This is the baseline. External services check whether your application is reachable from the outside and alert when it isn't. Without this, you find out about outages from users, not from your own systems.
Application performance monitoring (APM) — How fast are requests being served? Which endpoints are slow? Where are errors occurring? APM tools instrument the application itself and capture detailed performance data that makes it possible to diagnose issues quickly.
Infrastructure monitoring — CPU, memory, disk, and network utilization on the servers running the application. Infrastructure problems cause application problems, and catching resource exhaustion before it causes failures requires watching the underlying systems.
Error tracking — When something goes wrong in the application — an unhandled exception, a failed database query, an unexpected state — error tracking captures it with context (stack trace, request data, affected user) and surfaces it to the team. Without error tracking, many application errors go undetected because they affect individual users silently rather than causing broad outages.
Log aggregation — Centralizing and indexing application logs makes it possible to investigate issues after the fact, audit user activity, and understand system behavior over time.
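The error-tracking idea above can be sketched in a few lines: intercept failures, record them with context, and re-raise. This is a toy illustration of the mechanism, not any vendor's actual API:

```python
import traceback

# Sketch of the core idea behind error tracking: capture exceptions with
# context (stack trace, handler name, inputs) instead of letting them
# vanish with a single user's failed request. Real tools add grouping,
# deduplication, and alerting on top of this.

captured_errors = []

def track_errors(func):
    """Wrap a handler so failures are recorded with context, then re-raised."""
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as exc:
            captured_errors.append({
                "error": repr(exc),
                "stack": traceback.format_exc(),
                "handler": func.__name__,
                "args": args,
            })
            raise  # the failure still surfaces; it just isn't silent anymore
    return wrapper

@track_errors
def load_profile(user_id):
    raise KeyError(user_id)  # simulate a failure affecting one user

try:
    load_profile("user-123")
except KeyError:
    pass

print(captured_errors[0]["error"])  # → "KeyError('user-123')"
```

The essential property is the last line: a failure that affected exactly one user still left a record the team can see, which is precisely what silent per-user errors otherwise lack.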
Collectively, these tools provide the observability needed to operate software responsibly. They're not optional in production — they're the mechanism by which you know what's happening.
Incident response
Despite good monitoring and maintenance practices, things go wrong in production. Incidents — service outages, data issues, performance degradation, security events — require a defined response process.
What a mature incident response process includes:
Detection and alerting. Monitoring systems detect the issue and route alerts to the right people through appropriate channels (on-call rotations, escalation paths).
Triage. Quickly assessing severity, scope, and likely cause to prioritize the response. Not every incident warrants the same level of urgency.
Mitigation. Taking the fastest path to restoring service, which may not be a complete fix. Rolling back a deployment, rerouting traffic, disabling a feature — mitigation prioritizes service restoration over root cause resolution.
Root cause analysis. After service is restored, investigating what actually happened and why. This is where lessons are extracted.
Post-mortem. Documenting the incident, the response, and the corrective actions. Good post-mortems are blameless — focused on systemic improvements rather than individual fault.
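The triage step above is often encoded directly in alerting rules so the decision is made in advance rather than under pressure. A sketch with illustrative severity levels and escalation targets (these are not a standard; each organization defines its own):

```python
# Sketch of codified triage: map an incident's impact and scope to a
# severity level and an escalation target. Levels, labels, and channels
# are illustrative examples, not an industry standard.

def triage(user_facing: bool, scope: str) -> dict:
    """Classify an incident and decide who gets notified, and how urgently."""
    if user_facing and scope == "all":
        return {"severity": "SEV1", "notify": "on-call + engineering lead"}
    if user_facing:
        return {"severity": "SEV2", "notify": "on-call"}
    return {"severity": "SEV3", "notify": "ticket queue"}

print(triage(user_facing=True, scope="all"))
# → {'severity': 'SEV1', 'notify': 'on-call + engineering lead'}
print(triage(user_facing=False, scope="some"))
# → {'severity': 'SEV3', 'notify': 'ticket queue'}
```

Writing the rules down ahead of time is the point: a total outage pages people immediately, while a background anomaly becomes a ticket instead of a 3 a.m. wake-up.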
Organizations without defined incident response processes tend to have chaotic, stressful, slow responses to outages. The response process itself is a product of deliberate design, not something that emerges naturally under pressure.
Iteration: software that doesn't change becomes wrong
Even if an application is perfectly stable — no bugs, no security issues, no performance problems — it will gradually become misaligned with the business it serves if it doesn't evolve.
Business processes change. User needs shift. Market conditions change what features matter. Integrations with third-party services need updating when those services evolve. Regulatory requirements change the compliance surface.
Software that is treated as done at launch rather than as a living product accumulates what practitioners call product drift — the growing gap between what the software does and what the business actually needs. This manifests as workarounds (users doing things manually that the software should handle), shadow systems (spreadsheets and external tools that fill gaps the software doesn't cover), and eventual replacement projects that could have been avoided with ongoing iteration.
A healthy post-launch relationship with software looks like a regular cadence of: collecting user feedback, prioritizing improvements, shipping updates, and reviewing the system against evolving business requirements. This is not a sign that something went wrong — it's the expected and healthy lifecycle of a software product.
Who owns post-launch operations
The question of who is responsible for post-launch operations has a significant impact on how well they're executed. The common ownership patterns:
No one owns it explicitly. The most common scenario. Development is done by an agency or contractor who considers the engagement complete at launch. The client has the software but lacks the expertise to operate it. Maintenance doesn't happen until something breaks, by which point deferred work has compounded.
The internal team owns it without the right expertise. Technical staff take on infrastructure and maintenance responsibilities they weren't hired for and don't have deep experience in. This works until it doesn't — when a security incident, a performance crisis, or a complex dependency update exceeds their capabilities.
A dedicated engineering partner owns it. The development team that built the software maintains an ongoing relationship: monitoring, maintaining, and evolving the application over time. This model has higher ongoing cost than the others but avoids the compounding costs of neglect and the risk transfer problems of the alternatives.
The right model depends on the organization's technical maturity, the criticality of the software, and the volume of ongoing development work needed. But the decision should be made explicitly and before launch — not discovered as a gap after something goes wrong.
Written by
Chris Coussa
Founder, Day2 Innovative Technical Solutions