Building Your Own DI Framework in Python — From Principles to Practice
A deep dive into Composition, Lifecycle management, and Interception

Imagine you are building a complex software application with tightly coupled components, like a house of cards. Changing one part can cause the entire structure to wobble, making development, testing, and maintenance a nightmare.
This is a common challenge in software development, where Dependency Injection (DI) shines.
In our previous article, we discussed dependency injection's theoretical underpinnings and how it promotes loose coupling, enhances maintainability, and streamlines the unit testing process.

Today, we’re moving from theory to practice. We’ll embark on a hands-on journey by building a complete Dependency Injection container in Python.
This isn’t just a theoretical exercise. Transforming abstract concepts into concrete implementations reveals the intricate relationships between components and highlights how disciplined dependency management leads to more flexible, maintainable, and testable code.
Our journey will delve into three fundamental dimensions of DI:
- Composition: How objects are constructed and assembled from their dependencies.
- Lifecycle Management: How the lifespan of objects is managed.
- Interception: How to augment or change object behavior without altering the objects.
By the end of this article, you’ll have constructed a working DI framework and gained a deeper understanding of how these concepts are applied in real-world frameworks and applications.
The complete source code for this implementation is available in this GitHub repository.
Composition

Composition is the bedrock of Dependency Injection, presenting a remarkably straightforward concept. It’s a practice you’re already familiar with, often intuitively used when you create objects in Python that contain other objects.
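For instance, passing a collaborator into a constructor instead of creating it inside the class is already composition; the class names in this small sketch are purely illustrative:
class Engine:
    def start(self):
        return "engine started"

class Car:
    def __init__(self, engine: Engine):
        # Car is composed from an Engine it receives rather than creates itself
        self.engine = engine

car = Car(Engine())  # manual dependency injection, no container involved
print(car.engine.start())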
Introducing the SimpleContainer
To show composition in action, we'll build upon the SimpleContainer implementation from our previous article. This minimalist container is an excellent starting point for understanding how Dependency Injection orchestrates object creation and management.
The SimpleContainer uses two key concepts: registration and resolution.
- Registration: Declares which classes the container should manage and how to construct their instances
- Resolution: Handles the automatic construction of registered classes, including the injection of their dependencies
Here's a brief look at the SimpleContainer class:
import inspect

class SimpleContainer:
    def __init__(self):
        self._registry = {}

    def register(self, cls):
        self._registry[cls] = cls

    def resolve(self, cls):
        if cls not in self._registry:
            raise ValueError(f"{cls} is not registered in the container.")
        target_cls = self._registry[cls]
        constructor_params = inspect.signature(target_cls.__init__).parameters.values()
        dependencies = [
            self.resolve(param.annotation)
            for param in constructor_params
            if param.annotation is not inspect.Parameter.empty
        ]
        return target_cls(*dependencies)
Let's see this container in action through a practical example. Consider a typical scenario where a Service class depends on a Repository for data access:
class Repository:
    def fetch_data(self):
        return "Data from repository"

class Service:
    def __init__(self, repository: Repository):
        self.repository = repository

    def get_data(self):
        return self.repository.fetch_data()
Using the SimpleContainer, we can automate the dependency management process:
container = SimpleContainer()

# Register both Service and Repository with the container
container.register(Repository)
container.register(Service)

# Resolve an instance of Service, which automatically resolves and injects its Repository dependency
service = container.resolve(Service)

# Use the service to show dependency injection in action
print(service.get_data())
The magic happens in the resolve method, which forms the heart of our composition system. Let's break down its operation:
def resolve(self, cls):
    if cls not in self._registry:
        raise ValueError(f"{cls} is not registered in the container.")
    target_cls = self._registry[cls]
    constructor_params = inspect.signature(target_cls.__init__).parameters.values()
    dependencies = [
        self.resolve(param.annotation)
        for param in constructor_params
        if param.annotation is not inspect.Parameter.empty
    ]
    return target_cls(*dependencies)
This method employs recursive resolution to handle dependency chains. When resolving a class, it examines the constructor’s parameter types through Python’s inspection capabilities.
For each annotated parameter, it recursively resolves the required dependency, ensuring that every object in the dependency chain is instantiated correctly.
When you request an instance of Service, the container follows these steps:
- Examines the Service constructor's signature
- Identifies the need for a Repository instance
- Examines the Repository constructor's signature to check for its dependencies
- Finding no dependencies for Repository, creates a new Repository instance
- Constructs the Service with the newly created Repository
The result is a clean separation between object creation and business logic, leading to more maintainable and testable code.
This approach eliminates the need to manually construct object hierarchies, allowing developers to focus on implementing business functionality instead of managing dependencies.
When you outsource the creation of your dependencies to a container, you delegate not just the instantiation but also the responsibility of managing their lifetimes.
This leads us to our next dimension: lifecycle management, where we explore how a DI container controls the lifespan of dependencies and determines when to dispose of them.
Lifecycle management

Beyond the initial creation and assembly of objects through composition, a robust DI framework must manage the complete lifecycle of dependencies. This management covers when objects spring to life, how long they persist, and when they should cease to exist.
At the heart of lifecycle management lie three fundamental patterns, each serving distinct use cases in your application’s architecture.
Singleton — A Singleton lifestyle ensures that only one instance of a dependency exists throughout your application’s lifetime, making it ideal for stateless services or shared resources that need a consistent state. When multiple components request this dependency, they all receive the same instance.
Transient — In contrast, the Transient lifestyle creates a fresh instance each time a dependency is requested. This pattern proves invaluable when you need an isolated state or when working with disposable resources, ensuring that each consumer operates with its independent copy.
Scoped — The Scoped lifestyle balances these extremes by maintaining a single instance within a defined context or scope. Whether bound to a web request, a user session, or a business transaction, scoped dependencies provide controlled sharing while preventing unintended state leakage between different contexts.
In the upcoming sections, we will discuss each lifestyle, including how to implement and integrate them into our SimpleContainer. We will start with the Singleton lifestyle.
Singleton lifestyle
The Singleton lifestyle represents one of the most fundamental patterns in dependency management: maintaining a single instance throughout your application’s lifetime.
While this might remind you of the Gang of Four’s Singleton design pattern, there’s a crucial distinction. Our DI implementation achieves single-instance behavior without exposing a global access point.
This lifestyle pattern offers interesting benefits, particularly in terms of resource efficiency. By reusing a single instance across your application, the Singleton lifestyle minimizes memory overhead and reduces the computational cost of object creation.
This makes it particularly valuable for stateless services, configuration managers, or database connection pools where a shared state is desirable and efficient.
However, the Singleton lifestyle isn’t without its challenges. The primary consideration comes into play in concurrent environments. Since our basic implementation doesn’t include thread-safety mechanisms by default, you must exercise caution when using Singleton-scoped dependencies in multi-threaded applications.
The shared state that makes Singletons efficient can become a liability if not correctly protected against concurrent access.
Implementation
Let’s extend our SimpleContainer to implement this pattern, demonstrating how to maintain a single instance.
First, we define the foundation for our lifetime management using an Enum class. While we'll expand this later with additional lifecycles, we start with the Singleton, alongside a Transient value that serves as the registration default and is covered in detail in the next section:
from enum import Enum

class Lifetime(Enum):
    SINGLETON = "singleton"
    TRANSIENT = "transient"  # the container's default; covered in the next section
The SimpleContainer implementation now incorporates lifetime management through two key modifications:
- We separate instance creation logic into its own method, making our code more modular and more straightforward to extend.
- We introduce a dedicated dictionary to store and manage Singleton instances, ensuring consistent object reuse.
Here is our enhanced container:
import inspect
from typing import Any, Dict, Type

class SimpleContainer:
    def __init__(self):
        self._registry: Dict[Type, tuple[Type, Lifetime]] = {}
        self._singletons: Dict[Type, Any] = {}

    def register(self, cls: Type, lifetime: Lifetime = Lifetime.TRANSIENT):
        self._registry[cls] = (cls, lifetime)

    def resolve(self, cls: Type) -> Any:
        if cls not in self._registry:
            raise ValueError(f"{cls} is not registered in the container.")
        registered_cls, lifetime = self._registry[cls]
        if lifetime == Lifetime.SINGLETON:
            if cls not in self._singletons:
                self._singletons[cls] = self._create_instance(registered_cls)
            return self._singletons[cls]
        return self._create_instance(registered_cls)

    def _create_instance(self, cls: Type) -> Any:
        constructor_params = inspect.signature(cls.__init__).parameters.values()
        dependencies = [
            self.resolve(param.annotation)
            for param in constructor_params
            if param.annotation is not inspect.Parameter.empty
        ]
        return cls(*dependencies)
The resolve method now serves as a decision point, determining whether to create a new instance or return an existing one based on the registered lifetime.
For Singleton dependencies, the container first checks the _singletons dictionary. If an instance exists, it returns it; if not, it creates one, stores it, and then returns it. This approach ensures that all subsequent requests for the same type receive the same instance.
Meanwhile, the _create_instance method maintains our original dependency resolution logic, recursively creating objects based on their constructor parameters.
Example
Let's now look at an example that shows the Singleton lifestyle in use.
from di_framework import SimpleContainer
from di_framework import Lifetime

class DatabaseConnection:
    def __init__(self):
        # We'll use the object's memory address to prove singleton behavior
        self.connection_id = id(self)

    def get_connection_info(self):
        return f"Connection ID: {self.connection_id}"

class UserRepository:
    def __init__(self, db: DatabaseConnection):
        self.db = db

# Configure our container
container = SimpleContainer()
container.register(DatabaseConnection, Lifetime.SINGLETON)
container.register(UserRepository, Lifetime.TRANSIENT)

# Create multiple repositories
repository1 = container.resolve(UserRepository)
repository2 = container.resolve(UserRepository)

# Verify singleton behavior
print(repository1.db.get_connection_info())
print(repository2.db.get_connection_info())
print(repository1.db is repository2.db)
When we run this, it shows that indeed both repositories share the same DatabaseConnection instance.

While Singletons excel at managing shared resources, many scenarios require fresh instances with isolated state. This leads us to explore the Transient lifestyle.
Transient lifestyle
The Transient lifestyle represents the opposite end of the instance management spectrum from Singleton. Where Singleton maintains a single shared instance, Transient creates a new instance every time a dependency is resolved. This approach provides complete isolation between components, ensuring each receives its own fresh instances of its dependencies.
This lifestyle pattern shines particularly in scenarios where state isolation is crucial, such as when handling user-specific data processing, managing separate transaction contexts, or dealing with disposable resources. The Transient lifestyle guarantees that state modifications in one instance won’t unexpectedly affect other parts of your application.
However, this flexibility comes with inevitable trade-offs. Creating new instances for every resolution increases memory usage and can affect performance, especially with complex object graphs or resource-intensive initialization.
Understanding these implications helps make informed decisions about when to use Transient versus Singleton lifestyles.
Implementation
Implementing the Transient lifestyle requires minimal changes to our container, because creating a new instance on every resolution is already its default behavior.
Our Lifetime enumeration already carries the TRANSIENT value we have been using as the registration default:
class Lifetime(Enum):
    SINGLETON = "singleton"
    TRANSIENT = "transient"  # a fresh instance on every resolution
The changes to the SimpleContainer are minimal, as you can see below.
class SimpleContainer:
    def __init__(self):
        self._registry: Dict[Type, tuple[Type, Lifetime]] = {}
        self._singletons: Dict[Type, Any] = {}

    def register(self, cls: Type, lifetime: Lifetime = Lifetime.TRANSIENT):
        self._registry[cls] = (cls, lifetime)

    def resolve(self, cls: Type) -> Any:
        if cls not in self._registry:
            raise ValueError(f"{cls} is not registered in the container.")
        registered_cls, lifetime = self._registry[cls]
        if lifetime == Lifetime.SINGLETON:
            if cls not in self._singletons:
                self._singletons[cls] = self._create_instance(registered_cls)
            return self._singletons[cls]
        # The Transient lifestyle simply creates a new instance every time
        return self._create_instance(registered_cls)

    # _create_instance is unchanged from the previous listing
Example
Let’s show the difference between Transient and Singleton lifestyles with a practical example:
from di_framework import SimpleContainer, Lifetime

class RequestContext:
    def __init__(self):
        self.context_id = id(self)
        self.data = {}

    def set_data(self, key: str, value: str):
        self.data[key] = value

    def get_context_info(self):
        return f"Context ID: {self.context_id}, Data: {self.data}"

class UserService:
    def __init__(self, context: RequestContext):
        self.context = context

# Configure our container
container = SimpleContainer()
container.register(RequestContext, Lifetime.TRANSIENT)
container.register(UserService, Lifetime.TRANSIENT)

# Create multiple services
service1 = container.resolve(UserService)
service2 = container.resolve(UserService)

# Demonstrate isolation
service1.context.set_data("user", "Alice")
service2.context.set_data("user", "Bob")
print(service1.context.get_context_info())
print(service2.context.get_context_info())
print(service1.context is service2.context)  # Will print False
When we run this, it shows that indeed both services have a separate instance of the RequestContext class.

Between Singletons' global persistence and Transients' complete isolation lies a powerful middle ground: the Scoped lifestyle. This pattern emerged from the practical challenges of managing dependencies in modern, concurrent applications.
Scoped lifestyle
Consider a web application serving multiple users simultaneously. Each user expects quick responses regardless of system load, making concurrent request handling essential. However, this concurrent execution model introduces specific challenges to dependency management.
When handling concurrent requests, we face a key dilemma: Singleton dependencies, while efficient, can create thread-safety issues if they are not carefully designed.
Conversely, Transient dependencies, though thread-safe by nature, may prove inefficient when multiple components within the same logical operation need to share state or resources.
The Scoped lifestyle provides an elegant solution to this challenge. It creates a middle ground where dependencies behave like Singletons within a specific context (such as a web request) while maintaining isolation between different contexts.
For example, in a web application, each incoming request can operate within its own scope. Components within that scope share the same instances, enabling efficient resource usage, while different requests remain completely isolated from one another.
Implementation
We need to enhance our container with scope management capabilities to implement a scoped lifetime. This requires more sophisticated tracking than our previous patterns:
Besides extending the enum with SCOPED, we also introduce a new Scope class that manages instance lifecycles within defined boundaries, tracking object instantiation and ensuring proper cleanup when the scope ends.
This class serves as an isolation boundary, preventing unintended sharing of dependencies between different execution contexts while maintaining efficient resource usage within each scope.
from uuid import uuid4

class Lifetime(Enum):
    SINGLETON = "singleton"
    TRANSIENT = "transient"
    SCOPED = "scoped"

class Scope:
    def __init__(self):
        self.id = str(uuid4())
        self.instances: Dict[Type, Any] = {}
We use this in the SimpleContainer together with a context manager to track the instances within a single scope.
from contextlib import contextmanager
from typing import Optional

class SimpleContainer:
    def __init__(self):
        self._registry: Dict[Type, tuple[Type, Lifetime]] = {}
        self._singletons: Dict[Type, Any] = {}
        self._current_scope: Optional[Scope] = None

    @contextmanager
    def create_scope(self):
        """Creates a new dependency scope."""
        previous_scope = self._current_scope
        self._current_scope = Scope()
        try:
            yield self._current_scope
        finally:
            self._current_scope = previous_scope

    def resolve(self, cls: Type) -> Any:
        if cls not in self._registry:
            raise ValueError(f"{cls} is not registered in the container.")
        registered_cls, lifetime = self._registry[cls]
        if lifetime == Lifetime.SINGLETON:
            if cls not in self._singletons:
                self._singletons[cls] = self._create_instance(registered_cls)
            return self._singletons[cls]
        if lifetime == Lifetime.SCOPED:
            if not self._current_scope:
                raise ValueError("Cannot resolve scoped dependency outside of a scope")
            if cls not in self._current_scope.instances:
                self._current_scope.instances[cls] = self._create_instance(registered_cls)
            return self._current_scope.instances[cls]
        return self._create_instance(registered_cls)

    # register and _create_instance are unchanged from the previous listings
Example
Let’s see how scoped dependencies work in a practical scenario:
from uuid import uuid4
from di_framework import SimpleContainer, Lifetime

class UserContext:
    def __init__(self):
        self.request_id = str(uuid4())
        self.current_user = None

class AuditLogger:
    def __init__(self, context: UserContext):
        self.context = context

    def log_action(self, action: str):
        print(f"[Request {self.context.request_id}] "
              f"User {self.context.current_user}: {action}")

class UserService:
    def __init__(self, context: UserContext, logger: AuditLogger):
        self.context = context
        self.logger = logger

    def perform_action(self, action: str):
        self.logger.log_action(action)

# Configure container
container = SimpleContainer()
container.register(UserContext, Lifetime.SCOPED)
container.register(AuditLogger, Lifetime.SCOPED)
container.register(UserService, Lifetime.TRANSIENT)

# Simulate request handling
def handle_request(username: str, action: str):
    with container.create_scope() as scope:
        service = container.resolve(UserService)
        service.context.current_user = username
        service.perform_action(action)

# Simulate multiple requests
handle_request("alice", "view_profile")
handle_request("bob", "update_settings")
Running this example shows how each request gets its isolated context while sharing components within that request.
The output shows complete isolation between requests while maintaining consistency within each scope.

With our lifecycle management patterns in place, we can focus on a powerful feature that extends our container’s capabilities beyond simple object creation and management.
Interception allows us to change or enhance the behavior of our dependencies without altering their original implementation.
Interception

While lifestyles manage instance creation and scope, interception provides a powerful way to change or enhance the behavior of resolved dependencies.
This feature allows you to intercept your dependencies' creation or method calls, enabling cross-cutting concerns like logging, caching, or transaction management without modifying the original classes.
Implementation
Interception works by wrapping resolved instances in proxy objects that can execute custom logic before or after method calls. This approach adheres to the Open-Closed Principle, allowing you to extend functionality without modifying existing code.
Here’s how we can add this to our container:
import inspect
from functools import wraps
from typing import Any, Callable, Dict, Generic, Optional, Type, TypeVar

T = TypeVar('T')

class Interceptor(Generic[T]):
    def __init__(self, instance: T):
        self._instance = instance
        self._before_callbacks: list[Callable] = []
        self._after_callbacks: list[Callable] = []

    def before(self, callback: Callable):
        self._before_callbacks.append(callback)
        return self

    def after(self, callback: Callable):
        self._after_callbacks.append(callback)
        return self

    def __getattr__(self, name):
        attr = getattr(self._instance, name)
        if callable(attr):
            @wraps(attr)
            def wrapped(*args, **kwargs):
                # Execute before callbacks
                for callback in self._before_callbacks:
                    callback(self._instance, name, args, kwargs)
                # Call the original method
                result = attr(*args, **kwargs)
                # Execute after callbacks
                for callback in self._after_callbacks:
                    callback(self._instance, name, result, args, kwargs)
                return result
            return wrapped
        return attr

class SimpleContainer:
    def __init__(self):
        self._registry: Dict[Type, tuple[Type, Lifetime]] = {}
        self._singletons: Dict[Type, Any] = {}
        self._current_scope: Optional[Scope] = None
        self._interceptors: Dict[Type, list[Callable[[Any], Any]]] = {}

    def register_interceptor(self, cls: Type, interceptor_factory: Callable[[Any], Any]):
        """Register an interceptor factory for a specific type."""
        if cls not in self._interceptors:
            self._interceptors[cls] = []
        self._interceptors[cls].append(interceptor_factory)

    def _create_instance(self, cls: Type) -> Any:
        # Create the instance with dependencies
        constructor_params = inspect.signature(cls.__init__).parameters.values()
        dependencies = [
            self.resolve(param.annotation)
            for param in constructor_params
            if param.annotation is not inspect.Parameter.empty
        ]
        instance = cls(*dependencies)
        # Apply interceptors if any exist
        if cls in self._interceptors:
            for factory in self._interceptors[cls]:
                instance = factory(instance)
        return instance

    # register, resolve, and create_scope are unchanged from the previous listings
Example
Let's create an example that shows how interception can help with performance monitoring. This interceptor measures each call's execution time and prints the name of the method being invoked.
import time
from datetime import datetime

class PerformanceLoggingInterceptor(Interceptor):
    def __init__(self, instance):
        super().__init__(instance)
        self.before(self._log_start)
        self.after(self._log_end)

    def _log_start(self, instance, method_name, args, kwargs):
        self._start_time = time.time()
        print(f"[{datetime.now()}] Starting {instance.__class__.__name__}.{method_name}")

    def _log_end(self, instance, method_name, result, args, kwargs):
        duration = (time.time() - self._start_time) * 1000
        print(f"[{datetime.now()}] Completed {instance.__class__.__name__}.{method_name} "
              f"in {duration:.2f}ms")

class UserRepository:
    def get_user(self, user_id: str):
        # Simulate database query
        time.sleep(0.1)
        return {"id": user_id, "name": "Test User"}

# Configure container with interception
container = SimpleContainer()
container.register(UserRepository, Lifetime.SINGLETON)
container.register_interceptor(UserRepository,
                               lambda instance: PerformanceLoggingInterceptor(instance))

# Use the intercepted repository
repo = container.resolve(UserRepository)
user = repo.get_user("123")
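Representative output (timestamps and the exact duration will vary from run to run):
[2025-01-01 12:00:00.000123] Starting UserRepository.get_user
[2025-01-01 12:00:00.100789] Completed UserRepository.get_user in 100.52ms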
When we run this example, it shows the power of interceptors: the get_user method knows nothing about the performance measurement, which is entirely transparent to it.

With this, we've gained deep insight into dependency injection's inner workings by building a complete DI container that handles composition, lifecycle management, and interception.
But how do these patterns translate to real-world frameworks? Let’s explore how our educational implementation compares to production-grade solutions.
What next?
In this article, we’ve built a dependency injection container from the ground up, exploring composition, lifecycle management (Singleton, Transient, and Scoped), and interception. You’ve now gained hands-on experience with how these core concepts work in practice.
But how do these principles translate to production-grade frameworks?
Here are specific avenues to further your learning and deepen your understanding:
Thread safety in practice
Our example `SimpleContainer` isn’t thread-safe. Investigate how frameworks like FastAPI handle concurrency using thread-local storage and other mechanisms to manage dependencies safely in multi-threaded environments. What are the trade-offs between using thread-local storage vs explicit locking mechanisms?
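As a starting point for your own experiments, here is one possible, illustrative way to keep the "current scope" in thread-local storage so concurrent threads cannot see each other's scoped instances; async frameworks typically reach for contextvars instead. The ThreadLocalScopes class below is hypothetical and repeats the small Scope class so the sketch is self-contained:

import threading
from contextlib import contextmanager
from uuid import uuid4

class Scope:
    def __init__(self):
        self.id = str(uuid4())
        self.instances = {}

class ThreadLocalScopes:
    """Illustrative sketch: one 'current scope' per thread via threading.local."""
    def __init__(self):
        self._local = threading.local()

    @property
    def current_scope(self):
        return getattr(self._local, "scope", None)

    @contextmanager
    def create_scope(self):
        previous = self.current_scope
        self._local.scope = Scope()
        try:
            yield self._local.scope
        finally:
            self._local.scope = previous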
Circular dependency detection
In complex applications, circular dependencies can lead to infinite loops. Explore how frameworks like the Python Dependency Injector detect such issues and implement strategies to prevent them. How can we improve our simple container to check and report on circular dependencies?
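One possible approach, sketched here on a stripped-down container rather than taken from any particular library, is to track the chain of types currently being resolved and fail fast when a type reappears:

import inspect

class CycleAwareContainer:
    """Illustrative sketch: SimpleContainer-style resolution with cycle detection."""
    def __init__(self):
        self._registry = {}
        self._resolution_stack = []  # types currently being resolved

    def register(self, cls):
        self._registry[cls] = cls

    def resolve(self, cls):
        if cls in self._resolution_stack:
            chain = " -> ".join(c.__name__ for c in self._resolution_stack + [cls])
            raise ValueError(f"Circular dependency detected: {chain}")
        if cls not in self._registry:
            raise ValueError(f"{cls} is not registered in the container.")
        self._resolution_stack.append(cls)
        try:
            target_cls = self._registry[cls]
            params = inspect.signature(target_cls.__init__).parameters.values()
            dependencies = [
                self.resolve(p.annotation)
                for p in params
                if p.annotation is not inspect.Parameter.empty
            ]
            return target_cls(*dependencies)
        finally:
            self._resolution_stack.pop()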
Performance optimizations
For large dependency graphs, even small instantiation operations can become bottlenecks. Examine how libraries like the Python Dependency Injector optimize their dependency resolution process. What are some strategies to cache or lazy-load dependencies to improve performance in more complex application scenarios?
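One simple, illustrative optimization is to cache the reflection work: a class's annotated constructor parameters never change at runtime, so they only need to be inspected once. The constructor_dependencies helper below is hypothetical:

import inspect
from functools import lru_cache

@lru_cache(maxsize=None)
def constructor_dependencies(cls):
    """Cache each class's annotated constructor parameters so repeated
    resolutions skip the relatively expensive inspect.signature call."""
    params = inspect.signature(cls.__init__).parameters.values()
    return tuple(
        param.annotation
        for param in params
        if param.annotation is not inspect.Parameter.empty
    )

# _create_instance could then build instances from the cached annotations:
# dependencies = [self.resolve(dep) for dep in constructor_dependencies(cls)]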
Framework comparisons
Compare the specific implementation choices we made with existing DI frameworks. For example, how does FastAPI’s `Depends` utility combine with its dependency resolution? How do the `Provider` types in the Python Dependency Injector library implement the lifecycle concepts we have explored?
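For a flavor of FastAPI's approach, its Depends marker attaches a provider function to a parameter, and the framework resolves and injects it per request. The UserRepository and get_repository names in this minimal sketch are just illustrations:

from fastapi import Depends, FastAPI

app = FastAPI()

class UserRepository:
    def get_user(self, user_id: str):
        return {"id": user_id, "name": "Test User"}

def get_repository() -> UserRepository:
    # Provider function: FastAPI calls this for every request that declares it
    return UserRepository()

@app.get("/users/{user_id}")
def read_user(user_id: str, repo: UserRepository = Depends(get_repository)):
    return repo.get_user(user_id)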
Practical implementation in database operations
Explore how SQLAlchemy uses the concept of dependency injection in its session management. How is the concept of “context” used in database operations?
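As a rough sketch of the idea in SQLAlchemy, a sessionmaker acts as a factory and each with block plays a role similar to a scope, sharing a single session (unit of work) inside it:

from sqlalchemy import create_engine, text
from sqlalchemy.orm import sessionmaker

engine = create_engine("sqlite:///:memory:")
Session = sessionmaker(bind=engine)

# Each 'with' block behaves like a scope: code inside shares one session,
# and the session is cleaned up when the block ends.
with Session() as session:
    session.execute(text("SELECT 1"))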
By working through this practical implementation and by exploring these avenues, you’ve not only built a DI container; you’ve developed a solid foundational understanding of how dependency injection operates.
You are now better equipped to work with any dependency injection implementation and to architect more maintainable and testable applications.
Remember, the complete source code for our implementation is available in this GitHub repository. Experiment with it, modify it, and apply these concepts to your own projects!