
typing — Support for type hints ¶

Added in version 3.5.

Source code: Lib/typing.py

The Python runtime does not enforce function and variable type annotations. They can be used by third-party tools such as type checkers, IDEs, and linters.

This module provides runtime support for type hints.

Consider the function below:
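A minimal example along these lines (the function discussed next):

```python
def surface_area_of_cube(edge_length: float) -> str:
    return f"The surface area of the cube is {6 * edge_length ** 2}."
```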

The function surface_area_of_cube takes an argument expected to be an instance of float, as indicated by the type hint edge_length: float. The function is expected to return an instance of str, as indicated by the -> str hint.

While type hints can be simple classes like float or str, they can also be more complex. The typing module provides a vocabulary of more advanced type hints.

New features are frequently added to the typing module. The typing_extensions package provides backports of these new features to older versions of Python.

A quick overview of type hints (hosted at the mypy docs)

The Python typing system is standardised via PEPs, so this reference should broadly apply to most Python type checkers. (Some parts may still be specific to mypy.)

Type-checker-agnostic documentation written by the community, detailing type system features, useful typing-related tools, and typing best practices.

Specification for the Python Type System ¶

The canonical, up-to-date specification of the Python type system can be found at “Specification for the Python type system” .

Type aliases ¶

A type alias is defined using the type statement, which creates an instance of TypeAliasType . In this example, Vector and list[float] will be treated equivalently by static type checkers:

Type aliases are useful for simplifying complex type signatures. For example:

The type statement is new in Python 3.12. For backwards compatibility, type aliases can also be created through simple assignment:

Or marked with TypeAlias to make it explicit that this is a type alias, not a normal variable assignment:

Use the NewType helper to create distinct types:
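A minimal sketch:

```python
from typing import NewType

UserId = NewType('UserId', int)
some_id = UserId(524313)  # at runtime, this is just the int passed in
```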

The static type checker will treat the new type as if it were a subclass of the original type. This is useful in helping catch logical errors:
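A sketch of the idea (the function body returning a placeholder string is an assumption for illustration; real code would look the user up):

```python
from typing import NewType

UserId = NewType('UserId', int)

def get_user_name(user_id: UserId) -> str:
    return "placeholder"  # hypothetical lookup

# passes type checking
user_a = get_user_name(UserId(42351))

# fails type checking; an int is not a UserId
user_b = get_user_name(-1)
```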

You may still perform all int operations on a variable of type UserId , but the result will always be of type int . This lets you pass in a UserId wherever an int might be expected, but will prevent you from accidentally creating a UserId in an invalid way:

Note that these checks are enforced only by the static type checker. At runtime, the statement Derived = NewType('Derived', Base) will make Derived a callable that immediately returns whatever parameter you pass it. That means the expression Derived(some_value) does not create a new class or introduce much overhead beyond that of a regular function call.

More precisely, the expression some_value is Derived(some_value) is always true at runtime.

It is invalid to create a subtype of Derived :

However, it is possible to create a NewType based on a ‘derived’ NewType :

and typechecking for ProUserId will work as expected.

See PEP 484 for more details.

Recall that the use of a type alias declares two types to be equivalent to one another. Doing type Alias = Original will make the static type checker treat Alias as being exactly equivalent to Original in all cases. This is useful when you want to simplify complex type signatures.

In contrast, NewType declares one type to be a subtype of another. Doing Derived = NewType('Derived', Original) will make the static type checker treat Derived as a subclass of Original , which means a value of type Original cannot be used in places where a value of type Derived is expected. This is useful when you want to prevent logic errors with minimal runtime cost.

Added in version 3.5.2.

Changed in version 3.10: NewType is now a class rather than a function. As a result, there is some additional runtime cost when calling NewType over a regular function.

Changed in version 3.11: The performance of calling NewType has been restored to its level in Python 3.9.

Annotating callable objects ¶

Functions – or other callable objects – can be annotated using collections.abc.Callable or typing.Callable . Callable[[int], str] signifies a function that takes a single parameter of type int and returns a str .

For example:
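A sketch of such annotations (the function names are illustrative):

```python
from collections.abc import Callable

def feeder(get_next_item: Callable[[], str]) -> None:
    ...  # body omitted

def async_query(on_success: Callable[[int], None],
                on_error: Callable[[int, Exception], None]) -> None:
    ...  # body omitted
```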

The subscription syntax must always be used with exactly two values: the argument list and the return type. The argument list must be a list of types, a ParamSpec , Concatenate , or an ellipsis. The return type must be a single type.

If a literal ellipsis ... is given as the argument list, it indicates that a callable with any arbitrary parameter list would be acceptable:

Callable cannot express complex signatures such as functions that take a variadic number of arguments, overloaded functions , or functions that have keyword-only parameters. However, these signatures can be expressed by defining a Protocol class with a __call__() method:

Callables which take other callables as arguments may indicate that their parameter types are dependent on each other using ParamSpec . Additionally, if that callable adds or removes arguments from other callables, the Concatenate operator may be used. They take the form Callable[ParamSpecVariable, ReturnType] and Callable[Concatenate[Arg1Type, Arg2Type, ..., ParamSpecVariable], ReturnType] respectively.

Changed in version 3.10: Callable now supports ParamSpec and Concatenate . See PEP 612 for more details.

The documentation for ParamSpec and Concatenate provides examples of usage in Callable .

Since type information about objects kept in containers cannot be statically inferred in a generic way, many container classes in the standard library support subscription to denote the expected types of container elements.

Generic functions and classes can be parameterized by using type parameter syntax :

Or by using the TypeVar factory directly:
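A pre-3.12 sketch of the same idea:

```python
from collections.abc import Sequence
from typing import TypeVar

U = TypeVar('U')  # declare the type variable U

def second(l: Sequence[U]) -> U:  # function is generic over U
    return l[1]
```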

Changed in version 3.12: Syntactic support for generics is new in Python 3.12.

Annotating tuples ¶

For most containers in Python, the typing system assumes that all elements in the container will be of the same type. For example:

list only accepts one type argument, so a type checker would emit an error on the y assignment above. Similarly, Mapping only accepts two type arguments: the first indicates the type of the keys, and the second indicates the type of the values.

Unlike most other Python containers, however, it is common in idiomatic Python code for tuples to have elements which are not all of the same type. For this reason, tuples are special-cased in Python’s typing system. tuple accepts any number of type arguments:

To denote a tuple which could be of any length, and in which all elements are of the same type T , use tuple[T, ...] . To denote an empty tuple, use tuple[()] . Using plain tuple as an annotation is equivalent to using tuple[Any, ...] :
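These conventions can be sketched as:

```python
x: tuple[int] = (5,)             # exactly one int
y: tuple[int, str] = (5, "foo")  # an int followed by a str
z: tuple[int, ...] = (1, 2, 3)   # any number of ints
e: tuple[()] = ()                # the empty tuple
```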

The type of class objects ¶

A variable annotated with C may accept a value of type C . In contrast, a variable annotated with type[C] (or typing.Type[C] ) may accept values that are classes themselves – specifically, it will accept the class object of C . For example:
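A sketch (the class and function names are illustrative):

```python
class User: ...
class ProUser(User): ...

def make_new_user(user_class: type[User]) -> User:
    return user_class()  # accepts User itself or any subclass

make_new_user(User)                # OK
new_user = make_new_user(ProUser)  # also OK: type[User] is covariant
```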

Note that type[C] is covariant:

The only legal parameters for type are classes, Any , type variables , and unions of any of these types. For example:

type[Any] is equivalent to type , which is the root of Python’s metaclass hierarchy .

User-defined generic types ¶

A user-defined class can be defined as a generic class.

This syntax indicates that the class LoggedVar is parameterised around a single type variable T . This also makes T valid as a type within the class body.

Generic classes implicitly inherit from Generic . For compatibility with Python 3.11 and lower, it is also possible to inherit explicitly from Generic to indicate a generic class:

Generic classes have __class_getitem__() methods, meaning they can be parameterised at runtime (e.g. LoggedVar[int] below):

A generic type can have any number of type variables. All varieties of TypeVar are permissible as parameters for a generic type:

Each type variable argument to Generic must be distinct. This is thus invalid:

Generic classes can also inherit from other classes:

When inheriting from generic classes, some type parameters could be fixed:

In this case MyDict has a single parameter, T .

Using a generic class without specifying type parameters assumes Any for each position. In the following example, MyIterable is not generic but implicitly inherits from Iterable[Any] :

User-defined generic type aliases are also supported. Examples:

For backward compatibility, generic type aliases can also be created through a simple assignment:

Changed in version 3.7: Generic no longer has a custom metaclass.

Changed in version 3.12: Syntactic support for generics and type aliases is new in version 3.12. Previously, generic classes had to explicitly inherit from Generic or contain a type variable in one of their bases.

User-defined generics for parameter expressions are also supported via parameter specification variables in the form [**P]. The behavior is consistent with the type variables described above, as parameter specification variables are treated by the typing module as a specialized kind of type variable. The one exception to this is that a list of types can be used to substitute a ParamSpec :

Classes generic over a ParamSpec can also be created using explicit inheritance from Generic . In this case, ** is not used:

Another difference between TypeVar and ParamSpec is that a generic with only one parameter specification variable will accept parameter lists in the forms X[[Type1, Type2, ...]] and also X[Type1, Type2, ...] for aesthetic reasons. Internally, the latter is converted to the former, so the following are equivalent:

Note that generics with ParamSpec may not have correct __parameters__ after substitution in some cases because they are intended primarily for static type checking.

Changed in version 3.10: Generic can now be parameterized over parameter expressions. See ParamSpec and PEP 612 for more details.

A user-defined generic class can have ABCs as base classes without a metaclass conflict. Generic metaclasses are not supported. The outcome of parameterizing generics is cached, and most types in the typing module are hashable and comparable for equality.

The Any type ¶

A special kind of type is Any . A static type checker will treat every type as being compatible with Any and Any as being compatible with every type.

This means that it is possible to perform any operation or method call on a value of type Any and assign it to any variable:
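A sketch of this behavior:

```python
from typing import Any

a: Any = None
a = []   # OK: no type checking on Any
a = 2    # OK

s: str = ''
s = a    # OK for the type checker, even though a holds an int at runtime
```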

Notice that no type checking is performed when assigning a value of type Any to a more precise type. For example, the static type checker did not report an error when assigning a to s even though s was declared to be of type str and receives an int value at runtime!

Furthermore, all functions without a return type or parameter types will implicitly default to using Any :

This behavior allows Any to be used as an escape hatch when you need to mix dynamically and statically typed code.

Contrast the behavior of Any with the behavior of object . Similar to Any , every type is a subtype of object . However, unlike Any , the reverse is not true: object is not a subtype of every other type.

That means when the type of a value is object , a type checker will reject almost all operations on it, and assigning it to a variable (or using it as a return value) of a more specialized type is a type error. For example:

Use object to indicate that a value could be any type in a typesafe manner. Use Any to indicate that a value is dynamically typed.

Nominal vs structural subtyping ¶

Initially PEP 484 defined the Python static type system as using nominal subtyping . This means that a class A is allowed where a class B is expected if and only if A is a subclass of B .

This requirement previously also applied to abstract base classes, such as Iterable . The problem with this approach is that a class had to be explicitly marked to support them, which is unpythonic and unlike what one would normally do in idiomatic dynamically typed Python code. For example, this conforms to PEP 484 :

PEP 544 solves this problem by allowing users to write the above code without explicit base classes in the class definition, allowing Bucket to be implicitly considered a subtype of both Sized and Iterable[int] by static type checkers. This is known as structural subtyping (or static duck-typing):

Moreover, by subclassing a special class Protocol , a user can define new custom protocols to fully enjoy structural subtyping (see examples below).

Module contents ¶

The typing module defines the following classes, functions and decorators.

Special typing primitives ¶

Special types ¶

These can be used as types in annotations. They do not support subscription using [] .

Special type indicating an unconstrained type.

Every type is compatible with Any .

Any is compatible with every type.

Changed in version 3.11: Any can now be used as a base class. This can be useful for avoiding type checker errors with classes that can duck type anywhere or are highly dynamic.

A constrained type variable .

Definition:

AnyStr is meant to be used for functions that may accept str or bytes arguments but cannot allow the two to mix.

Note that, despite its name, AnyStr has nothing to do with the Any type, nor does it mean “any string”. In particular, AnyStr and str | bytes are different from each other and have different use cases:

Special type that includes only literal strings.

Any string literal is compatible with LiteralString , as is another LiteralString . However, an object typed as just str is not. A string created by composing LiteralString -typed objects is also acceptable as a LiteralString .

LiteralString is useful for sensitive APIs where arbitrary user-generated strings could generate problems. For example, the two cases above that generate type checker errors could be vulnerable to an SQL injection attack.

See PEP 675 for more details.

Added in version 3.11.

Never and NoReturn represent the bottom type , a type that has no members.

They can be used to indicate that a function never returns, such as sys.exit() :

Or to define a function that should never be called, as there are no valid arguments, such as assert_never() :

Never and NoReturn have the same meaning in the type system and static type checkers treat both equivalently.

Added in version 3.6.2: Added NoReturn .

Added in version 3.11: Added Never .

Special type to represent the current enclosed class.

This annotation is semantically equivalent to the following, albeit in a more succinct fashion:

In general, if something returns self , as in the above examples, you should use Self as the return annotation. If Foo.return_self was annotated as returning "Foo" , then the type checker would infer the object returned from SubclassOfFoo.return_self as being of type Foo rather than SubclassOfFoo .

Other common use cases include:

classmethod s that are used as alternative constructors and return instances of the cls parameter.

Annotating an __enter__() method which returns self.

You should not use Self as the return annotation if the method is not guaranteed to return an instance of a subclass when the class is subclassed:

See PEP 673 for more details.

Special annotation for explicitly declaring a type alias .

TypeAlias is particularly useful on older Python versions for annotating aliases that make use of forward references, as it can be hard for type checkers to distinguish these from normal variable assignments:

See PEP 613 for more details.

Added in version 3.10.

Deprecated since version 3.12: TypeAlias is deprecated in favor of the type statement, which creates instances of TypeAliasType and which natively supports forward references. Note that while TypeAlias and TypeAliasType serve similar purposes and have similar names, they are distinct and the latter is not the type of the former. Removal of TypeAlias is not currently planned, but users are encouraged to migrate to type statements.

Special forms ¶

These can be used as types in annotations. They all support subscription using [] , but each has a unique syntax.

Union type; Union[X, Y] is equivalent to X | Y and means either X or Y.

To define a union, use e.g. Union[int, str] or the shorthand int | str . Using that shorthand is recommended. Details:

The arguments must be types and there must be at least one.

Unions of unions are flattened, e.g.:

Unions of a single argument vanish, e.g.:

Redundant arguments are skipped, e.g.:

When comparing unions, the argument order is ignored, e.g.:

You cannot subclass or instantiate a Union .

You cannot write Union[X][Y] .

Changed in version 3.7: Don’t remove explicit subclasses from unions at runtime.

Changed in version 3.10: Unions can now be written as X | Y . See union type expressions .

Optional[X] is equivalent to X | None (or Union[X, None] ).

Note that this is not the same concept as an optional argument, which is one that has a default. An optional argument with a default does not require the Optional qualifier on its type annotation just because it is optional. For example:
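A sketch of the optional-but-never-None case:

```python
def foo(arg: int = 0) -> None:
    ...  # arg is optional (has a default) but is never None
```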

On the other hand, if an explicit value of None is allowed, the use of Optional is appropriate, whether the argument is optional or not. For example:
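A sketch of the None-allowed case:

```python
from typing import Optional

def bar(arg: Optional[int] = None) -> None:
    ...  # arg may be explicitly None, so Optional (or int | None) is appropriate
```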

Changed in version 3.10: Optional can now be written as X | None . See union type expressions .

Special form for annotating higher-order functions.

Concatenate can be used in conjunction with Callable and ParamSpec to annotate a higher-order callable which adds, removes, or transforms parameters of another callable. Usage is in the form Concatenate[Arg1Type, Arg2Type, ..., ParamSpecVariable] . Concatenate is currently only valid when used as the first argument to a Callable . The last parameter to Concatenate must be a ParamSpec or ellipsis ( ... ).

For example, to annotate a decorator with_lock which provides a threading.Lock to the decorated function, Concatenate can be used to indicate that with_lock expects a callable which takes in a Lock as the first argument, and returns a callable with a different type signature. In this case, the ParamSpec indicates that the returned callable’s parameter types are dependent on the parameter types of the callable being passed in:

See also:

  • PEP 612 – Parameter Specification Variables (the PEP which introduced ParamSpec and Concatenate )
  • Annotating callable objects

Special typing form to define “literal types”.

Literal can be used to indicate to type checkers that the annotated object has a value equivalent to one of the provided literals.

Literal[...] cannot be subclassed. At runtime, an arbitrary value is allowed as type argument to Literal[...] , but type checkers may impose restrictions. See PEP 586 for more details about literal types.

Added in version 3.8.

Changed in version 3.9.1: Literal now de-duplicates parameters. Equality comparisons of Literal objects are no longer order dependent. Literal objects will now raise a TypeError exception during equality comparisons if one of their parameters are not hashable .

Special type construct to mark class variables.

As introduced in PEP 526 , a variable annotation wrapped in ClassVar indicates that a given attribute is intended to be used as a class variable and should not be set on instances of that class. Usage:
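A sketch of such usage:

```python
from typing import ClassVar

class Starship:
    stats: ClassVar[dict[str, int]] = {}  # class variable, shared by all instances
    damage: int = 10                      # instance variable with a default
```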

ClassVar accepts only types and cannot be further subscribed.

ClassVar is not a class itself, and should not be used with isinstance() or issubclass() . ClassVar does not change Python runtime behavior, but it can be used by third-party type checkers. For example, a type checker might flag the following code as an error:

Added in version 3.5.3.

Special typing construct to indicate final names to type checkers.

Final names cannot be reassigned in any scope. Final names declared in class scopes cannot be overridden in subclasses.

There is no runtime checking of these properties. See PEP 591 for more details.

Special typing construct to mark a TypedDict key as required.

This is mainly useful for total=False TypedDicts. See TypedDict and PEP 655 for more details.

Special typing construct to mark a TypedDict key as potentially missing.

See TypedDict and PEP 655 for more details.

Special typing form to add context-specific metadata to an annotation.

Add metadata x to a given type T by using the annotation Annotated[T, x] . Metadata added using Annotated can be used by static analysis tools or at runtime. At runtime, the metadata is stored in a __metadata__ attribute.

If a library or tool encounters an annotation Annotated[T, x] and has no special logic for the metadata, it should ignore the metadata and simply treat the annotation as T . As such, Annotated can be useful for code that wants to use annotations for purposes outside Python’s static typing system.

Using Annotated[T, x] as an annotation still allows for static typechecking of T , as type checkers will simply ignore the metadata x . In this way, Annotated differs from the @no_type_check decorator, which can also be used for adding annotations outside the scope of the typing system, but completely disables typechecking for a function or class.

The responsibility of how to interpret the metadata lies with the tool or library encountering an Annotated annotation. A tool or library encountering an Annotated type can scan through the metadata elements to determine if they are of interest (e.g., using isinstance() ).

Here is an example of how you might use Annotated to add metadata to type annotations if you were doing range analysis:
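One possible sketch (the ValueRange class is a hypothetical metadata type defined for this example):

```python
from dataclasses import dataclass
from typing import Annotated

@dataclass
class ValueRange:
    lo: int
    hi: int

T1 = Annotated[int, ValueRange(-10, 5)]
T2 = Annotated[T1, ValueRange(-20, 3)]  # nested Annotated types are flattened
```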

Details of the syntax:

The first argument to Annotated must be a valid type

Multiple metadata elements can be supplied ( Annotated supports variadic arguments):

It is up to the tool consuming the annotations to decide whether the client is allowed to add multiple metadata elements to one annotation and how to merge those annotations.

Annotated must be subscripted with at least two arguments ( Annotated[int] is not valid)

The order of the metadata elements is preserved and matters for equality checks:

Nested Annotated types are flattened. The order of the metadata elements starts with the innermost annotation:

Duplicated metadata elements are not removed:

Annotated can be used with nested and generic aliases:

Annotated cannot be used with an unpacked TypeVarTuple :

This would be equivalent to:

where T1 , T2 , etc. are TypeVars . This would be invalid: only one type should be passed to Annotated.

By default, get_type_hints() strips the metadata from annotations. Pass include_extras=True to have the metadata preserved:

At runtime, the metadata associated with an Annotated type can be retrieved via the __metadata__ attribute:

The PEP introducing Annotated to the standard library.

Added in version 3.9.

Special typing construct for marking user-defined type guard functions.

TypeGuard can be used to annotate the return type of a user-defined type guard function. TypeGuard only accepts a single type argument. At runtime, functions marked this way should return a boolean.

TypeGuard aims to benefit type narrowing – a technique used by static type checkers to determine a more precise type of an expression within a program’s code flow. Usually type narrowing is done by analyzing conditional code flow and applying the narrowing to a block of code. The conditional expression here is sometimes referred to as a “type guard”:

Sometimes it would be convenient to use a user-defined boolean function as a type guard. Such a function should use TypeGuard[...] as its return type to alert static type checkers to this intention.

Using -> TypeGuard tells the static type checker that for a given function:

The return value is a boolean.

If the return value is True , the type of its argument is the type inside TypeGuard .

If is_str_list is a class or instance method, then the type in TypeGuard maps to the type of the second parameter (after cls or self ).

In short, the form def foo(arg: TypeA) -> TypeGuard[TypeB]: ... , means that if foo(arg) returns True , then arg narrows from TypeA to TypeB .

TypeB need not be a narrower form of TypeA – it can even be a wider form. The main reason is to allow for things like narrowing list[object] to list[str] even though the latter is not a subtype of the former, since list is invariant. The responsibility of writing type-safe type guards is left to the user.

TypeGuard also works with type variables. See PEP 647 for more details.

Typing operator to conceptually mark an object as having been unpacked.

For example, using the unpack operator * on a type variable tuple is equivalent to using Unpack to mark the type variable tuple as having been unpacked:

In fact, Unpack can be used interchangeably with * in the context of typing.TypeVarTuple and builtins.tuple types. You might see Unpack being used explicitly in older versions of Python, where * couldn’t be used in certain places:

Unpack can also be used along with typing.TypedDict for typing **kwargs in a function signature:

See PEP 692 for more details on using Unpack for **kwargs typing.

Building generic types and type aliases ¶

The following classes should not be used directly as annotations. Their intended purpose is to be building blocks for creating generic types and type aliases.

These objects can be created through special syntax ( type parameter lists and the type statement). For compatibility with Python 3.11 and earlier, they can also be created without the dedicated syntax, as documented below.

Abstract base class for generic types.

A generic type is typically declared by adding a list of type parameters after the class name:

Such a class implicitly inherits from Generic . The runtime semantics of this syntax are discussed in the Language Reference .

This class can then be used as follows:

Here the brackets after the function name indicate a generic function .

For backwards compatibility, generic classes can also be declared by explicitly inheriting from Generic . In this case, the type parameters must be declared separately:

Type variable.

The preferred way to construct a type variable is via the dedicated syntax for generic functions , generic classes , and generic type aliases :

This syntax can also be used to create bound and constrained type variables:

However, if desired, reusable type variables can also be constructed manually, like so:

Type variables exist primarily for the benefit of static type checkers. They serve as the parameters for generic types as well as for generic function and type alias definitions. See Generic for more information on generic types. Generic functions work as follows:

Note that type variables can be bound , constrained , or neither, but cannot be both bound and constrained.

The variance of type variables is inferred by type checkers when they are created through the type parameter syntax or when infer_variance=True is passed. Manually created type variables may be explicitly marked covariant or contravariant by passing covariant=True or contravariant=True . By default, manually created type variables are invariant. See PEP 484 and PEP 695 for more details.

Bound type variables and constrained type variables have different semantics in several important ways. Using a bound type variable means that the TypeVar will be solved using the most specific type possible:

Type variables can be bound to concrete types, abstract types (ABCs or protocols), and even unions of types:

Using a constrained type variable, however, means that the TypeVar can only ever be solved as being exactly one of the constraints given:

At runtime, isinstance(x, T) will raise TypeError .

The name of the type variable.

Whether the type var has been explicitly marked as covariant.

Whether the type var has been explicitly marked as contravariant.

Whether the type variable’s variance should be inferred by type checkers.

Added in version 3.12.

The bound of the type variable, if any.

Changed in version 3.12: For type variables created through type parameter syntax , the bound is evaluated only when the attribute is accessed, not when the type variable is created (see Lazy evaluation ).

A tuple containing the constraints of the type variable, if any.

Changed in version 3.12: For type variables created through type parameter syntax , the constraints are evaluated only when the attribute is accessed, not when the type variable is created (see Lazy evaluation ).

Changed in version 3.12: Type variables can now be declared using the type parameter syntax introduced by PEP 695 . The infer_variance parameter was added.

Type variable tuple. A specialized form of type variable that enables variadic generics.

Type variable tuples can be declared in type parameter lists using a single asterisk ( * ) before the name:

Or by explicitly invoking the TypeVarTuple constructor:

A normal type variable enables parameterization with a single type. A type variable tuple, in contrast, allows parameterization with an arbitrary number of types by acting like an arbitrary number of type variables wrapped in a tuple. For example:

Note the use of the unpacking operator * in tuple[T, *Ts] . Conceptually, you can think of Ts as a tuple of type variables (T1, T2, ...) . tuple[T, *Ts] would then become tuple[T, *(T1, T2, ...)] , which is equivalent to tuple[T, T1, T2, ...] . (Note that in older versions of Python, you might see this written using Unpack instead, as Unpack[Ts] .)

Type variable tuples must always be unpacked. This helps distinguish type variable tuples from normal type variables:

Type variable tuples can be used in the same contexts as normal type variables. For example, in class definitions, arguments, and return types:

Type variable tuples can be happily combined with normal type variables:

However, note that at most one type variable tuple may appear in a single list of type arguments or type parameters:

Finally, an unpacked type variable tuple can be used as the type annotation of *args :

In contrast to non-unpacked annotations of *args - e.g. *args: int , which would specify that all arguments are int - *args: *Ts enables reference to the types of the individual arguments in *args . Here, this allows us to ensure the types of the *args passed to call_soon match the types of the (positional) arguments of callback .

See PEP 646 for more details on type variable tuples.

The name of the type variable tuple.

Changed in version 3.12: Type variable tuples can now be declared using the type parameter syntax introduced by PEP 695 .

Parameter specification variable. A specialized version of type variables .

In type parameter lists , parameter specifications can be declared with two asterisks ( ** ):

For compatibility with Python 3.11 and earlier, ParamSpec objects can also be created as follows:

Parameter specification variables exist primarily for the benefit of static type checkers. They are used to forward the parameter types of one callable to another callable – a pattern commonly found in higher order functions and decorators. They are only valid when used in Concatenate , or as the first argument to Callable , or as parameters for user-defined Generics. See Generic for more information on generic types.

For example, to add basic logging to a function, one can create a decorator add_logging to log function calls. The parameter specification variable tells the type checker that the callable passed into the decorator and the new callable returned by it have inter-dependent type parameters:

Without ParamSpec , the simplest way to annotate this previously was to use a TypeVar with bound Callable[..., Any] . However this causes two problems:

The type checker can’t type check the inner function because *args and **kwargs have to be typed Any .

cast() may be required in the body of the add_logging decorator when returning the inner function, or the static type checker must be told to ignore the return inner .

Since ParamSpec captures both positional and keyword parameters, P.args and P.kwargs can be used to split a ParamSpec into its components. P.args represents the tuple of positional parameters in a given call and should only be used to annotate *args . P.kwargs represents the mapping of keyword parameters to their values in a given call, and should only be used to annotate **kwargs . Both attributes require the annotated parameter to be in scope. At runtime, P.args and P.kwargs are instances respectively of ParamSpecArgs and ParamSpecKwargs .

The name of the parameter specification.

Parameter specification variables created with covariant=True or contravariant=True can be used to declare covariant or contravariant generic types. The bound argument is also accepted, similar to TypeVar . However the actual semantics of these keywords are yet to be decided.

Changed in version 3.12: Parameter specifications can now be declared using the type parameter syntax introduced by PEP 695 .

Only parameter specification variables defined in global scope can be pickled.

See also: Concatenate .

Arguments and keyword arguments attributes of a ParamSpec . The P.args attribute of a ParamSpec is an instance of ParamSpecArgs , and P.kwargs is an instance of ParamSpecKwargs . They are intended for runtime introspection and have no special meaning to static type checkers.

Calling get_origin() on either of these objects will return the original ParamSpec :

The type of type aliases created through the type statement.

The name of the type alias:

The module in which the type alias was defined:

The type parameters of the type alias, or an empty tuple if the alias is not generic:

The type alias’s value. This is lazily evaluated , so names used in the definition of the alias are not resolved until the __value__ attribute is accessed:

Other special directives ¶

These functions and classes should not be used directly as annotations. Their intended purpose is to be building blocks for creating and declaring types.

Typed version of collections.namedtuple() .

This is equivalent to:

To give a field a default value, you can assign to it in the class body:

Fields with a default value must come after any fields without a default.

The resulting class has an extra attribute __annotations__ giving a dict that maps the field names to the field types. (The field names are in the _fields attribute and the default values are in the _field_defaults attribute, both of which are part of the namedtuple() API.)
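Putting these pieces together in a small sketch (`Employee` is an illustrative name):

```python
from typing import NamedTuple

class Employee(NamedTuple):
    name: str
    id: int = 3          # fields with defaults must come after those without

e = Employee("Guido")
assert e.name == "Guido"
assert e.id == 3                      # default value applied
assert Employee._fields == ("name", "id")
assert Employee._field_defaults == {"id": 3}
assert Employee.__annotations__ == {"name": str, "id": int}
```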

NamedTuple subclasses can also have docstrings and methods:

NamedTuple subclasses can be generic:

Backward-compatible usage:

Changed in version 3.6: Added support for PEP 526 variable annotation syntax.

Changed in version 3.6.1: Added support for default values, methods, and docstrings.

Changed in version 3.8: The _field_types and __annotations__ attributes are now regular dictionaries instead of instances of OrderedDict .

Changed in version 3.9: Removed the _field_types attribute in favor of the more standard __annotations__ attribute which has the same information.

Changed in version 3.11: Added support for generic namedtuples.

Helper class to create low-overhead distinct types .

A NewType is considered a distinct type by a type checker. At runtime, however, calling a NewType returns its argument unchanged.

The module in which the new type is defined.

The name of the new type.

The type that the new type is based on.
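A minimal illustration of the runtime behavior and these attributes (`UserId` is an illustrative name):

```python
from typing import NewType

UserId = NewType("UserId", int)

uid = UserId(524313)
assert uid == 524313                 # the argument is returned unchanged
assert UserId.__name__ == "UserId"
assert UserId.__supertype__ is int
```

To a type checker, however, `UserId` and `int` are distinct: a function annotated to take `UserId` will not accept a plain `int`.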

Changed in version 3.10: NewType is now a class rather than a function.

Base class for protocol classes.

Protocol classes are defined like this:

Such classes are primarily used with static type checkers that recognize structural subtyping (static duck-typing), for example:
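A sketch of structural subtyping with a protocol (`SupportsMeth` and `C` are illustrative names; note that `C` never inherits from the protocol):

```python
from typing import Protocol

class SupportsMeth(Protocol):
    def meth(self) -> int: ...

class C:
    def meth(self) -> int:           # matches the protocol structurally
        return 0

def func(x: SupportsMeth) -> int:
    return x.meth()

assert func(C()) == 0                # passes static type checking, too
```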

See PEP 544 for more details. Protocol classes decorated with runtime_checkable() (described later) act as simple-minded runtime protocols that check only the presence of given attributes, ignoring their type signatures.

Protocol classes can be generic, for example:

In code that needs to be compatible with Python 3.11 or older, generic Protocols can be written as follows:

Mark a protocol class as a runtime protocol.

Such a protocol can be used with isinstance() and issubclass() . This raises TypeError when applied to a non-protocol class. This allows a simple-minded structural check, very similar to “one trick ponies” in collections.abc such as Iterable . For example:
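A sketch of such a runtime check (`Closable` is an illustrative protocol; `io.StringIO` happens to have a matching `close()` method):

```python
import io
from typing import Protocol, runtime_checkable

@runtime_checkable
class Closable(Protocol):
    def close(self) -> None: ...

assert isinstance(io.StringIO(), Closable)   # StringIO has close()
assert not isinstance(42, Closable)          # int does not
```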

runtime_checkable() will check only the presence of the required methods or attributes, not their type signatures or types. For example, ssl.SSLObject is a class, therefore it passes an issubclass() check against Callable . However, the ssl.SSLObject.__init__ method exists only to raise a TypeError with a more informative message, therefore making it impossible to call (instantiate) ssl.SSLObject .

An isinstance() check against a runtime-checkable protocol can be surprisingly slow compared to an isinstance() check against a non-protocol class. Consider using alternative idioms such as hasattr() calls for structural checks in performance-sensitive code.

Changed in version 3.12: The internal implementation of isinstance() checks against runtime-checkable protocols now uses inspect.getattr_static() to look up attributes (previously, hasattr() was used). As a result, some objects which used to be considered instances of a runtime-checkable protocol may no longer be considered instances of that protocol on Python 3.12+, and vice versa. Most users are unlikely to be affected by this change.

Changed in version 3.12: The members of a runtime-checkable protocol are now considered “frozen” at runtime as soon as the class has been created. Monkey-patching attributes onto a runtime-checkable protocol will still work, but will have no impact on isinstance() checks comparing objects to the protocol. See “What’s new in Python 3.12” for more details.

Special construct to add type hints to a dictionary. At runtime it is a plain dict .

TypedDict declares a dictionary type that expects all of its instances to have a certain set of keys, where each key is associated with a value of a consistent type. This expectation is not checked at runtime but is only enforced by type checkers. Usage:
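A sketch of the class-based syntax (`Point2D` is an illustrative name; at runtime the class is just a `dict` factory):

```python
from typing import TypedDict

class Point2D(TypedDict):
    x: int
    y: int
    label: str

a: Point2D = {"x": 1, "y": 2, "label": "good"}   # OK for a type checker
# b: Point2D = {"z": 3, "label": "bad"}          # would fail type checking

assert Point2D(x=1, y=2, label="first") == dict(x=1, y=2, label="first")
```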

To allow using this feature with older versions of Python that do not support PEP 526 , TypedDict supports two additional equivalent syntactic forms:

Using a literal dict as the second argument:

Using keyword arguments:

Deprecated since version 3.11, will be removed in version 3.13: The keyword-argument syntax is deprecated in 3.11 and will be removed in 3.13. It may also be unsupported by static type checkers.

The functional syntax should also be used when any of the keys are not valid identifiers , for example because they are keywords or contain hyphens. Example:
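For instance (illustrative keys: `in` is a keyword and `x-y` contains a hyphen, so the class-based syntax cannot declare them):

```python
from typing import TypedDict

Point2D = TypedDict("Point2D", {"in": int, "x-y": int})

p: Point2D = {"in": 1, "x-y": 2}
assert p["x-y"] == 2
```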

By default, all keys must be present in a TypedDict . It is possible to mark individual keys as non-required using NotRequired :

This means that a Point2D TypedDict can have the label key omitted.

It is also possible to mark all keys as non-required by default by specifying a totality of False :

This means that a Point2D TypedDict can have any of the keys omitted. A type checker is only expected to support a literal False or True as the value of the total argument. True is the default, and makes all items defined in the class body required.

Individual keys of a total=False TypedDict can be marked as required using Required :

It is possible for a TypedDict type to inherit from one or more other TypedDict types using the class-based syntax. Usage:

Point3D has three items: x , y and z . It is equivalent to this definition:

A TypedDict cannot inherit from a non- TypedDict class, except for Generic . For example:

A TypedDict can be generic:

To create a generic TypedDict that is compatible with Python 3.11 or lower, inherit from Generic explicitly:

A TypedDict can be introspected via its annotations dict (see Annotations Best Practices for more information), __total__ , __required_keys__ , and __optional_keys__ .

Point2D.__total__ gives the value of the total argument. Example:

This attribute reflects only the value of the total argument to the current TypedDict class, not whether the class is semantically total. For example, a TypedDict with __total__ set to True may have keys marked with NotRequired , or it may inherit from another TypedDict with total=False . Therefore, it is generally better to use __required_keys__ and __optional_keys__ for introspection.

Point2D.__required_keys__ and Point2D.__optional_keys__ return frozenset objects containing required and non-required keys, respectively.

Keys marked with Required will always appear in __required_keys__ and keys marked with NotRequired will always appear in __optional_keys__ .

For backwards compatibility with Python 3.10 and below, it is also possible to use inheritance to declare both required and non-required keys in the same TypedDict . This is done by declaring a TypedDict with one value for the total argument and then inheriting from it in another TypedDict with a different value for total :

If from __future__ import annotations is used or if annotations are given as strings, annotations are not evaluated when the TypedDict is defined. Therefore, the runtime introspection that __required_keys__ and __optional_keys__ rely on may not work properly, and the values of the attributes may be incorrect.

See PEP 589 for more examples and detailed rules of using TypedDict .

Changed in version 3.11: Added support for marking individual keys as Required or NotRequired . See PEP 655 .

Changed in version 3.11: Added support for generic TypedDict s.

Protocols ¶

The following protocols are provided by the typing module. All are decorated with @runtime_checkable .

An ABC with one abstract method __abs__ that is covariant in its return type.

An ABC with one abstract method __bytes__ .

An ABC with one abstract method __complex__ .

An ABC with one abstract method __float__ .

An ABC with one abstract method __index__ .

An ABC with one abstract method __int__ .

An ABC with one abstract method __round__ that is covariant in its return type.

ABCs for working with IO ¶

Generic type IO[AnyStr] and its subclasses TextIO(IO[str]) and BinaryIO(IO[bytes]) represent the types of I/O streams such as returned by open() .

Functions and decorators ¶

Cast a value to a type.

This returns the value unchanged. To the type checker this signals that the return value has the designated type, but at runtime we intentionally don’t check anything (we want this to be as fast as possible).
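A typical use is telling the checker about a fact it cannot infer on its own (`find_first_str` is an illustrative helper):

```python
from typing import cast

def find_first_str(a: list[object]) -> str:
    index = next(i for i, x in enumerate(a) if isinstance(x, str))
    # We only reach this line if the list contains a string, but the
    # type checker still sees a[index] as object; cast() bridges the gap.
    return cast(str, a[index])

assert find_first_str([1, "two", 3]) == "two"
```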

Ask a static type checker to confirm that val has an inferred type of typ .

At runtime this does nothing: it returns the first argument unchanged with no checks or side effects, no matter the actual type of the argument.

When a static type checker encounters a call to assert_type() , it emits an error if the value is not of the specified type:

This function is useful for ensuring the type checker’s understanding of a script is in line with the developer’s intentions:

Ask a static type checker to confirm that a line of code is unreachable.

Here, the annotations allow the type checker to infer that the last case can never execute, because arg is either an int or a str , and both options are covered by earlier cases.

If a type checker finds that a call to assert_never() is reachable, it will emit an error. For example, if the type annotation for arg was instead int | str | float , the type checker would emit an error pointing out that unreachable is of type float . For a call to assert_never to pass type checking, the inferred type of the argument passed in must be the bottom type, Never , and nothing else.

At runtime, this throws an exception when called.

Unreachable Code and Exhaustiveness Checking has more information about exhaustiveness checking with static typing.

Ask a static type checker to reveal the inferred type of an expression.

When a static type checker encounters a call to this function, it emits a diagnostic with the inferred type of the argument. For example:

This can be useful when you want to debug how your type checker handles a particular piece of code.

At runtime, this function prints the runtime type of its argument to sys.stderr and returns the argument unchanged (allowing the call to be used within an expression):

Note that the runtime type may be different from (more or less specific than) the type statically inferred by a type checker.

Most type checkers support reveal_type() anywhere, even if the name is not imported from typing . Importing the name from typing , however, allows your code to run without runtime errors and communicates intent more clearly.

Decorator to mark an object as providing dataclass -like behavior.

dataclass_transform may be used to decorate a class, metaclass, or a function that is itself a decorator. The presence of @dataclass_transform() tells a static type checker that the decorated object performs runtime “magic” that transforms a class in a similar way to @dataclasses.dataclass .

Example usage with a decorator function:

On a base class:

On a metaclass:

The CustomerModel classes defined above will be treated by type checkers similarly to classes created with @dataclasses.dataclass . For example, type checkers will assume these classes have __init__ methods that accept id and name .

The decorated class, metaclass, or function may accept the following bool arguments which type checkers will assume have the same effect as they would have on the @dataclasses.dataclass decorator: init , eq , order , unsafe_hash , frozen , match_args , kw_only , and slots . It must be possible for the value of these arguments ( True or False ) to be statically evaluated.

The arguments to the dataclass_transform decorator can be used to customize the default behaviors of the decorated class, metaclass, or function:

eq_default ( bool ) – Indicates whether the eq parameter is assumed to be True or False if it is omitted by the caller. Defaults to True .

order_default ( bool ) – Indicates whether the order parameter is assumed to be True or False if it is omitted by the caller. Defaults to False .

kw_only_default ( bool ) – Indicates whether the kw_only parameter is assumed to be True or False if it is omitted by the caller. Defaults to False .

frozen_default ( bool ) – Indicates whether the frozen parameter is assumed to be True or False if it is omitted by the caller. Defaults to False .

field_specifiers ( tuple [ Callable [ ... , Any ] , ... ] ) – Specifies a static list of supported classes or functions that describe fields, similar to dataclasses.field() . Defaults to () .

**kwargs ( Any ) – Arbitrary other keyword arguments are accepted in order to allow for possible future extensions.

Type checkers recognize the following optional parameters on field specifiers:

init – Indicates whether the field should be included in the synthesized __init__ method. If unspecified, init defaults to True .

default – Provides the default value for the field.

default_factory – Provides a runtime callback that returns the default value for the field. If neither default nor default_factory are specified, the field is assumed to have no default value and must be provided a value when the class is instantiated.

factory – An alias for the default_factory parameter on field specifiers.

kw_only – Indicates whether the field should be marked as keyword-only. If true, the field will be keyword-only. If false, it will not be keyword-only. If unspecified, the value of the kw_only parameter on the object decorated with dataclass_transform will be used, or if that is unspecified, the value of kw_only_default on dataclass_transform() will be used.

alias – Provides an alternative name for the field. This alternative name is used in the synthesized __init__ method.

At runtime, this decorator records its arguments in the __dataclass_transform__ attribute on the decorated object. It has no other runtime effect.

See PEP 681 for more details.

Decorator for creating overloaded functions and methods.

The @overload decorator allows describing functions and methods that support multiple different combinations of argument types. A series of @overload -decorated definitions must be followed by exactly one non- @overload -decorated definition (for the same function/method).

@overload -decorated definitions are for the benefit of the type checker only, since they will be overwritten by the non- @overload -decorated definition. The non- @overload -decorated definition, meanwhile, will be used at runtime but should be ignored by a type checker. At runtime, calling an @overload -decorated function directly will raise NotImplementedError .

An example of overload that gives a more precise type than can be expressed using a union or a type variable:

See PEP 484 for more details and comparison with other typing semantics.

Changed in version 3.11: Overloaded functions can now be introspected at runtime using get_overloads() .

Return a sequence of @overload -decorated definitions for func .

func is the function object for the implementation of the overloaded function. For example, given the definition of process in the documentation for @overload , get_overloads(process) will return a sequence of three function objects for the three defined overloads. If called on a function with no overloads, get_overloads() returns an empty sequence.

get_overloads() can be used for introspecting an overloaded function at runtime.

Clear all registered overloads in the internal registry.

This can be used to reclaim the memory used by the registry.

Decorator to indicate final methods and final classes.

Decorating a method with @final indicates to a type checker that the method cannot be overridden in a subclass. Decorating a class with @final indicates that it cannot be subclassed.

Changed in version 3.11: The decorator will now attempt to set a __final__ attribute to True on the decorated object. Thus, a check like if getattr(obj, "__final__", False) can be used at runtime to determine whether an object obj has been marked as final. If the decorated object does not support setting attributes, the decorator returns the object unchanged without raising an exception.

Decorator to indicate that annotations are not type hints.

This works as a class or function decorator . With a class, it applies recursively to all methods and classes defined in that class (but not to methods defined in its superclasses or subclasses). Type checkers will ignore all annotations in a function or class with this decorator.

@no_type_check mutates the decorated object in place.

Decorator to give another decorator the no_type_check() effect.

This wraps the decorator with something that wraps the decorated function in no_type_check() .

Decorator to indicate that a method in a subclass is intended to override a method or attribute in a superclass.

Type checkers should emit an error if a method decorated with @override does not, in fact, override anything. This helps prevent bugs that may occur when a base class is changed without an equivalent change to a child class.

There is no runtime checking of this property.

The decorator will attempt to set an __override__ attribute to True on the decorated object. Thus, a check like if getattr(obj, "__override__", False) can be used at runtime to determine whether an object obj has been marked as an override. If the decorated object does not support setting attributes, the decorator returns the object unchanged without raising an exception.

See PEP 698 for more details.

Decorator to mark a class or function as unavailable at runtime.

This decorator is itself not available at runtime. It is mainly intended to mark classes that are defined in type stub files if an implementation returns an instance of a private class:

Note that returning instances of private classes is not recommended. It is usually preferable to make such classes public.

Introspection helpers ¶

Return a dictionary containing type hints for a function, method, module or class object.

This is often the same as obj.__annotations__ , but this function makes the following changes to the annotations dictionary:

Forward references encoded as string literals or ForwardRef objects are handled by evaluating them in globalns , localns , and (where applicable) obj ’s type parameter namespace. If globalns or localns is not given, appropriate namespace dictionaries are inferred from obj .

None is replaced with types.NoneType .

If @no_type_check has been applied to obj , an empty dictionary is returned.

If obj is a class C , the function returns a dictionary that merges annotations from C ’s base classes with those on C directly. This is done by traversing C.__mro__ and iteratively combining __annotations__ dictionaries. Annotations on classes appearing earlier in the method resolution order always take precedence over annotations on classes appearing later in the method resolution order.

The function recursively replaces all occurrences of Annotated[T, ...] with T , unless include_extras is set to True (see Annotated for more information).
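A sketch of the merging and forward-reference behavior (`Base` and `Derived` are illustrative names):

```python
from typing import get_type_hints

class Base:
    x: int

class Derived(Base):
    y: "str"        # a forward reference, resolved by get_type_hints()

hints = get_type_hints(Derived)
assert hints == {"x": int, "y": str}   # base-class annotations are merged in
```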

See also inspect.get_annotations() , a lower-level function that returns annotations more directly.

If any forward references in the annotations of obj are not resolvable or are not valid Python code, this function will raise an exception such as NameError . For example, this can happen with imported type aliases that include forward references, or with names imported under if TYPE_CHECKING .

Changed in version 3.9: Added include_extras parameter as part of PEP 593 . See the documentation on Annotated for more information.

Changed in version 3.11: Previously, Optional[t] was added for function and method annotations if a default value equal to None was set. Now the annotation is returned unchanged.

Get the unsubscripted version of a type: for a typing object of the form X[Y, Z, ...] return X .

If X is a typing-module alias for a builtin or collections class, it will be normalized to the original class. If X is an instance of ParamSpecArgs or ParamSpecKwargs , return the underlying ParamSpec . Return None for unsupported objects.

Get type arguments with all substitutions performed: for a typing object of the form X[Y, Z, ...] return (Y, Z, ...) .

If X is a union or Literal contained in another generic type, the order of (Y, Z, ...) may be different from the order of the original arguments [Y, Z, ...] due to type caching. Return () for unsupported objects.
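A few quick illustrations of both helpers:

```python
from typing import Dict, Union, get_args, get_origin

assert get_origin(dict[str, int]) is dict
assert get_origin(Dict[str, int]) is dict      # typing alias normalizes to dict
assert get_origin(Union[int, str]) is Union
assert get_origin(str) is None                 # unsupported object

assert get_args(dict[int, str]) == (int, str)
assert get_args(Union[int, str]) == (int, str)
assert get_args(int) == ()                     # unsupported object
```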

Check if a type is a TypedDict .

Class used for internal typing representation of string forward references.

For example, List["SomeClass"] is implicitly transformed into List[ForwardRef("SomeClass")] . ForwardRef should not be instantiated by a user, but may be used by introspection tools.

PEP 585 generic types such as list["SomeClass"] will not be implicitly transformed into list[ForwardRef("SomeClass")] and thus will not automatically resolve to list[SomeClass] .

Added in version 3.7.4.

A special constant that is assumed to be True by 3rd party static type checkers. It is False at runtime.
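A sketch of the guarded-import pattern discussed below (`expensive_mod`, `SomeType`, `AnotherType`, and `other_fun` are placeholder names; because `TYPE_CHECKING` is `False` at runtime, the import inside the `if` block never executes):

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    import expensive_mod   # seen by type checkers, skipped at runtime

def fun(arg: "expensive_mod.SomeType") -> None:
    local_var: expensive_mod.AnotherType = other_fun()

assert TYPE_CHECKING is False
```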

The first type annotation must be enclosed in quotes, making it a “forward reference”, to hide the expensive_mod reference from the interpreter runtime. Type annotations for local variables are not evaluated, so the second annotation does not need to be enclosed in quotes.

If from __future__ import annotations is used, annotations are not evaluated at function definition time. Instead, they are stored as strings in __annotations__ . This makes it unnecessary to use quotes around the annotation (see PEP 563 ).

Deprecated aliases ¶

This module defines several deprecated aliases to pre-existing standard library classes. These were originally included in the typing module in order to support parameterizing these generic classes using [] . However, the aliases became redundant in Python 3.9 when the corresponding pre-existing classes were enhanced to support [] (see PEP 585 ).

The redundant types are deprecated as of Python 3.9. However, while the aliases may be removed at some point, removal of these aliases is not currently planned. As such, no deprecation warnings are currently issued by the interpreter for these aliases.

If at some point it is decided to remove these deprecated aliases, a deprecation warning will be issued by the interpreter for at least two releases prior to removal. The aliases are guaranteed to remain in the typing module without deprecation warnings until at least Python 3.14.

Type checkers are encouraged to flag uses of the deprecated types if the program they are checking targets a minimum Python version of 3.9 or newer.

Aliases to built-in types ¶

Deprecated alias to dict .

Note that to annotate arguments, it is preferred to use an abstract collection type such as Mapping rather than to use dict or typing.Dict .

This type can be used as follows:

Deprecated since version 3.9: builtins.dict now supports subscripting ( [] ). See PEP 585 and Generic Alias Type .

Deprecated alias to list .

Note that to annotate arguments, it is preferred to use an abstract collection type such as Sequence or Iterable rather than to use list or typing.List .

This type may be used as follows:

Deprecated since version 3.9: builtins.list now supports subscripting ( [] ). See PEP 585 and Generic Alias Type .

Deprecated alias to builtins.set .

Note that to annotate arguments, it is preferred to use an abstract collection type such as AbstractSet rather than to use set or typing.Set .

Deprecated since version 3.9: builtins.set now supports subscripting ( [] ). See PEP 585 and Generic Alias Type .

Deprecated alias to builtins.frozenset .

Deprecated since version 3.9: builtins.frozenset now supports subscripting ( [] ). See PEP 585 and Generic Alias Type .

Deprecated alias for tuple .

tuple and Tuple are special-cased in the type system; see Annotating tuples for more details.

Deprecated since version 3.9: builtins.tuple now supports subscripting ( [] ). See PEP 585 and Generic Alias Type .

Deprecated alias to type .

See The type of class objects for details on using type or typing.Type in type annotations.

Deprecated since version 3.9: builtins.type now supports subscripting ( [] ). See PEP 585 and Generic Alias Type .

Aliases to types in collections ¶

Deprecated alias to collections.defaultdict .

Deprecated since version 3.9: collections.defaultdict now supports subscripting ( [] ). See PEP 585 and Generic Alias Type .

Deprecated alias to collections.OrderedDict .

Added in version 3.7.2.

Deprecated since version 3.9: collections.OrderedDict now supports subscripting ( [] ). See PEP 585 and Generic Alias Type .

Deprecated alias to collections.ChainMap .

Added in version 3.6.1.

Deprecated since version 3.9: collections.ChainMap now supports subscripting ( [] ). See PEP 585 and Generic Alias Type .

Deprecated alias to collections.Counter .

Deprecated since version 3.9: collections.Counter now supports subscripting ( [] ). See PEP 585 and Generic Alias Type .

Deprecated alias to collections.deque .

Deprecated since version 3.9: collections.deque now supports subscripting ( [] ). See PEP 585 and Generic Alias Type .

Aliases to other concrete types ¶

Deprecated since version 3.8, will be removed in version 3.13: The typing.io namespace is deprecated and will be removed. These types should be directly imported from typing instead.

Deprecated aliases corresponding to the return types from re.compile() and re.match() .

These types (and the corresponding functions) are generic over AnyStr . Pattern can be specialised as Pattern[str] or Pattern[bytes] ; Match can be specialised as Match[str] or Match[bytes] .

Deprecated since version 3.8, will be removed in version 3.13: The typing.re namespace is deprecated and will be removed. These types should be directly imported from typing instead.

Deprecated since version 3.9: Classes Pattern and Match from re now support [] . See PEP 585 and Generic Alias Type .

Deprecated alias for str .

Text is provided to supply a forward compatible path for Python 2 code: in Python 2, Text is an alias for unicode .

Use Text to indicate that a value must contain a unicode string in a manner that is compatible with both Python 2 and Python 3:

Deprecated since version 3.11: Python 2 is no longer supported, and most type checkers also no longer support type checking Python 2 code. Removal of the alias is not currently planned, but users are encouraged to use str instead of Text .

Aliases to container ABCs in collections.abc ¶

Deprecated alias to collections.abc.Set .

Deprecated since version 3.9: collections.abc.Set now supports subscripting ( [] ). See PEP 585 and Generic Alias Type .

This type represents the types bytes , bytearray , and memoryview of byte sequences.

Deprecated since version 3.9, will be removed in version 3.14: Prefer collections.abc.Buffer , or a union like bytes | bytearray | memoryview .

Deprecated alias to collections.abc.Collection .

Added in version 3.6.

Deprecated since version 3.9: collections.abc.Collection now supports subscripting ( [] ). See PEP 585 and Generic Alias Type .

Deprecated alias to collections.abc.Container .

Deprecated since version 3.9: collections.abc.Container now supports subscripting ( [] ). See PEP 585 and Generic Alias Type .

Deprecated alias to collections.abc.ItemsView .

Deprecated since version 3.9: collections.abc.ItemsView now supports subscripting ( [] ). See PEP 585 and Generic Alias Type .

Deprecated alias to collections.abc.KeysView .

Deprecated since version 3.9: collections.abc.KeysView now supports subscripting ( [] ). See PEP 585 and Generic Alias Type .

Deprecated alias to collections.abc.Mapping .

Deprecated since version 3.9: collections.abc.Mapping now supports subscripting ( [] ). See PEP 585 and Generic Alias Type .

Deprecated alias to collections.abc.MappingView .

Deprecated since version 3.9: collections.abc.MappingView now supports subscripting ( [] ). See PEP 585 and Generic Alias Type .

Deprecated alias to collections.abc.MutableMapping .

Deprecated since version 3.9: collections.abc.MutableMapping now supports subscripting ( [] ). See PEP 585 and Generic Alias Type .

Deprecated alias to collections.abc.MutableSequence .

Deprecated since version 3.9: collections.abc.MutableSequence now supports subscripting ( [] ). See PEP 585 and Generic Alias Type .

Deprecated alias to collections.abc.MutableSet .

Deprecated since version 3.9: collections.abc.MutableSet now supports subscripting ( [] ). See PEP 585 and Generic Alias Type .

Deprecated alias to collections.abc.Sequence .

Deprecated since version 3.9: collections.abc.Sequence now supports subscripting ( [] ). See PEP 585 and Generic Alias Type .

Deprecated alias to collections.abc.ValuesView .

Deprecated since version 3.9: collections.abc.ValuesView now supports subscripting ( [] ). See PEP 585 and Generic Alias Type .

Aliases to asynchronous ABCs in collections.abc ¶

Deprecated alias to collections.abc.Coroutine .

The variance and order of type variables correspond to those of Generator , for example:

Deprecated since version 3.9: collections.abc.Coroutine now supports subscripting ( [] ). See PEP 585 and Generic Alias Type .

Deprecated alias to collections.abc.AsyncGenerator .

An async generator can be annotated by the generic type AsyncGenerator[YieldType, SendType] . For example:
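A runnable sketch of an annotated async generator (`countdown` and `collect` are illustrative names; this one takes no sent values, so its SendType is `None`):

```python
import asyncio
from collections.abc import AsyncGenerator

async def countdown(n: int) -> AsyncGenerator[int, None]:
    while n > 0:
        yield n
        n -= 1

async def collect() -> list[int]:
    return [value async for value in countdown(3)]

result = asyncio.run(collect())
assert result == [3, 2, 1]
```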

Unlike normal generators, async generators cannot return a value, so there is no ReturnType type parameter. As with Generator , the SendType behaves contravariantly.

If your generator will only yield values, set the SendType to None :

Alternatively, annotate your generator as having a return type of either AsyncIterable[YieldType] or AsyncIterator[YieldType] :

Deprecated since version 3.9: collections.abc.AsyncGenerator now supports subscripting ( [] ). See PEP 585 and Generic Alias Type .

Deprecated alias to collections.abc.AsyncIterable .

Deprecated since version 3.9: collections.abc.AsyncIterable now supports subscripting ( [] ). See PEP 585 and Generic Alias Type .

Deprecated alias to collections.abc.AsyncIterator .

Deprecated since version 3.9: collections.abc.AsyncIterator now supports subscripting ( [] ). See PEP 585 and Generic Alias Type .

Deprecated alias to collections.abc.Awaitable .

Deprecated since version 3.9: collections.abc.Awaitable now supports subscripting ( [] ). See PEP 585 and Generic Alias Type .

Aliases to other ABCs in collections.abc ¶

Deprecated alias to collections.abc.Iterable .

Deprecated since version 3.9: collections.abc.Iterable now supports subscripting ( [] ). See PEP 585 and Generic Alias Type .

Deprecated alias to collections.abc.Iterator .

Deprecated since version 3.9: collections.abc.Iterator now supports subscripting ( [] ). See PEP 585 and Generic Alias Type .

Deprecated alias to collections.abc.Callable .

See Annotating callable objects for details on how to use collections.abc.Callable and typing.Callable in type annotations.

Deprecated since version 3.9: collections.abc.Callable now supports subscripting ( [] ). See PEP 585 and Generic Alias Type .

Deprecated alias to collections.abc.Generator .

A generator can be annotated by the generic type Generator[YieldType, SendType, ReturnType] . For example:
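For example (an illustrative sketch; `count_up` is a made-up generator):

```python
from collections.abc import Generator

def count_up(limit: int) -> Generator[int, None, str]:
    # Yields ints, receives nothing via send(), and returns a str.
    for i in range(limit):
        yield i
    return "done"
```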

Note that unlike many other generics in the typing module, the SendType of Generator behaves contravariantly, not covariantly or invariantly.

If your generator will only yield values, set the SendType and ReturnType to None :

Alternatively, annotate your generator as having a return type of either Iterable[YieldType] or Iterator[YieldType] :

Deprecated since version 3.9: collections.abc.Generator now supports subscripting ( [] ). See PEP 585 and Generic Alias Type .

Deprecated alias to collections.abc.Hashable .

Deprecated since version 3.12: Use collections.abc.Hashable directly instead.

Deprecated alias to collections.abc.Reversible .

Deprecated since version 3.9: collections.abc.Reversible now supports subscripting ( [] ). See PEP 585 and Generic Alias Type .

Deprecated alias to collections.abc.Sized .

Deprecated since version 3.12: Use collections.abc.Sized directly instead.

Aliases to contextlib ABCs ¶

Deprecated alias to contextlib.AbstractContextManager .

Added in version 3.5.4.

Deprecated since version 3.9: contextlib.AbstractContextManager now supports subscripting ( [] ). See PEP 585 and Generic Alias Type .

Deprecated alias to contextlib.AbstractAsyncContextManager .

Added in version 3.6.2.

Deprecated since version 3.9: contextlib.AbstractAsyncContextManager now supports subscripting ( [] ). See PEP 585 and Generic Alias Type .

Deprecation Timeline of Major Features ¶

Certain features in typing are deprecated and may be removed in a future version of Python. The following table summarizes major deprecations for your convenience. This is subject to change, and not all deprecations are listed.

| Feature | Deprecated in | Projected removal | PEP/issue |
|---|---|---|---|
| `typing.io` and `typing.re` submodules | 3.8 | 3.13 | bpo-38291 |
| `typing` versions of standard collections | 3.9 | Undecided (see Deprecated aliases for more information) | PEP 585 |
| `typing.ByteString` | 3.9 | 3.14 | gh-91896 |
| `typing.Text` | 3.11 | Undecided | |
| `typing.Hashable` and `typing.Sized` | 3.12 | Undecided | |
| `typing.TypeAlias` | 3.12 | Undecided | PEP 695 |


Understanding type annotation in Python


Python is widely recognized as a dynamically typed language, which means the data type of a variable is determined at runtime. In other words, as a Python developer you are not required to declare the data type a variable accepts, because Python infers it from the value the variable currently holds.


The flexibility of this feature, however, comes with some disadvantages that you typically would not experience when using a statically typed language like Java or C++:

  • More errors are detected at runtime that could have been avoided at development time
  • The absence of compilation can lead to poorly performing code
  • Code that is harder to read and reason about
  • Incorrect assumptions about the behavior of specific functions
  • Errors due to type mismatches

Python 3.5 introduced type hints , which you can add to your code using the type annotations introduced in Python 3.0. With type hints, you can annotate variables and functions with datatypes. Tools like mypy , pyright , pytypes , or pyre perform the functions of static type-checking and provide hints or warnings when these types are used inconsistently.

This tutorial will explore type hints and how you can add them to your Python code. It will focus on the mypy static type-checking tool and its operations in your code. You’ll learn how to annotate variables, functions, lists, dictionaries, and tuples. You’ll also learn how to work with the Protocol class, function overloading, and annotating constants.

  • What is static type checking?
  • Adding type hints to variables
  • Adding type hints to functions
  • The Any type
  • Configuring mypy for type checking
  • Adding type hints to functions without return statements
  • Adding union type hints in function parameters
  • When to use the Iterable type to annotate function parameters
  • When to use the Sequence type
  • When to use the Mapping class
  • Using the MutableMapping class as a type hint
  • Using the TypedDict class as a type hint
  • Adding type hints to tuples
  • Creating and using protocols
  • Annotating overloaded functions
  • Annotating constants with Final
  • Dealing with type-checking in third-party packages
  • Before you begin

To get the most out of this tutorial, you should have:

  • Python ≥3.10 installed
  • Knowledge of how to write functions, f-strings , and running Python code
  • Knowledge of how to use the command-line

We recommend Python ≥3.10, as those versions have new and better type-hinting features. If you’re using Python ≤3.9, Python provides an alternative type-hint syntax that I’ll demonstrate in the tutorial.

When declaring a variable in statically-typed languages like C and Java, you are mandated to declare the data type of the variable. As a result, you cannot assign a value that does not conform to the data type you specified for the variable. For example, if you declare a variable to be an integer, you can’t assign a string value to it at any point in time.

In statically-typed languages, a compiler monitors the code as it is written and strictly ensures that the developer abides by the rules of the language. If no issues are found, the program can be run.

Using static type-checkers has numerous advantages; some of which include:

  • Detecting type errors
  • Preventing bugs
  • Documenting your code — anyone who wants to use an annotated function will know the type of parameters it accepts and the return value type at a glance
  • Additionally, IDEs understand your code much better and offer good autocompletion suggestions

Static typing in Python is optional and can be introduced gradually (this is known as gradual typing). With gradual typing, you can choose which portions of your code should be dynamically or statically typed. The static type-checkers will ignore the dynamically typed portions of your code, so they will not emit warnings for code without type hints, nor will inconsistent types prevent the program from running.

What is mypy?

Since Python is a dynamically typed language by default, tools like mypy were created to give you the benefits of a statically typed environment. mypy is an optional static type checker created by Jukka Lehtosalo. It checks annotated code in Python and emits warnings if annotated types are used inconsistently.

mypy also checks the code syntax and issues syntax errors when it encounters invalid syntax. Additionally, it supports gradual typing, allowing you to add type hints to your code slowly, at your own pace.

In Python, you can define a variable with a type hint using the following syntax:

Let’s look at the following variable:

You assign a string value "rocket" to the name variable.

To annotate the variable, you need to append a colon ( : ) after the variable name, and declare a type str :

In Python, you can read the type hints defined on variables using the __annotations__ dictionary:

The __annotations__ dictionary will show you the type hints on all global variables.
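Putting the pieces together, annotating the variable and then reading `__annotations__` might look like this (a minimal sketch):

```python
name: str = "rocket"

# At module level, __annotations__ maps variable names to their declared
# types, e.g. {'name': <class 'str'>}.
print(__annotations__)
```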


As mentioned earlier, the Python interpreter does not enforce types, so defining a variable with a wrong type won’t trigger an error:

On the other hand, a static type checker like mypy will flag this as an error:

Declaring type hints for other data types follows the same syntax. The following are some of the simple types you can use to annotate variables:

  • float : float values, such as 3.10
  • int : integers, such as 3 , 7
  • str : strings, such as 'hello'
  • bool : boolean value, which can be True or False
  • bytes : represents byte values, such as b'hello'

Annotating variables with simple types like int or str may not be necessary, because mypy can infer those types. However, when working with complex data types like lists, dictionaries, or tuples, it is important to declare type hints on the corresponding variables, because mypy may struggle to infer their types.

Adding type hints to functions

To annotate a function, declare the annotation after each parameter and the return value:

Let’s annotate the following function that returns a message:

The function accepts a string as the first parameter, a float as the second parameter, and returns a string. To annotate the function parameters, we will append a colon( : ) after each parameter and follow it with the parameter type:

  • language: str
  • version: float

To annotate return value type, add -> immediately after closing the parameter parentheses, just before the function definition colon( : ):
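A sketch of the fully annotated function; the name announcement and the exact message text are assumptions based on the surrounding description:

```python
def announcement(language: str, version: float) -> str:
    # str and float parameters, str return value.
    return f"{language} version {version} has been released!"

print(announcement("Python", 3.10))
```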

The function now has type hints showing that it receives str and float arguments, and returns str .

When you invoke the function, the output should be similar to what is obtained as follows:

Although our code has type hints, the Python interpreter won’t provide warnings if you invoke the function with wrong arguments:

The function executes successfully, even though you passed a Boolean True as the first argument and a string "Python" as the second. To receive warnings about these mistakes, we need to use a static type-checker like mypy.

Static type-checking with mypy

We will now begin our tutorial on static type-checking with mypy to get warnings about type errors in our code.

Create a directory called type_hints and move into it:

Create and activate the virtual environment:

Install the latest version of mypy with pip :

With mypy installed, create a file called announcement.py and enter the following code:

Save the file and exit. We’re going to reuse the same function from the previous section.

Next, run the file with mypy:

As you can see, mypy does not emit any warnings. Static typing in Python is optional, and with gradual typing, you should not receive any warnings unless you opt in by adding type hints to functions. This allows you to annotate your code slowly.

Let’s now understand why mypy doesn’t show us any warnings.


The Any type

As we noted, mypy ignores code with no type hints. This is because it assumes the Any type on code without hints.

The following is how mypy sees the function:
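A sketch of that implicit view (the function name follows the earlier example; you never write these Any annotations yourself):

```python
from typing import Any

def announcement(language: Any, version: Any) -> Any:
    ...
```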

The Any type is a dynamic type that’s compatible with, well, any type. So mypy will not complain whether the function argument types are bool , int , bytes , etc.

Now that we know why mypy doesn’t always issue warnings, let’s configure it to do that.

mypy can be configured to suit your workflow and code practices. You can run mypy in strict mode, using the --strict option to flag any code without type hints:

The --strict option is the most restrictive option and doesn’t support gradual typing. Most of the time, you won’t need to be this strict. Instead, adopt gradual typing to add the type hints in phases.

mypy also provides a --disallow-incomplete-defs option, which flags functions that don’t have all of their parameters and return values annotated. It is handy when you forget to annotate a return value or a newly added parameter: mypy will warn you. You can think of it as a compiler that reminds you to abide by the rules of static typing during development.

To understand this, add the type hints to the parameters only and omit the return value types (pretending you forgot):

Run the file with mypy without any command-line option:

As you can see, mypy does not warn us that we forgot to annotate the return type. It assumes the Any type on the return value. If the function was large, it would be difficult to figure out the type of value it returns. To know the type, we would have to inspect the return value, which is time-consuming.

To protect ourselves from these issues, pass the --disallow-incomplete-defs option to mypy:

Now run the file again with the --disallow-incomplete-defs option enabled:

Not only does the --disallow-incomplete-defs option warn you about missing type hints, it also flags any datatype-value mismatch. Consider the example below, where bool and str values are passed as arguments to a function that accepts str and float respectively:

Let’s see if mypy will warn us about this now:

Great! mypy warns us that we passed the wrong arguments to the function.

Now, let’s eliminate the need to type mypy with the --disallow-incomplete-defs option.

You don’t need to type the --disallow-incomplete-defs option each time you run mypy. Instead, you can save your options in a mypy.ini file; when mypy runs, it checks that file and applies the options saved there.

Create the mypy.ini file in your project root directory and enter the following code:
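A minimal sketch of such a configuration:

```ini
[mypy]
python_version = 3.10
disallow_incomplete_defs = True
```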

In the mypy.ini file, we tell mypy that we are using Python 3.10 and that we want to disallow incomplete function definitions.

Save the file in your project, and next time you can run mypy without any command-line options:

mypy has many more options you can add to the mypy.ini file. I recommend referring to the mypy command-line documentation to learn more.

Not all functions have a return statement. When you create a function with no return statement, it still returns a None value:

The None value isn’t totally useful as you may not be able to perform an operation with it. It only shows that the function was executed successfully. You can hint that a function has no return type by annotating the return value with None :
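For example (print_name is a hypothetical function):

```python
def print_name(name: str) -> None:
    # No return statement: the function implicitly returns None.
    print(name)
```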

When a function accepts a parameter of more than one type, you can use the union character ( | ) to separate the types.

For example, the following function accepts a parameter that can be either str or int :

You can invoke the function show_type  with a string or an integer, and the output depends on the data type of the argument it receives.

To annotate the parameter, we will use the union character | , which was introduced in Python 3.10, to separate the types as follows:

The union | now shows that the parameter num is either str or int .

If you’re using Python ≤3.9, you need to import Union from the typing module. The parameter can be annotated as follows:
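For example (same function, pre-3.10 syntax):

```python
from typing import Union

def show_type(num: Union[str, int]) -> None:
    print(type(num).__name__, num)
```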

Adding type hints to optional function parameters

Not all parameters in a function are required; some are optional. Here’s an example of a function that takes an optional parameter:

The second parameter title is an optional parameter that has a default value of None if it receives no argument at the point of invoking the function. The typing module provides the Optional[<datatype>] annotation to annotate this optional parameter with a type hint:

Below is an example of how you can perform this annotation:
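A sketch of such a function (the name greet and its exact behavior are assumptions; only the optional title parameter comes from the text above):

```python
from typing import Optional

def greet(name: str, title: Optional[str] = None) -> str:
    # title defaults to None when no argument is supplied.
    if title is None:
        return f"Hello, {name}"
    return f"Hello, {title} {name}"
```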

Adding type hints to lists

Python lists are annotated based on the types of the elements they have or expect to have. Starting with Python 3.9, to annotate a list you use the list type followed by [], which contains the element’s data type.

For example, a list of strings can be annotated as follows:
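For instance (the example values are assumed):

```python
names: list[str] = ["john", "jane", "ade"]
```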

If you’re using Python ≤3.8, you need to import List from the typing module:

In function definitions, the Python documentation recommends that the list type should be used to annotate the return types:

However, for function parameters, the documentation recommends using these abstract collection types:

The Iterable type should be used when the function takes an iterable and iterates over it.

An iterable is an object that can return one item at a time. Examples range from lists, tuples, and strings to anything that implements the __iter__ method.

You can annotate an Iterable as follows, in Python ≥3.9:
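A sketch (the function name double_elements matches the surrounding description):

```python
from collections.abc import Iterable

def double_elements(items: Iterable[int]) -> list[int]:
    # Works with any object implementing __iter__: lists, tuples, etc.
    return [item * 2 for item in items]

print(double_elements([2, 4, 6]))
print(double_elements((2, 4)))
```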

In the function, we define the items parameter and assign it an Iterable[int] type hint, which specifies that the Iterable contains int elements.

The Iterable type hint accepts anything that has the __iter__ method implemented. Lists and tuples have the method implemented, so you can invoke the double_elements function with a list or a tuple, and the function will iterate over them.

To use Iterable in Python ≤3.8, you have to import it from the typing module:

Using Iterable for parameters is more flexible than a list type hint, because it accepts any object that implements the __iter__ method. For example, you wouldn’t need to convert a tuple, or any other iterable, to a list before passing it into the function.

A sequence is a collection of elements that allows you to access an item or compute its length.

A Sequence type hint can accept a list, string, or tuple. This is because they have special methods: __getitem__ and __len__ . When you access an item from a sequence using  items[index] , the __getitem__ method is used. When getting the length of the sequence len(items) , the __len__ method is used.

In the following example, we use the Sequence[int] type to accept a sequence that has integer items:
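A sketch (the name get_last_element is an assumption; the data[-1] access comes from the description below):

```python
from collections.abc import Sequence

def get_last_element(data: Sequence[int]) -> int:
    # Negative indexing works on any sequence via __getitem__.
    return data[-1]
```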

This function accepts a sequence and accesses the last element from it with data[-1]. This uses the __getitem__ method on the sequence to access the last element.

As you can see, we can call the function with a tuple or list and the function works properly. We don’t have to limit parameters to list if all the function does is get an item.

For Python ≤3.8, you need to import Sequence from the typing module:

Adding type hints to dictionaries

To add type hints to dictionaries, you use the dict type followed by [key_type, value_type] :

For example, the following dictionary has both the key and the value as a string:

You can annotate it as follows:
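For instance (the example values are assumed):

```python
person: dict[str, str] = {
    "first_name": "Ada",
    "last_name": "Lovelace",
}
```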

The dict type specifies that the person dictionary keys are of type str and values are of type str .

If you’re using Python ≤3.8, you need to import Dict from the typing module.

In function definitions, the documentation recommends using dict as a return type:

For function parameters, it recommends using these abstract base classes:

  • Mapping
  • MutableMapping

In function parameters, when you use the dict type hint, you limit the arguments the function can take to only dict, defaultdict, or OrderedDict. But there are many other mapping types, such as UserDict and ChainMap, that can be used similarly.

You can access an element and iterate or compute their length like you can with a dictionary. This is because they implement:

  • __getitem__ : for accessing an element
  • __iter__ : for iterating
  • __len__ : computing the length

So instead of limiting the structures the parameter accepts, you can use a more generic type Mapping since it accepts:

  • dict
  • defaultdict
  • OrderedDict
  • UserDict
  • ChainMap

Another benefit of the Mapping type is that it specifies that you are only reading the dictionary and not mutating it.

The following example is a function that accesses item values from a dictionary:
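A sketch of such a function (the name get_full_name and the student field names are assumptions):

```python
from collections.abc import Mapping

def get_full_name(student: Mapping[str, str]) -> str:
    # Only reads from the mapping; no mutation is needed.
    return f'{student["first_name"]} {student["last_name"]}'
```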

The Mapping type hint in the above function has the [str, str] annotation, which specifies that the student data structure has keys and values both of type str.

If you’re using Python ≤3.8, import Mapping from the typing module:

Use MutableMapping as a type hint in a parameter when the function needs to mutate the dictionary or its subtypes. Examples of mutation are deleting items or changing item values.

The MutableMapping class accepts any instance that implements the following special methods:

  • __getitem__
  • __setitem__
  • __delitem__

The __delitem__ and __setitem__ methods are used for mutation, and these are methods that separate Mapping type from the MutableMapping type.

In the following example, the function accepts a dictionary and mutates it:
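A sketch (the function name update_first_name is an assumption; the first_name key comes from the description below):

```python
from collections.abc import MutableMapping

def update_first_name(student: MutableMapping[str, str], first_name: str) -> None:
    # Assigning to a key calls __setitem__, so the mapping must be mutable.
    student["first_name"] = first_name
```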

In the function body, the value in the first_name variable is assigned to the dictionary and replaces the value paired to the first_name key. Changing a dictionary key value invokes the __setitem__ method.

If you are on Python ≤3.8, import MutableMapping from the typing module.

So far, we have looked at how to annotate dictionaries with dict, Mapping, and MutableMapping, but the dictionaries we used had keys and values of a single type: str. However, dictionaries can contain a combination of other data types.

Here is an example of a dictionary whose keys are of different types:

The dictionary values range from str , int , and list . To annotate the dictionary, we will use a TypedDict that was introduced in Python 3.8. It allows us to annotate the value types for each property with a class-like syntax:
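A sketch of such a class (the exact field names are assumptions; the str, int, and list value types come from the description above):

```python
from typing import TypedDict

class StudentDict(TypedDict):
    first_name: str
    last_name: str
    age: int
    hobbies: list[str]
```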

We define a class StudentDict that inherits from TypedDict . Inside the class, we define each field and its expected type.

With the TypedDict defined, you can use it to annotate a dictionary variable as follows:

You can also use it to annotate a function parameter that expects a dictionary as follows:

If the dictionary argument doesn’t match StudentDict , mypy will show a warning.

Adding type hints to tuples

A tuple stores a fixed number of elements. To add type hints to it, you use the tuple type followed by [], which takes the type of each element.

The following is an example of how to annotate a tuple with two elements:
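For instance (values assumed):

```python
item: tuple[str, int] = ("laptop", 1)
```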

Regardless of the number of elements the tuple contains, you’re required to declare the type for each one of them.

The tuple type can be used as a type hint for a parameter or return type value:

If your tuple is expected to have an unknown number of elements of a similar type, you can use tuple[type, ...] to annotate it:
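For instance (values assumed):

```python
numbers: tuple[int, ...] = (1, 2, 3, 4, 5)
```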

To annotate a named tuple, you need to define a class that inherits from NamedTuple . The class fields define the elements and their types:
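A sketch (the class name StudentTuple and its fields are assumptions):

```python
from typing import NamedTuple

class StudentTuple(NamedTuple):
    name: str
    age: int

ada = StudentTuple("Ada", 36)
print(ada.name, ada.age)
```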

If you have a function that takes a named tuple as a parameter, you can annotate the parameter with the named tuple:

There are times when you don’t care about the argument a function takes. You only care if it has the method you want.

To implement this behavior, you’d use a protocol. A protocol is a class that inherits from the Protocol class in the typing module. In the protocol class, you define one or more methods that the static type checker should look for anywhere the protocol type is used.

Any object that implements the methods on the protocol class will be accepted. You can think of a protocol as an interface found in programming languages such as Java or TypeScript. Python provides predefined protocols; a good example is the Sequence type: no matter what kind of object you pass, as long as it implements the __getitem__ and __len__ methods, it is accepted.

Let’s consider the following code snippets. Here is an example of a function that calculates age by subtracting the birth year from the current year:

The function takes two parameters: current_year , an integer, and data , an object. Within the function body, we find the difference between the current_year and the value returned from get_birthyear() method.

Here is an example of a class that implements the get_birthyear method:

This is one example of such a class, but there could be other classes, such as Dog or Cat, that implement the get_birthyear method. Annotating all the possible types would be cumbersome.

Since we only care about the get_birthyear() method, let’s create a protocol for it:

The class HasBirthYear inherits from Protocol, which is part of the typing module. To make the protocol aware of the get_birthyear method, we redefine the method exactly as it is defined in the Person class example we saw earlier, except that we replace the function body with an ellipsis ( ... ).

With the Protocol defined, we can use it on the calc_age function to add a type hint to the data parameter:

Now the data parameter has been annotated with the HasBirthYear protocol. The function can accept any object, as long as it has the get_birthyear method.

Here is the full implementation of the code using Protocol :
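A sketch of the complete program, consistent with the names used above (the Person constructor details are assumptions):

```python
from typing import Protocol

class HasBirthYear(Protocol):
    def get_birthyear(self) -> int: ...

class Person:
    def __init__(self, name: str, birthyear: int) -> None:
        self.name = name
        self.birthyear = birthyear

    def get_birthyear(self) -> int:
        return self.birthyear

def calc_age(current_year: int, data: HasBirthYear) -> int:
    # Any object with a get_birthyear() method satisfies the protocol.
    return current_year - data.get_birthyear()

person = Person("Ada", 1990)
print(calc_age(2024, person))  # 34
```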

Running the code with mypy will give you no issues.

Some functions produce different outputs based on the inputs you give them. For example, let’s look at the following function:

When you call the function with an integer as the first argument, it returns an integer. If you invoke the function with a list as the first argument, it returns a list with each element added with the second argument value.

Now, how can we annotate this function? Based on what we know so far, our first instinct would be to use the union syntax:

However, this could be misleading due to its ambiguity. The above code describes a function that accepts an integer as the first argument, and the function returns either a list or an int . Similarly, when you pass a list as the first argument, the function will return either a list or an int .

You can implement function overloading to properly annotate this function. With function overloading, you define multiple signatures of the same function without bodies, add type hints to them, and place them before the main function implementation.

To do this, annotate the function with the overload decorator from the typing module. Let’s define two overloads before the add_number function implementation:
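A sketch consistent with the description above (the implementation body is an assumption based on the behavior described earlier):

```python
from typing import overload

@overload
def add_number(value: int, num: int) -> int: ...
@overload
def add_number(value: list, num: int) -> list: ...
def add_number(value, num):
    # Single runtime implementation behind the two overloads.
    if isinstance(value, list):
        return [item + num for item in value]
    return value + num
```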

We define two overloads before the main function add_number. The overload parameters are annotated with the appropriate types, as are their return values. Their function bodies contain an ellipsis ( ... ).

The first overload shows that if you pass int as the first argument, the function will return int .

The second overload shows that if you pass a list as the first argument, the function will return a list .

Finally, the main add_number implementation does not have any type hints.

As you can now see, the overloads annotate the function behavior much better than using unions.

At the time of writing, Python does not have an inbuilt way of defining constants. Starting with Python 3.8, you can use the Final type from the typing module. Reassigning the variable will then cause mypy to emit a warning.
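For example (the variable name MIN matches the surrounding text):

```python
from typing import Final

MIN: Final = 5

# mypy reports an error on the next line, but the Python interpreter
# itself does not stop the reassignment:
MIN = MIN + 3
print(MIN)  # 8
```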

Running the code with mypy will issue a warning:

This is because we are trying to modify the MIN variable value to MIN = MIN + 3 .

Note that, without mypy or another static type-checker, Python won’t enforce this, and the code will run without any issues:

As you can see, during runtime you can change the variable value MIN any time. To enforce a constant variable in your codebase, you have to depend on mypy.

While you may be able to add annotations to your code, the third-party modules you use may not have any type hints. As a result, mypy will warn you.

If you receive those warnings, you can add a # type: ignore comment to the import line so that mypy skips the third-party module’s code:

You also have the option of adding type hints with stubs. To learn how to use stubs, see Stub files in the mypy documentation.

This tutorial explored the differences between statically typed and dynamically typed code. You learned the different approaches you can use to add type hints to your functions and classes, about static type-checking with mypy, and how to add type hints to variables, functions, lists, dictionaries, and tuples, as well as how to work with protocols, function overloading, and annotating constants.

To continue building your knowledge, visit typing — Support for type hints . To learn more about mypy, visit the mypy documentation .


Python Type Annotations Full Guide


This notebook explores how and when to use Python’s type annotations to enhance your code. Note: unless otherwise specified, any code in this notebook targets Python 3.10; lines that rely on features specific to Python 3.9 or 3.10 are annotated accordingly. Important: a variable’s type only needs to be annotated at the first occurrence of its name in a scope.

Table of Contents

  • Introduction
  • How to Use Union-Typed Variables
  • Optional Variables
  • Nested Collections
  • Tuple Unpacking
  • Inheritance
  • NamedTuples
  • Dataclasses
  • Shape Typing
  • Data Type Typing
  • Other Advanced Types
  • Type Aliases
  • Type Variables
  • Structural Subtyping and Generic Collections (ABC)
  • User-Defined Generics

Python Type Annotations , also known as type signatures or “type hints”, are a way to indicate the intended data types associated with variable names. In addition to writing more readable, understandable, and maintainable code, type annotations can also be used by static type checkers like mypy to verify type consistency and to catch programming errors before they are found the traditional way, at runtime. It should be noted that type annotations create no new logic at runtime and thus are designed to generate nearly zero runtime overhead, so there’s no risk of decreased performance.

The typing module is the core Python module for advanced type annotations. Introduced in Python 3.5, it adds extra functionality on top of the built-in type annotations to handle more specific circumstances such as pre-Python 3.9 structural subtyping, pre-Python 3.10 union types, callables, generics, and others.

Basic Variable Type Annotations

General form (with or without assigned value):

Here are some examples of basic annotated types:
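The examples were stripped from this copy of the page; here is a minimal sketch of the general form, with and without assigned values (variable names are illustrative):

```python
# Basic annotated variables, with and without assigned values.
age: int = 25            # annotation with a value
name: str = "Ada"
height: float            # annotation only; no value is assigned yet
is_ready: bool = False
```

Note that an annotation without a value (like `height` above) declares the type only; reading the variable before assigning to it still raises `NameError`.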

Dynamically (Union) Typed Variables

If dynamic typing is needed, use Union (pre-Python 3.10, imported from typing ) or the pipe operator, | (Python 3.10+):

Since you have no way of knowing or ensuring exactly which type a union-typed variable holds at any given point, use assert isinstance(...) or if isinstance(...) checks to narrow the type at runtime and provide the type safety that static checkers can’t verify on their own. See the examples below.

Oftentimes, values need the option to end up in a “null” or empty state. These are known as optional values, which use the type format Optional[T] , where T is the possible non- None type. Alternatively, starting in Python 3.10, the T | None syntax may be used, as seen below.

As with union types, an optional value should only be used after its exact type has been resolved at runtime. As a best practice, this means using Python’s is operator instead of the == operator when comparing against None , checking identity rather than equality. See the example below.

See PEP 526 - Syntax for Variable Type Annotations for more info.

Collections

When making type annotations for a collection, it is important to also annotate the type of data that is stored within that collection. While collections should almost always be typed to their “deepest” known sub-type, there’s a point where type annotations lose their elegance and instead may transform into monstrous nested strings of death. In such cases, Type Aliases may be used to reduce clutter (more on that later).


In the case of JSON files and other cases where there are unknown types from a function call, annotate as far as is known about the result as possible (ex. at the least, we know json.load will return a dict mapping str to objects):
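A sketch of typed collections, including annotating a JSON result only as far as is known (the variable names and JSON payload are illustrative):

```python
import json

names: list[str] = ["Ada", "Grace"]
scores: dict[str, list[int]] = {"Ada": [90, 95]}

# json.loads returns untyped data; annotate as deeply as is actually known.
config: dict[str, object] = json.loads('{"debug": true, "retries": 3}')
```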

Note: For pre-Python 3.9 code, built-in collection types’ annotations are imported from the typing module as their uppercase variants (i.e. List[int] )

  • See TypeAliases for more info on TypeAliases.
  • See NewTypes for more info on NewTypes.

Note: This is the only real way to do tuple unpacking right now (see PEP 526 ). Hopefully in a future release they devise a more elegant method.

See PEP 526 - Syntax for Variable Type Annotations for more info on variable type annotations.

Function Signatures and Callables

Functions’ arguments are all typed normally, and the return type is written after an arrow ( -> ) placed between the closing parenthesis and the colon that terminates the function signature. Here are some examples.

Simple function:

Function with default values:

Slightly more complex function:
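The three variants described above might look like the following sketch (function names are illustrative):

```python
def add(a: int, b: int) -> int:
    # Simple function: two typed arguments and a typed return value.
    return a + b

def greet(name: str, punctuation: str = "!") -> str:
    # Function with a default value: the annotation precedes the default.
    return f"Hello, {name}{punctuation}"

def mean(values: list[float]) -> float:
    # Slightly more complex function: a typed collection argument.
    return sum(values) / len(values)
```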

Functions with *args (variable-length positional arguments) or **kwargs (variable-length keyword arguments) are typed a little differently than usual in that the collection that stores them does not need annotation. Here’s a simple example from the mypy docs:

Functions designed to never return look like this:
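A sketch covering both cases: `*args` / `**kwargs` annotated by element type only, and a function that never returns (names are illustrative):

```python
from typing import NoReturn

def summarize(*args: int, **kwargs: float) -> str:
    # args is received as a tuple[int, ...] and kwargs as a dict[str, float];
    # only the element types are annotated, not the containers themselves.
    return f"{len(args)} positional, {len(kwargs)} keyword"

def fail(message: str) -> NoReturn:
    # Always raises, so it never returns normally.
    raise RuntimeError(message)
```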

See Function Signatures from mypy docs for more info.

Callables are special types of objects that can be called. The type annotation is written as Callable[[P], R] where P is a comma-separated list of types corresponding to the types of the input parameters of the callable, in order, and R is the return type. Here are some examples of callables in practice:
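A minimal sketch of a callable annotation (the names are illustrative):

```python
from collections.abc import Callable

def apply_twice(func: Callable[[int], int], value: int) -> int:
    # func takes one int parameter and returns an int.
    return func(func(value))

def double(n: int) -> int:
    return n * 2
```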

Note: Callables, when used for Decorators , need a way to specify generic parameters, so use ParamSpec from the typing module in the event that’s necessary.

Class Type Annotations

Classes are typed as you would expect although there are some nuances that are handled more explicitly. For instance, class variables must be explicitly typed as ClassVar[T] where T is the type of the class variable.

Note: the method return-typed with the class name is a feature added in Python 3.10. Pre-Python 3.10 code can use this feature as well if the line from __future__ import annotations is written at the top of the file to enable it.
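A sketch of a class with an explicit ClassVar and a method returning its own class; the string form of the class name in the return annotation works pre-Python 3.10 as well (the class itself is illustrative):

```python
from typing import ClassVar

class Counter:
    instances: ClassVar[int] = 0  # shared by the class, not per-instance

    def __init__(self) -> None:
        self.count: int = 0
        Counter.instances += 1

    def increment(self) -> "Counter":
        # Returning self allows chained calls: c.increment().increment()
        self.count += 1
        return self
```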

In cases where subtypes of a class are used, the subtype must be annotated with the supertype if the intention is to re-assign the variable between the subtypes of the supertype.

  • Python typing - ClassVars
  • Structural Subtyping and Generic Collections
  • User-defined Generics

Iterators and Generators

Iterators and generators are objects that implement __next__ , or functions whose bodies include the keyword yield .

Iterators are classes in Python that implement __iter__ and __next__ . They are usually iterated over with for loops, but you can also use them in other ways, such as casting them directly to a sequence or calling the built-in next function to iterate manually. Iterator types are annotated as Iterator[T] , where T is the type of the items yielded.

Here is an example of an iterator that counts up in triplets until the max_val passed.
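The original example was stripped from this copy; here is one plausible reconstruction of an iterator counting up in steps of three until `max_val` (the class name is a guess):

```python
from collections.abc import Iterator

class TripletCounter:
    """Counts up in steps of three until max_val."""

    def __init__(self, max_val: int) -> None:
        self.current = 0
        self.max_val = max_val

    def __iter__(self) -> Iterator[int]:
        return self

    def __next__(self) -> int:
        if self.current >= self.max_val:
            raise StopIteration
        self.current += 3
        return self.current
```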

Generators are like iterators in that they continuously “return” a next value, but they differ in that they can return objects instead of just numerical values, and they can be written as functions. Generator return types are annotated as Generator[Y, S, R] where Y is the type of the yielded values, S is the type of the values expected to be sent to the generator (if applicable), and R is the type of the return value of the generator (if applicable). Not all generators have send or return values, so these may be replaced with None if not applicable.

In the case below, the generator function only returns integers, so we can type it as an iterator of integers for simplicity’s sake. However, if desired, it can also use the traditional method of generator typing.

This next example cannot be typed as an iterator because it returns objects, so it’s a generator.

Here we have a generator with yield, send, and return support. This example’s parameter max_num defaults to infinity and can be passed as either an int or a float.

Note: the pipe ( | ) operator between types used above is not supported pre-Python3.10 (See Dynamically Typed (Union) Variables ).

Advanced Python Data Types

Enums don’t need type annotations in their construction, since their definition inherently establishes their type. However, they do need to be annotated when referenced (see the example below).
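A minimal sketch (the enum itself is illustrative):

```python
from enum import Enum

class Color(Enum):
    RED = 1
    GREEN = 2

# No annotation needed in the definition above, but annotate references:
favorite: Color = Color.RED
```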

NamedTuples are typed normally and constructed as a class inheriting from typing.NamedTuple .
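A minimal sketch of the class-based NamedTuple style (fields are illustrative):

```python
from typing import NamedTuple

class Point(NamedTuple):
    x: float
    y: float
    label: str = "origin"  # defaults work as in normal classes
```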

Note: while there is a legacy method of creating namedtuples using collections.namedtuple , it is not recommended: it requires you to pass the type name as a string argument and does not support type annotation. Use typing.NamedTuple instead.

Dataclasses are also typed as expected.
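A minimal dataclass sketch (fields are illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class Inventory:
    name: str
    quantity: int = 0
    # Mutable defaults need default_factory rather than a shared literal.
    tags: list[str] = field(default_factory=list)
```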

Numpy Arrays

Numpy arrays are typed with the PyPI package nptyping (ver. 2.0.0+). This is so that we get explicit shape typing, and an overall cleaner annotation system. Unfortunately, this means that type checkers like mypy can’t actually check the details of the typed numpy array (only whether the variable is or is not an ndarray), so at the moment, it’s almost purely a glorified comment.

Type annotations are formatted as NDArray[S, T] where S is the intended shape of the array ( see nptyping Shape expressions ), and T is the intended data type of the array ( see nptyping dtypes ). Additionally, the structure of an array can also be annotated ( see nptyping Structure expressions ).

Shapes are represented as strings containing a comma-separated list of integers corresponding to the shape of the ndarray. For example, a 2D array of shape (3, 4) would be represented as "3, 4" . In addition, shapes can also be more dynamically typed with wildcards (*) in place of single dimension numbers to represent any length for that dimension, and they can also be labeled and named. A full detailing of Shape expressions can be found here .

Data types are imported from nptyping explicitly. Some commonly used types that can be imported are Int , UInt8 , Float , Bool , String , and Any . A full list of available dtypes can be found here

Here are some examples of type annotated numpy arrays.

See Nptyping Documentation for more info on how to use nptyping.

Note: While numpy does have its own numpy.typing library, for a variety of reasons, we no longer use this library and thus do not recommend it.

Here are a list of other advanced types that are not covered in the above sections with links to their type annotation documentation:

  • Awaitables & Asynchronous Iterators/Generators
  • Final (Uninheritable) Attributes
  • metaclasses

Advanced Python Type Annotations

A Type Alias is simply a synonym for a type, and is used to make the code more readable. To create one, simply assign a type annotation to a variable name. Beginning in Python 3.10, this assigned variable can be typed with typing.TypeAlias . Here is an example.

New Types are a way to define types that wrap existing types in Python. This means you can define a new type that is a subtype of an existing type with almost no class/inheritance overhead, and then use that new type in place of the existing one.
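A minimal NewType sketch (the names are illustrative):

```python
from typing import NewType

UserId = NewType("UserId", int)

def fetch_name(user_id: UserId) -> str:
    return f"user-{user_id}"

# At runtime, UserId(42) simply returns the int, so overhead is near zero,
# but a checker will reject passing a plain int where a UserId is expected.
uid = UserId(42)
```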

See Python typing - NewType for more info.

Type Variables are a way to define a placeholder that stands in for some type (with or without constraints on what that type may be) without being a concrete type itself. This is useful for defining generic types. Let’s take a look at the class signature of typing.TypeVar .

Here’s an example of TypeVar in practice:
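The original example was stripped; here is a sketch showing both an unconstrained and a constrained TypeVar (function names are illustrative):

```python
from typing import TypeVar

T = TypeVar("T")                  # unconstrained
Num = TypeVar("Num", int, float)  # constrained to int or float

def first(items: list[T]) -> T:
    # The checker links the list's element type to the return type.
    return items[0]

def halve(value: Num) -> Num:
    # Preserves the argument's type in the return type.
    return value // 2 if isinstance(value, int) else value / 2
```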

  • Python typing - TypeVars

Also known as “duck types”, generic collections are a way of defining a type of collection that fits a certain set of operations. These types are all the Abstract Base Classes (ABCs) of common Python collections. For example, a list is a generic Sequence , and a dict is a generic Mapping . Here are some examples of common generic collections:
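A sketch of annotating against the ABCs rather than concrete types (function names are illustrative):

```python
from collections.abc import Mapping, Sequence

def total(scores: Mapping[str, int]) -> int:
    # Accepts dict, ChainMap, or any other mapping.
    return sum(scores.values())

def last(items: Sequence[str]) -> str:
    # Accepts list, tuple, or any other sequence.
    return items[-1]
```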

More information and other abstract base classes can be found here .

  • Python typing - Generics
  • mypy - Protocols and Structural Subtyping

Oftentimes when you create your own collection, you want it to be adaptable as to what types it can hold. In this case, we combine TypeVar and Generic to create a generic collection. Here are some simple examples:
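A sketch of a user-defined generic container (the `Stack` class is illustrative):

```python
from typing import Generic, TypeVar

T = TypeVar("T")

class Stack(Generic[T]):
    def __init__(self) -> None:
        self._items: list[T] = []

    def push(self, item: T) -> None:
        self._items.append(item)

    def pop(self) -> T:
        return self._items.pop()

# Parameterize the generic at the use site:
stack: Stack[int] = Stack()
```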

Note: the usage of T as a type variable is a convention and can be substituted with any name. Similarly, the convention for user-defined generic mappings or other paired values is K , V (usually used as Key, Value)

How to Use Type Hints for Multiple Return Types in Python

Table of Contents

  • Use Python’s Type Hints for One Piece of Data of Alternative Types
  • Use Python’s Type Hints for Multiple Pieces of Data of Different Types
  • Declare a Function to Take a Callback
  • Annotate the Return Value of a Factory Function
  • Annotate the Values Yielded by a Generator
  • Improve Readability With Type Aliases
  • Leverage Tools for Static Type Checking

In Python, type hinting is an optional yet useful feature to make your code easier to read, reason about, and debug. With type hints, you let other developers know the expected data types for variables, function arguments, and return values. As you write code for applications that require greater flexibility, you may need to specify multiple return types to make your code more robust and adaptable to different situations.

You’ll encounter different use cases where you may want to annotate multiple return types within a single function in Python. In other words, the data returned can vary in type. In this tutorial, you’ll walk through examples of how to specify multiple return types for a function that parses a string from an email address to grab the domain name.

In addition, you’ll see examples of how to specify type hints for callback functions or functions that take another function as input. With these examples, you’ll be ready to express type hints in functional programming.

Note: Typically, you want to work with functions that are generous in which type of arguments they accept, while they’re specific about the type of their return value. For example, a function may accept any iterable like a list, tuple, or generator, but always return a list.

If your function can return several different types, then you should first consider whether you can refactor it to have a single return type. In this tutorial, you’ll learn how to deal with those functions that need multiple return types.

To get the most out of this tutorial, you should know the basics of what type hints in Python are and how you use them.

Get Your Code: Click here to get access to the free sample code that shows you how to declare type hints for multiple types in Python.

In this section, you’ll learn how to write type hints for functions that can return one piece of data that could be of different types. The scenarios for considering multiple return types include:

Conditional statements : When a function uses conditional statements that return different types of results, you can convey this by specifying alternative return types for your function using type hints.

Optional values : A function may sometimes return no value, in which case you can use type hints to signal the occasional absence of a return value.

Error handling : When a function encounters an error, you may want to return a specific error object that’s different from the normal results’ return type. Doing so could help other developers handle errors in the code.

Flexibility : When designing and writing code, you generally want it to be versatile, flexible, and reusable. This could mean writing functions that can handle a range of data types. Specifying this in type hints helps other developers understand your code’s versatility and its intended uses in different cases.

In the example below, you use type hints in working with conditional statements. Imagine that you’re processing customer data and want to write a function to parse users’ email addresses to extract their usernames.

To represent one piece of data of multiple types using type hints in Python 3.10 or newer, you can use the pipe operator ( | ) . Here’s how you’d use type hints in a function that typically returns a string containing the username but can also return None if the corresponding email address is incomplete:

In the example above, the parse_email() function has a conditional statement that checks if the email address passed as an argument contains the at sign ( @ ). If it does, then the function splits on that symbol to extract the elements before and after the at sign, stores them in local variables , and returns the username. If the argument doesn’t contain an at sign, then the return value is None , indicating an invalid email address.

Note: In practice, the validation rules for email addresses are much more complicated.

So, the return value of this function is either a string containing the username or None if the email address is incomplete. The type hint for the return value uses the pipe operator ( | ) to indicate alternative types of the single value that the function returns. To define the same function in Python versions older than 3.10, you can use an alternative syntax:
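A version of the same function using the pre-3.10 syntax, consistent with the description below:

```python
from typing import Union

def parse_email(email_address: str) -> Union[str, None]:
    if "@" in email_address:
        username, domain = email_address.split("@")
        return username
    return None
```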

This function uses the Union type from the typing module to indicate that parse_email() returns either a string or None , depending on the input value. Whether you use the old or new syntax, a union type hint can combine more than two data types.

Even when using a modern Python release, you may still prefer the Union type over the pipe operator if your code needs to run in older Python versions.

Note: One challenge with functions that may return different types is that you need to check the return type when you call the function. In the examples above, you need to test whether you got None when parsing the email address.

If the return type can be deduced from the argument types, then you can alternatively use @overload to specify different type signatures.

Now that you know how to define a function that returns a single value of potentially different types, you can turn your attention toward using type hints to declare that a function can return more than one piece of data.

Sometimes, a function returns more than one value, and you can communicate this in Python using type hints. You can use a tuple to indicate the types of the individual pieces of data that a function returns at once. In Python 3.9 and later, you can use the built-in tuple type in annotations. On older versions, you need to use typing.Tuple instead.

Now, consider a scenario where you want to build on the previous example. You aim to declare a function whose return value incorporates multiple pieces of data of various types. In addition to returning the username obtained from an email address, you want to update the function to return the domain as well.

Here’s how you’d use type hints to indicate that the function returns a tuple with a string for the username and another string for the domain:

The function’s return type above is a pair of two strings that correspond to the username and domain of an email address. Alternatively, if the input value doesn’t constitute a valid email address, then the function returns None .

The type hint for the return value contains a tuple with two comma-separated str elements inside square brackets, which tells you that the tuple has exactly two elements, and they’re both strings. Then, a pipe operator ( | ) followed by None indicates that the return value could be either a two-string tuple or None , depending on the input value.

To implement the same function in Python earlier than 3.10, use the Tuple and Union types from the typing module:
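The pre-3.10 spelling of the same function, using the typing module as described:

```python
from typing import Tuple, Union

def parse_email(email_address: str) -> Union[Tuple[str, str], None]:
    if "@" in email_address:
        username, domain = email_address.split("@")
        return username, domain
    return None
```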

This notation is slightly more verbose and requires importing two additional types from the typing module. On the other hand, you can use it in older Python versions.

Note: As with alternative types in a union, you can have an arbitrary number of elements and types in a tuple to combine multiple pieces of data in a type hint. Here’s an example:
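A hypothetical three-element variant matching the description below (string, integer, Boolean):

```python
def parse_email(email_address: str) -> tuple[str, int, bool]:
    # Returns the username, its length, and whether a domain was present.
    username, _, domain = email_address.partition("@")
    return username, len(username), bool(domain)
```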

In this case, the function returns three values. One is a string, the next is an integer, and the third is a Boolean value.

Okay, it’s time for you to move on to more advanced type hints in Python!

In some programming languages, including Python, functions can return other functions or take them as arguments. Such functions, commonly known as higher-order functions , are a powerful tool in functional programming . To annotate callable objects with type hints, you can use the Callable type from the collections.abc module.

Note: Don’t confuse collections.abc.Callable with typing.Callable , which has been deprecated since Python 3.9.

A common type of higher-order function is one that takes a callback as an argument. Many built-in functions in Python, including sorted() , map() , and filter() , accept a callback function and repeatedly apply it to a sequence of elements. Such higher-order functions eliminate the need for writing explicit loops, so they align with a functional programming style.

Here’s a custom function that takes a callable as an argument, illustrating how you’d annotate it with type hints:
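A version consistent with the description that follows:

```python
from collections.abc import Callable

def apply_func(
    func: Callable[[str], tuple[str, str]], value: str
) -> tuple[str, str]:
    # Calls the supplied function on the value and passes the result through.
    return func(value)

def parse_email(email_address: str) -> tuple[str, str]:
    username, _, domain = email_address.partition("@")
    return username, domain
```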

The first function above, apply_func() , takes a callable object as the first argument and a string value as the second argument. The callable object could be a regular function , a lambda expression , or a custom class with a special .__call__() method . Other than that, this function returns a pair of strings.

The Callable type hint above has two parameters defined inside square brackets. The first parameter is a list of arguments that the input function takes. In this case, func() expects only one argument of type string. The second parameter of the Callable type hint is the return type, which, in this case, is a tuple of two strings.

The next function in the code snippet above, parse_email() , is an adapted version of the function that you’ve seen before that always returns a tuple of strings.

Then, you call apply_func() with a reference to parse_email() as the first argument and the string "[email protected]" as the second argument. In turn, apply_func() invokes your supplied function with the given argument and passes the return value back to you.

Now, what if you want apply_func() to be able to take different functions with multiple input types as arguments and have multiple return types? In this case, you can modify the parameters inside the Callable type hint to make it more generic.

Instead of listing the individual argument types of the input function, you can use an ellipsis literal ( ... ) to indicate that the callable can take an arbitrary list of arguments. You can also use the Any type from the typing module to specify that any return type is acceptable for the callable.

Even better, you can use type variables to specify the connection between the return type of the callable and of apply_func() .

Either option would apply type hinting to the return type only for the function in question. Below, you update the previous example in this manner:
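A version consistent with the description that follows, using an ellipsis for the parameters and a type variable for the return type:

```python
from collections.abc import Callable
from typing import Any, TypeVar

T = TypeVar("T")

def apply_func(func: Callable[..., T], *args: Any, **kwargs: Any) -> T:
    # The callable may take any arguments; its return type flows through
    # to apply_func() via the type variable T.
    return func(*args, **kwargs)
```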

Notice that you’ve now annotated the Callable above with an ellipsis literal as the first element in square brackets. Therefore, the input function can take any number of arguments of arbitrary types.

The second parameter of the Callable type hint is now T . This is a type variable that can stand in for any type. Since you use T as the return type for apply_func() as well, this declares that apply_func() returns the same type as func .

Because the callable supplied to apply_func() can take arguments of an arbitrary number or no arguments at all, you can use *args and **kwargs to indicate this.

Note: As written, there’s no explicit relationship between the ellipsis inside Callable and the Any annotations of *args and **kwargs . You can use parameter specification variables to improve these type hints further:

Now, P represents all the parameters of func , and apply_func() will inherit the same types for its *args and **kwargs .

If you’re on a Python version before Python 3.10 , then you need to import ParamSpec from typing_extensions instead.

In addition to annotating callable arguments, you can also use type hints to formally specify a callable return type of a function, which you’ll take a closer look at now.

A factory function is a higher-order function that produces a new function from scratch. The factory’s parameters determine this new function’s behavior. In particular, a function that takes a callable and also returns one is called a decorator in Python.

Continuing with the previous examples, what if you wanted to write a decorator to time the execution of other functions in your code? Here’s how you’d measure how long your parse_email() function takes to finish:

The timeit() decorator takes a callable with arbitrary inputs and outputs as an argument and returns a callable with the same inputs and outputs. The ParamSpec annotation indicates the arbitrary inputs in the first element of Callable , while the TypeVar indicates the arbitrary outputs in the second element.

The timeit() decorator defines an inner function , wrapper() , that uses a timer function to measure how long it takes to execute the callable given as the argument. This inner function stores the current time in the start variable, executes the decorated function while capturing its return value, and stores the new time in the end variable. It then prints out the calculated duration before returning the value of the decorated function.

After defining timeit() , you can decorate any function with it using the at symbol ( @ ) as syntactic sugar instead of manually calling it as if it were a factory function. For example, you can use @timeit around parse_email() to create a new function with an additional behavior responsible for timing its own execution:

You’ve added new capability to your function in a declarative style without modifying its source code, which is elegant but somewhat goes against the Zen of Python . One could argue that decorators make your code less explicit. At the same time, they can make your code look simpler, improving its readability.

When you call the decorated parse_email() function, it returns the expected values but also prints a message describing how long your original function took to execute:

The duration of your function is negligible, as indicated by the message above. After calling the decorated function, you assign and unpack the returned tuple into variables named username and domain .

Next up, you’ll learn how to annotate the return value of a generator function that yields values one by one when requested instead of all at once.

Sometimes, you may want to use a generator to yield pieces of data one at a time instead of storing them all in memory for better efficiency, especially for larger datasets. You can annotate a generator function using type hints in Python. One way to do so is to use the Generator type from the collections.abc module.

Note: As with the Callable type, don’t confuse collections.abc.Generator with the deprecated typing.Generator type.

Continuing with the previous examples, imagine now that you have a long list of emails to parse. Instead of storing every parsed result in memory and having the function return everything at once, you can use a generator to yield the parsed usernames and domains one at a time.

To do so, you can write the following generator function, which yields this information, and use the Generator type as a type hint for the return type:

The parse_email() generator function doesn’t take any arguments, as you’ll send them to the resulting generator object . Notice that the Generator type hint expects three parameters, the last two of which are optional:

  • Yield type : The first parameter is what the generator yields. In this case, it’s a tuple containing two strings—one for the username and the other for the domain, both parsed from the email address. Alternatively, the generator may yield an error string when the email address is invalid.
  • Send type : The second parameter describes what you’re sending into the generator. This is also a string, as you’ll be sending email addresses to the generator.
  • Return type : The third parameter represents what the generator returns when it’s done producing values. In this case, the function returns the string "Done" .

This is how you’d use your generator function:

You start by calling the parse_email() generator function, which returns a new generator object. Then, you advance the generator to the first yield statement by calling the built-in next() function . After that, you can start sending email addresses to the generator to parse. The generator terminates when you send an empty string.

Because generators are also iterators —namely, generator iterators —you can alternatively use the collections.abc.Iterator type instead of Generator as a type hint to convey a similar meaning. However, because you won’t be able to specify the send and return types using a pure Iterator type hint, collections.abc.Iterator will only work as long as your generator yields values alone:
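A version matching the description below, taking a list of strings and yielding lazily with a for loop:

```python
from collections.abc import Iterator

def parse_emails(emails: list[str]) -> Iterator[tuple[str, str]]:
    for email in emails:
        username, _, domain = email.partition("@")
        yield username, domain
```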

This flavor of the parse_email() function takes a list of strings and returns a generator object that iterates over them in a lazy fashion using a for loop . Even though a generator is more specific than an iterator, the latter is still broadly applicable and easier to read, so it’s a valid choice.

Sometimes Python programmers use the even less restrictive and more general collections.abc.Iterable type to annotate such a generator without leaking the implementation details:
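A version consistent with the description below, annotating both the argument and return value as Iterable:

```python
from collections.abc import Iterable

def parse_emails(emails: Iterable[str]) -> Iterable[tuple[str, str]]:
    # Accepts any iterable; the caller only needs to loop over the result.
    for email in emails:
        username, _, domain = email.partition("@")
        yield username, domain
```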

In this case, you annotate both the function’s argument and its return type with the Iterable type to make the function more versatile. It can now accept any iterable object instead of just a list like before.

Conversely, the function caller doesn’t need to know whether it returns a generator or a sequence of items as long as they can loop over it. This adds tremendous flexibility because you can change the implementation from an eager list container to a lazy generator without breaking the contract with the caller established through type hints. You can do this when you anticipate that the returned data will be large enough to need a generator.

Note: While you can reserve the freedom to change the return type later, as in the example above, in most cases you should strive to be as specific in your return type annotations as possible.

As you’ve seen in this example, there are a few options when it comes to annotating generators with type hints.

Now that you’ve seen how to use type hints to specify multiple different types, it’s worth considering best practices for maintainability. The first concept on this topic is around type aliasing .

If you find yourself using the same set of return types across multiple functions, then it can get tedious trying to maintain all of them separately in different places across your codebase. Instead, consider using a type alias . You can assign a set of type hints to an alias and reuse that alias in multiple functions within your code.

The main benefit of doing this is that if you need to modify the specific set of type hints, then you can do so in a single location, and there’s no need to refactor the return types in each function where you use them.

It’s worth noting that even if you don’t intend to reuse the same type hint across multiple functions, using type aliases can improve the readability of your code. In a keynote speech at PyCon US 2022 , Łukasz Langa explained how giving your types meaningful names can help with understanding your code better.

You can do so by giving your type hint an aliased name and then using this alias as a type hint. Here’s an example of how to do this for the same function as before:
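A version consistent with the description below, assigning the type hint to an alias via plain assignment (Python 3.10+ for the `|` syntax):

```python
EmailComponents = tuple[str, str] | None

def parse_email(email_address: str) -> EmailComponents:
    if "@" in email_address:
        username, domain = email_address.split("@")
        return username, domain
    return None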

Here, you define a new EmailComponents variable as an alias of the type hint indicating the function’s return value, which can be either None or a tuple containing two strings. Then, you use the EmailComponents alias in your function’s signature .

Python version 3.10 introduced the TypeAlias declaration to make type aliases more explicit and distinct from regular variables. Here’s how you can use it:

You need to import TypeAlias from the typing module before you can use it as a type hint to annotate your EmailComponents alias. After importing it, you can use it as a type hint for type aliases, as demonstrated above.

Note that since Python 3.12, you can specify type aliases using the new soft keyword type . A soft keyword only becomes a keyword when it’s clear from the context. Otherwise, it can mean something else. Remember that type() is also one of the built-in functions in Python. Here’s how you’d use the new soft keyword type :

Starting in Python 3.12, you can use type to specify type aliases, as you’ve done in the example above. You can specify the type alias name and type hint. The benefit of using type is that it doesn’t require any imports.

Note: There are subtle differences in using type and TypeAlias , which aren’t drop-in replacements for each other. To learn more about type and other new typing features in Python 3.12, check out Python 3.12 Preview: Static Typing Improvements .

Aliasing type hints with descriptive and meaningful names is a straightforward yet elegant trick that can improve your code’s readability and maintainability, so don’t overlook it.

As a dynamically typed language, Python doesn’t actually enforce type hints at runtime . This means that a function can declare any return type, and the program will still run even if the function returns a value of a different type, without raising an exception .

Although Python doesn’t enforce type hints at runtime, you can use third-party tools for type checking , some of which may integrate with your code editor through plugins. They can be helpful for catching type-related errors during the development or testing process.

Mypy is a popular third-party static type checker tool for Python. Other options include pytype , Pyre , and Pyright . They all work by inferring variable types from their values and checking against the corresponding type hints.

To use mypy in your project, start by installing the mypy package in your virtual environment using pip :
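Assuming an activated virtual environment, the command might look like this:

```console
(venv) $ python -m pip install mypy
```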

This will make the mypy command available in your project. What happens if you run mypy on a Python module containing a function that you’ve seen previously?

If you recall the parse_email() function from before, it takes a string with an email address as a parameter. Other than that, it returns either None or a tuple of two strings containing the username and the domain. Go ahead and save this function in a file named email_parser.py if you haven’t already:

You can run this code through a type checker by typing mypy followed by the name of your Python file in the command line:
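The invocation, with the success message that current mypy versions print when no problems are found:

```console
$ mypy email_parser.py
Success: no issues found in 1 source file
```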

This runs an automated static code analysis without executing your code. Mypy tries to assess whether the actual values will have the expected types according to the type hints declared. In this case, everything seems to be correct.

But what happens if you make a mistake in your code? Say that the declared return value of parse_email() has an incorrect type hint, indicating a string instead of a tuple of two strings:

The modified parse_email() function above has a discrepancy between the type hint and one of the values that it actually returns. When you rerun mypy in the command line, you’ll see the following error:
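The exact wording depends on your mypy version, but the output will be along these lines:

```console
$ mypy email_parser.py
email_parser.py:5: error: Incompatible return value type (got "tuple[str, str]", expected "str")  [return-value]
Found 1 error in 1 file (checked 1 source file)
```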

This message indicates that your function returns a tuple with two strings instead of the expected single string value. Such information is invaluable because it can prevent catastrophic bugs from happening at runtime.

In addition to type checking, mypy can infer types for you. When you run your script through mypy, you can pass in any expression or variable to the reveal_type() function without having to import it. It’ll infer the type of the expression and print it out on the standard output .

When annotating functions with type hints, you can call reveal_type() for trickier cases. Here’s an example of how to use this function to determine the actual type of the parse_email() return value:

In the example above, the variable result contains the two components from a parsed email address. You can pass this variable into the reveal_type() function for mypy to infer the type. Here’s how you’d run it in the console:

When you run mypy on your Python file, it prints out the inferred type from the script’s reveal_type() function. In this example, mypy correctly infers that the result variable is either a tuple containing two strings or None .

Remember that IDEs and third-party static type checker tools can catch type-related errors in the development and testing process. These tools infer the type from return values and ensure that functions are returning the expected type.

Be sure to take advantage of this useful function that can reveal the type hints of more complicated return types!

Although type hinting is optional, it’s a useful concept to make your code more readable, user-friendly, and easier to debug . Type hints signal to other developers the desired inputs and return types of your functions, facilitating collaboration.

In this tutorial, you focused on implementing type hints for more complex scenarios and learned best practices for using and maintaining them.

Specifically, you’ve learned how to use:

  • The pipe operator ( | ) or the Union type to specify alternative types of one piece of data returned from a function
  • Tuples to specify distinct types of multiple pieces of data
  • The Callable type for annotating callback functions
  • The Generator , Iterator , and Iterable types for annotating generators
  • Type aliases for type hints to help simplify complex type hints that you reference in multiple places in your code
  • Mypy , a third-party tool for type checking

Now you’re ready to use type hints in a variety of scenarios.

About Claudia Ng

Claudia is an avid Pythonista and Real Python contributor. She is a Data Scientist and has worked for several tech startups specializing in the areas of credit and fraud risk modeling.

Each tutorial at Real Python is created by a team of developers so that it meets our high quality standards. The team members who worked on this tutorial are: Aldren Santos.



Python Type Hints: A Comprehensive Guide to Using Type Annotations in Python


As a software engineer, writing clean, reliable, and maintainable code is crucial. Python, with its dynamic typing, provides flexibility but can sometimes lead to issues and confusion, especially when working on larger projects. Python type hints offer a solution to this problem by providing static typing information to improve code quality and maintainability. In this blog post, we'll explore Python type hints, their usage, benefits, drawbacks, and best practices for using type annotations in your Python code.

Python type hints, introduced in Python 3.5 through PEP 484 , allow developers to add type annotations to variables, function parameters, and return values. These annotations provide information about the expected types of values and enable static type checking tools to catch potential errors before runtime. However, it's important to note that Python type hints are optional and do not affect the dynamic nature of Python. They serve as documentation and aids for static analysis tools and other developers.

Benefits and Usage of Python Type Hints

Python type hints offer numerous benefits and can be effectively used in various scenarios to enhance code quality, readability, and collaboration.

How to Use Python Type Hints

Python type hints can be added using the colon syntax ( : ) followed by the type annotation. Here are a few examples:
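For instance — names chosen to match the explanation below:

```python
# A variable annotation and an annotated function
variable_name: int = 42

def add_numbers(a: int, b: int) -> int:
    return a + b
```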

In the above examples, we specify that variable_name should be of type int , and the add_numbers function takes two parameters ( a and b ) of type int and returns an int value.

Python provides a set of built-in types like int , float , str , list , dict , etc. Additionally, you can also use type hints with user-defined classes and modules.

Here are some additional examples showcasing the benefits of Python type hints:

In these examples, you can see how type hints are used to specify the expected types of function parameters and return values. They help clarify the intent of the code and provide information for static type checkers and IDEs to offer better code suggestions and catch potential errors.

These examples demonstrate the flexibility of Python type hints, allowing you to annotate variables, function parameters, and return values with various types, including built-in types, user-defined classes, and even more complex types like unions and generators.

Python type hints can be beneficial in several scenarios:

  • Improved Code Readability : Type annotations make code more self-explanatory and help developers understand the expected types of variables, function parameters, and return values.
  • Improved Code Quality : Type hints enable developers to catch type-related errors early and improve the overall quality of the codebase. Python type hints work hand-in-hand with static type checkers like mypy and linters like pylint and flake8 . These tools analyze code against type hints and provide additional warnings or suggestions for code improvements, reducing the likelihood of bugs/runtime errors and improving code correctness.
  • Enhanced Developer Experience : IDEs and code editors leverage type hints to provide better autocompletion, refactoring tools, and improved static analysis. This results in a more efficient and pleasant coding experience.
  • Collaboration and Maintainability : Type hints act as a form of documentation, making it easier for other developers to understand and work with your code, especially in larger codebases or collaborative projects. It makes it easier to collaborate on projects and reduces the chance of misinterpretation.

Best Practices for Using Python Type Hints

To make the most out of Python type hints, consider the following best practices:

  • Consistency : Maintain consistent usage of type hints throughout your codebase.
  • Gradual Adoption : If you're working with an existing codebase that does not currently use type hints, consider adopting type hints gradually to minimize disruptions and allow for a smooth transition.
  • Avoid Overly Complex Annotations : While type hints can express complex types, try to keep the annotations simple and straightforward. Overly complex annotations may hinder code readability.
  • Document Non-obvious Types : When using custom types or when the expected type is not immediately clear, consider adding a comment or docstring to provide additional context.
  • Use Union Types and Optional Types : Leverage union types ( Union[T1, T2] ) and optional types ( Optional[T] ) to express flexibility in the type system.
  • Leverage Type Checking Tools : Integrate static type checkers like mypy and linters like pylint and flake8 into your development process to catch type-related errors early and benefit from their static analysis capabilities.
  • Test Your Code : Even with type hints and type checkers, it's important to thoroughly test your code to ensure correctness and identify potential runtime errors.

Knowing When to Avoid Python Type Hints

While Python type hints offer numerous benefits, their usage should be considered based on the specific requirements and context of your project. Here are some key factors to keep in mind:

  • Script Size and Complexity : For small scripts or prototypes, introducing type hints may add unnecessary complexity without significant benefits. Consider whether the benefits of type hints outweigh the additional overhead in these cases.
  • Legacy Codebases : If you are working with a legacy codebase, it's important to evaluate whether the codebase can migrate to a Python version that supports type hints (Python 3.5 and above). If not, using type hints may not be feasible or practical.
  • Dynamic and Unknown Types : Python's dynamic nature allows flexibility with dynamic or unknown types. In situations where the flexibility of dynamic typing is essential, strict type hints may restrict that flexibility and hinder development.

Python type hints provide a powerful mechanism for adding static typing information to your code while preserving the dynamic nature of the language. By using type annotations, you can improve code quality, readability, collaboration, catch errors early, and benefit from enhanced tooling support.

When using Python type hints, ensure consistency, adopt type hints gradually, and consider the specific needs of your project. While type hints have benefits, they are optional and may not be necessary in all scenarios. Small scripts, prototypes, or legacy codebases where introducing type hints would be impractical or disruptive may not require their usage. Additionally, in cases where dynamic or unknown types are involved, the flexibility of Python's dynamic nature may be preferred over strict type hints.

Remember, the primary goal is to write clean, maintainable code that is easily understood by both humans and machines. Python type hints are a valuable tool in achieving this goal and can greatly enhance the development experience for you and your team.

Python Enhancement Proposals


PEP 484 – Type Hints


PEP 3107 introduced syntax for function annotations, but the semantics were deliberately left undefined. There has now been enough 3rd party usage for static type analysis that the community would benefit from a standard vocabulary and baseline tools within the standard library.

This PEP introduces a provisional module to provide these standard definitions and tools, along with some conventions for situations where annotations are not available.

Note that this PEP still explicitly does NOT prevent other uses of annotations, nor does it require (or forbid) any particular processing of annotations, even when they conform to this specification. It simply enables better coordination, as PEP 333 did for web frameworks.

For example, here is a simple function whose argument and return type are declared in the annotations:
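The function in question is the PEP’s greeting() example:

```python
def greeting(name: str) -> str:
    return 'Hello ' + name
```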

While these annotations are available at runtime through the usual __annotations__ attribute, no type checking happens at runtime . Instead, the proposal assumes the existence of a separate off-line type checker which users can run over their source code voluntarily. Essentially, such a type checker acts as a very powerful linter. (While it would of course be possible for individual users to employ a similar checker at run time for Design By Contract enforcement or JIT optimization, those tools are not yet as mature.)

The proposal is strongly inspired by mypy . For example, the type “sequence of integers” can be written as Sequence[int] . The square brackets mean that no new syntax needs to be added to the language. The example here uses a custom type Sequence , imported from a pure-Python module typing . The Sequence[int] notation works at runtime by implementing __getitem__() in the metaclass (but its significance is primarily to an offline type checker).

The type system supports unions, generic types, and a special type named Any which is consistent with (i.e. assignable to and from) all types. This latter feature is taken from the idea of gradual typing. Gradual typing and the full type system are explained in PEP 483 .

Other approaches from which we have borrowed or to which ours can be compared and contrasted are described in PEP 482 .

Rationale and Goals

PEP 3107 added support for arbitrary annotations on parts of a function definition. Although no meaning was assigned to annotations then, there has always been an implicit goal to use them for type hinting , which is listed as the first possible use case in said PEP.

This PEP aims to provide a standard syntax for type annotations, opening up Python code to easier static analysis and refactoring, potential runtime type checking, and (perhaps, in some contexts) code generation utilizing type information.

Of these goals, static analysis is the most important. This includes support for off-line type checkers such as mypy, as well as providing a standard notation that can be used by IDEs for code completion and refactoring.

While the proposed typing module will contain some building blocks for runtime type checking – in particular the get_type_hints() function – third party packages would have to be developed to implement specific runtime type checking functionality, for example using decorators or metaclasses. Using type hints for performance optimizations is left as an exercise for the reader.

It should also be emphasized that Python will remain a dynamically typed language, and the authors have no desire to ever make type hints mandatory, even by convention.

Any function without annotations should be treated as having the most general type possible, or ignored, by any type checker. Functions with the @no_type_check decorator should be treated as having no annotations.

It is recommended but not required that checked functions have annotations for all arguments and the return type. For a checked function, the default annotation for arguments and for the return type is Any . An exception is the first argument of instance and class methods. If it is not annotated, then it is assumed to have the type of the containing class for instance methods, and a type object type corresponding to the containing class object for class methods. For example, in class A the first argument of an instance method has the implicit type A . In a class method, the precise type of the first argument cannot be represented using the available type notation.

(Note that the return type of __init__ ought to be annotated with -> None . The reason for this is subtle. If __init__ assumed a return annotation of -> None , would that mean that an argument-less, un-annotated __init__ method should still be type-checked? Rather than leaving this ambiguous or introducing an exception to the exception, we simply say that __init__ ought to have a return annotation; the default behavior is thus the same as for other methods.)

A type checker is expected to check the body of a checked function for consistency with the given annotations. The annotations may also be used to check correctness of calls appearing in other checked functions.

Type checkers are expected to attempt to infer as much information as necessary. The minimum requirement is to handle the builtin decorators @property , @staticmethod and @classmethod .

Type Definition Syntax

The syntax leverages PEP 3107 -style annotations with a number of extensions described in sections below. In its basic form, type hinting is used by filling function annotation slots with classes:
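As in the PEP’s running example:

```python
def greeting(name: str) -> str:
    return 'Hello ' + name
```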

This states that the expected type of the name argument is str . Analogously, the expected return type is str .

Expressions whose type is a subtype of a specific argument type are also accepted for that argument.

Type hints may be built-in classes (including those defined in standard library or third-party extension modules), abstract base classes, types available in the types module, and user-defined classes (including those defined in the standard library or third-party modules).

While annotations are normally the best format for type hints, there are times when it is more appropriate to represent them by a special comment, or in a separately distributed stub file. (See below for examples.)

Annotations must be valid expressions that evaluate without raising exceptions at the time the function is defined (but see below for forward references).

Annotations should be kept simple or static analysis tools may not be able to interpret the values. For example, dynamically computed types are unlikely to be understood. (This is an intentionally somewhat vague requirement, specific inclusions and exclusions may be added to future versions of this PEP as warranted by the discussion.)

In addition to the above, the following special constructs defined below may be used: None , Any , Union , Tuple , Callable , all ABCs and stand-ins for concrete classes exported from typing (e.g. Sequence and Dict ), type variables, and type aliases.

All newly introduced names used to support features described in following sections (such as Any and Union ) are available in the typing module.

When used in a type hint, the expression None is considered equivalent to type(None) .

Type aliases are defined by simple variable assignments:
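For example, from the PEP:

```python
Url = str  # a type alias for plain strings holding URLs

def retry(url: Url, retry_count: int) -> None: ...
```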

Note that we recommend capitalizing alias names, since they represent user-defined types, which (like user-defined classes) are typically spelled that way.

Type aliases may be as complex as type hints in annotations – anything that is acceptable as a type hint is acceptable in a type alias:
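For example, in the PEP’s style:

```python
from typing import TypeVar, Iterable, Tuple

T = TypeVar('T', int, float, complex)
Vector = Iterable[Tuple[T, T]]  # a generic type alias

def inproduct(v: Vector[T]) -> T:
    return sum(x * y for x, y in v)
```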

This is equivalent to:
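For instance, a Vector alias defined as Iterable[Tuple[T, T]] simply expands in place:

```python
from typing import TypeVar, Iterable, Tuple

T = TypeVar('T', int, float, complex)

def inproduct(v: Iterable[Tuple[T, T]]) -> T:
    return sum(x * y for x, y in v)
```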

Frameworks expecting callback functions of specific signatures might be type hinted using Callable[[Arg1Type, Arg2Type], ReturnType] . Examples:
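Examples from the PEP:

```python
from typing import Callable

def feeder(get_next_item: Callable[[], str]) -> None: ...

def async_query(on_success: Callable[[int], None],
                on_error: Callable[[int, Exception], None]) -> None: ...
```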

It is possible to declare the return type of a callable without specifying the call signature by substituting a literal ellipsis (three dots) for the list of arguments:

Note that there are no square brackets around the ellipsis. The arguments of the callback are completely unconstrained in this case (and keyword arguments are acceptable).

Since using callbacks with keyword arguments is not perceived as a common use case, there is currently no support for specifying keyword arguments with Callable . Similarly, there is no support for specifying callback signatures with a variable number of arguments of a specific type.

Because typing.Callable does double-duty as a replacement for collections.abc.Callable , isinstance(x, typing.Callable) is implemented by deferring to isinstance(x, collections.abc.Callable) . However, isinstance(x, typing.Callable[...]) is not supported.

Since type information about objects kept in containers cannot be statically inferred in a generic way, abstract base classes have been extended to support subscription to denote expected types for container elements. Example:

Generics can be parameterized by using a new factory available in typing called TypeVar . Example:
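Example from the PEP:

```python
from typing import Sequence, TypeVar

T = TypeVar('T')  # Declare type variable

def first(l: Sequence[T]) -> T:  # Generic function
    return l[0]
```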

In this case the contract is that the returned value is consistent with the elements held by the collection.

A TypeVar() expression must always directly be assigned to a variable (it should not be used as part of a larger expression). The argument to TypeVar() must be a string equal to the variable name to which it is assigned. Type variables must not be redefined.

TypeVar supports constraining parametric types to a fixed set of possible types (note: those types cannot be parameterized by type variables). For example, we can define a type variable that ranges over just str and bytes . By default, a type variable ranges over all possible types. Example of constraining a type variable:
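For example, to define a type variable ranging over just str and bytes (this is essentially how typing.AnyStr is defined):

```python
from typing import TypeVar

AnyStr = TypeVar('AnyStr', str, bytes)

def concat(x: AnyStr, y: AnyStr) -> AnyStr:
    return x + y
```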

The function concat can be called with either two str arguments or two bytes arguments, but not with a mix of str and bytes arguments.

There should be at least two constraints, if any; specifying a single constraint is disallowed.

Subtypes of types constrained by a type variable should be treated as their respective explicitly listed base types in the context of the type variable. Consider this example:

The call is valid but the type variable AnyStr will be set to str and not MyStr . In effect, the inferred type of the return value assigned to x will also be str .

Additionally, Any is a valid value for every type variable. Consider the following:

This is equivalent to omitting the generic notation and just saying elements: List .

You can include a Generic base class to define a user-defined class as generic. Example:
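The PEP illustrates this with a LoggedVar class:

```python
from logging import Logger
from typing import Generic, TypeVar

T = TypeVar('T')

class LoggedVar(Generic[T]):
    def __init__(self, value: T, name: str, logger: Logger) -> None:
        self.name = name
        self.logger = logger
        self.value = value

    def set(self, new: T) -> None:
        self.log('Set ' + repr(self.value))
        self.value = new

    def get(self) -> T:
        self.log('Get ' + repr(self.value))
        return self.value

    def log(self, message: str) -> None:
        self.logger.info('%s: %s', self.name, message)
```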

Generic[T] as a base class defines that the class LoggedVar takes a single type parameter T . This also makes T valid as a type within the class body.

The Generic base class uses a metaclass that defines __getitem__ so that LoggedVar[t] is valid as a type:

A generic type can have any number of type variables, and type variables may be constrained. This is valid:

Each type variable argument to Generic must be distinct. This is thus invalid:

The Generic[T] base class is redundant in simple cases where you subclass some other generic class and specify type variables for its parameters:

That class definition is equivalent to:

You can use multiple inheritance with Generic :

Subclassing a generic class without specifying type parameters assumes Any for each position. In the following example, MyIterable is not generic but implicitly inherits from Iterable[Any] :

Generic metaclasses are not supported.

Type variables follow normal name resolution rules. However, there are some special cases in the static typechecking context:

  • A type variable used in a generic function could be inferred to represent different types in the same code block. Example:

    ```python
    from typing import TypeVar, Generic

    T = TypeVar('T')

    def fun_1(x: T) -> T: ...  # T here
    def fun_2(x: T) -> T: ...  # and here could be different

    fun_1(1)    # This is OK, T is inferred to be int
    fun_2('a')  # This is also OK, now T is str
    ```

  • A type variable used in a method of a generic class that coincides with one of the variables that parameterize this class is always bound to that variable. Example:

    ```python
    from typing import TypeVar, Generic

    T = TypeVar('T')

    class MyClass(Generic[T]):
        def meth_1(self, x: T) -> T: ...  # T here
        def meth_2(self, x: T) -> T: ...  # and here are always the same

    a = MyClass()  # type: MyClass[int]
    a.meth_1(1)    # OK
    a.meth_2('a')  # This is an error!
    ```

  • A type variable used in a method that does not match any of the variables that parameterize the class makes this method a generic function in that variable:

    ```python
    T = TypeVar('T')
    S = TypeVar('S')

    class Foo(Generic[T]):
        def method(self, x: T, y: S) -> S:
            ...

    x = Foo()  # type: Foo[int]
    y = x.method(0, "abc")  # inferred type of y is str
    ```

  • Unbound type variables should not appear in the bodies of generic functions, or in the class bodies apart from method definitions:

    ```python
    T = TypeVar('T')
    S = TypeVar('S')

    def a_fun(x: T) -> None:
        # this is OK
        y = []  # type: List[T]
        # but below is an error!
        y = []  # type: List[S]

    class Bar(Generic[T]):
        # this is also an error
        an_attr = []  # type: List[S]

        def do_something(x: S) -> S:  # this is OK though
            ...
    ```

  • A generic class definition that appears inside a generic function should not use type variables that parameterize the generic function:

    ```python
    from typing import List

    def a_fun(x: T) -> None:
        # This is OK
        a_list = []  # type: List[T]
        ...

        # This is however illegal
        class MyGeneric(Generic[T]):
            ...
    ```

  • A generic class nested in another generic class cannot use the same type variables. The scope of the type variables of the outer class doesn’t cover the inner one:

    ```python
    T = TypeVar('T')
    S = TypeVar('S')

    class Outer(Generic[T]):
        class Bad(Iterable[T]):       # Error
            ...
        class AlsoBad:
            x = None  # type: List[T] # Also an error

        class Inner(Iterable[S]):     # OK
            ...
        attr = None  # type: Inner[T] # Also OK
    ```

User-defined generic classes can be instantiated. Suppose we write a Node class inheriting from Generic[T] :

To create Node instances you call Node() just as for a regular class. At runtime the type (class) of the instance will be Node . But what type does it have to the type checker? The answer depends on how much information is available in the call. If the constructor ( __init__ or __new__ ) uses T in its signature, and a corresponding argument value is passed, the type of the corresponding argument(s) is substituted. Otherwise, Any is assumed. Example:

In case the inferred type uses [Any] but the intended type is more specific, you can use a type comment (see below) to force the type of the variable, e.g.:

Alternatively, you can instantiate a specific concrete type, e.g.:

Note that the runtime type (class) of p and q is still just Node – Node[int] and Node[str] are distinguishable class objects, but the runtime class of the objects created by instantiating them doesn’t record the distinction. This behavior is called “type erasure”; it is common practice in languages with generics (e.g. Java, TypeScript).

Using generic classes (parameterized or not) to access attributes will result in type check failure. Outside the class definition body, a class attribute cannot be assigned, and can only be looked up by accessing it through a class instance that does not have an instance attribute with the same name:

Generic versions of abstract collections like Mapping or Sequence and generic versions of built-in classes – List , Dict , Set , and FrozenSet – cannot be instantiated. However, concrete user-defined subclasses thereof and generic versions of concrete collections can be instantiated:

Note that one should not confuse static types and runtime classes. The type is still erased in this case and the above expression is just a shorthand for:

It is not recommended to use the subscripted class (e.g. Node[int] ) directly in an expression – using a type alias (e.g. IntNode = Node[int] ) instead is preferred. (First, creating the subscripted class, e.g. Node[int] , has a runtime cost. Second, using a type alias is more readable.)

Generic[T] is only valid as a base class – it’s not a proper type. However, user-defined generic types such as LinkedList[T] from the above example and built-in generic types and ABCs such as List[T] and Iterable[T] are valid both as types and as base classes. For example, we can define a subclass of Dict that specializes type arguments:

SymbolTable is a subclass of dict and a subtype of Dict[str, List[Node]] .

If a generic base class has a type variable as a type argument, this makes the defined class generic. For example, we can define a generic LinkedList class that is iterable and a container:
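The PEP's example uses `class LinkedList(Iterable[T], Container[T]): ...`; the method bodies below are an illustrative fleshing-out so the class is instantiable at runtime:

```python
from typing import TypeVar, Iterable, Container, Iterator, List

T = TypeVar('T')

class LinkedList(Iterable[T], Container[T]):
    def __init__(self) -> None:
        self._items: List[T] = []

    def append(self, item: T) -> None:
        self._items.append(item)

    def __iter__(self) -> Iterator[T]:          # makes it iterable
        return iter(self._items)

    def __contains__(self, item: object) -> bool:  # makes it a container
        return item in self._items

ll: LinkedList[int] = LinkedList()  # LinkedList[int] is a valid type
ll.append(3)
```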

Now LinkedList[int] is a valid type. Note that we can use T multiple times in the base class list, as long as we don’t use the same type variable T multiple times within Generic[...] .

Also consider the following example:

In this case MyDict has a single parameter, T.

The metaclass used by Generic is a subclass of abc.ABCMeta . A generic class can be an ABC by including abstract methods or properties, and generic classes can also have ABCs as base classes without a metaclass conflict.

A type variable may specify an upper bound using bound=<type> (note: <type> itself cannot be parameterized by type variables). This means that an actual type substituted (explicitly or implicitly) for the type variable must be a subtype of the boundary type. Example:
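This is the PEP's `longer` example: the bound `Sized` guarantees that `len()` may be called on values of the type variable:

```python
from typing import TypeVar, Sized

ST = TypeVar('ST', bound=Sized)  # only subtypes of Sized may be substituted

def longer(x: ST, y: ST) -> ST:
    if len(x) >= len(y):
        return x
    else:
        return y

longer([1, 2, 3], [4])  # ok, both arguments are Sized; returns a list
longer('spam', 'eg')    # ok, str is Sized; returns a str
```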

An upper bound cannot be combined with type constraints (as used for AnyStr , see the example earlier); type constraints cause the inferred type to be _exactly_ one of the constraint types, while an upper bound just requires that the actual type is a subtype of the boundary type.

Consider a class Employee with a subclass Manager . Now suppose we have a function with an argument annotated with List[Employee] . Should we be allowed to call this function with a variable of type List[Manager] as its argument? Many people would answer “yes, of course” without even considering the consequences. But unless we know more about the function, a type checker should reject such a call: the function might append an Employee instance to the list, which would violate the variable’s type in the caller.

It turns out such an argument acts contravariantly , whereas the intuitive answer (which is correct in case the function doesn’t mutate its argument!) requires the argument to act covariantly . A longer introduction to these concepts can be found on Wikipedia and in PEP 483 ; here we just show how to control a type checker’s behavior.

By default generic types are considered invariant in all type variables, which means that values for variables annotated with types like List[Employee] must exactly match the type annotation – no subclasses or superclasses of the type parameter (in this example Employee ) are allowed.

To facilitate the declaration of container types where covariant or contravariant type checking is acceptable, type variables accept keyword arguments covariant=True or contravariant=True . At most one of these may be passed. Generic types defined with such variables are considered covariant or contravariant in the corresponding variable. By convention, it is recommended to use names ending in _co for type variables defined with covariant=True and names ending in _contra for those defined with contravariant=True .

A typical example involves defining an immutable (or read-only) container class:
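A sketch based on the PEP's `ImmutableList` example; because the class only ever produces values of `T_co` and never consumes them, declaring it covariant is safe:

```python
from typing import TypeVar, Generic, Iterable, Iterator

T_co = TypeVar('T_co', covariant=True)

class ImmutableList(Generic[T_co]):
    def __init__(self, items: Iterable[T_co]) -> None:
        self._items = tuple(items)   # contents are fixed at construction

    def __iter__(self) -> Iterator[T_co]:
        return iter(self._items)

class Employee: ...
class Manager(Employee): ...

def dump_employees(emps: ImmutableList[Employee]) -> None:
    for emp in emps:
        ...

mgrs = ImmutableList([Manager()])
dump_employees(mgrs)  # accepted, because ImmutableList is covariant
```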

The read-only collection classes in typing are all declared covariant in their type variable (e.g. Mapping and Sequence ). The mutable collection classes (e.g. MutableMapping and MutableSequence ) are declared invariant. The one example of a contravariant type is the Generator type, which is contravariant in the send() argument type (see below).

Note: Covariance or contravariance is not a property of a type variable, but a property of a generic class defined using this variable. Variance is only applicable to generic types; generic functions do not have this property. The latter should be defined using only type variables without covariant or contravariant keyword arguments. For example, the following example is fine:

while the following is prohibited:

PEP 3141 defines Python’s numeric tower, and the stdlib module numbers implements the corresponding ABCs ( Number , Complex , Real , Rational and Integral ). There are some issues with these ABCs, but the built-in concrete numeric classes complex , float and int are ubiquitous (especially the latter two :-).

Rather than requiring that users write import numbers and then use numbers.Real etc., this PEP proposes a straightforward shortcut that is almost as effective: when an argument is annotated as having type float , an argument of type int is acceptable; similarly, for an argument annotated as having type complex , arguments of type float or int are acceptable. This does not handle classes implementing the corresponding ABCs or the fractions.Fraction class, but we believe those use cases are exceedingly rare.

When a type hint contains names that have not been defined yet, that definition may be expressed as a string literal, to be resolved later.

A situation where this occurs commonly is the definition of a container class, where the class being defined occurs in the signature of some of the methods. For example, the following code (the start of a simple binary tree implementation) does not work:

To address this, we write:
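The fix is to quote the not-yet-defined name. This is a slight variation on the PEP's `Tree` example, with `Optional` defaults added so the snippet can actually construct a tree:

```python
from typing import Optional

class Tree:
    def __init__(self, left: Optional['Tree'] = None,
                 right: Optional['Tree'] = None) -> None:
        # 'Tree' is a forward reference: the name Tree does not exist yet
        # at the time this annotation is evaluated as a string literal.
        self.left = left
        self.right = right

leaf = Tree()
root = Tree(leaf, Tree())
```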

The string literal should contain a valid Python expression (i.e., compile(lit, '', 'eval') should be a valid code object) and it should evaluate without errors once the module has been fully loaded. The local and global namespace in which it is evaluated should be the same namespaces in which default arguments to the same function would be evaluated.

Moreover, the expression should be parseable as a valid type hint, i.e., it is constrained by the rules from the section Acceptable type hints above.

It is allowable to use string literals as part of a type hint, for example:

A common use for forward references is when e.g. Django models are needed in the signatures. Typically, each model is in a separate file, and has methods taking arguments whose type involves other models. Because of the way circular imports work in Python, it is often not possible to import all the needed models directly:

Assuming main is imported first, this will fail with an ImportError at the line from models.a import A in models/b.py, which is being imported from models/a.py before module a has defined class A . The solution is to switch to module-only imports and reference the models by their _module_._class_ name:

Since accepting a small, limited set of expected types for a single argument is common, there is a new special factory called Union . Example:
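A sketch along the lines of the PEP's `handle_employees` example; the return value is an assumption added here only so the behavior is observable:

```python
from typing import Sequence, Union

class Employee: ...

def handle_employees(e: Union[Employee, Sequence[Employee]]) -> int:
    if isinstance(e, Employee):
        e = [e]               # normalize a single Employee to a sequence
    return len(e)             # illustrative: count the employees handled
```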

A type factored by Union[T1, T2, ...] is a supertype of all types T1 , T2 , etc., so that a value that is a member of one of these types is acceptable for an argument annotated by Union[T1, T2, ...] .

One common case of union types are optional types. By default, None is an invalid value for any type, unless a default value of None has been provided in the function definition. Examples:

As a shorthand for Union[T1, None] you can write Optional[T1] ; for example, the above is equivalent to:

A past version of this PEP allowed type checkers to assume an optional type when the default value is None , as in this code:

This would have been treated as equivalent to:

This is no longer the recommended behavior. Type checkers should move towards requiring the optional type to be made explicit.

A singleton instance is frequently used to mark some special condition, in particular in situations where None is also a valid value for a variable. Example:

To allow precise typing in such situations, the user should use the Union type in conjunction with the enum.Enum class provided by the standard library, so that type errors can be caught statically:
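A runnable version of the PEP's `Empty` sentinel example; the branch return values are illustrative additions:

```python
from enum import Enum
from typing import Union

class Empty(Enum):
    token = 0

_empty = Empty.token

def func(x: Union[int, None, Empty] = _empty) -> int:
    if x is _empty:
        return 0        # no argument was given
    elif x is None:
        return -1       # None was given explicitly
    else:
        return x * 2    # a real int was given; x is narrowed to int here
```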

Since the subclasses of Enum cannot be further subclassed, the type of variable x can be statically inferred in all branches of the above example. The same approach is applicable if more than one singleton object is needed: one can use an enumeration that has more than one value:

A special kind of type is Any . Every type is consistent with Any . It can be considered a type that has all values and all methods. Note that Any and builtin type object are completely different.

When the type of a value is object , the type checker will reject almost all operations on it, and assigning it to a variable (or using it as a return value) of a more specialized type is a type error. On the other hand, when a value has type Any , the type checker will allow all operations on it, and a value of type Any can be assigned to a variable (or used as a return value) of a more constrained type.

A function parameter without an annotation is assumed to be annotated with Any . If a generic type is used without specifying type parameters, they are assumed to be Any :

This rule also applies to Tuple : in an annotation context it is equivalent to Tuple[Any, ...] and, in turn, to tuple . As well, a bare Callable in an annotation is equivalent to Callable[..., Any] and, in turn, to collections.abc.Callable :

The typing module provides a special type NoReturn to annotate functions that never return normally. For example, a function that unconditionally raises an exception:
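This is the PEP's own example of a function that never returns normally:

```python
from typing import NoReturn

def stop() -> NoReturn:
    raise RuntimeError('no way')
```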

The NoReturn annotation is used for functions such as sys.exit . Static type checkers will ensure that functions annotated as returning NoReturn truly never return, either implicitly or explicitly:

The checkers will also recognize that the code after calls to such functions is unreachable and will behave accordingly:

The NoReturn type is only valid as a return annotation of functions, and considered an error if it appears in other positions:

Sometimes you want to talk about class objects, in particular class objects that inherit from a given class. This can be spelled as Type[C] where C is a class. To clarify: while C (when used as an annotation) refers to instances of class C , Type[C] refers to subclasses of C . (This is a similar distinction as between object and type .)

For example, suppose we have the following classes:

And suppose we have a function that creates an instance of one of these classes if you pass it a class object:

Without Type[] the best we could do to annotate new_user() would be:

However using Type[] and a type variable with an upper bound we can do much better:
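A runnable sketch following the PEP's `new_user` example; the empty class bodies stand in for real implementations:

```python
from typing import Type, TypeVar

class User: ...
class BasicUser(User): ...
class ProUser(User): ...

U = TypeVar('U', bound=User)

def new_user(user_class: Type[U]) -> U:
    user = user_class()
    # (here we could write the user object to a database)
    return user

joe = new_user(BasicUser)  # a type checker infers joe: BasicUser
```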

Now when we call new_user() with a specific subclass of User a type checker will infer the correct type of the result:

The value corresponding to Type[C] must be an actual class object that’s a subtype of C , not a special form. In other words, in the above example calling e.g. new_user(Union[BasicUser, ProUser]) is rejected by the type checker (in addition to failing at runtime because you can’t instantiate a union).

Note that it is legal to use a union of classes as the parameter for Type[] , as in:

However the actual argument passed in at runtime must still be a concrete class object, e.g. in the above example:

Type[Any] is also supported (see below for its meaning).

Type[T] where T is a type variable is allowed when annotating the first argument of a class method (see the relevant section).

Any other special constructs like Tuple or Callable are not allowed as an argument to Type .

There are some concerns with this feature: for example when new_user() calls user_class() this implies that all subclasses of User must support this in their constructor signature. However this is not unique to Type[] : class methods have similar concerns. A type checker ought to flag violations of such assumptions, but by default constructor calls that match the constructor signature in the indicated base class ( User in the example above) should be allowed. A program containing a complex or extensible class hierarchy might also handle this by using a factory class method. A future revision of this PEP may introduce better ways of dealing with these concerns.

When Type is parameterized it requires exactly one parameter. Plain Type without brackets is equivalent to Type[Any] and this in turn is equivalent to type (the root of Python’s metaclass hierarchy). This equivalence also motivates the name, Type , as opposed to alternatives like Class or SubType , which were proposed while this feature was under discussion; this is similar to the relationship between e.g. List and list .

Regarding the behavior of Type[Any] (or Type or type ), accessing attributes of a variable with this type only provides attributes and methods defined by type (for example, __repr__() and __mro__ ). Such a variable can be called with arbitrary arguments, and the return type is Any .

Type is covariant in its parameter, because Type[Derived] is a subtype of Type[Base] :

In most cases the first argument of class and instance methods does not need to be annotated, and it is assumed to have the type of the containing class for instance methods, and a type object type corresponding to the containing class object for class methods. In addition, the first argument in an instance method can be annotated with a type variable. In this case the return type may use the same type variable, thus making that method a generic function. For example:
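A sketch based on the PEP's `Copyable` example; the `copy` body is an illustrative stand-in for a real copying implementation:

```python
from typing import TypeVar

T = TypeVar('T', bound='Copyable')

class Copyable:
    def copy(self: T) -> T:
        # illustrative: return a fresh instance of the *runtime* class,
        # so subclasses get back their own type
        return type(self)()

class C(Copyable):
    pass

c = C().copy()  # a type checker infers c: C, not merely Copyable
```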

The same applies to class methods using Type[] in an annotation of the first argument:

Note that some type checkers may apply restrictions on this use, such as requiring an appropriate upper bound for the type variable used (see examples).

Type checkers are expected to understand simple version and platform checks, e.g.:

Don’t expect a checker to understand obfuscations like "".join(reversed(sys.platform)) == "xunil" .

Sometimes there’s code that must be seen by a type checker (or other static analysis tools) but should not be executed. For such situations the typing module defines a constant, TYPE_CHECKING , that is considered True during type checking (or other static analysis) but False at runtime. Example:
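This mirrors the PEP's example; `expensive_mod` is a hypothetical module that only the type checker ever imports:

```python
import typing

if typing.TYPE_CHECKING:
    import expensive_mod  # resolved by the type checker, never at runtime

def a_func(arg: 'expensive_mod.SomeClass') -> None:
    a_var = arg  # type: expensive_mod.SomeClass
    ...
```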

(Note that the type annotation must be enclosed in quotes, making it a “forward reference”, to hide the expensive_mod reference from the interpreter runtime. In the # type comment no quotes are needed.)

This approach may also be useful to handle import cycles.

Arbitrary argument lists can as well be type annotated, so that the definition:
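The definition in question, with the PEP's example calls shown inline:

```python
def foo(*args: str, **kwds: int) -> None:
    ...

# All of these calls pass type checking:
foo('a', 'b', 'c')
foo(x=1, y=2)
foo('', z=0)
```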

is acceptable and it means that, e.g., all of the following represent function calls with valid types of arguments:

In the body of function foo , the type of variable args is deduced as Tuple[str, ...] and the type of variable kwds is Dict[str, int] .

In stubs it may be useful to declare an argument as having a default without specifying the actual default value. For example:

What should the default value look like? Any of the options "" , b"" or None fails to satisfy the type constraint.

In such cases the default value may be specified as a literal ellipsis, i.e. the above example is literally what you would write.

Some functions are designed to take their arguments only positionally, and expect their callers never to use the argument’s name to provide that argument by keyword. All arguments with names beginning with __ are assumed to be positional-only, except if their names also end with __ :

The return type of generator functions can be annotated by the generic type Generator[yield_type, send_type, return_type] provided by typing.py module:
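A runnable version of the PEP's `echo_round` generator: it yields ints, accepts floats via send(), and returns a str:

```python
from typing import Generator

def echo_round() -> Generator[int, float, str]:
    res = yield 0
    while res:
        res = yield round(res)
    return 'OK'
```

Driving it with next() and send() shows all three type parameters in action.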

Coroutines introduced in PEP 492 are annotated with the same syntax as ordinary functions. However, the return type annotation corresponds to the type of await expression, not to the coroutine type:
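This is the PEP's `spam` coroutine, with an illustrative driver added so it can run:

```python
import asyncio

async def spam(ignore_me: int) -> str:
    return 'spam!'  # the annotation is the type of `await spam(...)`

async def main() -> None:
    s = await spam(42)  # s has type str, not Coroutine[..., str]
    assert s == 'spam!'

asyncio.run(main())
```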

The typing.py module provides a generic version of ABC collections.abc.Coroutine to specify awaitables that also support send() and throw() methods. The variance and order of type variables correspond to those of Generator , namely Coroutine[T_co, T_contra, V_co] , for example:

The module also provides generic ABCs Awaitable , AsyncIterable , and AsyncIterator for situations where more precise types cannot be specified:

A number of existing or potential use cases for function annotations exist, which are incompatible with type hinting. These may confuse a static type checker. However, since type hinting annotations have no runtime behavior (other than evaluation of the annotation expression and storing annotations in the __annotations__ attribute of the function object), this does not make the program incorrect – it just may cause a type checker to emit spurious warnings or errors.

To mark portions of the program that should not be covered by type hinting, you can use one or more of the following:

  • a # type: ignore comment;
  • a @no_type_check decorator on a class or function;
  • a custom class or function decorator marked with @no_type_check_decorator .

For more details see later sections.

In order for maximal compatibility with offline type checking it may eventually be a good idea to change interfaces that rely on annotations to switch to a different mechanism, for example a decorator. In Python 3.5 there is no pressure to do this, however. See also the longer discussion under Rejected alternatives below.

No first-class syntax support for explicitly marking variables as being of a specific type is added by this PEP. To help with type inference in complex cases, a comment of the following format may be used:

Type comments should be put on the last line of the statement that contains the variable definition. They can also be placed on with statements and for statements, right after the colon.

Examples of type comments on with and for statements:

In stubs it may be useful to declare the existence of a variable without giving it an initial value. This can be done using PEP 526 variable annotation syntax:

The above syntax is acceptable in stubs for all versions of Python. However, in non-stub code for versions of Python 3.5 and earlier there is a special case:

Type checkers should not complain about this (despite the value None not matching the given type), nor should they change the inferred type to Optional[...] (despite the rule that does this for annotated arguments with a default value of None ). The assumption here is that other code will ensure that the variable is given a value of the proper type, and all uses can assume that the variable has the given type.

The # type: ignore comment should be put on the line that the error refers to:

A # type: ignore comment on a line by itself at the top of a file, before any docstrings, imports, or other executable code, silences all errors in the file. Blank lines and other comments, such as shebang lines and coding cookies, may precede the # type: ignore comment.

In some cases, linting tools or other comments may be needed on the same line as a type comment. In these cases, the type comment should be before other comments and linting markers:

# type: ignore # <comment or other marker>

If type hinting proves useful in general, a syntax for typing variables may be provided in a future Python version. ( UPDATE : This syntax was added in Python 3.6 through PEP 526 .)

Occasionally the type checker may need a different kind of hint: the programmer may know that an expression is of a more constrained type than a type checker may be able to infer. For example:
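This is the PEP's `find_first_str` example: the programmer knows `a[index]` is a str, even if the checker cannot prove it:

```python
from typing import List, cast

def find_first_str(a: List[object]) -> str:
    index = next(i for i, x in enumerate(a) if isinstance(x, str))
    # We only get here if there's at least one string in a
    return cast(str, a[index])
```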

Some type checkers may not be able to infer that the type of a[index] is str and only infer object or Any , but we know that (if the code gets to that point) it must be a string. The cast(t, x) call tells the type checker that we are confident that the type of x is t . At runtime a cast always returns the expression unchanged – it does not check the type, and it does not convert or coerce the value.

Casts differ from type comments (see the previous section). When using a type comment, the type checker should still verify that the inferred type is consistent with the stated type. When using a cast, the type checker should blindly believe the programmer. Also, casts can be used in expressions, while type comments only apply to assignments.

There are also situations where a programmer might want to avoid logical errors by creating simple classes. For example:

However, this approach introduces a runtime overhead. To avoid this, typing.py provides a helper function NewType that creates simple unique types with almost zero runtime overhead. For a static type checker Derived = NewType('Derived', Base) is roughly equivalent to a definition:

While at runtime, NewType('Derived', Base) returns a dummy function that simply returns its argument. Type checkers require explicit casts from int where UserId is expected, while implicitly casting from UserId where int is expected. Examples:
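The PEP's `UserId` example, showing both directions of the implicit/explicit cast rule:

```python
from typing import NewType

UserId = NewType('UserId', int)

some_id = UserId(524313)  # at runtime this is just the int 524313
num = UserId(5) + 1       # fine: a UserId is implicitly usable as an int
# By contrast, passing a plain int where UserId is expected
# fails type checking, although it works at runtime.
```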

NewType accepts exactly two arguments: a name for the new unique type, and a base class. The latter should be a proper class (i.e., not a type construct like Union , etc.), or another unique type created by calling NewType . The function returned by NewType accepts only one argument; this is equivalent to supporting only one constructor accepting an instance of the base class (see above). Example:

Both isinstance and issubclass , as well as subclassing, will fail for NewType('Derived', Base) since function objects don’t support these operations.

Stub files are files containing type hints that are only for use by the type checker, not at runtime. There are several use cases for stub files:

  • Extension modules
  • Third-party modules whose authors have not yet added type hints
  • Standard library modules for which type hints have not yet been written
  • Modules that must be compatible with Python 2 and 3
  • Modules that use annotations for other purposes

Stub files have the same syntax as regular Python modules. There is one feature of the typing module that is different in stub files: the @overload decorator described below.

The type checker should only check function signatures in stub files; it is recommended that function bodies in stub files just be a single ellipsis ( ... ).

The type checker should have a configurable search path for stub files. If a stub file is found the type checker should not read the corresponding “real” module.

While stub files are syntactically valid Python modules, they use the .pyi extension to make it possible to maintain stub files in the same directory as the corresponding real module. This also reinforces the notion that no runtime behavior should be expected of stub files.

Additional notes on stub files:

  • Modules and variables imported into the stub are not considered exported from the stub unless the import uses the import ... as ... form or the equivalent from ... import ... as ... form. ( UPDATE: To clarify, the intention here is that only names imported using the form X as X will be exported, i.e. the name before and after as must be the same.)
  • However, as an exception to the previous bullet, all objects imported into a stub using from ... import * are considered exported. (This makes it easier to re-export all objects from a given module that may vary by Python version.)

Just like in normal Python files, submodules automatically become exported attributes of their parent package. For example, if a spam package contains the stub files __init__.pyi and ham.pyi , and __init__.pyi contains a line such as from . import ham or from .ham import Ham , then ham is an exported attribute of spam .

Any identifier not defined in the stub is therefore assumed to be of type Any .

The @overload decorator allows describing functions and methods that support multiple different combinations of argument types. This pattern is used frequently in builtin modules and types. For example, the __getitem__() method of the bytes type can be described as follows:

This description is more precise than would be possible using unions (which cannot express the relationship between the argument and return types):

Another example where @overload comes in handy is the type of the builtin map() function, which takes a different number of arguments depending on the type of the callable:

Note that we could also easily add items to support map(None, ...) :

Uses of the @overload decorator as shown above are suitable for stub files. In regular modules, a series of @overload -decorated definitions must be followed by exactly one non- @overload -decorated definition (for the same function/method). The @overload -decorated definitions are for the benefit of the type checker only, since they will be overwritten by the non- @overload -decorated definition, while the latter is used at runtime but should be ignored by a type checker. At runtime, calling an @overload -decorated function directly will raise NotImplementedError . Here’s an example of a non-stub overload that can’t easily be expressed using a union or a type variable:
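A sketch adapted from the PEP's `utf8` example (the original is written in Python-2 terms with `unicode`; this version assumes Python 3, so the `str` overload stands in for it):

```python
from typing import Optional, overload

@overload
def utf8(value: None) -> None: ...
@overload
def utf8(value: str) -> bytes: ...
def utf8(value: Optional[str]) -> Optional[bytes]:
    # The single runtime implementation follows the overloaded stubs.
    return None if value is None else value.encode('utf-8')
```

Neither a union (`Optional[str] -> Optional[bytes]`) nor a type variable can express that a `None` argument yields `None` while a `str` argument yields `bytes`.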

NOTE: While it would be possible to provide a multiple dispatch implementation using this syntax, its implementation would require using sys._getframe() , which is frowned upon. Also, designing and implementing an efficient multiple dispatch mechanism is hard, which is why previous attempts were abandoned in favor of functools.singledispatch() . (See PEP 443 , especially its section “Alternative approaches”.) In the future we may come up with a satisfactory multiple dispatch design, but we don’t want such a design to be constrained by the overloading syntax defined for type hints in stub files. It is also possible that both features will develop independent from each other (since overloading in the type checker has different use cases and requirements than multiple dispatch at runtime – e.g. the latter is unlikely to support generic types).

A constrained TypeVar type can often be used instead of using the @overload decorator. For example, the definitions of concat1 and concat2 in this stub file are equivalent:

Some functions, such as map or bytes.__getitem__ above, can’t be represented precisely using type variables. However, unlike @overload , type variables can also be used outside stub files. We recommend that @overload is only used in cases where a type variable is not sufficient, due to its special stub-only status.

Another important difference between type variables such as AnyStr and using @overload is that the former can also be used to define constraints for generic class type parameters. For example, the type parameter of the generic class typing.IO is constrained (only IO[str] , IO[bytes] and IO[Any] are valid):

The easiest form of stub file storage and distribution is to put them alongside Python modules in the same directory. This makes them easy to find by both programmers and the tools. However, since package maintainers are free not to add type hinting to their packages, third-party stubs installable by pip from PyPI are also supported. In this case we have to consider three issues: naming, versioning, installation path.

This PEP does not provide a recommendation on a naming scheme that should be used for third-party stub file packages. Discoverability will hopefully be based on package popularity, like with Django packages for example.

Third-party stubs have to be versioned using the lowest version of the source package that is compatible. Example: FooPackage has versions 1.0, 1.1, 1.2, 1.3, 2.0, 2.1, 2.2. There are API changes in versions 1.1, 2.0 and 2.2. The stub file package maintainer is free to release stubs for all versions but at least 1.0, 1.1, 2.0 and 2.2 are needed to enable the end user to type check all versions. This is because the user knows that the closest lower or equal version of stubs is compatible. In the provided example, for FooPackage 1.3 the user would choose stubs version 1.1.

Note that if the user decides to use the “latest” available source package, using the “latest” stub files should generally also work if they’re updated often.

Third-party stub packages can use any location for stub storage. Type checkers should search for them using PYTHONPATH. A default fallback directory that is always checked is shared/typehints/pythonX.Y/ (for some PythonX.Y as determined by the type checker, not just the installed version). Since there can only be one package installed for a given Python version per environment, no additional versioning is performed under that directory (just like bare directory installs by pip in site-packages). Stub file package authors might use the following snippet in setup.py :

( UPDATE: As of June 2018 the recommended way to distribute type hints for third-party packages has changed – in addition to typeshed (see the next section) there is now a standard for distributing type hints, PEP 561 . It supports separately installable packages containing stubs, stub files included in the same distribution as the executable code of a package, and inline type hints, the latter two options enabled by including a file named py.typed in the package.)

There is a shared repository where useful stubs are being collected. Policies regarding the stubs collected here will be decided separately and reported in the repo’s documentation. Note that stubs for a given package will not be included here if the package owners have specifically requested that they be omitted.

No syntax for listing explicitly raised exceptions is proposed. Currently the only known use case for this feature is documentational, in which case the recommendation is to put this information in a docstring.

To open the usage of static type checking to Python 3.5 as well as older versions, a uniform namespace is required. For this purpose, a new module in the standard library is introduced called typing .

It defines the fundamental building blocks for constructing types (e.g. Any ), types representing generic variants of builtin collections (e.g. List ), types representing generic collection ABCs (e.g. Sequence ), and a small collection of convenience definitions.

Note that special type constructs, such as Any , Union , and type variables defined using TypeVar are only supported in the type annotation context, and Generic may only be used as a base class. All of these (except for unparameterized generics) will raise TypeError if they appear in isinstance or issubclass .

Fundamental building blocks:

  • Any, used as def get(key: str) -> Any: ...
  • Union, used as Union[Type1, Type2, Type3]
  • Callable, used as Callable[[Arg1Type, Arg2Type], ReturnType]
  • Tuple, used by listing the element types, for example Tuple[int, int, str] . The empty tuple can be typed as Tuple[()] . Arbitrary-length homogeneous tuples can be expressed using one type and ellipsis, for example Tuple[int, ...] . (The ... here are part of the syntax, a literal ellipsis.)
  • TypeVar, used as X = TypeVar('X', Type1, Type2, Type3) or simply Y = TypeVar('Y') (see above for more details)
  • Generic, used to create user-defined generic classes
  • Type, used to annotate class objects

Generic variants of builtin collections:

  • Dict, used as Dict[key_type, value_type]
  • DefaultDict, used as DefaultDict[key_type, value_type] , a generic variant of collections.defaultdict
  • List, used as List[element_type]
  • Set, used as Set[element_type] . See remark for AbstractSet below.
  • FrozenSet, used as FrozenSet[element_type]

Note: Dict , DefaultDict , List , Set and FrozenSet are mainly useful for annotating return values. For arguments, prefer the abstract collection types defined below, e.g. Mapping , Sequence or AbstractSet .

Generic variants of container ABCs (and a few non-containers):

  • AsyncIterable
  • AsyncIterator
  • Callable (see above, listed here for completeness)
  • ContextManager
  • Generator, used as Generator[yield_type, send_type, return_type] . This represents the return value of generator functions. It is a subtype of Iterable and it has additional type variables for the type accepted by the send() method (it is contravariant in this variable – a generator that accepts sending it Employee instances is valid in a context where a generator is required that accepts sending it Manager instances) and the return type of the generator.
  • Hashable (not generic, but present for completeness)
  • MappingView
  • MutableMapping
  • MutableSequence
  • Set, renamed to AbstractSet . This name change was required because Set in the typing module means set() with generics.
  • Sized (not generic, but present for completeness)

A few one-off types are defined that test for single special methods (similar to Hashable or Sized ):

  • Reversible, to test for __reversed__
  • SupportsAbs, to test for __abs__
  • SupportsComplex, to test for __complex__
  • SupportsFloat, to test for __float__
  • SupportsInt, to test for __int__
  • SupportsRound, to test for __round__
  • SupportsBytes, to test for __bytes__

Convenience definitions:

  • Optional, defined by Optional[t] == Union[t, None]
  • Text, a simple alias for str in Python 3, for unicode in Python 2
  • AnyStr, defined as TypeVar('AnyStr', Text, bytes)
  • NamedTuple, used as NamedTuple(type_name, [(field_name, field_type), ...]) and equivalent to collections.namedtuple(type_name, [field_name, ...]) . This is useful to declare the types of the fields of a named tuple type.
  • NewType, used to create unique types with little runtime overhead, e.g. UserId = NewType('UserId', int)
  • cast(), described earlier
  • @no_type_check, a decorator to disable type checking per class or function (see below)
  • @no_type_check_decorator, a decorator to create your own decorators with the same meaning as @no_type_check (see below)
  • @type_check_only, a decorator only available during type checking for use in stub files (see above); marks a class or function as unavailable during runtime
  • @overload, described earlier
  • get_type_hints(), a utility function to retrieve the type hints from a function or method. Given a function or method object, it returns a dict with the same format as __annotations__ , but evaluating forward references (which are given as string literals) as expressions in the context of the original function or method definition.
  • TYPE_CHECKING, False at runtime but True to type checkers
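A sketch of NewType and get_type_hints() from this list in use (the get_user_name function is illustrative):

```python
from typing import NewType, get_type_hints

UserId = NewType('UserId', int)  # distinct type to checkers, plain int at runtime

def get_user_name(user_id: UserId) -> str:
    return f"user-{user_id}"

uid = UserId(42)  # no runtime wrapping: uid is simply the int 42

# get_type_hints() returns the evaluated annotations as a dict.
hints = get_type_hints(get_user_name)
```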

I/O related types:

  • IO (generic over AnyStr )
  • BinaryIO (a simple subtype of IO[bytes] )
  • TextIO (a simple subtype of IO[str] )

Types related to regular expressions and the re module:

  • Match and Pattern, types of re.match() and re.compile() results (generic over AnyStr )

Some tools may want to support type annotations in code that must be compatible with Python 2.7. For this purpose this PEP has a suggested (but not mandatory) extension where function annotations are placed in a # type: comment. Such a comment must be placed immediately following the function header (before the docstring). An example: the following Python 3 code:

is equivalent to the following:
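The original code samples are not reproduced in this extract; a minimal sketch of the equivalence, using a hypothetical send_email function:

```python
# Python 3 annotation syntax:
def send_email(address: str, body: str = "") -> bool:
    return bool(address)

# Equivalent Python 2-compatible form: the type comment immediately
# follows the function header, before the docstring.
def send_email2(address, body=""):
    # type: (str, str) -> bool
    return bool(address)
```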

Note that for methods, no type is needed for self .

For an argument-less method it would look like this:

Sometimes you want to specify the return type for a function or method without (yet) specifying the argument types. To support this explicitly, the argument list may be replaced with an ellipsis. Example:

Sometimes you have a long list of parameters and specifying their types in a single # type: comment would be awkward. To this end you may list the arguments one per line and add a # type: comment per line after an argument’s associated comma, if any. To specify the return type use the ellipsis syntax. Specifying the return type is not mandatory and not every argument needs to be given a type. A line with a # type: comment should contain exactly one argument. The type comment for the last argument (if any) should precede the close parenthesis. Example:
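The original example is omitted here; a sketch of the per-line comment style, with a hypothetical configure function (the return type uses the ellipsis syntax):

```python
def configure(
    host,          # type: str
    port,          # type: int
    retries=3,     # type: int
    debug=False,   # type: bool
):
    # type: (...) -> dict
    return {'host': host, 'port': port, 'retries': retries, 'debug': debug}
```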

  • Tools that support this syntax should support it regardless of the Python version being checked. This is necessary in order to support code that straddles Python 2 and Python 3.
  • It is not allowed for an argument or return value to have both a type annotation and a type comment.
  • When using the short form (e.g. # type: (str, int) -> None ) every argument must be accounted for, except the first argument of instance and class methods (those are usually omitted, but it’s allowed to include them).
  • The return type is mandatory for the short form. If in Python 3 you would omit some argument or the return type, the Python 2 notation should use Any .
  • When using the short form, for *args and **kwds , put 1 or 2 stars in front of the corresponding type annotation. (As with Python 3 annotations, the annotation here denotes the type of the individual argument values, not of the tuple/dict that you receive as the special argument value args or kwds .)
  • Like other type comments, any names used in the annotations must be imported or defined by the module containing the annotation.
  • When using the short form, the entire annotation must be one line.
  • The short form may also occur on the same line as the close parenthesis, e.g.:

        def add(a, b):  # type: (int, int) -> int
            return a + b

  • Misplaced type comments will be flagged as errors by a type checker. If necessary, such comments could be commented twice. For example:

        def f():
            '''Docstring'''
            # type: () -> None  # Error!

        def g():
            '''Docstring'''
            # # type: () -> None  # This is OK

When checking Python 2.7 code, type checkers should treat the int and long types as equivalent. For parameters typed as Text , arguments of type str as well as unicode should be acceptable.

Rejected Alternatives

During discussion of earlier drafts of this PEP, various objections were raised and alternatives were proposed. We discuss some of these here and explain why we reject them.

Several main objections were raised.

Most people are familiar with the use of angular brackets (e.g. List<int> ) in languages like C++, Java, C# and Swift to express the parameterization of generic types. The problem with these is that they are really hard to parse, especially for a simple-minded parser like Python. In most languages the ambiguities are usually dealt with by only allowing angular brackets in specific syntactic positions, where general expressions aren’t allowed. (And also by using very powerful parsing techniques that can backtrack over an arbitrary section of code.)

But in Python, we’d like type expressions to be (syntactically) the same as other expressions, so that we can use e.g. variable assignment to create type aliases. Consider this simple type expression:

From the Python parser’s perspective, the expression begins with the same four tokens (NAME, LESS, NAME, GREATER) as a chained comparison:

We can even make up an example that could be parsed both ways:

Assuming we had angular brackets in the language, this could be interpreted as either of the following two:

It would surely be possible to come up with a rule to disambiguate such cases, but to most users the rules would feel arbitrary and complex. It would also require us to dramatically change the CPython parser (and every other parser for Python). It should be noted that Python’s current parser is intentionally “dumb” – a simple grammar is easier for users to reason about.

For all these reasons, square brackets (e.g. List[int] ) are (and have long been) the preferred syntax for generic type parameters. They can be implemented by defining the __getitem__() method on the metaclass, and no new syntax is required at all. This option works in all recent versions of Python (starting with Python 2.2). Python is not alone in this syntactic choice – generic classes in Scala also use square brackets.

One line of argument points out that PEP 3107 explicitly supports the use of arbitrary expressions in function annotations. The new proposal is then considered incompatible with the specification of PEP 3107.

Our response to this is that, first of all, the current proposal does not introduce any direct incompatibilities, so programs using annotations in Python 3.4 will still work correctly and without prejudice in Python 3.5.

We do hope that type hints will eventually become the sole use for annotations, but this will require additional discussion and a deprecation period after the initial roll-out of the typing module with Python 3.5. The current PEP will have provisional status (see PEP 411 ) until Python 3.6 is released. The fastest conceivable scheme would introduce silent deprecation of non-type-hint annotations in 3.6, full deprecation in 3.7, and declare type hints as the only allowed use of annotations in Python 3.8. This should give authors of packages that use annotations plenty of time to devise another approach, even if type hints become an overnight success.

( UPDATE: As of fall 2017, the timeline for the end of provisional status for this PEP and for the typing.py module has changed, and so has the deprecation schedule for other uses of annotations. For the updated schedule see PEP 563 .)

Another possible outcome would be that type hints will eventually become the default meaning for annotations, but that there will always remain an option to disable them. For this purpose the current proposal defines a decorator @no_type_check which disables the default interpretation of annotations as type hints in a given class or function. It also defines a meta-decorator @no_type_check_decorator which can be used to decorate a decorator (!), causing annotations in any function or class decorated with the latter to be ignored by the type checker.

There are also # type: ignore comments, and static checkers should support configuration options to disable type checking in selected packages.

Despite all these options, proposals have been circulated to allow type hints and other forms of annotations to coexist for individual arguments. One proposal suggests that if an annotation for a given argument is a dictionary literal, each key represents a different form of annotation, and the key 'type' would be used for type hints. The problem with this idea and its variants is that the notation becomes very “noisy” and hard to read. Also, in most cases where existing libraries use annotations, there would be little need to combine them with type hints. So the simpler approach of selectively disabling type hints appears sufficient.

The current proposal is admittedly sub-optimal when type hints must contain forward references. Python requires all names to be defined by the time they are used. Apart from circular imports this is rarely a problem: “use” here means “look up at runtime”, and with most “forward” references there is no problem in ensuring that a name is defined before the function using it is called.

The problem with type hints is that annotations (per PEP 3107 , and similar to default values) are evaluated at the time a function is defined, and thus any names used in an annotation must be already defined when the function is being defined. A common scenario is a class definition whose methods need to reference the class itself in their annotations. (More generally, it can also occur with mutually recursive classes.) This is natural for container types, for example:

As written this will not work, because of the peculiarity in Python that class names become defined once the entire body of the class has been executed. Our solution, which isn’t particularly elegant, but gets the job done, is to allow using string literals in annotations. Most of the time you won’t have to use this though – most uses of type hints are expected to reference builtin types or types defined in other modules.
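A minimal sketch of the string-literal workaround (the class and method names are illustrative): writing the class name as a string defers the lookup until after the class is fully defined.

```python
from typing import Optional

class Node:
    # 'Node' is written as a string literal: the name Node is not yet
    # bound while the class body is executing, so a bare Node would
    # raise NameError here.
    def __init__(self, value: int, left: Optional['Node'] = None) -> None:
        self.value = value
        self.left = left

    def insert_left(self, value: int) -> 'Node':
        self.left = Node(value)
        return self.left
```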

A counterproposal would change the semantics of type hints so they aren’t evaluated at runtime at all (after all, type checking happens off-line, so why would type hints need to be evaluated at runtime at all). This of course would run afoul of backwards compatibility, since the Python interpreter doesn’t actually know whether a particular annotation is meant to be a type hint or something else.

A compromise is possible where a __future__ import could enable turning all annotations in a given module into string literals, as follows:
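The omitted example can be sketched with the form this import eventually took in PEP 563, from __future__ import annotations, under which annotations are stored as strings rather than evaluated at definition time:

```python
from __future__ import annotations

# NotYetDefined is never evaluated, so this does not raise NameError:
def greet(name: NotYetDefined) -> str:
    return f"hello {name}"

# With the future import, the annotations are plain strings:
ann = greet.__annotations__
```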

Such a __future__ import statement may be proposed in a separate PEP.

( UPDATE: That __future__ import statement and its consequences are discussed in PEP 563 .)

A few creative souls have tried to invent solutions for this problem. For example, it was proposed to use a double colon ( :: ) for type hints, solving two problems at once: disambiguating between type hints and other annotations, and changing the semantics to preclude runtime evaluation. There are several things wrong with this idea, however.

  • It’s ugly. The single colon in Python has many uses, and all of them look familiar because they resemble the use of the colon in English text. This is a general rule of thumb by which Python abides for most forms of punctuation; the exceptions are typically well known from other programming languages. But this use of :: is unheard of in English, and in other languages (e.g. C++) it is used as a scoping operator, which is a very different beast. In contrast, the single colon for type hints reads naturally – and no wonder, since it was carefully designed for this purpose ( the idea long predates PEP 3107 ). It is also used in the same fashion in other languages from Pascal to Swift.
  • What would you do for return type annotations?
  • Making type hints available at runtime allows runtime type checkers to be built on top of type hints.
  • It catches mistakes even when the type checker is not run. Since it is a separate program, users may choose not to run it (or even install it), but might still want to use type hints as a concise form of documentation. Broken type hints are no use even for documentation.
  • Because it’s new syntax, using the double colon for type hints would limit them to code that works with Python 3.5 only. By using existing syntax, the current proposal can easily work for older versions of Python 3. (And in fact mypy supports Python 3.2 and newer.)
  • If type hints become successful we may well decide to add new syntax in the future to declare the type for variables, for example var age: int = 42 . If we were to use a double colon for argument type hints, for consistency we’d have to use the same convention for future syntax, perpetuating the ugliness.

A few other forms of alternative syntax have been proposed, e.g. the introduction of a where keyword, and Cobra-inspired requires clauses. But these all share a problem with the double colon: they won’t work for earlier versions of Python 3. The same would apply to a new __future__ import.

The ideas put forward include:

  • A decorator, e.g. @typehints(name=str, returns=str) . This could work, but it’s pretty verbose (an extra line, and the argument names must be repeated), and a far cry in elegance from the PEP 3107 notation.
  • Stub files. We do want stub files, but they are primarily useful for adding type hints to existing code that doesn’t lend itself to adding type hints, e.g. 3rd party packages, code that needs to support both Python 2 and Python 3, and especially extension modules. For most situations, having the annotations in line with the function definitions makes them much more useful.
  • Docstrings. There is an existing convention for docstrings, based on the Sphinx notation ( :type arg1: description ). This is pretty verbose (an extra line per parameter), and not very elegant. We could also make up something new, but the annotation syntax is hard to beat (because it was designed for this very purpose).

It’s also been proposed to simply wait another release. But what problem would that solve? It would just be procrastination.

A live draft for this PEP lives on GitHub . There is also an issue tracker , where much of the technical discussion takes place.

The draft on GitHub is updated regularly in small increments. The official PEPs repo is (usually) only updated when a new draft is posted to python-dev.

This document could not be completed without valuable input, encouragement and advice from Jim Baker, Jeremy Siek, Michael Matson Vitousek, Andrey Vlasovskikh, Radomir Dopieralski, Peter Ludemann, and the BDFL-Delegate, Mark Shannon.

Influences include existing languages, libraries and frameworks mentioned in PEP 482 . Many thanks to their creators, in alphabetical order: Stefan Behnel, William Edwards, Greg Ewing, Larry Hastings, Anders Hejlsberg, Alok Menghrajani, Travis E. Oliphant, Joe Pamer, Raoul-Gabriel Urma, and Julien Verlaguet.

This document has been placed in the public domain.

Source: https://github.com/python/peps/blob/main/peps/pep-0484.rst

Last modified: 2023-09-09 17:39:29 GMT


Type annotations ¶

The meaning of annotations ¶

The type system leverages PEP 3107 -style annotations with a number of extensions described in sections below. In its basic form, type hinting is used by filling function annotation slots with classes:
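The referenced example is omitted in this extract; given the description that follows (a name argument typed str and a str return type), it has this shape:

```python
def greeting(name: str) -> str:
    return 'Hello ' + name
```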

This states that the expected type of the name argument is str . Analogously, the expected return type is str .

Expressions whose type is assignable to a specific argument type are also accepted for that argument. Similarly, an expression whose type is assignable to the annotated return type can be returned from the function.

Any function without annotations can be treated as having Any annotations on all arguments and the return type. Type checkers may also optionally infer more precise types for missing annotations.

Type checkers may choose to entirely ignore (not type check) the bodies of functions with no annotations, but this behavior is not required.

It is recommended but not required that checked functions have annotations for all arguments and the return type. For a checked function, the default annotation for arguments and for the return type is Any . An exception is the first argument of instance and class methods. If it is not annotated, then it is assumed to have the type of the containing class for instance methods, and a type object type corresponding to the containing class object for class methods. For example, in class A the first argument of an instance method has the implicit type A . In a class method, the precise type of the first argument cannot be represented using the available type notation.

(Note that the return type of __init__ ought to be annotated with -> None . The reason for this is subtle. If __init__ assumed a return annotation of -> None , would that mean that an argument-less, un-annotated __init__ method should still be type-checked? Rather than leaving this ambiguous or introducing an exception to the exception, we simply say that __init__ ought to have a return annotation; the default behavior is thus the same as for other methods.)

A type checker is expected to check the body of a checked function for consistency with the given annotations. The annotations may also be used to check correctness of calls appearing in other checked functions.

Type checkers are expected to attempt to infer as much information as necessary. The minimum requirement is to handle the builtin decorators @property , @staticmethod and @classmethod .

Type and annotation expressions ¶

The terms type expression and annotation expression denote specific subsets of Python expressions that are used in the type system. All type expressions are also annotation expressions, but not all annotation expressions are type expressions.

A type expression is any expression that validly expresses a type. Type expressions are always acceptable in annotations and also in various other places. Specifically, type expressions are used in the following locations:

In a type annotation (always as part of an annotation expression)

The first argument to cast()

The second argument to assert_type()

The bounds and constraints of a TypeVar (whether created through the old syntax or the native syntax in Python 3.12)

The definition of a type alias (whether created through the type statement, the old assignment syntax, or the TypeAliasType constructor)

The type arguments of a generic class (which may appear in a base class or in a constructor call)

The definitions of fields in the functional forms for creating TypedDict and NamedTuple types

The base type in the definition of a NewType

An annotation expression is an expression that is acceptable to use in an annotation context (a function parameter annotation, function return annotation, or variable annotation). Generally, an annotation expression is a type expression, optionally surrounded by one or more type qualifiers or by Annotated . Each type qualifier is valid only in some contexts. Note that while annotation expressions are the only expressions valid as type annotations in the type system, the Python language itself makes no such restriction: any expression is allowed.

Annotations must be valid expressions that evaluate without raising exceptions at the time the function is defined (but see String annotations ).

The following grammar describes the allowed elements of type and annotation expressions:

The grammar assumes the code has already been parsed as Python code, and loosely follows the structure of the AST. Syntactic details like comments and whitespace are ignored.

<Name> refers to a special form . Most special forms must be imported from typing or typing_extensions , except for None , InitVar , type , and tuple . The latter two have aliases in typing : typing.Type and typing.Tuple . InitVar must be imported from dataclasses . Callable may be imported from either typing or collections.abc . Special forms may be aliased (e.g., from typing import Literal as L ), and they may be referred to by a qualified name (e.g., typing.Literal ). There are other special forms that are not acceptable in any annotation or type expression, including Generic , Protocol , and TypedDict .

Any leaf denoted as name may also be a qualified name (i.e., module '.' name or package '.' module '.' name , with any level of nesting).

Comments in parentheses denote additional restrictions not expressed in the grammar, or brief descriptions of the meaning of a construct.

String annotations ¶

When a type hint cannot be evaluated at runtime, that definition may be expressed as a string literal, to be resolved later.

A situation where this occurs commonly is the definition of a container class, where the class being defined occurs in the signature of some of the methods. For example, the following code (the start of a simple binary tree implementation) does not work:

To address this, we write:
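The omitted snippets can be sketched as follows (the Tree class is illustrative). Without string annotations the class fails to define; with them it works:

```python
from typing import Optional

# This version would fail at class-creation time with
# NameError: name 'Tree' is not defined:
#
#   class Tree:
#       def leaf(self, value: int) -> Tree: ...
#
# Writing the class name as a string literal defers the lookup:
class Tree:
    def __init__(self, value: int,
                 left: Optional['Tree'] = None,
                 right: Optional['Tree'] = None) -> None:
        self.value = value
        self.left = left
        self.right = right

    def leaf(self, value: int) -> 'Tree':
        return Tree(value)
```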

The string literal should contain a valid Python expression (i.e., compile(lit, '', 'eval') should be a valid code object) and it should evaluate without errors once the module has been fully loaded. The local and global namespace in which it is evaluated should be the same namespaces in which default arguments to the same function would be evaluated.

Moreover, the expression should be parseable as a valid type hint, i.e., it is constrained by the rules from the expression grammar .

If a triple quote is used, the string should be parsed as though it is implicitly surrounded by parentheses. This allows newline characters to be used within the string literal:

It is allowable to use string literals as part of a type hint, for example:

A common use for forward references is when e.g. Django models are needed in the signatures. Typically, each model is in a separate file, and has methods taking arguments whose type involves other models. Because of the way circular imports work in Python, it is often not possible to import all the needed models directly:

Assuming main is imported first, this will fail with an ImportError at the line from models.a import A in models/b.py, which is being imported from models/a.py before class A has been defined. The solution is to switch to module-only imports and reference the models by their module.class name:

Annotating generator functions and coroutines ¶

The return type of generator functions can be annotated by the generic type Generator[yield_type, send_type, return_type] provided by typing.py module:
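The omitted example can be sketched as follows (the accumulate function is hypothetical): the three parameters are the yielded type, the type accepted by send(), and the type of the final return value.

```python
from typing import Generator

def accumulate(limit: int) -> Generator[int, int, str]:
    """Yields running totals of sent ints; returns a summary string."""
    total = 0
    while total < limit:
        received = yield total  # yield_type=int, send_type=int
        total += received
    return f"done at {total}"   # return_type=str
```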

Coroutines introduced in PEP 492 are annotated with the same syntax as ordinary functions. However, the return type annotation corresponds to the type of await expression, not to the coroutine type:

The generic ABC collections.abc.Coroutine can be used to specify awaitables that also support send() and throw() methods. The variance and order of type variables correspond to those of Generator , namely Coroutine[T_co, T_contra, V_co] , for example:

The generic ABCs Awaitable , AsyncIterable , and AsyncIterator can be used for situations where more precise types cannot be specified:

Annotating instance and class methods ¶

In most cases the first argument of class and instance methods does not need to be annotated, and it is assumed to have the type of the containing class for instance methods, and a type object type corresponding to the containing class object for class methods. In addition, the first argument in an instance method can be annotated with a type variable. In this case the return type may use the same type variable, thus making that method a generic function. For example:

The same applies to class methods using type[] in an annotation of the first argument:
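A combined sketch of both patterns (class names are illustrative): annotating self with a type variable preserves the subclass type through the return value, and the same idea applies to cls in class methods.

```python
from typing import Type, TypeVar

C = TypeVar('C', bound='Shape')  # upper bound, as some checkers require

class Shape:
    def __init__(self, label: str = '') -> None:
        self.label = label

    def set_label(self: C, label: str) -> C:
        # Returning self typed as C keeps the subclass type in call chains.
        self.label = label
        return self

    @classmethod
    def default(cls: Type[C]) -> C:
        return cls('default')

class Circle(Shape):
    pass
```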

Note that some type checkers may apply restrictions on this use, such as requiring an appropriate upper bound for the type variable used (see examples).

Table of Contents

  • The meaning of annotations
  • Type and annotation expressions
  • String annotations
  • Annotating generator functions and coroutines
  • Annotating instance and class methods


A static type analyzer for Python code


View the Project on GitHub google/pytype


Type Annotations

Introduction, annotations dictionary, forward references, complex annotations, conversion to abstract types, tracking local operations.

In PEP 484 , Python added syntactic support for type annotations (also referred to as “type hints”). These are not enforced or applied by the Python interpreter, but are instead intended as a combination of documentation and assertions that can be checked by third-party tools like pytype. This blog post is a good quick overview of how type hints fit into the Python ecosystem in general.

A significant difference between annotations and type comments is that annotations are parsed and compiled by the interpreter, even if they have no semantic meaning in the runtime code. From pytype’s point of view, this means that we can process them as part of the regular bytecode VM (by contrast, type comments need a separate system to parse and integrate them into the main code). For example, the following code:

compiles to
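The original snippet and its bytecode listing are omitted from this extract; a sketch of the shape they take, and of the __annotations__ dict the opcodes populate (the exact opcode stream varies by Python version):

```python
import dis

# An annotated assignment at module scope:
source = "x: int = 3"
code = compile(source, "<demo>", "exec")
ops = [ins.opname for ins in dis.get_instructions(code)]

ns = {}
exec(code, ns)

# At class scope, annotations land in the class's __annotations__ dict:
class Config:
    port: int = 8080

anns = Config.__annotations__
```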

Python’s SETUP_ANNOTATIONS and STORE_ANNOTATION opcodes respectively create and populate an __annotations__ dict in locals (for variables in functions) or in __dict__ (for annotated class members). Pytype similarly creates a corresponding dictionary, abstract.AnnotationsDict , which it stores in the equivalent locals or class member dictionary.

The annotations dict is updated via the vm._update_annotations_dict() method, which is called from two entry points:

vm._record_local() records a type annotation on a local variable. The AnnotationsDict is retrieved via self.current_annotated_locals , which gets the AnnotationsDict for the current frame.

vm._apply_annotation() is called with an explicit AnnotationsDict, which, in turn, is either the current_annotated_locals or the annotations dict for a class object, retrieved via

A class’s AnnotationsDict is also updated directly in byte_STORE_ATTRIBUTE , handling the case where we have an annotation on an attribute assignment that has not already been recorded as a class-level attribute annotation.

Converting variable annotations to types

As a first step, type annotations on a variable are converted to pytype’s abstract types, and then stored as the type of that variable in much the same way assignments are. Specifically, x = Foo() and x: Foo should both lead to the same internal type being retrieved for x when it is referred to later in the code.

Python currently supports two kinds of annotations:

    x: Foo

where Foo is treated as a symbol that is looked up in the current namespace, and then stored under x in the __annotations__ dictionary, and

    x: 'Foo'

where Foo is simply stored as a string. The latter case is useful because it lets us annotate variables with types that have not been defined yet; annotations of this kind are variously referred to as “string annotations”, “forward references” or “late annotations”.
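Both kinds can be observed at runtime via __annotations__; a sketch using class-level annotations (the names are illustrative):

```python
class Example:
    x: int = 0      # int is looked up; the class object is stored
    y: 'LaterType'  # stored as the string 'LaterType'; need not exist yet

anns = Example.__annotations__
```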

From version 3.7 onward, Python includes a switch to implicitly treat all annotations as strings. You can set it by adding from __future__ import annotations at the top of the module (see PEP 563).

While an annotation like x: Foo corresponds directly to the runtime type class Foo , in general the type annotation system supports more complex types that do not correspond directly to a runtime python type.

Some examples:

  • Parametrised types, e.g. List[int] is the type of lists of integers, and Dict[K, V] is the (generic) type of dictionaries whose keys and values have types K and V respectively.
  • Union types, e.g. int | str is the type of variables that could contain either an int or a str for the purposes of static type analysis. At runtime, they will contain a single concrete type.

NOTE: Technically, these types do correspond to runtime classes defined in typing.py , but that is just an implementation detail to avoid compiler errors when using them. They are meant to be used by type checkers, not by python code.

Python’s general syntax for complex annotations is

    Base[param1, param2, ...]

where the base type Base is a Python class subclassing typing.Generic , and the params are types (possibly parametrised themselves) or lists of types.

The main annotation processing code lives in the annotation_utils.AnnotationUtils class (instantiated as a member of the VM). This code has several entry points, for various annotation contexts, but the bulk of the conversion work is done in the internal method _process_one_annotation() .

Unprocessed annotations are represented as abstract.AnnotationClass (including the derived class abstract.AnnotationContainer ) for immediate annotations, and abstract.LateAnnotation for late annotations. There is also a mixin class, mixin.NestedAnnotation , which has some common code for dealing with inner types (the types within the [] that the base type is parametrised on).

NOTE: The two types can be mixed; an immediate annotation can be parametrised with a late annotation, e.g. x: List['A'] , which will eventually be converted to x = List[A] once the name 'A' can be resolved.

_process_one_annotation() is essentially a large switch statement dealing with various kinds of annotations, and calling itself recursively to deal with nested annotations. The return value of _process_one_annotation is an abstract.* object that can be applied as the python type of a variable.

The various public methods in AnnotationUtils cover different contexts in which we can encounter variable annotations while processing bytecode; search for annotation_utils in vm.py to see where each one is used.

There is a class of Python code that does read type annotations at runtime, for metaprogramming reasons. The commonest example is dataclasses in the standard library (from Python 3.7 onwards); for example the following will generate a class with an appropriate __init__ function:
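The omitted example can be sketched as follows (field names are illustrative): the @dataclass decorator reads the class's annotations at runtime to generate __init__ , __eq__ , and __repr__ .

```python
from dataclasses import dataclass

@dataclass
class Point:
    x: int
    y: int = 0  # annotated field with a default value

p = Point(3, 4)  # generated __init__ takes the annotated fields in order
```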

Pytype has some custom overlay code to replicate the effects of this metaprogramming, but it needs an explicit record of variable annotations, possibly in the order in which they appear in the code, to handle the general case. This is distinct from the regular use of annotations to assign types to variables, and the information we need is not preserved by the regular pytype type-tracking machinery.

To support this use case, we have a separate record of all assignments and annotations to local variables, stored in a vm.local_ops dictionary and indexed by the current frame. See vm._record_local() for how this dictionary is updated, and get_class_locals() in overlays/classgen.py for an example of how it is used along with vm.annotated_locals to recover a class’s variable annotations.

Type Hinting and Annotations in Python


Python is a dynamically typed language: we don’t have to explicitly declare the data types of variables or functions. The Python interpreter assigns a type to each variable at runtime, based on the variable’s value at that time. Statically typed languages like Java, C, or C++, by contrast, require us to declare a variable’s type at the time of declaration, so variable types are known at compile time.

Python 3.5 introduced type hints, specified in PEP 484 and PEP 483 . This addition to the language helps structure our code and makes it feel more like a statically typed language, which helps avoid bugs, though at the cost of more verbose code.

However, the Python runtime does not enforce function and variable type annotations. They can be used by third-party tools such as type checkers, IDEs, linters, etc.

Also read: The Magic Methods in Python

Type Checking, Type Hints, and Code Compilation

At first, type hinting was driven by external third-party libraries, such as the static type checker mypy; many of mypy's ideas were later brought into canonical Python and integrated directly into the language.

Now, the thing with type hints is that they do not modify how Python itself runs. Type hints are compiled along with the rest of the code, but they do not affect how Python executes your code.

Let’s go through an example and get an overview by assigning type hints to a function.
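The example code is missing here; a sketch of what it likely looked like (the function name multiply is assumed):

```python
# A function whose parameters carry int type hints.
def multiply(num1: int, num2: int):
    return num1 * num2

print(multiply(3, 5))  # 15
```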

Explanation:

In the function declared above, we are assigning built-in data types to the arguments. It’s a normal function, but the syntax is a bit different: each argument is followed by a colon and a data type (num1: int, num2: int) .

This function takes two arguments, num1 and num2 ; that’s what Python sees when it runs the code. It is expecting two variables. Python would have been just fine even without the type hints saying that num1 and num2 should be integers .

So according to it, we should be passing two integer values to our code and that would work fine. However, what if we try to pass an integer and a string ?
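The call in question is not shown; a sketch of it, reusing the hypothetical multiply function from above:

```python
def multiply(num1: int, num2: int):
    return num1 * num2

# Passing a str where the hint says int: Python still runs this happily,
# because int * str repeats the string.
print(multiply(2, "ab"))  # abab
```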

The type hints tell us to pass in int values, yet we are passing a str . When we run the code, it runs fine with no issues. The Python interpreter has no problem with the code as long as our type hints name valid data types like int, str, dict, and so on.

Why use Type Hinting at all?

In the example above, we saw that the code runs fine even if we pass a string value to it. Python has no problem multiplying an int with a str . However, there are some really good reasons to use type hints even if Python ignores them.

  • Type hints help IDEs display context-sensitive help, such as function parameters and their expected types.
  • Type hints are often used for code documentation: many automated documentation generators use them, which is valuable when writing code libraries with lots of functions and classes.
  • Even though Python itself ignores type hints, they let us take a more declarative approach when writing code and enable runtime validation via external libraries.

Using a Type Checker

There are several type checkers for Python. One of them is mypy.

Let’s use the same code that we ran before with an int and a str . We will run it through the static type checker mypy and see what it has to say about our code.

  • Installing mypy
  • Code with Type Hints using a type checker while running the code

In the terminal run the file with the type checker prefixed:
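The command is not shown; it was presumably something like the following (the filename example.py is assumed):

```shell
pip install mypy
mypy example.py
```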

When we run our file through the type checker, it now reports a problem with our code . Both arguments are expected to be of type int , and we are passing a string for one of them. The type checker tracked down the bug and shows it in the output. The mypy type checker helped us address the problem in our code.

More Examples of Type Hinting in Python

In the above example, we used the int and str types while hinting. Other data types can be used for type hinting in the same way. We can also declare a type hint for the return type of a function.

Let’s go through the code and see some examples.
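The example code is missing here; a sketch of what the next paragraphs describe (the names and defaults are assumed):

```python
# Hints for several data types, a default value, and an annotated
# str return type.
def describe(name: str, age: int = 30, height: float = 1.8) -> str:
    return f"{name} is {age} years old and {height} m tall"

print(describe("Alice"))  # uses the default age and height
```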

Here we are type hinting at different data types for our arguments. Note that we can also assign default values to our parameters if there is no argument provided.

In the above code, the return type has also been declared. When we check the code with a type checker like mypy, there are no problems: the function returns a string, which matches the type hint provided.
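The next example is also missing; a sketch of a function whose declared return type does not match what it returns (the name shout is assumed):

```python
# Annotated to return None, but actually returns a str:
# mypy reports an error here, although Python runs it fine.
def shout(text: str) -> None:
    return text.upper()

print(shout("hi"))  # HI
```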

This code declares a return type of None . When we run it through the mypy type checker, mypy reports an error, since it expects the function to return None while the code returns a string.
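The variable-annotation example is missing as well; a minimal sketch (the names are assumed):

```python
# Variable annotations: the same hint syntax applied to variables.
name: str = "Alice"
age: int = 30
weight: float = 60.5
is_admin: bool = False
```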

The above code shows type hints which are usually referred to as Variable Annotations. Just like we provided type hints to our functions in the above examples, even the variables can hold similar information and help make code more declarative and documented.

The typing Module

A lot of times we have more advanced or more complicated types that have to be passed as an argument to a function. Python has a built-in typing module that enables us to write such types of annotated variables, making the code even more documented. We have to import the typing module to our file and then use those functions. These include data structures such as List, Dictionary, Set, and Tuple .

Let’s see the code and get an overview, along with the comments as explanations.
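The code is missing here; a sketch of annotating container types with the typing module (the variable names are assumed):

```python
from typing import Dict, List, Set, Tuple

scores: List[int] = [90, 85]                      # a list of ints
phonebook: Dict[str, str] = {"bob": "555-1234"}   # str keys and values
unique_ids: Set[int] = {1, 2, 3}                  # a set of ints
point: Tuple[float, float] = (1.0, 2.0)           # a 2-tuple of floats
```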

There are numerous other ways to utilize type hints in Python. Using types does not affect the performance of the code, and we do not get any extra functionality either. However, type hints add robustness to our code and provide documentation for people who read the code later.

It certainly helps to avoid introducing difficult-to-find bugs. Using types while writing code is becoming popular in the current technology scenario and Python is also following the pattern by providing us with easy-to-use functions for the same. For more information, please refer to the official documentation.

Python Type Hints Documentation

Mouse Vs Python

Python 3.10 – Simplifies Unions in Type Annotations

Python 3.10 has several new typing features. They are given in detail here:

  • PEP 604 , Allow writing union types as X | Y
  • PEP 613 , Explicit Type Aliases
  • PEP 612 , Parameter Specification Variables

The focus of this tutorial is to talk about PEP 604 , which makes writing union types easier when adding type annotation (AKA: type hinting) to your codebase.

Unions Ye Olde Way

Before Python 3.10, if you wanted to say that a variable or parameter could be multiple different types, you would need to use Union :
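The code block is missing; a sketch of the pre-3.10 style (the function is hypothetical):

```python
from typing import Union

# Accepts either an int or a float.
def double(value: Union[int, float]) -> Union[int, float]:
    return value * 2

print(double(3))    # 6
print(double(1.5))  # 3.0
```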

Here’s another example from the Python documentation:

Let’s find out how 3.10 will fix that!

The New Union

In Python 3.10, you no longer need to import Union at all. All the details are in PEP 604 . The PEP allows you to replace it entirely with the | operator. Once you have done that, the code above becomes the following:

You can use the new syntax to replace Optional as well, which is what you used for type hinting that an argument could be None :

You can even use this syntax with isinstance() and issubclass() :

Wrapping Up

While this isn’t an amazing new feature for the Python language, the addition of using the | operator with type annotations makes your code look cleaner. You don’t need as many imports and it looks nicer this way.

Related Reading

  • Python 3: Variable Annotations
  • Type Checking in Python

7. Type Annotations

By Melisa Atay . Last modified: 05 Jul 2022.


If you know another programming language such as C or C++, you are used to declaring what data type you are working with. For example, this is how you would declare an integer in C.

from this moment on, we know that "a" is of type integer. Then we would maybe assign an integer to a.

However, we don't do it like that in Python. Let's go through what we already know.

As a is assigned an integer, it belongs to the integer class itself, without us having to say

beforehand. a's data type would change accordingly, when it is assigned values from other data types.
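The snippets referred to above are missing; a sketch of the dynamic-typing behaviour described:

```python
a = 5
print(type(a))   # <class 'int'> — no declaration needed beforehand
a = "five"       # reassigning changes a's type accordingly
print(type(a))   # <class 'str'>
```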

If Python is capable of determining the types itself, why are even type annotations useful?

Type Annotations: Why

  • You understand where the type errors stem from.
  • You get built-in function hints and syntax highlighting as a warning before you even run your code.
  • Your code will be more readable and understandable. This is especially useful if you are working together with others.


Type Annotations: How
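The example code is missing here; a hedged reconstruction based on the following paragraphs (the names a, b, and my_function are assumed):

```python
# a is annotated as int, b as str; the function adds a to b's code point.
def my_function(a: int, b: str) -> int:
    return a + ord(b)

print(my_function(1, "A"))  # 66, since ord("A") is 65
```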

In this example you see the ord() function, which takes a single-character string as input and returns that character's integer code point (e.g. its ASCII value).

my_function() works perfectly, since b has to be of type string and it is of type string. Let's try it otherwise:

Here we receive a TypeError because we wrote an integer, and not a string for b. Since ord() needs a string to function, the code doesn't work.

Here we receive another TypeError because we passed a string, not an integer, for a. While b is successfully converted into an integer by the ord() function, it is not possible to concatenate strings and integers.

Interestingly, our function runs now, even though our argument a is of type float and not int as we declared at the very beginning.

This example shows that functions can be given arguments that run without raising any errors even though they violate the annotations. However, that can still be very problematic.

How can we run a TypeError check before we actually run the program?

By installing mypy and running it before you run your code, you can avoid type errors. Mypy checks your annotations and gives a warning if a function is called with the wrong data type.

Mypy

All in all, type annotations are very useful: they can save you a lot of time and make your code readable for both yourself and others.


Type inference and type annotations ¶

Type inference ¶

For most variables, if you do not explicitly specify its type, mypy will infer the correct type based on what is initially assigned to the variable.
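The example is missing; a minimal sketch of inference in the spirit of the mypy documentation:

```python
i = 1        # mypy infers: int
s = "hello"  # mypy infers: str
l = [1, 2]   # mypy infers: list[int]
```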

Note that mypy will not use type inference in dynamically typed functions (those without a function type annotation) — every local variable type defaults to Any in such functions. For more details, see Dynamically typed code .

Explicit types for variables ¶

You can override the inferred type of a variable by using a variable type annotation:
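The code block is missing; a sketch of the annotation the next paragraph discusses:

```python
from typing import Union

# Without the annotation, x would be inferred as plain int.
x: Union[int, str] = 1
```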

Without the type annotation, the type of x would be just int . We use an annotation to give it a more general type Union[int, str] (this type means that the value can be either an int or a str ).

The best way to think about this is that the type annotation sets the type of the variable, not the type of the expression. For instance, mypy will complain about the following code:
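The offending code is missing; a sketch of the kind of statement mypy rejects (the exact example is assumed):

```python
from typing import Union

# The annotation sets x's type; the float literal is then incompatible.
x: Union[int, str] = 1.1  # mypy: incompatible types in assignment
```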

To explicitly override the type of an expression you can use cast(<type>, <expression>) . See Casts for details.

Note that you can explicitly declare the type of a variable without giving it an initial value:
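The example is missing; a minimal sketch:

```python
x: str        # declared, but not yet given a value
x = "hello"   # assigned later; mypy checks it against the declaration
```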

Explicit types for collections ¶

The type checker cannot always infer the type of a list or a dictionary. This often arises when creating an empty list or dictionary and assigning it to a new variable that doesn’t have an explicit variable type. Here is an example where mypy can’t infer the type without some help:
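The example is missing; a minimal sketch:

```python
l = []  # mypy: need type annotation for "l" — the element type is unknown
```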

In these cases you can give the type explicitly using a type annotation:
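The example is missing; a minimal sketch of annotating empty collections:

```python
l: list[int] = []       # element type given explicitly
d: dict[str, int] = {}  # key and value types given explicitly
```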

Using type arguments (e.g. list[int] ) on builtin collections like list , dict , tuple , and set only works in Python 3.9 and later. For Python 3.8 and earlier, you must use List (e.g. List[int] ), Dict , and so on.

Compatibility of container types ¶

A quick note: container types can sometimes be unintuitive. We’ll discuss this more in Invariance vs covariance . For example, the following program generates a mypy error, because mypy treats list[int] as incompatible with list[object] :

The reason why the above assignment is disallowed is that allowing the assignment could result in non-int values stored in a list of int :

Other container types like dict and set behave similarly.

You can still run the above program; it prints x . This illustrates the fact that static types do not affect the runtime behavior of programs. You can run programs with type check failures, which is often very handy when performing a large refactoring. Thus you can always ‘work around’ the type system, and it doesn’t really limit what you can do in your program.

Context in type inference ¶

Type inference is bidirectional and takes context into account.

Mypy will take into account the type of the variable on the left-hand side of an assignment when inferring the type of the expression on the right-hand side. For example, the following will type check:
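The example is missing; a minimal sketch:

```python
# The literal [1, 2] is inferred as list[object], not list[int],
# because of the declared type on the left-hand side.
x: list[object] = [1, 2]
```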

The value expression [1, 2] is type checked with the additional context that it is being assigned to a variable of type list[object] . This is used to infer the type of the expression as list[object] .

Declared argument types are also used for type context. In this program mypy knows that the empty list [] should have type list[int] based on the declared type of arg in foo :
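The example is missing; a sketch in the spirit of the mypy documentation:

```python
def foo(arg: list[int]) -> None:
    print('Items:', ', '.join(str(a) for a in arg))

foo([])  # OK: [] is given type list[int] from arg's declared type
```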

However, context only works within a single statement. Here mypy requires an annotation for the empty list, since the context would only be available in the following statement:

Working around the issue is easy by adding a type annotation:
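Both the failing case and the fix are missing; a self-contained sketch of the two (foo is a hypothetical stand-in):

```python
def foo(arg: list[int]) -> None: ...

a = []              # mypy: need type annotation — context from the next
foo(a)              # statement is not used to infer a's type

b: list[int] = []   # fix: annotate the empty list explicitly
foo(b)
```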

Silencing type errors ¶

You might want to disable type checking on specific lines, or within specific files in your codebase. To do that, you can use a # type: ignore comment.

For example, say in its latest update, the web framework you use can now take an integer argument to run() , which starts it on localhost on that port. Like so:

However, the devs forgot to update their type annotations for run , so mypy still thinks run only expects str types. This would give you the following error:

If you cannot directly fix the web framework yourself, you can temporarily disable type checking on that line, by adding a # type: ignore :
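The code is missing; a hedged sketch of the scenario (run is a hypothetical stand-in for the framework function, whose stub still says str):

```python
def run(arg: str) -> None:  # outdated annotation: now also accepts an int port
    print(f"starting on {arg}")

run(8000)  # type: ignore  # stub not yet updated upstream
```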

This will suppress any mypy errors that would have raised on that specific line.

You should probably add some more information on the # type: ignore comment, to explain why the ignore was added in the first place. This could be a link to an issue on the repository responsible for the type stubs, or it could be a short explanation of the bug. To do that, use this format:

Type ignore error codes ¶

By default, mypy displays an error code for each error:

It is possible to add a specific error-code in your ignore comment (e.g. # type: ignore[attr-defined] ) to clarify what’s being silenced. You can find more information about error codes here .

Other ways to silence errors ¶

You can get mypy to silence errors about a specific variable by dynamically typing it with Any . See Dynamically typed code for more information.

You can ignore all mypy errors in a file by adding a # mypy: ignore-errors at the top of the file:
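The example is missing; a minimal sketch:

```python
# mypy: ignore-errors
# With this comment at the top of a file, mypy reports no errors in it.
x: int = "not really an int"  # would normally be flagged
```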

You can also specify per-module configuration options in your mypy configuration file. For example:
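The configuration fragment is missing; a sketch of a per-module override (the module pattern mycode.foo.* is assumed):

```ini
[mypy-mycode.foo.*]
ignore_errors = True
```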

Finally, adding a @typing.no_type_check decorator to a class, method or function causes mypy to avoid type checking that class, method or function and to treat it as not having any type annotations.
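A minimal sketch of the decorator in use (the function is hypothetical):

```python
import typing

@typing.no_type_check
def legacy(a, b):  # mypy treats this function as having no annotations
    return a + b

print(legacy(1, 2))  # 3
```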


Building Robust Codebases with Python's Type Annotations

Written by John Lekberg , Hudson River Trading

Hudson River Trading's (HRT's) Python codebase is large and constantly evolving. Millions of lines of Python reflect the work of hundreds of developers over the last decade. We trade in over 200 markets worldwide — including nearly all of the world's electronic markets — so we need to regularly update our code to handle changing rules and regulations.

Our codebase provides command-line interface (CLI) tools, graphical user interfaces (GUIs), and event-triggered processes that assist our traders, engineers, and operations personnel. This outer layer of our codebase is supported by an inner layer of shared business logic. Business logic is often more complicated than it appears: even a simple question like "what is the next business day for NASDAQ?" involves querying a database of market calendars (a database that requires regular maintenance). So, by centralizing this business logic into a single source of truth, we ensure that all the different systems in our codebase behave coherently.

Even a small change to shared business logic can affect many systems, and we need to check that these systems won't have issues with our change. It's inefficient and error-prone for a human to manually verify that nothing is broken. Python's type annotations have significantly improved how quickly we can update and verify changes to shared business logic.

Type annotations allow you to describe the type of data handled by your code. "Type checkers" are tools that reconcile your descriptions against how the code is actually being used. When we update shared business logic, we update the type annotations and use a type checker to identify any downstream systems that are affected.

We also thoroughly document and test our codebase. But written documentation is not automatically synchronized with the underlying code, so maintaining documentation requires a high level of vigilance and is subject to human error. Additionally, automated testing is limited to the scenarios that we test for, which means that novel uses of our shared business logic will be unverified until we add new tests.

Let's look at an example of type annotations to see how they can be used to describe the shape of data. Here's some type annotated Python code that computes the checksum digit of a CUSIP :
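The annotated code is missing from this copy of the article; a hedged reconstruction using the names the next list describes (the algorithm shown is the standard modulus-10 CUSIP checksum, which may differ in detail from the original):

```python
def cusip_checksum(cusip: str) -> int:
    # CUSIP character values: digits 0-9, letters A-Z = 10-35, then *, @, #.
    chars: str = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ*@#"
    charmap: dict[str, int] = {c: i for i, c in enumerate(chars)}
    total: int = 0
    for i, char in enumerate(cusip[:8]):
        value: int = charmap[char]
        if i % 2 == 1:          # double every second character's value
            value *= 2
        total += value // 10 + value % 10  # sum the digits of each value
    return (10 - total % 10) % 10
```

For example, `cusip_checksum("037833100")` (Apple's CUSIP) returns its check digit, 0.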

Here's what the type annotations tell us:

  • cusip_checksum() is a function that takes a string as input and returns an integer as output.
  • chars is a string.
  • charmap is a dictionary with string keys and integer values.
  • total and value are integers.

HRT uses mypy to analyze our Python type annotations. Mypy works by analyzing the type annotations in one or more Python files and determining if there are any issues or inconsistencies.

Mypy examples

Most of the time, mypy is good at type inference, so it’s better to focus on annotating the parameters and return values of a function rather than the internal variables used in a function.

Here's a new function, validate_cusip() , that relies on the cusip_checksum() function from earlier:

Mypy is happy with this code:

Now, let's say that we decide we should update cusip_checksum() to return None if it detects that the CUSIP is not valid:

Mypy automatically detects issues in how validate_cusip() is using cusip_checksum() :

Now that we've been alerted, we can update validate_cusip() to handle these changes:

In this example, the functions were next to each other in the source code. But Mypy really shines when the functions are spread across many files in the codebase.

All in all, type annotations have substantial benefits for making your codebase more robust. They are not an all-or-nothing proposition — you can focus on adding type annotations to small parts of your codebase and growing the amount of type annotated code over time. Along with other technologies, Python's type annotations help HRT to continue thriving in the fast-paced world of global trading.

This article originally appeared in the HRT Beat .

Meet the author:

John Lekberg works on a spectrum of Python and gRPC systems at HRT. He primarily develops and refines internal tooling for monitoring and alerting. He's also led initiatives to apply static analysis tools to HRT's codebase, catching bugs and reducing the manual work needed to review code.

Proposal: Annotate types in multiple assignment

In the latest version of Python (3.12.3), type annotation for single variable assignment is available:

However, in some scenarios like when we want to annotate the tuple of variables in return, the syntax of type annotation is invalid:
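The code blocks are missing from this copy of the post; a sketch of the valid and invalid forms being contrasted (the variable names are assumed):

```python
a: int = 1  # valid: annotated single-target assignment (PEP 526)

# Invalid: annotating the targets of a tuple assignment is a SyntaxError:
# a: int, b: str = 1, "two"
```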

In this case, I propose two new syntaxes to support this feature:

  • Annotate directly after each variable:
  • Annotate the tuple of return:
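The proposed syntax examples are missing; a sketch reconstructed from the two bullets (not valid Python today, and the exact spelling the OP proposed is assumed):

```text
# 1) Annotate directly after each variable:
a: int, b: str = fun()

# 2) Annotate the tuple of the return value:
(a, b): tuple[int, str] = fun()
```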

Other programming languages, such as Julia and Rust, support this feature in their own ways, as far as I know:

I’m pretty sure this has already been suggested. Did you go through the mailing list and searched for topics here? Without doing that, there’s nothing to discuss here. (Besides linking to them).

Secondly, try to not edit posts, but post a followup. Some people read these topics in mailing list mode and don’t see your edits.

  • https://mail.python.org
  • https://mail.python.org/archives


For reference, PEP 526 has a note about this in the “Rejected/Postponed Proposals” section:

Allow type annotations for tuple unpacking: This causes ambiguity: it’s not clear what this statement means: `x, y: T`. Are x and y both of type T , or do we expect T to be a tuple type of two items that are distributed over x and y , or perhaps x has type Any and y has type T ? (The latter is what this would mean if this occurred in a function signature.) Rather than leave the (human) reader guessing, we forbid this, at least for now.

Personally I think the meaning of this is rather clear, especially when combined with an assignment, and I would like to see this.

Thank you for your valuable response, both regarding the discussion convention for Python development and the history of this feature.

I have found a related topic here: https://mail.python.org/archives/list/[email protected]/thread/5NZNHBDWK6EP67HSK4VNDTZNIVUOXMRS/

Here’s the part I find unconvincing:

Under what circumstances will fun() be hard to annotate, but a, b will be easy?

It’s better to annotate function arguments and return values, not variables. The preferred scenario is that fun() has a well-defined return type, and the type of a, b can be inferred (there is no reason to annotate it). This idea is presupposing there are cases where that’s difficult, but I’d like to see some examples where that applies.

Does this not work?

You don’t need from __future__ as of… 3.9, I think?


3.10 if you want A | B too: PEP 604 , although I’m not sure which version the OP is using and 3.9 hasn’t reached end of life yet.

We can’t always infer it, so annotating a variable is sometimes necessary or useful. But if the function’s return type is annotated then a, b = fun() allows type-checkers to infer the types of a and b . This stuff isn’t built in to Python and is evolving as typing changes, so what was inferred in the past might be better in the future.

So my question above was: are there any scenarios where annotating the function is difficult, but annotating the results would be easy? That seems like the motivating use case.

Would it be a solution to put it on the line above? And not allow assigning on the same line? Then it better mirrors function definitions.

It’s a long thread, so it might have been suggested already.

Actually, in cases where the called function differs from the user-defined function, we need to declare the types when unpacking the assignment.

Here is a simplified MWE:

NOTE: In PyTorch, the __call__ function is internally wrapped from forward .

Can’t you write this? That’s shorter than writing the type annotations.

This is the kind of example I was asking for, thanks. Is the problem that typing tools don’t trace the return type through the call because the wrapping isn’t in python?

I still suggest to read the thread you linked, like I’m doing right now.

The __call__ function is not the same as forward . There might be many other preprocessing and postprocessing steps involved inside it.

Yeah, quite a bit of pre-processing in fact… unless you don’t have hooks by the looks of it:


Python typing annotation return value annotation based if function argument being a list or not

Is there a way to specify in return value annotation that a list of float s is output only when the input ids is a list of str s or int s?

  • python-typing


  • 1 The typing module doesn't directly allow for conditional annotations that depend on the input type ... docs.python.org/3/library/typing.html#typing.overload ... Explore the @overload concept ... using @overload, you can define multiple function signatures to specify the relationship between input and output types. – Bhargav
  • 2 This question is similar to: How can I type-hint a function where the return type depends on the input type of an argument? If you believe it’s different, please edit the question, make it clear how it’s different and/or how the answers on that question are not helpful for your problem. – Anerdw

2 Answers

One way to do this is by using the @overload decorator to create 2 extra function signatures, one for int | str that returns float and one for list[str] | list[int] that returns list[float] and then have the actual function definition like you have now.


Your get is really two different functions. One converts its argument to a float ; the other uses the first to convert a list of arguments to a list of float s.

You only feel compelled to combine them into a single function (for "convenience") because Python lacks a built-in function that makes gets easy to create on the fly from get .

This is an example of "inversion of control": instead of making get decide what to do based on the type of its argument, you decide which function to call based on the type of the argument.


python type annotation assignment

IMAGES

  1. Type Annotation in Python

    python type annotation assignment

  2. 7. Type Annotations

    python type annotation assignment

  3. python annotations list of objects

    python type annotation assignment

  4. Introduction to Python Typing For Type Annotation

    python type annotation assignment

  5. 7. Type Annotations

    python type annotation assignment

  6. Python

    python type annotation assignment

VIDEO

  1. Python Type Annotations: 4. Type-checker Tools mypy

  2. Function Annotation in python

  3. 05

  4. Python

  5. Python Type Hints: More Readable Code With type Hints in Python

  6. Type Annotation in Python #kody_az

COMMENTS

  1. typing

    typing — Support for type hints

  2. How to declare multiple variables with type annotation syntax in Python

    Python is completely object oriented, and not "statically typed". You do not need to declare variables before using them, or declare their type. Every variable in Python is an object. a,b,c=0,1,2 Or the following maybe what you are looking for. def magic(a: str, b: str) -> int: light = a.find(b) return light

  3. PEP 526

    PEP 526 - Syntax for Variable Annotations. This PEP is a historical document: see Annotated assignment statements, ClassVar and typing.ClassVar for up-to-date specs and documentation. Canonical typing specs are maintained at the typing specs site; runtime typing behaviour is described in the CPython documentation.

  4. Understanding type annotation in Python

    Understanding type annotation in Python

  5. Python Type Annotations Full Guide

    Advanced Python Type Annotations Type Aliases. A Type Alias is simply a synonym for a type, and is used to make the code more readable. To create one, simply assign a type annotation to a variable name. Beginning in Python 3.10, this assigned variable can be typed with typing.TypeAlias. Here is an example.

  6. How to Use Type Hints for Multiple Return Types in Python


  7. Python Type Hints: A Comprehensive Guide to Using Type Annotations in

    In the above examples, we specify that variable_name should be of type int, and that the add_numbers function takes two parameters (a and b) of type int and returns an int value. Python provides a set of built-in types like int, float, str, list, dict, etc. Additionally, you can also use type hints with user-defined classes and modules. Here are some additional examples showcasing the benefits of ...
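The annotations the snippet describes can be sketched like this (variable_name and add_numbers are the names the snippet itself uses):

```python
variable_name: int = 10  # annotated variable assignment

def add_numbers(a: int, b: int) -> int:
    # annotated parameters and return type
    return a + b
```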

  8. PEP 484

    The current PEP will have provisional status (see PEP 411) until Python 3.6 is released. The fastest conceivable scheme would introduce silent deprecation of non-type-hint annotations in 3.6, full deprecation in 3.7, and declare type hints as the only allowed use of annotations in Python 3.8.

  9. Type annotations

    Annotation expressions may appear in the following contexts:
    - In a type annotation (always as part of an annotation expression)
    - The first argument to cast()
    - The second argument to assert_type()
    - The bounds and constraints of a TypeVar (whether created through the old syntax or the native syntax in Python 3.12)
    - The definition of a type alias (whether created through the type statement, the old assignment ...)

  10. Type Annotations

    Complex annotations. While an annotation like x: Foo corresponds directly to the runtime class Foo, in general the type annotation system supports more complex types that do not correspond directly to a runtime Python type. Some examples: parametrised types, e.g. List[int] is the type of lists of integers, and Dict[K, V] is the (generic) type of dictionaries whose keys and values have types K and V, respectively.
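Parametrised annotations like these can be sketched with the built-in generics (the variable and function names are illustrative):

```python
# The annotation describes the container's contents, not just the container class
scores: list[int] = [90, 85]
lookup: dict[str, float] = {"pi": 3.14159}

def total(values: list[int]) -> int:
    return sum(values)
```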

  11. Advanced Type Annotations in Python

    Advanced Type Annotations in Python: Part 1. Python's type hinting system, introduced in PEP 484, has been a game-changer for many developers. It allows for better code readability, improved IDE ...

  12. Type hints cheat sheet

    In some cases type annotations can cause issues at runtime; see Annotation issues at runtime for dealing with this, and Silencing type errors for details on how to silence errors. Standard "duck types": in typical Python code, many functions that can take a list or a dict as an argument only need their argument to be somehow "list-like" or "dict-like".
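Accepting the abstract "duck types" instead of concrete list/dict can be sketched with collections.abc (function names are illustrative):

```python
from collections.abc import Mapping, Sequence

def first_item(items: Sequence[str]) -> str:
    # Accepts any sequence (list, tuple, ...), not just list
    return items[0]

def describe(config: Mapping[str, int]) -> str:
    # Accepts any mapping, not just dict
    return ", ".join(f"{key}={value}" for key, value in config.items())
```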

  13. Type Hinting and Annotations in Python

    Explanation: in the function declared above, we are assigning built-in data types to the arguments. It is an ordinary function, but the syntax is slightly different: each argument is followed by a colon and a data type (num1: int, num2: int). The function takes two arguments, num1 and num2; that is what Python sees when it runs the code.

  14. Python 3.10

    The focus of this tutorial is PEP 604, which makes writing union types easier when adding type annotations (also known as type hints) to your codebase. Unions, ye olde way: before Python 3.10, if you wanted to say that a variable or parameter could be one of several types, you needed to use Union:

  15. 7. Type Annotations

    If you are using an IDE such as Visual Studio Code, writing type annotations lets you access the built-in methods more easily. When a variable has no declared type, the IDE cannot automatically suggest the built-in methods you intend to use; with annotations, you get syntax highlighting and warnings before you even run your code.

  16. Type inference and type annotations

    The best way to think about this is that the type annotation sets the type of the variable, not the type of the expression. For instance, mypy will complain about the following code: x: Union[int, str] = 1.1  # error: Incompatible types in assignment (expression has type "float", variable has type "Union[int, str]")

  17. Building Robust Codebases with Python's Type Annotations

    Python's type annotations have significantly improved how quickly we can update and verify changes to shared business logic. Type annotations allow you to describe the type of data handled by your code. "Type checkers" are tools that reconcile your descriptions against how the code is actually being used. When we update shared business logic ...

  18. Proposal: Annotate types in multiple assignment

    In the latest version of Python (3.12.3), type annotation for a single variable assignment is available: a: int = 1. However, in some scenarios, such as annotating the tuple of variables returned from a function, the syntax is invalid:

    from typing import Any
    def fun() -> Any:  # when the strict type is hard to annotate
        return 1, True
    a: int, b: bool = fun()  # INVALID

    In this case, I ...
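Under current syntax, the same effect can be sketched by annotating each name with a bare annotation before the tuple assignment (fun is the snippet's own name):

```python
from typing import Any

def fun() -> Any:
    return 1, True

# Annotating targets inside a tuple assignment is a syntax error,
# but bare annotations beforehand inform the checker all the same:
a: int
b: bool
a, b = fun()
```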

  19. Understanding Python type annotation after indexing

    This comes from a combination of two features of type annotation syntax. First, you can put a type hint after any expression that could syntactically be the target of an assignment. This includes identifiers (e.g. data), indexed expressions (e.g. data["teapot"]), and attributes (e.g. data.teapot). It doesn't have to be an actual assignment; it's ...
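A minimal sketch of an annotation on an indexed target (the names and value are illustrative):

```python
data: dict[str, int] = {}
# PEP 526 allows an annotation on any single assignment target, including
# subscripts; it is seen by static checkers but, unlike a simple name,
# is not stored in __annotations__ at runtime.
data["teapot"]: int = 418
```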

  20. How to "borrow" an instance property type in Python type annotations

    Type hints in Python are for static checkers, and instances imply dynamic types, which may be addressed using TypeVars. Unless Foo has some kind of TypeVar which may be shared and bounded between Foo and Bar, I am not sure how you can go about this. Basically, you will need to tell us roughly how Foo is defined as a type, and how it exposes the type signature of Foo.foo.
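One way the shared-TypeVar idea could look is sketched below; the Foo and Bar classes are hypothetical stand-ins for the question's types:

```python
from typing import Generic, TypeVar

T = TypeVar("T")

class Foo(Generic[T]):
    def __init__(self, foo: T) -> None:
        self.foo = foo

class Bar(Generic[T]):
    # Bar "borrows" the type of Foo.foo by sharing the same TypeVar:
    # a checker infers Bar[int] from Foo[int], and so on.
    def __init__(self, source: Foo[T]) -> None:
        self.value: T = source.foo
```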

  21. Python: use Type Hints together with Annotations

    As far as I know, no existing static type checker, linter, doc generator, etc. is going to recognize an annotation that's a tuple of a type and a description. Practically speaking, I'd recommend using the annotation for the type only, since that's overwhelmingly the standard usage, and using comments or docstrings for additional information.
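For attaching a description without confusing checkers, typing.Annotated keeps the first argument as the real type and carries the rest as metadata; a minimal sketch (the function and description are illustrative):

```python
from typing import Annotated, get_type_hints

def speed(velocity: Annotated[float, "metres per second"]) -> float:
    # Checkers see `velocity: float`; tools can read the metadata if they choose
    return velocity

hints = get_type_hints(speed, include_extras=True)
```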

  22. python

    Python's type annotation system is not designed with a zero-overhead type assertion syntax. As an alternative, you could use Any as an "escape hatch". If you annotate foo with type Any, mypy should allow the bar call. Local variable annotations have no runtime cost, so the only runtime cost is an extra local variable store and lookup:
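The Any "escape hatch" described above can be sketched as follows; load_plugin is a hypothetical stand-in for whatever produces the dynamically-typed object:

```python
from typing import Any

def load_plugin() -> Any:  # hypothetical loader returning an unknown object
    return "hello"

plugin: Any = load_plugin()
# A checker permits any attribute access on Any; at runtime this is just str.upper()
result = plugin.upper()
```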

  23. Python typing annotation return value annotation based if function

    This question is similar to: How can I type-hint a function where the return type depends on the input type of an argument? If you believe it's different, please edit the question to make it clear how it's different and/or how the answers to that question do not solve your problem.
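The usual tool for "return type depends on the argument type" is typing.overload; a minimal sketch (the function name is illustrative):

```python
from typing import overload

@overload
def double(value: int) -> int: ...
@overload
def double(value: str) -> str: ...

def double(value):
    # Single runtime implementation; the overload stubs tell the checker
    # that the return type matches the argument type.
    return value * 2
```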