Update Python Client to 3.2.0rc1
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 8ea0d3d..ed62a40 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -17,6 +17,53 @@
  under the License.
  -->
 
+# v3.2.0
+
+## New Features:
+
+- Add Asset Partitioning support: backfill for partitioned DAGs ([#61464](https://github.com/apache/airflow/pull/61464))
+- Add `partition_key` to `DagRunAssetReference` ([#61725](https://github.com/apache/airflow/pull/61725))
+- Add partition key column and filter to DAG Runs list ([#61939](https://github.com/apache/airflow/pull/61939))
+- Add `DagRunType` for asset materializations ([#62276](https://github.com/apache/airflow/pull/62276))
+- Add `allowed_run_types` to allowlist specific DAG run types ([#61833](https://github.com/apache/airflow/pull/61833))
+- Add DAG runs filters by `bundleVersion` ([#62810](https://github.com/apache/airflow/pull/62810))
+- Expose `timetable_partitioned` in UI API ([#62777](https://github.com/apache/airflow/pull/62777))
+- Add base React plugin destination ([#62530](https://github.com/apache/airflow/pull/62530))
+- Add `team_name` to Pool APIs ([#60952](https://github.com/apache/airflow/pull/60952))
+- Add `team_name` to connection public APIs ([#59336](https://github.com/apache/airflow/pull/59336))
+- Add `team_id` to variable APIs ([#57102](https://github.com/apache/airflow/pull/57102))
+- Add team selector to list variables and list connections pages ([#60995](https://github.com/apache/airflow/pull/60995))
+- Support OR operator in search parameters ([#60008](https://github.com/apache/airflow/pull/60008))
+- Add wildcard support for `dag_id` and `dag_run_id` in bulk task instance endpoint ([#57441](https://github.com/apache/airflow/pull/57441))
+- Add `operator_name_pattern`, `pool_pattern`, `queue_pattern` as search filters for task instances ([#57571](https://github.com/apache/airflow/pull/57571))
+- Add filters to Task Instances tab ([#56920](https://github.com/apache/airflow/pull/56920))
+- Add API support for filtering DAGs by timetable type ([#58852](https://github.com/apache/airflow/pull/58852))
+- Enable triggerer queues ([#59239](https://github.com/apache/airflow/pull/59239))
+- Add ability to add, edit, and delete XComs directly from UI ([#58921](https://github.com/apache/airflow/pull/58921))
+- Add HITL detail history ([#55952](https://github.com/apache/airflow/pull/55952), [#56760](https://github.com/apache/airflow/pull/56760))
+- Add `update_mask` support for bulk PATCH APIs ([#54597](https://github.com/apache/airflow/pull/54597))
+- Introduce named asset watchers ([#55643](https://github.com/apache/airflow/pull/55643))
+- Add DAG ID pattern search functionality to DAG Runs and Task Instances ([#55691](https://github.com/apache/airflow/pull/55691))
+- Add UI to allow creation of DAG Runs with partition key ([#58004](https://github.com/apache/airflow/pull/58004))
+- Support retry multiplier parameter ([#56866](https://github.com/apache/airflow/pull/56866))
+- Display active DAG runs count in header with auto-refresh ([#58332](https://github.com/apache/airflow/pull/58332))
+- Add checkbox before clear task confirmation to prevent rerun of tasks in Running state ([#56351](https://github.com/apache/airflow/pull/56351))
+
+## Improvements:
+
+- Upgrade FastAPI and conform to OpenAPI schema changes ([#61476](https://github.com/apache/airflow/pull/61476))
+- Use SQLAlchemy native `Uuid`/`JSON` types instead of `sqlalchemy-utils` ([#61532](https://github.com/apache/airflow/pull/61532))
+- Remove team ID and use team name as primary key ([#59109](https://github.com/apache/airflow/pull/59109))
+- Update `BulkDeleteAction` to use generic type ([#59207](https://github.com/apache/airflow/pull/59207))
+- Add link to API docs ([#53346](https://github.com/apache/airflow/pull/53346))
+
+## Bug Fixes:
+
+- Fix null `dag_run_conf` in `BackfillResponse` serialization ([#63259](https://github.com/apache/airflow/pull/63259))
+- Fix missing `dag_id` filter on DAG Run query ([#62750](https://github.com/apache/airflow/pull/62750))
+- Fix `HITLResponse` data model name ([#57795](https://github.com/apache/airflow/pull/57795))
+- Remove unused parameter in logout ([#58045](https://github.com/apache/airflow/pull/58045))
+
 # v3.1.8
 
 ## Bug Fixes:
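The OR-operator search added for the `*_pattern` filters (#60008) composes with the existing SQL LIKE wildcards. A minimal client-side sketch of the expected matching semantics, in plain Python for illustration (the real filtering happens server-side with SQL LIKE; `like_to_regex` and `matches` are hypothetical helpers, not part of the client):

```python
import re

def like_to_regex(pattern: str) -> str:
    """Translate a SQL LIKE pattern (% and _ wildcards) into an anchored regex."""
    out = []
    for ch in pattern:
        if ch == "%":
            out.append(".*")   # % matches any run of characters
        elif ch == "_":
            out.append(".")    # _ matches exactly one character
        else:
            out.append(re.escape(ch))
    return "^" + "".join(out) + "$"

def matches(name: str, name_pattern: str) -> bool:
    """A `|`-separated pattern matches if any alternative matches."""
    alternatives = [p.strip() for p in name_pattern.split("|")]
    return any(re.match(like_to_regex(p), name) for p in alternatives)
```

That surrounding whitespace around `|` is stripped is an assumption here, inferred from the `dag1 | dag2` example in the parameter descriptions.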
diff --git a/README.md b/README.md
index 4d698e3..6f04629 100644
--- a/README.md
+++ b/README.md
@@ -19,22 +19,6 @@
 
 # Apache Airflow Python Client
 
-> [!NOTE]
-> Code in this repository is automatically generated by the [OpenAPI Generator](https://openapi-generator.tech)
-> project using Open-API specification from the Apache Airflow repository.
->
-> The process of generating the code is described in the
-> [Python Client Readme](https://github.com/apache/airflow/blob/main/clients/README.md).
-> We enabled [GitHub discussions](https://github.com/apache/airflow-client-python/discussions)
-> in the `airflow-client-python` repository, and we encourage you to start discussions if you have any questions
-> or suggestions to improve the client. However, in case the discussions result in a need to create an
-> actionable issuee, the issues in this repo are deliberately not enabled.
->
-> Instead, you should create GitHub issues or even PRs improving the client
-> in the main [Apache Airflow repository](https://github.com/apache/airflow) and test it by generating the
-> client locally following the instructions from the repo.
-
-
 # Overview
 
 To facilitate management, Apache Airflow supports a range of REST API endpoints across its
@@ -49,7 +33,6 @@
 Accept: application/json
 ```
 
-
 ## Resources
 
 The term `resource` refers to a single type of object in the Airflow metadata. An API is broken up by its
@@ -647,6 +630,7 @@
 
 * optionally expose configuration (NOTE! that this is dangerous setting). The script will happily run with
   the default setting, but if you want to see the configuration, you need to expose it.
+  Note that sensitive configuration values are always masked.
   In the `[api]` section of your `airflow.cfg` set:
 
 ```ini
diff --git a/airflow_client/client/__init__.py b/airflow_client/client/__init__.py
index c2913a4..da267df 100644
--- a/airflow_client/client/__init__.py
+++ b/airflow_client/client/__init__.py
@@ -14,7 +14,7 @@
 """  # noqa: E501
 
 
-__version__ = "3.1.8"
+__version__ = "3.2.0"
 
 # Define package exports
 __all__ = [
@@ -65,6 +65,7 @@
     "AssetEventCollectionResponse",
     "AssetEventResponse",
     "AssetResponse",
+    "AssetWatcherResponse",
     "BackfillCollectionResponse",
     "BackfillPostBody",
     "BackfillResponse",
@@ -132,6 +133,9 @@
     "DryRunBackfillCollectionResponse",
     "DryRunBackfillResponse",
     "EntitiesInner",
+    "EntitiesInner1",
+    "EntitiesInner2",
+    "EntitiesInner3",
     "EventLogCollectionResponse",
     "EventLogResponse",
     "ExternalLogUrlResponse",
@@ -141,6 +145,7 @@
     "FastAPIRootMiddlewareResponse",
     "HITLDetail",
     "HITLDetailCollection",
+    "HITLDetailHistory",
     "HITLDetailResponse",
     "HITLUser",
     "HTTPExceptionResponse",
@@ -255,6 +260,7 @@
 from airflow_client.client.models.asset_event_collection_response import AssetEventCollectionResponse as AssetEventCollectionResponse
 from airflow_client.client.models.asset_event_response import AssetEventResponse as AssetEventResponse
 from airflow_client.client.models.asset_response import AssetResponse as AssetResponse
+from airflow_client.client.models.asset_watcher_response import AssetWatcherResponse as AssetWatcherResponse
 from airflow_client.client.models.backfill_collection_response import BackfillCollectionResponse as BackfillCollectionResponse
 from airflow_client.client.models.backfill_post_body import BackfillPostBody as BackfillPostBody
 from airflow_client.client.models.backfill_response import BackfillResponse as BackfillResponse
@@ -322,6 +328,9 @@
 from airflow_client.client.models.dry_run_backfill_collection_response import DryRunBackfillCollectionResponse as DryRunBackfillCollectionResponse
 from airflow_client.client.models.dry_run_backfill_response import DryRunBackfillResponse as DryRunBackfillResponse
 from airflow_client.client.models.entities_inner import EntitiesInner as EntitiesInner
+from airflow_client.client.models.entities_inner1 import EntitiesInner1 as EntitiesInner1
+from airflow_client.client.models.entities_inner2 import EntitiesInner2 as EntitiesInner2
+from airflow_client.client.models.entities_inner3 import EntitiesInner3 as EntitiesInner3
 from airflow_client.client.models.event_log_collection_response import EventLogCollectionResponse as EventLogCollectionResponse
 from airflow_client.client.models.event_log_response import EventLogResponse as EventLogResponse
 from airflow_client.client.models.external_log_url_response import ExternalLogUrlResponse as ExternalLogUrlResponse
@@ -331,6 +340,7 @@
 from airflow_client.client.models.fast_api_root_middleware_response import FastAPIRootMiddlewareResponse as FastAPIRootMiddlewareResponse
 from airflow_client.client.models.hitl_detail import HITLDetail as HITLDetail
 from airflow_client.client.models.hitl_detail_collection import HITLDetailCollection as HITLDetailCollection
+from airflow_client.client.models.hitl_detail_history import HITLDetailHistory as HITLDetailHistory
 from airflow_client.client.models.hitl_detail_response import HITLDetailResponse as HITLDetailResponse
 from airflow_client.client.models.hitl_user import HITLUser as HITLUser
 from airflow_client.client.models.http_exception_response import HTTPExceptionResponse as HTTPExceptionResponse
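The list endpoints in this client share the same `limit`/`offset` paging parameters (both constrained to `ge=0`). A minimal sketch of draining a paged endpoint; `paginate`, `fetch_page`, and `fake_fetch` are illustrative stand-ins, not part of the generated client:

```python
def paginate(fetch_page, page_size=100):
    """Yield every item from a limit/offset-paged endpoint.

    `fetch_page` stands in for a bound client call such as
    AssetApi.get_assets(limit=..., offset=...) returning a list of items.
    """
    offset = 0
    while True:
        page = fetch_page(limit=page_size, offset=offset)
        if not page:
            return  # an empty page signals the end of the collection
        yield from page
        offset += page_size

# Stubbed data source standing in for the Airflow API.
DATA = [f"asset_{i}" for i in range(250)]

def fake_fetch(limit, offset):
    return DATA[offset:offset + limit]
```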
diff --git a/airflow_client/client/api/asset_api.py b/airflow_client/client/api/asset_api.py
index ea62a74..b1bfd5f 100644
--- a/airflow_client/client/api/asset_api.py
+++ b/airflow_client/client/api/asset_api.py
@@ -1778,7 +1778,7 @@
         self,
         limit: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
         offset: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
-        name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
+        name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         order_by: Annotated[Optional[List[StrictStr]], Field(description="Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `id, name`")] = None,
         _request_timeout: Union[
             None,
@@ -1801,7 +1801,7 @@
         :type limit: int
         :param offset:
         :type offset: int
-        :param name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type name_pattern: str
         :param order_by: Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `id, name`
         :type order_by: List[str]
@@ -1861,7 +1861,7 @@
         self,
         limit: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
         offset: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
-        name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
+        name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         order_by: Annotated[Optional[List[StrictStr]], Field(description="Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `id, name`")] = None,
         _request_timeout: Union[
             None,
@@ -1884,7 +1884,7 @@
         :type limit: int
         :param offset:
         :type offset: int
-        :param name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type name_pattern: str
         :param order_by: Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `id, name`
         :type order_by: List[str]
@@ -1944,7 +1944,7 @@
         self,
         limit: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
         offset: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
-        name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
+        name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         order_by: Annotated[Optional[List[StrictStr]], Field(description="Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `id, name`")] = None,
         _request_timeout: Union[
             None,
@@ -1967,7 +1967,7 @@
         :type limit: int
         :param offset:
         :type offset: int
-        :param name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type name_pattern: str
         :param order_by: Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `id, name`
         :type order_by: List[str]
@@ -2112,6 +2112,7 @@
         source_task_id: Optional[StrictStr] = None,
         source_run_id: Optional[StrictStr] = None,
         source_map_index: Optional[StrictInt] = None,
+        name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         timestamp_gte: Optional[datetime] = None,
         timestamp_gt: Optional[datetime] = None,
         timestamp_lte: Optional[datetime] = None,
@@ -2149,6 +2150,8 @@
         :type source_run_id: str
         :param source_map_index:
         :type source_map_index: int
+        :param name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
+        :type name_pattern: str
         :param timestamp_gte:
         :type timestamp_gte: datetime
         :param timestamp_gt:
@@ -2188,6 +2191,7 @@
             source_task_id=source_task_id,
             source_run_id=source_run_id,
             source_map_index=source_map_index,
+            name_pattern=name_pattern,
             timestamp_gte=timestamp_gte,
             timestamp_gt=timestamp_gt,
             timestamp_lte=timestamp_lte,
@@ -2227,6 +2231,7 @@
         source_task_id: Optional[StrictStr] = None,
         source_run_id: Optional[StrictStr] = None,
         source_map_index: Optional[StrictInt] = None,
+        name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         timestamp_gte: Optional[datetime] = None,
         timestamp_gt: Optional[datetime] = None,
         timestamp_lte: Optional[datetime] = None,
@@ -2264,6 +2269,8 @@
         :type source_run_id: str
         :param source_map_index:
         :type source_map_index: int
+        :param name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
+        :type name_pattern: str
         :param timestamp_gte:
         :type timestamp_gte: datetime
         :param timestamp_gt:
@@ -2303,6 +2310,7 @@
             source_task_id=source_task_id,
             source_run_id=source_run_id,
             source_map_index=source_map_index,
+            name_pattern=name_pattern,
             timestamp_gte=timestamp_gte,
             timestamp_gt=timestamp_gt,
             timestamp_lte=timestamp_lte,
@@ -2342,6 +2350,7 @@
         source_task_id: Optional[StrictStr] = None,
         source_run_id: Optional[StrictStr] = None,
         source_map_index: Optional[StrictInt] = None,
+        name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         timestamp_gte: Optional[datetime] = None,
         timestamp_gt: Optional[datetime] = None,
         timestamp_lte: Optional[datetime] = None,
@@ -2379,6 +2388,8 @@
         :type source_run_id: str
         :param source_map_index:
         :type source_map_index: int
+        :param name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
+        :type name_pattern: str
         :param timestamp_gte:
         :type timestamp_gte: datetime
         :param timestamp_gt:
@@ -2418,6 +2429,7 @@
             source_task_id=source_task_id,
             source_run_id=source_run_id,
             source_map_index=source_map_index,
+            name_pattern=name_pattern,
             timestamp_gte=timestamp_gte,
             timestamp_gt=timestamp_gt,
             timestamp_lte=timestamp_lte,
@@ -2452,6 +2464,7 @@
         source_task_id,
         source_run_id,
         source_map_index,
+        name_pattern,
         timestamp_gte,
         timestamp_gt,
         timestamp_lte,
@@ -2511,6 +2524,10 @@
             
             _query_params.append(('source_map_index', source_map_index))
             
+        if name_pattern is not None:
+            
+            _query_params.append(('name_pattern', name_pattern))
+            
         if timestamp_gte is not None:
             if isinstance(timestamp_gte, datetime):
                 _query_params.append(
@@ -2894,8 +2911,8 @@
         self,
         limit: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
         offset: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
-        name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
-        uri_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
+        name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
+        uri_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         dag_ids: Optional[List[StrictStr]] = None,
         only_active: Optional[StrictBool] = None,
         order_by: Annotated[Optional[List[StrictStr]], Field(description="Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `id, name, uri, created_at, updated_at`")] = None,
@@ -2920,9 +2937,9 @@
         :type limit: int
         :param offset:
         :type offset: int
-        :param name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type name_pattern: str
-        :param uri_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param uri_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type uri_pattern: str
         :param dag_ids:
         :type dag_ids: List[str]
@@ -2989,8 +3006,8 @@
         self,
         limit: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
         offset: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
-        name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
-        uri_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
+        name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
+        uri_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         dag_ids: Optional[List[StrictStr]] = None,
         only_active: Optional[StrictBool] = None,
         order_by: Annotated[Optional[List[StrictStr]], Field(description="Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `id, name, uri, created_at, updated_at`")] = None,
@@ -3015,9 +3032,9 @@
         :type limit: int
         :param offset:
         :type offset: int
-        :param name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type name_pattern: str
-        :param uri_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param uri_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type uri_pattern: str
         :param dag_ids:
         :type dag_ids: List[str]
@@ -3084,8 +3101,8 @@
         self,
         limit: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
         offset: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
-        name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
-        uri_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
+        name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
+        uri_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         dag_ids: Optional[List[StrictStr]] = None,
         only_active: Optional[StrictBool] = None,
         order_by: Annotated[Optional[List[StrictStr]], Field(description="Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `id, name, uri, created_at, updated_at`")] = None,
@@ -3110,9 +3127,9 @@
         :type limit: int
         :param offset:
         :type offset: int
-        :param name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type name_pattern: str
-        :param uri_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param uri_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type uri_pattern: str
         :param dag_ids:
         :type dag_ids: List[str]
@@ -3918,6 +3935,7 @@
 
         _response_types_map: Dict[str, Optional[str]] = {
             '200': "DAGRunResponse",
+            '400': "HTTPExceptionResponse",
             '401': "HTTPExceptionResponse",
             '403': "HTTPExceptionResponse",
             '404': "HTTPExceptionResponse",
@@ -3990,6 +4008,7 @@
 
         _response_types_map: Dict[str, Optional[str]] = {
             '200': "DAGRunResponse",
+            '400': "HTTPExceptionResponse",
             '401': "HTTPExceptionResponse",
             '403': "HTTPExceptionResponse",
             '404': "HTTPExceptionResponse",
@@ -4062,6 +4081,7 @@
 
         _response_types_map: Dict[str, Optional[str]] = {
             '200': "DAGRunResponse",
+            '400': "HTTPExceptionResponse",
             '401': "HTTPExceptionResponse",
             '403': "HTTPExceptionResponse",
             '404': "HTTPExceptionResponse",
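The newly documented `400` entries above extend the per-status `_response_types_map` that each generated method hands to the deserializer. A rough sketch of how such a map resolves a model name from a status code (illustrative only; the real lookup lives inside the generated `ApiClient`, and `resolve_response_type` is a hypothetical helper):

```python
from typing import Dict, Optional

# Status-code map as emitted by the generator for get_dag_run-style calls.
response_types_map: Dict[str, Optional[str]] = {
    "200": "DAGRunResponse",
    "400": "HTTPExceptionResponse",  # newly documented in 3.2.0
    "401": "HTTPExceptionResponse",
    "403": "HTTPExceptionResponse",
    "404": "HTTPExceptionResponse",
}

def resolve_response_type(status: int) -> Optional[str]:
    """Pick the model to deserialize into, falling back to a ranged key
    such as '4XX' when no exact status match exists in the map."""
    key = str(status)
    if key in response_types_map:
        return response_types_map[key]
    return response_types_map.get(key[0] + "XX")
```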
diff --git a/airflow_client/client/api/backfill_api.py b/airflow_client/client/api/backfill_api.py
index 62a2f37..5050c23 100644
--- a/airflow_client/client/api/backfill_api.py
+++ b/airflow_client/client/api/backfill_api.py
@@ -369,6 +369,7 @@
 
         _response_types_map: Dict[str, Optional[str]] = {
             '200': "BackfillResponse",
+            '400': "HTTPExceptionResponse",
             '401': "HTTPExceptionResponse",
             '403': "HTTPExceptionResponse",
             '404': "HTTPExceptionResponse",
@@ -440,6 +441,7 @@
 
         _response_types_map: Dict[str, Optional[str]] = {
             '200': "BackfillResponse",
+            '400': "HTTPExceptionResponse",
             '401': "HTTPExceptionResponse",
             '403': "HTTPExceptionResponse",
             '404': "HTTPExceptionResponse",
@@ -511,6 +513,7 @@
 
         _response_types_map: Dict[str, Optional[str]] = {
             '200': "BackfillResponse",
+            '400': "HTTPExceptionResponse",
             '401': "HTTPExceptionResponse",
             '403': "HTTPExceptionResponse",
             '404': "HTTPExceptionResponse",
diff --git a/airflow_client/client/api/connection_api.py b/airflow_client/client/api/connection_api.py
index 81ac304..5fce05b 100644
--- a/airflow_client/client/api/connection_api.py
+++ b/airflow_client/client/api/connection_api.py
@@ -1133,8 +1133,8 @@
         self,
         limit: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
         offset: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
-        order_by: Annotated[Optional[List[StrictStr]], Field(description="Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `conn_id, conn_type, description, host, port, id, connection_id`")] = None,
-        connection_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
+        order_by: Annotated[Optional[List[StrictStr]], Field(description="Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `conn_id, conn_type, description, host, port, id, team_name, connection_id`")] = None,
+        connection_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         _request_timeout: Union[
             None,
             Annotated[StrictFloat, Field(gt=0)],
@@ -1156,9 +1156,9 @@
         :type limit: int
         :param offset:
         :type offset: int
-        :param order_by: Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `conn_id, conn_type, description, host, port, id, connection_id`
+        :param order_by: Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `conn_id, conn_type, description, host, port, id, team_name, connection_id`
         :type order_by: List[str]
-        :param connection_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param connection_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type connection_id_pattern: str
         :param _request_timeout: timeout setting for this request. If one
                                  number provided, it will be total request
@@ -1216,8 +1216,8 @@
         self,
         limit: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
         offset: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
-        order_by: Annotated[Optional[List[StrictStr]], Field(description="Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `conn_id, conn_type, description, host, port, id, connection_id`")] = None,
-        connection_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
+        order_by: Annotated[Optional[List[StrictStr]], Field(description="Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `conn_id, conn_type, description, host, port, id, team_name, connection_id`")] = None,
+        connection_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         _request_timeout: Union[
             None,
             Annotated[StrictFloat, Field(gt=0)],
@@ -1239,9 +1239,9 @@
         :type limit: int
         :param offset:
         :type offset: int
-        :param order_by: Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `conn_id, conn_type, description, host, port, id, connection_id`
+        :param order_by: Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `conn_id, conn_type, description, host, port, id, team_name, connection_id`
         :type order_by: List[str]
-        :param connection_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param connection_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type connection_id_pattern: str
         :param _request_timeout: timeout setting for this request. If one
                                  number provided, it will be total request
@@ -1299,8 +1299,8 @@
         self,
         limit: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
         offset: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
-        order_by: Annotated[Optional[List[StrictStr]], Field(description="Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `conn_id, conn_type, description, host, port, id, connection_id`")] = None,
-        connection_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
+        order_by: Annotated[Optional[List[StrictStr]], Field(description="Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `conn_id, conn_type, description, host, port, id, team_name, connection_id`")] = None,
+        connection_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         _request_timeout: Union[
             None,
             Annotated[StrictFloat, Field(gt=0)],
@@ -1322,9 +1322,9 @@
         :type limit: int
         :param offset:
         :type offset: int
-        :param order_by: Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `conn_id, conn_type, description, host, port, id, connection_id`
+        :param order_by: Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `conn_id, conn_type, description, host, port, id, team_name, connection_id`
         :type order_by: List[str]
-        :param connection_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param connection_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type connection_id_pattern: str
         :param _request_timeout: timeout setting for this request. If one
                                  number provided, it will be total request
diff --git a/airflow_client/client/api/dag_api.py b/airflow_client/client/api/dag_api.py
index f3dd99e..5848876 100644
--- a/airflow_client/client/api/dag_api.py
+++ b/airflow_client/client/api/dag_api.py
@@ -1154,7 +1154,7 @@
         limit: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
         offset: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
         order_by: Annotated[Optional[List[StrictStr]], Field(description="Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `name`")] = None,
-        tag_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
+        tag_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         _request_timeout: Union[
             None,
             Annotated[StrictFloat, Field(gt=0)],
@@ -1178,7 +1178,7 @@
         :type offset: int
         :param order_by: Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `name`
         :type order_by: List[str]
-        :param tag_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param tag_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type tag_name_pattern: str
         :param _request_timeout: timeout setting for this request. If one
                                  number provided, it will be total request
@@ -1236,7 +1236,7 @@
         limit: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
         offset: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
         order_by: Annotated[Optional[List[StrictStr]], Field(description="Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `name`")] = None,
-        tag_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
+        tag_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         _request_timeout: Union[
             None,
             Annotated[StrictFloat, Field(gt=0)],
@@ -1260,7 +1260,7 @@
         :type offset: int
         :param order_by: Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `name`
         :type order_by: List[str]
-        :param tag_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param tag_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type tag_name_pattern: str
         :param _request_timeout: timeout setting for this request. If one
                                  number provided, it will be total request
@@ -1318,7 +1318,7 @@
         limit: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
         offset: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
         order_by: Annotated[Optional[List[StrictStr]], Field(description="Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `name`")] = None,
-        tag_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
+        tag_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         _request_timeout: Union[
             None,
             Annotated[StrictFloat, Field(gt=0)],
@@ -1342,7 +1342,7 @@
         :type offset: int
         :param order_by: Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `name`
         :type order_by: List[str]
-        :param tag_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param tag_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type tag_name_pattern: str
         :param _request_timeout: timeout setting for this request. If one
                                  number provided, it will be total request
@@ -1481,8 +1481,8 @@
         tags: Optional[List[StrictStr]] = None,
         tags_match_mode: Optional[StrictStr] = None,
         owners: Optional[List[StrictStr]] = None,
-        dag_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
-        dag_display_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
+        dag_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
+        dag_display_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         exclude_stale: Optional[StrictBool] = None,
         paused: Optional[StrictBool] = None,
         has_import_errors: Annotated[Optional[StrictBool], Field(description="Filter Dags by having import errors. Only Dags that have been successfully loaded before will be returned.")] = None,
@@ -1502,6 +1502,7 @@
         dag_run_state: Optional[List[StrictStr]] = None,
         order_by: Annotated[Optional[List[StrictStr]], Field(description="Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `dag_id, dag_display_name, next_dagrun, state, start_date, last_run_state, last_run_start_date`")] = None,
         is_favorite: Optional[StrictBool] = None,
+        timetable_type: Optional[List[StrictStr]] = None,
         _request_timeout: Union[
             None,
             Annotated[StrictFloat, Field(gt=0)],
@@ -1529,9 +1530,9 @@
         :type tags_match_mode: str
         :param owners:
         :type owners: List[str]
-        :param dag_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param dag_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type dag_id_pattern: str
-        :param dag_display_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param dag_display_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type dag_display_name_pattern: str
         :param exclude_stale:
         :type exclude_stale: bool
@@ -1571,6 +1572,8 @@
         :type order_by: List[str]
         :param is_favorite:
         :type is_favorite: bool
+        :param timetable_type:
+        :type timetable_type: List[str]
         :param _request_timeout: timeout setting for this request. If one
                                  number provided, it will be total request
                                  timeout. It can also be a pair (tuple) of
@@ -1620,6 +1623,7 @@
             dag_run_state=dag_run_state,
             order_by=order_by,
             is_favorite=is_favorite,
+            timetable_type=timetable_type,
             _request_auth=_request_auth,
             _content_type=_content_type,
             _headers=_headers,
@@ -1651,8 +1655,8 @@
         tags: Optional[List[StrictStr]] = None,
         tags_match_mode: Optional[StrictStr] = None,
         owners: Optional[List[StrictStr]] = None,
-        dag_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
-        dag_display_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
+        dag_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
+        dag_display_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         exclude_stale: Optional[StrictBool] = None,
         paused: Optional[StrictBool] = None,
         has_import_errors: Annotated[Optional[StrictBool], Field(description="Filter Dags by having import errors. Only Dags that have been successfully loaded before will be returned.")] = None,
@@ -1672,6 +1676,7 @@
         dag_run_state: Optional[List[StrictStr]] = None,
         order_by: Annotated[Optional[List[StrictStr]], Field(description="Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `dag_id, dag_display_name, next_dagrun, state, start_date, last_run_state, last_run_start_date`")] = None,
         is_favorite: Optional[StrictBool] = None,
+        timetable_type: Optional[List[StrictStr]] = None,
         _request_timeout: Union[
             None,
             Annotated[StrictFloat, Field(gt=0)],
@@ -1699,9 +1704,9 @@
         :type tags_match_mode: str
         :param owners:
         :type owners: List[str]
-        :param dag_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param dag_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type dag_id_pattern: str
-        :param dag_display_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param dag_display_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type dag_display_name_pattern: str
         :param exclude_stale:
         :type exclude_stale: bool
@@ -1741,6 +1746,8 @@
         :type order_by: List[str]
         :param is_favorite:
         :type is_favorite: bool
+        :param timetable_type:
+        :type timetable_type: List[str]
         :param _request_timeout: timeout setting for this request. If one
                                  number provided, it will be total request
                                  timeout. It can also be a pair (tuple) of
@@ -1790,6 +1797,7 @@
             dag_run_state=dag_run_state,
             order_by=order_by,
             is_favorite=is_favorite,
+            timetable_type=timetable_type,
             _request_auth=_request_auth,
             _content_type=_content_type,
             _headers=_headers,
@@ -1821,8 +1829,8 @@
         tags: Optional[List[StrictStr]] = None,
         tags_match_mode: Optional[StrictStr] = None,
         owners: Optional[List[StrictStr]] = None,
-        dag_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
-        dag_display_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
+        dag_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
+        dag_display_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         exclude_stale: Optional[StrictBool] = None,
         paused: Optional[StrictBool] = None,
         has_import_errors: Annotated[Optional[StrictBool], Field(description="Filter Dags by having import errors. Only Dags that have been successfully loaded before will be returned.")] = None,
@@ -1842,6 +1850,7 @@
         dag_run_state: Optional[List[StrictStr]] = None,
         order_by: Annotated[Optional[List[StrictStr]], Field(description="Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `dag_id, dag_display_name, next_dagrun, state, start_date, last_run_state, last_run_start_date`")] = None,
         is_favorite: Optional[StrictBool] = None,
+        timetable_type: Optional[List[StrictStr]] = None,
         _request_timeout: Union[
             None,
             Annotated[StrictFloat, Field(gt=0)],
@@ -1869,9 +1878,9 @@
         :type tags_match_mode: str
         :param owners:
         :type owners: List[str]
-        :param dag_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param dag_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type dag_id_pattern: str
-        :param dag_display_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param dag_display_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type dag_display_name_pattern: str
         :param exclude_stale:
         :type exclude_stale: bool
@@ -1911,6 +1920,8 @@
         :type order_by: List[str]
         :param is_favorite:
         :type is_favorite: bool
+        :param timetable_type:
+        :type timetable_type: List[str]
         :param _request_timeout: timeout setting for this request. If one
                                  number provided, it will be total request
                                  timeout. It can also be a pair (tuple) of
@@ -1960,6 +1971,7 @@
             dag_run_state=dag_run_state,
             order_by=order_by,
             is_favorite=is_favorite,
+            timetable_type=timetable_type,
             _request_auth=_request_auth,
             _content_type=_content_type,
             _headers=_headers,
@@ -2007,6 +2019,7 @@
         dag_run_state,
         order_by,
         is_favorite,
+        timetable_type,
         _request_auth,
         _content_type,
         _headers,
@@ -2020,6 +2033,7 @@
             'owners': 'multi',
             'dag_run_state': 'multi',
             'order_by': 'multi',
+            'timetable_type': 'multi',
         }
 
         _path_params: Dict[str, str] = {}
@@ -2209,6 +2223,10 @@
             
             _query_params.append(('is_favorite', is_favorite))
             
+        if timetable_type is not None:
+            
+            _query_params.append(('timetable_type', timetable_type))
+            
         # process the header parameters
         # process the form parameters
         # process the body parameter
@@ -2580,7 +2598,7 @@
         tags: Optional[List[StrictStr]] = None,
         tags_match_mode: Optional[StrictStr] = None,
         owners: Optional[List[StrictStr]] = None,
-        dag_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
+        dag_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         exclude_stale: Optional[StrictBool] = None,
         paused: Optional[StrictBool] = None,
         _request_timeout: Union[
@@ -2614,7 +2632,7 @@
         :type tags_match_mode: str
         :param owners:
         :type owners: List[str]
-        :param dag_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param dag_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type dag_id_pattern: str
         :param exclude_stale:
         :type exclude_stale: bool
@@ -2688,7 +2706,7 @@
         tags: Optional[List[StrictStr]] = None,
         tags_match_mode: Optional[StrictStr] = None,
         owners: Optional[List[StrictStr]] = None,
-        dag_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
+        dag_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         exclude_stale: Optional[StrictBool] = None,
         paused: Optional[StrictBool] = None,
         _request_timeout: Union[
@@ -2722,7 +2740,7 @@
         :type tags_match_mode: str
         :param owners:
         :type owners: List[str]
-        :param dag_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param dag_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type dag_id_pattern: str
         :param exclude_stale:
         :type exclude_stale: bool
@@ -2796,7 +2814,7 @@
         tags: Optional[List[StrictStr]] = None,
         tags_match_mode: Optional[StrictStr] = None,
         owners: Optional[List[StrictStr]] = None,
-        dag_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
+        dag_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         exclude_stale: Optional[StrictBool] = None,
         paused: Optional[StrictBool] = None,
         _request_timeout: Union[
@@ -2830,7 +2848,7 @@
         :type tags_match_mode: str
         :param owners:
         :type owners: List[str]
-        :param dag_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param dag_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type dag_id_pattern: str
         :param exclude_stale:
         :type exclude_stale: bool
diff --git a/airflow_client/client/api/dag_run_api.py b/airflow_client/client/api/dag_run_api.py
index ee4a852..a81f9df 100644
--- a/airflow_client/client/api/dag_run_api.py
+++ b/airflow_client/client/api/dag_run_api.py
@@ -16,7 +16,7 @@
 from typing_extensions import Annotated
 
 from datetime import datetime
-from pydantic import Field, StrictInt, StrictStr, field_validator
+from pydantic import Field, StrictFloat, StrictInt, StrictStr, field_validator
 from typing import Any, List, Optional, Union
 from typing_extensions import Annotated
 from airflow_client.client.models.asset_event_collection_response import AssetEventCollectionResponse
@@ -960,16 +960,24 @@
         end_date_gt: Optional[datetime] = None,
         end_date_lte: Optional[datetime] = None,
         end_date_lt: Optional[datetime] = None,
+        duration_gte: Optional[Union[StrictFloat, StrictInt]] = None,
+        duration_gt: Optional[Union[StrictFloat, StrictInt]] = None,
+        duration_lte: Optional[Union[StrictFloat, StrictInt]] = None,
+        duration_lt: Optional[Union[StrictFloat, StrictInt]] = None,
         updated_at_gte: Optional[datetime] = None,
         updated_at_gt: Optional[datetime] = None,
         updated_at_lte: Optional[datetime] = None,
         updated_at_lt: Optional[datetime] = None,
+        conf_contains: Optional[StrictStr] = None,
         run_type: Optional[List[StrictStr]] = None,
         state: Optional[List[StrictStr]] = None,
         dag_version: Optional[List[StrictInt]] = None,
+        bundle_version: Optional[StrictStr] = None,
         order_by: Annotated[Optional[List[StrictStr]], Field(description="Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `id, state, dag_id, run_id, logical_date, run_after, start_date, end_date, updated_at, conf, duration, dag_run_id`")] = None,
-        run_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
-        triggering_user_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
+        run_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
+        triggering_user_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
+        dag_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
+        partition_key_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         _request_timeout: Union[
             None,
             Annotated[StrictFloat, Field(gt=0)],
@@ -1025,6 +1033,14 @@
         :type end_date_lte: datetime
         :param end_date_lt:
         :type end_date_lt: datetime
+        :param duration_gte:
+        :type duration_gte: float
+        :param duration_gt:
+        :type duration_gt: float
+        :param duration_lte:
+        :type duration_lte: float
+        :param duration_lt:
+        :type duration_lt: float
         :param updated_at_gte:
         :type updated_at_gte: datetime
         :param updated_at_gt:
@@ -1033,18 +1049,26 @@
         :type updated_at_lte: datetime
         :param updated_at_lt:
         :type updated_at_lt: datetime
+        :param conf_contains:
+        :type conf_contains: str
         :param run_type:
         :type run_type: List[str]
         :param state:
         :type state: List[str]
         :param dag_version:
         :type dag_version: List[int]
+        :param bundle_version:
+        :type bundle_version: str
         :param order_by: Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `id, state, dag_id, run_id, logical_date, run_after, start_date, end_date, updated_at, conf, duration, dag_run_id`
         :type order_by: List[str]
-        :param run_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param run_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type run_id_pattern: str
-        :param triggering_user_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param triggering_user_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type triggering_user_name_pattern: str
+        :param dag_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
+        :type dag_id_pattern: str
+        :param partition_key_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
+        :type partition_key_pattern: str
         :param _request_timeout: timeout setting for this request. If one
                                  number provided, it will be total request
                                  timeout. It can also be a pair (tuple) of
@@ -1087,16 +1111,24 @@
             end_date_gt=end_date_gt,
             end_date_lte=end_date_lte,
             end_date_lt=end_date_lt,
+            duration_gte=duration_gte,
+            duration_gt=duration_gt,
+            duration_lte=duration_lte,
+            duration_lt=duration_lt,
             updated_at_gte=updated_at_gte,
             updated_at_gt=updated_at_gt,
             updated_at_lte=updated_at_lte,
             updated_at_lt=updated_at_lt,
+            conf_contains=conf_contains,
             run_type=run_type,
             state=state,
             dag_version=dag_version,
+            bundle_version=bundle_version,
             order_by=order_by,
             run_id_pattern=run_id_pattern,
             triggering_user_name_pattern=triggering_user_name_pattern,
+            dag_id_pattern=dag_id_pattern,
+            partition_key_pattern=partition_key_pattern,
             _request_auth=_request_auth,
             _content_type=_content_type,
             _headers=_headers,
@@ -1143,16 +1175,24 @@
         end_date_gt: Optional[datetime] = None,
         end_date_lte: Optional[datetime] = None,
         end_date_lt: Optional[datetime] = None,
+        duration_gte: Optional[Union[StrictFloat, StrictInt]] = None,
+        duration_gt: Optional[Union[StrictFloat, StrictInt]] = None,
+        duration_lte: Optional[Union[StrictFloat, StrictInt]] = None,
+        duration_lt: Optional[Union[StrictFloat, StrictInt]] = None,
         updated_at_gte: Optional[datetime] = None,
         updated_at_gt: Optional[datetime] = None,
         updated_at_lte: Optional[datetime] = None,
         updated_at_lt: Optional[datetime] = None,
+        conf_contains: Optional[StrictStr] = None,
         run_type: Optional[List[StrictStr]] = None,
         state: Optional[List[StrictStr]] = None,
         dag_version: Optional[List[StrictInt]] = None,
+        bundle_version: Optional[StrictStr] = None,
         order_by: Annotated[Optional[List[StrictStr]], Field(description="Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `id, state, dag_id, run_id, logical_date, run_after, start_date, end_date, updated_at, conf, duration, dag_run_id`")] = None,
-        run_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
-        triggering_user_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
+        run_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
+        triggering_user_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
+        dag_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
+        partition_key_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         _request_timeout: Union[
             None,
             Annotated[StrictFloat, Field(gt=0)],
@@ -1208,6 +1248,14 @@
         :type end_date_lte: datetime
         :param end_date_lt:
         :type end_date_lt: datetime
+        :param duration_gte:
+        :type duration_gte: float
+        :param duration_gt:
+        :type duration_gt: float
+        :param duration_lte:
+        :type duration_lte: float
+        :param duration_lt:
+        :type duration_lt: float
         :param updated_at_gte:
         :type updated_at_gte: datetime
         :param updated_at_gt:
@@ -1216,18 +1264,26 @@
         :type updated_at_lte: datetime
         :param updated_at_lt:
         :type updated_at_lt: datetime
+        :param conf_contains:
+        :type conf_contains: str
         :param run_type:
         :type run_type: List[str]
         :param state:
         :type state: List[str]
         :param dag_version:
         :type dag_version: List[int]
+        :param bundle_version:
+        :type bundle_version: str
         :param order_by: Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `id, state, dag_id, run_id, logical_date, run_after, start_date, end_date, updated_at, conf, duration, dag_run_id`
         :type order_by: List[str]
-        :param run_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param run_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type run_id_pattern: str
-        :param triggering_user_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param triggering_user_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type triggering_user_name_pattern: str
+        :param dag_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
+        :type dag_id_pattern: str
+        :param partition_key_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
+        :type partition_key_pattern: str
         :param _request_timeout: timeout setting for this request. If one
                                  number provided, it will be total request
                                  timeout. It can also be a pair (tuple) of
@@ -1270,16 +1326,24 @@
             end_date_gt=end_date_gt,
             end_date_lte=end_date_lte,
             end_date_lt=end_date_lt,
+            duration_gte=duration_gte,
+            duration_gt=duration_gt,
+            duration_lte=duration_lte,
+            duration_lt=duration_lt,
             updated_at_gte=updated_at_gte,
             updated_at_gt=updated_at_gt,
             updated_at_lte=updated_at_lte,
             updated_at_lt=updated_at_lt,
+            conf_contains=conf_contains,
             run_type=run_type,
             state=state,
             dag_version=dag_version,
+            bundle_version=bundle_version,
             order_by=order_by,
             run_id_pattern=run_id_pattern,
             triggering_user_name_pattern=triggering_user_name_pattern,
+            dag_id_pattern=dag_id_pattern,
+            partition_key_pattern=partition_key_pattern,
             _request_auth=_request_auth,
             _content_type=_content_type,
             _headers=_headers,
@@ -1326,16 +1390,24 @@
         end_date_gt: Optional[datetime] = None,
         end_date_lte: Optional[datetime] = None,
         end_date_lt: Optional[datetime] = None,
+        duration_gte: Optional[Union[StrictFloat, StrictInt]] = None,
+        duration_gt: Optional[Union[StrictFloat, StrictInt]] = None,
+        duration_lte: Optional[Union[StrictFloat, StrictInt]] = None,
+        duration_lt: Optional[Union[StrictFloat, StrictInt]] = None,
         updated_at_gte: Optional[datetime] = None,
         updated_at_gt: Optional[datetime] = None,
         updated_at_lte: Optional[datetime] = None,
         updated_at_lt: Optional[datetime] = None,
+        conf_contains: Optional[StrictStr] = None,
         run_type: Optional[List[StrictStr]] = None,
         state: Optional[List[StrictStr]] = None,
         dag_version: Optional[List[StrictInt]] = None,
+        bundle_version: Optional[StrictStr] = None,
         order_by: Annotated[Optional[List[StrictStr]], Field(description="Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `id, state, dag_id, run_id, logical_date, run_after, start_date, end_date, updated_at, conf, duration, dag_run_id`")] = None,
-        run_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
-        triggering_user_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
+        run_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
+        triggering_user_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
+        dag_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
+        partition_key_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         _request_timeout: Union[
             None,
             Annotated[StrictFloat, Field(gt=0)],
@@ -1391,6 +1463,14 @@
         :type end_date_lte: datetime
         :param end_date_lt:
         :type end_date_lt: datetime
+        :param duration_gte:
+        :type duration_gte: float
+        :param duration_gt:
+        :type duration_gt: float
+        :param duration_lte:
+        :type duration_lte: float
+        :param duration_lt:
+        :type duration_lt: float
         :param updated_at_gte:
         :type updated_at_gte: datetime
         :param updated_at_gt:
@@ -1399,18 +1479,26 @@
         :type updated_at_lte: datetime
         :param updated_at_lt:
         :type updated_at_lt: datetime
+        :param conf_contains:
+        :type conf_contains: str
         :param run_type:
         :type run_type: List[str]
         :param state:
         :type state: List[str]
         :param dag_version:
         :type dag_version: List[int]
+        :param bundle_version:
+        :type bundle_version: str
         :param order_by: Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `id, state, dag_id, run_id, logical_date, run_after, start_date, end_date, updated_at, conf, duration, dag_run_id`
         :type order_by: List[str]
-        :param run_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param run_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type run_id_pattern: str
-        :param triggering_user_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param triggering_user_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type triggering_user_name_pattern: str
+        :param dag_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
+        :type dag_id_pattern: str
+        :param partition_key_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
+        :type partition_key_pattern: str
         :param _request_timeout: timeout setting for this request. If one
                                  number provided, it will be total request
                                  timeout. It can also be a pair (tuple) of
@@ -1453,16 +1541,24 @@
             end_date_gt=end_date_gt,
             end_date_lte=end_date_lte,
             end_date_lt=end_date_lt,
+            duration_gte=duration_gte,
+            duration_gt=duration_gt,
+            duration_lte=duration_lte,
+            duration_lt=duration_lt,
             updated_at_gte=updated_at_gte,
             updated_at_gt=updated_at_gt,
             updated_at_lte=updated_at_lte,
             updated_at_lt=updated_at_lt,
+            conf_contains=conf_contains,
             run_type=run_type,
             state=state,
             dag_version=dag_version,
+            bundle_version=bundle_version,
             order_by=order_by,
             run_id_pattern=run_id_pattern,
             triggering_user_name_pattern=triggering_user_name_pattern,
+            dag_id_pattern=dag_id_pattern,
+            partition_key_pattern=partition_key_pattern,
             _request_auth=_request_auth,
             _content_type=_content_type,
             _headers=_headers,
@@ -1504,16 +1600,24 @@
         end_date_gt,
         end_date_lte,
         end_date_lt,
+        duration_gte,
+        duration_gt,
+        duration_lte,
+        duration_lt,
         updated_at_gte,
         updated_at_gt,
         updated_at_lte,
         updated_at_lt,
+        conf_contains,
         run_type,
         state,
         dag_version,
+        bundle_version,
         order_by,
         run_id_pattern,
         triggering_user_name_pattern,
+        dag_id_pattern,
+        partition_key_pattern,
         _request_auth,
         _content_type,
         _headers,
@@ -1758,6 +1862,22 @@
             else:
                 _query_params.append(('end_date_lt', end_date_lt))
             
+        if duration_gte is not None:
+            
+            _query_params.append(('duration_gte', duration_gte))
+            
+        if duration_gt is not None:
+            
+            _query_params.append(('duration_gt', duration_gt))
+            
+        if duration_lte is not None:
+            
+            _query_params.append(('duration_lte', duration_lte))
+            
+        if duration_lt is not None:
+            
+            _query_params.append(('duration_lt', duration_lt))
+            
         if updated_at_gte is not None:
             if isinstance(updated_at_gte, datetime):
                 _query_params.append(
@@ -1810,6 +1930,10 @@
             else:
                 _query_params.append(('updated_at_lt', updated_at_lt))
             
+        if conf_contains is not None:
+            
+            _query_params.append(('conf_contains', conf_contains))
+            
         if run_type is not None:
             
             _query_params.append(('run_type', run_type))
@@ -1822,6 +1946,10 @@
             
             _query_params.append(('dag_version', dag_version))
             
+        if bundle_version is not None:
+            
+            _query_params.append(('bundle_version', bundle_version))
+            
         if order_by is not None:
             
             _query_params.append(('order_by', order_by))
@@ -1834,6 +1962,14 @@
             
             _query_params.append(('triggering_user_name_pattern', triggering_user_name_pattern))
             
+        if dag_id_pattern is not None:
+            
+            _query_params.append(('dag_id_pattern', dag_id_pattern))
+            
+        if partition_key_pattern is not None:
+            
+            _query_params.append(('partition_key_pattern', partition_key_pattern))
+            
         # process the header parameters
         # process the form parameters
         # process the body parameter
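The query-parameter handling added in this hunk follows the generator's usual shape: every non-None filter is appended to `_query_params` as a `(name, value)` tuple, and `None` filters are simply omitted from the request. A minimal stand-alone sketch of that behavior (a hypothetical helper for illustration, not the generated client's actual serialization path, which also ISO-formats datetimes):

```python
from typing import Any, List, Optional, Tuple

def build_query_params(**filters: Optional[Any]) -> List[Tuple[str, Any]]:
    """Append each non-None filter as a (name, value) pair,
    mirroring the `_query_params.append(...)` calls in the diff."""
    return [(name, value) for name, value in filters.items() if value is not None]

params = build_query_params(
    duration_gte=30.0,
    conf_contains='"team": "data"',
    bundle_version=None,              # None filters are not sent at all
    partition_key_pattern="2024-%",
)
# params contains only the three non-None filters, in keyword order
```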
diff --git a/airflow_client/client/api/event_log_api.py b/airflow_client/client/api/event_log_api.py
index 93083bc..f00fc78 100644
--- a/airflow_client/client/api/event_log_api.py
+++ b/airflow_client/client/api/event_log_api.py
@@ -328,11 +328,11 @@
         included_events: Optional[List[StrictStr]] = None,
         before: Optional[datetime] = None,
         after: Optional[datetime] = None,
-        dag_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
-        task_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
-        run_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
-        owner_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
-        event_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
+        dag_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
+        task_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
+        run_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
+        owner_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
+        event_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         _request_timeout: Union[
             None,
             Annotated[StrictFloat, Field(gt=0)],
@@ -378,15 +378,15 @@
         :type before: datetime
         :param after:
         :type after: datetime
-        :param dag_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param dag_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type dag_id_pattern: str
-        :param task_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param task_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type task_id_pattern: str
-        :param run_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param run_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type run_id_pattern: str
-        :param owner_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param owner_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type owner_pattern: str
-        :param event_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param event_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type event_pattern: str
         :param _request_timeout: timeout setting for this request. If one
                                  number provided, it will be total request
@@ -470,11 +470,11 @@
         included_events: Optional[List[StrictStr]] = None,
         before: Optional[datetime] = None,
         after: Optional[datetime] = None,
-        dag_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
-        task_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
-        run_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
-        owner_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
-        event_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
+        dag_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
+        task_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
+        run_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
+        owner_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
+        event_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         _request_timeout: Union[
             None,
             Annotated[StrictFloat, Field(gt=0)],
@@ -520,15 +520,15 @@
         :type before: datetime
         :param after:
         :type after: datetime
-        :param dag_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param dag_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type dag_id_pattern: str
-        :param task_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param task_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type task_id_pattern: str
-        :param run_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param run_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type run_id_pattern: str
-        :param owner_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param owner_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type owner_pattern: str
-        :param event_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param event_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type event_pattern: str
         :param _request_timeout: timeout setting for this request. If one
                                  number provided, it will be total request
@@ -612,11 +612,11 @@
         included_events: Optional[List[StrictStr]] = None,
         before: Optional[datetime] = None,
         after: Optional[datetime] = None,
-        dag_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
-        task_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
-        run_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
-        owner_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
-        event_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
+        dag_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
+        task_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
+        run_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
+        owner_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
+        event_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         _request_timeout: Union[
             None,
             Annotated[StrictFloat, Field(gt=0)],
@@ -662,15 +662,15 @@
         :type before: datetime
         :param after:
         :type after: datetime
-        :param dag_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param dag_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type dag_id_pattern: str
-        :param task_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param task_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type task_id_pattern: str
-        :param run_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param run_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type run_id_pattern: str
-        :param owner_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param owner_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type owner_pattern: str
-        :param event_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param event_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type event_pattern: str
         :param _request_timeout: timeout setting for this request. If one
                                  number provided, it will be total request
diff --git a/airflow_client/client/api/import_error_api.py b/airflow_client/client/api/import_error_api.py
index 9f239e6..fe93a30 100644
--- a/airflow_client/client/api/import_error_api.py
+++ b/airflow_client/client/api/import_error_api.py
@@ -319,7 +319,7 @@
         limit: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
         offset: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
         order_by: Annotated[Optional[List[StrictStr]], Field(description="Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `id, timestamp, filename, bundle_name, stacktrace, import_error_id`")] = None,
-        filename_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
+        filename_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         _request_timeout: Union[
             None,
             Annotated[StrictFloat, Field(gt=0)],
@@ -343,7 +343,7 @@
         :type offset: int
         :param order_by: Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `id, timestamp, filename, bundle_name, stacktrace, import_error_id`
         :type order_by: List[str]
-        :param filename_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param filename_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type filename_pattern: str
         :param _request_timeout: timeout setting for this request. If one
                                  number provided, it will be total request
@@ -401,7 +401,7 @@
         limit: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
         offset: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
         order_by: Annotated[Optional[List[StrictStr]], Field(description="Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `id, timestamp, filename, bundle_name, stacktrace, import_error_id`")] = None,
-        filename_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
+        filename_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         _request_timeout: Union[
             None,
             Annotated[StrictFloat, Field(gt=0)],
@@ -425,7 +425,7 @@
         :type offset: int
         :param order_by: Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `id, timestamp, filename, bundle_name, stacktrace, import_error_id`
         :type order_by: List[str]
-        :param filename_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param filename_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type filename_pattern: str
         :param _request_timeout: timeout setting for this request. If one
                                  number provided, it will be total request
@@ -483,7 +483,7 @@
         limit: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
         offset: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
         order_by: Annotated[Optional[List[StrictStr]], Field(description="Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `id, timestamp, filename, bundle_name, stacktrace, import_error_id`")] = None,
-        filename_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
+        filename_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         _request_timeout: Union[
             None,
             Annotated[StrictFloat, Field(gt=0)],
@@ -507,7 +507,7 @@
         :type offset: int
         :param order_by: Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `id, timestamp, filename, bundle_name, stacktrace, import_error_id`
         :type order_by: List[str]
-        :param filename_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param filename_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type filename_pattern: str
         :param _request_timeout: timeout setting for this request. If one
                                  number provided, it will be total request
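The updated `*_pattern` descriptions above now accept `|`-separated alternatives in addition to the SQL LIKE `%` / `_` wildcards. A minimal client-side sketch of the documented semantics follows; it only approximates the server's filter for illustration (the API itself still rejects regular expressions, and this is not the server implementation):

```python
import re

def like_or_match(pattern: str, value: str) -> bool:
    """Approximate the documented pattern-filter semantics.

    Each `|`-separated alternative is a SQL LIKE expression where `%`
    matches any run of characters and `_` matches a single character.
    Illustrative sketch only, not the server-side code.
    """
    for alternative in pattern.split("|"):
        alt = alternative.strip()
        # Translate LIKE wildcards into an anchored regex; every other
        # character is escaped literally, since the API does not accept
        # regular expressions from the caller.
        rx = "".join(
            ".*" if ch == "%" else "." if ch == "_" else re.escape(ch)
            for ch in alt
        )
        if re.fullmatch(rx, value):
            return True
    return False
```

For example, `like_or_match("dag1 | dag2", "dag2")` is `True`, while `like_or_match("dag1 | dag2", "dag3")` is `False`, matching the `dag1 | dag2` example used in the descriptions.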
diff --git a/airflow_client/client/api/pool_api.py b/airflow_client/client/api/pool_api.py
index a6f3e55..8eb83a3 100644
--- a/airflow_client/client/api/pool_api.py
+++ b/airflow_client/client/api/pool_api.py
@@ -884,7 +884,7 @@
         limit: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
         offset: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
         order_by: Annotated[Optional[List[StrictStr]], Field(description="Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `id, pool, name`")] = None,
-        pool_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
+        pool_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         _request_timeout: Union[
             None,
             Annotated[StrictFloat, Field(gt=0)],
@@ -908,7 +908,7 @@
         :type offset: int
         :param order_by: Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `id, pool, name`
         :type order_by: List[str]
-        :param pool_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param pool_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type pool_name_pattern: str
         :param _request_timeout: timeout setting for this request. If one
                                  number provided, it will be total request
@@ -967,7 +967,7 @@
         limit: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
         offset: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
         order_by: Annotated[Optional[List[StrictStr]], Field(description="Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `id, pool, name`")] = None,
-        pool_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
+        pool_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         _request_timeout: Union[
             None,
             Annotated[StrictFloat, Field(gt=0)],
@@ -991,7 +991,7 @@
         :type offset: int
         :param order_by: Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `id, pool, name`
         :type order_by: List[str]
-        :param pool_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param pool_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type pool_name_pattern: str
         :param _request_timeout: timeout setting for this request. If one
                                  number provided, it will be total request
@@ -1050,7 +1050,7 @@
         limit: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
         offset: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
         order_by: Annotated[Optional[List[StrictStr]], Field(description="Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `id, pool, name`")] = None,
-        pool_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
+        pool_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         _request_timeout: Union[
             None,
             Annotated[StrictFloat, Field(gt=0)],
@@ -1074,7 +1074,7 @@
         :type offset: int
         :param order_by: Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `id, pool, name`
         :type order_by: List[str]
-        :param pool_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param pool_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type pool_name_pattern: str
         :param _request_timeout: timeout setting for this request. If one
                                  number provided, it will be total request
diff --git a/airflow_client/client/api/task_instance_api.py b/airflow_client/client/api/task_instance_api.py
index 2542b34..3e0c183 100644
--- a/airflow_client/client/api/task_instance_api.py
+++ b/airflow_client/client/api/task_instance_api.py
@@ -26,6 +26,7 @@
 from airflow_client.client.models.extra_link_collection_response import ExtraLinkCollectionResponse
 from airflow_client.client.models.hitl_detail import HITLDetail
 from airflow_client.client.models.hitl_detail_collection import HITLDetailCollection
+from airflow_client.client.models.hitl_detail_history import HITLDetailHistory
 from airflow_client.client.models.hitl_detail_response import HITLDetailResponse
 from airflow_client.client.models.patch_task_instance_body import PatchTaskInstanceBody
 from airflow_client.client.models.task_dependency_collection_response import TaskDependencyCollectionResponse
@@ -1670,23 +1671,357 @@
 
 
     @validate_call
+    def get_hitl_detail_try_detail(
+        self,
+        dag_id: StrictStr,
+        dag_run_id: StrictStr,
+        task_id: StrictStr,
+        map_index: StrictInt,
+        try_number: StrictInt,
+        _request_timeout: Union[
+            None,
+            Annotated[StrictFloat, Field(gt=0)],
+            Tuple[
+                Annotated[StrictFloat, Field(gt=0)],
+                Annotated[StrictFloat, Field(gt=0)]
+            ]
+        ] = None,
+        _request_auth: Optional[Dict[StrictStr, Any]] = None,
+        _content_type: Optional[StrictStr] = None,
+        _headers: Optional[Dict[StrictStr, Any]] = None,
+        _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0,
+    ) -> HITLDetailHistory:
+        """Get Hitl Detail Try Detail
+
+        Get a Human-in-the-loop detail of a specific task instance.
+
+        :param dag_id: (required)
+        :type dag_id: str
+        :param dag_run_id: (required)
+        :type dag_run_id: str
+        :param task_id: (required)
+        :type task_id: str
+        :param map_index: (required)
+        :type map_index: int
+        :param try_number: (required)
+        :type try_number: int
+        :param _request_timeout: timeout setting for this request. If one
+                                 number provided, it will be total request
+                                 timeout. It can also be a pair (tuple) of
+                                 (connection, read) timeouts.
+        :type _request_timeout: int, tuple(int, int), optional
+        :param _request_auth: set to override the auth_settings for a single
+                              request; this effectively ignores the
+                              authentication in the spec for a single request.
+        :type _request_auth: dict, optional
+        :param _content_type: force content-type for the request.
+        :type _content_type: str, Optional
+        :param _headers: set to override the headers for a single
+                         request; this effectively ignores the headers
+                         in the spec for a single request.
+        :type _headers: dict, optional
+        :param _host_index: set to override the host_index for a single
+                            request; this effectively ignores the host_index
+                            in the spec for a single request.
+        :type _host_index: int, optional
+        :return: Returns the result object.
+        """ # noqa: E501
+
+        _param = self._get_hitl_detail_try_detail_serialize(
+            dag_id=dag_id,
+            dag_run_id=dag_run_id,
+            task_id=task_id,
+            map_index=map_index,
+            try_number=try_number,
+            _request_auth=_request_auth,
+            _content_type=_content_type,
+            _headers=_headers,
+            _host_index=_host_index
+        )
+
+        _response_types_map: Dict[str, Optional[str]] = {
+            '200': "HITLDetailHistory",
+            '401': "HTTPExceptionResponse",
+            '403': "HTTPExceptionResponse",
+            '404': "HTTPExceptionResponse",
+            '422': "HTTPValidationError",
+        }
+        response_data = self.api_client.call_api(
+            *_param,
+            _request_timeout=_request_timeout
+        )
+        response_data.read()
+        return self.api_client.response_deserialize(
+            response_data=response_data,
+            response_types_map=_response_types_map,
+        ).data
+
+
+    @validate_call
+    def get_hitl_detail_try_detail_with_http_info(
+        self,
+        dag_id: StrictStr,
+        dag_run_id: StrictStr,
+        task_id: StrictStr,
+        map_index: StrictInt,
+        try_number: StrictInt,
+        _request_timeout: Union[
+            None,
+            Annotated[StrictFloat, Field(gt=0)],
+            Tuple[
+                Annotated[StrictFloat, Field(gt=0)],
+                Annotated[StrictFloat, Field(gt=0)]
+            ]
+        ] = None,
+        _request_auth: Optional[Dict[StrictStr, Any]] = None,
+        _content_type: Optional[StrictStr] = None,
+        _headers: Optional[Dict[StrictStr, Any]] = None,
+        _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0,
+    ) -> ApiResponse[HITLDetailHistory]:
+        """Get Hitl Detail Try Detail
+
+        Get a Human-in-the-loop detail of a specific task instance.
+
+        :param dag_id: (required)
+        :type dag_id: str
+        :param dag_run_id: (required)
+        :type dag_run_id: str
+        :param task_id: (required)
+        :type task_id: str
+        :param map_index: (required)
+        :type map_index: int
+        :param try_number: (required)
+        :type try_number: int
+        :param _request_timeout: timeout setting for this request. If one
+                                 number provided, it will be total request
+                                 timeout. It can also be a pair (tuple) of
+                                 (connection, read) timeouts.
+        :type _request_timeout: int, tuple(int, int), optional
+        :param _request_auth: set to override the auth_settings for a single
+                              request; this effectively ignores the
+                              authentication in the spec for a single request.
+        :type _request_auth: dict, optional
+        :param _content_type: force content-type for the request.
+        :type _content_type: str, Optional
+        :param _headers: set to override the headers for a single
+                         request; this effectively ignores the headers
+                         in the spec for a single request.
+        :type _headers: dict, optional
+        :param _host_index: set to override the host_index for a single
+                            request; this effectively ignores the host_index
+                            in the spec for a single request.
+        :type _host_index: int, optional
+        :return: Returns the result object.
+        """ # noqa: E501
+
+        _param = self._get_hitl_detail_try_detail_serialize(
+            dag_id=dag_id,
+            dag_run_id=dag_run_id,
+            task_id=task_id,
+            map_index=map_index,
+            try_number=try_number,
+            _request_auth=_request_auth,
+            _content_type=_content_type,
+            _headers=_headers,
+            _host_index=_host_index
+        )
+
+        _response_types_map: Dict[str, Optional[str]] = {
+            '200': "HITLDetailHistory",
+            '401': "HTTPExceptionResponse",
+            '403': "HTTPExceptionResponse",
+            '404': "HTTPExceptionResponse",
+            '422': "HTTPValidationError",
+        }
+        response_data = self.api_client.call_api(
+            *_param,
+            _request_timeout=_request_timeout
+        )
+        response_data.read()
+        return self.api_client.response_deserialize(
+            response_data=response_data,
+            response_types_map=_response_types_map,
+        )
+
+
+    @validate_call
+    def get_hitl_detail_try_detail_without_preload_content(
+        self,
+        dag_id: StrictStr,
+        dag_run_id: StrictStr,
+        task_id: StrictStr,
+        map_index: StrictInt,
+        try_number: StrictInt,
+        _request_timeout: Union[
+            None,
+            Annotated[StrictFloat, Field(gt=0)],
+            Tuple[
+                Annotated[StrictFloat, Field(gt=0)],
+                Annotated[StrictFloat, Field(gt=0)]
+            ]
+        ] = None,
+        _request_auth: Optional[Dict[StrictStr, Any]] = None,
+        _content_type: Optional[StrictStr] = None,
+        _headers: Optional[Dict[StrictStr, Any]] = None,
+        _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0,
+    ) -> RESTResponseType:
+        """Get Hitl Detail Try Detail
+
+        Get a Human-in-the-loop detail of a specific task instance.
+
+        :param dag_id: (required)
+        :type dag_id: str
+        :param dag_run_id: (required)
+        :type dag_run_id: str
+        :param task_id: (required)
+        :type task_id: str
+        :param map_index: (required)
+        :type map_index: int
+        :param try_number: (required)
+        :type try_number: int
+        :param _request_timeout: timeout setting for this request. If one
+                                 number provided, it will be total request
+                                 timeout. It can also be a pair (tuple) of
+                                 (connection, read) timeouts.
+        :type _request_timeout: int, tuple(int, int), optional
+        :param _request_auth: set to override the auth_settings for a single
+                              request; this effectively ignores the
+                              authentication in the spec for a single request.
+        :type _request_auth: dict, optional
+        :param _content_type: force content-type for the request.
+        :type _content_type: str, Optional
+        :param _headers: set to override the headers for a single
+                         request; this effectively ignores the headers
+                         in the spec for a single request.
+        :type _headers: dict, optional
+        :param _host_index: set to override the host_index for a single
+                            request; this effectively ignores the host_index
+                            in the spec for a single request.
+        :type _host_index: int, optional
+        :return: Returns the result object.
+        """ # noqa: E501
+
+        _param = self._get_hitl_detail_try_detail_serialize(
+            dag_id=dag_id,
+            dag_run_id=dag_run_id,
+            task_id=task_id,
+            map_index=map_index,
+            try_number=try_number,
+            _request_auth=_request_auth,
+            _content_type=_content_type,
+            _headers=_headers,
+            _host_index=_host_index
+        )
+
+        _response_types_map: Dict[str, Optional[str]] = {
+            '200': "HITLDetailHistory",
+            '401': "HTTPExceptionResponse",
+            '403': "HTTPExceptionResponse",
+            '404': "HTTPExceptionResponse",
+            '422': "HTTPValidationError",
+        }
+        response_data = self.api_client.call_api(
+            *_param,
+            _request_timeout=_request_timeout
+        )
+        return response_data.response
+
+
+    def _get_hitl_detail_try_detail_serialize(
+        self,
+        dag_id,
+        dag_run_id,
+        task_id,
+        map_index,
+        try_number,
+        _request_auth,
+        _content_type,
+        _headers,
+        _host_index,
+    ) -> RequestSerialized:
+
+        _host = None
+
+        _collection_formats: Dict[str, str] = {
+        }
+
+        _path_params: Dict[str, str] = {}
+        _query_params: List[Tuple[str, str]] = []
+        _header_params: Dict[str, Optional[str]] = _headers or {}
+        _form_params: List[Tuple[str, str]] = []
+        _files: Dict[
+            str, Union[str, bytes, List[str], List[bytes], List[Tuple[str, bytes]]]
+        ] = {}
+        _body_params: Optional[bytes] = None
+
+        # process the path parameters
+        if dag_id is not None:
+            _path_params['dag_id'] = dag_id
+        if dag_run_id is not None:
+            _path_params['dag_run_id'] = dag_run_id
+        if task_id is not None:
+            _path_params['task_id'] = task_id
+        if map_index is not None:
+            _path_params['map_index'] = map_index
+        if try_number is not None:
+            _path_params['try_number'] = try_number
+        # process the query parameters
+        # process the header parameters
+        # process the form parameters
+        # process the body parameter
+
+
+        # set the HTTP header `Accept`
+        if 'Accept' not in _header_params:
+            _header_params['Accept'] = self.api_client.select_header_accept(
+                [
+                    'application/json'
+                ]
+            )
+
+
+        # authentication setting
+        _auth_settings: List[str] = [
+            'OAuth2PasswordBearer', 
+            'HTTPBearer'
+        ]
+
+        return self.api_client.param_serialize(
+            method='GET',
+            resource_path='/api/v2/dags/{dag_id}/dagRuns/{dag_run_id}/taskInstances/{task_id}/{map_index}/hitlDetails/tries/{try_number}',
+            path_params=_path_params,
+            query_params=_query_params,
+            header_params=_header_params,
+            body=_body_params,
+            post_params=_form_params,
+            files=_files,
+            auth_settings=_auth_settings,
+            collection_formats=_collection_formats,
+            _host=_host,
+            _request_auth=_request_auth
+        )
+
+
+
+
+    @validate_call
     def get_hitl_details(
         self,
         dag_id: StrictStr,
         dag_run_id: StrictStr,
         limit: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
         offset: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
-        order_by: Annotated[Optional[List[StrictStr]], Field(description="Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `ti_id, subject, responded_at, created_at, responded_by_user_id, responded_by_user_name, dag_id, run_id, run_after, rendered_map_index, task_instance_operator, task_instance_state`")] = None,
-        dag_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
+        order_by: Annotated[Optional[List[StrictStr]], Field(description="Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `ti_id, subject, responded_at, created_at, responded_by_user_id, responded_by_user_name, dag_id, run_id, task_display_name, run_after, rendered_map_index, task_instance_operator, task_instance_state`")] = None,
+        dag_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         task_id: Optional[StrictStr] = None,
-        task_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
+        task_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         map_index: Optional[StrictInt] = None,
         state: Optional[List[StrictStr]] = None,
         response_received: Optional[StrictBool] = None,
         responded_by_user_id: Optional[List[StrictStr]] = None,
         responded_by_user_name: Optional[List[StrictStr]] = None,
-        subject_search: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
-        body_search: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
+        subject_search: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
+        body_search: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         created_at_gte: Optional[datetime] = None,
         created_at_gt: Optional[datetime] = None,
         created_at_lte: Optional[datetime] = None,
@@ -1716,13 +2051,13 @@
         :type limit: int
         :param offset:
         :type offset: int
-        :param order_by: Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `ti_id, subject, responded_at, created_at, responded_by_user_id, responded_by_user_name, dag_id, run_id, run_after, rendered_map_index, task_instance_operator, task_instance_state`
+        :param order_by: Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `ti_id, subject, responded_at, created_at, responded_by_user_id, responded_by_user_name, dag_id, run_id, task_display_name, run_after, rendered_map_index, task_instance_operator, task_instance_state`
         :type order_by: List[str]
-        :param dag_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param dag_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type dag_id_pattern: str
         :param task_id:
         :type task_id: str
-        :param task_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param task_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type task_id_pattern: str
         :param map_index:
         :type map_index: int
@@ -1734,9 +2069,9 @@
         :type responded_by_user_id: List[str]
         :param responded_by_user_name:
         :type responded_by_user_name: List[str]
-        :param subject_search: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param subject_search: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type subject_search: str
-        :param body_search: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param body_search: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type body_search: str
         :param created_at_gte:
         :type created_at_gte: datetime
@@ -1818,17 +2153,17 @@
         dag_run_id: StrictStr,
         limit: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
         offset: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
-        order_by: Annotated[Optional[List[StrictStr]], Field(description="Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `ti_id, subject, responded_at, created_at, responded_by_user_id, responded_by_user_name, dag_id, run_id, run_after, rendered_map_index, task_instance_operator, task_instance_state`")] = None,
-        dag_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
+        order_by: Annotated[Optional[List[StrictStr]], Field(description="Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `ti_id, subject, responded_at, created_at, responded_by_user_id, responded_by_user_name, dag_id, run_id, task_display_name, run_after, rendered_map_index, task_instance_operator, task_instance_state`")] = None,
+        dag_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         task_id: Optional[StrictStr] = None,
-        task_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
+        task_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         map_index: Optional[StrictInt] = None,
         state: Optional[List[StrictStr]] = None,
         response_received: Optional[StrictBool] = None,
         responded_by_user_id: Optional[List[StrictStr]] = None,
         responded_by_user_name: Optional[List[StrictStr]] = None,
-        subject_search: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
-        body_search: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
+        subject_search: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
+        body_search: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         created_at_gte: Optional[datetime] = None,
         created_at_gt: Optional[datetime] = None,
         created_at_lte: Optional[datetime] = None,
@@ -1858,13 +2193,13 @@
         :type limit: int
         :param offset:
         :type offset: int
-        :param order_by: Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `ti_id, subject, responded_at, created_at, responded_by_user_id, responded_by_user_name, dag_id, run_id, run_after, rendered_map_index, task_instance_operator, task_instance_state`
+        :param order_by: Attributes to order by, multi-criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `ti_id, subject, responded_at, created_at, responded_by_user_id, responded_by_user_name, dag_id, run_id, task_display_name, run_after, rendered_map_index, task_instance_operator, task_instance_state`
         :type order_by: List[str]
-        :param dag_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param dag_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type dag_id_pattern: str
         :param task_id:
         :type task_id: str
-        :param task_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param task_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type task_id_pattern: str
         :param map_index:
         :type map_index: int
@@ -1876,9 +2211,9 @@
         :type responded_by_user_id: List[str]
         :param responded_by_user_name:
         :type responded_by_user_name: List[str]
-        :param subject_search: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param subject_search: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type subject_search: str
-        :param body_search: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param body_search: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type body_search: str
         :param created_at_gte:
         :type created_at_gte: datetime
@@ -1960,17 +2295,17 @@
         dag_run_id: StrictStr,
         limit: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
         offset: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
-        order_by: Annotated[Optional[List[StrictStr]], Field(description="Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `ti_id, subject, responded_at, created_at, responded_by_user_id, responded_by_user_name, dag_id, run_id, run_after, rendered_map_index, task_instance_operator, task_instance_state`")] = None,
-        dag_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
+        order_by: Annotated[Optional[List[StrictStr]], Field(description="Attributes to order by, multi-criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `ti_id, subject, responded_at, created_at, responded_by_user_id, responded_by_user_name, dag_id, run_id, task_display_name, run_after, rendered_map_index, task_instance_operator, task_instance_state`")] = None,
+        dag_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         task_id: Optional[StrictStr] = None,
-        task_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
+        task_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         map_index: Optional[StrictInt] = None,
         state: Optional[List[StrictStr]] = None,
         response_received: Optional[StrictBool] = None,
         responded_by_user_id: Optional[List[StrictStr]] = None,
         responded_by_user_name: Optional[List[StrictStr]] = None,
-        subject_search: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
-        body_search: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
+        subject_search: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
+        body_search: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         created_at_gte: Optional[datetime] = None,
         created_at_gt: Optional[datetime] = None,
         created_at_lte: Optional[datetime] = None,
@@ -2000,13 +2335,13 @@
         :type limit: int
         :param offset:
         :type offset: int
-        :param order_by: Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `ti_id, subject, responded_at, created_at, responded_by_user_id, responded_by_user_name, dag_id, run_id, run_after, rendered_map_index, task_instance_operator, task_instance_state`
+        :param order_by: Attributes to order by, multi-criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `ti_id, subject, responded_at, created_at, responded_by_user_id, responded_by_user_name, dag_id, run_id, task_display_name, run_after, rendered_map_index, task_instance_operator, task_instance_state`
         :type order_by: List[str]
-        :param dag_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param dag_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type dag_id_pattern: str
         :param task_id:
         :type task_id: str
-        :param task_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param task_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type task_id_pattern: str
         :param map_index:
         :type map_index: int
@@ -2018,9 +2353,9 @@
         :type responded_by_user_id: List[str]
         :param responded_by_user_name:
         :type responded_by_user_name: List[str]
-        :param subject_search: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param subject_search: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type subject_search: str
-        :param body_search: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param body_search: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type body_search: str
         :param created_at_gte:
         :type created_at_gte: datetime
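The `order_by` parameter documented above accepts a list of attribute names, each optionally prefixed with `-` for descending order, with later entries acting as tie-breakers. A minimal sketch of how such a list can be interpreted (illustrative only; the actual sorting happens server-side):

```python
from typing import List, Tuple


def parse_order_by(tokens: List[str]) -> List[Tuple[str, str]]:
    """Turn an order_by list into (attribute, direction) pairs.

    A leading `-` on a token requests descending order; tokens are
    applied in order, so later ones break ties among earlier ones.
    """
    parsed = []
    for token in tokens:
        if token.startswith("-"):
            parsed.append((token[1:], "desc"))
        else:
            parsed.append((token, "asc"))
    return parsed
```

For example, `["-created_at", "dag_id"]` sorts newest-first, with `dag_id` ascending as the tie-breaker.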
@@ -3668,11 +4003,14 @@
         duration_lt: Optional[Union[StrictFloat, StrictInt]] = None,
         state: Optional[List[StrictStr]] = None,
         pool: Optional[List[StrictStr]] = None,
+        pool_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         queue: Optional[List[StrictStr]] = None,
+        queue_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         executor: Optional[List[StrictStr]] = None,
         version_number: Optional[List[StrictInt]] = None,
         try_number: Optional[List[StrictInt]] = None,
         operator: Optional[List[StrictStr]] = None,
+        operator_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         map_index: Optional[List[StrictInt]] = None,
         limit: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
         offset: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
@@ -3752,8 +4090,12 @@
         :type state: List[str]
         :param pool:
         :type pool: List[str]
+        :param pool_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
+        :type pool_name_pattern: str
         :param queue:
         :type queue: List[str]
+        :param queue_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
+        :type queue_name_pattern: str
         :param executor:
         :type executor: List[str]
         :param version_number:
@@ -3762,6 +4104,8 @@
         :type try_number: List[int]
         :param operator:
         :type operator: List[str]
+        :param operator_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
+        :type operator_name_pattern: str
         :param map_index:
         :type map_index: List[int]
         :param limit:
@@ -3822,11 +4166,14 @@
             duration_lt=duration_lt,
             state=state,
             pool=pool,
+            pool_name_pattern=pool_name_pattern,
             queue=queue,
+            queue_name_pattern=queue_name_pattern,
             executor=executor,
             version_number=version_number,
             try_number=try_number,
             operator=operator,
+            operator_name_pattern=operator_name_pattern,
             map_index=map_index,
             limit=limit,
             offset=offset,
@@ -3887,11 +4234,14 @@
         duration_lt: Optional[Union[StrictFloat, StrictInt]] = None,
         state: Optional[List[StrictStr]] = None,
         pool: Optional[List[StrictStr]] = None,
+        pool_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         queue: Optional[List[StrictStr]] = None,
+        queue_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         executor: Optional[List[StrictStr]] = None,
         version_number: Optional[List[StrictInt]] = None,
         try_number: Optional[List[StrictInt]] = None,
         operator: Optional[List[StrictStr]] = None,
+        operator_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         map_index: Optional[List[StrictInt]] = None,
         limit: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
         offset: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
@@ -3971,8 +4321,12 @@
         :type state: List[str]
         :param pool:
         :type pool: List[str]
+        :param pool_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
+        :type pool_name_pattern: str
         :param queue:
         :type queue: List[str]
+        :param queue_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
+        :type queue_name_pattern: str
         :param executor:
         :type executor: List[str]
         :param version_number:
@@ -3981,6 +4335,8 @@
         :type try_number: List[int]
         :param operator:
         :type operator: List[str]
+        :param operator_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
+        :type operator_name_pattern: str
         :param map_index:
         :type map_index: List[int]
         :param limit:
@@ -4041,11 +4397,14 @@
             duration_lt=duration_lt,
             state=state,
             pool=pool,
+            pool_name_pattern=pool_name_pattern,
             queue=queue,
+            queue_name_pattern=queue_name_pattern,
             executor=executor,
             version_number=version_number,
             try_number=try_number,
             operator=operator,
+            operator_name_pattern=operator_name_pattern,
             map_index=map_index,
             limit=limit,
             offset=offset,
@@ -4106,11 +4465,14 @@
         duration_lt: Optional[Union[StrictFloat, StrictInt]] = None,
         state: Optional[List[StrictStr]] = None,
         pool: Optional[List[StrictStr]] = None,
+        pool_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         queue: Optional[List[StrictStr]] = None,
+        queue_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         executor: Optional[List[StrictStr]] = None,
         version_number: Optional[List[StrictInt]] = None,
         try_number: Optional[List[StrictInt]] = None,
         operator: Optional[List[StrictStr]] = None,
+        operator_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         map_index: Optional[List[StrictInt]] = None,
         limit: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
         offset: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
@@ -4190,8 +4552,12 @@
         :type state: List[str]
         :param pool:
         :type pool: List[str]
+        :param pool_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
+        :type pool_name_pattern: str
         :param queue:
         :type queue: List[str]
+        :param queue_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
+        :type queue_name_pattern: str
         :param executor:
         :type executor: List[str]
         :param version_number:
@@ -4200,6 +4566,8 @@
         :type try_number: List[int]
         :param operator:
         :type operator: List[str]
+        :param operator_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
+        :type operator_name_pattern: str
         :param map_index:
         :type map_index: List[int]
         :param limit:
@@ -4260,11 +4628,14 @@
             duration_lt=duration_lt,
             state=state,
             pool=pool,
+            pool_name_pattern=pool_name_pattern,
             queue=queue,
+            queue_name_pattern=queue_name_pattern,
             executor=executor,
             version_number=version_number,
             try_number=try_number,
             operator=operator,
+            operator_name_pattern=operator_name_pattern,
             map_index=map_index,
             limit=limit,
             offset=offset,
@@ -4320,11 +4691,14 @@
         duration_lt,
         state,
         pool,
+        pool_name_pattern,
         queue,
+        queue_name_pattern,
         executor,
         version_number,
         try_number,
         operator,
+        operator_name_pattern,
         map_index,
         limit,
         offset,
@@ -4650,10 +5024,18 @@
             
             _query_params.append(('pool', pool))
             
+        if pool_name_pattern is not None:
+            
+            _query_params.append(('pool_name_pattern', pool_name_pattern))
+            
         if queue is not None:
             
             _query_params.append(('queue', queue))
             
+        if queue_name_pattern is not None:
+            
+            _query_params.append(('queue_name_pattern', queue_name_pattern))
+            
         if executor is not None:
             
             _query_params.append(('executor', executor))
@@ -4670,6 +5052,10 @@
             
             _query_params.append(('operator', operator))
             
+        if operator_name_pattern is not None:
+            
+            _query_params.append(('operator_name_pattern', operator_name_pattern))
+            
         if map_index is not None:
             
             _query_params.append(('map_index', map_index))
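The `*_pattern` query parameters added above all share the same semantics: a SQL LIKE expression (`%` matches any run of characters, `_` any single character), with several alternatives ORed together via the pipe `|`. A minimal Python sketch of that matching logic (an illustration of the documented behavior, not the server's actual implementation; the whitespace trimming around `|` is an assumption):

```python
import re


def like_match(value: str, pattern: str) -> bool:
    """Return True if `value` matches a SQL LIKE `pattern`."""
    # Escape any regex metacharacters, then translate the LIKE
    # wildcards: `%` -> `.*`, `_` -> `.` (neither is touched by
    # re.escape on Python 3.7+).
    regex = re.escape(pattern).replace("%", ".*").replace("_", ".")
    return re.fullmatch(regex, value) is not None


def pattern_filter_match(value: str, expression: str) -> bool:
    """Emulate a `*_pattern` filter: `|` ORs several LIKE patterns."""
    return any(like_match(value, alt.strip()) for alt in expression.split("|"))
```

So a filter such as `dag_id_pattern="etl_% | ml_%"` would match both `etl_daily` and `ml_train`, while plain `report` would be excluded.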
@@ -6355,15 +6741,20 @@
         duration_gt: Optional[Union[StrictFloat, StrictInt]] = None,
         duration_lte: Optional[Union[StrictFloat, StrictInt]] = None,
         duration_lt: Optional[Union[StrictFloat, StrictInt]] = None,
-        task_display_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
+        task_display_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         task_group_id: Annotated[Optional[StrictStr], Field(description="Filter by exact task group ID. Returns all tasks within the specified task group.")] = None,
+        dag_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
+        run_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         state: Optional[List[StrictStr]] = None,
         pool: Optional[List[StrictStr]] = None,
+        pool_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         queue: Optional[List[StrictStr]] = None,
+        queue_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         executor: Optional[List[StrictStr]] = None,
         version_number: Optional[List[StrictInt]] = None,
         try_number: Optional[List[StrictInt]] = None,
         operator: Optional[List[StrictStr]] = None,
+        operator_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         map_index: Optional[List[StrictInt]] = None,
         limit: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
         offset: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
@@ -6439,16 +6830,24 @@
         :type duration_lte: float
         :param duration_lt:
         :type duration_lt: float
-        :param task_display_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param task_display_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type task_display_name_pattern: str
         :param task_group_id: Filter by exact task group ID. Returns all tasks within the specified task group.
         :type task_group_id: str
+        :param dag_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
+        :type dag_id_pattern: str
+        :param run_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
+        :type run_id_pattern: str
         :param state:
         :type state: List[str]
         :param pool:
         :type pool: List[str]
+        :param pool_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
+        :type pool_name_pattern: str
         :param queue:
         :type queue: List[str]
+        :param queue_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
+        :type queue_name_pattern: str
         :param executor:
         :type executor: List[str]
         :param version_number:
@@ -6457,6 +6856,8 @@
         :type try_number: List[int]
         :param operator:
         :type operator: List[str]
+        :param operator_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
+        :type operator_name_pattern: str
         :param map_index:
         :type map_index: List[int]
         :param limit:
@@ -6517,13 +6918,18 @@
             duration_lt=duration_lt,
             task_display_name_pattern=task_display_name_pattern,
             task_group_id=task_group_id,
+            dag_id_pattern=dag_id_pattern,
+            run_id_pattern=run_id_pattern,
             state=state,
             pool=pool,
+            pool_name_pattern=pool_name_pattern,
             queue=queue,
+            queue_name_pattern=queue_name_pattern,
             executor=executor,
             version_number=version_number,
             try_number=try_number,
             operator=operator,
+            operator_name_pattern=operator_name_pattern,
             map_index=map_index,
             limit=limit,
             offset=offset,
@@ -6536,6 +6942,7 @@
 
         _response_types_map: Dict[str, Optional[str]] = {
             '200': "TaskInstanceCollectionResponse",
+            '400': "HTTPExceptionResponse",
             '401': "HTTPExceptionResponse",
             '403': "HTTPExceptionResponse",
             '404': "HTTPExceptionResponse",
@@ -6582,15 +6989,20 @@
         duration_gt: Optional[Union[StrictFloat, StrictInt]] = None,
         duration_lte: Optional[Union[StrictFloat, StrictInt]] = None,
         duration_lt: Optional[Union[StrictFloat, StrictInt]] = None,
-        task_display_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
+        task_display_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         task_group_id: Annotated[Optional[StrictStr], Field(description="Filter by exact task group ID. Returns all tasks within the specified task group.")] = None,
+        dag_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
+        run_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         state: Optional[List[StrictStr]] = None,
         pool: Optional[List[StrictStr]] = None,
+        pool_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         queue: Optional[List[StrictStr]] = None,
+        queue_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         executor: Optional[List[StrictStr]] = None,
         version_number: Optional[List[StrictInt]] = None,
         try_number: Optional[List[StrictInt]] = None,
         operator: Optional[List[StrictStr]] = None,
+        operator_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         map_index: Optional[List[StrictInt]] = None,
         limit: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
         offset: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
@@ -6666,16 +7078,24 @@
         :type duration_lte: float
         :param duration_lt:
         :type duration_lt: float
-        :param task_display_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param task_display_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type task_display_name_pattern: str
         :param task_group_id: Filter by exact task group ID. Returns all tasks within the specified task group.
         :type task_group_id: str
+        :param dag_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
+        :type dag_id_pattern: str
+        :param run_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
+        :type run_id_pattern: str
         :param state:
         :type state: List[str]
         :param pool:
         :type pool: List[str]
+        :param pool_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
+        :type pool_name_pattern: str
         :param queue:
         :type queue: List[str]
+        :param queue_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
+        :type queue_name_pattern: str
         :param executor:
         :type executor: List[str]
         :param version_number:
@@ -6684,6 +7104,8 @@
         :type try_number: List[int]
         :param operator:
         :type operator: List[str]
+        :param operator_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
+        :type operator_name_pattern: str
         :param map_index:
         :type map_index: List[int]
         :param limit:
@@ -6744,13 +7166,18 @@
             duration_lt=duration_lt,
             task_display_name_pattern=task_display_name_pattern,
             task_group_id=task_group_id,
+            dag_id_pattern=dag_id_pattern,
+            run_id_pattern=run_id_pattern,
             state=state,
             pool=pool,
+            pool_name_pattern=pool_name_pattern,
             queue=queue,
+            queue_name_pattern=queue_name_pattern,
             executor=executor,
             version_number=version_number,
             try_number=try_number,
             operator=operator,
+            operator_name_pattern=operator_name_pattern,
             map_index=map_index,
             limit=limit,
             offset=offset,
@@ -6763,6 +7190,7 @@
 
         _response_types_map: Dict[str, Optional[str]] = {
             '200': "TaskInstanceCollectionResponse",
+            '400': "HTTPExceptionResponse",
             '401': "HTTPExceptionResponse",
             '403': "HTTPExceptionResponse",
             '404': "HTTPExceptionResponse",
@@ -6809,15 +7237,20 @@
         duration_gt: Optional[Union[StrictFloat, StrictInt]] = None,
         duration_lte: Optional[Union[StrictFloat, StrictInt]] = None,
         duration_lt: Optional[Union[StrictFloat, StrictInt]] = None,
-        task_display_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
+        task_display_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         task_group_id: Annotated[Optional[StrictStr], Field(description="Filter by exact task group ID. Returns all tasks within the specified task group.")] = None,
+        dag_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
+        run_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         state: Optional[List[StrictStr]] = None,
         pool: Optional[List[StrictStr]] = None,
+        pool_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         queue: Optional[List[StrictStr]] = None,
+        queue_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         executor: Optional[List[StrictStr]] = None,
         version_number: Optional[List[StrictInt]] = None,
         try_number: Optional[List[StrictInt]] = None,
         operator: Optional[List[StrictStr]] = None,
+        operator_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         map_index: Optional[List[StrictInt]] = None,
         limit: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
         offset: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
@@ -6893,16 +7326,24 @@
         :type duration_lte: float
         :param duration_lt:
         :type duration_lt: float
-        :param task_display_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param task_display_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type task_display_name_pattern: str
         :param task_group_id: Filter by exact task group ID. Returns all tasks within the specified task group.
         :type task_group_id: str
+        :param dag_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
+        :type dag_id_pattern: str
+        :param run_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
+        :type run_id_pattern: str
         :param state:
         :type state: List[str]
         :param pool:
         :type pool: List[str]
+        :param pool_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
+        :type pool_name_pattern: str
         :param queue:
         :type queue: List[str]
+        :param queue_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
+        :type queue_name_pattern: str
         :param executor:
         :type executor: List[str]
         :param version_number:
@@ -6911,6 +7352,8 @@
         :type try_number: List[int]
         :param operator:
         :type operator: List[str]
+        :param operator_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
+        :type operator_name_pattern: str
         :param map_index:
         :type map_index: List[int]
         :param limit:
@@ -6971,13 +7414,18 @@
             duration_lt=duration_lt,
             task_display_name_pattern=task_display_name_pattern,
             task_group_id=task_group_id,
+            dag_id_pattern=dag_id_pattern,
+            run_id_pattern=run_id_pattern,
             state=state,
             pool=pool,
+            pool_name_pattern=pool_name_pattern,
             queue=queue,
+            queue_name_pattern=queue_name_pattern,
             executor=executor,
             version_number=version_number,
             try_number=try_number,
             operator=operator,
+            operator_name_pattern=operator_name_pattern,
             map_index=map_index,
             limit=limit,
             offset=offset,
@@ -6990,6 +7438,7 @@
 
         _response_types_map: Dict[str, Optional[str]] = {
             '200': "TaskInstanceCollectionResponse",
+            '400': "HTTPExceptionResponse",
             '401': "HTTPExceptionResponse",
             '403': "HTTPExceptionResponse",
             '404': "HTTPExceptionResponse",
@@ -7033,13 +7482,18 @@
         duration_lt,
         task_display_name_pattern,
         task_group_id,
+        dag_id_pattern,
+        run_id_pattern,
         state,
         pool,
+        pool_name_pattern,
         queue,
+        queue_name_pattern,
         executor,
         version_number,
         try_number,
         operator,
+        operator_name_pattern,
         map_index,
         limit,
         offset,
@@ -7367,6 +7821,14 @@
             
             _query_params.append(('task_group_id', task_group_id))
             
+        if dag_id_pattern is not None:
+            
+            _query_params.append(('dag_id_pattern', dag_id_pattern))
+            
+        if run_id_pattern is not None:
+            
+            _query_params.append(('run_id_pattern', run_id_pattern))
+            
         if state is not None:
             
             _query_params.append(('state', state))
@@ -7375,10 +7837,18 @@
             
             _query_params.append(('pool', pool))
             
+        if pool_name_pattern is not None:
+            
+            _query_params.append(('pool_name_pattern', pool_name_pattern))
+            
         if queue is not None:
             
             _query_params.append(('queue', queue))
             
+        if queue_name_pattern is not None:
+            
+            _query_params.append(('queue_name_pattern', queue_name_pattern))
+            
         if executor is not None:
             
             _query_params.append(('executor', executor))
@@ -7395,6 +7865,10 @@
             
             _query_params.append(('operator', operator))
             
+        if operator_name_pattern is not None:
+            
+            _query_params.append(('operator_name_pattern', operator_name_pattern))
+            
         if map_index is not None:
             
             _query_params.append(('map_index', map_index))
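The new `*_pattern` filters added above all share the matching rules stated in their descriptions: SQL LIKE wildcards (`%`, `_`) plus a pipe `|` separator for OR logic, with no regular-expression support. The matching itself happens server-side; the helper below is purely an illustrative sketch of those semantics, not part of the generated client:

```python
import re

def like_matches(pattern: str, value: str) -> bool:
    """Emulate the documented `*_pattern` semantics: a SQL LIKE
    expression where `%` matches any run of characters and `_`
    matches a single character, with several alternatives joined
    by the pipe `|` operator for OR logic."""
    for alternative in pattern.split("|"):
        alternative = alternative.strip()
        # Translate LIKE wildcards into an anchored regex;
        # every other character is matched literally.
        regex = "".join(
            ".*" if ch == "%" else "." if ch == "_" else re.escape(ch)
            for ch in alternative
        )
        if re.fullmatch(regex, value):
            return True
    return False
```

For example, `dag_id_pattern="dag1 | dag2"` selects exactly those two DAG IDs, while `pool_name_pattern="%customer_%"` selects any pool whose name contains `customer` followed by at least one character.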
@@ -9310,6 +9784,7 @@
             '401': "HTTPExceptionResponse",
             '403': "HTTPExceptionResponse",
             '404': "HTTPExceptionResponse",
+            '409': "HTTPExceptionResponse",
             '422': "HTTPValidationError",
         }
         response_data = self.api_client.call_api(
@@ -9385,6 +9860,7 @@
             '401': "HTTPExceptionResponse",
             '403': "HTTPExceptionResponse",
             '404': "HTTPExceptionResponse",
+            '409': "HTTPExceptionResponse",
             '422': "HTTPValidationError",
         }
         response_data = self.api_client.call_api(
@@ -9460,6 +9936,7 @@
             '401': "HTTPExceptionResponse",
             '403': "HTTPExceptionResponse",
             '404': "HTTPExceptionResponse",
+            '409': "HTTPExceptionResponse",
             '422': "HTTPValidationError",
         }
         response_data = self.api_client.call_api(
diff --git a/airflow_client/client/api/variable_api.py b/airflow_client/client/api/variable_api.py
index 8bd0e98..3b750c7 100644
--- a/airflow_client/client/api/variable_api.py
+++ b/airflow_client/client/api/variable_api.py
@@ -879,8 +879,8 @@
         self,
         limit: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
         offset: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
-        order_by: Annotated[Optional[List[StrictStr]], Field(description="Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `key, id, _val, description, is_encrypted`")] = None,
-        variable_key_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
+        order_by: Annotated[Optional[List[StrictStr]], Field(description="Attributes to order by; multi-criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `key, id, _val, description, is_encrypted, team_name`")] = None,
+        variable_key_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         _request_timeout: Union[
             None,
             Annotated[StrictFloat, Field(gt=0)],
@@ -902,9 +902,9 @@
         :type limit: int
         :param offset:
         :type offset: int
-        :param order_by: Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `key, id, _val, description, is_encrypted`
+        :param order_by: Attributes to order by; multi-criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `key, id, _val, description, is_encrypted, team_name`
         :type order_by: List[str]
-        :param variable_key_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param variable_key_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type variable_key_pattern: str
         :param _request_timeout: timeout setting for this request. If one
                                  number provided, it will be total request
@@ -961,8 +961,8 @@
         self,
         limit: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
         offset: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
-        order_by: Annotated[Optional[List[StrictStr]], Field(description="Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `key, id, _val, description, is_encrypted`")] = None,
-        variable_key_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
+        order_by: Annotated[Optional[List[StrictStr]], Field(description="Attributes to order by; multi-criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `key, id, _val, description, is_encrypted, team_name`")] = None,
+        variable_key_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         _request_timeout: Union[
             None,
             Annotated[StrictFloat, Field(gt=0)],
@@ -984,9 +984,9 @@
         :type limit: int
         :param offset:
         :type offset: int
-        :param order_by: Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `key, id, _val, description, is_encrypted`
+        :param order_by: Attributes to order by; multi-criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `key, id, _val, description, is_encrypted, team_name`
         :type order_by: List[str]
-        :param variable_key_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param variable_key_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type variable_key_pattern: str
         :param _request_timeout: timeout setting for this request. If one
                                  number provided, it will be total request
@@ -1043,8 +1043,8 @@
         self,
         limit: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
         offset: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
-        order_by: Annotated[Optional[List[StrictStr]], Field(description="Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `key, id, _val, description, is_encrypted`")] = None,
-        variable_key_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
+        order_by: Annotated[Optional[List[StrictStr]], Field(description="Attributes to order by; multi-criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `key, id, _val, description, is_encrypted, team_name`")] = None,
+        variable_key_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         _request_timeout: Union[
             None,
             Annotated[StrictFloat, Field(gt=0)],
@@ -1066,9 +1066,9 @@
         :type limit: int
         :param offset:
         :type offset: int
-        :param order_by: Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `key, id, _val, description, is_encrypted`
+        :param order_by: Attributes to order by; multi-criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `key, id, _val, description, is_encrypted, team_name`
         :type order_by: List[str]
-        :param variable_key_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param variable_key_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type variable_key_pattern: str
         :param _request_timeout: timeout setting for this request. If one
                                  number provided, it will be total request
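The widened `order_by` (which now accepts `team_name`) and the OR-capable `variable_key_pattern` are both plain query parameters. A small sketch of how the generated serializer assembles them, mirroring the `if value is not None: _query_params.append(...)` pattern used throughout these `_serialize` methods (the helper name is illustrative, not part of the client):

```python
from typing import List, Optional, Tuple

def serialize_list_variables_params(
    limit: Optional[int] = None,
    offset: Optional[int] = None,
    order_by: Optional[List[str]] = None,
    variable_key_pattern: Optional[str] = None,
) -> List[Tuple[str, object]]:
    """Every argument that is not None becomes one query-parameter
    tuple, in declaration order, exactly as the generated
    serializers do."""
    params: List[Tuple[str, object]] = []
    for name, value in (
        ("limit", limit),
        ("offset", offset),
        ("order_by", order_by),
        ("variable_key_pattern", variable_key_pattern),
    ):
        if value is not None:
            params.append((name, value))
    return params
```

So a request for team-sorted variables whose keys start with `db_` or `api_` would carry `order_by=["-team_name"]` and `variable_key_pattern="db_% | api_%"`.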
diff --git a/airflow_client/client/api/x_com_api.py b/airflow_client/client/api/x_com_api.py
index fe49e7f..3af5fb3 100644
--- a/airflow_client/client/api/x_com_api.py
+++ b/airflow_client/client/api/x_com_api.py
@@ -379,6 +379,345 @@
 
 
     @validate_call
+    def delete_xcom_entry(
+        self,
+        dag_id: StrictStr,
+        task_id: StrictStr,
+        dag_run_id: StrictStr,
+        xcom_key: StrictStr,
+        map_index: Optional[Annotated[int, Field(strict=True, ge=-1)]] = None,
+        _request_timeout: Union[
+            None,
+            Annotated[StrictFloat, Field(gt=0)],
+            Tuple[
+                Annotated[StrictFloat, Field(gt=0)],
+                Annotated[StrictFloat, Field(gt=0)]
+            ]
+        ] = None,
+        _request_auth: Optional[Dict[StrictStr, Any]] = None,
+        _content_type: Optional[StrictStr] = None,
+        _headers: Optional[Dict[StrictStr, Any]] = None,
+        _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0,
+    ) -> None:
+        """Delete Xcom Entry
+
+        Delete an XCom entry.
+
+        :param dag_id: (required)
+        :type dag_id: str
+        :param task_id: (required)
+        :type task_id: str
+        :param dag_run_id: (required)
+        :type dag_run_id: str
+        :param xcom_key: (required)
+        :type xcom_key: str
+        :param map_index:
+        :type map_index: int
+        :param _request_timeout: timeout setting for this request. If one
+                                 number provided, it will be total request
+                                 timeout. It can also be a pair (tuple) of
+                                 (connection, read) timeouts.
+        :type _request_timeout: int, tuple(int, int), optional
+        :param _request_auth: set to override the auth_settings for a single
+                              request; this effectively ignores the
+                              authentication in the spec for a single request.
+        :type _request_auth: dict, optional
+        :param _content_type: force content-type for the request.
+        :type _content_type: str, Optional
+        :param _headers: set to override the headers for a single
+                         request; this effectively ignores the headers
+                         in the spec for a single request.
+        :type _headers: dict, optional
+        :param _host_index: set to override the host_index for a single
+                            request; this effectively ignores the host_index
+                            in the spec for a single request.
+        :type _host_index: int, optional
+        :return: Returns the result object.
+        """ # noqa: E501
+
+        _param = self._delete_xcom_entry_serialize(
+            dag_id=dag_id,
+            task_id=task_id,
+            dag_run_id=dag_run_id,
+            xcom_key=xcom_key,
+            map_index=map_index,
+            _request_auth=_request_auth,
+            _content_type=_content_type,
+            _headers=_headers,
+            _host_index=_host_index
+        )
+
+        _response_types_map: Dict[str, Optional[str]] = {
+            '204': None,
+            '400': "HTTPExceptionResponse",
+            '401': "HTTPExceptionResponse",
+            '403': "HTTPExceptionResponse",
+            '404': "HTTPExceptionResponse",
+            '422': "HTTPValidationError",
+        }
+        response_data = self.api_client.call_api(
+            *_param,
+            _request_timeout=_request_timeout
+        )
+        response_data.read()
+        return self.api_client.response_deserialize(
+            response_data=response_data,
+            response_types_map=_response_types_map,
+        ).data
+
+
+    @validate_call
+    def delete_xcom_entry_with_http_info(
+        self,
+        dag_id: StrictStr,
+        task_id: StrictStr,
+        dag_run_id: StrictStr,
+        xcom_key: StrictStr,
+        map_index: Optional[Annotated[int, Field(strict=True, ge=-1)]] = None,
+        _request_timeout: Union[
+            None,
+            Annotated[StrictFloat, Field(gt=0)],
+            Tuple[
+                Annotated[StrictFloat, Field(gt=0)],
+                Annotated[StrictFloat, Field(gt=0)]
+            ]
+        ] = None,
+        _request_auth: Optional[Dict[StrictStr, Any]] = None,
+        _content_type: Optional[StrictStr] = None,
+        _headers: Optional[Dict[StrictStr, Any]] = None,
+        _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0,
+    ) -> ApiResponse[None]:
+        """Delete Xcom Entry
+
+        Delete an XCom entry.
+
+        :param dag_id: (required)
+        :type dag_id: str
+        :param task_id: (required)
+        :type task_id: str
+        :param dag_run_id: (required)
+        :type dag_run_id: str
+        :param xcom_key: (required)
+        :type xcom_key: str
+        :param map_index:
+        :type map_index: int
+        :param _request_timeout: timeout setting for this request. If one
+                                 number provided, it will be total request
+                                 timeout. It can also be a pair (tuple) of
+                                 (connection, read) timeouts.
+        :type _request_timeout: int, tuple(int, int), optional
+        :param _request_auth: set to override the auth_settings for a single
+                              request; this effectively ignores the
+                              authentication in the spec for a single request.
+        :type _request_auth: dict, optional
+        :param _content_type: force content-type for the request.
+        :type _content_type: str, Optional
+        :param _headers: set to override the headers for a single
+                         request; this effectively ignores the headers
+                         in the spec for a single request.
+        :type _headers: dict, optional
+        :param _host_index: set to override the host_index for a single
+                            request; this effectively ignores the host_index
+                            in the spec for a single request.
+        :type _host_index: int, optional
+        :return: Returns the result object.
+        """ # noqa: E501
+
+        _param = self._delete_xcom_entry_serialize(
+            dag_id=dag_id,
+            task_id=task_id,
+            dag_run_id=dag_run_id,
+            xcom_key=xcom_key,
+            map_index=map_index,
+            _request_auth=_request_auth,
+            _content_type=_content_type,
+            _headers=_headers,
+            _host_index=_host_index
+        )
+
+        _response_types_map: Dict[str, Optional[str]] = {
+            '204': None,
+            '400': "HTTPExceptionResponse",
+            '401': "HTTPExceptionResponse",
+            '403': "HTTPExceptionResponse",
+            '404': "HTTPExceptionResponse",
+            '422': "HTTPValidationError",
+        }
+        response_data = self.api_client.call_api(
+            *_param,
+            _request_timeout=_request_timeout
+        )
+        response_data.read()
+        return self.api_client.response_deserialize(
+            response_data=response_data,
+            response_types_map=_response_types_map,
+        )
+
+
+    @validate_call
+    def delete_xcom_entry_without_preload_content(
+        self,
+        dag_id: StrictStr,
+        task_id: StrictStr,
+        dag_run_id: StrictStr,
+        xcom_key: StrictStr,
+        map_index: Optional[Annotated[int, Field(strict=True, ge=-1)]] = None,
+        _request_timeout: Union[
+            None,
+            Annotated[StrictFloat, Field(gt=0)],
+            Tuple[
+                Annotated[StrictFloat, Field(gt=0)],
+                Annotated[StrictFloat, Field(gt=0)]
+            ]
+        ] = None,
+        _request_auth: Optional[Dict[StrictStr, Any]] = None,
+        _content_type: Optional[StrictStr] = None,
+        _headers: Optional[Dict[StrictStr, Any]] = None,
+        _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0,
+    ) -> RESTResponseType:
+        """Delete Xcom Entry
+
+        Delete an XCom entry.
+
+        :param dag_id: (required)
+        :type dag_id: str
+        :param task_id: (required)
+        :type task_id: str
+        :param dag_run_id: (required)
+        :type dag_run_id: str
+        :param xcom_key: (required)
+        :type xcom_key: str
+        :param map_index:
+        :type map_index: int
+        :param _request_timeout: timeout setting for this request. If one
+                                 number provided, it will be total request
+                                 timeout. It can also be a pair (tuple) of
+                                 (connection, read) timeouts.
+        :type _request_timeout: int, tuple(int, int), optional
+        :param _request_auth: set to override the auth_settings for a single
+                              request; this effectively ignores the
+                              authentication in the spec for a single request.
+        :type _request_auth: dict, optional
+        :param _content_type: force content-type for the request.
+        :type _content_type: str, Optional
+        :param _headers: set to override the headers for a single
+                         request; this effectively ignores the headers
+                         in the spec for a single request.
+        :type _headers: dict, optional
+        :param _host_index: set to override the host_index for a single
+                            request; this effectively ignores the host_index
+                            in the spec for a single request.
+        :type _host_index: int, optional
+        :return: Returns the result object.
+        """ # noqa: E501
+
+        _param = self._delete_xcom_entry_serialize(
+            dag_id=dag_id,
+            task_id=task_id,
+            dag_run_id=dag_run_id,
+            xcom_key=xcom_key,
+            map_index=map_index,
+            _request_auth=_request_auth,
+            _content_type=_content_type,
+            _headers=_headers,
+            _host_index=_host_index
+        )
+
+        _response_types_map: Dict[str, Optional[str]] = {
+            '204': None,
+            '400': "HTTPExceptionResponse",
+            '401': "HTTPExceptionResponse",
+            '403': "HTTPExceptionResponse",
+            '404': "HTTPExceptionResponse",
+            '422': "HTTPValidationError",
+        }
+        response_data = self.api_client.call_api(
+            *_param,
+            _request_timeout=_request_timeout
+        )
+        return response_data.response
+
+
+    def _delete_xcom_entry_serialize(
+        self,
+        dag_id,
+        task_id,
+        dag_run_id,
+        xcom_key,
+        map_index,
+        _request_auth,
+        _content_type,
+        _headers,
+        _host_index,
+    ) -> RequestSerialized:
+
+        _host = None
+
+        _collection_formats: Dict[str, str] = {
+        }
+
+        _path_params: Dict[str, str] = {}
+        _query_params: List[Tuple[str, str]] = []
+        _header_params: Dict[str, Optional[str]] = _headers or {}
+        _form_params: List[Tuple[str, str]] = []
+        _files: Dict[
+            str, Union[str, bytes, List[str], List[bytes], List[Tuple[str, bytes]]]
+        ] = {}
+        _body_params: Optional[bytes] = None
+
+        # process the path parameters
+        if dag_id is not None:
+            _path_params['dag_id'] = dag_id
+        if task_id is not None:
+            _path_params['task_id'] = task_id
+        if dag_run_id is not None:
+            _path_params['dag_run_id'] = dag_run_id
+        if xcom_key is not None:
+            _path_params['xcom_key'] = xcom_key
+        # process the query parameters
+        if map_index is not None:
+            
+            _query_params.append(('map_index', map_index))
+            
+        # process the header parameters
+        # process the form parameters
+        # process the body parameter
+
+
+        # set the HTTP header `Accept`
+        if 'Accept' not in _header_params:
+            _header_params['Accept'] = self.api_client.select_header_accept(
+                [
+                    'application/json'
+                ]
+            )
+
+
+        # authentication setting
+        _auth_settings: List[str] = [
+            'OAuth2PasswordBearer', 
+            'HTTPBearer'
+        ]
+
+        return self.api_client.param_serialize(
+            method='DELETE',
+            resource_path='/api/v2/dags/{dag_id}/dagRuns/{dag_run_id}/taskInstances/{task_id}/xcomEntries/{xcom_key}',
+            path_params=_path_params,
+            query_params=_query_params,
+            header_params=_header_params,
+            body=_body_params,
+            post_params=_form_params,
+            files=_files,
+            auth_settings=_auth_settings,
+            collection_formats=_collection_formats,
+            _host=_host,
+            _request_auth=_request_auth
+        )
+
+
+
+
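The new `delete_xcom_entry` method issues a `DELETE` against the templated resource path shown in `_delete_xcom_entry_serialize`, with `map_index` as an optional query parameter. A sketch of how the path and query parameters combine into the final URL (ignoring the percent-encoding that `ApiClient.param_serialize` applies; the helper is illustrative, not part of the client):

```python
from typing import Optional

RESOURCE_PATH = (
    "/api/v2/dags/{dag_id}/dagRuns/{dag_run_id}"
    "/taskInstances/{task_id}/xcomEntries/{xcom_key}"
)

def build_delete_url(
    dag_id: str,
    dag_run_id: str,
    task_id: str,
    xcom_key: str,
    map_index: Optional[int] = None,
) -> str:
    """Substitute the four required path parameters; append
    `map_index` only when it was supplied, matching the serializer's
    `if map_index is not None` guard."""
    url = RESOURCE_PATH.format(
        dag_id=dag_id, dag_run_id=dag_run_id, task_id=task_id, xcom_key=xcom_key
    )
    if map_index is not None:
        url += f"?map_index={map_index}"
    return url
```

In the client itself the equivalent call is `XComApi(api_client).delete_xcom_entry(dag_id, task_id, dag_run_id, xcom_key)`, which returns `None` on a `204` response.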
+    @validate_call
     def get_xcom_entries(
         self,
         dag_id: StrictStr,
@@ -388,10 +727,10 @@
         map_index: Optional[Annotated[int, Field(strict=True, ge=-1)]] = None,
         limit: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
         offset: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
-        xcom_key_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
-        dag_display_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
-        run_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
-        task_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
+        xcom_key_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
+        dag_display_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
+        run_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
+        task_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         map_index_filter: Optional[StrictInt] = None,
         logical_date_gte: Optional[datetime] = None,
         logical_date_gt: Optional[datetime] = None,
@@ -432,13 +771,13 @@
         :type limit: int
         :param offset:
         :type offset: int
-        :param xcom_key_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param xcom_key_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type xcom_key_pattern: str
-        :param dag_display_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param dag_display_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type dag_display_name_pattern: str
-        :param run_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param run_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type run_id_pattern: str
-        :param task_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param task_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type task_id_pattern: str
         :param map_index_filter:
         :type map_index_filter: int
@@ -536,10 +875,10 @@
         map_index: Optional[Annotated[int, Field(strict=True, ge=-1)]] = None,
         limit: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
         offset: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
-        xcom_key_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
-        dag_display_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
-        run_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
-        task_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
+        xcom_key_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
+        dag_display_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
+        run_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
+        task_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         map_index_filter: Optional[StrictInt] = None,
         logical_date_gte: Optional[datetime] = None,
         logical_date_gt: Optional[datetime] = None,
@@ -580,13 +919,13 @@
         :type limit: int
         :param offset:
         :type offset: int
-        :param xcom_key_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param xcom_key_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type xcom_key_pattern: str
-        :param dag_display_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param dag_display_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type dag_display_name_pattern: str
-        :param run_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param run_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type run_id_pattern: str
-        :param task_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param task_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type task_id_pattern: str
         :param map_index_filter:
         :type map_index_filter: int
@@ -684,10 +1023,10 @@
         map_index: Optional[Annotated[int, Field(strict=True, ge=-1)]] = None,
         limit: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
         offset: Optional[Annotated[int, Field(strict=True, ge=0)]] = None,
-        xcom_key_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
-        dag_display_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
-        run_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
-        task_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.")] = None,
+        xcom_key_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
+        dag_display_name_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
+        run_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
+        task_id_pattern: Annotated[Optional[StrictStr], Field(description="SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.")] = None,
         map_index_filter: Optional[StrictInt] = None,
         logical_date_gte: Optional[datetime] = None,
         logical_date_gt: Optional[datetime] = None,
@@ -728,13 +1067,13 @@
         :type limit: int
         :param offset:
         :type offset: int
-        :param xcom_key_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param xcom_key_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type xcom_key_pattern: str
-        :param dag_display_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param dag_display_name_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type dag_display_name_pattern: str
-        :param run_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param run_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type run_id_pattern: str
-        :param task_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported.
+        :param task_id_pattern: SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported.
         :type task_id_pattern: str
         :param map_index_filter:
         :type map_index_filter: int
diff --git a/airflow_client/client/api_client.py b/airflow_client/client/api_client.py
index 715e08e..80e5ccf 100644
--- a/airflow_client/client/api_client.py
+++ b/airflow_client/client/api_client.py
@@ -68,6 +68,7 @@
         'date': datetime.date,
         'datetime': datetime.datetime,
         'decimal': decimal.Decimal,
+        'UUID': uuid.UUID,
         'object': object,
     }
     _pool = None
@@ -90,7 +91,7 @@
             self.default_headers[header_name] = header_value
         self.cookie = cookie
         # Set default User-Agent.
-        self.user_agent = 'OpenAPI-Generator/3.1.8/python'
+        self.user_agent = 'OpenAPI-Generator/3.2.0/python'
         self.client_side_validation = configuration.client_side_validation
 
     def __enter__(self):
@@ -305,7 +306,7 @@
         response_text = None
         return_data = None
         try:
-            if response_type == "bytearray":
+            if response_type in ("bytearray", "bytes"):
                 return_data = response_data.data
             elif response_type == "file":
                 return_data = self.__deserialize_file(response_data)
@@ -467,6 +468,8 @@
             return self.__deserialize_datetime(data)
         elif klass is decimal.Decimal:
             return decimal.Decimal(data)
+        elif klass is uuid.UUID:
+            return uuid.UUID(data)
         elif issubclass(klass, Enum):
             return self.__deserialize_enum(data, klass)
         else:
diff --git a/airflow_client/client/configuration.py b/airflow_client/client/configuration.py
index 7856eb6..4e7e4ec 100644
--- a/airflow_client/client/configuration.py
+++ b/airflow_client/client/configuration.py
@@ -157,6 +157,8 @@
       string values to replace variables in templated server configuration.
       The validation of enums is performed for variables with defined enum
       values before.
+    :param verify_ssl: bool - Set this to false to skip verifying SSL certificate
+      when calling API from https server.
     :param ssl_ca_cert: str - the path to a file of concatenated CA certificates
       in PEM format.
     :param retries: int | urllib3.util.retry.Retry - Retry configuration.
@@ -164,6 +166,16 @@
       in PEM (str) or DER (bytes) format.
     :param cert_file: the path to a client certificate file, for mTLS.
     :param key_file: the path to a client key file, for mTLS.
+    :param assert_hostname: Set this to True/False to enable/disable SSL hostname verification.
+    :param tls_server_name: SSL/TLS Server Name Indication (SNI). Set this to the SNI value expected by the server.
+    :param connection_pool_maxsize: Connection pool max size. None in the constructor is coerced to 100 for async and cpu_count * 5 for sync.
+    :param proxy: Proxy URL.
+    :param proxy_headers: Proxy headers.
+    :param safe_chars_for_path_param: Safe characters for path parameter encoding.
+    :param client_side_validation: Enable client-side validation. Default True.
+    :param socket_options: Options to pass down to the underlying urllib3 socket.
+    :param datetime_format: Datetime format string for serialization.
+    :param date_format: Date format string for serialization.
 
     :Example:
     """
@@ -188,6 +200,17 @@
         ca_cert_data: Optional[Union[str, bytes]] = None,
         cert_file: Optional[str]=None,
         key_file: Optional[str]=None,
+        verify_ssl: bool=True,
+        assert_hostname: Optional[bool]=None,
+        tls_server_name: Optional[str]=None,
+        connection_pool_maxsize: Optional[int]=None,
+        proxy: Optional[str]=None,
+        proxy_headers: Optional[Any]=None,
+        safe_chars_for_path_param: str='',
+        client_side_validation: bool=True,
+        socket_options: Optional[Any]=None,
+        datetime_format: str="%Y-%m-%dT%H:%M:%S.%f%z",
+        date_format: str="%Y-%m-%d",
         *,
         debug: Optional[bool] = None,
     ) -> None:
@@ -257,7 +280,7 @@
         """Debug switch
         """
 
-        self.verify_ssl = True
+        self.verify_ssl = verify_ssl
         """SSL/TLS verification
            Set this to false to skip verifying SSL certificate when calling API
            from https server.
@@ -275,46 +298,43 @@
         self.key_file = key_file
         """client key file
         """
-        self.assert_hostname = None
+        self.assert_hostname = assert_hostname
         """Set this to True/False to enable/disable SSL hostname verification.
         """
-        self.tls_server_name = None
+        self.tls_server_name = tls_server_name
         """SSL/TLS Server Name Indication (SNI)
            Set this to the SNI value expected by the server.
         """
 
-        self.connection_pool_maxsize = multiprocessing.cpu_count() * 5
+        self.connection_pool_maxsize = connection_pool_maxsize if connection_pool_maxsize is not None else multiprocessing.cpu_count() * 5
         """urllib3 connection pool's maximum number of connections saved
-           per pool. urllib3 uses 1 connection as default value, but this is
-           not the best value when you are making a lot of possibly parallel
-           requests to the same host, which is often the case here.
-           cpu_count * 5 is used as default value to increase performance.
+           per pool. None in the constructor is coerced to cpu_count * 5.
         """
 
-        self.proxy: Optional[str] = None
+        self.proxy = proxy
         """Proxy URL
         """
-        self.proxy_headers = None
+        self.proxy_headers = proxy_headers
         """Proxy headers
         """
-        self.safe_chars_for_path_param = ''
+        self.safe_chars_for_path_param = safe_chars_for_path_param
         """Safe chars for path_param
         """
         self.retries = retries
         """Retry configuration
         """
         # Enable client side validation
-        self.client_side_validation = True
+        self.client_side_validation = client_side_validation
 
-        self.socket_options = None
+        self.socket_options = socket_options
         """Options to pass down to the underlying urllib3 socket
         """
 
-        self.datetime_format = "%Y-%m-%dT%H:%M:%S.%f%z"
+        self.datetime_format = datetime_format
         """datetime format
         """
 
-        self.date_format = "%Y-%m-%d"
+        self.date_format = date_format
         """date format
         """
 
@@ -521,7 +541,7 @@
                "OS: {env}\n"\
                "Python Version: {pyversion}\n"\
                "Version of the API: 2\n"\
-               "SDK Package Version: 3.1.8".\
+               "SDK Package Version: 3.2.0".\
                format(env=sys.platform, pyversion=sys.version)
 
     def get_host_settings(self) -> List[HostSetting]:
diff --git a/airflow_client/client/models/__init__.py b/airflow_client/client/models/__init__.py
index 12359fd..38a35f4 100644
--- a/airflow_client/client/models/__init__.py
+++ b/airflow_client/client/models/__init__.py
@@ -25,6 +25,7 @@
 from airflow_client.client.models.asset_event_collection_response import AssetEventCollectionResponse
 from airflow_client.client.models.asset_event_response import AssetEventResponse
 from airflow_client.client.models.asset_response import AssetResponse
+from airflow_client.client.models.asset_watcher_response import AssetWatcherResponse
 from airflow_client.client.models.backfill_collection_response import BackfillCollectionResponse
 from airflow_client.client.models.backfill_post_body import BackfillPostBody
 from airflow_client.client.models.backfill_response import BackfillResponse
@@ -92,6 +93,9 @@
 from airflow_client.client.models.dry_run_backfill_collection_response import DryRunBackfillCollectionResponse
 from airflow_client.client.models.dry_run_backfill_response import DryRunBackfillResponse
 from airflow_client.client.models.entities_inner import EntitiesInner
+from airflow_client.client.models.entities_inner1 import EntitiesInner1
+from airflow_client.client.models.entities_inner2 import EntitiesInner2
+from airflow_client.client.models.entities_inner3 import EntitiesInner3
 from airflow_client.client.models.event_log_collection_response import EventLogCollectionResponse
 from airflow_client.client.models.event_log_response import EventLogResponse
 from airflow_client.client.models.external_log_url_response import ExternalLogUrlResponse
@@ -101,6 +105,7 @@
 from airflow_client.client.models.fast_api_root_middleware_response import FastAPIRootMiddlewareResponse
 from airflow_client.client.models.hitl_detail import HITLDetail
 from airflow_client.client.models.hitl_detail_collection import HITLDetailCollection
+from airflow_client.client.models.hitl_detail_history import HITLDetailHistory
 from airflow_client.client.models.hitl_detail_response import HITLDetailResponse
 from airflow_client.client.models.hitl_user import HITLUser
 from airflow_client.client.models.http_exception_response import HTTPExceptionResponse
diff --git a/airflow_client/client/models/app_builder_menu_item_response.py b/airflow_client/client/models/app_builder_menu_item_response.py
index ef9d786..c5745b1 100644
--- a/airflow_client/client/models/app_builder_menu_item_response.py
+++ b/airflow_client/client/models/app_builder_menu_item_response.py
@@ -21,6 +21,7 @@
 from typing import Any, ClassVar, Dict, List, Optional
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class AppBuilderMenuItemResponse(BaseModel):
     """
@@ -33,7 +34,8 @@
     __properties: ClassVar[List[str]] = ["category", "href", "name"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -45,8 +47,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/app_builder_view_response.py b/airflow_client/client/models/app_builder_view_response.py
index b3722ef..50f7be0 100644
--- a/airflow_client/client/models/app_builder_view_response.py
+++ b/airflow_client/client/models/app_builder_view_response.py
@@ -21,6 +21,7 @@
 from typing import Any, ClassVar, Dict, List, Optional
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class AppBuilderViewResponse(BaseModel):
     """
@@ -34,7 +35,8 @@
     __properties: ClassVar[List[str]] = ["category", "label", "name", "view"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -46,8 +48,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/asset_alias_collection_response.py b/airflow_client/client/models/asset_alias_collection_response.py
index 8f056d9..334da94 100644
--- a/airflow_client/client/models/asset_alias_collection_response.py
+++ b/airflow_client/client/models/asset_alias_collection_response.py
@@ -22,6 +22,7 @@
 from airflow_client.client.models.asset_alias_response import AssetAliasResponse
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class AssetAliasCollectionResponse(BaseModel):
     """
@@ -32,7 +33,8 @@
     __properties: ClassVar[List[str]] = ["asset_aliases", "total_entries"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -44,8 +46,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/asset_alias_response.py b/airflow_client/client/models/asset_alias_response.py
index 81d8385..77b729f 100644
--- a/airflow_client/client/models/asset_alias_response.py
+++ b/airflow_client/client/models/asset_alias_response.py
@@ -21,6 +21,7 @@
 from typing import Any, ClassVar, Dict, List
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class AssetAliasResponse(BaseModel):
     """
@@ -32,7 +33,8 @@
     __properties: ClassVar[List[str]] = ["group", "id", "name"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -44,8 +46,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/asset_collection_response.py b/airflow_client/client/models/asset_collection_response.py
index 3381d10..d4eccea 100644
--- a/airflow_client/client/models/asset_collection_response.py
+++ b/airflow_client/client/models/asset_collection_response.py
@@ -22,6 +22,7 @@
 from airflow_client.client.models.asset_response import AssetResponse
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class AssetCollectionResponse(BaseModel):
     """
@@ -32,7 +33,8 @@
     __properties: ClassVar[List[str]] = ["assets", "total_entries"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -44,8 +46,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/asset_event_collection_response.py b/airflow_client/client/models/asset_event_collection_response.py
index 000abe6..381af67 100644
--- a/airflow_client/client/models/asset_event_collection_response.py
+++ b/airflow_client/client/models/asset_event_collection_response.py
@@ -22,6 +22,7 @@
 from airflow_client.client.models.asset_event_response import AssetEventResponse
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class AssetEventCollectionResponse(BaseModel):
     """
@@ -32,7 +33,8 @@
     __properties: ClassVar[List[str]] = ["asset_events", "total_entries"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -44,8 +46,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/asset_event_response.py b/airflow_client/client/models/asset_event_response.py
index e9cc451..68a8ae7 100644
--- a/airflow_client/client/models/asset_event_response.py
+++ b/airflow_client/client/models/asset_event_response.py
@@ -23,6 +23,7 @@
 from airflow_client.client.models.dag_run_asset_reference import DagRunAssetReference
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class AssetEventResponse(BaseModel):
     """
@@ -34,16 +35,18 @@
     group: Optional[StrictStr] = None
     id: StrictInt
     name: Optional[StrictStr] = None
+    partition_key: Optional[StrictStr] = None
     source_dag_id: Optional[StrictStr] = None
     source_map_index: StrictInt
     source_run_id: Optional[StrictStr] = None
     source_task_id: Optional[StrictStr] = None
     timestamp: datetime
     uri: Optional[StrictStr] = None
-    __properties: ClassVar[List[str]] = ["asset_id", "created_dagruns", "extra", "group", "id", "name", "source_dag_id", "source_map_index", "source_run_id", "source_task_id", "timestamp", "uri"]
+    __properties: ClassVar[List[str]] = ["asset_id", "created_dagruns", "extra", "group", "id", "name", "partition_key", "source_dag_id", "source_map_index", "source_run_id", "source_task_id", "timestamp", "uri"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -55,8 +58,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
@@ -106,6 +108,7 @@
             "group": obj.get("group"),
             "id": obj.get("id"),
             "name": obj.get("name"),
+            "partition_key": obj.get("partition_key"),
             "source_dag_id": obj.get("source_dag_id"),
             "source_map_index": obj.get("source_map_index"),
             "source_run_id": obj.get("source_run_id"),
diff --git a/airflow_client/client/models/asset_response.py b/airflow_client/client/models/asset_response.py
index 544ec66..868c8d3 100644
--- a/airflow_client/client/models/asset_response.py
+++ b/airflow_client/client/models/asset_response.py
@@ -21,12 +21,14 @@
 from pydantic import BaseModel, ConfigDict, StrictInt, StrictStr
 from typing import Any, ClassVar, Dict, List, Optional
 from airflow_client.client.models.asset_alias_response import AssetAliasResponse
+from airflow_client.client.models.asset_watcher_response import AssetWatcherResponse
 from airflow_client.client.models.dag_schedule_asset_reference import DagScheduleAssetReference
 from airflow_client.client.models.last_asset_event_response import LastAssetEventResponse
 from airflow_client.client.models.task_inlet_asset_reference import TaskInletAssetReference
 from airflow_client.client.models.task_outlet_asset_reference import TaskOutletAssetReference
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class AssetResponse(BaseModel):
     """
@@ -44,10 +46,12 @@
     scheduled_dags: List[DagScheduleAssetReference]
     updated_at: datetime
     uri: StrictStr
-    __properties: ClassVar[List[str]] = ["aliases", "consuming_tasks", "created_at", "extra", "group", "id", "last_asset_event", "name", "producing_tasks", "scheduled_dags", "updated_at", "uri"]
+    watchers: List[AssetWatcherResponse]
+    __properties: ClassVar[List[str]] = ["aliases", "consuming_tasks", "created_at", "extra", "group", "id", "last_asset_event", "name", "producing_tasks", "scheduled_dags", "updated_at", "uri", "watchers"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -59,8 +63,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
@@ -116,6 +119,13 @@
                 if _item_scheduled_dags:
                     _items.append(_item_scheduled_dags.to_dict())
             _dict['scheduled_dags'] = _items
+        # override the default output from pydantic by calling `to_dict()` of each item in watchers (list)
+        _items = []
+        if self.watchers:
+            for _item_watchers in self.watchers:
+                if _item_watchers:
+                    _items.append(_item_watchers.to_dict())
+            _dict['watchers'] = _items
         return _dict
 
     @classmethod
@@ -139,7 +149,8 @@
             "producing_tasks": [TaskOutletAssetReference.from_dict(_item) for _item in obj["producing_tasks"]] if obj.get("producing_tasks") is not None else None,
             "scheduled_dags": [DagScheduleAssetReference.from_dict(_item) for _item in obj["scheduled_dags"]] if obj.get("scheduled_dags") is not None else None,
             "updated_at": obj.get("updated_at"),
-            "uri": obj.get("uri")
+            "uri": obj.get("uri"),
+            "watchers": [AssetWatcherResponse.from_dict(_item) for _item in obj["watchers"]] if obj.get("watchers") is not None else None
         })
         return _obj
 
diff --git a/airflow_client/client/models/asset_watcher_response.py b/airflow_client/client/models/asset_watcher_response.py
new file mode 100644
index 0000000..3d704d7
--- /dev/null
+++ b/airflow_client/client/models/asset_watcher_response.py
@@ -0,0 +1,93 @@
+# coding: utf-8
+
+"""
+    Airflow API
+
+    Airflow API. All endpoints located under ``/api/v2`` can be used safely, are stable and backward compatible. Endpoints located under ``/ui`` are dedicated to the UI and are subject to breaking change depending on the need of the frontend. Users should not rely on those but use the public ones instead.
+
+    The version of the OpenAPI document: 2
+    Generated by OpenAPI Generator (https://openapi-generator.tech)
+
+    Do not edit the class manually.
+"""  # noqa: E501
+
+
+from __future__ import annotations
+import pprint
+import re  # noqa: F401
+import json
+
+from datetime import datetime
+from pydantic import BaseModel, ConfigDict, StrictInt, StrictStr
+from typing import Any, ClassVar, Dict, List
+from typing import Optional, Set
+from typing_extensions import Self
+from pydantic_core import to_jsonable_python
+
+class AssetWatcherResponse(BaseModel):
+    """
+    Asset watcher serializer for responses.
+    """ # noqa: E501
+    created_date: datetime
+    name: StrictStr
+    trigger_id: StrictInt
+    __properties: ClassVar[List[str]] = ["created_date", "name", "trigger_id"]
+
+    model_config = ConfigDict(
+        validate_by_name=True,
+        validate_by_alias=True,
+        validate_assignment=True,
+        protected_namespaces=(),
+    )
+
+    def to_str(self) -> str:
+        """Returns the string representation of the model using alias"""
+        return pprint.pformat(self.model_dump(by_alias=True))
+
+    def to_json(self) -> str:
+        """Returns the JSON representation of the model using alias"""
+        return json.dumps(to_jsonable_python(self.to_dict()))
+
+    @classmethod
+    def from_json(cls, json_str: str) -> Optional[Self]:
+        """Create an instance of AssetWatcherResponse from a JSON string"""
+        return cls.from_dict(json.loads(json_str))
+
+    def to_dict(self) -> Dict[str, Any]:
+        """Return the dictionary representation of the model using alias.
+
+        This has the following differences from calling pydantic's
+        `self.model_dump(by_alias=True)`:
+
+        * `None` is only added to the output dict for nullable fields that
+          were set at model initialization. Other fields with value `None`
+          are ignored.
+        """
+        excluded_fields: Set[str] = set([
+        ])
+
+        _dict = self.model_dump(
+            by_alias=True,
+            exclude=excluded_fields,
+            exclude_none=True,
+        )
+        return _dict
+
+    @classmethod
+    def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]:
+        """Create an instance of AssetWatcherResponse from a dict"""
+        if obj is None:
+            return None
+
+        if not isinstance(obj, dict):
+            return cls.model_validate(obj)
+
+        _obj = cls.model_validate({
+            "created_date": obj.get("created_date"),
+            "name": obj.get("name"),
+            "trigger_id": obj.get("trigger_id")
+        })
+        return _obj
+
+
diff --git a/airflow_client/client/models/backfill_collection_response.py b/airflow_client/client/models/backfill_collection_response.py
index 41a0735..574532d 100644
--- a/airflow_client/client/models/backfill_collection_response.py
+++ b/airflow_client/client/models/backfill_collection_response.py
@@ -22,6 +22,7 @@
 from airflow_client.client.models.backfill_response import BackfillResponse
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class BackfillCollectionResponse(BaseModel):
     """
@@ -32,7 +33,8 @@
     __properties: ClassVar[List[str]] = ["backfills", "total_entries"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -44,8 +46,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/backfill_post_body.py b/airflow_client/client/models/backfill_post_body.py
index 788a69d..62df228 100644
--- a/airflow_client/client/models/backfill_post_body.py
+++ b/airflow_client/client/models/backfill_post_body.py
@@ -23,6 +23,7 @@
 from airflow_client.client.models.reprocess_behavior import ReprocessBehavior
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class BackfillPostBody(BaseModel):
     """
@@ -40,7 +41,8 @@
     __properties: ClassVar[List[str]] = ["dag_id", "dag_run_conf", "from_date", "max_active_runs", "reprocess_behavior", "run_backwards", "run_on_latest_version", "to_date"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -52,8 +54,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/backfill_response.py b/airflow_client/client/models/backfill_response.py
index 4f8664d..34b0039 100644
--- a/airflow_client/client/models/backfill_response.py
+++ b/airflow_client/client/models/backfill_response.py
@@ -24,6 +24,7 @@
 from airflow_client.client.models.reprocess_behavior import ReprocessBehavior
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class BackfillResponse(BaseModel):
     """
@@ -33,7 +34,7 @@
     created_at: datetime
     dag_display_name: StrictStr
     dag_id: StrictStr
-    dag_run_conf: Dict[str, Any]
+    dag_run_conf: Optional[Dict[str, Any]] = None
     from_date: datetime
     id: Annotated[int, Field(strict=True, ge=0)]
     is_paused: StrictBool
@@ -44,7 +45,8 @@
     __properties: ClassVar[List[str]] = ["completed_at", "created_at", "dag_display_name", "dag_id", "dag_run_conf", "from_date", "id", "is_paused", "max_active_runs", "reprocess_behavior", "to_date", "updated_at"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -56,8 +58,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/base_info_response.py b/airflow_client/client/models/base_info_response.py
index 803d5e1..2071a28 100644
--- a/airflow_client/client/models/base_info_response.py
+++ b/airflow_client/client/models/base_info_response.py
@@ -21,6 +21,7 @@
 from typing import Any, ClassVar, Dict, List, Optional
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class BaseInfoResponse(BaseModel):
     """
@@ -30,7 +31,8 @@
     __properties: ClassVar[List[str]] = ["status"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -42,8 +44,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/bulk_action_response.py b/airflow_client/client/models/bulk_action_response.py
index 0e0ebdf..4410e69 100644
--- a/airflow_client/client/models/bulk_action_response.py
+++ b/airflow_client/client/models/bulk_action_response.py
@@ -21,6 +21,7 @@
 from typing import Any, ClassVar, Dict, List, Optional
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class BulkActionResponse(BaseModel):
     """
@@ -31,7 +32,8 @@
     __properties: ClassVar[List[str]] = ["errors", "success"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -43,8 +45,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/bulk_body_bulk_task_instance_body.py b/airflow_client/client/models/bulk_body_bulk_task_instance_body.py
index ab8b60b..54d2799 100644
--- a/airflow_client/client/models/bulk_body_bulk_task_instance_body.py
+++ b/airflow_client/client/models/bulk_body_bulk_task_instance_body.py
@@ -22,6 +22,7 @@
 from airflow_client.client.models.actions_inner import ActionsInner
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class BulkBodyBulkTaskInstanceBody(BaseModel):
     """
@@ -31,7 +32,8 @@
     __properties: ClassVar[List[str]] = ["actions"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -43,8 +45,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/bulk_body_connection_body.py b/airflow_client/client/models/bulk_body_connection_body.py
index 9187afd..6d8a2f1 100644
--- a/airflow_client/client/models/bulk_body_connection_body.py
+++ b/airflow_client/client/models/bulk_body_connection_body.py
@@ -22,6 +22,7 @@
 from airflow_client.client.models.actions_inner1 import ActionsInner1
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class BulkBodyConnectionBody(BaseModel):
     """
@@ -31,7 +32,8 @@
     __properties: ClassVar[List[str]] = ["actions"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -43,8 +45,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/bulk_body_pool_body.py b/airflow_client/client/models/bulk_body_pool_body.py
index 4486510..85977f7 100644
--- a/airflow_client/client/models/bulk_body_pool_body.py
+++ b/airflow_client/client/models/bulk_body_pool_body.py
@@ -22,6 +22,7 @@
 from airflow_client.client.models.actions_inner2 import ActionsInner2
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class BulkBodyPoolBody(BaseModel):
     """
@@ -31,7 +32,8 @@
     __properties: ClassVar[List[str]] = ["actions"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -43,8 +45,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/bulk_body_variable_body.py b/airflow_client/client/models/bulk_body_variable_body.py
index 9153696..443ec6c 100644
--- a/airflow_client/client/models/bulk_body_variable_body.py
+++ b/airflow_client/client/models/bulk_body_variable_body.py
@@ -22,6 +22,7 @@
 from airflow_client.client.models.actions_inner3 import ActionsInner3
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class BulkBodyVariableBody(BaseModel):
     """
@@ -31,7 +32,8 @@
     __properties: ClassVar[List[str]] = ["actions"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -43,8 +45,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/bulk_create_action_bulk_task_instance_body.py b/airflow_client/client/models/bulk_create_action_bulk_task_instance_body.py
index 1e7f0f8..0e3c15a 100644
--- a/airflow_client/client/models/bulk_create_action_bulk_task_instance_body.py
+++ b/airflow_client/client/models/bulk_create_action_bulk_task_instance_body.py
@@ -23,6 +23,7 @@
 from airflow_client.client.models.bulk_task_instance_body import BulkTaskInstanceBody
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class BulkCreateActionBulkTaskInstanceBody(BaseModel):
     """
@@ -41,7 +42,8 @@
         return value
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -53,8 +55,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/bulk_create_action_connection_body.py b/airflow_client/client/models/bulk_create_action_connection_body.py
index fde17d2..347b11d 100644
--- a/airflow_client/client/models/bulk_create_action_connection_body.py
+++ b/airflow_client/client/models/bulk_create_action_connection_body.py
@@ -23,6 +23,7 @@
 from airflow_client.client.models.connection_body import ConnectionBody
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class BulkCreateActionConnectionBody(BaseModel):
     """
@@ -41,7 +42,8 @@
         return value
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -53,8 +55,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/bulk_create_action_pool_body.py b/airflow_client/client/models/bulk_create_action_pool_body.py
index 0f936c0..bb24bdc 100644
--- a/airflow_client/client/models/bulk_create_action_pool_body.py
+++ b/airflow_client/client/models/bulk_create_action_pool_body.py
@@ -23,6 +23,7 @@
 from airflow_client.client.models.pool_body import PoolBody
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class BulkCreateActionPoolBody(BaseModel):
     """
@@ -41,7 +42,8 @@
         return value
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -53,8 +55,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/bulk_create_action_variable_body.py b/airflow_client/client/models/bulk_create_action_variable_body.py
index 8f056bd..2243a61 100644
--- a/airflow_client/client/models/bulk_create_action_variable_body.py
+++ b/airflow_client/client/models/bulk_create_action_variable_body.py
@@ -23,6 +23,7 @@
 from airflow_client.client.models.variable_body import VariableBody
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class BulkCreateActionVariableBody(BaseModel):
     """
@@ -41,7 +42,8 @@
         return value
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -53,8 +55,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/bulk_delete_action_bulk_task_instance_body.py b/airflow_client/client/models/bulk_delete_action_bulk_task_instance_body.py
index ecc1bfc..92c2b65 100644
--- a/airflow_client/client/models/bulk_delete_action_bulk_task_instance_body.py
+++ b/airflow_client/client/models/bulk_delete_action_bulk_task_instance_body.py
@@ -23,6 +23,7 @@
 from airflow_client.client.models.entities_inner import EntitiesInner
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class BulkDeleteActionBulkTaskInstanceBody(BaseModel):
     """
@@ -41,7 +42,8 @@
         return value
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -53,8 +55,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/bulk_delete_action_connection_body.py b/airflow_client/client/models/bulk_delete_action_connection_body.py
index 587e2ca..f250e86 100644
--- a/airflow_client/client/models/bulk_delete_action_connection_body.py
+++ b/airflow_client/client/models/bulk_delete_action_connection_body.py
@@ -20,9 +20,10 @@
 from pydantic import BaseModel, ConfigDict, Field, StrictStr, field_validator
 from typing import Any, ClassVar, Dict, List, Optional
 from airflow_client.client.models.bulk_action_not_on_existence import BulkActionNotOnExistence
-from airflow_client.client.models.entities_inner import EntitiesInner
+from airflow_client.client.models.entities_inner1 import EntitiesInner1
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class BulkDeleteActionConnectionBody(BaseModel):
     """
@@ -30,7 +31,7 @@
     """ # noqa: E501
     action: StrictStr = Field(description="The action to be performed on the entities.")
     action_on_non_existence: Optional[BulkActionNotOnExistence] = None
-    entities: List[EntitiesInner] = Field(description="A list of entity id/key or entity objects to be deleted.")
+    entities: List[EntitiesInner1] = Field(description="A list of entity id/key or entity objects to be deleted.")
     __properties: ClassVar[List[str]] = ["action", "action_on_non_existence", "entities"]
 
     @field_validator('action')
@@ -41,7 +42,8 @@
         return value
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -53,8 +55,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
@@ -100,7 +101,7 @@
         _obj = cls.model_validate({
             "action": obj.get("action"),
             "action_on_non_existence": obj.get("action_on_non_existence"),
-            "entities": [EntitiesInner.from_dict(_item) for _item in obj["entities"]] if obj.get("entities") is not None else None
+            "entities": [EntitiesInner1.from_dict(_item) for _item in obj["entities"]] if obj.get("entities") is not None else None
         })
         return _obj
 
diff --git a/airflow_client/client/models/bulk_delete_action_pool_body.py b/airflow_client/client/models/bulk_delete_action_pool_body.py
index ed2f72c..f8f8271 100644
--- a/airflow_client/client/models/bulk_delete_action_pool_body.py
+++ b/airflow_client/client/models/bulk_delete_action_pool_body.py
@@ -20,9 +20,10 @@
 from pydantic import BaseModel, ConfigDict, Field, StrictStr, field_validator
 from typing import Any, ClassVar, Dict, List, Optional
 from airflow_client.client.models.bulk_action_not_on_existence import BulkActionNotOnExistence
-from airflow_client.client.models.entities_inner import EntitiesInner
+from airflow_client.client.models.entities_inner2 import EntitiesInner2
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class BulkDeleteActionPoolBody(BaseModel):
     """
@@ -30,7 +31,7 @@
     """ # noqa: E501
     action: StrictStr = Field(description="The action to be performed on the entities.")
     action_on_non_existence: Optional[BulkActionNotOnExistence] = None
-    entities: List[EntitiesInner] = Field(description="A list of entity id/key or entity objects to be deleted.")
+    entities: List[EntitiesInner2] = Field(description="A list of entity id/key or entity objects to be deleted.")
     __properties: ClassVar[List[str]] = ["action", "action_on_non_existence", "entities"]
 
     @field_validator('action')
@@ -41,7 +42,8 @@
         return value
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -53,8 +55,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
@@ -100,7 +101,7 @@
         _obj = cls.model_validate({
             "action": obj.get("action"),
             "action_on_non_existence": obj.get("action_on_non_existence"),
-            "entities": [EntitiesInner.from_dict(_item) for _item in obj["entities"]] if obj.get("entities") is not None else None
+            "entities": [EntitiesInner2.from_dict(_item) for _item in obj["entities"]] if obj.get("entities") is not None else None
         })
         return _obj
 
diff --git a/airflow_client/client/models/bulk_delete_action_variable_body.py b/airflow_client/client/models/bulk_delete_action_variable_body.py
index 85e080a..c579a09 100644
--- a/airflow_client/client/models/bulk_delete_action_variable_body.py
+++ b/airflow_client/client/models/bulk_delete_action_variable_body.py
@@ -20,9 +20,10 @@
 from pydantic import BaseModel, ConfigDict, Field, StrictStr, field_validator
 from typing import Any, ClassVar, Dict, List, Optional
 from airflow_client.client.models.bulk_action_not_on_existence import BulkActionNotOnExistence
-from airflow_client.client.models.entities_inner import EntitiesInner
+from airflow_client.client.models.entities_inner3 import EntitiesInner3
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class BulkDeleteActionVariableBody(BaseModel):
     """
@@ -30,7 +31,7 @@
     """ # noqa: E501
     action: StrictStr = Field(description="The action to be performed on the entities.")
     action_on_non_existence: Optional[BulkActionNotOnExistence] = None
-    entities: List[EntitiesInner] = Field(description="A list of entity id/key or entity objects to be deleted.")
+    entities: List[EntitiesInner3] = Field(description="A list of entity id/key or entity objects to be deleted.")
     __properties: ClassVar[List[str]] = ["action", "action_on_non_existence", "entities"]
 
     @field_validator('action')
@@ -41,7 +42,8 @@
         return value
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -53,8 +55,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
@@ -100,7 +101,7 @@
         _obj = cls.model_validate({
             "action": obj.get("action"),
             "action_on_non_existence": obj.get("action_on_non_existence"),
-            "entities": [EntitiesInner.from_dict(_item) for _item in obj["entities"]] if obj.get("entities") is not None else None
+            "entities": [EntitiesInner3.from_dict(_item) for _item in obj["entities"]] if obj.get("entities") is not None else None
         })
         return _obj
 
diff --git a/airflow_client/client/models/bulk_response.py b/airflow_client/client/models/bulk_response.py
index c2c7e99..3c92769 100644
--- a/airflow_client/client/models/bulk_response.py
+++ b/airflow_client/client/models/bulk_response.py
@@ -22,6 +22,7 @@
 from airflow_client.client.models.bulk_action_response import BulkActionResponse
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class BulkResponse(BaseModel):
     """
@@ -33,7 +34,8 @@
     __properties: ClassVar[List[str]] = ["create", "delete", "update"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -45,8 +47,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/bulk_task_instance_body.py b/airflow_client/client/models/bulk_task_instance_body.py
index c3126cf..37aeb9e 100644
--- a/airflow_client/client/models/bulk_task_instance_body.py
+++ b/airflow_client/client/models/bulk_task_instance_body.py
@@ -23,11 +23,14 @@
 from airflow_client.client.models.task_instance_state import TaskInstanceState
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class BulkTaskInstanceBody(BaseModel):
     """
     Request body for bulk update, and delete task instances.
     """ # noqa: E501
+    dag_id: Optional[StrictStr] = None
+    dag_run_id: Optional[StrictStr] = None
     include_downstream: Optional[StrictBool] = False
     include_future: Optional[StrictBool] = False
     include_past: Optional[StrictBool] = False
@@ -36,10 +39,11 @@
     new_state: Optional[TaskInstanceState] = None
     note: Optional[Annotated[str, Field(strict=True, max_length=1000)]] = None
     task_id: StrictStr
-    __properties: ClassVar[List[str]] = ["include_downstream", "include_future", "include_past", "include_upstream", "map_index", "new_state", "note", "task_id"]
+    __properties: ClassVar[List[str]] = ["dag_id", "dag_run_id", "include_downstream", "include_future", "include_past", "include_upstream", "map_index", "new_state", "note", "task_id"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -51,8 +55,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
@@ -89,6 +92,8 @@
             return cls.model_validate(obj)
 
         _obj = cls.model_validate({
+            "dag_id": obj.get("dag_id"),
+            "dag_run_id": obj.get("dag_run_id"),
             "include_downstream": obj.get("include_downstream") if obj.get("include_downstream") is not None else False,
             "include_future": obj.get("include_future") if obj.get("include_future") is not None else False,
             "include_past": obj.get("include_past") if obj.get("include_past") is not None else False,
diff --git a/airflow_client/client/models/bulk_update_action_bulk_task_instance_body.py b/airflow_client/client/models/bulk_update_action_bulk_task_instance_body.py
index 4d547e6..3347702 100644
--- a/airflow_client/client/models/bulk_update_action_bulk_task_instance_body.py
+++ b/airflow_client/client/models/bulk_update_action_bulk_task_instance_body.py
@@ -23,6 +23,7 @@
 from airflow_client.client.models.bulk_task_instance_body import BulkTaskInstanceBody
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class BulkUpdateActionBulkTaskInstanceBody(BaseModel):
     """
@@ -31,7 +32,8 @@
     action: StrictStr = Field(description="The action to be performed on the entities.")
     action_on_non_existence: Optional[BulkActionNotOnExistence] = None
     entities: List[BulkTaskInstanceBody] = Field(description="A list of entities to be updated.")
-    __properties: ClassVar[List[str]] = ["action", "action_on_non_existence", "entities"]
+    update_mask: Optional[List[StrictStr]] = None
+    __properties: ClassVar[List[str]] = ["action", "action_on_non_existence", "entities", "update_mask"]
 
     @field_validator('action')
     def action_validate_enum(cls, value):
@@ -41,7 +43,8 @@
         return value
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -53,8 +56,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
@@ -100,7 +102,8 @@
         _obj = cls.model_validate({
             "action": obj.get("action"),
             "action_on_non_existence": obj.get("action_on_non_existence"),
-            "entities": [BulkTaskInstanceBody.from_dict(_item) for _item in obj["entities"]] if obj.get("entities") is not None else None
+            "entities": [BulkTaskInstanceBody.from_dict(_item) for _item in obj["entities"]] if obj.get("entities") is not None else None,
+            "update_mask": obj.get("update_mask")
         })
         return _obj
 
diff --git a/airflow_client/client/models/bulk_update_action_connection_body.py b/airflow_client/client/models/bulk_update_action_connection_body.py
index ad1ff1a..dbf4d0b 100644
--- a/airflow_client/client/models/bulk_update_action_connection_body.py
+++ b/airflow_client/client/models/bulk_update_action_connection_body.py
@@ -23,6 +23,7 @@
 from airflow_client.client.models.connection_body import ConnectionBody
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class BulkUpdateActionConnectionBody(BaseModel):
     """
@@ -31,7 +32,8 @@
     action: StrictStr = Field(description="The action to be performed on the entities.")
     action_on_non_existence: Optional[BulkActionNotOnExistence] = None
     entities: List[ConnectionBody] = Field(description="A list of entities to be updated.")
-    __properties: ClassVar[List[str]] = ["action", "action_on_non_existence", "entities"]
+    update_mask: Optional[List[StrictStr]] = None
+    __properties: ClassVar[List[str]] = ["action", "action_on_non_existence", "entities", "update_mask"]
 
     @field_validator('action')
     def action_validate_enum(cls, value):
@@ -41,7 +43,8 @@
         return value
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -53,8 +56,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
@@ -100,7 +102,8 @@
         _obj = cls.model_validate({
             "action": obj.get("action"),
             "action_on_non_existence": obj.get("action_on_non_existence"),
-            "entities": [ConnectionBody.from_dict(_item) for _item in obj["entities"]] if obj.get("entities") is not None else None
+            "entities": [ConnectionBody.from_dict(_item) for _item in obj["entities"]] if obj.get("entities") is not None else None,
+            "update_mask": obj.get("update_mask")
         })
         return _obj
 
diff --git a/airflow_client/client/models/bulk_update_action_pool_body.py b/airflow_client/client/models/bulk_update_action_pool_body.py
index de947ea..d271464 100644
--- a/airflow_client/client/models/bulk_update_action_pool_body.py
+++ b/airflow_client/client/models/bulk_update_action_pool_body.py
@@ -23,6 +23,7 @@
 from airflow_client.client.models.pool_body import PoolBody
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class BulkUpdateActionPoolBody(BaseModel):
     """
@@ -31,7 +32,8 @@
     action: StrictStr = Field(description="The action to be performed on the entities.")
     action_on_non_existence: Optional[BulkActionNotOnExistence] = None
     entities: List[PoolBody] = Field(description="A list of entities to be updated.")
-    __properties: ClassVar[List[str]] = ["action", "action_on_non_existence", "entities"]
+    update_mask: Optional[List[StrictStr]] = None
+    __properties: ClassVar[List[str]] = ["action", "action_on_non_existence", "entities", "update_mask"]
 
     @field_validator('action')
     def action_validate_enum(cls, value):
@@ -41,7 +43,8 @@
         return value
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -53,8 +56,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
@@ -100,7 +102,8 @@
         _obj = cls.model_validate({
             "action": obj.get("action"),
             "action_on_non_existence": obj.get("action_on_non_existence"),
-            "entities": [PoolBody.from_dict(_item) for _item in obj["entities"]] if obj.get("entities") is not None else None
+            "entities": [PoolBody.from_dict(_item) for _item in obj["entities"]] if obj.get("entities") is not None else None,
+            "update_mask": obj.get("update_mask")
         })
         return _obj
 
diff --git a/airflow_client/client/models/bulk_update_action_variable_body.py b/airflow_client/client/models/bulk_update_action_variable_body.py
index be071ff..e82c240 100644
--- a/airflow_client/client/models/bulk_update_action_variable_body.py
+++ b/airflow_client/client/models/bulk_update_action_variable_body.py
@@ -23,6 +23,7 @@
 from airflow_client.client.models.variable_body import VariableBody
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class BulkUpdateActionVariableBody(BaseModel):
     """
@@ -31,7 +32,8 @@
     action: StrictStr = Field(description="The action to be performed on the entities.")
     action_on_non_existence: Optional[BulkActionNotOnExistence] = None
     entities: List[VariableBody] = Field(description="A list of entities to be updated.")
-    __properties: ClassVar[List[str]] = ["action", "action_on_non_existence", "entities"]
+    update_mask: Optional[List[StrictStr]] = None
+    __properties: ClassVar[List[str]] = ["action", "action_on_non_existence", "entities", "update_mask"]
 
     @field_validator('action')
     def action_validate_enum(cls, value):
@@ -41,7 +43,8 @@
         return value
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -53,8 +56,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
@@ -100,7 +102,8 @@
         _obj = cls.model_validate({
             "action": obj.get("action"),
             "action_on_non_existence": obj.get("action_on_non_existence"),
-            "entities": [VariableBody.from_dict(_item) for _item in obj["entities"]] if obj.get("entities") is not None else None
+            "entities": [VariableBody.from_dict(_item) for _item in obj["entities"]] if obj.get("entities") is not None else None,
+            "update_mask": obj.get("update_mask")
         })
         return _obj
 
diff --git a/airflow_client/client/models/clear_task_instances_body.py b/airflow_client/client/models/clear_task_instances_body.py
index d605c84..1ac4193 100644
--- a/airflow_client/client/models/clear_task_instances_body.py
+++ b/airflow_client/client/models/clear_task_instances_body.py
@@ -23,6 +23,7 @@
 from airflow_client.client.models.clear_task_instances_body_task_ids_inner import ClearTaskInstancesBodyTaskIdsInner
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class ClearTaskInstancesBody(BaseModel):
     """
@@ -37,14 +38,16 @@
     include_upstream: Optional[StrictBool] = False
     only_failed: Optional[StrictBool] = True
     only_running: Optional[StrictBool] = False
+    prevent_running_task: Optional[StrictBool] = False
     reset_dag_runs: Optional[StrictBool] = True
     run_on_latest_version: Optional[StrictBool] = Field(default=False, description="(Experimental) Run on the latest bundle version of the dag after clearing the task instances.")
     start_date: Optional[datetime] = None
     task_ids: Optional[List[ClearTaskInstancesBodyTaskIdsInner]] = None
-    __properties: ClassVar[List[str]] = ["dag_run_id", "dry_run", "end_date", "include_downstream", "include_future", "include_past", "include_upstream", "only_failed", "only_running", "reset_dag_runs", "run_on_latest_version", "start_date", "task_ids"]
+    __properties: ClassVar[List[str]] = ["dag_run_id", "dry_run", "end_date", "include_downstream", "include_future", "include_past", "include_upstream", "only_failed", "only_running", "prevent_running_task", "reset_dag_runs", "run_on_latest_version", "start_date", "task_ids"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -56,8 +59,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
@@ -110,6 +112,7 @@
             "include_upstream": obj.get("include_upstream") if obj.get("include_upstream") is not None else False,
             "only_failed": obj.get("only_failed") if obj.get("only_failed") is not None else True,
             "only_running": obj.get("only_running") if obj.get("only_running") is not None else False,
+            "prevent_running_task": obj.get("prevent_running_task") if obj.get("prevent_running_task") is not None else False,
             "reset_dag_runs": obj.get("reset_dag_runs") if obj.get("reset_dag_runs") is not None else True,
             "run_on_latest_version": obj.get("run_on_latest_version") if obj.get("run_on_latest_version") is not None else False,
             "start_date": obj.get("start_date"),
diff --git a/airflow_client/client/models/config.py b/airflow_client/client/models/config.py
index 1bfaf0d..85e17a4 100644
--- a/airflow_client/client/models/config.py
+++ b/airflow_client/client/models/config.py
@@ -22,6 +22,7 @@
 from airflow_client.client.models.config_section import ConfigSection
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class Config(BaseModel):
     """
@@ -31,7 +32,8 @@
     __properties: ClassVar[List[str]] = ["sections"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -43,8 +45,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/config_option.py b/airflow_client/client/models/config_option.py
index 195191f..69eb92d 100644
--- a/airflow_client/client/models/config_option.py
+++ b/airflow_client/client/models/config_option.py
@@ -22,6 +22,7 @@
 from airflow_client.client.models.value import Value
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class ConfigOption(BaseModel):
     """
@@ -32,7 +33,8 @@
     __properties: ClassVar[List[str]] = ["key", "value"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -44,8 +46,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/config_section.py b/airflow_client/client/models/config_section.py
index 0d8c4fd..0c4f4b8 100644
--- a/airflow_client/client/models/config_section.py
+++ b/airflow_client/client/models/config_section.py
@@ -22,6 +22,7 @@
 from airflow_client.client.models.config_option import ConfigOption
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class ConfigSection(BaseModel):
     """
@@ -32,7 +33,8 @@
     __properties: ClassVar[List[str]] = ["name", "options"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -44,8 +46,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/connection_body.py b/airflow_client/client/models/connection_body.py
index 4cf34c5..e1f1382 100644
--- a/airflow_client/client/models/connection_body.py
+++ b/airflow_client/client/models/connection_body.py
@@ -22,6 +22,7 @@
 from typing_extensions import Annotated
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class ConnectionBody(BaseModel):
     """
@@ -36,7 +37,8 @@
     password: Optional[StrictStr] = None
     port: Optional[StrictInt] = None
     var_schema: Optional[StrictStr] = Field(default=None, alias="schema")
-    __properties: ClassVar[List[str]] = ["conn_type", "connection_id", "description", "extra", "host", "login", "password", "port", "schema"]
+    team_name: Optional[Annotated[str, Field(strict=True, max_length=50)]] = None
+    __properties: ClassVar[List[str]] = ["conn_type", "connection_id", "description", "extra", "host", "login", "password", "port", "schema", "team_name"]
 
     @field_validator('connection_id')
     def connection_id_validate_regular_expression(cls, value):
@@ -46,7 +48,8 @@
         return value
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -58,8 +61,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
@@ -104,7 +106,8 @@
             "login": obj.get("login"),
             "password": obj.get("password"),
             "port": obj.get("port"),
-            "schema": obj.get("schema")
+            "schema": obj.get("schema"),
+            "team_name": obj.get("team_name")
         })
         return _obj
 
diff --git a/airflow_client/client/models/connection_collection_response.py b/airflow_client/client/models/connection_collection_response.py
index f601b6f..1f597c6 100644
--- a/airflow_client/client/models/connection_collection_response.py
+++ b/airflow_client/client/models/connection_collection_response.py
@@ -22,6 +22,7 @@
 from airflow_client.client.models.connection_response import ConnectionResponse
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class ConnectionCollectionResponse(BaseModel):
     """
@@ -32,7 +33,8 @@
     __properties: ClassVar[List[str]] = ["connections", "total_entries"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -44,8 +46,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/connection_response.py b/airflow_client/client/models/connection_response.py
index 32b238f..b48921a 100644
--- a/airflow_client/client/models/connection_response.py
+++ b/airflow_client/client/models/connection_response.py
@@ -21,6 +21,7 @@
 from typing import Any, ClassVar, Dict, List, Optional
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class ConnectionResponse(BaseModel):
     """
@@ -35,10 +36,12 @@
     password: Optional[StrictStr] = None
     port: Optional[StrictInt] = None
     var_schema: Optional[StrictStr] = Field(default=None, alias="schema")
-    __properties: ClassVar[List[str]] = ["conn_type", "connection_id", "description", "extra", "host", "login", "password", "port", "schema"]
+    team_name: Optional[StrictStr] = None
+    __properties: ClassVar[List[str]] = ["conn_type", "connection_id", "description", "extra", "host", "login", "password", "port", "schema", "team_name"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -50,8 +53,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
@@ -96,7 +98,8 @@
             "login": obj.get("login"),
             "password": obj.get("password"),
             "port": obj.get("port"),
-            "schema": obj.get("schema")
+            "schema": obj.get("schema"),
+            "team_name": obj.get("team_name")
         })
         return _obj
 
diff --git a/airflow_client/client/models/connection_test_response.py b/airflow_client/client/models/connection_test_response.py
index 3c3a37e..2c5be59 100644
--- a/airflow_client/client/models/connection_test_response.py
+++ b/airflow_client/client/models/connection_test_response.py
@@ -21,6 +21,7 @@
 from typing import Any, ClassVar, Dict, List
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class ConnectionTestResponse(BaseModel):
     """
@@ -31,7 +32,8 @@
     __properties: ClassVar[List[str]] = ["message", "status"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -43,8 +45,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/create_asset_events_body.py b/airflow_client/client/models/create_asset_events_body.py
index aaef5ca..acc4273 100644
--- a/airflow_client/client/models/create_asset_events_body.py
+++ b/airflow_client/client/models/create_asset_events_body.py
@@ -17,10 +17,11 @@
 import re  # noqa: F401
 import json
 
-from pydantic import BaseModel, ConfigDict, StrictInt
+from pydantic import BaseModel, ConfigDict, StrictInt, StrictStr
 from typing import Any, ClassVar, Dict, List, Optional
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class CreateAssetEventsBody(BaseModel):
     """
@@ -28,11 +29,13 @@
     """ # noqa: E501
     asset_id: StrictInt
     extra: Optional[Dict[str, Any]] = None
+    partition_key: Optional[StrictStr] = None
     additional_properties: Dict[str, Any] = {}
-    __properties: ClassVar[List[str]] = ["asset_id", "extra"]
+    __properties: ClassVar[List[str]] = ["asset_id", "extra", "partition_key"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -44,8 +47,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
@@ -90,7 +92,8 @@
 
         _obj = cls.model_validate({
             "asset_id": obj.get("asset_id"),
-            "extra": obj.get("extra")
+            "extra": obj.get("extra"),
+            "partition_key": obj.get("partition_key")
         })
         # store additional fields in additional_properties
         for _key in obj.keys():
diff --git a/airflow_client/client/models/dag_collection_response.py b/airflow_client/client/models/dag_collection_response.py
index 4ba7557..ff4081d 100644
--- a/airflow_client/client/models/dag_collection_response.py
+++ b/airflow_client/client/models/dag_collection_response.py
@@ -22,6 +22,7 @@
 from airflow_client.client.models.dag_response import DAGResponse
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class DAGCollectionResponse(BaseModel):
     """
@@ -32,7 +33,8 @@
     __properties: ClassVar[List[str]] = ["dags", "total_entries"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -44,8 +46,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/dag_details_response.py b/airflow_client/client/models/dag_details_response.py
index c258b95..bd63f24 100644
--- a/airflow_client/client/models/dag_details_response.py
+++ b/airflow_client/client/models/dag_details_response.py
@@ -20,15 +20,19 @@
 from datetime import datetime
 from pydantic import BaseModel, ConfigDict, Field, StrictBool, StrictFloat, StrictInt, StrictStr
 from typing import Any, ClassVar, Dict, List, Optional, Union
+from airflow_client.client.models.dag_run_type import DagRunType
 from airflow_client.client.models.dag_tag_response import DagTagResponse
 from airflow_client.client.models.dag_version_response import DagVersionResponse
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class DAGDetailsResponse(BaseModel):
     """
     Specific serializer for DAG Details responses.
     """ # noqa: E501
+    active_runs_count: Optional[StrictInt] = 0
+    allowed_run_types: Optional[List[DagRunType]] = None
     asset_expression: Optional[Dict[str, Any]] = None
     bundle_name: Optional[StrictStr] = None
     bundle_version: Optional[StrictStr] = None
@@ -70,12 +74,14 @@
     tags: List[DagTagResponse]
     template_search_path: Optional[List[StrictStr]] = None
     timetable_description: Optional[StrictStr] = None
+    timetable_partitioned: StrictBool
     timetable_summary: Optional[StrictStr] = None
     timezone: Optional[StrictStr] = None
-    __properties: ClassVar[List[str]] = ["asset_expression", "bundle_name", "bundle_version", "catchup", "concurrency", "dag_display_name", "dag_id", "dag_run_timeout", "default_args", "description", "doc_md", "end_date", "file_token", "fileloc", "has_import_errors", "has_task_concurrency_limits", "is_favorite", "is_paused", "is_paused_upon_creation", "is_stale", "last_expired", "last_parse_duration", "last_parsed", "last_parsed_time", "latest_dag_version", "max_active_runs", "max_active_tasks", "max_consecutive_failed_dag_runs", "next_dagrun_data_interval_end", "next_dagrun_data_interval_start", "next_dagrun_logical_date", "next_dagrun_run_after", "owner_links", "owners", "params", "relative_fileloc", "render_template_as_native_obj", "start_date", "tags", "template_search_path", "timetable_description", "timetable_summary", "timezone"]
+    __properties: ClassVar[List[str]] = ["active_runs_count", "allowed_run_types", "asset_expression", "bundle_name", "bundle_version", "catchup", "concurrency", "dag_display_name", "dag_id", "dag_run_timeout", "default_args", "description", "doc_md", "end_date", "file_token", "fileloc", "has_import_errors", "has_task_concurrency_limits", "is_favorite", "is_paused", "is_paused_upon_creation", "is_stale", "last_expired", "last_parse_duration", "last_parsed", "last_parsed_time", "latest_dag_version", "max_active_runs", "max_active_tasks", "max_consecutive_failed_dag_runs", "next_dagrun_data_interval_end", "next_dagrun_data_interval_start", "next_dagrun_logical_date", "next_dagrun_run_after", "owner_links", "owners", "params", "relative_fileloc", "render_template_as_native_obj", "start_date", "tags", "template_search_path", "timetable_description", "timetable_partitioned", "timetable_summary", "timezone"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -87,8 +93,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
@@ -139,6 +144,8 @@
             return cls.model_validate(obj)
 
         _obj = cls.model_validate({
+            "active_runs_count": obj.get("active_runs_count") if obj.get("active_runs_count") is not None else 0,
+            "allowed_run_types": obj.get("allowed_run_types"),
             "asset_expression": obj.get("asset_expression"),
             "bundle_name": obj.get("bundle_name"),
             "bundle_version": obj.get("bundle_version"),
@@ -180,6 +187,7 @@
             "tags": [DagTagResponse.from_dict(_item) for _item in obj["tags"]] if obj.get("tags") is not None else None,
             "template_search_path": obj.get("template_search_path"),
             "timetable_description": obj.get("timetable_description"),
+            "timetable_partitioned": obj.get("timetable_partitioned"),
             "timetable_summary": obj.get("timetable_summary"),
             "timezone": obj.get("timezone")
         })
diff --git a/airflow_client/client/models/dag_patch_body.py b/airflow_client/client/models/dag_patch_body.py
index fdb4389..a2c12df 100644
--- a/airflow_client/client/models/dag_patch_body.py
+++ b/airflow_client/client/models/dag_patch_body.py
@@ -21,6 +21,7 @@
 from typing import Any, ClassVar, Dict, List
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class DAGPatchBody(BaseModel):
     """
@@ -30,7 +31,8 @@
     __properties: ClassVar[List[str]] = ["is_paused"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -42,8 +44,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/dag_processor_info_response.py b/airflow_client/client/models/dag_processor_info_response.py
index 0a44381..72366d6 100644
--- a/airflow_client/client/models/dag_processor_info_response.py
+++ b/airflow_client/client/models/dag_processor_info_response.py
@@ -21,6 +21,7 @@
 from typing import Any, ClassVar, Dict, List, Optional
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class DagProcessorInfoResponse(BaseModel):
     """
@@ -31,7 +32,8 @@
     __properties: ClassVar[List[str]] = ["latest_dag_processor_heartbeat", "status"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -43,8 +45,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/dag_response.py b/airflow_client/client/models/dag_response.py
index f1e0bd2..9b913e2 100644
--- a/airflow_client/client/models/dag_response.py
+++ b/airflow_client/client/models/dag_response.py
@@ -20,14 +20,17 @@
 from datetime import datetime
 from pydantic import BaseModel, ConfigDict, Field, StrictBool, StrictFloat, StrictInt, StrictStr
 from typing import Any, ClassVar, Dict, List, Optional, Union
+from airflow_client.client.models.dag_run_type import DagRunType
 from airflow_client.client.models.dag_tag_response import DagTagResponse
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class DAGResponse(BaseModel):
     """
     DAG serializer for responses.
     """ # noqa: E501
+    allowed_run_types: Optional[List[DagRunType]] = None
     bundle_name: Optional[StrictStr] = None
     bundle_version: Optional[StrictStr] = None
     dag_display_name: StrictStr
@@ -53,11 +56,13 @@
     relative_fileloc: Optional[StrictStr] = None
     tags: List[DagTagResponse]
     timetable_description: Optional[StrictStr] = None
+    timetable_partitioned: StrictBool
     timetable_summary: Optional[StrictStr] = None
-    __properties: ClassVar[List[str]] = ["bundle_name", "bundle_version", "dag_display_name", "dag_id", "description", "file_token", "fileloc", "has_import_errors", "has_task_concurrency_limits", "is_paused", "is_stale", "last_expired", "last_parse_duration", "last_parsed_time", "max_active_runs", "max_active_tasks", "max_consecutive_failed_dag_runs", "next_dagrun_data_interval_end", "next_dagrun_data_interval_start", "next_dagrun_logical_date", "next_dagrun_run_after", "owners", "relative_fileloc", "tags", "timetable_description", "timetable_summary"]
+    __properties: ClassVar[List[str]] = ["allowed_run_types", "bundle_name", "bundle_version", "dag_display_name", "dag_id", "description", "file_token", "fileloc", "has_import_errors", "has_task_concurrency_limits", "is_paused", "is_stale", "last_expired", "last_parse_duration", "last_parsed_time", "max_active_runs", "max_active_tasks", "max_consecutive_failed_dag_runs", "next_dagrun_data_interval_end", "next_dagrun_data_interval_start", "next_dagrun_logical_date", "next_dagrun_run_after", "owners", "relative_fileloc", "tags", "timetable_description", "timetable_partitioned", "timetable_summary"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -69,8 +74,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
@@ -116,6 +120,7 @@
             return cls.model_validate(obj)
 
         _obj = cls.model_validate({
+            "allowed_run_types": obj.get("allowed_run_types"),
             "bundle_name": obj.get("bundle_name"),
             "bundle_version": obj.get("bundle_version"),
             "dag_display_name": obj.get("dag_display_name"),
@@ -141,6 +146,7 @@
             "relative_fileloc": obj.get("relative_fileloc"),
             "tags": [DagTagResponse.from_dict(_item) for _item in obj["tags"]] if obj.get("tags") is not None else None,
             "timetable_description": obj.get("timetable_description"),
+            "timetable_partitioned": obj.get("timetable_partitioned"),
             "timetable_summary": obj.get("timetable_summary")
         })
         return _obj
diff --git a/airflow_client/client/models/dag_run_asset_reference.py b/airflow_client/client/models/dag_run_asset_reference.py
index cfdf5eb..546bbb0 100644
--- a/airflow_client/client/models/dag_run_asset_reference.py
+++ b/airflow_client/client/models/dag_run_asset_reference.py
@@ -22,23 +22,26 @@
 from typing import Any, ClassVar, Dict, List, Optional
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class DagRunAssetReference(BaseModel):
     """
-    DAGRun serializer for asset responses.
+    DagRun serializer for asset responses.
     """ # noqa: E501
     dag_id: StrictStr
     data_interval_end: Optional[datetime] = None
     data_interval_start: Optional[datetime] = None
     end_date: Optional[datetime] = None
     logical_date: Optional[datetime] = None
+    partition_key: Optional[StrictStr] = None
     run_id: StrictStr
     start_date: datetime
     state: StrictStr
-    __properties: ClassVar[List[str]] = ["dag_id", "data_interval_end", "data_interval_start", "end_date", "logical_date", "run_id", "start_date", "state"]
+    __properties: ClassVar[List[str]] = ["dag_id", "data_interval_end", "data_interval_start", "end_date", "logical_date", "partition_key", "run_id", "start_date", "state"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -50,8 +53,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
@@ -93,6 +95,7 @@
             "data_interval_start": obj.get("data_interval_start"),
             "end_date": obj.get("end_date"),
             "logical_date": obj.get("logical_date"),
+            "partition_key": obj.get("partition_key"),
             "run_id": obj.get("run_id"),
             "start_date": obj.get("start_date"),
             "state": obj.get("state")
diff --git a/airflow_client/client/models/dag_run_clear_body.py b/airflow_client/client/models/dag_run_clear_body.py
index d415920..835194c 100644
--- a/airflow_client/client/models/dag_run_clear_body.py
+++ b/airflow_client/client/models/dag_run_clear_body.py
@@ -21,6 +21,7 @@
 from typing import Any, ClassVar, Dict, List, Optional
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class DAGRunClearBody(BaseModel):
     """
@@ -32,7 +33,8 @@
     __properties: ClassVar[List[str]] = ["dry_run", "only_failed", "run_on_latest_version"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -44,8 +46,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/dag_run_collection_response.py b/airflow_client/client/models/dag_run_collection_response.py
index afb2661..5c2f703 100644
--- a/airflow_client/client/models/dag_run_collection_response.py
+++ b/airflow_client/client/models/dag_run_collection_response.py
@@ -22,6 +22,7 @@
 from airflow_client.client.models.dag_run_response import DAGRunResponse
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class DAGRunCollectionResponse(BaseModel):
     """
@@ -32,7 +33,8 @@
     __properties: ClassVar[List[str]] = ["dag_runs", "total_entries"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -44,8 +46,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/dag_run_patch_body.py b/airflow_client/client/models/dag_run_patch_body.py
index 9f4c3ba..ad30816 100644
--- a/airflow_client/client/models/dag_run_patch_body.py
+++ b/airflow_client/client/models/dag_run_patch_body.py
@@ -23,6 +23,7 @@
 from airflow_client.client.models.dag_run_patch_states import DAGRunPatchStates
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class DAGRunPatchBody(BaseModel):
     """
@@ -33,7 +34,8 @@
     __properties: ClassVar[List[str]] = ["note", "state"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -45,8 +47,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/dag_run_response.py b/airflow_client/client/models/dag_run_response.py
index ecf2b54..a888a09 100644
--- a/airflow_client/client/models/dag_run_response.py
+++ b/airflow_client/client/models/dag_run_response.py
@@ -26,6 +26,7 @@
 from airflow_client.client.models.dag_version_response import DagVersionResponse
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class DAGRunResponse(BaseModel):
     """
@@ -44,6 +45,7 @@
     last_scheduling_decision: Optional[datetime] = None
     logical_date: Optional[datetime] = None
     note: Optional[StrictStr] = None
+    partition_key: Optional[StrictStr] = None
     queued_at: Optional[datetime] = None
     run_after: datetime
     run_type: DagRunType
@@ -51,10 +53,11 @@
     state: DagRunState
     triggered_by: Optional[DagRunTriggeredByType] = None
     triggering_user_name: Optional[StrictStr] = None
-    __properties: ClassVar[List[str]] = ["bundle_version", "conf", "dag_display_name", "dag_id", "dag_run_id", "dag_versions", "data_interval_end", "data_interval_start", "duration", "end_date", "last_scheduling_decision", "logical_date", "note", "queued_at", "run_after", "run_type", "start_date", "state", "triggered_by", "triggering_user_name"]
+    __properties: ClassVar[List[str]] = ["bundle_version", "conf", "dag_display_name", "dag_id", "dag_run_id", "dag_versions", "data_interval_end", "data_interval_start", "duration", "end_date", "last_scheduling_decision", "logical_date", "note", "partition_key", "queued_at", "run_after", "run_type", "start_date", "state", "triggered_by", "triggering_user_name"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -66,8 +69,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
@@ -124,6 +126,7 @@
             "last_scheduling_decision": obj.get("last_scheduling_decision"),
             "logical_date": obj.get("logical_date"),
             "note": obj.get("note"),
+            "partition_key": obj.get("partition_key"),
             "queued_at": obj.get("queued_at"),
             "run_after": obj.get("run_after"),
             "run_type": obj.get("run_type"),
diff --git a/airflow_client/client/models/dag_run_type.py b/airflow_client/client/models/dag_run_type.py
index 0dc00da..6fe759e 100644
--- a/airflow_client/client/models/dag_run_type.py
+++ b/airflow_client/client/models/dag_run_type.py
@@ -30,6 +30,7 @@
     SCHEDULED = 'scheduled'
     MANUAL = 'manual'
     ASSET_TRIGGERED = 'asset_triggered'
+    ASSET_MATERIALIZATION = 'asset_materialization'
 
     @classmethod
     def from_json(cls, json_str: str) -> Self:
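The new `asset_materialization` member above means client code that dispatches on `run_type` should account for a fourth value. A stdlib sketch of the lookup (a stand-in mirroring the generated `DagRunType`, abbreviated to the members visible in this hunk):

```python
from enum import Enum


class DagRunType(str, Enum):
    # Stand-in for airflow_client.client.models.dag_run_type.DagRunType
    # (members shown here are those visible in the hunk above).
    SCHEDULED = 'scheduled'
    MANUAL = 'manual'
    ASSET_TRIGGERED = 'asset_triggered'
    ASSET_MATERIALIZATION = 'asset_materialization'  # new in 3.2.0


# Because the enum subclasses str, the wire value from an API response
# resolves directly to the member and compares equal to the raw string.
run_type = DagRunType('asset_materialization')
```

Deserializing a 3.2.0 server response with a pre-3.2.0 client would instead fail this lookup with a `ValueError`, which is why the enum and client version must move together.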
diff --git a/airflow_client/client/models/dag_runs_batch_body.py b/airflow_client/client/models/dag_runs_batch_body.py
index 4d43680..67ebf9b 100644
--- a/airflow_client/client/models/dag_runs_batch_body.py
+++ b/airflow_client/client/models/dag_runs_batch_body.py
@@ -18,18 +18,24 @@
 import json
 
 from datetime import datetime
-from pydantic import BaseModel, ConfigDict, Field, StrictStr
-from typing import Any, ClassVar, Dict, List, Optional
+from pydantic import BaseModel, ConfigDict, Field, StrictFloat, StrictInt, StrictStr
+from typing import Any, ClassVar, Dict, List, Optional, Union
 from typing_extensions import Annotated
 from airflow_client.client.models.dag_run_state import DagRunState
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class DAGRunsBatchBody(BaseModel):
     """
     List DAG Runs body for batch endpoint.
     """ # noqa: E501
+    conf_contains: Optional[StrictStr] = None
     dag_ids: Optional[List[StrictStr]] = None
+    duration_gt: Optional[Union[StrictFloat, StrictInt]] = None
+    duration_gte: Optional[Union[StrictFloat, StrictInt]] = None
+    duration_lt: Optional[Union[StrictFloat, StrictInt]] = None
+    duration_lte: Optional[Union[StrictFloat, StrictInt]] = None
     end_date_gt: Optional[datetime] = None
     end_date_gte: Optional[datetime] = None
     end_date_lt: Optional[datetime] = None
@@ -50,10 +56,11 @@
     start_date_lt: Optional[datetime] = None
     start_date_lte: Optional[datetime] = None
     states: Optional[List[Optional[DagRunState]]] = None
-    __properties: ClassVar[List[str]] = ["dag_ids", "end_date_gt", "end_date_gte", "end_date_lt", "end_date_lte", "logical_date_gt", "logical_date_gte", "logical_date_lt", "logical_date_lte", "order_by", "page_limit", "page_offset", "run_after_gt", "run_after_gte", "run_after_lt", "run_after_lte", "start_date_gt", "start_date_gte", "start_date_lt", "start_date_lte", "states"]
+    __properties: ClassVar[List[str]] = ["conf_contains", "dag_ids", "duration_gt", "duration_gte", "duration_lt", "duration_lte", "end_date_gt", "end_date_gte", "end_date_lt", "end_date_lte", "logical_date_gt", "logical_date_gte", "logical_date_lt", "logical_date_lte", "order_by", "page_limit", "page_offset", "run_after_gt", "run_after_gte", "run_after_lt", "run_after_lte", "start_date_gt", "start_date_gte", "start_date_lt", "start_date_lte", "states"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -65,8 +72,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
@@ -103,7 +109,12 @@
             return cls.model_validate(obj)
 
         _obj = cls.model_validate({
+            "conf_contains": obj.get("conf_contains"),
             "dag_ids": obj.get("dag_ids"),
+            "duration_gt": obj.get("duration_gt"),
+            "duration_gte": obj.get("duration_gte"),
+            "duration_lt": obj.get("duration_lt"),
+            "duration_lte": obj.get("duration_lte"),
             "end_date_gt": obj.get("end_date_gt"),
             "end_date_gte": obj.get("end_date_gte"),
             "end_date_lt": obj.get("end_date_lt"),
diff --git a/airflow_client/client/models/dag_schedule_asset_reference.py b/airflow_client/client/models/dag_schedule_asset_reference.py
index 290b05d..8aec05a 100644
--- a/airflow_client/client/models/dag_schedule_asset_reference.py
+++ b/airflow_client/client/models/dag_schedule_asset_reference.py
@@ -22,6 +22,7 @@
 from typing import Any, ClassVar, Dict, List
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class DagScheduleAssetReference(BaseModel):
     """
@@ -33,7 +34,8 @@
     __properties: ClassVar[List[str]] = ["created_at", "dag_id", "updated_at"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -45,8 +47,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/dag_source_response.py b/airflow_client/client/models/dag_source_response.py
index a93f3cc..eea33a3 100644
--- a/airflow_client/client/models/dag_source_response.py
+++ b/airflow_client/client/models/dag_source_response.py
@@ -21,6 +21,7 @@
 from typing import Any, ClassVar, Dict, List, Optional
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class DAGSourceResponse(BaseModel):
     """
@@ -33,7 +34,8 @@
     __properties: ClassVar[List[str]] = ["content", "dag_display_name", "dag_id", "version_number"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -45,8 +47,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/dag_stats_collection_response.py b/airflow_client/client/models/dag_stats_collection_response.py
index 3e585fe..4866f04 100644
--- a/airflow_client/client/models/dag_stats_collection_response.py
+++ b/airflow_client/client/models/dag_stats_collection_response.py
@@ -22,6 +22,7 @@
 from airflow_client.client.models.dag_stats_response import DagStatsResponse
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class DagStatsCollectionResponse(BaseModel):
     """
@@ -32,7 +33,8 @@
     __properties: ClassVar[List[str]] = ["dags", "total_entries"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -44,8 +46,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/dag_stats_response.py b/airflow_client/client/models/dag_stats_response.py
index 3fcc1b9..d724e6b 100644
--- a/airflow_client/client/models/dag_stats_response.py
+++ b/airflow_client/client/models/dag_stats_response.py
@@ -22,6 +22,7 @@
 from airflow_client.client.models.dag_stats_state_response import DagStatsStateResponse
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class DagStatsResponse(BaseModel):
     """
@@ -33,7 +34,8 @@
     __properties: ClassVar[List[str]] = ["dag_display_name", "dag_id", "stats"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -45,8 +47,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/dag_stats_state_response.py b/airflow_client/client/models/dag_stats_state_response.py
index 6cd899d..63798b1 100644
--- a/airflow_client/client/models/dag_stats_state_response.py
+++ b/airflow_client/client/models/dag_stats_state_response.py
@@ -22,6 +22,7 @@
 from airflow_client.client.models.dag_run_state import DagRunState
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class DagStatsStateResponse(BaseModel):
     """
@@ -32,7 +33,8 @@
     __properties: ClassVar[List[str]] = ["count", "state"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -44,8 +46,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/dag_tag_collection_response.py b/airflow_client/client/models/dag_tag_collection_response.py
index c9fe2ff..316597c 100644
--- a/airflow_client/client/models/dag_tag_collection_response.py
+++ b/airflow_client/client/models/dag_tag_collection_response.py
@@ -21,6 +21,7 @@
 from typing import Any, ClassVar, Dict, List
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class DAGTagCollectionResponse(BaseModel):
     """
@@ -31,7 +32,8 @@
     __properties: ClassVar[List[str]] = ["tags", "total_entries"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -43,8 +45,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/dag_tag_response.py b/airflow_client/client/models/dag_tag_response.py
index 0079203..962dd61 100644
--- a/airflow_client/client/models/dag_tag_response.py
+++ b/airflow_client/client/models/dag_tag_response.py
@@ -21,6 +21,7 @@
 from typing import Any, ClassVar, Dict, List
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class DagTagResponse(BaseModel):
     """
@@ -32,7 +33,8 @@
     __properties: ClassVar[List[str]] = ["dag_display_name", "dag_id", "name"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -44,8 +46,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/dag_version_collection_response.py b/airflow_client/client/models/dag_version_collection_response.py
index 6c3cb31..69862c5 100644
--- a/airflow_client/client/models/dag_version_collection_response.py
+++ b/airflow_client/client/models/dag_version_collection_response.py
@@ -22,6 +22,7 @@
 from airflow_client.client.models.dag_version_response import DagVersionResponse
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class DAGVersionCollectionResponse(BaseModel):
     """
@@ -32,7 +33,8 @@
     __properties: ClassVar[List[str]] = ["dag_versions", "total_entries"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -44,8 +46,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/dag_version_response.py b/airflow_client/client/models/dag_version_response.py
index c0bcc75..776ba62 100644
--- a/airflow_client/client/models/dag_version_response.py
+++ b/airflow_client/client/models/dag_version_response.py
@@ -23,6 +23,7 @@
 from uuid import UUID
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class DagVersionResponse(BaseModel):
     """
@@ -39,7 +40,8 @@
     __properties: ClassVar[List[str]] = ["bundle_name", "bundle_url", "bundle_version", "created_at", "dag_display_name", "dag_id", "id", "version_number"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -51,8 +53,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/dag_warning_collection_response.py b/airflow_client/client/models/dag_warning_collection_response.py
index 69ccea4..60013d8 100644
--- a/airflow_client/client/models/dag_warning_collection_response.py
+++ b/airflow_client/client/models/dag_warning_collection_response.py
@@ -22,6 +22,7 @@
 from airflow_client.client.models.dag_warning_response import DAGWarningResponse
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class DAGWarningCollectionResponse(BaseModel):
     """
@@ -32,7 +33,8 @@
     __properties: ClassVar[List[str]] = ["dag_warnings", "total_entries"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -44,8 +46,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/dag_warning_response.py b/airflow_client/client/models/dag_warning_response.py
index 9d7bc57..7b571bf 100644
--- a/airflow_client/client/models/dag_warning_response.py
+++ b/airflow_client/client/models/dag_warning_response.py
@@ -23,6 +23,7 @@
 from airflow_client.client.models.dag_warning_type import DagWarningType
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class DAGWarningResponse(BaseModel):
     """
@@ -36,7 +37,8 @@
     __properties: ClassVar[List[str]] = ["dag_display_name", "dag_id", "message", "timestamp", "warning_type"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -48,8 +50,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/dag_warning_type.py b/airflow_client/client/models/dag_warning_type.py
index a5dcc3b..252aafc 100644
--- a/airflow_client/client/models/dag_warning_type.py
+++ b/airflow_client/client/models/dag_warning_type.py
@@ -28,6 +28,7 @@
     """
     ASSET_CONFLICT = 'asset conflict'
     NON_MINUS_EXISTENT_POOL = 'non-existent pool'
+    RUNTIME_VARYING_VALUE = 'runtime varying value'
 
     @classmethod
     def from_json(cls, json_str: str) -> Self:
diff --git a/airflow_client/client/models/dry_run_backfill_collection_response.py b/airflow_client/client/models/dry_run_backfill_collection_response.py
index b8acdc4..732ace4 100644
--- a/airflow_client/client/models/dry_run_backfill_collection_response.py
+++ b/airflow_client/client/models/dry_run_backfill_collection_response.py
@@ -22,6 +22,7 @@
 from airflow_client.client.models.dry_run_backfill_response import DryRunBackfillResponse
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class DryRunBackfillCollectionResponse(BaseModel):
     """
@@ -32,7 +33,8 @@
     __properties: ClassVar[List[str]] = ["backfills", "total_entries"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -44,8 +46,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/dry_run_backfill_response.py b/airflow_client/client/models/dry_run_backfill_response.py
index 6bbb751..239e48f 100644
--- a/airflow_client/client/models/dry_run_backfill_response.py
+++ b/airflow_client/client/models/dry_run_backfill_response.py
@@ -18,20 +18,24 @@
 import json
 
 from datetime import datetime
-from pydantic import BaseModel, ConfigDict
-from typing import Any, ClassVar, Dict, List
+from pydantic import BaseModel, ConfigDict, StrictStr
+from typing import Any, ClassVar, Dict, List, Optional
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class DryRunBackfillResponse(BaseModel):
     """
     Backfill serializer for responses in dry-run mode.
     """ # noqa: E501
-    logical_date: datetime
-    __properties: ClassVar[List[str]] = ["logical_date"]
+    logical_date: Optional[datetime] = None
+    partition_date: Optional[datetime] = None
+    partition_key: Optional[StrictStr] = None
+    __properties: ClassVar[List[str]] = ["logical_date", "partition_date", "partition_key"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -43,8 +47,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
@@ -81,7 +84,9 @@
             return cls.model_validate(obj)
 
         _obj = cls.model_validate({
-            "logical_date": obj.get("logical_date")
+            "logical_date": obj.get("logical_date"),
+            "partition_date": obj.get("partition_date"),
+            "partition_key": obj.get("partition_key")
         })
         return _obj
 
diff --git a/airflow_client/client/models/entities_inner1.py b/airflow_client/client/models/entities_inner1.py
new file mode 100644
index 0000000..6e3fdc8
--- /dev/null
+++ b/airflow_client/client/models/entities_inner1.py
@@ -0,0 +1,136 @@
+# coding: utf-8
+
+"""
+    Airflow API
+
+    Airflow API. All endpoints located under ``/api/v2`` can be used safely, are stable and backward compatible. Endpoints located under ``/ui`` are dedicated to the UI and are subject to breaking change depending on the need of the frontend. Users should not rely on those but use the public ones instead.
+
+    The version of the OpenAPI document: 2
+    Generated by OpenAPI Generator (https://openapi-generator.tech)
+
+    Do not edit the class manually.
+"""  # noqa: E501
+
+
+from __future__ import annotations
+from inspect import getfullargspec
+import json
+import pprint
+import re  # noqa: F401
+from pydantic import BaseModel, ConfigDict, Field, StrictStr, ValidationError, field_validator
+from typing import Optional
+from airflow_client.client.models.connection_body import ConnectionBody
+from typing import Union, Any, List, Set, TYPE_CHECKING, Optional, Dict
+from typing_extensions import Literal, Self
+from pydantic import Field
+
+ENTITIESINNER1_ANY_OF_SCHEMAS = ["ConnectionBody", "str"]
+
+class EntitiesInner1(BaseModel):
+    """
+    EntitiesInner1
+    """
+
+    # data type: str
+    anyof_schema_1_validator: Optional[StrictStr] = None
+    # data type: ConnectionBody
+    anyof_schema_2_validator: Optional[ConnectionBody] = None
+    if TYPE_CHECKING:
+        actual_instance: Optional[Union[ConnectionBody, str]] = None
+    else:
+        actual_instance: Any = None
+    any_of_schemas: Set[str] = { "ConnectionBody", "str" }
+
+    model_config = {
+        "validate_assignment": True,
+        "protected_namespaces": (),
+    }
+
+    def __init__(self, *args, **kwargs) -> None:
+        if args:
+            if len(args) > 1:
+                raise ValueError("If a positional argument is used, only 1 is allowed to set `actual_instance`")
+            if kwargs:
+                raise ValueError("If a positional argument is used, keyword arguments cannot be used.")
+            super().__init__(actual_instance=args[0])
+        else:
+            super().__init__(**kwargs)
+
+    @field_validator('actual_instance')
+    def actual_instance_must_validate_anyof(cls, v):
+        instance = EntitiesInner1.model_construct()
+        error_messages = []
+        # validate data type: str
+        try:
+            instance.anyof_schema_1_validator = v
+            return v
+        except (ValidationError, ValueError) as e:
+            error_messages.append(str(e))
+        # validate data type: ConnectionBody
+        if not isinstance(v, ConnectionBody):
+            error_messages.append(f"Error! Input type `{type(v)}` is not `ConnectionBody`")
+        else:
+            return v
+
+        if error_messages:
+            # no match
+            raise ValueError("No match found when setting the actual_instance in EntitiesInner1 with anyOf schemas: ConnectionBody, str. Details: " + ", ".join(error_messages))
+        else:
+            return v
+
+    @classmethod
+    def from_dict(cls, obj: Dict[str, Any]) -> Self:
+        return cls.from_json(json.dumps(obj))
+
+    @classmethod
+    def from_json(cls, json_str: str) -> Self:
+        """Returns the object represented by the json string"""
+        instance = cls.model_construct()
+        error_messages = []
+        # deserialize data into str
+        try:
+            # validation
+            instance.anyof_schema_1_validator = json.loads(json_str)
+            # assign value to actual_instance
+            instance.actual_instance = instance.anyof_schema_1_validator
+            return instance
+        except (ValidationError, ValueError) as e:
+            error_messages.append(str(e))
+        # anyof_schema_2_validator: Optional[ConnectionBody] = None
+        try:
+            instance.actual_instance = ConnectionBody.from_json(json_str)
+            return instance
+        except (ValidationError, ValueError) as e:
+            error_messages.append(str(e))
+
+        if error_messages:
+            # no match
+            raise ValueError("No match found when deserializing the JSON string into EntitiesInner1 with anyOf schemas: ConnectionBody, str. Details: " + ", ".join(error_messages))
+        else:
+            return instance
+
+    def to_json(self) -> str:
+        """Returns the JSON representation of the actual instance"""
+        if self.actual_instance is None:
+            return "null"
+
+        if hasattr(self.actual_instance, "to_json") and callable(self.actual_instance.to_json):
+            return self.actual_instance.to_json()
+        else:
+            return json.dumps(self.actual_instance)
+
+    def to_dict(self) -> Optional[Union[Dict[str, Any], ConnectionBody, str]]:
+        """Returns the dict representation of the actual instance"""
+        if self.actual_instance is None:
+            return None
+
+        if hasattr(self.actual_instance, "to_dict") and callable(self.actual_instance.to_dict):
+            return self.actual_instance.to_dict()
+        else:
+            return self.actual_instance
+
+    def to_str(self) -> str:
+        """Returns the string representation of the actual instance"""
+        return pprint.pformat(self.model_dump())
+
+
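`EntitiesInner1` (and its siblings below) wrap an anyOf union: each bulk entity is either a bare ID string or a full body object, and `from_json` tries the string schema before the object schema. A stdlib sketch of that dispatch order (the `classify` helper is hypothetical; the real class validates against `StrictStr` and `ConnectionBody`):

```python
import json

def classify(json_str):
    """Mimic EntitiesInner1.from_json: try the str schema first, then the object schema."""
    value = json.loads(json_str)
    if isinstance(value, str):
        return ("str", value)
    if isinstance(value, dict):
        # The generated code delegates to ConnectionBody.from_json here.
        return ("ConnectionBody", value)
    raise ValueError(
        "No match found when deserializing the JSON string with anyOf schemas: ConnectionBody, str"
    )

kind, _ = classify('"my_conn_id"')
kind2, body = classify('{"connection_id": "db", "conn_type": "postgres"}')
```

This is why a bulk-delete request can pass plain connection IDs while a bulk-create request passes full bodies through the same `entities` field.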
diff --git a/airflow_client/client/models/entities_inner2.py b/airflow_client/client/models/entities_inner2.py
new file mode 100644
index 0000000..23e9c39
--- /dev/null
+++ b/airflow_client/client/models/entities_inner2.py
@@ -0,0 +1,136 @@
+# coding: utf-8
+
+"""
+    Airflow API
+
+    Airflow API. All endpoints located under ``/api/v2`` can be used safely, are stable and backward compatible. Endpoints located under ``/ui`` are dedicated to the UI and are subject to breaking change depending on the need of the frontend. Users should not rely on those but use the public ones instead.
+
+    The version of the OpenAPI document: 2
+    Generated by OpenAPI Generator (https://openapi-generator.tech)
+
+    Do not edit the class manually.
+"""  # noqa: E501
+
+
+from __future__ import annotations
+from inspect import getfullargspec
+import json
+import pprint
+import re  # noqa: F401
+from pydantic import BaseModel, ConfigDict, Field, StrictStr, ValidationError, field_validator
+from typing import Optional
+from airflow_client.client.models.pool_body import PoolBody
+from typing import Union, Any, List, Set, TYPE_CHECKING, Optional, Dict
+from typing_extensions import Literal, Self
+from pydantic import Field
+
+ENTITIESINNER2_ANY_OF_SCHEMAS = ["PoolBody", "str"]
+
+class EntitiesInner2(BaseModel):
+    """
+    EntitiesInner2
+    """
+
+    # data type: str
+    anyof_schema_1_validator: Optional[StrictStr] = None
+    # data type: PoolBody
+    anyof_schema_2_validator: Optional[PoolBody] = None
+    if TYPE_CHECKING:
+        actual_instance: Optional[Union[PoolBody, str]] = None
+    else:
+        actual_instance: Any = None
+    any_of_schemas: Set[str] = { "PoolBody", "str" }
+
+    model_config = {
+        "validate_assignment": True,
+        "protected_namespaces": (),
+    }
+
+    def __init__(self, *args, **kwargs) -> None:
+        if args:
+            if len(args) > 1:
+                raise ValueError("If a positional argument is used, only 1 is allowed to set `actual_instance`")
+            if kwargs:
+                raise ValueError("If a positional argument is used, keyword arguments cannot be used.")
+            super().__init__(actual_instance=args[0])
+        else:
+            super().__init__(**kwargs)
+
+    @field_validator('actual_instance')
+    def actual_instance_must_validate_anyof(cls, v):
+        instance = EntitiesInner2.model_construct()
+        error_messages = []
+        # validate data type: str
+        try:
+            instance.anyof_schema_1_validator = v
+            return v
+        except (ValidationError, ValueError) as e:
+            error_messages.append(str(e))
+        # validate data type: PoolBody
+        if not isinstance(v, PoolBody):
+            error_messages.append(f"Error! Input type `{type(v)}` is not `PoolBody`")
+        else:
+            return v
+
+        if error_messages:
+            # no match
+            raise ValueError("No match found when setting the actual_instance in EntitiesInner2 with anyOf schemas: PoolBody, str. Details: " + ", ".join(error_messages))
+        else:
+            return v
+
+    @classmethod
+    def from_dict(cls, obj: Dict[str, Any]) -> Self:
+        return cls.from_json(json.dumps(obj))
+
+    @classmethod
+    def from_json(cls, json_str: str) -> Self:
+        """Returns the object represented by the json string"""
+        instance = cls.model_construct()
+        error_messages = []
+        # deserialize data into str
+        try:
+            # validation
+            instance.anyof_schema_1_validator = json.loads(json_str)
+            # assign value to actual_instance
+            instance.actual_instance = instance.anyof_schema_1_validator
+            return instance
+        except (ValidationError, ValueError) as e:
+            error_messages.append(str(e))
+        # anyof_schema_2_validator: Optional[PoolBody] = None
+        try:
+            instance.actual_instance = PoolBody.from_json(json_str)
+            return instance
+        except (ValidationError, ValueError) as e:
+            error_messages.append(str(e))
+
+        if error_messages:
+            # no match
+            raise ValueError("No match found when deserializing the JSON string into EntitiesInner2 with anyOf schemas: PoolBody, str. Details: " + ", ".join(error_messages))
+        else:
+            return instance
+
+    def to_json(self) -> str:
+        """Returns the JSON representation of the actual instance"""
+        if self.actual_instance is None:
+            return "null"
+
+        if hasattr(self.actual_instance, "to_json") and callable(self.actual_instance.to_json):
+            return self.actual_instance.to_json()
+        else:
+            return json.dumps(self.actual_instance)
+
+    def to_dict(self) -> Optional[Union[Dict[str, Any], PoolBody, str]]:
+        """Returns the dict representation of the actual instance"""
+        if self.actual_instance is None:
+            return None
+
+        if hasattr(self.actual_instance, "to_dict") and callable(self.actual_instance.to_dict):
+            return self.actual_instance.to_dict()
+        else:
+            return self.actual_instance
+
+    def to_str(self) -> str:
+        """Returns the string representation of the actual instance"""
+        return pprint.pformat(self.model_dump())
+
+
diff --git a/airflow_client/client/models/entities_inner3.py b/airflow_client/client/models/entities_inner3.py
new file mode 100644
index 0000000..2bea6a3
--- /dev/null
+++ b/airflow_client/client/models/entities_inner3.py
@@ -0,0 +1,136 @@
+# coding: utf-8
+
+"""
+    Airflow API
+
+    Airflow API. All endpoints located under ``/api/v2`` can be used safely, are stable and backward compatible. Endpoints located under ``/ui`` are dedicated to the UI and are subject to breaking change depending on the need of the frontend. Users should not rely on those but use the public ones instead.
+
+    The version of the OpenAPI document: 2
+    Generated by OpenAPI Generator (https://openapi-generator.tech)
+
+    Do not edit the class manually.
+"""  # noqa: E501
+
+
+from __future__ import annotations
+from inspect import getfullargspec
+import json
+import pprint
+import re  # noqa: F401
+from pydantic import BaseModel, ConfigDict, Field, StrictStr, ValidationError, field_validator
+from typing import Optional
+from airflow_client.client.models.variable_body import VariableBody
+from typing import Union, Any, List, Set, TYPE_CHECKING, Optional, Dict
+from typing_extensions import Literal, Self
+from pydantic import Field
+
+ENTITIESINNER3_ANY_OF_SCHEMAS = ["VariableBody", "str"]
+
+class EntitiesInner3(BaseModel):
+    """
+    EntitiesInner3
+    """
+
+    # data type: str
+    anyof_schema_1_validator: Optional[StrictStr] = None
+    # data type: VariableBody
+    anyof_schema_2_validator: Optional[VariableBody] = None
+    if TYPE_CHECKING:
+        actual_instance: Optional[Union[VariableBody, str]] = None
+    else:
+        actual_instance: Any = None
+    any_of_schemas: Set[str] = { "VariableBody", "str" }
+
+    model_config = {
+        "validate_assignment": True,
+        "protected_namespaces": (),
+    }
+
+    def __init__(self, *args, **kwargs) -> None:
+        if args:
+            if len(args) > 1:
+                raise ValueError("If a positional argument is used, only 1 is allowed to set `actual_instance`")
+            if kwargs:
+                raise ValueError("If a positional argument is used, keyword arguments cannot be used.")
+            super().__init__(actual_instance=args[0])
+        else:
+            super().__init__(**kwargs)
+
+    @field_validator('actual_instance')
+    def actual_instance_must_validate_anyof(cls, v):
+        instance = EntitiesInner3.model_construct()
+        error_messages = []
+        # validate data type: str
+        try:
+            instance.anyof_schema_1_validator = v
+            return v
+        except (ValidationError, ValueError) as e:
+            error_messages.append(str(e))
+        # validate data type: VariableBody
+        if not isinstance(v, VariableBody):
+            error_messages.append(f"Error! Input type `{type(v)}` is not `VariableBody`")
+        else:
+            return v
+
+        if error_messages:
+            # no match
+            raise ValueError("No match found when setting the actual_instance in EntitiesInner3 with anyOf schemas: VariableBody, str. Details: " + ", ".join(error_messages))
+        else:
+            return v
+
+    @classmethod
+    def from_dict(cls, obj: Dict[str, Any]) -> Self:
+        return cls.from_json(json.dumps(obj))
+
+    @classmethod
+    def from_json(cls, json_str: str) -> Self:
+        """Returns the object represented by the json string"""
+        instance = cls.model_construct()
+        error_messages = []
+        # deserialize data into str
+        try:
+            # validation
+            instance.anyof_schema_1_validator = json.loads(json_str)
+            # assign value to actual_instance
+            instance.actual_instance = instance.anyof_schema_1_validator
+            return instance
+        except (ValidationError, ValueError) as e:
+            error_messages.append(str(e))
+        # anyof_schema_2_validator: Optional[VariableBody] = None
+        try:
+            instance.actual_instance = VariableBody.from_json(json_str)
+            return instance
+        except (ValidationError, ValueError) as e:
+            error_messages.append(str(e))
+
+        if error_messages:
+            # no match
+            raise ValueError("No match found when deserializing the JSON string into EntitiesInner3 with anyOf schemas: VariableBody, str. Details: " + ", ".join(error_messages))
+        else:
+            return instance
+
+    def to_json(self) -> str:
+        """Returns the JSON representation of the actual instance"""
+        if self.actual_instance is None:
+            return "null"
+
+        if hasattr(self.actual_instance, "to_json") and callable(self.actual_instance.to_json):
+            return self.actual_instance.to_json()
+        else:
+            return json.dumps(self.actual_instance)
+
+    def to_dict(self) -> Optional[Union[Dict[str, Any], VariableBody, str]]:
+        """Returns the dict representation of the actual instance"""
+        if self.actual_instance is None:
+            return None
+
+        if hasattr(self.actual_instance, "to_dict") and callable(self.actual_instance.to_dict):
+            return self.actual_instance.to_dict()
+        else:
+            return self.actual_instance
+
+    def to_str(self) -> str:
+        """Returns the string representation of the actual instance"""
+        return pprint.pformat(self.model_dump())
+
+
diff --git a/airflow_client/client/models/event_log_collection_response.py b/airflow_client/client/models/event_log_collection_response.py
index a71d33d..d100497 100644
--- a/airflow_client/client/models/event_log_collection_response.py
+++ b/airflow_client/client/models/event_log_collection_response.py
@@ -22,6 +22,7 @@
 from airflow_client.client.models.event_log_response import EventLogResponse
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class EventLogCollectionResponse(BaseModel):
     """
@@ -32,7 +33,8 @@
     __properties: ClassVar[List[str]] = ["event_logs", "total_entries"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -44,8 +46,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/event_log_response.py b/airflow_client/client/models/event_log_response.py
index 7d12d7a..8c9c3ef 100644
--- a/airflow_client/client/models/event_log_response.py
+++ b/airflow_client/client/models/event_log_response.py
@@ -22,6 +22,7 @@
 from typing import Any, ClassVar, Dict, List, Optional
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class EventLogResponse(BaseModel):
     """
@@ -43,7 +44,8 @@
     __properties: ClassVar[List[str]] = ["dag_display_name", "dag_id", "event", "event_log_id", "extra", "logical_date", "map_index", "owner", "run_id", "task_display_name", "task_id", "try_number", "when"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -55,8 +57,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/external_log_url_response.py b/airflow_client/client/models/external_log_url_response.py
index 3b903a6..08d0b96 100644
--- a/airflow_client/client/models/external_log_url_response.py
+++ b/airflow_client/client/models/external_log_url_response.py
@@ -21,6 +21,7 @@
 from typing import Any, ClassVar, Dict, List
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class ExternalLogUrlResponse(BaseModel):
     """
@@ -30,7 +31,8 @@
     __properties: ClassVar[List[str]] = ["url"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -42,8 +44,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/external_view_response.py b/airflow_client/client/models/external_view_response.py
index 6f559a9..5019f22 100644
--- a/airflow_client/client/models/external_view_response.py
+++ b/airflow_client/client/models/external_view_response.py
@@ -21,6 +21,7 @@
 from typing import Any, ClassVar, Dict, List, Optional
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class ExternalViewResponse(BaseModel):
     """
@@ -42,12 +43,13 @@
         if value is None:
             return value
 
-        if value not in set(['nav', 'dag', 'dag_run', 'task', 'task_instance']):
-            raise ValueError("must be one of enum values ('nav', 'dag', 'dag_run', 'task', 'task_instance')")
+        if value not in set(['nav', 'dag', 'dag_run', 'task', 'task_instance', 'base']):
+            raise ValueError("must be one of enum values ('nav', 'dag', 'dag_run', 'task', 'task_instance', 'base')")
         return value
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -59,8 +61,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/extra_link_collection_response.py b/airflow_client/client/models/extra_link_collection_response.py
index 7f7435a..89e2ac6 100644
--- a/airflow_client/client/models/extra_link_collection_response.py
+++ b/airflow_client/client/models/extra_link_collection_response.py
@@ -21,6 +21,7 @@
 from typing import Any, ClassVar, Dict, List
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class ExtraLinkCollectionResponse(BaseModel):
     """
@@ -31,7 +32,8 @@
     __properties: ClassVar[List[str]] = ["extra_links", "total_entries"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -43,8 +45,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/fast_api_app_response.py b/airflow_client/client/models/fast_api_app_response.py
index ee0a8c2..245393d 100644
--- a/airflow_client/client/models/fast_api_app_response.py
+++ b/airflow_client/client/models/fast_api_app_response.py
@@ -21,6 +21,7 @@
 from typing import Any, ClassVar, Dict, List
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class FastAPIAppResponse(BaseModel):
     """
@@ -33,7 +34,8 @@
     __properties: ClassVar[List[str]] = ["app", "name", "url_prefix"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -45,8 +47,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/fast_api_root_middleware_response.py b/airflow_client/client/models/fast_api_root_middleware_response.py
index 788f4d4..e8b32ec 100644
--- a/airflow_client/client/models/fast_api_root_middleware_response.py
+++ b/airflow_client/client/models/fast_api_root_middleware_response.py
@@ -21,6 +21,7 @@
 from typing import Any, ClassVar, Dict, List
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class FastAPIRootMiddlewareResponse(BaseModel):
     """
@@ -32,7 +33,8 @@
     __properties: ClassVar[List[str]] = ["middleware", "name"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -44,8 +46,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/health_info_response.py b/airflow_client/client/models/health_info_response.py
index 54f258f..00813d2 100644
--- a/airflow_client/client/models/health_info_response.py
+++ b/airflow_client/client/models/health_info_response.py
@@ -25,6 +25,7 @@
 from airflow_client.client.models.triggerer_info_response import TriggererInfoResponse
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class HealthInfoResponse(BaseModel):
     """
@@ -37,7 +38,8 @@
     __properties: ClassVar[List[str]] = ["dag_processor", "metadatabase", "scheduler", "triggerer"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -49,8 +51,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/hitl_detail.py b/airflow_client/client/models/hitl_detail.py
index de3f73f..f1b4456 100644
--- a/airflow_client/client/models/hitl_detail.py
+++ b/airflow_client/client/models/hitl_detail.py
@@ -25,6 +25,7 @@
 from airflow_client.client.models.task_instance_response import TaskInstanceResponse
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class HITLDetail(BaseModel):
     """
@@ -47,7 +48,8 @@
     __properties: ClassVar[List[str]] = ["assigned_users", "body", "chosen_options", "created_at", "defaults", "multiple", "options", "params", "params_input", "responded_at", "responded_by_user", "response_received", "subject", "task_instance"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -59,8 +61,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/hitl_detail_collection.py b/airflow_client/client/models/hitl_detail_collection.py
index 79dfa4a..e987c1c 100644
--- a/airflow_client/client/models/hitl_detail_collection.py
+++ b/airflow_client/client/models/hitl_detail_collection.py
@@ -22,6 +22,7 @@
 from airflow_client.client.models.hitl_detail import HITLDetail
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class HITLDetailCollection(BaseModel):
     """
@@ -32,7 +33,8 @@
     __properties: ClassVar[List[str]] = ["hitl_details", "total_entries"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -44,8 +46,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/hitl_detail_history.py b/airflow_client/client/models/hitl_detail_history.py
new file mode 100644
index 0000000..b1ad58f
--- /dev/null
+++ b/airflow_client/client/models/hitl_detail_history.py
@@ -0,0 +1,131 @@
+# coding: utf-8
+
+"""
+    Airflow API
+
+    Airflow API. All endpoints located under ``/api/v2`` can be used safely, are stable and backward compatible. Endpoints located under ``/ui`` are dedicated to the UI and are subject to breaking change depending on the need of the frontend. Users should not rely on those but use the public ones instead.
+
+    The version of the OpenAPI document: 2
+    Generated by OpenAPI Generator (https://openapi-generator.tech)
+
+    Do not edit the class manually.
+"""  # noqa: E501
+
+
+from __future__ import annotations
+import pprint
+import re  # noqa: F401
+import json
+
+from datetime import datetime
+from pydantic import BaseModel, ConfigDict, Field, StrictBool, StrictStr
+from typing import Any, ClassVar, Dict, List, Optional
+from typing_extensions import Annotated
+from airflow_client.client.models.hitl_user import HITLUser
+from airflow_client.client.models.task_instance_history_response import TaskInstanceHistoryResponse
+from typing import Optional, Set
+from typing_extensions import Self
+from pydantic_core import to_jsonable_python
+
+class HITLDetailHistory(BaseModel):
+    """
+    Schema for Human-in-the-loop detail history.
+    """ # noqa: E501
+    assigned_users: Optional[List[HITLUser]] = None
+    body: Optional[StrictStr] = None
+    chosen_options: Optional[List[StrictStr]] = None
+    created_at: datetime
+    defaults: Optional[List[StrictStr]] = None
+    multiple: Optional[StrictBool] = False
+    options: Annotated[List[StrictStr], Field(min_length=1)]
+    params: Optional[Dict[str, Any]] = None
+    params_input: Optional[Dict[str, Any]] = None
+    responded_at: Optional[datetime] = None
+    responded_by_user: Optional[HITLUser] = None
+    response_received: Optional[StrictBool] = False
+    subject: StrictStr
+    task_instance: TaskInstanceHistoryResponse
+    __properties: ClassVar[List[str]] = ["assigned_users", "body", "chosen_options", "created_at", "defaults", "multiple", "options", "params", "params_input", "responded_at", "responded_by_user", "response_received", "subject", "task_instance"]
+
+    model_config = ConfigDict(
+        validate_by_name=True,
+        validate_by_alias=True,
+        validate_assignment=True,
+        protected_namespaces=(),
+    )
+
+
+    def to_str(self) -> str:
+        """Returns the string representation of the model using alias"""
+        return pprint.pformat(self.model_dump(by_alias=True))
+
+    def to_json(self) -> str:
+        """Returns the JSON representation of the model using alias"""
+        return json.dumps(to_jsonable_python(self.to_dict()))
+
+    @classmethod
+    def from_json(cls, json_str: str) -> Optional[Self]:
+        """Create an instance of HITLDetailHistory from a JSON string"""
+        return cls.from_dict(json.loads(json_str))
+
+    def to_dict(self) -> Dict[str, Any]:
+        """Return the dictionary representation of the model using alias.
+
+        This has the following differences from calling pydantic's
+        `self.model_dump(by_alias=True)`:
+
+        * `None` is only added to the output dict for nullable fields that
+          were set at model initialization. Other fields with value `None`
+          are ignored.
+        """
+        excluded_fields: Set[str] = set([
+        ])
+
+        _dict = self.model_dump(
+            by_alias=True,
+            exclude=excluded_fields,
+            exclude_none=True,
+        )
+        # override the default output from pydantic by calling `to_dict()` of each item in assigned_users (list)
+        _items = []
+        if self.assigned_users:
+            for _item_assigned_users in self.assigned_users:
+                if _item_assigned_users:
+                    _items.append(_item_assigned_users.to_dict())
+            _dict['assigned_users'] = _items
+        # override the default output from pydantic by calling `to_dict()` of responded_by_user
+        if self.responded_by_user:
+            _dict['responded_by_user'] = self.responded_by_user.to_dict()
+        # override the default output from pydantic by calling `to_dict()` of task_instance
+        if self.task_instance:
+            _dict['task_instance'] = self.task_instance.to_dict()
+        return _dict
+
+    @classmethod
+    def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]:
+        """Create an instance of HITLDetailHistory from a dict"""
+        if obj is None:
+            return None
+
+        if not isinstance(obj, dict):
+            return cls.model_validate(obj)
+
+        _obj = cls.model_validate({
+            "assigned_users": [HITLUser.from_dict(_item) for _item in obj["assigned_users"]] if obj.get("assigned_users") is not None else None,
+            "body": obj.get("body"),
+            "chosen_options": obj.get("chosen_options"),
+            "created_at": obj.get("created_at"),
+            "defaults": obj.get("defaults"),
+            "multiple": obj.get("multiple") if obj.get("multiple") is not None else False,
+            "options": obj.get("options"),
+            "params": obj.get("params"),
+            "params_input": obj.get("params_input"),
+            "responded_at": obj.get("responded_at"),
+            "responded_by_user": HITLUser.from_dict(obj["responded_by_user"]) if obj.get("responded_by_user") is not None else None,
+            "response_received": obj.get("response_received") if obj.get("response_received") is not None else False,
+            "subject": obj.get("subject"),
+            "task_instance": TaskInstanceHistoryResponse.from_dict(obj["task_instance"]) if obj.get("task_instance") is not None else None
+        })
+        return _obj
+
+
diff --git a/airflow_client/client/models/hitl_detail_response.py b/airflow_client/client/models/hitl_detail_response.py
index 03810fe..2d3d6bd 100644
--- a/airflow_client/client/models/hitl_detail_response.py
+++ b/airflow_client/client/models/hitl_detail_response.py
@@ -24,6 +24,7 @@
 from airflow_client.client.models.hitl_user import HITLUser
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class HITLDetailResponse(BaseModel):
     """
@@ -36,7 +37,8 @@
     __properties: ClassVar[List[str]] = ["chosen_options", "params_input", "responded_at", "responded_by"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -48,8 +50,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/hitl_user.py b/airflow_client/client/models/hitl_user.py
index e2ee8e1..3dd8458 100644
--- a/airflow_client/client/models/hitl_user.py
+++ b/airflow_client/client/models/hitl_user.py
@@ -21,6 +21,7 @@
 from typing import Any, ClassVar, Dict, List
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class HITLUser(BaseModel):
     """
@@ -31,7 +32,8 @@
     __properties: ClassVar[List[str]] = ["id", "name"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -43,8 +45,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/http_exception_response.py b/airflow_client/client/models/http_exception_response.py
index d27ecf6..fb4e869 100644
--- a/airflow_client/client/models/http_exception_response.py
+++ b/airflow_client/client/models/http_exception_response.py
@@ -22,6 +22,7 @@
 from airflow_client.client.models.detail import Detail
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class HTTPExceptionResponse(BaseModel):
     """
@@ -31,7 +32,8 @@
     __properties: ClassVar[List[str]] = ["detail"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -43,8 +45,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/http_validation_error.py b/airflow_client/client/models/http_validation_error.py
index c1fdb33..dc96d3a 100644
--- a/airflow_client/client/models/http_validation_error.py
+++ b/airflow_client/client/models/http_validation_error.py
@@ -22,6 +22,7 @@
 from airflow_client.client.models.validation_error import ValidationError
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class HTTPValidationError(BaseModel):
     """
@@ -31,7 +32,8 @@
     __properties: ClassVar[List[str]] = ["detail"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -43,8 +45,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/import_error_collection_response.py b/airflow_client/client/models/import_error_collection_response.py
index c631efe..e2477e2 100644
--- a/airflow_client/client/models/import_error_collection_response.py
+++ b/airflow_client/client/models/import_error_collection_response.py
@@ -22,6 +22,7 @@
 from airflow_client.client.models.import_error_response import ImportErrorResponse
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class ImportErrorCollectionResponse(BaseModel):
     """
@@ -32,7 +33,8 @@
     __properties: ClassVar[List[str]] = ["import_errors", "total_entries"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -44,8 +46,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/import_error_response.py b/airflow_client/client/models/import_error_response.py
index 0ff8620..f2b0961 100644
--- a/airflow_client/client/models/import_error_response.py
+++ b/airflow_client/client/models/import_error_response.py
@@ -22,6 +22,7 @@
 from typing import Any, ClassVar, Dict, List, Optional
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class ImportErrorResponse(BaseModel):
     """
@@ -35,7 +36,8 @@
     __properties: ClassVar[List[str]] = ["bundle_name", "filename", "import_error_id", "stack_trace", "timestamp"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -47,8 +49,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/job_collection_response.py b/airflow_client/client/models/job_collection_response.py
index ffe04a3..bb321e7 100644
--- a/airflow_client/client/models/job_collection_response.py
+++ b/airflow_client/client/models/job_collection_response.py
@@ -22,6 +22,7 @@
 from airflow_client.client.models.job_response import JobResponse
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class JobCollectionResponse(BaseModel):
     """
@@ -32,7 +33,8 @@
     __properties: ClassVar[List[str]] = ["jobs", "total_entries"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -44,8 +46,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/job_response.py b/airflow_client/client/models/job_response.py
index a153cf8..6910b0b 100644
--- a/airflow_client/client/models/job_response.py
+++ b/airflow_client/client/models/job_response.py
@@ -22,6 +22,7 @@
 from typing import Any, ClassVar, Dict, List, Optional
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class JobResponse(BaseModel):
     """
@@ -41,7 +42,8 @@
     __properties: ClassVar[List[str]] = ["dag_display_name", "dag_id", "end_date", "executor_class", "hostname", "id", "job_type", "latest_heartbeat", "start_date", "state", "unixname"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -53,8 +55,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/last_asset_event_response.py b/airflow_client/client/models/last_asset_event_response.py
index 288f545..a6c2ad2 100644
--- a/airflow_client/client/models/last_asset_event_response.py
+++ b/airflow_client/client/models/last_asset_event_response.py
@@ -23,6 +23,7 @@
 from typing_extensions import Annotated
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class LastAssetEventResponse(BaseModel):
     """
@@ -33,7 +34,8 @@
     __properties: ClassVar[List[str]] = ["id", "timestamp"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -45,8 +47,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/patch_task_instance_body.py b/airflow_client/client/models/patch_task_instance_body.py
index bf70cbd..2ff1af1 100644
--- a/airflow_client/client/models/patch_task_instance_body.py
+++ b/airflow_client/client/models/patch_task_instance_body.py
@@ -23,6 +23,7 @@
 from airflow_client.client.models.task_instance_state import TaskInstanceState
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class PatchTaskInstanceBody(BaseModel):
     """
@@ -37,7 +38,8 @@
     __properties: ClassVar[List[str]] = ["include_downstream", "include_future", "include_past", "include_upstream", "new_state", "note"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -49,8 +51,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/plugin_collection_response.py b/airflow_client/client/models/plugin_collection_response.py
index 8dfbab7..080e728 100644
--- a/airflow_client/client/models/plugin_collection_response.py
+++ b/airflow_client/client/models/plugin_collection_response.py
@@ -22,6 +22,7 @@
 from airflow_client.client.models.plugin_response import PluginResponse
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class PluginCollectionResponse(BaseModel):
     """
@@ -32,7 +33,8 @@
     __properties: ClassVar[List[str]] = ["plugins", "total_entries"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -44,8 +46,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/plugin_import_error_collection_response.py b/airflow_client/client/models/plugin_import_error_collection_response.py
index ce6fdfe..06237df 100644
--- a/airflow_client/client/models/plugin_import_error_collection_response.py
+++ b/airflow_client/client/models/plugin_import_error_collection_response.py
@@ -22,6 +22,7 @@
 from airflow_client.client.models.plugin_import_error_response import PluginImportErrorResponse
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class PluginImportErrorCollectionResponse(BaseModel):
     """
@@ -32,7 +33,8 @@
     __properties: ClassVar[List[str]] = ["import_errors", "total_entries"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -44,8 +46,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/plugin_import_error_response.py b/airflow_client/client/models/plugin_import_error_response.py
index 9e6d312..4271521 100644
--- a/airflow_client/client/models/plugin_import_error_response.py
+++ b/airflow_client/client/models/plugin_import_error_response.py
@@ -21,6 +21,7 @@
 from typing import Any, ClassVar, Dict, List
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class PluginImportErrorResponse(BaseModel):
     """
@@ -31,7 +32,8 @@
     __properties: ClassVar[List[str]] = ["error", "source"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -43,8 +45,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/plugin_response.py b/airflow_client/client/models/plugin_response.py
index f24443c..76c49d8 100644
--- a/airflow_client/client/models/plugin_response.py
+++ b/airflow_client/client/models/plugin_response.py
@@ -27,6 +27,7 @@
 from airflow_client.client.models.react_app_response import ReactAppResponse
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class PluginResponse(BaseModel):
     """
@@ -49,7 +50,8 @@
     __properties: ClassVar[List[str]] = ["appbuilder_menu_items", "appbuilder_views", "external_views", "fastapi_apps", "fastapi_root_middlewares", "flask_blueprints", "global_operator_extra_links", "listeners", "macros", "name", "operator_extra_links", "react_apps", "source", "timetables"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -61,8 +63,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/pool_body.py b/airflow_client/client/models/pool_body.py
index bf24849..225461d 100644
--- a/airflow_client/client/models/pool_body.py
+++ b/airflow_client/client/models/pool_body.py
@@ -22,6 +22,7 @@
 from typing_extensions import Annotated
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class PoolBody(BaseModel):
     """
@@ -31,10 +32,12 @@
     include_deferred: Optional[StrictBool] = False
     name: Annotated[str, Field(strict=True, max_length=256)]
     slots: Annotated[int, Field(strict=True, ge=-1)] = Field(description="Number of slots. Use -1 for unlimited.")
-    __properties: ClassVar[List[str]] = ["description", "include_deferred", "name", "slots"]
+    team_name: Optional[Annotated[str, Field(strict=True, max_length=50)]] = None
+    __properties: ClassVar[List[str]] = ["description", "include_deferred", "name", "slots", "team_name"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -46,8 +49,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
@@ -87,7 +89,8 @@
             "description": obj.get("description"),
             "include_deferred": obj.get("include_deferred") if obj.get("include_deferred") is not None else False,
             "name": obj.get("name"),
-            "slots": obj.get("slots")
+            "slots": obj.get("slots"),
+            "team_name": obj.get("team_name")
         })
         return _obj
 
diff --git a/airflow_client/client/models/pool_collection_response.py b/airflow_client/client/models/pool_collection_response.py
index aa34914..6317eaf 100644
--- a/airflow_client/client/models/pool_collection_response.py
+++ b/airflow_client/client/models/pool_collection_response.py
@@ -22,6 +22,7 @@
 from airflow_client.client.models.pool_response import PoolResponse
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class PoolCollectionResponse(BaseModel):
     """
@@ -32,7 +33,8 @@
     __properties: ClassVar[List[str]] = ["pools", "total_entries"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -44,8 +46,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/pool_patch_body.py b/airflow_client/client/models/pool_patch_body.py
index 4e92e1e..78a0728 100644
--- a/airflow_client/client/models/pool_patch_body.py
+++ b/airflow_client/client/models/pool_patch_body.py
@@ -22,6 +22,7 @@
 from typing_extensions import Annotated
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class PoolPatchBody(BaseModel):
     """
@@ -31,10 +32,12 @@
     include_deferred: Optional[StrictBool] = None
     pool: Optional[StrictStr] = None
     slots: Optional[Annotated[int, Field(strict=True, ge=-1)]] = Field(default=None, description="Number of slots. Use -1 for unlimited.")
-    __properties: ClassVar[List[str]] = ["description", "include_deferred", "pool", "slots"]
+    team_name: Optional[Annotated[str, Field(strict=True, max_length=50)]] = None
+    __properties: ClassVar[List[str]] = ["description", "include_deferred", "pool", "slots", "team_name"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -46,8 +49,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
@@ -87,7 +89,8 @@
             "description": obj.get("description"),
             "include_deferred": obj.get("include_deferred"),
             "pool": obj.get("pool"),
-            "slots": obj.get("slots")
+            "slots": obj.get("slots"),
+            "team_name": obj.get("team_name")
         })
         return _obj
 
diff --git a/airflow_client/client/models/pool_response.py b/airflow_client/client/models/pool_response.py
index ea8fe6c..8a16507 100644
--- a/airflow_client/client/models/pool_response.py
+++ b/airflow_client/client/models/pool_response.py
@@ -22,6 +22,7 @@
 from typing_extensions import Annotated
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class PoolResponse(BaseModel):
     """
@@ -37,10 +38,12 @@
     running_slots: StrictInt
     scheduled_slots: StrictInt
     slots: Annotated[int, Field(strict=True, ge=-1)] = Field(description="Number of slots. Use -1 for unlimited.")
-    __properties: ClassVar[List[str]] = ["deferred_slots", "description", "include_deferred", "name", "occupied_slots", "open_slots", "queued_slots", "running_slots", "scheduled_slots", "slots"]
+    team_name: Optional[StrictStr] = None
+    __properties: ClassVar[List[str]] = ["deferred_slots", "description", "include_deferred", "name", "occupied_slots", "open_slots", "queued_slots", "running_slots", "scheduled_slots", "slots", "team_name"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -52,8 +55,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
@@ -99,7 +101,8 @@
             "queued_slots": obj.get("queued_slots"),
             "running_slots": obj.get("running_slots"),
             "scheduled_slots": obj.get("scheduled_slots"),
-            "slots": obj.get("slots")
+            "slots": obj.get("slots"),
+            "team_name": obj.get("team_name")
         })
         return _obj
 
diff --git a/airflow_client/client/models/provider_collection_response.py b/airflow_client/client/models/provider_collection_response.py
index efd462d..ba5e704 100644
--- a/airflow_client/client/models/provider_collection_response.py
+++ b/airflow_client/client/models/provider_collection_response.py
@@ -22,6 +22,7 @@
 from airflow_client.client.models.provider_response import ProviderResponse
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class ProviderCollectionResponse(BaseModel):
     """
@@ -32,7 +33,8 @@
     __properties: ClassVar[List[str]] = ["providers", "total_entries"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -44,8 +46,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/provider_response.py b/airflow_client/client/models/provider_response.py
index b551fa2..bdf6ee2 100644
--- a/airflow_client/client/models/provider_response.py
+++ b/airflow_client/client/models/provider_response.py
@@ -21,6 +21,7 @@
 from typing import Any, ClassVar, Dict, List, Optional
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class ProviderResponse(BaseModel):
     """
@@ -33,7 +34,8 @@
     __properties: ClassVar[List[str]] = ["description", "documentation_url", "package_name", "version"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -45,8 +47,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/queued_event_collection_response.py b/airflow_client/client/models/queued_event_collection_response.py
index 384a214..8fdb990 100644
--- a/airflow_client/client/models/queued_event_collection_response.py
+++ b/airflow_client/client/models/queued_event_collection_response.py
@@ -22,6 +22,7 @@
 from airflow_client.client.models.queued_event_response import QueuedEventResponse
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class QueuedEventCollectionResponse(BaseModel):
     """
@@ -32,7 +33,8 @@
     __properties: ClassVar[List[str]] = ["queued_events", "total_entries"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -44,8 +46,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/queued_event_response.py b/airflow_client/client/models/queued_event_response.py
index d9a8a2a..7b5fd99 100644
--- a/airflow_client/client/models/queued_event_response.py
+++ b/airflow_client/client/models/queued_event_response.py
@@ -22,6 +22,7 @@
 from typing import Any, ClassVar, Dict, List
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class QueuedEventResponse(BaseModel):
     """
@@ -34,7 +35,8 @@
     __properties: ClassVar[List[str]] = ["asset_id", "created_at", "dag_display_name", "dag_id"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -46,8 +48,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/react_app_response.py b/airflow_client/client/models/react_app_response.py
index 7fb82c5..526f3aa 100644
--- a/airflow_client/client/models/react_app_response.py
+++ b/airflow_client/client/models/react_app_response.py
@@ -21,6 +21,7 @@
 from typing import Any, ClassVar, Dict, List, Optional
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class ReactAppResponse(BaseModel):
     """
@@ -42,12 +43,13 @@
         if value is None:
             return value
 
-        if value not in set(['nav', 'dag', 'dag_run', 'task', 'task_instance', 'dashboard']):
-            raise ValueError("must be one of enum values ('nav', 'dag', 'dag_run', 'task', 'task_instance', 'dashboard')")
+        if value not in set(['nav', 'dag', 'dag_run', 'task', 'task_instance', 'base', 'dashboard']):
+            raise ValueError("must be one of enum values ('nav', 'dag', 'dag_run', 'task', 'task_instance', 'base', 'dashboard')")
         return value
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -59,8 +61,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/scheduler_info_response.py b/airflow_client/client/models/scheduler_info_response.py
index a5bf7e9..994e811 100644
--- a/airflow_client/client/models/scheduler_info_response.py
+++ b/airflow_client/client/models/scheduler_info_response.py
@@ -21,6 +21,7 @@
 from typing import Any, ClassVar, Dict, List, Optional
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class SchedulerInfoResponse(BaseModel):
     """
@@ -31,7 +32,8 @@
     __properties: ClassVar[List[str]] = ["latest_scheduler_heartbeat", "status"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -43,8 +45,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/structured_log_message.py b/airflow_client/client/models/structured_log_message.py
index af5fc29..a3bee58 100644
--- a/airflow_client/client/models/structured_log_message.py
+++ b/airflow_client/client/models/structured_log_message.py
@@ -22,6 +22,7 @@
 from typing import Any, ClassVar, Dict, List, Optional
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class StructuredLogMessage(BaseModel):
     """
@@ -33,7 +34,8 @@
     __properties: ClassVar[List[str]] = ["event", "timestamp"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -45,8 +47,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/task_collection_response.py b/airflow_client/client/models/task_collection_response.py
index 108a8c2..39ebcdc 100644
--- a/airflow_client/client/models/task_collection_response.py
+++ b/airflow_client/client/models/task_collection_response.py
@@ -22,6 +22,7 @@
 from airflow_client.client.models.task_response import TaskResponse
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class TaskCollectionResponse(BaseModel):
     """
@@ -32,7 +33,8 @@
     __properties: ClassVar[List[str]] = ["tasks", "total_entries"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -44,8 +46,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/task_dependency_collection_response.py b/airflow_client/client/models/task_dependency_collection_response.py
index da4b93f..f5031f1 100644
--- a/airflow_client/client/models/task_dependency_collection_response.py
+++ b/airflow_client/client/models/task_dependency_collection_response.py
@@ -22,6 +22,7 @@
 from airflow_client.client.models.task_dependency_response import TaskDependencyResponse
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class TaskDependencyCollectionResponse(BaseModel):
     """
@@ -31,7 +32,8 @@
     __properties: ClassVar[List[str]] = ["dependencies"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -43,8 +45,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/task_dependency_response.py b/airflow_client/client/models/task_dependency_response.py
index 23d80fa..06dc1d3 100644
--- a/airflow_client/client/models/task_dependency_response.py
+++ b/airflow_client/client/models/task_dependency_response.py
@@ -21,6 +21,7 @@
 from typing import Any, ClassVar, Dict, List
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class TaskDependencyResponse(BaseModel):
     """
@@ -31,7 +32,8 @@
     __properties: ClassVar[List[str]] = ["name", "reason"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -43,8 +45,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/task_inlet_asset_reference.py b/airflow_client/client/models/task_inlet_asset_reference.py
index fff4617..648f9d8 100644
--- a/airflow_client/client/models/task_inlet_asset_reference.py
+++ b/airflow_client/client/models/task_inlet_asset_reference.py
@@ -22,6 +22,7 @@
 from typing import Any, ClassVar, Dict, List
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class TaskInletAssetReference(BaseModel):
     """
@@ -34,7 +35,8 @@
     __properties: ClassVar[List[str]] = ["created_at", "dag_id", "task_id", "updated_at"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -46,8 +48,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/task_instance_collection_response.py b/airflow_client/client/models/task_instance_collection_response.py
index 9f30899..f9598fd 100644
--- a/airflow_client/client/models/task_instance_collection_response.py
+++ b/airflow_client/client/models/task_instance_collection_response.py
@@ -22,6 +22,7 @@
 from airflow_client.client.models.task_instance_response import TaskInstanceResponse
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class TaskInstanceCollectionResponse(BaseModel):
     """
@@ -32,7 +33,8 @@
     __properties: ClassVar[List[str]] = ["task_instances", "total_entries"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -44,8 +46,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/task_instance_history_collection_response.py b/airflow_client/client/models/task_instance_history_collection_response.py
index c6d9577..1bc78ae 100644
--- a/airflow_client/client/models/task_instance_history_collection_response.py
+++ b/airflow_client/client/models/task_instance_history_collection_response.py
@@ -22,6 +22,7 @@
 from airflow_client.client.models.task_instance_history_response import TaskInstanceHistoryResponse
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class TaskInstanceHistoryCollectionResponse(BaseModel):
     """
@@ -32,7 +33,8 @@
     __properties: ClassVar[List[str]] = ["task_instances", "total_entries"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -44,8 +46,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/task_instance_history_response.py b/airflow_client/client/models/task_instance_history_response.py
index c8b4f90..d67a0d9 100644
--- a/airflow_client/client/models/task_instance_history_response.py
+++ b/airflow_client/client/models/task_instance_history_response.py
@@ -24,6 +24,7 @@
 from airflow_client.client.models.task_instance_state import TaskInstanceState
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class TaskInstanceHistoryResponse(BaseModel):
     """
@@ -58,7 +59,8 @@
     __properties: ClassVar[List[str]] = ["dag_display_name", "dag_id", "dag_run_id", "dag_version", "duration", "end_date", "executor", "executor_config", "hostname", "map_index", "max_tries", "operator", "operator_name", "pid", "pool", "pool_slots", "priority_weight", "queue", "queued_when", "scheduled_when", "start_date", "state", "task_display_name", "task_id", "try_number", "unixname"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -70,8 +72,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/task_instance_response.py b/airflow_client/client/models/task_instance_response.py
index b428f66..b2ccc6d 100644
--- a/airflow_client/client/models/task_instance_response.py
+++ b/airflow_client/client/models/task_instance_response.py
@@ -20,12 +20,14 @@
 from datetime import datetime
 from pydantic import BaseModel, ConfigDict, StrictFloat, StrictInt, StrictStr
 from typing import Any, ClassVar, Dict, List, Optional, Union
+from uuid import UUID
 from airflow_client.client.models.dag_version_response import DagVersionResponse
 from airflow_client.client.models.job_response import JobResponse
 from airflow_client.client.models.task_instance_state import TaskInstanceState
 from airflow_client.client.models.trigger_response import TriggerResponse
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class TaskInstanceResponse(BaseModel):
     """
@@ -40,7 +42,7 @@
     executor: Optional[StrictStr] = None
     executor_config: StrictStr
     hostname: Optional[StrictStr] = None
-    id: StrictStr
+    id: UUID
     logical_date: Optional[datetime] = None
     map_index: StrictInt
     max_tries: StrictInt
@@ -68,7 +70,8 @@
     __properties: ClassVar[List[str]] = ["dag_display_name", "dag_id", "dag_run_id", "dag_version", "duration", "end_date", "executor", "executor_config", "hostname", "id", "logical_date", "map_index", "max_tries", "note", "operator", "operator_name", "pid", "pool", "pool_slots", "priority_weight", "queue", "queued_when", "rendered_fields", "rendered_map_index", "run_after", "scheduled_when", "start_date", "state", "task_display_name", "task_id", "trigger", "triggerer_job", "try_number", "unixname"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -80,8 +83,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
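Note the type change above: `TaskInstanceResponse.id` is now `uuid.UUID` rather than `StrictStr`, so callers that compared the field against raw strings need an explicit conversion. A stdlib-only sketch (the UUID value is made up):

```python
from uuid import UUID

# The id attribute now arrives as a UUID object, not a str.
ti_id = UUID("0195c3c9-6cd8-7a41-9a06-3f7b1c1e9abc")

# A UUID never compares equal to its string form ...
as_str_equal = (ti_id == "0195c3c9-6cd8-7a41-9a06-3f7b1c1e9abc")
# ... so convert one side before comparing or persisting.
round_trip_equal = (ti_id == UUID(str(ti_id)))
```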
diff --git a/airflow_client/client/models/task_instances_batch_body.py b/airflow_client/client/models/task_instances_batch_body.py
index f201e3d..54d8ab9 100644
--- a/airflow_client/client/models/task_instances_batch_body.py
+++ b/airflow_client/client/models/task_instances_batch_body.py
@@ -24,6 +24,7 @@
 from airflow_client.client.models.task_instance_state import TaskInstanceState
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class TaskInstancesBatchBody(BaseModel):
     """
@@ -62,7 +63,8 @@
     __properties: ClassVar[List[str]] = ["dag_ids", "dag_run_ids", "duration_gt", "duration_gte", "duration_lt", "duration_lte", "end_date_gt", "end_date_gte", "end_date_lt", "end_date_lte", "executor", "logical_date_gt", "logical_date_gte", "logical_date_lt", "logical_date_lte", "order_by", "page_limit", "page_offset", "pool", "queue", "run_after_gt", "run_after_gte", "run_after_lt", "run_after_lte", "start_date_gt", "start_date_gte", "start_date_lt", "start_date_lte", "state", "task_ids"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -74,8 +76,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/task_instances_log_response.py b/airflow_client/client/models/task_instances_log_response.py
index 4cd4e56..243513b 100644
--- a/airflow_client/client/models/task_instances_log_response.py
+++ b/airflow_client/client/models/task_instances_log_response.py
@@ -22,6 +22,7 @@
 from airflow_client.client.models.content import Content
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class TaskInstancesLogResponse(BaseModel):
     """
@@ -32,7 +33,8 @@
     __properties: ClassVar[List[str]] = ["content", "continuation_token"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -44,8 +46,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/task_outlet_asset_reference.py b/airflow_client/client/models/task_outlet_asset_reference.py
index 6a285e0..26bc582 100644
--- a/airflow_client/client/models/task_outlet_asset_reference.py
+++ b/airflow_client/client/models/task_outlet_asset_reference.py
@@ -22,6 +22,7 @@
 from typing import Any, ClassVar, Dict, List
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class TaskOutletAssetReference(BaseModel):
     """
@@ -34,7 +35,8 @@
     __properties: ClassVar[List[str]] = ["created_at", "dag_id", "task_id", "updated_at"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -46,8 +48,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/task_response.py b/airflow_client/client/models/task_response.py
index 21e5461..3eb748f 100644
--- a/airflow_client/client/models/task_response.py
+++ b/airflow_client/client/models/task_response.py
@@ -23,6 +23,7 @@
 from airflow_client.client.models.time_delta import TimeDelta
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class TaskResponse(BaseModel):
     """
@@ -45,7 +46,7 @@
     queue: Optional[StrictStr] = None
     retries: Optional[Union[StrictFloat, StrictInt]] = None
     retry_delay: Optional[TimeDelta] = None
-    retry_exponential_backoff: StrictBool
+    retry_exponential_backoff: Union[StrictFloat, StrictInt]
     start_date: Optional[datetime] = None
     task_display_name: Optional[StrictStr] = None
     task_id: Optional[StrictStr] = None
@@ -58,7 +59,8 @@
     __properties: ClassVar[List[str]] = ["class_ref", "depends_on_past", "doc_md", "downstream_task_ids", "end_date", "execution_timeout", "extra_links", "is_mapped", "operator_name", "owner", "params", "pool", "pool_slots", "priority_weight", "queue", "retries", "retry_delay", "retry_exponential_backoff", "start_date", "task_display_name", "task_id", "template_fields", "trigger_rule", "ui_color", "ui_fgcolor", "wait_for_downstream", "weight_rule"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -70,8 +72,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
@@ -87,8 +88,10 @@
         * `None` is only added to the output dict for nullable fields that
           were set at model initialization. Other fields with value `None`
           are ignored.
+        * OpenAPI `readOnly` fields are excluded.
         """
         excluded_fields: Set[str] = set([
+            "extra_links",
         ])
 
         _dict = self.model_dump(
diff --git a/airflow_client/client/models/time_delta.py b/airflow_client/client/models/time_delta.py
index 51f7c5f..a2fd826 100644
--- a/airflow_client/client/models/time_delta.py
+++ b/airflow_client/client/models/time_delta.py
@@ -21,6 +21,7 @@
 from typing import Any, ClassVar, Dict, List, Optional
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class TimeDelta(BaseModel):
     """
@@ -33,7 +34,8 @@
     __properties: ClassVar[List[str]] = ["__type", "days", "microseconds", "seconds"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -45,8 +47,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/trigger_dag_run_post_body.py b/airflow_client/client/models/trigger_dag_run_post_body.py
index 430112b..f136335 100644
--- a/airflow_client/client/models/trigger_dag_run_post_body.py
+++ b/airflow_client/client/models/trigger_dag_run_post_body.py
@@ -17,6 +17,7 @@
 from typing import Any, ClassVar, Dict, List, Optional
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class TriggerDAGRunPostBody(BaseModel):
     """
@@ -28,10 +29,11 @@
     data_interval_start: Optional[datetime] = None
     logical_date: Optional[datetime] = None
     note: Optional[StrictStr] = None
+    partition_key: Optional[StrictStr] = None
     run_after: Optional[datetime] = None
     additional_properties: Dict[str, Any] = {}
-    __properties: ClassVar[List[str]] = ['conf', 'dag_run_id', 'data_interval_end', 'data_interval_start', 'logical_date', 'note', 'run_after']
-    model_config = ConfigDict(populate_by_name=True, validate_assignment=True, protected_namespaces=())
+    __properties: ClassVar[List[str]] = ['conf', 'dag_run_id', 'data_interval_end', 'data_interval_start', 'logical_date', 'note', 'partition_key', 'run_after']
+    model_config = ConfigDict(validate_by_name=True, validate_by_alias=True, validate_assignment=True, protected_namespaces=())
 
     def to_str(self) -> str:
         """Returns the string representation of the model using alias"""
@@ -39,7 +41,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
@@ -73,7 +75,7 @@
             return None
         if not isinstance(obj, dict):
             return cls.model_validate(obj)
-        _obj = cls.model_validate({'conf': obj.get('conf'), 'dag_run_id': obj.get('dag_run_id'), 'data_interval_end': obj.get('data_interval_end'), 'data_interval_start': obj.get('data_interval_start'), 'logical_date': obj.get('logical_date'), 'note': obj.get('note'), 'run_after': obj.get('run_after')})
+        _obj = cls.model_validate({'conf': obj.get('conf'), 'dag_run_id': obj.get('dag_run_id'), 'data_interval_end': obj.get('data_interval_end'), 'data_interval_start': obj.get('data_interval_start'), 'logical_date': obj.get('logical_date'), 'note': obj.get('note'), 'partition_key': obj.get('partition_key'), 'run_after': obj.get('run_after')})
         for _key in obj.keys():
             if _key not in cls.__properties:
                 _obj.additional_properties[_key] = obj.get(_key)
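The `from_dict` change above follows the generator's usual pattern: known keys (now including `partition_key`) are mapped explicitly, and any remaining keys are preserved in `additional_properties`. The same pattern, sketched without Pydantic (`split_known` is a stand-in for the generated code, not a real client function):

```python
# Known properties of TriggerDAGRunPostBody, per the diff above.
KNOWN = ['conf', 'dag_run_id', 'data_interval_end', 'data_interval_start',
         'logical_date', 'note', 'partition_key', 'run_after']

def split_known(obj: dict) -> tuple[dict, dict]:
    """Split an input dict into declared fields and extra properties."""
    known = {k: obj.get(k) for k in KNOWN}
    extra = {k: v for k, v in obj.items() if k not in KNOWN}
    return known, extra

known, extra = split_known({
    "dag_run_id": "manual__1",
    "partition_key": "2025-01-01",
    "x": 1,  # unknown key: survives in additional_properties
})
```

This is why adding `partition_key` required touching both `__properties` and the `model_validate` mapping: a key missing from either list would silently land in `additional_properties` instead of the typed field.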
diff --git a/airflow_client/client/models/trigger_response.py b/airflow_client/client/models/trigger_response.py
index abd4126..76707ad 100644
--- a/airflow_client/client/models/trigger_response.py
+++ b/airflow_client/client/models/trigger_response.py
@@ -22,6 +22,7 @@
 from typing import Any, ClassVar, Dict, List, Optional
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class TriggerResponse(BaseModel):
     """
@@ -31,11 +32,13 @@
     created_date: datetime
     id: StrictInt
     kwargs: StrictStr
+    queue: Optional[StrictStr] = None
     triggerer_id: Optional[StrictInt] = None
-    __properties: ClassVar[List[str]] = ["classpath", "created_date", "id", "kwargs", "triggerer_id"]
+    __properties: ClassVar[List[str]] = ["classpath", "created_date", "id", "kwargs", "queue", "triggerer_id"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -47,8 +50,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
@@ -89,6 +91,7 @@
             "created_date": obj.get("created_date"),
             "id": obj.get("id"),
             "kwargs": obj.get("kwargs"),
+            "queue": obj.get("queue"),
             "triggerer_id": obj.get("triggerer_id")
         })
         return _obj
diff --git a/airflow_client/client/models/triggerer_info_response.py b/airflow_client/client/models/triggerer_info_response.py
index ea0c9db..5261c23 100644
--- a/airflow_client/client/models/triggerer_info_response.py
+++ b/airflow_client/client/models/triggerer_info_response.py
@@ -21,6 +21,7 @@
 from typing import Any, ClassVar, Dict, List, Optional
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class TriggererInfoResponse(BaseModel):
     """
@@ -31,7 +32,8 @@
     __properties: ClassVar[List[str]] = ["latest_triggerer_heartbeat", "status"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -43,8 +45,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/update_hitl_detail_payload.py b/airflow_client/client/models/update_hitl_detail_payload.py
index db159c7..af439cd 100644
--- a/airflow_client/client/models/update_hitl_detail_payload.py
+++ b/airflow_client/client/models/update_hitl_detail_payload.py
@@ -22,6 +22,7 @@
 from typing_extensions import Annotated
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class UpdateHITLDetailPayload(BaseModel):
     """
@@ -32,7 +33,8 @@
     __properties: ClassVar[List[str]] = ["chosen_options", "params_input"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -44,8 +46,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/validation_error.py b/airflow_client/client/models/validation_error.py
index 4799d69..92a7f1d 100644
--- a/airflow_client/client/models/validation_error.py
+++ b/airflow_client/client/models/validation_error.py
@@ -18,22 +18,26 @@
 import json
 
 from pydantic import BaseModel, ConfigDict, StrictStr
-from typing import Any, ClassVar, Dict, List
+from typing import Any, ClassVar, Dict, List, Optional
 from airflow_client.client.models.location_inner import LocationInner
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class ValidationError(BaseModel):
     """
     ValidationError
     """ # noqa: E501
+    ctx: Optional[Dict[str, Any]] = None
+    input: Optional[Any] = None
     loc: List[LocationInner]
     msg: StrictStr
     type: StrictStr
-    __properties: ClassVar[List[str]] = ["loc", "msg", "type"]
+    __properties: ClassVar[List[str]] = ["ctx", "input", "loc", "msg", "type"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -45,8 +49,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
@@ -78,6 +81,11 @@
                 if _item_loc:
                     _items.append(_item_loc.to_dict())
             _dict['loc'] = _items
+        # set to None if input (nullable) is None
+        # and model_fields_set contains the field
+        if self.input is None and "input" in self.model_fields_set:
+            _dict['input'] = None
+
         return _dict
 
     @classmethod
@@ -90,6 +98,8 @@
             return cls.model_validate(obj)
 
         _obj = cls.model_validate({
+            "ctx": obj.get("ctx"),
+            "input": obj.get("input"),
             "loc": [LocationInner.from_dict(_item) for _item in obj["loc"]] if obj.get("loc") is not None else None,
             "msg": obj.get("msg"),
             "type": obj.get("type")
diff --git a/airflow_client/client/models/variable_body.py b/airflow_client/client/models/variable_body.py
index 937c970..d3e50d3 100644
--- a/airflow_client/client/models/variable_body.py
+++ b/airflow_client/client/models/variable_body.py
@@ -22,6 +22,7 @@
 from typing_extensions import Annotated
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class VariableBody(BaseModel):
     """
@@ -29,11 +30,13 @@
     """ # noqa: E501
     description: Optional[StrictStr] = None
     key: Annotated[str, Field(strict=True, max_length=250)]
+    team_name: Optional[Annotated[str, Field(strict=True, max_length=50)]] = None
     value: Optional[Any]
-    __properties: ClassVar[List[str]] = ["description", "key", "value"]
+    __properties: ClassVar[List[str]] = ["description", "key", "team_name", "value"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -45,8 +48,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
@@ -90,6 +92,7 @@
         _obj = cls.model_validate({
             "description": obj.get("description"),
             "key": obj.get("key"),
+            "team_name": obj.get("team_name"),
             "value": obj.get("value")
         })
         return _obj
diff --git a/airflow_client/client/models/variable_collection_response.py b/airflow_client/client/models/variable_collection_response.py
index 8723195..ee8bc5b 100644
--- a/airflow_client/client/models/variable_collection_response.py
+++ b/airflow_client/client/models/variable_collection_response.py
@@ -22,6 +22,7 @@
 from airflow_client.client.models.variable_response import VariableResponse
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class VariableCollectionResponse(BaseModel):
     """
@@ -32,7 +33,8 @@
     __properties: ClassVar[List[str]] = ["total_entries", "variables"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -44,8 +46,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/variable_response.py b/airflow_client/client/models/variable_response.py
index 03a43f6..7ee008a 100644
--- a/airflow_client/client/models/variable_response.py
+++ b/airflow_client/client/models/variable_response.py
@@ -21,6 +21,7 @@
 from typing import Any, ClassVar, Dict, List, Optional
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class VariableResponse(BaseModel):
     """
@@ -29,11 +30,13 @@
     description: Optional[StrictStr] = None
     is_encrypted: StrictBool
     key: StrictStr
+    team_name: Optional[StrictStr] = None
     value: StrictStr
-    __properties: ClassVar[List[str]] = ["description", "is_encrypted", "key", "value"]
+    __properties: ClassVar[List[str]] = ["description", "is_encrypted", "key", "team_name", "value"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -45,8 +48,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
@@ -86,6 +88,7 @@
             "description": obj.get("description"),
             "is_encrypted": obj.get("is_encrypted"),
             "key": obj.get("key"),
+            "team_name": obj.get("team_name"),
             "value": obj.get("value")
         })
         return _obj
diff --git a/airflow_client/client/models/version_info.py b/airflow_client/client/models/version_info.py
index adf945b..d385ed2 100644
--- a/airflow_client/client/models/version_info.py
+++ b/airflow_client/client/models/version_info.py
@@ -21,6 +21,7 @@
 from typing import Any, ClassVar, Dict, List, Optional
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class VersionInfo(BaseModel):
     """
@@ -31,7 +32,8 @@
     __properties: ClassVar[List[str]] = ["git_version", "version"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -43,8 +45,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/x_com_collection_response.py b/airflow_client/client/models/x_com_collection_response.py
index cd836a9..9486fb2 100644
--- a/airflow_client/client/models/x_com_collection_response.py
+++ b/airflow_client/client/models/x_com_collection_response.py
@@ -22,6 +22,7 @@
 from airflow_client.client.models.x_com_response import XComResponse
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class XComCollectionResponse(BaseModel):
     """
@@ -32,7 +33,8 @@
     __properties: ClassVar[List[str]] = ["total_entries", "xcom_entries"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -44,8 +46,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/x_com_create_body.py b/airflow_client/client/models/x_com_create_body.py
index 033e48c..4f2bfeb 100644
--- a/airflow_client/client/models/x_com_create_body.py
+++ b/airflow_client/client/models/x_com_create_body.py
@@ -21,6 +21,7 @@
 from typing import Any, ClassVar, Dict, List, Optional
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class XComCreateBody(BaseModel):
     """
@@ -32,7 +33,8 @@
     __properties: ClassVar[List[str]] = ["key", "map_index", "value"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -44,8 +46,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/x_com_response.py b/airflow_client/client/models/x_com_response.py
index 1e4c2f3..9bf8597 100644
--- a/airflow_client/client/models/x_com_response.py
+++ b/airflow_client/client/models/x_com_response.py
@@ -22,6 +22,7 @@
 from typing import Any, ClassVar, Dict, List, Optional
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class XComResponse(BaseModel):
     """
@@ -40,7 +41,8 @@
     __properties: ClassVar[List[str]] = ["dag_display_name", "dag_id", "key", "logical_date", "map_index", "run_after", "run_id", "task_display_name", "task_id", "timestamp"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -52,8 +54,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/x_com_response_native.py b/airflow_client/client/models/x_com_response_native.py
index 17a0276..029e472 100644
--- a/airflow_client/client/models/x_com_response_native.py
+++ b/airflow_client/client/models/x_com_response_native.py
@@ -22,6 +22,7 @@
 from typing import Any, ClassVar, Dict, List, Optional
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class XComResponseNative(BaseModel):
     """
@@ -41,7 +42,8 @@
     __properties: ClassVar[List[str]] = ["dag_display_name", "dag_id", "key", "logical_date", "map_index", "run_after", "run_id", "task_display_name", "task_id", "timestamp", "value"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -53,8 +55,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/x_com_response_string.py b/airflow_client/client/models/x_com_response_string.py
index daf4050..b2f70db 100644
--- a/airflow_client/client/models/x_com_response_string.py
+++ b/airflow_client/client/models/x_com_response_string.py
@@ -22,6 +22,7 @@
 from typing import Any, ClassVar, Dict, List, Optional
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class XComResponseString(BaseModel):
     """
@@ -41,7 +42,8 @@
     __properties: ClassVar[List[str]] = ["dag_display_name", "dag_id", "key", "logical_date", "map_index", "run_after", "run_id", "task_display_name", "task_id", "timestamp", "value"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -53,8 +55,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/airflow_client/client/models/x_com_update_body.py b/airflow_client/client/models/x_com_update_body.py
index ab70880..b455f33 100644
--- a/airflow_client/client/models/x_com_update_body.py
+++ b/airflow_client/client/models/x_com_update_body.py
@@ -21,6 +21,7 @@
 from typing import Any, ClassVar, Dict, List, Optional
 from typing import Optional, Set
 from typing_extensions import Self
+from pydantic_core import to_jsonable_python
 
 class XComUpdateBody(BaseModel):
     """
@@ -31,7 +32,8 @@
     __properties: ClassVar[List[str]] = ["map_index", "value"]
 
     model_config = ConfigDict(
-        populate_by_name=True,
+        validate_by_name=True,
+        validate_by_alias=True,
         validate_assignment=True,
         protected_namespaces=(),
     )
@@ -43,8 +45,7 @@
 
     def to_json(self) -> str:
         """Returns the JSON representation of the model using alias"""
-        # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead
-        return json.dumps(self.to_dict())
+        return json.dumps(to_jsonable_python(self.to_dict()))
 
     @classmethod
     def from_json(cls, json_str: str) -> Optional[Self]:
diff --git a/docs/ActionsInner.md b/docs/ActionsInner.md
index f46b740..7ab3378 100644
--- a/docs/ActionsInner.md
+++ b/docs/ActionsInner.md
@@ -9,6 +9,7 @@
 **action_on_existence** | [**BulkActionOnExistence**](BulkActionOnExistence.md) |  | [optional] 
 **entities** | [**List[EntitiesInner]**](EntitiesInner.md) | A list of entity id/key or entity objects to be deleted. | 
 **action_on_non_existence** | [**BulkActionNotOnExistence**](BulkActionNotOnExistence.md) |  | [optional] 
+**update_mask** | **List[str]** |  | [optional] 
 
 ## Example
 
diff --git a/docs/ActionsInner1.md b/docs/ActionsInner1.md
index ba49c6d..19390ff 100644
--- a/docs/ActionsInner1.md
+++ b/docs/ActionsInner1.md
@@ -7,8 +7,9 @@
 ------------ | ------------- | ------------- | -------------
 **action** | **str** | The action to be performed on the entities. | 
 **action_on_existence** | [**BulkActionOnExistence**](BulkActionOnExistence.md) |  | [optional] 
-**entities** | [**List[EntitiesInner]**](EntitiesInner.md) | A list of entity id/key or entity objects to be deleted. | 
+**entities** | [**List[EntitiesInner1]**](EntitiesInner1.md) | A list of entity id/key or entity objects to be deleted. | 
 **action_on_non_existence** | [**BulkActionNotOnExistence**](BulkActionNotOnExistence.md) |  | [optional] 
+**update_mask** | **List[str]** |  | [optional] 
 
 ## Example
 
diff --git a/docs/ActionsInner2.md b/docs/ActionsInner2.md
index 927e823..0e28ab2 100644
--- a/docs/ActionsInner2.md
+++ b/docs/ActionsInner2.md
@@ -7,8 +7,9 @@
 ------------ | ------------- | ------------- | -------------
 **action** | **str** | The action to be performed on the entities. | 
 **action_on_existence** | [**BulkActionOnExistence**](BulkActionOnExistence.md) |  | [optional] 
-**entities** | [**List[EntitiesInner]**](EntitiesInner.md) | A list of entity id/key or entity objects to be deleted. | 
+**entities** | [**List[EntitiesInner2]**](EntitiesInner2.md) | A list of entity id/key or entity objects to be deleted. | 
 **action_on_non_existence** | [**BulkActionNotOnExistence**](BulkActionNotOnExistence.md) |  | [optional] 
+**update_mask** | **List[str]** |  | [optional] 
 
 ## Example
 
diff --git a/docs/ActionsInner3.md b/docs/ActionsInner3.md
index 9f44841..f8517b1 100644
--- a/docs/ActionsInner3.md
+++ b/docs/ActionsInner3.md
@@ -7,8 +7,9 @@
 ------------ | ------------- | ------------- | -------------
 **action** | **str** | The action to be performed on the entities. | 
 **action_on_existence** | [**BulkActionOnExistence**](BulkActionOnExistence.md) |  | [optional] 
-**entities** | [**List[EntitiesInner]**](EntitiesInner.md) | A list of entity id/key or entity objects to be deleted. | 
+**entities** | [**List[EntitiesInner3]**](EntitiesInner3.md) | A list of entity id/key or entity objects to be deleted. | 
 **action_on_non_existence** | [**BulkActionNotOnExistence**](BulkActionNotOnExistence.md) |  | [optional] 
+**update_mask** | **List[str]** |  | [optional] 
 
 ## Example
 
diff --git a/docs/AssetApi.md b/docs/AssetApi.md
index 24b063f..2319423 100644
--- a/docs/AssetApi.md
+++ b/docs/AssetApi.md
@@ -570,7 +570,7 @@
     api_instance = airflow_client.client.AssetApi(api_client)
     limit = 50 # int |  (optional) (default to 50)
     offset = 0 # int |  (optional) (default to 0)
-    name_pattern = 'name_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. (optional)
+    name_pattern = 'name_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. (optional)
     order_by = ["id"] # List[str] | Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `id, name` (optional) (default to ["id"])
 
     try:
@@ -591,7 +591,7 @@
 ------------- | ------------- | ------------- | -------------
  **limit** | **int**|  | [optional] [default to 50]
  **offset** | **int**|  | [optional] [default to 0]
- **name_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. | [optional] 
+ **name_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. | [optional] 
  **order_by** | [**List[str]**](str.md)| Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `id, name` | [optional] [default to ["id"]]
 
 ### Return type
@@ -620,7 +620,7 @@
 [[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
 
 # **get_asset_events**
-> AssetEventCollectionResponse get_asset_events(limit=limit, offset=offset, order_by=order_by, asset_id=asset_id, source_dag_id=source_dag_id, source_task_id=source_task_id, source_run_id=source_run_id, source_map_index=source_map_index, timestamp_gte=timestamp_gte, timestamp_gt=timestamp_gt, timestamp_lte=timestamp_lte, timestamp_lt=timestamp_lt)
+> AssetEventCollectionResponse get_asset_events(limit=limit, offset=offset, order_by=order_by, asset_id=asset_id, source_dag_id=source_dag_id, source_task_id=source_task_id, source_run_id=source_run_id, source_map_index=source_map_index, name_pattern=name_pattern, timestamp_gte=timestamp_gte, timestamp_gt=timestamp_gt, timestamp_lte=timestamp_lte, timestamp_lt=timestamp_lt)
 
 Get Asset Events
 
@@ -667,6 +667,7 @@
     source_task_id = 'source_task_id_example' # str |  (optional)
     source_run_id = 'source_run_id_example' # str |  (optional)
     source_map_index = 56 # int |  (optional)
+    name_pattern = 'name_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. (optional)
     timestamp_gte = '2013-10-20T19:20:30+01:00' # datetime |  (optional)
     timestamp_gt = '2013-10-20T19:20:30+01:00' # datetime |  (optional)
     timestamp_lte = '2013-10-20T19:20:30+01:00' # datetime |  (optional)
@@ -674,7 +675,7 @@
 
     try:
         # Get Asset Events
-        api_response = api_instance.get_asset_events(limit=limit, offset=offset, order_by=order_by, asset_id=asset_id, source_dag_id=source_dag_id, source_task_id=source_task_id, source_run_id=source_run_id, source_map_index=source_map_index, timestamp_gte=timestamp_gte, timestamp_gt=timestamp_gt, timestamp_lte=timestamp_lte, timestamp_lt=timestamp_lt)
+        api_response = api_instance.get_asset_events(limit=limit, offset=offset, order_by=order_by, asset_id=asset_id, source_dag_id=source_dag_id, source_task_id=source_task_id, source_run_id=source_run_id, source_map_index=source_map_index, name_pattern=name_pattern, timestamp_gte=timestamp_gte, timestamp_gt=timestamp_gt, timestamp_lte=timestamp_lte, timestamp_lt=timestamp_lt)
         print("The response of AssetApi->get_asset_events:\n")
         pprint(api_response)
     except Exception as e:
@@ -696,6 +697,7 @@
  **source_task_id** | **str**|  | [optional] 
  **source_run_id** | **str**|  | [optional] 
  **source_map_index** | **int**|  | [optional] 
+ **name_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. | [optional] 
  **timestamp_gte** | **datetime**|  | [optional] 
  **timestamp_gt** | **datetime**|  | [optional] 
  **timestamp_lte** | **datetime**|  | [optional] 
@@ -854,8 +856,8 @@
     api_instance = airflow_client.client.AssetApi(api_client)
     limit = 50 # int |  (optional) (default to 50)
     offset = 0 # int |  (optional) (default to 0)
-    name_pattern = 'name_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. (optional)
-    uri_pattern = 'uri_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. (optional)
+    name_pattern = 'name_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. (optional)
+    uri_pattern = 'uri_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. (optional)
     dag_ids = ['dag_ids_example'] # List[str] |  (optional)
     only_active = True # bool |  (optional) (default to True)
     order_by = ["id"] # List[str] | Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `id, name, uri, created_at, updated_at` (optional) (default to ["id"])
@@ -878,8 +880,8 @@
 ------------- | ------------- | ------------- | -------------
  **limit** | **int**|  | [optional] [default to 50]
  **offset** | **int**|  | [optional] [default to 0]
- **name_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. | [optional] 
- **uri_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. | [optional] 
+ **name_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. | [optional] 
+ **uri_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`), or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. | [optional] 
  **dag_ids** | [**List[str]**](str.md)|  | [optional] 
  **only_active** | **bool**|  | [optional] [default to True]
  **order_by** | [**List[str]**](str.md)| Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `id, name, uri, created_at, updated_at` | [optional] [default to ["id"]]
@@ -1162,6 +1164,7 @@
 | Status code | Description | Response headers |
 |-------------|-------------|------------------|
 **200** | Successful Response |  -  |
+**400** | Bad Request |  -  |
 **401** | Unauthorized |  -  |
 **403** | Forbidden |  -  |
 **404** | Not Found |  -  |
diff --git a/docs/AssetEventResponse.md b/docs/AssetEventResponse.md
index ad03149..4dffd0d 100644
--- a/docs/AssetEventResponse.md
+++ b/docs/AssetEventResponse.md
@@ -12,6 +12,7 @@
 **group** | **str** |  | [optional] 
 **id** | **int** |  | 
 **name** | **str** |  | [optional] 
+**partition_key** | **str** |  | [optional] 
 **source_dag_id** | **str** |  | [optional] 
 **source_map_index** | **int** |  | 
 **source_run_id** | **str** |  | [optional] 
diff --git a/docs/AssetResponse.md b/docs/AssetResponse.md
index 0201a00..5583ce3 100644
--- a/docs/AssetResponse.md
+++ b/docs/AssetResponse.md
@@ -18,6 +18,7 @@
 **scheduled_dags** | [**List[DagScheduleAssetReference]**](DagScheduleAssetReference.md) |  | 
 **updated_at** | **datetime** |  | 
 **uri** | **str** |  | 
+**watchers** | [**List[AssetWatcherResponse]**](AssetWatcherResponse.md) |  | 
 
 ## Example
 
diff --git a/docs/AssetWatcherResponse.md b/docs/AssetWatcherResponse.md
new file mode 100644
index 0000000..ace1688
--- /dev/null
+++ b/docs/AssetWatcherResponse.md
@@ -0,0 +1,32 @@
+# AssetWatcherResponse
+
+Asset watcher serializer for responses.
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**created_date** | **datetime** |  | 
+**name** | **str** |  | 
+**trigger_id** | **int** |  | 
+
+## Example
+
+```python
+from airflow_client.client.models.asset_watcher_response import AssetWatcherResponse
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of AssetWatcherResponse from a JSON string
+asset_watcher_response_instance = AssetWatcherResponse.from_json(json)
+# print the JSON string representation of the object
+print(asset_watcher_response_instance.to_json())
+
+# convert the object into a dict
+asset_watcher_response_dict = asset_watcher_response_instance.to_dict()
+# create an instance of AssetWatcherResponse from a dict
+asset_watcher_response_from_dict = AssetWatcherResponse.from_dict(asset_watcher_response_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/BackfillApi.md b/docs/BackfillApi.md
index 3aa424f..b400b8d 100644
--- a/docs/BackfillApi.md
+++ b/docs/BackfillApi.md
@@ -174,6 +174,7 @@
 | Status code | Description | Response headers |
 |-------------|-------------|------------------|
 **200** | Successful Response |  -  |
+**400** | Bad Request |  -  |
 **401** | Unauthorized |  -  |
 **403** | Forbidden |  -  |
 **404** | Not Found |  -  |
diff --git a/docs/BackfillResponse.md b/docs/BackfillResponse.md
index 5e2cfa5..c1e35c6 100644
--- a/docs/BackfillResponse.md
+++ b/docs/BackfillResponse.md
@@ -10,7 +10,7 @@
 **created_at** | **datetime** |  | 
 **dag_display_name** | **str** |  | 
 **dag_id** | **str** |  | 
-**dag_run_conf** | **Dict[str, object]** |  | 
+**dag_run_conf** | **Dict[str, object]** |  | [optional] 
 **from_date** | **datetime** |  | 
 **id** | **int** |  | 
 **is_paused** | **bool** |  | 
diff --git a/docs/BulkDeleteActionConnectionBody.md b/docs/BulkDeleteActionConnectionBody.md
index df1ca0a..c4af79a 100644
--- a/docs/BulkDeleteActionConnectionBody.md
+++ b/docs/BulkDeleteActionConnectionBody.md
@@ -7,7 +7,7 @@
 ------------ | ------------- | ------------- | -------------
 **action** | **str** | The action to be performed on the entities. | 
 **action_on_non_existence** | [**BulkActionNotOnExistence**](BulkActionNotOnExistence.md) |  | [optional] 
-**entities** | [**List[EntitiesInner]**](EntitiesInner.md) | A list of entity id/key or entity objects to be deleted. | 
+**entities** | [**List[EntitiesInner1]**](EntitiesInner1.md) | A list of entity id/key or entity objects to be deleted. | 
 
 ## Example
 
diff --git a/docs/BulkDeleteActionPoolBody.md b/docs/BulkDeleteActionPoolBody.md
index 3cada05..a82a16b 100644
--- a/docs/BulkDeleteActionPoolBody.md
+++ b/docs/BulkDeleteActionPoolBody.md
@@ -7,7 +7,7 @@
 ------------ | ------------- | ------------- | -------------
 **action** | **str** | The action to be performed on the entities. | 
 **action_on_non_existence** | [**BulkActionNotOnExistence**](BulkActionNotOnExistence.md) |  | [optional] 
-**entities** | [**List[EntitiesInner]**](EntitiesInner.md) | A list of entity id/key or entity objects to be deleted. | 
+**entities** | [**List[EntitiesInner2]**](EntitiesInner2.md) | A list of entity id/key or entity objects to be deleted. | 
 
 ## Example
 
diff --git a/docs/BulkDeleteActionVariableBody.md b/docs/BulkDeleteActionVariableBody.md
index 3353985..cb46eaa 100644
--- a/docs/BulkDeleteActionVariableBody.md
+++ b/docs/BulkDeleteActionVariableBody.md
@@ -7,7 +7,7 @@
 ------------ | ------------- | ------------- | -------------
 **action** | **str** | The action to be performed on the entities. | 
 **action_on_non_existence** | [**BulkActionNotOnExistence**](BulkActionNotOnExistence.md) |  | [optional] 
-**entities** | [**List[EntitiesInner]**](EntitiesInner.md) | A list of entity id/key or entity objects to be deleted. | 
+**entities** | [**List[EntitiesInner3]**](EntitiesInner3.md) | A list of entity id/key or entity objects to be deleted. | 
 
 ## Example
 
diff --git a/docs/BulkTaskInstanceBody.md b/docs/BulkTaskInstanceBody.md
index ae1ce2b..f6e9d86 100644
--- a/docs/BulkTaskInstanceBody.md
+++ b/docs/BulkTaskInstanceBody.md
@@ -6,6 +6,8 @@
 
 Name | Type | Description | Notes
 ------------ | ------------- | ------------- | -------------
+**dag_id** | **str** |  | [optional] 
+**dag_run_id** | **str** |  | [optional] 
 **include_downstream** | **bool** |  | [optional] [default to False]
 **include_future** | **bool** |  | [optional] [default to False]
 **include_past** | **bool** |  | [optional] [default to False]
diff --git a/docs/BulkUpdateActionBulkTaskInstanceBody.md b/docs/BulkUpdateActionBulkTaskInstanceBody.md
index d6691d3..c7125c3 100644
--- a/docs/BulkUpdateActionBulkTaskInstanceBody.md
+++ b/docs/BulkUpdateActionBulkTaskInstanceBody.md
@@ -8,6 +8,7 @@
 **action** | **str** | The action to be performed on the entities. | 
 **action_on_non_existence** | [**BulkActionNotOnExistence**](BulkActionNotOnExistence.md) |  | [optional] 
 **entities** | [**List[BulkTaskInstanceBody]**](BulkTaskInstanceBody.md) | A list of entities to be updated. | 
+**update_mask** | **List[str]** |  | [optional] 
 
 ## Example
 
diff --git a/docs/BulkUpdateActionConnectionBody.md b/docs/BulkUpdateActionConnectionBody.md
index 53494a0..0302937 100644
--- a/docs/BulkUpdateActionConnectionBody.md
+++ b/docs/BulkUpdateActionConnectionBody.md
@@ -8,6 +8,7 @@
 **action** | **str** | The action to be performed on the entities. | 
 **action_on_non_existence** | [**BulkActionNotOnExistence**](BulkActionNotOnExistence.md) |  | [optional] 
 **entities** | [**List[ConnectionBody]**](ConnectionBody.md) | A list of entities to be updated. | 
+**update_mask** | **List[str]** |  | [optional] 
 
 ## Example
 
diff --git a/docs/BulkUpdateActionPoolBody.md b/docs/BulkUpdateActionPoolBody.md
index 284c94b..bd03ec7 100644
--- a/docs/BulkUpdateActionPoolBody.md
+++ b/docs/BulkUpdateActionPoolBody.md
@@ -8,6 +8,7 @@
 **action** | **str** | The action to be performed on the entities. | 
 **action_on_non_existence** | [**BulkActionNotOnExistence**](BulkActionNotOnExistence.md) |  | [optional] 
 **entities** | [**List[PoolBody]**](PoolBody.md) | A list of entities to be updated. | 
+**update_mask** | **List[str]** |  | [optional] 
 
 ## Example
 
diff --git a/docs/BulkUpdateActionVariableBody.md b/docs/BulkUpdateActionVariableBody.md
index c811754..a3df85b 100644
--- a/docs/BulkUpdateActionVariableBody.md
+++ b/docs/BulkUpdateActionVariableBody.md
@@ -8,6 +8,7 @@
 **action** | **str** | The action to be performed on the entities. | 
 **action_on_non_existence** | [**BulkActionNotOnExistence**](BulkActionNotOnExistence.md) |  | [optional] 
 **entities** | [**List[VariableBody]**](VariableBody.md) | A list of entities to be updated. | 
+**update_mask** | **List[str]** |  | [optional] 
 
 ## Example
 
diff --git a/docs/ClearTaskInstancesBody.md b/docs/ClearTaskInstancesBody.md
index 3ff6154..3fe4eeb 100644
--- a/docs/ClearTaskInstancesBody.md
+++ b/docs/ClearTaskInstancesBody.md
@@ -15,6 +15,7 @@
 **include_upstream** | **bool** |  | [optional] [default to False]
 **only_failed** | **bool** |  | [optional] [default to True]
 **only_running** | **bool** |  | [optional] [default to False]
+**prevent_running_task** | **bool** |  | [optional] [default to False]
 **reset_dag_runs** | **bool** |  | [optional] [default to True]
 **run_on_latest_version** | **bool** | (Experimental) Run on the latest bundle version of the dag after clearing the task instances. | [optional] [default to False]
 **start_date** | **datetime** |  | [optional] 
diff --git a/docs/ConnectionApi.md b/docs/ConnectionApi.md
index 9f79a90..14dcdb2 100644
--- a/docs/ConnectionApi.md
+++ b/docs/ConnectionApi.md
@@ -384,8 +384,8 @@
     api_instance = airflow_client.client.ConnectionApi(api_client)
     limit = 50 # int |  (optional) (default to 50)
     offset = 0 # int |  (optional) (default to 0)
-    order_by = ["id"] # List[str] | Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `conn_id, conn_type, description, host, port, id, connection_id` (optional) (default to ["id"])
-    connection_id_pattern = 'connection_id_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. (optional)
+    order_by = ["id"] # List[str] | Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `conn_id, conn_type, description, host, port, id, team_name, connection_id` (optional) (default to ["id"])
+    connection_id_pattern = 'connection_id_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. (optional)
 
     try:
         # Get Connections
@@ -405,8 +405,8 @@
 ------------- | ------------- | ------------- | -------------
  **limit** | **int**|  | [optional] [default to 50]
  **offset** | **int**|  | [optional] [default to 0]
- **order_by** | [**List[str]**](str.md)| Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `conn_id, conn_type, description, host, port, id, connection_id` | [optional] [default to ["id"]]
- **connection_id_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. | [optional] 
+ **order_by** | [**List[str]**](str.md)| Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `conn_id, conn_type, description, host, port, id, team_name, connection_id` | [optional] [default to ["id"]]
+ **connection_id_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. | [optional] 
 
 ### Return type
 
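The pipe `|` OR operator introduced above combines with the existing `%` / `_` LIKE wildcards. As a rough client-side sketch of the matching semantics (the server evaluates this in SQL, so edge cases may differ):

```python
import re


def like_to_regex(like: str) -> str:
    """Translate a SQL LIKE pattern to a regex: `%` -> `.*`, `_` -> `.`."""
    return re.escape(like).replace("%", ".*").replace("_", ".")


def pattern_matches(value: str, pattern: str) -> bool:
    """Match a value against LIKE alternatives joined by `|` (OR logic)."""
    return any(
        re.fullmatch(like_to_regex(alt.strip()), value) is not None
        for alt in pattern.split("|")
    )
```

Here `%customer_%` matches any id containing `customer` followed by at least one more character, and `dag1 | dag2` matches either id exactly.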
diff --git a/docs/ConnectionBody.md b/docs/ConnectionBody.md
index bb250a4..d0acbc5 100644
--- a/docs/ConnectionBody.md
+++ b/docs/ConnectionBody.md
@@ -15,6 +15,7 @@
 **password** | **str** |  | [optional] 
 **port** | **int** |  | [optional] 
 **var_schema** | **str** |  | [optional] 
+**team_name** | **str** |  | [optional] 
 
 ## Example
 
diff --git a/docs/ConnectionResponse.md b/docs/ConnectionResponse.md
index 56d71ac..9f93274 100644
--- a/docs/ConnectionResponse.md
+++ b/docs/ConnectionResponse.md
@@ -15,6 +15,7 @@
 **password** | **str** |  | [optional] 
 **port** | **int** |  | [optional] 
 **var_schema** | **str** |  | [optional] 
+**team_name** | **str** |  | [optional] 
 
 ## Example
 
diff --git a/docs/CreateAssetEventsBody.md b/docs/CreateAssetEventsBody.md
index 63ce884..23bf2fd 100644
--- a/docs/CreateAssetEventsBody.md
+++ b/docs/CreateAssetEventsBody.md
@@ -8,6 +8,7 @@
 ------------ | ------------- | ------------- | -------------
 **asset_id** | **int** |  | 
 **extra** | **Dict[str, object]** |  | [optional] 
+**partition_key** | **str** |  | [optional] 
 
 ## Example
 
diff --git a/docs/DAGApi.md b/docs/DAGApi.md
index c13e29e..888662f 100644
--- a/docs/DAGApi.md
+++ b/docs/DAGApi.md
@@ -397,7 +397,7 @@
     limit = 50 # int |  (optional) (default to 50)
     offset = 0 # int |  (optional) (default to 0)
     order_by = ["name"] # List[str] | Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `name` (optional) (default to ["name"])
-    tag_name_pattern = 'tag_name_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. (optional)
+    tag_name_pattern = 'tag_name_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. (optional)
 
     try:
         # Get Dag Tags
@@ -418,7 +418,7 @@
  **limit** | **int**|  | [optional] [default to 50]
  **offset** | **int**|  | [optional] [default to 0]
  **order_by** | [**List[str]**](str.md)| Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `name` | [optional] [default to ["name"]]
- **tag_name_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. | [optional] 
+ **tag_name_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. | [optional] 
 
 ### Return type
 
@@ -445,7 +445,7 @@
 [[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
 
 # **get_dags**
-> DAGCollectionResponse get_dags(limit=limit, offset=offset, tags=tags, tags_match_mode=tags_match_mode, owners=owners, dag_id_pattern=dag_id_pattern, dag_display_name_pattern=dag_display_name_pattern, exclude_stale=exclude_stale, paused=paused, has_import_errors=has_import_errors, last_dag_run_state=last_dag_run_state, bundle_name=bundle_name, bundle_version=bundle_version, has_asset_schedule=has_asset_schedule, asset_dependency=asset_dependency, dag_run_start_date_gte=dag_run_start_date_gte, dag_run_start_date_gt=dag_run_start_date_gt, dag_run_start_date_lte=dag_run_start_date_lte, dag_run_start_date_lt=dag_run_start_date_lt, dag_run_end_date_gte=dag_run_end_date_gte, dag_run_end_date_gt=dag_run_end_date_gt, dag_run_end_date_lte=dag_run_end_date_lte, dag_run_end_date_lt=dag_run_end_date_lt, dag_run_state=dag_run_state, order_by=order_by, is_favorite=is_favorite)
+> DAGCollectionResponse get_dags(limit=limit, offset=offset, tags=tags, tags_match_mode=tags_match_mode, owners=owners, dag_id_pattern=dag_id_pattern, dag_display_name_pattern=dag_display_name_pattern, exclude_stale=exclude_stale, paused=paused, has_import_errors=has_import_errors, last_dag_run_state=last_dag_run_state, bundle_name=bundle_name, bundle_version=bundle_version, has_asset_schedule=has_asset_schedule, asset_dependency=asset_dependency, dag_run_start_date_gte=dag_run_start_date_gte, dag_run_start_date_gt=dag_run_start_date_gt, dag_run_start_date_lte=dag_run_start_date_lte, dag_run_start_date_lt=dag_run_start_date_lt, dag_run_end_date_gte=dag_run_end_date_gte, dag_run_end_date_gt=dag_run_end_date_gt, dag_run_end_date_lte=dag_run_end_date_lte, dag_run_end_date_lt=dag_run_end_date_lt, dag_run_state=dag_run_state, order_by=order_by, is_favorite=is_favorite, timetable_type=timetable_type)
 
 Get Dags
 
@@ -489,8 +489,8 @@
     tags = ['tags_example'] # List[str] |  (optional)
     tags_match_mode = 'tags_match_mode_example' # str |  (optional)
     owners = ['owners_example'] # List[str] |  (optional)
-    dag_id_pattern = 'dag_id_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. (optional)
-    dag_display_name_pattern = 'dag_display_name_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. (optional)
+    dag_id_pattern = 'dag_id_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. (optional)
+    dag_display_name_pattern = 'dag_display_name_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. (optional)
     exclude_stale = True # bool |  (optional) (default to True)
     paused = True # bool |  (optional)
     has_import_errors = True # bool | Filter Dags by having import errors. Only Dags that have been successfully loaded before will be returned. (optional)
@@ -510,10 +510,11 @@
     dag_run_state = ['dag_run_state_example'] # List[str] |  (optional)
     order_by = ["dag_id"] # List[str] | Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `dag_id, dag_display_name, next_dagrun, state, start_date, last_run_state, last_run_start_date` (optional) (default to ["dag_id"])
     is_favorite = True # bool |  (optional)
+    timetable_type = ['timetable_type_example'] # List[str] |  (optional)
 
     try:
         # Get Dags
-        api_response = api_instance.get_dags(limit=limit, offset=offset, tags=tags, tags_match_mode=tags_match_mode, owners=owners, dag_id_pattern=dag_id_pattern, dag_display_name_pattern=dag_display_name_pattern, exclude_stale=exclude_stale, paused=paused, has_import_errors=has_import_errors, last_dag_run_state=last_dag_run_state, bundle_name=bundle_name, bundle_version=bundle_version, has_asset_schedule=has_asset_schedule, asset_dependency=asset_dependency, dag_run_start_date_gte=dag_run_start_date_gte, dag_run_start_date_gt=dag_run_start_date_gt, dag_run_start_date_lte=dag_run_start_date_lte, dag_run_start_date_lt=dag_run_start_date_lt, dag_run_end_date_gte=dag_run_end_date_gte, dag_run_end_date_gt=dag_run_end_date_gt, dag_run_end_date_lte=dag_run_end_date_lte, dag_run_end_date_lt=dag_run_end_date_lt, dag_run_state=dag_run_state, order_by=order_by, is_favorite=is_favorite)
+        api_response = api_instance.get_dags(limit=limit, offset=offset, tags=tags, tags_match_mode=tags_match_mode, owners=owners, dag_id_pattern=dag_id_pattern, dag_display_name_pattern=dag_display_name_pattern, exclude_stale=exclude_stale, paused=paused, has_import_errors=has_import_errors, last_dag_run_state=last_dag_run_state, bundle_name=bundle_name, bundle_version=bundle_version, has_asset_schedule=has_asset_schedule, asset_dependency=asset_dependency, dag_run_start_date_gte=dag_run_start_date_gte, dag_run_start_date_gt=dag_run_start_date_gt, dag_run_start_date_lte=dag_run_start_date_lte, dag_run_start_date_lt=dag_run_start_date_lt, dag_run_end_date_gte=dag_run_end_date_gte, dag_run_end_date_gt=dag_run_end_date_gt, dag_run_end_date_lte=dag_run_end_date_lte, dag_run_end_date_lt=dag_run_end_date_lt, dag_run_state=dag_run_state, order_by=order_by, is_favorite=is_favorite, timetable_type=timetable_type)
         print("The response of DAGApi->get_dags:\n")
         pprint(api_response)
     except Exception as e:
@@ -532,8 +533,8 @@
  **tags** | [**List[str]**](str.md)|  | [optional] 
  **tags_match_mode** | **str**|  | [optional] 
  **owners** | [**List[str]**](str.md)|  | [optional] 
- **dag_id_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. | [optional] 
- **dag_display_name_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. | [optional] 
+ **dag_id_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. | [optional] 
+ **dag_display_name_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. | [optional] 
  **exclude_stale** | **bool**|  | [optional] [default to True]
  **paused** | **bool**|  | [optional] 
  **has_import_errors** | **bool**| Filter Dags by having import errors. Only Dags that have been successfully loaded before will be returned. | [optional] 
@@ -553,6 +554,7 @@
  **dag_run_state** | [**List[str]**](str.md)|  | [optional] 
  **order_by** | [**List[str]**](str.md)| Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `dag_id, dag_display_name, next_dagrun, state, start_date, last_run_state, last_run_start_date` | [optional] [default to ["dag_id"]]
  **is_favorite** | **bool**|  | [optional] 
+ **timetable_type** | [**List[str]**](str.md)|  | [optional] 
 
 ### Return type
 
@@ -717,7 +719,7 @@
     tags = ['tags_example'] # List[str] |  (optional)
     tags_match_mode = 'tags_match_mode_example' # str |  (optional)
     owners = ['owners_example'] # List[str] |  (optional)
-    dag_id_pattern = 'dag_id_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. (optional)
+    dag_id_pattern = 'dag_id_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. (optional)
     exclude_stale = True # bool |  (optional) (default to True)
     paused = True # bool |  (optional)
 
@@ -744,7 +746,7 @@
  **tags** | [**List[str]**](str.md)|  | [optional] 
  **tags_match_mode** | **str**|  | [optional] 
  **owners** | [**List[str]**](str.md)|  | [optional] 
- **dag_id_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. | [optional] 
+ **dag_id_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. | [optional] 
  **exclude_stale** | **bool**|  | [optional] [default to True]
  **paused** | **bool**|  | [optional] 
 
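The new `timetable_type` filter is a list parameter, so it is sent as a repeated query key alongside the pattern filters. A minimal sketch of how such a query string is built (the endpoint path, timetable names, and pattern values below are illustrative assumptions, not the client's actual request internals):

```python
from urllib.parse import urlencode

# Hypothetical query for listing DAGs filtered by timetable type;
# list values are serialized as repeated keys (doseq=True).
params = {
    "limit": 50,
    "timetable_type": ["CronTriggerTimetable", "DeltaTriggerTimetable"],
    "dag_id_pattern": "etl_% | reporting_%",
}
query = urlencode(params, doseq=True)
```

Note that `urlencode` percent-encodes the `%` wildcards and the `|` OR operator in the pattern automatically.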
diff --git a/docs/DAGDetailsResponse.md b/docs/DAGDetailsResponse.md
index d26ac43..df51a5c 100644
--- a/docs/DAGDetailsResponse.md
+++ b/docs/DAGDetailsResponse.md
@@ -6,6 +6,8 @@
 
 Name | Type | Description | Notes
 ------------ | ------------- | ------------- | -------------
+**active_runs_count** | **int** |  | [optional] [default to 0]
+**allowed_run_types** | [**List[DagRunType]**](DagRunType.md) |  | [optional] 
 **asset_expression** | **Dict[str, object]** |  | [optional] 
 **bundle_name** | **str** |  | [optional] 
 **bundle_version** | **str** |  | [optional] 
@@ -47,6 +49,7 @@
 **tags** | [**List[DagTagResponse]**](DagTagResponse.md) |  | 
 **template_search_path** | **List[str]** |  | [optional] 
 **timetable_description** | **str** |  | [optional] 
+**timetable_partitioned** | **bool** |  | 
 **timetable_summary** | **str** |  | [optional] 
 **timezone** | **str** |  | [optional] 
 
diff --git a/docs/DAGResponse.md b/docs/DAGResponse.md
index 4a4619a..b82c433 100644
--- a/docs/DAGResponse.md
+++ b/docs/DAGResponse.md
@@ -6,6 +6,7 @@
 
 Name | Type | Description | Notes
 ------------ | ------------- | ------------- | -------------
+**allowed_run_types** | [**List[DagRunType]**](DagRunType.md) |  | [optional] 
 **bundle_name** | **str** |  | [optional] 
 **bundle_version** | **str** |  | [optional] 
 **dag_display_name** | **str** |  | 
@@ -31,6 +32,7 @@
 **relative_fileloc** | **str** |  | [optional] 
 **tags** | [**List[DagTagResponse]**](DagTagResponse.md) |  | 
 **timetable_description** | **str** |  | [optional] 
+**timetable_partitioned** | **bool** |  | 
 **timetable_summary** | **str** |  | [optional] 
 
 ## Example
diff --git a/docs/DAGRunApi.md b/docs/DAGRunApi.md
index 95e537f..5f08685 100644
--- a/docs/DAGRunApi.md
+++ b/docs/DAGRunApi.md
@@ -274,7 +274,7 @@
 [[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
 
 # **get_dag_runs**
-> DAGRunCollectionResponse get_dag_runs(dag_id, limit=limit, offset=offset, run_after_gte=run_after_gte, run_after_gt=run_after_gt, run_after_lte=run_after_lte, run_after_lt=run_after_lt, logical_date_gte=logical_date_gte, logical_date_gt=logical_date_gt, logical_date_lte=logical_date_lte, logical_date_lt=logical_date_lt, start_date_gte=start_date_gte, start_date_gt=start_date_gt, start_date_lte=start_date_lte, start_date_lt=start_date_lt, end_date_gte=end_date_gte, end_date_gt=end_date_gt, end_date_lte=end_date_lte, end_date_lt=end_date_lt, updated_at_gte=updated_at_gte, updated_at_gt=updated_at_gt, updated_at_lte=updated_at_lte, updated_at_lt=updated_at_lt, run_type=run_type, state=state, dag_version=dag_version, order_by=order_by, run_id_pattern=run_id_pattern, triggering_user_name_pattern=triggering_user_name_pattern)
+> DAGRunCollectionResponse get_dag_runs(dag_id, limit=limit, offset=offset, run_after_gte=run_after_gte, run_after_gt=run_after_gt, run_after_lte=run_after_lte, run_after_lt=run_after_lt, logical_date_gte=logical_date_gte, logical_date_gt=logical_date_gt, logical_date_lte=logical_date_lte, logical_date_lt=logical_date_lt, start_date_gte=start_date_gte, start_date_gt=start_date_gt, start_date_lte=start_date_lte, start_date_lt=start_date_lt, end_date_gte=end_date_gte, end_date_gt=end_date_gt, end_date_lte=end_date_lte, end_date_lt=end_date_lt, duration_gte=duration_gte, duration_gt=duration_gt, duration_lte=duration_lte, duration_lt=duration_lt, updated_at_gte=updated_at_gte, updated_at_gt=updated_at_gt, updated_at_lte=updated_at_lte, updated_at_lt=updated_at_lt, conf_contains=conf_contains, run_type=run_type, state=state, dag_version=dag_version, bundle_version=bundle_version, order_by=order_by, run_id_pattern=run_id_pattern, triggering_user_name_pattern=triggering_user_name_pattern, dag_id_pattern=dag_id_pattern, partition_key_pattern=partition_key_pattern)
 
 Get Dag Runs
 
@@ -334,20 +334,28 @@
     end_date_gt = '2013-10-20T19:20:30+01:00' # datetime |  (optional)
     end_date_lte = '2013-10-20T19:20:30+01:00' # datetime |  (optional)
     end_date_lt = '2013-10-20T19:20:30+01:00' # datetime |  (optional)
+    duration_gte = 3.4 # float |  (optional)
+    duration_gt = 3.4 # float |  (optional)
+    duration_lte = 3.4 # float |  (optional)
+    duration_lt = 3.4 # float |  (optional)
     updated_at_gte = '2013-10-20T19:20:30+01:00' # datetime |  (optional)
     updated_at_gt = '2013-10-20T19:20:30+01:00' # datetime |  (optional)
     updated_at_lte = '2013-10-20T19:20:30+01:00' # datetime |  (optional)
     updated_at_lt = '2013-10-20T19:20:30+01:00' # datetime |  (optional)
+    conf_contains = 'conf_contains_example' # str |  (optional)
     run_type = ['run_type_example'] # List[str] |  (optional)
     state = ['state_example'] # List[str] |  (optional)
     dag_version = [56] # List[int] |  (optional)
+    bundle_version = 'bundle_version_example' # str |  (optional)
     order_by = ["id"] # List[str] | Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `id, state, dag_id, run_id, logical_date, run_after, start_date, end_date, updated_at, conf, duration, dag_run_id` (optional) (default to ["id"])
-    run_id_pattern = 'run_id_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. (optional)
-    triggering_user_name_pattern = 'triggering_user_name_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. (optional)
+    run_id_pattern = 'run_id_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. (optional)
+    triggering_user_name_pattern = 'triggering_user_name_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. (optional)
+    dag_id_pattern = 'dag_id_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. (optional)
+    partition_key_pattern = 'partition_key_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. (optional)
 
     try:
         # Get Dag Runs
-        api_response = api_instance.get_dag_runs(dag_id, limit=limit, offset=offset, run_after_gte=run_after_gte, run_after_gt=run_after_gt, run_after_lte=run_after_lte, run_after_lt=run_after_lt, logical_date_gte=logical_date_gte, logical_date_gt=logical_date_gt, logical_date_lte=logical_date_lte, logical_date_lt=logical_date_lt, start_date_gte=start_date_gte, start_date_gt=start_date_gt, start_date_lte=start_date_lte, start_date_lt=start_date_lt, end_date_gte=end_date_gte, end_date_gt=end_date_gt, end_date_lte=end_date_lte, end_date_lt=end_date_lt, updated_at_gte=updated_at_gte, updated_at_gt=updated_at_gt, updated_at_lte=updated_at_lte, updated_at_lt=updated_at_lt, run_type=run_type, state=state, dag_version=dag_version, order_by=order_by, run_id_pattern=run_id_pattern, triggering_user_name_pattern=triggering_user_name_pattern)
+        api_response = api_instance.get_dag_runs(dag_id, limit=limit, offset=offset, run_after_gte=run_after_gte, run_after_gt=run_after_gt, run_after_lte=run_after_lte, run_after_lt=run_after_lt, logical_date_gte=logical_date_gte, logical_date_gt=logical_date_gt, logical_date_lte=logical_date_lte, logical_date_lt=logical_date_lt, start_date_gte=start_date_gte, start_date_gt=start_date_gt, start_date_lte=start_date_lte, start_date_lt=start_date_lt, end_date_gte=end_date_gte, end_date_gt=end_date_gt, end_date_lte=end_date_lte, end_date_lt=end_date_lt, duration_gte=duration_gte, duration_gt=duration_gt, duration_lte=duration_lte, duration_lt=duration_lt, updated_at_gte=updated_at_gte, updated_at_gt=updated_at_gt, updated_at_lte=updated_at_lte, updated_at_lt=updated_at_lt, conf_contains=conf_contains, run_type=run_type, state=state, dag_version=dag_version, bundle_version=bundle_version, order_by=order_by, run_id_pattern=run_id_pattern, triggering_user_name_pattern=triggering_user_name_pattern, dag_id_pattern=dag_id_pattern, partition_key_pattern=partition_key_pattern)
         print("The response of DagRunApi->get_dag_runs:\n")
         pprint(api_response)
     except Exception as e:
@@ -380,16 +388,24 @@
  **end_date_gt** | **datetime**|  | [optional] 
  **end_date_lte** | **datetime**|  | [optional] 
  **end_date_lt** | **datetime**|  | [optional] 
+ **duration_gte** | **float**|  | [optional] 
+ **duration_gt** | **float**|  | [optional] 
+ **duration_lte** | **float**|  | [optional] 
+ **duration_lt** | **float**|  | [optional] 
  **updated_at_gte** | **datetime**|  | [optional] 
  **updated_at_gt** | **datetime**|  | [optional] 
  **updated_at_lte** | **datetime**|  | [optional] 
  **updated_at_lt** | **datetime**|  | [optional] 
+ **conf_contains** | **str**|  | [optional] 
  **run_type** | [**List[str]**](str.md)|  | [optional] 
  **state** | [**List[str]**](str.md)|  | [optional] 
  **dag_version** | [**List[int]**](int.md)|  | [optional] 
+ **bundle_version** | **str**|  | [optional] 
  **order_by** | [**List[str]**](str.md)| Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `id, state, dag_id, run_id, logical_date, run_after, start_date, end_date, updated_at, conf, duration, dag_run_id` | [optional] [default to ["id"]]
- **run_id_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. | [optional] 
- **triggering_user_name_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. | [optional] 
+ **run_id_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. | [optional] 
+ **triggering_user_name_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. | [optional] 
+ **dag_id_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. | [optional] 
+ **partition_key_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. | [optional] 
 
 ### Return type
 
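The new `duration_gte` / `duration_gt` / `duration_lte` / `duration_lt` parameters filter DAG runs by wall-clock duration in seconds. A minimal local sketch of the same comparison semantics, applied to already-fetched run summaries (the dict field names are illustrative, not the client's response schema):

```python
def filter_runs_by_duration(runs, duration_gte=None, duration_lt=None):
    """Keep runs whose duration satisfies the given bounds (seconds)."""
    out = []
    for run in runs:
        d = run["duration"]  # end_date - start_date, in seconds
        if duration_gte is not None and d < duration_gte:
            continue
        if duration_lt is not None and d >= duration_lt:
            continue
        out.append(run)
    return out


runs = [
    {"run_id": "a", "duration": 30.0},
    {"run_id": "b", "duration": 120.0},
    {"run_id": "c", "duration": 600.0},
]
# Runs that took at least one minute but under five minutes.
slow = filter_runs_by_duration(runs, duration_gte=60.0, duration_lt=300.0)
```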
diff --git a/docs/DAGRunResponse.md b/docs/DAGRunResponse.md
index ed820f3..00ef7cc 100644
--- a/docs/DAGRunResponse.md
+++ b/docs/DAGRunResponse.md
@@ -19,6 +19,7 @@
 **last_scheduling_decision** | **datetime** |  | [optional] 
 **logical_date** | **datetime** |  | [optional] 
 **note** | **str** |  | [optional] 
+**partition_key** | **str** |  | [optional] 
 **queued_at** | **datetime** |  | [optional] 
 **run_after** | **datetime** |  | 
 **run_type** | [**DagRunType**](DagRunType.md) |  | 
diff --git a/docs/DAGRunsBatchBody.md b/docs/DAGRunsBatchBody.md
index a710504..217af31 100644
--- a/docs/DAGRunsBatchBody.md
+++ b/docs/DAGRunsBatchBody.md
@@ -6,7 +6,12 @@
 
 Name | Type | Description | Notes
 ------------ | ------------- | ------------- | -------------
+**conf_contains** | **str** |  | [optional] 
 **dag_ids** | **List[str]** |  | [optional] 
+**duration_gt** | **float** |  | [optional] 
+**duration_gte** | **float** |  | [optional] 
+**duration_lt** | **float** |  | [optional] 
+**duration_lte** | **float** |  | [optional] 
 **end_date_gt** | **datetime** |  | [optional] 
 **end_date_gte** | **datetime** |  | [optional] 
 **end_date_lt** | **datetime** |  | [optional] 
diff --git a/docs/DagRunAssetReference.md b/docs/DagRunAssetReference.md
index 6f29f40..2b63202 100644
--- a/docs/DagRunAssetReference.md
+++ b/docs/DagRunAssetReference.md
@@ -1,6 +1,6 @@
 # DagRunAssetReference
 
-DAGRun serializer for asset responses.
+DagRun serializer for asset responses.
 
 ## Properties
 
@@ -11,6 +11,7 @@
 **data_interval_start** | **datetime** |  | [optional] 
 **end_date** | **datetime** |  | [optional] 
 **logical_date** | **datetime** |  | [optional] 
+**partition_key** | **str** |  | [optional] 
 **run_id** | **str** |  | 
 **start_date** | **datetime** |  | 
 **state** | **str** |  | 
diff --git a/docs/DagRunType.md b/docs/DagRunType.md
index 4504a72..037d1aa 100644
--- a/docs/DagRunType.md
+++ b/docs/DagRunType.md
@@ -12,6 +12,8 @@
 
 * `ASSET_TRIGGERED` (value: `'asset_triggered'`)
 
+* `ASSET_MATERIALIZATION` (value: `'asset_materialization'`)
+
 [[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
 
 
diff --git a/docs/DagWarningType.md b/docs/DagWarningType.md
index 265f1d0..11ec501 100644
--- a/docs/DagWarningType.md
+++ b/docs/DagWarningType.md
@@ -8,6 +8,8 @@
 
 * `NON_MINUS_EXISTENT_POOL` (value: `'non-existent pool'`)
 
+* `RUNTIME_VARYING_VALUE` (value: `'runtime varying value'`)
+
 [[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
 
 
diff --git a/docs/DryRunBackfillResponse.md b/docs/DryRunBackfillResponse.md
index 54a3901..a33747f 100644
--- a/docs/DryRunBackfillResponse.md
+++ b/docs/DryRunBackfillResponse.md
@@ -6,7 +6,9 @@
 
 Name | Type | Description | Notes
 ------------ | ------------- | ------------- | -------------
-**logical_date** | **datetime** |  | 
+**logical_date** | **datetime** |  | [optional] 
+**partition_date** | **datetime** |  | [optional] 
+**partition_key** | **str** |  | [optional] 
 
 ## Example
 
diff --git a/docs/EntitiesInner.md b/docs/EntitiesInner.md
index 3bee189..c7d9a30 100644
--- a/docs/EntitiesInner.md
+++ b/docs/EntitiesInner.md
@@ -5,6 +5,8 @@
 
 Name | Type | Description | Notes
 ------------ | ------------- | ------------- | -------------
+**dag_id** | **str** |  | [optional] 
+**dag_run_id** | **str** |  | [optional] 
 **include_downstream** | **bool** |  | [optional] [default to False]
 **include_future** | **bool** |  | [optional] [default to False]
 **include_past** | **bool** |  | [optional] [default to False]
diff --git a/docs/EntitiesInner1.md b/docs/EntitiesInner1.md
new file mode 100644
index 0000000..0a11937
--- /dev/null
+++ b/docs/EntitiesInner1.md
@@ -0,0 +1,38 @@
+# EntitiesInner1
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**conn_type** | **str** |  | 
+**connection_id** | **str** |  | 
+**description** | **str** |  | [optional] 
+**extra** | **str** |  | [optional] 
+**host** | **str** |  | [optional] 
+**login** | **str** |  | [optional] 
+**password** | **str** |  | [optional] 
+**port** | **int** |  | [optional] 
+**var_schema** | **str** |  | [optional] 
+**team_name** | **str** |  | [optional] 
+
+## Example
+
+```python
+from airflow_client.client.models.entities_inner1 import EntitiesInner1
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of EntitiesInner1 from a JSON string
+entities_inner1_instance = EntitiesInner1.from_json(json)
+# print the JSON string representation of the object
+print(EntitiesInner1.to_json())
+
+# convert the object into a dict
+entities_inner1_dict = entities_inner1_instance.to_dict()
+# create an instance of EntitiesInner1 from a dict
+entities_inner1_from_dict = EntitiesInner1.from_dict(entities_inner1_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/EntitiesInner2.md b/docs/EntitiesInner2.md
new file mode 100644
index 0000000..e5343b3
--- /dev/null
+++ b/docs/EntitiesInner2.md
@@ -0,0 +1,33 @@
+# EntitiesInner2
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**description** | **str** |  | [optional] 
+**include_deferred** | **bool** |  | [optional] [default to False]
+**name** | **str** |  | 
+**slots** | **int** | Number of slots. Use -1 for unlimited. | 
+**team_name** | **str** |  | [optional] 
+
+## Example
+
+```python
+from airflow_client.client.models.entities_inner2 import EntitiesInner2
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of EntitiesInner2 from a JSON string
+entities_inner2_instance = EntitiesInner2.from_json(json)
+# print the JSON string representation of the object
+print(EntitiesInner2.to_json())
+
+# convert the object into a dict
+entities_inner2_dict = entities_inner2_instance.to_dict()
+# create an instance of EntitiesInner2 from a dict
+entities_inner2_from_dict = EntitiesInner2.from_dict(entities_inner2_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/EntitiesInner3.md b/docs/EntitiesInner3.md
new file mode 100644
index 0000000..5047404
--- /dev/null
+++ b/docs/EntitiesInner3.md
@@ -0,0 +1,32 @@
+# EntitiesInner3
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**description** | **str** |  | [optional] 
+**key** | **str** |  | 
+**team_name** | **str** |  | [optional] 
+**value** | **object** |  | 
+
+## Example
+
+```python
+from airflow_client.client.models.entities_inner3 import EntitiesInner3
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of EntitiesInner3 from a JSON string
+entities_inner3_instance = EntitiesInner3.from_json(json)
+# print the JSON string representation of the object
+print(EntitiesInner3.to_json())
+
+# convert the object into a dict
+entities_inner3_dict = entities_inner3_instance.to_dict()
+# create an instance of EntitiesInner3 from a dict
+entities_inner3_from_dict = EntitiesInner3.from_dict(entities_inner3_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/EventLogApi.md b/docs/EventLogApi.md
index 28d19d2..782e8e6 100644
--- a/docs/EventLogApi.md
+++ b/docs/EventLogApi.md
@@ -145,11 +145,11 @@
     included_events = ['included_events_example'] # List[str] |  (optional)
     before = '2013-10-20T19:20:30+01:00' # datetime |  (optional)
     after = '2013-10-20T19:20:30+01:00' # datetime |  (optional)
-    dag_id_pattern = 'dag_id_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. (optional)
-    task_id_pattern = 'task_id_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. (optional)
-    run_id_pattern = 'run_id_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. (optional)
-    owner_pattern = 'owner_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. (optional)
-    event_pattern = 'event_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. (optional)
+    dag_id_pattern = 'dag_id_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. (optional)
+    task_id_pattern = 'task_id_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. (optional)
+    run_id_pattern = 'run_id_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. (optional)
+    owner_pattern = 'owner_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. (optional)
+    event_pattern = 'event_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. (optional)
 
     try:
         # Get Event Logs
@@ -181,11 +181,11 @@
  **included_events** | [**List[str]**](str.md)|  | [optional] 
  **before** | **datetime**|  | [optional] 
  **after** | **datetime**|  | [optional] 
- **dag_id_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. | [optional] 
- **task_id_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. | [optional] 
- **run_id_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. | [optional] 
- **owner_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. | [optional] 
- **event_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. | [optional] 
+ **dag_id_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. | [optional] 
+ **task_id_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. | [optional] 
+ **run_id_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. | [optional] 
+ **owner_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. | [optional] 
+ **event_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. | [optional] 
 
 ### Return type
 
diff --git a/docs/HITLDetailHistory.md b/docs/HITLDetailHistory.md
new file mode 100644
index 0000000..5119956
--- /dev/null
+++ b/docs/HITLDetailHistory.md
@@ -0,0 +1,43 @@
+# HITLDetailHistory
+
+Schema for Human-in-the-loop detail history.
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**assigned_users** | [**List[HITLUser]**](HITLUser.md) |  | [optional] 
+**body** | **str** |  | [optional] 
+**chosen_options** | **List[str]** |  | [optional] 
+**created_at** | **datetime** |  | 
+**defaults** | **List[str]** |  | [optional] 
+**multiple** | **bool** |  | [optional] [default to False]
+**options** | **List[str]** |  | 
+**params** | **Dict[str, object]** |  | [optional] 
+**params_input** | **Dict[str, object]** |  | [optional] 
+**responded_at** | **datetime** |  | [optional] 
+**responded_by_user** | [**HITLUser**](HITLUser.md) |  | [optional] 
+**response_received** | **bool** |  | [optional] [default to False]
+**subject** | **str** |  | 
+**task_instance** | [**TaskInstanceHistoryResponse**](TaskInstanceHistoryResponse.md) |  | 
+
+## Example
+
+```python
+from airflow_client.client.models.hitl_detail_history import HITLDetailHistory
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of HITLDetailHistory from a JSON string
+hitl_detail_history_instance = HITLDetailHistory.from_json(json)
+# print the JSON string representation of the object
+print(HITLDetailHistory.to_json())
+
+# convert the object into a dict
+hitl_detail_history_dict = hitl_detail_history_instance.to_dict()
+# create an instance of HITLDetailHistory from a dict
+hitl_detail_history_from_dict = HITLDetailHistory.from_dict(hitl_detail_history_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/ImportErrorApi.md b/docs/ImportErrorApi.md
index e08c87b..936d049 100644
--- a/docs/ImportErrorApi.md
+++ b/docs/ImportErrorApi.md
@@ -136,7 +136,7 @@
     limit = 50 # int |  (optional) (default to 50)
     offset = 0 # int |  (optional) (default to 0)
     order_by = ["id"] # List[str] | Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `id, timestamp, filename, bundle_name, stacktrace, import_error_id` (optional) (default to ["id"])
-    filename_pattern = 'filename_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. (optional)
+    filename_pattern = 'filename_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. (optional)
 
     try:
         # Get Import Errors
@@ -157,7 +157,7 @@
  **limit** | **int**|  | [optional] [default to 50]
  **offset** | **int**|  | [optional] [default to 0]
  **order_by** | [**List[str]**](str.md)| Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `id, timestamp, filename, bundle_name, stacktrace, import_error_id` | [optional] [default to ["id"]]
- **filename_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. | [optional] 
+ **filename_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. | [optional] 
 
 ### Return type
 
diff --git a/docs/PoolApi.md b/docs/PoolApi.md
index 641979f..2579552 100644
--- a/docs/PoolApi.md
+++ b/docs/PoolApi.md
@@ -308,7 +308,7 @@
     limit = 50 # int |  (optional) (default to 50)
     offset = 0 # int |  (optional) (default to 0)
     order_by = ["id"] # List[str] | Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `id, pool, name` (optional) (default to ["id"])
-    pool_name_pattern = 'pool_name_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. (optional)
+    pool_name_pattern = 'pool_name_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. (optional)
 
     try:
         # Get Pools
@@ -329,7 +329,7 @@
  **limit** | **int**|  | [optional] [default to 50]
  **offset** | **int**|  | [optional] [default to 0]
  **order_by** | [**List[str]**](str.md)| Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `id, pool, name` | [optional] [default to ["id"]]
- **pool_name_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. | [optional] 
+ **pool_name_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. | [optional] 
 
 ### Return type
 
diff --git a/docs/PoolBody.md b/docs/PoolBody.md
index 987507c..b37cd84 100644
--- a/docs/PoolBody.md
+++ b/docs/PoolBody.md
@@ -10,6 +10,7 @@
 **include_deferred** | **bool** |  | [optional] [default to False]
 **name** | **str** |  | 
 **slots** | **int** | Number of slots. Use -1 for unlimited. | 
+**team_name** | **str** |  | [optional] 
 
 ## Example
 
diff --git a/docs/PoolPatchBody.md b/docs/PoolPatchBody.md
index 4eed2d6..f9b9f60 100644
--- a/docs/PoolPatchBody.md
+++ b/docs/PoolPatchBody.md
@@ -10,6 +10,7 @@
 **include_deferred** | **bool** |  | [optional] 
 **pool** | **str** |  | [optional] 
 **slots** | **int** | Number of slots. Use -1 for unlimited. | [optional] 
+**team_name** | **str** |  | [optional] 
 
 ## Example
 
diff --git a/docs/PoolResponse.md b/docs/PoolResponse.md
index f1c5988..4faa3c6 100644
--- a/docs/PoolResponse.md
+++ b/docs/PoolResponse.md
@@ -16,6 +16,7 @@
 **running_slots** | **int** |  | 
 **scheduled_slots** | **int** |  | 
 **slots** | **int** | Number of slots. Use -1 for unlimited. | 
+**team_name** | **str** |  | [optional] 
 
 ## Example
 
diff --git a/docs/ResponseClearDagRun.md b/docs/ResponseClearDagRun.md
index 7af527e..d51c77c 100644
--- a/docs/ResponseClearDagRun.md
+++ b/docs/ResponseClearDagRun.md
@@ -20,6 +20,7 @@
 **last_scheduling_decision** | **datetime** |  | [optional] 
 **logical_date** | **datetime** |  | [optional] 
 **note** | **str** |  | [optional] 
+**partition_key** | **str** |  | [optional] 
 **queued_at** | **datetime** |  | [optional] 
 **run_after** | **datetime** |  | 
 **run_type** | [**DagRunType**](DagRunType.md) |  | 
diff --git a/docs/TaskInstanceApi.md b/docs/TaskInstanceApi.md
index 89fe95c..1ac25cf 100644
--- a/docs/TaskInstanceApi.md
+++ b/docs/TaskInstanceApi.md
@@ -9,6 +9,7 @@
 [**get_external_log_url**](TaskInstanceApi.md#get_external_log_url) | **GET** /api/v2/dags/{dag_id}/dagRuns/{dag_run_id}/taskInstances/{task_id}/externalLogUrl/{try_number} | Get External Log Url
 [**get_extra_links**](TaskInstanceApi.md#get_extra_links) | **GET** /api/v2/dags/{dag_id}/dagRuns/{dag_run_id}/taskInstances/{task_id}/links | Get Extra Links
 [**get_hitl_detail**](TaskInstanceApi.md#get_hitl_detail) | **GET** /api/v2/dags/{dag_id}/dagRuns/{dag_run_id}/taskInstances/{task_id}/{map_index}/hitlDetails | Get Hitl Detail
+[**get_hitl_detail_try_detail**](TaskInstanceApi.md#get_hitl_detail_try_detail) | **GET** /api/v2/dags/{dag_id}/dagRuns/{dag_run_id}/taskInstances/{task_id}/{map_index}/hitlDetails/tries/{try_number} | Get Hitl Detail Try Detail
 [**get_hitl_details**](TaskInstanceApi.md#get_hitl_details) | **GET** /api/v2/dags/{dag_id}/dagRuns/{dag_run_id}/hitlDetails | Get Hitl Details
 [**get_log**](TaskInstanceApi.md#get_log) | **GET** /api/v2/dags/{dag_id}/dagRuns/{dag_run_id}/taskInstances/{task_id}/logs/{try_number} | Get Log
 [**get_mapped_task_instance**](TaskInstanceApi.md#get_mapped_task_instance) | **GET** /api/v2/dags/{dag_id}/dagRuns/{dag_run_id}/taskInstances/{task_id}/{map_index} | Get Mapped Task Instance
@@ -485,6 +486,99 @@
 
 [[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
 
+# **get_hitl_detail_try_detail**
+> HITLDetailHistory get_hitl_detail_try_detail(dag_id, dag_run_id, task_id, map_index, try_number)
+
+Get Hitl Detail Try Detail
+
+Get the Human-in-the-loop detail for a specific try of a task instance.
+
+### Example
+
+* OAuth Authentication (OAuth2PasswordBearer):
+* Bearer Authentication (HTTPBearer):
+
+```python
+import os
+import airflow_client.client
+from airflow_client.client.models.hitl_detail_history import HITLDetailHistory
+from airflow_client.client.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to http://localhost
+# See configuration.py for a list of all supported configuration parameters.
+configuration = airflow_client.client.Configuration(
+    host = "http://localhost"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below, use the example that
+# satisfies your auth use case.
+
+configuration.access_token = os.environ["ACCESS_TOKEN"]
+
+# Configure Bearer authorization: HTTPBearer
+configuration = airflow_client.client.Configuration(
+    access_token = os.environ["BEARER_TOKEN"]
+)
+
+# Enter a context with an instance of the API client
+with airflow_client.client.ApiClient(configuration) as api_client:
+    # Create an instance of the API class
+    api_instance = airflow_client.client.TaskInstanceApi(api_client)
+    dag_id = 'dag_id_example' # str | 
+    dag_run_id = 'dag_run_id_example' # str | 
+    task_id = 'task_id_example' # str | 
+    map_index = 56 # int | 
+    try_number = 56 # int | 
+
+    try:
+        # Get Hitl Detail Try Detail
+        api_response = api_instance.get_hitl_detail_try_detail(dag_id, dag_run_id, task_id, map_index, try_number)
+        print("The response of TaskInstanceApi->get_hitl_detail_try_detail:\n")
+        pprint(api_response)
+    except Exception as e:
+        print("Exception when calling TaskInstanceApi->get_hitl_detail_try_detail: %s\n" % e)
+```
+
+
+
+### Parameters
+
+
+Name | Type | Description  | Notes
+------------- | ------------- | ------------- | -------------
+ **dag_id** | **str**|  | 
+ **dag_run_id** | **str**|  | 
+ **task_id** | **str**|  | 
+ **map_index** | **int**|  | 
+ **try_number** | **int**|  | 
+
+### Return type
+
+[**HITLDetailHistory**](HITLDetailHistory.md)
+
+### Authorization
+
+[OAuth2PasswordBearer](../README.md#OAuth2PasswordBearer), [HTTPBearer](../README.md#HTTPBearer)
+
+### HTTP request headers
+
+ - **Content-Type**: Not defined
+ - **Accept**: application/json
+
+### HTTP response details
+
+| Status code | Description | Response headers |
+|-------------|-------------|------------------|
+**200** | Successful Response |  -  |
+**401** | Unauthorized |  -  |
+**403** | Forbidden |  -  |
+**404** | Not Found |  -  |
+**422** | Validation Error |  -  |
+
+[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
+
 # **get_hitl_details**
 > HITLDetailCollection get_hitl_details(dag_id, dag_run_id, limit=limit, offset=offset, order_by=order_by, dag_id_pattern=dag_id_pattern, task_id=task_id, task_id_pattern=task_id_pattern, map_index=map_index, state=state, response_received=response_received, responded_by_user_id=responded_by_user_id, responded_by_user_name=responded_by_user_name, subject_search=subject_search, body_search=body_search, created_at_gte=created_at_gte, created_at_gt=created_at_gt, created_at_lte=created_at_lte, created_at_lt=created_at_lt)
 
@@ -529,17 +623,17 @@
     dag_run_id = 'dag_run_id_example' # str | 
     limit = 50 # int |  (optional) (default to 50)
     offset = 0 # int |  (optional) (default to 0)
-    order_by = ["ti_id"] # List[str] | Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `ti_id, subject, responded_at, created_at, responded_by_user_id, responded_by_user_name, dag_id, run_id, run_after, rendered_map_index, task_instance_operator, task_instance_state` (optional) (default to ["ti_id"])
-    dag_id_pattern = 'dag_id_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. (optional)
+    order_by = ["ti_id"] # List[str] | Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `ti_id, subject, responded_at, created_at, responded_by_user_id, responded_by_user_name, dag_id, run_id, task_display_name, run_after, rendered_map_index, task_instance_operator, task_instance_state` (optional) (default to ["ti_id"])
+    dag_id_pattern = 'dag_id_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. (optional)
     task_id = 'task_id_example' # str |  (optional)
-    task_id_pattern = 'task_id_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. (optional)
+    task_id_pattern = 'task_id_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. (optional)
     map_index = 56 # int |  (optional)
     state = ['state_example'] # List[str] |  (optional)
     response_received = True # bool |  (optional)
     responded_by_user_id = ['responded_by_user_id_example'] # List[str] |  (optional)
     responded_by_user_name = ['responded_by_user_name_example'] # List[str] |  (optional)
-    subject_search = 'subject_search_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. (optional)
-    body_search = 'body_search_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. (optional)
+    subject_search = 'subject_search_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. (optional)
+    body_search = 'body_search_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. (optional)
     created_at_gte = '2013-10-20T19:20:30+01:00' # datetime |  (optional)
     created_at_gt = '2013-10-20T19:20:30+01:00' # datetime |  (optional)
     created_at_lte = '2013-10-20T19:20:30+01:00' # datetime |  (optional)
@@ -565,17 +659,17 @@
  **dag_run_id** | **str**|  | 
  **limit** | **int**|  | [optional] [default to 50]
  **offset** | **int**|  | [optional] [default to 0]
- **order_by** | [**List[str]**](str.md)| Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `ti_id, subject, responded_at, created_at, responded_by_user_id, responded_by_user_name, dag_id, run_id, run_after, rendered_map_index, task_instance_operator, task_instance_state` | [optional] [default to ["ti_id"]]
- **dag_id_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. | [optional] 
+ **order_by** | [**List[str]**](str.md)| Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `ti_id, subject, responded_at, created_at, responded_by_user_id, responded_by_user_name, dag_id, run_id, task_display_name, run_after, rendered_map_index, task_instance_operator, task_instance_state` | [optional] [default to ["ti_id"]]
+ **dag_id_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. | [optional] 
  **task_id** | **str**|  | [optional] 
- **task_id_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. | [optional] 
+ **task_id_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. | [optional] 
  **map_index** | **int**|  | [optional] 
  **state** | [**List[str]**](str.md)|  | [optional] 
  **response_received** | **bool**|  | [optional] 
  **responded_by_user_id** | [**List[str]**](str.md)|  | [optional] 
  **responded_by_user_name** | [**List[str]**](str.md)|  | [optional] 
- **subject_search** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. | [optional] 
- **body_search** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. | [optional] 
+ **subject_search** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. | [optional] 
+ **body_search** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. | [optional] 
  **created_at_gte** | **datetime**|  | [optional] 
  **created_at_gt** | **datetime**|  | [optional] 
  **created_at_lte** | **datetime**|  | [optional] 
@@ -976,7 +1070,7 @@
 [[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
 
 # **get_mapped_task_instances**
-> TaskInstanceCollectionResponse get_mapped_task_instances(dag_id, dag_run_id, task_id, run_after_gte=run_after_gte, run_after_gt=run_after_gt, run_after_lte=run_after_lte, run_after_lt=run_after_lt, logical_date_gte=logical_date_gte, logical_date_gt=logical_date_gt, logical_date_lte=logical_date_lte, logical_date_lt=logical_date_lt, start_date_gte=start_date_gte, start_date_gt=start_date_gt, start_date_lte=start_date_lte, start_date_lt=start_date_lt, end_date_gte=end_date_gte, end_date_gt=end_date_gt, end_date_lte=end_date_lte, end_date_lt=end_date_lt, updated_at_gte=updated_at_gte, updated_at_gt=updated_at_gt, updated_at_lte=updated_at_lte, updated_at_lt=updated_at_lt, duration_gte=duration_gte, duration_gt=duration_gt, duration_lte=duration_lte, duration_lt=duration_lt, state=state, pool=pool, queue=queue, executor=executor, version_number=version_number, try_number=try_number, operator=operator, map_index=map_index, limit=limit, offset=offset, order_by=order_by)
+> TaskInstanceCollectionResponse get_mapped_task_instances(dag_id, dag_run_id, task_id, run_after_gte=run_after_gte, run_after_gt=run_after_gt, run_after_lte=run_after_lte, run_after_lt=run_after_lt, logical_date_gte=logical_date_gte, logical_date_gt=logical_date_gt, logical_date_lte=logical_date_lte, logical_date_lt=logical_date_lt, start_date_gte=start_date_gte, start_date_gt=start_date_gt, start_date_lte=start_date_lte, start_date_lt=start_date_lt, end_date_gte=end_date_gte, end_date_gt=end_date_gt, end_date_lte=end_date_lte, end_date_lt=end_date_lt, updated_at_gte=updated_at_gte, updated_at_gt=updated_at_gt, updated_at_lte=updated_at_lte, updated_at_lt=updated_at_lt, duration_gte=duration_gte, duration_gt=duration_gt, duration_lte=duration_lte, duration_lt=duration_lt, state=state, pool=pool, pool_name_pattern=pool_name_pattern, queue=queue, queue_name_pattern=queue_name_pattern, executor=executor, version_number=version_number, try_number=try_number, operator=operator, operator_name_pattern=operator_name_pattern, map_index=map_index, limit=limit, offset=offset, order_by=order_by)
 
 Get Mapped Task Instances
 
@@ -1044,11 +1138,14 @@
     duration_lt = 3.4 # float |  (optional)
     state = ['state_example'] # List[str] |  (optional)
     pool = ['pool_example'] # List[str] |  (optional)
+    pool_name_pattern = 'pool_name_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. (optional)
     queue = ['queue_example'] # List[str] |  (optional)
+    queue_name_pattern = 'queue_name_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. (optional)
     executor = ['executor_example'] # List[str] |  (optional)
     version_number = [56] # List[int] |  (optional)
     try_number = [56] # List[int] |  (optional)
     operator = ['operator_example'] # List[str] |  (optional)
+    operator_name_pattern = 'operator_name_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. (optional)
     map_index = [56] # List[int] |  (optional)
     limit = 50 # int |  (optional) (default to 50)
     offset = 0 # int |  (optional) (default to 0)
@@ -1056,7 +1153,7 @@
 
     try:
         # Get Mapped Task Instances
-        api_response = api_instance.get_mapped_task_instances(dag_id, dag_run_id, task_id, run_after_gte=run_after_gte, run_after_gt=run_after_gt, run_after_lte=run_after_lte, run_after_lt=run_after_lt, logical_date_gte=logical_date_gte, logical_date_gt=logical_date_gt, logical_date_lte=logical_date_lte, logical_date_lt=logical_date_lt, start_date_gte=start_date_gte, start_date_gt=start_date_gt, start_date_lte=start_date_lte, start_date_lt=start_date_lt, end_date_gte=end_date_gte, end_date_gt=end_date_gt, end_date_lte=end_date_lte, end_date_lt=end_date_lt, updated_at_gte=updated_at_gte, updated_at_gt=updated_at_gt, updated_at_lte=updated_at_lte, updated_at_lt=updated_at_lt, duration_gte=duration_gte, duration_gt=duration_gt, duration_lte=duration_lte, duration_lt=duration_lt, state=state, pool=pool, queue=queue, executor=executor, version_number=version_number, try_number=try_number, operator=operator, map_index=map_index, limit=limit, offset=offset, order_by=order_by)
+        api_response = api_instance.get_mapped_task_instances(dag_id, dag_run_id, task_id, run_after_gte=run_after_gte, run_after_gt=run_after_gt, run_after_lte=run_after_lte, run_after_lt=run_after_lt, logical_date_gte=logical_date_gte, logical_date_gt=logical_date_gt, logical_date_lte=logical_date_lte, logical_date_lt=logical_date_lt, start_date_gte=start_date_gte, start_date_gt=start_date_gt, start_date_lte=start_date_lte, start_date_lt=start_date_lt, end_date_gte=end_date_gte, end_date_gt=end_date_gt, end_date_lte=end_date_lte, end_date_lt=end_date_lt, updated_at_gte=updated_at_gte, updated_at_gt=updated_at_gt, updated_at_lte=updated_at_lte, updated_at_lt=updated_at_lt, duration_gte=duration_gte, duration_gt=duration_gt, duration_lte=duration_lte, duration_lt=duration_lt, state=state, pool=pool, pool_name_pattern=pool_name_pattern, queue=queue, queue_name_pattern=queue_name_pattern, executor=executor, version_number=version_number, try_number=try_number, operator=operator, operator_name_pattern=operator_name_pattern, map_index=map_index, limit=limit, offset=offset, order_by=order_by)
         print("The response of TaskInstanceApi->get_mapped_task_instances:\n")
         pprint(api_response)
     except Exception as e:
@@ -1099,11 +1196,14 @@
  **duration_lt** | **float**|  | [optional] 
  **state** | [**List[str]**](str.md)|  | [optional] 
  **pool** | [**List[str]**](str.md)|  | [optional] 
+ **pool_name_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. | [optional] 
  **queue** | [**List[str]**](str.md)|  | [optional] 
+ **queue_name_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. | [optional] 
  **executor** | [**List[str]**](str.md)|  | [optional] 
  **version_number** | [**List[int]**](int.md)|  | [optional] 
  **try_number** | [**List[int]**](int.md)|  | [optional] 
  **operator** | [**List[str]**](str.md)|  | [optional] 
+ **operator_name_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. | [optional] 
  **map_index** | [**List[int]**](int.md)|  | [optional] 
  **limit** | **int**|  | [optional] [default to 50]
  **offset** | **int**|  | [optional] [default to 0]
@@ -1590,7 +1690,7 @@
 [[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
 
 # **get_task_instances**
-> TaskInstanceCollectionResponse get_task_instances(dag_id, dag_run_id, task_id=task_id, run_after_gte=run_after_gte, run_after_gt=run_after_gt, run_after_lte=run_after_lte, run_after_lt=run_after_lt, logical_date_gte=logical_date_gte, logical_date_gt=logical_date_gt, logical_date_lte=logical_date_lte, logical_date_lt=logical_date_lt, start_date_gte=start_date_gte, start_date_gt=start_date_gt, start_date_lte=start_date_lte, start_date_lt=start_date_lt, end_date_gte=end_date_gte, end_date_gt=end_date_gt, end_date_lte=end_date_lte, end_date_lt=end_date_lt, updated_at_gte=updated_at_gte, updated_at_gt=updated_at_gt, updated_at_lte=updated_at_lte, updated_at_lt=updated_at_lt, duration_gte=duration_gte, duration_gt=duration_gt, duration_lte=duration_lte, duration_lt=duration_lt, task_display_name_pattern=task_display_name_pattern, task_group_id=task_group_id, state=state, pool=pool, queue=queue, executor=executor, version_number=version_number, try_number=try_number, operator=operator, map_index=map_index, limit=limit, offset=offset, order_by=order_by)
+> TaskInstanceCollectionResponse get_task_instances(dag_id, dag_run_id, task_id=task_id, run_after_gte=run_after_gte, run_after_gt=run_after_gt, run_after_lte=run_after_lte, run_after_lt=run_after_lt, logical_date_gte=logical_date_gte, logical_date_gt=logical_date_gt, logical_date_lte=logical_date_lte, logical_date_lt=logical_date_lt, start_date_gte=start_date_gte, start_date_gt=start_date_gt, start_date_lte=start_date_lte, start_date_lt=start_date_lt, end_date_gte=end_date_gte, end_date_gt=end_date_gt, end_date_lte=end_date_lte, end_date_lt=end_date_lt, updated_at_gte=updated_at_gte, updated_at_gt=updated_at_gt, updated_at_lte=updated_at_lte, updated_at_lt=updated_at_lt, duration_gte=duration_gte, duration_gt=duration_gt, duration_lte=duration_lte, duration_lt=duration_lt, task_display_name_pattern=task_display_name_pattern, task_group_id=task_group_id, dag_id_pattern=dag_id_pattern, run_id_pattern=run_id_pattern, state=state, pool=pool, pool_name_pattern=pool_name_pattern, queue=queue, queue_name_pattern=queue_name_pattern, executor=executor, version_number=version_number, try_number=try_number, operator=operator, operator_name_pattern=operator_name_pattern, map_index=map_index, limit=limit, offset=offset, order_by=order_by)
 
 Get Task Instances
 
@@ -1659,15 +1759,20 @@
     duration_gt = 3.4 # float |  (optional)
     duration_lte = 3.4 # float |  (optional)
     duration_lt = 3.4 # float |  (optional)
-    task_display_name_pattern = 'task_display_name_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. (optional)
+    task_display_name_pattern = 'task_display_name_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. (optional)
     task_group_id = 'task_group_id_example' # str | Filter by exact task group ID. Returns all tasks within the specified task group. (optional)
+    dag_id_pattern = 'dag_id_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. (optional)
+    run_id_pattern = 'run_id_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. (optional)
     state = ['state_example'] # List[str] |  (optional)
     pool = ['pool_example'] # List[str] |  (optional)
+    pool_name_pattern = 'pool_name_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. (optional)
     queue = ['queue_example'] # List[str] |  (optional)
+    queue_name_pattern = 'queue_name_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. (optional)
     executor = ['executor_example'] # List[str] |  (optional)
     version_number = [56] # List[int] |  (optional)
     try_number = [56] # List[int] |  (optional)
     operator = ['operator_example'] # List[str] |  (optional)
+    operator_name_pattern = 'operator_name_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. (optional)
     map_index = [56] # List[int] |  (optional)
     limit = 50 # int |  (optional) (default to 50)
     offset = 0 # int |  (optional) (default to 0)
@@ -1675,7 +1780,7 @@
 
     try:
         # Get Task Instances
-        api_response = api_instance.get_task_instances(dag_id, dag_run_id, task_id=task_id, run_after_gte=run_after_gte, run_after_gt=run_after_gt, run_after_lte=run_after_lte, run_after_lt=run_after_lt, logical_date_gte=logical_date_gte, logical_date_gt=logical_date_gt, logical_date_lte=logical_date_lte, logical_date_lt=logical_date_lt, start_date_gte=start_date_gte, start_date_gt=start_date_gt, start_date_lte=start_date_lte, start_date_lt=start_date_lt, end_date_gte=end_date_gte, end_date_gt=end_date_gt, end_date_lte=end_date_lte, end_date_lt=end_date_lt, updated_at_gte=updated_at_gte, updated_at_gt=updated_at_gt, updated_at_lte=updated_at_lte, updated_at_lt=updated_at_lt, duration_gte=duration_gte, duration_gt=duration_gt, duration_lte=duration_lte, duration_lt=duration_lt, task_display_name_pattern=task_display_name_pattern, task_group_id=task_group_id, state=state, pool=pool, queue=queue, executor=executor, version_number=version_number, try_number=try_number, operator=operator, map_index=map_index, limit=limit, offset=offset, order_by=order_by)
+        api_response = api_instance.get_task_instances(dag_id, dag_run_id, task_id=task_id, run_after_gte=run_after_gte, run_after_gt=run_after_gt, run_after_lte=run_after_lte, run_after_lt=run_after_lt, logical_date_gte=logical_date_gte, logical_date_gt=logical_date_gt, logical_date_lte=logical_date_lte, logical_date_lt=logical_date_lt, start_date_gte=start_date_gte, start_date_gt=start_date_gt, start_date_lte=start_date_lte, start_date_lt=start_date_lt, end_date_gte=end_date_gte, end_date_gt=end_date_gt, end_date_lte=end_date_lte, end_date_lt=end_date_lt, updated_at_gte=updated_at_gte, updated_at_gt=updated_at_gt, updated_at_lte=updated_at_lte, updated_at_lt=updated_at_lt, duration_gte=duration_gte, duration_gt=duration_gt, duration_lte=duration_lte, duration_lt=duration_lt, task_display_name_pattern=task_display_name_pattern, task_group_id=task_group_id, dag_id_pattern=dag_id_pattern, run_id_pattern=run_id_pattern, state=state, pool=pool, pool_name_pattern=pool_name_pattern, queue=queue, queue_name_pattern=queue_name_pattern, executor=executor, version_number=version_number, try_number=try_number, operator=operator, operator_name_pattern=operator_name_pattern, map_index=map_index, limit=limit, offset=offset, order_by=order_by)
         print("The response of TaskInstanceApi->get_task_instances:\n")
         pprint(api_response)
     except Exception as e:
@@ -1716,15 +1821,20 @@
  **duration_gt** | **float**|  | [optional] 
  **duration_lte** | **float**|  | [optional] 
  **duration_lt** | **float**|  | [optional] 
- **task_display_name_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. | [optional] 
+ **task_display_name_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. | [optional] 
  **task_group_id** | **str**| Filter by exact task group ID. Returns all tasks within the specified task group. | [optional] 
+ **dag_id_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. | [optional] 
+ **run_id_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. | [optional] 
  **state** | [**List[str]**](str.md)|  | [optional] 
  **pool** | [**List[str]**](str.md)|  | [optional] 
+ **pool_name_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. | [optional] 
  **queue** | [**List[str]**](str.md)|  | [optional] 
+ **queue_name_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. | [optional] 
  **executor** | [**List[str]**](str.md)|  | [optional] 
  **version_number** | [**List[int]**](int.md)|  | [optional] 
  **try_number** | [**List[int]**](int.md)|  | [optional] 
  **operator** | [**List[str]**](str.md)|  | [optional] 
+ **operator_name_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. | [optional] 
  **map_index** | [**List[int]**](int.md)|  | [optional] 
  **limit** | **int**|  | [optional] [default to 50]
  **offset** | **int**|  | [optional] [default to 0]
@@ -1748,6 +1858,7 @@
 | Status code | Description | Response headers |
 |-------------|-------------|------------------|
 **200** | Successful Response |  -  |
+**400** | Bad Request |  -  |
 **401** | Unauthorized |  -  |
 **403** | Forbidden |  -  |
 **404** | Not Found |  -  |
@@ -2319,6 +2430,7 @@
 **401** | Unauthorized |  -  |
 **403** | Forbidden |  -  |
 **404** | Not Found |  -  |
+**409** | Conflict |  -  |
 **422** | Validation Error |  -  |
 
 [[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
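
The `*_pattern` parameters above all share the same matching rules. As a local illustration only (this helper is not part of the generated client, and the actual filtering happens server-side), the described semantics can be sketched in plain Python; stripping whitespace around the pipe-separated alternatives is an assumption based on the `dag1 | dag2` example:

```python
import re


def like_to_regex(pattern: str) -> str:
    """Translate one SQL LIKE pattern (`%`, `_` wildcards) into a regex body."""
    parts = []
    for ch in pattern:
        if ch == "%":
            parts.append(".*")  # % matches any run of characters, including none
        elif ch == "_":
            parts.append(".")   # _ matches exactly one character
        else:
            parts.append(re.escape(ch))
    return "".join(parts)


def matches(pattern: str, value: str) -> bool:
    """Apply a *_pattern filter: pipe-separated LIKE alternatives, OR-combined."""
    alternatives = (alt.strip() for alt in pattern.split("|"))
    return any(re.fullmatch(like_to_regex(alt), value) for alt in alternatives)
```

For example, `matches("%customer_%", "big_customer_1")` is true because `%` absorbs the prefix and suffix while `_` consumes one character, and `matches("dag1 | dag2", "dag2")` is true via the second OR alternative.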
diff --git a/docs/TaskInstanceResponse.md b/docs/TaskInstanceResponse.md
index 8d2c6d1..0f95525 100644
--- a/docs/TaskInstanceResponse.md
+++ b/docs/TaskInstanceResponse.md
@@ -15,7 +15,7 @@
 **executor** | **str** |  | [optional] 
 **executor_config** | **str** |  | 
 **hostname** | **str** |  | [optional] 
-**id** | **str** |  | 
+**id** | **UUID** |  | 
 **logical_date** | **datetime** |  | [optional] 
 **map_index** | **int** |  | 
 **max_tries** | **int** |  | 
diff --git a/docs/TaskResponse.md b/docs/TaskResponse.md
index 6489d8d..180f7b3 100644
--- a/docs/TaskResponse.md
+++ b/docs/TaskResponse.md
@@ -12,7 +12,7 @@
 **downstream_task_ids** | **List[str]** |  | [optional] 
 **end_date** | **datetime** |  | [optional] 
 **execution_timeout** | [**TimeDelta**](TimeDelta.md) |  | [optional] 
-**extra_links** | **List[str]** | Extract and return extra_links. | 
+**extra_links** | **List[str]** | Extract and return extra_links. | [readonly] 
 **is_mapped** | **bool** |  | [optional] 
 **operator_name** | **str** |  | [optional] 
 **owner** | **str** |  | [optional] 
@@ -23,7 +23,7 @@
 **queue** | **str** |  | [optional] 
 **retries** | **float** |  | [optional] 
 **retry_delay** | [**TimeDelta**](TimeDelta.md) |  | [optional] 
-**retry_exponential_backoff** | **bool** |  | 
+**retry_exponential_backoff** | **float** |  | 
 **start_date** | **datetime** |  | [optional] 
 **task_display_name** | **str** |  | [optional] 
 **task_id** | **str** |  | [optional] 
diff --git a/docs/TriggerDAGRunPostBody.md b/docs/TriggerDAGRunPostBody.md
index a119bb6..1ff157e 100644
--- a/docs/TriggerDAGRunPostBody.md
+++ b/docs/TriggerDAGRunPostBody.md
@@ -12,6 +12,7 @@
 **data_interval_start** | **datetime** |  | [optional] 
 **logical_date** | **datetime** |  | [optional] 
 **note** | **str** |  | [optional] 
+**partition_key** | **str** |  | [optional] 
 **run_after** | **datetime** |  | [optional] 
 
 ## Example
diff --git a/docs/TriggerResponse.md b/docs/TriggerResponse.md
index aafff3c..fa39836 100644
--- a/docs/TriggerResponse.md
+++ b/docs/TriggerResponse.md
@@ -10,6 +10,7 @@
 **created_date** | **datetime** |  | 
 **id** | **int** |  | 
 **kwargs** | **str** |  | 
+**queue** | **str** |  | [optional] 
 **triggerer_id** | **int** |  | [optional] 
 
 ## Example
diff --git a/docs/ValidationError.md b/docs/ValidationError.md
index 9edabcf..b258f76 100644
--- a/docs/ValidationError.md
+++ b/docs/ValidationError.md
@@ -5,6 +5,8 @@
 
 Name | Type | Description | Notes
 ------------ | ------------- | ------------- | -------------
+**ctx** | **object** |  | [optional] 
+**input** | **object** |  | [optional] 
 **loc** | [**List[LocationInner]**](LocationInner.md) |  | 
 **msg** | **str** |  | 
 **type** | **str** |  | 
diff --git a/docs/VariableApi.md b/docs/VariableApi.md
index fd74b14..6354be5 100644
--- a/docs/VariableApi.md
+++ b/docs/VariableApi.md
@@ -306,8 +306,8 @@
     api_instance = airflow_client.client.VariableApi(api_client)
     limit = 50 # int |  (optional) (default to 50)
     offset = 0 # int |  (optional) (default to 0)
-    order_by = ["id"] # List[str] | Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `key, id, _val, description, is_encrypted` (optional) (default to ["id"])
-    variable_key_pattern = 'variable_key_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. (optional)
+    order_by = ["id"] # List[str] | Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `key, id, _val, description, is_encrypted, team_name` (optional) (default to ["id"])
+    variable_key_pattern = 'variable_key_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. (optional)
 
     try:
         # Get Variables
@@ -327,8 +327,8 @@
 ------------- | ------------- | ------------- | -------------
  **limit** | **int**|  | [optional] [default to 50]
  **offset** | **int**|  | [optional] [default to 0]
- **order_by** | [**List[str]**](str.md)| Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `key, id, _val, description, is_encrypted` | [optional] [default to ["id"]]
- **variable_key_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. | [optional] 
+ **order_by** | [**List[str]**](str.md)| Attributes to order by, multi criteria sort is supported. Prefix with `-` for descending order. Supported attributes: `key, id, _val, description, is_encrypted, team_name` | [optional] [default to ["id"]]
+ **variable_key_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. | [optional] 
 
 ### Return type
 
diff --git a/docs/VariableBody.md b/docs/VariableBody.md
index db11af4..7f3a608 100644
--- a/docs/VariableBody.md
+++ b/docs/VariableBody.md
@@ -8,6 +8,7 @@
 ------------ | ------------- | ------------- | -------------
 **description** | **str** |  | [optional] 
 **key** | **str** |  | 
+**team_name** | **str** |  | [optional] 
 **value** | **object** |  | 
 
 ## Example
diff --git a/docs/VariableResponse.md b/docs/VariableResponse.md
index 0c319cc..be92e70 100644
--- a/docs/VariableResponse.md
+++ b/docs/VariableResponse.md
@@ -9,6 +9,7 @@
 **description** | **str** |  | [optional] 
 **is_encrypted** | **bool** |  | 
 **key** | **str** |  | 
+**team_name** | **str** |  | [optional] 
 **value** | **str** |  | 
 
 ## Example
diff --git a/docs/XComApi.md b/docs/XComApi.md
index 0a59e77..52154df 100644
--- a/docs/XComApi.md
+++ b/docs/XComApi.md
@@ -5,6 +5,7 @@
 Method | HTTP request | Description
 ------------- | ------------- | -------------
 [**create_xcom_entry**](XComApi.md#create_xcom_entry) | **POST** /api/v2/dags/{dag_id}/dagRuns/{dag_run_id}/taskInstances/{task_id}/xcomEntries | Create Xcom Entry
+[**delete_xcom_entry**](XComApi.md#delete_xcom_entry) | **DELETE** /api/v2/dags/{dag_id}/dagRuns/{dag_run_id}/taskInstances/{task_id}/xcomEntries/{xcom_key} | Delete Xcom Entry
 [**get_xcom_entries**](XComApi.md#get_xcom_entries) | **GET** /api/v2/dags/{dag_id}/dagRuns/{dag_run_id}/taskInstances/{task_id}/xcomEntries | Get Xcom Entries
 [**get_xcom_entry**](XComApi.md#get_xcom_entry) | **GET** /api/v2/dags/{dag_id}/dagRuns/{dag_run_id}/taskInstances/{task_id}/xcomEntries/{xcom_key} | Get Xcom Entry
 [**update_xcom_entry**](XComApi.md#update_xcom_entry) | **PATCH** /api/v2/dags/{dag_id}/dagRuns/{dag_run_id}/taskInstances/{task_id}/xcomEntries/{xcom_key} | Update Xcom Entry
@@ -103,6 +104,97 @@
 
 [[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
 
+# **delete_xcom_entry**
+> delete_xcom_entry(dag_id, task_id, dag_run_id, xcom_key, map_index=map_index)
+
+Delete Xcom Entry
+
+Delete an XCom entry.
+
+### Example
+
+* OAuth Authentication (OAuth2PasswordBearer):
+* Bearer Authentication (HTTPBearer):
+
+```python
+import os
+
+import airflow_client.client
+from airflow_client.client.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to http://localhost
+# See configuration.py for a list of all supported configuration parameters.
+configuration = airflow_client.client.Configuration(
+    host = "http://localhost"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below, use the example that
+# satisfies your auth use case.
+
+configuration.access_token = os.environ["ACCESS_TOKEN"]
+
+# Configure Bearer authorization: HTTPBearer
+configuration = airflow_client.client.Configuration(
+    access_token = os.environ["BEARER_TOKEN"]
+)
+
+# Enter a context with an instance of the API client
+with airflow_client.client.ApiClient(configuration) as api_client:
+    # Create an instance of the API class
+    api_instance = airflow_client.client.XComApi(api_client)
+    dag_id = 'dag_id_example' # str | 
+    task_id = 'task_id_example' # str | 
+    dag_run_id = 'dag_run_id_example' # str | 
+    xcom_key = 'xcom_key_example' # str | 
+    map_index = -1 # int |  (optional) (default to -1)
+
+    try:
+        # Delete Xcom Entry
+        api_instance.delete_xcom_entry(dag_id, task_id, dag_run_id, xcom_key, map_index=map_index)
+    except Exception as e:
+        print("Exception when calling XComApi->delete_xcom_entry: %s\n" % e)
+```
+
+
+
+### Parameters
+
+
+Name | Type | Description  | Notes
+------------- | ------------- | ------------- | -------------
+ **dag_id** | **str**|  | 
+ **task_id** | **str**|  | 
+ **dag_run_id** | **str**|  | 
+ **xcom_key** | **str**|  | 
+ **map_index** | **int**|  | [optional] [default to -1]
+
+### Return type
+
+void (empty response body)
+
+### Authorization
+
+[OAuth2PasswordBearer](../README.md#OAuth2PasswordBearer), [HTTPBearer](../README.md#HTTPBearer)
+
+### HTTP request headers
+
+ - **Content-Type**: Not defined
+ - **Accept**: application/json
+
+### HTTP response details
+
+| Status code | Description | Response headers |
+|-------------|-------------|------------------|
+**204** | Successful Response |  -  |
+**400** | Bad Request |  -  |
+**401** | Unauthorized |  -  |
+**403** | Forbidden |  -  |
+**404** | Not Found |  -  |
+**422** | Validation Error |  -  |
+
+[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
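+
+To make the routing concrete, the sketch below rebuilds the DELETE path from the HTTP request row above. It is illustrative only: the generated client assembles and encodes the URL itself, and sending `map_index` as a query parameter is an assumption based on its `[optional]` default.
+
+```python
+from urllib.parse import quote, urlencode
+
+
+def xcom_delete_path(dag_id, dag_run_id, task_id, xcom_key, map_index=-1):
+    """Build the v2 REST path targeted by delete_xcom_entry (see table above)."""
+    path = (
+        f"/api/v2/dags/{quote(dag_id, safe='')}"
+        f"/dagRuns/{quote(dag_run_id, safe='')}"
+        f"/taskInstances/{quote(task_id, safe='')}"
+        f"/xcomEntries/{quote(xcom_key, safe='')}"
+    )
+    # map_index defaults to -1 (unmapped task); pass a non-negative value
+    # to delete the XCom entry of one specific mapped task instance.
+    return f"{path}?{urlencode({'map_index': map_index})}"
+```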
+
 # **get_xcom_entries**
 > XComCollectionResponse get_xcom_entries(dag_id, dag_run_id, task_id, xcom_key=xcom_key, map_index=map_index, limit=limit, offset=offset, xcom_key_pattern=xcom_key_pattern, dag_display_name_pattern=dag_display_name_pattern, run_id_pattern=run_id_pattern, task_id_pattern=task_id_pattern, map_index_filter=map_index_filter, logical_date_gte=logical_date_gte, logical_date_gt=logical_date_gt, logical_date_lte=logical_date_lte, logical_date_lt=logical_date_lt, run_after_gte=run_after_gte, run_after_gt=run_after_gt, run_after_lte=run_after_lte, run_after_lt=run_after_lt)
 
@@ -152,10 +244,10 @@
     map_index = 56 # int |  (optional)
     limit = 50 # int |  (optional) (default to 50)
     offset = 0 # int |  (optional) (default to 0)
-    xcom_key_pattern = 'xcom_key_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. (optional)
-    dag_display_name_pattern = 'dag_display_name_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. (optional)
-    run_id_pattern = 'run_id_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. (optional)
-    task_id_pattern = 'task_id_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. (optional)
+    xcom_key_pattern = 'xcom_key_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. (optional)
+    dag_display_name_pattern = 'dag_display_name_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. (optional)
+    run_id_pattern = 'run_id_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. (optional)
+    task_id_pattern = 'task_id_pattern_example' # str | SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. (optional)
     map_index_filter = 56 # int |  (optional)
     logical_date_gte = '2013-10-20T19:20:30+01:00' # datetime |  (optional)
     logical_date_gt = '2013-10-20T19:20:30+01:00' # datetime |  (optional)
@@ -189,10 +281,10 @@
  **map_index** | **int**|  | [optional] 
  **limit** | **int**|  | [optional] [default to 50]
  **offset** | **int**|  | [optional] [default to 0]
- **xcom_key_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. | [optional] 
- **dag_display_name_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. | [optional] 
- **run_id_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. | [optional] 
- **task_id_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`). Regular expressions are **not** supported. | [optional] 
+ **xcom_key_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. | [optional] 
+ **dag_display_name_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. | [optional] 
+ **run_id_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. | [optional] 
+ **task_id_pattern** | **str**| SQL LIKE expression — use `%` / `_` wildcards (e.g. `%customer_%`) or the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions are **not** supported. | [optional] 
  **map_index_filter** | **int**|  | [optional] 
  **logical_date_gte** | **datetime**|  | [optional] 
  **logical_date_gt** | **datetime**|  | [optional] 
diff --git a/pyproject.toml b/pyproject.toml
index c4d5dfd..bf9c5cf 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -21,7 +21,7 @@
     "packaging==26.0",
     "pathspec==1.0.4",
     "pluggy==1.6.0",
-    "tomli==2.4.0; python_version < '3.11'",
+    "tomli==2.4.1; python_version < '3.11'",
     "trove-classifiers==2026.1.14.14",
 ]
 build-backend = "hatchling.build"
@@ -33,7 +33,7 @@
 readme = "README.md"
 license = "Apache-2.0"
 license-files = ["LICENSE", "NOTICE"]
-requires-python = ">=3.10"
+requires-python = ">=3.10,!=3.15"
 authors = [
     { name = "Apache Software Foundation", email = "dev@airflow.apache.org" },
 ]
@@ -53,6 +53,7 @@
     "Programming Language :: Python :: 3.11",
     "Programming Language :: Python :: 3.12",
     "Programming Language :: Python :: 3.13",
+    "Programming Language :: Python :: 3.14",
     "Topic :: System :: Monitoring",
 ]
 
@@ -63,12 +64,12 @@
 ]
 
 [project.urls]
-"Bug Tracker" = "https://github.com/apache/airflow-client-python/issues"
+"Bug Tracker" = "https://github.com/apache/airflow/issues"
 Changelog = "https://github.com/apache/airflow-client-python/blob/main/CHANGELOG.md"
 Documentation = "https://airflow.apache.org/docs/apache-airflow/stable/stable-rest-api-ref.html"
 Download = "https://archive.apache.org/dist/airflow/clients/python/"
 Homepage = "https://airflow.apache.org/"
-"Source Code" = "https://github.com/apache/airflow/clients/python"
+"Source Code" = "https://github.com/apache/airflow-client-python"
 
 [tool.hatch.envs.test]
 dependencies = [
diff --git a/spec/v2.yaml b/spec/v2.yaml
index 96a4d51..e4be048 100644
--- a/spec/v2.yaml
+++ b/spec/v2.yaml
@@ -127,6 +127,9 @@
         name:
           nullable: true
           type: string
+        partition_key:
+          nullable: true
+          type: string
         source_dag_id:
           nullable: true
           type: string
@@ -205,6 +208,11 @@
         uri:
           title: Uri
           type: string
+        watchers:
+          items:
+            $ref: '#/components/schemas/AssetWatcherResponse'
+          title: Watchers
+          type: array
       required:
       - id
       - name
@@ -216,8 +224,28 @@
       - producing_tasks
       - consuming_tasks
       - aliases
+      - watchers
       title: AssetResponse
       type: object
+    AssetWatcherResponse:
+      description: Asset watcher serializer for responses.
+      properties:
+        created_date:
+          format: date-time
+          title: Created Date
+          type: string
+        name:
+          title: Name
+          type: string
+        trigger_id:
+          title: Trigger Id
+          type: integer
+      required:
+      - name
+      - trigger_id
+      - created_date
+      title: AssetWatcherResponse
+      type: object
     BackfillCollectionResponse:
       description: Backfill Collection serializer for responses.
       properties:
@@ -294,7 +322,7 @@
           type: string
         dag_run_conf:
           additionalProperties: true
-          title: Dag Run Conf
+          nullable: true
           type: object
         from_date:
           format: date-time
@@ -325,7 +353,6 @@
       - dag_id
       - from_date
       - to_date
-      - dag_run_conf
       - is_paused
       - reprocess_behavior
       - max_active_runs
@@ -576,7 +603,7 @@
           items:
             anyOf:
             - type: string
-            - $ref: '#/components/schemas/BulkTaskInstanceBody'
+            - $ref: '#/components/schemas/ConnectionBody'
           title: Entities
           type: array
       required:
@@ -600,7 +627,7 @@
           items:
             anyOf:
             - type: string
-            - $ref: '#/components/schemas/BulkTaskInstanceBody'
+            - $ref: '#/components/schemas/PoolBody'
           title: Entities
           type: array
       required:
@@ -624,7 +651,7 @@
           items:
             anyOf:
             - type: string
-            - $ref: '#/components/schemas/BulkTaskInstanceBody'
+            - $ref: '#/components/schemas/VariableBody'
           title: Entities
           type: array
       required:
@@ -660,6 +687,12 @@
       additionalProperties: false
       description: Request body for bulk update, and delete task instances.
       properties:
+        dag_id:
+          nullable: true
+          type: string
+        dag_run_id:
+          nullable: true
+          type: string
         include_downstream:
           default: false
           title: Include Downstream
@@ -710,6 +743,11 @@
             $ref: '#/components/schemas/BulkTaskInstanceBody'
           title: Entities
           type: array
+        update_mask:
+          items:
+            type: string
+          nullable: true
+          type: array
       required:
       - action
       - entities
@@ -732,6 +770,11 @@
             $ref: '#/components/schemas/ConnectionBody'
           title: Entities
           type: array
+        update_mask:
+          items:
+            type: string
+          nullable: true
+          type: array
       required:
       - action
       - entities
@@ -754,6 +797,11 @@
             $ref: '#/components/schemas/PoolBody'
           title: Entities
           type: array
+        update_mask:
+          items:
+            type: string
+          nullable: true
+          type: array
       required:
       - action
       - entities
@@ -776,6 +824,11 @@
             $ref: '#/components/schemas/VariableBody'
           title: Entities
           type: array
+        update_mask:
+          items:
+            type: string
+          nullable: true
+          type: array
       required:
       - action
       - entities
@@ -820,6 +873,10 @@
           default: false
           title: Only Running
           type: boolean
+        prevent_running_task:
+          default: false
+          title: Prevent Running Task
+          type: boolean
         reset_dag_runs:
           default: true
           title: Reset Dag Runs
@@ -933,6 +990,10 @@
         schema:
           nullable: true
           type: string
+        team_name:
+          maxLength: 50
+          nullable: true
+          type: string
       required:
       - connection_id
       - conn_type
@@ -984,6 +1045,9 @@
         schema:
           nullable: true
           type: string
+        team_name:
+          nullable: true
+          type: string
       required:
       - connection_id
       - conn_type
@@ -1014,6 +1078,9 @@
           additionalProperties: true
           title: Extra
           type: object
+        partition_key:
+          nullable: true
+          type: string
       required:
       - asset_id
       title: CreateAssetEventsBody
@@ -1037,6 +1104,15 @@
     DAGDetailsResponse:
       description: Specific serializer for DAG Details responses.
       properties:
+        active_runs_count:
+          default: 0
+          title: Active Runs Count
+          type: integer
+        allowed_run_types:
+          items:
+            $ref: '#/components/schemas/DagRunType'
+          nullable: true
+          type: array
         asset_expression:
           additionalProperties: true
           nullable: true
@@ -1190,6 +1266,9 @@
         timetable_description:
           nullable: true
           type: string
+        timetable_partitioned:
+          title: Timetable Partitioned
+          type: boolean
         timetable_summary:
           nullable: true
           type: string
@@ -1202,6 +1281,7 @@
       - is_paused
       - is_stale
       - fileloc
+      - timetable_partitioned
       - tags
       - max_active_tasks
       - max_consecutive_failed_dag_runs
@@ -1228,6 +1308,11 @@
     DAGResponse:
       description: DAG serializer for responses.
       properties:
+        allowed_run_types:
+          items:
+            $ref: '#/components/schemas/DagRunType'
+          nullable: true
+          type: array
         bundle_name:
           nullable: true
           type: string
@@ -1315,6 +1400,9 @@
         timetable_description:
           nullable: true
           type: string
+        timetable_partitioned:
+          title: Timetable Partitioned
+          type: boolean
         timetable_summary:
           nullable: true
           type: string
@@ -1324,6 +1412,7 @@
       - is_paused
       - is_stale
       - fileloc
+      - timetable_partitioned
       - tags
       - max_active_tasks
       - max_consecutive_failed_dag_runs
@@ -1440,6 +1529,9 @@
         note:
           nullable: true
           type: string
+        partition_key:
+          nullable: true
+          type: string
         queued_at:
           format: date-time
           nullable: true
@@ -1476,11 +1568,26 @@
       additionalProperties: false
       description: List DAG Runs body for batch endpoint.
       properties:
+        conf_contains:
+          nullable: true
+          type: string
         dag_ids:
           items:
             type: string
           nullable: true
           type: array
+        duration_gt:
+          nullable: true
+          type: number
+        duration_gte:
+          nullable: true
+          type: number
+        duration_lt:
+          nullable: true
+          type: number
+        duration_lte:
+          nullable: true
+          type: number
         end_date_gt:
           format: date-time
           nullable: true
@@ -1675,7 +1782,7 @@
       type: object
     DagRunAssetReference:
       additionalProperties: false
-      description: DAGRun serializer for asset responses.
+      description: DagRun serializer for asset responses.
       properties:
         dag_id:
           title: Dag Id
@@ -1696,6 +1803,9 @@
           format: date-time
           nullable: true
           type: string
+        partition_key:
+          nullable: true
+          type: string
         run_id:
           title: Run Id
           type: string
@@ -1749,6 +1859,7 @@
       - scheduled
       - manual
       - asset_triggered
+      - asset_materialization
       title: DagRunType
       type: string
     DagScheduleAssetReference:
@@ -1886,6 +1997,7 @@
       enum:
       - asset conflict
       - non-existent pool
+      - runtime varying value
       title: DagWarningType
       type: string
     DryRunBackfillCollectionResponse:
@@ -1909,10 +2021,16 @@
       properties:
         logical_date:
           format: date-time
-          title: Logical Date
+          nullable: true
           type: string
-      required:
-      - logical_date
+        partition_date:
+          format: date-time
+          nullable: true
+          type: string
+        partition_key:
+          nullable: true
+          type: string
+      required: []
       title: DryRunBackfillResponse
       type: object
     EventLogCollectionResponse:
@@ -2006,6 +2124,7 @@
           - dag_run
           - task
           - task_instance
+          - base
           title: Destination
           type: string
         href:
@@ -2139,10 +2258,10 @@
         task_instance:
           $ref: '#/components/schemas/TaskInstanceResponse'
       required:
-      - task_instance
       - options
       - subject
       - created_at
+      - task_instance
       title: HITLDetail
       type: object
     HITLDetailCollection:
@@ -2161,6 +2280,72 @@
       - total_entries
       title: HITLDetailCollection
       type: object
+    HITLDetailHistory:
+      description: Schema for Human-in-the-loop detail history.
+      properties:
+        assigned_users:
+          items:
+            $ref: '#/components/schemas/HITLUser'
+          title: Assigned Users
+          type: array
+        body:
+          nullable: true
+          type: string
+        chosen_options:
+          items:
+            type: string
+          nullable: true
+          type: array
+        created_at:
+          format: date-time
+          title: Created At
+          type: string
+        defaults:
+          items:
+            type: string
+          nullable: true
+          type: array
+        multiple:
+          default: false
+          title: Multiple
+          type: boolean
+        options:
+          items:
+            type: string
+          minItems: 1
+          title: Options
+          type: array
+        params:
+          additionalProperties: true
+          title: Params
+          type: object
+        params_input:
+          additionalProperties: true
+          title: Params Input
+          type: object
+        responded_at:
+          format: date-time
+          nullable: true
+          type: string
+        responded_by_user:
+          $ref: '#/components/schemas/HITLUser'
+          nullable: true
+        response_received:
+          default: false
+          title: Response Received
+          type: boolean
+        subject:
+          title: Subject
+          type: string
+        task_instance:
+          $ref: '#/components/schemas/TaskInstanceHistoryResponse'
+      required:
+      - options
+      - subject
+      - created_at
+      - task_instance
+      title: HITLDetailHistory
+      type: object
     HITLDetailResponse:
       description: Response of updating a Human-in-the-loop detail.
       properties:
@@ -2539,6 +2724,10 @@
           minimum: -1.0
           title: Slots
           type: integer
+        team_name:
+          maxLength: 50
+          nullable: true
+          type: string
       required:
       - name
       - slots
@@ -2578,6 +2767,10 @@
           minimum: -1.0
           nullable: true
           type: integer
+        team_name:
+          maxLength: 50
+          nullable: true
+          type: string
       title: PoolPatchBody
       type: object
     PoolResponse:
@@ -2615,6 +2808,9 @@
           minimum: -1.0
           title: Slots
           type: integer
+        team_name:
+          nullable: true
+          type: string
       required:
       - name
       - slots
@@ -2721,6 +2917,7 @@
           - dag_run
           - task
           - task_instance
+          - base
           - dashboard
           title: Destination
           type: string
@@ -3009,6 +3206,7 @@
           nullable: true
           type: string
         id:
+          format: uuid
           title: Id
           type: string
         logical_date:
@@ -3366,7 +3564,7 @@
           nullable: true
         retry_exponential_backoff:
           title: Retry Exponential Backoff
-          type: boolean
+          type: number
         start_date:
           format: date-time
           nullable: true
@@ -3452,6 +3650,9 @@
         note:
           nullable: true
           type: string
+        partition_key:
+          nullable: true
+          type: string
         run_after:
           format: date-time
           nullable: true
@@ -3475,6 +3676,9 @@
         kwargs:
           title: Kwargs
           type: string
+        queue:
+          nullable: true
+          type: string
         triggerer_id:
           nullable: true
           type: integer
@@ -3516,6 +3720,11 @@
       type: object
     ValidationError:
       properties:
+        ctx:
+          title: Context
+          type: object
+        input:
+          title: Input
         loc:
           items:
             anyOf:
@@ -3546,6 +3755,10 @@
           maxLength: 250
           title: Key
           type: string
+        team_name:
+          maxLength: 50
+          nullable: true
+          type: string
         value:
           $ref: '#/components/schemas/JsonValue'
       required:
@@ -3581,6 +3794,9 @@
         key:
           title: Key
           type: string
+        team_name:
+          nullable: true
+          type: string
         value:
           title: Value
           type: string
@@ -3850,7 +4066,8 @@
           title: Offset
           type: integer
       - description: "SQL LIKE expression \u2014 use `%` / `_` wildcards (e.g. `%customer_%`).\
-          \ Regular expressions are **not** supported."
+          \ You can also use the pipe `|` operator for OR logic (e.g. `dag1 | dag2`).\
+          \ Regular expressions are **not** supported."
         in: query
         name: name_pattern
         required: false
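The pattern filters in these hunks accept SQL `LIKE` wildcards and now also a pipe `|` OR operator. A rough Python sketch of those matching semantics — the helper names and the exact whitespace handling around `|` are assumptions for illustration, not the server's implementation:

```python
import re


def _like_to_regex(alternative: str) -> str:
    # Translate one SQL LIKE alternative into an anchored regex:
    # `%` matches any run of characters, `_` matches a single character.
    parts = []
    for ch in alternative:
        if ch == "%":
            parts.append(".*")
        elif ch == "_":
            parts.append(".")
        else:
            parts.append(re.escape(ch))
    return "^" + "".join(parts) + "$"


def pattern_matches(pattern: str, value: str) -> bool:
    """Evaluate a LIKE-style pattern where `|` separates OR alternatives."""
    alternatives = [p.strip() for p in pattern.split("|") if p.strip()]
    return any(re.match(_like_to_regex(alt), value) for alt in alternatives)
```

So `name_pattern=dag1 | dag2` matches either ID exactly, while `%customer_%` still behaves as a plain LIKE expression.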
@@ -3858,7 +4075,8 @@
           nullable: true
           type: string
       - description: "SQL LIKE expression \u2014 use `%` / `_` wildcards (e.g. `%customer_%`).\
-          \ Regular expressions are **not** supported."
+          \ You can also use the pipe `|` operator for OR logic (e.g. `dag1 | dag2`).\
+          \ Regular expressions are **not** supported."
         in: query
         name: uri_pattern
         required: false
@@ -3955,7 +4173,8 @@
           title: Offset
           type: integer
       - description: "SQL LIKE expression \u2014 use `%` / `_` wildcards (e.g. `%customer_%`).\
-          \ Regular expressions are **not** supported."
+          \ You can also use the pipe `|` operator for OR logic (e.g. `dag1 | dag2`).\
+          \ Regular expressions are **not** supported."
         in: query
         name: name_pattern
         required: false
@@ -4127,6 +4346,15 @@
         schema:
           nullable: true
           type: integer
+      - description: "SQL LIKE expression \u2014 use `%` / `_` wildcards (e.g. `%customer_%`).\
+          \ You can also use the pipe `|` operator for OR logic (e.g. `dag1 | dag2`).\
+          \ Regular expressions are **not** supported."
+        in: query
+        name: name_pattern
+        required: false
+        schema:
+          nullable: true
+          type: string
       - in: query
         name: timestamp_gte
         required: false
@@ -4304,6 +4532,12 @@
               schema:
                 $ref: '#/components/schemas/DAGRunResponse'
           description: Successful Response
+        '400':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/HTTPExceptionResponse'
+          description: Bad Request
         '401':
           content:
             application/json:
@@ -4574,6 +4808,12 @@
               schema:
                 $ref: '#/components/schemas/BackfillResponse'
           description: Successful Response
+        '400':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/HTTPExceptionResponse'
+          description: Bad Request
         '401':
           content:
             application/json:
@@ -5056,7 +5296,7 @@
           type: integer
       - description: 'Attributes to order by, multi criteria sort is supported. Prefix
           with `-` for descending order. Supported attributes: `conn_id, conn_type,
-          description, host, port, id, connection_id`'
+          description, host, port, id, team_name, connection_id`'
         in: query
         name: order_by
         required: false
@@ -5065,13 +5305,14 @@
           - id
           description: 'Attributes to order by, multi criteria sort is supported.
             Prefix with `-` for descending order. Supported attributes: `conn_id,
-            conn_type, description, host, port, id, connection_id`'
+            conn_type, description, host, port, id, team_name, connection_id`'
           items:
             type: string
           title: Order By
           type: array
       - description: "SQL LIKE expression \u2014 use `%` / `_` wildcards (e.g. `%customer_%`).\
-          \ Regular expressions are **not** supported."
+          \ You can also use the pipe `|` operator for OR logic (e.g. `dag1 | dag2`).\
+          \ Regular expressions are **not** supported."
         in: query
         name: connection_id_pattern
         required: false
@@ -5607,7 +5848,8 @@
           title: Order By
           type: array
       - description: "SQL LIKE expression \u2014 use `%` / `_` wildcards (e.g. `%customer_%`).\
-          \ Regular expressions are **not** supported."
+          \ You can also use the pipe `|` operator for OR logic (e.g. `dag1 | dag2`).\
+          \ Regular expressions are **not** supported."
         in: query
         name: tag_name_pattern
         required: false
@@ -5772,7 +6014,8 @@
           title: Owners
           type: array
       - description: "SQL LIKE expression \u2014 use `%` / `_` wildcards (e.g. `%customer_%`).\
-          \ Regular expressions are **not** supported."
+          \ You can also use the pipe `|` operator for OR logic (e.g. `dag1 | dag2`).\
+          \ Regular expressions are **not** supported."
         in: query
         name: dag_id_pattern
         required: false
@@ -5780,7 +6023,8 @@
           nullable: true
           type: string
       - description: "SQL LIKE expression \u2014 use `%` / `_` wildcards (e.g. `%customer_%`).\
-          \ Regular expressions are **not** supported."
+          \ You can also use the pipe `|` operator for OR logic (e.g. `dag1 | dag2`).\
+          \ Regular expressions are **not** supported."
         in: query
         name: dag_display_name_pattern
         required: false
@@ -5926,6 +6170,14 @@
         schema:
           nullable: true
           type: boolean
+      - in: query
+        name: timetable_type
+        required: false
+        schema:
+          items:
+            type: string
+          title: Timetable Type
+          type: array
       responses:
         '200':
           content:
@@ -6011,7 +6263,8 @@
           title: Owners
           type: array
       - description: "SQL LIKE expression \u2014 use `%` / `_` wildcards (e.g. `%customer_%`).\
-          \ Regular expressions are **not** supported."
+          \ You can also use the pipe `|` operator for OR logic (e.g. `dag1 | dag2`).\
+          \ Regular expressions are **not** supported."
         in: query
         name: dag_id_pattern
         required: false
@@ -6518,6 +6771,12 @@
               schema:
                 $ref: '#/components/schemas/HTTPExceptionResponse'
           description: Not Found
+        '409':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/HTTPExceptionResponse'
+          description: Conflict
         '422':
           content:
             application/json:
@@ -6674,6 +6933,30 @@
           nullable: true
           type: string
       - in: query
+        name: duration_gte
+        required: false
+        schema:
+          nullable: true
+          type: number
+      - in: query
+        name: duration_gt
+        required: false
+        schema:
+          nullable: true
+          type: number
+      - in: query
+        name: duration_lte
+        required: false
+        schema:
+          nullable: true
+          type: number
+      - in: query
+        name: duration_lt
+        required: false
+        schema:
+          nullable: true
+          type: number
+      - in: query
         name: updated_at_gte
         required: false
         schema:
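The new `duration_gt`/`duration_gte`/`duration_lt`/`duration_lte` parameters above (mirrored in `DAGRunsBatchBody`) bound a DAG run's duration in seconds. A hedged sketch of how such range filters could compose — combining all supplied bounds with AND is an assumption for illustration:

```python
def filter_by_duration(runs, gt=None, gte=None, lt=None, lte=None):
    """Keep runs whose duration (in seconds) satisfies every supplied bound."""
    def keep(duration):
        if gt is not None and not duration > gt:
            return False
        if gte is not None and not duration >= gte:
            return False
        if lt is not None and not duration < lt:
            return False
        if lte is not None and not duration <= lte:
            return False
        return True

    return [run for run in runs if keep(run["duration"])]
```

For instance, `duration_gte=60&duration_lt=300` would keep only runs lasting at least a minute but under five.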
@@ -6702,6 +6985,12 @@
           nullable: true
           type: string
       - in: query
+        name: conf_contains
+        required: false
+        schema:
+          title: Conf Contains
+          type: string
+      - in: query
         name: run_type
         required: false
         schema:
@@ -6725,6 +7014,12 @@
             type: integer
           title: Dag Version
           type: array
+      - in: query
+        name: bundle_version
+        required: false
+        schema:
+          nullable: true
+          type: string
       - description: 'Attributes to order by, multi criteria sort is supported. Prefix
           with `-` for descending order. Supported attributes: `id, state, dag_id,
           run_id, logical_date, run_after, start_date, end_date, updated_at, conf,
@@ -6744,7 +7039,8 @@
           title: Order By
           type: array
       - description: "SQL LIKE expression \u2014 use `%` / `_` wildcards (e.g. `%customer_%`).\
-          \ Regular expressions are **not** supported."
+          \ You can also use the pipe `|` operator for OR logic (e.g. `dag1 | dag2`).\
+          \ Regular expressions are **not** supported."
         in: query
         name: run_id_pattern
         required: false
@@ -6752,13 +7048,32 @@
           nullable: true
           type: string
       - description: "SQL LIKE expression \u2014 use `%` / `_` wildcards (e.g. `%customer_%`).\
-          \ Regular expressions are **not** supported."
+          \ You can also use the pipe `|` operator for OR logic (e.g. `dag1 | dag2`).\
+          \ Regular expressions are **not** supported."
         in: query
         name: triggering_user_name_pattern
         required: false
         schema:
           nullable: true
           type: string
+      - description: "SQL LIKE expression \u2014 use `%` / `_` wildcards (e.g. `%customer_%`).\
+          \ You can also use the pipe `|` operator for OR logic (e.g. `dag1 | dag2`).\
+          \ Regular expressions are **not** supported."
+        in: query
+        name: dag_id_pattern
+        required: false
+        schema:
+          nullable: true
+          type: string
+      - description: "SQL LIKE expression \u2014 use `%` / `_` wildcards (e.g. `%customer_%`).\
+          \ You can also use the pipe `|` operator for OR logic (e.g. `dag1 | dag2`).\
+          \ Regular expressions are **not** supported."
+        in: query
+        name: partition_key_pattern
+        required: false
+        schema:
+          nullable: true
+          type: string
       responses:
         '200':
           content:
@@ -7194,7 +7509,8 @@
       - description: 'Attributes to order by, multi criteria sort is supported. Prefix
           with `-` for descending order. Supported attributes: `ti_id, subject, responded_at,
           created_at, responded_by_user_id, responded_by_user_name, dag_id, run_id,
-          run_after, rendered_map_index, task_instance_operator, task_instance_state`'
+          task_display_name, run_after, rendered_map_index, task_instance_operator,
+          task_instance_state`'
         in: query
         name: order_by
         required: false
@@ -7204,14 +7520,15 @@
           description: 'Attributes to order by, multi criteria sort is supported.
             Prefix with `-` for descending order. Supported attributes: `ti_id, subject,
             responded_at, created_at, responded_by_user_id, responded_by_user_name,
-            dag_id, run_id, run_after, rendered_map_index, task_instance_operator,
+            dag_id, run_id, task_display_name, run_after, rendered_map_index, task_instance_operator,
             task_instance_state`'
           items:
             type: string
           title: Order By
           type: array
       - description: "SQL LIKE expression \u2014 use `%` / `_` wildcards (e.g. `%customer_%`).\
-          \ Regular expressions are **not** supported."
+          \ You can also use the pipe `|` operator for OR logic (e.g. `dag1 | dag2`).\
+          \ Regular expressions are **not** supported."
         in: query
         name: dag_id_pattern
         required: false
@@ -7225,7 +7542,8 @@
           nullable: true
           type: string
       - description: "SQL LIKE expression \u2014 use `%` / `_` wildcards (e.g. `%customer_%`).\
-          \ Regular expressions are **not** supported."
+          \ You can also use the pipe `|` operator for OR logic (e.g. `dag1 | dag2`).\
+          \ Regular expressions are **not** supported."
         in: query
         name: task_id_pattern
         required: false
@@ -7269,7 +7587,8 @@
           title: Responded By User Name
           type: array
       - description: "SQL LIKE expression \u2014 use `%` / `_` wildcards (e.g. `%customer_%`).\
-          \ Regular expressions are **not** supported."
+          \ You can also use the pipe `|` operator for OR logic (e.g. `dag1 | dag2`).\
+          \ Regular expressions are **not** supported."
         in: query
         name: subject_search
         required: false
@@ -7277,7 +7596,8 @@
           nullable: true
           type: string
       - description: "SQL LIKE expression \u2014 use `%` / `_` wildcards (e.g. `%customer_%`).\
-          \ Regular expressions are **not** supported."
+          \ You can also use the pipe `|` operator for OR logic (e.g. `dag1 | dag2`).\
+          \ Regular expressions are **not** supported."
         in: query
         name: body_search
         required: false
@@ -7537,7 +7857,8 @@
           nullable: true
           type: number
       - description: "SQL LIKE expression \u2014 use `%` / `_` wildcards (e.g. `%customer_%`).\
-          \ Regular expressions are **not** supported."
+          \ You can also use the pipe `|` operator for OR logic (e.g. `dag1 | dag2`).\
+          \ Regular expressions are **not** supported."
         in: query
         name: task_display_name_pattern
         required: false
@@ -7552,6 +7873,24 @@
         schema:
           nullable: true
           type: string
+      - description: "SQL LIKE expression \u2014 use `%` / `_` wildcards (e.g. `%customer_%`).\
+          \ You can also use the pipe `|` operator for OR logic (e.g. `dag1 | dag2`).\
+          \ Regular expressions are **not** supported."
+        in: query
+        name: dag_id_pattern
+        required: false
+        schema:
+          nullable: true
+          type: string
+      - description: "SQL LIKE expression \u2014 use `%` / `_` wildcards (e.g. `%customer_%`).\
+          \ You can also use the pipe `|` operator for OR logic (e.g. `dag1 | dag2`).\
+          \ Regular expressions are **not** supported."
+        in: query
+        name: run_id_pattern
+        required: false
+        schema:
+          nullable: true
+          type: string
       - in: query
         name: state
         required: false
@@ -7568,6 +7907,15 @@
             type: string
           title: Pool
           type: array
+      - description: "SQL LIKE expression \u2014 use `%` / `_` wildcards (e.g. `%customer_%`).\
+          \ You can also use the pipe `|` operator for OR logic (e.g. `dag1 | dag2`).\
+          \ Regular expressions are **not** supported."
+        in: query
+        name: pool_name_pattern
+        required: false
+        schema:
+          nullable: true
+          type: string
       - in: query
         name: queue
         required: false
@@ -7576,6 +7924,15 @@
             type: string
           title: Queue
           type: array
+      - description: "SQL LIKE expression \u2014 use `%` / `_` wildcards (e.g. `%customer_%`).\
+          \ You can also use the pipe `|` operator for OR logic (e.g. `dag1 | dag2`).\
+          \ Regular expressions are **not** supported."
+        in: query
+        name: queue_name_pattern
+        required: false
+        schema:
+          nullable: true
+          type: string
       - in: query
         name: executor
         required: false
@@ -7608,6 +7965,15 @@
             type: string
           title: Operator
           type: array
+      - description: "SQL LIKE expression \u2014 use `%` / `_` wildcards (e.g. `%customer_%`).\
+          \ You can also use the pipe `|` operator for OR logic (e.g. `dag1 | dag2`).\
+          \ Regular expressions are **not** supported."
+        in: query
+        name: operator_name_pattern
+        required: false
+        schema:
+          nullable: true
+          type: string
       - in: query
         name: map_index
         required: false
@@ -7659,6 +8025,12 @@
               schema:
                 $ref: '#/components/schemas/TaskInstanceCollectionResponse'
           description: Successful Response
+        '400':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/HTTPExceptionResponse'
+          description: Bad Request
         '401':
           content:
             application/json:
@@ -8524,6 +8896,15 @@
             type: string
           title: Pool
           type: array
+      - description: "SQL LIKE expression \u2014 use `%` / `_` wildcards (e.g. `%customer_%`).\
+          \ You can also use the pipe `|` operator for OR logic (e.g. `dag1 | dag2`).\
+          \ Regular expressions are **not** supported."
+        in: query
+        name: pool_name_pattern
+        required: false
+        schema:
+          nullable: true
+          type: string
       - in: query
         name: queue
         required: false
@@ -8532,6 +8913,15 @@
             type: string
           title: Queue
           type: array
+      - description: "SQL LIKE expression \u2014 use `%` / `_` wildcards (e.g. `%customer_%`).\
+          \ You can also use the pipe `|` operator for OR logic (e.g. `dag1 | dag2`).\
+          \ Regular expressions are **not** supported."
+        in: query
+        name: queue_name_pattern
+        required: false
+        schema:
+          nullable: true
+          type: string
       - in: query
         name: executor
         required: false
@@ -8564,6 +8954,15 @@
             type: string
           title: Operator
           type: array
+      - description: "SQL LIKE expression \u2014 use `%` / `_` wildcards (e.g. `%customer_%`).\
+          \ You can also use the pipe `|` operator for OR logic (e.g. `dag1 | dag2`).\
+          \ Regular expressions are **not** supported."
+        in: query
+        name: operator_name_pattern
+        required: false
+        schema:
+          nullable: true
+          type: string
       - in: query
         name: map_index
         required: false
@@ -8948,7 +9347,8 @@
           title: Offset
           type: integer
       - description: "SQL LIKE expression \u2014 use `%` / `_` wildcards (e.g. `%customer_%`).\
-          \ Regular expressions are **not** supported."
+          \ You can also use the pipe `|` operator for OR logic (e.g. `dag1 | dag2`).\
+          \ Regular expressions are **not** supported."
         in: query
         name: xcom_key_pattern
         required: false
@@ -8956,7 +9356,8 @@
           nullable: true
           type: string
       - description: "SQL LIKE expression \u2014 use `%` / `_` wildcards (e.g. `%customer_%`).\
-          \ Regular expressions are **not** supported."
+          \ You can also use the pipe `|` operator for OR logic (e.g. `dag1 | dag2`).\
+          \ Regular expressions are **not** supported."
         in: query
         name: dag_display_name_pattern
         required: false
@@ -8964,7 +9365,8 @@
           nullable: true
           type: string
       - description: "SQL LIKE expression \u2014 use `%` / `_` wildcards (e.g. `%customer_%`).\
-          \ Regular expressions are **not** supported."
+          \ You can also use the pipe `|` operator for OR logic (e.g. `dag1 | dag2`).\
+          \ Regular expressions are **not** supported."
         in: query
         name: run_id_pattern
         required: false
@@ -8972,7 +9374,8 @@
           nullable: true
           type: string
       - description: "SQL LIKE expression \u2014 use `%` / `_` wildcards (e.g. `%customer_%`).\
-          \ Regular expressions are **not** supported."
+          \ You can also use the pipe `|` operator for OR logic (e.g. `dag1 | dag2`).\
+          \ Regular expressions are **not** supported."
         in: query
         name: task_id_pattern
         required: false
@@ -9156,6 +9559,81 @@
       tags:
       - XCom
   /api/v2/dags/{dag_id}/dagRuns/{dag_run_id}/taskInstances/{task_id}/xcomEntries/{xcom_key}:
+    delete:
+      description: Delete an XCom entry.
+      operationId: delete_xcom_entry
+      parameters:
+      - in: path
+        name: dag_id
+        required: true
+        schema:
+          title: Dag Id
+          type: string
+      - in: path
+        name: task_id
+        required: true
+        schema:
+          title: Task Id
+          type: string
+      - in: path
+        name: dag_run_id
+        required: true
+        schema:
+          title: Dag Run Id
+          type: string
+      - in: path
+        name: xcom_key
+        required: true
+        schema:
+          title: Xcom Key
+          type: string
+      - in: query
+        name: map_index
+        required: false
+        schema:
+          default: -1
+          minimum: -1
+          title: Map Index
+          type: integer
+      responses:
+        '204':
+          description: Successful Response
+        '400':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/HTTPExceptionResponse'
+          description: Bad Request
+        '401':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/HTTPExceptionResponse'
+          description: Unauthorized
+        '403':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/HTTPExceptionResponse'
+          description: Forbidden
+        '404':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/HTTPExceptionResponse'
+          description: Not Found
+        '422':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/HTTPValidationError'
+          description: Validation Error
+      security:
+      - OAuth2PasswordBearer: []
+      - HTTPBearer: []
+      summary: Delete Xcom Entry
+      tags:
+      - XCom
     get:
       description: Get an XCom entry.
       operationId: get_xcom_entry
@@ -9781,6 +10259,78 @@
       summary: Update Hitl Detail
       tags:
       - Task Instance
+  /api/v2/dags/{dag_id}/dagRuns/{dag_run_id}/taskInstances/{task_id}/{map_index}/hitlDetails/tries/{try_number}:
+    get:
+      description: Get the Human-in-the-loop detail for a specific try of a task instance.
+      operationId: get_hitl_detail_try_detail
+      parameters:
+      - in: path
+        name: dag_id
+        required: true
+        schema:
+          title: Dag Id
+          type: string
+      - in: path
+        name: dag_run_id
+        required: true
+        schema:
+          title: Dag Run Id
+          type: string
+      - in: path
+        name: task_id
+        required: true
+        schema:
+          title: Task Id
+          type: string
+      - in: path
+        name: map_index
+        required: true
+        schema:
+          title: Map Index
+          type: integer
+      - in: path
+        name: try_number
+        required: true
+        schema:
+          nullable: true
+          type: integer
+      responses:
+        '200':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/HITLDetailHistory'
+          description: Successful Response
+        '401':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/HTTPExceptionResponse'
+          description: Unauthorized
+        '403':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/HTTPExceptionResponse'
+          description: Forbidden
+        '404':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/HTTPExceptionResponse'
+          description: Not Found
+        '422':
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/HTTPValidationError'
+          description: Validation Error
+      security:
+      - OAuth2PasswordBearer: []
+      - HTTPBearer: []
+      summary: Get Hitl Detail Try Detail
+      tags:
+      - Task Instance
   /api/v2/dags/{dag_id}/dagRuns/{dag_run_id}/taskInstances/{task_id}/{map_index}/tries:
     get:
       operationId: get_mapped_task_instance_tries
@@ -10589,7 +11139,8 @@
           nullable: true
           type: string
       - description: "SQL LIKE expression \u2014 use `%` / `_` wildcards (e.g. `%customer_%`).\
-          \ Regular expressions are **not** supported."
+          \ Use the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions\
+          \ are **not** supported."
         in: query
         name: dag_id_pattern
         required: false
@@ -10597,7 +11148,8 @@
           nullable: true
           type: string
       - description: "SQL LIKE expression \u2014 use `%` / `_` wildcards (e.g. `%customer_%`).\
-          \ Regular expressions are **not** supported."
+          \ Use the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions\
+          \ are **not** supported."
         in: query
         name: task_id_pattern
         required: false
@@ -10605,7 +11157,8 @@
           nullable: true
           type: string
       - description: "SQL LIKE expression \u2014 use `%` / `_` wildcards (e.g. `%customer_%`).\
-          \ Regular expressions are **not** supported."
+          \ Use the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions\
+          \ are **not** supported."
         in: query
         name: run_id_pattern
         required: false
@@ -10613,7 +11166,8 @@
           nullable: true
           type: string
       - description: "SQL LIKE expression \u2014 use `%` / `_` wildcards (e.g. `%customer_%`).\
-          \ Regular expressions are **not** supported."
+          \ Use the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions\
+          \ are **not** supported."
         in: query
         name: owner_pattern
         required: false
@@ -10621,7 +11175,8 @@
           nullable: true
           type: string
       - description: "SQL LIKE expression \u2014 use `%` / `_` wildcards (e.g. `%customer_%`).\
-          \ Regular expressions are **not** supported."
+          \ Use the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions\
+          \ are **not** supported."
         in: query
         name: event_pattern
         required: false
@@ -10744,7 +11299,8 @@
           title: Order By
           type: array
       - description: "SQL LIKE expression \u2014 use `%` / `_` wildcards (e.g. `%customer_%`).\
-          \ Regular expressions are **not** supported."
+          \ Use the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions\
+          \ are **not** supported."
         in: query
         name: filename_pattern
         required: false
@@ -11168,7 +11724,8 @@
           title: Order By
           type: array
       - description: "SQL LIKE expression \u2014 use `%` / `_` wildcards (e.g. `%customer_%`).\
-          \ Regular expressions are **not** supported."
+          \ Use the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions\
+          \ are **not** supported."
         in: query
         name: pool_name_pattern
         required: false
@@ -11537,7 +12094,7 @@
           type: integer
       - description: 'Attributes to order by, multi criteria sort is supported. Prefix
           with `-` for descending order. Supported attributes: `key, id, _val, description,
-          is_encrypted`'
+          is_encrypted, team_name`'
         in: query
         name: order_by
         required: false
@@ -11546,13 +12103,14 @@
           - id
           description: 'Attributes to order by, multi criteria sort is supported.
             Prefix with `-` for descending order. Supported attributes: `key, id,
-            _val, description, is_encrypted`'
+            _val, description, is_encrypted, team_name`'
           items:
             type: string
           title: Order By
           type: array
       - description: "SQL LIKE expression \u2014 use `%` / `_` wildcards (e.g. `%customer_%`).\
-          \ Regular expressions are **not** supported."
+          \ Use the pipe `|` operator for OR logic (e.g. `dag1 | dag2`). Regular expressions\
+          \ are **not** supported."
         in: query
         name: variable_key_pattern
         required: false
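The spec hunks above extend every `*_pattern` query parameter so that, besides SQL LIKE wildcards, a pipe `|` can combine alternatives with OR logic. As a minimal sketch of that semantics — the exact splitting, trimming, and escaping rules are assumptions here, not Airflow's actual server-side implementation — a pattern could translate into OR'd LIKE clauses like this:

```python
# Hypothetical sketch: translate a pattern such as "dag1 | dag2" into
# OR'd SQL LIKE conditions, matching the behavior the updated
# descriptions suggest. Trimming/escaping details are assumptions.

def pattern_to_like_clauses(column: str, pattern: str) -> str:
    """Split on `|`, trim whitespace, and OR the resulting LIKE conditions."""
    segments = [s.strip() for s in pattern.split("|") if s.strip()]
    clauses = [f"{column} LIKE '{s}'" for s in segments]
    return " OR ".join(clauses)

print(pattern_to_like_clauses("dag_id", "dag1 | dag2"))
# dag_id LIKE 'dag1' OR dag_id LIKE 'dag2'
print(pattern_to_like_clauses("dag_id", "%customer_%"))
# dag_id LIKE '%customer_%'
```

A pattern without any pipe behaves exactly as before: a single LIKE expression with `%` / `_` wildcards.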
diff --git a/test/test_actions_inner.py b/test/test_actions_inner.py
index 35cfb63..c6c579d 100644
--- a/test/test_actions_inner.py
+++ b/test/test_actions_inner.py
@@ -40,7 +40,10 @@
                 entities = [
                     null
                     ],
-                action_on_non_existence = 'fail'
+                action_on_non_existence = 'fail',
+                update_mask = [
+                    ''
+                    ]
             )
         else:
             return ActionsInner(
diff --git a/test/test_actions_inner1.py b/test/test_actions_inner1.py
index f6e3f62..12d1562 100644
--- a/test/test_actions_inner1.py
+++ b/test/test_actions_inner1.py
@@ -40,7 +40,10 @@
                 entities = [
                     null
                     ],
-                action_on_non_existence = 'fail'
+                action_on_non_existence = 'fail',
+                update_mask = [
+                    ''
+                    ]
             )
         else:
             return ActionsInner1(
diff --git a/test/test_actions_inner2.py b/test/test_actions_inner2.py
index 1c382f4..51d2e3a 100644
--- a/test/test_actions_inner2.py
+++ b/test/test_actions_inner2.py
@@ -40,7 +40,10 @@
                 entities = [
                     null
                     ],
-                action_on_non_existence = 'fail'
+                action_on_non_existence = 'fail',
+                update_mask = [
+                    ''
+                    ]
             )
         else:
             return ActionsInner2(
diff --git a/test/test_actions_inner3.py b/test/test_actions_inner3.py
index 2b7112c..a8885f2 100644
--- a/test/test_actions_inner3.py
+++ b/test/test_actions_inner3.py
@@ -40,7 +40,10 @@
                 entities = [
                     null
                     ],
-                action_on_non_existence = 'fail'
+                action_on_non_existence = 'fail',
+                update_mask = [
+                    ''
+                    ]
             )
         else:
             return ActionsInner3(
diff --git a/test/test_asset_collection_response.py b/test/test_asset_collection_response.py
index fef5e94..db12571 100644
--- a/test/test_asset_collection_response.py
+++ b/test/test_asset_collection_response.py
@@ -72,7 +72,13 @@
                                 updated_at = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'), )
                             ], 
                         updated_at = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'), 
-                        uri = '', )
+                        uri = '', 
+                        watchers = [
+                            airflow_client.client.models.asset_watcher_response.AssetWatcherResponse(
+                                created_date = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'), 
+                                name = '', 
+                                trigger_id = 56, )
+                            ], )
                     ],
                 total_entries = 56
             )
@@ -115,7 +121,13 @@
                                 updated_at = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'), )
                             ], 
                         updated_at = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'), 
-                        uri = '', )
+                        uri = '', 
+                        watchers = [
+                            airflow_client.client.models.asset_watcher_response.AssetWatcherResponse(
+                                created_date = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'), 
+                                name = '', 
+                                trigger_id = 56, )
+                            ], )
                     ],
                 total_entries = 56,
         )
diff --git a/test/test_asset_event_collection_response.py b/test/test_asset_event_collection_response.py
index 596fe73..e636ea3 100644
--- a/test/test_asset_event_collection_response.py
+++ b/test/test_asset_event_collection_response.py
@@ -45,6 +45,7 @@
                                 data_interval_start = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'), 
                                 end_date = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'), 
                                 logical_date = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'), 
+                                partition_key = '', 
                                 run_id = '', 
                                 start_date = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'), 
                                 state = '', )
@@ -55,6 +56,7 @@
                         group = '', 
                         id = 56, 
                         name = '', 
+                        partition_key = '', 
                         source_dag_id = '', 
                         source_map_index = 56, 
                         source_run_id = '', 
@@ -76,6 +78,7 @@
                                 data_interval_start = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'), 
                                 end_date = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'), 
                                 logical_date = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'), 
+                                partition_key = '', 
                                 run_id = '', 
                                 start_date = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'), 
                                 state = '', )
@@ -86,6 +89,7 @@
                         group = '', 
                         id = 56, 
                         name = '', 
+                        partition_key = '', 
                         source_dag_id = '', 
                         source_map_index = 56, 
                         source_run_id = '', 
diff --git a/test/test_asset_event_response.py b/test/test_asset_event_response.py
index 4ee8512..73d3177 100644
--- a/test/test_asset_event_response.py
+++ b/test/test_asset_event_response.py
@@ -43,6 +43,7 @@
                         data_interval_start = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'), 
                         end_date = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'), 
                         logical_date = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'), 
+                        partition_key = '', 
                         run_id = '', 
                         start_date = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'), 
                         state = '', )
@@ -53,6 +54,7 @@
                 group = '',
                 id = 56,
                 name = '',
+                partition_key = '',
                 source_dag_id = '',
                 source_map_index = 56,
                 source_run_id = '',
@@ -70,6 +72,7 @@
                         data_interval_start = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'), 
                         end_date = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'), 
                         logical_date = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'), 
+                        partition_key = '', 
                         run_id = '', 
                         start_date = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'), 
                         state = '', )
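The asset-event fixtures above gain a `partition_key` on both the event and its DAG run references. A small, hedged sketch of how a client might bucket returned events by partition — the event dicts below are illustrative, only the `partition_key` field name comes from the fixtures:

```python
from collections import defaultdict

# Illustrative asset-event records carrying the new partition_key field.
events = [
    {"id": 1, "partition_key": "2025-01-01"},
    {"id": 2, "partition_key": "2025-01-02"},
    {"id": 3, "partition_key": "2025-01-01"},
]

# Group event ids by their partition key.
by_partition = defaultdict(list)
for event in events:
    by_partition[event["partition_key"]].append(event["id"])

print(dict(by_partition))
# {'2025-01-01': [1, 3], '2025-01-02': [2]}
```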
diff --git a/test/test_asset_response.py b/test/test_asset_response.py
index 96936c5..62309b2 100644
--- a/test/test_asset_response.py
+++ b/test/test_asset_response.py
@@ -72,7 +72,13 @@
                         updated_at = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'), )
                     ],
                 updated_at = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'),
-                uri = ''
+                uri = '',
+                watchers = [
+                    airflow_client.client.models.asset_watcher_response.AssetWatcherResponse(
+                        created_date = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'), 
+                        name = '', 
+                        trigger_id = 56, )
+                    ]
             )
         else:
             return AssetResponse(
@@ -108,6 +114,12 @@
                     ],
                 updated_at = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'),
                 uri = '',
+                watchers = [
+                    airflow_client.client.models.asset_watcher_response.AssetWatcherResponse(
+                        created_date = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'), 
+                        name = '', 
+                        trigger_id = 56, )
+                    ],
         )
         """
 
diff --git a/test/test_asset_watcher_response.py b/test/test_asset_watcher_response.py
new file mode 100644
index 0000000..bf5551f
--- /dev/null
+++ b/test/test_asset_watcher_response.py
@@ -0,0 +1,56 @@
+# coding: utf-8
+
+"""
+    Airflow API
+
+    Airflow API. All endpoints located under ``/api/v2`` can be used safely, are stable and backward compatible. Endpoints located under ``/ui`` are dedicated to the UI and are subject to breaking change depending on the need of the frontend. Users should not rely on those but use the public ones instead.
+
+    The version of the OpenAPI document: 2
+    Generated by OpenAPI Generator (https://openapi-generator.tech)
+
+    Do not edit the class manually.
+"""  # noqa: E501
+
+
+import unittest
+
+from airflow_client.client.models.asset_watcher_response import AssetWatcherResponse
+
+class TestAssetWatcherResponse(unittest.TestCase):
+    """AssetWatcherResponse unit test stubs"""
+
+    def setUp(self):
+        pass
+
+    def tearDown(self):
+        pass
+
+    def make_instance(self, include_optional) -> AssetWatcherResponse:
+        """Test AssetWatcherResponse
+            include_optional is a boolean, when False only required
+            params are included, when True both required and
+            optional params are included """
+        # uncomment below to create an instance of `AssetWatcherResponse`
+        """
+        model = AssetWatcherResponse()
+        if include_optional:
+            return AssetWatcherResponse(
+                created_date = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'),
+                name = '',
+                trigger_id = 56
+            )
+        else:
+            return AssetWatcherResponse(
+                created_date = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'),
+                name = '',
+                trigger_id = 56,
+        )
+        """
+
+    def testAssetWatcherResponse(self):
+        """Test AssetWatcherResponse"""
+        # inst_req_only = self.make_instance(include_optional=False)
+        # inst_req_and_optional = self.make_instance(include_optional=True)
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/test/test_backfill_response.py b/test/test_backfill_response.py
index 670c8c3..8e079a0 100644
--- a/test/test_backfill_response.py
+++ b/test/test_backfill_response.py
@@ -53,7 +53,6 @@
                 created_at = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'),
                 dag_display_name = '',
                 dag_id = '',
-                dag_run_conf = { },
                 from_date = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'),
                 id = 0.0,
                 is_paused = True,
diff --git a/test/test_bulk_create_action_bulk_task_instance_body.py b/test/test_bulk_create_action_bulk_task_instance_body.py
index 80b6bd7..808dbcf 100644
--- a/test/test_bulk_create_action_bulk_task_instance_body.py
+++ b/test/test_bulk_create_action_bulk_task_instance_body.py
@@ -39,6 +39,8 @@
                 action_on_existence = 'fail',
                 entities = [
                     airflow_client.client.models.bulk_task_instance_body.BulkTaskInstanceBody(
+                        dag_id = '', 
+                        dag_run_id = '', 
                         include_downstream = True, 
                         include_future = True, 
                         include_past = True, 
@@ -54,6 +56,8 @@
                 action = 'create',
                 entities = [
                     airflow_client.client.models.bulk_task_instance_body.BulkTaskInstanceBody(
+                        dag_id = '', 
+                        dag_run_id = '', 
                         include_downstream = True, 
                         include_future = True, 
                         include_past = True, 
diff --git a/test/test_bulk_create_action_connection_body.py b/test/test_bulk_create_action_connection_body.py
index 508b48a..71d9493 100644
--- a/test/test_bulk_create_action_connection_body.py
+++ b/test/test_bulk_create_action_connection_body.py
@@ -47,7 +47,8 @@
                         login = '', 
                         password = '', 
                         port = 56, 
-                        schema = '', )
+                        schema = '', 
+                        team_name = '', )
                     ]
             )
         else:
@@ -63,7 +64,8 @@
                         login = '', 
                         password = '', 
                         port = 56, 
-                        schema = '', )
+                        schema = '', 
+                        team_name = '', )
                     ],
         )
         """
diff --git a/test/test_bulk_create_action_pool_body.py b/test/test_bulk_create_action_pool_body.py
index c4df10b..feb98b3 100644
--- a/test/test_bulk_create_action_pool_body.py
+++ b/test/test_bulk_create_action_pool_body.py
@@ -42,7 +42,8 @@
                         description = '', 
                         include_deferred = True, 
                         name = '', 
-                        slots = -1.0, )
+                        slots = -1.0, 
+                        team_name = '', )
                     ]
             )
         else:
@@ -53,7 +54,8 @@
                         description = '', 
                         include_deferred = True, 
                         name = '', 
-                        slots = -1.0, )
+                        slots = -1.0, 
+                        team_name = '', )
                     ],
         )
         """
diff --git a/test/test_bulk_create_action_variable_body.py b/test/test_bulk_create_action_variable_body.py
index 3241ab2..55249d7 100644
--- a/test/test_bulk_create_action_variable_body.py
+++ b/test/test_bulk_create_action_variable_body.py
@@ -41,6 +41,7 @@
                     airflow_client.client.models.variable_body.VariableBody(
                         description = '', 
                         key = '', 
+                        team_name = '', 
                         value = null, )
                     ]
             )
@@ -51,6 +52,7 @@
                     airflow_client.client.models.variable_body.VariableBody(
                         description = '', 
                         key = '', 
+                        team_name = '', 
                         value = null, )
                     ],
         )
diff --git a/test/test_bulk_task_instance_body.py b/test/test_bulk_task_instance_body.py
index e0f7a2c..e46752c 100644
--- a/test/test_bulk_task_instance_body.py
+++ b/test/test_bulk_task_instance_body.py
@@ -35,6 +35,8 @@
         model = BulkTaskInstanceBody()
         if include_optional:
             return BulkTaskInstanceBody(
+                dag_id = '',
+                dag_run_id = '',
                 include_downstream = True,
                 include_future = True,
                 include_past = True,
diff --git a/test/test_bulk_update_action_bulk_task_instance_body.py b/test/test_bulk_update_action_bulk_task_instance_body.py
index 3e6c666..d60ff91 100644
--- a/test/test_bulk_update_action_bulk_task_instance_body.py
+++ b/test/test_bulk_update_action_bulk_task_instance_body.py
@@ -39,6 +39,8 @@
                 action_on_non_existence = 'fail',
                 entities = [
                     airflow_client.client.models.bulk_task_instance_body.BulkTaskInstanceBody(
+                        dag_id = '', 
+                        dag_run_id = '', 
                         include_downstream = True, 
                         include_future = True, 
                         include_past = True, 
@@ -47,6 +49,9 @@
                         new_state = null, 
                         note = '', 
                         task_id = '', )
+                    ],
+                update_mask = [
+                    ''
                     ]
             )
         else:
@@ -54,6 +59,8 @@
                 action = 'update',
                 entities = [
                     airflow_client.client.models.bulk_task_instance_body.BulkTaskInstanceBody(
+                        dag_id = '', 
+                        dag_run_id = '', 
                         include_downstream = True, 
                         include_future = True, 
                         include_past = True, 
diff --git a/test/test_bulk_update_action_connection_body.py b/test/test_bulk_update_action_connection_body.py
index 21346a9..4682268 100644
--- a/test/test_bulk_update_action_connection_body.py
+++ b/test/test_bulk_update_action_connection_body.py
@@ -47,7 +47,11 @@
                         login = '', 
                         password = '', 
                         port = 56, 
-                        schema = '', )
+                        schema = '', 
+                        team_name = '', )
+                    ],
+                update_mask = [
+                    ''
                     ]
             )
         else:
@@ -63,7 +67,8 @@
                         login = '', 
                         password = '', 
                         port = 56, 
-                        schema = '', )
+                        schema = '', 
+                        team_name = '', )
                     ],
         )
         """
diff --git a/test/test_bulk_update_action_pool_body.py b/test/test_bulk_update_action_pool_body.py
index 9b17dc3..791aafc 100644
--- a/test/test_bulk_update_action_pool_body.py
+++ b/test/test_bulk_update_action_pool_body.py
@@ -42,7 +42,11 @@
                         description = '', 
                         include_deferred = True, 
                         name = '', 
-                        slots = -1.0, )
+                        slots = -1.0, 
+                        team_name = '', )
+                    ],
+                update_mask = [
+                    ''
                     ]
             )
         else:
@@ -53,7 +57,8 @@
                         description = '', 
                         include_deferred = True, 
                         name = '', 
-                        slots = -1.0, )
+                        slots = -1.0, 
+                        team_name = '', )
                     ],
         )
         """
diff --git a/test/test_bulk_update_action_variable_body.py b/test/test_bulk_update_action_variable_body.py
index b0a8245..daa3630 100644
--- a/test/test_bulk_update_action_variable_body.py
+++ b/test/test_bulk_update_action_variable_body.py
@@ -41,7 +41,11 @@
                     airflow_client.client.models.variable_body.VariableBody(
                         description = '', 
                         key = '', 
+                        team_name = '', 
                         value = null, )
+                    ],
+                update_mask = [
+                    ''
                     ]
             )
         else:
@@ -51,6 +55,7 @@
                     airflow_client.client.models.variable_body.VariableBody(
                         description = '', 
                         key = '', 
+                        team_name = '', 
                         value = null, )
                     ],
         )
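The bulk-update fixtures above now carry an `update_mask` field. An update mask conventionally restricts a partial update to the listed fields, leaving everything else untouched; a minimal sketch of that semantics (a hypothetical helper, not part of the generated client):

```python
# Hypothetical illustration of update-mask semantics: only the fields
# named in the mask are copied from the patch onto the existing entity.

def apply_update_mask(entity: dict, patch: dict, update_mask: list[str]) -> dict:
    updated = dict(entity)
    for field in update_mask:
        if field in patch:
            updated[field] = patch[field]
    return updated

variable = {"key": "env", "value": "prod", "description": "old"}
patch = {"value": "staging", "description": "new"}

# Only "value" is masked in, so "description" keeps its old value.
print(apply_update_mask(variable, patch, ["value"]))
# {'key': 'env', 'value': 'staging', 'description': 'old'}
```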
diff --git a/test/test_clear_task_instances_body.py b/test/test_clear_task_instances_body.py
index ea5ba36..05698a2 100644
--- a/test/test_clear_task_instances_body.py
+++ b/test/test_clear_task_instances_body.py
@@ -44,6 +44,7 @@
                 include_upstream = True,
                 only_failed = True,
                 only_running = True,
+                prevent_running_task = True,
                 reset_dag_runs = True,
                 run_on_latest_version = True,
                 start_date = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'),
diff --git a/test/test_connection_body.py b/test/test_connection_body.py
index 7a9742c..92082cf 100644
--- a/test/test_connection_body.py
+++ b/test/test_connection_body.py
@@ -43,7 +43,8 @@
                 login = '',
                 password = '',
                 port = 56,
-                var_schema = ''
+                var_schema = '',
+                team_name = ''
             )
         else:
             return ConnectionBody(
diff --git a/test/test_connection_collection_response.py b/test/test_connection_collection_response.py
index ecbed52..dd1e4a0 100644
--- a/test/test_connection_collection_response.py
+++ b/test/test_connection_collection_response.py
@@ -45,7 +45,8 @@
                         login = '', 
                         password = '', 
                         port = 56, 
-                        schema = '', )
+                        schema = '', 
+                        team_name = '', )
                     ],
                 total_entries = 56
             )
@@ -61,7 +62,8 @@
                         login = '', 
                         password = '', 
                         port = 56, 
-                        schema = '', )
+                        schema = '', 
+                        team_name = '', )
                     ],
                 total_entries = 56,
         )
diff --git a/test/test_connection_response.py b/test/test_connection_response.py
index e382c53..f7e1adc 100644
--- a/test/test_connection_response.py
+++ b/test/test_connection_response.py
@@ -43,7 +43,8 @@
                 login = '',
                 password = '',
                 port = 56,
-                var_schema = ''
+                var_schema = '',
+                team_name = ''
             )
         else:
             return ConnectionResponse(
diff --git a/test/test_create_asset_events_body.py b/test/test_create_asset_events_body.py
index 345ef63..786ef90 100644
--- a/test/test_create_asset_events_body.py
+++ b/test/test_create_asset_events_body.py
@@ -36,7 +36,8 @@
         if include_optional:
             return CreateAssetEventsBody(
                 asset_id = 56,
-                extra = { }
+                extra = { },
+                partition_key = ''
             )
         else:
             return CreateAssetEventsBody(
diff --git a/test/test_dag_collection_response.py b/test/test_dag_collection_response.py
index f4a24b6..08d3227 100644
--- a/test/test_dag_collection_response.py
+++ b/test/test_dag_collection_response.py
@@ -37,6 +37,9 @@
             return DAGCollectionResponse(
                 dags = [
                     airflow_client.client.models.dag_response.DAGResponse(
+                        allowed_run_types = [
+                            'backfill'
+                            ], 
                         bundle_name = '', 
                         bundle_version = '', 
                         dag_display_name = '', 
@@ -69,6 +72,7 @@
                                 name = '', )
                             ], 
                         timetable_description = '', 
+                        timetable_partitioned = True, 
                         timetable_summary = '', )
                     ],
                 total_entries = 56
@@ -77,6 +81,9 @@
             return DAGCollectionResponse(
                 dags = [
                     airflow_client.client.models.dag_response.DAGResponse(
+                        allowed_run_types = [
+                            'backfill'
+                            ], 
                         bundle_name = '', 
                         bundle_version = '', 
                         dag_display_name = '', 
@@ -109,6 +116,7 @@
                                 name = '', )
                             ], 
                         timetable_description = '', 
+                        timetable_partitioned = True, 
                         timetable_summary = '', )
                     ],
                 total_entries = 56,
diff --git a/test/test_dag_details_response.py b/test/test_dag_details_response.py
index 5454bcb..dfa89b4 100644
--- a/test/test_dag_details_response.py
+++ b/test/test_dag_details_response.py
@@ -35,6 +35,10 @@
         model = DAGDetailsResponse()
         if include_optional:
             return DAGDetailsResponse(
+                active_runs_count = 56,
+                allowed_run_types = [
+                    'backfill'
+                    ],
                 asset_expression = { },
                 bundle_name = '',
                 bundle_version = '',
@@ -95,6 +99,7 @@
                     ''
                     ],
                 timetable_description = '',
+                timetable_partitioned = True,
                 timetable_summary = '',
                 timezone = ''
             )
@@ -122,6 +127,7 @@
                         dag_id = '', 
                         name = '', )
                     ],
+                timetable_partitioned = True,
         )
         """
 
diff --git a/test/test_dag_response.py b/test/test_dag_response.py
index 7896db6..99d3110 100644
--- a/test/test_dag_response.py
+++ b/test/test_dag_response.py
@@ -35,6 +35,9 @@
         model = DAGResponse()
         if include_optional:
             return DAGResponse(
+                allowed_run_types = [
+                    'backfill'
+                    ],
                 bundle_name = '',
                 bundle_version = '',
                 dag_display_name = '',
@@ -67,6 +70,7 @@
                         name = '', )
                     ],
                 timetable_description = '',
+                timetable_partitioned = True,
                 timetable_summary = ''
             )
         else:
@@ -90,6 +94,7 @@
                         dag_id = '', 
                         name = '', )
                     ],
+                timetable_partitioned = True,
         )
         """
 
diff --git a/test/test_dag_run_asset_reference.py b/test/test_dag_run_asset_reference.py
index a7a5e2b..675411c 100644
--- a/test/test_dag_run_asset_reference.py
+++ b/test/test_dag_run_asset_reference.py
@@ -40,6 +40,7 @@
                 data_interval_start = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'),
                 end_date = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'),
                 logical_date = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'),
+                partition_key = '',
                 run_id = '',
                 start_date = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'),
                 state = ''
diff --git a/test/test_dag_run_collection_response.py b/test/test_dag_run_collection_response.py
index e151658..4779258 100644
--- a/test/test_dag_run_collection_response.py
+++ b/test/test_dag_run_collection_response.py
@@ -60,6 +60,7 @@
                         last_scheduling_decision = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'), 
                         logical_date = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'), 
                         note = '', 
+                        partition_key = '', 
                         queued_at = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'), 
                         run_after = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'), 
                         run_type = 'backfill', 
@@ -97,6 +98,7 @@
                         last_scheduling_decision = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'), 
                         logical_date = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'), 
                         note = '', 
+                        partition_key = '', 
                         queued_at = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'), 
                         run_after = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'), 
                         run_type = 'backfill', 
diff --git a/test/test_dag_run_response.py b/test/test_dag_run_response.py
index de722e7..e1245f3 100644
--- a/test/test_dag_run_response.py
+++ b/test/test_dag_run_response.py
@@ -58,6 +58,7 @@
                 last_scheduling_decision = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'),
                 logical_date = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'),
                 note = '',
+                partition_key = '',
                 queued_at = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'),
                 run_after = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'),
                 run_type = 'backfill',
diff --git a/test/test_dag_runs_batch_body.py b/test/test_dag_runs_batch_body.py
index 101314e..af557b1 100644
--- a/test/test_dag_runs_batch_body.py
+++ b/test/test_dag_runs_batch_body.py
@@ -35,9 +35,14 @@
         model = DAGRunsBatchBody()
         if include_optional:
             return DAGRunsBatchBody(
+                conf_contains = '',
                 dag_ids = [
                     ''
                     ],
+                duration_gt = 1.337,
+                duration_gte = 1.337,
+                duration_lt = 1.337,
+                duration_lte = 1.337,
                 end_date_gt = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'),
                 end_date_gte = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'),
                 end_date_lt = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'),
diff --git a/test/test_dry_run_backfill_collection_response.py b/test/test_dry_run_backfill_collection_response.py
index 74e1857..f3fb28f 100644
--- a/test/test_dry_run_backfill_collection_response.py
+++ b/test/test_dry_run_backfill_collection_response.py
@@ -37,7 +37,9 @@
             return DryRunBackfillCollectionResponse(
                 backfills = [
                     airflow_client.client.models.dry_run_backfill_response.DryRunBackfillResponse(
-                        logical_date = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'), )
+                        logical_date = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'), 
+                        partition_date = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'), 
+                        partition_key = '', )
                     ],
                 total_entries = 56
             )
@@ -45,7 +47,9 @@
             return DryRunBackfillCollectionResponse(
                 backfills = [
                     airflow_client.client.models.dry_run_backfill_response.DryRunBackfillResponse(
-                        logical_date = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'), )
+                        logical_date = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'), 
+                        partition_date = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'), 
+                        partition_key = '', )
                     ],
                 total_entries = 56,
         )
diff --git a/test/test_dry_run_backfill_response.py b/test/test_dry_run_backfill_response.py
index d806610..057d164 100644
--- a/test/test_dry_run_backfill_response.py
+++ b/test/test_dry_run_backfill_response.py
@@ -35,11 +35,12 @@
         model = DryRunBackfillResponse()
         if include_optional:
             return DryRunBackfillResponse(
-                logical_date = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f')
+                logical_date = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'),
+                partition_date = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'),
+                partition_key = ''
             )
         else:
             return DryRunBackfillResponse(
-                logical_date = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'),
         )
         """
 
diff --git a/test/test_entities_inner.py b/test/test_entities_inner.py
index af1201f..6a2613f 100644
--- a/test/test_entities_inner.py
+++ b/test/test_entities_inner.py
@@ -35,6 +35,8 @@
         model = EntitiesInner()
         if include_optional:
             return EntitiesInner(
+                dag_id = '',
+                dag_run_id = '',
                 include_downstream = True,
                 include_future = True,
                 include_past = True,
diff --git a/test/test_entities_inner1.py b/test/test_entities_inner1.py
new file mode 100644
index 0000000..682e0c8
--- /dev/null
+++ b/test/test_entities_inner1.py
@@ -0,0 +1,62 @@
+# coding: utf-8
+
+"""
+    Airflow API
+
+    Airflow API. All endpoints located under ``/api/v2`` can be used safely, are stable and backward compatible. Endpoints located under ``/ui`` are dedicated to the UI and are subject to breaking change depending on the need of the frontend. Users should not rely on those but use the public ones instead.
+
+    The version of the OpenAPI document: 2
+    Generated by OpenAPI Generator (https://openapi-generator.tech)
+
+    Do not edit the class manually.
+"""  # noqa: E501
+
+
+import unittest
+
+from airflow_client.client.models.entities_inner1 import EntitiesInner1
+
+class TestEntitiesInner1(unittest.TestCase):
+    """EntitiesInner1 unit test stubs"""
+
+    def setUp(self):
+        pass
+
+    def tearDown(self):
+        pass
+
+    def make_instance(self, include_optional) -> EntitiesInner1:
+        """Test EntitiesInner1
+            include_optional is a boolean, when False only required
+            params are included, when True both required and
+            optional params are included """
+        # uncomment below to create an instance of `EntitiesInner1`
+        """
+        model = EntitiesInner1()
+        if include_optional:
+            return EntitiesInner1(
+                conn_type = '',
+                connection_id = '2',
+                description = '',
+                extra = '',
+                host = '',
+                login = '',
+                password = '',
+                port = 56,
+                var_schema = '',
+                team_name = ''
+            )
+        else:
+            return EntitiesInner1(
+                conn_type = '',
+                connection_id = '2',
+        )
+        """
+
+    def testEntitiesInner1(self):
+        """Test EntitiesInner1"""
+        # inst_req_only = self.make_instance(include_optional=False)
+        # inst_req_and_optional = self.make_instance(include_optional=True)
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/test/test_entities_inner2.py b/test/test_entities_inner2.py
new file mode 100644
index 0000000..702bea2
--- /dev/null
+++ b/test/test_entities_inner2.py
@@ -0,0 +1,57 @@
+# coding: utf-8
+
+"""
+    Airflow API
+
+    Airflow API. All endpoints located under ``/api/v2`` can be used safely, are stable and backward compatible. Endpoints located under ``/ui`` are dedicated to the UI and are subject to breaking change depending on the need of the frontend. Users should not rely on those but use the public ones instead.
+
+    The version of the OpenAPI document: 2
+    Generated by OpenAPI Generator (https://openapi-generator.tech)
+
+    Do not edit the class manually.
+"""  # noqa: E501
+
+
+import unittest
+
+from airflow_client.client.models.entities_inner2 import EntitiesInner2
+
+class TestEntitiesInner2(unittest.TestCase):
+    """EntitiesInner2 unit test stubs"""
+
+    def setUp(self):
+        pass
+
+    def tearDown(self):
+        pass
+
+    def make_instance(self, include_optional) -> EntitiesInner2:
+        """Test EntitiesInner2
+            include_optional is a boolean, when False only required
+            params are included, when True both required and
+            optional params are included """
+        # uncomment below to create an instance of `EntitiesInner2`
+        """
+        model = EntitiesInner2()
+        if include_optional:
+            return EntitiesInner2(
+                description = '',
+                include_deferred = True,
+                name = '',
+                slots = -1.0,
+                team_name = ''
+            )
+        else:
+            return EntitiesInner2(
+                name = '',
+                slots = -1.0,
+        )
+        """
+
+    def testEntitiesInner2(self):
+        """Test EntitiesInner2"""
+        # inst_req_only = self.make_instance(include_optional=False)
+        # inst_req_and_optional = self.make_instance(include_optional=True)
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/test/test_entities_inner3.py b/test/test_entities_inner3.py
new file mode 100644
index 0000000..000595b
--- /dev/null
+++ b/test/test_entities_inner3.py
@@ -0,0 +1,56 @@
+# coding: utf-8
+
+"""
+    Airflow API
+
+    Airflow API. All endpoints located under ``/api/v2`` can be used safely, are stable and backward compatible. Endpoints located under ``/ui`` are dedicated to the UI and are subject to breaking change depending on the need of the frontend. Users should not rely on those but use the public ones instead.
+
+    The version of the OpenAPI document: 2
+    Generated by OpenAPI Generator (https://openapi-generator.tech)
+
+    Do not edit the class manually.
+"""  # noqa: E501
+
+
+import unittest
+
+from airflow_client.client.models.entities_inner3 import EntitiesInner3
+
+class TestEntitiesInner3(unittest.TestCase):
+    """EntitiesInner3 unit test stubs"""
+
+    def setUp(self):
+        pass
+
+    def tearDown(self):
+        pass
+
+    def make_instance(self, include_optional) -> EntitiesInner3:
+        """Test EntitiesInner3
+            include_optional is a boolean, when False only required
+            params are included, when True both required and
+            optional params are included """
+        # uncomment below to create an instance of `EntitiesInner3`
+        """
+        model = EntitiesInner3()
+        if include_optional:
+            return EntitiesInner3(
+                description = '',
+                key = '',
+                team_name = '',
+                value = None
+            )
+        else:
+            return EntitiesInner3(
+                key = '',
+                value = None,
+        )
+        """
+
+    def testEntitiesInner3(self):
+        """Test EntitiesInner3"""
+        # inst_req_only = self.make_instance(include_optional=False)
+        # inst_req_and_optional = self.make_instance(include_optional=True)
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/test/test_hitl_detail_history.py b/test/test_hitl_detail_history.py
new file mode 100644
index 0000000..77db07b
--- /dev/null
+++ b/test/test_hitl_detail_history.py
@@ -0,0 +1,134 @@
+# coding: utf-8
+
+"""
+    Airflow API
+
+    Airflow API. All endpoints located under ``/api/v2`` can be used safely, are stable and backward compatible. Endpoints located under ``/ui`` are dedicated to the UI and are subject to breaking change depending on the need of the frontend. Users should not rely on those but use the public ones instead.
+
+    The version of the OpenAPI document: 2
+    Generated by OpenAPI Generator (https://openapi-generator.tech)
+
+    Do not edit the class manually.
+"""  # noqa: E501
+
+
+import unittest
+
+from airflow_client.client.models.hitl_detail_history import HITLDetailHistory
+
+class TestHITLDetailHistory(unittest.TestCase):
+    """HITLDetailHistory unit test stubs"""
+
+    def setUp(self):
+        pass
+
+    def tearDown(self):
+        pass
+
+    def make_instance(self, include_optional) -> HITLDetailHistory:
+        """Test HITLDetailHistory
+            include_optional is a boolean, when False only required
+            params are included, when True both required and
+            optional params are included """
+        # uncomment below to create an instance of `HITLDetailHistory`
+        """
+        model = HITLDetailHistory()
+        if include_optional:
+            return HITLDetailHistory(
+                assigned_users = [
+                    airflow_client.client.models.hitl_user.HITLUser(
+                        id = '', 
+                        name = '', )
+                    ],
+                body = '',
+                chosen_options = [
+                    ''
+                    ],
+                created_at = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'),
+                defaults = [
+                    ''
+                    ],
+                multiple = True,
+                options = [
+                    ''
+                    ],
+                params = { },
+                params_input = { },
+                responded_at = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'),
+                responded_by_user = airflow_client.client.models.hitl_user.HITLUser(
+                    id = '', 
+                    name = '', ),
+                response_received = True,
+                subject = '',
+                task_instance = airflow_client.client.models.task_instance_history_response.TaskInstanceHistoryResponse(
+                    dag_display_name = '', 
+                    dag_id = '', 
+                    dag_run_id = '', 
+                    dag_version = null, 
+                    duration = 1.337, 
+                    end_date = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'), 
+                    executor = '', 
+                    executor_config = '', 
+                    hostname = '', 
+                    map_index = 56, 
+                    max_tries = 56, 
+                    operator = '', 
+                    operator_name = '', 
+                    pid = 56, 
+                    pool = '', 
+                    pool_slots = 56, 
+                    priority_weight = 56, 
+                    queue = '', 
+                    queued_when = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'), 
+                    scheduled_when = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'), 
+                    start_date = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'), 
+                    state = null, 
+                    task_display_name = '', 
+                    task_id = '', 
+                    try_number = 56, 
+                    unixname = '', )
+            )
+        else:
+            return HITLDetailHistory(
+                created_at = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'),
+                options = [
+                    ''
+                    ],
+                subject = '',
+                task_instance = airflow_client.client.models.task_instance_history_response.TaskInstanceHistoryResponse(
+                    dag_display_name = '', 
+                    dag_id = '', 
+                    dag_run_id = '', 
+                    dag_version = null, 
+                    duration = 1.337, 
+                    end_date = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'), 
+                    executor = '', 
+                    executor_config = '', 
+                    hostname = '', 
+                    map_index = 56, 
+                    max_tries = 56, 
+                    operator = '', 
+                    operator_name = '', 
+                    pid = 56, 
+                    pool = '', 
+                    pool_slots = 56, 
+                    priority_weight = 56, 
+                    queue = '', 
+                    queued_when = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'), 
+                    scheduled_when = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'), 
+                    start_date = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'), 
+                    state = null, 
+                    task_display_name = '', 
+                    task_id = '', 
+                    try_number = 56, 
+                    unixname = '', ),
+        )
+        """
+
+    def testHITLDetailHistory(self):
+        """Test HITLDetailHistory"""
+        # inst_req_only = self.make_instance(include_optional=False)
+        # inst_req_and_optional = self.make_instance(include_optional=True)
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/test/test_http_validation_error.py b/test/test_http_validation_error.py
index 6847046..3292622 100644
--- a/test/test_http_validation_error.py
+++ b/test/test_http_validation_error.py
@@ -37,6 +37,8 @@
             return HTTPValidationError(
                 detail = [
                     airflow_client.client.models.validation_error.ValidationError(
+                        ctx = airflow_client.client.models.context.Context(), 
+                        input = null, 
                         loc = [
                             null
                             ], 
diff --git a/test/test_pool_body.py b/test/test_pool_body.py
index f343c44..d7a9c8e 100644
--- a/test/test_pool_body.py
+++ b/test/test_pool_body.py
@@ -38,7 +38,8 @@
                 description = '',
                 include_deferred = True,
                 name = '',
-                slots = -1.0
+                slots = -1.0,
+                team_name = ''
             )
         else:
             return PoolBody(
diff --git a/test/test_pool_collection_response.py b/test/test_pool_collection_response.py
index 7195b42..8bd403d 100644
--- a/test/test_pool_collection_response.py
+++ b/test/test_pool_collection_response.py
@@ -46,7 +46,8 @@
                         queued_slots = 56, 
                         running_slots = 56, 
                         scheduled_slots = 56, 
-                        slots = -1.0, )
+                        slots = -1.0, 
+                        team_name = '', )
                     ],
                 total_entries = 56
             )
@@ -63,7 +64,8 @@
                         queued_slots = 56, 
                         running_slots = 56, 
                         scheduled_slots = 56, 
-                        slots = -1.0, )
+                        slots = -1.0, 
+                        team_name = '', )
                     ],
                 total_entries = 56,
         )
diff --git a/test/test_pool_patch_body.py b/test/test_pool_patch_body.py
index 6cf664c..4c72a53 100644
--- a/test/test_pool_patch_body.py
+++ b/test/test_pool_patch_body.py
@@ -38,7 +38,8 @@
                 description = '',
                 include_deferred = True,
                 pool = '',
-                slots = -1.0
+                slots = -1.0,
+                team_name = ''
             )
         else:
             return PoolPatchBody(
diff --git a/test/test_pool_response.py b/test/test_pool_response.py
index 8cc47f2..373ee8c 100644
--- a/test/test_pool_response.py
+++ b/test/test_pool_response.py
@@ -44,7 +44,8 @@
                 queued_slots = 56,
                 running_slots = 56,
                 scheduled_slots = 56,
-                slots = -1.0
+                slots = -1.0,
+                team_name = ''
             )
         else:
             return PoolResponse(
diff --git a/test/test_response_clear_dag_run.py b/test/test_response_clear_dag_run.py
index 1add495..92a4362 100644
--- a/test/test_response_clear_dag_run.py
+++ b/test/test_response_clear_dag_run.py
@@ -96,6 +96,7 @@
                 last_scheduling_decision = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'),
                 logical_date = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'),
                 note = '',
+                partition_key = '',
                 queued_at = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'),
                 run_after = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'),
                 run_type = 'backfill',
diff --git a/test/test_task_collection_response.py b/test/test_task_collection_response.py
index 1d9bb4a..bac2a53 100644
--- a/test/test_task_collection_response.py
+++ b/test/test_task_collection_response.py
@@ -58,7 +58,7 @@
                         queue = '', 
                         retries = 1.337, 
                         retry_delay = null, 
-                        retry_exponential_backoff = True, 
+                        retry_exponential_backoff = 1.337, 
                         start_date = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'), 
                         task_display_name = '', 
                         task_id = '', 
@@ -98,7 +98,7 @@
                         queue = '', 
                         retries = 1.337, 
                         retry_delay = null, 
-                        retry_exponential_backoff = True, 
+                        retry_exponential_backoff = 1.337, 
                         start_date = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'), 
                         task_display_name = '', 
                         task_id = '', 
diff --git a/test/test_task_instance_api.py b/test/test_task_instance_api.py
index dd701d9..640f0c0 100644
--- a/test/test_task_instance_api.py
+++ b/test/test_task_instance_api.py
@@ -61,6 +61,13 @@
         """
         pass
 
+    def test_get_hitl_detail_try_detail(self) -> None:
+        """Test case for get_hitl_detail_try_detail
+
+        Get Hitl Detail Try Detail
+        """
+        pass
+
     def test_get_hitl_details(self) -> None:
         """Test case for get_hitl_details
 
diff --git a/test/test_task_instance_response.py b/test/test_task_instance_response.py
index c27b8ee..8e73b3c 100644
--- a/test/test_task_instance_response.py
+++ b/test/test_task_instance_response.py
@@ -78,6 +78,7 @@
                     created_date = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'), 
                     id = 56, 
                     kwargs = '', 
+                    queue = '', 
                     triggerer_id = 56, ),
                 triggerer_job = airflow_client.client.models.job_response.JobResponse(
                     dag_display_name = '', 
diff --git a/test/test_task_response.py b/test/test_task_response.py
index b38fb93..272cc85 100644
--- a/test/test_task_response.py
+++ b/test/test_task_response.py
@@ -64,7 +64,7 @@
                     days = 56, 
                     microseconds = 56, 
                     seconds = 56, ),
-                retry_exponential_backoff = True,
+                retry_exponential_backoff = 1.337,
                 start_date = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'),
                 task_display_name = '',
                 task_id = '',
@@ -83,7 +83,7 @@
                 extra_links = [
                     ''
                     ],
-                retry_exponential_backoff = True,
+                retry_exponential_backoff = 1.337,
                 wait_for_downstream = True,
         )
         """
diff --git a/test/test_trigger_dag_run_post_body.py b/test/test_trigger_dag_run_post_body.py
index af5ba45..958cd56 100644
--- a/test/test_trigger_dag_run_post_body.py
+++ b/test/test_trigger_dag_run_post_body.py
@@ -41,6 +41,7 @@
                 data_interval_start = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'),
                 logical_date = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'),
                 note = '',
+                partition_key = '',
                 run_after = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f')
             )
         else:
diff --git a/test/test_trigger_response.py b/test/test_trigger_response.py
index ad220d6..b2a833c 100644
--- a/test/test_trigger_response.py
+++ b/test/test_trigger_response.py
@@ -39,6 +39,7 @@
                 created_date = datetime.datetime.strptime('2013-10-20 19:20:30.00', '%Y-%m-%d %H:%M:%S.%f'),
                 id = 56,
                 kwargs = '',
+                queue = '',
                 triggerer_id = 56
             )
         else:
diff --git a/test/test_validation_error.py b/test/test_validation_error.py
index 37aa02b..85e0d14 100644
--- a/test/test_validation_error.py
+++ b/test/test_validation_error.py
@@ -35,6 +35,8 @@
         model = ValidationError()
         if include_optional:
             return ValidationError(
+                ctx = airflow_client.client.models.context.Context(),
+                input = None,
                 loc = [
                     null
                     ],
diff --git a/test/test_variable_body.py b/test/test_variable_body.py
index a526ec4..a23629e 100644
--- a/test/test_variable_body.py
+++ b/test/test_variable_body.py
@@ -37,6 +37,7 @@
             return VariableBody(
                 description = '',
                 key = '',
+                team_name = '',
                 value = None
             )
         else:
diff --git a/test/test_variable_collection_response.py b/test/test_variable_collection_response.py
index e070a6a..576cecf 100644
--- a/test/test_variable_collection_response.py
+++ b/test/test_variable_collection_response.py
@@ -41,6 +41,7 @@
                         description = '', 
                         is_encrypted = True, 
                         key = '', 
+                        team_name = '', 
                         value = '', )
                     ]
             )
@@ -52,6 +53,7 @@
                         description = '', 
                         is_encrypted = True, 
                         key = '', 
+                        team_name = '', 
                         value = '', )
                     ],
         )
diff --git a/test/test_variable_response.py b/test/test_variable_response.py
index 28815d9..c04c152 100644
--- a/test/test_variable_response.py
+++ b/test/test_variable_response.py
@@ -38,6 +38,7 @@
                 description = '',
                 is_encrypted = True,
                 key = '',
+                team_name = '',
                 value = ''
             )
         else:
diff --git a/test/test_x_com_api.py b/test/test_x_com_api.py
index 55813fd..ba910d4 100644
--- a/test/test_x_com_api.py
+++ b/test/test_x_com_api.py
@@ -33,6 +33,13 @@
         """
         pass
 
+    def test_delete_xcom_entry(self) -> None:
+        """Test case for delete_xcom_entry
+
+        Delete Xcom Entry
+        """
+        pass
+
     def test_get_xcom_entries(self) -> None:
         """Test case for get_xcom_entries
 
diff --git a/test_python_client.py b/test_python_client.py
index 9d77142..30cdfb2 100644
--- a/test_python_client.py
+++ b/test_python_client.py
@@ -121,6 +121,7 @@
 
         # Get current configuration. Note, this is disabled by default with most installation.
         # You need to set `expose_config = True` in Airflow configuration in order to retrieve configuration.
+        # Sensitive configuration values are always masked in the response.
         conf_api_instance = config_api.ConfigApi(api_client)
         try:
             api_response = conf_api_instance.get_config()
diff --git a/version.txt b/version.txt
index c848fb9..944880f 100644
--- a/version.txt
+++ b/version.txt
@@ -1 +1 @@
-3.1.8
+3.2.0