:mod:`airflow.contrib.operators.gcp_translate_speech_operator`
======================================================================
.. py:module:: airflow.contrib.operators.gcp_translate_speech_operator
Module Contents
---------------
.. py:class:: GcpTranslateSpeechOperator(audio, config, target_language, format_, source_language, model, project_id=None, gcp_conn_id='google_cloud_default', *args, **kwargs)
Bases: :class:`airflow.models.BaseOperator`
Recognizes speech in audio input and translates it.

Note that it uses only the first result from the recognition API response, i.e. the one
with the highest confidence. To work with the other possible results, use
:ref:`howto/operator:GcpSpeechToTextRecognizeSpeechOperator` and
:ref:`howto/operator:CloudTranslateTextOperator` separately.

.. seealso::
    For more information on how to use this operator, take a look at the guide:
    :ref:`howto/operator:GcpTranslateSpeechOperator`

See https://cloud.google.com/translate/docs/translating-text
The execute method returns the translation of the recognized transcript. This is a
dictionary describing the queried value. The dictionary typically contains the
following keys (though not all of them will be present in every case):

* ``detectedSourceLanguage``: The detected language (as an ISO 639-1 language code)
  of the text.
* ``translatedText``: The translation of the text into the target language.
* ``input``: The corresponding input value.
* ``model``: The model used to translate the text.

The dictionary is set as the XCom return value.
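For illustration, a sketch of what the XCom value might look like; the concrete values
below are hypothetical and depend on the audio, the recognition result, and the
requested model:

.. code-block:: python

    # Hypothetical XCom return value pushed by the operator; all values are illustrative.
    translation = {
        "detectedSourceLanguage": "en",  # ISO 639-1 code detected for the recognized speech
        "translatedText": "przykładowe tłumaczenie",  # translation in the target language
        "input": "example transcript",  # highest-confidence recognition result that was translated
        "model": "base",  # model used for the translation, when one was requested
    }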
:param audio: audio data to be recognized. See more:
https://googleapis.github.io/google-cloud-python/latest/speech/gapic/v1/types.html#google.cloud.speech_v1.types.RecognitionAudio
:type audio: dict or google.cloud.speech_v1.types.RecognitionAudio
:param config: information to the recognizer that specifies how to process the request. See more:
https://googleapis.github.io/google-cloud-python/latest/speech/gapic/v1/types.html#google.cloud.speech_v1.types.RecognitionConfig
:type config: dict or google.cloud.speech_v1.types.RecognitionConfig
:param target_language: The language to translate results into. This is required by the API.
    Check the list of available languages here: https://cloud.google.com/translate/docs/languages
:type target_language: str
:param format_: (Optional) One of ``text`` or ``html``, to specify
if the input text is plain text or HTML.
:type format_: str or None
:param source_language: (Optional) The language of the text to
be translated.
:type source_language: str or None
:param model: (Optional) The model used to translate the text, such
as ``'base'`` or ``'nmt'``.
:type model: str or None
:param project_id: Optional, Google Cloud Platform Project ID in which the API calls are
    made. If set to None or missing, the default project_id from the GCP connection is used.
:type project_id: str
:param gcp_conn_id: Optional, The connection ID used to connect to Google Cloud
Platform. Defaults to 'google_cloud_default'.
:type gcp_conn_id: str
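A minimal usage sketch, assuming Airflow 1.10-style contrib imports; the GCS URI,
languages, model, and project settings below are placeholders, not values defined by
this module:

.. code-block:: python

    from datetime import datetime

    from airflow import DAG
    from airflow.contrib.operators.gcp_translate_speech_operator import (
        GcpTranslateSpeechOperator,
    )

    with DAG(
        dag_id="example_gcp_translate_speech",
        start_date=datetime(2019, 1, 1),
        schedule_interval=None,
    ) as dag:
        translate_speech = GcpTranslateSpeechOperator(
            task_id="translate_speech",
            # RecognitionAudio given as a dict pointing at a (hypothetical) GCS object
            audio={"uri": "gs://your-bucket/your-audio-file.raw"},
            # RecognitionConfig given as a dict: encoding and language of the source audio
            config={"encoding": "LINEAR16", "language_code": "en-US"},
            target_language="pl",
            format_="text",
            source_language=None,
            model="base",
            project_id=None,  # fall back to the project_id of the GCP connection
            gcp_conn_id="google_cloud_default",
        )

The translation dictionary returned by ``execute`` is then available to downstream tasks
through XCom.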
.. attribute:: template_fields
:annotation: = ['target_language', 'format_', 'source_language', 'model', 'project_id', 'gcp_conn_id']
.. method:: execute(self, context)