Natural Language#

The Google Natural Language API reveals the structure and meaning of text via powerful machine learning models. You can use it to extract information about people, places, events, and much more mentioned in text documents, news articles, or blog posts. You can use it to understand sentiment about your product on social media, or to parse intent from customer conversations happening in a call center or a messaging app. You can analyze text uploaded in your request or integrate with your document storage on Google Cloud Storage.

Authentication and Configuration#

  • For an overview of authentication in google-cloud-python, see Authentication.

  • In addition to any authentication configuration, you should also set the GOOGLE_CLOUD_PROJECT environment variable for the project you’d like to interact with. If the GOOGLE_CLOUD_PROJECT environment variable is not present, the project ID from JSON file credentials is used.

    If you are using Google App Engine or Google Compute Engine, this will be detected automatically.

  • After configuring your environment, create a LanguageServiceClient.

>>> from google.cloud import language
>>> client = language.LanguageServiceClient()

or pass in credentials explicitly:

>>> from google.cloud import language
>>> client = language.LanguageServiceClient(
...     credentials=creds,
... )
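The creds object above can be built with the google-auth library; a minimal sketch, assuming a service account key file (the path here is hypothetical):

>>> from google.oauth2 import service_account
>>> creds = service_account.Credentials.from_service_account_file(
...     '/path/to/keyfile.json')
>>> client = language.LanguageServiceClient(credentials=creds)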


The Google Natural Language API has three supported methods, analyze_entities(), analyze_sentiment(), and annotate_text(), and each method uses a Document for representing text.

>>> document = language.types.Document(
...     content='Google, headquartered in Mountain View, unveiled the '
...             'new Android phone at the Consumer Electronic Show.  '
...             'Sundar Pichai said in his keynote that users love '
...             'their new Android phones.',
...     language='en',
...     type='PLAIN_TEXT',
... )

The document’s language defaults to None, which will cause the API to auto-detect the language.
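For example, omitting the language argument lets the API detect the language from the content itself (the French sentence here is purely illustrative):

>>> document = language.types.Document(
...     content='La vache a sauté par-dessus la lune.',
...     type='PLAIN_TEXT',
... )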

In addition, you can construct an HTML document:

>>> html_content = """\
... <html>
...   <head>
...     <title>El Tiempo de las Historias</title>
...   </head>
...   <body>
...     <p>La vaca salt&oacute; sobre la luna.</p>
...   </body>
... </html>
... """
>>> document = language.types.Document(
...     content=html_content,
...     language='es',
...     type='HTML',
... )

The language argument can be either ISO-639-1 or BCP-47 language codes. The API reference page contains the full list of supported languages.

In addition to supplying the text or HTML content directly, a document can refer to content stored in Google Cloud Storage.

>>> document = language.types.Document(
...     gcs_content_uri='gs://my-text-bucket/sentiment-me.txt',
...     type=language.enums.Document.Type.PLAIN_TEXT,
... )

Analyze Entities#

The analyze_entities() method finds named entities (i.e. proper names) in the text. This method returns an AnalyzeEntitiesResponse.

>>> document = language.types.Document(
...     content='Michelangelo Caravaggio, Italian painter, is '
...             'known for "The Calling of Saint Matthew".',
...     type=language.enums.Document.Type.PLAIN_TEXT,
... )
>>> response = client.analyze_entities(
...     document=document,
...     encoding_type='UTF32',
... )
>>> for entity in response.entities:
...     print('=' * 20)
...     print('         name: {0}'.format(entity.name))
...     print('         type: {0}'.format(entity.entity_type))
...     print('     metadata: {0}'.format(entity.metadata))
...     print('     salience: {0}'.format(entity.salience))
====================
         name: Michelangelo Caravaggio
         type: PERSON
     metadata: {'wikipedia_url': ''}
     salience: 0.7615959
====================
         name: Italian
         type: LOCATION
     metadata: {'wikipedia_url': ''}
     salience: 0.19960518
====================
         name: The Calling of Saint Matthew
         type: EVENT
     metadata: {'wikipedia_url': ''}
     salience: 0.038798928


It is recommended to send an encoding_type argument to Natural Language methods, so they provide useful offsets for the data they return. While the correct value varies by environment, in Python you usually want UTF32.

Analyze Sentiment#

The analyze_sentiment() method analyzes the sentiment of the provided text. This method returns an AnalyzeSentimentResponse.

>>> document = language.types.Document(
...     content='Jogging is not very fun.',
...     type='PLAIN_TEXT',
... )
>>> response = client.analyze_sentiment(
...     document=document,
...     encoding_type='UTF32',
... )
>>> sentiment = response.document_sentiment
>>> print(sentiment.score)
>>> print(sentiment.magnitude)
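Sentence-level sentiment is also available on the same response; a short sketch iterating over the response's sentences field:

>>> for sentence in response.sentences:
...     print(sentence.text.content)
...     print(sentence.sentiment.score)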



Annotate Text#

The annotate_text() method analyzes a document and is intended for users who are familiar with machine learning and need in-depth text features to build upon. This method returns an AnnotateTextResponse.
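A minimal sketch of calling it, assuming the v1 Features message (which feature flags you enable is up to you; the ones shown are illustrative):

>>> features = language.types.AnnotateTextRequest.Features(
...     extract_syntax=True,
...     extract_entities=True,
...     extract_document_sentiment=True,
... )
>>> response = client.annotate_text(
...     document=document,
...     features=features,
...     encoding_type='UTF32',
... )

The response then carries the fields corresponding to the requested features, such as tokens, entities, and document_sentiment.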

API Reference#

This package includes clients for multiple versions of the Natural Language API. By default, you will get v1, the latest GA version.

If you are interested in beta features ahead of the latest GA, you may opt in to the v1.1 beta, which is spelled v1beta2. In order to do this, you will want to import from google.cloud.language_v1beta2 in lieu of google.cloud.language.
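For example, assuming the beta module keeps the v1beta2 spelling:

>>> from google.cloud import language_v1beta2
>>> client = language_v1beta2.LanguageServiceClient()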

An API and type reference is provided for the v1.1 beta as well.


The client for the beta API is provided on a provisional basis. The API surface is subject to change, and it is possible that this client will be deprecated or removed after its features become GA.