# Watson Studio environments compute usage #
Compute usage is calculated by the number of capacity unit hours (CUH) consumed by an active environment runtime in Watson Studio. Watson Studio plans govern how you are billed monthly for the resources you consume.
Capacity units included in each plan per month
| Feature | Lite | Professional | Standard (legacy) | Enterprise (legacy) |
| ---------------- | -------------------- | -------------------------------------------- | ------------------------------------ | -------------------------------------- |
| Processing usage | 10 CUH <br>per month | Unlimited CUH <br>billed for usage per month | 10 CUH per month <br>+ pay for more | 5000 CUH per month <br>+ pay for more |
## Capacity units per hour for notebooks ##
Notebooks
| Capacity type | Language | Capacity units per hour |
| ------------------------------------------------------------ | ---------------------------------- | -------------------------------------------- |
| 1 vCPU and 4 GB RAM | Python <br>R | 0.5 |
| 2 vCPU and 8 GB RAM | Python <br>R | 1 |
| 4 vCPU and 16 GB RAM | Python <br>R | 2 |
| 8 vCPU and 32 GB RAM | Python <br>R | 4 |
| 16 vCPU and 64 GB RAM | Python <br>R | 8 |
| Driver: 1 vCPU and 4 GB RAM; 1 Executor: 1 vCPU and 4 GB RAM | Spark with Python <br>Spark with R | 1 <br>CUH per additional executor is 0.5 |
| Driver: 1 vCPU and 4 GB RAM; 1 Executor: 2 vCPU and 8 GB RAM | Spark with Python <br>Spark with R | 1.5 <br>CUH per additional executor is 1 |
| Driver: 2 vCPU and 8 GB RAM; 1 Executor: 1 vCPU and 4 GB RAM | Spark with Python <br>Spark with R | 1.5 <br>CUH per additional executor is 0.5 |
| Driver: 2 vCPU and 8 GB RAM; 1 Executor: 2 vCPU and 8 GB RAM | Spark with Python <br>Spark with R | 2 <br>CUH per additional executor is 1 |
The rate of capacity units per hour consumed is determined for:
* Default Python or R environments: by the hardware size and the number of users in a project using one or more runtimes (a worked sketch follows this list)
For example: The `IBM Runtime 22.2 on Python 3.10 XS` with 2 vCPUs consumes 1 CUH if it runs for one hour. If you have a project with 7 users working on notebooks 8 hours a day, 5 days a week, all using the `IBM Runtime 22.2 on Python 3.10 XS` environment, and everyone shuts down their runtimes when they leave in the evening, runtime consumption is `5 x 7 x 8 = 280 CUH per week`.
The CUH calculation becomes more complex when different environments are used to run notebooks in the same project and when users have multiple active runtimes, each consuming its own CUH. Additionally, notebooks that are scheduled to run during off-hours, and long-running jobs, likewise consume CUH.
* Default Spark environments: by the hardware configuration size of the driver, and by the number of executors and their size.
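To make the arithmetic concrete, here is a minimal sketch that reproduces the example above and the Spark rows from the Notebooks table; the rates come from the tables in this topic, and the helper names are illustrative only:

```python
# Rates from the Notebooks table (CUH per hour).
PYTHON_XS_RATE = 1.0  # 2 vCPU and 8 GB RAM, Python or R

def weekly_notebook_cuh(users, hours_per_day, days_per_week, rate=PYTHON_XS_RATE):
    """CUH for a project where every user runs one runtime during work hours."""
    return users * hours_per_day * days_per_week * rate

def spark_cuh_per_hour(base_rate, rate_per_extra_executor, executors):
    """Spark rate = base (driver + 1 executor) plus a charge per additional executor."""
    return base_rate + rate_per_extra_executor * max(0, executors - 1)

print(weekly_notebook_cuh(users=7, hours_per_day=8, days_per_week=5))  # 280.0
# Driver 1 vCPU/4 GB with three 2 vCPU/8 GB executors: 1.5 + 1 * 2 = 3.5 CUH per hour
print(spark_cuh_per_hour(base_rate=1.5, rate_per_extra_executor=1.0, executors=3))
```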
## Capacity units per hour for notebooks with Decision Optimization ##
The rate of capacity units per hour consumed is determined by the hardware size and the price for Decision Optimization.
Decision Optimization notebooks
| Capacity type | Language | Capacity units per hour |
| --------------------- | ------------------------------ | ----------------------- |
| 1 vCPU and 4 GB RAM | Python + Decision Optimization | 0.5 + 5 = 5.5 |
| 2 vCPU and 8 GB RAM | Python + Decision Optimization | 1 + 5 = 6 |
| 4 vCPU and 16 GB RAM | Python + Decision Optimization | 2 + 5 = 7 |
| 8 vCPU and 32 GB RAM | Python + Decision Optimization | 4 + 5 = 9 |
| 16 vCPU and 64 GB RAM | Python + Decision Optimization | 8 + 5 = 13 |
## Capacity units per hour for notebooks with Watson Natural Language Processing ##
The rate of capacity units per hour consumed is determined by the hardware size and the price for Watson Natural Language Processing.
Watson Natural Language Processing notebooks
| Capacity type | Language | Capacity units per hour |
| --------------------- | ------------------------------------------- | ----------------------- |
| 1 vCPU and 4 GB RAM | Python + Watson Natural Language Processing | 0.5 + 5 = 5.5 |
| 2 vCPU and 8 GB RAM | Python + Watson Natural Language Processing | 1 + 5 = 6 |
| 4 vCPU and 16 GB RAM | Python + Watson Natural Language Processing | 2 + 5 = 7 |
| 8 vCPU and 32 GB RAM | Python + Watson Natural Language Processing | 4 + 5 = 9 |
| 16 vCPU and 64 GB RAM | Python + Watson Natural Language Processing | 8 + 5 = 13 |
## Capacity units per hour for Synthetic Data Generator ##
| Capacity type | Capacity units per hour |
| ------------------- | ----------------------- |
| 2 vCPU and 8 GB RAM | 7 |
## Capacity units per hour for SPSS Modeler flows ##
SPSS Modeler flows
| Name | Capacity type | Capacity units per hour |
| --------------- | -------------------- | ----------------------- |
| Default SPSS XS | 4 vCPU and 16 GB RAM | 2 |
## Capacity units per hour for Data Refinery and Data Refinery flows ##
Data Refinery and Data Refinery flows
| Name | Capacity type | Capacity units per hour |
| -------------------------------- | ------------------------------------------------------------------- | ----------------------- |
| Default Data Refinery XS runtime | 3 vCPU and 12 GB RAM | 1.5 |
| Default Spark 3.3 & R 4.2 | 2 Executors, each: 1 vCPU and 4 GB RAM; Driver: 1 vCPU and 4 GB RAM | 1.5 |
## Capacity units per hour for RStudio ##
RStudio
| Name | Capacity type | Capacity units per hour |
| ------------------ | --------------------- | ----------------------- |
| Default RStudio XS | 2 vCPU and 8 GB RAM | 1 |
| Default RStudio M | 8 vCPU and 32 GB RAM | 4 |
| Default RStudio L | 16 vCPU and 64 GB RAM | 8 |
## Capacity units per hour for GPU environments ##
GPU environments
| Capacity type | GPUs | Language | Capacity units per hour |
| --------------------- | ---- | --------------- | ----------------------- |
| 1 x NVIDIA Tesla V100 | 1 | Python with GPU | 68 |
| 2 x NVIDIA Tesla V100 | 2 | Python with GPU | 136 |
## Runtime capacity limit ##
You are notified when you're about to reach the monthly runtime capacity limit for your Watson Studio service plan. When this happens, you can:
* Stop active runtimes you don't need.
* Upgrade your service plan. For up-to-date information, see the [Services catalog page for Watson Studio](https://dataplatform.cloud.ibm.com/data/catalog/data-science-experience?context=wx&target=services).
Remember: The CUH counter continues to increase while a runtime is active, so stop the runtimes you aren't using. If you don't explicitly stop a runtime, it is stopped after an idle timeout. During the idle time, you continue to consume CUH, for which you are billed.
## Track runtime usage for a project ##
You can view the environment runtimes that are currently active in a project, and monitor usage for the project, from the project's **Environments** page.
## Track runtime usage for an account ##
The CUH consumed by the active runtimes in a project is billed to the account that the project creator selected in their profile settings at the time the project was created. This account can be the project creator's own account, or another account that the project creator has access to. If other users are added to the project and use runtimes, their usage is also billed against the account that the project creator chose at the time of project creation.
You can track the runtime usage for an account on the **Environment Runtimes** page if you are the IBM Cloud account owner or administrator.
To view the total runtime usage across all of the projects and see how much of your plan you have currently used, choose **Administration > Environment runtimes**.
A list of the active runtimes billed to your account is displayed. You can see who created the runtimes, when, and for which projects, as well as the capacity units that were consumed by the active runtimes at the time you view the list.
## Learn more ##
* [Idle runtime timeouts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.html#stop-active-runtimes)
* [Monitor account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html)
* [Upgrade your service](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/upgrade.html)
**Parent topic:** [Managing compute resources](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.html)
# Accessing project assets with ibm-watson-studio-lib #
The `ibm-watson-studio-lib` library for Python and R contains a set of functions that help you to interact with IBM Watson Studio projects and project assets. You can think of the library as a programmatic interface to a project. Using the `ibm-watson-studio-lib` library, you can access project metadata and assets, including files and connections. The library also contains functions that simplify fetching files associated with the project.
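As a quick taste of the interface, here is a minimal sketch of listing and reading project assets from a notebook. The calls follow the `ibm-watson-studio-lib` for Python documentation linked under Next steps; the file name is a placeholder for an asset in your own project:

```python
from ibm_watson_studio_lib import access_project_or_space

# Connect to the project this notebook runs in; inside Watson Studio the
# access token is provided automatically.
wslib = access_project_or_space()

print(wslib.here.get_name())            # name of the current project
wslib.assets.list_assets("data_asset")  # show the project's data assets

# Fetch the contents of a project file as a bytes buffer.
buffer = wslib.load_data("data.csv")    # "data.csv" is a placeholder
print(buffer.read()[:100])
```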
## Next steps ##
* Start using `ibm-watson-studio-lib` in new notebooks:
    * [ibm-watson-studio-lib for Python](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html)
    * [ibm-watson-studio-lib for R](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-r.html)
**Parent topic:** [Loading and accessing data in a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html)
# Watson Natural Language Processing task catalog #
Watson Natural Language Processing encapsulates natural language functionality in standardized components called blocks or workflows. Each block or workflow can be loaded and run in a notebook; some run directly on input data, while others must run in a given order after the blocks they depend on.
This topic describes the natural language processing tasks supported in the Watson Natural Language Processing library. It lists the task names, the supported languages, and the dependencies on other blocks, and it includes sample code that shows how to use the natural language processing functionality in a Python notebook.
The following natural language processing tasks are supported as blocks or workflows in the Watson Natural Language Processing library:
* [Language detection](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-language-detection.html)
* [Syntax analysis](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-syntax.html)
* [Noun phrase extraction](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-noun-phrase.html)
* [Keyword extraction and ranking](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-keyword.html)
* [Entity extraction](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-entity-enhanced.html)
* [Sentiment classification](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-sentiment.html)
* [Tone classification](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-tone.html)
* [Emotion classification](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-emotion.html)
* [Concepts extraction](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-concept-ext.html)
* [Relations extraction](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-relation-extraction.html)
* [Hierarchical text categorization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-hierarchical-cat.html)
## Language codes ##
Many of the pre-trained models are available in multiple languages. The following table lists the language codes and the corresponding languages.
Language codes and their corresponding language equivalents
| Language code | Corresponding language | Language code | Corresponding language |
| ------------- | ---------------------- | ------------- | ---------------------- |
| af | Afrikaans | ar | Arabic |
| bs | Bosnian | ca | Catalan |
| cs | Czech | da | Danish |
| de | German | el | Greek |
| en | English | es | Spanish |
| fi | Finnish | fr | French |
| he | Hebrew | hi | Hindi |
| hr | Croatian | it | Italian |
| ja | Japanese | ko | Korean |
| nb | Norwegian Bokmål | nl | Dutch |
| nn | Norwegian Nynorsk | pl | Polish |
| pt | Portuguese | ro | Romanian |
| ru | Russian | sk | Slovak |
| sr | Serbian | sv | Swedish |
| tr | Turkish | zh_cn | Chinese (Simplified) |
| zh_tw | Chinese (Traditional) | | |
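These codes appear inside the per-language block names used throughout this catalog (for example, `syntax_izumo_<language>_stock` in the block topics). A minimal sketch of loading the same block for several languages, assuming the corresponding models are available in your runtime:

```python
import watson_nlp

# Substitute a language code from the table into the block name.
for code in ["en", "fr", "de"]:
    syntax_model = watson_nlp.load(f"syntax_izumo_{code}_stock")
    print(f"Loaded syntax block for language '{code}'")
```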
**Parent topic:** [Watson Natural Language Processing library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp.html)
# Concepts extraction block #
The Watson Natural Language Processing Concepts block extracts general DBPedia concepts (concepts drawn from language-specific Wikipedia versions) that are directly referenced in the input text, as well as concepts that are alluded to but not directly referenced.
**Block name**
`concepts_alchemy_<language>_stock`
**Supported languages**
The Concepts block is available for the following languages. For a list of the language codes and the corresponding languages, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html#lang-codes).
de, en, es, fr, it, ja, ko, pt
**Capabilities**
Use this block to assign concepts from [DBPedia](https://www.dbpedia.org/) (2016 edition). The output types are based on DBPedia.
**Dependencies on other blocks**
The following block must run before you can run the Concepts extraction block:
* `syntax_izumo_<language>_stock`
**Code sample**

```python
import watson_nlp

# Load the Syntax model and a Concepts model for English
syntax_model = watson_nlp.load('syntax_izumo_en_stock')
concepts_model = watson_nlp.load('concepts_alchemy_en_stock')

# Run the syntax model on the input text
syntax_prediction = syntax_model.run('IBM announced new advances in quantum computing')

# Run the concepts model on the result of syntax
concepts = concepts_model.run(syntax_prediction)
print(concepts)
```
Output of the code sample:

```json
{
  "concepts": [
    {
      "text": "IBM",
      "relevance": 0.9842190146446228,
      "dbpedia_resource": "http://dbpedia.org/resource/IBM"
    },
    {
      "text": "Quantum_computing",
      "relevance": 0.9797260165214539,
      "dbpedia_resource": "http://dbpedia.org/resource/Quantum_computing"
    },
    {
      "text": "Computing",
      "relevance": 0.9080164432525635,
      "dbpedia_resource": "http://dbpedia.org/resource/Computing"
    },
    {
      "text": "Shor's_algorithm",
      "relevance": 0.7580527067184448,
      "dbpedia_resource": "http://dbpedia.org/resource/Shor's_algorithm"
    },
    {
      "text": "Quantum_dot",
      "relevance": 0.7069802284240723,
      "dbpedia_resource": "http://dbpedia.org/resource/Quantum_dot"
    },
    {
      "text": "Quantum_algorithm",
      "relevance": 0.7063655853271484,
      "dbpedia_resource": "http://dbpedia.org/resource/Quantum_algorithm"
    },
    {
      "text": "Qubit",
      "relevance": 0.7063655853271484,
      "dbpedia_resource": "http://dbpedia.org/resource/Qubit"
    },
    {
      "text": "DNA_computing",
      "relevance": 0.7044616341590881,
      "dbpedia_resource": "http://dbpedia.org/resource/DNA_computing"
    },
    {
      "text": "Computation",
      "relevance": 0.7044616341590881,
      "dbpedia_resource": "http://dbpedia.org/resource/Computation"
    },
    {
      "text": "Computer",
      "relevance": 0.7044616341590881,
      "dbpedia_resource": "http://dbpedia.org/resource/Computer"
    }
  ],
  "producer_id": {
    "name": "Alchemy Concepts",
    "version": "0.0.1"
  }
}
```
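If you only need the strongest concepts rather than the full prediction, you can filter the result. This is a minimal sketch that assumes the prediction object exposes the fields of the JSON above as attributes, which is an assumption about the `watson_nlp` data model rather than something this topic states:

```python
# Assumes `concepts.concepts` mirrors the "concepts" array shown above.
top = [(c.text, c.relevance) for c in concepts.concepts if c.relevance > 0.9]
print(top)  # for example: [('IBM', 0.984...), ('Quantum_computing', 0.979...)]
```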
**Parent topic:** [Watson Natural Language Processing block catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html)
# Emotion classification #
The Emotion model in the Watson Natural Language Processing classification workflow classifies the emotion in the input text.
**Workflow name**
`ensemble_classification-workflow_en_emotion-stock`
**Supported languages**
* English and French
**Capabilities**
The Emotion classification model is a pre-trained document classification model for the task of classifying the emotion in the input document. The model identifies the emotion of a document, and classifies it as:
* Anger
* Disgust
* Fear
* Joy
* Sadness
Unlike the Sentiment model, which classifies each individual sentence, the Emotion model classifies the entire input document. As such, the Emotion model works optimally when the input text to classify is no longer than 1000 characters. If you would like to classify longer texts, split the text, for example into sentences or paragraphs, and apply the Emotion model to each part.
A document may be classified into multiple categories or into no category.
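Here is a minimal sketch of one way to follow the 1000-character guidance by classifying paragraph-sized chunks separately. The chunking strategy is illustrative only; `emotion_model` is the workflow loaded in the code sample below:

```python
def classify_long_text(emotion_model, text, max_chars=1000):
    """Split a long document on paragraph breaks and classify each chunk."""
    chunks, current = [], ""
    for paragraph in text.split("\n\n"):
        if current and len(current) + len(paragraph) > max_chars:
            chunks.append(current)
            current = ""
        current = (current + "\n\n" + paragraph).strip()
    if current:
        chunks.append(current)
    # One emotion prediction per chunk; aggregate as suits your application.
    return [emotion_model.run(chunk) for chunk in chunks]
```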
Capabilities of emotion classification based on an example
| Capabilities | Example |
| ------------------------------------------------------ | ----------------------------------------------------------- |
| Identifies the emotion of a document and classifies it | "I'm so annoyed that this code won't run" -> anger, sadness |
**Dependencies on other blocks**
None
**Code sample**
```python
import watson_nlp

# Load the Emotion workflow model for English
emotion_model = watson_nlp.load('ensemble_classification-workflow_en_emotion-stock')

# Run the Emotion model
emotion_result = emotion_model.run("I'm so annoyed that this code won't run")
print(emotion_result)
```
Output of the code sample:

```json
{
  "classes": [
    {
      "class_name": "anger",
      "confidence": 0.6074999913276445
    },
    {
      "class_name": "sadness",
      "confidence": 0.2913303280964709
    },
    {
      "class_name": "fear",
      "confidence": 0.10266377929247113
    },
    {
      "class_name": "disgust",
      "confidence": 0.018745421312542355
    },
    {
      "class_name": "joy",
      "confidence": 0.0020577122567564804
    }
  ],
  "producer_id": {
    "name": "Voting based Ensemble",
    "version": "0.0.1"
  }
}
```
**Parent topic:** [Watson Natural Language Processing task catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html)
| # Entity extraction #
The Watson Natural Language Processing Entity extraction models extract entities from input text\.
For details on the available extraction types, refer to these sections:
<!-- <ul> -->
* [Machine\-learning\-based extraction for general entities](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-entity-enhanced.html?context=cdpaas&locale=en#machine-learning-general)
* [Machine\-learning\-based extraction for PII entities](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-entity-enhanced.html?context=cdpaas&locale=en#machine-learning-pii)
* [Rule\-based extraction for general entities](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-entity-enhanced.html?context=cdpaas&locale=en#rule-based-general)
* [Rule\-based extraction for PII entities](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-entity-enhanced.html?context=cdpaas&locale=en#rule-based-pii)
<!-- </ul> -->
## Machine\-learning\-based extraction for general entities ##
The machine\-learning\-based extraction models are trained on labeled data for the more complex entity types such as person, organization and location\.
**Capabilities**
The entity models extract entities from the input text\. The following types of entities are recognized:
<!-- <ul> -->
* Date
* Duration
* Facility
* Geographic feature
* Job title
* Location
* Measure
* Money
* Ordinal
* Organization
* Person
* Time
<!-- </ul> -->
<!-- <table> -->
Capabilities of machine\-learning\-based extraction based on an example
| Capabilities | Examples |
| --------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------- |
| Extracts entities from the input text\. | `IBM\'s CEO Arvind Krishna is based in the US` \-> `IBM\Organization` , `CEO\JobTitle`, `Arvind Krishna\Person`, `US\Location` |
<!-- </table ""> -->
Available workflows and blocks differ depending on the runtime used\.
<!-- <table> -->
Blocks and workflows for handling general entities with their corresponding runtimes
| Block or workflow name | Available in runtime |
| ---------------------------------------------------------------------------- | ------------------------------ |
| `entity-mentions_transformer-workflow_multilingual_slate.153m.distilled` | [Runtime 23\.1](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-entity-enhanced.html?context=cdpaas&locale=en#runtime-231) |
| `entity-mentions_transformer-workflow_multilingual_slate.153m.distilled-cpu` | [Runtime 23\.1](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-entity-enhanced.html?context=cdpaas&locale=en#runtime-231) |
| `entity-mentions_bert_multi_stock` | [Runtime 22\.2](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-entity-enhanced.html?context=cdpaas&locale=en#runtime-222) |
<!-- </table ""> -->
### Machine\-learning\-based workflows for general entities in Runtime 23\.1 ###
**Workflow names**
<!-- <ul> -->
* `entity-mentions_transformer-workflow_multilingual_slate.153m.distilled`: this workflow can be used on both CPUs and GPUs\.
* `entity-mentions_transformer-workflow_multilingual_slate.153m.distilled-cpu`: this workflow is optimized for CPU\-based runtimes\.
<!-- </ul> -->
**Supported languages**
Entity extraction is available for the following languages\.
For a list of the language codes and the corresponding language, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html#lang-codes):
ar, cs, da, de, en, es, fi, fr, he, hi, it, ja, ko, nb, nl, nn, pt, ro, ru, sk, sv, tr, zh\-cn
**Code sample**
import watson_nlp
# Load the workflow model
entities_workflow = watson_nlp.load('entity-mentions_transformer-workflow_multilingual_slate.153m.distilled')
# Run the entity extraction workflow on the input text
entities = entities_workflow.run('IBM\'s CEO Arvind Krishna is based in the US', language_code="en")
print(entities.get_mention_pairs())
**Output of the code sample:**
[('IBM', 'Organization'), ('CEO', 'JobTitle'), ('Arvind Krishna', 'Person'), ('US', 'Location')]
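The returned pairs are plain Python tuples, so they can be filtered with standard Python\. A minimal sketch that reuses the `entities` object from the sample above; the variable names are illustrative:
# Keep only Person mentions from the extracted pairs
person_mentions = [text for text, entity_type in entities.get_mention_pairs() if entity_type == 'Person']
print(person_mentions)
# Expected output based on the sample above: ['Arvind Krishna']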
### Machine\-learning\-based blocks for general entities in Runtime 22\.2 ###
**Block name**`entity-mentions_bert_multi_stock`
**Supported languages**
Entity extraction is available for the following languages\. For a list of the language codes and the corresponding language, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html#lang-codes)\.
ar, cs, da, de, en, es, fi, fr, he, hi, it, ja, ko, nb, nl, nn, pt, ro, ru, sk, sv, tr, zh\-cn
**Dependencies on other blocks**
The following block must run before you can run the Entity extraction block:
<!-- <ul> -->
* `syntax_izumo_<language>_stock`
<!-- </ul> -->
**Code sample**
import watson_nlp
# Load Syntax Model for English, and the multilingual BERT Entity model
syntax_model = watson_nlp.load('syntax_izumo_en_stock')
bert_entity_model = watson_nlp.load('entity-mentions_bert_multi_stock')
# Run the syntax model on the input text
syntax_prediction = syntax_model.run('IBM\'s CEO Arvind Krishna is based in the US')
# Run the entity mention model on the result of syntax model
bert_entity_mentions = bert_entity_model.run(syntax_prediction)
print(bert_entity_mentions.get_mention_pairs())
**Output of the code sample:**
[('IBM', 'Organization'), ('CEO', 'JobTitle'), ('Arvind Krishna', 'Person'), ('US', 'Location')]
## Machine\-learning\-based extraction for PII entities ##
**Block name**`entity-mentions_bilstm_en_pii`
<!-- <table> -->
Blocks for handling Personal Identifiable Information (PII) entities with their corresponding runtimes
| Block name | Available in runtime |
| ------------------------------- | ---------------------------- |
| `entity-mentions_bilstm_en_pii` | Runtime 22\.2, Runtime 23\.1 |
<!-- </table ""> -->
The `entity-mentions_bilstm_en_pii` machine\-learning\-based extraction model is trained on labeled data for the types *person* and *location*\.
**Capabilities**
The `entity-mentions_bilstm_en_pii` block recognizes the following types of entities:
<!-- <table> -->
Entities extracted by the entity\-mentions\_bilstm\_en\_pii block
| Entity type name | Description | Supported languages |
| ---------------- | ------------------------------------------------------------------------------------------------------------------ | ------------------- |
| Location | All geo\-political regions, continents, countries, street names, states, provinces, cities, towns, or islands\. | en |
| Person | Any being; living, nonliving, fictional or real\. | en |
<!-- </table ""> -->
**Dependencies on other blocks**
The following block must run before you can run the `entity-mentions_bilstm_en_pii` block:
<!-- <ul> -->
* `syntax_izumo_en_stock`
<!-- </ul> -->
**Code sample**
import os
import watson_nlp
# Load Syntax and a Entity Mention BiLSTM model for English
syntax_model = watson_nlp.load('syntax_izumo_en_stock')
entity_model = watson_nlp.load('entity-mentions_bilstm_en_pii')
text = 'Denver is the capital of Colorado. The total estimated government spending in Colorado in fiscal year 2016 was $36.0 billion. IBM office is located in downtown Denver. Michael Hancock is the mayor of Denver.'
# Run the syntax model on the input text
syntax_prediction = syntax_model.run(text)
# Run the entity mention model on the result of the syntax analysis
entity_mentions = entity_model.run(syntax_prediction)
print(entity_mentions)
**Output of the code sample:**
{
"mentions": [
{
"span": {
"begin": 0,
"end": 6,
"text": "Denver"
},
"type": "Location",
"producer_id": {
"name": "BiLSTM Entity Mentions",
"version": "1.0.0"
},
"confidence": 0.6885626912117004,
"mention_type": "MENTT_UNSET",
"mention_class": "MENTC_UNSET",
"role": ""
},
{
"span": {
"begin": 25,
"end": 33,
"text": "Colorado"
},
"type": "Location",
"producer_id": {
"name": "BiLSTM Entity Mentions",
"version": "1.0.0"
},
"confidence": 0.8509215116500854,
"mention_type": "MENTT_UNSET",
"mention_class": "MENTC_UNSET",
"role": ""
},
{
"span": {
"begin": 78,
"end": 86,
"text": "Colorado"
},
"type": "Location",
"producer_id": {
"name": "BiLSTM Entity Mentions",
"version": "1.0.0"
},
"confidence": 0.9928259253501892,
"mention_type": "MENTT_UNSET",
"mention_class": "MENTC_UNSET",
"role": ""
},
{
"span": {
"begin": 151,
"end": 166,
"text": "downtown Denver"
},
"type": "Location",
"producer_id": {
"name": "BiLSTM Entity Mentions",
"version": "1.0.0"
},
"confidence": 0.48378944396972656,
"mention_type": "MENTT_UNSET",
"mention_class": "MENTC_UNSET",
"role": ""
},
{
"span": {
"begin": 168,
"end": 183,
"text": "Michael Hancock"
},
"type": "Person",
"producer_id": {
"name": "BiLSTM Entity Mentions",
"version": "1.0.0"
},
"confidence": 0.9972871541976929,
"mention_type": "MENTT_UNSET",
"mention_class": "MENTC_UNSET",
"role": ""
}
],
"producer_id": {
"name": "BiLSTM Entity Mentions",
"version": "1.0.0"
}
}
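As a usage illustration, the extracted spans can drive a simple PII masking step\. A minimal sketch that reuses `text` and `entity_mentions` from the sample above; it assumes the prediction object offers a `to_dict()` conversion that mirrors the JSON output above, which is an assumption, not a documented API:
# Hypothetical redaction sketch: mask every detected mention in the input text
redacted = list(text)
for mention in entity_mentions.to_dict()['mentions']:
    span = mention['span']
    # Replace the mention span with 'X' characters of the same length
    redacted[span['begin']:span['end']] = 'X' * (span['end'] - span['begin'])
print(''.join(redacted))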
## Rule\-based extraction for general entities ##
The rule\-based model `entity-mentions_rbr_xx_stock` identifies syntactically regular entities\.
**Block name**`entity-mentions_rbr_xx_stock`
**Capabilities**
Rule\-based extraction handles syntactically regular entity types\. The entity block extracts entities from the input text\. The following types of entities are recognized:
<!-- <ul> -->
* PhoneNumber
* EmailAddress
* Number
* Percent
* IPAddress
* HashTag
* TwitterHandle
* URL
* Date
<!-- </ul> -->
<!-- <table> -->
Capabilities of rule\-based extraction based on an example
| Capabilities | Examples |
| ----------------------------------------------------------------- | ------------------------------------------------------------------- |
| Extracts syntactically regular entity types from the input text\. | `My email is [email protected]` \-> `[email protected]\EmailAddress` |
<!-- </table ""> -->
**Supported languages**
Entity extraction is available for the following languages\. For a list of the language codes and the corresponding language, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html#lang-codes)\.
ar, cs, da, de, en, es, fi, fr, he, hi, it, ja, ko, nb, nl, nn, pt, ro, ru, sk, sv, tr, zh\-cn, zh\-tw
**Dependencies on other blocks**
None
**Code sample**
import watson_nlp
# Load a rule-based Entity Mention model for English
rbr_entity_model = watson_nlp.load('entity-mentions_rbr_en_stock')
# Run the entity model on the input text
rbr_entity_mentions = rbr_entity_model.run('My email is [email protected]')
print(rbr_entity_mentions)
Output of the code sample:
{
"mentions": [
{
"span": {
"begin": 12,
"end": 27,
"text": "[email protected]"
},
"type": "EmailAddress",
"producer_id": {
"name": "RBR mentions",
"version": "0.0.1"
},
"confidence": 0.8,
"mention_type": "MENTT_UNSET",
"mention_class": "MENTC_UNSET",
"role": ""
}
],
"producer_id": {
"name": "RBR mentions",
"version": "0.0.1"
}
}
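The same loaded model also covers the other syntactically regular types listed above\. A minimal sketch that reuses `rbr_entity_model` from the sample above; the expectation that the number is reported with type `PhoneNumber` follows the capability list, not a verified output:
# Run the rule-based model on text containing a phone number
phone_mentions = rbr_entity_model.run('You can reach me at 0511-123-456')
print(phone_mentions)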
## Rule\-based extraction for PII entities ##
The rule\-based model `entity-mentions_rbr_multi_pii` handles the majority of the types by identifying common formats of PII entities and performing checksums or validations as appropriate for each entity type\. For example, credit card number candidates are validated using the Luhn algorithm\.
**Block name**`entity-mentions_rbr_multi_pii`
**Capabilities**
The entity block `entity-mentions_rbr_multi_pii` recognizes the following types of entities:
<!-- <table> -->
Entities extracted by the entity\-mentions\_rbr\_multi\_pii block
| Entity type name | Description | Supported languages |
| ------------------------------------------- | ------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------- |
| BankAccountNumber\.CreditCardNumber\.Amex | Credit card number for card types AMEX (15 digits)\. Checked through the Luhn algorithm\. | All |
| BankAccountNumber\.CreditCardNumber\.Master | Credit card number for card types Master card (16 digits)\. Checked through the Luhn algorithm\. | All |
| BankAccountNumber\.CreditCardNumber\.Other | Credit card number for left\-over category of other types\. Checked through the Luhn algorithm\. | All |
| BankAccountNumber\.CreditCardNumber\.Visa | Credit card number for card types VISA (16 to 19 digits)\. Checked through the Luhn algorithm\. | All |
| EmailAddress | Email addresses, for example: john@gmail\.com | ar, cs, da, de, en, es, fi, fr, he, hi, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sv, tr, zh\-cn |
| IPAddress | IPv4 and IPv6 addresses, for example, `10.142.250.123` | ar, cs, da, de, en, es, fi, fr, he, hi, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sv, tr, zh\-cn |
| `PhoneNumber` | Any specific phone number, for example, 0511\-123\-456 | ar, cs, da, de, en, es, fi, fr, he, hi, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sv, tr, zh\-cn |
<!-- </table ""> -->
Some PII entity type names are country\-specific\. The `_` in the following entity types is a placeholder for a country code\.
<!-- <ul> -->
* `BankAccountNumber.BBAN._` : These are more variable national bank account numbers and the extraction is mostly language\-specific without a general checksum algorithm\.
* `BankAccountNumber.IBAN._` : Highly standardized IBANs are supported in a language\-independent way and with a checksum algorithm\.
* `NationalNumber.NationalID._`: These national IDs don’t have a (published) checksum algorithm and are extracted on a language\-specific basis\.
* `NationalNumber.Passport._` : Checksums are implemented only for the countries where a checksum algorithm exists\. These are extracted on a language\-specific basis with additional context restrictions\.
* `NationalNumber.TaxID._` : These IDs don't have a (published) checksum algorithm and are extracted on a language\-specific basis\.
<!-- </ul> -->
The following table lists which entity types are available for which languages, and which country code to use\.
<!-- <table> -->
Country\-specific PII entity types
| Country | Entity Type Name | Description | Supported Languages |
| ---------------------------------------------------- | --------------------------------------------- | -------------------------------------------------------------------------------------------------------------------- | ------------------- |
| Austria | `BankAccountNumber.BBAN.AT` | Basic bank account number | de |
| | `BankAccountNumber.IBAN.AT` | International bank account number | all |
| | `NationalNumber.Passport.AT` | Passport number | de |
| | `NationalNumber.TaxID.AT` | Tax identification number | de |
| Belgium | `BankAccountNumber.BBAN.BE` | Basic bank account number | fr, nl |
| | `BankAccountNumber.IBAN.BE` | International bank account number | all |
| | `NationalNumber.NationalID.BE` | National identification number | fr, nl |
| | `NationalNumber.Passport.BE` | Passport number | fr, nl |
| Bulgaria | `BankAccountNumber.BBAN.BG` | Basic bank account number | bg |
| | `BankAccountNumber.IBAN.BG` | International bank account number | all |
| | `NationalNumber.NationalID.BG` | National identification number | bg |
| Canada | `NationalNumber.SocialInsuranceNumber.CA` | Social insurance number\. Checksum algorithm is implemented\. | en, fr |
| Croatia | `BankAccountNumber.BBAN.HR` | Basic bank account number | hr |
| | `BankAccountNumber.IBAN.HR` | International bank account number | all |
| | `NationalNumber.NationalID.HR` | National identification number | hr |
| | `NationalNumber.TaxID.HR` | Tax identification number | hr |
| Cyprus | `BankAccountNumber.BBAN.CY` | Basic bank account number | el |
| | `BankAccountNumber.IBAN.CY` | International bank account number | all |
| | `NationalNumber.TaxID.CY` | Tax identification number | el |
| Czechia | `BankAccountNumber.BBAN.CZ` | Basic bank account number | cs |
| | `BankAccountNumber.IBAN.CZ` | International bank account number | cs |
| | `NationalNumber.NationalID.CZ` | National identification number | cs |
| | `NationalNumber.TaxID.CZ` | Tax identification number | cs |
| Denmark | `BankAccountNumber.BBAN.DK` | Basic bank account number | da |
| | `BankAccountNumber.IBAN.DK` | International bank account number | all |
| | `NationalNumber.NationalID.DK` | National identification number | da |
| Estonia | `BankAccountNumber.BBAN.EE` | Basic bank account number | et |
| | `BankAccountNumber.IBAN.EE` | International bank account number | all |
| | `NationalNumber.NationalID.EE` | National identification number | et |
| Finland | `BankAccountNumber.BBAN.FI` | Basic bank account number | fi |
| | `BankAccountNumber.IBAN.FI` | International bank account number | all |
| | `NationalNumber.NationalID.FI` | National identification number | fi |
| | `NationalNumber.Passport.FI` | Passport number | fi |
| France | `BankAccountNumber.BBAN.FR` | Basic bank account number | fr |
| | `BankAccountNumber.IBAN.FR` | International bank account number | all |
| | `NationalNumber.Passport.FR` | Passport number | fr |
| | `NationalNumber.SocialInsuranceNumber.FR` | Social insurance number\. Checksum algorithm is implemented\. | fr |
| Germany | `BankAccountNumber.BBAN.DE` | Basic bank account number | de |
| | `BankAccountNumber.IBAN.DE` | International bank account number | all |
| | `NationalNumber.Passport.DE` | Passport number | de |
| | `NationalNumber.SocialInsuranceNumber.DE` | Social insurance number\. Checksum algorithm is implemented\. | de |
| Greece | `BankAccountNumber.BBAN.GR` | Basic bank account number | el |
| | `BankAccountNumber.IBAN.GR` | International bank account number | all |
| | `NationalNumber.Passport.GR` | Passport number | el |
| | `NationalNumber.TaxID.GR` | Tax identification number | el |
| | `NationalNumber.NationalID.GR` | National ID number | el |
| Hungary | `BankAccountNumber.BBAN.HU` | Basic bank account number | hu |
| | `BankAccountNumber.IBAN.HU` | International bank account number | all |
| | `NationalNumber.NationalID.HU` | National identification number | hu |
| | `NationalNumber.TaxID.HU` | Tax identification number | hu |
| Iceland | `BankAccountNumber.BBAN.IS` | Basic bank account number | is |
| | `BankAccountNumber.IBAN.IS` | International bank account number | all |
| | `NationalNumber.NationalID.IS` | National identification number | is |
| Ireland | `BankAccountNumber.BBAN.IE` | Basic bank account number | en |
| | `BankAccountNumber.IBAN.IE` | International bank account number | all |
| | `NationalNumber.NationalID.IE` | National identification number | en |
| | `NationalNumber.Passport.IE` | Passport number | en |
| | `NationalNumber.TaxID.IE` | Tax identification number | en |
| Italy | `BankAccountNumber.BBAN.IT` | Basic bank account number | it |
| | `BankAccountNumber.IBAN.IT` | International bank account number | all |
| | `NationalNumber.NationalID.IT` | National identification number | it |
| | `NationalNumber.Passport.IT` | Passport number | it |
| Latvia | `BankAccountNumber.BBAN.LV` | Basic bank account number | lv |
| | `BankAccountNumber.IBAN.LV` | International bank account number | all |
| | `NationalNumber.NationalID.LV` | National identification number | lv |
| Liechtenstein | `BankAccountNumber.BBAN.LI` | Basic bank account number | de |
| | `BankAccountNumber.IBAN.LI` | International bank account number | all |
| Lithuania | `BankAccountNumber.BBAN.LT` | Basic bank account number | lt |
| | `BankAccountNumber.IBAN.LT` | International bank account number | all |
| | `NationalNumber.NationalID.LT` | National identification number | lt |
| Luxembourg | `BankAccountNumber.BBAN.LU` | Basic bank account number | de, fr |
| | `BankAccountNumber.IBAN.LU` | International bank account number | all |
| | `NationalNumber.TaxID.LU` | Tax identification number | de, fr |
| Malta | `BankAccountNumber.BBAN.MT` | Basic bank account number | mt |
| | `BankAccountNumber.IBAN.MT` | International bank account number | all |
| Netherlands | `BankAccountNumber.BBAN.NL` | Basic bank account number | nl |
| | `BankAccountNumber.IBAN.NL` | International bank account number | all |
| | `NationalNumber.NationalID.NL` | National identification number | nl |
| | `NationalNumber.Passport.NL` | Passport number | nl |
| Norway | `BankAccountNumber.BBAN.NO` | Basic bank account number | no |
| | `BankAccountNumber.IBAN.NO` | International bank account number | all |
| | `NationalNumber.NationalID.NO` | National identification number | no |
| | `NationalNumber.NationalID.NO.Old` | National identification number, obsolete format | no |
| | `NationalNumber.Passport.NO` | Passport number | no |
| Poland | `BankAccountNumber.BBAN.PL` | Basic bank account number | pl |
| | `BankAccountNumber.IBAN.PL` | International bank account number | all |
| | `NationalNumber.NationalID.PL` | National identification number | pl |
| | `NationalNumber.Passport.PL` | Passport number | pl |
| | `NationalNumber.TaxID.PL` | Tax identification number | pl |
| Portugal | `BankAccountNumber.IBAN.PT` | International bank account number | all |
| | `BankAccountNumber.BBAN.PT` | Basic bank account number | pt |
| | `NationalNumber.NationalID.PT` | National identification number | pt |
| | `NationalNumber.NationalID.PT.Old` | National identification number, obsolete format | pt |
| | `NationalNumber.TaxID.PT` | Tax identification number | pt |
| Romania | `BankAccountNumber.BBAN.RO` | Basic bank account number | ro |
| | `BankAccountNumber.IBAN.RO` | International bank account number | all |
| | `NationalNumber.NationalID.RO` | National identification number | ro |
| | `NationalNumber.TaxID.RO` | Tax identification number | ro |
| Slovakia | `BankAccountNumber.IBAN.SK` | International bank account number | all |
| | `BankAccountNumber.BBAN.SK` | Basic bank account number | sk |
| | `NationalNumber.TaxID.SK` | Tax identification number | sk |
| | `NationalNumber.NationalID.SK` | National identification number | sk |
| Slovenia | `BankAccountNumber.IBAN.SI` | International bank account number | all |
| Spain | `BankAccountNumber.IBAN.ES` | International bank account number | all |
| | `BankAccountNumber.BBAN.ES` | Basic bank account number | es |
| | `NationalNumber.NationalID.ES` | National identification number | es |
| | `NationalNumber.Passport.ES` | Passport number | es |
| | `NationalNumber.TaxID.ES` | Tax identification number | es |
| Sweden | `BankAccountNumber.IBAN.SE` | International bank account number | all |
| | `BankAccountNumber.BBAN.SE` | Basic bank account number | sv |
| | `NationalNumber.NationalID.SE` | National identification number | sv |
| | `NationalNumber.Passport.SE` | Passport number | sv |
| Switzerland | `BankAccountNumber.IBAN.CH` | International bank account number | all |
| | `BankAccountNumber.BBAN.CH` | Basic bank account number | de, fr, it |
| | `NationalNumber.NationalID.CH` | National identification number | de, fr, it |
| | `NationalNumber.Passport.CH` | Passport number | de, fr, it |
| | `NationalNumber.NationalID.CH.Old` | National identification number, obsolete format | de, fr, it |
| United Kingdom of Great Britain and Northern Ireland | `BankAccountNumber.IBAN.GB` | International bank account number | all |
| | `NationalNumber.SocialSecurityNumber.GB.NHS` | National Health Service number | all |
| | `NationalNumber.SocialSecurityNumber.GB.NINO` | National Social Security Insurance number | all |
| | `NationalNumber.NationalID.GB.Old` | National ID number, obsolete format | all |
| | `NationalNumber.Passport.GB` | Passport number\. Checksum algorithm is not implemented and hence comes with additional context restrictions\. | all |
| United States | `NationalNumber.SocialSecurityNumber.US` | Social Security number\. Checksum algorithm is not implemented and hence comes with additional context restrictions\. | en |
| | `NationalNumber.Passport.US` | Passport number\. Checksum algorithm is not implemented and hence comes with additional context restrictions\. | en |
<!-- </table ""> -->
**Dependencies on other blocks**
None
**Code sample**
import watson_nlp
# Load the RBR PII model. Note that this is a multilingual model supporting multiple languages.
rbr_entity_model = watson_nlp.load('entity-mentions_rbr_multi_pii')
# Run the RBR model. Note that language code of the input text is passed as a parameter to the run method.
rbr_entity_mentions = rbr_entity_model.run('Please find my credit card number here: 378282246310005. Thanks for the payment.', language_code='en')
print(rbr_entity_mentions)
Output of the code sample:
{
"mentions": [
{
"span": {
"begin": 40,
"end": 55,
"text": "378282246310005"
},
"type": "BankAccountNumber.CreditCardNumber.Amex",
"producer_id": {
"name": "RBR mentions",
"version": "0.0.1"
},
"confidence": 0.8,
"mention_type": "MENTT_UNSET",
"mention_class": "MENTC_UNSET",
"role": ""
}
],
"producer_id": {
"name": "RBR mentions",
"version": "0.0.1"
}
}
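Because the model is multilingual, the same loaded model can analyze other supported languages by changing the `language_code` parameter\. A minimal sketch that reuses `rbr_entity_model` from the sample above; the expectation that a valid German IBAN is reported as `BankAccountNumber.IBAN.DE` follows the table above, not a verified output:
# Run the multilingual PII model on German text containing a valid example IBAN
rbr_entity_mentions_de = rbr_entity_model.run('Meine IBAN lautet DE89370400440532013000.', language_code='de')
print(rbr_entity_mentions_de)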
**Parent topic:**[Watson Natural Language Processing task catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html)
<!-- </article "role="article" "> -->
|
1EC0AABFA78901776901CB2C57AFF822855B6B5E | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-hierarchical-cat.html?context=cdpaas&locale=en | Hierarchical text categorization | Hierarchical text categorization
The Watson Natural Language Processing Categories block assigns individual nodes within a hierarchical taxonomy to an input document. For example, in the text IBM announces new advances in quantum computing, examples of extracted categories are technology and computing/hardware/computer and technology and computing/operating systems. These categories represent level 3 and level 2 nodes in a hierarchical taxonomy.
This block differs from the Classification block in that training starts from a set of seed phrases associated with each node in the taxonomy, and does not require labeled documents.
Note that the Hierarchical text categorization block can only be used in a notebook that is started in an environment based on Runtime 22.2 or Runtime 23.1 that includes the Watson Natural Language Processing library.
Block name
categories_esa_en_stock
Supported languages
The Categories block is available for the following languages. For a list of the language codes and the corresponding language, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html#lang-codes).
de, en
Capabilities
Use this block to determine the topics of documents on the web by categorizing web pages into a taxonomy of general domain topics, for ad placement and content recommendation. The model was tested on data from news reports and general web pages.
For a list of the categories that can be returned, see [Category types](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-returned-categories.html).
Dependencies on other blocks
The following block must run before you can run the hierarchical categorization block:
* syntax_izumo_<language>_stock
Code sample
import watson_nlp
Load Syntax and a Categories model for English
syntax_model = watson_nlp.load('syntax_izumo_en_stock')
categories_model = watson_nlp.load('categories_esa_en_stock')
Run the syntax model on the input text
syntax_prediction = syntax_model.run('IBM announced new advances in quantum computing')
Run the categories model on the result of syntax
categories = categories_model.run(syntax_prediction)
print(categories)
Output of the code sample:
{
"categories": [
{
"labels":
"technology & computing",
"computing"
],
"score": 0.992489,
"explanation": ]
},
{
"labels":
"science",
"physics"
],
"score": 0.945449,
"explanation": ]
}
],
"producer_id": {
"name": "ESA Hierarchical Categories",
"version": "1.0.0"
}
}
Parent topic:[Watson Natural Language Processing task catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html)
| # Hierarchical text categorization #
The Watson Natural Language Processing Categories block assigns individual nodes within a hierarchical taxonomy to an input document\. For example, in the text *IBM announces new advances in quantum computing*, examples of extracted categories are `technology and computing/hardware/computer` and `technology and computing/operating systems`\. These categories represent level 3 and level 2 nodes in a hierarchical taxonomy\.
This block differs from the Classification block in that training starts from a set of seed phrases associated with each node in the taxonomy, and does not require labeled documents\.
Note that the Hierarchical text categorization block can only be used in a notebook that is started in an environment based on Runtime 22\.2 or Runtime 23\.1 that includes the Watson Natural Language Processing library\.
**Block name**
`categories_esa_en_stock`
**Supported languages**
The Categories block is available for the following languages\. For a list of the language codes and the corresponding language, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html#lang-codes)\.
de, en
**Capabilities**
Use this block to determine the topics of documents on the web by categorizing web pages into a taxonomy of general domain topics, for ad placement and content recommendation\. The model was tested on data from news reports and general web pages\.
For a list of the categories that can be returned, see [Category types](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-returned-categories.html)\.
**Dependencies on other blocks**
The following block must run before you can run the hierarchical categorization block:
<!-- <ul> -->
* `syntax_izumo_<language>_stock`
<!-- </ul> -->
**Code sample**
import watson_nlp
# Load Syntax and a Categories model for English
syntax_model = watson_nlp.load('syntax_izumo_en_stock')
categories_model = watson_nlp.load('categories_esa_en_stock')
# Run the syntax model on the input text
syntax_prediction = syntax_model.run('IBM announced new advances in quantum computing')
# Run the categories model on the result of syntax
categories = categories_model.run(syntax_prediction)
print(categories)
Output of the code sample:
{
"categories": [
{
"labels":
"technology & computing",
"computing"
],
"score": 0.992489,
"explanation": ]
},
{
"labels":
"science",
"physics"
],
"score": 0.945449,
"explanation": ]
}
],
"producer_id": {
"name": "ESA Hierarchical Categories",
"version": "1.0.0"
}
}
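To consume the prediction programmatically instead of printing it, the result can be converted to a dictionary\. A minimal sketch that reuses the `categories` object from the sample above; it assumes a `to_dict()` conversion that mirrors the JSON output above, which is an assumption, not a documented API:
# Print each category path together with its confidence score
for category in categories.to_dict()['categories']:
    print('/'.join(category['labels']), category['score'])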
**Parent topic:**[Watson Natural Language Processing task catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html)
<!-- </article "role="article" "> -->
|
BCA763BF5F62BDC635AC2E0E7C9C6A47B04745A4 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-keyword.html?context=cdpaas&locale=en | Keyword extraction and ranking | Keyword extraction and ranking
The Watson Natural Language Processing Keyword extraction with ranking block extracts noun phrases from input text based on their relevance.
Block name
keywords_text-rank_<language>_stock
Supported languages
Keyword extraction with text ranking is available for the following languages. For a list of the language codes and the corresponding language, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html#lang-codes).
ar, cs, da, de, en, es, fi, fr, he, hi, it, ja, ko, nb, nl, nn, pt, ro, ru, sk, sv, tr, zh-cn
Capabilities
The keywords and text rank block ranks noun phrases extracted from an input document based on how relevant they are within the document.
Capabilities of keyword extraction and ranking based on an example
Capabilities Examples
Ranks extracted noun phrases based on relevance "Anna went to school at University of California Santa Cruz. Anna joined the university in 2015." -> Anna, University of California Santa Cruz
Dependencies on other blocks
The following blocks must run before you can run the Keyword extraction with ranking block:
* syntax_izumo_<language>_stock
* noun-phrases_rbr_<language>_stock
Code sample
import watson_nlp
text = "Anna went to school at University of California Santa Cruz. Anna joined the university in 2015."
Load Syntax, Noun Phrases and Keywords models for English
syntax_model = watson_nlp.load('syntax_izumo_en_stock')
noun_phrases_model = watson_nlp.load('noun-phrases_rbr_en_stock')
keywords_model = watson_nlp.load('keywords_text-rank_en_stock')
Run the Syntax and Noun Phrases models
syntax_prediction = syntax_model.run(text, parsers=('token', 'lemma', 'part_of_speech'))
noun_phrases = noun_phrases_model.run(text)
Run the keywords model
keywords = keywords_model.run(syntax_prediction, noun_phrases, limit=2)
print(keywords)
Output of the code sample:
'keywords':
[{'text': 'University of California Santa Cruz', 'relevance': 0.939524, 'count': 1},
{'text': 'Anna', 'relevance': 0.891002, 'count': 2}]
Parent topic:[Watson Natural Language Processing task catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html)
| # Keyword extraction and ranking #
The Watson Natural Language Processing Keyword extraction with ranking block extracts noun phrases from input text based on their relevance\.
**Block name**
`keywords_text-rank_<language>_stock`
**Supported languages**
Keyword extraction with text ranking is available for the following languages\. For a list of the language codes and the corresponding language, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html#lang-codes)\.
ar, cs, da, de, en, es, fi, fr, he, hi, it, ja, ko, nb, nl, nn, pt, ro, ru, sk, sv, tr, zh\-cn
**Capabilities**
The keywords and text rank block ranks noun phrases extracted from an input document based on how relevant they are within the document\.
<!-- <table> -->
Capabilities of keyword extraction and ranking based on an example
| Capabilities | Examples |
| ----------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------- |
| Ranks extracted noun phrases based on relevance | "Anna went to school at University of California Santa Cruz\. Anna joined the university in 2015\." \-> Anna, University of California Santa Cruz |
<!-- </table ""> -->
**Dependencies on other blocks**
The following blocks must run before you can run the Keyword extraction with ranking block:
<!-- <ul> -->
* `syntax_izumo_<language>_stock`
* `noun-phrases_rbr_<language>_stock`
<!-- </ul> -->
**Code sample**
import watson_nlp
text = "Anna went to school at University of California Santa Cruz. Anna joined the university in 2015."
# Load Syntax, Noun Phrases and Keywords models for English
syntax_model = watson_nlp.load('syntax_izumo_en_stock')
noun_phrases_model = watson_nlp.load('noun-phrases_rbr_en_stock')
keywords_model = watson_nlp.load('keywords_text-rank_en_stock')
# Run the Syntax and Noun Phrases models
syntax_prediction = syntax_model.run(text, parsers=('token', 'lemma', 'part_of_speech'))
noun_phrases = noun_phrases_model.run(text)
# Run the keywords model
keywords = keywords_model.run(syntax_prediction, noun_phrases, limit=2)
print(keywords)
Output of the code sample:
'keywords':
[{'text': 'University of California Santa Cruz', 'relevance': 0.939524, 'count': 1},
{'text': 'Anna', 'relevance': 0.891002, 'count': 2}]
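The `limit` parameter caps how many ranked keywords are returned\. A minimal sketch that reuses `syntax_prediction` and `noun_phrases` from the sample above; raising the limit simply returns more of the ranked list:
# Request up to five ranked keywords instead of two
keywords_top5 = keywords_model.run(syntax_prediction, noun_phrases, limit=5)
print(keywords_top5)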
**Parent topic:**[Watson Natural Language Processing task catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html)
<!-- </article "role="article" "> -->
|
E1074D5C232CB13E3CD1FB6E832753626D2FE30E | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-language-detection.html?context=cdpaas&locale=en | Language detection | Language detection
The Watson Natural Language Processing Language Detection block identifies the language of input text.
Block name
lang-detect_izumo_multi_stock
Supported languages
The Language Detection block is able to detect the following languages:
af, ar, bs, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh_cn, zh_tw
Capabilities
Use this block to detect the language of an input text.
Dependencies on other blocks
None
Code sample
import watson_nlp
Load the language detection model
lang_detection_model = watson_nlp.load('lang-detect_izumo_multi_stock')
Run it on input text
detected_lang = lang_detection_model.run('IBM announced new advances in quantum computing')
Retrieve language ISO code
print(detected_lang.to_iso_format())
Output of the code sample:
EN
Parent topic:[Watson Natural Language Processing task catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html)
| # Language detection #
The Watson Natural Language Processing Language Detection block identifies the language of input text\.
**Block name**`lang-detect_izumo_multi_stock`
**Supported languages**
The Language Detection block is able to detect the following languages:
af, ar, bs, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh\_cn, zh\_tw
**Capabilities**
Use this block to detect the language of an input text\.
**Dependencies on other blocks**
None
**Code sample**
import watson_nlp
# Load the language detection model
lang_detection_model = watson_nlp.load('lang-detect_izumo_multi_stock')
# Run it on input text
detected_lang = lang_detection_model.run('IBM announced new advances in quantum computing')
# Retrieve language ISO code
print(detected_lang.to_iso_format())
Output of the code sample:
EN
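Because the block is multilingual, the same loaded model handles input in any of the listed languages\. A minimal sketch that reuses `lang_detection_model` from the sample above; the expectation that German input is reported as `DE` follows the ISO format shown above, not a verified output:
# Detect the language of a German sentence
detected_lang_de = lang_detection_model.run('IBM kündigt neue Fortschritte im Quantencomputing an')
print(detected_lang_de.to_iso_format())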
**Parent topic:**[Watson Natural Language Processing task catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html)
<!-- </article "role="article" "> -->
|
883359C27F09C3368292819B64149182441721E1 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-noun-phrase.html?context=cdpaas&locale=en | Noun phrase extraction | Noun phrase extraction
The Watson Natural Language Processing Noun phrase extraction block extracts noun phrases from input text.
Block name
noun-phrases_rbr_<language>_stock
Note: The "rbr" abbreviation in the model name stands for rule-based reasoning. RBR models handle syntactically regular entity types such as number, email, and phone.
Supported languages
Noun phrase extraction is available for the following languages. For a list of the language codes and the corresponding language, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html#lang-codes).
ar, cs, da, de, es, en, fi, fr, he, hi, it, ja, ko, nb, nl, nn, pt, ro, ru, sk, sv, tr, zh_cn, zh_tw
Capabilities
The Noun phrase extraction block extracts non-overlapping noun phrases from the input text.
Capabilities of noun phrase extraction based on an example
Capabilities Examples
Extraction of non-overlapping noun phrases "Anna went to school at University of California Santa Cruz" -> Anna, school, University of California Santa Cruz
Dependencies on other blocks
None
Code sample
import watson_nlp
Load the model for English
noun_phrases_model = watson_nlp.load('noun-phrases_rbr_en_stock')
Run the model on the input text
noun_phrases = noun_phrases_model.run('Anna went to school at University of California Santa Cruz')
print(noun_phrases)
Output of the code sample:
{
"noun_phrases": [
{
"span": {
"begin": 0,
"end": 4,
"text": "Anna"
}
},
{
"span": {
"begin": 13,
"end": 19,
"text": "school"
}
},
{
"span": {
"begin": 23,
"end": 58,
"text": "University of California Santa Cruz"
}
}
],
"producer_id": {
"name": "RBR Noun phrases",
"version": "0.0.1"
}
}
Parent topic:[Watson Natural Language Processing task catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html)
| # Noun phrase extraction #
The Watson Natural Language Processing Noun phrase extraction block extracts noun phrases from input text\.
**Block name**
`noun-phrases_rbr_<language>_stock`
Note: The "rbr" abbreviation in the model name stands for rule\-based reasoning\. RBR models handle syntactically regular entity types such as number, email, and phone\.
**Supported languages**
Noun phrase extraction is available for the following languages\. For a list of the language codes and the corresponding language, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html#lang-codes)\.
ar, cs, da, de, es, en, fi, fr, he, hi, it, ja, ko, nb, nl, nn, pt, ro, ru, sk, sv, tr, zh\_cn, zh\_tw
**Capabilities**
The Noun phrase extraction block extracts non\-overlapping noun phrases from the input text\.
<!-- <table> -->
Capabilities of noun phrase extraction based on an example
| Capabilities | Examples |
| ------------------------------------------- | --------------------------------------------------------------------------------------------------------------------- |
| Extraction of non\-overlapping noun phrases | "Anna went to school at University of California Santa Cruz" \-> Anna, school, University of California Santa Cruz |
<!-- </table ""> -->
**Dependencies on other blocks**
None
**Code sample**
import watson_nlp
# Load the model for English
noun_phrases_model = watson_nlp.load('noun-phrases_rbr_en_stock')
# Run the model on the input text
noun_phrases = noun_phrases_model.run('Anna went to school at University of California Santa Cruz')
print(noun_phrases)
Output of the code sample:
{
"noun_phrases": [
{
"span": {
"begin": 0,
"end": 4,
"text": "Anna"
}
},
{
"span": {
"begin": 13,
"end": 19,
"text": "school"
}
},
{
"span": {
"begin": 23,
"end": 58,
"text": "University of California Santa Cruz"
}
}
],
"producer_id": {
"name": "RBR Noun phrases",
"version": "0.0.1"
}
}
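To work with just the phrase strings, the prediction can be converted to a dictionary\. A minimal sketch that reuses the `noun_phrases` object from the sample above; it assumes a `to_dict()` conversion that mirrors the JSON output above, which is an assumption, not a documented API:
# Collect the extracted phrase texts into a plain list
phrases = [p['span']['text'] for p in noun_phrases.to_dict()['noun_phrases']]
print(phrases)
# Expected output based on the sample above: ['Anna', 'school', 'University of California Santa Cruz']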
**Parent topic:**[Watson Natural Language Processing task catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html)
<!-- </article "role="article" "> -->
|
B4B2E864E1ABD4EA20845750E9567225BB3F417E | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-relation-extraction.html?context=cdpaas&locale=en | Relations extraction | Relations extraction
Watson Natural Language Processing Relations extraction encapsulates algorithms for extracting relations between two entity mentions. For example, in the text Lionel Messi plays for FC Barcelona, a relation extraction model may decide that the entities Lionel Messi and FC Barcelona are in a relationship with each other, and that the relationship type is works for.
Capabilities
Use this model to detect relations between discovered entities.
The following table lists common relation types that are available out-of-the-box after you have run the entity models.
Table 1. Available common relation types between entities
Relation Description
affiliatedWith Exists between two entities that have an affiliation or are similarly connected.
basedIn Exists between an Organization and the place where it is mainly, only, or intrinsically located.
bornAt Exists between a Person and the place where they were born.
bornOn Exists between a Person and the Date or Time when they were born.
clientOf Exists between two entities when one is a direct business client of the other (that is, pays for certain services or products).
colleague Exists between two Persons who are part of the same Organization.
competitor Exists between two Organizations that are engaged in economic competition.
contactOf Relates contact information with an entity.
diedAt Exists between a Person and the place at which he, she, or it died.
diedOn Exists between a Person and the Date or Time on which he, she, or it died.
dissolvedOn Exists between an Organization or URL and the Date or Time when it was dissolved.
educatedAt Exists between a Person and the Organization at which he or she is or was educated.
employedBy Exists between two entities when one pays the other for certain work or services; monetary reward must be involved. In many circumstances, marking this relation requires world knowledge.
foundedOn Exists between an Organization or URL and the Date or Time on which it was founded.
founderOf Exists between a Person and a Facility, Organization, or URL that they founded.
locatedAt Exists between an entity and its location.
managerOf Exists between a Person and another entity such as a Person or Organization that he or she manages as his or her job.
memberOf Exists between an entity, such as a Person or Organization, and another entity to which he, she, or it belongs.
ownerOf Exists between an entity, such as a Person or Organization, and an entity that he, she, or it owns. The owner does not need to have permanent ownership of the entity for the relation to exist.
parentOf Exists between a Person and their children or stepchildren.
partner Exists between two Organizations that are engaged in economic cooperation.
partOf Exists between a smaller and a larger entity of the same type or related types in which the second entity subsumes the first. If the entities are both events, the first must occur within the time span of the second for the relation to be recognized.
partOfMany Exists between smaller and larger entities of the same type or related types in which the second entity, which must be plural, includes the first, which can be singular or plural.
populationOf Exists between a place and the number of people located there, or an organization and the number of members or employees it has.
measureOf This relation indicates the quantity of an entity or measure (height, weight, etc) of an entity.
relative Exists between two Persons who are relatives. To identify parents, children, siblings, and spouses, use the parentOf, siblingOf, and spouseOf relations.
residesIn Exists between a Person and a place where they live or previously lived.
shareholdersOf Exists between a Person or Organization, and an Organization of which the first entity is a shareholder.
siblingOf Exists between a Person and their sibling or stepsibling.
spokespersonFor Exists between a Person and a Facility, Organization, or Person that he or she represents.
spouseOf Exists between two Persons that are spouses.
subsidiaryOf Exists between two Organizations when the first is a subsidiary of the second.
In [Runtime 22.2](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-relation-extraction.html?context=cdpaas&locale=en#runtime-222), relation extraction is provided as an analysis block, which depends on the Syntax analysis block and an entity mention extraction block. Starting with [Runtime 23.1](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-relation-extraction.html?context=cdpaas&locale=en#runtime-231), relation extraction is provided as a workflow, which is run directly on the input text.
Relation extraction in Runtime 23.1
Workflow name
relations_transformer-workflow_multilingual_slate.153m.distilled
Supported languages
The Relations Workflow is available for the following languages. For a list of the language codes and the corresponding language, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html#lang-codes).
ar, de, en, es, fr, it, ja, ko, pt
Code sample
import watson_nlp
Load the workflow model
relations_workflow = watson_nlp.load('relations_transformer-workflow_multilingual_slate.153m.distilled')
Run the relation extraction workflow on the input text
relations = relations_workflow.run('Anna Smith is an engineer. Anna works at IBM.', language_code="en")
print(relations.get_relation_pairs_by_type())
Output of the code sample
{'employedBy': [(('Anna', 'Person'), ('IBM', 'Organization'))]}
Relation extraction in Runtime 22.2
Block name
relations_transformer_en_stock
Supported languages
The Relations extraction block is available for English only.
Dependencies on other blocks
The following block must run before you can run the relations_transformer_en_stock block:
* syntax_izumo_en_stock
This must be followed by one of the following entity models on which the relations extraction block can build its results:
* entity-mentions_rbr_en_stock
* entity-mentions_bert_multi_stock
Code sample
import watson_nlp
Load the models for English
syntax_model = watson_nlp.load('syntax_izumo_en_stock')
entity_mentions_model = watson_nlp.load('entity-mentions_bert_multi_stock')
relation_model = watson_nlp.load('relations_transformer_en_stock')
Run the prerequisite models
syntax_prediction = syntax_model.run('Anna Smith is an engineer. Anna works at IBM.')
entity_mentions = entity_mentions_model.run(syntax_prediction)
Run the relations model
relations_on_mentions = relation_model.run(syntax_prediction, mentions_prediction=entity_mentions)
print(relations_on_mentions.get_relation_pairs_by_type())
Output of the code sample:
{'employedBy': [(('Anna', 'Person'), ('IBM', 'Organization'))]}
Parent topic:[Watson Natural Language Processing task catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html)
| # Relations extraction #
Watson Natural Language Processing Relations extraction encapsulates algorithms for extracting relations between two entity mentions\. For example, in the text *Lionel Messi plays for FC Barcelona*, a relation extraction model may decide that the entities `Lionel Messi` and `FC Barcelona` are in a relationship with each other, and that the relationship type is `works for`\.
**Capabilities**
Use this model to detect relations between discovered entities\.
The following table lists common relation types that are available out\-of\-the\-box after you have run the entity models\.
<!-- <table> -->
Table 1\. Available common relation types between entities
| Relation | Description |
| ----------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `affiliatedWith` | Exists between two entities that have an affiliation or are similarly connected\. |
| `basedIn` | Exists between an Organization and the place where it is mainly, only, or intrinsically located\. |
| `bornAt` | Exists between a Person and the place where they were born\. |
| `bornOn` | Exists between a Person and the Date or Time when they were born\. |
| `clientOf` | Exists between two entities when one is a direct business client of the other (that is, pays for certain services or products)\. |
| `colleague` | Exists between two Persons who are part of the same Organization\. |
| `competitor` | Exists between two Organizations that are engaged in economic competition\. |
| `contactOf` | Relates contact information with an entity\. |
| `diedAt` | Exists between a Person and the place at which he, she, or it died\. |
| `diedOn` | Exists between a Person and the Date or Time on which he, she, or it died\. |
| `dissolvedOn` | Exists between an Organization or URL and the Date or Time when it was dissolved\. |
| `educatedAt` | Exists between a Person and the Organization at which he or she is or was educated\. |
| `employedBy` | Exists between two entities when one pays the other for certain work or services; monetary reward must be involved\. In many circumstances, marking this relation requires world knowledge\. |
| `foundedOn` | Exists between an Organization or URL and the Date or Time on which it was founded\. |
| `founderOf` | Exists between a Person and a Facility, Organization, or URL that they founded\. |
| `locatedAt` | Exists between an entity and its location\. |
| `managerOf` | Exists between a Person and another entity such as a Person or Organization that he or she manages as his or her job\. |
| `memberOf` | Exists between an entity, such as a Person or Organization, and another entity to which he, she, or it belongs\. |
| `ownerOf` | Exists between an entity, such as a Person or Organization, and an entity that he, she, or it owns\. The owner does not need to have permanent ownership of the entity for the relation to exist\. |
| `parentOf` | Exists between a Person and their children or stepchildren\. |
| `partner` | Exists between two Organizations that are engaged in economic cooperation\. |
| `partOf` | Exists between a smaller and a larger entity of the same type or related types in which the second entity subsumes the first\. If the entities are both events, the first must occur within the time span of the second for the relation to be recognized\. |
| `partOfMany` | Exists between smaller and larger entities of the same type or related types in which the second entity, which must be plural, includes the first, which can be singular or plural\. |
| `populationOf` | Exists between a place and the number of people located there, or an organization and the number of members or employees it has\. |
| `measureOf` | This relation indicates the quantity of an entity or measure (height, weight, etc) of an entity\. |
| `relative` | Exists between two Persons who are relatives\. To identify parents, children, siblings, and spouses, use the `parentOf`, `siblingOf`, and `spouseOf` relations\. |
| `residesIn` | Exists between a Person and a place where they live or previously lived\. |
| `shareholdersOf` | Exists between a Person or Organization, and an Organization of which the first entity is a shareholder\. |
| `siblingOf` | Exists between a Person and their sibling or stepsibling\. |
| `spokespersonFor` | Exists between a Person and a Facility, Organization, or Person that he or she represents\. |
| `spouseOf` | Exists between two Persons that are spouses\. |
| `subsidiaryOf` | Exists between two Organizations when the first is a subsidiary of the second\. |
<!-- </table ""> -->
In [Runtime 22\.2](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-relation-extraction.html?context=cdpaas&locale=en#runtime-222), relation extraction is provided as an analysis block, which depends on the Syntax analysis block and an entity mention extraction block\. Starting with [Runtime 23\.1](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-relation-extraction.html?context=cdpaas&locale=en#runtime-231), relation extraction is provided as a workflow, which is run directly on the input text\.
### Relation extraction in Runtime 23\.1 ###
**Workflow name**
`relations_transformer-workflow_multilingual_slate.153m.distilled`
**Supported languages** The Relations Workflow is available for the following languages\. For a list of the language codes and the corresponding language, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html#lang-codes)\.
ar, de, en, es, fr, it, ja, ko, pt
**Code sample**
import watson_nlp
# Load the workflow model
relations_workflow = watson_nlp.load('relations_transformer-workflow_multilingual_slate.153m.distilled')
# Run the relation extraction workflow on the input text
relations = relations_workflow.run('Anna Smith is an engineer. Anna works at IBM.', language_code="en")
print(relations.get_relation_pairs_by_type())
**Output of the code sample**
{'employedBy': [(('Anna', 'Person'), ('IBM', 'Organization'))]}
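The returned value is a plain dictionary keyed by relation type, as the output above shows, so it can be walked with standard Python\. A minimal sketch that reuses the `relations` object from the sample above:
# Print each relation type together with its entity pairs
for relation_type, pairs in relations.get_relation_pairs_by_type().items():
    for (first_text, first_type), (second_text, second_type) in pairs:
        print(f'{first_text} ({first_type}) --{relation_type}--> {second_text} ({second_type})')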
### Relation extraction in Runtime 22\.2 ###
**Block name**
`relations_transformer_en_stock`
**Supported languages**
The Relations extraction block is available for English only\.
**Dependencies on other blocks**
The following block must run before you can run the `relations_transformer_en_stock` block:
<!-- <ul> -->
* `syntax_izumo_en_stock`
<!-- </ul> -->
This must be followed by one of the following entity models on which the relations extraction block can build its results:
<!-- <ul> -->
* `entity-mentions_rbr_en_stock`
* `entity-mentions_bert_multi_stock`
<!-- </ul> -->
**Code sample**
import watson_nlp
# Load the models for English
syntax_model = watson_nlp.load('syntax_izumo_en_stock')
entity_mentions_model = watson_nlp.load('entity-mentions_bert_multi_stock')
relation_model = watson_nlp.load('relations_transformer_en_stock')
# Run the prerequisite models
syntax_prediction = syntax_model.run('Anna Smith is an engineer. Anna works at IBM.')
entity_mentions = entity_mentions_model.run(syntax_prediction)
# Run the relations model
relations_on_mentions = relation_model.run(syntax_prediction, mentions_prediction=entity_mentions)
print(relations_on_mentions.get_relation_pairs_by_type())
Output of the code sample:
{'employedBy': [(('Anna', 'Person'), ('IBM', 'Organization'))]}
**Parent topic:**[Watson Natural Language Processing task catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html)
<!-- </article "role="article" "> -->
|
A152F3047C3B41F06773051EA4B5B6B14DDE709E | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-sentiment.html?context=cdpaas&locale=en | Sentiment classification | Sentiment classification
The Watson Natural Language Processing Sentiment classification models classify the sentiment of the input text.
Supported languages
Sentiment classification is available for the following languages. For a list of the language codes and the corresponding language, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html#lang-codes).
ar, cs, da, de, en, es, fi, fr, he, hi, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sv, tr, zh-cn
Sentiment
The sentiment of text can be positive, negative or neutral.
The sentiment model computes the sentiment for each sentence in the input document. The aggregated sentiment for the entire document is also calculated using the sentiment transformer workflow in Runtime 23.1. If you are using the sentiment models in Runtime 22.2, the overall document sentiment can be computed with the helper method predict_document_sentiment.
The classifications returned contain a probability. The sentiment score varies from -1 to 1. A score greater than 0 denotes a positive sentiment, a score less than 0 a negative sentiment, and a score of 0 a neutral sentiment.
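To make this score convention concrete, the following minimal helper (illustrative only, not part of the library) maps a score to a label:
# Illustrative helper: map a sentiment score in [-1, 1] to the label convention described above
def score_to_label(score):
    if score > 0:
        return 'SENT_POSITIVE'
    if score < 0:
        return 'SENT_NEGATIVE'
    return 'SENT_NEUTRAL'
print(score_to_label(-0.339735))  # SENT_NEGATIVE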
Sentence sentiment workflows in Runtime 23.1
Workflow names
* sentiment-aggregated_transformer-workflow_multilingual_slate.153m.distilled
* sentiment-aggregated_transformer-workflow_multilingual_slate.153m.distilled-cpu
The sentiment-aggregated_transformer-workflow_multilingual_slate.153m.distilled workflow can be used on both CPUs and GPUs.
The sentiment-aggregated_transformer-workflow_multilingual_slate.153m.distilled-cpu workflow is optimized for CPU-based runtimes.
Code sample using the sentiment-aggregated_transformer-workflow_multilingual_slate.153m.distilled workflow
import watson_nlp
# Load the Sentiment workflow
sentiment_model = watson_nlp.load('sentiment-aggregated_transformer-workflow_multilingual_slate.153m.distilled-cpu')
# Run the sentiment model on the input text
sentiment_result = sentiment_model.run('The rooms are nice. But the beds are not very comfortable.')
# Print the sentence sentiment results
print(sentiment_result)
Output of the code sample
{
"document_sentiment": {
"score": -0.339735,
"label": "SENT_NEGATIVE",
"mixed": true,
"sentiment_mentions": [
{
"span": {
"begin": 0,
"end": 19,
"text": "The rooms are nice."
},
"sentimentprob": {
"positive": 0.9720447063446045,
"neutral": 0.011838269419968128,
"negative": 0.016117043793201447
}
},
{
"span": {
"begin": 20,
"end": 58,
"text": "But the beds are not very comfortable."
},
"sentimentprob": {
"positive": 0.0011594508541747928,
"neutral": 0.006315878126770258,
"negative": 0.9925248026847839
}
}
]
},
"targeted_sentiments": {
"targeted_sentiments": {},
"producer_id": {
"name": "Aggregated Sentiment Workflow",
"version": "0.0.1"
}
},
"producer_id": {
"name": "Aggregated Sentiment Workflow",
"version": "0.0.1"
}
}
Sentence sentiment blocks in 22.2 runtimes
Block name
sentiment_sentence-bert_multi_stock
Dependencies on other blocks
The following block must run before you can run the Sentence sentiment block:
* syntax_izumo_<language>_stock
Code sample using the sentiment_sentence-bert_multi_stock block
import watson_nlp
from watson_nlp.toolkit.sentiment_analysis_utils import predict_document_sentiment
# Load Syntax and a Sentiment model for English
syntax_model = watson_nlp.load('syntax_izumo_en_stock')
sentiment_model = watson_nlp.load('sentiment_sentence-bert_multi_stock')
# Run the syntax model on the input text
syntax_result = syntax_model.run('The rooms are nice. But the beds are not very comfortable.')
# Run the sentiment model on the syntax results
sentiment_result = sentiment_model.run_batch(syntax_result.get_sentence_texts(), syntax_result.sentences)
# Print the sentence sentiment results
print(sentiment_result)
# Get the aggregated document sentiment
document_sentiment = predict_document_sentiment(sentiment_result, sentiment_model.class_idxs)
print(document_sentiment)
Output of the code sample:
[{
"score": 0.9540348989256836,
"label": "SENT_POSITIVE",
"sentiment_mention": {
"span": {
"begin": 0,
"end": 19,
"text": "The rooms are nice."
},
"sentimentprob": {
"positive": 0.919123649597168,
"neutral": 0.05862388014793396,
"negative": 0.022252488881349564
}
},
"producer_id": {
"name": "Sentence Sentiment Bert Processing",
"version": "0.1.0"
}
}, {
"score": -0.9772116371114815,
"label": "SENT_NEGATIVE",
"sentiment_mention": {
"span": {
"begin": 20,
"end": 58,
"text": "But the beds are not very comfortable."
},
"sentimentprob": {
"positive": 0.015949789434671402,
"neutral": 0.025898978114128113,
"negative": 0.9581512808799744
}
},
"producer_id": {
"name": "Sentence Sentiment Bert Processing",
"version": "0.1.0"
}
}]
{
"score": -0.335185,
"label": "SENT_NEGATIVE",
"mixed": true,
"sentiment_mentions": [
{
"span": {
"begin": 0,
"end": 19,
"text": "The rooms are nice."
},
"sentimentprob": {
"positive": 0.919123649597168,
"neutral": 0.05862388014793396,
"negative": 0.022252488881349564
}
},
{
"span": {
"begin": 20,
"end": 58,
"text": "But the beds are not very comfortable."
},
"sentimentprob": {
"positive": 0.015949789434671402,
"neutral": 0.025898978114128113,
"negative": 0.9581512808799744
}
}
]
}
Targets sentiment extraction
Targets sentiment extraction extracts sentiments expressed in text and identifies the targets of those sentiments.
Unlike the sentiment block described above, it can handle multiple targets with different sentiments in one sentence.
For example, given the input sentence The served food was delicious, yet the service was slow., the Targets sentiment block identifies that there is a positive sentiment expressed in the target "food", and a negative sentiment expressed in "service".
The model has been fine-tuned on English data only. Although you can use the model on the other languages listed under Supported languages, the results might vary.
Targets sentiment workflows in Runtime 23.1
Workflow names
* targets-sentiment_transformer-workflow_multilingual_slate.153m.distilled
* targets-sentiment_transformer-workflow_multilingual_slate.153m.distilled-cpu
The targets-sentiment_transformer-workflow_multilingual_slate.153m.distilled workflow can be used on both CPUs and GPUs.
The targets-sentiment_transformer-workflow_multilingual_slate.153m.distilled-cpu workflow is optimized for CPU-based runtimes.
Code sample for the targets-sentiment_transformer-workflow_multilingual_slate.153m.distilled workflow
import watson_nlp
# Load Targets Sentiment model for English
targets_sentiment_model = watson_nlp.load('targets-sentiment_transformer-workflow_multilingual_slate.153m.distilled')
# Run the targets sentiment model on the input text
targets_sentiments = targets_sentiment_model.run('The rooms are nice, but the bed was not very comfortable.')
# Print the targets with the associated sentiment
print(targets_sentiments)
Output of the code sample:
{
"targeted_sentiments": {
"rooms": {
"score": 0.990798830986023,
"label": "SENT_POSITIVE",
"mixed": false,
"sentiment_mentions": [
{
"span": {
"begin": 4,
"end": 9,
"text": "rooms"
},
"sentimentprob": {
"positive": 0.990798830986023,
"neutral": 0.0,
"negative": 0.00920116901397705
}
}
]
},
"bed": {
"score": -0.9920912981033325,
"label": "SENT_NEGATIVE",
"mixed": false,
"sentiment_mentions": [
{
"span": {
"begin": 28,
"end": 31,
"text": "bed"
},
"sentimentprob": {
"positive": 0.00790870189666748,
"neutral": 0.0,
"negative": 0.9920912981033325
}
}
]
}
},
"producer_id": {
"name": "Transformer-based Targets Sentiment Extraction Workflow",
"version": "0.0.1"
}
}
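To process the result programmatically, you can convert it to a dictionary. The to_dict() method is assumed to be available on the result object here; the dictionary structure mirrors the printed output above.
# Sketch: list each target with its aggregated sentiment label and score
result_dict = targets_sentiments.to_dict()
for target, sentiment in result_dict['targeted_sentiments'].items():
    print(target, sentiment['label'], sentiment['score'])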
Targets sentiment blocks in 22.2 runtimes
Block name
targets-sentiment_sequence-bert_multi_stock
Dependencies on other blocks
The following block must run before you can run the Targets sentiment extraction block:
* syntax_izumo_<language>_stock
Code sample using the targets-sentiment_sequence-bert_multi_stock block
import watson_nlp
# Load Syntax and the Targets Sentiment model for English
syntax_model = watson_nlp.load('syntax_izumo_en_stock')
targets_sentiment_model = watson_nlp.load('targets-sentiment_sequence-bert_multi_stock')
# Run the syntax model on the input text
syntax_result = syntax_model.run('The rooms are nice, but the bed was not very comfortable.')
# Run the targets sentiment model on the syntax results
targets_sentiments = targets_sentiment_model.run(syntax_result)
# Print the targets with the associated sentiment
print(targets_sentiments)
Output of the code sample:
{
"targeted_sentiments": {
"rooms": {
"score": 0.9989274144172668,
"label": "SENT_POSITIVE",
"mixed": false,
"sentiment_mentions": [
{
"span": {
"begin": 4,
"end": 9,
"text": "rooms"
},
"sentimentprob": {
"positive": 0.9989274144172668,
"neutral": 0.0,
"negative": 0.0010725855827331543
}
}
]
},
"bed": {
"score": -0.9977545142173767,
"label": "SENT_NEGATIVE",
"mixed": false,
"sentiment_mentions": [
{
"span": {
"begin": 28,
"end": 31,
"text": "bed"
},
"sentimentprob": {
"positive": 0.002245485782623291,
"neutral": 0.0,
"negative": 0.9977545142173767
}
}
]
}
},
"producer_id": {
"name": "BERT TSA",
"version": "0.0.1"
}
}
Parent topic:[Watson Natural Language Processing task catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html)
| # Sentiment classification #
The Watson Natural Language Processing Sentiment classification models classify the sentiment of the input text\.
**Supported languages**
Sentiment classification is available for the following languages\. For a list of the language codes and the corresponding language, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html#lang-codes)\.
ar, cs, da, de, en, es, fi, fr, he, hi, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sv, tr, zh\-cn
## Sentiment ##
The sentiment of text can be positive, negative or neutral\.
The sentiment model computes the sentiment for each sentence in the input document\. The aggregated sentiment for the entire document is also calculated using the sentiment transformer workflow in Runtime 23\.1\. If you are using the sentiment models in Runtime 22\.2, the overall document sentiment can be computed with the helper method `predict_document_sentiment`\.
The classifications returned contain a probability\. The sentiment score varies from \-1 to 1\. A score greater than 0 denotes a positive sentiment, a score less than 0 a negative sentiment, and a score of 0 a neutral sentiment\.
### Sentence sentiment workflows in Runtime 23\.1 ###
**Workflow names**
<!-- <ul> -->
* `sentiment-aggregated_transformer-workflow_multilingual_slate.153m.distilled`
* `sentiment-aggregated_transformer-workflow_multilingual_slate.153m.distilled-cpu`
<!-- </ul> -->
The `sentiment-aggregated_transformer-workflow_multilingual_slate.153m.distilled` workflow can be used on both CPUs and GPUs\.
The `sentiment-aggregated_transformer-workflow_multilingual_slate.153m.distilled-cpu` workflow is optimized for CPU\-based runtimes\.
**Code sample using the `sentiment-aggregated_transformer-workflow_multilingual_slate.153m.distilled` workflow**
import watson_nlp
# Load the Sentiment workflow
sentiment_model = watson_nlp.load('sentiment-aggregated_transformer-workflow_multilingual_slate.153m.distilled-cpu')
# Run the sentiment model on the input text
sentiment_result = sentiment_model.run('The rooms are nice. But the beds are not very comfortable.')
# Print the sentence sentiment results
print(sentiment_result)
**Output of the code sample**
{
"document_sentiment": {
"score": -0.339735,
"label": "SENT_NEGATIVE",
"mixed": true,
"sentiment_mentions": [
{
"span": {
"begin": 0,
"end": 19,
"text": "The rooms are nice."
},
"sentimentprob": {
"positive": 0.9720447063446045,
"neutral": 0.011838269419968128,
"negative": 0.016117043793201447
}
},
{
"span": {
"begin": 20,
"end": 58,
"text": "But the beds are not very comfortable."
},
"sentimentprob": {
"positive": 0.0011594508541747928,
"neutral": 0.006315878126770258,
"negative": 0.9925248026847839
}
}
]
},
"targeted_sentiments": {
"targeted_sentiments": {},
"producer_id": {
"name": "Aggregated Sentiment Workflow",
"version": "0.0.1"
}
},
"producer_id": {
"name": "Aggregated Sentiment Workflow",
"version": "0.0.1"
}
}
### Sentence sentiment blocks in 22\.2 runtimes ###
**Block name**
`sentiment_sentence-bert_multi_stock`
**Dependencies on other blocks**
The following block must run before you can run the Sentence sentiment block:
<!-- <ul> -->
* `syntax_izumo_<language>_stock`
<!-- </ul> -->
**Code sample using the `sentiment_sentence-bert_multi_stock` block**
import watson_nlp
from watson_nlp.toolkit.sentiment_analysis_utils import predict_document_sentiment
# Load Syntax and a Sentiment model for English
syntax_model = watson_nlp.load('syntax_izumo_en_stock')
sentiment_model = watson_nlp.load('sentiment_sentence-bert_multi_stock')
# Run the syntax model on the input text
syntax_result = syntax_model.run('The rooms are nice. But the beds are not very comfortable.')
# Run the sentiment model on the syntax results
sentiment_result = sentiment_model.run_batch(syntax_result.get_sentence_texts(), syntax_result.sentences)
# Print the sentence sentiment results
print(sentiment_result)
# Get the aggregated document sentiment
document_sentiment = predict_document_sentiment(sentiment_result, sentiment_model.class_idxs)
print(document_sentiment)
Output of the code sample:
[{
"score": 0.9540348989256836,
"label": "SENT_POSITIVE",
"sentiment_mention": {
"span": {
"begin": 0,
"end": 19,
"text": "The rooms are nice."
},
"sentimentprob": {
"positive": 0.919123649597168,
"neutral": 0.05862388014793396,
"negative": 0.022252488881349564
}
},
"producer_id": {
"name": "Sentence Sentiment Bert Processing",
"version": "0.1.0"
}
}, {
"score": -0.9772116371114815,
"label": "SENT_NEGATIVE",
"sentiment_mention": {
"span": {
"begin": 20,
"end": 58,
"text": "But the beds are not very comfortable."
},
"sentimentprob": {
"positive": 0.015949789434671402,
"neutral": 0.025898978114128113,
"negative": 0.9581512808799744
}
},
"producer_id": {
"name": "Sentence Sentiment Bert Processing",
"version": "0.1.0"
}
}]
{
"score": -0.335185,
"label": "SENT_NEGATIVE",
"mixed": true,
"sentiment_mentions": [
{
"span": {
"begin": 0,
"end": 19,
"text": "The rooms are nice."
},
"sentimentprob": {
"positive": 0.919123649597168,
"neutral": 0.05862388014793396,
"negative": 0.022252488881349564
}
},
{
"span": {
"begin": 20,
"end": 58,
"text": "But the beds are not very comfortable."
},
"sentimentprob": {
"positive": 0.015949789434671402,
"neutral": 0.025898978114128113,
"negative": 0.9581512808799744
}
}
]
}
## Targets sentiment extraction ##
Targets sentiment extraction extracts sentiments expressed in text and identifies the targets of those sentiments\.
Unlike the sentiment block described above, it can handle multiple targets with different sentiments in one sentence\.
For example, given the input sentence *The served food was delicious, yet the service was slow\.*, the Targets sentiment block identifies that there is a positive sentiment expressed in the target "food", and a negative sentiment expressed in "service"\.
The model has been fine\-tuned on English data only\. Although you can use the model on the other languages listed under Supported languages, the results might vary\.
### Targets sentiment workflows in Runtime 23\.1 ###
**Workflow names**
<!-- <ul> -->
* `targets-sentiment_transformer-workflow_multilingual_slate.153m.distilled`
* `targets-sentiment_transformer-workflow_multilingual_slate.153m.distilled-cpu`
<!-- </ul> -->
The `targets-sentiment_transformer-workflow_multilingual_slate.153m.distilled` workflow can be used on both CPUs and GPUs\.
The `targets-sentiment_transformer-workflow_multilingual_slate.153m.distilled-cpu` workflow is optimized for CPU\-based runtimes\.
**Code sample for the `targets-sentiment_transformer-workflow_multilingual_slate.153m.distilled` workflow**
import watson_nlp
# Load Targets Sentiment model for English
targets_sentiment_model = watson_nlp.load('targets-sentiment_transformer-workflow_multilingual_slate.153m.distilled')
# Run the targets sentiment model on the input text
targets_sentiments = targets_sentiment_model.run('The rooms are nice, but the bed was not very comfortable.')
# Print the targets with the associated sentiment
print(targets_sentiments)
**Output of the code sample:**
{
"targeted_sentiments": {
"rooms": {
"score": 0.990798830986023,
"label": "SENT_POSITIVE",
"mixed": false,
"sentiment_mentions": [
{
"span": {
"begin": 4,
"end": 9,
"text": "rooms"
},
"sentimentprob": {
"positive": 0.990798830986023,
"neutral": 0.0,
"negative": 0.00920116901397705
}
}
]
},
"bed": {
"score": -0.9920912981033325,
"label": "SENT_NEGATIVE",
"mixed": false,
"sentiment_mentions": [
{
"span": {
"begin": 28,
"end": 31,
"text": "bed"
},
"sentimentprob": {
"positive": 0.00790870189666748,
"neutral": 0.0,
"negative": 0.9920912981033325
}
}
]
}
},
"producer_id": {
"name": "Transformer-based Targets Sentiment Extraction Workflow",
"version": "0.0.1"
}
}
### Targets sentiment blocks in 22\.2 runtimes ###
**Block name**
`targets-sentiment_sequence-bert_multi_stock`
**Dependencies on other blocks**
The following block must run before you can run the Targets sentiment extraction block:
<!-- <ul> -->
* `syntax_izumo_<language>_stock`
<!-- </ul> -->
**Code sample using the `targets-sentiment_sequence-bert_multi_stock` block**
import watson_nlp
# Load Syntax and the Targets Sentiment model for English
syntax_model = watson_nlp.load('syntax_izumo_en_stock')
targets_sentiment_model = watson_nlp.load('targets-sentiment_sequence-bert_multi_stock')
# Run the syntax model on the input text
syntax_result = syntax_model.run('The rooms are nice, but the bed was not very comfortable.')
# Run the targets sentiment model on the syntax results
targets_sentiments = targets_sentiment_model.run(syntax_result)
# Print the targets with the associated sentiment
print(targets_sentiments)
Output of the code sample:
{
"targeted_sentiments": {
"rooms": {
"score": 0.9989274144172668,
"label": "SENT_POSITIVE",
"mixed": false,
"sentiment_mentions": [
{
"span": {
"begin": 4,
"end": 9,
"text": "rooms"
},
"sentimentprob": {
"positive": 0.9989274144172668,
"neutral": 0.0,
"negative": 0.0010725855827331543
}
}
]
},
"bed": {
"score": -0.9977545142173767,
"label": "SENT_NEGATIVE",
"mixed": false,
"sentiment_mentions": [
{
"span": {
"begin": 28,
"end": 31,
"text": "bed"
},
"sentimentprob": {
"positive": 0.002245485782623291,
"neutral": 0.0,
"negative": 0.9977545142173767
}
}
]
}
},
"producer_id": {
"name": "BERT TSA",
"version": "0.0.1"
}
}
**Parent topic:**[Watson Natural Language Processing task catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html)
<!-- </article "role="article" "> -->
|
DCE29488A4D041B77F6E9B1B514F41335FAE0696 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-syntax.html?context=cdpaas&locale=en | Syntax analysis | Syntax analysis
The Watson Natural Language Processing Syntax block encapsulates syntax analysis functionality.
Block names
* syntax_izumo_<language>_stock
* syntax_izumo_<language>_stock-dp (Runtime 23.1 only)
Supported languages
The Syntax analysis block is available for the following languages. For a list of the language codes and the corresponding language, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html#lang-codes).
Language codes to use for model syntax_izumo_<language>_stock: af, ar, bs, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh_cn, zh_tw
Language codes to use for model syntax_izumo_<language>_stock-dp: af, ar, bs, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh
List of the supported languages for each syntax task
Task Supported language codes
Tokenization af, ar, bs, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh_cn, zh_tw, zh
Part-of-speech tagging af, ar, bs, ca, cs, da, de, nl, nn, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, pl, pt, ro, ru, sk, sr, sv, tr, zh_cn, zh_tw, zh
Lemmatization af, ar, bs, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh_cn, zh_tw, zh
Sentence detection af, ar, bs, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh_cn, zh_tw, zh
Paragraph detection af, ar, bs, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh_cn, zh_tw, zh
Dependency parsing af, ar, bs, cs, da, de, en, es, fi, fr, hi, hr, it, ja, nb, nl, nn, pt, ro, ru, sk, sr, sv
Capabilities
Use this block to perform tasks like sentence detection, tokenization, part-of-speech tagging, lemmatization and dependency parsing in different languages. For most tasks, you will likely only need sentence detection, tokenization, and part-of-speech tagging. For these use cases, use the syntax_izumo_<language>_stock model. If you want to run dependency parsing in Runtime 23.1, use the syntax_izumo_<language>_stock-dp model. In Runtime 22.2, dependency parsing is included in the syntax_izumo_<language>_stock model.
The analysis for Part-of-speech (POS) tagging and dependencies follows the Universal Parts of Speech tagset ([Universal POS tags](https://universaldependencies.org/u/pos/)) and the Universal Dependencies v2 tagset ([Universal Dependency Relations](https://universaldependencies.org/u/dep/)).
The following table shows you the capabilities of each task based on the same example and the outcome to the parse.
Capabilities of each syntax task based on an example
Capabilities Examples Parser attributes
Tokenization "I don't like Mondays" --> "I", "do", "n't", "like", "Mondays" token
Part-of-speech detection "I don't like Mondays" --> "I"\POS_PRON, "do"\POS_AUX, "n't"\POS_PART, "like"\POS_VERB, "Mondays"\POS_PROPN part_of_speech
Lemmatization "I don't like Mondays" --> "I", "do", "not", "like", "Monday" lemma
Dependency parsing "I don't like Mondays" --> "I"-SUBJECT->"like"<-OBJECT-"Mondays" dependency
Sentence detection "I don't like Mondays" --> returns this sentence sentence
Paragraph detection (Currently paragraph detection is still experimental and returns similar results to sentence detection.) "I don't like Mondays" --> returns this sentence as being a paragraph sentence
Dependencies on other blocks
None
Code sample
import watson_nlp
# Load Syntax for English
syntax_model = watson_nlp.load('syntax_izumo_en_stock')
# Detect tokens, lemma and part-of-speech
text = 'I don\'t like Mondays'
syntax_prediction = syntax_model.run(text, parsers=('token', 'lemma', 'part_of_speech'))
# Print the syntax result
print(syntax_prediction)
Output of the code sample:
{
"text": "I don't like Mondays",
"producer_id": {
"name": "Izumo Text Processing",
"version": "0.0.1"
},
"tokens": [
{
"span": {
"begin": 0,
"end": 1,
"text": "I"
},
"lemma": "I",
"part_of_speech": "POS_PRON"
},
{
"span": {
"begin": 2,
"end": 4,
"text": "do"
},
"lemma": "do",
"part_of_speech": "POS_AUX"
},
{
"span": {
"begin": 4,
"end": 7,
"text": "n't"
},
"lemma": "not",
"part_of_speech": "POS_PART"
},
{
"span": {
"begin": 8,
"end": 12,
"text": "like"
},
"lemma": "like",
"part_of_speech": "POS_VERB"
},
{
"span": {
"begin": 13,
"end": 20,
"text": "Mondays"
},
"lemma": "Monday",
"part_of_speech": "POS_PROPN"
}
],
"sentences": [
{
"span": {
"begin": 0,
"end": 20,
"text": "I don't like Mondays"
}
}
],
"paragraphs": [
{
"span": {
"begin": 0,
"end": 20,
"text": "I don't like Mondays"
}
}
]
}
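To run dependency parsing in Runtime 23.1, load the -dp variant of the model. The following is a minimal sketch, assuming that runtime; the 'dependency' parser name is taken from the parser attributes column in the table above.
import watson_nlp
# Load the dependency-parsing variant of the Syntax model (Runtime 23.1 only)
syntax_model_dp = watson_nlp.load('syntax_izumo_en_stock-dp')
# Run with the dependency parser enabled
syntax_dp_prediction = syntax_model_dp.run('I don\'t like Mondays', parsers=('token', 'dependency'))
print(syntax_dp_prediction)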
Parent topic:[Watson Natural Language Processing task catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html)
| # Syntax analysis #
The Watson Natural Language Processing Syntax block encapsulates syntax analysis functionality\.
**Block names**
<!-- <ul> -->
* `syntax_izumo_<language>_stock`
* `syntax_izumo_<language>_stock-dp` (Runtime 23\.1 only)
<!-- </ul> -->
**Supported languages**
The Syntax analysis block is available for the following languages\. For a list of the language codes and the corresponding language, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html#lang-codes)\.
Language codes to use for model `syntax_izumo_<language>_stock`: af, ar, bs, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh\_cn, zh\_tw
Language codes to use for model `syntax_izumo_<language>_stock-dp`: af, ar, bs, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh
<!-- <table> -->
List of the supported languages for each syntax task
| Task | Supported language codes |
| ------------------------ | -------------------------------------------------------------------------------------------------------------------------------------- |
| Tokenization | af, ar, bs, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh\_cn, zh\_tw, zh |
| Part\-of\-speech tagging | af, ar, bs, ca, cs, da, de, nl, nn, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, pl, pt, ro, ru, sk, sr, sv, tr, zh\_cn, zh\_tw, zh |
| Lemmatization | af, ar, bs, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh\_cn, zh\_tw, zh |
| Sentence detection | af, ar, bs, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh\_cn, zh\_tw, zh |
| Paragraph detection | af, ar, bs, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh\_cn, zh\_tw, zh |
| Dependency parsing | af, ar, bs, cs, da, de, en, es, fi, fr, hi, hr, it, ja, nb, nl, nn, pt, ro, ru, sk, sr, sv |
<!-- </table ""> -->
**Capabilities**
Use this block to perform tasks like sentence detection, tokenization, part\-of\-speech tagging, lemmatization and dependency parsing in different languages\. For most tasks, you will likely only need sentence detection, tokenization, and part\-of\-speech tagging\. For these use cases, use the `syntax_izumo_<language>_stock` model\. If you want to run dependency parsing in Runtime 23\.1, use the `syntax_izumo_<language>_stock-dp` model\. In Runtime 22\.2, dependency parsing is included in the `syntax_izumo_<language>_stock` model\.
The analysis for Part\-of\-speech (POS) tagging and dependencies follows the Universal Parts of Speech tagset ([Universal POS tags](https://universaldependencies.org/u/pos/)) and the Universal Dependencies v2 tagset ([Universal Dependency Relations](https://universaldependencies.org/u/dep/))\.
The following table shows you the capabilities of each task based on the same example and the outcome to the parse\.
<!-- <table> -->
Capabilities of each syntax task based on an example
| Capabilities | Examples | Parser attributes |
| ----------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------- | ----------------- |
| Tokenization | "I don't like Mondays" \-\-> "I", "do", "n't", "like", "Mondays" | token |
| Part\-of\-speech detection | "I don't like Mondays" \-\-> "I"\\POS\_PRON, "do"\\POS\_AUX, "n't"\\POS\_PART, "like"\\POS\_VERB, "Mondays"\\POS\_PROPN | part\_of\_speech |
| Lemmatization | "I don't like Mondays" \-\-> "I", "do", "not", "like", "Monday" | lemma |
| Dependency parsing | "I don't like Mondays" \-\-> "I"\-SUBJECT\->"like"<\-OBJECT\-"Mondays" | dependency |
| Sentence detection | "I don't like Mondays" \-\-> returns this sentence | sentence |
| Paragraph detection (Currently paragraph detection is still experimental and returns similar results to sentence detection\.) | "I don't like Mondays" \-\-> returns this sentence as being a paragraph | sentence |
<!-- </table ""> -->
**Dependencies on other blocks**
None
**Code sample**
import watson_nlp
# Load Syntax for English
syntax_model = watson_nlp.load('syntax_izumo_en_stock')
# Detect tokens, lemma and part-of-speech
text = 'I don\'t like Mondays'
syntax_prediction = syntax_model.run(text, parsers=('token', 'lemma', 'part_of_speech'))
# Print the syntax result
print(syntax_prediction)
Output of the code sample:
{
"text": "I don't like Mondays",
"producer_id": {
"name": "Izumo Text Processing",
"version": "0.0.1"
},
"tokens": [
{
"span": {
"begin": 0,
"end": 1,
"text": "I"
},
"lemma": "I",
"part_of_speech": "POS_PRON"
},
{
"span": {
"begin": 2,
"end": 4,
"text": "do"
},
"lemma": "do",
"part_of_speech": "POS_AUX"
},
{
"span": {
"begin": 4,
"end": 7,
"text": "n't"
},
"lemma": "not",
"part_of_speech": "POS_PART"
},
{
"span": {
"begin": 8,
"end": 12,
"text": "like"
},
"lemma": "like",
"part_of_speech": "POS_VERB"
},
{
"span": {
"begin": 13,
"end": 20,
"text": "Mondays"
},
"lemma": "Monday",
"part_of_speech": "POS_PROPN"
}
],
"sentences": [
{
"span": {
"begin": 0,
"end": 20,
"text": "I don't like Mondays"
}
}
],
"paragraphs": [
{
"span": {
"begin": 0,
"end": 20,
"text": "I don't like Mondays"
}
}
]
}
**Parent topic:**[Watson Natural Language Processing task catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html)
<!-- </article "role="article" "> -->
|
ABCA967CD96AB805BE518E8A52EF984499C62F6C | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-tone.html?context=cdpaas&locale=en | Tone classification | Tone classification
The Tone model in the Watson Natural Language Processing classification workflow classifies the tone in the input text.
Workflow name
ensemble_classification-workflow_en_tone-stock
Supported languages
* English and French
Capabilities
The Tone classification model is a pre-trained document classification model for the task of classifying the tone in the input document. The model identifies the tone of the input document and classifies it as:
* Excited
* Frustrated
* Impolite
* Polite
* Sad
* Satisfied
* Sympathetic
Unlike the Sentiment model, which classifies each individual sentence, the Tone model classifies the entire input document. As such, the Tone model works optimally when the input text to classify is no longer than 1000 characters. If you want to classify texts longer than 1000 characters, split the text, for example into sentences or paragraphs, and apply the Tone model on each sentence or paragraph.
A document may be classified into multiple categories or into no category.
Capabilities of tone classification
Capabilities Example
Identifies the tone of a document and classifies it "I'm really happy with how this was handled, thank you!" --> excited, satisfied
Dependencies on other blocks
None
Code sample
import watson_nlp
# Load the Tone workflow model for English
tone_model = watson_nlp.load('ensemble_classification-workflow_en_tone-stock')
# Run the Tone model
tone_result = tone_model.run("I'm really happy with how this was handled, thank you!")
print(tone_result)
Output of the code sample:
{
"classes": [
{
"class_name": "excited",
"confidence": 0.6896854620082722
},
{
"class_name": "satisfied",
"confidence": 0.6570277557333078
},
{
"class_name": "polite",
"confidence": 0.33628806679460566
},
{
"class_name": "sympathetic",
"confidence": 0.17089694967744093
},
{
"class_name": "sad",
"confidence": 0.06880583874412932
},
{
"class_name": "frustrated",
"confidence": 0.010365418217209686
},
{
"class_name": "impolite",
"confidence": 0.002470793624966174
}
],
"producer_id": {
"name": "Voting based Ensemble",
"version": "0.0.1"
}
}
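If your input text exceeds roughly 1000 characters, split it first as described above. The following minimal sketch classifies a longer text sentence by sentence, using the Syntax model only for sentence detection:
import watson_nlp
# Load the Syntax model (for sentence detection) and the Tone workflow model
syntax_model = watson_nlp.load('syntax_izumo_en_stock')
tone_model = watson_nlp.load('ensemble_classification-workflow_en_tone-stock')
# Split the input into sentences and classify each sentence separately
long_text = "I'm really happy with how this was handled, thank you! The agent was very patient with me."
for sentence in syntax_model.run(long_text).get_sentence_texts():
    print(sentence, tone_model.run(sentence))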
Parent topic:[Watson Natural Language Processing task catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html)
| # Tone classification #
The Tone model in the Watson Natural Language Processing classification workflow classifies the tone in the input text\.
**Workflow name**
`ensemble_classification-workflow_en_tone-stock`
**Supported languages**
<!-- <ul> -->
* English and French
<!-- </ul> -->
**Capabilities**
The Tone classification model is a pre\-trained document classification model for the task of classifying the tone in the input document\. The model identifies the tone of the input document and classifies it as:
<!-- <ul> -->
* Excited
* Frustrated
* Impolite
* Polite
* Sad
* Satisfied
* Sympathetic
<!-- </ul> -->
Unlike the Sentiment model, which classifies each individual sentence, the Tone model classifies the entire input document\. As such, the Tone model works optimally when the input text to classify is no longer than 1000 characters\. If you want to classify texts longer than 1000 characters, split the text, for example into sentences or paragraphs, and apply the Tone model on each sentence or paragraph\.
A document may be classified into multiple categories or into no category\.
<!-- <table> -->
Capabilities of tone classification
| Capabilities | Example |
| --------------------------------------------------- | ------------------------------------------------------------------------------------- |
| Identifies the tone of a document and classifies it | "I'm really happy with how this was handled, thank you\!" \-\-> excited, satisfied |
<!-- </table ""> -->
**Dependencies on other blocks**
None
**Code sample**
import watson_nlp
# Load the Tone workflow model for English
tone_model = watson_nlp.load('ensemble_classification-workflow_en_tone-stock')
# Run the Tone model
tone_result = tone_model.run("I'm really happy with how this was handled, thank you!")
print(tone_result)
Output of the code sample:
{
"classes": [
{
"class_name": "excited",
"confidence": 0.6896854620082722
},
{
"class_name": "satisfied",
"confidence": 0.6570277557333078
},
{
"class_name": "polite",
"confidence": 0.33628806679460566
},
{
"class_name": "sympathetic",
"confidence": 0.17089694967744093
},
{
"class_name": "sad",
"confidence": 0.06880583874412932
},
{
"class_name": "frustrated",
"confidence": 0.010365418217209686
},
{
"class_name": "impolite",
"confidence": 0.002470793624966174
}
],
"producer_id": {
"name": "Voting based Ensemble",
"version": "0.0.1"
}
}
**Parent topic:**[Watson Natural Language Processing task catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html)
<!-- </article "role="article" "> -->
|
9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=en | Classifying text with a custom classification model | Classifying text with a custom classification model
You can train your own models for text classification using strong classification algorithms from three different families:
* Classic machine learning using SVM (Support Vector Machines)
* Deep learning using CNN (Convolutional Neural Networks)
* A transformer-based algorithm using a pre-trained transformer model:
* Runtime 23.1: Slate IBM Foundation model
* Runtime 22.x: Google BERT Multilingual model
The Watson Natural Language Processing library also offers an easy-to-use Ensemble classifier that combines different classification algorithms and majority voting.
The algorithms support multi-label and multi-class tasks and special cases, such as when a document belongs to only one class (single-label task), or binary classification tasks.
Note: Training classification models is CPU and memory intensive. Depending on the size of your training data, the environment might not be large enough to complete the training. If you run into issues with the notebook kernel during training, create a custom notebook environment with a larger amount of CPU and memory, and use that to run your notebook. Especially for transformer-based algorithms, you should use a GPU-based environment, if it is available to you. See [Creating your own environment template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-customize-env-definition.html).
Topic sections:
* [Input data format for training](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=en#input-data)
* [Input data requirements](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=en#input-data-reqs)
* [Stopwords](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=en#stopwords)
* [Training SVM algorithms](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=en#train-svm)
* [Training the CNN algorithm](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=en#train-cnn)
* [Training the transformer algorithm by using the Slate IBM Foundation model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=en#train-slate)
* [Training a custom transformer model by using a model provided by Hugging Face](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=en#train-huface)
* [Training the multilingual BERT algorithm](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=en#train-bert)
* [Training an ensemble model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=en#train-ensemble)
* [Training best practices](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=en#best-practices)
* [Applying the model on new data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=en#apply-model)
* [Choosing the right algorithm for your use case](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=en#choose-algorithm)
Input data format for training
Classification blocks accept training data in CSV and JSON formats.
* The CSV Format
The CSV file should contain no header. Each row in the CSV file represents an example record. Each record has one or more columns, where the first column represents the text and the subsequent columns represent the labels associated with that text.
Note:
* The SVM and CNN algorithms do not support training data where an instance has no labels. So, if you are using the SVM algorithm, the CNN algorithm, or an Ensemble that includes one of these algorithms, each CSV row must have at least one label, that is, at least two columns.
* The BERT-based and Slate-based Transformer algorithms support training data where each instance has 0, 1 or more than one label.
Example 1,label 1
Example 2,label 1,label 2
* The JSON Format
The training data is represented as an array with multiple JSON objects. Each JSON object represents one training instance, and must have a text and a labels field. The text represents the training example, and labels stores the labels associated with the example (0, 1, or more than one label).
[
    {
        "text": "Example 1",
        "labels": ["label 1"]
    },
    {
        "text": "Example 2",
        "labels": ["label 1", "label 2"]
    },
    {
        "text": "Example 3",
        "labels": []
    }
]
Note:
* "labels": [] denotes an example with no labels. The SVM and CNN algorithms do not support training data where an instance has no labels. So, if you are using the SVM algorithm, or the CNN algorithm, or an Ensemble including one of these algorithms, each JSON object must have at least one label.
* The BERT-based and Slate-based Transformer algorithms support training data where each instance has 0, 1 or more than one label.
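To illustrate how the two formats relate, the following minimal sketch (the file names are examples) converts CSV training data into the equivalent JSON representation:
import csv
import json
# Convert CSV training data (text, label1, label2, ...) into the JSON format
examples = []
with open('train_data.csv', newline='') as f:
    for row in csv.reader(f):
        examples.append({'text': row[0], 'labels': row[1:]})
with open('train_data.json', 'w') as f:
    json.dump(examples, f, indent=2)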
Input data requirements
For SVM and CNN algorithms:
* Minimum number of unique labels required: 2
* Minimum number of text examples required per label: 5
For the BERT-based and Slate-based Transformer algorithms:
* Minimum number of unique labels required: 1
* Minimum number of text examples required per label: 5
Note that the training data in CSV or JSON format is converted to a DataStream before training. Instead of training data files, you can also pass data streams directly to the training functions of classification blocks.
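A minimal sketch for checking these minimums, assuming training data in the JSON format described above (the file name is an example):
import json
from collections import Counter
# Count how often each label occurs in the training data
with open('train_data.json') as f:
    data = json.load(f)
label_counts = Counter(label for example in data for label in example['labels'])
for label, count in sorted(label_counts.items()):
    if count < 5:
        print(f"Label {label!r} has only {count} examples; at least 5 are required per label.")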
Stopwords
You can provide your own stopwords that will be removed during preprocessing. Stopwords can be provided as a list or as a file in a standard format: a single text file with one phrase per line.
Stopwords can be used only with the Ensemble classifier.
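For example, a stopwords file in the expected format can be written like this (the file name and phrases are illustrative):
# Write a custom stopwords file: a single text file with one phrase per line
stopwords = ['please', 'kind regards', 'fyi']
with open('my_stopwords.txt', 'w') as f:
    f.write('\n'.join(stopwords))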
Training SVM algorithms
SVM is a support vector machine classifier that can be trained using predictions on any kind of input provided by the embedding or vectorization blocks as feature vectors, for example, by USE (Universal Sentence Encoder) embeddings and TF-IDF vectorizers. It supports multi-class and multi-label text classification and produces confidence scores via Platt Scaling.
For all options that are available for configuring SVM training, enter:
help(watson_nlp.blocks.classification.svm.SVM.train)
To train SVM algorithms:
1. Begin with these preprocessing steps:
import watson_nlp
from watson_core.data_model.streams.resolver import DataStreamResolver
from watson_nlp.blocks.classification.svm import SVM
training_data_file = "<ADD TRAINING DATA FILE PATH>"
# Create datastream from training data
data_stream_resolver = DataStreamResolver(target_stream_type=list, expected_keys={'text': str, 'labels': list})
training_data = data_stream_resolver.as_data_stream(training_data_file)
# Load a Syntax model
syntax_model = watson_nlp.load('syntax_izumo_en_stock')
# Create Syntax stream
text_stream, labels_stream = training_data[0], training_data[1]
syntax_stream = syntax_model.stream(text_stream)
2. Train the classification model using USE embeddings. See [Pretrained USE embeddings available out-of-the-box](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=en#use-embeddings) for a list of the pretrained blocks that are available.
# download embedding
use_embedding_model = watson_nlp.load('embedding_use_en_stock')
use_train_stream = use_embedding_model.stream(syntax_stream, doc_embed_style='raw_text')
# NOTE: doc_embed_style can be changed to avg_sent as well. For more information check the documentation for Embeddings
# or the USE run function API docs
use_svm_train_stream = watson_nlp.data_model.DataStream.zip(use_train_stream, labels_stream)
# Train SVM using Universal Sentence Encoder (USE) training stream
classification_model = SVM.train(use_svm_train_stream)
Pretrained USE embeddings available out-of-the-box
USE embeddings are wrappers around Google Universal Sentence Encoder embeddings available in TFHub. These embeddings are used in the document classification SVM algorithm.
The following table lists the pretrained blocks for USE embeddings that are available and the languages that are supported. For a list of the language codes and the corresponding language, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html#lang-codes).
List of pretrained USE embeddings with their supported languages
Block name Model name Supported languages
use embedding_use_en_stock English only
use embedding_use_multi_small ar, de, en, es, fr, it, ja, ko, nl, pl, pt, ru, tr, zh-cn, zh-tw
use embedding_use_multi_large ar, de, en, es, fr, it, ja, ko, nl, pl, pt, ru, tr, zh-cn, zh-tw
When using USE embeddings, consider the following:
* Choose embedding_use_en_stock if your task involves English text.
* Choose one of the multilingual USE embeddings if your task involves text in a non-English language, or you want to train multilingual models.
* The USE embeddings exhibit different trade-offs between quality of the trained model and throughput at inference time, as described below. Try different embeddings to decide the trade-off between quality of result and inference throughput that is appropriate for your use case.
* embedding_use_multi_small has reasonable quality, but it is fast at inference time
* embedding_use_en_stock is an English-only version of embedding_use_multi_small, hence it is smaller and exhibits higher inference throughput
* embedding_use_multi_large is based on Transformer architecture, and therefore it provides higher quality of result, with lower throughput at inference time
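For example, to train a multilingual SVM model, swap the embedding in the training code above for one of the multilingual blocks from the table; the syntax_stream and labels_stream variables are those from the SVM training example:
# Swap in a multilingual USE embedding (model names are from the table above)
use_embedding_model = watson_nlp.load('embedding_use_multi_small')
use_train_stream = use_embedding_model.stream(syntax_stream, doc_embed_style='raw_text')
# For non-English text, also load the matching syntax model, for example 'syntax_izumo_fr_stock'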
Training the CNN algorithm
CNN is a simple convolutional network architecture, built for multi-class and multi-label text classification on short texts. It utilizes GloVe embeddings, which encode word-level semantics into a vector space. The GloVe embeddings for each language are trained on the Wikipedia corpus in that language. For information on using GloVe embeddings, see the open source GloVe embeddings documentation.
For all the options that are available for configuring CNN training, enter:
help(watson_nlp.blocks.classification.cnn.CNN.train)
To train CNN algorithms:
import watson_nlp
from watson_core.data_model.streams.resolver import DataStreamResolver
from watson_nlp.blocks.classification.cnn import CNN
training_data_file = "<ADD TRAINING DATA FILE PATH>"
# Create datastream from training data
data_stream_resolver = DataStreamResolver(target_stream_type=list, expected_keys={'text': str, 'labels': list})
training_data = data_stream_resolver.as_data_stream(training_data_file)
# Load a Syntax model
syntax_model = watson_nlp.load('syntax_izumo_en_stock')
# Create Syntax stream
text_stream, labels_stream = training_data[0], training_data[1]
syntax_stream = syntax_model.stream(text_stream)
# Download GloVe embeddings
glove_embedding_model = watson_nlp.load('embedding_glove_en_stock')
# Train CNN
classification_model = CNN.train(watson_nlp.data_model.DataStream.zip(syntax_stream, labels_stream), embedding=glove_embedding_model.embedding)
Training the transformer algorithm by using the IBM Slate model
The transformer algorithm using the pretrained Slate IBM Foundation model can be used for multi-class and multi-label text classification on short texts.
The pretrained Slate IBM Foundation model is only available in Runtime 23.1.
For all the options available for configuring Transformer training, enter:
help(watson_nlp.blocks.classification.transformer.Transformer.train)
To train Transformer algorithms:
import watson_nlp
from watson_nlp.blocks.classification.transformer import Transformer
from watson_core.data_model.streams.resolver import DataStreamResolver
training_data_file = "train_data.json"
# create datastream from training data
data_stream_resolver = DataStreamResolver(target_stream_type=list, expected_keys={'text': str, 'labels': list})
train_stream = data_stream_resolver.as_data_stream(training_data_file)
# Load pre-trained Slate model
pretrained_model_resource = watson_nlp.load('pretrained-model_slate.153m.distilled_many_transformer_multilingual_uncased')
# Train model
classification_model = Transformer.train(train_stream, pretrained_model_resource)
Training a custom transformer model by using a model provided by Hugging Face
Note: This training method is only available in Runtime 23.1.
You can train your custom transformer-based model by using a pretrained model from Hugging Face.
To use a Hugging Face model, specify the model name as the pretrained_model_resource parameter in the train method of watson_nlp.blocks.classification.transformer.Transformer. Go to [https://huggingface.co/models](https://huggingface.co/models) to copy the model name.
To get a list of all the options available for configuring a transformer training, type this code:
help(watson_nlp.blocks.classification.transformer.Transformer.train)
For information on how to train transformer algorithms, refer to this code example:
import watson_nlp
from watson_nlp.blocks.classification.transformer import Transformer
from watson_core.data_model.streams.resolver import DataStreamResolver
training_data_file = "train_data.json"
# create datastream from training data
data_stream_resolver = DataStreamResolver(target_stream_type=list, expected_keys={'text': str, 'labels': list})
train_stream = data_stream_resolver.as_data_stream(training_data_file)
# Specify the name of the Hugging Face model
huggingface_model_name = 'xlm-roberta-base'
# Train model
classification_model = Transformer.train(train_stream, pretrained_model_resource=huggingface_model_name)
Training the multilingual BERT algorithm
BERT is a transformer-based architecture, built for multi-class and multi-label text classification on short texts.
Note: The Google BERT Multilingual model is available in 22.2 runtimes only.
For all the options available for configuring BERT training, enter:
help(watson_nlp.blocks.classification.bert.BERT.train)
To train BERT algorithms:
import watson_nlp
from watson_nlp.blocks.classification.bert import BERT
from watson_core.data_model.streams.resolver import DataStreamResolver
training_data_file = "<ADD TRAINING DATA FILE PATH>"
# create datastream from training data
data_stream_resolver = DataStreamResolver(target_stream_type=list, expected_keys={'text': str, 'labels': list})
train_stream = data_stream_resolver.as_data_stream(training_data_file)
# Load pre-trained BERT model
pretrained_model_resource = watson_nlp.load('pretrained-model_bert_multi_bert_multi_uncased')
# Train model
classification_model = BERT.train(train_stream, pretrained_model_resource)
Training an ensemble model
The Ensemble model is a weighted ensemble of these three algorithms: CNN, SVM with TF-IDF and SVM with USE. It computes the weighted mean of a set of classification predictions using confidence scores. The ensemble model is very easy to use.
Using the Runtime 22.2 and Runtime 23.1 environments
The GenericEnsemble classifier allows more flexibility for the user to choose from the three base classifiers TFIDF-SVM, USE-SVM and CNN. For texts ranging from 50 to 1000 characters, using the combination of TFIDF-SVM and USE-SVM classifiers often yields a good balance of quality and performance. On some medium or long documents (500-1000+ characters), adding the CNN to the Ensemble could help increase quality, but it usually comes with a significant runtime performance impact (lower throughput and increased model loading time).
For all of the options available for configuring Ensemble training, enter:
help(watson_nlp.workflows.classification.GenericEnsemble)
To train Ensemble algorithms:
import watson_nlp
from watson_core.data_model.streams.resolver import DataStreamResolver
from watson_nlp.workflows.classification import GenericEnsemble
from watson_nlp.workflows.classification.base_classifier import GloveCNN
from watson_nlp.workflows.classification.base_classifier import TFidfSvm
from watson_nlp.workflows.classification.base_classifier import UseSvm
training_data_file = "<ADD TRAINING DATA FILE PATH>"
# Create datastream from training data
data_stream_resolver = DataStreamResolver(target_stream_type=list, expected_keys={'text': str, 'labels': list})
training_data = data_stream_resolver.as_data_stream(training_data_file)
# Syntax model
syntax_model = watson_nlp.load('syntax_izumo_en_stock')
# USE embedding model
use_model = watson_nlp.load('embedding_use_en_stock')
# GloVe embedding model
glove_model = watson_nlp.load('embedding_glove_en_stock')
ensemble_model = GenericEnsemble.train(training_data, syntax_model,
base_classifiers_params=[
TFidfSvm.TrainParams(syntax_model=syntax_model),
GloveCNN.TrainParams(syntax_model=syntax_model, glove_embedding_model=glove_model, cnn_epochs=5),
UseSvm.TrainParams(syntax_model=syntax_model, use_embedding_model=use_model, doc_embed_style='raw_text')],
use_ewl=True)
Pretrained stopword models available out-of-the-box
The text model for identifying stopwords is used in training the document classification ensemble model.
The following table lists the pretrained stopword models and the language codes that are supported (xx stands for the language code). For a list of the language codes and the corresponding language, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html#lang-codes).
List of pretrained stopword models with their supported languages
Resource class Model name Supported languages
text text_stopwords_classification_ensemble_xx_stock ar, de, es, en, fr, it, ja, ko
Training best practices
There are certain constraints on the quality and quantity of the data to ensure that the classification model training can complete in a reasonable amount of time and also meets various performance criteria. These are listed below. None are hard restrictions; however, the further you deviate from these guidelines, the greater the chance that the model fails to train or that the model is not satisfactory.
* Data quantity
* The highest number of classes the classification model has been tested on is 1200.
* The best-suited text size for training and testing data for classification is around 3000 code points. However, larger texts can also be processed, but the runtime performance might be slower.
* Training time will increase based on the number of examples and the number of labels.
* Inference time will increase based on the number of labels.
* Data quality
* Size of each sample (for example, number of phrases in each training sample) can affect quality.
* Class separation is important. In other words, classes among the training (and test) data should be semantically distinguishable from one another in order to avoid misclassifications. Because the classifier algorithms in Watson Natural Language Processing rely on word embeddings, training classes that contain text examples with too much semantic overlap can make high-quality classification computationally intractable. While more sophisticated heuristics might exist for assessing semantic similarity between classes, start with a simple "eye test" of a few examples from each class to discern whether they seem adequately separated.
* It is recommended to use balanced data for training. Ideally there should be roughly equal numbers of examples from each class in the training data, otherwise the classifiers may be biased towards classes with larger representation in the training data.
* It is best to avoid circumstances where some classes in the training data are highly under-represented as compared to other classes.
Limitations and caveats:
* The BERT classification block has a predefined sequence length of 128 code points. However, this can be configured at train time by changing the parameter max_seq_length. The maximum value allowed for this parameter is 512. This means that the BERT classification block can only be used to classify short text. Text longer than max_seq_length is trimmed and discarded during classification training and inference.
* The CNN classification block has a predefined sequence length of 1000 code points. This limit can be configured at train time by changing the parameter max_phrase_len. There is no maximum limit for this parameter, but increasing the maximum phrase length will affect CPU and memory consumption.
* SVM blocks do not have such a limit on sequence length and can be used with longer texts.
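For example, the sequence limits can be raised at train time through the parameters described above; this sketch reuses the variables from the BERT and CNN training examples earlier:
# Raise the BERT sequence length at train time (512 is the documented maximum)
classification_model = BERT.train(train_stream, pretrained_model_resource, max_seq_length=512)
# Raise the CNN phrase length; there is no hard maximum, but larger values increase CPU and memory consumption
cnn_model = CNN.train(watson_nlp.data_model.DataStream.zip(syntax_stream, labels_stream), embedding=glove_embedding_model.embedding, max_phrase_len=2000)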
Applying the model on new data
After you have trained the model on a data set, apply the model on new data using the run() method, as you would with any of the existing pre-trained blocks.
Sample code
* For the Ensemble and BERT models, for example for Ensemble:
# run Ensemble model on new text
ensemble_prediction = ensemble_classification_model.run("new input text")
* For SVM and CNN models, for example for CNN:
# run Syntax model first
syntax_result = syntax_model.run("new input text")
# run CNN model on top of syntax result
cnn_prediction = cnn_classification_model.run(syntax_result)
Choosing the right algorithm for your use case
You need to choose the model algorithm that best suits your use case.
When choosing between SVM, CNN, and Transformers, consider the following:
* BERT and Transformer-based Slate
* Choose when high quality is required and higher computing resources are available.
* CNN
* Choose when a decent amount of data is available
* Choose if GloVe embeddings are available for the required language
* Choose if you need the option of single-label versus multi-label classification
* CNN fine-tunes embeddings, so it can give better performance on unknown terms or newer domains.
* SVM
* Choose if a simpler model is required
* SVM has the fastest training and inference time
* Choose if your data set size is small
If you select SVM, you need to consider the following when choosing between the various implementations of SVM:
* SVMs train multi-label classifiers.
* The larger the number of classes, the longer the training time.
* TF-IDF:
* Choose TF-IDF vectorization with SVM if the data set is small, that is, it has a small number of classes, a small number of examples, and shorter texts, for example, sentences containing few phrases.
* TF-IDF with SVM can be faster than other algorithms in the classification block.
* Choose TF-IDF if embeddings for the required language are not available.
* USE:
* Choose Universal Sentence Encoder (USE) with SVM if the input text of the data set consists of one or more sentences.
* USE can perform better on data sets where understanding the context of words or sentences is important.
The Ensemble model combines multiple individual, diverse models to deliver superior prediction power. Consider the following key points for this model type:
* The ensemble model combines CNN, SVM with TF-IDF and SVM with USE.
* It is the easiest model to use.
* It can give better performance than the individual algorithms.
* It works for all kinds of data sets. However, training time for large data sets (more than 20000 examples) can be high.
* An ensemble model allows you to set weights. These weights decide how the ensemble model combines the results of the individual classifiers. Currently, the selection of weights is a heuristic and needs to be set by trial and error. The default weights that are provided in the function itself are a good starting point for the exploration. A hedged sketch follows this list.
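As an illustration only, a weighted training call might look like the following sketch. The weights keyword is a hypothetical parameter name used here for illustration; check help(watson_nlp.workflows.classification.GenericEnsemble) for the actual argument name and its default values:
# Hypothetical sketch: one weight per base classifier (TF-IDF SVM, CNN, USE SVM);
# the real parameter name and defaults are documented in the help() output
ensemble_model = GenericEnsemble.train(training_data, syntax_model,
    base_classifiers_params=[...],  # base classifier parameters as in the training example
    weights=[1.0, 1.0, 1.0])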
Parent topic:[Creating your own models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-create-model.html)
| # Classifying text with a custom classification model #
You can train your own models for text classification using strong classification algorithms from three different families:
<!-- <ul> -->
* Classic machine learning using SVM (Support Vector Machines)
* Deep learning using CNN (Convolutional Neural Networks)
* A transformer\-based algorithm using a pre\-trained transformer model:
<!-- <ul> -->
* Runtime 23.1: Slate IBM Foundation model
* Runtime 22.x: Google BERT Multilingual model
<!-- </ul> -->
<!-- </ul> -->
The Watson Natural Language Processing library also offers an easy to use Ensemble classifier that combines different classification algorithms and majority voting\.
The algorithms support multi\-label and multi\-class tasks and special cases, like if the document belongs to one class only (single\-label task), or binary classification tasks\.
Note:Training classification models is CPU and memory intensive\. Depending on the size of your training data, the environment might not be large enough to complete the training\. If you run into issues with the notebook kernel during training, create a custom notebook environment with a larger amount of CPU and memory, and use that to run your notebook\. Especially for transformer\-based algorithms, you should use a GPU\-based environment, if it is available to you\. See [Creating your own environment template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-customize-env-definition.html)\.
Topic sections:
<!-- <ul> -->
* [Input data format for training](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=en#input-data)
* [Input data requirements](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=en#input-data-reqs)
* [Stopwords](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=en#stopwords)
* [Training SVM algorithms](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=en#train-svm)
* [Training the CNN algorithm](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=en#train-cnn)
* [Training the transformer algorithm by using the Slate IBM Foundation model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=en#train-slate)
* [Training a custom transformer model by using a model provided by Hugging Face](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=en#train-huface)
* [Training the multilingual BERT algorithm](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=en#train-bert)
* [Training an ensemble model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=en#train-ensemble)
* [Training best practices](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=en#best-practices)
* [Applying the model on new data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=en#apply-model)
* [Choosing the right algorithm for your use case](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=en#choose-algorithm)
<!-- </ul> -->
## Input data format for training ##
Classification blocks accept training data in CSV and JSON formats\.
<!-- <ul> -->
* The CSV Format
The CSV file should contain no header. Each row in the CSV file represents an example record. Each record has one or more columns, where the first column represents the text and the subsequent columns represent the labels associated with that text.
Note:
<!-- <ul> -->
* The SVM and CNN algorithms do not support training data where an instance has no labels. So, if you are using the SVM algorithm, or the CNN algorithm, or an Ensemble including one of these algorithms, each CSV row must have at least one label, that is, at least 2 columns.
* The BERT-based and Slate-based Transformer algorithms support training data where each instance has 0, 1 or more than one label.
Example 1,label 1
Example 2,label 1,label 2
<!-- </ul> -->
* The JSON Format
The training data is represented as an array with multiple JSON objects. Each JSON object represents one training instance, and must have a text and a labels field. The text represents the training example, and labels stores the labels associated with the example (0, 1, or more than one label).
[
{
"text": "Example 1",
"labels": "label 1"]
},
{
"text": "Example 2",
"labels": "label 1", "label 2"]
},
{
"text": "Example 3",
"labels": ]
}
]
Note:
<!-- <ul> -->
* `"labels": []` denotes an example with no labels. The SVM and CNN algorithms do not support training data where an instance has no labels. So, if you are using the SVM algorithm, or the CNN algorithm, or an Ensemble including one of these algorithms, each JSON object must have at least one label.
* The BERT-based and Slate-based Transformer algorithms support training data where each instance has 0, 1 or more than one label.
<!-- </ul> -->
<!-- </ul> -->
## Input data requirements ##
For SVM and CNN algorithms:
<!-- <ul> -->
* Minimum number of unique labels required: 2
* Minimum number of text examples required per label: 5
<!-- </ul> -->
For the BERT\-based and Slate\-based Transformer algorithms:
<!-- <ul> -->
* Minimum number of unique labels required: 1
* Minimum number of text examples required per label: 5
<!-- </ul> -->
Note that the training data in CSV or JSON format is converted to a DataStream before training\. Instead of training data files, you can also pass data streams directly to the training functions of classification blocks\.
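As an example, the following small sketch checks these minimum requirements against JSON training data before you start a training run. It is plain Python and assumes the JSON format shown in the previous section; train_data.json is a placeholder file name:
import json
from collections import Counter

with open("train_data.json") as f:
    counts = Counter(label for record in json.load(f) for label in record["labels"])

# SVM and CNN require at least 2 unique labels; BERT and Slate require at least 1
print("unique labels:", len(counts))
# All algorithms require at least 5 text examples per label
print("labels with fewer than 5 examples:", [label for label, n in counts.items() if n < 5])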
## Stopwords ##
You can provide your own stopwords that will be removed during preprocessing\. Stopwords can be provided as a list or as a file in a standard format: a single text file with one phrase per line\.
Stopwords can be used only with the Ensemble classifier\.
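For example, a minimal sketch of creating a stopwords file in the expected format. The file name is arbitrary; how the file is passed to the Ensemble training is described in the help() output of the Ensemble classifier:
# One phrase per line, in a single plain text file
with open("my_stopwords.txt", "w") as stopwords_file:
    stopwords_file.write("as soon as possible\n")
    stopwords_file.write("kind regards\n")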
## Training SVM algorithms ##
SVM is a support vector machine classifier that can be trained on any kind of input that the embedding or vectorization blocks provide as feature vectors, for example, `USE` (Universal Sentence Encoder) embeddings and `TF-IDF` vectorizers\. It supports multi\-class and multi\-label text classification and produces confidence scores via Platt Scaling\.
For all options that are available for configuring SVM training, enter:
help(watson_nlp.blocks.classification.svm.SVM.train)
To train SVM algorithms:
<!-- <ol> -->
1. Begin with these preprocessing steps:
import watson_nlp
from watson_core.data_model.streams.resolver import DataStreamResolver
from watson_nlp.blocks.classification.svm import SVM
training_data_file = "<ADD TRAINING DATA FILE PATH>"
# Create datastream from training data
data_stream_resolver = DataStreamResolver(target_stream_type=list, expected_keys={'text': str, 'labels': list})
training_data = data_stream_resolver.as_data_stream(training_data_file)
# Load a Syntax model
syntax_model = watson_nlp.load('syntax_izumo_en_stock')
# Create Syntax stream
text_stream, labels_stream = training_data[0], training_data[1]
syntax_stream = syntax_model.stream(text_stream)
<!-- </ol> -->
<!-- <ol> -->
1. Train the classification model using USE embeddings\. See [Pretrained USE embeddings available out\-of\-the\-box](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=en#use-embeddings) for a list of the pretrained blocks that are available\.
# download embedding
use_embedding_model = watson_nlp.load('embedding_use_en_stock')
use_train_stream = use_embedding_model.stream(syntax_stream, doc_embed_style='raw_text')
# NOTE: doc_embed_style can be changed to `avg_sent` as well. For more information check the documentation for Embeddings
# Or the USE run function API docs
use_svm_train_stream = watson_nlp.data_model.DataStream.zip(use_train_stream, labels_stream)
# Train SVM using Universal Sentence Encoder (USE) training stream
classification_model = SVM.train(use_svm_train_stream)
<!-- </ol> -->
### Pretrained USE embeddings available out\-of\-the\-box ###
USE embeddings are wrappers around Google Universal Sentence Encoder embeddings available in TFHub\. These embeddings are used in the document classification SVM algorithm\.
The following table lists the pretrained blocks for USE embeddings that are available and the languages that are supported\. For a list of the language codes and the corresponding language, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html#lang-codes)\.
<!-- <table> -->
List of pretrained USE embeddings with their supported languages
| Block name | Model name | Supported languages |
| ---------- | --------------------------- | ------------------------------------------------------------------ |
| `use` | `embedding_use_en_stock` | English only |
| `use` | `embedding_use_multi_small` | ar, de, en, es, fr, it, ja, ko, nl, pl, pt, ru, tr, zh\-cn, zh\-tw |
| `use` | `embedding_use_multi_large` | ar, de, en, es, fr, it, ja, ko, nl, pl, pt, ru, tr, zh\-cn, zh\-tw |
<!-- </table ""> -->
When using USE embeddings, consider the following:
<!-- <ul> -->
* Choose `embedding_use_en_stock` if your task involves English text\.
* Choose one of the multilingual USE embeddings if your task involves text in a non\-English language, or you want to train multilingual models\.
* The USE embeddings exhibit different trade\-offs between quality of the trained model and throughput at inference time, as described below\. Try different embeddings to decide the trade\-off between quality of result and inference throughput that is appropriate for your use case\.
<!-- <ul> -->
* `embedding_use_multi_small` provides reasonable quality and is fast at inference time
* `embedding_use_en_stock` is an English-only version of `embedding_use_multi_small`, hence it is smaller and exhibits higher inference throughput
* `embedding_use_multi_large` is based on the Transformer architecture, and therefore provides higher-quality results, with lower throughput at inference time
<!-- </ul> -->
<!-- </ul> -->
## Training the CNN algorithm ##
CNN is a simple convolutional network architecture, built for multi\-class and multi\-label text classification on short texts\. It utilizes GloVe embeddings\. GloVe embeddings encode word\-level semantics into a vector space\. The GloVe embeddings for each language are trained on the Wikipedia corpus in that language\. For information on using GloVe embeddings, see the open source GloVe embeddings documentation\.
For all the options that are available for configuring CNN training, enter:
help(watson_nlp.blocks.classification.cnn.CNN.train)
To train CNN algorithms:
import watson_nlp
from watson_core.data_model.streams.resolver import DataStreamResolver
from watson_nlp.blocks.classification.cnn import CNN
training_data_file = "<ADD TRAINING DATA FILE PATH>"
# Create datastream from training data
data_stream_resolver = DataStreamResolver(target_stream_type=list, expected_keys={'text': str, 'labels': list})
training_data = data_stream_resolver.as_data_stream(training_data_file)
# Load a Syntax model
syntax_model = watson_nlp.load('syntax_izumo_en_stock')
# Create Syntax stream
text_stream, labels_stream = training_data[0], training_data[1]
syntax_stream = syntax_model.stream(text_stream)
# Download GloVe embeddings
glove_embedding_model = watson_nlp.load('embedding_glove_en_stock')
# Train CNN
classification_model = CNN.train(watson_nlp.data_model.DataStream.zip(syntax_stream, labels_stream), embedding=glove_embedding_model.embedding)
## Training the transformer algorithm by using the IBM Slate model ##
The transformer algorithm using the pretrained Slate IBM Foundation model can be used for multi\-class and multi\-label text classification on short texts\.
The pretrained Slate IBM Foundation model is only available in Runtime 23\.1\.
For all the options available for configuring Transformer training, enter:
help(watson_nlp.blocks.classification.transformer.Transformer.train)
To train Transformer algorithms:
import watson_nlp
from watson_nlp.blocks.classification.transformer import Transformer
from watson_core.data_model.streams.resolver import DataStreamResolver
training_data_file = "train_data.json"
# create datastream from training data
data_stream_resolver = DataStreamResolver(target_stream_type=list, expected_keys={'text': str, 'labels': list})
train_stream = data_stream_resolver.as_data_stream(training_data_file)
# Load pre-trained Slate model
pretrained_model_resource = watson_nlp.load('pretrained-model_slate.153m.distilled_many_transformer_multilingual_uncased')
# Train model
classification_model = Transformer.train(train_stream, pretrained_model_resource)
## Training a custom transformer model by using a model provided by Hugging Face ##
Note: This training method is only available in Runtime 23\.1\.
You can train your custom transformer\-based model by using a pretrained model from Hugging Face\.
To use a Hugging Face model, specify the model name as the `pretrained_model_resource` parameter in the `train` method of `watson_nlp.blocks.classification.transformer.Transformer`\. Go to [https://huggingface\.co/models](https://huggingface.co/models) to copy the model name\.
To get a list of all the options available for configuring transformer training, type this code:
help(watson_nlp.blocks.classification.transformer.Transformer.train)
For information on how to train transformer algorithms, refer to this code example:
import watson_nlp
from watson_nlp.blocks.classification.transformer import Transformer
from watson_core.data_model.streams.resolver import DataStreamResolver
training_data_file = "train_data.json"
# create datastream from training data
data_stream_resolver = DataStreamResolver(target_stream_type=list, expected_keys={'text': str, 'labels': list})
train_stream = data_stream_resolver.as_data_stream(training_data_file)
# Specify the name of the Hugging Face model
huggingface_model_name = 'xlm-roberta-base'
# Train model
classification_model = Transformer.train(train_stream, pretrained_model_resource=huggingface_model_name)
## Training the multilingual BERT algorithm ##
BERT is a transformer\-based architecture, built for multi\-class and multi\-label text classification on short texts\.
Note: The Google BERT Multilingual model is available in 22\.2 runtimes only\.
For all the options available for configuring BERT training, enter:
help(watson_nlp.blocks.classification.bert.BERT.train)
To train BERT algorithms:
import watson_nlp
from watson_nlp.blocks.classification.bert import BERT
from watson_core.data_model.streams.resolver import DataStreamResolver
training_data_file = "<ADD TRAINING DATA FILE PATH>"
# create datastream from training data
data_stream_resolver = DataStreamResolver(target_stream_type=list, expected_keys={'text': str, 'labels': list})
train_stream = data_stream_resolver.as_data_stream(training_data_file)
# Load pre-trained BERT model
pretrained_model_resource = watson_nlp.load('pretrained-model_bert_multi_bert_multi_uncased')
# Train model
classification_model = BERT.train(train_stream, pretrained_model_resource)
## Training an ensemble model ##
The Ensemble model is a weighted ensemble of these three algorithms: CNN, SVM with TF\-IDF and SVM with USE\. It computes the weighted mean of a set of classification predictions using confidence scores\. The ensemble model is very easy to use\.
### Using the `Runtime 22.2` and `Runtime 23.1` environments ###
The GenericEnsemble classifier allows more flexibility for the user to choose from the three base classifiers TFIDF\-SVM, USE\-SVM and CNN\. For texts ranging from 50 to 1000 characters, using the combination of TFIDF\-SVM and USE\-SVM classifiers often yields a good balance of quality and performance\. On some medium or long documents (500\-1000\+ characters), adding the CNN to the Ensemble could help increase quality, but it usually comes with a significant runtime performance impact (lower throughput and increased model loading time)\.
For all of the options available for configuring Ensemble training, enter:
help(watson_nlp.workflows.classification.GenericEnsemble)
To train Ensemble algorithms:
import watson_nlp
from watson_core.data_model.streams.resolver import DataStreamResolver
from watson_nlp.workflows.classification import GenericEnsemble
from watson_nlp.workflows.classification.base_classifier import GloveCNN
from watson_nlp.workflows.classification.base_classifier import TFidfSvm
from watson_nlp.workflows.classification.base_classifier import UseSvm
training_data_file = "<ADD TRAINING DATA FILE PATH>"
# Create datastream from training data
data_stream_resolver = DataStreamResolver(target_stream_type=list, expected_keys={'text': str, 'labels': list})
training_data = data_stream_resolver.as_data_stream(training_data_file)
# Syntax Model
syntax_model = watson_nlp.load('syntax_izumo_en_stock')
# USE Embedding Model
use_model = watson_nlp.load('embedding_use_en_stock')
# GloVE Embedding model
glove_model = watson_nlp.load('embedding_glove_en_stock')
ensemble_model = GenericEnsemble.train(training_data, syntax_model,
base_classifiers_params=[
TFidfSvm.TrainParams(syntax_model=syntax_model),
GloveCNN.TrainParams(syntax_model=syntax_model, glove_embedding_model=glove_model, cnn_epochs=5),
UseSvm.TrainParams(syntax_model=syntax_model, use_embedding_model=use_model, doc_embed_style='raw_text')],
use_ewl=True)
### Pretrained stopword models available out\-of\-the\-box ###
The text model for identifying stopwords is used in training the document classification ensemble model\.
The following table lists the pretrained stopword models and the language codes that are supported (`xx` stands for the language code)\. For a list of the language codes and the corresponding language, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html#lang-codes)\.
<!-- <table> -->
List of pretrained stopword models with their supported languages
| Resource class | Model name | Supported languages |
| -------------- | ------------------------------------------------- | ------------------------------ |
| `text` | `text_stopwords_classification_ensemble_xx_stock` | ar, de, es, en, fr, it, ja, ko |
<!-- </table ""> -->
## Training best practices ##
There are certain constraints on the quality and quantity of data to ensure that classification model training can complete in a reasonable amount of time and also meets various performance criteria\. These are listed below\. None of them are hard restrictions, but the further you deviate from these guidelines, the greater the chance that the model fails to train or that the trained model is not satisfactory\.
<!-- <ul> -->
* Data quantity
<!-- <ul> -->
* The highest number of classes that the classification model has been tested on is ~1200.
* The text size best suited for classification training and testing data is around 3000 code points. Larger texts can also be processed, but runtime performance might be slower.
* Training time increases with the number of examples and the number of labels.
* Inference time increases with the number of labels.
<!-- </ul> -->
* Data quality
<!-- <ul> -->
* The size of each sample (for example, the number of phrases in each training sample) can affect quality.
* Class separation is important. In other words, classes among the training (and test) data should be semantically distinguishable from one another in order to avoid misclassifications. Because the classifier algorithms in Watson Natural Language Processing rely on word embeddings, training classes that contain text examples with too much semantic overlap may make high-quality classification computationally intractable. While more sophisticated heuristics may exist for assessing the semantic similarity between classes, you should start with a simple "eye test" of a few examples from each class to discern whether or not they seem adequately separated.
* It is recommended to use balanced data for training. Ideally there should be roughly equal numbers of examples from each class in the training data; otherwise the classifiers may be biased towards classes with larger representation in the training data.
* It is best to avoid circumstances where some classes in the training data are highly under-represented compared to other classes.
<!-- </ul> -->
<!-- </ul> -->
Limitations and caveats:
<!-- <ul> -->
* The BERT classification block has a predefined sequence length of 128 code points\. However, this can be configured at train time by changing the parameter `max_seq_length`\. The maximum value allowed for this parameter is 512\. This means that the BERT classification block can only be used to classify short text\. Text longer than `max_seq_length` is trimmed and discarded during classification training and inference\.
* The CNN classification block has a predefined sequence length of 1000 code points\. This limit can be configured at train time by changing the parameter `max_phrase_len`\. There is no maximum limit for this parameter, but increasing the maximum phrase length will affect CPU and memory consumption\.
* SVM blocks have no such limit on sequence length and can be used with longer texts\.
<!-- </ul> -->
## Applying the model on new data ##
After you have trained the model on a data set, apply the model on new data using the `run()` method, as you would with any of the existing pre\-trained blocks\.
**Sample code**
<!-- <ul> -->
* For the Ensemble and BERT models, for example for Ensemble:
# run Ensemble model on new text
ensemble_prediction = ensemble_classification_model.run("new input text")
* For SVM and CNN models, for example for CNN:
# run Syntax model first
syntax_result = syntax_model.run("new input text")
# run CNN model on top of syntax result
cnn_prediction = cnn_classification_model.run(syntax_result)
<!-- </ul> -->
## Choosing the right algorithm for your use case ##
You need to choose the model algorithm that best suits your use case\.
When choosing between SVM, CNN, and Transformers, consider the following:
<!-- <ul> -->
* BERT and Transformer\-based Slate
<!-- <ul> -->
* Choose when high quality is required and higher computing resources are available.
<!-- </ul> -->
* CNN
<!-- <ul> -->
* Choose when a decent amount of data is available
* Choose if GloVe embeddings are available for the required language
* Choose if you need the option of single-label versus multi-label classification
* CNN fine-tunes embeddings, so it can give better performance on unknown terms or newer domains.
<!-- </ul> -->
* SVM
<!-- <ul> -->
* Choose if a simpler model is required
* SVM has the fastest training and inference time
* Choose if your data set size is small
<!-- </ul> -->
<!-- </ul> -->
If you select SVM, you need to consider the following when choosing between the various implementations of SVM:
<!-- <ul> -->
* SVMs train multi\-label classifiers\.
* The larger the number of classes, the longer the training time\.
* TF\-IDF:
<!-- <ul> -->
* Choose TF-IDF vectorization with SVM if the data set is small, that is, it has a small number of classes, a small number of examples, and shorter texts, for example, sentences containing few phrases.
* TF-IDF with SVM can be faster than other algorithms in the classification block.
* Choose TF-IDF if embeddings for the required language are not available.
<!-- </ul> -->
* USE:
<!-- <ul> -->
* Choose Universal Sentence Encoder (USE) with SVM if the input text of the data set consists of one or more sentences.
* USE can perform better on data sets where understanding the context of words or sentences is important.
<!-- </ul> -->
<!-- </ul> -->
The Ensemble model combines multiple individual, diverse models to deliver superior prediction power\. Consider the following key points for this model type:
<!-- <ul> -->
* The ensemble model combines CNN, SVM with TF\-IDF and SVM with USE\.
* It is the easiest model to use\.
* It can give better performance than the individual algorithms\.
* It works for all kinds of data sets\. However, training time for large data sets (more than 20000 examples) can be high\.
* An ensemble model allows you to set weights\. These weights decide how the ensemble model combines the results of the individual classifiers\. Currently, the selection of weights is a heuristic and needs to be set by trial and error\. The default weights that are provided in the function itself are a good starting point for the exploration\.
<!-- </ul> -->
**Parent topic:**[Creating your own models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-create-model.html)
<!-- </article "role="article" "> -->
|
97C26F347FD5A13FBC5B24FC567FCF7ADF8CE0C3 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-create-model.html?context=cdpaas&locale=en | Creating your own models | Creating your own models
Certain algorithms in Watson Natural Language Processing can be trained with your own data. For example, you can create custom models based on your own data for entity extraction, text classification, sentiment extraction, and target sentiment extraction.
Starting with Runtime 23.1 you can use the new built-in transformer-based IBM foundation model called Slate to create your own models. The Slate model has been trained on a very large data set that was preprocessed to filter hate, bias, and profanity.
To create your own classification, entity extraction, or sentiment model, you can fine-tune the Slate model on your own data. To train the model in a reasonable amount of time, it's recommended to use GPU-based environments.
* [Detecting entities with a custom dictionary](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-entities-dict.html)
* [Detecting entities with regular expressions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-entities-regex.html)
* [Detecting entities with a custom transformer model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-entities-transformer.html)
* [Classifying text with a custom classification model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html)
* [Extracting sentiment with a custom transformer model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-extract-sentiment.html)
* [Extracting targets sentiment with a custom transformer model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-target-sentiment.html)
Language support for custom models
You can create custom models and use the following pretrained dictionary and classification models for the shown languages. For a list of the language codes and the corresponding languages, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html#lang-codes).
Supported languages for out-of-the-box custom models
Custom model Supported language codes
Dictionary models af, ar, bs, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh_cn, zh_tw (all languages supported in the Syntax part of speech tagging)
Regexes af, ar, bs, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh_cn, zh_tw (all languages supported in the Syntax part of speech tagging)
SVM classification with TFIDF af, ar, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh_cn, zh_tw
SVM classification with USE ar, de, en, es, fr, it, ja, ko, nl, pl, pt, ru, tr, zh_cn, zh_tw
CNN classification with GloVe ar, de, en, es, fr, it, ja, ko, nl, pt, zh_cn
BERT Multilingual classification af, ar, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh_cn, zh_tw
Transformer model af, ar, bs, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh_cn, zh_tw
Stopword lists ar, de, en, es, fr, it, ja, ko
Saving and loading custom models
If you want to use your custom model in another notebook, save it as a Data Asset to your project. This way, you can export the model as part of a project export.
Use the ibm-watson-studio-lib library to save and load custom models.
To save a custom model in your notebook as a data asset to export and use in another project:
1. Ensure that you have an access token on the Access control page on the Manage tab of your project. Only project admins can create access tokens. The access token can have viewer or editor access permissions. Only editors can inject the token into a notebook.
2. Add the project token to a notebook by clicking More > Insert project token from the notebook action bar and then run the cell. When you run the inserted hidden code cell, a wslib object is created that you can use for functions in the ibm-watson-studio-lib library. For details on the available ibm-watson-studio-lib functions, see [Using ibm-watson-studio-lib for Python](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html).
3. Run the train() method to create a custom dictionary, regular expression, or classification model and assign this custom model to a variable. For example:
custom_block = CNN.train(train_stream, embedding_model.embedding, verbose=2)
4. If you want to save a custom dictionary or regular expression model, convert it to an RBRGeneric block. Converting a custom dictionary or regular expression model to an RBRGeneric block is useful if you want to load and execute the model using the [API for Watson Natural Language Processing for Embed](https://www.ibm.com/docs/en/watson-libraries?topic=home-api-reference). To date, Watson Natural Language Processing for Embed supports running dictionary and regular expression models only as RBRGeneric blocks. To convert a model to an RBRGeneric block, run the following commands:
import os
# Create the custom regular expression model
custom_regex_block = watson_nlp.resources.feature_extractor.RBR.train(module_folder, language='en', regexes=regexes)
# Save the model to the local file system
custom_regex_model_path = 'some/path'
custom_regex_block.save(custom_regex_model_path)
# The model was saved in a file "executor.zip" in the provided path, in this case "some/path/executor.zip"
model_path = os.path.join(custom_regex_model_path, 'executor.zip')
# Re-load the model as an RBRGeneric block
custom_block = watson_nlp.blocks.rules.RBRGeneric(watson_nlp.toolkit.rule_utils.RBRExecutor.load(model_path), language='en')
5. Save the model as a Data Asset to your project using ibm-watson-studio-lib:
wslib.save_data("<model name>", custom_block.as_bytes(), overwrite=True)
When saving transformer models, you have the option to save the model in CPU format. If you plan to use the model only in CPU environments, using this format will make your custom model run more efficiently. To do that, set the CPU format option as follows:
wslib.save_data('<model name>', data=custom_model.as_bytes(cpu_format=True), overwrite=True)
To load a custom model to a notebook that was imported from another project:
1. Ensure that you have an access token on the Access control page on the Manage tab of your project. Only project admins can create access tokens. The access token can have viewer or editor access permissions. Only editors can inject the token into a notebook.
2. Add the project token to a notebook by clicking More > Insert project token from the notebook action bar and then run the cell. When you run the inserted hidden code cell, a wslib object is created that you can use for functions in the ibm-watson-studio-lib library. For details on the available ibm-watson-studio-lib functions, see [Using ibm-watson-studio-lib for Python](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html).
3. Load the model using ibm-watson-studio-lib and watson-nlp:
custom_block = watson_nlp.load(wslib.load_data("<model name>"))
Parent topic:[Watson Natural Language Processing library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp.html)
| # Creating your own models #
Certain algorithms in Watson Natural Language Processing can be trained with your own data\. For example, you can create custom models based on your own data for entity extraction, text classification, sentiment extraction, and target sentiment extraction\.
Starting with Runtime 23\.1 you can use the new built\-in transformer\-based IBM foundation model called Slate to create your own models\. The Slate model has been trained on a very large data set that was preprocessed to filter hate, bias, and profanity\.
To create your own classification, entity extraction, or sentiment model, you can fine\-tune the Slate model on your own data\. To train the model in a reasonable amount of time, it's recommended to use GPU\-based environments\.
<!-- <ul> -->
* [Detecting entities with a custom dictionary](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-entities-dict.html)
* [Detecting entities with regular expressions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-entities-regex.html)
* [Detecting entities with a custom transformer model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-entities-transformer.html)
* [Classifying text with a custom classification model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html)
* [Extracting sentiment with a custom transformer model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-extract-sentiment.html)
* [Extracting targets sentiment with a custom transformer model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-target-sentiment.html)
<!-- </ul> -->
## Language support for custom models ##
You can create custom models and use the following pretrained dictionary and classification models for the shown languages\. For a list of the language codes and the corresponding languages, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html#lang-codes)\.
<!-- <table> -->
Supported languages for out\-of\-the\-box custom models
| Custom model | Supported language codes |
| -------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Dictionary models | af, ar, bs, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh\_cn, zh\_tw (all languages supported in the Syntax part of speech tagging) |
| Regexes | af, ar, bs, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh\_cn, zh\_tw (all languages supported in the Syntax part of speech tagging) |
| SVM classification with TFIDF | af, ar, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh\_cn, zh\_tw |
| SVM classification with USE | ar, de, en, es, fr, it, ja, ko, nl, pl, pt, ru, tr, zh\_cn, zh\_tw |
| CNN classification with GloVe | ar, de, en, es, fr, it, ja, ko, nl, pt, zh\_cn |
| BERT Multilingual classification | af, ar, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh\_cn, zh\_tw |
| Transformer model | af, ar, bs, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh\_cn, zh\_tw |
| Stopword lists | ar, de, en, es, fr, it, ja, ko |
<!-- </table ""> -->
## Saving and loading custom models ##
If you want to use your custom model in another notebook, save it as a Data Asset to your project\. This way, you can export the model as part of a project export\.
Use the `ibm-watson-studio-lib` library to save and load custom models\.
To save a custom model in your notebook as a data asset to export and use in another project:
<!-- <ol> -->
1. Ensure that you have an access token on the **Access control** page on the **Manage** tab of your project\. Only project admins can create access tokens\. The access token can have viewer or editor access permissions\. Only editors can inject the token into a notebook\.
2. Add the project token to a notebook by clicking **More > Insert project token** from the notebook action bar and then run the cell\. When you run the inserted hidden code cell, a `wslib` object is created that you can use for functions in the `ibm-watson-studio-lib` library\. For details on the available `ibm-watson-studio-lib` functions, see [Using `ibm-watson-studio-lib` for Python](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html)\.
3. Run the `train()` method to create a custom dictionary, regular expression, or classification model and assign this custom model to a variable\. For example:
custom_block = CNN.train(train_stream, embedding_model.embedding, verbose=2)
4. If you want to save a custom dictionary or regular expression model, convert it to an RBRGeneric block\. Converting a custom dictionary or regular expression model to an RBRGeneric block is useful if you want to load and execute the model using the [API for Watson Natural Language Processing for Embed](https://www.ibm.com/docs/en/watson-libraries?topic=home-api-reference)\. To date, Watson Natural Language Processing for Embed supports running dictionary and regular expression models only as RBRGeneric blocks\. To convert a model to an RBRGeneric block, run the following commands:
import os
# Create the custom regular expression model
custom_regex_block = watson_nlp.resources.feature_extractor.RBR.train(module_folder, language='en', regexes=regexes)
# Save the model to the local file system
custom_regex_model_path = 'some/path'
custom_regex_block.save(custom_regex_model_path)
# The model was saved in a file "executor.zip" in the provided path, in this case "some/path/executor.zip"
model_path = os.path.join(custom_regex_model_path, 'executor.zip')
# Re-load the model as an RBRGeneric block
custom_block = watson_nlp.blocks.rules.RBRGeneric(watson_nlp.toolkit.rule_utils.RBRExecutor.load(model_path), language='en')
5. Save the model as a Data Asset to your project using `ibm-watson-studio-lib`:
wslib.save_data("<model name>", custom_block.as_bytes(), overwrite=True)
When saving transformer models, you have the option to save the model in CPU format. If you plan to use the model only in CPU environments, using this format will make your custom model run more efficiently. To do that, set the CPU format option as follows:
wslib.save_data('<model name>', data=custom_model.as_bytes(cpu_format=True), overwrite=True)
<!-- </ol> -->
To load a custom model to a notebook that was imported from another project:
<!-- <ol> -->
1. Ensure that you have an access token on the **Access control** page on the **Manage** tab of your project\. Only project admins can create access tokens\. The access token can have viewer or editor access permissions\. Only editors can inject the token into a notebook\.
2. Add the project token to a notebook by clicking **More > Insert project token** from the notebook action bar and then run the cell\. When you run the inserted hidden code cell, a `wslib` object is created that you can use for functions in the `ibm-watson-studio-lib` library\. For details on the available `ibm-watson-studio-lib` functions, see [Using `ibm-watson-studio-lib` for Python](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html)\.
3. Load the model using `ibm-watson-studio-lib` and `watson-nlp`:
custom_block = watson_nlp.load(wslib.load_data("<model name>"))
<!-- </ol> -->
**Parent topic:**[Watson Natural Language Processing library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp.html)
<!-- </article "role="article" "> -->
|
34BC2F43F99778FFA7E2C3E414C3CFB32509276D | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-entities-dict.html?context=cdpaas&locale=en | Detecting entities with a custom dictionary | Detecting entities with a custom dictionary
If you have a fixed set of terms that you want to detect, like a list of product names or organizations, you can create a dictionary. Dictionary matching is very fast and resource-efficient.
Watson Natural Language Processing dictionaries contain advanced matching capabilities that go beyond a simple string match, including:
* Dictionary terms can consist of a single token, for example wheel, or multiple tokens, for example, steering wheel.
* Dictionary term matching can be case-sensitive or case-insensitive. With a case-sensitive match, you can ensure that acronyms like ABS don't match words in the regular language, like abs, that have a different meaning.
* You can specify how to consolidate matches when multiple dictionary entries match the same text. Given the two dictionary entries, Watson and Watson Natural Language Processing, you can configure which entry should match in "I like Watson Natural Language Processing": either only Watson Natural Language Processing, as it contains Watson, or both.
* You can specify to match the lemma instead of enumerating all inflections. This way, the single dictionary entry mouse will detect both mouse and mice in the text.
* You can attach a label to each dictionary entry, for example Organization category to include additional metadata in the match.
All of these capabilities can be configured, so you can pick the right option for your use case.
Types of dictionary files
Watson Natural Language Processing supports two types of dictionary files:
* Term list (ending in .dict)
Example of a term list:
Arthur
Allen
Albert
Alexa
* Table (ending in .csv)
Example of a table:
"label", "entry"
"ORGANIZATION", "NASA"
"COUNTRY", "USA"
"ACTOR", "Christian Bale"
You can use multiple dictionaries during the same extraction. You can also use both types at the same time, for example, run a single extraction with three dictionaries, one term list and two tables.
Creating dictionary files
Begin by creating a module directory inside your notebook. This is a directory inside the notebook file system that will be used temporarily to store your dictionary files.
To create dictionary files in your notebook:
1. Create a module directory. Note that the name of the module folder cannot contain any dashes as this will cause errors.
import os
import watson_nlp
module_folder = "NLP_Dict_Module_1"
os.makedirs(module_folder, exist_ok=True)
2. Create dictionary files, and store them in the module directory. You can either read in an external list or CSV file, or you can create dictionary files like so:
# Create a term list dictionary
term_file = "names.dict"
with open(os.path.join(module_folder, term_file), 'w') as dictionary:
    dictionary.write('Bruce')
    dictionary.write('\n')
    dictionary.write('Peter')
    dictionary.write('\n')
# Create a table dictionary
table_file = 'Places.csv'
with open(os.path.join(module_folder, table_file), 'w') as places:
    places.write("\"label\", \"entry\"")
    places.write("\n")
    places.write("\"SIGHT\", \"Times Square\"")
    places.write("\n")
    places.write("\"PLACE\", \"5th Avenue\"")
    places.write("\n")
Loading the dictionaries and configuring matching options
The dictionaries can be loaded using the following helper methods.
* To load a single dictionary, use watson_nlp.toolkit.rule_utils.DictionaryConfig(<dictionary configuration>)
* To load multiple dictionaries, use watson_nlp.toolkit.rule_utils.DictionaryConfig.load_all([<dictionary configuration>, ...])
For each dictionary, you need to specify a dictionary configuration. The dictionary configuration is a Python dictionary, with the following attributes:
Attribute Value Description Required
name string The name of the dictionary Yes
source string The path to the dictionary, relative to module_folder Yes
dict_type file or table Whether the dictionary artifact is a term list (file) or a table of mappings (table) No. The default is file
consolidate ContainedWithin (Keep the longest match and deduplicate) / NotContainedWithin (Keep the shortest match and deduplicate) / ContainsButNotEqual (Keep longest match but keep duplicate matches) / ExactMatch (Deduplicate) / LeftToRight (Keep the leftmost longest non-overlapping span) What to do with dictionary matches that overlap. No. The default is to not consolidate matches.
case exact / insensitive Either match exact case or be case insensitive. No. The default is exact match.
lemma True / False Match the terms in the dictionary with the lemmas from the text. The dictionary should contain only lemma forms. For example, add mouse in the dictionary to match both mouse and mice in text. Do not add mice in the dictionary. To match terms that consist of multiple tokens in text, separate the lemmas of those terms in the dictionary by a space character. No. The default is False.
mappings.columns (columns as attribute of mappings: {}) list [ string ] List of column headers in the same order as present in the table csv Yes if dict_type: table
mappings.entry (entry as attribute of mappings: {}) string The name of the column header that contains the string to match against the document. Yes if dict_type: table
label string The label to attach to matches. No
Code sample
# Load the dictionaries
dictionaries = watson_nlp.toolkit.rule_utils.DictionaryConfig.load_all([{
'name': 'Names',
'source': term_file,
'case':'insensitive'
}, {
'name': 'places_and_sights_mappings',
'source': table_file,
'dict_type': 'table',
'mappings': {
'columns': ['label', 'entry'],
'entry': 'entry'
}
}])
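A further hedged sketch that combines some of the optional attributes from the table above. The file animals.dict is a hypothetical term list of lemma forms, created in the module folder in the same way as names.dict:
# Load a term list with lemma matching, match consolidation, and a custom label
animal_dictionary = watson_nlp.toolkit.rule_utils.DictionaryConfig({
    'name': 'Animals',
    'source': 'animals.dict',          # hypothetical term list of lemma forms
    'lemma': True,                     # 'mouse' in the dictionary matches both 'mouse' and 'mice'
    'consolidate': 'ContainedWithin',  # keep the longest match and deduplicate
    'label': 'ANIMAL'
})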
Training a model that contains dictionaries
After you have loaded the dictionaries, create a dictionary model and train the model using the RBR.train() method. In the method, specify:
* The module directory
* The language of the dictionary entries
* The dictionaries to use
Code sample
custom_dict_block = watson_nlp.resources.feature_extractor.RBR.train(module_folder,
language='en', dictionaries=dictionaries)
Applying the model on new data
After you have trained the dictionaries, apply the model on new data using the run() method, as you would with any of the existing pre-trained blocks.
Code sample
custom_dict_block.run('Bruce is at Times Square')
Output of the code sample:
{(0, 5): ['Names'], (12, 24): ['SIGHT']}
To show the labels or the name of the dictionary:
RBR_result = custom_dict_block.executor.get_raw_response('Bruce is at Times Square', language='en')
print(RBR_result)
Output showing the labels:
{'annotations': {'View_Names': [{'label': 'Names', 'match': {'location': {'begin': 0, 'end': 5}, 'text': 'Bruce'}}], 'View_places_and_sights_mappings': [{'label': 'SIGHT', 'match': {'location': {'begin': 12, 'end': 24}, 'text': 'Times Square'}}]}, 'instrumentationInfo': {'annotator': {'version': '1.0', 'key': 'Text match extractor for NLP_Dict_Module_1'}, 'runningTimeMS': 3, 'documentSizeChars': 32, 'numAnnotationsTotal': 2, 'numAnnotationsPerType': [{'annotationType': 'View_Names', 'numAnnotations': 1}, {'annotationType': 'View_places_and_sights_mappings', 'numAnnotations': 1}], 'interrupted': False, 'success': True}}
Parent topic:[Creating your own models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-create-model.html)
| # Detecting entities with a custom dictionary #
If you have a fixed set of terms that you want to detect, like a list of product names or organizations, you can create a dictionary\. Dictionary matching is very fast and resource\-efficient\.
Watson Natural Language Processing dictionaries contain advanced matching capabilities that go beyond a simple string match, including:
<!-- <ul> -->
* Dictionary terms can consist of a single token, for example *wheel*, or multiple tokens, for example, *steering wheel*\.
* Dictionary term matching can be case\-sensitive or case\-insensitive\. With a case\-sensitive match, you can ensure that acronyms like *ABS* don't match words in the regular language, like *abs*, that have a different meaning\.
* You can specify how to consolidate matches when multiple dictionary entries match the same text\. Given the two dictionary entries, *Watson* and *Watson Natural Language Processing*, you can configure which entry should match in "I like Watson Natural Language Processing": either only *Watson Natural Language Processing*, as it contains *Watson*, or both\.
* You can specify to match the lemma instead of enumerating all inflections\. This way, the single dictionary entry *mouse* will detect both *mouse* and *mice* in the text\.
* You can attach a label to each dictionary entry, for example *Organization category* to include additional metadata in the match\.
<!-- </ul> -->
All of these capabilities can be configured, so you can pick the right option for your use case\.
## Types of dictionary files ##
Watson Natural Language Processing supports two types of dictionary files:
<!-- <ul> -->
* Term list (ending in `.dict`)
Example of a term list:
Arthur
Allen
Albert
Alexa
* Table (ending in `.csv`)
Example of a table:
"label", "entry"
"ORGANIZATION", "NASA"
"COUNTRY", "USA"
"ACTOR", "Christian Bale"
<!-- </ul> -->
You can use multiple dictionaries during the same extraction\. You can also use both types at the same time, for example, run a single extraction with three dictionaries, one term list and two tables\.
## Creating dictionary files ##
Begin by creating a module directory inside your notebook\. This is a directory inside the notebook file system that will be used temporarily to store your dictionary files\.
To create dictionary files in your notebook:
<!-- <ol> -->
1. Create a module directory\. Note that the name of the module folder cannot contain any dashes as this will cause errors\.
import os
import watson_nlp
module_folder = "NLP_Dict_Module_1"
os.makedirs(module_folder, exist_ok=True)
2. Create dictionary files, and store them in the module directory\. You can either read in an external list or CSV file, or you can create dictionary files like so:
# Create a term list dictionary
term_file = "names.dict"
with open(os.path.join(module_folder, term_file), 'w') as dictionary:
dictionary.write('Bruce')
dictionary.write('\n')
dictionary.write('Peter')
dictionary.write('\n')
# Create a table dictionary
table_file = 'Places.csv'
with open(os.path.join(module_folder, table_file), 'w') as places:
places.write("\"label\", \"entry\"")
places.write("\n")
places.write("\"SIGHT\", \"Times Square\"")
places.write("\n")
places.write("\"PLACE\", \"5th Avenue\"")
places.write("\n")
<!-- </ol> -->
## Loading the dictionaries and configuring matching options ##
The dictionaries can be loaded using the following helper methods\.
<!-- <ul> -->
* To load a single dictionary, use `watson_nlp.toolkit.rule_utils.DictionaryConfig(<dictionary configuration>)`
* To load multiple dictionaries, use `watson_nlp.toolkit.rule_utils.DictionaryConfig.load_all([<dictionary configuration>, ...])`
<!-- </ul> -->
For each dictionary, you need to specify a dictionary configuration\. The dictionary configuration is a Python dictionary, with the following attributes:
<!-- <table> -->
| Attribute | Value | Description | Required |
| ------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------ |
| `name` | string | The name of the dictionary | Yes |
| `source` | string | The path to the dictionary, relative to `module_folder` | Yes |
| `dict_type` | file or table | Whether the dictionary artifact is a term list (file) or a table of mappings (table) | No\. The default is file |
| `consolidate` | ContainedWithin (Keep the longest match and deduplicate) / NotContainedWithin (Keep the shortest match and deduplicate) / ContainsButNotEqual (Keep longest match but keep duplicate matches) / ExactMatch (Deduplicate) / LeftToRight (Keep the leftmost longest non\-overlapping span) | What to do with dictionary matches that overlap\. | No\. The default is to not consolidate matches\. |
| `case` | exact / insensitive | Either match exact case or be case insensitive\. | No\. The default is exact match\. |
| `lemma` | True / False | Match the terms in the dictionary with the lemmas from the text\. The dictionary should contain only lemma forms\. For example, add `mouse` in the dictionary to match both `mouse` and `mice` in text\. Do not add `mice` in the dictionary\. To match terms that consist of multiple tokens in text, separate the lemmas of those terms in the dictionary by a space character\. | No\. The default is False\. |
| `mappings.columns` (columns `as attribute of` mappings: \{\}) | list \[ string \] | List of column headers in the same order as present in the table csv | Yes if `dict_type: table` |
| `mappings.entry` (entry `as attribute of` mappings: \{\}) | string | The name of the column header that contains the string to match against the document\. | Yes if `dict_type: table` |
| `label` | string | The label to attach to matches\. | No |
**Code sample**
# Load the dictionaries
dictionaries = watson_nlp.toolkit.rule_utils.DictionaryConfig.load_all([{
'name': 'Names',
'source': term_file,
'case':'insensitive'
}, {
'name': 'places_and_sights_mappings',
'source': table_file,
'dict_type': 'table',
'mappings': {
'columns': ['label', 'entry'],
'entry': 'entry'
}
}])
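The configuration attributes beyond `name` and `source` are optional\. For example, a single configuration that consolidates overlapping matches and attaches a label might look like this (a sketch that reuses the `term_file` created above; the attribute values follow the table in this section):
# A single dictionary configuration with optional matching attributes
names_config = watson_nlp.toolkit.rule_utils.DictionaryConfig({
    'name': 'Names',
    'source': term_file,
    'case': 'insensitive',
    'consolidate': 'ContainedWithin',  # keep the longest match and deduplicate
    'label': 'PERSON'                  # label to attach to matches
})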
## Training a model that contains dictionaries ##
After you have loaded the dictionaries, create a dictionary model and train the model using the `RBR.train()` method\. In the method, specify:
* The module directory
* The language of the dictionary entries
* The dictionaries to use
**Code sample**
custom_dict_block = watson_nlp.resources.feature_extractor.RBR.train(module_folder,
language='en', dictionaries=dictionaries)
## Applying the model on new data ##
After you have trained the dictionaries, apply the model on new data using the `run()` method, as you would with any of the existing pre\-trained blocks\.
**Code sample**
custom_dict_block.run('Bruce is at Times Square')
Output of the code sample:
{(0, 5): ['Names'], (12, 24): ['SIGHT']}
To show the labels or the name of the dictionary:
RBR_result = custom_dict_block.executor.get_raw_response('Bruce is at Times Square', language='en')
print(RBR_result)
Output showing the labels:
{'annotations': {'View_Names': [{'label': 'Names', 'match': {'location': {'begin': 0, 'end': 5}, 'text': 'Bruce'}}], 'View_places_and_sights_mappings': [{'label': 'SIGHT', 'match': {'location': {'begin': 12, 'end': 24}, 'text': 'Times Square'}}]}, 'instrumentationInfo': {'annotator': {'version': '1.0', 'key': 'Text match extractor for NLP_Dict_Module_1'}, 'runningTimeMS': 3, 'documentSizeChars': 32, 'numAnnotationsTotal': 2, 'numAnnotationsPerType': [{'annotationType': 'View_Names', 'numAnnotations': 1}, {'annotationType': 'View_places_and_sights_mappings', 'numAnnotations': 1}], 'interrupted': False, 'success': True}}
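To post\-process the matches, you can iterate over the `annotations` in the raw response\. A minimal sketch, assuming the response structure shown in the output above:
# Print the view, label, text and location of each match
for view, matches in RBR_result['annotations'].items():
    for match in matches:
        location = match['match']['location']
        print(view, match['label'], match['match']['text'], location['begin'], location['end'])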
**Parent topic:**[Creating your own models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-create-model.html)
|
6ACE7C519D2C4FCA9FC0498BCE82F75FFA05CFFD | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-entities-regex.html?context=cdpaas&locale=en | Detecting entities with regular expressions | Detecting entities with regular expressions
Similar to detecting entities with dictionaries, you can use regex pattern matches to detect entities.
Regular expressions are not provided in files like dictionaries but in-memory within a regex configuration. You can use multiple regex configurations during the same extraction.
Regexes that you define with Watson Natural Language Processing can use token boundaries. This way, you can ensure that your regular expression matches within one or more tokens. This is a clear advantage over simpler regular expression engines, especially when you work with a language that is not separated by whitespace, such as Chinese.
Regular expressions are processed by a dedicated component called Rule-Based Runtime, or RBR for short.
Creating regex configurations
Begin by creating a module directory inside your notebook. This is a directory inside the notebook file system that is used temporarily to store the files created by the RBR training. This module directory can be the same directory that you created and used for dictionary-based entity extraction. Dictionaries and regular expressions can be used in the same training run.
To create the module directory in your notebook, enter the following in a code cell. Note that the module directory can't contain a dash (-).
import os
import watson_nlp
module_folder = "NLP_RBR_Module_2"
os.makedirs(module_folder, exist_ok=True)
A regex configuration is a Python dictionary, with the following attributes:
Available attributes in regex configurations with their values, descriptions of use and indication if required or not
Attribute Value Description Required
name string The name of the regular expression. Matches of the regular expression in the input text are tagged with this name in the output. Yes
regexes list (string of Perl-based regex patterns) Should be non-empty. Multiple regexes can be provided. Yes
flags Delimited string of valid flags Flags such as UNICODE or CASE_INSENSITIVE control the matching. Can also be a combination of flags. For the supported flags, see [Pattern (Java Platform SE 8)](https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html). No (defaults to DOTALL)
token_boundary.min int token_boundary indicates whether to match the regular expression only on token boundaries. Specified as a dict object with min and max attributes. No (returns the longest non-overlapping match at each character position in the input text)
token_boundary.max int max is an optional attribute for token_boundary and needed when the boundary needs to extend for a range (between min and max tokens). token_boundary.max needs to be >= token_boundary.min No (if token_boundary is specified, the min attribute can be specified alone)
groups list (string labels for matching groups) String index in list corresponds to matched group in pattern starting with 1 where 0 index corresponds to entire match. For example: regex: (a)(b) on ab with group: ['full', 'first', 'second'] will yield full: ab, first: a, second: b No (defaults to label match on full match)
The regex configurations can be loaded using the following helper methods:
* To load a single regex configuration, use watson_nlp.toolkit.RegexConfig.load(<regex configuration>)
* To load multiple regex configurations, use watson_nlp.toolkit.RegexConfig.load_all([<regex configuration>, ...])
Code sample
This sample shows you how to load two different regex configurations. The first configuration detects person names. It uses the groups attribute to allow easy access to the full, first and last name at a later stage.
The second configuration detects acronyms as a sequence of all-uppercase characters. By using the token_boundary attribute, it prevents matches in words that contain both uppercase and lowercase characters.
from watson_nlp.toolkit.rule_utils import RegexConfig
# Load some regex configs, for instance to match First names or acronyms
regexes = RegexConfig.load_all([
{
'name': 'full names',
'regexes': ['([A-Z][a-z]*) ([A-Z][a-z]*)'],
'groups': ['full name', 'first name', 'last name']
},
{
'name': 'acronyms',
'regexes': ['([A-Z]+)'],
'groups': ['acronym'],
'token_boundary': {
'min': 1,
'max': 1
}
}
])
Training a model that contains regular expressions
After you have loaded the regex configurations, create an RBR model using the RBR.train() method. In the method, specify:
* The module directory
* The language of the text
* The regex configurations to use
This is the same method that is used to train RBR with dictionary-based extraction. You can pass the dictionary configuration in the same method call.
Code sample
# Train the RBR model
custom_regex_block = watson_nlp.resources.feature_extractor.RBR.train(module_path=module_folder, language='en', regexes=regexes)
Applying the model on new data
After you have trained the model, apply it on new data using the run() method, as you would with any of the existing pre-trained blocks.
Code sample
custom_regex_block.run('Bruce Wayne works for NASA')
Output of the code sample:
{(0, 11): ['regex::full names'], (0, 5): ['regex::full names'], (6, 11): ['regex::full names'], (22, 26): ['regex::acronyms']}
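Because the keys of this result are (begin, end) character offsets, you can map each match back to the input text. A minimal sketch, assuming the dictionary-like result structure shown above:
text = 'Bruce Wayne works for NASA'
result = custom_regex_block.run(text)
# Print each matched span together with its labels
for (begin, end), labels in result.items():
    print(text[begin:end], labels)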
To show the matching subgroups or the matched text:
import json
# Get the raw response including matching groups
full_regex_result = custom_regex_block.executor.get_raw_response('Bruce Wayne works for NASA', language='en')
print(json.dumps(full_regex_result, indent=2))
Output of the code sample:
{
"annotations": {
"View_full names": [
{
"label": "regex::full names",
"fullname": {
"location": {
"begin": 0,
"end": 11
},
"text": "Bruce Wayne"
},
"firstname": {
"location": {
"begin": 0,
"end": 5
},
"text": "Bruce"
},
"lastname": {
"location": {
"begin": 6,
"end": 11
},
"text": "Wayne"
}
}
],
"View_acronyms": [
{
"label": "regex::acronyms",
"acronym": {
"location": {
"begin": 22,
"end": 26
},
"text": "NASA"
}
}
]
},
...
}
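To access individual groups programmatically, read the group keys from each annotation. A minimal sketch, assuming the response structure shown above:
# Pull the first and last name out of the first 'full names' match
match = full_regex_result['annotations']['View_full names'][0]
print(match['firstname']['text'], match['lastname']['text'])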
Parent topic:[Creating your own models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-create-model.html)
| # Detecting entities with regular expressions #
Similar to detecting entities with dictionaries, you can use regex pattern matches to detect entities\.
Regular expressions are not provided in files like dictionaries but in\-memory within a regex configuration\. You can use multiple regex configurations during the same extraction\.
Regexes that you define with Watson Natural Language Processing can use token boundaries\. This way, you can ensure that your regular expression matches within one or more tokens\. This is a clear advantage over simpler regular expression engines, especially when you work with a language that is not separated by whitespace, such as Chinese\.
Regular expressions are processed by a dedicated component called Rule\-Based Runtime, or RBR for short\.
## Creating regex configurations ##
Begin by creating a module directory inside your notebook\. This is a directory inside the notebook file system that is used temporarily to store the files created by the RBR training\. This module directory can be the same directory that you created and used for dictionary\-based entity extraction\. Dictionaries and regular expressions can be used in the same training run\.
To create the module directory in your notebook, enter the following in a code cell\. Note that the module directory can't contain a dash (\-)\.
import os
import watson_nlp
module_folder = "NLP_RBR_Module_2"
os.makedirs(module_folder, exist_ok=True)
A regex configuration is a Python dictionary, with the following attributes:
Available attributes in regex configurations with their values, descriptions of use and indication if required or not
| Attribute | Value | Description | Required |
| -------------------- | ------------------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------- |
| `name` | string | The name of the regular expression\. Matches of the regular expression in the input text are tagged with this name in the output\. | Yes |
| `regexes` | list (string of Perl\-based regex patterns) | Should be non\-empty\. Multiple regexes can be provided\. | Yes |
| `flags` | Delimited string of valid flags | Flags such as UNICODE or CASE\_INSENSITIVE control the matching\. Can also be a combination of flags\. For the supported flags, see [Pattern (Java Platform SE 8)](https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html)\. | No (defaults to DOTALL) |
| `token_boundary.min` | int | `token_boundary` indicates whether to match the regular expression only on token boundaries\. Specified as a dict object with `min` and `max` attributes\. | No (returns the longest non\-overlapping match at each character position in the input text) |
| `token_boundary.max` | int | `max` is an optional attribute for `token_boundary` and needed when the boundary needs to extend for a range (between `min` and `max` tokens)\. `token_boundary.max` needs to be `>= token_boundary.min` | No (if `token_boundary` is specified, the `min` attribute can be specified alone) |
| `groups` | list (string labels for matching groups) | String index in list corresponds to matched group in pattern starting with 1 where 0 index corresponds to entire match\. For example: `regex: (a)(b)` on `ab` with `group: ['full', 'first', 'second']` will yield `full: ab, first: a, second: b` | No (defaults to label match on full match) |
The regex configurations can be loaded using the following helper methods:
* To load a single regex configuration, use `watson_nlp.toolkit.RegexConfig.load(<regex configuration>)`
* To load multiple regex configurations, use `watson_nlp.toolkit.RegexConfig.load_all([<regex configuration>, ...])`
**Code sample**
This sample shows you how to load two different regex configurations\. The first configuration detects person names\. It uses the groups attribute to allow easy access to the full, first and last name at a later stage\.
The second configuration detects acronyms as a sequence of all\-uppercase characters\. By using the token\_boundary attribute, it prevents matches in words that contain both uppercase and lowercase characters\.
from watson_nlp.toolkit.rule_utils import RegexConfig
# Load some regex configs, for instance to match First names or acronyms
regexes = RegexConfig.load_all([
{
'name': 'full names',
'regexes': ['([A-Z][a-z]*) ([A-Z][a-z]*)'],
'groups': ['full name', 'first name', 'last name']
},
{
'name': 'acronyms',
'regexes': ['([A-Z]+)'],
'groups': ['acronym'],
'token_boundary': {
'min': 1,
'max': 1
}
}
])
## Training a model that contains regular expressions ##
After you have loaded the regex configurations, create an RBR model using the `RBR.train()` method\. In the method, specify:
* The module directory
* The language of the text
* The regex configurations to use
This is the same method that is used to train RBR with dictionary\-based extraction\. You can pass the dictionary configuration in the same method call\.
**Code sample**
# Train the RBR model
custom_regex_block = watson_nlp.resources.feature_extractor.RBR.train(module_path=module_folder, language='en', regexes=regexes)
## Applying the model on new data ##
After you have trained the model, apply it on new data using the `run()` method, as you would with any of the existing pre\-trained blocks\.
**Code sample**
custom_regex_block.run('Bruce Wayne works for NASA')
Output of the code sample:
{(0, 11): ['regex::full names'], (0, 5): ['regex::full names'], (6, 11): ['regex::full names'], (22, 26): ['regex::acronyms']}
To show the matching subgroups or the matched text:
import json
# Get the raw response including matching groups
full_regex_result = custom_regex_block.executor.get_raw_response('Bruce Wayne works for NASA', language='en')
print(json.dumps(full_regex_result, indent=2))
Output of the code sample:
{
"annotations": {
"View_full names": [
{
"label": "regex::full names",
"fullname": {
"location": {
"begin": 0,
"end": 11
},
"text": "Bruce Wayne"
},
"firstname": {
"location": {
"begin": 0,
"end": 5
},
"text": "Bruce"
},
"lastname": {
"location": {
"begin": 6,
"end": 11
},
"text": "Wayne"
}
}
],
"View_acronyms": [
{
"label": "regex::acronyms",
"acronym": {
"location": {
"begin": 22,
"end": 26
},
"text": "NASA"
}
}
]
},
...
}
**Parent topic:**[Creating your own models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-create-model.html)
|
D71261B71A4CF5A1AD5E148EDE7751B630060BDF | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-entities-transformer.html?context=cdpaas&locale=en | Detecting entities with a custom transformer model | Detecting entities with a custom transformer model
If you don't have a fixed set of terms or you cannot express entities that you like to detect as regular expressions, you can build a custom transformer model. The model is based on the pretrained Slate IBM Foundation model.
When you use the pretrained model, you can build multi-lingual models. You don't have to have separate models for each language.
You need sufficient training data to achieve high quality (2000 – 5000 per entity type). If you have GPUs available, use them for training.
Note: Training transformer models is CPU and memory intensive. The predefined environments are not large enough to complete the training. Create a custom notebook environment with a larger amount of CPU and memory, and use that to run your notebook. If you have GPUs available, it's highly recommended to use them. See [Creating your own environment template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-customize-env-definition.html).
Input data format
The training data is represented as an array with multiple JSON objects. Each JSON object represents one training instance, and must have a text and a mentions field. The text field represents the training sentence text, and mentions is an array of JSON objects with the text, type, and location of each mention:
[
{
"text": str,
"mentions": {
"location": {
"begin": int,
"end": int
},
"text": str,
"type": str
},...]
},...
]
Example:
[
{
"id": 38863234,
"text": "I'm moving to Colorado in a couple months.",
"mentions": {
"text": "Colorado",
"type": "Location",
"location": {
"begin": 14,
"end": 22
}
},
{
"text": "couple months",
"type": "Duration",
"location": {
"begin": 28,
"end": 41
}
}]
}
]
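The training utility used in the next section reads all JSON files from a directory. The following sketch creates such a directory and writes one training file in the format above (the directory name entity_train_data matches the one used in the sample code):
import json
import os

# A small array of training instances in the input data format
train_data = [
    {
        "text": "I'm moving to Colorado in a couple months.",
        "mentions": [{
            "text": "Colorado",
            "type": "Location",
            "location": {"begin": 14, "end": 22}
        }]
    }
]

# Write the data as a JSON file into the training data directory
os.makedirs('entity_train_data', exist_ok=True)
with open(os.path.join('entity_train_data', 'train.json'), 'w') as f:
    json.dump(train_data, f)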
Training your model
The transformer algorithm uses the pretrained Slate model. The pretrained Slate model is only available in Runtime 23.1.
To get the options available for configuring Transformer training, enter:
help(watson_nlp.workflows.entity_mentions.transformer.Transformer.train)
Sample code
import watson_nlp
from watson_nlp.toolkit.entity_mentions_utils.train_util import prepare_stream_of_train_records_from_JSON_collection
# load the syntax models for all languages to be supported
syntax_model = watson_nlp.load('syntax_izumo_en_stock')
syntax_models = [syntax_model]
# load the pretrained Slate model
pretrained_model_resource = watson_nlp.load('pretrained-model_slate.153m.distilled_many_transformer_multilingual_uncased')
# prepare the train and dev data
# entity_train_data is a directory with one or more json files in the input format specified above
train_data_stream = prepare_stream_of_train_records_from_JSON_collection('entity_train_data')
dev_data_stream = prepare_stream_of_train_records_from_JSON_collection('entity_train_data')
# train a transformer workflow model
trained_workflow = watson_nlp.workflows.entity_mentions.transformer.Transformer.train(
train_data_stream=train_data_stream,
dev_data_stream=dev_data_stream,
syntax_models=syntax_models,
template_resource=pretrained_model_resource,
num_train_epochs=3,
)
Applying the model on new data
Apply the trained transformer workflow model on new data by using the run() method, as you would with any of the existing pre-trained blocks.
Code sample
trained_workflow.run('Bruce is at Times Square')
Storing and loading the model
The custom transformer model can be stored like any other model, as described in "Loading and storing models", using ibm_watson_studio_lib.
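For example, to store the trained workflow as a project asset, you can save it to a folder, compress the folder, and upload the archive with wslib. This is a sketch: the asset and file names are placeholders, wslib must already be initialized from a project token as described below, and the exact upload_file signature is documented in ibm-watson-studio-lib:
import shutil

# Save the trained workflow to a local folder
trained_workflow.save('trained_workflow_folder')

# Compress the folder into trained_workflow_file.zip
shutil.make_archive('trained_workflow_file', 'zip', 'trained_workflow_folder')

# Upload the archive as a project asset (signature assumed; see the
# ibm-watson-studio-lib documentation)
wslib.upload_file('trained_workflow_file.zip', asset_name='trained_workflow')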
To load the custom transformer model, extra steps are required:
1. Ensure that you have an access token on the Access control page on the Manage tab of your project. Only project admins can create access tokens. The access token can have Viewer or Editor access permissions. Only editors can inject the token into a notebook.
2. Add the project token to the notebook by clicking More > Insert project token from the notebook action bar and then run the cell.
By running the inserted hidden code cell, a wslib object is created that you can use for functions in the ibm-watson-studio-lib library. For information on the available ibm-watson-studio-lib functions, see [Using ibm-watson-studio-lib for Python](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html).
3. Download and extract the model to your local runtime environment:
import zipfile
model_zip = 'trained_workflow_file'
model_folder = 'trained_workflow_folder'
wslib.download_file('trained_workflow', file_name=model_zip)
with zipfile.ZipFile(model_zip, 'r') as zip_ref:
zip_ref.extractall(model_folder)
4. Load the model from the extracted folder:
trained_workflow = watson_nlp.load(model_folder)
Parent topic:[Creating your own models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-create-model.html)
| # Detecting entities with a custom transformer model #
If you don't have a fixed set of terms or you cannot express entities that you like to detect as regular expressions, you can build a custom transformer model\. The model is based on the pretrained Slate IBM Foundation model\.
When you use the pretrained model, you can build multi\-lingual models\. You don't have to have separate models for each language\.
You need sufficient training data to achieve high quality (2000 – 5000 per entity type)\. If you have GPUs available, use them for training\.
Note: Training transformer models is CPU and memory intensive\. The predefined environments are not large enough to complete the training\. Create a custom notebook environment with a larger amount of CPU and memory, and use that to run your notebook\. If you have GPUs available, it's highly recommended to use them\. See [Creating your own environment template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-customize-env-definition.html)\.
## Input data format ##
The training data is represented as an array with multiple JSON objects\. Each JSON object represents one training instance, and must have a `text` and a `mentions` field\. The `text` field represents the training sentence text, and `mentions` is an array of JSON objects with the text, type, and location of each mention:
[
{
"text": str,
"mentions": {
"location": {
"begin": int,
"end": int
},
"text": str,
"type": str
},...]
},...
]
Example:
[
{
"id": 38863234,
"text": "I'm moving to Colorado in a couple months.",
"mentions": {
"text": "Colorado",
"type": "Location",
"location": {
"begin": 14,
"end": 22
}
},
{
"text": "couple months",
"type": "Duration",
"location": {
"begin": 28,
"end": 41
}
}]
}
]
## Training your model ##
The transformer algorithm uses the pretrained Slate model\. The pretrained Slate model is only available in Runtime 23\.1\.
To get the options available for configuring Transformer training, enter:
help(watson_nlp.workflows.entity_mentions.transformer.Transformer.train)
**Sample code**
import watson_nlp
from watson_nlp.toolkit.entity_mentions_utils.train_util import prepare_stream_of_train_records_from_JSON_collection
# load the syntax models for all languages to be supported
syntax_model = watson_nlp.load('syntax_izumo_en_stock')
syntax_models = [syntax_model]
# load the pretrained Slate model
pretrained_model_resource = watson_nlp.load('pretrained-model_slate.153m.distilled_many_transformer_multilingual_uncased')
# prepare the train and dev data
# entity_train_data is a directory with one or more json files in the input format specified above
train_data_stream = prepare_stream_of_train_records_from_JSON_collection('entity_train_data')
dev_data_stream = prepare_stream_of_train_records_from_JSON_collection('entity_train_data')
# train a transformer workflow model
trained_workflow = watson_nlp.workflows.entity_mentions.transformer.Transformer.train(
train_data_stream=train_data_stream,
dev_data_stream=dev_data_stream,
syntax_models=syntax_models,
template_resource=pretrained_model_resource,
num_train_epochs=3,
)
## Applying the model on new data ##
Apply the trained transformer workflow model on new data by using the `run()` method, as you would with any of the existing pre\-trained blocks\.
**Code sample**
trained_workflow.run('Bruce is at Times Square')
## Storing and loading the model ##
The custom transformer model can be stored like any other model, as described in "Loading and storing models", using `ibm_watson_studio_lib`\.
To load the custom transformer model, extra steps are required:
1. Ensure that you have an access token on the **Access control** page on the **Manage** tab of your project\. Only project admins can create access tokens\. The access token can have **Viewer** or **Editor** access permissions\. Only editors can inject the token into a notebook\.
2. Add the project token to the notebook by clicking **More > Insert project token** from the notebook action bar and then run the cell\.
By running the inserted hidden code cell, a `wslib` object is created that you can use for functions in the `ibm-watson-studio-lib` library. For information on the available `ibm-watson-studio-lib` functions, see [Using ibm-watson-studio-lib for Python](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html).
3. Download and extract the model to your local runtime environment:
import zipfile
model_zip = 'trained_workflow_file'
model_folder = 'trained_workflow_folder'
wslib.download_file('trained_workflow', file_name=model_zip)
with zipfile.ZipFile(model_zip, 'r') as zip_ref:
zip_ref.extractall(model_folder)
4. Load the model from the extracted folder:
trained_workflow = watson_nlp.load(model_folder)
**Parent topic:**[Creating your own models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-create-model.html)
|
355EA8BD00A0246EACFEF090AF6A6B6F2BD92D4F | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-extract-sentiment.html?context=cdpaas&locale=en | Extracting sentiment with a custom transformer model | Extracting sentiment with a custom transformer model
You can train your own models for sentiment extraction based on the Slate IBM Foundation model. This pretrained model can be fine-tuned for your use case by training it on your specific input data.
The Slate IBM Foundation model is available only in Runtime 23.1.
Note: Training transformer models is CPU and memory intensive. Depending on the size of your training data, the environment might not be large enough to complete the training. If you run into issues with the notebook kernel during training, create a custom notebook environment with a larger amount of CPU and memory, and use that to run your notebook. Use a GPU-based environment for training and also inference time, if it is available to you. See [Creating your own environment template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-customize-env-definition.html).
* [Input data format for training](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-extract-sentiment.html?context=cdpaas&locale=en#input)
* [Loading the pretrained model resources](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-extract-sentiment.html?context=cdpaas&locale=en#load)
* [Training the model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-extract-sentiment.html?context=cdpaas&locale=en#train)
* [Applying the model on new data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-extract-sentiment.html?context=cdpaas&locale=en#apply)
Input data format for training
You need to provide a training and development data set to the training function. The development data is usually around 10% of the training data. Each training or development sample is represented as a JSON object. It must have a text and a labels field. The text represents the training example text, and the labels field is an array, which contains exactly one label of positive, neutral, or negative.
The following is an example of an array with sample training data:
[
{
"text": "I am happy",
"labels": "positive"]
},
{
"text": "I am sad",
"labels": "negative"]
},
{
"text": "The sky is blue",
"labels": "neutral"]
}
]
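If your samples are in memory, you can first write them to JSON files in this format. A minimal sketch (the file names match the ones used in the next code block):
import json

train_data = [
    {"text": "I am happy", "labels": ["positive"]},
    {"text": "I am sad", "labels": ["negative"]}
]
dev_data = [
    {"text": "The sky is blue", "labels": ["neutral"]}
]

# Write the training and development sets to JSON files
with open("train_data.json", "w") as f:
    json.dump(train_data, f)
with open("dev_data.json", "w") as f:
    json.dump(dev_data, f)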
The training and development data sets are created as data streams from arrays of JSON objects. To create the data streams, you might use the utility method prepare_data_from_json:
import watson_nlp
from watson_nlp.toolkit.sentiment_analysis_utils.training import train_util as utils
training_data_file = "train_data.json"
dev_data_file = "dev_data.json"
train_stream = utils.prepare_data_from_json(training_data_file)
dev_stream = utils.prepare_data_from_json(dev_data_file)
Loading the pretrained model resources
The pretrained Slate IBM Foundation model needs to be loaded before it is passed to the training algorithm. In addition, you need to load the syntax analysis models for the languages that are used in your input texts.
To load the model:
# Load the pretrained Slate IBM Foundation model
pretrained_model_resource = watson_nlp.load('pretrained-model_slate.153m.distilled_many_transformer_multilingual_uncased')
# Download relevant syntax analysis models
syntax_model_en = watson_nlp.load('syntax_izumo_en_stock')
syntax_model_de = watson_nlp.load('syntax_izumo_de_stock')
# Create a list of all syntax analysis models
syntax_models = [syntax_model_en, syntax_model_de]
Training the model
For all options that are available for configuring sentiment transformer training, enter:
help(watson_nlp.workflows.sentiment.AggregatedSentiment.train_transformer)
The train_transformer method creates a workflow model, which automatically runs syntax analysis and the trained sentiment classification. In a subsequent step, enable language detection so that the workflow model can run on input text without any prerequisite information.
The following is a sample call using the input data and pretrained model from the previous sections:
from watson_nlp.workflows.sentiment import AggregatedSentiment
sentiment_model = AggregatedSentiment.train_transformer(
train_data_stream = train_stream,
dev_data_stream = dev_stream,
syntax_model=syntax_models,
pretrained_model_resource=pretrained_model_resource,
label_list=['negative', 'neutral', 'positive'],
learning_rate=2e-5,
num_train_epochs=10,
combine_approach="NON_NEUTRAL_MEAN",
keep_model_artifacts=True
)
lang_detect_model = watson_nlp.load('lang-detect_izumo_multi_stock')
sentiment_model.enable_lang_detect(lang_detect_model)
Applying the model on new data
After you train the model on a data set, apply the model on new data by using the run() method, as you would with any of the existing pre-trained blocks.
Sample code:
input_text = 'new input text'
sentiment_predictions = sentiment_model.run(input_text)
Parent topic:[Creating your own models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-create-model_cloud.html)
| # Extracting sentiment with a custom transformer model #
You can train your own models for sentiment extraction based on the Slate IBM Foundation model\. This pretrained model can be fine\-tuned for your use case by training it on your specific input data\.
The Slate IBM Foundation model is available only in Runtime 23\.1\.
Note: Training transformer models is CPU and memory intensive\. Depending on the size of your training data, the environment might not be large enough to complete the training\. If you run into issues with the notebook kernel during training, create a custom notebook environment with a larger amount of CPU and memory, and use that to run your notebook\. Use a GPU\-based environment for training and also inference time, if it is available to you\. See [Creating your own environment template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-customize-env-definition.html)\.
* [Input data format for training](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-extract-sentiment.html?context=cdpaas&locale=en#input)
* [Loading the pretrained model resources](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-extract-sentiment.html?context=cdpaas&locale=en#load)
* [Training the model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-extract-sentiment.html?context=cdpaas&locale=en#train)
* [Applying the model on new data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-extract-sentiment.html?context=cdpaas&locale=en#apply)
## Input data format for training ##
You need to provide a training and development data set to the training function\. The development data is usually around 10% of the training data\. Each training or development sample is represented as a JSON object\. It must have a **text** and a **labels** field\. The **text** represents the training example text, and the **labels** field is an array, which contains exactly one label of **positive**, **neutral**, or **negative**\.
The following is an example of an array with sample training data:
[
{
"text": "I am happy",
"labels": "positive"]
},
{
"text": "I am sad",
"labels": "negative"]
},
{
"text": "The sky is blue",
"labels": "neutral"]
}
]
The training and development data sets are created as data streams from arrays of JSON objects\. To create the data streams, you might use the utility method `prepare_data_from_json`:
import watson_nlp
from watson_nlp.toolkit.sentiment_analysis_utils.training import train_util as utils
training_data_file = "train_data.json"
dev_data_file = "dev_data.json"
train_stream = utils.prepare_data_from_json(training_data_file)
dev_stream = utils.prepare_data_from_json(dev_data_file)
## Loading the pretrained model resources ##
The pretrained Slate IBM Foundation model needs to be loaded before it is passed to the training algorithm\. In addition, you need to load the syntax analysis models for the languages that are used in your input texts\.
To load the model:
# Load the pretrained Slate IBM Foundation model
pretrained_model_resource = watson_nlp.load('pretrained-model_slate.153m.distilled_many_transformer_multilingual_uncased')
# Download relevant syntax analysis models
syntax_model_en = watson_nlp.load('syntax_izumo_en_stock')
syntax_model_de = watson_nlp.load('syntax_izumo_de_stock')
# Create a list of all syntax analysis models
syntax_models = [syntax_model_en, syntax_model_de]
## Training the model ##
For all options that are available for configuring sentiment transformer training, enter:
help(watson_nlp.workflows.sentiment.AggregatedSentiment.train_transformer)
The `train_transformer` method creates a workflow model, which automatically runs syntax analysis and the trained sentiment classification\. In a subsequent step, enable language detection so that the workflow model can run on input text without any prerequisite information\.
The following is a sample call using the input data and pretrained model from the previous sections:
from watson_nlp.workflows.sentiment import AggregatedSentiment
sentiment_model = AggregatedSentiment.train_transformer(
train_data_stream = train_stream,
dev_data_stream = dev_stream,
syntax_model=syntax_models,
pretrained_model_resource=pretrained_model_resource,
label_list=['negative', 'neutral', 'positive'],
learning_rate=2e-5,
num_train_epochs=10,
combine_approach="NON_NEUTRAL_MEAN",
keep_model_artifacts=True
)
lang_detect_model = watson_nlp.load('lang-detect_izumo_multi_stock')
sentiment_model.enable_lang_detect(lang_detect_model)
## Applying the model on new data ##
After you train the model on a data set, apply the model on new data by using the `run()` method, as you would with any of the existing pre\-trained blocks\.
Sample code:
input_text = 'new input text'
sentiment_predictions = sentiment_model.run(input_text)
**Parent topic:**[Creating your own models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-create-model_cloud.html)
|
D174298E1DD7898C08771488715D83FC7A7740AE | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-pretrained.html?context=cdpaas&locale=en | Working with pre-trained models | Working with pre-trained models
Watson Natural Language Processing provides pre-trained models in over 20 languages. They are curated by a dedicated team of experts, and evaluated for quality on each specific language. These pre-trained models can be used in production environments without you having to worry about license or intellectual property infringements.
Loading and running a model
To load a model, you first need to know its name. Model names follow a standard convention encoding the type of model (like classification or entity extraction), type of algorithm (like BERT or SVM), language code, and details of the type system.
To find the model that matches your needs, use the task catalog. See [Watson NLP task catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html).
You can find the expected input for a given block class (for example, for the Entity Mentions model) by using help() on the block class run() method:
import watson_nlp
help(watson_nlp.blocks.keywords.TextRank.run)
Watson Natural Language Processing encapsulates natural language functionality through blocks and workflows. Each block or workflow supports functions to (see the sketch after this list):
* load(): load a model
* run(): run the model on input arguments
* train(): train the model on your own data (not all blocks and workflows support training)
* save(): save the model that has been trained on your own data
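A minimal sketch of this life cycle, assuming that save() and load() accept a local directory path, as used elsewhere in this documentation for trained models:
import watson_nlp

# load(): load a stock model by name
model = watson_nlp.load('syntax_izumo_en_stock')

# run(): run the model on input text
prediction = model.run('Welcome to IBM!')

# save(): persist the model to a local folder (hypothetical path)
model.save('my_model_folder')

# load() again, this time from the saved folder
reloaded_model = watson_nlp.load('my_model_folder')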
Blocks
Two types of blocks exist:
* [Blocks that operate directly on the input document](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-pretrained.html?context=cdpaas&locale=en#operate-data)
* [Blocks that depend on other blocks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-pretrained.html?context=cdpaas&locale=en#operate-blocks)
[Workflows](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-pretrained.html?context=cdpaas&locale=en#workflows) run one or more blocks on the input document, in a pipeline.
Blocks that operate directly on the input document
An example of a block that operates directly on the input document is the Syntax block, which performs natural language processing operations such as tokenization, lemmatization, part of speech tagging or dependency parsing.
Example: running syntax analysis on a text snippet:
import watson_nlp
# Load the syntax model for English
syntax_model = watson_nlp.load('syntax_izumo_en_stock')
# Run the syntax model and print the result
syntax_prediction = syntax_model.run('Welcome to IBM!')
print(syntax_prediction)
Blocks that depend on other blocks
Blocks that depend on other blocks cannot be applied on the input document directly. They are applied on the output of one or more preceding blocks. For example, the Keyword Extraction block depends on the Syntax and Noun Phrases block.
These blocks can be loaded but can only be run in a particular order on the input document. For example:
import watson_nlp
text = "Anna went to school at University of California Santa Cruz.
Anna joined the university in 2015."
# Load Syntax, Noun Phrases and Keywords models for English
syntax_model = watson_nlp.load('syntax_izumo_en_stock')
noun_phrases_model = watson_nlp.load('noun-phrases_rbr_en_stock')
keywords_model = watson_nlp.load('keywords_text-rank_en_stock')
# Run the Syntax and Noun Phrases models
syntax_prediction = syntax_model.run(text, parsers=('token', 'lemma', 'part_of_speech'))
noun_phrases = noun_phrases_model.run(text)
# Run the keywords model
keywords = keywords_model.run(syntax_prediction, noun_phrases, limit=2)
print(keywords)
Workflows
Workflows are predefined end-to-end pipelines from a raw document to a final block, where all necessary blocks are chained as part of the workflow pipeline. For instance, the Entity Mentions block offered in Runtime 22.2 requires syntax analysis results, so the end-to-end process would be: input text -> Syntax analysis -> Entity Mentions -> Entity Mentions results. Starting with Runtime 23.1, you can call the Entity Mentions workflow. Refer to this sample:
import watson_nlp
# Load the workflow model
mentions_workflow = watson_nlp.load('entity-mentions_transformer-workflow_multilingual_slate.153m.distilled')
# Run the entity extraction workflow on the input text
mentions_workflow.run('IBM announced new advances in quantum computing', language_code="en")
Parent topic:[Watson Natural Language Processing library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp.html)
| # Working with pre\-trained models #
Watson Natural Language Processing provides pre\-trained models in over 20 languages\. They are curated by a dedicated team of experts, and evaluated for quality on each specific language\. These pre\-trained models can be used in production environments without you having to worry about license or intellectual property infringements\.
## Loading and running a model ##
To load a model, you first need to know its name\. Model names follow a standard convention encoding the type of model (like classification or entity extraction), type of algorithm (like BERT or SVM), language code, and details of the type system\.
To find the model that matches your needs, use the task catalog\. See [Watson NLP task catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html)\.
You can find the expected input for a given block class (for example, for the Entity Mentions model) by using `help()` on the block class `run()` method:
import watson_nlp
help(watson_nlp.blocks.keywords.TextRank.run)
Watson Natural Language Processing encapsulates natural language functionality through blocks and workflows\. Each block or workflow supports functions to:
* `load()`: load a model
* `run()`: run the model on input arguments
* `train()`: train the model on your own data (not all blocks and workflows support training)
* `save()`: save the model that has been trained on your own data
### Blocks ###
Two types of blocks exist:
* [Blocks that operate directly on the input document](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-pretrained.html?context=cdpaas&locale=en#operate-data)
* [Blocks that depend on other blocks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-pretrained.html?context=cdpaas&locale=en#operate-blocks)
[Workflows](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-pretrained.html?context=cdpaas&locale=en#workflows) run one or more blocks on the input document, in a pipeline\.
#### Blocks that operate directly on the input document ####
An example of a block that operates directly on the input document is the Syntax block, which performs natural language processing operations such as tokenization, lemmatization, part of speech tagging or dependency parsing\.
Example: running syntax analysis on a text snippet:
import watson_nlp
# Load the syntax model for English
syntax_model = watson_nlp.load('syntax_izumo_en_stock')
# Run the syntax model and print the result
syntax_prediction = syntax_model.run('Welcome to IBM!')
print(syntax_prediction)
#### Blocks that depend on other blocks ####
Blocks that depend on other blocks cannot be applied on the input document directly\. They are applied on the output of one or more preceding blocks\. For example, the Keyword Extraction block depends on the Syntax and Noun Phrases block\.
These blocks can be loaded but can only be run in a particular order on the input document\. For example:
import watson_nlp
text = "Anna went to school at University of California Santa Cruz. \
Anna joined the university in 2015."
# Load Syntax, Noun Phrases and Keywords models for English
syntax_model = watson_nlp.load('syntax_izumo_en_stock')
noun_phrases_model = watson_nlp.load('noun-phrases_rbr_en_stock')
keywords_model = watson_nlp.load('keywords_text-rank_en_stock')
# Run the Syntax and Noun Phrases models
syntax_prediction = syntax_model.run(text, parsers=('token', 'lemma', 'part_of_speech'))
noun_phrases = noun_phrases_model.run(text)
# Run the keywords model
keywords = keywords_model.run(syntax_prediction, noun_phrases, limit=2)
print(keywords)
### Workflows ###
Workflows are predefined end\-to\-end pipelines from a raw document to a final block, where all necessary blocks are chained as part of the workflow pipeline\. For instance, the Entity Mentions block offered in Runtime 22\.2 requires syntax analysis results, so the end\-to\-end process would be: input text \-> Syntax analysis \-> Entity Mentions \-> Entity Mentions results\. Starting with Runtime 23\.1, you can call the Entity Mentions workflow\. Refer to this sample:
import watson_nlp
# Load the workflow model
mentions_workflow = watson_nlp.load('entity-mentions_transformer-workflow_multilingual_slate.153m.distilled')
# Run the entity extraction workflow on the input text
mentions_workflow.run('IBM announced new advances in quantum computing', language_code="en")
**Parent topic:**[Watson Natural Language Processing library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp.html)
|
174D6FDF73627D7B2258D7F351C3D0156C06D1DC | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-returned-categories.html?context=cdpaas&locale=en | Category types | Category types
The categories that are returned by the Watson Natural Language Processing Categories block are based on the IAB Tech Lab Content Taxonomy, which provides common language categories that can be used when describing content.
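To see these categories in action, you can load and run a categories model in a notebook. A minimal sketch, assuming the stock English model name categories_esa_en_stock; check the Watson NLP task catalog and the help() output for the exact model name and expected input:
import watson_nlp

# Load a categories model (model name is an assumption; see the task catalog)
categories_model = watson_nlp.load('categories_esa_en_stock')

# Run the model on input text and print the returned IAB categories
categories = categories_model.run('IBM announced new advances in quantum computing')
print(categories)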
The following table lists the IAB categories taxonomy returned by the Categories block.
LEVEL 1 LEVEL 2 LEVEL 3 LEVEL 4
Automotive
Automotive Auto Body Styles
Automotive Auto Body Styles Commercial Trucks
Automotive Auto Body Styles Sedan
Automotive Auto Body Styles Station Wagon
Automotive Auto Body Styles SUV
Automotive Auto Body Styles Van
Automotive Auto Body Styles Convertible
Automotive Auto Body Styles Coupe
Automotive Auto Body Styles Crossover
Automotive Auto Body Styles Hatchback
Automotive Auto Body Styles Microcar
Automotive Auto Body Styles Minivan
Automotive Auto Body Styles Off-Road Vehicles
Automotive Auto Body Styles Pickup Trucks
Automotive Auto Type
Automotive Auto Type Budget Cars
Automotive Auto Type Certified Pre-Owned Cars
Automotive Auto Type Classic Cars
Automotive Auto Type Concept Cars
Automotive Auto Type Driverless Cars
Automotive Auto Type Green Vehicles
Automotive Auto Type Luxury Cars
Automotive Auto Type Performance Cars
Automotive Car Culture
Automotive Dash Cam Videos
Automotive Motorcycles
Automotive Road-Side Assistance
Automotive Scooters
Automotive Auto Buying and Selling
Automotive Auto Insurance
Automotive Auto Parts
Automotive Auto Recalls
Automotive Auto Repair
Automotive Auto Safety
Automotive Auto Shows
Automotive Auto Technology
Automotive Auto Technology Auto Infotainment Technologies
Automotive Auto Technology Auto Navigation Systems
Automotive Auto Technology Auto Safety Technologies
Automotive Auto Rentals
Books and Literature
Books and Literature Art and Photography Books
Books and Literature Biographies
Books and Literature Children's Literature
Books and Literature Comics and Graphic Novels
Books and Literature Cookbooks
Books and Literature Fiction
Books and Literature Poetry
Books and Literature Travel Books
Books and Literature Young Adult Literature
Business and Finance
Business and Finance Business
Business and Finance Business Business Accounting & Finance
Business and Finance Business Human Resources
Business and Finance Business Large Business
Business and Finance Business Logistics
Business and Finance Business Marketing and Advertising
Business and Finance Business Sales
Business and Finance Business Small and Medium-sized Business
Business and Finance Business Startups
Business and Finance Business Business Administration
Business and Finance Business Business Banking & Finance
Business and Finance Business Business Banking & Finance Angel Investment
Business and Finance Business Business Banking & Finance Bankruptcy
Business and Finance Business Business Banking & Finance Business Loans
Business and Finance Business Business Banking & Finance Debt Factoring & Invoice Discounting
Business and Finance Business Business Banking & Finance Mergers and Acquisitions
Business and Finance Business Business Banking & Finance Private Equity
Business and Finance Business Business Banking & Finance Sale & Lease Back
Business and Finance Business Business Banking & Finance Venture Capital
Business and Finance Business Business I.T.
Business and Finance Business Business Operations
Business and Finance Business Consumer Issues
Business and Finance Business Consumer Issues Recalls
Business and Finance Business Executive Leadership & Management
Business and Finance Business Government Business
Business and Finance Business Green Solutions
Business and Finance Business Business Utilities
Business and Finance Economy
Business and Finance Economy Commodities
Business and Finance Economy Currencies
Business and Finance Economy Financial Crisis
Business and Finance Economy Financial Reform
Business and Finance Economy Financial Regulation
Business and Finance Economy Gasoline Prices
Business and Finance Economy Housing Market
Business and Finance Economy Interest Rates
Business and Finance Economy Job Market
Business and Finance Industries
Business and Finance Industries Advertising Industry
Business and Finance Industries Education industry
Business and Finance Industries Entertainment Industry
Business and Finance Industries Environmental Services Industry
Business and Finance Industries Financial Industry
Business and Finance Industries Food Industry
Business and Finance Industries Healthcare Industry
Business and Finance Industries Hospitality Industry
Business and Finance Industries Information Services Industry
Business and Finance Industries Legal Services Industry
Business and Finance Industries Logistics and Transportation Industry
Business and Finance Industries Agriculture
Business and Finance Industries Management Consulting Industry
Business and Finance Industries Manufacturing Industry
Business and Finance Industries Mechanical and Industrial Engineering Industry
Business and Finance Industries Media Industry
Business and Finance Industries Metals Industry
Business and Finance Industries Non-Profit Organizations
Business and Finance Industries Pharmaceutical Industry
Business and Finance Industries Power and Energy Industry
Business and Finance Industries Publishing Industry
Business and Finance Industries Real Estate Industry
Business and Finance Industries Apparel Industry
Business and Finance Industries Retail Industry
Business and Finance Industries Technology Industry
Business and Finance Industries Telecommunications Industry
Business and Finance Industries Automotive Industry
Business and Finance Industries Aviation Industry
Business and Finance Industries Biotech and Biomedical Industry
Business and Finance Industries Civil Engineering Industry
Business and Finance Industries Construction Industry
Business and Finance Industries Defense Industry
Careers
Careers Apprenticeships
Careers Career Advice
Careers Career Planning
Careers Job Search
Careers Job Search Job Fairs
Careers Job Search Resume Writing and Advice
Careers Remote Working
Careers Vocational Training
Education
Education Adult Education
Education Private School
Education Secondary Education
Education Special Education
Education College Education
Education College Education College Planning
Education College Education Postgraduate Education
Education College Education Postgraduate Education Professional School
Education College Education Undergraduate Education
Education Early Childhood Education
Education Educational Assessment
Education Educational Assessment Standardized Testing
Education Homeschooling
Education Homework and Study
Education Language Learning
Education Online Education
Education Primary Education
Events and Attractions
Events and Attractions Amusement and Theme Parks
Events and Attractions Fashion Events
Events and Attractions Historic Site and Landmark Tours
Events and Attractions Malls & Shopping Centers
Events and Attractions Museums & Galleries
Events and Attractions Musicals
Events and Attractions National & Civic Holidays
Events and Attractions Nightclubs
Events and Attractions Outdoor Activities
Events and Attractions Parks & Nature
Events and Attractions Party Supplies and Decorations
Events and Attractions Awards Shows
Events and Attractions Personal Celebrations & Life Events
Events and Attractions Personal Celebrations & Life Events Anniversary
Events and Attractions Personal Celebrations & Life Events Wedding
Events and Attractions Personal Celebrations & Life Events Baby Shower
Events and Attractions Personal Celebrations & Life Events Bachelor Party
Events and Attractions Personal Celebrations & Life Events Bachelorette Party
Events and Attractions Personal Celebrations & Life Events Birth
Events and Attractions Personal Celebrations & Life Events Birthday
Events and Attractions Personal Celebrations & Life Events Funeral
# Category types #

The categories that are returned by the Watson Natural Language Processing Categories block are based on the IAB Tech Lab Content Taxonomy, which provides common language categories that can be used when describing content.
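To see how the block is typically used, here is a minimal sketch of running the Categories block in a Python notebook. The stock model IDs (`syntax_izumo_en_stock`, `categories_esa_en_stock`) and the exact output shape are assumptions based on the stock model catalog; verify them for your runtime.

```python
import watson_nlp

# Load the English syntax model and the Categories block
# (model IDs are assumed stock names -- check the catalog for your runtime)
syntax_model = watson_nlp.load('syntax_izumo_en_stock')
categories_model = watson_nlp.load('categories_esa_en_stock')

text = "IBM announced new advances in quantum computing and cloud services."

# The Categories block runs on the output of syntax analysis
syntax_prediction = syntax_model.run(text)
categories = categories_model.run(syntax_prediction)

# Each prediction is a path through the taxonomy below, such as
# ['Technology & Computing', 'Computing', 'Internet', 'Cloud Computing'],
# together with a confidence score
print(categories.to_dict())
```

The labels on each returned path correspond to the LEVEL 1 through LEVEL 4 columns in the table.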
The following table lists the IAB category taxonomy that the Categories block returns.

| LEVEL 1 | LEVEL 2 | LEVEL 3 | LEVEL 4 |
| ------------------------ | ----------------------------------- | ---------------------------------------------- | ------------------------------------ |
| Automotive | | | |
| Automotive | Auto Body Styles | | |
| Automotive | Auto Body Styles | Commercial Trucks | |
| Automotive | Auto Body Styles | Sedan | |
| Automotive | Auto Body Styles | Station Wagon | |
| Automotive | Auto Body Styles | SUV | |
| Automotive | Auto Body Styles | Van | |
| Automotive | Auto Body Styles | Convertible | |
| Automotive | Auto Body Styles | Coupe | |
| Automotive | Auto Body Styles | Crossover | |
| Automotive | Auto Body Styles | Hatchback | |
| Automotive | Auto Body Styles | Microcar | |
| Automotive | Auto Body Styles | Minivan | |
| Automotive | Auto Body Styles | Off-Road Vehicles | |
| Automotive | Auto Body Styles | Pickup Trucks | |
| Automotive | Auto Type | | |
| Automotive | Auto Type | Budget Cars | |
| Automotive | Auto Type | Certified Pre-Owned Cars | |
| Automotive | Auto Type | Classic Cars | |
| Automotive | Auto Type | Concept Cars | |
| Automotive | Auto Type | Driverless Cars | |
| Automotive | Auto Type | Green Vehicles | |
| Automotive | Auto Type | Luxury Cars | |
| Automotive | Auto Type | Performance Cars | |
| Automotive | Car Culture | | |
| Automotive | Dash Cam Videos | | |
| Automotive | Motorcycles | | |
| Automotive | Road-Side Assistance | | |
| Automotive | Scooters | | |
| Automotive | Auto Buying and Selling | | |
| Automotive | Auto Insurance | | |
| Automotive | Auto Parts | | |
| Automotive | Auto Recalls | | |
| Automotive | Auto Repair | | |
| Automotive | Auto Safety | | |
| Automotive | Auto Shows | | |
| Automotive | Auto Technology | | |
| Automotive | Auto Technology | Auto Infotainment Technologies | |
| Automotive | Auto Technology | Auto Navigation Systems | |
| Automotive | Auto Technology | Auto Safety Technologies | |
| Automotive | Auto Rentals | | |
| Books and Literature | | | |
| Books and Literature | Art and Photography Books | | |
| Books and Literature | Biographies | | |
| Books and Literature | Children's Literature | | |
| Books and Literature | Comics and Graphic Novels | | |
| Books and Literature | Cookbooks | | |
| Books and Literature | Fiction | | |
| Books and Literature | Poetry | | |
| Books and Literature | Travel Books | | |
| Books and Literature | Young Adult Literature | | |
| Business and Finance | | | |
| Business and Finance | Business | | |
| Business and Finance | Business | Business Accounting & Finance | |
| Business and Finance | Business | Human Resources | |
| Business and Finance | Business | Large Business | |
| Business and Finance | Business | Logistics | |
| Business and Finance | Business | Marketing and Advertising | |
| Business and Finance | Business | Sales | |
| Business and Finance | Business | Small and Medium-sized Business | |
| Business and Finance | Business | Startups | |
| Business and Finance | Business | Business Administration | |
| Business and Finance | Business | Business Banking & Finance | |
| Business and Finance | Business | Business Banking & Finance | Angel Investment |
| Business and Finance | Business | Business Banking & Finance | Bankruptcy |
| Business and Finance | Business | Business Banking & Finance | Business Loans |
| Business and Finance | Business | Business Banking & Finance | Debt Factoring & Invoice Discounting |
| Business and Finance | Business | Business Banking & Finance | Mergers and Acquisitions |
| Business and Finance | Business | Business Banking & Finance | Private Equity |
| Business and Finance | Business | Business Banking & Finance | Sale & Lease Back |
| Business and Finance | Business | Business Banking & Finance | Venture Capital |
| Business and Finance | Business | Business I.T. | |
| Business and Finance | Business | Business Operations | |
| Business and Finance | Business | Consumer Issues | |
| Business and Finance | Business | Consumer Issues | Recalls |
| Business and Finance | Business | Executive Leadership & Management | |
| Business and Finance | Business | Government Business | |
| Business and Finance | Business | Green Solutions | |
| Business and Finance | Business | Business Utilities | |
| Business and Finance | Economy | | |
| Business and Finance | Economy | Commodities | |
| Business and Finance | Economy | Currencies | |
| Business and Finance | Economy | Financial Crisis | |
| Business and Finance | Economy | Financial Reform | |
| Business and Finance | Economy | Financial Regulation | |
| Business and Finance | Economy | Gasoline Prices | |
| Business and Finance | Economy | Housing Market | |
| Business and Finance | Economy | Interest Rates | |
| Business and Finance | Economy | Job Market | |
| Business and Finance | Industries | | |
| Business and Finance | Industries | Advertising Industry | |
| Business and Finance | Industries | Education industry | |
| Business and Finance | Industries | Entertainment Industry | |
| Business and Finance | Industries | Environmental Services Industry | |
| Business and Finance | Industries | Financial Industry | |
| Business and Finance | Industries | Food Industry | |
| Business and Finance | Industries | Healthcare Industry | |
| Business and Finance | Industries | Hospitality Industry | |
| Business and Finance | Industries | Information Services Industry | |
| Business and Finance | Industries | Legal Services Industry | |
| Business and Finance | Industries | Logistics and Transportation Industry | |
| Business and Finance | Industries | Agriculture | |
| Business and Finance | Industries | Management Consulting Industry | |
| Business and Finance | Industries | Manufacturing Industry | |
| Business and Finance | Industries | Mechanical and Industrial Engineering Industry | |
| Business and Finance | Industries | Media Industry | |
| Business and Finance | Industries | Metals Industry | |
| Business and Finance | Industries | Non-Profit Organizations | |
| Business and Finance | Industries | Pharmaceutical Industry | |
| Business and Finance | Industries | Power and Energy Industry | |
| Business and Finance | Industries | Publishing Industry | |
| Business and Finance | Industries | Real Estate Industry | |
| Business and Finance | Industries | Apparel Industry | |
| Business and Finance | Industries | Retail Industry | |
| Business and Finance | Industries | Technology Industry | |
| Business and Finance | Industries | Telecommunications Industry | |
| Business and Finance | Industries | Automotive Industry | |
| Business and Finance | Industries | Aviation Industry | |
| Business and Finance | Industries | Biotech and Biomedical Industry | |
| Business and Finance | Industries | Civil Engineering Industry | |
| Business and Finance | Industries | Construction Industry | |
| Business and Finance | Industries | Defense Industry | |
| Careers | | | |
| Careers | Apprenticeships | | |
| Careers | Career Advice | | |
| Careers | Career Planning | | |
| Careers | Job Search | | |
| Careers | Job Search | Job Fairs | |
| Careers | Job Search | Resume Writing and Advice | |
| Careers | Remote Working | | |
| Careers | Vocational Training | | |
| Education | | | |
| Education | Adult Education | | |
| Education | Private School | | |
| Education | Secondary Education | | |
| Education | Special Education | | |
| Education | College Education | | |
| Education | College Education | College Planning | |
| Education | College Education | Postgraduate Education | |
| Education | College Education | Postgraduate Education | Professional School |
| Education | College Education | Undergraduate Education | |
| Education | Early Childhood Education | | |
| Education | Educational Assessment | | |
| Education | Educational Assessment | Standardized Testing | |
| Education | Homeschooling | | |
| Education | Homework and Study | | |
| Education | Language Learning | | |
| Education | Online Education | | |
| Education | Primary Education | | |
| Events and Attractions | | | |
| Events and Attractions | Amusement and Theme Parks | | |
| Events and Attractions | Fashion Events | | |
| Events and Attractions | Historic Site and Landmark Tours | | |
| Events and Attractions | Malls & Shopping Centers | | |
| Events and Attractions | Museums & Galleries | | |
| Events and Attractions | Musicals | | |
| Events and Attractions | National & Civic Holidays | | |
| Events and Attractions | Nightclubs | | |
| Events and Attractions | Outdoor Activities | | |
| Events and Attractions | Parks & Nature | | |
| Events and Attractions | Party Supplies and Decorations | | |
| Events and Attractions | Awards Shows | | |
| Events and Attractions | Personal Celebrations & Life Events | | |
| Events and Attractions | Personal Celebrations & Life Events | Anniversary | |
| Events and Attractions | Personal Celebrations & Life Events | Wedding | |
| Events and Attractions | Personal Celebrations & Life Events | Baby Shower | |
| Events and Attractions | Personal Celebrations & Life Events | Bachelor Party | |
| Events and Attractions | Personal Celebrations & Life Events | Bachelorette Party | |
| Events and Attractions | Personal Celebrations & Life Events | Birth | |
| Events and Attractions | Personal Celebrations & Life Events | Birthday | |
| Events and Attractions | Personal Celebrations & Life Events | Funeral | |
| Events and Attractions | Personal Celebrations & Life Events | Graduation | |
| Events and Attractions | Personal Celebrations & Life Events | Prom | |
| Events and Attractions | Political Event | | |
| Events and Attractions | Religious Events | | |
| Events and Attractions | Sporting Events | | |
| Events and Attractions | Theater Venues and Events | | |
| Events and Attractions | Zoos & Aquariums | | |
| Events and Attractions | Bars & Restaurants | | |
| Events and Attractions | Business Expos & Conferences | | |
| Events and Attractions | Casinos & Gambling | | |
| Events and Attractions | Cinemas and Events | | |
| Events and Attractions | Comedy Events | | |
| Events and Attractions | Concerts & Music Events | | |
| Events and Attractions | Fan Conventions | | |
| Family and Relationships | | | |
| Family and Relationships | Bereavement | | |
| Family and Relationships | Dating | | |
| Family and Relationships | Divorce | | |
| Family and Relationships | Eldercare | | |
| Family and Relationships | Marriage and Civil Unions | | |
| Family and Relationships | Parenting | | |
| Family and Relationships | Parenting | Adoption and Fostering | |
| Family and Relationships | Parenting | Daycare and Pre-School | |
| Family and Relationships | Parenting | Internet Safety | |
| Family and Relationships | Parenting | Parenting Babies and Toddlers | |
| Family and Relationships | Parenting | Parenting Children Aged 4-11 | |
| Family and Relationships | Parenting | Parenting Teens | |
| Family and Relationships | Parenting | Special Needs Kids | |
| Family and Relationships | Single Life | | |
| Fine Art | | | |
| Fine Art | Costume | | |
| Fine Art | Dance | | |
| Fine Art | Design | | |
| Fine Art | Digital Arts | | |
| Fine Art | Fine Art Photography | | |
| Fine Art | Modern Art | | |
| Fine Art | Opera | | |
| Fine Art | Theater | | |
| Food & Drink | | | |
| Food & Drink | Alcoholic Beverages | | |
| Food & Drink | Vegan Diets | | |
| Food & Drink | Vegetarian Diets | | |
| Food & Drink | World Cuisines | | |
| Food & Drink | Barbecues and Grilling | | |
| Food & Drink | Cooking | | |
| Food & Drink | Desserts and Baking | | |
| Food & Drink | Dining Out | | |
| Food & Drink | Food Allergies | | |
| Food & Drink | Food Movements | | |
| Food & Drink | Healthy Cooking and Eating | | |
| Food & Drink | Non-Alcoholic Beverages | | |
| Healthy Living | | | |
| Healthy Living | Children's Health | | |
| Healthy Living | Fitness and Exercise | | |
| Healthy Living | Fitness and Exercise | Participant Sports | |
| Healthy Living | Fitness and Exercise | Running and Jogging | |
| Healthy Living | Men's Health | | |
| Healthy Living | Nutrition | | |
| Healthy Living | Senior Health | | |
| Healthy Living | Weight Loss | | |
| Healthy Living | Wellness | | |
| Healthy Living | Wellness | Alternative Medicine | |
| Healthy Living | Wellness | Alternative Medicine | Herbs and Supplements |
| Healthy Living | Wellness | Alternative Medicine | Holistic Health |
| Healthy Living | Wellness | Physical Therapy | |
| Healthy Living | Wellness | Smoking Cessation | |
| Healthy Living | Women's Health | | |
| Hobbies & Interests | | | |
| Hobbies & Interests | Antiquing and Antiques | | |
| Hobbies & Interests | Magic and Illusion | | |
| Hobbies & Interests | Model Toys | | |
| Hobbies & Interests | Musical Instruments | | |
| Hobbies & Interests | Paranormal Phenomena | | |
| Hobbies & Interests | Radio Control | | |
| Hobbies & Interests | Sci-fi and Fantasy | | |
| Hobbies & Interests | Workshops and Classes | | |
| Hobbies & Interests | Arts and Crafts | | |
| Hobbies & Interests | Arts and Crafts | Beadwork | |
| Hobbies & Interests | Arts and Crafts | Candle and Soap Making | |
| Hobbies & Interests | Arts and Crafts | Drawing and Sketching | |
| Hobbies & Interests | Arts and Crafts | Jewelry Making | |
| Hobbies & Interests | Arts and Crafts | Needlework | |
| Hobbies & Interests | Arts and Crafts | Painting | |
| Hobbies & Interests | Arts and Crafts | Photography | |
| Hobbies & Interests | Arts and Crafts | Scrapbooking | |
| Hobbies & Interests | Arts and Crafts | Woodworking | |
| Hobbies & Interests | Beekeeping | | |
| Hobbies & Interests | Birdwatching | | |
| Hobbies & Interests | Cigars | | |
| Hobbies & Interests | Collecting | | |
| Hobbies & Interests | Collecting | Comic Books | |
| Hobbies & Interests | Collecting | Stamps and Coins | |
| Hobbies & Interests | Content Production | | |
| Hobbies & Interests | Content Production | Audio Production | |
| Hobbies & Interests | Content Production | Freelance Writing | |
| Hobbies & Interests | Content Production | Screenwriting | |
| Hobbies & Interests | Content Production | Video Production | |
| Hobbies & Interests | Games and Puzzles | | |
| Hobbies & Interests | Games and Puzzles | Board Games and Puzzles | |
| Hobbies & Interests | Games and Puzzles | Card Games | |
| Hobbies & Interests | Games and Puzzles | Roleplaying Games | |
| Hobbies & Interests | Genealogy and Ancestry | | |
| Home & Garden | | | |
| Home & Garden | Gardening | | |
| Home & Garden | Remodeling & Construction | | |
| Home & Garden | Smart Home | | |
| Home & Garden | Home Appliances | | |
| Home & Garden | Home Entertaining | | |
| Home & Garden | Home Improvement | | |
| Home & Garden | Home Security | | |
| Home & Garden | Indoor Environmental Quality | | |
| Home & Garden | Interior Decorating | | |
| Home & Garden | Landscaping | | |
| Home & Garden | Outdoor Decorating | | |
| Medical Health | | | |
| Medical Health | Diseases and Conditions | | |
| Medical Health | Diseases and Conditions | Allergies | |
| Medical Health | Diseases and Conditions | Ear, Nose and Throat Conditions | |
| Medical Health | Diseases and Conditions | Endocrine and Metabolic Diseases | |
| Medical Health | Diseases and Conditions | Endocrine and Metabolic Diseases | Hormonal Disorders |
| Medical Health | Diseases and Conditions | Endocrine and Metabolic Diseases | Menopause |
| Medical Health | Diseases and Conditions | Endocrine and Metabolic Diseases | Thyroid Disorders |
| Medical Health | Diseases and Conditions | Eye and Vision Conditions | |
| Medical Health | Diseases and Conditions | Foot Health | |
| Medical Health | Diseases and Conditions | Heart and Cardiovascular Diseases | |
| Medical Health | Diseases and Conditions | Infectious Diseases | |
| Medical Health | Diseases and Conditions | Injuries | |
| Medical Health | Diseases and Conditions | Injuries | First Aid |
| Medical Health | Diseases and Conditions | Lung and Respiratory Health | |
| Medical Health | Diseases and Conditions | Mental Health | |
| Medical Health | Diseases and Conditions | Reproductive Health | |
| Medical Health | Diseases and Conditions | Reproductive Health | Birth Control |
| Medical Health | Diseases and Conditions | Reproductive Health | Infertility |
| Medical Health | Diseases and Conditions | Reproductive Health | Pregnancy |
| Medical Health | Diseases and Conditions | Blood Disorders | |
| Medical Health | Diseases and Conditions | Sexual Health | |
| Medical Health | Diseases and Conditions | Sexual Health | Sexual Conditions |
| Medical Health | Diseases and Conditions | Skin and Dermatology | |
| Medical Health | Diseases and Conditions | Sleep Disorders | |
| Medical Health | Diseases and Conditions | Substance Abuse | |
| Medical Health | Diseases and Conditions | Bone and Joint Conditions | |
| Medical Health | Diseases and Conditions | Brain and Nervous System Disorders | |
| Medical Health | Diseases and Conditions | Cancer | |
| Medical Health | Diseases and Conditions | Cold and Flu | |
| Medical Health | Diseases and Conditions | Dental Health | |
| Medical Health | Diseases and Conditions | Diabetes | |
| Medical Health | Diseases and Conditions | Digestive Disorders | |
| Medical Health | Medical Tests | | |
| Medical Health | Pharmaceutical Drugs | | |
| Medical Health | Surgery | | |
| Medical Health | Vaccines | | |
| Medical Health | Cosmetic Medical Services | | |
| Movies | | | |
| Movies | Action and Adventure Movies | | |
| Movies | Romance Movies | | |
| Movies | Science Fiction Movies | | |
| Movies | Indie and Arthouse Movies | | |
| Movies | Animation Movies | | |
| Movies | Comedy Movies | | |
| Movies | Crime and Mystery Movies | | |
| Movies | Documentary Movies | | |
| Movies | Drama Movies | | |
| Movies | Family and Children Movies | | |
| Movies | Fantasy Movies | | |
| Movies | Horror Movies | | |
| Movies | World Movies | | |
| Music and Audio | | | |
| Music and Audio | Adult Contemporary Music | | |
| Music and Audio | Adult Contemporary Music | Soft AC Music | |
| Music and Audio | Adult Contemporary Music | Urban AC Music | |
| Music and Audio | Adult Album Alternative | | |
| Music and Audio | Alternative Music | | |
| Music and Audio | Children's Music | | |
| Music and Audio | Classic Hits | | |
| Music and Audio | Classical Music | | |
| Music and Audio | College Radio | | |
| Music and Audio | Comedy (Music and Audio) | | |
| Music and Audio | Contemporary Hits/Pop/Top 40 | | |
| Music and Audio | Country Music | | |
| Music and Audio | Dance and Electronic Music | | |
| Music and Audio | World/International Music | | |
| Music and Audio | Songwriters/Folk | | |
| Music and Audio | Gospel Music | | |
| Music and Audio | Hip Hop Music | | |
| Music and Audio | Inspirational/New Age Music | | |
| Music and Audio | Jazz | | |
| Music and Audio | Oldies/Adult Standards | | |
| Music and Audio | Reggae | | |
| Music and Audio | Blues | | |
| Music and Audio | Religious (Music and Audio) | | |
| Music and Audio | R&B/Soul/Funk | | |
| Music and Audio | Rock Music | | |
| Music and Audio | Rock Music | Album-oriented Rock | |
| Music and Audio | Rock Music | Alternative Rock | |
| Music and Audio | Rock Music | Classic Rock | |
| Music and Audio | Rock Music | Hard Rock | |
| Music and Audio | Rock Music | Soft Rock | |
| Music and Audio | Soundtracks, TV and Showtunes | | |
| Music and Audio | Sports Radio | | |
| Music and Audio | Talk Radio | | |
| Music and Audio | Talk Radio | Business News Radio | |
| Music and Audio | Talk Radio | Educational Radio | |
| Music and Audio | Talk Radio | News Radio | |
| Music and Audio | Talk Radio | News/Talk Radio | |
| Music and Audio | Talk Radio | Public Radio | |
| Music and Audio | Urban Contemporary Music | | |
| Music and Audio | Variety (Music and Audio) | | |
| News and Politics | | | |
| News and Politics | Crime | | |
| News and Politics | Disasters | | |
| News and Politics | International News | | |
| News and Politics | Law | | |
| News and Politics | Local News | | |
| News and Politics | National News | | |
| News and Politics | Politics | | |
| News and Politics | Politics | Elections | |
| News and Politics | Politics | Political Issues | |
| News and Politics | Politics | War and Conflicts | |
| News and Politics | Weather | | |
| Personal Finance | | | |
| Personal Finance | Consumer Banking | | |
| Personal Finance | Financial Assistance | | |
| Personal Finance | Financial Assistance | Government Support and Welfare | |
| Personal Finance | Financial Assistance | Student Financial Aid | |
| Personal Finance | Financial Planning | | |
| Personal Finance | Frugal Living | | |
| Personal Finance | Insurance | | |
| Personal Finance | Insurance | Health Insurance | |
| Personal Finance | Insurance | Home Insurance | |
| Personal Finance | Insurance | Life Insurance | |
| Personal Finance | Insurance | Motor Insurance | |
| Personal Finance | Insurance | Pet Insurance | |
| Personal Finance | Insurance | Travel Insurance | |
| Personal Finance | Personal Debt | | |
| Personal Finance | Personal Debt | Credit Cards | |
| Personal Finance | Personal Debt | Home Financing | |
| Personal Finance | Personal Debt | Personal Loans | |
| Personal Finance | Personal Debt | Student Loans | |
| Personal Finance | Personal Investing | | |
| Personal Finance | Personal Investing | Hedge Funds | |
| Personal Finance | Personal Investing | Mutual Funds | |
| Personal Finance | Personal Investing | Options | |
| Personal Finance | Personal Investing | Stocks and Bonds | |
| Personal Finance | Personal Taxes | | |
| Personal Finance | Retirement Planning | | |
| Personal Finance | Home Utilities | | |
| Personal Finance | Home Utilities | Gas and Electric | |
| Personal Finance | Home Utilities | Internet Service Providers | |
| Personal Finance | Home Utilities | Phone Services | |
| Personal Finance | Home Utilities | Water Services | |
| Pets | | | |
| Pets | Birds | | |
| Pets | Cats | | |
| Pets | Dogs | | |
| Pets | Fish and Aquariums | | |
| Pets | Large Animals | | |
| Pets | Pet Adoptions | | |
| Pets | Reptiles | | |
| Pets | Veterinary Medicine | | |
| Pets | Pet Supplies | | |
| Pop Culture | | | |
| Pop Culture | Celebrity Deaths | | |
| Pop Culture | Celebrity Families | | |
| Pop Culture | Celebrity Homes | | |
| Pop Culture | Celebrity Pregnancy | | |
| Pop Culture | Celebrity Relationships | | |
| Pop Culture | Celebrity Scandal | | |
| Pop Culture | Celebrity Style | | |
| Pop Culture | Humor and Satire | | |
| Real Estate | | | |
| Real Estate | Apartments | | |
| Real Estate | Retail Property | | |
| Real Estate | Vacation Properties | | |
| Real Estate | Developmental Sites | | |
| Real Estate | Hotel Properties | | |
| Real Estate | Houses | | |
| Real Estate | Industrial Property | | |
| Real Estate | Land and Farms | | |
| Real Estate | Office Property | | |
| Real Estate | Real Estate Buying and Selling | | |
| Real Estate | Real Estate Renting and Leasing | | |
| Religion & Spirituality | | | |
| Religion & Spirituality | Agnosticism | | |
| Religion & Spirituality | Spirituality | | |
| Religion & Spirituality | Astrology | | |
| Religion & Spirituality | Atheism | | |
| Religion & Spirituality | Buddhism | | |
| Religion & Spirituality | Christianity | | |
| Religion & Spirituality | Hinduism | | |
| Religion & Spirituality | Islam | | |
| Religion & Spirituality | Judaism | | |
| Religion & Spirituality | Sikhism | | |
| Science | | | |
| Science | Biological Sciences | | |
| Science | Chemistry | | |
| Science | Environment | | |
| Science | Genetics | | |
| Science | Geography | | |
| Science | Geology | | |
| Science | Physics | | |
| Science | Space and Astronomy | | |
| Shopping | | | |
| Shopping | Coupons and Discounts | | |
| Shopping | Flower Shopping | | |
| Shopping | Gifts and Greetings Cards | | |
| Shopping | Grocery Shopping | | |
| Shopping | Holiday Shopping | | |
| Shopping | Household Supplies | | |
| Shopping | Lotteries and Scratchcards | | |
| Shopping | Sales and Promotions | | |
| Shopping | Children's Games and Toys | | |
| Sports | | | |
| Sports | American Football | | |
| Sports | Boxing | | |
| Sports | Cheerleading | | |
| Sports | College Sports | | |
| Sports | College Sports | College Football | |
| Sports | College Sports | College Basketball | |
| Sports | College Sports | College Baseball | |
| Sports | Cricket | | |
| Sports | Cycling | | |
| Sports | Darts | | |
| Sports | Disabled Sports | | |
| Sports | Diving | | |
| Sports | Equine Sports | | |
| Sports | Equine Sports | Horse Racing | |
| Sports | Extreme Sports | | |
| Sports | Extreme Sports | Canoeing and Kayaking | |
| Sports | Extreme Sports | Climbing | |
| Sports | Extreme Sports | Paintball | |
| Sports | Extreme Sports | Scuba Diving | |
| Sports | Extreme Sports | Skateboarding | |
| Sports | Extreme Sports | Snowboarding | |
| Sports | Extreme Sports | Surfing and Bodyboarding | |
| Sports | Extreme Sports | Waterskiing and Wakeboarding | |
| Sports | Australian Rules Football | | |
| Sports | Fantasy Sports | | |
| Sports | Field Hockey | | |
| Sports | Figure Skating | | |
| Sports | Fishing Sports | | |
| Sports | Golf | | |
| Sports | Gymnastics | | |
| Sports | Hunting and Shooting | | |
| Sports | Ice Hockey | | |
| Sports | Inline Skating | | |
| Sports | Lacrosse | | |
| Sports | Auto Racing | | |
| Sports | Auto Racing | Motorcycle Sports | |
| Sports | Martial Arts | | |
| Sports | Olympic Sports | | |
| Sports | Olympic Sports | Summer Olympic Sports | |
| Sports | Olympic Sports | Winter Olympic Sports | |
| Sports | Poker and Professional Gambling | | |
| Sports | Rodeo | | |
| Sports | Rowing | | |
| Sports | Rugby | | |
| Sports | Rugby | Rugby League | |
| Sports | Rugby | Rugby Union | |
| Sports | Sailing | | |
| Sports | Skiing | | |
| Sports | Snooker/Pool/Billiards | | |
| Sports | Soccer | | |
| Sports | Badminton | | |
| Sports | Softball | | |
| Sports | Squash | | |
| Sports | Swimming | | |
| Sports | Table Tennis | | |
| Sports | Tennis | | |
| Sports | Track and Field | | |
| Sports | Volleyball | | |
| Sports | Walking | | |
| Sports | Water Polo | | |
| Sports | Weightlifting | | |
| Sports | Baseball | | |
| Sports | Wrestling | | |
| Sports | Basketball | | |
| Sports | Beach Volleyball | | |
| Sports | Bodybuilding | | |
| Sports | Bowling | | |
| Sports | Sports Equipment | | |
| Style & Fashion | | | |
| Style & Fashion | Beauty | | |
| Style & Fashion | Beauty | Hair Care | |
| Style & Fashion | Beauty | Makeup and Accessories | |
| Style & Fashion | Beauty | Nail Care | |
| Style & Fashion | Beauty | Natural and Organic Beauty | |
| Style & Fashion | Beauty | Perfume and Fragrance | |
| Style & Fashion | Beauty | Skin Care | |
| Style & Fashion | Women's Fashion | | |
| Style & Fashion | Women's Fashion | Women's Accessories | |
| Style & Fashion | Women's Fashion | Women's Accessories | Women's Glasses |
| Style & Fashion | Women's Fashion | Women's Accessories | Women's Handbags and Wallets |
| Style & Fashion | Women's Fashion | Women's Accessories | Women's Hats and Scarves |
| Style & Fashion | Women's Fashion | Women's Accessories | Women's Jewelry and Watches |
| Style & Fashion | Women's Fashion | Women's Clothing | |
| Style & Fashion | Women's Fashion | Women's Clothing | Women's Business Wear |
| Style & Fashion | Women's Fashion | Women's Clothing | Women's Casual Wear |
| Style & Fashion | Women's Fashion | Women's Clothing | Women's Formal Wear |
| Style & Fashion | Women's Fashion | Women's Clothing | Women's Intimates and Sleepwear |
| Style & Fashion | Women's Fashion | Women's Clothing | Women's Outerwear |
| Style & Fashion | Women's Fashion | Women's Clothing | Women's Sportswear |
| Style & Fashion | Women's Fashion | Women's Shoes and Footwear | |
| Style & Fashion | Body Art | | |
| Style & Fashion | Children's Clothing | | |
| Style & Fashion | Designer Clothing | | |
| Style & Fashion | Fashion Trends | | |
| Style & Fashion | High Fashion | | |
| Style & Fashion | Men's Fashion | | |
| Style & Fashion | Men's Fashion | Men's Accessories | |
| Style & Fashion | Men's Fashion | Men's Accessories | Men's Jewelry and Watches |
| Style & Fashion | Men's Fashion | Men's Clothing | |
| Style & Fashion | Men's Fashion | Men's Clothing | Men's Business Wear |
| Style & Fashion | Men's Fashion | Men's Clothing | Men's Casual Wear |
| Style & Fashion | Men's Fashion | Men's Clothing | Men's Formal Wear |
| Style & Fashion | Men's Fashion | Men's Clothing | Men's Outerwear |
| Style & Fashion | Men's Fashion | Men's Clothing | Men's Sportswear |
| Style & Fashion | Men's Fashion | Men's Clothing | Men's Underwear and Sleepwear |
| Style & Fashion | Men's Fashion | Men's Shoes and Footwear | |
| Style & Fashion | Personal Care | | |
| Style & Fashion | Personal Care | Bath and Shower | |
| Style & Fashion | Personal Care | Deodorant and Antiperspirant | |
| Style & Fashion | Personal Care | Oral care | |
| Style & Fashion | Personal Care | Shaving | |
| Style & Fashion | Street Style | | |
| Technology & Computing | | | |
| Technology & Computing | Artificial Intelligence | | |
| Technology & Computing | Augmented Reality | | |
| Technology & Computing | Computing | | |
| Technology & Computing | Computing | Computer Networking | |
| Technology & Computing | Computing | Computer Peripherals | |
| Technology & Computing | Computing | Computer Software and Applications | |
| Technology & Computing | Computing | Computer Software and Applications | 3-D Graphics |
| Technology & Computing | Computing | Computer Software and Applications | Photo Editing Software |
| Technology & Computing | Computing | Computer Software and Applications | Shareware and Freeware |
| Technology & Computing | Computing | Computer Software and Applications | Video Software |
| Technology & Computing | Computing | Computer Software and Applications | Web Conferencing |
| Technology & Computing | Computing | Computer Software and Applications | Antivirus Software |
| Technology & Computing | Computing | Computer Software and Applications | Browsers |
| Technology & Computing | Computing | Computer Software and Applications | Computer Animation |
| Technology & Computing | Computing | Computer Software and Applications | Databases |
| Technology & Computing | Computing | Computer Software and Applications | Desktop Publishing |
| Technology & Computing | Computing | Computer Software and Applications | Digital Audio |
| Technology & Computing | Computing | Computer Software and Applications | Graphics Software |
| Technology & Computing | Computing | Computer Software and Applications | Operating Systems |
| Technology & Computing | Computing | Data Storage and Warehousing | |
| Technology & Computing | Computing | Desktops | |
| Technology & Computing | Computing | Information and Network Security | |
| Technology & Computing | Computing | Internet | |
| Technology & Computing | Computing | Internet | Cloud Computing |
| Technology & Computing | Computing | Internet | Web Development |
| Technology & Computing | Computing | Internet | Web Hosting |
| Technology & Computing | Computing | Internet | Email |
| Technology & Computing | Computing | Internet | Internet for Beginners |
| Technology & Computing | Computing | Internet | Internet of Things |
| Technology & Computing | Computing | Internet | IT and Internet Support |
| Technology & Computing | Computing | Internet | Search |
| Technology & Computing | Computing | Internet | Social Networking |
| Technology & Computing | Computing | Internet | Web Design and HTML |
| Technology & Computing | Computing | Laptops | |
| Technology & Computing | Computing | Programming Languages | |
| Technology & Computing | Consumer Electronics | | |
| Technology & Computing | Consumer Electronics | Cameras and Camcorders | |
| Technology & Computing | Consumer Electronics | Home Entertainment Systems | |
| Technology & Computing | Consumer Electronics | Smartphones | |
| Technology & Computing | Consumer Electronics | Tablets and E-readers | |
| Technology & Computing | Consumer Electronics | Wearable Technology | |
| Technology & Computing | Robotics | | |
| Technology & Computing | Virtual Reality | | |
| Television | | | |
| Television | Animation TV | | |
| Television | Soap Opera TV | | |
| Television | Special Interest TV | | |
| Television | Sports TV | | |
| Television | Children's TV | | |
| Television | Comedy TV | | |
| Television | Drama TV | | |
| Television | Factual TV | | |
| Television | Holiday TV | | |
| Television | Music TV | | |
| Television | Reality TV | | |
| Television | Science Fiction TV | | |
| Travel | | | |
| Travel | Travel Accessories | | |
| Travel | Travel Locations | | |
| Travel | Travel Locations | Africa Travel | |
| Travel | Travel Locations | Asia Travel | |
| Travel | Travel Locations | Australia and Oceania Travel | |
| Travel | Travel Locations | Europe Travel | |
| Travel | Travel Locations | North America Travel | |
| Travel | Travel Locations | Polar Travel | |
| Travel | Travel Locations | South America Travel | |
| Travel | Travel Preparation and Advice | | |
| Travel | Travel Type | | |
| Travel | Travel Type | Adventure Travel | |
| Travel | Travel Type | Family Travel | |
| Travel | Travel Type | Honeymoons and Getaways | |
| Travel | Travel Type | Hotels and Motels | |
| Travel | Travel Type | Rail Travel | |
| Travel | Travel Type | Road Trips | |
| Travel | Travel Type | Spas | |
| Travel | Travel Type | Air Travel | |
| Travel | Travel Type | Beach Travel | |
| Travel | Travel Type | Bed & Breakfasts | |
| Travel | Travel Type | Budget Travel | |
| Travel | Travel Type | Business Travel | |
| Travel | Travel Type | Camping | |
| Travel | Travel Type | Cruises | |
| Travel | Travel Type | Day Trips | |
| Video Gaming | | | |
| Video Gaming | Console Games | | |
| Video Gaming | eSports | | |
| Video Gaming | Mobile Games | | |
| Video Gaming | PC Games | | |
| Video Gaming | Video Game Genres | | |
| Video Gaming | Video Game Genres | Action Video Games | |
| Video Gaming | Video Game Genres | Role-Playing Video Games | |
| Video Gaming | Video Game Genres | Simulation Video Games | |
| Video Gaming | Video Game Genres | Sports Video Games | |
| Video Gaming | Video Game Genres | Strategy Video Games | |
| Video Gaming | Video Game Genres | Action-Adventure Video Games | |
| Video Gaming | Video Game Genres | Adventure Video Games | |
| Video Gaming | Video Game Genres | Casual Games | |
| Video Gaming | Video Game Genres | Educational Video Games | |
| Video Gaming | Video Game Genres | Exercise and Fitness Video Games | |
| Video Gaming | Video Game Genres | MMOs | |
| Video Gaming | Video Game Genres | Music and Party Video Games | |
| Video Gaming | Video Game Genres | Puzzle Video Games | |
# Watson Natural Language Processing library usage samples #

The sample notebooks demonstrate how to use the different Watson Natural Language Processing blocks and how to train your own models.

## Sample project and notebooks ##

To help you get started with the Watson Natural Language Processing library, you can download a sample project and notebooks from the Samples.

You can access the Samples by selecting **Samples** from the Cloud Pak for Data navigation menu.
**Sample notebooks**
<!-- <ul> -->
* [Financial complaint analysis](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/39047aede50128e7cbc8ea19660fe1f6)
This notebook shows you how to analyze financial customer complaints using Watson Natural Language Processing. It uses data from the Consumer Complaint Database published by the Consumer Financial Protection Bureau (CFPB). The notebook teaches you to use the Tone classification and Emotion classification models.
* [Car complaint analysis](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/4b8aa2c1ee67a6cd1172a1cf760f65f7)
This notebook demonstrates how to analyze car complaints using Watson Natural Language Processing. It uses publicly available complaint records from car owners stored by the National Highway and Transit Association (NHTSA) of the US Department of Transportation. This notebook shows you how use syntax analysis to extract the most frequently used nouns, which typically depict the problems that review authors talk about and combine these results with structured data using association rule mining.
* [Complaint classification with Watson Natural Language Processing](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/636001e59902133a4a23fd89f011c232)
This notebook demonstrates how to train different text classifiers using Watson Natural Language Processing. The classifiers predict the product group from the text of a customer complaint. This could be used, for example to route a complaint to the appropriate staff member. The data that is used in this notebook is taken from the Consumer Complaint Database that is published by the Consumer Financial Protection Bureau (CFPB), a U.S. government agency and is publicly available. You will learn how to train a custom CNN model and a VotingEnsemble model and evaluate their quality.
* [Entity extraction on Financial Complaints with Watson Natural Language Processing](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/636001e59902133a4a23fd89f0112100)
This notebook demonstrates how to extract named entities from financial customer complaints using Watson Natural Language Processing. It uses data from the Consumer Complaint Database published by the Consumer Financial Protection Bureau (CFPB). In the notebook you will learn how to do dictionary-based term extraction to train a custom extraction model based on given dictionaries and extract entities using the BERT or a transformer model.
<!-- </ul> -->
**Sample project**
If you don't want to download the sample notebooks to your project individually, you can download the entire sample project [Text Analysis with Watson Natural Language Processing](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/636001e59902133a4a23fd89f010e4cb) from the IBM watsonx Gallery\.
The sample project contains the sample notebooks listed in the previous section, including:
<!-- <ul> -->
* Analyzing hotel reviews using Watson Natural Language Processing
This notebook shows you how to use syntax analysis to extract the most frequently used nouns from the hotel reviews, classify the sentiment of the reviews and use targets sentiment analysis. The data file that is used by this notebook is included in the project as a data asset.
<!-- </ul> -->
You can run all of the sample notebooks with the `NLP + DO Runtime 23.1 on Python 3.10 XS` environment except for the *Analyzing hotel reviews using Watson Natural Language Processing* notebook\. To run this notebook, you need to create an environment template that is large enough to load the CPU\-optimized models for sentiment and targets sentiment analysis\.
**Parent topic:**[Watson Natural Language Processing library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp.html)
<!-- </article "role="article" "> -->
|
Extracting targets sentiment with a custom transformer model
You can train your own models for targets sentiment extraction based on the Slate IBM Foundation model. This pretrained model can be fine-tuned for your use case by training it on your specific input data.
The Slate IBM Foundation model is available only in Runtime 23.1.
Note: Training transformer models is CPU and memory intensive. Depending on the size of your training data, the environment might not be large enough to complete the training. If you run into issues with the notebook kernel during training, create a custom notebook environment with a larger amount of CPU and memory, and use that to run your notebook. Use a GPU-based environment for training, and also at inference time, if one is available to you. See [Creating your own environment template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-customize-env-definition.html).
* [Input data format for training](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-target-sentiment.html?context=cdpaas&locale=en#input)
* [Loading the pretrained model resources](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-target-sentiment.html?context=cdpaas&locale=en#load)
* [Training the model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-target-sentiment.html?context=cdpaas&locale=en#train)
* [Applying the model on new data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-target-sentiment.html?context=cdpaas&locale=en#apply)
* [Storing and loading the model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-target-sentiment.html?context=cdpaas&locale=en#store)
Input data format for training
You must provide a training and development data set to the training function. The development data is usually around 10% of the training data. Each training or development sample is represented as a JSON object. It must have a text and a target_mentions field. The text represents the training example text, and the target_mentions field is an array, which contains an entry for each target mention with its text, location, and sentiment.
Consider using Watson Knowledge Studio to enable your domain subject matter experts to easily annotate text and create training data.
The following is an example of an array with sample training data:
[
{
"text": "Those waiters stare at you your entire meal, just waiting for you to put your fork down and they snatch the plate away in a second.",
"target_mentions":
{
"text": "waiters",
"location": {
"begin": 6,
"end": 13
},
"sentiment": "negative"
}
]
}
]
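If you assemble the training data in memory, you can write it to the JSON files that are referenced in the sample code below. A minimal sketch:
import json
# A single training sample, structured like the example above
train_samples = [
    {
        "text": "Those waiters stare at you your entire meal, just waiting for you to put your fork down and they snatch the plate away in a second.",
        "target_mentions": [
            {
                "text": "waiters",
                "location": {"begin": 6, "end": 13},
                "sentiment": "negative"
            }
        ]
    }
]
# Write the array to the file name that is used in the sample code below
with open('train_data.json', 'w') as f:
    json.dump(train_samples, f)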
The training and development data sets are created as data streams from arrays of JSON objects. To create the data streams, you may use the utility method read_json_to_stream. It requires the syntax analysis model for the language of your input data.
Sample code:
import watson_nlp
from watson_nlp.toolkit.targeted_sentiment.training_data_reader import read_json_to_stream
training_data_file = 'train_data.json'
dev_data_file = 'dev_data.json'
# Load the syntax analysis model for the language of your input data
syntax_model = watson_nlp.load('syntax_izumo_en_stock')
# Prepare train and dev data streams
train_stream = read_json_to_stream(json_path=training_data_file, syntax_model=syntax_model)
dev_stream = read_json_to_stream(json_path=dev_data_file, syntax_model=syntax_model)
Loading the pretrained model resources
The pretrained Slate IBM Foundation model needs to be loaded before passing it to the training algorithm.
To load the model:
# Load the pretrained Slate IBM Foundation model
pretrained_model_resource = watson_nlp.load('pretrained-model_slate.153m.distilled_many_transformer_multilingual_uncased')
Training the model
For all options that are available for configuring sentiment transformer training, enter:
help(watson_nlp.blocks.targeted_sentiment.SequenceTransformerTSA.train)
The train method will create a new targets sentiment block model.
The following is a sample call that uses the input data and pretrained model from the previous sections:
# Train the model
custom_tsa_model = watson_nlp.blocks.targeted_sentiment.SequenceTransformerTSA.train(
train_stream,
dev_stream,
pretrained_model_resource,
num_train_epochs=5
)
Applying the model on new data
After you train the model on a data set, apply the model on new data by using the run() method, as you would with any of the existing pre-trained blocks. Because the created custom model is a block model, you need to run syntax analysis on the input text and pass the results to the run() method.
Sample code:
input_text = 'new input text'
# Run syntax analysis first
syntax_model = watson_nlp.load('syntax_izumo_en_stock')
syntax_analysis = syntax_model.run(input_text, parsers=('token',))
# Apply the new model on top of the syntax predictions
tsa_predictions = custom_tsa_model.run(syntax_analysis)
Storing and loading the model
The custom targets sentiment model can be stored as any other model as described in "Loading and storing models", using ibm_watson_studio_lib.
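Here is a minimal sketch of storing the trained model as a project asset. It assumes that the trained block exposes a save(path) method, as other watson_nlp models do, and it uses the asset name that the loading steps below expect:
import os
import zipfile
# Save the trained model to a local folder (assumes the block's save() method)
custom_tsa_model.save('custom_TSA')
# Zip the folder so that it can be stored as a single file
with zipfile.ZipFile('custom_TSA_model_file', 'w') as zf:
    for root, dirs, files in os.walk('custom_TSA'):
        for name in files:
            full_path = os.path.join(root, name)
            zf.write(full_path, arcname=os.path.relpath(full_path, 'custom_TSA'))
# Upload the zip as the data asset 'custom_TSA_model' that is used in the loading steps below
wslib.upload_file('custom_TSA_model_file', asset_name='custom_TSA_model')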
To load the custom targets sentiment model, additional steps are required:
1. Ensure that you have an access token on the Access control page on the Manage tab of your project. Only project admins can create access tokens. The access token can have Viewer or Editor access permissions. Only editors can inject the token into a notebook.
2. Add the project token to the notebook by clicking More > Insert project token from the notebook action bar. Then run the cell.
By running the inserted hidden code cell, a wslib object is created that you can use for functions in the ibm-watson-studio-lib library. For information on the available ibm-watson-studio-lib functions, see [Using ibm-watson-studio-lib for Python](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html).
3. Download and extract the model to your local runtime environment:
import zipfile
model_zip = 'custom_TSA_model_file'
model_folder = 'custom_TSA'
wslib.download_file('custom_TSA_model', file_name=model_zip)
with zipfile.ZipFile(model_zip, 'r') as zip_ref:
zip_ref.extractall(model_folder)
4. Load the model from the extracted folder:
custom_TSA_model = watson_nlp.load(model_folder)
Parent topic:[Creating your own models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-create-model_cloud.html)
Watson Natural Language Processing library
The Watson Natural Language Processing library provides natural language processing functions for syntax analysis and pre-trained models for a wide variety of text processing tasks, such as sentiment analysis, keyword extraction, and classification. The Watson Natural Language Processing library is available for Python only.
With Watson Natural Language Processing, you can turn unstructured data into structured data, making the data easier to understand and transferable, in particular if you are working with a mix of unstructured and structured data. Examples of such data are call center records, customer complaints, social media posts, or problem reports. The unstructured data is often part of a larger data record that includes columns with structured data. Extracting meaning and structure from the unstructured data and combining this information with the data in the columns of structured data:
* Gives you a deeper understanding of the input data
* Can help you to make better decisions.
Watson Natural Language Processing provides pre-trained models in over 20 languages. They are curated by a dedicated team of experts, and evaluated for quality on each specific language. These pre-trained models can be used in production environments without you having to worry about license or intellectual property infringements.
Although you can create your own models, the easiest way to get started with Watson Natural Language Processing is to run the pre-trained models on unstructured text to perform language processing tasks.
Some examples of language processing tasks available in Watson Natural Language Processing pre-trained models:
* Language detection: detect the language of the input text
* Syntax: tokenization, lemmatization, part of speech tagging, and dependency parsing
* Entity extraction: find mentions of entities (like person, organization, or date)
* Noun phrase extraction: extract noun phrases from the input text
* Text classification: analyze text and then assign a set of pre-defined tags or categories based on its content
* Sentiment classification: is the input document positive, negative or neutral?
* Tone classification: classify the tone in the input document (like excited, frustrated, or sad)
* Emotion classification: classify the emotion of the input document (like anger or disgust)
* Keywords extraction: extract noun phrases that are relevant in the input text
* Concepts: find concepts from DBPedia in the input text
* Relations: detect relations between two entities
* Hierarchical categories: assign individual nodes within a hierarchical taxonomy to the input document
* Embeddings: map individual words or larger text snippets into a vector space
Watson Natural Language Processing encapsulates natural language functionality through blocks and workflows. Blocks and workflows support functions to load, run, train, and save a model.
For more information, refer to [Working with pre-trained models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-pretrained.html).
Some examples of how you can use the Watson Natural Language Processing library:
Running syntax analysis on a text snippet:
import watson_nlp
# Load the syntax model for English
syntax_model = watson_nlp.load('syntax_izumo_en_stock')
# Run the syntax model and print the result
syntax_prediction = syntax_model.run('Welcome to IBM!')
print(syntax_prediction)
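To work with the result programmatically, you can convert it to a plain Python dictionary. This is a small sketch that assumes the prediction object supports to_dict() and the usual tokens structure of syntax output:
# Convert the prediction to a dictionary and print each token with its part of speech
result = syntax_prediction.to_dict()
for token in result['tokens']:
    print(token['span']['text'], token['part_of_speech'])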
Extracting entities from a text snippet:
import watson_nlp
entities_workflow = watson_nlp.load('entity-mentions_transformer-workflow_multilingual_slate.153m.distilled')
entities = entities_workflow.run('IBM\'s CEO Arvind Krishna is based in the US', language_code="en")
print(entities.get_mention_pairs())
For examples of how to use the Watson Natural Language Processing library, refer to [Watson Natural Language Processing library usage samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-samples.html).
Using Watson Natural Language Processing in a notebook
You can run your Python notebooks that use the Watson Natural Language Processing library in any of the environments that are listed here. The GPU environment templates include the Watson Natural Language Processing library.
DO + NLP: Indicates that the environment template includes both the CPLEX and the DOcplex libraries to model and solve decision optimization problems and the Watson Natural Language Processing library.
~ : Indicates that the environment template requires the Watson Studio Professional plan. See [Offering plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/ws-plans.html).
Environment templates that include the Watson Natural Language Processing library
Name Hardware configuration CUH rate per hour
NLP + DO Runtime 23.1 on Python 3.10 XS 2 vCPU and 8 GB RAM 6
DO + NLP Runtime 22.2 on Python 3.10 XS 2 vCPU and 8 GB RAM 6
GPU V100 Runtime 23.1 on Python 3.10 ~ 40 vCPU + 172 GB + 1 NVIDIA® V100 (1 GPU) 68
GPU 2xV100 Runtime 23.1 on Python 3.10 ~ 80 vCPU + 344 GB + 2 NVIDIA® V100 (2 GPU) 136
GPU V100 Runtime 22.2 on Python 3.10 ~ 40 vCPU + 172 GB + 1 NVIDIA® V100 (1 GPU) 68
GPU 2xV100 Runtime 22.2 on Python 3.10 ~ 80 vCPU + 344 GB + 2 NVIDIA® V100 (2 GPU) 136
Normally these environments are sufficient to run notebooks that use prebuilt models. If you need a larger environment, for example to train your own models, you can create a custom template that includes the Watson Natural Language Processing library. Refer to [Creating your own environment template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-customize-env-definition.html).
* Create a custom template without GPU by selecting the engine type Default, the hardware configuration size that you need, and choosing NLP + DO Runtime 23.1 on Python 3.10 or DO + NLP Runtime 22.2 on Python 3.10 as the software version.
* Create a custom template with GPU by selecting the engine type GPU, the hardware configuration size that you need, and choosing GPU Runtime 23.1 on Python 3.10 or GPU Runtime 22.2 on Python 3.10 as the software version.
Learn more
* [Creating your own environment template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-customize-env-definition.html)
Parent topic:[Notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html)
ibm-watson-studio-lib for Python
The ibm-watson-studio-lib library for Python provides access to assets. It can be used in notebooks that are created in the notebook editor. ibm-watson-studio-lib provides support for working with data assets and connections, as well as browsing functionality for all other asset types.
There are two kinds of data assets:
* Stored data assets refer to files in the storage associated with the current project. The library can load and save these files. For data larger than one megabyte, this is not recommended. The library requires that the data is kept in memory in its entirety, which might be inefficient when processing huge data sets.
* Connected data assets represent data that must be accessed through a connection. Using the library, you can retrieve the properties (metadata) of the connected data asset and its connection. The functions do not return the data of a connected data asset. You can either use the code that is generated for you when you click Read data on the Code snippets pane to access the data or you must write your own code.
Note: The ibm-watson-studio-lib functions do not encode or decode data when saving data to or getting data from a file. Additionally, the ibm-watson-studio-lib functions can't be used to access connected folder assets (files on a path to the project storage).
Setting up the ibm-watson-studio-lib library
The ibm-watson-studio-lib library for Python is pre-installed and can be imported directly in a notebook in the notebook editor. To use the ibm-watson-studio-lib library in your notebook, you need the ID of the project and the project token.
To insert the project token to your notebook:
1. Click the More icon on your notebook toolbar and then click Insert project token.
If a project token exists, a cell is added to your notebook with the following information:
from ibm_watson_studio_lib import access_project_or_space
wslib = access_project_or_space({"token":"<ProjectToken>"})
<ProjectToken> is the value of the project token.
If you are told in a message that no project token exists, click the link in the message to be redirected to the project's Access Control page where you can create a project token. You must be eligible to create a project token. For details, see [Manually adding the project token](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/token.html).
To create a project token:
1. From the Manage tab, select the Access Control page, and click New access token under Access tokens.
2. Enter a name, select Editor role for the project, and create a token.
3. Go back to your notebook, click the More icon on the notebook toolbar and then click Insert project token.
Helper functions
You can get information about the supported functions in the ibm-watson-studio-lib library programmatically by using help(wslib), or for an individual function by using help(wslib.<function_name>), for example help(wslib.get_connection).
You can use the helper function wslib.show(...) for formatted printing of Python dictionaries and lists of dictionaries, which are the common result output type of the ibm-watson-studio-lib functions.
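For example:
# Pretty-print the list of stored data assets
assets = wslib.list_stored_data()
wslib.show(assets)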
The ibm-watson-studio-lib functions
The ibm-watson-studio-lib library exposes a set of functions that are grouped in the following way:
* [Get project information](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html?context=cdpaas&locale=en#get-infos)
* [Get authentication token](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html?context=cdpaas&locale=en#get-auth-token)
* [Fetch data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html?context=cdpaas&locale=en#fetch-data)
* [Save data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html?context=cdpaas&locale=en#save-data)
* [Get connection information](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html?context=cdpaas&locale=en#get-conn-info)
* [Get connected data information](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html?context=cdpaas&locale=en#get-conn-data-info)
* [Access assets by ID instead of name](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html?context=cdpaas&locale=en#access-by-id)
* [Access project storage directly](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html?context=cdpaas&locale=en#direct-proj-storage)
* [Spark support](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html?context=cdpaas&locale=en#spark-support)
* [Browse project assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html?context=cdpaas&locale=en#browse-assets)
Get project information
While developing code, you might not know the exact names of data assets or connections. The following functions provide lists of assets, from which you can pick the relevant ones. In all examples, you can use wslib.show(assets) to pretty-print the list. The index of each item is printed in front of the item.
* list_connections()
This function returns a list of the connections. The list of returned connections is not sorted by any criterion and can change when you call the function again. You can pass a dictionary item instead of a name to the get_connection function.
For example:
# Import the lib
from ibm_watson_studio_lib import access_project_or_space
wslib = access_project_or_space({"token":"<ProjectToken>"})
assets = wslib.list_connections()
wslib.show(assets)
connprops = wslib.get_connection(assets[0])
wslib.show(connprops)
* list_connected_data()
This function returns the connected data assets. The list of returned connected data assets is not sorted by any criterion and can change when you call the function again. You can pass a dictionary item instead of a name to the get_connected_data function.
* list_stored_data()
This function returns a list of the stored data assets (data files). The list of returned data assets is not sorted by any criterion and can change when you call the function again. You can pass a dictionary item instead of a name to the load_data and save_data functions.
Note: A heuristic is applied to distinguish between connected data assets and stored data assets. However, there may be cases where a data asset of the wrong kind appears in the returned lists.
* wslib.here
By using this entry point, you can retrieve metadata about the project that the lib is working with. The entry point wslib.here provides the following functions:
* get_name()
This function returns the name of the project.
* get_description()
This function returns the description of the project.
* get_ID()
This function returns the ID of the project.
* get_storage()
This function returns storage information for the project.
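For example:
# Print basic metadata about the current project
print(wslib.here.get_name())
print(wslib.here.get_description())
print(wslib.here.get_ID())
wslib.show(wslib.here.get_storage())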
Get authentication token
Some tasks require an authentication token. For example, if you want to run your own requests against the [Watson Data API](https://cloud.ibm.com/apidocs/watson-data-api-cpd), you need an authentication token.
You can use the following function to get the bearer token:
* get_current_token()
For example:
from ibm_watson_studio_lib import access_project_or_space
wslib = access_project_or_space({"token":"<ProjectToken>"})
token = wslib.auth.get_current_token()
This function returns the bearer token that is currently used by the ibm-watson-studio-lib library.
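As an illustration, you can pass the token as a bearer token in the Authorization header of your own HTTP requests. The endpoint in this sketch is a placeholder, not an actual Watson Data API path:
import requests
token = wslib.auth.get_current_token()
headers = {"Authorization": "Bearer " + token}
# Placeholder URL; see the Watson Data API reference for real endpoints
response = requests.get("https://api.dataplatform.cloud.ibm.com/v2/<endpoint>", headers=headers)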
Fetch data
You can use the following functions to fetch data from a stored data asset (a file) in your project.
* load_data(asset_name_or_item, attachment_type_or_item=None)
This function loads the data of a stored data asset into a BytesIO buffer. The function is not recommended for very large files.
The function takes the following parameters:
* asset_name_or_item: (Required) Either a string with the name of a stored data asset or an item like those returned by list_stored_data().
* attachment_type_or_item: (Optional) Attachment type to load. A data asset can have more than one attachment with data. Without this parameter, the default attachment type, namely data_asset, is loaded. Specify this parameter if the attachment type is not data_asset. For example, if a plain text data asset has an attached profile from Natural Language Analysis, this can be loaded as attachment type data_profile_nlu.
Here is an example that shows you how to load the data of a data asset:
# Import the lib
from ibm_watson_studio_lib import access_project_or_space
wslib = access_project_or_space({"token":"<ProjectToken>"})
# Fetch the data from a file
my_file = wslib.load_data("MyFile.csv")
# Read the CSV data file into a pandas DataFrame
my_file.seek(0)
import pandas as pd
pd.read_csv(my_file, nrows=10)
* download_file(asset_name_or_item, file_name=None, attachment_type_or_item=None)
This function downloads the data of a stored data asset and stores it in the specified file in the file system of your runtime. The file is overwritten if it already exists.
The function takes the following parameters:
* asset_name_or_item: (Required) Either a string with the name of a stored data asset or an item like those returned by list_stored_data().
* file_name: (Optional) The name of the file that the downloaded data is stored to. It defaults to the asset's attachment name.
* attachment_type_or_item: (Optional) The attachment type to download. A data asset can have more than one attachment with data. Without this parameter, the default attachment type, namely data_asset, is downloaded. Specify this parameter if the attachment type is not data_asset. For example, if a plain text data asset has an attached profile from Natural Language Analysis, this can be downloaded as attachment type data_profile_nlu.
Here is an example that shows you how to you can use download_file to make your custom Python script available in your notebook:
# Import the lib
from ibm_watson_studio_lib import access_project_or_space
wslib = access_project_or_space({"token":"<ProjectToken>"})
# Let's assume you have a Python script "helpers.py" with helper functions on your local machine.
# Upload the script to your project using the Data Panel on the right of the opened notebook.
# Download the script to the file system of your runtime
wslib.download_file("helpers.py")
# Import the required functions to use them in your notebook
from helpers import my_func
my_func()
Save data
The functions to save data in your project storage do multiple things:
* Store the data in project storage
* Add the data as a data asset (by creating an asset or overwriting an existing asset) to your project so you can see the data in the data assets list in your project.
* Associate the asset with the file in the storage.
You can use the following functions to save data:
* save_data(asset_name_or_item, data, overwrite=None, mime_type=None, file_name=None)
This function saves data in memory to the project storage.
The function takes the following parameters:
* asset_name_or_item: (Required) The name of the created asset or a list item that is returned by list_stored_data(). You can use the item if you want to overwrite an existing file.
* data: (Required) The data to upload. This can be any object of type bytes-like-object, for example a byte buffer.
* overwrite: (Optional) Overwrites the data of a stored data asset if it already exists. By default, this is set to false. If an asset item is passed instead of a name, the behavior is to overwrite the asset.
* mime_type: (Optional) The MIME type for the created asset. By default the MIME type is determined from the asset name suffix. If you use asset names without a suffix, specify the MIME type here. For example, mime_type='application/text' for plain text data. This parameter is ignored when overwriting an asset.
* file_name: (Optional) The file name to be used in the project storage. The data is saved in the storage associated with the project. When creating a new asset, the file name is derived from the asset name, but might be different. If you want to access the file directly, you can specify a file name. This parameter is ignored when overwriting an asset.
Here is an example that shows you how to save data to a file:
# Import the lib
from ibm_watson_studio_lib import access_project_or_space
wslib = access_project_or_space({"token":"<ProjectToken>"})
# Let's assume you have the pandas DataFrame pandas_df which contains the data
# you want to save as a csv file
wslib.save_data("my_asset_name.csv", pandas_df.to_csv(index=False).encode())
# The function returns a dict which contains the asset_name, asset_id, file_name and additional information upon successful saving of the data
* upload_file(file_path, asset_name=None, file_name=None, overwrite=False, mime_type=None)
This function saves a file in the file system of the runtime to a file associated with your project.
The function takes the following parameters:
* file_path: (Required) The path to the file in the file system.
* asset_name: (Optional) The name of the data asset that is created. It defaults to the name of the file to be uploaded.
* file_name: (Optional) The name of the file that is created in the storage associated with the project. It defaults to the name of the file to be uploaded.
* overwrite: (Optional) Overwrites an existing file in storage. Defaults to false.
* mime_type: (Optional) The MIME type for the created asset. By default the MIME type is determined from the asset name suffix. If you use asset names without a suffix, specify the MIME type here. For example mime_type='application/text' for plain text data. This parameter is ignored when overwriting an asset.
Here is an example that shows you how you can upload a file to the project:
# Import the lib
from ibm_watson_studio_lib import access_project_or_space
wslib = access_project_or_space({"token":"<ProjectToken>"})
# Let's assume you have downloaded a file and want to save it
# in your project.
import urllib.request
urllib.request.urlretrieve("https://some/url/data_file.csv", "data_file.csv")
wslib.upload_file("data_file.csv")
The function returns a dictionary which contains the asset_name, asset_id, file_name and additional information upon successful saving of the data.
Get connection information
You can use the following function to access the connection metadata of a given connection.
* get_connection(name_or_item)
This function returns the properties (metadata) of a connection which you can use to fetch data from the connection data source. Use wslib.show(connprops) to view the properties. The special key "." in the returned dictionary provides information about the connection asset.
The function takes the following required parameter:
* name_or_item: Either a string with the name of a connection or an item like those returned by list_connections().
Note that when you work with notebooks, you can click Read data on the Code snippets pane to generate code to load data from a connection into a pandas DataFrame for example.
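Here is a short sketch; the connection name is hypothetical:
# Look up the properties of a connection named "MyConnection"
connprops = wslib.get_connection("MyConnection")
wslib.show(connprops)
# Individual properties, such as host or database, can then be passed to the
# client library of that data source in your own code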
Get connected data information
You can use the following function to access the metadata of a connected data asset.
* get_connected_data(name_or_item)
This function returns the properties of a connected data asset, including the properties of the underlying connection. Use wslib.show() to view the properties. The special key "." in the returned dictionary provides information about the data and the connection assets.
The function takes the following required parameter:
* name_or_item: Either a string with the name of a connected data asset or an item like those returned by list_connected_data().
Note that when you work with notebooks, you can click Read data on the Code snippets pane to generate code to load data from a connected data asset into a pandas DataFrame for example.
Access assets by ID instead of name
You should preferably always access data assets and connections by a unique name. Asset names are not necessarily always unique and the ibm-watson-studio-lib functions will raise an exception when a name is ambiguous. You can rename data assets in the UI to resolve the conflict.
Accessing assets by a unique ID is possible but is discouraged as IDs are valid only in the current project and will break code when transferred to a different project. This can happen for example, when projects are exported and re-imported. You can get the ID of a connection, connected or stored data asset by using the corresponding list function, for example list_connections().
The entry point wslib.by_id provides the following functions:
* get_connection(asset_id)
This function accesses a connection by the connection asset ID.
* get_connected_data(asset_id)
This function accesses a connected data asset by the connected data asset ID.
* load_data(asset_id, attachment_type_or_item=None)
This function loads the data of a stored data asset by passing the asset ID. See [load_data()](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html?context=cdpaas&locale=en#fetch-data) for a description of the other parameters you can pass.
* save_data(asset_id, data, overwrite=None, mime_type=None, file_name=None)
This function saves data to a stored data asset by passing the asset ID. This implies overwrite=True. See [save_data()](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html?context=cdpaas&locale=en#save-data) for a description of the other parameters you can pass.
* download_file(asset_id, file_name=None, attachment_type_or_item=None)
This function downloads the data of a stored data asset by passing the asset ID. See [download_file()](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html?context=cdpaas&locale=en#fetch-data) for a description of the other parameters you can pass.
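Here is a sketch that looks up an ID with a list function and then accesses the asset by that ID. It assumes that the list items carry the ID under the key asset_id:
# Fetch the ID of the first connection and access the connection by ID
connections = wslib.list_connections()
asset_id = connections[0]['asset_id']  # key name assumed
connprops = wslib.by_id.get_connection(asset_id)
wslib.show(connprops)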
Access project storage directly
You can fetch data from project storage and store data in project storage without synchronizing the project assets, by using the entry point wslib.storage.
The entry point wslib.storage provides the following functions:
* fetch_data(filename)
This function returns the data in a file as a BytesIO buffer. The file does not need to be registered as a data asset.
The function takes the following required parameter:
* filename: The name of the file in the project storage.
* store_data(filename, data, overwrite=False)
This function saves data in memory to storage, but does not create a new data asset. The function returns a dictionary which contains the file name, file path and additional information. Use wslib.show() to print the information.
The function takes the following parameters:
* filename: (Required) The name of the file in the project storage.
* data: (Required) The data to save as a bytes-like object.
* overwrite: (Optional) Overwrites the data of a file in storage if it already exists. By default, this is set to false.
* download_file(storage_filename, local_filename=None)
This function downloads the data in a file in storage and stores it in the specified local file. The local file is overwritten if it already exists.
The function takes the following parameters:
* storage_filename: (Required) The name of the file in storage to download.
* local_filename: (Optional) The name of the file in the local file system of your runtime to download the file to. Omit this parameter to use the storage file name.
* register_asset(storage_path, asset_name=None, mime_type=None)
This function registers the file in storage as a data asset in your project. This operation fails if a data asset with the same name already exists.
You can use this function if you have very large files that you cannot upload via save_data(). You can upload large files directly to the IBM Cloud Object Storage bucket of your project, for example via the UI, and then register them as data assets using register_asset().
The function takes the following parameters:
* storage_path: (Required) The path of the file in storage.
* asset_name: (Optional) The name of the created asset. It defaults to the file name.
* mime_type: (Optional) The MIME type for the created asset. By default the MIME type is determined from the asset name suffix. Use this parameter to specify a MIME type if your file name does not have a file extension or if you want to set a different MIME type.
Note: You can register a file several times as a different data asset. Deleting one of those assets in the project also deletes the file in storage, which means that other asset references to the file might be broken.
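Here is a small sketch of writing and reading a scratch file directly in project storage; the file and asset names are hypothetical:
# Store intermediate data in project storage without creating a data asset
wslib.storage.store_data("intermediate_results.txt", b"some bytes", overwrite=True)
# Read the file back into a BytesIO buffer
buffer = wslib.storage.fetch_data("intermediate_results.txt")
print(buffer.read().decode())
# Register the file as a data asset if it should appear in the project
wslib.storage.register_asset("intermediate_results.txt", asset_name="intermediate_results.txt")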
Spark support
The entry point wslib.spark provides functions to access files in storage with Spark. To get help information about the available functions, use help(wslib.spark.API).
The entry point wslib.spark provides the following functions:
* provide_spark_context(sc)
Use this function to enable Spark support.
The function takes the following required parameter:
* sc: The SparkContext. It is provided in the notebook runtime.
The following example shows you how to set up Spark support:
from ibm_watson_studio_lib import access_project_or_space
wslib = access_project_or_space({"token":"<ProjectToken>"})
wslib.spark.provide_spark_context(sc)
* get_data_url(asset_name)
This function returns a URL to access a file in storage from Spark via Hadoop.
The function takes the following required parameter:
* asset_name: The name of the asset.
* storage.get_data_url(file_name)
This function returns a URL to access a file in storage from Spark via Hadoop. The function expects the file name and not the asset name.
The function takes the following required parameter:
* file_name: The name of a file in the project storage.
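Here is a sketch of reading a CSV data asset into a Spark DataFrame. It assumes a Spark notebook runtime in which sc and a SparkSession named spark are available; the asset name is hypothetical:
# Enable Spark support and resolve the Hadoop-compatible URL of the asset
wslib.spark.provide_spark_context(sc)
url = wslib.spark.get_data_url("MyFile.csv")
# Read the file into a Spark DataFrame
df = spark.read.csv(url, header=True)
df.show(5)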
Browse project assets
The entry point wslib.assets provides generic, read-only access to assets of any type. For selected asset types, there are dedicated functions that provide additional data. To get help on the available functions, use help(wslib.assets.API).
The following naming conventions apply:
* Functions named list_<something> return a list of Python dictionaries. Each dictionary represents one asset and includes a small set of properties (metadata) that identifies the asset.
* Functions named get_<something> return a single Python dictionary with the properties for the asset.
To pretty-print a dictionary or list of dictionaries, use wslib.show().
The functions expect either the name of an asset, or an item from a list as the parameter. By default, the functions return only a subset of the available asset properties. By setting the parameter raw=True, you can get the full set of asset properties.
The entry point wslib.assets provides the following functions:
* list_assets(asset_type, name=None, query=None, selector=None, raw=False)
This function lists all assets for the given type with respect to the given constraints.
The function takes the following parameters:
* asset_type: (Required) The type of the assets to list, for example data_asset. See list_asset_types() for a list of the available asset types. Use asset type asset for the list of all available assets in the project.
* name: (Optional) The name of the asset to list. Use this parameter if more than one asset with the same name exists. You can specify either name or query, but not both.
* query: (Optional) A query string that is passed to the Watson Data API to search for assets. You can specify either name or query, but not both.
* selector: (Optional) A custom filter function on the candidate asset dictionary items. If the selector function returns True, the asset is included in the returned asset list.
* raw: (Optional) Returns all of the available metadata. By default, the parameter is set to False and only a subset of the properties is returned.
Examples of using the list_assets function:
# Import the lib
from ibm_watson_studio_lib import access_project_or_space
wslib = access_project_or_space({"token":"<ProjectToken>"})
# List all assets in the project
all_assets = wslib.assets.list_assets("asset")
wslib.show(all_assets)
# List all data assets with name 'MyFile.csv'
assets_by_name = wslib.assets.list_assets("data_asset", name="MyFile.csv")
# List all data assets whose name starts with "MyF"
assets_by_query = wslib.assets.list_assets("data_asset", query="asset.name:(MyF)")
# List all data assets which are larger than 1MB
sizeFilter = lambda x: x['metadata']['size'] > 1000000
large_assets = wslib.assets.list_assets("data_asset", selector=sizeFilter, raw=True)
# List all notebooks
notebooks = wslib.assets.list_assets("notebook")
* list_asset_types(raw=False)
This function lists all available asset types.
The function can take the following parameter:
* raw: (Optional) Returns the full set of metadata. By default, the parameter is False and only a subset of the properties is returned.
* list_datasource_types(raw=False)
This function lists all available data source types.
The function can take the following parameter:
* raw: (Optional) Returns the full set of metadata. By default, the parameter is False and only a subset of the properties is returned.
* get_asset(name_or_item, asset_type=None, raw=False)
The function returns the metadata of an asset.
The function takes the following parameters:
* name_or_item: (Required) The name of the asset or an item like those returned by list_assets()
* asset_type: (Optional) The type of the asset. If the parameter name_or_item contains a string for the name of the asset, setting asset_type is required.
* raw: (Optional) Returns the full set of metadata. By default, the parameter is False and only a subset of the properties is returned.
Example of using the list_assets and get_asset functions:
notebooks = wslib.assets.list_assets('notebook')
wslib.show(notebooks)
notebook = wslib.assets.get_asset(notebooks[0])
wslib.show(notebook)
* get_connection(name_or_item, with_datasourcetype=False, raw=False)
This function returns the metadata of a connection.
The function takes the following parameters:
* name_or_item: (Required) The name of the connection or an item like those returned by list_connections()
* with_datasourcetype: (Optional) Returns additional information about the data source type of the connection.
* raw: (Optional) Returns the full set of metadata. By default, the parameter is False and only a subset of the properties is returned.
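Example of using the get_connection function with data source type information. This is a sketch; "MyConnection" is a placeholder for the name of one of your connections:
conn = wslib.assets.get_connection("MyConnection", with_datasourcetype=True)
wslib.show(conn)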
* get_connected_data(name_or_item, with_datasourcetype=False, raw=False)
This function returns the metadata of a connected data asset.
The function takes the following parameters:
* name_or_item: (Required) The name of the connected data asset or an item like those returned by list_connected_data()
* with_datasourcetype: (Optional) Returns additional information about the data source type of the associated connected data asset.
* raw: (Optional) Returns the full set of metadata. By default, the parameter is False and only a subset of the properties is returned.
* get_stored_data(name_or_item, raw=False)
This function returns the metadata of a stored data asset.
The function takes the following parameters:
* name_or_item: (Required) The name of the stored data asset or an item like those returned by list_stored_data()
* raw: (Optional) Returns the full set of metadata. By default, the parameter is False and only a subset of the properties is returned.
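Example of using the get_stored_data function. This is a sketch; "MyFile.csv" is a placeholder for the name of one of your stored data assets:
stored = wslib.assets.get_stored_data("MyFile.csv")
wslib.show(stored)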
* list_attachments(name_or_item_or_asset, asset_type=None, raw=False)
This function returns a list of the attachments of an asset.
The function takes the following parameters:
* name_or_item_or_asset: (Required) The name of the asset or an item like those returned by list_stored_data() or get_asset().
* asset_type: (Optional) The type of the asset. It defaults to type data_asset.
* raw: (Optional) Returns the full set of metadata. By default, the parameter is False and only a subset of the properties is returned.
Example of using the list_attachments function to read an attachment of a stored data asset:
assets = wslib.list_stored_data()
wslib.show(assets)
asset = assets[0]
attachments = wslib.assets.list_attachments(asset)
wslib.show(attachments)
buffer = wslib.load_data(asset, attachments[0])
Parent topic:[Using ibm-watson-studio-lib](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/using-ibm-ws-lib.html)
| # ibm\-watson\-studio\-lib for Python #
The `ibm-watson-studio-lib` library for Python provides access to assets\. It can be used in notebooks that are created in the notebook editor\. `ibm-watson-studio-lib` provides support for working with data assets and connections, as well as browsing functionality for all other asset types\.
There are two kinds of data assets:
<!-- <ul> -->
* *Stored data assets* refer to files in the storage associated with the current project\. The library can load and save these files\. For data larger than one megabyte, this is not recommended\. The library requires that the data is kept in memory in its entirety, which might be inefficient when processing huge data sets\.
* *Connected data assets* represent data that must be accessed through a connection\. Using the library, you can retrieve the properties (metadata) of the connected data asset and its connection\. The functions do not return the data of a connected data asset\. You can either use the code that is generated for you when you click **Read data** on the Code snippets pane to access the data or you must write your own code\.
<!-- </ul> -->
Note: The `ibm-watson-studio-lib` functions do not encode or decode data when saving data to or getting data from a file\. Additionally, the `ibm-watson-studio-lib` functions can't be used to access connected folder assets (files on a path to the project storage)\.
## Setting up the `ibm-watson-studio-lib` library ##
The `ibm-watson-studio-lib` library for Python is pre\-installed and can be imported directly in a notebook in the notebook editor\. To use the `ibm-watson-studio-lib` library in your notebook, you need the ID of the project and the project token\.
To insert the project token to your notebook:
<!-- <ol> -->
1. Click the **More** icon on your notebook toolbar and then click **Insert project token**\.
If a project token exists, a cell is added to your notebook with the following information:
from ibm_watson_studio_lib import access_project_or_space
wslib = access_project_or_space({"token":"<ProjectToken>"})
`<ProjectToken>` is the value of the project token.
If you are told in a message that no project token exists, click the link in the message to be redirected to the project's **Access Control** page where you can create a project token. You must be eligible to create a project token. For details, see [Manually adding the project token](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/token.html).
To create a project token:
<!-- <ol> -->
1. From the **Manage** tab, select the **Access Control** page, and click **New access token** under **Access tokens**.
2. Enter a name, select **Editor** role for the project, and create a token.
3. Go back to your notebook, click the **More** icon on the notebook toolbar and then click **Insert project token**.
<!-- </ol> -->
<!-- </ol> -->
## Helper functions ##
You can get information about the supported functions in the `ibm-watson-studio-lib` library programmatically by using `help(wslib)`, or for an individual function by using `help(wslib.<function_name>)`, for example `help(wslib.get_connection)`\.
You can use the helper function `wslib.show(...)` for formatted printing of Python dictionaries and lists of dictionaries, which are the common result output type of the `ibm-watson-studio-lib` functions\.
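For example, a minimal sketch of the helper functions, assuming `wslib` was set up as described above:

```python
# Print an overview of all functions in the library
help(wslib)

# Print the help for a single function
help(wslib.get_connection)

# Pretty-print a list of result dictionaries
assets = wslib.list_stored_data()
wslib.show(assets)
```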
## The `ibm-watson-studio-lib` functions ##
The `ibm-watson-studio-lib` library exposes a set of functions that are grouped in the following way:
<!-- <ul> -->
* [Get project information](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html?context=cdpaas&locale=en#get-infos)
* [Get authentication token](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html?context=cdpaas&locale=en#get-auth-token)
* [Fetch data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html?context=cdpaas&locale=en#fetch-data)
* [Save data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html?context=cdpaas&locale=en#save-data)
* [Get connection information](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html?context=cdpaas&locale=en#get-conn-info)
* [Get connected data information](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html?context=cdpaas&locale=en#get-conn-data-info)
* [Access assets by ID instead of name](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html?context=cdpaas&locale=en#access-by-id)
* [Access project storage directly](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html?context=cdpaas&locale=en#direct-proj-storage)
* [Spark support](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html?context=cdpaas&locale=en#spark-support)
* [Browse project assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html?context=cdpaas&locale=en#browse-assets)
<!-- </ul> -->
### Get project information ###
While developing code, you might not know the exact names of data assets or connections\. The following functions provide lists of assets, from which you can pick the relevant ones\. In all examples, you can use `wslib.show(assets)` to pretty\-print the list\. The index of each item is printed in front of the item\.
<!-- <ul> -->
* `list_connections()`
This function returns a list of the connections. The list of returned connections is not sorted by any criterion and can change when you call the function again. You can pass a dictionary item instead of a name to the `get_connection` function.
For example:
```python
# Import the lib
from ibm_watson_studio_lib import access_project_or_space
wslib = access_project_or_space({"token":"<ProjectToken>"})

assets = wslib.list_connections()
wslib.show(assets)

connprops = wslib.get_connection(assets[0])
wslib.show(connprops)
```
* `list_connected_data()`
This function returns the connected data assets. The list of returned connected data assets is not sorted by any criterion and can change when you call the function again. You can pass a dictionary item instead of a name to the `get_connected_data` function.
* `list_stored_data()`
This function returns a list of the stored data assets (data files). The list of returned data assets is not sorted by any criterion and can change when you call the function again. You can pass a dictionary item instead of a name to the `load_data` and `save_data` functions.
Note: A heuristic is applied to distinguish between connected data assets and stored data assets. However, there may be cases where a data asset of the wrong kind appears in the returned lists.
* `wslib.here`
By using this entry point, you can retrieve metadata about the project that the lib is working with. The entry point `wslib.here` provides the following functions:
<!-- <ul> -->
* `get_name()`
This function returns the name of the project.
* `get_description()`
This function returns the description of the project.
* `get_ID()`
This function returns the ID of the project.
* `get_storage()`
This function returns storage information for the project.
<!-- </ul> -->
<!-- </ul> -->
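For example, you can print basic metadata about the current project\. A minimal sketch:

```python
print(wslib.here.get_name())
print(wslib.here.get_description())
print(wslib.here.get_ID())
wslib.show(wslib.here.get_storage())
```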
### Get authentication token ###
Some tasks require an authentication token\. For example, if you want to run your own requests against the [Watson Data API](https://cloud.ibm.com/apidocs/watson-data-api-cpd), you need an authentication token\.
You can use the following function to get the bearer token:
<!-- <ul> -->
* `get_current_token()`
<!-- </ul> -->
For example:
```python
from ibm_watson_studio_lib import access_project_or_space
wslib = access_project_or_space({"token":"<ProjectToken>"})
token = wslib.auth.get_current_token()
```
This function returns the bearer token that is currently used by the `ibm-watson-studio-lib` library\.
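For example, the following sketch passes the token to your own request against the Watson Data API\. The endpoint and the use of the `requests` package are illustrative and not part of `ibm-watson-studio-lib`:

```python
import requests

token = wslib.auth.get_current_token()

# Illustrative endpoint; see the Watson Data API documentation
response = requests.get(
    "https://api.dataplatform.cloud.ibm.com/v2/asset_types",
    headers={"Authorization": "Bearer " + token},
    params={"project_id": wslib.here.get_ID()},
)
print(response.status_code)
```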
### Fetch data ###
You can use the following functions to fetch data from a stored data asset (a file) in your project\.
<!-- <ul> -->
* `load_data(asset_name_or_item, attachment_type_or_item=None)`
This function loads the data of a stored data asset into a BytesIO buffer. The function is not recommended for very large files.
The function takes the following parameters:
<!-- <ul> -->
* `asset_name_or_item`: (Required) Either a string with the name of a stored data asset or an item like those returned by `list_stored_data()`.
* `attachment_type_or_item`: (Optional) Attachment type to load. A data asset can have more than one attachment with data. Without this parameter, the default attachment type, namely `data_asset`, is loaded. Specify this parameter if the attachment type is not `data_asset`. For example, if a plain text data asset has an attached profile from Natural Language Analysis, this can be loaded as attachment type `data_profile_nlu`.
Here is an example that shows you how to load the data of a data asset:
<!-- </ul> -->
<!-- </ul> -->
```python
# Import the lib
from ibm_watson_studio_lib import access_project_or_space
wslib = access_project_or_space({"token":"<ProjectToken>"})
# Fetch the data from a file
my_file = wslib.load_data("MyFile.csv")
# Read the CSV data file into a pandas DataFrame
my_file.seek(0)
import pandas as pd
pd.read_csv(my_file, nrows=10)
```
<!-- <ul> -->
* `download_file(asset_name_or_item, file_name=None, attachment_type_or_item=None)`
This function downloads the data of a stored data asset and stores it in the specified file in the file system of your runtime. The file is overwritten if it already exists.
The function takes the following parameters:
<!-- <ul> -->
* `asset_name_or_item`: (Required) Either a string with the name of a stored data asset or an item like those returned by `list_stored_data()`.
* `file_name`: (Optional) The name of the file that the downloaded data is stored to. It defaults to the asset's attachment name.
* `attachment_type_or_item`: (Optional) The attachment type to download. A data asset can have more than one attachment with data. Without this parameter, the default attachment type, namely `data_asset`, is downloaded. Specify this parameter if the attachment type is not `data_asset`. For example, if a plain text data asset has an attached profile from Natural Language Analysis, this can be downloaded as attachment type `data_profile_nlu`.
Here is an example that shows you how you can use `download_file` to make your custom Python script available in your notebook:
<!-- </ul> -->
<!-- </ul> -->
```python
# Import the lib
from ibm_watson_studio_lib import access_project_or_space
wslib = access_project_or_space({"token":"<ProjectToken>"})
# Let's assume you have a Python script "helpers.py" with helper functions on your local machine.
# Upload the script to your project using the Data Panel on the right of the opened notebook.
# Download the script to the file system of your runtime
wslib.download_file("helpers.py")
# import the required functions to use them in your notebook
from helpers import my_func
my_func()
```
### Save data ###
The functions to save data in your project storage do multiple things:
<!-- <ul> -->
* Store the data in project storage
* Add the data as a data asset (by creating an asset or overwriting an existing asset) to your project so you can see the data in the data assets list in your project\.
* Associate the asset with the file in the storage\.
<!-- </ul> -->
You can use the following functions to save data:
<!-- <ul> -->
* `save_data(asset_name_or_item, data, overwrite=None, mime_type=None, file_name=None)`
This function saves data in memory to the project storage.
The function takes the following parameters:
<!-- <ul> -->
* `asset_name_or_item`: (Required) The name of the asset to create, or an item like those returned by `list_stored_data()`. Pass an item if you want to overwrite an existing file.
* `data`: (Required) The data to upload. This can be any object of type `bytes-like-object`, for example a byte buffer.
* `overwrite`: (Optional) Overwrites the data of a stored data asset if it already exists. By default, this is set to false. If an asset item is passed instead of a name, the behavior is to overwrite the asset.
* `mime_type`: (Optional) The MIME type for the created asset. By default the MIME type is determined from the asset name suffix. If you use asset names without a suffix, specify the MIME type here. For example `mime_type='application/text'` for plain text data. This parameter is ignored when overwriting an asset.
* `file_name`: (Optional) The file name to be used in the project storage. The data is saved in the storage associated with the project. When creating a new asset, the file name is derived from the asset name, but might be different. If you want to access the file directly, you can specify a file name. This parameter is ignored when overwriting an asset.
Here is an example that shows you how to save data to a file:
<!-- </ul> -->
<!-- </ul> -->
```python
# Import the lib
from ibm_watson_studio_lib import access_project_or_space
wslib = access_project_or_space({"token":"<ProjectToken>"})
# let's assume you have the pandas DataFrame pandas_df which contains the data
# you want to save as a csv file
wslib.save_data("my_asset_name.csv", pandas_df.to_csv(index=False).encode())
# the function returns a dict which contains the asset_name, asset_id, file_name and additional information upon successful saving of the data
```
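If the asset already exists, you can pass `overwrite=True` to replace its data\. A minimal sketch:

```python
wslib.save_data("my_asset_name.csv", pandas_df.to_csv(index=False).encode(), overwrite=True)
```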
<!-- <ul> -->
* `upload_file(file_path, asset_name=None, file_name=None, overwrite=False, mime_type=None)`
This function saves data from the file system of the runtime to a file associated with your project\.
The function takes the following parameters:
<!-- <ul> -->
* `file_path`: (Required) The path to the file in the file system.
* `asset_name`: (Optional) The name of the data asset that is created. It defaults to the name of the file to be uploaded.
* `file_name`: (Optional) The name of the file that is created in the storage associated with the project. It defaults to the name of the file to be uploaded.
* `overwrite`: (Optional) Overwrites an existing file in storage. Defaults to false.
* `mime_type`: (Optional) The MIME type for the created asset. By default the MIME type is determined from the asset name suffix. If you use asset names without a suffix, specify the MIME type here. For example `mime_type='application/text'` for plain text data. This parameter is ignored when overwriting an asset.
Here is an example that shows you how you can upload a file to the project:
<!-- </ul> -->
<!-- </ul> -->
```python
# Import the lib
from ibm_watson_studio_lib import access_project_or_space
wslib = access_project_or_space({"token":"<ProjectToken>"})
# Let's assume you have downloaded a file and want to save it
# in your project.
import urllib.request
urllib.request.urlretrieve("https://some/url/data_file.csv", "data_file.csv")
wslib.upload_file("data_file.csv")
# The function returns a dictionary which contains the asset_name, asset_id, file_name and additional information upon successful saving of the data.
```
### Get connection information ###
You can use the following function to access the connection metadata of a given connection\.
<!-- <ul> -->
* `get_connection(name_or_item)`
This function returns the properties (metadata) of a connection which you can use to fetch data from the connection data source. Use `wslib.show(connprops)` to view the properties. The special key `"."` in the returned dictionary provides information about the connection asset.
The function takes the following required parameter:
<!-- <ul> -->
* `name_or_item`: Either a string with the name of a connection or an item like those returned by `list_connections()`.
Note that when you work with notebooks, you can click **Read data** on the Code snippets pane to generate code to load data from a connection into a pandas DataFrame for example.
<!-- </ul> -->
<!-- </ul> -->
### Get connected data information ###
You can use the following function to access the metadata of a connected data asset\.
<!-- <ul> -->
* `get_connected_data(name_or_item)`
This function returns the properties of a connected data asset, including the properties of the underlying connection. Use `wslib.show()` to view the properties. The special key `"."` in the returned dictionary provides information about the data and the connection assets.
The function takes the following required parameter:
<!-- <ul> -->
* `name_or_item`: Either a string with the name of a connected data asset or an item like those returned by `list_connected_data()`.
Note that when you work with notebooks, you can click **Read data** on the Code snippets pane to generate code to load data from a connected data asset into a pandas DataFrame for example.
<!-- </ul> -->
<!-- </ul> -->
### Access asset by ID instead of name ###
You should preferably always access data assets and connections by a unique name\. Asset names are not necessarily always unique and the `ibm-watson-studio-lib` functions will raise an exception when a name is ambiguous\. You can rename data assets in the UI to resolve the conflict\.
Accessing assets by a unique ID is possible but is discouraged, as IDs are valid only in the current project and will break code when transferred to a different project\. This can happen, for example, when projects are exported and re\-imported\. You can get the ID of a connection, connected or stored data asset by using the corresponding list function, for example `list_connections()`\.
The entry point `wslib.by_id` provides the following functions:
<!-- <ul> -->
* `get_connection(asset_id)`
This function accesses a connection by the connection asset ID.
* `get_connected_data(asset_id)`
This function accesses a connected data asset by the connected data asset ID.
* `load_data(asset_id, attachment_type_or_item=None)`
This function loads the data of a stored data asset by passing the asset ID. See [`load_data()`](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html?context=cdpaas&locale=en#fetch-data) for a description of the other parameters you can pass.
* `save_data(asset_id, data, overwrite=None, mime_type=None, file_name=None)`
This function saves data to a stored data asset by passing the asset ID. This implies `overwrite=True`. See [`save_data()`](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html?context=cdpaas&locale=en#save-data) for a description of the other parameters you can pass.
* `download_file(asset_id, file_name=None, attachment_type_or_item=None)`
This function downloads the data of a stored data asset by passing the asset ID. See [`download_file()`](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html?context=cdpaas&locale=en#fetch-data) for a description of the other parameters you can pass.
<!-- </ul> -->
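For example, a minimal sketch of loading a stored data asset by its ID\. The ID is a placeholder; take real IDs from the items returned by the list functions:

```python
# "<asset-id>" is a placeholder for a real asset ID
buffer = wslib.by_id.load_data("<asset-id>")
buffer.seek(0)
print(buffer.read(100))
```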
### Access project storage directly ###
You can fetch data from project storage and store data in project storage without synchronizing the project assets using the entry point `wslib.storage`\.
The entry point `wslib.storage` provides the following functions:
<!-- <ul> -->
* `fetch_data(filename)`
This function returns the data in a file as a BytesIO buffer. The file does not need to be registered as a data asset.
The function takes the following required parameter:
<!-- <ul> -->
* `filename`: The name of the file in the project storage.
<!-- </ul> -->
* `store_data(filename, data, overwrite=False)`
This function saves data in memory to storage, but does not create a new data asset. The function returns a dictionary which contains the file name, file path and additional information. Use `wslib.show()` to print the information.
The function takes the following parameters:
<!-- <ul> -->
* `filename`: (Required) The name of the file in the project storage.
* `data`: (Required) The data to save as a bytes-like object.
* `overwrite`: (Optional) Overwrites the data of a file in storage if it already exists. By default, this is set to false.
<!-- </ul> -->
* `download_file(storage_filename, local_filename=None)`
This function downloads the data in a file in storage and stores it in the specified local file. The local file is overwritten if it already exists.
The function takes the following parameters:
<!-- <ul> -->
* `storage_filename`: (Required) The name of the file in storage to download.
* `local_filename`: (Optional) The name of the file in the local file system of your runtime to download the file to. Omit this parameter to use the storage file name.
<!-- </ul> -->
* `register_asset(storage_path, asset_name=None, mime_type=None)`
This function registers the file in storage as a data asset in your project. This operation fails if a data asset with the same name already exists.
<!-- </ul> -->
You can use this function if you have very large files that you cannot upload via save\_data()\. You can upload large files directly to the IBM Cloud Object Storage bucket of your project, for example via the UI, and then register them as data assets using `register_asset()`\.
The function takes the following parameters:
<!-- <ul> -->
* `storage_path`: (Required) The path of the file in storage\.
* `asset_name`: (Optional) The name of the created asset\. It defaults to the file name\.
* `mime_type`: (Optional) The MIME type for the created asset\. By default the MIME type is determined from the asset name suffix\. Use this parameter to specify a MIME type if your file name does not have a file extension or if you want to set a different MIME type\.
Note: You can register a file several times as a different data asset. Deleting one of those assets in the project also deletes the file in storage, which means that other asset references to the file might be broken.
<!-- </ul> -->
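The following sketch shows `store_data`, `fetch_data`, and `register_asset` together\. The file names are illustrative:

```python
# Save bytes directly to project storage without creating a data asset
info = wslib.storage.store_data("results.csv", b"a,b\n1,2\n", overwrite=True)
wslib.show(info)

# Read the bytes back into a BytesIO buffer
buffer = wslib.storage.fetch_data("results.csv")
print(buffer.read())

# Register the file in storage as a data asset of the project
wslib.storage.register_asset("results.csv", asset_name="results.csv")
```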
### Spark support ###
The entry point `wslib.spark` provides functions to access files in storage with Spark\. To get help information about the available functions, use `help(wslib.spark.API)`\.
The entry point `wslib.spark` provides the following functions:
<!-- <ul> -->
* `provide_spark_context(sc)`
Use this function to enable Spark support.
The function takes the following required parameter:
<!-- <ul> -->
* sc: The SparkContext. It is provided in the notebook runtime.
The following example shows you how to set up Spark support:
```python
from ibm_watson_studio_lib import access_project_or_space
wslib = access_project_or_space({"token":"<ProjectToken>"})
wslib.spark.provide_spark_context(sc)
```
<!-- </ul> -->
* `get_data_url(asset_name)`
This function returns a URL to access a file in storage from Spark via Hadoop.
The function takes the following required parameter:
<!-- <ul> -->
* `asset_name`: The name of the asset.
<!-- </ul> -->
* `storage.get_data_url(file_name)`
This function returns a URL to access a file in storage from Spark via Hadoop. The function expects the file name and not the asset name.
The function takes the following required parameter:
<!-- <ul> -->
* `file_name`: The name of a file in the project storage.
<!-- </ul> -->
<!-- </ul> -->
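For example, a sketch of reading a data asset into a Spark DataFrame via the Hadoop URL\. It assumes that Spark support was enabled as shown above and that `spark` is the SparkSession of your notebook runtime:

```python
url = wslib.spark.get_data_url("MyFile.csv")
df = spark.read.option("header", "true").csv(url)
df.show(5)
```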
### Browse project assets ###
The entry point `wslib.assets` provides generic, read\-only access to assets of any type\. For selected asset types, there are dedicated functions that provide additional data\. To get help on the available functions, use `help(wslib.assets.API)`\.
The following naming conventions apply:
<!-- <ul> -->
* Functions named `list_<something>` return a list of Python dictionaries\. Each dictionary represents one asset and includes a small set of properties (metadata) that identifies the asset\.
* Functions named `get_<something>` return a single Python dictionary with the properties for the asset\.
<!-- </ul> -->
To pretty\-print a dictionary or list of dictionaries, use `wslib.show()`\.
The functions expect either the name of an asset, or an item from a list as the parameter\. By default, the functions return only a subset of the available asset properties\. By setting the parameter `raw=True`, you can get the full set of asset properties\.
The entry point `wslib.assets` provides the following functions:
<!-- <ul> -->
* `list_assets(asset_type, name=None, query=None, selector=None, raw=False)`
This function lists all assets for the given type with respect to the given constraints.
The function takes the following parameters:
<!-- <ul> -->
* `asset_type`: (Required) The type of the assets to list, for example `data_asset`. See `list_asset_types()` for a list of the available asset types. Use asset type `asset` for the list of all available assets in the project.
* `name`: (Optional) The name of the asset to list. Use this parameter if more than one asset with the same name exists. You can specify either `name` or `query`, but not both.
* `query`: (Optional) A query string that is passed to the Watson Data API to search for assets. You can specify either `name` or `query`, but not both.
* `selector`: (Optional) A custom filter function on the candidate asset dictionary items. If the selector function returns `True`, the asset is included in the returned asset list.
* `raw`: (Optional) Returns all of the available metadata. By default, the parameter is set to `False` and only a subset of the properties is returned.
<!-- </ul> -->
Examples of using the `list_assets` function:
<!-- </ul> -->
```python
# Import the lib
from ibm_watson_studio_lib import access_project_or_space
wslib = access_project_or_space({"token":"<ProjectToken>"})

# List all assets in the project
all_assets = wslib.assets.list_assets("asset")
wslib.show(all_assets)

# List all data assets with name 'MyFile.csv'
assets_by_name = wslib.assets.list_assets("data_asset", name="MyFile.csv")

# List all data assets whose name starts with "MyF"
assets_by_query = wslib.assets.list_assets("data_asset", query="asset.name:(MyF*)")

# List all data assets which are larger than 1MB
sizeFilter = lambda x: x['metadata']['size'] > 1000000
large_assets = wslib.assets.list_assets("data_asset", selector=sizeFilter, raw=True)

# List all notebooks
notebooks = wslib.assets.list_assets("notebook")
```
<!-- <ul> -->
* `list_asset_types(raw=False)`
This function lists all available asset types.
The function can take the following parameter:
<!-- <ul> -->
* `raw`: (Optional) Returns the full set of metadata. By default, the parameter is `False` and only a subset of the properties is returned.
<!-- </ul> -->
* `list_datasource_types(raw=False)`
This function lists all available data source types.
The function can take the following parameter:
<!-- <ul> -->
* `raw`: (Optional) Returns the full set of metadata. By default, the parameter is `False` and only a subset of the properties is returned.
<!-- </ul> -->
* `get_asset(name_or_item, asset_type=None, raw=False)`
The function returns the metadata of an asset.
The function takes the following parameters:
<!-- <ul> -->
* `name_or_item`: (Required) The name of the asset or an item like those returned by `list_assets()`
* `asset_type`: (Optional) The type of the asset. If the parameter `name_or_item` contains a string for the name of the asset, setting `asset_type` is required.
* `raw`: (Optional) Returns the full set of metadata. By default, the parameter is `False` and only a subset of the properties is returned.
Example of using the `list_assets` and `get_asset` functions:
```python
notebooks = wslib.assets.list_assets('notebook')
wslib.show(notebooks)
notebook = wslib.assets.get_asset(notebooks[0])
wslib.show(notebook)
```
<!-- </ul> -->
* `get_connection(name_or_item, with_datasourcetype=False, raw=False)`
This function returns the metadata of a connection.
The function takes the following parameters:
<!-- <ul> -->
* `name_or_item`: (Required) The name of the connection or an item like those returned by `list_connections()`
* `with_datasourcetype`: (Optional) Returns additional information about the data source type of the connection.
* `raw`: (Optional) Returns the full set of metadata. By default, the parameter is `False` and only a subset of the properties is returned.
<!-- </ul> -->
* `get_connected_data(name_or_item, with_datasourcetype=False, raw=False)`
This function returns the metadata of a connected data asset.
The function takes the following parameters:
<!-- <ul> -->
* `name_or_item`: (Required) The name of the connected data asset or an item like those returned by `list_connected_data()`
* `with_datasourcetype`: (Optional) Returns additional information about the data source type of the associated connected data asset.
* `raw`: (Optional) Returns the full set of metadata. By default, the parameter is `False` and only a subset of the properties is returned.
<!-- </ul> -->
* `get_stored_data(name_or_item, raw=False)`
This function returns the metadata of a stored data asset.
The function takes the following parameters:
<!-- <ul> -->
* `name_or_item`: (Required) The name of the stored data asset or an item like those returned by `list_stored_data()`
* `raw`: (Optional) Returns the full set of metadata. By default, the parameter is `False` and only a subset of the properties is returned.
<!-- </ul> -->
* `list_attachments(name_or_item_or_asset, asset_type=None, raw=False)`
This function returns a list of the attachments of an asset.
The function takes the following parameters:
<!-- <ul> -->
* `name_or_item_or_asset`: (Required) The name of the asset or an item like those returned by `list_stored_data()` or `get_asset()`.
* `asset_type`: (Optional) The type of the asset. It defaults to type `data_asset`.
* `raw`: (Optional) Returns the full set of metadata. By default, the parameter is `False` and only a subset of the properties is returned.
<!-- </ul> -->
Example of using the `list_attachments` function to read an attachment of a stored data asset:
```python
assets = wslib.list_stored_data()
wslib.show(assets)
asset = assets[0]
attachments = wslib.assets.list_attachments(asset)
wslib.show(attachments)
buffer = wslib.load_data(asset, attachments[0])
```
<!-- </ul> -->
**Parent topic:**[Using ibm\-watson\-studio\-lib](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/using-ibm-ws-lib.html)
<!-- </article "role="article" "> -->
|
B019692A5844A9A72292A35B8953AA67836F8201 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-r.html?context=cdpaas&locale=en | ibm-watson-studio-lib for R | ibm-watson-studio-lib for R
The ibm-watson-studio-lib library for R provides access to assets. It can be used in notebooks that are created in the notebook editor or in RStudio in a project. ibm-watson-studio-lib provides support for working with data assets and connections, as well as browsing functionality for all other asset types.
There are two kinds of data assets:
* Stored data assets refer to files in the storage associated with the current project. The library can load and save these files. For data larger than one megabyte, this is not recommended. The library requires that the data is kept in memory in its entirety, which might be inefficient when processing huge data sets.
* Connected data assets represent data that must be accessed through a connection. Using the library, you can retrieve the properties (metadata) of the connected data asset and its connection. The functions do not return the data of a connected data asset. You can either use the code that is generated for you when you click Read data on the Code snippets panel to access the data or you must write your own code.
Note: The ibm-watson-studio-lib functions do not encode or decode data when saving data to or getting data from a file. Additionally, the ibm-watson-studio-lib functions can't be used to access connected folder assets (files on a path to the project storage).
Setting up the ibm-watson-studio-lib library
The ibm-watson-studio-lib library for R is pre-installed and can be imported directly in a notebook in the notebook editor. To use the ibm-watson-studio-lib library in your notebook, you need the ID of the project and the project token.
To insert the project token to your notebook:
1. Click the More icon on your notebook toolbar and then click Insert project token.
If a project token exists, a cell is added to your notebook with the following information:
library(ibmWatsonStudioLib)
wslib <- access_project_or_space(list("token"="<ProjectToken>"))
<ProjectToken> is the value of the project token.
If you are told in a message that no project token exists, click the link in the message to be redirected to the project's Access Control page where you can create a project token. You must be eligible to create a project token. For details, see [Manually adding the project token](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/token.html).
To create a project token:
1. From the Manage tab, select the Access Control page, and click New access token under Access tokens.
2. Enter a name, select Editor role for the project, and create a token.
3. Go back to your notebook, click the More icon on the notebook toolbar and then click Insert project token.
The ibm-watson-studio-lib functions
The ibm-watson-studio-lib library exposes a set of functions that are grouped in the following way:
* [Get project information](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-r.html?context=cdpaas&locale=en#get-infos)
* [Get authentication token](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-r.html?context=cdpaas&locale=en#get-auth-token)
* [Fetch data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-r.html?context=cdpaas&locale=en#fetch-data)
* [Save data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-r.html?context=cdpaas&locale=en#save-data)
* [Get connection information](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-r.html?context=cdpaas&locale=en#get-conn-info)
* [Get connected data information](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-r.html?context=cdpaas&locale=en#get-conn-data-info)
* [Access assets by ID instead of name](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-r.html?context=cdpaas&locale=en#access-by-id)
* [Access project storage directly](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-r.html?context=cdpaas&locale=en#direct-proj-storage)
* [Spark support](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-r.html?context=cdpaas&locale=en#spark-support)
* [Browse project assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-r.html?context=cdpaas&locale=en#browse-assets)
Get project information
While developing code, you might not know the exact names of data assets or connections. The following functions provide lists of assets, from which you can pick the relevant ones. In all examples, you can use wslib$show(assets) to pretty-print the list. The index of each item is printed in front of the item.
* list_connections()
This function returns a list of the connections. The list of returned connections is not sorted by any criterion and can change when you call the function again. You can pass a list item instead of a name to the get_connection function.
# Import the lib
library("ibmWatsonStudioLib")
wslib <- access_project_or_space(list("token"="<ProjectToken>"))
assets <- wslib$list_connections()
wslib$show(assets)
connprops <- wslib$get_connection(assets[[1]])
* list_connected_data()
This function returns the connected data assets. The list of returned connected data assets is not sorted by any criterion and can change when you call the function again. You can pass a list item instead of a name to the get_connected_data function.
* list_stored_data()
This function returns a list of the stored data assets (data files). The list of returned data assets is not sorted by any criterion and can change when you call the function again. You can pass a list item instead of a name to the load_data and save_data functions.
Note: A heuristic is applied to distinguish between connected data assets and stored data assets. However, there may be cases where a data asset of the wrong kind appears in the returned lists.
* wslib$here
By using this entry point, you can retrieve metadata about the project that the lib is working with. The entry point wslib$here provides the following functions:
* get_name()
This function returns the name of the project.
* get_description()
This function returns the description of the project.
* get_ID()
This function returns the ID of the project.
* get_storage()
This function returns storage information for the project.
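For example, you can print basic metadata about the current project. A minimal sketch:
print(wslib$here$get_name())
print(wslib$here$get_description())
print(wslib$here$get_ID())
wslib$show(wslib$here$get_storage())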
Get authentication token
Some tasks require an authentication token. For example, if you want to run your own requests against the [Watson Data API](https://cloud.ibm.com/apidocs/watson-data-api-cpd), you need an authentication token.
You can use the following function to get the bearer token:
* get_current_token()
For example:
library("ibmWatsonStudioLib")
wslib <- access_project_or_space(list("token"="<ProjectToken>"))
token <- wslib$auth$get_current_token()
This function returns the bearer token that is currently used by the ibm-watson-studio-lib library.
Fetch data
You can use the following functions to fetch data from a stored data asset (a file) in your project.
* load_data(asset_name_or_item, attachment_type_or_item = NULL)
This function loads the data of a stored data asset into a bytes buffer. The function is not recommended for very large files.
The function takes the following parameters:
* asset_name_or_item: (Required) Either a string with the name of a stored data asset or an item like those returned by list_stored_data().
* attachment_type_or_item: (Optional) Attachment type to load. A data asset can have more than one attachment with data. Without this parameter, the default attachment type, namely data_asset, is loaded. Specify this parameter if the attachment type is not data_asset. For example, if a plain text data asset has an attached profile from Natural Language Analysis, this can be loaded as attachment type data_profile_nlu.
Here is an example that shows you how to load the data of a data asset:
# Import the lib
library("ibmWatsonStudioLib")
wslib <- access_project_or_space(list("token"="<ProjectToken>"))
# Fetch the data from a file
my_file <- wslib$load_data("MyFile.csv")
# Read the CSV data file into a data frame
df <- read.csv(text = rawToChar(my_file))
head(df)
* download_file(asset_name_or_item, file_name = NULL, attachment_type_or_item = NULL)
This function downloads the data of a stored data asset and stores it in the specified file in the file system of your runtime. The file is overwritten if it already exists.
The function takes the following parameters:
* asset_name_or_item: (Required) Either a string with the name of a stored data asset or an item like those returned by list_stored_data().
* file_name: (Optional) The name of the file that the downloaded data is stored to. It defaults to the asset's attachment name.
* attachment_type_or_item: (Optional) The attachment type to download. A data asset can have more than one attachment with data. Without this parameter, the default attachment type, namely data_asset, is downloaded. Specify this parameter if the attachment type is not data_asset. For example, if a plain text data asset has an attached profile from Natural Language Analysis, this can be downloaded as attachment type data_profile_nlu.
Here is an example that shows you how you can use download_file to make your custom R script available in your notebook:
# Import the lib
library("ibmWatsonStudioLib")
wslib <- access_project_or_space(list("token"="<ProjectToken>"))
# Let's assume you have an R script "helpers.R" with helper functions on your local machine.
# Upload the script to your project using the Data Panel on the right.
# Download the script to the file system of your runtime
wslib$download_file("helpers.R")
# Source the script to use the contained functions, e.g. 'my_func', in your notebook.
source("helpers.R")
my_func()
Save data
The functions to store data in your project storage do multiple things:
* Store the data in project storage
* Add the data as a data asset (by creating an asset or overwriting an existing asset) to your project so you can see the data in the data assets list in your project.
* Associate the asset with the file in the storage.
You can use the following functions to save data:
* save_data(asset_name_or_item, data, overwrite = NULL, mime_type = NULL, file_name = NULL)
This function saves data in memory to the project storage.
The function takes the following parameters:
* asset_name_or_item: (Required) The name of the asset to create, or an item like those returned by list_stored_data(). Pass an item if you want to overwrite an existing file.
* data: (Required) The data to upload. The expected data type is raw.
* overwrite: (Optional) Overwrites the data of a stored data asset if it already exists. Defaults to FALSE. If an asset item is passed instead of a name, the behavior is to overwrite the asset.
* mime_type: (Optional) The MIME type for the created asset. By default the MIME type is determined from the asset name suffix. If you use asset names without a suffix, specify the MIME type here. For example mime_type='application/text' for plain text data. This parameter is ignored when overwriting an asset.
* file_name: (Optional) The file name to be used in the project storage. The data is saved in the storage associated with the project. When creating a new asset, the file name is derived from the asset name, but might be different. If you want to access the file directly, you can specify a file name. This parameter is ignored when overwriting an asset.
Here is an example that shows you how to save data to a file:
# Import the lib
library("ibmWatsonStudioLib")
wslib <- access_project_or_space(list("token"="<ProjectToken>"))
# Let's assume you have a data frame df which contains the data
# you want to save as a csv file
csv <- capture.output(write.csv(df, row.names=FALSE), type="output")
csv_raw <- charToRaw(paste0(csv, collapse='\n'))
wslib$save_data("my_asset_name.csv", csv_raw)
# The function returns a list which contains the asset_name, asset_id, file_name and additional information upon successful saving of the data.
* upload_file(file_path, asset_name = NULL, file_name = NULL, overwrite = FALSE, mime_type = NULL)
This function saves data in the file system in the runtime to a file associated with your project.
The function takes the following parameters:
* file_path: (Required) The path to the file in the file system.
* asset_name: (Optional) The name of the data asset that is created. It defaults to the name of the file to be uploaded.
* file_name: (Optional) The name of the file that is created in the storage associated with the project. It defaults to the name of the file to be uploaded.
* overwrite: (Optional) Overwrites an existing file in storage. Defaults to FALSE.
* mime_type: (Optional) The MIME type for the created asset. By default the MIME type is determined from the asset name suffix. If you use asset names without a suffix, specify the MIME type here. For example mime_type='application/text' for plain text data. This parameter is ignored when overwriting an asset.
Here is an example that shows you how you can upload a file to the project:
# Import the lib
library("ibmWatsonStudioLib")
wslib <- access_project_or_space(list("token"="<ProjectToken>"))
# Let's assume you have downloaded a file and want to save it
# in your project.
download.file("https://some/url/data_file.csv", "data_file.csv")
wslib$upload_file("data_file.csv")
# The function returns a list which contains the asset_name, asset_id, file_name and additional information upon successful saving of the data.
Get connection information
You can use the following function to access the connection metadata of a given connection.
* get_connection(name_or_item)
This function returns the properties (metadata) of a connection which you can use to fetch data from the connection data source. Use wslib$show(connprops) to view the properties. The special key "." in the returned list item provides information about the connection asset.
The function takes the following required parameter:
* name_or_item: Either a string with the name of a connection or an item like those returned by list_connections().
Note that when you work with notebooks, you can click Read data on the Code snippets panel to generate code to load data from a connection into a data frame, for example.
Get connected data information
You can use the following function to access the metadata of a connected data asset.
* get_connected_data(name_or_item)
This function returns the properties of a connected data asset, including the properties of the underlying connection. Use wslib$show() to view the properties. The special key "." in the returned list provides information about the data and the connection assets.
The function takes the following required parameter:
* name_or_item: Either a string with the name of a connected data asset or an item like those returned by list_connected_data().
Note that when you work with notebooks, you can click Read data on the Code snippets panel to generate code to load data from a connected data asset into a data frame, for example.
Access asset by ID instead of name
You should preferably always access data assets and connections by a unique name. Asset names are not necessarily always unique and the ibm-watson-studio-lib functions will raise an exception when a name is ambiguous. You can rename data assets in the UI to resolve the conflict.
Accessing assets by a unique ID is possible but is discouraged, as IDs are valid only in the current project and will break code when transferred to a different project. This can happen, for example, when projects are exported and re-imported. You can get the ID of a connection, connected or stored data asset by using the corresponding list function, for example list_connections().
The entry point wslib$by_id provides the following functions:
* get_connection(asset_id)
This function accesses a connection by the connection asset ID.
* get_connected_data(asset_id)
This function accesses a connected data asset by the connected data asset ID.
* load_data(asset_id, attachment_type_or_item = NULL)
This function loads the data of a stored data asset by passing the asset ID. See [load_data()](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-r.html?context=cdpaas&locale=en#fetch-data) for a description of the other parameters you can pass.
* save_data(asset_id, data, overwrite = NULL, mime_type = NULL, file_name = NULL)
This function saves data to a stored data asset by passing the asset ID. This implies overwrite=TRUE. See [save_data()](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-r.html?context=cdpaas&locale=en#save-data) for a description of the other parameters you can pass.
* download_file(asset_id, file_name = NULL, attachment_type_or_item = NULL)
This function downloads the data of a stored data asset by passing the asset ID. See [download_file()](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-r.html?context=cdpaas&locale=en#fetch-data) for a description of the other parameters you can pass.
Access project storage directly
You can fetch data from project storage and store data in project storage without synchronizing the project assets using the entry point wslib$storage.
The entry point wslib$storage provides the following functions:
* fetch_data(filename)
This function returns the data in a file as a bytes buffer. The file does not need to be registered as a data asset.
The function takes the following required parameter:
* filename: The name of the file in the project storage.
* store_data(filename, data, overwrite = FALSE)
This function saves data in memory to storage, but does not create a new data asset. The function returns a list which contains the file name, file path and additional information. Use wslib$show() to print the information.
The function takes the following parameters:
* filename: (Required) The name of the file in the project storage.
* data: (Required) The data to save as a raw object.
* overwrite: (Optional) Overwrites the data of a file in storage if it already exists. By default, this is set to FALSE.
* download_file(storage_filename, local_filename = NULL)
This function downloads the data in a file in storage and stores it in the specified local file. The local file is overwritten if it already exists.
The function takes the following parameters:
* storage_filename: (Required) The name of the file in storage to download.
* local_filename: (Optional) The name of the file in the local file system of your runtime to download the file to. Omit this parameter to use the storage file name.
* register_asset(storage_path, asset_name = NULL, mime_type = NULL)
This function registers the file in storage as a data asset in your project. This operation fails if a data asset with the same name already exists. You can use this function if you have very large files that you cannot upload via save_data(). You can upload large files directly to the IBM Cloud Object Storage bucket of your project, for example via the UI, and then register them as data assets using register_asset().
The function takes the following parameters:
* storage_path: (Required) The path of the file in storage.
* asset_name: (Optional) The name of the created asset. It defaults to the file name.
* mime_type: (Optional) The MIME type for the created asset. By default the MIME type is determined from the asset name suffix. Use this parameter to specify a MIME type if your file name does not have a file extension or if you want to set a different MIME type.
Note: You can register a file several times as a different data asset. Deleting one of those assets in the project also deletes the file in storage, which means that other asset references to the file might be broken.
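The following sketch shows store_data, fetch_data and register_asset together. The file names are illustrative:
# Save raw data directly to project storage without creating a data asset
info <- wslib$storage$store_data("results.csv", charToRaw("a,b\n1,2\n"), overwrite = TRUE)
wslib$show(info)
# Read the raw data back
buffer <- wslib$storage$fetch_data("results.csv")
print(rawToChar(buffer))
# Register the file in storage as a data asset of the project
wslib$storage$register_asset("results.csv", asset_name = "results.csv")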
Spark support
The entry point wslib$spark provides functions to access files in storage with Spark.
The entry point wslib$spark provides the following functions:
* provide_spark_context(sc)
Use this function to enable Spark support.
The function takes the following required parameter:
* sc: The SparkContext. It is provided in the notebook runtime.
The following example shows you how to set up Spark support:
library(ibmWatsonStudioLib)
wslib <- access_project_or_space(list("token"="<ProjectToken>"))
wslib$spark$provide_spark_context(sc)
* get_data_url(asset_name)
This function returns a URL to access a file in storage from Spark via Hadoop.
The function takes the following required parameter:
* asset_name: The name of the asset.
* storage.get_data_url(file_name)
This function returns a URL to access a file in storage from Spark via Hadoop. The function expects the file name and not the asset name.
The function takes the following required parameter:
* file_name: The name of a file in the project storage.
Browse project assets
The entry point wslib$assets provides generic, read-only access to assets of any type. For selected asset types, there are dedicated functions that provide additional data.
The following naming conventions apply:
* Functions named list_<something> return a list of named lists. Each contained list represents one asset and includes a small set of properties (metadata) that identifies the asset.
* Functions named get_<something> return a single named list with the properties for the asset.
To pretty-print a list or list of named lists, use wslib$show().
The functions expect either the name of an asset, or an item from a list as the parameter. By default, the functions return only a subset of the available asset properties. By setting the parameter raw_info=TRUE, you can get the full set of asset properties.
The entry point wslib$assets provides the following functions:
* list_assets(asset_type, name = NULL, query = NULL, selector = NULL, raw_info = FALSE)
This function lists all assets for the given type with respect to the given constraints.
The function takes the following parameters:
* asset_type: (Required) The type of the assets to list, for example data_asset. See list_asset_types() for a list of the available asset types. Use asset type asset for the list of all available assets in the project.
* name: (Optional) The name of the asset to list. Use this parameter if more than one asset with the same name exists. You can specify either name or query, but not both.
* query: (Optional) A query string that is passed to the Watson Data API to search for assets. You can specify either name or query, but not both.
* selector: (Optional) A custom filter function on the candidate asset list items. If the selector function returns TRUE, the asset is included in the returned asset list.
* raw_info: (Optional) Returns all of the available metadata. By default, the parameter is set to FALSE and only a subset of the properties is returned.
Examples of using the list_assets function:
# Import the lib
library("ibmWatsonStudioLib")
wslib <- access_project_or_space(list("token"="<ProjectToken>"))
# List all assets in the project
all_assets <- wslib$assets$list_assets("asset")
wslib$show(all_assets)
# List all data assets with name 'MyFile.csv'
assets_by_name <- wslib$assets$list_assets("data_asset", name = "MyFile.csv")
# List all data assets whose name starts with "MyF"
assets_by_query <- wslib$assets$list_assets("data_asset", query = "asset.name:(MyF*)")
# List all data assets which are larger than 1MB
sizeFilter <- function(asset) asset$metadata$size > 1000000
large_assets <- wslib$assets$list_assets("data_asset", selector = sizeFilter, raw_info = TRUE)
wslib$show(large_assets)
# List all notebooks
notebooks <- wslib$assets$list_assets("notebook")
* list_asset_types(raw_info = FALSE)
This function lists all available asset types.
The function can take the following parameter:
* raw_info: (Optional) Returns the full set of metadata. By default, the parameter is FALSE and only a subset of the properties is returned.
* list_datasource_types(raw_info = FALSE)
This function lists all available data source types.
The function can take the following parameter:
* raw_info: (Optional) Returns the full set of metadata. By default, the parameter is FALSE and only a subset of the properties is returned.
* get_asset(name_or_item, asset_type = NULL, raw_info = FALSE)
The function returns the metadata of an asset.
The function takes the following parameters:
* name_or_item: (Required) The name of the asset or an item like those returned by list_assets()
* asset_type: (Optional) The type of the asset. If the parameter name_or_item contains a string for the name of the asset, setting asset_type is required.
* raw_info: (Optional) Returns the full set of metadata. By default, the parameter is FALSE and only a subset of the properties is returned.
Example of using the list_assets and get_asset functions:
notebooks <- wslib$assets$list_assets("notebook")
wslib$show(notebooks)
notebook <- wslib$assets$get_asset(notebooks[[1]])
wslib$show(notebook)
* get_connection(name_or_item, with_datasourcetype = FALSE, raw_info = FALSE)
This function returns the metadata of a connection.
The function takes the following parameters:
* name_or_item: (Required) The name of the connection or an item like those returned by list_connections()
* with_datasourcetype: (Optional) Returns additional information about the data source type of the connection.
* raw_info: (Optional) Returns the full set of metadata. By default, the parameter is FALSE and only a subset of the properties is returned.
* get_connected_data(name_or_item, with_datasourcetype = FALSE, raw_info = FALSE)
This function returns the metadata of a connected data asset.
The function takes the following parameters:
* name_or_item: (Required) The name of the connected data asset or an item like those returned by list_connected_data()
* with_datasourcetype: (Optional) Returns additional information about the data source type of the associated connected data asset.
* raw_info: (Optional) Returns the full set of metadata. By default, the parameter is FALSE and only a subset of the properties is returned.
* get_stored_data(name_or_item, raw_info = FALSE)
This function returns the metadata of a stored data asset.
The function takes the following parameters:
* name_or_item: (Required) The name of the stored data asset or an item like those returned by list_stored_data()
* raw_info: (Optional) Returns the full set of metadata. By default, the parameter is FALSE and only a subset of the properties is returned.
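Example of using the get_stored_data function. This is a minimal sketch; the asset name "MyFile.csv" is a hypothetical placeholder:
stored_asset <- wslib$assets$get_stored_data("MyFile.csv")
wslib$show(stored_asset)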
* list_attachments(name_or_item_or_asset, asset_type = NULL, raw_info = FALSE)
This function returns a list of the attachments of an asset.
The function takes the following parameters:
* name_or_item_or_asset: (Required) The name of the asset or an item like those returned by list_stored_data() or get_asset().
* asset_type: (Optional) The type of the asset. It defaults to type data_asset.
* raw_info: (Optional) Returns the full set of metadata. By default, the parameter is FALSE and only a subset of the properties is returned.
Example of using the list_attachments function to read an attachment of a stored data asset:
assets <- wslib$list_stored_data()
wslib$show(assets)
asset <- assets[[1]]
attachments <- wslib$assets$list_attachments(asset)
wslib$show(attachments)
buffer <- wslib$load_data(asset, attachments[[1]])
Parent topic:[Using ibm-watson-studio-lib](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/using-ibm-ws-lib.html)
| # ibm\-watson\-studio\-lib for R #
The `ibm-watson-studio-lib` library for R provides access to assets\. It can be used in notebooks that are created in the notebook editor or in RStudio in a project\. `ibm-watson-studio-lib` provides support for working with data assets and connections, as well as browsing functionality for all other asset types\.
There are two kinds of data assets:
<!-- <ul> -->
* *Stored data assets* refer to files in the storage associated with the current project\. The library can load and save these files\. For data larger than one megabyte, this is not recommended\. The library requires that the data is kept in memory in its entirety, which might be inefficient when processing huge data sets\.
* *Connected data assets* represent data that must be accessed through a connection\. Using the library, you can retrieve the properties (metadata) of the connected data asset and its connection\. The functions do not return the data of a connected data asset\. You can either use the code that is generated for you when you click **Read data** on the Code snippets panel to access the data or you must write your own code\.
<!-- </ul> -->
Note: The `ibm-watson-studio-lib` functions do not encode or decode data when saving data to or getting data from a file\. Additionally, the `ibm-watson-studio-lib` functions can't be used to access connected folder assets (files on a path to the project storage)\.
## Setting up the `ibm-watson-studio-lib` library ##
The `ibm-watson-studio-lib` library for R is pre\-installed and can be imported directly in a notebook in the notebook editor\. To use the `ibm-watson-studio-lib` library in your notebook, you need the ID of the project and the project token\.
To insert the project token to your notebook:
<!-- <ol> -->
1. Click the **More** icon on your notebook toolbar and then click **Insert project token**\.
If a project token exists, a cell is added to your notebook with the following information:
library(ibmWatsonStudioLib)
wslib <- access_project_or_space(list("token"="<ProjectToken>"))
`<ProjectToken>` is the value of the project token.
If you are told in a message that no project token exists, click the link in the message to be redirected to the project's **Access Control** page where you can create a project token. You must be eligible to create a project token. For details, see [Manually adding the project token](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/token.html).
To create a project token:
<!-- <ol> -->
1. From the **Manage** tab, select the **Access Control** page, and click **New access token** under **Access tokens**.
2. Enter a name, select **Editor** role for the project, and create a token.
3. Go back to your notebook, click the **More** icon on the notebook toolbar and then click **Insert project token**.
<!-- </ol> -->
<!-- </ol> -->
## The `ibm-watson-studio-lib` functions ##
The `ibm-watson-studio-lib` library exposes a set of functions that are grouped in the following way:
<!-- <ul> -->
* [Get project information](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-r.html?context=cdpaas&locale=en#get-infos)
* [Get authentication token](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-r.html?context=cdpaas&locale=en#get-auth-token)
* [Fetch data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-r.html?context=cdpaas&locale=en#fetch-data)
* [Save data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-r.html?context=cdpaas&locale=en#save-data)
* [Get connection information](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-r.html?context=cdpaas&locale=en#get-conn-info)
* [Get connected data information](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-r.html?context=cdpaas&locale=en#get-conn-data-info)
* [Access assets by ID instead of name](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-r.html?context=cdpaas&locale=en#access-by-id)
* [Access project storage directly](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-r.html?context=cdpaas&locale=en#direct-proj-storage)
* [Spark support](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-r.html?context=cdpaas&locale=en#spark-support)
* [Browse project assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-r.html?context=cdpaas&locale=en#browse-assets)
<!-- </ul> -->
### Get project information ###
While developing code, you might not know the exact names of data assets or connections\. The following functions provide lists of assets, from which you can pick the relevant ones\. In all examples, you can use `wslib$show(assets)` to pretty\-print the list\. The index of each item is printed in front of the item\.
<!-- <ul> -->
* `list_connections()`
This function returns a list of the connections. The list of returned connections is not sorted by any criterion and can change when you call the function again. You can pass a list item instead of a name to the `get_connection` function.
# Import the lib
library("ibmWatsonStudioLib")
wslib <- access_project_or_space(list("token"="<ProjectToken>"))
assets <- wslib$list_connections()
wslib$show(assets)
connprops <- wslib$get_connection(assets[[1]])
* `list_connected_data()`
This function returns the connected data assets. The list of returned connected data assets is not sorted by any criterion and can change when you call the function again. You can pass a list item instead of a name to the `get_connected_data` function.
* `list_stored_data()`
This function returns a list of the stored data assets (data files). The list of returned data assets is not sorted by any criterion and can change when you call the function again. You can pass a list item instead of a name to the `load_data` and `save_data` functions.
Note: A heuristic is applied to distinguish between connected data assets and stored data assets. However, there may be cases where a data asset of the wrong kind appears in the returned lists.
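For example, a minimal sketch that lists the stored data assets and pretty-prints the result:

          # List the stored data assets
          assets <- wslib$list_stored_data()
          wslib$show(assets)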
* `wslib$here` By using this entry point, you can retrieve metadata about the project that the lib is working with\. The entry point `wslib$here` provides the following functions:
<!-- <ul> -->
* `get_name()`
This function returns the name of the project.
* `get_description()`
This function returns the description of the project.
* `get_ID()`
This function returns the ID of the project.
* `get_storage()`
This function returns storage information for the project.
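For example, a minimal sketch that prints basic project metadata, assuming the lib was initialized with a valid project token:

          # Print the name, ID, and storage information of the current project
          print(wslib$here$get_name())
          print(wslib$here$get_ID())
          wslib$show(wslib$here$get_storage())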
<!-- </ul> -->
<!-- </ul> -->
### Get authentication token ###
Some tasks require an authentication token\. For example, if you want to run your own requests against the [Watson Data API](https://cloud.ibm.com/apidocs/watson-data-api-cpd), you need an authentication token\.
You can use the following function to get the bearer token:
<!-- <ul> -->
* `get_current_token()`
<!-- </ul> -->
For example:
library("ibmWatsonStudioLib")
wslib <- access_project_or_space(list("token"="<ProjectToken>"))
token <- wslib$auth$get_current_token()
This function returns the bearer token that is currently used by the `ibm-watson-studio-lib` library\.
### Fetch data ###
You can use the following functions to fetch data from a stored data asset (a file) in your project\.
<!-- <ul> -->
* `load_data(asset_name_or_item, attachment_type_or_item = NULL)`
This function loads the data of a stored data asset into a bytes buffer. The function is not recommended for very large files.
The function takes the following parameters:
<!-- <ul> -->
* `asset_name_or_item`: (Required) Either a string with the name of a stored data asset or an item like those returned by `list_stored_data()`.
* `attachment_type_or_item`: (Optional) Attachment type to load. A data asset can have more than one attachment with data. Without this parameter, the default attachment type, namely `data_asset`, is loaded. Specify this parameter if the attachment type is not `data_asset`. For example, if a plain text data asset has an attached profile from Natural Language Analysis, this can be loaded as attachment type `data_profile_nlu`.
Here is an example that shows you how to load the data of a data asset:
# Import the lib
library("ibmWatsonStudioLib")
wslib <- access_project_or_space(list("token"="<ProjectToken>"))
# Fetch the data from a file
my_file <- wslib$load_data("MyFile.csv")
# Read the CSV data file into a data frame
df <- read.csv(text = rawToChar(my_file))
head(df)
<!-- </ul> -->
* `download_file(asset_name_or_item, file_name = NULL, attachment_type_or_item = NULL)`
This function downloads the data of a stored data asset and stores it in the specified file in the file system of your runtime. The file is overwritten if it already exists.
The function takes the following parameters:
<!-- <ul> -->
* `asset_name_or_item`: (Required) Either a string with the name of a stored data asset or an item like those returned by `list_stored_data()`.
* `file_name`: (Optional) The name of the file that the downloaded data is stored to. It defaults to the asset's attachment name.
* `attachment_type_or_item`: (Optional) The attachment type to download. A data asset can have more than one attachment with data. Without this parameter, the default attachment type, namely `data_asset`, is downloaded. Specify this parameter if the attachment type is not `data_asset`. For example, if a plain text data asset has an attached profile from Natural Language Analysis, this can be downloaded as attachment type `data_profile_nlu`.
Here is an example that shows you how you can use `download_file` to make your custom R script available in your notebook:
# Import the lib
library("ibmWatsonStudioLib")
wslib <- access_project_or_space(list("token"="<ProjectToken>"))
# Let's assume you have a R script "helpers.R" with helper functions on your local machine.
# Upload the script to your project using the Data Panel on the right.
# Download the script to the file system of your runtime
wslib$download_file("helpers.R")
# Source the script to use the contained functions, e.g. 'my_func', in your notebook.
source("helpers.R")
my_func()
<!-- </ul> -->
<!-- </ul> -->
### Save data ###
The functions to store data in your project storage do multiple things:
<!-- <ul> -->
* Store the data in project storage
* Add the data as a data asset (by creating an asset or overwriting an existing asset) to your project so you can see the data in the data assets list in your project\.
* Associate the asset with the file in the storage\.
<!-- </ul> -->
You can use the following functions to save data:
<!-- <ul> -->
* `save_data(asset_name_or_item, data, overwrite = NULL, mime_type = NULL, file_name = NULL)`
This function saves data in memory to the project storage.
The function takes the following parameters:
<!-- <ul> -->
* `asset_name_or_item`: (Required) The name of the created asset or a list item like those returned by `list_stored_data()`. Pass the item if you want to overwrite an existing file.
* `data`: (Required) The data to upload. The expected data type is `raw`.
* `overwrite`: (Optional) Overwrites the data of a stored data asset if it already exists. Defaults to FALSE. If an asset item is passed instead of a name, the behavior is to overwrite the asset.
* `mime_type`: (Optional) The MIME type for the created asset. By default the MIME type is determined from the asset name suffix. If you use asset names without a suffix, specify the MIME type here. For example `mime_type='application/text'` for plain text data. This parameter is ignored when overwriting an asset.
* `file_name`: (Optional) The file name to be used in the project storage. The data is saved in the storage associated with the project. When creating a new asset, the file name is derived from the asset name, but might be different. If you want to access the file directly, you can specify a file name. This parameter is ignored when overwriting an asset.
Here is an example that shows you how to save data to a file:
# Import the lib
library("ibmWatsonStudioLib")
wslib <- access_project_or_space(list("token"="<ProjectToken>"))
# let's assume you have a data frame df which contains the data
# you want to save as a csv file
csv <- capture.output(write.csv(df, row.names=FALSE), type="output")
csv_raw <- charToRaw(paste0(csv, collapse='\n'))
wslib$save_data("my_asset_name.csv", csv_raw)
# the function returns a list which contains the asset_name, asset_id, file_name and additional information upon successful saving of the data
<!-- </ul> -->
* `upload_file(file_path, asset_name = NULL, file_name = NULL, overwrite = FALSE, mime_type = NULL)`
This function saves data in the file system in the runtime to a file associated with your project.
The function takes the following parameters:
<!-- <ul> -->
* `file_path`: (Required) The path to the file in the file system.
* `asset_name`: (Optional) The name of the data asset that is created. It defaults to the name of the file to be uploaded.
* `file_name`: (Optional) The name of the file that is created in the storage associated with the project. It defaults to the name of the file to be uploaded.
* `overwrite`: (Optional) Overwrites an existing file in storage. Defaults to FALSE.
* `mime_type`: (Optional) The MIME type for the created asset. By default the MIME type is determined from the asset name suffix. If you use asset names without a suffix, specify the MIME type here. For example `mime_type='application/text'` for plain text data. This parameter is ignored when overwriting an asset.
Here is an example that shows you how you can upload a file to the project:
# Import the lib
library("ibmWatsonStudioLib")
wslib <- access_project_or_space(list("token"="<ProjectToken>"))
# Let's assume you have downloaded a file and want to save it
# in your project.
download.file("https://some/url/data_file.csv", "data_file.csv")
wslib$upload_file("data_file.csv")
# The function returns a list which contains the asset_name, asset_id, file_name and additional information upon successful saving of the data.
<!-- </ul> -->
<!-- </ul> -->
### Get connection information ###
You can use the following function to access the connection metadata of a given connection\.
<!-- <ul> -->
* `get_connection(name_or_item)`
This function returns the properties (metadata) of a connection, which you can use to fetch data from the connection data source. Use `wslib$show(connprops)` to view the properties. The special key `"."` in the returned list item provides information about the connection asset.
The function takes the following required parameter:
<!-- <ul> -->
* `name_or_item`: Either a string with the name of a connection or an item like those returned by `list_connections()`.
Note that when you work with notebooks, you can click **Read data** on the Code snippets panel to generate code to load data from a connection into a pandas DataFrame for example.
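Here is a minimal sketch; the connection name "MyConnection" is a hypothetical placeholder:

          # Import the lib
          library("ibmWatsonStudioLib")
          wslib <- access_project_or_space(list("token"="<ProjectToken>"))
          # Fetch the connection properties and pretty-print them
          connprops <- wslib$get_connection("MyConnection")
          wslib$show(connprops)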
<!-- </ul> -->
<!-- </ul> -->
### Get connected data information ###
You can use the following function to access the metadata of a connected data asset\.
<!-- <ul> -->
* `get_connected_data(name_or_item)`
This function returns the properties of a connected data asset, including the properties of the underlying connection. Use `wslib$show()` to view the properties. The special key `"."` in the returned list provides information about the data and the connection assets.
The function takes the following required parameter:
<!-- <ul> -->
* `name_or_item`: Either a string with the name of a connected data asset or an item like those returned by `list_connected_data()`.
Note that when you work with notebooks, you can click **Read data** on the Code snippets panel to generate code to load data from a connected data asset into a pandas DataFrame for example.
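Here is a minimal sketch; the asset name "MyConnectedAsset" is a hypothetical placeholder:

          # Fetch the properties of a connected data asset and pretty-print them
          props <- wslib$get_connected_data("MyConnectedAsset")
          wslib$show(props)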
<!-- </ul> -->
<!-- </ul> -->
### Access assets by ID instead of name ###
You should preferably always access data assets and connections by a unique name\. Asset names are not necessarily always unique and the `ibm-watson-studio-lib` functions will raise an exception when a name is ambiguous\. You can rename data assets in the UI to resolve the conflict\.
Accessing assets by a unique ID is possible but is discouraged as IDs are valid only in the current project and will break code when transferred to a different project\. This can happen, for example, when projects are exported and re\-imported\. You can get the ID of a connection, connected or stored data asset by using the corresponding list function, for example `list_connections()`\.
The entry point `wslib$by_id` provides the following functions:
<!-- <ul> -->
* `get_connection(asset_id)`
This function accesses a connection by the connection asset ID.
* `get_connected_data(asset_id)`
This function accesses a connected data asset by the connected data asset ID.
* `load_data(asset_id, attachment_type_or_item = NULL)`
This function loads the data of a stored data asset by passing the asset ID. See [`load_data()`](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-r.html?context=cdpaas&locale=en#fetch-data) for a description of the other parameters you can pass.
* `save_data(asset_id, data, overwrite = NULL, mime_type = NULL, file_name = NULL)`
This function saves data to a stored data asset by passing the asset ID. This implies `overwrite=TRUE`. See [`save_data()`](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-r.html?context=cdpaas&locale=en#save-data) for a description of the other parameters you can pass.
* `download_file(asset_id, file_name = NULL, attachment_type_or_item = NULL)`
This function downloads the data of a stored data asset by passing the asset ID. See [`download_file()`](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-r.html?context=cdpaas&locale=en#fetch-data) for a description of the other parameters you can pass.
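The following sketch shows the general pattern. It assumes that the items returned by `list_stored_data()` expose the asset ID under `asset_id`; inspect the items with `wslib$show()` to confirm the property name:

          # Look up the ID of a stored data asset, then load its data by ID
          assets <- wslib$list_stored_data()
          wslib$show(assets)
          asset_id <- assets[[1]]$asset_id   # hypothetical property name
          buffer <- wslib$by_id$load_data(asset_id)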
<!-- </ul> -->
### Access project storage directly ###
You can fetch data from project storage and store data in project storage, without synchronizing the project assets, by using the entry point `wslib$storage`\.
The entry point `wslib$storage` provides the following functions:
<!-- <ul> -->
* `fetch_data(filename)`
This function returns the data in a file as a bytes buffer. The file does not need to be registered as a data asset.
The function takes the following required parameter:
<!-- <ul> -->
* `filename`: The name of the file in the project.
<!-- </ul> -->
* `store_data(filename, data, overwrite = FALSE)`
This function saves data in memory to storage, but does not create a new data asset. The function returns a list which contains the file name, file path and additional information. Use `wslib$show()` to print the information.
The function takes the following parameters:
<!-- <ul> -->
* `filename`: (Required) The name of the file in the project storage.
* `data`: (Required) The data to save as a raw object.
* `overwrite`: (Optional) Overwrites the data of a file in storage if it already exists. By default, this is set to false.
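Here is a minimal sketch that stores raw data in the project storage and fetches it back; the file name "hello.txt" is a hypothetical placeholder:

          # Store raw bytes in project storage without creating a data asset
          data_raw <- charToRaw("hello, storage")
          wslib$storage$store_data("hello.txt", data_raw, overwrite = TRUE)
          # Fetch the bytes back and print them
          buffer <- wslib$storage$fetch_data("hello.txt")
          cat(rawToChar(buffer))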
<!-- </ul> -->
* `download_file(storage_filename, local_filename = NULL)`
This function downloads the data in a file in storage and stores it in the specified local file. The local file is overwritten if it already exists.
The function takes the following parameters:
<!-- <ul> -->
* `storage_filename`: (Required) The name of the file in storage to download.
* `local_filename`: (Optional) The name of the file in the local file system of your runtime to download the file to. Omit this parameter to use the storage file name.
<!-- </ul> -->
* `register_asset(storage_path, asset_name = NULL, mime_type = NULL)`
This function registers the file in storage as a data asset in your project. This operation fails if a data asset with the same name already exists. You can use this function if you have very large files that you cannot upload via `save_data()`. You can upload large files directly to the IBM Cloud Object Storage bucket of your project, for example via the UI, and then register them as data assets using `register_asset()`.
The function takes the following parameters:
<!-- <ul> -->
* `storage_path`: (Required) The path of the file in storage.
* `asset_name`: (Optional) The name of the created asset. It defaults to the file name.
* `mime_type`: (Optional) The MIME type for the created asset. By default the MIME type is determined from the asset name suffix. Use this parameter to specify a MIME type if your file name does not have a file extension or if you want to set a different MIME type.
Note: You can register a file several times as a different data asset. Deleting one of those assets in the project also deletes the file in storage, which means that other asset references to the file might be broken.
<!-- </ul> -->
<!-- </ul> -->
### Spark support ###
The entry point `wslib$spark` provides functions to access files in storage with Spark\.
The entry point `wslib$spark` provides the following functions:
<!-- <ul> -->
* `provide_spark_context(sc)`
Use this function to enable Spark support.
The function takes the following required parameter:
<!-- <ul> -->
* `sc`: The SparkContext. It is provided in the notebook runtime.
The following example shows you how to set up Spark support:
library(ibmWatsonStudioLib)
wslib <- access_project_or_space(list("token"="<ProjectToken>"))
wslib$spark$provide_spark_context(sc)
<!-- </ul> -->
* `get_data_url(asset_name)`
This function returns a URL to access a file in storage from Spark via Hadoop.
The function takes the following required parameter:
<!-- <ul> -->
* `asset_name`: The name of the asset.
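For example, a minimal sketch that reads a CSV asset with SparkR; it assumes that Spark support was enabled with `provide_spark_context(sc)` and that an asset named "MyFile.csv" exists (the name is a hypothetical placeholder):

          # Resolve the Hadoop URL of the asset and read it with SparkR
          url <- wslib$spark$get_data_url("MyFile.csv")
          df <- read.df(url, source = "csv", header = "true")
          head(df)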
<!-- </ul> -->
* `storage.get_data_url(file_name)`
This function returns a URL to access a file in storage from Spark via Hadoop. The function expects the file name and not the asset name.
The function takes the following required parameter:
<!-- <ul> -->
* `file_name`: The name of a file in the project storage.
<!-- </ul> -->
<!-- </ul> -->
### Browse project assets ###
The entry point `wslib$assets` provides generic, read\-only access to assets of any type\. For selected asset types, there are dedicated functions that provide additional data\.
The following naming conventions apply:
<!-- <ul> -->
* Functions named `list_<something>` return a list of named lists\. Each contained list represents one asset and includes a small set of properties (metadata) that identifies the asset\.
* Functions named `get_<something>` return a single named list with the properties for the asset\.
<!-- </ul> -->
To pretty\-print a list or list of named lists, use `wslib$show()`\.
The functions expect either the name of an asset, or an item from a list as the parameter\. By default, the functions return only a subset of the available asset properties\. By setting the parameter `raw_info=TRUE`, you can get the full set of asset properties\.
The entry point `wslib$assets` provides the following functions:
<!-- <ul> -->
* `list_assets(asset_type, name = NULL, query = NULL, selector = NULL, raw_info = FALSE)`
This function lists all assets for the given type with respect to the given constraints.
The function takes the following parameters:
<!-- <ul> -->
* `asset_type`: (Required) The type of the assets to list, for example `data_asset`. See `list_asset_types()` for a list of the available asset types. Use asset type `asset` for the list of all available assets in the project.
* `name`: (Optional) The name of the asset to list. Use this parameter if more than one asset with the same name exists. You can specify either `name` or `query`, but not both.
* `query`: (Optional) A query string that is passed to the Watson Data API to search for assets. You can specify either `name` or `query`, but not both.
* `selector`: (Optional) A custom filter function on the candidate asset list items. If the selector function returns `TRUE`, the asset is included in the returned asset list.
* `raw_info`: (Optional) Returns all of the available metadata. By default, the parameter is set to `FALSE` and only a subset of the properties is returned.
Examples of using the `list_assets` function:
# Import the lib
library("ibmWatsonStudioLib")
wslib <- access_project_or_space(list("token"="<ProjectToken>"))
# List all assets in the project
all_assets <- wslib$assets$list_assets("asset")
wslib$show(all_assets)
# List all data assets with name 'MyFile.csv'
assets_by_name <- wslib$assets$list_assets("data_asset", name = "MyFile.csv")
# List all data assets whose name starts with "MyF"
assets_by_query <- wslib$assets$list_assets("data_asset", query = "asset.name:(MyF*)")
# List all data assets which are larger than 1MB
sizeFilter <- function(asset) asset$metadata$size > 1000000
large_assets <- wslib$assets$list_assets("data_asset", selector = sizeFilter, raw_info = TRUE)
wslib$show(large_assets)
# List all notebooks
notebooks <- wslib$assets$list_assets("notebook")
<!-- </ul> -->
* `list_asset_types(raw_info = FALSE)`
This function lists all available asset types.
The function can take the following parameter:
<!-- <ul> -->
* `raw_info`: (Optional) Returns the full set of metadata. By default, the parameter is `FALSE` and only a subset of the properties is returned.
<!-- </ul> -->
* `list_datasource_types(raw_info = FALSE)`
This function lists all available data source types.
The function can take the following parameter:
<!-- <ul> -->
* `raw_info`: (Optional) Returns the full set of metadata. By default, the parameter is `FALSE` and only a subset of the properties is returned.
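For example:

          datasource_types <- wslib$assets$list_datasource_types()
          wslib$show(datasource_types)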
<!-- </ul> -->
* `get_asset(name_or_item, asset_type = NULL, raw_info = FALSE)`
The function returns the metadata of an asset.
The function takes the following parameters:
<!-- <ul> -->
* `name_or_item`: (Required) The name of the asset or an item like those returned by `list_assets()`
* `asset_type`: (Optional) The type of the asset. If the parameter `name_or_item` contains a string for the name of the asset, setting `asset_type` is required.
* `raw_info`: (Optional) Returns the full set of metadata. By default, the parameter is `FALSE` and only a subset of the properties is returned.
Example of using the `list_assets` and `get_asset` functions:
notebooks <- wslib$assets$list_assets("notebook")
wslib$show(notebooks)
notebook <- wslib$assets$get_asset(notebooks[[1]])
wslib$show(notebook)
<!-- </ul> -->
* `get_connection(name_or_item, with_datasourcetype = FALSE, raw_info = FALSE)`
This function returns the metadata of a connection.
The function takes the following parameters:
<!-- <ul> -->
* `name_or_item`: (Required) The name of the connection or an item like those returned by `list_connections()`
* `with_datasourcetype`: (Optional) Returns additional information about the data source type of the connection.
* `raw_info`: (Optional) Returns the full set of metadata. By default, the parameter is `FALSE` and only a subset of the properties is returned.
<!-- </ul> -->
* `get_connected_data(name_or_item, with_datasourcetype = FALSE, raw_info = FALSE)`
This function returns the metadata of a connected data asset.
The function takes the following parameters:
<!-- <ul> -->
* `name_or_item`: (Required) The name of the connected data asset or an item like those returned by `list_connected_data()`
* `with_datasourcetype`: (Optional) Returns additional information about the data source type of the associated connected data asset.
* `raw_info`: (Optional) Returns the full set of metadata. By default, the parameter is `FALSE` and only a subset of the properties is returned.
<!-- </ul> -->
* `get_stored_data(name_or_item, raw_info = FALSE)`
This function returns the metadata of a stored data asset.
The function takes the following parameters:
<!-- <ul> -->
* `name_or_item`: (Required) The name of the stored data asset or an item like those returned by `list_stored_data()`
* `raw_info`: (Optional) Returns the full set of metadata. By default, the parameter is `FALSE` and only a subset of the properties is returned.
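Example of using the `get_stored_data` function; the asset name "MyFile.csv" is a hypothetical placeholder:

          stored_asset <- wslib$assets$get_stored_data("MyFile.csv")
          wslib$show(stored_asset)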
<!-- </ul> -->
* `list_attachments(name_or_item_or_asset, asset_type = NULL, raw_info = FALSE)`
This function returns a list of the attachments of an asset.
The function takes the following parameters:
<!-- <ul> -->
* `name_or_item_or_asset`: (Required) The name of the asset or an item like those returned by `list_stored_data()` or `get_asset()`.
* `asset_type`: (Optional) The type of the asset. It defaults to type `data_asset`.
* `raw_info`: (Optional) Returns the full set of metadata. By default, the parameter is `FALSE` and only a subset of the properties is returned.
Example of using the `list_attachments` function to read an attachment of a stored data asset:
assets <- wslib$list_stored_data()
wslib$show(assets)
asset <- assets[[1]]
attachments <- wslib$assets$list_attachments(asset)
wslib$show(attachments)
buffer <- wslib$load_data(asset, attachments[[1]])
<!-- </ul> -->
<!-- </ul> -->
**Parent topic:**[Using ibm\-watson\-studio\-lib](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/using-ibm-ws-lib.html)
<!-- </article "role="article" "> -->
|
23060B35041C9ABD00099B1E0B1D83DAFF453C6D | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-collab-roles.html?context=cdpaas&locale=en | Collaboration roles for governance | Collaboration roles for governance
Review the collaboration roles for managing access to governance tools such as inventories, AI use cases, and evaluations.
User roles and permissions for governance
The permissions that allow you to work with governance artifacts depend on your watsonx roles:
* IAM Platform access roles determine your permissions for the IBM Cloud account. At least the Viewer role is required to work with services.
* IAM Service access roles determine your permissions within services.
* Workspace collaborator roles determine what actions you have permission to perform within workspaces in IBM watsonx.
For details, see [Levels of user access roles in IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/roles.html).
Roles for governance
If you have the IAM Platform Admin role, you can:
* Provision watsonx.governance
* Create inventory
* Create platform assets catalog
* Enable external model tracking
* Create attachment fact definitions
* Customize report templates
If you have these workspace roles for an inventory, you can:
Governance permissions for inventories
Enabled permission Viewer Editor Admin/Owner
Create and edit AI use cases ✓ ✓
View AI use cases ✓ ✓ ✓
Add collaborators to an inventory ✓
Delete inventory ✓
Evaluate model deployment ✓ ✓
Add collaborators to a use case ✓ ✓
Generate reports ✓ ✓ ✓
Add attachments to a use case ✓ ✓
Update asset type definitions <br>(For example: model_entry_user, modelfacts_user) ✓
If you have these workspace roles for an AI use case, you can:
Governance permissions for AI use cases
Enabled permission Editor/Collaborator Admin/Owner
Delete AI use cases ✓
Add collaborators to the use case ✓
Edit AI use case ✓ ✓
Edit use case ✓ ✓
Add values to custom facts ✓ ✓
Upload attachments to use case ✓ ✓
If you have these workspace roles for a project or space, you can:
Governance permissions for project and space roles
Enabled permission Viewer Editor/Collaborator Admin/Owner
Track/untrack prompt template ✓ ✓
Upload attachments to use case ✓ ✓
Add values to custom facts ✓ ✓
View AI factsheet ✓ ✓ ✓
Generate report ✓ ✓ ✓
Learn more
Parent topic:[Governing assets in AI use cases](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-use-cases.html)
| # Collaboration roles for governance #
Review the collaboration roles for managing access to governance tools such as inventories, AI use cases, and evaluations\.
## User roles and permissions for governance ##
The permissions that allow you to work with governance artifacts depend on your watsonx roles:
<!-- <ul> -->
* IAM Platform access roles determine your permissions for the IBM Cloud account\. At least the Viewer role is required to work with services\.
* IAM Service access roles determine your permissions within services\.
* Workspace collaborator roles determine what actions you have permission to perform within workspaces in IBM watsonx\.
<!-- </ul> -->
For details, see [Levels of user access roles in IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/roles.html)\.
## Roles for governance ##
If you have the IAM Platform Admin role, you can:
<!-- <ul> -->
* Provision watsonx\.governance
* Create inventory
* Create platform assets catalog
* Enable external model tracking
* Create attachment fact definitions
* Customize report templates
<!-- </ul> -->
If you have these workspace roles for an inventory, you can:
<!-- <table> -->
Governance permissions for inventories
| Enabled permission | Viewer | Editor | Admin/Owner |
| -------------------------------------------------------------------------------------- | ------ | ------ | ----------- |
| Create and edit AI use cases | | ✓ | ✓ |
| View AI use cases | ✓ | ✓ | ✓ |
| Add collaborators to an inventory | | | ✓ |
| Delete inventory | | | ✓ |
| Evaluate model deployment | | ✓ | ✓ |
| Add collaborators to a use case | | ✓ | ✓ |
| Generate reports | ✓ | ✓ | ✓ |
| Add attachments to a use case | | ✓ | ✓ |
| Update asset type definitions <br>(For example: model\_entry\_user, modelfacts\_user) | | | ✓ |
<!-- </table ""> -->
If you have these workspace roles for an AI use case, you can:
<!-- <table> -->
Governance permissions for AI use cases
| Enabled permission | Editor/Collaborator | Admin/Owner |
| --------------------------------- | ------------------- | ----------- |
| Delete AI use cases | | ✓ |
| Add collaborators to the use case | | ✓ |
| Edit AI use case | ✓ | ✓ |
| Edit use case | ✓ | ✓ |
| Add values to custom facts | ✓ | ✓ |
| Upload attachments to use case | ✓ | ✓ |
<!-- </table ""> -->
If you have these workspace roles for a project or space, you can:
<!-- <table> -->
Governance permissions for project and space roles
| Enabled permission | Viewer | Editor/Collaborator | Admin/Owner |
| ------------------------------ | ------ | ------------------- | ----------- |
| Track/untrack prompt template | | ✓ | ✓ |
| Upload attachments to use case | | ✓ | ✓ |
| Add values to custom facts | | ✓ | ✓ |
| View AI factsheet | ✓ | ✓ | ✓ |
| Generate report | ✓ | ✓ | ✓ |
<!-- </table ""> -->
## Learn more ##
**Parent topic:**[Governing assets in AI use cases](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-use-cases.html)
<!-- </article "role="article" "> -->
|
074C9BAEB0177E3CF57BAC36E5FCBD13063498A1 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-create-use-case.html?context=cdpaas&locale=en | Governing assets in AI use cases | Governing assets in AI use cases
Create an AI use case to track and govern AI assets from request through production. Factsheets capture details about the asset for each stage of the AI lifecycle to help you meet governance and compliance goals.
To learn about AI use cases, you can follow a tutorial in the Getting started with watsonx.governance sample project. Assets in the sample are prompt templates for a car insurance claim processing use case. The prompts use car insurance claims as input and then use large language models to help insurance agents process the claims. One prompt summarizes claims, another prompt extracts key information such as make and model, and the last prompt generates suggestions for the insurance agent.
In Projects, start a new project, then choose to create a project from a sample. The project gallery includes the getting started sample.

When your project is ready, open the Readme for a step-by-step tutorial.

Get started with AI use cases
Set up or work with AI use cases:
* [Create an inventory](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-inventory-manage.html) for storing AI use cases
* [Set up an AI use case](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-use-cases.html)
* [Track assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-tracking-overview.html) in an AI use case
* [View factsheets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-factsheet-viewing.html) for tracked assets
Parent topic:[Governing AI assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-overview.html)
| # Governing assets in AI use cases #
Create an AI use case to track and govern AI assets from request through production\. Factsheets capture details about the asset for each stage of the AI lifecycle to help you meet governance and compliance goals\.
To learn about AI use cases, you can follow a tutorial in the *Getting started with watsonx\.governance* sample project\. Assets in the sample are prompt templates for a car insurance claim processing use case\. The prompts use car insurance claims as input and then use large language models to help insurance agents process the claims\. One prompt summarizes claims, another prompt extracts key information such as make and model, and the last prompt generates suggestions for the insurance agent\.
In **Projects**, start a new project, then choose to create a project from a sample\. The project gallery includes the getting started sample\.

When your project is ready, open the Readme for a step\-by\-step tutorial\.

## Get started with AI use cases ##
Set up or work with AI use cases:
<!-- <ul> -->
* [Create an inventory](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-inventory-manage.html) for storing AI use cases
* [Set up an AI use case](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-use-cases.html)
* [Track assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-tracking-overview.html) in an AI use case
* [View factsheets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-factsheet-viewing.html) for tracked assets
<!-- </ul> -->
**Parent topic:**[Governing AI assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-overview.html)
<!-- </article "role="article" "> -->
|
0F13ADFC739D217925DDCEBB152284565BD43DE8 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-customize-user-facts.html?context=cdpaas&locale=en | Customizing details for a use case or factsheet | Customizing details for a use case or factsheet
You can programmatically customize the information that is collected in factsheets for AI use cases. Use customized factsheets as part of your AI Governance strategy.
Updating a model or model use case programmatically
You might want to update a model use case or model factsheet with additional information. For example, some companies have a standard set of details they want to accompany a model use case or model facts.
Currently, you must update the tenant-level asset types by modifying the user attributes with the [Watson Data REST API](https://cloud.ibm.com/apidocs/watson-data-api#introduction).
Updating a custom asset type
Follow these steps to update a custom asset type:
1. Provide the bss_account_id query parameter for the [getcatalogtype method](https://cloud.ibm.com/apidocs/watson-data-api#getcatalogtype).
2. Provide asset_type as model_entry_user if you are updating attributes for model_entry. Provide asset_type as modelfacts_user if you are updating attributes for model facts.
3. Retrieve the current asset type definition by using the [getcatalogtype method](https://cloud.ibm.com/apidocs/watson-data-api#getcatalogtype) where asset_type is either modelfacts_user or model_entry_user.
4. Update the current asset type definition with the custom attributes by adding them to the properties JSON object, following the schema that is defined in the API documentation. The following types of attributes are supported to view and edit from the user interface of the model use case or model:
* string
* date
* integer
5. After the JSON is updated with the new properties, apply the changes by using the [replaceassettype method](https://cloud.ibm.com/apidocs/watson-data-api#replaceassettype). Provide the asset_type, bss_account_id, and request payload.
When the update is complete, you can view the custom attributes in the AI use case details page and model details page.
Example 1: Retrieving and updating the model_entry_user asset type
Note:This example updates the use case user data. You can use the same format but substitute modelfacts_user to retrieve and update details for the model factsheet.
This curl command retrieves the asset type model_entry_user:
curl -X GET --header 'Accept: application/json' --header "Authorization: ZenApiKey ${MY_TOKEN}" 'https://api.dataplatform.cloud.ibm.com:443/v2/asset_types/model_entry_user?bss_account_id=<bss_account_id>'
This snippet is a sample response payload for model use case user details:
{
"description": "The model use case to capture user defined attributes.",
"fields": [],
"relationships": [],
"properties": {},
"decorates": [{
"asset_type_name": "model_entry"
}],
"global_search_searchable": [],
"localized_metadata_attributes": {
"name": {
"default": "Additional details",
"en": "Additional details"
}
},
"attribute_only": false,
"name": "model_entry_user",
"version": 1,
"scope": "ACCOUNT"
}
This curl command updates the model_entry_user asset type:
curl -X PUT --header 'Content-Type: application/json' --header 'Accept: application/json' --header "Authorization: ZenApiKey ${MY_TOKEN}" -d '@requestbody.json' 'https://api.dataplatform.cloud.ibm.com:443/v2/asset_types/model_entry_user?bss_account_id=<bss_account_id>'
The requestbody.json contents look like this:
{
"description": "The model use case to capture user defined attributes.",
"fields": [],
"relationships": [],
"properties": {
"user_attribute1": {
"type": "string",
"description": "User attribute1",
"placeholder": "User attribute1",
"is_array": false,
"required": true,
"hidden": false,
"readonly": false,
"default_value": "None",
"label": {
"default": "User attribute1"
}
},
"user_attribute2": {
"type": "integer",
"description": "User attribute2",
"placeholder": "User attribute2",
"is_array": false,
"required": true,
"hidden": false,
"readonly": false,
"label": {
"default": "User attribute2"
}
},
"user_attribute3": {
"type": "date",
"description": "User attribute3",
"placeholder": "User attribute3",
"is_array": false,
"required": true,
"hidden": false,
"readonly": false,
"default_value": "None",
"label": {
"default": "User attribute3"
}
}
},
"decorates": [{
"asset_type_name": "model_entry"
}],
"global_search_searchable": [],
"attribute_only": false,
"localized_metadata_attributes": {
"name": {
"default": "Additional details",
"en": "Additional details"
}
}
}
Updating user details by using the Python client
You can also update and replace an asset type with properties by using a Python script. For details, see [fact sheet elements description](https://s3.us.cloud-object-storage.appdomain.cloud/factsheets-client/index.html#factsheet-asset-elements).
After you update asset type definitions with custom attributes, you can provide values for those attributes from the model use case overview and model details pages. You can also update values to the custom attributes that use these Python API client methods:
* [Model Asset Utilities](https://s3.us.cloud-object-storage.appdomain.cloud/factsheets-client/index.html#ibm_aigov_facts_client.factsheet.asset_utils_model.ModelAssetUtilities.set_custom_fact)
* [Model Entry Utilities](https://s3.us.cloud-object-storage.appdomain.cloud/factsheets-client/index.html#ibm_aigov_facts_client.factsheet.asset_utils_model.ModelEntryUtilities.set_custom_fact)
Capturing cell facts for a model
When a data scientist develops a model in a notebook, they generate visualizations for key model details, such as ROC curve, confusion matrix, pandas profiling report, or the output of any cell execution. To capture those facts as part of a model use case, use the [capture_cell_facts](https://s3.us.cloud-object-storage.appdomain.cloud/factsheets-client/index.html#capture-cell-facts) function in the AI Factsheets Python client library.
Troubleshooting custom fields
After you customize fields and make them available to users, a user trying to update fields in the Additional details section of model details might get this error:
Update failed. To update an asset attribute, you must be a catalog Admin or an asset owner or member with the Editor role. Ask a catalog Admin to update your catalog role or ask an asset member with the Editor role to add you as a member.
If the user already has edit permission on the model and is still getting the error message, follow these steps to resolve it.
1. Invoke the API command for [createassetattributenewv2](https://cloud.ibm.com/apidocs/watson-data-api-cpd#createassetattributenewv2).
2. Use this payload with the command:
{
"name": "modelfacts_system",
"entity": {
}
}
where asset_id is the model_id. Specify the project_id, space_id, or catalog_id where the model exists.
Learn more
Find out about working with an inventory programmatically, by using the [IBM_AIGOV_FACTS_CLIENT documentation](https://s3.us.cloud-object-storage.appdomain.cloud/factsheets-client/index.html).
| # Customizing details for a use case or factsheet #
You can programmatically customize the information that is collected in factsheets for AI use cases\. Use customized factsheets as part of your AI Governance strategy\.
## Updating a model or model use case programmatically ##
You might want to update a model use case or model factsheet with additional information\. For example, some companies have a standard set of details they want to accompany a model use case or model facts\.
Currently, you must update the tenant\-level asset types by modifying the user attributes with the [Watson Data REST API](https://cloud.ibm.com/apidocs/watson-data-api#introduction)\.
## Updating a custom asset type ##
Follow these steps to update a custom asset type:
<!-- <ol> -->
1. Provide the `bss_account_id` query parameter for the [`getcatalogtype` method](https://cloud.ibm.com/apidocs/watson-data-api#getcatalogtype)\.
2. Provide `asset_type` as `model_entry_user` if you are updating attributes for `model_entry`\. Provide `asset_type` as `modelfacts_user` if you are updating attributes for model facts\.
3. Retrieve the current asset type definition by using the [`getcatalogtype` method](https://cloud.ibm.com/apidocs/watson-data-api#getcatalogtype) where `asset_type` is either `modelfacts_user` or `model_entry_user`\.
4. Update the current asset type definition with the custom attributes by adding them to the properties JSON object, following the schema that is defined in the API documentation\. The following types of attributes are supported to view and edit from the user interface of the model use case or model:
<!-- <ul> -->
* string
* date
* integer
<!-- </ul> -->
5. After the JSON is updated with the new properties, apply the changes by using the [`replaceassettype` method](https://cloud.ibm.com/apidocs/watson-data-api#replaceassettype)\. Provide the `asset_type`, `bss_account_id`, and request payload\.
<!-- </ol> -->
When the update is complete, you can view the custom attributes in the AI use case details page and model details page\.
## Example 1: Retrieving and updating the `model_entry_user` asset type ##
Note:This example updates the use case user data\. You can use the same format but substitute `modelfacts_user` to retrieve and update details for the model factsheet\.
This curl command retrieves the asset type `model_entry_user`:
curl -X GET --header 'Accept: application/json' --header "Authorization: ZenApiKey ${MY_TOKEN}" 'https://api.dataplatform.cloud.ibm.com:443/v2/asset_types/model_entry_user?bss_account_id=<bss_account_id>'
This snippet is a sample response payload for model use case user details:
{
"description": "The model use case to capture user defined attributes.",
"fields": [],
"relationships": [],
"properties": {},
"decorates": [{
"asset_type_name": "model_entry"
}],
"global_search_searchable": [],
"localized_metadata_attributes": {
"name": {
"default": "Additional details",
"en": "Additional details"
}
},
"attribute_only": false,
"name": "model_entry_user",
"version": 1,
"scope": "ACCOUNT"
}
This curl command updates the `model_entry_user` asset type:
curl -X PUT --header 'Content-Type: application/json' --header 'Accept: application/json' --header "Authorization: ZenApiKey ${MY_TOKEN}" -d '@requestbody.json' 'https://api.dataplatform.cloud.ibm.com:443/v2/asset_types/model_entry_user?bss_account_id=<bss_account_id>'
The `requestbody.json` contents look like this:
{
"description": "The model use case to capture user defined attributes.",
"fields": [],
"relationships": [],
"properties": {
"user_attribute1": {
"type": "string",
"description": "User attribute1",
"placeholder": "User attribute1",
"is_array": false,
"required": true,
"hidden": false,
"readonly": false,
"default_value": "None",
"label": {
"default": "User attribute1"
}
},
"user_attribute2": {
"type": "integer",
"description": "User attribute2",
"placeholder": "User attribute2",
"is_array": false,
"required": true,
"hidden": false,
"readonly": false,
"label": {
"default": "User attribute2"
}
},
"user_attribute3": {
"type": "date",
"description": "User attribute3",
"placeholder": "User attribute3",
"is_array": false,
"required": true,
"hidden": false,
"readonly": false,
"default_value": "None",
"label": {
"default": "User attribute3"
}
}
},
"decorates": [{
"asset_type_name": "model_entry"
}],
"global_search_searchable": [],
"attribute_only": false,
"localized_metadata_attributes": {
"name": {
"default": "Additional details",
"en": "Additional details"
}
}
}
## Updating user details by using the Python client ##
You can also update and replace an asset type with properties by using a Python script\. For details, see [fact sheet elements description](https://s3.us.cloud-object-storage.appdomain.cloud/factsheets-client/index.html#factsheet-asset-elements)\.
After you update asset type definitions with custom attributes, you can provide values for those attributes from the model use case overview and model details pages\. You can also update values to the custom attributes that use these Python API client methods:
<!-- <ul> -->
* [Model Asset Utilities](https://s3.us.cloud-object-storage.appdomain.cloud/factsheets-client/index.html#ibm_aigov_facts_client.factsheet.asset_utils_model.ModelAssetUtilities.set_custom_fact)
* [Model Entry Utilities](https://s3.us.cloud-object-storage.appdomain.cloud/factsheets-client/index.html#ibm_aigov_facts_client.factsheet.asset_utils_model.ModelEntryUtilities.set_custom_fact)
<!-- </ul> -->
## Capturing cell facts for a model ##
When a data scientist develops a model in a notebook, they generate visualizations for key model details, such as ROC curve, confusion matrix, pandas profiling report, or the output of any cell execution\. To capture those facts as part of a model use case, use the [`capture_cell_facts`](https://s3.us.cloud-object-storage.appdomain.cloud/factsheets-client/index.html#capture-cell-facts) function in the AI Factsheets Python client library\.
## Troubleshooting custom fields ##
After you customize fields and make them available to users, a user trying to update fields in the **Additional details** section of model details might get this error:
Update failed. To update an asset attribute, you must be a catalog Admin or an asset owner or member with the Editor role. Ask a catalog Admin to update your catalog role or ask an asset member with the Editor role to add you as a member.
If the user already has edit permission on the model and is still getting the error message, follow these steps to resolve it\.
<!-- <ol> -->
1. Invoke the API command for [createassetattributenewv2](https://cloud.ibm.com/apidocs/watson-data-api-cpd#createassetattributenewv2)\.
2. Use this payload with the command:
{
"name": "modelfacts_system",
"entity": {
}
}
where `asset_id` is the `model_id`. Specify the `project_id`, `space_id`, or `catalog_id` where the model exists.
<!-- </ol> -->
## Learn more ##
Find out about working with an inventory programmatically, by using the [IBM\_AIGOV\_FACTS\_CLIENT documentation](https://s3.us.cloud-object-storage.appdomain.cloud/factsheets-client/index.html)\.
<!-- </article "role="article" "> -->
|
0BD226C12CB659BF7711FE30C594E548525DBBD2 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-external-models.html?context=cdpaas&locale=en | Governing external models | Governing external models
Enable governance for models that are created in notebooks or outside of Cloud Pak for Data. Track the results of model evaluations and model details in factsheets.
In addition to governing models trained by using Watson Machine Learning, you can govern models that are created by using third-party tools such as Amazon Web Services or Microsoft Azure. For a list of supported providers, see [Supported machine learning providers](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-frameworks-ovr.html). Additionally, models that are developed in notebooks are considered external models, so you can use AI Factsheets to govern models that you develop, deploy, and monitor on platforms other than Cloud Pak for Data.
Use the model evaluations provided with watsonx.governance to measure performance metrics for a model you imported from an external provider. Capture the facts in factsheets for the model and the evaluation metrics as part of an AI use case. Use the tracked data as part of your governance and compliance strategy.
Before you begin
Before you begin, make sure that you, or a user with an Admin role, complete the following tasks:
* Enable the tracking of external models in an inventory.
* Assign an owner for the inventory.
For details, see [Managing inventories](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-inventory-manage.html).
Preparing to track external models
The following points provide an overview of the process for preserving facts for an external model.
* Tracked external models are listed under AI use cases in the main navigation menu.
* You can use the API in a model notebook to save an external model asset to an inventory.
* Associate the external model asset with an AI use case in the inventory to start preserving the facts. Along with model metadata, new fields External model identifier and External deployment identifier describe how the models and deployments are identified in external systems, for example, AWS or Azure.
* You can also automatically add external models to an inventory when they are evaluated in watsonx.governance. The destination inventory is established following these rules:
* The external model is created in the Platform assets catalog if its corresponding development-time model exists in the Platform assets catalog or if there is no development-time model that is created in any inventory.
* If the corresponding development-time model is created in an inventory by using the Python client, then the model is created in that inventory.
Associating an external model asset with an AI use case
Automatic external model tracking adds any external models that are evaluated in watsonx.governance to the inventory where the development-time model exists. After the model is in the inventory, you can associate an external model asset with a use case in the following ways:
* Use the API to save the external model asset to any inventory programmatically from a notebook. The external model asset can then be associated with an AI use case.
* Associate the external model that is created with Watson OpenScale evaluation with an AI use case.
Creating an external model asset with the API
1. Create a model in a notebook.
2. Save the model. For example, you can save to an S3 bucket.
3. Use the API to create an external model asset (a representation of the external model) in an inventory. For more information on API commands that interact with the inventory, see the [IBM_AIGOV_FACTS_CLIENT documentation](https://s3.us.cloud-object-storage.appdomain.cloud/factsheets-client/index.html#externalmodelfactselements). A hedged sketch of this step follows.
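The following hedged sketch shows what step 3 might look like. The constructor follows the pattern in the client documentation, but the save call and its parameters are assumptions to confirm against the linked reference; all names and IDs are placeholders.

```python
# A hedged sketch of creating an external model asset from a notebook.
# The constructor arguments follow the client documentation pattern; the
# save call and its parameters are assumptions to confirm against the
# linked IBM_AIGOV_FACTS_CLIENT reference. Names and IDs are placeholders.
from ibm_aigov_facts_client import AIGovFactsClient

facts_client = AIGovFactsClient(
    api_key="<IBM Cloud API key>",     # placeholder credential
    experiment_name="external-model",  # hypothetical experiment name
    external_model=True,               # enable external model support
)

# Assumed call: registers a representation of the externally trained model
# (for example, one saved to an S3 bucket) in the inventory.
facts_client.external_model_facts.save_external_model_asset(
    model_identifier="<model id in the external system>",
    name="my-external-model",          # hypothetical asset name
    description="Model trained outside Cloud Pak for Data",
)
```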
Registering an external model asset with an inventory
1. Open the Assets tab in the inventory where you want to track the model.
2. Select the External model asset that you want to track.
3. Return to the Assets tab in the Inventory and click Add to AI use case.
4. Select an existing AI use case or create a new one.
5. Follow the prompts to save the details to the inventory.
Registering an external model from Watson OpenScale
If you are validating an external model in Watson OpenScale, you can associate an external model with an AI use case to track the lifecycle facts.
1. Add an external model to the OpenScale dashboard.
2. If you already defined an AI use case with the API, the system recognizes the use case association.
3. As you create and monitor a deployment, the facts are registered with the associated use case. These facts display in the Validate or Operate stage, depending on how you classified the machine learning provider for the model.
Populating the AI use case
When facts are saved for an external model asset, they are associated with the pillar that represents their phase in the lifecycle, as follows:
* If the external model asset is created from a notebook without deployment, it displays in the Develop pillar.
* If the external model asset is created from a notebook with deployment, it displays in the Test pillar.
* When the external model deployment is evaluated in OpenScale, it displays in the Validate or Operate stage, depending on how you classified the machine learning provider for the model.
Example: tracking a SageMaker model
This sample model, created in SageMaker, is registered for tracking and moves through the Test, Validate, and Operate phases.

Viewing facts for an external model
Viewing facts for an external model is slightly different from viewing facts for a Watson Machine Learning model. These rules apply:
* Click the Assets tab of the inventory containing the external model assets to view facts.
* Unlike Watson Machine Learning model use cases, which have different fact sheets for models and deployments, fact sheets for external models combine information for the model and deployments on the same page.
* Multiple assets with the same name can be created in an inventory. To differentiate them, the tags development, pre-production, and production are assigned automatically to reflect their state.
Parent topic:[Governing assets in AI use cases](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-use-cases.html)
| # Governing external models #
Enable governance for models that are created in notebooks or outside of Cloud Pak for Data\. Track the results of model evaluations and model details in factsheets\.
In addition to governing models trained by using Watson Machine Learning, you can govern models that are created by using third\-party tools such as Amazon Web Services or Microsoft Azure\. For a list of supported providers, see [Supported machine learning providers](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-frameworks-ovr.html)\. Additionally, models that are developed in notebooks are considered external models, so you can use AI Factsheets to govern models that you develop, deploy, and monitor on platforms other than Cloud Pak for Data\.
Use the model evaluations provided with watsonx\.governance to measure performance metrics for a model you imported from an external provider\. Capture the facts in factsheets for the model and the evaluation metrics as part of an AI use case\. Use the tracked data as part of your governance and compliance strategy\.
## Before you begin ##
Before you can begin, make sure that you, or a user with an Admin role, complete the following tasks:
<!-- <ul> -->
* Enable the tracking of external models in an inventory\.
* Assign an owner for the inventory\.
<!-- </ul> -->
For details, see [Managing inventories](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-inventory-manage.html)\.
## Preparing to track external models ##
These points are an overview of the process for preserving facts for an external model\.
<!-- <ul> -->
* Tracked external models are listed under **AI use cases** in the main navigation menu\.
* You can use the API in a model notebook to save an external model asset to an inventory\.
* Associate the external model asset with an AI use case in the inventory to start preserving the facts\. Along with model metadata, new fields `External model identifier` and `External deployment identifier` describe how the models and deployments are identified in external systems, for example, AWS or Azure\.
* You can also automatically add external models to an inventory when they are evaluated in watsonx\.governance\. The destination inventory is established following these rules:
<!-- <ul> -->
* The external model is created in the Platform assets catalog if its corresponding development-time model exists in the Platform assets catalog or if there is no development-time model that is created in any inventory.
* If the corresponding development-time model is created in an inventory by using the Python client, then the model is created in that inventory.
<!-- </ul> -->
<!-- </ul> -->
## Associating an external model asset with an AI use case ##
Automatic external model tracking adds any external models that are evaluated in watsonx\.governance to the inventory where the development\-time model exists\. After the model is in the inventory, you can associate an external model asset with a use case in the following ways:
<!-- <ul> -->
* Use the API to save the external model asset to any inventory programmatically from a notebook\. The external model asset can then be associated with an AI use case\.
* Associate the external model that is created with Watson OpenScale evaluation with an AI use case\.
<!-- </ul> -->
### Creating an external model asset with the API ###
<!-- <ol> -->
1. Create a model in a notebook\.
2. Save the model\. For example, you can save to an S3 bucket\.
3. Use the API to create an external model asset (a representation of the external model) in an inventory\. For more information on API commands that interact with the inventory, see the [IBM\_AIGOV\_FACTS\_CLIENT documentation](https://s3.us.cloud-object-storage.appdomain.cloud/factsheets-client/index.html#externalmodelfactselements)\. A hedged sketch of this step follows\.
<!-- </ol> -->
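The following hedged sketch shows what step 3 might look like\. The constructor follows the pattern in the client documentation, but the save call and its parameters are assumptions to confirm against the linked reference; all names and IDs are placeholders\.

```python
# A hedged sketch of creating an external model asset from a notebook.
# The constructor arguments follow the client documentation pattern; the
# save call and its parameters are assumptions to confirm against the
# linked IBM_AIGOV_FACTS_CLIENT reference. Names and IDs are placeholders.
from ibm_aigov_facts_client import AIGovFactsClient

facts_client = AIGovFactsClient(
    api_key="<IBM Cloud API key>",     # placeholder credential
    experiment_name="external-model",  # hypothetical experiment name
    external_model=True,               # enable external model support
)

# Assumed call: registers a representation of the externally trained model
# (for example, one saved to an S3 bucket) in the inventory.
facts_client.external_model_facts.save_external_model_asset(
    model_identifier="<model id in the external system>",
    name="my-external-model",          # hypothetical asset name
    description="Model trained outside Cloud Pak for Data",
)
```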
### Registering an external model asset with an inventory ###
<!-- <ol> -->
1. Open the **Assets** tab in the inventory where you want to track the model\.
2. Select the External model asset that you want to track\.
3. Return to the **Assets** tab in the Inventory and click **Add to AI use case**\.
4. Select an existing AI use case or create a new one\.
5. Follow the prompts to save the details to the inventory\.
<!-- </ol> -->
### Registering an external model from Watson OpenScale ###
If you are validating an external model in Watson OpenScale, you can associate an external model with an AI use case to track the lifecycle facts\.
<!-- <ol> -->
1. Add an external model to the OpenScale dashboard\.
2. If you already defined an AI use case with the API, the system recognizes the use case association\.
3. As you create and monitor a deployment, the facts are registered with the associated use case\. These facts display in the Validate or Operate stage, depending on how you classified the machine learning provider for the model\.
<!-- </ol> -->
## Populating the AI use case ##
When facts are saved for an external model asset, they are associated with the pillar that represents their phase in the lifecycle, as follows:
<!-- <ul> -->
* If the external model asset is created from a notebook without deployment, it displays in the Develop pillar\.
* If the external model asset is created from a notebook with deployment, it displays in the Test pillar\.
* When the external model deployment is evaluated in OpenScale, it displays in the Validate or Operate stage, depending on how you classified the machine learning provider for the model\.
<!-- </ul> -->
## Example: tracking a SageMaker model ##
This sample model, created in SageMaker, is registered for tracking and moves through the Test, Validate, and Operate phases\.

## Viewing facts for an external model ##
Viewing facts for an external model is slightly different from viewing facts for a Watson Machine Learning model\. These rules apply:
<!-- <ul> -->
* Click the **Assets** tab of the inventory containing the external model assets to view facts\.
* Unlike Watson Machine Learning model use cases, which have different fact sheets for models and deployments, fact sheets for external models combine information for the model and deployments on the same page\.
* Multiple assets with the same name can be created in an inventory\. To differentiate them, the tags *development*, *pre\-production*, and *production* are assigned automatically to reflect their state\.
<!-- </ul> -->
**Parent topic:**[Governing assets in AI use cases](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-use-cases.html)
<!-- </article "role="article" "> -->
|
3334DCCDB9872C1E7F698751B138AA5AF6CC8335 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-factsheet-viewing.html?context=cdpaas&locale=en | Viewing a factsheet for a tracked asset | Viewing a factsheet for a tracked asset
Review the details that are captured for each tracked asset in an AI use case or print a report to share or archive.
What is captured in a factsheet?
From the point where you start tracking an asset in an AI use case, facts are collected in a factsheet for the asset. As the asset moves from one phase of the lifecycle to the next, the facts are added to the appropriate section. For example, a factsheet for a prompt template collects information for these categories:
Category Description
Governance basic details for governance, including the name of the use case, version number, and approach information
Foundation model name and provider for the foundation model
Prompt template prompt template name, description, input, and variables
Prompt parameters options used to create the prompt template, such as decoding method
Evaluation results of the most recent evaluation
Attachments attached files and supporting documents
Important: The factsheet records the most recent activity in any category. For example, if you evaluate a deployed prompt template in a pre-production space, and then evaluate it in a production space, the details from the production evaluation are captured in the factsheet, overwriting the previous data. Thus, the factsheet maintains a complete record of the current state of the asset.

Next steps
Click Export report to save a report of the factsheet.
Parent topic:[Governing assets in AI use cases](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-create-use-case.html)
| # Viewing a factsheet for a tracked asset #
Review the details that are captured for each tracked asset in an AI use case or print a report to share or archive\.
## What is captured in a factsheet? ##
From the point where you start tracking an asset in an AI use case, facts are collected in a factsheet for the asset\. As the asset moves from one phase of the lifecycle to the next, the facts are added to the appropriate section\. For example, a factsheet for a prompt template collects information for these categories:
<!-- <table> -->
| Category | Description |
| ----------------- | -------------------------------------------------------------------------------------------------------------- |
| Governance | basic details for governance, including the name of the use case, version number, and approach information |
| Foundation model | name and provider for the foundation model |
| Prompt template | prompt template name, description, input, and variables |
| Prompt parameters | options used to create the prompt template, such as decoding method |
| Evaluation | results of the most recent evaluation |
| Attachments | attached files and supporting documents |
<!-- </table ""> -->
Important: The factsheet records the most recent activity in any category\. For example, if you evaluate a deployed prompt template in a pre\-production space, and then evaluate it in a production space, the details from the production evaluation are captured in the factsheet, overwriting the previous data\. Thus, the factsheet maintains a complete record of the current state of the asset\.

## Next steps ##
Click **Export report** to save a report of the factsheet\.
**Parent topic:**[Governing assets in AI use cases](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-create-use-case.html)
<!-- </article "role="article" "> -->
|
391DBD504569F02CCC48B181E3B953198C8F3C8A | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-inventory-manage.html?context=cdpaas&locale=en | Managing an inventory for AI use cases | Managing an inventory for AI use cases
Create or manage an inventory for storing and reviewing AI use cases. AI use cases collect governance facts for AI assets your organization tracks. You can view all the AI use cases in an inventory or open one to explore the details of an AI asset.
Creating an inventory for AI use cases
You must have Admin rights to create and manage an inventory. For more information, see [Collaboration roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-collab-roles.html).
1. Click AI use cases from the navigation menu.
2. Click the settings icon  for the AI use cases view.

3. Click New inventory on the Inventory management tab.
4. Assign a name, add an optional description, and associate a Cloud Object Storage instance.
5. (Optional) Click General to extend the functions of an inventory with these options:
* If there is no Platform assets catalog available for your account, you are prompted to create one. A Platform assets catalog (PAC) is a platform catalog that provides a repository for inventory assets. It is required for governing external models or managing attachments and reports.
* Enable the option for External model governance to govern models that are trained with machine learning providers other than Watson Machine Learning. For a list of supported providers, see [Supported machine learning providers](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-frameworks-ovr.html).
Adding collaborators to an inventory
Inventories are meant to be collaborative so that multiple people that perform different roles can contribute to governance of key assets. To add collaborators to an inventory:
1. From the Inventory management tab, click Set access from the overflow menu for the inventory.
2. Click Add collaborators to add collaborators individually, or by user group.
3. Assign a role of Admin, Editor, or Viewer.
4. Collaborators are added to the list for the inventory. You can remove or change the assigned access as needed.
Managing external models, report templates, and attachments
You can extend inventory management to include the ability to govern external models, customize report templates, and manage attachments for factsheets.
Before you can access these services, you must have access to a Platform assets catalog. A Platform assets catalog is a common catalog for storing data connections and is required for governing external models and notebooks, and for customizing report templates and creating attachment groups.
Creating a Platform Assets Catalog
If you have Admin access, you can create a Platform assets catalog if one does not exist.
1. From the General tab of Inventory Management, you are prompted to create a Platform assets catalog.
2. Click Get started and follow the prompts to name the catalog, associate it with a Cloud Object Storage instance, and specify some configuration details.
3. After the catalog is created, you can add users as collaborators in the catalog.
Enabling governance of external models
Enable governance for models that are created in notebooks or outside of Cloud Pak for Data. Track the results of model evaluations and model details in factsheets.
1. From the General tab of an inventory, enable the option for External model management.
2. Select an inventory for tracking external models.
3. Select an owner, then click Apply.
Note: When external models are added, they are listed under AI use cases in the main navigation menu.
Managing report templates
As an inventory administrator, you can manage report templates to customize the report templates for inventory users.
For details, see [Managing report templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-manage-reports.html).
Managing attachments
As an inventory administrator, you can create and manage attachment groups for AI use cases to provide the structure for users to attach supporting files to enrich a use case or a factsheet. For example, if you want every use case to include approval documents, you can create a group to define placeholders for those documents in each use case. Users can then upload the documents to those placeholder slots.
For more information, see [Managing attachments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-manage-attachments.html)
Parent topic:[Governing assets in AI use cases](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-use-cases.html)
| # Managing an inventory for AI use cases #
Create or manage an inventory for storing and reviewing AI use cases\. AI use cases collect governance facts for AI assets your organization tracks\. You can view all the AI use cases in an inventory or open one to explore the details of an AI asset\.
## Creating an inventory for AI use cases ##
You must have Admin rights to create and manage an inventory\. For more information, see [Collaboration roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-collab-roles.html)\.
<!-- <ol> -->
1. Click **AI use cases** from the navigation menu\.
2. Click the settings icon  for the AI use cases view\.

3. Click **New inventory** on the **Inventory management** tab\.
4. Assign a name, add an optional description, and associate a Cloud Object Storage instance\.
5. (Optional) Click **General** to extend the functions of an inventory with these options:
<!-- <ul> -->
* If there is no Platform assets catalog available for your account, you are prompted to create one. A Platform assets catalog (PAC) is a platform catalog that provides a repository for inventory assets. It is required for governing external models or managing attachments and reports.
* Enable the option for **External model governance** to govern models that are trained with machine learning providers other than Watson Machine Learning. For a list of supported providers, see [Supported machine learning providers](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-frameworks-ovr.html).
<!-- </ul> -->
<!-- </ol> -->
## Adding collaborators to an inventory ##
Inventories are meant to be collaborative so that multiple people that perform different roles can contribute to governance of key assets\. To add collaborators to an inventory:
<!-- <ol> -->
1. From the **Inventory management** tab, click **Set access** from the overflow menu for the inventory\.
2. Click **Add collaborators** to add collaborators individually, or by user group\.
3. Assign a role of Admin, Editor, or Viewer\.
4. Collaborators are added to the list for the inventory\. You can remove or change the assigned access as needed\.
<!-- </ol> -->
## Managing external models, report templates, and attachments ##
You can extend inventory management to include the ability to govern external models, customize report templates, and manage attachments for factsheets\.
Before you can access these services, you must have access to a Platform assets catalog\. A Platform assets catalog is a common catalog for storing data connections and is required for governing external models and notebooks, and for customizing report templates and creating attachment groups\.
### Creating a Platform Assets Catalog ###
If you have Admin access, you can create a Platform assets catalog if one does not exist\.
<!-- <ol> -->
1. From the **General** tab of Inventory Management, you are prompted to create a Platform assets catalog\.
2. Click **Get started** and follow the prompts to name the catalog, associate it with a Cloud Object Storage instance, and specify some configuration details\.
3. After the catalog is created, you can add users as collaborators in the catalog\.
<!-- </ol> -->
### Enabling governance of external models ###
Enable governance for models that are created in notebooks or outside of Cloud Pak for Data\. Track the results of model evaluations and model details in factsheets\.
<!-- <ol> -->
1. From the General tab of an inventory, enable the option for **External model management**\.
2. Select an inventory for tracking external models\.
3. Select an owner, then click **Apply**\.
<!-- </ol> -->
Note: When external models are added, they are listed under AI use cases in the main navigation menu\.
### Managing report templates ###
As an inventory administrator, you can manage report templates to customize the report templates for inventory users\.
For details, see [Managing report templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-manage-reports.html)\.
### Managing attachments ###
As an inventory administrator, you can create and manage attachment groups for AI use cases to provide the structure for users to attach supporting files to enrich a use case or a factsheet\. For example, if you want every use case to include approval documents, you can create a group to define placeholders for those documents in each use case\. Users can then upload the documents to those placeholder slots\.
For more information, see [Managing attachments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-manage-attachments.html)
**Parent topic:**[Governing assets in AI use cases](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-use-cases.html)
<!-- </article "role="article" "> -->
|
6A186D2C83108A0288BDFE3D4CEA201AC0837503 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-manage-attachments.html?context=cdpaas&locale=en | Managing attachments for AI use cases | Managing attachments for AI use cases
Create attachment groups and define attachment slots for an AI use case or factsheet.
Adding attachment groups
If you have admin access to an inventory, you can define attachment groups and manage attachment definitions for the AI use cases or factsheets in the inventory. Use an attachment group to organize a set of related attachment facts and render them together. Attachments can provide supporting information and extra details for a use case. Data scientists might want to attach visualizations from their model. Model requesters might want to attach a file of requirements to describe a business need.
Creating an attachment group
1. Open the AI use cases settings and click the Attachments tab. If you do not see this tab, you might have insufficient access.
2. Choose whether to add an attachment group to an AI use case or to the factsheet template.
3. Click Add group.
4. Enter a name and an optional description.
5. When you define the attachment group, an identifier is created from the name of the group. The identifier can be used for programmatic access to the group. Click Show identifier to view and edit the ID.
6. Save your changes to create the attachment group.
Adding attachment facts to a group
From an attachment group, add attachment fact definitions that specify how a user can add an attachment to a factsheet. Attachment definitions display as available slots in the attachment section for a use case or factsheet.
Use the up and down arrow keys to reorder attachments in the list.
In this example, an attachment group for approvals defines attachment facts for approvals from risk and compliance and from the model validator.

When you save your attachment fact definitions, an attachment slot and description display on the use case or factsheet for attaching a file. A pin icon indicates an available attachment slot. Any user with at least edit access to the use case or factsheet can upload attachments.

Parent topic:[Creating and managing inventories](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-inventory-manage.html)
| # Managing attachments for AI use cases #
Create attachment groups and define attachment slots for an AI use case or factsheet\.
## Adding attachment groups ##
If you have admin access to an inventory, you can define attachment groups and manage attachment definitions for the AI use cases or factsheets in the inventory\. Use an attachment group to organize a set of related attachment facts and render them together\. Attachments can provide supporting information and extra details for a use case\. Data scientists might want to attach visualizations from their model\. Model requesters might want to attach a file of requirements to describe a business need\.
### Creating an attachment group ###
<!-- <ol> -->
1. Open the AI use cases settings and click the **Attachments** tab\. If you do not see this tab, you might have insufficient access\.
2. Choose whether to add an attachment group to an AI use case or to the factsheet template\.
3. Click **Add group**\.
4. Enter a name and an optional description\.
5. When you define the attachment group, an identifier is created from the name of the group\. The identifier can be used for programmatic access to the group\. Click **Show identifier** to view and edit the ID\.
6. Save your changes to create the attachment group\.
<!-- </ol> -->
### Adding attachment facts to a group ###
From an attachment group, add attachment fact definitions that specify how a user can add an attachment to a factsheet\. Attachment definitions display as available slots in the attachment section for a use case or factsheet\.
Use the up and down arrow keys to reorder attachments in the list\.
In this example, an attachment group for approvals defines attachment facts for approvals from risk and compliance and from the model validator\.

When you save your attachment fact definitions, an attachment slot and description display on the use case or factsheet for attaching a file\. A pin icon indicates an available attachment slot\. Any user with at least edit access to the use case or factsheet can upload attachments\.

**Parent topic:**[Creating and managing inventories](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-inventory-manage.html)
<!-- </article "role="article" "> -->
|
538ECAE0B5AA21E499F39C2637764A05BFF7B6B6 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-manage-reports.html?context=cdpaas&locale=en | Managing and customizing report templates | Managing and customizing report templates
If the default report templates that are provided with AI Factsheets do not meet your needs, you can download a default report template, customize it, and upload the new template.
Customizing a report template
Any user with at least Editor access can create a report that captures all the details from an AI use case. You can use reports for compliance verification, archiving, or other purposes.
If the default templates for the reports do not meet the needs of your organization, you can customize the report templates, the branding file, or the default stylesheet. For example, you can replace the IBM logo with your own logo image file. You must have the Admin role for managing inventories to customize report templates.
Follow these steps to customize a report template.
Downloading a report template
To download a report template from the UI:
1. Open the AI use cases settings and click the Report templates tab. If you do not see this tab, you might have insufficient access.
2. In the options menu for a report template, click Download. 
3. Open the <report-name>.ftl file in an editor.
4. Edit the template by using instructions from [Apache FreeMarker](https://freemarker.apache.org/) or the API commands.
To download a report template by using APIs:
1. Use the GET endpoint for /v1/aigov/report_templates in the [IBM Watson Data API](https://cloud.ibm.com/apidocs/watson-data-api) to list the available templates. Note the ID for the template that you want to download.
2. Use the GET endpoint /v1/aigov/report_templates/{template_id}/content with the template ID to download the template file. A sketch of these two calls is shown after these steps.
3. Open the <report-name>.ftl file in an editor.
4. Edit the template by using instructions from [Apache FreeMarker](https://freemarker.apache.org/) or the API commands.
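The following minimal sketch combines steps 1 and 2 by using Python and the requests library. The /v1/aigov/report_templates paths come from the steps themselves; the host and bearer token are placeholders.

```python
# Minimal sketch of listing report templates and downloading one by ID.
# The endpoint paths come from the documented steps; the host and bearer
# token are placeholders.
import requests

HOST = "https://api.dataplatform.cloud.ibm.com"    # placeholder host
HEADERS = {"Authorization": "Bearer <IAM token>"}  # placeholder credential

# Step 1: list the available templates and note the ID of the one to edit.
listing = requests.get(f"{HOST}/v1/aigov/report_templates", headers=HEADERS)
listing.raise_for_status()
print(listing.json())

# Step 2: download that template's content and save it as an .ftl file.
template_id = "<template_id>"
content = requests.get(
    f"{HOST}/v1/aigov/report_templates/{template_id}/content",
    headers=HEADERS,
)
content.raise_for_status()
with open("report-template.ftl", "wb") as f:
    f.write(content.content)
```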
Uploading a template
1. Open the AI use cases settings and click the Report templates tab. If you do not see this tab, you might have insufficient access.
2. Click Add template.
3. Specify a name for the template and an optional description.
4. Choose the type of template: model or model use case. The reports are available for external models and Watson Machine Learning models.
5. Upload the updated FTL file.
Restriction: The ftl file that you upload must not import any other files. Support is not yet available for import statements, other than system templates, in the ftl file.
The custom template displays in the Report templates section and is available for creating reports. Click Edit or Delete from the action menu for a custom template to update the template details or to remove the template.
Parent topic:[Creating and managing inventories](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-inventory-manage.html)
| # Managing and customizing report templates #
If the default report templates that are provided with AI Factsheets do not meet your needs, you can download a default report template, customize it, and upload the new template\.
## Customizing a report template ##
Any user with at least Editor access can create a report that captures all the details from an AI use case\. You can use reports for compliance verification, archiving, or other purposes\.
If the default templates for the reports do not meet the needs of your organization, you can customize the report templates, the branding file, or the default stylesheet\. For example, you can replace the IBM logo with your own logo image file\. You must have the Admin role for managing inventories to customize report templates\.
Follow these steps to customize a report template\.
### Downloading a report template ###
To download a report template from the UI:
<!-- <ol> -->
1. Open the AI use cases settings and click the **Report templates** tab\. If you do not see this tab, you might have insufficient access\.
2. In the options menu for a report template, click **Download**\. 
3. Open the `<report-name>.ftl` file in an editor\.
4. Edit the template by using instructions from [Apache FreeMarker](https://freemarker.apache.org/) or the API commands\.
<!-- </ol> -->
To download a report template by using APIs:
<!-- <ol> -->
1. Use the `GET` endpoint for `/v1/aigov/report_templates` in the [IBM Watson Data API](https://cloud.ibm.com/apidocs/watson-data-api) to list the available templates\. Note the ID for the template that you want to download\.
2. Use the `GET` endpoint `/v1/aigov/report_templates/{template_id}/content` with the template ID to download the template file\. A sketch of these two calls is shown after these steps\.
3. Open the `<report-name>.ftl` file in an editor\.
4. Edit the template by using instructions from [Apache FreeMarker](https://freemarker.apache.org/) or the API commands\.
<!-- </ol> -->
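The following minimal sketch combines steps 1 and 2 by using Python and the requests library\. The `/v1/aigov/report_templates` paths come from the steps themselves; the host and bearer token are placeholders\.

```python
# Minimal sketch of listing report templates and downloading one by ID.
# The endpoint paths come from the documented steps; the host and bearer
# token are placeholders.
import requests

HOST = "https://api.dataplatform.cloud.ibm.com"    # placeholder host
HEADERS = {"Authorization": "Bearer <IAM token>"}  # placeholder credential

# Step 1: list the available templates and note the ID of the one to edit.
listing = requests.get(f"{HOST}/v1/aigov/report_templates", headers=HEADERS)
listing.raise_for_status()
print(listing.json())

# Step 2: download that template's content and save it as an .ftl file.
template_id = "<template_id>"
content = requests.get(
    f"{HOST}/v1/aigov/report_templates/{template_id}/content",
    headers=HEADERS,
)
content.raise_for_status()
with open("report-template.ftl", "wb") as f:
    f.write(content.content)
```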
### Uploading a template ###
<!-- <ol> -->
1. Open the AI use cases settings and click the **Report templates** tab\. If you do not see this tab, you might have insufficient access\.
2. Click **Add template**\.
3. Specify a name for the template and an optional description\.
4. Choose the type of template: model or model use case\. The reports are available for external models and Watson Machine Learning models\.
5. Upload the updated `FTL` file\.
<!-- </ol> -->
Restriction: The `ftl` file that you upload must not import any other files\. Support is not yet available for `import` statements, other than system templates, in the `ftl` file\.
The custom template displays in the Report templates section and is available for creating reports\. Click **Edit** or **Delete** from the action menu for a custom template to update the template details or to remove the template\.
**Parent topic:**[Creating and managing inventories](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-inventory-manage.html)
<!-- </article "role="article" "> -->
|
C6223EEB52369B6B2BAA2B489C9DA41C882154B9 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-overview.html?context=cdpaas&locale=en | Watsonx.governance | Watsonx.governance
Use watsonx.governance to accelerate responsible, transparent, and explainable AI workflows with an AI governance solution that provides end-to-end monitoring for machine learning and generative AI models. Monitor your foundation model and machine learning assets from request to production. Collect facts about models that are built with IBM tools or third-party providers in a single dashboard to aid in meeting compliance and governance goals.
Develop a comprehensive governance solution
Using watsonx.governance, you can extend the best practices of AI governance from predictive machine learning models to generative AI while monitoring and mitigating the risks associated with models, users, and data sets. The benefits of this approach include:
* Responsible AI: Extend the practices of responsible AI from governing predictive machine learning models to the use of generative AI with any foundation or model provider.
* Explainability: Use automation to improve transparency and explainability for tracked models. Use tools for detecting and mitigating risks that are associated with AI.
* Transparent and regulatory policies: Mitigate AI risks by tracking the end-to-end AI lifecycle to aid compliance with internal policies and external regulations for enterprise-wide AI solutions.
Use the AI risk atlas as a guide
Start your governance journey by reviewing the [Risk Atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) to learn about the potential risks of working with AI models. The Risk Atlas provides a guide to understanding some of the risks of working with AI models, including generative AI, foundation models, and machine learning models. In addition to describing potential risks, it provides real-world context. It is intended as an educational resource and is not meant as a prescriptive tool.
Governance in action
This illustration depicts a typical governance flow, from request to monitoring in production.

Components of watsonx.governance
Watsonx.governance includes these tools for addressing your governance needs in an integrated solution:
* Watson OpenScale provides tools for configuring monitors that evaluate your deployed assets against thresholds you specify. For example, you can configure a threshold that alerts you when a predictive machine learning model performs below a specified level for fairness in monitored outcomes or drifts from accuracy. Alerts for foundation models can warn you when a threshold is breached for the presence of hateful or abusive language or the detection of personally identifiable information. A Model Health monitor provides real-time performance tracking for deployed models.
* AI Factsheets collects the metadata for machine learning models and prompt templates you explicitly track. Develop AI use cases to gather all of the information for managing a model or prompt template from the request phase through development and into production. Manage multiple versions of a model, or compare different approaches to solving a business problem within a use case. Factsheets display information about the models including creation information, data that is used, and where the asset is in the lifecycle. A common model inventory dashboard gives you a view of all tracked assets, or you can view the details of a particular model, all in service of meeting policy and compliance goals.
Extend governance with watsonx.ai
To create an end-to-end experience for developing assets and then adding them to governance, use watsonx.ai with watsonx.governance. Watsonx.ai extends the Watson Studio and Watson Machine Learning services to work with foundation models, including capabilities for saving prompt templates for a curated collection of large language model assets.
For more information on watsonx.ai, see:
* [Overview of IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/overview-wx.html)
* [Signing up for IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/signup-wx.html)
Next steps
* [Develop a governance plan](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-plan.html)
* To begin governance, follow the steps in [Provisioning and launching IBM watsonx.governance](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-provision-launch.html) to provision Watson OpenScale with AI Factsheets.
| # Watsonx\.governance #
Use watsonx\.governance to accelerate responsible, transparent, and explainable AI workflows with an AI governance solution that provides end\-to\-end monitoring for machine learning and generative AI models\. Monitor your foundation model and machine learning assets from request to production\. Collect facts about models that are built with IBM tools or third\-party providers in a single dashboard to aid in meeting compliance and governance goals\.
## Develop a comprehensive governance solution ##
Using watsonx\.governance, you can extend the best practices of AI governance from predictive machine learning models to generative AI while monitoring and mitigating the risks associated with models, users, and data sets\. The benefits of this approach include:
<!-- <ul> -->
* Responsible AI: Extend the practices of responsible AI from governing predictive machine learning models to the use of generative AI with any foundation or model provider\.
* Explainability: Use automation to improve transparency and explainability for tracked models\. Use tools for detecting and mitigating risks that are associated with AI\.
* Transparent and regulatory policies: Mitigate AI risks by tracking the end\-to\-end AI lifecycle to aid compliance with internal policies and external regulations for enterprise\-wide AI solutions\.
<!-- </ul> -->
### Use the AI risk atlas as a guide ###
Start your governance journey by reviewing the [Risk Atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) to learn about the potential risks of working with AI models\. The Risk Atlas provides a guide to understanding some of the risks of working with AI models, including generative AI, foundation models, and machine learning models\. In addition to describing potential risks, it provides real\-world context\. It is intended as an educational resource and is not meant as a prescriptive tool\.
### Governance in action ###
This illustration depicts a typical governance flow, from request to monitoring in production\.

## Components of watsonx\.governance ##
Watsonx\.governance includes these tools for addressing your governance needs in an integrated solution:
<!-- <ul> -->
* **Watson OpenScale** provides tools for configuring monitors that evaluate your deployed assets against thresholds you specify\. For example, you can configure a threshold that alerts you when a predictive machine learning model performs below a specified level for fairness in monitored outcomes or drifts from accuracy\. Alerts for foundation models can warn you when a threshold is breached for the presence of hateful or abusive language or the detection of personally identifiable information\. A Model Health monitor provides real\-time performance tracking for deployed models\.
* **AI Factsheets** collects the metadata for machine learning models and prompt templates you explicitly track\. Develop AI use cases to gather all of the information for managing a model or prompt template from the request phase through development and into production\. Manage multiple versions of a model, or compare different approaches to solving a business problem within a use case\. Factsheets display information about the models including creation information, data that is used, and where the asset is in the lifecycle\. A common model inventory dashboard gives you a view of all tracked assets, or you can view the details of a particular model, all in service of meeting policy and compliance goals\.
<!-- </ul> -->
### Extend governance with watsonx\.ai ###
To create an end\-to\-end experience for developing assets and then adding them to governance, use watsonx\.ai with watsonx\.governance\. Watsonx\.ai extends the Watson Studio and Watson Machine Learning services to work with foundation models, including capabilities for saving prompt templates for a curated collection of large language model assets\.
For more information on watsonx\.ai, see:
<!-- <ul> -->
* [Overview of IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/overview-wx.html)
* [Signing up for IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/signup-wx.html)
<!-- </ul> -->
## Next steps ##
<!-- <ul> -->
* [Develop a governance plan](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-plan.html)
* To begin governance, follow the steps in [Provisioning and launching IBM watsonx\.governance](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-provision-launch.html) to provision Watson OpenScale with AI Factsheets\.
<!-- </ul> -->
<!-- </article "role="article" "> -->
|
A85E898F28AC27DAA8961337A9B468004C1B8B21 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-plan.html?context=cdpaas&locale=en | Planning for AI governance | Planning for AI governance
Plan how to use watsonx.governance to accelerate responsible, transparent, and explainable AI workflows with an AI governance solution that provides end-to-end monitoring for machine learning and generative AI models.
Governance capabilities
Note: To govern metadata from foundation models, you must have watsonx.ai provisioned.
Consider these watsonx.governance capabilities as you plan your governance strategy:
* Collect metadata in factsheets about machine learning models and prompt templates for large language models.
* Customize the metadata facts that are captured in factsheets for machine learning and foundation models.
* Monitor machine learning deployments for fairness, drift, and quality to ensure that your models are meeting specified standards.
* Monitor foundation models for breaches of toxic language thresholds or detection of personally identifiable information.
* Evaluate prompt templates with metrics designed to measure performance and to test for the presence of prohibited content, such as hateful speech.
* Collect model health data including data size, latency, and throughput to help you assess performance issues and manage resource consumption.
* Assign a single risk score to tracked models to indicate the relative impact of the associated model. For example, a model that predicts sensitive information such as a credit score might be assigned a higher risk score than a model that projects ice cream sales.
* Use the automated transaction analysis tools to improve transparency and explainability for your AI assets. For example, see how a feature contributes to a prediction and test what-if scenarios to explore different outcomes.
Planning for governance
Consider these governance strategies:
* [Build your governance team](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-plan.html?context=cdpaas&locale=en#people)
* [Set up your governance structures](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-plan.html?context=cdpaas&locale=en#structure)
* [Manage collaboration with roles and access control](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-plan.html?context=cdpaas&locale=en#collab)
* [Develop a communication plan](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-plan.html?context=cdpaas&locale=en#communicate).
* [Implement a simple solution](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-plan.html?context=cdpaas&locale=en#simple)
* [Plan for more complex solutions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-plan.html?context=cdpaas&locale=en#complex)
Build a governance team
Consider the expertise that you need on your governance team. A typical governance plan might include the following roles. In some cases, the same person might fill multiple roles. In other cases, a role might represent a team of people.
* Model owner: The owner creates an AI use case to track a solution to a business need. The owner requests the model or prompt template, manages the approval process, and tracks the solution through the AI Lifecycle.
* Model developer/Data scientist: The developer works with the data in a data set or a large language model (LLM) and creates the machine learning model or LLM prompt template.
* Model validator: The validator tests the solution to determine whether it meets the goals that are stated in the AI use case.
* Risk and compliance manager: The risk manager determines the policies and compliance thresholds for the AI use case. For example, the risk manager might determine the rules to apply for testing a solution for fairness or for screening output for hateful and abusive speech.
* MLOps engineer: The MLOps engineer moves a solution from a pre-production (test) environment to a production environment when a solution is deemed ready to be fully deployed.
* App developer: Following deployment, an app developer runs evaluations against the deployment to monitor how the solution performs against the metric thresholds set by the risk and compliance owner. If performance drops below specified thresholds, the app developer works with the other stakeholders to address problems and update the model or prompt template.
Set up a governance structure
After identifying roles and assembling a team, plan your governance structure.
1. Create an inventory for storing AI use cases. An inventory is where you store and view AI use cases and the factsheets that are associated with the assets being governed. Depending on your governance requirements, store all use cases in a single inventory, or create multiple inventories for your governance efforts.
2. Create projects for collaboration. If you are using IBM tools, create a Watson Studio project. The project can hold the data that is required to train or test the AI solution and the model or prompt template being governed. Use the access control to restrict access to the approved collaborators.
3. Create a pre-production deployment space. Use the space to test your model or prompt template by using test data. Like a project, a space provides access control features so you can include the required collaborators.
4. Configure test and validation evaluations. Provide the model or prompt template details and configure a set of evaluations to test the performance of your solution. For example, you might test a machine learning model for dimensions such as fairness, quality, and drift, and test a prompt template against metrics such as perplexity (how accurate the output is) or toxicity (whether the output contains hateful or abusive speech). By testing on known (labeled) data, you can evaluate the performance before moving a solution to production.
5. Configure a production space. When the model or prompt template is ready to be deployed to a production environment, move the solution and all dependencies to a production space. A production space typically has a tighter access control list.
6. Configure evaluations for the deployed model. Provide the model details and configure evaluations for the solution. You now test against live data rather than test data. It is important to monitor your solution so that you are alerted if thresholds are crossed, indicating a potential problem with the deployed solution.
Manage collaboration for governance
Watsonx.governance is built on a collaborative platform so that all approved team members can contribute to the goals of solving business problems.
To plan for collaboration, consider how to manage access to the inventories, projects, and spaces you use for governance.
Use roles along with access control features to ensure that your team has appropriate access to meet goals.
Develop a communication plan
Some of the workflow around defining an AI use case and moving assets through the lifecycle relies on effective communication. Decide how your team will communicate and establish the details. For example:
* Will you use email for decision-making or a messaging tool such as Slack?
* Is there a formal process for adding comments to an asset as it moves through a workflow?
Create your communication plan and share it with your team.
Implement a simple governance solution
As you roll out your governance strategy, start with a simple implementation, then consider how to build incrementally to a more comprehensive solution. The simplest implementation requires an AI use case in an inventory, with an asset moving from request to production.
For the most straightforward implementation of AI governance, you can use IBM Knowledge Catalog to track and inventory models. An AI use case in an inventory consists of a set of factsheets containing lineage, history, and other relevant information about a model's lifecycle. A watsonx administrator must create an inventory and add data scientists, data engineers, and other users as collaborators.

AI use case owners can request and track assets:
* Business users create AI use cases in the inventory to request machine-learning models or LLM prompt templates.
* Data scientists associate the trained asset with an AI use case to create AI factsheets.
AI factsheets accumulate information about the model or prompt templates in the following ways:
* All actions that are associated with the tracked asset are automatically saved, including deployments and evaluations.
* All changes to input data assets are automatically saved.
* Data scientists can add tags, business terms, supporting documentation, and other information.
* Data scientists can associate challenger models with the AI use cases to compare model performance.
Validators and other stakeholders review AI factsheets to ensure compliance and certify asset progress from development to production. They can also generate reports from the factsheets to print, share, or archive details.
Plan for more complex solutions
You can extend your AI governance implementation at any time. Consider these options to extend governance:
* MLOps engineers can extend model tracking to include external models that are created with third-party machine learning tools.
* MLOps engineers can add custom properties to factsheets to track more information.
* Compliance analysts can customize the default report templates to generate tailored reports for the organization.
* Record the results of IBM Watson OpenScale evaluations for fairness and other metrics as part of model tracking.
Governing assets that are created locally or externally
Watsonx.governance provides the tools for you to govern assets you created using IBM tools, such as machine learning models created by using AutoAI or foundation model prompt templates created in a watsonx project. You can also govern machine learning models that are created by using non-IBM tools, such as Microsoft Azure or Amazon Web Services. As you develop your governance plan, consider these differences:
* IBM assets developed with tools such as Watson Studio are available for governance earlier in the lifecycle. You can track the factsheet for a local asset from the Development phase, and have visibility into details such as the training data and creation details from an earlier stage.
* An inventory owner or administrator must enable governance for external models.
* When governance is enabled for external models, they can be added to an AI use case explicitly, or automatically, when they are evaluated with Watson OpenScale.
For a list of supported machine learning model providers, see [Supported machine learning providers in Watson OpenScale](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-frameworks-ovr.html).
Next steps
To begin governance, follow the steps in [Provisioning and launching IBM watsonx.governance](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-provision-launch.html) to provision Watson OpenScale with AI Factsheets.
Parent topic:[Watsonx.governance overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-overview.html)
| # Planning for AI governance #
Plan how to use watsonx\.governance to accelerate responsible, transparent, and explainable AI workflows with an AI governance solution that provides end\-to\-end monitoring for machine learning and generative AI models\.
## Governance capabilities ##
Note: To govern metadata from foundation models, you must have watsonx\.ai provisioned\.
Consider these watsonx\.governance capabilities as you plan your governance strategy:
<!-- <ul> -->
* Collect metadata in factsheets about machine learning models and prompt templates for large language models\.
* Customize the metadata facts that are captured in factsheets for machine learning and foundation models\.
* Monitor machine learning deployments for fairness, drift, and quality to ensure that your models are meeting specified standards\.
* Monitor foundation models for breaches of toxic language thresholds or detection of personally identifiable information\.
* Evaluate prompt templates with metrics designed to measure performance and to test for the presence of prohibited content, such as hateful speech\.
* Collect model health data including data size, latency, and throughput to help you assess performance issues and manage resource consumption\.
* Assign a single risk score to tracked models to indicate the relative impact of the associated model\. For example, a model that predicts sensitive information such as a credit score might be assigned a higher risk score than a model that projects ice cream sales\.
* Use the automated transaction analysis tools to improve transparency and explainability for your AI assets\. For example, see how a feature contributes to a prediction and test what\-if scenarios to explore different outcomes\.
<!-- </ul> -->
## Planning for governance ##
Consider these governance strategies:
<!-- <ul> -->
* [Build your governance team](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-plan.html?context=cdpaas&locale=en#people)
* [Set up your governance structures](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-plan.html?context=cdpaas&locale=en#structure)
* [Manage collaboration with roles and access control](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-plan.html?context=cdpaas&locale=en#collab)
* [Develop a communication plan](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-plan.html?context=cdpaas&locale=en#communicate)
* [Implement a simple solution](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-plan.html?context=cdpaas&locale=en#simple)
* [Plan for more complex solutions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-plan.html?context=cdpaas&locale=en#complex)
<!-- </ul> -->
### Build a governance team ###
Consider the expertise that you need on your governance team\. A typical governance plan might include the following roles\. In some cases, the same person might fill multiple roles\. In other cases, a role might represent a team of people\.
<!-- <ul> -->
* **Model owner**: The owner creates an AI use case to track a solution to a business need\. The owner requests the model or prompt template, manages the approval process, and tracks the solution through the AI Lifecycle\.
* **Model developer/Data scientist**: The developer works with the data in a data set or a large language model (LLM) and creates the machine learning model or LLM prompt template\.
* **Model validator**: The validator tests the solution to determine whether it meets the goals that are stated in the AI use case\.
* **Risk and compliance manager**: The risk manager determines the policies and compliance thresholds for the AI use case\. For example, the risk manager might determine the rules to apply for testing a solution for fairness or for screening output for hateful and abusive speech\.
* **MLOps engineer**: The MLOps engineer moves a solution from a pre\-production (test) environment, to a production environment when a solution is deemed ready to be fully deployed\.
* **App developer**: Following deployment, an app developer runs evaluations against the deployment to monitor how the solution performs against the metric thresholds set by the risk and compliance manager\. If performance drops below specified thresholds, the app developer works with the other stakeholders to address problems and update the model or prompt template\.
<!-- </ul> -->
### Set up a governance structure ###
After identifying roles and assembling a team, plan your governance structure\.
<!-- <ol> -->
1. Create an inventory for storing AI use cases\. An inventory is where you store and view AI use cases and the factsheets that are associated with the assets being governed\. Depending on your governance requirements, store all use cases in a single inventory, or create multiple inventories for your governance efforts\.
2. Create projects for collaboration\. If you are using IBM tools, create a Watson Studio project\. The project can hold the data that is required to train or test the AI solution and the model or prompt template being governed\. Use the access control to restrict access to the approved collaborators\.
3. Create a pre\-production deployment space\. Use the space to test your model or prompt template by using test data\. Like a project, a space provides access control features so you can include the required collaborators\.
4. Configure test and validation evaluations\. Provide the model or prompt template details and configure a set of evaluations to test the performance of your solution\. For example, you might test a machine learning model for dimensions such as fairness, quality, and drift, and test a prompt template against metrics such as perplexity (how accurate the output is) or toxicity (whether the output contains hateful or abusive speech)\. By testing on known (labeled) data, you can evaluate the performance before moving a solution to production; a connectivity sketch for the evaluation service follows these steps\.
5. Configure a production space\. When the model or prompt template is ready to be deployed to a production environment, move the solution and all dependencies to a production space\. A production space typically has a tighter access control list\.
6. Configure evaluations for the deployed model\. Provide the model details and configure evaluations for the solution\. You now test against live data rather than test data\. It is important to monitor your solution so that you are alerted if thresholds are crossed, indicating a potential problem with the deployed solution\.
<!-- </ol> -->
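Evaluations such as fairness, quality, and drift are backed by the Watson OpenScale service\. The following is a minimal connectivity sketch, assuming the `ibm-watson-openscale` Python SDK and IAM API\-key authentication; verify the constructor arguments and method names against the current SDK documentation\.

```python
# Minimal sketch: connect to Watson OpenScale, the service behind the
# fairness, quality, and drift evaluations described in these steps.
# Assumes: pip install ibm-watson-openscale
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_watson_openscale import APIClient

authenticator = IAMAuthenticator(apikey="YOUR_IBM_CLOUD_API_KEY")  # placeholder
wos_client = APIClient(authenticator=authenticator)

# Listing the data marts confirms access; evaluations are configured
# against subscriptions that live under a data mart.
wos_client.data_marts.show()
```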
### Manage collaboration for governance ###
Watsonx\.governance is built on a collaborative platform to allow for all approved team members to contribute to the goals of solving business problems\.
To plan for collaboration, consider how to manage access to the inventories, projects, and spaces you use for governance\.
Use roles along with access control features to ensure that your team has appropriate access to meet goals\.
### Develop a communication plan ###
Some of the workflow around defining an AI use case and moving assets through the lifecycle relies on effective communication\. Decide how your team will communicate and establish the details\. For example:
<!-- <ul> -->
* Will you use email for decision\-making or a messaging tool such as Slack?
* Is there a formal process for adding comments to an asset as it moves through a workflow?
<!-- </ul> -->
Create your communication plan and share it with your team\.
### Implement a simple governance solution ###
As you roll out your governance strategy, start with a simple implementation, then consider how to build incrementally to a more comprehensive solution\. The simplest implementation requires an AI use case in an inventory, with an asset moving from request to production\.
For the most straightforward implementation of AI governance, you can use IBM Knowledge Catalog to track and inventory models\. An AI use case in an inventory consists of a set of factsheets containing lineage, history, and other relevant information about a model's lifecycle\. A watsonx administrator must create an inventory and add data scientists, data engineers, and other users as collaborators\.

AI use case owners can request and track assets:
<!-- <ul> -->
* Business users create AI use cases in the inventory to request machine\-learning models or LLM prompt templates\.
* Data scientists associate the trained asset with an AI use case to create AI factsheets\.
<!-- </ul> -->
AI factsheets accumulate information about the model or prompt templates in the following ways:
<!-- <ul> -->
* All actions that are associated with the tracked asset are automatically saved, including deployments and evaluations\.
* All changes to input data assets are automatically saved\.
* Data scientists can add tags, business terms, supporting documentation, and other information\.
* Data scientists can associate challenger models with the AI use cases to compare model performance\.
<!-- </ul> -->
Validators and other stakeholders review AI factsheets to ensure compliance and certify asset progress from development to production\. They can also generate reports from the factsheets to print, share, or archive details\.
### Plan for more complex solutions ###
You can extend your AI governance implementation at any time\. Consider these options to extend governance:
<!-- <ul> -->
* MLOps engineers can extend model tracking to include external models that are created with third\-party machine learning providers\.
* MLOps engineers can add custom properties to factsheets to track more information\.
* Compliance analysts can customize the default report templates to generate tailored reports for the organization\.
* Record the results of IBM Watson OpenScale evaluations for fairness and other metrics as part of model tracking\.
<!-- </ul> -->
## Governing assets that are created locally or externally ##
Watsonx\.governance provides the tools for you to govern assets you created using IBM tools, such as machine learning models created by using AutoAI or foundation model prompt templates created in a watsonx project\. You can also govern machine learning models that are created by using non\-IBM tools, such as Microsoft Azure or Amazon Web Services\. As you develop your governance plan, consider these differences:
<!-- <ul> -->
* IBM assets developed with tools such as Watson Studio are available for governance earlier in the lifecycle\. You can track the factsheet for a local asset from the Development phase, and have visibility into details such as the training data and creation details from an earlier stage\.
* An inventory owner or administrator must enable governance for external models\.
* When governance is enabled for external models, they can be added to an AI use case explicitly, or automatically when they are evaluated with Watson OpenScale\.
<!-- </ul> -->
For a list of supported machine learning model providers, see [Supported machine learning providers in Watson OpenScale](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-frameworks-ovr.html)\.
## Next steps ##
To begin governance, follow the steps in [Provisioning and launching IBM watsonx\.governance](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-provision-launch.html) to provision Watson OpenScale with AI Factsheets\.
**Parent topic:**[Watsonx\.governance overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-overview.html)
<!-- </article "role="article" "> -->
|
71479786E864B942786028481E30DFB35E422BA8 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-track-ml-model.html?context=cdpaas&locale=en | Tracking a machine learning model | Tracking a machine learning model
Track machine learning models in an AI use case to meet governance and compliance goals.
Tracking machine learning models in an AI use case
Track machine learning models that are trained in a project and saved as a model asset. You can add a machine learning model to an AI use case from a project or space.
1. Open the project or space that contains the model asset that you want to govern.
2. From the action menu for the asset, click Track in AI use case.
3. Select an existing AI use case or follow the prompts to create a new one.
4. Choose an existing approach or create a new approach. An approach creates a version set for all assets in the same approach.
5. Choose a version numbering scheme. All of the assets in an approach share a common version. Choose from:
* Experimental if you plan to update frequently.
* Stable if the assets are not changing rapidly.
* Custom if you want to start a new version number. Version numbering must follow a scheme of major.minor.patch.

Watch this video to see how to track a machine learning model in an AI use case.
This video provides a visual method to learn the concepts and tasks in this documentation.
Once tracking is enabled, all collaborators for the use case can review details for the asset.

For a machine learning model, facts include creation details, training data used, and information from evaluation metrics.

For details on tracking a machine learning model that is created in a Jupyter Notebook or trained with a third-party machine learning provider, see [Tracking external models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-external-models.html).
Learn more
Parent topic:[Tracking assets in use cases](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-tracking-overview.html)
| # Tracking a machine learning model #
Track machine learning models in an AI use case to meet governance and compliance goals\.
## Tracking machine learning models in an AI use case ##
Track machine learning models that are trained in a project and saved as a model asset\. You can add a machine learning model to an AI use case from a project or space\.
<!-- <ol> -->
1. Open the project or space that contains the model asset that you want to govern\.
2. From the action menu for the asset, click **Track in AI use case**\.
3. Select an existing AI use case or follow the prompts to create a new one\.
4. Choose an existing approach or create a new approach\. An approach creates a version set for all assets in the same approach\.
5. Choose a version numbering scheme\. All of the assets in an approach share a common version\. Choose from:
<!-- <ul> -->
* *Experimental* if you plan to update frequently.
* *Stable* if the assets are not changing rapidly.
* *Custom* if you want to start a new version number. Version numbering must follow a scheme of major.minor.patch (see the validation sketch after these steps).
<!-- </ul> -->
<!-- </ol> -->
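The custom version format is strict\. As a hypothetical illustration, the following sketch validates a candidate version string against the major\.minor\.patch pattern; the helper name is an assumption, not part of the product\.

```python
# Hypothetical helper: check a custom version string against the
# major.minor.patch scheme required for AI use case approaches.
import re

def is_valid_custom_version(version: str) -> bool:
    # Exactly three dot-separated, non-negative integer components.
    return re.fullmatch(r"\d+\.\d+\.\d+", version) is not None

assert is_valid_custom_version("1.4.2")       # valid: major.minor.patch
assert not is_valid_custom_version("1.4")     # missing the patch component
assert not is_valid_custom_version("v1.4.2")  # prefixes do not match the scheme
```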

Watch this video to see how to track a machine learning model in an AI use case\.
This video provides a visual method to learn the concepts and tasks in this documentation\.
Once tracking is enabled, all collaborators for the use case can review details for the asset\.

For a machine learning model, facts include creation details, training data used, and information from evaluation metrics\.

For details on tracking a machine learning model that is created in a Jupyter Notebook or trained with a third\-party machine learning provider, see [Tracking external models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-external-models.html)\.
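For models trained directly in a notebook, facts can also be captured programmatically\. The following is a minimal sketch, assuming the `ibm-aigov-facts-client` Python package and an IAM API key; treat the parameter values as placeholders and verify the constructor arguments against the current client documentation\.

```python
# Minimal sketch: initialize the AI Factsheets client so that training
# activity in a notebook session can be captured as facts.
# Assumes: pip install ibm-aigov-facts-client
from ibm_aigov_facts_client import AIGovFactsClient

facts_client = AIGovFactsClient(
    api_key="YOUR_IBM_CLOUD_API_KEY",    # placeholder IAM API key
    experiment_name="my-model-training",  # hypothetical experiment name
    container_type="project",             # facts are stored in this container
    container_id="YOUR_PROJECT_ID",       # placeholder project ID
)
# Once the client is initialized, runs from supported training frameworks
# can be logged as facts that surface on the model's factsheet.
```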
## Learn more ##
**Parent topic:**[Tracking assets in use cases](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-tracking-overview.html)
<!-- </article "role="article" "> -->
|
0E6365D1DD3EC522C4DA68B662F05A0120617593 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-track-prompt-temp.html?context=cdpaas&locale=en | Tracking prompt templates | Tracking prompt templates
Track a prompt template in an AI use case to capture and share facts about the asset to help you meet governance and compliance goals.
Tracking prompt templates
A prompt template is the saved prompt input for a foundation model. A prompt template can include variables so that it can be run with different options. For example, if you have a prompt that summarizes meeting notes for project-X, you can define a variable so that the same prompt can run for project-Y.
You can add a saved prompt template to an AI use case to track the details for the prompt template. In addition to recording prompt template creation information and source model details, the factsheet tracks information from prompt template evaluations to capture performance metrics. You can evaluate prompt templates before or after you start tracking a prompt template.
Important: Before you start tracking a prompt template in an AI use case, make sure the prompt template is stable. After you enable tracking, the prompt template is locked, and you can no longer update it. This is to preserve the integrity of the prompt template so that all of the facts collected in the factsheet apply to a single version of the prompt template. If you are still experimenting with a prompt template, do not start tracking it in an AI use case.
Before you begin
Before you can track a prompt template, these conditions must be met.
* Be an administrator or editor for the project that contains the prompt template.
* The prompt template must include at least one variable. For more information, see [Building reusable prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-variables.html).
Watch this video to see how to track a prompt template in an AI use case.
This video provides a visual method to learn the concepts and tasks in this documentation.
Tracking a prompt template or machine learning model in an AI use case
You can add a prompt template to an AI use case from a project or space.
1. Open the project or space that contains the prompt template that you want to govern.
2. From the action menu for the asset, click View AI use case. 
3. If this prompt template is not already part of an AI use case, you are prompted to Track in AI use case. When you start tracking a prompt template, it is locked and you can no longer edit it. To make changes, you must create a new prompt template. 
4. Select an existing AI use case or follow the prompts to create a new one.
5. Choose an existing approach or create a new approach. An approach represents one facet of a complete solution. Each approach creates a version set for all assets in the same approach.
6. Choose a version numbering scheme. All the assets in an approach share a common version. Choose from:
* Experimental if you plan to update frequently.
* Stable if the assets are not changing rapidly.
* Custom if you want to start a new version number. Version numbering must follow a scheme of major.minor.patch.
When tracking is enabled, all collaborators for the use case can review details for the prompt template.

Details are captured for each lifecycle stage for a prompt template.
* Develop provides information about how the prompt is defined, including the prompt itself, creation date, foundation model that is used, prompt parameters set, and variables defined.
* Evaluate displays the dimension metrics from evaluating your prompt template.
* Operate provides details that are related to how the prompt template is deployed for productive use.

Viewing the factsheet for a tracked prompt template
Click the name of the prompt template in an AI use case to view the associated factsheet.

The factsheet for a prompt template collects this type of data:
* Governance collects basic information such as the name of the AI use case, the description, and the approach name and version data.
* Foundation model displays the name of the foundation model, the license ID, and the model publisher.
* Prompt template shows the prompt name, ID, prompt input, and variables.
* Prompt parameters collect the configuration options for the prompt template, including the decoding method and stopping criteria.
* Evaluation displays data from the evaluation, including alerts and metric data. For example, this prompt template shows the metrics data for quality evaluations on the prompt template. One threshold alert was triggered by the evaluation: 
* Validate shows the data for how the prompt template was evaluated, including the data set used for the validation, alerts triggered, and evaluation metric data.
* Attachments shows information about attachments that support the use case.
Note: As the prompt template moves from one stage of the lifecycle to the next, facts are added to the factsheet for the prompt template. The factsheet always represents the latest state of the prompt template. For example, if you validate a prompt template in a pre-production deployment space, and then again in a production deployment space, the details from the production phase are recorded in the factsheet, overwriting previous evaluation results.
Moving a prompt template through lifecycle stages
When a prompt template is tracked, you can see details from creating the prompt template, and evaluating performance against appropriate metrics. The next stage in the lifecycle is to validate the prompt template. This involves testing the prompt template with new data. If you are the prompt engineer who is tasked with validating the asset, follow these steps to validate the prompt template and capture the validation data in the associated factsheet.
1. From the project containing the prompt template, export the project to a compressed ZIP file.
2. Create a new project and populate it with the exported ZIP file.
3. Upload validation data, evaluate the prompt template, and save the results to the validation project.
4. From the project, promote the prompt template to a new or existing deployment space that is designated as a Production stage. The stage is assigned when the space is created and cannot be updated, so create a new space if you do not have a production space available. 
5. After you promote the prompt template to a deployment space, you can configure continuous monitoring.
6. Details from monitoring the prompt template in a production space are displayed in the Operate lifecycle stage of the AI use case.
Learn more
* See [Deploying a prompt template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/prompt-template-deploy.html) for details on preparing a prompt template for production.
* See [Evaluating prompt templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-eval-prompt.html) for details on evaluating a prompt template for dimensions such as accuracy or to test for the presence of hateful or abusive speech.
Parent topic:[Tracking assets in an AI use case](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-tracking-overview.html)
| # Tracking prompt templates #
Track a prompt template in an AI use case to capture and share facts about the asset to help you meet governance and compliance goals\.
## Tracking prompt templates ##
A prompt template is the saved prompt input for a foundation model\. A prompt template can include variables so that it can be run with different options\. For example, if you have a prompt that summarizes meeting notes for project\-X, you can define a variable so that the same prompt can run for project\-Y\.
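Conceptually, a prompt template behaves like a parameterized string\. The following is a minimal illustration of the variable concept that uses plain Python string substitution rather than the watsonx\.ai tooling; the variable names `project` and `notes` are hypothetical\.

```python
# Conceptual illustration of a prompt template with variables.
from string import Template

prompt_template = Template(
    "Summarize the following meeting notes for $project:\n$notes"
)

# The same template runs for different projects by binding its variables.
prompt_x = prompt_template.substitute(project="project-X", notes="...")
prompt_y = prompt_template.substitute(project="project-Y", notes="...")
```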
You can add a saved prompt template to an AI use case to track the details for the prompt template\. In addition to recording prompt template creation information and source model details, the factsheet tracks information from prompt template evaluations to capture performance metrics\. You can evaluate prompt templates before or after you start tracking a prompt template\.
Important: Before you start tracking a prompt template in an AI use case, make sure the prompt template is stable\. After you enable tracking, the prompt template is locked, and you can no longer update it\. This is to preserve the integrity of the prompt template so that all of the facts collected in the factsheet apply to a single version of the prompt template\. If you are still experimenting with a prompt template, do not start tracking it in an AI use case\.
### Before you begin ###
Before you can track a prompt template, these conditions must be met\.
<!-- <ul> -->
* Be an administrator or editor for the project that contains the prompt template\.
* The prompt template must include at least one variable\. For more information, see [Building reusable prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-variables.html)\.
<!-- </ul> -->
Watch this video to see how to track a prompt template in an AI use case\.
This video provides a visual method to learn the concepts and tasks in this documentation\.
### Tracking a prompt template or machine learning model in an AI use case ###
You can add a prompt template to an AI use case from a project or space\.
<!-- <ol> -->
1. Open the project or space that contains the prompt template that you want to govern\.
2. From the action menu for the asset, click **View AI use case**\. 
3. If this prompt template is not already part of an AI use case, you are prompted to **Track in AI use case\.** When you start tracking a prompt template, it is locked and you can no longer edit it\. To make changes, you must create a new prompt template\. 
4. Select an existing AI use case or follow the prompts to create a new one\.
5. Choose an existing approach or create a new approach\. An approach represents one facet of a complete solution\. Each approach creates a version set for all assets in the same approach\.
6. Choose a version numbering scheme\. All the assets in an approach share a common version\. Choose from:
<!-- <ul> -->
* *Experimental* if you plan to update frequently.
* *Stable* if the assets are not changing rapidly.
* *Custom* if you want to start a new version number. Version numbering must follow a scheme of major.minor.patch.
<!-- </ul> -->
<!-- </ol> -->
When tracking is enabled, all collaborators for the use case can review details for the prompt template\.

Details are captured for each lifecycle stage for a prompt template\.
<!-- <ul> -->
* **Develop** provides information about how the prompt is defined, including the prompt itself, creation date, foundation model that is used, prompt parameters set, and variables defined\.
* **Evaluate** displays the dimension metrics from evaluating your prompt template\.
* **Operate** provides details that are related to how the prompt template is deployed for productive use\.
<!-- </ul> -->

## Viewing the factsheet for a tracked prompt template ##
Click the name of the prompt template in an AI use case to view the associated factsheet\.

The factsheet for a prompt template collects this type of data:
<!-- <ul> -->
* **Governance** collects basic information such as the name of the AI use case, the description, and the approach name and version data\.
* **Foundation model** displays the name of the foundation model, the license ID, and the model publisher\.
* **Prompt template** shows the prompt name, ID, prompt input, and variables\.
* **Prompt parameters** collect the configuration options for the prompt template, including the decoding method and stopping criteria\.
* **Evaluation** displays data from the evaluation, including alerts and metric data\. For example, this prompt template shows the metrics data for quality evaluations on the prompt template\. One threshold alert was triggered by the evaluation: 
* **Validate** shows the data for how the prompt template was evaluated, including the data set used for the validation, alerts triggered, and evaluation metric data\.
* **Attachments** shows information about attachments that support the use case\.
<!-- </ul> -->
Note: As the prompt template moves from one stage of the lifecycle to the next, facts are added to the factsheet for the prompt template\. The factsheet always represents the latest state of the prompt template\. For example, if you validate a prompt template in a pre\-production deployment space, and then again in a production deployment space, the details from the production phase are recorded in the factsheet, overwriting previous evaluation results\.
## Moving a prompt template through lifecycle stages ##
When a prompt template is tracked, you can see details from creating the prompt template, and evaluating performance against appropriate metrics\. The next stage in the lifecycle is to *validate* the prompt template\. This involves testing the prompt template with new data\. If you are the prompt engineer who is tasked with validating the asset, follow these steps to validate the prompt template and capture the validation data in the associated factsheet\.
<!-- <ol> -->
1. From the project containing the prompt template, export the project to a compressed ZIP file\.
2. Create a new project and populate it with the exported ZIP file\.
3. Upload validation data, evaluate the prompt template, and save the results to the validation project\.
4. From the project, promote the prompt template to a new or existing deployment space that is designated as a **Production** stage\. The stage is assigned when the space is created and cannot be updated, so create a new space if you do not have a production space available\. 
5. After you promote the prompt template to a deployment space, you can configure continuous monitoring\.
6. Details from monitoring the prompt template in a production space are displayed in the **Operate** lifecycle stage of the AI use case\.
<!-- </ol> -->
## Learn more ##
<!-- <ul> -->
* See [Deploying a prompt template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/prompt-template-deploy.html) for details on preparing a prompt template for production\.
* See [Evaluating prompt templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-eval-prompt.html) for details on evaluating a prompt template for dimensions such as accuracy or to test for the presence of hateful or abusive speech\.
<!-- </ul> -->
**Parent topic:**[Tracking assets in an AI use case](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-tracking-overview.html)
<!-- </article "role="article" "> -->
|
F30CF59ADFFCBE4164483B5A63260724A1DFC7CA | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-tracking-overview.html?context=cdpaas&locale=en | Tracking assets in an AI use case | Tracking assets in an AI use case
Track machine learning models or prompt templates in AI use cases to capture details about them in factsheets. Use the information collected in the AI use case to monitor the progress of assets through the AI lifecycle, from request to production.
Define an AI use case to identify a business problem and request a solution. A solution might be a predictive machine learning model or a generative AI prompt template. When an asset is developed, associate it with the use case to capture details about the asset in factsheets. As the asset moves through the AI lifecycle, from development to testing and then to production, the factsheets collect the data to support governance or compliance goals.
Creating approaches to compare ways to solve a problem
Each AI use case contains at least one approach. An approach is one facet of the solution to the business problem represented by the AI use case. For example, you might create two approaches that use different frameworks for predictive models to see which one performs best. Or, create approaches to track several prompt templates in a use case.
Approaches also capture version information. The same version number is applied to all assets in an approach. If you have a stable version of an asset, you might maintain that version in an approach and create a new approach for the next round of iteration and experimentation.
This use case includes three approaches for organizing three prompt templates for an insurance claims processing use case:

Adding assets to a use case
You can track these assets in an AI use case:
* [Prompt templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-track-prompt-temp.html) include the prompt input for a foundation model and variables that are defined to make the prompt reusable for generating new output.
* [Machine learning models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-track-ml-model.html) that are created by using a Watson Machine Learning tool such as AutoAI or SPSS Modeler.
* [External models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-external-models.html) are models that are created in Jupyter Notebooks or models that are created by using a third-party machine learning provider.
Parent topic:[Governing assets in AI use cases](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-use-cases.html)
| # Tracking assets in an AI use case #
Track machine learning models or prompt templates in AI use cases to capture details about them in factsheets\. Use the information collected in the AI use case to monitor the progress of assets through the AI lifecycle, from request to production\.
Define an AI use case to identify a business problem and request a solution\. A solution might be a predictive machine learning model or a generative AI prompt template\. When an asset is developed, associate it with the use case to capture details about the asset in factsheets\. As the asset moves through the AI lifecycle, from development to testing and then to production, the factsheets collect the data to support governance or compliance goals\.
## Creating approaches to compare ways to solve a problem ##
Each AI use case contains at least one *approach*\. An approach is one facet of the solution to the business problem represented by the AI use case\. For example, you might create two approaches that use different frameworks for predictive models to see which one performs best\. Or, create approaches to track several prompt templates in a use case\.
Approaches also capture version information\. The same version number is applied to all assets in an approach\. If you have a stable version of an asset, you might maintain that version in an approach and create a new approach for the next round of iteration and experimentation\.
This use case includes three approaches for organizing three prompt templates for an insurance claims processing use case:

## Adding assets to a use case ##
You can track these assets in an AI use case:
<!-- <ul> -->
* [Prompt templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-track-prompt-temp.html) include the prompt input for a foundation model and variables that are defined to make the prompt reusable for generating new output\.
* [Machine learning models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-track-ml-model.html) that are created by using a Watson Machine Learning tool such as AutoAI or SPSS Modeler\.
* [External models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-external-models.html) are models that are created in Jupyter Notebooks or models that are created by using a third\-party machine learning provider\.
<!-- </ul> -->
**Parent topic:**[Governing assets in AI use cases](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-use-cases.html)
<!-- </article "role="article" "> -->
|
8BE1A39CDBAAA858051954548474DD3E307B20CB | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-use-cases.html?context=cdpaas&locale=en | Setting up an AI use case | Setting up an AI use case
Create an AI use case to define a business problem and track the related AI assets through their lifecycle. View details about governed assets or generate reports to help meet governance and compliance goals.
Creating AI use cases in an inventory
An inventory presents a view of all the AI use cases that are assigned to it and that you can access. Use multiple inventories to manage groups of AI use cases. For example, you might create an inventory for governing prompt templates and another for governing machine learning assets. Add collaborators to inventories so they can view or contribute to AI use cases.
Before you begin
* Enable watsonx.governance and provision Watson OpenScale.
* You must have access to an existing inventory or have sufficient access to [create a new inventory](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-inventory-manage.html).
For details on watsonx.governance roles and managing access for governance, see [Collaboration roles for governance](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-collab-roles.html). If you do not have sufficient access to create or contribute to an inventory, contact your administrator.
Viewing AI use cases
1. Click AI use cases from the navigation menu to view all existing AI use cases you can access, or click Request a model with an AI use case from the home page. From the primary view, you can search for a specific use case or filter the view to focus on certain use cases. For example, filter the view by Inventory to view all the AI use cases in a particular inventory.

2. Click the name of an AI use case to open it and view the details on these tabs:
* Overview shows the essential details for the use case.
* Lifecycle shows the assets that are tracked in the use case, organized by the phases of the AI lifecycle.
* Access lists collaborators for the use case and assigned roles.
3. Click the name of an asset to view the associated factsheet.
Generating a report from a use case
You can generate reports from use cases or factsheets to share or preserve records. These default reports are available:
* Basic report contains the set of facts visible on the Overview and Lifecycle tabs.
* Full report contains all facts about the use case and the models, prompt templates, and deployments it contains.
The inventory admin can customize reports to include custom branding or to change the fields included in reports. For details, see [Customizing report templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-manage-reports.html). To create a report:
1. Open a use case in an inventory.
2. Click the Export report icon to generate a PDF record of the use case.
3. Choose a format option and export the report.
Creating an AI use case
1. Click AI use cases from the navigation menu.
2. Click New AI use case.
3. Enter a name and choose an inventory for the use case. If you do not have access to an inventory, you must create one before you can define a use case. See [Managing an inventory for AI use cases](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-inventory-manage.html) for details.
4. Complete optional fields as needed:
Option Notes
Description Define the business problem and provide any details about the proposed solution.
Risk level Assign a risk level that reflects the nature of the business problem and the anticipated solution according to your governance policies. For example, assign a risk level of High for a model that processes sensitive personal data.
Supporting data Enter links to documents that support or clarify the purpose of the use case.
Owner For a use case with multiple owners, you can edit ownership.
Status By default, a new AI use case is assigned a default status, as it is typically waiting for assets to be added for tracking. You can manually change the status. For example, change to Awaiting development if you do not require any additional review or approval for a requested model. Change to Developed if you already have a model to add to governance. Review the complete list of status options in the following section.
Tags Assign or create tags to make your AI use cases easier to find or group.
Use case status details
Update the status field to provide users of the use case an immediate reflection of the current state.
Status Description
Ready for use case approval Use case is defined and ready for review
Use case approved Use case ready for model or prompt template development
Use case rejected Use case not ready for model or prompt development
Awaiting development Awaiting delivery of AI asset (model or prompt)
Development in progress AI asset (model or prompt) in development
Developed Trained model or prompt template added to use case
Ready for AI asset validation AI asset ready for testing or evaluation
Validation complete AI asset is tested or evaluated
Ready for AI asset approval Waiting for approval to move AI asset to production
Promote to production space AI asset is promoted to a production environment
Deployed for operation AI asset deployed for production
In operation AI asset is live in a production environment
Under revision AI asset requires updating
Decommissioned AI asset removed from production environment
Adding collaborators to an AI use case
Add collaborators so they can view or contribute to the AI use case.
1. From the Access tab of the AI use case, click Add members.
2. Search for a member by name or email address.
3. Assign an access level and click Add. For details on permissions, see [Collaboration roles for governance](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-collab-roles.html).
Next steps
After you create an AI use case, use it to track assets. Depending on your governance strategy, your next step might be to:
* Send a link to the use case to a reviewer for approval.
* Send a link to a data scientist to create the requested asset.
* [Add an asset](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-tracking-overview.html) for tracking in the use case.
Parent topic:[Governing assets in AI use cases](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-use-cases.html)
| # Setting up an AI use case #
Create an AI use case to define a business problem and track the related AI assets through their lifecycle\. View details about governed assets or generate reports to help meet governance and compliance goals\.
## Creating AI use cases in an inventory ##
An inventory presents a view of all the AI use cases that are assigned to it and that you can access\. Use multiple inventories to manage groups of AI use cases\. For example, you might create an inventory for governing prompt templates and another for governing machine learning assets\. Add collaborators to inventories so they can view or contribute to AI use cases\.
### Before you begin ###
<!-- <ul> -->
* Enable watsonx\.governance and provision Watson OpenScale\.
* You must have access to an existing inventory or have sufficient access to [create a new inventory](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-inventory-manage.html)\.
<!-- </ul> -->
For details on watsonx\.governance roles and managing access for governance, see [Collaboration roles for governance](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-collab-roles.html)\. If you do not have sufficient access to create or contribute to an inventory, contact your administrator\.
## Viewing AI use cases ##
<!-- <ol> -->
1. Click **AI use cases** from the navigation menu to view all existing AI use cases you can access, or click **Request a model with an AI use case** from the home page\. From the primary view, you can search for a specific use case or filter the view to focus on certain use cases\. For example, filter the view by *Inventory* to view all the AI use cases in a particular inventory\.

2. Click the name of an AI use case to open it and view the details on these tabs:
<!-- <ul> -->
* **Overview** shows the essential details for the use case.
* **Lifecycle** shows the assets that are tracked in the use case, organized by the phases of the AI lifecycle.
* **Access** lists collaborators for the use case and assigned roles.
<!-- </ul> -->
3. Click the name of an asset to view the associated factsheet\.
<!-- </ol> -->
## Generating a report from a use case ##
You can generate reports from use cases or factsheets to share or preserve records\. These default reports are available:
<!-- <ul> -->
* **Basic report** contains the set of facts visible on the Overview and Lifecycle tabs\.
* **Full report** contains all facts about the use case and the models, prompt templates, and deployments it contains\.
<!-- </ul> -->
The inventory admin can customize reports to include custom branding or to change the fields included in reports\. For details, see [Customizing report templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-manage-reports.html)\. To create a report:
<!-- <ol> -->
1. Open a use case in an inventory\.
2. Click the **Export report** icon to generate a PDF record of the use case\.
3. Choose a format option and export the report\.
<!-- </ol> -->
## Creating an AI use case ##
<!-- <ol> -->
1. Click **AI use cases** from the navigation menu\.
2. Click **New AI use case**\.
3. Enter a name and choose an inventory for the use case\. If you do not have access to an inventory, you must create one before you can define a use case\. See [Managing an inventory for AI use cases](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-inventory-manage.html) for details\.
4. Complete optional fields as needed:
<!-- </ol> -->
<!-- <table> -->
| Option | Notes |
| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Description | Define the business problem and provide any details about the proposed solution\. |
| Risk level | Assign a risk level that reflects the nature of the business problem and the anticipated solution according to your governance policies\. For example, assign a risk level of *High* for a model that processes sensitive personal data\. |
| Supporting data | Enter links to documents that support or clarify the purpose of the use case\. |
| Owner | For a use case with multiple owners, you can edit ownership\. |
| Status | By default, a new AI use case is assigned a default status, as it is typically waiting for assets to be added for tracking\. You can manually change the status\. For example, change to *Awaiting development* if you do not require any additional review or approval for a requested model\. Change to *Developed* if you already have a model to add to governance\. Review the complete list of status options in the following section\. |
| Tags | Assign or create tags to make your AI use cases easier to find or group\. |
<!-- </table ""> -->
### Use case status details ###
Update the status field to provide users of the use case an immediate reflection of the current state\.
<!-- <table> -->
| Status | Description |
| ----------------------------- | ------------------------------------------------------- |
| Ready for use case approval | Use case is defined and ready for review |
| Use case approved | Use case ready for model or prompt template development |
| Use case rejected | Use case not ready for model or prompt development |
| Awaiting development | Awaiting delivery of AI asset (model or prompt) |
| Development in progress | AI asset (model or prompt) in development |
| Developed | Trained model or prompt template added to use case |
| Ready for AI asset validation | AI asset ready for testing or evaluation |
| Validation complete | AI asset is tested or evaluated |
| Ready for AI asset approval | Waiting for approval to move AI asset to production |
| Promote to production space | AI asset is promoted to a production environment |
| Deployed for operation | AI asset deployed for production |
| In operation | AI asset is live in a production environment |
| Under revision | AI asset requires updating |
| Decommissioned | AI asset removed from production environment |
<!-- </table ""> -->
## Adding collaborators to an AI use case ##
Add collaborators so they can view or contribute to the AI use case\.
<!-- <ol> -->
1. From the **Access** tab of the AI use case, click **Add members**\.
2. Search for a member by name or email address\.
3. Assign an access level and click **Add\.** For details on permissions, see [Collaboration roles for governance](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-collab-roles.html)\.
<!-- </ol> -->
## Next steps ##
After you create an AI use case, use it to track assets\. Depending on your governance strategy, your next step might be to:
<!-- <ul> -->
* Send a link to the use case to a reviewer for approval\.
* Send a link to a data scientist to create the requested asset\.
* [Add an asset](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-tracking-overview.html) for tracking in the use case\.
<!-- </ul> -->
**Parent topic:**[Governing assets in AI use cases](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-use-cases.html)
<!-- </article "role="article" "> -->
|
1F14865C04B28B02EE0760D7099554A916E26926 | https://dataplatform.cloud.ibm.com/docs/content/wsj/catalog/platform-assets.html?context=cdpaas&locale=en | Creating the catalog for platform connections | Creating the catalog for platform connections
You can create a Platform assets catalog to share connections across your organization. Any user who you add as a collaborator to the catalog can see these connections.
You can add an unlimited number of collaborators and connection assets to the Platform assets catalog.
If you are signed up for both Cloud Pak for Data as a Service and watsonx, you share a single Platform assets catalog between the two platforms. Any connection assets that you add to the catalog on either platform are available in both platforms. However, if you add other types of assets to the Platform assets catalog on Cloud Pak for Data as a Service, you can't access those types of assets on watsonx.
Requirements
Before you create the Platform assets catalog, understand the required permissions and the requirements for storage and duplicate handling.
Required permission : You must have the IAM Administrator role in the IBM Cloud account. : To view your roles, go to Administration > Access (IAM). Then select Roles in the IBM Cloud console.
Storage requirement : You must specify the IBM Cloud Object Storage instance configured during [IBM Cloud account setup](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/storage-options.html). If you are not an administrator for the IBM Cloud Object Storage instance, it must be [configured to allow catalog creation](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html).
Duplicate asset handling : Assets are considered duplicates if they have the same asset type and the same name. : Select how to handle duplicate assets: : - Update original assets : - Overwrite original assets : - Allow duplicates (default) : - Preserve original assets and reject duplicates : You can change the duplicate handling preferences at any time on the catalog Settings page.
Creating the Platform assets catalog
To create the Platform assets catalog:
1. From the main menu, choose Data > Platform connections.
2. Click Create catalog.
3. Select the IBM Cloud Object Storage service. If you don't have an existing service instance, [create an IBM Cloud Object Storage service instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html) and then refresh the page.
4. Click Create. The Platform assets catalog is created in a dedicated storage bucket. Initially, you are the only collaborator in the catalog.
5. Add collaborators to the catalog. Go to the Access control page in the catalog and add collaborators. You assign each user a [role](https://dataplatform.cloud.ibm.com/docs/content/wsj/catalog/platform-assets.html?context=cdpaas&locale=en#roles):
* Assign the Admin role to at least one other user so that you are not the only person who can add collaborators.
* Assign the Editor role to all users who are responsible for adding connections to the catalog.
* Assign the Viewer role to the users who need to find connections and use them in projects.
You can give all the users access to the Platform assets catalog by assigning the Viewer role to the Public Access group. By default, all users in your account are members of the Public Access group. See [add collaborators](https://dataplatform.cloud.ibm.com/docs/content/wsj/catalog/catalog-collaborators.html).
6. Add connections to the catalog. You can delegate this step to other collaborators who have the Admin or Editor role. See [Add connections to the Platform assets catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html).
Platform assets catalog collaborator roles
The Platform assets catalog roles provide the permissions in the following table.
Action Viewer Editor Admin
View connections ✓ ✓ ✓
Use connections in projects ✓ ✓ ✓
Use connections in spaces ✓ ✓ ✓
View collaborators ✓ ✓ ✓
Add connections ✓ ✓
Modify connections ✓ ✓
Delete connections ✓ ✓
Add or remove collaborators ✓
Change collaborator roles ✓
Delete the catalog ✓
Parent topic:[Setting up the platform for administrators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-platform.html)
| # Creating the catalog for platform connections #
You can create a Platform assets catalog to share connections across your organization\. Any user who you add as a collaborator to the catalog can see these connections\.
You can add an unlimited number of collaborators and connection assets to the Platform assets catalog\.
If you are signed up for both Cloud Pak for Data as a Service and watsonx, you share a single Platform assets catalog between the two platforms\. Any connection assets that you add to the catalog on either platform are available in both platforms\. However, if you add other types of assets to the Platform assets catalog on Cloud Pak for Data as a Service, you can't access those types of assets on watsonx\.
## Requirements ##
Before you create the Platform assets catalog, understand the required permissions and the requirements for storage and duplicate handling\.
**Required permission** : You must have the IAM Administrator role in the IBM Cloud account\. : To view your roles, go to **Administration > Access (IAM)**\. Then select **Roles** in the IBM Cloud console\.
**Storage requirement** : You must specify the IBM Cloud Object Storage instance configured during [IBM Cloud account setup](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/storage-options.html)\. If you are not an administrator for the IBM Cloud Object Storage instance, it must be [configured to allow catalog creation](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html)\.
**Duplicate asset handling** : Assets are considered duplicates if they have the same asset type and the same name\. : Select how to handle duplicate assets: : \- Update original assets : \- Overwrite original assets : \- Allow duplicates (default) : \- Preserve original assets and reject duplicates : You can change the duplicate handling preferences at any time on the catalog **Settings** page\.
## Creating the Platform assets catalog ##
To create the Platform assets catalog:
<!-- <ol> -->
1. From the main menu, choose **Data > Platform connections**\.
2. Click **Create catalog**\.
3. Select the IBM Cloud Object Storage service\. If you don't have an existing service instance, [create an IBM Cloud Object Storage service instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html) and then refresh the page (or create the instance from the command line, as shown after these steps)\.
4. Click **Create**\. The Platform assets catalog is created in a dedicated storage bucket\. Initially, you are the only collaborator in the catalog\.
5. Add collaborators to the catalog\. Go to the **Access control** page in the catalog and add collaborators\. You assign each user a [role](https://dataplatform.cloud.ibm.com/docs/content/wsj/catalog/platform-assets.html?context=cdpaas&locale=en#roles):
<!-- <ul> -->
* Assign the **Admin** role to at least one other user so that you are not the only person who can add collaborators.
* Assign the **Editor** role to all users who are responsible for adding connections to the catalog.
* Assign the **Viewer** role to the users who need to find connections and use them in projects.
<!-- </ul> -->
You can give all the users access to the Platform assets catalog by assigning the **Viewer** role to the **Public Access** group. By default, all users in your account are members of the **Public Access** group. See [add collaborators](https://dataplatform.cloud.ibm.com/docs/content/wsj/catalog/catalog-collaborators.html).
6. Add connections to the catalog\. You can delegate this step to other collaborators who have the **Admin** or **Editor** role\. See [Add connections to the Platform assets catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html)\.
<!-- </ol> -->
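If you prefer the command line for step 3, you can create the IBM Cloud Object Storage instance with the IBM Cloud CLI\. The following is a sketch; the instance name is a placeholder, and you should confirm that the plan and region values are correct for your account\.

```sh
# Creates a Cloud Object Storage instance named my-cos-instance (placeholder)
# on the Lite plan in the global region; adjust the plan and region as needed.
ibmcloud resource service-instance-create my-cos-instance cloud-object-storage lite global
```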
### Platform assets catalog collaborator roles ###
The Platform assets catalog roles provide the permissions in the following table\.
<!-- <table> -->
| Action | Viewer | Editor | Admin |
| --------------------------- | ------ | ------ | ----- |
| View connections | ✓ | ✓ | ✓ |
| Use connections in projects | ✓ | ✓ | ✓ |
| Use connections in spaces | ✓ | ✓ | ✓ |
| View collaborators | ✓ | ✓ | ✓ |
| Add connections | | ✓ | ✓ |
| Modify connections | | ✓ | ✓ |
| Delete connections | | ✓ | ✓ |
| Add or remove collaborators | | | ✓ |
| Change collaborator roles | | | ✓ |
| Delete the catalog | | | ✓ |
<!-- </table ""> -->
**Parent topic:**[Setting up the platform for administrators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-platform.html)
<!-- </article "role="article" "> -->
|
58B70FAC914F72C4AE9116DE6E26880E1CEDCFF4 | https://dataplatform.cloud.ibm.com/docs/content/wsj/console/stopapps.html?context=cdpaas&locale=en | Stop using services or IBM watsonx | Stop using services or IBM watsonx
You can stop using any services or IBM watsonx at any time, whether you are accessing the services from your own or someone else's IBM Cloud account.
The method you choose to stop using IBM watsonx depends on your goal:
* To remove your access to IBM watsonx in all IBM Cloud accounts that you belong to, [leave IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/stopapps.html?context=cdpaas&locale=en#deactivate).
* To stop the use of a service in your IBM Cloud account, [delete your service](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/stopapps.html?context=cdpaas&locale=en#deleteapps) in your IBM Cloud account.
* To stop all use of all IBM Cloud services in your account, [delete your IBM Cloud account](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/stopapps.html?context=cdpaas&locale=en#deletecloud).
When other users in your account stop using IBM watsonx, their resources are cleaned up appropriately.
Leave IBM watsonx
If you want to leave IBM watsonx:
1. Log in to IBM watsonx.
2. Click your avatar and then Profile.
3. On the Profile page, click Leave watsonx. If you change your mind about leaving, you can [sign up to re-activate your profile](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/signup-wx.html).
Use this process when you want to stop using IBM watsonx, you are not the account owner, and you want to keep your IBM Cloud account.
These are the results when you leave IBM watsonx:
* Your profile is deleted and you can't log in to IBM watsonx.
* Your projects and deployment spaces remain until you delete your services.
* Your IBM Cloud account remains active.
* Your IBM Cloud services are not affected.
Delete a service
To remove any of your services:
1. Log in to IBM watsonx.
2. Click Administration > Services > Service instances.
3. Click the menu next to the service you want to remove and choose Delete.
This action is the same as deleting the service in IBM Cloud. If you change your mind within 30 days, you can get your services and data back by reprovisioning the service.
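If you manage many instances, you can also delete a service instance programmatically through the IBM Cloud Resource Controller API. The Python sketch below is a minimal example under stated assumptions: you have an IBM Cloud API key with sufficient access, and the instance GUID placeholder is replaced with a real value from the console or CLI.
```python
import requests

API_KEY = "YOUR_IBM_CLOUD_API_KEY"  # placeholder
INSTANCE_ID = "YOUR_INSTANCE_GUID"  # placeholder: GUID of the service instance

# Exchange the API key for an IAM bearer token.
token = requests.post(
    "https://iam.cloud.ibm.com/identity/token",
    data={
        "grant_type": "urn:ibm:params:oauth:grant-type:apikey",
        "apikey": API_KEY,
    },
).json()["access_token"]

# Delete the service instance; this has the same effect as
# choosing Delete in the console.
resp = requests.delete(
    f"https://resource-controller.cloud.ibm.com/v2/resource_instances/{INSTANCE_ID}",
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()  # deletion is accepted asynchronously (2xx on success)
```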
These are the results when you delete the Watson Studio service:
* Your IBM watsonx profile remains.
* You can no longer access that service from that IBM Cloud account.
* You can still access your services from other accounts.
* Your billing for that service stops.
* Your data in IBM Cloud Object Storage remains.
* Your projects remain.
* You remain a collaborator in all your projects in other IBM Cloud accounts.
Closing an IBM Cloud account
If you want to stop using IBM Cloud services altogether and delete all your data, you can deactivate your IBM Cloud account. Follow these steps to close your Lite account:
1. Sign in to your IBM Cloud account.
2. In the IBM Cloud console, go to the Manage > Account > Account settings page.
3. Click Close Account. After an account is closed for 30 days, all data is deleted and all services are removed.
If you are not the owner of the account, you do not see a Close Account button.
These are the results when your IBM Cloud account is in the Canceled state:
* All your data in IBM Cloud is permanently deleted in 30 days.
* The projects and catalogs in your account are deleted.
* Your IBM watsonx profile and your IBM Cloud profile are deleted.
* All the IBM Cloud services in your account are deleted in 30 days.
* You are removed as a collaborator from projects and catalogs in other accounts within 30 days.
If you want to close a Pay-As-You-Go or Subscription account, contact [Support](https://cloud.ibm.com/unifiedsupport/supportcenter).
Learn more
* [Removing users from the account or from the workspace](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-removeusers.html)
* [IBM Cloud docs: Leaving an account](https://cloud.ibm.com/docs/account?topic=account-account-membership)
* [IBM Cloud docs: Managing your account settings](https://cloud.ibm.com/docs/account?topic=account-account_settings)
Parent topic:[Managing IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html)
| # Stop using services or IBM watsonx #
You can stop using any services or IBM watsonx at any time, whether you are accessing the services from your own or someone else's IBM Cloud account\.
The method you choose to stop using IBM watsonx depends on your goal:
<!-- <ul> -->
* To remove your access to IBM watsonx in all IBM Cloud accounts that you belong to, [leave IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/stopapps.html?context=cdpaas&locale=en#deactivate)\.
* To stop the use of a service in your IBM Cloud account, [delete your service](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/stopapps.html?context=cdpaas&locale=en#deleteapps) in your IBM Cloud account\.
* To stop all use of all IBM Cloud services in your account, [delete your IBM Cloud account](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/stopapps.html?context=cdpaas&locale=en#deletecloud)\.
<!-- </ul> -->
When other users in your account stop using IBM watsonx, their resources are cleaned up appropriately\.
### Leave IBM watsonx ###
If you want to leave IBM watsonx:
<!-- <ol> -->
1. Log in to IBM watsonx\.
2. Click your avatar and then **Profile**\.
3. On the **Profile** page, click **Leave watsonx**\. If you change your mind about leaving, you can [sign up to re\-activate your profile](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/signup-wx.html)\.
<!-- </ol> -->
Use this process when you want to stop using IBM watsonx, you are not the account owner, and you want to keep your IBM Cloud account\.
These are the results when you leave IBM watsonx:
<!-- <ul> -->
* Your profile is deleted and you can't log in to IBM watsonx\.
* Your projects and deployment spaces remain until you delete your services\.
* Your IBM Cloud account remains active\.
* Your IBM Cloud services are not affected\.
<!-- </ul> -->
## Delete a service ##
To remove any of your services:
<!-- <ol> -->
1. Log in to IBM watsonx\.
2. Click **Administration > Services > Service instances**\.
3. Click the menu next to the service you want to remove and choose **Delete**\.
<!-- </ol> -->
This action is the same as deleting the service in IBM Cloud\. If you change your mind within 30 days, you can get your services and data back by reprovisioning the service\.
These are the results when you delete the Watson Studio service:
<!-- <ul> -->
* Your IBM watsonx profile remains\.
* You can no longer access that service from that IBM Cloud account\.
* You can still access your services from other accounts\.
* Your billing for that service stops\.
* Your data in IBM Cloud Object Storage remains\.
* Your projects remain\.
* You remain a collaborator in all your projects in other IBM Cloud accounts\.
<!-- </ul> -->
## Closing an IBM Cloud account ##
If you want to stop using IBM Cloud services altogether and delete all your data, you can deactivate your IBM Cloud account\. Follow these steps to close your Lite account:
<!-- <ol> -->
1. Sign in to your IBM Cloud account\.
2. In the IBM Cloud console, go to the **Manage > Account > Account settings** page\.
3. Click **Close Account**\. After an account is closed for 30 days, all data is deleted and all services are removed\.
<!-- </ol> -->
If you are not the owner of the account, you do not see a **Close Account** button\.
These are the results when your IBM Cloud account is in the Canceled state:
<!-- <ul> -->
* All your data in IBM Cloud is permanently deleted in 30 days\.
* The projects and catalogs in your account are deleted\.
* Your IBM watsonx profile and your IBM Cloud profile are deleted\.
* All the IBM Cloud services in your account are deleted in 30 days\.
* You are removed as a collaborator from projects and catalogs in other accounts within 30 days\.
<!-- </ul> -->
If you want to close a Pay\-As\-You\-Go or Subscription account, contact [Support](https://cloud.ibm.com/unifiedsupport/supportcenter)\.
## Learn more ##
<!-- <ul> -->
* [Removing users from the account or from the workspace](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-removeusers.html)
* [IBM Cloud docs: Leaving an account](https://cloud.ibm.com/docs/account?topic=account-account-membership)
* [IBM Cloud docs: Managing your account settings](https://cloud.ibm.com/docs/account?topic=account-account_settings)
<!-- </ul> -->
**Parent topic:**[Managing IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html)
<!-- </article "role="article" "> -->
|
F964EFDA57733A3B39890B30FF22BD5C47EED893 | https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html?context=cdpaas&locale=en | Managing IBM watsonx | Managing IBM watsonx
As the owner or an administrator of the IBM Cloud account, you can monitor and manage services and the platform.
* [Configuring services](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html?context=cdpaas&locale=en#core)
An IBM Cloud account administrator is a user in the account who was assigned the Administrator role in IBM Cloud for the All Identity and Access enabled services option in IAM. If you're not sure of your roles, see [Determine your roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/your-roles.html).
You perform some administrative tasks within IBM watsonx, and others in IBM Cloud. Some tasks require steps in both areas, depending on your goals.
Configuring services
The services that are included in watsonx.ai are Watson Studio and Watson Machine Learning.
Task In IBM watsonx? In IBM Cloud?
[Manage services in IBM Cloud](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/create-services.html#manage) ✓ ✓
[Switch service region](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html?context=cdpaas&locale=en#region) ✓
[Upgrade your IBM Cloud account](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/upgrade.html#account) ✓ ✓
[Upgrade your services](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/upgrade.html#app) ✓
[Configure private service endpoints](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/endpoints-vrf.html) ✓
[Remove users](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-removeusers.html) ✓ ✓
[Stop using IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/stopapps.html) ✓ ✓
[Monitor account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html) ✓ ✓
[View and manage environment runtimes](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html#monitor-cuh) ✓
[Set up IBM Cloud Object Storage for use with IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html) ✓ ✓
[Manage users and access](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-access.html) ✓ ✓
[Set resources scope](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.html) ✓
[Set type of credentials for connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.html) ✓
[Manage IBM Cloud account in IBM Cloud](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.html) ✓
[Manage all projects in the account](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-manage-projects.html) ✓ ✓
[Secure IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html) ✓ ✓
[Set up IBM Cloud App ID (beta)](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-appid.html) ✓
[Delegate encryption keys for IBM Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html#byok) ✓ ✓
Switch service region
The platform and services are available in multiple IBM Cloud service regions and you can have services in more than one region. Your projects, catalogs, and data are specific to the region in which they were saved and can be accessed only from your services in that region. If you provision Watson Studio services in both the Dallas and the Frankfurt regions, you can't access projects that you created in the Frankfurt region from the Dallas region.
To switch your service region:
1. Log in to IBM watsonx.
2. Click the Region Switcher in the home page header.
3. Select the region that contains your services and projects.
On wider browser windows, you can select the region from the dropdown menu.
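If you call the platform APIs directly, the region is selected by the endpoint you use rather than by the Region Switcher. The following Python sketch lists assumed regional base URLs; confirm the hostnames for your regions in the service documentation before relying on them.
```python
# Regional base URLs for the data platform APIs (assumed; confirm for your region).
DATAPLATFORM_ENDPOINTS = {
    "us-south": "https://api.dataplatform.cloud.ibm.com",       # Dallas
    "eu-de": "https://api.eu-de.dataplatform.cloud.ibm.com",    # Frankfurt
    "jp-tok": "https://api.jp-tok.dataplatform.cloud.ibm.com",  # Tokyo
}

def base_url(region: str) -> str:
    """Return the API base URL for a region; projects are region-specific."""
    try:
        return DATAPLATFORM_ENDPOINTS[region]
    except KeyError:
        raise ValueError(f"No known endpoint for region {region!r}") from None

print(base_url("eu-de"))
```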
Learn more
* [Watson Studio offering plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/ws-plans.html)
* [Watson Machine Learning plans and compute usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html)
* [Roles in the platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/roles.html)
Parent topic:[Administration](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/administer-accounts.html)
| # Managing IBM watsonx #
As the owner or an administrator of the IBM Cloud account, you can monitor and manage services and the platform\.
<!-- <ul> -->
* [Configuring services](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html?context=cdpaas&locale=en#core)
<!-- </ul> -->
An IBM Cloud account administrator is a user in the account who was assigned the **Administrator** role in IBM Cloud for the All Identity and Access enabled services option in IAM\. If you're not sure of your roles, see [Determine your roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/your-roles.html)\.
You perform some administrative tasks within IBM watsonx, and others in IBM Cloud\. Some tasks require steps in both areas, depending on your goals\.
## Configuring services ##
The services that are included in watsonx\.ai are Watson Studio and Watson Machine Learning\.
<!-- <table> -->
| Task | In IBM watsonx? | In IBM Cloud? |
| --------------------------------------------------------------- | --------------- | ------------- |
| [Manage services in IBM Cloud](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/create-services.html#manage) | ✓ | ✓ |
| [Switch service region](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html?context=cdpaas&locale=en#region) | ✓ | |
| [Upgrade your IBM Cloud account](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/upgrade.html#account) | ✓ | ✓ |
| [Upgrade your services](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/upgrade.html#app) | ✓ | |
| [Configure private service endpoints](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/endpoints-vrf.html) | | ✓ |
| [Remove users](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-removeusers.html) | ✓ | ✓ |
| [Stop using IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/stopapps.html) | ✓ | ✓ |
| [Monitor account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html) | ✓ | ✓ |
| [View and manage environment runtimes](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html#monitor-cuh) | ✓ | |
| [Set up IBM Cloud Object Storage for use with IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html) | ✓ | ✓ |
| [Manage users and access](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-access.html) | ✓ | ✓ |
| [Set resources scope](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.html) | ✓ | |
| [Set type of credentials for connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.html) | ✓ | |
| [Manage IBM Cloud account in IBM Cloud](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.html) | | ✓ |
| [Manage all projects in the account](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-manage-projects.html) | ✓ | ✓ |
| [Secure IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html) | ✓ | ✓ |
| [Set up IBM Cloud App ID (beta)](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-appid.html) | | ✓ |
| [Delegate encryption keys for IBM Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html#byok) | ✓ | ✓ |
<!-- </table ""> -->
## Switch service region ##
The platform and services are available in multiple IBM Cloud service regions and you can have services in more than one region\. Your projects, catalogs, and data are specific to the region in which they were saved and can be accessed only from your services in that region\. If you provision Watson Studio services in both the Dallas and the Frankfurt regions, you can't access projects that you created in the Frankfurt region from the Dallas region\.
To switch your service region:
<!-- <ol> -->
1. Log in to IBM watsonx\.
2. Click the **Region Switcher** in the home page header\.
3. Select the region that contains your services and projects\.
<!-- </ol> -->
On wider browser windows, you can select the region from the dropdown menu\.
## Learn more ##
<!-- <ul> -->
* [Watson Studio offering plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/ws-plans.html)
* [Watson Machine Learning plans and compute usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html)
* [Roles in the platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/roles.html)
<!-- </ul> -->
**Parent topic:**[Administration](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/administer-accounts.html)
<!-- </article "role="article" "> -->
|
39AD64C9004E83507A968C5C0B1C8EF952B3EACE | https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html?context=cdpaas&locale=en | Setting up IBM Cloud Object Storage for use with IBM watsonx | Setting up IBM Cloud Object Storage for use with IBM watsonx
An IBM Cloud Object Storage service instance is provisioned automatically with a Lite plan when you join IBM watsonx. Workspaces, such as projects, require IBM Cloud Object Storage to store files that are related to assets, including uploaded data files or notebook files.
You can also connect to IBM Cloud Object Storage as a data source. See [IBM Cloud Object Storage connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html).
Overview of setting up Cloud Object Storage
To set up Cloud Object Storage, complete these tasks:
1. [Generate an administrative key](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html?context=cdpaas&locale=en#gen-key).
2. [Ensure that Global location is set in each user's profile](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html?context=cdpaas&locale=en#global).
3. [Provide access to Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html?context=cdpaas&locale=en#access).
* [Assign roles to enable access](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html?context=cdpaas&locale=en#assign).
* [Enable storage delegation](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html?context=cdpaas&locale=en#stor-del).
4. [Optional: Protect sensitive data](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html?context=cdpaas&locale=en#protect).
5. [Optional: Encrypt your IBM Cloud Object Storage instance with your own key](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html?context=cdpaas&locale=en#byok).
Watch the following video to see how administrators set up Cloud Object Storage for use with Cloud Pak for Data as a Service.
This video provides a visual method to learn the concepts and tasks in this documentation.
Generate an administrative key
You generate an administrative key for Cloud Object Storage by creating an initial test project. The test project can be deleted after its creation. Its sole purpose is to generate the key.
To automatically generate the administrative key for your Cloud Object Storage instance:
1. From the IBM watsonx main menu, select Projects > View all projects and then click New project.
2. Specify to create an empty project.
3. Enter a project name, such as "Test Project".
4. Select your Cloud Object Storage instance.
5. Click Create. The administrative key is generated.
6. Delete the test project.
Ensure that Global location is set for Cloud Object Storage in each user's profile
Cloud Object Storage requires the Global location to be configured in each user's profile. The Global location is configured automatically, but it might be changed by mistake. An error occurs when a project is created if the Global location is not enabled in the user's profile. Ask users to check that Global location is enabled.
[Check for the Global location in each user's profile](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.html).
Provide access to Cloud Object Storage
You can provide different levels of access to Cloud Object Storage for people who need to work in IBM watsonx. Using the storage delegation setting on the Cloud Object Storage instance, you can provide quick access to most users to create projects and catalogs. However, another option is to provide targeted access by using IAM roles and access groups. Role-based access enacts stricter controls for viewing the Cloud Object Storage instance directly and for creating projects and catalogs. If you decide to provide controlled access with IAM roles and access groups, you must disable storage delegation for the Cloud Object Storage instance.
You enable storage delegation for the Cloud Object Storage instance to provide access to nonadministrative users. Users with minimal IAM permissions can create projects and catalogs, which automatically create buckets in the Cloud Object Storage instance. See [Enable storage delegation for nonadministrative users](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html?context=cdpaas&locale=en#stor-del).
You provide more controlled access with IAM roles and access groups. For example, the Cloud Object Storage Manager role provides permissions to create projects and spaces together with the corresponding buckets in the Cloud Object Storage instance. It also provides permissions to view all buckets and encryption root keys in the Cloud Object Storage instance, to view the metadata for a bucket and delete buckets, and to perform other administrative tasks that are related to buckets. See [Assign roles to enable access](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html?context=cdpaas&locale=en#assign).
No role assignments are needed for collaborators who work with the data in a project or catalog. Users who are given collaborator roles can work in the project or catalog without storage delegation or an IAM role. See [Project collaborator roles and permissions](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborator-permissions.html).
Assign roles to enable access
The IBM Cloud account owner or administrator assigns appropriate roles to users to provide access to Cloud Object Storage. Storage delegation must be disabled when using role-based access.
Rather than assigning each individual user a set of roles, you can create an access group. Access groups expedite role assignments by grouping permissions. For instructions on creating access groups, see [IBM Cloud docs: Setting up access groups](https://cloud.ibm.com/docs/account?topic=account-groups&interface=ui).
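Access groups can also be created programmatically with the IAM Access Groups API. The following Python sketch is a minimal example under stated assumptions: you already have an IAM bearer token for an account administrator, and the account ID and group name are placeholders.
```python
import requests

TOKEN = "YOUR_IAM_BEARER_TOKEN"  # placeholder: IAM token for an account administrator
ACCOUNT_ID = "YOUR_ACCOUNT_ID"   # placeholder

# Create an access group that will hold Cloud Object Storage permissions.
resp = requests.post(
    "https://iam.cloud.ibm.com/v2/groups",
    params={"account_id": ACCOUNT_ID},
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "name": "cos-project-creators",  # hypothetical group name
        "description": "Users who can create projects and COS buckets",
    },
)
resp.raise_for_status()
print(resp.json()["id"])  # group ID to use when assigning policies
```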
Enable storage delegation
Storage delegation for the Cloud Object Storage instance allows nonadministrative users to create projects, the Platform assets catalog, and the corresponding Cloud Object Storage buckets. Storage delegation provides wide access to Cloud Object Storage and allows users with minimal permissions to create projects. Storage delegation for projects also includes deployment spaces.
To enable storage delegation for the Cloud Object Storage instance:
1. From the navigation menu, select Administration > Configurations and settings > Storage delegation.
2. Set storage delegation for Projects to on.
3. Optional. If you want a non-administrative user to create the Platform assets catalog, set storage delegation for Catalogs to on.

Optional: Encrypt your IBM Cloud Object Storage instance with your own key
Encryption protects the data for your projects and catalogs. Data at rest in Cloud Object Storage is encrypted by default with randomly generated keys that are managed by IBM. For increased protection, you can create and manage your own encryption keys with IBM Key Protect. IBM Key Protect for IBM Cloud is a centralized key management system for generating, managing, and deleting encryption keys used by IBM Cloud services.
For more information, see [IBM Cloud docs: IBM Key Protect for IBM Cloud](https://cloud.ibm.com/docs/services/key-protect?topic=key-protect-about#about).
Not all [Watson Studio service plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/ws-plans.html) support the use of your own encryption keys. Check your specific plan for details.
To encrypt your Cloud Object Storage instance with your own key, you need an instance of the IBM Key Protect service. Although Key Protect is a paid service, each account is allowed five keys without charge.
In IBM Cloud, provision Key Protect and generate a key:
1. Create an instance of Key Protect for your account from the IBM Cloud catalog. See [IBM Cloud docs: Provisioning the Key Protect service](https://cloud.ibm.com/docs/key-protect?topic=key-protect-provision&interface=ui).
2. Grant a service authorization between your Key Protect instance and your Cloud Object Storage instance. Do not associate a key with a bucket. If you don't grant the authorization, users cannot create projects and catalogs with the Cloud Object Storage instance. For more information, see [IBM Cloud docs: Using authorizations to grant access between services](https://cloud.ibm.com/docs/account?topic=account-serviceauth&interface=ui). You can also grant a service authorization for a root key from Watson Studio, by choosing Manage > Access (IAM).
3. Create a root key to protect your Cloud Object Storage instance. See [IBM Cloud docs: Creating root keys](https://cloud.ibm.com/docs/key-protect?topic=key-protect-create-root-keys&interface=ui#create_root_keys).
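You can also create the root key with the Key Protect REST API instead of the console. The Python sketch below assumes the us-south regional endpoint and placeholder credentials; verify the endpoint and request details for your region in the Key Protect documentation.
```python
import requests

TOKEN = "YOUR_IAM_BEARER_TOKEN"            # placeholder
KP_INSTANCE_ID = "YOUR_KP_INSTANCE_GUID"   # placeholder: Key Protect instance GUID
KP_URL = "https://us-south.kms.cloud.ibm.com/api/v2/keys"  # assumed regional endpoint

# Create a non-extractable root key to protect Cloud Object Storage buckets.
resp = requests.post(
    KP_URL,
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Bluemix-Instance": KP_INSTANCE_ID,
        "Content-Type": "application/vnd.ibm.kms.key+json",
    },
    json={
        "metadata": {
            "collectionType": "application/vnd.ibm.kms.key+json",
            "collectionTotal": 1,
        },
        "resources": [
            {
                "type": "application/vnd.ibm.kms.key+json",
                "name": "cos-root-key",   # hypothetical key name
                "extractable": False,     # False makes this a root key
            }
        ],
    },
)
resp.raise_for_status()
```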
In IBM watsonx, add the key to the Cloud Object Storage instance:
1. Select Administration > Configurations and settings > Storage delegation.
2. Slide the toggle for Projects, Catalogs, or both to select data for encryption with your key.
3. Click Add... under Encryption keys to add an encryption key.
4. Select the Key Protect instance and the Key Protect key.
5. Click OK to add the encryption key.
Important: If you change or remove the key, you lose access to existing encrypted data in the Cloud Object Storage instance.
Optional: Protect sensitive data stored on Cloud Object Storage
When you join IBM watsonx, a single Cloud Object Storage instance is automatically provisioned for you. The Cloud Object Storage instance contains separate buckets for each project to store data assets and related files. The ability to create projects and thus to add buckets to Cloud Object Storage is available only to users with the Platform Administrator role and the Manager role for the Cloud Object Storage Service. Although only users with these roles can create projects and their accompanying buckets, any user with the Editor or Viewer role can see the data files. For some businesses, the data files contain sensitive information and require stricter access controls.
Control access to Cloud Object Storage with multiple instances
For paid plans, you can control access to sensitive data files by creating one or more Cloud Object Storage instances and assigning access to specific users. Project creators select the appropriate Cloud Object Storage instance when they create a project. The data assets and files for the project are stored in a bucket in the selected instance. Users with Editor or Viewer roles can work in the projects, but they cannot see the assets directly in the related Cloud Object Storage bucket. You can assign access to a specific Cloud Object Storage instance either to an individual user or to an access group. You must be the account owner or administrator to create service instances and assign access.
Extra fees are not incurred by creating more than one Cloud Object Storage instance because charges are determined by overall storage utilization. The number of instances is not a factor for Cloud Object Storage fees.
Only one instance of Cloud Object Storage is allowed for the Lite plan. You can change your pricing plan from the IBM Cloud catalog.
To create a Cloud Object Storage instance and assign access:
1. Select Services > Services catalog from the navigation menu.
2. Select Storage > Cloud Object Storage.
3. Click Create. A Service name is generated for you on IBM Cloud.
4. Select Manage > Access (IAM).
5. Select Users or Access groups.
6. Click Assign access.
7. In the Services list, choose Cloud Object Storage.
8. For Resources, choose:
* Scope = Specific resources
* Attribute type = Service instance
* Operator = string equals
* Value = name of Cloud Object Storage
9. For Roles and actions, choose:
* Service access = Manager
* Platform access = Administrator
10. Click Add and Assign.
The specified Cloud Object Storage instance can be accessed only by the user or access group with the Service role of Manager and the Platform role of Administrator. Other users can work in the projects but cannot create projects or view assets directly in the bucket.
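The same access assignment can be scripted with the IAM Policy Management API. The Python sketch below uses placeholder IDs together with the standard role CRNs for the Manager service role and the Administrator platform role; treat it as a starting point and verify the request against the IAM API reference.
```python
import requests

TOKEN = "YOUR_IAM_BEARER_TOKEN"      # placeholder
ACCOUNT_ID = "YOUR_ACCOUNT_ID"       # placeholder
USER_IAM_ID = "IBMid-EXAMPLE"        # placeholder: IAM ID of the user
COS_INSTANCE_GUID = "YOUR_COS_GUID"  # placeholder: the COS service instance

# Grant the user Manager (service) and Administrator (platform) access,
# scoped to one Cloud Object Storage instance.
resp = requests.post(
    "https://iam.cloud.ibm.com/v1/policies",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "type": "access",
        "subjects": [{"attributes": [
            {"name": "iam_id", "value": USER_IAM_ID},
        ]}],
        "roles": [
            {"role_id": "crn:v1:bluemix:public:iam::::serviceRole:Manager"},
            {"role_id": "crn:v1:bluemix:public:iam::::role:Administrator"},
        ],
        "resources": [{"attributes": [
            {"name": "accountId", "value": ACCOUNT_ID},
            {"name": "serviceName", "value": "cloud-object-storage"},
            {"name": "serviceInstance", "value": COS_INSTANCE_GUID},
        ]}],
    },
)
resp.raise_for_status()
```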
Next step
Finish the remaining steps for [setting up the platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-platform.html).
Learn more
* [Security for IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html)
* [Data security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-data.html)
Parent topic:[Setting up the platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-platform.html)
| # Setting up IBM Cloud Object Storage for use with IBM watsonx #
An IBM Cloud Object Storage service instance is provisioned automatically with a Lite plan when you join IBM watsonx\. Workspaces, such as projects, require IBM Cloud Object Storage to store files that are related to assets, including uploaded data files or notebook files\.
You can also connect to IBM Cloud Object Storage as a data source\. See [IBM Cloud Object Storage connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html)\.
## Overview of setting up Cloud Object Storage ##
To set up Cloud Object Storage, complete these tasks:
<!-- <ol> -->
1. [Generate an administrative key](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html?context=cdpaas&locale=en#gen-key)\.
2. [Ensure that Global location is set in each user's profile](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html?context=cdpaas&locale=en#global)\.
3. [Provide access to Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html?context=cdpaas&locale=en#access)\.
<!-- <ul> -->
* [Assign roles to enable access](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html?context=cdpaas&locale=en#assign).
* [Enable storage delegation](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html?context=cdpaas&locale=en#stor-del).
<!-- </ul> -->
4. [Optional: Protect sensitive data](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html?context=cdpaas&locale=en#protect)\.
5. [Optional: Encrypt your IBM Cloud Object Storage instance with your own key](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html?context=cdpaas&locale=en#byok)\.
<!-- </ol> -->
Watch the following video to see how administrators set up Cloud Object Storage for use with Cloud Pak for Data as a Service\.
This video provides a visual method to learn the concepts and tasks in this documentation\.
## Generate an administrative key ##
You generate an administrative key for Cloud Object Storage by creating an initial test project\. The test project can be deleted after its creation\. Its sole purpose is to generate the key\.
To automatically generate the administrative key for your Cloud Object Storage instance:
<!-- <ol> -->
1. From the IBM watsonx main menu, select **Projects > View all projects** and then click **New project**\.
2. Specify to create an empty project\.
3. Enter a project name, such as "Test Project"\.
4. Select your Cloud Object Storage instance\.
5. Click **Create**\. The administrative key is generated\.
6. Delete the test project\.
<!-- </ol> -->
## Ensure that Global location is set for Cloud Object Storage in each user's profile ##
Cloud Object Storage requires the Global location to be configured in each user's profile\. The Global location is configured automatically, but it might be changed by mistake\. An error occurs when a project is created if the Global location is not enabled in the user's profile\. Ask users to check that Global location is enabled\.
[Check for the **Global** location in each user's profile](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.html)\.
## Provide access to Cloud Object Storage ##
You can provide different levels of access to Cloud Object Storage for people who need to work in IBM watsonx\. Using the storage delegation setting on the Cloud Object Storage instance, you can provide quick access to most users to create projects and catalogs\. However, another option is to provide targeted access by using IAM roles and access groups\. Role\-based access enacts stricter controls for viewing the Cloud Object Storage instance directly and for creating projects and catalogs\. If you decide to provide controlled access with IAM roles and access groups, you must disable storage delegation for the Cloud Object Storage instance\.
You enable storage delegation for the Cloud Object Storage instance to provide access to nonadministrative users\. Users with minimal IAM permissions can create projects and catalogs, which automatically create buckets in the Cloud Object Storage instance\. See [Enable storage delegation for nonadministrative users](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html?context=cdpaas&locale=en#stor-del)\.
You provide more controlled access with IAM roles and access groups\. For example, the Cloud Object Storage **Manager** role provides permissions to create projects and spaces together with the corresponding buckets in the Cloud Object Storage instance\. It also provides permissions to view all buckets and encryption root keys in the Cloud Object Storage instance, to view the metadata for a bucket and delete buckets, and to perform other administrative tasks that are related to buckets\. See [Assign roles to enable access](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html?context=cdpaas&locale=en#assign)\.
No role assignments are needed for collaborators who work with the data in a project or catalog\. Users who are given collaborator roles can work in the project or catalog without storage delegation or an IAM role\. See [Project collaborator roles and permissions](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborator-permissions.html)\.
### Assign roles to enable access ###
The IBM Cloud account owner or administrator assigns appropriate roles to users to provide access to Cloud Object Storage\. Storage delegation must be disabled when using role\-based access\.
Rather than assigning each individual user a set of roles, you can create an access group\. Access groups expedite role assignments by grouping permissions\. For instructions on creating access groups, see [IBM Cloud docs: Setting up access groups](https://cloud.ibm.com/docs/account?topic=account-groups&interface=ui)\.
### Enable storage delegation ###
Storage delegation for the Cloud Object Storage instance allows nonadministrative users to create projects, the Platform assets catalog, and the corresponding Cloud Object Storage buckets\. Storage delegation provides wide access to Cloud Object Storage and allows users with minimal permissions to create projects\. Storage delegation for projects also includes deployment spaces\.
To enable storage delegation for the Cloud Object Storage instance:
<!-- <ol> -->
1. From the navigation menu, select **Administration > Configurations and settings > Storage delegation**\.
2. Set storage delegation for Projects to on\.
3. Optional\. If you want a non\-administrative user to create the Platform assets catalog, set storage delegation for Catalogs to on\.
<!-- </ol> -->

## Optional: Encrypt your IBM Cloud Object Storage instance with your own key ##
Encryption protects the data for your projects and catalogs\. Data at rest in Cloud Object Storage is encrypted by default with randomly generated keys that are managed by IBM\. For increased protection, you can create and manage your own encryption keys with IBM Key Protect\. IBM Key Protect for IBM Cloud is a centralized key management system for generating, managing, and deleting encryption keys used by IBM Cloud services\.
For more information, see [IBM Cloud docs: IBM Key Protect for IBM Cloud](https://cloud.ibm.com/docs/services/key-protect?topic=key-protect-about#about)\.
Not all [Watson Studio service plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/ws-plans.html) support the use of your own encryption keys\. Check your specific plan for details\.
To encrypt your Cloud Object Storage instance with your own key, you need an instance of the IBM Key Protect service\. Although Key Protect is a paid service, each account is allowed five keys without charge\.
In IBM Cloud, provision Key Protect and generate a key:
<!-- <ol> -->
1. Create an instance of Key Protect for your account from the IBM Cloud catalog\. See [IBM Cloud docs: Provisioning the Key Protect service](https://cloud.ibm.com/docs/key-protect?topic=key-protect-provision&interface=ui)\.
2. Grant a service authorization between your Key Protect instance and your Cloud Object Storage instance\. Do not associate a key with a bucket\. If you don't grant the authorization, users cannot create projects and catalogs with the Cloud Object Storage instance\. For more information, see [IBM Cloud docs: Using authorizations to grant access between services](https://cloud.ibm.com/docs/account?topic=account-serviceauth&interface=ui)\. You can also grant a service authorization for a root key from Watson Studio, by choosing **Manage > Access (IAM)**\.
3. Create a root key to protect your Cloud Object Storage instance\. See [IBM Cloud docs: Creating root keys](https://cloud.ibm.com/docs/key-protect?topic=key-protect-create-root-keys&interface=ui#create_root_keys)\.
<!-- </ol> -->
In IBM watsonx, add the key to the Cloud Object Storage instance:
<!-- <ol> -->
1. Select **Administration > Configurations and settings > Storage delegation**\.
2. Slide the toggle for **Projects**, **Catalogs**, or both to select data for encryption with your key\.
3. Click **Add\.\.\.** under **Encryption keys** to add an encryption key\.
4. Select the **Key Protect instance** and the **Key Protect key**\.
5. Click **OK** to add the encryption key\.
<!-- </ol> -->
Important: If you change or remove the key, you lose access to existing encrypted data in the Cloud Object Storage instance\.
## Optional: Protect sensitive data stored on Cloud Object Storage ##
When you join IBM watsonx, a single Cloud Object Storage instance is automatically provisioned for you\. The Cloud Object Storage instance contains separate buckets for each project to store data assets and related files\. The ability to create projects and thus to add buckets to Cloud Object Storage is available only to users with the Platform **Administrator** role and the **Manager** role for the Cloud Object Storage Service\. Although only users with these roles can create projects and their accompanying buckets, any user with the **Editor** or **Viewer** role can see the data files\. For some businesses, the data files contain sensitive information and require stricter access controls\.
### Control access to Cloud Object Storage with multiple instances ###
For paid plans, you can control access to sensitive data files by creating one or more Cloud Object Storage instances and assigning access to specific users\. Project creators select the appropriate Cloud Object Storage instance when they create a project\. The data assets and files for the project are stored in a bucket in the selected instance\. Users with **Editor** or **Viewer** roles can work in the projects, but they cannot see the assets directly in the related Cloud Object Storage bucket\. You can assign access to a specific Cloud Object Storage instance either to an individual user or to an access group\. You must be the account owner or administrator to create service instances and assign access\.
Extra fees are not incurred by creating more than one Cloud Object Storage instance because charges are determined by overall storage utilization\. The number of instances is not a factor for Cloud Object Storage fees\.
Only one instance of Cloud Object Storage is allowed for the Lite plan\. You can change your pricing plan from the IBM Cloud catalog\.
To create a Cloud Object Storage instance and assign access:
<!-- <ol> -->
1. Select **Services > Services catalog** from the navigation menu\.
2. Select **Storage > Cloud Object Storage**\.
3. Click **Create**\. A Service name is generated for you on IBM Cloud\.
4. Select **Manage > Access (IAM)**\.
5. Select **Users** or **Access groups**\.
6. Click **Assign access**\.
7. In the **Services** list, choose Cloud Object Storage\.
8. For **Resources**, choose:
<!-- <ul> -->
* Scope = Specific resources
* Attribute type = Service instance
* Operator = string equals
* Value = name of Cloud Object Storage
<!-- </ul> -->
9. For **Roles and actions**, choose:
<!-- <ul> -->
* Service access = Manager
* Platform access = Administrator
<!-- </ul> -->
10. Click **Add** and **Assign**\.
<!-- </ol> -->
The specified Cloud Object Storage instance can be accessed only by the user or access group with the Service role of Manager and the Platform role of Administrator\. Other users can work in the projects but cannot create projects or view assets directly in the bucket\.
## Next step ##
Finish the remaining steps for [setting up the platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-platform.html)\.
## Learn more ##
<!-- <ul> -->
* [Security for IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html)
* [Data security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-data.html)
<!-- </ul> -->
**Parent topic:**[Setting up the platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-platform.html)
<!-- </article "role="article" "> -->
|
322E404E76067637F1D0AFDF44CBE309C2A53221 | https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/accessibility.html?context=cdpaas&locale=en | Accessibility features in IBM watsonx content and documentation | Accessibility features in IBM watsonx content and documentation
IBM is committed to accessibility. Accessibility features that follow compliance guidelines are included in IBM watsonx content and documentation to benefit users with disabilities. Parts of the IBM watsonx user interface are accessible, but the product as a whole is not: only the documentation is fully compliant.
IBM watsonx documentation uses the latest W3C Standard, [WAI-ARIA 1.0](https://www.w3.org/TR/wai-aria/), to ensure compliance with the [United States Access Board Section 508 Standards](https://www.access-board.gov/ict/) and the [Web Content Accessibility Guidelines (WCAG) 2.0](https://www.w3.org/TR/WCAG20/).
The IBM watsonx online product documentation is enabled for accessibility. Accessibility features help users who have a disability, such as restricted mobility or limited vision, to use information technology products successfully. Documentation is provided in HTML so that it is easily accessible through assistive technology. With the accessibility features of IBM watsonx, you can do the following tasks:
* Use screen-reader software and digital speech synthesizers to hear what is displayed on the screen. Consult the product documentation of the assistive technology for details on using assistive technologies with HTML-based information.
* Use screen magnifiers to magnify what is displayed on the screen.
* Operate specific or equivalent features by using only the keyboard.
For more information about the commitment that IBM has to accessibility, see [IBM Accessibility](http://www.ibm.com/able).
TTY service
In addition to standard IBM help desk and support websites, IBM has established a TTY telephone service for use by deaf or hard of hearing customers to access sales and support services:
800-IBM-3383 (800-426-3383) within North America
Additional interface information
The IBM watsonx user interfaces do not have content that flashes 2 - 55 times per second.
The IBM watsonx web user interfaces rely on cascading stylesheets to render content properly and to provide a usable experience. If you are a low-vision user, you can adjust your operating system display settings, and use settings such as high contrast mode. You can control font size by using the device or web browser settings.
| # Accessibility features in IBM watsonx content and documentation #
IBM is committed to accessibility\. Accessibility features that follow compliance guidelines are included in IBM watsonx content and documentation to benefit users with disabilities\. Parts of the IBM watsonx user interface are accessible, but the product as a whole is not: only the documentation is fully compliant\.
IBM watsonx documentation uses the latest W3C Standard, [WAI\-ARIA 1\.0](https://www.w3.org/TR/wai-aria/), to ensure compliance with the [United States Access Board Section 508 Standards](https://www.access-board.gov/ict/) and the [Web Content Accessibility Guidelines (WCAG) 2\.0](https://www.w3.org/TR/WCAG20/)\.
The IBM watsonx online product documentation is enabled for accessibility\. Accessibility features help users who have a disability, such as restricted mobility or limited vision, to use information technology products successfully\. Documentation is provided in HTML so that it is easily accessible through assistive technology\. With the accessibility features of IBM watsonx, you can do the following tasks:
<!-- <ul> -->
* Use screen\-reader software and digital speech synthesizers to hear what is displayed on the screen\. Consult the product documentation of the assistive technology for details on using assistive technologies with HTML\-based information\.
* Use screen magnifiers to magnify what is displayed on the screen\.
* Operate specific or equivalent features by using only the keyboard\.
<!-- </ul> -->
For more information about the commitment that IBM has to accessibility, see [IBM Accessibility](http://www.ibm.com/able)\.
## TTY service ##
In addition to standard IBM help desk and support websites, IBM has established a TTY telephone service for use by deaf or hard of hearing customers to access sales and support services:
800\-IBM\-3383 (800\-426\-3383) within North America
## Additional interface information ##
The IBM watsonx user interfaces do not have content that flashes 2 \- 55 times per second\.
The IBM watsonx web user interfaces rely on cascading stylesheets to render content properly and to provide a usable experience\. If you are a low\-vision user, you can adjust your operating system display settings, and use settings such as high contrast mode\. You can control font size by using the device or web browser settings\.
<!-- </article "role="article" "> -->
|
C3552C5E0F334C8BC3557960821DC5EF931851A1 | https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/asset-activities.html?context=cdpaas&locale=en | Activities for assets | Activities for assets
For some asset types, you can see the activities of each asset in projects. The activities graph shows the history of the events that are performed on the asset for some tools. An event is an action that changes or copies the asset. For example, editing the asset description is an event, but viewing the asset is not an event.
Requirements and restrictions
You can view the activities of assets under the following circumstances.
* Workspaces
You can view the asset activities in projects.
* Limitations
Activities have the following limitations:
* Activities graphs are currently available only for Watson Machine Learning models and data assets.
* Activities graphs do not appear in Microsoft Internet Explorer 11 browsers.
Activities events
To view activities for an asset in a project, click the asset name and then click the activities icon. The activities panel shows a timeline of events. Summary information about the asset shows where the asset was created, what the last event for it was, and when the last event happened. The first event for each asset is its creation.
Activities events can describe actions that are applicable to all asset types or actions that are specific to an asset type:
* [General events](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/asset-activities.html?context=cdpaas&locale=engeneral)
* [Events specific to Watson Machine Learning models](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/asset-activities.html?context=cdpaas&locale=enwml)
* [Events specific to data assets from files and connected data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/asset-activities.html?context=cdpaas&locale=endata)
You can see this type of information about each event:
* Where: In which catalog or project the event occurred.
* Who: The name of the user who performed the action, unless the action was automated. Automated actions generate events, but don't show usernames.
* What: A description of the action. Some events show details about the original and updated values.
* When: The date and time of the event.
Activities also track relationships between assets. In the activities panel, the creation of a new asset based on the original asset is shown at the top of the list. Click See details to view asset details.
General events
You can see these general events:
* Name updated
* Description updated
* Tags updated
Events specific to Watson Machine Learning models
Activities tracking is available for all Watson Machine Learning service plans; however, you won't see events for actions that are not available with your plan.
In addition to general events, you can see these events that are specific to models:
* Model created
* Model deployed
* Model re-evaluated
* Model retrained
* Set as active model
A model asset shows this information in the Created from field, depending on how it was created:
* The name of the associated data asset
* The name of the associated connection asset
* The project name where it was created
Events specific to data assets from files and connected data assets
In addition to general events, you can see these events that are specific to data assets from files and connected data assets:
* Added to project from a Data Refinery flow
* Added to a project from a file
* Data classes updated
* Schema updated by a Data Refinery flow
* Profile created
* Profile updated
* Profile deleted
* Downloaded
A data asset shows this information in the Created from field, depending on how it was created:
* The name of the Data Refinery flow that created it
* Its associated connection name
* The project name where it was created or came from
Parent topic:[Finding and viewing an asset in a catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/catalog/view-asset.html)
| # Activities for assets #
For some asset types, you can see the activities of each asset in projects\. The activities graph shows the history of the events that are performed on the asset for some tools\. An event is an action that changes or copies the asset\. For example, editing the asset description is an event, but viewing the asset is not an event\.
## Requirements and restrictions ##
You can view the activities of assets under the following circumstances\.
<!-- <ul> -->
* **Workspaces**
You can view the asset activities in projects.
<!-- </ul> -->
<!-- <ul> -->
* **Limitations**
Activities have the following limitations:
<!-- <ul> -->
* Activities graphs are currently available only for Watson Machine Learning models and data assets.
* Activities graphs do not appear in Microsoft Internet Explorer 11 browsers.
<!-- </ul> -->
<!-- </ul> -->
## Activities events ##
To view activities for an asset in a project, click the asset name and then click the activities icon\. The activities panel shows a timeline of events\. Summary information about the asset shows where the asset was created, what the last event for it was, and when the last event happened\. The first event for each asset is its creation\.
Activities events can describe actions that are applicable to all asset types or actions that are specific to an asset type:
<!-- <ul> -->
* [General events](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/asset-activities.html?context=cdpaas&locale=en#general)
* [Events specific to Watson Machine Learning models](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/asset-activities.html?context=cdpaas&locale=en#wml)
* [Events specific to data assets from files and connected data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/asset-activities.html?context=cdpaas&locale=en#data)
<!-- </ul> -->
You can see this type of information about each event:
<!-- <ul> -->
* **Where:** In which catalog or project the event occurred\.
* **Who:** The name of the user who performed the action, unless the action was automated\. Automated actions generate events, but don't show usernames\.
* **What:** A description of the action\. Some events show details about the original and updated values\.
* **When:** The date and time of the event\.
<!-- </ul> -->
Activities also track relationships between assets\. In the activities panel, the creation of a new asset based on the original asset is shown at the top of the list\. Click **See details** to view asset details\.
### General events ###
You can see these general events:
<!-- <ul> -->
* Name updated
* Description updated
* Tags updated
<!-- </ul> -->
### Events specific to Watson Machine Learning models ###
Activities tracking is available for all Watson Machine Learning service plans; however, you won't see events for actions that are not available with your plan\.
In addition to general events, you can see these events that are specific to models:
<!-- <ul> -->
* Model created
* Model deployed
* Model re\-evaluated
* Model retrained
* Set as active model
<!-- </ul> -->
A model asset shows this information in the **Created from** field, depending on how it was created:
<!-- <ul> -->
* The name of the associated data asset
* The name of the associated connection asset
* The project name where it was created
<!-- </ul> -->
### Events specific to data assets from files and connected data assets ###
In addition to general events, you can see these events that are specific to data assets from files and connected data assets:
<!-- <ul> -->
* Added to project from a Data Refinery flow
* Added to a project from a file
* Data classes updated
* Schema updated by a Data Refinery flow
* Profile created
* Profile updated
* Profile deleted
* Downloaded
<!-- </ul> -->
A data asset shows this information in the **Created from** field, depending on how it was created:
<!-- <ul> -->
* The name of the Data Refinery flow that created it
* Its associated connection name
* The project name where it was created or came from
<!-- </ul> -->
**Parent topic:**[Finding and viewing an asset in a catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/catalog/view-asset.html)
<!-- </article "role="article" "> -->
|
256ED6CA079A147359A51199DC333B23C2708B42 | https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html?context=cdpaas&locale=en | Asset types and properties | Asset types and properties
You create content, in the form of assets, when you work with tools in collaborative workspaces. An asset is an item that contains information about a data set, a model, or another item that works with data.
You add assets by importing them or creating them with tools. You work with assets in collaborative workspaces. The workspace that you use depends on your tasks.
* Projects
Where you collaborate with others to work with data and create assets. Most tools are in projects and you run assets that contain code in projects. For example, you can import data, prepare data, analyze data, or create models in projects. See [Projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html).
* Deployment spaces
Where you deploy and run assets that are ready for testing or production. You move assets from projects into deployment spaces and then create deployments from those assets. You monitor and update deployments as necessary. See [Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html).
You can find any asset in any of the workspaces for which you are a collaborator by searching for it from the global search bar. See [Searching for assets across the platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/search-assets.html).
You can create many different types of assets, but all assets have some common properties:
* [Asset types](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html?context=cdpaas&locale=en#types)
* [Common properties for assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html?context=cdpaas&locale=en#common)
* [Data asset types and their properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html?context=cdpaas&locale=en#data)
Asset types
To create most types of assets, you must use a specific tool.
The following table lists the types of assets that you can create, the tools you need to create them, and the workspaces where you can add them.
Asset types
Asset type Description Tools to create it Workspaces
[AutoAI experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) Automatically generates candidate predictive model pipelines. AutoAI Projects
[Connected data asset](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html?context=cdpaas&locale=en#data) Represents data that is accessed through a connection to a remote data source. Connected data tool Projects, Spaces
[Connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html?context=cdpaas&locale=en#data) Contains the information to connect to a data source. Connection tool Projects, Spaces
[Data asset from a file](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html?context=cdpaas&locale=en#data) Represents a file that you uploaded from your local system. Upload pane Projects, Spaces
[Data Refinery flow](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html) Prepares data. Data Refinery Projects, Spaces
[Decision Optimization experiment](https://dataplatform.cloud.ibm.com/docs/content/DO/DOWS-Cloud_home.html) Solves optimization problems. Decision Optimization Projects
[Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fed-lea.html) Trains a common model on a set of remote data sources. Federated Learning Projects
[Folder asset](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html?context=cdpaas&locale=en#data) Represents a folder in IBM Cloud Object Storage. Connected data tool Projects, Spaces
[Jupyter notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html) Runs Python or R code to analyze data or build models. Jupyter notebook editor, AutoAI, Prompt Lab Projects
Model Contains information about a saved or imported model. Various tools that run experiments or train models Projects, Spaces
[Model use case](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-create-use-case.html) Tracks the lifecycle of a model from request to production. watsonx.governance Inventory
[Pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-overview.html) Automates the model lifecycle. Watson Pipelines Projects
[Prompt template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html) A single prompt. Prompt Lab Projects
[Prompt session](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html) The history of a working session in the Prompt Lab. Prompt Lab Projects
[Python function](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function.html) Contains Python code to support a model in production. Jupyter notebook editor Projects, Spaces
[Script](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-script.html) Contains a Python or R script to support a model in production. Jupyter notebook editor, RStudio Projects, Spaces
[SPSS Modeler flow](https://dataplatform.cloud.ibm.com/docs/content/wsd/spss-modeler.html) Runs a flow to prepare data and build a model. SPSS Modeler Projects
[Visualization](https://dataplatform.cloud.ibm.com/docs/content/dataview/idh_idc_cg_help_main.html) Shows visualizations from a data asset. Visualization page in data assets Projects
[Synthetic data flow](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/synthetic_data_overview_sd.html) Generates synthetic tabular data. Synthetic Data Generator Projects
[Tuned model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-deploy.html) A tuned foundation model. Tuning Studio Projects
[Tuning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-tune.html) A tuning experiment that builds a tuned foundation model. Tuning Studio Projects
Common properties for assets
Assets accumulate information in properties when you create them, use them, or when they are updated by automated processes. Some properties are provided by users and can be edited by users. Other properties are automatically provided by the system. Most system-provided properties can't be edited by users.
Common properties for assets everywhere
Most types of assets have the properties that are listed in the following table in all the workspaces where those asset types exist.
Common properties for assets
Property Description Editable?
Name The asset name. Can contain up to 255 characters. Supports multibyte characters. Cannot be empty, contain Unicode control characters, or contain only blank spaces. Asset names do not need to be unique within a project or deployment space. Yes
Description Optional. Supports multibyte characters and hyperlinks. Yes
Creation date The timestamp of when the asset was created or imported. No
Creator or Owner The username or email address of the person who created or imported the asset. No
Last modified date The timestamp of when the asset was last modified. No
Last editor The username or email address of the person who last modified the asset. No
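These rules are easy to check before you create assets programmatically. The following sketch validates a candidate asset name in plain Python; the function is illustrative and not part of any IBM SDK:

```python
import unicodedata

def is_valid_asset_name(name: str) -> bool:
    """Check a candidate asset name against the documented rules:
    up to 255 characters, not empty, not only blank spaces,
    and no Unicode control characters."""
    if not name or len(name) > 255:
        return False
    if name.strip() == "":
        return False
    # Unicode control characters have the general category "Cc"
    if any(unicodedata.category(ch) == "Cc" for ch in name):
        return False
    return True

print(is_valid_asset_name("Sales data 2024"))   # True
print(is_valid_asset_name("   "))               # False: only blank spaces
print(is_valid_asset_name("bad\x00name"))       # False: control character
```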
Common properties for assets that run in tools
Some assets are associated with running a tool. For example, an AutoAI experiment asset runs in the AutoAI tool. Assets that run in tools are also known as operational assets. Every time that you run assets in tools, you start a job. You can monitor and schedule jobs. Jobs use compute resources. Compute resources are measured in capacity unit hours (CUH) and are tracked. Depending on your service plans, you can have a limited amount of CUH per month, or pay for the CUH that you use every month.
For many assets that run in tools, you have a choice of the compute environment configuration to use. Typically, larger and faster environment configurations consume compute resources faster.
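To make the tracking concrete, here is a small, hypothetical calculation in Python for a scheduled job, using the published rate of 1 CUH per hour for a 2 vCPU and 8 GB RAM Python environment; the schedule itself is invented for illustration:

```python
# Hypothetical schedule: one job on a 2 vCPU / 8 GB runtime (1 CUH per hour),
# running 30 minutes per night, every night for a 30-day month.
rate_cuh_per_hour = 1.0
hours_per_run = 0.5
runs_per_month = 30

monthly_cuh = rate_cuh_per_hour * hours_per_run * runs_per_month
print(f"Estimated monthly consumption: {monthly_cuh} CUH")  # 15.0 CUH
```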
In addition to basic properties, most assets that run in tools contain the following types of information in projects:
Properties for assets in projects
Properties Description Editable? Workspaces
Environment definition The environment template, hardware specification, and software specification for running the asset. See [Environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html). Yes Projects, Spaces
Settings Information that defines how the asset is run. Specific to each type of asset. Yes Projects
Associated data assets The data that the asset is working on. Yes Projects
Jobs Information about how to run the asset, including the environment definition, schedule, and notification options. See [Jobs](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.html). Yes Projects, Spaces
Data asset types and their properties
Data asset types contain metadata and other information about data, including how to access the data.
How you create a data asset depends on where your data is:
* If your data is in a file, you upload the file from your local system to a project or deployment space.
* If your data is in a remote data source, you first create a connection asset that defines the connection to that data source. Then, you create a data asset by selecting the connection, the path or other structure, and the table or file that contains the data. This type of data asset is called a connected data asset.
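This two-step flow can also be driven programmatically. The following minimal sketch uses Python's requests against the Watson Data API; treat the endpoint paths, payload fields, and response shapes as assumptions to verify against the current API reference, not a definitive recipe:

```python
import requests

API_HOST = "https://api.dataplatform.cloud.ibm.com"   # assumed endpoint
IAM_TOKEN = "YOUR-IAM-BEARER-TOKEN"                   # placeholder
PROJECT_ID = "YOUR-PROJECT-GUID"                      # placeholder
HEADERS = {"Authorization": f"Bearer {IAM_TOKEN}",
           "Content-Type": "application/json"}

# Step 1: create a connection asset that holds the data source details.
# The payload fields vary by data source type; these values are illustrative.
conn_resp = requests.post(
    f"{API_HOST}/v2/connections",
    params={"project_id": PROJECT_ID},
    headers=HEADERS,
    json={
        "name": "my-db-connection",
        "datasource_type": "DATASOURCE-TYPE-ID",      # placeholder
        "properties": {"host": "db.example.com", "port": "50000",
                       "database": "SALESDB"},
    },
)
conn_resp.raise_for_status()
connection_id = conn_resp.json()["metadata"]["asset_id"]  # assumed response shape

# Step 2: create a connected data asset that references the connection
# and points at a specific table.
asset_resp = requests.post(
    f"{API_HOST}/v2/data_assets",
    params={"project_id": PROJECT_ID},
    headers=HEADERS,
    json={
        "metadata": {"name": "SALES table"},
        "attachments": [{"connection_id": connection_id,
                         "connection_path": "/MYSCHEMA/SALES"}],
    },
)
asset_resp.raise_for_status()
print(asset_resp.json()["metadata"]["asset_id"])
```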
The following graphic illustrates how data assets from files point to uploaded files in Cloud Object Storage. Connected data assets require a connection asset and point to data in a remote data source.

You can create the following types of data assets in a project or deployment space:
* Data asset from a file
Represents a file that you uploaded from your local system. The file is stored in the object storage container on the IBM Cloud Object Storage instance that is associated with the workspace. The contents of the file can include structured data, unstructured textual data, images, and other types of data. You can create a data asset with a file of any format. However, you can do more actions on CSV files than other file types. See [Properties of data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html?context=cdpaas&locale=en#prop-data).
You can create a data asset from a file by uploading a file in a workspace. You can also create data files with tools and convert them to assets. For example, you can create data assets from files with the Data Refinery, Jupyter notebook, and RStudio tools.
* Connected data asset
Represents a table, file, or folder that is accessed through a connection to a remote data source. The connection is defined in the connection asset that is associated with the connected data asset. You can create a connected data asset for every supported connection. When you access a connected data asset, the data is dynamically retrieved from the data source. See [Properties of data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html?context=cdpaas&locale=en#prop-data).
You can import connected data assets from a data source with the connected data tool in a workspace.
* Folder asset
Represents a folder in IBM Cloud Object Storage. A folder data asset is a special case of a connected data asset. You create a folder data asset by specifying the path to the folder and the IBM Cloud Object Storage connection asset. You can view the files and subfolders that share the path with the folder data asset. The files that you can view within the folder data asset are not themselves data assets. For example, you can create a folder data asset for a path that contains news feeds that are continuously updated. See [Properties of data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html?context=cdpaas&locale=en#prop-data).
You can import folder assets from IBM Cloud Object Storage with the connected data tool in a workspace.
* Connection asset
Contains the information necessary to create a connection to a data source. See [Properties of connection assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html?context=cdpaas&locale=en#conn).
You can create connections with the connection tool in a workspace.
Learn more about creating and importing data assets:
* [Adding data to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html)
* [Adding data assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html)
Properties of data assets from files and connected data assets
In addition to basic properties, data assets from files and connected data assets have the properties or pages that are listed in the following table.
Properties of data assets from files and connected data assets
Property or page Description Editable? Workspaces
Tags Optional. Text labels that users create to simplify searching. A tag consists of one string of up to 255 characters. It can contain spaces, letters, numbers, underscores, dashes, and the symbols # and @. Yes Projects
Format The MIME type of a file. Automatically detected. Yes Projects, Spaces
Source Information about the data file in storage or the data source and connection. No Projects, Spaces
Asset details Information about the size of the data, the number of columns and rows, and the asset version. No Projects, Spaces
Preview asset A preview of the data that includes a limited set of columns and rows from the original data source. See [Asset contents or previews](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html). No Projects, Spaces
Profile page Metadata and statistics about the content of the data. See [Profile](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/profile.html). Yes Projects
Visualizations page Charts and graphs that users create to understand the data. See [Visualizations](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/visualizations.html). Yes Projects
Feature group page Information about which columns in the data asset are used as features in models. See [Managing feature groups](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/feature-group.html). Yes Projects, Spaces
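The tag rules above translate directly into a simple validation check. A minimal sketch in Python, assuming ASCII letters for illustration (the documented rule does not spell out how non-ASCII letters are treated):

```python
import re

# Up to 255 characters; spaces, letters, numbers, underscores,
# dashes, and the symbols # and @ are allowed.
TAG_PATTERN = re.compile(r"^[A-Za-z0-9_\- #@]{1,255}$")

def is_valid_tag(tag: str) -> bool:
    return bool(TAG_PATTERN.match(tag))

print(is_valid_tag("finance-2024 #quarterly"))  # True
print(is_valid_tag("bad*tag"))                  # False: * is not allowed
```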
Properties of connection assets
The properties of connection assets depend on the data source that you select when you create a connection. See [Connection types](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html). Connection assets for most data sources have the properties that are listed in the following table.
Properties of connection assets
Properties Description Editable? Workspaces
Connection details The information that identifies the data source. For example, the database name, hostname, IP address, port, instance ID, bucket, endpoint URL, and so on. Yes Projects, Spaces
Credential setting Whether the credentials are shared across the platform (default) or each user must enter their personal credentials. Not all data sources support personal credentials. Yes Projects, Spaces
Authentication method The format of the credentials information. For example, an API key or a username and password. Yes Projects, Spaces
Credentials The username and password, API key, or other credentials, as required by the data source and the specified authentication method. Yes Projects, Spaces
Certificates Whether the data source port is configured to accept SSL connections and other information about the SSL certificate. Yes Projects, Spaces
Private connectivity The method to connect to a database that is not externalized to the internet. See [Connecting to data behind a firewall](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). Yes Projects, Spaces
Learn more
* [Profiles](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/profile.html)
* [Searching for assets across the platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/search-assets.html)
* [Asset contents or previews](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html)
* [Activities](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/asset-activities.html)
* [Visualizations](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/visualizations.html)
* [Getting started](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-wdp.html)
* [Connection types](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
Parent topic:[Overview of IBM watsonx.ai](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/overview-wx.html)
| # Asset types and properties #
You create content, in the form of assets, when you work with tools in collaborative workspaces\. An *asset* is an item that contains information about a data set, a model, or another item that works with data\.
You add assets by importing them or creating them with tools\. You work with assets in collaborative workspaces\. The workspace that you use depends on your tasks\.
<!-- <ul> -->
* **Projects**
Where you collaborate with others to work with data and create assets. Most tools are in projects and you run assets that contain code in projects. For example, you can import data, prepare data, analyze data, or create models in projects. See [Projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html).
<!-- </ul> -->
<!-- <ul> -->
* **Deployment spaces**
Where you deploy and run assets that are ready for testing or production. You move assets from projects into deployment spaces and then create deployments from those assets. You monitor and update deployments as necessary. See [Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html).
<!-- </ul> -->
You can find any asset in any of the workspaces for which you are a collaborator by searching for it from the global search bar\. See [Searching for assets across the platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/search-assets.html)\.
You can create many different types of assets, but all assets have some common properties:
<!-- <ul> -->
* [Asset types](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html?context=cdpaas&locale=en#types)
* [Common properties for assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html?context=cdpaas&locale=en#common)
* [Data asset types and their properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html?context=cdpaas&locale=en#data)
<!-- </ul> -->
## Asset types ##
To create most types of assets, you must use a specific tool\.
The following table lists the types of assets that you can create, the tools you need to create them, and the workspaces where you can add them\.
<!-- <table> -->
Asset types
| Asset type | Description | Tools to create it | Workspaces |
| --------------------------------------------------------------- | ------------------------------------------------------------------------------- | -------------------------------------------------- | ---------------- |
| [AutoAI experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) | Automatically generates candidate predictive model pipelines\. | AutoAI | Projects |
| [Connected data asset](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html?context=cdpaas&locale=en#data) | Represents data that is accessed through a connection to a remote data source\. | Connected data tool | Projects, Spaces |
| [Connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html?context=cdpaas&locale=en#data) | Contains the information to connect to a data source\. | Connection tool | Projects, Spaces |
| [Data asset from a file](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html?context=cdpaas&locale=en#data) | Represents a file that you uploaded from your local system\. | Upload pane | Projects, Spaces |
| [Data Refinery flow](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html) | Prepares data\. | Data Refinery | Projects, Spaces |
| [Decision Optimization experiment](https://dataplatform.cloud.ibm.com/docs/content/DO/DOWS-Cloud_home.html) | Solves optimization problems\. | Decision Optimization | Projects |
| [Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fed-lea.html) | Trains a common model on a set of remote data sources\. | Federated Learning | Projects |
| [Folder asset](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html?context=cdpaas&locale=en#data) | Represents a folder in IBM Cloud Object Storage\. | Connected data tool | Projects, Spaces |
| [Jupyter notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html) | Runs Python or R code to analyze data or build models\. | Jupyter notebook editor, AutoAI, Prompt Lab | Projects |
| Model | Contains information about a saved or imported model\. | Various tools that run experiments or train models | Projects, Spaces |
| [Model use case](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-create-use-case.html) | Tracks the lifecycle of a model from request to production\. | watsonx\.governance | Inventory |
| [Pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-overview.html) | Automates the model lifecycle\. | Watson Pipelines | Projects |
| [Prompt template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html) | A single prompt\. | Prompt Lab | Projects |
| [Prompt session](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html) | The history of a working session in the Prompt Lab\. | Prompt Lab | Projects |
| [Python function](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function.html) | Contains Python code to support a model in production\. | Jupyter notebook editor | Projects, Spaces |
| [Script](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-script.html) | Contains a Python or R script to support a model in production\. | Jupyter notebook editor, RStudio | Projects, Spaces |
| [SPSS Modeler flow](https://dataplatform.cloud.ibm.com/docs/content/wsd/spss-modeler.html) | Runs a flow to prepare data and build a model\. | SPSS Modeler | Projects |
| [Visualization](https://dataplatform.cloud.ibm.com/docs/content/dataview/idh_idc_cg_help_main.html) | Shows visualizations from a data asset\. | **Visualization** page in data assets | Projects |
| [Synthetic data flow](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/synthetic_data_overview_sd.html) | Generates synthetic tabular data\. | Synthetic Data Generator | Projects |
| [Tuned model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-deploy.html) | A tuned foundation model\. | Tuning Studio | Projects |
| [Tuning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-tune.html) | A tuning experiment that builds a tuned foundation model\. | Tuning Studio | Projects |
<!-- </table ""> -->
## Common properties for assets ##
Assets accumulate information in properties when you create them, use them, or when they are updated by automated processes\. Some properties are provided by users and can be edited by users\. Other properties are automatically provided by the system\. Most system\-provided properties can't be edited by users\.
### Common properties for assets everywhere ###
Most types of assets have the properties that are listed in the following table in all the workspaces where those asset types exist\.
<!-- <table> -->
Common properties for assets
| Property | Description | Editable? |
| ------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------- |
| Name | The asset name\. Can contain up to 255 characters\. Supports multibyte characters\. Cannot be empty, contain Unicode control characters, or contain only blank spaces\. Asset names do not need to be unique within a project or deployment space\. | Yes |
| Description | Optional\. Supports multibyte characters and hyperlinks\. | Yes |
| Creation date | The timestamp of when the asset was created or imported\. | No |
| Creator or Owner | The username or email address of the person who created or imported the asset\. | No |
| Last modified date | The timestamp of when the asset was last modified\. | No |
| Last editor | The username or email address of the person who last modified the asset\. | No |
<!-- </table ""> -->
### Common properties for assets that run in tools ###
Some assets are associated with running a tool\. For example, an AutoAI experiment asset runs in the AutoAI tool\. Assets that run in tools are also known as operational assets\. Every time that you run assets in tools, you start a job\. You can monitor and schedule jobs\. Jobs use compute resources\. Compute resources are measured in capacity unit hours (CUH) and are tracked\. Depending on your service plans, you can have a limited amount of CUH per month, or pay for the CUH that you use every month\.
For many assets that run in tools, you have a choice of the compute environment configuration to use\. Typically, larger and faster environment configurations consume compute resources faster\.
In addition to basic properties, most assets that run in tools contain the following types of information in projects:
<!-- <table> -->
Properties for assets in projects
| Properties | Description | Editable? | Workspaces |
| ---------------------- | --------------------------------------------------------------------------------------------------------------------------------- | --------- | ---------------- |
| Environment definition | The environment template, hardware specification, and software specification for running the asset\. See [Environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html)\. | Yes | Projects, Spaces |
| Settings | Information that defines how the asset is run\. Specific to each type of asset\. | Yes | Projects |
| Associated data assets | The data that the asset is working on\. | Yes | Projects |
| Jobs | Information about how to run the asset, including the environment definition, schedule, and notification options\. See [Jobs](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.html)\. | Yes | Projects, Spaces |
<!-- </table ""> -->
## Data asset types and their properties ##
Data asset types contain metadata and other information about data, including how to access the data\.
How you create a data asset depends on where your data is:
<!-- <ul> -->
* If your data is in a file, you upload the file from your local system to a project or deployment space\.
* If your data is in a remote data source, you first create a *connection asset* that defines the connection to that data source\. Then, you create a data asset by selecting the connection, the path or other structure, and the table or file that contains the data\. This type of data asset is called a *connected data asset*\.
<!-- </ul> -->
The following graphic illustrates how data assets from files point to uploaded files in Cloud Object Storage\. Connected data assets require a connection asset and point to data in a remote data source\.

You can create the following types of data assets in a project or deployment space:
<!-- <ul> -->
* **Data asset from a file**
Represents a file that you uploaded from your local system. The file is stored in the object storage container on the IBM Cloud Object Storage instance that is associated with the workspace. The contents of the file can include structured data, unstructured textual data, images, and other types of data. You can create a data asset with a file of any format. However, you can do more actions on CSV files than other file types. See [Properties of data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html?context=cdpaas&locale=en#prop-data).
You can create a data asset from a file by uploading a file in a workspace. You can also create data files with tools and convert them to assets. For example, you can create data assets from files with the Data Refinery, Jupyter notebook, and RStudio tools.
* **Connected data asset**
Represents a table, file, or folder that is accessed through a connection to a remote data source. The connection is defined in the connection asset that is associated with the connected data asset. You can create a connected data asset for every supported connection. When you access a connected data asset, the data is dynamically retrieved from the data source. See [Properties of data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html?context=cdpaas&locale=en#prop-data).
You can import connected data assets from a data source with the connected data tool in a workspace.
* **Folder asset**
Represents a folder in IBM Cloud Object Storage. A folder data asset is a special case of a connected data asset. You create a folder data asset by specifying the path to the folder and the IBM Cloud Object Storage connection asset. You can view the files and subfolders that share the path with the folder data asset. The files that you can view within the folder data asset are not themselves data assets. For example, you can create a folder data asset for a path that contains news feeds that are continuously updated. See [Properties of data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html?context=cdpaas&locale=en#prop-data).
You can import folder assets from IBM Cloud Object Storage with the connected data tool in a workspace.
* **Connection asset**
Contains the information necessary to create a connection to a data source. See [Properties of connection assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html?context=cdpaas&locale=en#conn).
You can create connections with the connection tool in a workspace.
<!-- </ul> -->
Learn more about creating and importing data assets:
<!-- <ul> -->
* [Adding data to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html)
* [Adding data assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html)
<!-- </ul> -->
### Properties of data assets from files and connected data assets ###
In addition to basic properties, data assets from files and connected data assets have the properties or pages that are listed in the following table\.
<!-- <table> -->
Properties of data assets from files and connected data assets
| Property or page | Description | Editable? | Workspaces |
| ----------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------- | ---------------- |
| Tags | Optional\. Text labels that users create to simplify searching\. A tag consists of one string of up to 255 characters\. It can contain spaces, letters, numbers, underscores, dashes, and the symbols \# and @\. | Yes | Projects |
| Format | The MIME type of a file\. Automatically detected\. | Yes | Projects, Spaces |
| Source | Information about the data file in storage or the data source and connection\. | No | Projects, Spaces |
| Asset details | Information about the size of the data, the number of columns and rows, and the asset version\. | No | Projects, Spaces |
| **Preview asset** | A preview of the data that includes a limited set of columns and rows from the original data source\. See [Asset contents or previews](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html)\. | No | Projects, Spaces |
| **Profile** page | Metadata and statistics about the content of the data\. See [Profile](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/profile.html)\. | Yes | Projects |
| **Visualizations** page | Charts and graphs that users create to understand the data\. See [Visualizations](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/visualizations.html)\. | Yes | Projects |
| **Feature group** page | Information about which columns in the data asset are used as features in models\. See [Managing feature groups](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/feature-group.html)\. | Yes | Projects, Spaces |
<!-- </table ""> -->
### Properties of connection assets ###
The properties of connection assets depend on the data source that you select when you create a connection\. See [Connection types](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)\. Connection assets for most data sources have the properties that are listed in the following table\.
<!-- <table> -->
Properties of connection assets
| Properties | Description | Editable? | Workspaces |
| --------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------- | ---------------- |
| Connection details | The information that identifies the data source\. For example, the database name, hostname, IP address, port, instance ID, bucket, endpoint URL, and so on\. | Yes | Projects, Spaces |
| Credential setting | Whether the credentials are shared across the platform (default) or each user must enter their personal credentials\. Not all data sources support personal credentials\. | Yes | Projects, Spaces |
| Authentication method | The format of the credentials information\. For example, an API key or a username and password\. | Yes | Projects, Spaces |
| Credentials | The username and password, API key, or other credentials, as required by the data source and the specified authentication method\. | Yes | Projects, Spaces |
| Certificates | Whether the data source port is configured to accept SSL connections and other information about the SSL certificate\. | Yes | Projects, Spaces |
| Private connectivity | The method to connect to a database that is not externalized to the internet\. See [Connecting to data behind a firewall](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html)\. | Yes | Projects, Spaces |
<!-- </table ""> -->
## Learn more ##
<!-- <ul> -->
* [Profiles](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/profile.html)
* [Searching for assets across the platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/search-assets.html)
* [Asset contents or previews](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html)
* [Activities](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/asset-activities.html)
* [Visualizations](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/visualizations.html)
* [Getting started](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-wdp.html)
* [Connection types](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
<!-- </ul> -->
**Parent topic:**[Overview of IBM watsonx\.ai](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/overview-wx.html)
<!-- </article "role="article" "> -->
|
C562415979D3B38AA74C27FD68F13D54FFE47FE5 | https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assoc-services.html?context=cdpaas&locale=en | Adding associated services to a project | Adding associated services to a project
To run some tools, you must associate a Watson Machine Learning service instance with the project.
After you [create a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html), you can add an associated service to it at any time.
Required permission: You must have the Admin role in the project to add an associated service.
For some types of assets, you must associate the IBM Watson Machine Learning service with the project. You are prompted to associate the IBM Watson Machine Learning service the first time you open tools like Prompt Lab, AutoAI, SPSS Modeler, and Decision Optimization.
You can also add the Watson Machine Learning service to a project directly:
1. Go to the project's Manage tab and select the Services and integrations page.
2. In the IBM Services section, click Associate Service.
3. Select your IBM Watson Machine Learning service instance and click Associate.
Learn more
* [Creating and managing IBM Cloud services](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/create-services.html)
* [IBM Cloud services for IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/cloud-services.html)
Parent topic:[Administering projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/admin-project.html)
| # Adding associated services to a project #
To run some tools, you must associate a Watson Machine Learning service instance with the project\.
After you [create a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html), you can add an associated service to it at any time\.
**Required permission**: You must have the **Admin** role in the project to add an associated service\.
For some types of assets, you must associate the IBM Watson Machine Learning service with the project\. You are prompted to associate the IBM Watson Machine Learning service the first time you open tools like Prompt Lab, AutoAI, SPSS Modeler, and Decision Optimization\.
You can also add the Watson Machine Learning service to a project directly:
<!-- <ol> -->
1. Go to the project's **Manage** tab and select the **Services and integrations** page\.
2. In the **IBM Services** section, click **Associate Service**\.
3. Select your IBM Watson Machine Learning service instance and click **Associate**\.
<!-- </ol> -->
## Learn more ##
<!-- <ul> -->
* [Creating and managing IBM Cloud services](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/create-services.html)
* [IBM Cloud services for IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/cloud-services.html)
<!-- </ul> -->
**Parent topic:**[Administering projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/admin-project.html)
<!-- </article "role="article" "> -->
|
E45C894CB0DF39AD55A35183D62A6CBD570076CA | https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/browser-support.html?context=cdpaas&locale=en | Browser support | Browser support
The supported web browsers provide the best experience for IBM watsonx.
Use the latest versions of these web browsers with IBM watsonx:
* Chrome
* Microsoft Edge
* Mozilla Firefox
Tip for Firefox on Mac users: Horizontal scrolling within the UI can be interpreted by your Mac as an attempt to swipe between pages. If this behavior is undesired or if the browser crashes after the service prompts you to stay on the page, consider disabling the Swipe between pages gesture in Launchpad > System Preferences > Trackpad > More Gestures.
* Firefox ESR (see Mozilla Firefox Extended Support Release for more details)
Learn more
* [Language support](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/localization.html)
Parent topic:[FAQ](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html)
| # Browser support #
The supported web browsers provide the best experience for IBM watsonx\.
Use the latest versions of these web browsers with IBM watsonx:
<!-- <ul> -->
* Chrome
* Microsoft Edge
* Mozilla Firefox
Tip for Firefox on Mac users: Horizontal scrolling within the UI can be interpreted by your Mac as an attempt to swipe between pages. If this behavior is undesired or if the browser crashes after the service prompts you to stay on the page, consider disabling the **Swipe between pages** gesture in **Launchpad > System Preferences > Trackpad > More Gestures**.
* Firefox ESR (see Mozilla Firefox Extended Support Release for more details)
<!-- </ul> -->
## Learn more ##
<!-- <ul> -->
* [Language support](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/localization.html)
<!-- </ul> -->
**Parent topic:**[FAQ](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html)
<!-- </article "role="article" "> -->
|
E2089A0F2315F9897F6E3DDE50E4FC3EBD2E65AA | https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html?context=cdpaas&locale=en | Project collaborators | Project collaborators
Collaborators are the people you add to the project to work together. After you create a project, add collaborators to share knowledge and resources freely, shift workloads flexibly, and help one another complete jobs.
Required permissions: To manage collaborators, both of the following conditions must be true: you must have the Admin role in the project, and you must belong to the project creator's IBM Cloud account.
* [Add collaborators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html?context=cdpaas&locale=en#add-collaborators)
* [Add service IDs](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html?context=cdpaas&locale=en#serviceids)
* [Change collaborator roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html?context=cdpaas&locale=en#change-role)
* [Remove a collaborator](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html?context=cdpaas&locale=en#remove-a-collaborator)
Add collaborators
To add a collaborator as a Viewer or Editor of your project, they must either be:
* A member of the project creator's IBM Cloud account, or
* A member of the same organization through single sign-on (SAML federation on IBM Cloud).
To add a collaborator as an Admin of your project, they must be a member of the project creator's IBM Cloud account.
To add collaborators to your project:
1. From your project, click the Access Control page on the Manage tab.
2. Click Add collaborators then select Add users.
3. Add the collaborators who you want to have the same access level:
* Type email addresses into the Find users field.
* Copy multiple email addresses, separated by commas, and paste them into the Find users field.
4. Choose the [role](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborator-permissions.html) for the collaborators and click Add:
* Viewer: View the project.
* Editor: Control project assets.
* Admin: Control project assets, collaborators, and settings.
5. Add more collaborators with the same or different access levels.
6. Click Add.
The invited users are added to your project immediately.
Add service IDs
You can create service IDs in IBM Cloud to give an application outside of IBM Cloud access to your IBM Cloud services. Because service IDs are not tied to a specific user, if a user leaves an organization and is deleted from the account, the service ID remains, ensuring that your application or service stays up and running. See [Creating and working with service IDs](https://cloud.ibm.com/docs/account?topic=account-serviceids).
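In practice, an application authenticates as a service ID by exchanging the service ID's API key for an IAM access token. The following sketch shows that exchange with Python's requests; the IAM endpoint and grant type are standard IBM Cloud IAM values, and the API key is a placeholder:

```python
import requests

# Exchange a service ID API key for a short-lived IAM bearer token.
resp = requests.post(
    "https://iam.cloud.ibm.com/identity/token",
    headers={"Content-Type": "application/x-www-form-urlencoded"},
    data={"grant_type": "urn:ibm:params:oauth:grant-type:apikey",
          "apikey": "YOUR-SERVICE-ID-API-KEY"},  # placeholder
)
resp.raise_for_status()
token = resp.json()["access_token"]
# The token is then sent as "Authorization: Bearer <token>" on API calls.
print(token[:20], "...")
```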
To add a service ID to your project:
1. From your project, select the Access Control page on the Manage tab.
2. Click Add collaborators and select Add service IDs.
3. In the Find service IDs field, search for the service name or description and select the one you want.
4. Add other service IDs that you want to have the same access level.
5. Select the access level.
6. Click Add.
Change collaborator roles
To change the role for a project collaborator or service ID:
1. Go to the Access Control page on the Manage tab.
2. In the row for the collaborator or service ID, click the edit icon next to the role name.
3. Select the new role and click Save.
Remove a collaborator
To remove a collaborator or service ID from a project, go to the Access Control page on the Manage tab. In the row for the collaborator or service ID, click the remove icon.
Learn more
* [Collaborator permissions](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborator-permissions.html)
* [Setup additional account users](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-addl-users.html)
Parent topic:[Administering projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/admin-project.html)
| # Project collaborators #
Collaborators are the people you add to the project to work together\. After you create a project, add collaborators to share knowledge and resources freely, shift workloads flexibly, and help one another complete jobs\.
**Required permissions**: To manage collaborators, both of the following conditions must be true: you must have the **Admin** role in the project, and you must belong to the project creator's IBM Cloud account\.
<!-- <ul> -->
* [Add collaborators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html?context=cdpaas&locale=en#add-collaborators)
* [Add service IDs](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html?context=cdpaas&locale=en#serviceids)
* [Change collaborator roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html?context=cdpaas&locale=en#change-role)
* [Remove a collaborator](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html?context=cdpaas&locale=en#remove-a-collaborator)
<!-- </ul> -->
## Add collaborators ##
To add a collaborator as a **Viewer** or **Editor** of your project, they must either be:
<!-- <ul> -->
* A member of the project creator's IBM Cloud account, or
* A member of the same organization through single sign\-on (SAML federation on IBM Cloud)\.
<!-- </ul> -->
To add a collaborator as an **Admin** of your project, they must be a member of the project creator's IBM Cloud account\.
To add collaborators to your project:
<!-- <ol> -->
1. From your project, click the **Access Control** page on the **Manage** tab\.
2. Click **Add collaborators** then select **Add users**\.
3. Add the collaborators who you want to have the same access level:
<!-- <ul> -->
* Type email addresses into the **Find users** field.
* Copy multiple email addresses, separated by commas, and paste them into the **Find users** field.
<!-- </ul> -->
4. Choose the [role](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborator-permissions.html) for the collaborators and click **Add**:
<!-- <ul> -->
* **Viewer**: View the project.
* **Editor**: Control project assets.
* **Admin**: Control project assets, collaborators, and settings.
<!-- </ul> -->
5. Add more collaborators with the same or different access levels\.
6. Click **Add**\.
<!-- </ol> -->
The invited users are added to your project immediately\.
## Add service IDs ##
You can create service IDs in IBM Cloud to give an application outside of IBM Cloud access to your IBM Cloud services\. Because service IDs are not tied to a specific user, if a user leaves an organization and is deleted from the account, the service ID remains, ensuring that your application or service stays up and running\. See [Creating and working with service IDs](https://cloud.ibm.com/docs/account?topic=account-serviceids)\.
To add a service ID to your project:
<!-- <ol> -->
1. From your project, select the **Access Control** page on the **Manage** tab\.
2. Click **Add collaborators** and select **Add service IDs**\.
3. In the **Find service IDs** field, search for the service name or description and select the one you want\.
4. Add other service IDs that you want to have the same access level\.
5. Select the access level\.
6. Click **Add**\.
<!-- </ol> -->
## Change collaborator roles ##
To change the role for a project collaborator or service ID:
<!-- <ol> -->
1. Go to the **Access Control** page on the **Manage** tab\.
2. In the row for the collaborator or service ID, click the edit icon next to the role name\.
3. Select the new role and click **Save**\.
<!-- </ol> -->
## Remove a collaborator ##
To remove a collaborator or service ID from a project, go to the **Access Control** page on the **Manage** tab\. In the row for the collaborator or service ID, click the remove icon\.
## Learn more ##
<!-- <ul> -->
* [Collaborator permissions](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborator-permissions.html)
* [Setup additional account users](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-addl-users.html)
<!-- </ul> -->
**Parent topic:**[Administering projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/admin-project.html)
<!-- </article "role="article" "> -->
|
DE79F406DB76B8D50A2B8AB35D4A385983AA5F54 | https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborator-permissions.html?context=cdpaas&locale=en | Project collaborator roles and permissions | Project collaborator roles and permissions
When you add a collaborator to a project, you specify which actions that the user can do by assigning a role.
These roles provide these permissions for projects:
Action Viewer Editor Admin
View all information for data assets ✓ ✓ ✓
View jobs ✓ ✓ ✓
Add and read data assets ✓ ✓
View Data Refinery flows and SPSS Modeler flows ✓ ✓
View all other types of assets ✓ ✓ ✓
Create, add, modify, or delete all types of assets ✓ ✓
Submit inference requests to foundation models, including tuned foundation models ✓ ✓
Run and schedule assets that run in tools and jobs ✓ ✓
Create and modify data asset visualizations ✓ ✓ ✓
Save visualizations to your project ✓ ✓
Create and modify data asset profiles ✓ ✓
Share notebooks ✓ ✓
Promote assets to deployment spaces ✓ ✓
Edit the project readme ✓ ✓
Use project access tokens ✓ ✓
Manage environment templates ✓ ✓
Stop your own environment runtimes ✓ ✓
Export a project to desktop ✓ ✓
Manage project collaborators * ✓
Set up integrations ✓
Manage associated services ✓
Manage project access tokens ✓
Mark project as sensitive ✓
* To add collaborators or change collaborator roles, users with the Admin role in the project must also belong to the project creator's IBM Cloud account.
Learn more
* [Adding collaborators to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html)
* [Determine your roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/your-roles.html)
Parent topic:[Administering projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/admin-project.html)
| # Project collaborator roles and permissions #
When you add a collaborator to a project, you specify which actions that the user can do by assigning a role\.
These roles provide these permissions for projects:
<!-- <table> -->
| Action | Viewer | Editor | Admin |
| --------------------------------------------------------------------------------- | ------ | ------ | ----- |
| View all information for data assets | ✓ | ✓ | ✓ |
| View jobs | ✓ | ✓ | ✓ |
| Add and read data assets | | ✓ | ✓ |
| View Data Refinery flows and SPSS Modeler flows | | ✓ | ✓ |
| View all other types of assets | ✓ | ✓ | ✓ |
| Create, add, modify, or delete all types of assets | | ✓ | ✓ |
| Submit inference requests to foundation models, including tuned foundation models | | ✓ | ✓ |
| Run and schedule assets that run in tools and jobs | | ✓ | ✓ |
| Create and modify data asset visualizations | ✓ | ✓ | ✓ |
| Save visualizations to your project | | ✓ | ✓ |
| Create and modify data asset profiles | | ✓ | ✓ |
| Share notebooks | | ✓ | ✓ |
| Promote assets to deployment spaces | | ✓ | ✓ |
| Edit the project readme | | ✓ | ✓ |
| Use project access tokens | | ✓ | ✓ |
| Manage environment templates | | ✓ | ✓ |
| Stop your own environment runtimes | | ✓ | ✓ |
| Export a project to desktop | | ✓ | ✓ |
| Manage project collaborators **\*** | | | ✓ |
| Set up integrations | | | ✓ |
| Manage associated services | | | ✓ |
| Manage project access tokens | | | ✓ |
| Mark project as sensitive | | | ✓ |
<!-- </table ""> -->
**\*** To add collaborators or change collaborator roles, users with the **Admin** role in the project must also belong to the project creator's IBM Cloud account\.
## Learn more ##
<!-- <ul> -->
* [Adding collaborators to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html)
* [Determine your roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/your-roles.html)
<!-- </ul> -->
**Parent topic:**[Administering projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/admin-project.html)
<!-- </article "role="article" "> -->
|
41AD83283A66CC3C467F70EA638B9C1C6681A160 | https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/compare-platforms.html?context=cdpaas&locale=en | Comparison of IBM watsonx as a Service and Cloud Pak for Data as a Service | Comparison of IBM watsonx as a Service and Cloud Pak for Data as a Service
IBM watsonx as a Service and Cloud Pak for Data as a Service have similar platform functionality and are compatible in many ways. The watsonx platform provides a subset of the tools and services that are provided by Cloud Pak for Data as a Service. However, watsonx.ai and watsonx.governance on watsonx provide more functionality than the same set of tools on Cloud Pak for Data as a Service.
* [Common platform functionality](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/compare-platforms.html?context=cdpaas&locale=en#platform)
* [Services on each platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/compare-platforms.html?context=cdpaas&locale=en#services)
* [Data science and AI tools](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/compare-platforms.html?context=cdpaas&locale=en#tools)
* [AI governance tools](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/compare-platforms.html?context=cdpaas&locale=en#gov)
Common platform functionality
The following platform functionality is common to both watsonx and Cloud Pak for Data as a Service:
* Security, compliance, and isolation
* Compute resources for running workloads
* Global search for assets across the platform
* The Platform assets catalog for sharing connections across the platform
* Role-based user management within workspaces
* A services catalog for adding services
* View compute usage from the Administration menu
* Connections to remote data sources
* Connection credentials that are personal or shared
* Sample assets and projects
If you are signed up for both watsonx and Cloud Pak for Data as a Service, you can switch between platforms. See [Switching your platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/platform-switcher.html).
Services on each platform
Both platforms provide services for data science and MLOps and AI governance use cases:
* Watson Studio
* Watson Machine Learning
* Watson OpenScale
However, the services for watsonx.ai and watsonx.governance on the watsonx platform include features for working with foundation models and generative AI that are not included in these services on Cloud Pak for Data as a Service.
Cloud Pak for Data as a Service also provides services for these use cases:
* Data integration
* Data governance
Data science and AI tools
Both platforms provide a common set of data science and AI tools. However, on watsonx, you can also perform foundation model inferencing with the Prompt Lab tool or with a Python library in notebooks. Foundation model inferencing and the Prompt Lab tool are not available on Cloud Pak for Data as a Service.
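For example, a notebook can inference a foundation model with a few lines of Python. This minimal sketch assumes the ibm-watsonx-ai SDK; the model ID, endpoint URL, and credentials are placeholders, and the class and method names should be verified against the current SDK documentation:

```python
from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.foundation_models import ModelInference

credentials = Credentials(
    url="https://us-south.ml.cloud.ibm.com",  # region endpoint (placeholder)
    api_key="YOUR-IBM-CLOUD-API-KEY",         # placeholder
)

model = ModelInference(
    model_id="ibm/granite-13b-instruct-v2",   # example model ID
    credentials=credentials,
    project_id="YOUR-PROJECT-GUID",           # placeholder
)

# Send a prompt to the hosted foundation model and print the completion.
print(model.generate_text(prompt="Summarize: watsonx supports prompt engineering."))
```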
The following table shows which data science and AI tools are available on each platform.
Tools on watsonx and Cloud Pak for Data
Tool On watsonx? On Cloud Pak for Data?
[Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html) ✓ No
[Synthetic Data Generator](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/synthetic_data_overview_sd.html) ✓ No
[Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html) ✓ ✓
[Visualizations](https://dataplatform.cloud.ibm.com/docs/content/dataview/idh_idc_cg_help_main.html) ✓ ✓
[Jupyter notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-editor.html) ✓ ✓
[Federated learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fed-lea.html) ✓ ✓
[RStudio IDE](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-overview.html) ✓ ✓
[SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsd/spss-modeler.html) ✓ ✓
[Decision Optimization](https://dataplatform.cloud.ibm.com/docs/content/DO/DOWS-Cloud_home.html) ✓ ✓
[AutoAI tool](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) ✓ ✓
[Watson Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-overview.html) ✓ ✓
If you are signed up for Cloud Pak for Data as a Service, you can access watsonx and move projects and deployment spaces that meet the requirements from one platform to the other. See [Switching the platform for a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/switch-platform.html) and [Switching the platform for a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/switch-platform-space.html).
AI governance tools
Both platforms contain the same AI use case inventory and evaluation tools. However, on watsonx, you can track and evaluate generative AI assets and dimensions. See [Comparison of governance solutions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-comparison.html).
Learn more
* [Switching your platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/platform-switcher.html)
* [Switching the platform for a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/switch-platform.html)
* [Switching the platform for a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/switch-platform-space.html)
* [Overview of IBM watsonx as a Service](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/overview-wx.html)
* [Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-overview.html)
Parent topic:[Overview of watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/overview-wx.html)
812C39CF410F9FE3F0D0E7C62ED1BC015370C849 | https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en | Frequently asked questions | Frequently asked questions
Find answers to frequently asked questions about watsonx.ai.
Account and setup questions
* [How do I sign up for watsonx?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#sign-up-wxai)
* [Can I try watsonx for free?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#free)
* [How do I upgrade watsonx.ai and watsonx.governance?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#upgrade)
* [Which regions can I provision watsonx.ai and watsonx.governance in?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/regional-datactr.html)
* [Which web browsers are supported for watsonx?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/browser-support.html)
* [How can I get the most runtime from my Watson Studio Lite plan?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#ws-lite)
* [How do I change languages for the product and the documentation?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/localization.html)
* [How do I find my IBM Cloud account owner or administrator?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#accountadmin)
* [Can I provide feedback?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#feedback)
Foundation model questions
* [What foundation models are available and where do they come from?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#fm-available)
* [What data was used to train foundation models?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#fm-data)
* [Do I need to check generated output for biased, inappropriate, or incorrect content?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#fm-check)
* [Is there a limit to how much text generation I can do?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#fm-token-limit)
* [Does prompt engineering train the foundation model?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#fm-train)
* [Does IBM have access to or use my data in any way?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#fm-privacy)
* [What APIs are available?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#fm-apis)
Project questions
* [How do I load very large files to my project?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#verylarge)
IBM Cloud Object Storage questions
* [What is saved in IBM Cloud Object Storage for workspaces?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#saved-in-cos)
* [Do I need to upgrade IBM Cloud Object Storage when I upgrade other services?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#upgrade-cos)
* [Why am I unable to add storage to an existing project or to see the IBM Cloud Object Storage selection in the New Project dialog?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#cosstep)
Notebook questions
* [Can I install libraries or packages to use in my notebooks?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#install-libraries)
* [Can I call functions that are defined in one notebook from another notebook?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#functions-defined)
* [Can I add arbitrary notebook extensions?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#arbitrary)
* [How do I access the data from a CSV file in a notebook?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#csv-file)
* [How do I access the data from a compressed file in a notebook?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#compressed-file)
Security and reliability questions
* [How secure is IBM watsonx?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#security)
* [Is my data and notebook protected from sharing outside of my collaborators?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#protected-notebooks)
* [Do I need to back up my notebooks?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#backup-notebooks)
Sharing and collaboration questions
* [What are the implications of sharing a notebook?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#sharing-notebooks)
* [How can I share my work outside of RStudio?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#how-share)
* [How do I share my SPSS Modeler flow with another project?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#share-spss)
Machine learning questions
* [How do I run an AutoAI experiment?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#run-autoai)
* [What is available for automated model building?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#wml-autoai)
* [What frameworks and libraries are available for my machine learning models?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#wml-frameworks)
* [What is an API Key?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#wml-api-key)
Watson OpenScale questions
* [What is Watson OpenScale?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#faq-whatsa)
* [How do I convert a prediction column from an integer data type to a categorical data type?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#wos-faqs-convert-data-types)
* [Why does Watson OpenScale need access to training data?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#trainingdata)
* [What does it mean if the fairness score is greater than 100 percent?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#fairness-score-over100)
* [How is model bias mitigated by using Watson OpenScale?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#wos-001-bias)
* [Is it possible to check for model bias on sensitive attributes, such as race and sex, even when the model is not trained on them?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#wos-002-attrib)
* [Is it possible to mitigate bias for regression-based models?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#wos-003-regress)
* [What are the different methods of debiasing in Watson OpenScale?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#wos-004-methods-bias)
* [Configuring a model requires information about the location of the training data and the options are Cloud Object Storage and Db2. If the data is in Netezza, can Watson OpenScale use Netezza?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#configmodel)
* [Why doesn't Watson OpenScale see the updates that were made to the model?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#new-model-missing)
* [What are the various kinds of risks associated with using a machine learning model?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#wos-risk)
* [Must I keep monitoring the Watson OpenScale dashboard to make sure that my models behave as expected?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#wos-dashboard-email)
* [In Watson OpenScale, what data is used for Quality metrics computation?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#wos-quality-data)
* [In Watson OpenScale, can the threshold be set for a metric other than 'Area under ROC' during configuration?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#wos-thresholds)
IBM watsonx.ai questions
How do I sign up for watsonx?
Go to [Try IBM watsonx.ai](https://dataplatform.cloud.ibm.com/registration/stepone?context=wx&apps=data_science_experience,watson_data_platform,cos&uucid=0b526de8c1c419db&utm_content=WXAWW) or [Try watsonx.governance](https://dataplatform.cloud.ibm.com/registration/stepone?context=wx&apps=data_science_experience,watson_machine_learning,cos,aiopenscale&uucid=0cf8ca3f38ace12f&utm_content=WXGWW&regions=us-south). If you sign up for watsonx.governance, you automatically provision watsonx.ai as well.
Can I try watsonx for free?
Yes, when you sign up for IBM watsonx.ai, you automatically provision the free version of the underlying services: Watson Studio, Watson Machine Learning, and IBM Cloud Object Storage. When you sign up for IBM watsonx.governance, you automatically provision the free version of Watson OpenScale and the free versions of the services for IBM watsonx.ai.
How do I upgrade watsonx.ai and watsonx.governance?
When you're ready to upgrade any of the underlying services for watsonx.ai or watsonx.governance, you can upgrade in place without losing any of your work or data.
You must be the owner or administrator of the IBM Cloud account for a service to upgrade it. See [Upgrading services on watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/upgrade.html).
How can I get the most runtime from my Watson Studio Lite plan?
The Watson Studio Lite plan allows for 10 CUH per month. You can maximize your available CUH by setting your assets to use environments with lower CUH rates. For example, you can [change your notebook environment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html#change-env). To see the available environments and the required CUH, go to the [Services catalog page for Watson Studio](https://dataplatform.cloud.ibm.com/data/catalog/data-science-experience?context=wx&target=wx).
How do I find my IBM Cloud account owner?
If you have an enterprise account or work in an IBM Cloud account that you don't own, you might need to ask an account owner to give you access to a workspace or to assign you a different role.
To find your IBM Cloud account owner:
1. From the navigation menu, choose Administration > Access (IAM).
2. From the avatar menu, make sure you're in the right account, or switch accounts, if necessary.
3. Click Users, and find the username with the word owner next to it.
To understand roles, see [Roles for IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/roles.html). To determine your roles, see [Determine your roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/your-roles.html).
Can I provide feedback?
Yes, we encourage feedback as we continue to develop this platform. From the navigation menu, select Support > Share an idea.
Foundation models
What foundation models are available and where do they come from?
See the complete list of [supported foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html).
What data was used to train foundation models?
Links to details about each model, including pretraining data and fine-tuning, are available here: [Supported foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html).
Do I need to check generated output for biased, inappropriate, or incorrect content?
Yes, you must review the generated output of foundation models. Third Party models have been trained with data that might contain biases and inaccuracies and can generate outputs containing misinformation, obscene or offensive language, or discriminatory content.
In the Prompt Lab, when you toggle AI guardrails on, any sentence in the prompt text or model output that contains harmful language will be replaced with a message saying potentially harmful text has been removed.
See [Avoiding undesirable output](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-hallucinations.html).
Is there a limit to how much text generation I can do?
With the free trial of watsonx.ai, you can use up to 25,000 tokens per month. Your token usage is the sum of your input and output tokens.
With a paid service plan, there is no token limit, but you are charged for the tokens that you submit as input plus the tokens that you receive in the generated output.
See [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html).
Does prompt engineering train the foundation model?
No, submitting prompts to a foundation model does not train the model. The models available in watsonx.ai are pretrained, so you do not need to train the models before you use them.
See [Prompt tips](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-tips.html).
Does IBM have access to or use my data in any way?
No, IBM does not have access to your data.
Your work on watsonx.ai, including your data and the models that you create, are private to your account:
* Your data is accessible only by you. Your data is used to train only your models. Your data will never be accessible or used by IBM or any other person or organization. Your data is stored in dedicated storage buckets and is encrypted at rest and in motion.
* Your models are accessible only by you. Your models will never be accessible or used by IBM or any other person or organization. Your models are secured in the same way as your data.
Learn more about security and your options:
* [Security and privacy of foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-security.html)
* [Security for IBM watsonx.ai](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html)
* [Data security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-data.html)
What APIs are available?
You can prompt foundation models in watsonx.ai programmatically using the Python library.
See [Foundation models Python library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-python-lib.html).
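For example, the following minimal sketch shows how a prompt might be sent with the ibm-watson-machine-learning Python library. The model ID is one of the supported foundation models; the API key, endpoint URL, project ID, and parameter values are placeholders that you supply.

```python
from ibm_watson_machine_learning.foundation_models import Model

# Placeholders: supply your own API key, regional endpoint, and project ID
model = Model(
    model_id="google/flan-ul2",  # any supported foundation model
    credentials={
        "apikey": "<YOUR_API_KEY>",
        "url": "https://us-south.ml.cloud.ibm.com",
    },
    params={"decoding_method": "greedy", "max_new_tokens": 100},
    project_id="<YOUR_PROJECT_ID>",
)

# generate_text() sends the prompt and returns the generated completion
print(model.generate_text(prompt="List three use cases for foundation models."))
```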
Projects
How do I load very large files to my project?
You can't load data files larger than 5 GB to your project. If your files are larger, you must use the Cloud Object Storage API and load the data in multiple parts. See the [curl commands](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/store-large-objs-in-cos.html) for working with Cloud Object Storage directly on IBM Cloud.
See [Adding very large objects to a project's Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/store-large-objs-in-cos.html).
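As an alternative to the curl commands, the following hedged sketch shows a multipart upload with the IBM COS SDK for Python (ibm-cos-sdk). The endpoint, credentials, bucket, and file names are placeholders for your own values.

```python
import ibm_boto3
from ibm_botocore.client import Config
from ibm_boto3.s3.transfer import TransferConfig

cos = ibm_boto3.client(
    "s3",
    ibm_api_key_id="<YOUR_API_KEY>",
    ibm_service_instance_id="<YOUR_COS_INSTANCE_CRN>",
    config=Config(signature_version="oauth"),
    endpoint_url="https://s3.us-south.cloud-object-storage.appdomain.cloud",
)

# Files above multipart_threshold are split into parts and uploaded in parallel
transfer_config = TransferConfig(
    multipart_threshold=1024 * 1024 * 100,  # 100 MB
    multipart_chunksize=1024 * 1024 * 100,
)
cos.upload_file("large-dataset.csv", "<YOUR_BUCKET>", "large-dataset.csv",
                Config=transfer_config)
```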
IBM Cloud Object Storage
What is saved in IBM Cloud Object Storage for workspaces?
When you create a project or deployment space, you specify an IBM Cloud Object Storage instance and create a bucket that is dedicated to that workspace. These types of objects are stored in the IBM Cloud Object Storage bucket for the workspace:
* Files for data assets that you uploaded into the workspace.
* Files associated with assets that run in tools, such as notebooks and models.
* Metadata about assets, such as the asset type, format, and tags.
Do I need to upgrade IBM Cloud Object Storage when I upgrade other services?
You must upgrade your IBM Cloud Object Storage instance only when you run out of storage space. Other services can use any IBM Cloud Object Storage plan and you can upgrade any service or your IBM Cloud Object Storage service independently.
Why am I unable to add storage to an existing project or to see the IBM Cloud Object Storage selection in the New Project dialog?
IBM Cloud Object Storage requires an extra step for users who do not have administrative privileges for it. The account administrator must [enable nonadministrative users to create projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html#cos-delegation).
If you have administrator privileges and do not see the latest IBM Cloud Object Storage, try again later because server-side caching might cause a delay in rendering the latest values.
Notebooks
Can I install libraries or packages to use in my notebooks?
You can install Python libraries and R packages through a notebook, and those libraries and packages will be available to all your notebooks that use the same environment template. For instructions, see [Import custom or third-party libraries](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/install-cust-lib.html). If you get an error about missing operating system dependencies when you install a library or package, notify IBM Support. To see the preinstalled libraries and packages and the libraries and packages that you installed, from within a notebook, run the appropriate command:
* Python: !pip list
* R: installed.packages()
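For example, a notebook cell like this minimal sketch installs a Python library and then verifies the installation; the package name is illustrative.

```python
# Install a package from a notebook cell; --user keeps it in the user space
# that your environment makes available to your notebooks
!pip install --user nltk

# Confirm that the package now appears among the installed packages
!pip list | grep nltk
```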
Can I call functions that are defined in one notebook from another notebook?
There is no way to call one notebook from another notebook on the platform. However, you can put your common code into a library outside of the platform and then install it.
Can I add arbitrary notebook extensions?
No, you can't extend your notebook capabilities by adding arbitrary extensions as a customization because all notebook extensions must be preinstalled.
How do I access the data from a CSV file in a notebook?
After you load a CSV file into object storage, load the data by clicking the Code snippets icon in an opened notebook, clicking Read data, and selecting the CSV file from the project. Then, click in an empty code cell in your notebook and insert the generated code.
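The generated code resembles the following sketch; the actual snippet fills in your project's credentials, endpoint, bucket, and file key, shown here as placeholders.

```python
import ibm_boto3
import pandas as pd
from ibm_botocore.client import Config

# The Code snippets pane fills in these values for your project's bucket
cos = ibm_boto3.client(
    "s3",
    ibm_api_key_id="<API_KEY>",
    ibm_service_instance_id="<SERVICE_INSTANCE_CRN>",
    ibm_auth_endpoint="https://iam.cloud.ibm.com/identity/token",
    config=Config(signature_version="oauth"),
    endpoint_url="<COS_ENDPOINT_URL>",
)

# Stream the CSV object from the bucket into a pandas DataFrame
body = cos.get_object(Bucket="<BUCKET_NAME>", Key="my_data.csv")["Body"]
df = pd.read_csv(body)
df.head()
```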
How do I access the data from a compressed file in a notebook?
After you load the compressed file to object storage, get the file credentials by clicking the Code snippets icon in an opened notebook, clicking Read data, and selecting the compressed file from the project. Then, click in an empty code cell in your notebook and load the credentials to the cell. Alternatively, click to copy the credentials to the clipboard and paste them into your notebook.
Security and reliability
How secure is IBM watsonx?
The IBM watsonx platform is very secure and resilient. See [Security of IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html).
Is my data and notebook protected from sharing outside of my collaborators?
The data that is loaded into your project and notebooks is secure. Only the collaborators in your project can access your data or notebooks. Each platform account acts as a separate tenant of the Spark and IBM Cloud Object Storage services. Tenants cannot access other tenants' data.
If you want to share your notebook with the public, hide your data service credentials in your notebook. For the Python and R languages, mark the credentials cell with the following comment: @hidden_cell
Be sure to save your notebook immediately after you enter the tag to hide cells with sensitive data. Only then should you share your work.
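For example, a Python credentials cell marked this way is excluded when the notebook is shared; the values are placeholders.

```python
# @hidden_cell
# This cell is hidden when the notebook is shared; keep credentials here.
credentials = {
    "apikey": "<API_KEY>",
    "url": "<SERVICE_ENDPOINT_URL>",
}
```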
Do I need to back up my notebooks?
No. Your notebooks are stored in IBM Cloud Object Storage, which provides resiliency against outages.
Sharing and collaboration
What are the implications of sharing a notebook?
When you share a notebook, the permalink never changes. Any person with the link can view your notebook. You can stop sharing the notebook by clearing the checkbox to share it. Updates are not automatically shared. When you update your notebook, you can sync the shared notebook by reselecting the checkbox to share it.
How can I share my work outside of RStudio?
One way of sharing your work outside of RStudio is connecting it to a shared GitHub repository that you and your collaborators can work from. Read this [blog post](https://www.r-bloggers.com/rstudio-and-github/) for more information.
However, the best way to share your work with the members of a project is to use notebooks with the R kernel in the project.
RStudio is a great environment to work in for prototyping and working individually on R projects, but it is not yet integrated with projects.
How do I share my SPSS Modeler flow with another project?
By design, modeler flows can be used only in the project where the flow is created or imported. If you need to use a modeler flow in a different project, you must download the flow from the current project (the source project) to your local environment and then import it into the other project (the target project).
IBM Watson Machine Learning
How do I run an AutoAI experiment?
Go to [Creating an AutoAI experiment from sample data](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build.html) to watch a short video to see how to create and run an AutoAI experiment and then follow a tutorial to set up your own sample.
What is available for automated model building?
The AutoAI graphical tool automatically analyzes your data and generates candidate model pipelines that are customized for your predictive modeling problem. These model pipelines are created iteratively as AutoAI analyzes your data set and discovers data transformations, algorithms, and parameter settings that work best for your problem setting. Results are displayed on a leaderboard, showing the automatically generated model pipelines ranked according to your problem optimization objective. For details, see [AutoAI overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html).
What frameworks and libraries are supported for my machine learning models?
You can use popular tools, libraries, and frameworks to train and deploy machine learning models by using IBM Watson Machine Learning. The [supported frameworks topic](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/pm_service_supported_frameworks.html) lists supported versions and features, as well as deprecated versions scheduled to be discontinued.
What is an API Key?
API keys allow you to easily authenticate when using the CLI or APIs that can be used across multiple services. API keys are considered confidential because they are used to grant access. Treat all API keys as you would a password because anyone with your API key can impersonate your service.
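For example, this sketch exchanges an IBM Cloud API key for a short-lived bearer token at the IAM endpoint and uses the token in an Authorization header; the API key value is a placeholder.

```python
import requests

# Exchange the API key for an IAM access token
resp = requests.post(
    "https://iam.cloud.ibm.com/identity/token",
    headers={"Content-Type": "application/x-www-form-urlencoded"},
    data={
        "grant_type": "urn:ibm:params:oauth:grant-type:apikey",
        "apikey": "<YOUR_API_KEY>",  # treat the real key like a password
    },
)
resp.raise_for_status()
access_token = resp.json()["access_token"]

# Use the token in an Authorization header for subsequent REST calls
headers = {"Authorization": f"Bearer {access_token}"}
```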
Watson OpenScale
What is Watson OpenScale?
IBM Watson OpenScale tracks and measures outcomes from your AI models, and helps ensure that they remain fair, explainable, and compliant no matter where your models were built or are running. Watson OpenScale also detects and helps correct drift in accuracy when an AI model is in production.
How do I convert a prediction column from an integer data type to a categorical data type?
For fairness monitoring, the prediction column allows only an integer numerical value even though the prediction label is categorical. How do I configure a categorical feature that is not an integer? Is a manual conversion required?
The training data might have class labels such as “Loan Denied” and “Loan Granted”. The prediction value that is returned by the IBM Watson Machine Learning scoring endpoint has values such as “0.0” and “1.0”. The scoring endpoint also has an optional column that contains the text representation of the prediction. For example, if prediction=1.0, the predictionLabel column might have the value “Loan Granted”. If such a column is available, when you configure the favorable and unfavorable outcomes for the model, specify the string values “Loan Granted” and “Loan Denied”. If such a column is not available, specify the integer or double values of 1.0 and 0.0 for the favorable and unfavorable classes.
IBM Watson Machine Learning has the concept of an output schema, which defines the structure of the scoring endpoint's output and the role of each column. The roles identify which column contains the prediction value, which column contains the prediction probability, which column contains the class label value, and so on. The output schema is automatically set for models that are created by using model builder. It can also be set by using the IBM Watson Machine Learning Python client. You can use the output schema to define a column that contains the string representation of the prediction by setting that column's modeling_role to ‘decoded-target’. Read the [documentation for the IBM Watson Machine Learning Python client](https://ibm.github.io/watson-machine-learning-sdk/). Search for “OUTPUT_DATA_SCHEMA” to understand the output schema; the API to use is the store_model API, which accepts OUTPUT_DATA_SCHEMA as a parameter.
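The following hedged sketch illustrates the idea: an output schema whose decoded-target column carries the string form of the prediction, passed to store_model. The field names and model object are illustrative, and store_model typically requires additional metadata (such as the model type and software specification) that is abbreviated here.

```python
from ibm_watson_machine_learning import APIClient

client = APIClient({"apikey": "<API_KEY>", "url": "https://us-south.ml.cloud.ibm.com"})
client.set.default_space("<SPACE_ID>")

# Each field's modeling_role tells the platform how to interpret the column;
# 'decoded-target' marks the string representation of the prediction
output_schema = {
    "id": "wml_scoring_output",
    "fields": [
        {"name": "prediction", "type": "double",
         "metadata": {"modeling_role": "prediction"}},
        {"name": "probability", "type": "array",
         "metadata": {"modeling_role": "probability"}},
        {"name": "predictionLabel", "type": "string",
         "metadata": {"modeling_role": "decoded-target"}},  # e.g. "Loan Granted"
    ],
}

trained_model = "<path-to-model-archive-or-model-object>"  # illustrative placeholder
meta_props = {
    client.repository.ModelMetaNames.NAME: "credit-risk-model",
    client.repository.ModelMetaNames.OUTPUT_DATA_SCHEMA: output_schema,
    # ...plus TYPE, SOFTWARE_SPEC_UID, and so on, as your framework requires
}
stored_model = client.repository.store_model(model=trained_model, meta_props=meta_props)
```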
Why does Watson OpenScale need access to training data?
You must either provide Watson OpenScale access to training data that is stored in Db2 or IBM Cloud Object Storage, or you must run a Notebook to access the training data.
Watson OpenScale needs access to your training data for the following reasons:
* To generate contrastive explanations: creating explanations requires access to training data statistics, such as the median value, standard deviation, and distinct values.
* To display training data statistics: To populate the bias details page, Watson OpenScale must have training data from which to generate statistics.
* To build a drift detection model: The Drift monitor uses training data to create and calibrate drift detection.
In the Notebook-based approach, you are expected to upload the statistics and other information when you configure a deployment in Watson OpenScale. Watson OpenScale no longer has access to the training data outside of the Notebook, which is run in your environment. It has access only to the information uploaded during the configuration.
What does it mean if the fairness score is greater than 100 percent?
Depending on your fairness configuration, your fairness score can exceed 100 percent. It means that your monitored group is getting relatively more “fair” outcomes as compared to the reference group. Technically, it means that the model is unfair in the opposite direction. For example, if 60 percent of the monitored group receives favorable outcomes compared with 50 percent of the reference group, the fairness score is 120 percent.
How is model bias mitigated by using Watson OpenScale?
The debiasing capability in Watson OpenScale is enterprise grade. It is robust, scalable, and can handle a wide variety of models. Debiasing in Watson OpenScale consists of a two-step process:
* Learning phase: learning the customer model's behavior to understand when it acts in a biased manner.
* Application phase: identifying whether the customer's model acts in a biased manner on a specific data point and, if needed, fixing the bias.
For more information, see [Understanding how debiasing works](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-debias-ovr.html) and [Debiasing options](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-insight-debias.html).
Is it possible to check for model bias on sensitive attributes, such as race and sex, even when the model is not trained on them?
Yes. Recently, Watson OpenScale delivered a ground-breaking feature called “Indirect Bias detection.” Use it to detect whether the model is exhibiting bias indirectly for sensitive attributes, even though the model is not trained on these attributes. For more information, see [Understanding how debiasing works](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-debias-ovr.html#mf-debias-indirect).
Is it possible to mitigate bias for regression-based models?
Yes. You can use Watson OpenScale to mitigate bias in regression-based models. No additional configuration is needed to use this feature. Bias mitigation for regression models is done out of the box when the model exhibits bias.
What are the different methods of debiasing in Watson OpenScale?
You can use both Active Debiasing and Passive Debiasing for debiasing. For more information, see [Debiasing options](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-insight-debias.html#it-dbo-active).
Configuring a model requires information about the location of the training data and the options are Cloud Object Storage and Db2. If the data is in Netezza, can Watson OpenScale use Netezza?
Use this [Watson OpenScale Notebook](https://github.com/IBM/watson-openscale-samples/blob/main/Cloud%20Pak%20for%20Data/Batch%20Support/Configuration%20generation%20for%20OpenScale%20batch%20subscription.ipynb) to read the data from Netezza and generate the training statistics and also the drift detection model.
Why doesn't Watson OpenScale see the updates that were made to the model?
Watson OpenScale works on a deployment of a model, not on the model itself. You must create a new deployment and then configure this new deployment as a new subscription in Watson OpenScale. With this arrangement, you are able to compare the two versions of the model.
What are the various kinds of risks associated with using a machine learning model?
Multiple kinds of risk are associated with machine learning models. For example, a change in the input data, also known as drift, can cause the model to make inaccurate decisions, which impacts business predictions. Training data can be cleaned to be free from bias, but runtime data might still induce biased behavior in the model.
Traditional statistical models are simpler to interpret and explain; being unable to explain the outcome of a machine learning model can pose a serious threat to the use of the model.
Must I keep monitoring the Watson OpenScale dashboard to make sure that my models behave as expected?
No, you can set up email alerts for your production model deployments in Watson OpenScale. You receive email alerts whenever a risk evaluation test fails, and then you can come and check the issues and address them.
In Watson OpenScale, what data is used for Quality metrics computation?
Quality metrics are calculated by using manually labeled feedback data and the monitored deployment's responses to that data.
In Watson OpenScale, can the threshold be set for a metric other than 'Area under ROC' during configuration?
No, currently, the threshold can be set only for the 'Area under ROC' metric.
| # Frequently asked questions #
Find answers to frequently asked questions about watsonx\.ai\.
## Account and setup questions ##
<!-- <ul> -->
* [How do I sign up for watsonx?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#sign-up-wxai)
* [Can I try watsonx for free?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#free)
* [How do I upgrade watsonx\.ai and watsonx\.governance?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#upgrade)
* [Which regions can I provision watsonx\.ai and watsonx\.governance in?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/regional-datactr.html)
* [Which web browsers are supported for watsonx?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/browser-support.html)
* [How can I get the most runtime from my Watson Studio Lite plan?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#ws-lite)
* [How do I change languages for the product and the documentation?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/localization.html)
* [How do I find my IBM Cloud account owner or administrator?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#accountadmin)
* [Can I provide feedback?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#feedback)
<!-- </ul> -->
## Foundation model questions ##
<!-- <ul> -->
* [What foundation models are available and where do they come from?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#fm-available)
* [What data was used to train foundation models?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#fm-data)
* [Do I need to check generated output for biased, inappropriate, or incorrect content?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#fm-check)
* [Is there a limit to how much text generation I can do?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#fm-token-limit)
* [Does prompt engineering train the foundation model?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#fm-train)
* [Does IBM have access to or use my data in any way?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#fm-privacy)
* [What APIs are available?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#fm-apis)
<!-- </ul> -->
## Project questions ##
<!-- <ul> -->
* [How do I load very large files to my project?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#verylarge)
<!-- </ul> -->
## IBM Cloud Object Storage questions ##
<!-- <ul> -->
* [What is saved in IBM Cloud Object Storage for workspaces?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#saved-in-cos)
* [Do I need to upgrade IBM Cloud Object Storage when I upgrade other services?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#upgrade-cos)
* [Why am I unable to add storage to an existing project or to see the IBM Cloud Object Storage selection in the New Project dialog?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#cosstep)
<!-- </ul> -->
## Notebook questions ##
<!-- <ul> -->
* [Can I install libraries or packages to use in my notebooks?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#install-libraries)
* [Can I call functions that are defined in one notebook from another notebook?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#functions-defined)
* [Can I add arbitrary notebook extensions?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#arbitrary)
* [How do I access the data from a CSV file in a notebook?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#csv-file)
* [How do I access the data from a compressed file in a notebook?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#compressed-file)
<!-- </ul> -->
## Security and reliability questions ##
<!-- <ul> -->
* [How secure is IBM watsonx?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#security)
* [Is my data and notebook protected from sharing outside of my collaborators?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#protected-notebooks)
* [Do I need to back up my notebooks?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#backup-notebooks)
<!-- </ul> -->
## Sharing and collaboration questions ##
<!-- <ul> -->
* [What are the implications of sharing a notebook?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#sharing-notebooks)
* [How can I share my work outside of RStudio?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#how-share)
* [How do I share my SPSS Modeler flow with another project?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#share-spss)
<!-- </ul> -->
## Machine learning questions ##
<!-- <ul> -->
* [How do I run an AutoAI experiment?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#run-autoai)
* [What is available for automated model building?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#wml-autoai)
* [What frameworks and libraries are available for my machine learning models?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#wml-frameworks)
* [What is an API Key?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#wml-api-key)
<!-- </ul> -->
## Watson OpenScale questions ##
<!-- <ul> -->
* [What is Watson OpenScale?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#faq-whatsa)
* [How do I convert a prediction column from an integer data type to a categorical data type?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#wos-faqs-convert-data-types)
* [Why does Watson OpenScale need access to training data?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#trainingdata)
* [What does it mean if the fairness score is greater than 100 percent?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#fairness-score-over100)
* [How is model bias mitigated by using Watson OpenScale?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#wos-001-bias)
* [Is it possible to check for model bias on sensitive attributes, such as race and sex, even when the model is not trained on them?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#wos-002-attrib)
* [Is it possible to mitigate bias for regression\-based models?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#wos-003-regress)
* [What are the different methods of debiasing in Watson OpenScale?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#wos-004-methods-bias)
* [Configuring a model requires information about the location of the training data and the options are Cloud Object Storage and Db2\. If the data is in Netezza, can Watson OpenScale use Netezza?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#configmodel)
* [Why doesn't Watson OpenScale see the updates that were made to the model?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#new-model-missing)
* [What are the various kinds of risks associated in using a machine learning model? ](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#wos-risk)
* [Must I keep monitoring the Watson OpenScale dashboard to make sure that my models behave as expected?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#wos-dashboard-email)
* [In Watson OpenScale, what data is used for Quality metrics computation?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#wos-quality-data)
* [In Watson OpenScale, can the threshold be set for a metric other than 'Area under ROC' during configuration?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=en#wos-thresholds)
<!-- </ul> -->
## IBM watsonx\.ai questions ##
### How do I sign up for watsonx? ###
Go to [Try IBM watsonx\.ai](https://dataplatform.cloud.ibm.com/registration/stepone?context=wx&apps=data_science_experience,watson_data_platform,cos&uucid=0b526de8c1c419db&utm_content=WXAWW) or [Try watsonx\.governance](https://dataplatform.cloud.ibm.com/registration/stepone?context=wx&apps=data_science_experience,watson_machine_learning,cos,aiopenscale&uucid=0cf8ca3f38ace12f&utm_content=WXGWW®ions=us-south)\. If you sign up for watsonx\.governance, you automatically provision watsonx\.ai as well\.
### Can I try watsonx for free? ###
Yes, when you sign up for IBM watsonx\.ai, you automatically provision the free version of the underlying services: Watson Studio, Watson Machine Learning, and IBM Cloud Object Storage\. When you sign up for IBM watsonx\.governance, you automatically provision the free version of Watson OpenScale and the free versions of the services for IBM watsonx\.ai\.
### How do I upgrade watsonx\.ai and watsonx\.governance? ###
When you're ready to upgrade any of the underlying services for watsonx\.ai or watsonx\.governance, you can upgrade in place without losing any of your work or data\.
You must be the owner or administrator of the IBM Cloud account for a service to upgrade it\. See [Upgrading services on watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/upgrade.html)\.
### How can I get the most runtime from my Watson Studio Lite plan? ###
The Watson Studio Lite plan allows for 10 CUH per month\. You can maximize your available CUH by setting your assets to use environments with lower CUH rates\. For example, you can [change your notebook environment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html#change-env)\. To see the available environments and the required CUH, go to the [Services catalog page for Watson Studio](https://dataplatform.cloud.ibm.com/data/catalog/data-science-experience?context=wx&target=wx)\.
### How do I find my IBM Cloud account owner? ###
If you have an enterprise account or work in an IBM Cloud that you don't own, you might need to ask an account owner to give you access to a workspace or another role\.
To find your IBM Cloud account owner:
<!-- <ol> -->
1. From the navigation menu, choose **Administration > Access (IAM)**\.
2. From the avatar menu, make sure you're in the right account, or switch accounts, if necessary\.
3. Click **Users**, and find the username with the word `owner` next to it\.
<!-- </ol> -->
To understand roles, see [Roles for IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/roles.html)\. To determine your roles, see [Determine your roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/your-roles.html)\.
### Can I provide feedback? ###
Yes, we encourage feedback as we continue to develop this platform\. From the navigation menu, select **Support > Share an idea**\.
## Foundation models ##
### What foundation models are available and where do they come from? ###
See the complete list of [supported foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html)\.
### What data was used to train foundation models? ###
Links to details about each model, including pretraining data and fine\-tuning, are available here: [Supported foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html)\.
### Do I need to check generated output for biased, inappropriate, or incorrect content? ###
Yes, you must review the generated output of foundation models\. Third Party models have been trained with data that might contain biases and inaccuracies and can generate outputs containing misinformation, obscene or offensive language, or discriminatory content\.
In the Prompt Lab, when you toggle **AI guardrails** on, any sentence in the prompt text or model output that contains harmful language will be replaced with a message saying potentially harmful text has been removed\.
See [Avoiding undesirable output](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-hallucinations.html)\.
### Is there a limit to how much text generation I can do? ###
With the free trial of watsonx\.ai, you can use up to 25,000 tokens per month\. Your token usage is the sum of your input and output tokens\.
With a paid service plan, there is no token limit, but you are charged for the tokens that you submit as input plus the tokens that you receive in the generated output\.
See [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html)\.
### Does prompt engineering train the foundation model? ###
No, submitting prompts to a foundation model does not train the model\. The models available in watsonx\.ai are pretrained, so you do not need to train the models before you use them\.
See [Prompt tips](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-tips.html)\.
### Does IBM have access to or use my data in any way? ###
No, IBM does not have access to your data\.
Your work on watsonx\.ai, including your data and the models that you create, are private to your account:
<!-- <ul> -->
* Your data is accessible only by you\. Your data is used to train only your models\. Your data will never be accessible or used by IBM or any other person or organization\. Your data is stored in dedicated storage buckets and is encrypted at rest and in motion\.
* Your models are accessible only by you\. Your models will never be accessible or used by IBM or any other person or organization\. Your models are secured in the same way as your data\.
<!-- </ul> -->
Learn more about security and your options:
<!-- <ul> -->
* [Security and privacy of foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-security.html)
* [Security for IBM watsonx\.ai](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html)
* [Data security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-data.html)
<!-- </ul> -->
### What APIs are available? ###
You can prompt foundation models in watsonx\.ai programmatically using the Python library\.
See [Foundation models Python library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-python-lib.html)\.
## Projects ##
### How do I load very large files to my project? ###
You can't load data files larger than 5 GB to your project\. If your files are larger, you must use the Cloud Object Storage API and load the data in multiple parts\. See the [curl commands](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/store-large-objs-in-cos.html) for working with Cloud Object Storage directly on IBM Cloud\.
See [Adding very large objects to a project's Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/store-large-objs-in-cos.html)\.
## IBM Cloud Object Storage ##
### What is saved in IBM Cloud Object Storage for workspaces? ###
When you create a project or deployment space, you specify a IBM Cloud Object Storage and create a bucket that is dedicated to that workspace\. These types of objects are stored in the IBM Cloud Object Storage bucket for the workspace:
<!-- <ul> -->
* Files for data assets that you uploaded into the workspace\.
* Files associated with assets that run in tools, such as, notebooks and models\.
* Metadata about assets, such as the asset type, format, and tags\.
<!-- </ul> -->
### Do I need to upgrade IBM Cloud Object Storage when I upgrade other services? ###
You must upgrade your IBM Cloud Object Storage instance only when you run out of storage space\. Other services can use any IBM Cloud Object Storage plan and you can upgrade any service or your IBM Cloud Object Storage service independently\.
### Why am I unable to add storage to an existing project or to see the IBM Cloud Object Storage selection in the New Project dialog? ###
IBM Cloud Object Storage requires an extra step for users who do not have administrative privileges for it\. The account administrator must [enable nonadministrative users to create projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html#cos-delegation)\.
If you have administrator privileges and do not see the latest IBM Cloud Object Storage instance, try again later because server\-side caching might cause a delay in rendering the latest values\.
## Notebooks ##
### Can I install libraries or packages to use in my notebooks? ###
You can install Python libraries and R packages through a notebook, and those libraries and packages will be available to all your notebooks that use the same environment template\. For instructions, see [Import custom or third\-party libraries](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/install-cust-lib.html)\. If you get an error about missing operating system dependencies when you install a library or package, notify IBM Support\. To see the preinstalled libraries and packages and the libraries and packages that you installed, from within a notebook, run the appropriate command:
<!-- <ul> -->
* **Python**: `!pip list`
* **R**: `installed.packages()`
<!-- </ul> -->
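For example, in a Python notebook you can install a third\-party library from a code cell and then confirm that it was installed\. The package name here is only an illustration:

```python
# Install a third-party package from a notebook cell
# (the package name is an illustrative example)
!pip install --user plotly

# Confirm that the package now appears in the installed list
!pip list | grep plotly
```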
### Can I call functions that are defined in one notebook from another notebook? ###
There is no way to call one notebook from another notebook on the platform\. However, you can put your common code into a library outside of the platform and then install it\.
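For example, assuming your common functions are packaged in a library that is hosted in a Git repository (the repository URL and helper name below are hypothetical), each notebook can install and import that library:

```python
# Install the shared library from a hypothetical Git repository
!pip install --user git+https://github.com/my-org/my-shared-utils.git

# Reuse the common code in any notebook that runs in the same environment
from my_shared_utils import clean_columns  # hypothetical helper function
```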
### Can I add arbitrary notebook extensions? ###
No, you can't extend your notebook capabilities by adding arbitrary extensions as a customization because all notebook extensions must be preinstalled\.
### How do I access the data from a CSV file in a notebook? ###
After you load a CSV file into object storage, load the data by clicking the **Code snippets** icon in an opened notebook, clicking **Read data**, and selecting the CSV file from the project\. Then, click in an empty code cell in your notebook and insert the generated code\.
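The exact generated code depends on your environment, but it is roughly equivalent to the following sketch, which reads the CSV file from the project's Cloud Object Storage bucket into a pandas DataFrame\. All credential values are placeholders that the generated code fills in for you:

```python
import io

import ibm_boto3
import pandas as pd
from ibm_botocore.client import Config

# Placeholder values -- the Code snippets feature generates these for you
cos = ibm_boto3.client(
    "s3",
    ibm_api_key_id="<api-key>",
    ibm_service_instance_id="<resource-instance-id>",
    config=Config(signature_version="oauth"),
    endpoint_url="<cos-endpoint-url>",
)

# Download the CSV object and load it into a DataFrame
body = cos.get_object(Bucket="<bucket-name>", Key="my_data.csv")["Body"]
df = pd.read_csv(io.BytesIO(body.read()))
df.head()
```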
### How do I access the data from a compressed file in a notebook? ###
After you load the compressed file to object storage, get the file credentials by clicking the **Code snippets** icon in an opened notebook, clicking **Read data**, and selecting the compressed file from the project\. Then, click in an empty code cell in your notebook and load the credentials into the cell\. Alternatively, click to copy the credentials to the clipboard and paste them into your notebook\.
## Security and reliability ##
### How secure is IBM watsonx? ###
The IBM watsonx platform is very secure and resilient\. See [Security of IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html)\.
### Is my data and notebook protected from sharing outside of my collaborators? ###
The data that is loaded into your project and notebooks is secure\. Only the collaborators in your project can access your data or notebooks\. Each platform account acts as a separate tenant of the Spark and IBM Cloud Object Storage services\. Tenants cannot access other tenants' data\.
If you want to share your notebook with the public, then hide your data service credentials in your notebook\. For the Python and R languages, enter the following syntax: `# @hidden_cell`
Be sure to save your notebook immediately after you enter the syntax to hide cells with sensitive data\.
Only then should you share your work\.
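For example, a hidden cell might look like the following sketch, where the credential values are placeholders:

```python
# @hidden_cell
# This cell is not included when the notebook is shared.
credentials = {
    "apikey": "<your-api-key>",
    "endpoint": "<service-endpoint>",
}
```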
### Do I need to back up my notebooks? ###
No\. Your notebooks are stored in IBM Cloud Object Storage, which provides resiliency against outages\.
## Sharing and collaboration ##
### What are the implications of sharing a notebook? ###
When you share a notebook, the permalink never changes\. Any person with the link can view your notebook\. You can stop sharing the notebook by clearing the checkbox to share it\. Updates are not automatically shared\. When you update your notebook, you can sync the shared notebook by reselecting the checkbox to share it\.
### How can I share my work outside of RStudio? ###
One way of sharing your work outside of RStudio is connecting it to a shared GitHub repository that you and your collaborators can work from\. Read this [blog post](https://www.r-bloggers.com/rstudio-and-github/) for more information\.
However, the best method to share your work with the members of a project is to use notebooks with the R kernel in the project\.
RStudio is a great environment for prototyping and for working individually on R projects, but it is not yet integrated with projects\.
### How do I share my SPSS Modeler flow with another project? ###
By design, modeler flows can be used only in the project where the flow is created or imported\. If you need to use a modeler flow in a different project, you must download the flow from the current project (source project) to your local environment and then import the flow into another project (target project)\.
## IBM Watson Machine Learning ##
### How do I run an AutoAI experiment? ###
Go to [Creating an AutoAI experiment from sample data](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build.html) to watch a short video to see how to create and run an AutoAI experiment and then follow a tutorial to set up your own sample\.
### What is available for automated model building? ###
The AutoAI graphical tool automatically analyzes your data and generates candidate model pipelines that are customized for your predictive modeling problem\. These model pipelines are created iteratively as AutoAI analyzes your data set and discovers data transformations, algorithms, and parameter settings that work best for your problem setting\. Results are displayed on a leaderboard, showing the automatically generated model pipelines ranked according to your problem optimization objective\. For details, see [AutoAI overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html)\.
### What frameworks and libraries are supported for my machine learning models? ###
You can use popular tools, libraries, and frameworks to train and deploy machine learning models by using IBM Watson Machine Learning\. The [supported frameworks topic](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/pm_service_supported_frameworks.html) lists supported versions and features, as well as deprecated versions scheduled to be discontinued\.
### What is an API Key? ###
An API key lets you authenticate easily when you use the CLI or APIs across multiple services\. API keys are considered confidential because they grant access\. Treat every API key as you would a password: anyone with your API key can impersonate your service\.
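For example, the following sketch exchanges an IBM Cloud API key for a short\-lived IAM access token, which you can then pass as a bearer token when you call service APIs\. The API key value is a placeholder:

```python
import requests

# Exchange an IBM Cloud API key for an IAM access token
response = requests.post(
    "https://iam.cloud.ibm.com/identity/token",
    headers={"Content-Type": "application/x-www-form-urlencoded"},
    data={
        "grant_type": "urn:ibm:params:oauth:grant-type:apikey",
        "apikey": "<your-api-key>",  # never hard-code a real key in shared code
    },
)
access_token = response.json()["access_token"]
```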
## Watson OpenScale ##
### What is Watson OpenScale? ###
IBM Watson OpenScale tracks and measures outcomes from your AI models, and helps ensure that they remain fair, explainable, and compliant wherever your models were built or are running\. Watson OpenScale also detects and helps correct the drift in accuracy when an AI model is in production\.
### How do I convert a prediction column from an integer data type to a categorical data type? ###
For fairness monitoring, the prediction column allows only an integer numerical value even though the prediction label is categorical\. How do I configure a categorical feature that is not an integer? Is a manual conversion required?
The training data might have class labels such as “Loan Denied” and “Loan Granted”\. The prediction value that is returned by the IBM Watson Machine Learning scoring endpoint has values such as “0\.0” and “1\.0”\. The scoring endpoint also has an optional column that contains a text representation of the prediction\. For example, if prediction=1\.0, the predictionLabel column might have the value “Loan Granted”\. If such a column is available, when you configure the favorable and unfavorable outcomes for the model, specify the string values “Loan Granted” and “Loan Denied”\. If such a column is not available, you need to specify the integer or double values of 1\.0 and 0\.0 for the favorable and unfavorable classes\.
IBM Watson Machine Learning has a concept of an output schema that defines the schema of the output of the IBM Watson Machine Learning scoring endpoint and the role of the different columns\. The roles are used to identify which column contains the prediction value, which column contains the prediction probability, which column contains the class label value, and so on\. The output schema is automatically set for models that are created by using model builder\. It can also be set by using the IBM Watson Machine Learning Python client\. You can use the output schema to define a column that contains the string representation of the prediction by setting the `modeling_role` for that column to ‘decoded\-target’\. Read the [documentation for the IBM Watson Machine Learning Python client](https://ibm.github.io/watson-machine-learning-sdk/)\. Search for “OUTPUT\_DATA\_SCHEMA” to understand the output schema; the API to use is the store\_model API, which accepts the OUTPUT\_DATA\_SCHEMA as a parameter\.
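A minimal sketch of that configuration with the IBM Watson Machine Learning Python client might look like the following example\. The field names are illustrative, `model` is assumed to be a trained model object, and the other required metadata properties are omitted:

```python
from ibm_watson_machine_learning import APIClient

client = APIClient(wml_credentials)  # wml_credentials is assumed to be defined

# Output schema that maps each scoring output column to a modeling role
output_data_schema = {
    "id": "output_schema",
    "fields": [
        {"name": "prediction", "type": "double",
         "metadata": {"modeling_role": "prediction"}},
        {"name": "probability", "type": "array",
         "metadata": {"modeling_role": "probability"}},
        # String representation of the prediction, such as "Loan Granted"
        {"name": "predictionLabel", "type": "string",
         "metadata": {"modeling_role": "decoded-target"}},
    ],
}

meta_props = {
    client.repository.ModelMetaNames.NAME: "loan approval model",
    client.repository.ModelMetaNames.OUTPUT_DATA_SCHEMA: output_data_schema,
    # ...other required metadata, such as type and software specification
}

stored_model = client.repository.store_model(model=model, meta_props=meta_props)
```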
### Why does Watson OpenScale need access to training data? ###
You must either provide Watson OpenScale access to training data that is stored in Db2 or IBM Cloud Object Storage, or run a Notebook to access the training data\.
Watson OpenScale needs access to your training data for the following reasons:
<!-- <ul> -->
* To generate contrastive explanations: To create explanations, access to statistics, such as median value, standard deviation, and distinct values from the training data is required\.
* To display training data statistics: To populate the bias details page, Watson OpenScale must have training data from which to generate statistics\.
* To build a drift detection model: The Drift monitor uses training data to create and calibrate drift detection\.
<!-- </ul> -->
In the Notebook\-based approach, you are expected to upload the statistics and other information when you configure a deployment in Watson OpenScale\. Watson OpenScale no longer has access to the training data outside of the Notebook, which is run in your environment\. It has access only to the information uploaded during the configuration\.
### What does it mean if the fairness score is greater than 100 percent? ###
Depending on your fairness configuration, your fairness score can exceed 100 percent\. It means that your monitored group is getting relatively more “fair” outcomes as compared to the reference group\. Technically, it means that the model is unfair in the opposite direction\.
### How is model bias mitigated by using Watson OpenScale? ###
The debiasing capability in Watson OpenScale is enterprise grade\. It is robust, scalable, and can handle a wide variety of models\. Debiasing in Watson OpenScale consists of a two\-step process:
<!-- <ul> -->
* Learning phase: Learning customer model behavior to understand when it acts in a biased manner\.
* Application phase: Identifying whether the customer’s model acts in a biased manner on a specific data point and, if needed, fixing the bias\.
<!-- </ul> -->
For more information, see [Understanding how debiasing works](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-debias-ovr.html) and [Debiasing options](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-insight-debias.html)\.
### Is it possible to check for model bias on sensitive attributes, such as race and sex, even when the model is not trained on them? ###
Yes\. Recently, Watson OpenScale delivered a ground\-breaking feature called “Indirect Bias detection\.” Use it to detect whether the model is exhibiting bias indirectly for sensitive attributes, even though the model is not trained on these attributes\. For more information, see [Understanding how debiasing works](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-debias-ovr.html#mf-debias-indirect)\.
### Is it possible to mitigate bias for regression\-based models? ###
Yes\. You can use Watson OpenScale to mitigate bias in regression\-based models\. No additional configuration is needed to use this feature\. Bias mitigation for regression models happens out of the box when the model exhibits bias\.
### What are the different methods of debiasing in Watson OpenScale? ###
You can use both Active Debiasing and Passive Debiasing for debiasing\. For more information, see [Debiasing options](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-insight-debias.html#it-dbo-active)\.
### Configuring a model requires information about the location of the training data and the options are Cloud Object Storage and Db2\. If the data is in Netezza, can Watson OpenScale use Netezza? ###
Use this [Watson OpenScale Notebook](https://github.com/IBM/watson-openscale-samples/blob/main/Cloud%20Pak%20for%20Data/Batch%20Support/Configuration%20generation%20for%20OpenScale%20batch%20subscription.ipynb) to read the data from Netezza and generate the training statistics and also the drift detection model\.
### Why doesn't Watson OpenScale see the updates that were made to the model? ###
Watson OpenScale works on a deployment of a model, not on the model itself\. You must create a new deployment and then configure this new deployment as a new subscription in Watson OpenScale\. With this arrangement, you are able to compare the two versions of the model\.
### What are the various kinds of risks associated in using a machine learning model? ###
Machine learning models are associated with multiple kinds of risk\. For example, any change in the input data, which is also known as drift, can cause the model to make inaccurate decisions, impacting business predictions\. Training data can be cleaned to be free from bias, but runtime data might still induce biased model behavior\.
Traditional statistical models are simpler to interpret and explain\. In contrast, the inability to explain the outcome of a machine learning model can pose a serious threat to the use of the model\.
### Must I keep monitoring the Watson OpenScale dashboard to make sure that my models behave as expected? ###
No, you can set up email alerts for your production model deployments in Watson OpenScale\. You receive email alerts whenever a risk evaluation test fails, and then you can come and check the issues and address them\.
### In Watson OpenScale, what data is used for Quality metrics computation? ###
Quality metrics are calculated by using manually labeled feedback data and the monitored deployment's responses for that data\.
### In Watson OpenScale, can the threshold be set for a metric other than 'Area under ROC' during configuration? ###
No, currently, the threshold can be set only for the 'Area under ROC' metric\.
<!-- </article "role="article" "> -->
|
2BB452B4C9E3458BC02A9D392961E9C643E402DE | https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/feature-matrix.html?context=cdpaas&locale=en | Feature differences between watsonx deployments | Feature differences between watsonx deployments
IBM watsonx as a Service and watsonx on Cloud Pak for Data software have some differences in features and implementation. IBM watsonx as a Service is a set of IBM Cloud services. Watsonx services on Cloud Pak for Data 4.8 are offered as software that you must install and maintain. Services that are available on both deployments also have differences in features on IBM watsonx as a Service compared to watsonx software on Cloud Pak for Data 4.8.
* [Platform differences](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/feature-matrix.html?context=cdpaas&locale=en#platform)
* [Common features across services](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/feature-matrix.html?context=cdpaas&locale=en#common)
* [Watson Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/feature-matrix.html?context=cdpaas&locale=en#ws)
* [Watson Machine Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/feature-matrix.html?context=cdpaas&locale=en#wml)
* [Watson OpenScale](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/feature-matrix.html?context=cdpaas&locale=en#wos)
Platform differences
IBM watsonx as a Service and watsonx software on Cloud Pak for Data share a common code base; however, they differ in the following key ways:
Platform differences
Features As a service Software
Software, hardware, and installation IBM watsonx is fully managed by IBM on IBM Cloud. Software updates are automatic. Scaling of compute resources and storage is automatic. You sign up at [https://dataplatform.cloud.ibm.com](https://dataplatform.cloud.ibm.com). You provide and maintain hardware. You install, maintain, and upgrade the software. See [Software requirements](https://www.ibm.com/docs/SSQNUZ_4.8.x/sys-reqs/software-reqs.html).
Storage You provision an IBM Cloud Object Storage service instance to provide storage. See [IBM Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/cloud-object-storage.html). You provide persistent storage on a Red Hat OpenShift cluster. See [Storage requirements](https://www.ibm.com/docs/SSQNUZ_4.8.x/sys-reqs/storage-requirements.html).
Compute resources for running workloads Users choose the appropriate runtime for their jobs. Compute usage is billed based on the rate for the runtime environment and the duration of the job. See [Monitor account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html). You set up the number of Red Hat OpenShift nodes with the appropriate number of vCPUs. See [Hardware requirements](https://www.ibm.com/docs/SSQNUZ_4.8.x/sys-reqs/hardware-reqs.html) and [Monitoring the platform](https://www.ibm.com/docs/SSQNUZ_4.8.x/cpd/admin/platform-management.html).
Cost You buy each service that you need at the appropriate plan level. Many services bill for compute and other resource consumption. See each service page in the [IBM Cloud catalog](https://cloud.ibm.com/catalog) or in the services catalog on IBM watsonx, by selecting Administration > Services > Services catalog from the navigation menu. You buy a software license based on the services that you need. See [Cloud Pak for Data](https://cloud.ibm.com/catalog/content/ibm-cp-datacore-6825cc5d-dbf8-4ba2-ad98-690e6f221701-global).
Security, compliance, and isolation The data security, network security, security standards compliance, and isolation of IBM watsonx are managed by IBM Cloud. You can set up extra security and encryption options. See [Security of IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html). Red Hat OpenShift Container Platform provides basic security features. Cloud Pak for Data is assessed for various Privacy and Compliance regulations and provides features that you can use in preparation for various privacy and compliance assessments. You are responsible for additional security features, encryption, and network isolation. See [Security considerations](https://www.ibm.com/docs/SSQNUZ_4.8.x/cpd/plan/security.html).
Available services Most watsonx services are available in both deployment environments. <br>See [Services for IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/cloud-services.html). Includes many other services for other components and solutions. See [Services for Cloud Pak for Data 4.8](https://www.ibm.com/docs/SSQNUZ_4.8.x/svc-nav/head/services.html).
User management You add users and user groups and manage their account roles and permissions with IBM Cloud Identity and Access Management. See [Add users to the account](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-addl-users.html). <br>You can also set up SAML federation on IBM Cloud. See [IBM Cloud docs: How IBM Cloud IAM works](https://cloud.ibm.com/docs/account?topic=account-iamoverview). You can add users and create user groups from the Administration menu. You can use the Identity and Access Management Service or use your existing SAML SSO or LDAP provider for identity and password management. You can create dynamic, attribute-based user groups. See [User management](https://www.ibm.com/docs/SSQNUZ_4.8.x/cpd/admin/users.html).
Common features across services
The following features that are provided with the platform are effectively the same for services on IBM watsonx as a Service and watsonx software on Cloud Pak for Data 4.8:
* Global search for assets across the platform
* The Platform assets catalog for sharing connections across the platform
* Role-based user management within collaborative workspaces across the platform
* Common infrastructure for assets and workspaces
* A services catalog for adding services
* View compute usage from the Administration menu
The following table describes differences in features across services between IBM watsonx as a Service and watsonx software on Cloud Pak for Data 4.8:
Differences in common features across services
Feature As a service Software
Manage all projects Users with the Manage projects permission from the IAM service access Manager role for the IBM Cloud Pak for Data service can join any project with the Admin role and then manage or delete the project. Users with the Manage projects permission can join any project with the Admin role and then manage or delete the project.
Connections to remote data sources Most supported data sources are common to both deployment environments. <br>See [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html). See [Supported data sources](https://www.ibm.com/docs/SSQNUZ_4.8.x/cpd/access/data-sources.html).
Connection credentials that are personal or shared Connections in projects and catalogs can require personal credentials or allow shared credentials. Shared credentials can be disabled at the account level. Platform connections can require personal credentials or allow shared credentials. Shared credentials can be disabled at the platform level.
Connection credentials from secrets in a vault Not available Available
Kerberos authentication Not available Available for [some services and connections](https://www.ibm.com/docs/SSQNUZ_4.8.x/cpd/plan/kerberos.html)
Sample assets and projects from the Samples app Available Not available
Custom JDBC connector Not available Available starting in 4.8.0
Watson Studio
The following Watson Studio features are effectively the same on IBM watsonx as a Service and watsonx software on Cloud Pak for Data 4.8:
* Collaboration in projects and deployment spaces
* Accessing project assets programmatically
* Project import and export by using a project ZIP file
* Jupyter notebooks
* Job scheduling
* Data Refinery
* Watson Natural Language Processing for Python
This table describes the feature differences between the Watson Studio service on the as-a-service and software deployment environments, differences between offering plans, and whether additional services are required. For more information about feature differences between offering plans on IBM watsonx, see [Watson Studio offering plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/ws-plans.html).
Differences in Watson Studio
Feature As a service Software
Sandbox project Created automatically Not available
Create project Create: <br>* An empty project <br>* A project from a sample in the Samples <br>* A project from file Create: <br>* An empty project <br>* A project from file <br>* A project with Git integration
Git integration * Publish notebooks on GitHub <br>* Publish notebooks as gist * Integrate a project with Git <br>* Sync assets to a repository in one project and use those assets in another project
Project terminal for advanced Git operations Not available Available in projects with default Git integration
Organize assets in projects with folders Not available Available starting with 4.8.0
Foundation model inferencing Available Requires the watsonx.ai service.
Foundation model tuning Available Not available
Supported foundation models Most foundation models are available on both deployments. See [Supported foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html) Requires that the models are installed on the cluster. See [Supported foundation models](https://www.ibm.com/docs/SSQNUZ_4.8.x/wsj/analyze-data/fm-models.html).
AI guardrails for prompting Available Not available
Prompt variables Available Not available
Synthetic data generation Available Requires the Synthetic Data Generator service.
JupyterLab Not available Available in projects with Git integration
Visual Studio Code integration Not available Available
RStudio Cannot integrate with Git Can integrate with Git. Requires an RStudio Server Runtimes service.
Python scripts Not available Work with Python scripts in JupyterLab. Requires a Watson Studio Runtimes service.
Generate code to load data to a notebook by using the Flight service Not available Available
Manage notebook lifecycle Not available Use CPDCTL for notebook lifecycle management
Code package assets (set of dependent files in a folder structure) Not available Use CPDCTL to create code package assets in a deployment space
Promote notebooks to spaces Not available Available manually from the project's Assets page or programmatically by using CPDCTL
Python with GPU Support available for a single GPU type only (Nvidia K80) Support available for multiple Nvidia GPU types. Requires a Watson Studio Runtimes service.
Create and use custom images Not available Create custom images for Python (with and without GPU), R, JupyterLab (with and without GPU), RStudio, and SPSS environments. Requires a Watson Studio Runtimes and other applicable services.
Anaconda Repository Not available Use to create custom environments and custom images
Hadoop integration Not available Build and train models, and run Data Refinery flows on a Hadoop cluster. Requires the Execution Engine for Apache Hadoop service.
Decision Optimization Available Requires the Decision Optimization service.
SPSS Modeler Available Requires the SPSS Modeler service.
Watson Pipelines Available Requires the Watson Pipelines service.
Watson Machine Learning
The following Watson Machine Learning features are effectively the same on IBM watsonx as a Service and watsonx software on Cloud Pak for Data 4.8:
* Collaboration in projects and deployment spaces
* Deploy models
* Deploy functions
* Watson Machine Learning REST APIs
* Watson Machine Learning Python client
* Create online deployments
* Scale and update deployments
* Define and use custom components
* Use Federated Learning to train a common model with separate and secure data sources
* Monitor deployments across spaces
* Updated forms for testing online deployment
* Use nested pipelines
* AutoAI data imputation
* AutoAI fairness evaluation
* AutoAI time series supporting features
This table describes the differences in features between the Watson Machine Learning service on the as-a-service and software deployment environments, differences between offering plans, and whether additional services are required. For details about functionality differences between offering plans on IBM watsonx, see [Watson Machine Learning offering plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html).
Feature differences between Watson Machine Learning deployments
Feature As a service Software
AutoAI training input Current [supported data sources](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) [Supported data sources](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) change by release
AutoAI experiment compute configuration 8 CPU and 32 GB [Different sizes available](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html)
AutoAI limits on data size <br>and number of prediction targets Set limits [Limits differ by compute configuration](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html)
AutoAI incremental learning Not available Available
Deploy using popular frameworks <br>and software specifications Check for latest [supported versions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-frame-and-specs.html) [Supported versions](https://www.ibm.com/docs/SSQNUZ_4.8.x/wsj/analyze-data/ml-manage-frame-and-specs.html) differ by release
Connect to databases for batch deployments Check for [support by deployment type](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-details.html) Check for support by [deployment type](https://www.ibm.com/docs/SSQNUZ_4.8.x/wsj/analyze-data/deploy-batch-details.html) <br>and by version
Deploy and score Python scripts Available via Python client Create scripts in JupyterLab or Python client, then deploy
Deploy and batch score R Scripts Not available Available
Deploy Shiny apps Not available Create and deploy Shiny apps <br>Deploy from code package
Evaluate jobs for fairness, or drift Requires the Watson OpenScale service Requires the Watson OpenScale service
Evaluate online deployments in a space <br>for fairness, drift or explainability Not available Available <br>Requires the Watson OpenScale service
Control space creation No restrictions by role Use permissions to control who can view and create spaces
Import from Git project to space Not available Available
Code package automatically created when importing <br>from Git project to space Not available Available
Update RShiny app from code package Not available Available
Track model details in a model inventory Register models to view factsheets with lifecycle details. Requires the IBM Knowledge Catalog service. Available <br>Requires the AI Factsheets service.
Create and use custom images Not available Create custom images for Python or SPSS
Notify collaborators about Pipeline events Not available Use Send Mail to notify collaborators
Import project or space file into a nonempty space Not available Available
Deep Learning Experiments Not available Requires the Watson Machine Learning Accelerator service
Provision and manage IBM Cloud service instances Add instances for Watson Machine Learning <br>or Watson OpenScale Services are provisioned on the cluster <br>by the administrator
Watson OpenScale
The following Watson OpenScale features are effectively the same on IBM watsonx as a Service and watsonx software on Cloud Pak for Data 4.8:
* Evaluate deployments for fairness
* Evaluate the quality of deployments
* Monitor deployments for drift
* View and compare model results in an Insights dashboard
* Add deployments from the machine learning provider of your choice
* Set alerts to trigger when evaluations fall below a specified threshold
* Evaluate deployments in a user interface or notebook
* Custom evaluations and metrics
* View details about evaluations in model factsheets
This table describes the differences in features between the Watson OpenScale service on the as-a-service and software deployment environments, differences between offering plans, and whether additional services are required.
Differences in IBM Watson OpenScale
Feature As a service Software
Upload pre-scored test data Not available Available
IBM SPSS Collaboration and Deployment Services Not available Available
Batch processing Not available Available
Support access control by user groups Not available Available
Free database and Postgres plans Available Postgres available starting in 4.8
Set up multiple instances Not available Available
Integration with OpenPages Not available Available
Learn more
* [Services for IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/cloud-services.html)
* [Services for Cloud Pak for Data 4.8](https://www.ibm.com/docs/SSQNUZ_4.8.x/svc-nav/head/services.html)
* [Cloud deployment environment options for Cloud Pak for Data 4.8](https://www.ibm.com/docs/SSQNUZ_4.8.x/cpd/plan/deployment-environments.html)
Parent topic:[Overview of watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/overview-wx.html)
| # Feature differences between watsonx deployments #
IBM watsonx as a Service and watsonx on Cloud Pak for Data software have some differences in features and implementation\. IBM watsonx as a Service is a set of IBM Cloud services\. Watsonx services on Cloud Pak for Data 4\.8 are offered as software that you must install and maintain\. Services that are available on both deployments also have differences in features on IBM watsonx as a Service compared to watsonx software on Cloud Pak for Data 4\.8\.
<!-- <ul> -->
* [Platform differences](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/feature-matrix.html?context=cdpaas&locale=en#platform)
* [Common features across services](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/feature-matrix.html?context=cdpaas&locale=en#common)
* [Watson Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/feature-matrix.html?context=cdpaas&locale=en#ws)
* [Watson Machine Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/feature-matrix.html?context=cdpaas&locale=en#wml)
* [Watson OpenScale](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/feature-matrix.html?context=cdpaas&locale=en#wos)
<!-- </ul> -->
## Platform differences ##
IBM watsonx as a Service and watsonx software on Cloud Pak for Data share a common code base; however, they differ in the following key ways:
<!-- <table> -->
Platform differences
| Features | As a service | Software |
| --------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Software, hardware, and installation | IBM watsonx is fully managed by IBM on IBM Cloud\. Software updates are automatic\. Scaling of compute resources and storage is automatic\. You sign up at [https://dataplatform\.cloud\.ibm\.com](https://dataplatform.cloud.ibm.com)\. | You provide and maintain hardware\. You install, maintain, and upgrade the software\. See [Software requirements](https://www.ibm.com/docs/SSQNUZ_4.8.x/sys-reqs/software-reqs.html)\. |
| Storage | You provision an IBM Cloud Object Storage service instance to provide storage\. See [IBM Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/cloud-object-storage.html)\. | You provide persistent storage on a Red Hat OpenShift cluster\. See [Storage requirements](https://www.ibm.com/docs/SSQNUZ_4.8.x/sys-reqs/storage-requirements.html)\. |
| Compute resources for running workloads | Users choose the appropriate runtime for their jobs\. Compute usage is billed based on the rate for the runtime environment and the duration of the job\. See [Monitor account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html)\. | You set up the number of Red Hat OpenShift nodes with the appropriate number of vCPUs\. See [Hardware requirements](https://www.ibm.com/docs/SSQNUZ_4.8.x/sys-reqs/hardware-reqs.html) and [Monitoring the platform](https://www.ibm.com/docs/SSQNUZ_4.8.x/cpd/admin/platform-management.html)\. |
| Cost | You buy each service that you need at the appropriate plan level\. Many services bill for compute and other resource consumption\. See each service page in the [IBM Cloud catalog](https://cloud.ibm.com/catalog) or in the services catalog on IBM watsonx, by selecting **Administration > Services > Services catalog** from the navigation menu\. | You buy a software license based on the services that you need\. See [Cloud Pak for Data](https://cloud.ibm.com/catalog/content/ibm-cp-datacore-6825cc5d-dbf8-4ba2-ad98-690e6f221701-global)\. |
| Security, compliance, and isolation | The data security, network security, security standards compliance, and isolation of IBM watsonx are managed by IBM Cloud\. You can set up extra security and encryption options\. See [Security of IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html)\. | Red Hat OpenShift Container Platform provides basic security features\. Cloud Pak for Data is assessed for various Privacy and Compliance regulations and provides features that you can use in preparation for various privacy and compliance assessments\. You are responsible for additional security features, encryption, and network isolation\. See [Security considerations](https://www.ibm.com/docs/SSQNUZ_4.8.x/cpd/plan/security.html)\. |
| Available services | Most watsonx services are available in both deployment environments\. <br>See [Services for IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/cloud-services.html)\. | Includes many other services for other components and solutions\. See [Services for Cloud Pak for Data 4\.8](https://www.ibm.com/docs/SSQNUZ_4.8.x/svc-nav/head/services.html)\. |
| User management | You add users and user groups and manage their account roles and permissions with IBM Cloud Identity and Access Management\. See [Add users to the account](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-addl-users.html)\. <br>You can also set up SAML federation on IBM Cloud\. See [IBM Cloud docs: How IBM Cloud IAM works](https://cloud.ibm.com/docs/account?topic=account-iamoverview)\. | You can add users and create user groups from the **Administration** menu\. You can use the Identity and Access Management Service or use your existing SAML SSO or LDAP provider for identity and password management\. You can create dynamic, attribute\-based user groups\. See [User management](https://www.ibm.com/docs/SSQNUZ_4.8.x/cpd/admin/users.html)\. |
<!-- </table ""> -->
## Common features across services ##
The following features that are provided with the platform are effectively the same for services on IBM watsonx as a Service and watsonx software on Cloud Pak for Data 4\.8:
<!-- <ul> -->
* Global search for assets across the platform
* The Platform assets catalog for sharing connections across the platform
* Role\-based user management within collaborative workspaces across the platform
* Common infrastructure for assets and workspaces
* A services catalog for adding services
* View compute usage from the **Administration** menu
<!-- </ul> -->
The following table describes differences in features across services between IBM watsonx as a Service and watsonx software on Cloud Pak for Data 4\.8:
<!-- <table> -->
Differences in common features across services
| Feature | As a service | Software |
| -------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------- |
| Manage all projects | Users with the **Manage projects** permission from the IAM service access **Manager** role for the IBM Cloud Pak for Data service can join any project with the **Admin** role and then manage or delete the project\. | Users with the **Manage projects** permission can join any project with the **Admin** role and then manage or delete the project\. |
| Connections to remote data sources | Most supported data sources are common to both deployment environments\. <br>See [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)\. | See [Supported data sources](https://www.ibm.com/docs/SSQNUZ_4.8.x/cpd/access/data-sources.html)\. |
| Connection credentials that are personal or shared | Connections in projects and catalogs can require personal credentials or allow shared credentials\. Shared credentials can be disabled at the account level\. | Platform connections can require personal credentials or allow shared credentials\. Shared credentials can be disabled at the platform level\. |
| Connection credentials from secrets in a vault | Not available | Available |
| Kerberos authentication | Not available | Available for [some services and connections](https://www.ibm.com/docs/SSQNUZ_4.8.x/cpd/plan/kerberos.html) |
| Sample assets and projects from the Samples app | Available | Not available |
| Custom JDBC connector | Not available | Available starting in 4\.8\.0 |
<!-- </table ""> -->
## Watson Studio ##
The following Watson Studio features are effectively the same on IBM watsonx as a Service and watsonx software on Cloud Pak for Data 4\.8:
<!-- <ul> -->
* Collaboration in projects and deployment spaces
* Accessing project assets programmatically
* Project import and export by using a project ZIP file
* Jupyter notebooks
* Job scheduling
* Data Refinery
* Watson Natural Language Processing for Python
<!-- </ul> -->
This table describes the feature differences between the Watson Studio service on the as\-a\-service and software deployment environments, differences between offering plans, and whether additional services are required\. For more information about feature differences between offering plans on IBM watsonx, see [Watson Studio offering plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/ws-plans.html)\.
<!-- <table> -->
Differences in Watson Studio
| Feature | As a service | Software |
| -------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| Sandbox project | Created automatically | Not available |
| Create project | Create: <br>• An empty project <br>• A project from a sample in the Samples <br>• A project from file | Create: <br>• An empty project <br>• A project from file <br>• A project with Git integration |
| Git integration | • Publish notebooks on GitHub <br>• Publish notebooks as gist | • Integrate a project with Git <br>• Sync assets to a repository in one project and use those assets in another project |
| Project terminal for advanced Git operations | Not available | Available in projects with default Git integration |
| Organize assets in projects with folders | Not available | Available starting with 4\.8\.0 |
| Foundation model inferencing | Available | Requires the watsonx\.ai service\. |
| Foundation model tuning | Available | Not available |
| Supported foundation models | Most foundation models are available on both deployments\. See [Supported foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html) | Requires that the models are installed on the cluster\. See [Supported foundation models](https://www.ibm.com/docs/SSQNUZ_4.8.x/wsj/analyze-data/fm-models.html)\. |
| AI guardrails for prompting | Available | Not available |
| Prompt variables | Available | Not available |
| Synthetic data generation | Available | Requires the Synthetic Data Generator service\. |
| JupyterLab | Not available | Available in projects with Git integration |
| Visual Studio Code integration | Not available | Available |
| RStudio | Cannot integrate with Git | Can integrate with Git\. Requires an RStudio Server Runtimes service\. |
| Python scripts | Not available | Work with Python scripts in JupyterLab\. Requires a Watson Studio Runtimes service\. |
| Generate code to load data to a notebook by using the Flight service | Not available | Available |
| Manage notebook lifecycle | Not available | Use CPDCTL for notebook lifecycle management |
| Code package assets (set of dependent files in a folder structure) | Not available | Use CPDCTL to create code package assets in a deployment space |
| Promote notebooks to spaces | Not available | Available manually from the project's Assets page or programmatically by using CPDCTL |
| Python with GPU | Support available for a single GPU type only (Nvidia K80) | Support available for multiple Nvidia GPU types\. Requires a Watson Studio Runtimes service\. |
| Create and use custom images | Not available | Create custom images for Python (with and without GPU), R, JupyterLab (with and without GPU), RStudio, and SPSS environments\. Requires a Watson Studio Runtimes and other applicable services\. |
| Anaconda Repository | Not available | Use to create custom environments and custom images |
| Hadoop integration | Not available | Build and train models, and run Data Refinery flows on a Hadoop cluster\. Requires the Execution Engine for Apache Hadoop service\. |
| Decision Optimization | Available | Requires the Decision Optimization service\. |
| SPSS Modeler | Available | Requires the SPSS Modeler service\. |
| Watson Pipelines | Available | Requires the Watson Pipelines service\. |
<!-- </table ""> -->
## Watson Machine Learning ##
The following Watson Machine Learning features are effectively the same on IBM watsonx as a Service and watsonx software on Cloud Pak for Data 4\.8:
<!-- <ul> -->
* Collaboration in projects and deployment spaces
* Deploy models
* Deploy functions
* Watson Machine Learning REST APIs
* Watson Machine Learning Python client
* Create online deployments
* Scale and update deployments
* Define and use custom components
* Use Federated Learning to train a common model with separate and secure data sources
* Monitor deployments across spaces
* Updated forms for testing online deployment
* Use nested pipelines
* AutoAI data imputation
* AutoAI fairness evaluation
* AutoAI time series supporting features
<!-- </ul> -->
This table describes the differences in features between the Watson Machine Learning service on the as\-a\-service and software deployment environments, differences between offering plans, and whether additional services are required\. For details about functionality differences between offering plans on IBM watsonx, see [Watson Machine Learning offering plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html)\.
<!-- <table> -->
Feature differences between Watson Machine Learning deployments
| Feature | As a service | Software |
| --------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------ |
| AutoAI training input | Current [supported data sources](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) | [Supported data sources](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) change by release |
| AutoAI experiment compute configuration | 8 CPU and 32 GB | [Different sizes available](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) |
| AutoAI limits on data size <br>and number of prediction targets | Set limits | [Limits differ by compute configuration](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) |
| AutoAI incremental learning | Not available | Available |
| Deploy using popular frameworks <br>and software specifications | Check for latest [supported versions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-frame-and-specs.html) | [Supported versions](https://www.ibm.com/docs/SSQNUZ_4.8.x/wsj/analyze-data/ml-manage-frame-and-specs.html) differ by release |
| Connect to databases for batch deployments | Check for [support by deployment type](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-details.html) | Check for support by [deployment type](https://www.ibm.com/docs/SSQNUZ_4.8.x/wsj/analyze-data/deploy-batch-details.html) <br>and by version |
| Deploy and score Python scripts | Available via Python client | Create scripts in JupyterLab or Python client, then deploy |
| Deploy and batch score R Scripts | Not available | Available |
| Deploy Shiny apps | Not available | Create and deploy Shiny apps <br>Deploy from code package |
| Evaluate jobs for fairness, or drift | Requires the Watson OpenScale service | Requires the Watson OpenScale service |
| Evaluate online deployments in a space <br>for fairness, drift or explainability | Not available | Available <br>Requires the Watson OpenScale service |
| Control space creation | No restrictions by role | Use permissions to control who can view and create spaces |
| Import from Git project to space | Not available | Available |
| Code package automatically created when importing <br>from Git project to space | Not available | Available |
| Update RShiny app from code package | Not available | Available |
| Track model details in a model inventory | Register models to view factsheets with lifecycle details\. Requires the IBM Knowledge Catalog service\. | Available <br>Requires the AI Factsheets service\. |
| Create and use custom images | Not available | Create custom images for Python or SPSS |
| Notify collaborators about Pipeline events | Not available | Use Send Mail to notify collaborators |
| Import project or space file into a nonempty space | Not available | Available |
| Deep Learning Experiments | Not available | Requires the Watson Machine Learning Accelerator service |
| Provision and manage IBM Cloud service instances | Add instances for Watson Machine Learning <br>or Watson OpenScale | Services are provisioned on the cluster <br>by the administrator |
<!-- </table ""> -->
## Watson OpenScale ##
The following Watson OpenScale features are effectively the same on IBM watsonx as a Service and watsonx software on Cloud Pak for Data 4\.8:
<!-- <ul> -->
* Evaluate deployments for fairness
* Evaluate the quality of deployments
* Monitor deployments for drift
* View and compare model results in an Insights dashboard
* Add deployments from the machine learning provider of your choice
* Set alerts to trigger when evaluations fall below a specified threshold
* Evaluate deployments in a user interface or notebook
* Custom evaluations and metrics
* View details about evaluations in model factsheets
<!-- </ul> -->
This table describes the differences in features between the Watson OpenScale service on the as\-a\-service and software deployment environments, differences between offering plans, and whether additional services are required\.
<!-- <table> -->
Differences in IBM Watson OpenScale
| Feature | As a service | Software |
| ---------------------------------------------- | ------------- | ----------------------------------- |
| Upload pre\-scored test data | Not available | Available |
| IBM SPSS Collaboration and Deployment Services | Not available | Available |
| Batch processing | Not available | Available |
| Support access control by user groups | Not available | Available |
| Free database and Postgres plans | Available | Postgres available starting in 4\.8 |
| Set up multiple instances | Not available | Available |
| Integration with OpenPages | Not available | Available |
<!-- </table ""> -->
## Learn more ##
<!-- <ul> -->
* [Services for IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/cloud-services.html)
* [Services for Cloud Pak for Data 4\.8](https://www.ibm.com/docs/SSQNUZ_4.8.x/svc-nav/head/services.html)
* [Cloud deployment environment options for Cloud Pak for Data 4\.8](https://www.ibm.com/docs/SSQNUZ_4.8.x/cpd/plan/deployment-environments.html)
<!-- </ul> -->
**Parent topic:**[Overview of watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/overview-wx.html)
<!-- </article "role="article" "> -->
|
EC03E18490E47DB0EFBD6A00BDA7DDB85B0A14D7 | https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-help.html?context=cdpaas&locale=en | Get help | Get help
You can get help with IBM watsonx through documentation, training, support, and community resources.
* [Platform setup](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-help.html?context=cdpaas&locale=en#platform)
* [Training](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-help.html?context=cdpaas&locale=en#training)
* [Community resources](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-help.html?context=cdpaas&locale=en#community)
* [Samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-help.html?context=cdpaas&locale=en#samples)
* [Support](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-help.html?context=cdpaas&locale=en#support)
Help with platform setup
You must be the account owner or administrator for a billable IBM Cloud account to set up the IBM watsonx platform for your organization. To learn how to set up IBM watsonx, see [Setting up the platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-platform.html).
Training
Start with data preparation, analysis, and visualization, and then learn how to build, deploy, and trust your models. Use the following tutorials and videos to get started with IBM watsonx:
* [Quick start tutorials](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html)
* [Video library](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/videos-wx.html)
Community resources
Share and gain knowledge in the IBM Community and get the most out of our services.
Explore blogs, forums, and other resources in these communities:
* [watsonx.ai Community](https://community.ibm.com/community/user/watsonx/communities/community-home?communitykey=81927b7e-9a92-4236-a0e0-018a27c4ad6e)
* [Data Science Community](https://community.ibm.com/community/user/datascience/home)
* [Watson Studio Community](https://community.ibm.com/community/user/watsonstudio/home)
Find more blogs and forums on the following platforms:
* [IBM Data and AI on Medium](https://medium.com/ibm-data-ai)
* [Watson Studio Stack Overflow](https://stackoverflow.com/questions/tagged/watson-studio)
Samples
You can use sample projects, notebooks, and data sets to get started fast.
Find samples in the following locations:
* [Samples](https://dataplatform.cloud.ibm.com/gallery)
* [IBM Data Science assets in GitHub](https://github.com/IBMDataScience)
Support
IBM Cloud provides you with three paid support options to customize your experience according to your business needs. Choose a [Basic, Advanced, or Premium support plan](https://cloud.ibm.com/docs/get-support?topic=get-support-support-plans). The level of support that you select determines the severity that you can assign to support cases and your level of access to the tools available in the Support Center.
You can also go to the [IBM Cloud Support Center](https://cloud.ibm.com/unifiedsupport/supportcenter) to open a support case, browse FAQs, or ask questions to the IBM Chat Bot.
Learn more
* [Known issues](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/known-issues.html)
* [FAQ](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html)
* [Browser support](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/browser-support.html)
* [Language support](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/localization.html)
292D19849E8FBE48869F5E3A50439964563A90D1 | https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-analyze.html?context=cdpaas&locale=en | Quick start: Analyze data in a Jupyter notebook | Quick start: Analyze data in a Jupyter notebook
You can create a notebook in which you run code to prepare, visualize, and analyze data, or build and train a model. Read about Jupyter notebooks, then watch a video and take a tutorial that’s suitable for users with some knowledge of Python code.
Your basic workflow includes these tasks:
1. Open your sandbox project. Projects are where you can collaborate with others to work with data.
2. Add your data to the project. You can add CSV files or data from a remote data source through a connection.
3. Create a notebook in the project.
4. Add code to the notebook to load and analyze your data.
5. Run your notebook and share the results with your colleagues.
Read about notebooks
A Jupyter notebook is a web-based environment for interactive computing. You can run small pieces of code that process your data, and you can immediately view the results of your computation. Notebooks include all of the building blocks you need to work with data:
* The data
* The code computations that process the data
* Visualizations of the results
* Text and rich media to enhance understanding
[Read more about notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-editor.html)
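For example, here is a minimal, hypothetical cell (the data is made up, not from this documentation) that combines all four building blocks:

```python
import pandas as pd
import matplotlib.pyplot as plt

# The data: a small, invented table defined inline.
data = pd.DataFrame({"month": ["Jan", "Feb", "Mar"], "sales": [120, 135, 160]})

# The computation: summary statistics print immediately below the cell.
print(data.describe())

# The visualization: the chart renders inline in the notebook.
data.plot(x="month", y="sales", kind="bar", legend=False)
plt.ylabel("sales")
plt.show()
```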
Watch a video about notebooks
 Watch this video to learn the basics of Jupyter notebooks.
This video provides a visual method to learn the concepts and tasks in this documentation.
Try a tutorial to create a notebook
In this tutorial, you will complete these tasks:
* [Task 1: Open a project.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-analyze.html?context=cdpaas&locale=en#step01)
* [Task 2: Add a notebook to your project.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-analyze.html?context=cdpaas&locale=en#step02)
* [Task 3: Load a file and save the notebook.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-analyze.html?context=cdpaas&locale=en#step03)
* [Task 4: Find and edit the notebook.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-analyze.html?context=cdpaas&locale=en#step04)
* [Task 5: Share read-only version of the notebook.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-analyze.html?context=cdpaas&locale=en#step05)
* [Task 6: Schedule a notebook to run at a different time.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-analyze.html?context=cdpaas&locale=en#step06)
This tutorial will take approximately 15 minutes to complete.
* Tips for completing this tutorial
Use the video picture-in-picture
Tip: Start the video, then as you scroll through the tutorial, the video moves to picture-in-picture mode. Close the video table of contents for the best experience with picture-in-picture. You can use picture-in-picture mode so that you can follow the video as you complete the tasks in this tutorial. Click the timestamps for each task to follow along.
Get help in the community
If you need help with this tutorial, you can ask a question or find an answer in the [Cloud Pak for Data Community discussion forum](https://community.ibm.com/community/user/cloudpakfordata/communities/community-home/digestviewer?communitykey=c0c16ff2-10ef-4b50-ae4c-57d769937235).
Set up your browser windows
For the optimal experience completing this tutorial, open Cloud Pak for Data in one browser window, and keep this tutorial page open in another browser window to switch easily between the two applications. Consider arranging the two browser windows side by side to make it easier to follow along.
Tip: If you encounter a guided tour while completing this tutorial in the user interface, click Maybe later.
* Task 1: Open a project
You need a project to store the notebook and data asset. You can use your sandbox project or create a project. Follow these steps to open a project and add a data asset to the project:
1. From the navigation menu, choose Projects > View all projects.
2. Open your sandbox project. If you want to use a new project:
   1. Click New project.
   2. Select Create an empty project.
   3. Enter a name and optional description for the project.
   4. Choose an existing [object storage service instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/storage-options.html) or create a new one.
   5. Click Create.
3. From the navigation menu, click Samples.
4. Search for an interesting data set, and select the data set.
5. Click Add to project.
6. Select the project from the list, and click Add.
7. After the data set is added, click View Project.
8. In the project, click the Assets tab to see the data set.
For more information, see [Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html).
Check your progress: The following image shows the Assets tab in the project.
* Task 2: Add a notebook to your project
 To preview this task, watch the video beginning at 00:06. Follow these steps to create a new notebook in your project. 1. In your project, on the Assets tab, click New asset > Work with data and models in Python or R notebooks. 1. Type a name and description (optional). 1. Select a runtime environment for this notebook. 1. Click Create. Wait for the notebook editor to load. ### {: iih} Check your progress The following image shows blank notebook.
{: width="100%" }
* Task 3: Load a file and save the notebook
 To preview this task, watch the video beginning at 00:23. Now you can access the data asset in your notebook that you uploaded to your project earlier. Follow these steps to load data into a data frame: 1. Click in an empty code cell in your notebook. 1. Click the Code snippets icon ( {: iih}). 1. In the side pane, click Read data. 1. Click Select data from project. 1. Locate the data asset from the project, and click Select. 1. In the Load as drop-down list, select the load option that you prefer. 1. Click Insert code to cell. The code to read and load the data asset is inserted into the cell. 1. Click Run to run your code. The first few rows of your data set will display. 1. To save a version of your notebook, click File > Save Version. You can also just save your notebook with File > Save. ### {: iih} Check your progress The following image shows the notebook with the pandas DataFrame.
{: width="100%" }
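The exact code that the snippet generator inserts depends on the file type and the load option that you choose. As a rough, hypothetical sketch (the file name is invented, and the real generated snippet also handles authentication to the project's storage), loading a CSV file as a pandas DataFrame looks something like this:

```python
import pandas as pd

# Hypothetical file name; the generated snippet additionally includes the
# credentials and client code needed to fetch the file from project storage.
df = pd.read_csv("sales.csv")

# Show the first few rows, as described in step 8.
df.head()
```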
* Task 4: Find and edit the notebook
 To preview this task, watch the video beginning at 01:19. Follow these steps to locate the saved notebook on the Assets tab, and edit the notebook: 1. In the project navigation trail, click your project name to return to your project. 1. Click the Assets tab to find the notebook. 1. When you click the notebook, it will open in READ ONLY mode. 1. To edit the notebook, click the pencil icon {: iih}. 1. Click the Information icon {: iih} to open the Information panel. 1. On the General tab, edit the name and description of the notebook. 1. Click the Environment tab to see how you can change the environment used to run the notebook or update the runtime status to either stop and restart. ### {: iih} Check your progress The following image shows the notebook with the Information panel displayed.
{: width="100%" }
* Task 5: Share read-only version of the notebook
 To preview this task, watch the video beginning at 01:52. Follow these steps to create a link to the notebook to share with colleagues: 1. Click the Share icon {: iih} if you would like to share the read-only view of the notebook. 1. Click to turn on the Share with anyone who has the link toggle button. 1. Select what content you would like to share through a link or social media. 1. Click the Copy icon {: iih} to copy a direct link to this notebook. 1. Click Close. ### {: iih} Check your progress The following image shows the Share dialog box.
{: width="100%" }
* Task 6: Schedule a notebook to run at a different time
 To preview this task, watch the video beginning at 02:08. Follow these steps to create a job to schedule the notebook to run at a specific time or repeat based on a schedule: 1. Click the Jobs icon, and select Create a job.
 1. Provide the name and description of the job, and click Next. 1. Select the notebook version and environment runtime, and click Next. 1. (Optional) Click the toggle button to schedule a run. Specify the date, time and if you would like the job to repeat, and click Next. 1. (Optional) click the toggle button to receive notifications for this job, and click Next. 1. Review the details, and click either Create (to create the job, but not run the job immediately) or Create and run (to run the job immediately). 1. The job will display in the Jobs tab in the project. ### {: iih} Check your progress The following image shows the Jobs tab.
{: width="100%" }
Next steps
Now you can use this data set for further analysis. For example, you or other users can do any of these tasks:
* [Cleansing and shaping data](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html)
* [Build and train a model with the data asset](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build.html)
Additional resources
* View more [videos](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/videos-wx.html).
* Find sample data sets, projects, models, prompts, and notebooks in the Samples to gain hands-on experience:
[Notebooks](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=notebook) that you can add to your project to get started analyzing data and building models.
[Projects](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=project-template) that you can import containing notebooks, data sets, prompts, and other assets.
[Data sets](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=dataset) that you can add to your project to refine, analyze, and build models.
[Prompts](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=example-prompt) that you can use in the Prompt Lab to prompt a foundation model.
[Foundation models](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=foundation-model) that you can use in the Prompt Lab.
Parent topic:[Quick start tutorials](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html)
316974F0A70EE2199BF6CD912E62BFB53D200F0A | https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-notebook.html?context=cdpaas&locale=en | Quick start: Build and deploy a machine learning model in a Jupyter notebook | Quick start: Build and deploy a machine learning model in a Jupyter notebook
You can create, train, and deploy machine learning models with Watson Machine Learning in a Jupyter notebook. Read about Jupyter notebooks, then watch a video and take a tutorial that’s suitable for intermediate users and requires coding.
Required services: Watson Studio and Watson Machine Learning
Your basic workflow includes these tasks:
1. Open your sandbox project. Projects are where you can collaborate with others to work with data.
2. Add a notebook to the project. You can create a blank notebook or import a notebook from a file or GitHub repository.
3. Add code and run the notebook.
4. Review the model pipelines and save the desired pipeline as a model.
5. Deploy and test your model.
Read about Jupyter notebooks
A Jupyter notebook is a web-based environment for interactive computing. If you choose to build a machine learning model in a notebook, you should be comfortable with coding in a Jupyter notebook. You can run small pieces of code that process your data, and then immediately view the results of your computation. Using this tool, you can assemble, test, and run all of the building blocks you need to work with data, save the data to Watson Machine Learning, and deploy the model.
[Read more about training models in notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html)
[Learn about other ways to build models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html)
Watch a video about creating a model in a Jupyter notebook
 Watch this video to see how to train, deploy, and test a machine learning model in a Jupyter notebook.
This video provides a visual method to learn the concepts and tasks in this documentation.
Try a tutorial to create a model in a Jupyter notebook
In this tutorial, you will complete these tasks:
* [Task 1: Open a project.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-notebook.html?context=cdpaas&locale=en#step01)
* [Task 2: Add a notebook to your project.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-notebook.html?context=cdpaas&locale=en#step02)
* [Task 3: Set up the environment.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-notebook.html?context=cdpaas&locale=en#step03)
* [Task 4: Run the notebook:](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-notebook.html?context=cdpaas&locale=en#step04)
* Build and train a model.
* Save a pipeline as a model.
* Deploy the model.
* Test the deployed model.
* [Task 5: View and test the deployed model in the deployment space.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-notebook.html?context=cdpaas&locale=en#step05)
* [(Optional) Clean up.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-notebook.html?context=cdpaas&locale=en#step06)
This tutorial will take approximately 30 minutes to complete.
Sample data
The sample data used in this tutorial is the hand-written digits data set that is included with scikit-learn. It is used to train a model to recognize images of hand-written digits, from 0-9.
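Because this data set ships with scikit-learn, you can inspect it in any Python environment. A quick, optional sketch:

```python
from sklearn.datasets import load_digits

digits = load_digits()
print(digits.data.shape)   # (1797, 64): 1797 images of 8x8 pixels, flattened
print(digits.target[:10])  # labels are the digits 0 through 9
```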
* Tips for completing this tutorial
Use the video picture-in-picture
Tip: Start the video, then as you scroll through the tutorial, the video moves to picture-in-picture mode. Close the video table of contents for the best experience with picture-in-picture. You can use picture-in-picture mode so that you can follow the video as you complete the tasks in this tutorial. Click the timestamps for each task to follow along.
Get help in the community
If you need help with this tutorial, you can ask a question or find an answer in the [Cloud Pak for Data Community discussion forum](https://community.ibm.com/community/user/cloudpakfordata/communities/community-home/digestviewer?communitykey=c0c16ff2-10ef-4b50-ae4c-57d769937235).
Set up your browser windows
For the optimal experience completing this tutorial, open Cloud Pak for Data in one browser window, and keep this tutorial page open in another browser window to switch easily between the two applications. Consider arranging the two browser windows side by side to make it easier to follow along.
Tip: If you encounter a guided tour while completing this tutorial in the user interface, click Maybe later.
* Task 1: Open a project
To preview this task, watch the video beginning at 00:07.
You need a project to store the notebook and the model. You can use your sandbox project or create a project:
1. From the navigation menu, choose Projects > View all projects.
2. Open your sandbox project. If you want to use a new project:
   1. Click New project.
   2. Select Create an empty project.
   3. Enter a name and optional description for the project.
   4. Choose an existing [object storage service instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/storage-options.html) or create a new one.
   5. Click Create.
3. When the project opens, click the Manage tab and select the Services and integrations page.
4. On the IBM services tab, click Associate service.
5. Select your Watson Machine Learning instance. If you don't have a Watson Machine Learning service instance provisioned yet, follow these steps:
   1. Click New service.
   2. Select Watson Machine Learning.
   3. Click Create.
   4. Select the new service instance from the list.
6. Click Associate service.
7. If necessary, click Cancel to return to the Services & integrations page.
For more information or to watch a video, see [Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html).
For more information on associated services, see [Adding associated services](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assoc-services.html).
Check your progress: The following image shows the new project.
* Task 2: Add a notebook to your project
 To preview this task, watch the video beginning at 00:18. You will use a sample notebook in this tutorial. Follow these steps to add the sample notebook to your project: 1. Access the [Use sckit-learn to recognize hand-written digits notebook](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/e20607d75c8473daaade1e77c21717d4){: new_window} in the Samples. 1. Click Add to project. 1. Select the project from the list, and click Add. 1. Verify the notebook name and description (optional). 1. Select a runtime environment for this notebook. 1. Click Create. Wait for the notebook editor to load. 1. From the menu, click Kernel > Restart & Clear Output, then confirm by clicking Restart and Clear All Outputs to clear the output from the last saved run. ### {: iih} Check your progress The following image shows the new notebook.
{: width="100%" }
* Task 3: Set up the environment
 To preview this task, watch the video beginning at 00:44. The first section in the notebook sets up the environment by specifying your IBM Cloud credentials and Watson Machine Learning service instance location. Follow these steps to set up the environment in your notebook: 1. Scroll to the Set up the environment section. 1. Choose a method to obtain the API key and location. - Run the IBM Cloud CLI commands in the notebook from a command prompt. - Use the IBM Cloud console. 1. Launch the [API keys section in the IBM Cloud Console](https://cloud.ibm.com/iam/apikeys){: new_window}, and [create an API key](https://cloud.ibm.com/docs/account?topic=account-userapikey&interface=uicreate_user_key){: new_window}. 1. Access your [IBM Cloud resource list](https://cloud.ibm.com/resources){: new_window}, view your Watson Machine Learning service instance, and note the Location. 1. See the Watson Machine Learning [API Docs](https://cloud.ibm.com/apidocs/machine-learning){: new_window} for the correct endpoint URL. For example, Dallas is in us-south. 1. Paste your API key and location into cell 1. 1. Run cells 1 and 2. 1. Run cell 3 to install the ibm-watson-machine-learning package. 1. Run cell 4 to import the API client and create the API client instance using your credentials. 1. Run cell 5 to see a list of all existing deployment spaces. If you do not have a deployment space, then follow these steps: 1. Open another tab with your watsonx deployment. 1. From the navigation menu {: iih}, click Deployments. 1. Click New deployment space. 1. Add a name and optional description for the deployment. 1. Click Create, then View new space. 1. Click the Manage tab. 1. Copy the Space GUID and close the tab, this value will be your space_id. 1. Copy and paste the appropriate deployment space ID into cell 6, then run cell 6 and cell 7 to set the default space. \ {: iih} Check your progress The following image shows the notebook with all of the environment variables set up.
{: width="100%" }
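As a rough illustration of what cells 1 through 7 accomplish (not the notebook's exact code), a minimal sketch using the ibm-watson-machine-learning package looks like this; the API key and space GUID are placeholders that you supply:

```python
from ibm_watson_machine_learning import APIClient

# Cell 1: credentials. The endpoint URL depends on your service location;
# us-south (Dallas) is shown here as an example.
wml_credentials = {
    "apikey": "<your IBM Cloud API key>",
    "url": "https://us-south.ml.cloud.ibm.com",
}

# Cell 4: create the API client instance.
client = APIClient(wml_credentials)

# Cell 5: list the existing deployment spaces.
client.spaces.list()

# Cells 6 and 7: set the default deployment space.
space_id = "<your deployment space GUID>"
client.set_default_space(space_id)
```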
* Task 4: Run the notebook
 To preview this task, watch the video beginning at 02:14. Now that all of the environment variables are set up, you can run the rest of the cells in the notebook. Follow these steps to read through the comments, run the cells, and review the output: 1. Run the cells in the Explore data section. 1. Run the cells in the Create a scikit-learn model section to. 1. Prepare the data by splitting it into three data sets (train, test, and score). 1. Create the pipeline. 1. Train the model. 1. Evaluate the model using the test data. 1. Run the cells in the Publish model section to publish the model, get model details, and get all models. 1. Run the cells in the Create model deployment section. 1. Run the cells in the Get deployment details section. 1. Run the cells in the Score section, which sends a scoring request to the deployed model and shows the prediction. 1. Click *File > Save to save the notebook and its output. ### {: iih} Check your progress The following image shows the notebook with the prediction.
{: width="100%" }
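For orientation, the model-building cells follow the standard scikit-learn pattern. The following simplified sketch assumes an SVC-based pipeline on the digits data and is a stand-in for, not a copy of, the notebook's code:

```python
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Load the digits data and split it (the sample notebook also keeps a
# third "score" split; two splits are used here for brevity).
digits = datasets.load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.3, random_state=42
)

# Create the pipeline and train the model.
pipeline = Pipeline([("scaler", StandardScaler()), ("svc", SVC())])
pipeline.fit(X_train, y_train)

# Evaluate the model by using the test data.
print("Accuracy:", pipeline.score(X_test, y_test))
```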
* Task 5: View and test the deployed model in the deployment space
 To preview this task, watch the video beginning at 04:07. You can also view the model deployment directly from the deployment space. Follow these steps to test the deployed model in the space. 1. From the navigation menu {: iih}, click Deployments. 1. Click the Spaces tab. 1. Select the appropriate deployment space from the list. 1. Click Scikit model. 1. Click Deployment of scikit model. 1. Review the Endpoint and Code snippets. 1. Click the Test tab. You can test the deployed model by pasting the following JSON code: json {"input_data": [{"values": 0.0, 0.0, 5.0, 16.0, 16.0, 3.0, 0.0, 0.0, 0.0, 0.0, 9.0, 16.0, 7.0, 0.0, 0.0, 0.0, 0.0, 0.0, 12.0, 15.0, 2.0, 0.0, 0.0, 0.0, 0.0, 1.0, 15.0, 16.0, 15.0, 4.0, 0.0, 0.0, 0.0, 0.0, 9.0, 13.0, 16.0, 9.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 14.0, 12.0, 0.0, 0.0, 0.0, 0.0, 5.0, 12.0, 16.0, 8.0, 0.0, 0.0, 0.0, 0.0, 3.0, 15.0, 15.0, 1.0, 0.0, 0.0], 0.0, 0.0, 6.0, 16.0, 12.0, 1.0, 0.0, 0.0, 0.0, 0.0, 5.0, 16.0, 13.0, 10.0, 0.0, 0.0, 0.0, 0.0, 0.0, 5.0, 5.0, 15.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 8.0, 15.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 13.0, 13.0, 0.0, 0.0, 0.0, 0.0, 0.0, 6.0, 16.0, 9.0, 4.0, 1.0, 0.0, 0.0, 3.0, 16.0, 16.0, 16.0, 16.0, 10.0, 0.0, 0.0, 5.0, 16.0, 11.0, 9.0, 6.0, 2.0]]}]} 1. Click Predict. The resulting prediction indicates that the hand-written digits are 5 and 4. ### {: iih} Check your progress The following image shows the Test tab with the prediction.
{: width="100%" }
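You can send the same kind of scoring request programmatically. This hedged sketch assumes the APIClient instance (client) from the environment setup task and uses a placeholder deployment ID, which is shown on the deployment's details page:

```python
from sklearn import datasets

digits = datasets.load_digits()

# Two 8x8 images, each flattened to 64 grayscale values, in the same
# payload shape that the Test tab expects.
scoring_payload = {
    "input_data": [
        {"values": [digits.data[0].tolist(), digits.data[1].tolist()]}
    ]
}

# "client" is the APIClient created during environment setup;
# "<deployment-id>" is a placeholder.
predictions = client.deployments.score("<deployment-id>", scoring_payload)
print(predictions)
```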
* (Optional) Task 6: Clean up
If you'd like to remove all of the assets created by the notebook, create a new notebook based on the [Machine Learning artifacts management notebook](https://github.com/IBM/watson-machine-learning-samples/blob/master/cloud/notebooks/python_sdk/instance-management/Machine%20Learning%20artifacts%20management.ipynb). A link to this notebook is also available in the Clean up section of the Use scikit-learn to recognize hand-written digits notebook used in this tutorial.
Next steps
Now you can use this data set for further analysis. For example, you or other users can do any of these tasks:
* [Cleansing and shaping data](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html)
* [Analyze the data in a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-analyze.html)
Additional resources
* Try these other methods to build models:
* [Build and deploy a machine learning model with AutoAI](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build.html)
* [Build and deploy a machine learning model with SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html)
* [Build and deploy a Decision Optimization model](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html)
* View more [videos](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/videos-wx.html).
* Find sample data sets, projects, models, prompts, and notebooks in the Samples to gain hands-on experience:
[Notebooks](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=notebook) that you can add to your project to get started analyzing data and building models.
[Projects](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=project-template) that you can import containing notebooks, data sets, prompts, and other assets.
[Data sets](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=dataset) that you can add to your project to refine, analyze, and build models.
[Prompts](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=example-prompt) that you can use in the Prompt Lab to prompt a foundation model.
[Foundation models](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=foundation-model) that you can use in the Prompt Lab.
* Find more [Python client samples and examples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html).
Parent topic:[Quick start tutorials](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html)
| # Quick start: Build and deploy a machine learning model in a Jupyter notebook #
You can create, train, and deploy machine learning models with Watson Machine Learning in a Jupyter notebook\. Read about the Jupyter notebooks, then watch a video and take a tutorial that’s suitable for intermediate users and requires coding\.
**Required services** : Watson Studio : Watson Machine Learning
Your basic workflow includes these tasks:
<!-- <ol> -->
1. Open your sandbox project\. Projects are where you can collaborate with others to work with data\.
2. Add a notebook to the project\. You can create a blank notebook or import a notebook from a file or GitHub repository\.
3. Add code and run the notebook\.
4. Review the model pipelines and save the desired pipeline as a model\.
5. Deploy and test your model\.
<!-- </ol> -->
## Read about Jupyter notebooks ##
A Jupyter notebook is a web\-based environment for interactive computing\. If you choose to build a machine learning model in a notebook, you should be comfortable with coding in a Jupyter notebook\. You can run small pieces of code that process your data, and then immediately view the results of your computation\. Using this tool, you can assemble, test, and run all of the building blocks you need to work with data, save the data to Watson Machine Learning, and deploy the model\.
[Read more about training models in notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html)
[Learn about other ways to build models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html)
## Watch a video about creating a model in a Jupyter notebook ##
 Watch this video to see how to train, deploy, and test a machine learning model in a Jupyter notebook\.
This video provides a visual method to learn the concepts and tasks in this documentation\.
## Try a tutorial to create a model in a Jupyter notebook ##
In this tutorial, you will complete these tasks:
<!-- <ul> -->
* [Task 1: Open a project\.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-notebook.html?context=cdpaas&locale=en#step01)
* [Task 2: Add a notebook to your project\.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-notebook.html?context=cdpaas&locale=en#step02)
* [Task 3: Set up the environment\.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-notebook.html?context=cdpaas&locale=en#step03)
* [Task 4: Run the notebook:](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-notebook.html?context=cdpaas&locale=en#step04)
<!-- <ul> -->
* Build and train a model.
* Save a pipeline as a model.
* Deploy the model.
* Test the deployed model.
<!-- </ul> -->
* [Task 5: View and test the deployed model in the deployment space\.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-notebook.html?context=cdpaas&locale=en#step05)
<!-- </ul> -->
<!-- <ul> -->
* [(Optional) Clean up\.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-notebook.html?context=cdpaas&locale=en#step06)
<!-- </ul> -->
This tutorial will take approximately 30 minutes to complete\.
### Sample data ###
The sample data used in this tutorial is from data that is part of **scikit\-learn** and will be used to train a model to recognize images of hand\-written digits, from 0\-9\.
Expand all sections
<!-- <ul> -->
* Tips for completing this tutorial
\#\#\# Use the video picture-in-picture Tip: Start the video, then as you scroll through the tutorial, the video moves to picture-in-picture mode. Close the video table of contents for the best experience with picture-in-picture. You can use picture-in-picture mode so you can follow the video as you complete the tasks in this tutorial. Click the timestamps for each task to follow along.The following animated image shows how to use the video picture-in-picture and table of contents features: \{: width="560px" height="315px" data-tearsheet="this"\} \#\#\# Get help in the community If you need help with this tutorial, you can ask a question or find an answer in the [Cloud Pak for Data Community discussion forum](https://community.ibm.com/community/user/cloudpakfordata/communities/community-home/digestviewer?communitykey=c0c16ff2-10ef-4b50-ae4c-57d769937235)\{: new\_window\}. \#\#\# Set up your browser windows For the optimal experience completing this tutorial, open Cloud Pak for Data in one browser window, and keep this tutorial page open in another browser window to switch easily between the two applications. Consider arranging the two browser windows side-by-side to make it easier to follow along. \{: width="560px" height="315px" data-tearsheet="this"\} Tip: If you encounter a guided tour while completing this tutorial in the user interface, click **Maybe later**.
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-notebook.html?context=cdpaas&locale=en#video-preview)
<!-- </ul> -->
<!-- <ul> -->
* Task 1: Open a project
You need a project to store the data and the AutoAI experiment. You can use your sandbox project or create a project. 1. From the navigation menu \{: iih\}, choose **Projects > View all projects** 1. Open your sandbox project. If you want to use a new project: 1. Click **New project**. 1. Select **Create an empty project**. 1. Enter a name and optional description for the project. 1. Choose an existing [object storage service instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/storage-options.html)\{: new\_window\} or create a new one. 1. Click **Create**. 1. When the project opens, click the **Manage** tab and select the **Services and integrations** page.  To preview this task, watch the video beginning at 00:07. 1. On the *IBM services* tab, click **Associate service**. 1. Select your Watson Machine Learning instance. If you don't have a Watson Machine Learning service instance provisioned yet, follow these steps: 1. Click **New service**. 1. Select **Watson Machine Learning**. 1. Click **Create**. 1. Select the new service instance from the list. 1. Click **Associate service**. 1. If necessary, click **Cancel** to return to the *Services & Integrations* page. For more information or to watch a video, see [Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html)\{: new\_window\}.
For more information on associated services, see [Adding associated services](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assoc-services.html)\{: new\_window\}. \#\#\# \{: iih\} Check your progress The following image shows the new project.
\{: width="100%" \}
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-notebook.html?context=cdpaas&locale=en#video-preview)
<!-- </ul> -->
<!-- <ul> -->
* Task 2: Add a notebook to your project
 To preview this task, watch the video beginning at 00:18. You will use a sample notebook in this tutorial. Follow these steps to add the sample notebook to your project: 1. Access the [Use sckit-learn to recognize hand-written digits notebook](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/e20607d75c8473daaade1e77c21717d4)\{: new\_window\} in the *Samples*. 1. Click **Add to project**. 1. Select the project from the list, and click **Add**. 1. Verify the notebook name and description (optional). 1. Select a runtime environment for this notebook. 1. Click **Create**. Wait for the notebook editor to load. 1. From the menu, click **Kernel > Restart & Clear Output**, then confirm by clicking **Restart and Clear All Outputs** to clear the output from the last saved run. \#\#\# \{: iih\} Check your progress The following image shows the new notebook.
\{: width="100%" \}
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-notebook.html?context=cdpaas&locale=en#video-preview)
<!-- </ul> -->
<!-- <ul> -->
* Task 3: Set up the environment
 To preview this task, watch the video beginning at 00:44. The first section in the notebook sets up the environment by specifying your IBM Cloud credentials and Watson Machine Learning service instance location. Follow these steps to set up the environment in your notebook: 1. Scroll to the *Set up the environment* section. 1. Choose a method to obtain the API key and location. - Run the IBM Cloud CLI commands in the notebook from a command prompt. - Use the IBM Cloud console. 1. Launch the [API keys section in the IBM Cloud Console](https://cloud.ibm.com/iam/apikeys)\{: new\_window\}, and [create an API key](https://cloud.ibm.com/docs/account?topic=account-userapikey&interface=ui#create_user_key)\{: new\_window\}. 1. Access your [IBM Cloud resource list](https://cloud.ibm.com/resources)\{: new\_window\}, view your Watson Machine Learning service instance, and note the *Location*. 1. See the Watson Machine Learning [API Docs](https://cloud.ibm.com/apidocs/machine-learning)\{: new\_window\} for the correct endpoint URL. For example, Dallas is in us-south. 1. Paste your API key and location into cell 1. 1. Run cells 1 and 2. 1. Run cell 3 to install the `ibm-watson-machine-learning` package. 1. Run cell 4 to import the API client and create the API client instance using your credentials. 1. Run cell 5 to see a list of all existing deployment spaces. If you do not have a deployment space, then follow these steps: 1. Open another tab with your watsonx deployment. 1. From the navigation menu \{: iih\}, click **Deployments**. 1. Click **New deployment space**. 1. Add a name and optional description for the deployment. 1. Click **Create**, then **View new space**. 1. Click the **Manage** tab. 1. Copy the **Space GUID** and close the tab, this value will be your `space_id`. 1. Copy and paste the appropriate deployment space ID into cell 6, then run cell 6 and cell 7 to set the default space. \#\#\# \{: iih\} Check your progress The following image shows the notebook with all of the environment variables set up.
\{: width="100%" \}
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-notebook.html?context=cdpaas&locale=en#video-preview)
<!-- </ul> -->
<!-- <ul> -->
* Task 4: Run the notebook
 To preview this task, watch the video beginning at 02:14. Now that all of the environment variables are set up, you can run the rest of the cells in the notebook. Follow these steps to read through the comments, run the cells, and review the output: 1. Run the cells in the *Explore data* section. 1. Run the cells in the *Create a scikit-learn model* section to. 1. Prepare the data by splitting it into three data sets (train, test, and score). 1. Create the pipeline. 1. Train the model. 1. Evaluate the model using the test data. 1. Run the cells in the *Publish model* section to publish the model, get model details, and get all models. 1. Run the cells in the *Create model deployment* section. 1. Run the cells in the *Get deployment details* section. 1. Run the cells in the *Score* section*, which sends a scoring request to the deployed model and shows the prediction. 1. Click \*File > Save* to save the notebook and its output. \#\#\# \{: iih\} Check your progress The following image shows the notebook with the prediction.
\{: width="100%" \}
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-notebook.html?context=cdpaas&locale=en#video-preview)
<!-- </ul> -->
<!-- <ul> -->
* Task 5: View and test the deployed model in the deployment space
 To preview this task, watch the video beginning at 04:07. You can also view the model deployment directly from the deployment space. Follow these steps to test the deployed model in the space. 1. From the navigation menu \{: iih\}, click **Deployments**. 1. Click the **Spaces** tab. 1. Select the appropriate deployment space from the list. 1. Click **Scikit model**. 1. Click **Deployment of scikit model**. 1. Review the *Endpoint* and *Code snippets*. 1. Click the **Test** tab. You can test the deployed model by pasting the following JSON code: `json {"input_data": [{"values": 0.0, 0.0, 5.0, 16.0, 16.0, 3.0, 0.0, 0.0, 0.0, 0.0, 9.0, 16.0, 7.0, 0.0, 0.0, 0.0, 0.0, 0.0, 12.0, 15.0, 2.0, 0.0, 0.0, 0.0, 0.0, 1.0, 15.0, 16.0, 15.0, 4.0, 0.0, 0.0, 0.0, 0.0, 9.0, 13.0, 16.0, 9.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 14.0, 12.0, 0.0, 0.0, 0.0, 0.0, 5.0, 12.0, 16.0, 8.0, 0.0, 0.0, 0.0, 0.0, 3.0, 15.0, 15.0, 1.0, 0.0, 0.0], 0.0, 0.0, 6.0, 16.0, 12.0, 1.0, 0.0, 0.0, 0.0, 0.0, 5.0, 16.0, 13.0, 10.0, 0.0, 0.0, 0.0, 0.0, 0.0, 5.0, 5.0, 15.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 8.0, 15.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 13.0, 13.0, 0.0, 0.0, 0.0, 0.0, 0.0, 6.0, 16.0, 9.0, 4.0, 1.0, 0.0, 0.0, 3.0, 16.0, 16.0, 16.0, 16.0, 10.0, 0.0, 0.0, 5.0, 16.0, 11.0, 9.0, 6.0, 2.0]]}]}` 1. Click **Predict**. The resulting prediction indicates that the hand-written digits are 5 and 4. \#\#\# \{: iih\} Check your progress The following image shows the *Test* tab with the prediction.
\{: width="100%" \}
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-notebook.html?context=cdpaas&locale=en#video-preview)
<!-- </ul> -->
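Beyond the Test tab, the endpoint shown on the deployment page accepts the same payload over REST. The following is a minimal sketch using the Python `requests` library, assuming the us-south endpoint and placeholder values for the API key and deployment ID; adjust the host to your own service location:

```python
import requests

API_KEY = "PASTE-YOUR-API-KEY-HERE"          # placeholder
DEPLOYMENT_ID = "PASTE-YOUR-DEPLOYMENT-ID"   # placeholder

# Exchange the IBM Cloud API key for an IAM bearer token.
token_response = requests.post(
    "https://iam.cloud.ibm.com/identity/token",
    data={"grant_type": "urn:ibm:params:oauth:grant-type:apikey", "apikey": API_KEY},
)
token = token_response.json()["access_token"]

# Send the scoring request to the online deployment (us-south host shown).
scoring_url = (
    f"https://us-south.ml.cloud.ibm.com/ml/v4/deployments/{DEPLOYMENT_ID}"
    "/predictions?version=2020-09-01"
)
payload = {"input_data": [{"values": [[0.0] * 64]}]}  # one flattened 8x8 digit image
response = requests.post(
    scoring_url,
    json=payload,
    headers={"Authorization": f"Bearer {token}"},
)
print(response.json())
```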
<!-- <ul> -->
* (Optional) Task 6: Clean up
If you'd like to remove all of the assets created by the notebook, create a new notebook based on the [Machine Learning artifacts management notebook](https://github.com/IBM/watson-machine-learning-samples/blob/master/cloud/notebooks/python_sdk/instance-management/Machine%20Learning%20artifacts%20management.ipynb)\{: new\_window\}. A link to this notebook is also available in the **Clean up** section of the *Use scikit-learn to recognize hand-written digits notebook* used in this tutorial.
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-notebook.html?context=cdpaas&locale=en#video-preview)
<!-- </ul> -->
## Next steps ##
Now you can use this data set for further analysis\. For example, you or other users can do any of these tasks:
<!-- <ul> -->
* [Cleansing and shaping data](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html)
* [Analyze the data in a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-analyze.html)
<!-- </ul> -->
## Additional resources ##
<!-- <ul> -->
* Try these other methods to build models:
<!-- <ul> -->
* [Build and deploy a machine learning model with AutoAI](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build.html)
* [Build and deploy a machine learning model with SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html)
* [Build and deploy a Decision Optimization model](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html)
<!-- </ul> -->
* View more [videos](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/videos-wx.html)\.
* Find sample data sets, projects, models, prompts, and notebooks in the Samples to gain hands\-on experience:
  * [Notebooks](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=notebook) that you can add to your project to get started analyzing data and building models.
  * [Projects](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=project-template) that you can import containing notebooks, data sets, prompts, and other assets.
  * [Data sets](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=dataset) that you can add to your project to refine, analyze, and build models.
  * [Prompts](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=example-prompt) that you can use in the Prompt Lab to prompt a foundation model.
  * [Foundation models](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=foundation-model) that you can use in the Prompt Lab.
* Find more [Python client samples and examples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html)\.
<!-- </ul> -->
**Parent topic:**[Quick start tutorials](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html)
<!-- </article "role="article" "> -->
|
F870AF12BC30438B0DAB4FF5365B5279F2F9A93A | https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=en | Quick start: Build a model using SPSS Modeler | Quick start: Build a model using SPSS Modeler
You can create, train, and deploy models using SPSS Modeler. Read about SPSS Modeler, then watch a video and follow a tutorial that’s suitable for beginners and requires no coding.
Your basic workflow includes these tasks:
1. Open your sandbox project. Projects are where you can collaborate with others to work with data.
2. Add an SPSS Modeler flow to the project.
3. Configure the nodes on the canvas, and run the flow.
4. Review the model details and save the model.
5. Deploy and test your model.
Read about SPSS Modeler
With SPSS Modeler flows, you can quickly develop predictive models using business expertise and deploy them into business operations to improve decision making. Designed around the long-established SPSS Modeler client software and the industry-standard CRISP-DM model it uses, the flows interface supports the entire data mining process, from data to better business results.
SPSS Modeler offers a variety of modeling methods taken from machine learning, artificial intelligence, and statistics. The methods available on the node palette allow you to derive new information from your data and to develop predictive models. Each method has certain strengths and is best suited for particular types of problems.
[Read more about SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsd/spss-modeler.html)
[Learn about other ways to build models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html)
Watch a video about creating a model using SPSS Modeler
 Watch this video to see how to create and run an SPSS Modeler flow to train a machine learning model.
This video provides a visual method to learn the concepts and tasks in this documentation.
Try a tutorial to create a model using SPSS Modeler
In this tutorial, you will complete these tasks:
* [Task 1: Open a project.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=en#step01)
* [Task 2: Add a data set to your project.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=en#step02)
* [Task 3: Create the SPSS Modeler flow.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=en#step03)
* [Task 4: Add the nodes to the SPSS Modeler flow.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=en#step04)
* [Task 5: Run the SPSS Modeler flow and explore the model details.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=en#step05)
* [Task 6: Evaluate the model.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=en#step06)
* [Task 7: Deploy and test the model with new data.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=en#step07)
This tutorial will take approximately 30 minutes to complete.
Example data
The data set used in this tutorial is from the University of California, Irvine, and is the result of an extensive study based on hospital admissions over a period of time. The model will use three important factors to help predict chronic kidney disease.
Expand all sections
* Tips for completing this tutorial
### Use the video picture-in-picture Tip: Start the video, then as you scroll through the tutorial, the video moves to picture-in-picture mode. Close the video table of contents for the best experience with picture-in-picture. You can use picture-in-picture mode so you can follow the video as you complete the tasks in this tutorial. Click the timestamps for each task to follow along.The following animated image shows how to use the video picture-in-picture and table of contents features: {: width="560px" height="315px" data-tearsheet="this"} ### Get help in the community If you need help with this tutorial, you can ask a question or find an answer in the [Cloud Pak for Data Community discussion forum](https://community.ibm.com/community/user/cloudpakfordata/communities/community-home/digestviewer?communitykey=c0c16ff2-10ef-4b50-ae4c-57d769937235){: new_window}. ### Set up your browser windows For the optimal experience completing this tutorial, open Cloud Pak for Data in one browser window, and keep this tutorial page open in another browser window to switch easily between the two applications. Consider arranging the two browser windows side-by-side to make it easier to follow along. {: width="560px" height="315px" data-tearsheet="this"} Tip: If you encounter a guided tour while completing this tutorial in the user interface, click Maybe later.
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=en#video-preview)
* Task 1: Open a project
You need a project to store the SPSS Modeler flow. You can use your sandbox project or create a project. 1. From the navigation menu ![Navigation menu](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/hamburger-menu.png){: iih}, choose Projects > View all projects. 1. Open your sandbox project. If you want to use a new project: 1. Click New project. 1. Select Create an empty project. 1. Enter a name and optional description for the project. 1. Choose an existing [object storage service instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/storage-options.html){: new_window} or create a new one. 1. Click Create. For more information or to watch a video, see [Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html){: new_window}. ### ![Checkmark icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/qs-checkmark.svg){: iih} Check your progress The following image shows the new project.
{: width="100%" }
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=en#video-preview)
* Task 2: Add the data set to your project
 To preview this task, watch the video beginning at 00:13. This tutorial uses a sample data set. Follow these steps to add the sample data set to your project: 1. Access the [UCI ML Repository: Chronic Kidney Disease Data Set](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/a25870b7249ad55605de7a2e59567a7e){: new_window} in the Samples. 1. Click Preview. There are three important factors that help predict chronic kidney disease which are available as part of this analysis: the age of the test subject, the serum creatinine test results, and diabetes test results. And the class value indicates if the patient has been previously diagnosed for kidney disease. 1. Click Add to project. 1. Select the project from the list, and click Add. 1. Click View Project. 1. From your project's Assets page, locate the UCI ML Repository Chronic Kidney Disease Data Set.csv file. ### {: iih} Check your progress The following image shows the Assets tab in the project.
{: width="100%" }
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=en#video-preview)
* Task 3: Create the SPSS Modeler flow
 To preview this task, watch the video beginning at 01:11. Follow these steps to create an SPSS Modeler flow in the project: 1. Click New asset > Build models as a visual flow. 1. Type a name and description for the flow. 1. For the runtime definition, accept the Default SPSS Modeler S definition. 1. Click Create. This opens up the Flow Editor that you'll use to create the flow. ### {: iih} Check your progress The following image shows the flow editor.
{: width="100%" }
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=en#video-preview)
* Task 4: Add the nodes to the SPSS Modeler flow
 To preview this task, watch the video beginning at 01:31. After you load the data, you must transform the data. Create a simple flow by dragging transformers and estimators onto the canvas and connecting them to the data source. Use the following nodes from the palette: - Data Asset: loads the csv file from the project - Partition: divides the data into training and testing segments - Type: sets the data type. Use it to designate the class field as a target type. - C5.0: a classification algorithm - Analysis: view the model and check its accuracy - Table: preview the data with predictions Follow these steps to create the flow: 1. Add the data asset node: 1. From the Import section, drag the Data Asset node onto the canvas. 1. Double-click the Data Asset node to select the data set. 1. Select Data asset > UCI ML Repository Chronic Kidney Disease Data Set.csv. 1. Click Select. 1. View the Data Asset properties. 1. Click Save. 1. Add the Partition node: 1. From the Field Operations section, drag the Partition node onto the canvas. 1. Connect the Data Asset node to the Partition node. 1. Double-click the Partition node to view its properties. The default partition divides half of the data for training and the other half for testing. 1. Click Save. 1. Add the Type node: 1. From the Field Operations section, drag the Type node onto the canvas. 1. Connect the Partition node to the Type node. 1. Double-click the Type node to view its properties. The Type node specifies the measurement level for each field. This source data file uses four different measurement levels: Continuous, Categorical, Nominal, Ordinal, and Flag. 1. Search for the class field. For each field, the role indicates the part that each field plays in modeling. Change the classRole to Target - the field you want to predict. 1. Click Save. 1. Add the C5.0 classification algorithm node: 1. From the Modeling section, drag the C5.0 node onto the canvas. 1. Connect the Type node to the C5.0 node. 1. Double-click the C5.0 node to view its properties. By default, the C5.0 algorithm builds a decision tree. A C5.0 model works by splitting the sample based on the field that provides the maximum information gain. Each sub-sample defined by the first split is then split again, usually based on a different field, and the process repeats until the subsamples can't be split any further. Finally, the lowest-level splits are reexamined, and those that don't contribute significantly to the value of the model are removed. 1. Toggle on Use settings defined in this node. 1. For Target, select class. 1. In the Inputs section, click Add columns. 1. Clear the checkbox next to Field name. 1. Select age, sc, dm. 1. Click OK. 1. Click Save. ### {: iih} Check your progress The following image shows the completed flow.
{: width="100%" }
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=en#video-preview)
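C5.0 itself ships only with SPSS Modeler, but the information-gain splitting it performs can be approximated with an open-source decision tree. The following is a minimal sketch using scikit-learn's entropy criterion as a rough analogue, assuming a local, already-cleaned copy of the data set with the age, sc, dm, and class columns (the file name is a placeholder):

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Placeholder file name; assumes missing values were already cleaned.
df = pd.read_csv("chronic_kidney_disease.csv")

# Use the same three predictors as the tutorial; encode the categorical dm field.
X = df[["age", "sc"]].copy()
X["dm"] = (df["dm"] == "yes").astype(int)
y = df["class"]

# Mirror the Partition node: half of the data for training, half for testing.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=42)

# Entropy-based splits approximate C5.0's information-gain criterion.
tree = DecisionTreeClassifier(criterion="entropy", min_samples_leaf=5)
tree.fit(X_train, y_train)

print(export_text(tree, feature_names=["age", "sc", "dm"]))
print("Test accuracy:", tree.score(X_test, y_test))
```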
* Task 5: Run the SPSS Modeler flow and explore the model details
 To preview this task, watch the video beginning at 04:20. Now that you have designed the flow, follow these steps to run the flow, and examine the tree diagram to see the decision points: 1. Right-click the C5.0 node and select Run. Running the flow generates a new model nugget on the canvas. 1. Right-click the model nugget and select View Model to view the model details. 1. View the Model Information which provides a model summary. 1. Click Top Decision Rules. A table displays a series of rules that were used to assign individual records to child nodes based on the values of different input fields. 1. Click Feature Importance. A chart shows the relative importance of each predictor in estimating the model. From this, you can see that serum creatinine is easily the most significant factor, with diabetes being the next most significant factor. 1. Click Tree Diagram. The same model is displayed in the form of a tree, with a node at each decision point. 1. Hover over the top node, which provides a summary for all the records in the data set. Almost 40% of the cases in the data set are classified as not diagnosed with kidney disease. The tree can provide additional clues as to what factors might be responsible. 1. Notice the two branches stemming from the top node, which indicates a split by serum creatinine. - Review the branch that shows records where the serum creatinine is greater than 1.25. In this case, 100% of those patients have a positive kidney disease diagnosis. - Review the branch that shows records where the serum creatinine is less than or equal to 1.25. Almost 80% of those patients don't have a positive kidney disease diagnosis, but almost 20% with lower serum creatinine were still diagnosed with kidney disease. 1. Notice the branches stemming from sc<=1.250, which is split by diabetes. - Review the branch that shows patients with low serum creatinine (sc<=1.250) and diagnosed diabetes (dm=yes). 100% of these patients were also diagnosed with kidney disease. - Review the branch that shows patients with low serum creatinine (sc<=1.250) and no diabetes (dm=no), 85% were not diagnosed with kidney disease, but 15% of them were still diagnosed with kidney disease. 1. Notice the branches stemming from dm = no, which is split by the last significant factor, age. - Review the branch that shows patients 14 years old or younger (age <= 14). This branch shows that 75% of young patients with low serum creatinine and no diabetes were at risk of getting kidney disease. - Review the branch that shows patients older than 14 years old (age > 14). This branch shows that only 12% of patients over 14 years old with low serum creatinine and no diabetes were at risk of getting kidney disease. 1. Close the model details. ### {: iih} Check your progress The following image shows the tree diagram.
{: width="100%" }
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=en#video-preview)
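If you find rules easier to scan than tree diagrams, the splits narrated above can be restated as ordinary code. This sketch simply encodes the decision rules and the approximate leaf percentages quoted in this task; it is not output generated by SPSS Modeler:

```python
def predict_ckd_probability(age: float, sc: float, dm: str) -> float:
    """Approximate probability of a chronic kidney disease diagnosis,
    following the C5.0 tree splits described in this task."""
    if sc > 1.25:
        return 1.00   # all patients with high serum creatinine were diagnosed
    if dm == "yes":
        return 1.00   # low serum creatinine but diabetic: all diagnosed
    if age <= 14:
        return 0.75   # young, low sc, no diabetes: 75% were still at risk
    return 0.12       # older than 14, low sc, no diabetes: about 12% at risk

# Example: a 62-year-old diabetic patient with serum creatinine of 1.8
print(predict_ckd_probability(age=62, sc=1.8, dm="yes"))  # 1.0
```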
* Task 6: Evaluate the model
 To preview this task, watch the video beginning at 07:24. Follow these steps to use the Analysis and Table nodes to evaluate the model: 1. From the Outputs section, drag the Analysis node onto the canvas. 1. Connect the Model nugget to the Analysis node. 1. Right-click the Analysis node, and select Run. 1. From the Outputs panel, open the Analysis, which shows that the model correctly predicted a kidney disease diagnosis almost 95% of the time. Close the Analysis. 1. Right-click the Analysis node, and select Save branch as a model. 1. For the Model name, type Kidney Disease Analysis{: .cp}. 1. Click Save. 1. Click Close. 1. From the Outputs section, drag the Table node onto the canvas. 1. Connect the Model nugget to the Table node. 1. Right-click the Table node, and select Preview data. 1. When the Preview displays, scroll to the last two columns. The $C-Class column contains the prediction of kidney disease, and the $CC-Class column indicates the confidence score for that prediction. 1. Close the Preview. ### {: iih} Check your progress The following image shows the preview table with the predictions.
{: width="100%" }
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=en#video-preview)
* Task 7: Deploy and test the model with new data
 To preview this task, watch the video beginning at 09:10. Lastly, follow these steps to deploy this model and predict the outcome with new data. 1. Return to the Project's Assets tab. 1. Click the Models section, and open the Kidney Disease Analysis model. 1. Click Promote to deployment space. 1. Choose an existing deployment space. If you don't have a deployment space, you can create a new one: 1. Provide a space name. 1. Select a storage service. 1. Select a machine learning service. 1. Click Create. 1. Click Close. 1. Select Go to the model in the space after promoting it. 1. Click Promote. 1. When the model displays inside the deployment space, click New deployment. 1. Select Online as the Deployment type. 1. Specify a name for the deployment. 1. Click Create. 1. When the deployment is complete, click the deployment name to view the deployment details page. 1. Go to the Test tab. You can test the deployed model from the deployment details page in two ways: test with a form or test with JSON code. 1. Click the JSON input, then copy the following test data and paste it to replace the existing JSON text: json { "input_data": [ { "fields": "age", "bp", "sg", "al", "su", "rbc", "pc", "pcc", "ba", "bgr", "bu", "sc", "sod", "pot", "hemo", "pcv", "wbcc", "rbcc", "htn", "dm", "cad", "appet", "pe", "ane", "class" ], "values": "62", "80", "1.01", "2", "3", "normal", "normal", "notpresent", "notpresent", "423", "53", "1.8", "", "", "9.6", "31", "7500", "", "no", "yes", "no", "poor", "no", "yes", "ckd" ] ] } ] } 1. Click Predict to predict whether a 62 year old with diabetes and a serum creatinine ratio of 1.8 would likely be diagnosed with kidney disease. The resulting prediction indicates that this patient has a high probability of a kidney disease diagnosis. ### {: iih} Check your progress The following image shows the Test tab for the model deployment with a prediction.
{: width="100%" }
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=en#video-preview)
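The same test can be scripted with the Watson Machine Learning Python client instead of the Test tab. A minimal sketch, assuming an authenticated APIClient with the deployment space set as the default space, and a placeholder deployment ID:

```python
# Assumes `client` is an authenticated ibm_watson_machine_learning.APIClient
# with client.set.default_space(...) already called for the deployment space.
deployment_id = "PASTE-YOUR-DEPLOYMENT-ID-HERE"  # placeholder

fields = ["age", "bp", "sg", "al", "su", "rbc", "pc", "pcc", "ba", "bgr",
          "bu", "sc", "sod", "pot", "hemo", "pcv", "wbcc", "rbcc", "htn",
          "dm", "cad", "appet", "pe", "ane", "class"]
values = [["62", "80", "1.01", "2", "3", "normal", "normal", "notpresent",
           "notpresent", "423", "53", "1.8", "", "", "9.6", "31", "7500",
           "", "no", "yes", "no", "poor", "no", "yes", "ckd"]]

payload = {"input_data": [{"fields": fields, "values": values}]}
result = client.deployments.score(deployment_id, payload)
print(result)  # includes the predicted class and its confidence
```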
Next steps
Now you can use this data set for further analysis. For example, you can perform tasks such as:
* [Cleansing and shaping data](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html)
* [Analyze the data in a Jupyter notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-analyze.html)
Additional resources
* Find more [SPSS Modeler tutorials](https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials.html)
* Try these other methods to build models:
* [Build and deploy a machine learning model with AutoAI](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build.html)
* [Build and deploy a machine learning model with SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html)
* [Build and deploy a Decision Optimization model](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html)
* View more [videos](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/videos-wx.html).
* Find sample data sets, projects, models, prompts, and notebooks in the Samples to gain hands-on experience:
  * [Notebooks](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=notebook) that you can add to your project to get started analyzing data and building models.
  * [Projects](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=project-template) that you can import containing notebooks, data sets, prompts, and other assets.
  * [Data sets](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=dataset) that you can add to your project to refine, analyze, and build models.
  * [Prompts](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=example-prompt) that you can use in the Prompt Lab to prompt a foundation model.
  * [Foundation models](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=foundation-model) that you can use in the Prompt Lab.
* Contribute to the [SPSS Modeler community](https://ibm.biz/spss-modeler-community)
Parent topic:[Quick start tutorials](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html)
| # Quick start: Build a model using SPSS Modeler #
You can create, train, and deploy models using SPSS Modeler\. Read about SPSS Modeler, then watch a video and follow a tutorial that’s suitable for beginners and requires no coding\.
Your basic workflow includes these tasks:
<!-- <ol> -->
1. Open your sandbox project\. Projects are where you can collaborate with others to work with data\.
2. Add an SPSS Modeler flow to the project\.
3. Configure the nodes on the canvas, and run the flow\.
4. Review the model details and save the model\.
5. Deploy and test your model\.
<!-- </ol> -->
## Read about SPSS Modeler ##
With SPSS Modeler flows, you can quickly develop predictive models using business expertise and deploy them into business operations to improve decision making\. Designed around the long\-established SPSS Modeler client software and the industry\-standard CRISP\-DM model it uses, the flows interface supports the entire data mining process, from data to better business results\.
SPSS Modeler offers a variety of modeling methods taken from machine learning, artificial intelligence, and statistics\. The methods available on the node palette allow you to derive new information from your data and to develop predictive models\. Each method has certain strengths and is best suited for particular types of problems\.
[Read more about SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsd/spss-modeler.html)
[Learn about other ways to build models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html)
## Watch a video about creating a model using SPSS Modeler ##
 Watch this video to see how to create and run an SPSS Modeler flow to train a machine learning model\.
This video provides a visual method to learn the concepts and tasks in this documentation\.
## Try a tutorial to create a model using SPSS Modeler ##
In this tutorial, you will complete these tasks:
<!-- <ul> -->
* [Task 1: Open a project\.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=en#step01)
* [Task 2: Add a data set to your project\.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=en#step02)
* [Task 3: Create the SPSS Modeler flow\.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=en#step03)
* [Task 4: Add the nodes to the SPSS Modeler flow\.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=en#step04)
* [Task 5: Run the SPSS Modeler flow and explore the model details\.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=en#step05)
* [Task 6: Evaluate the model\.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=en#step06)
* [Task 7: Deploy and test the model with new data\.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=en#step07)
<!-- </ul> -->
This tutorial will take approximately 30 minutes to complete\.
### Example data ###
The data set used in this tutorial is from the University of California, Irvine, and is the result of an extensive study based on hospital admissions over a period of time\. The model will use three important factors to help predict chronic kidney disease\.
Expand all sections
<!-- <ul> -->
* Tips for completing this tutorial
\#\#\# Use the video picture-in-picture Tip: Start the video, then as you scroll through the tutorial, the video moves to picture-in-picture mode. Close the video table of contents for the best experience with picture-in-picture. You can use picture-in-picture mode so you can follow the video as you complete the tasks in this tutorial. Click the timestamps for each task to follow along.The following animated image shows how to use the video picture-in-picture and table of contents features: \{: width="560px" height="315px" data-tearsheet="this"\} \#\#\# Get help in the community If you need help with this tutorial, you can ask a question or find an answer in the [Cloud Pak for Data Community discussion forum](https://community.ibm.com/community/user/cloudpakfordata/communities/community-home/digestviewer?communitykey=c0c16ff2-10ef-4b50-ae4c-57d769937235)\{: new\_window\}. \#\#\# Set up your browser windows For the optimal experience completing this tutorial, open Cloud Pak for Data in one browser window, and keep this tutorial page open in another browser window to switch easily between the two applications. Consider arranging the two browser windows side-by-side to make it easier to follow along. \{: width="560px" height="315px" data-tearsheet="this"\} Tip: If you encounter a guided tour while completing this tutorial in the user interface, click **Maybe later**.
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=en#video-preview)
<!-- </ul> -->
<!-- <ul> -->
* Task 1: Open a project
You need a project to store the SPSS Modeler flow. You can use your sandbox project or create a project. 1. From the navigation menu ![Navigation menu](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/hamburger-menu.png)\{: iih\}, choose **Projects > View all projects**. 1. Open your sandbox project. If you want to use a new project: 1. Click **New project**. 1. Select **Create an empty project**. 1. Enter a name and optional description for the project. 1. Choose an existing [object storage service instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/storage-options.html)\{: new\_window\} or create a new one. 1. Click **Create**. For more information or to watch a video, see [Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html)\{: new\_window\}. \#\#\# ![Checkmark icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/qs-checkmark.svg)\{: iih\} Check your progress The following image shows the new project.
\{: width="100%" \}
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=en#video-preview)
<!-- </ul> -->
<!-- <ul> -->
* Task 2: Add the data set to your project
 To preview this task, watch the video beginning at 00:13. This tutorial uses a sample data set. Follow these steps to add the sample data set to your project: 1. Access the [UCI ML Repository: Chronic Kidney Disease Data Set](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/a25870b7249ad55605de7a2e59567a7e)\{: new\_window\} in the *Samples*. 1. Click **Preview**. There are three important factors that help predict chronic kidney disease which are available as part of this analysis: the age of the test subject, the serum creatinine test results, and diabetes test results. And the class value indicates if the patient has been previously diagnosed for kidney disease. 1. Click **Add to project**. 1. Select the project from the list, and click **Add**. 1. Click **View Project**. 1. From your project's *Assets* page, locate the **UCI ML Repository Chronic Kidney Disease Data Set.csv** file. \#\#\# \{: iih\} Check your progress The following image shows the *Assets* tab in the project.
\{: width="100%" \}
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=en#video-preview)
<!-- </ul> -->
<!-- <ul> -->
* Task 3: Create the SPSS Modeler flow
 To preview this task, watch the video beginning at 01:11. Follow these steps to create an SPSS Modeler flow in the project: 1. Click **New asset > Build models as a visual flow**. 1. Type a name and description for the flow. 1. For the runtime definition, accept the **Default SPSS Modeler S** definition. 1. Click **Create**. This opens up the Flow Editor that you'll use to create the flow. \#\#\# \{: iih\} Check your progress The following image shows the flow editor.
\{: width="100%" \}
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=en#video-preview)
<!-- </ul> -->
<!-- <ul> -->
* Task 4: Add the nodes to the SPSS Modeler flow
 To preview this task, watch the video beginning at 01:31. After you load the data, you must transform the data. Create a simple flow by dragging transformers and estimators onto the canvas and connecting them to the data source. Use the following nodes from the palette: - Data Asset: loads the csv file from the project - Partition: divides the data into training and testing segments - Type: sets the data type. Use it to designate the `class` field as a `target` type. - C5.0: a classification algorithm - Analysis: view the model and check its accuracy - Table: preview the data with predictions Follow these steps to create the flow: 1. Add the data asset node: 1. From the *Import* section, drag the **Data Asset** node onto the canvas. 1. Double-click the **Data Asset** node to select the data set. 1. Select **Data asset > UCI ML Repository Chronic Kidney Disease Data Set.csv**. 1. Click **Select**. 1. View the Data Asset properties. 1. Click **Save**. 1. Add the Partition node: 1. From the *Field Operations* section, drag the **Partition** node onto the canvas. 1. Connect the **Data Asset** node to the **Partition** node. 1. Double-click the **Partition** node to view its properties. The default partition divides half of the data for training and the other half for testing. 1. Click **Save**. 1. Add the Type node: 1. From the *Field Operations* section, drag the **Type** node onto the canvas. 1. Connect the **Partition** node to the **Type** node. 1. Double-click the **Type** node to view its properties. The Type node specifies the measurement level for each field. This source data file uses four different measurement levels: Continuous, Categorical, Nominal, Ordinal, and Flag. 1. Search for the `class` field. For each field, the role indicates the part that each field plays in modeling. Change the `class`**Role** to **Target** - the field you want to predict. 1. Click **Save**. 1. Add the C5.0 classification algorithm node: 1. From the *Modeling* section, drag the **C5.0** node onto the canvas. 1. Connect the **Type** node to the **C5.0** node. 1. Double-click the **C5.0** node to view its properties. By default, the C5.0 algorithm builds a decision tree. A C5.0 model works by splitting the sample based on the field that provides the maximum information gain. Each sub-sample defined by the first split is then split again, usually based on a different field, and the process repeats until the subsamples can't be split any further. Finally, the lowest-level splits are reexamined, and those that don't contribute significantly to the value of the model are removed. 1. Toggle on **Use settings defined in this node**. 1. For *Target*, select **class**. 1. In the **Inputs** section, click **Add columns**. 1. Clear the checkbox next to *Field name*. 1. Select **age**, **sc**, **dm**. 1. Click **OK**. 1. Click **Save**. \#\#\# \{: iih\} Check your progress The following image shows the completed flow.
\{: width="100%" \}
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=en#video-preview)
<!-- </ul> -->
<!-- <ul> -->
* Task 5: Run the SPSS Modeler flow and explore the model details
 To preview this task, watch the video beginning at 04:20. Now that you have designed the flow, follow these steps to run the flow, and examine the tree diagram to see the decision points: 1. Right-click the **C5.0** node and select **Run**. Running the flow generates a new model nugget on the canvas. 1. Right-click the model nugget and select **View Model** to view the model details. 1. View the **Model Information** which provides a model summary. 1. Click **Top Decision Rules**. A table displays a series of rules that were used to assign individual records to child nodes based on the values of different input fields. 1. Click **Feature Importance**. A chart shows the relative importance of each predictor in estimating the model. From this, you can see that serum creatinine is easily the most significant factor, with diabetes being the next most significant factor. 1. Click **Tree Diagram**. The same model is displayed in the form of a tree, with a node at each decision point. 1. Hover over the top node, which provides a summary for all the records in the data set. Almost 40% of the cases in the data set are classified as not diagnosed with kidney disease. The tree can provide additional clues as to what factors might be responsible. 1. Notice the two branches stemming from the top node, which indicates a split by *serum creatinine*. - Review the branch that shows records where the serum creatinine is greater than 1.25. In this case, 100% of those patients have a positive kidney disease diagnosis. - Review the branch that shows records where the serum creatinine is less than or equal to 1.25. Almost 80% of those patients don't have a positive kidney disease diagnosis, but almost 20% with lower serum creatinine were still diagnosed with kidney disease. 1. Notice the branches stemming from *sc<=1.250*, which is split by *diabetes*. - Review the branch that shows patients with low serum creatinine (sc<=1.250) and diagnosed diabetes (dm=yes). 100% of these patients were also diagnosed with kidney disease. - Review the branch that shows patients with low serum creatinine (sc<=1.250) and no diabetes (dm=no), 85% were not diagnosed with kidney disease, but 15% of them were still diagnosed with kidney disease. 1. Notice the branches stemming from *dm = no*, which is split by the last significant factor, *age*. - Review the branch that shows patients 14 years old or younger (age <= 14). This branch shows that 75% of young patients with low serum creatinine and no diabetes were at risk of getting kidney disease. - Review the branch that shows patients older than 14 years old (age > 14). This branch shows that only 12% of patients over 14 years old with low serum creatinine and no diabetes were at risk of getting kidney disease. 1. Close the model details. \#\#\# \{: iih\} Check your progress The following image shows the tree diagram.
\{: width="100%" \}
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=en#video-preview)
<!-- </ul> -->
<!-- <ul> -->
* Task 6: Evaluate the model
 To preview this task, watch the video beginning at 07:24. Follow these steps to use the Analysis and Table nodes to evaluate the model: 1. From the *Outputs* section, drag the **Analysis** node onto the canvas. 1. Connect the **Model** nugget to the **Analysis** node. 1. Right-click the **Analysis** node, and select **Run**. 1. From the *Outputs* panel, open the **Analysis**, which shows that the model correctly predicted a kidney disease diagnosis almost 95% of the time. Close the **Analysis**. 1. Right-click the **Analysis** node, and select **Save branch as a model**. 1. For the *Model name*, type `Kidney Disease Analysis`\{: .cp\}. 1. Click **Save**. 1. Click **Close**. 1. From the *Outputs* section, drag the **Table** node onto the canvas. 1. Connect the **Model** nugget to the **Table** node. 1. Right-click the **Table** node, and select **Preview data**. 1. When the Preview displays, scroll to the last two columns. The **$C-Class** column contains the prediction of kidney disease, and the **$CC-Class** column indicates the confidence score for that prediction. 1. Close the **Preview**. \#\#\# \{: iih\} Check your progress The following image shows the preview table with the predictions.
\{: width="100%" \}
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=en#video-preview)
<!-- </ul> -->
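The accuracy that the Analysis node reports is simply the share of rows where the predicted class matches the actual class. A minimal pandas sketch of the same computation, assuming the table output was exported to a local CSV (the file name is a placeholder) with the original `class` column and the generated `$C-Class` column:

```python
import pandas as pd

# Placeholder file name; assumes the Table node preview was exported to CSV.
scored = pd.read_csv("kidney_predictions.csv")

# Accuracy = fraction of rows where the predicted class matches the actual class.
accuracy = (scored["$C-Class"] == scored["class"]).mean()
print(f"Correctly predicted {accuracy:.1%} of diagnoses")  # roughly 95% per the Analysis node
```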
<!-- <ul> -->
* Task 7: Deploy and test the model with new data
 To preview this task, watch the video beginning at 09:10. Lastly, follow these steps to deploy this model and predict the outcome with new data. 1. Return to the Project's **Assets** tab. 1. Click the **Models** section, and open the **Kidney Disease Analysis** model. 1. Click **Promote to deployment space**. 1. Choose an existing deployment space. If you don't have a deployment space, you can create a new one: 1. Provide a space name. 1. Select a storage service. 1. Select a machine learning service. 1. Click **Create**. 1. Click **Close**. 1. Select **Go to the model in the space after promoting it**. 1. Click **Promote**. 1. When the model displays inside the deployment space, click **New deployment**. 1. Select **Online** as the *Deployment type*. 1. Specify a name for the deployment. 1. Click **Create**. 1. When the deployment is complete, click the deployment name to view the deployment details page. 1. Go to the **Test** tab. You can test the deployed model from the deployment details page in two ways: test with a form or test with JSON code. 1. Click the **JSON input**, then copy the following test data and paste it to replace the existing JSON text: `json { "input_data": [ { "fields": "age", "bp", "sg", "al", "su", "rbc", "pc", "pcc", "ba", "bgr", "bu", "sc", "sod", "pot", "hemo", "pcv", "wbcc", "rbcc", "htn", "dm", "cad", "appet", "pe", "ane", "class" ], "values": "62", "80", "1.01", "2", "3", "normal", "normal", "notpresent", "notpresent", "423", "53", "1.8", "", "", "9.6", "31", "7500", "", "no", "yes", "no", "poor", "no", "yes", "ckd" ] ] } ] }` 1. Click **Predict** to predict whether a 62 year old with diabetes and a serum creatinine ratio of 1.8 would likely be diagnosed with kidney disease. The resulting prediction indicates that this patient has a high probability of a kidney disease diagnosis. \#\#\# \{: iih\} Check your progress The following image shows the Test tab for the model deployment with a prediction.
\{: width="100%" \}
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=en#video-preview)
<!-- </ul> -->
## Next steps ##
Now you can use this data set for further analysis\. For example, you can perform tasks such as:
<!-- <ul> -->
* [Cleansing and shaping data](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html)
* [Analyze the data in a Jupyter notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-analyze.html)
<!-- </ul> -->
## Additional resources ##
<!-- <ul> -->
* Find more [SPSS Modeler tutorials](https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials.html)
* Try these other methods to build models:
<!-- <ul> -->
* [Build and deploy a machine learning model with AutoAI](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build.html)
* [Build and deploy a machine learning model with SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html)
* [Build and deploy a Decision Optimization model](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html)
<!-- </ul> -->
* View more [videos](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/videos-wx.html)\.
* Find sample data sets, projects, models, prompts, and notebooks in the Samples to gain hands\-on experience:
  * [Notebooks](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=notebook) that you can add to your project to get started analyzing data and building models.
  * [Projects](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=project-template) that you can import containing notebooks, data sets, prompts, and other assets.
  * [Data sets](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=dataset) that you can add to your project to refine, analyze, and build models.
  * [Prompts](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=example-prompt) that you can use in the Prompt Lab to prompt a foundation model.
  * [Foundation models](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=foundation-model) that you can use in the Prompt Lab.
* Contribute to the [SPSS Modeler community](https://ibm.biz/spss-modeler-community)
<!-- </ul> -->
**Parent topic:**[Quick start tutorials](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html)
<!-- </article "role="article" "> -->
|
E14C56A78F56157E862DE99906254B291F5B3321 | https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build.html?context=cdpaas&locale=en | Quick start: Build and deploy a machine learning model with AutoAI | Quick start: Build and deploy a machine learning model with AutoAI
You can automate the process of building a machine learning model with the AutoAI tool. Read about the AutoAI tool, then watch a video and take a tutorial that’s suitable for beginners and does not require coding.
Your basic workflow includes these tasks:
1. Open your sandbox project. Projects are where you can collaborate with others to work with data.
2. Add your data to the project. You can add CSV files or data from a remote data source through a connection.
3. Create an AutoAI experiment in the project.
4. Review the model pipelines and save the desired pipeline as a model to deploy or as a notebook to customize.
5. Deploy and test your model.
Read about AutoAI
The AutoAI graphical tool automatically analyzes your data and generates candidate model pipelines customized for your predictive modeling problem. These model pipelines are created iteratively as AutoAI analyzes your dataset and discovers data transformations, algorithms, and parameter settings that work best for your problem setting. Results are displayed on a leaderboard, showing the automatically generated model pipelines ranked according to your problem optimization objective.
[Read more about AutoAI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html)
[Learn about other ways to build models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html)
Watch a video about creating a model using AutoAI
 Watch this video to see how to create and run an AutoAI experiment based on the bank marketing sample.
Note: This video shows tasks 2-5 of this tutorial.
This video provides a visual method to learn the concepts and tasks in this documentation.
Try a tutorial to create a model using AutoAI
This tutorial guides you through training a model to predict whether a customer is likely to subscribe to a term deposit based on a marketing campaign.
In this tutorial, you will complete these tasks:
* [Task 1: Open a project.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build.html?context=cdpaas&locale=en#step01)
* [Task 2: Build and train the model.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build.html?context=cdpaas&locale=en#step02)
* [Task 3: Promote the model to a deployment space and deploy the trained model.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build.html?context=cdpaas&locale=en#step03)
* [Task 4: Test the deployed model.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build.html?context=cdpaas&locale=en#step04)
* [Task 5: Create a batch job to score the model.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build.html?context=cdpaas&locale=en#step05)
This tutorial will take approximately 30 minutes to complete.
Sample data
The sample data that is used in the guided experience is UCI: Bank marketing data used to predict whether a customer enrolls in a marketing promotion.

Expand all sections
* Tips for completing this tutorial
### Use the video picture-in-picture Tip: Start the video, then as you scroll through the tutorial, the video moves to picture-in-picture mode. Close the video table of contents for the best experience with picture-in-picture. You can use picture-in-picture mode so you can follow the video as you complete the tasks in this tutorial. Click the timestamps for each task to follow along.The following animated image shows how to use the video picture-in-picture and table of contents features: {: width="560px" height="315px" data-tearsheet="this"} ### Get help in the community If you need help with this tutorial, you can ask a question or find an answer in the [Cloud Pak for Data Community discussion forum](https://community.ibm.com/community/user/cloudpakfordata/communities/community-home/digestviewer?communitykey=c0c16ff2-10ef-4b50-ae4c-57d769937235){: new_window}. ### Set up your browser windows For the optimal experience completing this tutorial, open Cloud Pak for Data in one browser window, and keep this tutorial page open in another browser window to switch easily between the two applications. Consider arranging the two browser windows side-by-side to make it easier to follow along. {: width="560px" height="315px" data-tearsheet="this"} Tip: If you encounter a guided tour while completing this tutorial in the user interface, click Maybe later.
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build.html?context=cdpaas&locale=en#video-preview)
* Task 1: Open a project
You need a project to store the data and the AutoAI experiment. You can use your sandbox project or create a project. 1. From the navigation menu ![Navigation menu](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/hamburger-menu.png){: iih}, choose Projects > View all projects. 1. Open your sandbox project. If you want to use a new project: 1. Click New project. 1. Select Create an empty project. 1. Enter a name and optional description for the project. 1. Choose an existing [object storage service instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/storage-options.html){: new_window} or create a new one. 1. Click Create. 1. When the project opens, click the Manage tab and select the Services and integrations page. 1. On the IBM services tab, click Associate service. 1. Select your Watson Machine Learning instance. If you don't have a Watson Machine Learning service instance provisioned yet, follow these steps: 1. Click New service. 1. Select Watson Machine Learning. 1. Click Create. 1. Select the new service instance from the list. 1. Click Associate service. 1. If necessary, click Cancel to return to the Services & Integrations page. For more information or to watch a video, see [Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html){: new_window}. ### ![Checkmark icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/qs-checkmark.svg){: iih} Check your progress The following image shows the new project.
{: width="100%" }
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build.html?context=cdpaas&locale=en#video-preview)
* Task 2: Build and train the model
 To preview this task, watch the video beginning at 00:08. Now that you have a project, you are ready to build and train the model using AutoAI. Follow these steps to create the AutoAI experiment, review the model pipelines, and select a pipeline to save as a model: 1. Click the Assets tab in your project, and then click New asset > Build machine learning models automatically. 1. On the Build machine learning models automatically page, complete the basic fields: 1. Click the Samples panel. 1. Select Bank marketing sample data, and click Next. The project name and description will be filled in for you. 1. Confirm that the Machine Learning service instance that you associated with your project is selected in the Watson Machine Learning Service Instance field. 1. Click Create. 1. In this sample AutoAI experiment, you will see that the Bank marketing sample data is already selected for your experiment. {: biw} 1. Review the preset experiment settings. Based on the data set and the selected column to predict, AutoAI analyzes a subset of the data and chooses a prediction type and metric to optimize. In this case, the prediction type is Binary Classification, the positive class is Yes, and the optimized metric is ROC AUC & run time. 1. Click Run experiment. As the model trains, you see an infographic that shows the process of building the pipelines.
{: biw} For a list of algorithms, or estimators, available with each machine learning technique in AutoAI, see: [AutoAI implementation detail](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html). 1. After the experiment run is complete, you can view and compare the ranked pipelines in a leaderboard. {: biw} 1. You can click Pipeline comparison to see how they differ. {: biw} 1. Click the highest ranked pipeline to see the pipeline details. 1. Click Save as, select Model, and click Create. This saves the pipeline as a model in your project. 1. When the model is saved, click the View in project link in the notification to view the model in your project. Alternatively, you can navigate to the Assets tab in the project, and click the model name in the Models section. ### {: iih} Check your progress The following image shows the model.
{: width="100%" }
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build.html?context=cdpaas&locale=en#video-preview)
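The same experiment can also be configured from a notebook with the AutoAI experiment API in the `ibm-watson-machine-learning` package. A minimal sketch, assuming valid credentials, a placeholder space GUID, and that the target column in the bank marketing CSV is named `y` (verify the column name in your copy of the data):

```python
from ibm_watson_machine_learning.experiment import AutoAI

wml_credentials = {
    "apikey": "PASTE-YOUR-API-KEY-HERE",          # placeholder
    "url": "https://us-south.ml.cloud.ibm.com",   # assumed location
}
space_id = "PASTE-YOUR-SPACE-GUID-HERE"           # placeholder

experiment = AutoAI(wml_credentials, space_id=space_id)

# Mirror the UI presets: binary classification, positive class "yes",
# optimized for ROC AUC.
pipeline_optimizer = experiment.optimizer(
    name="Bank marketing - AutoAI",
    prediction_type=AutoAI.PredictionType.BINARY,
    prediction_column="y",          # assumed target column name
    positive_label="yes",
    scoring=AutoAI.Metrics.ROC_AUC_SCORE,
)
# pipeline_optimizer.fit(...) then trains the pipelines against a data
# connection that points at the training CSV in your project or space.
```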
* Task 3: Promote the model to a deployment space and deploy the trained model
 To preview this task, watch the video beginning at 04:57. Before you can deploy the model, you need to promote the model to a deployment space. Follow these steps to promote the model to a deployment space to deploy the model: 1. Click Promote to deployment space. 1. Choose an existing deployment space. If you don't have a deployment space: 1. Click Create a new deployment space. 1. Provide a space name and optional description. 1. Select a storage service. 1. Select a machine learning service. 1. Click Create. 1. Click Close. 1. Select your new deployment space from the list. 1. Select the Go to the model in the space after promoting it option. 1. Click Promote. Note: If you didn't select the option to go to the model in the space after promoting it, you can use the navigation menu to navigate to Deployments to select your deployment space and model.1. With the model open, click New deployment. 1. Select Online as the Deployment type. 1. Specify a name for the deployment. 1. Click Create. 1. When the deployment is complete, click the deployment name to view the deployment details page. ### {: iih} Check your progress The following image shows the new deployment.
{: width="100%" }
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build.html?context=cdpaas&locale=en#video-preview)
* Task 4: Test the deployed model
 To preview this task, watch the video beginning at 06:22. Now that you have the model deployed, you can test that that online deployment using the user interface or through the Watson Machine Learning APIs. Follow these steps to use the user interface to test the model with new data: 1. Click the Test tab. You can test the deployed model from the deployment details page in two ways: test with a form or test with JSON code. 1. Click the JSON input tab, copy the following test data, and paste it to replace the existing JSON text: json { "input_data": [ { "fields": "age", "job", "marital", "education", "default", "balance", "housing", "loan", "contact", "day", "month", "duration", "campaign", "pdays", "previous", "poutcome" ], "values": 27, "unemployed", "married", "primary", "no", 1787, "no", "no", "cellular", 19, "oct", 79, 1, -1, 0, "unknown" ] ] } ] } 1. Click Predict to predict whether a customer with the specified attributes is likely to sign up for a particular kind of account. The resulting prediction indicates that this customer has a high probability of not enrolling in the marketing promotion. 1. Click the X to close the Prediction results window. ### {: iih} Check your progress The following image shows the results of testing the deployment. The values for your prediction might differ from the values in the following image.
{: width="100%" }
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build.html?context=cdpaas&locale=en#video-preview)
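For programmatic access, the same request can be sent with the Watson Machine Learning Python client. A minimal sketch, assuming an authenticated APIClient with the deployment space set as the default space and a placeholder deployment ID:

```python
# Assumes `client` is an authenticated ibm_watson_machine_learning.APIClient
# with client.set.default_space(...) already called for the deployment space.
deployment_id = "PASTE-YOUR-DEPLOYMENT-ID-HERE"  # placeholder

payload = {
    "input_data": [{
        "fields": ["age", "job", "marital", "education", "default", "balance",
                   "housing", "loan", "contact", "day", "month", "duration",
                   "campaign", "pdays", "previous", "poutcome"],
        "values": [[27, "unemployed", "married", "primary", "no", 1787, "no",
                    "no", "cellular", 19, "oct", 79, 1, -1, 0, "unknown"]],
    }]
}

result = client.deployments.score(deployment_id, payload)

# The response typically lists a prediction and class probabilities per row,
# for example: {"predictions": [{"fields": ["prediction", "probability"],
#               "values": [["no", [0.9, 0.1]]]}]}
for row in result["predictions"][0]["values"]:
    print(row)
```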
* Task 5: Create a batch job to score the model
Now that you have tested the deployed model with a single prediction, you can create a batch deployment to score multiple records at the same time. ### Task 5a: Set up batch deployment ![preview](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/preview-tutorial.png) To preview this task, watch the video beginning at 07:00. For a batch deployment, you provide input data, also known as the model payload, in a CSV file. The data must be structured like the training data, with the same column headers. The batch job processes each row of data and creates a corresponding prediction. Follow these steps to upload the payload data to the deployment space: 1. Copy and paste the following text into a text editor, and save the file as bank-payload.csv:
age,job,marital,education,default,balance,housing,loan,contact,day,month,duration,campaign,pdays,previous,poutcome
30,unemployed,married,primary,no,1787,no,no,cellular,19,oct,79,1,-1,0,unknown
33,services,married,secondary,no,4789,yes,yes,cellular,11,may,220,1,339,4,failure
35,management,single,tertiary,no,1350,yes,no,cellular,16,apr,185,1,330,1,failure
30,management,married,tertiary,no,1476,yes,yes,unknown,3,jun,199,4,-1,0,unknown
59,blue-collar,married,secondary,no,0,yes,no,unknown,5,may,226,1,-1,0,unknown
35,management,single,tertiary,no,747,no,no,cellular,23,feb,141,2,176,3,failure
36,self-employed,married,tertiary,no,307,yes,no,cellular,14,may,341,1,330,2,other
39,technician,married,secondary,no,147,yes,no,cellular,6,may,151,2,-1,0,unknown
41,entrepreneur,married,tertiary,no,221,yes,no,unknown,14,may,57,2,-1,0,unknown
43,services,married,primary,no,-88,yes,yes,cellular,17,apr,313,1,147,2,failure
39,services,married,secondary,no,9374,yes,no,unknown,20,may,273,1,-1,0,unknown
43,admin.,married,secondary,no,264,yes,no,cellular,17,apr,113,2,-1,0,unknown
36,technician,married,tertiary,no,1109,no,no,cellular,13,aug,328,2,-1,0,unknown
20,student,single,secondary,no,502,no,no,cellular,30,apr,261,1,-1,0,unknown
31,blue-collar,married,secondary,no,360,yes,yes,cellular,29,jan,89,1,241,1,failure
40,management,married,tertiary,no,194,no,yes,cellular,29,aug,189,2,-1,0,unknown
56,technician,married,secondary,no,4073,no,no,cellular,27,aug,239,5,-1,0,unknown
37,admin.,single,tertiary,no,2317,yes,no,cellular,20,apr,114,1,152,2,failure
25,blue-collar,single,primary,no,-221,yes,no,unknown,23,may,250,1,-1,0,unknown
31,services,married,secondary,no,132,no,no,cellular,7,jul,148,1,152,1,other
1. Click your deployment space in the navigation trail. ![Navigation trail](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/autoai-nav-trail.png) 1. Click the Assets tab. 1. Drag the bank-payload.csv file into the side panel, and wait for the file to upload. ### ![Checkmark icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/qs-checkmark.svg){: iih} Check your progress The following image shows the Assets tab in the deployment space.
Task 5b: Create the batch deployment

 To preview this task, watch the video beginning at 07:30. To process a batch of inputs and have the output written to a file instead of displayed in real time, create a batch deployment:

1. Go to the Assets tab in the deployment space.
2. Click the Overflow menu for your model, and choose Deploy.
3. For the Deployment type, select Batch.
4. Type a name for the deployment.
5. Choose the smallest hardware specification.
6. Click Create.

Check your progress

The following image shows the batch deployment.
Task 5c: Create the batch job

 To preview this task, watch the video beginning at 07:44. The batch job runs the deployment. To create the job, you specify the input data and the name for the output file. You can set up a job to run on a schedule, or run it immediately. Follow these steps to create a batch job:

1. On the deployment page, click New job.
2. Specify a name for the job, and click Next.
3. Select the smallest hardware specification, and click Next.
4. Optional: Set a schedule, and click Next.
5. Optional: Choose to receive notifications, and click Next.
6. On the Choose data screen, select the input data:
   a. Click Select data source.
   b. Select Data asset > bank-payload.csv.
   c. Click Confirm.
7. Back on the Choose data screen, specify the output file:
   a. Click Add.
   b. Click Select data source.
   c. Ensure that the Create new tab is selected.
   d. For the Name, type bank-output.csv.
   e. Click Confirm.
8. Click Next for the final step.
9. Review the settings, and click Create and run to run the job immediately.

Check your progress

The following image shows the job details for the batch deployment.
Task 5d: View the output

 To preview this task, watch the video beginning at 08:42. Follow these steps to review the output file from the batch job:

1. Click the job name to see the status.
2. When the status changes to Completed, click your deployment space name in the navigation trail.
3. Click the Assets tab.
4. Click the bank-output.csv file to review the prediction results for the customer information that was submitted for batch processing. For each case, the prediction indicates that the customer is unlikely to subscribe to the bank promotion. You can also download the file and inspect it locally, as sketched after this task.

Check your progress

The following image shows the results of the batch deployment job.
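If you download bank-output.csv from the deployment space, you can inspect the batch predictions locally. A minimal sketch with pandas; the exact output column names depend on the model, so the "prediction" column name below is an assumption to verify against your file:

# Minimal sketch: inspect the downloaded batch output locally.
import pandas as pd

results = pd.read_csv("bank-output.csv")
print(results.head())

# Assumption: the output includes a column named "prediction";
# check results.columns if your file uses different names.
if "prediction" in results.columns:
    print(results["prediction"].value_counts())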
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build.html?context=cdpaas&locale=en#video-preview)
Next steps
Now you can use this data set for further analysis. For example, you or other users can do any of these tasks:
* [Cleansing and shaping data](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html)
* [Analyze the data in a Jupyter notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-analyze.html)
Additional resources
* Try these additional tutorials to get more hands-on experience with building models using AutoAI:
* [Build a binary classification model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai_example_binary_classifier.html)
* [Build a univariate time series experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-uni-tutorial.html)
* [Build a text analysis experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-text-analysis.html)
* Try these other methods to build models:
* [Build and deploy a model in a Jupyter notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-notebook.html)
* [Build and deploy a model with SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html)
* [Build and deploy a Decision Optimization model](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html)
* View more [videos](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/videos-wx.html).
* Find sample data sets, projects, models, prompts, and notebooks in the Samples to gain hands-on experience:
[Notebooks](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=notebook) that you can add to your project to get started analyzing data and building models.
[Projects](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=project-template) that you can import containing notebooks, data sets, prompts, and other assets.
[Data sets](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=dataset) that you can add to your project to refine, analyze, and build models.
[Prompts](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=example-prompt) that you can use in the Prompt Lab to prompt a foundation model.
[Foundation models](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=foundation-model) that you can use in the Prompt Lab.
Parent topic:[Quick start tutorials](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html)
C535650C17CDE010EACBF5B6BF85FD8E593B77D6 | https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html?context=cdpaas&locale=en | Quick start: Build, run, and deploy a Decision Optimization model | Quick start: Build, run, and deploy a Decision Optimization model
You can build and run Decision Optimization models to help you make the best decisions to solve business problems based on your objectives. Read about Decision Optimization, then watch a video and take a tutorial that’s suitable for users with some knowledge of prescriptive analytics, but does not require coding.
Your basic workflow includes these tasks:
1. Open your sandbox project. Projects are where you can collaborate with others to work with data.
2. Add a Decision Optimization Experiment to the project. You can add compressed files or data from sample files.
3. Associate a Watson Machine Learning Service with the project.
4. Create a deployment space to associate with the project's Watson Machine Learning Service.
5. Review the data, model objectives, and constraints in the Modeling Assistant.
6. Run one or more scenarios to test your model and review the results.
7. Deploy your model.
Read about Decision Optimization
Decision Optimization can analyze data and create an optimization model (with the Modeling Assistant) based on a business problem. First, an optimization model is derived by converting a business problem into a mathematical formulation that the optimization engine can understand. The formulation consists of objectives and constraints that define the model that the final decision is based on. The model, together with your input data, forms a scenario. The optimization engine solves the scenario by applying the objectives and constraints to narrow millions of possibilities down to the best solution. This solution satisfies the model formulation, or relaxes certain constraints if the model is infeasible. You can test scenarios by using different data, or by modifying the objectives and constraints, rerunning the scenario, and reviewing the solutions. When you are satisfied, you can deploy your model.
[Read more about Decision Optimization](https://dataplatform.cloud.ibm.com/docs/content/DO/DOWS-Cloud_home.html)
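To make objectives and constraints concrete before you open the Modeling Assistant, here is a minimal sketch of the same ideas in Python with the docplex library (IBM Decision Optimization CPLEX modeling for Python). This is an illustrative toy model, not the tutorial's house construction sample; the variable names and numbers are invented, and solving it locally requires a CPLEX runtime.

# Minimal sketch of an optimization model with docplex: decide how many units
# of two products to build, subject to a shared labor budget.
# Requires: pip install docplex (plus a CPLEX runtime to solve locally).
from docplex.mp.model import Model

mdl = Model(name="toy_production")

# Decision variables: units of each product (integer, non-negative).
x = mdl.integer_var(name="product_a", lb=0)
y = mdl.integer_var(name="product_b", lb=0)

# Constraint: 2 hours of labor per unit of A, 3 per unit of B, 40 hours total.
mdl.add_constraint(2 * x + 3 * y <= 40, ctname="labor_budget")

# Objective: maximize profit at 30 per unit of A and 40 per unit of B.
mdl.maximize(30 * x + 40 * y)

solution = mdl.solve()
if solution:
    print(solution.get_value(x), solution.get_value(y), mdl.objective_value)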
Watch a video about creating a Decision Optimization model
 Watch this video to see how to run a sample Decision Optimization experiment to create, solve, and deploy a Decision Optimization model with Watson Studio and Watson Machine Learning.
Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform. The user interface is frequently improved.
This video provides a visual method to learn the concepts and tasks in this documentation.
Try a tutorial to create a model that uses Decision Optimization
In this tutorial, you will complete these tasks:
* [Task 1: Open a project.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html?context=cdpaas&locale=en#step01)
* [Task 2: Create a Decision Optimization experiment in the project.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html?context=cdpaas&locale=en#step02)
* [Task 3: Build a model and visualize a scenario result.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html?context=cdpaas&locale=en#step03)
* [Task 4: Change model objectives and constraints.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html?context=cdpaas&locale=en#step04)
* [Task 5: Deploy the model.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html?context=cdpaas&locale=en#step05)
* [Task 6: Test the model.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html?context=cdpaas&locale=en#step06)
This tutorial will take approximately 30 minutes to complete.
* Tips for completing this tutorial
Use the video picture-in-picture

Tip: Start the video, then as you scroll through the tutorial, the video moves to picture-in-picture mode. Close the video table of contents for the best experience with picture-in-picture. You can use picture-in-picture mode so that you can follow the video as you complete the tasks in this tutorial. Click the timestamps for each task to follow along. The following animated image shows how to use the video picture-in-picture and table of contents features.

Get help in the community

If you need help with this tutorial, you can ask a question or find an answer in the [Cloud Pak for Data Community discussion forum](https://community.ibm.com/community/user/cloudpakfordata/communities/community-home/digestviewer?communitykey=c0c16ff2-10ef-4b50-ae4c-57d769937235).

Set up your browser windows

For the optimal experience completing this tutorial, open Cloud Pak for Data in one browser window, and keep this tutorial page open in another browser window to switch easily between the two applications. Consider arranging the two browser windows side by side to make it easier to follow along.

Tip: If you encounter a guided tour while completing this tutorial in the user interface, click Maybe later.
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html?context=cdpaas&locale=en#video-preview)
* Task 1: Open a project
You need a project to store the data and the Decision Optimization experiment. You can use your sandbox project or create a project.

1. From the navigation menu, choose Projects > View all projects.
2. Open your sandbox project. If you want to use a new project:
   a. Click New project.
   b. Select Create an empty project.
   c. Enter a name and optional description for the project.
   d. Choose an existing [object storage service instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/storage-options.html) or create a new one.
   e. Click Create.
3. When the project opens, click the Manage tab and select the Services and integrations page.
4. On the IBM services tab, click Associate service.
5. Select your Watson Machine Learning instance. If you don't have a Watson Machine Learning service instance provisioned yet, follow these steps:
   a. Click New service.
   b. Select Watson Machine Learning.
   c. Click Create.
   d. Select the new service instance from the list.
6. Click Associate service.
7. If necessary, click Cancel to return to the Services & integrations page.

For more information, see [Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html).

Check your progress

The following image shows the new project.
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html?context=cdpaas&locale=en#video-preview)
* Task 2: Create a Decision Optimization experiment
 To preview this task, watch the video beginning at 00:20. Now, follow these steps to create the Decision Optimization experiment in your project: 1. From your new project, click New asset > Solve optimization problems. 1. Select Local file. 1. Click Get sample files to view the GitHub repository containing the sample files. 1. In the DO-Samples repository, open the watsonx.ai and Cloud Pak for Data as a Service folder. 1. Click the HouseConstructionScheduling.zip file containing the house construction sample files. 1. Click Download to save the zip file to your computer. 1. Return to the Create a Decision Optimization experiment page, and click Browse. 1. Select the HouseConstructionScheduling.zip file from your computer. 1. Click Open. 1. If you don't already have a Watson Machine Learning service associated with this project, click Add a Machine Learning service. 1. Review your Watson Machine Learning service instances. You can use an existing service, or create a new service instance from here: click New service, select Machine Learning, and click Create. 1. Select your Watson Machine Learning instance from the list, and click Associate. 1. If necessary, click Cancel to return to the Services & integrations page. For more information on associated services, see [Adding associated services](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assoc-services.html){: new_window}. 1. Choose a deployment space to associate with this experiment. If you do not have an existing deployment space, create one: 1. In the Select deployment space section, click New deployment space. 1. In the Name field, type House sample{: .cp} to provide a name for the deployment space. 1. Click Create. 1. When the space is ready, and click Close to return to the Create a Decision Optimization experiment page. Your new deployment space is selected. 1. Click Create to open the Decision Optimization experiment. ### {: iih} Check your progress The following image shows the experiment with the sample files.
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html?context=cdpaas&locale=en#video-preview)
* Task 3: Build a model and visualize a scenario result
 To preview this task, watch the video beginning at 01:47. Follow these steps to build a model and visualize the result using the Decision Optimization Modeling Assistant. 1. In the left pane, click Build model to open the Modeling Assistant. This model was built with the Modeling Assistant so you can see that the objectives and constraints are in natural language, but you can also formulate your model in Python, OPL or import CPLEX and CPO models. 1. Click Run to run the scenario to solve the model and wait for the run to complete. 1. When the run completes, the Explore solution view displays. Under the Results tab, click Solution assets to see the resulting (best) values for the decision variables. These solution tables are displayed in alphabetical order by default. 1. In the left pane, select Visualization. 1. Under the Solutions tab, select Gantt to view the scenario with the optimal schedule. ### {: iih} Check your progress The following image shows the Visualization page with a Gantt chart.
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html?context=cdpaas&locale=en#video-preview)
* Task 4: Change model objectives and constraints
 To preview this task, watch the video beginning at 03:01. Now, you want to make a change to your model formulation to consider an additional objective. Follow these steps to change the model objectives and constraints: 1. Click Build model. 1. In the left pane, click the Overflow menu {: iih} next to Scenario 1, and select Duplicate. 1. For the name, type Scenario 2{: .cp}, and click Create. 1. For Scenario 2, add an objective to the model to optimize the quality of work based on the expertise of each contractor. 1. Under Add to model, in the search field, type overall quality{: .cp}, and press Enter. 1. Expand the Objective section. 1. Click Maximize overall quality of Subcontractor-Activity assignments according to table of assignment values to add it as an objective. This new objective is now listed under the Objectives section along with the Minimize time to complete all Activities objective. 1. For the objective that you just added, click table of assignment values, and select Expertise. A list of Expertise parameters displays. 1. From this list, click definition to change the field that defines contractor expertise, and select Skill Level. 1. Click Run to run the scenario to build the model and wait for the run to complete. 1. Return to the Explore solution page to view the Objectives and Solution assets. 1. In the left pane, select Visualization. 1. Under the Solutions tab, select Gantt to view the scenario with the optimal schedule. 1. Click Overview in the left pane to compare statistics between Scenario 1 and Scenario 2. ### {: iih} Check your progress The following image shows the Visualization page with the new Gantt chart.
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html?context=cdpaas&locale=en#video-preview)
* Task 5: Deploy the model
 To preview this task, watch the video beginning at 04:07. Next, follow these steps to promote the model to a deployment space, and create a deployment: 1. Click the Overflow menu {: iih} next to Scenario 1, and select Save for deployment. 1. In the Model name field, type House Construction{: .cp}, and click Next. 1. Review the model information, and click Save. 1. After the model is successfully saved, a notification bar displays with a link to the model. Click View in project. 1. If you miss the notification, then click the project name in the navigation trail. 1. Click the Assets tab in the project. 1. Click the House Construction model. 1. Click Promote to deployment space. 1. For the Target space, select House sample (or your deployment space) from the list. 1. Check the option to Check Go to the model in the space after deploying it. 1. Click Promote. 1. After the model is successfully promoted, the House Construction model displays in the deployment space. 1. Click New deployment. 1. For the deployment name, type House deployment{: .cp}. 1. For the Hardware definition, select 2 CPU and 8 GB RAM from the list. 1. Click Create. 1. Wait for the deployment status to change to Deployed. ### {: iih} Check your progress The following image shows the House deployment.
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html?context=cdpaas&locale=en#video-preview)
* Task 6: Test the model
 To preview this task, watch the video beginning at 04:55. To test the model with a scenario, you must upload data files from your computer to the deployment space. Follow these steps to test the model by creating a job using the CSV files included with the sample zip file: 1. Click House sample (or your deployment space) in the navigation trail to return to the deployment space. 1. Click the Assets tab. 1. In the HouseConstructionScheduling.zip file on your computer, you will find several CSV files in the .containers > Scenario 1 folder. 1. Click the Upload asset icon {: iih} to open the Data panel. 1. Drag the Subcontractor.csv, Activity.csv, and Expertise.csv files into the Drop files here or browse for files to upload area in the Data panel. 1. Click the Deployments tab. 1. Click House deployment. 1. Now to submit a job to score the model, click New job. 1. For the job name, type House construction job{: .cp}. 1. Click Next. 1. Select the default values on the Configure page, and click Next. 1. Select the default values on the Schedule page, and click Next. 1. Select the default values on the Notify page, and click Next. 1. On the Choose data page, in the Input section, select the corresponding data assets that you previously loaded into your space for each input ID. 1. In the Output section, you will provide the name for each solution table to be created. 1. For Output ID ScheduledActivities.csv, click Select data source > Create new, type ScheduledActivities.csv{: .cp} for the name, and click Confirm. 1. For Output ID NotScheduledActivities.csv, click Select data source > Create new, type NotScheduledActivities.csv{: .cp} for the name, and click Confirm. 1. For Output ID stats.csv, click Select data source > Create new, type stats.csv{: .cp} for the name, and click Confirm. 1. For Output ID kpis.csv, click Select data source > Create new, type kpis.csv{: .cp} for the name, and click Confirm. 1. For Output ID solution.json, click Select data source > Create new, type solution.json{: .cp} for the name, and click Confirm. 1. For Output ID log.txt, click Select data source > Create new, type log.txt{: .cp} for the name, and click Confirm. 1. Review the information on the Choose data page, and then click Next. 1. Review the information on the Review and create page, and then click Create and run. 1. From the House deployment model page, click the job that you created named House construction job to see its status. 1. After the job run completes, click House sample (or your deployment space) to return to the deployment space. 1. On the Assets tab, you will see the output files: - ScheduledActivities.csv - NotScheduledactivities.csv - stats.csv - kpis.csv - solution.json - log.txt 1. For each of these assets, click the Download icon, and then view each of these files. ### {: iih} Check your progress The following image shows the completed batch job.
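Once downloaded, the output assets are ordinary CSV, JSON, and text files that you can inspect locally. A minimal sketch with pandas and the standard library, assuming the files are in your working directory; print each table's columns before relying on specific names, because the layout comes from the model:

# Minimal sketch: inspect the Decision Optimization job outputs locally.
import json
import pandas as pd

# Tabular outputs produced by the job.
for name in ["ScheduledActivities.csv", "NotScheduledActivities.csv",
             "stats.csv", "kpis.csv"]:
    df = pd.read_csv(name)
    print(f"--- {name}: {len(df)} rows, columns={list(df.columns)} ---")
    print(df.head())

# The full solution is also available as JSON.
with open("solution.json") as f:
    solution = json.load(f)
if isinstance(solution, dict):
    print(sorted(solution.keys()))  # inspect the top-level structure

# The engine log is plain text; print the first few hundred characters.
with open("log.txt") as f:
    print(f.read()[:500])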
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html?context=cdpaas&locale=en#video-preview)
Next steps
Now you can use this data set for further analysis. For example, you or other users can do any of these tasks:
* [Learn to build this model from scratch with the Modeling Assistant](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/exhousebuild.html)
* [Leverage this deployed model in an end-user application by using the Watson Machine Learning REST API](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployModelRest.html)
* [Deploy Decision Optimization models using the Watson Machine Learning Python Client](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployPythonClient.html)
Additional resources
* Try these other methods to build models:
* [Build and deploy a machine learning model with AutoAI](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build.html)
* [Build and deploy a machine learning model with SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html)
* [Submit jobs by using the Watson Machine Learning API](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/Paralleljobs.html)
* [Building and running Decision Optimization Experiments](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/buildingmodels.html)
* [Deploying Decision Optimization models with UI](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployModelUI-WML.html)
* View more [videos](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/videos-wx.html).
* Find sample data sets, projects, models, prompts, and notebooks in the Samples to gain hands-on experience:
[Notebooks](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=notebook) that you can add to your project to get started analyzing data and building models.
[Projects](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=project-template) that you can import containing notebooks, data sets, prompts, and other assets.
[Data sets](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=dataset) that you can add to your project to refine, analyze, and build models.
[Prompts](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=example-prompt) that you can use in the Prompt Lab to prompt a foundation model.
[Foundation models](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=foundation-model) that you can use in the Prompt Lab.
* Contribute to the [Decision Optimization community](https://ibm.biz/decision-optimization-community)
Parent topic:[Quick start tutorials](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html)
| # Quick start: Build, run, and deploy a Decision Optimization model #
You can build and run Decision Optimization models to help you make the best decisions to solve business problems based on your objectives\. Read about Decision Optimization, then watch a video and take a tutorial that’s suitable for users with some knowledge of prescriptive analytics, but does not require coding\.
Your basic workflow includes these tasks:
<!-- <ol> -->
1. Open your sandbox project\. Projects are where you can collaborate with others to work with data\.
2. Add a Decision Optimization Experiment to the project\. You can add compressed files or data from sample files\.
3. Associate a Watson Machine Learning Service with the project\.
4. Create a deployment space to associate with the project's Watson Machine Learning Service\.
5. Review the data, model objectives, and constraints in the Modeling Assistant\.
6. Run one or more scenarios to test your model and review the results\.
7. Deploy your model\.
<!-- </ol> -->
## Read about Decision Optimization ##
Decision Optimization can analyze data and create an optimization model (with the Modeling Assistant) based on a business problem\. First, an optimization model is derived by converting a business problem into a mathematical formulation that can be understood by the optimization engine\. The formulation consists of objectives and constraints that define the model that the final decision is based on\. The model, together with your input data, forms a scenario\. The optimization engine solves the scenario by applying the objectives and constraints to limit millions of possibilities and provides the best solution\. This solution satisfies the model formulation or relaxes certain constraints if the model is infeasible\. You can test scenarios using different data, or by modifying the objectives and constraints and re\-running them and viewing solutions\. Once satisfied you can deploy your model\.
[Read more about Decision Optimization](https://dataplatform.cloud.ibm.com/docs/content/DO/DOWS-Cloud_home.html)
## Watch a video about creating a Decision Optimization model ##
 Watch this video to see how to run a sample Decision Optimization experiment to create, solve, and deploy a Decision Optimization model with Watson Studio and Watson Machine Learning\.
Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform\. The user interface is frequently improved\.
This video provides a visual method to learn the concepts and tasks in this documentation\.
## Try a tutorial to create a model that uses Decision Optimization ##
In this tutorial, you will complete these tasks:
<!-- <ul> -->
* [Task 1: Open a project\.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html?context=cdpaas&locale=en#step01)
* [Task 2: Create a Decision Optimization experiment in the project\.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html?context=cdpaas&locale=en#step02)
* [Task 3: Build a model and visualize a scenario result\.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html?context=cdpaas&locale=en#step03)
* [Task 4: Change model objectives and constraints\.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html?context=cdpaas&locale=en#step04)
* [Task 5: Deploy the model\.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html?context=cdpaas&locale=en#step05)
* [Task 6: Test the model\.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html?context=cdpaas&locale=en#step06)
<!-- </ul> -->
This tutorial will take approximately 30 minutes to complete\.
Expand all sections
<!-- <ul> -->
* Tips for completing this tutorial
\#\#\# Use the video picture-in-picture Tip: Start the video, then as you scroll through the tutorial, the video moves to picture-in-picture mode. Close the video table of contents for the best experience with picture-in-picture. You can use picture-in-picture mode so you can follow the video as you complete the tasks in this tutorial. Click the timestamps for each task to follow along.The following animated image shows how to use the video picture-in-picture and table of contents features: \{: width="560px" height="315px" data-tearsheet="this"\} \#\#\# Get help in the community If you need help with this tutorial, you can ask a question or find an answer in the [Cloud Pak for Data Community discussion forum](https://community.ibm.com/community/user/cloudpakfordata/communities/community-home/digestviewer?communitykey=c0c16ff2-10ef-4b50-ae4c-57d769937235)\{: new\_window\}. \#\#\# Set up your browser windows For the optimal experience completing this tutorial, open Cloud Pak for Data in one browser window, and keep this tutorial page open in another browser window to switch easily between the two applications. Consider arranging the two browser windows side-by-side to make it easier to follow along. \{: width="560px" height="315px" data-tearsheet="this"\} Tip: If you encounter a guided tour while completing this tutorial in the user interface, click **Maybe later**.
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html?context=cdpaas&locale=en#video-preview)
<!-- </ul> -->
<!-- <ul> -->
* Task 1: Open a project
You need a project to store the data and the AutoAI experiment. You can use your sandbox project or create a project. 1. From the navigation menu \{: iih\}, choose **Projects > View all projects** 1. Open your sandbox project. If you want to use a new project: 1. Click **New project**. 1. Select **Create an empty project**. 1. Enter a name and optional description for the project. 1. Choose an existing [object storage service instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/storage-options.html)\{: new\_window\} or create a new one. 1. Click **Create**. 1. When the project opens, click the **Manage** tab and select the **Services and integrations** page. 1. On the *IBM services* tab, click **Associate service**. 1. Select your Watson Machine Learning instance. If you don't have a Watson Machine Learning service instance provisioned yet, follow these steps: 1. Click **New service**. 1. Select **Watson Machine Learning**. 1. Click **Create**. 1. Select the new service instance from the list. 1. Click **Associate service**. 1. If necessary, click **Cancel** to return to the *Services & Integrations* page. For more information, see [Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html)\{: new\_window\}. \#\#\# \{: iih\} Check your progress The following image shows the new project.
\{: width="100%" \}
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html?context=cdpaas&locale=en#video-preview)
<!-- </ul> -->
<!-- <ul> -->
* Task 2: Create a Decision Optimization experiment
 To preview this task, watch the video beginning at 00:20. Now, follow these steps to create the Decision Optimization experiment in your project: 1. From your new project, click **New asset > Solve optimization problems**. 1. Select **Local file**. 1. Click **Get sample files** to view the GitHub repository containing the sample files. 1. In the *DO-Samples* repository, open the **watsonx.ai and Cloud Pak for Data as a Service** folder. 1. Click the `HouseConstructionScheduling.zip` file containing the house construction sample files. 1. Click **Download** to save the zip file to your computer. 1. Return to the *Create a Decision Optimization experiment* page, and click **Browse**. 1. Select the `HouseConstructionScheduling.zip` file from your computer. 1. Click **Open**. 1. If you don't already have a Watson Machine Learning service associated with this project, click **Add a Machine Learning service**. 1. Review your Watson Machine Learning service instances. You can use an existing service, or create a new service instance from here: click **New service**, select **Machine Learning**, and click **Create**. 1. Select your **Watson Machine Learning** instance from the list, and click **Associate**. 1. If necessary, click **Cancel** to return to the *Services & integrations* page. For more information on associated services, see [Adding associated services](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assoc-services.html)\{: new\_window\}. 1. Choose a deployment space to associate with this experiment. If you do not have an existing deployment space, create one: 1. In the *Select deployment space* section, click **New deployment space**. 1. In the *Name* field, type `House sample`\{: .cp\} to provide a name for the deployment space. 1. Click **Create**. 1. When the space is ready, and click **Close** to return to the *Create a Decision Optimization experiment* page. Your new deployment space is selected. 1. Click **Create** to open the Decision Optimization experiment. \#\#\# \{: iih\} Check your progress The following image shows the experiment with the sample files.
\{: width="100%" \}
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html?context=cdpaas&locale=en#video-preview)
<!-- </ul> -->
<!-- <ul> -->
* Task 3: Build a model and visualize a scenario result
 To preview this task, watch the video beginning at 01:47. Follow these steps to build a model and visualize the result using the Decision Optimization Modeling Assistant. 1. In the left pane, click **Build model** to open the Modeling Assistant. This model was built with the Modeling Assistant so you can see that the objectives and constraints are in natural language, but you can also formulate your model in Python, OPL or import CPLEX and CPO models. 1. Click **Run** to run the scenario to solve the model and wait for the run to complete. 1. When the run completes, the *Explore solution* view displays. Under the *Results* tab, click **Solution assets** to see the resulting (best) values for the decision variables. These solution tables are displayed in alphabetical order by default. 1. In the left pane, select **Visualization**. 1. Under the *Solutions* tab, select **Gantt** to view the scenario with the optimal schedule. \#\#\# \{: iih\} Check your progress The following image shows the Visualization page with a Gantt chart.
\{: width="100%" \}
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html?context=cdpaas&locale=en#video-preview)
<!-- </ul> -->
<!-- <ul> -->
* Task 4: Change model objectives and constraints
 To preview this task, watch the video beginning at 03:01. Now, you want to make a change to your model formulation to consider an additional objective. Follow these steps to change the model objectives and constraints: 1. Click **Build model**. 1. In the left pane, click the **Overflow** menu \{: iih\} next to *Scenario 1*, and select **Duplicate**. 1. For the name, type `Scenario 2`\{: .cp\}, and click **Create**. 1. For *Scenario 2*, add an objective to the model to optimize the quality of work based on the expertise of each contractor. 1. Under *Add to model*, in the search field, type `overall quality`\{: .cp\}, and press `Enter`. 1. Expand the **Objective** section. 1. Click **Maximize overall quality of Subcontractor-Activity assignments according to table of assignment values** to add it as an objective. This new objective is now listed under the *Objectives* section along with the *Minimize time to complete all Activities* objective. 1. For the objective that you just added, click **table of assignment values**, and select **Expertise**. A list of *Expertise* parameters displays. 1. From this list, click **definition** to change the field that defines contractor expertise, and select **Skill Level**. 1. Click **Run** to run the scenario to build the model and wait for the run to complete. 1. Return to the *Explore solution* page to view the **Objectives** and **Solution assets**. 1. In the left pane, select **Visualization**. 1. Under the *Solutions* tab, select **Gantt** to view the scenario with the optimal schedule. 1. Click **Overview** in the left pane to compare statistics between *Scenario 1* and *Scenario 2*. \#\#\# \{: iih\} Check your progress The following image shows the Visualization page with the new Gantt chart.
\{: width="100%" \}
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html?context=cdpaas&locale=en#video-preview)
<!-- </ul> -->
<!-- <ul> -->
* Task 5: Deploy the model
To preview this task, watch the video beginning at 04:07. Next, follow these steps to promote the model to a deployment space, and create a deployment: 1. Click the **Overflow** menu \{: iih\} next to *Scenario 1*, and select **Save for deployment**. 1. In the *Model name* field, type `House Construction`\{: .cp\}, and click **Next**. 1. Review the model information, and click **Save**. 1. After the model is successfully saved, a notification bar displays with a link to the model. Click **View in project**. 1. If you miss the notification, then click the project name in the navigation trail. 1. Click the **Assets** tab in the project. 1. Click the **House Construction** model. 1. Click **Promote to deployment space**. 1. For the *Target space*, select **House sample** (or your deployment space) from the list. 1. Check the option to **Go to the model in the space after deploying it**. 1. Click **Promote**. 1. After the model is successfully promoted, the *House Construction* model displays in the deployment space. 1. Click **New deployment**. 1. For the deployment name, type `House deployment`\{: .cp\}. 1. For the *Hardware definition*, select **2 CPU and 8 GB RAM** from the list. 1. Click **Create**. 1. Wait for the deployment status to change to *Deployed*. \#\#\# \{: iih\} Check your progress The following image shows the House deployment.
\{: width="100%" \}
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html?context=cdpaas&locale=en#video-preview)
<!-- </ul> -->
<!-- <ul> -->
* Task 6: Test a model
To preview this task, watch the video beginning at 04:55. To test the model with a scenario, you must upload data files from your computer to the deployment space. Follow these steps to test the model by creating a job using the CSV files included with the sample zip file: 1. Click **House sample** (or your deployment space) in the navigation trail to return to the deployment space. 1. Click the **Assets** tab. 1. In the `HouseConstructionScheduling.zip` file on your computer, you will find several CSV files in the *.containers > Scenario 1* folder. 1. Click the **Upload asset** icon \{: iih\} to open the *Data* panel. 1. Drag the `Subcontractor.csv`, `Activity.csv`, and `Expertise.csv` files into the *Drop files here or browse for files to upload* area in the *Data* panel. 1. Click the **Deployments** tab. 1. Click **House deployment**. 1. To submit a job to score the model, click **New job**. 1. For the job name, type `House construction job`\{: .cp\}. 1. Click **Next**. 1. Select the default values on the *Configure* page, and click **Next**. 1. Select the default values on the *Schedule* page, and click **Next**. 1. Select the default values on the *Notify* page, and click **Next**. 1. On the *Choose data* page, in the *Input* section, select the corresponding data assets that you previously loaded into your space for each input ID. 1. In the *Output* section, provide a name for each solution table to be created. 1. For *Output ID ScheduledActivities.csv*, click **Select data source > Create new**, type `ScheduledActivities.csv`\{: .cp\} for the name, and click **Confirm**. 1. For *Output ID NotScheduledActivities.csv*, click **Select data source > Create new**, type `NotScheduledActivities.csv`\{: .cp\} for the name, and click **Confirm**. 1. For *Output ID stats.csv*, click **Select data source > Create new**, type `stats.csv`\{: .cp\} for the name, and click **Confirm**. 1. For *Output ID kpis.csv*, click **Select data source > Create new**, type `kpis.csv`\{: .cp\} for the name, and click **Confirm**. 1. For *Output ID solution.json*, click **Select data source > Create new**, type `solution.json`\{: .cp\} for the name, and click **Confirm**. 1. For *Output ID log.txt*, click **Select data source > Create new**, type `log.txt`\{: .cp\} for the name, and click **Confirm**. 1. Review the information on the *Choose data* page, and then click **Next**. 1. Review the information on the *Review and create* page, and then click **Create and run**. 1. From the *House deployment* model page, click the job that you created named *House construction job* to see its status. 1. After the job run completes, click **House sample** (or your deployment space) to return to the deployment space. 1. On the *Assets* tab, you will see the output files: - ScheduledActivities.csv - NotScheduledActivities.csv - stats.csv - kpis.csv - solution.json - log.txt 1. For each of these assets, click the **Download** icon, and then view the files. \#\#\# \{: iih\} Check your progress The following image shows the completed batch job.
\{: width="100%" \}
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html?context=cdpaas&locale=en#video-preview)
<!-- </ul> -->
## Next steps ##
Now you can use this model for further analysis\. For example, you or other users can do any of these tasks:
<!-- <ul> -->
* [Learn to build this model from scratch with the Modeling Assistant](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/exhousebuild.html)
* [Leverage this deployed model in an end user application using the Watson Machine Learning REST API](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployModelRest.html)
* [Deploy Decision Optimization models using the Watson Machine Learning Python Client](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployPythonClient.html) (a minimal sketch follows this list)
<!-- </ul> -->
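For a sense of the Python client route, the following is a minimal sketch that submits a solve job to the deployed model. The credentials, space ID, and deployment ID are placeholder assumptions, and the endpoint URL is region-specific; see the linked topics above for the authoritative steps.

```python
import pandas as pd
from ibm_watson_machine_learning import APIClient

# Placeholder credentials -- substitute your own API key and regional endpoint
client = APIClient({
    "url": "https://us-south.ml.cloud.ibm.com",
    "apikey": "YOUR_IBM_CLOUD_API_KEY",
})
client.set.default_space("YOUR_DEPLOYMENT_SPACE_ID")  # for example, the House sample space

# Send the input tables inline and request every CSV solution table as output
solve_payload = {
    client.deployments.DecisionOptimizationMetaNames.INPUT_DATA: [
        {"id": "Activity.csv", "values": pd.read_csv("Activity.csv")},
        {"id": "Subcontractor.csv", "values": pd.read_csv("Subcontractor.csv")},
        {"id": "Expertise.csv", "values": pd.read_csv("Expertise.csv")},
    ],
    client.deployments.DecisionOptimizationMetaNames.OUTPUT_DATA: [
        {"id": ".*\\.csv"}  # regular expression that matches all CSV outputs
    ],
}

job = client.deployments.create_job("YOUR_DEPLOYMENT_ID", solve_payload)

# Poll the job details until the solve completes, then read the solution tables
details = client.deployments.get_job_details(job["metadata"]["id"])
```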
## Additional resources ##
<!-- <ul> -->
* Try these other methods to build models:
<!-- <ul> -->
* [Build and deploy a machine learning model with AutoAI](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build.html)
* [Build and deploy a machine learning model with SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html)
* [Build and deploy a Decision Optimization model](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html)
* [Submit jobs by using the Watson Machine Learning API](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/Paralleljobs.html)
<!-- </ul> -->
* [Building and running Decision Optimization Experiments](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/buildingmodels.html)
* [Deploying Decision Optimization models with UI](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployModelUI-WML.html)
* View more [videos](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/videos-wx.html)\.
* Find sample data sets, projects, models, prompts, and notebooks in the Samples to gain hands\-on experience:
[Notebooks](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=notebook) that you can add to your project to get started analyzing data and building models.
[Projects](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=project-template) that you can import containing notebooks, data sets, prompts, and other assets.
[Data sets](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=dataset) that you can add to your project to refine, analyze, and build models.
[Prompts](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=example-prompt) that you can use in the Prompt Lab to prompt a foundation model.
[Foundation models](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=foundation-model) that you can use in the Prompt Lab.
<!-- </ul> -->
<!-- <ul> -->
* Contribute to the [Decision Optimization community](https://ibm.biz/decision-optimization-community)
<!-- </ul> -->
**Parent topic:**[Quick start tutorials](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html)
<!-- </article "role="article" "> -->
|
7FEB0313C4AA5133F215A847F2ABAA025E83BB38 | https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=en | Quick start: Evaluate and track a prompt template | Quick start: Evaluate and track a prompt template
Take this tutorial to learn how to evaluate and track a prompt template. You can evaluate prompt templates in projects or deployment spaces to measure the performance of foundation model tasks and understand how your model generates responses. Then, you can track the prompt template in an AI use case to capture and share facts about the asset to help you meet governance and compliance goals.
Required services : watsonx.governance
Your basic workflow includes these tasks:
1. Open a project that contains the prompt template to evaluate. Projects are where you can collaborate with others to work with assets.
2. Evaluate a prompt template using test data.
3. Review the results on the AI Factsheet.
4. Track the evaluated prompt template in an AI use case.
5. Deploy and test your evaluated prompt template.
Read about prompt templates
With watsonx.governance, you can evaluate prompt templates in projects to measure how effectively your foundation models generate responses for the following task types:
* Classification
* Summarization
* Generation
* Question answering
* Entity extraction
[Read more about evaluating prompt templates in projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-eval-prompt.html)
[Read more about evaluating prompt templates in deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-eval-prompt-spaces.html)
Watch a video about evaluating and tracking a prompt template
 Watch this video to preview the steps in this tutorial. There might be slight differences in the user interface shown in the video. The video is intended to be a companion to the written tutorial.
This video provides a visual method to learn the concepts and tasks in this documentation.
Try a tutorial to evaluate and track a prompt template
In this tutorial, you will complete these tasks:
* [Task 1: Create a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=enstep01)
* [Task 2: Evaluate the sample prompt template](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=enstep02)
* [Task 3: Create a model inventory and AI use case](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=enstep03)
* [Task 4: Start tracking the prompt template](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=enstep04)
* [Task 5: Create a new project for validation](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=enstep05)
* [Task 6: Validate the prompt template](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=enstep06)
* [Task 7: Deploy the prompt template](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=enstep07)
Expand all sections
* Tips for completing this tutorial
### Use the video picture-in-picture Tip: Start the video, then as you scroll through the tutorial, the video moves to picture-in-picture mode. Close the video table of contents for the best experience with picture-in-picture. You can use picture-in-picture mode so you can follow the video as you complete the tasks in this tutorial. Click the timestamps for each task to follow along.The following animated image shows how to use the video picture-in-picture and table of contents features: {: width="560px" height="315px" data-tearsheet="this"} ### Get help in the community If you need help with this tutorial, you can ask a question or find an answer in the [Cloud Pak for Data Community discussion forum](https://community.ibm.com/community/user/cloudpakfordata/communities/community-home/digestviewer?communitykey=c0c16ff2-10ef-4b50-ae4c-57d769937235){: new_window}. ### Set up your browser windows For the optimal experience completing this tutorial, open Cloud Pak for Data in one browser window, and keep this tutorial page open in another browser window to switch easily between the two applications. Consider arranging the two browser windows side-by-side to make it easier to follow along. {: width="560px" height="315px" data-tearsheet="this"} Tip: If you encounter a guided tour while completing this tutorial in the user interface, click Maybe later.
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=envideo-preview)
* Task 1: Create a project
To preview this task, watch the video beginning at 00:08. You need a project to store the prompt template and the evaluation. Follow these steps to create a project based on a sample: 1. Access the [Getting started with watsonx governance](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/1b6c8d6e-a45c-4bf1-84ee-8fe9a6daa56d){: external} project in the Samples. 1. Click Create project. 1. Accept the default values for the project name, and click Create. 1. Click View new project when the project is successfully created. 1. Associate a Watson Machine Learning service with the project: 1. When the project opens, click the Manage tab, and select the Services and integrations page. 1. On the IBM services tab, click Associate service. 1. Select your Watson Machine Learning instance. If you don't have a Watson Machine Learning service instance provisioned yet, follow these steps: 1. Click New service. 1. Select Watson Machine Learning. 1. Click Create. 1. Select the new service instance from the list. 1. Click Associate service. 1. If necessary, click Cancel to return to the Services & Integrations page. 1. Click the Assets tab in the project to see the sample assets. For more information or to watch a video, see [Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html){: new_window}. For more information on associated services, see [Adding associated services](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assoc-services.html){: new_window}. ### {: iih} Check your progress The following image shows the project Assets tab. You are now ready to evaluate the sample prompt template in the project.
{: width="100%" }
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=envideo-preview)
* Task 2: Evaluate the sample prompt template
To preview this task, watch the video beginning at 00:36. The sample project contains a few prompt templates and CSV files used as test data. Follow these steps to download the test data and evaluate one of the sample prompt templates: 1. On the project's Assets tab, click the Overflow menu {: iih} next to the Insurance claim summarization test data.csv file, and select Download to save the test data to your computer. 1. Click Insurance claim summarization to open the prompt template in Prompt Lab. 1. Click the Prompt variables icon {: iih}. Note: To run evaluations, you must create at least one prompt variable. 1. Scroll to the Try section. Notice the {input} variable in the Input field. You must include the prompt variable as input for testing your prompt. 1. Click Evaluate. 1. Expand the Generative AI Quality section to see a list of dimensions. The available metrics depend on the task type of the prompt. For example, summarization has different metrics than classification. 1. Click Next. 1. Select the test data: 1. Click Browse. 1. Select the Insurance claim summarization test data.csv file. 1. Click Open. 1. For the Input column, select Insurance_Claim. 1. For the Reference output column, select Summary. 1. Click Next. 1. Click Evaluate. When the evaluation completes, you see the test results on the Evaluate tab. 1. Click the AI Factsheet tab. 1. View the information on each of the sections on the tab. 1. Click Evaluation > Develop > Test to see the test results again. ### {: iih} Check your progress The following image shows the results of the evaluation. Now you can start tracking the prompt template in an AI use case.
{: width="100%" }
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=envideo-preview)
* Task 3: Create a model inventory and AI use case
To preview this task, watch the video beginning at 01:54. You use a model inventory for storing and reviewing AI use cases. AI use cases collect governance facts for AI assets that your organization tracks. You can view all the AI use cases in an inventory. Follow these steps to create a model inventory and AI use case: ### Create a model inventory 1. From the navigation menu {: iih}, choose AI governance > AI use cases. 1. Manage your inventories: - If you have an existing inventory, then skip to [Create a new AI use case](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=ennew-ai-use-case) and use that inventory. - If you don't have any inventories, then click Manage inventories. 1. Click New inventory. 1. For the name, copy and paste the following text: Golden Bank Insurance Inventory 1. For the description, copy and paste the following text: Model inventory for insurance related processing 1. Clear the Add collaborators after creation option. 1. Select your Cloud Object Storage instance from the list. 1. Click Create. 1. Close the Manage inventories page. ### {: iih} Check your progress The following image shows the model inventory. You are now ready to create an AI use case.
{: width="100%" } ### Create an AI use case 1. Click New AI use case. 1. For the Name, copy and paste the following text: Insurance claims processing AI use case 1. Select an existing model inventory. 1. Click Create to accept the default values for the rest of the fields. ### {: iih} Check your progress The following image shows the AI use case. You are now ready to track the prompt template.
{: width="100%" }
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=envideo-preview)
* Task 4: Start tracking the prompt template
To preview this task, watch the video beginning at 02:33. You can track your prompt template in an AI use case to report the development and test process to your peers. Follow these steps to start tracking the prompt template: 1. From the navigation menu {: iih}, choose Projects > View all projects. 1. Select the Getting started with watsonx governance project. 1. Click the Assets tab. 1. From the Overflow menu {: iih} for the Claims processing summarization prompt template, select View AI Factsheet. 1. On the AI Factsheet tab, click the Governance page. 1. Click Track an AI use case. 1. Select the Insurance claims processing AI use case. 1. Select LLM Prompt Engineering for the approach. 1. Click Next. 1. For the model version, select Experimental. 1. Accept the default value for the version number. 1. Click Next. 1. Click Track asset. 1. Click the View details icon {: iih} to open the AI use case. 1. Click the Lifecycle tab to see the prompt template in the Develop phase. ### {: iih} Check your progress The following image shows the Lifecycle tab in the AI use case with the prompt template in the Develop phase. You are now ready to continue to the Validate phase.
{: width="100%" }
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=envideo-preview)
* Task 5: Create a new project for validation
To preview this task, watch the video beginning at 03:22. Typically, the prompt engineer evaluates the prompt with test data, and the validation engineer validates the prompt. The validation engineer has access to the validation data that prompt engineers might not have. In this case, the validation data resides in a different project. Follow these steps to export the development project and import it as a new validation project to move the asset into the validation phase of the AI lifecycle: 1. From the navigation menu {: iih}, choose Projects > View all projects. 1. Select the Getting started with watsonx governance project. 1. Click the Import/Export icon {: iih} > Export project. 1. Check the box to select all assets. 1. Click Export. 1. For the file name, copy and paste the following text, and then click Save. validation project.zip 1. From the navigation menu {: iih}, choose Projects > View all projects. 1. Click New project. 1. Select Create a project from a sample or file. 1. Click Browse. 1. Select the validation project.zip file, and click Open. 1. For the project name, copy and paste the following text: Validation project 1. Click Create. 1. When the project is created, click View new project. 1. Follow the same steps as in [Step 1](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=enstep01) to associate your Watson Machine Learning service with this project. ### {: iih} Check your progress The following image shows the validation project Assets tab. You are now ready to evaluate the sample prompt template in the validation project.
{: width="100%" }
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=envideo-preview)
* Task 6: Validate the prompt template
To preview this task, watch the video beginning at 04:18. Now you are ready to evaluate the prompt template in this validation project using the same evaluation process as before. Use the same test data set, and select the same Input and Output columns as before. Follow these steps to validate the prompt template: 1. Click the Assets tab in the Validation project. 1. Repeat the steps in [Task 2](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=enstep02) to evaluate the Claims processing summarization prompt template. 1. Click the AI Factsheet tab when the evaluation is complete. 1. View both sets of test results: 1. Click Evaluation > Develop > Test. 1. Click Evaluation > Validate > Test. ### {: iih} Check your progress The following image shows the validation test results. You are now ready to promote the prompt template to a deployment space, and then deploy the prompt template.
{: width="100%" }
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=envideo-preview)
* Task 7: Deploy the prompt template
To preview this task, watch the video beginning at 05:00. ### Promote the prompt template to a deployment space You promote the prompt template to a deployment space in preparation for deploying it. Follow these steps to promote the prompt template: 1. Click Validation project in the projects navigation trail. 1. From the Overflow menu {: iih} for the Claims processing summarization prompt template, select Promote to space. 1. For the Target space, select Create a new deployment space. 1. For the Space name, copy and paste the following text: Insurance claims deployment space 1. For the Deployment stage, select Production. 1. Select your machine learning service from the list. 1. Click Create. 1. Click Close. 1. Select the Insurance claims deployment space from the list. 1. Check the option to Go to the space after promoting the prompt template. 1. Click Promote. ### {: iih} Check your progress The following image shows the prompt template in the deployment space. You are now ready to create a deployment.
{: width="100%" } ### Deploy the prompt template Now you can deploy the prompt template from inside the deployment space. Follow these steps to create a deployment: 1. From the Overflow menu {: iih} for the Insurance claims summarization prompt template, select Deploy. 1. For the deployment name, copy and paste the following text: Insurance claims summarization deployment 1. Click Create. ### {: iih} Check your progress The following image shows the deployed prompt template.
{: width="100%" } ### View the deployed prompt template Follow these steps to view the deployed prompt template in its current phase of the lifecycle: 1. View the deployment when it is ready. The API reference tab provides information for you to use the prompt template deployment in your application. 1. Click the Test tab. The Test tab allows you to submit an instruction and input to test the deployment. 1. Click Generate. 1. Click the AI Factsheet tab. The AI Factsheet shows that the prompt template is now in the operate phase. 1. Scroll down, and click the arrow for more details. 1. Select the Evaluation > Operate > Deployment 1 page. 1. Click the View details icon {: iih} to open the AI use case. 1. Click the Lifecycle tab. 1. Click the Insurance claim summarization prompt template in the Operate phase. When you are done, click Close. 1. Click the Insurance claims summarization deployment prompt template deployment in the Operate phase. ### {: iih} Check your progress The following image shows the prompt template in the Operate phase of the lifecycle.
{: width="100%" }
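For reference, calling a deployed prompt template from an application typically looks like the following sketch. The deployment ID, region, IAM token, and version date shown here are placeholder assumptions; copy the exact endpoint and payload from the deployment's API reference tab.

```python
import requests

# Hypothetical values -- take the real endpoint from the deployment's API reference tab
iam_token = "YOUR_IAM_ACCESS_TOKEN"  # generated from your IBM Cloud API key
url = ("https://us-south.ml.cloud.ibm.com/ml/v1/deployments/"
       "YOUR_DEPLOYMENT_ID/text/generation?version=2023-05-29")

# Fill the prompt template's variables; this sample template has one {input} variable
payload = {
    "parameters": {
        "prompt_variables": {"input": "Text of the insurance claim to summarize..."}
    }
}

response = requests.post(
    url,
    headers={"Authorization": f"Bearer {iam_token}", "Content-Type": "application/json"},
    json=payload,
)
print(response.json()["results"][0]["generated_text"])
```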
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=envideo-preview)
Next steps
You are now ready to try the [Prompt a foundation model with the retrieval-augmented generation pattern tutorial](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html).
Additional resources
* View more [videos](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/videos-wx.html).
* Find sample data sets, projects, models, prompts, and notebooks in the Samples to gain hands-on experience:
[Notebooks](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=notebook) that you can add to your project to get started analyzing data and building models.
[Projects](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=project-template) that you can import containing notebooks, data sets, prompts, and other assets.
[Data sets](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=dataset) that you can add to your project to refine, analyze, and build models.
[Prompts](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=example-prompt) that you can use in the Prompt Lab to prompt a foundation model.
[Foundation models](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=foundation-model) that you can use in the Prompt Lab.
Parent topic:[Quick start tutorials](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html)
| # Quick start: Evaluate and track a prompt template #
Take this tutorial to learn how to evaluate and track a prompt template\. You can evaluate prompt templates in projects or deployment spaces to measure the performance of foundation model tasks and understand how your model generates responses\. Then, you can track the prompt template in an AI use case to capture and share facts about the asset to help you meet governance and compliance goals\.
**Required services** : watsonx\.governance
Your basic workflow includes these tasks:
<!-- <ol> -->
1. Open a project that contains the prompt template to evaluate\. Projects are where you can collaborate with others to work with assets\.
2. Evaluate a prompt template using test data\.
3. Review the results on the AI Factsheet\.
4. Track the evaluated prompt template in an AI use case\.
5. Deploy and test your evaluated prompt template\.
<!-- </ol> -->
## Read about prompt templates ##
With watsonx\.governance, you can evaluate prompt templates in projects to measure how effectively your foundation models generate responses for the following task types:
<!-- <ul> -->
* Classification
* Summarization
* Generation
* Question answering
* Entity extraction
<!-- </ul> -->
[Read more about evaluating prompt templates in projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-eval-prompt.html)
[Read more about evaluating prompt templates in deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-eval-prompt-spaces.html)
## Watch a video about evaluating and tracking a prompt template ##
 Watch this video to preview the steps in this tutorial\. There might be slight differences in the user interface shown in the video\. The video is intended to be a companion to the written tutorial\.
This video provides a visual method to learn the concepts and tasks in this documentation\.
## Try a tutorial to evaluate and track a prompt template ##
In this tutorial, you will complete these tasks:
<!-- <ul> -->
* [Task 1: Create a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=en#step01)
* [Task 2: Evaluate the sample prompt template](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=en#step02)
* [Task 3: Create a model inventory and AI use case](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=en#step03)
* [Task 4: Start tracking the prompt template](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=en#step04)
* [Task 5: Create a new project for validation](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=en#step05)
* [Task 6: Validate the prompt template](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=en#step06)
* [Task 7: Deploy the prompt template](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=en#step07)
<!-- </ul> -->
Expand all sections
<!-- <ul> -->
* Tips for completing this tutorial
\#\#\# Use the video picture-in-picture Tip: Start the video, then as you scroll through the tutorial, the video moves to picture-in-picture mode. Close the video table of contents for the best experience with picture-in-picture. You can use picture-in-picture mode so you can follow the video as you complete the tasks in this tutorial. Click the timestamps for each task to follow along.The following animated image shows how to use the video picture-in-picture and table of contents features: \{: width="560px" height="315px" data-tearsheet="this"\} \#\#\# Get help in the community If you need help with this tutorial, you can ask a question or find an answer in the [Cloud Pak for Data Community discussion forum](https://community.ibm.com/community/user/cloudpakfordata/communities/community-home/digestviewer?communitykey=c0c16ff2-10ef-4b50-ae4c-57d769937235)\{: new\_window\}. \#\#\# Set up your browser windows For the optimal experience completing this tutorial, open Cloud Pak for Data in one browser window, and keep this tutorial page open in another browser window to switch easily between the two applications. Consider arranging the two browser windows side-by-side to make it easier to follow along. \{: width="560px" height="315px" data-tearsheet="this"\} Tip: If you encounter a guided tour while completing this tutorial in the user interface, click **Maybe later**.
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=en#video-preview)
<!-- </ul> -->
<!-- <ul> -->
* Task 1: Create a project
To preview this task, watch the video beginning at 00:08. You need a project to store the prompt template and the evaluation. Follow these steps to create a project based on a sample: 1. Access the [Getting started with watsonx governance](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/1b6c8d6e-a45c-4bf1-84ee-8fe9a6daa56d)\{: external\} project in the Samples. 1. Click **Create project**. 1. Accept the default values for the project name, and click **Create**. 1. Click **View new project** when the project is successfully created. 1. Associate a Watson Machine Learning service with the project: 1. When the project opens, click the **Manage** tab, and select the **Services and integrations** page. 1. On the *IBM services* tab, click **Associate service**. 1. Select your Watson Machine Learning instance. If you don't have a Watson Machine Learning service instance provisioned yet, follow these steps: 1. Click **New service**. 1. Select **Watson Machine Learning**. 1. Click **Create**. 1. Select the new service instance from the list. 1. Click **Associate service**. 1. If necessary, click **Cancel** to return to the *Services & Integrations* page. 1. Click the **Assets** tab in the project to see the sample assets. For more information or to watch a video, see [Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html)\{: new\_window\}. For more information on associated services, see [Adding associated services](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assoc-services.html)\{: new\_window\}. \#\#\# \{: iih\} Check your progress The following image shows the project Assets tab. You are now ready to evaluate the sample prompt template in the project.
\{: width="100%" \}
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=en#video-preview)
<!-- </ul> -->
<!-- <ul> -->
* Task 2: Evaluate the sample prompt template
To preview this task, watch the video beginning at 00:36. The sample project contains a few prompt templates and CSV files used as test data. Follow these steps to download the test data and evaluate one of the sample prompt templates: 1. On the project's *Assets* tab, click the **Overflow** menu \{: iih\} next to the *Insurance claim summarization test data.csv* file, and select **Download** to save the test data to your computer. 1. Click **Insurance claim summarization** to open the prompt template in Prompt Lab. 1. Click the **Prompt variables** icon \{: iih\}. Note: To run evaluations, you must create at least one prompt variable. 1. Scroll to the *Try* section. Notice the `{input}` variable in the *Input* field. You must include the prompt variable as input for testing your prompt. 1. Click **Evaluate**. 1. Expand the **Generative AI Quality** section to see a list of dimensions. The available metrics depend on the task type of the prompt. For example, summarization has different metrics than classification. 1. Click **Next**. 1. Select the test data: 1. Click **Browse**. 1. Select the **Insurance claim summarization test data.csv** file. 1. Click **Open**. 1. For the *Input column*, select **Insurance_Claim**. 1. For the *Reference output column*, select **Summary**. 1. Click **Next**. 1. Click **Evaluate**. When the evaluation completes, you see the test results on the *Evaluate* tab. 1. Click the **AI Factsheet** tab. 1. View the information on each of the sections on the tab. 1. Click **Evaluation > Develop > Test** to see the test results again. \#\#\# \{: iih\} Check your progress The following image shows the results of the evaluation. Now you can start tracking the prompt template in an AI use case.
\{: width="100%" \}
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=en#video-preview)
<!-- </ul> -->
<!-- <ul> -->
* Task 3: Create a model inventory and AI use case
To preview this task, watch the video beginning at 01:54. You use a model inventory for storing and reviewing AI use cases. AI use cases collect governance facts for AI assets that your organization tracks. You can view all the AI use cases in an inventory. Follow these steps to create a model inventory and AI use case: \#\#\# Create a model inventory 1. From the navigation menu \{: iih\}, choose **AI governance > AI use cases**. 1. Manage your inventories: - If you have an existing inventory, then skip to [Create a new AI use case](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=en#new-ai-use-case) and use that inventory. - If you don't have any inventories, then click **Manage inventories**. 1. Click **New inventory**. 1. For the name, copy and paste the following text: `Golden Bank Insurance Inventory` 1. For the description, copy and paste the following text: `Model inventory for insurance related processing` 1. Clear the **Add collaborators after creation** option. 1. Select your Cloud Object Storage instance from the list. 1. Click **Create**. 1. Close the *Manage inventories* page. \#\#\# \{: iih\} Check your progress The following image shows the model inventory. You are now ready to create an AI use case.
\{: width="100%" \} \#\#\# Create an AI use case 1. Click **New AI use case**. 1. For the *Name*, copy and paste the following text: `Insurance claims processing AI use case` 1. Select an existing model inventory. 1. Click **Create** to accept the default values for the rest of the fields. \#\#\# \{: iih\} Check your progress The following image shows the AI use case. You are now ready to track the prompt template.
\{: width="100%" \}
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=en#video-preview)
<!-- </ul> -->
<!-- <ul> -->
* Task 4: Start tracking the prompt template
To preview this task, watch the video beginning at 02:33. You can track your prompt template in an AI use case to report the development and test process to your peers. Follow these steps to start tracking the prompt template: 1. From the navigation menu \{: iih\}, choose **Projects > View all projects**. 1. Select the **Getting started with watsonx governance** project. 1. Click the **Assets** tab. 1. From the **Overflow** menu \{: iih\} for the *Claims processing summarization* prompt template, select **View AI Factsheet**. 1. On the *AI Factsheet* tab, click the **Governance** page. 1. Click **Track an AI use case**. 1. Select the **Insurance claims processing** AI use case. 1. Select **LLM Prompt Engineering** for the approach. 1. Click **Next**. 1. For the model version, select **Experimental**. 1. Accept the default value for the version number. 1. Click **Next**. 1. Click **Track asset**. 1. Click the **View details** icon \{: iih\} to open the AI use case. 1. Click the **Lifecycle** tab to see the prompt template in the *Develop* phase. \#\#\# \{: iih\} Check your progress The following image shows the Lifecycle tab in the AI use case with the prompt template in the Develop phase. You are now ready to continue to the Validate phase.
\{: width="100%" \}
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=en#video-preview)
<!-- </ul> -->
<!-- <ul> -->
* Task 5: Create a new project for validation
To preview this task, watch the video beginning at 03:22. Typically, the prompt engineer evaluates the prompt with test data, and the validation engineer validates the prompt. The validation engineer has access to the validation data that prompt engineers might not have. In this case, the validation data resides in a different project. Follow these steps to export the development project and import it as a new validation project to move the asset into the validation phase of the AI lifecycle: 1. From the navigation menu \{: iih\}, choose **Projects > View all projects**. 1. Select the **Getting started with watsonx governance** project. 1. Click the **Import/Export** icon \{: iih\} > **Export project**. 1. Check the box to select all assets. 1. Click **Export**. 1. For the file name, copy and paste the following text, and then click **Save**. `validation project.zip` 1. From the navigation menu \{: iih\}, choose **Projects > View all projects**. 1. Click **New project**. 1. Select **Create a project from a sample or file**. 1. Click **Browse**. 1. Select the **validation project.zip** file, and click **Open**. 1. For the project name, copy and paste the following text: `Validation project` 1. Click **Create**. 1. When the project is created, click **View new project**. 1. Follow the same steps as in [Step 1](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=en#step01) to associate your Watson Machine Learning service with this project. \#\#\# \{: iih\} Check your progress The following image shows the validation project Assets tab. You are now ready to evaluate the sample prompt template in the validation project.
\{: width="100%" \}
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=en#video-preview)
<!-- </ul> -->
<!-- <ul> -->
* Task 6: Validate the prompt template
To preview this task, watch the video beginning at 04:18. Now you are ready to evaluate the prompt template in this validation project using the same evaluation process as before. Use the same test data set, and select the same Input and Output columns as before. Follow these steps to validate the prompt template: 1. Click the **Assets** tab in the *Validation project*. 1. Repeat the steps in [Task 2](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=en#step02) to evaluate the *Claims processing summarization* prompt template. 1. Click the **AI Factsheet** tab when the evaluation is complete. 1. View both sets of test results: 1. Click **Evaluation > Develop > Test**. 1. Click **Evaluation > Validate > Test**. \#\#\# \{: iih\} Check your progress The following image shows the validation test results. You are now ready to promote the prompt template to a deployment space, and then deploy the prompt template.
\{: width="100%" \}
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=en#video-preview)
<!-- </ul> -->
<!-- <ul> -->
* Task 7: Deploy the prompt template
To preview this task, watch the video beginning at 05:00. \#\#\# Promote the prompt template to a deployment space You promote the prompt template to a deployment space in preparation for deploying it. Follow these steps to promote the prompt template: 1. Click **Validation project** in the projects navigation trail. 1. From the **Overflow** menu \{: iih\} for the *Claims processing summarization* prompt template, select **Promote to space**. 1. For the *Target space*, select **Create a new deployment space**. 1. For the *Space name*, copy and paste the following text: `Insurance claims deployment space` 1. For the *Deployment stage*, select **Production**. 1. Select your machine learning service from the list. 1. Click **Create**. 1. Click **Close**. 1. Select the **Insurance claims deployment space** from the list. 1. Check the option to **Go to the space after promoting the prompt template**. 1. Click **Promote**. \#\#\# \{: iih\} Check your progress The following image shows the prompt template in the deployment space. You are now ready to create a deployment.
\{: width="100%" \} \#\#\# Deploy the prompt template Now you can deploy the prompt template from inside the deployment space. Follow these steps to create a deployment: 1. From the **Overflow** menu \{: iih\} for the *Insurance claims summarization* prompt template, select **Deploy**. 1. For the deployment name, copy and paste the following text: `Insurance claims summarization deployment` 1. Click **Create**. \#\#\# \{: iih\} Check your progress The following image shows the deployed prompt template.
\{: width="100%" \} \#\#\# View the deployed prompt template Follow these steps to view the deployed prompt template in its current phase of the lifecycle: 1. View the deployment when it is ready. The *API reference* tab provides information for you to use the prompt template deployment in your application. 1. Click the **Test** tab. The *Test* tab allows you to submit an instruction and input to test the deployment. 1. Click **Generate**. 1. Click the **AI Factsheet** tab. The *AI Factsheet* shows that the prompt template is now in the operate phase. 1. Scroll down, and click the arrow for more details. 1. Select the **Evaluation > Operate > Deployment 1** page. 1. Click the **View details** icon \{: iih\} to open the AI use case. 1. Click the **Lifecycle** tab. 1. Click the **Insurance claim summarization** prompt template in the *Operate* phase. When you are done, click **Close**. 1. Click the **Insurance claims summarization deployment** prompt template deployment in the *Operate* phase. \#\#\# \{: iih\} Check your progress The following image shows the prompt template in the *Operate* phase of the lifecycle.
\{: width="100%" \}
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=en#video-preview)
<!-- </ul> -->
## Next steps ##
You are now ready to try the [Prompt a foundation model with the retrieval\-augmented generation pattern tutorial](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html)\.
## Additional resources ##
<!-- <ul> -->
* View more [videos](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/videos-wx.html)\.
* Find sample data sets, projects, models, prompts, and notebooks in the Samples to gain hands\-on experience:
[Notebooks](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=notebook) that you can add to your project to get started analyzing data and building models.
[Projects](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=project-template) that you can import containing notebooks, data sets, prompts, and other assets.
[Data sets](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=dataset) that you can add to your project to refine, analyze, and build models.
[Prompts](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=example-prompt) that you can use in the Prompt Lab to prompt a foundation model.
[Foundation models](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=foundation-model) that you can use in the Prompt Lab.
<!-- </ul> -->
**Parent topic:**[Quick start tutorials](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html)
<!-- </article "role="article" "> -->
|
4E83416B551F557D5BDA600450E6CCB7742EB51D | https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html?context=cdpaas&locale=en | Quick start: Prompt a foundation model with the retrieval-augmented generation pattern | Quick start: Prompt a foundation model with the retrieval-augmented generation pattern
Take this tutorial to learn how to use foundation models in IBM watsonx.ai to generate factually accurate output grounded in information in a knowledge base by applying the retrieval-augmented generation pattern. Foundation models can generate output that is factually inaccurate for a variety of reasons. One way to improve the accuracy of generated output is to provide the needed facts as context in your prompt text. This tutorial uses a sample notebook that applies the retrieval-augmented generation pattern to improve the accuracy of the generated output.
Required services : Watson Studio : Watson Machine Learning
Your basic workflow includes these tasks:
1. Open a project. Projects are where you can collaborate with others to work with data.
2. Add a notebook to your project. You can create your own notebook, or add a [sample notebook](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=notebook) to your project.
3. Add and edit code, then run the notebook.
4. Review the notebook output.
Read about retrieval-augmented generation pattern
You can scale out the technique of including context in your prompts by leveraging information in a knowledge base. The retrieval-augmented generation pattern involves three basic steps:
* Search for relevant content in your knowledge base
* Pull the most relevant content into your prompt as context
* Send the combined prompt text to the model to generate output
[Read more about the retrieval-augmented generation pattern](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-rag.html?context=wx)
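As a concrete illustration of those three steps, here is a minimal, self-contained sketch. The tiny knowledge base, the regular-expression search, and the generate_text call mirror the sample notebook's approach in simplified form; the article text, model ID, credentials, and project ID are placeholders, not the notebook's actual values.

```python
import re
from ibm_watson_machine_learning.foundation_models import Model

# A tiny knowledge base (the sample notebook uses two garden articles)
knowledge_base = [
    "Tomatoes need full sun and regular watering ...",
    "Carrots grow best in loose, sandy soil ...",
]

def search(question):
    # Step 1: trivial retrieval -- pick an article by a simple regular expression match
    if re.search(r"tomato", question, re.IGNORECASE):
        return knowledge_base[0]
    return knowledge_base[1]

question = "How much sun do tomatoes need?"
article = search(question)

# Step 2: pull the retrieved content into the prompt as context
# (two %s placeholders: one for the article, one for the question)
prompt = "Article:\n%s\n\nAnswer the following question using only the article:\n%s" % (
    article,
    question,
)

# Step 3: send the combined prompt text to a foundation model (placeholder credentials)
model = Model(
    model_id="google/flan-ul2",
    credentials={"url": "https://us-south.ml.cloud.ibm.com", "apikey": "YOUR_API_KEY"},
    project_id="YOUR_PROJECT_ID",
)
print(model.generate_text(prompt=prompt))
```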
Watch a video about using the retrieval-augmented generation pattern
 Watch this video to preview the steps in this tutorial. There might be slight differences in the user interface shown in the video. The video is intended to be a companion to the written tutorial.
This video provides a visual method to learn the concepts and tasks in this documentation.
Try a tutorial to prompt a foundation model with the retrieval-augmented generation pattern
In this tutorial, you will complete these tasks:
* [Task 1: Open a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html?context=cdpaas&locale=enstep01)
* [Task 2: Add a sample notebook to your project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html?context=cdpaas&locale=enstep02)
* [Task 3: Edit the notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html?context=cdpaas&locale=enstep03)
* [Task 4: Run the notebook and review the output](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html?context=cdpaas&locale=enstep04)
Expand all sections
* Tips for completing this tutorial
### Use the video picture-in-picture Tip: Start the video, then as you scroll through the tutorial, the video moves to picture-in-picture mode. Close the video table of contents for the best experience with picture-in-picture. You can use picture-in-picture mode so you can follow the video as you complete the tasks in this tutorial. Click the timestamps for each task to follow along.The following animated image shows how to use the video picture-in-picture and table of contents features: {: width="560px" height="315px" data-tearsheet="this"} ### Get help in the community If you need help with this tutorial, you can ask a question or find an answer in the [watsonx.ai Community discussion forum](https://community.ibm.com/community/user/watsonx/communities/community-home/digestviewer?communitykey=81927b7e-9a92-4236-a0e0-018a27c4ad6e){: new_window}. ### Set up your browser windows For the optimal experience completing this tutorial, open Cloud Pak for Data in one browser window, and keep this tutorial page open in another browser window to switch easily between the two applications. Consider arranging the two browser windows side-by-side to make it easier to follow along. {: width="560px" height="315px" data-tearsheet="this"} Tip: If you encounter a guided tour while completing this tutorial in the user interface, click Maybe later.
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html?context=cdpaas&locale=envideo-preview)
* Task 1: Open a project
You need a project to store the sample notebook. Watch a video to see how to create a sandbox project and associate a service. Then follow the steps to verify that you have an existing project or create a sandbox project.
This video provides a visual method to learn the concepts and tasks in this documentation.
Follow the steps to verify that you have an existing project or create a project. 1. From the watsonx home screen, scroll to the Projects section. If you see any projects listed, then skip to [Associate the Watson Machine Learning service](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html?context=cdpaas&locale=enassociate). If you don't see any projects, then follow these steps to create a project. 1. Click Create a sandbox project. When the project is created, you will see the sandbox in the Projects section. 1. Open an existing project or the new sandbox project. ### Associate the Watson Machine Learning service with the project You will use Watson Machine Learning to prompt the foundation model, so follow these steps to associate your Watson Machine Learning service instance with your project. 1. In the project, click the Manage tab. 1. Click the Services & Integrations page. 1. Check if this project has an associated Watson Machine Learning service. If there is no associated service, then follow these steps: 1. Click Associate service. 1. Check the box next to your Watson Machine Learning service instance. 1. Click Associate. 1. If necessary, click Cancel to return to the Services & Integrations page. For more information or to watch a video, see [Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html){: new_window}. ### {: iih} Check your progress The following image shows the Manage tab with the associated service. You are now ready to add the sample notebook to your project.
{: width="100%" }
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html?context=cdpaas&locale=envideo-preview)
* Task 2: Add the sample notebook to your project
The sample notebook uses a small knowledge base and a simple search component to demonstrate the basic pattern. The scenario used in this notebook is for a company that sells seeds for planting in a garden. The website for an online seed catalog has many articles to help customers plan their garden and ultimately select which seeds to purchase. A new widget is being added to the website to answer customer questions about the contents of the articles. Watch this video to see how to add a sample notebook to a project, and then follow the steps to add the notebook to your project.
This video provides a visual method to learn the concepts and tasks in this documentation.
1. Access the [Simple introduction to retrieval-augmented generation with watsonx.ai](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/fed7cf6b-1c48-4d71-8c04-0fce0e000d43){: new_window} in the Samples. 1. Click Add to project. 1. Select your project from the list, and click Add. 1. Type the notebook name and description (optional). 1. Select a runtime environment for this notebook. 1. Click Create. Wait for the notebook editor to load. 1. From the menu, click Kernel > Restart & Clear Output, then confirm by clicking Restart and Clear All Outputs to clear the output from the last saved run.
For more information on associated services, see [Adding associated services](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assoc-services.html){: new_window}. ### {: iih} Check your progress The following image shows the notebook open in Edit mode. Now you are ready to set up the prerequisites for running the notebook.
{: width="100%" }
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html?context=cdpaas&locale=envideo-preview)
* Task 3: Edit the notebook
 To preview this task, watch the video beginning at 00:57. Before you can run the notebook, you need to set up the environment. Follow these steps to verify the notebook prerequisites: 1. Scroll to the For IBM watsonx on IBM Cloud section in the notebook to see the two prerequisites to run the notebook. 1. Under the Create an IBM Cloud API key section, you need to pass your credentials to the Watson Machine Learning API using an API key. If you don't already have a saved API key, then follow these steps to create an API key.
1. Access the [IBM Cloud console API keys page](https://cloud.ibm.com/iam/apikeys){: new_window}. 1. Click Create an IBM Cloud API key. If you have any existing API keys, the button may be labelled Create. 1. Type a name and description. 1. Click Create. 1. Copy the API key. 1. Download the API key for future use. 1. Review the Associate an instance of the Watson Machine Learning service with the current project section. You completed this prerequisite in [Task 1](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html?context=cdpaas&locale=enstep01). 1. Scroll to the Run the cell to provide the IBM Cloud API key section: 1. Click the Run icon {: iih} to run the cell. 1. Paste the API key, and press Enter. 1. Under Run the cell to set the credentials for IBM watsonx on IBM Cloud, click the Run icon {: iih} to run the cell and set the credentials. ### {: iih} Check your progress The following image shows the notebook with the prerequisites completed. Now you are ready to run the notebook and review the output.
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html?context=cdpaas&locale=en#video-preview)
* Task 4: Run the notebook and review the output
To preview this task, watch the video beginning at 01:03. The sample notebook includes information about retrieval-augmented generation and how you can adapt the notebook for your specific use case. Follow these steps to run the notebook and review the output:

1. Scroll to the Step 2: Create a Knowledge Base section in the notebook:
   1. Click the Run icon for each of the three cells in that section.
   2. Review the output for the three cells in the section. The code in these cells sets up the knowledge base as a collection of two articles. These articles were written as samples for watsonx.ai; they are not real articles published anywhere else. The authors and publication dates are fictional.
2. Scroll to the Step 3: Build a simple search component section in the notebook:
   1. Click the Run icon for each of the two cells in that section.
   2. Review the output for the two cells in the section. The code in these cells builds a simple search component. Many articles that discuss retrieval-augmented generation assume the retrieval component uses a vector database. However, to perform the general retrieval-augmented generation pattern, any search-and-retrieve method that can reliably return relevant content from the knowledge base will do. In this notebook, the search component is a trivial search function that returns the index of one or the other of the two articles in the knowledge base, based on a simple regular expression match.
3. Scroll to the Step 4: Craft prompt text section in the notebook:
   1. Click the Run icon for each of the two cells in that section.
   2. Review the output for the two cells in the section. The code in these cells crafts the prompt text. There is no single best prompt for any given task. However, models that have been instruction-tuned, such as bigscience/mt0-xxl-13b, google/flan-t5-xxl-11b, or google/flan-ul2-20b, can generally perform this task with a sample prompt. Conservative decoding methods tend towards succinct answers. In the prompt, notice two string placeholders (marked with %s) that are replaced at generation time:
      - The first placeholder is replaced with the text of the relevant article from the knowledge base.
      - The second placeholder is replaced with the question to be answered.
4. Scroll to the Step 5: Generate output using the foundation models Python library section in the notebook:
   1. Click the Run icon for each of the three cells in that section.
   2. Review the output for the three cells in the section. The code in these cells generates output by using the Python library. You can prompt foundation models in watsonx.ai programmatically by using the Python library. For more information about the library, see the following topics:
      - [Introduction to the foundation models Python library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-python-lib.html?context=wx)
      - [Foundation models Python library reference](https://ibm.github.io/watson-machine-learning-sdk/foundation_models.html)
5. Scroll to the Step 6: Pull everything together to perform retrieval-augmented generation section in the notebook:
   1. Click the Run icon for each of the two cells in that section. This code pulls everything together to perform retrieval-augmented generation. (A condensed sketch of the full pattern follows this task's checkpoint.)
   2. Review the output for the first cell in the section. The code in this cell sets up the user input elements.
   3. For the second cell in the section, type a question related to tomatoes or cucumbers to see the answer and the source. For example: Do I use mulch with tomatoes?
   4. Review the answer to your question.

### Check your progress

The following image shows the completed notebook.
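To make the pattern concrete, here is a minimal, hedged sketch of the retrieval-augmented generation loop in Python. It condenses the notebook's steps rather than reproducing them: the article text, the trivial regular-expression search, and the model choice are illustrative stand-ins, and it assumes the ibm-watson-machine-learning package plus the `credentials` and `project_id` values set up earlier:

```python
import re

from ibm_watson_machine_learning.foundation_models import Model

# Tiny stand-in knowledge base: two placeholder articles
articles = [
    "Tomatoes benefit from mulch, which retains moisture and blocks weeds...",
    "Cucumbers grow best with consistent watering and full sun...",
]

def search(query: str) -> int:
    """Trivial retrieval: pick an article by keyword match, no vector database needed."""
    return 0 if re.search(r"tomato", query, re.IGNORECASE) else 1

prompt_template = (
    "Article:\n%s\n\n"
    "Answer the following question using only information from the article.\n"
    "Question: %s\nAnswer:"
)

model = Model(
    model_id="google/flan-ul2",  # an instruction-tuned model; illustrative choice
    params={"decoding_method": "greedy", "max_new_tokens": 100},
    credentials=credentials,     # set in the earlier credential cell
    project_id=project_id,       # your watsonx.ai project ID
)

question = "Do I use mulch with tomatoes?"
relevant_article = articles[search(question)]            # retrieve
prompt = prompt_template % (relevant_article, question)  # augment
print(model.generate_text(prompt=prompt))                # generate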
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html?context=cdpaas&locale=en#video-preview)
Next steps
*  Watch the video beginning at 02:55 to learn about considerations for applying the retrieval-augmented generation pattern to a production solution.
* Try the [Prompt a foundation model](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html) tutorial using Prompt Lab.
Additional resources
* [Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html)
* [Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html)
* [Python library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-python-lib.html)
* View more [videos](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/videos-wx.html).
* Find sample data sets, projects, models, prompts, and notebooks in the Samples to gain hands-on experience:
[Notebooks](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=notebook) that you can add to your project to get started analyzing data and building models.
[Projects](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=project-template) that you can import containing notebooks, data sets, prompts, and other assets.
[Data sets](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=dataset) that you can add to your project to refine, analyze, and build models.
[Prompts](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=example-prompt) that you can use in the Prompt Lab to prompt a foundation model.
[Foundation models](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=foundation-model) that you can use in the Prompt Lab.
Parent topic: [Quick start tutorials](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html)
Quick start: Generate synthetic tabular data
Take this tutorial to learn how to generate synthetic tabular data in IBM watsonx.ai. The benefit of synthetic data is that you can procure it on demand, customize it to fit your use case, and produce it in large quantities. This tutorial helps you learn how to use the graphical flow editor tool, Synthetic Data Generator, to generate synthetic tabular data based on production data or a custom data schema using visual flows and modeling algorithms.
Required services : Watson Studio
Your basic workflow includes these tasks:
1. Open a project. Projects are where you can collaborate with others to work with data.
2. Add your data to the project. You can add CSV files or data from a remote data source through a connection.
3. Create and run a synthetic data flow in the project. You use the graphical flow editor tool Synthetic Data Generator to generate synthetic tabular data based on production data or a custom data schema using visual flows and modeling algorithms.
4. Review the synthetic data flow and output.
Read about synthetic data
Synthetic data is information that has been generated on a computer to augment or replace real data to improve AI models, protect sensitive data, and mitigate bias. Synthetic data helps to mitigate many of the logistical, ethical, and privacy issues that come with training machine learning models on real-world examples.
[Read more about Synthetic data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/synthetic_data_overview_sd.html)
Watch a video about generating synthetic tabular data
 Watch this video to preview the steps in this tutorial. There might be slight differences in the user interface shown in the video. The video is intended to be a companion to the written tutorial.
This video provides a visual method to learn the concepts and tasks in this documentation.
Try a tutorial to generate synthetic tabular data
In this tutorial, you will complete these tasks:
* [Task 1: Open a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-generate-data.html?context=cdpaas&locale=en#step01)
* [Task 2: Add data to your project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-generate-data.html?context=cdpaas&locale=en#step02)
* [Task 3: Create a synthetic data flow](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-generate-data.html?context=cdpaas&locale=en#step03)
* [Task 4: Review the data flow and output](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-generate-data.html?context=cdpaas&locale=en#step04)
* Tips for completing this tutorial
### Use the video picture-in-picture

Tip: Start the video, then as you scroll through the tutorial, the video moves to picture-in-picture mode. Close the video table of contents for the best experience with picture-in-picture. You can use picture-in-picture mode so you can follow the video as you complete the tasks in this tutorial. Click the timestamps for each task to follow along. The following animated image shows how to use the video picture-in-picture and table of contents features:

### Get help in the community

If you need help with this tutorial, you can ask a question or find an answer in the [Cloud Pak for Data Community discussion forum](https://community.ibm.com/community/user/cloudpakfordata/communities/community-home/digestviewer?communitykey=c0c16ff2-10ef-4b50-ae4c-57d769937235).

### Set up your browser windows

For the optimal experience completing this tutorial, open Cloud Pak for Data in one browser window, and keep this tutorial page open in another browser window to switch easily between the two applications. Consider arranging the two browser windows side-by-side to make it easier to follow along.

Tip: If you encounter a guided tour while completing this tutorial in the user interface, click Maybe later.
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-generate-data.html?context=cdpaas&locale=en#video-preview)
* Task 1: Open a project
You need a project to store the assets. Watch a video to see how to create a sandbox project and associate a service. Then follow the steps to verify that you have an existing project or create a sandbox project.
This video provides a visual method to learn the concepts and tasks in this documentation.
1. From the watsonx home screen, scroll to the Projects section. If you see any projects listed, then skip to [Task 2](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-generate-data.html?context=cdpaas&locale=en#step02). If you don't see any projects, then follow these steps to create a project.
2. Click Create a sandbox project. When the project is created, you will see the sandbox project in the Projects section.

For more information or to watch a video, see [Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html).

### Check your progress

The following image shows the home screen with the sandbox listed in the Projects section. You are now ready to add data to your project.
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-generate-data.html?context=cdpaas&locale=en#video-preview)
* Task 2: Add data to your project
To preview this task, watch the video beginning at 00:24. The data set used in this tutorial contains typical information that a company gathers about their customers, and is available in the Samples. Follow these steps to find the data set in the Samples and add it to your project:

1. Access the [Customers data set](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/4bfbe430a82e23821aed0647b506da93) in the Samples.
2. Click Add to project.
3. Select your project from the list, and click Add.
4. After the data set is added, click View Project.

For more information on adding data assets from the Samples to your project, see [Loading and accessing data in a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html).

### Check your progress

The following image shows the Assets tab in the project. Now you are ready to create the synthetic data flow.
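Although this tutorial works entirely in the Synthetic Data Generator interface, the same data asset can also be read inside a notebook. The following is a minimal sketch using the project-lib library; the project ID and access token placeholders are assumptions, and in practice the notebook editor's Insert to code feature generates this boilerplate for you:

```python
import pandas as pd
from project_lib import Project

# Placeholders: Insert to code in the notebook editor fills these in for you
project = Project(project_id="<your-project-id>",
                  project_access_token="<your-access-token>")

# get_file returns a file-like buffer that pandas can read directly
df = pd.read_csv(project.get_file("customers.csv"))
df.head()
```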
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-generate-data.html?context=cdpaas&locale=en#video-preview)
* Task 3: Create a synthetic data flow
To preview this task, watch the video beginning at 00:43. Use the Synthetic Data Generator to create a data flow that generates synthetic tabular data based on production data or a custom data schema using visual flows and modeling algorithms. Follow these steps to create a synthetic data flow asset in your project:

1. From the Assets tab in your project, click New asset > Generate synthetic tabular data.
2. For the name, type Bank customers.
3. Click Create.
4. On the Welcome to Synthetic Data Generator screen, click First time user, and click Continue. This option provides a guided experience for you to build the data flow.
5. Review the two use cases:
   - Leverage your existing data: Generate a structured synthetic data set based on your production data. You can connect to a database, import or upload a file, mask, and generate your output before exporting.
   - Create from custom data: Generate a structured synthetic data set based on metadata. You can define the data within each table column, their distributions, and any correlations.
6. Select the Leverage your existing data use case, and click Next to import existing data.
7. Click Select data from project to use the customers data asset that you added from the Samples.
8. Select Data asset > customers.csv.
9. Click Select.
10. Click Next.
11. In the list of columns, search for creditcard_number.
12. In the Anonymize column for CREDITCARD_NUMBER, select Yes to mask customers' credit card numbers.
13. Click Next.
14. Accept the default settings on the Mimic options page. These options generate synthetic data, based on your production data, using a set of candidate statistical distributions to modify each column in your data. (The sketch after this task's checkpoint illustrates the idea.) Click Next.
15. For the File name, type bank_customers.csv, and click Next.
16. Review the settings, and click Save and run. The Synthetic Data Generator tool displays the data flow. Wait for the run to complete.

### Check your progress

The following image shows the data flow open in the Synthetic Data Generator. Now you can explore the data flow and view the output.
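As a hedged illustration of what mimicking a column means (not the tool's actual implementation), the following sketch fits a few candidate statistical distributions to one numeric column with SciPy, keeps the best fit, and samples synthetic values from it:

```python
import numpy as np
from scipy import stats

# Stand-in for one numeric column of production data
rng = np.random.default_rng(0)
real_values = rng.normal(loc=42, scale=7, size=1000)

# Try a few candidate distributions and keep the best fit
# (lowest Kolmogorov-Smirnov statistic)
candidates = [stats.norm, stats.gamma, stats.lognorm]
best = min(
    candidates,
    key=lambda d: stats.kstest(real_values, d.name, args=d.fit(real_values)).statistic,
)

# Sample a synthetic column that mimics the original distribution
params = best.fit(real_values)
synthetic_values = best(*params).rvs(size=1000, random_state=rng)
```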
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-generate-data.html?context=cdpaas&locale=en#video-preview)
* Task 4: Review the data flow and output
To preview this task, watch the video beginning at 01:48. When the run completes, you can explore the data flow. Follow these steps to review the synthetic data flow and the results:

1. Click the Palette icon to close the node panel.
2. Double-click the Import node to see the settings.
3. Review the Data properties. The tool read the data set from the project and filled in the appropriate data properties.
4. Expand the Types section. The tool read the values and columns in the data set.
5. Click Cancel.
6. Double-click the Anonymize node to see the settings.
7. Verify that the CREDITCARD_NUMBER column is set to be anonymized.
8. Expand the Anonymize values section. Here you can customize how the values are anonymized. (A simple masking sketch follows this task's checkpoint.)
9. Click Cancel.
10. Double-click the Mimic node to see the settings.
11. Review the default settings to mimic the data in the source customers data set.
12. Click Cancel.
13. Double-click the Generate node to see the settings.
14. Review the list of Synthesized columns.
15. Optional: Review the Correlations and Advanced Options.
16. Click Cancel.
17. Double-click the Export node to see the settings.
18. Optional: By default, the exported data is stored in the project. Click Change path to store the exported data in a connection, such as Db2 Warehouse.
19. Click Cancel.
20. Click your project name to return to the Assets tab.
21. Click bank_customers.csv to see a preview of the generated synthetic tabular data.

### Check your progress

The following image shows the exported, generated synthetic tabular data set.
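For intuition only (the Anonymize node offers its own configurable transformations), one common masking approach is to replace all but the last digits of a sensitive value, as in this pandas sketch:

```python
import pandas as pd

# Toy examples, not real card numbers
df = pd.DataFrame({"CREDITCARD_NUMBER": ["4532015112830366", "4916338506082832"]})

# Keep the last four digits and mask the rest
df["CREDITCARD_NUMBER"] = "XXXX-XXXX-XXXX-" + df["CREDITCARD_NUMBER"].str[-4:]
print(df)
```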
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-generate-data.html?context=cdpaas&locale=en#video-preview)
Next steps
Try these additional tutorials to get more hands-on experience with watsonx.ai:
* [Refine data](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html)
* [Analyze data in a Jupyter notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-analyze.html)
* [Build machine learning models](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html#tutorials-for-building-deploying-and-trusting-models)
Additional resources
* View more [videos](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/videos-wx.html).
* Find sample data sets, projects, models, prompts, and notebooks in the Samples to gain hands-on experience:
[Notebooks](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=notebook) that you can add to your project to get started analyzing data and building models.
[Projects](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=project-template) that you can import containing notebooks, data sets, prompts, and other assets.
[Data sets](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=dataset) that you can add to your project to refine, analyze, and build models.
[Prompts](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=example-prompt) that you can use in the Prompt Lab to prompt a foundation model.
[Foundation models](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=foundation-model) that you can use in the Prompt Lab.
* [Overview of watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/overview-wx.html)
Parent topic: [Quick start tutorials](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html)
Quick start: Automate the lifecycle for a model with pipelines
You can create an end-to-end pipeline to deliver concise, pre-processed, and up-to-date data stored in an external data source. Read about Watson Pipelines, then watch a video and take a tutorial.
Required services : Watson Studio : Watson Machine Learning
Your basic workflow includes these tasks:
1. Open your sandbox project. Projects are where you can collaborate with others to work with data.
2. Add connections and data to the project. You can add CSV files or data from a remote data source through a connection.
3. Create a pipeline in the project.
4. Add nodes to the pipeline to perform tasks.
5. Run the pipeline and view the results.
Read about pipelines
The Watson Pipelines editor provides a graphical interface for orchestrating an end-to-end flow of assets from creation through deployment. Assemble and configure a pipeline to create, train, deploy, and update machine learning models and Python scripts. Putting a model into production is a multi-step process: data must be loaded and processed, and models must be trained and tuned before they are deployed and tested. Machine learning models also require ongoing observation, evaluation, and updating over time to avoid bias or drift.
[Read more about pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-overview.html)
[Learn about other ways to build models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html)
Watch a video about pipelines
 Watch this video to preview the steps in this tutorial. You might notice slight differences in the user interface that is shown in the video. The video is intended to be a companion to the written tutorial.
This video provides a visual method to learn the concepts and tasks in this documentation.
Try a tutorial to create a model with Pipelines
This tutorial guides you through exploring and running an AI pipeline to build and deploy a model. The model predicts whether a customer is likely to subscribe to a term deposit based on a marketing campaign.
In this tutorial, you will complete these tasks:
* [Task 1: Open a project.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-pipeline.html?context=cdpaas&locale=en#step01)
* [Task 2: Create a deployment space.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-pipeline.html?context=cdpaas&locale=en#step02)
* [Task 3: Create the sample pipeline.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-pipeline.html?context=cdpaas&locale=en#step03)
* [Task 4: Explore an existing pipeline.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-pipeline.html?context=cdpaas&locale=en#step04)
* [Task 5: Run the pipeline.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-pipeline.html?context=cdpaas&locale=en#step05)
* [Task 6: View the assets, deployed model, and online deployment.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-pipeline.html?context=cdpaas&locale=en#step06)
This tutorial takes approximately 30 minutes to complete.
Sample data
The sample data used in the guided experience is the UCI Bank Marketing data set, which is used to predict whether a customer enrolls in a marketing promotion.

* Tips for completing this tutorial
### Use the video picture-in-picture

Tip: Start the video, then as you scroll through the tutorial, the video moves to picture-in-picture mode. Close the video table of contents for the best experience with picture-in-picture. You can use picture-in-picture mode so you can follow the video as you complete the tasks in this tutorial. Click the timestamps for each task to follow along. The following animated image shows how to use the video picture-in-picture and table of contents features:

### Get help in the community

If you need help with this tutorial, you can ask a question or find an answer in the [Cloud Pak for Data Community discussion forum](https://community.ibm.com/community/user/cloudpakfordata/communities/community-home/digestviewer?communitykey=c0c16ff2-10ef-4b50-ae4c-57d769937235).

### Set up your browser windows

For the optimal experience completing this tutorial, open Cloud Pak for Data in one browser window, and keep this tutorial page open in another browser window to switch easily between the two applications. Consider arranging the two browser windows side-by-side to make it easier to follow along.

Tip: If you encounter a guided tour while completing this tutorial in the user interface, click Maybe later.
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-pipeline.html?context=cdpaas&locale=en#video-preview)
* Task 1: Open a project
You need a project to store the pipeline assets. Watch a video to see how to create a sandbox project and associate a service. Then follow the steps to verify that you have an existing project or create a sandbox project.
This video provides a visual method to learn the concepts and tasks in this documentation.
1. From the watsonx home screen, scroll to the Projects section. If you see any projects listed, then skip to [Task 2](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-pipeline.html?context=cdpaas&locale=en#step02). If you don't see any projects, then follow these steps to create a project.
2. Click Create a sandbox project. When the project is created, you will see the sandbox project in the Projects section.

For more information or to watch a video, see [Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html).

### Check your progress

The following image shows the home screen with the sandbox listed in the Projects section. You are now ready to create a deployment space.
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-pipeline.html?context=cdpaas&locale=en#video-preview)
* Task 2: Create a deployment space
To preview this task, watch the video beginning at 00:16. Deployment spaces help you to organize supporting resources such as input data and environments; deploy models or functions to generate predictions or solutions; and view or edit deployment details. Follow these steps to create a deployment space:

1. From the watsonx navigation menu, choose Deployments. If you have an existing deployment space, you can skip to [Task 3](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-pipeline.html?context=cdpaas&locale=en#step03).
2. Click New deployment space.
3. Type a name for your deployment space.
4. Select a storage service from the list.
5. Select your provisioned machine learning service from the list.
6. Click Create.

### Check your progress

The following image shows the empty deployment space.
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-pipeline.html?context=cdpaas&locale=en#video-preview)
* Task 3: Create the sample pipeline
To preview this task, watch the video beginning at 00:08. You create and run pipelines in a project. Follow these steps to create a pipeline based on a sample in a project:

1. On the watsonx home page, select your sandbox or a different existing project from the drop-down list.
2. Click Customize my journey, and then select View all tasks.
3. Select Automate model lifecycle.
4. Click Samples.
5. Select Orchestrate an AutoAI experiment, and click Next.
6. Optional: Change the name for the pipeline.
7. Click Create. The sample pipeline gets training data, trains a machine learning model by using the AutoAI tool, and selects the best pipeline to save as a model. The model is deployed to a space.

### Check your progress

The following image shows the sample pipeline.
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-pipeline.html?context=cdpaas&locale=en#video-preview)
* Task 4: Explore an existing pipeline
To preview this task, watch the video beginning at 00:30. The sample pipeline includes several nodes to create assets and use those assets to build a model. Follow these steps to view the nodes:

1. Click the Global objects icon to view the pipeline parameters. Expand the deployment_space parameter. This pipeline includes a parameter to specify a deployment space where the best model from the AutoAI experiment is stored and deployed. Click the X to close the window.
2. Double-click the Create data file node to see that it is configured to access the data set for the experiment. Click Cancel to close the properties pane.
3. Double-click the Create AutoAI experiment node. View the experiment name, the scope, which is where the experiment is stored, the prediction type (binary classification, multiclass classification, or regression), the prediction column, and the positive class. The rest of the parameters are all optional. Click Cancel to close the properties pane.
4. Double-click the Run AutoAI experiment node. This node runs the AutoAI experiment onboarding-bank-marketing-prediction, trains the pipelines, then saves the best model. The first two parameters are required. The first parameter takes the output from the Create AutoAI experiment node as the input to run the experiment. The second parameter takes the output from the Create data file node as the training data input for the experiment. The rest of the parameters are all optional. Click Cancel to close the properties pane.
5. Double-click the Create Web service node. This node creates a deployment with the name onboarding-bank-marketing-prediction-deployment. The first parameter takes the best model output from the Run AutoAI experiment node as the input to create the deployment with the specified name. The rest of the parameters are all optional. Click Cancel to close the properties pane.

### Check your progress

The following image shows the properties for the Create web service node. You are now ready to run the sample pipeline.
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-pipeline.html?context=cdpaas&locale=en#video-preview)
* Task 5: Run the pipeline
To preview this task, watch the video beginning at 03:43. Now that the pipeline is complete, follow these steps to run the pipeline:

1. From the toolbar, click Run pipeline > Trial run.
2. In the Values for pipeline parameters section, select your deployment space:
   1. Click Select Space.
   2. Click Spaces.
   3. Select the deployment space that you created in [Task 2](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-pipeline.html?context=cdpaas&locale=en#step02).
   4. Click Choose.
3. Provide an API key if this is the first time you are running a pipeline. Pipeline assets use your personal IBM Cloud API key to run operations securely without disruption.
   - If you have an existing API key, click Use existing API key, paste the API key, and click Save.
   - If you don't have an existing API key, click Generate new API key, provide a name, and click Save. Copy the API key, and then save it for future use. When you're done, click Close.
4. Click Run to start running the pipeline.
5. Monitor the pipeline progress. Scroll through the consolidated logs while the pipeline is running. The trial run might take up to 10 minutes to complete.
6. As each operation completes, select the node for that operation on the canvas.
7. On the Node Inspector tab, view the details of the operation.
8. Click the Node output tab to see a summary of the output for each node operation.

Check your progress: The following image shows the pipeline after it completed the trial run. You are now ready to review the assets that the pipeline created.
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-pipeline.html?context=cdpaas&locale=en#video-preview)
* Task 6: View the assets, deployed model, and online deployment
To preview this task, watch the video beginning at 04:27. The pipeline created several assets in the deployment space. Follow these steps to view the assets:

1. From the watsonx navigation menu, choose Deployments.
2. Click the name of your deployment space.
3. On the Assets tab, view All assets.
4. Click the bank-marketing-data.csv data asset. The Create data file node created this asset.
5. Click the model beginning with the name onboarding-bank-marketing-prediction. The Run AutoAI experiment node generated several model candidates and chose this as the best model.
6. Click the Model details tab, and scroll through the model and training information.
7. Click the Deployments tab, and open onboarding-bank-marketing-prediction-deployment.
8. Click the Test tab.
9. Click the JSON input tab.
10. Replace the sample text with the following JSON text, and click Predict:

```json
{
  "input_data": [
    {
      "fields": ["age", "job", "marital", "education", "default", "balance", "housing", "loan", "contact", "day", "month", "duration", "campaign", "pdays", "previous", "poutcome"],
      "values": [[35, "management", "married", "tertiary", "no", 0, "yes", "no", "cellular", 1, "jun", 850, 10, -1, 4, "unknown"]]
    }
  ]
}
```

Check your progress: The following image shows the results of the test; the prediction is to approve the applicant. The confidence scores for your test might differ from the scores that are shown in the image.
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-pipeline.html?context=cdpaas&locale=en#video-preview)
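If you want to score the deployment outside the UI, the Watson Machine Learning Python client offers a scoring call. The following is a minimal sketch, assuming placeholder values for the API key, space ID, and deployment ID (copy the real IDs from the deployment details page):

```python
from ibm_watson_machine_learning import APIClient

# Authenticate; the API key value is a placeholder.
client = APIClient({
    "url": "https://us-south.ml.cloud.ibm.com",
    "apikey": "<your-IBM-Cloud-API-key>",
})
client.set.default_space("<your-space-id>")  # placeholder space ID

# Same payload as the JSON used on the Test tab.
payload = {
    "input_data": [{
        "fields": ["age", "job", "marital", "education", "default", "balance",
                   "housing", "loan", "contact", "day", "month", "duration",
                   "campaign", "pdays", "previous", "poutcome"],
        "values": [[35, "management", "married", "tertiary", "no", 0, "yes",
                    "no", "cellular", 1, "jun", 850, 10, -1, 4, "unknown"]],
    }]
}

result = client.deployments.score("<your-deployment-id>", payload)
print(result["predictions"][0])  # prediction and confidence scores
```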
* Try these other methods to build models:
* [Build and deploy a model with AutoAI](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build.html)
* [Build and deploy a model in a Jupyter notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-notebook.html)
* [Build and deploy a model with SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html)
* [Build and deploy a Decision Optimization model](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html)
* View more [videos](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/videos-wx.html).
* Find sample data sets, projects, models, prompts, and notebooks in the Samples to gain hands-on experience:
[Notebooks](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=notebook) that you can add to your project to get started analyzing data and building models.
[Projects](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=project-template) that you can import, containing notebooks, data sets, prompts, and other assets.
[Data sets](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=dataset) that you can add to your project to refine, analyze, and build models.
[Prompts](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=example-prompt) that you can use in the Prompt Lab to prompt a foundation model.
[Foundation models](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=foundation-model) that you can use in the Prompt Lab.
Learn more
* [Watson Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-overview.html)
Parent topic: [Quick start tutorials](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html)
Quick start: Prompt a foundation model using Prompt Lab
Take this tutorial to learn how to use the Prompt Lab in watsonx.ai. There are usually multiple ways to prompt a foundation model for a successful result. In the Prompt Lab, you can experiment with prompting different foundation models, explore sample prompts, as well as save and share your best prompts. See [Prompt tips](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-tips.html) to help you successfully prompt most text-generating foundation models.
Required services: Watson Studio and Watson Machine Learning
Your basic workflow includes these tasks:
1. Open a project. Projects are where you can collaborate with others to work with data.
2. Open the Prompt Lab. The Prompt Lab lets you experiment with prompting different foundation models, explore sample prompts, as well as save and share your best prompts.
3. Type your prompt in the prompt editor. You can type prompts in either freeform or structured mode.
4. Select the model to use. You can submit your prompt to any of the models supported by watsonx.ai.
5. Save your work as a project asset. Saving your work as a project asset makes your work available to collaborators in the current project.
Read about prompting a foundation model
Foundation models are very large AI models. They have billions of parameters and are trained on terabytes of data. Foundation models can perform a variety of tasks, including text-, code-, or image generation, classification, conversation, and more. Large language models are a subset of foundation models used for text- and code-related tasks. In IBM watsonx.ai, there is a collection of deployed large language models that you can use, as well as tools for experimenting with prompts.
[Read more about Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html)
Watch a video about prompting a foundation model
 Watch this video to preview the steps in this tutorial. There might be slight differences in the user interface shown in the video. The video is intended to be a companion to the written tutorial.
This video provides a visual method to learn the concepts and tasks in this documentation.
Try a tutorial to prompt a foundation model
In this tutorial, you will complete these tasks:
* [Task 1: Open a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=en#step01)
* [Task 2: Use the Prompt Lab in Freeform mode](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=en#step02)
* [Task 3: Use the Prompt Lab in Structured mode](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=en#step03)
* [Task 4: Use the sample prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=en#step04)
* [Task 5: Choose a foundation model](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=en#step05)
* [Task 6: Adjust model parameters](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=en#step06)
* [Task 7: Save your work](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=en#step07)
* Tips for completing this tutorial
Use the video picture-in-picture: Start the video, and then as you scroll through the tutorial, the video moves to picture-in-picture mode. Close the video table of contents for the best experience with picture-in-picture. You can use picture-in-picture mode to follow the video as you complete the tasks in this tutorial. Click the timestamps for each task to follow along. The following animated image shows how to use the video picture-in-picture and table of contents features.

Get help in the community: If you need help with this tutorial, you can ask a question or find an answer in the [watsonx.ai Community discussion forum](https://community.ibm.com/community/user/watsonx/communities/community-home/digestviewer?communitykey=81927b7e-9a92-4236-a0e0-018a27c4ad6e).

Set up your browser windows: For the optimal experience completing this tutorial, open Cloud Pak for Data in one browser window, and keep this tutorial page open in another browser window to switch easily between the two applications. Consider arranging the two browser windows side by side to make it easier to follow along. Tip: If you encounter a guided tour while completing this tutorial in the user interface, click Maybe later.
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=en#video-preview)
* Task 1: Open a project
You need a project to store Prompt Lab assets. Watch a video to see how to create a sandbox project and associate a service. Then follow the steps to verify that you have an existing project or create a sandbox project.
This video provides a visual method to learn the concepts and tasks in this documentation.
1. From the watsonx home screen, scroll to the Projects section. If you see any projects listed, skip to [Task 2](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=en#step02). If you don't see any projects, follow these steps to create a project:
   1. Click Create a sandbox project. When the project is created, you will see the sandbox project in the Projects section.

For more information or to watch a video, see [Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html).

Check your progress: The following image shows the home screen with the sandbox listed in the Projects section. You are now ready to open the Prompt Lab.
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=en#video-preview)
* Task 2: Use the Prompt Lab in Freeform mode
To preview this task, watch the video beginning at 00:03. You can type your prompt text in a freeform, plain-text editor and then click Generate to send your prompt to the model. Follow these steps to use the Prompt Lab in Freeform mode:

1. From the home screen, click the Experiment with foundation models and build prompts tile.
2. Select each checkbox to accept the acknowledgements, and then click Skip tour.
3. Click the Freeform tab to prompt a foundation model in Freeform mode.
4. Click Switch mode.
5. Copy and paste the following text into the text field, and then click Generate. The expected output is the class name Problem.

```
Classify this customer message into one of two classes: question, problem.

Class name: Question
Description: The customer is asking a technical question or a how-to question about our products or services.

Class name: Problem
Description: The customer is describing a problem they are having. They might say they are trying something, but it's not working. They might say they are getting an error or unexpected results.

Message: I'm having trouble registering for a new account.
Class name:
```

Check your progress: The following image shows the generated output for the prompt in Freeform mode. Now you are ready to prompt a foundation model in Structured mode.
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=en#video-preview)
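The Prompt Lab is one way to submit this prompt; the Python library listed under Additional resources is another. The following is a minimal sketch, assuming the ibm-watson-machine-learning package and placeholder credentials; the model ID shown is a common identifier for a flan-t5 model and may differ in your region or package version:

```python
from ibm_watson_machine_learning.foundation_models import Model

model = Model(
    model_id="google/flan-t5-xxl",  # assumption: ID for the flan-t5-xxl-11b model
    credentials={
        "url": "https://us-south.ml.cloud.ibm.com",
        "apikey": "<your-IBM-Cloud-API-key>",  # placeholder
    },
    project_id="<your-project-id>",  # placeholder
)

# The same freeform classification prompt as in this task.
prompt = (
    "Classify this customer message into one of two classes: question, problem.\n\n"
    "Class name: Question\n"
    "Description: The customer is asking a technical question or a how-to question "
    "about our products or services.\n\n"
    "Class name: Problem\n"
    "Description: The customer is describing a problem they are having. They might "
    "say they are trying something, but it's not working. They might say they are "
    "getting an error or unexpected results.\n\n"
    "Message: I'm having trouble registering for a new account.\n"
    "Class name:"
)

print(model.generate_text(prompt))  # expected output: Problem
```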
* Task 3: Use the Prompt Lab in Structured mode
To preview this task, watch the video beginning at 00:19. You can type your prompt in a structured format. The structured format is helpful for few-shot prompting, when your prompt has multiple examples. Follow these steps to use the Prompt Lab in Structured mode:

1. Click the Structured tab.
2. Click Switch mode.
3. In the Instruction field, copy and paste the following text:

   Given a message submitted to a customer-support chatbot for a cloud software company, classify the customer's message as either a question or a problem description so the chat can be routed to the correct support team.

4. In the Setup field, copy and paste the following text in each column:

   | Input | Output |
   | ----- | ----- |
   | When I try to log in, I get an error. | Problem |
   | Where can I find the plan prices? | Question |
   | What is the difference between trial and paygo? | Question |
   | The registration page crashed, and now I can't create a new account. | Problem |
   | What regions are supported? | Question |
   | I can't remember my password. | Problem |

5. In the Try field, copy and paste the following text:

   I'm having trouble registering for a new account.

6. Click Generate to see the output Problem.

Check your progress: The following image shows the generated output for the prompt in Structured mode. Now you are ready to try the sample prompts.
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=en#video-preview)
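Structured mode combines the instruction, the example pairs, and your test input into a single text prompt before sending it to the model. The exact template that watsonx.ai uses is internal; the following rough sketch only illustrates the idea:

```python
instruction = (
    "Given a message submitted to a customer-support chatbot for a cloud software "
    "company, classify the customer's message as either a question or a problem "
    "description so the chat can be routed to the correct support team."
)

# The few-shot examples from the Setup table in this task.
examples = [
    ("When I try to log in, I get an error.", "Problem"),
    ("Where can I find the plan prices?", "Question"),
    ("What is the difference between trial and paygo?", "Question"),
    ("The registration page crashed, and now I can't create a new account.", "Problem"),
    ("What regions are supported?", "Question"),
    ("I can't remember my password.", "Problem"),
]

query = "I'm having trouble registering for a new account."

# Concatenate instruction, examples, and the new input into one prompt string.
prompt = instruction + "\n\n"
prompt += "\n\n".join(f"Input: {text}\nOutput: {label}" for text, label in examples)
prompt += f"\n\nInput: {query}\nOutput:"
print(prompt)
```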
* Task 4: Use the sample prompts
To preview this task, watch the video beginning at 00:33. If you're not sure how to begin, sample prompts can get you started. Follow these steps to use the sample prompts:

1. Click the Sample prompts icon to display the list.
2. Scroll through the list, and click the Marketing email generation sample prompt.
3. View the selected model. When you load a sample prompt, an appropriate model is selected for you.
4. Open the Model parameters panel. The appropriate decoding and stopping criteria parameters are also set automatically.
5. Click Generate to submit the sample prompt to the model, and see the sample email output.

Check your progress: The following image shows the generated output from a sample prompt. Now you are ready to customize the sample prompt output by selecting a different model and parameters.
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=en#video-preview)
* Task 5: Choose a foundation model
To preview this task, watch the video beginning at 01:04. You can submit the same prompt to a different model. Follow these steps to choose a different foundation model:

1. Click Model > View all foundation models.
2. Click a model to learn more about it, and see details such as the model architecture, pretraining data, fine-tuning information, and performance against benchmarks.
3. Click Back to return to the list of models.
4. Select either the flan-t5-xxl-11b or mt0-xxl-13b foundation model, and click Select model.
5. Hover over the model output column, and click the X icon to delete the previous output.
6. Click the same sample prompt, Marketing email generation, from the list.
7. Click Generate to generate output by using the new model.

Check your progress: The following image shows generated output using a different model. You are now ready to adjust the model parameters.
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=en#video-preview)
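If you script your prompts, switching models amounts to passing a different model ID. A small sketch, assuming the ModelTypes enum from the ibm-watson-machine-learning package; the exact member names and ID strings may vary by package version:

```python
from ibm_watson_machine_learning.foundation_models.utils.enums import ModelTypes

# Each enum member maps to a model ID string; these two are assumed to
# correspond to the flan-t5-xxl-11b and mt0-xxl-13b models used in this task.
print(ModelTypes.FLAN_T5_XXL.value)  # e.g. "google/flan-t5-xxl"
print(ModelTypes.MT0_XXL.value)      # e.g. "bigscience/mt0-xxl"

# Swapping models is then just a matter of passing a different model_id
# (or ModelTypes member) when constructing the Model object.
```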
* Task 6: Adjust model parameters
To preview this task, watch the video beginning at 01:28. You can experiment with changing decoding or stopping criteria parameters. Follow these steps to adjust the model parameters. Note: The available model parameters vary based on the currently selected model. The following table defines the model parameters available for the flan-t5-xxl-11b foundation model; a programmatic sketch of the same settings follows this task.

| Model parameter | Meaning |
| ----- | ----- |
| Decoding | Set decoding to Greedy to always select words with the highest probability. Set decoding to Sampling to customize the variability of word selection. |
| Temperature | Control the creativity of generated text. Higher values lead to more randomly generated outputs. |
| Top P (nucleus sampling) | Set to < 1.0 to use only the smallest set of most probable tokens with probabilities that add up to top_p or higher. |
| Top K | Set the number of highest-probability vocabulary tokens to keep for top-k filtering. Lower values make it less likely that the model goes off topic. |
| Random seed | Control the random sampling of the generated tokens when sampling is enabled. Setting the random seed to the same number for each generation ensures experimental repeatability. |
| Repetition penalty | Set a repetition penalty to counteract the model's tendency to repeat prompt text verbatim or get stuck in a loop. 1.00 indicates no penalty. |
| Stop sequences | Set stop sequences to one or more strings to cause the text generation to stop if or when they are produced as part of the output. |
| Min tokens | Define the minimum number of tokens to generate. Stop sequences that are encountered before the minimum number of tokens are generated are ignored. |
| Max tokens | Define the maximum number of tokens to generate. |

1. Change the Top K parameter to 10 to make it less likely that the model goes off topic.
2. Click X to delete the previous model output.
3. Click the same sample prompt from the list.
4. Click Generate to generate output by using the new model parameters.
5. Click the Session history icon after submitting multiple prompts to view your session history.
6. Click any entry to work with a previous prompt, model specification, and parameter settings, and then click Restore.
7. Edit the prompt, change the model, or adjust the decoding and stopping criteria parameters.
8. Click Generate to generate output by using the updated information.

Check your progress: The following image shows generated output using different model parameters. You are now ready to save your work.
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=en#video-preview)
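As mentioned above, the same decoding and stopping criteria can be set in code. A sketch, assuming the GenTextParamsMetaNames helper from the ibm-watson-machine-learning package; the values mirror this task:

```python
from ibm_watson_machine_learning.metanames import GenTextParamsMetaNames as GenParams

# Values mirror the parameter table and the steps in this task.
params = {
    GenParams.DECODING_METHOD: "sample",   # "greedy" or "sample"
    GenParams.TEMPERATURE: 0.7,
    GenParams.TOP_K: 10,                   # lower values keep the model on topic
    GenParams.TOP_P: 1.0,
    GenParams.RANDOM_SEED: 42,             # fixed seed for repeatable sampling
    GenParams.REPETITION_PENALTY: 1.0,     # 1.00 means no penalty
    GenParams.STOP_SEQUENCES: ["\n\n"],
    GenParams.MIN_NEW_TOKENS: 10,
    GenParams.MAX_NEW_TOKENS: 200,
}

# Pass params when constructing the Model object, for example:
# model = Model(model_id="google/flan-t5-xxl", params=params,
#               credentials=credentials, project_id=project_id)
```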
* Task 7: Save your work
To preview this task, watch the video beginning at 02:15. You can save your work in three formats:

| Asset type | Description |
| ----- | ----- |
| Prompt template | Save the current prompt only, without its history. |
| Prompt session | Save the history and data from the current session. |
| Notebook | Save the current prompt as a notebook. |

Follow these steps to save your work:

1. Click Save work > Save as.
2. Select Prompt template.
3. For the name, type Sample prompts.
4. Select the View in project after saving option.
5. Click Save.
6. On the project's Assets tab, click the Sample prompts asset to load that prompt in the Prompt Lab and get right back to work.
7. Click the Saved prompts icon to see the saved prompts from your sandbox project.

Check your progress: The following image shows the project's Assets tab with the prompt template asset.
The following image shows the saved prompt in the Prompt Lab.
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=en#video-preview)
Next steps
You are now ready to:
* Use the [Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html) to prompt [foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html) and save your work to a project.
* Try the [Prompt a foundation model with the retrieval-augmented generation pattern tutorial](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html)
Additional resources
* [Saving your work](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-save.html)
* [Python library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-python-lib.html)
* View more [videos](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/videos-wx.html).
* Find sample data sets, projects, models, prompts, and notebooks in the Samples to gain hands-on experience:
[Notebooks](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=notebook) that you can add to your project to get started analyzing data and building models.
[Projects](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=project-template) that you can import, containing notebooks, data sets, prompts, and other assets.
[Data sets](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=dataset) that you can add to your project to refine, analyze, and build models.
[Prompts](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=example-prompt) that you can use in the Prompt Lab to prompt a foundation model.
[Foundation models](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=foundation-model) that you can use in the Prompt Lab.
Parent topic: [Quick start tutorials](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html)
| # Quick start: Prompt a foundation model using Prompt Lab #
Take this tutorial to learn how to use the Prompt Lab in watsonx\.ai\. There are usually multiple ways to prompt a foundation model for a successful result\. In the Prompt Lab, you can experiment with prompting different foundation models, explore sample prompts, as well as save and share your best prompts\. See [Prompt tips](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-tips.html) to help you successfully prompt most text\-generating foundation models\.
**Required services** : Watson Studio : Watson Machine Learning
Your basic workflow includes these tasks:
<!-- <ol> -->
1. Open a project\. Projects are where you can collaborate with others to work with data\.
2. Open the Prompt Lab\. The Prompt Lab lets you experiment with prompting different foundation models, explore sample prompts, as well as save and share your best prompts\.
3. Type your prompt in the prompt editor\. You can type prompts in either freeform and structured mode\.
4. Select the model to use\. You can submit your prompt to any of the models supported by watsonx\.ai\.
5. Save your work as a projet asset\. Saving your work as a project asset makes your work available to collaborators in the current project\.
<!-- </ol> -->
## Read about prompting a foundation model ##
Foundation models are very large AI models\. They have billions of parameters and are trained on terabytes of data\. Foundation models can perform a variety of tasks, including text\-, code\-, or image generation, classification, conversation, and more\. Large language models are a subset of foundation models used for text\- and code\-related tasks\. In IBM watsonx\.ai, there is a collection of deployed large language models that you can use, as well as tools for experimenting with prompts\.
[Read more about Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html)
## Watch a video about prompting a foundation model ##
 Watch this video to preview the steps in this tutorial\. There might be slight differences in the user interface shown in the video\. The video is intended to be a companion to the written tutorial\.
This video provides a visual method to learn the concepts and tasks in this documentation\.
## Try a tutorial to prompt a foundation model ##
In this tutorial, you will complete these tasks:
<!-- <ul> -->
* [Task 1: Open a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=en#step01)
* [Task 2: Use the Prompt Lab in Freeform mode](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=en#step02)
* [Task 3: Use the Prompt Lab in Structured mode](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=en#step03)
* [Task 4: Use the sample prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=en#step04)
* [Task 5: Choose a foundation model](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=en#step05)
* [Task 6: Adjust model parameters](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=en#step06)
* [Task 7: Save your work](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=en#step07)
<!-- </ul> -->
Expand all sections
<!-- <ul> -->
* Tips for completing this tutorial
\#\#\# Use the video picture-in-picture Tip: Start the video, then as you scroll through the tutorial, the video moves to picture-in-picture mode. Close the video table of contents for the best experience with picture-in-picture. You can use picture-in-picture mode so you can follow the video as you complete the tasks in this tutorial. Click the timestamps for each task to follow along.The following animated image shows how to use the video picture-in-picture and table of contents features: \{: width="560px" height="315px" data-tearsheet="this"\} \#\#\# Get help in the community If you need help with this tutorial, you can ask a question or find an answer in the [watsonx.ai Community discussion forum](https://community.ibm.com/community/user/watsonx/communities/community-home/digestviewer?communitykey=81927b7e-9a92-4236-a0e0-018a27c4ad6e)\{: new\_window\}. \#\#\# Set up your browser windows For the optimal experience completing this tutorial, open Cloud Pak for Data in one browser window, and keep this tutorial page open in another browser window to switch easily between the two applications. Consider arranging the two browser windows side-by-side to make it easier to follow along. \{: width="560px" height="315px" data-tearsheet="this"\} Tip: If you encounter a guided tour while completing this tutorial in the user interface, click **Maybe later**.
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=en#video-preview)
<!-- </ul> -->
<!-- <ul> -->
* Task 1: Open a project
You need a project to store Prompt Lab assets. Watch a video to see how to create a sandbox project and associate a service. Then follow the steps to verify that you have an existing project or create a sandbox project.
This video provides a visual method to learn the concepts and tasks in this documentation.
1. From the watsonx home screen, scroll to the *Projects* section. If you see any projects listed, then skip to [Task 2](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=en#step02). If you don't see any projects, then follow these steps to create a project. 1. Click **Create a sandbox project**. When the project is created, you will see the sandbox project in the *Projects* section. For more information or to watch a video, see [Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html)\{: new\_window\}. \#\#\# \{: iih\} Check your progress The following image shows the home screen with the sandbox listed in the Projects section. You are now ready to open the Prompt Lab.
\{: width="100%" \}
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=en#video-preview)
<!-- </ul> -->
<!-- <ul> -->
* Task 2: Use the Prompt Lab in Freeform mode
 To preview this task, watch the video beginning at 00:03. You can type your prompt text in a freeform, plain text editor and then click **Generate** to send your prompt to the model. Follow these steps to use the Prompt Lab in Freeform mode: 1. From the home screen, click the **Experiment with foundation models and build prompts** tile. 1. Select each checkbox to accept the acknowledgements, and then click **Skip tour**. 1. Click the **Freeform** tab to prompt a foundation model in *Freeform* mode. 1. Click **Switch mode**. 1. Copy and paste the following text in the text field, and then click **Generate** to see the output for the *Class name: Problem*.`Classify this customer message into one of two classes: question, problem. Class name: Question Description: The customer is asking a technical question or a how-to question about our products or services. Class name: Problem Description: The customer is describing a problem they are having. They might say they are trying something, but it's not working. They might say they are getting an error or unexpected results. Message: I'm having trouble registering for a new account. Class name:`
\#\#\# \{: iih\} Check your progress The following images shows the generated output for the prompt in Freeform mode. Now you are ready to prompt a foundation model in Structured mode.
\{: width="100%" \}
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=en#video-preview)
<!-- </ul> -->
<!-- <ul> -->
* Task 3: Use the Prompt Lab in Structured mode
 To preview this task, watch the video beginning at 00:19. You can type your prompt in a structured format. The structured format is helpful for few-shot prompting, when your prompt has multiple examples. Follow these steps to use the Prompt Lab in Structured mode: 1. Click the **Structured** tab. 1. Click **Switch mode**. 1. In the *Instruction* field, copy and paste the following text: `Given a message submitted to a customer-support chatbot for a cloud software company, classify the customer's message as either a question or a problem description so the chat can be routed to the correct support team.`\{: .cp\} 1. In the *Setup* field, copy and paste the following text in each column: \| Input \| Output \| \| ----- \| ----- \| \| When I try to log in, I get an error. \| Problem \| \| Where can I find the plan prices? \| Question \| \| What is the difference between trial and paygo? \| Question \| \| The registration page crashed, and now I can't create a new account. \| Problem \| \| What regions are supported? \| Question \| \| I can't remember my password. \| Problem \|
1. In the *Try* field, copy and paste the following text: `I'm having trouble registering for a new account.`\{: .cp\} 1. Click **Generate** to see the output *Problem*. \#\#\# \{: iih\} Check your progress The following images shows the generated output for the prompt in Structured mode. Now you are ready to try the sample prompts.
\{: width="100%" \}
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=en#video-preview)
<!-- </ul> -->
<!-- <ul> -->
* Task 4: Use the sample prompts
 To preview this task, watch the video beginning at 00:33. If you’re not sure how to begin, sample prompts can get your started. Follow these steps to use the sample prompts: 1. Open the **Sample prompts** icon \{: iih\} to display the list. 1. Scroll through the list, and click the **Marketing email generation** sample prompt. 1. View the selected model. When you load a sample prompt, an appropriate model is selected for you. 1. Open the **Model Parameters**\{: iih\} panel. The appropriate decoding and stopping criteria parameters are set automatically too. 1. Click **Generate** to submit the sample prompt to the model, and see the sample email output. \#\#\# \{: iih\} Check your progress The following image shows the generated output from a sample prompt. Now you are ready to customize the sample prompt output by selecting a different model and parameters.
\{: width="100%" \}
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=en#video-preview)
<!-- </ul> -->
<!-- <ul> -->
* Task 5: Choose a foundation model
 To preview this task, watch the video beginning at 01:04. You can submit the same prompt to a different model. Follow these steps to choose a different foundation model: 1. Click **Model > View all foundation models**. 1. Click a model to learn more about a model, and see detail such as the model architecture, pretraining data, fine-tuning information, and performance against benchmarks. 1. Click **Back** to return to the list of models. 1. Select either the **flan-t5-xxl-11b** or **mt0-xxl-13b** foundation model, and click **Select model**. 1. Hover over the model output column and click the **X** icon to delete the previous output. 1. Click the same sample prompt, **Marketing email generation**, from the list. 1. Click **Generate** to generate output using the new model. \#\#\# \{: iih\} Check your progress The following image shows generated output using a different model. You are now ready to adjust the model parameters.
\{: width="100%" \}
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=en#video-preview)
<!-- </ul> -->
<!-- <ul> -->
* Task 6: Adjust model parameters
 To preview this task, watch the video beginning at 01:28. You can experiment with changing decoding or stopping criteria parameters. Follow these steps to adjust model parameters. Note: The model parameters vary based on the currently selected model.The following table defines the model parameters available for the *flan-t5-xxl-11b* foundation model. \| Model parameters \| Meaning \| \| ----- \| ----- \| \| Decoding \| Set decoding to *Greedy* to always select words with the highest probability. Set decoding to *Sampling* to customize the variability of word selection. \| \| Temperature \| Control the creativity of generated text. Higher values will lead to more randomly generated outputs. \| \| Top P (nucleus sampling) \| Set to `< 1.0` to use only the smallest set of most probable tokens with probabilities that add up to `top_p` or higher. \| \| Top K \| Set the number of highest probability vocabulary tokens to keep for top-k-filtering. Lower values make it less likely the model will go off topic. \| \| Random seed \| Control the random sampling of the generated tokens when sampling is enabled. Setting the random see to the same number for each generation ensures experimental repeatability. \| \| Repetition penalty \| Set a repetition penalty to counteract the model's tendency to repeat prompt text verbatim or get stuck in a loop. 1.00 indicates no penalty. \| \| Stop sequences \| Set stop sequences to one ore more strings to cause the text generation to stop if or when they are produced as part of the output. \| \| Min tokens \| Define the minimum number to tokens to generate. Stop sequences encountered prior to the minimum number of tokens being generated are ignored. \| \| Max tokens \| Define the maximum number to tokens to generate. \|
1. Change the *Top K* parameter to `10` to make it less likely the model will go off topic. 1. Click **X** to delete the previous model output. 1. Click the same sample prompt from the list. 1. Click **Generate** to generate output using the new model parameters. 1. Click the **Session history** icon \{: iih\} after submitting multiple prompts to view your session history. 1. Click any entry to work with a previous prompt, model specification, and parameter settings, and then click **Restore**. 1. Edit the prompt, change the model, or adjust decoding and stopping criteria parameters. 1. Click **Generate** to generate output using the updated information. \#\#\# \{: iih\} Check your progress The following image shows generated output using different model parameters. You are now ready to save your work.
\{: width="100%" \}
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=en#video-preview)
<!-- </ul> -->
>
<!-- <ul> -->
* Task 7: Save your work
 To preview this task, watch the video beginning at 02:15. You can save your work in three formats: \| Asset type \| Description \| \| ----- \| ----- \| \| Prompt template \| Save the current prompt only, without its history. \| \| Prompt session \| Save history and data from the current session. \| \| Notebook \| Save the current prompt as a notebook. \|
Follow these steps to save your work: 1. Click **Save work > Save as**. 1. Select **Prompt template**. 1. For the name, type `Sample prompts`\{: .cp\}. 1. Select the **View in project after saving** option. 1. Click **Save**. 1. On the project's *Assets* tab, click the **Sample prompts** asset to load that prompt in the Prompt Lab and get right back to work. 1. Click the **Saved prompts**\{: iih\} to see saved prompt from your sandbox project. \#\#\# \{: iih\} Check your progress The following image shows the project's Assets tab with the prompt template asset:
\{: width="100%" \} \{: iih\} The following image shows saved prompt in the Prompt Lab: \{: biw\}
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=en#video-preview)
<!-- </ul> -->
## Next steps ##
You are now ready to:
<!-- <ul> -->
* Use the [Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html) to prompt [foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html) and save your work to a project\.
* Try the [Prompt a foundation model with the retrieval\-augmented generation pattern tutorial](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html)
<!-- </ul> -->
## Additional resources ##
<!-- <ul> -->
* [Saving your work](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-save.html)
* [Python library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-python-lib.html)
* View more [videos](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/videos-wx.html)\.
* Find sample data sets, projects, models, prompts, and notebooks in the Samples to gain hands\-on experience:
[Notebooks](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=notebook) that you can add to your project to get started analyzing data and building models.
[Projects](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=project-template) that you can import containing notebooks, data sets, prompts, and other assets.
[Data sets](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=dataset) that you can add to your project to refine, analyze, and build models.
[Prompts](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=example-prompt) that you can use in the Prompt Lab to prompt a foundation model.
[Foundation models](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=foundation-model) that you can use in the Prompt Lab.
**Parent topic:** [Quick start tutorials](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html)
|
98AA3E34D14723232D266A85CBB9E2B1816B1AA5 | https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html?context=cdpaas&locale=en | Quick start: Refine data | Quick start: Refine data
You can save data preparation time by quickly transforming large amounts of raw data into consumable, high-quality information that is ready for analytics. Read about the Data Refinery tool, then watch a video and take a tutorial that’s suitable for beginners and does not require coding.
Your basic workflow includes these tasks:
1. Open your sandbox project. Projects are where you can collaborate with others to work with data.
2. Add your data to the project. You can add CSV files or data from a remote data source through a connection.
3. Open the data in Data Refinery.
4. Perform steps using operations to refine the data.
5. Create and run a job to transform the data.
Read about Data Refinery
Use Data Refinery to cleanse and shape tabular data with a graphical flow editor. You can also use interactive templates to code operations, functions, and logical operators. When you cleanse data, you fix or remove data that is incorrect, incomplete, improperly formatted, or duplicated. When you shape data, you customize it by filtering, sorting, combining or removing columns, and performing operations.
You create a Data Refinery flow as a set of ordered operations on data. Data Refinery includes a graphical interface to profile your data to validate it and over 20 customizable charts that give you perspective and insights into your data. When you save the refined data set, you typically load it to a different location than where you read it from. In this way, your source data remains untouched by the refinement process.
[Read more about refining data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html)
Watch a video about refining data
 Watch this video to see how to refine data.
This video provides a visual method to learn the concepts and tasks in this documentation.
Try a tutorial to refine data
In this tutorial, you will complete these tasks:
* [Task 1: Open a project.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html?context=cdpaas&locale=en#step01)
* [Task 2: Open the data set in Data Refinery.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html?context=cdpaas&locale=en#step02)
* [Task 3: Review the data with Profile and Visualizations.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html?context=cdpaas&locale=en#step03)
* [Task 4: Refine the data.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html?context=cdpaas&locale=en#step04)
* [Task 5: Run a job for the Data Refinery flow.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html?context=cdpaas&locale=en#step05)
* [Task 6: Create another data asset from the Data Refinery flow.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html?context=cdpaas&locale=en#step06)
* [Task 7: View the data assets and your Data Refinery flow in your project.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html?context=cdpaas&locale=en#step07)
This tutorial will take approximately 30 minutes to complete.
* Tips for completing this tutorial
### Use the video picture-in-picture

Tip: Start the video, then as you scroll through the tutorial, the video moves to picture-in-picture mode. Close the video table of contents for the best experience with picture-in-picture. You can use picture-in-picture mode so you can follow the video as you complete the tasks in this tutorial. Click the timestamps for each task to follow along.

The following animated image shows how to use the video picture-in-picture and table of contents features: {: width="560px" height="315px" data-tearsheet="this"}

### Get help in the community

If you need help with this tutorial, you can ask a question or find an answer in the [Cloud Pak for Data Community discussion forum](https://community.ibm.com/community/user/cloudpakfordata/communities/community-home/digestviewer?communitykey=c0c16ff2-10ef-4b50-ae4c-57d769937235){: new_window}.

### Set up your browser windows

For the optimal experience completing this tutorial, open Cloud Pak for Data in one browser window, and keep this tutorial page open in another browser window to switch easily between the two applications. Consider arranging the two browser windows side-by-side to make it easier to follow along. {: width="560px" height="315px" data-tearsheet="this"}

Tip: If you encounter a guided tour while completing this tutorial in the user interface, click Maybe later.
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html?context=cdpaas&locale=en#video-preview)
* Task 1: Open a project
You need a project to store the data and the Data Refinery flow. You can use your sandbox project or create a project.

1. From the navigation menu {: iih}, choose Projects > View all projects.
2. Open your sandbox project. If you want to use a new project:
   1. Click New project.
   2. Select Create an empty project.
   3. Enter a name and optional description for the project.
   4. Choose an existing [object storage service instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/storage-options.html){: new_window} or create a new one.
   5. Click Create.

### {: iih} Check your progress

The following image shows a new, empty project.
{: width="100%" } For more information or to watch a video, see [Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html){: new_window}.
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html?context=cdpaas&locale=en#video-preview)
* Task 2: Open the data set in Data Refinery
 To preview this task, watch the video beginning at 00:05. Follow these steps to add a data asset to your project and create a Data Refinery flow. The data set you will use in this tutorial is available in the Samples. 1. Access the [Airline data](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/8fa07e57e69f7d0cb970c86c6ae52d41){: new_window} in the Samples. 1. Click Add to project. 1. Select your project from the list, and click Add. 1. After the data set is added, click View Project. For more information on adding a data asset from the Samples to a project, see [Loading and accessing data in a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html). 1. On the Assets tab, click the airline-data.csv data asset to preview its content. 1. Click Prepare data to open a sample of the file in Data Refinery, and wait until Data Refinery reads and processes a sample of the data. 1. Close the Information and Steps panels. ### {: iih} Check your progress The following image shows the airline data asset open in Data Refinery.
{: width="100%" }
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html?context=cdpaas&locale=en#video-preview)
* Task 3: Review the data with Profile and Visualizations
 To preview this task, watch the video beginning at 00:47. IBM Knowledge Catalog automatically profiles and classifies the content of an asset based on the values in those columns. Follow these steps to use the Profile and Visualizations tabs to explore the data. Tip: Use the Profile and Visualizations pages to view changes in the data as you refine it.1. Click the Profile tab to review the [frequency distribution](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/metrics.html){: new_window} of the data so that you can find the outliers. 1. Scroll through the columns to the see the statistics for each column. The statistics show the interquartile range, minimum, maximum, median and standard deviation in each column. 1. Hover over a bar to see additional details. The following image shows the Profile tab:
 1. Click the Visualizations tab. 1. Select the UniqueCarrier column to visualize. Suggested [charts](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/visualizations.html){: new_window} have a blue dot next to their icons. 1. Click the Pie chart. Use the different perspectives available in the charts to identify patterns, connections, and relationships within the data. ### {: iih} Check your progress The following image shows the Visualizations tab. You are now ready to refine the data.
{: width="100%" }
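If you want to double-check the Profile tab's numbers outside the tool, you can compute comparable statistics with pandas. This is an optional illustration rather than a tutorial step, and it assumes that you downloaded a local copy of airline-data.csv:

```python
# Optional illustration (not a tutorial step): reproduce the Profile tab's
# summary statistics with pandas, assuming a local copy of airline-data.csv.
import pandas as pd

df = pd.read_csv("airline-data.csv")

# describe() reports count, mean, standard deviation, min, quartiles (from
# which the interquartile range follows), and max for each numeric column.
print(df[["DepDelay", "ArrDelay"]].describe())

# Frequency counts per carrier, comparable to the UniqueCarrier pie chart.
print(df["UniqueCarrier"].value_counts())
```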
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html?context=cdpaas&locale=en#video-preview)
* Task 4: Refine the data
### Data Refinery operations

Data Refinery uses two kinds of operations to refine data: GUI operations and coding operations. You will use both kinds of operations in this tutorial.

- [GUI operations](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/gui_operations.html){: new_window} can consist of multiple steps. Select an operation from New step. A subset of the GUI operations is also available from each column's overflow menu ({: iih}). When you open a file in Data Refinery, the Convert column type operation is automatically applied as the first step to convert any non-string data types to inferred data types (for example, to Integer, Date, or Boolean). You can undo or edit this step.
- [Coding operations](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/code_operations.html){: new_window} are interactive templates for coding operations, functions, and logical operators. Most of the operations have interactive help. Click the operation name in the command-line text box to see the coding operations and their syntax options.

 To preview this task, watch the video beginning at 01:16. Refining data is a series of steps to build a Data Refinery flow. As you go through this task, view the Steps panel to follow your progress. You can select a step to delete or edit it. If you make a mistake, you can also click the Undo icon {: iih}. Follow these steps to refine the data:

1. Go back to the Data tab.
2. Select the Year column. Click the Overflow menu ({: iih}) and choose Sort descending.
3. Click Steps to see the new step in the Steps panel.
4. Focus on the delays for a specific airline. This tutorial uses United Airlines (UA), but you can choose any airline.
   1. Click New step, and then choose the GUI operation Filter.
   2. Choose the UniqueCarrier column.
   3. For Operator, choose Is equal to.
   4. For Value, type the string for the airline for which you want to see delay information. For example, UA{: .cp}.

      

   5. Click Apply. Scroll to the UniqueCarrier column to see the results.
5. Create a new column that adds the arrival and departure delay times together.
   1. Select the DepDelay column. Notice that the Convert column type operation was automatically applied as the first step to convert the String data types in all the columns whose values are numbers to Integer data types.
   2. Click New step, and then choose the GUI operation Calculate.
   3. For Operator, choose Addition.
   4. Select Column, and then choose the ArrDelay column.
   5. Select Create new column for results.
   6. For New column name, type TotalDelay{: .cp}.

      

   7. You can position the new column at the end of the list of columns or next to the original column. In this case, select Next to original column.
   8. Click Apply. The new column, TotalDelay, is added.
6. Move the new TotalDelay column to the beginning of the data set:
   1. In the command-line text box, choose the select operation.
   2. Click the word select, and then choose select(<column>, everything()).
   3. Click <column>, and then choose the TotalDelay column. When you finish, the command should look like this: select(TotalDelay, everything())
   4. Click Apply. The TotalDelay column is now the first column.
7. Reduce the data to four columns: Year, Month, DayofMonth, and TotalDelay. Use the group_by coding operation to divide the columns into groups of year, month, and day.
   1. In the command-line text box, choose the group_by operation.
   2. Click <column>, and then choose the Year column.
   3. Before the closing parenthesis, type: ,Month,DayofMonth{: .cp}. When you finish, the command should look like this: group_by(Year,Month,DayofMonth)
   4. Click Apply.
8. Use the select coding operation for the TotalDelay column.
   1. In the command-line text box, select the select operation.
   2. Click <column>, and choose the TotalDelay column. The command should look like this: select(TotalDelay)
   3. Click Apply. The shaped data now consists of the Year, Month, DayofMonth, and TotalDelay columns. The following screen image shows the first four rows of the data.

      

9. Show the mean of the values of the TotalDelay column, and create a new AverageDelay column:
   1. Click New step, and then choose the GUI operation Aggregate.
   2. For the Column, select TotalDelay.
   3. For Operator, select Mean.
   4. For Name of the aggregated column, type AverageDelay{: .cp}.

      {: height="500px"}

   5. Click Apply. The new column AverageDelay is the average of all the delay times.

### {: iih} Check your progress

The following image shows the first four rows of the data.
{: width="100%" }
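The coding operations that you used (select and group_by) follow R dplyr syntax. If you think in Python, the following pandas sketch mirrors the same shaping logic; it is an illustration only, assuming a local copy of airline-data.csv, and it is not how Data Refinery itself executes the flow:

```python
# Rough pandas equivalent of the Data Refinery flow built in this task.
# Illustration only; Data Refinery runs dplyr-style operations internally.
import pandas as pd

df = pd.read_csv("airline-data.csv")  # assumption: a local copy of the data set

shaped = (
    df.sort_values("Year", ascending=False)                        # Sort descending on Year
      .query("UniqueCarrier == 'UA'")                              # Filter: UniqueCarrier is equal to UA
      .assign(TotalDelay=lambda d: d["DepDelay"] + d["ArrDelay"])  # Calculate: Addition
      .groupby(["Year", "Month", "DayofMonth"], as_index=False)    # group_by(Year,Month,DayofMonth)
      .agg(AverageDelay=("TotalDelay", "mean"))                    # Aggregate: Mean of TotalDelay
)

print(shaped.head(4))  # compare with the first four rows shown above
```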
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html?context=cdpaas&locale=en#video-preview)
* Task 5: Run a job for the Data Refinery flow
 To preview this task, watch the video beginning at 04:16. When you run a job for the Data Refinery flow, the steps are run on the entire data set. You select the runtime and add a one-time or repeating schedule. The output of the Data Refinery flow is added to the data assets in the project. Follow these steps to run a job to create the refined data set. 1. From the Data Refinery toolbar, click the Jobs icon, and select Save and create a job.
 1. Type a name and description for the job, and click Next. 1. Select a runtime environment, and click Next. 1. (Optional) Click the toggle button to schedule a run. Specify the date, time and if you would like the job to repeat, and click Next. 1. (Optional) Turn on notifications for this job, and click Next. 1. Review the details, and click Create and run to run the job immediately.
 1. When the job is created, click the job details link in the notification to view the job in your project. Alternatively, you can navigate to the Jobs tab in the project, and click the job name to open it. 1. When the Status for the job is Completed, use the project navigation trail to navigate back to the Assets tab in the project. 1. Click the Data > Data assets section to see the output of the Data Refinery flow, airline-data_shaped.csv. 1. Click the Flows > Data Refinery flows section to see the Data Refinery flow, airline-data.csv_flow. ### {: iih} Check your progress The following image shows the Assets tab with the Data Refinery flow and shaped asset.
{: width="100%" }
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html?context=cdpaas&locale=en#video-preview)
* Task 6: Create another data asset from the Data Refinery flow
 To preview this task, watch the video beginning at 05:26. Follow these steps to further refine the data set by editing the Data Refinery flow: 1. Click airline-data.csv_flow to open the flow in Data Refinery. 1. Sort the AverageDelay column in descending order. 1. Select the AverageDelay column. 1. Click the column Overflow menu ({: iih}), and then select Sort descending. 1. Click the Flow settings icon {: iih}. 1. Click the Target data set panel. 1. Click Edit properties. 1. In the Format target properties dialog, change the data asset name to airline-data_sorted_shaped.csv{: .cp}.
 1. Click Save to return to the Flow settings. 1. Click Apply to save the settings. 1. From the Data Refinery toolbar, click the Jobs icon and select Save and view jobs.
 1. Select the job for the airline data, and then click View. 1. From the Job window toolbar, click the Run job icon.
 ### {: iih} Check your progress The following image shows the completed job details.
{: width="100%" }
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html?context=cdpaas&locale=en#video-preview)
* Task 7: View the data assets and your Data Refinery flow in your project
 To preview this task, watch the video beginning at 06:40. Now follow these steps to view the three data assets, the original, the first refined data set, and the second refined data set: 1. When the job completes, go to the project page. 1. Click the Assets tab. 1. In the Data assets section, you will see the original data set that you uploaded and the output of the two Data Refinery flows. airline-data_sorted_shaped.csvairline-data_csv_shapedairline-data.csv 1. Click the airline-data_csv_shaped data asset to see the mean delay unsorted. Navigate back to the Assets tab. 1. Click airline-data_sorted_shaped.csv data asset to see the mean delay sorted in descending order. Navigate back to the Assets tab. 1. Click the *Flows > Data Refinery flows section shows the Data Refinery flow: airline-data.csv_flow. ### {: iih} Check your progress The following image shows the Assets tab with all of the assets displayed.
{: width="100%" }
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html?context=cdpaas&locale=en#video-preview)
Next steps
Now the data is ready to be used. For example, you or other users can do any of these tasks:
* [Analyze the data in a Jupyter notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-analyze.html)
* [Build and train a model with the data asset](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build.html)
Additional resources
* View more [videos](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/videos-wx.html).
* Find sample data sets, projects, models, prompts, and notebooks in the Samples to gain hands-on experience:
[Notebooks](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=notebook) that you can add to your project to get started analyzing data and building models.
[Projects](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=project-template) that you can import containing notebooks, data sets, prompts, and other assets.
[Data sets](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=dataset) that you can add to your project to refine, analyze, and build models.
[Prompts](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=example-prompt) that you can use in the Prompt Lab to prompt a foundation model.
[Foundation models](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=foundation-model) that you can use in the Prompt Lab.
Parent topic: [Quick start tutorials](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html)
|
8109B6380043CE464115025DD32A7A821FD56DB7 | https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=en | Quick start: Tune a foundation model | Quick start: Tune a foundation model
There are two main reasons to tune a foundation model. By tuning a model on many labeled examples, you can enhance the model's performance compared to prompt engineering alone. And by tuning a base model to perform similarly to a bigger model in the same model family, you can reduce costs by deploying that smaller model.
Required services: Watson Studio and Watson Machine Learning
Your basic workflow includes these tasks:
1. Open a project. Projects are where you can collaborate with others to work with data.
2. Add your data to the project. You can upload data files, or add data from a remote data source through a connection.
3. Create a Tuning experiment in the project. The tuning experiment uses the Tuning Studio experiment builder.
4. Review the results of the experiment and the tuned model. The results include a Loss Function chart and the details of the tuned model.
5. Deploy and test your tuned model. Test your model in the Prompt Lab.
Read about tuning a foundation model
Prompt tuning adjusts the content of the prompt that is passed to the model. The underlying foundation model and its parameters are not edited. Only the prompt input is altered. You tune a model with the Tuning Studio to guide an AI foundation model to return the output you want.
[Read more about Tuning Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-studio.html)
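Conceptually, prompt tuning learns a small set of "soft prompt" vectors that are prepended to the embedded input while every weight of the foundation model stays frozen. The following sketch illustrates that idea in plain NumPy; it describes the prompt-tuning technique in general and is not Tuning Studio's implementation:

```python
# Conceptual illustration of prompt tuning (not Tuning Studio's implementation):
# only the soft-prompt vectors are trainable; the model weights stay frozen.
import numpy as np

vocab_size, embed_dim, num_virtual_tokens = 32000, 512, 20

frozen_embeddings = np.random.rand(vocab_size, embed_dim)    # part of the frozen model
soft_prompt = np.random.rand(num_virtual_tokens, embed_dim)  # the only trainable part

def build_model_input(token_ids):
    """Prepend the learned soft prompt to the embedded input tokens."""
    token_embeddings = frozen_embeddings[token_ids]
    return np.concatenate([soft_prompt, token_embeddings], axis=0)

# During tuning, gradient updates touch only soft_prompt. The foundation model
# itself is never edited, which is why a prompt-tuned model is really the base
# model plus a learned prompt.
example = build_model_input(np.array([17, 204, 5091]))
print(example.shape)  # (20 virtual tokens + 3 input tokens, 512)
```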
Watch a video about tuning a foundation model
 Watch this video to preview the steps in this tutorial. There might be slight differences in the user interface that is shown in the video. The video is intended to be a companion to the written tutorial.
This video provides a visual method to learn the concepts and tasks in this documentation.
Try a tutorial to tune a foundation model
In this tutorial, you will complete these tasks:
* [Task 1: Open a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=en#step01)
* [Task 2: Test your base model](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=en#step02)
* [Task 3: Add your data to the project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=en#step03)
* [Task 4: Create a Tuning experiment in the project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=en#step04)
* [Task 5: Configure the Tuning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=en#step05)
* [Task 6: Deploy your tuned model to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=en#step06)
* [Task 7: Test your tuned model](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=en#step07)
* Tips for completing this tutorial
### Use the video picture-in-picture

Tip: Start the video, then as you scroll through the tutorial, the video moves to picture-in-picture mode. Close the video table of contents for the best experience with picture-in-picture. You can use picture-in-picture mode so you can follow the video as you complete the tasks in this tutorial. Click the timestamps for each task to follow along.

The following animated image shows how to use the video picture-in-picture and table of contents features: {: width="560px" height="315px" data-tearsheet="this"}

### Get help in the community

If you need help with this tutorial, you can ask a question or find an answer in the [Cloud Pak for Data Community discussion forum](https://community.ibm.com/community/user/cloudpakfordata/communities/community-home/digestviewer?communitykey=c0c16ff2-10ef-4b50-ae4c-57d769937235){: new_window}.

### Set up your browser windows

For the optimal experience completing this tutorial, open Cloud Pak for Data in one browser window, and keep this tutorial page open in another browser window to switch easily between the two applications. Consider arranging the two browser windows side-by-side to make it easier to follow along. {: width="560px" height="315px" data-tearsheet="this"}

Tip: If you encounter a guided tour while completing this tutorial in the user interface, click Maybe later.
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=en#video-preview)
* Task 1: Open a project
 To preview this task, watch the video beginning at 00:04. You need a project to store the tuning experiment. Watch a video to see how to create a sandbox project and associate a service. Then follow the steps to verify that you have an existing project or create a sandbox project.
This video provides a visual method to learn the concepts and tasks in this documentation.
### Verify an existing project or create a new project

1. From the watsonx home screen, scroll to the Projects section. If you see any projects listed, skip to [Associate the Watson Machine Learning service](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=en#associate). If you don't see any projects, click Create a sandbox project. When the project is created, you see the sandbox in the Projects section.
2. Open an existing project or the new sandbox project.

### Associate the Watson Machine Learning service with the project

You use Watson Machine Learning to tune the foundation model, so follow these steps to associate your Watson Machine Learning service instance with your project:

1. In the project, click the Manage tab.
2. Click the Services & Integrations page.
3. Check whether this project has an associated Watson Machine Learning service. If there is no associated service, then follow these steps:
   1. Click Associate service.
   2. Check the box next to your Watson Machine Learning service instance.
   3. Click Associate.
   4. If necessary, click Cancel to return to the Services & Integrations page.

For more information or to watch a video, see [Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html){: new_window} and [Adding associated services to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assoc-services.html).

### {: iih} Check your progress

The following image shows the Manage tab with the associated service. You are now ready to test the base model.
{: width="100%" }
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=en#video-preview)
* Task 2: Test your base model
 To preview this task, watch the video beginning at 00:19. You can test your tuned model in the Prompt Lab. Follow these steps to test your tuned model: 1. Return to the watsonx home screen. 1. Verify that your sandbox project is selected. {: biw} 1. Click the Experiment with foundation models and build prompts tile. 1. Select your tuned model. 1. Click the model drop-down list, and select View all foundation models. 1. Select the flan-t5-xl-3b model. 1. Click Select model. 1. On the Structured mode page, type the Instruction: txt Summarize customer complaints 1. Provide the examples and test input. | Example input | Example output | | ----- | ----- | | I forgot in my initial date I was using Capital One and this debt was in their hands and never was done. | Debt collection, sub-product: credit card debt, issue: took or threatened to take negative or legal action sub-issue | | I am a victim of identity theft and this debt does not belong to me. Please see the identity theft report and legal affidavit. | Debt collection, dub-product, I do not know, issue. attempts to collect debt not owed. sub-issue debt was a result of identity theft |
1. In the Try text field, copy and paste the following prompt: txt After I reviewed my credit report, I am still seeing information that is reporting on my credit file that is not mine. please help me in getting these items removed from my credit file. 1. Click Generate, and review the results. 1. Click Save work > Save as. 1. Select Prompt template. 1. For the name, type Base model prompt{: .cp}. 1. Click Save. ### {: iih} Check your progress The following image shows results in the Prompt Lab.
{: width="100%" }
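Behind the scenes, Structured mode combines the instruction, the examples, and the test input into a single prompt for the model. The following sketch shows one way such a few-shot prompt can be assembled by hand; the exact formatting that the Prompt Lab uses may differ, so treat the labels and layout as assumptions:

```python
# Sketch of assembling a few-shot prompt from an instruction, examples, and a
# test input. The Prompt Lab's exact formatting may differ from this layout.
instruction = "Summarize customer complaints"

examples = [
    ("I forgot in my initial date I was using Capital One and this debt was "
     "in their hands and never was done.",
     "Debt collection, sub-product: credit card debt, issue: took or "
     "threatened to take negative or legal action sub-issue"),
]

test_input = ("After I reviewed my credit report, I am still seeing information "
              "that is reporting on my credit file that is not mine.")

parts = [instruction, ""]
for example_input, example_output in examples:
    parts.append(f"Input: {example_input}")
    parts.append(f"Output: {example_output}")
parts.append(f"Input: {test_input}")
parts.append("Output:")

prompt = "\n".join(parts)
print(prompt)
```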
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=en#video-preview)
* Task 3: Add your data to the project
 To preview this task, watch the video beginning at 01:12. You need to add the training data to your project. On the Samples page, you can find the customer complaints data set. This data set includes fictitious data of typical customer complaints regarding credit reports. Follow these steps to add the data set from the Samples to the project: 1. Access the [Customer complaints data set](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/725afa8c-0f58-47ac-b88c-26961c4f20a0){: new_window} on the Samples page. 1. Click Add to project. 1. Select your sandbox project. 1. Click Add. ### {: iih} Check your progress The following image shows the Samples asset added to the project. The next step is to create the Tuning experiment.
{: width="100%" }
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=en#video-preview)
* Task 4: Create a Tuning experiment in the project
 To preview this task, watch the video beginning at 01:32. Now you are ready to create a tuning experiment in your sandbox project that uses the data set you just added to the project. Follow these steps to create a Tuning experiment: 1. Return to the watsonx home screen. 1. Verify that your sandbox project is selected. {: biw} 1. Click Tune a foundation model with labeled data. 1. For the name, type: txt Summarize customer complaints tuned model 1. For the description, type: txt Tuning Studio experiment to tune a foundation model to handle customer complaints. 1. Click Create. The Tuning Studio displays. ### {: iih} Check your progress The following image shows the Tuning experiment open in Tuning Studio. Now you are ready to configure the tuning experiment.
{: width="100%" }
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=en#video-preview)
* Task 5: Configure the Tuning experiment
 To preview this task, watch the video beginning at 01:47. In the Tuning Studio, you can configure the tuning experiment. The foundation model to tune is completed for you. Follow these steps to configure the tuning experiment: 1. For the foundation model to tune, select flan-t5-xl-3b. 1. Select Text for the method to initialize the prompt. There are two options: - Text: Uses text that you specify. - Random: Uses values that are generated for you as part of the tuning experiment. 1. For the Text field, type: txt Summarize the complaint provided into one sentence. The following table shows example text for each task type: | Task type | Example | | ----- | ----- | | Classification | Classify whether the sentiment of each comment is Positive or Negative | | Generation | Make the case for allowing employees to work from home a few days a week | | Summarization | Summarize the main points from a meeting transcript |
1. Select Summarization for the task type that most closely matches what you want the model to do. There are three task types: - Summarization generates text that describes the main ideas that are expressed in a body of text. - Generation generates text such as a promotional email. - Classification predicts categorical labels from features. For example, given a set of customer comments, you might want to label each statement as a question or a problem. When you use the classification task, you need to list the class labels that you want the model to use. Specify the same labels that are used in your tuning training data. 1. Select your training data from the project. 1. Click Select from project. 1. Click Data asset. 1. Select the customer complaints training data.json file. 1. Click Select asset. 1. Click Start tuning. ### {: iih} Check your progress The following image shows the configured tuning experiment. Next, you review the results and deploy the tuned model.
{: width="100%" }
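Prompt-tuning training data is typically a JSON array of objects with input and output fields, and the customer complaints training data.json file follows that pattern. The snippet below sketches the shape with invented placeholder records (not rows from the actual sample data set) and adds a quick validity check:

```python
# Sketch of the training data shape for prompt tuning: a JSON array of
# {"input": ..., "output": ...} pairs. The records are invented placeholders,
# not rows from the customer complaints sample data set.
import json

records = [
    {
        "input": "My credit report still shows an account that I closed two years ago.",
        "output": "Credit reporting complaint: incorrect information on the report.",
    },
    {
        "input": "A collector keeps calling about a debt that is not mine.",
        "output": "Debt collection complaint: attempts to collect debt not owed.",
    },
]

with open("training_data.json", "w") as f:
    json.dump(records, f, indent=2)

# Quick sanity check before starting a tuning experiment.
with open("training_data.json") as f:
    for i, record in enumerate(json.load(f)):
        assert set(record) == {"input", "output"}, f"record {i} has unexpected fields"
print(f"{len(records)} training pairs look well formed")
```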
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=en#video-preview)
* Task 6: Deploy your tuned model to a deployment space
 To preview this task, watch the video beginning at 03:17. When the experiment run is complete, you see the tuned model and the Loss function chart. Loss function measures the difference between predicted and actual results with each training run. Follow these steps to view the loss function chart and the tuned model: 1. Review the Loss function chart. A downward sloping curve means that the model is getting better at generating the expected output. {: biw} 1. Below the chart, click the Summarize customer complaints tuned model. 1. Scroll through the model details. 1. Click Deploy. 1. For the name, type: txt Summarize customer complaints tuned model 1. For the Target deployment space, select an existing deployment space. If you don't have an existing deployment space, follow these steps: 1. For the Target deployment space, select Create a new deployment space. 1. For the deployment space name, type: txt Foundation models deployment space 1. Select a storage service from the list. 1. Select your provisioned machine learning service from the list. 1. Click Create. 1. Click Close. 1. For the Target deployment space, verify that Foundation models deployment space is selected. 1. Check the View deployment in deployment space after creating option. 1. Click Create. 1. On the Deployments page, click the Summarize customer complaints tuned mode deployment to view the details. ### {: iih} Check your progress The following image shows the deployment in the deployment space. You are now ready to test the deployed model.
{: width="100%" }
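For reference, the loss that such charts track for generation tasks is conventionally the average negative log-likelihood (cross-entropy) of the expected output tokens given the input; lower values mean the learned prompt steers the model closer to the training outputs. Tuning Studio's exact objective is not spelled out in this tutorial, so treat the following standard formulation as an assumption for illustration:

$$
\mathcal{L}(\theta) = -\frac{1}{N}\sum_{i=1}^{N}\sum_{t=1}^{T_i} \log p_{\theta}\left(y_{i,t} \mid y_{i,<t},\, x_i\right)
$$

where $x_i$ is a training input, $y_{i,1},\dots,y_{i,T_i}$ are the tokens of its expected output, and $\theta$ contains only the trainable soft-prompt parameters.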
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=en#video-preview)
* Task 7: Test your tuned model
 To preview this task, watch the video beginning at 04:04. You can test your tuned model in the Prompt Lab. Follow these steps to test your tuned model: 1. From the model deployment page, click Open in prompt lab, and then select your sandbox project. The Prompt Lab displays. 1. Select your tuned model. 1. Click the model drop-down list, and select View all foundation models. 1. Select the Summarize customer complaints tuned model model. 1. Click Select model. 1. On the Structured mode page, type the Instruction: Summarize customer complaints{: .cp} 1. On the Structured mode page, provide the examples and test input. | Example input | Example output | | ----- | ----- | | I forgot in my initial date I was using Capital One and this debt was in their hands and never was done. | Debt collection, sub-product: credit card debt, issue: took or threatened to take negative or legal action sub-issue | | I am a victim of identity theft and this debt does not belong to me. Please see the identity theft report and legal affidavit. | Debt collection, dub-product, I do not know, issue. attempts to collect debt not owed. sub-issue debt was a result of identity theft |
1. In the Try text field, copy and paste the following prompt: txt After I reviewed my credit report, I am still seeing information that is reporting on my credit file that is not mine. please help me in getting these items removed from my credit file. 1. Click Generate, and review the results. ### {: iih} Check your progress The following image shows results in the Prompt Lab.
{: width="100%" }
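You can also call the deployed tuned model programmatically. The following is a minimal sketch, assuming the `ibm-watsonx-ai` Python library; the API key, space ID, and deployment ID are placeholders, and the library's current interface may differ in detail, so check the Python library documentation under Additional resources:

```python
# Minimal sketch of calling a deployed tuned model, assuming the
# ibm-watsonx-ai Python library. All IDs below are placeholders.
from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.foundation_models import ModelInference

credentials = Credentials(
    url="https://us-south.ml.cloud.ibm.com",
    api_key="YOUR_API_KEY",              # placeholder
)

tuned_model = ModelInference(
    deployment_id="YOUR_DEPLOYMENT_ID",  # placeholder: from the deployment details page
    credentials=credentials,
    space_id="YOUR_SPACE_ID",            # placeholder: the deployment space ID
)

complaint = ("After I reviewed my credit report, I am still seeing information "
             "that is reporting on my credit file that is not mine.")
print(tuned_model.generate_text(prompt=complaint))
```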
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=en#video-preview)
Next steps
Try these other tutorials:
* [Prompt a foundation model in the Prompt Lab tutorial](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html)
* [Prompt a foundation model with retrieval-augmented generation pattern](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html)
Additional resources
* [Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html)
* [Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html)
* [Python library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-python-lib.html)
* [Security and privacy for foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-security.html)
* View more [videos](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/videos-wx.html).
* Find sample data sets, projects, models, prompts, and notebooks in the Samples to gain hands-on experience:
[Notebooks](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=notebook) that you can add to your project to get started analyzing data and building models.
[Projects](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=project-template) that you can import containing notebooks, data sets, prompts, and other assets.
[Data sets](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=dataset) that you can add to your project to refine, analyze, and build models.
[Prompts](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=example-prompt) that you can use in the Prompt Lab to prompt a foundation model.
[Foundation models](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=foundation-model) that you can use in the Prompt Lab.
Parent topic:[Quick start tutorials](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html)
| # Quick start: Tune a foundation model #
There are a couple of reasons to tune your foundation model\. By tuning a model on many labeled examples, you can enhance the model performance compared to prompt engineering alone\. By tuning a base model to perform similarly to a bigger model in the same model family, you can reduce costs by deploying that smaller model\.
**Required services**: Watson Studio and Watson Machine Learning
Your basic workflow includes these tasks:
<!-- <ol> -->
1. Open a project\. Projects are where you can collaborate with others to work with data\.
2. Add your data to the project\. You can upload data files, or add data from a remote data source through a connection\.
3. Create a Tuning experiment in the project\. The tuning experiment uses the Tuning Studio experiment builder\.
4. Review the results of the experiment and the tuned model\. The results include a Loss Function chart and the details of the tuned model\.
5. Deploy and test your tuned model\. Test your model in the Prompt Lab\.
<!-- </ol> -->
## Read about tuning a foundation model ##
Prompt tuning adjusts the content of the prompt that is passed to the model\. The underlying foundation model and its parameters are not edited\. Only the prompt input is altered\. You tune a model with the Tuning Studio to guide an AI foundation model to return the output you want\.
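To make the idea concrete, here is a minimal conceptual sketch of soft prompt tuning in Python. It is illustrative only, not the Tuning Studio implementation, and the array names and sizes are assumptions for the example.

```python
# Conceptual sketch of prompt tuning: trainable "soft prompt" vectors are
# prepended to the embedded input, and only those vectors are updated
# during training. The foundation model's own weights stay frozen.
import numpy as np

rng = np.random.default_rng(seed=0)
embed_dim, prompt_len, seq_len = 16, 4, 8  # illustrative sizes

soft_prompt = rng.normal(size=(prompt_len, embed_dim))    # trainable parameters
token_embeddings = rng.normal(size=(seq_len, embed_dim))  # frozen embedded input

# The frozen model receives the prompt vectors followed by the input tokens.
model_input = np.concatenate([soft_prompt, token_embeddings], axis=0)
print(model_input.shape)  # (12, 16): prompt_len + seq_len rows

# During tuning, gradient updates flow only into soft_prompt; the
# underlying foundation model is never edited.
```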
[Read more about Tuning Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-studio.html)
## Watch a video about tuning a foundation model ##
 Watch this video to preview the steps in this tutorial\. There might be slight differences in the user interface that is shown in the video\. The video is intended to be a companion to the written tutorial\.
This video provides a visual method to learn the concepts and tasks in this documentation\.
## Try a tutorial to tune a foundation model ##
In this tutorial, you will complete these tasks:
<!-- <ul> -->
* [Task 1: Open a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=en#step01)
* [Task 2: Test your base model](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=en#step02)
* [Task 3: Add your data to the project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=en#step03)
* [Task 4: Create a Tuning experiment in the project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=en#step04)
* [Task 5: Configure the Tuning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=en#step05)
* [Task 6: Deploy your tuned model to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=en#step06)
* [Task 7: Test your tuned model](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=en#step07)
<!-- </ul> -->
<!-- <ul> -->
* Tips for completing this tutorial
\#\#\# Use the video picture-in-picture Tip: Start the video, then as you scroll through the tutorial, the video moves to picture-in-picture mode. Close the video table of contents for the best experience with picture-in-picture. Use picture-in-picture mode to follow the video as you complete the tasks in this tutorial. Click the timestamps for each task to follow along. \#\#\# Get help in the community If you need help with this tutorial, you can ask a question or find an answer in the [Cloud Pak for Data Community discussion forum](https://community.ibm.com/community/user/cloudpakfordata/communities/community-home/digestviewer?communitykey=c0c16ff2-10ef-4b50-ae4c-57d769937235). \#\#\# Set up your browser windows For the optimal experience completing this tutorial, open Cloud Pak for Data in one browser window, and keep this tutorial page open in another browser window to switch easily between the two applications. Consider arranging the two browser windows side-by-side to make it easier to follow along. Tip: If you encounter a guided tour while completing this tutorial in the user interface, click **Maybe later**.
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=en#video-preview)
<!-- </ul> -->
<!-- <ul> -->
* Task 1: Open a project
 To preview this task, watch the video beginning at 00:04. You need a project to store the tuning experiment. Watch a video to see how to create a sandbox project and associate a service. Then follow the steps to verify that you have an existing project or create a sandbox project.
This video provides a visual method to learn the concepts and tasks in this documentation.
\#\#\# Verify an existing project or create a new project 1. From the watsonx home screen, scroll to the *Projects* section. If you see any projects listed, then skip to [Associate the Watson Machine Learning service](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=en#associate). If you don't see any projects, then follow these steps to create a project. 1. Click **Create a sandbox project**. When the project is created, you see the sandbox in the *Projects* section. 1. Open an existing project or the new sandbox project. \#\#\# Associate the Watson Machine Learning service with the project You use Watson Machine Learning to tune the foundation model, so follow these steps to associate your Watson Machine Learning service instance with your project. 1. In the project, click the **Manage** tab. 1. Click the **Services & Integrations** page. 1. Check whether this project has an associated Watson Machine Learning service. If there is no associated service, then follow these steps: 1. Click **Associate service**. 1. Check the box next to your **Watson Machine Learning** service instance. 1. Click **Associate**. 1. If necessary, click **Cancel** to return to the *Services & Integrations* page. For more information or to watch a video, see [Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html) and [Adding associated services to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assoc-services.html). \#\#\# Check your progress The following image shows the *Manage* tab with the associated service. You are now ready to test the base model.
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=en#video-preview)
<!-- </ul> -->
<!-- <ul> -->
* Task 2: Test your base model
 To preview this task, watch the video beginning at 00:19. You can test the base model in the Prompt Lab before tuning it. Follow these steps to test the base model: 1. Return to the watsonx home screen. 1. Verify that your sandbox project is selected. 1. Click the **Experiment with foundation models and build prompts** tile. 1. Select the base model. 1. Click the model drop-down list, and select **View all foundation models**. 1. Select the **flan-t5-xl-3b** model. 1. Click **Select model**. 1. On the *Structured mode* page, type the *Instruction*: `Summarize customer complaints` 1. Provide the examples and test input. \| Example input \| Example output \| \| ----- \| ----- \| \| I forgot in my initial date I was using Capital One and this debt was in their hands and never was done. \| Debt collection, sub-product: credit card debt, issue: took or threatened to take negative or legal action, sub-issue \| \| I am a victim of identity theft and this debt does not belong to me. Please see the identity theft report and legal affidavit. \| Debt collection, sub-product: I do not know, issue: attempts to collect debt not owed, sub-issue: debt was a result of identity theft \|
1. In the *Try* text field, copy and paste the following prompt: `After I reviewed my credit report, I am still seeing information that is reporting on my credit file that is not mine. please help me in getting these items removed from my credit file.` 1. Click **Generate**, and review the results. 1. Click **Save work > Save as**. 1. Select **Prompt template**. 1. For the name, type `Base model prompt`. 1. Click **Save**. \#\#\# Check your progress The following image shows results in the Prompt Lab.
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=en#video-preview)
<!-- </ul> -->
<!-- <ul> -->
* Task 3: Add your data to the project
 To preview this task, watch the video beginning at 01:12. You need to add the training data to your project. On the Samples page, you can find the customer complaints data set. This data set includes fictitious data of typical customer complaints regarding credit reports. Follow these steps to add the data set from the Samples to the project: 1. Access the [Customer complaints data set](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/725afa8c-0f58-47ac-b88c-26961c4f20a0) on the Samples page. 1. Click **Add to project**. 1. Select your sandbox project. 1. Click **Add**. \#\#\# Check your progress The following image shows the Samples asset added to the project. The next step is to create the Tuning experiment.
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=en#video-preview)
<!-- </ul> -->
<!-- <ul> -->
* Task 4: Create a Tuning experiment in the project
 To preview this task, watch the video beginning at 01:32. Now you are ready to create a tuning experiment in your sandbox project that uses the data set you just added to the project. Follow these steps to create a Tuning experiment: 1. Return to the watsonx home screen. 1. Verify that your sandbox project is selected. 1. Click **Tune a foundation model with labeled data**. 1. For the name, type: `Summarize customer complaints tuned model` 1. For the description, type: `Tuning Studio experiment to tune a foundation model to handle customer complaints.` 1. Click **Create**. The Tuning Studio displays. \#\#\# Check your progress The following image shows the Tuning experiment open in Tuning Studio. Now you are ready to configure the tuning experiment.
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=en#video-preview)
<!-- </ul> -->
<!-- <ul> -->
* Task 5: Configure the Tuning experiment
 To preview this task, watch the video beginning at 01:47. In the Tuning Studio, you can configure the tuning experiment. The foundation model to tune is completed for you. Follow these steps to configure the tuning experiment: 1. For the foundation model to tune, select **flan-t5-xl-3b**. 1. Select **Text** for the method to initialize the prompt. There are two options: - Text: Uses text that you specify. - Random: Uses values that are generated for you as part of the tuning experiment. 1. For the *Text* field, type: `Summarize the complaint provided into one sentence.` The following table shows example text for each task type: \| Task type \| Example \| \| ----- \| ----- \| \| Classification \| Classify whether the sentiment of each comment is Positive or Negative \| \| Generation \| Make the case for allowing employees to work from home a few days a week \| \| Summarization \| Summarize the main points from a meeting transcript \|
1. Select **Summarization** for the task type that most closely matches what you want the model to do. There are three task types: - *Summarization* generates text that describes the main ideas that are expressed in a body of text. - *Generation* generates text such as a promotional email. - *Classification* predicts categorical labels from features. For example, given a set of customer comments, you might want to label each statement as a question or a problem. When you use the classification task, you need to list the class labels that you want the model to use. Specify the same labels that are used in your tuning training data. 1. Select your training data from the project. 1. Click **Select from project**. 1. Click **Data asset**. 1. Select the **customer complaints training data.json** file. A sketch of this file's expected format follows these steps. 1. Click **Select asset**. 1. Click **Start tuning**. \#\#\# Check your progress The following image shows the configured tuning experiment. Next, you review the results and deploy the tuned model.
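The training data file pairs each example input with the expected output. The following sketch shows one way to generate such a file in Python; the field names (`input` and `output`) and the file layout are assumptions for illustration, so check the Tuning Studio documentation for the exact schema that your release expects.

```python
# Illustrative only: writes a small training data file of input/output
# pairs like the customer complaints sample. Field names are assumed.
import json

examples = [
    {
        "input": "I am a victim of identity theft and this debt does not belong to me.",
        "output": "Debt collection, sub-product: I do not know, issue: attempts to collect debt not owed.",
    },
    {
        "input": "I am still seeing information on my credit file that is not mine.",
        "output": "Credit reporting, issue: incorrect information on report.",
    },
]

with open("customer_complaints_training_data.json", "w", encoding="utf-8") as f:
    json.dump(examples, f, indent=2)
```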
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=en#video-preview)
<!-- </ul> -->
<!-- <ul> -->
* Task 6: Deploy your tuned model to a deployment space
 To preview this task, watch the video beginning at 03:17. When the experiment run is complete, you see the tuned model and the Loss function chart. The loss function measures the difference between predicted and actual results with each training run. Follow these steps to view the loss function chart and the tuned model: 1. Review the Loss function chart. A downward sloping curve means that the model is getting better at generating the expected output. 1. Below the chart, click the **Summarize customer complaints** tuned model. 1. Scroll through the model details. 1. Click **Deploy**. 1. For the name, type: `Summarize customer complaints tuned model` 1. For the *Target deployment space*, select an existing deployment space. If you don't have an existing deployment space, follow these steps: 1. For the *Target deployment space*, select **Create a new deployment space**. 1. For the deployment space name, type: `Foundation models deployment space` 1. Select a storage service from the list. 1. Select your provisioned machine learning service from the list. 1. Click **Create**. 1. Click **Close**. 1. For the *Target deployment space*, verify that **Foundation models deployment space** is selected. 1. Check the **View deployment in deployment space after creating** option. 1. Click **Create**. 1. On the *Deployments* page, click the **Summarize customer complaints tuned model** deployment to view the details. \#\#\# Check your progress The following image shows the deployment in the deployment space. You are now ready to test the deployed model.
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=en#video-preview)
<!-- </ul> -->
<!-- <ul> -->
* Task 7: Test your tuned model
 To preview this task, watch the video beginning at 04:04. You can test your tuned model in the Prompt Lab. Follow these steps to test your tuned model: 1. From the model deployment page, click **Open in prompt lab**, and then select your sandbox project. The Prompt Lab displays. 1. Select your tuned model. 1. Click the model drop-down list, and select **View all foundation models**. 1. Select the **Summarize customer complaints tuned model**. 1. Click **Select model**. 1. On the *Structured mode* page, type the *Instruction*: `Summarize customer complaints` 1. On the *Structured mode* page, provide the examples and test input. \| Example input \| Example output \| \| ----- \| ----- \| \| I forgot in my initial date I was using Capital One and this debt was in their hands and never was done. \| Debt collection, sub-product: credit card debt, issue: took or threatened to take negative or legal action, sub-issue \| \| I am a victim of identity theft and this debt does not belong to me. Please see the identity theft report and legal affidavit. \| Debt collection, sub-product: I do not know, issue: attempts to collect debt not owed, sub-issue: debt was a result of identity theft \|
1. In the *Try* text field, copy and paste the following prompt: `After I reviewed my credit report, I am still seeing information that is reporting on my credit file that is not mine. please help me in getting these items removed from my credit file.` 1. Click **Generate**, and review the results. \#\#\# Check your progress The following image shows results in the Prompt Lab. If you prefer to work programmatically, a scoring sketch follows.
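You can also call the deployed model programmatically instead of using the Prompt Lab. The sketch below is a rough outline: the IAM token exchange is the standard IBM Cloud flow, but the endpoint URL and the payload shape shown here are placeholders and assumptions, so copy the real endpoint and request body from the deployment's API reference in the deployment space.

```python
# Rough sketch of scoring a deployed model over REST. The ENDPOINT value
# and the payload shape are assumptions; take the real values from the
# deployment details page in the deployment space.
import requests

API_KEY = "YOUR_IBM_CLOUD_API_KEY"
ENDPOINT = "https://<region>.ml.cloud.ibm.com/ml/v1/deployments/<deployment_id>/text/generation?version=2023-05-29"  # placeholder

# Exchange the API key for an IAM bearer token (standard IBM Cloud IAM flow).
iam_response = requests.post(
    "https://iam.cloud.ibm.com/identity/token",
    data={"grant_type": "urn:ibm:params:oauth:grant-type:apikey", "apikey": API_KEY},
)
token = iam_response.json()["access_token"]

payload = {"input": "After I reviewed my credit report, I am still seeing information that is not mine."}
response = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
    json=payload,
)
print(response.json())
```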
[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=en#video-preview)
<!-- </ul> -->
## Next steps ##
Try these other tutorials:
<!-- <ul> -->
* [Prompt a foundation model in the Prompt Lab tutorial](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html)
* [Prompt a foundation model with retrieval\-augmented generation pattern](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html)
<!-- </ul> -->
## Additional resources ##
<!-- <ul> -->
* [Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html)
* [Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html)
* [Python library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-python-lib.html)
* [Security and privacy for foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-security.html)
* View more [videos](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/videos-wx.html)\.
* Find sample data sets, projects, models, prompts, and notebooks in the Samples to gain hands\-on experience:
[Notebooks](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=notebook) that you can add to your project to get started analyzing data and building models.
[Projects](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=project-template) that you can import containing notebooks, data sets, prompts, and other assets.
[Data sets](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=dataset) that you can add to your project to refine, analyze, and build models.
[Prompts](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=example-prompt) that you can use in the Prompt Lab to prompt a foundation model.
[Foundation models](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=foundation-model) that you can use in the Prompt Lab.
<!-- </ul> -->
**Parent topic:**[Quick start tutorials](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html)
<!-- </article "role="article" "> -->
|
F495F5206C908FB1A31F18A8AB3CE9465164564C | https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-wdp.html?context=cdpaas&locale=en | Getting started with IBM watsonx as a Service | Getting started with IBM watsonx as a Service
You can sign up for IBM watsonx.ai or IBM watsonx.governance and explore the tutorials, resources, and tools to immediately get started working with models or governing models. If you are an administrator, follow the steps to set up watsonx for your organization.
Start working
To start working:
1. If you haven't already, [sign up](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/signup-wx.html) for watsonx.ai or watsonx.governance.
2. Click a task tile on the watsonx home page and start working. For example, click Experiment with foundation models and build prompts to open the Prompt Lab. Then, choose a sample prompt and start experimenting. Your first project, where you save your work, is created automatically. See [Your sandbox project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/sandbox.html).
3. Explore your resources:
* Take a [Quick start tutorial](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html)
* Click a category in the Samples area of the home page to try out a notebook, a prompt, or a sample project.
If you are an existing Cloud Pak for Data as a Service user, you can [switch to watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/platform-switcher.html).
Set up the platform as an administrator
To set up the watsonx platform for your organization, see [Setting up the platform as an administrator](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-platform.html).
Learn about watsonx
To understand watsonx, start with these resources:
* [Overview of watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/overview-wx.html)
* [Video library](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/videos-wx.html)
* [Projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html)
* [Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-overview.html)
* [Read blogs on Medium and the IBM Community](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-help.html#community)
Other information:
* [Get help](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-help.html)
* [Browser support](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/browser-support.html)
* [Language support](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/localization.html)
* [IBM watsonx APIs](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wdp-apis.html)
| # Getting started with IBM watsonx as a Service #
You can sign up for IBM watsonx\.ai or IBM watsonx\.governance and explore the tutorials, resources, and tools to immediately get started working with models or governing models\. If you are an administrator, follow the steps to set up watsonx for your organization\.
## Start working ##
To start working:
<!-- <ol> -->
1. If you haven't already, [sign up](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/signup-wx.html) for watsonx\.ai or watsonx\.governance\.
2. Click a task tile on the watsonx home page and start working\. For example, click **Experiment with foundation models and build prompts** to open the Prompt Lab\. Then, choose a sample prompt and start experimenting\. Your first project, where you save your work, is created automatically\. See [Your sandbox project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/sandbox.html)\.
3. Explore your resources:
<!-- <ul> -->
* Take a [Quick start tutorial](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html)
* Click a category in the **Samples** area of the home page to try out a notebook, a prompt, or a sample project.
<!-- </ul> -->
<!-- </ol> -->
If you are an existing Cloud Pak for Data as a Service user, you can [switch to watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/platform-switcher.html)\.
## Set up the platform as an administrator ##
To set up the watsonx platform for your organization, see [Setting up the platform as an administrator](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-platform.html)\.
## Learn about watsonx ##
To understand watsonx, start with these resources:
<!-- <ul> -->
* [Overview of watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/overview-wx.html)
* [Video library](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/videos-wx.html)
* [Projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html)
* [Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-overview.html)
* [Read blogs on Medium and the IBM Community](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-help.html#community)
<!-- </ul> -->
Other information:
<!-- <ul> -->
* [Get help](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-help.html)
* [Browser support](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/browser-support.html)
* [Language support](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/localization.html)
* [IBM watsonx APIs](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wdp-apis.html)
<!-- </ul> -->
<!-- </article "role="article" "> -->
|
AAE40F1CC335A650C1EB806E404394DA596FB433 | https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/known-issues.html?context=cdpaas&locale=en | Known issues and limitations | Known issues and limitations
The following limitations and known issues apply to watsonx.
* [Regional limitations](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/region-lims.html)
* [Notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/known-issues.html?context=cdpaas&locale=en#notebooks)
* [Machine learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/known-issues.html?context=cdpaas&locale=en#wmlissues)
* [SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/known-issues.html?context=cdpaas&locale=en#spssissues)
* [Connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/known-issues.html?context=cdpaas&locale=en#connectissues)
* [Watson Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/known-issues.html?context=cdpaas&locale=en#pipeline-issues)
* [watsonx.governance](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/known-issues.html?context=cdpaas&locale=en#xgov-issues)
Notebook issues
You might encounter some of these issues when getting started with and using notebooks.
Manual installation of some tensor libraries is not supported
Some TensorFlow libraries are preinstalled, but if you try to install additional TensorFlow libraries yourself, you get an error.
Connection to notebook kernel is taking longer than expected after running a code cell
If you try to reconnect to the kernel and immediately run a code cell (or if the kernel reconnection happened during code execution), the notebook doesn't reconnect to the kernel and no output is displayed for the code cell. You need to manually reconnect to the kernel by clicking Kernel > Reconnect. When the kernel is ready, you can try running the code cell again.
Using the predefined sqlContext object in multiple notebooks causes an error
You might receive an Apache Spark error if you use the predefined sqlContext object in multiple notebooks. Create a new sqlContext object for each notebook. See [this Stack Overflow explanation](http://stackoverflow.com/questions/38117849/you-must-build-spark-with-hive-export-spark-hive-true/38118112#38118112).
Connection failed message
If your kernel stops, your notebook is no longer automatically saved. To save it, click File > Save manually, and you should get a Notebook saved message in the kernel information area, which appears before the Spark version. If you get a message that the kernel failed, to reconnect your notebook to the kernel click Kernel > Reconnect. If nothing you do restarts the kernel and you can't save the notebook, you can download it to save your changes by clicking File > Download as > Notebook (.ipynb). Then you need to create a new notebook based on your downloaded notebook file.
Hyperlinks to notebook sections don't work in preview mode
If your notebook contains sections that you link to from an introductory section at the top of the notebook for example, the links to these sections will not work if the notebook was opened in view-only mode in Firefox. However, if you open the notebook in edit mode, these links will work.
Can't connect to notebook kernel
If you try to run a notebook and you see the message Connecting to Kernel, followed by Connection failed. Reconnecting and finally by a connection failed error message, the reason might be that your firewall is blocking the notebook from running.
If Watson Studio is installed behind a firewall, you must add the WebSocket connection wss://dataplatform.cloud.ibm.com to the firewall settings. Enabling this WebSocket connection is required when you're using notebooks and RStudio.
Insufficient resources available error when opening or editing a notebook
If you see the following message when opening or editing a notebook, the environment runtime associated with your notebook has resource issues:
Insufficient resources available
A runtime instance with the requested configuration can't be started at this time because the required hardware resources aren't available.
Try again later or adjust the requested sizes.
To find the cause, try checking the status page for IBM Cloud incidents affecting Watson Studio. Additionally, you can open a support case at the IBM Cloud Support portal.
Machine learning issues
You might encounter some of these issues when working with machine learning tools.
Region requirements
You can only associate a Watson Machine Learning service instance with your project when the Watson Machine Learning service instance and the Watson Studio instance are located in the same region.
Accessing links if you create a service instance while associating a service with a project
While you are associating a Watson Machine Learning service with a project, you have the option of creating a new service instance. If you choose to create a new service, the links on the service page might not work. To access the service terms, APIs, and documentation, right-click the links to open them in new windows.
Federated Learning assets cannot be searched in All assets, search results, or filter results in the new projects UI
You cannot search Federated Learning assets from the All assets view, the search results, or the filter results of your project.
Workaround: Click the Federated Learning asset to open the tool.
Deployment issues
* A deployment that is inactive (no scores) for a set time (24 hours for the free plan or 120 hours for a paid plan) is automatically hibernated. When a new scoring request is submitted, the deployment is reactivated and the score request is served. Expect a brief delay of 1 to 60 seconds for the first score request after activation, depending on the model framework.
* For some frameworks, such as SPSS Modeler, the first score request for a deployed model after hibernation might result in a 504 error. If this happens, submit the request again; subsequent requests should succeed.
Watson Machine Learning limitations
AutoAI known limitations
* Currently, AutoAI experiments do not support double-byte character sets. AutoAI only supports CSV files with ASCII characters. Users must convert any non-ASCII characters in the file name or content, and provide input data as a CSV as defined in [this CSV standard](https://tools.ietf.org/html/rfc4180).
* To interact programmatically with an AutoAI model, use the REST API instead of the Python client. The APIs for the Python client required to support AutoAI are not generally available at this time.
Data module not found in IBM Federated Learning
The data handler for IBM Federated Learning is trying to extract a data module from the FL library but is unable to find it. You might see the following error message:
ModuleNotFoundError: No module named 'ibmfl.util.datasets'
The issue might result from using an outdated DataHandler. Review and update your DataHandler to conform to the latest spec. See the most recent [MNIST data handler](https://github.com/IBMDataScience/sample-notebooks/blob/master/Files/mnist_keras_data_handler.py) for reference, or ensure that your sample versions are up to date.
SPSS Modeler issues
You might encounter some of these issues when working in SPSS Modeler.
SPSS Modeler runtime restrictions
Watson Studio does not include SPSS functionality in Peru, Ecuador, Colombia and Venezuela.
Merge node and unicode characters
The Merge node treats the following very similar Japanese characters as the same character.

Connection issues
You might encounter this issue when working with connections.
Cloudera Impala connection does not work with LDAP authentication
If you create a connection to a Cloudera Impala data source and the Cloudera Impala server is set up for LDAP authentication, the username and password authentication method in IBM watsonx will not work.
Workaround: Disable the Enable LDAP Authentication option on the Impala server. See [Configuring LDAP Authentication](https://docs.cloudera.com/cdp-private-cloud-base/latest/impala-secure/topics/impala-ldap.html) in the Cloudera documentation.
Watson Pipelines known issues
These issues pertain to Watson Pipelines.
Nesting loops more than 2 levels can result in pipeline error
Nesting loops more than 2 levels can result in an error when you run the pipeline, such as Error retrieving the run. Reviewing the logs can show an error such as text in text not resolved: neither pipeline_input nor node_output. If you are looping with output from a Bash script, the log might list an error like this: PipelineLoop can't be run; it has an invalid spec: non-existent variable in $(params.run-bash-script-standard-output). To resolve the problem, do not nest loops more than 2 levels.
Asset browser does not always reflect count for total numbers of asset type
When selecting an asset from the asset browser, such as choosing a source for a Copy node, you see that some of the assets list the total number of that asset type available, but notebooks do not. That is a current limitation.
Cannot delete pipeline versions
Currently, you cannot delete saved versions of pipelines that you no longer need.
Deleting an AutoAI experiment fails under some conditions
Using a Delete AutoAI experiment node to delete an AutoAI experiment that was created from the Projects UI does not delete the AutoAI asset. However, the rest of the flow can complete successfully.
Cache appears enabled but is not enabled
If the Copy assets Pipelines node's Copy mode is set to Overwrite, cache is displayed as enabled but remains disabled.
Watson Pipelines limitations
These limitations apply to Watson Pipelines.
* [Single pipeline limits](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/known-issues.html?context=cdpaas&locale=en#pipeline-limits)
* [Limitations by configuration size](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/known-issues.html?context=cdpaas&locale=en#config-size)
* [Input and output size limits](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/known-issues.html?context=cdpaas&locale=en#input-limit)
* [Batch input limited to data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/known-issues.html?context=cdpaas&locale=en#batch-input)
Single pipeline limits
These limitations apply to a single pipeline, regardless of configuration.
* Any single pipeline cannot contain more than 120 standard nodes
* Any pipeline with a loop cannot contain more than 600 nodes across all iterations (for example, 60 iterations - 10 nodes each)
Limitations by configuration size
Small configuration
A SMALL configuration supports 600 standard nodes (across all active pipelines) or 300 nodes run in a loop. For example:
* 30 standard pipelines with 20 nodes run in parallel = 600 standard nodes
* 3 pipelines containing a loop with 10 iterations and 10 nodes in each iteration = 300 nodes in a loop
Medium configuration
A MEDIUM configuration supports 1200 standard nodes (across all active pipelines) or 600 nodes run in a loop. For example:
* 30 standard pipelines with 40 nodes run in parallel = 1200 standard nodes
* 6 pipelines containing a loop with 10 iterations and 10 nodes in each iteration = 600 nodes in a loop
Large configuration
A LARGE configuration supports 4800 standard nodes (across all active pipelines) or 2400 nodes run in a loop. For example:
* 80 standard pipelines with 60 nodes run in parallel = 4800 standard nodes
* 24 pipelines containing a loop with 10 iterations and 10 nodes in each iteration = 2400 nodes in a loop
Input and output size limits
Input and output values, which include pipeline parameters, user variables, and generic node inputs and outputs, cannot exceed 10 KB of data.
Batch input limited to data assets
Currently, input for batch deployment jobs is limited to data assets. This means that certain types of deployments, which require JSON input or multiple files as input, are not supported. For example, SPSS models and Decision Optimization solutions that require multiple files as input are not supported.
Issues with Cloud Object Storage
These issues apply to working with Cloud Object Storage.
Issues with Cloud Object Storage when Key Protect is enabled
Key Protect in conjunction with Cloud Object Storage is not supported for working with Watson Machine Learning assets. If you are using Key Protect, you might encounter these issues when you are working with assets in Watson Studio.
* Training or saving these Watson Machine Learning assets might fail:
* AutoAI
* Federated Learning
* Watson Pipelines
* You might be unable to save an SPSS model or a notebook model to a project
Issues with watsonx.governance
Delay showing prompt template deployment data in a factsheet
When a deployment is created for a prompt template, the facts for the deployment are not added to the factsheet immediately. You must first evaluate the deployment or view the lifecycle tracking page to add the facts to the factsheet.
Display issues for existing Factsheet users
If you previously used factsheets with IBM Knowledge Catalog and you create a new AI use case in watsonx.governance, you might see some display issues, such as duplicate Risk level fields in the General information and Details sections of the AI use case interface.
To resolve display problems, update the model_entry_user asset type definition. For details on updating a use case programmatically, see [Customizing details for a use case or factsheet](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-customize-user-facts.html).
Redundant attachment links in factsheet
A factsheet tracks all of the events for an asset over all phases of the lifecycle. Attachments show up in each stage, creating some redundancy in the factsheet.
| # Known issues and limitations #
The following limitations and known issues apply to watsonx\.
<!-- <ul> -->
* [**Regional limitations**](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/region-lims.html)
* [Notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/known-issues.html?context=cdpaas&locale=en#notebooks)
* [Machine learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/known-issues.html?context=cdpaas&locale=en#wmlissues)
* [SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/known-issues.html?context=cdpaas&locale=en#spssissues)
* [Connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/known-issues.html?context=cdpaas&locale=en#connectissues)
* [Watson Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/known-issues.html?context=cdpaas&locale=en#pipeline-issues)
* [watsonx\.governance](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/known-issues.html?context=cdpaas&locale=en#xgov-issues)
<!-- </ul> -->
## Notebook issues ##
You might encounter some of these issues when getting started with and using notebooks\.
### Manual installation of some tensor libraries is not supported ###
Some TensorFlow libraries are preinstalled, but if you try to install additional TensorFlow libraries yourself, you get an error\.
### Connection to notebook kernel is taking longer than expected after running a code cell ###
If you try to reconnect to the kernel and immediately run a code cell (or if the kernel reconnection happened during code execution), the notebook doesn't reconnect to the kernel and no output is displayed for the code cell\. You need to manually reconnect to the kernel by clicking **Kernel** > **Reconnect**\. When the kernel is ready, you can try running the code cell again\.
### Using the predefined sqlContext object in multiple notebooks causes an error ###
You might receive an Apache Spark error if you use the predefined sqlContext object in multiple notebooks\. Create a new sqlContext object for each notebook\. See [this Stack Overflow explanation](http://stackoverflow.com/questions/38117849/you-must-build-spark-with-hive-export-spark-hive-true/38118112#38118112)\.
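A minimal sketch of creating a notebook-local context, assuming PySpark is available in the notebook runtime:

```python
# Create a notebook-local SQLContext instead of sharing the predefined
# sqlContext object across notebooks. SQLContext is used here because the
# issue text refers to it; in newer Spark code you would typically work
# with the SparkSession directly.
from pyspark.sql import SparkSession, SQLContext

spark = SparkSession.builder.getOrCreate()
sqlContext = SQLContext(spark.sparkContext)

df = sqlContext.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])
df.show()
```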
### Connection failed message ###
If your kernel stops, your notebook is no longer automatically saved\. To save it, click **File** > **Save** manually, and you should get a **Notebook saved** message in the kernel information area, which appears before the Spark version\. If you get a message that the kernel failed, to reconnect your notebook to the kernel click **Kernel** > **Reconnect**\. If nothing you do restarts the kernel and you can't save the notebook, you can download it to save your changes by clicking **File** > **Download as** > **Notebook (\.ipynb)**\. Then you need to create a new notebook based on your downloaded notebook file\.
### Hyperlinks to notebook sections don't work in preview mode ###
If your notebook contains sections that you link to from an introductory section at the top of the notebook for example, the links to these sections will not work if the notebook was opened in view\-only mode in Firefox\. However, if you open the notebook in edit mode, these links will work\.
### Can't connect to notebook kernel ###
If you try to run a notebook and you see the message `Connecting to Kernel`, followed by `Connection failed. Reconnecting` and finally by a connection failed error message, the reason might be that your firewall is blocking the notebook from running\.
If Watson Studio is installed behind a firewall, you must add the WebSocket connection `wss://dataplatform.cloud.ibm.com` to the firewall settings\. Enabling this WebSocket connection is required when you're using notebooks and RStudio\.
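To check whether the WebSocket endpoint is reachable from your network, you can run a quick probe such as the sketch below. It assumes the third-party websocket-client package; an immediate handshake error still means the host was reachable, whereas a timeout suggests a firewall block.

```python
# Quick connectivity probe using the websocket-client package
# (pip install websocket-client).
from websocket import create_connection

try:
    ws = create_connection("wss://dataplatform.cloud.ibm.com", timeout=10)
    ws.close()
    print("WebSocket connection established")
except Exception as exc:
    # A timeout here often indicates a firewall block; an immediate
    # handshake rejection means the host itself was reachable.
    print(f"WebSocket connection failed: {exc}")
```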
### Insufficient resources available error when opening or editing a notebook ###
If you see the following message when opening or editing a notebook, the environment runtime associated with your notebook has resource issues:
Insufficient resources available
A runtime instance with the requested configuration can't be started at this time because the required hardware resources aren't available.
Try again later or adjust the requested sizes.
To find the cause, try checking the status page for IBM Cloud incidents affecting Watson Studio\. Additionally, you can open a support case at the IBM Cloud Support portal\.
## Machine learning issues ##
You might encounter some of these issues when working with machine learning tools\.
### Region requirements ###
You can only associate a Watson Machine Learning service instance with your project when the Watson Machine Learning service instance and the Watson Studio instance are located in the same region\.
### Accessing links if you create a service instance while associating a service with a project ###
While you are associating a Watson Machine Learning service with a project, you have the option of creating a new service instance\. If you choose to create a new service, the links on the service page might not work\. To access the service terms, APIs, and documentation, right\-click the links to open them in new windows\.
### Federated Learning assets cannot be searched in All assets, search results, or filter results in the new projects UI ###
You cannot search Federated Learning assets from the **All assets** view, the search results, or the filter results of your project\.
**Workaround:** Click the Federated Learning asset to open the tool\.
### Deployment issues ###
<!-- <ul> -->
* A deployment that is inactive (no scores) for a set time (24 hours for the free plan or 120 hours for a paid plan) is automatically hibernated\. When a new scoring request is submitted, the deployment is reactivated and the score request is served\. Expect a brief delay of 1 to 60 seconds for the first score request after activation, depending on the model framework\.
* For some frameworks, such as SPSS Modeler, the first score request for a deployed model after hibernation might result in a 504 error\. If this happens, submit the request again; subsequent requests should succeed (see the retry sketch after this list)\.
<!-- </ul> -->
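A simple retry wrapper, sketched under the assumption that you score over HTTP with the requests package, can absorb that first 504:

```python
import time

import requests

def score_with_retry(url, headers, payload, attempts=3, delay_seconds=5):
    """Retry a scoring request to ride out the first 504 after hibernation."""
    response = None
    for _ in range(attempts):
        response = requests.post(url, headers=headers, json=payload)
        if response.status_code != 504:
            return response
        time.sleep(delay_seconds)  # give the deployment time to reactivate
    return response
```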
## Watson Machine Learning limitations ##
### AutoAI known limitations ###
<!-- <ul> -->
* Currently, AutoAI experiments do not support double\-byte character sets\. AutoAI only supports CSV files with ASCII characters\. Users must convert any non\-ASCII characters in the file name or content (for example, with the conversion sketch after this list), and provide input data as a CSV as defined in [this CSV standard](https://tools.ietf.org/html/rfc4180)\.
* To interact programmatically with an AutoAI model, use the REST API instead of the Python client\. The APIs for the Python client required to support AutoAI are not generally available at this time\.
<!-- </ul> -->
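One way to strip non-ASCII characters before uploading is sketched below with the Python standard library; the transliteration is lossy, so review the converted file before using it.

```python
# Reduce non-ASCII characters to their closest ASCII form, dropping any
# character with no ASCII equivalent. This is lossy: review the output.
import unicodedata

def to_ascii(text: str) -> str:
    normalized = unicodedata.normalize("NFKD", text)
    return normalized.encode("ascii", "ignore").decode("ascii")

with open("input.csv", encoding="utf-8") as src, \
        open("input_ascii.csv", "w", encoding="ascii") as dst:
    for line in src:
        dst.write(to_ascii(line))
```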
### Data module not found in IBM Federated Learning ###
The data handler for IBM Federated Learning is trying to extract a data module from the FL library but is unable to find it\. You might see the following error message:
ModuleNotFoundError: No module named 'ibmfl.util.datasets'
The issue might result from using an outdated DataHandler\. Review and update your DataHandler to conform to the latest spec\. See the most recent [MNIST data handler](https://github.com/IBMDataScience/sample-notebooks/blob/master/Files/mnist_keras_data_handler.py) for reference, or ensure that your sample versions are up to date\.
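A skeleton of a custom data handler, modeled loosely on the linked MNIST sample, is shown below. The base-class import path and method names are assumptions taken from that sample, so verify them against the current Federated Learning library.

```python
# Skeleton data handler modeled on the MNIST sample. The import path and
# method names are assumptions; check them against the current spec.
import numpy as np
from ibmfl.data.data_handler import DataHandler

class MyDataHandler(DataHandler):
    def __init__(self, data_config=None):
        super().__init__()
        self.file_name = None
        if data_config is not None and "npz_file" in data_config:
            self.file_name = data_config["npz_file"]

    def get_data(self):
        """Return ((x_train, y_train), (x_test, y_test)) from a local .npz file."""
        data = np.load(self.file_name)
        return (data["x_train"], data["y_train"]), (data["x_test"], data["y_test"])
```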
## SPSS Modeler issues ##
You might encounter some of these issues when working in SPSS Modeler\.
### SPSS Modeler runtime restrictions ###
Watson Studio does not include SPSS functionality in Peru, Ecuador, Colombia and Venezuela\.
### Merge node and unicode characters ###
The Merge node treats the following very similar Japanese characters as the same character\.

## Connection issues ##
You might encounter this issue when working with connections\.
### Cloudera Impala connection does not work with LDAP authentication ###
If you create a connection to a Cloudera Impala data source and the Cloudera Impala server is set up for LDAP authentication, the username and password authentication method in IBM watsonx will not work\.
Workaround: Disable the **Enable LDAP Authentication** option on the Impala server\. See [Configuring LDAP Authentication](https://docs.cloudera.com/cdp-private-cloud-base/latest/impala-secure/topics/impala-ldap.html) in the Cloudera documentation\.
## Watson Pipelines known issues ##
These issues pertain to Watson Pipelines\.
### Nesting loops more than 2 levels can result in pipeline error ###
Nesting loops more than 2 levels can result in an error when you run the pipeline, such as Error retrieving the run\. Reviewing the logs can show an error such as `text in text not resolved: neither pipeline_input nor node_output`\. If you are looping with output from a Bash script, the log might list an error like this: `PipelineLoop can't be run; it has an invalid spec: non-existent variable in $(params.run-bash-script-standard-output)`\. To resolve the problem, do not nest loops more than 2 levels\.
### Asset browser does not always reflect count for total numbers of asset type ###
When selecting an asset from the asset browser, such as choosing a source for a Copy node, you see that some of the assets list the total number of that asset type available, but notebooks do not\. That is a current limitation\.
### Cannot delete pipeline versions ###
Currently, you cannot delete saved versions of pipelines that you no longer need\.
### Deleting an AutoAI experiment fails under some conditions ###
Using a *Delete AutoAI experiment* node to delete an AutoAI experiment that was created from the Projects UI does not delete the AutoAI asset\. However, the rest of the flow can complete successfully\.
### Cache appears enabled but is not enabled ###
If the *Copy assets* Pipelines node's *Copy mode* is set to `Overwrite`, cache is displayed as enabled but remains disabled\.
## Watson Pipelines limitations ##
These limitations apply to Watson Pipelines\.
<!-- <ul> -->
* [Single pipeline limits](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/known-issues.html?context=cdpaas&locale=en#pipeline-limits)
* [Limitations by configuration size](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/known-issues.html?context=cdpaas&locale=en#config-size)
* [Input and output size limits](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/known-issues.html?context=cdpaas&locale=en#input-limit)
* [Batch input limited to data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/known-issues.html?context=cdpaas&locale=en#batch-input)
<!-- </ul> -->
### Single pipeline limits ###
These limitations apply to a single pipeline, regardless of configuration\.
<!-- <ul> -->
* Any single pipeline cannot contain more than 120 standard nodes
* Any pipeline with a loop cannot contain more than 600 nodes across all iterations (for example, 60 iterations \- 10 nodes each)
<!-- </ul> -->
### Limitations by configuration size ###
#### Small configuration ####
A SMALL configuration supports 600 standard nodes (across all active pipelines) or 300 nodes run in a loop\. For example:
<!-- <ul> -->
* 30 standard pipelines with 20 nodes run in parallel = 600 standard nodes
* 3 pipelines containing a loop with 10 iterations and 10 nodes in each iteration = 300 nodes in a loop
<!-- </ul> -->
#### Medium configuration ####
A MEDIUM configuration supports 1200 standard nodes (across all active pipelines) or 600 nodes run in a loop\. For example:
<!-- <ul> -->
* 30 standard pipelines with 40 nodes run in parallel = 1200 standard nodes
* 6 pipelines containing a loop with 10 iterations and 10 nodes in each iteration = 600 nodes in a loop
<!-- </ul> -->
#### Large configuration ####
A LARGE configuration supports 4800 standard nodes (across all active pipelines) or 2400 nodes run in a loop\. For example:
<!-- <ul> -->
* 80 standard pipelines with 60 nodes run in parallel = 4800 standard nodes
* 24 pipelines containing a loop with 10 iterations and 10 nodes in each iteration = 2400 nodes in a loop
<!-- </ul> -->
### Input and output size limits ###
Input and output values, which include pipeline parameters, user variables, and generic node inputs and outputs, cannot exceed 10 KB of data\.
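A quick pre-flight check is sketched below, under the assumption (for this sketch) that the limit is measured against the UTF-8 serialized value:

```python
import json

MAX_BYTES = 10 * 1024  # the 10 KB pipeline input/output limit

def fits_pipeline_limit(value) -> bool:
    """Check whether a value serializes to 10 KB or less of UTF-8 JSON."""
    return len(json.dumps(value).encode("utf-8")) <= MAX_BYTES

print(fits_pipeline_limit({"threshold": 0.8}))      # True
print(fits_pipeline_limit({"blob": "x" * 20_000}))  # False
```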
### Batch input limited to data assets ###
Currently, input for batch deployment jobs is limited to data assets\. This means that certain types of deployments, which require JSON input or multiple files as input, are not supported\. For example, SPSS models and Decision Optimization solutions that require multiple files as input are not supported\.
## Issues with Cloud Object Storage ##
These issues apply to working with Cloud Object Storage\.
### Issues with Cloud Object Storage when Key Protect is enabled ###
Key Protect in conjunction with Cloud Object Storage is not supported for working with Watson Machine Learning assets\. If you are using Key Protect, you might encounter these issues when you are working with assets in Watson Studio\.
<!-- <ul> -->
* Training or saving these Watson Machine Learning assets might fail:
<!-- <ul> -->
* AutoAI
* Federated Learning
* Watson Pipelines
<!-- </ul> -->
* You might be unable to save an SPSS model or a notebook model to a project
<!-- </ul> -->
## Issues with watsonx\.governance ##
### Delay showing prompt template deployment data in a factsheet ###
When a deployment is created for a prompt template, the facts for the deployment are not added to the factsheet immediately\. You must first evaluate the deployment or view the lifecycle tracking page to add the facts to the factsheet\.
### Display issues for existing Factsheet users ###
If you previously used factsheets with IBM Knowledge Catalog and you create a new AI use case in watsonx\.governance, you might see some display issues, such as duplicate Risk level fields in the General information and Details sections of the AI use case interface\.
To resolve display problems, update the `model_entry_user` asset type definition\. For details on updating a use case programmatically, see [Customizing details for a use case or factsheet](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-customize-user-facts.html)\.
### Redundant attachment links in factsheet ###
A factsheet tracks all of the events for an asset over all phases of the lifecycle\. Attachments show up in each stage, creating some redundancy in the factsheet\.
<!-- </article "role="article" "> -->
|
E5EA38444D60150C0FD2EB498BF33793DDE5FED2 | https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/localization.html?context=cdpaas&locale=en | Language support for the product and the documentation | Language support for the product and the documentation
IBM watsonx is translated into multiple languages.
Supported languages
The IBM watsonx user interface is translated into these languages:
* Brazilian Portuguese
* Simplified Chinese
* Traditional Chinese
* French
* German
* Italian
* Japanese
* Korean
* Spanish
* Swedish
The documentation is automatically translated into these languages:
* Brazilian Portuguese
* Simplified Chinese
* French
* German
* Italian
* Japanese
* Korean
* Spanish
IBM is not responsible for any damages or losses resulting from the use of automatically (machine) translated content.
When the translated documentation is not as current as the English content, you see a message and have the option of switching to the English content.
Changing languages
To change the language for this documentation, scroll to the end of any documentation page, and select a language from the language selector.

To change the language for both the product user interface and this documentation, select a different language for your browser:
* In the Google Chrome browser, you can change the language in the advanced settings.
* In the Mozilla Firefox browser, you can change the language in the general settings.
Learn more
* [Browser support](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/browser-support.html)
Parent topic:[FAQ](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html)
| # Language support for the product and the documentation #
IBM watsonx is translated into multiple languages\.
## Supported languages ##
The IBM watsonx user interface is translated into these languages:
<!-- <ul> -->
* Brazilian Portuguese
* Simplified Chinese
* Traditional Chinese
* French
* German
* Italian
* Japanese
* Korean
* Spanish
* Swedish
<!-- </ul> -->
The documentation is automatically translated into these languages:
<!-- <ul> -->
* Brazilian Portuguese
* Simplified Chinese
* French
* German
* Italian
* Japanese
* Korean
* Spanish
<!-- </ul> -->
IBM is not responsible for any damages or losses resulting from the use of automatically (machine) translated content\.
When the translated documentation is not as current as the English content, you see a message and have the option of switching to the English content\.
## Changing languages ##
To change the language for this documentation, scroll to the end of any documentation page, and select a language from the language selector\.

To change the language for both the product user interface and this documentation, select a different language for your browser:
<!-- <ul> -->
* In the Google Chrome browser, you can change the language in the advanced settings\.
* In the Mozilla Firefox browser, you can change the language in the general settings\.
<!-- </ul> -->
## Learn more ##
<!-- <ul> -->
* [Browser support](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/browser-support.html)
<!-- </ul> -->
**Parent topic:**[FAQ](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html)
<!-- </article "role="article" "> -->
|
F0DB4483C93A5D14DBF8076C4DD42D22A4F8542D | https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/notices.html?context=cdpaas&locale=en | Notices | Notices
These notices apply to the watsonx platform.
The Offering includes some or all of the following that IBM provides under the [SIL Open Font License 1.1](https://opensource.org/license/openfont-html/):
* AMSFONTS
* AMSFONTS (matplotlib)
* CARLOGO (matplotlib)
* CMMI9 (libtasn1)
* cvxopt 1.3.0
* FONTS (harfbuzz)
* FONTS (pillow)
* FONT AWESOME (Apache ORC)
* FONT AWESOME - FONT (bazel)
* FONT AWESOME 4.2.0 (arrow)
* FONT-AWESOME-IE7.MIN.CSS (Jetty)
* FONT AWESOME (nbconvert)
* FONTAWESOME-FONTS
* HELVETICA-NEUE
* READLINE.PS (Readline)
* FONT-AWESOME (Notebook)
* FONTAWESOME
* FONTAWESOME (Tables)
* FONT AWESOME FONTS
* FONTAWESOME (FONT) (JupyterLab)
* Font-Awesome v4.6.3
* Font-Awesome v4.3.0
* Font-Awesome v4.7.0
* handsontable v0.25.1
* minio 7.1.7
* IBM PLEX TYPEFACE (carbon-components)
* nbconvert v5.2.1
* nbconvert v5.1.1
* nbconvert 6.4.4
* nbconvert 6.5.0
* nbdime 3.1.1
* NotoNastaliqUrdu-Regular.ttf (pillow)
* NOTO-FONTS (pillow)
* QTAWESOME-FONTS (qtawesome)
* qtawesome v3.3.0
* READLINE.PS (Readline)
* RLUSERMAN.PS (Readline)
* STIX FONT (matplotlib)
The Offering includes some or all of the following that IBM provides under the [UBUNTU FONT LICENCE Version 1.0](https://ubuntu.com/legal/font-licence):
* Font_license (Werkzeug)
Learn more
[Foundation model use terms](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-disclaimer.html)
| # Notices #
These notices apply to the watsonx platform\.
The Offering includes some or all of the following that IBM provides under the [SIL Open Font License 1\.1](https://opensource.org/license/openfont-html/):
<!-- <ul> -->
* AMSFONTS
* AMSFONTS (matplotlib)
* CARLOGO (matplotlib)
* CMMI9 (libtasn1)
* cvxopt 1\.3\.0
* FONTS (harfbuzz)
* FONTS (pillow)
* FONT AWESOME (Apache ORC)
* FONT AWESOME \- FONT (bazel)
* FONT AWESOME 4\.2\.0 (arrow)
* FONT\-AWESOME\-IE7\.MIN\.CSS (Jetty)
* FONT AWESOME (nbconvert)
* FONTAWESOME\-FONTS
* HELVETICA\-NEUE
* READLINE\.PS (Readline)
* FONT\-AWESOME (Notebook)
* FONTAWESOME
* FONTAWESOME (Tables)
* FONT AWESOME FONTS
* FONTAWESOME (FONT) (JupyterLab)
* Font\-Awesome v4\.6\.3
* Font\-Awesome v4\.3\.0
* Font\-Awesome v4\.7\.0
* handsontable v0\.25\.1
* minio 7\.1\.7
* IBM PLEX TYPEFACE (carbon\-components)
* nbconvert v5\.2\.1
* nbconvert v5\.1\.1
* nbconvert 6\.4\.4
* nbconvert 6\.5\.0
* nbdime 3\.1\.1
* NotoNastaliqUrdu\-Regular\.ttf (pillow)
* NOTO\-FONTS (pillow)
* QTAWESOME\-FONTS (qtawesome)
* qtawesome v3\.3\.0
* READLINE\.PS (Readline)
* RLUSERMAN\.PS (Readline)
* STIX FONT (matplotlib)
<!-- </ul> -->
The Offering includes some or all of the following that IBM provides under the [UBUNTU FONT LICENCE Version 1\.0](https://ubuntu.com/legal/font-licence):
<!-- <ul> -->
* Font\_license (Werkzeug)
<!-- </ul> -->
## Learn more ##
[Foundation model use terms](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-disclaimer.html)
<!-- </article "role="article" "> -->
|
E88EDBB9A31F8B7C70FB3BA48136D9C3CD6767AC | https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/overview-wx.html?context=cdpaas&locale=en | Overview of IBM watsonx as a Service | Overview of IBM watsonx as a Service
IBM watsonx.ai is a studio of integrated tools for working with generative AI capabilities that are powered by foundation models and for building machine learning models. The IBM watsonx.ai component provides a secure and collaborative environment where you can access your organization's trusted data, automate AI processes, and deliver AI in your applications. The IBM watsonx.governance component provides end-to-end monitoring for machine learning and generative AI models to accelerate responsible, transparent, and explainable AI workflows.
Looking for watsonx.data? Go to [IBM watsonx.data documentation](https://cloud.ibm.com/docs/watsonxdata?topic=watsonxdata-getting-started).
You can accomplish the following goals with watsonx:
* Build machine learning models
Build models by using open source frameworks and code-based, automated, or visual data science tools.
* Experiment with foundation models
Test prompts to generate, classify, summarize, or extract content from your input text. Choose from IBM models or open source models from Hugging Face.
* Manage the AI lifecycle
Manage and automate the full AI model lifecycle with all the integrated tools and runtimes to train, validate, and deploy AI models.
* Govern AI
Track and document the detailed history of AI models to help ensure compliance.
Watsonx.ai provides these tools for working with data and models:
Tools for working with data and models
What you can use What you can do Best to use when
[Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html) Access and refine data from diverse data source connections. <br> <br>Materialize the resulting data sets as snapshots in time that might combine, join, or filter data for other data scientists to analyze and explore. You need to visualize the data when you want to shape or cleanse it. <br> <br>You want to simplify the process of preparing large amounts of raw data for analysis.
[Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html) Experiment with IBM and open source foundation models by inputting prompts. You want to engineer prompts for your generative AI solution.
[Tuning Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-studio.html) Tailor the output that a foundation model returns to better meet your needs. You want to adjust foundation model outputs for use in your generative AI solution.
[AutoAI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) Use AutoAI to automatically select algorithms, engineer features, generate pipeline candidates, and train machine learning model pipeline candidates. <br> <br>Then, evaluate the ranked pipelines and save the best as models. <br> <br>Deploy the trained models to a space, or export the model training pipeline that you like from AutoAI into a notebook to refine it. You want an advanced and automated way to build a good set of training pipelines and machine learning models quickly. <br> <br>You want to be able to export the generated pipelines to refine them.
[Notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html) Prompt foundation models with the Python library. <br> <br>Use notebooks and scripts to write your own feature engineering, model training, and evaluation code in Python or R. Use training data sets that are available in the project, or connections to data sources such as databases, data lakes, or object storage. <br> <br>Code with your favorite open source frameworks and libraries. You want to use Python or R coding skills to have full control over the code that you use to work with models.
[SPSS Modeler flows](https://dataplatform.cloud.ibm.com/docs/content/wsd/spss-modeler.html) Use SPSS Modeler flows to create your own machine learning model training, evaluation, and scoring flows. Use training data sets that are available in the project, or connections to data sources such as databases, data lakes, or object storage. You want a simple way to explore data and define machine learning model training, evaluation, and scoring flows.
[RStudio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-overview.html) Analyze data and build and test machine learning models by working with R in RStudio. You want to use a development environment to work in R.
[Decision Optimization](https://dataplatform.cloud.ibm.com/docs/content/DO/DOWS-Cloud_home.html) Prepare data, import models, solve problems and compare scenarios, visualize data, find solutions, produce reports, and save models to deploy with Watson Machine Learning. You need to evaluate millions of possibilities to find the best solution to a prescriptive analytics problem.
[Federated learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fed-lea.html) Train a common machine learning model that uses distributed data. You need to train a machine learning model without moving, combining, or sharing data that is distributed across multiple locations.
[Watson Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-overview.html) Use pipelines to create repeatable and scheduled flows that automate notebook, Data Refinery, and machine learning pipelines, from data ingestion to model training, testing, and deployment. You want to automate some or all of the steps in an MLOps flow.
[Synthetic Data Generator](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/synthetic_data_overview_sd.html) Generate synthetic tabular data based on production data or a custom data schema using visual flows and modeling algorithms. You want to mask or mimic production data or you want to generate synthetic data from a custom data schema.
Watsonx.governance provides these tools for governing models.
Tools for governing models
What you can use What you can do Best to use when
[Factsheets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-create-use-case.html) View model lifecycle status, general model and deployment details, training information and metrics, and deployment metrics. You want to make sure that your model is compliant and performing as expected.
[Watson OpenScale](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/getting-started.html) Monitor model output and explain model predictions. You need to keep your models fair and be able to explain model predictions.
Security and privacy of your data and models
Your work on watsonx, including your data and the models that you create, are private to your account:
* Your data is accessible only by you. Your data is used to train only your models. Your data will never be accessible or used by IBM or any other person or organization. Your data is stored in dedicated storage buckets from your IBM Cloud Object Storage service instance. Data is encrypted at rest and in motion.
* The models that you create are accessible only by you. Your models will never be accessible or used by IBM or any other person or organization. Your models are secured in the same way as your data.
Learn more about security and your options:
* [Security and privacy of foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-security.html)
* [Data security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-data.html)
* [Security of watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html)
Underlying architecture
Watsonx includes the following functionality as the secure and scalable foundation for your organization to collaborate efficiently:
* Software and hardware
Watsonx is fully managed by IBM on IBM Cloud. Software updates are automatic. Scaling of compute resources and storage is automatic.
* Storage
An IBM Cloud Object Storage service instance is automatically provisioned for you to provide storage.
* Compute resources
You can choose the appropriate runtime for your jobs. Compute resource usage is billed based on the rate for the runtime environment and its active duration.
* Security, compliance, and isolation
The data security, network security, security standards compliance, and isolation of watsonx are managed by IBM Cloud. You can set up extra security and encryption options.
* User management
You add users and user groups and manage their account roles and permissions with IBM Cloud Identity and Access Management. You assign roles within each collaborative workspace across the platform.
* Global search
You can search for assets across the platform.
* Shared connections to data sources
You can share connections with others across the platform in the Platform assets catalog.
* Samples
You can experiment with IBM-curated sample data sets, notebooks, projects, and models.
Watsonx.ai on the watsonx platform includes the Watson Studio, Watson Machine Learning, and IBM Cloud Object Storage services. Watsonx.governance on the watsonx platform includes the watsonx.governance service.
Workspaces and assets
Watsonx is organized as a set of collaborative workspaces where you can work with your team or organization. Each workspace has a set of members with roles that provide permissions to perform actions. Most users work with assets, which are the items that users add to the platform. Data assets contain metadata that represents data, while assets that you create in tools, such as models, run code to work with data. You build assets in projects, and manage the deployment of completed assets in deployment spaces.
Projects and tools
Projects are where your data science and model builder teams work with data to create assets, such as saved prompts, notebooks, models, or pipelines. Your first project, which is known as your sandbox project, is created automatically when you sign up for watsonx.ai.
The following image shows what the Overview page of a project might look like.

Deployment spaces
Deployment spaces are where your ModelOps team deploys models and other deployable assets to production and then tests and manages deployments in production. After you build models and deployable assets in projects, you promote them to deployment spaces.
The following image shows what the Overview page of a deployment space might look like.

Samples
The platform includes an integrated collection of samples that provides models, data assets, prompts, notebooks, and sample projects. Sample notebooks provide examples of data science and machine learning code. Sample projects contain sets of data, models, other assets, and detailed instructions on how to solve a particular business problem.
The following image shows what Samples looks like.

Learn more
* [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
* [Asset types and properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html)
* [AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
* [Your sandbox project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/sandbox.html)
* [Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html)
* [Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-overview.html)
* [Foundation models Python library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-python-lib.html)
* [Comparison of IBM watsonx as a Service and Cloud Pak for Data as a Service](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/compare-platforms.html)
* [Feature differences between watsonx deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/feature-matrix.html)
* [IBM watsonx.data documentation](https://cloud.ibm.com/docs/watsonxdata?topic=watsonxdata-getting-started)
| # Overview of IBM watsonx as a Service #
IBM watsonx\.ai is a studio of integrated tools for working with generative AI capabilities that are powered by foundation models and for building machine learning models\. The IBM watsonx\.ai component provides a secure and collaborative environment where you can access your organization's trusted data, automate AI processes, and deliver AI in your applications\. The IBM watsonx\.governance component provides end\-to\-end monitoring for machine learning and generative AI models to accelerate responsible, transparent, and explainable AI workflows\.
Looking for watsonx\.data? Go to [IBM watsonx\.data documentation](https://cloud.ibm.com/docs/watsonxdata?topic=watsonxdata-getting-started)\.
You can accomplish the following goals with watsonx:
<!-- <ul> -->
* **Build machine learning models**
Build models by using open source frameworks and code-based, automated, or visual data science tools.
* **Experiment with foundation models**
Test prompts to generate, classify, summarize, or extract content from your input text. Choose from IBM models or open source models from Hugging Face.
* **Manage the AI lifecycle**
Manage and automate the full AI model lifecycle with all the integrated tools and runtimes to train, validate, and deploy AI models.
* **Govern AI**
Track and document the detailed history of AI models to help ensure compliance.
<!-- </ul> -->
Watsonx\.ai provides these tools for working with data and models:
<!-- <table> -->
Tools for working with data and models
| What you can use | What you can do | Best to use when |
| ---------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html) | Access and refine data from diverse data source connections\. <br> <br>Materialize the resulting data sets as snapshots in time that might combine, join, or filter data for other data scientists to analyze and explore\. | You need to visualize the data when you want to shape or cleanse it\. <br> <br>You want to simplify the process of preparing large amounts of raw data for analysis\. |
| [Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html) | Experiment with IBM and open source foundation models by inputting prompts\. | You want to engineer prompts for your generative AI solution\. |
| [Tuning Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-studio.html) | Tailor the output that a foundation model returns to better meet your needs\. | You want to adjust foundation model outputs for use in your generative AI solution\. |
| [AutoAI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) | Use AutoAI to automatically select algorithms, engineer features, generate pipeline candidates, and train machine learning model pipeline candidates\. <br> <br>Then, evaluate the ranked pipelines and save the best as models\. <br> <br>Deploy the trained models to a space, or export the model training pipeline that you like from AutoAI into a notebook to refine it\. | You want an advanced and automated way to build a good set of training pipelines and machine learning models quickly\. <br> <br>You want to be able to export the generated pipelines to refine them\. |
| [Notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html) | Prompt foundation models with the Python library\. <br> <br>Use notebooks and scripts to write your own feature engineering, model training, and evaluation code in Python or R\. Use training data sets that are available in the project, or connections to data sources such as databases, data lakes, or object storage\. <br> <br>Code with your favorite open source frameworks and libraries\. | You want to use Python or R coding skills to have full control over the code that you use to work with models\. |
| [SPSS Modeler flows](https://dataplatform.cloud.ibm.com/docs/content/wsd/spss-modeler.html) | Use SPSS Modeler flows to create your own machine learning model training, evaluation, and scoring flows\. Use training data sets that are available in the project, or connections to data sources such as databases, data lakes, or object storage\. | You want a simple way to explore data and define machine learning model training, evaluation, and scoring flows\. |
| [RStudio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-overview.html) | Analyze data and build and test machine learning models by working with R in RStudio\. | You want to use a development environment to work in R\. |
| [Decision Optimization](https://dataplatform.cloud.ibm.com/docs/content/DO/DOWS-Cloud_home.html) | Prepare data, import models, solve problems and compare scenarios, visualize data, find solutions, produce reports, and save models to deploy with Watson Machine Learning\. | You need to evaluate millions of possibilities to find the best solution to a prescriptive analytics problem\. |
| [Federated learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fed-lea.html) | Train a common machine learning model that uses distributed data\. | You need to train a machine learning model without moving, combining, or sharing data that is distributed across multiple locations\. |
| [Watson Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-overview.html) | Use pipelines to create repeatable and scheduled flows that automate notebook, Data Refinery, and machine learning pipelines, from data ingestion to model training, testing, and deployment\. | You want to automate some or all of the steps in an MLOps flow\. |
| [Synthetic Data Generator](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/synthetic_data_overview_sd.html) | Generate synthetic tabular data based on production data or a custom data schema using visual flows and modeling algorithms\. | You want to mask or mimic production data or you want to generate synthetic data from a custom data schema\. |
<!-- </table ""> -->
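The **Notebooks and scripts** entry in the table above mentions prompting foundation models with the Python library\. The following minimal sketch assumes the `ibm-watsonx-ai` package and placeholder values for the API key, project ID, and model ID; verify class names and parameters against the foundation models Python library documentation\.

```python
from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.foundation_models import ModelInference

# All values below are placeholders; replace them with your own endpoint,
# API key, project ID, and a model ID that is available to you.
credentials = Credentials(
    url="https://us-south.ml.cloud.ibm.com",
    api_key="<YOUR_IBM_CLOUD_API_KEY>",
)

model = ModelInference(
    model_id="ibm/granite-13b-chat-v2",  # assumed model ID for illustration
    credentials=credentials,
    project_id="<YOUR_PROJECT_ID>",
)

# Send a prompt and print the generated text.
print(model.generate_text(prompt="Summarize the benefits of automating the AI lifecycle."))
```

In current releases, `generate_text` returns the generated text as a string; check the library reference for streaming and parameter options\.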
Watsonx\.governance provides these tools for governing models\.
<!-- <table> -->
Tools for governing models
| What you can use | What you can do | Best to use when |
| -------------------- | ----------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------- |
| [Factsheets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-create-use-case.html) | View model lifecycle status, general model and deployment details, training information and metrics, and deployment metrics\. | You want to make sure that your model is compliant and performing as expected\. |
| [Watson OpenScale](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/getting-started.html) | Monitor model output and explain model predictions\. | You need to keep your models fair and be able to explain model predictions\. |
<!-- </table ""> -->
## Security and privacy of your data and models ##
Your work on watsonx, including your data and the models that you create, are private to your account:
<!-- <ul> -->
* Your data is accessible only by you\. Your data is used to train only your models\. Your data will never be accessible or used by IBM or any other person or organization\. Your data is stored in dedicated storage buckets from your IBM Cloud Object Storage service instance\. Data is encrypted at rest and in motion\.
* The models that you create are accessible only by you\. Your models will never be accessible or used by IBM or any other person or organization\. Your models are secured in the same way as your data\.
<!-- </ul> -->
Learn more about security and your options:
<!-- <ul> -->
* [Security and privacy of foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-security.html)
* [Data security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-data.html)
* [Security of watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html)
<!-- </ul> -->
## Underlying architecture ##
Watsonx includes the following functionality as the secure and scalable foundation for your organization to collaborate efficiently:
<!-- <ul> -->
* **Software and hardware**
Watsonx is fully managed by IBM on IBM Cloud. Software updates are automatic. Scaling of compute resources and storage is automatic.
* **Storage**
    An IBM Cloud Object Storage service instance is automatically provisioned for you to provide storage.
* **Compute resources**
You can choose the appropriate runtime for your jobs. Compute resource usage is billed based on the rate for the runtime environment and its active duration.
* **Security, compliance, and isolation**
The data security, network security, security standards compliance, and isolation of watsonx are managed by IBM Cloud. You can set up extra security and encryption options.
* **User management**
You add users and user groups and manage their account roles and permissions with IBM Cloud Identity and Access Management. You assign roles within each collaborative workspace across the platform.
* **Global search**
You can search for assets across the platform.
* **Shared connections to data sources**
You can share connections with others across the platform in the Platform assets catalog.
* **Samples**
You can experiment with IBM-curated sample data sets, notebooks, projects, and models.
<!-- </ul> -->
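The **Compute resources** entry above describes billing by runtime rate and active duration\. As an illustrative calculation (the rate shown is hypothetical):

```python
def capacity_unit_hours(rate_per_hour: float, active_hours: float) -> float:
    """Illustrative arithmetic only: CUH billed for a runtime is its
    hourly rate multiplied by the time it stays active."""
    return rate_per_hour * active_hours

# A hypothetical runtime rated at 1.5 CUH per hour, active for 4 hours:
print(capacity_unit_hours(1.5, 4))  # 6.0 CUH
```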
Watsonx\.ai on the watsonx platform includes the Watson Studio, Watson Machine Learning, and IBM Cloud Object Storage services\. Watsonx\.governance on the watsonx platform includes the watsonx\.governance service\.
## Workspaces and assets ##
Watsonx is organized as a set of collaborative workspaces where you can work with your team or organization\. Each workspace has a set of members with roles that provide permissions to perform actions\. Most users work with assets, which are the items that users add to the platform\. Data assets contain metadata that represents data, while assets that you create in tools, such as models, run code to work with data\. You build assets in projects, and manage the deployment of completed assets in deployment spaces\.
### Projects and tools ###
Projects are where your data science and model builder teams work with data to create assets, such as saved prompts, notebooks, models, or pipelines\. Your first project, which is known as your sandbox project, is created automatically when you sign up for watsonx\.ai\.
The following image shows what the **Overview** page of a project might look like\.

### Deployment spaces ###
Deployment spaces are where your ModelOps team deploys models and other deployable assets to production and then tests and manages deployments in production\. After you build models and deployable assets in projects, you promote them to deployment spaces\.
The following image shows what the **Overview** page of a deployment space might look like\.

## Samples ##
The platform includes an integrated collection of samples that provides models, data assets, prompts, notebooks, and sample projects\. Sample notebooks provide examples of data science and machine learning code\. Sample projects contain sets of data, models, other assets, and detailed instructions on how to solve a particular business problem\.
The following image shows what Samples looks like\.

## Learn more ##
<!-- <ul> -->
* [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
* [Asset types and properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html)
* [AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
* [Your sandbox project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/sandbox.html)
* [Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html)
* [Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-overview.html)
* [Foundation models Python library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-python-lib.html)
* [Comparison of IBM watsonx as a Service and Cloud Pak for Data as a Service](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/compare-platforms.html)
* [Feature differences between watsonx deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/feature-matrix.html)
* [IBM watsonx\.data documentation](https://cloud.ibm.com/docs/watsonxdata?topic=watsonxdata-getting-started)
<!-- </ul> -->
<!-- </article "role="article" "> -->
|
64057AA641F9259654E5F08D996209EF8027A3AF | https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/platform-switcher.html?context=cdpaas&locale=en | Switching between the IBM watsonx as a Service and Cloud Pak for Data as a Service platforms | Switching between the IBM watsonx as a Service and Cloud Pak for Data as a Service platforms
If you are a Cloud Pak for Data as a Service user, you have access to IBM watsonx as a Service and you can switch between the two platforms.
Important: Foundation model inferencing and the Prompt Lab tool to work with foundation models are available only in the Dallas and Frankfurt regions. Your Watson Studio and Watson Machine Learning service instances are shared between watsonx and Cloud Pak for Data as a Service. If your Watson Studio and Watson Machine Learning service instances are provisioned in another region, you can't use foundation model inferencing or the Prompt Lab.
If you signed up for watsonx only, you can't switch to Cloud Pak for Data as a Service and you don't have a Switch platform option. To switch to Cloud Pak for Data as a Service, you must sign up for it.
To switch between platforms:
1. Log in to either IBM watsonx as a Service or Cloud Pak for Data as a Service. Your region must be Dallas.
2. On the platform home page, click the Switch platform icon next to your avatar, and select the platform.
Service instances and resource consumption
When you switch platforms, you continue using the same Watson Studio and Watson Machine Learning service instances.
The resources that you consume for each of these service instances are cumulative. For example, suppose you use 3 CUH for Watson Studio on Cloud Pak for Data as a Service in the first half of July. Then, you switch to watsonx and use 3 CUH for Watson Studio in the second half of July. Your total CUH for the Watson Studio service for July is 6 CUH.
Switch projects and deployment spaces between platforms
You can switch a project or a deployment space from one platform to the other if that project or space meets the requirements and restrictions. See [Switching the platform for a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/switch-platform.html) and [Switching the platform for a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/switch-platform-space.html).
Platform assets catalog
You share a single Platform assets catalog between the two platforms and any previously or newly added connection assets in your Platform assets catalog are available on both platforms. However, if you add other types of assets to the Platform assets catalog on Cloud Pak for Data as a Service, you can't access those types of assets on watsonx.
Notifications
Your notifications are specific to each platform.
Learn more
* [Switching the platform for a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/switch-platform.html)
* [Switching the platform for a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/switch-platform-space.html)
* [Comparison of IBM watsonx as a Service and Cloud Pak for Data as a Service](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/compare-platforms.html)
Parent topic:[Getting started with IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-wdp.html)
| # Switching between the IBM watsonx as a Service and Cloud Pak for Data as a Service platforms #
If you are a Cloud Pak for Data as a Service user, you have access to IBM watsonx as a Service and you can switch between the two platforms\.
Important: Foundation model inferencing and the Prompt Lab tool to work with foundation models are available only in the Dallas and Frankfurt regions\. Your Watson Studio and Watson Machine Learning service instances are shared between watsonx and Cloud Pak for Data as a Service\. If your Watson Studio and Watson Machine Learning service instances are provisioned in another region, you can't use foundation model inferencing or the Prompt Lab\.
If you signed up for watsonx only, you can't switch to Cloud Pak for Data as a Service and you don't have a **Switch platform** option\. To switch to Cloud Pak for Data as a Service, you must sign up for it\.
To switch between platforms:
<!-- <ol> -->
1. Log in to either IBM watsonx as a Service or Cloud Pak for Data as a Service\. Your region must be Dallas\.
2. On the platform home page, click the **Switch platform** icon next to your avatar, and select the platform\.
<!-- </ol> -->
## Service instances and resource consumption ##
When you switch platforms, you continue using the same Watson Studio and Watson Machine Learning service instances\.
The resources that you consume for each of these service instances are cumulative\. For example, suppose you use 3 CUH for Watson Studio on Cloud Pak for Data as a Service in the first half of July\. Then, you switch to watsonx and use 3 CUH for Watson Studio in the second half of July\. Your total CUH for the Watson Studio service for July is 6 CUH\.
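Expressed as a quick sketch (hypothetical figures that mirror the example above):

```python
# Cumulative CUH for one shared Watson Studio instance in July.
cuh_on_cloud_pak_for_data = 3  # first half of July
cuh_on_watsonx = 3             # second half of July
print(cuh_on_cloud_pak_for_data + cuh_on_watsonx)  # 6 CUH billed for July
```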
## Switch projects and deployment spaces between platforms ##
You can switch a project or a deployment space from one platform to the other if that project or space meets the requirements and restrictions\. See [Switching the platform for a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/switch-platform.html) and [Switching the platform for a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/switch-platform-space.html)\.
## Platform assets catalog ##
You share a single Platform assets catalog between the two platforms and any previously or newly added connection assets in your Platform assets catalog are available on both platforms\. However, if you add other types of assets to the Platform assets catalog on Cloud Pak for Data as a Service, you can't access those types of assets on watsonx\.
## Notifications ##
Your notifications are specific to each platform\.
## Learn more ##
<!-- <ul> -->
* [Switching the platform for a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/switch-platform.html)
* [Switching the platform for a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/switch-platform-space.html)
* [Comparison of IBM watsonx as a Service and Cloud Pak for Data as a Service](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/compare-platforms.html)
<!-- </ul> -->
**Parent topic:**[Getting started with IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-wdp.html)
<!-- </article "role="article" "> -->
|
E3526B694C68C40EDC206E216B454E63B83F3EBA | https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html?context=cdpaas&locale=en | Asset contents or previews | Asset contents or previews
In projects and other workspaces, you can see a preview of data assets that contain relational data.
* [Requirements and restrictions](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html?context=cdpaas&locale=en#require)
* [Previews of data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html?context=cdpaas&locale=en#data)
Requirements and restrictions
You can view the contents or previews of assets under the following conditions and restrictions.
* Workspaces
You can view the preview or contents of assets in these workspaces:
* Projects
* Deployment spaces
* Types of assets
* Data assets from files
* Connected data assets
* Models
* Notebooks
* Required permissions
To see the asset contents or preview, these conditions must be true:
* You have any collaborator role in the workspace.
* Restrictions for data assets
Additional requirements apply to connected data assets and data assets from files. See [Requirements for data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html?context=cdpaas&locale=en#require-data). Previews are not available for data assets that were added as managed assets by using the [Watson Data API](https://cloud.ibm.com/apidocs/watson-data-api#createattachmentnewv2).
Previews of data assets
The previews of data assets show a view of the data.
You can see when the data in the preview was last fetched and refresh the preview data by clicking the refresh icon.
* [Requirements for data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html?context=cdpaas&locale=en#require-data)
* [Preview information for data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html?context=cdpaas&locale=en#preview-info)
* [File extensions and mime types of previewed files](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html?context=cdpaas&locale=en#files)
Requirements for data assets
The additional requirements for viewing previews of data assets depend on whether the data is accessed through a connection or from a file.
Connected data assets
You can see previews of data assets that are accessed through a connection if all these conditions are true:
* You have access to the data asset and its associated connection. See [Requirements and restrictions](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html?context=cdpaas&locale=en#require).
* The data asset contains structured data. Structured data resides in fixed fields within a record or file, for example, relational database data or spreadsheets.
* You have credentials for the connection:
* For connections with shared credentials, the username in the connection details has access to the object at the data source.
* For connections with personal credentials, you must enter your personal credentials when you see a key icon. This is a one-time step that permanently unlocks the connection for you. See [Adding connections to projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html).
Data assets from files
You can see previews of data assets from files if the following conditions are true:
* You have access to the data asset. See [Requirements and restrictions](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html?context=cdpaas&locale=en#require).
* The file is stored in IBM Cloud Object Storage. For preview of text or image files from an IBM Cloud Object Storage connection to work, the connection credentials must include an access key and a secret key. If you’re using an existing Cloud Object Storage connection that doesn’t have these keys, edit the connection asset and add them. See [IBM Cloud Object Storage connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html).
* The file type is supported. See [File extensions and mime types of previewed files](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html?context=cdpaas&locale=en#files).
Preview information for data assets
For structured data, the preview displays a limited number of rows and columns:
* The number of rows in the preview is limited to 1,000.
* The amount of data is limited to 800 KB. The more columns the data asset has, the fewer rows that appear in the preview.
Previews show different information for different types of data assets and files.
Structured data
For structured data, the preview shows column names, data types, and a subset of columns and rows of data. The supported formats of structured data are: Relational data, CSV, TSV, Avro, partitioned data, and Parquet (projects).
Assets from file-based connections like Apache Kafka and Apache Cassandra are not supported.
Unstructured data
Unstructured data files must be stored in IBM Cloud Object Storage to have previews.
For these unstructured data files, the preview shows the whole document: Text, JSON, HTML, PDF, images, and Microsoft Excel documents. HTML files are supported in text format. Images stored in IBM Cloud Object Storage support the JPG, JPEG, PNG, GIF, and BMP formats. Microsoft Excel document previews show the first sheet.
For connected folder assets, the preview shows the files and subfolders, which you can also preview.
File extensions and mime types of previewed files
These types of files that contain structured data have previews:
Structured data files
Extension Mime type
AVRO
CSV text/csv
CSV1 application/csv
JSON application/json
PARQ
TSV
TXT text/plain
XLSX application/vnd.openxmlformats-officedocument.spreadsheetml.sheet
XLS application/vnd.ms-excel
XLSM application/vnd.ms-excel.sheet.macroEnabled.12
These types of image files have previews:
Image files
Extension Mime type
BMP image/bmp
GIF image/gif
JPG image/jpeg
JPEG image/jpeg
PNG image/png
These types of document files have previews:
Document files
Extension Mime type
HTML text/html
PDF application/pdf
TXT text/plain
Learn more
* [Searching for assets across the platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/search-assets.html)
* [Profile](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/profile.html)
* [Activities](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/asset-activities.html)
* [Visualizations](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/visualizations.html)
Parent topic:[Asset types and properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html)
| # Asset contents or previews #
In projects and other workspaces, you can see a preview of data assets that contain relational data\.
<!-- <ul> -->
* [Requirements and restrictions](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html?context=cdpaas&locale=en#require)
* [Previews of data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html?context=cdpaas&locale=en#data)
<!-- </ul> -->
## Requirements and restrictions ##
You can view the contents or previews of assets under the following conditions and restrictions\.
<!-- <ul> -->
* **Workspaces**
You can view the preview or contents of assets in these workspaces:
<!-- <ul> -->
* Projects
* Deployment spaces
<!-- </ul> -->
<!-- </ul> -->
<!-- <ul> -->
* **Types of assets**
<!-- <ul> -->
* Data assets from files
* Connected data assets
* Models
* Notebooks
<!-- </ul> -->
<!-- </ul> -->
<!-- <ul> -->
* **Required permissions**
To see the asset contents or preview, these conditions must be true:
<!-- <ul> -->
* You have any collaborator role in the workspace.
<!-- </ul> -->
<!-- </ul> -->
<!-- <ul> -->
* **Restrictions for data assets**
Additional requirements apply to connected data assets and data assets from files. See [Requirements for data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html?context=cdpaas&locale=en#require-data). Previews are not available for data assets that were added as managed assets by using the [Watson Data API](https://cloud.ibm.com/apidocs/watson-data-api#createattachmentnewv2).
<!-- </ul> -->
## Previews of data assets ##
The previews of data assets show a view of the data\.
You can see when the data in the preview was last fetched and refresh the preview data by clicking the refresh icon\.
<!-- <ul> -->
* [Requirements for data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html?context=cdpaas&locale=en#require-data)
* [Preview information for data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html?context=cdpaas&locale=en#preview-info)
* [File extensions and mime types of previewed files](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html?context=cdpaas&locale=en#files)
<!-- </ul> -->
### Requirements for data assets ###
The additional requirements for viewing previews of data assets depend on whether the data is accessed through a connection or from a file\.
#### Connected data assets ####
You can see previews of data assets that are accessed through a connection if all these conditions are true:
<!-- <ul> -->
* You have access to the data asset and its associated connection\. See [Requirements and restrictions](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html?context=cdpaas&locale=en#require)\.
* The data asset contains structured data\. Structured data resides in fixed fields within a record or file, for example, relational database data or spreadsheets\.
* You have credentials for the connection:
<!-- <ul> -->
* For connections with shared credentials, the username in the connection details has access to the object at the data source.
    * For connections with personal credentials, you must enter your personal credentials when you see a key icon. This is a one-time step that permanently unlocks the connection for you. See [Adding connections to projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html).
<!-- </ul> -->
<!-- </ul> -->
#### Data assets from files ####
You can see previews of data assets from files if the following conditions are true:
<!-- <ul> -->
* You have access to the data asset\. See [Requirements and restrictions](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html?context=cdpaas&locale=en#require)\.
* The file is stored in IBM Cloud Object Storage\. For preview of text or image files from an IBM Cloud Object Storage connection to work, the connection credentials must include an access key and a secret key\. If you’re using an existing Cloud Object Storage connection that doesn’t have these keys, edit the connection asset and add them\. See [IBM Cloud Object Storage connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html)\.
* The file type is supported\. See [File extensions and mime types of previewed files](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html?context=cdpaas&locale=en#files)\.
<!-- </ul> -->
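For the IBM Cloud Object Storage case above, a connection with HMAC credentials can also be exercised directly from a notebook\. The following is a minimal sketch that assumes the `ibm-cos-sdk` package (`ibm_boto3`), a placeholder regional endpoint, and a hypothetical bucket name:

```python
import ibm_boto3
from ibm_botocore.client import Config

# Placeholder HMAC keys, endpoint, and bucket name for illustration only.
cos = ibm_boto3.client(
    "s3",
    aws_access_key_id="<ACCESS_KEY_ID>",
    aws_secret_access_key="<SECRET_ACCESS_KEY>",
    endpoint_url="https://s3.us-south.cloud-object-storage.appdomain.cloud",
    config=Config(signature_version="s3v4"),
)

# List the first few objects in a hypothetical project bucket.
for obj in cos.list_objects_v2(Bucket="<YOUR_PROJECT_BUCKET>").get("Contents", [])[:5]:
    print(obj["Key"])
```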
### Preview information for data assets ###
For structured data, the preview displays a limited number of rows and columns:
<!-- <ul> -->
* The number of rows in the preview is limited to 1,000\.
* The amount of data is limited to 800 KB\. The more columns the data asset has, the fewer rows that appear in the preview\.
<!-- </ul> -->
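A rough way to reason about these two limits together (illustrative only, not a product API):

```python
def preview_row_count(avg_row_bytes: int) -> int:
    """Illustrative only: approximate rows a preview can show, given the
    documented caps of 1,000 rows and 800 KB of data."""
    MAX_ROWS, MAX_BYTES = 1000, 800 * 1024
    return min(MAX_ROWS, MAX_BYTES // max(avg_row_bytes, 1))

print(preview_row_count(200))   # narrow rows: the 1,000 row cap applies
print(preview_row_count(4000))  # wide rows: the 800 KB cap yields 204 rows
```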
Previews show different information for different types of data assets and files\.
#### Structured data ####
For structured data, the preview shows column names, data types, and a subset of columns and rows of data\. The supported formats of structured data are: Relational data, CSV, TSV, Avro, partitioned data, and Parquet (projects)\.
Assets from file\-based connections like Apache Kafka and Apache Cassandra are not supported\.
#### Unstructured data ####
Unstructured data files must be stored in IBM Cloud Object Storage to have previews\.
For these unstructured data files, the preview shows the whole document: Text, JSON, HTML, PDF, images, and Microsoft Excel documents\. HTML files are supported in text format\. Images stored in IBM Cloud Object Storage support the JPG, JPEG, PNG, GIF, and BMP formats\. Microsoft Excel document previews show the first sheet\.
For connected folder assets, the preview shows the files and subfolders, which you can also preview\.
### File extensions and mime types of previewed files ###
These types of files that contain structured data have previews:
<!-- <table> -->
Structured data files
| Extension | Mime type |
| --------- | --------------------------------------------------------------------- |
| AVRO | |
| CSV | text/csv |
| CSV1 | application/csv |
| JSON | application/json |
| PARQ | |
| TSV | |
| TXT | text/plain |
| XLSX | application/vnd\.openxmlformats\-officedocument\.spreadsheetml\.sheet |
| XLS | application/vnd\.ms\-excel |
| XLSM | application/vnd\.ms\-excel\.sheet\.macroEnabled\.12 |
<!-- </table ""> -->
These types of image files have previews:
<!-- <table> -->
Image files
| Extension | Mime type |
| --------- | ---------- |
| BMP | image/bmp |
| GIF | image/gif |
| JPG | image/jpeg |
| JPEG | image/jpeg |
| PNG | image/png |
<!-- </table ""> -->
These types of document files have previews:
<!-- <table> -->
Document files
| Extension | Mime type |
| --------- | --------------- |
| HTML | text/html |
| PDF | application/pdf |
| TXT | text/plain |
<!-- </table ""> -->
## Learn more ##
<!-- <ul> -->
* [Searching for assets across the platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/search-assets.html)
* [Profile](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/profile.html)
* [Activities](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/asset-activities.html)
* [Visualizations](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/visualizations.html)
<!-- </ul> -->
**Parent topic:**[Asset types and properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html)
<!-- </article "role="article" "> -->
|
AA213D259727545C26401AD5CFB4916B6EFBD18D | https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/profile.html?context=cdpaas&locale=en | Profiles of data assets | Profiles of data assets
An asset profile includes generated information and statistics about the asset content. You can see the profile on an asset's Profile page.
* [Requirements and restrictions](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/profile.html?context=cdpaas&locale=en#prereqs)
* [Creating a profile](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/profile.html?context=cdpaas&locale=en#create-profile)
* [Profile information](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/profile.html?context=cdpaas&locale=en#profile-results)
Requirements and restrictions
You can view the profile of assets under the following circumstances.
Required permissions
To view a data asset's Profile page, you can have any role in a project or catalog. To create or update a profile, you must have the Admin or Editor role in the project or catalog.
Workspaces
You can view the asset profile in projects.
Types of assets
These types of assets have a profile:
* Data assets from relational or nonrelational databases from a connection to the data sources, except Cloudant
* Data assets from partitioned data sets, where a partitioned data set consists of multiple files and is represented by a single folder uploaded from the local file system or from file-based connections to the data sources
* Data assets from files uploaded from the local file system or from file-based connections to the data sources, with these formats:
* CSV
* XLS, XLSM, XLSX (Only the first sheet in a workbook is profiled.)
* TSV
* Avro
* Parquet
However, structured data files are not profiled when data assets do not explicitly reference them, such as in these circumstances:
* The files are within a connected folder asset. Files that are accessible from a connected folder asset are not treated as assets and are not profiled.
* The files are within an archive file. The archive file is referenced by the data asset and the compressed files are not profiled.
Creating a profile
In projects, you can create a profile for a data asset by clicking Create profile. You can update an existing profile when the data changes.
Profiling results
When you create or update an asset profile, the columns in the data asset are analyzed. By default, the profile is created based on the first 5,000 rows of data. If the data asset has more than 250 columns, the profile is created based on the first 1,000 rows of data.
The profile of a data asset shows information about each column in the data set:
* When the profile was created or last updated.
* How many columns and rows were analyzed.
* The data types for columns and the distribution of data types.
* The data formats for columns and the distribution of formats.
* The percentage of matching, mismatching, or missing data for each column.
* The frequency distribution for all values identified in a column.
* Statistics about the data for each column:
* The number of distinct values indicates how many different values exist in the sampled data for the column.
* The percentage of unique values indicates the percentage of distinct values that appear only once in the column.
* The minimum, maximum, and mean, and sometimes the standard deviation, for that column. Depending on a column’s data format, the statistics vary slightly. For example, statistics for a column of data type integer have minimum, maximum, and mean values and a standard deviation value while statistics for a column of data type string have minimum length, maximum length, and mean length values.
Parent topic:[Asset types and properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html)
| # Profiles of data assets #
An asset profile includes generated information and statistics about the asset content\. You can see the profile on an asset's **Profile** page\.
<!-- <ul> -->
* [Requirements and restrictions](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/profile.html?context=cdpaas&locale=en#prereqs)
* [Creating a profile](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/profile.html?context=cdpaas&locale=en#create-profile)
* [Profile information](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/profile.html?context=cdpaas&locale=en#profile-results)
<!-- </ul> -->
## Requirements and restrictions ##
You can view the profile of assets under the following circumstances\.
**Required permissions**
: To view a data asset's **Profile** page, you can have any role in a project or catalog\.
: To create or update a profile, you must have the **Admin** or **Editor** role in the project or catalog\.
**Workspaces**
: You can view the asset profile in projects\.
**Types of assets**
: These types of assets have a profile:
<!-- <ul> -->
* Data assets from relational or nonrelational databases from a connection to the data sources, except Cloudant
* Data assets from partitioned data sets, where a partitioned data set consists of multiple files and is represented by a single folder uploaded from the local file system or from file\-based connections to the data sources
* Data assets from files uploaded from the local file system or from file\-based connections to the data sources, with these formats:
<!-- <ul> -->
* CSV
* XLS, XLSM, XLSX (Only the first sheet in a workbook is profiled.)
* TSV
* Avro
* Parquet
<!-- </ul> -->
However, structured data files are not profiled when data assets do not explicitly reference them, such as in these circumstances:
<!-- <ul> -->
* The files are within a connected folder asset. Files that are accessible from a connected folder asset are not treated as assets and are not profiled.
* The files are within an archive file. The data asset references the archive file itself, and the compressed files within it are not profiled.
<!-- </ul> -->
<!-- </ul> -->
## Creating a profile ##
In projects, you can create a profile for a data asset by clicking **Create profile**\. You can update an existing profile when the data changes\.
## Profiling results ##
When you create or update an asset profile, the columns in the data asset are analyzed\. By default, the profile is created based on the first 5,000 rows of data\. If the data asset has more than 250 columns, the profile is created based on the first 1,000 rows of data\.
The profile of a data asset shows information about each column in the data set:
<!-- <ul> -->
* When the profile was created or last updated\.
* How many columns and rows were analyzed\.
* The data types for columns and the data type distribution\.
* The data formats for columns and the data format distribution\.
* The percentage of matching, mismatching, or missing data for each column\.
* The frequency distribution for all values identified in a column\.
* Statistics about the data for each column:
<!-- <ul> -->
* The number of *distinct* values indicates how many different values exist in the sampled data for the column.
* The percentage of *unique* values indicates the percentage of distinct values that appear only once in the column.
* The minimum, maximum, and mean values, and sometimes the standard deviation, for that column. Depending on a column’s data format, the statistics vary slightly. For example, statistics for a column of data type integer have minimum, maximum, and mean values and a standard deviation value, while statistics for a column of data type string have minimum length, maximum length, and mean length values.
<!-- </ul> -->
<!-- </ul> -->
**Parent topic:**[Asset types and properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html)
<!-- </article "role="article" "> -->
|
B7BAFAD14D0BF628C14FE0821AF01AEE98A0AE62 | https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html?context=cdpaas&locale=en | Creating a project | Creating a project
You create a project to collaborate with your team on working with data and other resources to achieve a particular goal, such as building a model.
Your [sandbox project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/sandbox.html) is created automatically when you sign up for watsonx.ai.
You can create an empty project, start from a sample project that provides sample data and other assets, or import a previously exported project. See [Importing a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/import-project.html). The number of projects you can create per data center is 100.
Your project resources can include data, collaborators, tools, assets that run code, like notebooks and models, and other types of assets.
* [Requirements](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html?context=cdpaas&locale=en#requirements)
* [Create a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html?context=cdpaas&locale=en#create-a-project)
Requirements and restrictions
Before you create a project, understand the requirements for storage and the project name.
Storage requirement : You must associate an [IBM Cloud Object Storage instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/storage-options.html) with your project to store assets. Each project has a separate bucket to hold the project's assets. If you are not an administrator for the IBM Cloud Object Storage instance, it must be [configured to allow project creation](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html). When a new project is created, the Cloud Object Storage bucket defaults to Regional resiliency. Regional buckets distribute data across several data centers that are within the same metropolitan area. If one of these data centers suffers an outage or destruction, availability and performance are not affected.
Project name requirements : Your project name must follow these requirements: : - Must be unique in the account. : - Must contain 1 - 255 characters. : - Can't contain these characters: % \ : - Can't contain leading or trailing underscores (_). : - Can't contain leading or trailing spaces. Leading or trailing spaces are automatically truncated.
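As an illustration, these name rules can be checked client-side with a few lines of Python. This is a sketch only: the function is hypothetical, and uniqueness within the account can ultimately be verified only by the platform.

```python
import re

def is_valid_project_name(name: str, existing_names: set) -> bool:
    name = name.strip()                    # leading/trailing spaces are truncated
    if not 1 <= len(name) <= 255:          # must contain 1 - 255 characters
        return False
    if re.search(r"[%\\]", name):          # '%' and '\' are not allowed
        return False
    if name.startswith("_") or name.endswith("_"):
        return False                       # no leading or trailing underscores
    return name not in existing_names      # must be unique in the account

print(is_valid_project_name("Sales Forecast", set()))   # True
print(is_valid_project_name("_drafts", set()))          # False
```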
Creating a project
To create a project:
1. Choose Projects > View all projects from the navigation menu and click New project.
2. Choose whether to create an empty project or to create a project based on an exported project file or a sample project.
3. If you chose to create a project from a file or a sample, upload a project file or select a sample project. See [Importing a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/import-project.html).
4. If you chose to create a new project, add a name on the New project screen.
5. You can [mark the project as sensitive](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/mark-sensitive.html). The project has a sensitive tag and project collaborators can't move data assets out of the project. You cannot change this setting after the project is created.
6. Choose an existing [object storage service instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/storage-options.html) or create a new one.
7. Click Create. You can start adding resources to your project.
The object storage bucket name for the project is based on the project name, with spaces and nonalphanumeric characters removed, plus a unique identifier.
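For example, a bucket-style name could be derived as in the following sketch. The exact unique identifier format that the platform appends is not documented here, so the UUID suffix is an assumption for illustration.

```python
import re
import uuid

def derive_bucket_name(project_name: str) -> str:
    # Keep only alphanumeric characters from the project name...
    base = re.sub(r"[^A-Za-z0-9]", "", project_name).lower()
    # ...then append a unique identifier (format assumed for illustration).
    return f"{base}-{uuid.uuid4().hex}"

print(derive_bucket_name("My Sales Project"))  # e.g. mysalesproject-3f9c0a...
```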
Watch this video to see how to create an empty project, an imported project, and a project from a sample.
This video provides a visual method to learn the concepts and tasks in this documentation.
Next steps
* [Add collaborators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html)
* [Add data](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html)
Learn more
* [Object storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/storage-options.html)
* [Importing a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/import-project.html)
* [Troubleshooting Cloud Object Storage for projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/troubleshoot-cos.html)
Parent topic:[Projects ](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html)
| # Creating a project #
You create a project to collaborate with your team on working with data and other resources to achieve a particular goal, such as building a model\.
Your [sandbox project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/sandbox.html) is created automatically when you sign up for watsonx\.ai\.
You can create an empty project, start from a sample project that provides sample data and other assets, or import a previously exported project\. See [Importing a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/import-project.html)\. The number of projects you can create per data center is 100\.
Your project resources can include data, collaborators, tools, assets that run code, like notebooks and models, and other types of assets\.
<!-- <ul> -->
* [Requirements](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html?context=cdpaas&locale=en#requirements)
* [Create a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html?context=cdpaas&locale=en#create-a-project)
<!-- </ul> -->
## Requirements and restrictions ##
Before you create a project, understand the requirements for storage and the project name\.
**Storage requirement** : You must associate an [IBM Cloud Object Storage instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/storage-options.html) with your project to store assets\. Each project has a separate bucket to hold the project's assets\. If you are not an administrator for the IBM Cloud Object Storage instance, it must be [configured to allow project creation](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html)\. When a new project is created, the Cloud Object Storage bucket defaults to Regional resiliency\. Regional buckets distribute data across several data centers that are within the same metropolitan area\. If one of these data centers suffers an outage or destruction, availability and performance are not affected\.
**Project name requirements** : Your project name must follow these requirements: : \- Must be unique in the account\. : \- Must contain 1 \- 255 characters\. : \- Can't contain these characters: % \\ : \- Can't contain leading or trailing underscores (\_)\. : \- Can't contain leading or trailing spaces\. Leading or trailing spaces are automatically truncated\.
## Creating a project ##
To create a project:
<!-- <ol> -->
1. Choose **Projects > View all projects** from the navigation menu and click **New project**\.
2. Choose whether to create an empty project or to create a project based on an exported project file or a sample project\.
3. If you chose to create a project from a file or a sample, upload a project file or select a sample project\. See [Importing a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/import-project.html)\.
4. If you chose to create a new project, add a name on the **New project** screen\.
5. You can [mark the project as sensitive](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/mark-sensitive.html)\. The project has a sensitive tag and project collaborators can't move data assets out of the project\. You cannot change this setting after the project is created\.
6. Choose an existing [object storage service instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/storage-options.html) or create a new one\.
7. Click **Create**\. You can start adding resources to your project\.
<!-- </ol> -->
The object storage bucket name for the project is based on the project name, with spaces and nonalphanumeric characters removed, plus a unique identifier\.
Watch this video to see how to create an empty project, an imported project, and a project from a sample\.
This video provides a visual method to learn the concepts and tasks in this documentation\.
## Next steps ##
<!-- <ul> -->
* [Add collaborators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html)
* [Add data](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html)
<!-- </ul> -->
## Learn more ##
<!-- <ul> -->
* [Object storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/storage-options.html)
* [Importing a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/import-project.html)
* [Troubleshooting Cloud Object Storage for projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/troubleshoot-cos.html)
<!-- </ul> -->
**Parent topic:**[Projects ](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html)
<!-- </article "role="article" "> -->
|
C32FE380CF3083B6D85554063B5ACB153FC1C8BE | https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html?context=cdpaas&locale=en | Quick start tutorials | Quick start tutorials
Take quick start tutorials to learn how to perform specific tasks, such as refine data or build a model. These tutorials help you quickly learn how to do a specific task or set of related tasks.
The quick start tutorials are categorized by task:
* [Preparing data](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html?context=cdpaas&locale=en#prepare)
* [Analyzing and visualizing data](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html?context=cdpaas&locale=en#analyze)
* [Building, deploying, and trusting models](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html?context=cdpaas&locale=en#build)
* [Working with foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html?context=cdpaas&locale=en#prompt)
Each tutorial requires one or more service instances. Some services are included in multiple tutorials. You can start with any task. Each of these tutorials provides a description of the tool, a video, the instructions, and additional learning resources.
The tags for each tutorial describe the level of expertise (Beginner, Intermediate, or Advanced), and the amount of coding required (No code, Low code, or All code).
After completing these tutorials, see the [Other learning resources](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html?context=cdpaas&locale=en#resources) section to continue your learning.
Preparing data
To get started with preparing, transforming, and integrating data, understand the overall workflow, choose a tutorial, and check out other learning resources for working on the platform.
Your data preparation workflow has these basic steps:
1. Create a project.
2. If necessary, create the service instance that provides the tool you want to use and associate it with the project.
3. Add data to your project. You can add data files from your local system, data from a remote data source that you connect to, data from a catalog, or sample data.
4. Choose a tool to analyze your data. Each of the tutorials describes a tool.
5. Run or schedule a job to prepare your data.
Tutorials for preparing data
Each of these tutorials provides a description of the tool, a video, the instructions, and additional learning resources:
Tutorial Description Expertise for tutorial
[Refine and visualize data with Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html) Prepare and visualize tabular data with a graphical flow editor. Select operations to manipulate data. <br><br>Beginner<br><br>No code
[Generate synthetic tabular data](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-generate-data.html) Generate synthetic tabular data using a graphical flow editor. Select operations to generate data. <br><br>Beginner<br><br>No code
Analyzing and visualizing data
To get started with analyzing and visualizing data, understand the overall workflow, choose a tutorial, and check out other learning resources for working with other tools.
Your analyzing and visualizing data workflow has these basic steps:
1. Create a project.
2. If necessary, create the service instance that provides the tool you want to use and associate it with the project.
3. Add data to your project. You can add data files from your local system, data from a remote data source that you connect to, data from a catalog, or sample data.
4. Choose a tool to analyze your data. Each of the tutorials describes a tool.
Tutorials for analyzing and visualizing data
Each of these tutorials provides a description of the tool, a video, the instructions, and additional learning resources:
Tutorial Description Expertise for tutorial
[Analyze data in a Jupyter notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-analyze.html) Load data, run, and share a notebook. Understand generated Python code. <br><br>Intermediate<br><br>All code
[Refine and visualize data with Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html) Prepare and visualize tabular data with a graphical flow editor. Select operations to manipulate data. <br><br>Beginner<br><br>No code
Building, deploying, and trusting models
To get started with building, deploying, and trusting models, understand the overall workflow, choose a tutorial, and check out other learning resources for working on the platform.
The model workflow has three main steps: build a model asset, deploy the model, and build trust in the model.

Tutorials for building, deploying, and trusting models
Each tutorial provides a description of the tool, a video, the instructions, and additional learning resources:
Tutorial Description Expertise for tutorial
[Build and deploy a machine learning model with AutoAI](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build.html) Automatically build model candidates with the AutoAI tool. Build, deploy, and test a model without coding. <br><br>Beginner<br><br>No code
[Build and deploy a machine learning model in a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-notebook.html) Build a model by updating and running a notebook that uses Python code and the Watson Machine Learning APIs. Build, deploy, and test a scikit-learn model that uses Python code. <br><br>Intermediate<br><br>All code
[Build and deploy a machine learning model with SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html) Build a C5.0 model that uses the SPSS Modeler tool. Drop data and operation nodes on a canvas and select properties. <br><br>Beginner<br><br>No code
[Build and deploy a Decision Optimization model](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html) Automatically build scenarios with the Modeling Assistant. Solve and explore scenarios, then deploy and test a model without coding. <br><br>Intermediate<br><br>No code
[Automate the lifecycle for a model with pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-pipeline.html) Create and run a pipeline to automate building and deploying a machine learning model. Drop operation nodes on a canvas and select properties. <br><br>Beginner<br><br>No code
Prompting foundation models
To get started with prompting foundation models, understand the overall workflow, choose a tutorial, and check out other learning resources for working on the platform.
Your prompt engineering workflow has these basic steps:
1. Create a project.
2. If necessary, create the service instance that provides the tool you want to use and associate it with the project.
3. Choose a tool to prompt foundation models. Each of the tutorials describes a tool.
4. Save and share your best prompts.
Tutorials for working with foundation models
Each tutorial provides a description of the tool, a video, the instructions, and additional learning resources:
Tutorial Description Expertise for tutorial
[Prompt a foundation model using Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html) Experiment with prompting different foundation models, explore sample prompts, and save and share your best prompts. Prompt a model using Prompt Lab without coding. <br><br>Beginner<br><br>No code
[Prompt a foundation model with the retrieval-augmented generation pattern](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html) Prompt a foundation model by leveraging information in a knowledge base. Use the retrieval-augmented generation pattern in a Jupyter notebook that uses Python code. <br><br>Intermediate<br><br>All code
[Tune a foundation model](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html) Tune a foundation model to enhance model performance. Use the Tuning Studio to tune a model without coding. <br><br>Intermediate<br><br>No code
[Evaluate and track a prompt template](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html) Evaluate a prompt template to measure the performance of a foundation model and track the prompt template through its lifecycle. Use the evaluation tool and an AI use case to track the prompt template. <br><br>Beginner<br><br>No code
Other learning resources
Guided tutorials
Access the [Build an AI model sample project](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/c6008d167803ef95c1b37da931604cac) to follow a guided tutorial in the Samples area. After you create the sample project, the readme provides instructions:
* Choose Explore and prepare data to remove anomalies in the data with Data Refinery.
* Choose Build a model in a notebook to build a model with Python code.
* Choose Build and deploy a model to automate building a model with the AutoAI tool.
Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform.
* Watch a preview of the guided tutorial video series
Documentation
General
* [Projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html)
* [Adding data to your project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html)
Preparing data
* [Refining data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html)
* [Synthetic data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/synthetic_data_overview_sd.html)
Analyzing and visualizing data
* [Notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html)
Building, deploying, and trusting models
* [Analyzing data and building models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html)
* [Deploying and managing models](https://dataplatform.cloud.ibm.com/docs/content/wsj/wmls/wmls-deploy-overview.html)
Prompting a foundation model
* [Retrieval-augmented generation pattern](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-rag.html)
* [Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html)
* [Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html)
Videos
* [A comprehensive set of videos](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/videos-wx.html) that show many common tasks in watsonx.
Samples
Find sample data sets, projects, models, prompts, and notebooks in the Samples area to gain hands-on experience:
[Notebooks](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=notebook) that you can add to your project to get started analyzing data and building models.
[Projects](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=project-template) that you can import containing notebooks, data sets, prompts, and other assets.
[Data sets](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=dataset) that you can add to your project to refine, analyze, and build models.
[Prompts](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=example-prompt) that you can use in the Prompt Lab to prompt a foundation model.
[Foundation models](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=foundation-model) that you can use in the Prompt Lab.
Training
* [Watson Studio Methodology](https://www.ibm.com/training/course/W7067G) is an IBM Training e-Learning course that provides an in-depth look at Watson Studio.
* [Take control of your data with Watson Studio](https://developer.ibm.com/learningpaths/get-started-watson-studio/) is a learning path that consists of step-by-step tutorials that explain the process of working with data using Watson Studio.
Parent topic:[Getting started](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-wdp.html)
| # Quick start tutorials #
Take quick start tutorials to learn how to perform specific tasks, such as refine data or build a model\. These tutorials help you quickly learn how to do a specific task or set of related tasks\.
The quick start tutorials are categorized by task:
<!-- <ul> -->
* [Preparing data](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html?context=cdpaas&locale=en#prepare)
* [Analyzing and visualizing data](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html?context=cdpaas&locale=en#analyze)
* [Building, deploying, and trusting models](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html?context=cdpaas&locale=en#build)
* [Working with foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html?context=cdpaas&locale=en#prompt)
<!-- </ul> -->
Each tutorial requires one or more service instances\. Some services are included in multiple tutorials\. You can start with any task\. Each of these tutorials provides a description of the tool, a video, the instructions, and additional learning resources\.
The tags for each tutorial describe the level of expertise (Beginner, Intermediate, or Advanced), and the amount of coding required (No code, Low code, or All code)\.
After completing these tutorials, see the [Other learning resources](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html?context=cdpaas&locale=en#resources) section to continue your learning\.
## Preparing data ##
To get started with preparing, transforming, and integrating data, understand the overall workflow, choose a tutorial, and check out other learning resources for working on the platform\.
Your data preparation workflow has these basic steps:
<!-- <ol> -->
1. Create a project\.
2. If necessary, create the service instance that provides the tool you want to use and associate it with the project\.
3. Add data to your project\. You can add data files from your local system, data from a remote data source that you connect to, data from a catalog, or sample data\.
4. Choose a tool to analyze your data\. Each of the tutorials describes a tool\.
5. Run or schedule a job to prepare your data\.
<!-- </ol> -->
### Tutorials for preparing data ###
Each of these tutorials provides a description of the tool, a video, the instructions, and additional learning resources:
<!-- <table> -->
| Tutorial | Description | Expertise for tutorial |
| ------------------------------------------------ | ----------------------------------------------------------------- | ----------------------------------------------------------------------- |
| [Refine and visualize data with Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html) | Prepare and visualize tabular data with a graphical flow editor\. | Select operations to manipulate data\. <br><br>Beginner<br><br>No code |
| [Generate synthetic tabular data](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-generate-data.html) | Generate synthetic tabular data using a graphical flow editor\. | Select operations to generate data\. <br><br>Beginner<br><br>No code |
<!-- </table ""> -->
## Analyzing and visualizing data ##
To get started with analyzing and visualizing data, understand the overall workflow, choose a tutorial, and check out other learning resources for working with other tools\.
Your analyzing and visualizing data workflow has these basic steps:
<!-- <ol> -->
1. Create a project\.
2. If necessary, create the service instance that provides the tool you want to use and associate it with the project\.
3. Add data to your project\. You can add data files from your local system, data from a remote data source that you connect to, data from a catalog, or sample data\.
4. Choose a tool to analyze your data\. Each of the tutorials describes a tool\.
<!-- </ol> -->
### Tutorials for analyzing and visualizing data ###
Each of these tutorials provides a description of the tool, a video, the instructions, and additional learning resources:
<!-- <table> -->
| Tutorial | Description | Expertise for tutorial |
| ------------------------------------------------ | ----------------------------------------------------------------- | ------------------------------------------------------------------------ |
| [Analyze data in a Jupyter notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-analyze.html) | Load data, run, and share a notebook\. | Understand generated Python code\. <br><br>Intermediate<br><br>All code |
| [Refine and visualize data with Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html) | Prepare and visualize tabular data with a graphical flow editor\. | Select operations to manipulate data\. <br><br>Beginner<br><br>No code |
<!-- </table ""> -->
## Building, deploying, and trusting models ##
To get started with building, deploying, and trusting models, understand the overall workflow, choose a tutorial, and check out other learning resources for working on the platform\.
The model workflow has three main steps: build a model asset, deploy the model, and build trust in the model\.

### Tutorials for building, deploying, and trusting models ###
Each tutorial provides a description of the tool, a video, the instructions, and additional learning resources:
<!-- <table> -->
| Tutorial | Description | Expertise for tutorial |
| --------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------- |
| [Build and deploy a machine learning model with AutoAI](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build.html) | Automatically build model candidates with the AutoAI tool\. | Build, deploy, and test a model without coding\. <br><br>Beginner<br><br>No code |
| [Build and deploy a machine learning model in a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-notebook.html) | Build a model by updating and running a notebook that uses Python code and the Watson Machine Learning APIs\. | Build, deploy, and test a scikit\-learn model that uses Python code\. <br><br>Intermediate<br><br>All code |
| [Build and deploy a machine learning model with SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html) | Build a C5\.0 model that uses the SPSS Modeler tool\. | Drop data and operation nodes on a canvas and select properties\. <br><br>Beginner<br><br>No code |
| [Build and deploy a Decision Optimization model](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html) | Automatically build scenarios with the Modeling Assistant\. | Solve and explore scenarios, then deploy and test a model without coding\. <br><br>Intermediate<br><br>No code |
| [Automate the lifecycle for a model with pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-pipeline.html) | Create and run a pipeline to automate building and deploying a machine learning model\. | Drop operation nodes on a canvas and select properties\. <br><br>Beginner<br><br>No code |
<!-- </table ""> -->
## Prompting foundation models ##
To get started with prompting foundation models, understand the overall workflow, choose a tutorial, and check out other learning resources for working on the platform\.
Your prompt engineering workflow has these basic steps:
<!-- <ol> -->
1. Create a project\.
2. If necessary, create the service instance that provides the tool you want to use and associate it with the project\.
3. Choose a tool to prompt foundation models\. Each of the tutorials describes a tool\.
4. Save and share your best prompts\.
<!-- </ol> -->
### Tutorials for working with foundation models ###
Each tutorial provides a description of the tool, a video, the instructions, and additional learning resources:
<!-- <table> -->
| Tutorial | Description | Expertise for tutorial |
| -------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------- |
| [Prompt a foundation model using Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html) | Experiment with prompting different foundation models, explore sample prompts, and save and share your best prompts\. | Prompt a model using Prompt Lab without coding\. <br><br>Beginner<br><br>No code |
| [Prompt a foundation model with the retrieval\-augmented generation pattern](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html) | Prompt a foundation model by leveraging information in a knowledge base\. | Use the retrieval\-augmented generation pattern in a Jupyter notebook that uses Python code\. <br><br>Intermediate<br><br>All code |
| [Tune a foundation model](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html) | Tune a foundation model to enhance model performance\. | Use the Tuning Studio to tune a model without coding\. <br><br>Intermediate<br><br>No code |
| [Evaluate and track a prompt template](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html) | Evaluate a prompt template to measure the performance of a foundation model and track the prompt template through its lifecycle\. | Use the evaluation tool and an AI use case to track the prompt template\. <br><br>Beginner<br><br>No code |
<!-- </table ""> -->
## Other learning resources ##
### Guided tutorials ###
Access the [Build an AI model sample project](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/c6008d167803ef95c1b37da931604cac) to follow a guided tutorial in the Samples area\. After you create the sample project, the readme provides instructions:
<!-- <ul> -->
* Choose **Explore and prepare data** to remove anomalies in the data with Data Refinery\.
* Choose **Build a model in a notebook** to build a model with Python code\.
* Choose **Build and deploy a model** to automate building a model with the AutoAI tool\.
<!-- </ul> -->
Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform\.
<!-- <ul> -->
* Watch a preview of the guided tutorial video series
<!-- </ul> -->
### Documentation ###
#### General ####
<!-- <ul> -->
* [Projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html)
* [Adding data to your project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html)
<!-- </ul> -->
#### Preparing data ####
<!-- <ul> -->
* [Refining data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html)
* [Synthetic data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/synthetic_data_overview_sd.html)
<!-- </ul> -->
#### Analyzing and visualizing data ####
<!-- <ul> -->
* [Notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html)
<!-- </ul> -->
#### Building, deploying, and trusting models ####
<!-- <ul> -->
* [Analyzing data and building models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html)
* [Deploying and managing models](https://dataplatform.cloud.ibm.com/docs/content/wsj/wmls/wmls-deploy-overview.html)
<!-- </ul> -->
### Prompting a foundation model ###
<!-- <ul> -->
* [Retrieval\-augmented generation pattern](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-rag.html)
* [Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html)
* [Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html)
<!-- </ul> -->
### Videos ###
<!-- <ul> -->
* [A comprehensive set of videos](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/videos-wx.html) that show many common tasks in watsonx\.
<!-- </ul> -->
### Samples ###
Find sample data sets, projects, models, prompts, and notebooks in the Samples area to gain hands\-on experience:
[Notebooks](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=notebook) that you can add to your project to get started analyzing data and building models\.
[Projects](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=project-template) that you can import containing notebooks, data sets, prompts, and other assets\.
[Data sets](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=dataset) that you can add to your project to refine, analyze, and build models\.
[Prompts](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=example-prompt) that you can use in the Prompt Lab to prompt a foundation model\.
[Foundation models](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=foundation-model) that you can use in the Prompt Lab\.
### Training ###
<!-- <ul> -->
* [Watson Studio Methodology](https://www.ibm.com/training/course/W7067G) is an IBM Training e\-Learning course that provides an in\-depth look at Watson Studio\.
* [Take control of your data with Watson Studio](https://developer.ibm.com/learningpaths/get-started-watson-studio/) is a learning path that consists of step\-by\-step tutorials that explain the process of working with data using Watson Studio\.
<!-- </ul> -->
**Parent topic:**[Getting started](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-wdp.html)
<!-- </article "role="article" "> -->
|
8D2B29253C00AE6A20730D0C9AD3284DC0FCABF5 | https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/regional-datactr.html?context=cdpaas&locale=en | Regional availability for services and features | Regional availability for services and features
IBM watsonx is deployed on the IBM Cloud multi-zone region network. The availability of services and features can vary across regional data centers.
You can view the regional availability for every service in the [Services catalog](https://dataplatform.cloud.ibm.com/data/catalog?target=services&context=wx).
Regional availability of the Watson Studio and Watson Machine Learning services
Watsonx.ai includes the Watson Studio and Watson Machine Learning services to provide foundation and machine learning model tools.
The Watson Studio and Watson Machine Learning services are available in the following regional data centers:
* Dallas (us-south), in Texas US
* Frankfurt (eu-de), in Germany
Regional availability of foundation models
The following table shows the IBM Cloud data centers where each foundation model is available. A checkmark indicates that the model is hosted in the region; a lookup sketch built from the table follows it.
Table 1. IBM Cloud data center support
Model name Dallas Frankfurt
flan-t5-xl-3b ✓
flan-t5-xxl-11b ✓ ✓
flan-ul2-20b ✓ ✓
gpt-neox-20b ✓ ✓
granite-13b-chat-v2 ✓ ✓
granite-13b-chat-v1 ✓ ✓
granite-13b-instruct-v2 ✓ ✓
granite-13b-instruct-v1 ✓ ✓
llama-2-13b-chat ✓ ✓
llama-2-70b-chat ✓ ✓
mpt-7b-instruct2 ✓ ✓
mt0-xxl-13b ✓ ✓
starcoder-15.5b ✓ ✓
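For example, Table 1 can be encoded as a simple lookup to check availability before selecting a model. This is a sketch only, transcribing a subset of the table as of the time of writing; region support can change.

```python
# Region support per model, transcribed from Table 1 (subset).
MODEL_REGIONS = {
    "flan-t5-xl-3b": {"Dallas"},
    "flan-t5-xxl-11b": {"Dallas", "Frankfurt"},
    "flan-ul2-20b": {"Dallas", "Frankfurt"},
    "granite-13b-chat-v2": {"Dallas", "Frankfurt"},
    "llama-2-70b-chat": {"Dallas", "Frankfurt"},
    "starcoder-15.5b": {"Dallas", "Frankfurt"},
}

def is_available(model: str, region: str) -> bool:
    return region in MODEL_REGIONS.get(model, set())

assert is_available("flan-t5-xxl-11b", "Frankfurt")
assert not is_available("flan-t5-xl-3b", "Frankfurt")
```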
Tool and environment limitations for the Frankfurt region
Table 2. Frankfurt regional limitations
Service Limitation
Watson Studio If you need a Spark runtime, you must use the [Spark environment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/jupyter-spark.html) in Watson Studio for the SPSS Modeler and notebook editor tools.
Watson Studio [Synthetic Data Generator](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/synthetic_data_overview_sd.html) is not supported.
Watson Studio [Tuning Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-studio.html) is not supported.
Regional availability of watsonx.governance
Watsonx.governance Lite and Essentials plans are available only in the Dallas region.
Regional availability of Watson OpenScale
Watson OpenScale legacy plans are available only in the Frankfurt region.
Regional availability of the Cloud Object Storage service
The region for the Cloud Object Storage service is Global. Cloud Object Storage buckets for workspaces are Regional buckets. For more information, see [IBM Cloud docs: Cloud Object Storage endpoints and storage locations](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-endpoints).
Learn more
* [IBM Cloud docs: IBM Cloud global data centers](https://www.ibm.com/cloud/data-centers)
* [Services in the IBM watsonx catalog](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/cloud-services.html)
Parent topic:[Services and integrations](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/svc-int.html)
| # Regional availability for services and features #
IBM watsonx is deployed on the IBM Cloud multi\-zone region network\. The availability of services and features can vary across regional data centers\.
You can view the regional availability for every service in the [Services catalog](https://dataplatform.cloud.ibm.com/data/catalog?target=services&context=wx)\.
## Regional availability of the Watson Studio and Watson Machine Learning services ##
Watsonx\.ai includes the Watson Studio and Watson Machine Learning services to provide foundation and machine learning model tools\.
The Watson Studio and Watson Machine Learning services are available in the following regional data centers:
<!-- <ul> -->
* Dallas (us\-south), in Texas US
* Frankfurt (eu\-de), in Germany
<!-- </ul> -->
### Regional availability of foundation models ###
The following table shows the IBM Cloud data centers where each foundation model is available\. A checkmark indicates that the model is hosted in the region\.
<!-- <table> -->
Table 1\. IBM Cloud data center support
| Model name | Dallas | Frankfurt |
| -------------------------- | ------ | --------- |
| flan\-t5\-xl\-3b | ✓ | |
| flan\-t5\-xxl\-11b | ✓ | ✓ |
| flan\-ul2\-20b | ✓ | ✓ |
| gpt\-neox\-20b | ✓ | ✓ |
| granite\-13b\-chat\-v2 | ✓ | ✓ |
| granite\-13b\-chat\-v1 | ✓ | ✓ |
| granite\-13b\-instruct\-v2 | ✓ | ✓ |
| granite\-13b\-instruct\-v1 | ✓ | ✓ |
| llama\-2\-13b\-chat | ✓ | ✓ |
| llama\-2\-70b\-chat | ✓ | ✓ |
| mpt\-7b\-instruct2 | ✓ | ✓ |
| mt0\-xxl\-13b | ✓ | ✓ |
| starcoder\-15\.5b | ✓ | ✓ |
<!-- </table ""> -->
### Tool and environment limitations for the Frankfurt region ###
<!-- <table> -->
Table 2\. Frankfurt regional limitations
| Service | Limitation |
| ------------- | ------------------------------------------------------------------------------------------------------------------------------------- |
| Watson Studio | If you need a Spark runtime, you must use the [Spark environment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/jupyter-spark.html) in Watson Studio for the SPSS Modeler and notebook editor tools\. |
| Watson Studio | [Synthetic Data Generator](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/synthetic_data_overview_sd.html) is not supported\. |
| Watson Studio | [Tuning Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-studio.html) is not supported\. |
<!-- </table ""> -->
## Regional availability of watsonx\.governance ##
Watsonx\.governance Lite and Essentials plans are available only in the Dallas region\.
## Regional availability of Watson OpenScale ##
Watson OpenScale legacy plans are available only in the Frankfurt region\.
## Regional availability of the Cloud Object Storage service ##
The region for the Cloud Object Storage service is **Global**\. Cloud Object Storage buckets for workspaces are Regional buckets\. For more information, see [IBM Cloud docs: Cloud Object Storage endpoints and storage locations](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-endpoints)\.
## Learn more ##
<!-- <ul> -->
* [IBM Cloud docs: IBM Cloud global data centers](https://www.ibm.com/cloud/data-centers)
* [Services in the IBM watsonx catalog](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/cloud-services.html)
<!-- </ul> -->
**Parent topic:**[Services and integrations](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/svc-int.html)
<!-- </article "role="article" "> -->
|
F908A5F1D788E2597335215A464817436A3D3ED1 | https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/roles.html?context=cdpaas&locale=en | Levels of user access roles in IBM watsonx | Levels of user access roles in IBM watsonx
Every user of IBM watsonx has multiple levels of roles with the corresponding permissions, or actions. The permissions determine what actions a user can perform on the platform or within a service. Some roles are set in IBM Cloud, and others are set in IBM watsonx.
The IBM Cloud account owner or administrator sets the Identity and Access (IAM) Platform and Service access roles in the IBM Cloud account. Workspace administrators in watsonx set the collaborator roles for workspaces, for example, projects and deployment spaces.
Familiarity with the IBM Cloud IAM feature, Access groups, Platform roles, and Service roles is required to configure user access for IBM watsonx. See [IBM Cloud docs: IAM access](https://cloud.ibm.com/docs/account?topic=account-userroles) for a description of IBM Cloud IAM Platform and Service roles.
This illustration shows the different levels of roles assigned to each user so that they can work in IBM watsonx.
Levels of roles in IBM watsonx

The levels of roles are:
* [IAM Platform access roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/roles.html?context=cdpaas&locale=en#platform) determine your permissions for the IBM Cloud account. At least the Viewer role is required to work with services.
* [IAM Service access roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/roles.html?context=cdpaas&locale=en#service) determine your permissions within services.
* [Workspace collaborator roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/roles.html?context=cdpaas&locale=en#workspace) determine what actions you have permission to perform within workspaces in IBM watsonx.
IAM Platform access roles
The IAM Platform access roles are assigned and managed in the IBM Cloud account.
IAM Platform access roles provide permissions to manage the IBM Cloud account and to access services within IBM watsonx. The Platform access roles are Viewer, Operator, Editor, and Administrator. The Platform roles are available to all services on IBM Cloud.
The Viewer role has minimal, view-only permissions. Users need at least the Viewer role to see the services in IBM watsonx. A Viewer can:
* View, but not modify, available service instances and assets.
* Associate services with projects.
* Become a collaborator in projects or deployment spaces.
* Create projects and deployment spaces if assigned appropriate permissions for Cloud Object Storage.
The Operator role has permissions to configure existing service instances.
With the Editor role, you can:
* Perform all actions that the Viewer role allows.
* Provision instances of services.
* Update plans for service instances.
The Administrator role provides the same permissions as the Owner role for the account. With the Administrator role, you can:
* Perform all actions that the Viewer, Operator, and Editor roles allow.
* Perform all management actions for services.
* Add users to the [IBM Cloud account and assign roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-addl-users.html)
* Perform administrative tasks in IBM watsonx
* [Manage services for IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html)
To understand IAM Platform access roles, see [IBM Cloud docs: What is IBM Cloud Identity and Access Management?](https://cloud.ibm.com/docs/account?topic=account-iamoverview).
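The relationships between these roles can be summarized as cumulative permission sets, as in the following sketch. The permission labels are informal stand-ins for the actions listed above, not IBM identifiers.

```python
# Informal model of the cumulative Platform access roles described above.
PERMISSIONS = {
    "Viewer": {"view"},
    "Operator": {"configure"},
    "Editor": {"view", "provision", "update_plans"},
    "Administrator": {"view", "configure", "provision", "update_plans",
                      "manage_services", "add_users"},
}

def allowed(role: str, action: str) -> bool:
    return action in PERMISSIONS.get(role, set())

assert allowed("Editor", "provision")       # Editors can provision services
assert not allowed("Viewer", "provision")   # Viewers cannot
```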
IAM Service access roles
Service roles apply to individual services and define actions permitted within the service. IBM Cloud Object Storage has its own set of Service access roles. See [Setting up IBM Cloud Object Storage for use with IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html).
Workspace collaborator roles
Your role in a specific workspace determines what actions you can perform in that workspace. Your IAM roles do not affect your role within a workspace. For example, you can be the Administrator of the Cloud account, but that role does not automatically make you an administrator for a project or catalog. The Admin collaborator role for a project (or other workspace) must be explicitly assigned. Similarly, roles are specific to each workspace. You might have the Admin role in one project, which gives you full control of the contents of that project, including managing collaborators and assets, but have the Viewer role in another project, which allows you only to view the contents of that project. A minimal sketch of this per-workspace model follows the lists in this section.
Projects and deployment spaces have these roles:
* Admin: Control assets, collaborators, and settings in the workspace.
* Editor: Control assets in the workspace.
* Viewer: View the workspace and its contents.
The permissions that are associated with each role are specific to the type of workspace:
* [Project collaborator roles and permissions](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborator-permissions.html)
* [Deployment space collaborator roles and permissions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/collaborator-permissions-wml.html)
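Here is the minimal sketch of the per-workspace role model described in this section; the user names and workspace IDs are placeholders for illustration.

```python
# Roles are assigned per workspace, independently of IAM roles.
workspace_roles = {
    ("alice", "project-a"): "Admin",    # full control of project-a
    ("alice", "project-b"): "Viewer",   # view-only in project-b
}

def can_edit_assets(user: str, workspace: str) -> bool:
    # Admin and Editor can control assets; Viewer can only view.
    return workspace_roles.get((user, workspace)) in {"Admin", "Editor"}

assert can_edit_assets("alice", "project-a")
assert not can_edit_assets("alice", "project-b")
```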
Learn more
* [IBM Cloud docs: What is IBM Cloud Identity and Access Management?](https://cloud.ibm.com/docs/account?topic=account-iamoverview)
* [IBM Cloud docs: IAM access](https://cloud.ibm.com/docs/account?topic=account-userroles)
* [Setting up IBM watsonx for your organization](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-platform.html)
* [Managing IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html)
* [Find your IBM Cloud account owner or administrator](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html#accountadmin)
* [Determine your roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/your-roles.html)
Parent topic:[Adding users to the account](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-addl-users.html)
| # Levels of user access roles in IBM watsonx #
Every user of IBM watsonx has multiple levels of roles with the corresponding permissions, or actions\. The permissions determine what actions a user can perform on the platform or within a service\. Some roles are set in IBM Cloud, and others are set in IBM watsonx\.
The IBM Cloud account owner or administrator sets the Identity and Access (IAM) Platform and Service access roles in the IBM Cloud account\. Workspace administrators in watsonx set the collaborator roles for workspaces, for example, projects and deployment spaces\.
Familiarity with the IBM Cloud IAM feature, Access groups, Platform roles, and Service roles is required to configure user access for IBM watsonx\. See [IBM Cloud docs: IAM access](https://cloud.ibm.com/docs/account?topic=account-userroles) for a description of IBM Cloud IAM Platform and Service roles\.
This illustration shows the different levels of roles assigned to each user so that they can work in IBM watsonx\.
Levels of roles in IBM watsonx

The levels of roles are:
<!-- <ul> -->
* [IAM Platform access roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/roles.html?context=cdpaas&locale=en#platform) determine your permissions for the IBM Cloud account\. At least the **Viewer** role is required to work with services\.
* [IAM Service access roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/roles.html?context=cdpaas&locale=en#service) determine your permissions within services\.
* [Workspace collaborator roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/roles.html?context=cdpaas&locale=en#workspace) determine what actions you have permission to perform within workspaces in IBM watsonx\.
<!-- </ul> -->
## IAM Platform access roles ##
The IAM Platform access roles are assigned and managed in the IBM Cloud account\.
IAM Platform access roles provide permissions to manage the IBM Cloud account and to access services within IBM watsonx\. The Platform access roles are **Viewer**, **Operator**, **Editor**, and **Administrator**\. The Platform roles are available to all services on IBM Cloud\.
The **Viewer** role has minimal, view\-only permissions\. Users need at least the **Viewer** role to see the services in IBM watsonx\. A **Viewer** can:
<!-- <ul> -->
* View, but not modify, available service instances and assets\.
* Associate services with projects\.
* Become a collaborator in projects or deployment spaces\.
* Create projects and deployment spaces if assigned appropriate permissions for Cloud Object Storage\.
<!-- </ul> -->
The **Operator** role has permissions to configure existing service instances\.
With the **Editor** role, you can:
<!-- <ul> -->
* Perform all actions that the Viewer role allows\.
* Provision instances of services\.
* Update plans for service instances\.
<!-- </ul> -->
The **Administrator** role provides the same permissions as the **Owner** role for the account\. With the **Administrator** role, you can:
<!-- <ul> -->
* Perform all actions that the Viewer, Operator, and Editor roles allow\.
* Perform all management actions for services\.
* Add users to the [IBM Cloud account and assign roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-addl-users.html)
* Perform administrative tasks in IBM watsonx
* [Manage services for IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html)
<!-- </ul> -->
To understand IAM Platform access roles, see [IBM Cloud docs: What is IBM Cloud Identity and Access Management?](https://cloud.ibm.com/docs/account?topic=account-iamoverview)\.
## IAM Service access roles ##
Service roles apply to individual services and define actions permitted within the service\. IBM Cloud Object Storage has its own set of Service access roles\. See [Setting up IBM Cloud Object Storage for use with IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html)\.
## Workspace collaborator roles ##
Your role in a specific workspace determines what actions you can perform in that workspace\. Your IAM roles do not affect your role within a workspace\. For example, you can be the **Administrator** of the Cloud account, but that role does not automatically make you an administrator for a project or catalog\. The **Admin** collaborator role for a project (or other workspace) must be explicitly assigned\. Similarly, roles are specific to each workspace\. You might have the **Admin** role in one project, which gives you full control of the contents of that project, including managing collaborators and assets, but have the **Viewer** role in another project, which allows you only to view the contents of that project\.
Projects and deployment spaces have these roles:
<!-- <ul> -->
* **Admin**: Control assets, collaborators, and settings in the workspace\.
* **Editor**: Control assets in the workspace\.
* **Viewer**: View the workspace and its contents\.
<!-- </ul> -->
The permissions that are associated with each role are specific to the type of workspace:
<!-- <ul> -->
* [Project collaborator roles and permissions](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborator-permissions.html)
* [Deployment space collaborator roles and permissions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/collaborator-permissions-wml.html)
<!-- </ul> -->
## Learn more ##
<!-- <ul> -->
* [IBM Cloud docs: What is IBM Cloud Identity and Access Management?](https://cloud.ibm.com/docs/account?topic=account-iamoverview)
* [IBM Cloud docs: IAM access](https://cloud.ibm.com/docs/account?topic=account-userroles)
* [Setting up IBM watsonx for your organization](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-platform.html)
* [Managing IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html)
* [Find your IBM Cloud account owner or administrator](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html#accountadmin)
* [Determine your roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/your-roles.html)
<!-- </ul> -->
**Parent topic:**[Adding users to the account](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-addl-users.html)
<!-- </article "role="article" "> -->
|
ABFAAF84948B090C8EA099FF44CC8CD878371073 | https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-account.html?context=cdpaas&locale=en | IBM Cloud account security | IBM Cloud account security
Account security mechanisms for IBM watsonx are provided by IBM Cloud. These security mechanisms, including SSO and role-based, group-based, and service-based access control, protect access to resources and provide user authentication.
Mechanism Purpose Responsibility Configured on
[Access (IAM) roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-account.html?context=cdpaas&locale=en#iam-access-roles) Provide role-based access control for services Customer IBM Cloud
[Access groups](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-account.html?context=cdpaas&locale=en#access-groups) Configure access groups and policies Customer IBM Cloud
[Resource groups](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-account.html?context=cdpaas&locale=en#resource-groups) Organize resources into groups and assign access Customer IBM Cloud
[Service IDs](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-account.html?context=cdpaas&locale=en#service-ids) Enable an application outside of IBM Cloud to access your IBM Cloud services Customer IBM Cloud
[Service ID API keys](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-account.html?context=cdpaas&locale=en#service-id-api-keys) Authenticate an application to a Service ID Customer IBM Cloud
[Activity Tracker](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-account.html?context=cdpaas&locale=en#activity-tracker) Monitor events related to IBM watsonx Customer IBM Cloud
[Multifactor authentication (MFA)](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-account.html?context=cdpaas&locale=en#multifactor-authentication) Require users to authenticate with a method beyond ID and password Customer IBM Cloud
[Single sign-on authentication](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-account.html?context=cdpaas&locale=en#single-sign-on) Connect with an identity provider (IdP) for single sign-on (SSO) authentication by using SAML federation Shared IBM Cloud
IAM access roles
You can use IAM access roles to provide users access to all resources that belong to a resource group. You can also give users access to manage resource groups and create new service instances that are assigned to a resource group.
For step-by-step instructions, see [IBM Cloud docs: Assigning access to resources](https://cloud.ibm.com/docs/account?topic=account-access-getstarted)
Access groups
After you set up and organize resource groups in your account, you can streamline access management by using access groups. Create access groups to organize a set of users and service IDs into a single entity. You can then assign a policy to all group members by assigning it to the access group. Thus you can assign a single policy to the access group instead of assigning the same policy multiple times per individual user or service ID.
By using access groups, you can minimize the number of assigned policies by giving the same access to all identities in an access group.
For more information, see:
* [IBM Cloud docs: Setting up access groups](https://cloud.ibm.com/docs/account?topic=account-groups&interface=ui).
Resource groups
Use resource groups to organize your account's resources into logical groups that help with access control. Rather than assigning access to individual resources, you assign access to the group. Resources are any service that is managed by IAM, such as databases. Whenever you create a service instance from the Cloud catalog, you must assign it to a resource group.
Resource groups work with access group policies to provide a way to manage access to resources by groups of users. By including a user in an access group, and assigning the access group to a resource group, you provide access to the resources contained in the group. Those resources are not available to nonmembers. The Lite account comes with a single resource group, named "Default", so all resources are placed in the Default resource group. With paid accounts, Administrators can create multiple resource groups to support your business and provide access to resources on an as-needed basis.
For step-by-step instructions, see [IBM Cloud docs: Managing resource groups](https://cloud.ibm.com/docs/account?topic=account-rgs)
For tips on configuring resource groups to provide secure access, see [IBM Cloud docs: Best practices for organizing resources and assigning access](https://cloud.ibm.com/docs/account?topic=account-account_setup)
Service IDs
You can create service IDs in IBM Cloud to enable an application outside of IBM Cloud to access your IBM Cloud services. Service IDs are not tied to a specific user. If a user leaves an organization and is deleted from the account, the service ID remains intact to ensure that your service continues to work. Access policies that are assigned to each service ID ensure that your application has the appropriate access for authenticating with your IBM Cloud services. See [Project collaborators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html).
One way in which Service IDs and access policies can be used is to manage access to the Cloud Object Storage buckets. See [Controlling access to Cloud Object Storage buckets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/cos_buckets.html).
For more information, see [IBM Cloud docs: Creating and working with service IDs](https://cloud.ibm.com/docs/account?topic=account-serviceids).
Service ID API keys
For extra protection, Service IDs can be combined with unique API keys. The API key that is associated with a Service ID can be set for one-time use or unlimited use. For more information, see [IBM Cloud docs: Managing service IDs API keys](https://cloud.ibm.com/docs/account?topic=account-serviceidapikeys).
Activity Tracker
The Activity Tracker collects and stores audit records for API calls (events) made to resources that run in the IBM Cloud. You can use Activity Tracker to monitor the activity of your IBM Cloud account to investigate abnormal activity and critical actions, and to comply with regulatory audit requirements. The events that are collected comply with the Cloud Auditing Data Federation (CADF) standard. IBM services that generate Activity Tracker events follow the IBM Cloud security policy.
For a list of events that apply to IBM watsonx, see [Activity Tracker events](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/at-events.html).
For instructions on configuring Activity Tracker, see [IBM Cloud docs: Getting started with IBM Cloud Activity Tracker](https://cloud.ibm.com/docs/activity-tracker?topic=activity-tracker-getting-started).
Multifactor authentication
Multifactor authentication (or MFA) adds an extra layer of security by requiring multiple types of authentication methods upon login. After entering a valid username and password, users must also satisfy a second authentication method. For example, a time-sensitive passcode is sent to the user, either through text or email. The correct passcode must be entered to complete the login process.
For more information, see [IBM Cloud docs: Types of multifactor authentication](https://cloud.ibm.com/docs/account?topic=account-types).
Single sign-on authentication
Single sign-on (SSO) is an authentication method that enables users to log in to multiple, related applications that use one set of credentials.
IBM watsonx supports SSO using Security Assertion Markup Language (SAML) federated IDs. SAML federation requires coordination with IBM to configure. SAML connects IBMids with the user credentials that are provided by an identity provider (IdP). For companies that have configured SAML federation with IBM, users can log in to IBM watsonx with their company credentials. SAML federation is the recommended method for SSO configuration with IBM watsonx.
The [IBMid Enterprise Federation Adoption Guide](https://ibm.ent.box.com/notes/78040808400?s=yqjnprek2rm99jgqhlm04xz0nsjda69a) describes the steps that are required to federate your identity provider (IdP). You need an IBM Sponsor, an IBM employee who works as the contact person between you and the IBMid team.
For an overview of SAML federation, see [IBM Cloud SAML Federation Guide](https://www.ibm.com/cloud/blog/ibm-cloud-saml-federation-guide). This blog discusses both SAML federation and IBM Cloud App ID. IBM Cloud App ID is supported as a Beta version with IBM watsonx.
Parent topic:[Security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html)
| # IBM Cloud account security #
Account security mechanisms for IBM watsonx are provided by IBM Cloud\. These security mechanisms, including SSO and role\-based, group\-based, and service\-based access control, protect access to resources and provide user authentication\.
<!-- <table> -->
| Mechanism | Purpose | Responsibility | Configured on |
| ------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------- | -------------- | ------------- |
| [Access (IAM) roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-account.html?context=cdpaas&locale=en#iam-access-roles) | Provide role\-based access control for services | Customer | IBM Cloud |
| [Access groups](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-account.html?context=cdpaas&locale=en#access-groups) | Configure access groups and policies | Customer | IBM Cloud |
| [Resource groups](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-account.html?context=cdpaas&locale=en#resource-groups) | Organize resources into groups and assign access | Customer | IBM Cloud |
| [Service IDs](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-account.html?context=cdpaas&locale=en#service-ids) | Enable an application outside of IBM Cloud to access your IBM Cloud services | Customer | IBM Cloud |
| [Service ID API keys](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-account.html?context=cdpaas&locale=en#service-id-api-keys) | Authenticate an application to a Service ID | Customer | IBM Cloud |
| [Activity Tracker](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-account.html?context=cdpaas&locale=en#activity-tracker) | Monitor events related to IBM watsonx | Customer | IBM Cloud |
| [Multifactor authentication (MFA)](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-account.html?context=cdpaas&locale=en#multifactor-authentication) | Require users to authenticate with a method beyond ID and password | Customer | IBM Cloud |
| [Single sign\-on authentication](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-account.html?context=cdpaas&locale=en#single-sign-on) | Connect with an identity provider (IdP) for single sign\-on (SSO) authentication by using SAML federation | Shared | IBM Cloud |
<!-- </table ""> -->
## IAM access roles ##
You can use IAM access roles to provide users access to all resources that belong to a resource group\. You can also give users access to manage resource groups and create new service instances that are assigned to a resource group\.
For step\-by\-step instructions, see [IBM Cloud docs: Assigning access to resources](https://cloud.ibm.com/docs/account?topic=account-access-getstarted)
## Access groups ##
After you set up and organize resource groups in your account, you can streamline access management by using access groups\. Create access groups to organize a set of users and service IDs into a single entity\. You can then assign a policy to all group members by assigning it to the access group\. Thus you can assign a single policy to the access group instead of assigning the same policy multiple times per individual user or service ID\.
By using access groups, you can minimize the number of assigned policies by giving the same access to all identities in an access group\.
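For a concrete illustration, access groups can also be created programmatically\. The following Python sketch posts to what we understand to be the IAM Access Groups REST API; the endpoint path, query parameter, and payload fields are assumptions to confirm against the current IBM Cloud API reference, and `IAM_TOKEN` and `ACCOUNT_ID` are placeholders\.

```python
import requests

# Placeholders: supply a bearer token obtained from IAM and your account ID.
IAM_TOKEN = "<bearer-token-from-iam>"
ACCOUNT_ID = "<your-account-id>"

def create_access_group(name: str, description: str) -> dict:
    """Create an access group (assumed endpoint: POST /v2/groups)."""
    response = requests.post(
        "https://iam.cloud.ibm.com/v2/groups",
        params={"account_id": ACCOUNT_ID},
        headers={
            "Authorization": f"Bearer {IAM_TOKEN}",
            "Content-Type": "application/json",
        },
        json={"name": name, "description": description},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

group = create_access_group("data-scientists", "Editors for analytics projects")
print(group["id"])  # use the group ID when assigning policies to the group
```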
For more information, see:
<!-- <ul> -->
* [IBM Cloud docs: Setting up access groups](https://cloud.ibm.com/docs/account?topic=account-groups&interface=ui)\.
<!-- </ul> -->
## Resource groups ##
Use resource groups to organize your account's resources into logical groups that help with access control\. Rather than assigning access to individual resources, you assign access to the group\. Resources are any service that is managed by IAM, such as databases\. Whenever you create a service instance from the Cloud catalog, you must assign it to a resource group\.
Resource groups work with access group policies to provide a way to manage access to resources by groups of users\. By including a user in an access group, and assigning the access group to a resource group, you provide access to the resources contained in the group\. Those resources are not available to nonmembers\. The Lite account comes with a single resource group, named "Default", so all resources are placed in the Default resource group\. With paid accounts, Administrators can create multiple resource groups to support your business and provide access to resources on an as\-needed basis\.
For step\-by\-step instructions, see [IBM Cloud docs: Managing resource groups](https://cloud.ibm.com/docs/account?topic=account-rgs)
For tips on configuring resource groups to provide secure access, see [IBM Cloud docs: Best practices for organizing resources and assigning access](https://cloud.ibm.com/docs/account?topic=account-account_setup)
## Service IDs ##
You can create service IDs in IBM Cloud to enable an application outside of IBM Cloud to access your IBM Cloud services\. Service IDs are not tied to a specific user\. If a user leaves an organization and is deleted from the account, the service ID remains intact to ensure that your service continues to work\. Access policies that are assigned to each service ID ensure that your application has the appropriate access for authenticating with your IBM Cloud services\. See [Project collaborators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html)\.
One way in which Service IDs and access policies can be used is to manage access to the Cloud Object Storage buckets\. See [Controlling access to Cloud Object Storage buckets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/cos_buckets.html)\.
For more information, see [IBM Cloud docs: Creating and working with service IDs](https://cloud.ibm.com/docs/account?topic=account-serviceids)\.
## Service ID API keys ##
For extra protection, Service IDs can be combined with unique API keys\. The API key that is associated with a Service ID can be set for one\-time use or unlimited use\. For more information, see [IBM Cloud docs: Managing service IDs API keys](https://cloud.ibm.com/docs/account?topic=account-serviceidapikeys)\.
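To show how a Service ID API key is used in practice, the following Python sketch exchanges the key for a short\-lived bearer token at the documented IAM token endpoint\. The API key value is a placeholder\.

```python
import requests

API_KEY = "<service-id-api-key>"  # placeholder: an API key created for a Service ID

# Exchange the API key for a short-lived IAM access token.
response = requests.post(
    "https://iam.cloud.ibm.com/identity/token",
    headers={"Content-Type": "application/x-www-form-urlencoded"},
    data={
        "grant_type": "urn:ibm:params:oauth:grant-type:apikey",
        "apikey": API_KEY,
    },
    timeout=30,
)
response.raise_for_status()
access_token = response.json()["access_token"]

# The bearer token authenticates subsequent calls to IBM Cloud services.
headers = {"Authorization": f"Bearer {access_token}"}
```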
## Activity Tracker ##
The Activity Tracker collects and stores audit records for API calls (events) made to resources that run in the IBM Cloud\. You can use Activity Tracker to monitor the activity of your IBM Cloud account to investigate abnormal activity and critical actions, and to comply with regulatory audit requirements\. The events that are collected comply with the Cloud Auditing Data Federation (CADF) standard\. IBM services that generate Activity Tracker events follow the IBM Cloud security policy\.
For a list of events that apply to IBM watsonx, see [Activity Tracker events](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/at-events.html)\.
For instructions on configuring Activity Tracker, see [IBM Cloud docs: Getting started with IBM Cloud Activity Tracker](https://cloud.ibm.com/docs/activity-tracker?topic=activity-tracker-getting-started)\.
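Because the events comply with CADF, they can be filtered with ordinary JSON tooling\. The hedged Python sketch below scans a hypothetical export file for non\-successful outcomes; the field names (`eventTime`, `action`, `outcome`, `initiator`, `target`) come from the CADF standard, so verify them against the events that you actually receive\.

```python
import json

# A hypothetical export of Activity Tracker events (CADF-style records).
with open("activity-events.json") as f:
    events = json.load(f)

# Flag failed actions for review, such as denied logins or rejected API calls.
for event in events:
    if event.get("outcome") != "success":
        print(
            event.get("eventTime"),
            event.get("action"),
            "by", event.get("initiator", {}).get("name"),
            "on", event.get("target", {}).get("name"),
        )
```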
## Multifactor authentication ##
Multifactor authentication (or MFA) adds an extra layer of security by requiring multiple types of authentication methods upon login\. After entering a valid username and password, users must also satisfy a second authentication method\. For example, a time\-sensitive passcode is sent to the user, either through text or email\. The correct passcode must be entered to complete the login process\.
For more information, see [IBM Cloud docs: Types of multifactor authentication](https://cloud.ibm.com/docs/account?topic=account-types)\.
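As a generic illustration of the time\-sensitive passcode idea, and not IBM Cloud's MFA implementation, the following Python sketch uses the third\-party `pyotp` library to generate and verify a time\-based one\-time passcode\.

```python
import pyotp

# Illustration only: a generic TOTP flow, not IBM Cloud's MFA implementation.
secret = pyotp.random_base32()   # shared once between server and authenticator app
totp = pyotp.TOTP(secret)        # 30-second time step by default

code = totp.now()                # what the user's authenticator app displays
print(totp.verify(code))         # True: the second factor is satisfied
print(totp.verify("000000"))     # almost certainly False
```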
## Single sign\-on authentication ##
Single sign\-on (SSO) is an authentication method that enables users to log in to multiple, related applications that use one set of credentials\.
IBM watsonx supports SSO using Security Assertion Markup Language (SAML) federated IDs\. SAML federation requires coordination with IBM to configure\. SAML connects IBMids with the user credentials that are provided by an identity provider (IdP)\. For companies that have configured SAML federation with IBM, users can log in to IBM watsonx with their company credentials\. SAML federation is the recommended method for SSO configuration with IBM watsonx\.
The [IBMid Enterprise Federation Adoption Guide](https://ibm.ent.box.com/notes/78040808400?s=yqjnprek2rm99jgqhlm04xz0nsjda69a) describes the steps that are required to federate your identity provider (IdP)\. You need an IBM Sponsor, an IBM employee who works as the contact person between you and the IBMid team\.
For an overview of SAML federation, see [IBM Cloud SAML Federation Guide](https://www.ibm.com/cloud/blog/ibm-cloud-saml-federation-guide)\. This blog discusses both SAML federation and IBM Cloud App ID\. IBM Cloud App ID is supported as a Beta version with IBM watsonx\.
**Parent topic:**[Security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html)
<!-- </article "role="article" "> -->
|
DA7407D415B3EFF25CA3DD588BBA677CC8CD3494 | https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-collab.html?context=cdpaas&locale=en | Collaborator security | Collaborator security
IBM watsonx provides attribute-based access control to protect workspaces such as projects and catalogs. You control access to workspaces by assigning roles and by restricting collaborators.
Table 1. Collaborator security mechanisms for IBM watsonx
Mechanism Purpose Responsibility Configured on
[Collaborator roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-collab.html?context=cdpaas&locale=en#collaborator-roles) Assign roles to control access to workspaces Customer IBM watsonx
Collaborator roles
Everyone working in IBM watsonx is assigned a role that determines the workspaces that they can access and the tasks that they can perform. Collaborator roles control access to projects, deployment spaces, and catalogs using permissions specific to the role. Roles are assigned in IBM watsonx to provide Admin, Editor, or Viewer permissions.
Users also have an IAM Platform access role for the Cloud account and they may also have an IAM Service access role for workspaces. To understand how the roles provide secure access, see [Roles in IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/roles.html).
To understand the permissions for each collaborator role, see [Project collaborator roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborator-permissions.html).
Parent topic:[Security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html)
| # Collaborator security #
IBM watsonx provides attribute\-based access control to protect workspaces such as projects and catalogs\. You control access to workspaces by assigning roles and by restricting collaborators\.
<!-- <table> -->
Table 1\. Collaborator security mechanisms for IBM watsonx
| Mechanism | Purpose | Responsibility | Configured on |
| ---------------------- | -------------------------------------------- | -------------- | ------------- |
| [Collaborator roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-collab.html?context=cdpaas&locale=en#collaborator-roles) | Assign roles to control access to workspaces | Customer | IBM watsonx |
<!-- </table ""> -->
## Collaborator roles ##
Everyone working in IBM watsonx is assigned a role that determines the workspaces that they can access and the tasks that they can perform\. Collaborator roles control access to projects, deployment spaces, and catalogs using permissions specific to the role\. Roles are assigned in IBM watsonx to provide **Admin**, **Editor**, or **Viewer** permissions\.
Users also have an IAM Platform access role for the Cloud account and they may also have an IAM Service access role for workspaces\. To understand how the roles provide secure access, see [Roles in IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/roles.html)\.
To understand the permissions for each collaborator role, see [Project collaborator roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborator-permissions.html)\.
**Parent topic:**[Security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html)
<!-- </article "role="article" "> -->
|
F0B03CE62E6DC8AC7598A8B8316C4BF8CEA132D5 | https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-data.html?context=cdpaas&locale=en | Data security | Data security
In IBM watsonx, data security mechanisms, such as encryption, protect sensitive customer and corporate data, both in transit and at rest. A secure IBM Cloud Object Storage instance stores data assets from projects, catalogs, and deployment spaces.
Mechanism Purpose Responsibility Configured on
[Configuring Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-data.html?context=cdpaas&locale=en#configuring-cloud-object-storage) IBM Cloud Object Storage is required to store assets Customer IBM Cloud
[Controlling access with service credentials](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-data.html?context=cdpaas&locale=en#controlling-access-with-service-credentials) Authorize a Cloud Object Storage instance for a specific project Customer IBM Cloud and IBM watsonx
[Encrypting at rest data](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-data.html?context=cdpaas&locale=en#encrypting-at-rest-data) Default encryption is provided. Use IBM Key Protect to manage your own keys. Shared IBM Cloud
[Encrypting in motion data](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-data.html?context=cdpaas&locale=en#encrypting-in-motion-data) Encryption methods such as HTTPS, SSL, and TLS are used to protect data in motion. IBM, Third-party clouds IBM Cloud, Cloud providers
[Backups](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-data.html?context=cdpaas&locale=en#backups) Use IBM Cloud Backup to manage backups for your data. Shared IBM Cloud
Configuring Cloud Object Storage
IBM Cloud Object Storage provides storage for projects, catalogs, and deployment spaces. You are required to associate an IBM Cloud Object Storage instance when you create projects, catalogs, or deployment spaces to store files for assets, such as uploaded data files or notebook files. The Lite plan instance is free to use for storage capacity up to 25 GB per month.
You can also access data sources in an IBM Cloud Object Storage instance. To access data in IBM Cloud Object Storage, you create an IBM Cloud Object Storage connection. This connection has a different purpose from the IBM Cloud Object Storage instance that you associate with a project, deployment space, or catalog.
The IBM Cloud Identity and Access Management (IAM) service securely authenticates users and controls access to IBM Cloud Object Storage. See [IBM Cloud docs: Getting started with IAM](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-iam) for instructions on setting up access control for Cloud Object Storage on IBM Cloud.
See [IBM Cloud docs: Getting started with IBM Cloud Object Storage](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-getting-started-cloud-object-storage)
Controlling access with service credentials
Cloud Object Storage credentials consist of a service credential and a Service ID. Policies are assigned to Service IDs to control access. The credentials are used to create a secure connection to the Cloud Object Storage instance, with access control as determined by the policy.
For more information, see [Controlling access to Cloud Object Storage buckets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/cos_buckets.html)
Encrypting at rest data
By default, at rest data is encrypted with randomly generated keys that are managed by IBM. If the default keys are sufficient protection for your data, no additional action is needed. To provide extra protection for at rest data, you can create and manage your own keys with IBM® Key Protect for IBM Cloud™. Key Protect is a full-service encryption solution that allows data to be secured and stored in IBM Cloud Object Storage.
To encrypt your Cloud Object Storage instance with your own key, create an instance of the IBM Key Protect service from the IBM Cloud catalog. Not all Watson Studio plans support customer-generated encryption keys.
* For instructions on encrypting your Cloud Object Storage instance with your own key, see [Setting up IBM Cloud Object Storage for use with IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html)
* For an overview of how to encrypt data with your own keys, see [IBM Cloud docs: Encrypting data with your own keys](https://cloud.ibm.com/docs/overview?topic=overview-key-encryption)
* For the complete documentation for Key Protect, see [IBM Cloud docs: IBM Key Protect](https://cloud.ibm.com/docs/key-protect)
* For an overview of how encryption works in the IBM Cloud Security Architecture, see [Data security architecture](https://www.ibm.com/cloud/architecture/architectures/data-security-arch)
Encrypting in motion data
Data is encrypted when transmitted by IBM on any public networks and within the Cloud Service's private data center network. Encryption methods such as HTTPS, SSL, and TLS are used to protect data in motion.
Backups
To avoid loss of important data, create and properly store backups. You can use IBM Cloud Backup to securely back up your data between IBM Cloud servers in one or more IBM Cloud data centers. See [IBM Cloud docs: Getting started with IBM Cloud Backup](https://cloud.ibm.com/docs/Backup?topic=Backup-getting-started)
Learn more
For more information, see [IBM Cloud docs: Getting started with Security and Compliance Center](https://cloud.ibm.com/docs/security-compliance).
Parent topic:[Security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html)
| # Data security #
In IBM watsonx, data security mechanisms, such as encryption, protect sensitive customer and corporate data, both in transit and at rest\. A secure IBM Cloud Object Storage instance stores data assets from projects, catalogs, and deployment spaces\.
<!-- <table> -->
| Mechanism | Purpose | Responsibility | Configured on |
| ----------------------------------------------- | ----------------------------------------------------------------------------------- | ------------------------ | -------------------------- |
| [Configuring Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-data.html?context=cdpaas&locale=en#configuring-cloud-object-storage) | IBM Cloud Object Storage is required to store assets | Customer | IBM Cloud |
| [Controlling access with service credentials](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-data.html?context=cdpaas&locale=en#controlling-access-with-service-credentials) | Authorize a Cloud Object Storage instance for a specific project | Customer | IBM Cloud and IBM watsonx |
| [Encrypting at rest data](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-data.html?context=cdpaas&locale=en#encrypting-at-rest-data) | Default encryption is provided\. Use IBM Key Protect to manage your own keys\. | Shared | IBM Cloud |
| [Encrypting in motion data](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-data.html?context=cdpaas&locale=en#encrypting-in-motion-data) | Encryption methods such as HTTPS, SSL, and TLS are used to protect data in motion\. | IBM, Third\-party clouds | IBM Cloud, Cloud providers |
| [Backups](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-data.html?context=cdpaas&locale=en#backups) | Use IBM Cloud Backup to manage backups for your data\. | Shared | IBM Cloud |
<!-- </table ""> -->
## Configuring Cloud Object Storage ##
IBM Cloud Object Storage provides storage for projects, catalogs, and deployment spaces\. You are required to associate an IBM Cloud Object Storage instance when you create projects, catalogs, or deployment spaces to store files for assets, such as uploaded data files or notebook files\. The Lite plan instance is free to use for storage capacity up to 25 GB per month\.
You can also access data sources in an IBM Cloud Object Storage instance\. To access data in IBM Cloud Object Storage, you create an IBM Cloud Object Storage connection\. This connection has a different purpose from the IBM Cloud Object Storage instance that you associate with a project, deployment space, or catalog\.
The IBM Cloud Identity and Access Management (IAM) service securely authenticates users and controls access to IBM Cloud Object Storage\. See [IBM Cloud docs: Getting started with IAM](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-iam) for instructions on setting up access control for Cloud Object Storage on IBM Cloud\.
See [IBM Cloud docs: Getting started with IBM Cloud Object Storage](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-getting-started-cloud-object-storage)
## Controlling access with service credentials ##
Cloud Object Storage credentials consist of a service credential and a Service ID\. Policies are assigned to Service IDs to control access\. The credentials are used to create a secure connection to the Cloud Object Storage instance, with access control as determined by the policy\.
For more information, see [Controlling access to Cloud Object Storage buckets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/cos_buckets.html)
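For a sense of how the credentials are consumed, the following Python sketch builds a client with the IBM COS SDK for Python (`ibm-cos-sdk`)\. The credential values are placeholders taken from a service credential, and the endpoint assumes the Dallas (`us-south`) region\.

```python
import ibm_boto3
from ibm_botocore.client import Config

# Placeholders: copy these values from a Cloud Object Storage service credential.
COS_API_KEY = "<apikey-from-service-credential>"
COS_INSTANCE_CRN = "<resource_instance_id-from-service-credential>"
COS_ENDPOINT = "https://s3.us-south.cloud-object-storage.appdomain.cloud"

cos = ibm_boto3.client(
    "s3",
    ibm_api_key_id=COS_API_KEY,
    ibm_service_instance_id=COS_INSTANCE_CRN,
    config=Config(signature_version="oauth"),
    endpoint_url=COS_ENDPOINT,
)

# Access is limited to whatever the Service ID's access policy allows.
for bucket in cos.list_buckets()["Buckets"]:
    print(bucket["Name"])
```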
## Encrypting at rest data ##
By default, at rest data is encrypted with randomly generated keys that are managed by IBM\. If the default keys are sufficient protection for your data, no additional action is needed\. To provide extra protection for at rest data, you can create and manage your own keys with IBM® Key Protect for IBM Cloud™\. Key Protect is a full\-service encryption solution that allows data to be secured and stored in IBM Cloud Object Storage\.
To encrypt your Cloud Object Storage instance with your own key, create an instance of the IBM Key Protect service from the IBM Cloud catalog\. Not all Watson Studio plans support customer\-generated encryption keys\.
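Key Protect itself is configured through IBM Cloud, but the underlying idea is envelope encryption: a per\-object data key encrypts the data, and a root key wraps the data key\. The following Python sketch illustrates only that concept with the third\-party `cryptography` library; it is not the Key Protect API\.

```python
from cryptography.fernet import Fernet

# Conceptual sketch of envelope encryption -- not the Key Protect API.
root_key = Fernet.generate_key()   # stands in for a root key held in Key Protect
data_key = Fernet.generate_key()   # per-object data encryption key (DEK)

ciphertext = Fernet(data_key).encrypt(b"sensitive records")
wrapped_dek = Fernet(root_key).encrypt(data_key)  # only the wrapped DEK is stored

# Decryption first unwraps the DEK with the root key, then decrypts the data.
plaintext = Fernet(Fernet(root_key).decrypt(wrapped_dek)).decrypt(ciphertext)
assert plaintext == b"sensitive records"
```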
<!-- <ul> -->
* For instructions on encrypting your Cloud Object Storage instance with your own key, see [Setting up IBM Cloud Object Storage for use with IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html)
* For an overview of how to encrypt data with your own keys, see [IBM Cloud docs: Encrypting data with your own keys](https://cloud.ibm.com/docs/overview?topic=overview-key-encryption)
* For the complete documentation for Key Protect, see [IBM Cloud docs: IBM Key Protect](https://cloud.ibm.com/docs/key-protect)
* For an overview of how encryption works in the IBM Cloud Security Architecture, see [Data security architecture](https://www.ibm.com/cloud/architecture/architectures/data-security-arch)
<!-- </ul> -->
## Encrypting in motion data ##
Data is encrypted when transmitted by IBM on any public networks and within the Cloud Service's private data center network\. Encryption methods such as HTTPS, SSL, and TLS are used to protect data in motion\.
## Backups ##
To avoid loss of important data, create and properly store backups\. You can use IBM Cloud Backup to securely back up your data between IBM Cloud servers in one or more IBM Cloud data centers\. See [IBM Cloud docs: Getting started with IBM Cloud Backup](https://cloud.ibm.com/docs/Backup?topic=Backup-getting-started)
## Learn more ##
For more information, see [IBM Cloud docs: Getting started with Security and Compliance Center](https://cloud.ibm.com/docs/security-compliance)\.
**Parent topic:**[Security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html)
<!-- </article "role="article" "> -->
|
4EF4409D11DD4D5360D711EEA8E1E71DAC4C0BD7 | https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-enterprise.html?context=cdpaas&locale=en | Enterprise security | Enterprise security
An enterprise is a hierarchy of IBM Cloud accounts that contains a parent account at the highest level with child account groups as the middle level and optional individual accounts that you can add at the lowest level. To provide security between the levels of accounts, enterprises isolate user and access management between the enterprise account and its child accounts.
The users and their assigned access in the enterprise account are entirely separate from users in the child accounts, and no access is inherited between the two types of accounts. User and access management in each enterprise and each account is entirely separate and must be managed by the account owner or a user given the Administrator role in the specific account.
Resources and services within an enterprise function the same as in stand-alone accounts. Each account in an enterprise can contain resource groups that manage access to multiple resources. For account security and how to use resource groups, see [IBM Cloud account security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-account.html).
Use cases
The user lists for each account are visible only to the users who are invited to that account. Even a user who is invited to the enterprise and given access to manage it cannot view the users who are invited to each child account.
Both user management and access management are entirely separate in each account and in the enterprise itself. This separation means that users who manage your enterprise can't access account resources within the child accounts unless you specifically enable them to. For example, your financial officer can have the Administrator role on the Billing account management service within the enterprise account. The financial officer must be invited to a child account with the appropriate access rights to view offers or update spending limits for the child account.

Learn more
For an overview of enterprise accounts, see [IBM Cloud docs: What is an enterprise?](https://cloud.ibm.com/docs/account?topic=account-what-is-enterprise)
For step-by-step instructions for setting up an enterprise hierarchy of accounts, see [IBM Cloud docs: Setting up an enterprise](https://cloud.ibm.com/docs/account?topic=account-enterprise-tutorial)
For tips for setting up an enterprise, see [IBM Cloud docs: Best practices for setting up an enterprise](https://cloud.ibm.com/docs/account?topic=account-enterprise-best-practices)
Parent topic:[Security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html)
| # Enterprise security #
An enterprise is a hierarchy of IBM Cloud accounts that contains a parent account at the highest level with child account groups as the middle level and optional individual accounts that you can add at the lowest level\. To provide security between the levels of accounts, enterprises isolate user and access management between the enterprise account and its child accounts\.
The users and their assigned access in the enterprise account are entirely separate from users in the child accounts, and no access is inherited between the two types of accounts\. User and access management in each enterprise and each account is entirely separate and must be managed by the account owner or a user given the Administrator role in the specific account\.
Resources and services within an enterprise function the same as in stand\-alone accounts\. Each account in an enterprise can contain resource groups that manage access to multiple resources\. For account security and how to use resource groups, see [IBM Cloud account security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-account.html)\.
## Use cases ##
The user lists for each account are visible only to the users who are invited to that account\. Even a user who is invited to the enterprise and given access to manage it cannot view the users who are invited to each child account\.
Both user management and access management are entirely separate in each account and in the enterprise itself\. This separation means that users who manage your enterprise can't access account resources within the child accounts unless you specifically enable them to\. For example, your financial officer can have the Administrator role on the Billing account management service within the enterprise account\. The financial officer must be invited to a child account with the appropriate access rights to view offers or update spending limits for the child account\.

## Learn more ##
For an overview of enterprise accounts, see [IBM Cloud docs: What is an enterprise?](https://cloud.ibm.com/docs/account?topic=account-what-is-enterprise)
For step\-by\-step instructions for setting up an enterprise hierarchy of accounts, see [IBM Cloud docs: Setting up an enterprise](https://cloud.ibm.com/docs/account?topic=account-enterprise-tutorial)
For tips for setting up an enterprise, see [IBM Cloud docs: Best practices for setting up an enterprise](https://cloud.ibm.com/docs/account?topic=account-enterprise-best-practices)
**Parent topic:**[Security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html)
<!-- </article "role="article" "> -->
|
AAC63365F37F6B307BA343F45706E388D24245D4 | https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-network.html?context=cdpaas&locale=en | Network security | Network security
IBM watsonx provides network security mechanisms to protect infrastructure, data, and applications from potential threats and unauthorized access. Network security mechanisms provide secure connections to data sources and control traffic across both the public internet and internal networks.
Table 1. Network security mechanisms for IBM watsonx
Mechanism Purpose Responsibility Configured on
[Private network service endpoints](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-network.html?context=cdpaas&locale=en#private-network-service-endpoints) Access services through secure private network endpoints Customer IBM Cloud
[Access to private data sources](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-network.html?context=cdpaas&locale=en#access-to-private-data-sources) Connect to data sources that are protected by a firewall Customer IBM watsonx
[Integrations](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-network.html?context=cdpaas&locale=en#integrations) Secure connections to Third-party clouds through a firewall Customer and Third-party clouds IBM watsonx
[Connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-network.html?context=cdpaas&locale=en#connections) Secure connections to data sources Customer IBM watsonx
[Connections to data behind a firewall](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-network.html?context=cdpaas&locale=en#secure) The Satellite Connector and Satellite location provide secure connections to data sources in a hybrid environment Customer IBM Cloud and IBM watsonx
[VPNs](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-network.html?context=cdpaas&locale=en#vpns) Share data securely across public networks Customer IBM Cloud
[Allow specific IP addresses](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-network.html?context=cdpaas&locale=en#allow-specific-ip-addresses) Protect from access by unknown IP addresses Customer IBM Cloud
[Allow third party URLs](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-network.html?context=cdpaas&locale=en#thirdpartyurls) Allow third party URLs on an internal network Customer Customer firewall
[Multi-tenancy](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-network.html?context=cdpaas&locale=en#multi-tenancy) Provide isolation in a SaaS environment IBM and Third-party clouds IBM Cloud, Cloud providers
Private network service endpoints
Use private network service endpoints to securely connect to endpoints over the IBM private cloud, rather than connecting to resources over the public network. With private network service endpoints, services are no longer served on an internet-routable IP address and thus are more secure. Service endpoints require virtual routing and forwarding (VRF) to be enabled on your account. VRF is automatically enabled for Virtual Private Clouds (VPCs).
For more information about service endpoints, see:
* [Securing connections to services with private service endpoints](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/endpoints-vrf.html?audience=wdp)
* [Blog: Introducing Private Service Endpoints in IBM Cloud Databases](https://www.ibm.com/cloud/blog/introducing-private-service-endpoints-in-ibm-cloud-databases?mhsrc=ibmsearch_a&mhq=private%20cloud%20endpoints)
* [IBM Cloud docs: Secure access to services using service endpoints](https://cloud.ibm.com/docs/account?topic=account-service-endpoints-overview)
* [IBM Cloud docs: Enabling VRF and service endpoints](https://cloud.ibm.com/docs/account?topic=account-vrf-service-endpoint)
* [IBM Cloud docs: Public and private network endpoints](https://cloud.ibm.com/docs/watson?topic=watson-public-private-endpoints&mhsrc=ibmsearch_a&mhq=public%20cloud%20endpoints)
Access to private data sources
Private data sources are on-premises data sources that are protected by a firewall. IBM watsonx requires access through the firewall to reach the data sources. To provide secure access, you create inbound firewall rules to allow access for the IP address ranges for IBM watsonx. The inbound rules are created in the configuration tool for your firewall.
See [Configuring firewall access](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall_ovrvw.html)
Integrations
You can configure integrations with third-party cloud platforms to allow IBM watsonx users to access data sources hosted on those clouds. The following security mechanisms apply to integrations with third-party clouds:
1. An authorized account on the third-party cloud, with appropriate permissions to view account credentials
2. Permissions to allow secure connections through the firewall of the cloud provider (for specific IP ranges)
For example, suppose that you run notebooks against a data source that is hosted on AWS. You first configure the integration with AWS and then create a connection to the database. The integration and connection are secure. After you configure firewall access, you can grant appropriate permissions to users and provide them with credentials to access data.
See [Integrations with other cloud platforms](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/int-cloud.html)
Connections
Connections require valid credentials to access data. The account owner or administrator configures, at the account level, the type of credentials that are required. The connection creator then enters valid credentials. The options are:
* Either shared or personal allows users to choose personal or shared credentials when creating a new connection by selecting a radio button and entering the correct credentials.
* Personal credentials require each collaborator to provide their own credentials to use the data source.
* Shared credentials make the data source and its credentials accessible to all collaborators in the project. Users enter the common credentials that were created by the creator of the connection.
For more information about connections, see:
* [Adding connections to projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html)
* [Adding data from a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html)
* [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html)
* [Managing your account settings](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.html#set-the-credentials-for-connections)
Connections to data behind a firewall
Secure connections provide secure communication among resources in a hybrid cloud deployment, some of which might reside behind a firewall. You have the following options for secure connections between your environment and the cloud:
* [Satellite Connector](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-network.html?context=cdpaas&locale=en#link)
* [Satellite location](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-network.html?context=cdpaas&locale=en#location)
Satellite Connector
A Satellite Connector uses lightweight, Docker-based communication to create secure and auditable connections from your on-premises, cloud, or Edge environment back to IBM Cloud. Your infrastructure needs only a container host, such as Docker. For more information, see [Satellite Connector overview](https://cloud.ibm.com/docs/satellite?topic=satellite-understand-connectors&interface=ui).
See [Connecting to data behind a firewall](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html#satctr) for instructions on configuring a Satellite Connector.
Satellite Connector is the replacement for the deprecated Secure Gateway. For the Secure Gateway deprecation announcement, see [IBM Cloud docs: Secure Gateway Deprecation Overview](https://cloud.ibm.com/docs/SecureGateway?topic=SecureGateway-dep-overview)
Satellite location
A Satellite location provides the same secure communications to IBM Cloud as a Satellite Connector, but adds high-availability access by default plus the ability to communicate from IBM Cloud to your on-premises location. A Satellite location requires at least three x86 hosts in your infrastructure for the HA control plane. A Satellite location is a superset of the capabilities of the Satellite Connector. If you need only client data communication, set up a Satellite Connector.
See [Connecting to data behind a firewall](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html#sl) for instructions on configuring a Satellite location.
VPNs
Virtual Private Networks (VPNs) create virtual point-to-point connections by using tunneling protocols, and encryption and dedicated connections. They provide a secure method for sharing data across public networks.
Following are the VPN technologies on IBM Cloud:
* [IPSec VPN](https://cloud.ibm.com/catalog/infrastructure/ipsec-vpn): The VPN facilitates connectivity from your secure network to IBM IaaS platform’s private network. Any user on the account can be given VPN access.
* [VPN for VPC](https://cloud.ibm.com/vpc-ext/provision/vpngateway): With Virtual Private Cloud (VPC), you can provision generation 2 virtual server instances for VPC with high network performance.
* The Secure Gateway deprecation announcement provides information and scenarios for using VPNs as an alternative. See [IBM Cloud docs: Migration options](https://cloud.ibm.com/docs/SecureGateway?topic=SecureGateway-dep-migration-options#virtual-private-network).
Allow specific IP addresses
Use this mechanism to control access to the IBM cloud console and to IBM watsonx. Access is allowed from the specified IP addresses only; access from all other IP addresses is denied. You can specify the allowed IP addresses for an individual user or for an account.
When allowing specific IP addresses for Watson Studio, you must include the CIDR ranges for the Watson Studio nodes in each region (as well as the individual client system IPs that are allowed). You can include the CIDR ranges in IBM watsonx by following these steps:
1. From the main menu, choose Administration > Cloud integrations.
2. Click Firewall configuration to display the IP addresses for the current region. Use CIDR notation.
3. Copy each CIDR range into the IP address restrictions for either a user or an account. Be sure to enter the allowed individual client IP addresses as well. Enter the IP addresses as a comma-separated list. Then, click Apply.
4. Repeat for each region to allow access for Watson Studio.
For step-by-step instructions for both user and account restrictions, see [IBM Cloud docs: Allowing specific IP addresses](https://cloud.ibm.com/docs/account?topic=account-ips)
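The CIDR ranges that you copy can also be sanity-checked offline. The following Python sketch uses the standard `ipaddress` module to test whether a client IP falls inside a set of ranges; the ranges shown are documentation placeholders, not actual Watson Studio ranges.

```python
import ipaddress

# Placeholder CIDR ranges: copy the real ones from Firewall configuration.
allowed_ranges = [ipaddress.ip_network(c) for c in ("192.0.2.0/24", "198.51.100.0/25")]

def is_allowed(client_ip: str) -> bool:
    """True if the client IP falls inside any allowed CIDR range."""
    ip = ipaddress.ip_address(client_ip)
    return any(ip in network for network in allowed_ranges)

print(is_allowed("192.0.2.17"))    # True
print(is_allowed("203.0.113.5"))   # False
```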
Allow third party URLs on an internal network
If you are running IBM watsonx behind a firewall, you must allowlist third party URLs to provide outbound browser access. The URLs include resources from IBM Cloud and other domains. IBM watsonx requires access to these domains for outbound browser traffic through the firewall.
This list provides access only for core IBM watsonx functions. Specific services might require additional URLs. The list does not cover URLs required by the IBM Cloud console and its outbound requests.
Table 2. Third party URLs allowlist for IBM watsonx
Domain Description
*.bluemix.net IBM legacy Cloud domain - still used in some flows
*.appdomain.cloud IBM Cloud app domain
cloud.ibm.com IBM Cloud global domain
*.cloud.ibm.com Various IBM Cloud subdomains
dataplatform.cloud.ibm.com IBM watsonx Dallas region
*.dataplatform.cloud.ibm.com IBM watsonx subdomains
eum.instana.io Instana client side instrumentation
eum-orange-saas.instana.io Instana client side instrumentation
cdnjs.cloudflare.com Cloudflare CDN for some static resources
nebula-cdn.kampyle.com Medallia NPS
resources.digital-cloud-ibm.medallia.eu Medallia NPS
udc-neb.kampyle.com Medallia NPS
ubt.digital-cloud-ibm.medallia.eu Medallia NPS
cdn.segment.com Segment JS
api.segment.io Segment API
cdn.walkme.com WalkMe static resources
papi.walkme.com WalkMe API
ec.walkme.com WalkMe API
playerserver.walkme.com WalkMe player server
s3.walkmeusercontent.com WalkMe static resources
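Several entries in Table 2 are wildcard domains. As a rough sketch of how an egress check might match a hostname against them, the following Python example uses only the standard library; the abbreviated allowlist is illustrative, not complete.

```python
from fnmatch import fnmatch
from urllib.parse import urlparse

# A few entries from Table 2; "*." patterns match any subdomain, so the bare
# domain needs its own entry when it must also be reachable.
ALLOWLIST = [
    "*.cloud.ibm.com",
    "cloud.ibm.com",
    "dataplatform.cloud.ibm.com",
    "cdn.segment.com",
    "*.walkmeusercontent.com",
]

def host_allowed(url: str) -> bool:
    """True if the URL's hostname matches an allowlist entry."""
    host = urlparse(url).hostname or ""
    return any(fnmatch(host, pattern) for pattern in ALLOWLIST)

print(host_allowed("https://dataplatform.cloud.ibm.com/projects"))  # True
print(host_allowed("https://example.com/script.js"))                # False
```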
Multi-tenancy
IBM watsonx is hosted as a secure and compliant multi-tenant solution on IBM Cloud. See [Multi-Tenant](https://www.ibm.com/cloud/learn/multi-tenant)
Parent topic:[Security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html)
| # Network security #
IBM watsonx provides network security mechanisms to protect infrastructure, data, and applications from potential threats and unauthorized access\. Network security mechanisms provide secure connections to data sources and control traffic across both the public internet and internal networks\.
<!-- <table> -->
Table 1\. Network security mechanisms for IBM watsonx
| Mechanism | Purpose | Responsibility | Configured on |
| ----------------------------------------- | ----------------------------------------------------------------------------------------------------------------- | -------------------------------- | -------------------------- |
| [Private network service endpoints](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-network.html?context=cdpaas&locale=en#private-network-service-endpoints) | Access services through secure private network endpoints | Customer | IBM Cloud |
| [Access to private data sources](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-network.html?context=cdpaas&locale=en#access-to-private-data-sources) | Connect to data sources that are protected by a firewall | Customer | IBM watsonx |
| [Integrations](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-network.html?context=cdpaas&locale=en#integrations) | Secure connections to Third\-party clouds through a firewall | Customer and Third\-party clouds | IBM watsonx |
| [Connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-network.html?context=cdpaas&locale=en#connections) | Secure connections to data sources | Customer | IBM watsonx |
| [Connections to data behind a firewall](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-network.html?context=cdpaas&locale=en#secure) | The Satellite Connector and Satellite location provide secure connections to data sources in a hybrid environment | Customer | IBM Cloud and IBM watsonx |
| [VPNs](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-network.html?context=cdpaas&locale=en#vpns) | Share data securely across public networks | Customer | IBM Cloud |
| [Allow specific IP addresses](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-network.html?context=cdpaas&locale=en#allow-specific-ip-addresses) | Protect from access by unknown IP addresses | Customer | IBM Cloud |
| [Allow third party URLs](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-network.html?context=cdpaas&locale=en#thirdpartyurls) | Allow third party URLs on an internal network | Customer | Customer firewall |
| [Multi\-tenancy](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-network.html?context=cdpaas&locale=en#multi-tenancy) | Provide isolation in a SaaS environment | IBM and Third\-party clouds | IBM Cloud, Cloud providers |
<!-- </table ""> -->
## Private network service endpoints ##
Use private network service endpoints to securely connect to endpoints over the IBM private cloud, rather than connecting to resources over the public network\. With private network service endpoints, services are no longer served on an internet\-routable IP address and thus are more secure\. Service endpoints require virtual routing and forwarding (VRF) to be enabled on your account\. VRF is automatically enabled for Virtual Private Clouds (VPCs)\.
For more information about service endpoints, see:
<!-- <ul> -->
* [Securing connections to services with private service endpoints](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/endpoints-vrf.html?audience=wdp)
* [Blog: Introducing Private Service Endpoints in IBM Cloud Databases](https://www.ibm.com/cloud/blog/introducing-private-service-endpoints-in-ibm-cloud-databases?mhsrc=ibmsearch_a&mhq=private%20cloud%20endpoints)
* [IBM Cloud docs: Secure access to services using service endpoints](https://cloud.ibm.com/docs/account?topic=account-service-endpoints-overview)
* [IBM Cloud docs: Enabling VRF and service endpoints](https://cloud.ibm.com/docs/account?topic=account-vrf-service-endpoint)
* [IBM Cloud docs: Public and private network endpoints](https://cloud.ibm.com/docs/watson?topic=watson-public-private-endpoints&mhsrc=ibmsearch_a&mhq=public%20cloud%20endpoints)
<!-- </ul> -->
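In practice, using a private endpoint usually means pointing a client at a different hostname\. The following Python sketch contrasts the public and private Cloud Object Storage endpoints for the Dallas region; the hostnames follow the documented naming pattern, but confirm the endpoint list for your region in the IBM Cloud docs\.

```python
# Assumed endpoint hostnames for Cloud Object Storage in us-south (Dallas);
# verify against the endpoint list for your region.
PUBLIC_ENDPOINT = "https://s3.us-south.cloud-object-storage.appdomain.cloud"
PRIVATE_ENDPOINT = "https://s3.private.us-south.cloud-object-storage.appdomain.cloud"

def cos_endpoint(use_private_network: bool) -> str:
    """Pick the endpoint to pass as endpoint_url when building a COS client."""
    return PRIVATE_ENDPOINT if use_private_network else PUBLIC_ENDPOINT

print(cos_endpoint(use_private_network=True))
```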
## Access to private data sources ##
Private data sources are on\-premises data sources that are protected by a firewall\. IBM watsonx requires access through the firewall to reach the data sources\. To provide secure access, you create inbound firewall rules to allow access for the IP address ranges for IBM watsonx\. The inbound rules are created in the configuration tool for your firewall\.
See [Configuring firewall access](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall_ovrvw.html)\.
## Integrations ##
You can configure integrations with third\-party cloud platforms to allow IBM watsonx users to access data sources hosted on those clouds\. The following security mechanisms apply to integrations with third\-party clouds:
<!-- <ol> -->
1. An authorized account on the third\-party cloud, with appropriate permissions to view account credentials
2. Permissions to allow secure connections through the firewall of the cloud provider (for specific IP ranges)
<!-- </ol> -->
For example, suppose that you run notebooks against a data source that is hosted on AWS\. You need to integrate with AWS and then create a connection to the database\. The integration and connection are secure\. After you configure firewall access, you can grant appropriate permissions to users and provide them with credentials to access the data\.
See [Integrations with other cloud platforms](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/int-cloud.html)\.
## Connections ##
Connections require valid credentials to access data\. The account owner or administrator configures, at the account level, the type of credentials that are required\. The connection creator enters a valid credential\. The options are:
<!-- <ul> -->
* **Either shared or personal** lets users choose, when creating a new connection, whether to enter a personal or a shared credential by selecting the corresponding radio button\.
* **Personal** credentials require each collaborator to provide their own credentials to use the data source\.
* **Shared** credentials make the data source and its credentials accessible to all collaborators in the project\. All users enter the common credential that was created by the creator of the connection\.
<!-- </ul> -->
For more information about connections, see:
<!-- <ul> -->
* [Adding connections to projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html)
* [Adding data from a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html)
* [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html)
* [Managing your account settings](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.html#set-the-credentials-for-connections)
<!-- </ul> -->
## Connections to data behind a firewall ##
Secure connections provide secure communication among resources in a hybrid cloud deployment, some of which might reside behind a firewall\. You have the following options for secure connections between your environment and the cloud:
<!-- <ul> -->
* [Satellite Connector](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-network.html?context=cdpaas&locale=en#link)
* [Satellite location](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-network.html?context=cdpaas&locale=en#location)
<!-- </ul> -->
### Satellite Connector ###
A Satellite Connector uses a lightweight, Docker\-based communication channel that creates secure and auditable communications from your on\-premises, cloud, or edge environment back to IBM Cloud\. Your infrastructure needs only a container host, such as Docker\. For more information, see [Satellite Connector overview](https://cloud.ibm.com/docs/satellite?topic=satellite-understand-connectors&interface=ui)\.
See [Connecting to data behind a firewall](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html#satctr) for instructions on configuring a Satellite Connector\.
Satellite Connector is the replacement for the deprecated Secure Gateway\. For the Secure Gateway deprecation announcement, see [IBM Cloud docs: Secure Gateway Deprecation Overview](https://cloud.ibm.com/docs/SecureGateway?topic=SecureGateway-dep-overview)\.
### Satellite location ###
A Satellite location provides the same secure communications to IBM Cloud as a Satellite Connector but adds high availability access by default, plus the ability to communicate from IBM Cloud to your on\-premises location\. A Satellite location requires at least three x86 hosts in your infrastructure for the HA control plane\. A Satellite location is a superset of the capabilities of the Satellite Connector\. If you need only client data communication, set up a Satellite Connector\.
See [Connecting to data behind a firewall](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html#sl) for instructions on configuring a Satellite location\.
## VPNs ##
Virtual Private Networks (VPNs) create virtual point\-to\-point connections by using tunneling protocols, encryption, and dedicated connections\. They provide a secure method for sharing data across public networks\.
Following are the VPN technologies on IBM Cloud:
<!-- <ul> -->
* [IPSec VPN](https://cloud.ibm.com/catalog/infrastructure/ipsec-vpn): The VPN facilitates connectivity from your secure network to the IBM IaaS platform's private network\. Any user on the account can be given VPN access\.
* [VPN for VPC](https://cloud.ibm.com/vpc-ext/provision/vpngateway): With Virtual Private Cloud (VPC), you can provision generation 2 virtual server instances for VPC with high network performance\.
* The Secure Gateway deprecation announcement provides information and scenarios for using VPNs as an alternative\. See [IBM Cloud docs: Migration options](https://cloud.ibm.com/docs/SecureGateway?topic=SecureGateway-dep-migration-options#virtual-private-network)\.
<!-- </ul> -->
## Allow specific IP addresses ##
Use this mechanism to control access to the IBM Cloud console and to IBM watsonx\. Access is allowed from the specified IP addresses only; access from all other IP addresses is denied\. You can specify the allowed IP addresses for an individual user or for an account\.
When allowing specific IP addresses for Watson Studio, you must include the CIDR ranges for the Watson Studio nodes in each region, in addition to the individual client system IPs that you allow\. You can look up the CIDR ranges in IBM watsonx and add them to the allowed list by following these steps:
<!-- <ol> -->
1. From the main menu, choose **Administration > Cloud integrations**\.
2. Click **Firewall configuration** to display the IP addresses for the current region\. Select **Show IP ranges in CIDR notation**\.
3. Copy each CIDR range into the **IP address restrictions** for either a user or an account\. Be sure to enter the allowed individual client IP addresses as well\. Enter the IP addresses as a comma\-separated list, as shown in the sketch after these steps\. Then, click **Apply**\.
4. Repeat for each region to allow access for Watson Studio\.
<!-- </ol> -->
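The following sketch shows one way to validate and normalize the CIDR ranges and client IPs before you paste them into the **IP address restrictions** field\. It is a minimal example that uses only the Python standard library; the sample ranges and client IP are placeholders, not real Watson Studio addresses\.

```python
import ipaddress

# Placeholder values: copy the real CIDR ranges from
# Administration > Cloud integrations > Firewall configuration.
cidr_ranges = ["192.0.2.0/24", "198.51.100.0/25"]
client_ips = ["203.0.113.7"]  # individual client systems to allow

# ip_network() raises ValueError on a typo, so a bad range fails fast.
validated = [str(ipaddress.ip_network(r)) for r in cidr_ranges]
validated += [str(ipaddress.ip_address(ip)) for ip in client_ips]

# The restrictions field expects a comma-separated list.
print(", ".join(validated))
```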
For step\-by\-step instructions for both user and account restrictions, see [IBM Cloud docs: Allowing specific IP addresses](https://cloud.ibm.com/docs/account?topic=account-ips)
## Allow third\-party URLs on an internal network ##
If you are running IBM watsonx behind a firewall, you must allowlist third\-party URLs to provide outbound browser access\. The URLs include resources from IBM Cloud and other domains\. IBM watsonx requires access to these domains for outbound browser traffic through the firewall\.
This list provides access only for core IBM watsonx functions\. Specific services might require additional URLs\. The list does not cover URLs that are required by the IBM Cloud console and its outbound requests\. The sketch after the table shows one way to turn these domains into an allowlist file\.
<!-- <table> -->
Table 2\. Third\-party URL allowlist for IBM watsonx
| Domain | Description |
| -------------------------------------------- | --------------------------------------------------- |
| \*\.bluemix\.net | Legacy IBM Cloud domain that is still used in some flows |
| \*\.appdomain\.cloud | IBM Cloud app domain |
| cloud\.ibm\.com | IBM Cloud global domain |
| \*\.cloud\.ibm\.com | Various IBM Cloud subdomains |
| dataplatform\.cloud\.ibm\.com | IBM watsonx Dallas region |
| \*\.dataplatform\.cloud\.ibm\.com | IBM watsonx subdomains |
| eum\.instana\.io | Instana client\-side instrumentation |
| eum\-orange\-saas\.instana\.io | Instana client\-side instrumentation |
| cdnjs\.cloudflare\.com | Cloudflare CDN for some static resources |
| nebula\-cdn\.kampyle\.com | Medallia NPS |
| resources\.digital\-cloud\-ibm\.medallia\.eu | Medallia NPS |
| udc\-neb\.kampyle\.com | Medallia NPS |
| ubt\.digital\-cloud\-ibm\.medallia\.eu | Medallia NPS |
| cdn\.segment\.com | Segment JS |
| api\.segment\.io | Segment API |
| cdn\.walkme\.com | WalkMe static resources |
| papi\.walkme\.com | WalkMe API |
| ec\.walkme\.com | WalkMe API |
| playerserver\.walkme\.com | WalkMe player server |
| s3\.walkmeusercontent\.com | WalkMe static resources |
<!-- </table ""> -->
## Multi\-tenancy ##
IBM watsonx is hosted as a secure and compliant multi\-tenant solution on IBM Cloud\. See [Multi\-Tenant](https://www.ibm.com/cloud/learn/multi-tenant)\.
**Parent topic:**[Security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html)
<!-- </article "role="article" "> -->
|
3A81B302EE01FDC0AC111CFF3ABFDB96E3A0CDD6 | https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html?context=cdpaas&locale=en | Security for IBM watsonx | Security for IBM watsonx
Security mechanisms in IBM watsonx provide protection for data, applications, identity, and resources. You can configure security mechanisms on five levels, building on IBM Cloud security functions.
Security levels in IBM watsonx
Security for IBM watsonx is configured on levels to ensure that your data, application endpoints, and identity are protected on any cloud. The security levels are:
1. [Network security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-network.html) – Network security protects the network infrastructure and the points where your database or applications interact with the cloud. For example, you can protect your network by allowing IP addresses, by connecting securely to databases and third-party clouds, and by securing endpoints.
2. [Enterprise security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-enterprise.html) – Enterprises are multiple IBM Cloud accounts in a hierarchy. For example, your company might have many teams that require one or more separate accounts for development, testing, and production environments. Or, you can configure an enterprise to isolate workloads in separate accounts to meet compliance guidelines.
3. [Account security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-account.html) – Account security includes IAM and Access group roles, Service IDs, monitoring, and other security mechanisms that are configured on IBM Cloud for your IBM Cloud account.
4. [Data security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-data.html) – Data security protects the IBM Cloud Object Storage service instance, provides encryption for data at rest and in motion, and includes other security mechanisms that are related to data.
5. [Collaborator security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-collab.html) – Protect your workspaces by assigning role-based access controls to collaborators in IBM watsonx.
IBM watsonx conforms to IBM Cloud security requirements. See [IBM Cloud docs: How do I know that my data is safe?](https://cloud.ibm.com/docs/overview?topic=overview-security).
Resiliency
IBM watsonx is disaster resistant:
* The metadata for your projects and catalogs is stored in a three-node dedicated Cloudant Enterprise cluster that spans multiple geographic locations.
* The files that are associated with projects and catalogs are protected by the level of resiliency that is specified by the IBM Cloud Object Storage plan.
Compliance
See [Keep your data secure and compliant](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security.html).
Learn more
* [watsonx terms](https://www.ibm.com/support/customer/csol/terms/?id=i126-9640&lc=en#detail-document)
* [IBM Watson Machine Learning terms](http://www.ibm.com/support/customer/csol/terms/?id=i126-6883)
* [IBM Watson Studio terms](https://www.ibm.com/support/customer/csol/terms/?id=i126-7747)
* [IBM Cloud Object Storage terms](https://www.ibm.com/software/sla/sladb.nsf/sla/bm-7857-03)
* [Managing security and compliance in IBM Cloud](https://cloud.ibm.com/docs/overview?topic=overview-manage-security-compliance)
* [Software Product Compatibility Reports: IBM Watson Studio](https://www.ibm.com/software/reports/compatibility/clarity-reports/report/html/softwareReqsForProduct?deliverableId=95E9BEA0B35711E7A9EB066095601ABB).
* [Software Product Compatibility Reports: IBM Watson Machine Learning service](https://www.ibm.com/software/reports/compatibility/clarity-reports/report/html/softwareReqsForProduct?deliverableId=850D9360405711E5B2E4A36A7B0C4479).
Parent topic:[Administering your accounts and services](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/administer-accounts.html)
| # Security for IBM watsonx #
Security mechanisms in IBM watsonx provide protection for data, applications, identity, and resources\. You can configure security mechanisms on five levels, building on IBM Cloud security functions\.
## Security levels in IBM watsonx ##
Security for IBM watsonx is configured on levels to ensure that your data, application endpoints, and identity are protected on any cloud\. The security levels are:
<!-- <ol> -->
1. [Network security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-network.html) – Network security protects the network infrastructure and the points where your database or applications interact with the cloud\. For example, you can protect your network by allowing IP addresses, by connecting securely to databases and third\-party clouds, and by securing endpoints\.
2. [Enterprise security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-enterprise.html) – Enterprises are multiple IBM Cloud accounts in a hierarchy\. For example, your company might have many teams that require one or more separate accounts for development, testing, and production environments\. Or, you can configure an enterprise to isolate workloads in separate accounts to meet compliance guidelines\.
3. [Account security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-account.html) – Account security includes IAM and Access group roles, Service IDs, monitoring, and other security mechanisms that are configured on IBM Cloud for your IBM Cloud account\.
4. [Data security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-data.html) – Data security protects the IBM Cloud Object Storage service instance, provides encryption for data at rest and in motion, and includes other security mechanisms that are related to data\.
5. [Collaborator security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-collab.html) – Protect your workspaces by assigning role\-based access controls to collaborators in IBM watsonx\.
<!-- </ol> -->
IBM watsonx conforms to IBM Cloud security requirements\. See [IBM Cloud docs: How do I know that my data is safe?](https://cloud.ibm.com/docs/overview?topic=overview-security)\.
## Resiliency ##
IBM watsonx is disaster resistant:
<!-- <ul> -->
* The metadata for your projects and catalogs is stored in a three\-node dedicated Cloudant Enterprise cluster that spans multiple geographic locations\.
* The files that are associated with projects and catalogs are protected by the level of resiliency that is specified by the IBM Cloud Object Storage plan\.
<!-- </ul> -->
## Compliance ##
See [Keep your data secure and compliant](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security.html)\.
## Learn more ##
<!-- <ul> -->
* [watsonx terms](https://www.ibm.com/support/customer/csol/terms/?id=i126-9640&lc=en#detail-document)
* [IBM Watson Machine Learning terms](http://www.ibm.com/support/customer/csol/terms/?id=i126-6883)
* [IBM Watson Studio terms](https://www.ibm.com/support/customer/csol/terms/?id=i126-7747)
* [IBM Cloud Object Storage terms](https://www.ibm.com/software/sla/sladb.nsf/sla/bm-7857-03)
* [Managing security and compliance in IBM Cloud](https://cloud.ibm.com/docs/overview?topic=overview-manage-security-compliance)
* [Software Product Compatibility Reports: IBM Watson Studio](https://www.ibm.com/software/reports/compatibility/clarity-reports/report/html/softwareReqsForProduct?deliverableId=95E9BEA0B35711E7A9EB066095601ABB)\.
* [Software Product Compatibility Reports: IBM Watson Machine Learning service](https://www.ibm.com/software/reports/compatibility/clarity-reports/report/html/softwareReqsForProduct?deliverableId=850D9360405711E5B2E4A36A7B0C4479)\.
<!-- </ul> -->
**Parent topic:**[Administering your accounts and services](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/administer-accounts.html)
<!-- </article "role="article" "> -->
|
581F43AA02D6C6861D2FDF220617CF3FBB903AE5 | https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security.html?context=cdpaas&locale=en | Keeping your data secure and compliant | Keeping your data secure and compliant
Customer data security is paramount. The following information outlines some of the ways that customer data is protected when using IBM watsonx and what you are expected to do to help in these efforts.
* [Customer responsibility](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security.html?context=cdpaas&locale=en#customer-responsibility)
* [HIPAA readiness](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security.html?context=cdpaas&locale=en#hipaa)
* [IBM's commitment to GDPR](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security.html?context=cdpaas&locale=en#gdpr)
* [Content and Data Protection](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security.html?context=cdpaas&locale=en#content-and-data-protection)
* [GDPR statement that applies to IBM Watson Machine Learning log files](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security.html?context=cdpaas&locale=en#logfiles)
* [Secure deletion from the IBM Watson Machine Learning service](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security.html?context=cdpaas&locale=en#secure-deletion)
Customer responsibility
Clients are responsible for ensuring their own compliance with various laws and regulations, including the European Union General Data Protection Regulation (GDPR). Clients are solely responsible for obtaining advice of competent legal counsel as to the identification and interpretation of any relevant laws and regulations that may affect the clients’ business and any actions the clients may need to take to comply with such laws and regulations. The products, services, and other capabilities described herein are not suitable for all customer situations and may have restricted availability. IBM does not provide legal, accounting, or auditing advice or represent or warrant that its services or products will ensure that clients are in compliance with any law or regulation.
HIPAA readiness
Watson Studio and Watson Machine Learning meet the required IBM controls that are commensurate with the Health Insurance Portability and Accountability Act of 1996 (HIPAA) Security and Privacy Rule requirements.
These requirements include the appropriate administrative, physical, and technical safeguards required of Business Associates in 45 CFR Part 160 and Subparts A and C of Part 164. HIPAA readiness applies to the following plans:
* The Watson Studio Professional plan in the Dallas (US South) region
* The Watson Machine Learning Standard plan in the Dallas (US South) region
For other services, you must check the plan page in IBM Cloud for each service to determine whether it is HIPAA ready and whether you need to reprovision the service after you enable HIPAA support.
HIPAA support from IBM requires that you agree to the terms of the [Business Associate Addendum (BAA) agreement](https://www.ibm.com/support/customer/csol/terms/?ref=i126-7356-04-12-2019-zz-en) with IBM for your IBM Cloud account. The BAA outlines IBM responsibilities, but also your responsibilities to maintain HIPAA compliance. After you enable HIPAA support in your IBM Cloud account, you cannot disable it. See [IBM Cloud Docs: Enabling the HIPAA Supported setting](https://cloud.ibm.com/docs/account?topic=account-eu-hipaa-supported).
To enable HIPAA support for your IBM Cloud account:
1. Log in to your IBM Cloud account.
2. Click Manage > Account and then Account settings.
3. In the HIPAA Supported section, click On.
4. Read the BAA and then select Accept and click Submit.
IBM's commitment to GDPR
Learn more about IBM’s own [GDPR readiness journey and our GDPR capabilities](https://www.ibm.com/data-responsibility/gdpr/) and offerings to support your compliance journey.
Content and Data Protection
The Data Processing and Protection data sheet (Data Sheet) provides information specific to the IBM Cloud Service regarding the type of Content enabled to be processed, the processing activities involved, the data protection features, and specifics on retention and return of Content. Any details or clarifications and terms, including customer responsibilities, around use of the Cloud Service and data protection features, if any, are set forth in this section. There may be more than one Data Sheet applicable to a customer's use of the IBM Cloud Service based upon options selected by customer. The Data Sheet may only be available in English and not available in local languages. Despite any practices of local law or custom, the parties agree that they understand English and it is an appropriate language regarding acquisition and use of the IBM Cloud Services. The following Data Sheets apply to the IBM Cloud Service and its available options. Customer acknowledges that i) IBM may modify Data Sheets from time to time at IBM's sole discretion and ii) such modifications will supersede prior versions. The intent of any modification to Data Sheet(s) will be to
1. improve or clarify existing commitments,
2. maintain alignment to current adopted standards and applicable laws, or
3. provide additional commitments. No modification to Data Sheets will materially degrade the data protection of an IBM Cloud Service.
See the [Learn more](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security.html?context=cdpaas&locale=en#learn-more) section for links to some of the data sheets that you can view.
You, the customer, are responsible for taking the necessary actions to order, enable, or use available data protection features for an IBM Cloud Service, and you accept responsibility for use of the IBM Cloud Services if you fail to take such actions, including meeting any data protection or other legal requirements regarding Content. [IBM's Data Processing Addendum](http://ibm.com/dpa) (DPA) and DPA Exhibits apply and are referenced as part of the Agreement, if and to the extent the European General Data Protection Regulation (EU/2016/679) (GDPR) applies to personal data contained in Content. The applicable Data Sheets for this IBM Cloud Service will serve as the DPA Exhibits. If the DPA applies, IBM's obligation to provide notice of changes to Subprocessors and Customer's right to object to such changes will apply as set out in the DPA.
GDPR statement that applies to IBM Watson Machine Learning log files
Disclaimer: Client’s use of the deep learning training process includes the ability to write to the training log files. Personal data must not be written to these training log files as they are accessible to other users within Client’s Enterprise as well as to IBM as necessary to support the Cloud Service.
Pay close attention to data privacy principles when selecting a dataset for training data. Processing of PI is governed by rigorous legal requirements and is only allowed if it is based on an explicit legal basis. These regulations mandate that PI is processed only for the purpose it was collected for. No other processing in a manner that is incompatible with this initial purpose is permissible. For these and other constraints that these regulations place on your use of PI, we highly recommend that you do not use real PI in your training dataset unless it is allowed or permissible. You can substitute real PI with publicly available test data.
Secure deletion from the IBM Watson Machine Learning service
Anyone who has personally identifiable information and data (PII) stored as part of using the IBM Watson Machine Learning service has the right to obtain from the controller the erasure of that data without undue delay. The controller has the obligation to erase personal data without undue delay where one of the following conditions exists:
* There is PII data stored in the IBM Watson Machine Learning service
* User email address and full name are stored as metadata related to the Machine Learning repository assets.
* User provided service credentials.
* Repository asset content, which is usually outside the Machine Learning service's control and can potentially contain any type of PII data. In this case, when users want to track PII data stored in assets, such as a model, they must:
* Get the training data reference from the model metadata.
* Scan the training data for occurrences of the particular user's PII data.
* If such data is found in the training data set, consider the model as potentially holding this data in its content.
Repository asset content, such as models, can be securely deleted by performing one of the methods [for permanently deleting personal data](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security.html?context=cdpaas&locale=enoptions-for-permanently-deleting-personal-data).
Options for permanently deleting personal data
There are several options that users can choose to delete their personal data permanently:
* Remove the entire IBM Watson Machine Learning service instance from IBM Cloud. You can do this by sending a deprovisioning request through different channels, such as the IBM Cloud UI, CLI, or REST API.
* Use the [Watson Machine Learning REST API](https://cloud.ibm.com/apidocs/machine-learning-cp) to delete models or model deployments.
For the IBM Watson Machine Learning service, personally identifiable information and data is removed completely from all data sources, including backups, after 30 days.
Learn more
* [watsonx terms](https://www.ibm.com/support/customer/csol/terms/?id=i126-9640&lc=en#detail-document)
* [IBM Watson Machine Learning terms](http://www.ibm.com/support/customer/csol/terms/?id=i126-6883)
* [IBM Watson Studio terms](https://www.ibm.com/support/customer/csol/terms/?id=i126-7747)
* [IBM Cloud Object Storage terms](https://www.ibm.com/software/sla/sladb.nsf/sla/bm-7857-03)
* [How do I know that my data is safe?](https://cloud.ibm.com/docs/overview?topic=overview-security)
* [Data Security and Privacy Principles for IBM Cloud Services](https://www-03.ibm.com/software/sla/sladb.nsf/pdf/7745WW2/$file/Z126-7745-WW-2_05-2017_en_US.pdf)
* [IBM and GDPR](https://www.ibm.com/data-responsibility/gdpr/)
* [Software Product Compatibility Reports: IBM Watson Studio](https://www.ibm.com/software/reports/compatibility/clarity-reports/report/html/softwareReqsForProduct?deliverableId=95E9BEA0B35711E7A9EB066095601ABB)
* [Software Product Compatibility Reports: IBM Watson Machine Learning](https://www.ibm.com/software/reports/compatibility/clarity-reports/report/html/softwareReqsForProduct?deliverableId=6B5148E0537F11E6865BC3F213DB63F7)
* [Software Product Compatibility Reports: IBM Watson Machine Learning Service](https://www.ibm.com/software/reports/compatibility/clarity-reports/report/html/softwareReqsForProduct?deliverableId=850D9360405711E5B2E4A36A7B0C4479)
Parent topic:[Security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html)
| # Keeping your data secure and compliant #
Customer data security is paramount\. The following information outlines some of the ways that customer data is protected when using IBM watsonx and what you are expected to do to help in these efforts\.
<!-- <ul> -->
* [Customer responsibility](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security.html?context=cdpaas&locale=en#customer-responsibility)
* [HIPAA readiness](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security.html?context=cdpaas&locale=en#hipaa)
* [IBM's commitment to GDPR](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security.html?context=cdpaas&locale=en#gdpr)
* [Content and Data Protection](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security.html?context=cdpaas&locale=en#content-and-data-protection)
* [GDPR statement that applies to IBM Watson Machine Learning log files](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security.html?context=cdpaas&locale=en#logfiles)
* [Secure deletion from the IBM Watson Machine Learning service](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security.html?context=cdpaas&locale=en#secure-deletion)
<!-- </ul> -->
## Customer responsibility ##
Clients are responsible for ensuring their own compliance with various laws and regulations, including the European Union General Data Protection Regulation (GDPR)\. Clients are solely responsible for obtaining advice of competent legal counsel as to the identification and interpretation of any relevant laws and regulations that may affect the clients’ business and any actions the clients may need to take to comply with such laws and regulations\. The products, services, and other capabilities described herein are not suitable for all customer situations and may have restricted availability\. IBM does not provide legal, accounting, or auditing advice or represent or warrant that its services or products will ensure that clients are in compliance with any law or regulation\.
## HIPAA readiness ##
Watson Studio and Watson Machine Learning meet the required IBM controls that are commensurate with the Health Insurance Portability and Accountability Act of 1996 (HIPAA) Security and Privacy Rule requirements\.
These requirements include the appropriate administrative, physical, and technical safeguards required of Business Associates in 45 CFR Part 160 and Subparts A and C of Part 164\. HIPAA readiness applies to the following plans:
<!-- <ul> -->
* The Watson Studio Professional plan in the Dallas (US South) region
* The Watson Machine Learning Standard plan in the Dallas (US South) region
<!-- </ul> -->
For other services, you must check the plan page in IBM Cloud for each service to determine whether it is HIPAA ready and whether you need to reprovision the service after you enable HIPAA support\.
HIPAA support from IBM requires that you agree to the terms of the [Business Associate Addendum (BAA) agreement](https://www.ibm.com/support/customer/csol/terms/?ref=i126-7356-04-12-2019-zz-en) with IBM for your IBM Cloud account\. The BAA outlines IBM responsibilities, but also your responsibilities to maintain HIPAA compliance\. After you enable HIPAA support in your IBM Cloud account, you cannot disable it\. See [IBM Cloud Docs: Enabling the HIPAA Supported setting](https://cloud.ibm.com/docs/account?topic=account-eu-hipaa-supported)\.
To enable HIPAA support for your IBM Cloud account:
<!-- <ol> -->
1. Log in to your IBM Cloud account\.
2. Click **Manage > Account** and then **Account settings**\.
3. In the **HIPAA Supported** section, click **On**\.
4. Read the BAA and then select **Accept** and click **Submit**\.
<!-- </ol> -->
## IBM's commitment to GDPR ##
Learn more about IBM’s own [GDPR readiness journey and our GDPR capabilities](https://www.ibm.com/data-responsibility/gdpr/) and offerings to support your compliance journey\.
## Content and Data Protection ##
The Data Processing and Protection data sheet (Data Sheet) provides information specific to the IBM Cloud Service regarding the type of Content enabled to be processed, the processing activities involved, the data protection features, and specifics on retention and return of Content\. Any details or clarifications and terms, including customer responsibilities, around use of the Cloud Service and data protection features, if any, are set forth in this section\. There may be more than one Data Sheet applicable to a customer's use of the IBM Cloud Service based upon options selected by customer\. The Data Sheet may only be available in English and not available in local languages\. Despite any practices of local law or custom, the parties agree that they understand English and it is an appropriate language regarding acquisition and use of the IBM Cloud Services\. The following Data Sheets apply to the IBM Cloud Service and its available options\. Customer acknowledges that i) IBM may modify Data Sheets from time to time at IBM's sole discretion and ii) such modifications will supersede prior versions\. The intent of any modification to Data Sheet(s) will be to
<!-- <ol> -->
1. improve or clarify existing commitments,
2. maintain alignment to current adopted standards and applicable laws, or
3. provide additional commitments\. No modification to Data Sheets will materially degrade the data protection of an IBM Cloud Service\.
<!-- </ol> -->
See the [Learn more](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security.html?context=cdpaas&locale=en#learn-more) section for links to some of the data sheets that you can view\.
You, the customer, are responsible for taking the necessary actions to order, enable, or use available data protection features for an IBM Cloud Service, and you accept responsibility for use of the IBM Cloud Services if you fail to take such actions, including meeting any data protection or other legal requirements regarding Content\. [IBM's Data Processing Addendum](http://ibm.com/dpa) (DPA) and DPA Exhibits apply and are referenced as part of the Agreement, if and to the extent the European General Data Protection Regulation (EU/2016/679) (GDPR) applies to personal data contained in Content\. The applicable Data Sheets for this IBM Cloud Service will serve as the DPA Exhibits\. If the DPA applies, IBM's obligation to provide notice of changes to Subprocessors and Customer's right to object to such changes will apply as set out in the DPA\.
## GDPR statement that applies to IBM Watson Machine Learning log files ##
Disclaimer: Client’s use of the deep learning training process includes the ability to write to the training log files\. Personal data must not be written to these training log files as they are accessible to other users within Client’s Enterprise as well as to IBM as necessary to support the Cloud Service\.
Pay close attention to data privacy principles when selecting a dataset for training data\. Processing of PI is governed by rigorous legal requirements and is only allowed if it is based on an explicit legal basis\. These regulations mandate that PI is processed only for the purpose it was collected for\. No other processing in a manner that is incompatible with this initial purpose is permissible\. For these and other constraints that these regulations place on your use of PI, we highly recommend that you do not use real PI in your training dataset unless it is allowed or permissible\. You can substitute real PI with publicly available test data\.
## Secure deletion from the IBM Watson Machine Learning service ##
Anyone who has personally identifiable information and data (PII) stored as part of using the IBM Watson Machine Learning service has the right to obtain from the controller the erasure of that data without undue delay\. The controller has the obligation to erase personal data without undue delay where one of the following conditions exists:
<!-- <ul> -->
* There is PII data stored in the IBM Watson Machine Learning service
* User email address and full name are stored as metadata related to the Machine Learning repository assets\.
* User provided service credentials\.
* Repository asset content, which is usually outside the Machine Learning service's control and can potentially contain any type of PII data\. In this case, when users want to track PII data stored in assets, such as a model, they must complete the following steps (a minimal scan sketch follows this list):
<!-- <ul> -->
* Get the training data reference from the model metadata.
* Scan the training data for occurrences of the particular user's PII data.
* If such data is found in the training data set, consider the model as potentially holding this data in its content.
<!-- </ul> -->
<!-- </ul> -->
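As a minimal sketch of the scan step in the preceding list, the following example searches a CSV training file for a particular user's PII values\. The file name and PII values are placeholders; a real scan would need to cover every training source and any derived datasets\.

```python
import csv

# Placeholder values for illustration only.
training_file = "training_data.csv"
user_pii = {"jane.doe@example.com", "Jane Doe"}

matches = []
with open(training_file, newline="") as f:
    for row_num, row in enumerate(csv.reader(f), start=1):
        # Flag any cell that contains one of the user's PII values.
        if any(pii.lower() in cell.lower() for cell in row for pii in user_pii):
            matches.append(row_num)

if matches:
    print(f"PII found in rows {matches}; treat the model as holding this data")
else:
    print("No occurrences found in the training file")
```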
Repository asset content, such as models, can be securely deleted by performing one of the methods [for permanently deleting personal data](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security.html?context=cdpaas&locale=en#options-for-permanently-deleting-personal-data)\.
### Options for permanently deleting personal data ###
There are several options that users can choose to delete their personal data permanently:
<!-- <ul> -->
* Remove the entire IBM Watson Machine Learning service instance from IBM Cloud\. You can do this by sending a deprovisioning request through different channels, such as the IBM Cloud UI, CLI, or REST API\.
* Use the [Watson Machine Learning REST API](https://cloud.ibm.com/apidocs/machine-learning-cp) to delete models or model deployments (see the sketch after this list)\.
<!-- </ul> -->
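The following sketch illustrates the REST API option in the preceding list\. The IAM token exchange endpoint is standard IBM Cloud, but the model deletion path, query parameters, and region host are assumptions based on v4 API conventions; confirm them against the linked Watson Machine Learning API reference before use\.

```python
import requests

API_KEY = "YOUR_IBM_CLOUD_API_KEY"      # placeholder
MODEL_ID = "YOUR_MODEL_ID"              # placeholder
SPACE_ID = "YOUR_DEPLOYMENT_SPACE_ID"   # placeholder

# Exchange the API key for a bearer token (IBM Cloud IAM).
token = requests.post(
    "https://iam.cloud.ibm.com/identity/token",
    data={
        "grant_type": "urn:ibm:params:oauth:grant-type:apikey",
        "apikey": API_KEY,
    },
).json()["access_token"]

# Assumed v4-style endpoint; check the API reference for your region.
resp = requests.delete(
    f"https://us-south.ml.cloud.ibm.com/ml/v4/models/{MODEL_ID}",
    params={"space_id": SPACE_ID, "version": "2020-09-01"},
    headers={"Authorization": f"Bearer {token}"},
)
print(resp.status_code)  # 204 typically indicates successful deletion
```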
For the IBM Watson Machine Learning service, personally identifiable information and data is removed completely from all data sources, including backups, after 30 days\.
## Learn more ##
<!-- <ul> -->
* [watsonx terms](https://www.ibm.com/support/customer/csol/terms/?id=i126-9640&lc=en#detail-document)
* [IBM Watson Machine Learning terms](http://www.ibm.com/support/customer/csol/terms/?id=i126-6883)
* [IBM Watson Studio terms](https://www.ibm.com/support/customer/csol/terms/?id=i126-7747)
* [IBM Cloud Object Storage terms](https://www.ibm.com/software/sla/sladb.nsf/sla/bm-7857-03)
* [How do I know that my data is safe?](https://cloud.ibm.com/docs/overview?topic=overview-security)
* [Data Security and Privacy Principles for IBM Cloud Services](https://www-03.ibm.com/software/sla/sladb.nsf/pdf/7745WW2/$file/Z126-7745-WW-2_05-2017_en_US.pdf)
* [IBM and GDPR](https://www.ibm.com/data-responsibility/gdpr/)
* [Software Product Compatibility Reports: IBM Watson Studio](https://www.ibm.com/software/reports/compatibility/clarity-reports/report/html/softwareReqsForProduct?deliverableId=95E9BEA0B35711E7A9EB066095601ABB)
* [Software Product Compatibility Reports: IBM Watson Machine Learning](https://www.ibm.com/software/reports/compatibility/clarity-reports/report/html/softwareReqsForProduct?deliverableId=6B5148E0537F11E6865BC3F213DB63F7)
* [Software Product Compatibility Reports: IBM Watson Machine Learning Service](https://www.ibm.com/software/reports/compatibility/clarity-reports/report/html/softwareReqsForProduct?deliverableId=850D9360405711E5B2E4A36A7B0C4479)
<!-- </ul> -->
**Parent topic:**[Security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html)
<!-- </article "role="article" "> -->
|
B0DA6CD45BFD0D3A91F0B3C4E7615DE23FE4F350 | https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/set-up-ws.html?context=cdpaas&locale=en | Setting up the Watson Studio and Watson Machine Learning services | Setting up the Watson Studio and Watson Machine Learning services
The Watson Studio and Watson Machine Learning services are provisioned automatically with a Lite plan when you sign up for IBM watsonx. To set up Watson Studio and Watson Machine Learning for an organization, you upgrade the service plans and allow the node IP addresses access through the firewall.
To set up the Watson Studio and Watson Machine Learning services, complete these tasks:
1. [Upgrade the services](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/set-up-ws.html?context=cdpaas&locale=en#upgrade).
2. [Allow IP addresses](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/set-up-ws.html?context=cdpaas&locale=en#node-ips).
Step 1: Upgrade the services to the appropriate plans
Required roles : You must be the IBM Cloud account Owner or Administrator.
To upgrade the services:
1. Determine the Watson Studio service plan that you need. The features and compute resources of Watson Studio vary across the service plans. See [Watson Studio service plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/ws-plans.html).
2. While logged in to IBM watsonx, from the main menu, click Administration > Services > Service instances.
3. Click the menu next to the Watson Studio service and choose Upgrade service.
4. Choose the plan you want and click Upgrade.
5. Repeat the steps for the Watson Machine Learning service. The resources and number of deployment jobs vary across the Watson Machine Learning service plans. See [Watson Machine Learning service plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html).
Make sure that object storage is configured to allow these users to create catalogs and projects. See [Setting up IBM Cloud Object Storage for use with IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html#cos-delegation).
All users in your IBM Cloud account with the Editor IAM platform access role for all IAM enabled services can now create projects and use all the Watson Studio and Watson Machine Learning tools.
Step 2: Allow IP addresses for Watson Studio for your region
The IP addresses for the Watson Studio nodes in each region must be configured as allowed IP addresses for the IBM Cloud account. When allowing specific IP addresses for Watson Studio, you include the CIDR ranges for the Watson Studio nodes in each region to allow a secure connection through the firewall.
Required roles : You must have the Editor or higher IBM Cloud IAM Platform role to allow IP addresses.
First look up the CIDR blocks in IBM watsonx, and then enter them into the Access (IAM) > Settings screen in IBM Cloud. Follow these steps:
1. From the IBM watsonx main menu, select Administration > Cloud integrations.
2. Click Firewall configuration to display the IP addresses for the current region.
3. Checkmark Show IP ranges in CIDR notation.
4. Click the icon to copy a CIDR block to the clipboard.
5. Enter the CIDR block of IP addresses into Access (IAM) > Settings > Restrict IP address access > Allowed IP addresses for the IBM Cloud account.
6. Then click Save.
7. Repeat for each CIDR block until all are entered.
8. Repeat for each region.
For step-by-step instructions, see [IBM Cloud docs: Allowing specific IP addresses](https://cloud.ibm.com/docs/account?topic=account-ips).
Next steps
Finish the remaining steps for [setting up the platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-platform.html).
Parent topic:[Setting up the platform for administrators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-platform.html)
| # Setting up the Watson Studio and Watson Machine Learning services #
The Watson Studio and Watson Machine Learning services are provisioned automatically with a Lite plan when you sign up for IBM watsonx\. To set up Watson Studio and Watson Machine Learning for an organization, you upgrade the service plans and allow the node IP addresses access through the firewall\.
To set up the Watson Studio and Watson Machine Learning services, complete these tasks:
<!-- <ol> -->
1. [Upgrade the services](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/set-up-ws.html?context=cdpaas&locale=en#upgrade)\.
2. [Allow IP addresses](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/set-up-ws.html?context=cdpaas&locale=en#node-ips)\.
<!-- </ol> -->
## Step 1: Upgrade the services to the appropriate plans ##
**Required roles** : You must be the IBM Cloud account **Owner** or **Administrator**\.
To upgrade the services:
<!-- <ol> -->
1. Determine the Watson Studio service plan that you need\. The features and compute resources of Watson Studio vary across the service plans\. See [Watson Studio service plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/ws-plans.html)\.
2. While logged in to IBM watsonx, from the main menu, click **Administration > Services > Service instances**\.
3. Click the menu next to the Watson Studio service and choose **Upgrade service**\.
4. Choose the plan you want and click **Upgrade**\.
5. Repeat the steps for the Watson Machine Learning service\. The resources and number of deployment jobs vary across the Watson Machine Learning service plans\. See [Watson Machine Learning service plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html)\.
<!-- </ol> -->
Make sure that object storage is configured to allow these users to create catalogs and projects\. See [Setting up IBM Cloud Object Storage for use with IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html#cos-delegation)\.
All users in your IBM Cloud account with the **Editor** IAM platform access role for all IAM enabled services can now create projects and use all the Watson Studio and Watson Machine Learning tools\.
## Step 2: Allow IP addresses for Watson Studio for your region ##
The IP addresses for the Watson Studio nodes in each region must be configured as allowed IP addresses for the IBM Cloud account\. When allowing specific IP addresses for Watson Studio, you include the CIDR ranges for the Watson Studio nodes in each region to allow a secure connection through the firewall\.
**Required roles** : You must have the **Editor** or higher IBM Cloud IAM Platform role to allow IP addresses\.
First look up the CIDR blocks in IBM watsonx, and then enter them into the **Access (IAM) > Settings** screen in IBM Cloud\. Follow these steps:
<!-- <ol> -->
1. From the IBM watsonx main menu, select **Administration > Cloud integrations**\.
2. Click **Firewall configuration** to display the IP addresses for the current region\.
3. Checkmark **Show IP ranges in CIDR notation**\.
4. Click the icon to copy a CIDR block to the clipboard\.
5. Enter the CIDR block of IP addresses into **Access (IAM) > Settings > Restrict IP address access > Allowed IP addresses** for the IBM Cloud account (a verification sketch follows these steps)\.
6. Then click **Save**\.
7. Repeat for each CIDR block until all are entered\.
8. Repeat for each region\.
<!-- </ol> -->
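Before you save the restriction, it is worth confirming that your own client IP falls inside one of the blocks that you entered, so that you do not lock yourself out\. Here is a minimal check with the Python standard library, using placeholder addresses:

```python
import ipaddress

# Placeholder values: use the CIDR blocks you copied and your own IP.
allowed_blocks = ["192.0.2.0/24", "198.51.100.0/25"]
my_ip = ipaddress.ip_address("192.0.2.45")

covered = any(my_ip in ipaddress.ip_network(b) for b in allowed_blocks)
print("covered" if covered else "WARNING: this IP would be blocked")
```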
For step\-by\-step instructions, see [IBM Cloud docs: Allowing specific IP addresses](https://cloud.ibm.com/docs/account?topic=account-ips)\.
## Next steps ##
Finish the remaining steps for [setting up the platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-platform.html)\.
**Parent topic:**[Setting up the platform for administrators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-platform.html)
<!-- </article "role="article" "> -->
|
91838636DE3442E218FD7BECCDE866113D10DDF3 | https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-access.html?context=cdpaas&locale=en | Managing users and access | Managing users and access
As the account owner or administrator, you add the people in your organization to the IBM Cloud account and then assign them access permissions using roles that provide access to the services that they need.
User management on IBM Cloud
People who work in IBM watsonx must have a valid IBMid and be a member of the IBM Cloud account. Alternatively, they must have a valid ID in a supported user registry. User management includes adding users to the account and then assigning appropriate roles to provide access to the services and actions that they need. See [Adding users to the account](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-addl-users.html).
Access management using IBM Cloud Identity and Access Management (IAM)
You control the actions that a user can perform for a specific service by assigning permissions with IBM Cloud IAM. You create user access groups containing roles to provide permissions for users. You can also assign roles and permissions to individual users. If necessary, you can create custom roles to satisfy your business requirements.
Learn more
* [Signing up for your organization's watsonx account](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/signup-wx.html#orgacct)
* [Logging in to watsonx.ai through IBM Cloud App ID (beta)](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/signup-wx.html#appid)
* [IBM Cloud docs: Assigning access to resources by using access groups](https://cloud.ibm.com/docs/account?topic=account-access-getstarted)
* [IBM Cloud docs: Creating custom roles](https://cloud.ibm.com/docs/account?topic=account-custom-roles)
* [IBM Cloud docs: IAM access](https://cloud.ibm.com/docs/account?topic=account-userroles)
* [IBM Cloud docs: What is IBM Cloud Identity and Access Management](https://cloud.ibm.com/docs/account?topic=account-iamoverview)
* [IBM Cloud docs: Setting up access groups](https://cloud.ibm.com/docs/account?topic=account-groups)
* [IBM Cloud docs: Best practices for organizing resources and assigning access](https://cloud.ibm.com/docs/account?topic=account-account_setup&interface=ui)
Parent topic:[Setting up the platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-platform.html)
| # Managing users and access #
As the account owner or administrator, you add the people in your organization to the IBM Cloud account and then assign them access permissions using roles that provide access to the services that they need\.
## User management on IBM Cloud ##
People who work in IBM watsonx must have a valid IBMid and be a member of the IBM Cloud account\. Alternatively, they must have a valid ID in a supported user registry\. User management includes adding users to the account and then assigning appropriate roles to provide access to the services and actions that they need\. See [Adding users to the account](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-addl-users.html)\.
## Access management using IBM Cloud Identity and Access Management (IAM) ##
You control the actions that a user can perform for a specific service by assigning permissions with IBM Cloud IAM\. You create user access groups containing roles to provide permissions for users\. You can also assign roles and permissions to individual users\. If necessary, you can create custom roles to satisfy your business requirements\.
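For teams that script their setup, the following sketch creates an access group through the IAM Access Groups REST API\. The `https://iam.cloud.ibm.com/v2/groups` endpoint and payload shape follow the public Access Groups API, but treat the details as assumptions to verify against the IBM Cloud API docs; the token, account ID, and group name are placeholders\.

```python
import requests

IAM_TOKEN = "YOUR_BEARER_TOKEN"    # placeholder: an IAM access token
ACCOUNT_ID = "YOUR_ACCOUNT_ID"     # placeholder

# Create an access group that you can later attach users and policies to.
resp = requests.post(
    "https://iam.cloud.ibm.com/v2/groups",
    params={"account_id": ACCOUNT_ID},
    headers={"Authorization": f"Bearer {IAM_TOKEN}"},
    json={
        "name": "watsonx-editors",
        "description": "Editors who can create projects in IBM watsonx",
    },
)
print(resp.status_code, resp.json().get("id"))
```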
## Learn more ##
<!-- <ul> -->
* [Signing up for your organization's watsonx account](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/signup-wx.html#orgacct)
* [Logging in to watsonx\.ai through IBM Cloud App ID (beta)](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/signup-wx.html#appid)
* [IBM Cloud docs: Assigning access to resources by using access groups](https://cloud.ibm.com/docs/account?topic=account-access-getstarted)
* [IBM Cloud docs: Creating custom roles](https://cloud.ibm.com/docs/account?topic=account-custom-roles)
* [IBM Cloud docs: IAM access](https://cloud.ibm.com/docs/account?topic=account-userroles)
* [IBM Cloud docs: What is IBM Cloud Identity and Access Management](https://cloud.ibm.com/docs/account?topic=account-iamoverview)
* [IBM Cloud docs: Setting up access groups](https://cloud.ibm.com/docs/account?topic=account-groups)
* [IBM Cloud docs: Best practices for organizing resources and assigning access](https://cloud.ibm.com/docs/account?topic=account-account_setup&interface=ui)
<!-- </ul> -->
**Parent topic:**[Setting up the platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-platform.html)
<!-- </article "role="article" "> -->
|
9FD50170823EF108E2CF4EBF083B0085845FC3BE | https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-account.html?context=cdpaas&locale=en | Setting up the IBM Cloud account | Setting up the IBM Cloud account
As an IBM Cloud account owner or administrator, you sign up for IBM watsonx.ai and set up payment for services in the IBM Cloud account.
These steps describe the typical tasks for an IBM Cloud account owner to set up the account for an organization:
1. [Sign up for watsonx.ai](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-account.html?context=cdpaas&locale=en#sign-up).
2. [Update your IBM Cloud account](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-account.html?context=cdpaas&locale=en#paid-account) to add or update billing information.
3. [(Optional) Configure restrictions for the account](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-account.html?context=cdpaas&locale=en#restrict).
Step 1: Sign up for watsonx.ai
To sign up for watsonx.ai:
1. Go to [Try IBM watsonx.ai](https://dataplatform.cloud.ibm.com/registration/stepone?context=wx) or [Try watsonx.governance](https://dataplatform.cloud.ibm.com/registration/stepone?context=wx&apps=data_science_experience,watson_machine_learning,cos,aiopenscale&uucid=0cf8ca3f38ace12f&utm_content=WXGWW&regions=us-south).
2. Select the service region.
3. Agree to the terms, Data Use Policy, and Cookie Use.
4. Log in with your IBMid (usually an email address) if you have an existing IBM Cloud account. If you don't have an IBM Cloud account, click Create an IBM Cloud account to create a new account. You must enter a credit card to create a Pay-As-You-Go IBM Cloud account. However, you are not charged until you buy paid service plans.
Lite plans for Watson Studio and Watson Machine Learning are automatically provisioned for you.
Step 2: Update your IBM Cloud account
You can skip this step if your IBM Cloud account has billing information with a Pay-As-You-Go or a subscription plan.
You must update your IBM Cloud account in the following circumstances:
* You have a Trial account from signing up for watsonx.
* You have a Trial account that you [registered through an academic institution](https://ibm.biz/academic).
* You have a [Lite account](https://cloud.ibm.com/docs/account?topic=account-accountsliteaccount) that you created before 25 October 2021.
* You want to change a Pay-As-You-Go plan to a subscription plan.
Setting up a Pay-As-You-Go account
You set up a Pay-As-You-Go account by adding a credit card number and billing information. You pay only for billable services that you use, with no long-term contracts or commitments. You can provision paid plans for all services in the IBM Cloud services catalog, including plans in the watsonx services catalog.
To set up a Pay-As-You-Go account:
1. From the watsonx navigation menu, select Administration > Account and billing > Account.
2. Click Manage in IBM Cloud.
3. Log in to IBM Cloud.
4. Select Account settings.
5. Click Add credit card and enter your credit card and billing information.
6. Click Create account to submit your information.
After your payment information is processed, your account is upgraded and you receive a monthly invoice for billable resource usage or instance fees.
Setting up a subscription account
With subscriptions, you commit to a minimum spending amount for a certain period and receive a discount on the overall cost. Subscriptions are limited to service plans in the watsonx catalog.
Subscription credits are activated using a unique code that you receive by email. To activate the subscription, you apply the subscription code to an account. Be careful when selecting the account, because after you apply the subscription to an account, you can't undo it.
To set up a watsonx subscription:
1. From the watsonx navigation menu, select Administration > Account and billing > Upgrade service plans.
2. On the Upgrade service plans page, click Contact sales.
Complete and submit the form to communicate with IBM Sales that you want to set up a subscription account for watsonx. An associate from IBM Sales will contact you to set up a subscription. When your subscription is ready, you receive an email from IBM containing a unique subscription code.
To apply the subscription code to your account:
1. Locate the unique code from the email that you received from IBM.
2. Log in to your IBM Cloud account, and select Manage > Account from the header. Be sure to select the correct account.
3. Select Account settings and locate the Subscription and feature codes section on the page.
4. Click Apply code.
5. Copy and paste the code from the email into the Apply a code field and click Apply.
Your subscription account is active and you can upgrade your watsonx.ai services.
Step 3: (Optional) Configure restrictions for the account
Complete these optional tasks to secure your account:
* Restrict the scope of resources that are available in IBM watsonx to the current account. See [Set the scope of resources](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.html#set-the-scope-for-resources).
* Restrict access to specific IP addresses to protect the IBM Cloud account from unwanted access from unknown IP addresses. See [Allow specific IP addresses](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-network.html#allow-specific-ip-addresses).
Next steps
* [Add users to the account](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-addl-users.html)
* [Add more security constraints](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-account.html)
Parent topic:[Setting up the platform for administrators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-platform.html)
Add users to the account
As an Administrator, you add the people in your organization who need access to IBM watsonx to the IBM Cloud account and then assign them the appropriate roles for their tasks.
1. [Add nonadministrative users](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-addl-users.html?context=cdpaas&locale=en#users) to the IBM Cloud account and assign access groups or roles so that they can work in IBM watsonx. The new users receive an email invitation to join the account. They must accept the invitation to be added to the account.
2. Set up access groups to simplify permissions and role assignment.
3. Optional: [Add administrative users](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-addl-users.html?context=cdpaas&locale=en#adminuser) to the IBM Cloud account.
Add nonadministrative users to your IBM Cloud account
You invite users to your IBM Cloud account by sending an email invitation. The user accepts the invitation to join the account. You must assign them roles (or access groups) to provide the necessary permissions to work in IBM watsonx. For a baseline role assignment, you can provide minimum permissions by assigning the following roles in the Manage > Access (IAM) > Users > Invite users > Access policy screen in IBM Cloud:
Table 1. Minimum roles for new IBM watsonx users
Level Role Description
Service All Identity and Access enabled services Can access all services that use IAM for access management; usually assigned only to administrators in a production environment
Resources All resources Scope of resources for which user has access
Resource group access Viewer Can view but not modify resource groups
Service access Reader Can perform read-only actions within a service
Platform access Viewer Can view but not modify service instances
IBM account membership
To be authorized for IBM watsonx, users must have existing IBMids. If the invited user does not have an IBMid, it is created for them when they join the account.
Assigning roles
To assign minimum permissions to individual users (a programmatic equivalent is sketched after these steps):
1. From IBM watsonx, click Administration > Access (IAM) to open the Manage access and users page for your IBM Cloud account.
2. Click Users > Invite users+.
3. Enter one or more email addresses that are separated by commas, spaces, or line breaks. The limit is 100 email addresses. The settings apply to all the email addresses.
4. Click the Access policy tile.
5. Select All Identity and Access enabled services, then click Next to assign Resource access.
6. For Resources, choose All resources. Click Next.
7. For Resource group access, choose Viewer. Click Next.
8. For Roles and action, choose the following minimum permissions:
* In the Service access section, select Reader
* In the Platform access section, select Viewer.
9. Review the settings and edit if necessary.
10. Click Add to save the policy.
11. Click Invite to send an email invitation to each email address. The policies are assigned to the users when they accept the invitation to join the account.
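The same minimal policy can also be created programmatically through the IBM Cloud IAM Policy Management REST API. The following Python sketch is illustrative only: the API key, account ID, and user IAM ID are placeholder values, and the role CRNs correspond to the Reader service role and Viewer platform role from Table 1.

```python
import requests

API_KEY = "YOUR_IBM_CLOUD_API_KEY"  # placeholder: API key of an account administrator
ACCOUNT_ID = "YOUR_ACCOUNT_ID"      # placeholder: the IBM Cloud account ID
USER_IAM_ID = "IBMid-0000000000"    # placeholder: the invited user's IAM ID

# Exchange the API key for a short-lived IAM bearer token.
token_resp = requests.post(
    "https://iam.cloud.ibm.com/identity/token",
    data={"grant_type": "urn:ibm:params:oauth:grant-type:apikey", "apikey": API_KEY},
)
token_resp.raise_for_status()
token = token_resp.json()["access_token"]

# Access policy granting the Reader service role and the Viewer platform role
# on all IAM-enabled services in the account.
policy = {
    "type": "access",
    "subjects": [{"attributes": [{"name": "iam_id", "value": USER_IAM_ID}]}],
    "roles": [
        {"role_id": "crn:v1:bluemix:public:iam::::serviceRole:Reader"},
        {"role_id": "crn:v1:bluemix:public:iam::::role:Viewer"},
    ],
    "resources": [{"attributes": [{"name": "accountId", "value": ACCOUNT_ID}]}],
}

resp = requests.post(
    "https://iam.cloud.ibm.com/v1/policies",
    json=policy,
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()
print("Created policy:", resp.json()["id"])
```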
Modifying a user's role
When you change a user's role, their access to services changes. Their ability to complete work in IBM watsonx can be impacted if they do not have the necessary access.
Optional: Add administrative users to your IBM Cloud account
You can add administrative users with the Administrator role for account management. This role also provides the Manager role for all services in the account.
To add a user as an IBM Cloud account administrator:
1. Follow the steps to [add a nonadministrative user](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-addl-users.html?context=cdpaas&locale=en#users), except change these settings for an individual user's roles:
* In the Service access section, select Manager.
* In the Platform access section, select Administrator.
2. Alternatively, create an access group containing these roles and assign the user to the access group.
3. Click Invite. The new users receive an email invitation to join the account. They must accept the invitation to be added to the account.
4. After the user joins the account, add account management permissions. Click the user's name, then Access > Assign access under Access policies.
5. For the service to assign access to, choose All Account Management Services.
6. Next, in the Platform access section, select Administrator and click Add.
7. Click Assign.
Next steps
* Finish [setting up the platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-platform.html).
* [Upgrade your service instances](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/upgrade.html#app) to billable plans.
Learn more
* [Roles in IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/roles.html)
* [IBM Cloud docs: Account types](https://cloud.ibm.com/docs/account?topic=account-accounts)
* [IBM Cloud docs: IAM access](https://cloud.ibm.com/docs/account?topic=account-userroles)
* [IBM Cloud docs: What is IBM Cloud Identity and Access Management](https://cloud.ibm.com/docs/account?topic=account-iamoverview)
* [IBM Cloud docs: Setting up access groups](https://cloud.ibm.com/docs/account?topic=account-groups)
* [IBM Cloud docs: Giving access to resources in resource groups](https://cloud.ibm.com/docs/account?topic=account-rgs_manage_access)
Parent topic:[Managing users and access](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-access.html)
Setting up the IBM watsonx platform for administrators
To set up the watsonx platform for your organization, sign up for IBM watsonx.ai, upgrade to a paid plan, set up the services that you need, and add your users with the appropriate permissions.
IBM watsonx.ai on the watsonx platform includes cloud-based services that provide data preparation, data science, and AI modeling capabilities. The watsonx platform is protected by the same powerful security constraints that are available on IBM Cloud.
Table 1. Configuration steps for IBM watsonx
Task Location Required Role Description
[Set up the IBM Cloud account](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-account.html) IBM Cloud Account Owner Set up a paid account.
[Manage users and access](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-access.html) IBM Cloud Administrator Invite users to join the account, create user access groups, and assign roles or access groups to users to provide access.
[Set up IBM Cloud Object Storage for use with IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html) IBM Cloud and IBM watsonx Administrator Create a test project to initialize IBM Cloud Object Storage and set the location to Global in each user's profile.
[Set up the Watson Studio and Watson Machine Learning services](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/set-up-ws.html) IBM Cloud and IBM watsonx Administrator Upgrade to a paid plan.
[Create the Platform assets catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/catalog/platform-assets.html) IBM watsonx Administrator or Manager role for the Cloud Pak for Data service Add connections to the platform assets catalog for use by collaborators.
[Set up watsonx.governance](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-setup-wos.html) IBM Cloud and IBM watsonx Administrator or Editor Create access policies and assign roles to users.
[Configure firewall access](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall_ovrvw.html) (if necessary) IBM watsonx and cloud provider firewall configuration Administrator Configure inbound access through a firewall.
Optional. [Configure security mechanisms](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html) IBM Cloud Administrator IBM watsonx has five security levels to ensure that data, application endpoints, and identity are protected. For a list of common security mechanisms, see [Common security mechanisms](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-platform.html?context=cdpaas&locale=en#security).
Optional. [Connect to data behind a firewall](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html) IBM Cloud Administrator Securely connect to databases that are hosted behind a firewall.
Optional. [Configure integrations with other cloud platforms](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/int-cloud.html) IBM Cloud and IBM watsonx Administrator Connect to services on other cloud platforms.
Common security mechanisms
As an IBM Cloud account owner or administrator, you set up security for the account by providing single sign-on, IAM role-based access control, secure communication, and other security constraints.
Following are common security mechanisms for the IBM watsonx platform:
* Encrypt your instance with your own key. See [Encrypt your IBM Cloud Object Storage instance with your own key](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html#byok).
* Use IBM Key Protect to encrypt key data assets in Cloud Object Storage. See [Encrypting at rest data](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-data.html#encrypting-at-rest-data).
* Support single sign-on using SAML federation or Active Directory. See [SSO with Federated IDs](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-account.html#sso-with-federated-ids).
* Configure secure connections to databases that are behind a firewall. See [Connecting to data behind a firewall](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html).
* Configure secure communication between services with Service Endpoints. See [Private network service endpoints](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-network.html#private-network-service-endpoints).
* Control access at the IP address level. See [Allow specific IP addresses](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-network.html#allow-specific-ip-addresses).
* Require personal credentials when creating connections. The default setting is shared credentials. See [Managing your account settings](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.html#set-the-credentials-for-connections).
Learn more
* HIPAA readiness is available for some regions and plans. See [HIPAA readiness](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security.html#hipaa).
* See [Security for IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html) for a complete list of security constraints available in IBM watsonx.
* See [Overview of watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/overview-wx.html) to understand the architecture of the platform.
Parent topic:[Getting started](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-wdp.html)
Signing up for IBM watsonx as a Service
IBM watsonx as a Service contains two components: watsonx.ai and watsonx.governance. You can sign up for a personal version of either watsonx.ai or watsonx.governance at no initial cost, or sign up through an email invitation to join your organization's account. Watsonx.ai provides all the tools that you need to work with foundation models and machine learning models. Watsonx.governance provides the tools that you need to govern models.
After you sign up for watsonx.ai, you can add the watsonx.governance component from the services catalog. If you sign up for watsonx.governance, watsonx.ai is included automatically.
* [Signing up for a personal account](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/signup-wx.html?context=cdpaas&locale=en#personal)
* [Signing up for your organization's account](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/signup-wx.html?context=cdpaas&locale=en#orgacct)
* [Switching to your organization's account](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/signup-wx.html?context=cdpaas&locale=en#switching)
* [Logging in using IBM App ID (beta)](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/signup-wx.html?context=cdpaas&locale=en#appid)
Signing up for a personal account
When you sign up for watsonx.ai or watsonx.governance, you need an IBMid for an IBM Cloud account. If you don't already have an IBMid, you can create one while you sign up for watsonx.ai or watsonx.governance. For your IBM Cloud account, you enter your email address, personal information, and credit card information, which is used to verify your identity. You are charged only if you upgrade to a billable plan and then consume billable services. Lite plans do not incur charges.
The free version of watsonx.ai contains Lite plans for the IBM Watson Studio and Watson Machine Learning services that provide the tools for working with foundation models and machine learning models. The free version of watsonx.governance contains the watsonx.ai services plus a Lite plan for the watsonx.governance service that provides the tools for governing models. The Cloud Object Storage service is also included to provide storage.
To sign up for watsonx:
1. Go to [Try IBM watsonx.ai](https://dataplatform.cloud.ibm.com/registration/stepone?context=wx&apps=data_science_experience,watson_data_platform,cos&uucid=0b526de8c1c419db&utm_content=WXAWW) or [Try watsonx.governance](https://dataplatform.cloud.ibm.com/registration/stepone?context=wx&apps=data_science_experience,watson_machine_learning,cos,aiopenscale&uucid=0cf8ca3f38ace12f&utm_content=WXGWW®ions=us-south).
2. Select the IBM Cloud service region. You can select the Dallas or Frankfurt region.
3. Enter your IBM Cloud account username and password. If you don't have an IBM Cloud account, [create one](https://cloud.ibm.com/registration).
4. If you see the Select account screen, select the account and resource group where you want to use watsonx. If you belong to an account with existing services, you can select it instead of your account. The Select account screen does not display if you have only one account and resource group.
5. Click Continue. The account activation process begins.
Note: Stay with your default browser during the activation process. If you land on the IBM Cloud Dashboard, return to the [Try IBM watsonx.ai](https://dataplatform.cloud.ibm.com/registration/stepone?context=wx) page or the [Try watsonx.governance](https://dataplatform.cloud.ibm.com/registration/stepone?context=wx&apps=data_science_experience,watson_machine_learning,cos,aiopenscale&uucid=0cf8ca3f38ace12f&utm_content=WXGWW®ions=us-south) page and follow the link to log in with an existing account.
After the activation process completes, your watsonx home page is shown.
Bookmark your home page so that you can go directly to the watsonx site for your region to log in with your personal credentials.
If you're in your own account, you have the necessary permissions for complete access to projects and deployment spaces. You can access another account by [switching to that account](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/signup-wx.html?context=cdpaas&locale=en#switching).
To set up an account for your organization, so that other users can share services and resources, see [Set up an account for your organization](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-account.html).
Signing up for your organization's account
Before you can access your organization's watsonx account, you must be a member of your organization's IBM Cloud account. Account administrators can invite users to join their organization's IBM Cloud account.
The administrator provides the following information:
* The IBM Cloud account name for watsonx.
* The resource group name for the watsonx account.
* The IBM Cloud service region.
When the account administrator invites you, you receive an email from IBM Cloud with the title "You are invited to join an account in IBM Cloud," followed by the name of the account.
To join your organization's account:
1. Click the Join now link. The invitation expires after 30 days.
2. You are asked to log in with your IBMid. IBMids are assigned to IBM Cloud account members. If you don't have an IBMid, one is created for you when you join.
3. Continue to the next screen and confirm that your information is correct, then accept the invite.
4. Log in from the Welcome screen. You are now logged in to the IBM Cloud account.
5. Go to [Try IBM watsonx.ai](https://dataplatform.cloud.ibm.com/registration/stepone?context=wx&apps=data_science_experience,watson_data_platform,cos&uucid=0b526de8c1c419db&utm_content=WXAWW) or [Try watsonx.governance](https://dataplatform.cloud.ibm.com/registration/stepone?context=wx&apps=data_science_experience,watson_machine_learning,cos,aiopenscale&uucid=0cf8ca3f38ace12f&utm_content=WXGWW®ions=us-south).
6. Follow the prompts to sign up with your IBMid.
7. On the Select account screen, select your organization's account and resource group.
8. Click Continue.
You can see the name of the account you are currently working in on the menu bar.

Switching to your organization's account
You can switch to your organization's existing IBM Cloud account (or any other account for which you are a member) to share watsonx resources that are provisioned for that account.
If you are not already an account member, the account administrator must invite you to the IBM Cloud account. You receive an email invitation to join the account. After you accept the invitation, you can access the account and watsonx.
To switch to your organization's account:
1. Log in to watsonx with your personal credentials.
2. Select your organization's account name from the account list on the page header. If you don't see the account list, click the Account Switcher to open it.
To switch regions:
1. Select the region from the region list on the page header. If you don't see the region list, click the Region Switcher to open it.
Logging in to watsonx through IBM Cloud App ID (beta)
IBM Cloud App ID integrates user authentication on IBM Cloud with user registries that are hosted on other identity providers. If App ID is configured for your IBM Cloud account, your administrator provides an alias to log in to watsonx. With App ID, you do not need to sign in to IBM Cloud. Instead, you log in to watsonx with the App ID alias.
You cannot switch accounts when you log in through App ID.
To log in with App ID:
1. Go to watsonx and choose to log in with App ID (Beta).
2. Enter the alias that was [provided to you by your administrator](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-appid.html). You are redirected to your company's login page.
3. Enter your company credentials on your company's login page. You are redirected back to watsonx.
Select the Remember App ID checkbox to save the App ID alias for future logins.
Next steps
* Go back to [Get started](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-wdp.html) and choose the right path for you.
* Add services from the services catalog. See [Creating and managing services](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/create-services.html).
Learn more
* [Get help](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-help.html)
* [Browser support](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/browser-support.html)
* [Setting up IBM Cloud App ID (beta)](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-appid.html)
Parent topic:[Getting started](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-wdp.html)
Video library
Watch short videos for data scientists, data engineers, and data stewards to learn about watsonx. The videos and accompanying tutorials are task-focused and provide hands-on experience by using the tools in watsonx.
Note: These videos provide a visual way to learn the concepts and tasks in this documentation. If you are having difficulty viewing any of the videos on this page, visit the [Video playlists](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/videos-wx-docs.html) page.
First watch the IBM watsonx.ai overview video.
 Select any video from the lists below to watch here.
Quick start
IBM watsonx.ai overview
* Classify text
* Summarize large, complex documents
* Generate content
* Extract text from complex documents
Get started
* Create a project
* Collaborate in projects
* Tour the samples collection
* Load and analyze public data sets
Work with data
* Prepare data with Data Refinery
* Generate synthetic tabular data
* Analyze data in a Jupyter notebook
IBM watsonx.governance
* Track a model in an AI use case
* Evaluate a prompt template
* Track a prompt template
Work with foundation models
* Prompt a foundation model using Prompt Lab
* Prompt tips: Get started prompting foundation models
* Introduction to the retrieval-augmented generation pattern
* Tune a foundation model
Build models
* Build and deploy a model with AutoAI
* Build and deploy a model in a Jupyter notebook
* Build and deploy a model with SPSS Modeler
* Build and deploy a Decision Optimization model
* Create a pipeline to automate the lifecycle for a model
Visualizations of assets
In your project, you can create visualizations of data assets to further explore and discover insights. To create and view visualizations, open a data asset and go to the Visualization tab.
Requirements and restrictions
You can view the visualization of assets under the following circumstances.
* Required permissions
To view this page, you can have any role in a project. To edit or update information on this page, you must have the Editor or Admin role.
* Workspaces
You can view the asset visualization in projects.
* Types of assets
These types of assets create a visualization:
* Data asset from file: Avro, CSV, JSON, Parquet, TSV, SAV, Microsoft Excel .xls and .xlsx files, SAS, delimited text files
* Connected data assets
* Collaboration
Visualization assets created by a user can be viewed or edited by other collaborators of the same project, depending on the assigned permissions.
Learn more
* [Visualizing your data](https://dataplatform.cloud.ibm.com/docs/content/dataview/idh_idc_cg_help_main.html)
Parent topic:[Asset types and properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html)
IBM watsonx APIs
You can perform many of the tasks for watsonx with APIs.
APIs for managing assets
You can use a collection of REST APIs to manage data-related assets and the people who need to use these assets. See [Watson Data API](http://ibm.biz/wdp-api).
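For example, the following Python sketch authenticates with an IBM Cloud API key and lists the projects that the identity can access. The API key is a placeholder, the URL is the Dallas region endpoint, and the response fields used here follow the Watson Data API reference; check the linked docs for the current schema.

```python
import requests

API_KEY = "YOUR_IBM_CLOUD_API_KEY"  # placeholder: replace with your API key

# Exchange the API key for a short-lived IAM bearer token.
token_resp = requests.post(
    "https://iam.cloud.ibm.com/identity/token",
    data={"grant_type": "urn:ibm:params:oauth:grant-type:apikey", "apikey": API_KEY},
)
token_resp.raise_for_status()
token = token_resp.json()["access_token"]

# List the projects this identity can access (Dallas region endpoint).
resp = requests.get(
    "https://api.dataplatform.cloud.ibm.com/v2/projects",
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()
for project in resp.json().get("resources", []):
    print(project["metadata"]["guid"], project["entity"]["name"])
```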
Connections in the Watson Data API
Use the Watson Data API to create a connection in a catalog or project. See [Connections in the Watson Data API](https://cloud.ibm.com/apidocs/watson-data-api#connections).
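As a hedged sketch, the request below creates a connection asset in a project. The payload fields illustrate a typical relational database connection and are placeholder values; the set of properties varies by data source, valid datasource type IDs can be listed with GET /v2/datasource_types, and you should check the linked API reference for the exact schema.

```python
import requests

API_KEY = "YOUR_IBM_CLOUD_API_KEY"  # placeholder
PROJECT_ID = "YOUR_PROJECT_ID"      # placeholder: target project for the connection

# Exchange the API key for a short-lived IAM bearer token.
token_resp = requests.post(
    "https://iam.cloud.ibm.com/identity/token",
    data={"grant_type": "urn:ibm:params:oauth:grant-type:apikey", "apikey": API_KEY},
)
token_resp.raise_for_status()
token = token_resp.json()["access_token"]

# Example payload for a relational database connection; the property names
# are placeholders and vary by data source type.
payload = {
    "name": "my-database-connection",
    "origin_country": "us",
    "datasource_type": "DATASOURCE_TYPE_ID",  # placeholder: see GET /v2/datasource_types
    "properties": {
        "host": "db.example.com",
        "port": "5432",
        "database": "sales",
        "username": "dbuser",
        "password": "dbpassword",
    },
}

resp = requests.post(
    "https://api.dataplatform.cloud.ibm.com/v2/connections",
    params={"project_id": PROJECT_ID},
    json=payload,
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()
print("Created connection asset:", resp.json()["metadata"]["asset_id"])
```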
Python library for foundation models
For the full library reference, see [Foundation models Python library](https://ibm.github.io/watson-machine-learning-sdk/foundation_models.html).
For examples of how to use the foundation models Python library, see [Foundation models Python library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-python-lib.html).
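As a quick orientation before you open the full reference, here is a minimal sketch of inferencing a foundation model with the library. The credentials, project ID, and prompt are placeholders that you must supply.

```python
from ibm_watson_machine_learning.foundation_models import Model
from ibm_watson_machine_learning.metanames import GenTextParamsMetaNames as GenParams

model = Model(
    model_id="google/flan-ul2",
    params={
        GenParams.DECODING_METHOD: "greedy",
        GenParams.MAX_NEW_TOKENS: 100,
    },
    credentials={
        "url": "https://us-south.ml.cloud.ibm.com",
        "apikey": "<your-ibm-cloud-api-key>",  # placeholder
    },
    project_id="<your-project-id>",            # placeholder
)

# Send a prompt and print the generated text.
print(model.generate_text(prompt="Summarize the benefits of prompt tuning:"))
```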
APIs for machine learning
Watson Machine Learning lets you manage spaces, deployments, and assets programmatically by using:
* [REST API](https://cloud.ibm.com/apidocs/machine-learning)
* [Python client library](https://ibm.github.io/watson-machine-learning-sdk/)
For links to sample Jupyter Notebooks that demonstrate how to manage spaces, deployments, and assets programmatically, see [Machine Learning Python client samples and examples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html).
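For example, a minimal sketch of working with the Python client looks like the following; the credentials and space ID are placeholders.

```python
from ibm_watson_machine_learning import APIClient

wml_credentials = {
    "url": "https://us-south.ml.cloud.ibm.com",
    "apikey": "<your-ibm-cloud-api-key>",  # placeholder
}
client = APIClient(wml_credentials)

# List your deployment spaces, then scope the client to one of them.
client.spaces.list()
client.set.default_space("<space-id>")     # placeholder space GUID

# With a default space set, you can inspect the deployments it contains.
client.deployments.list()
```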
APIs for factsheets
AI Factsheets lets you manage settings, model entries, and report templates programmatically by using:
* [REST API](https://cloud.ibm.com/apidocs/factsheets)
* [Python client library](https://s3.us.cloud-object-storage.appdomain.cloud/factsheets-client/index.html#factsheet-asset-elements)
Learn more
* [Watson Data API](https://cloud.ibm.com/apidocs/watson-data-api)
* [Watson Machine Learning API docs](https://cloud.ibm.com/apidocs/machine-learning)
* [AI Factsheets API docs](https://cloud.ibm.com/apidocs/factsheets)
| # IBM watsonx APIs #
You can perform many of the tasks for watsonx with APIs\.
## APIs for managing assets ##
You can use a collection of REST APIs to manage data\-related assets and the people who need to use these assets\. See [Watson Data API](http://ibm.biz/wdp-api)\.
## Connections in the Watson Data API ##
Use the Watson Data API to create a connection in a catalog or project\. See [Connections in the Watson Data API](https://cloud.ibm.com/apidocs/watson-data-api#connections)\.
## Python library for foundation models ##
For the full library reference, see [Foundation models Python library](https://ibm.github.io/watson-machine-learning-sdk/foundation_models.html)\.
For examples of how to use the foundation models Python library, see [Foundation models Python library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-python-lib.html)\.
## APIs for machine learning ##
Watson Machine Learning lets you manage spaces, deployments, and assets programmatically by using:
<!-- <ul> -->
* [REST API](https://cloud.ibm.com/apidocs/machine-learning)
* [Python client library](https://ibm.github.io/watson-machine-learning-sdk/)
<!-- </ul> -->
For links to sample Jupyter Notebooks that demonstrate how to manage spaces, deployments, and assets programmatically, see [Machine Learning Python client samples and examples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html)\.
## APIs for factsheets ##
AI Factsheets lets you manage settings, model entries, and report templates programmatically by using:
<!-- <ul> -->
* [REST API](https://cloud.ibm.com/apidocs/factsheets)
* [Python client library](https://s3.us.cloud-object-storage.appdomain.cloud/factsheets-client/index.html#factsheet-asset-elements)
<!-- </ul> -->
## Learn more ##
<!-- <ul> -->
* [Watson Data API](https://cloud.ibm.com/apidocs/watson-data-api)
* [Watson Machine Learning API docs](https://cloud.ibm.com/apidocs/machine-learning)
* [AI Factsheets API docs](https://cloud.ibm.com/apidocs/factsheets)
<!-- </ul> -->
<!-- </article "role="article" "> -->
|
5BC1631D896899D03E7D8DD2296C21656DD169FF | https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/whats-new.html?context=cdpaas&locale=en | What's new | What's new
Check back each week to learn about new features and updates for IBM watsonx.ai.
Tip: Occasionally, you must take a specific action after an update. To see all required actions, search this page for “Action required”.
Week ending 15 December 2023
Create user API keys for jobs and other operations
15 Dec 2023
Certain runtime operations in IBM watsonx, such as jobs and model training, require an API key as a credential for secure authorization. With user API keys, you can now generate and rotate an API key directly in IBM watsonx as needed to help ensure your operations run smoothly. The API keys are managed in IBM Cloud, but you can conveniently create and rotate them in IBM watsonx.
The user API key is account-specific and is created from Profile and settings under your account profile.
For more information, see [Managing the user API key](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-apikeys.html).
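Behind the scenes, an API key is typically exchanged for an IBM Cloud IAM bearer token that authorizes the runtime operation. The following sketch shows that standard IAM exchange in Python; the API key value is a placeholder.

```python
import requests

# Exchange an IBM Cloud API key for an IAM access token.
resp = requests.post(
    "https://iam.cloud.ibm.com/identity/token",
    headers={"Content-Type": "application/x-www-form-urlencoded"},
    data={
        "grant_type": "urn:ibm:params:oauth:grant-type:apikey",
        "apikey": "<your-user-api-key>",  # placeholder
    },
)
resp.raise_for_status()
access_token = resp.json()["access_token"]   # use as a Bearer token
```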
New watsonx tutorials and videos
15 Dec 2023
Try the new watsonx.governance and watsonx.ai tutorials to help you learn how to tune a foundation model, and evaluate and track a prompt template.
New tutorials
Tutorial Description Expertise for tutorial
[Tune a foundation model](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html) Tune a foundation model to enhance model performance. Use the Tuning Studio to tune a model without coding. <br><br>Intermediate<br><br>No code
[Evaluate and track a prompt template](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html) Evaluate a prompt template to measure the performance of a foundation model and track the prompt template through its lifecycle. Use the evaluation tool and an AI use case to track the prompt template. <br><br>Beginner<br><br>No code
Find more watsonx.governance and watsonx.ai videos in the [Video library](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/videos-wx.html).
New login session expiration and sign out due to inactivity
15 Dec 2023
You are now signed out of IBM Cloud due to session expiration. Your session can expire due to login session expiration (24 hours by default) or inactivity (2 hours by default). You can change the default durations in the Access (IAM) settings in IBM Cloud. For more information, see [Set the login session expiration](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.html#set-expiration).
IBM Cloud Databases for DataStax connector is deprecated
15 Dec 2023
The IBM Cloud Databases for DataStax connector is deprecated and will be discontinued in a future release.
Week ending 08 December 2023
The Tuning Studio is available
7 Dec 2023
The Tuning Studio helps you to guide a foundation model to return useful output. With the Tuning Studio, you can prompt tune the flan-t5-xl-3b foundation model to improve its performance on natural language processing tasks such as classification, summarization, and generation. Prompt tuning helps smaller, more computationally-efficient foundation models achieve results comparable to larger models in the same model family. By tuning and deploying a tuned version of a smaller model, you can reduce long-term inference costs. The Tuning Studio is available to users of paid plans in the Dallas region.
* For more information, see [Tuning Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-studio.html).
* To get started, see [Quick start: Tune a foundation model](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html).
* To run a sample notebook, go to [Tune a model to classify CFPB documents in watsonx](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/bf57e8896f3e50c638b5a378780f7502).
New client properties in Db2 connections for workload management
08 Dec 2023
You can now specify properties in the following fields for monitoring purposes: Application name, Client accounting information, Client hostname, and Client user. These fields are optional and are available for the following connections:
* [IBM Db2](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2.html)
* [IBM Db2 for z/OS](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2zos.html)
* [IBM Db2 Warehouse](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2-wh.html)
* [IBM Watson Query](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-data-virtual.html)
Week ending 1 December 2023
Watsonx.governance is available!
1 Dec 2023
Watsonx.governance extends the governance capabilities of Watson OpenScale to evaluate foundation model assets as well as machine learning assets. For example, evaluate foundation model prompt templates for dimensions such as accuracy or to detect the presence of hateful and abusive speech. You can also define AI use cases to address business problems, then track prompt templates or model data in factsheets to support compliance and governance goals. Watsonx.governance plans and features are available only in the Dallas region.
* To view plan details, see [watsonx.governance](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-plan-options.html) plans.
* For details on governance features, see [watsonx.governance](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-overview.html).
* To get started, see [Provisioning and launching watsonx.governance](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-provision-launch.html).
Explore with the AI risk atlas
1 Dec 2023
You can now explore some of the risks of working with generative AI, foundation models, and machine learning models. Read about risks for privacy, fairness, explainability, value alignment, and other areas. See [AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html).
New versions of the IBM Granite models are available
30 Nov 2023
The latest versions of the Granite models include these changes:
granite-13b-chat-v2: Tuned to be better at question-answering, summarization, and generative tasks. With sufficient context, generates responses with the following improvements over the previous version:
* Generates longer, higher-quality responses with a professional tone
* Supports chain-of-thought responses
* Recognizes mentions of people and can detect tone and sentiment better
* Handles white spaces in input more gracefully
Due to extensive changes, test and revise any prompts that were engineered for v1 before you switch to the latest version.
granite-13b-instruct-v2: Tuned specifically for classification, extraction, and summarization tasks. The latest version differs from the previous version in the following ways:
* Returns more coherent answers of varied lengths and with a diverse vocabulary
* Recognizes mentions of people and can summarize longer inputs
* Handles white spaces in input more gracefully
Engineered prompts that work well with v1 are likely to work well with v2 also, but be sure to test before you switch models.
The latest versions of the Granite models are categorized as Class 2 models.
Some foundation models are now available at lower cost
30 Nov 2023
Some popular foundation models were recategorized into lower-cost billing classes.
The following foundation models changed from Class 3 to Class 2:
* granite-13b-chat-v1
* granite-13b-instruct-v1
* llama-2-70b
The following foundation model changed from Class 2 to Class 1:
* llama-2-13b
For more information about the billing classes, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html).
A new sample notebook is available: Introduction to RAG with Discovery
30 Nov 2023
Use the Introduction to RAG with Discovery notebook to learn how to apply the retrieval-augmented generation pattern in IBM watsonx.ai with IBM Watson Discovery as the search component. For more information, see [Introduction to RAG with Discovery](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/ba4a9e35-2091-49d3-9364-a1284afab7ec).
Understand feature differences between watsonx as a service and software deployments
30 Nov 2023
You can now compare the features and implementation of IBM watsonx as a Service and watsonx on Cloud Pak for Data software, version 4.8. See [Feature differences between watsonx deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/feature-matrix.html).
Change to how stop sequences are handled
30 Nov 2023
When a stop sequence, such as a newline character, is specified in the Prompt Lab, the model output text ends after the first occurrence of the stop sequence. The model output stops even if the occurrence comes at the beginning of the output. Previously, the stop sequence was ignored if it was specified at the start of the model output.
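In the Python library, the equivalent behavior applies when you pass stop sequences as generation parameters. A minimal sketch, with placeholder credentials and project ID:

```python
from ibm_watson_machine_learning.foundation_models import Model
from ibm_watson_machine_learning.metanames import GenTextParamsMetaNames as GenParams

params = {
    GenParams.DECODING_METHOD: "greedy",
    GenParams.MAX_NEW_TOKENS: 200,
    # Generation now stops at the first newline, even if the model
    # emits one at the very start of its output.
    GenParams.STOP_SEQUENCES: ["\n"],
}

model = Model(
    model_id="google/flan-t5-xxl",
    params=params,
    credentials={
        "url": "https://us-south.ml.cloud.ibm.com",
        "apikey": "<your-ibm-cloud-api-key>",  # placeholder
    },
    project_id="<your-project-id>",            # placeholder
)

print(model.generate_text(prompt="List one fact about Mars:"))
```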
Week ending 10 November 2023
A smaller version of the Llama-2 Chat model is available
9 Nov 2023
You can now choose between using the 13b or 70b versions of the Llama-2 Chat model. Consider these factors when you make your choice:
* Cost
* Performance
The 13b version is a Class 2 model, which means it is cheaper to use than the 70b version. To compare benchmarks and other factors, such as carbon emissions for each model size, see the [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/meta-llama/llama-2-13b-chat?context=wx).
Use prompt variables to build reusable prompts
Add flexibility to your prompts with prompt variables. Prompt variables function as placeholders in the static text of your prompt input that you can replace with text dynamically at inference time. You can save prompt variable names and default values in a prompt template asset to reuse yourself or share with collaborators in your project. For more information, see [Building reusable prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-variables.html).
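Conceptually, a prompt variable behaves like a named placeholder that is filled in at inference time. The plain-Python analogue below illustrates the idea only; it is not the watsonx.ai API:

```python
from string import Template

# A reusable prompt with one variable, $review.
prompt_template = Template(
    "Classify the sentiment of this review as Positive or Negative.\n"
    "Review: $review\n"
    "Sentiment:"
)

# Each inference call substitutes a fresh value for the variable.
prompt = prompt_template.substitute(review="The battery life is excellent.")
print(prompt)
```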
Announcing support for Python 3.10 and R4.2 frameworks and software specifications on runtime 23.1
9 Nov 2023
Action required
You can now use IBM Runtime 23.1, which includes the latest data science frameworks based on Python 3.10 and R 4.2, to run Watson Studio Jupyter notebooks and R scripts, train models, and run Watson Machine Learning deployments. Update your assets and deployments to use IBM Runtime 23.1 frameworks and software specifications.
* For information on the IBM Runtime 23.1 release and the included environments for Python 3.10 and R 4.2, see [Changing notebook environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html#change-env).
* For details on deployment frameworks, see [Managing frameworks and software specifications](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-frame-and-specs.html).
Use Apache Spark 3.4 to run notebooks and scripts
Spark 3.4 with Python 3.10 and R 4.2 is now supported as a runtime for notebooks and RStudio scripts in projects. For details on available notebook environments, see [Compute resource options for the notebook editor in projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html) and [Compute resource options for RStudio in projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-envs.html).
Week ending 27 October 2023
Use a Satellite Connector to connect to an on-prem database
26 Oct 2023
Use the new Satellite Connector to connect to a database that is not accessible via the internet (for example, behind a firewall). Satellite Connector uses a lightweight Docker-based communication that creates secure and auditable communications from your on-prem environment back to IBM Cloud. For instructions, see [Connecting to data behind a firewall](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html).
Secure Gateway is deprecated
26 Oct 2023
IBM Cloud announced the deprecation of Secure Gateway. For information, see the [Overview and timeline](https://cloud.ibm.com/docs/SecureGateway?topic=SecureGateway-dep-overview).
Action required
If you currently have connections that are set up with Secure Gateway, plan to use an alternative communication method. In IBM watsonx, you can use the Satellite Connector as a replacement for Secure Gateway. See [Connecting to data behind a firewall](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html).
Week ending 20 October 2023
Maximum token sizes increased
16 Oct 2023
Limits that were previously applied to the maximum number of tokens allowed in the output from foundation models are removed from paid plans. You can use larger maximum token values during prompt engineering from both the Prompt Lab and the Python library. The exact number of tokens allowed differs by model. For more information about token limits for paid and Lite plans, see [Supported foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html).
Week ending 13 October 2023
New notebooks in Samples
12 Oct 2023
Two new notebooks are available that use a vector database from Elasticsearch in the retrieval phase of the retrieval-augmented generation pattern. The notebooks demonstrate how to find matches based on the semantic similarity between the indexed documents and the query text that is submitted from a user.
* [Sample notebook: Use watsonx, Elasticsearch, and LangChain to answer questions (RAG)](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/ebeb9fc0-9844-4838-aff8-1fa1997d0c13?context=wx&audience=wdp)
* [Sample notebook: Use watsonx, and Elasticsearch Python SDK to answer questions (RAG)](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/bdbc8ad4-9c1f-460f-99ee-5c3a1f374fa7?context=wx&audience=wdp)
Intermediate solutions in Decision Optimization
12 Oct 2023
You can now choose to see a sample of intermediate solutions while a Decision Optimization experiment is running. This can be useful for debugging or to see how the solver is progressing. For large models that take longer to solve, intermediate solutions let you quickly and easily identify any potential problems with the solve, without having to wait for the solve to complete. You can configure the Intermediate solution delivery parameter in the Run configuration and select a frequency for these solutions. For more information, see [Run models](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html#ModelBuilderInterface__runmodel) and [Run configuration parameters](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_RunParameters/runparams.html#RunConfig__section_runconfig).
New Decision Optimization saved model dialog
When you save a model for deployment from the Decision Optimization user interface, you can now review the input and output schema, and more easily select the tables that you want to include. You can also add, modify, or delete run configuration parameters, and review the environment and the model files that are used. All these items are displayed in the same Save as model for deployment dialog. For more information, see [Deploying a Decision Optimization model by using the user interface](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployModelUI-WML.html).
Week ending 6 October 2023
Additional foundation models in Frankfurt
5 Oct 2023
All foundation models that are available in the Dallas data center are now also available in the Frankfurt data center. The watsonx.ai Prompt Lab and foundation model inferencing are now supported in the Frankfurt region for these models:
* granite-13b-chat-v1
* granite-13b-instruct-v1
* llama-2-70b-chat
* gpt-neox-20b
* mt0-xxl-13b
* starcoder-15.5b
For more information on these models, see [Supported foundation models available with watsonx.ai](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html).
For pricing details, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html).
Control the placement of a new column in the Concatenate operation (Data Refinery)
6 Oct 2023
You now have two options to specify the position of the new column that results from the Concatenate operation: As the right-most column in the data set or next to the original column.

Previously, the new column was placed at the beginning of the data set.
Important:
Action required
Edit the Concatenate operation in any of your existing Data Refinery flows to specify the new column position. Otherwise, the flow might fail.
For information about Data Refinery operations, see [GUI operations in Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/gui_operations.html).
Week ending 29 September 2023
IBM Granite foundation models for natural language generation
28 Sept 2023
The first two models from the Granite family of IBM foundation models are now available in the Dallas region:
* granite-13b-chat-v1: General use model that is optimized for dialogue use cases
* granite-13b-instruct-v1: General use model that is optimized for question answering
Both models are 13B-parameter decoder models that can efficiently predict and generate language in English. They, like all models in the Granite family, are designed for business. Granite models are pretrained on multiple terabytes of data from both general-language sources, such as the public internet, and industry-specific data sources from the academic, scientific, legal, and financial fields.
Try them out today in the Prompt Lab or run a [sample notebook](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/61c1e967-8d10-44bb-a846-cc1f27e9e69a) that uses the granite-13b-instruct-v1 model for sentiment analysis.
Read the [Building AI for business: IBM’s Granite foundation models](https://www.ibm.com/blog/building-ai-for-business-ibms-granite-foundation-models/) blog post to learn more.
* For more information on these models, see [Supported foundation models available with watsonx.ai](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html).
* For a description of sample prompts, see [Sample foundation model prompts for common tasks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html).
* For pricing details, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html).
Week ending 22 September 2023
Decision Optimization Java models
20 Sept 2023
Decision Optimization Java models can now be deployed in Watson Machine Learning. By using the Java worker API, you can create optimization models with OPL, CPLEX, and CP Optimizer Java APIs. You can now easily create your models locally, package them and deploy them on Watson Machine Learning by using the boilerplate that is provided in the public [Java worker GitHub](https://github.com/IBMDecisionOptimization/cplex-java-worker/blob/master/README.md). For more information, see [Deploying Java models for Decision Optimization](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployJava.html).
New notebooks in Samples
21 Sept 2023
You can use the following new notebooks in Samples:
* [Use watsonx and LangChain to answer questions using RAG](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/d3a5f957-a93b-46cd-82c1-c8d37d4f62c6)
* [Use watsonx and BigCode starcoder-15.5b to generate code](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/b5792ad4-555b-4b68-8b6f-ce368093fac6)
Week ending 15 September 2023
Prompt engineering and synthetic data quick start tutorials
14 Sept 2023
Try the new tutorials to help you learn how to:
* Prompt foundation models: There are usually multiple ways to prompt a foundation model for a successful result. In the Prompt Lab, you can experiment with prompting different foundation models, explore sample prompts, as well as save and share your best prompts. One way to improve the accuracy of generated output is to provide the needed facts as context in your prompt text using the retrieval-augmented generation pattern.
* Generate synthetic data: You can generate synthetic tabular data in watsonx.ai. The benefit of synthetic data is that you can procure the data on-demand, customize it to fit your use case, and produce it in large quantities.
Tutorial Description Expertise for tutorial
[Prompt a foundation model using Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html) Experiment with prompting different foundation models, explore sample prompts, and save and share your best prompts. Prompt a model using Prompt Lab without coding. <br><br>Beginner<br><br>No code
[Prompt a foundation model with the retrieval-augmented generation pattern](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html) Prompt a foundation model by leveraging information in a knowledge base. Use the retrieval-augmented generation pattern in a Jupyter notebook that uses Python code. <br><br>Intermediate<br><br>All code
[Generate synthetic tabular data](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-generate-data.html) Generate synthetic tabular data using a graphical flow editor. Select operations to generate data. <br><br>Beginner<br><br>No code
Watsonx.ai Community
14 Sept 2023
You can now join the [watsonx.ai Community](https://community.ibm.com/community/user/watsonx/communities/community-home?communitykey=81927b7e-9a92-4236-a0e0-018a27c4ad6e) for AI architects and builders to learn, share ideas, and connect with others.
Week ending 8 September 2023
Generate synthetic tabular data with Synthetic Data Generator
7 Sept 2023
Now available in the Dallas and Frankfurt regions, Synthetic Data Generator is a new graphical editor tool on watsonx.ai that you can use to generate tabular data to use for training models. Using visual flows and a statistical model, you can create synthetic data based on your existing data or a custom data schema. You can choose to mask your original data and export your synthetic data to a database or as a file.
To get started, see [Synthetic data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/synthetic_data_overview_sd.html).
Llama-2 Foundation Model for natural language generation and chat
7 Sept 2023
The Llama-2 Foundation Model from Meta is now available in the Dallas region. Llama-2 Chat model is an auto-regressive language model that uses an optimized transformer architecture. The model is pretrained with publicly available online data, and then fine-tuned using reinforcement learning from human feedback. The model is intended for commercial and research use in English-language assistant-like chat scenarios.
* For more information on the Llama-2 model, see [Supported foundation models available with watsonx.ai](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html).
* For a description of sample prompts, see [Sample foundation model prompts for common tasks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html).
* For pricing details for Llama-2, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html).
LangChain extension for the foundation models Python library
7 Sept 2023
You can now use the LangChain framework with foundation models in watsonx.ai through the new LangChain extension for the foundation models Python library.
This sample notebook demonstrates how to use the new extension: [Sample notebook](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/c3dbf23a-9a56-4c4b-8ce5-5707828fc981?context=wx)
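A minimal sketch of the extension follows; the credentials and project ID are placeholders, so treat this as an orientation rather than a definitive implementation.

```python
from ibm_watson_machine_learning.foundation_models import Model
from ibm_watson_machine_learning.foundation_models.extensions.langchain import WatsonxLLM

model = Model(
    model_id="google/flan-ul2",
    credentials={
        "url": "https://us-south.ml.cloud.ibm.com",
        "apikey": "<your-ibm-cloud-api-key>",  # placeholder
    },
    project_id="<your-project-id>",            # placeholder
)

# Wrap the watsonx.ai model so it can be used wherever LangChain expects an LLM.
llm = WatsonxLLM(model=model)
print(llm("Translate 'good morning' to French."))
```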
Introductory sample for the retrieval-augmented generation pattern
7 Sept 2023
Retrieval-augmented generation is a simple, powerful technique for leveraging a knowledge base to get factually accurate output from foundation models.
See: [Introduction to retrieval-augmented generation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-rag.html)
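In outline, the pattern has three steps: retrieve relevant passages, ground the prompt in them, and generate. The sketch below is conceptual; `retriever` stands in for any search component (Elasticsearch, Watson Discovery, a vector store), and its `search` method is a hypothetical API.

```python
def answer_with_rag(question, retriever, model, k=3):
    # 1. Retrieve the k passages most relevant to the question.
    passages = retriever.search(question, top_k=k)  # hypothetical retriever API
    context = "\n".join(passages)

    # 2. Ground the prompt in the retrieved facts.
    prompt = (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )

    # 3. Generate a grounded answer with a foundation model
    #    (for example, a watsonx.ai Model as in the earlier sketches).
    return model.generate_text(prompt=prompt)
```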
Week ending 1 September 2023
Removal of comments from notebooks
31 Aug 2023
As of today, it is not possible to add comments to a notebook from the notebook action bar. Any existing comments were removed.

StarCoder Foundation Model for code generation and code translation
31 Aug 2023
The StarCoder model from Hugging Face is now available in the Dallas region. Use StarCoder to create prompts for generating code or for transforming code from one programming language to another. One sample prompt demonstrates how to use StarCoder to generate Python code from a set of instructions. A second sample prompt demonstrates how to use StarCoder to transform code written in C++ to Python code.
* For more information on the StarCoder model, see [Supported foundation models available with watsonx.ai](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html).
* For a description of the sample prompts, see [Sample foundation model prompts for common tasks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html).
IBM watsonx.ai is available in the Frankfurt region
31 Aug 2023
Watsonx.ai is now generally available in the Frankfurt data center and can be selected as the preferred region when you sign up. The Prompt Lab and foundation model inferencing are supported in the Frankfurt region for these models:
* mpt-7b-instruct2
* flan-t5-xxl-11b
* flan-ul2-20b
* For more information on the supported models, see [Supported foundation models available with watsonx.ai](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html).
Week ending 25 August 2023
Additional cache enhancements available for Watson Pipelines
21 August 2023
More options are available for customizing your pipeline flow settings. You can now exercise greater control over when the cache is used for pipeline runs. For details, see [Managing default settings](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-global-settings.html).
Week ending 18 August 2023
Plan name updates for Watson Machine Learning service
18 August 2023
Starting immediately, plan names are updated for the IBM Watson Machine Learning service, as follows:
* The v2 Standard plan is now the Essentials plan. The plan is designed to give your organization the resources required to get started working with foundation models and machine learning assets.
* The v2 Professional plan is now the Standard plan. This plan provides resources designed to support most organizations through asset creation to productive use.
Changes to the plan names do not change your terms of service. That is, if you are registered to use the v2 Standard plan, it will now be named Essentials, but all of the plan details will remain the same. Similarly, if you are registered to use the v2 Professional plan, there are no changes other than the plan name change to Standard.
For details on what is included with each plan, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html). For pricing information, find your plan on the [Watson Machine Learning plan page](https://cloud.ibm.com/catalog/services/watson-machine-learning) in the IBM Cloud catalog.
Week ending 11 August 2023
Deprecation of comments in notebooks
7 August 2023
On 31 August 2023, you will no longer be able to add comments to a notebook from the notebook action bar. Any existing comments that were added that way will be removed.

Week ending 4 August 2023
Increased token limit for Lite plan
4 August 2023
If you are using the Lite plan to test foundation models, the token limit for prompt input and output is now increased from 25,000 to 50,000 per account per month. This gives you more flexibility for exploring foundation models and experimenting with prompts.
* For details on watsonx.ai plans, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html).
* For details on working with prompts, see [Engineer prompts with the Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html).
Custom text analytics template (SPSS Modeler)
4 August 2023
For SPSS Modeler, you can now upload a custom text analytics template to a project. This provides you with more flexibility to capture and extract key concepts in a way that is unique to your context.
Week ending 28 July 2023
Foundation models Python library available
27 July 2023
You can now prompt foundation models in watsonx.ai programmatically using a Python library.
See: [Foundation models Python library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-python-lib.html)
Week ending 14 July 2023
Control AI guardrails
14 July 2023
You can now control whether AI guardrails are on or off in the Prompt Lab. AI guardrails remove potentially harmful text from both the input and output fields. Harmful text can include hate speech, abuse, and profanity. To prevent the removal of potentially harmful text, set the AI guardrails switch to off. See [Hate speech, abuse, and profanity](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-hallucinations.html#hap).

Microsoft Azure SQL Database connection supports Azure Active Directory authentication (Azure AD)
14 July 2023
You can now select Active Directory for the Microsoft Azure SQL Database connection. Active Directory authentication is an alternative to SQL Server authentication. With this enhancement, administrators can centrally manage user permissions to Azure. For more information, see [Microsoft Azure SQL Database connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azure-sql.html).
Week ending 7 July 2023
Welcome to IBM watsonx.ai!
7 July 2023
IBM watsonx.ai delivers all the tools that you need to work with machine learning and foundation models.
Get started:
* [Learn about watsonx.ai](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/overview-wx.html)
* [Learn about foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-overview.html)
* [Engineer prompts with the Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html)
* [Take quick start tutorials](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html)
* [Watson Natural Language Processing](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp.html)
Try generative AI search and answer in this documentation
7 July 2023
You can see generative AI in action by trying the new generative AI search and answer option in the watsonx.ai documentation. The answers are generated by a large language model running in watsonx.ai and based on the documentation content. This feature is only available when you are viewing the documentation while logged in to watsonx.ai.
Enter a question in the documentation search field and click the Try generative AI search and answer icon (). The Generative AI search and answer pane opens and answers your question.

| # What's new #
Check back each week to learn about new features and updates for IBM watsonx\.ai\.
Tip: Occasionally, you must take a specific action after an update\. To see all required actions, search this page for “Action required”\.
## Week ending 15 December 2023 ##
### Create user API keys for jobs and other operations ###
15 Dec 2023
Certain runtime operations in IBM watsonx, such as jobs and model training, require an API key as a credential for secure authorization\. With user API keys, you can now generate and rotate an API key directly in IBM watsonx as needed to help ensure your operations run smoothly\. The API keys are managed in IBM Cloud, but you can conveniently create and rotate them in IBM watsonx\.
The user API key is account\-specific and is created from **Profile and settings** under your account profile\.
For more information, see [Managing the user API key](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-apikeys.html)\.
### New watsonx tutorials and videos ###
15 Dec 2023
Try the new watsonx\.governance and watsonx\.ai tutorials to help you learn how to tune a foundation model, and evaluate and track a prompt template\.
<!-- <table> -->
New tutorials
| Tutorial | Description | Expertise for tutorial |
| ---------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------- |
| [Tune a foundation model](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html) | Tune a foundation model to enhance model performance\. | Use the Tuning Studio to tune a model without coding\. <br><br>Intermediate<br><br>No code |
| [Evaluate and track a prompt template](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html) | Evaluate a prompt template to measure the performance of a foundation model and track the prompt template through its lifecycle\. | Use the evaluation tool and an AI use case to track the prompt template\. <br><br>Beginner<br><br>No code |
<!-- </table ""> -->
Find more watsonx\.governance and watsonx\.ai videos in the [Video library](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/videos-wx.html)\.
### New login session expiration and sign out due to inactivity ###
15 Dec 2023
You are now signed out of IBM Cloud due to session expiration\. Your session can expire due to login session expiration (24 hours by default) or inactivity (2 hours by default)\. You can change the default durations in the Access (IAM) settings in IBM Cloud\. For more information, see [Set the login session expiration](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.html#set-expiration)\.
### IBM Cloud Databases for DataStax connector is deprecated ###
15 Dec 2023
The IBM Cloud Databases for DataStax connector is deprecated and will be discontinued in a future release\.
## Week ending 08 December 2023 ##
### The Tuning Studio is available ###
7 Dec 2023
The Tuning Studio helps you to guide a foundation model to return useful output\. With the Tuning Studio, you can prompt tune the flan\-t5\-xl\-3b foundation model to improve its performance on natural language processing tasks such as classification, summarization, and generation\. Prompt tuning helps smaller, more computationally\-efficient foundation models achieve results comparable to larger models in the same model family\. By tuning and deploying a tuned version of a smaller model, you can reduce long\-term inference costs\. The Tuning Studio is available to users of paid plans in the Dallas region\.
<!-- <ul> -->
* For more information, see [Tuning Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-studio.html)\.
* To get started, see [Quick start: Tune a foundation model](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html)\.
* To run a sample notebook, go to [Tune a model to classify CFPB documents in watsonx](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/bf57e8896f3e50c638b5a378780f7502)\.
<!-- </ul> -->
### New client properties in Db2 connections for workload management ###
08 Dec 2023
You can now specify properties in the following fields for monitoring purposes: **Application name**, **Client accounting information**, **Client hostname**, and **Client user**\. These fields are optional and are available for the following connections:
<!-- <ul> -->
* [IBM Db2](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2.html)
* [IBM Db2 for z/OS](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2zos.html)
* [IBM Db2 Warehouse](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2-wh.html)
* [IBM Watson Query](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-data-virtual.html)
<!-- </ul> -->
## Week ending 1 December 2023 ##
### Watsonx\.governance is available\! ###
1 Dec 2023
Watsonx\.governance extends the governance capabilities of Watson OpenScale to evaluate foundation model assets as well as machine learning assets\. For example, evaluate foundation model prompt templates for dimensions such as accuracy or to detect the presence of hateful and abusive speech\. You can also define AI use cases to address business problems, then track prompt templates or model data in factsheets to support compliance and governance goals\. Watsonx\.governance plans and features are available only in the Dallas region\.
<!-- <ul> -->
* To view plan details, see [watsonx\.governance](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-plan-options.html) plans\.
* For details on governance features, see [watsonx\.governance](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-overview.html)\.
* To get started, see [Provisioning and launching watsonx\.governance](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-provision-launch.html)\.
<!-- </ul> -->
### Explore with the AI risk atlas ###
1 Dec 2023
You can now explore some of the risks of working with generative AI, foundation models, and machine learning models\. Read about risks for privacy, fairness, explainability, value alignment, and other areas\. See [AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)\.
### New versions of the IBM Granite models are available ###
30 Nov 2023
The latest versions of the Granite models include these changes:
**granite\-13b\-chat\-v2**: Tuned to be better at question\-answering, summarization, and generative tasks\. With sufficient context, generates responses with the following improvements over the previous version:
<!-- <ul> -->
* Generates longer, higher\-quality responses with a professional tone
* Supports chain\-of\-thought responses
* Recognizes mentions of people and can detect tone and sentiment better
* Handles white spaces in input more gracefully
<!-- </ul> -->
Due to extensive changes, test and revise any prompts that were engineered for v1 before you switch to the latest version\.
**granite\-13b\-instruct\-v2**: Tuned specifically for classification, extraction, and summarization tasks\. The latest version differs from the previous version in the following ways:
<!-- <ul> -->
* Returns more coherent answers of varied lengths and with a diverse vocabulary
* Recognizes mentions of people and can summarize longer inputs
* Handles white spaces in input more gracefully
<!-- </ul> -->
Engineered prompts that work well with v1 are likely to work well with v2 also, but be sure to test before you switch models\.
The latest versions of the Granite models are categorized as Class 2 models\.
### Some foundation models are now available at lower cost ###
30 Nov 2023
Some popular foundation models were recategorized into lower\-cost billing classes\.
The following foundation models changed from Class 3 to Class 2:
<!-- <ul> -->
* granite\-13b\-chat\-v1
* granite\-13b\-instruct\-v1
* llama\-2\-70b
<!-- </ul> -->
The following foundation model changed from Class 2 to Class 1:
<!-- <ul> -->
* llama\-2\-13b
<!-- </ul> -->
For more information about the billing classes, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html)\.
### A new sample notebook is available: Introduction to RAG with Discovery ###
30 Nov 2023
Use the *Introduction to RAG with Discovery* notebook to learn how to apply the retrieval\-augmented generation pattern in IBM watsonx\.ai with IBM Watson Discovery as the search component\. For more information, see [Introduction to RAG with Discovery](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/ba4a9e35-2091-49d3-9364-a1284afab7ec)\.
### Understand feature differences between watsonx as a service and software deployments ###
30 Nov 2023
You can now compare the features and implementation of IBM watsonx as a Service and watsonx on Cloud Pak for Data software, version 4\.8\. See [Feature differences between watsonx deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/feature-matrix.html)\.
### Change to how stop sequences are handled ###
30 Nov 2023
When a stop sequence, such as a newline character, is specified in the Prompt Lab, the model output text ends after the first occurrence of the stop sequence\. The model output stops even if the occurrence comes at the beginning of the output\. Previously, the stop sequence was ignored if it was specified at the start of the model output\.
## Week ending 10 November 2023 ##
### A smaller version of the Llama\-2 Chat model is available ###
9 Nov 2023
You can now choose between using the 13b or 70b versions of the Llama\-2 Chat model\. Consider these factors when you make your choice:
<!-- <ul> -->
* Cost
* Performance
<!-- </ul> -->
The 13b version is a Class 2 model, which means it is cheaper to use than the 70b version\. To compare benchmarks and other factors, such as carbon emissions for each model size, see the [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/meta-llama/llama-2-13b-chat?context=wx)\.
### Use prompt variables to build reusable prompts ###
Add flexibility to your prompts with *prompt variables*\. Prompt variables function as placeholders in the static text of your prompt input that you can replace with text dynamically at inference time\. You can save prompt variable names and default values in a prompt template asset to reuse yourself or share with collaborators in your project\. For more information, see [Building reusable prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-variables.html)\.
### Announcing support for Python 3\.10 and R4\.2 frameworks and software specifications on runtime 23\.1 ###
9 Nov 2023
Action required
You can now use IBM Runtime 23\.1, which includes the latest data science frameworks based on Python 3\.10 and R 4\.2, to run Watson Studio Jupyter notebooks and R scripts, train models, and run Watson Machine Learning deployments\. Update your assets and deployments to use IBM Runtime 23\.1 frameworks and software specifications\.
<!-- <ul> -->
* For information on the IBM Runtime 23\.1 release and the included environments for Python 3\.10 and R 4\.2, see [Changing notebook environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html#change-env)\.
* For details on deployment frameworks, see [Managing frameworks and software specifications](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-frame-and-specs.html)\.
<!-- </ul> -->
### Use Apache Spark 3\.4 to run notebooks and scripts ###
Spark 3\.4 with Python 3\.10 and R 4\.2 is now supported as a runtime for notebooks and RStudio scripts in projects\. For details on available notebook environments, see [Compute resource options for the notebook editor in projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html) and [Compute resource options for RStudio in projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-envs.html)\.
## Week ending 27 October 2023 ##
### Use a Satellite Connector to connect to an on\-prem database ###
26 Oct 2023
Use the new Satellite Connector to connect to a database that is not accessible via the internet (for example, behind a firewall)\. Satellite Connector uses a lightweight Docker\-based communication that creates secure and auditable communications from your on\-prem environment back to IBM Cloud\. For instructions, see [Connecting to data behind a firewall](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html)\.
### Secure Gateway is deprecated ###
26 Oct 2023
IBM Cloud announced the deprecation of Secure Gateway\. For information, see the [Overview and timeline](https://cloud.ibm.com/docs/SecureGateway?topic=SecureGateway-dep-overview)\.
Action required
If you currently have connections that are set up with Secure Gateway, plan to use an alternative communication method\. In IBM watsonx, you can use the Satellite Connector as a replacement for Secure Gateway\. See [Connecting to data behind a firewall](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html)\.
## Week ending 20 October 2023 ##
### Maximum token sizes increased ###
16 Oct 2023
Limits that were previously applied to the maximum number of tokens allowed in the output from foundation models are removed from paid plans\. You can use larger maximum token values during prompt engineering from both the Prompt Lab and the Python library\. The exact number of tokens allowed differs by model\. For more information about token limits for paid and Lite plans, see [Supported foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html)\.
## Week ending 13 October 2023 ##
### New notebooks in Samples ###
12 Oct 2023
Two new notebooks are available that use a vector database from Elasticsearch in the retrieval phase of the retrieval\-augmented generation pattern\. The notebooks demonstrate how to find matches based on the semantic similarity between the indexed documents and the query text that is submitted from a user\.
<!-- <ul> -->
* [Sample notebook: Use watsonx, Elasticsearch, and LangChain to answer questions (RAG)](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/ebeb9fc0-9844-4838-aff8-1fa1997d0c13?context=wx&audience=wdp)
* [Sample notebook: Use watsonx, and Elasticsearch Python SDK to answer questions (RAG)](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/bdbc8ad4-9c1f-460f-99ee-5c3a1f374fa7?context=wx&audience=wdp)
<!-- </ul> -->
### Intermediate solutions in Decision Optimization ###
12 Oct 2023
You can now choose to see a sample of intermediate solutions while a Decision Optimization experiment is running\. This can be useful for debugging or to see how the solver is progressing\. For large models that take longer to solve, intermediate solutions let you quickly and easily identify any potential problems with the solve, without having to wait for the solve to complete\. You can configure the Intermediate solution delivery parameter in the Run configuration and select a frequency for these solutions\. For more information, see [Run models](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html#ModelBuilderInterface__runmodel) and [Run configuration parameters](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_RunParameters/runparams.html#RunConfig__section_runconfig)\.
### New Decision Optimization saved model dialog ###
When you save a model for deployment from the Decision Optimization user interface, you can now review the input and output schema, and more easily select the tables that you want to include\. You can also add, modify, or delete run configuration parameters, and review the environment and the model files that are used\. All these items are displayed in the same **Save as model for deployment** dialog\. For more information, see [Deploying a Decision Optimization model by using the user interface](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployModelUI-WML.html)\.
## Week ending 6 October 2023 ##
### Additional foundation models in Frankfurt ###
5 Oct 2023
All foundation models that are available in the Dallas data center are now also available in the Frankfurt data center\. The watsonx\.ai Prompt Lab and foundation model inferencing are now supported in the Frankfurt region for these models:
<!-- <ul> -->
* granite\-13b\-chat\-v1
* granite\-13b\-instruct\-v1
* llama\-2\-70b\-chat
* gpt\-neox\-20b
* mt0\-xxl\-13b
* starcoder\-15\.5b
<!-- </ul> -->
For more information on these models, see [Supported foundation models available with watsonx\.ai](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html)\.
For pricing details, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html)\.
### Control the placement of a new column in the Concatenate operation (Data Refinery) ###
6 Oct 2023
You now have two options to specify the position of the new column that results from the **Concatenate** operation: As the right\-most column in the data set or next to the original column\.

Previously, the new column was placed at the beginning of the data set\.
Important:
Action required
Edit the **Concatenate** operation in any of your existing Data Refinery flows to specify the new column position\. Otherwise, the flow might fail\.
For information about Data Refinery operations, see [GUI operations in Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/gui_operations.html)\.
## Week ending 29 September 2023 ##
### IBM Granite foundation models for natural language generation ###
28 Sept 2023
The first two models from the Granite family of IBM foundation models are now available in the Dallas region:
<!-- <ul> -->
* **granite\-13b\-chat\-v1**: General use model that is optimized for dialogue use cases
* **granite\-13b\-instruct\-v1**: General use model that is optimized for question answering
<!-- </ul> -->
Both models are 13B\-parameter decoder models that can efficiently predict and generate language in English\. They, like all models in the Granite family, are designed for business\. Granite models are pretrained on multiple terabytes of data from both general\-language sources, such as the public internet, and industry\-specific data sources from the academic, scientific, legal, and financial fields\.
Try them out today in the Prompt Lab or run a [sample notebook](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/61c1e967-8d10-44bb-a846-cc1f27e9e69a) that uses the granite\-13b\-instruct\-v1 model for sentiment analysis\.
Read the [Building AI for business: IBM’s Granite foundation models](https://www.ibm.com/blog/building-ai-for-business-ibms-granite-foundation-models/) blog post to learn more\.
<!-- <ul> -->
* For more information on these models, see [Supported foundation models available with watsonx\.ai](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html)\.
* For a description of sample prompts, see [Sample foundation model prompts for common tasks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html)\.
* For pricing details, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html)\.
<!-- </ul> -->
## Week ending 22 September 2023 ##
### Decision Optimization Java models ###
20 Sept 2023
Decision Optimization Java models can now be deployed in Watson Machine Learning\. By using the Java worker API, you can create optimization models with OPL, CPLEX, and CP Optimizer Java APIs\. You can now easily create your models locally, package them and deploy them on Watson Machine Learning by using the boilerplate that is provided in the public [Java worker GitHub](https://github.com/IBMDecisionOptimization/cplex-java-worker/blob/master/README.md)\. For more information, see [Deploying Java models for Decision Optimization](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployJava.html)\.
### New notebooks in Samples ###
21 Sept 2023
You can use the following new notebooks in Samples:
<!-- <ul> -->
* [Use watsonx and LangChain to answer questions using RAG](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/d3a5f957-a93b-46cd-82c1-c8d37d4f62c6)
* [Use watsonx and BigCode `starcoder-15.5b` to generate code](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/b5792ad4-555b-4b68-8b6f-ce368093fac6)
<!-- </ul> -->
## Week ending 15 September 2023 ##
### Prompt engineering and synthetic data quick start tutorials ###
14 Sept 2023
Try the new tutorials to help you learn how to:
<!-- <ul> -->
* Prompt foundation models: There are usually multiple ways to prompt a foundation model for a successful result\. In the Prompt Lab, you can experiment with prompting different foundation models, explore sample prompts, as well as save and share your best prompts\. One way to improve the accuracy of generated output is to provide the needed facts as context in your prompt text using the retrieval\-augmented generation pattern\.
* Generate synthetic data: You can generate synthetic tabular data in watsonx\.ai\. The benefit of synthetic data is that you can procure the data on\-demand, customize it to fit your use case, and produce it in large quantities\.
<!-- </ul> -->
<!-- <table> -->
| Tutorial | Description | Expertise for tutorial |
| -------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------- |
| [Prompt a foundation model using Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html) | Experiment with prompting different foundation models, explore sample prompts, and save and share your best prompts\. | Prompt a model using Prompt Lab without coding\. <br><br>Beginner<br><br>No code |
| [Prompt a foundation model with the retrieval\-augmented generation pattern](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html) | Prompt a foundation model by leveraging information in a knowledge base\. | Use the retrieval\-augmented generation pattern in a Jupyter notebook that uses Python code\. <br><br>Intermediate<br><br>All code |
| [Generate synthetic tabular data](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-generate-data.html) | Generate synthetic tabular data using a graphical flow editor\. | Select operations to generate data\. <br><br>Beginner<br><br>No code |
<!-- </table ""> -->
### Watsonx\.ai Community ###
14 Sept 2023
You can now join the [watsonx\.ai Community](https://community.ibm.com/community/user/watsonx/communities/community-home?communitykey=81927b7e-9a92-4236-a0e0-018a27c4ad6e) for AI architects and builders to learn, share ideas, and connect with others\.
## Week ending 8 September 2023 ##
### Generate synthetic tabular data with Synthetic Data Generator ###
7 Sept 2023
Now available in the Dallas and Frankfurt regions, Synthetic Data Generator is a new graphical editor tool on watsonx\.ai that you can use to generate tabular data to use for training models\. Using visual flows and a statistical model, you can create synthetic data based on your existing data or a custom data schema\. You can choose to mask your original data and export your synthetic data to a database or as a file\.
To get started, see [Synthetic data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/synthetic_data_overview_sd.html)\.
### Llama\-2 Foundation Model for natural language generation and chat ###
7 Sept 2023
The Llama-2 Foundation Model from Meta is now available in the Dallas region. The Llama-2 Chat model is an autoregressive language model that uses an optimized transformer architecture. The model is pretrained with publicly available online data, and then fine-tuned by using reinforcement learning from human feedback. The model is intended for commercial and research use in English-language, assistant-like chat scenarios.
<!-- <ul> -->
* For more information on the Llama\-2 model, see [Supported foundation models available with watsonx\.ai](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html)\.
* For a description of sample prompts, see [Sample foundation model prompts for common tasks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html)\.
* For pricing details for Llama\-2, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html)\.
<!-- </ul> -->
### LangChain extension for the foundation models Python library ###
7 Sept 2023
You can now use the LangChain framework with foundation models in watsonx\.ai with the new LangChain extension for the foundation models Python library\.
This sample notebook demonstrates how to use the new extension: [Sample notebook](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/c3dbf23a-9a56-4c4b-8ce5-5707828fc981?context=wx)
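As a rough illustration of the pattern only (this sketch is not taken from the sample notebook), the following code assumes the `Model` class from the foundation models Python library and its `WatsonxLLM` LangChain wrapper; the model ID, credentials, and project ID are placeholders:

```python
# Illustrative sketch: wire a watsonx.ai foundation model into a LangChain chain.
from ibm_watson_machine_learning.foundation_models import Model
from ibm_watson_machine_learning.foundation_models.extensions.langchain import WatsonxLLM
from langchain import PromptTemplate
from langchain.chains import LLMChain

model = Model(
    model_id="google/flan-ul2",  # placeholder model ID
    credentials={"url": "https://us-south.ml.cloud.ibm.com", "apikey": "<your-api-key>"},
    project_id="<your-project-id>",
)

# Wrap the model so LangChain can call it like any other LLM
llm = WatsonxLLM(model=model)
prompt = PromptTemplate(input_variables=["topic"], template="Write one sentence about {topic}.")
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run(topic="retrieval-augmented generation"))
```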
### Introductory sample for the retrieval\-augmented generation pattern ###
7 Sept 2023
Retrieval\-augmented generation is a simple, powerful technique for leveraging a knowledge base to get factually accurate output from foundation models\.
See: [Introduction to retrieval\-augmented generation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-rag.html)
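The pattern itself is framework-agnostic: retrieve relevant passages, ground the prompt in them, then generate. A minimal sketch, where `search` and `model` are hypothetical stand-ins for your own retrieval function and a foundation model client:

```python
# Illustrative outline of the retrieval-augmented generation pattern.
def answer_with_rag(question, search, model, k=3):
    # 1. Retrieve the most relevant passages from the knowledge base
    passages = search(question, top_k=k)
    # 2. Ground the prompt in the retrieved facts
    context = "\n\n".join(passages)
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    # 3. Generate a grounded answer with the foundation model
    return model.generate_text(prompt=prompt)
```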
## Week ending 1 September 2023 ##
31 Aug 2023
As of today, it is no longer possible to add comments to a notebook from the notebook action bar. Any existing comments were removed.

### StarCoder Foundation Model for code generation and code translation ###
31 Aug 2023
The StarCoder model from Hugging Face is now available in the Dallas region. Use StarCoder to create prompts for generating code or for transforming code from one programming language to another. One sample prompt demonstrates how to use StarCoder to generate Python code from a set of instructions. A second sample prompt demonstrates how to use StarCoder to transform code written in C++ to Python code.
<!-- <ul> -->
* For more information on the StarCoder model, see [Supported foundation models available with watsonx\.ai](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html)\.
* For a description of the sample prompts, see [Sample foundation model prompts for common tasks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html)\.
<!-- </ul> -->
### IBM watsonx\.ai is available in the Frankfurt region ###
31 Aug 2023
Watsonx.ai is now generally available in the Frankfurt data center and can be selected as the preferred region when signing up. The Prompt Lab and foundation model inferencing are supported in the Frankfurt region for these models:
<!-- <ul> -->
* mpt\-7b\-instruct2
* flan\-t5\-xxl\-11b
* flan\-ul2\-20b
* For more information on the supported models, see [Supported foundation models available with watsonx\.ai](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html)\.
<!-- </ul> -->
## Week ending 25 August 2023 ##
### Additional cache enhancements available for Watson Pipelines ###
21 August 2023
More options are available for customizing your pipeline flow settings\. You can now exercise greater control over when the cache is used for pipeline runs\. For details, see [Managing default settings](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-global-settings.html)\.
## Week ending 18 August 2023 ##
### Plan name updates for Watson Machine Learning service ###
18 August 2023
Starting immediately, plan names are updated for the IBM Watson Machine Learning service, as follows:
<!-- <ul> -->
* The v2 Standard plan is now the **Essentials** plan\. The plan is designed to give your organization the resources required to get started working with foundation models and machine learning assets\.
* The v2 Professional plan is now the **Standard** plan. This plan provides resources designed to support most organizations, from asset creation through productive use.
<!-- </ul> -->
Changes to the plan names do not change your terms of service\. That is, if you are registered to use the v2 Standard plan, it will now be named **Essentials**, but all of the plan details will remain the same\. Similarly, if you are registered to use the v2 Professional plan, there are no changes other than the plan name change to **Standard**\.
For details on what is included with each plan, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html)\. For pricing information, find your plan on the [Watson Machine Learning plan page](https://cloud.ibm.com/catalog/services/watson-machine-learning) in the IBM Cloud catalog\.
## Week ending 11 August 2023 ##
7 August 2023
On 31 August 2023, you will no longer be able to add comments to a notebook from the notebook action bar\. Any existing comments that were added that way will be removed\.

## Week ending 4 August 2023 ##
### Increased token limit for Lite plan ###
4 August 2023
If you are using the Lite plan to test foundation models, the token limit for prompt input and output is now increased from 25,000 to 50,000 per account per month\. This gives you more flexibility for exploring foundation models and experimenting with prompts\.
<!-- <ul> -->
* For details on watsonx\.ai plans, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html)\.
* For details on working with prompts, see [Engineer prompts with the Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html)\.
<!-- </ul> -->
### Custom text analytics template (SPSS Modeler) ###
4 August 2023
For SPSS Modeler, you can now upload a custom text analytics template to a project\. This provides you with more flexibility to capture and extract key concepts in a way that is unique to your context\.
## Week ending 28 July 2023 ##
### Foundation models Python library available ###
27 July 2023
You can now prompt foundation models in watsonx\.ai programmatically using a Python library\.
See: [Foundation models Python library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-python-lib.html)
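For example, a minimal sketch of programmatic prompting, assuming the `ibm-watson-machine-learning` package is installed; the model ID, credentials, and project ID below are placeholders:

```python
# Illustrative sketch: prompt a foundation model from Python.
from ibm_watson_machine_learning.foundation_models import Model

model = Model(
    model_id="google/flan-ul2",  # placeholder model ID
    credentials={"url": "https://us-south.ml.cloud.ibm.com", "apikey": "<your-api-key>"},
    project_id="<your-project-id>",
)

# generate_text returns the generated output as a string
print(model.generate_text(prompt="List three uses of synthetic data."))
```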
## Week ending 14 July 2023 ##
### Control AI guardrails ###
14 July 2023
You can now control whether AI guardrails are on or off in the Prompt Lab\. AI guardrails remove potentially harmful text from both the input and output fields\. Harmful text can include hate speech, abuse, and profanity\. To prevent the removal of potentially harmful text, set the **AI guardrails** switch to off\. See [Hate speech, abuse, and profanity](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-hallucinations.html#hap)\.

### Microsoft Azure SQL Database connection supports Azure Active Directory authentication (Azure AD) ###
14 July 2023
You can now select Active Directory for the Microsoft Azure SQL Database connection. Active Directory authentication is an alternative to SQL Server authentication. With this enhancement, administrators can centrally manage user access to Azure. For more information, see [Microsoft Azure SQL Database connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azure-sql.html).
## Week ending 7 July 2023 ##
### Welcome to IBM watsonx\.ai\! ###
7 July 2023
IBM watsonx\.ai delivers all the tools that you need to work with machine learning and foundation models\.
Get started:
<!-- <ul> -->
* [Learn about watsonx\.ai](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/overview-wx.html)
* [Learn about foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-overview.html)
* [Engineer prompts with the Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html)
* [Take quick start tutorials](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html)
* [Watson Natural Language Processing](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp.html)
<!-- </ul> -->
### Try generative AI search and answer in this documentation ###
7 July 2023
You can see generative AI in action by trying the new generative AI search and answer option in the watsonx\.ai documentation\. The answers are generated by a large language model running in watsonx\.ai and based on the documentation content\. This feature is only available when you are viewing the documentation while logged in to watsonx\.ai\.
Enter a question in the documentation search field and click the **Try generative AI search and answer** icon. The **Generative AI search and answer** pane opens and answers your question.

<!-- </article "role="article" "> -->
# Watson Machine Learning plans and compute usage #
You use Watson Machine Learning resources, which are measured in capacity unit hours (CUH), when you train AutoAI models, run machine learning models, or score deployed models\. You use Watson Machine Learning resources, measured in resource units (RU), when you run inferencing services with foundation models\. This topic describes the various plans you can choose, what services are included, and how computing resources are calculated\.
## Watson Machine Learning in Cloud Pak for Data as a Service and watsonx ##
Important: The Watson Machine Learning plans include details for watsonx.ai. Watsonx.ai is a studio of integrated tools for working with generative AI, powered by foundation models, and with machine learning models. If you are using Cloud Pak for Data as a Service, the details for working with foundation models and for metering prompt inferencing in Resource Units do not apply to your plan.
For more information on watsonx\.ai, see:
<!-- <ul> -->
* [Overview of IBM watsonx\.ai](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/overview-wx.html)
* [Comparison of IBM watsonx and Cloud Pak for Data as a Service](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/compare-platforms.html)
* [Signing up for IBM watsonx\.ai](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/signup-wx.html)
<!-- </ul> -->
If you are enabled for both watsonx and Cloud Pak for Data as a Service, you can switch between the two platforms\.
## Choosing a Watson Machine Learning plan ##
View a comparison of plans and consider the details to choose a plan that fits your needs\.
<!-- <ul> -->
* [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html?context=cdpaas&locale=en#wml-plan)
* [Capacity Unit Hours (CUH), tokens, and Resource Units (RU)](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html?context=cdpaas&locale=en#wml-meters)
* [Watson Machine Learning plan details](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html?context=cdpaas&locale=en#wml-plan-details)
* [Capacity Unit Hours metering](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html?context=cdpaas&locale=en#cuh-metering)
* [Monitoring CUH and RU usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html?context=cdpaas&locale=en#wml-track-usage)
<!-- </ul> -->
### Watson Machine Learning plans ###
Watson Machine Learning plans govern how you are billed for models you train and deploy with Watson Machine Learning and for prompts you use with foundation models\. Choose a plan based on your needs:
<!-- <ul> -->
* **Lite** is a free plan with limited capacity\. Choose this plan if you are evaluating Watson Machine Learning and want to try out the capabilities\. The Lite plan does not support running a foundation model tuning experiment on watsonx\.
* **Essentials** is a pay\-as\-you\-go plan that gives you the flexibility to build, deploy, and manage models to match your needs\.
* **Standard** is a high\-capacity enterprise plan that is designed to support all of an organization's machine learning needs\. Capacity unit hours are provided at a flat rate, while resource unit consumption is pay\-as\-you\-go\.
<!-- </ul> -->
For plan details and pricing, see [IBM Cloud Machine Learning](https://cloud.ibm.com/catalog/services/machine-learning)\.
### Capacity Unit Hours (CUH), tokens, and Resource Units (RU) ###
For metering and billing purposes, machine learning models and deployments or foundation models are measured with these units:
<!-- <ul> -->
* *Capacity Unit Hours* (CUH) measure compute resource consumption per unit hour for usage and billing purposes\. CUH measures all Watson Machine Learning activity except for Foundation Model inferencing\.
* *Resource Units* (RU) measure foundation model inferencing consumption\. Inferencing is the process of calling the foundation model to generate output in response to a prompt\. Each RU equals 1,000 *tokens*\. A token is a basic unit of text (typically 4 characters or 0\.75 words) used in the input or output for a foundation model prompt\. Choose a plan that corresponds to your usage requirements\. For details on tokens, see [Tokens and tokenization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tokens.html)\.
* A *rate limit* monitors and restricts the number of inferencing requests per second processed for foundation models for a given Watson Machine Learning plan instance\. The rate limit is higher for paid plans than for the free Lite plan\.
<!-- </ul> -->
## Watson Machine Learning plan details ##
The Lite plan provides enough free resources for you to evaluate the capabilities of watsonx\.ai\. You can then choose a paid plan that matches the needs of your organization, based on plan features and capacity\.
<!-- <table> -->
Table 1\. Plan details
| Plan features | Lite | Essentials | Standard |
| ------------------------------------------------------------- | ------------------------------- | ---------------------------------------------------------------- | --------------------------------------------------------------------- |
| Machine Learning usage in CUH | 20 CUH per month | CUH billing based on CUH rate multiplied by hours of consumption | 2500 CUH per month |
| Foundation model inferencing in tokens or Resource Units (RU) | 50,000 tokens per month | Billed for usage (1000 tokens = 1 RU) | Billed for usage (1000 tokens = 1 RU) |
| Max parallel Decision Optimization batch jobs per deployment | 2 | 5 | 100 |
| Deployment jobs retained per space | 100 | 1000 | 3000 |
| Deployment time to idle | 1 day | 3 days | 3 days |
| HIPAA support | NA | NA | Dallas region only <br>Must be enabled in your [IBM Cloud account](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security.html#hipaa) |
| Rate limit per plan ID | 2 inference requests per second | 8 inference requests per second | 8 inference requests per second |
<!-- </table ""> -->
Note: If you upgrade from Essentials to Standard, you cannot revert to an Essentials plan\. You must create a new plan\.
For all plans:
<!-- <ul> -->
* Foundation model inferencing Resource Units (RU) are consumed by Prompt Lab inferencing, including input and output. That is, the prompt you enter as input is counted in addition to the generated output. (watsonx only)
* Foundation model inferencing is available only for the Dallas and Frankfurt data centers\. (watsonx only)
* Foundation model tuning in the Tuning Studio is available only in the Dallas data center\. (watsonx only)
* Three model classes determine the RU rate\. The price per RU differs according to model class\. (watsonx only)
* Capacity\-unit\-hour (CUH) rate consumption for training is based on training tool, hardware specification, and runtime environment\.
* Capacity\-unit\-hour (CUH) rate consumption for deployment is based on deployment type, hardware specification, and software specification\.
* Watson Machine Learning places limits on the number of [deployment jobs](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-details.html) retained for each single [deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html). If you exceed your limit, you cannot create new deployment jobs until you delete existing jobs or upgrade your plan. By default, job metadata is automatically deleted after 30 days. You can override this value when creating a job. See [Managing jobs](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-jobs.html).
* Time to idle refers to how long a deployment is considered active between scoring requests. If a deployment does not receive scoring requests for a given duration, it is treated as inactive, or idle, and billing stops for all frameworks other than SPSS.
* A plan allows for at least the stated rate limit, and the actual rate limit can be higher than the stated limit\. For example, the Lite plan might process more than 2 requests per second without issuing an error\. If you have a paid plan and believe you are reaching the rate limit in error, contact IBM Support for assistance\.
<!-- </ul> -->
For plan details and pricing, see [IBM Cloud Machine Learning](https://cloud.ibm.com/catalog/services/watson-machine-learning)\.
## Resource unit metering (watsonx) ##
Resource Units billing is based on the rate of the billing class for the foundation model multiplied by the number of Resource Units (RU). A Resource Unit is equal to 1,000 tokens from the input and output of foundation model inferencing. The three foundation model billing classes have different RU rates.
<!-- <table> -->
Table 2\. Foundation model billing details
| Model | Origin | Billing class | Price per RU |
| -------------------------- | ----------- | ------------- | --------------- |
| granite\-13b\-instruct\-v2 | IBM | Class 2 | $0\.0018 per RU |
| granite\-13b\-instruct\-v1 | IBM | Class 2 | $0\.0018 per RU |
| granite\-13b\-chat\-v2 | IBM | Class 2 | $0\.0018 per RU |
| granite\-13b\-chat\-v1 | IBM | Class 2 | $0\.0018 per RU |
| flan\-t5\-xl\-3b | Open source | Class 1 | $0\.0006 per RU |
| flan\-t5\-xxl\-11b | Open source | Class 2 | $0\.0018 per RU |
| flan\-ul2\-20b | Open source | Class 3 | $0\.0050 per RU |
| gpt\-neox\-20b | Open source | Class 3 | $0\.0050 per RU |
| llama\-2\-13b\-chat | Open source | Class 1 | $0\.0006 per RU |
| llama\-2\-70b\-chat | Open source | Class 2 | $0\.0018 per RU |
| mpt\-7b\-instruct2 | Open source | Class 1 | $0\.0006 per RU |
| mt0\-xxl\-13b | Open source | Class 2 | $0\.0018 per RU |
| starcoder\-15\.5b | Open source | Class 2 | $0\.0018 per RU |
| Tuned foundation model | Custom | Class 1 | $0\.0006 per RU |
<!-- </table ""> -->
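As a worked example, with illustrative token counts and the Class 2 rate from Table 2:

```python
# Illustrative cost calculation for one inference request against a
# Class 2 model such as granite-13b-chat-v2.
input_tokens = 1200                                     # tokens in the prompt
output_tokens = 300                                     # tokens in the generated output
resource_units = (input_tokens + output_tokens) / 1000  # 1 RU = 1,000 tokens
cost = resource_units * 0.0018                          # Class 2 rate: $0.0018 per RU
print(f"{resource_units} RU -> ${cost:.4f}")            # 1.5 RU -> $0.0027
```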
<!-- <ul> -->
* For more information about each model, see [Supported foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html)\.
* For information about tuned foundation models, see [Tuning Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-studio.html)\.
* For information about regional support for each model, see [Regional availability for foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/regional-datactr.html#data-centers)\.
<!-- </ul> -->
Note: You do not consume tokens when you use the generative AI search and answer app for this documentation site\.
## Capacity Unit Hours metering (watsonx and Watson Machine Learning) ##
CUH consumption is affected by the computational hardware resources you apply for a task as well as other factors such as the software specification and model type\.
### CUH consumption rates by asset type ###
<!-- <table> -->
Table 3\. CUH consumption rates by asset type
| Asset type | Capacity type | Capacity units per hour |
| --------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------- | -------------------------------- |
| AutoAI experiment | 8 vCPU and 32 GB RAM | 20 |
| Decision Optimization training | 2 vCPU and 8 GB RAM <br>4 vCPU and 16 GB RAM <br>8 vCPU and 32 GB RAM <br>16 vCPU and 64 GB RAM | 6 <br>7 <br>9 <br>13 |
| Decision Optimization deployments | 2 vCPU and 8 GB RAM <br>4 vCPU and 16 GB RAM <br>8 vCPU and 32 GB RAM <br>16 vCPU and 64 GB RAM | 30 <br>40 <br>50 <br>60 |
| Machine Learning models <br>(training, evaluating, or scoring) | 1 vCPU and 4 GB RAM <br>2 vCPU and 8 GB RAM <br>4 vCPU and 16 GB RAM <br>8 vCPU and 32 GB RAM <br>16 vCPU and 64 GB RAM | 0\.5 <br>1 <br>2 <br>4 <br>8 |
| Foundation model tuning experiment <br>(watsonx only) | NVIDIA A100 80GB GPU | 43 |
<!-- </table ""> -->
### CUH consumption by deployment and framework type ###
CUH consumption for deployments is calculated using these formulas:
<!-- <table> -->
Table 4\. CUH consumption by deployment and framework type
| Deployment type | Framework | CUH calculation |
| --------------- | ---------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------- |
| Online | AutoAI, Python functions and scripts, SPSS, Scikit-Learn custom libraries, Tensorflow, RShiny | deployment\_active\_duration × no\_of\_nodes × CUH\_rate\_for\_capacity\_type\_framework |
| Online | Spark, PMML, Scikit-Learn, Pytorch, XGBoost | score\_duration\_in\_seconds × no\_of\_nodes × CUH\_rate\_for\_capacity\_type\_framework |
| Batch | all frameworks | job\_duration\_in\_seconds × no\_of\_nodes × CUH\_rate\_for\_capacity\_type\_framework |
<!-- </table ""> -->
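As an illustration of the batch formula, assuming that the duration is converted from seconds to hours so the result is expressed in capacity unit hours, and using the 2 vCPU and 8 GB RAM rate from Table 3:

```python
# Illustrative CUH estimate for a batch deployment job.
job_duration_in_seconds = 1800   # a 30-minute batch scoring job
no_of_nodes = 2                  # number of nodes running the job
cuh_rate = 1.0                   # 2 vCPU and 8 GB RAM (Table 3)

cuh = (job_duration_in_seconds / 3600) * no_of_nodes * cuh_rate
print(cuh)                       # 1.0 CUH
```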
## Monitoring resource usage ##
You can track CUH or RU usage for assets you own or collaborate on in a project or space\. If you are an account owner or administrator, you can track CUH or RU usage for an entire account\.
### Tracking CUH or RU usage in a project ###
To monitor CUH or RU consumption in a project:
<!-- <ol> -->
1. Navigate to the **Manage** tab for a project\.
2. Click **Resources** to review a summary of resource consumption for assets in the project or space, or to review resource consumption details for particular assets\.

<!-- </ol> -->
### Tracking CUH usage for an account ###
You can track the runtime usage for an account on the **Environment Runtimes** page if you are the IBM Cloud account owner or administrator or the Watson Machine Learning service owner\. For details, see [Monitoring resources](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html)\.
### Tracking CUH consumption for machine learning in a notebook ###
To calculate capacity unit hours in a notebook, use:
# `client` is an authenticated ibm_watson_machine_learning.APIClient instance
details = client.service_instance.get_details()
# `current` accumulates capacity-unit milliseconds; divide by milliseconds
# per hour (3600 * 1000) to convert the value to capacity unit hours
CUH = details["entity"]["capacity_units"]["current"] / (3600 * 1000)
print(CUH)
For example:
'capacity_units': {'current': 19773430}
19773430/(3600*1000)
returns 5\.49 CUH
For details, see the Service Instances section of the [IBM Watson Machine Learning API](https://cloud.ibm.com/apidocs/machine-learning) documentation\.
## Learn more ##
<!-- <ul> -->
* [Compute options for AutoAI experiments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-autoai.html)
* [Compute options for model training and scoring](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-cuh-deploy-spaces.html)
<!-- </ul> -->
**Parent topic:**[Watson Machine Learning](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/wml.html)
<!-- </article "role="article" "> -->
# Watson Studio service plans #
The plan you choose for Watson Studio affects the features and capabilities that you can use\.
When you provision or upgrade Watson Studio, you can choose between a Lite and a Professional plan\.
See the plan pages in [IBM Cloud catalog: Watson Studio](https://cloud.ibm.com/catalog/services/watson-studio) for pricing and feature information\.
IBM Cloud account owners can choose between the Lite (unpaid) and Professional (paid) plan\.
Under the Professional plan, you can provision multiple Watson Studio instances in an IBM Cloud account. The Professional plan allows unlimited users and charges for compute usage, which is measured in capacity unit hours (CUH). The Professional plan is the only paid plan option.
Under the Lite plan, you can provision one Watson Studio instance per IBM Cloud account\. The Lite plan allows only one user and limits the CUH to 10 hours per month\. Collaborators in your projects must have their own Watson Studio Lite plans\.
Both Watson Studio plans contain these features without additional services:
<!-- <ul> -->
* Watson services APIs to run in notebooks\.
* [Jupyter notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html) to analyze data with Python or R code\.
* [RStudio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-overview.html) to analyze data with R code\.
* [SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsd/spss-modeler.html) to develop predictive models on a graphical canvas\.
* [Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html) to shape and cleanse data\.
* [Watson Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-overview.html) to orchestrate an end\-to\-end flow of assets from creation through deployment\.
* [Small runtime environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html) as compute resources for analytical tools\.
* [Spark runtime environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html#default-spark)\. The maximum number of Spark executors that can be used is restricted by the service plan\.
* [Environments with the Watson Natural Language Processing library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp.html) with pre\-trained models for language processing tasks that you can run on unstructured data\.
* [Environments with Decision Optimization libraries](https://dataplatform.cloud.ibm.com/docs/content/DO/DOWS-Cloud_home.html) to model and solve decision optimization problems that exceed the complexity that is supported by the Community Edition of the libraries in the other default Python environments\.
* [Connectors to data sources](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)\.
* [Collaboration](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html) in projects and deployment spaces\.
* [Samples](https://dataplatform.cloud.ibm.com/gallery) for resources to help you learn and samples that you can use\.
<!-- </ul> -->
Both Watson Studio plans contain these features that also require the Watson Machine Learning service:
<!-- <ul> -->
* [Machine learning models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-overview.html) to build analytical models\.
* [AutoAI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) to automatically create a set of model candidates\.
* [Federated learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fed-lea.html) to collaboratively train a model with multiple remote parties without sharing data\.
* [Decision Optimization](https://dataplatform.cloud.ibm.com/docs/content/DO/DOWS-Cloud_home.html) to build models that solve business problems\.
* [Synthetic Data Generator](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/synthetic_data_overview_sd.html) to generate synthetic tabular data\.
<!-- </ul> -->
The Watson Studio Professional plan includes features that are not available in the Lite plan, such as the following:
<!-- <ul> -->
* [Encrypt your IBM Cloud Object Storage instance with your own key](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html#byok)\.
* [Large runtime environments with 8 or more vCPUs](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html) as compute resources for analytical tools\.
* [GPU environments for running notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html#default-gpu)\.
* [Export projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/export-project.html)\.
<!-- </ul> -->
The Professional plan charges for [compute usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html) consumed per month\. Compute usage is measured in capacity unit hours (CUH)\. For details on computing resource allocation and consumption, see [Runtime usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/track-runtime-usage.html)\.
<!-- <table> -->
Table 1\. Feature differences between Watson Studio plans
| Feature | Lite | Professional |
| ---------------------- | ---------------- | ------------------------ |
| Custom encryption keys | | ✓ |
| Connectors | ✓ | ✓ |
| Large environments | | ✓ |
| Spark environments | 2 executors | Up to 35 executors |
| GPU environments | | ✓ Dallas region only |
| Export projects | | ✓ |
| Collaborators | 1 | Unlimited |
| Processing usage | 10 CUH per month | Unlimited \- pay per CUH |
| HIPAA readiness | | ✓ Dallas region only |
<!-- </table ""> -->
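As a rough, hypothetical aid for comparing the plans, the following sketch estimates monthly CUH consumption for a single user and checks it against the Lite plan's 10 CUH allotment; the 0.5 CUH per hour rate is an example for a small (1 vCPU and 4 GB RAM) runtime:

```python
# Hypothetical helper: does estimated usage fit the Lite plan's 10 CUH per month?
def fits_lite_plan(runtime_hours_per_month, cuh_rate_per_hour, allotment=10):
    used = runtime_hours_per_month * cuh_rate_per_hour
    return used, used <= allotment

print(fits_lite_plan(18, 0.5))  # (9.0, True)   -- within the Lite plan
print(fits_lite_plan(30, 0.5))  # (15.0, False) -- consider the Professional plan
```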
## Learn more ##
<!-- <ul> -->
* [Watson Studio service overview](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/wsl.html)
* [Getting started](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-wdp.html)
* [Analyzing data and building models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html)
* [Upgrade your plan](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/upgrade.html)\.
<!-- </ul> -->
**Parent topic:**[Watson Studio](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/wsl.html)
<!-- </article "role="article" "> -->
Adding data to a project
After you create a project, the next step is to add data assets to it so that you can work with data. All the collaborators in the project are automatically authorized to access the data in the project.
Assets of different types can have the same name. However, you can't add multiple assets of the same type with the same name.
You can use the following methods to add data assets to projects:
| Method | When to use |
| ------ | ----------- |
| [Add local files](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html?context=cdpaas&locale=en#files) | You have data in CSV or similar files on your local system. |
| [Add Samples data sets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html?context=cdpaas&locale=en#community) | You want to use sample data sets. |
| [Add database connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) | You need to connect to a remote data source. |
| [Add data from a connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html) | You need one or more tables or files from a remote data source. |
| [Add connected folder assets from IBM Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/folder-asset.html) | You need a folder in IBM Cloud Object Storage that contains a dynamic set of files, such as a news feed. |
| [Convert files in project storage to assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html?context=cdpaas&locale=en#os) | You want to convert files that you created in the project into data assets. |
Add local files
You can add a file from your local system as a data asset in a project.
Required permissions: You must have the Editor or Admin role in the project.
Restrictions:
- The file cannot be empty.
- The file name can't exceed 255 characters.
- The maximum size for files that you can load with the UI is 5 GB. You can [load larger files to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/store-large-objs-in-cos.html) with APIs.
Important: You can't add executable files to a project. All other types of files that you add to a project are not checked for malicious code. You must ensure that your files do not contain malware or other types of malicious software that other collaborators might download.
To add data files to a project:
1. From your project's Assets page, click the Upload asset to project icon. You can also click the same icon from within a notebook or canvas.
2. In the pane that opens, browse for the files or drag them onto the pane. You must stay on the page until the load is complete.
The files are saved in the object storage that is associated with your project and are listed as data assets on the Assets page of your project.
When you click the data asset name, you can see this information about data assets from files:
* The asset name and description
* The tags for the asset
* The name of the person who created the asset
* The size of the data
* The date when the asset was added to the project
* The date when the asset was last modified
* A [preview](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html) of the data, for CSV, Avro, Parquet, TSV, Microsoft Excel, PDF, text, JSON, and image files
* A [profile](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/profile.html) of the data, for CSV, Avro, Parquet, TSV, and Microsoft Excel files
You can update the contents of a data asset from a file by adding a file with the same name and format to the project and then choosing to replace the existing data asset.
You can remove the data asset by choosing the Delete option from the action menu next to the asset name. Choose the Prepare data option to refine the data with [Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html).
Add Samples data sets
You can add data sets from Samples to your project:
1. In Samples, find the card for the data set that you want to add.
2. Click the Add to Project icon from the action bar, select the project, and click Add.
Convert files in project storage to assets
The storage for the project contains the data assets that you uploaded to the project, but it can also contain other files. For example, you can save a DataFrame from a notebook to the project's storage. You can convert files in project storage to assets.
To convert files in project storage to assets:
1. From the Assets tab of your project, click Import asset.
2. Select Project files.
3. Select the data_asset folder.
4. Select the asset and click Import.
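Alternatively, you can save a file to project storage and register it as a data asset in one step from a notebook. A minimal sketch, assuming the project-lib library for Python; the project ID and access token are placeholders that you generate from the project's settings:

```python
# Illustrative sketch: save a DataFrame as a project data asset with project-lib.
import pandas as pd
from project_lib import Project

project = Project(project_id="<project-id>", project_access_token="<access-token>")
df = pd.DataFrame({"id": [1, 2], "value": [0.5, 1.0]})

# set_project_asset=True registers the saved file as a data asset
project.save_data("sample.csv", df.to_csv(index=False),
                  set_project_asset=True, overwrite=True)
```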
Next steps
* [Refine the data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html)
* [Analyze the data and work with models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html)
Learn more
* [Downloading data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/download.html)
* [Publishing data assets to a catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/catalog/publish-asset-project.html)
Parent topic:[Preparing data](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/get-data.html)