# Gitea Shared Service
As outbound access to public Git repositories such as GitHub is often blocked, a Git mirror may be required. Gitea can be deployed as a shared service to provide this functionality.
Documentation on Gitea can be found here: [https://docs.gitea.io/](https://docs.gitea.io/).
## Deploy
To deploy this shared service you should use the UI (or the API) to issue a request. If you don't see the option available for this specific template, make sure it has been built, published and registered by the TRE Admin.
## Getting Started
Connect to the Gitea admin console `https://yourtreuri/gitea/` with the `giteaadmin` user. You can find the password in the Key Vault, stored as the secret `gitea-<TRE_ID>-administrator-password`.
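If you prefer the command line, you can also read the password with the Azure CLI. A minimal sketch, assuming you are signed in with read access to the core Key Vault (`kv-<TRE_ID>`):
```bash
# Assumes an authenticated Azure CLI session with read access to the core Key Vault
az keyvault secret show \
  --vault-name "kv-<TRE_ID>" \
  --name "gitea-<TRE_ID>-administrator-password" \
  --query value --output tsv
```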
## Configuring repositories
By default, this Gitea instance does not have any repositories configured. You can add repositories to Gitea either by using the command line or by using the Gitea web interface.
### Command Line
Make sure you run the following commands using Git Bash, with your current directory set to `C:/AzureTRE`.
1. On the jumpbox, run:
```./templates/workspace_services/gitea/gitea_migrate_repo.sh -t <tre_id> -g <URL_of_github_repo_to_migrate>```
1. If you have issues with the token or it doesn't work, you can reset it by setting its value to null in Key Vault:
```az keyvault secret set --name gitea-<tre-id>-admin-token --vault-name kv-<tre-id> --value null```
### Gitea Web Interface
1. On the jumpbox, open Edge and go to:
```https://gitea-<TRE_ID>.azurewebsites.net/```
1. Authenticate yourself using the username ```giteaadmin``` and the secret ```<gitea-TRE_ID-administrator-password>``` stored in the Key Vault.
1. Add the repository of your choice
### Verify access to the mirrored repository
From a virtual machine within a workspace:
- Command line: ```git clone https://gitea-<TRE_ID>.azurewebsites.net/giteaadmin/<NameOfrepository>```
- Gitea Web Interface: ```https://gitea-<TRE_ID>.azurewebsites.net/```
## Network requirements
Gitea needs to be able to access the following resources outside the Azure TRE VNET via explicitly allowed [Service Tags](https://docs.microsoft.com/en-us/azure/virtual-network/service-tags-overview) or URLs.
| Service Tag / Destination | Justification |
| --- | --- |
| AzureActiveDirectory | Authorize the signed in user against Microsoft Entra ID. |
| AzureContainerRegistry | Pull the Gitea container image, as it is located in Azure Container Registry. |
| (www.)github.com | Allows Gitea to mirror any repository on GitHub. |
## Upgrading to version 1.0.0
Migrating existing Gitea services to major version 1.0.0 is not currently supported. This is due to a breaking change in the Terraform configuration: the move from the deprecated `mysql_server` resource to the new `mysql_flexible_server`.
|
AzureTRE/docs/tre-templates/shared-services/gitea.md/0
|
{
"file_path": "AzureTRE/docs/tre-templates/shared-services/gitea.md",
"repo_id": "AzureTRE",
"token_count": 807
}
| 111 |
# MySQL Workspace Service
See: [MySQL Azure](https://learn.microsoft.com/en-GB/azure/mysql/)
## Prerequisites
- [A base workspace deployed](../workspaces/base.md)
- The MySQL workspace service container image needs building and pushing:
`make workspace_service_bundle BUNDLE=mysql`
## Authenticating to MySQL
1. Navigate to the MySQL workspace service using the `Mysql fqdn` from the details tab.
2. Use the password found in the Key Vault and the username `mysqladmin`.
3. Connect to the MySQL server from a VM with the command shown below (with `-p` you will be prompted for the password):
`mysql -h [fqdn] -u [username] -p`
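As a concrete sketch of the steps above (all placeholders below are illustrative; take the real values from the service's details tab and the workspace Key Vault):
```bash
# Illustrative placeholders only - substitute your workspace's values
MYSQL_FQDN="<mysql-fqdn-from-details-tab>"
MYSQL_PASSWORD=$(az keyvault secret show \
  --vault-name "<workspace-key-vault-name>" \
  --name "<mysql-password-secret-name>" \
  --query value --output tsv)

# Connect and run a simple query to confirm access
mysql -h "${MYSQL_FQDN}" -u mysqladmin -p"${MYSQL_PASSWORD}" -e "SHOW DATABASES;"
```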
## Upgrading to version 1.0.0
Migrating existing MySQL services to major version 1.0.0 is not currently supported. This is due to a breaking change in the Terraform configuration: the move from the deprecated `mysql_server` resource to the new `mysql_flexible_server`.
|
AzureTRE/docs/tre-templates/workspace-services/mysql.md/0
|
{
"file_path": "AzureTRE/docs/tre-templates/workspace-services/mysql.md",
"repo_id": "AzureTRE",
"token_count": 242
}
| 112 |
# Checking the Service Bus
If the message payload is accepted by the API and a **workspace_id** is generated, you should be able to track the progress of the deployment using `GET /api/workspaces/{workspace_id}`.
Initially the status is always reported as:
```json
{
"deployment": {
"status": "awaiting_deployment",
"message": "This resource has not yet been deployed"
}
}
```
This should eventually change as the message flows through the system.
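A quick way to poll the status from a terminal is sketched below; the variable names and URL shape are illustrative and assume you have a valid API bearer token:
```bash
# Illustrative placeholders - substitute your TRE's URL, a valid API access token and the workspace id
TRE_URL="https://<your-tre-fqdn>"
ACCESS_TOKEN="<api-bearer-token>"
WORKSPACE_ID="<workspace_id>"

curl --silent --header "Authorization: Bearer ${ACCESS_TOKEN}" \
  "${TRE_URL}/api/workspaces/${WORKSPACE_ID}" | jq .
```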
If the message remains at this stage, you should first verify that the message arrived in the service bus.
In the Azure portal:
1. Select the Service Bus from deployed resources and click **Entities > Queues > workspacequeue**.
1. Select the Service Bus Explorer and the **Peek** tab to check for hanging messages.

|
AzureTRE/docs/troubleshooting-faq/troubleshooting-sb.md/0
|
{
"file_path": "AzureTRE/docs/troubleshooting-faq/troubleshooting-sb.md",
"repo_id": "AzureTRE",
"token_count": 223
}
| 113 |
import random
import pytest
import asyncio
from typing import Tuple
import config
import logging
from resources.resource import post_resource, disable_and_delete_resource
from resources.workspace import get_workspace_auth_details
from resources import strings as resource_strings
from helpers import get_admin_token
LOGGER = logging.getLogger(__name__)
pytestmark = pytest.mark.asyncio
def pytest_addoption(parser):
parser.addoption("--verify", action="store", default="true")
@pytest.fixture(scope="session")
def event_loop():
try:
loop = asyncio.get_running_loop()
except RuntimeError:
loop = asyncio.new_event_loop()
yield loop
loop.close()
@pytest.fixture(scope="session")
def verify(pytestconfig):
if pytestconfig.getoption("verify").lower() == "true":
return True
elif pytestconfig.getoption("verify").lower() == "false":
return False
async def create_or_get_test_workspace(
auth_type: str,
verify: bool,
template_name: str = resource_strings.BASE_WORKSPACE,
pre_created_workspace_id: str = "",
client_id: str = "",
client_secret: str = "") -> Tuple[str, str]:
if pre_created_workspace_id != "":
return f"/workspaces/{pre_created_workspace_id}", pre_created_workspace_id
LOGGER.info(f"Creating workspace {template_name}")
description = " ".join([x.capitalize() for x in template_name.split("-")[2:]])
payload = {
"templateName": template_name,
"properties": {
"display_name": f"E2E {description} workspace ({auth_type} AAD)",
"description": f"{template_name} test workspace for E2E tests",
"auth_type": auth_type,
"address_space_size": "small"
}
}
if config.TEST_WORKSPACE_APP_PLAN != "":
payload["properties"]["app_service_plan_sku"] = config.TEST_WORKSPACE_APP_PLAN
if auth_type == "Manual":
payload["properties"]["client_id"] = client_id
payload["properties"]["client_secret"] = client_secret
admin_token = await get_admin_token(verify=verify)
# TODO: Temp fix to solve creation of workspaces - https://github.com/microsoft/AzureTRE/issues/2986
await asyncio.sleep(random.uniform(1, 9))
workspace_path, workspace_id = await post_resource(payload, resource_strings.API_WORKSPACES, access_token=admin_token, verify=verify)
LOGGER.info(f"Workspace {workspace_id} {template_name} created")
return workspace_path, workspace_id
async def create_or_get_test_workpace_service(workspace_path, workspace_owner_token, pre_created_workspace_service_id, verify):
if pre_created_workspace_service_id != "":
workspace_service_id = pre_created_workspace_service_id
workspace_service_path = f"{workspace_path}/{resource_strings.API_WORKSPACE_SERVICES}/{workspace_service_id}"
return workspace_service_path, workspace_service_id
# create a guac service
service_payload = {
"templateName": resource_strings.GUACAMOLE_SERVICE,
"properties": {
"display_name": "Workspace service test",
"description": ""
}
}
workspace_service_path, workspace_service_id = await post_resource(
payload=service_payload,
endpoint=f'/api{workspace_path}/{resource_strings.API_WORKSPACE_SERVICES}',
access_token=workspace_owner_token,
verify=verify)
return workspace_service_path, workspace_service_id
async def clean_up_test_workspace(pre_created_workspace_id: str, workspace_path: str, verify: bool):
# Only delete the workspace if it wasn't pre-created
if pre_created_workspace_id == "":
LOGGER.info(f"Deleting workspace {pre_created_workspace_id}")
await disable_and_delete_tre_resource(workspace_path, verify)
async def clean_up_test_workspace_service(pre_created_workspace_service_id: str, workspace_service_path: str, workspace_id: str, verify: bool):
if pre_created_workspace_service_id == "":
LOGGER.info(f"Deleting workspace service {pre_created_workspace_service_id}")
await disable_and_delete_ws_resource(workspace_service_path, workspace_id, verify)
# Session scope isn't in effect with python-xdist: https://github.com/microsoft/AzureTRE/issues/2868
@pytest.fixture(scope="session")
async def setup_test_workspace(verify) -> Tuple[str, str, str]:
pre_created_workspace_id = config.TEST_WORKSPACE_ID
# Set up - uses a pre created app reg as has appropriate roles assigned
workspace_path, workspace_id = await create_or_get_test_workspace(
auth_type="Manual", verify=verify, pre_created_workspace_id=pre_created_workspace_id, client_id=config.TEST_WORKSPACE_APP_ID, client_secret=config.TEST_WORKSPACE_APP_SECRET)
yield workspace_path, workspace_id
# Tear-down
await clean_up_test_workspace(pre_created_workspace_id=pre_created_workspace_id, workspace_path=workspace_path, verify=verify)
# Session scope isn't in effect with python-xdist: https://github.com/microsoft/AzureTRE/issues/2868
@pytest.fixture(scope="session")
async def setup_test_workspace_and_guacamole_service(setup_test_workspace, verify):
# Set up
workspace_path, workspace_id = setup_test_workspace
workspace_owner_token = await get_workspace_owner_token(workspace_id, verify)
pre_created_workspace_service_id = config.TEST_WORKSPACE_SERVICE_ID
workspace_service_path, workspace_service_id = await create_or_get_test_workpace_service(
workspace_path,
workspace_owner_token=workspace_owner_token,
pre_created_workspace_service_id=pre_created_workspace_service_id,
verify=verify)
yield workspace_path, workspace_id, workspace_service_path, workspace_service_id
await clean_up_test_workspace_service(pre_created_workspace_service_id, workspace_service_path, workspace_id, verify)
# Session scope isn't in effect with python-xdist: https://github.com/microsoft/AzureTRE/issues/2868
@pytest.fixture(scope="session")
async def setup_test_aad_workspace(verify) -> Tuple[str, str, str]:
pre_created_workspace_id = config.TEST_AAD_WORKSPACE_ID
# Set up
workspace_path, workspace_id = await create_or_get_test_workspace(auth_type="Automatic", verify=verify, pre_created_workspace_id=pre_created_workspace_id)
yield workspace_path, workspace_id
# Tear-down
await clean_up_test_workspace(pre_created_workspace_id=pre_created_workspace_id, workspace_path=workspace_path, verify=verify)
async def get_workspace_owner_token(workspace_id, verify):
admin_token = await get_admin_token(verify=verify)
workspace_owner_token, _ = await get_workspace_auth_details(admin_token=admin_token, workspace_id=workspace_id, verify=verify)
return workspace_owner_token
async def disable_and_delete_ws_resource(resource_path, workspace_id, verify):
workspace_owner_token = await get_workspace_owner_token(workspace_id, verify)
await disable_and_delete_resource(f'/api{resource_path}', workspace_owner_token, verify)
async def disable_and_delete_tre_resource(resource_path, verify):
admin_token = await get_admin_token(verify)
await disable_and_delete_resource(f'/api{resource_path}', admin_token, verify)
# Session scope isn't in effect with python-xdist: https://github.com/microsoft/AzureTRE/issues/2868
@pytest.fixture(scope="session")
async def setup_test_airlock_import_review_workspace_and_guacamole_service(verify) -> Tuple[str, str, str, str, str]:
pre_created_workspace_id = config.TEST_AIRLOCK_IMPORT_REVIEW_WORKSPACE_ID
# Set up
workspace_path, workspace_id = await create_or_get_test_workspace(auth_type="Automatic", verify=verify, template_name=resource_strings.AIRLOCK_IMPORT_REVIEW_WORKSPACE, pre_created_workspace_id=pre_created_workspace_id)
admin_token = await get_admin_token(verify=verify)
workspace_owner_token, _ = await get_workspace_auth_details(admin_token=admin_token, workspace_id=workspace_id, verify=verify)
pre_created_workspace_service_id = config.TEST_AIRLOCK_IMPORT_REVIEW_WORKSPACE_SERVICE_ID
workspace_service_path, workspace_service_id = await create_or_get_test_workpace_service(
workspace_path,
workspace_owner_token=workspace_owner_token,
pre_created_workspace_service_id=pre_created_workspace_service_id,
verify=verify)
yield workspace_path, workspace_id, workspace_service_path, workspace_service_id
# Tear-down in a cascaded way
await clean_up_test_workspace(pre_created_workspace_id=pre_created_workspace_id, workspace_path=workspace_path, verify=verify)
|
AzureTRE/e2e_tests/conftest.py/0
|
{
"file_path": "AzureTRE/e2e_tests/conftest.py",
"repo_id": "AzureTRE",
"token_count": 3152
}
| 114 |
import pytest
from httpx import AsyncClient
import config
pytestmark = pytest.mark.asyncio
@pytest.mark.smoke
async def test_ui() -> None:
endpoint = f"{config.TRE_URL}"
async with AsyncClient(verify=False) as client:
response = await client.get(endpoint)
assert response.status_code == 200
assert "<title>Azure TRE</title>" in response.text
|
AzureTRE/e2e_tests/test_ui.py/0
|
{
"file_path": "AzureTRE/e2e_tests/test_ui.py",
"repo_id": "AzureTRE",
"token_count": 141
}
| 115 |
#!/bin/bash
set -o errexit
set -o pipefail
set -o nounset
# Uncomment this line to see each command for debugging (careful: this will show secrets!)
# set -o xtrace
# Generate required configuration for Porter Azure plugin
# TODO: Remove porter v0 https://github.com/microsoft/AzureTRE/issues/2990
# Documentation here: - https://github.com/vdice/porter-bundles/tree/master/azure-keyvault
cat > /"${PORTER_HOME_V0}"/config.toml << EOF
default-storage = "azurestorage"
default-secrets = "aad_auth"
no-logs = true
[[storage]]
name = "azurestorage"
plugin = "azure.table"
[storage.config]
account="${MGMT_STORAGE_ACCOUNT_NAME}"
resource-group="${MGMT_RESOURCE_GROUP_NAME}"
[[secrets]]
name = "aad_auth"
plugin = "azure.keyvault"
[secrets.config]
vault = "${KEY_VAULT_NAME}"
EOF
# TODO: Remove porter v0 https://github.com/microsoft/AzureTRE/issues/2990
echo "Azure cli login..."
az cloud set --name "${AZURE_ENVIRONMENT}"
az login --identity -u "${VMSS_MSI_ID}"
echo "Checking if porter v0 state exists..."
exists=$(az storage table exists --account-name "${MGMT_STORAGE_ACCOUNT_NAME}" --name "porter" --auth-mode "login" --output tsv)
if [ "${exists}" = "True" ]; then
echo "v0 state exists. Checking if migration was completed once before..."
migration_complete_container_name="porter-migration-completed"
exists=$(az storage container exists --account-name "${MGMT_STORAGE_ACCOUNT_NAME}" --name "${migration_complete_container_name}" --auth-mode "login" --output tsv)
if [ "${exists}" = "False" ]; then
echo "${migration_complete_container_name} container doesn't exist. Running porter migration..."
porter storage migrate --old-home "${PORTER_HOME_V0}" --old-account "azurestorage"
echo "Porter migration complete. Creating ${migration_complete_container_name} container to prevent migrating again in the future..."
az storage container create --account-name "${MGMT_STORAGE_ACCOUNT_NAME}" --name "${migration_complete_container_name}" --auth-mode "login" --fail-on-exist
echo "Migration is done."
else
echo "${migration_complete_container_name} container is present. Skipping porter migration."
fi
else
echo "Porter v0 state doesn't exist."
fi
# Launch the runner
echo "Starting resource processor..."
python -u vmss_porter/runner.py
|
AzureTRE/resource_processor/run.sh/0
|
{
"file_path": "AzureTRE/resource_processor/run.sh",
"repo_id": "AzureTRE",
"token_count": 770
}
| 116 |
{
"schemaType": "CredentialSet",
"schemaVersion": "1.0.1",
"namespace": "",
"name": "arm_auth",
"credentials": [
{
"name": "azure_client_id",
"source": {
"env": "ARM_CLIENT_ID"
}
},
{
"name": "azure_client_secret",
"source": {
"env": "ARM_CLIENT_SECRET"
}
},
{
"name": "azure_subscription_id",
"source": {
"env": "ARM_SUBSCRIPTION_ID"
}
},
{
"name": "azure_tenant_id",
"source": {
"env": "ARM_TENANT_ID"
}
}
]
}
|
AzureTRE/resource_processor/vmss_porter/arm_auth_local_debugging.json/0
|
{
"file_path": "AzureTRE/resource_processor/vmss_porter/arm_auth_local_debugging.json",
"repo_id": "AzureTRE",
"token_count": 317
}
| 117 |
variable "tre_id" {
type = string
description = "Unique TRE ID"
}
variable "tre_resource_id" {
type = string
description = "Resource ID"
}
variable "admin_jumpbox_vm_sku" {
type = string
}
|
AzureTRE/templates/shared_services/admin-vm/terraform/variables.tf/0
|
{
"file_path": "AzureTRE/templates/shared_services/admin-vm/terraform/variables.tf",
"repo_id": "AzureTRE",
"token_count": 86
}
| 118 |
locals {
core_vnet = "vnet-${var.tre_id}"
core_resource_group_name = "rg-${var.tre_id}"
storage_account_name = lower(replace("stg-${var.tre_id}", "-", ""))
topic_name_suffix = "v2-${var.tre_id}"
notification_topic_name = "evgt-airlock-notification-${local.topic_name_suffix}"
airlock_notification_eventgrid_subscription_name = "evgs-airlock-notification"
tre_shared_service_tags = {
tre_id = var.tre_id
tre_shared_service_id = var.tre_resource_id
}
default_tre_url = "https://${data.azurerm_public_ip.app_gateway_ip.fqdn}"
}
|
AzureTRE/templates/shared_services/airlock_notifier/terraform/locals.tf/0
|
{
"file_path": "AzureTRE/templates/shared_services/airlock_notifier/terraform/locals.tf",
"repo_id": "AzureTRE",
"token_count": 386
}
| 119 |
data "azurerm_client_config" "current" {}
data "azurerm_resource_group" "rg" {
name = "rg-${var.tre_id}"
}
data "azurerm_key_vault" "key_vault" {
name = "kv-${var.tre_id}"
resource_group_name = data.azurerm_resource_group.rg.name
}
data "azurerm_subnet" "app_gw_subnet" {
name = "AppGwSubnet"
virtual_network_name = "vnet-${var.tre_id}"
resource_group_name = data.azurerm_resource_group.rg.name
}
data "azurerm_user_assigned_identity" "resource_processor_vmss_id" {
name = "id-vmss-${var.tre_id}"
resource_group_name = "rg-${var.tre_id}"
}
|
AzureTRE/templates/shared_services/certs/terraform/data.tf/0
|
{
"file_path": "AzureTRE/templates/shared_services/certs/terraform/data.tf",
"repo_id": "AzureTRE",
"token_count": 290
}
| 120 |
output "databricks_workspace_name" {
value = azurerm_databricks_workspace.databricks.name
}
|
AzureTRE/templates/shared_services/databricks-auth/terraform/outputs.tf/0
|
{
"file_path": "AzureTRE/templates/shared_services/databricks-auth/terraform/outputs.tf",
"repo_id": "AzureTRE",
"token_count": 35
}
| 121 |
resource "azurerm_route_table" "rt" {
name = "rt-${var.tre_id}"
resource_group_name = local.core_resource_group_name
location = data.azurerm_resource_group.rg.location
disable_bgp_route_propagation = false
tags = local.tre_shared_service_tags
lifecycle { ignore_changes = [tags] }
route {
name = "DefaultRoute"
address_prefix = "0.0.0.0/0"
next_hop_type = "VirtualAppliance"
next_hop_in_ip_address = azurerm_firewall.fw.ip_configuration[0].private_ip_address
}
}
resource "azurerm_subnet_route_table_association" "rt_shared_subnet_association" {
subnet_id = data.azurerm_subnet.shared.id
route_table_id = azurerm_route_table.rt.id
depends_on = [
azurerm_firewall.fw,
azurerm_firewall_policy_rule_collection_group.core,
azurerm_firewall_policy_rule_collection_group.dynamic_network,
azurerm_firewall_policy_rule_collection_group.dynamic_application
]
}
resource "azurerm_subnet_route_table_association" "rt_resource_processor_subnet_association" {
subnet_id = data.azurerm_subnet.resource_processor.id
route_table_id = azurerm_route_table.rt.id
  # Not waiting for the rules will block traffic prematurely.
depends_on = [
azurerm_firewall.fw,
azurerm_firewall_policy_rule_collection_group.core,
azurerm_firewall_policy_rule_collection_group.dynamic_network,
azurerm_firewall_policy_rule_collection_group.dynamic_application
]
}
resource "azurerm_subnet_route_table_association" "rt_web_app_subnet_association" {
subnet_id = data.azurerm_subnet.web_app.id
route_table_id = azurerm_route_table.rt.id
depends_on = [
azurerm_firewall.fw,
azurerm_firewall_policy_rule_collection_group.core,
azurerm_firewall_policy_rule_collection_group.dynamic_network,
azurerm_firewall_policy_rule_collection_group.dynamic_application
]
}
resource "azurerm_subnet_route_table_association" "rt_airlock_processor_subnet_association" {
subnet_id = data.azurerm_subnet.airlock_processor.id
route_table_id = azurerm_route_table.rt.id
depends_on = [
azurerm_firewall.fw,
azurerm_firewall_policy_rule_collection_group.core,
azurerm_firewall_policy_rule_collection_group.dynamic_network,
azurerm_firewall_policy_rule_collection_group.dynamic_application
]
}
resource "azurerm_subnet_route_table_association" "rt_airlock_storage_subnet_association" {
subnet_id = data.azurerm_subnet.airlock_storage.id
route_table_id = azurerm_route_table.rt.id
depends_on = [
azurerm_firewall.fw,
azurerm_firewall_policy_rule_collection_group.core,
azurerm_firewall_policy_rule_collection_group.dynamic_network,
azurerm_firewall_policy_rule_collection_group.dynamic_application
]
}
resource "azurerm_subnet_route_table_association" "rt_airlock_events_subnet_association" {
subnet_id = data.azurerm_subnet.airlock_events.id
route_table_id = azurerm_route_table.rt.id
depends_on = [
azurerm_firewall.fw,
azurerm_firewall_policy_rule_collection_group.core,
azurerm_firewall_policy_rule_collection_group.dynamic_network,
azurerm_firewall_policy_rule_collection_group.dynamic_application
]
}
|
AzureTRE/templates/shared_services/firewall/terraform/routetable.tf/0
|
{
"file_path": "AzureTRE/templates/shared_services/firewall/terraform/routetable.tf",
"repo_id": "AzureTRE",
"token_count": 1344
}
| 122 |
locals {
core_vnet = "vnet-${var.tre_id}"
core_resource_group_name = "rg-${var.tre_id}"
webapp_name = "gitea-${var.tre_id}"
storage_account_name = lower(replace("stg-${var.tre_id}", "-", ""))
keyvault_name = "kv-${var.tre_id}"
version = replace(replace(replace(data.local_file.version.content, "__version__ = \"", ""), "\"", ""), "\n", "")
gitea_allowed_fqdns_list = distinct(compact(split(",", replace(var.gitea_allowed_fqdns, " ", ""))))
sql_sku = {
"B | 4GB 2vCores" = { value = "B_Standard_B2s" },
"GP | 8GB 2vCores" = { value = "GP_Standard_D2ds_v4" },
"BC | 16GB 2vCores" = { value = "MO_Standard_E2ds_v4" }
}
tre_shared_service_tags = {
tre_id = var.tre_id
tre_shared_service_id = var.tre_resource_id
}
webapp_diagnostic_categories_enabled = [
"AppServiceHTTPLogs", "AppServiceConsoleLogs", "AppServiceAppLogs", "AppServiceFileAuditLogs",
"AppServiceAuditLogs", "AppServiceIPSecAuditLogs", "AppServicePlatformLogs", "AppServiceAntivirusScanAuditLogs"
]
}
|
AzureTRE/templates/shared_services/gitea/terraform/locals.tf/0
|
{
"file_path": "AzureTRE/templates/shared_services/gitea/terraform/locals.tf",
"repo_id": "AzureTRE",
"token_count": 509
}
| 123 |
data "azurerm_virtual_network" "core" {
name = local.core_vnet
resource_group_name = local.core_resource_group_name
}
data "azurerm_subnet" "shared" {
resource_group_name = local.core_resource_group_name
virtual_network_name = local.core_vnet
name = "SharedSubnet"
}
data "azurerm_key_vault" "kv" {
name = "kv-${var.tre_id}"
resource_group_name = local.core_resource_group_name
}
data "azurerm_key_vault_certificate" "nexus_cert" {
name = var.ssl_cert_name
key_vault_id = data.azurerm_key_vault.kv.id
}
data "azurerm_storage_account" "nexus" {
name = local.storage_account_name
resource_group_name = local.core_resource_group_name
}
data "azurerm_resource_group" "rg" {
name = local.core_resource_group_name
}
data "azurerm_public_ip" "app_gateway_ip" {
name = "pip-agw-${var.tre_id}"
resource_group_name = local.core_resource_group_name
}
data "azurerm_private_dns_zone" "nexus" {
name = "nexus-${data.azurerm_public_ip.app_gateway_ip.fqdn}"
resource_group_name = local.core_resource_group_name
}
|
AzureTRE/templates/shared_services/sonatype-nexus-vm/terraform/data.tf/0
|
{
"file_path": "AzureTRE/templates/shared_services/sonatype-nexus-vm/terraform/data.tf",
"repo_id": "AzureTRE",
"token_count": 515
}
| 124 |
resource "random_password" "password" {
length = 16
lower = true
min_lower = 1
upper = true
min_upper = 1
numeric = true
min_numeric = 1
special = true
min_special = 1
override_special = "_%@"
}
resource "azurerm_key_vault_secret" "aml_password" {
name = "cp-${local.short_service_id}"
value = random_password.password.result
key_vault_id = data.azurerm_key_vault.ws.id
tags = local.tre_workspace_service_tags
lifecycle { ignore_changes = [tags] }
}
resource "azapi_resource" "compute_cluster" {
type = "Microsoft.MachineLearningServices/workspaces/computes@2022-10-01"
name = "cp-${local.short_service_id}"
location = data.azurerm_resource_group.ws.location
parent_id = azurerm_machine_learning_workspace.aml_workspace.id
tags = local.tre_workspace_service_tags
lifecycle { ignore_changes = [tags] }
identity {
type = "SystemAssigned"
}
body = jsonencode({
properties = {
computeLocation = data.azurerm_resource_group.ws.location
description = "Default Compute Cluster"
disableLocalAuth = true
computeType = "AmlCompute"
properties = {
enableNodePublicIp = false
isolatedNetwork = false # isolatedNetwork = true for internal MS usage only
osType = "Linux"
remoteLoginPortPublicAccess = "Disabled"
scaleSettings = {
maxNodeCount = 1
minNodeCount = 0
nodeIdleTimeBeforeScaleDown = "PT10M"
}
subnet = {
id = azurerm_subnet.aml.id
}
vmPriority = "Dedicated"
vmSize = "Standard_DS2_v2"
}
}
})
depends_on = [
azurerm_private_endpoint.mlpe,
azurerm_private_endpoint.blobpe,
azurerm_private_endpoint.filepe
]
response_export_values = ["*"]
}
# This seems to be added automatically
# resource "azurerm_role_assignment" "compute_cluster_acr_pull" {
# scope = azurerm_container_registry.acr.id
# role_definition_name = "AcrPull"
# principal_id = jsondecode(azapi_resource.compute_cluster.output).identity.principalId
# }
resource "azapi_update_resource" "set_image_build_compute" {
type = "Microsoft.MachineLearningServices/workspaces@2022-10-01"
name = azurerm_machine_learning_workspace.aml_workspace.name
parent_id = data.azurerm_resource_group.ws.id
body = jsonencode({
properties = {
imageBuildCompute = jsondecode(azapi_resource.compute_cluster.output).name
}
})
depends_on = [
azapi_resource.compute_cluster
#,
#azurerm_role_assignment.compute_cluster_acr_pull
]
}
|
AzureTRE/templates/workspace_services/azureml/terraform/compute.tf/0
|
{
"file_path": "AzureTRE/templates/workspace_services/azureml/terraform/compute.tf",
"repo_id": "AzureTRE",
"token_count": 1245
}
| 125 |
---
schemaVersion: 1.0.0
name: tre-user-resource-aml-compute-instance
version: 0.5.7
description: "Azure Machine Learning Compute Instance"
registry: azuretre
dockerfile: Dockerfile.tmpl
credentials:
- name: auth_tenant_id
env: AUTH_TENANT_ID
- name: azure_tenant_id
env: ARM_TENANT_ID
- name: azure_subscription_id
env: ARM_SUBSCRIPTION_ID
- name: azure_client_id
env: ARM_CLIENT_ID
- name: azure_client_secret
env: ARM_CLIENT_SECRET
parameters:
- name: id
type: string
- name: parent_service_id
type: string
- name: workspace_id
type: string
- name: tre_id
type: string
- name: vm_size
type: string
default: "Standard_DS2_v3"
- name: user_object_id
type: string
- name: tfstate_resource_group_name
type: string
description: "Resource group containing the Terraform state storage account"
env: MGMT_RESOURCE_GROUP_NAME
- name: tfstate_storage_account_name
type: string
description: "The name of the Terraform state storage account"
env: MGMT_STORAGE_ACCOUNT_NAME
- name: tfstate_container_name
type: string
default: "tfstate"
description: "The name of the Terraform state storage container"
env: TERRAFORM_STATE_CONTAINER_NAME
- name: arm_use_msi
env: ARM_USE_MSI
type: boolean
default: false
- name: arm_environment
env: ARM_ENVIRONMENT
type: string
default: "public"
mixins:
- exec
- az:
clientVersion: 2.37.0
- terraform:
clientVersion: 1.3.6
install:
- terraform:
description: "Deploy service"
vars:
workspace_id: ${ bundle.parameters.workspace_id }
tre_id: ${ bundle.parameters.tre_id }
tre_resource_id: ${ bundle.parameters.id }
parent_service_id: ${ bundle.parameters.parent_service_id }
vm_size_sku: ${ bundle.parameters.vm_size }
auth_tenant_id: ${ bundle.credentials.auth_tenant_id }
user_object_id: ${ bundle.parameters.user_object_id }
backendConfig:
resource_group_name: ${ bundle.parameters.tfstate_resource_group_name }
storage_account_name: ${ bundle.parameters.tfstate_storage_account_name }
container_name: ${ bundle.parameters.tfstate_container_name }
key: tre-user-resource-aml-compute-instance-${ bundle.parameters.id }
upgrade:
- terraform:
description: "Deploy service"
vars:
workspace_id: ${ bundle.parameters.workspace_id }
tre_id: ${ bundle.parameters.tre_id }
tre_resource_id: ${ bundle.parameters.id }
parent_service_id: ${ bundle.parameters.parent_service_id }
vm_size_sku: ${ bundle.parameters.vm_size }
auth_tenant_id: ${ bundle.credentials.auth_tenant_id }
user_object_id: ${ bundle.parameters.user_object_id }
backendConfig:
resource_group_name: ${ bundle.parameters.tfstate_resource_group_name }
storage_account_name: ${ bundle.parameters.tfstate_storage_account_name }
container_name: ${ bundle.parameters.tfstate_container_name }
key: tre-user-resource-aml-compute-instance-${ bundle.parameters.id }
uninstall:
- terraform:
description: "Uninstall service"
vars:
workspace_id: ${ bundle.parameters.workspace_id }
tre_id: ${ bundle.parameters.tre_id }
tre_resource_id: ${ bundle.parameters.id }
parent_service_id: ${ bundle.parameters.parent_service_id }
vm_size_sku: ${ bundle.parameters.vm_size }
auth_tenant_id: ${ bundle.credentials.auth_tenant_id }
user_object_id: ${ bundle.parameters.user_object_id }
backendConfig:
resource_group_name: ${ bundle.parameters.tfstate_resource_group_name }
storage_account_name: ${ bundle.parameters.tfstate_storage_account_name }
container_name: ${ bundle.parameters.tfstate_container_name }
key: tre-user-resource-aml-compute-instance-${ bundle.parameters.id }
|
AzureTRE/templates/workspace_services/azureml/user_resources/aml_compute/porter.yaml/0
|
{
"file_path": "AzureTRE/templates/workspace_services/azureml/user_resources/aml_compute/porter.yaml",
"repo_id": "AzureTRE",
"token_count": 1581
}
| 126 |
# This file is maintained automatically by "terraform init".
# Manual edits may be lost in future updates.
provider "registry.terraform.io/azure/azapi" {
version = "1.1.0"
constraints = "1.1.0"
hashes = [
"h1:IR+AHCwfjl1c0baWwfOwZ6QZtHj41H2syTgHkJtAr/M=",
"zh:2a25df6325a49f9e821f0b02c7da86167fc19a3bac647cd1edf231300f29d077",
"zh:2b443a836a39724663fe455d4deee408ff3a2d9a8b86f8408aa7db2e8aa743f8",
"zh:364ed09ddfc50d9bed8d930f7de489cb654a9908feb139413a097823a50075fd",
"zh:523bc005f56ae785867d230d55c29f59db4b599dbc6c38b4d03ea55a79458916",
"zh:60ded375fdb305b60bcb4d9e596dbb222cab166bad1b4958199b05a72aaeacfd",
"zh:61e69c58642fead6814e511c872b7c0a6478ec6af4ab758b4512607d910ac078",
"zh:823b2154ae2262dabcbd11aac992e3cc29eae0f7baa96bee1e3e2fe1ece8730b",
"zh:870ea9cc24807ef5142e4cad0281dac7173f7b6bf818a79762b6c690d12d4c4b",
"zh:9094ae76ed66cb328a4f35bd18b9140fb6fc6859c2e46431ec73c018bcb58d96",
"zh:d89149cfd01cb70012459536b4d36490b58e43312440562e5910bd5160537858",
"zh:dba7ec06171ca062fc423ba5b4776a5600444e45e57f4d1cb043bdc3eee538b7",
"zh:ff5bd6883d9ac8334e043434246357a55107411e9a962856c1d17e47ee15ac37",
]
}
provider "registry.terraform.io/databricks/databricks" {
version = "1.5.0"
constraints = "1.5.0"
hashes = [
"h1:UJe5L/BteOU7M5ewRLzuUjiewYFLF695eLp3hMKVR6M=",
"zh:0fa9ca13d977a8dcb46254f07c9be731891468f5b423f09cb51da97eaace8e2b",
"zh:3a648e4f8ece8aab05acfc7759b4e4cd153ecd29b3ed0e00d7f1a3a19911f7d8",
"zh:3b052b98b5e22ae4e81e4b667ae5cee9a68bb1750d22546ae9eff16c8d6a294a",
"zh:4320b165218cb39f0ad313d483bba20d0de9e48db0c1467fd0e3a0afb2c02012",
"zh:588c9fdbf35ca9c430cafb5dbd90f34a165744e3514212d0f2c07a3387d8b339",
"zh:b50f8eb38b556ddfa24a76b4113e8a84b778a9a0bb4b4ba5fdc3edca59198d2a",
"zh:ca5186443ac672f5566d9c9b5727f55124a8642dd3949e973790b9195e6b306a",
"zh:db817409b94c34c9b9b5e109751eff7fbca90d08b407a099630c8ec79b6c6d4b",
"zh:edf04424c68db603bf2473e2f14f3e3ad217feb84fc2c7debb6641d15886f8e3",
"zh:ef374f84c41fe529bff1ec3274eb7fe5dd8184c5e71f3e6d99a6adaff6eab82e",
]
}
provider "registry.terraform.io/hashicorp/azurerm" {
version = "3.40.0"
constraints = "3.40.0"
hashes = [
"h1:/Jbhw/zNAsDYDoASaG6w+0KZyay9BkUVOpR8b7m0CsA=",
"zh:00fa6dc05bf2643c6a3c741edb7d88263698086835a8a613f1d7bd76d1b918fd",
"zh:0da9b788e773272a7aa9d59bd9e3d5842edd4acc8c3895bea469e66dc14205a0",
"zh:25a8c39d1f042fc7c83ba9dd745c3569ea9e577fadb57563a575fb115ac2b9f1",
"zh:4423666dbeae8bc22c6e8898ffbb88745681dc27668ca9104b665dd7f3d7292c",
"zh:78c07308e7407b558d15737a98fb5eaf15529d297fc3798de6a7d61e0466e2e3",
"zh:894aca7e6f4f331ee8eb51957a180dc03d399d2b1727e0d7842e9b3f022a8c6a",
"zh:bb0e620c2161b4c4892a6f50b1c4c69ed70f66bb5e92543a03d79d0e4b1d9441",
"zh:c7d8e6a791159ca63b30908c9efe72ab65f60d64b30f0c1eb5a64972f4994844",
"zh:d04c11bfd346c1ac34d16bbdca70b23b006e822f6beb236b85375e8343888eb4",
"zh:f4edea9660327c7c70a823d786fd1b1c1b186c8759770447f63da72f23e1a73c",
"zh:f569b65999264a9416862bca5cd2a6177d94ccb0424f3a4ef424428912b9cb3c",
"zh:f986e268949cf445ff53a66af48a87c6f6dba5964e8a5b1dc0ea02afabdd71f7",
]
}
provider "registry.terraform.io/hashicorp/dns" {
version = "3.2.3"
constraints = "3.2.3"
hashes = [
"h1:ODcR+vWOhCAJ2iCChZMVdRglNCx07VNr67OPLRPZyDY=",
"zh:03a304f4b76ac6c8bebffddcdf555bf77578a7f638948a681589def32e140cb8",
"zh:08c7d2498b747054e9c9df7838bfa4e4a6b5d63e2d29f0457247e384f792d56c",
"zh:20adf489819ba51ba9d9d15da2dbe1fecb92491b3d0dd80096873e5e84d8b4bd",
"zh:2959ff209d2578456ca490672b82864d483b9e9db9efc8e4ffada06e23017609",
"zh:3ecd0b22db79550fb1108ff7bd00c4066825e8c23bb64e3cc8d9b8102e8caa45",
"zh:6e53a9232245b4be52b56b078f15f270b89afe6abb9c9b8baab4a282fe0cf9f8",
"zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3",
"zh:80437bdfa08eb90f70105b52cb06799a8f7967313654b43d28d7f654fcd4edc1",
"zh:816ddaca0ecc29e287376e5b0b8b0729ee13f23a9d74bfad5b14b7983e1a1775",
"zh:82d8ac7ad00c1a71d0a7c1aca03bb59a6b51128f895242df80b1f3d016c3c51a",
"zh:ec9243b8bd80693a6eeeea5d4f7f4e6f57bd44ae796d6d5b1a91790e359f8a61",
"zh:fd821adbfb03a2c9eac111ff27a32b3a5523b18f80333008de85482d3bbea645",
]
}
|
AzureTRE/templates/workspace_services/databricks/terraform/.terraform.lock.hcl/0
|
{
"file_path": "AzureTRE/templates/workspace_services/databricks/terraform/.terraform.lock.hcl",
"repo_id": "AzureTRE",
"token_count": 2611
}
| 127 |
#!/bin/bash
set -e
function usage() {
cat <<USAGE
Usage: $0 [-t --tre-id] [-g --github-repo]
Options:
-t, --tre-id ID of the TRE
-g, --github-repo URL to the github repo to clone e.g "https://github.com/Microsoft/AzureTRE"
USAGE
exit 1
}
# if no arguments are provided, return usage function
if [ $# -eq 0 ]; then
usage # run usage function
fi
while [ "$1" != "" ]; do
case $1 in
-t | --tre-id)
shift
tre_id=$1
;;
-g | --github-repo)
shift
github_repo=$1
;;
esac
if [[ -z "$2" ]]; then
# if no more args then stop processing
break
fi
shift # remove the current value for `$1` and use the next
done
# These variables need to be set
# ******************************
# tre-id=
# github-repo=
username=giteaadmin
keyVaultName="kv-$tre_id"
tokenSecretName="gitea-$tre_id-admin-token"
pwdSecretName="gitea-$tre_id-administrator-password"
giteaUrl="https://gitea-$tre_id.azurewebsites.net"
# Check if access token exists
tokenExists=$(az keyvault secret list --vault-name "$keyVaultName" --query "contains([].id, 'https://$keyVaultName.vault.azure.net/secrets/$tokenSecretName')")
if $tokenExists
then
response=$(az keyvault secret show --vault-name "$keyVaultName" --name "$tokenSecretName")
token=$(jq -r '.value' <<< "$response")
fi
if [ -z "$token" ] || [ "$token" = "null" ]
then
# Get admin password from keyvault
response=$(az keyvault secret show --vault-name "$keyVaultName" --name "$pwdSecretName")
password=$(jq -r '.value' <<< "$response")
credentials=$username:$password
data='{"name": "'${username}'"}'
url=${giteaUrl}/api/v1/users/${username}/tokens
# Create new access token
response=$(curl -X POST -H "Content-Type: application/json" -k -d "${data}" -u "${credentials}" "${url}")
token=$(jq -r '.sha1' <<< "$response")
# Store access token to keyvault
az keyvault secret set --name "$tokenSecretName" --vault-name "$keyVaultName" --value "$token" > /dev/null
fi
# Repository migration parameters
repo='{
"clone_addr": "'${github_repo}'",
"issues": true,
"labels": true,
"lfs": true,
"milestones": true,
"mirror": true,
"mirror_interval": "12h0m0s",
"private": false,
"pull_requests": true,
"releases": true,
"repo_name": "'${github_repo##*/}'",
"service": "github",
"wiki": true
}'
# Mirror repository
url="${giteaUrl}/api/v1/repos/migrate?access_token=${token}"
response=$(curl -X POST "${url}" -H "accept: application/json" -H "Content-Type: application/json" -k -d "${repo}")
echo "$response"
# Additional settings
repo_settings='{
"permissions": {
"admin": true,
"push": false,
"pull": true
}
}'
# Set additional repository parameters
url="${giteaUrl}/api/v1/repos/${username}/${github_repo##*/}?access_token=${token}"
response=$(curl -X PATCH "${url}" -H "accept: application/json" -H "Content-Type: application/json" -k -d "${repo_settings}")
echo "$response"
|
AzureTRE/templates/workspace_services/gitea/gitea_migrate_repo.sh/0
|
{
"file_path": "AzureTRE/templates/workspace_services/gitea/gitea_migrate_repo.sh",
"repo_id": "AzureTRE",
"token_count": 1195
}
| 128 |
# Local .terraform directories
**/.terraform/*
# TF backend files
**/*_backend.tf
Dockerfile.tmpl
.env*
terraform/deploy.sh
terraform/destroy.sh
guacamole-server/
!guacamole-server/docker/version.txt
user_resources/
|
AzureTRE/templates/workspace_services/guacamole/.dockerignore/0
|
{
"file_path": "AzureTRE/templates/workspace_services/guacamole/.dockerignore",
"repo_id": "AzureTRE",
"token_count": 86
}
| 129 |
# This is ssh server systemwide configuration file.
#
# /etc/sshd_config
Port 2222
ListenAddress 0.0.0.0
LoginGraceTime 180
X11Forwarding yes
Ciphers aes128-cbc,3des-cbc,aes256-cbc,aes128-ctr,aes192-ctr,aes256-ctr
MACs hmac-sha1,hmac-sha1-96
StrictModes yes
SyslogFacility DAEMON
PasswordAuthentication yes
PermitEmptyPasswords no
PermitRootLogin yes
Subsystem sftp internal-sftp
|
AzureTRE/templates/workspace_services/guacamole/guacamole-server/docker/sshd_config/0
|
{
"file_path": "AzureTRE/templates/workspace_services/guacamole/guacamole-server/docker/sshd_config",
"repo_id": "AzureTRE",
"token_count": 173
}
| 130 |
<?xml version="1.0" encoding="UTF-8"?>
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
-->
<configuration>
<!-- Default appender -->
<appender name="GUAC-DEFAULT" class="ch.qos.logback.core.ConsoleAppender">
<encoder>
<pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
</encoder>
</appender>
<root level="debug">
<appender-ref ref="GUAC-DEFAULT" />
</root>
</configuration>
|
AzureTRE/templates/workspace_services/guacamole/guacamole-server/guacamole-auth-azure/src/main/resources/logback.xml/0
|
{
"file_path": "AzureTRE/templates/workspace_services/guacamole/guacamole-server/guacamole-auth-azure/src/main/resources/logback.xml",
"repo_id": "AzureTRE",
"token_count": 407
}
| 131 |
data "azurerm_resource_group" "ws" {
name = "rg-${var.tre_id}-ws-${local.short_workspace_id}"
}
data "azurerm_virtual_network" "ws" {
name = "vnet-${var.tre_id}-ws-${local.short_workspace_id}"
resource_group_name = data.azurerm_resource_group.ws.name
}
data "azurerm_subnet" "services" {
name = "ServicesSubnet"
virtual_network_name = data.azurerm_virtual_network.ws.name
resource_group_name = data.azurerm_resource_group.ws.name
}
data "azurerm_key_vault" "ws" {
name = local.keyvault_name
resource_group_name = data.azurerm_resource_group.ws.name
}
data "azurerm_linux_web_app" "guacamole" {
name = "guacamole-${var.tre_id}-ws-${local.short_workspace_id}-svc-${local.short_parent_id}"
resource_group_name = data.azurerm_resource_group.ws.name
}
|
AzureTRE/templates/workspace_services/guacamole/user_resources/guacamole-azure-import-reviewvm/terraform/data.tf/0
|
{
"file_path": "AzureTRE/templates/workspace_services/guacamole/user_resources/guacamole-azure-import-reviewvm/terraform/data.tf",
"repo_id": "AzureTRE",
"token_count": 379
}
| 132 |
{
"$schema": "http://json-schema.org/draft-07/schema",
"$id": "https://github.com/microsoft/AzureTRE/templates/workspace_services/guacamole/user_resources/guacamole-azure-linuxvm/template_schema.json",
"type": "object",
"title": "Linux Virtual Machine",
"description": "Linux virtual machine.",
"required": [
],
"authorizedRoles": [
"WorkspaceOwner", "WorkspaceResearcher"
],
"properties": {
"os_image": {
"$id": "#/properties/os_image",
"type": "string",
"title": "Linux image",
"description": "Select Linux image to use for VM",
"enum": [
"Ubuntu 18.04",
"Ubuntu 18.04 Data Science VM"
]
},
"vm_size": {
"$id": "#/properties/vm_size",
"type": "string",
"title": "VM Size",
"description": "Select size of VM",
"enum": [
"2 CPU | 8GB RAM",
"4 CPU | 16GB RAM",
"8 CPU | 32GB RAM",
"16 CPU | 64GB RAM"
],
"updateable": true
},
"shared_storage_access": {
"$id": "#/properties/shared_storage_access",
"type": "boolean",
"title": "Shared storage",
"default": true,
"description": "Enable access to shared storage"
}
}
}
|
AzureTRE/templates/workspace_services/guacamole/user_resources/guacamole-azure-linuxvm/template_schema.json/0
|
{
"file_path": "AzureTRE/templates/workspace_services/guacamole/user_resources/guacamole-azure-linuxvm/template_schema.json",
"repo_id": "AzureTRE",
"token_count": 532
}
| 133 |
{
"schemaType": "ParameterSet",
"schemaVersion": "1.0.1",
"namespace": "",
"name": "tre-service-guacamole-windowsvm",
"parameters": [
{
"name": "workspace_id",
"source": {
"env": "WORKSPACE_ID"
}
},
{
"name": "parent_service_id",
"source": {
"env": "PARENT_SERVICE_ID"
}
},
{
"name": "tre_id",
"source": {
"env": "TRE_ID"
}
},
{
"name": "tfstate_container_name",
"source": {
"env": "TERRAFORM_STATE_CONTAINER_NAME"
}
},
{
"name": "tfstate_resource_group_name",
"source": {
"env": "MGMT_RESOURCE_GROUP_NAME"
}
},
{
"name": "tfstate_storage_account_name",
"source": {
"env": "MGMT_STORAGE_ACCOUNT_NAME"
}
},
{
"name": "id",
"source": {
"env": "ID"
}
},
{
"name": "os_image",
"source": {
"env": "OS_IMAGE"
}
},
{
"name": "shared_storage_access",
"source": {
"env": "SHARED_STORAGE_ACCESS"
}
},
{
"name": "shared_storage_name",
"source": {
"env": "SHARED_STORAGE_NAME"
}
},
{
"name": "vm_size",
"source": {
"env": "VM_SIZE"
}
},
{
"name": "image_gallery_id",
"source": {
"env": "IMAGE_GALLERY_ID"
}
},
{
"name": "azure_environment",
"source": {
"env": "AZURE_ENVIRONMENT"
}
},
{
"name": "arm_environment",
"source": {
"env": "ARM_ENVIRONMENT"
}
}
]
}
|
AzureTRE/templates/workspace_services/guacamole/user_resources/guacamole-azure-windowsvm/parameters.json/0
|
{
"file_path": "AzureTRE/templates/workspace_services/guacamole/user_resources/guacamole-azure-windowsvm/parameters.json",
"repo_id": "AzureTRE",
"token_count": 939
}
| 134 |
{
"schemaType": "ParameterSet",
"schemaVersion": "1.0.1",
"namespace": "",
"name": "tre-workspace-service-health",
"parameters": [
{
"name": "workspace_id",
"source": {
"env": "WORKSPACE_ID"
}
},
{
"name": "tre_id",
"source": {
"env": "TRE_ID"
}
},
{
"name": "id",
"source": {
"env": "ID"
}
},
{
"name": "tfstate_container_name",
"source": {
"env": "TERRAFORM_STATE_CONTAINER_NAME"
}
},
{
"name": "tfstate_resource_group_name",
"source": {
"env": "MGMT_RESOURCE_GROUP_NAME"
}
},
{
"name": "tfstate_storage_account_name",
"source": {
"env": "MGMT_STORAGE_ACCOUNT_NAME"
}
},
{
"name": "deploy_fhir",
"source": {
"env": "DEPLOY_FHIR"
}
},
{
"name": "fhir_kind",
"source": {
"env": "FHIR_KIND"
}
},
{
"name": "deploy_dicom",
"source": {
"env": "DEPLOY_DICOM"
}
},
{
"name": "aad_authority_url",
"source": {
"env": "AAD_AUTHORITY_URL"
}
},
{
"name": "arm_environment",
"source": {
"env": "ARM_ENVIRONMENT"
}
}
]
}
|
AzureTRE/templates/workspace_services/health-services/parameters.json/0
|
{
"file_path": "AzureTRE/templates/workspace_services/health-services/parameters.json",
"repo_id": "AzureTRE",
"token_count": 755
}
| 135 |
{
"schemaType": "ParameterSet",
"schemaVersion": "1.0.1",
"namespace": "",
"name": "tre-service-innereye",
"parameters": [
{
"name": "id",
"source": {
"env": "ID"
}
},
{
"name": "workspace_id",
"source": {
"env": "WORKSPACE_ID"
}
},
{
"name": "tre_id",
"source": {
"env": "TRE_ID"
}
},
{
"name": "mgmt_acr_name",
"source": {
"env": "ACR_NAME"
}
},
{
"name": "inference_sp_client_id",
"source": {
"env": "INFERENCE_SP_CLIENT_ID"
}
},
{
"name": "inference_sp_client_secret",
"source": {
"env": "INFERENCE_SP_CLIENT_SECRET"
}
},
{
"name": "tfstate_container_name",
"source": {
"env": "TERRAFORM_STATE_CONTAINER_NAME"
}
},
{
"name": "tfstate_resource_group_name",
"source": {
"env": "MGMT_RESOURCE_GROUP_NAME"
}
},
{
"name": "tfstate_storage_account_name",
"source": {
"env": "MGMT_STORAGE_ACCOUNT_NAME"
}
},
{
"name": "azure_environment",
"source": {
"env": "AZURE_ENVIRONMENT"
}
},
{
"name": "arm_environment",
"source": {
"env": "ARM_ENVIRONMENT"
}
}
]
}
|
AzureTRE/templates/workspace_services/innereye/parameters.json/0
|
{
"file_path": "AzureTRE/templates/workspace_services/innereye/parameters.json",
"repo_id": "AzureTRE",
"token_count": 763
}
| 136 |
{
"schemaVersion": "1.0.0-DRAFT+b6c701f",
"name": "azure",
"created": "2021-06-03T11:31:05.7314113Z",
"modified": "2021-06-03T11:31:05.7314113Z",
"credentials": [
{
"name": "azure_client_id",
"source": {
"env": "ARM_CLIENT_ID"
}
},
{
"name": "azure_client_secret",
"source": {
"env": "ARM_CLIENT_SECRET"
}
},
{
"name": "azure_subscription_id",
"source": {
"env": "ARM_SUBSCRIPTION_ID"
}
},
{
"name": "azure_tenant_id",
"source": {
"env": "ARM_TENANT_ID"
}
}
]
}
|
AzureTRE/templates/workspace_services/mlflow/azure.json/0
|
{
"file_path": "AzureTRE/templates/workspace_services/mlflow/azure.json",
"repo_id": "AzureTRE",
"token_count": 352
}
| 137 |
output "mysql_fqdn" {
value = azurerm_mysql_flexible_server.mysql.fqdn
}
|
AzureTRE/templates/workspace_services/mysql/terraform/outputs.tf/0
|
{
"file_path": "AzureTRE/templates/workspace_services/mysql/terraform/outputs.tf",
"repo_id": "AzureTRE",
"token_count": 35
}
| 138 |
/****** Drop Tables ******/
CREATE TABLE #tbl
WITH
( DISTRIBUTION = ROUND_ROBIN
)
AS
SELECT ROW_NUMBER() OVER(ORDER BY (SELECT NULL)) AS Sequence
, [name]
, 'DROP TABLE ' + N'$(RESULTS_SCHEMA_NAME)' + '.' + name AS sql_code
FROM sys.tables
WHERE schema_id = (select schema_id from sys.schemas where name = N'$(RESULTS_SCHEMA_NAME)')
;
DECLARE @nbr_statements INT = (SELECT COUNT(*) FROM #tbl)
, @i INT = 1
;
WHILE @i <= @nbr_statements
BEGIN
DECLARE @sql_code NVARCHAR(4000) = (SELECT sql_code FROM #tbl WHERE Sequence = @i);
EXEC sp_executesql @sql_code;
SET @i +=1;
END
DROP TABLE #tbl;
/****** Drop Schemas ******/
DROP SCHEMA [$(RESULTS_SCHEMA_NAME)];
DROP SCHEMA [$(TEMP_SCHEMA_NAME)];
|
AzureTRE/templates/workspace_services/ohdsi/sql/drop_synapse_schemas.sql/0
|
{
"file_path": "AzureTRE/templates/workspace_services/ohdsi/sql/drop_synapse_schemas.sql",
"repo_id": "AzureTRE",
"token_count": 328
}
| 139 |
{
"$schema": "http://json-schema.org/draft-07/schema",
"$id": "https://github.com/microsoft/AzureTRE/templates/workspaces/base/template_schema.json",
"type": "object",
"title": "Base Workspace",
"description": "This workspace template is the foundation for TRE workspaces.",
"required": [
"auth_type",
"address_space_size"
],
"authorizedRoles": [],
"properties": {
"shared_storage_quota": {
"type": "integer",
"title": "Shared Storage Quota",
"description": "Quota (in GB) to set for the VM Shared Storage."
},
"enable_airlock": {
"type": "boolean",
"title": "Enable Airlock",
"description": "Allow safe import and export to the workspace",
"default": true,
"updateable": true
},
"app_service_plan_sku": {
"type": "string",
"title": "App Service Plan SKU",
"description": "The SKU that will be used when deploying an Azure App Service Plan.",
"default": "P1v3",
"enum": [
"P1v3",
"P1v2",
"S1"
]
},
"address_space_size": {
"type": "string",
"title": "Address space size",
"description": "Network address size (small, medium, large or custom) to be used by the workspace.",
"default": "small",
"enum": [
"small",
"medium",
"large",
"custom"
]
},
"address_spaces": {
"type": "array",
"title": "Address spaces",
"description": "Network address space to be used by the workspace.",
"updateable": true
},
"auth_type": {
"type": "string",
"title": "Workspace Authentication Type",
"description": "",
"default": "Automatic",
"enum": [
"Automatic",
"Manual"
],
"updateable": true
}
},
"allOf": [
{
"if": {
"properties": {
"enable_airlock": {
"const": true
}
},
"required": [
"enable_airlock"
]
},
"then": {
"properties": {
"configure_review_vms": {
"type": "boolean",
"title": "Configure Review VMs",
"description": "Allow TRE to automatically create and delete review VMs for airlock approvals",
"default": false,
"updateable": true
}
}
}
},
{
"if": {
"properties": {
"enable_airlock": {
"const": true
},
"configure_review_vms": {
"const": true
}
},
"required": [
"enable_airlock",
"configure_review_vms"
]
},
"then": {
"properties": {
"airlock_review_config": {
"type": "object",
"title": "Airlock Review Config",
"default": null,
"description": "Configuration for Airlock Review feature. Needs to be set up after workspace creation",
"updateable": true,
"properties": {
"import": {
"title": "Import Review Settings",
"required": [
"import_vm_workspace_id",
"import_vm_workspace_service_id",
"import_vm_user_resource_template_name"
],
"properties": {
"import_vm_workspace_id": {
"title": "Import Review Workspace ID",
"type": "string",
"description": "ID for Import Review workspace"
},
"import_vm_workspace_service_id": {
"title": "Import Review Workspace Service ID",
"type": "string",
"description": "ID for Workspace Service ID where to deploy Review user resources"
},
"import_vm_user_resource_template_name": {
"title": "Import Review VM User Resource Template Name",
"type": "string",
"description": "Template Name for User Resource for reviewing Import Requests",
"examples": [
"tre-service-guacamole-import-reviewvm"
]
}
}
},
"export": {
"title": "Export Review VM Settings",
"required": [
"export_vm_workspace_service_id",
"export_vm_user_resource_template_name"
],
"properties": {
"export_vm_workspace_service_id": {
"title": "Export Review Workspace Service ID",
"type": "string",
"description": "ID for Workspace Service ID where to deploy Review user resources"
},
"export_vm_user_resource_template_name": {
"title": "Export Review VM User Resource Template Name",
"type": "string",
"description": "Template Name for User Resource for reviewing Export Requests",
"examples": [
"tre-service-guacamole-export-reviewvm"
]
}
}
}
}
}
}
}
},
{
"if": {
"properties": {
"address_space_size": {
"enum": [
"custom"
]
}
},
"required": [
"address_space_size"
]
},
"then": {
"properties": {
"address_space": {
"type": "string",
"title": "Address space",
"description": "Network address space to be used by the workspace if 'Address space size' is custom."
}
},
"required": [
"address_space"
]
}
},
{
"if": {
"properties": {
"auth_type": {
"const": "Manual"
}
},
"required": [
"auth_type"
]
},
"then": {
"properties": {
"client_id": {
"type": "string",
"title": "Application (Client) ID",
"description": "The AAD Application Registration ID for the workspace.",
"updateable": true
},
"client_secret": {
"type": "string",
"title": "Application (Client) Secret",
"description": "The AAD Application Registration secret for the workspace. This value will be stored in the Workspace Key Vault.",
"sensitive": true,
"updateable": true
}
},
"required": [
"client_id"
]
},
"else": {
"properties": {
"create_aad_groups": {
"type": "boolean",
"title": "Create AAD Groups for each workspace role",
"description": "Create AAD Groups for the workspace roles. If this is set to true, the workspace will create new AAD Groups.",
"default": false,
"updateable": true
},
"aad_redirect_uris": {
"type": "array",
"title": "AAD Redirect URIs",
"description": "Redirect URIs for the AAD app in Automatic Auth mode",
"updateable": true,
"items": {
"title": "items",
"type": "object",
"required": [
"name",
"value"
],
"properties": {
"name": {
"title": "name",
"type": "string",
"description": "Redirect URI Name",
"examples": [
"My Redirect URI"
],
"pattern": "^.*$"
},
"value": {
"title": "value",
"type": "string",
"description": "Redirect URI Value",
"examples": [
"https://a-domain-name.com/oauth/"
]
}
}
}
}
}
}
}
],
"actions": [],
"customActions": [],
"pipeline": null,
"uiSchema": {
"aad_redirect_uris": {
"classNames": "tre-hidden"
},
"address_spaces": {
"classNames": "tre-hidden"
},
"ui:order": [
"display_name",
"description",
"overview",
"shared_storage_quota",
"app_service_plan_sku",
"address_space_size",
"address_space",
"auth_type",
"create_aad_groups",
"client_id",
"client_secret",
"enable_airlock",
"configure_review_vms",
"airlock_review_config",
"*"
]
}
}
|
AzureTRE/templates/workspaces/base/template_schema.json/0
|
{
"file_path": "AzureTRE/templates/workspaces/base/template_schema.json",
"repo_id": "AzureTRE",
"token_count": 4738
}
| 140 |
output "app_insights_connection_string" {
# value = azurerm_application_insights.workspace.connection_string
value = jsondecode(azapi_resource.appinsights.output).properties.ConnectionString
sensitive = true
}
output "log_analytics_workspace_id" {
value = azurerm_log_analytics_workspace.workspace.id
}
output "log_analytics_workspace_name" {
value = azurerm_log_analytics_workspace.workspace.name
}
|
AzureTRE/templates/workspaces/base/terraform/azure-monitor/outputs.tf/0
|
{
"file_path": "AzureTRE/templates/workspaces/base/terraform/azure-monitor/outputs.tf",
"repo_id": "AzureTRE",
"token_count": 142
}
| 141 |
output "workspace_resource_name_suffix" {
value = local.workspace_resource_name_suffix
}
# The following outputs are dependent on an Automatic AAD Workspace Application Registration.
# If we are not creating an App Reg we simply pass back the same values that were already created
# This is necessary so that we don't delete workspace properties
output "app_role_id_workspace_owner" {
value = var.register_aad_application ? module.aad[0].app_role_workspace_owner_id : var.app_role_id_workspace_owner
}
output "app_role_id_workspace_researcher" {
value = var.register_aad_application ? module.aad[0].app_role_workspace_researcher_id : var.app_role_id_workspace_researcher
}
output "app_role_id_workspace_airlock_manager" {
value = var.register_aad_application ? module.aad[0].app_role_workspace_airlock_manager_id : var.app_role_id_workspace_airlock_manager
}
output "client_id" {
value = var.register_aad_application ? module.aad[0].client_id : var.client_id
}
output "sp_id" {
value = var.register_aad_application ? module.aad[0].sp_id : var.sp_id
}
output "scope_id" {
value = var.register_aad_application ? module.aad[0].scope_id : var.scope_id
}
|
AzureTRE/templates/workspaces/base/terraform/outputs.tf/0
|
{
"file_path": "AzureTRE/templates/workspaces/base/terraform/outputs.tf",
"repo_id": "AzureTRE",
"token_count": 403
}
| 142 |
import { Dialog, PrimaryButton, DialogType, Stack, TooltipHost, TextField } from '@fluentui/react';
import React, { useState } from 'react';
import { Resource } from '../../models/resource';
interface ConfirmCopyUrlToClipboardProps {
resource: Resource,
onDismiss: () => void
}
// show an explanation about why connect is disabled, and show a copy-to-clipboard tooltip
export const ConfirmCopyUrlToClipboard: React.FunctionComponent<ConfirmCopyUrlToClipboardProps> = (props: ConfirmCopyUrlToClipboardProps) => {
const COPY_TOOL_TIP_DEFAULT_MESSAGE = "Copy to clipboard"
const [copyToolTipMessage, setCopyToolTipMessage] = useState<string>(COPY_TOOL_TIP_DEFAULT_MESSAGE);
const copyUrlToClipboardProps = {
type: DialogType.normal,
title: 'Access a Protected Endpoint',
closeButtonAriaLabel: 'Close',
subText: `Copy the link below, paste it and use it from a workspace virtual machine`,
};
const dialogStyles = { main: { maxWidth: 450 } };
const modalProps = {
titleAriaId: 'labelId',
subtitleAriaId: 'subTextId',
isBlocking: true,
styles: dialogStyles
};
const handleCopyUrl = () => {
navigator.clipboard.writeText(props.resource.properties.connection_uri);
setCopyToolTipMessage("Copied")
setTimeout(() => setCopyToolTipMessage(COPY_TOOL_TIP_DEFAULT_MESSAGE), 3000);
}
return (<>
<Dialog
hidden={false}
onDismiss={() => props.onDismiss()}
dialogContentProps={copyUrlToClipboardProps}
modalProps={modalProps}
>
<Stack horizontal styles={{ root: { alignItems: 'center', paddingTop: '7px' } }}>
<Stack.Item grow>
<TextField readOnly value={props.resource.properties.connection_uri} />
</Stack.Item>
<TooltipHost content={copyToolTipMessage}>
<PrimaryButton
iconProps={{ iconName: 'copy' }}
styles={{ root: { minWidth: '40px' } }}
onClick={() => { handleCopyUrl() }}
/>
</TooltipHost>
</Stack>
</Dialog>
</>);
};
|
AzureTRE/ui/app/src/components/shared/ConfirmCopyUrlToClipboard.tsx/0
|
{
"file_path": "AzureTRE/ui/app/src/components/shared/ConfirmCopyUrlToClipboard.tsx",
"repo_id": "AzureTRE",
"token_count": 781
}
| 143 |
import { DefaultPalette, IStackItemStyles, Stack } from "@fluentui/react";
interface ResourceHistoryListItemProps {
  header: string,
  val: string
}
export const ResourceHistoryListItem: React.FunctionComponent<ResourceHistoryListItemProps> = (props: ResourceHistoryListItemProps) => {
const stackItemStyles: IStackItemStyles = {
root: {
padding: '5px 0',
color: DefaultPalette.neutralSecondary
}
}
return(
<>
<Stack wrap horizontal>
<Stack.Item styles={stackItemStyles} style={{width:'20%'}}>
{props.header}
</Stack.Item>
<Stack.Item styles={stackItemStyles} style={{width:'80%'}}>
: {props.val}
</Stack.Item>
</Stack>
</>
);
}
|
AzureTRE/ui/app/src/components/shared/ResourceHistoryListItem.tsx/0
|
{
"file_path": "AzureTRE/ui/app/src/components/shared/ResourceHistoryListItem.tsx",
"repo_id": "AzureTRE",
"token_count": 401
}
| 144 |
import { Icon, mergeStyles, Panel, PanelType, PrimaryButton } from '@fluentui/react';
import React, { useEffect, useState } from 'react';
import { useNavigate } from 'react-router-dom';
import { ApiEndpoint } from '../../../models/apiEndpoints';
import { Operation } from '../../../models/operation';
import { ResourceType } from '../../../models/resourceType';
import { Workspace } from '../../../models/workspace';
import { WorkspaceService } from '../../../models/workspaceService';
import { ResourceForm } from './ResourceForm';
import { SelectTemplate } from './SelectTemplate';
import { getResourceFromResult, Resource } from '../../../models/resource';
import { HttpMethod, useAuthApiCall } from '../../../hooks/useAuthApiCall';
import { useAppDispatch } from '../../../hooks/customReduxHooks';
import { addUpdateOperation } from '../../shared/notifications/operationsSlice';
interface CreateUpdateResourceProps {
isOpen: boolean,
onClose: () => void,
workspaceApplicationIdURI?: string,
resourceType: ResourceType,
parentResource?: Workspace | WorkspaceService,
onAddResource?: (r: Resource) => void,
updateResource?: Resource
}
interface PageTitle {
selectTemplate: string,
resourceForm: string,
creating: string
}
const creatingIconClass = mergeStyles({
fontSize: 100,
height: 100,
width: 100,
margin: '0 25px',
color: 'deepskyblue',
padding: 20
});
export const CreateUpdateResource: React.FunctionComponent<CreateUpdateResourceProps> = (props: CreateUpdateResourceProps) => {
const [page, setPage] = useState('selectTemplate' as keyof PageTitle);
const [selectedTemplate, setTemplate] = useState(props.updateResource?.templateName || '');
const [deployOperation, setDeployOperation] = useState({} as Operation);
const navigate = useNavigate();
const apiCall = useAuthApiCall();
const dispatch = useAppDispatch();
useEffect(() => {
const clearState = () => {
setPage('selectTemplate');
setDeployOperation({} as Operation);
setTemplate('');
}
!props.isOpen && clearState();
props.isOpen && props.updateResource && props.updateResource.templateName && selectTemplate(props.updateResource.templateName);
}, [props.isOpen, props.updateResource]);
// Render a panel title depending on sub-page
const pageTitles: PageTitle = {
selectTemplate: 'Choose a template',
resourceForm: 'Create / Update a ' + props.resourceType,
creating: ''
}
// Construct API paths for templates of specified resourceType
let templateListPath;
// Usually, the GET path would be `${templateGetPath}/${selectedTemplate}`, but there's an exception for user resources
let templateGetPath;
let workspaceApplicationIdURI = undefined
switch (props.resourceType) {
case ResourceType.Workspace:
templateListPath = ApiEndpoint.WorkspaceTemplates; templateGetPath = templateListPath; break;
case ResourceType.WorkspaceService:
templateListPath = ApiEndpoint.WorkspaceServiceTemplates; templateGetPath = templateListPath; break;
case ResourceType.SharedService:
templateListPath = ApiEndpoint.SharedServiceTemplates; templateGetPath = templateListPath; break;
case ResourceType.UserResource:
if (props.parentResource) {
// If we are creating a user resource, parent resource must have a workspaceId
const workspaceId = (props.parentResource as WorkspaceService).workspaceId
templateListPath = `${ApiEndpoint.Workspaces}/${workspaceId}/${ApiEndpoint.WorkspaceServiceTemplates}/${props.parentResource.templateName}/${ApiEndpoint.UserResourceTemplates}`;
templateGetPath = `${ApiEndpoint.WorkspaceServiceTemplates}/${props.parentResource.templateName}/${ApiEndpoint.UserResourceTemplates}`
workspaceApplicationIdURI = props.workspaceApplicationIdURI
break;
} else {
throw Error('Parent workspace service must be passed as prop when creating user resource.');
}
default:
throw Error('Unsupported resource type.');
}
// Construct API path for resource creation
let resourcePath;
switch (props.resourceType) {
case ResourceType.Workspace:
resourcePath = ApiEndpoint.Workspaces; break;
case ResourceType.SharedService:
resourcePath = ApiEndpoint.SharedServices; break;
default:
if (!props.parentResource) {
throw Error('A parentResource must be passed as prop if creating a workspace-service or user-resource');
}
resourcePath = `${props.parentResource.resourcePath}/${props.resourceType}s`;
}
const selectTemplate = (templateName: string) => {
setTemplate(templateName);
setPage('resourceForm');
}
const resourceCreating = async (operation: Operation) => {
setDeployOperation(operation);
setPage('creating');
// Add deployment operation to notifications operation poller
dispatch(addUpdateOperation(operation));
// if an onAdd callback has been given, get the resource we just created and pass it back
if (props.onAddResource) {
let resource = getResourceFromResult(await apiCall(operation.resourcePath, HttpMethod.Get, props.workspaceApplicationIdURI));
props.onAddResource(resource);
}
}
// Render the current panel sub-page
let currentPage;
switch (page) {
case 'selectTemplate':
currentPage = <SelectTemplate templatesPath={templateListPath} workspaceApplicationIdURI={workspaceApplicationIdURI} onSelectTemplate={selectTemplate} />; break;
case 'resourceForm':
currentPage = <ResourceForm
templateName={selectedTemplate}
templatePath={`${templateGetPath}/${selectedTemplate}`}
resourcePath={resourcePath}
onCreateResource={resourceCreating}
workspaceApplicationIdURI={props.workspaceApplicationIdURI}
updateResource={props.updateResource}
/>; break;
case 'creating':
currentPage = <div style={{ textAlign: 'center', paddingTop: 100 }}>
<Icon iconName="CloudAdd" className={creatingIconClass} />
<h1>{props.updateResource?.id ? 'Updating' : 'Creating'} {props.resourceType}...</h1>
<p>Check the notifications panel for deployment progress.</p>
<PrimaryButton text="Go to resource" onClick={() => {navigate(deployOperation.resourcePath); props.onClose();}} />
</div>; break;
}
return (
<>
<Panel
headerText={pageTitles[page]}
isOpen={props.isOpen}
onDismiss={props.onClose}
type={PanelType.medium}
closeButtonAriaLabel="Close"
isLightDismiss
>
<div style={{ paddingTop: 30 }}>
{currentPage}
</div>
</Panel>
</>
);
};
|
AzureTRE/ui/app/src/components/shared/create-update-resource/CreateUpdateResource.tsx/0
|
{
"file_path": "AzureTRE/ui/app/src/components/shared/create-update-resource/CreateUpdateResource.tsx",
"repo_id": "AzureTRE",
"token_count": 2167
}
| 145 |
import React, { useContext } from 'react';
import { Resource } from '../../models/resource';
import { WorkspaceService } from '../../models/workspaceService';
import { ResourceCardList } from '../shared/ResourceCardList';
import { PrimaryButton, Stack } from '@fluentui/react';
import { ResourceType } from '../../models/resourceType';
import { WorkspaceContext } from '../../contexts/WorkspaceContext';
import { CreateUpdateResourceContext } from '../../contexts/CreateUpdateResourceContext';
import { successStates } from '../../models/operation';
import { WorkspaceRoleName } from '../../models/roleNames';
import { SecuredByRole } from '../shared/SecuredByRole';
interface WorkspaceServicesProps {
workspaceServices: Array<WorkspaceService>,
setWorkspaceService: (workspaceService: WorkspaceService) => void,
addWorkspaceService: (workspaceService: WorkspaceService) => void,
updateWorkspaceService: (workspaceService: WorkspaceService) => void,
removeWorkspaceService: (workspaceService: WorkspaceService) => void
}
export const WorkspaceServices: React.FunctionComponent<WorkspaceServicesProps> = (props: WorkspaceServicesProps) => {
const workspaceCtx = useContext(WorkspaceContext);
const createFormCtx = useContext(CreateUpdateResourceContext);
return (
<>
<Stack className="tre-panel">
<Stack.Item>
<Stack horizontal horizontalAlign="space-between">
<h1>Workspace Services</h1>
<SecuredByRole allowedWorkspaceRoles={[WorkspaceRoleName.WorkspaceOwner]} element={
<PrimaryButton iconProps={{ iconName: 'Add' }} text="Create new" disabled={successStates.indexOf(workspaceCtx.workspace.deploymentStatus) === -1 || !workspaceCtx.workspace.isEnabled} onClick={() => {
createFormCtx.openCreateForm({
resourceType: ResourceType.WorkspaceService,
resourceParent: workspaceCtx.workspace,
onAdd: (r: Resource) => props.addWorkspaceService(r as WorkspaceService),
workspaceApplicationIdURI: workspaceCtx.workspaceApplicationIdURI
});
}} />
} />
</Stack>
</Stack.Item>
<Stack.Item>
<ResourceCardList
resources={props.workspaceServices}
selectResource={(r: Resource) => props.setWorkspaceService(r as WorkspaceService)}
updateResource={(r: Resource) => props.updateWorkspaceService(r as WorkspaceService)}
removeResource={(r: Resource) => props.removeWorkspaceService(r as WorkspaceService)}
emptyText="This workspace has no workspace services." />
</Stack.Item>
</Stack>
</>
);
};
|
AzureTRE/ui/app/src/components/workspaces/WorkspaceServices.tsx/0
|
{
"file_path": "AzureTRE/ui/app/src/components/workspaces/WorkspaceServices.tsx",
"repo_id": "AzureTRE",
"token_count": 962
}
| 146 |
import { Operation } from "./operation";
import { ResourceType } from "./resourceType";
import { User } from "./user";
export interface Resource {
id: string,
isEnabled: boolean,
resourcePath: string,
resourceVersion: number,
resourceType: ResourceType
templateName: string,
templateVersion: string,
availableUpgrades: Array<AvailableUpgrade>,
deploymentStatus: string,
updatedWhen: number,
user: User,
history: Array<HistoryItem>,
_etag: string,
properties: any,
azureStatus?: any
}
export interface HistoryItem {
id: string,
resourceId: string,
isEnabled: boolean,
resourceVersion: number,
updatedWhen: number,
user: User,
properties: any,
templateVersion: string
}
export interface AvailableUpgrade {
version: string,
forceUpdateRequired : boolean
}
export enum ComponentAction {
None,
Reload,
Remove,
Lock
}
export interface ResourceUpdate {
operation: Operation,
componentAction: ComponentAction
}
export enum VMPowerStates {
Running = "VM running",
Starting = "VM starting",
Stopping = "VM stopping",
Stopped = "VM stopped",
Deallocating = "VM deallocating",
Deallocated = "VM deallocated"
}
export const getResourceFromResult = (r: any) => {
if (r['userResource']) return r.userResource;
if (r['workspaceService']) return r.workspaceService;
if (r['workspace']) return r.workspace;
if (r['sharedService']) return r.sharedService;
}
|
AzureTRE/ui/app/src/models/resource.ts/0
|
{
"file_path": "AzureTRE/ui/app/src/models/resource.ts",
"repo_id": "AzureTRE",
"token_count": 495
}
| 147 |
FORMAT=$1
GOLD_FILE=$2
PREDICTION_FILE=$3
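# Example invocation (illustrative only: the FORMAT value and file paths below are
# placeholders; check the BC5CDR evaluation kit documentation for the accepted format names):
#   bash eval_mention.sh PubTator gold/CDR_TestSet.PubTator.txt pred/predictions.PubTator.txt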
java -cp bc5cdr_eval.jar ncbi.bc5cdr_eval.Evaluate mention Disease $FORMAT $GOLD_FILE $PREDICTION_FILE | grep -v INFO
# java -cp bc5cdr_eval.jar ncbi.bc5cdr_eval.Evaluate mention Disease $FORMAT $GOLD_FILE $PREDICTION_FILE
|
BioGPT/data/BC5CDR/raw/BC5CDR_Evaluation-0.0.3/eval_mention.sh/0
|
{
"file_path": "BioGPT/data/BC5CDR/raw/BC5CDR_Evaluation-0.0.3/eval_mention.sh",
"repo_id": "BioGPT",
"token_count": 113
}
| 148 |
# Question Answering on PubMedQA in Reasoning Required Setting
## Data
Download data from [PubMedQA](https://github.com/pubmedqa/pubmedqa) and following the steps of splitting dataset.
Copy the files `pqal_fold0/train_set.json`, `pqal_fold0/dev_set.json`, `test_set.json` and `test_ground_truth.json` to `../../data/PubMedQA/raw`
Then, you can process the data by:
``` bash
bash preprocess.sh # for BioGPT
```
or
``` bash
bash preprocess_large.sh # for BioGPT-Large
```
## Model Checkpoint
We provide our model fine-tuned on this task. See [here](../../README.md#pre-trained-models)
## Inference and Evaluation
You can run inference and evaluate the model on the test set with:
``` bash
bash infer.sh # for BioGPT
```
or
``` bash
bash infer_large.sh # for BioGPT-Large
```
|
BioGPT/examples/QA-PubMedQA/README.md/0
|
{
"file_path": "BioGPT/examples/QA-PubMedQA/README.md",
"repo_id": "BioGPT",
"token_count": 266
}
| 149 |
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT License.
MODEL_DIR=../../checkpoints/RE-DDI-BioGPT
MODEL=checkpoint_avg.pt
DATA_DIR=${PWD}/../../data/DDI/relis-bin
BASE_DATA_DIR=${DATA_DIR%/*}
BIN_DATA_DIR=${DATA_DIR##*/}
DATA_PREFIX=${BIN_DATA_DIR%-*}
RAW_DATA_DIR=${BASE_DATA_DIR}/raw
OUTPUT_FILE=generate_${MODEL}
INPUT_FILE=${RAW_DATA_DIR}/${DATA_PREFIX}_test.tok.bpe.x
OUTPUT_FILE=${MODEL_DIR}/${OUTPUT_FILE}
GOLD_FILE=${RAW_DATA_DIR}/test.json
PMID_FILE=${RAW_DATA_DIR}/${DATA_PREFIX}_test.pmid
# average checkpoints
if [ ! -f "${MODEL_DIR}/${MODEL}" ]; then
python ../../scripts/average_checkpoints.py --inputs=${MODEL_DIR} --output=${MODEL_DIR}/${MODEL} --num-epoch-checkpoints=5
fi
# inference
if [ ! -f "$OUTPUT_FILE" ]; then
echo "Begin inferencing ${INPUT_FILE} using ${MODEL_DIR}/${MODEL}"
python ../../inference.py --data_dir=${DATA_DIR} --model_dir=${MODEL_DIR} --model_file=${MODEL} --src_file=${INPUT_FILE} --output_file=${OUTPUT_FILE}
fi
# debpe
sed -i "s/@@ //g" ${OUTPUT_FILE}
# detok
perl ${MOSES}/scripts/tokenizer/detokenizer.perl -l en -a < ${OUTPUT_FILE} > ${OUTPUT_FILE}.detok
# postprocess
python postprocess.py ${OUTPUT_FILE}.detok
# eval
python hard_match_evaluation.py ${OUTPUT_FILE}.detok.extracted.json ${GOLD_FILE} ${PMID_FILE}
|
BioGPT/examples/RE-DDI/infer.sh/0
|
{
"file_path": "BioGPT/examples/RE-DDI/infer.sh",
"repo_id": "BioGPT",
"token_count": 562
}
| 150 |
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT License.
#!/usr/bin/env python3
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
import argparse
import collections
import os
import re
import torch
from fairseq.file_io import PathManager
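# Typical CLI usage (this mirrors how the fine-tuning recipes in this repository invoke it,
# e.g. examples/RE-DDI/infer.sh; the paths are placeholders):
#   python average_checkpoints.py --inputs=<checkpoint_dir> --output=<checkpoint_dir>/checkpoint_avg.pt --num-epoch-checkpoints=5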
def average_checkpoints(inputs):
"""Loads checkpoints from inputs and returns a model with averaged weights.
Args:
inputs: An iterable of string paths of checkpoints to load from.
Returns:
A dict of string keys mapping to various values. The 'model' key
from the returned dict should correspond to an OrderedDict mapping
string parameter names to torch Tensors.
"""
params_dict = collections.OrderedDict()
params_keys = None
new_state = None
num_models = len(inputs)
for fpath in inputs:
with PathManager.open(fpath, "rb") as f:
state = torch.load(
f,
map_location=(
lambda s, _: torch.serialization.default_restore_location(s, "cpu")
),
)
# Copies over the settings from the first checkpoint
if new_state is None:
new_state = state
model_params = state["model"]
model_params_keys = list(model_params.keys())
if params_keys is None:
params_keys = model_params_keys
elif params_keys != model_params_keys:
raise KeyError(
"For checkpoint {}, expected list of params: {}, "
"but found: {}".format(f, params_keys, model_params_keys)
)
for k in params_keys:
p = model_params[k]
if isinstance(p, torch.HalfTensor):
p = p.float()
if k not in params_dict:
params_dict[k] = p.clone()
# NOTE: clone() is needed in case of p is a shared parameter
else:
params_dict[k] += p
averaged_params = collections.OrderedDict()
for k, v in params_dict.items():
averaged_params[k] = v
if averaged_params[k].is_floating_point():
averaged_params[k].div_(num_models)
else:
averaged_params[k] //= num_models
new_state["model"] = averaged_params
return new_state
def last_n_checkpoints(paths, n, update_based, upper_bound=None):
assert len(paths) == 1
path = paths[0]
if update_based:
pt_regexp = re.compile(r"checkpoint_\d+_(\d+)\.pt")
else:
pt_regexp = re.compile(r"checkpoint(\d+)\.pt")
files = PathManager.ls(path)
entries = []
for f in files:
m = pt_regexp.fullmatch(f)
if m is not None:
sort_key = int(m.group(1))
if upper_bound is None or sort_key <= upper_bound:
entries.append((sort_key, m.group(0)))
if len(entries) < n:
        raise Exception(
            "Found {} checkpoint files but need at least {}".format(len(entries), n)
        )
return [os.path.join(path, x[1]) for x in sorted(entries, reverse=True)[:n]]
def main():
parser = argparse.ArgumentParser(
description="Tool to average the params of input checkpoints to "
"produce a new checkpoint",
)
# fmt: off
parser.add_argument('--inputs', required=True, nargs='+',
help='Input checkpoint file paths.')
parser.add_argument('--output', required=True, metavar='FILE',
help='Write the new checkpoint containing the averaged weights to this path.')
num_group = parser.add_mutually_exclusive_group()
num_group.add_argument('--num-epoch-checkpoints', type=int,
help='if set, will try to find checkpoints with names checkpoint_xx.pt in the path specified by input, '
'and average last this many of them.')
num_group.add_argument('--num-update-checkpoints', type=int,
help='if set, will try to find checkpoints with names checkpoint_ee_xx.pt in the path specified by input, '
'and average last this many of them.')
parser.add_argument('--checkpoint-upper-bound', type=int,
help='when using --num-epoch-checkpoints, this will set an upper bound on which epoch to use, '
'when using --num-update-checkpoints, this will set an upper bound on which update to use'
'e.g., with --num-epoch-checkpoints=10 --checkpoint-upper-bound=50, checkpoints 41-50 would be averaged.'
'e.g., with --num-update-checkpoints=10 --checkpoint-upper-bound=50000, checkpoints 40500-50000 would be averaged assuming --save-interval-updates 500'
)
# fmt: on
args = parser.parse_args()
print(args)
num = None
is_update_based = False
if args.num_update_checkpoints is not None:
num = args.num_update_checkpoints
is_update_based = True
elif args.num_epoch_checkpoints is not None:
num = args.num_epoch_checkpoints
assert args.checkpoint_upper_bound is None or (
args.num_epoch_checkpoints is not None
or args.num_update_checkpoints is not None
), "--checkpoint-upper-bound requires --num-epoch-checkpoints or --num-update-checkpoints"
assert (
args.num_epoch_checkpoints is None or args.num_update_checkpoints is None
), "Cannot combine --num-epoch-checkpoints and --num-update-checkpoints"
if num is not None:
args.inputs = last_n_checkpoints(
args.inputs,
num,
is_update_based,
upper_bound=args.checkpoint_upper_bound,
)
print("averaging checkpoints: ", args.inputs)
new_state = average_checkpoints(args.inputs)
with PathManager.open(args.output, "wb") as f:
torch.save(new_state, f)
print("Finished writing averaged checkpoint to {}".format(args.output))
if __name__ == "__main__":
main()
|
BioGPT/scripts/average_checkpoints.py/0
|
{
"file_path": "BioGPT/scripts/average_checkpoints.py",
"repo_id": "BioGPT",
"token_count": 2590
}
| 151 |
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT License.
import numpy as np
import tvm
from bitblas.base.roller.policy import TensorCorePolicy, DefaultPolicy
from bitblas.base.roller.arch import CUDA
from bitblas.gpu.matmul_analysis import get_tensorized_func_and_tags
from bitblas.gpu import Matmul
from bitblas.base.utils import apply_and_build
import time
from tvm import te, tir
def conv2d_nhwc_hwio(n, f, h, w, c, kh, kw, s, d, p, in_dtype="float16", out_dtype="float16"):
A = te.placeholder((n, h, w, c), name="input", dtype=in_dtype)
B = te.placeholder((kh, kw, c, f), name="weight", dtype=in_dtype)
pad_shape = (n, h + 2 * p, w + 2 * p, c)
pad_value = tir.const(0.0, A.dtype)
pad = te.compute(
pad_shape,
lambda n, h, w, c: te.if_then_else(
tir.all(
h >= p,
w >= p,
h < pad_shape[1] - p,
w < pad_shape[2] - p,
),
A[n, h - p, w - p, c],
pad_value,
),
name="pad",
)
kernel_h, kernel_w = kh, kw
stride_h, stride_w = s, s
dilation_h, dilation_w = d, d
out_h = (h + 2 * p - (dilation_h * (kernel_h - 1) + 1)) // stride_h + 1
out_w = (w + 2 * p - (dilation_w * (kernel_w - 1) + 1)) // stride_w + 1
out_shape = (n, out_h, out_w, f)
kh = te.reduce_axis((0, kernel_h), name="kh")
kw = te.reduce_axis((0, kernel_w), name="kw")
c = te.reduce_axis((0, c), name="c")
C = te.compute(
out_shape,
lambda n, h, w, f: te.sum(
pad[n, h * stride_h + kh * dilation_h, w * stride_w + kw * dilation_w, c,] * B[kh, kw,
c, f],
axis=[kh, kw, c],
),
name="C",
)
return tvm.ir.IRModule({"main": te.create_prim_func([A, B, C])})
# fmt:off
benchmark_sets = [
# (prim_func, input_args, BitBLAS_default_schedule),
(conv2d_nhwc_hwio, (128, 64, 224, 224, 64, 1, 1, 2, 1, 3, "float16", "float16"), Matmul),
(conv2d_nhwc_hwio, (128, 64, 224, 224, 3, 7, 7, 2, 1, 3, "float32", "float32"), Matmul),
(conv2d_nhwc_hwio, (128, 64, 224, 224, 3, 7, 7, 2, 1, 3, "float16", "float16"), Matmul),
]
# fmt:on
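# As a sanity check on the shapes above: for the 7x7, stride-2, pad-3 cases the output
# spatial size computed in conv2d_nhwc_hwio is out_h = (224 + 2*3 - (1*(7-1) + 1)) // 2 + 1 = 112,
# so the result C has shape (128, 112, 112, 64).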
benchmark_results = {}
for get_prim_func, input_args, d_schedule in benchmark_sets:
ir_module = get_prim_func(*input_args)
func = ir_module["main"]
target = tvm.target.Target("nvidia/nvidia-a100")
arch = CUDA(target)
policy = DefaultPolicy(func=func, arch=arch)
try:
tensorized_func, tags = get_tensorized_func_and_tags(func, arch.target)
except Exception:
tags = None
if tags:
policy = TensorCorePolicy(func=tensorized_func, arch=arch, tags=tags)
configs = policy.emit_config(20)
tune_start = time.time()
cpresults, best = apply_and_build(func, configs, arch, parallel_build=True)
fast_tune_time = time.time() - tune_start
print("[BitBLAS] The best latency of top 1 is {:.3f} ms".format(cpresults[0].latency * 1e3))
print("[BitBLAS] The best latency of top 20 is {:.3f} ms".format(best.latency * 1e3))
# evaluate the performance of the default schedule
rule = d_schedule()
default_tune_start = time.time()
sch_default = rule.apply(func, target, False)
with tvm.transform.PassContext(config={"tir.use_async_copy": True}):
mod_default = tvm.build(sch_default.mod["main"], target="cuda")
default_tune_time = time.time() - default_tune_start
args = func.buffer_map.values()
profile_tensors = []
for arg in args:
profile_tensors.append(
tvm.nd.array(
np.random.uniform(0, 1, [int(i) for i in arg.shape]).astype(arg.dtype),
device=arch.device,
))
timer_cuda_mod = mod_default.time_evaluator(mod_default.entry_name, arch.device, number=5)
t = timer_cuda_mod(*profile_tensors).mean
print("Time cost of BitBLAS default schedule: {:.3f} ms".format(t * 1e3))
profile_config = {
f"{get_prim_func.__name__}-{'-'.join([str(i) for i in input_args])}": {
"BitBLAS_top20_tune_time": fast_tune_time,
"BitBLAS_top1_latency": cpresults[0].latency * 1e3,
"BitBLAS_top20_latency": best.latency * 1e3,
"BitBLAS_default_tune_time": default_tune_time,
"BitBLAS_default_latency": t * 1e3,
}
}
benchmark_results.update(profile_config)
headers = [
"PrimFunc",
"Input Arguments",
"BitBLAS Top20 Tune Time",
"BitBLAS Top1 Latency",
"BitBLAS Top20 Latency",
"BitBLAS Default Tune Time",
"BitBLAS Default Latency",
]
col_width = (max(len(word) for row in [headers] + list(profile_config.values()) for word in row) + 2
) # padding
print("".join(word.ljust(col_width) for word in headers))
print("-" * col_width * len(headers))
for config, values in benchmark_results.items():
args = config.split("-")
func_name = args[0]
input_args = "-".join(args[1:])
row = [
func_name,
input_args,
f" {str(values['BitBLAS_top20_tune_time'])} s",
f"{values['BitBLAS_top1_latency']:.3f} ms",
f"{values['BitBLAS_top20_latency']:.3f} ms",
str(values["BitBLAS_default_tune_time"]),
f"{values['BitBLAS_default_latency']:.3f} ms",
]
print("".join(word.ljust(col_width) for word in row))
|
BitBLAS/benchmark/dsl/convolution.py/0
|
{
"file_path": "BitBLAS/benchmark/dsl/convolution.py",
"repo_id": "BitBLAS",
"token_count": 2592
}
| 152 |
# coding=utf-8
# Copyright 2022 EleutherAI and the HuggingFace Inc. team. All rights reserved.
#
# This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX
# and OPT implementations in this library. It has been modified from its
# original forms to accommodate minor architectural differences compared
# to GPT-NeoX and OPT used by the Meta AI team that trained the model.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tokenization classes for LLaMA."""
import os
from shutil import copyfile
from typing import TYPE_CHECKING, Any, Dict, List, Optional, Tuple
import sentencepiece as spm
from transformers.convert_slow_tokenizer import import_protobuf
from transformers.tokenization_utils import AddedToken, PreTrainedTokenizer
from transformers.utils import logging
if TYPE_CHECKING:
from transformers.tokenization_utils_base import TextInput
logger = logging.get_logger(__name__)
VOCAB_FILES_NAMES = {"vocab_file": "tokenizer.model"}
PRETRAINED_VOCAB_FILES_MAP = {
"vocab_file": {
"hf-internal-testing/llama-tokenizer": "https://huggingface.co/hf-internal-testing/llama-tokenizer/resolve/main/tokenizer.model",
},
"tokenizer_file": {
"hf-internal-testing/llama-tokenizer": "https://huggingface.co/hf-internal-testing/llama-tokenizer/resolve/main/tokenizer_config.json",
},
}
PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES = {
"hf-internal-testing/llama-tokenizer": 2048,
}
SPIECE_UNDERLINE = "▁"
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"
# fmt: off
DEFAULT_SYSTEM_PROMPT = """You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your \
answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure\
that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not \
correct. If you don't know the answer to a question, please don't share false information."""
# fmt: on
class BitnetTokenizer(PreTrainedTokenizer):
"""
Construct a Bitnet tokenizer. Based on byte-level Byte-Pair-Encoding. The default padding token is unset as there is
no padding token in the original model.
Args:
vocab_file (`str`):
Path to the vocabulary file.
unk_token (`str` or `tokenizers.AddedToken`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
bos_token (`str` or `tokenizers.AddedToken`, *optional*, defaults to `"<s>"`):
The beginning of sequence token that was used during pretraining. Can be used a sequence classifier token.
eos_token (`str` or `tokenizers.AddedToken`, *optional*, defaults to `"</s>"`):
The end of sequence token.
pad_token (`str` or `tokenizers.AddedToken`, *optional*):
A special token used to make arrays of tokens the same size for batching purpose. Will then be ignored by
attention mechanisms or loss computation.
sp_model_kwargs (`Dict[str, Any]`, `Optional`, *optional*):
Will be passed to the `SentencePieceProcessor.__init__()` method. The [Python wrapper for
SentencePiece](https://github.com/google/sentencepiece/tree/master/python) can be used, among other things,
to set:
- `enable_sampling`: Enable subword regularization.
- `nbest_size`: Sampling parameters for unigram. Invalid for BPE-Dropout.
- `nbest_size = {0,1}`: No sampling is performed.
- `nbest_size > 1`: samples from the nbest_size results.
- `nbest_size < 0`: assuming that nbest_size is infinite and samples from the all hypothesis (lattice)
using forward-filtering-and-backward-sampling algorithm.
- `alpha`: Smoothing parameter for unigram sampling, and dropout probability of merge operations for
BPE-dropout.
add_bos_token (`bool`, *optional*, defaults to `True`):
Whether or not to add an `bos_token` at the start of sequences.
add_eos_token (`bool`, *optional*, defaults to `False`):
Whether or not to add an `eos_token` at the end of sequences.
clean_up_tokenization_spaces (`bool`, *optional*, defaults to `False`):
Whether or not to cleanup spaces after decoding, cleanup consists in removing potential artifacts like
extra spaces.
use_default_system_prompt (`bool`, *optional*, defaults to `False`):
Whether or not the default system prompt for Bitnet should be used.
spaces_between_special_tokens (`bool`, *optional*, defaults to `False`):
Whether or not to add spaces between special tokens.
legacy (`bool`, *optional*):
Whether or not the `legacy` behavior of the tokenizer should be used. Legacy is before the merge of #24622
and #25224 which includes fixes to properly handle tokens that appear after special tokens. A simple
example:
- `legacy=True`:
```python
>>> from transformers import T5Tokenizer
>>> tokenizer = T5Tokenizer.from_pretrained("google-t5/t5-base", legacy=True)
>>> tokenizer.encode("Hello <extra_id_0>.")
[8774, 32099, 3, 5, 1]
```
- `legacy=False`:
```python
>>> from transformers import T5Tokenizer
>>> tokenizer = T5Tokenizer.from_pretrained("google-t5/t5-base", legacy=False)
>>> tokenizer.encode("Hello <extra_id_0>.") # the extra space `[3]` is no longer here
[8774, 32099, 5, 1]
```
Checkout the [pull request](https://github.com/huggingface/transformers/pull/24565) for more details.
add_prefix_space (`bool`, *optional*, defaults to `True`):
Whether or not to add an initial space to the input. This allows to treat the leading word just as any
other word.
"""
vocab_files_names = VOCAB_FILES_NAMES
pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP
max_model_input_sizes = PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES
model_input_names = ["input_ids", "attention_mask"]
def __init__(
self,
vocab_file,
unk_token="<unk>",
bos_token="<s>",
eos_token="</s>",
pad_token=None,
sp_model_kwargs: Optional[Dict[str, Any]] = None,
add_bos_token=True,
add_eos_token=False,
clean_up_tokenization_spaces=False,
use_default_system_prompt=False,
spaces_between_special_tokens=False,
legacy=None,
add_prefix_space=True,
**kwargs,
):
self.sp_model_kwargs = {} if sp_model_kwargs is None else sp_model_kwargs
bos_token = AddedToken(bos_token, normalized=False, special=True) if isinstance(bos_token, str) else bos_token
eos_token = AddedToken(eos_token, normalized=False, special=True) if isinstance(eos_token, str) else eos_token
unk_token = AddedToken(unk_token, normalized=False, special=True) if isinstance(unk_token, str) else unk_token
pad_token = AddedToken(pad_token, normalized=False, special=True) if isinstance(pad_token, str) else pad_token
if legacy is None:
logger.warning_once(
f"You are using the default legacy behaviour of the {self.__class__}. This is"
" expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you."
" If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it"
" means, and thoroughly read the reason why this was added as explained in"
" https://github.com/huggingface/transformers/pull/24565"
)
legacy = True
self.legacy = legacy
self.vocab_file = vocab_file
self.add_bos_token = add_bos_token
self.add_eos_token = add_eos_token
self.use_default_system_prompt = use_default_system_prompt
self.sp_model = self.get_spm_processor(kwargs.pop("from_slow", False))
self.add_prefix_space = add_prefix_space
super().__init__(
bos_token=bos_token,
eos_token=eos_token,
unk_token=unk_token,
pad_token=pad_token,
add_bos_token=add_bos_token,
add_eos_token=add_eos_token,
sp_model_kwargs=self.sp_model_kwargs,
clean_up_tokenization_spaces=clean_up_tokenization_spaces,
use_default_system_prompt=use_default_system_prompt,
spaces_between_special_tokens=spaces_between_special_tokens,
legacy=legacy,
add_prefix_space=add_prefix_space,
**kwargs,
)
@property
def unk_token_length(self):
return len(self.sp_model.encode(str(self.unk_token)))
# Copied from transformers.models.t5.tokenization_t5.T5Tokenizer.get_spm_processor
def get_spm_processor(self, from_slow=False):
tokenizer = spm.SentencePieceProcessor(**self.sp_model_kwargs)
if self.legacy or from_slow: # no dependency on protobuf
tokenizer.Load(self.vocab_file)
return tokenizer
with open(self.vocab_file, "rb") as f:
sp_model = f.read()
model_pb2 = import_protobuf(f"The new behaviour of {self.__class__.__name__} (with `self.legacy = False`)")
model = model_pb2.ModelProto.FromString(sp_model)
normalizer_spec = model_pb2.NormalizerSpec()
normalizer_spec.add_dummy_prefix = False
model.normalizer_spec.MergeFrom(normalizer_spec)
sp_model = model.SerializeToString()
tokenizer.LoadFromSerializedProto(sp_model)
return tokenizer
def __getstate__(self):
state = self.__dict__.copy()
state["sp_model"] = None
state["sp_model_proto"] = self.sp_model.serialized_model_proto()
return state
def __setstate__(self, d):
self.__dict__ = d
self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs)
self.sp_model.LoadFromSerializedProto(self.sp_model_proto)
@property
def vocab_size(self):
"""Returns vocab size"""
return self.sp_model.get_piece_size()
def get_vocab(self):
"""Returns vocab as a dict"""
vocab = {self.convert_ids_to_tokens(i): i for i in range(self.vocab_size)}
vocab.update(self.added_tokens_encoder)
return vocab
# Copied from transformers.models.t5.tokenization_t5.T5Tokenizer.tokenize
def tokenize(self, text: "TextInput", **kwargs) -> List[str]:
"""
Converts a string to a list of tokens. If `self.legacy` is set to `False`, a prefix token is added unless the
first token is special.
"""
if self.legacy or len(text) == 0:
return super().tokenize(text, **kwargs)
text = text.replace(SPIECE_UNDERLINE, " ")
if self.add_prefix_space:
text = SPIECE_UNDERLINE + text
tokens = super().tokenize(text, **kwargs)
if len(tokens) > 1 and tokens[0] == SPIECE_UNDERLINE and tokens[1] in self.all_special_tokens:
tokens = tokens[1:]
return tokens
# Copied from transformers.models.t5.tokenization_t5.T5Tokenizer._tokenize
def _tokenize(self, text, **kwargs):
"""
Returns a tokenized string.
We de-activated the `add_dummy_prefix` option, thus the sentencepiece internals will always strip any
SPIECE_UNDERLINE. For example: `self.sp_model.encode(f"{SPIECE_UNDERLINE}Hey", out_type = str)` will give
`['H', 'e', 'y']` instead of `['▁He', 'y']`. Thus we always encode `f"{unk_token}text"` and strip the
`unk_token`. Here is an example with `unk_token = "<unk>"` and `unk_token_length = 4`.
`self.tokenizer.sp_model.encode("<unk> Hey", out_type = str)[4:]`.
"""
tokens = self.sp_model.encode(text, out_type=str)
if self.legacy or not text.startswith((SPIECE_UNDERLINE, " ")):
return tokens
# 1. Encode string + prefix ex: "<unk> Hey"
tokens = self.sp_model.encode(self.unk_token + text, out_type=str)
# 2. Remove self.unk_token from ['<','unk','>', '▁Hey']
return tokens[self.unk_token_length :] if len(tokens) >= self.unk_token_length else tokens
def _convert_token_to_id(self, token):
"""Converts a token (str) in an id using the vocab."""
return self.sp_model.piece_to_id(token)
def _convert_id_to_token(self, index):
"""Converts an index (integer) in a token (str) using the vocab."""
token = self.sp_model.IdToPiece(index)
return token
def convert_tokens_to_string(self, tokens):
"""Converts a sequence of tokens (string) in a single string."""
# since we manually add the prefix space, we have to remove it when decoding
if tokens[0].startswith(SPIECE_UNDERLINE) and self.add_prefix_space:
tokens[0] = tokens[0][1:]
current_sub_tokens = []
out_string = ""
prev_is_special = False
for i, token in enumerate(tokens):
# make sure that special tokens are not decoded using sentencepiece model
if token in self.all_special_tokens:
if not prev_is_special and i != 0 and self.legacy:
out_string += " "
out_string += self.sp_model.decode(current_sub_tokens) + token
prev_is_special = True
current_sub_tokens = []
else:
current_sub_tokens.append(token)
prev_is_special = False
out_string += self.sp_model.decode(current_sub_tokens)
return out_string
def save_vocabulary(self, save_directory, filename_prefix: Optional[str] = None) -> Tuple[str]:
"""
Save the vocabulary and special tokens file to a directory.
Args:
save_directory (`str`):
The directory in which to save the vocabulary.
Returns:
`Tuple(str)`: Paths to the files saved.
"""
if not os.path.isdir(save_directory):
logger.error(f"Vocabulary path ({save_directory}) should be a directory")
return
out_vocab_file = os.path.join(
save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"]
)
if os.path.abspath(self.vocab_file) != os.path.abspath(out_vocab_file) and os.path.isfile(self.vocab_file):
copyfile(self.vocab_file, out_vocab_file)
elif not os.path.isfile(self.vocab_file):
with open(out_vocab_file, "wb") as fi:
content_spiece_model = self.sp_model.serialized_model_proto()
fi.write(content_spiece_model)
return (out_vocab_file,)
def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None):
bos_token_id = [self.bos_token_id] if self.add_bos_token else []
eos_token_id = [self.eos_token_id] if self.add_eos_token else []
output = bos_token_id + token_ids_0 + eos_token_id
if token_ids_1 is not None:
output = output + bos_token_id + token_ids_1 + eos_token_id
return output
def get_special_tokens_mask(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False
) -> List[int]:
"""
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer `prepare_for_model` method.
Args:
token_ids_0 (`List[int]`):
List of IDs.
token_ids_1 (`List[int]`, *optional*):
Optional second list of IDs for sequence pairs.
already_has_special_tokens (`bool`, *optional*, defaults to `False`):
Whether or not the token list is already formatted with special tokens for the model.
Returns:
`List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
"""
if already_has_special_tokens:
return super().get_special_tokens_mask(
token_ids_0=token_ids_0, token_ids_1=token_ids_1, already_has_special_tokens=True
)
bos_token_id = [1] if self.add_bos_token else []
eos_token_id = [1] if self.add_eos_token else []
if token_ids_1 is None:
return bos_token_id + ([0] * len(token_ids_0)) + eos_token_id
return (
bos_token_id
+ ([0] * len(token_ids_0))
+ eos_token_id
+ bos_token_id
+ ([0] * len(token_ids_1))
+ eos_token_id
)
def create_token_type_ids_from_sequences(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
"""
Creates a mask from the two sequences passed to be used in a sequence-pair classification task. An ALBERT
sequence pair mask has the following format:
```
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
```
if token_ids_1 is None, only returns the first portion of the mask (0s).
Args:
token_ids_0 (`List[int]`):
List of ids.
token_ids_1 (`List[int]`, *optional*):
Optional second list of IDs for sequence pairs.
Returns:
`List[int]`: List of [token type IDs](../glossary#token-type-ids) according to the given sequence(s).
"""
bos_token_id = [self.bos_token_id] if self.add_bos_token else []
eos_token_id = [self.eos_token_id] if self.add_eos_token else []
output = [0] * len(bos_token_id + token_ids_0 + eos_token_id)
if token_ids_1 is not None:
output += [1] * len(bos_token_id + token_ids_1 + eos_token_id)
return output
@property
def default_chat_template(self):
"""
LLaMA uses [INST] and [/INST] to indicate user messages, and <<SYS>> and <</SYS>> to indicate system messages.
Assistant messages do not have special tokens, because LLaMA chat models are generally trained with strict
user/assistant/user/assistant message ordering, and so assistant messages can be identified from the ordering
rather than needing special tokens. The system message is partly 'embedded' in the first user message, which
results in an unusual token ordering when it is present. This template should definitely be changed if you wish
to fine-tune a model with more flexible role ordering!
The output should look something like:
<bos>[INST] B_SYS SystemPrompt E_SYS Prompt [/INST] Answer <eos><bos>[INST] Prompt [/INST] Answer <eos>
<bos>[INST] Prompt [/INST]
The reference for this chat template is [this code
snippet](https://github.com/facebookresearch/llama/blob/556949fdfb72da27c2f4a40b7f0e4cf0b8153a28/llama/generation.py#L320-L362)
in the original repository.
"""
logger.warning_once(
"\nNo chat template is defined for this tokenizer - using the default template "
f"for the {self.__class__.__name__} class. If the default is not appropriate for "
"your model, please set `tokenizer.chat_template` to an appropriate template. "
"See https://huggingface.co/docs/transformers/main/chat_templating for more information.\n"
)
template = (
"{% if messages[0]['role'] == 'system' %}"
"{% set loop_messages = messages[1:] %}" # Extract system message if it's present
"{% set system_message = messages[0]['content'] %}"
"{% elif USE_DEFAULT_PROMPT == true and not '<<SYS>>' in messages[0]['content'] %}"
"{% set loop_messages = messages %}" # Or use the default system message if the flag is set
"{% set system_message = 'DEFAULT_SYSTEM_MESSAGE' %}"
"{% else %}"
"{% set loop_messages = messages %}"
"{% set system_message = false %}"
"{% endif %}"
"{% for message in loop_messages %}" # Loop over all non-system messages
"{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}"
"{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}"
"{% endif %}"
"{% if loop.index0 == 0 and system_message != false %}" # Embed system message in first message
"{% set content = '<<SYS>>\\n' + system_message + '\\n<</SYS>>\\n\\n' + message['content'] %}"
"{% else %}"
"{% set content = message['content'] %}"
"{% endif %}"
"{% if message['role'] == 'user' %}" # After all of that, handle messages/roles in a fairly normal way
"{{ bos_token + '[INST] ' + content.strip() + ' [/INST]' }}"
"{% elif message['role'] == 'system' %}"
"{{ '<<SYS>>\\n' + content.strip() + '\\n<</SYS>>\\n\\n' }}"
"{% elif message['role'] == 'assistant' %}"
"{{ ' ' + content.strip() + ' ' + eos_token }}"
"{% endif %}"
"{% endfor %}"
)
template = template.replace("USE_DEFAULT_PROMPT", "true" if self.use_default_system_prompt else "false")
default_message = DEFAULT_SYSTEM_PROMPT.replace("\n", "\\n").replace("'", "\\'")
template = template.replace("DEFAULT_SYSTEM_MESSAGE", default_message)
return template
|
BitBLAS/integration/BitNet/tokenization_bitnet.py/0
|
{
"file_path": "BitBLAS/integration/BitNet/tokenization_bitnet.py",
"repo_id": "BitBLAS",
"token_count": 9488
}
| 153 |
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT License.
from logging import getLogger
import numpy as np
import torch
import torch.nn as nn
logger = getLogger(__name__)
try:
import bitblas # noqa: F401
except ImportError as e:
bitblas_import_exception = e
def error_raiser_bitblas(*args, **kwargs):
raise ValueError(
f"Trying to use the bitblas backend, but could not import dependencies with the following error: {bitblas_import_exception}"
)
autogptq_bitblas_cuda = bitblas_import_exception
from bitblas.quantization.utils import general_compress, interleave_weight
from bitblas.ops.matmul_dequantize import (
MatmulWeightOnlyDequantizeConfig,
MatmulWeightOnlyDequantize,
)
from bitblas.utils import auto_detect_nvidia_target
from typing import List, Union, Literal, Optional
class QuantLinear(nn.Module):
QUANT_TYPE = "bitblas"
def __init__(
self,
bits: int,
group_size: int,
in_features: int,
out_features: int,
bias: bool,
enable_tuning: bool = False,
fast_decoding: bool = False,
propagate_a: bool = False,
propagate_b: bool = False,
opt_M: Optional[Union[int, List[int]]] = None,
layout: Literal["nt"] = "nt",
trainable=False,
**kwargs,
):
super().__init__()
if group_size == -1:
group_size = in_features
if in_features % 128 != 0 or out_features % 256 != 0:
raise ValueError("`in_features` must be divisible by 128 and `out_features` by 256.")
if bits not in [1, 2, 4]:
raise NotImplementedError("Only 1/2/4 bits are supported.")
if in_features % group_size != 0:
raise ValueError("`in_features` must be divisible by `group_size`.")
if trainable:
raise NotImplementedError("Bitblas does not support train.")
if opt_M is None:
opt_M = [1, 32, 64]
self.bits = bits
storage_nbit = 8 # assume int8 storage
n_float_per_elem = storage_nbit // bits
self.opt_M = opt_M
self.in_features = in_features
self.out_features = out_features
self.group_size = group_size if group_size != -1 else in_features
self.register_buffer(
"qweight",
torch.empty(
(self.out_features, self.in_features // n_float_per_elem),
dtype=torch.int8,
),
)
self.register_buffer(
"scales",
torch.empty(
(self.out_features, self.in_features // self.group_size),
dtype=torch.half,
),
)
self.register_buffer(
"zeros",
torch.full(
(self.out_features, self.in_features // self.group_size),
0,
dtype=torch.float16,
),
)
if bias:
self.register_buffer("bias", torch.zeros((out_features), dtype=torch.half))
else:
self.bias = None
self.fast_type_conversion = False
self.weight_propagation = False
dtype = self.scales.dtype
BITBLAS_DTYPES = {
torch.float32: "float32",
torch.float16: "float16",
torch.half: "float16",
torch.int8: "int8",
}
assert dtype in BITBLAS_DTYPES, f"Unsupported dtype: {dtype}"
bitblas_dtype = BITBLAS_DTYPES[dtype]
self.target = auto_detect_nvidia_target()
matmul_config = MatmulWeightOnlyDequantizeConfig(
M=self.opt_M,
N=self.out_features,
K=self.in_features,
in_dtype=bitblas_dtype,
out_dtype=bitblas_dtype,
accum_dtype="int32" if bitblas_dtype == "int8" else bitblas_dtype,
bit=bits,
storage_dtype="int8",
source_format="uint",
with_scaling=True,
with_zeros=True,
group_size=group_size,
fast_decoding=fast_decoding,
with_bias=bias,
propagate_a=propagate_a,
propagate_b=propagate_b,
layout=layout,
zeros_mode="original",
)
# optimize target shapes for dynamic symbolic
self.bitblas_matmul = MatmulWeightOnlyDequantize(matmul_config, target=self.target)
if enable_tuning:
self.bitblas_matmul.hardware_aware_finetune(topk=20)
self.reset_parameters()
def reset_parameters(self):
        # initialize the int8 ("char") weight buffer with random values in the valid range
self.qweight = torch.randint_like(
self.qweight,
0,
2**(self.bits - 1) - 1,
dtype=torch.int8,
device=self.qweight.device,
)
nn.init.normal_(self.scales)
nn.init.zeros_(self.zeros)
if self.bias is not None:
nn.init.zeros_(self.bias)
def post_init(self):
pass
def pack(self, linear, scales, zeros=None):
"""Pack a fake-quantized linear layer into this actual Bitblas representation.
@linear: fake-quantized `torch.nn.Linear` layer to convert (must be of type `torch.half`)
        @scales: corresponding quantization scales of shape `(out_features, in_features // group_size)`
"""
if linear.weight.dtype != torch.half:
raise ValueError("Only `torch.half` weights are supported.")
# do permutation with (n, k) layout
w = linear.weight.data
# scales shape should be (n, k) as well.
s = scales
scale_zeros = torch.zeros_like(zeros, dtype=torch.float16)
if zeros is not None:
scale_zeros[:, :] = zeros[:, :] * scales[:, :]
self.zeros = zeros.to(scales.device).to(scales.dtype).contiguous()
# do permutation on weight
intweight = []
for idx in range(self.in_features):
g_idx = idx // self.group_size
intweight.append(
torch.round(
(w[:, idx] + scale_zeros[:, g_idx]) / scales[:, g_idx]).to(torch.int)[:, None])
intweight = torch.cat(intweight, dim=1)
intweight = intweight.contiguous()
intweight = intweight.cpu().numpy().astype(np.int8)
# quantize to 4bit
qw_np = general_compress(intweight, source_bits=self.bits, storage_dtype=np.int8)
# do interleave for fast type conversion
if self.fast_type_conversion:
qw_np = interleave_weight(qw_np, nbits=self.bits, target_dtype="float16")
if self.weight_propagation:
# do permutation on weight
pass
q = torch.from_numpy(qw_np).to(w.device)
self.qweight = q.to(self.qweight.device).contiguous()
self.scales = s.to(self.qweight.device).contiguous()
self.zeros = self.zeros.to(self.qweight.device).contiguous()
if self.bias is not None:
self.bias[:] = linear.bias.data.to(self.bias.device).contiguous()
def forward(self, A, output=None):
args = [A, self.qweight, self.scales, self.zeros]
if self.bias is not None:
args.append(self.bias)
if output is None:
output = torch.empty(
A.shape[:-1] + (self.qweight.shape[0],), dtype=A.dtype, device=A.device)
args.append(output)
self.bitblas_matmul(*args)
return output
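# A minimal usage sketch (illustrative only: it assumes a CUDA-capable GPU with BitBLAS
# installed, and the layer sizes, scales and zeros below are placeholder values):
#
#   in_f, out_f, gs = 1024, 1024, 128
#   layer = QuantLinear(bits=4, group_size=gs, in_features=in_f, out_features=out_f, bias=False).cuda()
#   fp16 = nn.Linear(in_f, out_f, bias=False).half().cuda()
#   scales = torch.ones(out_f, in_f // gs, dtype=torch.half, device="cuda")
#   zeros = torch.zeros(out_f, in_f // gs, dtype=torch.half, device="cuda")
#   layer.pack(fp16, scales, zeros)
#   y = layer(torch.randn(1, in_f, dtype=torch.half, device="cuda"))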
__all__ = ["QuantLinear"]
|
BitBLAS/integration/pytorch/bitblas_quant_linear.py/0
|
{
"file_path": "BitBLAS/integration/pytorch/bitblas_quant_linear.py",
"repo_id": "BitBLAS",
"token_count": 3627
}
| 154 |
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT License.
from .node import PrimFuncNode # noqa: F401
from .hint import Hint # noqa: F401
from .policy import DefaultPolicy, TensorCorePolicy # noqa: F401
from .arch import TileDevice, CUDA # noqa: F401
|
BitBLAS/python/bitblas/base/roller/__init__.py/0
|
{
"file_path": "BitBLAS/python/bitblas/base/roller/__init__.py",
"repo_id": "BitBLAS",
"token_count": 84
}
| 155 |
# Copyright 2018 The apache/tvm Authors. All Rights Reserved.
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
# Modifications Copyright (c) Microsoft.
# The code below is mostly copied from apache/tvm schedule_rule.py in dlight.
"""A lightweight wrapper on an arbitrary function that can be used to schedule a TIR PrimFunc."""
from typing import Callable, List, Union
from tvm import tir
from tvm.target import Target
class ScheduleRule: # pylint: disable=too-few-public-methods
"""A thin wrapper on an arbitrary function that can be used to schedule a TIR PrimFunc.
Given a PrimFunc, a target, and a tunable flag, the apply method of a ScheduleRule
returns either a Schedule, a list of Schedules, or None, where None means that the rule
is not applicable to the given PrimFunc. If the tunable flag is True, the ScheduleRule is
allowed to return either a Schedule or a list of Schedules, and the Schedules are allowed to
contain tunable instructions. If the tunable flag is False, the ScheduleRule is only allowed to
return a Schedule, and the Schedule is not allowed to contain tunable instructions.
"""
def apply(
self,
func: tir.PrimFunc,
target: Target,
tunable: bool,
) -> Union[None, tir.Schedule, List[tir.Schedule]]:
"""Apply the ScheduleRule to the given PrimFunc.
Parameters
----------
func : tir.PrimFunc
The PrimFunc to apply the ScheduleRule to.
target : Target
The compilation target the schedule is supposed to be built for.
tunable : bool
Whether the schedule is allowed to contain tunable instructions.
Returns
-------
results : Union[None, tir.Schedule, List[tir.Schedule]]
Either a Schedule, a list of Schedules, or None, where None means that the rule
is not applicable to the given PrimFunc.
"""
raise NotImplementedError
def apply_config(
self,
func: tir.PrimFunc,
config,
):
"""Apply the ScheduleRule to the given PrimFunc.
Parameters
----------
func : tir.PrimFunc
The PrimFunc to apply the ScheduleRule to.
        config :
            The scheduling configuration to apply to the PrimFunc.
Returns
-------
results : Union[None, tir.Schedule, List[tir.Schedule]]
Either a Schedule, a list of Schedules, or None, where None means that the rule
is not applicable to the given PrimFunc.
"""
raise NotImplementedError
@staticmethod
def from_callable(
name,
) -> Callable[
[
Callable[
[tir.PrimFunc, Target, bool],
Union[None, tir.Schedule, List[tir.Schedule]],
],
],
"ScheduleRule",
]:
"""Create a ScheduleRule from a callable.
Parameters
----------
name : str
Returns
-------
decorator : Callable
A decorator that takes a callable and returns a ScheduleRule.
Examples
--------
.. code-block:: python
@ScheduleRule.from_callable("MyRule")
def my_rule(func: tir.PrimFunc, target: Target, tunable: bool) -> Union[None, Schedule]
# Do something with func and target
"""
def decorator(f) -> "ScheduleRule": # pylint: disable=invalid-name
class _Rule(ScheduleRule):
def apply(
self,
func: tir.PrimFunc,
target: Target,
tunable: bool,
) -> Union[None, tir.Schedule, List[tir.Schedule]]:
return f(func, target, tunable)
_Rule.__name__ = name
return _Rule()
return decorator
def is_target_available(
self, target: Target
) -> bool: # pylint: disable=unused-argument
"""Check whether the rule is available for the given target.
Parameters
----------
target : Target
The compilation target the schedule is supposed to be built for.
Returns
-------
available : bool
Whether the rule is available for the given target.
"""
return True
|
BitBLAS/python/bitblas/base/schedule_rule.py/0
|
{
"file_path": "BitBLAS/python/bitblas/base/schedule_rule.py",
"repo_id": "BitBLAS",
"token_count": 2022
}
| 156 |
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT License.
# pylint: disable=missing-docstring, invalid-name
"""A GEMM schedule rule for GPU operators."""
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional, Set, Union, Tuple, Dict
from tvm import tir
from tvm.ir import Range
from tvm.tir import IterVar, PrimExpr, Var, BufferRegion
from tvm.tir.analysis import undefined_vars
from tvm.tir.schedule.schedule import BlockRV
from ..base.analysis import (
collect_block_iter_vars_used_in_access_region,
get_root_block,
get_reduction_blocks,
)
from tvm.target.target import Target
from tvm.tir import IndexMap
import logging
logger = logging.getLogger(__name__)
def _is_one(x: PrimExpr) -> bool:
return isinstance(x, tir.IntImm) and x.value == 1
def _collect_producers(sch: tir.Schedule, block: tir.schedule.BlockRV):
result = []
for producer in sch.get_producers(block):
result.append(producer)
result.extend(_collect_producers(sch, producer))
return result
def _collect_consumers(sch: tir.Schedule, block: tir.schedule.BlockRV):
result = []
for consumer in sch.get_consumers(block):
result.append(consumer)
result.extend(_collect_consumers(sch, consumer))
return result
def auto_inline_producers(
sch: tir.Schedule,
block: tir.schedule.BlockRV,
skip_blocks: Optional[List[tir.schedule.BlockRV]] = None,
):
skip_blocks = skip_blocks or []
while True:
inlined_cnt = 0
producers = _collect_producers(sch, block)
for producer in producers:
if any(sch.get(producer) == sch.get(skip_block) for skip_block in skip_blocks):
continue
try:
sch.compute_inline(producer)
inlined_cnt += 1
except Exception: # pylint: disable=bare-except
continue
if inlined_cnt == 0:
return
def auto_inline_consumers(
sch: tir.Schedule,
block: tir.schedule.BlockRV,
):
while True:
inlined_cnt = 0
consumers = _collect_consumers(sch, block)
for consumer in consumers:
try:
sch.compute_inline(consumer)
inlined_cnt += 1
except Exception: # pylint: disable=bare-except
continue
for consumer in consumers:
try:
sch.reverse_compute_inline(consumer)
inlined_cnt += 1
except Exception: # pylint: disable=bare-except
continue
if inlined_cnt == 0:
return
def auto_inline_consumer_chain(
sch: tir.Schedule,
block: tir.schedule.BlockRV,
):
auto_inline_consumers(sch, block)
remaining_consumers = sch.get_consumers(block)
if len(remaining_consumers) != 0:
# Some blocks have failed to be inlined to the producer cache-write stage.
# This could be due to another producer block that has not been scheduled.
for c in remaining_consumers:
for p in sch.get_producers(c):
if sch.get(p) != sch.get(block):
sch.compute_inline(p)
# Try inlining into the cache-write stage again, this time it should succeed.
auto_inline_consumers(sch, block)
# used to match the similar region with dequantize op.
def find_first_similar_region(regions: List[BufferRegion], buffer: tir.Buffer):
for region in regions:
if len(region.buffer.shape) == len(buffer.shape):
return region
return None
# used to match the similar buffer with dequantize op.
def find_first_similar_buffer(regions: List[BufferRegion], buffer: tir.Buffer):
for region in regions:
if len(region.buffer.shape) == len(buffer.shape):
return region.buffer
return None
# find the block that required to be reindex and scope.
def find_last_producer_from_buffer(sch, main_block, buffer: tir.Buffer) -> Optional[BlockRV]:
    # start from the block nearest to the arguments
block = main_block
buffer = buffer
while True:
last_buffer = buffer
producers = sch.get_producers(block)
if len(producers) == 0:
            # having no producers means this is the first block
break
for producer in producers:
for write in sch.get(producer).writes:
if write.buffer == buffer:
block = producer
buffer = find_first_similar_buffer(sch.get(producer).reads, last_buffer)
if buffer == last_buffer:
break
return block
def find_arg_idx_from_buffer_chain(sch: tir.Schedule, main_block: tir.schedule.BlockRV,
buffer: tir.Buffer) -> int:
"""traverse to find the arg index from the buffer"""
producers = sch.get_producers(main_block)
# a head buffer has no producer blocks
def find_args_index(sch: tir.Schedule, buffer: tir.Buffer):
for i, param in enumerate(sch.mod["main"].params):
if sch.mod["main"].buffer_map[param] == buffer:
return i
return None
is_head_buffer = len(producers) == 0
if is_head_buffer:
return find_args_index(sch, buffer)
for block in sch.get_producers(main_block):
if len(sch.get(block).reads) != 1 or len(sch.get(block).writes) != 1:
continue
for write in sch.get(block).writes:
if write.buffer == buffer:
return find_arg_idx_from_buffer_chain(sch, block, buffer)
# if no buffer producer block found, it means the buffer is an input buffer
return find_args_index(sch, buffer)
class IterKind(Enum):
"""Iter kinds for GEMM-liked programs.
We can simplify the computation to C[S, I, J] += A[S, I, K] * B[S, J, K],
where `I, J, K` are fundamental axes for gemm and `S` represents all
other spatial axes (e.g. batches)
kIter_S: spatial axes
kIter_I: I axes
kIter_J: J axes
kIter_K: K axes
kIter_T: trivial axes (i.e. with extent 1)
"""
kIter_S = 0
kIter_I = 1
kIter_J = 2
kIter_K = 3
kIter_T = 4
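# Worked example (added for clarity, not in the original source): for a batched matmul
#   C[b, i, j] += A[b, i, k] * B[b, j, k]
# the iter vars classify as
#   b -> kIter_S (batch/spatial axis shared by A, B and C)
#   i -> kIter_I, j -> kIter_J (output spatial axes)
#   k -> kIter_K (reduction axis)
# and any unit-extent data-parallel axis -> kIter_T (trivial).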
@dataclass
class IterTrait:
kind: IterKind
extent: PrimExpr
def make_iter_fusion_index_map(
traits: List[IterTrait],
kind_order: List[IterKind],
) -> tir.IndexMap:
fused_iters: Dict[IterKind, PrimExpr] = {}
input_iters: List[tir.Var] = []
for i, trait in enumerate(traits):
v_i = tir.Var(f"i{i}", trait.extent.dtype)
input_iters.append(v_i)
if trait.kind == IterKind.kIter_T:
continue
if trait.kind not in kind_order:
raise ValueError(f"Unknown iter kind {trait.kind}")
if trait.kind in fused_iters:
fused_iters[trait.kind] = fused_iters[trait.kind] * trait.extent + v_i
else:
fused_iters[trait.kind] = v_i
final_indices: List[tir.PrimExpr] = [
fused_iters.get(kind, tir.IntImm(traits[0].extent.dtype, 0)) for kind in kind_order
]
return tir.IndexMap(input_iters, final_indices, None)
def detect_iter_traits(block: tir.Block) -> Optional[Tuple[List[IterTrait]]]:
"""Detect iter traits based on the pattern C[S, I, J] += A[S, I, K] * B[S, J, K]
Parameters
----------
block : tir.Block
The block to be analyzed
Returns
-------
traits : Optional[Tuple[List[IterTrait]]]
The detected iter traits for axes in A, B and C. None if the block
does not match the pattern.
"""
if len(block.reads) != 2 or len(block.writes) != 1:
return None
def get_access_axes(region: List[Range]) -> Set[Var]:
axes: Set[Var] = set()
for r in region:
if not _is_one(r.extent):
raise ValueError("Expect elemwise block access")
axes = axes.union(set(undefined_vars(r.min)))
return axes
try:
A_axes = get_access_axes(block.reads[0].region)
B_axes = get_access_axes(block.reads[1].region)
C_axes = get_access_axes(block.writes[0].region)
except ValueError:
return None
traits: Dict[Var, IterTrait] = {}
for iter_var in block.iter_vars:
var = iter_var.var
kind: IterKind
if _is_one(iter_var.dom.extent):
if iter_var.iter_type == tir.IterVar.CommReduce:
# for simplified case (e.g. 1x1 conv kernel)
kind = IterKind.kIter_K
else:
kind = IterKind.kIter_T
elif iter_var.iter_type == iter_var.DataPar:
if var in A_axes and var in B_axes and var in C_axes:
kind = IterKind.kIter_S
elif var in A_axes and var in C_axes:
kind = IterKind.kIter_I
elif var in B_axes and var in C_axes:
kind = IterKind.kIter_J
else:
return None
elif iter_var.iter_type == tir.IterVar.CommReduce:
if var in A_axes and var in B_axes and var not in C_axes:
kind = IterKind.kIter_K
else:
return None
else:
return None
traits[var] = IterTrait(kind, iter_var.dom.extent)
    # A GEMM kernel requires I, J and K axes
gemm_traits = {IterKind.kIter_I, IterKind.kIter_J, IterKind.kIter_K}
if {x.kind for x in traits.values()}.intersection(gemm_traits) != gemm_traits:
return None
A_traits = [traits[iter_var.var] for iter_var in block.iter_vars if iter_var.var in A_axes]
B_traits = [traits[iter_var.var] for iter_var in block.iter_vars if iter_var.var in B_axes]
C_traits = [traits[iter_var.var] for iter_var in block.iter_vars if iter_var.var in C_axes]
block_traits = [traits[i.var] for i in block.iter_vars]
return A_traits, B_traits, C_traits, block_traits
def get_index_map(block: tir.Block,
layout: Optional[List[str]] = None) -> Optional[Tuple[tir.IndexMap, ...]]:
"""Get index maps for the block
Parameters
----------
block : tir.Block
The block to be analyzed
layout : List[str]
the target layout index map to be used.
'n' for [i, k] layout
't' for [k, j] layout
'a' for auto inference based on whether the last axis is reduction.
Returns
-------
index_maps : Optional[Tuple[tir.IndexMap]]
        The index maps for the block, or None if the block is not a gemm-like kernel
"""
if layout is None:
layout = ["n", "t", "n"]
traits = detect_iter_traits(block)
if traits is None:
return None
A_traits, B_traits, C_traits, block_traits = traits
    def get_ordered_axes(region: List[Range]) -> List[Var]:
axes: List[Var] = []
for r in region:
if not _is_one(r.extent):
raise ValueError("Expect elemwise block access")
axes.append(r.min)
return axes
def is_common_reduce(var: Var) -> bool:
for iter_var in block.iter_vars:
if iter_var.var == var and iter_var.iter_type == IterVar.CommReduce:
return True
return False
def check_last_trait(region: List[Range]):
axes = get_ordered_axes(region)
return is_common_reduce(axes[-1])
def infer_layout(layout: str, region: List[Range], kind: str = "A"):
"""
Infer the layout based on the region and the kind of buffer
kind: "A", "B", "C"
"""
primary_iter, secondary_iter, reduction_iter = {
"A": (IterKind.kIter_I, IterKind.kIter_K, IterKind.kIter_K),
"B": (IterKind.kIter_K, IterKind.kIter_J, IterKind.kIter_K),
"C": (IterKind.kIter_I, IterKind.kIter_J, None),
}[kind]
spatial_iter = {
"A": IterKind.kIter_I,
"B": IterKind.kIter_J,
"C": None,
}[kind]
if layout == "n":
return [IterKind.kIter_S, primary_iter, secondary_iter]
elif layout == "t":
return [IterKind.kIter_S, secondary_iter, primary_iter]
elif layout == "a":
# auto inference layout
# for buffer with reduction axis, we put it as the last axis
# otherwise, we put it as the first axis
if kind == "C":
return [IterKind.kIter_S, primary_iter, secondary_iter]
else:
return ([IterKind.kIter_S, spatial_iter, reduction_iter] if check_last_trait(region)
else [IterKind.kIter_S, reduction_iter, spatial_iter])
else:
raise ValueError(f"Unknown layout {layout}")
A_index_map = make_iter_fusion_index_map(
A_traits, infer_layout(layout[0], block.reads[0].region, kind="A"))
B_index_map = make_iter_fusion_index_map(
B_traits, infer_layout(layout[1], block.reads[1].region, kind="B"))
C_index_map = make_iter_fusion_index_map(
C_traits, infer_layout(layout[2], block.writes[0].region, kind="C"))
matmul_index_map = make_iter_fusion_index_map(
block_traits,
[IterKind.kIter_S, IterKind.kIter_I, IterKind.kIter_J, IterKind.kIter_K],
)
return (
matmul_index_map,
A_index_map,
B_index_map,
C_index_map,
)
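# Illustrative sketch (added for clarity, not part of the original module): build a
# small NT-layout GEMM with TE and query its index maps; all shapes here are arbitrary.
def _example_get_index_map():
    from tvm import te  # local import to keep the sketch self-contained

    A = te.placeholder((128, 64), name="A", dtype="float16")
    B = te.placeholder((256, 64), name="B", dtype="float16")
    k = te.reduce_axis((0, 64), name="k")
    C = te.compute((128, 256), lambda i, j: te.sum(A[i, k] * B[j, k], axis=k), name="C")
    sch = tir.Schedule(te.create_prim_func([A, B, C]))
    block_stmt = sch.get(sch.get_block("C"))
    # Returns the (matmul, A, B, C) index maps, or None for non GEMM-like blocks.
    return get_index_map(block_stmt, layout=["n", "t", "n"])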
def get_in_out_dtypes(block: tir.Block) -> Tuple[str]:
"""
    Detect in/out data types for the given block based on analysis of its read/write buffers.
"""
assert len(block.reads) > 0 and len(block.writes) > 0
in_dtype = block.reads[0].buffer.dtype
out_dtype = block.writes[0].buffer.dtype
return (in_dtype, out_dtype)
def get_dequantize_block(sch, blocks) -> Optional[BlockRV]:
    # check for at least two inputs and one output;
    # at least one input has a uint dtype, and the output dtype is float
def is_dequantize(block: BlockRV) -> bool:
block_stmt = sch.get(block)
if len(block_stmt.reads) < 2:
return False
has_uint_input = any("uint" in str(region.buffer.dtype) for region in block_stmt.reads)
if not has_uint_input:
return False
if len(block_stmt.writes) != 1 or "float" not in str(block_stmt.writes[0].buffer.dtype):
return False
return True
dequantize_blocks = [block for block in blocks if is_dequantize(block)]
return dequantize_blocks[0] if len(dequantize_blocks) == 1 else None
def is_identity_or_transpose_block(block_stmt: tir.Block) -> Tuple[bool, bool]:
iter_types = {iter_var.iter_type for iter_var in block_stmt.iter_vars}
if iter_types != {IterVar.DataPar}:
return False, False
if not isinstance(block_stmt.body, tir.BufferStore):
return False, False
if not isinstance(block_stmt.body.value, tir.BufferLoad):
return False, False
def get_access_vars(region: List[Range]) -> List[Var]:
axes: List[Var] = []
for r in region:
if not _is_one(r.extent):
return None
axes.extend(undefined_vars(r.min))
# remove trivial axis
trivial_vars = set(
iter_var.var for iter_var in block_stmt.iter_vars if _is_one(iter_var.dom.extent))
axes = [axis for axis in axes if axis not in trivial_vars]
# remove duplicate axis
axes = [var for i, var in enumerate(axes) if i == 0 or var != axes[i - 1]]
return axes
lhs_access_vars = get_access_vars(block_stmt.reads[0].region)[-2:]
rhs_access_vars = get_access_vars(block_stmt.writes[0].region)[-2:]
is_identity = list(lhs_access_vars) == list(rhs_access_vars)
is_transpose = list(lhs_access_vars) != list(rhs_access_vars) and set(lhs_access_vars) == set(
rhs_access_vars)
return is_identity, is_transpose
def is_identity_block(block_stmt: tir.Block) -> bool:
return is_identity_or_transpose_block(block_stmt)[0]
def is_transpose_block(block_stmt: tir.Block) -> bool:
return is_identity_or_transpose_block(block_stmt)[1]
def inline_transpose_block(sch: tir.Schedule, blocks: List[tir.schedule.BlockRV]):
result_blocks = []
for block in blocks:
if not is_transpose_block(sch.get(block)):
result_blocks.append(block)
continue
try:
sch.compute_inline(block)
except Exception:
try:
sch.reverse_compute_inline(block)
except Exception:
result_blocks.append(block)
return result_blocks
def normalize_to_matmul(sch: tir.Schedule,
main_block: BlockRV,
layout: Optional[List[str]] = None) -> Optional[tir.Schedule]:
if layout is None:
layout = ["n", "t", "n"]
block_stmt = sch.get(main_block)
# let layout be 'a' to auto inference the layout
index_maps = get_index_map(block_stmt, layout=layout)
if index_maps is None:
logger.debug("Cannot find the appropriate index map for tensorcore")
return None
matmul_index_map, a_index_map, b_index_map, c_index_map = index_maps
# `skip_simplify` to avoid the bug in the 1x1 conv
block = sch.reindex(main_block, ("read", 0), skip_simplify=True)
sch.transform_layout(block, ("write", 0), a_index_map)
block = sch.reindex(main_block, ("read", 1), skip_simplify=True)
sch.transform_layout(block, ("write", 0), b_index_map)
block = sch.reindex(main_block, ("write", 0), skip_simplify=True)
sch.transform_layout(block, ("read", 0), c_index_map)
sch.transform_block_layout(main_block, matmul_index_map)
sch.mod["main"] = sch.mod["main"].with_attr("dlight.tensorcore_prenormlized", True)
return sch
def get_tensorized_func_and_tags(
func: tir.PrimFunc,
target: Target,
layout: Optional[List[str]] = None,
skip_normalize: bool = False,
allow_gemv: bool = False,
) -> Tuple[tir.PrimFunc, Dict[str, Union[List[int], int]]]:
    """Transform the function into a matmul if necessary (e.g. transform conv2d with im2col)."""
    from tvm.tir.tensor_intrin.cuda import (  # pylint: disable=import-outside-toplevel
        get_mma_intrin_group,)
if layout is None:
layout = ["a", "a", "a"]
# step1. detect whether the function can utilize tensorcore
sch = tir.Schedule(func)
root_block = get_root_block(sch)
blocks = sch.get_child_blocks(root_block)
reduction_blocks = get_reduction_blocks(sch, blocks)
if not reduction_blocks or len(reduction_blocks) != 1:
return func, None
def _can_be_tensorized(sch: tir.Schedule, block: BlockRV) -> bool:
block_stmt = sch.get(block)
conditions = []
conditions.append(len(block_stmt.reads) == 2)
conditions.append(len(block_stmt.writes) == 1)
conditions.append(
len(
collect_block_iter_vars_used_in_access_region(block_stmt,
block_stmt.writes[0].region)) > 0)
if not all(conditions):
return False
return True
# step2. transform function to tensorcore matmul (e.g. conv2d with im2col)
def check_sm_version(arch: str) -> int:
sm_version = arch.replace("sm_", "")
return int(sm_version) if sm_version.isdigit() else -1
    def analysis_tensorcore_tags(sch: tir.Schedule, block: BlockRV,
                                 target: Target) -> Dict[str, Union[List[int], int]]:
tags: Dict[str, Union[List[int], int]] = {}
block_stmt = sch.get(block)
# analysis tensorcore axis
# todo(lei): maybe we can remove this in the future
(write_buffer_region,) = block_stmt.writes
out_axis = len(write_buffer_region.buffer.shape)
tags["tensorcore_config"] = [out_axis - 2, out_axis - 1]
# analysis pipeline stage
# todo(lei): maybe we can integrate this into policy in the future
tags["pipeline_stage"] = 1
if target.kind.name == "cuda" and check_sm_version(target.arch) == 80:
# enable pipeline stage only for sm_80 devices
tags["pipeline_stage"] = 2
# analysis async copy
# todo(lei): maybe we can integrate this into policy in the future
tags["use_async_copy"] = False
if tags["pipeline_stage"] == 2 and check_sm_version(target.arch) >= 80:
# async copy only works in software pipeline.
tags["use_async_copy"] = True
# analysis intrin information
        def get_ordered_axes(region: List[Range]) -> List[Var]:
axes: List[Var] = []
for r in region:
if not _is_one(r.extent):
raise ValueError("Expect elemwise block access")
axes.append(r.min)
return axes
def is_common_reduce(var: Var) -> bool:
for iter_var in block_stmt.iter_vars:
if iter_var.var == var and iter_var.iter_type == IterVar.CommReduce:
return True
return False
def check_last_trait(region: List[Range]):
axes = get_ordered_axes(region)
return is_common_reduce(axes[-1])
intrin_info: dict = {}
in_dtype, out_dtype = get_in_out_dtypes(block_stmt)
intrin_info["in_dtype"] = in_dtype
intrin_info["out_dtype"] = out_dtype
        # if the last dimension is a reduction axis, B is transposed
intrin_info["trans_b"] = check_last_trait(block_stmt.reads[1].region)
if func.attrs is not None and "input_transform_kind" in func.attrs:
intrin_info["input_transform_kind"] = func.attrs["input_transform_kind"]
if func.attrs is not None and "weight_transform_kind" in func.attrs:
intrin_info["weight_transform_kind"] = func.attrs["weight_transform_kind"]
tags["intrin_info"] = intrin_info
return tags
(main_block,) = reduction_blocks
    if not _can_be_tensorized(sch, main_block):
return func, None
block_stmt = sch.get(main_block)
if target.kind.name == "cuda" and check_sm_version(target.arch) >= 70:
# TODO(lei): we should consider the dtype of the input a and b
# instead of assuming both a and b share the same dtype.
        # As the tensorcore may support e4m3_float8 * e5m2_float8
in_dtype, out_dtype = get_in_out_dtypes(block_stmt)
try:
_ = get_mma_intrin_group(
a_dtype=in_dtype,
b_dtype=in_dtype,
out_dtype=out_dtype,
)
except Exception:
logger.debug("Cannot find the corresponding mma intrin group")
return func, None
# reindex and transform functions
# Normalize tensor functions to C[S, I, J] += A[S, I, K] * B[S, J, K]
# or C[S, I, J] += A[S, I, K] * B[S, K, J]
# skip normalize when we want to detect tags only.
if not skip_normalize:
sch = normalize_to_matmul(sch, main_block, layout)
if sch is None:
return func, None
block_stmt = sch.get(main_block)
minimal_tensorize_threshold = 16
# the batch dimension is not taken into consideration.
extent = block_stmt.iter_vars[1].dom.extent
if isinstance(extent,
tir.expr.IntImm) and (extent.value <
(1 if allow_gemv else minimal_tensorize_threshold)):
return func, None
for item_var in block_stmt.iter_vars[2:]:
extent = item_var.dom.extent
if (isinstance(extent, tir.expr.IntImm) and extent.value < minimal_tensorize_threshold):
return func, None
tags = analysis_tensorcore_tags(sch, main_block, target)
return sch.mod["main"], tags
return func, None
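# Illustrative sketch (added for clarity, not part of the original module): probe whether
# a PrimFunc is a tensorcore candidate for a given CUDA target, e.g.
# Target("nvidia/nvidia-a100"), and inspect the detected tags.
def _example_get_tensorized_func_and_tags(func: tir.PrimFunc, target: Target):
    tensorized_func, tags = get_tensorized_func_and_tags(func, target, allow_gemv=True)
    if tags is None:
        return None  # not a tensorcore candidate on this target
    # tags typically include "intrin_info", "pipeline_stage" and "use_async_copy".
    return tensorized_func, tags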
def get_propagate_map(trans: bool = True, dtype="float16", matrix_name="A", index_dtype="int32"):
from tvm.tir.tensor_intrin.cuda import ( # pylint: disable=import-outside-toplevel
ldmatrix_32x8_to_shared_16x16_layout, ldmatrix_trans_32x8_to_shared_16x16_layout,
ldmatrix_32x16_to_shared_16x32_layout_a, ldmatrix_32x16_to_shared_16x32_layout_b,
)
assert dtype in [
"float16",
"int8",
"e4m3_float8",
"e5m2_float8",
], "Only support float16, int8, e4m3_float8, e5m2_float8"
if dtype == "float16":
ldmatrix_layout = ldmatrix_32x8_to_shared_16x16_layout
ldmatrix_layout_trans = ldmatrix_trans_32x8_to_shared_16x16_layout
elif dtype in ["int8", "e4m3_float8", "e5m2_float8"]:
        # int8 mma only supports the 32x16 to 16x32 layout
if matrix_name == "A" and trans is False:
ldmatrix_layout = ldmatrix_32x16_to_shared_16x32_layout_a
elif matrix_name == "B" and trans is True:
ldmatrix_layout = ldmatrix_32x16_to_shared_16x32_layout_b
else:
raise ValueError("Unknown matrix name ", matrix_name)
    # The intra-warp memory layout is produced by ldmatrix, so we lift the ldmatrix permutation out
def ldmatrix_permutation_16x16_32x8_16x16(kernel_i, kernel_j):
thread_id = kernel_i * 2 + kernel_j // 8
local_id = kernel_j % 8
return ldmatrix_layout(thread_id, local_id)
def ldmatrix_trans_permutation_16x16_32x8_16x16(kernel_i, kernel_j):
thread_id = kernel_i * 2 + kernel_j // 8
local_id = kernel_j % 8
return ldmatrix_layout_trans(thread_id, local_id)
def ldmatrix_permutation_16x32_32x16_32x16(kernel_i, kernel_j):
thread_id = kernel_i * 2 + kernel_j // 16
local_id = kernel_j % 16
return ldmatrix_layout(thread_id, local_id)
if dtype == "float16":
ldmatrix_index_map = (
ldmatrix_trans_permutation_16x16_32x8_16x16
if trans else ldmatrix_permutation_16x16_32x8_16x16)
else:
ldmatrix_index_map = ldmatrix_permutation_16x32_32x16_32x16
ldmatrix_index_map = IndexMap.from_func(ldmatrix_index_map, index_dtype=index_dtype)
# TODO(lei): index_dtype should be analyzed from the schedule
row, col = [16, 16] if dtype == "float16" else [16, 32]
inversed_index_map = ldmatrix_index_map.inverse([row, col])
return ldmatrix_index_map, inversed_index_map
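# Illustrative sketch (added for clarity, not part of the original module): fetch the
# ldmatrix permutation for a transposed float16 "A" tile and its inverse; for float16
# the maps act on 16x16 tiles.
def _example_get_propagate_map():
    forward_map, inverse_map = get_propagate_map(trans=True, dtype="float16", matrix_name="A")
    # forward_map reorders elements into the ldmatrix-friendly layout;
    # inverse_map undoes that reordering within the same tile.
    return forward_map, inverse_map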
def layout_propagate_chain(
sch: tir.Schedule,
start_block: BlockRV,
start_buffer: tir.Buffer,
end_block: BlockRV,
index_map: IndexMap,
):
# some layout transformation may only apply to the last n dimensions
# propagate the layout transformation to the chain of blocks
block = start_block
buffer = start_buffer
index_map = index_map
while True:
last_buffer = buffer
producers = sch.get_producers(block)
if len(producers) == 0:
break
for producer in producers:
if len(sch.get(producer).writes) != 1:
return index_map
if sch.get(producer) == sch.get(end_block):
return index_map
(write,) = sch.get(producer).writes
read = find_first_similar_region(sch.get(producer).reads, last_buffer)
if write.buffer == buffer:
block = producer
buffer = read.buffer
write_indices = [r.min for r in write.region]
read_indices = [r.min for r in read.region]
# reverse index map from [vi // x] -> [vi * x] to match the inconsistent layout
tmp_index_map = IndexMap(write_indices, read_indices, None)
tmp_index_map = tmp_index_map.non_surjective_inverse(write.buffer.shape)[0]
# if dequantize like ops are used, the scaling factor should be considered
# to be applied to the final indices
scaling_factor = 1
for i, j in zip(write.buffer.shape, read.buffer.shape):
scaling_factor *= i // j
final_indices = list(
index_map.map_indices(tmp_index_map.map_indices(write_indices)))
final_indices[-1] = final_indices[-1] // scaling_factor
index_map = IndexMap(
write_indices,
final_indices,
None,
)
if buffer == last_buffer:
break
return index_map
|
BitBLAS/python/bitblas/gpu/matmul_analysis.py/0
|
{
"file_path": "BitBLAS/python/bitblas/gpu/matmul_analysis.py",
"repo_id": "BitBLAS",
"token_count": 12863
}
| 157 |
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT License.
# pre-transformed tir expression of matmul
import tvm
from tvm import te
from bitblas.gpu.matmul_analysis import get_propagate_map
from bitblas.ops.operator import TransformKind
def matmul_nn(
M,
N,
K,
in_dtype="float16",
out_dtype="float16",
accum_dtype="float16",
with_bias=False,
):
if not isinstance(M, int):
M = tvm.te.var("m")
A = te.placeholder((M, K), name="A", dtype=in_dtype)
B = te.placeholder((K, N), name="B", dtype=in_dtype)
Bias = te.placeholder((N,), name="Bias", dtype=in_dtype)
# Describe the matrix multiplication in TE
k = te.reduce_axis((0, K), name="k")
C = te.compute(
(M, N),
lambda i, j: te.sum(A[i, k].astype(accum_dtype) * B[k, j].astype(accum_dtype), axis=k),
name="C",
)
last_output = C
if accum_dtype != out_dtype:
D = te.compute((M, N), lambda i, j: C[i, j].astype(out_dtype), name="D")
last_output = D
if with_bias:
E = te.compute((M, N), lambda i, j: last_output[i, j] + Bias[j], name="E")
last_output = E
args = [A, B, Bias, last_output] if with_bias else [A, B, last_output]
func = te.create_prim_func(args)
return tvm.IRModule.from_expr(func)
def matmul_nt(
M,
N,
K,
in_dtype="float16",
out_dtype="float16",
accum_dtype="float16",
with_bias=False,
):
if not isinstance(M, int):
M = tvm.te.var("m")
A = te.placeholder((M, K), name="A", dtype=in_dtype)
B = te.placeholder((N, K), name="B", dtype=in_dtype)
Bias = te.placeholder((N,), name="Bias", dtype=in_dtype)
# Describe the matrix multiplication in TE
k = te.reduce_axis((0, K), name="k")
C = te.compute(
(M, N),
lambda i, j: te.sum(A[i, k].astype(accum_dtype) * B[j, k].astype(accum_dtype), axis=k),
name="C",
)
last_output = C
if accum_dtype != out_dtype:
D = te.compute((M, N), lambda i, j: C[i, j].astype(out_dtype), name="D")
last_output = D
if with_bias:
E = te.compute((M, N), lambda i, j: last_output[i, j] + Bias[j], name="E")
last_output = E
args = [A, B, Bias, last_output] if with_bias else [A, B, last_output]
func = te.create_prim_func(args)
return tvm.IRModule.from_expr(func)
def matmul(
M,
N,
K,
in_dtype="float16",
out_dtype="float16",
accum_dtype="float16",
with_bias=False,
layout="nt",
):
if layout == "nn":
return matmul_nn(M, N, K, in_dtype, out_dtype, accum_dtype, with_bias)
return matmul_nt(M, N, K, in_dtype, out_dtype, accum_dtype, with_bias)
def matmul_nt_propagate_a(
M,
N,
K,
in_dtype="float16",
out_dtype="float16",
accum_dtype="float16",
with_bias=False,
transform_kind: TransformKind = TransformKind.IntraWarpTransform,
):
if not isinstance(M, int):
M = tvm.te.var("m")
l = r = 16 # noqa: E741
if in_dtype in ["int8", "e4m3_float8", "e5m2_float8"]:
l, r = 16, 32 # noqa: E741
_, inversed_index_map = get_propagate_map(trans=False, dtype=in_dtype, matrix_name="A")
A = te.placeholder((M // l, K // r, l, r), name="A", dtype=in_dtype)
B = te.placeholder((N, K), name="B", dtype=in_dtype)
Bias = te.placeholder((N,), name="Bias", dtype=in_dtype)
def fcompute(i, j):
warp_i, warp_j = i % l, j % r
spatial_args = i // l, j // r
if transform_kind >= TransformKind.IntraWarpTransform:
warp_i, warp_j = inversed_index_map.map_indices([warp_i, warp_j])
new_index = (*spatial_args, warp_i, warp_j)
return A[new_index]
A_reindex = te.compute(
(M, K),
fcompute,
name="A_reindex",
)
# Describe the matrix multiplication in TE
k = te.reduce_axis((0, K), name="k")
C = te.compute(
(M, N),
lambda i, j: te.sum(
A_reindex[i, k].astype(accum_dtype) * B[j, k].astype(accum_dtype), axis=k),
name="C",
)
last_output = C
if accum_dtype != out_dtype:
D = te.compute((M, N), lambda i, j: C[i, j].astype(out_dtype), name="D")
last_output = D
if with_bias:
E = te.compute((M, N), lambda i, j: last_output[i, j] + Bias[j], name="E")
last_output = E
args = [A, B, Bias, last_output] if with_bias else [A, B, last_output]
func = te.create_prim_func(args)
func = func.with_attr("input_transform_kind", transform_kind.value)
return tvm.IRModule.from_expr(func)
def matmul_nt_propagate_b(
M,
N,
K,
in_dtype="float16",
out_dtype="float16",
accum_dtype="float16",
with_bias=False,
transform_kind: TransformKind = TransformKind.IntraWarpTransform,
):
if not isinstance(M, int):
M = tvm.te.var("m")
l = r = 16 # noqa: E741
if in_dtype in ["int8", "e4m3_float8", "e5m2_float8"]:
l, r = 16, 32 # noqa: E741
_, inversed_index_map = get_propagate_map(trans=True, dtype=in_dtype, matrix_name="B")
A = te.placeholder((M, K), name="A", dtype=in_dtype)
B = te.placeholder((N // l, K // r, l, r), name="B", dtype=in_dtype)
Bias = te.placeholder((N,), name="Bias", dtype=in_dtype)
def fcompute(i, j):
warp_i, warp_j = i % l, j % r
spatial_args = i // l, j // r
if transform_kind >= TransformKind.IntraWarpTransform:
warp_i, warp_j = inversed_index_map.map_indices([warp_i, warp_j])
new_index = (*spatial_args, warp_i, warp_j)
return B[new_index]
B_reindex = te.compute(
(N, K),
fcompute,
name="B_reindex",
)
# Describe the matrix multiplication in TE
k = te.reduce_axis((0, K), name="k")
C = te.compute(
(M, N),
lambda i, j: te.sum(
A[i, k].astype(accum_dtype) * B_reindex[j, k].astype(accum_dtype), axis=k),
name="C",
)
last_output = C
if accum_dtype != out_dtype:
D = te.compute((M, N), lambda i, j: C[i, j].astype(out_dtype), name="D")
last_output = D
if with_bias:
E = te.compute((M, N), lambda i, j: last_output[i, j] + Bias[j], name="E")
last_output = E
args = [A, B, Bias, last_output] if with_bias else [A, B, last_output]
func = te.create_prim_func(args)
func = func.with_attr("weight_transform_kind", transform_kind.value)
return tvm.IRModule.from_expr(func)
def matmul_nt_propagate_a_propagate_b(
M,
N,
K,
in_dtype="float16",
out_dtype="float16",
accum_dtype="float16",
with_bias=False,
transform_kind_input: TransformKind = TransformKind.IntraWarpTransform,
transform_kind_weight: TransformKind = TransformKind.IntraWarpTransform,
):
if not isinstance(M, int):
M = tvm.te.var("m")
l = r = 16 # noqa: E741
if in_dtype in ["int8", "e4m3_float8", "e5m2_float8"]:
l, r = 16, 32 # noqa: E741
A = te.placeholder((M // l, K // r, l, r), name="A", dtype=in_dtype)
B = te.placeholder((N // l, K // r, l, r), name="B", dtype=in_dtype)
Bias = te.placeholder((N,), name="Bias", dtype=in_dtype)
_, inversed_index_map = get_propagate_map(trans=False, dtype=in_dtype, matrix_name="A")
def fcompute(i, j):
warp_i, warp_j = i % l, j % r
spatial_args = i // l, j // r
if transform_kind_input >= TransformKind.IntraWarpTransform:
warp_i, warp_j = inversed_index_map.map_indices([warp_i, warp_j])
new_index = (*spatial_args, warp_i, warp_j)
return A[new_index]
A_reindex = te.compute(
(M, K),
fcompute,
name="A_reindex",
)
_, inversed_index_map = get_propagate_map(trans=True, dtype=in_dtype, matrix_name="B")
def fcompute(i, j):
warp_i, warp_j = i % l, j % r
spatial_args = i // l, j // r
if transform_kind_weight >= TransformKind.IntraWarpTransform:
warp_i, warp_j = inversed_index_map.map_indices([warp_i, warp_j])
new_index = (*spatial_args, warp_i, warp_j)
return B[new_index]
B_reindex = te.compute(
(N, K),
fcompute,
name="B_reindex",
)
# Describe the matrix multiplication in TE
k = te.reduce_axis((0, K), name="k")
C = te.compute(
(M, N),
lambda i, j: te.sum(
A_reindex[i, k].astype(accum_dtype) * B_reindex[j, k].astype(accum_dtype),
axis=k,
),
name="C",
)
last_output = C
if accum_dtype != out_dtype:
D = te.compute((M, N), lambda i, j: C[i, j].astype(out_dtype), name="D")
last_output = D
if with_bias:
E = te.compute((M, N), lambda i, j: last_output[i, j] + Bias[j], name="E")
last_output = E
args = [A, B, Bias, last_output] if with_bias else [A, B, last_output]
func = te.create_prim_func(args)
func = func.with_attr("input_transform_kind", transform_kind_input.value)
func = func.with_attr("weight_transform_kind", transform_kind_weight.value)
return tvm.IRModule.from_expr(func)
def select_implementation(
M=None,
N=16384,
K=16384,
in_dtype="float16",
out_dtype="float16",
accum_dtype="float16",
with_bias=False,
layout="nt",
propagate_a: TransformKind = TransformKind.NonTransform,
propagate_b: TransformKind = TransformKind.NonTransform,
):
if layout == "nn":
if propagate_a or propagate_b:
raise ValueError(
"Currently only support propagate_a=False and propagate_b=False for layout=nn")
return matmul(M, N, K, in_dtype, out_dtype, accum_dtype, with_bias, layout)
elif layout == "nt":
if propagate_a and propagate_b:
return matmul_nt_propagate_a_propagate_b(
M,
N,
K,
in_dtype,
out_dtype,
accum_dtype,
with_bias,
transform_kind_input=propagate_a,
transform_kind_weight=propagate_b,
)
elif propagate_a:
return matmul_nt_propagate_a(
M,
N,
K,
in_dtype,
out_dtype,
accum_dtype,
with_bias,
transform_kind=propagate_a,
)
elif propagate_b:
return matmul_nt_propagate_b(
M,
N,
K,
in_dtype,
out_dtype,
accum_dtype,
with_bias,
transform_kind=propagate_b,
)
else:
return matmul(M, N, K, in_dtype, out_dtype, accum_dtype, with_bias, layout)
else:
raise ValueError(f"Unsupported layout: {layout}")
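# Illustrative usage sketch (added for clarity, not part of the original module): build
# the TE-level IRModule for an NT-layout float16 matmul with dynamic M and weight-side
# layout propagation; all sizes here are arbitrary.
def _example_select_implementation():
    return select_implementation(
        M=[1, 16, 32],  # a non-int M becomes a symbolic var internally
        N=1024,
        K=1024,
        in_dtype="float16",
        out_dtype="float16",
        accum_dtype="float16",
        with_bias=False,
        layout="nt",
        propagate_a=TransformKind.NonTransform,
        propagate_b=TransformKind.IntraWarpTransform,
    )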
|
BitBLAS/python/bitblas/ops/impl/matmul_impl.py/0
|
{
"file_path": "BitBLAS/python/bitblas/ops/impl/matmul_impl.py",
"repo_id": "BitBLAS",
"token_count": 5449
}
| 158 |
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT License.
from .post_process import match_global_kernel, tensor_replace_dp4a, tensor_remove_make_int4 # noqa: F401
from .tensor_adapter import tvm_tensor_to_torch, lazy_tvm_tensor_to_torch, lazy_torch_to_tvm_tensor # noqa: F401
from .target_detector import auto_detect_nvidia_target # noqa: F401
|
BitBLAS/python/bitblas/utils/__init__.py/0
|
{
"file_path": "BitBLAS/python/bitblas/utils/__init__.py",
"repo_id": "BitBLAS",
"token_count": 131
}
| 159 |
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT License.
import pytest
import os
import torch
import bitblas
from bitblas.ops.matmul import Matmul, MatmulConfig
from bitblas.ops.matmul_dequantize import (
MatmulWeightOnlyDequantize,
MatmulWeightOnlyDequantizeConfig,
)
from bitblas.cache import global_operator_cache
target = bitblas.utils.auto_detect_nvidia_target()
def get_codegen_result(ops, target):
code = ops.get_source(target=target)
return code
# fmt: off
@pytest.mark.parametrize(
"M,N,K,in_dtype,out_dtype,accum_dtype,with_bias,propagate_a,propagate_b,layout,enable_tuning",
[
(1, 16384, 16384, "float16", "float16", "float16", False, False, False, "nt", False),
# dynamic shape
([1], 16384, 16384, "float16", "float16", "float16", False, False, False, "nt", False),
([1, 32], 16384, 16384, "float16", "float16", "float16", False, False, False, "nt", True),
],
)
def test_config_hashable(
M,
N,
K,
in_dtype,
out_dtype,
accum_dtype,
with_bias,
propagate_a,
propagate_b,
layout,
enable_tuning,
):
matmul_config = MatmulConfig(
M=M,
N=N,
K=K,
in_dtype=in_dtype,
out_dtype=out_dtype,
accum_dtype=accum_dtype,
with_bias=with_bias,
propagate_a=propagate_a,
propagate_b=propagate_b,
layout=layout,
)
matmul = Matmul(
config=matmul_config,
target=target,
)
if enable_tuning:
matmul.hardware_aware_finetune(topk=20)
BITBLAS_TUNING_CACHE = {}
success = False
try:
BITBLAS_TUNING_CACHE[matmul.config] = matmul
success = True
except Exception as hash_error:
print(hash_error)
assert success
@pytest.mark.parametrize(
"M,N,K,in_dtype,out_dtype,accum_dtype,with_bias,propagate_a,propagate_b,layout,enable_tuning",
[
(1, 16384, 16384, "float16", "float16", "float16", False, False, False, "nt", False),
# dynamic shape
([1], 16384, 16384, "float16", "float16", "float16", False, False, False, "nt", False),
([1, 32], 16384, 16384, "float16", "float16", "float16", False, False, False, "nt", True),
],
)
def test_global_cache_inquery(
M,
N,
K,
in_dtype,
out_dtype,
accum_dtype,
with_bias,
propagate_a,
propagate_b,
layout,
enable_tuning,
):
matmul_config = MatmulConfig(
M=M,
N=N,
K=K,
in_dtype=in_dtype,
out_dtype=out_dtype,
accum_dtype=accum_dtype,
with_bias=with_bias,
propagate_a=propagate_a,
propagate_b=propagate_b,
layout=layout,
)
matmul = Matmul(
config=matmul_config,
target=target,
)
if enable_tuning:
matmul.hardware_aware_finetune(topk=20)
success = False
try:
global_operator_cache.add(matmul.config, matmul)
success = True
except Exception as hash_error:
print(hash_error)
assert success
matmul = global_operator_cache.get(matmul.config)
assert matmul is not None
@pytest.mark.parametrize(
"M,N,K,in_dtype,out_dtype,accum_dtype,with_bias,propagate_a,propagate_b,layout,enable_tuning",
[
(1, 16384, 16384, "float16", "float16", "float16", False, False, False, "nt", False),
# dynamic shape
([1], 16384, 16384, "float16", "float16", "float16", False, False, False, "nt", False),
([1, 32], 16384, 16384, "float16", "float16", "float16", False, False, False, "nt", True),
],
)
def test_global_cache_inquery_torch_forward(
M,
N,
K,
in_dtype,
out_dtype,
accum_dtype,
with_bias,
propagate_a,
propagate_b,
layout,
enable_tuning,
):
matmul_config = MatmulConfig(
M=M,
N=N,
K=K,
in_dtype=in_dtype,
out_dtype=out_dtype,
accum_dtype=accum_dtype,
with_bias=with_bias,
propagate_a=propagate_a,
propagate_b=propagate_b,
layout=layout,
)
matmul = Matmul(
config=matmul_config,
target=target,
)
if enable_tuning:
matmul.hardware_aware_finetune(topk=20)
success = False
try:
global_operator_cache.add(matmul.config, matmul)
success = True
except Exception as hash_error:
print(hash_error)
assert success
matmul = global_operator_cache.get(matmul.config)
assert matmul is not None
if not isinstance(M, int):
M = 32
# convert tensors to torch
input_shape = (M, K)
weight_shape = (N, K) if layout == "nt" else (K, N)
output_shape = (M, N)
inputs = []
inputs.append(torch.rand(input_shape, dtype=torch.float16).cuda())
inputs.append(torch.rand(weight_shape, dtype=torch.float16).cuda())
inputs.append(torch.rand(output_shape, dtype=torch.float16).cuda())
ref_result = torch.matmul(inputs[0], inputs[1].t() if layout == "nt" else inputs[1])
permuted_inputs = []
if matmul.input_transform is not None:
        permuted_inputs.append(matmul.input_transform(inputs[0].cpu()).cuda())
else:
permuted_inputs.append(inputs[0])
if matmul.weight_transform is not None:
permuted_inputs.append(matmul.weight_transform(inputs[1].cpu()).cuda())
else:
permuted_inputs.append(inputs[1])
permuted_inputs.append(inputs[2])
matmul(*permuted_inputs)
torch.testing.assert_close(permuted_inputs[-1], ref_result, rtol=1e-2, atol=1e-2)
@pytest.mark.parametrize(
"M,N,K,in_dtype,out_dtype,accum_dtype,with_bias,propagate_a,propagate_b,layout,enable_tuning",
[
(1, 16384, 16384, "float16", "float16", "float16", False, False, False, "nt", False),
([1, 32], 16384, 16384, "float16", "float16", "float16", False, False, False, "nt", False),
],
)
def test_global_cache_save_to_database(
M,
N,
K,
in_dtype,
out_dtype,
accum_dtype,
with_bias,
propagate_a,
propagate_b,
layout,
enable_tuning,
):
matmul_config = MatmulConfig(
M=M,
N=N,
K=K,
in_dtype=in_dtype,
out_dtype=out_dtype,
accum_dtype=accum_dtype,
with_bias=with_bias,
propagate_a=propagate_a,
propagate_b=propagate_b,
layout=layout,
)
matmul = Matmul(
config=matmul_config,
target=target,
)
if enable_tuning:
matmul.hardware_aware_finetune(topk=20)
success = False
try:
global_operator_cache.add(matmul.config, matmul)
success = True
except Exception as hash_error:
print(hash_error)
assert success
database_path = "debug/test_database"
global_operator_cache.save_into_database(database_path, target=target)
assert os.path.exists(database_path)
global_operator_cache.clear()
assert global_operator_cache.size() == 0
global_operator_cache.load_from_database(database_path, target=target)
assert global_operator_cache.size() > 0
matmul = global_operator_cache.get(matmul.config)
assert matmul is not None
if not isinstance(M, int):
M = 32
# convert tensors to torch
input_shape = (M, K)
weight_shape = (N, K) if layout == "nt" else (K, N)
output_shape = (M, N)
inputs = []
inputs.append(torch.rand(input_shape, dtype=torch.float16).cuda())
inputs.append(torch.rand(weight_shape, dtype=torch.float16).cuda())
inputs.append(torch.rand(output_shape, dtype=torch.float16).cuda())
ref_result = torch.matmul(inputs[0], inputs[1].t() if layout == "nt" else inputs[1])
permuted_inputs = []
if matmul.input_transform is not None:
        permuted_inputs.append(matmul.input_transform(inputs[0].cpu()).cuda())
else:
permuted_inputs.append(inputs[0])
if matmul.weight_transform is not None:
permuted_inputs.append(matmul.weight_transform(inputs[1].cpu()).cuda())
else:
permuted_inputs.append(inputs[1])
permuted_inputs.append(inputs[2])
matmul(*permuted_inputs)
torch.testing.assert_close(permuted_inputs[-1], ref_result, rtol=1e-2, atol=1e-2)
@pytest.mark.parametrize(
"M,N,K,in_dtype,out_dtype,accum_dtype,bit,storage_dtype,source_format,with_scaling,with_zeros,group_size,fast_decoding,with_bias,propagate_a,propagate_b,layout",
[
(
1,
1024,
1024,
"float16",
"float16",
"float16",
4,
"int8",
"uint",
False,
False,
-1,
False,
False,
False,
False,
"nt",
),
(
1,
1024,
1024,
"float16",
"float16",
"float16",
4,
"int8",
"nf",
False,
False,
-1,
False,
False,
False,
False,
"nt",
),
(
1024,
1024,
1024,
"float16",
"float16",
"float16",
4,
"int8",
"nf",
False,
False,
-1,
False,
False,
False,
False,
"nt",
),
(
1024,
1024,
1024,
"float16",
"float16",
"float16",
4,
"int8",
"nf",
False,
False,
-1,
False,
False,
False,
True,
"nt",
),
(
1024,
1024,
1024,
"float16",
"float16",
"float16",
4,
"int8",
"nf",
False,
False,
-1,
False,
False,
True,
True,
"nt",
),
(
1024,
1024,
1024,
"float16",
"float16",
"float16",
4,
"int8",
"nf",
True,
False,
-1,
False,
False,
True,
True,
"nt",
),
(
1024,
1024,
1024,
"float16",
"float16",
"float16",
4,
"int8",
"nf",
True,
False,
128,
False,
False,
True,
True,
"nt",
),
],
)
def test_matmul_dequantize_save_into_database(
M,
N,
K,
in_dtype,
out_dtype,
accum_dtype,
bit,
storage_dtype,
source_format,
with_scaling,
with_zeros,
group_size,
fast_decoding,
with_bias,
propagate_a,
propagate_b,
layout,
):
matmul_config = MatmulWeightOnlyDequantizeConfig(
M=M,
N=N,
K=K,
in_dtype=in_dtype,
out_dtype=out_dtype,
accum_dtype=accum_dtype,
bit=bit,
storage_dtype=storage_dtype,
source_format=source_format,
with_scaling=with_scaling,
with_zeros=with_zeros,
group_size=group_size,
fast_decoding=fast_decoding,
with_bias=with_bias,
propagate_a=propagate_a,
propagate_b=propagate_b,
layout=layout,
)
matmul = MatmulWeightOnlyDequantize(
config=matmul_config,
target=target,
)
matmul.hardware_aware_finetune(topk=20)
database_path = "debug/test_database"
success = False
try:
global_operator_cache.add(matmul.config, matmul)
success = True
except Exception as hash_error:
print(hash_error)
assert success
global_operator_cache.save_into_database(database_path, target=target)
assert os.path.exists(database_path)
global_operator_cache.clear()
assert global_operator_cache.size() == 0
global_operator_cache.load_from_database(database_path, target=target)
assert global_operator_cache.size() > 0
# fmt: on
if __name__ == "__main__":
bitblas.testing.main()
|
BitBLAS/testing/python/cache/test_operator_cache.py/0
|
{
"file_path": "BitBLAS/testing/python/cache/test_operator_cache.py",
"repo_id": "BitBLAS",
"token_count": 6576
}
| 160 |
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT License.
import tvm
from tvm.script import tir as T
from tvm.tir import IndexMap
from tvm.tir.tensor_intrin.cuda import (
ldmatrix_trans_32x8_to_shared_16x16_layout,
ldmatrix_32x16_to_shared_16x32_layout_a,
ldmatrix_32x16_to_shared_16x32_layout_b,
)
def ldmatrix_trans_permutation_16x16_32x8_16x16(kernel_i, kernel_j):
thread_id = kernel_i * 2 + kernel_j // 8
local_id = kernel_j % 8
return ldmatrix_trans_32x8_to_shared_16x16_layout(thread_id, local_id)
@tvm.script.ir_module
class LDMATRIX_16x16:
@T.prim_func
def main(a: T.handle, b: T.handle):
T.func_attr({"global_symbol": "main", "tir.noalias": True})
A = T.match_buffer(a, [16, 16], dtype="float16")
B = T.match_buffer(b, [16, 16], dtype="float16")
for i, j in T.grid(16, 16):
with T.block("B"):
vi, vj = T.axis.remap("SS", [i, j])
T.reads(B[vi, vj])
T.writes(A[vi, vj])
A[vi, vj] = B[vi, vj]
ir_module = LDMATRIX_16x16
sch = tvm.tir.Schedule(ir_module)
block_b = sch.get_block("B")
sch.transform_layout(block_b, ('read', 0), ldmatrix_trans_permutation_16x16_32x8_16x16)
print("========================inject transform=============================")
print(sch.mod["main"].script())
index_map = IndexMap.from_func(ldmatrix_trans_permutation_16x16_32x8_16x16, index_dtype="int32")
inversed_index_map = index_map.inverse([16, 16])
def inverse_permutation(i, j):
return inversed_index_map.map_indices([i, j])
sch.transform_layout(block_b, ('read', 0), inverse_permutation)
print("========================inverse inject transform=============================")
print(sch.mod["main"].script())
def ldmatrix_trans_permutation_16x32_16x32_16x32(kernel_i, kernel_j):
thread_id = kernel_i * 2 + kernel_j // 16
local_id = kernel_j % 16
return ldmatrix_32x16_to_shared_16x32_layout_a(thread_id, local_id)
@tvm.script.ir_module
class LDMATRIX_16x32_A:
@T.prim_func
def main(a: T.handle, b: T.handle):
T.func_attr({"global_symbol": "main", "tir.noalias": True})
A = T.match_buffer(a, [16, 32], dtype="float16")
B = T.match_buffer(b, [16, 32], dtype="float16")
for i, j in T.grid(16, 32):
with T.block("B"):
vi, vj = T.axis.remap("SS", [i, j])
T.reads(B[vi, vj])
T.writes(A[vi, vj])
A[vi, vj] = B[vi, vj]
ir_module = LDMATRIX_16x32_A
sch = tvm.tir.Schedule(ir_module)
block_b = sch.get_block("B")
sch.transform_layout(block_b, ('read', 0), ldmatrix_trans_permutation_16x32_16x32_16x32)
print("========================inject transform=============================")
print(sch.mod["main"].script())
index_map_inter = IndexMap.from_func(lambda i, j: (i // 16, j // 16, i % 16, j % 16), index_dtype="int32")
index_map_intra = IndexMap.from_func(ldmatrix_trans_permutation_16x32_16x32_16x32, index_dtype="int32")
print("index_map_inter", index_map_inter)
|
BitBLAS/testing/python/weight_only/index_map_fuse.py/0
|
{
"file_path": "BitBLAS/testing/python/weight_only/index_map_fuse.py",
"repo_id": "BitBLAS",
"token_count": 1424
}
| 161 |
date ; hostname ; pwd
EXP_NODES=1
EXP_IS=576
EXP_PGB=16
EXP_PGEB=64
EXP_LR=1e-5
EXP_BS=512
EXP_ME=10
EXP_WS=0.06
EXP_WD=0.05
EXP_LMH=50
EXP_LMC=5
EXP_THL=2
EXP_HHS=2
EXP_LP=BridgeTower_pt_base.ckpt
EXP_RGM=blip_randaug_wohf
export MASTER_ADDR=$HOSTNAME
export MASTER_PORT=19800
export NODE_RANK=0
PREFIX_NAME="ftfpt"
echo $MASTER_ADDR, $MASTER_PORT, $NODE_RANK, $EXP_NODES, $EXP_IS, $EXP_PGB, $EXP_PGEB, $EXP_LR, $EXP_BS, $EXP_ME, $EXP_WS, $EXP_WD, $EXP_LMH, $EXP_LMC, $EXP_THL, $EXP_HHS, $EXP_RGM
TIME=$(date "+%Y%m%d%H%M")
RUN_NAME=""$PREFIX_NAME"_"$EXP_IS"_"$EXP_PGB"_"$EXP_PGEB"_"$EXP_LR"_"$EXP_BS"_"$EXP_ME"_"$EXP_WS"_"$EXP_WD"_"$EXP_LMH"_"$EXP_LMC"_"$EXP_THL"_"$EXP_HHS"_"$EXP_RGM"_"$TIME""
echo $RUN_NAME
python run.py with run_name=$RUN_NAME task_finetune_vqa_clip_bert bt clip16 text_roberta $EXP_RGM num_gpus=8 num_nodes=$EXP_NODES load_path=~/BT/best_checkpoints/$EXP_LP image_size=$EXP_IS per_gpu_batchsize=$EXP_PGB per_gpu_eval_batchsize=$EXP_PGEB learning_rate=$EXP_LR batch_size=$EXP_BS max_epoch=$EXP_ME warmup_steps=$EXP_WS weight_decay=$EXP_WD lr_mult_head=$EXP_LMH lr_mult_cross_modal=$EXP_LMC task_head_layers=$EXP_THL head_hidden_scale=$EXP_HHS
date
|
BridgeTower/scripts/ftfpt_base_vqa.sh/0
|
{
"file_path": "BridgeTower/scripts/ftfpt_base_vqa.sh",
"repo_id": "BridgeTower",
"token_count": 604
}
| 162 |
import torch
from pytorch_lightning import LightningDataModule
from torch.utils.data import DataLoader
from transformers import (
DataCollatorForLanguageModeling,
DataCollatorForWholeWordMask,
BertTokenizer,
RobertaTokenizer,
)
def get_pretrained_tokenizer(from_pretrained):
if torch.distributed.is_initialized():
if torch.distributed.get_rank() == 0:
if 'roberta' in from_pretrained:
RobertaTokenizer.from_pretrained(from_pretrained)
else:
BertTokenizer.from_pretrained(
from_pretrained, do_lower_case="uncased" in from_pretrained
)
torch.distributed.barrier()
if 'roberta' in from_pretrained:
return RobertaTokenizer.from_pretrained(from_pretrained)
return BertTokenizer.from_pretrained(
from_pretrained, do_lower_case="uncased" in from_pretrained
)
class BaseDataModule(LightningDataModule):
def __init__(self, _config):
super().__init__()
self.prepare_data_per_node = False
self.data_dir = _config["data_root"]
self.num_workers = _config["num_workers"]
self.batch_size = _config["per_gpu_batchsize"]
self.eval_batch_size = _config["per_gpu_eval_batchsize"] if _config["per_gpu_eval_batchsize"] !=0 else self.batch_size
self.image_size = _config["image_size"]
self.max_text_len = _config["max_text_len"]
self.draw_false_image = _config["draw_false_image"]
self.draw_false_text = _config["draw_false_text"]
self.image_only = _config["image_only"]
self.debug_num = _config["debug_num"]
self.seed = _config["seed"]
self.train_transform_keys = (
["default_train"]
if len(_config["train_transform_keys"]) == 0
else _config["train_transform_keys"]
)
self.val_transform_keys = (
["default_val"]
if len(_config["val_transform_keys"]) == 0
else _config["val_transform_keys"]
)
tokenizer = _config["tokenizer"]
self.tokenizer = get_pretrained_tokenizer(tokenizer)
self.vocab_size = self.tokenizer.vocab_size
collator = (
DataCollatorForWholeWordMask
if _config["whole_word_masking"]
else DataCollatorForLanguageModeling
)
self.mlm_collator = collator(
tokenizer=self.tokenizer, mlm=True, mlm_probability=_config["mlm_prob"]
)
self.setup_flag = False
@property
def dataset_cls(self):
raise NotImplementedError("return tuple of dataset class")
@property
def dataset_name(self):
raise NotImplementedError("return name of dataset")
def set_train_dataset(self):
self.train_dataset = self.dataset_cls(
self.data_dir,
self.train_transform_keys,
split="train",
image_size=self.image_size,
max_text_len=self.max_text_len,
draw_false_image=self.draw_false_image,
draw_false_text=self.draw_false_text,
image_only=self.image_only,
tokenizer=self.tokenizer,
debug_num=self.debug_num,
)
def set_val_dataset(self):
self.val_dataset = self.dataset_cls(
self.data_dir,
self.val_transform_keys,
split="val",
image_size=self.image_size,
max_text_len=self.max_text_len,
draw_false_image=self.draw_false_image,
draw_false_text=self.draw_false_text,
image_only=self.image_only,
tokenizer=self.tokenizer,
debug_num=self.debug_num,
)
def make_no_false_dset(self, split, image_only=False):
return self.dataset_cls_no_false(
self.data_dir,
self.val_transform_keys,
split=split,
image_size=self.image_size,
max_text_len=self.max_text_len,
draw_false_image=0,
draw_false_text=0,
image_only=image_only,
tokenizer=self.tokenizer,
debug_num=self.debug_num,
)
def set_test_dataset(self):
self.test_dataset = self.dataset_cls(
self.data_dir,
self.val_transform_keys,
split="test",
image_size=self.image_size,
max_text_len=self.max_text_len,
draw_false_image=self.draw_false_image,
draw_false_text=self.draw_false_text,
image_only=self.image_only,
tokenizer=self.tokenizer,
debug_num=self.debug_num,
)
def setup(self, stage):
if not self.setup_flag:
self.set_train_dataset()
self.set_val_dataset()
self.set_test_dataset()
self.train_dataset.tokenizer = self.tokenizer
self.val_dataset.tokenizer = self.tokenizer
self.test_dataset.tokenizer = self.tokenizer
self.setup_flag = True
def train_dataloader(self):
loader = DataLoader(
self.train_dataset,
batch_size=self.batch_size,
shuffle=True,
num_workers=self.num_workers,
pin_memory=True,
collate_fn=self.train_dataset.collate,
)
return loader
def val_dataloader(self):
loader = DataLoader(
self.val_dataset,
batch_size=self.eval_batch_size,
shuffle=False,
num_workers=self.num_workers,
pin_memory=True,
collate_fn=self.val_dataset.collate,
)
return loader
def test_dataloader(self):
loader = DataLoader(
self.test_dataset,
batch_size=self.eval_batch_size,
shuffle=False,
num_workers=self.num_workers,
pin_memory=True,
collate_fn=self.test_dataset.collate,
)
return loader
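# Illustrative sketch (added for clarity, not part of the original module): concrete data
# modules subclass BaseDataModule and provide `dataset_cls`/`dataset_name`;
# `MyCaptionDataset` below is hypothetical and shown only to indicate the shape.
#
# class MyCaptionDataModule(BaseDataModule):
#     @property
#     def dataset_cls(self):
#         return MyCaptionDataset
#
#     @property
#     def dataset_name(self):
#         return "my_caption"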
|
BridgeTower/src/datamodules/datamodule_base.py/0
|
{
"file_path": "BridgeTower/src/datamodules/datamodule_base.py",
"repo_id": "BridgeTower",
"token_count": 2974
}
| 163 |
from .base_dataset import BaseDataset
import io
from PIL import Image
class VisualGenomeCaptionDataset(BaseDataset):
def __init__(self, *args, split="", **kwargs):
assert split in ["train", "val", "test"]
if split == "test":
split = "val"
if split == "train":
names = ["vg"]
elif split == "val":
names = []
super().__init__(*args, **kwargs, names=names, text_column_name="caption")
def __getitem__(self, index):
return self.get_suite(index)
|
BridgeTower/src/datasets/vg_caption_dataset.py/0
|
{
"file_path": "BridgeTower/src/datasets/vg_caption_dataset.py",
"repo_id": "BridgeTower",
"token_count": 237
}
| 164 |
from .transform import (
pixelbert_transform,
pixelbert_transform_randaug,
vit_transform,
vit_transform_randaug,
imagenet_transform,
imagenet_transform_randaug,
clip_transform,
clip_transform_randaug,
blip_transform,
blip_transform_randaug,
blip_transform_randaug_wc,
blip_transform_randaug_wohf,
blip_transform_randaug_pretrain,
)
_transforms = {
"pixelbert": pixelbert_transform,
"pixelbert_randaug": pixelbert_transform_randaug,
"vit": vit_transform,
"vit_randaug": vit_transform_randaug,
"imagenet": imagenet_transform,
"imagenet_randaug": imagenet_transform_randaug,
"clip": clip_transform,
"clip_randaug": clip_transform_randaug,
"blip": blip_transform,
"blip_randaug": blip_transform_randaug,
"blip_randaug_wc": blip_transform_randaug_wc,
"blip_randaug_wohf": blip_transform_randaug_wohf,
"blip_randaug_pretrain": blip_transform_randaug_pretrain,
}
def keys_to_transforms(keys: list, size=224):
return [_transforms[key](size=size) for key in keys]
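# Illustrative usage sketch (added for clarity, not part of the original module):
#   transforms = keys_to_transforms(["clip_randaug"], size=224)
#   image_tensor = transforms[0](pil_image)  # `pil_image` is a hypothetical PIL.Image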
|
BridgeTower/src/transforms/__init__.py/0
|
{
"file_path": "BridgeTower/src/transforms/__init__.py",
"repo_id": "BridgeTower",
"token_count": 440
}
| 165 |
import json
import pandas as pd
import pyarrow as pa
import random
import os
from tqdm import tqdm
from glob import glob
from collections import defaultdict, Counter
from glossary import normalize_word
def get_score(occurences):
if occurences == 0:
return 0.0
elif occurences == 1:
return 0.3
elif occurences == 2:
return 0.6
elif occurences == 3:
return 0.9
else:
return 1.0
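# Worked example (added for clarity): an answer matching 3 annotators gets a soft score
# of 0.9, while 4 or more matching annotators saturate the score at 1.0.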
def path2rest(path, split, annotations, label2ans):
iid = int(path.split("/")[-1].split("_")[-1][:-4])
with open(path, "rb") as fp:
binary = fp.read()
_annot = annotations[split][iid]
_annot = list(_annot.items())
qids, qas = [a[0] for a in _annot], [a[1] for a in _annot]
questions = [qa[0] for qa in qas]
answers = [qa[1] for qa in qas] if "test" not in split else list(list())
answer_labels = (
[a["labels"] for a in answers] if "test" not in split else list(list())
)
answer_scores = (
[a["scores"] for a in answers] if "test" not in split else list(list())
)
answers = (
[[label2ans[l] for l in al] for al in answer_labels]
if "test" not in split
else list(list())
)
return [binary, questions, answers, answer_labels, answer_scores, iid, qids, split]
def make_arrow(root, dataset_root, fix_answer_normalization=True):
with open(f"{root}/vqav2/v2_OpenEnded_mscoco_train2014_questions.json", "r") as fp:
questions_train2014 = json.load(fp)["questions"]
with open(f"{root}/vqav2/v2_OpenEnded_mscoco_val2014_questions.json", "r") as fp:
questions_val2014 = json.load(fp)["questions"]
with open(f"{root}/vqav2/v2_OpenEnded_mscoco_test2015_questions.json", "r") as fp:
questions_test2015 = json.load(fp)["questions"]
with open(f"{root}/vqav2/v2_OpenEnded_mscoco_test-dev2015_questions.json", "r") as fp:
questions_test_dev2015 = json.load(fp)["questions"]
with open(f"{root}/vqav2/v2_mscoco_train2014_annotations.json", "r") as fp:
annotations_train2014 = json.load(fp)["annotations"]
with open(f"{root}/vqav2/v2_mscoco_val2014_annotations.json", "r") as fp:
annotations_val2014 = json.load(fp)["annotations"]
annotations = dict()
if fix_answer_normalization:
suffix_name = '_fix'
else:
suffix_name = ''
for split, questions in zip(
["train", "val", "test", "test-dev"],
[
questions_train2014,
questions_val2014,
questions_test2015,
questions_test_dev2015,
],
):
_annot = defaultdict(dict)
for q in tqdm(questions):
_annot[q["image_id"]][q["question_id"]] = [q["question"]]
annotations[split] = _annot
print(len(annotations['train']), sum([len(i) for i in annotations['train'].values()]))
print(len(annotations['val']), sum([len(i) for i in annotations['val'].values()]))
all_major_answers = list()
for split, annots in zip(
["train", "val"], [annotations_train2014, annotations_val2014],
):
for q in tqdm(annots):
all_major_answers.append(q["multiple_choice_answer"])
all_major_answers = [normalize_word(word) for word in tqdm(all_major_answers)]
counter = {k: v for k, v in Counter(all_major_answers).items() if v >= 9}
ans2label = {k: i for i, k in enumerate(counter.keys())}
label2ans = list(counter.keys())
for split, annots in zip(
["train", "val"], [annotations_train2014, annotations_val2014],
):
_annot = annotations[split]
for q in tqdm(annots):
answers = q["answers"]
answer_count = {}
for answer in answers:
if fix_answer_normalization:
answer_ = normalize_word(answer["answer"])
else:
answer_ = answer["answer"]
answer_count[answer_] = answer_count.get(answer_, 0) + 1
labels = []
scores = []
for answer in answer_count:
if answer not in ans2label:
continue
labels.append(ans2label[answer])
score = get_score(answer_count[answer])
scores.append(score)
_annot[q["image_id"]][q["question_id"]].append(
{"labels": labels, "scores": scores,}
)
print(len(annotations['train']), sum([len(i) for i in annotations['train'].values()]))
print(len(annotations['val']), sum([len(i) for i in annotations['val'].values()]))
# #image #question
# 82783 443757
# 40504 214354
for split in ["train", "val"]:
filtered_annot = dict()
for ik, iv in annotations[split].items():
new_q = dict()
for qk, qv in iv.items():
if len(qv[1]["labels"]) != 0:
new_q[qk] = qv
if len(new_q) != 0:
filtered_annot[ik] = new_q
annotations[split] = filtered_annot
print(len(annotations['train']), sum([len(i) for i in annotations['train'].values()]), sum([len(qa) for q in annotations['train'].values() for qa in q.values()]))
print(len(annotations['val']), sum([len(i) for i in annotations['val'].values()]), sum([len(qa) for q in annotations['val'].values() for qa in q.values()]))
# 82774 434867 869734
# 40503 210051 420102
# fix: 82774 435174 870348
# fix: 40503 210207 420414
for split in [
"train",
"val",
"test",
"test-dev",
]:
annot = annotations[split]
split_name = {
"train": "train2014",
"val": "val2014",
"test": "test2015",
"test-dev": "test2015",
}[split]
paths = list(glob(f"{root}/{split_name}/*.jpg"))
random.shuffle(paths)
annot_paths = [
path
for path in paths
if int(path.split("/")[-1].split("_")[-1][:-4]) in annot
]
if len(paths) == len(annot_paths):
print("all images have caption annotations")
else:
print("not all images have caption annotations")
print(
len(paths), len(annot_paths), len(annot),
)
bs = [
path2rest(path, split, annotations, label2ans) for path in tqdm(annot_paths)
]
dataframe = pd.DataFrame(
bs,
columns=[
"image",
"questions",
"answers",
"answer_labels",
"answer_scores",
"image_id",
"question_id",
"split",
],
)
table = pa.Table.from_pandas(dataframe)
os.makedirs(dataset_root, exist_ok=True)
with pa.OSFile(f"{dataset_root}/vqav2_{split}{suffix_name}.arrow", "wb") as sink:
with pa.RecordBatchFileWriter(sink, table.schema) as writer:
writer.write_table(table)
table = pa.ipc.RecordBatchFileReader(
pa.memory_map(f"{dataset_root}/vqav2_val{suffix_name}.arrow", "r")
).read_all()
pdtable = table.to_pandas()
df1 = pdtable[:-1000]
df2 = pdtable[-1000:]
df1 = pa.Table.from_pandas(df1)
df2 = pa.Table.from_pandas(df2)
with pa.OSFile(f"{dataset_root}/vqav2_trainable_val{suffix_name}.arrow", "wb") as sink:
with pa.RecordBatchFileWriter(sink, df1.schema) as writer:
writer.write_table(df1)
with pa.OSFile(f"{dataset_root}/vqav2_rest_val{suffix_name}.arrow", "wb") as sink:
with pa.RecordBatchFileWriter(sink, df2.schema) as writer:
writer.write_table(df2)
make_arrow('~/BT/dataset/mscoco_flickr30k_vqav2_snli_ve', '~/BT/dataset/fine-tune', True)
make_arrow('~/BT/dataset/mscoco_flickr30k_vqav2_snli_ve', '~/BT/dataset/fine-tune', False)
|
BridgeTower/src/utils/write_vqa.py/0
|
{
"file_path": "BridgeTower/src/utils/write_vqa.py",
"repo_id": "BridgeTower",
"token_count": 3840
}
| 166 |
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT License.
import torch
import numpy as np
import skimage.io as io
# from FaceSDK.face_sdk import FaceDetection
# from face_sdk import FaceDetection
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle
from skimage.transform import SimilarityTransform
from skimage.transform import warp
from PIL import Image
import torch.nn.functional as F
import torchvision as tv
import torchvision.utils as vutils
import time
import cv2
import os
from skimage import img_as_ubyte
import json
import argparse
import dlib
def _standard_face_pts():
pts = (
np.array([196.0, 226.0, 316.0, 226.0, 256.0, 286.0, 220.0, 360.4, 292.0, 360.4], np.float32) / 256.0
- 1.0
)
return np.reshape(pts, (5, 2))
def _origin_face_pts():
pts = np.array([196.0, 226.0, 316.0, 226.0, 256.0, 286.0, 220.0, 360.4, 292.0, 360.4], np.float32)
return np.reshape(pts, (5, 2))
def get_landmark(face_landmarks, id):
part = face_landmarks.part(id)
x = part.x
y = part.y
return (x, y)
def search(face_landmarks):
x1, y1 = get_landmark(face_landmarks, 36)
x2, y2 = get_landmark(face_landmarks, 39)
x3, y3 = get_landmark(face_landmarks, 42)
x4, y4 = get_landmark(face_landmarks, 45)
x_nose, y_nose = get_landmark(face_landmarks, 30)
x_left_mouth, y_left_mouth = get_landmark(face_landmarks, 48)
x_right_mouth, y_right_mouth = get_landmark(face_landmarks, 54)
x_left_eye = int((x1 + x2) / 2)
y_left_eye = int((y1 + y2) / 2)
x_right_eye = int((x3 + x4) / 2)
y_right_eye = int((y3 + y4) / 2)
results = np.array(
[
[x_left_eye, y_left_eye],
[x_right_eye, y_right_eye],
[x_nose, y_nose],
[x_left_mouth, y_left_mouth],
[x_right_mouth, y_right_mouth],
]
)
return results
def compute_transformation_matrix(img, landmark, normalize, target_face_scale=1.0):
std_pts = _standard_face_pts() # [-1,1]
target_pts = (std_pts * target_face_scale + 1) / 2 * 256.0
# print(target_pts)
h, w, c = img.shape
if normalize == True:
landmark[:, 0] = landmark[:, 0] / h * 2 - 1.0
landmark[:, 1] = landmark[:, 1] / w * 2 - 1.0
# print(landmark)
affine = SimilarityTransform()
affine.estimate(target_pts, landmark)
return affine.params
def show_detection(image, box, landmark):
plt.imshow(image)
print(box[2] - box[0])
plt.gca().add_patch(
Rectangle(
(box[1], box[0]), box[2] - box[0], box[3] - box[1], linewidth=1, edgecolor="r", facecolor="none"
)
)
plt.scatter(landmark[0][0], landmark[0][1])
plt.scatter(landmark[1][0], landmark[1][1])
plt.scatter(landmark[2][0], landmark[2][1])
plt.scatter(landmark[3][0], landmark[3][1])
plt.scatter(landmark[4][0], landmark[4][1])
plt.show()
def affine2theta(affine, input_w, input_h, target_w, target_h):
# param = np.linalg.inv(affine)
param = affine
theta = np.zeros([2, 3])
theta[0, 0] = param[0, 0] * input_h / target_h
theta[0, 1] = param[0, 1] * input_w / target_h
theta[0, 2] = (2 * param[0, 2] + param[0, 0] * input_h + param[0, 1] * input_w) / target_h - 1
theta[1, 0] = param[1, 0] * input_h / target_w
theta[1, 1] = param[1, 1] * input_w / target_w
theta[1, 2] = (2 * param[1, 2] + param[1, 0] * input_h + param[1, 1] * input_w) / target_w - 1
return theta
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--url", type=str, default="/home/jingliao/ziyuwan/celebrities", help="input")
parser.add_argument(
"--save_url", type=str, default="/home/jingliao/ziyuwan/celebrities_detected_face_reid", help="output"
)
opts = parser.parse_args()
url = opts.url
save_url = opts.save_url
### If the origin url is None, then we don't need to reid the origin image
os.makedirs(url, exist_ok=True)
os.makedirs(save_url, exist_ok=True)
face_detector = dlib.get_frontal_face_detector()
landmark_locator = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
count = 0
map_id = {}
for x in os.listdir(url):
img_url = os.path.join(url, x)
pil_img = Image.open(img_url).convert("RGB")
image = np.array(pil_img)
start = time.time()
faces = face_detector(image)
done = time.time()
if len(faces) == 0:
print("Warning: There is no face in %s" % (x))
continue
print(len(faces))
if len(faces) > 0:
for face_id in range(len(faces)):
current_face = faces[face_id]
face_landmarks = landmark_locator(image, current_face)
current_fl = search(face_landmarks)
affine = compute_transformation_matrix(image, current_fl, False, target_face_scale=1.3)
aligned_face = warp(image, affine, output_shape=(256, 256, 3))
img_name = x[:-4] + "_" + str(face_id + 1)
io.imsave(os.path.join(save_url, img_name + ".png"), img_as_ubyte(aligned_face))
count += 1
if count % 1000 == 0:
print("%d have finished ..." % (count))
|
Bringing-Old-Photos-Back-to-Life/Face_Detection/detect_all_dlib.py/0
|
{
"file_path": "Bringing-Old-Photos-Back-to-Life/Face_Detection/detect_all_dlib.py",
"repo_id": "Bringing-Old-Photos-Back-to-Life",
"token_count": 2438
}
| 167 |
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT License.
def CreateDataLoader(opt):
from data.custom_dataset_data_loader import CustomDatasetDataLoader
data_loader = CustomDatasetDataLoader()
print(data_loader.name())
data_loader.initialize(opt)
return data_loader
|
Bringing-Old-Photos-Back-to-Life/Global/data/data_loader.py/0
|
{
"file_path": "Bringing-Old-Photos-Back-to-Life/Global/data/data_loader.py",
"repo_id": "Bringing-Old-Photos-Back-to-Life",
"token_count": 97
}
| 168 |
build:
gpu: true
python_version: "3.8"
system_packages:
- "libgl1-mesa-glx"
- "libglib2.0-0"
python_packages:
- "cmake==3.21.2"
- "torchvision==0.9.0"
- "torch==1.8.0"
- "numpy==1.19.4"
- "opencv-python==4.4.0.46"
- "scipy==1.5.3"
- "tensorboardX==2.4"
- "dominate==2.6.0"
- "easydict==1.9"
- "PyYAML==5.3.1"
- "scikit-image==0.18.3"
- "dill==0.3.4"
- "einops==0.3.0"
- "PySimpleGUI==4.46.0"
- "ipython==7.19.0"
run:
- pip install dlib
predict: "predict.py:Predictor"
|
Bringing-Old-Photos-Back-to-Life/cog.yaml/0
|
{
"file_path": "Bringing-Old-Photos-Back-to-Life/cog.yaml",
"repo_id": "Bringing-Old-Photos-Back-to-Life",
"token_count": 326
}
| 169 |
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT License.
import os
import argparse
import shutil
import sys
from subprocess import call
def run_cmd(command):
try:
call(command, shell=True)
except KeyboardInterrupt:
print("Process interrupted")
sys.exit(1)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--input_folder", type=str, default="./test_images/old", help="Test images")
parser.add_argument(
"--output_folder",
type=str,
default="./output",
help="Restored images, please use the absolute path",
)
parser.add_argument("--GPU", type=str, default="6,7", help="0,1,2")
parser.add_argument(
"--checkpoint_name", type=str, default="Setting_9_epoch_100", help="choose which checkpoint"
)
parser.add_argument("--with_scratch", action="store_true")
parser.add_argument("--HR", action='store_true')
opts = parser.parse_args()
gpu1 = opts.GPU
# resolve relative paths before changing directory
opts.input_folder = os.path.abspath(opts.input_folder)
opts.output_folder = os.path.abspath(opts.output_folder)
if not os.path.exists(opts.output_folder):
os.makedirs(opts.output_folder)
main_environment = os.getcwd()
## Stage 1: Overall Quality Improve
print("Running Stage 1: Overall restoration")
os.chdir("./Global")
stage_1_input_dir = opts.input_folder
stage_1_output_dir = os.path.join(opts.output_folder, "stage_1_restore_output")
if not os.path.exists(stage_1_output_dir):
os.makedirs(stage_1_output_dir)
if not opts.with_scratch:
stage_1_command = (
"python test.py --test_mode Full --Quality_restore --test_input "
+ stage_1_input_dir
+ " --outputs_dir "
+ stage_1_output_dir
+ " --gpu_ids "
+ gpu1
)
run_cmd(stage_1_command)
else:
mask_dir = os.path.join(stage_1_output_dir, "masks")
new_input = os.path.join(mask_dir, "input")
new_mask = os.path.join(mask_dir, "mask")
stage_1_command_1 = (
"python detection.py --test_path "
+ stage_1_input_dir
+ " --output_dir "
+ mask_dir
+ " --input_size full_size"
+ " --GPU "
+ gpu1
)
if opts.HR:
HR_suffix=" --HR"
else:
HR_suffix=""
stage_1_command_2 = (
"python test.py --Scratch_and_Quality_restore --test_input "
+ new_input
+ " --test_mask "
+ new_mask
+ " --outputs_dir "
+ stage_1_output_dir
+ " --gpu_ids "
+ gpu1 + HR_suffix
)
run_cmd(stage_1_command_1)
run_cmd(stage_1_command_2)
## Solve the case when there is no face in the old photo
stage_1_results = os.path.join(stage_1_output_dir, "restored_image")
stage_4_output_dir = os.path.join(opts.output_folder, "final_output")
if not os.path.exists(stage_4_output_dir):
os.makedirs(stage_4_output_dir)
for x in os.listdir(stage_1_results):
img_dir = os.path.join(stage_1_results, x)
shutil.copy(img_dir, stage_4_output_dir)
print("Finish Stage 1 ...")
print("\n")
## Stage 2: Face Detection
print("Running Stage 2: Face Detection")
os.chdir(".././Face_Detection")
stage_2_input_dir = os.path.join(stage_1_output_dir, "restored_image")
stage_2_output_dir = os.path.join(opts.output_folder, "stage_2_detection_output")
if not os.path.exists(stage_2_output_dir):
os.makedirs(stage_2_output_dir)
if opts.HR:
stage_2_command = (
"python detect_all_dlib_HR.py --url " + stage_2_input_dir + " --save_url " + stage_2_output_dir
)
else:
stage_2_command = (
"python detect_all_dlib.py --url " + stage_2_input_dir + " --save_url " + stage_2_output_dir
)
run_cmd(stage_2_command)
print("Finish Stage 2 ...")
print("\n")
## Stage 3: Face Restore
print("Running Stage 3: Face Enhancement")
os.chdir(".././Face_Enhancement")
stage_3_input_mask = "./"
stage_3_input_face = stage_2_output_dir
stage_3_output_dir = os.path.join(opts.output_folder, "stage_3_face_output")
if not os.path.exists(stage_3_output_dir):
os.makedirs(stage_3_output_dir)
if opts.HR:
opts.checkpoint_name='FaceSR_512'
stage_3_command = (
"python test_face.py --old_face_folder "
+ stage_3_input_face
+ " --old_face_label_folder "
+ stage_3_input_mask
+ " --tensorboard_log --name "
+ opts.checkpoint_name
+ " --gpu_ids "
+ gpu1
+ " --load_size 512 --label_nc 18 --no_instance --preprocess_mode resize --batchSize 1 --results_dir "
+ stage_3_output_dir
+ " --no_parsing_map"
)
else:
stage_3_command = (
"python test_face.py --old_face_folder "
+ stage_3_input_face
+ " --old_face_label_folder "
+ stage_3_input_mask
+ " --tensorboard_log --name "
+ opts.checkpoint_name
+ " --gpu_ids "
+ gpu1
+ " --load_size 256 --label_nc 18 --no_instance --preprocess_mode resize --batchSize 4 --results_dir "
+ stage_3_output_dir
+ " --no_parsing_map"
)
run_cmd(stage_3_command)
print("Finish Stage 3 ...")
print("\n")
## Stage 4: Warp back
print("Running Stage 4: Blending")
os.chdir(".././Face_Detection")
stage_4_input_image_dir = os.path.join(stage_1_output_dir, "restored_image")
stage_4_input_face_dir = os.path.join(stage_3_output_dir, "each_img")
stage_4_output_dir = os.path.join(opts.output_folder, "final_output")
if not os.path.exists(stage_4_output_dir):
os.makedirs(stage_4_output_dir)
if opts.HR:
stage_4_command = (
"python align_warp_back_multiple_dlib_HR.py --origin_url "
+ stage_4_input_image_dir
+ " --replace_url "
+ stage_4_input_face_dir
+ " --save_url "
+ stage_4_output_dir
)
else:
stage_4_command = (
"python align_warp_back_multiple_dlib.py --origin_url "
+ stage_4_input_image_dir
+ " --replace_url "
+ stage_4_input_face_dir
+ " --save_url "
+ stage_4_output_dir
)
run_cmd(stage_4_command)
print("Finish Stage 4 ...")
print("\n")
print("All the processing is done. Please check the results.")
|
Bringing-Old-Photos-Back-to-Life/run.py/0
|
{
"file_path": "Bringing-Old-Photos-Back-to-Life/run.py",
"repo_id": "Bringing-Old-Photos-Back-to-Life",
"token_count": 3260
}
| 170 |
from torch.utils.data import Dataset
from torchvision.datasets.utils import download_url
from tqdm import tqdm
import pandas as pd
import os
import torch.nn as nn
import torch
class AudioDataset(Dataset):
def __init__(self, root: str, download: bool = True):
self.root = os.path.expanduser(root)
if download:
self.download()
def __getitem__(self, index):
raise NotImplementedError
def download(self):
raise NotImplementedError
def __len__(self):
raise NotImplementedError
class ESC50(AudioDataset):
base_folder = 'ESC-50-master'
url = "https://github.com/karoldvl/ESC-50/archive/master.zip"
filename = "ESC-50-master.zip"
num_files_in_dir = 2000
audio_dir = 'audio'
label_col = 'category'
file_col = 'filename'
meta = {
'filename': os.path.join('meta','esc50.csv'),
}
def __init__(self, root, reading_transformations: nn.Module = None, download: bool = True):
super().__init__(root)
self._load_meta()
self.targets, self.audio_paths = [], []
self.pre_transformations = reading_transformations
print("Loading audio files")
# self.df['filename'] = os.path.join(self.root, self.base_folder, self.audio_dir) + os.sep + self.df['filename']
self.df['category'] = self.df['category'].str.replace('_',' ')
for _, row in tqdm(self.df.iterrows()):
file_path = os.path.join(self.root, self.base_folder, self.audio_dir, row[self.file_col])
self.targets.append(row[self.label_col])
self.audio_paths.append(file_path)
def _load_meta(self):
path = os.path.join(self.root, self.base_folder, self.meta['filename'])
self.df = pd.read_csv(path)
self.class_to_idx = {}
self.classes = [x.replace('_',' ') for x in sorted(self.df[self.label_col].unique())]
for i, category in enumerate(self.classes):
self.class_to_idx[category] = i
def __getitem__(self, index):
"""
Args:
index (int): Index
Returns:
            tuple: (file_path, target, one_hot_target) where target is the category name and one_hot_target is its one-hot encoding.
"""
file_path, target = self.audio_paths[index], self.targets[index]
idx = torch.tensor(self.class_to_idx[target])
one_hot_target = torch.zeros(len(self.classes)).scatter_(0, idx, 1).reshape(1,-1)
return file_path, target, one_hot_target
def __len__(self):
return len(self.audio_paths)
def download(self):
download_url(self.url, self.root, self.filename)
# extract file
from zipfile import ZipFile
with ZipFile(os.path.join(self.root, self.filename), 'r') as zip:
zip.extractall(path=self.root)
|
CLAP/examples/esc50_dataset.py/0
|
{
"file_path": "CLAP/examples/esc50_dataset.py",
"repo_id": "CLAP",
"token_count": 1217
}
| 171 |
@ECHO OFF
pushd %~dp0
REM Command file for Sphinx documentation
if "%SPHINXBUILD%" == "" (
set SPHINXBUILD=python -msphinx
)
set SOURCEDIR=.
set BUILDDIR=_build
set SPHINXPROJ=fairseq
if "%1" == "" goto help
%SPHINXBUILD% >NUL 2>NUL
if errorlevel 9009 (
echo.
echo.The Sphinx module was not found. Make sure you have Sphinx installed,
echo.then set the SPHINXBUILD environment variable to point to the full
echo.path of the 'sphinx-build' executable. Alternatively you may add the
echo.Sphinx directory to PATH.
echo.
echo.If you don't have Sphinx installed, grab it from
echo.http://sphinx-doc.org/
exit /b 1
)
%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS%
goto end
:help
%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS%
:end
popd
|
COCO-LM/fairseq/docs/make.bat/0
|
{
"file_path": "COCO-LM/fairseq/docs/make.bat",
"repo_id": "COCO-LM",
"token_count": 316
}
| 172 |
# Copyright (c) Facebook, Inc. and its affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F
from fairseq.modules.layer_norm import LayerNorm
from .adaptive_span_attention import AdaptiveSpan
# Size notations:
# B = batch_size, H = d_model, M = block_size, L = attn_span
def _skew(X, pad_value):
"""shift every row 1 step to right"""
# X = B x M x L
B, M, L = X.size()
X = F.pad(X, (0, M + 1), value=pad_value) # B x M x (L+M+1)
X = X.view(B, -1) # B x ML+MM+M
X = X[:, :-M] # B x ML+MM
X = X.view(B, M, M + L) # B x M x L+M
return X
def _unskew(X):
"""reverse _skew operation"""
# X = B x M x L+M
B, M, L = X.size()
L -= M
X = X.view(B, -1) # B x ML+MM
X = F.pad(X, (0, M)) # B x ML+MM+M
X = X.view(B, M, M + L + 1) # B x M x L+M+1
X = X[:, :, :L] # B x M x L
return X
class SeqAttention(nn.Module):
"""Sequential self-attention layer.
Each token will attend to its previous fixed number of steps.
Note that attention doesn't include the current step itself.
"""
def __init__(self, d_model, n_head, attn_span, dropout, adapt_span_layer, **kargs):
nn.Module.__init__(self)
self.dropout = nn.Dropout(dropout)
self.d_model = d_model # size of a single head
self.attn_span = attn_span
self.adaptive_span = AdaptiveSpan(
attn_span=attn_span,
n_head=n_head,
adapt_span_layer=adapt_span_layer,
**kargs
)
def forward(self, query, key, value, key_pe):
# query size = B x M x H
# key, value sizes = B x (M+L) x H
key, value, key_pe = self.adaptive_span.trim_memory(query, key, value, key_pe)
# compute attention from context
# B x M (dest) x (M+L) (src)
attn_cont = torch.matmul(query, key.transpose(-1, -2))
attn_cont = _unskew(attn_cont) # B x M x L
# compute the effect of position embedding
attn_pos = torch.matmul(query, key_pe) # B x M x L_pos
attn = attn_cont + attn_pos
attn = attn / math.sqrt(self.d_model) # B x M X L_pos
attn = F.softmax(attn.float(), dim=-1).type_as(attn)
# trim attention lengths according to the learned span
attn = self.adaptive_span(attn)
attn = self.dropout(attn) # B x M X L_pos
attn_cont = _skew(attn, 0) # B x M X (L+M)
out = torch.matmul(attn_cont, value) # B x M x H
return out
def get_cache_size(self):
return self.adaptive_span.get_cache_size()
class MultiHeadSeqAttention(nn.Module):
def __init__(self, d_model, n_head, **kargs):
nn.Module.__init__(self)
assert d_model % n_head == 0
self.n_head = n_head
self.head_dim = d_model // n_head
self.attn = SeqAttention(d_model=self.head_dim, n_head=n_head, **kargs)
self.proj_query = nn.Linear(d_model, d_model, bias=False)
nn.init.xavier_normal_(self.proj_query.weight)
self.proj_out = nn.Linear(d_model, d_model, bias=False)
nn.init.xavier_normal_(self.proj_out.weight)
self.proj_val = nn.Linear(d_model, d_model, bias=False)
nn.init.xavier_normal_(self.proj_val.weight)
self.proj_key = nn.Linear(d_model, d_model, bias=False)
nn.init.xavier_normal_(self.proj_key.weight)
def head_reshape(self, x):
K = self.n_head
D = self.head_dim
x = x.view(x.size()[:-1] + (K, D)) # B x (M+L) x K x D
x = x.transpose(1, 2).contiguous() # B x K x (M+L) x D
x = x.view(-1, x.size(-2), x.size(-1)) # B_K x (M+L) x D
return x
def forward(self, query, key, value, key_pe):
B = query.size(0)
K = self.n_head
D = self.head_dim
M = query.size(1)
query = self.proj_query(query)
query = self.head_reshape(query)
value = self.proj_val(value)
value = self.head_reshape(value)
key = self.proj_key(key)
key = self.head_reshape(key)
out = self.attn(query, key, value, key_pe) # B_K x M x D
out = out.view(B, K, M, D) # B x K x M x D
out = out.transpose(1, 2).contiguous() # B x M x K x D
out = out.view(B, M, -1) # B x M x K_D
out = self.proj_out(out)
return out
class FeedForwardLayer(nn.Module):
def __init__(self, d_model, d_inner, dropout, **kargs):
nn.Module.__init__(self)
self.fc1 = nn.Linear(d_model, d_inner)
self.fc2 = nn.Linear(d_inner, d_model)
nn.init.xavier_uniform_(self.fc1.weight)
nn.init.xavier_uniform_(self.fc2.weight)
self.dropout = nn.Dropout(dropout)
def forward(self, h):
h1 = F.relu(self.fc1(h))
h1 = self.dropout(h1)
h2 = self.fc2(h1)
return h2
class TransformerSeqLayer(nn.Module):
def __init__(self, d_model, **kargs):
nn.Module.__init__(self)
self.attn = MultiHeadSeqAttention(d_model=d_model, **kargs)
self.norm1 = LayerNorm(d_model)
self.ff = FeedForwardLayer(d_model=d_model, **kargs)
self.norm2 = LayerNorm(d_model)
def forward(self, h, h_cache, key_pe):
# h = B x M x H
# h_cache = B x L x H
h_all = torch.cat([h_cache, h], dim=1) # B x (M+L) x H
attn_out = self.attn(h, h_all, h_all, key_pe)
h = self.norm1(h + attn_out) # B x M x H
if self.ff is not None:
ff_out = self.ff(h)
out = self.norm2(h + ff_out) # B x M x H
else:
out = h
return out
def get_cache_size(self):
return self.attn.attn.get_cache_size()
class TransformerSeq(nn.Module):
def __init__(
self,
vocab_size,
d_model,
n_head,
n_layer,
attn_span,
emb_dropout,
aux_loss_scaler,
adapt_span_layer,
**kargs
):
nn.Module.__init__(self)
# token embeddings
self.in_emb = nn.Embedding(vocab_size, d_model)
nn.init.normal_(self.in_emb.weight, mean=0, std=d_model ** -0.5)
self.out_emb = nn.Linear(d_model, vocab_size)
self.aux_loss_scaler = aux_loss_scaler
if emb_dropout > 0:
self.emb_dropout = nn.Dropout(emb_dropout)
else:
self.emb_dropout = None
# position embeddings
self.key_pe = nn.Parameter(torch.randn(1, d_model // n_head, attn_span))
self.layers = nn.ModuleList()
self.layers.extend(
TransformerSeqLayer(
d_model=d_model,
n_head=n_head,
attn_span=attn_span,
adapt_span_layer=adapt_span_layer,
**kargs
)
for _ in range(n_layer)
)
def forward(self, x, h_cache, target=None):
# x size = B x M
block_size = x.size(1)
h = self.in_emb(x) # B x M x H
if self.emb_dropout is not None:
h = self.emb_dropout(h)
h_cache_next = []
for l, layer in enumerate(self.layers):
cache_size = layer.attn.attn.get_cache_size()
if cache_size > block_size:
h_cache_next_l = torch.cat(
[h_cache[l][:, -cache_size + block_size :, :], h], dim=1
).detach()
else:
h_cache_next_l = h[:, -cache_size:, :].detach()
h_cache_next.append(h_cache_next_l)
h = layer(h, h_cache[l], self.key_pe) # B x M x H
if self.emb_dropout is not None:
h = self.emb_dropout(h)
out = F.log_softmax(self.out_emb(h).float(), dim=-1).type_as(h)
dummy_loss = None
return out, h_cache_next, dummy_loss
def get_aux_loss(self):
loss = 0.0
for layer in self.layers:
loss += layer.attn.attn.adaptive_span.get_loss()
return self.aux_loss_scaler * loss
def get_current_max_span(self):
max_span = 0.0
for layer in self.layers:
max_span = max(
max_span, layer.attn.attn.adaptive_span.get_current_max_span()
)
return max_span
def get_current_avg_span(self):
avg_span = 0.0
for layer in self.layers:
avg_span += layer.attn.attn.adaptive_span.get_current_avg_span()
return avg_span / len(self.layers)
|
COCO-LM/fairseq/examples/adaptive_span/adaptive_span_model.py/0
|
{
"file_path": "COCO-LM/fairseq/examples/adaptive_span/adaptive_span_model.py",
"repo_id": "COCO-LM",
"token_count": 4392
}
| 173 |
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
import torch.nn as nn
import torch.nn.functional as F
from fairseq.models import register_model, register_model_architecture
from fairseq.models.transformer import TransformerEncoder, TransformerModel
@register_model("gru_transformer")
class GRUTransformerModel(TransformerModel):
@classmethod
def build_encoder(cls, args, src_dict, embed_tokens):
return GRUTransformerEncoder(args, src_dict, embed_tokens)
class GRUTransformerEncoder(TransformerEncoder):
def __init__(self, args, dictionary, embed_tokens):
super().__init__(args, dictionary, embed_tokens)
self.emb_ctx = nn.GRU(
input_size=embed_tokens.embedding_dim,
hidden_size=embed_tokens.embedding_dim // 2,
num_layers=1,
bidirectional=True,
)
def forward_embedding(self, src_tokens):
# embed tokens and positions
x = embed = self.embed_scale * self.embed_tokens(src_tokens)
if self.embed_positions is not None:
x = embed + self.embed_positions(src_tokens)
# contextualize embeddings
x = x.transpose(0, 1)
x = self.dropout_module(x)
x, _ = self.emb_ctx.forward(x)
x = x.transpose(0, 1)
if self.layernorm_embedding is not None:
x = self.layernorm_embedding(x)
x = self.dropout_module(x)
return x, embed
@register_model_architecture("gru_transformer", "gru_transformer")
def gru_transformer_base_architecture(args):
args.encoder_embed_path = getattr(args, "encoder_embed_path", None)
args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512)
args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 2048)
args.encoder_layers = getattr(args, "encoder_layers", 6)
args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 8)
args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False)
args.encoder_learned_pos = getattr(args, "encoder_learned_pos", False)
args.decoder_embed_path = getattr(args, "decoder_embed_path", None)
args.decoder_embed_dim = getattr(args, "decoder_embed_dim", args.encoder_embed_dim)
args.decoder_ffn_embed_dim = getattr(
args, "decoder_ffn_embed_dim", args.encoder_ffn_embed_dim
)
args.decoder_layers = getattr(args, "decoder_layers", 6)
args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 8)
args.decoder_normalize_before = getattr(args, "decoder_normalize_before", False)
args.decoder_learned_pos = getattr(args, "decoder_learned_pos", False)
args.attention_dropout = getattr(args, "attention_dropout", 0.0)
args.activation_dropout = getattr(args, "activation_dropout", 0.0)
args.activation_fn = getattr(args, "activation_fn", "relu")
args.dropout = getattr(args, "dropout", 0.1)
args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None)
args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0)
args.share_decoder_input_output_embed = getattr(
args, "share_decoder_input_output_embed", False
)
args.share_all_embeddings = getattr(args, "share_all_embeddings", False)
args.no_token_positional_embeddings = getattr(
args, "no_token_positional_embeddings", False
)
args.adaptive_input = getattr(args, "adaptive_input", False)
args.no_cross_attention = getattr(args, "no_cross_attention", False)
args.cross_self_attention = getattr(args, "cross_self_attention", False)
args.layer_wise_attention = getattr(args, "layer_wise_attention", False)
args.decoder_output_dim = getattr(
args, "decoder_output_dim", args.decoder_embed_dim
)
args.decoder_input_dim = getattr(args, "decoder_input_dim", args.decoder_embed_dim)
args.no_scale_embedding = getattr(args, "no_scale_embedding", False)
args.layernorm_embedding = getattr(args, "layernorm_embedding", False)
@register_model_architecture("gru_transformer", "gru_transformer_big")
def gru_transformer_big(args):
args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 1024)
args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 4096)
args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 16)
args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False)
args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 1024)
args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 4096)
args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 16)
args.dropout = getattr(args, "dropout", 0.3)
gru_transformer_base_architecture(args)
|
COCO-LM/fairseq/examples/byte_level_bpe/gru_transformer.py/0
|
{
"file_path": "COCO-LM/fairseq/examples/byte_level_bpe/gru_transformer.py",
"repo_id": "COCO-LM",
"token_count": 1992
}
| 174 |
# Language Models not just for Pre-training: Fast Online Neural Noisy Channel Modeling
## Introduction
- [Yee et al. (2019)](https://www.aclweb.org/anthology/D19-1571.pdf) introduce a simple and effective noisy channel modeling approach for neural machine translation. However, the noisy channel online decoding approach introduced in this paper is too slow to be practical.
- To address this, [Bhosale et al. (2020)](http://www.statmt.org/wmt20/pdf/2020.wmt-1.68.pdf) introduce three simple approximations that make this approach fast and practical without much loss in accuracy.
- This README provides instructions on how to run online decoding or generation with the noisy channel modeling approach, including ways to make it very fast without much loss in accuracy.
## Noisy Channel Modeling
[Yee et al. (2019)](https://www.aclweb.org/anthology/D19-1571.pdf) apply Bayes' rule to predict `P(y|x)`, the probability of the target `y` given the source `x`.
```P(y|x) = P(x|y) * P(y) / P(x)```
- `P(x|y)` predicts the source `x` given the target `y` and is referred to as the **channel model**
- `P(y)` is a **language model** over the target `y`
- `P(x)` is generally not modeled since it is constant for all `y`.
We use Transformer models to parameterize the direct model `P(y|x)`, the channel model `P(x|y)` and the language model `P(y)`.
During online decoding with beam search, we generate the top `K2` candidates per beam and score them with the following linear combination of the channel model, the language model as well as the direct model scores.
```(1 / t) * log(P(y|x)) + (1 / s) * ( λ1 * log(P(x|y)) + λ2 * log(P(y)) )```
- `t` - Target Prefix Length
- `s` - Source Length
- `λ1` - Channel Model Weight
- `λ2` - Language Model Weight
The top `beam_size` candidates based on the above combined scores are chosen to continue the beams in beam search. In beam search with a direct model alone, the scores from the direct model `P(y|x)` are used to choose the top candidates in beam search.
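
To make the combination concrete, below is a minimal Python sketch of the rescoring step. It assumes the per-candidate log-probabilities from the direct model, channel model and language model are already available; the function and variable names are illustrative only and do not correspond to the fairseq API.

```python
def combined_score(log_p_direct, log_p_channel, log_p_lm,
                   tgt_prefix_len, src_len, lam1, lam2):
    """Combine direct, channel and LM log-probabilities for one candidate."""
    return (log_p_direct / tgt_prefix_len
            + (lam1 * log_p_channel + lam2 * log_p_lm) / src_len)


def rerank(candidates, tgt_prefix_len, src_len, lam1, lam2, beam_size=5):
    """Keep the best `beam_size` of the K2 expansions of one beam."""
    scored = sorted(
        candidates,
        key=lambda c: combined_score(c["direct"], c["channel"], c["lm"],
                                     tgt_prefix_len, src_len, lam1, lam2),
        reverse=True,
    )
    return scored[:beam_size]
```

The candidates kept by `rerank` then continue the beam, mirroring the selection described above.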
This framework provides a great way to utilize strong target language models trained on large amounts of unlabeled data. Language models can prefer targets unrelated to the source, so we also need a channel model whose role is to ensure that the target preferred by the language model also translates back to the source.
### Training Translation Models and Language Models
For training Transformer models in fairseq for machine translation, refer to instructions [here](https://github.com/pytorch/fairseq/tree/master/examples/translation)
For training Transformer models in fairseq for language modeling, refer to instructions [here](https://github.com/pytorch/fairseq/tree/master/examples/language_model)
### Generation with Language Model for German-English translation with fairseq
Here are instructions to generate using a direct model and a target-side language model.
Note:
- Download and install fairseq as per instructions [here](https://github.com/pytorch/fairseq)
- Preprocess and binarize the dataset as per instructions in section [Test Data Preprocessing](#test-data-preprocessing)
```sh
binarized_data=data_dir/binarized
direct_model=de_en_seed4.pt
lm_model=en_lm.pt
lm_data=lm_data
wget https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/direct_models/seed4.pt -O ${direct_model}
wget https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/lm_model/transformer_lm.pt -O ${lm_model}
mkdir -p ${lm_data}
wget https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/lm_model/lm_dict/dict.txt -O ${lm_data}/dict.txt
k2=10
lenpen=0.16
lm_wt=0.14
fairseq-generate ${binarized_data} \
--user-dir examples/fast_noisy_channel \
--beam 5 \
--path ${direct_model} \
--lm-model ${lm_model} \
--lm-data ${lm_data} \
--k2 ${k2} \
--combine-method lm_only \
--task noisy_channel_translation \
--lenpen ${lenpen} \
--lm-wt ${lm_wt} \
--gen-subset valid \
--remove-bpe \
--fp16 \
--batch-size 10
```
### Noisy Channel Generation for German-English translation with fairseq
Here are instructions for noisy channel generation with a direct model, channel model and language model as explained in section [Noisy Channel Modeling](#noisy-channel-modeling).
Note:
- Download and install fairseq as per instructions [here](https://github.com/pytorch/fairseq)
- Preprocess and binarize the dataset as per instructions in section [Test Data Preprocessing](#test-data-preprocessing)
```sh
binarized_data=data_dir/binarized
direct_model=de_en_seed4.pt
lm_model=en_lm.pt
lm_data=lm_data
ch_model=en_de.big.seed4.pt
wget https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/direct_models/seed4.pt -O ${direct_model}
wget https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/lm_model/transformer_lm.pt -O ${lm_model}
mkdir -p ${lm_data}
wget https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/lm_model/lm_dict/dict.txt -O ${lm_data}/dict.txt
wget https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/big.seed4.pt -O ${ch_model}
k2=10
lenpen=0.21
lm_wt=0.50
bw_wt=0.30
fairseq-generate ${binarized_data} \
--user-dir examples/fast_noisy_channel \
--beam 5 \
--path ${direct_model} \
--lm-model ${lm_model} \
--lm-data ${lm_data} \
--channel-model ${ch_model} \
--k2 ${k2} \
--combine-method noisy_channel \
--task noisy_channel_translation \
--lenpen ${lenpen} \
--lm-wt ${lm_wt} \
--ch-wt ${bw_wt} \
--gen-subset test \
--remove-bpe \
--fp16 \
--batch-size 1
```
## Fast Noisy Channel Modeling
[Bhosale et al. (2020)](http://www.statmt.org/wmt20/pdf/2020.wmt-1.68.pdf) introduce three approximations that speed up online noisy channel decoding:
- Smaller channel models (`Tranformer Base` with 1 encoder and decoder layer each vs. `Transformer Big`)
- This involves training a channel model that is possibly smaller and less accurate in terms of BLEU than a channel model of the same size as the direct model.
- Since the role of the channel model is mainly to assign low scores to generations from the language model if they don't translate back to the source, we may not need the most accurate channel model for this purpose.
- Smaller output vocabulary size for the channel model (~30,000 -> ~1000); a sketch of this restriction follows the list below
  - The channel model doesn't need to score the full output vocabulary; it just needs to score the source tokens, which are completely known.
- This is specified using the arguments `--channel-scoring-type src_vocab --top-k-vocab 500`
- This means that the output vocabulary for the channel model will be the source tokens for all examples in the batch and the top-K most frequent tokens in the vocabulary
- This reduces the memory consumption needed to store channel model scores significantly
- Smaller number of candidates (`k2`) scored per beam
- This is specified by reducing the argument `--k2`
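
The vocabulary restriction can be illustrated with a rough PyTorch sketch (this is not the fairseq implementation). It assumes raw channel-decoder logits over the full vocabulary and long-tensor token ids, and shows how scoring only the source tokens plus the top-K frequent types shrinks the tensor that has to be stored and normalized.

```python
import torch

def restricted_channel_scores(channel_logits, src_tokens, topk_ids):
    """Score only source tokens + the top-K frequent vocabulary items.

    channel_logits : [batch, src_len, full_vocab] raw channel-decoder logits
    src_tokens     : [batch, src_len] ids of the (fully known) source tokens
    topk_ids       : [K] ids of the K most frequent vocabulary items
    """
    # Union of every source token id in the batch and the top-K ids.
    cand_ids = torch.unique(torch.cat([src_tokens.flatten(), topk_ids]))
    # Keep only those columns: memory shrinks from |V| to |cand_ids| per position.
    restricted = channel_logits.index_select(-1, cand_ids)
    # Normalizing over the restricted set (not the full vocabulary) is the approximation.
    log_probs = torch.log_softmax(restricted, dim=-1)
    # Map each source token to its position inside the restricted vocabulary.
    lookup = {int(i): p for p, i in enumerate(cand_ids)}
    pos = torch.tensor([[lookup[int(t)] for t in row] for row in src_tokens])
    # Gather log P(x_i | y) for every source position and sum over the source.
    token_scores = log_probs.gather(-1, pos.unsqueeze(-1)).squeeze(-1)
    return token_scores.sum(dim=-1)  # [batch] channel score per candidate
```

Normalizing over the restricted set rather than the full vocabulary trades a small amount of accuracy for a large reduction in memory and compute, which is what makes larger batch sizes practical during online decoding.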
### Fast Noisy Channel Generation for German-English translation with fairseq
Here are instructions for **fast** noisy channel generation with a direct model, channel model and language model as explained in section [Fast Noisy Channel Modeling](#fast-noisy-channel-modeling). The main differences are that we use a smaller channel model, reduce `--k2`, set `--channel-scoring-type src_vocab --top-k-vocab 500` and increase the `--batch-size`.
Note:
- Download and install fairseq as per instructions [here](https://github.com/pytorch/fairseq)
- Preprocess and binarize the dataset as per instructions in section [Test Data Preprocessing](#test-data-preprocessing)
```sh
binarized_data=data_dir/binarized
direct_model=de_en_seed4.pt
lm_model=en_lm.pt
lm_data=lm_data
small_ch_model=en_de.base_1_1.seed4.pt
wget https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/direct_models/seed4.pt -O ${direct_model}
wget https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/lm_model/transformer_lm.pt -O ${lm_model}
mkdir -p ${lm_data}
wget https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/lm_model/lm_dict/dict.txt -O ${lm_data}/dict.txt
wget https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/base_1_1.seed4.pt -O ${small_ch_model}
k2=3
lenpen=0.23
lm_wt=0.58
bw_wt=0.26
fairseq-generate ${binarized_data} \
--user-dir examples/fast_noisy_channel \
--beam 5 \
--path ${direct_model} \
--lm-model ${lm_model} \
--lm-data ${lm_data} \
--channel-model ${small_ch_model} \
--k2 ${k2} \
--combine-method noisy_channel \
--task noisy_channel_translation \
--lenpen ${lenpen} \
--lm-wt ${lm_wt} \
--ch-wt ${bw_wt} \
--gen-subset test \
--remove-bpe \
--fp16 \
--batch-size 50 \
--channel-scoring-type src_vocab --top-k-vocab 500
```
## Test Data Preprocessing
For preprocessing and binarizing the test sets for Romanian-English and German-English translation, we use the following script -
```sh
FAIRSEQ=/path/to/fairseq
cd $FAIRSEQ
SCRIPTS=$FAIRSEQ/mosesdecoder/scripts
if [ ! -d "${SCRIPTS}" ]; then
echo 'Cloning Moses github repository (for tokenization scripts)...'
git clone https://github.com/moses-smt/mosesdecoder.git
fi
TOKENIZER=$SCRIPTS/tokenizer/tokenizer.perl
NORMALIZE=$SCRIPTS/tokenizer/normalize-punctuation.perl
s=de
t=en
test=wmt18
mkdir -p data_dir
# Tokenization
if [ $s == "ro" ] ; then
# Note: Get normalise-romanian.py and remove-diacritics.py from
# https://github.com/rsennrich/wmt16-scripts/tree/master/preprocess
sacrebleu -t $test -l $s-$t --echo src | \
$NORMALIZE -l $s | \
python normalise-romanian.py | \
python remove-diacritics.py | \
$TOKENIZER -l $s -a -q > data_dir/$test.$s-$t.$s
else
sacrebleu -t $test -l $s-$t --echo src | perl $NORMALIZE -l $s | perl $TOKENIZER -threads 8 -a -l $s > data_dir/$test.$s-$t.$s
fi
sacrebleu -t $test -l $s-$t --echo ref | perl $NORMALIZE -l $t | perl $TOKENIZER -threads 8 -a -l $t > data_dir/$test.$s-$t.$t
# Applying BPE
src_bpe_code=/path/to/source/language/bpe/code
tgt_bpe_code=/path/to/target/language/bpe/code
src_dict=/path/to/source/language/dict
tgt_dict=/path/to/target/language/dict
FASTBPE=$FAIRSEQ/fastBPE
if [ ! -d "${FASTBPE}" ] ; then
git clone https://github.com/glample/fastBPE.git
# Follow compilation instructions at https://github.com/glample/fastBPE
g++ -std=c++11 -pthread -O3 fastBPE/main.cc -IfastBPE -o fast
fi
${FASTBPE}/fast applybpe data_dir/bpe.$test.$s-$t.$s data_dir/$test.$s-$t.$s ${src_bpe_code}
${FASTBPE}/fast applybpe data_dir/bpe.$test.$s-$t.$t data_dir/$test.$s-$t.$t ${tgt_bpe_code}
fairseq-preprocess -s $s -t $t \
--testpref data_dir/bpe.$test.$s-$t \
--destdir data_dir/binarized \
--srcdict ${src_dict} \
--tgtdict ${tgt_dict}
```
## Calculating BLEU
```sh
DETOKENIZER=$SCRIPTS/tokenizer/detokenizer.perl
cat ${generation_output} | grep -P "^H" | sort -V | cut -f 3- | $DETOKENIZER -l $t -q -a | sacrebleu -t $test -l $s-$t
```
## Romanian-English Translation
The direct and channel models are trained using bitext data (WMT16) combined with backtranslated data (The monolingual data used for backtranslation comes from http://data.statmt.org/rsennrich/wmt16_backtranslations/ (Sennrich et al., 2016c))
The backtranslated data is generated using an ensemble of 3 English-Romanian models trained on bitext training data (WMT16) with unrestricted sampling.
### BPE Codes and Dictionary
We learn a joint BPE vocabulary of 18K types on the bitext training data which is used for both the source and target.
||Path|
|----------|------|
| BPE Code | [joint_bpe_18k](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/bpe_18k) |
| Dictionary | [dict](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/dict) |
### Direct Models
For Ro-En with backtranslation, the direct and channel models use a Transformer-Big architecture.
| Seed | Model |
|----|----|
| 2 | [ro_en_seed2.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/direct_models/seed2.pt)
| 4 | [ro_en_seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/direct_models/seed4.pt)
| 6 | [ro_en_seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/direct_models/seed6.pt)
### Channel Models
For channel models, we follow the same steps as for the direct models. But backtranslated data is generated in the opposite direction using [this Romanian monolingual data](http://data.statmt.org/rsennrich/wmt16_backtranslations/).
The best lenpen, LM weight and CH weight are obtained by sweeping over the validation set (wmt16/dev) using beam 5.
| Model Size | Lenpen | LM Weight | CH Weight | Seed 2 | Seed 4 | Seed 6 |
|----|----|----|----|----|----|----|
| `big` | 0.84 | 0.64 | 0.56 | [big.seed2.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/channel_models/big.seed2.pt) | [big.seed2.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/channel_models/big.seed2.pt) | [big.seed2.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/channel_models/big.seed2.pt) |
| `base_1_1` | 0.63 | 0.40 | 0.37 | [base_1_1.seed2.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/channel_models/base_1_1.seed2.pt) | [base_1_1.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/channel_models/base_1_1.seed4.pt) | [base_1_1.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/channel_models/base_1_1.seed6.pt) |
### Language Model
The model is trained on de-duplicated English Newscrawl data from 2007-2018 comprising 186 million sentences or 4.5B words after normalization and tokenization.
| | Path |
|----|----|
| `--lm-model` | [transformer_en_lm](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/lm_model/transformer_lm.pt) |
| `--lm-data` | [lm_data](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/lm_model/lm_dict)
## German-English Translation
### BPE Codes and Dictionaries
| | Path|
|----------|------|
| Source BPE Code | [de_bpe_code_24K](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/de_bpe_code_24K) |
| Target BPE Code | [en_bpe_code_24K](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/en_bpe_code_24K)
| Source Dictionary | [de_dict](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/de_dict) |
| Target Dictionary | [en_dict](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/en_dict) |
### Direct Models
We train on WMT’19 training data. Following [Ng et al., 2019](http://statmt.org/wmt19/pdf/53/WMT33.pdf), we apply language identification filtering and remove sentences longer than 250 tokens as well as sentence pairs with a source/target length ratio exceeding 1.5. This results in 26.8M sentence pairs.
We use the Transformer-Big architecture for the direct model.
| Seed | Model |
|:----:|----|
| 4 | [de_en_seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/direct_models/seed4.pt)
| 5 | [de_en_seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/direct_models/seed5.pt)
| 6 | [de_en_seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/direct_models/seed6.pt)
### Channel Models
We train on WMT’19 training data. Following [Ng et al., 2019](http://statmt.org/wmt19/pdf/53/WMT33.pdf), we apply language identification filtering and remove sentences longer than 250 tokens as well as sentence pairs with a source/target length ratio exceeding 1.5. This results in 26.8M sentence pairs.
| Model Size | Seed 4 | Seed 5 | Seed 6 |
|----|----|----|----|
| `big` | [big.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/big.seed4.pt) | [big.seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/big.seed5.pt) | [big.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/big.seed6.pt) |
| `big_1_1` | [big_1_1.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/big_1_1.seed4.pt) | [big_1_1.seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/big_1_1.seed5.pt) | [big_1_1.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/big_1_1.seed6.pt) |
| `base` | [base.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/base.seed4.pt) | [base.seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/base.seed5.pt) | [base.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/base.seed6.pt) |
| `base_1_1` | [base_1_1.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/base_1_1.seed4.pt) | [base_1_1.seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/base_1_1.seed5.pt) | [base_1_1.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/base_1_1.seed6.pt) |
| `half` | [half.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/half.seed4.pt) | [half.seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/half.seed5.pt) | [half.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/half.seed6.pt) |
| `half_1_1` | [half_1_1.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/half_1_1.seed4.pt) | [half_1_1.seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/half_1_1.seed5.pt) | [half_1_1.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/half_1_1.seed6.pt) |
| `quarter` | [quarter.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/quarter.seed4.pt) | [quarter.seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/quarter.seed5.pt) | [quarter.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/quarter.seed6.pt) |
| `quarter_1_1` | [quarter_1_1.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/quarter_1_1.seed4.pt) | [quarter_1_1.seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/quarter_1_1.seed5.pt) | [quarter_1_1.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/quarter_1_1.seed6.pt) |
| `8th` | [8th.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/8th.seed4.pt) | [8th.seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/8th.seed5.pt) | [8th.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/8th.seed6.pt) |
| `8th_1_1` | [8th_1_1.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/8th_1_1.seed4.pt) | [8th_1_1.seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/8th_1_1.seed5.pt) | [8th_1_1.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/8th_1_1.seed6.pt) |
| `16th` | [16th.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/16th.seed4.pt) | [16th.seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/16th.seed5.pt) | [16th.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/16th.seed6.pt) |
| `16th_1_1` | [16th_1_1.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/16th_1_1.seed4.pt) | [16th_1_1.seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/16th_1_1.seed5.pt) | [16th_1_1.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/16th_1_1.seed6.pt) |
### Language Model
The model is trained on de-duplicated English Newscrawl data from 2007-2018 comprising 186 million sentences or 4.5B words after normalization and tokenization.
| | Path |
|----|----|
| `--lm-model` | [transformer_en_lm](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/lm_model/transformer_lm.pt) |
| `--lm-data` | [lm_data](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/lm_model/lm_dict/)
## Citation
```bibtex
@inproceedings{bhosale2020language,
title={Language Models not just for Pre-training: Fast Online Neural Noisy Channel Modeling},
author={Shruti Bhosale and Kyra Yee and Sergey Edunov and Michael Auli},
booktitle={Proceedings of the Fifth Conference on Machine Translation (WMT)},
year={2020},
}
@inproceedings{yee2019simple,
title={Simple and Effective Noisy Channel Modeling for Neural Machine Translation},
author={Yee, Kyra and Dauphin, Yann and Auli, Michael},
booktitle={Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)},
pages={5700--5705},
year={2019}
}
```
|
COCO-LM/fairseq/examples/fast_noisy_channel/README.md/0
|
{
"file_path": "COCO-LM/fairseq/examples/fast_noisy_channel/README.md",
"repo_id": "COCO-LM",
"token_count": 7677
}
| 175 |
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
from collections import OrderedDict, defaultdict
import json
import os
import logging
from fairseq import options, models
from fairseq.data import (
data_utils,
Dictionary,
LanguagePairDataset,
IndexedDataset,
FairseqDataset,
)
from .multitask_data_utils import (
MultitaskDatasetWrapper,
MultidatasetEpochBatchIterator,
)
from fairseq.tasks import LegacyFairseqTask, register_task
logger = logging.getLogger(__name__)
@register_task("laser")
class LaserTask(LegacyFairseqTask):
@staticmethod
def add_args(parser):
"""Add task-specific arguments to the parser."""
parser.add_argument(
"configfile", metavar="PATH", help="dataset configuration file in json"
)
parser.add_argument(
"--weighting-alpha",
type=float,
default=None,
help="alpha for automatic weighting",
)
parser.add_argument(
"--raw-text", action="store_true", help="load raw text dataset"
)
parser.add_argument(
"--left-pad-source",
default="True",
type=str,
metavar="BOOL",
help="pad the source on the left (default: True)",
)
parser.add_argument(
"--left-pad-target",
default="False",
type=str,
metavar="BOOL",
help="pad the target on the left (default: False)",
)
parser.add_argument(
"--max-source-positions",
default=1024,
type=int,
metavar="N",
help="max number of tokens in the source sequence",
)
parser.add_argument(
"--max-target-positions",
default=1024,
type=int,
metavar="N",
help="max number of tokens in the target sequence",
)
def __init__(self, args, config, src_dictionary, tgt_dictionary, num_tasks):
super().__init__(args)
self.config = config
self.src_dictionary = src_dictionary
self.tgt_dictionary = tgt_dictionary
self.num_tasks = num_tasks
@classmethod
def setup_task(cls, args, **kwargs):
with open(args.configfile, "r") as f:
config = json.load(f)
num_tasks = max(dataset["id"] for dataset in config["train"]) + 1
args.left_pad_source = options.eval_bool(args.left_pad_source)
args.left_pad_target = options.eval_bool(args.left_pad_target)
src_dictionary = Dictionary.load(config["src_vocab"])
tgt_dictionary = Dictionary.load(config["tgt_vocab"])
logger.info(
"| src Dictionary {} : {} types".format(
config["src_vocab"], len(src_dictionary)
)
)
logger.info(
"| tgt Dictionary {} : {} types".format(
config["tgt_vocab"], len(tgt_dictionary)
)
)
return cls(args, config, src_dictionary, tgt_dictionary, num_tasks)
# Experimental overriding for backtranslation
def build_model(self, args):
model = models.build_model(args, self)
return model
def dataset(self, split):
if split not in self.datasets:
raise KeyError("Dataset not loaded: " + split)
return self.datasets[split]
def load_dataset(self, split, epoch=1, **kwargs):
"""Load a dataset split."""
def indexed_dataset(path, dictionary):
if self.args.raw_text:
raise Exception("Unable to handle raw text.")
dataset = IndexedDataset(path, fix_lua_indexing=True)
return dataset
pair_datasets = OrderedDict()
if split == "valid":
self.datasets[split] = pair_datasets
return
if split not in self.config:
raise FileNotFoundError(
"Dataset not found in config file: {}".format(split)
)
size_by_corpus = defaultdict(int)
size_sum = 0
size_sum_with_subsampling = 0
init_pair_datasets = {}
for dataset_config in self.config[split]:
src_path = os.path.dirname(dataset_config["src"])
corpus_name = src_path.split("/")[-2]
language_pair_name = src_path.split("/")[-1]
pair_datasets_key = corpus_name + "-" + language_pair_name
logger.info(f"loading... {pair_datasets_key}")
if "src" in dataset_config:
src_dataset = indexed_dataset(
dataset_config["src"], self.src_dictionary
)
else:
src_dataset = None
if "tgt" in dataset_config:
tgt_dataset = indexed_dataset(
dataset_config["tgt"], self.tgt_dictionary
)
else:
tgt_dataset = None
dataset = LanguagePairDataset(
src_dataset,
src_dataset.sizes,
self.src_dictionary,
tgt_dataset,
tgt_dataset.sizes,
self.tgt_dictionary,
left_pad_source=self.args.left_pad_source,
left_pad_target=self.args.left_pad_target,
)
if pair_datasets_key in init_pair_datasets:
logger.warning(
f"Ignoring already added {pair_datasets_key}. "
f"Consider using `sample` key in order to upsample."
)
else:
init_pair_datasets[pair_datasets_key] = {
"dataset": dataset,
"sample": dataset_config.get("sample", None),
"id": dataset_config.get("id", None),
"len": len(dataset),
}
length_sum = 0
weighted_freqs_sum = 0
freq_per_dataset = {}
vmax = 0
vmin = 1
weighted_freq_per_dataset = {}
if self.args.weighting_alpha:
for key in init_pair_datasets:
if init_pair_datasets[key]["sample"] is None:
length_sum += len(init_pair_datasets[key]["dataset"])
for key in init_pair_datasets:
if init_pair_datasets[key]["sample"] is None:
val = float(init_pair_datasets[key]["len"]) / length_sum
freq_per_dataset[key] = val
weighted_freqs_sum += val ** self.args.weighting_alpha
for key in freq_per_dataset:
val = (
freq_per_dataset[key] ** self.args.weighting_alpha
/ weighted_freqs_sum
)
vmin = min(vmin, val)
vmax = max(vmax, val)
weighted_freq_per_dataset[key] = val
for pair_datasets_key in init_pair_datasets:
dataset_config = init_pair_datasets[pair_datasets_key]
dataset = dataset_config["dataset"]
sample = dataset_config["sample"]
if sample is None:
sample = 1.0
if pair_datasets_key in weighted_freq_per_dataset:
w = vmax / weighted_freq_per_dataset[pair_datasets_key]
sample = w
sample = round(sample)
initial_sample = sample
initial_pair_datasets_key = pair_datasets_key
while sample >= 1.0:
assert (
pair_datasets_key not in pair_datasets
), f"{pair_datasets_key} already in"
size_sum_with_subsampling += len(dataset)
pair_datasets[pair_datasets_key] = MultitaskDatasetWrapper(
dataset, dataset_config.get("id", 0), 1.0, name=pair_datasets_key
)
size_sum += len(dataset)
sample -= 1.0
pair_datasets_key += "-up"
assert sample < 1e-6, f"sample remains > 0 {pair_datasets_key}"
logger.info(
f"added pair {initial_pair_datasets_key} length {len(dataset)} new_length = {len(dataset)*initial_sample}"
)
size_by_corpus[corpus_name] += len(dataset)
self.datasets[split] = pair_datasets
logger.info(
f"Datasets number = {len(self.datasets[split])} size = {size_sum} size_sum_with_subsampling = {size_sum_with_subsampling}"
)
@property
def source_dictionary(self):
return self.src_dictionary
@property
def target_dictionary(self):
return self.tgt_dictionary
def get_batch_iterator(
self,
dataset,
max_tokens=None,
max_sentences=None,
max_positions=None,
ignore_invalid_inputs=False,
required_batch_size_multiple=1,
seed=1,
num_shards=1,
shard_id=0,
num_workers=0,
epoch=1,
data_buffer_size=0,
disable_iterator_cache=False,
):
assert isinstance(dataset, OrderedDict)
assert len(dataset)
assert isinstance(dataset[next(iter(dataset))], FairseqDataset)
# initialize the dataset with the correct starting epoch
for _, dt in dataset.items():
dt.set_epoch(epoch)
indices = OrderedDict()
batch_sampler = OrderedDict()
with data_utils.numpy_seed(seed + epoch):
for key, dt in dataset.items():
logger.info(f"\t ordered_indices {key}")
indices[key] = dt.ordered_indices()
# filter examples that are too large
if max_positions is not None:
for key, dt in dataset.items():
logger.info(f"\t filter_by_size {key}")
indices[key], ignored = dt.filter_indices_by_size(
indices[key], max_positions
)
for key, dt in dataset.items():
logger.info(f"\t batch_by_size {key}")
batch_sampler[key] = data_utils.batch_by_size(
indices[key],
dt.num_tokens,
max_tokens=max_tokens,
max_sentences=max_sentences,
required_batch_size_multiple=required_batch_size_multiple,
)
epoch_iter = MultidatasetEpochBatchIterator(
dataset=dataset,
batch_sampler=batch_sampler,
seed=seed,
num_shards=num_shards,
shard_id=shard_id,
num_workers=num_workers,
epoch=epoch,
)
return epoch_iter
|
COCO-LM/fairseq/examples/laser/laser_src/laser_task.py/0
|
{
"file_path": "COCO-LM/fairseq/examples/laser/laser_src/laser_task.py",
"repo_id": "COCO-LM",
"token_count": 5535
}
| 176 |
#!/usr/bin/env python3
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
# Use: echo {text} | python tokenize_indic.py {language}
import sys
from indicnlp.normalize.indic_normalize import IndicNormalizerFactory
from indicnlp.tokenize.indic_tokenize import trivial_tokenize
factory = IndicNormalizerFactory()
normalizer = factory.get_normalizer(
sys.argv[1], remove_nuktas=False, nasals_mode="do_nothing"
)
for line in sys.stdin:
normalized_line = normalizer.normalize(line.strip())
tokenized_line = " ".join(trivial_tokenize(normalized_line, sys.argv[1]))
print(tokenized_line)
|
COCO-LM/fairseq/examples/m2m_100/tokenizers/tokenize_indic.py/0
|
{
"file_path": "COCO-LM/fairseq/examples/m2m_100/tokenizers/tokenize_indic.py",
"repo_id": "COCO-LM",
"token_count": 244
}
| 177 |
#!/bin/bash
# Copyright (c) Facebook, Inc. and its affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
# set -x -e
if [ -z $WORKDIR_ROOT ] ;
then
echo "please specify your working directory root in environment variable WORKDIR_ROOT. Exitting..."
exit
fi
# put intermediate files
TMP_DIR=$WORKDIR_ROOT/temp/af_xhv2
# output {train,valid,test} files to dest
DEST=${WORKDIR_ROOT}/ML50/raw
ROOT=${WORKDIR_ROOT}
UTILS=$PWD/utils
TMX2CORPUS="${UTILS}/tmx2corpus"
TMX_TOOL="python ${TMX2CORPUS}/tmx2corpus.py"
mkdir -p $TMP_DIR
mkdir -p $DEST
mkdir -p $UTILS
function download_opus(){
src=$1
tgt=$2
subset=$3
url=$4
mkdir extract_$subset.$src-$tgt
pushd extract_$subset.$src-$tgt
if [ ! -f "$subset.$src-$tgt.tmx.gz" ]; then
wget $url -O "$subset.$src-$tgt.tmx.gz"
gzip -d "$subset.$src-$tgt.tmx.gz"
f=$subset.$src-$tgt.tmx
$TMX_TOOL $f
mv bitext.$src ../$subset.$src-$tgt.$src
mv bitext.$tgt ../$subset.$src-$tgt.$tgt
fi
popd
}
function concat_subsets(){
src=$1
tgt=$2
subsets=$3
src_train=raw_train.$src-$tgt.$src
tgt_train=raw_train.$src-$tgt.$tgt
> $src_train
> $tgt_train
for subset in $subsets; do
cat $subset.$src-$tgt.$src >> $src_train
cat $subset.$src-$tgt.$tgt >> $tgt_train
done
}
function get_seeded_random()
{
seed="$1"
openssl enc -aes-256-ctr -pass pass:"$seed" -nosalt \
</dev/zero 2>/dev/null
}
function split_train_valid(){
src=$1
tgt=$2
raw_src_train=raw_train.$src-$tgt.$src
raw_tgt_train=raw_train.$src-$tgt.$tgt
shuf --random-source=<(get_seeded_random 43) $raw_src_train > shuffled.$src-$tgt.$src
shuf --random-source=<(get_seeded_random 43) $raw_tgt_train > shuffled.$src-$tgt.$tgt
head -n 1500 shuffled.$src-$tgt.$src > valid.$src-$tgt.$src
head -n 1500 shuffled.$src-$tgt.$tgt > valid.$src-$tgt.$tgt
tail -n +1501 shuffled.$src-$tgt.$src > train.$src-$tgt.$src
tail -n +1501 shuffled.$src-$tgt.$tgt > train.$src-$tgt.$tgt
}
function copy2dst(){
lsrc=$1
ltgt=$2
src=${lsrc:0:2}
tgt=${ltgt:0:2}
cp valid.$src-$tgt.$src $DEST/valid.$lsrc-$ltgt.$lsrc
cp valid.$src-$tgt.$tgt $DEST/valid.$lsrc-$ltgt.$ltgt
cp train.$src-$tgt.$src $DEST/train.$lsrc-$ltgt.$lsrc
cp train.$src-$tgt.$tgt $DEST/train.$lsrc-$ltgt.$ltgt
}
#for xh-en
declare -A xh_en_urls
xh_en_urls=(
[Tatoeba]=https://object.pouta.csc.fi/OPUS-Tatoeba/v20190709/tmx/en-xh.tmx.gz
[wikimedia]=https://object.pouta.csc.fi/OPUS-wikimedia/v20190628/tmx/en-xh.tmx.gz
[memat]=https://object.pouta.csc.fi/OPUS-memat/v1/tmx/en-xh.tmx.gz
[uedin]=https://object.pouta.csc.fi/OPUS-bible-uedin/v1/tmx/en-xh.tmx.gz
[GNOME]=https://object.pouta.csc.fi/OPUS-GNOME/v1/tmx/en-xh.tmx.gz
[XhosaNavy]=https://object.pouta.csc.fi/OPUS-XhosaNavy/v1/tmx/en-xh.tmx.gz
[KDE4]=https://object.pouta.csc.fi/OPUS-KDE4/v2/tmx/en-xh.tmx.gz
[Ubuntu]=https://object.pouta.csc.fi/OPUS-Ubuntu/v14.10/tmx/en-xh.tmx.gz
)
mkdir $TMP_DIR/xh-en
pushd $TMP_DIR/xh-en
for k in "${!xh_en_urls[@]}"
do
name=$k
url=${xh_en_urls[$k]}
echo "$name: $url"
download_opus xh en $name $url
done
concat_subsets xh en "${!xh_en_urls[@]}"
split_train_valid xh en
copy2dst xh_ZA en_XX
popd
##
#for af-en
declare -A af_en_urls
af_en_urls=(
[Tatoeba]=https://object.pouta.csc.fi/OPUS-Tatoeba/v20190709/tmx/af-en.tmx.gz
[uedin]=https://object.pouta.csc.fi/OPUS-bible-uedin/v1/tmx/af-en.tmx.gz
[GNOME]=https://object.pouta.csc.fi/OPUS-GNOME/v1/tmx/af-en.tmx.gz
[QED]=https://object.pouta.csc.fi/OPUS-QED/v2.0a/tmx/af-en.tmx.gz
[KDE4]=https://object.pouta.csc.fi/OPUS-KDE4/v2/tmx/af-en.tmx.gz
[OpenSubtitles]=https://object.pouta.csc.fi/OPUS-OpenSubtitles/v2018/tmx/af-en.tmx.gz
[SPC]=https://object.pouta.csc.fi/OPUS-SPC/v1/tmx/af-en.tmx.gz
[Ubuntu]=https://object.pouta.csc.fi/OPUS-Ubuntu/v14.10/tmx/af-en.tmx.gz
)
mkdir $TMP_DIR/af-en
pushd $TMP_DIR/af-en
for k in "${!af_en_urls[@]}"
do
name=$k
url=${af_en_urls[$k]}
echo "$name: $url"
download_opus af en $name $url
done
concat_subsets af en "${!af_en_urls[@]}"
split_train_valid af en
copy2dst af_ZA en_XX
popd
|
COCO-LM/fairseq/examples/multilingual/data_scripts/download_af_xh.sh/0
|
{
"file_path": "COCO-LM/fairseq/examples/multilingual/data_scripts/download_af_xh.sh",
"repo_id": "COCO-LM",
"token_count": 2235
}
| 178 |
#!/bin/bash
# Copyright (c) Facebook, Inc. and its affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
lang_pairs="en-fr,en-cs,fr-en,cs-en"
path_2_data=$1 # <path to data>
lang_list=$2 # <path to a file which contains list of languages separated by new lines>
model=$3 # <path to a trained model>
source_lang=cs
target_lang=en
fairseq-generate "$path_2_data" \
--path "$model" \
--task translation_multi_simple_epoch \
--gen-subset test \
--source-lang "$source_lang" \
--target-lang "$target_lang" \
--sacrebleu --remove-bpe 'sentencepiece'\
--batch-size 32 \
--encoder-langtok "src" \
--decoder-langtok \
--lang-dict "$lang_list" \
--lang-pairs "$lang_pairs"
|
COCO-LM/fairseq/examples/multilingual/multilingual_fairseq_gen.sh/0
|
{
"file_path": "COCO-LM/fairseq/examples/multilingual/multilingual_fairseq_gen.sh",
"repo_id": "COCO-LM",
"token_count": 284
}
| 179 |
# Transformer with Pointer-Generator Network
This page describes the `transformer_pointer_generator` model that incorporates
a pointing mechanism in the Transformer model that facilitates copying of input
words to the output. This architecture is described in [Enarvi et al. (2020)](https://www.aclweb.org/anthology/2020.nlpmc-1.4/).
## Background
The pointer-generator network was introduced in [See et al. (2017)](https://arxiv.org/abs/1704.04368)
for RNN encoder-decoder attention models. A similar mechanism can be
incorporated in a Transformer model by reusing one of the many attention
distributions for pointing. The attention distribution over the input words is
interpolated with the normal output distribution over the vocabulary words. This
allows the model to generate words that appear in the input, even if they don't
appear in the vocabulary, helping especially with small vocabularies.
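The interpolation can be pictured with a short PyTorch sketch. This is an illustration only, not the fairseq implementation; the tensor names and shapes are assumptions made for the example. The copy probabilities are scattered onto the vocabulary positions of the source token ids (which include any `<unk-N>` markers):
```python
import torch

def combine_distributions(p_vocab, p_attn, p_gen, src_tokens):
    """Interpolate the generator distribution with a copy distribution.

    Assumed shapes:
      p_vocab:    (batch, tgt_len, vocab_size)  softmax over the output vocabulary
      p_attn:     (batch, tgt_len, src_len)     attention distribution used for pointing
      p_gen:      (batch, tgt_len, 1)           probability of generating vs. copying
      src_tokens: (batch, src_len)              source token ids (incl. <unk-N> markers)
    """
    gen_part = p_gen * p_vocab
    # scatter the copy mass onto the vocabulary entries of the source tokens
    index = src_tokens.unsqueeze(1).expand(-1, p_attn.size(1), -1)
    copy_part = torch.zeros_like(p_vocab).scatter_add_(2, index, (1.0 - p_gen) * p_attn)
    return gen_part + copy_part
```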
## Implementation
The mechanism for copying out-of-vocabulary words from the input has been
implemented differently to See et al. In their [implementation](https://github.com/abisee/pointer-generator)
they convey the word identities through the model in order to be able to produce
words that appear in the input sequence but not in the vocabulary. A different
approach was taken in the Fairseq implementation to keep it self-contained in
the model file, avoiding any changes to the rest of the code base. Copying
out-of-vocabulary words is possible by pre-processing the input and
post-processing the output. This is described in detail in the next section.
## Usage
The training and evaluation procedure is outlined below. You can also find a
more detailed example for the XSum dataset on [this page](README.xsum.md).
##### 1. Create a vocabulary and extend it with source position markers
The pointing mechanism is especially helpful with small vocabularies, if we are
able to recover the identities of any out-of-vocabulary words that are copied
from the input. For this purpose, the model allows extending the vocabulary with
special tokens that can be used in place of `<unk>` tokens to identify different
input positions. For example, the user may add `<unk-0>`, `<unk-1>`, `<unk-2>`,
etc. to the end of the vocabulary, after the normal words. Below is an example
of how to create a vocabulary of 10000 most common words and add 1000 input
position markers.
```bash
vocab_size=10000
position_markers=1000
export LC_ALL=C
cat train.src train.tgt |
tr -s '[:space:]' '\n' |
sort |
uniq -c |
sort -k1,1bnr -k2 |
head -n "$((vocab_size - 4))" |
awk '{ print $2 " " $1 }' >dict.pg.txt
python3 -c "[print('<unk-{}> 0'.format(n)) for n in range($position_markers)]" >>dict.pg.txt
```
##### 2. Preprocess the text data
The idea is that any `<unk>` tokens in the text are replaced with `<unk-0>` if
it appears in the first input position, `<unk-1>` if it appears in the second
input position, and so on. This can be achieved using the `preprocess.py` script
that is provided in this directory.
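A minimal sketch of this replacement is shown below; the bundled `preprocess.py` remains the authoritative version, and the helper here is purely illustrative:
```python
def replace_oov_with_markers(tokens, vocab, num_markers=1000):
    """Replace out-of-vocabulary tokens with positional <unk-N> markers."""
    out = []
    for pos, tok in enumerate(tokens):
        if tok in vocab:
            out.append(tok)
        elif pos < num_markers:
            out.append(f"<unk-{pos}>")
        else:
            out.append("<unk>")  # ran out of markers, fall back to plain <unk>
    return out

# With vocab = {"the", "is", "old"}:
# replace_oov_with_markers("the xylograph is old".split(), {"the", "is", "old"})
# -> ['the', '<unk-1>', 'is', 'old']
```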
##### 3. Train a model
The number of these special tokens is given to the model with the
`--source-position-markers` argument—the model simply maps all of these to the
same word embedding as `<unk>`.
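Conceptually, this mapping can be pictured as in the sketch below; it is not the actual fairseq embedding code, and the class name is made up for the example:
```python
import torch
import torch.nn as nn

class PositionMarkerEmbedding(nn.Module):
    """Look up <unk-N> marker ids (>= base_vocab_size) with the <unk> embedding."""

    def __init__(self, base_vocab_size, embed_dim, unk_index):
        super().__init__()
        self.embed = nn.Embedding(base_vocab_size, embed_dim)
        self.base_vocab_size = base_vocab_size
        self.unk_index = unk_index

    def forward(self, tokens):
        # ids of the appended position markers are clamped to the <unk> id
        clamped = torch.where(
            tokens < self.base_vocab_size,
            tokens,
            torch.full_like(tokens, self.unk_index),
        )
        return self.embed(clamped)
```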
The attention distribution that is used for pointing is selected using the
`--alignment-heads` and `--alignment-layer` command-line arguments in the same
way as with the `transformer_align` model.
##### 4. Generate text and postprocess it
When using the model to generate text, you want to preprocess the input text in
the same way that training data was processed, replacing out-of-vocabulary words
with `<unk-N>` tokens. If any of these tokens are copied to the output, the
actual words can be retrieved from the unprocessed input text. Any `<unk-N>`
token should be replaced with the word at position N in the original input
sequence. This can be achieved using the `postprocess.py` script.
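A minimal post-processing sketch is shown below; `postprocess.py` is the authoritative version and this helper is only an illustration:
```python
import re

def restore_oov_words(hypothesis_tokens, source_tokens):
    """Replace <unk-N> markers with the word at position N of the original input."""
    restored = []
    for tok in hypothesis_tokens:
        m = re.fullmatch(r"<unk-(\d+)>", tok)
        if m is not None and int(m.group(1)) < len(source_tokens):
            restored.append(source_tokens[int(m.group(1))])
        else:
            restored.append(tok)
    return restored
```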
|
COCO-LM/fairseq/examples/pointer_generator/README.md/0
|
{
"file_path": "COCO-LM/fairseq/examples/pointer_generator/README.md",
"repo_id": "COCO-LM",
"token_count": 1054
}
| 180 |
# Simultaneous Machine Translation
This directory contains the code for the paper [Monotonic Multihead Attention](https://openreview.net/forum?id=Hyg96gBKPS)
## Prepare Data
[Please follow the instructions to download and preprocess the WMT'15 En-De dataset.](https://github.com/pytorch/fairseq/tree/simulastsharedtask/examples/translation#prepare-wmt14en2desh)
## Training
- MMA-IL
```shell
fairseq-train \
data-bin/wmt15_en_de_32k \
--simul-type infinite_lookback \
--user-dir $FAIRSEQ/example/simultaneous_translation \
--mass-preservation \
--criterion latency_augmented_label_smoothed_cross_entropy \
--latency-weight-avg 0.1 \
--max-update 50000 \
--arch transformer_monotonic_iwslt_de_en save_dir_key=lambda \
--optimizer adam --adam-betas '(0.9, 0.98)' \
--lr-scheduler 'inverse_sqrt' \
--warmup-init-lr 1e-7 --warmup-updates 4000 \
--lr 5e-4 --stop-min-lr 1e-9 --clip-norm 0.0 --weight-decay 0.0001\
--dropout 0.3 \
--label-smoothing 0.1\
--max-tokens 3584
```
- MMA-H
```shell
fairseq-train \
data-bin/wmt15_en_de_32k \
--simul-type hard_aligned \
--user-dir $FAIRSEQ/example/simultaneous_translation \
--mass-preservation \
--criterion latency_augmented_label_smoothed_cross_entropy \
--latency-weight-var 0.1 \
--max-update 50000 \
--arch transformer_monotonic_iwslt_de_en save_dir_key=lambda \
--optimizer adam --adam-betas '(0.9, 0.98)' \
--lr-scheduler 'inverse_sqrt' \
--warmup-init-lr 1e-7 --warmup-updates 4000 \
--lr 5e-4 --stop-min-lr 1e-9 --clip-norm 0.0 --weight-decay 0.0001\
--dropout 0.3 \
--label-smoothing 0.1\
--max-tokens 3584
```
- wait-k
```shell
fairseq-train \
data-bin/wmt15_en_de_32k \
--simul-type wait-k \
--waitk-lagging 3 \
--user-dir $FAIRSEQ/example/simultaneous_translation \
--mass-preservation \
--criterion latency_augmented_label_smoothed_cross_entropy \
--max-update 50000 \
--arch transformer_monotonic_iwslt_de_en save_dir_key=lambda \
--optimizer adam --adam-betas '(0.9, 0.98)' \
--lr-scheduler 'inverse_sqrt' \
--warmup-init-lr 1e-7 --warmup-updates 4000 \
--lr 5e-4 --stop-min-lr 1e-9 --clip-norm 0.0 --weight-decay 0.0001\
--dropout 0.3 \
--label-smoothing 0.1\
--max-tokens 3584
```
## Evaluation
More details on evaluation can be found [here](https://github.com/pytorch/fairseq/blob/simulastsharedtask/examples/simultaneous_translation/docs/evaluation.md)
### Start the server
```shell
python ./eval/server.py \
--src-file $SRC_FILE \
--ref-file $TGT_FILE
```
### Run the client
```shell
python ./evaluate.py \
--data-bin data-bin/wmt15_en_de_32k \
--model-path ./checkpoints/checkpoint_best.pt \
--scores --output $RESULT_DIR
```
### Run evaluation locally without server
```shell
python ./eval/evaluate.py \
--local \
--src-file $SRC_FILE \
--tgt-file $TGT_FILE \
--data-bin data-bin/wmt15_en_de_32k \
--model-path ./checkpoints/checkpoint_best.pt \
--scores --output $RESULT_DIR
```
|
COCO-LM/fairseq/examples/simultaneous_translation/README.md/0
|
{
"file_path": "COCO-LM/fairseq/examples/simultaneous_translation/README.md",
"repo_id": "COCO-LM",
"token_count": 1310
}
| 181 |
# Copyright (c) 2017-present, Facebook, Inc.
# All rights reserved.
#
# This source code is licensed under the license found in the LICENSE file in
# the root directory of this source tree. An additional grant of patent rights
# can be found in the PATENTS file in the same directory.
from fairseq import checkpoint_utils
from fairseq.models import (
register_model,
register_model_architecture,
)
from fairseq.models.speech_to_text import (
ConvTransformerModel,
convtransformer_espnet,
ConvTransformerEncoder,
)
from fairseq.models.speech_to_text.modules.augmented_memory_attention import (
augmented_memory,
SequenceEncoder,
AugmentedMemoryConvTransformerEncoder,
)
from fairseq.models.speech_to_text.modules.emformer import emformer_encoder
@register_model("convtransformer_simul_trans")
class SimulConvTransformerModel(ConvTransformerModel):
"""
Implementation of the paper:
SimulMT to SimulST: Adapting Simultaneous Text Translation to
End-to-End Simultaneous Speech Translation
https://www.aclweb.org/anthology/2020.aacl-main.58.pdf
"""
@staticmethod
def add_args(parser):
super(SimulConvTransformerModel, SimulConvTransformerModel).add_args(parser)
parser.add_argument(
"--train-monotonic-only",
action="store_true",
default=False,
help="Only train monotonic attention",
)
@classmethod
def build_decoder(cls, args, task, embed_tokens):
tgt_dict = task.tgt_dict
from examples.simultaneous_translation.models.transformer_monotonic_attention import (
TransformerMonotonicDecoder,
)
decoder = TransformerMonotonicDecoder(args, tgt_dict, embed_tokens)
if getattr(args, "load_pretrained_decoder_from", None):
decoder = checkpoint_utils.load_pretrained_component_from_model(
component=decoder, checkpoint=args.load_pretrained_decoder_from
)
return decoder
@register_model_architecture(
"convtransformer_simul_trans", "convtransformer_simul_trans_espnet"
)
def convtransformer_simul_trans_espnet(args):
convtransformer_espnet(args)
@register_model("convtransformer_augmented_memory")
@augmented_memory
class AugmentedMemoryConvTransformerModel(SimulConvTransformerModel):
@classmethod
def build_encoder(cls, args):
encoder = SequenceEncoder(args, AugmentedMemoryConvTransformerEncoder(args))
if getattr(args, "load_pretrained_encoder_from", None) is not None:
encoder = checkpoint_utils.load_pretrained_component_from_model(
component=encoder, checkpoint=args.load_pretrained_encoder_from
)
return encoder
@register_model_architecture(
"convtransformer_augmented_memory", "convtransformer_augmented_memory"
)
def augmented_memory_convtransformer_espnet(args):
convtransformer_espnet(args)
# ============================================================================ #
# Convtransformer
# with monotonic attention decoder
# with emformer encoder
# ============================================================================ #
@emformer_encoder
class ConvTransformerEmformerEncoder(ConvTransformerEncoder):
pass
@register_model("convtransformer_emformer")
class ConvtransformerEmformer(SimulConvTransformerModel):
@staticmethod
def add_args(parser):
super(ConvtransformerEmformer, ConvtransformerEmformer).add_args(parser)
parser.add_argument(
"--segment-length",
type=int,
metavar="N",
help="length of each segment (not including left context / right context)",
)
parser.add_argument(
"--segment-left-context",
type=int,
help="length of left context in a segment",
)
parser.add_argument(
"--segment-right-context",
type=int,
help="length of right context in a segment",
)
parser.add_argument(
"--max-memory-size",
type=int,
default=-1,
help="Right context for the segment.",
)
parser.add_argument(
"--amtrf-tanh-on-mem",
default=False,
action="store_true",
help="whether to use tanh on memory vector",
)
@classmethod
def build_encoder(cls, args):
encoder = ConvTransformerEmformerEncoder(args)
if getattr(args, "load_pretrained_encoder_from", None):
encoder = checkpoint_utils.load_pretrained_component_from_model(
component=encoder, checkpoint=args.load_pretrained_encoder_from
)
return encoder
@register_model_architecture(
"convtransformer_emformer",
"convtransformer_emformer",
)
def convtransformer_emformer_base(args):
convtransformer_espnet(args)
|
COCO-LM/fairseq/examples/simultaneous_translation/models/convtransformer_simul_trans.py/0
|
{
"file_path": "COCO-LM/fairseq/examples/simultaneous_translation/models/convtransformer_simul_trans.py",
"repo_id": "COCO-LM",
"token_count": 1935
}
| 182 |
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
import os
import numpy as np
from fairseq.data import FairseqDataset
from . import data_utils
from .collaters import Seq2SeqCollater
class AsrDataset(FairseqDataset):
"""
A dataset representing speech and corresponding transcription.
Args:
aud_paths: (List[str]): A list of str with paths to audio files.
aud_durations_ms (List[int]): A list of int containing the durations of
audio files.
tgt (List[torch.LongTensor]): A list of LongTensors containing the indices
of target transcriptions.
tgt_dict (~fairseq.data.Dictionary): target vocabulary.
ids (List[str]): A list of utterance IDs.
speakers (List[str]): A list of speakers corresponding to utterances.
num_mel_bins (int): Number of triangular mel-frequency bins (default: 80)
frame_length (float): Frame length in milliseconds (default: 25.0)
frame_shift (float): Frame shift in milliseconds (default: 10.0)
"""
def __init__(
self,
aud_paths,
aud_durations_ms,
tgt,
tgt_dict,
ids,
speakers,
num_mel_bins=80,
frame_length=25.0,
frame_shift=10.0,
):
assert frame_length > 0
assert frame_shift > 0
assert all(x > frame_length for x in aud_durations_ms)
self.frame_sizes = [
int(1 + (d - frame_length) / frame_shift) for d in aud_durations_ms
]
assert len(aud_paths) > 0
assert len(aud_paths) == len(aud_durations_ms)
assert len(aud_paths) == len(tgt)
assert len(aud_paths) == len(ids)
assert len(aud_paths) == len(speakers)
self.aud_paths = aud_paths
self.tgt_dict = tgt_dict
self.tgt = tgt
self.ids = ids
self.speakers = speakers
self.num_mel_bins = num_mel_bins
self.frame_length = frame_length
self.frame_shift = frame_shift
self.s2s_collater = Seq2SeqCollater(
0,
1,
pad_index=self.tgt_dict.pad(),
eos_index=self.tgt_dict.eos(),
move_eos_to_beginning=True,
)
def __getitem__(self, index):
import torchaudio
import torchaudio.compliance.kaldi as kaldi
tgt_item = self.tgt[index] if self.tgt is not None else None
path = self.aud_paths[index]
if not os.path.exists(path):
raise FileNotFoundError("Audio file not found: {}".format(path))
sound, sample_rate = torchaudio.load_wav(path)
output = kaldi.fbank(
sound,
num_mel_bins=self.num_mel_bins,
frame_length=self.frame_length,
frame_shift=self.frame_shift,
)
output_cmvn = data_utils.apply_mv_norm(output)
return {"id": index, "data": [output_cmvn.detach(), tgt_item]}
def __len__(self):
return len(self.aud_paths)
def collater(self, samples):
"""Merge a list of samples to form a mini-batch.
Args:
samples (List[int]): sample indices to collate
Returns:
dict: a mini-batch suitable for forwarding with a Model
"""
return self.s2s_collater.collate(samples)
def num_tokens(self, index):
return self.frame_sizes[index]
def size(self, index):
"""Return an example's size as a float or tuple. This value is used when
filtering a dataset with ``--max-positions``."""
return (
self.frame_sizes[index],
len(self.tgt[index]) if self.tgt is not None else 0,
)
def ordered_indices(self):
"""Return an ordered list of indices. Batches will be constructed based
on this order."""
return np.arange(len(self))
|
COCO-LM/fairseq/examples/speech_recognition/data/asr_dataset.py/0
|
{
"file_path": "COCO-LM/fairseq/examples/speech_recognition/data/asr_dataset.py",
"repo_id": "COCO-LM",
"token_count": 1776
}
| 183 |
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
import json
import os
import re
import sys
import torch
from examples.speech_recognition.data import AsrDataset
from examples.speech_recognition.data.replabels import replabel_symbol
from fairseq.data import Dictionary
from fairseq.tasks import LegacyFairseqTask, register_task
def get_asr_dataset_from_json(data_json_path, tgt_dict):
"""
Parse data json and create dataset.
See scripts/asr_prep_json.py which pack json from raw files
Json example:
{
"utts": {
"4771-29403-0025": {
"input": {
"length_ms": 170,
"path": "/tmp/file1.flac"
},
"output": {
"text": "HELLO \n",
"token": "HE LLO",
"tokenid": "4815, 861"
}
},
"1564-142299-0096": {
...
}
}
"""
if not os.path.isfile(data_json_path):
raise FileNotFoundError("Dataset not found: {}".format(data_json_path))
with open(data_json_path, "rb") as f:
data_samples = json.load(f)["utts"]
assert len(data_samples) != 0
sorted_samples = sorted(
data_samples.items(),
key=lambda sample: int(sample[1]["input"]["length_ms"]),
reverse=True,
)
aud_paths = [s[1]["input"]["path"] for s in sorted_samples]
ids = [s[0] for s in sorted_samples]
speakers = []
for s in sorted_samples:
m = re.search("(.+?)-(.+?)-(.+?)", s[0])
speakers.append(m.group(1) + "_" + m.group(2))
frame_sizes = [s[1]["input"]["length_ms"] for s in sorted_samples]
tgt = [
[int(i) for i in s[1]["output"]["tokenid"].split(", ")]
for s in sorted_samples
]
# append eos
tgt = [[*t, tgt_dict.eos()] for t in tgt]
return AsrDataset(aud_paths, frame_sizes, tgt, tgt_dict, ids, speakers)
@register_task("speech_recognition")
class SpeechRecognitionTask(LegacyFairseqTask):
"""
Task for training speech recognition model.
"""
@staticmethod
def add_args(parser):
"""Add task-specific arguments to the parser."""
parser.add_argument("data", help="path to data directory")
parser.add_argument(
"--silence-token", default="\u2581", help="token for silence (used by w2l)"
)
parser.add_argument(
"--max-source-positions",
default=sys.maxsize,
type=int,
metavar="N",
help="max number of frames in the source sequence",
)
parser.add_argument(
"--max-target-positions",
default=1024,
type=int,
metavar="N",
help="max number of tokens in the target sequence",
)
def __init__(self, args, tgt_dict):
super().__init__(args)
self.tgt_dict = tgt_dict
@classmethod
def setup_task(cls, args, **kwargs):
"""Setup the task (e.g., load dictionaries)."""
dict_path = os.path.join(args.data, "dict.txt")
if not os.path.isfile(dict_path):
raise FileNotFoundError("Dict not found: {}".format(dict_path))
tgt_dict = Dictionary.load(dict_path)
if args.criterion == "ctc_loss":
tgt_dict.add_symbol("<ctc_blank>")
elif args.criterion == "asg_loss":
for i in range(1, args.max_replabel + 1):
tgt_dict.add_symbol(replabel_symbol(i))
print("| dictionary: {} types".format(len(tgt_dict)))
return cls(args, tgt_dict)
def load_dataset(self, split, combine=False, **kwargs):
"""Load a given dataset split.
Args:
split (str): name of the split (e.g., train, valid, test)
"""
data_json_path = os.path.join(self.args.data, "{}.json".format(split))
self.datasets[split] = get_asr_dataset_from_json(data_json_path, self.tgt_dict)
def build_generator(self, models, args, **unused):
w2l_decoder = getattr(args, "w2l_decoder", None)
if w2l_decoder == "viterbi":
from examples.speech_recognition.w2l_decoder import W2lViterbiDecoder
return W2lViterbiDecoder(args, self.target_dictionary)
elif w2l_decoder == "kenlm":
from examples.speech_recognition.w2l_decoder import W2lKenLMDecoder
return W2lKenLMDecoder(args, self.target_dictionary)
elif w2l_decoder == "fairseqlm":
from examples.speech_recognition.w2l_decoder import W2lFairseqLMDecoder
return W2lFairseqLMDecoder(args, self.target_dictionary)
else:
return super().build_generator(models, args)
@property
def target_dictionary(self):
"""Return the :class:`~fairseq.data.Dictionary` for the language
model."""
return self.tgt_dict
@property
def source_dictionary(self):
"""Return the source :class:`~fairseq.data.Dictionary` (if applicable
for this task)."""
return None
def max_positions(self):
"""Return the max speech and sentence length allowed by the task."""
return (self.args.max_source_positions, self.args.max_target_positions)
|
COCO-LM/fairseq/examples/speech_recognition/tasks/speech_recognition.py/0
|
{
"file_path": "COCO-LM/fairseq/examples/speech_recognition/tasks/speech_recognition.py",
"repo_id": "COCO-LM",
"token_count": 2518
}
| 184 |
# Hierarchical Neural Story Generation (Fan et al., 2018)
The following commands provide an example of pre-processing data, training a model, and generating text for story generation with the WritingPrompts dataset.
## Pre-trained models
Description | Dataset | Model | Test set(s)
---|---|---|---
Stories with Convolutional Model <br> ([Fan et al., 2018](https://arxiv.org/abs/1805.04833)) | [WritingPrompts](https://dl.fbaipublicfiles.com/fairseq/data/writingPrompts.tar.gz) | [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/stories_checkpoint.tar.bz2) | [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/stories_test.tar.bz2)
We provide sample stories generated by the [convolutional seq2seq model](https://dl.fbaipublicfiles.com/fairseq/data/seq2seq_stories.txt) and [fusion model](https://dl.fbaipublicfiles.com/fairseq/data/fusion_stories.txt) from [Fan et al., 2018](https://arxiv.org/abs/1805.04833). The corresponding prompts for the fusion model can be found [here](https://dl.fbaipublicfiles.com/fairseq/data/fusion_prompts.txt). Note that there are unk tokens in the file, as we modeled a small full vocabulary (no BPE or pre-training). We did not use these unk prompts for human evaluation.
## Dataset
The dataset can be downloaded like this:
```bash
cd examples/stories
curl https://dl.fbaipublicfiles.com/fairseq/data/writingPrompts.tar.gz | tar xvzf -
```
and contains a train, test, and valid split. The dataset is described here: https://arxiv.org/abs/1805.04833. We model only the first 1000 words of each story, including one newLine token.
## Example usage
First we will preprocess the dataset. Note that the dataset release is the full data, but the paper models the first 1000 words of each story. Here is example code that trims the dataset to the first 1000 words of each story:
```python
data = ["train", "test", "valid"]
for name in data:
with open(name + ".wp_target") as f:
stories = f.readlines()
stories = [" ".join(i.split()[0:1000]) for i in stories]
with open(name + ".wp_target", "w") as o:
for line in stories:
o.write(line.strip() + "\n")
```
Once we've trimmed the data we can binarize it and train our model:
```bash
# Binarize the dataset:
export TEXT=examples/stories/writingPrompts
fairseq-preprocess --source-lang wp_source --target-lang wp_target \
--trainpref $TEXT/train --validpref $TEXT/valid --testpref $TEXT/test \
--destdir data-bin/writingPrompts --padding-factor 1 --thresholdtgt 10 --thresholdsrc 10
# Train the model:
fairseq-train data-bin/writingPrompts -a fconv_self_att_wp --lr 0.25 --optimizer nag --clip-norm 0.1 --max-tokens 1500 --lr-scheduler reduce_lr_on_plateau --decoder-attention True --encoder-attention False --criterion label_smoothed_cross_entropy --weight-decay .0000001 --label-smoothing 0 --source-lang wp_source --target-lang wp_target --gated-attention True --self-attention True --project-input True --pretrained False
# Train a fusion model:
# add the arguments: --pretrained True --pretrained-checkpoint path/to/checkpoint
# Generate:
# Note: to load the pretrained model at generation time, you need to pass in a model-override argument to communicate to the fusion model at generation time where you have placed the pretrained checkpoint. By default, it will load the exact path of the fusion model's pretrained model from training time. You should use model-override if you have moved the pretrained model (or are using our provided models). If you are generating from a non-fusion model, the model-override argument is not necessary.
fairseq-generate data-bin/writingPrompts --path /path/to/trained/model/checkpoint_best.pt --batch-size 32 --beam 1 --sampling --sampling-topk 10 --temperature 0.8 --nbest 1 --model-overrides "{'pretrained_checkpoint':'/path/to/pretrained/model/checkpoint'}"
```
## Citation
```bibtex
@inproceedings{fan2018hierarchical,
title = {Hierarchical Neural Story Generation},
author = {Fan, Angela and Lewis, Mike and Dauphin, Yann},
booktitle = {Conference of the Association for Computational Linguistics (ACL)},
year = 2018,
}
```
|
COCO-LM/fairseq/examples/stories/README.md/0
|
{
"file_path": "COCO-LM/fairseq/examples/stories/README.md",
"repo_id": "COCO-LM",
"token_count": 1306
}
| 185 |
# Unsupervised Quality Estimation for Neural Machine Translation (Fomicheva et al., 2020)
This page includes instructions for reproducing results from the paper [Unsupervised Quality Estimation for Neural
Machine Translation (Fomicheva et al., 2020)](https://arxiv.org/abs/2005.10608)
## Requirements:
* mosesdecoder: https://github.com/moses-smt/mosesdecoder
* subword-nmt: https://github.com/rsennrich/subword-nmt
* flores: https://github.com/facebookresearch/flores
## Download Models and Test Data
Download translation models and test data from [MLQE dataset repository](https://github.com/facebookresearch/mlqe).
## Set up:
Given a testset consisting of source sentences and reference translations:
* `SRC_LANG`: source language
* `TGT_LANG`: target language
* `INPUT`: input prefix, such that the file `$INPUT.$SRC_LANG` contains source sentences and `$INPUT.$TGT_LANG`
contains the reference sentences
* `OUTPUT_DIR`: output path to store results
* `MOSES_DECODER`: path to mosesdecoder installation
* `BPE_ROOT`: path to subword-nmt installation
* `BPE`: path to BPE model
* `MODEL_DIR`: directory containing the NMT model `.pt` file as well as the source and target vocabularies.
* `TMP`: directory for intermediate temporary files
* `GPU`: if translating with GPU, id of the GPU to use for inference
* `DROPOUT_N`: number of stochastic forward passes
`$DROPOUT_N` is set to 30 in the experiments reported in the paper. However, we observed that increasing it beyond 10
does not bring substantial improvements.
## Translate the data using standard decoding
Preprocess the input data:
```
for LANG in $SRC_LANG $TGT_LANG; do
perl $MOSES_DECODER/scripts/tokenizer/tokenizer.perl -threads 80 -a -l $LANG < $INPUT.$LANG > $TMP/preprocessed.tok.$LANG
python $BPE_ROOT/apply_bpe.py -c ${BPE} < $TMP/preprocessed.tok.$LANG > $TMP/preprocessed.tok.bpe.$LANG
done
```
Binarize the data for faster translation:
```
fairseq-preprocess --srcdict $MODEL_DIR/dict.$SRC_LANG.txt --tgtdict $MODEL_DIR/dict.$TGT_LANG.txt \
--source-lang ${SRC_LANG} --target-lang ${TGT_LANG} --testpref $TMP/preprocessed.tok.bpe --destdir $TMP/bin --workers 4
```
Translate
```
CUDA_VISIBLE_DEVICES=$GPU fairseq-generate $TMP/bin --path ${MODEL_DIR}/${SRC_LANG}-${TGT_LANG}.pt --beam 5 \
--source-lang $SRC_LANG --target-lang $TGT_LANG --no-progress-bar --unkpen 5 > $TMP/fairseq.out
grep ^H $TMP/fairseq.out | cut -d- -f2- | sort -n | cut -f3- > $TMP/mt.out
```
Post-process
```
sed -r 's/(@@ )| (@@ ?$)//g' < $TMP/mt.out | perl $MOSES_DECODER/scripts/tokenizer/detokenizer.perl \
-l $TGT_LANG > $OUTPUT_DIR/mt.out
```
## Produce uncertainty estimates
### Scoring
Make temporary files to store the translations repeated N times.
```
python ${SCRIPTS}/scripts/uncertainty/repeat_lines.py -i $TMP/preprocessed.tok.bpe.$SRC_LANG -n $DROPOUT_N \
-o $TMP/repeated.$SRC_LANG
python ${SCRIPTS}/scripts/uncertainty/repeat_lines.py -i $TMP/mt.out -n $DROPOUT_N -o $TMP/repeated.$TGT_LANG
fairseq-preprocess --srcdict ${MODEL_DIR}/dict.${SRC_LANG}.txt $TGT_DIC --source-lang ${SRC_LANG} \
--target-lang ${TGT_LANG} --testpref ${TMP}/repeated --destdir ${TMP}/bin-repeated
```
Produce model scores for the generated translations using `--retain-dropout` option to apply dropout at inference time:
```
CUDA_VISIBLE_DEVICES=${GPU} fairseq-generate ${TMP}/bin-repeated --path ${MODEL_DIR}/${LP}.pt --beam 5 \
--source-lang $SRC_LANG --target-lang $TGT_LANG --no-progress-bar --unkpen 5 --score-reference --retain-dropout \
--retain-dropout-modules '["TransformerModel","TransformerEncoder","TransformerDecoder","TransformerEncoderLayer","TransformerDecoderLayer"]' \
--seed 46 > $TMP/dropout.scoring.out
grep ^H $TMP/dropout.scoring.out | cut -d- -f2- | sort -n | cut -f2 > $TMP/dropout.scores
```
Use `--retain-dropout-modules` to specify the modules. By default, dropout is applied in the same places
as for training.
Compute the mean of the resulting output distribution:
```
python $SCRIPTS/scripts/uncertainty/aggregate_scores.py -i $TMP/dropout.scores -o $OUTPUT_DIR/dropout.scores.mean \
-n $DROPOUT_N
```
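Conceptually, this step reduces the `$DROPOUT_N` repeated scores of each sentence to their mean. The sketch below illustrates the idea; it is not the actual `aggregate_scores.py` and assumes the repeated scores of a sentence are stored on consecutive lines, as produced above:
```python
import numpy as np

def mean_over_passes(scores, n_passes):
    """Average the N stochastic forward-pass scores belonging to each sentence."""
    scores = np.asarray(scores, dtype=float)
    assert len(scores) % n_passes == 0, "expect N consecutive scores per sentence"
    return scores.reshape(-1, n_passes).mean(axis=1)

# mean_over_passes([0.1, 0.3, 0.2, 0.4, 0.6, 0.5], n_passes=3) -> array([0.2, 0.5])
```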
### Generation
Produce multiple translation hypotheses for the same source using `--retain-dropout` option:
```
CUDA_VISIBLE_DEVICES=${GPU} fairseq-generate ${TMP}/bin-repeated --path ${MODEL_DIR}/${LP}.pt \
--beam 5 --source-lang $SRC_LANG --target-lang $TGT_LANG --no-progress-bar --retain-dropout \
--unkpen 5 --retain-dropout-modules TransformerModel TransformerEncoder TransformerDecoder \
TransformerEncoderLayer TransformerDecoderLayer --seed 46 > $TMP/dropout.generation.out
grep ^H $TMP/dropout.generation.out | cut -d- -f2- | sort -n | cut -f3- > $TMP/dropout.hypotheses_
sed -r 's/(@@ )| (@@ ?$)//g' < $TMP/dropout.hypotheses_ | perl $MOSES_DECODER/scripts/tokenizer/detokenizer.perl \
-l $TGT_LANG > $TMP/dropout.hypotheses
```
Compute similarity between multiple hypotheses corresponding to the same source sentence using Meteor
evaluation metric:
```
python meteor.py -i $TMP/dropout.hypotheses -m <path_to_meteor_installation> -n $DROPOUT_N -o \
$OUTPUT_DIR/dropout.gen.sim.meteor
```
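Conceptually, the uncertainty estimate for one source sentence is the average pairwise similarity among its `$DROPOUT_N` hypotheses. The sketch below illustrates that aggregation with a caller-supplied sentence-level metric (Meteor itself requires an external installation, so it is left as a parameter here):
```python
from itertools import combinations

def average_pairwise_similarity(hypotheses, similarity):
    """Mean pairwise similarity among the stochastic hypotheses of one source sentence."""
    pairs = list(combinations(hypotheses, 2))
    if not pairs:  # fewer than two hypotheses: nothing to compare
        return 0.0
    return sum(similarity(a, b) for a, b in pairs) / len(pairs)
```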
|
COCO-LM/fairseq/examples/unsupervised_quality_estimation/README.md/0
|
{
"file_path": "COCO-LM/fairseq/examples/unsupervised_quality_estimation/README.md",
"repo_id": "COCO-LM",
"token_count": 1846
}
| 186 |
# @package _group_
common:
fp16: true
log_format: json
log_interval: 200
checkpoint:
save_interval_updates: 25000
keep_interval_updates: 1
no_epoch_checkpoints: true
task:
_name: audio_pretraining
data: ???
max_sample_size: 320000
min_sample_size: 32000
normalize: true
dataset:
num_workers: 6
max_tokens: 1200000
skip_invalid_size_inputs_valid_test: true
distributed_training:
distributed_world_size: 128
ddp_backend: legacy_ddp
criterion:
_name: wav2vec
infonce: true
log_keys: ["prob_perplexity","code_perplexity","temp"]
loss_weights: [0.1, 0]
optimization:
max_update: 1000000
lr: [0.005]
optimizer:
_name: adam
adam_betas: (0.9,0.98)
adam_eps: 1e-06
weight_decay: 0.01
lr_scheduler:
_name: polynomial_decay
warmup_updates: 32000
model:
_name: wav2vec2
quantize_targets: true
extractor_mode: layer_norm
layer_norm_first: true
final_dim: 768
latent_temp: [2.0,0.1,0.999995]
encoder_layerdrop: 0.00
dropout_input: 0.0
dropout_features: 0.0
dropout: 0.0
attention_dropout: 0.0
conv_bias: true
encoder_layers: 24
encoder_embed_dim: 1024
encoder_ffn_embed_dim: 4096
encoder_attention_heads: 16
feature_grad_mult: 1.0
|
COCO-LM/fairseq/examples/wav2vec/config/pretraining/wav2vec2_large_librivox.yaml/0
|
{
"file_path": "COCO-LM/fairseq/examples/wav2vec/config/pretraining/wav2vec2_large_librivox.yaml",
"repo_id": "COCO-LM",
"token_count": 520
}
| 187 |
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
import os
from collections import Counter
import torch
from fairseq.file_io import PathManager
from fairseq.tokenizer import tokenize_line
from typing import List, Dict
def safe_readline(f):
pos = f.tell()
while True:
try:
return f.readline()
except UnicodeDecodeError:
pos -= 1
f.seek(pos) # search where this character begins
class Binarizer:
@staticmethod
def binarize(
filename,
dict,
consumer,
tokenize=tokenize_line,
append_eos=True,
reverse_order=False,
offset=0,
end=-1,
already_numberized=False,
) -> Dict[str, int]:
nseq, ntok = 0, 0
replaced = Counter()
def replaced_consumer(word, idx):
if idx == dict.unk_index and word != dict.unk_word:
replaced.update([word])
with open(PathManager.get_local_path(filename), "r", encoding="utf-8") as f:
f.seek(offset)
# next(f) breaks f.tell(), hence readline() must be used
line = safe_readline(f)
while line:
# f.tell() does not always give the byte position in the file
# sometimes it skips to a very large number
# it is unlikely that through a normal read we go from
# end bytes to end + 2**32 bytes (4 GB) and this makes it unlikely
# that the procedure breaks due to the non-deterministic behavior of
# f.tell()
if end > 0 and f.tell() > end and f.tell() < end + 2 ** 32:
break
if already_numberized:
id_strings = line.strip().split()
id_list = [int(id_string) for id_string in id_strings]
if reverse_order:
id_list.reverse()
if append_eos:
id_list.append(dict.eos())
ids = torch.IntTensor(id_list)
else:
ids = dict.encode_line(
line=line,
line_tokenizer=tokenize,
add_if_not_exist=False,
consumer=replaced_consumer,
append_eos=append_eos,
reverse_order=reverse_order,
)
nseq += 1
ntok += len(ids)
consumer(ids)
line = f.readline()
return {
"nseq": nseq,
"nunk": sum(replaced.values()),
"ntok": ntok,
"replaced": replaced,
}
@staticmethod
def binarize_alignments(
filename, alignment_parser, consumer, offset=0, end=-1
) -> Dict[str, int]:
nseq = 0
with open(PathManager.get_local_path(filename), "r") as f:
f.seek(offset)
line = safe_readline(f)
while line:
if end > 0 and f.tell() > end:
break
ids = alignment_parser(line)
nseq += 1
consumer(ids)
line = f.readline()
return {"nseq": nseq}
@staticmethod
def find_offsets(filename, num_chunks) -> List[int]:
with open(PathManager.get_local_path(filename), "r", encoding="utf-8") as f:
size = os.fstat(f.fileno()).st_size
chunk_size = size // num_chunks
offsets = [0 for _ in range(num_chunks + 1)]
for i in range(1, num_chunks):
f.seek(chunk_size * i)
safe_readline(f)
offsets[i] = f.tell()
return offsets
|
COCO-LM/fairseq/fairseq/binarizer.py/0
|
{
"file_path": "COCO-LM/fairseq/fairseq/binarizer.py",
"repo_id": "COCO-LM",
"token_count": 2037
}
| 188 |
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
import math
from fairseq import metrics, utils
from fairseq.criterions import register_criterion
from .label_smoothed_cross_entropy import LabelSmoothedCrossEntropyCriterion
@register_criterion("label_smoothed_cross_entropy_with_alignment")
class LabelSmoothedCrossEntropyCriterionWithAlignment(
LabelSmoothedCrossEntropyCriterion
):
def __init__(self, task, sentence_avg, label_smoothing, alignment_lambda):
super().__init__(task, sentence_avg, label_smoothing)
self.alignment_lambda = alignment_lambda
@staticmethod
def add_args(parser):
"""Add criterion-specific arguments to the parser."""
LabelSmoothedCrossEntropyCriterion.add_args(parser)
parser.add_argument(
"--alignment-lambda",
default=0.05,
type=float,
metavar="D",
help="weight for the alignment loss",
)
def forward(self, model, sample, reduce=True):
"""Compute the loss for the given sample.
Returns a tuple with three elements:
1) the loss
2) the sample size, which is used as the denominator for the gradient
3) logging outputs to display while training
"""
net_output = model(**sample["net_input"])
loss, nll_loss = self.compute_loss(model, net_output, sample, reduce=reduce)
sample_size = (
sample["target"].size(0) if self.sentence_avg else sample["ntokens"]
)
logging_output = {
"loss": utils.item(loss.data) if reduce else loss.data,
"nll_loss": utils.item(nll_loss.data) if reduce else nll_loss.data,
"ntokens": sample["ntokens"],
"nsentences": sample["target"].size(0),
"sample_size": sample_size,
}
alignment_loss = None
# Compute alignment loss only for training set and non dummy batches.
if "alignments" in sample and sample["alignments"] is not None:
alignment_loss = self.compute_alignment_loss(sample, net_output)
if alignment_loss is not None:
logging_output["alignment_loss"] = utils.item(alignment_loss.data)
loss += self.alignment_lambda * alignment_loss
return loss, sample_size, logging_output
def compute_alignment_loss(self, sample, net_output):
attn_prob = net_output[1]["attn"][0]
bsz, tgt_sz, src_sz = attn_prob.shape
attn = attn_prob.view(bsz * tgt_sz, src_sz)
align = sample["alignments"]
align_weights = sample["align_weights"].float()
if len(align) > 0:
# Alignment loss computation. align (shape [:, 2]) contains the src-tgt index pairs corresponding to
# the alignments. align_weights (shape [:]) contains the 1 / frequency of a tgt index for normalizing.
loss = -(
(attn[align[:, 1][:, None], align[:, 0][:, None]]).log()
* align_weights[:, None]
).sum()
else:
return None
return loss
@staticmethod
def reduce_metrics(logging_outputs) -> None:
"""Aggregate logging outputs from data parallel training."""
loss_sum = utils.item(sum(log.get("loss", 0) for log in logging_outputs))
nll_loss_sum = utils.item(
sum(log.get("nll_loss", 0) for log in logging_outputs)
)
alignment_loss_sum = utils.item(
sum(log.get("alignment_loss", 0) for log in logging_outputs)
)
ntokens = utils.item(sum(log.get("ntokens", 0) for log in logging_outputs))
sample_size = utils.item(
sum(log.get("sample_size", 0) for log in logging_outputs)
)
metrics.log_scalar(
"loss", loss_sum / sample_size / math.log(2), sample_size, round=3
)
metrics.log_scalar(
"nll_loss", nll_loss_sum / ntokens / math.log(2), ntokens, round=3
)
metrics.log_scalar(
"alignment_loss",
alignment_loss_sum / sample_size / math.log(2),
sample_size,
round=3,
)
metrics.log_derived(
"ppl", lambda meters: utils.get_perplexity(meters["nll_loss"].avg)
)
@staticmethod
def logging_outputs_can_be_summed() -> bool:
"""
Whether the logging outputs returned by `forward` can be summed
across workers prior to calling `reduce_metrics`. Setting this
to True will improves distributed training speed.
"""
return True
|
COCO-LM/fairseq/fairseq/criterions/label_smoothed_cross_entropy_with_alignment.py/0
|
{
"file_path": "COCO-LM/fairseq/fairseq/criterions/label_smoothed_cross_entropy_with_alignment.py",
"repo_id": "COCO-LM",
"token_count": 2042
}
| 189 |
import math
import numbers
from typing import Optional
import numpy as np
from fairseq.data.audio.feature_transforms import (
AudioFeatureTransform,
register_audio_feature_transform,
)
@register_audio_feature_transform("specaugment")
class SpecAugmentTransform(AudioFeatureTransform):
"""SpecAugment (https://arxiv.org/abs/1904.08779)"""
@classmethod
def from_config_dict(cls, config=None):
_config = {} if config is None else config
return SpecAugmentTransform(
_config.get("time_warp_W", 0),
_config.get("freq_mask_N", 0),
_config.get("freq_mask_F", 0),
_config.get("time_mask_N", 0),
_config.get("time_mask_T", 0),
_config.get("time_mask_p", 0.0),
_config.get("mask_value", None),
)
def __init__(
self,
time_warp_w: int = 0,
freq_mask_n: int = 0,
freq_mask_f: int = 0,
time_mask_n: int = 0,
time_mask_t: int = 0,
time_mask_p: float = 0.0,
mask_value: Optional[float] = 0.0,
):
# Sanity checks
assert mask_value is None or isinstance(
mask_value, numbers.Number
), f"mask_value (type: {type(mask_value)}) must be None or a number"
if freq_mask_n > 0:
assert freq_mask_f > 0, (
f"freq_mask_F ({freq_mask_f}) "
f"must be larger than 0 when doing freq masking."
)
if time_mask_n > 0:
assert time_mask_t > 0, (
f"time_mask_T ({time_mask_t}) must be larger than 0 when "
f"doing time masking."
)
self.time_warp_w = time_warp_w
self.freq_mask_n = freq_mask_n
self.freq_mask_f = freq_mask_f
self.time_mask_n = time_mask_n
self.time_mask_t = time_mask_t
self.time_mask_p = time_mask_p
self.mask_value = mask_value
def __repr__(self):
return (
self.__class__.__name__
+ "("
+ ", ".join(
[
f"time_warp_w={self.time_warp_w}",
f"freq_mask_n={self.freq_mask_n}",
f"freq_mask_f={self.freq_mask_f}",
f"time_mask_n={self.time_mask_n}",
f"time_mask_t={self.time_mask_t}",
f"time_mask_p={self.time_mask_p}",
]
)
+ ")"
)
def __call__(self, spectrogram):
assert len(spectrogram.shape) == 2, "spectrogram must be a 2-D tensor."
distorted = spectrogram.copy() # make a copy of input spectrogram.
num_frames = spectrogram.shape[0] # or 'tau' in the paper.
num_freqs = spectrogram.shape[1] # or 'miu' in the paper.
mask_value = self.mask_value
if mask_value is None: # if no value was specified, use local mean.
mask_value = spectrogram.mean()
if num_frames == 0:
return spectrogram
if num_freqs < self.freq_mask_f:
return spectrogram
if self.time_warp_w > 0:
if 2 * self.time_warp_w < num_frames:
import cv2
w0 = np.random.randint(self.time_warp_w, num_frames - self.time_warp_w)
w = np.random.randint(-self.time_warp_w + 1, self.time_warp_w)
upper, lower = distorted[:w0, :], distorted[w0:, :]
upper = cv2.resize(
upper, dsize=(num_freqs, w0 + w), interpolation=cv2.INTER_LINEAR
)
lower = cv2.resize(
lower,
dsize=(num_freqs, num_frames - w0 - w),
interpolation=cv2.INTER_LINEAR,
)
distorted = np.concatenate((upper, lower), axis=0)
for _i in range(self.freq_mask_n):
f = np.random.randint(0, self.freq_mask_f)
f0 = np.random.randint(0, num_freqs - f)
if f != 0:
distorted[:, f0 : f0 + f] = mask_value
max_time_mask_t = min(
self.time_mask_t, math.floor(num_frames * self.time_mask_p)
)
if max_time_mask_t < 1:
return distorted
for _i in range(self.time_mask_n):
t = np.random.randint(0, max_time_mask_t)
t0 = np.random.randint(0, num_frames - t)
if t != 0:
distorted[t0 : t0 + t, :] = mask_value
return distorted
|
COCO-LM/fairseq/fairseq/data/audio/feature_transforms/specaugment.py/0
|
{
"file_path": "COCO-LM/fairseq/fairseq/data/audio/feature_transforms/specaugment.py",
"repo_id": "COCO-LM",
"token_count": 2426
}
| 190 |
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
from dataclasses import dataclass, field
from fairseq import file_utils
from fairseq.data.encoders import register_bpe
from fairseq.data.encoders.byte_utils import (
SPACE,
SPACE_ESCAPE,
byte_encode,
smart_byte_decode,
)
from fairseq.dataclass import FairseqDataclass
@dataclass
class ByteBpeConfig(FairseqDataclass):
sentencepiece_model_path: str = field(
default="???", metadata={"help": "path to sentencepiece model"}
)
@register_bpe("byte_bpe", dataclass=ByteBpeConfig)
class ByteBPE(object):
def __init__(self, cfg):
vocab = file_utils.cached_path(cfg.sentencepiece_model_path)
try:
import sentencepiece as spm
self.sp = spm.SentencePieceProcessor()
self.sp.Load(vocab)
except ImportError:
raise ImportError(
"Please install sentencepiece with: pip install sentencepiece"
)
def encode(self, x: str) -> str:
byte_encoded = byte_encode(x)
return SPACE.join(self.sp.EncodeAsPieces(byte_encoded))
@staticmethod
def decode(x: str) -> str:
unescaped = x.replace(SPACE, "").replace(SPACE_ESCAPE, SPACE)
return smart_byte_decode(unescaped)
|
COCO-LM/fairseq/fairseq/data/encoders/byte_bpe.py/0
|
{
"file_path": "COCO-LM/fairseq/fairseq/data/encoders/byte_bpe.py",
"repo_id": "COCO-LM",
"token_count": 566
}
| 191 |
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
import os
import subprocess
import threading
from pathlib import Path
import numpy as np
import torch
def fasta_file_path(prefix_path):
return prefix_path + ".fasta"
class FastaDataset(torch.utils.data.Dataset):
"""
For loading protein sequence datasets in the common FASTA data format
"""
def __init__(self, path: str, cache_indices=False):
self.fn = fasta_file_path(path)
self.threadlocal = threading.local()
self.cache = Path(f"{path}.fasta.idx.npy")
if cache_indices:
if self.cache.exists():
self.offsets, self.sizes = np.load(self.cache)
else:
self.offsets, self.sizes = self._build_index(path)
np.save(self.cache, np.stack([self.offsets, self.sizes]))
else:
self.offsets, self.sizes = self._build_index(path)
def _get_file(self):
if not hasattr(self.threadlocal, "f"):
self.threadlocal.f = open(self.fn, "r")
return self.threadlocal.f
def __getitem__(self, idx):
f = self._get_file()
f.seek(self.offsets[idx])
desc = f.readline().strip()
line = f.readline()
seq = ""
while line != "" and line[0] != ">":
seq += line.strip()
line = f.readline()
return desc, seq
def __len__(self):
return self.offsets.size
def _build_index(self, path: str):
# Use grep and awk to get 100M/s on local SSD.
# Should process your enormous 100G fasta in ~10 min single core...
path = fasta_file_path(path)
bytes_offsets = subprocess.check_output(
f"cat {path} | tqdm --bytes --total $(wc -c < {path})"
"| grep --byte-offset '^>' -o | cut -d: -f1",
shell=True,
)
fasta_lengths = subprocess.check_output(
f"cat {path} | tqdm --bytes --total $(wc -c < {path})"
"| awk '/^>/ {print \"\";next;} { printf(\"%s\",$0);}' | tail -n+2 | awk '{print length($1)}'",
shell=True,
)
bytes_np = np.fromstring(bytes_offsets, dtype=np.int64, sep=" ")
sizes_np = np.fromstring(fasta_lengths, dtype=np.int64, sep=" ")
return bytes_np, sizes_np
def __setstate__(self, state):
self.__dict__ = state
self.threadlocal = threading.local()
def __getstate__(self):
d = {}
for i, v in self.__dict__.items():
if i != "threadlocal":
d[i] = v
return d
def __del__(self):
if hasattr(self.threadlocal, "f"):
self.threadlocal.f.close()
del self.threadlocal.f
@staticmethod
def exists(path):
return os.path.exists(fasta_file_path(path))
class EncodedFastaDataset(FastaDataset):
"""
The FastaDataset returns raw sequences - this allows us to return
indices with a dictionary instead.
"""
def __init__(self, path, dictionary):
super().__init__(path, cache_indices=True)
self.dictionary = dictionary
def __getitem__(self, idx):
desc, seq = super().__getitem__(idx)
return self.dictionary.encode_line(seq, line_tokenizer=list).long()
|
COCO-LM/fairseq/fairseq/data/fasta_dataset.py/0
|
{
"file_path": "COCO-LM/fairseq/fairseq/data/fasta_dataset.py",
"repo_id": "COCO-LM",
"token_count": 1549
}
| 192 |
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
from . import BaseWrapperDataset
class ReplaceDataset(BaseWrapperDataset):
"""Replaces tokens found in the dataset by a specified replacement token
Args:
dataset (~torch.utils.data.Dataset): dataset to replace tokens in
replace_map(Dictionary[int,int]): map of token to replace -> replacement token
offsets (List[int]): do not replace tokens before (from left if pos, right if neg) this offset. should be
as many as the number of objects returned by the underlying dataset __getitem__ method.
"""
def __init__(self, dataset, replace_map, offsets):
super().__init__(dataset)
assert len(replace_map) > 0
self.replace_map = replace_map
self.offsets = offsets
def __getitem__(self, index):
item = self.dataset[index]
is_tuple = isinstance(item, tuple)
srcs = item if is_tuple else [item]
for offset, src in zip(self.offsets, srcs):
for k, v in self.replace_map.items():
src_off = src[offset:] if offset >= 0 else src[:offset]
src_off.masked_fill_(src_off == k, v)
item = srcs if is_tuple else srcs[0]
return item
|
COCO-LM/fairseq/fairseq/data/replace_dataset.py/0
|
{
"file_path": "COCO-LM/fairseq/fairseq/data/replace_dataset.py",
"repo_id": "COCO-LM",
"token_count": 520
}
| 193 |
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
import torch
from . import FairseqDataset
class TransformEosDataset(FairseqDataset):
"""A :class:`~fairseq.data.FairseqDataset` wrapper that appends/prepends/strips EOS.
Note that the transformation is applied in :func:`collater`.
Args:
dataset (~fairseq.data.FairseqDataset): dataset to wrap
eos (int): index of the end-of-sentence symbol
append_eos_to_src (bool, optional): append EOS to the end of src
remove_eos_from_src (bool, optional): remove EOS from the end of src
append_eos_to_tgt (bool, optional): append EOS to the end of tgt
remove_eos_from_tgt (bool, optional): remove EOS from the end of tgt
"""
def __init__(
self,
dataset,
eos,
append_eos_to_src=False,
remove_eos_from_src=False,
append_eos_to_tgt=False,
remove_eos_from_tgt=False,
has_target=True,
):
if not isinstance(dataset, FairseqDataset):
raise ValueError("dataset must be an instance of FairseqDataset")
if append_eos_to_src and remove_eos_from_src:
raise ValueError("cannot combine append_eos_to_src and remove_eos_from_src")
if append_eos_to_tgt and remove_eos_from_tgt:
raise ValueError("cannot combine append_eos_to_tgt and remove_eos_from_tgt")
self.dataset = dataset
self.eos = torch.LongTensor([eos])
self.append_eos_to_src = append_eos_to_src
self.remove_eos_from_src = remove_eos_from_src
self.append_eos_to_tgt = append_eos_to_tgt
self.remove_eos_from_tgt = remove_eos_from_tgt
self.has_target = has_target
# precompute how we should adjust the reported sizes
self._src_delta = 0
self._src_delta += 1 if append_eos_to_src else 0
self._src_delta -= 1 if remove_eos_from_src else 0
self._tgt_delta = 0
self._tgt_delta += 1 if append_eos_to_tgt else 0
self._tgt_delta -= 1 if remove_eos_from_tgt else 0
self._checked_src = False
self._checked_tgt = False
def _check_src(self, src, expect_eos):
if not self._checked_src:
assert (src[-1] == self.eos[0]) == expect_eos
self._checked_src = True
def _check_tgt(self, tgt, expect_eos):
if self.has_target and not self._checked_tgt:
assert (tgt[-1] == self.eos[0]) == expect_eos
self._checked_tgt = True
def __getitem__(self, index):
return self.dataset[index]
def __len__(self):
return len(self.dataset)
def collater(self, samples):
def transform(item):
if self.append_eos_to_src:
self.eos = self.eos.to(device=item["source"].device)
self._check_src(item["source"], expect_eos=False)
item["source"] = torch.cat([item["source"], self.eos])
if self.remove_eos_from_src:
self.eos = self.eos.to(device=item["source"].device)
self._check_src(item["source"], expect_eos=True)
item["source"] = item["source"][:-1]
if self.append_eos_to_tgt:
self.eos = self.eos.to(device=item["target"].device)
self._check_tgt(item["target"], expect_eos=False)
item["target"] = torch.cat([item["target"], self.eos])
if self.remove_eos_from_tgt:
self.eos = self.eos.to(device=item["target"].device)
self._check_tgt(item["target"], expect_eos=True)
item["target"] = item["target"][:-1]
return item
samples = list(map(transform, samples))
return self.dataset.collater(samples)
def num_tokens(self, index):
return self.dataset.num_tokens(index)
def size(self, index):
if self.has_target:
src_len, tgt_len = self.dataset.size(index)
return (src_len + self._src_delta, tgt_len + self._tgt_delta)
else:
return self.dataset.size(index)
def ordered_indices(self):
# NOTE: we assume that the ordering does not change based on the
# addition or removal of eos
return self.dataset.ordered_indices()
@property
def supports_prefetch(self):
return getattr(self.dataset, "supports_prefetch", False)
def prefetch(self, indices):
return self.dataset.prefetch(indices)
|
COCO-LM/fairseq/fairseq/data/transform_eos_dataset.py/0
|
{
"file_path": "COCO-LM/fairseq/fairseq/data/transform_eos_dataset.py",
"repo_id": "COCO-LM",
"token_count": 2131
}
| 194 |
#!/usr/bin/env python3 -u
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
import argparse
import copy
import logging
import os
from typing import Any, Dict, Iterator, List
import torch
from fairseq import utils
from fairseq.data import encoders
from omegaconf import open_dict
from torch import nn
logger = logging.getLogger(__name__)
def from_pretrained(
model_name_or_path,
checkpoint_file="model.pt",
data_name_or_path=".",
archive_map=None,
**kwargs
):
from fairseq import checkpoint_utils, file_utils
if archive_map is not None:
if model_name_or_path in archive_map:
model_name_or_path = archive_map[model_name_or_path]
if data_name_or_path is not None and data_name_or_path in archive_map:
data_name_or_path = archive_map[data_name_or_path]
# allow archive_map to set default arg_overrides (e.g., tokenizer, bpe)
# for each model
if isinstance(model_name_or_path, dict):
for k, v in model_name_or_path.items():
if k == "checkpoint_file":
checkpoint_file = v
elif (
k != "path"
# only set kwargs that don't already have overrides
and k not in kwargs
):
kwargs[k] = v
model_name_or_path = model_name_or_path["path"]
model_path = file_utils.load_archive_file(model_name_or_path)
# convenience hack for loading data and BPE codes from model archive
if data_name_or_path.startswith("."):
kwargs["data"] = os.path.abspath(os.path.join(model_path, data_name_or_path))
else:
kwargs["data"] = file_utils.load_archive_file(data_name_or_path)
for file, arg in {
"code": "bpe_codes",
"bpecodes": "bpe_codes",
"sentencepiece.bpe.model": "sentencepiece_model",
"merges.txt": "bpe_merges",
"vocab.json": "bpe_vocab",
}.items():
path = os.path.join(model_path, file)
if os.path.exists(path):
kwargs[arg] = path
if "user_dir" in kwargs:
utils.import_user_module(argparse.Namespace(user_dir=kwargs["user_dir"]))
models, args, task = checkpoint_utils.load_model_ensemble_and_task(
[os.path.join(model_path, cpt) for cpt in checkpoint_file.split(os.pathsep)],
arg_overrides=kwargs,
)
return {
"args": args,
"task": task,
"models": models,
}
class GeneratorHubInterface(nn.Module):
"""
PyTorch Hub interface for generating sequences from a pre-trained
translation or language model.
"""
def __init__(self, cfg, task, models):
super().__init__()
self.cfg = cfg
self.task = task
self.models = nn.ModuleList(models)
self.src_dict = task.source_dictionary
self.tgt_dict = task.target_dictionary
# optimize model for generation
for model in self.models:
model.prepare_for_inference_(cfg)
# Load alignment dictionary for unknown word replacement
# (None if no unknown word replacement, empty if no path to align dictionary)
self.align_dict = utils.load_align_dict(cfg.generation.replace_unk)
self.tokenizer = encoders.build_tokenizer(cfg.tokenizer)
self.bpe = encoders.build_bpe(cfg.bpe)
self.max_positions = utils.resolve_max_positions(
self.task.max_positions(), *[model.max_positions() for model in models]
)
# this is useful for determining the device
self.register_buffer("_float_tensor", torch.tensor([0], dtype=torch.float))
@property
def device(self):
return self._float_tensor.device
def translate(
self, sentences: List[str], beam: int = 5, verbose: bool = False, **kwargs
) -> List[str]:
return self.sample(sentences, beam, verbose, **kwargs)
def sample(
self, sentences: List[str], beam: int = 1, verbose: bool = False, **kwargs
) -> List[str]:
if isinstance(sentences, str):
return self.sample([sentences], beam=beam, verbose=verbose, **kwargs)[0]
tokenized_sentences = [self.encode(sentence) for sentence in sentences]
batched_hypos = self.generate(tokenized_sentences, beam, verbose, **kwargs)
return [self.decode(hypos[0]["tokens"]) for hypos in batched_hypos]
def score(self, sentences: List[str], **kwargs):
if isinstance(sentences, str):
return self.score([sentences], **kwargs)[0]
# NOTE: this doesn't support translation tasks currently
tokenized_sentences = [self.encode(sentence) for sentence in sentences]
return [
hypos[0]
for hypos in self.generate(
tokenized_sentences, score_reference=True, **kwargs
)
]
def generate(
self,
tokenized_sentences: List[torch.LongTensor],
beam: int = 5,
verbose: bool = False,
skip_invalid_size_inputs=False,
inference_step_args=None,
**kwargs
) -> List[List[Dict[str, torch.Tensor]]]:
if torch.is_tensor(tokenized_sentences) and tokenized_sentences.dim() == 1:
return self.generate(
tokenized_sentences.unsqueeze(0), beam=beam, verbose=verbose, **kwargs
)[0]
# build generator using current args as well as any kwargs
gen_args = copy.deepcopy(self.cfg.generation)
with open_dict(gen_args):
gen_args.beam = beam
for k, v in kwargs.items():
setattr(gen_args, k, v)
generator = self.task.build_generator(self.models, gen_args)
inference_step_args = inference_step_args or {}
results = []
for batch in self._build_batches(tokenized_sentences, skip_invalid_size_inputs):
batch = utils.apply_to_sample(lambda t: t.to(self.device), batch)
translations = self.task.inference_step(
generator, self.models, batch, **inference_step_args
)
for id, hypos in zip(batch["id"].tolist(), translations):
results.append((id, hypos))
# sort output to match input order
outputs = [hypos for _, hypos in sorted(results, key=lambda x: x[0])]
if verbose:
def getarg(name, default):
return getattr(gen_args, name, getattr(self.cfg, name, default))
for source_tokens, target_hypotheses in zip(tokenized_sentences, outputs):
src_str_with_unk = self.string(source_tokens)
logger.info("S\t{}".format(src_str_with_unk))
for hypo in target_hypotheses:
hypo_str = self.decode(hypo["tokens"])
logger.info("H\t{}\t{}".format(hypo["score"], hypo_str))
logger.info(
"P\t{}".format(
" ".join(
map(
lambda x: "{:.4f}".format(x),
hypo["positional_scores"].tolist(),
)
)
)
)
if hypo["alignment"] is not None and getarg(
"print_alignment", False
):
logger.info(
"A\t{}".format(
" ".join(
[
"{}-{}".format(src_idx, tgt_idx)
for src_idx, tgt_idx in hypo["alignment"]
]
)
)
)
return outputs
def encode(self, sentence: str) -> torch.LongTensor:
sentence = self.tokenize(sentence)
sentence = self.apply_bpe(sentence)
return self.binarize(sentence)
def decode(self, tokens: torch.LongTensor) -> str:
sentence = self.string(tokens)
sentence = self.remove_bpe(sentence)
return self.detokenize(sentence)
def tokenize(self, sentence: str) -> str:
if self.tokenizer is not None:
sentence = self.tokenizer.encode(sentence)
return sentence
def detokenize(self, sentence: str) -> str:
if self.tokenizer is not None:
sentence = self.tokenizer.decode(sentence)
return sentence
def apply_bpe(self, sentence: str) -> str:
if self.bpe is not None:
sentence = self.bpe.encode(sentence)
return sentence
def remove_bpe(self, sentence: str) -> str:
if self.bpe is not None:
sentence = self.bpe.decode(sentence)
return sentence
def binarize(self, sentence: str) -> torch.LongTensor:
return self.src_dict.encode_line(sentence, add_if_not_exist=False).long()
def string(self, tokens: torch.LongTensor) -> str:
return self.tgt_dict.string(tokens)
def _build_batches(
self, tokens: List[List[int]], skip_invalid_size_inputs: bool
) -> Iterator[Dict[str, Any]]:
lengths = torch.LongTensor([t.numel() for t in tokens])
batch_iterator = self.task.get_batch_iterator(
dataset=self.task.build_dataset_for_inference(tokens, lengths),
max_tokens=self.cfg.dataset.max_tokens,
max_sentences=self.cfg.dataset.batch_size,
max_positions=self.max_positions,
ignore_invalid_inputs=skip_invalid_size_inputs,
disable_iterator_cache=True,
).next_epoch_itr(shuffle=False)
return batch_iterator
class BPEHubInterface(object):
"""PyTorch Hub interface for Byte-Pair Encoding (BPE)."""
def __init__(self, bpe, **kwargs):
super().__init__()
args = argparse.Namespace(bpe=bpe, **kwargs)
self.bpe = encoders.build_bpe(args)
assert self.bpe is not None
def encode(self, sentence: str) -> str:
return self.bpe.encode(sentence)
def decode(self, sentence: str) -> str:
return self.bpe.decode(sentence)
class TokenizerHubInterface(object):
"""PyTorch Hub interface for tokenization."""
def __init__(self, tokenizer, **kwargs):
super().__init__()
args = argparse.Namespace(tokenizer=tokenizer, **kwargs)
self.tokenizer = encoders.build_tokenizer(args)
assert self.tokenizer is not None
def encode(self, sentence: str) -> str:
return self.tokenizer.encode(sentence)
def decode(self, sentence: str) -> str:
return self.tokenizer.decode(sentence)
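# Hedged usage sketch (not part of the original file): load an ensemble with
# from_pretrained() and drive it through GeneratorHubInterface.  The paths,
# tokenizer and BPE choices below are placeholders, not real checkpoints.
#
#   x = from_pretrained(
#       "/path/to/model_dir",        # hypothetical local archive
#       checkpoint_file="model.pt",
#       data_name_or_path=".",
#       tokenizer="moses",
#       bpe="fastbpe",
#   )
#   hub = GeneratorHubInterface(x["args"], x["task"], x["models"])
#   hub.eval()                        # it is an nn.Module, so the usual PyTorch API applies
#   print(hub.translate("Hello world!", beam=5))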
|
COCO-LM/fairseq/fairseq/hub_utils.py/0
|
{
"file_path": "COCO-LM/fairseq/fairseq/hub_utils.py",
"repo_id": "COCO-LM",
"token_count": 5160
}
| 195 |
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
"""
RoBERTa: A Robustly Optimized BERT Pretraining Approach.
"""
import logging
import torch
import torch.nn as nn
import torch.nn.functional as F
from fairseq import utils
from fairseq.model_parallel.models.transformer import ModelParallelTransformerEncoder
from fairseq.models import register_model, register_model_architecture
from fairseq.models.roberta import (
roberta_base_architecture,
roberta_prenorm_architecture,
RobertaEncoder,
RobertaModel,
)
from fairseq.modules import LayerNorm
try:
from fairseq.model_parallel.megatron.mpu import (
copy_to_model_parallel_region,
gather_from_model_parallel_region,
ColumnParallelLinear,
VocabParallelEmbedding,
)
has_megatron_submodule = True
except (ImportError, ModuleNotFoundError):
has_megatron_submodule = False
logger = logging.getLogger(__name__)
@register_model("model_parallel_roberta")
class ModelParallelRobertaModel(RobertaModel):
def __init__(self, args, encoder):
super().__init__(args, encoder)
self.classification_heads = nn.ModuleDict()
@staticmethod
def add_args(parser):
RobertaModel.add_args(parser)
parser.add_argument(
"--no-final-layer-norm",
action="store_true",
help=(
"don't add final layernorm (only applicable when "
"--encoder-normalize-before=True"
),
)
@classmethod
def build_model(cls, args, task):
"""Build a new model instance."""
# make sure all arguments are present
base_architecture(args)
task.source_dictionary.pad_to_multiple_(args.model_parallel_size * 8)
task.target_dictionary.pad_to_multiple_(args.model_parallel_size * 8)
if not hasattr(args, "max_positions"):
args.max_positions = args.tokens_per_sample
if getattr(args, "untie_weights_roberta", False):
raise NotImplementedError(
"--untie-weights-roberta is not supported in model parallel mode"
)
encoder = ModelParallelRobertaEncoder(args, task.source_dictionary)
return cls(args, encoder)
def forward(
self,
src_tokens,
features_only=False,
return_all_hiddens=False,
classification_head_name=None,
**kwargs
):
if classification_head_name is not None:
features_only = True
x, extra = self.encoder(src_tokens, features_only, return_all_hiddens, **kwargs)
if classification_head_name is not None:
x = self.classification_heads[classification_head_name](x)
return x, extra
def register_classification_head(
self, name, num_classes=None, inner_dim=None, **kwargs
):
"""Register a classification head."""
if name in self.classification_heads:
prev_num_classes = self.classification_heads[name].out_proj.out_features
prev_inner_dim = self.classification_heads[name].dense.out_features
if num_classes != prev_num_classes or inner_dim != prev_inner_dim:
logger.warning(
're-registering head "{}" with num_classes {} (prev: {}) '
"and inner_dim {} (prev: {})".format(
name, num_classes, prev_num_classes, inner_dim, prev_inner_dim
)
)
self.classification_heads[name] = ModelParallelRobertaClassificationHead(
self.args.encoder_embed_dim,
inner_dim or self.args.encoder_embed_dim,
num_classes,
self.args.pooler_activation_fn,
self.args.pooler_dropout,
)
class ModelParallelRobertaLMHead(nn.Module):
"""Head for masked language modeling."""
def __init__(self, embed_dim, output_dim, activation_fn, weight=None):
super().__init__()
self.dense = ColumnParallelLinear(embed_dim, embed_dim, gather_output=True)
self.activation_fn = utils.get_activation_fn(activation_fn)
self.layer_norm = LayerNorm(embed_dim)
if weight is None:
weight = nn.Linear(embed_dim, output_dim, bias=False).weight
self.weight = weight
self.bias = nn.Parameter(torch.zeros(output_dim))
def forward(self, features, masked_tokens=None, **kwargs):
        # Only project the masked tokens while training;
        # this saves both memory and computation
if masked_tokens is not None:
features = features[masked_tokens, :]
x = self.dense(features)
x = self.activation_fn(x)
x = self.layer_norm(x)
x = copy_to_model_parallel_region(x)
# project back to size of vocabulary with bias
x = F.linear(x, self.weight)
x = gather_from_model_parallel_region(x).contiguous()
x = x + self.bias
return x
class ModelParallelRobertaClassificationHead(nn.Module):
"""Head for sentence-level classification tasks."""
def __init__(
self, input_dim, inner_dim, num_classes, activation_fn, pooler_dropout
):
super().__init__()
self.dense = ColumnParallelLinear(input_dim, inner_dim, gather_output=True)
self.activation_fn = utils.get_activation_fn(activation_fn)
self.dropout = nn.Dropout(p=pooler_dropout)
self.out_proj = nn.Linear(inner_dim, num_classes)
def forward(self, features, **kwargs):
x = features[:, 0, :] # take <s> token (equiv. to [CLS])
x = self.dropout(x)
x = self.dense(x)
x = self.activation_fn(x)
x = self.dropout(x)
x = self.out_proj(x)
return x
class ModelParallelRobertaEncoder(RobertaEncoder):
"""RoBERTa encoder."""
def __init__(self, args, dictionary):
super().__init__(args, dictionary)
assert not self.args.untie_weights_roberta
def build_embedding(self, vocab_size, embedding_dim, padding_idx):
return VocabParallelEmbedding(vocab_size, embedding_dim, padding_idx)
def build_encoder(self, args, dictionary, embed_tokens):
return ModelParallelTransformerEncoder(args, dictionary, embed_tokens)
def build_lm_head(self, embed_dim, output_dim, activation_fn, weight):
return ModelParallelRobertaLMHead(embed_dim, output_dim, activation_fn, weight)
@register_model_architecture("model_parallel_roberta", "model_parallel_roberta")
def base_architecture(args):
args.no_final_layer_norm = getattr(args, "no_final_layer_norm", False)
# model parallel RoBERTa defaults to "Pre-LN" formulation
roberta_prenorm_architecture(args)
# earlier versions of model parallel RoBERTa removed the final layer norm
@register_model_architecture("model_parallel_roberta", "model_parallel_roberta_v1")
def model_parallel_roberta_v1_architecture(args):
args.no_final_layer_norm = getattr(args, "no_final_layer_norm", True)
base_architecture(args)
@register_model_architecture(
"model_parallel_roberta", "model_parallel_roberta_postnorm"
)
def model_parallel_roberta_postnorm_architecture(args):
# the original BERT/RoBERTa uses the "Post-LN" formulation
roberta_base_architecture(args)
@register_model_architecture("model_parallel_roberta", "model_parallel_roberta_base")
def model_parallel_roberta_base_architecture(args):
base_architecture(args)
@register_model_architecture("model_parallel_roberta", "model_parallel_roberta_large")
def model_parallel_roberta_large_architecture(args):
args.encoder_layers = getattr(args, "encoder_layers", 24)
args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 1024)
args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 4096)
args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 16)
base_architecture(args)
|
COCO-LM/fairseq/fairseq/model_parallel/models/roberta/model.py/0
|
{
"file_path": "COCO-LM/fairseq/fairseq/model_parallel/models/roberta/model.py",
"repo_id": "COCO-LM",
"token_count": 3336
}
| 196 |
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
import logging
from typing import Dict, Optional
from fairseq.incremental_decoding_utils import with_incremental_state
from fairseq.models import FairseqDecoder
from torch import Tensor
logger = logging.getLogger(__name__)
@with_incremental_state
class FairseqIncrementalDecoder(FairseqDecoder):
"""Base class for incremental decoders.
Incremental decoding is a special mode at inference time where the Model
only receives a single timestep of input corresponding to the previous
output token (for teacher forcing) and must produce the next output
*incrementally*. Thus the model must cache any long-term state that is
needed about the sequence, e.g., hidden states, convolutional states, etc.
Compared to the standard :class:`FairseqDecoder` interface, the incremental
decoder interface allows :func:`forward` functions to take an extra keyword
argument (*incremental_state*) that can be used to cache state across
time-steps.
The :class:`FairseqIncrementalDecoder` interface also defines the
:func:`reorder_incremental_state` method, which is used during beam search
to select and reorder the incremental state based on the selection of beams.
To learn more about how incremental decoding works, refer to `this blog
<http://www.telesens.co/2019/04/21/understanding-incremental-decoding-in-fairseq/>`_.
"""
def __init__(self, dictionary):
super().__init__(dictionary)
def forward(
self, prev_output_tokens, encoder_out=None, incremental_state=None, **kwargs
):
"""
Args:
prev_output_tokens (LongTensor): shifted output tokens of shape
`(batch, tgt_len)`, for teacher forcing
encoder_out (dict, optional): output from the encoder, used for
encoder-side attention
incremental_state (dict, optional): dictionary used for storing
state during :ref:`Incremental decoding`
Returns:
tuple:
- the decoder's output of shape `(batch, tgt_len, vocab)`
- a dictionary with any model-specific outputs
"""
raise NotImplementedError
def extract_features(
self, prev_output_tokens, encoder_out=None, incremental_state=None, **kwargs
):
"""
Returns:
tuple:
- the decoder's features of shape `(batch, tgt_len, embed_dim)`
- a dictionary with any model-specific outputs
"""
raise NotImplementedError
def reorder_incremental_state(
self,
incremental_state: Dict[str, Dict[str, Optional[Tensor]]],
new_order: Tensor,
):
"""Reorder incremental state.
This will be called when the order of the input has changed from the
previous time step. A typical use case is beam search, where the input
order changes between time steps based on the selection of beams.
"""
pass
def reorder_incremental_state_scripting(
self,
incremental_state: Dict[str, Dict[str, Optional[Tensor]]],
new_order: Tensor,
):
"""Main entry point for reordering the incremental state.
Due to limitations in TorchScript, we call this function in
:class:`fairseq.sequence_generator.SequenceGenerator` instead of
calling :func:`reorder_incremental_state` directly.
"""
for module in self.modules():
if hasattr(module, "reorder_incremental_state"):
result = module.reorder_incremental_state(incremental_state, new_order)
if result is not None:
incremental_state = result
def set_beam_size(self, beam_size):
"""Sets the beam size in the decoder and all children."""
if getattr(self, "_beam_size", -1) != beam_size:
seen = set()
def apply_set_beam_size(module):
if (
module != self
and hasattr(module, "set_beam_size")
and module not in seen
):
seen.add(module)
module.set_beam_size(beam_size)
self.apply(apply_set_beam_size)
self._beam_size = beam_size
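# Hedged illustration (not part of the original file): a toy subclass showing
# the caching pattern described in the docstrings above.  The embedding and
# output projection are hypothetical; only the use of the incremental_state
# dictionary and of reorder_incremental_state mirrors the real interface, and
# get/set_incremental_state are assumed to be provided by
# @with_incremental_state.
#
#   class RunningStateDecoder(FairseqIncrementalDecoder):
#       def forward(self, prev_output_tokens, encoder_out=None,
#                   incremental_state=None, **kwargs):
#           x = self.embed_tokens(prev_output_tokens)      # hypothetical embedding
#           if incremental_state is not None:
#               x = x[:, -1:, :]                           # only the newest time step
#               cached = self.get_incremental_state(incremental_state, "state")
#               if cached is not None:
#                   x = x + cached["sum"]
#               self.set_incremental_state(incremental_state, "state", {"sum": x})
#           return self.output_projection(x), None         # hypothetical projection
#
#       def reorder_incremental_state(self, incremental_state, new_order):
#           cached = self.get_incremental_state(incremental_state, "state")
#           if cached is not None:
#               cached = {k: v.index_select(0, new_order) for k, v in cached.items()}
#               self.set_incremental_state(incremental_state, "state", cached)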
|
COCO-LM/fairseq/fairseq/models/fairseq_incremental_decoder.py/0
|
{
"file_path": "COCO-LM/fairseq/fairseq/models/fairseq_incremental_decoder.py",
"repo_id": "COCO-LM",
"token_count": 1752
}
| 197 |
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
import math
import torch
from fairseq.models.transformer import (
TransformerDecoder,
TransformerEncoder,
TransformerModel,
)
from fairseq.modules.transformer_sentence_encoder import init_bert_params
def ensemble_encoder(func):
def wrapper(self, *args, **kwargs):
if self.ensemble_models is None or len(self.ensemble_models) == 1:
return func(self, *args, **kwargs)
encoder_outs = [func(model, *args, **kwargs, return_all_hiddens=True) for model in self.ensemble_models]
_encoder_out = encoder_outs[0].copy()
def stack(key):
outs = [e[key][0] for e in encoder_outs]
return [torch.stack(outs, -1) if outs[0] is not None else None]
_encoder_out["encoder_out"] = stack("encoder_out")
_encoder_out["encoder_embedding"] = stack("encoder_embedding")
num_layers = len(_encoder_out["encoder_states"])
if num_layers > 0:
_encoder_out["encoder_states"] = [
torch.stack([e["encoder_states"][i] for e in encoder_outs], -1)
for i in range(num_layers)
]
return _encoder_out
return wrapper
def ensemble_decoder(func):
def wrapper(self, normalize=False, encoder_out=None, *args, **kwargs):
if self.ensemble_models is None or len(self.ensemble_models) == 1:
return func(
self, normalize=normalize, encoder_out=encoder_out, *args, **kwargs
)
def _replace(encoder_out, new_val):
new_encoder_out = encoder_out.copy()
new_encoder_out["encoder_out"] = [new_val]
return new_encoder_out
action_outs = [
func(
model,
normalize=normalize,
encoder_out=_replace(
encoder_out,
encoder_out["encoder_out"][0][:, :, :, i]
),
*args,
**kwargs
)
for i, model in enumerate(self.ensemble_models)
]
        if not isinstance(action_outs[0], tuple):  # model returned a single value
action_outs = [[a] for a in action_outs]
else:
action_outs = [list(a) for a in action_outs]
ensembled_outs = []
for i in range(len(action_outs[0])):
if i == 0 and normalize:
ensembled_outs += [
torch.logsumexp(
torch.stack([a[i] for a in action_outs], -1), dim=-1
)
- math.log(len(self.ensemble_models))
]
elif action_outs[0][i] is not None:
ensembled_outs += [torch.stack([a[i] for a in action_outs], -1)]
else:
ensembled_outs += [None]
if len(ensembled_outs) == 1:
return ensembled_outs[0]
return tuple(ensembled_outs)
return wrapper
class FairseqNATModel(TransformerModel):
"""
Abstract class for all nonautoregressive-based models
"""
def __init__(self, args, encoder, decoder):
super().__init__(args, encoder, decoder)
self.tgt_dict = decoder.dictionary
self.bos = decoder.dictionary.bos()
self.eos = decoder.dictionary.eos()
self.pad = decoder.dictionary.pad()
self.unk = decoder.dictionary.unk()
self.ensemble_models = None
@property
def allow_length_beam(self):
return False
@property
def allow_ensemble(self):
return True
def enable_ensemble(self, models):
self.encoder.ensemble_models = [m.encoder for m in models]
self.decoder.ensemble_models = [m.decoder for m in models]
@staticmethod
def add_args(parser):
TransformerModel.add_args(parser)
parser.add_argument(
"--apply-bert-init",
action="store_true",
help="use custom param initialization for BERT",
)
@classmethod
def build_decoder(cls, args, tgt_dict, embed_tokens):
decoder = FairseqNATDecoder(args, tgt_dict, embed_tokens)
if getattr(args, "apply_bert_init", False):
decoder.apply(init_bert_params)
return decoder
@classmethod
def build_encoder(cls, args, src_dict, embed_tokens):
encoder = FairseqNATEncoder(args, src_dict, embed_tokens)
if getattr(args, "apply_bert_init", False):
encoder.apply(init_bert_params)
return encoder
def forward_encoder(self, encoder_inputs):
return self.encoder(*encoder_inputs)
def forward_decoder(self, *args, **kwargs):
return NotImplementedError
def initialize_output_tokens(self, *args, **kwargs):
return NotImplementedError
def forward(self, *args, **kwargs):
return NotImplementedError
class FairseqNATEncoder(TransformerEncoder):
def __init__(self, args, dictionary, embed_tokens):
super().__init__(args, dictionary, embed_tokens)
self.ensemble_models = None
@ensemble_encoder
def forward(self, *args, **kwargs):
return super().forward(*args, **kwargs)
class FairseqNATDecoder(TransformerDecoder):
def __init__(self, args, dictionary, embed_tokens, no_encoder_attn=False):
super().__init__(args, dictionary, embed_tokens, no_encoder_attn)
self.ensemble_models = None
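# Hedged sketch (not part of the original file): ensembling NAT models at
# inference time.  enable_ensemble() hands every model's encoder/decoder to
# the wrappers above, which stack per-model outputs along a trailing
# dimension and log-sum-exp the normalized decoder scores.  How the
# individual models are built or loaded below is hypothetical.
#
#   models = [task.build_model(args) for _ in range(3)]   # e.g. separately trained checkpoints
#   primary = models[0]
#   if primary.allow_ensemble:
#       primary.enable_ensemble(models)
#   # calls into primary.encoder(...) / primary.decoder(...) now run the
#   # whole ensemble transparently.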
|
COCO-LM/fairseq/fairseq/models/nat/fairseq_nat_model.py/0
|
{
"file_path": "COCO-LM/fairseq/fairseq/models/nat/fairseq_nat_model.py",
"repo_id": "COCO-LM",
"token_count": 2566
}
| 198 |
#!/usr/bin/env python3
from ast import literal_eval
from typing import List, Tuple
import torch
import torch.nn as nn
import torch.nn.functional as F
from fairseq import checkpoint_utils, utils
from fairseq.data.data_utils import lengths_to_padding_mask
from fairseq.models import (
FairseqEncoder,
FairseqEncoderDecoderModel,
FairseqIncrementalDecoder,
register_model,
register_model_architecture,
)
@register_model("s2t_berard")
class BerardModel(FairseqEncoderDecoderModel):
"""Implementation of a model similar to https://arxiv.org/abs/1802.04200
Paper title: End-to-End Automatic Speech Translation of Audiobooks
An implementation is available in tensorflow at
https://github.com/eske/seq2seq
Relevant files in this implementation are the config
(https://github.com/eske/seq2seq/blob/master/config/LibriSpeech/AST.yaml)
and the model code
(https://github.com/eske/seq2seq/blob/master/translate/models.py).
The encoder and decoder try to be close to the original implementation.
The attention is an MLP as in Bahdanau et al.
(https://arxiv.org/abs/1409.0473).
There is no state initialization by averaging the encoder outputs.
"""
def __init__(self, encoder, decoder):
super().__init__(encoder, decoder)
@staticmethod
def add_args(parser):
parser.add_argument(
"--input-layers",
type=str,
metavar="EXPR",
help="List of linear layer dimensions. These "
"layers are applied to the input features and "
"are followed by tanh and possibly dropout.",
)
parser.add_argument(
"--dropout",
type=float,
metavar="D",
help="Dropout probability to use in the encoder/decoder. "
"Note that this parameters control dropout in various places, "
"there is no fine-grained control for dropout for embeddings "
"vs LSTM layers for example.",
)
parser.add_argument(
"--in-channels",
type=int,
metavar="N",
help="Number of encoder input channels. " "Typically value is 1.",
)
parser.add_argument(
"--conv-layers",
type=str,
metavar="EXPR",
help="List of conv layers " "(format: (channels, kernel, stride)).",
)
parser.add_argument(
"--num-blstm-layers",
type=int,
metavar="N",
help="Number of encoder bi-LSTM layers.",
)
parser.add_argument(
"--lstm-size", type=int, metavar="N", help="LSTM hidden size."
)
parser.add_argument(
"--decoder-embed-dim",
type=int,
metavar="N",
help="Embedding dimension of the decoder target tokens.",
)
parser.add_argument(
"--decoder-hidden-dim",
type=int,
metavar="N",
help="Decoder LSTM hidden dimension.",
)
parser.add_argument(
"--decoder-num-layers",
type=int,
metavar="N",
help="Number of decoder LSTM layers.",
)
parser.add_argument(
"--attention-dim",
type=int,
metavar="N",
help="Hidden layer dimension in MLP attention.",
)
parser.add_argument(
"--output-layer-dim",
type=int,
metavar="N",
help="Hidden layer dim for linear layer prior to output projection.",
)
parser.add_argument(
"--load-pretrained-encoder-from",
type=str,
metavar="STR",
help="model to take encoder weights from (for initialization)",
)
parser.add_argument(
"--load-pretrained-decoder-from",
type=str,
metavar="STR",
help="model to take decoder weights from (for initialization)",
)
@classmethod
def build_encoder(cls, args, task):
encoder = BerardEncoder(
input_layers=literal_eval(args.input_layers),
conv_layers=literal_eval(args.conv_layers),
in_channels=args.input_channels,
input_feat_per_channel=args.input_feat_per_channel,
num_blstm_layers=args.num_blstm_layers,
lstm_size=args.lstm_size,
dropout=args.dropout,
)
if getattr(args, "load_pretrained_encoder_from", None):
encoder = checkpoint_utils.load_pretrained_component_from_model(
component=encoder, checkpoint=args.load_pretrained_encoder_from
)
return encoder
@classmethod
def build_decoder(cls, args, task):
decoder = LSTMDecoder(
dictionary=task.target_dictionary,
embed_dim=args.decoder_embed_dim,
num_layers=args.decoder_num_layers,
hidden_size=args.decoder_hidden_dim,
dropout=args.dropout,
encoder_output_dim=2 * args.lstm_size, # bidirectional
attention_dim=args.attention_dim,
output_layer_dim=args.output_layer_dim,
)
if getattr(args, "load_pretrained_decoder_from", None):
decoder = checkpoint_utils.load_pretrained_component_from_model(
component=decoder, checkpoint=args.load_pretrained_decoder_from
)
return decoder
@classmethod
def build_model(cls, args, task):
"""Build a new model instance."""
encoder = cls.build_encoder(args, task)
decoder = cls.build_decoder(args, task)
return cls(encoder, decoder)
def get_normalized_probs(self, net_output, log_probs, sample=None):
# net_output['encoder_out'] is a (B, T, D) tensor
lprobs = super().get_normalized_probs(net_output, log_probs, sample)
# lprobs is a (B, T, D) tensor
lprobs.batch_first = True
return lprobs
class BerardEncoder(FairseqEncoder):
def __init__(
self,
input_layers: List[int],
conv_layers: List[Tuple[int]],
in_channels: int,
input_feat_per_channel: int,
num_blstm_layers: int,
lstm_size: int,
dropout: float,
):
"""
Args:
input_layers: list of linear layer dimensions. These layers are
applied to the input features and are followed by tanh and
possibly dropout.
conv_layers: list of conv2d layer configurations. A configuration is
a tuple (out_channels, conv_kernel_size, stride).
in_channels: number of input channels.
input_feat_per_channel: number of input features per channel. These
are speech features, typically 40 or 80.
num_blstm_layers: number of bidirectional LSTM layers.
            lstm_size: size of the LSTM hidden (and cell) states.
dropout: dropout probability. Dropout can be applied after the
linear layers and LSTM layers but not to the convolutional
layers.
"""
super().__init__(None)
self.input_layers = nn.ModuleList()
in_features = input_feat_per_channel
for out_features in input_layers:
if dropout > 0:
self.input_layers.append(
nn.Sequential(
nn.Linear(in_features, out_features), nn.Dropout(p=dropout)
)
)
else:
self.input_layers.append(nn.Linear(in_features, out_features))
in_features = out_features
self.in_channels = in_channels
self.input_dim = input_feat_per_channel
self.conv_kernel_sizes_and_strides = []
self.conv_layers = nn.ModuleList()
lstm_input_dim = input_layers[-1]
for conv_layer in conv_layers:
out_channels, conv_kernel_size, conv_stride = conv_layer
self.conv_layers.append(
nn.Conv2d(
in_channels,
out_channels,
conv_kernel_size,
stride=conv_stride,
padding=conv_kernel_size // 2,
)
)
self.conv_kernel_sizes_and_strides.append((conv_kernel_size, conv_stride))
in_channels = out_channels
lstm_input_dim //= conv_stride
lstm_input_dim *= conv_layers[-1][0]
self.lstm_size = lstm_size
self.num_blstm_layers = num_blstm_layers
self.lstm = nn.LSTM(
input_size=lstm_input_dim,
hidden_size=lstm_size,
num_layers=num_blstm_layers,
dropout=dropout,
bidirectional=True,
)
self.output_dim = 2 * lstm_size # bidirectional
if dropout > 0:
self.dropout = nn.Dropout(p=dropout)
else:
self.dropout = None
def forward(self, src_tokens, src_lengths=None, **kwargs):
"""
Args
src_tokens: padded tensor (B, T, C * feat)
src_lengths: tensor of original lengths of input utterances (B,)
"""
bsz, max_seq_len, _ = src_tokens.size()
# (B, C, T, feat)
x = (
src_tokens.view(bsz, max_seq_len, self.in_channels, self.input_dim)
.transpose(1, 2)
.contiguous()
)
for input_layer in self.input_layers:
x = input_layer(x)
x = torch.tanh(x)
for conv_layer in self.conv_layers:
x = conv_layer(x)
bsz, _, output_seq_len, _ = x.size()
# (B, C, T, feat) -> (B, T, C, feat) -> (T, B, C, feat) ->
# (T, B, C * feat)
x = x.transpose(1, 2).transpose(0, 1).contiguous().view(output_seq_len, bsz, -1)
input_lengths = src_lengths.clone()
for k, s in self.conv_kernel_sizes_and_strides:
p = k // 2
input_lengths = (input_lengths.float() + 2 * p - k) / s + 1
input_lengths = input_lengths.floor().long()
packed_x = nn.utils.rnn.pack_padded_sequence(x, input_lengths)
h0 = x.new(2 * self.num_blstm_layers, bsz, self.lstm_size).zero_()
c0 = x.new(2 * self.num_blstm_layers, bsz, self.lstm_size).zero_()
packed_outs, _ = self.lstm(packed_x, (h0, c0))
# unpack outputs and apply dropout
x, output_lengths = nn.utils.rnn.pad_packed_sequence(packed_outs)
if self.dropout is not None:
x = self.dropout(x)
encoder_padding_mask = (
lengths_to_padding_mask(output_lengths).to(src_tokens.device).t()
)
return {
"encoder_out": x, # (T, B, C)
"encoder_padding_mask": encoder_padding_mask, # (T, B)
}
def reorder_encoder_out(self, encoder_out, new_order):
encoder_out["encoder_out"] = encoder_out["encoder_out"].index_select(
1, new_order
)
encoder_out["encoder_padding_mask"] = encoder_out[
"encoder_padding_mask"
].index_select(1, new_order)
return encoder_out
class MLPAttention(nn.Module):
"""The original attention from Badhanau et al. (2014)
https://arxiv.org/abs/1409.0473, based on a Multi-Layer Perceptron.
The attention score between position i in the encoder and position j in the
decoder is: alpha_ij = V_a * tanh(W_ae * enc_i + W_ad * dec_j + b_a)
"""
def __init__(self, decoder_hidden_state_dim, context_dim, attention_dim):
super().__init__()
self.context_dim = context_dim
self.attention_dim = attention_dim
# W_ae and b_a
self.encoder_proj = nn.Linear(context_dim, self.attention_dim, bias=True)
# W_ad
self.decoder_proj = nn.Linear(
decoder_hidden_state_dim, self.attention_dim, bias=False
)
# V_a
self.to_scores = nn.Linear(self.attention_dim, 1, bias=False)
def forward(self, decoder_state, source_hids, encoder_padding_mask):
"""The expected input dimensions are:
decoder_state: bsz x decoder_hidden_state_dim
source_hids: src_len x bsz x context_dim
encoder_padding_mask: src_len x bsz
"""
src_len, bsz, _ = source_hids.size()
# (src_len*bsz) x context_dim (to feed through linear)
flat_source_hids = source_hids.view(-1, self.context_dim)
# (src_len*bsz) x attention_dim
encoder_component = self.encoder_proj(flat_source_hids)
# src_len x bsz x attention_dim
encoder_component = encoder_component.view(src_len, bsz, self.attention_dim)
# 1 x bsz x attention_dim
decoder_component = self.decoder_proj(decoder_state).unsqueeze(0)
# Sum with broadcasting and apply the non linearity
# src_len x bsz x attention_dim
hidden_att = torch.tanh(
(decoder_component + encoder_component).view(-1, self.attention_dim)
)
# Project onto the reals to get attentions scores (src_len x bsz)
attn_scores = self.to_scores(hidden_att).view(src_len, bsz)
# Mask + softmax (src_len x bsz)
if encoder_padding_mask is not None:
attn_scores = (
attn_scores.float()
.masked_fill_(encoder_padding_mask, float("-inf"))
.type_as(attn_scores)
) # FP16 support: cast to float and back
# srclen x bsz
normalized_masked_attn_scores = F.softmax(attn_scores, dim=0)
# Sum weighted sources (bsz x context_dim)
attn_weighted_context = (
source_hids * normalized_masked_attn_scores.unsqueeze(2)
).sum(dim=0)
return attn_weighted_context, normalized_masked_attn_scores
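# Hedged sanity-check sketch (not part of the original file): exercise
# MLPAttention with random tensors to illustrate the shapes documented in
# forward() above.  Dimensions are arbitrary.
def _mlp_attention_shape_example():
    src_len, bsz = 7, 3
    decoder_dim, context_dim, attention_dim = 16, 32, 8
    attn = MLPAttention(decoder_dim, context_dim, attention_dim)
    decoder_state = torch.randn(bsz, decoder_dim)
    source_hids = torch.randn(src_len, bsz, context_dim)
    # all-False mask: nothing is treated as padding
    encoder_padding_mask = torch.zeros(src_len, bsz, dtype=torch.bool)
    context, scores = attn(decoder_state, source_hids, encoder_padding_mask)
    assert context.shape == (bsz, context_dim)
    assert scores.shape == (src_len, bsz)
    return context, scores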
class LSTMDecoder(FairseqIncrementalDecoder):
def __init__(
self,
dictionary,
embed_dim,
num_layers,
hidden_size,
dropout,
encoder_output_dim,
attention_dim,
output_layer_dim,
):
"""
Args:
dictionary: target text dictionary.
embed_dim: embedding dimension for target tokens.
num_layers: number of LSTM layers.
hidden_size: hidden size for LSTM layers.
dropout: dropout probability. Dropout can be applied to the
embeddings, the LSTM layers, and the context vector.
encoder_output_dim: encoder output dimension (hidden size of
encoder LSTM).
attention_dim: attention dimension for MLP attention.
output_layer_dim: size of the linear layer prior to output
projection.
"""
super().__init__(dictionary)
self.num_layers = num_layers
self.hidden_size = hidden_size
num_embeddings = len(dictionary)
padding_idx = dictionary.pad()
self.embed_tokens = nn.Embedding(num_embeddings, embed_dim, padding_idx)
if dropout > 0:
self.dropout = nn.Dropout(p=dropout)
else:
self.dropout = None
self.layers = nn.ModuleList()
for layer_id in range(num_layers):
input_size = embed_dim if layer_id == 0 else encoder_output_dim
self.layers.append(
nn.LSTMCell(input_size=input_size, hidden_size=hidden_size)
)
self.context_dim = encoder_output_dim
self.attention = MLPAttention(
decoder_hidden_state_dim=hidden_size,
context_dim=encoder_output_dim,
attention_dim=attention_dim,
)
self.deep_output_layer = nn.Linear(
hidden_size + encoder_output_dim + embed_dim, output_layer_dim
)
self.output_projection = nn.Linear(output_layer_dim, num_embeddings)
def forward(
self, prev_output_tokens, encoder_out=None, incremental_state=None, **kwargs
):
encoder_padding_mask = encoder_out["encoder_padding_mask"]
encoder_outs = encoder_out["encoder_out"]
if incremental_state is not None:
prev_output_tokens = prev_output_tokens[:, -1:]
bsz, seqlen = prev_output_tokens.size()
srclen = encoder_outs.size(0)
# embed tokens
embeddings = self.embed_tokens(prev_output_tokens)
x = embeddings
if self.dropout is not None:
x = self.dropout(x)
# B x T x C -> T x B x C
x = x.transpose(0, 1)
# initialize previous states (or get from cache during incremental
# generation)
cached_state = utils.get_incremental_state(
self, incremental_state, "cached_state"
)
if cached_state is not None:
prev_hiddens, prev_cells = cached_state
else:
prev_hiddens = [encoder_out["encoder_out"].mean(dim=0)] * self.num_layers
prev_cells = [x.new_zeros(bsz, self.hidden_size)] * self.num_layers
attn_scores = x.new_zeros(bsz, srclen)
attention_outs = []
outs = []
for j in range(seqlen):
input = x[j, :, :]
attention_out = None
for i, layer in enumerate(self.layers):
# the previous state is one layer below except for the bottom
# layer where the previous state is the state emitted by the
# top layer
hidden, cell = layer(
input,
(
prev_hiddens[(i - 1) % self.num_layers],
prev_cells[(i - 1) % self.num_layers],
),
)
if self.dropout is not None:
hidden = self.dropout(hidden)
prev_hiddens[i] = hidden
prev_cells[i] = cell
if attention_out is None:
attention_out, attn_scores = self.attention(
hidden, encoder_outs, encoder_padding_mask
)
if self.dropout is not None:
attention_out = self.dropout(attention_out)
attention_outs.append(attention_out)
input = attention_out
# collect the output of the top layer
outs.append(hidden)
# cache previous states (no-op except during incremental generation)
utils.set_incremental_state(
self, incremental_state, "cached_state", (prev_hiddens, prev_cells)
)
# collect outputs across time steps
x = torch.cat(outs, dim=0).view(seqlen, bsz, self.hidden_size)
attention_outs_concat = torch.cat(attention_outs, dim=0).view(
seqlen, bsz, self.context_dim
)
# T x B x C -> B x T x C
x = x.transpose(0, 1)
attention_outs_concat = attention_outs_concat.transpose(0, 1)
# concat LSTM output, attention output and embedding
# before output projection
x = torch.cat((x, attention_outs_concat, embeddings), dim=2)
x = self.deep_output_layer(x)
x = torch.tanh(x)
if self.dropout is not None:
x = self.dropout(x)
# project back to size of vocabulary
x = self.output_projection(x)
# to return the full attn_scores tensor, we need to fix the decoder
# to account for subsampling input frames
# return x, attn_scores
return x, None
def reorder_incremental_state(self, incremental_state, new_order):
super().reorder_incremental_state(incremental_state, new_order)
cached_state = utils.get_incremental_state(
self, incremental_state, "cached_state"
)
if cached_state is None:
return
def reorder_state(state):
if isinstance(state, list):
return [reorder_state(state_i) for state_i in state]
return state.index_select(0, new_order)
new_state = tuple(map(reorder_state, cached_state))
utils.set_incremental_state(self, incremental_state, "cached_state", new_state)
@register_model_architecture(model_name="s2t_berard", arch_name="s2t_berard")
def berard(args):
"""The original version: "End-to-End Automatic Speech Translation of
Audiobooks" (https://arxiv.org/abs/1802.04200)
"""
args.input_layers = getattr(args, "input_layers", "[256, 128]")
args.conv_layers = getattr(args, "conv_layers", "[(16, 3, 2), (16, 3, 2)]")
args.num_blstm_layers = getattr(args, "num_blstm_layers", 3)
args.lstm_size = getattr(args, "lstm_size", 256)
args.dropout = getattr(args, "dropout", 0.2)
args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 128)
args.decoder_num_layers = getattr(args, "decoder_num_layers", 2)
args.decoder_hidden_dim = getattr(args, "decoder_hidden_dim", 512)
args.attention_dim = getattr(args, "attention_dim", 512)
args.output_layer_dim = getattr(args, "output_layer_dim", 128)
args.load_pretrained_encoder_from = getattr(
args, "load_pretrained_encoder_from", None
)
args.load_pretrained_decoder_from = getattr(
args, "load_pretrained_decoder_from", None
)
@register_model_architecture(model_name="s2t_berard", arch_name="s2t_berard_256_3_3")
def berard_256_3_3(args):
"""Used in
* "Harnessing Indirect Training Data for End-to-End Automatic Speech
Translation: Tricks of the Trade" (https://arxiv.org/abs/1909.06515)
* "CoVoST: A Diverse Multilingual Speech-To-Text Translation Corpus"
(https://arxiv.org/pdf/2002.01320.pdf)
* "Self-Supervised Representations Improve End-to-End Speech Translation"
(https://arxiv.org/abs/2006.12124)
"""
args.decoder_num_layers = getattr(args, "decoder_num_layers", 3)
berard(args)
@register_model_architecture(model_name="s2t_berard", arch_name="s2t_berard_512_3_2")
def berard_512_3_2(args):
args.num_blstm_layers = getattr(args, "num_blstm_layers", 3)
args.lstm_size = getattr(args, "lstm_size", 512)
args.dropout = getattr(args, "dropout", 0.3)
args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 256)
args.decoder_num_layers = getattr(args, "decoder_num_layers", 2)
args.decoder_hidden_dim = getattr(args, "decoder_hidden_dim", 1024)
args.attention_dim = getattr(args, "attention_dim", 512)
args.output_layer_dim = getattr(args, "output_layer_dim", 256)
berard(args)
@register_model_architecture(model_name="s2t_berard", arch_name="s2t_berard_512_5_3")
def berard_512_5_3(args):
args.num_blstm_layers = getattr(args, "num_blstm_layers", 5)
args.lstm_size = getattr(args, "lstm_size", 512)
args.dropout = getattr(args, "dropout", 0.3)
args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 256)
args.decoder_num_layers = getattr(args, "decoder_num_layers", 3)
args.decoder_hidden_dim = getattr(args, "decoder_hidden_dim", 1024)
args.attention_dim = getattr(args, "attention_dim", 512)
args.output_layer_dim = getattr(args, "output_layer_dim", 256)
berard(args)
|
COCO-LM/fairseq/fairseq/models/speech_to_text/berard.py/0
|
{
"file_path": "COCO-LM/fairseq/fairseq/models/speech_to_text/berard.py",
"repo_id": "COCO-LM",
"token_count": 11158
}
| 199 |
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
"""isort:skip_file"""
from .adaptive_input import AdaptiveInput
from .adaptive_softmax import AdaptiveSoftmax
from .beamable_mm import BeamableMM
from .character_token_embedder import CharacterTokenEmbedder
from .conv_tbc import ConvTBC
from .cross_entropy import cross_entropy
from .downsampled_multihead_attention import DownsampledMultiHeadAttention
from .dynamic_convolution import DynamicConv, DynamicConv1dTBC
from .dynamic_crf_layer import DynamicCRF
from .fairseq_dropout import FairseqDropout
from .fp32_group_norm import Fp32GroupNorm
from .gelu import gelu, gelu_accurate
from .grad_multiply import GradMultiply
from .gumbel_vector_quantizer import GumbelVectorQuantizer
from .kmeans_vector_quantizer import KmeansVectorQuantizer
from .layer_drop import LayerDropModuleList
from .layer_norm import Fp32LayerNorm, LayerNorm
from .learned_positional_embedding import LearnedPositionalEmbedding
from .lightweight_convolution import LightweightConv, LightweightConv1dTBC
from .linearized_convolution import LinearizedConvolution
from .multihead_attention import SelfMultiheadAttention, MultiheadAttention
from .positional_embedding import PositionalEmbedding
from .same_pad import SamePad
from .scalar_bias import ScalarBias
from .sinusoidal_positional_embedding import SinusoidalPositionalEmbedding
from .transformer_sentence_encoder_layer import TransformerSentenceEncoderLayer
from .transformer_sentence_encoder import TransformerSentenceEncoder
from .transpose_last import TransposeLast
from .unfold import unfold1d
from .transformer_layer import TransformerDecoderLayer, TransformerEncoderLayer
from .vggblock import VGGBlock
__all__ = [
"AdaptiveInput",
"AdaptiveSoftmax",
"BeamableMM",
"CharacterTokenEmbedder",
"ConvTBC",
"cross_entropy",
"DownsampledMultiHeadAttention",
"DynamicConv1dTBC",
"DynamicConv",
"DynamicCRF",
"FairseqDropout",
"Fp32GroupNorm",
"Fp32LayerNorm",
"gelu",
"gelu_accurate",
"GradMultiply",
"GumbelVectorQuantizer",
"KmeansVectorQuantizer",
"LayerDropModuleList",
"LayerNorm",
"LearnedPositionalEmbedding",
"LightweightConv1dTBC",
"LightweightConv",
"LinearizedConvolution",
"SelfMultiheadAttention",
"MultiheadAttention",
"PositionalEmbedding",
"SamePad",
"ScalarBias",
"SinusoidalPositionalEmbedding",
"TransformerSentenceEncoderLayer",
"TransformerSentenceEncoder",
"TransformerDecoderLayer",
"TransformerEncoderLayer",
"TransposeLast",
"VGGBlock",
"unfold1d",
]
|
COCO-LM/fairseq/fairseq/modules/__init__.py/0
|
{
"file_path": "COCO-LM/fairseq/fairseq/modules/__init__.py",
"repo_id": "COCO-LM",
"token_count": 922
}
| 200 |
/**
* Copyright (c) Facebook, Inc. and its affiliates.
*
* This source code is licensed under the MIT license found in the
* LICENSE file in the root directory of this source tree.
*/
#include "dynamicconv_cuda.cuh"
#include "dynamicconv_cuda_forward.cu"
#include "dynamicconv_cuda_backward.cu"
#include "../cuda_utils.cu"
// FS is filter size and kernels are specialized for filter sizes
template<int FS, int SB, int padding_l, typename scalar_t>
__global__
void dynamicconv_forward_kernel(const scalar_t* input,
const scalar_t* weight,
int minibatch,
int sequenceLength,
int numFeatures,
int numFiltersInBlock,
int numHeads,
scalar_t* output) {
assert(blockDim.x == SB);
const int tid = threadIdx.x;
const int batchIdx = blockIdx.x;
const int featureIdx = blockIdx.y;
const int head = featureIdx / numFiltersInBlock;
const int IOOffset = batchIdx * numFeatures * sequenceLength
+ featureIdx * sequenceLength;
const scalar_t* inputFeature = &input[IOOffset];
scalar_t* outputFeature = &output[IOOffset];
scalar_t filter[FS];
__shared__ scalar_t tempInput[SB + FS];
zeroSharedMem<FS, SB, padding_l>(tempInput);
const int numIterations = divUp<int, int>(sequenceLength, SB);
for (int i = 0; i < numIterations; ++i) {
__syncthreads();
const int inputOffset = i * SB;
load_input_to_shared<FS, SB, padding_l>(inputFeature, inputOffset,
sequenceLength, i,
numIterations, false, tempInput);
__syncthreads();
if (inputOffset + tid < sequenceLength) {
#pragma unroll
for (int k = 0; k < FS; ++k) {
const int filterOffset = batchIdx * numHeads * FS * sequenceLength
+ head * FS * sequenceLength
+ k * sequenceLength
+ i * SB + tid;
filter[k] = weight[filterOffset];
}
scalar_t out = scalar_t(0.0);
#pragma unroll
for (int k = 0; k < FS; ++k) {
out += filter[k] * tempInput[tid + k];
}
outputFeature[inputOffset + tid] = out;
}
}
}
template<int FS, int SB, int padding_l, typename scalar_t>
__global__
void dynamicconv_backward_kernel(
const scalar_t* gradOutput, // B * C * T
const scalar_t* input, // B * C * T
const scalar_t* weight,
int minibatch,
int sequenceLength,
int numFeatures,
int numFiltersInBlock,
int numHeads,
scalar_t* gradWeight,
scalar_t* gradInput) { // B * H * k * T
assert(blockDim.x == SB);
// each block operates on a single batch and filter head
const int tid = threadIdx.x;
const int batchIdx = blockIdx.x;
const int headIdx = blockIdx.y;
const int chunkIdx = blockIdx.z;
const int numChunks = divUp<int, int>(sequenceLength, SB);
const int inputOffset = chunkIdx * SB;
// initialize shared memory for output gradient and input
__shared__ scalar_t tempGradOutput[SB + FS];
__shared__ scalar_t tempInput[SB + FS];
const int padding = FS - padding_l - 1;
zeroSharedMem<FS, SB, padding>(tempGradOutput);
zeroSharedMem<FS, SB, padding_l>(tempInput);
// initialize local filter and weight gradient sum arrays
scalar_t tempGradSum[FS];
scalar_t bfilter[FS];
for (int k = 0; k < FS; ++k) {
tempGradSum[k] = scalar_t(0.0);
int idxOffset = inputOffset + tid + k - padding;
if (idxOffset >= 0 && idxOffset < sequenceLength) {
int bfilterOffset = batchIdx * numHeads * FS * sequenceLength
+ headIdx * FS * sequenceLength
+ (FS - k - 1) * sequenceLength
+ idxOffset;
bfilter[k] = weight[bfilterOffset];
} else {
bfilter[k] = scalar_t(0.0);
}
}
// iterate over filter block
for (int featureIdx = 0; featureIdx < numFiltersInBlock; ++featureIdx) {
__syncthreads();
// load input and output gradient for this channel and chunk
const int IOOffset = batchIdx * numFeatures * sequenceLength
+ (headIdx * numFiltersInBlock + featureIdx) * sequenceLength;
const scalar_t* inputFeature = &input[IOOffset];
const scalar_t* gradOutputFeature = &gradOutput[IOOffset];
scalar_t* gradInputFeature = &gradInput[IOOffset];
load_input_to_shared<FS, SB, padding>(gradOutputFeature, inputOffset,
sequenceLength, chunkIdx,
numChunks, true, tempGradOutput);
load_input_to_shared<FS, SB, padding_l>(inputFeature, inputOffset,
sequenceLength, chunkIdx,
numChunks, true, tempInput);
__syncthreads();
// sum input and weight gradients
scalar_t out = scalar_t(0.0);
#pragma unroll
for (int k = 0; k < FS; ++k) {
tempGradSum[k] += tempInput[tid + k] * tempGradOutput[tid + padding];
out += bfilter[k] * tempGradOutput[tid + k];
}
if (inputOffset + tid < sequenceLength) {
gradInputFeature[inputOffset + tid] = out;
}
}
const int gradOffset = batchIdx * numHeads * FS * sequenceLength
+ headIdx * FS * sequenceLength;
scalar_t *gradWeightFeature = &gradWeight[gradOffset];
// write weight gradient
if (inputOffset + tid < sequenceLength) {
for (int k = 0; k < FS; ++k) {
const int outputOffset = k * sequenceLength + inputOffset + tid;
gradWeightFeature[outputOffset] = tempGradSum[k];
}
}
}
|
COCO-LM/fairseq/fairseq/modules/dynamicconv_layer/dynamicconv_cuda_kernel.cu/0
|
{
"file_path": "COCO-LM/fairseq/fairseq/modules/dynamicconv_layer/dynamicconv_cuda_kernel.cu",
"repo_id": "COCO-LM",
"token_count": 2588
}
| 201 |
/**
* Copyright (c) Facebook, Inc. and its affiliates.
*
* This source code is licensed under the MIT license found in the
* LICENSE file in the root directory of this source tree.
*/
#include <ATen/ATen.h>
#include <c10/cuda/CUDAStream.h>
#include <cuda.h>
#include <cuda_runtime.h>
#include <algorithm>
#include <functional>
#include <iostream>
#include <stdexcept>
#include <utility>
#include <vector>
#include <stdlib.h>
#include <assert.h>
#define SHFL_MASK 0xffffffff
template<int FS, int SB, int padding_l, typename scalar_t>
__global__
void lightconv_forward_kernel(const scalar_t* input,
const scalar_t* filters,
int minibatch, int sequenceLength,
int numFeatures, int numFiltersInBlock,
scalar_t* output);
template<int FS, int SB, int padding_l, typename scalar_t>
__global__
void lightconv_grad_wrt_input_kernel(
const scalar_t* input,
const scalar_t* filters,
int minibatch,
int sequenceLength,
int numFeatures,
int numFiltersInBlock,
scalar_t* output);
template<int FS, int SB, int padding_l, typename scalar_t>
__global__
void lightconv_grad_wrt_weights_firstpass_short_kernel(
const scalar_t* input,
const scalar_t* gradInput,
int minibatch,
int sequenceLength,
int numFeatures,
int numFiltersInBlock,
int numHeads,
float* output);
template<int FS, int SB, typename scalar_t>
__global__
void lightconv_grad_wrt_weights_secondpass_short_kernel(
const float* input,
const int minibatch,
const int numFiltersInBlock,
scalar_t* output);
template<int FS, int SB, int padding_l, typename scalar_t>
__global__
void lightconv_grad_wrt_weights_firstpass_kernel(
const scalar_t* input,
const scalar_t* gradInput,
int minibatch,
int sequenceLength,
int numFeatures,
int numFiltersInBlock,
float* output);
template<int FS, int SB, typename scalar_t>
__global__
void lightconv_grad_wrt_weights_secondpass_kernel(
const float* input,
const int minibatch,
const int numFiltersInBlock,
scalar_t* output);
|
COCO-LM/fairseq/fairseq/modules/lightconv_layer/lightconv_cuda.cuh/0
|
{
"file_path": "COCO-LM/fairseq/fairseq/modules/lightconv_layer/lightconv_cuda.cuh",
"repo_id": "COCO-LM",
"token_count": 881
}
| 202 |
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
from .em import EM, EmptyClusterResolveError
class PQ(EM):
"""
Quantizes the layer weights W with the standard Product Quantization
technique. This learns a codebook of codewords or centroids of size
block_size from W. For further reference on using PQ to quantize
neural networks, see "And the Bit Goes Down: Revisiting the Quantization
of Neural Networks", Stock et al., ICLR 2020.
PQ is performed in two steps:
    (1) The matrix W (the weights of a fully-connected or convolutional layer)
is reshaped to (block_size, -1).
- If W is fully-connected (2D), its columns are split into
blocks of size block_size.
- If W is convolutional (4D), its filters are split along the
spatial dimension.
(2) We apply the standard EM/k-means algorithm to the resulting reshaped matrix.
Args:
- W: weight matrix to quantize of size (in_features x out_features)
- block_size: size of the blocks (subvectors)
- n_centroids: number of centroids
- n_iter: number of k-means iterations
- eps: for cluster reassignment when an empty cluster is found
        - max_tentatives: maximum number of attempts at cluster reassignment when an empty cluster is found
- verbose: print information after each iteration
Remarks:
        - block_size must be compatible with the shape of W
"""
def __init__(
self,
W,
block_size,
n_centroids=256,
n_iter=20,
eps=1e-6,
max_tentatives=30,
verbose=True,
):
self.block_size = block_size
W_reshaped = self._reshape(W)
super(PQ, self).__init__(
W_reshaped,
n_centroids=n_centroids,
n_iter=n_iter,
eps=eps,
max_tentatives=max_tentatives,
verbose=verbose,
)
def _reshape(self, W):
"""
        Reshapes the matrix W as explained in step (1).
"""
# fully connected: by convention the weight has size out_features x in_features
if len(W.size()) == 2:
self.out_features, self.in_features = W.size()
assert (
self.in_features % self.block_size == 0
), "Linear: n_blocks must be a multiple of in_features"
return (
W.reshape(self.out_features, -1, self.block_size)
.permute(2, 1, 0)
.flatten(1, 2)
)
# convolutional: we reshape along the spatial dimension
elif len(W.size()) == 4:
self.out_channels, self.in_channels, self.k_h, self.k_w = W.size()
assert (
self.in_channels * self.k_h * self.k_w
) % self.block_size == 0, (
"Conv2d: n_blocks must be a multiple of in_channels * k_h * k_w"
)
return (
W.reshape(self.out_channels, -1, self.block_size)
.permute(2, 1, 0)
.flatten(1, 2)
)
# not implemented
else:
raise NotImplementedError(W.size())
def encode(self):
"""
Performs self.n_iter EM steps.
"""
self.initialize_centroids()
for i in range(self.n_iter):
try:
self.step(i)
except EmptyClusterResolveError:
break
def decode(self):
"""
Returns the encoded full weight matrix. Must be called after
the encode function.
"""
# fully connected case
if "k_h" not in self.__dict__:
return (
self.centroids[self.assignments]
.reshape(-1, self.out_features, self.block_size)
.permute(1, 0, 2)
.flatten(1, 2)
)
# convolutional case
else:
return (
self.centroids[self.assignments]
.reshape(-1, self.out_channels, self.block_size)
.permute(1, 0, 2)
.reshape(self.out_channels, self.in_channels, self.k_h, self.k_w)
)
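# Hedged usage sketch (not part of the original file): quantize a small
# fully-connected weight matrix and reconstruct it.  The hyper-parameters are
# illustrative only and assume the EM base class runs on CPU tensors.
def _pq_roundtrip_example():
    import torch
    W = torch.randn(64, 32)  # out_features x in_features
    pq = PQ(W, block_size=8, n_centroids=16, n_iter=5, verbose=False)
    pq.encode()              # run the EM / k-means iterations
    W_hat = pq.decode()      # reconstructed (quantized) weights
    assert W_hat.shape == W.shape
    # relative reconstruction error; smaller is better
    return ((W - W_hat).norm() / W.norm()).item()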
|
COCO-LM/fairseq/fairseq/modules/quantization/pq/pq.py/0
|
{
"file_path": "COCO-LM/fairseq/fairseq/modules/quantization/pq/pq.py",
"repo_id": "COCO-LM",
"token_count": 2080
}
| 203 |
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
from fairseq.modules import TransformerSentenceEncoderLayer
from fairseq.modules.sparse_multihead_attention import SparseMultiheadAttention
class SparseTransformerSentenceEncoderLayer(TransformerSentenceEncoderLayer):
"""
    Implements a Sparse Transformer Encoder Layer (see SparseMultiheadAttention)
"""
def __init__(
self,
embedding_dim: int = 768,
ffn_embedding_dim: int = 3072,
num_attention_heads: int = 8,
dropout: float = 0.1,
attention_dropout: float = 0.1,
activation_dropout: float = 0.1,
activation_fn: str = "relu",
export: bool = False,
is_bidirectional: bool = True,
stride: int = 32,
expressivity: int = 8,
) -> None:
super().__init__(
embedding_dim,
ffn_embedding_dim,
num_attention_heads,
dropout,
attention_dropout,
activation_dropout,
activation_fn,
export,
)
self.self_attn = SparseMultiheadAttention(
self.embedding_dim,
num_attention_heads,
dropout=attention_dropout,
add_bias_kv=False,
add_zero_attn=False,
self_attention=True,
is_bidirectional=is_bidirectional,
stride=stride,
expressivity=expressivity,
)
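# Hedged usage sketch (not part of the original file): instantiating the
# sparse layer with a smaller stride/expressivity than the defaults.  The
# forward pass is inherited unchanged from TransformerSentenceEncoderLayer
# and is therefore not shown; whether a given stride/expressivity pair is
# valid depends on SparseMultiheadAttention.
#
#   layer = SparseTransformerSentenceEncoderLayer(
#       embedding_dim=512,
#       ffn_embedding_dim=2048,
#       num_attention_heads=8,
#       is_bidirectional=False,   # causal (autoregressive) masking
#       stride=16,
#       expressivity=4,
#   )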
|
COCO-LM/fairseq/fairseq/modules/sparse_transformer_sentence_encoder_layer.py/0
|
{
"file_path": "COCO-LM/fairseq/fairseq/modules/sparse_transformer_sentence_encoder_layer.py",
"repo_id": "COCO-LM",
"token_count": 723
}
| 204 |
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
import logging
from collections import defaultdict
from dataclasses import dataclass, field
from typing import Dict, Any, List, Optional
import torch.optim
from fairseq.dataclass import FairseqDataclass
from fairseq.optim import FairseqOptimizer, register_optimizer, _build_optimizer
from fairseq.optim.lr_scheduler import FairseqLRScheduler, build_lr_scheduler
from omegaconf import II, open_dict
logger = logging.getLogger(__name__)
@dataclass
class OptimizerAndSchedulerConfig(FairseqDataclass):
optimizer: Any = None
lr_scheduler: Optional[Any] = None
lr: List = II("optimization.lr")
lr_float: Optional[float] = None # this makes it easier to sweep on learning rate with auto sweepers
@dataclass
class CompositeOptimizerConfig(FairseqDataclass):
groups: Dict[str, Any] = field(
default_factory=lambda: {},
metadata={
"help": "optimizer name -> optimizer OptimizerAndSchedulerConfig. "
"Configures a different optimizer and (optionally) lr scheduler for each parameter group"
},
)
@register_optimizer("composite", dataclass=CompositeOptimizerConfig)
class FairseqCompositeOptimizer(FairseqOptimizer):
optimizers: Dict[str, FairseqOptimizer] = {}
lr_schedulers: Dict[str, FairseqLRScheduler] = {}
lr_scheduler: FairseqLRScheduler = None
_optimizer: torch.optim.Optimizer
def __init__(self, cfg: CompositeOptimizerConfig, params):
super().__init__(cfg)
assert (
len(params) > 1
), "Composite optimizer only works when there are multiple parameter groups (try fp16_no_flatten_grads: true)"
groupped_params = defaultdict(list)
for p in params:
group = getattr(p, "param_group", "default")
groupped_params[group].append(p)
assert groupped_params.keys() == cfg.groups.keys(), (
f"Parameter groups {groupped_params.keys()} and optimizer groups {cfg.groups.keys()} are not the same! "
"Try setting 'param_group' on your parameters in the model."
)
for group, group_params in groupped_params.items():
group_cfg = cfg.groups[group]
with open_dict(group_cfg):
if group_cfg.lr_float is not None:
group_cfg.optimizer.lr = [group_cfg.lr_float]
group_cfg.lr_scheduler.lr = [group_cfg.lr_float]
else:
group_cfg.optimizer.lr = group_cfg.lr
group_cfg.lr_scheduler.lr = group_cfg.lr
self.optimizers[group] = _build_optimizer(group_cfg.optimizer, group_params)
if group_cfg.lr_scheduler is not None:
self.lr_schedulers[group] = build_lr_scheduler(
group_cfg.lr_scheduler, self.optimizers[group]
)
if len(self.lr_schedulers) > 0:
assert len(self.lr_schedulers) == len(self.optimizers), (
f"Please provide an lr scheduler for each optimizer to use pass_through scheduler. "
f"Optimizers: {self.optimizers}; Lr scheds: {self.lr_schedulers}"
)
self.lr_scheduler = CompositeLRScheduler(self.lr_schedulers)
self._optimizer = CompositeOptimizer(self.optimizers)
@property
def supports_groups(self):
return True
@property
def param_groups(self):
for opt in self.optimizers.values():
for group in opt.param_groups:
yield group
def get_lr(self):
"""Return the current learning rate."""
k = (
"default"
if "default" in self.optimizers
else next(iter(self.optimizers.keys()))
)
return self.optimizers[k].param_groups[0]["lr"]
def state_dict(self):
"""Return the LR scheduler state dict."""
return {k: s.state_dict() for k, s in self.optimizers.items()}
def load_state_dict(self, state_dict, optimizer_overrides=None):
"""Load an LR scheduler state dict."""
for k, state in state_dict.items():
if k not in self.optimizers:
# skip extra keys like "loss_scale" added by fp16 optimizer
continue
overrides = (
optimizer_overrides[k]
if isinstance(optimizer_overrides, dict) and k in optimizer_overrides
else None
)
self.optimizers[k].load_state_dict(state, optimizer_overrides=overrides)
class CompositeOptimizer(torch.optim.Optimizer):
def __init__(self, optimizers: Dict[str, FairseqOptimizer]):
self.optimizers = optimizers
@property
def supports_memory_efficient_fp16(self):
return all(o.supports_memory_efficient_fp16 for o in self.optimizers.values())
@property
def supports_flat_params(self):
return all(o.supports_flat_params for o in self.optimizers.values())
def step(self, closure=None, groups=None):
"""Performs a single optimization step.
Args:
closure (callable, optional): A closure that reevaluates the model
and returns the loss.
"""
loss = None
if closure is not None:
loss = closure()
for k, opt in self.optimizers.items():
if groups is None or k in groups:
opt.step()
return loss
def zero_grad(self):
for opt in self.optimizers.values():
opt.zero_grad()
class CompositeLRScheduler(FairseqLRScheduler):
def __init__(self, lr_schedulers):
super().__init__(None, None)
self.lr_schedulers = lr_schedulers
def state_dict(self):
"""Return the LR scheduler state dict."""
return {k: s.state_dict() for k, s in self.lr_schedulers.items()}
def load_state_dict(self, state_dict):
"""Load an LR scheduler state dict."""
for k, state in state_dict.items():
self.lr_schedulers[k].load_state_dict(state)
def step_begin_epoch(self, epoch):
"""Update the learning rate at the beginning of the given epoch."""
for s in self.lr_schedulers.values():
s.step_begin_epoch(epoch)
def step(self, epoch, val_loss=None):
"""Update the learning rate at the end of the given epoch."""
for s in self.lr_schedulers.values():
s.step(epoch)
def step_update(self, num_updates):
"""Update the learning rate after each update."""
return {k: s.step_update(num_updates) for k, s in self.lr_schedulers.items()}
|
COCO-LM/fairseq/fairseq/optim/composite.py/0
|
{
"file_path": "COCO-LM/fairseq/fairseq/optim/composite.py",
"repo_id": "COCO-LM",
"token_count": 2905
}
| 205 |
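A small sketch of the grouping mechanism the composite optimizer above relies on: each parameter is routed by a `param_group` attribute (falling back to "default"), mirroring the grouping loop in __init__. The model and grouping rule below are purely illustrative.
from collections import defaultdict
import torch
model = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.Linear(16, 4))
for name, p in model.named_parameters():
    # illustrative rule: give the final projection its own optimizer group
    p.param_group = "head" if name.startswith("1.") else "default"
grouped = defaultdict(list)
for p in model.parameters():
    grouped[getattr(p, "param_group", "default")].append(p)
print({k: len(v) for k, v in grouped.items()})  # {'default': 2, 'head': 2}
# With the composite optimizer, cfg.groups would then need matching "default" and
# "head" entries, each carrying its own optimizer and (optionally) lr scheduler config.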
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
import math
from dataclasses import dataclass, field
from typing import Optional, List, Tuple
from omegaconf import II
from fairseq.dataclass import FairseqDataclass
from fairseq.optim.lr_scheduler import FairseqLRScheduler, register_lr_scheduler
@dataclass
class TriStageLRScheduleConfig(FairseqDataclass):
warmup_steps: int = field(
default=0,
metadata={"help": "warmup the learning rate linearly for the first N updates"},
)
hold_steps: int = field(
default=0,
metadata={"help": "steps in hold stage"},
)
decay_steps: int = field(
default=0,
metadata={"help": "steps in decay stages"},
)
phase_ratio: Optional[Tuple[float, float, float]] = field(
default=None,
metadata={
"help": (
"if set, automatically sets warmup/hold/decay steps to the ratio "
"specified here from max_updates. the ratios must add up to 1.0"
)
},
)
init_lr_scale: float = field(
default=0.01,
metadata={"help": "initial learning rate scale during warmup phase"},
)
final_lr_scale: float = field(
default=0.01,
metadata={"help": "final learning rate scale"},
)
max_update: float = II("optimization.max_update")
lr: List[float] = II("optimization.lr")
@register_lr_scheduler("tri_stage", dataclass=TriStageLRScheduleConfig)
class TriStageLRSchedule(FairseqLRScheduler):
"""Tristage learning rate schedulr
Implement the learning rate scheduler in https://arxiv.org/pdf/1904.08779.pdf
    Similar to the inverse_square_root scheduler, but tri_stage learning rate employs
three stages LR scheduling:
- warmup stage, starting from `lr` * `init_lr_scale`, linearly
increased to `lr` in `warmup_steps` iterations
- hold stage, after `warmup_steps`, keep the LR as `lr` for `hold_steps`
iterations
    - decay stage, after hold stage, decay LR exponentially to
      `lr` * `final_lr_scale` in `decay_steps`;
      after that the LR is kept at `final_lr_scale` * `lr`
During warmup::
init_lr = cfg.init_lr_scale * cfg.lr
lrs = torch.linspace(init_lr, cfg.lr, cfg.warmup_steps)
lr = lrs[update_num]
During hold::
lr = cfg.lr
During decay::
decay_factor = - math.log(cfg.final_lr_scale) / cfg.decay_steps
      lr = cfg.lr * exp(- (update_num - warmup_steps - hold_steps) * decay_factor)
After that::
lr = cfg.lr * cfg.final_lr_scale
"""
def __init__(self, cfg: TriStageLRScheduleConfig, optimizer):
super().__init__(cfg, optimizer)
if len(cfg.lr) > 1:
raise ValueError(
"Cannot use a fixed learning rate schedule with tri-stage lr."
" Consider --lr-scheduler=fixed instead."
)
# calculate LR at each point
self.peak_lr = cfg.lr[0]
self.init_lr = cfg.init_lr_scale * cfg.lr[0]
self.final_lr = cfg.final_lr_scale * cfg.lr[0]
if cfg.phase_ratio is not None:
assert cfg.max_update > 0
assert sum(cfg.phase_ratio) == 1, "phase ratios must add up to 1"
self.warmup_steps = int(cfg.max_update * cfg.phase_ratio[0])
self.hold_steps = int(cfg.max_update * cfg.phase_ratio[1])
self.decay_steps = int(cfg.max_update * cfg.phase_ratio[2])
else:
self.warmup_steps = cfg.warmup_steps
self.hold_steps = cfg.hold_steps
self.decay_steps = cfg.decay_steps
assert (
self.warmup_steps + self.hold_steps + self.decay_steps > 0
), "please specify steps or phase_ratio"
self.warmup_rate = (
(self.peak_lr - self.init_lr) / self.warmup_steps
if self.warmup_steps != 0
else 0
)
self.decay_factor = -math.log(cfg.final_lr_scale) / self.decay_steps
# initial learning rate
self.lr = self.init_lr
self.optimizer.set_lr(self.lr)
def _decide_stage(self, update_step):
"""
return stage, and the corresponding steps within the current stage
"""
if update_step < self.warmup_steps:
# warmup state
return 0, update_step
offset = self.warmup_steps
if update_step < offset + self.hold_steps:
# hold stage
return 1, update_step - offset
offset += self.hold_steps
if update_step <= offset + self.decay_steps:
# decay stage
return 2, update_step - offset
offset += self.decay_steps
# still here ? constant lr stage
return 3, update_step - offset
def step(self, epoch, val_loss=None):
"""Update the learning rate at the end of the given epoch."""
super().step(epoch, val_loss)
# we don't change the learning rate at epoch boundaries
return self.optimizer.get_lr()
def step_update(self, num_updates):
"""Update the learning rate after each update."""
stage, steps_in_stage = self._decide_stage(num_updates)
if stage == 0:
self.lr = self.init_lr + self.warmup_rate * steps_in_stage
elif stage == 1:
self.lr = self.peak_lr
elif stage == 2:
self.lr = self.peak_lr * math.exp(-self.decay_factor * steps_in_stage)
elif stage == 3:
self.lr = self.final_lr
else:
raise ValueError("Undefined stage")
self.optimizer.set_lr(self.lr)
return self.lr
|
COCO-LM/fairseq/fairseq/optim/lr_scheduler/tri_stage_lr_scheduler.py/0
|
{
"file_path": "COCO-LM/fairseq/fairseq/optim/lr_scheduler/tri_stage_lr_scheduler.py",
"repo_id": "COCO-LM",
"token_count": 2535
}
| 206 |
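A worked numeric sketch of the three stages described in the docstring above, in plain Python with illustrative hyper-parameters (peak lr 5e-4, 100/200/300 warmup/hold/decay updates, both scales 0.01). It reproduces the same arithmetic as step_update; the numbers are invented for illustration.
import math
peak_lr, init_scale, final_scale = 5e-4, 0.01, 0.01
warmup, hold, decay = 100, 200, 300
init_lr, final_lr = init_scale * peak_lr, final_scale * peak_lr
warmup_rate = (peak_lr - init_lr) / warmup
decay_factor = -math.log(final_scale) / decay
def lr_at(step):
    if step < warmup:                  # stage 0: linear warmup
        return init_lr + warmup_rate * step
    if step < warmup + hold:           # stage 1: hold at peak
        return peak_lr
    if step <= warmup + hold + decay:  # stage 2: exponential decay
        return peak_lr * math.exp(-decay_factor * (step - warmup - hold))
    return final_lr                    # stage 3: constant floor
for s in (0, 50, 100, 250, 450, 700):
    print(s, round(lr_at(s), 8))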
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
import sys
import torch
from fairseq import utils
class SequenceScorer(object):
"""Scores the target for a given source sentence."""
def __init__(
self,
tgt_dict,
softmax_batch=None,
compute_alignment=False,
eos=None,
symbols_to_strip_from_output=None,
):
self.pad = tgt_dict.pad()
self.eos = tgt_dict.eos() if eos is None else eos
self.softmax_batch = softmax_batch or sys.maxsize
assert self.softmax_batch > 0
self.compute_alignment = compute_alignment
self.symbols_to_strip_from_output = (
symbols_to_strip_from_output.union({self.eos})
if symbols_to_strip_from_output is not None
else {self.eos}
)
@torch.no_grad()
def generate(self, models, sample, **kwargs):
"""Score a batch of translations."""
net_input = sample["net_input"]
def batch_for_softmax(dec_out, target):
# assumes decoder_out[0] is the only thing needed (may not be correct for future models!)
first, rest = dec_out[0], dec_out[1:]
bsz, tsz, dim = first.shape
if bsz * tsz < self.softmax_batch:
yield dec_out, target, True
else:
flat = first.contiguous().view(1, -1, dim)
flat_tgt = target.contiguous().view(flat.shape[:-1])
s = 0
while s < flat.size(1):
e = s + self.softmax_batch
yield (flat[:, s:e],) + rest, flat_tgt[:, s:e], False
s = e
def gather_target_probs(probs, target):
probs = probs.gather(
dim=2,
index=target.unsqueeze(-1),
)
return probs
orig_target = sample["target"]
# compute scores for each model in the ensemble
avg_probs = None
avg_attn = None
for model in models:
model.eval()
decoder_out = model(**net_input)
attn = decoder_out[1] if len(decoder_out) > 1 else None
if type(attn) is dict:
attn = attn.get("attn", None)
batched = batch_for_softmax(decoder_out, orig_target)
probs, idx = None, 0
for bd, tgt, is_single in batched:
sample["target"] = tgt
curr_prob = model.get_normalized_probs(
bd, log_probs=len(models) == 1, sample=sample
).data
if is_single:
probs = gather_target_probs(curr_prob, orig_target)
else:
if probs is None:
probs = curr_prob.new(orig_target.numel())
step = curr_prob.size(0) * curr_prob.size(1)
end = step + idx
tgt_probs = gather_target_probs(
curr_prob.view(tgt.shape + (curr_prob.size(-1),)), tgt
)
probs[idx:end] = tgt_probs.view(-1)
idx = end
sample["target"] = orig_target
probs = probs.view(sample["target"].shape)
if avg_probs is None:
avg_probs = probs
else:
avg_probs.add_(probs)
if attn is not None:
if torch.is_tensor(attn):
attn = attn.data
else:
attn = attn[0]
if avg_attn is None:
avg_attn = attn
else:
avg_attn.add_(attn)
if len(models) > 1:
avg_probs.div_(len(models))
avg_probs.log_()
if avg_attn is not None:
avg_attn.div_(len(models))
bsz = avg_probs.size(0)
hypos = []
start_idxs = sample["start_indices"] if "start_indices" in sample else [0] * bsz
for i in range(bsz):
# remove padding from ref
ref = (
utils.strip_pad(sample["target"][i, start_idxs[i] :], self.pad)
if sample["target"] is not None
else None
)
tgt_len = ref.numel()
avg_probs_i = avg_probs[i][start_idxs[i] : start_idxs[i] + tgt_len]
score_i = avg_probs_i.sum() / tgt_len
if avg_attn is not None:
avg_attn_i = avg_attn[i]
if self.compute_alignment:
alignment = utils.extract_hard_alignment(
avg_attn_i,
sample["net_input"]["src_tokens"][i],
sample["target"][i],
self.pad,
self.eos,
)
else:
alignment = None
else:
avg_attn_i = alignment = None
hypos.append(
[
{
"tokens": ref,
"score": score_i,
"attention": avg_attn_i,
"alignment": alignment,
"positional_scores": avg_probs_i,
}
]
)
return hypos
|
COCO-LM/fairseq/fairseq/sequence_scorer.py/0
|
{
"file_path": "COCO-LM/fairseq/fairseq/sequence_scorer.py",
"repo_id": "COCO-LM",
"token_count": 3158
}
| 207 |
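The scoring arithmetic above reduces to gathering the probability assigned to each reference token and averaging log-probabilities over the target length. A minimal self-contained sketch with random numbers, assuming only PyTorch:
import torch
# (bsz, tgt_len, vocab) log-probabilities and a reference target sequence
log_probs = torch.log_softmax(torch.randn(1, 5, 10), dim=-1)
target = torch.randint(0, 10, (1, 5))
# gather the log-prob of each reference token, as gather_target_probs does above
tok_scores = log_probs.gather(2, target.unsqueeze(-1)).squeeze(-1)
# average over target length, matching the per-hypothesis "score" field
sentence_score = tok_scores.sum(dim=1) / target.size(1)
print(tok_scores, sentence_score)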
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT license.
import os
import pickle
import torch
import numpy as np
from fairseq.data import (
data_utils,
Dictionary,
encoders,
BaseWrapperDataset,
IdDataset,
NumSamplesDataset,
NumelDataset,
NestedDictionaryDataset,
SortDataset,
NumelDataset,
RightPadDataset,
RawLabelDataset,
RawArrayDataset,
)
#from transformers import BertTokenizer, squad_convert_examples_to_features
#from transformers.data.metrics.squad_metrics import compute_predictions_logits, squad_evaluate
from fairseq.tasks import LegacyFairseqTask, register_task
@register_task('squad')
class SQuADTask(LegacyFairseqTask):
@staticmethod
def add_args(parser):
parser.add_argument('data', metavar='FILE',
help='file prefix for data')
def __init__(self, args, dictionary):
super().__init__(args)
self.dictionary = dictionary
self.seed = args.seed
self.tokenizer = encoders.build_bpe(args)
assert self.tokenizer is not None
self.dictionary.add_symbol('[MASK]')
@classmethod
def load_dictionary(cls, filename):
dictionary = Dictionary.load(filename)
return dictionary
@classmethod
def setup_task(cls, args, **kwargs):
dictionary = cls.load_dictionary(os.path.join(args.data, 'dict.txt'))
print('| Dictionary: {} types'.format(len(dictionary)))
return cls(args, dictionary)
def load_dataset(self, split, combine=False, **kwargs):
features_file_path = os.path.join(self.args.data, "{}_features.pkl".format(split))
examples_file_path = os.path.join(self.args.data, "{}_examples.pkl".format(split))
if os.path.exists(features_file_path) and os.path.exists(examples_file_path):
examples = pickle.load(open(examples_file_path, 'rb'))
features = pickle.load(open(features_file_path, 'rb'))
else:
raise FileNotFoundError("cannot find {} or {}".format(features_file_path, examples_file_path))
if split == 'valid':
# save for eval
self.eval_examples = examples
self.eval_features = features
src_tokens = RawArrayDataset([torch.from_numpy(np.array(f.input_ids)) for f in features])
p_mask = RawArrayDataset([torch.from_numpy(np.array(f.p_mask)).bool() for f in features])
if split == 'train':
starts = RawLabelDataset([int(f.start_position) for f in features])
ends = RawLabelDataset([int(f.end_position) for f in features])
is_impossible = RawLabelDataset([int(f.is_impossible) for f in features])
else:
starts = ends = is_impossible = None
#sizes = np.array([len(f.input_ids) for f in features])
'''
Input format: <s> question here ? </s> Passage </s>
'''
dataset = NestedDictionaryDataset(
{
'id': IdDataset(),
'net_input': {
'src_tokens': RightPadDataset(
src_tokens,
pad_idx=self.dictionary.pad(),
),
'src_lengths': NumelDataset(src_tokens, reduce=False),
},
'targets': {
'starts': starts,
'ends': ends,
'is_impossible': is_impossible,
'p_mask': RightPadDataset(p_mask, pad_idx=1),
},
'nsentences': NumSamplesDataset(),
'ntokens': NumelDataset(src_tokens, reduce=True),
},
sizes=[src_tokens.sizes],
)
if split == 'train':
with data_utils.numpy_seed(self.args.seed):
shuffle = np.random.permutation(len(src_tokens))
dataset = SortDataset(
dataset,
sort_order=[shuffle],
)
print('| Loaded {} with {} samples'.format(split, len(dataset)))
self.datasets[split] = dataset
return self.datasets[split]
def build_model(self, args):
from fairseq import models
model = models.build_model(args, self)
model.register_question_answering_head(
'question_answering_head',
num_classes=2,
)
return model
@property
def source_dictionary(self):
return self.dictionary
@property
def target_dictionary(self):
return self.dictionary
|
COCO-LM/fairseq/fairseq/tasks/squad.py/0
|
{
"file_path": "COCO-LM/fairseq/fairseq/tasks/squad.py",
"repo_id": "COCO-LM",
"token_count": 2161
}
| 208 |
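The task assumes pre-pickled feature records; below is a hypothetical stand-in record (field names taken from the attribute accesses above, values invented) showing the per-feature tensors that load_dataset builds. It is a sketch only, not the real SQuAD preprocessing output.
from collections import namedtuple
import numpy as np
import torch
Feature = namedtuple("Feature", "input_ids p_mask start_position end_position is_impossible")
f = Feature(input_ids=[0, 11, 12, 2, 21, 22, 23, 2],
            p_mask=[1, 1, 1, 1, 0, 0, 0, 1],   # 1 marks positions that cannot be an answer
            start_position=5, end_position=6, is_impossible=0)
src_tokens = torch.from_numpy(np.array(f.input_ids))
p_mask = torch.from_numpy(np.array(f.p_mask)).bool()
print(src_tokens.shape, p_mask)  # one example of the per-feature tensors built above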
#!/usr/bin/env python3 -u
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
"""
Translate raw text with a trained model. Batches data on-the-fly.
"""
import ast
import fileinput
import logging
import math
import os
import sys
import time
from argparse import Namespace
from collections import namedtuple
import numpy as np
import torch
from fairseq import checkpoint_utils, distributed_utils, options, tasks, utils
from fairseq.dataclass.configs import FairseqConfig
from fairseq.dataclass.utils import convert_namespace_to_omegaconf
from fairseq.token_generation_constraints import pack_constraints, unpack_constraints
from fairseq_cli.generate import get_symbols_to_strip_from_output
logging.basicConfig(
format="%(asctime)s | %(levelname)s | %(name)s | %(message)s",
datefmt="%Y-%m-%d %H:%M:%S",
level=os.environ.get("LOGLEVEL", "INFO").upper(),
stream=sys.stdout,
)
logger = logging.getLogger("fairseq_cli.interactive")
Batch = namedtuple("Batch", "ids src_tokens src_lengths constraints")
Translation = namedtuple("Translation", "src_str hypos pos_scores alignments")
def buffered_read(input, buffer_size):
buffer = []
with fileinput.input(files=[input], openhook=fileinput.hook_encoded("utf-8")) as h:
for src_str in h:
buffer.append(src_str.strip())
if len(buffer) >= buffer_size:
yield buffer
buffer = []
if len(buffer) > 0:
yield buffer
def make_batches(lines, cfg, task, max_positions, encode_fn):
def encode_fn_target(x):
return encode_fn(x)
if cfg.generation.constraints:
        # Strip (tab-delimited) constraints, if present, from input lines,
# store them in batch_constraints
batch_constraints = [list() for _ in lines]
for i, line in enumerate(lines):
if "\t" in line:
lines[i], *batch_constraints[i] = line.split("\t")
# Convert each List[str] to List[Tensor]
for i, constraint_list in enumerate(batch_constraints):
batch_constraints[i] = [
task.target_dictionary.encode_line(
encode_fn_target(constraint),
append_eos=False,
add_if_not_exist=False,
)
for constraint in constraint_list
]
if cfg.generation.constraints:
constraints_tensor = pack_constraints(batch_constraints)
else:
constraints_tensor = None
tokens, lengths = task.get_interactive_tokens_and_lengths(lines, encode_fn)
itr = task.get_batch_iterator(
dataset=task.build_dataset_for_inference(
tokens, lengths, constraints=constraints_tensor
),
max_tokens=cfg.dataset.max_tokens,
max_sentences=cfg.dataset.batch_size,
max_positions=max_positions,
ignore_invalid_inputs=cfg.dataset.skip_invalid_size_inputs_valid_test,
).next_epoch_itr(shuffle=False)
for batch in itr:
ids = batch["id"]
src_tokens = batch["net_input"]["src_tokens"]
src_lengths = batch["net_input"]["src_lengths"]
constraints = batch.get("constraints", None)
yield Batch(
ids=ids,
src_tokens=src_tokens,
src_lengths=src_lengths,
constraints=constraints,
)
def main(cfg: FairseqConfig):
if isinstance(cfg, Namespace):
cfg = convert_namespace_to_omegaconf(cfg)
start_time = time.time()
total_translate_time = 0
utils.import_user_module(cfg.common)
if cfg.interactive.buffer_size < 1:
cfg.interactive.buffer_size = 1
if cfg.dataset.max_tokens is None and cfg.dataset.batch_size is None:
cfg.dataset.batch_size = 1
assert (
not cfg.generation.sampling or cfg.generation.nbest == cfg.generation.beam
), "--sampling requires --nbest to be equal to --beam"
assert (
not cfg.dataset.batch_size
or cfg.dataset.batch_size <= cfg.interactive.buffer_size
), "--batch-size cannot be larger than --buffer-size"
logger.info(cfg)
# Fix seed for stochastic decoding
if cfg.common.seed is not None and not cfg.generation.no_seed_provided:
np.random.seed(cfg.common.seed)
utils.set_torch_seed(cfg.common.seed)
use_cuda = torch.cuda.is_available() and not cfg.common.cpu
# Setup task, e.g., translation
task = tasks.setup_task(cfg.task)
# Load ensemble
overrides = ast.literal_eval(cfg.common_eval.model_overrides)
logger.info("loading model(s) from {}".format(cfg.common_eval.path))
models, _model_args = checkpoint_utils.load_model_ensemble(
utils.split_paths(cfg.common_eval.path),
arg_overrides=overrides,
task=task,
suffix=cfg.checkpoint.checkpoint_suffix,
strict=(cfg.checkpoint.checkpoint_shard_count == 1),
num_shards=cfg.checkpoint.checkpoint_shard_count,
)
# Set dictionaries
src_dict = task.source_dictionary
tgt_dict = task.target_dictionary
# Optimize ensemble for generation
for model in models:
if model is None:
continue
if cfg.common.fp16:
model.half()
if use_cuda and not cfg.distributed_training.pipeline_model_parallel:
model.cuda()
model.prepare_for_inference_(cfg)
# Initialize generator
generator = task.build_generator(models, cfg.generation)
# Handle tokenization and BPE
tokenizer = task.build_tokenizer(cfg.tokenizer)
bpe = task.build_bpe(cfg.bpe)
def encode_fn(x):
if tokenizer is not None:
x = tokenizer.encode(x)
if bpe is not None:
x = bpe.encode(x)
return x
def decode_fn(x):
if bpe is not None:
x = bpe.decode(x)
if tokenizer is not None:
x = tokenizer.decode(x)
return x
# Load alignment dictionary for unknown word replacement
# (None if no unknown word replacement, empty if no path to align dictionary)
align_dict = utils.load_align_dict(cfg.generation.replace_unk)
max_positions = utils.resolve_max_positions(
task.max_positions(), *[model.max_positions() for model in models]
)
if cfg.generation.constraints:
logger.warning(
"NOTE: Constrained decoding currently assumes a shared subword vocabulary."
)
if cfg.interactive.buffer_size > 1:
logger.info("Sentence buffer size: %s", cfg.interactive.buffer_size)
logger.info("NOTE: hypothesis and token scores are output in base 2")
logger.info("Type the input sentence and press return:")
start_id = 0
for inputs in buffered_read(cfg.interactive.input, cfg.interactive.buffer_size):
results = []
for batch in make_batches(inputs, cfg, task, max_positions, encode_fn):
bsz = batch.src_tokens.size(0)
src_tokens = batch.src_tokens
src_lengths = batch.src_lengths
constraints = batch.constraints
if use_cuda:
src_tokens = src_tokens.cuda()
src_lengths = src_lengths.cuda()
if constraints is not None:
constraints = constraints.cuda()
sample = {
"net_input": {
"src_tokens": src_tokens,
"src_lengths": src_lengths,
},
}
translate_start_time = time.time()
translations = task.inference_step(
generator, models, sample, constraints=constraints
)
translate_time = time.time() - translate_start_time
total_translate_time += translate_time
list_constraints = [[] for _ in range(bsz)]
if cfg.generation.constraints:
list_constraints = [unpack_constraints(c) for c in constraints]
for i, (id, hypos) in enumerate(zip(batch.ids.tolist(), translations)):
src_tokens_i = utils.strip_pad(src_tokens[i], tgt_dict.pad())
constraints = list_constraints[i]
results.append(
(
start_id + id,
src_tokens_i,
hypos,
{
"constraints": constraints,
"time": translate_time / len(translations),
},
)
)
# sort output to match input order
for id_, src_tokens, hypos, info in sorted(results, key=lambda x: x[0]):
src_str = ''
if src_dict is not None:
src_str = src_dict.string(src_tokens, cfg.common_eval.post_process)
print("S-{}\t{}".format(id_, src_str))
print("W-{}\t{:.3f}\tseconds".format(id_, info["time"]))
for constraint in info["constraints"]:
print(
"C-{}\t{}".format(
id_, tgt_dict.string(constraint, cfg.common_eval.post_process)
)
)
# Process top predictions
for hypo in hypos[: min(len(hypos), cfg.generation.nbest)]:
hypo_tokens, hypo_str, alignment = utils.post_process_prediction(
hypo_tokens=hypo["tokens"].int().cpu(),
src_str=src_str,
alignment=hypo["alignment"],
align_dict=align_dict,
tgt_dict=tgt_dict,
remove_bpe=cfg.common_eval.post_process,
extra_symbols_to_ignore=get_symbols_to_strip_from_output(generator),
)
detok_hypo_str = decode_fn(hypo_str)
score = hypo["score"] / math.log(2) # convert to base 2
# original hypothesis (after tokenization and BPE)
print("H-{}\t{}\t{}".format(id_, score, hypo_str))
# detokenized hypothesis
print("D-{}\t{}\t{}".format(id_, score, detok_hypo_str))
print(
"P-{}\t{}".format(
id_,
" ".join(
map(
lambda x: "{:.4f}".format(x),
# convert from base e to base 2
hypo["positional_scores"].div_(math.log(2)).tolist(),
)
),
)
)
if cfg.generation.print_alignment:
alignment_str = " ".join(
["{}-{}".format(src, tgt) for src, tgt in alignment]
)
print("A-{}\t{}".format(id_, alignment_str))
# update running id_ counter
start_id += len(inputs)
logger.info(
"Total time: {:.3f} seconds; translation time: {:.3f}".format(
time.time() - start_time, total_translate_time
)
)
def cli_main():
parser = options.get_interactive_generation_parser()
args = options.parse_args_and_arch(parser)
distributed_utils.call_main(convert_namespace_to_omegaconf(args), main)
if __name__ == "__main__":
cli_main()
|
COCO-LM/fairseq/fairseq_cli/interactive.py/0
|
{
"file_path": "COCO-LM/fairseq/fairseq_cli/interactive.py",
"repo_id": "COCO-LM",
"token_count": 5590
}
| 209 |
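A minimal sketch of the line-buffering behaviour of buffered_read above, using an in-memory list in place of stdin or a file; the function and sample lines are illustrative only.
def buffered(lines, buffer_size):
    buf = []
    for line in lines:
        buf.append(line.strip())
        if len(buf) >= buffer_size:
            yield buf
            buf = []
    if buf:
        yield buf
for chunk in buffered(["a", "b", "c", "d", "e"], buffer_size=2):
    print(chunk)  # ['a', 'b'], then ['c', 'd'], then ['e']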
# May help avoid undefined symbol errors https://pytorch.org/cppdocs/notes/faq.html#undefined-symbol-errors-from-pytorch-aten
import torch
import warnings
from . import *
|
COCO-LM/fairseq/fused_ops/fused_ops/__init__.py/0
|
{
"file_path": "COCO-LM/fairseq/fused_ops/fused_ops/__init__.py",
"repo_id": "COCO-LM",
"token_count": 52
}
| 210 |