Add files using upload-large-folder tool
- 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-01a8ecdc-729d-4ad7-9ea8-2c12971703011753434484022-2025_07_25-11.08.10.182/source.csv +0 -0
- 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-08d1caae-8fb5-40b9-88ed-5072c2f48ca81754110755341-2025_08_02-06.59.28.742/source.csv +10 -0
- 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-14c33a6c-d06d-421b-aaef-66e0673e81a31753986093740-2025_07_31-20.21.52.136/source.csv +0 -0
- 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-1a3a8350-ade5-4f14-90d3-a2023f5be9fa1753600712073-2025_07_27-09.18.39.905/source.csv +0 -0
- 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-1abe6561-37fa-44c4-a02e-35deedf040521754322372590-2025_08_04-17.46.19.699/source.csv +0 -0
- 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-210e88c5-c80c-4b42-a393-717923a05daf1751602893995-2025_07_04-06.22.30.835/source.csv +162 -0
- 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-264a0e70-8280-4ed9-a47c-e76bfae594cd1754128841382-2025_08_02-12.01.32.618/source.csv +0 -0
- 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-2bb200ce-4bc8-4bc3-9354-29e24db5d38e1752063967983-2025_07_09-14.26.42.463/source.csv +0 -0
- 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-2f484e29-43ea-48d0-8c50-df135d6c967a1753171043773-2025_07_22-09.57.31.372/source.csv +0 -0
- 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-3acc90e9-90ce-4c91-8dc5-7fa36ee6eae81754056616784-2025_08_01-15.57.02.654/source.csv +8 -0
- 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-3f2b1a99-0d75-466c-970c-4deff62cba851753462933379-2025_07_25-19.02.23.245/source.csv +94 -0
- 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-42977cf0-50f0-4dc9-b77f-2db2b88a939d1753960254266-2025_07_31-13.11.02.905/source.csv +0 -0
- 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-5342e4d6-3c20-40cb-9bcb-64bf1931df6e1753973941916-2025_07_31-16.59.20.943/source.csv +5 -0
- 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-59175e55-ecae-446f-be12-8861032d4f481751613426266-2025_07_04-09.17.44.620/source.csv +16 -0
- 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-59cfa53e-375d-426f-b7b4-1efe57f39c131751644504215-2025_07_04-17.55.46.972/source.csv +0 -0
- 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-623c548f-e16f-46a4-9ee1-6577a82e63e51754054052755-2025_08_01-15.14.20.520/source.csv +35 -0
- 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-6eacf655-5590-4c9d-ad09-856f09c6e0121751568373129-2025_07_03-20.47.02.778/source.csv +282 -0
- 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-76073275-4388-463f-8e12-ce34ee46fad51752495312029-2025_07_14-14.15.14.704/source.csv +0 -0
- 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-7737eb1f-d280-4335-8b81-3697e0d16cc61754428290161-2025_08_05-23.11.37.668/source.csv +7 -0
- 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-7f860396-c5c8-4f1f-8ce7-04e005748e611754402256906-2025_08_05-15.57.44.850/source.csv +0 -0
- 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-81dc70dc-8e01-48a6-9a00-9349b9f9a4171751541780271-2025_07_03-13.23.33.804/source.csv +0 -0
- 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-8d01713f-5b88-429a-99ff-32944a31fd381753259613769-2025_07_23-10.34.13.639/source.csv +0 -0
- 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-8e67a739-7b65-4646-afc1-42e9766880571751607756007-2025_07_04-07.43.31.602/source.csv +0 -0
- 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-8e7b7877-c553-4d5c-a7c5-433adcd8112b1754287948136-2025_08_04-08.12.35.154/source.csv +286 -0
- 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-93adc08f-77de-486a-a0da-6bd1df62203b1753869084135-2025_07_30-11.51.32.679/source.csv +0 -0
- 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-995920d4-3066-4bd1-985c-53b12cb9e83c1753010233944-2025_07_20-13.18.05.231/source.csv +0 -0
- 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-afb08496-b4ce-4efc-aec5-cc21ff6731861752228993278-2025_07_11-12.16.54.20/source.csv +4 -0
- 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-b00cd52f-686b-4cad-89ec-cf5dcdc287a11753702370531-2025_07_28-13.32.59.505/source.csv +0 -0
- 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-ba7dbcd4-5c4f-42a1-b9e3-2228180506061751641251586-2025_07_04-17.01.31.588/source.csv +0 -0
- 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-c51cb8ee-522a-4c00-a6d0-920adfdf29e71753118966830-2025_07_21-19.29.37.366/source.csv +0 -0
- 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-c66923f3-2a2a-4b19-880b-c1a8bfe1bf981753195775955-2025_07_22-16.49.43.384/source.csv +79 -0
- 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-c877f8c1-c8e0-4a7b-8720-40bb4df915221754138206219-2025_08_02-14.36.55.608/source.csv +0 -0
- 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-e6f472f8-41ac-4a94-8d9a-7becc51fed651753430069410-2025_07_25-09.54.49.121/source.csv +0 -0
- 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-f0382786-979c-4a6d-8e9b-f5977f18eb4f1753726151187-2025_07_28-20.09.13.67/source.csv +0 -0
- 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-f508ed97-76c1-4935-95ed-d4393099e6361753128212083-2025_07_21-22.03.39.166/source.csv +0 -0
- 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-f818bac9-3228-48bb-85cd-ad930fdb35d91752220838711-2025_07_11-10.00.40.248/source.csv +0 -0
- 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-faba6583-b2c9-4b94-9ba6-9f240428520a1750722089894-2025_06_23-23.49.28.299/source.csv +151 -0
- 69a563db57051868fc3ecdda3a43f162385be48f5447fe691a10177ee4dc3a0e/crowd-code-03e2d1f0-34f2-48bc-9e6a-99708362c3301750977820647-2025_06_27-00.43.44.850/source.csv +50 -0
- 69a563db57051868fc3ecdda3a43f162385be48f5447fe691a10177ee4dc3a0e/crowd-code-12bded65-cc97-4d6e-a5a9-8eb203cdf5b21750746850455-2025_06_24-08.34.24.180/source.csv +38 -0
- 69a563db57051868fc3ecdda3a43f162385be48f5447fe691a10177ee4dc3a0e/crowd-code-1f4d4a04-f881-43ae-915a-b4684ec9fba71750685322384-2025_06_23-15.28.52.723/source.csv +404 -0
- 69a563db57051868fc3ecdda3a43f162385be48f5447fe691a10177ee4dc3a0e/crowd-code-27a851dc-3a84-4cfb-867a-eef6b63ee7ef1750746742858-2025_06_24-08.32.35.909/source.csv +38 -0
- 69a563db57051868fc3ecdda3a43f162385be48f5447fe691a10177ee4dc3a0e/crowd-code-66c1dffb-e395-48ae-8676-da72a2b6a5cb1751540512935-2025_07_03-13.02.33.440/source.csv +0 -0
- 69a563db57051868fc3ecdda3a43f162385be48f5447fe691a10177ee4dc3a0e/crowd-code-89e3ea93-acec-4320-9c1b-eafc7c47155f1750747464127-2025_06_24-08.44.49.850/source.csv +9 -0
- 69a563db57051868fc3ecdda3a43f162385be48f5447fe691a10177ee4dc3a0e/crowd-code-89e3ea93-acec-4320-9c1b-eafc7c47155f1750747464127-2025_06_24-12.16.44.223/source.csv +3 -0
- 69a563db57051868fc3ecdda3a43f162385be48f5447fe691a10177ee4dc3a0e/crowd-code-98469f8a-b7a1-4997-8c1b-664c4f92dfac1751019295411-2025_06_27-12.14.59.339/source.csv +0 -0
- 69a563db57051868fc3ecdda3a43f162385be48f5447fe691a10177ee4dc3a0e/crowd-code-b6479f87-a6dd-4f48-918b-47aeda5068fc1750926520523-2025_06_26-10.29.13.179/source.csv +0 -0
- 69a563db57051868fc3ecdda3a43f162385be48f5447fe691a10177ee4dc3a0e/crowd-code-bc8678ed-7352-41d3-8ab3-bf70f7958a0b1750745114983-2025_06_24-08.05.29.186/source.csv +24 -0
- 69a563db57051868fc3ecdda3a43f162385be48f5447fe691a10177ee4dc3a0e/crowd-code-eb4593ba-8717-4311-aac2-0669058b8e141750152994551-2025_06_17-11.36.51.515/source.csv +5 -0
- 69a563db57051868fc3ecdda3a43f162385be48f5447fe691a10177ee4dc3a0e/crowd-code-ecd6bcde-7cf4-4819-b2c7-c5b474828daa1750689105661-2025_06_23-16.31.57.981/source.csv +8 -0
- 927a8af5474e5654810c00ce2e09fd2de87d3e5722f33fa1090d867db114e403/crowd-code-68ed574d-e77d-4b90-972b-cb14379b3ba21752587095408-2025_07_15-15.45.46.648/source.csv +0 -0
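Each source.csv added in this commit is a crowd-code editing-session log. The header visible in the diffs below (Sequence,Time,File,RangeOffset,RangeLength,Text,Language,Type) suggests each row is a range edit applied to a file buffer: insert Text at RangeOffset, replacing RangeLength characters. The following is a minimal sketch of loading and replaying one such log; it assumes pandas and standard editor-style replace semantics (character offsets and lengths), and the replay loop is illustrative rather than part of this dataset's tooling.

    import pandas as pd

    # Load one recorded session; column names are taken from the
    # file header shown in the diff below.
    events = pd.read_csv("source.csv")

    # Replay the edit log into per-file text buffers (assumed semantics:
    # each row replaces RangeLength characters at RangeOffset with Text).
    buffers = {}
    for e in events.sort_values("Sequence").itertuples():
        buf = buffers.get(e.File, "")
        start = int(e.RangeOffset)
        end = start + int(e.RangeLength)
        text = "" if pd.isna(e.Text) else str(e.Text)
        buffers[e.File] = buf[:start] + text + buf[end:]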
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-01a8ecdc-729d-4ad7-9ea8-2c12971703011753434484022-2025_07_25-11.08.10.182/source.csv
ADDED
The diff for this file is too large to render. See raw diff.
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-08d1caae-8fb5-40b9-88ed-5072c2f48ca81754110755341-2025_08_02-06.59.28.742/source.csv
ADDED
@@ -0,0 +1,10 @@
+Sequence,Time,File,RangeOffset,RangeLength,Text,Language,Type
1,9,".venv/lib/python3.10/site-packages/jax/_src/cudnn/fused_attention_stablehlo.py",0,0,"# Copyright 2024 The JAX Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the ""License"");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an ""AS IS"" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport enum\nimport functools\nimport json\nimport math\nfrom typing import TypedDict\n\nimport jax\nfrom jax import dtypes\nfrom jax._src import core\nfrom jax._src import dispatch\nfrom jax._src.custom_partitioning import custom_partitioning\nfrom jax._src.interpreters import batching\nfrom jax._src.interpreters import mlir\nfrom jax._src.lib import cuda_versions\nfrom jax._src import xla_bridge\nfrom jax._src.lib.mlir import ir\nfrom jax._src.lib.mlir.dialects import hlo\nimport jax.numpy as jnp\nfrom jax.sharding import NamedSharding, PartitionSpec\n\nArray = jnp.ndarray\n\nclass FP8Params(TypedDict):\n amax_dQ: float # Amax of gradient of query\n amax_dK: float # Amax of gradient of key\n amax_dV: float # Amax of gradient of value\n amax_dP: float # Amax of gradient of state\n descale_q: float # Descaling factor of query\n descale_k: float # Descaling factor of key\n descale_v: float # Descaling factor of value\n descale_s: float # Descaling factor of attention score\n scale_s: float # Scale factor for S tensor\n scale_o: float # Scale factor for output\n descale_o: float # Descale factor for output (bwd)\n descale_dO: float # Descale factor for output gradient (bwd)\n descale_dP: float # Descale factor for P gradient tensor (bwd)\n scale_dQ: float # Scale factor for query gradient (bwd)\n scale_dK: float # Scale factor for key gradient (bwd)\n scale_dV: float # Scale factor for value gradient (bwd)\n scale_dP: float # Scale factor for state gradient (bwd)\n\n\nclass AttentionLayout(enum.Enum):\n BTNH = 0\n BNTH = 1\n\n\nclass MaskType(enum.Enum):\n NO_MASK = 0\n PADDING = 1\n CAUSAL = 2\n PADDING_CAUSAL = 3\n ALIBI = 4\n\n\ndef convert_mask_type_to_string(mask_type: MaskType) -> str:\n if mask_type == MaskType.NO_MASK:\n return ""NO_MASK""\n elif mask_type == MaskType.PADDING:\n return ""PADDING""\n elif mask_type == MaskType.CAUSAL:\n return ""CAUSAL""\n elif mask_type == MaskType.PADDING_CAUSAL:\n return ""PADDING_CAUSAL""\n elif mask_type == MaskType.ALIBI:\n return ""ALIBI""\n else:\n raise ValueError(f""Unexpected mask type: {mask_type}"")\n\ndef has_padding(mask_type: MaskType) -> bool:\n return mask_type == MaskType.PADDING or mask_type == MaskType.PADDING_CAUSAL\n\ndef should_export_dbias(bias_shape, query_shape, layout) -> bool:\n b_B, b_N, _, _ = bias_shape\n if layout == AttentionLayout.BNTH.value:\n _, q_N, _, _ = query_shape\n else:\n _, _, q_N, _ = query_shape\n return b_B == 1 and b_N == q_N\n\ndef get_large_negative_number(dtype):\n # temp WAR as cuDNN has a bug for subtraction between two large negative value\n if dtype == jnp.bfloat16:\n return jnp.asarray(-2 << 40, dtype=dtype)\n elif dtype == jnp.float16:\n return jnp.asarray(-2 << 14, dtype=dtype)\n else:\n raise ValueError(""Unsupported dtype for inputs."")\n\ndef _normalize_layout(layout: str) -> AttentionLayout:\n layout_upper = 
layout.upper()\n if layout_upper in [""BSNH"", ""BNSH"", ""BTNH"", ""BNTH""]:\n return AttentionLayout[layout_upper.replace(""S"", ""T"")]\n else:\n raise ValueError(f""Unsupported qkv_layout: {layout}"")\n\ndef element_type_to_backend_config_type_mapping(dtype):\n _element_type_to_backend_config_type_mapping = {\n ir.BF16Type.get(): ""BF16"",\n ir.F16Type.get(): ""F16"",\n }\n return _element_type_to_backend_config_type_mapping[dtype]\n\ndef default_layouts(*shapes):\n return [range(len(shape) - 1, -1, -1) for shape in shapes]\n\ndef get_max_seg_per_batch(q_offsets):\n return q_offsets.shape[1] - 1 if len(q_offsets.shape) == 2 else 1\n\ndef check_is_paged_attention(page_table_k):\n return len(page_table_k.shape) == 4\n\ndef create_dot_product_attention_backend_config_base(\n batch, num_heads, seq_q, seq_kv, dtype, fmha_scale, mask_type, layout, is_bwd\n):\n # Q, K, V: query, key, value in shape of BT(S)NH or BNT(S)H\n # P: BMM1 output in shape of BNTS\n # O: BMM2 output in the same shape with Q\n # BMM1: Q @ K -> P\n # BMM2: P @ V -> O\n # BMM1Grad1: dP @ Q -> dK\n # BMM1Grad2: dP @ K -> dQ\n # BMM2Grad1: P @ dO -> dV\n # BMM2Grad2: dO @ V -> dP\n cudnn_fmha_backend_config = {\n ""algorithm"": {\n ""algo_id"": ""0"",\n ""math_type"": ""TENSOR_OP_MATH"",\n ""tuning_knobs"": {""17"": ""1"", ""24"": ""0""},\n ""is_cudnn_frontend"": True,\n ""workspace_size"": ""0"",\n },\n ""fmha_scale"": fmha_scale,\n ""intermediate_tensor_shape"": {\n ""element_type"": element_type_to_backend_config_type_mapping(dtype),\n ""dimensions"": [str(batch), str(num_heads), str(seq_q), str(seq_kv)],\n ""tuple_shapes"": [],\n ""layout"": {\n ""dim_level_types"": [],\n ""dim_unique"": [],\n ""dim_ordered"": [],\n ""minor_to_major"": [""3"", ""2"", ""1"", ""0""],\n ""tiles"": [],\n ""element_size_in_bits"": ""0"",\n ""memory_space"": ""0"",\n ""index_primitive_type"": ""PRIMITIVE_TYPE_INVALID"",\n ""pointer_primitive_type"": ""PRIMITIVE_TYPE_INVALID"",\n ""dynamic_shape_metadata_prefix_bytes"": ""0"",\n },\n ""is_dynamic_dimension"": [False, False, False, False],\n },\n ""is_flash_attention"": True,\n ""mask_type"": convert_mask_type_to_string(mask_type),\n }\n\n # We define the contracting and batch dims in the format of\n # ((lhs_contracting_dims, rhs_contracting_dims), (lhs_batch_dims,\n # rhs_batch_dims)).\n if layout == AttentionLayout.BNTH.value:\n dims = [\n ((3, 3), ((0, 1), (0, 1))), # BMM1: BNTH,BNSH->BNTS\n ((3, 2), ((0, 1), (0, 1))), # BMM2: BNTS,BNSH->BNTH\n ((2, 2), ((0, 1), (0, 1))), # BMM1_grad_1: BNTS,BNTH->BNSH\n ((3, 2), ((0, 1), (0, 1))), # BMM1_grad_2: BNTS,BNSH->BNTH\n ((2, 2), ((0, 1), (0, 1))), # BMM2_grad_1: BNTS,BNTH->BNSH\n ((3, 3), ((0, 1), (0, 1))), # BMM2_grad_2: BNTH,BNSH->BNTS\n ]\n else:\n dims = [\n ((3, 3), ((0, 2), (0, 2))), # BMM1: BTNH,BSNH->BNTS\n ((3, 1), ((0, 1), (0, 2))), # BMM2: BNTS,BSNH->BTNH\n ((2, 1), ((0, 1), (0, 2))), # BMM1_grad_1: BNTS,BTNH->BSNH\n ((3, 1), ((0, 1), (0, 2))), # BMM1_grad_2: BNTS,BSNH->BTNH\n ((2, 1), ((0, 1), (0, 2))), # BMM2_grad_1: BNTS,BTNH->BSNH\n ((3, 3), ((0, 2), (0, 2))), # BMM2_grad_2: BTNH,BSNH->BNTS\n ]\n keys = [\n ""bmm1_dot_dimension_numbers"",\n ""bmm2_dot_dimension_numbers"",\n ""bmm1_grad_gemm1_dot_dimension_numbers"",\n ""bmm1_grad_gemm2_dot_dimension_numbers"",\n ""bmm2_grad_gemm1_dot_dimension_numbers"",\n ""bmm2_grad_gemm2_dot_dimension_numbers"",\n ]\n fwd_dot_number = {}\n bwd_dot_number = {}\n for idx, (key, ((lc, rc), (lb, rb))) in enumerate(zip(keys, dims)):\n dims_to_write = fwd_dot_number if idx < 2 else bwd_dot_number\n 
dims_to_write[key] = {\n ""lhs_contracting_dimensions"": [str(lc)],\n ""rhs_contracting_dimensions"": [str(rc)],\n ""lhs_batch_dimensions"": [str(i) for i in lb],\n ""rhs_batch_dimensions"": [str(i) for i in rb],\n }\n\n if is_bwd:\n cudnn_fmha_backend_config = {**cudnn_fmha_backend_config, **bwd_dot_number}\n else:\n cudnn_fmha_backend_config = {**cudnn_fmha_backend_config, **fwd_dot_number}\n backend_config = {\n ""operation_queue_id"":""0"",\n ""wait_on_operation_queues"":[],\n ""cudnn_fmha_backend_config"": cudnn_fmha_backend_config\n }\n return backend_config\n\ndef create_dot_product_attention_backend_config(\n batch,\n num_heads,\n seq_q,\n seq_kv,\n dtype,\n fmha_scale,\n seed,\n dropout_rate,\n mask_type,\n layout,\n sliding_window_length,\n max_seg_per_batch,\n is_paged_attention,\n is_bwd\n):\n backend_config = create_dot_product_attention_backend_config_base(\n batch, num_heads, seq_q, seq_kv, dtype,\n fmha_scale, mask_type, layout, is_bwd\n )\n if sliding_window_length is None:\n sliding_window_length = 0\n backend_config['cudnn_fmha_backend_config'][""dropout_rate""] = dropout_rate\n backend_config['cudnn_fmha_backend_config'][""seed""] = seed\n backend_config['cudnn_fmha_backend_config'][""sliding_window_length""] = sliding_window_length\n backend_config['cudnn_fmha_backend_config'][""max_seg_per_batch""] = max_seg_per_batch\n backend_config['cudnn_fmha_backend_config'][""is_paged_attention""] = is_paged_attention\n return json.dumps(backend_config)\n\ndef create_dot_product_attention_fp8_backend_config(\n batch, num_heads, seq_q, seq_kv, dtype, fmha_scale, mask_type, layout, is_bwd):\n backend_config = create_dot_product_attention_backend_config_base(\n batch, num_heads, seq_q, seq_kv, dtype, fmha_scale, mask_type, layout, is_bwd)\n return json.dumps(backend_config)\n\n# mapping from (is_bwd, has_dropout, has_bias) to custom call name\n_custom_name_maps = {\n # fMHA forward call targets.\n (False, False, False, False): ""__cudnn$fmhaSoftmax"",\n (False, False, True, False): ""__cudnn$fmhaScaleBiasSoftmax"",\n (False, True, False, False): ""__cudnn$fmhaSoftmaxDropout"",\n (False, True, True, False): ""__cudnn$fmhaScaleBiasSoftmaxDropout"",\n (False, False, False, True): ""__cudnn$fmhaSoftmaxF8"",\n # fMHA backward call targets.\n (True, False, False, False): ""__cudnn$fmhaSoftmaxBackward"",\n (True, False, True, False): ""__cudnn$fmhaScaleBiasSoftmaxBackward"",\n (True, True, False, False): ""__cudnn$fmhaSoftmaxDropoutBackward"",\n (True, True, True, False): ""__cudnn$fmhaScaleBiasSoftmaxDropoutBackward"",\n (True, False, False, True): ""__cudnn$fmhaSoftmaxBackwardF8"",\n}\n\ndef get_custom_call_name(has_bias, has_dropout, is_bwd, is_fp8=False):\n return _custom_name_maps[(is_bwd, has_dropout, has_bias, is_fp8)]\n\nget_fp8_custom_call_name = functools.partial(\n get_custom_call_name, has_bias=False, has_dropout=False, is_fp8=True\n)\n\ndef check_layout(query, key, value, bias, q_seqlen, kv_seqlen,\n q_offsets, kv_offsets, page_table_k, page_table_v, layout):\n def check_eq(a, b, c, msg):\n if not (a == b == c):\n raise ValueError(f""{msg} must be same, got {a}, {b}, {b}"")\n\n q_rank, k_rank, v_rank = len(query.shape), len(key.shape), len(value.shape)\n if q_rank != 4:\n raise ValueError(f""Q must have a rank of 4, got {q_rank}"")\n check_eq(q_rank, k_rank, v_rank, ""QKV rank"")\n\n q_dtype, k_dtype, v_dtype = query.dtype, key.dtype, value.dtype\n if q_dtype not in [jnp.bfloat16, jnp.float16, jnp.float8_e4m3fn, jnp.float8_e5m2]:\n raise NotImplementedError(f""Q must be 
fp16/bf16/fp8_e4m3fn/fp8_e5m2, got {q_dtype}"")\n check_eq(q_dtype, k_dtype, v_dtype, ""QKV dtype"")\n\n if layout == AttentionLayout.BNTH:\n qB, qN, qT, qH = query.shape\n kB, kN, kS, kH = key.shape\n vB, vN, vS, vH = value.shape\n else:\n assert layout == AttentionLayout.BTNH\n qB, qT, qN, qH = query.shape\n kB, kS, kN, kH = key.shape\n vB, vS, vN, vH = value.shape\n\n if page_table_k is not None and page_table_v is not None:\n k_blocks, k_block_size = kB, kS\n v_blocks, v_block_size = vB, vS\n kB, _, k_blocks_per_batch, _ = page_table_k.shape\n vB, _, v_blocks_per_batch, _ = page_table_v.shape\n kS = k_blocks_per_batch * k_block_size\n vS = v_blocks_per_batch * v_block_size\n if kB * k_blocks_per_batch != k_blocks:\n raise ValueError(\n f""Key and page_table_k must have same number of blocks, ""\n f""got {k_blocks} vs {kB * k_blocks_per_batch}"")\n if vB * v_blocks_per_batch != v_blocks:\n raise ValueError(\n f""Value and page_table_v must have same number of blocks, ""\n f""got {v_blocks} vs {vB * v_blocks_per_batch}"")\n\n check_eq(qB, kB, vB, ""QKV batch"")\n check_eq(qH, kH, vH, ""QKV dim_per_head"")\n if kN != vN:\n raise ValueError(f""KV must have same number of heads, got {kN} vs {vN}"")\n if kS != vS:\n raise ValueError(f""KV must have same seq length, got {kS} vs {vS}"")\n\n # check bias\n if bias is not None:\n _, _, bT, bS = bias.shape\n if bT != qT or bS != vS:\n breakpoint()\n raise ValueError(\n f""Bias must have same seq length as QKV, got {bT} and {bS}"")\n\n # check q_seqlen/kv_seqlen/q_offsets/kv_offsets\n expected_rank = 2 if q_offsets is not None else 1\n def check_seqlen_offsets(tensor, name):\n if tensor is not None:\n dtype = tensor.dtype\n rank = len(tensor.shape)\n if dtype != jnp.int32:\n raise ValueError(f""{name} must have int32 datatype, got {dtype}"")\n if rank != expected_rank:\n raise ValueError(f""{name} must have a rank of {expected_rank}, got {rank}"")\n b = tensor.shape[0]\n if b != qB:\n raise ValueError(f""{name} must have same batch as Q, got {b}"")\n\n check_seqlen_offsets(q_seqlen, ""q_seqlen"")\n check_seqlen_offsets(kv_seqlen, ""kv_seqlen"")\n check_seqlen_offsets(q_offsets, ""q_offsets"")\n check_seqlen_offsets(kv_offsets, ""kv_offsets"")\n\n\ndef check_is_flash_attention(\n query, key, layout: int, cudnn_version, has_bias, is_training, is_packed=False,\n is_paged_attention=False, is_fp8=False):\n # Extract sequence length (T) and head dim (H) based on layout\n if layout == AttentionLayout.BNTH.value:\n _, _, T, H = query.shape\n _, _, S, _ = key.shape\n else:\n _, T, _, H = query.shape\n _, S, _, _ = key.shape\n\n # Flash attention conditions\n if is_fp8:\n # FP8 specific conditions\n if not ((is_training and H == 128 and T % 128 == 0 and S % 128 == 0) or\n (not is_training and H <= 256 and H % 16 == 0)):\n raise NotImplementedError(\n f""Unsupported sequence length Q {T}, KV {S} and head dim {H} for FP8.""\n )\n else:\n # bf16/fp16 attention conditions\n # Check the head dim.\n is_on_hopper = is_cuda_compute_capability_equal(""9.0"")\n H_max = 256 if cudnn_version >= 90500 and is_on_hopper else 128\n if not (H <= H_max and H % 8 == 0):\n raise NotImplementedError(\n f""The head dim must be <= {H_max} and a multiple of 8, ""\n f""but got {H}.""\n )\n\n # Check patterns with bias, seqlen should be divisible by 2\n if (is_training and has_bias and (T % 2 != 0 or S % 2 != 0)):\n raise NotImplementedError(\n f""Unsupported sequence length Q {T}, KV {S}.""\n )\n\n if is_packed and (cudnn_version < 90600 or not check_compute_capability(""9.0"")):\n 
raise NotImplementedError(\n ""Packed layout requires cudnn version >= 9.6 and at least hopper arch."")\n if is_paged_attention and cudnn_version < 90500:\n raise NotImplementedError(""Page attention requires cudnn version >= 9.5."")\n\ndef check_cudnn_version():\n # check if cuDNN is installed\n if cuda_versions is None:\n raise RuntimeError(""cuDNN is not detected."")\n return cuda_versions.cudnn_get_version()\n\ndef check_compute_capability(capability):\n if not 'cuda' in xla_bridge.get_backend().platform_version:\n return False\n d, *_ = jax.local_devices(backend=""gpu"")\n target = tuple(int(x) for x in capability.split("".""))\n current = tuple(int(x) for x in d.compute_capability.split("".""))\n return current >= target\n\ndef is_cuda_compute_capability_equal(capability):\n if not 'cuda' in xla_bridge.get_backend().platform_version:\n return False\n d, *_ = jax.local_devices(backend=""gpu"")\n target = tuple(int(x) for x in capability.split("".""))\n current = tuple(int(x) for x in d.compute_capability.split("".""))\n return current == target\n\ndef _dot_product_attention_fwd(\n query, key, value, bias, q_seqlen, kv_seqlen, q_offsets, kv_offsets,\n page_table_k, page_table_v,\n scale, seed, dropout_rate, variadic_args, mask_type, layout,\n sliding_window_length, cudnn_version, return_residual):\n # check if flash attention is supported for this attention pattern\n check_is_flash_attention(\n query, key, layout, cudnn_version, bias is not None, False,\n get_max_seg_per_batch(q_offsets) > 1, check_is_paged_attention(page_table_k))\n outputs = _dot_product_attention_fwd_p_wrapper.bind(\n query, key, value, bias, q_seqlen, kv_seqlen, q_offsets, kv_offsets,\n page_table_k, page_table_v, scale=scale, seed=seed, dropout_rate=dropout_rate,\n variadic_args=variadic_args, mask_type=mask_type, layout=layout,\n sliding_window_length=sliding_window_length, is_training=False or return_residual)\n if return_residual:\n return tuple(outputs)\n else:\n return outputs[0]\n\ndef _dot_product_attention_fwd_rule(\n query, key, value, bias, q_seqlen, kv_seqlen, q_offsets, kv_offsets,\n page_table_k, page_table_v, scale, seed, dropout_rate, variadic_args,\n mask_type, layout, sliding_window_length, cudnn_version,\n return_residual):\n # check if flash attention is supported for this attention pattern\n check_is_flash_attention(\n query, key, layout, cudnn_version, bias is not None, True,\n get_max_seg_per_batch(q_offsets) > 1)\n outputs = _dot_product_attention_fwd_p_wrapper.bind(\n query, key, value, bias, q_seqlen, kv_seqlen, q_offsets, kv_offsets,\n page_table_k, page_table_v, scale=scale, seed=seed, dropout_rate=dropout_rate,\n variadic_args=variadic_args, mask_type=mask_type, layout=layout,\n sliding_window_length=sliding_window_length, is_training=True)\n res = (query, key, value, bias, q_seqlen, kv_seqlen, q_offsets,\n kv_offsets, page_table_k, page_table_v, outputs[1], outputs[0])\n if return_residual:\n return tuple(outputs), res\n else:\n return outputs[0], res\n\ndef _dot_product_attention_bwd_rule(\n scale, seed, dropout_rate, variadic_args, mask_type, layout,\n sliding_window_length, is_training, return_residual, res, grad_output):\n (query, key, value, bias, q_seqlen, kv_seqlen, q_offsets, kv_offsets,\n page_table_k, page_table_v, activation, fwd_output) = res\n if return_residual:\n grad_output = grad_output[0]\n grads = _dot_product_attention_bwd_p_wrapper.bind(\n query, key, value, bias, q_seqlen, kv_seqlen, q_offsets, kv_offsets,\n page_table_k, page_table_v, activation, fwd_output, 
grad_output,\n scale=scale, seed=seed, dropout_rate=dropout_rate, variadic_args=variadic_args,\n mask_type=mask_type, layout=layout,\n sliding_window_length=sliding_window_length\n )\n grads = (*grads,) + (None,) * (10 - len(grads))\n return grads\n\ndef _fix_seqlen_offsets(q_seqlen, kv_seqlen, q_offsets, kv_offsets, query, key):\n # fix seqlen and offsets to what cuDNN expects in sequence packing.\n # cuDNN expects seqlen to have shape [S] where S is the total number of segments\n # while the SDPA API accetps seqlen with shape [B, M] where B is the batch and M\n # is the maximum number of segments of one batch. B x M is larger than S and seqlen\n # is filled with -1 for padded regions. Therefore, we need to shift all non negative\n # values to left side to form a correct seqlen. Similar layout is required for\n # offsets tensors.\n # cuDNN expects offsets to have offset for each segment starting from first segment\n # while SDPA API accetps offsets to have offset for each segment starting from\n # current batch, therefore we need to calculate accumulative offset of each segment\n # starting from first segment.\n def _shift_to_left(x, fill_value):\n # shift any non-negative value to left\n # [[1, 3, -1, -1], [2, 3, 4, -1]]\n # -> [[1, 3, 2, 3], [4, -1, -1, -1]]\n x_shape = x.shape\n x = x.flatten()\n size = x.size\n indices = jnp.nonzero(x >= 0, size=size, fill_value=size)[0]\n y = jnp.take(x, indices, fill_value=fill_value)\n return jnp.reshape(y, x_shape)\n\n def _cu_offset(offsets, max_seq):\n # calculate accumulative offset by batch\n # [[1, 3, 5, 7], [4, 5, -1, -1]], max_seq = 8\n # -> [[1, 3, 5, 7], [12, 13, -1, -1]]\n batch = offsets.shape[0]\n offsets = jnp.where(\n offsets >= 0,\n offsets + (jnp.arange(batch, dtype=offsets.dtype) * max_seq)[..., jnp.newaxis],\n offsets,\n )\n return offsets\n\n if get_max_seg_per_batch(q_offsets) > 1:\n B, T, N, H = query.shape\n _, S, _, _ = key.shape\n\n q_seqlen = _shift_to_left(q_seqlen, -1)\n kv_seqlen = _shift_to_left(kv_seqlen, -1)\n\n q_offsets = _cu_offset(q_offsets, T)\n kv_offsets = _cu_offset(kv_offsets, S)\n q_offsets = _shift_to_left(q_offsets, -1)\n kv_offsets = _shift_to_left(kv_offsets, -1)\n\n # mark any invalid entries as maximum offset\n q_offsets = jnp.where(q_offsets < 0, B * T, q_offsets)\n kv_offsets = jnp.where(kv_offsets < 0, B * S, kv_offsets)\n\n # multiply by stride_per_token to get correct offsets\n # do it here because real stride changes after sharding\n q_offsets = q_offsets * N * H\n kv_offsets = kv_offsets * N * H\n\n return q_seqlen, kv_seqlen, q_offsets, kv_offsets\n\ndef _dot_product_attention_fwd_impl(\n query, key, value, bias, q_seqlen, kv_seqlen, q_offsets, kv_offsets,\n page_table_k, page_table_v, scale, seed, dropout_rate, variadic_args,\n mask_type, layout, sliding_window_length, is_training):\n # args: {Q, K, V, mask*, bias*}\n q_seqlen, kv_seqlen, q_offsets, kv_offsets = \\n _fix_seqlen_offsets(q_seqlen, kv_seqlen, q_offsets, kv_offsets, query, key)\n outputs = _dot_product_attention_fwd_p.bind(\n query, key, value, bias, q_seqlen, kv_seqlen, q_offsets, kv_offsets,\n page_table_k, page_table_v, scale=scale, seed=seed, dropout_rate=dropout_rate,\n variadic_args=variadic_args, mask_type=mask_type, layout=layout,\n sliding_window_length=sliding_window_length, is_training=is_training)\n return outputs\n\ndef _dot_product_attention_bwd_impl(\n query, key, value, bias, q_seqlen, kv_seqlen, q_offsets, kv_offsets,\n page_table_k, page_table_v, activation, fwd_output, grad_output, scale,\n seed, dropout_rate, 
variadic_args, mask_type, layout, sliding_window_length):\n q_seqlen, kv_seqlen, q_offsets, kv_offsets = \\n _fix_seqlen_offsets(q_seqlen, kv_seqlen, q_offsets, kv_offsets, query, key)\n grads = _dot_product_attention_bwd_p.bind(\n query, key, value, bias, q_seqlen, kv_seqlen, q_offsets, kv_offsets,\n page_table_k, page_table_v, activation, fwd_output, grad_output,\n scale=scale, seed=seed,\n dropout_rate=dropout_rate, variadic_args=variadic_args,\n mask_type=mask_type, layout=layout,\n sliding_window_length=sliding_window_length)\n return grads\n\ndef _dot_product_attention_fwd_abstract(\n query, key, value, bias, q_seqlen, kv_seqlen, q_offsets, kv_offsets,\n page_table_k, page_table_v, *, scale, seed, dropout_rate, variadic_args,\n mask_type, layout, sliding_window_length, is_training):\n query_dtype = dtypes.canonicalize_dtype(query.dtype)\n if layout == AttentionLayout.BNTH.value:\n B, N, T, _ = query.shape\n _, _, S, _ = key.shape\n else:\n B, T, N, _ = query.shape\n _, S, _, _ = key.shape\n output_shape = query.shape\n\n max_seg_per_batch = get_max_seg_per_batch(q_offsets)\n softmax_stat_shape = (B * max_seg_per_batch, N, T)\n\n if is_training:\n return (\n core.ShapedArray(output_shape, query_dtype), # output\n core.ShapedArray(softmax_stat_shape, jnp.float32), # softmax_stat\n )\n else:\n return (\n core.ShapedArray(output_shape, query_dtype), # output\n )\n\ndef _dot_product_attention_bwd_abstract(\n query, key, value, bias, q_seqlen, kv_seqlen, q_offsets, kv_offsets,\n page_table_k, page_table_v, activation, fwd_output, grad_output, *,\n scale, seed, dropout_rate, variadic_args, mask_type, layout, sliding_window_length):\n query_dtype = dtypes.canonicalize_dtype(query.dtype)\n key_dtype = dtypes.canonicalize_dtype(key.dtype)\n value_dtype = dtypes.canonicalize_dtype(value.dtype)\n\n _, has_dbias = variadic_args\n if has_dbias:\n # cuDNN supports bias for this case\n bias_dtype = dtypes.canonicalize_dtype(bias.dtype)\n return (\n core.ShapedArray(\n query.shape, query_dtype\n ), # grad query\n core.ShapedArray(\n key.shape, key_dtype\n ), # grad key\n core.ShapedArray(\n value.shape, value_dtype\n ), # grad value\n core.ShapedArray(\n bias.shape, bias_dtype\n ), # grad bias\n )\n else:\n return (\n core.ShapedArray(\n query.shape, query_dtype\n ), # grad query\n core.ShapedArray(\n key.shape, key_dtype\n ), # grad key\n core.ShapedArray(\n value.shape, value_dtype\n ), # grad value\n )\n\ndef _dot_product_attention_fwd_cuda_lowering(\n ctx, query, key, value, bias, q_seqlen, kv_seqlen, q_offsets,\n kv_offsets, page_table_k, page_table_v, scale, seed, dropout_rate,\n variadic_args, mask_type, layout, sliding_window_length, is_training):\n query_type = ir.RankedTensorType(query.type)\n query_shape = query_type.shape\n key_type = ir.RankedTensorType(key.type)\n key_shape = key_type.shape\n\n if layout == AttentionLayout.BNTH.value:\n B, N, T, H = query_shape\n _, _, S, _ = key_shape\n output_layout = (3, 2, 1, 0)\n output_transpose_perm = mlir.dense_int_array((0, 1, 2, 3))\n else:\n B, T, N, H = query_shape\n _, S, _, _ = key_shape\n output_layout = (3, 1, 2, 0)\n output_transpose_perm = mlir.dense_int_array((0, 2, 1, 3))\n\n max_seg_per_batch = get_max_seg_per_batch(ir.RankedTensorType(q_offsets.type))\n is_paged_attention = check_is_paged_attention(ir.RankedTensorType(page_table_k.type))\n\n output_shape = (B, N, T, H)\n softmax_stat_shape = (B * max_seg_per_batch, N, T)\n workspace_shape = (0,)\n workspace_type = ir.IntegerType.get_unsigned(8)\n\n has_bias, _ = variadic_args\n 
backend_config = create_dot_product_attention_backend_config(\n B, N, T, S, query_type.element_type, scale, seed, dropout_rate,\n mask_type, layout, sliding_window_length, max_seg_per_batch,\n is_paged_attention, is_bwd=False)\n # {Q, K, V, bias*, q_seqlen*, kv_seqlen*, q_offsets*, kv_offsets*}}\n # {output, activation*, workspace}\n has_dropout = dropout_rate > 0\n operands = [query, key, value]\n if has_bias:\n operands.append(bias)\n if has_padding(mask_type) or max_seg_per_batch > 1 or is_paged_attention:\n operands.append(q_seqlen)\n operands.append(kv_seqlen)\n if max_seg_per_batch > 1:\n operands.append(q_offsets)\n operands.append(kv_offsets)\n if is_paged_attention:\n operands.append(page_table_k)\n operands.append(page_table_v)\n\n custom_call_name = get_custom_call_name(has_bias, has_dropout, False)\n\n if is_training:\n result_types = [\n ir.RankedTensorType.get(output_shape, query_type.element_type),\n ir.RankedTensorType.get(softmax_stat_shape, ir.F32Type.get()),\n ir.RankedTensorType.get(workspace_shape, workspace_type),\n ]\n result_layouts = [output_layout] + default_layouts(softmax_stat_shape, workspace_shape)\n else:\n result_types = [\n ir.RankedTensorType.get(output_shape, query_type.element_type),\n ir.RankedTensorType.get(workspace_shape, workspace_type)\n ]\n result_layouts = [output_layout] + default_layouts(workspace_shape)\n # create custom call here\n out = mlir.custom_call(\n custom_call_name,\n result_types=result_types,\n operands=operands,\n backend_config=backend_config,\n operand_layouts=default_layouts(\n *[ir.RankedTensorType(operand.type).shape for operand in operands]),\n result_layouts=result_layouts,\n )\n # drop workspace memory\n # output should be (B, T, N, H) instead of (B, N, T, H)\n if is_training:\n return [hlo.transpose(out.results[0], output_transpose_perm), out.results[1]]\n else:\n return [hlo.transpose(out.results[0], output_transpose_perm)]\n\ndef _dot_product_attention_bwd_cuda_lowering(\n ctx, query, key, value, bias, q_seqlen, kv_seqlen, q_offsets, kv_offsets,\n page_table_k, page_table_v, activation, fwd_output, grad_output,\n scale, seed, dropout_rate, variadic_args, mask_type, layout, sliding_window_length):\n query_type = ir.RankedTensorType(query.type)\n query_shape = query_type.shape\n key_type = ir.RankedTensorType(key.type)\n key_shape = key_type.shape\n value_type = ir.RankedTensorType(value.type)\n\n if layout == AttentionLayout.BNTH.value:\n B, q_N, T, H = query_shape\n _, k_N, S, _ = key_shape\n grad_layout = (3, 2, 1, 0)\n grad_transpose_perm = mlir.dense_int_array((0, 1, 2, 3))\n else:\n B, T, q_N, H = query_shape\n _, S, k_N, _ = key_shape\n grad_layout = (3, 1, 2, 0)\n grad_transpose_perm = mlir.dense_int_array((0, 2, 1, 3))\n\n workspace_shape = (0,)\n workspace_type = ir.IntegerType.get_unsigned(8)\n\n grad_query_shape = (B, q_N, T, H)\n grad_key_shape = (B, k_N, S, H)\n grad_value_shape = (B, k_N, S, H)\n\n has_bias, has_dbias = variadic_args\n max_seg_per_batch = get_max_seg_per_batch(ir.RankedTensorType(q_offsets.type))\n backend_config = create_dot_product_attention_backend_config(\n B, q_N, T, S, query_type.element_type, scale, seed, dropout_rate,\n mask_type, layout, sliding_window_length, max_seg_per_batch,\n False, is_bwd=True)\n # {Q, K, V, activation, dO, bias*, O, q_seqlen*, kv_seqlen*,\n # q_offsets*, kv_offsets*}\n # {dQ, dK, dV, dbias*, workspace}\n has_dropout = dropout_rate > 0\n # create operands\n operands = [query, key, value, activation, grad_output]\n if has_bias:\n # flash attention requires 
bias in the bwd for remat\n operands.append(bias)\n operands.append(fwd_output)\n if has_padding(mask_type) or max_seg_per_batch > 1:\n operands.append(q_seqlen)\n operands.append(kv_seqlen)\n if max_seg_per_batch > 1:\n operands.append(q_offsets)\n operands.append(kv_offsets)\n # get custom call name\n custom_call_name = get_custom_call_name(has_bias, has_dropout, True)\n\n # create output types and layouts\n # grad_query, grad_key, grad_value\n result_types = [\n ir.RankedTensorType.get(grad_query_shape, query_type.element_type),\n ir.RankedTensorType.get(grad_key_shape, key_type.element_type),\n ir.RankedTensorType.get(grad_value_shape, value_type.element_type),\n ]\n result_layouts = [grad_layout, grad_layout, grad_layout]\n bias_type = ir.RankedTensorType(bias.type)\n bias_shape = bias_type.shape\n if has_dbias:\n # cuDNN supports bias for this case\n result_types.append(\n ir.RankedTensorType.get(bias_shape, bias_type.element_type))\n result_layouts = result_layouts + default_layouts(bias_shape)\n # workspace\n result_types.append(ir.RankedTensorType.get(workspace_shape, workspace_type))\n result_layouts = result_layouts + default_layouts(workspace_shape)\n out = mlir.custom_call(\n custom_call_name,\n result_types=result_types,\n operands=operands,\n backend_config=backend_config,\n operand_layouts=default_layouts(\n *[ir.RankedTensorType(operand.type).shape for operand in operands]),\n result_layouts=result_layouts,\n )\n dqkv = (hlo.transpose(out.results[0], grad_transpose_perm),\n hlo.transpose(out.results[1], grad_transpose_perm),\n hlo.transpose(out.results[2], grad_transpose_perm))\n # Only keep dQ, dK, dV and dBias here\n if has_dbias:\n return dqkv + (out.results[3],)\n else:\n return dqkv\n\n# batcher\ndef _check_valid_batch_dims(bdims):\n for dim in bdims:\n if dim not in [0, None]:\n raise NotImplementedError(\n f""Currently only support batch_dim in [0, None], but got {dim=}"")\n\ndef _dot_product_attention_fwd_batcher(\n batched_args, batch_dims, *, scale, seed, dropout_rate, variadic_args,\n mask_type, layout, sliding_window_length, is_training):\n _check_valid_batch_dims(batch_dims)\n query, key, value, bias, q_seqlen, kv_seqlen, \\n q_offsets, kv_offsets, page_table_k, page_table_v = batched_args\n query_bdim = batch_dims[0]\n if is_training:\n out_bdims = query_bdim, query_bdim\n else:\n out_bdims = (query_bdim,)\n\n if layout == AttentionLayout.BNTH.value:\n *Bs, N, T, _ = query.shape\n *_, _, S, _ = key.shape\n else:\n *Bs, T, N, _ = query.shape\n *_, S, _, _ = key.shape\n B = math.prod(Bs)\n has_bias, _ = variadic_args\n original_shape = query.shape\n # reshape to 4D shape\n query = jnp.reshape(query, (B,) + query.shape[-3:])\n key = jnp.reshape(key, (B,) + key.shape[-3:])\n value = jnp.reshape(value, (B,) + key.shape[-3:])\n if has_bias and batch_dims[3] is not None:\n bias = jnp.reshape(bias, (B, N, T, S))\n if has_padding(mask_type):\n q_seqlen = jnp.reshape(q_seqlen, (B, ))\n kv_seqlen = jnp.reshape(kv_seqlen, (B, ))\n\n outputs = _dot_product_attention_fwd_p_wrapper.bind(\n query, key, value, bias, q_seqlen, kv_seqlen, q_offsets, kv_offsets,\n page_table_k, page_table_v, scale=scale, seed=seed, dropout_rate=dropout_rate,\n variadic_args=variadic_args, mask_type=mask_type, layout=layout,\n sliding_window_length=sliding_window_length, is_training=is_training)\n\n # reshape to original shape\n output = outputs[0]\n output = jnp.reshape(output, original_shape)\n if is_training:\n activation = outputs[1]\n activation = jnp.reshape(activation, (*Bs, N, T))\n 
return (output, activation), out_bdims\n else:\n return (output,), out_bdims\n\ndef _dot_product_attention_bwd_batcher(\n batched_args, batch_dims, *, scale, seed, dropout_rate, variadic_args,\n mask_type, layout, sliding_window_length):\n _check_valid_batch_dims(batch_dims)\n query, key, value, bias, q_seqlen, kv_seqlen, q_offsets, kv_offsets, \\n page_table_k, page_table_v, activation, fwd_output, grad_output = batched_args\n query_bdim = batch_dims[0]\n out_bdims = query_bdim, query_bdim, query_bdim\n\n if layout == AttentionLayout.BNTH.value:\n *Bs, N, T, _ = query.shape\n *_, _, S, _ = key.shape\n else:\n *Bs, T, N, _ = query.shape\n *_, S, _, _ = key.shape\n B = math.prod(Bs)\n has_bias, has_dbias = variadic_args\n # Reset the has_dbias if the combined batch size is not 1, because cuDNN only\n # supports dbias with a single batch. In this case, an all-zero dbias will be\n # appended instead.\n if B > 1:\n variadic_args = (has_bias, False)\n original_query_shape = query.shape\n original_key_shape = key.shape\n original_value_shape = value.shape\n original_bias_shape = bias.shape if has_bias else None\n # reshape to 4D shape\n query = jnp.reshape(query, (B,) + query.shape[-3:])\n key = jnp.reshape(key, (B,) + key.shape[-3:])\n value = jnp.reshape(value, (B,) + key.shape[-3:])\n if has_bias and batch_dims[3] is not None:\n bias = jnp.reshape(bias, (B, N, T, S))\n if has_padding(mask_type):\n q_seqlen = jnp.reshape(q_seqlen, (B, ))\n kv_seqlen = jnp.reshape(kv_seqlen, (B, ))\n\n activation = jnp.reshape(activation, (B, N, T))\n fwd_output = jnp.reshape(fwd_output, (B,) + query.shape[-3:])\n grad_output = jnp.reshape(grad_output, (B,) + query.shape[-3:])\n\n grads = _dot_product_attention_bwd_p_wrapper.bind(\n query, key, value, bias, q_seqlen, kv_seqlen, q_offsets, kv_offsets,\n page_table_k, page_table_v, activation, fwd_output, grad_output,\n scale=scale, seed=seed, dropout_rate=dropout_rate, variadic_args=variadic_args,\n mask_type=mask_type, layout=layout,\n sliding_window_length=sliding_window_length,\n )\n\n # reshape to original shape\n grads[0] = jnp.reshape(grads[0], original_query_shape)\n grads[1] = jnp.reshape(grads[1], original_key_shape)\n grads[2] = jnp.reshape(grads[2], original_value_shape)\n if has_dbias:\n assert has_bias\n if variadic_args[1]:\n grads[3] = jnp.reshape(grads[3], original_bias_shape)\n else:\n grads.append(jnp.zeros(original_bias_shape, bias.dtype))\n out_bdims += (batch_dims[3],)\n return grads, out_bdims\n\n# custom partitioning\ndef _get_padded_spec(arg_info):\n spec = None if arg_info.sharding is None else arg_info.sharding.spec\n ndim = arg_info.ndim\n if spec is None:\n return (None,) * ndim\n assert len(spec) <= ndim\n return spec + (None,) * (ndim - len(spec))\n\ndef _check_qkv_bias_mask_spec(\n query_spec, key_spec, value_spec, bias_spec, layout):\n # check qkv spec\n if not query_spec == key_spec == value_spec:\n raise ValueError(""Query, key and value should have same sharding."")\n if layout == AttentionLayout.BNTH.value:\n *batch_spec, num_head_spec, q_seq_spec, head_spec = query_spec\n else:\n *batch_spec, q_seq_spec, num_head_spec, head_spec = query_spec\n if q_seq_spec is not None:\n raise ValueError(""Sharding on sequence dim is not allowed."")\n if head_spec is not None:\n raise ValueError(""Sharding on head dim is not allowed."")\n # check bias spec\n if bias_spec:\n *bias_batch_spec, bias_num_head_spec, bias_q_seq_spec, bias_kv_seq_spec = bias_spec\n if any(bias_batch_spec) and bias_batch_spec != batch_spec or \\n bias_num_head_spec 
is not None and bias_num_head_spec != num_head_spec:\n raise ValueError(\n ""Query and bias should have same sharding on batch and num_head dim."")\n if bias_q_seq_spec is not None or bias_kv_seq_spec is not None:\n raise ValueError(""Sharding on bias sequence dim is not allowed."")\n\n\n# fwd custom partition\ndef _infer_fwd_output_sharding(mesh, arg_shapes, variadic_args,is_training, layout):\n # only sharding on batch and num_head dim is allowed\n # (*batch, q_seq, num_head, head)\n query_spec = _get_padded_spec(arg_shapes[0])\n # (*batch, kv_seq, num_head, head)\n key_spec = _get_padded_spec(arg_shapes[1])\n value_spec = _get_padded_spec(arg_shapes[2])\n has_bias, _ = variadic_args\n bias_spec = _get_padded_spec(arg_shapes[3]) if has_bias else None\n\n _check_qkv_bias_mask_spec(\n query_spec, key_spec, value_spec, bias_spec, layout)\n # keep out sharding same as query sharding since they have same shape\n out_sharding = NamedSharding(mesh, PartitionSpec(*query_spec))\n if is_training:\n # activation sharding\n *batch_spec, q_seq_spec, num_head_spec, _ = query_spec\n activation_sharding = NamedSharding(\n mesh, PartitionSpec(*batch_spec, num_head_spec, q_seq_spec, None))\n return [out_sharding, activation_sharding]\n return [out_sharding]\n\n_dot_product_attention_fwd_lower = custom_partitioning(\n _dot_product_attention_fwd_impl, static_argnums=(10, 11, 12, 13, 14, 15, 16, 17))\n\ndef _dot_product_attention_fwd_infer_sharding_from_operands(\n scale, seed, dropout_rate, variadic_args, mask_type, layout, sliding_window_length,\n is_training, mesh, arg_shapes, result_shape):\n return _infer_fwd_output_sharding(mesh, arg_shapes, variadic_args, is_training, layout)\n\ndef _dot_product_attention_fwd_partition(\n scale, seed, dropout_rate, variadic_args, mask_type, layout, sliding_window_length,\n is_training, mesh, arg_shapes, result_shape):\n # args sharding\n arg_shardings = tuple(arg_i.sharding for arg_i in arg_shapes)\n out_shardings = _infer_fwd_output_sharding(\n mesh, arg_shapes, variadic_args, is_training, layout)\n impl = functools.partial(\n _dot_product_attention_fwd_impl,\n scale=scale,\n seed=seed,\n dropout_rate=dropout_rate,\n variadic_args=variadic_args,\n mask_type=mask_type,\n layout=layout,\n sliding_window_length=sliding_window_length,\n is_training=is_training,\n )\n return mesh, impl, out_shardings, arg_shardings\n\n# bwd custom partition\ndef _infer_bwd_output_sharding(mesh, arg_shapes, layout, variadic_args):\n # (*batch, q_seq, num_head, head)\n query_spec = _get_padded_spec(arg_shapes[0])\n # (*batch, kv_seq, num_head, head)\n key_spec = _get_padded_spec(arg_shapes[1])\n value_spec = _get_padded_spec(arg_shapes[2])\n has_bias, has_dbias = variadic_args\n bias_spec = _get_padded_spec(arg_shapes[3]) if has_bias else None\n _check_qkv_bias_mask_spec(\n query_spec, key_spec, value_spec, bias_spec, layout)\n # keep grad query sharding same as query sharding\n grad_query_sharding = NamedSharding(mesh, PartitionSpec(*query_spec))\n grad_key_sharding = NamedSharding(mesh, PartitionSpec(*key_spec))\n grad_value_sharding = NamedSharding(mesh, PartitionSpec(*key_spec))\n out_shardings = [grad_query_sharding, grad_key_sharding, grad_value_sharding]\n if has_dbias:\n grad_bias_sharding = NamedSharding(mesh, PartitionSpec(*bias_spec))\n out_shardings = out_shardings + [grad_bias_sharding]\n return out_shardings\n\n_dot_product_attention_bwd_lower = custom_partitioning(\n _dot_product_attention_bwd_impl, static_argnums=(13, 14, 15, 16, 17, 18, 19)\n)\n\ndef 
_dot_product_attention_bwd_infer_sharding_from_operands(\n scale, seed, dropout_rate, variadic_args, mask_type, layout,\n sliding_window_length, mesh, arg_shapes, result_shape):\n return _infer_bwd_output_sharding(mesh, arg_shapes, layout, variadic_args)\n\ndef _dot_product_attention_bwd_partition(\n scale, seed, dropout_rate, variadic_args, mask_type, layout,\n sliding_window_length, mesh, arg_shapes, result_shape):\n out_shardings = _infer_bwd_output_sharding(mesh, arg_shapes, layout, variadic_args)\n # args sharding\n arg_shardings = tuple(arg_i.sharding for arg_i in arg_shapes)\n def sharded_impl(*args):\n impl = functools.partial(\n _dot_product_attention_bwd_impl,\n scale=scale,\n seed=seed,\n dropout_rate=dropout_rate,\n variadic_args=variadic_args,\n mask_type=mask_type,\n layout=layout,\n sliding_window_length=sliding_window_length,\n )\n grads = impl(*args)\n _, has_dbias = variadic_args\n if has_dbias:\n query_spec = arg_shardings[0].spec\n batch_spec = query_spec[0]\n local_dbias = grads[3]\n global_dbias = jax.lax.psum(local_dbias, batch_spec)\n grads = grads[:3] + [global_dbias]\n return grads\n return mesh, sharded_impl, out_shardings, arg_shardings\n\n# Create dot_product_attention_fwd_p for forward operation.\n_dot_product_attention_fwd_p = core.Primitive(""dot_product_attention_fwd"")\n_dot_product_attention_fwd_p.multiple_results = True\n_dot_product_attention_fwd_p.def_impl(\n functools.partial(dispatch.apply_primitive, _dot_product_attention_fwd_p)\n)\n_dot_product_attention_fwd_p.def_abstract_eval(\n _dot_product_attention_fwd_abstract\n)\n\nmlir.register_lowering(\n _dot_product_attention_fwd_p,\n _dot_product_attention_fwd_cuda_lowering,\n platform=""cuda"",\n)\n\n_dot_product_attention_fwd_p_wrapper = core.Primitive(\n ""dot_product_attention_fwd_wrapper""\n)\n_dot_product_attention_fwd_p_wrapper.multiple_results = True\n_dot_product_attention_fwd_p_wrapper.def_impl(_dot_product_attention_fwd_impl)\n_dot_product_attention_fwd_p_wrapper.def_abstract_eval(\n _dot_product_attention_fwd_abstract\n)\n\n# Create dot_product_attention_bwd_p for backward operation.\n_dot_product_attention_bwd_p = core.Primitive(""dot_product_attention_bwd"")\n_dot_product_attention_bwd_p.multiple_results = True\n_dot_product_attention_bwd_p.def_impl(\n functools.partial(dispatch.apply_primitive, _dot_product_attention_bwd_p)\n)\n_dot_product_attention_bwd_p.def_abstract_eval(\n _dot_product_attention_bwd_abstract\n)\n\nmlir.register_lowering(\n _dot_product_attention_bwd_p,\n _dot_product_attention_bwd_cuda_lowering,\n platform=""cuda"",\n)\n\n_dot_product_attention_bwd_p_wrapper = core.Primitive(\n ""dot_product_attention_bwd_wrapper""\n)\n_dot_product_attention_bwd_p_wrapper.multiple_results = True\n_dot_product_attention_bwd_p_wrapper.def_impl(_dot_product_attention_bwd_impl)\n_dot_product_attention_bwd_p_wrapper.def_abstract_eval(\n _dot_product_attention_bwd_abstract\n)\n\nbatching.primitive_batchers[\n _dot_product_attention_fwd_p_wrapper\n] = _dot_product_attention_fwd_batcher\nbatching.primitive_batchers[\n _dot_product_attention_bwd_p_wrapper\n] = _dot_product_attention_bwd_batcher\n\ndef not_implemented_sharding_rule(*args, **kwargs):\n return NotImplementedError(""Sharding rule not implemented."")\n\n_dot_product_attention_fwd_lower.def_partition(\n infer_sharding_from_operands=_dot_product_attention_fwd_infer_sharding_from_operands,\n partition=_dot_product_attention_fwd_partition,\n 
sharding_rule=not_implemented_sharding_rule)\n\nmlir.register_lowering(_dot_product_attention_fwd_p_wrapper,\n mlir.lower_fun(_dot_product_attention_fwd_lower, multiple_results=True))\n\n_dot_product_attention_bwd_lower.def_partition(\n infer_sharding_from_operands=_dot_product_attention_bwd_infer_sharding_from_operands,\n partition=_dot_product_attention_bwd_partition,\n sharding_rule=not_implemented_sharding_rule)\n\nmlir.register_lowering(_dot_product_attention_bwd_p_wrapper,\n mlir.lower_fun(_dot_product_attention_bwd_lower, multiple_results=True))\n\ndispatch.prim_requires_devices_during_lowering.add(\n _dot_product_attention_fwd_p\n)\ndispatch.prim_requires_devices_during_lowering.add(\n _dot_product_attention_fwd_p_wrapper\n)\ndispatch.prim_requires_devices_during_lowering.add(\n _dot_product_attention_bwd_p\n)\ndispatch.prim_requires_devices_during_lowering.add(\n _dot_product_attention_bwd_p_wrapper\n)\n\[email protected](jax.custom_vjp, nondiff_argnums=(10, 11, 12, 13, 14, 15, 16, 17, 18))\ndef _dot_product_attention(query: Array,\n key: Array,\n value: Array,\n bias: Array,\n q_seqlen: Array,\n kv_seqlen: Array,\n q_offsets: Array,\n kv_offsets: Array,\n page_table_k: Array,\n page_table_v: Array,\n scale: float,\n seed: int,\n dropout_rate: float,\n variadic_args: tuple[bool, ...],\n mask_type: bool,\n layout: int,\n sliding_window_length: int | None,\n cudnn_version: int,\n return_residual: bool):\n output = _dot_product_attention_fwd(\n query, key, value, bias, q_seqlen, kv_seqlen, q_offsets, kv_offsets,\n page_table_k, page_table_v, scale=scale, seed=seed, dropout_rate=dropout_rate,\n variadic_args=variadic_args, mask_type=mask_type, layout=layout,\n sliding_window_length=sliding_window_length,\n cudnn_version=cudnn_version, return_residual=return_residual)\n return output\n\n_dot_product_attention.defvjp(\n _dot_product_attention_fwd_rule, _dot_product_attention_bwd_rule\n)\n\nfp8_params_keys = [\n 'amax_dQ', 'amax_dK', 'amax_dV', 'amax_dP', # place holder for bwd output\n 'descale_q', 'descale_k', 'descale_v', 'descale_s',\n 'scale_s', 'scale_o', 'descale_o', 'descale_dO',\n 'descale_dP', 'scale_dQ', 'scale_dK', 'scale_dV',\n 'scale_dP'\n]\n\nfp8_params_keys_fwd = [\n 'descale_q', 'descale_k', 'descale_v', 'descale_s', 'scale_s', 'scale_o'\n]\nfp8_params_keys_bwd = [\n 'descale_q', 'descale_k', 'descale_v', 'descale_o', 'descale_dO', 'descale_s',\n 'descale_dP', 'scale_s', 'scale_dQ', 'scale_dK', 'scale_dV', 'scale_dP',\n]\nparams_from_keys = lambda params, keys: [params[key] for key in keys]\n\ndef check_fp8_params(params):\n # Check if all required keys are present\n missing_keys = set(fp8_params_keys) - set(params)\n if missing_keys:\n raise ValueError(f""The following keys are missing from fp8_params: {', '.join(missing_keys)}"")\n\ncheck_is_flash_attention_fp8 = functools.partial(\n check_is_flash_attention,\n has_bias=False,\n is_fp8=True\n)\n\ndef _dot_product_attention_fp8_fwd(\n query, key, value,\n fp8_params_fwd,\n scale, use_causal_mask, layout, cudnn_version):\n check_is_flash_attention_fp8(\n query, key, layout, cudnn_version, is_training=False)\n descale_q, descale_k, descale_v, descale_s, scale_s, scale_o = fp8_params_fwd\n outputs = _dot_product_attention_fp8_fwd_p_wrapper.bind(\n query, key, value,\n descale_q, descale_k, descale_v, descale_s,\n scale_s, scale_o,\n scale=scale, use_causal_mask=use_causal_mask, layout=layout, is_training=False)\n return outputs\n\ndef _dot_product_attention_fp8_fwd_rule(\n query, key, value,\n fp8_params,\n scale, 
use_causal_mask, layout, cudnn_version):\n check_is_flash_attention_fp8(\n query, key, layout, cudnn_version, is_training=True)\n\n outputs = _dot_product_attention_fp8_fwd_p_wrapper.bind(\n query, key, value, *params_from_keys(fp8_params, fp8_params_keys_fwd),\n scale=scale, use_causal_mask=use_causal_mask, layout=layout, is_training=True)\n res = (query, key, value, outputs[3], outputs[0], params_from_keys(fp8_params, fp8_params_keys_bwd))\n return (outputs[0], outputs[1], outputs[2]), res\n\ndef _dot_product_attention_fp8_bwd_rule(\n scale, use_causal_mask, layout, cudnn_version, res, g):\n (query, key, value, activation, fwd_output, aux_params) = res\n grad_output = g[0]\n grads = _dot_product_attention_fp8_bwd_p_wrapper.bind(\n query,\n key,\n value,\n fwd_output,\n grad_output,\n activation,\n *aux_params,\n scale=scale,\n use_causal_mask=use_causal_mask,\n layout=layout,\n )\n\n fp8_params_grads = dict.fromkeys(fp8_params_keys)\n keys_to_grad_indices = ['amax_dQ', 'amax_dK', 'amax_dV', 'amax_dP']\n # grads structure: (dQ, dK, dV, amax_dq, amax_dk, amax_dv, amax_dp)\n for i, key in enumerate(keys_to_grad_indices, start=3):\n fp8_params_grads[key] = grads[i]\n\n return (grads[0], grads[1], grads[2], fp8_params_grads)\n\ndef _dot_product_attention_fp8_fwd_impl(\n query, key, value,\n descale_q, descale_k, descale_v, descale_s, scale_s, scale_o,\n scale, use_causal_mask, layout, is_training):\n outputs = _dot_product_attention_fp8_fwd_p.bind(\n query,\n key,\n value,\n descale_q,\n descale_k,\n descale_v,\n descale_s,\n scale_s,\n scale_o,\n scale=scale,\n use_causal_mask=use_causal_mask,\n layout=layout,\n is_training=is_training,\n )\n return outputs\n\ndef _dot_product_attention_fp8_bwd_impl(\n query, key, value, fwd_output, grad_output, activation,\n descale_q, descale_k, descale_v, descale_o, descale_dO, descale_s,\n descale_dP, scale_s, scale_dQ, scale_dK, scale_dV, scale_dP,\n scale, use_causal_mask, layout):\n grads = _dot_product_attention_fp8_bwd_p.bind(\n query, key, value, fwd_output, grad_output, activation,\n descale_q, descale_k, descale_v, descale_o, descale_dO, descale_s,\n descale_dP, scale_s, scale_dQ, scale_dK, scale_dV, scale_dP,\n scale=scale, use_causal_mask=use_causal_mask, layout=layout)\n return grads\n\n\ndef _dot_product_attention_fp8_fwd_abstract(\n query, key, value,\n descale_q, descale_k, descale_v, descale_s, scale_s, scale_o,\n scale, use_causal_mask, layout, is_training):\n query_dtype = dtypes.canonicalize_dtype(query.dtype)\n if layout == AttentionLayout.BNTH.value:\n B, N, T, _ = query.shape\n _, _, S, _ = key.shape\n else:\n B, T, N, _ = query.shape\n _, S, _, _ = key.shape\n output_shape = query.shape\n softmax_stat_shape = (B, N, T)\n\n # output, amax_s, amax_o[, softmax_stat]\n if is_training:\n return (\n core.ShapedArray(output_shape, query_dtype),\n core.ShapedArray((1,1,1,1), jnp.float32),\n core.ShapedArray((1,1,1,1), jnp.float32),\n core.ShapedArray(softmax_stat_shape, jnp.float32),\n )\n else:\n return (\n core.ShapedArray(output_shape, query_dtype),\n core.ShapedArray((1,1,1,1), jnp.float32),\n core.ShapedArray((1,1,1,1), jnp.float32),\n )\n\ndef _dot_product_attention_fp8_bwd_abstract(\n query, key, value, fwd_output, grad_output, activation,\n descale_q, descale_k, descale_v, descale_o, descale_dO, descale_s,\n descale_dP, scale_s, scale_dQ, scale_dK, scale_dV, scale_dP,\n scale, use_causal_mask, layout):\n query_dtype = dtypes.canonicalize_dtype(query.dtype)\n key_dtype = dtypes.canonicalize_dtype(key.dtype)\n value_dtype = 
dtypes.canonicalize_dtype(value.dtype)\n\n amax_shape = (1,1,1,1)\n\n return (\n core.ShapedArray(query.shape, query_dtype),\n core.ShapedArray(key.shape, key_dtype),\n core.ShapedArray(value.shape, value_dtype),\n core.ShapedArray(amax_shape, jnp.float32),\n core.ShapedArray(amax_shape, jnp.float32),\n core.ShapedArray(amax_shape, jnp.float32),\n core.ShapedArray(amax_shape, jnp.float32),\n )\n\ndef _dot_product_attention_fp8_fwd_cuda_lowering(\n ctx, query, key, value,\n descale_q, descale_k, descale_v, descale_s, scale_s, scale_o,\n scale, use_causal_mask, layout, is_training):\n query_type = ir.RankedTensorType(query.type)\n query_shape = query_type.shape\n key_type = ir.RankedTensorType(key.type)\n key_shape = key_type.shape\n\n if layout == AttentionLayout.BNTH.value:\n B, N, T, H = query_shape\n _, _, S, _ = key_shape\n output_layout = (3, 2, 1, 0)\n output_transpose_perm = mlir.dense_int_array((0, 1, 2, 3))\n else:\n B, T, N, H = query_shape\n _, S, _, _ = key_shape\n output_layout = (3, 1, 2, 0)\n output_transpose_perm = mlir.dense_int_array((0, 2, 1, 3))\n\n output_shape = (B, N, T, H)\n softmax_stat_shape = (B, N, T)\n workspace_shape = (0,)\n amax_shape = (1,1,1,1)\n workspace_type = ir.IntegerType.get_unsigned(8)\n mask_type = MaskType.CAUSAL if use_causal_mask else MaskType.NO_MASK\n backend_config = create_dot_product_attention_fp8_backend_config(\n B, N, T, S, ir.BF16Type.get(), # query_type.element_type,\n scale, mask_type, layout, is_bwd=False,\n )\n\n operands = [query, key, value, descale_q, descale_k, descale_v, descale_s, scale_s, scale_o]\n custom_call_name = get_fp8_custom_call_name(is_bwd=False)\n\n if is_training:\n result_types = [\n ir.RankedTensorType.get(output_shape, query_type.element_type),\n ir.RankedTensorType.get((1,1,1,1), ir.F32Type.get()),\n ir.RankedTensorType.get((1,1,1,1), ir.F32Type.get()),\n ir.RankedTensorType.get(softmax_stat_shape, ir.F32Type.get()),\n ir.RankedTensorType.get(workspace_shape, workspace_type),\n ]\n result_layouts = [output_layout] + default_layouts(amax_shape, amax_shape, softmax_stat_shape, workspace_shape)\n else:\n result_types = [\n ir.RankedTensorType.get(output_shape, query_type.element_type),\n ir.RankedTensorType.get((1,1,1,1), ir.F32Type.get()),\n ir.RankedTensorType.get((1,1,1,1), ir.F32Type.get()),\n ir.RankedTensorType.get(workspace_shape, workspace_type)\n ]\n result_layouts = [output_layout] + default_layouts(amax_shape, amax_shape, workspace_shape)\n\n operand_shapes = [ir.RankedTensorType(operand.type).shape for operand in operands[:3]]\n operand_shapes += [[1, 1, 1, 1]] * 6\n operand_layouts = default_layouts(*operand_shapes)\n out = mlir.custom_call(\n custom_call_name,\n result_types=result_types,\n operands=operands,\n backend_config=backend_config,\n operand_layouts=operand_layouts,\n result_layouts=result_layouts,\n )\n\n if is_training:\n return [hlo.transpose(out.results[0], output_transpose_perm), out.results[1], out.results[2], out.results[3]]\n else:\n return [hlo.transpose(out.results[0], output_transpose_perm), out.results[1], out.results[2]]\n\n\n\ndef _dot_product_attention_fp8_bwd_cuda_lowering(\n ctx, query, key, value, fwd_output, grad_output, activation,\n descale_q, descale_k, descale_v, descale_o, descale_dO, descale_s,\n descale_dP, scale_s, scale_dQ, scale_dK, scale_dV, scale_dP, scale,\n use_causal_mask, layout):\n query_type = ir.RankedTensorType(query.type)\n query_shape = query_type.shape\n key_type = ir.RankedTensorType(key.type)\n key_shape = key_type.shape\n value_type = 
ir.RankedTensorType(value.type)\n\n if layout == AttentionLayout.BNTH.value:\n B, q_N, T, H = query_shape\n _, k_N, S, _ = key_shape\n grad_layout = (3, 2, 1, 0)\n grad_transpose_perm = mlir.dense_int_array((0, 1, 2, 3))\n else:\n B, T, q_N, H = query_shape\n _, S, k_N, _ = key_shape\n grad_layout = (3, 1, 2, 0)\n grad_transpose_perm = mlir.dense_int_array((0, 2, 1, 3))\n\n workspace_shape = (0,)\n workspace_type = ir.IntegerType.get_unsigned(8)\n amax_shape = (1,1,1,1)\n\n grad_query_shape = (B, q_N, T, H)\n grad_key_shape = (B, k_N, S, H)\n grad_value_shape = (B, k_N, S, H)\n mask_type = MaskType.CAUSAL if use_causal_mask else MaskType.NO_MASK\n\n backend_config = create_dot_product_attention_fp8_backend_config(\n B, q_N, T, S, ir.BF16Type.get(),\n scale, mask_type, layout, is_bwd=True,\n )\n\n operands = [\n query,\n key,\n value,\n fwd_output,\n grad_output,\n activation,\n descale_q,\n descale_k,\n descale_v,\n descale_o,\n descale_dO,\n descale_s,\n descale_dP,\n scale_s,\n scale_dQ,\n scale_dK,\n scale_dV,\n scale_dP,\n ]\n\n custom_call_name = get_fp8_custom_call_name(is_bwd=True)\n\n result_types = [\n ir.RankedTensorType.get(grad_query_shape, query_type.element_type),\n ir.RankedTensorType.get(grad_key_shape, key_type.element_type),\n ir.RankedTensorType.get(grad_value_shape, value_type.element_type),\n ir.RankedTensorType.get(amax_shape, ir.F32Type.get()),\n ir.RankedTensorType.get(amax_shape, ir.F32Type.get()),\n ir.RankedTensorType.get(amax_shape, ir.F32Type.get()),\n ir.RankedTensorType.get(amax_shape, ir.F32Type.get()),\n ]\n result_layouts = [grad_layout, grad_layout, grad_layout] + default_layouts(amax_shape, amax_shape, amax_shape, amax_shape)\n\n result_types.append(ir.RankedTensorType.get(workspace_shape, workspace_type))\n result_layouts = result_layouts + default_layouts(workspace_shape)\n out = mlir.custom_call(\n custom_call_name,\n result_types=result_types,\n operands=operands,\n backend_config=backend_config,\n operand_layouts=default_layouts(\n *[ir.RankedTensorType(operand.type).shape for operand in operands]),\n result_layouts=result_layouts,\n )\n dqkv_amaxs = (hlo.transpose(out.results[0], grad_transpose_perm),\n hlo.transpose(out.results[1], grad_transpose_perm),\n hlo.transpose(out.results[2], grad_transpose_perm),\n out.results[3], out.results[4], out.results[5], out.results[6])\n # Only keep dQ, dK, dV, amax_dQ, amax_dK, amax_dV, amax_dP here\n return dqkv_amaxs\n\ndef _dot_product_attention_fp8_fwd_batcher(\n batched_args, batch_dims, *, scale, use_causal_mask, layout, is_training):\n _check_valid_batch_dims(batch_dims)\n query, key, value,\\n descale_q, descale_k, descale_v, descale_s, scale_s, scale_o, = batched_args\n query_bdim = batch_dims[0]\n if is_training:\n out_bdims = query_bdim, query_bdim\n else:\n out_bdims = (query_bdim,)\n\n if layout == AttentionLayout.BNTH.value:\n *Bs, N, T, _ = query.shape\n *_, _, S, _ = key.shape\n else:\n *Bs, T, N, _ = query.shape\n *_, S, _, _ = key.shape\n B = math.prod(Bs)\n\n # reshape to 4D shape\n query = jnp.reshape(query, (B,) + query.shape[-3:])\n key = jnp.reshape(key, (B,) + key.shape[-3:])\n value = jnp.reshape(value, (B,) + key.shape[-3:])\n\n outputs = _dot_product_attention_fp8_fwd_p_wrapper.bind(\n query, key, value, descale_q, descale_k, descale_v, descale_s, scale_s, scale_o,\n scale=scale, use_causal_mask=use_causal_mask, layout=layout, is_training=is_training)\n\n # reshape to original shape\n output, amax_s, amax_o = outputs[0], outputs[1], outputs[2]\n output = jnp.reshape(output, 
query.shape)\n if is_training:\n activation = outputs[3]\n activation = jnp.reshape(activation, (*Bs, N, T))\n return (output, amax_s, amax_o, activation), out_bdims\n else:\n return (output, amax_s, amax_o), out_bdims\n\ndef _dot_product_attention_fp8_bwd_batcher(\n batched_args, batch_dims, *, scale, use_causal_mask, layout):\n _check_valid_batch_dims(batch_dims)\n query, key, value, fwd_output, grad_output, activation,\\n descale_q, descale_k, descale_v, descale_o, descale_dO, descale_s, descale_dP,\\n scale_s, scale_dQ, scale_dK, scale_dV, scale_dP = batched_args\n query_bdim = batch_dims[0]\n out_bdims = query_bdim, query_bdim, query_bdim\n\n if layout == AttentionLayout.BNTH.value:\n *Bs, N, T, _ = query.shape\n *_, _, S, _ = key.shape\n else:\n *Bs, T, N, _ = query.shape\n *_, S, _, _ = key.shape\n B = math.prod(Bs)\n\n # reshape to 4D shape\n query = jnp.reshape(query, (B,) + query.shape[-3:])\n key = jnp.reshape(key, (B,) + key.shape[-3:])\n value = jnp.reshape(value, (B,) + key.shape[-3:])\n\n activation = jnp.reshape(activation, (B, N, T))\n fwd_output = jnp.reshape(fwd_output, (B,) + query.shape[-3:])\n grad_output = jnp.reshape(grad_output, (B,) + query.shape[-3:])\n\n grads = _dot_product_attention_fp8_bwd_p_wrapper.bind(\n query, key, value, fwd_output, grad_output, activation,\n descale_q, descale_k, descale_v, descale_o, descale_dO, descale_s, descale_dP, scale_s, scale_dQ, scale_dK, scale_dV, scale_dP,\n scale=scale, use_causal_mask=use_causal_mask, layout=layout,\n )\n\n grad_query, grad_key, grad_value = grads[:3]\n # reshape to original shape\n grad_query = jnp.reshape(grad_query, query.shape)\n grad_key = jnp.reshape(grad_key, key.shape)\n grad_value = jnp.reshape(grad_value, value.shape)\n\n return grads, out_bdims\n\ndef _infer_fp8_fwd_output_sharding(mesh, arg_shapes, is_training, layout):\n # Prepare variadic_args for the original function\n has_bias = False # Adjust as needed\n variadic_args = (has_bias, None) # Dummy value, adjust as necessary\n\n # Call the original function with the required parameters\n output_sharding = _infer_fwd_output_sharding(mesh, arg_shapes, variadic_args, is_training, layout)\n amax_sharding = NamedSharding(mesh, PartitionSpec())\n if is_training:\n out_sharding, activation_sharding = output_sharding[0], output_sharding[1]\n return [out_sharding, amax_sharding, amax_sharding, activation_sharding]\n return output_sharding + [amax_sharding, amax_sharding]\n\n_dot_product_attention_fp8_fwd_lower = custom_partitioning(\n _dot_product_attention_fp8_fwd_impl, static_argnums=(9, 10, 11, 12))\n\ndef _dot_product_attention_fp8_fwd_infer_sharding_from_operands(\n scale, use_causal_mask, layout, is_training,\n mesh, arg_shapes, result_shape):\n return _infer_fp8_fwd_output_sharding(mesh, arg_shapes, is_training, layout)\n\ndef _dot_product_attention_fp8_fwd_partition(\n scale, use_causal_mask, layout, is_training,\n mesh, arg_shapes, result_shape):\n # args sharding\n arg_shardings = tuple(arg_i.sharding for arg_i in arg_shapes)\n out_shardings = _infer_fp8_fwd_output_sharding(\n mesh, arg_shapes, is_training, layout)\n impl = functools.partial(\n _dot_product_attention_fp8_fwd_impl, scale=scale, use_causal_mask=use_causal_mask,\n layout=layout, is_training=is_training)\n return mesh, impl, out_shardings, arg_shardings\n\ndef _infer_fp8_bwd_output_sharding(mesh, arg_shapes, layout):\n # Prepare variadic_args for the original function\n has_bias = False # Adjust as needed\n has_dbias = False # Adjust as needed\n variadic_args = (has_bias, 
has_dbias) # Dummy value, adjust as necessary\n\n # Call the original function with the required parameters\n output_shardings = _infer_bwd_output_sharding(mesh, arg_shapes, layout, variadic_args)\n\n # Prepare amax_sharding\n amax_sharding = NamedSharding(mesh, PartitionSpec()) # Use a default spec or adjust as needed\n\n # Append amax_sharding for each output sharding\n out_shardings_with_amax = output_shardings + [amax_sharding] * 4\n\n return out_shardings_with_amax\n\n_dot_product_attention_fp8_bwd_lower = custom_partitioning(\n _dot_product_attention_fp8_bwd_impl, static_argnums=(18,19,20)\n)\n\ndef _dot_product_attention_fp8_bwd_infer_sharding_from_operands(\n scale, use_causal_mask, layout, mesh,\n arg_shapes, result_shape):\n return _infer_fp8_bwd_output_sharding(mesh, arg_shapes, layout)\n\ndef _dot_product_attention_fp8_bwd_partition(\n scale, use_causal_mask, layout, mesh,\n arg_shapes, result_shape):\n out_shardings = _infer_fp8_bwd_output_sharding(mesh, arg_shapes, layout)\n # args sharding\n arg_shardings = tuple(arg_i.sharding for arg_i in arg_shapes)\n impl = functools.partial(\n _dot_product_attention_fp8_bwd_impl, scale=scale,\n use_causal_mask=use_causal_mask, layout=layout\n )\n return mesh, impl, out_shardings, arg_shardings\n\n# Create dot_product_attention_fp8_fwd_p for forward operation.\n_dot_product_attention_fp8_fwd_p = core.Primitive(""dot_product_attention_fp8_fwd"")\n_dot_product_attention_fp8_fwd_p.multiple_results = True\n_dot_product_attention_fp8_fwd_p.def_impl(\n functools.partial(dispatch.apply_primitive, _dot_product_attention_fp8_fwd_p)\n)\n_dot_product_attention_fp8_fwd_p.def_abstract_eval(\n _dot_product_attention_fp8_fwd_abstract\n)\n\nmlir.register_lowering(\n _dot_product_attention_fp8_fwd_p,\n _dot_product_attention_fp8_fwd_cuda_lowering,\n platform=""cuda"",\n)\n\n_dot_product_attention_fp8_fwd_p_wrapper = core.Primitive(\n ""dot_product_attention_fp8_fwd_wrapper""\n)\n_dot_product_attention_fp8_fwd_p_wrapper.multiple_results = True\n_dot_product_attention_fp8_fwd_p_wrapper.def_impl(_dot_product_attention_fp8_fwd_impl)\n_dot_product_attention_fp8_fwd_p_wrapper.def_abstract_eval(\n _dot_product_attention_fp8_fwd_abstract\n)\n\n# Create dot_product_attention_bwd_p for backward operation.\n_dot_product_attention_fp8_bwd_p = core.Primitive(""dot_product_attention_fp8_bwd"")\n_dot_product_attention_fp8_bwd_p.multiple_results = True\n_dot_product_attention_fp8_bwd_p.def_impl(\n functools.partial(dispatch.apply_primitive, _dot_product_attention_fp8_bwd_p)\n)\n_dot_product_attention_fp8_bwd_p.def_abstract_eval(\n _dot_product_attention_fp8_bwd_abstract\n)\n\nmlir.register_lowering(\n _dot_product_attention_fp8_bwd_p,\n _dot_product_attention_fp8_bwd_cuda_lowering,\n platform=""cuda"",\n)\n\n_dot_product_attention_fp8_bwd_p_wrapper = core.Primitive(\n ""dot_product_attention_fp8_bwd_wrapper""\n)\n_dot_product_attention_fp8_bwd_p_wrapper.multiple_results = True\n_dot_product_attention_fp8_bwd_p_wrapper.def_impl(_dot_product_attention_fp8_bwd_impl)\n_dot_product_attention_fp8_bwd_p_wrapper.def_abstract_eval(\n _dot_product_attention_fp8_bwd_abstract\n)\n\nbatching.primitive_batchers[\n _dot_product_attention_fp8_fwd_p_wrapper\n] = _dot_product_attention_fp8_fwd_batcher\nbatching.primitive_batchers[\n _dot_product_attention_fp8_bwd_p_wrapper\n] = _dot_product_attention_fp8_bwd_batcher\n\n_dot_product_attention_fp8_fwd_lower.def_partition(\n infer_sharding_from_operands=_dot_product_attention_fp8_fwd_infer_sharding_from_operands,\n 
partition=_dot_product_attention_fp8_fwd_partition)\n\nmlir.register_lowering(_dot_product_attention_fp8_fwd_p_wrapper,\n mlir.lower_fun(_dot_product_attention_fp8_fwd_lower, multiple_results=True))\n\n_dot_product_attention_fp8_bwd_lower.def_partition(\n infer_sharding_from_operands=_dot_product_attention_fp8_bwd_infer_sharding_from_operands,\n partition=_dot_product_attention_fp8_bwd_partition)\n\nmlir.register_lowering(_dot_product_attention_fp8_bwd_p_wrapper,\n mlir.lower_fun(_dot_product_attention_fp8_bwd_lower, multiple_results=True))\n\ndispatch.prim_requires_devices_during_lowering.add(\n _dot_product_attention_fp8_fwd_p\n)\ndispatch.prim_requires_devices_during_lowering.add(\n _dot_product_attention_fp8_fwd_p_wrapper\n)\ndispatch.prim_requires_devices_during_lowering.add(\n _dot_product_attention_fp8_bwd_p\n)\ndispatch.prim_requires_devices_during_lowering.add(\n _dot_product_attention_fp8_bwd_p_wrapper\n)\n\[email protected](jax.custom_vjp, nondiff_argnums=(4, 5, 6, 7))\ndef _dot_product_attention_fp8(query: Array,\n key: Array,\n value: Array,\n fp8_params: dict[str, Array],\n scale: float,\n use_causal_mask: bool,\n layout: int,\n cudnn_version: int):\n output, amax_s, amax_o = _dot_product_attention_fp8_fwd(\n query, key, value, params_from_keys(fp8_params, fp8_params_keys_fwd),\n scale, use_causal_mask, layout, cudnn_version\n )\n return output, amax_s, amax_o\n\n_dot_product_attention_fp8.defvjp(_dot_product_attention_fp8_fwd_rule, _dot_product_attention_fp8_bwd_rule)\n\ndef combine_bias_and_mask(bias, mask, dtype):\n if bias is not None:\n # reshape bias to have 4D shape\n bias = bias.reshape((1,) * (4 - len(bias.shape)) + bias.shape)\n\n if mask is not None:\n if mask.dtype == jnp.bool:\n large_negative_number = get_large_negative_number(dtype)\n mask = jnp.where(mask, jnp.asarray(0, dtype), large_negative_number)\n # reshape mask to have 4D shape\n mask = mask.reshape((1,) * (4 - len(mask.shape)) + mask.shape) # type: ignore[union-attr]\n\n # combine bias and mask\n if bias is None:\n bias = mask\n else:\n if mask is not None:\n # should be broadcast to same shape\n bias = bias + mask\n return bias\n\n# User interface\ndef paged_attention(\n query: Array,\n key: Array,\n value: Array,\n q_seqlen: Array,\n kv_seqlen: Array,\n page_table_k: Array,\n page_table_v: Array,\n bias: Array | None = None,\n mask: Array | None = None,\n fp8_params: FP8Params | None = None,\n *,\n scale: float = 1.0,\n mask_type: MaskType = MaskType.NO_MASK,\n seed: int = 42,\n dropout_rate: float = 0.,\n qkv_layout: str = ""BTNH"",\n sliding_window_length: int | None = None,\n use_fp8: bool = False,\n return_residual: bool = False\n):\n """"""Computes paged attention described in https://arxiv.org/pdf/2309.06180.\n\n B = batch size\n S = length of the key/value (source)\n T = length of the query (target)\n N = number of attention heads\n H = dimensions of each attention head.\n\n Args:\n query: Queries for attention calculation with a shape of BTNH or BNTH.\n key: Keys for attention calculation with a shape of\n [num_blocks, block_size, N, H] or [num_blocks, N, block_size, H] where\n num_blocks = B * Ceil(S / block_size).\n value: Values to be used in attention with a shape of\n [num_blocks, block_size, N, H] or [num_blocks, N, block_size, H] where\n num_blocks = B * Ceil(S / block_size).\n q_seqlen: Non padded sequence length of query with a shape of B.\n kv_seqlen: Non padded sequence length of key and value with a shape of B.\n page_table_k: page table for key of shape [B, 1, 
num_blocks_per_batch, 1]\n where num_blocks_per_batch = Ceil(S / block_size).\n page_table_v: page table for value of shape [B, 1, num_blocks_per_batch, 1]\n where num_blocks_per_batch = Ceil(S / block_size).\n bias: Bias to be added to logits with a shape of BNTS.\n mask: Mask used to filter out logits with a shape of BNTS.\n scale: Scale for the query.\n qkv_layout: Layout string, with supported formats being BTNH, BNTH, BSNH,\n BNSH.\n sliding_window_length: Window size to make attention only attend to each\n token's left local window (pos - sliding_window_length, pos] where `pos`\n is the index of each token. E.g., if sliding_window_length == 3 and the\n sequence is [0, 1, 2, 3, c, 4, 5], token `c` can attend to [4, 5, c].\n use_fp8: Whether to use FP8 attention mechanism.\n return_residual: Whether to return the logsumexp tensor of shape BTN\n or BNT to users. See section 3.1.1 in the FlashAttention-2 paper:\n https://arxiv.org/pdf/2307.08691 to find the definition of logsumexp.\n Returns:\n output: the same shape as the query.\n residual: the logsumexp tensor if return_residual=True. (non fp8)\n """"""\n cudnn_version = check_cudnn_version()\n layout = _normalize_layout(qkv_layout)\n if use_fp8:\n raise ValueError(""Paged attention doesn't support fp8 for now."")\n if has_padding(mask_type) and (q_seqlen is None or kv_seqlen is None):\n raise ValueError(""Require q_seqlen and kv_seqlen to generate padding mask."")\n if sliding_window_length is not None and sliding_window_length <= 0:\n raise ValueError(\n f""Require sliding_window_length > 0, got {sliding_window_length}."")\n\n bias = combine_bias_and_mask(bias, mask, query.dtype)\n # check if input shape and data type is compatiable\n check_layout(query, key, value, bias, q_seqlen, kv_seqlen, None, None,\n page_table_k, page_table_v, layout)\n has_bias = bias is not None\n has_dbias = has_bias and \\n should_export_dbias(bias.shape, query.shape, layout) # type: ignore[union-attr]\n variadic_args = (has_bias, has_dbias)\n\n _not_used = jnp.zeros(0, dtype=query.dtype)\n if bias is None:\n bias = _not_used\n\n output = _dot_product_attention(\n query, key, value, bias, q_seqlen, kv_seqlen, _not_used, _not_used,\n page_table_k, page_table_v, scale, seed, dropout_rate, variadic_args,\n mask_type, layout.value, sliding_window_length, cudnn_version,\n return_residual)\n return output\n\n\ndef dot_product_attention(\n query: Array,\n key: Array,\n value: Array,\n bias: Array | None = None,\n mask: Array | None = None,\n q_seqlen: Array | None = None,\n kv_seqlen: Array | None = None,\n q_offsets: Array | None = None,\n kv_offsets: Array | None = None,\n fp8_params: FP8Params | None = None,\n *,\n scale: float = 1.0,\n mask_type: MaskType = MaskType.NO_MASK,\n seed: int = 42,\n dropout_rate: float = 0.,\n qkv_layout: str = ""BTNH"",\n sliding_window_length: int | None = None,\n use_fp8: bool = False,\n return_residual: bool = False\n):\n """"""Computes dot-product attention given query (Q), key (K), and value (V).\n\n This function serves as the core operation for applying attention\n mechanisms as described in the paper [https://arxiv.org/abs/1706.03762].\n Initially, it determines the attention weights by processing Q and K,\n subsequently combining the outcomes using K. 
Throughout this function, we\n utilize the following uppercase letters to represent specific parameters of\n array:\n\n B = batch size\n S = length of the key/value (source)\n T = length of the query (target)\n N = number of attention heads\n H = dimensions of each attention head.\n\n The supported layouts for Q, K, V are either BT(S)NH or BNT(S)H, and they must\n adhere to the same layout. The output layout remains consistent with Q,\n defaulting to BT(S)NH.\n\n Args:\n query: Queries for attention calculation with a shape of BTNH or BNTH.\n key: Keys for attention calculation with a shape of BSNH or BNSH.\n value: Values to be used in attention with a shape of BSNH or BNSH.\n bias: Bias to be added to logits with a shape of BNTS.\n mask: Mask used to filter out logits with a shape of BNTS.\n q_seqlen: Non padded sequence length of query with a shape of B.\n If q_offsets is set, q_seqlen should have shape [B,M] where M is the\n maximum number of segments per batch. For batch that has less segments\n than maximum segments, fill the padded entries with -1.\n kv_seqlen: Non padded sequence length of key and value with a shape of B.\n If kv_offsets is set, kv_seqlen should have shape [B,M] where M is the\n maximum number of segments per batch. For batch that has less segments\n than maximum segments, fill the padded entries with -1.\n q_offsets: offset of each segment packed in query with a shape of [B,M+1]\n where M is the maximum number of segments per batch. For batch that has\n less segments than maximum segments, fill the padded entries with -1.\n E.g, if 2 batches has 3 and 2 segments respectively, each segment has\n size 1, q_offsets = [[0,1,2,-1], [0,1,-1,-1]]. q_seqlen should be set\n to indicate the size of each segment.\n kv_offsets: offset of each segment packed in key with a shape of [B,M+1]\n where M is the maximum number of segments per batch. For batch that has\n less segments than maximum segments, fill the padded entries with -1.\n E.g, if 2 batches has 3 and 2 segments respectively, each segment has\n size 1, kv_offsets = [[0,1,2,-1], [0,1,-1,-1]]. kv_seqlen should be set\n to indicate the size of each segment.\n scale: Scale for the query.\n dropout_rate: Dropout rate.\n qkv_layout: Layout string, with supported formats being BTNH, BNTH, BSNH,\n BNSH.\n sliding_window_length: Window size to make attention only attend to each\n token's left local window (pos - sliding_window_length, pos] where `pos`\n is the index of each token. E.g., if sliding_window_length == 3 and the\n sequence is [0, 1, 2, 3, c, 4, 5], token `c` can attend to [4, 5, c].\n use_fp8: Whether to use FP8 attention mechanism.\n return_residual: Whether to return the logsumexp tensor of shape BTN\n or BNT to users. See section 3.1.1 in the FlashAttention-2 paper:\n https://arxiv.org/pdf/2307.08691 to find the definition of logsumexp.\n Returns:\n output: the same shape as the query.\n residual: the logsumexp tensor if return_residual=True. (non fp8)\n amax_s: amax of state. (fp8 only)\n amax_o: amax of output. 
(fp8 only)\n """"""\n # TODO(b/380898464): Check the compute capability, e.g., require GPU device,\n # in the kernel implementation (c++) code.\n cudnn_version = check_cudnn_version()\n layout = _normalize_layout(qkv_layout)\n\n if use_fp8:\n if fp8_params is None:\n raise ValueError(""fp8_params should not be None."")\n if mask_type not in (MaskType.NO_MASK, MaskType.CAUSAL):\n raise ValueError(""Only NO_MASK or CAUSAL masks are supported for fp8."")\n if not all(x is None for x in [bias, mask, q_seqlen, kv_seqlen]):\n raise ValueError(\n f""Expected 'None' for bias, mask, q_seqlen, and kv_seqlen, ""\n f""but got: bias={bias}, mask={mask}, q_seqlen={q_seqlen}, kv_seqlen={kv_seqlen}""\n )\n check_fp8_params(fp8_params)\n check_layout(query, key, value, bias, q_seqlen, kv_seqlen, q_offsets, kv_offsets,\n None, None, layout)\n output, amax_s, amax_o = _dot_product_attention_fp8(\n query, key, value, fp8_params,\n scale, mask_type == MaskType.CAUSAL, layout.value, cudnn_version\n )\n return output, amax_s, amax_o\n else:\n if has_padding(mask_type) and (q_seqlen is None or kv_seqlen is None):\n raise ValueError(""Require q_seqlen and kv_seqlen to generate padding mask"")\n if sliding_window_length is not None and sliding_window_length <= 0:\n raise ValueError(\n f""Require sliding_window_length > 0, got {sliding_window_length}"")\n if q_offsets is not None and (q_seqlen is None or kv_seqlen is None):\n raise ValueError(""Require q_seqlen and kv_seqlen to use packed layout"")\n\n bias = combine_bias_and_mask(bias, mask, query.dtype)\n # check if input shape and data type is compatiable\n check_layout(query, key, value, bias, q_seqlen, kv_seqlen, q_offsets, kv_offsets,\n None, None, layout)\n has_bias = bias is not None\n has_dbias = has_bias and \\n should_export_dbias(bias.shape, query.shape, layout) # type: ignore[union-attr]\n variadic_args = (has_bias, has_dbias)\n\n _not_used = jnp.zeros(0, dtype=query.dtype)\n if bias is None:\n bias = _not_used\n if q_seqlen is None:\n q_seqlen = _not_used\n if kv_seqlen is None:\n kv_seqlen = _not_used\n if q_offsets is None:\n q_offsets = _not_used\n if kv_offsets is None:\n kv_offsets = _not_used\n\n output = _dot_product_attention(\n query, key, value, bias, q_seqlen, kv_seqlen, q_offsets, kv_offsets,\n _not_used, _not_used, scale, seed, dropout_rate, variadic_args,\n mask_type, layout.value, sliding_window_length, cudnn_version,\n return_residual)\n return output\n",python,tab
|
3 |
+
2,337,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,0,"6:59:28 AM [info] Activating crowd-code\n6:59:28 AM [info] Recording started\n6:59:28 AM [info] Initializing git provider using file system watchers...\n",Log,tab
|
4 |
+
3,637,"extension-output-pdoom-org.crowd-code-#1-crowd-code",150,0,"6:59:28 AM [info] Git repository found\n6:59:28 AM [info] Git provider initialized successfully\n6:59:29 AM [info] Initial git state: [object Object]\n",Log,content
|
5 |
+
4,175136,".venv/lib/python3.10/site-packages/jax/_src/cudnn/fused_attention_stablehlo.py",0,0,"",python,tab
|
6 |
+
5,237672,".venv/lib/python3.10/site-packages/jax/_src/cudnn/fused_attention_stablehlo.py",12124,0,"",python,selection_command
|
7 |
+
6,238004,".venv/lib/python3.10/site-packages/jax/_src/cudnn/fused_attention_stablehlo.py",12107,19,"",python,content
|
8 |
+
7,238080,".venv/lib/python3.10/site-packages/jax/_src/cudnn/fused_attention_stablehlo.py",12113,0,"",python,selection_command
|
9 |
+
8,239005,".venv/lib/python3.10/site-packages/jax/_src/cudnn/fused_attention_stablehlo.py",12084,0,"",python,selection_command
|
10 |
+
9,239210,".venv/lib/python3.10/site-packages/jax/_src/cudnn/fused_attention_stablehlo.py",12054,0,"",python,selection_command
|
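The first record above captures the full source of .venv/lib/python3.10/site-packages/jax/_src/cudnn/fused_attention_stablehlo.py as it stood during this recording session. For orientation only, here is a minimal, hypothetical usage sketch of the public dot_product_attention entry point defined in that file (the call signature, MaskType, and the "BTNH" layout string all appear verbatim in the recorded source; the shapes, dtypes, and values below are illustrative assumptions, and running it requires a CUDA device with a cuDNN version that passes the module's own checks):

import jax
import jax.numpy as jnp
from jax._src.cudnn.fused_attention_stablehlo import (
    MaskType,
    dot_product_attention,
)

# Illustrative sizes: batch, target/source length, heads, head dim.
B, T, S, N, H = 2, 128, 128, 8, 64
kq, kk, kv = jax.random.split(jax.random.PRNGKey(0), 3)
query = jax.random.normal(kq, (B, T, N, H), dtype=jnp.bfloat16)
key = jax.random.normal(kk, (B, S, N, H), dtype=jnp.bfloat16)
value = jax.random.normal(kv, (B, S, N, H), dtype=jnp.bfloat16)

# Causal flash attention in the default BTNH layout; per the recorded
# docstring, the output has the same shape as `query`.
out = dot_product_attention(
    query, key, value,
    scale=1.0 / (H ** 0.5),
    mask_type=MaskType.CAUSAL,
    qkv_layout="BTNH",
)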
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-14c33a6c-d06d-421b-aaef-66e0673e81a31753986093740-2025_07_31-20.21.52.136/source.csv
ADDED
The diff for this file is too large to render.
See raw diff
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-1a3a8350-ade5-4f14-90d3-a2023f5be9fa1753600712073-2025_07_27-09.18.39.905/source.csv
ADDED
The diff for this file is too large to render.
See raw diff
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-1abe6561-37fa-44c4-a02e-35deedf040521754322372590-2025_08_04-17.46.19.699/source.csv
ADDED
The diff for this file is too large to render.
See raw diff
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-210e88c5-c80c-4b42-a393-717923a05daf1751602893995-2025_07_04-06.22.30.835/source.csv
ADDED
@@ -0,0 +1,162 @@
1 |
+
Sequence,Time,File,RangeOffset,RangeLength,Text,Language,Type
|
2 |
+
1,4,"experiments/tokenizer_cross_node_checkpointing_test.sh",0,0,"#!/usr/bin/env bash\nsource .venv/bin/activate\n\n\ndata_dir='data_tfrecords'\n\nsrun python train_tokenizer.py \\n --batch_size 48 \\n --num_steps 300000 \\n --warmup_steps 10000 \\n --seed 0 \\n --min_lr=0.0000866 \\n --max_lr=0.0000866 \\n --data_dir $data_dir",shellscript,tab
|
3 |
+
2,1227,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,0,"6:22:29 AM [info] Activating crowd-code\n6:22:30 AM [info] Recording started\n6:22:30 AM [info] Initializing git provider using file system watchers...\n",Log,tab
|
4 |
+
3,1964,"extension-output-pdoom-org.crowd-code-#1-crowd-code",150,0,"6:22:31 AM [info] Git repository found\n6:22:31 AM [info] Git provider initialized successfully\n",Log,content
|
5 |
+
4,2138,"extension-output-pdoom-org.crowd-code-#1-crowd-code",245,0,"6:22:31 AM [info] Initial git state: [object Object]\n",Log,content
|
6 |
+
5,2160,"experiments/tokenizer_cross_node_checkpointing_test.sh",0,0,"",shellscript,tab
|
7 |
+
6,10632,"TERMINAL",0,0,"/usr/bin/python3 /ictstr01/home/aih/franz.srambical/.cursor-server/extensions/ms-python.python-2024.12.3-linux-x64/python_files/printEnvVariablesToFile.py /ictstr01/home/aih/franz.srambical/.cursor-server/extensions/ms-python.python-2024.12.3-linux-x64/python_files/deactivate/bash/envVars.txt",,terminal_command
|
8 |
+
7,10679,"TERMINAL",0,0,"]633;E;/usr/bin/python3 /ictstr01/home/aih/franz.srambical/.cursor-server/extensions/ms-python.python-2024.12.3-linux-x64/python_files/printEnvVariablesToFile.py /ictstr01/home/aih/franz.srambical/.cursor-server/extensions/ms-python.python-2024.12.3-linux-x64/python_files/deactivate/bash/envVars.txt;302e415b-028b-436f-bfd4-9e88a911b8d3]633;C",,terminal_output
|
9 |
+
8,10769,"TERMINAL",0,0,"]0;franz.srambical@hpc-submit01:/ictstr01/home/aih/franz.srambical/.cursor-server/extensions/ms-python.python-2024.12.3-linux-x64/python_files/deactivate/bash]633;D;0",,terminal_output
|
10 |
+
9,155553,"TERMINAL",0,0,"salloc --reservation=haicu_stefan -p gpu_p --time=05:00:00 --job-name=interactive_bash --qos=gpu_normal --gres=gpu:4 -w supergpu18 --cpus-per-task=1 --ntasks-per-node=1",,terminal_command
|
11 |
+
10,155632,"TERMINAL",0,0,"]633;E;salloc --reservation=haicu_stefan -p gpu_p --time=05:00:00 --job-name=interactive_bash --qos=gpu_normal --gres=gpu:4 -w supergpu18 --cpus-per-task=1 --ntasks-per-node=1;d83d794d-068d-45a7-86c8-da2446d84194]633;Csalloc: Granted job allocation 26666090\r\n",,terminal_output
|
12 |
+
11,155739,"TERMINAL",0,0,"salloc: Waiting for resource configuration\r\n",,terminal_output
|
13 |
+
12,156214,"TERMINAL",0,0,"^Csalloc: Job allocation 26666090 has been revoked.\r\n]0;franz.srambical@hpc-submit01:/lustre/groups/haicu/workspace/franz.srambical/jafar]633;D;1]633;P;Cwd=/lustre/groups/haicu/workspace/franz.srambical/jafar",,terminal_output
|
14 |
+
13,156370,"TERMINAL",0,0,"^C",,terminal_command
|
15 |
+
14,156377,"TERMINAL",0,0,"^C[?2004l\r[?2004h[?2004l\r\r\n]633;E;;d83d794d-068d-45a7-86c8-da2446d84194]633;C]0;franz.srambical@hpc-submit01:/lustre/groups/haicu/workspace/franz.srambical/jafar]633;D",,terminal_output
|
16 |
+
15,163176,"TERMINAL",0,0,"salloc --reservation=haicu_stefan -p gpu_p --time=05:00:00 --job-name=interactive_bash --qos=gpu_normal --gres=gpu:4 -w supergpu18,supergpu16 --cpus-per-task=1 --ntasks-per-node=1",,terminal_command
|
17 |
+
16,163236,"TERMINAL",0,0,"\r\n[?2004l\r]633;E;salloc --reservation=haicu_stefan -p gpu_p --time=05:00:00 --job-name=interactive_bash --qos=gpu_normal --gres=gpu:4 -w supergpu18,supergpu16 --cpus-per-task=1 --ntasks-per-node=1;d83d794d-068d-45a7-86c8-da2446d84194]633;Csalloc: Required node not available (down, drained or reserved)\r\nsalloc: Pending job allocation 26666092\r\nsalloc: job 26666092 queued and waiting for resources\r\n",,terminal_output
|
18 |
+
17,165340,"TERMINAL",0,0,"^Csalloc: Job allocation 26666092 has been revoked.\r\nsalloc: Job aborted due to signal\r\n]0;franz.srambical@hpc-submit01:/lustre/groups/haicu/workspace/franz.srambical/jafar",,terminal_output
|
19 |
+
18,169464,"TERMINAL",0,0,"salloc --reservation=haicu_stefan -p gpu_p --time=05:00:00 --job-name=interactive_bash --qos=gpu_normal --gres=gpu:4 -w supergpu18,supergpu14 --cpus-per-task=1 --ntasks-per-node=1",,terminal_command
|
20 |
+
19,169496,"TERMINAL",0,0,"]633;E;salloc --reservation=haicu_stefan -p gpu_p --time=05:00:00 --job-name=interactive_bash --qos=gpu_normal --gres=gpu:4 -w supergpu18,supergpu14 --cpus-per-task=1 --ntasks-per-node=1;d83d794d-068d-45a7-86c8-da2446d84194]633;Csalloc: error: Problem using reservation\r\nsalloc: error: Job submit/allocate failed: Requested node configuration is not available\r\n]0;franz.srambical@hpc-submit01:/lustre/groups/haicu/workspace/franz.srambical/jafar]633;D;1",,terminal_output
|
21 |
+
20,175499,"TERMINAL",0,0,"squeue -w supergpu16,supergpu18,gpusrv[69,70],supergpu14",,terminal_command
|
22 |
+
21,175545,"TERMINAL",0,0,"[?25l[56;27H\r]633;Ajafar[franz.srambical@hpc-submit01 jafar]$ ]633;Bsqueue -w supergpu16,supergpu18,gpusrv[69,70],supergpu14[A\r]633;Ajafar[franz.srambical@hpc-submit01 jafar]$ ]633;B\r\n\r\r\n[?2004l\r]633;E;squeue -w supergpu16,supergpu18,gpusrv[69,70],supergpu14;d83d794d-068d-45a7-86c8-da2446d84194]633;C[?25h JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)\r\n 26649778 gpu_p test_kto muhammad R 12:38:31 1 supergpu14\r\n 26644304 gpu_p old_gpt helena.f R 11:31:24 1 supergpu14\r\n]0;franz.srambical@hpc-submit01:/lustre/groups/haicu/workspace/franz.srambical/jafar]633;D;0",,terminal_output
|
23 |
+
22,263503,"TERMINAL",0,0,"salloc --reservation=haicu_stefan -p gpu_p --time=05:00:00 --job-name=interactive_bash --qos=gpu_normal --gres=gpu:4 -w gpusrv69,gpusrv70 --cpus-per-task=1 --ntasks-per-node=1",,terminal_command
|
24 |
+
23,263524,"TERMINAL",0,0,"]633;E;salloc --reservation=haicu_stefan -p gpu_p --time=05:00:00 --job-name=interactive_bash --qos=gpu_normal --gres=gpu:4 -w gpusrv69,gpusrv70 --cpus-per-task=1 --ntasks-per-node=1;d83d794d-068d-45a7-86c8-da2446d84194]633;C",,terminal_output
|
25 |
+
24,269401,"TERMINAL",0,0,"salloc --reservation=haicu_stefan -p gpu_p --time=05:00:00 --job-name=interactive_bash --qos=gpu_normal --gres=gpu:2 -w gpusrv69,gpusrv70 --cpus-per-task=1 --ntasks-per-node=1",,terminal_command
|
26 |
+
25,269478,"TERMINAL",0,0,"]633;E;salloc --reservation=haicu_stefan -p gpu_p --time=05:00:00 --job-name=interactive_bash --qos=gpu_normal --gres=gpu:2 -w gpusrv69,gpusrv70 --cpus-per-task=1 --ntasks-per-node=1;d83d794d-068d-45a7-86c8-da2446d84194]633;Csalloc: Granted job allocation 26666098\r\n",,terminal_output
|
27 |
+
26,269582,"TERMINAL",0,0,"salloc: Waiting for resource configuration\r\n",,terminal_output
|
28 |
+
27,270581,"TERMINAL",0,0,"salloc: Nodes gpusrv[69-70] are ready for job\r\n",,terminal_output
|
29 |
+
28,270984,"TERMINAL",0,0,"]0;franz.srambical@hpc-submit01:/lustre/groups/haicu/workspace/franz.srambical/jafar[?2004h[franz.srambical@gpusrv69 jafar]$ ",,terminal_output
|
30 |
+
29,288612,"TERMINAL",0,0,"b",,terminal_output
|
31 |
+
30,288792,"TERMINAL",0,0,"[?25l[58;36Ha[58;37H[?25h[?25l[58;37Hs[58;38H[?25h",,terminal_output
|
32 |
+
31,288862,"TERMINAL",0,0,"[?25l[58;38Hh[58;39H[?25h",,terminal_output
|
33 |
+
32,288981,"TERMINAL",0,0,"[?25l[58;39H [58;40H[?25h",,terminal_output
|
34 |
+
33,289354,"TERMINAL",0,0,"[?25l[58;40He[58;41H[?25h",,terminal_output
|
35 |
+
34,289560,"TERMINAL",0,0,"[?25l[58;41Hx[58;42H[?25h",,terminal_output
|
36 |
+
35,289646,"TERMINAL",0,0,"[?25l[58;42Hp[58;43H[?25h",,terminal_output
|
37 |
+
36,289804,"TERMINAL",0,0,"[?25l[58;43He[58;44H[?25h[?25l[58;44Hr[58;45H[?25h",,terminal_output
|
38 |
+
37,289896,"TERMINAL",0,0,"[?25l[58;45Hi[58;47H[?25h[?25l[58;46Hm[58;47H[?25h",,terminal_output
|
39 |
+
38,289995,"TERMINAL",0,0,"ents/",,terminal_output
|
40 |
+
39,290169,"TERMINAL",0,0,"[?25l[58;52Ht[58;53H[?25h",,terminal_output
|
41 |
+
40,290243,"TERMINAL",0,0,"[?25l[58;53Ho[58;55H[?25h[?25l[58;54Hk[58;55H[?25h",,terminal_output
|
42 |
+
41,290348,"TERMINAL",0,0,"enizer_",,terminal_output
|
43 |
+
42,291855,"TERMINAL",0,0,"[?25l[58;62Hc[58;63H[?25h",,terminal_output
|
44 |
+
43,292026,"TERMINAL",0,0,"[?25l[58;63Hr[58;64H[?25h",,terminal_output
|
45 |
+
44,292103,"TERMINAL",0,0,"[?25l[58;64Ho[58;65H[?25h",,terminal_output
|
46 |
+
45,292214,"TERMINAL",0,0,"ss_node_checkpointing_test.sh ",,terminal_output
|
47 |
+
46,292978,"TERMINAL",0,0,"[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C",,terminal_output
|
48 |
+
47,293942,"TERMINAL",0,0,"\r\n\r[C[C",,terminal_output
|
49 |
+
48,294889,"experiments/tokenizer_cross_node_checkpointing_test.sh",128,0,"",shellscript,selection_command
|
50 |
+
49,295198,"experiments/tokenizer_cross_node_checkpointing_test.sh",125,0,"",shellscript,selection_command
|
51 |
+
50,297368,"experiments/tokenizer_cross_node_checkpointing_test.sh",114,0,"",shellscript,selection_command
|
52 |
+
51,297688,"experiments/tokenizer_cross_node_checkpointing_test.sh",125,0,"",shellscript,selection_command
|
53 |
+
52,297977,"experiments/tokenizer_cross_node_checkpointing_test.sh",125,2,"",shellscript,content
|
54 |
+
53,298600,"experiments/tokenizer_cross_node_checkpointing_test.sh",125,0,"9",shellscript,content
|
55 |
+
54,298601,"experiments/tokenizer_cross_node_checkpointing_test.sh",126,0,"",shellscript,selection_keyboard
|
56 |
+
55,298855,"experiments/tokenizer_cross_node_checkpointing_test.sh",126,0,"6",shellscript,content
|
57 |
+
56,298856,"experiments/tokenizer_cross_node_checkpointing_test.sh",127,0,"",shellscript,selection_keyboard
|
58 |
+
57,299138,"experiments/tokenizer_cross_node_checkpointing_test.sh",126,0,"",shellscript,selection_command
|
59 |
+
58,301827,"TERMINAL",0,0,"\r\n[?2004l\r",,terminal_output
|
60 |
+
59,426408,"TERMINAL",0,0,"2025-07-04 06:29:37.135127: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:467] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\r\n2025-07-04 06:29:37.134980: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:467] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\r\n",,terminal_output
|
61 |
+
60,427077,"TERMINAL",0,0,"WARNING: All log messages before absl::InitializeLog() is called are written to STDERR\r\nE0000 00:00:1751603377.823046 2573765 cuda_dnn.cc:8579] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\r\nWARNING: All log messages before absl::InitializeLog() is called are written to STDERR\r\nE0000 00:00:1751603377.823111 2567277 cuda_dnn.cc:8579] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\r\n",,terminal_output
|
62 |
+
61,427582,"TERMINAL",0,0,"E0000 00:00:1751603378.332558 2573765 cuda_blas.cc:1407] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\r\nE0000 00:00:1751603378.332609 2567277 cuda_blas.cc:1407] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\r\n",,terminal_output
|
63 |
+
62,429545,"TERMINAL",0,0,"W0000 00:00:1751603380.294291 2567277 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1751603380.294340 2567277 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1751603380.294347 2567277 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1751603380.294352 2567277 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1751603380.294299 2573765 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1751603380.294368 2573765 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1751603380.294374 2573765 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1751603380.294380 2573765 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\n",,terminal_output
|
64 |
+
63,517769,"TERMINAL",0,0,"W0000 00:00:1751603468.502047 2567277 gpu_device.cc:2341] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.\r\nSkipping registering GPU devices...\r\nW0000 00:00:1751603468.501985 2573765 gpu_device.cc:2341] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.\r\nSkipping registering GPU devices...\r\n",,terminal_output
|
65 |
+
64,599140,"TERMINAL",0,0,"2025-07-04 06:32:29.885880: W external/xla/xla/tsl/framework/bfc_allocator.cc:310] Allocator (GPU_0_bfc) ran out of memory trying to allocate 19.39GiB with freed_by_count=0. The caller indicates that this is not a failure, but this may mean that there could be performance gains if more memory were available.\r\n",,terminal_output
|
66 |
+
65,599218,"TERMINAL",0,0,"2025-07-04 06:32:29.967424: W external/xla/xla/tsl/framework/bfc_allocator.cc:310] Allocator (GPU_0_bfc) ran out of memory trying to allocate 19.39GiB with freed_by_count=0. The caller indicates that this is not a failure, but this may mean that there could be performance gains if more memory were available.\r\n",,terminal_output
|
67 |
+
66,599564,"TERMINAL",0,0,"Running on 2 devices.\r\nTraceback (most recent call last):\r\n File ""/ictstr01/groups/haicu/workspace/franz.srambical/jafar/train_tokenizer.py"", line 159, in <module>\r\n init_params = tokenizer.init(_rng, inputs)\r\n File ""/ictstr01/groups/haicu/workspace/franz.srambical/jafar/models/tokenizer.py"", line 46, in __call__\r\n outputs = self.vq_encode(batch[""videos""], training)\r\n File ""/ictstr01/groups/haicu/workspace/franz.srambical/jafar/models/tokenizer.py"", line 57, in vq_encode\r\n x = self.encoder(x) # (B, T, N, E)\r\n File ""/ictstr01/groups/haicu/workspace/franz.srambical/jafar/utils/nn.py"", line 87, in __call__\r\nRunning on 2 devices.\r\nTraceback (most recent call last):\r\n File ""/ictstr01/groups/haicu/workspace/franz.srambical/jafar/train_tokenizer.py"", line 159, in <module>\r\n x = STBlock(\r\n File ""/ictstr01/groups/haicu/workspace/franz.srambical/jafar/utils/nn.py"", line 41, in __call__\r\n z = nn.MultiHeadAttention(\r\n File ""/ictstr01/groups/haicu/workspace/franz.srambical/jafar/.venv/lib/python3.10/site-packages/flax/linen/attention.py"", line 674, in __call__\r\n init_params = tokenizer.init(_rng, inputs)\r\n File ""/ictstr01/groups/haicu/workspace/franz.srambical/jafar/models/tokenizer.py"", line 46, in __call__\r\n outputs = self.vq_encode(batch[""videos""], training)\r\n File ""/ictstr01/groups/haicu/workspace/franz.srambical/jafar/models/tokenizer.py"", line 57, in vq_encode\r\n x = self.encoder(x) # (B, T, N, E)\r\n File ""/ictstr01/groups/haicu/workspace/franz.srambical/jafar/utils/nn.py"", line 87, in __call__\r\n x = STBlock(\r\n File ""/ictstr01/groups/haicu/workspace/franz.srambical/jafar/utils/nn.py"", line 41, in __call__\r\n z = nn.MultiHeadAttention(\r\n File ""/ictstr01/groups/haicu/workspace/franz.srambical/jafar/.venv/lib/python3.10/site-packages/flax/linen/attention.py"", line 674, in __call__\r\n x = self.attention_fn(*attn_args, **attn_kwargs)\r\n x = self.attention_fn(*attn_args, **attn_kwargs)\r\n File ""/ictstr01/groups/haicu/workspace/franz.srambical/jafar/.venv/lib/python3.10/site-packages/flax/linen/attention.py"", line 291, in dot_product_attention\r\n File ""/ictstr01/groups/haicu/workspace/franz.srambical/jafar/.venv/lib/python3.10/site-packages/flax/linen/attention.py"", line 291, in dot_product_attention\r\n return attn_weights_value_einsum(\r\n return attn_weights_value_einsum(\r\n File ""/ictstr01/groups/haicu/workspace/franz.srambical/jafar/.venv/lib/python3.10/site-packages/jax/_src/numpy/einsum.py"", line 315, in einsum\r\n File ""/ictstr01/groups/haicu/workspace/franz.srambical/jafar/.venv/lib/python3.10/site-packages/jax/_src/numpy/einsum.py"", line 315, in einsum\r\n return jit_einsum(operand_arrays, contractions, precision,\r\n File ""/ictstr01/home/aih/franz.srambical/.local/share/uv/python/cpython-3.10.15-linux-x86_64-gnu/lib/python3.10/contextlib.py"", line 79, in inner\r\n return jit_einsum(operand_arrays, contractions, precision,\r\n File ""/ictstr01/home/aih/franz.srambical/.local/share/uv/python/cpython-3.10.15-linux-x86_64-gnu/lib/python3.10/contextlib.py"", line 79, in inner\r\n return func(*args, **kwds)\r\njax._src.source_info_util.JaxStackTraceBeforeTransformation: jaxlib._jax.XlaRuntimeError: RESOURCE_EXHAUSTED: Out of memory while trying to allocate 20817903616 bytes.\r\n\r\nThe preceding stack trace is the source of the JAX operation that, once transformed by JAX, triggered the following exception.\r\n\r\n--------------------\r\n\r\nThe above exception was the direct cause of 
the following exception:\r\n\r\njax.errors.SimplifiedTraceback: For simplicity, JAX has removed its internal frames from the traceback of the following exception. Set JAX_TRACEBACK_FILTERING=off to include these.\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File ""/ictstr01/groups/haicu/workspace/franz.srambical/jafar/train_tokenizer.py"", line 159, in <module>\r\n init_params = tokenizer.init(_rng, inputs)\r\n File ""/ictstr01/groups/haicu/workspace/franz.srambical/jafar/models/tokenizer.py"", line 46, in __call__\r\n outputs = self.vq_encode(batch[""videos""], training)\r\n File ""/ictstr01/groups/haicu/workspace/franz.srambical/jafar/models/tokenizer.py"", line 57, in vq_encode\r\n x = self.encoder(x) # (B, T, N, E)\r\n File ""/ictstr01/groups/haicu/workspace/franz.srambical/jafar/utils/nn.py"", line 87, in __call__\r\n return func(*args, **kwds)\r\njax._src.source_info_util.JaxStackTraceBeforeTransformation: jaxlib._jax.XlaRuntimeError: RESOURCE_EXHAUSTED: Out of memory while trying to allocate 20817903616 bytes.\r\n\r\nThe preceding stack trace is the source of the JAX operation that, once transformed by JAX, triggered the following exception.\r\n\r\n--------------------\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\njax.errors.SimplifiedTraceback: For simplicity, JAX has removed its internal frames from the traceback of the following exception. Set JAX_TRACEBACK_FILTERING=off to include these.\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File ""/ictstr01/groups/haicu/workspace/franz.srambical/jafar/train_tokenizer.py"", line 159, in <module>\r\n x = STBlock(\r\njaxlib._jax.XlaRuntimeError: RESOURCE_EXHAUSTED: Out of memory while trying to allocate 20817903616 bytes.\r\n init_params = tokenizer.init(_rng, inputs)\r\n File ""/ictstr01/groups/haicu/workspace/franz.srambical/jafar/models/tokenizer.py"", line 46, in __call__\r\n outputs = self.vq_encode(batch[""videos""], training)\r\n File ""/ictstr01/groups/haicu/workspace/franz.srambical/jafar/models/tokenizer.py"", line 57, in vq_encode\r\n x = self.encoder(x) # (B, T, N, E)\r\n File ""/ictstr01/groups/haicu/workspace/franz.srambical/jafar/utils/nn.py"", line 87, in __call__\r\n x = STBlock(\r\njaxlib._jax.XlaRuntimeError: RESOURCE_EXHAUSTED: Out of memory while trying to allocate 20817903616 bytes.\r\n",,terminal_output
|
68 |
+
67,614836,"TERMINAL",0,0,"srun: error: gpusrv70: task 1: Exited with exit code 1\r\nsrun: error: gpusrv69: task 0: Exited with exit code 1\r\n]0;franz.srambical@hpc-submit01:/lustre/groups/haicu/workspace/franz.srambical/jafar[?2004h[franz.srambical@gpusrv69 jafar]$ ",,terminal_output
|
69 |
+
68,619888,"experiments/tokenizer_cross_node_checkpointing_test.sh",125,0,"",shellscript,selection_command
|
70 |
+
69,620029,"experiments/tokenizer_cross_node_checkpointing_test.sh",125,2,"",shellscript,content
|
71 |
+
70,621309,"experiments/tokenizer_cross_node_checkpointing_test.sh",125,0,"4",shellscript,content
|
72 |
+
71,621310,"experiments/tokenizer_cross_node_checkpointing_test.sh",126,0,"",shellscript,selection_keyboard
|
73 |
+
72,621346,"experiments/tokenizer_cross_node_checkpointing_test.sh",126,0,"8",shellscript,content
|
74 |
+
73,621347,"experiments/tokenizer_cross_node_checkpointing_test.sh",127,0,"",shellscript,selection_keyboard
|
75 |
+
74,621533,"experiments/tokenizer_cross_node_checkpointing_test.sh",126,0,"",shellscript,selection_command
|
76 |
+
75,623291,"TERMINAL",0,0,"[H[2J[franz.srambical@gpusrv69 jafar]$ ",,terminal_output
|
77 |
+
76,623355,"TERMINAL",0,0,"bash experiments/tokenizer_cross_node_checkpointing_test.sh ",,terminal_output
|
78 |
+
77,623643,"TERMINAL",0,0,"[?25l[?2004l\r[?25h",,terminal_output
|
79 |
+
78,626729,"TERMINAL",0,0,"2025-07-04 06:32:57.434648: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:467] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\r\n2025-07-04 06:32:57.443490: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:467] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\r\nWARNING: All log messages before absl::InitializeLog() is called are written to STDERR\r\nE0000 00:00:1751603577.448497 2575006 cuda_dnn.cc:8579] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\r\nE0000 00:00:1751603577.453130 2575006 cuda_blas.cc:1407] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\r\nWARNING: All log messages before absl::InitializeLog() is called are written to STDERR\r\nE0000 00:00:1751603577.457228 2568489 cuda_dnn.cc:8579] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\r\nE0000 00:00:1751603577.461873 2568489 cuda_blas.cc:1407] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\r\nW0000 00:00:1751603577.465917 2575006 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1751603577.465930 2575006 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1751603577.465934 2575006 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1751603577.465936 2575006 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1751603577.474565 2568489 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1751603577.474580 2568489 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1751603577.474583 2568489 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1751603577.474586 2568489 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\n",,terminal_output
|
80 |
+
79,630023,"TERMINAL",0,0,"W0000 00:00:1751603580.771489 2575006 gpu_device.cc:2341] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.\r\nSkipping registering GPU devices...\r\nW0000 00:00:1751603580.772076 2568489 gpu_device.cc:2341] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.\r\nSkipping registering GPU devices...\r\n",,terminal_output
|
81 |
+
80,941318,"TERMINAL",0,0,"^Csrun: interrupt (one more within 1 sec to abort)\r\nsrun: StepId=26666098.1 tasks 0-1: running\r\n",,terminal_output
|
82 |
+
81,941455,"TERMINAL",0,0,"^Csrun: sending Ctrl-C to StepId=26666098.1\r\nsrun: forcing job termination\r\nsrun: Job step aborted: Waiting up to 32 seconds for job step to finish.\r\nslurmstepd: error: *** STEP 26666098.1 ON gpusrv69 CANCELLED AT 2025-07-04T06:38:12 ***\r\n",,terminal_output
|
83 |
+
82,942039,"TERMINAL",0,0,"]0;franz.srambical@hpc-submit01:/lustre/groups/haicu/workspace/franz.srambical/jafar[?2004h[franz.srambical@gpusrv69 jafar]$ ",,terminal_output
|
84 |
+
83,942837,"TERMINAL",0,0,"e",,terminal_output
|
85 |
+
84,943091,"TERMINAL",0,0,"[?25l[51;36Hi[51;37H[?25h",,terminal_output
|
86 |
+
85,943782,"TERMINAL",0,0,"[?25l[51;36Hi[51;37H[?25h",,terminal_output
|
87 |
+
86,943839,"TERMINAL",0,0,"[?25l[51;37Ht[51;38H[?25h",,terminal_output
|
88 |
+
87,944367,"TERMINAL",0,0,"[?25l[51;36Hx[51;37H[?25h",,terminal_output
|
89 |
+
88,944458,"TERMINAL",0,0,"[?25l[51;37Hi[51;38H[?25h",,terminal_output
|
90 |
+
89,944521,"TERMINAL",0,0,"[?25l[51;38Ht[51;39H[?25h",,terminal_output
|
91 |
+
90,944693,"TERMINAL",0,0,"[?25l[?2004l\rexit\r\n[?25h",,terminal_output
|
92 |
+
91,945047,"TERMINAL",0,0,"srun: error: gpusrv69: task 0: Exited with exit code 137\r\nsalloc: Relinquishing job allocation 26666098\r\nsalloc: Job allocation 26666098 has been revoked.\r\n]0;franz.srambical@hpc-submit01:/lustre/groups/haicu/workspace/franz.srambical/jafar]633;D;137]633;P;Cwd=/lustre/groups/haicu/workspace/franz.srambical/jafar[?2004h",,terminal_output
|
93 |
+
92,948978,"TERMINAL",0,0,"salloc --reservation=haicu_stefan -p gpu_p --time=05:00:00 --job-name=interactive_bash --qos=gpu_normal --gres=gpu:2 -w gpusrv69,gpusrv70 --cpus-per-task=1 --ntasks-per-node=2",,terminal_command
|
94 |
+
93,949067,"TERMINAL",0,0,"]633;E;salloc --reservation=haicu_stefan -p gpu_p --time=05:00:00 --job-name=interactive_bash --qos=gpu_normal --gres=gpu:2 -w gpusrv69,gpusrv70 --cpus-per-task=1 --ntasks-per-node=2;d83d794d-068d-45a7-86c8-da2446d84194]633;Csalloc: Granted job allocation 26666106\r\n",,terminal_output
|
95 |
+
94,949177,"TERMINAL",0,0,"salloc: Nodes gpusrv[69-70] are ready for job\r\n",,terminal_output
|
96 |
+
95,949517,"TERMINAL",0,0,"]0;franz.srambical@hpc-submit01:/lustre/groups/haicu/workspace/franz.srambical/jafar[?2004h[franz.srambical@gpusrv69 jafar]$ ",,terminal_output
|
97 |
+
96,952576,"TERMINAL",0,0,"\r(reverse-i-search)`': [K",,terminal_output
|
98 |
+
97,952728,"TERMINAL",0,0,"[?25l[58;23H[X[0m[61@s': bash experiments/tokenizer_cross_node_checkpointing_test.[7ms[27mh[?25h",,terminal_output
|
99 |
+
98,952807,"TERMINAL",0,0,"[?25l[58;81H[7;39;49ms[58;81H[0m\r[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[Co': /usr/bin/python3 /ictstr01/home/aih/franz.srambical/.cursor-server/extensions/ms-python.python-2024.12.3-linux-x64/python_files/printEnvVariablesToFile.py /ictstr01/home/aih/franz.srambical/.cur[7mso[27mr-server/extensions/ms-python.python-2024.12.3-linux-x64/python_files/deactivate/bash/envVars.txt[A[?25h",,terminal_output
|
100 |
+
99,952860,"TERMINAL",0,0,"[A[A[42Pu': [7msou[27mrce .venv/bin/activate\r\n\r[K\r\n\r[K\r\n\r[K[A[A[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C",,terminal_output
|
101 |
+
100,953265,"TERMINAL",0,0,"[1@r': [7msour[27m",,terminal_output
|
102 |
+
101,953501,"TERMINAL",0,0,"[?25l[55;27H[7;39;49ms[55;27H[0m[1@c': [7msourc[27m[?25h[?25l[55;28H[7;39;49ms[55;28H[0m[1@e': [7msource[27m[?25h",,terminal_output
|
103 |
+
102,953664,"TERMINAL",0,0,"[?25l[55;29H\r[6@[franz.srambical@gpusrv69 jafar]$ source\r\n[?2004l\r]0;franz.srambical@hpc-submit01:/lustre/groups/haicu/workspace/franz.srambical/jafar[?2004h(jafar) [franz.srambical@gpusrv69 jafar]$ [?25h",,terminal_output
|
104 |
+
103,956058,"TERMINAL",0,0,"\r(reverse-i-search)`': [K",,terminal_output
|
105 |
+
104,956795,"TERMINAL",0,0,"-': /usr/bin/python3 /ictstr01/home/aih/franz.srambical/.cursor-server/extensions/ms-python.python-2024.12.3-linux-x64/python_files/printEnvVariablesToFile.py /ictstr01/home/aih/franz.srambical/.cursor-server/extensions/ms-python.python-2024.12.3-linux[7m-[27mx64/python_files/deactivate/bash/envVars.txt[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[A[A\r[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[Cc': salloc --reservation=haicu_stefan -p gpu_p --time=05:00:00 --job-name=interactive_bash --qos=gpu_normal --gpu-bind=single:1 --gpus-per-task=1 -w gp[27Pusrv69,gpusrv70 -[7m-c[27mpus-per-task=4 --ntasks-per-node=2 --ntasks=4\r\n\r[K[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C",,terminal_output
|
106 |
+
105,957520,"TERMINAL",0,0,"[?25l[57;19H[7;39;49m-[57;19H[0m[A[A[C[C[C[3P ': srun bash [7m-c [27m'nvidia-smi --query-gpu=uuid --format=csv,noheader'\r\n\r[K\r\n\r[K[A[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[?25h",,terminal_output
|
107 |
+
106,958576,"TERMINAL",0,0,"\r[Cjafar) [franz.srambical@gpusrv69 jafar]$ srun bash -c 'nvidia-smi --query-gpu=uuid --format=csv,noheader'[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C\r\n\r[C[C[C[C[C[C[C[C[C[C[C[C[C[C",,terminal_output
|
108 |
+
107,960292,"TERMINAL",0,0,"",,terminal_output
|
109 |
+
108,960712,"TERMINAL",0,0,"",,terminal_output
|
110 |
+
109,960948,"TERMINAL",0,0,"\r[C[C[C[C[C",,terminal_output
|
111 |
+
110,961464,"TERMINAL",0,0,"\r[C[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C",,terminal_output
|
112 |
+
111,961897,"TERMINAL",0,0,"[C[C[C[C",,terminal_output
|
113 |
+
112,962331,"TERMINAL",0,0," -c 'nvidia-smi --query-gpu=uuid --format=c[1Psv,noheader'[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C",,terminal_output
|
114 |
+
113,962472,"TERMINAL",0,0," -c 'nvidia-smi --query-gpu=uuid --format=cs[1Pv,noheader'[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C",,terminal_output
|
115 |
+
114,962573,"TERMINAL",0,0," -c 'nvidia-smi --query-gpu=uuid --format=csv[1P,noheader'[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C",,terminal_output
|
116 |
+
115,962699,"TERMINAL",0,0," -c 'nvidia-smi --query-gpu=uuid --format=csv,[1Pnoheader'[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C",,terminal_output
|
117 |
+
116,963175,"TERMINAL",0,0,"p -c 'nvidia-smi --query-gpu=uuid --format=csv,noheader'[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C",,terminal_output
|
118 |
+
117,963238,"TERMINAL",0,0,"y -c 'nvidia-smi --query-gpu=uuid --format=csv,noheader'[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C",,terminal_output
|
119 |
+
118,963314,"TERMINAL",0,0,"t -c 'nvidia-smi --query-gpu=uuid --format=csv,noheader'[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C",,terminal_output
|
120 |
+
119,963422,"TERMINAL",0,0,"[?25l[55;51Hh -c 'nvidia-smi --query-gpu=uuid --format=csv,noheader'[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[?25h",,terminal_output
|
121 |
+
120,963491,"TERMINAL",0,0,"[?25l[55;52Ho -c 'nvidia-smi --query-gpu=uuid --format=csv,noheader'[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[?25h",,terminal_output
|
122 |
+
121,963576,"TERMINAL",0,0,"[?25l[55;53Hn -c 'nvidia-smi --query-gpu=uuid --format=csv,noheader'[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[?25h",,terminal_output
|
123 |
+
122,963730,"TERMINAL",0,0,"[?25l[55;54H3 -c 'nvidia-smi --query-gpu=uuid --format=csv,noheader'[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[?25h",,terminal_output
|
124 |
+
123,963960,"TERMINAL",0,0,"\r\n\r[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C",,terminal_output
|
125 |
+
124,964474,"TERMINAL",0,0,"",,terminal_output
|
126 |
+
125,964904,"TERMINAL",0,0,"[1P'",,terminal_output
|
127 |
+
126,966726,"TERMINAL",0,0,"[1P'[1P'[1P'[1P'[1P'[1P'[1P'[1P'[1P'[1P'[1P'[1P'[1P'[1P'\r[1P'\r[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C' \r[K[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[1P'\r\n\r[K[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[1P'[1P'[1P'[1P'[1P'[1P'[1P'[1P'[1P'[1P'[1P'[1P'[1P'[1P'[1P'[1P'[1P'[1P'[1P'[1P'[1P'[1P'[1P'[1P'[1P'[1P'[1P'[1P'[1P'[1P'[1P'",,terminal_output
|
128 |
+
127,972932,"TERMINAL",0,0,"i'",,terminal_output
|
129 |
+
128,973008,"TERMINAL",0,0,"[?25l[55;61Hm'[?25h",,terminal_output
|
130 |
+
129,973062,"TERMINAL",0,0,"[?25l[55;62Hp'[?25h",,terminal_output
|
131 |
+
130,973179,"TERMINAL",0,0,"[?25l[55;63Ho'[?25h",,terminal_output
|
132 |
+
131,973253,"TERMINAL",0,0,"[?25l[55;64Hr'[?25h",,terminal_output
|
133 |
+
132,973488,"TERMINAL",0,0,"[?25l[55;65Ht'[?25h",,terminal_output
|
134 |
+
133,973815,"TERMINAL",0,0,"[?25l[55;66H '[?25h",,terminal_output
|
135 |
+
134,973923,"TERMINAL",0,0,"[?25l[55;67Hj'[?25h",,terminal_output
|
136 |
+
135,974014,"TERMINAL",0,0,"[?25l[55;68Ha'[?25h",,terminal_output
|
137 |
+
136,974452,"TERMINAL",0,0,"[?25l[55;69Hx'[?25h",,terminal_output
|
138 |
+
137,975223,"TERMINAL",0,0,"[?25l[55;70H '[?25h",,terminal_output
|
139 |
+
138,1035106,"TERMINAL",0,0,"[?25l[0m [55;71H[0m[1P'[?25h",,terminal_output
|
140 |
+
139,1035277,"TERMINAL",0,0,"[?25l[55;70H;'[?25h",,terminal_output
|
141 |
+
140,1043348,"TERMINAL",0,0,"[7mjax.local_devices()[27m'",,terminal_output
|
142 |
+
141,1043772,"TERMINAL",0,0,"jax.local_devices()'",,terminal_output
|
143 |
+
142,1044542,"TERMINAL",0,0,"[?25l[?2004l\r[?25h",,terminal_output
|
144 |
+
143,1053281,"TERMINAL",0,0,"^Csrun: interrupt (one more within 1 sec to abort)\r\nsrun: StepId=26666106.0 tasks 0-3: running\r\n",,terminal_output
|
145 |
+
144,1053829,"TERMINAL",0,0,"^P",,terminal_output
|
146 |
+
145,1054441,"TERMINAL",0,0,"^Csrun: interrupt (one more within 1 sec to abort)\r\nsrun: StepId=26666106.0 tasks 0-3: running\r\n",,terminal_output
|
147 |
+
146,1054854,"TERMINAL",0,0,"^Csrun: sending Ctrl-C to StepId=26666106.0\r\nsrun: forcing job termination\r\nsrun: Job step aborted: Waiting up to 32 seconds for job step to finish.\r\nslurmstepd: error: *** STEP 26666106.0 ON gpusrv69 CANCELLED AT 2025-07-04T06:40:05 ***\r\n",,terminal_output
|
148 |
+
147,1055016,"TERMINAL",0,0,"]0;franz.srambical@hpc-submit01:/lustre/groups/haicu/workspace/franz.srambical/jafar[?2004h(jafar) [franz.srambical@gpusrv69 jafar]$ ",,terminal_output
|
149 |
+
148,1055477,"TERMINAL",0,0,"srun python3 -c 'import jax;jax.local_devices()'",,terminal_output
|
150 |
+
149,1057811,"TERMINAL",0,0,"pjax.local_devices()'",,terminal_output
|
151 |
+
150,1057911,"TERMINAL",0,0,"[?25l[58;72Hrjax.local_devices()' [A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[?25h",,terminal_output
|
152 |
+
151,1058002,"TERMINAL",0,0,"ijax.local_devices()'[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C",,terminal_output
|
153 |
+
152,1058079,"TERMINAL",0,0,"[?25l[57;74Hnjax.local_devices()'[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[?25h",,terminal_output
|
154 |
+
153,1058138,"TERMINAL",0,0,"[?25l[57;75Htjax.local_devices()'[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[?25h",,terminal_output
|
155 |
+
154,1058405,"TERMINAL",0,0,"[?25l[57;76H(jax.local_devices()'[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[?25h",,terminal_output
|
156 |
+
155,1059075,"TERMINAL",0,0,"\r\n\r[C[C[C[C",,terminal_output
|
157 |
+
156,1059561,"TERMINAL",0,0,"",,terminal_output
|
158 |
+
157,1059978,"TERMINAL",0,0,")'",,terminal_output
|
159 |
+
158,1060191,"TERMINAL",0,0,"[C",,terminal_output
|
160 |
+
159,1060368,"TERMINAL",0,0,"\r\n[?2004l\r",,terminal_output
|
161 |
+
160,1069146,"TERMINAL",0,0,"[CudaDevice(id=0), CudaDevice(id=1)]\r\n[CudaDevice(id=0), CudaDevice(id=1)]\r\n[CudaDevice(id=0), CudaDevice(id=1)]\r\n[CudaDevice(id=0), CudaDevice(id=1)]\r\n",,terminal_output
|
162 |
+
161,1070761,"TERMINAL",0,0,"]0;franz.srambical@hpc-submit01:/lustre/groups/haicu/workspace/franz.srambical/jafar[?2004h(jafar) [franz.srambical@gpusrv69 jafar]$ ",,terminal_output
|
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-264a0e70-8280-4ed9-a47c-e76bfae594cd1754128841382-2025_08_02-12.01.32.618/source.csv
ADDED
The diff for this file is too large to render.
See raw diff
|
|
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-2bb200ce-4bc8-4bc3-9354-29e24db5d38e1752063967983-2025_07_09-14.26.42.463/source.csv
ADDED
The diff for this file is too large to render.
See raw diff
|
|
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-2f484e29-43ea-48d0-8c50-df135d6c967a1753171043773-2025_07_22-09.57.31.372/source.csv
ADDED
The diff for this file is too large to render.
See raw diff
|
|
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-3acc90e9-90ce-4c91-8dc5-7fa36ee6eae81754056616784-2025_08_01-15.57.02.654/source.csv
ADDED
@@ -0,0 +1,8 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
Sequence,Time,File,RangeOffset,RangeLength,Text,Language,Type
|
2 |
+
1,2,"experiments/sample.sh",0,0,"source .venv/bin/activate\n\ndata_dir=""$PWD/data_arrayrecord/dummy""\nckpt_dir=""$PWD/checkpoints/causal_dynamics_openai_grain_tok_restore""\n\nexport PYTHONUNBUFFERED=1\nsrun ipython --pdb sample.py -- \\n --dyna_type ""causal"" \\n --batch_size 1 \\n --seq_len 2 \\n --start_frame 1 \\n --checkpoint $ckpt_dir \\n --data_dir $data_dir",shellscript,tab
|
3 |
+
2,589,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,0,"3:57:02 PM [info] Activating crowd-code\n3:57:02 PM [info] Recording started\n3:57:02 PM [info] Initializing git provider using file system watchers...\n3:57:02 PM [info] Git repository found\n3:57:02 PM [info] Git provider initialized successfully\n3:57:02 PM [info] Initial git state: [object Object]\n",Log,tab
|
4 |
+
3,750,"TERMINAL",0,0,"",,terminal_focus
|
5 |
+
4,910,"experiments/sample.sh",0,0,"",shellscript,tab
|
6 |
+
5,8633,"utils/nn.py",0,0,"import math\nfrom typing import Tuple, Callable, List\n\nfrom flax import nnx\nimport jax\nimport jax.numpy as jnp\nimport einops\n\n\nclass PositionalEncoding(nnx.Module):\n """"""https://uvadlc-notebooks.readthedocs.io/en/latest/tutorial_notebooks/JAX/tutorial6/Transformers_and_MHAttention.html""""""\n\n def __init__(self, d_model: int, max_len: int = 5000):\n self.d_model = d_model\n self.max_len = max_len\n\n pe = jnp.zeros((self.max_len, self.d_model))\n position = jnp.arange(0, self.max_len, dtype=jnp.float32)[:, None]\n div_term = jnp.exp(\n jnp.arange(0, self.d_model, 2) * (-math.log(10000.0) / self.d_model)\n )\n pe = pe.at[:, 0::2].set(jnp.sin(position * div_term))\n pe = pe.at[:, 1::2].set(jnp.cos(position * div_term))\n self.pe = nnx.Variable(pe)\n\n def __call__(self, x: jax.Array) -> jax.Array:\n x = x + self.pe[: x.shape[2]]\n return x\n\n\nclass STBlock(nnx.Module):\n def __init__(\n self,\n dim: int,\n ffn_dim: int,\n num_heads: int,\n dropout: float,\n param_dtype: jnp.dtype,\n dtype: jnp.dtype,\n use_flash_attention: bool,\n rngs: nnx.Rngs,\n ):\n self.dim = dim\n self.ffn_dim = ffn_dim\n self.num_heads = num_heads\n self.dropout = dropout\n self.param_dtype = param_dtype\n self.dtype = dtype\n self.use_flash_attention = use_flash_attention\n\n self.spatial_pos_enc = PositionalEncoding(self.dim)\n self.spatial_norm = nnx.LayerNorm(\n num_features=self.dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n self.spatial_attention = nnx.MultiHeadAttention(\n num_heads=self.num_heads,\n in_features=self.dim,\n qkv_features=self.dim,\n dropout_rate=self.dropout,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n attention_fn=_create_flash_attention_fn(\n self.use_flash_attention, is_causal=False\n ),\n rngs=rngs,\n decode=False,\n )\n\n self.temporal_pos_enc = PositionalEncoding(self.dim)\n self.temporal_norm = nnx.LayerNorm(\n num_features=self.dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n self.temporal_attention = nnx.MultiHeadAttention(\n num_heads=self.num_heads,\n in_features=self.dim,\n qkv_features=self.dim,\n dropout_rate=self.dropout,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n attention_fn=_create_flash_attention_fn(\n self.use_flash_attention, is_causal=True\n ),\n rngs=rngs,\n decode=False,\n )\n\n self.ffn_norm = nnx.LayerNorm(\n num_features=self.dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n self.ffn_dense1 = nnx.Linear(\n in_features=self.dim,\n out_features=self.ffn_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n self.ffn_dense2 = nnx.Linear(\n in_features=self.ffn_dim,\n out_features=self.dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n\n @nnx.remat\n def __call__(self, x_BTNM: jax.Array) -> jax.Array:\n # --- Spatial attention ---\n z_BTNM = self.spatial_pos_enc(x_BTNM)\n z_BTNM = self.spatial_norm(z_BTNM)\n z_BTNM = self.spatial_attention(z_BTNM)\n x_BTNM = x_BTNM + z_BTNM\n\n # --- Temporal attention ---\n x_BNTM = x_BTNM.swapaxes(1, 2)\n z_BNTM = self.temporal_pos_enc(x_BNTM)\n z_BNTM = self.temporal_norm(z_BNTM)\n z_BNTM = self.temporal_attention(z_BNTM)\n x_BNTM = x_BNTM + z_BNTM\n x_BTNM = x_BNTM.swapaxes(1, 2)\n\n # --- Feedforward ---\n z_BTNM = self.ffn_norm(x_BTNM)\n z_BTND = self.ffn_dense1(z_BTNM)\n z_BTND = jax.nn.gelu(z_BTND)\n z_BTNM = self.ffn_dense2(z_BTND)\n x_BTNM = x_BTNM + z_BTNM\n\n return x_BTNM\n\n\nclass STTransformer(nnx.Module):\n """"""\n Dimension keys:\n B: 
batch size\n T: number of frames\n N: number of patches per frame\n I: number of input features\n M: model dimension\n D: FFN dimension\n O: number of output features\n """"""\n def __init__(\n self,\n input_dim: int,\n model_dim: int,\n ffn_dim: int,\n out_dim: int,\n num_blocks: int,\n num_heads: int,\n dropout: float,\n param_dtype: jnp.dtype,\n dtype: jnp.dtype,\n use_flash_attention: bool,\n rngs: nnx.Rngs,\n ):\n self.input_dim = input_dim\n self.model_dim = model_dim\n self.ffn_dim = ffn_dim\n self.out_dim = out_dim\n self.num_blocks = num_blocks\n self.num_heads = num_heads\n self.dropout = dropout\n self.param_dtype = param_dtype\n self.dtype = dtype\n self.use_flash_attention = use_flash_attention\n\n self.input_norm1 = nnx.LayerNorm(\n num_features=self.input_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n self.input_dense = nnx.Linear(\n in_features=self.input_dim,\n out_features=self.model_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n self.input_norm2 = nnx.LayerNorm(\n num_features=self.model_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n\n self.blocks = []\n for _ in range(self.num_blocks):\n self.blocks.append(\n STBlock(\n dim=self.model_dim,\n ffn_dim=self.ffn_dim,\n num_heads=self.num_heads,\n dropout=self.dropout,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n use_flash_attention=self.use_flash_attention,\n rngs=rngs,\n )\n )\n\n self.output_dense = nnx.Linear(\n in_features=self.model_dim,\n out_features=self.out_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n\n def __call__(self, x_BTNI: jax.Array) -> jax.Array:\n x_BTNI = self.input_norm1(x_BTNI)\n x_BTNM = self.input_dense(x_BTNI)\n x_BTNM = self.input_norm2(x_BTNM)\n\n for block in self.blocks:\n x_BTNM = block(x_BTNM)\n\n x_BTNO = self.output_dense(x_BTNM)\n return x_BTNO\n\nclass TransformerBlock(nnx.Module):\n def __init__(\n self,\n model_dim: int,\n ffn_dim: int,\n num_heads: int,\n dropout: float,\n param_dtype: jnp.dtype,\n dtype: jnp.dtype,\n use_flash_attention: bool,\n decode: bool,\n rngs: nnx.Rngs,\n ):\n self.model_dim = model_dim\n self.ffn_dim = ffn_dim\n self.num_heads = num_heads\n self.dropout = dropout\n self.param_dtype = param_dtype\n self.dtype = dtype\n self.use_flash_attention = use_flash_attention\n self.decode = decode\n\n self.temporal_pos_enc = PositionalEncoding(self.model_dim)\n self.spatial_pos_enc = PositionalEncoding(self.model_dim)\n self.temporal_norm = nnx.LayerNorm(\n num_features=self.model_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n self.spatial_norm = nnx.LayerNorm(\n num_features=self.model_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n self.ffn_norm = nnx.LayerNorm(\n num_features=self.model_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n self.temporal_attention = nnx.MultiHeadAttention(\n num_heads=self.num_heads,\n in_features=self.model_dim,\n qkv_features=self.model_dim,\n dropout_rate=self.dropout,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n attention_fn=_create_flash_attention_fn(\n self.use_flash_attention, is_causal=True\n ),\n rngs=rngs,\n decode=self.decode,\n )\n self.spatial_attention = nnx.MultiHeadAttention(\n num_heads=self.num_heads,\n in_features=self.model_dim,\n qkv_features=self.model_dim,\n dropout_rate=self.dropout,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n attention_fn=_create_flash_attention_fn(\n 
self.use_flash_attention, is_causal=True\n ),\n rngs=rngs,\n decode=self.decode,\n )\n self.ffn_dense1 = nnx.Linear(\n in_features=self.model_dim,\n out_features=self.ffn_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n self.ffn_dense2 = nnx.Linear(\n in_features=self.ffn_dim,\n out_features=self.model_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n\n @nnx.remat\n def __call__(self, x_BTNM: jax.Array, pos_index: Tuple[jax.Array, jax.Array] | None = None) -> jax.Array:\n # --- Spatial attention ---\n B, T, N, M = x_BTNM.shape\n z_FNM = einops.rearrange(x_BTNM, ""b t n m -> (b t) n m"")\n z_FNM = self.spatial_norm(z_FNM)\n if self.decode:\n assert pos_index is not None\n z_FM = z_FNM[:, pos_index[1]]\n z_F1M = jnp.reshape(z_FM, (B * T, 1, M))\n z_F1M = self.spatial_attention(z_F1M)\n z_FM = jnp.reshape(z_F1M, (B * T, M))\n z_FNM = z_FNM.at[:, pos_index[1], :].set(z_FM)\n else:\n z_FNM = self.spatial_attention(z_FNM)\n z_BTNM = einops.rearrange(z_FNM, ""(b t) n m -> b t n m"", t=T)\n x_BTNM = x_BTNM + z_BTNM\n # --- Temporal attention ---\n z_PTM = einops.rearrange(x_BTNM, ""b t n m -> (b n) t m"")\n z_PTM = self.temporal_norm(z_PTM)\n if self.decode:\n assert pos_index is not None\n z_PM = z_PTM[:, pos_index[0]]\n z_P1M = jnp.reshape(z_PM, (B * N, 1, M))\n z_P1M = self.temporal_attention(z_P1M)\n z_PM = jnp.reshape(z_P1M, (B * N, M))\n z_PTM = z_PTM.at[:, pos_index[0], :].set(z_PM)\n else:\n z_PTM = self.temporal_attention(z_PTM)\n z_BTNM = einops.rearrange(z_PTM, ""(b n) t m -> b t n m"", n=N)\n x_BTNM = x_BTNM + z_BTNM\n # --- Feedforward ---\n z_BTNM = self.ffn_norm(x_BTNM)\n z_BTND = self.ffn_dense1(z_BTNM)\n z_BTND = jax.nn.gelu(z_BTND)\n z_BTNM = self.ffn_dense2(z_BTND)\n x_BTNM = x_BTNM + z_BTNM\n\n return x_BTNM\n\nclass Transformer(nnx.Module):\n """"""\n Dimension keys:\n B: batch size\n T: number of frames\n N: number of patches per frame\n I: number of input features\n M: model dimension\n D: FFN dimension\n O: number of output features\n F: number of frames in batch\n P: number of patch positions in batch\n """"""\n def __init__(\n self,\n input_dim: int,\n model_dim: int,\n ffn_dim: int,\n out_dim: int,\n num_blocks: int,\n num_heads: int,\n dropout: float,\n param_dtype: jnp.dtype,\n dtype: jnp.dtype,\n use_flash_attention: bool,\n decode: bool,\n rngs: nnx.Rngs,\n ):\n self.input_dim = input_dim\n self.model_dim = model_dim\n self.ffn_dim = ffn_dim\n self.out_dim = out_dim\n self.num_blocks = num_blocks\n self.num_heads = num_heads\n self.dropout = dropout\n self.param_dtype = param_dtype\n self.dtype = dtype\n self.use_flash_attention = use_flash_attention\n\n self.pos_enc = PositionalEncoding(self.model_dim)\n self.input_norm1 = nnx.LayerNorm(\n num_features=self.input_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n self.input_dense = nnx.Linear(\n in_features=self.input_dim,\n out_features=self.model_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n self.input_norm2 = nnx.LayerNorm(\n num_features=self.model_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n\n self.blocks: List[TransformerBlock] = []\n for _ in range(self.num_blocks):\n self.blocks.append(\n TransformerBlock(\n model_dim=self.model_dim,\n ffn_dim=self.ffn_dim,\n num_heads=self.num_heads,\n dropout=self.dropout,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n use_flash_attention=self.use_flash_attention,\n decode=decode,\n rngs=rngs,\n )\n )\n self.output_dense = 
nnx.Linear(\n in_features=self.model_dim,\n out_features=self.out_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n\n def __call__(self, x_BTNI: jax.Array, pos_index: Tuple[jax.Array, jax.Array] | None = None) -> jax.Array:\n x_BTNI = self.input_norm1(x_BTNI)\n x_BTNM = self.input_dense(x_BTNI)\n x_BTNM = self.input_norm2(x_BTNM)\n x_BTNM = self.pos_enc(x_BTNM)\n\n for block in self.blocks:\n x_BTNM = block(x_BTNM, pos_index)\n\n x_BTNV = self.output_dense(x_BTNM)\n return x_BTNV\n\ndef normalize(x: jax.Array) -> jax.Array:\n return x / (jnp.linalg.norm(x, ord=2, axis=-1, keepdims=True) + 1e-8)\n\n\nclass VectorQuantizer(nnx.Module):\n """"""\n Dimension keys:\n D: B * T * N\n K: number of latents\n L: latent dimension\n """"""\n def __init__(\n self, latent_dim: int, num_latents: int, dropout: float, rngs: nnx.Rngs\n ):\n self.latent_dim = latent_dim\n self.num_latents = num_latents\n self.dropout = dropout\n\n self.codebook = nnx.Param(\n normalize(\n nnx.initializers.lecun_uniform()(\n rngs.params(), (self.num_latents, self.latent_dim)\n )\n )\n )\n self.drop = nnx.Dropout(self.dropout, rngs=rngs)\n\n def __call__(\n self, x_DL: jax.Array, training: bool\n ) -> Tuple[jax.Array, jax.Array, jax.Array, jax.Array]:\n # --- Compute distances ---\n x_DL = normalize(x_DL)\n normalized_codebook_KL = normalize(self.codebook.value)\n distance_DK = -jnp.matmul(x_DL, normalized_codebook_KL.T)\n if training:\n distance_DK = self.drop(distance_DK)\n\n # --- Get indices and embeddings ---\n indices_D = jnp.argmin(distance_DK, axis=-1)\n z_DL = self.codebook[indices_D]\n\n # --- Straight through estimator ---\n z_q_DL = x_DL + jax.lax.stop_gradient(z_DL - x_DL)\n return z_q_DL, z_DL, x_DL, indices_D\n\n def get_codes(self, indices_E: jax.Array) -> jax.Array:\n return self.codebook[indices_E]\n\n\ndef _create_flash_attention_fn(use_flash_attention: bool, is_causal: bool) -> Callable:\n """"""\n Create an attention function that uses flash attention if enabled.\n\n flax.nnx.MultiHeadAttention provides tensors with shape (batch..., length, num_heads, head_dim),\n but jax.nn.dot_product_attention expects (batch, length, num_heads, head_dim). We reshape to\n ensure compatibility. cuDNN's flash attention additionally requires a sequence length that\n is a multiple of 4. We pad the sequence length to the nearest multiple of 4 and mask\n accordingly. Note that cuDNN requires the mask to be broadcast before calling the attention\n function due to strict shape checking.\n """"""\n\n # FIXME (f.srambical): keys and values could have different dimensionalities\n def attention_fn(query_BSHD, key_BSHD, value_BSHD, bias=None, mask_B111=None, **kwargs):\n implementation = ""cudnn"" if use_flash_attention else None\n\n def _merge_batch_dims(x):\n return einops.rearrange(x, ""... l h k -> (...) 
l h k"")\n\n def _pad(x):\n return jnp.pad(x, ((0, 0), (0, pad_size), (0, 0), (0, 0)))\n\n original_shape = query_BSHD.shape\n original_seq_len = query_BSHD.shape[-3]\n\n # Pad to nearest multiple of 4\n T = ((original_seq_len + 3) // 4) * 4\n pad_size = T - original_seq_len\n\n query_BTHD = _pad(_merge_batch_dims(query_BSHD))\n key_BTHD = _pad(_merge_batch_dims(key_BSHD))\n value_BTHD = _pad(_merge_batch_dims(value_BSHD))\n B = query_BTHD.shape[0]\n\n attention_mask = jnp.ones((T, T), dtype=jnp.bool_)\n attention_mask = attention_mask.at[original_seq_len:, :].set(False)\n attention_mask = attention_mask.at[:, original_seq_len:].set(False)\n\n # Handle causal mask for cached decoder self-attention (from nnx.MultiHeadAttention)\n if mask_B111 is not None:\n mask_B111 = _merge_batch_dims(mask_B111)\n # We need to broadcast T and S dimensions to target_seq_len since cudnn attention strictly checks the mask shape\n # https://github.com/jax-ml/jax/issues/28974\n # https://github.com/jax-ml/jax/blob/08c7677393672ccb85c10f1ed0bd506905c3c994/jax/_src/cudnn/fused_attention_stablehlo.py#L1830\n # https://github.com/jax-ml/jax/blob/08c7677393672ccb85c10f1ed0bd506905c3c994/jax/_src/cudnn/fused_attention_stablehlo.py#L337\n mask_B1QK = einops.repeat(mask_B111, ""... 1 1 -> ... t s"", t=T, s=T)\n mask_B1QK = mask_B111.astype(jnp.bool)\n else:\n mask_11QK = attention_mask[jnp.newaxis, jnp.newaxis, :, :]\n mask_B1QK = jnp.broadcast_to(mask_11QK, (B, 1, T, T))\n\n bias_4d = _pad(_merge_batch_dims(bias)) if bias is not None else None\n\n # NOTE: jax.nn.dot_product_attention does not support dropout\n output_4d = jax.nn.dot_product_attention(\n query=query_BTHD,\n key=key_BTHD,\n value=value_BTHD,\n bias=bias_4d,\n mask=mask_B1QK,\n implementation=implementation,\n is_causal=is_causal,\n )\n return output_4d[..., :original_seq_len, :, :].reshape(original_shape)\n\n return attention_fn\n",python,tab
|
7 |
+
6,12453,"utils/nn.py",14603,0,"",python,selection_mouse
|
8 |
+
7,12683,"utils/nn.py",0,0,"",python,selection_command
|
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-3f2b1a99-0d75-466c-970c-4deff62cba851753462933379-2025_07_25-19.02.23.245/source.csv
ADDED
@@ -0,0 +1,94 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
Sequence,Time,File,RangeOffset,RangeLength,Text,Language,Type
|
2 |
+
1,3,"genie.py",0,0,"from typing import Dict, Any\n\nimport optax\nimport jax\nimport jax.numpy as jnp\nimport flax.nnx as nnx\nfrom flax.training.train_state import TrainState\nimport orbax.checkpoint as ocp\n\nfrom models.dynamics import DynamicsMaskGIT\nfrom models.lam import LatentActionModel\nfrom models.tokenizer import TokenizerVQVAE\n\nimport grain\n\n\nclass Genie(nnx.Module):\n """"""Genie model""""""\n\n def __init__(\n self,\n in_dim: int,\n tokenizer_dim: int,\n tokenizer_ffn_dim: int,\n latent_patch_dim: int,\n num_patch_latents: int,\n patch_size: int,\n tokenizer_num_blocks: int,\n tokenizer_num_heads: int,\n lam_dim: int,\n lam_ffn_dim: int,\n latent_action_dim: int,\n num_latent_actions: int,\n lam_patch_size: int,\n lam_num_blocks: int,\n lam_num_heads: int,\n lam_co_train: bool,\n dyna_dim: int,\n dyna_ffn_dim: int,\n dyna_num_blocks: int,\n dyna_num_heads: int,\n param_dtype: jnp.dtype,\n dtype: jnp.dtype,\n use_flash_attention: bool,\n rngs: nnx.Rngs,\n dropout: float = 0.0,\n mask_limit: float = 0.0,\n ):\n # --- Tokenizer ---\n self.in_dim = in_dim\n self.tokenizer_dim = tokenizer_dim\n self.tokenizer_ffn_dim = tokenizer_ffn_dim\n self.latent_patch_dim = latent_patch_dim\n self.num_patch_latents = num_patch_latents\n self.patch_size = patch_size\n self.tokenizer_num_blocks = tokenizer_num_blocks\n self.tokenizer_num_heads = tokenizer_num_heads\n # --- LAM ---\n self.lam_dim = lam_dim\n self.lam_ffn_dim = lam_ffn_dim\n self.latent_action_dim = latent_action_dim\n self.num_latent_actions = num_latent_actions\n self.lam_patch_size = lam_patch_size\n self.lam_num_blocks = lam_num_blocks\n self.lam_num_heads = lam_num_heads\n self.lam_co_train = lam_co_train\n # --- Dynamics ---\n self.dyna_dim = dyna_dim\n self.dyna_ffn_dim = dyna_ffn_dim\n self.dyna_num_blocks = dyna_num_blocks\n self.dyna_num_heads = dyna_num_heads\n self.param_dtype = param_dtype\n self.dtype = dtype\n self.use_flash_attention = use_flash_attention\n self.dropout = dropout\n self.mask_limit = mask_limit\n\n self.tokenizer = TokenizerVQVAE(\n in_dim=self.in_dim,\n model_dim=self.tokenizer_dim,\n ffn_dim=self.tokenizer_ffn_dim,\n latent_dim=self.latent_patch_dim,\n num_latents=self.num_patch_latents,\n patch_size=self.patch_size,\n num_blocks=self.tokenizer_num_blocks,\n num_heads=self.tokenizer_num_heads,\n dropout=0.0,\n codebook_dropout=0.0,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n use_flash_attention=self.use_flash_attention,\n rngs=rngs,\n )\n self.lam = LatentActionModel(\n in_dim=self.in_dim,\n model_dim=self.lam_dim,\n ffn_dim=self.lam_ffn_dim,\n latent_dim=self.latent_patch_dim,\n num_latents=self.num_latent_actions,\n patch_size=self.lam_patch_size,\n num_blocks=self.lam_num_blocks,\n num_heads=self.lam_num_heads,\n dropout=0.0,\n codebook_dropout=0.0,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n use_flash_attention=self.use_flash_attention,\n rngs=rngs,\n )\n self.dynamics = DynamicsMaskGIT(\n model_dim=self.dyna_dim,\n ffn_dim=self.dyna_ffn_dim,\n num_latents=self.num_patch_latents,\n latent_action_dim=self.latent_action_dim,\n num_blocks=self.dyna_num_blocks,\n num_heads=self.dyna_num_heads,\n dropout=self.dropout,\n mask_limit=self.mask_limit,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n use_flash_attention=self.use_flash_attention,\n rngs=rngs,\n )\n\n def __call__(self, batch: Dict[str, Any], training: bool = True) -> Dict[str, Any]:\n tokenizer_outputs = self.tokenizer.vq_encode(batch[""videos""], training=False)\n lam_outputs = 
self.lam.vq_encode(batch[""videos""], training=False)\n latent_actions = jax.lax.cond(\n self.lam_co_train,\n lambda: lam_outputs[""z_q""],\n lambda: jax.lax.stop_gradient(lam_outputs[""z_q""]),\n )\n outputs = dict(\n video_tokens=jax.lax.stop_gradient(tokenizer_outputs[""indices""]),\n latent_actions=latent_actions,\n )\n outputs[""mask_rng""] = batch[""mask_rng""]\n dyna_outputs = self.dynamics(outputs, training)\n outputs.update(dyna_outputs)\n mle_indices = jnp.argmax(outputs[""token_logits""], axis=-1)\n outputs[""recon""] = self.tokenizer.decode(\n mle_indices, batch[""videos""].shape[2:4]\n )\n outputs[""lam_indices""] = lam_outputs[""indices""]\n return outputs\n\n def sample(\n self,\n batch: Dict[str, Any],\n seq_len: int,\n steps: int = 25,\n temperature: float = 1,\n sample_argmax: bool = False,\n ) -> Any:\n """"""\n Autoregressively samples up to `seq_len` future frames, following Figure 8 of the paper.\n\n - Input frames are tokenized once.\n - Future frames are generated autoregressively in token space.\n - All frames are detokenized in a single pass.\n\n Note:\n - For interactive or step-wise sampling, detokenization should occur after each action.\n - To maintain consistent tensor shapes across timesteps, all current and future frames are decoded at every step.\n - Temporal causal structure is preserved by\n a) reapplying the mask before each decoding step.\n b) a temporal causal mask is applied within each ST-transformer block.\n\n Dimension keys:\n B: batch size\n T: number of input (conditioning) frames\n N: patches per frame\n S: sequence length\n A: action space\n D: model latent dimension\n """"""\n # --- Encode videos and actions ---\n tokenizer_out = self.tokenizer.vq_encode(batch[""videos""], training=False)\n token_idxs = tokenizer_out[""indices""] # (B, T, N)\n B, T, N = token_idxs.shape\n pad_shape = (B, seq_len - T, N)\n pad = jnp.zeros(pad_shape, dtype=token_idxs.dtype)\n token_idxs = jnp.concatenate([token_idxs, pad], axis=1) # (B, S, N)\n action_tokens = self.lam.vq.get_codes(batch[""latent_actions""])\n\n # Define the inner MaskGIT loop using nnx.scan\n maskgit_step = MaskGITStep(\n dynamics=self.dynamics,\n tokenizer=self.tokenizer,\n temperature=temperature,\n sample_argmax=sample_argmax,\n steps=steps,\n )\n\n def maskgit_scan_fn(module, carry, x):\n new_carry, _ = module(carry, x)\n return new_carry, None\n\n MaskGITLoop = nnx.scan(\n maskgit_scan_fn,\n in_axes=(None, nnx.Carry, 0), # (module, carry, x)\n out_axes=(nnx.Carry, None), # (new_carry, None)\n )\n\n # Define the outer autoregressive loop's body function\n def generation_step_fn(carry, step_t):\n rng, current_token_idxs = carry\n rng, step_rng = jax.random.split(rng)\n\n # Mask current and future frames (i.e., t >= step_t)\n mask = jnp.arange(seq_len) >= step_t # (S,)\n mask = jnp.broadcast_to(mask[None, :, None], (B, seq_len, N)).astype(bool) # (B, S, N)\n masked_token_idxs = current_token_idxs * ~mask\n\n # --- Initialize and run MaskGIT loop ---\n init_carry_maskgit = (\n step_rng,\n masked_token_idxs,\n mask,\n action_tokens,\n )\n final_carry_maskgit, _ = MaskGITLoop(\n maskgit_step, init_carry_maskgit, jnp.arange(steps)\n )\n updated_token_idxs = final_carry_maskgit[1]\n new_carry = (rng, updated_token_idxs)\n return new_carry, None\n\n # --- Run the autoregressive generation using jax.lax.scan ---\n initial_carry = (batch[""rng""], token_idxs)\n timesteps_to_scan = jnp.arange(T, seq_len)\n final_carry, _ = jax.lax.scan(\n generation_step_fn, initial_carry, timesteps_to_scan\n )\n 
final_token_idxs = final_carry[1]\n\n # --- Decode all tokens at once at the end ---\n final_frames = self.tokenizer.decode(\n final_token_idxs,\n video_hw=batch[""videos""].shape[2:4],\n )\n return final_frames\n\n def vq_encode(self, batch, training) -> Dict[str, Any]:\n # --- Preprocess videos ---\n lam_output = self.lam.vq_encode(batch[""videos""], training=training)\n return lam_output[""indices""]\n\n\nclass MaskGITStep(nnx.Module):\n def __init__(\n self,\n dynamics: DynamicsMaskGIT,\n tokenizer: TokenizerVQVAE,\n temperature: float,\n sample_argmax: bool,\n steps: int,\n ):\n self.dynamics = dynamics\n self.tokenizer = tokenizer\n self.temperature = temperature\n self.sample_argmax = sample_argmax\n self.steps = steps\n\n def __call__(self, carry, x):\n rng, token_idxs, mask, action_tokens = carry\n step = x\n N = token_idxs.shape[2]\n\n # --- Construct + encode video ---\n vid_embed = self.dynamics.patch_embed(token_idxs) # (B, S, N, D)\n mask_token = self.dynamics.mask_token.value # (1, 1, 1, D,)\n mask_expanded = mask[..., None] # (B, S, N, 1)\n vid_embed = jnp.where(mask_expanded, mask_token, vid_embed)\n\n # --- Predict transition ---\n act_embed = self.dynamics.action_up(action_tokens)\n vid_embed += jnp.pad(act_embed, ((0, 0), (1, 0), (0, 0), (0, 0)))\n unmasked_ratio = jnp.cos(jnp.pi * (step + 1) / (self.steps * 2))\n step_temp = self.temperature * (1.0 - unmasked_ratio)\n final_logits = self.dynamics.dynamics(vid_embed) / step_temp\n\n # --- Sample new tokens for final frame ---\n if self.sample_argmax:\n sampled_token_idxs = jnp.argmax(final_logits, axis=-1)\n else:\n rng, _rng = jax.random.split(rng)\n sampled_token_idxs = jax.random.categorical(_rng, final_logits)\n gather_fn = jax.vmap(jax.vmap(jax.vmap(lambda x, y: x[y])))\n final_token_probs = gather_fn(jax.nn.softmax(final_logits), sampled_token_idxs)\n final_token_probs += ~mask\n # Update masked tokens only\n token_idxs = jnp.where(mask, sampled_token_idxs, token_idxs)\n\n # --- Update mask ---\n num_unmasked_tokens = jnp.round(N * (1.0 - unmasked_ratio)).astype(int)\n idx_mask = jnp.arange(final_token_probs.shape[-1]) > num_unmasked_tokens\n sorted_idxs = jnp.argsort(final_token_probs, axis=-1, descending=True)\n mask_update_fn = jax.vmap(lambda msk, ids: msk.at[ids].set(idx_mask))\n new_mask = mask_update_fn(mask, sorted_idxs)\n\n new_carry = (rng, token_idxs, new_mask, action_tokens)\n return new_carry, None\n\n\n# FIXME (f.srambical): add conversion script for old checkpoints\ndef restore_genie_components(\n optimizer: nnx.Optimizer,\n sharding: jax.sharding.NamedSharding,\n rng: jax.Array,\n args,\n):\n """"""Restore pre-trained Genie components""""""\n rngs = nnx.Rngs(rng)\n\n # dummy values since we only use tx to initialize the dummy train states\n dummy_tx = optax.adamw(\n learning_rate=optax.constant_schedule(args.max_lr),\n b1=0.9,\n b2=0.9,\n weight_decay=1e-4,\n )\n handler_registry = ocp.handlers.DefaultCheckpointHandlerRegistry()\n handler_registry.add(\n ""model_state"", ocp.args.PyTreeRestore, ocp.handlers.PyTreeCheckpointHandler\n )\n\n checkpoint_options = ocp.CheckpointManagerOptions(\n step_format_fixed_length=6,\n )\n tokenizer_checkpoint_manager = ocp.CheckpointManager(\n directory=args.tokenizer_checkpoint,\n options=checkpoint_options,\n handler_registry=handler_registry,\n )\n dummy_tokenizer = TokenizerVQVAE(\n in_dim=args.image_channels,\n model_dim=args.tokenizer_dim,\n ffn_dim=args.tokenizer_ffn_dim,\n latent_dim=args.latent_patch_dim,\n num_latents=args.num_patch_latents,\n 
patch_size=args.patch_size,\n num_blocks=args.tokenizer_num_blocks,\n num_heads=args.tokenizer_num_heads,\n dropout=args.dropout,\n codebook_dropout=args.dropout,\n param_dtype=args.param_dtype,\n dtype=args.dtype,\n use_flash_attention=args.use_flash_attention,\n rngs=rngs,\n )\n dummy_tokenizer_optimizer = nnx.Optimizer(dummy_tokenizer, dummy_tx)\n dummy_tokenizer_optimizer_state = nnx.state(dummy_tokenizer_optimizer)\n abstract_sharded_tokenizer_optimizer_state = _create_abstract_sharded_pytree(\n dummy_tokenizer_optimizer_state, sharding\n )\n restored_tokenizer = tokenizer_checkpoint_manager.restore(\n step=tokenizer_checkpoint_manager.latest_step(),\n args=ocp.args.Composite(\n model_state=ocp.args.PyTreeRestore(\n abstract_sharded_tokenizer_optimizer_state\n ),\n ),\n )[""model_state""]\n nnx.update(dummy_tokenizer_optimizer.model, restored_tokenizer.model)\n optimizer.model.tokenizer = dummy_tokenizer_optimizer.model\n tokenizer_checkpoint_manager.close()\n\n if args.lam_checkpoint:\n lam_checkpoint_manager = ocp.CheckpointManager(\n directory=args.lam_checkpoint,\n options=checkpoint_options,\n handler_registry=handler_registry,\n )\n dummy_lam = LatentActionModel(\n in_dim=args.image_channels,\n model_dim=args.lam_dim,\n ffn_dim=args.lam_ffn_dim,\n latent_dim=args.latent_patch_dim,\n num_latents=args.num_latent_actions,\n patch_size=args.lam_patch_size,\n num_blocks=args.lam_num_blocks,\n num_heads=args.lam_num_heads,\n dropout=args.dropout,\n codebook_dropout=args.dropout,\n param_dtype=args.param_dtype,\n dtype=args.dtype,\n use_flash_attention=args.use_flash_attention,\n rngs=rngs,\n )\n dummy_lam_optimizer = nnx.Optimizer(dummy_lam, dummy_tx)\n dummy_lam_optimizer_state = nnx.state(dummy_lam_optimizer)\n abstract_sharded_lam_optimizer_state = _create_abstract_sharded_pytree(\n dummy_lam_optimizer_state, sharding\n )\n restored_lam_optimizer = lam_checkpoint_manager.restore(\n step=lam_checkpoint_manager.latest_step(),\n args=ocp.args.Composite(\n model_state=ocp.args.PyTreeRestore(\n abstract_sharded_lam_optimizer_state\n ),\n ),\n )[""model_state""]\n nnx.update(dummy_lam_optimizer.model, restored_lam_optimizer.model)\n optimizer.model.lam = dummy_lam_optimizer.model\n # Remove the LAM decoder to save memory and avoid unnecessary computation.\n del optimizer.model.lam.decoder\n lam_checkpoint_manager.close()\n\n return optimizer\n\n\ndef _create_abstract_sharded_pytree(pytree_template, sharding_spec):\n """"""Replaces arrays in a pytree with ShapeDtypeStructs having the given sharding.""""""\n\n def map_fn(leaf_template):\n if hasattr(leaf_template, ""shape"") and hasattr(leaf_template, ""dtype""):\n return jax.ShapeDtypeStruct(\n leaf_template.shape, leaf_template.dtype, sharding=sharding_spec\n )\n return leaf_template\n\n return jax.tree_util.tree_map(map_fn, pytree_template)\n",python,tab
|
3 |
+
2,357,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,0,"7:02:23 PM [info] Activating crowd-code\n7:02:23 PM [info] Recording started\n7:02:23 PM [info] Initializing git provider using file system watchers...\n",Log,tab
|
4 |
+
3,587,"extension-output-pdoom-org.crowd-code-#1-crowd-code",150,0,"7:02:23 PM [info] Git repository found\n7:02:23 PM [info] Git provider initialized successfully\n7:02:23 PM [info] Initial git state: [object Object]\n",Log,content
|
5 |
+
4,2898,"genie.py",0,0,"",python,tab
|
6 |
+
5,3378,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,0,"",Log,tab
|
7 |
+
6,5988,"genie.py",0,0,"",python,tab
|
8 |
+
7,19218,"genie.py",0,0,"",python,tab
|
9 |
+
8,19298,"genie.py",6670,0,"",python,selection_command
|
10 |
+
9,56549,"/fast/home/franz.srambical/jafar/sample.py",0,0,"from dataclasses import dataclass\nimport time\nimport os\nimport optax\n\nimport dm_pix as pix\nimport einops\nimport jax\nimport jax.numpy as jnp\nimport flax.linen as nn\nimport numpy as np\nimport orbax.checkpoint as ocp\nfrom PIL import Image, ImageDraw\nimport tyro\nfrom flax import nnx\n\nfrom genie import Genie\nfrom utils.dataloader import get_dataloader\n\n\n@dataclass\nclass Args:\n # Experiment\n seed: int = 0\n seq_len: int = 16\n image_channels: int = 3\n image_height: int = 90\n image_width: int = 160\n data_dir: str = ""data/coinrun_episodes""\n checkpoint: str = """"\n # Sampling\n batch_size: int = 1\n maskgit_steps: int = 25\n temperature: float = 1.0\n sample_argmax: bool = True\n start_frame: int = 0\n # Tokenizer checkpoint\n tokenizer_dim: int = 512\n tokenizer_ffn_dim: int = 2048\n latent_patch_dim: int = 32\n num_patch_latents: int = 1024\n patch_size: int = 4\n tokenizer_num_blocks: int = 4\n tokenizer_num_heads: int = 8\n # LAM checkpoint\n lam_dim: int = 512\n lam_ffn_dim: int = 2048\n latent_action_dim: int = 32\n num_latent_actions: int = 6\n lam_patch_size: int = 16\n lam_num_blocks: int = 4\n lam_num_heads: int = 8\n # Dynamics checkpoint\n dyna_dim: int = 512\n dyna_ffn_dim: int = 2048\n dyna_num_blocks: int = 6\n dyna_num_heads: int = 8\n param_dtype = jnp.float32\n dtype = jnp.bfloat16\n use_flash_attention: bool = True\n\n\nargs = tyro.cli(Args)\n\nif __name__ == ""__main__"":\n jax.distributed.initialize()\n\n rng = jax.random.PRNGKey(args.seed)\n\n # --- Load Genie checkpoint ---\n rngs = nnx.Rngs(rng)\n genie = Genie(\n # Tokenizer\n in_dim=args.image_channels,\n tokenizer_dim=args.tokenizer_dim,\n tokenizer_ffn_dim=args.tokenizer_ffn_dim,\n latent_patch_dim=args.latent_patch_dim,\n num_patch_latents=args.num_patch_latents,\n patch_size=args.patch_size,\n tokenizer_num_blocks=args.tokenizer_num_blocks,\n tokenizer_num_heads=args.tokenizer_num_heads,\n # LAM\n lam_dim=args.lam_dim,\n lam_ffn_dim=args.lam_ffn_dim,\n latent_action_dim=args.latent_action_dim,\n num_latent_actions=args.num_latent_actions,\n lam_patch_size=args.lam_patch_size,\n lam_num_blocks=args.lam_num_blocks,\n lam_num_heads=args.lam_num_heads,\n lam_co_train=False,\n # Dynamics\n dyna_dim=args.dyna_dim,\n dyna_ffn_dim=args.dyna_ffn_dim,\n dyna_num_blocks=args.dyna_num_blocks,\n dyna_num_heads=args.dyna_num_heads,\n param_dtype=args.param_dtype,\n dtype=args.dtype,\n use_flash_attention=args.use_flash_attention,\n rngs=rngs,\n )\n\n handler_registry = ocp.handlers.DefaultCheckpointHandlerRegistry()\n handler_registry.add(\n ""model_state"", ocp.args.PyTreeSave, ocp.handlers.PyTreeCheckpointHandler\n )\n handler_registry.add(\n ""model_state"", ocp.args.PyTreeRestore, ocp.handlers.PyTreeCheckpointHandler\n )\n checkpoint_options = ocp.CheckpointManagerOptions(\n step_format_fixed_length=6,\n )\n checkpoint_manager = ocp.CheckpointManager(\n args.checkpoint,\n options=checkpoint_options,\n handler_registry=handler_registry,\n )\n\n dummy_tx = optax.adamw(\n learning_rate=optax.linear_schedule(0.0001, 0.0001, 10000),\n b1=0.9,\n b2=0.9,\n weight_decay=1e-4,\n mu_dtype=args.dtype,\n )\n dummy_optimizer = nnx.Optimizer(genie, dummy_tx)\n\n abstract_optimizer = nnx.eval_shape(lambda: dummy_optimizer)\n abstract_optimizer_state = nnx.state(abstract_optimizer)\n restored = checkpoint_manager.restore(\n checkpoint_manager.latest_step(),\n args=ocp.args.Composite(\n model_state=ocp.args.PyTreeRestore(abstract_optimizer_state),\n ),\n )\n 
restored_optimizer_state = restored[""model_state""]\n nnx.update(dummy_optimizer, restored_optimizer_state)\n\n # --- Define sampling function ---\n # @nnx.jit\n # @jax.jit\n def _sampling_fn(model, batch):\n """"""Runs Genie.sample with pre-defined generation hyper-parameters.""""""\n return model.sample(\n batch,\n args.seq_len,\n args.maskgit_steps,\n args.temperature,\n args.sample_argmax,\n )\n\n\n # --- Define autoregressive sampling loop ---\n def _autoreg_sample(rng, video_batch, action_batch):\n vid = video_batch[:, : args.start_frame + 1]\n rng, _rng = jax.random.split(rng)\n batch = dict(videos=vid, latent_actions=action_batch, rng=_rng)\n generated_vid = genie.sample(batch, args.seq_len, args.maskgit_steps, args.temperature, args.sample_argmax)\n return generated_vid\n\n\n # --- Get video + latent actions ---\n array_record_files = [\n os.path.join(args.data_dir, x)\n for x in os.listdir(args.data_dir)\n if x.endswith("".array_record"")\n ]\n dataloader = get_dataloader(\n array_record_files,\n args.seq_len,\n args.batch_size,\n args.image_height,\n args.image_width,\n args.image_channels,\n num_workers=8,\n prefetch_buffer_size=1,\n seed=args.seed,\n )\n video_batch = next(iter(dataloader))\n # Get latent actions for all videos in the batch\n batch = dict(videos=video_batch)\n action_batch = genie.vq_encode(batch, training=False) # type: ignore[arg-type]\n action_batch = jnp.asarray(action_batch).reshape(video_batch.shape[0], args.seq_len - 1, 1)\n\n # --- Sample + evaluate video ---\n vid = _autoreg_sample(rng, video_batch, action_batch)\n gt = video_batch[:, : vid.shape[1]].clip(0, 1).reshape(-1, *video_batch.shape[2:])\n recon = vid.clip(0, 1).reshape(-1, *vid.shape[2:])\n # FIXME (f.srambical): investigate why this is needed\n gt = gt.astype(jnp.float32)\n ssim = pix.ssim(gt[:, args.start_frame + 1 :], recon[:, args.start_frame + 1 :]).mean()\n print(f""SSIM: {ssim}"")\n\n # --- Construct video ---\n # true_videos = (video_batch * 255).astype(np.uint8)\n # pred_videos = (vid * 255).astype(np.uint8)\n # video_comparison = np.zeros((2, *vid.shape), dtype=np.uint8)\n # video_comparison[0] = true_videos[:, : args.seq_len]\n # video_comparison[1] = pred_videos\n # frames = einops.rearrange(video_comparison, ""n b t h w c -> t (b h) (n w) c"")\n\n # # --- Save video ---\n # imgs = [Image.fromarray(img) for img in frames]\n # # Write actions on each frame, on each row (i.e., for each video in the batch, on the GT row)\n # for t, img in enumerate(imgs[1:]):\n # d = ImageDraw.Draw(img)\n # for row in range(action_batch.shape[0]):\n # action = action_batch[row, t, 0]\n # y_offset = row * video_batch.shape[2] + 2\n # d.text((2, y_offset), f""{action}"", fill=255)\n # imgs[0].save(\n # f""generation_{time.time()}.gif"",\n # save_all=True,\n # append_images=imgs[1:],\n # duration=250,\n # loop=0,\n # )\n",python,tab
|
11 |
+
10,56549,"/fast/home/franz.srambical/jafar/sample.py",5611,0,"",python,selection_command
|
12 |
+
11,58027,"/fast/home/franz.srambical/jafar/sample.py",4644,0,"",python,selection_command
|
13 |
+
12,59949,"/fast/home/franz.srambical/jafar/genie.py",0,0,"from typing import Dict, Any\n\nimport optax\nimport jax\nimport jax.numpy as jnp\nimport flax.nnx as nnx\nfrom flax.training.train_state import TrainState\nimport orbax.checkpoint as ocp\n\nfrom models.dynamics import DynamicsMaskGIT\nfrom models.lam import LatentActionModel\nfrom models.tokenizer import TokenizerVQVAE\n\nimport grain\n\n\nclass Genie(nnx.Module):\n """"""Genie model""""""\n\n def __init__(\n self,\n in_dim: int,\n tokenizer_dim: int,\n tokenizer_ffn_dim: int,\n latent_patch_dim: int,\n num_patch_latents: int,\n patch_size: int,\n tokenizer_num_blocks: int,\n tokenizer_num_heads: int,\n lam_dim: int,\n lam_ffn_dim: int,\n latent_action_dim: int,\n num_latent_actions: int,\n lam_patch_size: int,\n lam_num_blocks: int,\n lam_num_heads: int,\n lam_co_train: bool,\n dyna_dim: int,\n dyna_ffn_dim: int,\n dyna_num_blocks: int,\n dyna_num_heads: int,\n param_dtype: jnp.dtype,\n dtype: jnp.dtype,\n use_flash_attention: bool,\n rngs: nnx.Rngs,\n dropout: float = 0.0,\n mask_limit: float = 0.0,\n ):\n # --- Tokenizer ---\n self.in_dim = in_dim\n self.tokenizer_dim = tokenizer_dim\n self.tokenizer_ffn_dim = tokenizer_ffn_dim\n self.latent_patch_dim = latent_patch_dim\n self.num_patch_latents = num_patch_latents\n self.patch_size = patch_size\n self.tokenizer_num_blocks = tokenizer_num_blocks\n self.tokenizer_num_heads = tokenizer_num_heads\n # --- LAM ---\n self.lam_dim = lam_dim\n self.lam_ffn_dim = lam_ffn_dim\n self.latent_action_dim = latent_action_dim\n self.num_latent_actions = num_latent_actions\n self.lam_patch_size = lam_patch_size\n self.lam_num_blocks = lam_num_blocks\n self.lam_num_heads = lam_num_heads\n self.lam_co_train = lam_co_train\n # --- Dynamics ---\n self.dyna_dim = dyna_dim\n self.dyna_ffn_dim = dyna_ffn_dim\n self.dyna_num_blocks = dyna_num_blocks\n self.dyna_num_heads = dyna_num_heads\n self.param_dtype = param_dtype\n self.dtype = dtype\n self.use_flash_attention = use_flash_attention\n self.dropout = dropout\n self.mask_limit = mask_limit\n\n self.tokenizer = TokenizerVQVAE(\n in_dim=self.in_dim,\n model_dim=self.tokenizer_dim,\n ffn_dim=self.tokenizer_ffn_dim,\n latent_dim=self.latent_patch_dim,\n num_latents=self.num_patch_latents,\n patch_size=self.patch_size,\n num_blocks=self.tokenizer_num_blocks,\n num_heads=self.tokenizer_num_heads,\n dropout=0.0,\n codebook_dropout=0.0,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n use_flash_attention=self.use_flash_attention,\n rngs=rngs,\n )\n self.lam = LatentActionModel(\n in_dim=self.in_dim,\n model_dim=self.lam_dim,\n ffn_dim=self.lam_ffn_dim,\n latent_dim=self.latent_patch_dim,\n num_latents=self.num_latent_actions,\n patch_size=self.lam_patch_size,\n num_blocks=self.lam_num_blocks,\n num_heads=self.lam_num_heads,\n dropout=0.0,\n codebook_dropout=0.0,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n use_flash_attention=self.use_flash_attention,\n rngs=rngs,\n )\n self.dynamics = DynamicsMaskGIT(\n model_dim=self.dyna_dim,\n ffn_dim=self.dyna_ffn_dim,\n num_latents=self.num_patch_latents,\n latent_action_dim=self.latent_action_dim,\n num_blocks=self.dyna_num_blocks,\n num_heads=self.dyna_num_heads,\n dropout=self.dropout,\n mask_limit=self.mask_limit,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n use_flash_attention=self.use_flash_attention,\n rngs=rngs,\n )\n\n def __call__(self, batch: Dict[str, Any], training: bool = True) -> Dict[str, Any]:\n tokenizer_outputs = self.tokenizer.vq_encode(batch[""videos""], training=False)\n 
lam_outputs = self.lam.vq_encode(batch[""videos""], training=False)\n latent_actions = jax.lax.cond(\n self.lam_co_train,\n lambda: lam_outputs[""z_q""],\n lambda: jax.lax.stop_gradient(lam_outputs[""z_q""]),\n )\n outputs = dict(\n video_tokens=jax.lax.stop_gradient(tokenizer_outputs[""indices""]),\n latent_actions=latent_actions,\n )\n outputs[""mask_rng""] = batch[""mask_rng""]\n dyna_outputs = self.dynamics(outputs, training)\n outputs.update(dyna_outputs)\n mle_indices = jnp.argmax(outputs[""token_logits""], axis=-1)\n outputs[""recon""] = self.tokenizer.decode(\n mle_indices, batch[""videos""].shape[2:4]\n )\n outputs[""lam_indices""] = lam_outputs[""indices""]\n return outputs\n\n def sample(\n self,\n batch: Dict[str, Any],\n seq_len: int,\n steps: int = 25,\n temperature: float = 1,\n sample_argmax: bool = False,\n ) -> Any:\n """"""\n Autoregressively samples up to `seq_len` future frames, following Figure 8 of the paper.\n\n - Input frames are tokenized once.\n - Future frames are generated autoregressively in token space.\n - All frames are detokenized in a single pass.\n\n Note:\n - For interactive or step-wise sampling, detokenization should occur after each action.\n - To maintain consistent tensor shapes across timesteps, all current and future frames are decoded at every step.\n - Temporal causal structure is preserved by\n a) reapplying the mask before each decoding step.\n b) a temporal causal mask is applied within each ST-transformer block.\n\n Dimension keys:\n B: batch size\n T: number of input (conditioning) frames\n N: patches per frame\n S: sequence length\n A: action space\n D: model latent dimension\n """"""\n # --- Encode videos and actions ---\n tokenizer_out = self.tokenizer.vq_encode(batch[""videos""], training=False)\n token_idxs = tokenizer_out[""indices""] # (B, T, N)\n B, T, N = token_idxs.shape\n pad_shape = (B, seq_len - T, N)\n pad = jnp.zeros(pad_shape, dtype=token_idxs.dtype)\n token_idxs = jnp.concatenate([token_idxs, pad], axis=1) # (B, S, N)\n action_tokens = self.lam.vq.get_codes(batch[""latent_actions""])\n\n # Define the inner MaskGIT loop using nnx.scan\n maskgit_step = MaskGITStep(\n dynamics=self.dynamics,\n tokenizer=self.tokenizer,\n temperature=temperature,\n sample_argmax=sample_argmax,\n steps=steps,\n )\n\n def maskgit_scan_fn(module, carry, x):\n new_carry, _ = module(carry, x)\n return new_carry, None\n\n MaskGITLoop = nnx.scan(\n maskgit_scan_fn,\n in_axes=(None, nnx.Carry, 0), # (module, carry, x)\n out_axes=(nnx.Carry, None), # (new_carry, None)\n )\n\n # Define the outer autoregressive loop's body function\n def generation_step_fn(carry, step_t):\n rng, current_token_idxs = carry\n rng, step_rng = jax.random.split(rng)\n\n # Mask current and future frames (i.e., t >= step_t)\n mask = jnp.arange(seq_len) >= step_t # (S,)\n mask = jnp.broadcast_to(mask[None, :, None], (B, seq_len, N)).astype(bool) # (B, S, N)\n masked_token_idxs = current_token_idxs * ~mask\n\n # --- Initialize and run MaskGIT loop ---\n init_carry_maskgit = (\n step_rng,\n masked_token_idxs,\n mask,\n action_tokens,\n )\n final_carry_maskgit, _ = MaskGITLoop(\n maskgit_step, init_carry_maskgit, jnp.arange(steps)\n )\n updated_token_idxs = final_carry_maskgit[1]\n new_carry = (rng, updated_token_idxs)\n return new_carry, None\n\n # --- Run the autoregressive generation using jax.lax.scan ---\n initial_carry = (batch[""rng""], token_idxs)\n timesteps_to_scan = jnp.arange(T, seq_len)\n final_carry, _ = jax.lax.scan(\n generation_step_fn, initial_carry, 
timesteps_to_scan\n )\n final_token_idxs = final_carry[1]\n\n # --- Decode all tokens at once at the end ---\n final_frames = self.tokenizer.decode(\n final_token_idxs,\n video_hw=batch[""videos""].shape[2:4],\n )\n return final_frames\n\n def vq_encode(self, batch, training) -> Dict[str, Any]:\n # --- Preprocess videos ---\n lam_output = self.lam.vq_encode(batch[""videos""], training=training)\n return lam_output[""indices""]\n\n\nclass MaskGITStep(nnx.Module):\n def __init__(\n self,\n dynamics: DynamicsMaskGIT,\n tokenizer: TokenizerVQVAE,\n temperature: float,\n sample_argmax: bool,\n steps: int,\n ):\n self.dynamics = dynamics\n self.tokenizer = tokenizer\n self.temperature = temperature\n self.sample_argmax = sample_argmax\n self.steps = steps\n\n def __call__(self, carry, x):\n rng, token_idxs, mask, action_tokens = carry\n step = x\n N = token_idxs.shape[2]\n\n # --- Construct + encode video ---\n vid_embed = self.dynamics.patch_embed(token_idxs) # (B, S, N, D)\n mask_token = self.dynamics.mask_token.value # (1, 1, 1, D,)\n mask_expanded = mask[..., None] # (B, S, N, 1)\n vid_embed = jnp.where(mask_expanded, mask_token, vid_embed)\n\n # --- Predict transition ---\n act_embed = self.dynamics.action_up(action_tokens)\n vid_embed += jnp.pad(act_embed, ((0, 0), (1, 0), (0, 0), (0, 0)))\n unmasked_ratio = jnp.cos(jnp.pi * (step + 1) / (self.steps * 2))\n step_temp = self.temperature * (1.0 - unmasked_ratio)\n final_logits = self.dynamics.dynamics(vid_embed) / step_temp\n\n # --- Sample new tokens for final frame ---\n if self.sample_argmax:\n sampled_token_idxs = jnp.argmax(final_logits, axis=-1)\n else:\n rng, _rng = jax.random.split(rng)\n sampled_token_idxs = jax.random.categorical(_rng, final_logits)\n gather_fn = jax.vmap(jax.vmap(jax.vmap(lambda x, y: x[y])))\n final_token_probs = gather_fn(jax.nn.softmax(final_logits), sampled_token_idxs)\n final_token_probs += ~mask\n # Update masked tokens only\n token_idxs = jnp.where(mask, sampled_token_idxs, token_idxs)\n\n # --- Update mask ---\n num_unmasked_tokens = jnp.round(N * (1.0 - unmasked_ratio)).astype(int)\n idx_mask = jnp.arange(final_token_probs.shape[-1]) > num_unmasked_tokens\n sorted_idxs = jnp.argsort(final_token_probs, axis=-1, descending=True)\n mask_update_fn = jax.vmap(lambda msk, ids: msk.at[ids].set(idx_mask))\n new_mask = mask_update_fn(mask, sorted_idxs)\n\n new_carry = (rng, token_idxs, new_mask, action_tokens)\n return new_carry, None\n\n\n# FIXME (f.srambical): add conversion script for old checkpoints\ndef restore_genie_components(\n optimizer: nnx.Optimizer,\n sharding: jax.sharding.NamedSharding,\n rng: jax.Array,\n args,\n):\n """"""Restore pre-trained Genie components""""""\n rngs = nnx.Rngs(rng)\n\n # dummy values since we only use tx to initialize the dummy train states\n dummy_tx = optax.adamw(\n learning_rate=optax.constant_schedule(args.max_lr),\n b1=0.9,\n b2=0.9,\n weight_decay=1e-4,\n )\n handler_registry = ocp.handlers.DefaultCheckpointHandlerRegistry()\n handler_registry.add(\n ""model_state"", ocp.args.PyTreeRestore, ocp.handlers.PyTreeCheckpointHandler\n )\n\n checkpoint_options = ocp.CheckpointManagerOptions(\n step_format_fixed_length=6,\n )\n tokenizer_checkpoint_manager = ocp.CheckpointManager(\n directory=args.tokenizer_checkpoint,\n options=checkpoint_options,\n handler_registry=handler_registry,\n )\n dummy_tokenizer = TokenizerVQVAE(\n in_dim=args.image_channels,\n model_dim=args.tokenizer_dim,\n ffn_dim=args.tokenizer_ffn_dim,\n latent_dim=args.latent_patch_dim,\n 
num_latents=args.num_patch_latents,\n patch_size=args.patch_size,\n num_blocks=args.tokenizer_num_blocks,\n num_heads=args.tokenizer_num_heads,\n dropout=args.dropout,\n codebook_dropout=args.dropout,\n param_dtype=args.param_dtype,\n dtype=args.dtype,\n use_flash_attention=args.use_flash_attention,\n rngs=rngs,\n )\n dummy_tokenizer_optimizer = nnx.Optimizer(dummy_tokenizer, dummy_tx)\n dummy_tokenizer_optimizer_state = nnx.state(dummy_tokenizer_optimizer)\n abstract_sharded_tokenizer_optimizer_state = _create_abstract_sharded_pytree(\n dummy_tokenizer_optimizer_state, sharding\n )\n restored_tokenizer = tokenizer_checkpoint_manager.restore(\n step=tokenizer_checkpoint_manager.latest_step(),\n args=ocp.args.Composite(\n model_state=ocp.args.PyTreeRestore(\n abstract_sharded_tokenizer_optimizer_state\n ),\n ),\n )[""model_state""]\n nnx.update(dummy_tokenizer_optimizer.model, restored_tokenizer.model)\n optimizer.model.tokenizer = dummy_tokenizer_optimizer.model\n tokenizer_checkpoint_manager.close()\n\n if args.lam_checkpoint:\n lam_checkpoint_manager = ocp.CheckpointManager(\n directory=args.lam_checkpoint,\n options=checkpoint_options,\n handler_registry=handler_registry,\n )\n dummy_lam = LatentActionModel(\n in_dim=args.image_channels,\n model_dim=args.lam_dim,\n ffn_dim=args.lam_ffn_dim,\n latent_dim=args.latent_patch_dim,\n num_latents=args.num_latent_actions,\n patch_size=args.lam_patch_size,\n num_blocks=args.lam_num_blocks,\n num_heads=args.lam_num_heads,\n dropout=args.dropout,\n codebook_dropout=args.dropout,\n param_dtype=args.param_dtype,\n dtype=args.dtype,\n use_flash_attention=args.use_flash_attention,\n rngs=rngs,\n )\n dummy_lam_optimizer = nnx.Optimizer(dummy_lam, dummy_tx)\n dummy_lam_optimizer_state = nnx.state(dummy_lam_optimizer)\n abstract_sharded_lam_optimizer_state = _create_abstract_sharded_pytree(\n dummy_lam_optimizer_state, sharding\n )\n restored_lam_optimizer = lam_checkpoint_manager.restore(\n step=lam_checkpoint_manager.latest_step(),\n args=ocp.args.Composite(\n model_state=ocp.args.PyTreeRestore(\n abstract_sharded_lam_optimizer_state\n ),\n ),\n )[""model_state""]\n nnx.update(dummy_lam_optimizer.model, restored_lam_optimizer.model)\n optimizer.model.lam = dummy_lam_optimizer.model\n # Remove the LAM decoder to save memory and avoid unnecessary computation.\n del optimizer.model.lam.decoder\n lam_checkpoint_manager.close()\n\n return optimizer\n\n\ndef _create_abstract_sharded_pytree(pytree_template, sharding_spec):\n """"""Replaces arrays in a pytree with ShapeDtypeStructs having the given sharding.""""""\n\n def map_fn(leaf_template):\n if hasattr(leaf_template, ""shape"") and hasattr(leaf_template, ""dtype""):\n return jax.ShapeDtypeStruct(\n leaf_template.shape, leaf_template.dtype, sharding=sharding_spec\n )\n return leaf_template\n\n return jax.tree_util.tree_map(map_fn, pytree_template)\n",python,tab
|
14 |
+
13,59950,"/fast/home/franz.srambical/jafar/genie.py",7076,0,"",python,selection_command
|
15 |
+
14,77428,"/fast/home/franz.srambical/jafar/sample.py",0,0,"",python,tab
|
16 |
+
15,79472,"genie.py",0,0,"",python,tab
|
17 |
+
16,96262,"genie.py",7213,0,"",python,selection_command
|
18 |
+
17,97679,"genie.py",7201,63," out_axes=(nnx.Carry, 0), # (new_carry, None)\n",python,content
|
19 |
+
18,113555,"/fast/home/franz.srambical/jafar/genie.py",0,0,"",python,tab
|
20 |
+
19,113557,"/fast/home/franz.srambical/jafar/genie.py",7201,0,"",python,selection_command
|
21 |
+
20,116189,"genie.py",0,0,"",python,tab
|
22 |
+
21,117566,"genie.py",7258,0,"",python,selection_mouse
|
23 |
+
22,155903,"/fast/home/franz.srambical/jafar/genie.py",0,0,"",python,tab
|
24 |
+
23,157626,"/fast/home/franz.srambical/jafar/genie.py",7201,62," out_axes=(nnx.Carry, None), # (new_carry, None)",python,content
|
25 |
+
24,159605,"genie.py",0,0,"",python,tab
|
26 |
+
25,179385,"/fast/home/franz.srambical/jafar/genie.py",0,0,"",python,tab
|
27 |
+
26,179385,"/fast/home/franz.srambical/jafar/genie.py",7979,0,"",python,selection_command
|
28 |
+
27,213168,"/fast/home/franz.srambical/jafar/genie.py",7979,117," final_carry_maskgit, _ = jax.lax.scan(\n maskgit_step_fn, init_carry_maskgit, jnp.arange(steps)",python,content
|
29 |
+
28,213169,"/fast/home/franz.srambical/jafar/genie.py",7076,199,"",python,content
|
30 |
+
29,213169,"/fast/home/franz.srambical/jafar/genie.py",6949,90," new_carry = (rng, token_idxs, new_mask, action_tokens)",python,content
|
31 |
+
30,213169,"/fast/home/franz.srambical/jafar/genie.py",6670,277," # Define the inner MaskGIT loop function\n def maskgit_step_fn(carry, step):\n rng, token_idxs, mask, action_tokens = carry\n N = token_idxs.shape[2]\n\n # --- Construct + encode video ---\n vid_embed = self.dynamics.patch_embed(token_idxs) # (B, S, N, D)\n mask_token = self.dynamics.mask_token.value # (1, 1, 1, D,)\n mask_expanded = mask[..., None] # (B, S, N, 1)\n vid_embed = jnp.where(mask_expanded, mask_token, vid_embed)\n\n # --- Predict transition ---\n act_embed = self.dynamics.action_up(action_tokens)\n vid_embed += jnp.pad(act_embed, ((0, 0), (1, 0), (0, 0), (0, 0)))\n unmasked_ratio = jnp.cos(jnp.pi * (step + 1) / (steps * 2))\n step_temp = temperature * (1.0 - unmasked_ratio)\n final_logits = self.dynamics.dynamics(vid_embed) / step_temp\n\n # --- Sample new tokens for final frame ---\n if sample_argmax:\n sampled_token_idxs = jnp.argmax(final_logits, axis=-1)\n else:\n rng, _rng = jax.random.split(rng)\n sampled_token_idxs = jax.random.categorical(_rng, final_logits)\n gather_fn = jax.vmap(jax.vmap(jax.vmap(lambda x, y: x[y])))\n final_token_probs = gather_fn(jax.nn.softmax(final_logits), sampled_token_idxs)\n final_token_probs += ~mask\n # Update masked tokens only\n token_idxs = jnp.where(mask, sampled_token_idxs, token_idxs)\n\n # --- Update mask ---\n num_unmasked_tokens = jnp.round(N * (1.0 - unmasked_ratio)).astype(int)\n idx_mask = jnp.arange(final_token_probs.shape[-1]) > num_unmasked_tokens\n sorted_idxs = jnp.argsort(final_token_probs, axis=-1, descending=True)\n mask_update_fn = jax.vmap(lambda msk, ids: msk.at[ids].set(idx_mask))\n new_mask = mask_update_fn(mask, sorted_idxs)",python,content
|
32 |
+
31,223860,"/fast/home/franz.srambical/jafar/genie.py",9430,121," final_carry_maskgit, _ = MaskGITLoop(\n maskgit_step, init_carry_maskgit, jnp.arange(steps)",python,content
|
33 |
+
32,223860,"/fast/home/franz.srambical/jafar/genie.py",8726,0," MaskGITLoop = nnx.scan(\n maskgit_scan_fn,\n in_axes=(None, nnx.Carry, 0), # (module, carry, x)\n out_axes=(nnx.Carry, None), # (new_carry, None)\n )\n\n",python,content
|
34 |
+
33,223860,"/fast/home/franz.srambical/jafar/genie.py",8623,66," def maskgit_scan_fn(module, carry, x):\n new_carry, _ = module(carry, x)",python,content
|
35 |
+
34,223860,"/fast/home/franz.srambical/jafar/genie.py",6670,1951," # Define the inner MaskGIT loop using nnx.scan\n maskgit_step = MaskGITStep(\n dynamics=self.dynamics,\n tokenizer=self.tokenizer,\n temperature=temperature,\n sample_argmax=sample_argmax,\n steps=steps,\n )",python,content
|
36 |
+
35,224039,"/fast/home/franz.srambical/jafar/genie.py",11216,94,"",python,content
|
37 |
+
36,224039,"/fast/home/franz.srambical/jafar/genie.py",9013,2202,"",python,content
|
38 |
+
37,224039,"/fast/home/franz.srambical/jafar/genie.py",7979,117," final_carry_maskgit, _ = jax.lax.scan(\n maskgit_step_fn, init_carry_maskgit, jnp.arange(steps)",python,content
|
39 |
+
38,224039,"/fast/home/franz.srambical/jafar/genie.py",7076,197," # --- Predict transition ---\n act_embed = self.dynamics.action_up(action_tokens)\n vid_embed += jnp.pad(act_embed, ((0, 0), (1, 0), (0, 0), (0, 0)))\n unmasked_ratio = jnp.cos(jnp.pi * (step + 1) / (steps * 2))\n step_temp = temperature * (1.0 - unmasked_ratio)\n final_logits = self.dynamics.dynamics(vid_embed) / step_temp\n\n # --- Sample new tokens for final frame ---\n if sample_argmax:\n sampled_token_idxs = jnp.argmax(final_logits, axis=-1)\n else:\n rng, _rng = jax.random.split(rng)\n sampled_token_idxs = jax.random.categorical(_rng, final_logits)\n gather_fn = jax.vmap(jax.vmap(jax.vmap(lambda x, y: x[y])))\n final_token_probs = gather_fn(jax.nn.softmax(final_logits), sampled_token_idxs)\n final_token_probs += ~mask\n # Update masked tokens only\n token_idxs = jnp.where(mask, sampled_token_idxs, token_idxs)\n\n # --- Update mask ---\n num_unmasked_tokens = jnp.round(N * (1.0 - unmasked_ratio)).astype(int)\n idx_mask = jnp.arange(final_token_probs.shape[-1]) > num_unmasked_tokens\n sorted_idxs = jnp.argsort(final_token_probs, axis=-1, descending=True)\n mask_update_fn = jax.vmap(lambda msk, ids: msk.at[ids].set(idx_mask))\n new_mask = mask_update_fn(mask, sorted_idxs)\n\n new_carry = (rng, token_idxs, new_mask, action_tokens)\n return new_carry, None",python,content
|
40 |
+
39,224039,"/fast/home/franz.srambical/jafar/genie.py",6949,125," # --- Construct + encode video ---\n vid_embed = self.dynamics.patch_embed(token_idxs) # (B, S, N, D)\n mask_token = self.dynamics.mask_token.value # (1, 1, 1, D,)\n mask_expanded = mask[..., None] # (B, S, N, 1)\n vid_embed = jnp.where(mask_expanded, mask_token, vid_embed)",python,content
|
41 |
+
40,224039,"/fast/home/franz.srambical/jafar/genie.py",6670,277," # Define the inner MaskGIT loop function\n def maskgit_step_fn(carry, step):\n rng, token_idxs, mask, action_tokens = carry\n N = token_idxs.shape[2]",python,content
|
42 |
+
41,272905,"/fast/home/franz.srambical/jafar/genie.py",6670,0,"",python,selection_command
|
43 |
+
42,380998,"sample.py",0,0,"from dataclasses import dataclass\nimport time\nimport os\nimport optax\n\nimport dm_pix as pix\nimport einops\nimport jax\nimport jax.numpy as jnp\nimport flax.linen as nn\nimport numpy as np\nimport orbax.checkpoint as ocp\nfrom PIL import Image, ImageDraw\nimport tyro\nfrom flax import nnx\n\nfrom genie import Genie\nfrom utils.dataloader import get_dataloader\n\n\n@dataclass\nclass Args:\n # Experiment\n seed: int = 0\n seq_len: int = 16\n image_channels: int = 3\n image_height: int = 90\n image_width: int = 160\n data_dir: str = ""data/coinrun_episodes""\n checkpoint: str = """"\n # Sampling\n batch_size: int = 1\n maskgit_steps: int = 25\n temperature: float = 1.0\n sample_argmax: bool = True\n start_frame: int = 0\n # Tokenizer checkpoint\n tokenizer_dim: int = 512\n tokenizer_ffn_dim: int = 2048\n latent_patch_dim: int = 32\n num_patch_latents: int = 1024\n patch_size: int = 4\n tokenizer_num_blocks: int = 4\n tokenizer_num_heads: int = 8\n # LAM checkpoint\n lam_dim: int = 512\n lam_ffn_dim: int = 2048\n latent_action_dim: int = 32\n num_latent_actions: int = 6\n lam_patch_size: int = 16\n lam_num_blocks: int = 4\n lam_num_heads: int = 8\n # Dynamics checkpoint\n dyna_dim: int = 512\n dyna_ffn_dim: int = 2048\n dyna_num_blocks: int = 6\n dyna_num_heads: int = 8\n param_dtype = jnp.float32\n dtype = jnp.bfloat16\n use_flash_attention: bool = True\n\n\nargs = tyro.cli(Args)\n\nif __name__ == ""__main__"":\n jax.distributed.initialize()\n\n rng = jax.random.PRNGKey(args.seed)\n\n # --- Load Genie checkpoint ---\n rngs = nnx.Rngs(rng)\n genie = Genie(\n # Tokenizer\n in_dim=args.image_channels,\n tokenizer_dim=args.tokenizer_dim,\n tokenizer_ffn_dim=args.tokenizer_ffn_dim,\n latent_patch_dim=args.latent_patch_dim,\n num_patch_latents=args.num_patch_latents,\n patch_size=args.patch_size,\n tokenizer_num_blocks=args.tokenizer_num_blocks,\n tokenizer_num_heads=args.tokenizer_num_heads,\n # LAM\n lam_dim=args.lam_dim,\n lam_ffn_dim=args.lam_ffn_dim,\n latent_action_dim=args.latent_action_dim,\n num_latent_actions=args.num_latent_actions,\n lam_patch_size=args.lam_patch_size,\n lam_num_blocks=args.lam_num_blocks,\n lam_num_heads=args.lam_num_heads,\n lam_co_train=False,\n # Dynamics\n dyna_dim=args.dyna_dim,\n dyna_ffn_dim=args.dyna_ffn_dim,\n dyna_num_blocks=args.dyna_num_blocks,\n dyna_num_heads=args.dyna_num_heads,\n param_dtype=args.param_dtype,\n dtype=args.dtype,\n use_flash_attention=args.use_flash_attention,\n rngs=rngs,\n )\n\n handler_registry = ocp.handlers.DefaultCheckpointHandlerRegistry()\n handler_registry.add(\n ""model_state"", ocp.args.PyTreeSave, ocp.handlers.PyTreeCheckpointHandler\n )\n handler_registry.add(\n ""model_state"", ocp.args.PyTreeRestore, ocp.handlers.PyTreeCheckpointHandler\n )\n checkpoint_options = ocp.CheckpointManagerOptions(\n step_format_fixed_length=6,\n )\n checkpoint_manager = ocp.CheckpointManager(\n args.checkpoint,\n options=checkpoint_options,\n handler_registry=handler_registry,\n )\n\n dummy_tx = optax.adamw(\n learning_rate=optax.linear_schedule(0.0001, 0.0001, 10000),\n b1=0.9,\n b2=0.9,\n weight_decay=1e-4,\n mu_dtype=args.dtype,\n )\n dummy_optimizer = nnx.Optimizer(genie, dummy_tx)\n\n abstract_optimizer = nnx.eval_shape(lambda: dummy_optimizer)\n abstract_optimizer_state = nnx.state(abstract_optimizer)\n restored = checkpoint_manager.restore(\n checkpoint_manager.latest_step(),\n args=ocp.args.Composite(\n model_state=ocp.args.PyTreeRestore(abstract_optimizer_state),\n ),\n )\n restored_optimizer_state = 
restored[""model_state""]\n nnx.update(dummy_optimizer, restored_optimizer_state)\n\n # --- Define sampling function ---\n # @nnx.jit\n # @jax.jit\n def _sampling_fn(model, batch):\n """"""Runs Genie.sample with pre-defined generation hyper-parameters.""""""\n return model.sample(\n batch,\n args.seq_len,\n args.maskgit_steps,\n args.temperature,\n args.sample_argmax,\n )\n\n\n # --- Define autoregressive sampling loop ---\n def _autoreg_sample(rng, video_batch, action_batch):\n vid = video_batch[:, : args.start_frame + 1]\n rng, _rng = jax.random.split(rng)\n batch = dict(videos=vid, latent_actions=action_batch, rng=_rng)\n generated_vid = genie.sample(batch, args.seq_len, args.maskgit_steps, args.temperature, args.sample_argmax)\n return generated_vid\n\n\n # --- Get video + latent actions ---\n array_record_files = [\n os.path.join(args.data_dir, x)\n for x in os.listdir(args.data_dir)\n if x.endswith("".array_record"")\n ]\n dataloader = get_dataloader(\n array_record_files,\n args.seq_len,\n args.batch_size,\n args.image_height,\n args.image_width,\n args.image_channels,\n num_workers=8,\n prefetch_buffer_size=1,\n seed=args.seed,\n )\n video_batch = next(iter(dataloader))\n # Get latent actions for all videos in the batch\n batch = dict(videos=video_batch)\n action_batch = genie.vq_encode(batch, training=False) # type: ignore[arg-type]\n action_batch = jnp.asarray(action_batch).reshape(video_batch.shape[0], args.seq_len - 1, 1)\n\n # --- Sample + evaluate video ---\n vid = _autoreg_sample(rng, video_batch, action_batch)\n gt = video_batch[:, : vid.shape[1]].clip(0, 1).reshape(-1, *video_batch.shape[2:])\n recon = vid.clip(0, 1).reshape(-1, *vid.shape[2:])\n # FIXME (f.srambical): investigate why this is needed\n gt = gt.astype(jnp.float32)\n ssim = pix.ssim(gt[:, args.start_frame + 1 :], recon[:, args.start_frame + 1 :]).mean()\n print(f""SSIM: {ssim}"")\n\n # --- Construct video ---\n # true_videos = (video_batch * 255).astype(np.uint8)\n # pred_videos = (vid * 255).astype(np.uint8)\n # video_comparison = np.zeros((2, *vid.shape), dtype=np.uint8)\n # video_comparison[0] = true_videos[:, : args.seq_len]\n # video_comparison[1] = pred_videos\n # frames = einops.rearrange(video_comparison, ""n b t h w c -> t (b h) (n w) c"")\n\n # # --- Save video ---\n # imgs = [Image.fromarray(img) for img in frames]\n # # Write actions on each frame, on each row (i.e., for each video in the batch, on the GT row)\n # for t, img in enumerate(imgs[1:]):\n # d = ImageDraw.Draw(img)\n # for row in range(action_batch.shape[0]):\n # action = action_batch[row, t, 0]\n # y_offset = row * video_batch.shape[2] + 2\n # d.text((2, y_offset), f""{action}"", fill=255)\n # imgs[0].save(\n # f""generation_{time.time()}.gif"",\n # save_all=True,\n # append_images=imgs[1:],\n # duration=250,\n # loop=0,\n # )\n",python,tab
|
44 |
+
43,382134,"sample.py",7041,0,"",python,selection_command
|
45 |
+
44,382137,"sample.py",7023,0,"",python,selection_command
|
46 |
+
45,382152,"sample.py",6999,0,"",python,selection_command
|
47 |
+
46,382225,"sample.py",6965,0,"",python,selection_command
|
48 |
+
47,382231,"sample.py",6940,0,"",python,selection_command
|
49 |
+
48,382323,"sample.py",6897,0,"",python,selection_command
|
50 |
+
49,382340,"sample.py",6877,0,"",python,selection_command
|
51 |
+
50,382345,"sample.py",6818,0,"",python,selection_command
|
52 |
+
51,382381,"sample.py",6762,0,"",python,selection_command
|
53 |
+
52,382416,"sample.py",6715,0,"",python,selection_command
|
54 |
+
53,382427,"sample.py",6664,0,"",python,selection_command
|
55 |
+
54,382469,"sample.py",6630,0,"",python,selection_command
|
56 |
+
55,382583,"sample.py",6589,0,"",python,selection_command
|
57 |
+
56,382584,"sample.py",6489,0,"",python,selection_command
|
58 |
+
57,382658,"sample.py",6435,0,"",python,selection_command
|
59 |
+
58,382658,"sample.py",6408,0,"",python,selection_command
|
60 |
+
59,382659,"sample.py",6407,0,"",python,selection_command
|
61 |
+
60,382692,"sample.py",6323,0,"",python,selection_command
|
62 |
+
61,382707,"sample.py",6283,0,"",python,selection_command
|
63 |
+
62,382729,"sample.py",6224,0,"",python,selection_command
|
64 |
+
63,382778,"sample.py",6157,0,"",python,selection_command
|
65 |
+
64,382822,"sample.py",6108,0,"",python,selection_command
|
66 |
+
65,382836,"sample.py",6051,0,"",python,selection_command
|
67 |
+
66,383110,"sample.py",6021,0,"",python,selection_command
|
68 |
+
67,409372,"sample.py",0,0,"",python,tab
|
69 |
+
68,423163,"sample.py",6020,0,"",python,selection_command
|
70 |
+
69,423280,"sample.py",5993,0,"",python,selection_command
|
71 |
+
70,456144,"genie.py",0,0,"",python,tab
|
72 |
+
71,456145,"genie.py",6670,0,"",python,selection_command
|
73 |
+
72,461257,"sample.py",0,0,"",python,tab
|
74 |
+
73,464425,"sample.py",6969,0,"",python,selection_command
|
75 |
+
74,464665,"sample.py",7049,0,"",python,selection_command
|
76 |
+
75,465095,"sample.py",6228,0,"",python,selection_command
|
77 |
+
76,465860,"sample.py",5207,0,"",python,selection_command
|
78 |
+
77,466191,"sample.py",4424,0,"",python,selection_command
|
79 |
+
78,466192,"sample.py",3753,0,"",python,selection_command
|
80 |
+
79,466193,"sample.py",2998,0,"",python,selection_command
|
81 |
+
80,466194,"sample.py",2219,0,"",python,selection_command
|
82 |
+
81,466211,"sample.py",1453,0,"",python,selection_command
|
83 |
+
82,466211,"sample.py",917,0,"",python,selection_command
|
84 |
+
83,466304,"sample.py",351,0,"",python,selection_command
|
85 |
+
84,466305,"sample.py",917,0,"",python,selection_command
|
86 |
+
85,466306,"sample.py",1453,0,"",python,selection_command
|
87 |
+
86,466306,"sample.py",2219,0,"",python,selection_command
|
88 |
+
87,466542,"sample.py",2998,0,"",python,selection_command
|
89 |
+
88,468353,"/fast/home/franz.srambical/jafar/genie.py",0,0,"",python,tab
|
90 |
+
89,470581,"/fast/home/franz.srambical/jafar/genie.py",7766,0,"",python,selection_command
|
91 |
+
90,470693,"/fast/home/franz.srambical/jafar/genie.py",8930,0,"",python,selection_command
|
92 |
+
91,470894,"/fast/home/franz.srambical/jafar/genie.py",9838,0,"",python,selection_command
|
93 |
+
92,471054,"/fast/home/franz.srambical/jafar/genie.py",10437,0,"",python,selection_command
|
94 |
+
93,475752,"sample.py",0,0,"",python,tab
|
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-42977cf0-50f0-4dc9-b77f-2db2b88a939d1753960254266-2025_07_31-13.11.02.905/source.csv
ADDED
The diff for this file is too large to render.
See raw diff
|
|
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-5342e4d6-3c20-40cb-9bcb-64bf1931df6e1753973941916-2025_07_31-16.59.20.943/source.csv
ADDED
@@ -0,0 +1,5 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
Sequence,Time,File,RangeOffset,RangeLength,Text,Language,Type
|
2 |
+
1,3,"utils/nn.py",0,0,"import math\nfrom typing import Tuple, Callable\n\nfrom flax import nnx\nimport jax\nimport jax.numpy as jnp\nimport einops\n\n\nclass PositionalEncoding(nnx.Module):\n """"""https://uvadlc-notebooks.readthedocs.io/en/latest/tutorial_notebooks/JAX/tutorial6/Transformers_and_MHAttention.html""""""\n\n def __init__(self, d_model: int, max_len: int = 5000):\n self.d_model = d_model\n self.max_len = max_len\n\n pe = jnp.zeros((self.max_len, self.d_model))\n position = jnp.arange(0, self.max_len, dtype=jnp.float32)[:, None]\n div_term = jnp.exp(\n jnp.arange(0, self.d_model, 2) * (-math.log(10000.0) / self.d_model)\n )\n pe = pe.at[:, 0::2].set(jnp.sin(position * div_term))\n pe = pe.at[:, 1::2].set(jnp.cos(position * div_term))\n self.pe = nnx.Variable(pe)\n\n def __call__(self, x: jax.Array) -> jax.Array:\n x = x + self.pe[: x.shape[2]]\n return x\n\n\nclass STBlock(nnx.Module):\n def __init__(\n self,\n dim: int,\n ffn_dim: int,\n num_heads: int,\n dropout: float,\n param_dtype: jnp.dtype,\n dtype: jnp.dtype,\n use_flash_attention: bool,\n rngs: nnx.Rngs,\n ):\n self.dim = dim\n self.ffn_dim = ffn_dim\n self.num_heads = num_heads\n self.dropout = dropout\n self.param_dtype = param_dtype\n self.dtype = dtype\n self.use_flash_attention = use_flash_attention\n\n self.spatial_pos_enc = PositionalEncoding(self.dim)\n self.spatial_norm = nnx.LayerNorm(\n num_features=self.dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n self.spatial_attention = nnx.MultiHeadAttention(\n num_heads=self.num_heads,\n in_features=self.dim,\n qkv_features=self.dim,\n dropout_rate=self.dropout,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n attention_fn=_create_flash_attention_fn(\n self.use_flash_attention, is_causal=False\n ),\n rngs=rngs,\n decode=False,\n )\n\n self.temporal_pos_enc = PositionalEncoding(self.dim)\n self.temporal_norm = nnx.LayerNorm(\n num_features=self.dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n self.temporal_attention = nnx.MultiHeadAttention(\n num_heads=self.num_heads,\n in_features=self.dim,\n qkv_features=self.dim,\n dropout_rate=self.dropout,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n attention_fn=_create_flash_attention_fn(\n self.use_flash_attention, is_causal=True\n ),\n rngs=rngs,\n decode=False,\n )\n\n self.ffn_norm = nnx.LayerNorm(\n num_features=self.dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n self.ffn_dense1 = nnx.Linear(\n in_features=self.dim,\n out_features=self.ffn_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n self.ffn_dense2 = nnx.Linear(\n in_features=self.ffn_dim,\n out_features=self.dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n\n @nnx.remat\n def __call__(self, x_BTNM: jax.Array) -> jax.Array:\n # --- Spatial attention ---\n z_BTNM = self.spatial_pos_enc(x_BTNM)\n z_BTNM = self.spatial_norm(z_BTNM)\n z_BTNM = self.spatial_attention(z_BTNM)\n x_BTNM = x_BTNM + z_BTNM\n\n # --- Temporal attention ---\n x_BNTM = x_BTNM.swapaxes(1, 2)\n z_BNTM = self.temporal_pos_enc(x_BNTM)\n z_BNTM = self.temporal_norm(z_BNTM)\n z_BNTM = self.temporal_attention(z_BNTM)\n x_BNTM = x_BNTM + z_BNTM\n x_BTNM = x_BNTM.swapaxes(1, 2)\n\n # --- Feedforward ---\n z_BTNM = self.ffn_norm(x_BTNM)\n z_BTND = self.ffn_dense1(z_BTNM)\n z_BTND = jax.nn.gelu(z_BTND)\n z_BTNM = self.ffn_dense2(z_BTND)\n x_BTNM = x_BTNM + z_BTNM\n\n return x_BTNM\n\n\nclass STTransformer(nnx.Module):\n """"""\n Dimension keys:\n B: batch 
size\n T: number of frames\n N: number of patches per frame\n I: number of input features\n M: model dimension\n D: FFN dimension\n O: number of output features\n """"""\n def __init__(\n self,\n input_dim: int,\n model_dim: int,\n ffn_dim: int,\n out_dim: int,\n num_blocks: int,\n num_heads: int,\n dropout: float,\n param_dtype: jnp.dtype,\n dtype: jnp.dtype,\n use_flash_attention: bool,\n rngs: nnx.Rngs,\n ):\n self.input_dim = input_dim\n self.model_dim = model_dim\n self.ffn_dim = ffn_dim\n self.out_dim = out_dim\n self.num_blocks = num_blocks\n self.num_heads = num_heads\n self.dropout = dropout\n self.param_dtype = param_dtype\n self.dtype = dtype\n self.use_flash_attention = use_flash_attention\n\n self.input_norm1 = nnx.LayerNorm(\n num_features=self.input_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n self.input_dense = nnx.Linear(\n in_features=self.input_dim,\n out_features=self.model_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n self.input_norm2 = nnx.LayerNorm(\n num_features=self.model_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n\n self.blocks = []\n for _ in range(self.num_blocks):\n self.blocks.append(\n STBlock(\n dim=self.model_dim,\n ffn_dim=self.ffn_dim,\n num_heads=self.num_heads,\n dropout=self.dropout,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n use_flash_attention=self.use_flash_attention,\n rngs=rngs,\n )\n )\n\n self.output_dense = nnx.Linear(\n in_features=self.model_dim,\n out_features=self.out_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n\n def __call__(self, x_BTNI: jax.Array) -> jax.Array:\n x_BTNI = self.input_norm1(x_BTNI)\n x_BTNM = self.input_dense(x_BTNI)\n x_BTNM = self.input_norm2(x_BTNM)\n\n for block in self.blocks:\n x_BTNM = block(x_BTNM)\n\n x_BTNO = self.output_dense(x_BTNM)\n return x_BTNO\n\nclass TransformerBlock(nnx.Module):\n def __init__(\n self,\n model_dim: int,\n ffn_dim: int,\n num_heads: int,\n dropout: float,\n param_dtype: jnp.dtype,\n dtype: jnp.dtype,\n use_flash_attention: bool,\n decode: bool,\n rngs: nnx.Rngs,\n ):\n self.model_dim = model_dim\n self.ffn_dim = ffn_dim\n self.num_heads = num_heads\n self.dropout = dropout\n self.param_dtype = param_dtype\n self.dtype = dtype\n self.use_flash_attention = use_flash_attention\n\n self.temporal_pos_enc = PositionalEncoding(self.model_dim)\n self.spatial_pos_enc = PositionalEncoding(self.model_dim)\n self.temporal_norm = nnx.LayerNorm(\n num_features=self.model_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n self.spatial_norm = nnx.LayerNorm(\n num_features=self.model_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n self.ffn_norm = nnx.LayerNorm(\n num_features=self.model_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n self.temporal_attention = nnx.MultiHeadAttention(\n num_heads=self.num_heads,\n in_features=self.model_dim,\n qkv_features=self.model_dim,\n dropout_rate=self.dropout,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n attention_fn=_create_flash_attention_fn(\n self.use_flash_attention, is_causal=True\n ),\n rngs=rngs,\n decode=decode,\n )\n self.spatial_attention = nnx.MultiHeadAttention(\n num_heads=self.num_heads,\n in_features=self.model_dim,\n qkv_features=self.model_dim,\n dropout_rate=self.dropout,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n attention_fn=_create_flash_attention_fn(\n self.use_flash_attention, is_causal=True\n ),\n 
rngs=rngs,\n decode=decode,\n )\n self.ffn_dense1 = nnx.Linear(\n in_features=self.model_dim,\n out_features=self.ffn_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n self.ffn_dense2 = nnx.Linear(\n in_features=self.ffn_dim,\n out_features=self.model_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n\n @nnx.remat\n def __call__(self, x_BTNM: jax.Array) -> jax.Array:\n # FIXME (f.srambical): this is exactly the same as STBlock (except for the positional encoding)\n # --- Spatial attention ---\n _, T, N, _ = x_BTNM.shape\n z_FNM = einops.rearrange(x_BTNM, ""b t n m -> (b t) n m"")\n z_FNM = self.spatial_norm(z_FNM)\n # FIXME (f.srambical): only input last token\n z_FNM = self.spatial_attention(z_FNM)\n z_BTNM = einops.rearrange(z_FNM, ""(b t) n m -> b t n m"", t=T)\n x_BTNM = x_BTNM + z_BTNM\n # --- Temporal attention ---\n z_PTM = einops.rearrange(x_BTNM, ""b t n m -> (b n) t m"")\n z_PTM = self.temporal_norm(z_PTM)\n # FIXME (f.srambical): only input last token\n z_PTM = self.temporal_attention(z_PTM)\n z_BTNM = einops.rearrange(z_PTM, ""(b n) t m -> b t n m"", n=N)\n x_BTNM = x_BTNM + z_BTNM\n # --- Feedforward ---\n z_BTNM = self.ffn_norm(x_BTNM)\n z_BTND = self.ffn_dense1(z_BTNM)\n z_BTND = jax.nn.gelu(z_BTND)\n z_BTNM = self.ffn_dense2(z_BTND)\n x_BTNM = x_BTNM + z_BTNM\n\n return x_BTNM\n\nclass Transformer(nnx.Module):\n """"""\n Dimension keys:\n B: batch size\n T: number of frames\n N: number of patches per frame\n I: number of input features\n M: model dimension\n D: FFN dimension\n O: number of output features\n F: number of frames in batch\n P: number of patch positions in batch\n """"""\n def __init__(\n self,\n input_dim: int,\n model_dim: int,\n ffn_dim: int,\n out_dim: int,\n num_blocks: int,\n num_heads: int,\n dropout: float,\n param_dtype: jnp.dtype,\n dtype: jnp.dtype,\n use_flash_attention: bool,\n decode: bool,\n rngs: nnx.Rngs,\n ):\n self.input_dim = input_dim\n self.model_dim = model_dim\n self.ffn_dim = ffn_dim\n self.out_dim = out_dim\n self.num_blocks = num_blocks\n self.num_heads = num_heads\n self.dropout = dropout\n self.param_dtype = param_dtype\n self.dtype = dtype\n self.use_flash_attention = use_flash_attention\n\n self.pos_enc = PositionalEncoding(self.model_dim)\n self.input_norm1 = nnx.LayerNorm(\n num_features=self.input_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n self.input_dense = nnx.Linear(\n in_features=self.input_dim,\n out_features=self.model_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n self.input_norm2 = nnx.LayerNorm(\n num_features=self.model_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n\n self.blocks = []\n for _ in range(self.num_blocks):\n self.blocks.append(\n TransformerBlock(\n model_dim=self.model_dim,\n ffn_dim=self.ffn_dim,\n num_heads=self.num_heads,\n dropout=self.dropout,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n use_flash_attention=self.use_flash_attention,\n decode=decode,\n rngs=rngs,\n )\n )\n self.output_dense = nnx.Linear(\n in_features=self.model_dim,\n out_features=self.out_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n\n def __call__(self, x_BTNI: jax.Array) -> jax.Array:\n x_BTNI = self.input_norm1(x_BTNI)\n x_BTNM = self.input_dense(x_BTNI)\n x_BTNM = self.input_norm2(x_BTNM)\n x_BTNM = self.pos_enc(x_BTNM)\n\n for block in self.blocks:\n x_BTNM = block(x_BTNM)\n\n x_BTNV = self.output_dense(x_BTNM)\n return x_BTNV\n\ndef normalize(x: 
jax.Array) -> jax.Array:\n return x / (jnp.linalg.norm(x, ord=2, axis=-1, keepdims=True) + 1e-8)\n\n\nclass VectorQuantizer(nnx.Module):\n """"""\n Dimension keys:\n D: B * T * N\n K: number of latents\n L: latent dimension\n """"""\n def __init__(\n self, latent_dim: int, num_latents: int, dropout: float, rngs: nnx.Rngs\n ):\n self.latent_dim = latent_dim\n self.num_latents = num_latents\n self.dropout = dropout\n\n self.codebook = nnx.Param(\n normalize(\n nnx.initializers.lecun_uniform()(\n rngs.params(), (self.num_latents, self.latent_dim)\n )\n )\n )\n self.drop = nnx.Dropout(self.dropout, rngs=rngs)\n\n def __call__(\n self, x_DL: jax.Array, training: bool\n ) -> Tuple[jax.Array, jax.Array, jax.Array, jax.Array]:\n # --- Compute distances ---\n x_DL = normalize(x_DL)\n normalized_codebook_KL = normalize(self.codebook.value)\n distance_DK = -jnp.matmul(x_DL, normalized_codebook_KL.T)\n if training:\n distance_DK = self.drop(distance_DK)\n\n # --- Get indices and embeddings ---\n indices_D = jnp.argmin(distance_DK, axis=-1)\n z_DL = self.codebook[indices_D]\n\n # --- Straight through estimator ---\n z_q_DL = x_DL + jax.lax.stop_gradient(z_DL - x_DL)\n return z_q_DL, z_DL, x_DL, indices_D\n\n def get_codes(self, indices_E: jax.Array) -> jax.Array:\n return self.codebook[indices_E]\n\n\ndef _create_flash_attention_fn(use_flash_attention: bool, is_causal: bool) -> Callable:\n """"""\n Create an attention function that uses flash attention if enabled.\n\n Flax MultiHeadAttention provides tensors with shape (batch..., length, num_heads, head_dim)\n jax.nn.dot_product_attention expects (batch, length, num_heads, head_dim).\n\n We need to reshape to ensure compatibility. cuDNN's flash attention additionally\n requires a sequence length that is a multiple of 4. We pad the sequence length to the nearest\n multiple of 4 and mask accordingly.\n """"""\n\n def attention_fn(query, key, value, bias=None, mask=None, **kwargs):\n implementation = ""cudnn"" if use_flash_attention else None\n\n def _rearrange(x):\n return einops.rearrange(x, ""... l h d -> (...) 
l h d"")\n\n def _pad(x):\n return jnp.pad(x, ((0, 0), (0, pad_size), (0, 0), (0, 0)))\n\n def _fuse_masks(mask: jax.Array, attention_mask: jax.Array) -> jax.Array:\n mask_bool = mask.astype(jnp.bool_)\n expanded_mask = jnp.pad(\n mask_bool, ((0, pad_size), (0, pad_size)), constant_values=False\n )\n return jnp.logical_and(attention_mask, expanded_mask)\n\n original_shape = query.shape\n original_seq_len = query.shape[-3]\n\n # Pad to nearest multiple of 4\n target_seq_len = ((original_seq_len + 3) // 4) * 4\n pad_size = target_seq_len - original_seq_len\n\n query_4d = _pad(_rearrange(query))\n key_4d = _pad(_rearrange(key))\n value_4d = _pad(_rearrange(value))\n\n attention_mask = jnp.ones((target_seq_len, target_seq_len), dtype=jnp.bool_)\n attention_mask = attention_mask.at[original_seq_len:, :].set(False)\n attention_mask = attention_mask.at[:, original_seq_len:].set(False)\n\n mask_4d = (\n _fuse_masks(mask, attention_mask) if mask is not None else attention_mask\n )\n mask_4d = mask_4d[jnp.newaxis, jnp.newaxis, :, :] # (1, 1, seq_len, seq_len)\n\n bias_4d = _pad(_rearrange(bias)) if bias is not None else None\n\n # NOTE: jax.nn.dot_product_attention does not support dropout\n output_4d = jax.nn.dot_product_attention(\n query=query_4d,\n key=key_4d,\n value=value_4d,\n bias=bias_4d,\n mask=mask_4d,\n implementation=implementation,\n is_causal=is_causal,\n )\n return output_4d[..., :original_seq_len, :, :].reshape(original_shape)\n\n return attention_fn\n",python,tab
|
3 |
+
2,408,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,0,"4:59:20 PM [info] Activating crowd-code\n4:59:20 PM [info] Recording started\n4:59:20 PM [info] Initializing git provider using file system watchers...\n",Log,tab
|
4 |
+
3,558,"extension-output-pdoom-org.crowd-code-#1-crowd-code",150,0,"4:59:21 PM [info] Git repository found\n4:59:21 PM [info] Git provider initialized successfully\n4:59:21 PM [info] Initial git state: [object Object]\n",Log,content
|
5 |
+
4,6295,"utils/nn.py",0,0,"",python,tab
|
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-59175e55-ecae-446f-be12-8861032d4f481751613426266-2025_07_04-09.17.44.620/source.csv
ADDED
@@ -0,0 +1,16 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
Sequence,Time,File,RangeOffset,RangeLength,Text,Language,Type
|
2 |
+
1,5,"tests/test_checkpointer.py",0,0,"import unittest\nimport tempfile\nimport os\nimport jax\nimport jax.numpy as jnp\nfrom flax.training import orbax_utils\nfrom orbax.checkpoint import PyTreeCheckpointer\nfrom pathlib import Path\n\nfrom models.tokenizer import TokenizerVQVAE\nfrom flax.training.train_state import TrainState\nimport optax\nfrom jax.sharding import Mesh, PartitionSpec, NamedSharding\nfrom jax.experimental.mesh_utils import create_device_mesh\n\nclass DistributedCheckpointerTest(unittest.TestCase):\n def setUp(self):\n super().setUp()\n self._temp_dir_manager = tempfile.TemporaryDirectory()\n self.checkpoint_dir = Path(self._temp_dir_manager.name)\n self.addCleanup(self._temp_dir_manager.cleanup)\n\n # FIXME (f.srambical): If the tests pass, we should use the default model config instead\n self.model_kwargs = dict(\n in_dim=3,\n model_dim=8,\n latent_dim=4,\n num_latents=16,\n patch_size=2,\n num_blocks=1,\n num_heads=1,\n dropout=0.0,\n codebook_dropout=0.0,\n )\n self.image_shape = (8, 8, 3)\n self.seq_len = 2\n self.batch_size = 2\n self.seed = 0\n\n def test_distributed_checkpointing(self):\n jax.distributed.initialize()\n num_devices = jax.device_count()\n self.assertGreater(num_devices, 0)\n\n model = TokenizerVQVAE(**self.model_kwargs)\n rng = jax.random.PRNGKey(self.seed)\n dummy_inputs = dict(\n videos=jnp.zeros((self.batch_size, self.seq_len, *self.image_shape), dtype=jnp.float32)\n )\n params = model.init(rng, dummy_inputs)\n\n tx = optax.adam(1e-3)\n state = TrainState.create(apply_fn=model.apply, params=params, tx=tx)\n\n device_mesh_arr = create_device_mesh((num_devices,))\n mesh = Mesh(devices=device_mesh_arr, axis_names=(""data"",))\n replicated_sharding = NamedSharding(mesh, PartitionSpec())\n state = jax.device_put(state, replicated_sharding)\n\n ckpt = {""model"": state}\n orbax_checkpointer = PyTreeCheckpointer()\n save_args = orbax_utils.save_args_from_target(ckpt)\n ckpt_path = str(self.checkpoint_dir / ""test_ckpt"")\n orbax_checkpointer.save(ckpt_path, ckpt, save_args=save_args)\n self.assertTrue(os.path.exists(ckpt_path))\n\n restore_target = {""model"": state}\n restore_args = orbax_utils.restore_args_from_target(restore_target)\n restored = orbax_checkpointer.restore(ckpt_path, item=restore_target, restore_args=restore_args)\n # Compare parameters recursively, handling nested structure\n def compare_params(original, restored):\n if isinstance(original, dict):\n for k in original.keys():\n compare_params(original[k], restored[k])\n else:\n self.assertTrue(jnp.allclose(original, restored))\n \n compare_params(state.params, restored[""model""].params)\n\nif __name__ == ""__main__"":\n unittest.main()\n",python,tab
|
3 |
+
2,568,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,0,"9:17:43 AM [info] Activating crowd-code\n9:17:44 AM [info] Recording started\n9:17:44 AM [info] Initializing git provider using file system watchers...\n",Log,tab
|
4 |
+
3,1547,"extension-output-pdoom-org.crowd-code-#1-crowd-code",150,0,"9:17:45 AM [info] Git repository found\n9:17:45 AM [info] Git provider initialized successfully\n9:17:45 AM [info] Initial git state: [object Object]\n",Log,content
|
5 |
+
4,1648,"tests/test_checkpointer.py",0,0,"",python,tab
|
6 |
+
5,3081,"TERMINAL",0,0,"/usr/bin/python3 /ictstr01/home/aih/franz.srambical/.cursor-server/extensions/ms-python.python-2024.12.3-linux-x64/python_files/printEnvVariablesToFile.py /ictstr01/home/aih/franz.srambical/.cursor-server/extensions/ms-python.python-2024.12.3-linux-x64/python_files/deactivate/bash/envVars.txt",,terminal_command
|
7 |
+
6,3095,"TERMINAL",0,0,"]633;E;/usr/bin/python3 /ictstr01/home/aih/franz.srambical/.cursor-server/extensions/ms-python.python-2024.12.3-linux-x64/python_files/printEnvVariablesToFile.py /ictstr01/home/aih/franz.srambical/.cursor-server/extensions/ms-python.python-2024.12.3-linux-x64/python_files/deactivate/bash/envVars.txt;b730ba2f-be2c-4b6f-9f0d-d578d409e7ab]633;C]0;franz.srambical@hpc-submit01:/ictstr01/home/aih/franz.srambical/.cursor-server/extensions/ms-python.python-2024.12.3-linux-x64/python_files/deactivate/bash]633;D;0]633;P;Cwd=/ictstr01/home/aih/franz.srambical/.cursor-server/extensions/ms-python.python-2024.12.3-linux-x64/python_files/deactivate/bash",,terminal_output
|
8 |
+
7,13336,"TERMINAL",0,0,"salloc --reservation=haicu_stefan -p gpu_p --time=05:00:00 --job-name=interactive_bash --qos=gpu_normal --gres=gpu:4 -w supergpu16 --cpus-per-task=8",,terminal_command
|
9 |
+
8,13419,"TERMINAL",0,0,"]633;E;salloc --reservation=haicu_stefan -p gpu_p --time=05:00:00 --job-name=interactive_bash --qos=gpu_normal --gres=gpu:4 -w supergpu16 --cpus-per-task=8;a8707c05-ae9b-4a50-91c9-fa9c06501dad]633;Csalloc: Required node not available (down, drained or reserved)\r\nsalloc: Pending job allocation 26666565\r\nsalloc: job 26666565 queued and waiting for resources\r\n",,terminal_output
|
10 |
+
9,79296,"TERMINAL",0,0,"^Csalloc: Job allocation 26666565 has been revoked.\r\nsalloc: Job aborted due to signal\r\n",,terminal_output
|
11 |
+
10,82221,"TERMINAL",0,0,"sinfo supergpu16",,terminal_command
|
12 |
+
11,82252,"TERMINAL",0,0,"\r\n[?2004l\r]633;E;sinfo supergpu16;a8707c05-ae9b-4a50-91c9-fa9c06501dad]633;CPARTITION AVAIL TIMELIMIT NODES STATE NODELIST\r\ninteractive_cpu_p up 12:00:00 1 down* cpusrv54\r\ninteractive_cpu_p up 12:00:00 1 mix cpusrv75\r\ninteractive_cpu_p up 12:00:00 4 idle cpusrv[51-53,55]\r\ncpu_p up 3-00:00:00 1 down* supercpu02\r\ncpu_p up 3-00:00:00 2 drain cpusrv[57,74]\r\ncpu_p up 3-00:00:00 84 mix cpusrv[02,05-27,31-33,35-39,41-47,49-50,56,58,61-65,72-73,75,77,79,82-83,89,92,94,96-104,106-108,110,112-113,115-117,119,121-122,124-127],supercpu01\r\ncpu_p up 3-00:00:00 19 alloc cpusrv[28,59-60,78,80-81,84-88,90-91,93,109,111,114,118,123]\r\ncpu_p up 3-00:00:00 8 idle cpusrv[30,40,48,69-71,95,105]\r\ninteractive_gpu_p up 12:00:00 1 inval gpusrv25\r\ninteractive_gpu_p up 12:00:00 3 idle gpusrv[22-24]\r\ngpu_p up 2-00:00:00 3 mix- supergpu[14,17,19]\r\ngpu_p up 2-00:00:00 2 down* gpusrv34,supergpu07\r\ngpu_p up 2-00:00:00 1 drain supergpu16\r\ngpu_p up 2-00:00:00 2 resv supergpu[05,18]\r\ngpu_p up 2-00:00:00 52 mix gpusrv[11-12,15,18,26-33,38-40,42-46,50-55,57-77],supergpu[02-03,08-09,15]\r\ngpu_p up 2-00:00:00 11 idle gpusrv[09-10,13-14,16-17,35,41,47-49]\r\ncemp_gpu_p up 5-00:00:00 1 down* supercpu02\r\ncemp_gpu_p up 5-00:00:00 3 mix supercpu01,supergpu[06,10]\r\ncemp_gpu_p up 5-00:00:00 3 idle supergpu[11-13]\r\nbcf_p up 14-00:00:0 1 mix cpusrv29\r\nbcf_p up 14-00:00:0 1 idle cpusrv128\r\n]0;franz.srambical@hpc-submit01:/lustre/groups/haicu/workspace/franz.srambical/jafar]633;D;0",,terminal_output
|
13 |
+
12,144962,"TERMINAL",0,0,"scontrol show node supergpu16~",,terminal_command
|
14 |
+
13,144987,"TERMINAL",0,0,"]633;E;scontrol show node supergpu16~;a8707c05-ae9b-4a50-91c9-fa9c06501dad]633;C",,terminal_output
|
15 |
+
14,147337,"TERMINAL",0,0,"scontrol show node supergpu16",,terminal_command
|
16 |
+
15,147379,"TERMINAL",0,0,"\r\n[?2004l\r]633;E;scontrol show node supergpu16;a8707c05-ae9b-4a50-91c9-fa9c06501dad]633;C",,terminal_output
|
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-59cfa53e-375d-426f-b7b4-1efe57f39c131751644504215-2025_07_04-17.55.46.972/source.csv
ADDED
The diff for this file is too large to render.
See raw diff
|
|
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-623c548f-e16f-46a4-9ee1-6577a82e63e51754054052755-2025_08_01-15.14.20.520/source.csv
ADDED
@@ -0,0 +1,35 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
Sequence,Time,File,RangeOffset,RangeLength,Text,Language,Type
|
2 |
+
2,354,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,0,"3:14:20 PM [info] Activating crowd-code\n3:14:20 PM [info] Recording started\n3:14:20 PM [info] Initializing git provider using file system watchers...\n",Log,tab
|
3 |
+
3,1073,"extension-output-pdoom-org.crowd-code-#1-crowd-code",150,0,"3:14:20 PM [info] Git repository found\n3:14:20 PM [info] Git provider initialized successfully\n3:14:20 PM [info] Initial git state: [object Object]\n",Log,content
|
4 |
+
4,703122,"experiments/sample.sh",0,0,"source .venv/bin/activate\n\ndata_dir=""$PWD/data_arrayrecord/dummy""\nckpt_dir=""$PWD/checkpoints/causal_dynamics_openai_grain_tok_restore""\n\nexport PYTHONUNBUFFERED=1\nsrun ipython --pdb sample.py -- \\n --dyna_type ""causal"" \\n --batch_size 1 \\n --seq_len 2 \\n --start_frame 1 \\n --checkpoint $ckpt_dir \\n --data_dir $data_dir",shellscript,tab
|
5 |
+
5,703816,"experiments/sample.sh",283,0,"",shellscript,selection_mouse
|
6 |
+
6,703816,"experiments/sample.sh",282,0,"",shellscript,selection_command
|
7 |
+
7,704348,"experiments/sample.sh",337,0,"",shellscript,selection_mouse
|
8 |
+
8,704352,"experiments/sample.sh",336,0,"",shellscript,selection_command
|
9 |
+
9,705452,"experiments/sample.sh",135,0,"",shellscript,selection_mouse
|
10 |
+
10,710934,"experiments/sample.sh",26,0,"",shellscript,selection_mouse
|
11 |
+
11,711936,"experiments/sample.sh",135,0,"",shellscript,selection_mouse
|
12 |
+
12,720157,"experiments/sample.sh",26,0,"",shellscript,selection_mouse
|
13 |
+
13,721337,"experiments/sample.sh",135,0,"",shellscript,selection_mouse
|
14 |
+
14,722480,"experiments/sample.sh",26,0,"",shellscript,selection_mouse
|
15 |
+
15,723451,"experiments/sample.sh",135,0,"",shellscript,selection_mouse
|
16 |
+
16,724245,"experiments/sample.sh",26,0,"",shellscript,selection_mouse
|
17 |
+
17,725680,"experiments/sample.sh",337,0,"",shellscript,selection_mouse
|
18 |
+
18,725686,"experiments/sample.sh",336,0,"",shellscript,selection_command
|
19 |
+
19,860776,"experiments/sample.sh",135,0,"",shellscript,selection_mouse
|
20 |
+
20,861676,"experiments/sample.sh",26,0,"",shellscript,selection_mouse
|
21 |
+
21,863663,"experiments/sample.sh",135,0,"",shellscript,selection_mouse
|
22 |
+
22,864281,"experiments/sample.sh",337,0,"",shellscript,selection_mouse
|
23 |
+
23,864282,"experiments/sample.sh",336,0,"",shellscript,selection_command
|
24 |
+
24,865295,"experiments/sample.sh",135,0,"",shellscript,selection_mouse
|
25 |
+
25,865896,"experiments/sample.sh",26,0,"",shellscript,selection_mouse
|
26 |
+
26,867495,"experiments/sample.sh",135,0,"",shellscript,selection_mouse
|
27 |
+
27,1826628,"experiments/sample.sh",337,0,"",shellscript,selection_mouse
|
28 |
+
28,1826635,"experiments/sample.sh",336,0,"",shellscript,selection_command
|
29 |
+
29,1827817,"experiments/sample.sh",135,0,"",shellscript,selection_mouse
|
30 |
+
30,1828589,"experiments/sample.sh",26,0,"",shellscript,selection_mouse
|
31 |
+
31,1830307,"experiments/sample.sh",135,0,"",shellscript,selection_mouse
|
32 |
+
32,1831392,"experiments/sample.sh",337,0,"",shellscript,selection_mouse
|
33 |
+
33,1831396,"experiments/sample.sh",336,0,"",shellscript,selection_command
|
34 |
+
34,1832055,"experiments/sample.sh",26,0,"",shellscript,selection_mouse
|
35 |
+
35,1832824,"experiments/sample.sh",135,0,"",shellscript,selection_mouse
|
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-6eacf655-5590-4c9d-ad09-856f09c6e0121751568373129-2025_07_03-20.47.02.778/source.csv
ADDED
@@ -0,0 +1,282 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
Sequence,Time,File,RangeOffset,RangeLength,Text,Language,Type
|
2 |
+
1,3,"experiments/tokenizer_cross_node_checkpointing_test.sh",0,0,"#!/usr/bin/env bash\nsource .venv/bin/activate\n\n\ndata_dir='data_tfrecords'\n\nsrun python train_tokenizer.py \\n --batch_size 12 \\n --ckpt_dir checkpoints/tokenizer_cross_node_checkpointing_test \\n --log_checkpoint_interval 10 \\n --num_steps 300000 \\n --warmup_steps 10000 \\n --seed 0 \\n --min_lr=0.0000866 \\n --max_lr=0.0000866 \\n --data_dir $data_dir",shellscript,tab
|
3 |
+
2,1240,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,0,"8:47:01 PM [info] Activating crowd-code\n8:47:02 PM [info] Recording started\n8:47:02 PM [info] Initializing git provider using file system watchers...\n",Log,tab
|
4 |
+
3,1339,"extension-output-pdoom-org.crowd-code-#1-crowd-code",150,0,"8:47:03 PM [info] Git repository found\n8:47:03 PM [info] Git provider initialized successfully\n8:47:03 PM [info] Initial git state: [object Object]\n",Log,content
|
5 |
+
4,10285,"TERMINAL",0,0,"/usr/bin/python3 /ictstr01/home/aih/franz.srambical/.cursor-server/extensions/ms-python.python-2024.12.3-linux-x64/python_files/printEnvVariablesToFile.py /ictstr01/home/aih/franz.srambical/.cursor-server/extensions/ms-python.python-2024.12.3-linux-x64/python_files/deactivate/bash/envVars.txt",,terminal_command
|
6 |
+
5,10288,"TERMINAL",0,0,"]633;E;/usr/bin/python3 /ictstr01/home/aih/franz.srambical/.cursor-server/extensions/ms-python.python-2024.12.3-linux-x64/python_files/printEnvVariablesToFile.py /ictstr01/home/aih/franz.srambical/.cursor-server/extensions/ms-python.python-2024.12.3-linux-x64/python_files/deactivate/bash/envVars.txt;c8ce12b3-1a75-4b36-97e0-87d6c697054a]633;C",,terminal_output
|
7 |
+
6,10354,"TERMINAL",0,0,"]0;franz.srambical@hpc-submit01:/ictstr01/home/aih/franz.srambical/.cursor-server/extensions/ms-python.python-2024.12.3-linux-x64/python_files/deactivate/bash]633;D;0",,terminal_output
|
8 |
+
7,65030,"experiments/tokenizer_cross_node_checkpointing_test.sh",0,0,"",shellscript,tab
|
9 |
+
8,72724,"tests/test_checkpointer.py",0,0,"",python,tab
|
10 |
+
9,75785,"tests/test_checkpointer.py",0,0,"import unittest\nimport tempfile\nimport os\nimport jax\nimport jax.numpy as jnp\nfrom flax.training import orbax_utils\nfrom orbax.checkpoint import PyTreeCheckpointer\nfrom pathlib import Path\n\nfrom models.tokenizer import TokenizerVQVAE\nfrom flax.training.train_state import TrainState\nimport optax\nfrom jax.sharding import Mesh, PartitionSpec, NamedSharding\nfrom jax.experimental.mesh_utils import create_device_mesh\n\nclass DistributedCheckpointerTest(unittest.TestCase):\n def setUp(self):\n super().setUp()\n self._temp_dir_manager = tempfile.TemporaryDirectory()\n self.checkpoint_dir = Path(self._temp_dir_manager.name)\n self.addCleanup(self._temp_dir_manager.cleanup)\n\n # FIXME (f.srambical): If the tests pass, we should use the default model config instead\n self.model_kwargs = dict(\n in_dim=3,\n model_dim=8,\n latent_dim=4,\n num_latents=16,\n patch_size=2,\n num_blocks=1,\n num_heads=1,\n dropout=0.0,\n codebook_dropout=0.0,\n )\n self.image_shape = (8, 8, 3)\n self.seq_len = 2\n self.batch_size = 2\n self.seed = 0\n\n def test_distributed_checkpointing(self):\n jax.distributed.initialize()\n num_devices = jax.device_count()\n self.assertGreater(num_devices, 0)\n\n model = TokenizerVQVAE(**self.model_kwargs)\n rng = jax.random.PRNGKey(self.seed)\n dummy_inputs = dict(\n videos=jnp.zeros((self.batch_size, self.seq_len, *self.image_shape), dtype=jnp.float32)\n )\n params = model.init(rng, dummy_inputs)\n\n tx = optax.adam(1e-3)\n state = TrainState.create(apply_fn=model.apply, params=params, tx=tx)\n\n device_mesh_arr = create_device_mesh((num_devices,))\n mesh = Mesh(devices=device_mesh_arr, axis_names=(""data"",))\n replicated_sharding = NamedSharding(mesh, PartitionSpec())\n state = jax.device_put(state, replicated_sharding)\n\n ckpt = {""model"": state}\n orbax_checkpointer = PyTreeCheckpointer()\n save_args = orbax_utils.save_args_from_target(ckpt)\n ckpt_path = str(self.checkpoint_dir / ""test_ckpt"")\n orbax_checkpointer.save(ckpt_path, ckpt, save_args=save_args)\n self.assertTrue(os.path.exists(ckpt_path))\n\n restore_target = {""model"": state}\n restore_args = orbax_utils.restore_args_from_target(restore_target)\n restored = orbax_checkpointer.restore(ckpt_path, item=restore_target, restore_args=restore_args)\n for k in state.params.keys():\n self.assertTrue(jax.tree_util.tree_all(jnp.allclose(state.params[k], restored[""model""].params[k])))\n\nif __name__ == ""__main__"":\n unittest.main()\n",python,content
|
11 |
+
10,76559,"tests/test_checkpointer.py",0,0,"",python,selection_command
|
12 |
+
11,116750,"tests/test_checkpointer.py",697,0,"",python,selection_mouse
|
13 |
+
12,118887,"train_tokenizer.py",0,0,"from dataclasses import dataclass, field\nimport os\nimport time\n\nimport einops\nfrom flax.training import orbax_utils\nfrom flax.training.train_state import TrainState\nfrom jax.sharding import Mesh, PartitionSpec, NamedSharding\nfrom jax.experimental.mesh_utils import create_device_mesh\nimport optax\nfrom orbax.checkpoint import PyTreeCheckpointer\nimport numpy as np\nimport dm_pix as pix\nimport jax\nimport jax.numpy as jnp\nimport tyro\nimport wandb\n\nfrom models.tokenizer import TokenizerVQVAE\nfrom utils.dataloader import get_dataloader\nfrom utils.parameter_utils import count_parameters_by_component\n\nts = int(time.time())\n\n\n@dataclass\nclass Args:\n # Experiment\n num_steps: int = 300_000\n seed: int = 0\n seq_len: int = 16\n image_channels: int = 3\n image_height: int = 90\n image_width: int = 160\n data_dir: str = ""data_tfrecords/coinrun""\n checkpoint: str = """"\n # Optimization\n vq_beta: float = 0.25\n batch_size: int = 48\n min_lr: float = 3e-4\n max_lr: float = 3e-4\n warmup_steps: int = 10000\n # Tokenizer\n model_dim: int = 512\n latent_dim: int = 32\n num_latents: int = 1024\n patch_size: int = 4\n num_blocks: int = 8\n num_heads: int = 8\n dropout: float = 0.0\n codebook_dropout: float = 0.01\n # Logging\n log: bool = False\n entity: str = """"\n project: str = """"\n name: str = ""train_tokenizer""\n tags: list[str] = field(default_factory=lambda: [""tokenizer""])\n log_interval: int = 5\n log_image_interval: int = 250\n ckpt_dir: str = """"\n log_checkpoint_interval: int = 10000\n log_gradients: bool = False\n\n\nargs = tyro.cli(Args)\n\n\ndef tokenizer_loss_fn(params, state, inputs):\n # --- Compute loss ---\n outputs = state.apply_fn(\n params,\n inputs,\n training=True,\n rngs={""params"": inputs[""rng""], ""dropout"": inputs[""dropout_rng""]},\n )\n mse = jnp.square(inputs[""videos""] - outputs[""recon""]).mean()\n q_loss = jnp.square(jax.lax.stop_gradient(outputs[""emb""]) - outputs[""z""]).mean()\n commitment_loss = jnp.square(\n outputs[""emb""] - jax.lax.stop_gradient(outputs[""z""])\n ).mean()\n loss = mse + q_loss + args.vq_beta * commitment_loss\n\n # --- Compute validation metrics ---\n gt = inputs[""videos""].clip(0, 1).reshape(-1, *inputs[""videos""].shape[2:])\n recon = outputs[""recon""].clip(0, 1).reshape(-1, *outputs[""recon""].shape[2:])\n psnr = pix.psnr(gt, recon).mean()\n ssim = pix.ssim(gt, recon).mean()\n _, index_counts = jnp.unique_counts(\n jnp.ravel(outputs[""indices""]), size=args.num_latents, fill_value=0\n )\n codebook_usage = (index_counts != 0).mean()\n metrics = dict(\n loss=loss,\n mse=mse,\n q_loss=q_loss,\n commitment_loss=commitment_loss,\n psnr=psnr,\n ssim=ssim,\n codebook_usage=codebook_usage,\n )\n return loss, (outputs[""recon""], metrics)\n\n\[email protected]\ndef train_step(state, inputs):\n grad_fn = jax.value_and_grad(tokenizer_loss_fn, has_aux=True, allow_int=True)\n (loss, (recon, metrics)), grads = grad_fn(state.params, state, inputs)\n state = state.apply_gradients(grads=grads)\n if args.log_gradients:\n metrics[""encoder_gradients_std/""] = jax.tree.map(\n lambda x: x.std(), grads[""params""][""encoder""]\n )\n metrics[""vq_gradients_std/""] = jax.tree.map(\n lambda x: x.std(), grads[""params""][""vq""]\n )\n metrics[""decoder_gradients_std/""] = jax.tree.map(\n lambda x: x.std(), grads[""params""][""decoder""]\n )\n return state, loss, recon, metrics\n\n\nif __name__ == ""__main__"":\n jax.distributed.initialize()\n num_devices = jax.device_count()\n if num_devices == 
0:\n raise ValueError(""No JAX devices found."")\n print(f""Running on {num_devices} devices."")\n\n if args.batch_size % num_devices != 0:\n raise ValueError(\n f""Global batch size {args.batch_size} must be divisible by ""\n f""number of devices {num_devices}.""\n )\n\n per_device_batch_size_for_init = args.batch_size // num_devices\n\n rng = jax.random.PRNGKey(args.seed)\n\n # --- Initialize model ---\n tokenizer = TokenizerVQVAE(\n in_dim=args.image_channels,\n model_dim=args.model_dim,\n latent_dim=args.latent_dim,\n num_latents=args.num_latents,\n patch_size=args.patch_size,\n num_blocks=args.num_blocks,\n num_heads=args.num_heads,\n dropout=args.dropout,\n codebook_dropout=args.codebook_dropout,\n )\n rng, _rng = jax.random.split(rng)\n image_shape = (args.image_height, args.image_width, args.image_channels)\n inputs = dict(\n videos=jnp.zeros(\n (per_device_batch_size_for_init, args.seq_len, *image_shape),\n dtype=jnp.float32,\n ),\n )\n init_params = tokenizer.init(_rng, inputs)\n\n param_counts = count_parameters_by_component(init_params)\n\n if args.log and jax.process_index() == 0:\n wandb.init(\n entity=args.entity,\n project=args.project,\n name=args.name,\n tags=args.tags,\n group=""debug"",\n config=args,\n )\n wandb.config.update({""model_param_count"": param_counts})\n\n print(""Parameter counts:"")\n print(param_counts)\n\n # --- Initialize optimizer ---\n lr_schedule = optax.warmup_cosine_decay_schedule(\n args.min_lr, args.max_lr, args.warmup_steps, args.num_steps\n )\n tx = optax.adamw(learning_rate=lr_schedule, b1=0.9, b2=0.9, weight_decay=1e-4)\n train_state = TrainState.create(apply_fn=tokenizer.apply, params=init_params, tx=tx)\n\n # FIXME: switch to create_hybrid_device_mesh for runs spanning multiple nodes\n device_mesh_arr = create_device_mesh((num_devices,))\n mesh = Mesh(devices=device_mesh_arr, axis_names=(""data"",))\n\n replicated_sharding = NamedSharding(mesh, PartitionSpec())\n videos_sharding = NamedSharding(mesh, PartitionSpec(""data"", None, None, None, None))\n train_state = jax.device_put(train_state, replicated_sharding)\n\n # --- Load checkpoint ---\n step = 0\n if args.checkpoint:\n restore_target = {""model"": train_state}\n restore_args = orbax_utils.restore_args_from_target(restore_target)\n train_state.params[""params""].update(\n PyTreeCheckpointer()\n .restore(args.checkpoint, item=restore_target, restore_args=restore_args)[\n ""model""\n ]\n .params[""params""]\n )\n # Assume checkpoint is of the form tokenizer_<timestamp>_<step>\n step += int(args.checkpoint.split(""_"")[-1])\n\n # --- TRAIN LOOP ---\n tfrecord_files = [\n os.path.join(args.data_dir, x)\n for x in os.listdir(args.data_dir)\n if x.endswith("".tfrecord"")\n ]\n dataloader = get_dataloader(\n # NOTE: We deliberately pass the global batch size\n # The dataloader shards the dataset across all processes\n tfrecord_files,\n args.seq_len,\n args.batch_size,\n *image_shape,\n seed=args.seed,\n )\n dataloader = (jax.make_array_from_process_local_data(videos_sharding, elem) for elem in dataloader) # type: ignore\n print(f""Starting training from step {step}..."")\n while step < args.num_steps:\n for videos in dataloader:\n # --- Train step ---\n rng, _rng, _rng_dropout = jax.random.split(rng, 3)\n\n inputs = dict(videos=videos, rng=_rng, dropout_rng=_rng_dropout)\n train_state, loss, recon, metrics = train_step(train_state, inputs)\n print(f""Step {step}, loss: {loss}"")\n step += 1\n\n # --- Logging ---\n if args.log:\n if step % args.log_interval == 0 and jax.process_index() == 
0:\n wandb.log(\n {\n ""loss"": loss,\n ""step"": step,\n **metrics,\n }\n )\n if step % args.log_image_interval == 0:\n gt_seq = inputs[""videos""][0]\n recon_seq = recon[0].clip(0, 1)\n comparison_seq = jnp.concatenate((gt_seq, recon_seq), axis=1)\n comparison_seq = einops.rearrange(\n comparison_seq * 255, ""t h w c -> h (t w) c""\n )\n # NOTE: Process-dependent control flow deliberately happens\n # after indexing operation since it must not contain code\n # sections that lead to cross-accelerator communication.\n if jax.process_index() == 0:\n log_images = dict(\n image=wandb.Image(np.asarray(gt_seq[0])),\n recon=wandb.Image(np.asarray(recon_seq[0])),\n true_vs_recon=wandb.Image(\n np.asarray(comparison_seq.astype(np.uint8))\n ),\n )\n wandb.log(log_images)\n if step % args.log_checkpoint_interval == 0:\n ckpt = {""model"": train_state}\n orbax_checkpointer = PyTreeCheckpointer()\n save_args = orbax_utils.save_args_from_target(ckpt)\n orbax_checkpointer.save(\n os.path.join(os.getcwd(), args.ckpt_dir, f""tokenizer_{ts}_{step}""),\n ckpt,\n save_args=save_args,\n )\n if step >= args.num_steps:\n break\n",python,tab
|
14 |
+
13,119017,"train_tokenizer.py",0,9533,"from dataclasses import dataclass, field\nimport os\nimport time\n\nimport einops\nfrom flax.training import orbax_utils\nfrom flax.training.train_state import TrainState\nfrom jax.sharding import Mesh, PartitionSpec, NamedSharding\nfrom jax.experimental.mesh_utils import create_device_mesh\nimport optax\nfrom orbax.checkpoint import PyTreeCheckpointer\nimport numpy as np\nimport dm_pix as pix\nimport jax\nimport jax.numpy as jnp\nimport tyro\nimport wandb\n\nfrom models.tokenizer import TokenizerVQVAE\nfrom utils.dataloader import get_dataloader\nfrom utils.parameter_utils import count_parameters_by_component\n\nts = int(time.time())\n\n\n@dataclass\nclass Args:\n # Experiment\n num_steps: int = 300_000\n seed: int = 0\n seq_len: int = 16\n image_channels: int = 3\n image_height: int = 90\n image_width: int = 160\n data_dir: str = ""data_tfrecords/coinrun""\n checkpoint: str = """"\n # Optimization\n vq_beta: float = 0.25\n batch_size: int = 48\n min_lr: float = 3e-4\n max_lr: float = 3e-4\n warmup_steps: int = 10000\n # Tokenizer\n model_dim: int = 512\n latent_dim: int = 32\n num_latents: int = 1024\n patch_size: int = 4\n num_blocks: int = 8\n num_heads: int = 8\n dropout: float = 0.0\n codebook_dropout: float = 0.01\n # Logging\n log: bool = False\n entity: str = """"\n project: str = """"\n name: str = ""train_tokenizer""\n tags: list[str] = field(default_factory=lambda: [""tokenizer""])\n log_interval: int = 5\n log_image_interval: int = 250\n ckpt_dir: str = """"\n log_checkpoint_interval: int = 10000\n log_gradients: bool = False\n\n\nargs = tyro.cli(Args)\n\n\ndef tokenizer_loss_fn(params, state, inputs):\n # --- Compute loss ---\n outputs = state.apply_fn(\n params,\n inputs,\n training=True,\n rngs={""params"": inputs[""rng""], ""dropout"": inputs[""dropout_rng""]},\n )\n mse = jnp.square(inputs[""videos""] - outputs[""recon""]).mean()\n q_loss = jnp.square(jax.lax.stop_gradient(outputs[""emb""]) - outputs[""z""]).mean()\n commitment_loss = jnp.square(\n outputs[""emb""] - jax.lax.stop_gradient(outputs[""z""])\n ).mean()\n loss = mse + q_loss + args.vq_beta * commitment_loss\n\n # --- Compute validation metrics ---\n gt = inputs[""videos""].clip(0, 1).reshape(-1, *inputs[""videos""].shape[2:])\n recon = outputs[""recon""].clip(0, 1).reshape(-1, *outputs[""recon""].shape[2:])\n psnr = pix.psnr(gt, recon).mean()\n ssim = pix.ssim(gt, recon).mean()\n _, index_counts = jnp.unique_counts(\n jnp.ravel(outputs[""indices""]), size=args.num_latents, fill_value=0\n )\n codebook_usage = (index_counts != 0).mean()\n metrics = dict(\n loss=loss,\n mse=mse,\n q_loss=q_loss,\n commitment_loss=commitment_loss,\n psnr=psnr,\n ssim=ssim,\n codebook_usage=codebook_usage,\n )\n return loss, (outputs[""recon""], metrics)\n\n\[email protected]\ndef train_step(state, inputs):\n grad_fn = jax.value_and_grad(tokenizer_loss_fn, has_aux=True, allow_int=True)\n (loss, (recon, metrics)), grads = grad_fn(state.params, state, inputs)\n state = state.apply_gradients(grads=grads)\n if args.log_gradients:\n metrics[""encoder_gradients_std/""] = jax.tree.map(\n lambda x: x.std(), grads[""params""][""encoder""]\n )\n metrics[""vq_gradients_std/""] = jax.tree.map(\n lambda x: x.std(), grads[""params""][""vq""]\n )\n metrics[""decoder_gradients_std/""] = jax.tree.map(\n lambda x: x.std(), grads[""params""][""decoder""]\n )\n return state, loss, recon, metrics\n\n\nif __name__ == ""__main__"":\n jax.distributed.initialize()\n num_devices = jax.device_count()\n if num_devices == 
0:\n raise ValueError(""No JAX devices found."")\n print(f""Running on {num_devices} devices."")\n\n if args.batch_size % num_devices != 0:\n raise ValueError(\n f""Global batch size {args.batch_size} must be divisible by ""\n f""number of devices {num_devices}.""\n )\n\n per_device_batch_size_for_init = args.batch_size // num_devices\n\n rng = jax.random.PRNGKey(args.seed)\n\n # --- Initialize model ---\n tokenizer = TokenizerVQVAE(\n in_dim=args.image_channels,\n model_dim=args.model_dim,\n latent_dim=args.latent_dim,\n num_latents=args.num_latents,\n patch_size=args.patch_size,\n num_blocks=args.num_blocks,\n num_heads=args.num_heads,\n dropout=args.dropout,\n codebook_dropout=args.codebook_dropout,\n )\n rng, _rng = jax.random.split(rng)\n image_shape = (args.image_height, args.image_width, args.image_channels)\n inputs = dict(\n videos=jnp.zeros(\n (per_device_batch_size_for_init, args.seq_len, *image_shape),\n dtype=jnp.float32,\n ),\n )\n init_params = tokenizer.init(_rng, inputs)\n\n param_counts = count_parameters_by_component(init_params)\n\n if args.log and jax.process_index() == 0:\n wandb.init(\n entity=args.entity,\n project=args.project,\n name=args.name,\n tags=args.tags,\n group=""debug"",\n config=args,\n )\n wandb.config.update({""model_param_count"": param_counts})\n\n print(""Parameter counts:"")\n print(param_counts)\n\n # --- Initialize optimizer ---\n lr_schedule = optax.warmup_cosine_decay_schedule(\n args.min_lr, args.max_lr, args.warmup_steps, args.num_steps\n )\n tx = optax.adamw(learning_rate=lr_schedule, b1=0.9, b2=0.9, weight_decay=1e-4)\n train_state = TrainState.create(apply_fn=tokenizer.apply, params=init_params, tx=tx)\n\n # FIXME: switch to create_hybrid_device_mesh for runs spanning multiple nodes\n device_mesh_arr = create_device_mesh((num_devices,))\n mesh = Mesh(devices=device_mesh_arr, axis_names=(""data"",))\n\n replicated_sharding = NamedSharding(mesh, PartitionSpec())\n videos_sharding = NamedSharding(mesh, PartitionSpec(""data"", None, None, None, None))\n train_state = jax.device_put(train_state, replicated_sharding)\n\n # --- Load checkpoint ---\n step = 0\n if args.checkpoint:\n restore_target = {""model"": train_state}\n restore_args = orbax_utils.restore_args_from_target(restore_target)\n train_state.params[""params""].update(\n PyTreeCheckpointer()\n .restore(args.checkpoint, item=restore_target, restore_args=restore_args)[\n ""model""\n ]\n .params[""params""]\n )\n # Assume checkpoint is of the form tokenizer_<timestamp>_<step>\n step += int(args.checkpoint.split(""_"")[-1])\n\n # --- TRAIN LOOP ---\n tfrecord_files = [\n os.path.join(args.data_dir, x)\n for x in os.listdir(args.data_dir)\n if x.endswith("".tfrecord"")\n ]\n dataloader = get_dataloader(\n # NOTE: We deliberately pass the global batch size\n # The dataloader shards the dataset across all processes\n tfrecord_files,\n args.seq_len,\n args.batch_size,\n *image_shape,\n seed=args.seed,\n )\n dataloader = (jax.make_array_from_process_local_data(videos_sharding, elem) for elem in dataloader) # type: ignore\n print(f""Starting training from step {step}..."")\n while step < args.num_steps:\n for videos in dataloader:\n # --- Train step ---\n rng, _rng, _rng_dropout = jax.random.split(rng, 3)\n\n inputs = dict(videos=videos, rng=_rng, dropout_rng=_rng_dropout)\n train_state, loss, recon, metrics = train_step(train_state, inputs)\n print(f""Step {step}, loss: {loss}"")\n step += 1\n\n # --- Logging ---\n if args.log:\n if step % args.log_interval == 0 and jax.process_index() == 
0:\n wandb.log(\n {\n ""loss"": loss,\n ""step"": step,\n **metrics,\n }\n )\n if step % args.log_image_interval == 0:\n gt_seq = inputs[""videos""][0]\n recon_seq = recon[0].clip(0, 1)\n comparison_seq = jnp.concatenate((gt_seq, recon_seq), axis=1)\n comparison_seq = einops.rearrange(\n comparison_seq * 255, ""t h w c -> h (t w) c""\n )\n # NOTE: Process-dependent control flow deliberately happens\n # after indexing operation since it must not contain code\n # sections that lead to cross-accelerator communication.\n if jax.process_index() == 0:\n log_images = dict(\n image=wandb.Image(np.asarray(gt_seq[0])),\n recon=wandb.Image(np.asarray(recon_seq[0])),\n true_vs_recon=wandb.Image(\n np.asarray(comparison_seq.astype(np.uint8))\n ),\n )\n wandb.log(log_images)\n if step % args.log_checkpoint_interval == 0:\n ckpt = {""model"": train_state}\n orbax_checkpointer = PyTreeCheckpointer()\n save_args = orbax_utils.save_args_from_target(ckpt)\n orbax_checkpointer.save(\n os.path.join(os.getcwd(), args.ckpt_dir, f""tokenizer_{ts}_{step}""),\n ckpt,\n save_args=save_args,\n )\n if step >= args.num_steps:\n break\n",python,selection_command
|
15 |
+
14,119294,"train_tokenizer.py",9533,0,"",python,selection_command
|
16 |
+
15,610807,"TERMINAL",0,0,"salloc --reservation=haicu_stefan -p gpu_p --time=05:00:00 --job-name=interactive_bash --qos=gpu_normal --gpus-per-task=1 -w gpusrv69,gpusrv70 --cpus-per-task=4 --ntasks-per-node=2",,terminal_command
|
17 |
+
16,610830,"TERMINAL",0,0,"]633;E;salloc --reservation=haicu_stefan -p gpu_p --time=05:00:00 --job-name=interactive_bash --qos=gpu_normal --gpus-per-task=1 -w gpusrv69,gpusrv70 --cpus-per-task=4 --ntasks-per-node=2;d193c799-eb50-4b87-89db-ea172d98a654]633;C",,terminal_output
|
18 |
+
17,759508,"TERMINAL",0,0,"salloc --reservation=haicu_stefan -p gpu_p --time=05:00:00 --job-name=interactive_bash --qos=gpu_normal --gres=gpu:2 --gpu-bind=single:1 -w gpusrv69,gpusrv70 --cpus-per-task=4 --ntasks-per-node=2",,terminal_command
|
19 |
+
18,759658,"TERMINAL",0,0,"]633;E;salloc --reservation=haicu_stefan -p gpu_p --time=05:00:00 --job-name=interactive_bash --qos=gpu_normal --gres=gpu:2 --gpu-bind=single:1 -w gpusrv69,gpusrv70 --cpus-per-task=4 --ntasks-per-node=2;d193c799-eb50-4b87-89db-ea172d98a654]633;Csalloc: Granted job allocation 26664731\r\n",,terminal_output
|
20 |
+
19,759723,"TERMINAL",0,0,"salloc: Nodes gpusrv[69-70] are ready for job\r\n",,terminal_output
|
21 |
+
20,760047,"TERMINAL",0,0,"]0;franz.srambical@hpc-submit01:/lustre/groups/haicu/workspace/franz.srambical/jafar[?2004h[franz.srambical@gpusrv69 jafar]$ ",,terminal_output
|
22 |
+
21,761817,"TERMINAL",0,0,"\r(reverse-i-search)`': [K",,terminal_output
|
23 |
+
22,761915,"TERMINAL",0,0,"s': python -m unittest tests.test_tokenizer_reproducibility.TokenizerReproducibilityTe[7ms[27mt -v",,terminal_output
|
24 |
+
23,761971,"TERMINAL",0,0,"[A[C[C[C[C[C[C[C[37Pr': [7msr[27mun echo $CUDA_VISIBLE_DEVICES\r\n\r[K[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C",,terminal_output
|
25 |
+
24,762166,"TERMINAL",0,0,"[1@u': [7msru[27m[1@n': [7msrun[27m",,terminal_output
|
26 |
+
25,762797,"TERMINAL",0,0,"\r[8@[franz.srambical@gpusrv69 jafar]$ srun[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C",,terminal_output
|
27 |
+
26,762903,"TERMINAL",0,0,"[?25l[?2004l\r[?25h",,terminal_output
|
28 |
+
27,763132,"TERMINAL",0,0,"0,1\r\n0,1\r\n",,terminal_output
|
29 |
+
28,763232,"TERMINAL",0,0,"0,1\r\n0,1\r\n]0;franz.srambical@hpc-submit01:/lustre/groups/haicu/workspace/franz.srambical/jafar[?2004h[franz.srambical@gpusrv69 jafar]$ ",,terminal_output
|
30 |
+
29,952005,"TERMINAL",0,0,"e",,terminal_output
|
31 |
+
30,952401,"TERMINAL",0,0,"[?25l[40;36Hx[40;37H[?25h",,terminal_output
|
32 |
+
31,952402,"TERMINAL",0,0,"[?25l[40;37Hi[40;38H[?25h",,terminal_output
|
33 |
+
32,952403,"TERMINAL",0,0,"[?25l[40;38Ht[40;39H[?25h",,terminal_output
|
34 |
+
33,952404,"TERMINAL",0,0,"[?25l[?2004l\rexit\r\n[?25h",,terminal_output
|
35 |
+
34,952452,"TERMINAL",0,0,"salloc: Relinquishing job allocation 26664731\r\n]0;franz.srambical@hpc-submit01:/lustre/groups/haicu/workspace/franz.srambical/jafar",,terminal_output
|
36 |
+
35,980297,"TERMINAL",0,0,"salloc --reservation=haicu_stefan -p gpu_p --time=05:00:00 --job-name=interactive_bash --qos=gpu_normal --gpu-bind=single:1 --gpus-per-task=1 -w gpusrv69,gpusrv70 --cpus-per-task=4 --ntasks-per-node=2",,terminal_command
|
37 |
+
36,980308,"TERMINAL",0,0,"]633;E;salloc --reservation=haicu_stefan -p gpu_p --time=05:00:00 --job-name=interactive_bash --qos=gpu_normal --gpu-bind=single:1 --gpus-per-task=1 -w gpusrv69,gpusrv70 --cpus-per-task=4 --ntasks-per-node=2;d193c799-eb50-4b87-89db-ea172d98a654]633;C",,terminal_output
|
38 |
+
37,992032,"TERMINAL",0,0,"salloc --reservation=haicu_stefan -p gpu_p --time=05:00:00 --job-name=interactive_bash --qos=gpu_normal --gres=gpu:2 --gpu-bind=single:1 --gpus-per-task=1 -w gpusrv69,gpusrv70 --cpus-per-task=4 --ntasks-per-node=2",,terminal_command
|
39 |
+
38,992121,"TERMINAL",0,0,"]633;E;salloc --reservation=haicu_stefan -p gpu_p --time=05:00:00 --job-name=interactive_bash --qos=gpu_normal --gres=gpu:2 --gpu-bind=single:1 --gpus-per-task=1 -w gpusrv69,gpusrv70 --cpus-per-task=4 --ntasks-per-node=2;d193c799-eb50-4b87-89db-ea172d98a654]633;C",,terminal_output
|
40 |
+
39,992138,"TERMINAL",0,0,"salloc: error: Failed to validate job spec. --gpus-per-task or --tres-per-task used without either --gpus or -n/--ntasks is not allowed.\r\nsalloc: error: Invalid generic resource (gres) specification\r\n]0;franz.srambical@hpc-submit01:/lustre/groups/haicu/workspace/franz.srambical/jafar]633;D;1",,terminal_output
|
41 |
+
40,1283900,"TERMINAL",0,0,"salloc --reservation=haicu_stefan -p gpu_p --time=05:00:00 --job-name=interactive_bash --qos=gpu_normal --gpu-bind=single:1 --gpus-per-task=1 -w gpusrv69,gpusrv70 --cpus-per-task=4 --ntasks-per-node=2",,terminal_command
|
42 |
+
41,1283922,"TERMINAL",0,0,"]633;E;salloc --reservation=haicu_stefan -p gpu_p --time=05:00:00 --job-name=interactive_bash --qos=gpu_normal --gpu-bind=single:1 --gpus-per-task=1 -w gpusrv69,gpusrv70 --cpus-per-task=4 --ntasks-per-node=2;d193c799-eb50-4b87-89db-ea172d98a654]633;Csalloc: error: Failed to validate job spec. --gpus-per-task or --tres-per-task used without either --gpus or -n/--ntasks is not allowed.\r\nsalloc: error: Invalid generic resource (gres) specification\r\n]0;franz.srambical@hpc-submit01:/lustre/groups/haicu/workspace/franz.srambical/jafar]633;D;1",,terminal_output
|
43 |
+
42,1299071,"TERMINAL",0,0,"salloc --reservation=haicu_stefan -p gpu_p --time=05:00:00 --job-name=interactive_bash --qos=gpu_normal --gpu-bind=single:1 --gpus-per-task=1 -w gpusrv69,gpusrv70 --cpus-per-task=4 --ntasks-per-node=2",,terminal_command
|
44 |
+
43,1299086,"TERMINAL",0,0,"]633;E;salloc --reservation=haicu_stefan -p gpu_p --time=05:00:00 --job-name=interactive_bash --qos=gpu_normal --gpu-bind=single:1 --gpus-per-task=1 -w gpusrv69,gpusrv70 --cpus-per-task=4 --ntasks-per-node=2;d193c799-eb50-4b87-89db-ea172d98a654]633;C",,terminal_output
|
45 |
+
44,1390358,"TERMINAL",0,0,"salloc --reservation=haicu_stefan -p gpu_p --time=05:00:00 --job-name=interactive_bash --qos=gpu_normal --gpu-bind=single:1 --gpus-per-task=1 -w gpusrv69,gpusrv70 --cpus-per-task=4 --ntasks-per-node=2 --ntasks=4",,terminal_command
|
46 |
+
45,1390513,"TERMINAL",0,0,"]633;E;salloc --reservation=haicu_stefan -p gpu_p --time=05:00:00 --job-name=interactive_bash --qos=gpu_normal --gpu-bind=single:1 --gpus-per-task=1 -w gpusrv69,gpusrv70 --cpus-per-task=4 --ntasks-per-node=2 --ntasks=4;d193c799-eb50-4b87-89db-ea172d98a654]633;Csalloc: Granted job allocation 26664743\r\n",,terminal_output
|
47 |
+
46,1390578,"TERMINAL",0,0,"salloc: Nodes gpusrv[69-70] are ready for job\r\n",,terminal_output
|
48 |
+
47,1390938,"TERMINAL",0,0,"]0;franz.srambical@hpc-submit01:/lustre/groups/haicu/workspace/franz.srambical/jafar[?2004h[franz.srambical@gpusrv69 jafar]$ ",,terminal_output
|
49 |
+
48,1392447,"TERMINAL",0,0,"\r(reverse-i-search)`': [K",,terminal_output
|
50 |
+
49,1392695,"TERMINAL",0,0,"s': [7ms[27mrun echo $CUDA_VISIBLE_DEVICES\r[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C",,terminal_output
|
51 |
+
50,1392808,"TERMINAL",0,0,"[?25l[12;24H[7;39;49ms[12;24H[0mu': squeue -w supergpu16,supergpu18,gpusrv[69,70],[7msu[27mpergpu14[?25h",,terminal_output
|
52 |
+
51,1392897,"TERMINAL",0,0,"[?25l[12;71H[7;39;49ms[12;71H[0m\r[Cfailed reverse-i-search)`sun': squeue -w supergpu16,supergpu18,gpusrv[69,70],supergpu14[?25h",,terminal_output
|
53 |
+
52,1394909,"TERMINAL",0,0,"\r[2@[franz.srambical@gpusrv69 jafar]$[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C",,terminal_output
|
54 |
+
53,1395027,"TERMINAL",0,0,"\r[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C-me[K",,terminal_output
|
55 |
+
54,1395757,"TERMINAL",0,0,"alloc --reservation=haicu_stefan -p gpu_p --time=05:00:00 --job-name=interactive_bash --qos=gpu_normal --gres=gpu:2 -w gpusrv69,gpusrv70 --cpus-per-task=8[A[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C/usr/bin/python3 /ictstr01/home/aih/franz.srambical/.cursor-server/extensions/ms-python.python-2024.12.3-linux-x64/python_files/printEnvVariablesToFile.py /ictstr01/home/aih/franz.srambical/.cursor-server/extensions/ms-python.python-2024.12.3-linux-x64/python_files/deactivate/bash/envVars.txt[A[A[A[33Psource .venv/bin/activate\r\n\r[K\r\n\r[K\r\n\r[K[A[A[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[Cbash experiments/tokenizer_cross_node_checkpointing_test.sh [A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[Cnvidia-smi [K\r\n\r[K[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[Csrun test.shecho $CUDA_VISIBLE_DEVICESexit[Ksrun echo $CUDA_VISIBLE_DEVICESidle[Kqueuesqueue --mepython -m unittest tests.test_tokenizer_reproducibility.TokenizerReproducibilityTest -v",,terminal_output
|
56 |
+
55,1396281,"TERMINAL",0,0,"[A[C[C[C[C[Cexit[K\r\n\r[K[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[Csrun echo $CUDA_VISIBLE_DEVICESexit[K[K",,terminal_output
|
57 |
+
56,1396806,"TERMINAL",0,0,"",,terminal_output
|
58 |
+
57,1397150,"TERMINAL",0,0,"",,terminal_output
|
59 |
+
58,1397331,"TERMINAL",0,0,"\r(reverse-i-search)`': [K",,terminal_output
|
60 |
+
59,1397492,"TERMINAL",0,0,"s': [7ms[27mrun echo $CUDA_VISIBLE_DEVICES\r[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C",,terminal_output
|
61 |
+
60,1397562,"TERMINAL",0,0,"[?25l[12;24H[7;39;49ms[12;24H[0m[1@r': [7msr[27m[?25h",,terminal_output
|
62 |
+
61,1397610,"TERMINAL",0,0,"[?25l[12;25H[7;39;49ms[12;25H[0m[1@u': [7msru[27m[?25h",,terminal_output
|
63 |
+
62,1397679,"TERMINAL",0,0,"[?25l[12;26H[7;39;49ms[12;26H[0m[1@n': [7msrun[27m[?25h",,terminal_output
|
64 |
+
63,1398229,"TERMINAL",0,0,"\r[8@[franz.srambical@gpusrv69 jafar]$ srun[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C",,terminal_output
|
65 |
+
64,1398465,"TERMINAL",0,0,"[?25l[?2004l\r[?25h",,terminal_output
|
66 |
+
65,1398669,"TERMINAL",0,0,"0,1\r\n0,1\r\n0,1\r\n0,1\r\n",,terminal_output
|
67 |
+
66,1398805,"TERMINAL",0,0,"]0;franz.srambical@hpc-submit01:/lustre/groups/haicu/workspace/franz.srambical/jafar[?2004h[franz.srambical@gpusrv69 jafar]$ ",,terminal_output
|
68 |
+
67,1533571,"TERMINAL",0,0,"[7msrun printenv | grep CUDA_VISIBLE_DEVICES[27m",,terminal_output
|
69 |
+
68,1534010,"TERMINAL",0,0,"[?25l[17;76H\r[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[Csrun printenv | grep CUDA_VISIBLE_DEVICES\r\n[?2004l\r[?25h",,terminal_output
|
70 |
+
69,1534199,"TERMINAL",0,0,"[01;31m[KCUDA_VISIBLE_DEVICES[m[K=0\r\n[01;31m[KCUDA_VISIBLE_DEVICES[m[K=0\r\n[01;31m[KCUDA_VISIBLE_DEVICES[m[K=0\r\n[01;31m[KCUDA_VISIBLE_DEVICES[m[K=0\r\n",,terminal_output
|
71 |
+
70,1534330,"TERMINAL",0,0,"]0;franz.srambical@hpc-submit01:/lustre/groups/haicu/workspace/franz.srambical/jafar[?2004h[franz.srambical@gpusrv69 jafar]$ ",,terminal_output
|
72 |
+
71,1692595,"TERMINAL",0,0,"[7m [27m\r\n\r[7msrun --ntasks=4 bash -c 'nvidia-smi --query-gpu=uuid --format=csv,noheader'[27m\r\n\r\n\r[7m [27m",,terminal_output
|
73 |
+
72,1693091,"TERMINAL",0,0,"[A[A[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C \r\n\rsrun --ntasks=4 bash -c 'nvidia-smi --query-gpu=uuid --format=csv,noheader'\r\n\r\n\r [A[A[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C",,terminal_output
|
74 |
+
73,1693793,"TERMINAL",0,0,"\r\n\r\n\r\n\r[C[C[C[C",,terminal_output
|
75 |
+
74,1694392,"TERMINAL",0,0,"[K",,terminal_output
|
76 |
+
75,1694563,"TERMINAL",0,0,"[K",,terminal_output
|
77 |
+
76,1694818,"TERMINAL",0,0,"[K",,terminal_output
|
78 |
+
77,1694913,"TERMINAL",0,0,"\r[K",,terminal_output
|
79 |
+
78,1695172,"TERMINAL",0,0,"[K[A",,terminal_output
|
80 |
+
79,1695517,"TERMINAL",0,0,"[K[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C",,terminal_output
|
81 |
+
80,1695977,"TERMINAL",0,0,"[A\r[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C",,terminal_output
|
82 |
+
81,1696813,"TERMINAL",0,0,"",,terminal_output
|
83 |
+
82,1697756,"TERMINAL",0,0,"\r\n\r[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C",,terminal_output
|
84 |
+
83,1698019,"TERMINAL",0,0,"[A\r[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C",,terminal_output
|
85 |
+
84,1698512,"TERMINAL",0,0,"\r\n\r[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C",,terminal_output
|
86 |
+
85,1699085,"TERMINAL",0,0,"[A\r[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C",,terminal_output
|
87 |
+
86,1699359,"TERMINAL",0,0,"[C",,terminal_output
|
88 |
+
87,1700194,"TERMINAL",0,0,"[C",,terminal_output
|
89 |
+
88,1700941,"TERMINAL",0,0,"[C[C[C[C\r\n\r[C[C[C[C[C[C[C[C[C[C[C",,terminal_output
|
90 |
+
89,1701893,"TERMINAL",0,0,"[?25l[23;2H[0m\r[?25h",,terminal_output
|
91 |
+
90,1702455,"TERMINAL",0,0,"[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[Csrun --ntasks=4 bash -c 'nvidia-smi --query-gpu=uuid --format=csv,noheader'[K[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C",,terminal_output
|
92 |
+
91,1703242,"TERMINAL",0,0,"srun --ntasks=4 bash -c 'nvidia-smi --query-gpu=uuid -[C[1Pformat=csv,noheader'[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[Csrun --ntasks=4 bash -c 'nvidia-smi --query-gpu=uuid --[1Pformat=csv,noheader'[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[Csrun --ntasks=4 bash -c 'nvidia-smi --query-gpu=uuid --f[1Pormat=csv,noheader'[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[Csrun --ntasks=4 bash -c 'nvidia-smi --query-gpu=uuid --fo[1Prmat=csv,noheader'[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[Csrun --ntasks=4 bash -c 'nvidia-smi --query-gpu=uuid --for[1Pmat=csv,noheader'[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[Csrun --ntasks=4 bash -c 'nvidia-smi --query-gpu=uuid --form[1Pat=csv,noheader'[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C",,terminal_output
|
93 |
+
92,1703948,"TERMINAL",0,0,"[C[C",,terminal_output
|
94 |
+
93,1704421,"TERMINAL",0,0,"[?25l[0m4[22;50H[0m bash -c 'nvidia-smi --query-gpu=uuid --forma[1Pt=csv,noheader'[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[?25h",,terminal_output
|
95 |
+
94,1704858,"TERMINAL",0,0,"[?25l[0ms[22;48H[0m[0m=[22;49H[0m bash -c 'nvidia-smi --query-gpu=uuid --format[1P=csv,noheader'[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[?25h[?25l[0ms[22;48H[0m bash -c 'nvidia-smi --query-gpu=uuid --format=[1Pcsv,noheader'[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[?25h[?25l[0mk[22;47H[0m bash -c 'nvidia-smi --query-gpu=uuid --format=c[1Psv,noheader'[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[?25h[?25l[0ms[22;46H[0m bash -c 'nvidia-smi --query-gpu=uuid --format=cs[1Pv,noheader'[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[?25h[?25l[0ma[22;45H[0m bash -c 'nvidia-smi --query-gpu=uuid --format=csv[1P,noheader'[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[?25h bash -c 'nvidia-smi --query-gpu=uuid --format=csv,[1Pnoheader'[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[?25l[0mn[22;43H[0m bash -c 'nvidia-smi --query-gpu=uuid --format=csv,n[1Poheader'[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[?25h",,terminal_output
|
96 |
+
95,1704997,"TERMINAL",0,0,"[?25l[0m-[22;42H[0m bash -c 'nvidia-smi --query-gpu=uuid --format=csv,no[1Pheader'[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[?25h",,terminal_output
|
97 |
+
96,1705183,"TERMINAL",0,0,"[?25l[0m-[22;41H[0m bash -c 'nvidia-smi --query-gpu=uuid --format=csv,noh[1Peader'[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[?25h",,terminal_output
|
98 |
+
97,1705521,"TERMINAL",0,0,"[?25l[0m [22;40H[0mbash -c 'nvidia-smi --query-gpu=uuid --format=csv,nohe[1Pader'[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[?25h",,terminal_output
|
99 |
+
98,1706097,"TERMINAL",0,0,"\r\n\r[C[C[C[C[C[C",,terminal_output
|
100 |
+
99,1706287,"TERMINAL",0,0,"[?25l[?2004l\r[?25h",,terminal_output
|
101 |
+
100,1706540,"TERMINAL",0,0,"GPU-f3062939-2421-5673-6abc-1eb76d971cd6\r\nGPU-3e808c85-2952-72e2-8da6-6beec88de390\r\nGPU-e3008a0f-dcb9-f740-edf8-3364a398e339\r\nGPU-1bb263c4-21ed-4863-da84-ca4f36c17637\r\n",,terminal_output
|
102 |
+
101,1706652,"TERMINAL",0,0,"]0;franz.srambical@hpc-submit01:/lustre/groups/haicu/workspace/franz.srambical/jafar[?2004h[franz.srambical@gpusrv69 jafar]$ ",,terminal_output
|
103 |
+
102,2239299,"experiments/tokenizer_cross_node_checkpointing_test.sh",0,0,"",shellscript,tab
|
104 |
+
103,2241491,"experiments/tokenizer_cross_node_checkpointing_test.sh",274,0,"",shellscript,selection_mouse
|
105 |
+
104,2242979,"experiments/tokenizer_cross_node_checkpointing_test.sh",249,0,"",shellscript,selection_command
|
106 |
+
105,2243137,"experiments/tokenizer_cross_node_checkpointing_test.sh",214,0,"",shellscript,selection_command
|
107 |
+
106,2243294,"experiments/tokenizer_cross_node_checkpointing_test.sh",145,0,"",shellscript,selection_command
|
108 |
+
107,2244746,"experiments/tokenizer_cross_node_checkpointing_test.sh",146,0,"",shellscript,selection_command
|
109 |
+
108,2244853,"experiments/tokenizer_cross_node_checkpointing_test.sh",156,0,"",shellscript,selection_command
|
110 |
+
109,2245129,"experiments/tokenizer_cross_node_checkpointing_test.sh",128,0,"",shellscript,selection_command
|
111 |
+
110,2245985,"experiments/tokenizer_cross_node_checkpointing_test.sh",125,0,"",shellscript,selection_command
|
112 |
+
111,2252860,"experiments/tokenizer_cross_node_checkpointing_test.sh",125,2,"",shellscript,content
|
113 |
+
112,2253835,"experiments/tokenizer_cross_node_checkpointing_test.sh",125,0,"8",shellscript,content
|
114 |
+
113,2253836,"experiments/tokenizer_cross_node_checkpointing_test.sh",126,0,"",shellscript,selection_keyboard
|
115 |
+
114,2254161,"experiments/tokenizer_cross_node_checkpointing_test.sh",125,1,"",shellscript,content
|
116 |
+
115,2254258,"experiments/tokenizer_cross_node_checkpointing_test.sh",125,0,"3",shellscript,content
|
117 |
+
116,2254259,"experiments/tokenizer_cross_node_checkpointing_test.sh",126,0,"",shellscript,selection_keyboard
|
118 |
+
117,2254381,"experiments/tokenizer_cross_node_checkpointing_test.sh",126,0,"8",shellscript,content
|
119 |
+
118,2254382,"experiments/tokenizer_cross_node_checkpointing_test.sh",127,0,"",shellscript,selection_keyboard
|
120 |
+
119,2254771,"experiments/tokenizer_cross_node_checkpointing_test.sh",126,1,"",shellscript,content
|
121 |
+
120,2254881,"experiments/tokenizer_cross_node_checkpointing_test.sh",125,1,"",shellscript,content
|
122 |
+
121,2254983,"experiments/tokenizer_cross_node_checkpointing_test.sh",125,0,"4",shellscript,content
|
123 |
+
122,2254984,"experiments/tokenizer_cross_node_checkpointing_test.sh",126,0,"",shellscript,selection_keyboard
|
124 |
+
123,2255096,"experiments/tokenizer_cross_node_checkpointing_test.sh",126,0,"8",shellscript,content
|
125 |
+
124,2255097,"experiments/tokenizer_cross_node_checkpointing_test.sh",127,0,"",shellscript,selection_keyboard
|
126 |
+
125,2255383,"experiments/tokenizer_cross_node_checkpointing_test.sh",126,0,"",shellscript,selection_command
|
127 |
+
126,2255979,"experiments/tokenizer_cross_node_checkpointing_test.sh",148,0,"",shellscript,selection_command
|
128 |
+
127,2263053,"TERMINAL",0,0,"b",,terminal_output
|
129 |
+
128,2263217,"TERMINAL",0,0,"[?25l[28;36Ha[28;37H[?25h[?25l[28;37Hs[28;38H[?25h",,terminal_output
|
130 |
+
129,2263349,"TERMINAL",0,0,"[?25l[28;38Hh[28;39H[?25h",,terminal_output
|
131 |
+
130,2263715,"TERMINAL",0,0,"[?25l[28;39H [28;40H[?25h",,terminal_output
|
132 |
+
131,2263945,"TERMINAL",0,0,"[?25l[28;40Ht[28;41H[?25h",,terminal_output
|
133 |
+
132,2264024,"TERMINAL",0,0,"[?25l[28;41Ho[28;43H[?25h[?25l[28;42Hk[28;43H[?25h",,terminal_output
|
134 |
+
133,2264159,"TERMINAL",0,0,"[?25l[28;43He[28;44H[?25h",,terminal_output
|
135 |
+
134,2264232,"TERMINAL",0,0,"[?25l[28;44Hn[28;45H[?25h",,terminal_output
|
136 |
+
135,2264356,"TERMINAL",0,0,"",,terminal_output
|
137 |
+
136,2265077,"TERMINAL",0,0,"[?25l[28;45Hi[28;46H[?25h",,terminal_output
|
138 |
+
137,2265131,"TERMINAL",0,0,"",,terminal_output
|
139 |
+
138,2265996,"TERMINAL",0,0,"",,terminal_output
|
140 |
+
139,2266129,"TERMINAL",0,0,"",,terminal_output
|
141 |
+
140,2267228,"TERMINAL",0,0,"[K",,terminal_output
|
142 |
+
141,2267477,"TERMINAL",0,0,"[?25l[28;40He[28;41H[?25h",,terminal_output
|
143 |
+
142,2267861,"TERMINAL",0,0,"[?25l[28;41Hx[28;42H[?25h",,terminal_output
|
144 |
+
143,2267966,"TERMINAL",0,0,"[?25l[28;42Hp[28;43H[?25h",,terminal_output
|
145 |
+
144,2268137,"TERMINAL",0,0,"[?25l[28;43He[28;44H[?25h[?25l[28;44Hr[28;45H[?25h",,terminal_output
|
146 |
+
145,2268204,"TERMINAL",0,0,"[?25l[28;45Hi[28;46H[?25h",,terminal_output
|
147 |
+
146,2268274,"TERMINAL",0,0,"[?25l[28;46Hm[28;47H[?25h",,terminal_output
|
148 |
+
147,2268361,"TERMINAL",0,0,"ents/",,terminal_output
|
149 |
+
148,2268557,"TERMINAL",0,0,"[?25l[28;52Ht[28;53H[?25h",,terminal_output
|
150 |
+
149,2268663,"TERMINAL",0,0,"[?25l[28;53Ho[28;55H[?25h[?25l[28;54Hk[28;55H[?25h",,terminal_output
|
151 |
+
150,2268761,"TERMINAL",0,0,"enizer_",,terminal_output
|
152 |
+
151,2270197,"TERMINAL",0,0,"[?25l[28;62Hc[28;63H[?25h",,terminal_output
|
153 |
+
152,2270355,"TERMINAL",0,0,"[?25l[28;63Hr[28;64H[?25h",,terminal_output
|
154 |
+
153,2270438,"TERMINAL",0,0,"[?25l[28;64Ho[28;65H[?25h",,terminal_output
|
155 |
+
154,2270578,"TERMINAL",0,0,"ss_node_checkpointing_test.sh ",,terminal_output
|
156 |
+
155,2270712,"TERMINAL",0,0,"",,terminal_output
|
157 |
+
156,2290521,"experiments/tokenizer_cross_node_checkpointing_test.sh",130,69,"",shellscript,content
|
158 |
+
157,2290535,"experiments/tokenizer_cross_node_checkpointing_test.sh",134,0,"",shellscript,selection_command
|
159 |
+
158,2292564,"TERMINAL",0,0,"[?25l[?2004l\r[?25h",,terminal_output
|
160 |
+
159,2302589,"TERMINAL",0,0,"^Csrun: interrupt (one more within 1 sec to abort)\r\nsrun: StepId=26664743.3 tasks 0-3: running\r\n",,terminal_output
|
161 |
+
160,2302695,"TERMINAL",0,0,"^Csrun: sending Ctrl-C to StepId=26664743.3\r\nsrun: forcing job termination\r\nsrun: Job step aborted: Waiting up to 32 seconds for job step to finish.\r\nslurmstepd: error: *** STEP 26664743.3 ON gpusrv69 CANCELLED AT 2025-07-03T21:25:25 ***\r\n",,terminal_output
|
162 |
+
161,2302843,"TERMINAL",0,0,"]0;franz.srambical@hpc-submit01:/lustre/groups/haicu/workspace/franz.srambical/jafar[?2004h[franz.srambical@gpusrv69 jafar]$ ",,terminal_output
|
163 |
+
162,2304357,"experiments/tokenizer_cross_node_checkpointing_test.sh",306,0,"",shellscript,selection_mouse
|
164 |
+
163,2304360,"experiments/tokenizer_cross_node_checkpointing_test.sh",305,0,"",shellscript,selection_command
|
165 |
+
164,2305257,"experiments/tokenizer_cross_node_checkpointing_test.sh",136,0,"ckpt_dir checkpoints/tokenizer_cross_node_checkpointing_test \\n --",shellscript,content
|
166 |
+
165,2305267,"experiments/tokenizer_cross_node_checkpointing_test.sh",148,0,"",shellscript,selection_command
|
167 |
+
166,2306283,"experiments/tokenizer_cross_node_checkpointing_test.sh",130,68," --ckpt_dir checkpoints/tokenizer_cross_node_checkpointing_test \",shellscript,selection_command
|
168 |
+
167,2306459,"experiments/tokenizer_cross_node_checkpointing_test.sh",130,103," --ckpt_dir checkpoints/tokenizer_cross_node_checkpointing_test \\n --log_checkpoint_interval 10 \",shellscript,selection_command
|
169 |
+
168,2306883,"experiments/tokenizer_cross_node_checkpointing_test.sh",130,104,"",shellscript,content
|
170 |
+
169,2306888,"experiments/tokenizer_cross_node_checkpointing_test.sh",134,0,"",shellscript,selection_command
|
171 |
+
170,2309491,"TERMINAL",0,0,"bash experiments/tokenizer_cross_node_checkpointing_test.sh ",,terminal_output
|
172 |
+
171,2309846,"TERMINAL",0,0,"[H[2J[franz.srambical@gpusrv69 jafar]$ bash experiments/tokenizer_cross_node_checkpointing_test.sh ",,terminal_output
|
173 |
+
172,2310154,"TERMINAL",0,0,"[?25l[?2004l\r[?25h",,terminal_output
|
174 |
+
173,2368708,"experiments/tokenizer_cross_node_checkpointing_test.sh",74,0,"",shellscript,selection_mouse
|
175 |
+
174,2370127,"experiments/tokenizer_cross_node_checkpointing_test.sh",48,0,"",shellscript,selection_command
|
176 |
+
175,2370314,"experiments/tokenizer_cross_node_checkpointing_test.sh",47,0,"",shellscript,selection_command
|
177 |
+
176,2370628,"experiments/tokenizer_cross_node_checkpointing_test.sh",46,0,"",shellscript,selection_command
|
178 |
+
177,2409878,"experiments/tokenizer_cross_node_checkpointing_test.sh",47,0,"",shellscript,selection_command
|
179 |
+
178,2412239,"experiments/tokenizer_cross_node_checkpointing_test.sh",48,0,"",shellscript,selection_command
|
180 |
+
179,2412495,"experiments/tokenizer_cross_node_checkpointing_test.sh",56,0,"",shellscript,selection_command
|
181 |
+
180,2412522,"experiments/tokenizer_cross_node_checkpointing_test.sh",58,0,"",shellscript,selection_command
|
182 |
+
181,2412554,"experiments/tokenizer_cross_node_checkpointing_test.sh",72,0,"",shellscript,selection_command
|
183 |
+
182,2412581,"experiments/tokenizer_cross_node_checkpointing_test.sh",74,0,"",shellscript,selection_command
|
184 |
+
183,2412616,"experiments/tokenizer_cross_node_checkpointing_test.sh",75,0,"",shellscript,selection_command
|
185 |
+
184,2412652,"experiments/tokenizer_cross_node_checkpointing_test.sh",80,0,"",shellscript,selection_command
|
186 |
+
185,2413532,"experiments/tokenizer_cross_node_checkpointing_test.sh",87,0,"",shellscript,selection_command
|
187 |
+
186,2415112,"experiments/tokenizer_cross_node_checkpointing_test.sh",102,0,"",shellscript,selection_command
|
188 |
+
187,2415352,"experiments/tokenizer_cross_node_checkpointing_test.sh",103,0,"",shellscript,selection_command
|
189 |
+
188,2461686,"TERMINAL",0,0,"2025-07-03 21:28:04.344578: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:467] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\r\n2025-07-03 21:28:04.344619: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:467] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\r\n2025-07-03 21:28:04.344695: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:467] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\r\n2025-07-03 21:28:04.344729: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:467] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\r\n",,terminal_output
|
190 |
+
189,2462333,"TERMINAL",0,0,"WARNING: All log messages before absl::InitializeLog() is called are written to STDERR\r\nE0000 00:00:1751570885.025387 2463513 cuda_dnn.cc:8579] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\r\nWARNING: All log messages before absl::InitializeLog() is called are written to STDERR\r\nE0000 00:00:1751570885.025374 2463514 cuda_dnn.cc:8579] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\r\nWARNING: All log messages before absl::InitializeLog() is called are written to STDERR\r\nE0000 00:00:1751570885.026820 2459674 cuda_dnn.cc:8579] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\r\nWARNING: All log messages before absl::InitializeLog() is called are written to STDERR\r\nE0000 00:00:1751570885.026835 2459675 cuda_dnn.cc:8579] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\r\n",,terminal_output
|
191 |
+
190,2462775,"TERMINAL",0,0,"E0000 00:00:1751570885.469961 2463514 cuda_blas.cc:1407] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\r\nE0000 00:00:1751570885.469988 2463513 cuda_blas.cc:1407] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\r\nE0000 00:00:1751570885.470054 2459674 cuda_blas.cc:1407] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\r\nE0000 00:00:1751570885.470059 2459675 cuda_blas.cc:1407] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\r\n",,terminal_output
|
192 |
+
191,2464525,"TERMINAL",0,0,"W0000 00:00:1751570887.196521 2463513 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1751570887.196573 2463513 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1751570887.196580 2463513 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1751570887.196584 2463513 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1751570887.196532 2463514 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1751570887.196572 2463514 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1751570887.196576 2463514 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1751570887.196579 2463514 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1751570887.218392 2459674 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1751570887.218432 2459674 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1751570887.218438 2459674 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1751570887.218441 2459674 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1751570887.218429 2459675 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1751570887.218491 2459675 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1751570887.218497 2459675 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1751570887.218501 2459675 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\n",,terminal_output
|
193 |
+
192,2619154,"TERMINAL",0,0,"W0000 00:00:1751571041.843067 2463513 gpu_device.cc:2341] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.\r\nSkipping registering GPU devices...\r\nW0000 00:00:1751571041.843086 2459674 gpu_device.cc:2341] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.\r\nSkipping registering GPU devices...\r\nW0000 00:00:1751571041.843105 2463514 gpu_device.cc:2341] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.\r\nSkipping registering GPU devices...\r\nW0000 00:00:1751571041.843055 2459675 gpu_device.cc:2341] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.\r\nSkipping registering GPU devices...\r\n",,terminal_output
|
194 |
+
193,2925218,"TERMINAL",0,0,"2025-07-03 21:35:47.896755: F external/xla/xla/pjrt/distributed/client.h:80] Terminating process because the JAX distributed service detected fatal errors. This most likely indicates that another task died; see the other task logs for more details. Disable Python buffering, i.e. `python -u`, to be sure to see all the previous output. absl::Status: DEADLINE_EXCEEDED: Deadline Exceeded\r\n\r\nRPC: /tensorflow.CoordinationService/RegisterTask\r\n2025-07-03 21:35:47.896778: F external/xla/xla/pjrt/distributed/client.h:80] Terminating process because the JAX distributed service detected fatal errors. This most likely indicates that another task died; see the other task logs for more details. Disable Python buffering, i.e. `python -u`, to be sure to see all the previous output. absl::Status: DEADLINE_EXCEEDED: Deadline Exceeded\r\n\r\nRPC: /tensorflow.CoordinationService/RegisterTask\r\n2025-07-03 21:35:47.897076: F external/xla/xla/pjrt/distributed/client.h:80] Terminating process because the JAX distributed service detected fatal errors. This most likely indicates that another task died; see the other task logs for more details. Disable Python buffering, i.e. `python -u`, to be sure to see all the previous output. absl::Status: DEADLINE_EXCEEDED: Deadline Exceeded\r\n\r\nRPC: /tensorflow.CoordinationService/RegisterTask\r\n2025-07-03 21:35:47.897090: F external/xla/xla/pjrt/distributed/client.h:80] Terminating process because the JAX distributed service detected fatal errors. This most likely indicates that another task died; see the other task logs for more details. Disable Python buffering, i.e. `python -u`, to be sure to see all the previous output. absl::Status: DEADLINE_EXCEEDED: Deadline Exceeded\r\n\r\nRPC: /tensorflow.CoordinationService/RegisterTask\r\n",,terminal_output
|
195 |
+
194,2925910,"TERMINAL",0,0,"srun: error: gpusrv70: tasks 2-3: Aborted (core dumped)\r\nsrun: error: gpusrv69: tasks 0-1: Aborted (core dumped)\r\n]0;franz.srambical@hpc-submit01:/lustre/groups/haicu/workspace/franz.srambical/jafar[?2004h[franz.srambical@gpusrv69 jafar]$ ",,terminal_output
|
196 |
+
195,3136855,"TERMINAL",0,0,"e",,terminal_output
|
197 |
+
196,3137164,"TERMINAL",0,0,"[?25l[58;36Hx[58;37H[?25h",,terminal_output
|
198 |
+
197,3137217,"TERMINAL",0,0,"[?25l[58;37Hi[58;38H[?25h",,terminal_output
|
199 |
+
198,3137489,"TERMINAL",0,0,"[?25l[58;38Ht[58;39H[?25h",,terminal_output
|
200 |
+
199,3139183,"TERMINAL",0,0,"[K",,terminal_output
|
201 |
+
200,3211564,"TERMINAL",0,0,"bash experiments/tokenizer_cross_node_checkpointing_test.sh ",,terminal_output
|
202 |
+
201,3212608,"TERMINAL",0,0,"[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[K\r\n\r[K[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C",,terminal_output
|
203 |
+
202,3213418,"TERMINAL",0,0,"e",,terminal_output
|
204 |
+
203,3213649,"TERMINAL",0,0,"[?25l[57;36Hx[57;37H[?25h",,terminal_output
|
205 |
+
204,3213848,"TERMINAL",0,0,"[?25l[57;37Hi[57;38H[?25h",,terminal_output
|
206 |
+
205,3214091,"TERMINAL",0,0,"[?25l[57;38Ht[57;39H[?25h",,terminal_output
|
207 |
+
206,3214616,"TERMINAL",0,0,"[?25l[?2004l\rexit\r\n[?25h",,terminal_output
|
208 |
+
207,3214833,"TERMINAL",0,0,"srun: error: gpusrv69: task 0: Exited with exit code 134\r\nsalloc: Relinquishing job allocation 26664743\r\nsalloc: Job allocation 26664743 has been revoked.\r\n",,terminal_output
|
209 |
+
208,3223824,"TERMINAL",0,0,"salloc --reservation=haicu_stefan -p gpu_p --time=05:00:00 --job-name=interactive_bash --qos=gpu_normal --gpus-per-task=1 -w gpusrv69,gpusrv70 --cpus-per-task=4 --ntasks-per-node=2",,terminal_command
|
210 |
+
209,3223846,"TERMINAL",0,0,"]633;E;salloc --reservation=haicu_stefan -p gpu_p --time=05:00:00 --job-name=interactive_bash --qos=gpu_normal --gpus-per-task=1 -w gpusrv69,gpusrv70 --cpus-per-task=4 --ntasks-per-node=2;d193c799-eb50-4b87-89db-ea172d98a654]633;C",,terminal_output
|
211 |
+
210,3239955,"TERMINAL",0,0,"salloc --reservation=haicu_stefan -p gpu_p --time=05:00:00 --job-name=interactive_bash --qos=gpu_normal --gres=gpu:2 -w gpusrv69,gpusrv70 --cpus-per-task=4 --ntasks-per-node=2",,terminal_command
|
212 |
+
211,3240035,"TERMINAL",0,0,"]633;E;salloc --reservation=haicu_stefan -p gpu_p --time=05:00:00 --job-name=interactive_bash --qos=gpu_normal --gres=gpu:2 -w gpusrv69,gpusrv70 --cpus-per-task=4 --ntasks-per-node=2;d193c799-eb50-4b87-89db-ea172d98a654]633;Csalloc: Granted job allocation 26664812\r\n",,terminal_output
|
213 |
+
212,3240143,"TERMINAL",0,0,"salloc: Nodes gpusrv[69-70] are ready for job\r\n",,terminal_output
|
214 |
+
213,3240527,"TERMINAL",0,0,"]0;franz.srambical@hpc-submit01:/lustre/groups/haicu/workspace/franz.srambical/jafar[?2004h[franz.srambical@gpusrv69 jafar]$ ",,terminal_output
|
215 |
+
214,3242400,"TERMINAL",0,0,"\r(reverse-i-search)`': [K",,terminal_output
|
216 |
+
215,3242669,"TERMINAL",0,0,"[61@s': bash experiments/tokenizer_cross_node_checkpointing_test.[7ms[27mh",,terminal_output
|
217 |
+
216,3243226,"TERMINAL",0,0,"[?25l[58;81H[7;39;49ms[58;81H[0m\r[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[34Po': [7mso[27murce .venv/bin/activate[?25h",,terminal_output
|
218 |
+
217,3243536,"TERMINAL",0,0,"[?25l[58;25H[7;39;49ms[58;25H[0m[1@u': [7msou[27m[?25h",,terminal_output
|
219 |
+
218,3244078,"TERMINAL",0,0,"\r[9@[franz.srambical@gpusrv69 jafar]$ sou\r\n[?2004l\r]0;franz.srambical@hpc-submit01:/lustre/groups/haicu/workspace/franz.srambical/jafar[?2004h(jafar) [franz.srambical@gpusrv69 jafar]$ ",,terminal_output
|
220 |
+
219,3247297,"TERMINAL",0,0,"\r(reverse-i-search)`': [K",,terminal_output
|
221 |
+
220,3247926,"TERMINAL",0,0,"b': source .venv/[7mb[27min/activate",,terminal_output
|
222 |
+
221,3248048,"TERMINAL",0,0,"[?25l[58;37H[7;39;49mb[58;37H[0ma': [7mba[27msh experiments/tokenizer_cross_node_checkpointing_test.sh \r[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[?25h",,terminal_output
|
223 |
+
222,3248131,"TERMINAL",0,0,"[?25l[58;25H[7;39;49mb[58;25H[0m[1@s': [7mbas[27m[?25h",,terminal_output
|
224 |
+
223,3248236,"TERMINAL",0,0,"[?25l[58;26H[7;39;49mb[58;26H[0m[1@h': [7mbash[27m[?25h",,terminal_output
|
225 |
+
224,3249224,"TERMINAL",0,0,"\r[Cjafar) [franz.srambical@gpusrv69 jafar]$ bash experiments/tokenizer_cross_node_checkpointing_test.sh [A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C\r\n\r[C[C[C[C[C[C[C[C[C[C",,terminal_output
|
226 |
+
225,3250054,"TERMINAL",0,0,"\r\n[?2004l\r",,terminal_output
|
227 |
+
226,3321398,"TERMINAL",0,0,"2025-07-03 21:42:24.064048: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:467] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\r\n2025-07-03 21:42:24.064041: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:467] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\r\n2025-07-03 21:42:24.065689: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:467] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\r\n2025-07-03 21:42:24.065698: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:467] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\r\n",,terminal_output
|
228 |
+
227,3322051,"TERMINAL",0,0,"WARNING: All log messages before absl::InitializeLog() is called are written to STDERR\r\nE0000 00:00:1751571744.633994 2466893 cuda_dnn.cc:8579] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\r\nWARNING: All log messages before absl::InitializeLog() is called are written to STDERR\r\nE0000 00:00:1751571744.633988 2466892 cuda_dnn.cc:8579] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\r\nWARNING: All log messages before absl::InitializeLog() is called are written to STDERR\r\nE0000 00:00:1751571744.649136 2462879 cuda_dnn.cc:8579] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\r\nWARNING: All log messages before absl::InitializeLog() is called are written to STDERR\r\nE0000 00:00:1751571744.649129 2462880 cuda_dnn.cc:8579] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\r\n",,terminal_output
|
229 |
+
228,3322173,"TERMINAL",0,0,"E0000 00:00:1751571744.801568 2462879 cuda_blas.cc:1407] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\r\nE0000 00:00:1751571744.801616 2462880 cuda_blas.cc:1407] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\r\nE0000 00:00:1751571744.808444 2466892 cuda_blas.cc:1407] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\r\nE0000 00:00:1751571744.808457 2466893 cuda_blas.cc:1407] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\r\n",,terminal_output
|
230 |
+
229,3323053,"TERMINAL",0,0,"W0000 00:00:1751571745.740866 2466892 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1751571745.740902 2466892 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1751571745.740906 2466892 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1751571745.740909 2466892 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1751571745.740936 2466893 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1751571745.740978 2466893 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1751571745.740983 2466893 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1751571745.740987 2466893 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\n",,terminal_output
|
231 |
+
230,3323146,"TERMINAL",0,0,"W0000 00:00:1751571745.837265 2462879 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1751571745.837303 2462879 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1751571745.837309 2462879 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1751571745.837312 2462879 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1751571745.837301 2462880 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1751571745.837335 2462880 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1751571745.837340 2462880 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1751571745.837343 2462880 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\n",,terminal_output
|
232 |
+
231,3381108,"TERMINAL",0,0,"W0000 00:00:1751571803.740339 2466892 gpu_device.cc:2341] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.\r\nSkipping registering GPU devices...\r\nW0000 00:00:1751571803.740352 2466893 gpu_device.cc:2341] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.\r\nSkipping registering GPU devices...\r\nW0000 00:00:1751571803.741264 2462879 gpu_device.cc:2341] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.\r\nSkipping registering GPU devices...\r\nW0000 00:00:1751571803.741275 2462880 gpu_device.cc:2341] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.\r\nSkipping registering GPU devices...\r\n",,terminal_output
|
233 |
+
232,3542883,"experiments/tokenizer_cross_node_checkpointing_test.sh",74,0,"",shellscript,selection_mouse
|
234 |
+
233,3686476,"TERMINAL",0,0,"2025-07-03 21:48:28.957747: F external/xla/xla/pjrt/distributed/client.h:80] Terminating process because the JAX distributed service detected fatal errors. This most likely indicates that another task died; see the other task logs for more details. Disable Python buffering, i.e. `python -u`, to be sure to see all the previous output. absl::Status: DEADLINE_EXCEEDED: Deadline Exceeded\r\n\r\nRPC: /tensorflow.CoordinationService/RegisterTask\r\n2025-07-03 21:48:28.958290: F external/xla/xla/pjrt/distributed/client.h:80] Terminating process because the JAX distributed service detected fatal errors. This most likely indicates that another task died; see the other task logs for more details. Disable Python buffering, i.e. `python -u`, to be sure to see all the previous output. absl::Status: DEADLINE_EXCEEDED: Deadline Exceeded\r\n\r\nRPC: /tensorflow.CoordinationService/RegisterTask\r\n2025-07-03 21:48:28.958282: F external/xla/xla/pjrt/distributed/client.h:80] Terminating process because the JAX distributed service detected fatal errors. This most likely indicates that another task died; see the other task logs for more details. Disable Python buffering, i.e. `python -u`, to be sure to see all the previous output. absl::Status: DEADLINE_EXCEEDED: Deadline Exceeded\r\n\r\nRPC: /tensorflow.CoordinationService/RegisterTask\r\n2025-07-03 21:48:29.017875: F external/xla/xla/pjrt/distributed/client.h:80] Terminating process because the JAX distributed service detected fatal errors. This most likely indicates that another task died; see the other task logs for more details. Disable Python buffering, i.e. `python -u`, to be sure to see all the previous output. absl::Status: DEADLINE_EXCEEDED: Deadline Exceeded\r\n\r\nRPC: /tensorflow.CoordinationService/RegisterTask\r\n",,terminal_output
|
235 |
+
234,3686728,"TERMINAL",0,0,"srun: error: gpusrv70: task 3: Aborted (core dumped)\r\n",,terminal_output
|
236 |
+
235,3686862,"TERMINAL",0,0,"srun: error: gpusrv69: task 1: Aborted (core dumped)\r\nsrun: error: gpusrv70: task 2: Aborted (core dumped)\r\n",,terminal_output
|
237 |
+
236,3686921,"TERMINAL",0,0,"srun: error: gpusrv69: task 0: Aborted (core dumped)\r\n]0;franz.srambical@hpc-submit01:/lustre/groups/haicu/workspace/franz.srambical/jafar[?2004h(jafar) [franz.srambical@gpusrv69 jafar]$ ",,terminal_output
|
238 |
+
237,3728405,"TERMINAL",0,0,"e",,terminal_output
|
239 |
+
238,3728562,"TERMINAL",0,0,"[?25l[58;44Hx[58;45H[?25h",,terminal_output
|
240 |
+
239,3728660,"TERMINAL",0,0,"[?25l[58;45Hi[58;46H[?25h",,terminal_output
|
241 |
+
240,3728750,"TERMINAL",0,0,"[?25l[58;46Ht[58;47H[?25h",,terminal_output
|
242 |
+
241,3728877,"TERMINAL",0,0,"\r\n[?2004l\rexit\r\n",,terminal_output
|
243 |
+
242,3729136,"TERMINAL",0,0,"srun: error: gpusrv69: task 0: Exited with exit code 134\r\nsalloc: Relinquishing job allocation 26664812\r\n]0;franz.srambical@hpc-submit01:/lustre/groups/haicu/workspace/franz.srambical/jafar]633;D;134]633;P;Cwd=/lustre/groups/haicu/workspace/franz.srambical/jafar",,terminal_output
|
244 |
+
243,3738642,"TERMINAL",0,0,"salloc --reservation=haicu_stefan -p gpu_p --time=05:00:00 --job-name=interactive_bash --qos=gpu_normal --gres=gpu:2 -w gpusrv69 --cpus-per-task=4 --ntasks-per-node=2",,terminal_command
|
245 |
+
244,3738713,"TERMINAL",0,0,"]633;Csalloc: Granted job allocation 26665017\r\n",,terminal_output
|
246 |
+
245,3738818,"TERMINAL",0,0,"salloc: Nodes gpusrv69 are ready for job\r\n",,terminal_output
|
247 |
+
246,3739191,"TERMINAL",0,0,"]0;franz.srambical@hpc-submit01:/lustre/groups/haicu/workspace/franz.srambical/jafar[?2004h[franz.srambical@gpusrv69 jafar]$ ",,terminal_output
|
248 |
+
247,3739821,"TERMINAL",0,0,"\r(reverse-i-search)`': [K",,terminal_output
|
249 |
+
248,3739991,"TERMINAL",0,0,"[61@b': [7mb[27mash experiments/tokenizer_cross_node_checkpointing_test.sh\r[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C",,terminal_output
|
250 |
+
249,3740106,"TERMINAL",0,0,"[?25l[58;24H[7;39;49mb[58;24H[0m[1@a': [7mba[27m[?25h[?25l[58;25H[7;39;49mb[58;25H[0m[1@s': [7mbas[27m[?25h",,terminal_output
|
251 |
+
250,3740183,"TERMINAL",0,0,"[?25l[58;26H[7;39;49mb[58;26H[0m[1@h': [7mbash[27m[?25h",,terminal_output
|
252 |
+
251,3740502,"TERMINAL",0,0,"\r[franz.srambical@gpusrv69 jafar]$ bash experiments/tokenizer_cross_node_checkpointing_test.sh [A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C\r\n\r\r\n[?2004l\r",,terminal_output
|
253 |
+
252,3801051,"TERMINAL",0,0,"2025-07-03 21:50:23.641855: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:467] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\r\n2025-07-03 21:50:23.641891: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:467] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\r\n",,terminal_output
|
254 |
+
253,3801794,"TERMINAL",0,0,"WARNING: All log messages before absl::InitializeLog() is called are written to STDERR\r\nE0000 00:00:1751572224.474692 2468770 cuda_dnn.cc:8579] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\r\nWARNING: All log messages before absl::InitializeLog() is called are written to STDERR\r\nE0000 00:00:1751572224.474696 2468771 cuda_dnn.cc:8579] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\r\nE0000 00:00:1751572224.487252 2468770 cuda_blas.cc:1407] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\r\nE0000 00:00:1751572224.487243 2468771 cuda_blas.cc:1407] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\r\n",,terminal_output
|
255 |
+
254,3802101,"TERMINAL",0,0,"W0000 00:00:1751572224.791692 2468770 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1751572224.791712 2468770 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1751572224.791715 2468770 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1751572224.791717 2468770 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1751572224.791699 2468771 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1751572224.791714 2468771 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1751572224.791717 2468771 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1751572224.791719 2468771 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\n",,terminal_output
|
256 |
+
255,3823224,"TERMINAL",0,0,"W0000 00:00:1751572245.915941 2468770 gpu_device.cc:2341] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.\r\nSkipping registering GPU devices...\r\nW0000 00:00:1751572245.915950 2468771 gpu_device.cc:2341] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.\r\nSkipping registering GPU devices...\r\n",,terminal_output
|
257 |
+
256,4421348,"TERMINAL",0,0,"^Csrun: interrupt (one more within 1 sec to abort)\r\nsrun: StepId=26665017.0 tasks 0-1: running\r\n",,terminal_output
|
258 |
+
257,4646245,"TERMINAL",0,0,"^Csrun: interrupt (one more within 1 sec to abort)\r\nsrun: StepId=26665017.0 tasks 0-1: running\r\n",,terminal_output
|
259 |
+
258,4646426,"TERMINAL",0,0,"^Csrun: sending Ctrl-C to StepId=26665017.0\r\nsrun: forcing job termination\r\nsrun: Job step aborted: Waiting up to 32 seconds for job step to finish.\r\nslurmstepd: error: *** STEP 26665017.0 ON gpusrv69 CANCELLED AT 2025-07-03T22:04:29 ***\r\n",,terminal_output
|
260 |
+
259,4646687,"TERMINAL",0,0,"^Csrun: sending Ctrl-C to StepId=26665017.0\r\nsrun: job abort in progress\r\n",,terminal_output
|
261 |
+
260,4647398,"TERMINAL",0,0,"]0;franz.srambical@hpc-submit01:/lustre/groups/haicu/workspace/franz.srambical/jafar[?2004h[franz.srambical@gpusrv69 jafar]$ ",,terminal_output
|
262 |
+
261,4649272,"TERMINAL",0,0,"bash experiments/tokenizer_cross_node_checkpointing_test.sh ",,terminal_output
|
263 |
+
262,4650031,"TERMINAL",0,0,"[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[Cexit[K\r\n\r[K[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C",,terminal_output
|
264 |
+
263,4650375,"TERMINAL",0,0,"bash experiments/tokenizer_cross_node_checkpointing_test.sh ",,terminal_output
|
265 |
+
264,4650565,"TERMINAL",0,0,"[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[K\r\n\r[K[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C",,terminal_output
|
266 |
+
265,4650903,"TERMINAL",0,0,"e",,terminal_output
|
267 |
+
266,4651044,"TERMINAL",0,0,"[?25l[57;36Hx[57;37H[?25h",,terminal_output
|
268 |
+
267,4651130,"TERMINAL",0,0,"[?25l[57;37Hi[57;38H[?25h",,terminal_output
|
269 |
+
268,4651246,"TERMINAL",0,0,"[?25l[57;38Ht[57;39H[?25h",,terminal_output
|
270 |
+
269,4651351,"TERMINAL",0,0,"[?25l[?2004l\rexit\r\n[?25h",,terminal_output
|
271 |
+
270,4651593,"TERMINAL",0,0,"srun: error: gpusrv69: task 0: Exited with exit code 137\r\nsalloc: Relinquishing job allocation 26665017\r\nsalloc: Job allocation 26665017 has been revoked.\r\n]0;franz.srambical@hpc-submit01:/lustre/groups/haicu/workspace/franz.srambical/jafar]633;D;137]633;P;Cwd=/lustre/groups/haicu/workspace/franz.srambical/jafar",,terminal_output
|
272 |
+
271,4754894,"TERMINAL",0,0,"salloc --reservation=haicu_stefan -p gpu_p --time=05:00:00 --job-name=interactive_bash --qos=gpu_normal --gpu-bind=single:1 --gpus-per-task=1 -w gpusrv69,gpusrv70 --cpus-per-task=4 --ntasks-per-node=2 --ntasks=4",,terminal_command
|
273 |
+
272,4754971,"TERMINAL",0,0,"]633;E;salloc --reservation=haicu_stefan -p gpu_p --time=05:00:00 --job-name=interactive_bash --qos=gpu_normal --gpu-bind=single:1 --gpus-per-task=1 -w gpusrv69,gpusrv70 --cpus-per-task=4 --ntasks-per-node=2 --ntasks=4;d193c799-eb50-4b87-89db-ea172d98a654]633;Csalloc: Granted job allocation 26665156\r\n",,terminal_output
|
274 |
+
273,4755072,"TERMINAL",0,0,"salloc: Nodes gpusrv[69-70] are ready for job\r\n",,terminal_output
|
275 |
+
274,4755423,"TERMINAL",0,0,"]0;franz.srambical@hpc-submit01:/lustre/groups/haicu/workspace/franz.srambical/jafar[?2004h[franz.srambical@gpusrv69 jafar]$ ",,terminal_output
|
276 |
+
275,4756071,"TERMINAL",0,0,"\r(reverse-i-search)`': [K",,terminal_output
|
277 |
+
276,4756309,"TERMINAL",0,0,"[61@s': bash experiments/tokenizer_cross_node_checkpointing_test.[7ms[27mh",,terminal_output
|
278 |
+
277,4756404,"TERMINAL",0,0,"[?25l[58;81H[7;39;49ms[58;81H[0m\r[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[Cr': [7msr[27mun bash -c 'nvidia-smi --query-gpu=uuid --format=csv,noheader'\r[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[?25h",,terminal_output
|
279 |
+
278,4756503,"TERMINAL",0,0,"[?25l[58;26H[7;39;49mr[58;26H[58;25H[7;39;49ms[58;25H[0m[1@u': [7msru[27m[?25h[1@n': [7msrun[27m",,terminal_output
|
280 |
+
279,4757633,"TERMINAL",0,0,"\r[franz.srambical@gpusrv69 jafar]$ srun bash -c 'nvidia-smi --query-gpu=uuid --format=csv,noheader'[A[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C\r\n\r\r\n[?2004l\r",,terminal_output
|
281 |
+
280,4757857,"TERMINAL",0,0,"GPU-3e808c85-2952-72e2-8da6-6beec88de390\r\nGPU-e3008a0f-dcb9-f740-edf8-3364a398e339\r\nGPU-f3062939-2421-5673-6abc-1eb76d971cd6\r\nGPU-1bb263c4-21ed-4863-da84-ca4f36c17637\r\n",,terminal_output
|
282 |
+
281,4757973,"TERMINAL",0,0,"]0;franz.srambical@hpc-submit01:/lustre/groups/haicu/workspace/franz.srambical/jafar[?2004h[franz.srambical@gpusrv69 jafar]$ ",,terminal_output
|
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-76073275-4388-463f-8e12-ce34ee46fad51752495312029-2025_07_14-14.15.14.704/source.csv
ADDED
The diff for this file is too large to render.
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-7737eb1f-d280-4335-8b81-3697e0d16cc61754428290161-2025_08_05-23.11.37.668/source.csv
ADDED
@@ -0,0 +1,7 @@
1 |
+
Sequence,Time,File,RangeOffset,RangeLength,Text,Language,Type
|
2 |
+
1,3,"requirements.txt",0,0,"dm_pix>=0.4.3\neinops>=0.8.0\nflax>=0.10.7\njax[cuda12]>=0.6.2\noptax>=0.2.3\nprocgen>=0.10.7\ntyro>=0.8.5\nwandb>=0.17.4\ngrain>=0.2.10\npre-commit>=4.2.0\narray-record>=0.7.2\ntqdm>=4.67.1",pip-requirements,tab
|
3 |
+
2,120,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,0,"11:11:37 PM [info] Activating crowd-code\n11:11:37 PM [info] Recording started\n11:11:37 PM [info] Initializing git provider using file system watchers...\n11:11:37 PM [info] Git repository found\n",Log,tab
|
4 |
+
3,170,"extension-output-pdoom-org.crowd-code-#1-crowd-code",193,0,"11:11:37 PM [info] Git provider initialized successfully\n11:11:37 PM [info] Initial git state: [object Object]\n",Log,content
|
5 |
+
4,10451,"requirements.txt",0,0,"",pip-requirements,tab
|
6 |
+
5,18865,"requirements.txt",179,0,"",pip-requirements,selection_mouse
|
7 |
+
6,18866,"requirements.txt",178,0,"",pip-requirements,selection_command
|
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-7f860396-c5c8-4f1f-8ce7-04e005748e611754402256906-2025_08_05-15.57.44.850/source.csv
ADDED
The diff for this file is too large to render.
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-81dc70dc-8e01-48a6-9a00-9349b9f9a4171751541780271-2025_07_03-13.23.33.804/source.csv
ADDED
The diff for this file is too large to render.
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-8d01713f-5b88-429a-99ff-32944a31fd381753259613769-2025_07_23-10.34.13.639/source.csv
ADDED
The diff for this file is too large to render.
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-8e67a739-7b65-4646-afc1-42e9766880571751607756007-2025_07_04-07.43.31.602/source.csv
ADDED
The diff for this file is too large to render.
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-8e7b7877-c553-4d5c-a7c5-433adcd8112b1754287948136-2025_08_04-08.12.35.154/source.csv
ADDED
@@ -0,0 +1,286 @@
1 |
+
Sequence,Time,File,RangeOffset,RangeLength,Text,Language,Type
|
2 |
+
1,3,"utils/nn.py",0,0,"import math\nfrom typing import Tuple, Callable, List\n\nfrom flax import nnx\nimport jax\nimport jax.numpy as jnp\nimport einops\n\n\nclass PositionalEncoding(nnx.Module):\n """"""https://uvadlc-notebooks.readthedocs.io/en/latest/tutorial_notebooks/JAX/tutorial6/Transformers_and_MHAttention.html""""""\n\n def __init__(self, d_model: int, max_len: int = 5000):\n self.d_model = d_model\n self.max_len = max_len\n\n pe = jnp.zeros((self.max_len, self.d_model))\n position = jnp.arange(0, self.max_len, dtype=jnp.float32)[:, None]\n div_term = jnp.exp(\n jnp.arange(0, self.d_model, 2) * (-math.log(10000.0) / self.d_model)\n )\n pe = pe.at[:, 0::2].set(jnp.sin(position * div_term))\n pe = pe.at[:, 1::2].set(jnp.cos(position * div_term))\n self.pe = nnx.Variable(pe)\n\n def __call__(self, x: jax.Array) -> jax.Array:\n x = x + self.pe[: x.shape[2]]\n return x\n\n\nclass STBlock(nnx.Module):\n def __init__(\n self,\n dim: int,\n ffn_dim: int,\n num_heads: int,\n dropout: float,\n param_dtype: jnp.dtype,\n dtype: jnp.dtype,\n use_flash_attention: bool,\n rngs: nnx.Rngs,\n ):\n self.dim = dim\n self.ffn_dim = ffn_dim\n self.num_heads = num_heads\n self.dropout = dropout\n self.param_dtype = param_dtype\n self.dtype = dtype\n self.use_flash_attention = use_flash_attention\n\n self.spatial_pos_enc = PositionalEncoding(self.dim)\n self.spatial_norm = nnx.LayerNorm(\n num_features=self.dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n self.spatial_attention = nnx.MultiHeadAttention(\n num_heads=self.num_heads,\n in_features=self.dim,\n qkv_features=self.dim,\n dropout_rate=self.dropout,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n attention_fn=_create_flash_attention_fn(\n self.use_flash_attention, is_causal=False\n ),\n rngs=rngs,\n decode=False,\n )\n\n self.temporal_pos_enc = PositionalEncoding(self.dim)\n self.temporal_norm = nnx.LayerNorm(\n num_features=self.dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n self.temporal_attention = nnx.MultiHeadAttention(\n num_heads=self.num_heads,\n in_features=self.dim,\n qkv_features=self.dim,\n dropout_rate=self.dropout,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n attention_fn=_create_flash_attention_fn(\n self.use_flash_attention, is_causal=True\n ),\n rngs=rngs,\n decode=False,\n )\n\n self.ffn_norm = nnx.LayerNorm(\n num_features=self.dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n self.ffn_dense1 = nnx.Linear(\n in_features=self.dim,\n out_features=self.ffn_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n self.ffn_dense2 = nnx.Linear(\n in_features=self.ffn_dim,\n out_features=self.dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n\n @nnx.remat\n def __call__(self, x_BTNM: jax.Array) -> jax.Array:\n # --- Spatial attention ---\n z_BTNM = self.spatial_pos_enc(x_BTNM)\n z_BTNM = self.spatial_norm(z_BTNM)\n z_BTNM = self.spatial_attention(z_BTNM)\n x_BTNM = x_BTNM + z_BTNM\n\n # --- Temporal attention ---\n x_BNTM = x_BTNM.swapaxes(1, 2)\n z_BNTM = self.temporal_pos_enc(x_BNTM)\n z_BNTM = self.temporal_norm(z_BNTM)\n z_BNTM = self.temporal_attention(z_BNTM)\n x_BNTM = x_BNTM + z_BNTM\n x_BTNM = x_BNTM.swapaxes(1, 2)\n\n # --- Feedforward ---\n z_BTNM = self.ffn_norm(x_BTNM)\n z_BTND = self.ffn_dense1(z_BTNM)\n z_BTND = jax.nn.gelu(z_BTND)\n z_BTNM = self.ffn_dense2(z_BTND)\n x_BTNM = x_BTNM + z_BTNM\n\n return x_BTNM\n\n\nclass STTransformer(nnx.Module):\n """"""\n Dimension keys:\n B: 
batch size\n T: number of frames\n N: number of patches per frame\n I: number of input features\n M: model dimension\n D: FFN dimension\n O: number of output features\n """"""\n def __init__(\n self,\n input_dim: int,\n model_dim: int,\n ffn_dim: int,\n out_dim: int,\n num_blocks: int,\n num_heads: int,\n dropout: float,\n param_dtype: jnp.dtype,\n dtype: jnp.dtype,\n use_flash_attention: bool,\n rngs: nnx.Rngs,\n ):\n self.input_dim = input_dim\n self.model_dim = model_dim\n self.ffn_dim = ffn_dim\n self.out_dim = out_dim\n self.num_blocks = num_blocks\n self.num_heads = num_heads\n self.dropout = dropout\n self.param_dtype = param_dtype\n self.dtype = dtype\n self.use_flash_attention = use_flash_attention\n\n self.input_norm1 = nnx.LayerNorm(\n num_features=self.input_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n self.input_dense = nnx.Linear(\n in_features=self.input_dim,\n out_features=self.model_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n self.input_norm2 = nnx.LayerNorm(\n num_features=self.model_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n\n self.blocks = []\n for _ in range(self.num_blocks):\n self.blocks.append(\n STBlock(\n dim=self.model_dim,\n ffn_dim=self.ffn_dim,\n num_heads=self.num_heads,\n dropout=self.dropout,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n use_flash_attention=self.use_flash_attention,\n rngs=rngs,\n )\n )\n\n self.output_dense = nnx.Linear(\n in_features=self.model_dim,\n out_features=self.out_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n\n def __call__(self, x_BTNI: jax.Array) -> jax.Array:\n x_BTNI = self.input_norm1(x_BTNI)\n x_BTNM = self.input_dense(x_BTNI)\n x_BTNM = self.input_norm2(x_BTNM)\n\n for block in self.blocks:\n x_BTNM = block(x_BTNM)\n\n x_BTNO = self.output_dense(x_BTNM)\n return x_BTNO\n\nclass TransformerBlock(nnx.Module):\n def __init__(\n self,\n model_dim: int,\n ffn_dim: int,\n num_heads: int,\n dropout: float,\n param_dtype: jnp.dtype,\n dtype: jnp.dtype,\n use_flash_attention: bool,\n decode: bool,\n rngs: nnx.Rngs,\n ):\n self.model_dim = model_dim\n self.ffn_dim = ffn_dim\n self.num_heads = num_heads\n self.dropout = dropout\n self.param_dtype = param_dtype\n self.dtype = dtype\n self.use_flash_attention = use_flash_attention\n self.decode = decode\n\n self.temporal_pos_enc = PositionalEncoding(self.model_dim)\n self.spatial_pos_enc = PositionalEncoding(self.model_dim)\n self.temporal_norm = nnx.LayerNorm(\n num_features=self.model_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n self.spatial_norm = nnx.LayerNorm(\n num_features=self.model_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n self.ffn_norm = nnx.LayerNorm(\n num_features=self.model_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n self.temporal_attention = nnx.MultiHeadAttention(\n num_heads=self.num_heads,\n in_features=self.model_dim,\n qkv_features=self.model_dim,\n dropout_rate=self.dropout,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n attention_fn=_create_flash_attention_fn(\n self.use_flash_attention, is_causal=True\n ),\n rngs=rngs,\n decode=self.decode,\n )\n self.spatial_attention = nnx.MultiHeadAttention(\n num_heads=self.num_heads,\n in_features=self.model_dim,\n qkv_features=self.model_dim,\n dropout_rate=self.dropout,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n attention_fn=_create_flash_attention_fn(\n 
self.use_flash_attention, is_causal=True\n ),\n rngs=rngs,\n decode=self.decode,\n )\n self.ffn_dense1 = nnx.Linear(\n in_features=self.model_dim,\n out_features=self.ffn_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n self.ffn_dense2 = nnx.Linear(\n in_features=self.ffn_dim,\n out_features=self.model_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n\n @nnx.remat\n def __call__(self, x_BTNM: jax.Array, pos_index: Tuple[jax.Array, jax.Array] | None = None) -> jax.Array:\n # --- Spatial attention ---\n B, T, N, M = x_BTNM.shape\n z_FNM = einops.rearrange(x_BTNM, ""b t n m -> (b t) n m"")\n z_FNM = self.spatial_norm(z_FNM)\n if self.decode:\n assert pos_index is not None\n z_FM = z_FNM[:, pos_index[1]]\n z_F1M = jnp.reshape(z_FM, (B * T, 1, M))\n z_F1M = self.spatial_attention(z_F1M)\n z_FM = jnp.reshape(z_F1M, (B * T, M))\n z_FNM = z_FNM.at[:, pos_index[1], :].set(z_FM)\n else:\n z_FNM = self.spatial_attention(z_FNM)\n z_BTNM = einops.rearrange(z_FNM, ""(b t) n m -> b t n m"", t=T)\n x_BTNM = x_BTNM + z_BTNM\n # --- Temporal attention ---\n z_PTM = einops.rearrange(x_BTNM, ""b t n m -> (b n) t m"")\n z_PTM = self.temporal_norm(z_PTM)\n if self.decode:\n assert pos_index is not None\n z_PM = z_PTM[:, pos_index[0]]\n z_P1M = jnp.reshape(z_PM, (B * N, 1, M))\n z_P1M = self.temporal_attention(z_P1M)\n z_PM = jnp.reshape(z_P1M, (B * N, M))\n z_PTM = z_PTM.at[:, pos_index[0], :].set(z_PM)\n else:\n z_PTM = self.temporal_attention(z_PTM)\n z_BTNM = einops.rearrange(z_PTM, ""(b n) t m -> b t n m"", n=N)\n x_BTNM = x_BTNM + z_BTNM\n # --- Feedforward ---\n z_BTNM = self.ffn_norm(x_BTNM)\n z_BTND = self.ffn_dense1(z_BTNM)\n z_BTND = jax.nn.gelu(z_BTND)\n z_BTNM = self.ffn_dense2(z_BTND)\n x_BTNM = x_BTNM + z_BTNM\n\n return x_BTNM\n\nclass Transformer(nnx.Module):\n """"""\n Dimension keys:\n B: batch size\n T: number of frames\n N: number of patches per frame\n I: number of input features\n M: model dimension\n D: FFN dimension\n O: number of output features\n F: number of frames in batch\n P: number of patch positions in batch\n """"""\n def __init__(\n self,\n input_dim: int,\n model_dim: int,\n ffn_dim: int,\n out_dim: int,\n num_blocks: int,\n num_heads: int,\n dropout: float,\n param_dtype: jnp.dtype,\n dtype: jnp.dtype,\n use_flash_attention: bool,\n decode: bool,\n rngs: nnx.Rngs,\n ):\n self.input_dim = input_dim\n self.model_dim = model_dim\n self.ffn_dim = ffn_dim\n self.out_dim = out_dim\n self.num_blocks = num_blocks\n self.num_heads = num_heads\n self.dropout = dropout\n self.param_dtype = param_dtype\n self.dtype = dtype\n self.use_flash_attention = use_flash_attention\n\n self.pos_enc = PositionalEncoding(self.model_dim)\n self.input_norm1 = nnx.LayerNorm(\n num_features=self.input_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n self.input_dense = nnx.Linear(\n in_features=self.input_dim,\n out_features=self.model_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n self.input_norm2 = nnx.LayerNorm(\n num_features=self.model_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n\n self.blocks: List[TransformerBlock] = []\n for _ in range(self.num_blocks):\n self.blocks.append(\n TransformerBlock(\n model_dim=self.model_dim,\n ffn_dim=self.ffn_dim,\n num_heads=self.num_heads,\n dropout=self.dropout,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n use_flash_attention=self.use_flash_attention,\n decode=decode,\n rngs=rngs,\n )\n )\n self.output_dense = 
nnx.Linear(\n in_features=self.model_dim,\n out_features=self.out_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n\n def __call__(self, x_BTNI: jax.Array, pos_index: Tuple[jax.Array, jax.Array] | None = None) -> jax.Array:\n x_BTNI = self.input_norm1(x_BTNI)\n x_BTNM = self.input_dense(x_BTNI)\n x_BTNM = self.input_norm2(x_BTNM)\n x_BTNM = self.pos_enc(x_BTNM)\n\n for block in self.blocks:\n x_BTNM = block(x_BTNM, pos_index)\n\n x_BTNV = self.output_dense(x_BTNM)\n return x_BTNV\n\ndef normalize(x: jax.Array) -> jax.Array:\n return x / (jnp.linalg.norm(x, ord=2, axis=-1, keepdims=True) + 1e-8)\n\n\nclass VectorQuantizer(nnx.Module):\n """"""\n Dimension keys:\n D: B * T * N\n K: number of latents\n L: latent dimension\n """"""\n def __init__(\n self, latent_dim: int, num_latents: int, dropout: float, rngs: nnx.Rngs\n ):\n self.latent_dim = latent_dim\n self.num_latents = num_latents\n self.dropout = dropout\n\n self.codebook = nnx.Param(\n normalize(\n nnx.initializers.lecun_uniform()(\n rngs.params(), (self.num_latents, self.latent_dim)\n )\n )\n )\n self.drop = nnx.Dropout(self.dropout, rngs=rngs)\n\n def __call__(\n self, x_DL: jax.Array, training: bool\n ) -> Tuple[jax.Array, jax.Array, jax.Array, jax.Array]:\n # --- Compute distances ---\n x_DL = normalize(x_DL)\n normalized_codebook_KL = normalize(self.codebook.value)\n distance_DK = -jnp.matmul(x_DL, normalized_codebook_KL.T)\n if training:\n distance_DK = self.drop(distance_DK)\n\n # --- Get indices and embeddings ---\n indices_D = jnp.argmin(distance_DK, axis=-1)\n z_DL = self.codebook[indices_D]\n\n # --- Straight through estimator ---\n z_q_DL = x_DL + jax.lax.stop_gradient(z_DL - x_DL)\n return z_q_DL, z_DL, x_DL, indices_D\n\n def get_codes(self, indices_E: jax.Array) -> jax.Array:\n return self.codebook[indices_E]\n\n\ndef _create_flash_attention_fn(use_flash_attention: bool, is_causal: bool) -> Callable:\n """"""\n Create an attention function that uses flash attention if enabled.\n\n flax.nnx.MultiHeadAttention provides tensors with shape (batch..., length, num_heads, head_dim),\n but jax.nn.dot_product_attention expects (batch, length, num_heads, head_dim). We reshape to\n ensure compatibility. cuDNN's flash attention additionally requires a sequence length that\n is a multiple of 4. We pad the sequence length to the nearest multiple of 4 and mask\n accordingly. Note that cuDNN requires the mask to be broadcast before calling the attention\n function due to strict shape checking.\n """"""\n\n def attention_fn(query_BTHD, key_BSHD, value_BSHD, bias=None, mask_B111=None, **kwargs):\n implementation = ""cudnn"" if use_flash_attention else None\n\n def _merge_batch_dims(x):\n return einops.rearrange(x, ""... l h k -> (...) 
l h k"")\n\n def _pad(x, pad_size):\n return jnp.pad(x, ((0, 0), (0, pad_size), (0, 0), (0, 0)))\n\n original_shape = query_BTHD.shape\n T = query_BTHD.shape[-3]\n S = key_BSHD.shape[-3]\n\n # Pad to nearest multiple of 4\n Q = ((T + 3) // 4) * 4\n pad_size_Q = Q - T\n K = ((S + 3) // 4) * 4\n pad_size_K = K - S\n\n query_BQHD = _pad(_merge_batch_dims(query_BTHD), pad_size_Q)\n key_BKHD = _pad(_merge_batch_dims(key_BSHD), pad_size_K)\n value_BKHD = _pad(_merge_batch_dims(value_BSHD), pad_size_K)\n B = query_BQHD.shape[0]\n\n attention_mask = jnp.ones((Q, K), dtype=jnp.bool_)\n attention_mask = attention_mask.at[Q:, :].set(False)\n attention_mask = attention_mask.at[:, K:].set(False)\n\n # Handle causal mask for cached decoder self-attention (from nnx.MultiHeadAttention)\n if mask_B111 is not None:\n # FIXME (f.srambical): Why do we need this?\n mask_B111 = _merge_batch_dims(mask_B111)\n # We need to broadcast T and S dimensions to target_seq_len since cudnn attention strictly checks the mask shape\n # https://github.com/jax-ml/jax/issues/28974\n # https://github.com/jax-ml/jax/blob/08c7677393672ccb85c10f1ed0bd506905c3c994/jax/_src/cudnn/fused_attention_stablehlo.py#L1830\n # https://github.com/jax-ml/jax/blob/08c7677393672ccb85c10f1ed0bd506905c3c994/jax/_src/cudnn/fused_attention_stablehlo.py#L337\n mask_B1TS = einops.repeat(mask_B111, ""... 1 1 -> ... t s"", t=Q, s=K)\n mask_B1TS = mask_B111.astype(jnp.bool)\n else:\n mask_11TS = attention_mask[jnp.newaxis, jnp.newaxis, :, :]\n mask_B1TS = jnp.broadcast_to(mask_11TS, (B, 1, Q, K))\n\n bias_4d = _merge_batch_dims(bias) if bias is not None else None\n\n # NOTE: jax.nn.dot_product_attention does not support dropout\n output_4d = jax.nn.dot_product_attention(\n query=query_BQHD,\n key=key_BKHD,\n value=value_BKHD,\n bias=bias_4d,\n mask=mask_B1TS,\n implementation=implementation,\n is_causal=is_causal,\n )\n return output_4d[..., :T, :, :].reshape(original_shape)\n\n return attention_fn\n",python,tab
|
3 |
+
2,156,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,0,"8:12:35 AM [info] Activating crowd-code\n8:12:35 AM [info] Recording started\n8:12:35 AM [info] Initializing git provider using file system watchers...\n",Log,tab
|
4 |
+
3,270,"extension-output-pdoom-org.crowd-code-#1-crowd-code",150,0,"8:12:35 AM [info] Git repository found\n8:12:35 AM [info] Git provider initialized successfully\n8:12:35 AM [info] Initial git state: [object Object]\n",Log,content
|
5 |
+
4,27027,"TERMINAL",0,0,"",,terminal_focus
|
6 |
+
5,27028,"utils/nn.py",0,0,"",python,tab
|
7 |
+
6,27772,"TERMINAL",0,0,"source /home/franz.srambical/jafar/.venv/bin/activate",,terminal_command
|
8 |
+
7,30804,"TERMINAL",0,0,"salloc --gpus=1 --ntasks-per-node=1 --cpus-per-task=1 --mem=100G",,terminal_command
|
9 |
+
8,30871,"TERMINAL",0,0,"]633;Csalloc: Granted job allocation 14895\r\n",,terminal_output
|
10 |
+
9,30978,"TERMINAL",0,0,"salloc: Waiting for resource configuration\r\n",,terminal_output
|
11 |
+
10,31985,"TERMINAL",0,0,"salloc: Nodes hai005 are ready for job\r\n",,terminal_output
|
12 |
+
11,32407,"TERMINAL",0,0,"Running inside SLURM, Job ID 14895.\r\n",,terminal_output
|
13 |
+
12,32483,"TERMINAL",0,0,"]0;franz.srambical@hai-login2:~/jafar[?2004h[[email protected]:~/jafar] $ ",,terminal_output
|
14 |
+
13,33111,"TERMINAL",0,0,"n",,terminal_output
|
15 |
+
14,33239,"TERMINAL",0,0,"s",,terminal_output
|
16 |
+
15,33338,"TERMINAL",0,0,"y",,terminal_output
|
17 |
+
16,33448,"TERMINAL",0,0,"s",,terminal_output
|
18 |
+
17,33607,"TERMINAL",0,0,"",,terminal_output
|
19 |
+
18,33984,"TERMINAL",0,0,"\r\nnsys nsys-ui \r\n[[email protected]:~/jafar] $ nsys",,terminal_output
|
20 |
+
19,34150,"TERMINAL",0,0,"\r\nnsys nsys-ui \r\n[[email protected]:~/jafar] $ nsys",,terminal_output
|
21 |
+
20,35164,"TERMINAL",0,0," ",,terminal_output
|
22 |
+
21,37061,"TERMINAL",0,0,"p",,terminal_output
|
23 |
+
22,37138,"TERMINAL",0,0,"r",,terminal_output
|
24 |
+
23,37229,"TERMINAL",0,0,"o",,terminal_output
|
25 |
+
24,37328,"TERMINAL",0,0,"f",,terminal_output
|
26 |
+
25,37429,"TERMINAL",0,0,"i",,terminal_output
|
27 |
+
26,37529,"TERMINAL",0,0,"l",,terminal_output
|
28 |
+
27,37652,"TERMINAL",0,0,"e",,terminal_output
|
29 |
+
28,37716,"TERMINAL",0,0," ",,terminal_output
|
30 |
+
29,37984,"TERMINAL",0,0,"-",,terminal_output
|
31 |
+
30,38182,"TERMINAL",0,0,"o",,terminal_output
|
32 |
+
31,38288,"TERMINAL",0,0," ",,terminal_output
|
33 |
+
32,40833,"TERMINAL",0,0,"m",,terminal_output
|
34 |
+
33,41751,"TERMINAL",0,0,"[K",,terminal_output
|
35 |
+
34,41984,"TERMINAL",0,0,"f",,terminal_output
|
36 |
+
35,42303,"TERMINAL",0,0,"[K",,terminal_output
|
37 |
+
36,42423,"TERMINAL",0,0,"t",,terminal_output
|
38 |
+
37,42500,"TERMINAL",0,0,"e",,terminal_output
|
39 |
+
38,42559,"TERMINAL",0,0,"s",,terminal_output
|
40 |
+
39,42623,"TERMINAL",0,0,"t",,terminal_output
|
41 |
+
40,42845,"TERMINAL",0,0,"_",,terminal_output
|
42 |
+
41,43219,"TERMINAL",0,0,"p",,terminal_output
|
43 |
+
42,43371,"TERMINAL",0,0,"r",,terminal_output
|
44 |
+
43,43445,"TERMINAL",0,0,"o",,terminal_output
|
45 |
+
44,43592,"TERMINAL",0,0,"f",,terminal_output
|
46 |
+
45,43670,"TERMINAL",0,0,"i",,terminal_output
|
47 |
+
46,43725,"TERMINAL",0,0,"l",,terminal_output
|
48 |
+
47,43811,"TERMINAL",0,0,"e",,terminal_output
|
49 |
+
48,44081,"TERMINAL",0,0," ",,terminal_output
|
50 |
+
49,45881,"TERMINAL",0,0,"-",,terminal_output
|
51 |
+
50,46075,"TERMINAL",0,0,"-f",,terminal_output
|
52 |
+
51,46200,"TERMINAL",0,0,"o",,terminal_output
|
53 |
+
52,46313,"TERMINAL",0,0,"r",,terminal_output
|
54 |
+
53,46568,"TERMINAL",0,0,"c",,terminal_output
|
55 |
+
54,46633,"TERMINAL",0,0,"e",,terminal_output
|
56 |
+
55,46935,"TERMINAL",0,0,"_",,terminal_output
|
57 |
+
56,47620,"TERMINAL",0,0,"[K",,terminal_output
|
58 |
+
57,48781,"TERMINAL",0,0,"-",,terminal_output
|
59 |
+
58,48978,"TERMINAL",0,0,"o",,terminal_output
|
60 |
+
59,49100,"TERMINAL",0,0,"v",,terminal_output
|
61 |
+
60,49293,"TERMINAL",0,0,"e",,terminal_output
|
62 |
+
61,49368,"TERMINAL",0,0,"r",,terminal_output
|
63 |
+
62,49569,"TERMINAL",0,0,"w",,terminal_output
|
64 |
+
63,49643,"TERMINAL",0,0,"r",,terminal_output
|
65 |
+
64,49754,"TERMINAL",0,0,"i",,terminal_output
|
66 |
+
65,49862,"TERMINAL",0,0,"t",,terminal_output
|
67 |
+
66,49947,"TERMINAL",0,0,"e",,terminal_output
|
68 |
+
67,51426,"TERMINAL",0,0," ",,terminal_output
|
69 |
+
68,51499,"TERMINAL",0,0,"t",,terminal_output
|
70 |
+
69,52077,"TERMINAL",0,0,"r",,terminal_output
|
71 |
+
70,52423,"TERMINAL",0,0,"y",,terminal_output
|
72 |
+
71,52541,"TERMINAL",0,0,"e",,terminal_output
|
73 |
+
72,52617,"TERMINAL",0,0," ",,terminal_output
|
74 |
+
73,53355,"TERMINAL",0,0,"[K",,terminal_output
|
75 |
+
74,53510,"TERMINAL",0,0,"[K",,terminal_output
|
76 |
+
75,53668,"TERMINAL",0,0,"[K",,terminal_output
|
77 |
+
76,54103,"TERMINAL",0,0,"u",,terminal_output
|
78 |
+
77,54185,"TERMINAL",0,0,"e",,terminal_output
|
79 |
+
78,54328,"TERMINAL",0,0," ",,terminal_output
|
80 |
+
79,54478,"TERMINAL",0,0,"-",,terminal_output
|
81 |
+
80,54649,"TERMINAL",0,0,"-",,terminal_output
|
82 |
+
81,56255,"TERMINAL",0,0,"t",,terminal_output
|
83 |
+
82,56397,"TERMINAL",0,0,"r",,terminal_output
|
84 |
+
83,56544,"TERMINAL",0,0,"a",,terminal_output
|
85 |
+
84,56661,"TERMINAL",0,0,"c",,terminal_output
|
86 |
+
85,56781,"TERMINAL",0,0,"e",,terminal_output
|
87 |
+
86,56938,"TERMINAL",0,0,"=",,terminal_output
|
88 |
+
87,57153,"TERMINAL",0,0,"c",,terminal_output
|
89 |
+
88,57312,"TERMINAL",0,0,"u",,terminal_output
|
90 |
+
89,57395,"TERMINAL",0,0,"d",,terminal_output
|
91 |
+
90,57446,"TERMINAL",0,0,"a",,terminal_output
|
92 |
+
91,57835,"TERMINAL",0,0,",",,terminal_output
|
93 |
+
92,59494,"TERMINAL",0,0,"n",,terminal_output
|
94 |
+
93,59551,"TERMINAL",0,0,"v",,terminal_output
|
95 |
+
94,59742,"TERMINAL",0,0,"t",,terminal_output
|
96 |
+
95,59932,"TERMINAL",0,0,"x",,terminal_output
|
97 |
+
96,61804,"TERMINAL",0,0," ",,terminal_output
|
98 |
+
97,63065,"TERMINAL",0,0,"b",,terminal_output
|
99 |
+
98,63194,"TERMINAL",0,0,"as",,terminal_output
|
100 |
+
99,63280,"TERMINAL",0,0,"h",,terminal_output
|
101 |
+
100,63429,"TERMINAL",0,0," e",,terminal_output
|
102 |
+
101,63600,"TERMINAL",0,0,"x",,terminal_output
|
103 |
+
102,63677,"TERMINAL",0,0,"p",,terminal_output
|
104 |
+
103,63794,"TERMINAL",0,0,"e",,terminal_output
|
105 |
+
104,63864,"TERMINAL",0,0,"ri",,terminal_output
|
106 |
+
105,63916,"TERMINAL",0,0,"m",,terminal_output
|
107 |
+
106,64042,"TERMINAL",0,0,"ents/",,terminal_output
|
108 |
+
107,65215,"TERMINAL",0,0,"t",,terminal_output
|
109 |
+
108,65607,"TERMINAL",0,0,"r",,terminal_output
|
110 |
+
109,65719,"TERMINAL",0,0,"ai",,terminal_output
|
111 |
+
110,65771,"TERMINAL",0,0,"n",,terminal_output
|
112 |
+
111,65935,"TERMINAL",0,0,"",,terminal_output
|
113 |
+
112,66603,"TERMINAL",0,0,"",,terminal_output
|
114 |
+
113,67080,"TERMINAL",0,0,"[K",,terminal_output
|
115 |
+
114,67235,"TERMINAL",0,0,"[K",,terminal_output
|
116 |
+
115,67378,"TERMINAL",0,0,"[K",,terminal_output
|
117 |
+
116,67520,"TERMINAL",0,0,"[K",,terminal_output
|
118 |
+
117,67649,"TERMINAL",0,0,"[K",,terminal_output
|
119 |
+
118,67836,"TERMINAL",0,0,"d",,terminal_output
|
120 |
+
119,67899,"TERMINAL",0,0,"y",,terminal_output
|
121 |
+
120,68013,"TERMINAL",0,0,"namics_grain_",,terminal_output
|
122 |
+
121,68464,"TERMINAL",0,0,"t",,terminal_output
|
123 |
+
122,68565,"TERMINAL",0,0,"ok",,terminal_output
|
124 |
+
123,68674,"TERMINAL",0,0,"_",,terminal_output
|
125 |
+
124,69397,"TERMINAL",0,0,"r",,terminal_output
|
126 |
+
125,69607,"TERMINAL",0,0,"estore.sh ",,terminal_output
|
127 |
+
126,70404,"TERMINAL",0,0,"\r\n[?2004l\r",,terminal_output
|
128 |
+
127,71800,"TERMINAL",0,0,"Collecting data...\r\n",,terminal_output
|
129 |
+
128,77965,"TERMINAL",0,0,"WARNING:2025-08-04 08:13:53,008:jax._src.distributed:127: JAX detected proxy variable(s) in the environment as distributed setup: QUADD_INJECTION_PROXY. On some systems, this may cause a hang of distributed.initialize and you may need to unset these ENV variable(s)\r\nWARNING:jax._src.distributed:JAX detected proxy variable(s) in the environment as distributed setup: QUADD_INJECTION_PROXY. On some systems, this may cause a hang of distributed.initialize and you may need to unset these ENV variable(s)\r\n",,terminal_output
|
130 |
+
129,78646,"TERMINAL",0,0,"Running on 1 devices.\r\n",,terminal_output
|
131 |
+
130,86522,"TERMINAL",0,0,"Counting all components: ['dynamics', 'lam', 'tokenizer']\r\nParameter counts:\r\n{'dynamics': 26555392, 'lam': 35115232, 'tokenizer': 33750256, 'total': 95420880}\r\n",,terminal_output
|
132 |
+
131,88929,"TERMINAL",0,0,"WARNING:absl:Metadata file does not exist: /home/franz.srambical/jafar/checkpoints/causal_dynamics_openai_grain_tok_restore/000290/_CHECKPOINT_METADATA\r\n",,terminal_output
|
133 |
+
132,89797,"TERMINAL",0,0,"/fast/home/franz.srambical/jafar/.venv/lib/python3.10/site-packages/orbax/checkpoint/_src/serialization/type_handlers.py:1256: UserWarning: Sharding info not provided when restoring. Populating sharding info from sharding file. Please note restoration time will be slightly increased due to reading from file. Note also that this option is unsafe when restoring on a different topology than the checkpoint was saved with.\r\n warnings.warn(\r\n",,terminal_output
|
134 |
+
133,96833,"TERMINAL",0,0,"Starting training from step 0...\r\n",,terminal_output
|
135 |
+
134,98035,"TERMINAL",0,0,"2025-08-04 08:14:13.078544: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:467] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\r\n",,terminal_output
|
136 |
+
135,98192,"TERMINAL",0,0,"WARNING: All log messages before absl::InitializeLog() is called are written to STDERR\r\nE0000 00:00:1754288053.237108 3090181 cuda_dnn.cc:8579] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\r\n",,terminal_output
|
137 |
+
136,98265,"TERMINAL",0,0,"E0000 00:00:1754288053.280784 3090181 cuda_blas.cc:1407] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\r\n",,terminal_output
|
138 |
+
137,98709,"TERMINAL",0,0,"W0000 00:00:1754288053.643365 3090181 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1754288053.643382 3090181 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1754288053.643385 3090181 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1754288053.643387 3090181 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\n",,terminal_output
|
139 |
+
138,104271,"TERMINAL",0,0,"2025-08-04 08:14:19.312081: E external/xla/xla/backends/profiler/gpu/cupti_error_manager.cc:213] cuptiSubscribe: error 39: CUPTI_ERROR_MULTIPLE_SUBSCRIBERS_NOT_SUPPORTED\r\n2025-08-04 08:14:19.312098: E external/xla/xla/backends/profiler/gpu/cupti_error_manager.cc:242] cuptiGetResultString: ignored due to a previous error.\r\nE0804 08:14:19.312101 3090181 cupti_tracer.cc:1204] function cupti_interface_->Subscribe( &subscriber_, (CUpti_CallbackFunc)ApiCallback, this)failed with error \r\n",,terminal_output
|
140 |
+
139,131807,"TERMINAL",0,0,"2025-08-04 08:14:46.853849: W external/xla/xla/service/gpu/autotuning/dot_search_space.cc:200] All configs were filtered out because none of them sufficiently match the hints. Maybe the hints set does not contain a good representative set of valid configs?Working around this by using the full hints set instead.\r\n2025-08-04 08:14:46.854633: W external/xla/xla/service/gpu/autotuning/dot_search_space.cc:200] All configs were filtered out because none of them sufficiently match the hints. Maybe the hints set does not contain a good representative set of valid configs?Working around this by using the full hints set instead.\r\n2025-08-04 08:14:46.855920: W external/xla/xla/service/gpu/autotuning/dot_search_space.cc:200] All configs were filtered out because none of them sufficiently match the hints. Maybe the hints set does not contain a good representative set of valid configs?Working around this by using the full hints set instead.\r\n2025-08-04 08:14:46.855946: W external/xla/xla/service/gpu/autotuning/dot_search_space.cc:200] All configs were filtered out because none of them sufficiently match the hints. Maybe the hints set does not contain a good representative set of valid configs?Working around this by using the full hints set instead.\r\n",,terminal_output
|
141 |
+
140,156766,"TERMINAL",0,0,"Step 0, loss: 16.796998977661133\r\n",,terminal_output
|
142 |
+
141,195251,"TERMINAL",0,0,"Step 1, loss: 1.9303642511367798\r\n",,terminal_output
|
143 |
+
142,196253,"TERMINAL",0,0,"Step 2, loss: 2.342648506164551\r\n",,terminal_output
|
144 |
+
143,197253,"TERMINAL",0,0,"Step 3, loss: 2.199798107147217\r\n",,terminal_output
|
145 |
+
144,198255,"TERMINAL",0,0,"Step 4, loss: 1.6089359521865845\r\nSaved checkpoint at step 5\r\n",,terminal_output
|
146 |
+
145,199252,"TERMINAL",0,0,"2025-08-04 08:15:53.706165: E external/xla/xla/backends/profiler/gpu/cupti_error_manager.cc:157] cuptiFinalize: ignored due to a previous error.\r\n2025-08-04 08:15:53.706187: E external/xla/xla/backends/profiler/gpu/cupti_error_manager.cc:242] cuptiGetResultString: ignored due to a previous error.\r\nE0804 08:15:53.706190 3090181 cupti_tracer.cc:1317] function cupti_interface_->Finalize()failed with error \r\n2025-08-04 08:15:53.707018: E external/xla/xla/backends/profiler/gpu/cupti_error_manager.cc:150] cuptiGetTimestamp: ignored due to a previous error.\r\n",,terminal_output
|
147 |
+
146,210246,"TERMINAL",0,0,"2025-08-04 08:16:04.208428: E external/xla/xla/backends/profiler/gpu/cupti_error_manager.cc:150] cuptiGetTimestamp: ignored due to a previous error.\r\n",,terminal_output
|
148 |
+
147,249252,"TERMINAL",0,0,"/home/franz.srambical/.local/share/uv/python/cpython-3.10.18-linux-x86_64-gnu/lib/python3.10/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 10 leaked shared_memory objects to clean up at shutdown\r\n warnings.warn('resource_tracker: There appear to be %d '\r\n",,terminal_output
|
149 |
+
148,253252,"TERMINAL",0,0,"Generating '/var/tmp/nsys-report-4202.qdstrm'\r\n",,terminal_output
|
150 |
+
149,255249,"TERMINAL",0,0,"\r[1/1] [0% ] test_profile.nsys-rep\r[1/1] [0% ] test_profile.nsys-rep\r[1/1] [10% ] test_profile.nsys-rep\r[1/1] [8% ] test_profile.nsys-rep\r[1/1] [7% ] test_profile.nsys-rep\r[1/1] [6% ] test_profile.nsys-rep\r[1/1] [12% ] test_profile.nsys-rep\r[1/1] [10% ] test_profile.nsys-rep\r[1/1] [9% ] test_profile.nsys-rep\r[1/1] [8% ] test_profile.nsys-rep\r[1/1] [7% ] test_profile.nsys-rep\r[1/1] [6% ] test_profile.nsys-rep\r[1/1] [8% ] test_profile.nsys-rep\r[1/1] [10% ] test_profile.nsys-rep\r[1/1] [11% ] test_profile.nsys-rep\r[1/1] [12% ] test_profile.nsys-rep\r[1/1] [13% ] test_profile.nsys-rep\r[1/1] [12% ] test_profile.nsys-rep\r[1/1] [13% ] test_profile.nsys-rep\r[1/1] [12% ] test_profile.nsys-rep\r[1/1] [13% ] test_profile.nsys-rep\r[1/1] [14% ] test_profile.nsys-rep\r[1/1] [13% ] test_profile.nsys-rep\r[1/1] [12% ] test_profile.nsys-rep\r[1/1] [13% ] test_profile.nsys-rep\r[1/1] [12% ] test_profile.nsys-rep\r[1/1] [11% ] test_profile.nsys-rep\r[1/1] [10% ] test_profile.nsys-rep\r[1/1] [9% ] test_profile.nsys-rep\r[1/1] [8% ] test_profile.nsys-rep\r[1/1] [7% ] test_profile.nsys-rep\r[1/1] [6% ] test_profile.nsys-rep\r[1/1] [7% ] test_profile.nsys-rep\r[1/1] [8% ] test_profile.nsys-rep\r[1/1] [9% ] test_profile.nsys-rep\r[1/1] [11% ] test_profile.nsys-rep\r[1/1] [13% ] test_profile.nsys-rep\r[1/1] [=15% ] test_profile.nsys-rep\r[1/1] [=17% ] test_profile.nsys-rep\r[1/1] [==19% ] test_profile.nsys-rep\r[1/1] [==21% ] test_profile.nsys-rep\r[1/1] [===22% ] test_profile.nsys-rep\r[1/1] [===23% ] test_profile.nsys-rep\r[1/1] [====25% ] test_profile.nsys-rep\r[1/1] [====27% ] test_profile.nsys-rep\r[1/1] [=====30% ] test_profile.nsys-rep\r[1/1] [=====32% ] test_profile.nsys-rep\r[1/1] [======34% ] test_profile.nsys-rep\r[1/1] [=======36% ] test_profile.nsys-rep\r[1/1] [=======37% ] test_profile.nsys-rep\r[1/1] [=======38% ] test_profile.nsys-rep\r[1/1] [=======39% ] test_profile.nsys-rep\r[1/1] [========40% ] test_profile.nsys-rep\r[1/1] [========41% ] test_profile.nsys-rep\r[1/1] [========42% ] test_profile.nsys-rep\r[1/1] [=========43% ] test_profile.nsys-rep\r[1/1] [=========44% ] test_profile.nsys-rep\r[1/1] [=========45% ] test_profile.nsys-rep\r[1/1] [==========47% ] test_profile.nsys-rep\r[1/1] [==========49% ] test_profile.nsys-rep\r[1/1] [===========51% ] test_profile.nsys-rep\r[1/1] [===========53% ] test_profile.nsys-rep\r[1/1] [============55% ] test_profile.nsys-rep\r[1/1] [============57% ] test_profile.nsys-rep\r[1/1] [=============59% ] test_profile.nsys-rep\r[1/1] [==============61% ] test_profile.nsys-rep\r[1/1] [==============63% ] test_profile.nsys-rep\r[1/1] [===============65% ] test_profile.nsys-rep\r[1/1] [================68% ] test_profile.nsys-rep\r[1/1] [================70% ] test_profile.nsys-rep\r[1/1] [=================72% ] test_profile.nsys-rep\r[1/1] [=================74% ] test_profile.nsys-rep\r[1/1] [==================76% ] test_profile.nsys-rep\r[1/1] [==================77% ] test_profile.nsys-rep\r[1/1] [===================82% ] test_profile.nsys-rep\r[1/1] [========================100%] test_profile.nsys-rep",,terminal_output
|
151 |
+
150,260255,"TERMINAL",0,0,"\r[1/1] [========================100%] test_profile.nsys-rep\r\nGenerated:\r\n /fast/home/franz.srambical/jafar/test_profile.nsys-rep\r\n]0;franz.srambical@hai-login2:~/jafar[?2004h[[email protected]:~/jafar] $ ",,terminal_output
|
152 |
+
151,341900,"TERMINAL",0,0,"\r[K[[email protected]:~/jafar] $ ",,terminal_output
|
153 |
+
152,955976,"TERMINAL",0,0,"n",,terminal_output
|
154 |
+
153,956091,"TERMINAL",0,0,"c",,terminal_output
|
155 |
+
154,956287,"TERMINAL",0,0,"u",,terminal_output
|
156 |
+
155,961697,"TERMINAL",0,0," ",,terminal_output
|
157 |
+
156,961792,"TERMINAL",0,0,"-",,terminal_output
|
158 |
+
157,961973,"TERMINAL",0,0,"o",,terminal_output
|
159 |
+
158,962062,"TERMINAL",0,0," ",,terminal_output
|
160 |
+
159,963985,"TERMINAL",0,0,"t",,terminal_output
|
161 |
+
160,964066,"TERMINAL",0,0,"e",,terminal_output
|
162 |
+
161,964152,"TERMINAL",0,0,"s",,terminal_output
|
163 |
+
162,964204,"TERMINAL",0,0,"t",,terminal_output
|
164 |
+
163,964485,"TERMINAL",0,0,"_",,terminal_output
|
165 |
+
164,964669,"TERMINAL",0,0,"p",,terminal_output
|
166 |
+
165,964751,"TERMINAL",0,0,"r",,terminal_output
|
167 |
+
166,964865,"TERMINAL",0,0,"o",,terminal_output
|
168 |
+
167,964979,"TERMINAL",0,0,"f",,terminal_output
|
169 |
+
168,965104,"TERMINAL",0,0,"i",,terminal_output
|
170 |
+
169,965164,"TERMINAL",0,0,"l",,terminal_output
|
171 |
+
170,965256,"TERMINAL",0,0,"e",,terminal_output
|
172 |
+
171,967201,"TERMINAL",0,0," ",,terminal_output
|
173 |
+
172,967345,"TERMINAL",0,0,"-",,terminal_output
|
174 |
+
173,967491,"TERMINAL",0,0,"-",,terminal_output
|
175 |
+
174,967546,"TERMINAL",0,0,"s",,terminal_output
|
176 |
+
175,967605,"TERMINAL",0,0,"e",,terminal_output
|
177 |
+
176,967696,"TERMINAL",0,0,"t",,terminal_output
|
178 |
+
177,967801,"TERMINAL",0,0," ",,terminal_output
|
179 |
+
178,968014,"TERMINAL",0,0,"f",,terminal_output
|
180 |
+
179,968095,"TERMINAL",0,0,"u",,terminal_output
|
181 |
+
180,968192,"TERMINAL",0,0,"l",,terminal_output
|
182 |
+
181,968385,"TERMINAL",0,0,"l",,terminal_output
|
183 |
+
182,968479,"TERMINAL",0,0," ",,terminal_output
|
184 |
+
183,970729,"TERMINAL",0,0,"bas",,terminal_output
|
185 |
+
184,970834,"TERMINAL",0,0,"h",,terminal_output
|
186 |
+
185,970969,"TERMINAL",0,0," e",,terminal_output
|
187 |
+
186,971152,"TERMINAL",0,0,"x",,terminal_output
|
188 |
+
187,971246,"TERMINAL",0,0,"p",,terminal_output
|
189 |
+
188,971509,"TERMINAL",0,0,"erim",,terminal_output
|
190 |
+
189,971611,"TERMINAL",0,0,"ents/",,terminal_output
|
191 |
+
190,972533,"TERMINAL",0,0,"t",,terminal_output
|
192 |
+
191,972669,"TERMINAL",0,0,"r",,terminal_output
|
193 |
+
192,972766,"TERMINAL",0,0,"ai",,terminal_output
|
194 |
+
193,972843,"TERMINAL",0,0,"n",,terminal_output
|
195 |
+
194,973130,"TERMINAL",0,0,"[K",,terminal_output
|
196 |
+
195,973284,"TERMINAL",0,0,"[K",,terminal_output
|
197 |
+
196,973415,"TERMINAL",0,0,"[K",,terminal_output
|
198 |
+
197,973572,"TERMINAL",0,0,"[K",,terminal_output
|
199 |
+
198,973721,"TERMINAL",0,0,"[K",,terminal_output
|
200 |
+
199,973900,"TERMINAL",0,0,"d",,terminal_output
|
201 |
+
200,973998,"TERMINAL",0,0,"y",,terminal_output
|
202 |
+
201,974129,"TERMINAL",0,0,"namics_grain_",,terminal_output
|
203 |
+
202,974856,"TERMINAL",0,0,"t",,terminal_output
|
204 |
+
203,975063,"TERMINAL",0,0,"ok_",,terminal_output
|
205 |
+
204,976053,"TERMINAL",0,0,"r",,terminal_output
|
206 |
+
205,976210,"TERMINAL",0,0,"estore.sh ",,terminal_output
|
207 |
+
206,976605,"TERMINAL",0,0,"\r\n[?2004l\rbash: ncu: command not found\r\n]0;franz.srambical@hai-login2:~/jafar[?2004h[[email protected]:~/jafar] $ ",,terminal_output
|
208 |
+
207,978326,"TERMINAL",0,0,"n",,terminal_output
|
209 |
+
208,978392,"TERMINAL",0,0,"c",,terminal_output
|
210 |
+
209,978620,"TERMINAL",0,0,"",,terminal_output
|
211 |
+
210,978862,"TERMINAL",0,0,"\r\nncclras ncdu ncurses6-config ncursesw6-config \r\n[[email protected]:~/jafar] $ nc",,terminal_output
|
212 |
+
211,979116,"TERMINAL",0,0,"u",,terminal_output
|
213 |
+
212,981520,"TERMINAL",0,0,"[K",,terminal_output
|
214 |
+
213,982234,"TERMINAL",0,0,"u",,terminal_output
|
215 |
+
214,982369,"TERMINAL",0,0,"rses",,terminal_output
|
216 |
+
215,982539,"TERMINAL",0,0,"",,terminal_output
|
217 |
+
216,983359,"TERMINAL",0,0,"[K",,terminal_output
|
218 |
+
217,992859,"TERMINAL",0,0,"m",,terminal_output
|
219 |
+
218,992923,"TERMINAL",0,0,"o",,terminal_output
|
220 |
+
219,993010,"TERMINAL",0,0,"d",,terminal_output
|
221 |
+
220,993130,"TERMINAL",0,0,"u",,terminal_output
|
222 |
+
221,993284,"TERMINAL",0,0,"l",,terminal_output
|
223 |
+
222,993401,"TERMINAL",0,0,"e",,terminal_output
|
224 |
+
223,993573,"TERMINAL",0,0,"",,terminal_output
|
225 |
+
224,993929,"TERMINAL",0,0,"\r\nmodule modulemd-validator \r\n[[email protected]:~/jafar] $ module",,terminal_output
|
226 |
+
225,995094,"TERMINAL",0,0," ",,terminal_output
|
227 |
+
226,995401,"TERMINAL",0,0,"",,terminal_output
|
228 |
+
227,995876,"TERMINAL",0,0,"l",,terminal_output
|
229 |
+
228,996199,"TERMINAL",0,0,"[K",,terminal_output
|
230 |
+
229,996597,"TERMINAL",0,0,"l",,terminal_output
|
231 |
+
230,996669,"TERMINAL",0,0,"i",,terminal_output
|
232 |
+
231,996885,"TERMINAL",0,0,"t",,terminal_output
|
233 |
+
232,997246,"TERMINAL",0,0,"[Ks",,terminal_output
|
234 |
+
233,997333,"TERMINAL",0,0,"t",,terminal_output
|
235 |
+
234,997504,"TERMINAL",0,0," ",,terminal_output
|
236 |
+
235,997813,"TERMINAL",0,0,"\r\n[?2004l\rNo modules loaded\r\n]0;franz.srambical@hai-login2:~/jafar[?2004h[[email protected]:~/jafar] $ ",,terminal_output
236,1000434,"TERMINAL",0,0,"module list ",,terminal_output
237,1001046,"TERMINAL",0,0,"[K",,terminal_output
238,1001216,"TERMINAL",0,0,"[K",,terminal_output
239,1001368,"TERMINAL",0,0,"[K",,terminal_output
240,1001521,"TERMINAL",0,0,"[K",,terminal_output
241,1001685,"TERMINAL",0,0,"[K",,terminal_output
242,1001849,"TERMINAL",0,0,"s",,terminal_output
243,1001964,"TERMINAL",0,0,"p",,terminal_output
244,1002024,"TERMINAL",0,0,"i",,terminal_output
245,1002120,"TERMINAL",0,0,"d",,terminal_output
246,1002330,"TERMINAL",0,0,"e",,terminal_output
247,1002404,"TERMINAL",0,0,"r",,terminal_output
248,1002594,"TERMINAL",0,0," ",,terminal_output
249,1004453,"TERMINAL",0,0,"n",,terminal_output
250,1004545,"TERMINAL",0,0,"s",,terminal_output
251,1004621,"TERMINAL",0,0,"i",,terminal_output
252,1004704,"TERMINAL",0,0,"g",,terminal_output
253,1004804,"TERMINAL",0,0,"h",,terminal_output
254,1004899,"TERMINAL",0,0,"t",,terminal_output
255,1004994,"TERMINAL",0,0,"\r\n[?2004l\r",,terminal_output
256,1006202,"TERMINAL",0,0,"\r\n----------------------------------------------------------------------------------------------------------------\r\n  insight: [1;34minsight/0.20.5[0m (E)\r\n----------------------------------------------------------------------------------------------------------------\r\n    This extension is provided by the following modules. To access the extension you must load one of the following \r\nmodules. Note that any module names in parentheses show the module location in the software hierarchy.\r\n\r\n\r\n       R-bundle-CRAN/2024.11-foss-2024a\r\n\r\n\r\nNames marked by a trailing (E) are extensions provided by another module.\r\n\r\n\r\n\r\n \r\n\r\n]0;franz.srambical@hai-login2:~/jafar[?2004h[[email protected]:~/jafar] $ ",,terminal_output
257,1021117,"TERMINAL",0,0,"mo",,terminal_output
258,1021320,"TERMINAL",0,0,"d",,terminal_output
259,1021376,"TERMINAL",0,0,"u",,terminal_output
260,1021517,"TERMINAL",0,0,"l",,terminal_output
261,1021569,"TERMINAL",0,0,"e",,terminal_output
262,1021691,"TERMINAL",0,0," ",,terminal_output
263,1021831,"TERMINAL",0,0,"l",,terminal_output
264,1021979,"TERMINAL",0,0,"o",,terminal_output
265,1022033,"TERMINAL",0,0,"a",,terminal_output
266,1022113,"TERMINAL",0,0,"d",,terminal_output
267,1022183,"TERMINAL",0,0," ",,terminal_output
268,1022297,"TERMINAL",0,0,"n",,terminal_output
269,1022373,"TERMINAL",0,0,"s",,terminal_output
270,1022465,"TERMINAL",0,0,"i",,terminal_output
271,1022552,"TERMINAL",0,0,"g",,terminal_output
272,1022649,"TERMINAL",0,0,"h",,terminal_output
273,1022777,"TERMINAL",0,0,"t",,terminal_output
274,1023018,"TERMINAL",0,0,"_",,terminal_output
275,1023253,"TERMINAL",0,0,"oc",,terminal_output
276,1023589,"TERMINAL",0,0,"[K",,terminal_output
277,1023704,"TERMINAL",0,0,"[K",,terminal_output
278,1023778,"TERMINAL",0,0,"c",,terminal_output
279,1023900,"TERMINAL",0,0,"o",,terminal_output
280,1024023,"TERMINAL",0,0,"m",,terminal_output
281,1024155,"TERMINAL",0,0,"p",,terminal_output
282,1024259,"TERMINAL",0,0,"u",,terminal_output
283,1024395,"TERMINAL",0,0,"te",,terminal_output
284,1024488,"TERMINAL",0,0,"\r\n[?2004l\r",,terminal_output
285,1025856,"TERMINAL",0,0,"[1;31mLmod has detected the following error: [0m The following module(s) are unknown: ""nsight_compute""\r\n\r\nPlease check the spelling or version number. Also try ""module spider ...""\r\nIt is also possible your cache file is out-of-date; it may help to try:\r\n  $ module --ignore_cache load ""nsight_compute""\r\n\r\nAlso make sure that all modulefiles written in TCL start with the string #%Module\r\n\r\n\r\n\r\n]0;franz.srambical@hai-login2:~/jafar[?2004h[[email protected]:~/jafar] $ ",,terminal_output
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-93adc08f-77de-486a-a0da-6bd1df62203b1753869084135-2025_07_30-11.51.32.679/source.csv
ADDED
The diff for this file is too large to render.
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-995920d4-3066-4bd1-985c-53b12cb9e83c1753010233944-2025_07_20-13.18.05.231/source.csv
ADDED
The diff for this file is too large to render.
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-afb08496-b4ce-4efc-aec5-cc21ff6731861752228993278-2025_07_11-12.16.54.20/source.csv
ADDED
@@ -0,0 +1,4 @@
Sequence,Time,File,RangeOffset,RangeLength,Text,Language,Type
1,2,"experiments/tokenizer_optimal_batch_size.sh",0,0,"#!/usr/bin/env bash\nsource .venv/bin/activate\n\ndata_dir=""$PWD/data_arrayrecord/dummy""\nckpt_dir=""$PWD/checkpoints/tokenizer_openai_grain_checkpointing""\n\nexport XLA_FLAGS=--xla_gpu_autotune_level=0\nsrun python train_tokenizer.py \\n    --batch_size 12 \\n    --ckpt_dir $ckpt_dir \\n    --num_steps 300000 \\n    --warmup_steps 10000 \\n    --seed 0 \\n    --min_lr=0.0000866 \\n    --max_lr=0.0000866 \\n    --data_dir $data_dir",shellscript,tab
2,1656,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,0,"12:16:54 PM [info] Activating crowd-code\n12:16:54 PM [info] Recording started\n12:16:54 PM [info] Initializing git provider using file system watchers...\n12:16:55 PM [info] Git repository found\n12:16:55 PM [info] Git provider initialized successfully\n12:16:55 PM [info] Initial git state: [object Object]\n",Log,tab
3,2482,"experiments/tokenizer_optimal_batch_size.sh",0,0,"",shellscript,tab
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-b00cd52f-686b-4cad-89ec-cf5dcdc287a11753702370531-2025_07_28-13.32.59.505/source.csv
ADDED
The diff for this file is too large to render.
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-ba7dbcd4-5c4f-42a1-b9e3-2228180506061751641251586-2025_07_04-17.01.31.588/source.csv
ADDED
The diff for this file is too large to render.
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-c51cb8ee-522a-4c00-a6d0-920adfdf29e71753118966830-2025_07_21-19.29.37.366/source.csv
ADDED
The diff for this file is too large to render.
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-c66923f3-2a2a-4b19-880b-c1a8bfe1bf981753195775955-2025_07_22-16.49.43.384/source.csv
ADDED
@@ -0,0 +1,79 @@
Sequence,Time,File,RangeOffset,RangeLength,Text,Language,Type
1,3,".venv/lib/python3.10/site-packages/flax/linen/attention.py",0,0,"# Copyright 2024 The Flax Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the ""License"");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an ""AS IS"" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n""""""Attention core modules for Flax.""""""\nfrom __future__ import annotations\n\nimport functools\nimport inspect\nimport warnings\nfrom typing import Any, overload\nfrom collections.abc import Callable\n\nimport jax\nimport jax.numpy as jnp\nfrom jax import lax, random\n\nfrom flax.linen import initializers\nfrom flax.linen.dtypes import promote_dtype\nfrom flax.linen.linear import (\n DenseGeneral,\n default_kernel_init,\n)\nfrom flax.linen.module import Module, compact, merge_param\nfrom flax.linen.normalization import LayerNorm\nfrom flax.typing import (\n Array,\n PRNGKey,\n Dtype,\n Shape as Shape,\n Initializer,\n PrecisionLike,\n DotGeneralT,\n)\n\n\ndef dot_product_attention_weights(\n query: Array,\n key: Array,\n bias: Array | None = None,\n mask: Array | None = None,\n broadcast_dropout: bool = True,\n dropout_rng: PRNGKey | None = None,\n dropout_rate: float = 0.0,\n deterministic: bool = False,\n dtype: Dtype | None = None,\n precision: PrecisionLike = None,\n module: Module | None = None,\n force_fp32_for_softmax: bool = False,\n einsum_dot_general: Callable[..., Array] | None = None,\n einsum: Callable[..., Array] | None = None,\n):\n """"""Computes dot-product attention weights given query and key.\n\n Used by :func:`dot_product_attention`, which is what you'll most likely use.\n But if you want access to the attention weights for introspection, then\n you can directly call this function and call einsum yourself.\n\n Args:\n query: queries for calculating attention with shape of ``[batch...,\n q_length, num_heads, qk_depth_per_head]``.\n key: keys for calculating attention with shape of ``[batch..., kv_length,\n num_heads, qk_depth_per_head]``.\n bias: bias for the attention weights. This should be broadcastable to the\n shape ``[batch..., num_heads, q_length, kv_length]``. This can be used for\n incorporating causal masks, padding masks, proximity bias, etc.\n mask: mask for the attention weights. This should be broadcastable to the\n shape ``[batch..., num_heads, q_length, kv_length]``. This can be used for\n incorporating causal masks. Attention weights are masked out if their\n corresponding mask value is ``False``.\n broadcast_dropout: bool: use a broadcasted dropout along batch dims.\n dropout_rng: JAX PRNGKey: to be used for dropout\n dropout_rate: dropout rate\n deterministic: bool, deterministic or not (to apply dropout)\n dtype: the dtype of the computation (default: infer from inputs and params)\n precision: numerical precision of the computation see ``jax.lax.Precision``\n for details.\n module: the Module that will sow the attention weights into the\n 'intermediates' collection. Remember to mark 'intermediates' as mutable\n via ``mutable=['intermediates']`` in order to have that collection\n returned. 
If ``module`` is None, the attention weights will not be sowed.\n force_fp32_for_softmax: bool, whether to force the softmax to be computed in\n fp32. This is useful for mixed-precision training where higher precision\n is desired for numerical stability.\n einsum_dot_general: the dot_general to use in einsum.\n einsum: If unspecified, default `jnp.einsum` will be used. This argument is\n mutually exclusive with `precision` and `einsum_dot_general`.\n\n Raises:\n ValueError: if both `precision`/`einsum_dot_general` and `einsum` are\n specified.\n\n Returns:\n Output of shape ``[batch..., num_heads, q_length, kv_length]``.\n """"""\n if (precision or einsum_dot_general) and einsum:\n raise ValueError(\n 'precision/einsum_dot_general and einsum are mutually exclusive. Please'\n ' specify only one of them.'\n )\n if not einsum:\n einsum = functools.partial(\n jnp.einsum,\n precision=precision,\n _dot_general=einsum_dot_general\n if einsum_dot_general\n else jax.lax.dot_general,\n )\n\n query, key = promote_dtype(query, key, dtype=dtype)\n dtype = query.dtype\n\n assert query.ndim == key.ndim, 'q, k must have same rank.'\n assert query.shape[:-3] == key.shape[:-3], 'q, k batch dims must match.'\n assert query.shape[-2] == key.shape[-2], 'q, k num_heads must match.'\n assert query.shape[-1] == key.shape[-1], 'q, k depths must match.'\n\n # calculate attention matrix\n depth = query.shape[-1]\n query = query / jnp.sqrt(depth).astype(dtype)\n # attn weight shape is (batch..., num_heads, q_length, kv_length)\n attn_weights = einsum('...qhd,...khd->...hqk', query, key)\n\n # apply attention bias: masking, dropout, proximity bias, etc.\n if bias is not None:\n attn_weights = attn_weights + bias\n # apply attention mask\n if mask is not None:\n big_neg = jnp.finfo(dtype).min\n attn_weights = jnp.where(mask, attn_weights, big_neg)\n\n # normalize the attention weights\n if force_fp32_for_softmax and dtype != jnp.float32:\n attn_weights = jax.nn.softmax(attn_weights.astype(jnp.float32))\n else:\n attn_weights = jax.nn.softmax(attn_weights).astype(dtype)\n\n if module:\n module.sow('intermediates', 'attention_weights', attn_weights)\n\n # apply attention dropout\n if not deterministic and dropout_rate > 0.0:\n keep_prob = 1.0 - dropout_rate\n if broadcast_dropout:\n # dropout is broadcast across the batch + head dimensions\n dropout_shape = tuple([1] * (key.ndim - 2)) + attn_weights.shape[-2:]\n keep = random.bernoulli(dropout_rng, keep_prob, dropout_shape) # type: ignore\n else:\n keep = random.bernoulli(dropout_rng, keep_prob, attn_weights.shape) # type: ignore\n multiplier = keep.astype(dtype) / jnp.asarray(keep_prob, dtype=dtype)\n attn_weights = attn_weights * multiplier\n\n return attn_weights\n\n\ndef dot_product_attention(\n query: Array,\n key: Array,\n value: Array,\n bias: Array | None = None,\n mask: Array | None = None,\n broadcast_dropout: bool = True,\n dropout_rng: PRNGKey | None = None,\n dropout_rate: float = 0.0,\n deterministic: bool = False,\n dtype: Dtype | None = None,\n precision: PrecisionLike = None,\n module: Module | None = None,\n force_fp32_for_softmax: bool = False,\n einsum_dot_general: Callable[..., Array] | None = None,\n qk_attn_weights_einsum: Callable[..., Array] | None = None,\n attn_weights_value_einsum: Callable[..., Array] | None = None,\n):\n """"""Computes dot-product attention given query, key, and value.\n\n This is the core function for applying attention based on\n https://arxiv.org/abs/1706.03762. 
It calculates the attention weights given\n query and key and combines the values using the attention weights.\n\n .. note::\n ``query``, ``key``, ``value`` needn't have any batch dimensions.\n\n Args:\n query: queries for calculating attention with shape of ``[batch...,\n q_length, num_heads, qk_depth_per_head]``.\n key: keys for calculating attention with shape of ``[batch..., kv_length,\n num_heads, qk_depth_per_head]``.\n value: values to be used in attention with shape of ``[batch..., kv_length,\n num_heads, v_depth_per_head]``.\n bias: bias for the attention weights. This should be broadcastable to the\n shape ``[batch..., num_heads, q_length, kv_length]``. This can be used for\n incorporating causal masks, padding masks, proximity bias, etc.\n mask: mask for the attention weights. This should be broadcastable to the\n shape ``[batch..., num_heads, q_length, kv_length]``. This can be used for\n incorporating causal masks. Attention weights are masked out if their\n corresponding mask value is ``False``.\n broadcast_dropout: bool: use a broadcasted dropout along batch dims.\n dropout_rng: JAX PRNGKey: to be used for dropout\n dropout_rate: dropout rate\n deterministic: bool, deterministic or not (to apply dropout)\n dtype: the dtype of the computation (default: infer from inputs)\n precision: numerical precision of the computation see ``jax.lax.Precision`\n for details.\n module: the Module that will sow the attention weights into the\n 'intermediates' collection. Remember to mark 'intermediates' as mutable\n via ``mutable=['intermediates']`` in order to have that collection\n returned. If ``module`` is None, the attention weights will not be sowed.\n force_fp32_for_softmax: bool, whether to force the softmax to be computed in\n fp32. This is useful for mixed-precision training where higher precision\n is desired for numerical stability.\n einsum_dot_general: the dot_general to use in `jnp.einsum`.\n qk_attn_weights_einsum: the einsum for computing the attention weights. When\n unspecified, the default `jnp.einsum` will be used. This argument is\n mutually exclusive with `precision` and `einsum_dot_general`.\n attn_weights_value_einsum: the einsum for computing the product of the\n attention weights and the values. When unspecified, the default\n `jnp.einsum` will be used. This argument is mutually exclusive with\n `precision` and `einsum_dot_general`.\n\n Returns:\n Output of shape ``[batch..., q_length, num_heads, v_depth_per_head]``.\n\n Raises:\n ValueError: if both `precision`/`einsum_dot_general` and\n `qk_attn_weights_einsum`/`attn_weights_value_einsum` are\n specified.\n """"""\n if (qk_attn_weights_einsum and not attn_weights_value_einsum) or (\n not qk_attn_weights_einsum and attn_weights_value_einsum\n ):\n raise ValueError(\n 'qk_attn_weights_einsum and attn_weights_value_einsum must be specified'\n ' together.'\n )\n if (precision or einsum_dot_general) and (\n qk_attn_weights_einsum or attn_weights_value_einsum\n ):\n raise ValueError(\n 'precision/einsum_dot_general and'\n ' qk_attn_weights_einsum/attn_weights_value_einsum are mutually'\n ' exclusive. 
Please specify only one of them.'\n )\n\n query, key, value = promote_dtype(query, key, value, dtype=dtype)\n dtype = query.dtype\n assert key.ndim == query.ndim == value.ndim, 'q, k, v must have same rank.'\n assert (\n query.shape[:-3] == key.shape[:-3] == value.shape[:-3]\n ), 'q, k, v batch dims must match.'\n assert (\n query.shape[-2] == key.shape[-2] == value.shape[-2]\n ), 'q, k, v num_heads must match.'\n assert key.shape[-3] == value.shape[-3], 'k, v lengths must match.'\n\n # compute attention weights\n attn_weights = dot_product_attention_weights(\n query,\n key,\n bias,\n mask,\n broadcast_dropout,\n dropout_rng,\n dropout_rate,\n deterministic,\n dtype,\n precision,\n module,\n force_fp32_for_softmax,\n einsum_dot_general=einsum_dot_general,\n einsum=qk_attn_weights_einsum,\n )\n if not attn_weights_value_einsum:\n attn_weights_value_einsum = functools.partial(\n jnp.einsum,\n precision=precision,\n _dot_general=einsum_dot_general\n if einsum_dot_general\n else jax.lax.dot_general,\n )\n # return weighted sum over values for each query position\n return attn_weights_value_einsum(\n '...hqk,...khd->...qhd',\n attn_weights,\n value,\n )\n\n\nclass MultiHeadDotProductAttention(Module):\n """"""Multi-head dot-product attention.\n\n Example usage::\n\n >>> import flax.linen as nn\n >>> import jax\n\n >>> layer = nn.MultiHeadDotProductAttention(num_heads=8, qkv_features=16)\n >>> key1, key2, key3, key4, key5, key6 = jax.random.split(jax.random.key(0), 6)\n >>> shape = (4, 3, 2, 5)\n >>> q, k, v = jax.random.uniform(key1, shape), jax.random.uniform(key2, shape), jax.random.uniform(key3, shape)\n >>> variables = layer.init(jax.random.key(0), q)\n\n >>> # different inputs for inputs_q, inputs_k and inputs_v\n >>> out = layer.apply(variables, q, k, v)\n >>> # equivalent to layer.apply(variables, inputs_q=q, inputs_k=k, inputs_v=k)\n >>> out = layer.apply(variables, q, k)\n >>> # equivalent to layer.apply(variables, inputs_q=q, inputs_k=q) and layer.apply(variables, inputs_q=q, inputs_k=q, inputs_v=q)\n >>> out = layer.apply(variables, q)\n\n >>> attention_kwargs = dict(\n ... num_heads=8,\n ... qkv_features=16,\n ... kernel_init=nn.initializers.ones,\n ... bias_init=nn.initializers.zeros,\n ... dropout_rate=0.5,\n ... deterministic=False,\n ... )\n >>> class Module(nn.Module):\n ... attention_kwargs: dict\n ...\n ... @nn.compact\n ... def __call__(self, x, dropout_rng=None):\n ... out1 = nn.MultiHeadDotProductAttention(**self.attention_kwargs)(x, dropout_rng=dropout_rng)\n ... out2 = nn.MultiHeadDotProductAttention(**self.attention_kwargs)(x, dropout_rng=dropout_rng)\n ... return out1, out2\n >>> module = Module(attention_kwargs)\n >>> variables = module.init({'params': key1, 'dropout': key2}, q)\n\n >>> # out1 and out2 are different.\n >>> out1, out2 = module.apply(variables, q, rngs={'dropout': key3})\n >>> # out3 and out4 are different.\n >>> # out1 and out3 are different. out2 and out4 are different.\n >>> out3, out4 = module.apply(variables, q, rngs={'dropout': key4})\n >>> # out1 and out2 are the same.\n >>> out1, out2 = module.apply(variables, q, dropout_rng=key5)\n >>> # out1 and out2 are the same as out3 and out4.\n >>> # providing a `dropout_rng` arg will take precedence over the `rngs` arg in `.apply`\n >>> out3, out4 = module.apply(variables, q, rngs={'dropout': key6}, dropout_rng=key5)\n\n Attributes:\n num_heads: Number of attention heads. Features (i.e. 
inputs_q.shape[-1])\n should be divisible by the number of heads.\n dtype: The dtype of the computation (default: infer from inputs and params)\n param_dtype: The dtype passed to parameter initializers (default: float32)\n qkv_features: Dimension of the key, query, and value.\n out_features: Dimension of the last projection\n broadcast_dropout: Use a broadcasted dropout along batch dims.\n dropout_rate: Dropout rate.\n deterministic: If False, the attention weight is masked randomly using\n dropout, whereas if True, the attention weights are deterministic.\n precision: Numerical precision of the computation see ``jax.lax.Precision``\n for details.\n kernel_init: Initializer for the kernel of the Dense layers.\n out_kernel_init: Optional Initializer for the kernel of the output Dense layer,\n if None, ``kernel_init`` will be used.\n bias_init: Initializer for the bias of the Dense layers.\n out_bias_init: Optional Initializer for the bias of the output Dense layer,\n if None, ``bias_init`` will be used.\n use_bias: Whether pointwise QKVO dense transforms use bias.\n attention_fn: dot_product_attention or compatible function. Accepts query,\n key, value, and returns output of shape ``[bs, dim1, dim2, ..., dimN,,\n num_heads, value_channels]``\n decode: Whether to prepare and use an autoregressive cache.\n normalize_qk: Should QK normalization be applied (arxiv.org/abs/2302.05442).\n qk_attn_weights_einsum_cls: factory function to create the einsum for\n computing the attention weights.\n attn_weights_value_einsum_cls: factory function to create the einsum for\n computing the product of the attention weights and the values.\n """"""\n\n num_heads: int\n dtype: Dtype | None = None\n param_dtype: Dtype = jnp.float32\n qkv_features: int | None = None\n out_features: int | None = None\n broadcast_dropout: bool = True\n dropout_rate: float = 0.0\n deterministic: bool | None = None\n precision: PrecisionLike = None\n kernel_init: Initializer = default_kernel_init\n out_kernel_init: Initializer | None = None\n bias_init: Initializer = initializers.zeros_init()\n out_bias_init: Initializer | None = None\n use_bias: bool = True\n attention_fn: Callable[..., Array] = dot_product_attention\n decode: bool = False\n normalize_qk: bool = False\n force_fp32_for_softmax: bool = False\n # Deprecated, will be removed.\n qkv_dot_general: DotGeneralT | None = None\n out_dot_general: DotGeneralT | None = None\n qkv_dot_general_cls: Any = None\n out_dot_general_cls: Any = None\n qk_attn_weights_einsum_cls: Callable[..., Callable[..., Array]] | None = None\n attn_weights_value_einsum_cls: Callable[..., Callable[..., Array]] | None = (\n None\n )\n\n @overload\n def __call__(\n self,\n inputs_q: Array,\n inputs_k: Array | None = None,\n inputs_v: Array | None = None,\n *,\n mask: Array | None = None,\n deterministic: bool | None = None,\n dropout_rng: PRNGKey | None = None,\n sow_weights: bool = False,\n ):\n ...\n\n @overload\n def __call__(\n self,\n inputs_q: Array,\n *,\n inputs_kv: Array | None = None,\n mask: Array | None = None,\n deterministic: bool | None = None,\n dropout_rng: PRNGKey | None = None,\n sow_weights: bool = False,\n ):\n ...\n\n @compact\n def __call__(\n self,\n inputs_q: Array,\n inputs_k: Array | None = None,\n inputs_v: Array | None = None,\n *,\n inputs_kv: Array | None = None,\n mask: Array | None = None,\n deterministic: bool | None = None,\n dropout_rng: PRNGKey | None = None,\n sow_weights: bool = False,\n ):\n """"""Applies multi-head dot product attention on the input data.\n\n 
Projects the inputs into multi-headed query, key, and value vectors,\n applies dot-product attention and project the results to an output vector.\n\n If both inputs_k and inputs_v are None, they will both copy the value of\n inputs_q (self attention).\n If only inputs_v is None, it will copy the value of inputs_k.\n\n Args:\n inputs_q: input queries of shape ``[batch_sizes..., length, features]``.\n inputs_k: key of shape ``[batch_sizes..., length, features]``. If None,\n inputs_k will copy the value of inputs_q.\n inputs_v: values of shape ``[batch_sizes..., length, features]``. If None,\n inputs_v will copy the value of inputs_k.\n inputs_kv: key/values of shape ``[batch_sizes..., length, features]``. If\n None, inputs_kv will copy the value of inputs_q. This arg will be\n deprecated soon. Use inputs_k and inputs_v instead.\n mask: attention mask of shape ``[batch_sizes..., num_heads, query_length,\n key/value_length]``. Attention weights are masked out if their\n corresponding mask value is ``False``.\n deterministic: if false, the attention weight is masked randomly using\n dropout, whereas if true, the attention weights are deterministic.\n dropout_rng: optional rng key to pass to the attention layer's dropout\n mask. Otherwise, self.make_rng('dropout') is used instead.\n sow_weights: if ``True``, the attention weights are sowed into the\n 'intermediates' collection. Remember to mark 'intermediates' as\n mutable via ``mutable=['intermediates']`` in order to have that\n collection returned.\n\n Returns:\n output of shape ``[batch_sizes..., length, features]``.\n """"""\n if inputs_kv is not None:\n if inputs_k is not None or inputs_v is not None:\n raise ValueError(\n 'If either `inputs_k` or `inputs_v` is not None, '\n '`inputs_kv` must be None. If `inputs_kv` is not None, both `inputs_k` '\n 'and `inputs_v` must be None. We recommend using `inputs_k` and '\n '`inputs_v` args, since `inputs_kv` will be deprecated soon. See '\n 'https://github.com/google/flax/discussions/3389 for more '\n 'information.'\n )\n inputs_k = inputs_v = inputs_kv\n warnings.warn(\n 'The inputs_kv arg will be deprecated soon. '\n 'Use inputs_k and inputs_v instead. See '\n 'https://github.com/google/flax/discussions/3389 '\n 'for more information.',\n DeprecationWarning,\n )\n else:\n if inputs_k is None:\n if inputs_v is not None:\n raise ValueError(\n '`inputs_k` cannot be None if `inputs_v` is not None. '\n 'To have both `inputs_k` and `inputs_v` be the same value, pass in the '\n 'value to `inputs_k` and leave `inputs_v` as None.'\n )\n inputs_k = inputs_q\n if inputs_v is None:\n inputs_v = inputs_k\n elif inputs_v.shape[-1] == inputs_v.shape[-2]:\n warnings.warn(\n f'You are passing an array of shape {inputs_v.shape} '\n 'to the `inputs_v` arg, when you may have intended '\n 'to pass it to the `mask` arg. As of Flax version '\n '0.7.4, the function signature of '\n ""MultiHeadDotProductAttention's `__call__` method ""\n 'has changed to `__call__(inputs_q, inputs_k=None, '\n 'inputs_v=None, *, inputs_kv=None, mask=None, '\n 'deterministic=None)`. Use the kwarg `mask` instead. 
'\n 'See https://github.com/google/flax/discussions/3389 '\n 'and read the docstring for more information.',\n DeprecationWarning,\n )\n\n features = self.out_features or inputs_q.shape[-1]\n qkv_features = self.qkv_features or inputs_q.shape[-1]\n assert qkv_features % self.num_heads == 0, (\n f'Memory dimension ({qkv_features}) must be divisible by number of'\n f' heads ({self.num_heads}).'\n )\n head_dim = qkv_features // self.num_heads\n\n dense = functools.partial(\n DenseGeneral,\n axis=-1,\n dtype=self.dtype,\n param_dtype=self.param_dtype,\n features=(self.num_heads, head_dim),\n kernel_init=self.kernel_init,\n bias_init=self.bias_init,\n use_bias=self.use_bias,\n precision=self.precision,\n dot_general=self.qkv_dot_general,\n dot_general_cls=self.qkv_dot_general_cls,\n )\n # project inputs_q to multi-headed q/k/v\n # dimensions are then [batch..., length, n_heads, n_features_per_head]\n query, key, value = (\n dense(name='query')(inputs_q),\n dense(name='key')(inputs_k),\n dense(name='value')(inputs_v),\n )\n\n if self.normalize_qk:\n # Normalizing query and key projections stabilizes training with higher\n # LR. See ViT-22B paper http://arxiv.org/abs/2302.05442 for analysis.\n query = LayerNorm(\n name='query_ln',\n use_bias=False,\n dtype=self.dtype,\n param_dtype=self.param_dtype,\n )(query) # type: ignore[call-arg]\n key = LayerNorm(\n name='key_ln',\n use_bias=False,\n dtype=self.dtype,\n param_dtype=self.param_dtype,\n )(key) # type: ignore[call-arg]\n\n # During fast autoregressive decoding, we feed one position at a time,\n # and cache the keys and values step by step.\n if self.decode:\n # detect if we're initializing by absence of existing cache data.\n is_initialized = self.has_variable('cache', 'cached_key')\n cached_key = self.variable(\n 'cache', 'cached_key', jnp.zeros, key.shape, key.dtype\n )\n cached_value = self.variable(\n 'cache', 'cached_value', jnp.zeros, value.shape, value.dtype\n )\n cache_index = self.variable(\n 'cache', 'cache_index', lambda: jnp.array(0, dtype=jnp.int32)\n )\n if is_initialized:\n (\n *batch_dims,\n max_length,\n num_heads,\n depth_per_head,\n ) = cached_key.value.shape\n # shape check of cached keys against query input\n expected_shape = tuple(batch_dims) + (1, num_heads, depth_per_head)\n if expected_shape != query.shape:\n raise ValueError(\n 'Autoregressive cache shape error, '\n 'expected query shape %s instead got %s.'\n % (expected_shape, query.shape)\n )\n # update key, value caches with our new 1d spatial slices\n cur_index = cache_index.value\n zero = jnp.array(0, dtype=lax.dtype(cur_index.dtype))\n indices: tuple[int | jax.Array, ...] 
= (zero,) * len(\n batch_dims\n ) + (\n cur_index,\n zero,\n zero,\n )\n key = lax.dynamic_update_slice(cached_key.value, key, indices)\n value = lax.dynamic_update_slice(cached_value.value, value, indices)\n cached_key.value = key\n cached_value.value = value\n cache_index.value = cache_index.value + 1\n # causal mask for cached decoder self-attention:\n # our single query position should only attend to those key\n # positions that have already been generated and cached,\n # not the remaining zero elements.\n mask = combine_masks(\n mask,\n jnp.broadcast_to(\n jnp.arange(max_length) <= cur_index,\n tuple(batch_dims) + (1, 1, max_length),\n ),\n )\n\n if (\n self.dropout_rate > 0.0\n ): # Require `deterministic` only if using dropout.\n m_deterministic = merge_param(\n 'deterministic', self.deterministic, deterministic\n )\n if not m_deterministic and dropout_rng is None:\n dropout_rng = self.make_rng('dropout')\n else:\n m_deterministic = True\n\n # `qk_attn_weights_einsum` and `attn_weights_value_einsum` are optional\n # arguments that can be used to override the default `jnp.einsum`. They\n # exist for quantized einsum support in AQT.\n qk_attn_weights_einsum = (\n self.qk_attn_weights_einsum_cls()\n if self.qk_attn_weights_einsum_cls\n else None\n )\n attn_weights_value_einsum = (\n self.attn_weights_value_einsum_cls()\n if self.attn_weights_value_einsum_cls\n else None\n )\n # apply attention\n attn_args = (query, key, value)\n # This kwargs list match the default nn.dot_product_attention.\n # For custom `attention_fn`s, invalid kwargs will be filtered.\n attn_kwargs = dict(\n mask=mask,\n dropout_rng=dropout_rng,\n dropout_rate=self.dropout_rate,\n broadcast_dropout=self.broadcast_dropout,\n deterministic=m_deterministic,\n dtype=self.dtype,\n precision=self.precision,\n force_fp32_for_softmax=self.force_fp32_for_softmax,\n qk_attn_weights_einsum=qk_attn_weights_einsum,\n attn_weights_value_einsum=attn_weights_value_einsum,\n )\n attn_kwargs = {\n k: v\n for k, v in attn_kwargs.items()\n if k in inspect.signature(self.attention_fn).parameters\n }\n if sow_weights:\n x = self.attention_fn(*attn_args, **attn_kwargs, module=self)\n else:\n x = self.attention_fn(*attn_args, **attn_kwargs)\n # back to the original inputs dimensions\n out = DenseGeneral(\n features=features,\n axis=(-2, -1),\n kernel_init=self.out_kernel_init or self.kernel_init,\n bias_init=self.out_bias_init or self.bias_init,\n use_bias=self.use_bias,\n dtype=self.dtype,\n param_dtype=self.param_dtype,\n precision=self.precision,\n dot_general=self.out_dot_general,\n dot_general_cls=self.out_dot_general_cls,\n name='out', # type: ignore[call-arg]\n )(x)\n return out\n\n\nclass MultiHeadAttention(MultiHeadDotProductAttention):\n """"""Multi-head dot-product attention.\n Alias for ``MultiHeadDotProductAttention``.\n\n **NOTE**: ``MultiHeadAttention`` is a wrapper of ``MultiHeadDotProductAttention``,\n and so their implementations are identical. However ``MultiHeadAttention`` layers\n will, by default, be named ``MultiHeadAttention_{index}``, whereas ``MultiHeadDotProductAttention``\n will be named ``MultiHeadDotProductAttention_{index}``. 
Therefore, this could affect\n checkpointing, param collection names and RNG threading (since the layer name is\n used when generating new RNG's) within the module.\n\n Example usage::\n\n >>> import flax.linen as nn\n >>> import jax\n\n >>> layer = nn.MultiHeadAttention(num_heads=8, qkv_features=16)\n >>> key1, key2, key3, key4, key5, key6 = jax.random.split(jax.random.key(0), 6)\n >>> shape = (4, 3, 2, 5)\n >>> q, k, v = jax.random.uniform(key1, shape), jax.random.uniform(key2, shape), jax.random.uniform(key3, shape)\n >>> variables = layer.init(jax.random.key(0), q)\n\n >>> # different inputs for inputs_q, inputs_k and inputs_v\n >>> out = layer.apply(variables, q, k, v)\n >>> # equivalent to layer.apply(variables, inputs_q=q, inputs_k=k, inputs_v=k)\n >>> out = layer.apply(variables, q, k)\n >>> # equivalent to layer.apply(variables, inputs_q=q, inputs_k=q) and layer.apply(variables, inputs_q=q, inputs_k=q, inputs_v=q)\n >>> out = layer.apply(variables, q)\n\n >>> attention_kwargs = dict(\n ... num_heads=8,\n ... qkv_features=16,\n ... kernel_init=nn.initializers.ones,\n ... bias_init=nn.initializers.zeros,\n ... dropout_rate=0.5,\n ... deterministic=False,\n ... )\n >>> class Module(nn.Module):\n ... attention_kwargs: dict\n ...\n ... @nn.compact\n ... def __call__(self, x, dropout_rng=None):\n ... out1 = nn.MultiHeadAttention(**self.attention_kwargs)(x, dropout_rng=dropout_rng)\n ... out2 = nn.MultiHeadAttention(**self.attention_kwargs)(x, dropout_rng=dropout_rng)\n ... return out1, out2\n >>> module = Module(attention_kwargs)\n >>> variables = module.init({'params': key1, 'dropout': key2}, q)\n\n >>> # out1 and out2 are different.\n >>> out1, out2 = module.apply(variables, q, rngs={'dropout': key3})\n >>> # out3 and out4 are different.\n >>> # out1 and out3 are different. out2 and out4 are different.\n >>> out3, out4 = module.apply(variables, q, rngs={'dropout': key4})\n >>> # out1 and out2 are the same.\n >>> out1, out2 = module.apply(variables, q, dropout_rng=key5)\n >>> # out1 and out2 are the same as out3 and out4.\n >>> # providing a `dropout_rng` arg will take precedence over the `rngs` arg in `.apply`\n >>> out3, out4 = module.apply(variables, q, rngs={'dropout': key6}, dropout_rng=key5)\n\n Attributes:\n num_heads: number of attention heads. Features (i.e. inputs_q.shape[-1])\n should be divisible by the number of heads.\n dtype: the dtype of the computation (default: infer from inputs and params)\n param_dtype: the dtype passed to parameter initializers (default: float32)\n qkv_features: dimension of the key, query, and value.\n out_features: dimension of the last projection\n broadcast_dropout: bool: use a broadcasted dropout along batch dims.\n dropout_rate: dropout rate\n deterministic: if false, the attention weight is masked randomly using\n dropout, whereas if true, the attention weights are deterministic.\n precision: numerical precision of the computation see ``jax.lax.Precision``\n for details.\n kernel_init: initializer for the kernel of the Dense layers.\n bias_init: initializer for the bias of the Dense layers.\n use_bias: bool: whether pointwise QKVO dense transforms use bias.\n attention_fn: dot_product_attention or compatible function. 
Accepts query,\n key, value, and returns output of shape ``[bs, dim1, dim2, ..., dimN,,\n num_heads, value_channels]``\n decode: whether to prepare and use an autoregressive cache.\n normalize_qk: should QK normalization be applied (arxiv.org/abs/2302.05442).\n """"""\n\n\nclass SelfAttention(MultiHeadDotProductAttention):\n """"""Self-attention special case of multi-head dot-product attention.\n This layer is deprecated in favor of ``MultiHeadDotProductAttention``.\n\n Example usage::\n >>> import flax.linen as nn\n >>> import jax, jax.numpy as jnp\n >>> layer = nn.MultiHeadDotProductAttention(num_heads=8, qkv_features=16)\n >>> variables = layer.init(jax.random.key(0), jnp.ones((4, 3, 2, 5)))\n """"""\n\n @compact\n def __call__( # type: ignore\n self,\n inputs_q: Array,\n mask: Array | None = None,\n deterministic: bool | None = None,\n dropout_rng: PRNGKey | None = None,\n sow_weights: bool = False,\n ):\n """"""Applies multi-head dot product self-attention on the input data.\n\n Projects the inputs into multi-headed query, key, and value vectors,\n applies dot-product attention and project the results to an output vector.\n\n Args:\n inputs_q: input queries of shape ``[batch_sizes..., length, features]``.\n mask: attention mask of shape ``[batch_sizes..., num_heads, query_length,\n key/value_length]``. Attention weights are masked out if their\n corresponding mask value is ``False``.\n deterministic: if false, the attention weight is masked randomly using\n dropout, whereas if true, the attention weights are deterministic.\n\n Returns:\n output of shape ``[batch_sizes..., length, features]``.\n """"""\n warnings.warn(\n 'SelfAttention will be deprecated soon. Use '\n '`MultiHeadDotProductAttention.__call__(inputs_q)` instead. '\n 'See https://github.com/google/flax/discussions/3389 '\n 'for more information.',\n DeprecationWarning,\n )\n return super().__call__(\n inputs_q,\n mask=mask,\n deterministic=deterministic,\n dropout_rng=dropout_rng,\n sow_weights=sow_weights,\n )\n\n\n# mask-making utility functions\n\n\ndef make_attention_mask(\n query_input: Array,\n key_input: Array,\n pairwise_fn: Callable[..., Any] = jnp.multiply,\n extra_batch_dims: int = 0,\n dtype: Dtype = jnp.float32,\n):\n """"""Mask-making helper for attention weights.\n\n In case of 1d inputs (i.e., ``[batch..., len_q]``, ``[batch..., len_kv]``, the\n attention weights will be ``[batch..., heads, len_q, len_kv]`` and this\n function will produce ``[batch..., 1, len_q, len_kv]``.\n\n Args:\n query_input: a batched, flat input of query_length size\n key_input: a batched, flat input of key_length size\n pairwise_fn: broadcasting elementwise comparison function\n extra_batch_dims: number of extra batch dims to add singleton axes for, none\n by default\n dtype: mask return dtype\n\n Returns:\n A ``[batch..., 1, len_q, len_kv]`` shaped mask for 1d attention.\n """"""\n mask = pairwise_fn(\n jnp.expand_dims(query_input, axis=-1), jnp.expand_dims(key_input, axis=-2)\n )\n mask = jnp.expand_dims(mask, axis=-3)\n mask = jnp.expand_dims(mask, axis=tuple(range(extra_batch_dims)))\n return mask.astype(dtype)\n\n\ndef make_causal_mask(\n x: Array, extra_batch_dims: int = 0, dtype: Dtype = jnp.float32\n) -> Array:\n """"""Make a causal mask for self-attention.\n\n In case of 1d inputs (i.e., ``[batch..., len]``, the self-attention weights\n will be ``[batch..., heads, len, len]`` and this function will produce a\n causal mask of shape ``[batch..., 1, len, len]``.\n\n Args:\n x: input array of shape ``[batch..., len]``\n 
extra_batch_dims: number of batch dims to add singleton axes for, none by\n default\n dtype: mask return dtype\n\n Returns:\n A ``[batch..., 1, len, len]`` shaped causal mask for 1d attention.\n """"""\n idxs = jnp.broadcast_to(jnp.arange(x.shape[-1], dtype=jnp.int32), x.shape)\n return make_attention_mask(\n idxs,\n idxs,\n jnp.greater_equal,\n extra_batch_dims=extra_batch_dims,\n dtype=dtype,\n )\n\n\ndef combine_masks(\n *masks: Array | None, dtype: Dtype = jnp.float32\n) -> Array | None:\n """"""Combine attention masks.\n\n Args:\n *masks: set of attention mask arguments to combine, some can be None.\n dtype: dtype for the returned mask.\n\n Returns:\n Combined mask, reduced by logical and, returns None if no masks given.\n """"""\n masks_list = [m for m in masks if m is not None]\n if not masks_list:\n return None\n assert all(\n map(lambda x: x.ndim == masks_list[0].ndim, masks_list)\n ), f'masks must have same rank: {tuple(map(lambda x: x.ndim, masks_list))}'\n mask, *other_masks = masks_list\n for other_mask in other_masks:\n mask = jnp.logical_and(mask, other_mask)\n return mask.astype(dtype)\n",python,tab
2,162,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,0,"4:49:43 PM [info] Activating crowd-code\n4:49:43 PM [info] Recording started\n4:49:43 PM [info] Initializing git provider using file system watchers...\n",Log,tab
3,204,"extension-output-pdoom-org.crowd-code-#1-crowd-code",150,0,"4:49:43 PM [info] Git repository found\n4:49:43 PM [info] Git provider initialized successfully\n4:49:43 PM [info] Initial git state: [object Object]\n",Log,content
4,972,".venv/lib/python3.10/site-packages/flax/linen/attention.py",0,0,"",python,tab
5,40295,".venv/lib/python3.10/site-packages/flax/linen/attention.py",13921,0,"",python,selection_command
6,40716,".venv/lib/python3.10/site-packages/flax/linen/attention.py",16444,0,"",python,selection_command
7,42300,".venv/lib/python3.10/site-packages/flax/linen/attention.py",16399,0,"",python,selection_command
8,42558,".venv/lib/python3.10/site-packages/flax/linen/attention.py",16366,0,"",python,selection_command
9,42580,".venv/lib/python3.10/site-packages/flax/linen/attention.py",16327,0,"",python,selection_command
10,42613,".venv/lib/python3.10/site-packages/flax/linen/attention.py",16298,0,"",python,selection_command
11,42645,".venv/lib/python3.10/site-packages/flax/linen/attention.py",16275,0,"",python,selection_command
12,42673,".venv/lib/python3.10/site-packages/flax/linen/attention.py",16214,0,"",python,selection_command
13,42711,".venv/lib/python3.10/site-packages/flax/linen/attention.py",16190,0,"",python,selection_command
14,42743,".venv/lib/python3.10/site-packages/flax/linen/attention.py",16147,0,"",python,selection_command
15,42777,".venv/lib/python3.10/site-packages/flax/linen/attention.py",16094,0,"",python,selection_command
16,42810,".venv/lib/python3.10/site-packages/flax/linen/attention.py",16049,0,"",python,selection_command
17,42844,".venv/lib/python3.10/site-packages/flax/linen/attention.py",16000,0,"",python,selection_command
18,42878,".venv/lib/python3.10/site-packages/flax/linen/attention.py",15966,0,"",python,selection_command
19,42911,".venv/lib/python3.10/site-packages/flax/linen/attention.py",15930,0,"",python,selection_command
20,42945,".venv/lib/python3.10/site-packages/flax/linen/attention.py",15902,0,"",python,selection_command
21,42977,".venv/lib/python3.10/site-packages/flax/linen/attention.py",15869,0,"",python,selection_command
22,43011,".venv/lib/python3.10/site-packages/flax/linen/attention.py",15835,0,"",python,selection_command
23,43044,".venv/lib/python3.10/site-packages/flax/linen/attention.py",15801,0,"",python,selection_command
24,43077,".venv/lib/python3.10/site-packages/flax/linen/attention.py",15766,0,"",python,selection_command
25,43110,".venv/lib/python3.10/site-packages/flax/linen/attention.py",15737,0,"",python,selection_command
26,43143,".venv/lib/python3.10/site-packages/flax/linen/attention.py",15720,0,"",python,selection_command
27,43177,".venv/lib/python3.10/site-packages/flax/linen/attention.py",15717,0,"",python,selection_command
28,43229,".venv/lib/python3.10/site-packages/flax/linen/attention.py",15713,0,"",python,selection_command
29,43246,".venv/lib/python3.10/site-packages/flax/linen/attention.py",15644,0,"",python,selection_command
30,43278,".venv/lib/python3.10/site-packages/flax/linen/attention.py",15567,0,"",python,selection_command
31,43311,".venv/lib/python3.10/site-packages/flax/linen/attention.py",15528,0,"",python,selection_command
32,43344,".venv/lib/python3.10/site-packages/flax/linen/attention.py",15454,0,"",python,selection_command
33,43378,".venv/lib/python3.10/site-packages/flax/linen/attention.py",15528,0,"",python,selection_command
34,43635,".venv/lib/python3.10/site-packages/flax/linen/attention.py",15567,0,"",python,selection_command
35,43664,".venv/lib/python3.10/site-packages/flax/linen/attention.py",15644,0,"",python,selection_command
36,43694,".venv/lib/python3.10/site-packages/flax/linen/attention.py",15713,0,"",python,selection_command
37,43727,".venv/lib/python3.10/site-packages/flax/linen/attention.py",15717,0,"",python,selection_command
38,43761,".venv/lib/python3.10/site-packages/flax/linen/attention.py",15720,0,"",python,selection_command
39,43793,".venv/lib/python3.10/site-packages/flax/linen/attention.py",15737,0,"",python,selection_command
40,43827,".venv/lib/python3.10/site-packages/flax/linen/attention.py",15766,0,"",python,selection_command
41,43861,".venv/lib/python3.10/site-packages/flax/linen/attention.py",15801,0,"",python,selection_command
42,43895,".venv/lib/python3.10/site-packages/flax/linen/attention.py",15835,0,"",python,selection_command
43,43930,".venv/lib/python3.10/site-packages/flax/linen/attention.py",15869,0,"",python,selection_command
44,43962,".venv/lib/python3.10/site-packages/flax/linen/attention.py",15902,0,"",python,selection_command
45,43996,".venv/lib/python3.10/site-packages/flax/linen/attention.py",15930,0,"",python,selection_command
46,44030,".venv/lib/python3.10/site-packages/flax/linen/attention.py",15966,0,"",python,selection_command
47,44066,".venv/lib/python3.10/site-packages/flax/linen/attention.py",16000,0,"",python,selection_command
48,44782,".venv/lib/python3.10/site-packages/flax/linen/attention.py",16049,0,"",python,selection_command
49,45035,".venv/lib/python3.10/site-packages/flax/linen/attention.py",16094,0,"",python,selection_command
50,45063,".venv/lib/python3.10/site-packages/flax/linen/attention.py",16147,0,"",python,selection_command
51,45095,".venv/lib/python3.10/site-packages/flax/linen/attention.py",16190,0,"",python,selection_command
52,45210,".venv/lib/python3.10/site-packages/flax/linen/attention.py",16214,0,"",python,selection_command
53,45384,".venv/lib/python3.10/site-packages/flax/linen/attention.py",16275,0,"",python,selection_command
54,45643,".venv/lib/python3.10/site-packages/flax/linen/attention.py",16298,0,"",python,selection_command
55,45666,".venv/lib/python3.10/site-packages/flax/linen/attention.py",16327,0,"",python,selection_command
56,45695,".venv/lib/python3.10/site-packages/flax/linen/attention.py",16366,0,"",python,selection_command
57,45732,".venv/lib/python3.10/site-packages/flax/linen/attention.py",16399,0,"",python,selection_command
58,45761,".venv/lib/python3.10/site-packages/flax/linen/attention.py",16444,0,"",python,selection_command
59,45795,".venv/lib/python3.10/site-packages/flax/linen/attention.py",16489,0,"",python,selection_command
60,45934,".venv/lib/python3.10/site-packages/flax/linen/attention.py",16523,0,"",python,selection_command
61,46084,".venv/lib/python3.10/site-packages/flax/linen/attention.py",16557,0,"",python,selection_command
62,50566,".venv/lib/python3.10/site-packages/flax/linen/attention.py",16523,0,"",python,selection_command
63,50816,".venv/lib/python3.10/site-packages/flax/linen/attention.py",16489,0,"",python,selection_command
64,50854,".venv/lib/python3.10/site-packages/flax/linen/attention.py",16444,0,"",python,selection_command
65,50871,".venv/lib/python3.10/site-packages/flax/linen/attention.py",16399,0,"",python,selection_command
66,50900,".venv/lib/python3.10/site-packages/flax/linen/attention.py",16366,0,"",python,selection_command
67,50936,".venv/lib/python3.10/site-packages/flax/linen/attention.py",16327,0,"",python,selection_command
68,51045,".venv/lib/python3.10/site-packages/flax/linen/attention.py",16298,0,"",python,selection_command
69,51198,".venv/lib/python3.10/site-packages/flax/linen/attention.py",16275,0,"",python,selection_command
70,51334,".venv/lib/python3.10/site-packages/flax/linen/attention.py",16214,0,"",python,selection_command
71,51454,".venv/lib/python3.10/site-packages/flax/linen/attention.py",16190,0,"",python,selection_command
72,51622,".venv/lib/python3.10/site-packages/flax/linen/attention.py",16147,0,"",python,selection_command
73,51714,".venv/lib/python3.10/site-packages/flax/linen/attention.py",16190,0,"",python,selection_command
74,51898,".venv/lib/python3.10/site-packages/flax/linen/attention.py",16214,0,"",python,selection_command
75,52063,".venv/lib/python3.10/site-packages/flax/linen/attention.py",16275,0,"",python,selection_command
76,53668,".venv/lib/python3.10/site-packages/flax/linen/attention.py",22967,0,"",python,selection_command
77,55696,".venv/lib/python3.10/site-packages/flax/linen/attention.py",22987,0,"",python,selection_command
78,55812,".venv/lib/python3.10/site-packages/flax/linen/attention.py",23059,0,"",python,selection_command
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-c877f8c1-c8e0-4a7b-8720-40bb4df915221754138206219-2025_08_02-14.36.55.608/source.csv
ADDED
The diff for this file is too large to render.
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-e6f472f8-41ac-4a94-8d9a-7becc51fed651753430069410-2025_07_25-09.54.49.121/source.csv
ADDED
The diff for this file is too large to render.
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-f0382786-979c-4a6d-8e9b-f5977f18eb4f1753726151187-2025_07_28-20.09.13.67/source.csv
ADDED
The diff for this file is too large to render.
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-f508ed97-76c1-4935-95ed-d4393099e6361753128212083-2025_07_21-22.03.39.166/source.csv
ADDED
The diff for this file is too large to render.
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-f818bac9-3228-48bb-85cd-ad930fdb35d91752220838711-2025_07_11-10.00.40.248/source.csv
ADDED
The diff for this file is too large to render.
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-faba6583-b2c9-4b94-9ba6-9f240428520a1750722089894-2025_06_23-23.49.28.299/source.csv
ADDED
@@ -0,0 +1,151 @@
Sequence,Time,File,RangeOffset,RangeLength,Text,Language,Type90,608191,"src/extension.ts",9984,0,"A",typescript,content
|
2 |
+
91,608196,"src/extension.ts",9985,0,"",typescript,selection_keyboard
|
3 |
+
92,608218,"src/extension.ts",9985,0,"P",typescript,content
|
4 |
+
93,608220,"src/extension.ts",9986,0,"",typescript,selection_keyboard
|
5 |
+
94,608231,"src/extension.ts",9986,0,"I",typescript,content
|
6 |
+
95,608233,"src/extension.ts",9987,0,"",typescript,selection_keyboard
|
7 |
+
96,608439,"src/extension.ts",9987,0,"=",typescript,content
|
8 |
+
97,608442,"src/extension.ts",9988,0,"",typescript,selection_keyboard
|
9 |
+
98,609356,"src/extension.ts",9988,0,"f",typescript,content
|
10 |
+
99,609363,"src/extension.ts",9989,0,"",typescript,selection_keyboard
|
11 |
+
100,609425,"src/extension.ts",9989,0,"o",typescript,content
|
12 |
+
101,609429,"src/extension.ts",9990,0,"",typescript,selection_keyboard
|
13 |
+
102,609547,"src/extension.ts",9990,0,"o",typescript,content
|
14 |
+
103,609551,"src/extension.ts",9991,0,"",typescript,selection_keyboard
|
15 |
+
104,610051,"src/extension.ts",9990,0,"",typescript,selection_command
|
16 |
+
105,610641,"src/extension.ts",9984,7,"",typescript,content
|
17 |
+
106,610907,"src/extension.ts",9984,0,"A",typescript,content
|
18 |
+
107,610908,"src/extension.ts",9985,0,"",typescript,selection_keyboard
|
19 |
+
108,610957,"src/extension.ts",9985,0,"P",typescript,content
|
20 |
+
109,610960,"src/extension.ts",9986,0,"",typescript,selection_keyboard
|
21 |
+
110,610983,"src/extension.ts",9986,0,"I",typescript,content
|
22 |
+
111,610986,"src/extension.ts",9987,0,"",typescript,selection_keyboard
|
23 |
+
112,611972,"src/extension.ts",9986,0,"",typescript,selection_command
|
24 |
+
113,612398,"src/extension.ts",9984,0,"",typescript,selection_command
|
25 |
+
114,613099,"src/extension.ts",9984,3,"",typescript,content
|
26 |
+
115,613505,"src/extension.ts",9984,0,"A",typescript,content
|
27 |
+
116,613506,"src/extension.ts",9985,0,"",typescript,selection_keyboard
|
28 |
+
117,613558,"src/extension.ts",9985,0,"P",typescript,content
|
29 |
+
118,613562,"src/extension.ts",9986,0,"",typescript,selection_keyboard
|
30 |
+
119,613614,"src/extension.ts",9986,0,"I",typescript,content
|
31 |
+
120,613618,"src/extension.ts",9987,0,"",typescript,selection_keyboard
|
32 |
+
121,613842,"src/extension.ts",9987,0,"=",typescript,content
|
33 |
+
122,613844,"src/extension.ts",9988,0,"",typescript,selection_keyboard
|
34 |
+
123,614226,"src/extension.ts",9988,0,"f",typescript,content
|
35 |
+
124,614230,"src/extension.ts",9989,0,"",typescript,selection_keyboard
|
36 |
+
125,614281,"src/extension.ts",9989,0,"o",typescript,content
|
37 |
+
126,614414,"src/extension.ts",9990,0,"o",typescript,content
|
38 |
+
127,614418,"src/extension.ts",9991,0,"",typescript,selection_keyboard
|
39 |
+
128,614676,"src/extension.ts",9990,0,"",typescript,selection_command
|
40 |
+
129,615398,"src/extension.ts",9984,7,"",typescript,content
|
41 |
+
130,632977,"src/extension.ts",9984,0,"\n",typescript,content160,689584,"src/extension.ts",9984,0,"\n",typescript,content
|
42 |
+
161,689722,"src/extension.ts",9985,0,"\n",typescript,content
|
43 |
+
162,689854,"src/extension.ts",9986,0,"\n",typescript,content
|
44 |
+
163,689985,"src/extension.ts",9987,0,"\n",typescript,content
|
45 |
+
164,690120,"src/extension.ts",9988,0,"\n",typescript,content
|
46 |
+
165,690244,"src/extension.ts",9989,0,"\n",typescript,content195,720758,"src/extension.ts",9990,1,"",typescript,content
|
47 |
+
196,720908,"src/extension.ts",9989,1,"",typescript,content
|
48 |
+
197,721051,"src/extension.ts",9988,1,"",typescript,content
|
49 |
+
198,721198,"src/extension.ts",9987,1,"",typescript,content
|
50 |
+
199,721347,"src/extension.ts",9986,1,"",typescript,content
|
51 |
+
200,721495,"src/extension.ts",9985,1,"",typescript,content
|
52 |
+
201,721627,"src/extension.ts",9984,1,"",typescript,content
|
53 |
+
202,721775,"src/extension.ts",9984,0,"\n",typescript,content
|
54 |
+
203,721906,"src/extension.ts",9985,0,"\n",typescript,content
|
55 |
+
204,722150,"src/extension.ts",9986,0,"\n",typescript,content
|
56 |
+
205,722190,"src/extension.ts",9987,0,"\n",typescript,content
|
57 |
+
206,722219,"src/extension.ts",9988,0,"\n",typescript,content
|
58 |
+
207,722259,"src/extension.ts",9989,0,"\n",typescript,content
|
59 |
+
208,722283,"src/extension.ts",9990,0,"\n",typescript,content
|
60 |
+
209,722320,"src/extension.ts",9991,0,"\n",typescript,content
|
61 |
+
210,722350,"src/extension.ts",9992,0,"\n",typescript,content
|
62 |
+
211,722387,"src/extension.ts",9993,0,"\n",typescript,content
|
63 |
+
212,722421,"src/extension.ts",9994,0,"\n",typescript,content
|
64 |
+
213,722454,"src/extension.ts",9995,0,"\n",typescript,content
|
65 |
+
214,722489,"src/extension.ts",9996,0,"\n",typescript,content
|
66 |
+
215,722520,"src/extension.ts",9997,0,"\n",typescript,content
|
67 |
+
216,722557,"src/extension.ts",9998,0,"\n",typescript,content
|
68 |
+
217,722587,"src/extension.ts",9999,0,"\n",typescript,content
|
69 |
+
218,722622,"src/extension.ts",10000,0,"\n",typescript,content
|
70 |
+
219,722654,"src/extension.ts",10001,0,"\n",typescript,content
|
71 |
+
220,722688,"src/extension.ts",10002,0,"\n",typescript,content
|
72 |
+
221,722800,"src/extension.ts",10002,1,"",typescript,content
|
73 |
+
222,723054,"src/extension.ts",10001,1,"",typescript,content
|
74 |
+
223,723084,"src/extension.ts",10000,1,"",typescript,content
|
75 |
+
224,723119,"src/extension.ts",9999,1,"",typescript,content254,744309,"src/extension.ts",9990,1,"",typescript,content
|
76 |
+
255,744454,"src/extension.ts",9989,1,"",typescript,content
|
77 |
+
256,744592,"src/extension.ts",9988,1,"",typescript,content
|
78 |
+
257,744729,"src/extension.ts",9987,1,"",typescript,content
|
79 |
+
258,744874,"src/extension.ts",9986,1,"",typescript,content
|
80 |
+
259,745025,"src/extension.ts",9985,1,"",typescript,content
|
81 |
+
260,745342,"src/extension.ts",9984,1,"",typescript,content
|
82 |
+
261,745611,"src/extension.ts",9984,0,"\n",typescript,content
|
83 |
+
262,745787,"src/extension.ts",9985,0,"\n",typescript,content
|
84 |
+
263,745943,"src/extension.ts",9986,0,"\n",typescript,content
|
85 |
+
264,746077,"src/extension.ts",9987,0,"\n",typescript,content
|
86 |
+
265,746223,"src/extension.ts",9988,0,"\n",typescript,content
|
87 |
+
266,746357,"src/extension.ts",9989,0,"\n",typescript,content
|
88 |
+
267,746479,"src/extension.ts",9990,0,"\n",typescript,content
|
89 |
+
268,746689,"src/extension.ts",9990,1,"",typescript,content
|
90 |
+
269,746852,"src/extension.ts",9989,1,"",typescript,content299,1440083,"src/extension.ts",9984,0,"\n",typescript,content
|
91 |
+
300,1440491,"src/extension.ts",9985,0,"\n",typescript,content
|
92 |
+
301,1440637,"src/extension.ts",9986,0,"\n",typescript,content
|
93 |
+
302,1440791,"src/extension.ts",9987,0,"\n",typescript,content
|
94 |
+
303,1440934,"src/extension.ts",9988,0,"\n",typescript,content
|
95 |
+
304,1441072,"src/extension.ts",9989,0,"\n",typescript,content
|
96 |
+
305,1441279,"src/extension.ts",9989,1,"",typescript,content
|
97 |
+
306,1441454,"src/extension.ts",9988,1,"",typescript,content
|
98 |
+
307,1441595,"src/extension.ts",9987,1,"",typescript,content
|
99 |
+
308,1441758,"src/extension.ts",9986,1,"",typescript,content
|
100 |
+
309,1441888,"src/extension.ts",9985,1,"",typescript,content
|
101 |
+
310,1442037,"src/extension.ts",9984,1,"",typescript,content
|
102 |
+
311,1474476,"src/extension.ts",9984,0,"\n",typescript,content341,1511212,"src/extension.ts",9983,0,"\n",typescript,content
|
103 |
+
342,1511454,"src/extension.ts",9984,0,"\n",typescript,content
|
104 |
+
343,1511587,"src/extension.ts",9985,0,"\n",typescript,content
|
105 |
+
344,1511731,"src/extension.ts",9986,0,"\n",typescript,content
|
106 |
+
345,1511864,"src/extension.ts",9987,0,"\n",typescript,content
|
107 |
+
346,1511996,"src/extension.ts",9988,0,"\n",typescript,content
|
108 |
+
347,1512115,"src/extension.ts",9989,0,"\n",typescript,content
|
109 |
+
348,1512251,"src/extension.ts",9990,0,"\n",typescript,content
|
110 |
+
349,1512384,"src/extension.ts",9991,0,"\n",typescript,content379,1535334,"src/extension.ts",9983,0,"\n",typescript,content
|
111 |
+
380,1535574,"src/extension.ts",9984,0,"\n",typescript,content
|
112 |
+
381,1535825,"src/extension.ts",9985,0,"\n",typescript,content
|
113 |
+
382,1535861,"src/extension.ts",9986,0,"\n",typescript,content
|
114 |
+
383,1535890,"src/extension.ts",9987,0,"\n",typescript,content
|
115 |
+
384,1535926,"src/extension.ts",9988,0,"\n",typescript,content414,1662008,"src/extension.ts",9983,0,"\n",typescript,content
|
116 |
+
415,1662468,"src/extension.ts",9984,0,"\n",typescript,content
|
117 |
+
416,1662714,"src/extension.ts",9985,0,"\n",typescript,content
|
118 |
+
417,1662748,"src/extension.ts",9986,0,"\n",typescript,content
|
119 |
+
418,1662778,"src/extension.ts",9987,0,"\n",typescript,content
|
120 |
+
419,1662813,"src/extension.ts",9988,0,"\n",typescript,content
|
121 |
+
420,1662845,"src/extension.ts",9989,0,"\n",typescript,content
|
122 |
+
421,1662880,"src/extension.ts",9990,0,"\n",typescript,content
|
123 |
+
422,1662913,"src/extension.ts",9991,0,"\n",typescript,content
|
124 |
+
423,1662949,"src/extension.ts",9992,0,"\n",typescript,content
|
125 |
+
424,1663179,"src/extension.ts",9993,0,"\n",typescript,content
|
126 |
+
425,1663429,"src/extension.ts",9993,1,"",typescript,content
|
127 |
+
426,1663685,"src/extension.ts",9992,1,"",typescript,content
|
128 |
+
427,1663709,"src/extension.ts",9991,1,"",typescript,content
|
129 |
+
428,1663747,"src/extension.ts",9990,1,"",typescript,content
|
130 |
+
429,1663775,"src/extension.ts",9989,1,"",typescript,content
|
131 |
+
430,1663813,"src/extension.ts",9988,1,"",typescript,content
|
132 |
+
431,1663842,"src/extension.ts",9987,1,"",typescript,content
|
133 |
+
432,1664057,"src/extension.ts",9986,1,"",typescript,content
|
134 |
+
433,1664229,"src/extension.ts",9985,1,"",typescript,content
|
135 |
+
434,1664513,"src/extension.ts",9984,1,"",typescript,content
|
136 |
+
435,1674303,"src/extension.ts",9984,0,"K",typescript,content
|
137 |
+
436,1674306,"src/extension.ts",9985,0,"",typescript,selection_keyboard
|
138 |
+
437,1674445,"src/extension.ts",9985,0,"E",typescript,content
|
139 |
+
438,1674448,"src/extension.ts",9986,0,"",typescript,selection_keyboard
|
140 |
+
439,1675238,"src/extension.ts",9986,0,"Y",typescript,content
|
141 |
+
440,1675248,"src/extension.ts",9987,0,"",typescript,selection_keyboard
|
142 |
+
441,1675504,"src/extension.ts",9987,0,"=",typescript,content
|
143 |
+
442,1675506,"src/extension.ts",9988,0,"",typescript,selection_keyboard
|
144 |
+
443,1675690,"src/extension.ts",9988,0,"f",typescript,content
|
145 |
+
444,1675691,"src/extension.ts",9989,0,"",typescript,selection_keyboard
|
146 |
+
445,1675750,"src/extension.ts",9989,0,"o",typescript,content
|
147 |
+
446,1675754,"src/extension.ts",9990,0,"",typescript,selection_keyboard
|
148 |
+
447,1675900,"src/extension.ts",9990,0,"o",typescript,content
|
149 |
+
448,1675904,"src/extension.ts",9991,0,"",typescript,selection_keyboard
|
150 |
+
449,1676387,"src/extension.ts",9990,0,"",typescript,selection_command
|
151 |
+
450,1676937,"src/extension.ts",9983,8,"",typescript,content
|
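For readers skimming the rows above: each source.csv follows the header Sequence,Time,File,RangeOffset,RangeLength,Text,Language,Type, and the recorded recording.ts splits rows on commas that fall outside double-quoted fields. The sketch below mirrors that split; Row and parseRow are illustrative names rather than part of the extension's API, and the Text field may still contain escaped sequences such as \n (the source unescapes them via unescapeString, whose implementation is not shown in this diff).

interface Row {
  sequence: number
  time: number // milliseconds since recording start
  file: string
  rangeOffset: number
  rangeLength: number
  text: string
  language: string
  type: string // e.g. 'content', 'tab', 'selection_keyboard', 'terminal_command'
}

function parseRow(line: string): Row | null {
  // Split on commas outside double-quoted fields (same regex as processCSVLine).
  const cols = line.split(/,(?=(?:[^"]*"[^"]*")*[^"]*$)/)
  if (cols.length < 8 || Number.isNaN(Number.parseInt(cols[0]))) {
    return null // header row or malformed line
  }
  const unquote = (s: string) => s.replace(/^"|"$/g, '')
  return {
    sequence: Number.parseInt(cols[0]),
    time: Number.parseInt(cols[1]),
    file: unquote(cols[2]),
    rangeOffset: Number.parseInt(cols[3]),
    rangeLength: Number.parseInt(cols[4]),
    text: unquote(cols[5]),
    language: cols[6],
    type: cols[7],
  }
}

// Example: '450,1676937,"src/extension.ts",9983,8,"",typescript,content'
// decodes as "delete 8 characters at offset 9983" (empty replacement text).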
69a563db57051868fc3ecdda3a43f162385be48f5447fe691a10177ee4dc3a0e/crowd-code-03e2d1f0-34f2-48bc-9e6a-99708362c3301750977820647-2025_06_27-00.43.44.850/source.csv
ADDED
@@ -0,0 +1,50 @@
|
1 |
+
Sequence,Time,File,RangeOffset,RangeLength,Text,Language,Type
|
2 |
+
2,184,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,0,"12:43:44 AM [info] Activating crowd-code\n12:43:44 AM [info] Recording started\n12:43:44 AM [info] Initializing git provider using file system watchers...\n12:43:44 AM [info] No workspace folder found\n",Log,tab
|
3 |
+
3,2017,"extension-output-pdoom-org.crowd-code-#1-crowd-code",198,0,"12:43:46 AM [info] Retrying git provider initialization...\n12:43:46 AM [info] No workspace folder found\n",Log,content
|
4 |
+
4,60656,"extension-output-pdoom-org.crowd-code-#1-crowd-code",287,0,"",Log,selection_mouse
|
5 |
+
5,62003,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,0,"",Log,selection_mouse
|
6 |
+
6,62155,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,2,"12",Log,selection_mouse
|
7 |
+
7,62315,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,41,"12:43:44 AM [info] Activating crowd-code\n",Log,selection_mouse
|
8 |
+
8,62349,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,80,"12:43:44 AM [info] Activating crowd-code\n12:43:44 AM [info] Recording started\n12",Log,selection_mouse
|
9 |
+
9,62381,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,155,"12:43:44 AM [info] Activating crowd-code\n12:43:44 AM [info] Recording started\n12:43:44 AM [info] Initializing git provider using file system watchers...\n12",Log,selection_mouse
|
10 |
+
10,62430,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,200,"12:43:44 AM [info] Activating crowd-code\n12:43:44 AM [info] Recording started\n12:43:44 AM [info] Initializing git provider using file system watchers...\n12:43:44 AM [info] No workspace folder found\n12",Log,selection_mouse
|
11 |
+
11,62556,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,259,"12:43:44 AM [info] Activating crowd-code\n12:43:44 AM [info] Recording started\n12:43:44 AM [info] Initializing git provider using file system watchers...\n12:43:44 AM [info] No workspace folder found\n12:43:46 AM [info] Retrying git provider initialization...\n12",Log,selection_mouse
|
12 |
+
12,62997,"extension-output-pdoom-org.crowd-code-#1-crowd-code",258,0,"",Log,selection_mouse
|
13 |
+
13,63180,"extension-output-pdoom-org.crowd-code-#1-crowd-code",257,2,"12",Log,selection_mouse
|
14 |
+
14,63330,"extension-output-pdoom-org.crowd-code-#1-crowd-code",257,45,"12:43:46 AM [info] No workspace folder found\n",Log,selection_mouse
|
15 |
+
15,63564,"extension-output-pdoom-org.crowd-code-#1-crowd-code",198,104,"12:43:46 AM [info] Retrying git provider initialization...\n12:43:46 AM [info] No workspace folder found\n",Log,selection_mouse
|
16 |
+
16,63582,"extension-output-pdoom-org.crowd-code-#1-crowd-code",153,149,"12:43:44 AM [info] No workspace folder found\n12:43:46 AM [info] Retrying git provider initialization...\n12:43:46 AM [info] No workspace folder found\n",Log,selection_mouse
|
17 |
+
17,63616,"extension-output-pdoom-org.crowd-code-#1-crowd-code",78,224,"12:43:44 AM [info] Initializing git provider using file system watchers...\n12:43:44 AM [info] No workspace folder found\n12:43:46 AM [info] Retrying git provider initialization...\n12:43:46 AM [info] No workspace folder found\n",Log,selection_mouse
|
18 |
+
18,63650,"extension-output-pdoom-org.crowd-code-#1-crowd-code",41,261,"12:43:44 AM [info] Recording started\n12:43:44 AM [info] Initializing git provider using file system watchers...\n12:43:44 AM [info] No workspace folder found\n12:43:46 AM [info] Retrying git provider initialization...\n12:43:46 AM [info] No workspace folder found\n",Log,selection_mouse
|
19 |
+
19,63765,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,302,"12:43:44 AM [info] Activating crowd-code\n12:43:44 AM [info] Recording started\n12:43:44 AM [info] Initializing git provider using file system watchers...\n12:43:44 AM [info] No workspace folder found\n12:43:46 AM [info] Retrying git provider initialization...\n12:43:46 AM [info] No workspace folder found\n",Log,selection_mouse
|
20 |
+
20,64159,"extension-output-pdoom-org.crowd-code-#1-crowd-code",1,0,"",Log,selection_mouse
|
21 |
+
21,64329,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,2,"12",Log,selection_mouse
|
22 |
+
22,64500,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,41,"12:43:44 AM [info] Activating crowd-code\n",Log,selection_mouse
|
23 |
+
23,64681,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,78,"12:43:44 AM [info] Activating crowd-code\n12:43:44 AM [info] Recording started\n",Log,selection_mouse
|
24 |
+
24,64714,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,153,"12:43:44 AM [info] Activating crowd-code\n12:43:44 AM [info] Recording started\n12:43:44 AM [info] Initializing git provider using file system watchers...\n",Log,selection_mouse
|
25 |
+
25,64731,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,198,"12:43:44 AM [info] Activating crowd-code\n12:43:44 AM [info] Recording started\n12:43:44 AM [info] Initializing git provider using file system watchers...\n12:43:44 AM [info] No workspace folder found\n",Log,selection_mouse
|
26 |
+
26,64763,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,257,"12:43:44 AM [info] Activating crowd-code\n12:43:44 AM [info] Recording started\n12:43:44 AM [info] Initializing git provider using file system watchers...\n12:43:44 AM [info] No workspace folder found\n12:43:46 AM [info] Retrying git provider initialization...\n",Log,selection_mouse
|
27 |
+
27,64817,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,302,"12:43:44 AM [info] Activating crowd-code\n12:43:44 AM [info] Recording started\n12:43:44 AM [info] Initializing git provider using file system watchers...\n12:43:44 AM [info] No workspace folder found\n12:43:46 AM [info] Retrying git provider initialization...\n12:43:46 AM [info] No workspace folder found\n",Log,selection_mouse
|
28 |
+
28,67138,"extension-output-pdoom-org.crowd-code-#1-crowd-code",302,0,"",Log,selection_mouse
|
29 |
+
29,67550,"extension-output-pdoom-org.crowd-code-#1-crowd-code",279,23,"workspace folder found\n",Log,selection_mouse
|
30 |
+
30,67566,"extension-output-pdoom-org.crowd-code-#1-crowd-code",278,24," workspace folder found\n",Log,selection_mouse
|
31 |
+
31,67586,"extension-output-pdoom-org.crowd-code-#1-crowd-code",217,85,"Retrying git provider initialization...\n12:43:46 AM [info] No workspace folder found\n",Log,selection_mouse
|
32 |
+
32,67614,"extension-output-pdoom-org.crowd-code-#1-crowd-code",166,136,"info] No workspace folder found\n12:43:46 AM [info] Retrying git provider initialization...\n12:43:46 AM [info] No workspace folder found\n",Log,selection_mouse
|
33 |
+
33,67661,"extension-output-pdoom-org.crowd-code-#1-crowd-code",162,140,"AM [info] No workspace folder found\n12:43:46 AM [info] Retrying git provider initialization...\n12:43:46 AM [info] No workspace folder found\n",Log,selection_mouse
|
34 |
+
34,67704,"extension-output-pdoom-org.crowd-code-#1-crowd-code",159,143,"44 AM [info] No workspace folder found\n12:43:46 AM [info] Retrying git provider initialization...\n12:43:46 AM [info] No workspace folder found\n",Log,selection_mouse
|
35 |
+
35,67708,"extension-output-pdoom-org.crowd-code-#1-crowd-code",84,218,"44 AM [info] Initializing git provider using file system watchers...\n12:43:44 AM [info] No workspace folder found\n12:43:46 AM [info] Retrying git provider initialization...\n12:43:46 AM [info] No workspace folder found\n",Log,selection_mouse
|
36 |
+
36,67744,"extension-output-pdoom-org.crowd-code-#1-crowd-code",83,219,":44 AM [info] Initializing git provider using file system watchers...\n12:43:44 AM [info] No workspace folder found\n12:43:46 AM [info] Retrying git provider initialization...\n12:43:46 AM [info] No workspace folder found\n",Log,selection_mouse
|
37 |
+
37,67768,"extension-output-pdoom-org.crowd-code-#1-crowd-code",81,221,"43:44 AM [info] Initializing git provider using file system watchers...\n12:43:44 AM [info] No workspace folder found\n12:43:46 AM [info] Retrying git provider initialization...\n12:43:46 AM [info] No workspace folder found\n",Log,selection_mouse
|
38 |
+
38,67827,"extension-output-pdoom-org.crowd-code-#1-crowd-code",80,222,":43:44 AM [info] Initializing git provider using file system watchers...\n12:43:44 AM [info] No workspace folder found\n12:43:46 AM [info] Retrying git provider initialization...\n12:43:46 AM [info] No workspace folder found\n",Log,selection_mouse
|
39 |
+
39,67844,"extension-output-pdoom-org.crowd-code-#1-crowd-code",43,259,":43:44 AM [info] Recording started\n12:43:44 AM [info] Initializing git provider using file system watchers...\n12:43:44 AM [info] No workspace folder found\n12:43:46 AM [info] Retrying git provider initialization...\n12:43:46 AM [info] No workspace folder found\n",Log,selection_mouse
|
40 |
+
40,67860,"extension-output-pdoom-org.crowd-code-#1-crowd-code",41,261,"12:43:44 AM [info] Recording started\n12:43:44 AM [info] Initializing git provider using file system watchers...\n12:43:44 AM [info] No workspace folder found\n12:43:46 AM [info] Retrying git provider initialization...\n12:43:46 AM [info] No workspace folder found\n",Log,selection_mouse
|
41 |
+
41,68044,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,302,"12:43:44 AM [info] Activating crowd-code\n12:43:44 AM [info] Recording started\n12:43:44 AM [info] Initializing git provider using file system watchers...\n12:43:44 AM [info] No workspace folder found\n12:43:46 AM [info] Retrying git provider initialization...\n12:43:46 AM [info] No workspace folder found\n",Log,selection_mouse
|
42 |
+
42,69392,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,0,"",Log,selection_mouse
|
43 |
+
43,69739,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,2,"12",Log,selection_mouse
|
44 |
+
44,69917,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,80,"12:43:44 AM [info] Activating crowd-code\n12:43:44 AM [info] Recording started\n12",Log,selection_mouse
|
45 |
+
45,69933,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,156,"12:43:44 AM [info] Activating crowd-code\n12:43:44 AM [info] Recording started\n12:43:44 AM [info] Initializing git provider using file system watchers...\n12:",Log,selection_mouse
|
46 |
+
46,69950,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,262,"12:43:44 AM [info] Activating crowd-code\n12:43:44 AM [info] Recording started\n12:43:44 AM [info] Initializing git provider using file system watchers...\n12:43:44 AM [info] No workspace folder found\n12:43:46 AM [info] Retrying git provider initialization...\n12:43",Log,selection_mouse
|
47 |
+
47,69971,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,302,"12:43:44 AM [info] Activating crowd-code\n12:43:44 AM [info] Recording started\n12:43:44 AM [info] Initializing git provider using file system watchers...\n12:43:44 AM [info] No workspace folder found\n12:43:46 AM [info] Retrying git provider initialization...\n12:43:46 AM [info] No workspace folder found\n",Log,selection_mouse
|
48 |
+
48,70725,"extension-output-pdoom-org.crowd-code-#1-crowd-code",302,0,"",Log,selection_mouse
|
49 |
+
49,71654,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,0,"",Log,selection_mouse
|
50 |
+
50,71789,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,2,"12",Log,selection_mouse
|
69a563db57051868fc3ecdda3a43f162385be48f5447fe691a10177ee4dc3a0e/crowd-code-12bded65-cc97-4d6e-a5a9-8eb203cdf5b21750746850455-2025_06_24-08.34.24.180/source.csv
ADDED
@@ -0,0 +1,38 @@
|
1 |
+
Sequence,Time,File,RangeOffset,RangeLength,Text,Language,Type
|
2 |
+
1,9,"src/recording.ts",0,0,"import * as fs from 'node:fs'\nimport * as path from 'node:path'\nimport * as vscode from 'vscode'\nimport * as readline from 'node:readline'\nimport axios from 'axios'\nimport { hasConsent, showConsentChangeDialog } from './consent'\nimport {\n getEditorFileName,\n escapeString,\n getEditorLanguage,\n notificationWithProgress,\n generateBaseFilePath,\n formatDisplayTime,\n getExportPath,\n logToOutput,\n formatSrtTime,\n getConfig,\n removeDoubleQuotes,\n unescapeString,\n addToGitignore,\n} from './utilities'\nimport { type File, ChangeType, type CSVRowBuilder, type Change, type Recording, type ConsentStatus } from './types'\nimport { extContext, statusBarItem, actionsProvider } from './extension'\n\nexport const commands = {\n openSettings: 'crowd-code.openSettings',\n startRecording: 'crowd-code.startRecording',\n stopRecording: 'crowd-code.stopRecording',\n panicButton: 'crowd-code.panicButton',\n}\n\nexport const recording: Recording = {\n isRecording: false,\n timer: 0,\n startDateTime: null,\n endDateTime: null,\n sequence: 0,\n customFolderName: '',\n activatedFiles: new Set<string>(),\n}\n\nlet intervalId: NodeJS.Timeout\nconst fileQueue: File[] = []\nlet isAppending = false\n\nlet uploadIntervalId: NodeJS.Timeout;\nconst sessionUuid = vscode.env.sessionId;\n\nlet panicStatusBarItem: vscode.StatusBarItem | undefined;\nlet panicButtonPressCount = 0;\nlet panicButtonTimeoutId: NodeJS.Timeout | undefined;\nlet accumulatedRemovedContent: Array<{content: string, sequence: number}> = []; // Store content with sequence numbers\n\nconst CROWD_CODE_API_GATEWAY_URL = process.env.CROWD_CODE_API_GATEWAY_URL;\n\nconst PANIC_BUTTON_TIMEOUT = 3000; // 3 seconds timeout for successive presses\n\n/**\n * Builds a CSV row with the given parameters.\n *\n * @param {CSVRowBuilder} sequence - The sequence number of the change.\n * @param {CSVRowBuilder} rangeOffset - The offset of the changed range.\n * @param {CSVRowBuilder} rangeLength - The length of the changed range.\n * @param {CSVRowBuilder} text - The text of the change.\n * @param {string} type - The type of the change (optional, defaults to 'content').\n * @return {string} A CSV row string with the provided information.\n */\nexport function buildCsvRow({\n sequence,\n rangeOffset,\n rangeLength,\n text,\n type = ChangeType.CONTENT,\n}: CSVRowBuilder): string | undefined {\n if (!recording.startDateTime) {\n return\n }\n\n const time = new Date().getTime() - recording.startDateTime.getTime()\n\n if (type === ChangeType.HEADING) {\n return 'Sequence,Time,File,RangeOffset,RangeLength,Text,Language,Type\n'\n }\n\n if (type === ChangeType.TERMINAL_FOCUS || type === ChangeType.TERMINAL_COMMAND || type === ChangeType.TERMINAL_OUTPUT) {\n return `${sequence},${time},""TERMINAL"",${rangeOffset},${rangeLength},""${escapeString(text)}"",,${type}\n`\n }\n\n const editorFileName = getEditorFileName()\n return `${sequence},${time},""${editorFileName}"",${rangeOffset},${rangeLength},""${escapeString(text)}"",${getEditorLanguage()},${type}\n`\n}\n\n/**\n * Checks if the current file being edited is within the configured export path.\n * This is used to determine if the current file should be recorded or not.\n *\n * @returns {boolean} `true` if the current file is within the export path, `false` otherwise.\n */\nexport function isCurrentFileExported(): boolean {\n const editor = vscode.window.activeTextEditor\n const filename = editor?.document.fileName.replaceAll('\\', '/')\n const exportPath = getExportPath()\n if (!editor || 
!filename || !exportPath) {\n return false\n }\n return filename.startsWith(exportPath)\n}\n\nconst onChangeSubscription = vscode.workspace.onDidChangeTextDocument(event => {\n if (!recording.isRecording) {\n return\n }\n\n if (isCurrentFileExported()) {\n return\n }\n const editor = vscode.window.activeTextEditor\n if (editor && event.document === editor.document) {\n for (const change of event.contentChanges) {\n recording.sequence++\n addToFileQueue(\n buildCsvRow({\n sequence: recording.sequence,\n rangeOffset: change.rangeOffset,\n rangeLength: change.rangeLength,\n text: change.text,\n })\n )\n appendToFile()\n }\n }\n})\n\n/**\n * Creates the recording folder if it doesn't exist.\n * @param folderPath - The path to the recording folder.\n */\nfunction createRecordingFolder(folderPath: string): void {\n if (!fs.existsSync(folderPath)) {\n fs.mkdirSync(folderPath, { recursive: true })\n }\n}\n\n/**\n * Starts the recording process and initializes necessary variables.\n */\nexport async function startRecording(): Promise<void> {\n if (recording.isRecording) {\n notificationWithProgress('Already recording')\n logToOutput('Already recording', 'info')\n return\n }\n const exportPath = getExportPath()\n if (!exportPath) {\n return\n }\n\n // If the setting is enabled and the path is inside the workspace, add it to .gitignore\n if (\n getConfig().get<boolean>('export.addToGitignore') &&\n getConfig().get<string>('export.exportPath')?.startsWith('${workspaceFolder}')\n ) {\n await addToGitignore()\n }\n\n recording.startDateTime = new Date()\n recording.activatedFiles = new Set<string>()\n\n // Ask for folder name if enabled in settings\n let customFolderName: string | undefined\n if (getConfig().get('recording.askFolderName')) {\n customFolderName = await vscode.window.showInputBox({\n prompt: 'Enter a name for the recording folder',\n placeHolder: 'Enter recording folder name',\n })\n if (!customFolderName) {\n stopRecording(true)\n return\n }\n recording.customFolderName = customFolderName\n }\n\n const baseFilePath = generateBaseFilePath(recording.startDateTime, false, recording.customFolderName, sessionUuid)\n if (!baseFilePath) {\n stopRecording(true)\n return\n }\n\n // Create the recording folder\n const folderPath = path.dirname(path.join(exportPath, baseFilePath))\n createRecordingFolder(folderPath)\n\n recording.isRecording = true\n recording.timer = 0\n recording.endDateTime = null\n recording.sequence = 0\n panicButtonPressCount = 0 // Reset panic button counter for new recording\n accumulatedRemovedContent = [] // Clear accumulated content for new recording\n if (panicButtonTimeoutId) {\n clearTimeout(panicButtonTimeoutId)\n panicButtonTimeoutId = undefined\n }\n intervalId = setInterval(() => {\n recording.timer++\n updateStatusBarItem()\n }, 1000)\n notificationWithProgress('Recording started')\n logToOutput('Recording started', 'info')\n\n // Only log initial editor content if there's an active text editor\n const editorText = vscode.window.activeTextEditor?.document.getText()\n const activeEditorUri = vscode.window.activeTextEditor?.document.uri.toString()\n\n if (editorText !== undefined && activeEditorUri) {\n recording.sequence++\n const csvRow = {\n sequence: recording.sequence,\n rangeOffset: 0,\n rangeLength: 0,\n text: editorText,\n type: ChangeType.TAB,\n }\n addToFileQueue(buildCsvRow({ ...csvRow, type: ChangeType.HEADING }))\n addToFileQueue(buildCsvRow(csvRow))\n appendToFile()\n recording.activatedFiles.add(activeEditorUri)\n 
actionsProvider.setCurrentFile(vscode.window.activeTextEditor?.document.fileName || '')\n } else {\n // If no active editor, just add the header row\n recording.sequence++\n addToFileQueue(buildCsvRow({ \n sequence: recording.sequence,\n rangeOffset: 0,\n rangeLength: 0,\n text: '',\n type: ChangeType.HEADING \n }))\n appendToFile()\n }\n\n extContext.subscriptions.push(onChangeSubscription)\n updateStatusBarItem()\n updatePanicButton()\n actionsProvider.setRecordingState(true)\n\n // Set up a timer to send data to the Lambda endpoint periodically\n uploadIntervalId = setInterval(async () => {\n if (!exportPath) {\n return;\n }\n \n if (typeof CROWD_CODE_API_GATEWAY_URL !== 'string' || !CROWD_CODE_API_GATEWAY_URL.trim()) {\n logToOutput(""CROWD_CODE_API_GATEWAY_URL must be a non-empty string. Please check your build configuration."", 'error');\n return;\n }\n\n // Only upload data if user has given consent\n if (!hasConsent()) {\n return;\n }\n\n const filePath = path.join(exportPath, `${baseFilePath}.csv`);\n const extensionVersion = extContext.extension.packageJSON.version as string;\n const userId = extContext.globalState.get<string>('userId');\n\n try {\n const fileContent = await fs.promises.readFile(filePath, 'utf-8');\n\n if (fileContent) {\n const payload = {\n fileName: `${baseFilePath}.csv`,\n content: fileContent,\n version: extensionVersion,\n userId: userId\n };\n await axios.post(CROWD_CODE_API_GATEWAY_URL, payload);\n console.log(`Successfully sent ${payload.fileName} to Lambda endpoint.`);\n }\n } catch (error: any) {\n if (error.code === 'ENOENT') {\n console.warn(`File not found at ${filePath}. It might be created on first write.`);\n } else {\n console.error(`Error sending data to Lambda: ${error.message}`);\n if (axios.isAxiosError(error) && error.response) {\n console.error(""Lambda response status:"", error.response.status);\n console.error(""Lambda response data:"", error.response.data);\n }\n }\n }\n }, 5 * 60 * 1000); // 5 minutes\n}\n\n/**\n * Stops the recording process and finalizes the recording data.\n * @param context - The extension context.\n */\nexport function stopRecording(force = false): Promise<void> | void {\n if (!recording.isRecording) {\n notificationWithProgress('Not recording')\n return\n }\n\n recording.isRecording = false\n clearInterval(intervalId)\n clearInterval(uploadIntervalId); // Clear the upload timer\n recording.timer = 0\n recording.activatedFiles?.clear()\n panicButtonPressCount = 0 // Reset panic button counter when recording stops\n accumulatedRemovedContent = [] // Clear accumulated content when recording stops\n if (panicButtonTimeoutId) {\n clearTimeout(panicButtonTimeoutId)\n panicButtonTimeoutId = undefined\n }\n const index = extContext.subscriptions.indexOf(onChangeSubscription)\n if (index !== -1) {\n extContext.subscriptions.splice(index, 1)\n }\n updateStatusBarItem()\n updatePanicButton()\n actionsProvider.setRecordingState(false)\n if (force) {\n notificationWithProgress('Recording cancelled')\n logToOutput('Recording cancelled', 'info')\n recording.customFolderName = undefined\n return\n }\n notificationWithProgress('Recording finished')\n logToOutput('Recording finished', 'info')\n recording.endDateTime = new Date()\n return processCsvFile().then(() => {\n // Reset customFolderName after processing is complete\n recording.customFolderName = undefined\n }).catch(err => {\n logToOutput(`Error processing CSV file during stop: ${String(err)}`, 'error')\n recording.customFolderName = undefined\n });\n}\n\n/**\n * Appends 
data from the file queue to the appropriate file in the workspace.\n */\nexport async function appendToFile(): Promise<void> {\n if (isAppending) {\n return\n }\n isAppending = true\n\n const exportPath = getExportPath()\n if (!exportPath) {\n logToOutput('Export path not available in appendToFile, stopping recording.', 'error')\n stopRecording(true)\n isAppending = false\n return\n }\n\n while (fileQueue.length > 0) {\n const itemToAppend = fileQueue.shift()\n if (!itemToAppend) {\n continue\n }\n\n const filePath = path.join(exportPath, itemToAppend.name)\n\n try {\n const directory = path.dirname(filePath)\n if (!fs.existsSync(directory)) {\n fs.mkdirSync(directory, { recursive: true })\n }\n await fs.promises.appendFile(filePath, itemToAppend.content)\n } catch (err) {\n logToOutput(\n `Failed to append to file ${filePath}: ${err}. Item dropped. Content: ${itemToAppend.content.substring(0, 100)}...`,\n 'error'\n )\n }\n }\n isAppending = false\n}\n\n/**\n * Appends an SRT line to the file queue for the previous change.\n *\n * This function is responsible for generating the SRT format line for the previous change and adding it to the file queue.\n * It checks if the SRT export format is enabled, and if so, it generates the SRT line for the previous change and adds it to the file queue.\n *\n * @param processedChanges - An array of processed changes.\n * @param i - The index of the current change in the processedChanges array.\n * @param exportInSrt - A boolean indicating whether the SRT export format is enabled.\n */\nfunction addToSRTFile(processedChanges: Change[], i: number, exportInSrt: boolean) {\n if (!exportInSrt) {\n return\n }\n if (i === 0) {\n return\n }\n addToFileQueue(\n addSrtLine(\n processedChanges[i - 1].sequence,\n processedChanges[i - 1].startTime,\n processedChanges[i - 1].endTime,\n JSON.stringify({\n text: processedChanges[i - 1].text,\n file: processedChanges[i - 1].file,\n language: processedChanges[i - 1].language,\n })\n ),\n 'srt',\n true\n )\n}\n\n/**\n * Returns the new text content based on the change type and the previous change.\n * @param type - The type of the change.\n * @param text - The text of the change.\n * @param previousChange - The previous change.\n * @param rangeOffset - The offset of the range.\n * @param rangeLength - The length of the range.\n */\nfunction getNewTextContent(\n type: string,\n text: string,\n previousChange: Change | null,\n rangeOffset: number,\n rangeLength: number\n): string {\n if (type === ChangeType.TAB) {\n return text\n }\n if (!previousChange) {\n return ''\n }\n return getUpdatedText(previousChange.text, rangeOffset, rangeLength, text)\n}\n\n/**\n * Processes a single CSV line and returns the processed change\n */\nasync function processCSVLine(line: string, previousChange: Change | null): Promise<Change | null> {\n const lineArr = line.split(/,(?=(?:[^""]*""[^""]*"")*[^""]*$)/)\n\n if (Number.isNaN(Number.parseInt(lineArr[0]))) {\n return null\n }\n\n const time = Number.parseInt(lineArr[1])\n const file = removeDoubleQuotes(lineArr[2])\n const rangeOffset = Number.parseInt(lineArr[3])\n const rangeLength = Number.parseInt(lineArr[4])\n const text = unescapeString(removeDoubleQuotes(lineArr[5]))\n const language = lineArr[6]\n const type = lineArr[7]\n\n const newText = getNewTextContent(type, text, previousChange, rangeOffset, rangeLength)\n\n /**\n * Skip exporting changes with the same values to the previous change.\n */\n if (\n previousChange &&\n time === previousChange.startTime &&\n file === 
previousChange.file &&\n newText === previousChange.text &&\n language === previousChange.language\n ) {\n return null\n }\n\n return {\n sequence: previousChange ? previousChange.sequence + 1 : 1,\n file,\n startTime: time,\n endTime: 0,\n language,\n text: newText,\n }\n}\n\n/**\n * Returns the updated text content based on the previous text, range offset, range length, and new text.\n * @param previousText - The previous text.\n * @param rangeOffset - The offset of the range.\n * @param rangeLength - The length of the range.\n * @param newText - The new text.\n */\nfunction getUpdatedText(\n previousText: string,\n rangeOffset: number,\n rangeLength: number,\n newText: string\n): string {\n const textArray = previousText.split('')\n textArray.splice(rangeOffset, rangeLength, newText)\n return textArray.join('')\n}\n\n/**\n * Processes the CSV file and generates the necessary output files.\n */\nasync function processCsvFile(): Promise<void> {\n if (!validateRecordingState()) {\n return\n }\n\n const exportFormats = getConfig().get<string[]>('export.exportFormats', [])\n if (exportFormats.length === 0) {\n logToOutput('No export formats specified', 'info')\n vscode.window.showWarningMessage('No export formats specified')\n return\n }\n\n const exportPath = getExportPath()\n if (!exportPath) {\n return\n }\n\n if (!recording.startDateTime) {\n return\n }\n\n // Use the same custom folder name for reading the source file\n const baseFilePathSource = generateBaseFilePath(\n recording.startDateTime,\n false,\n recording.customFolderName,\n sessionUuid\n )\n if (!baseFilePathSource) {\n return\n }\n\n const filePath = path.join(exportPath, `${baseFilePathSource}.csv`)\n\n try {\n if (!fs.existsSync(filePath)) {\n throw new Error(`Source file not found: ${filePath}`)\n }\n\n const processedChanges: Change[] = []\n\n const rl = readline.createInterface({\n input: fs.createReadStream(filePath),\n crlfDelay: Number.POSITIVE_INFINITY,\n })\n\n for await (const line of rl) {\n const previousChange = processedChanges[processedChanges.length - 1]\n const change = await processCSVLine(line, previousChange)\n\n if (change) {\n if (previousChange) {\n previousChange.endTime = change.startTime\n if (exportFormats.includes('SRT')) {\n addToSRTFile(processedChanges, processedChanges.length, true)\n }\n }\n processedChanges.push(change)\n }\n }\n\n rl.close();\n\n return finalizeRecording(processedChanges, exportFormats);\n\n } catch (err) {\n vscode.window.showErrorMessage(`Error processing recording: ${err}`)\n logToOutput('Error processing CSV file: ' + String(err), 'error')\n return Promise.resolve(); // Resolve even on error after showing message\n }\n}\n\nfunction validateRecordingState(): boolean {\n if (!vscode.workspace.workspaceFolders) {\n logToOutput(\n 'No workspace folder found. 
To process the recording is needed a workspace folder',\n 'error'\n )\n return false\n }\n if (!recording.endDateTime || !recording.startDateTime) {\n logToOutput('Recording date time is not properly set', 'error')\n return false\n }\n return true\n}\n\nfunction finalizeRecording(processedChanges: Change[], exportFormats: string[]): Promise<void> {\n const lastChange = processedChanges[processedChanges.length - 1]\n if (lastChange && recording.endDateTime && recording.startDateTime) {\n lastChange.endTime = recording.endDateTime.getTime() - recording.startDateTime.getTime()\n if (exportFormats.includes('SRT')) {\n addToSRTFile(processedChanges, processedChanges.length, true)\n }\n }\n if (exportFormats.includes('JSON')) {\n addToFileQueue(JSON.stringify(processedChanges), 'json', true)\n }\n return appendToFile().then(() => {\n // Refresh the recordFiles view after export is complete\n vscode.commands.executeCommand('crowd-code.refreshRecordFiles')\n })\n}\n\n/**\n * Adds a line to the SRT file format.\n * @param sequence - The sequence number of the change.\n * @param start - The start time of the change.\n * @param end - The end time of the change.\n * @param text - The text of the change.\n * @returns A string representing a line in the SRT file format.\n */\nfunction addSrtLine(sequence: number, start: number, end: number, text: string): string {\n return `${sequence}\n${formatSrtTime(start)} --> ${formatSrtTime(end)}\n${text}\n\n`\n}\n\n/**\n * Adds content to the file queue.\n * @param content - The content to add.\n * @param fileExtension - The file extension (optional, defaults to 'csv').\n */\nexport function addToFileQueue(\n content: string | undefined,\n fileExtension = 'csv',\n isExport = false\n): void {\n if (!content) {\n return\n }\n if (!recording.startDateTime) {\n return\n }\n // Use the same custom name throughout the recording session\n const baseFilePath = generateBaseFilePath(recording.startDateTime, isExport, recording.customFolderName, sessionUuid)\n if (!baseFilePath) {\n return\n }\n fileQueue.push({\n name: `${baseFilePath}.${fileExtension}`,\n content: content,\n })\n}\n\n/**\n * Updates the status bar item with the current recording status and time.\n */\nexport function updateStatusBarItem(): void {\n if (recording.isRecording) {\n if (getConfig().get('appearance.showTimer') === false) {\n statusBarItem.text = '$(debug-stop)'\n statusBarItem.tooltip = 'Current time: ' + formatDisplayTime(recording.timer)\n }\n if (getConfig().get('appearance.showTimer') === true) {\n statusBarItem.text = '$(debug-stop) ' + formatDisplayTime(recording.timer)\n statusBarItem.tooltip = 'Stop Recording'\n }\n statusBarItem.command = commands.stopRecording\n statusBarItem.show()\n } else {\n const editor = vscode.window.activeTextEditor\n if (!editor) {\n statusBarItem.hide()\n return\n }\n if (getConfig().get('appearance.minimalMode') === true) {\n statusBarItem.text = '$(circle-large-filled)'\n } else {\n statusBarItem.text = '$(circle-large-filled) Start Recording'\n }\n statusBarItem.tooltip = 'Start Recording'\n statusBarItem.command = commands.startRecording\n statusBarItem.show()\n }\n}\n\n/**\n * Creates and updates the panic button status bar item.\n */\nexport function updatePanicButton(): void {\n if (!recording.isRecording) {\n if (panicStatusBarItem) {\n panicStatusBarItem.hide()\n }\n return\n }\n\n // Create panic button if it doesn't exist\n if (!panicStatusBarItem) {\n panicStatusBarItem = vscode.window.createStatusBarItem(vscode.StatusBarAlignment.Right, 8999) 
// Position it to the left of the recording button\n extContext.subscriptions.push(panicStatusBarItem)\n }\n\n const secondsToRemove = (panicButtonPressCount + 1) * 10 // Show what the next press will remove\n panicStatusBarItem.text = '$(refresh)'\n panicStatusBarItem.tooltip = `Remove last ${secondsToRemove} seconds of recording (click again within 3 seconds to remove more)`\n panicStatusBarItem.command = commands.panicButton\n panicStatusBarItem.show()\n}\n\n/**\n * Deletes the last N seconds of recording data from the CSV file.\n * This is a ""panic button"" feature that allows users to quickly remove recent sensitive data.\n * Each successive press within 3 seconds removes more time: 10s, 20s, 30s, etc.\n * After 3 seconds of inactivity, the next press will be treated as a fresh press (10s).\n */\nexport async function panicButton(): Promise<void> {\n if (!recording.isRecording) {\n vscode.window.showWarningMessage('No active recording to remove data from')\n logToOutput('No active recording to remove data from', 'info')\n return\n }\n\n if (!recording.startDateTime) {\n vscode.window.showErrorMessage('Recording start time not available')\n logToOutput('Recording start time not available', 'error')\n return\n }\n\n const exportPath = getExportPath()\n if (!exportPath) {\n vscode.window.showErrorMessage('Export path not available')\n logToOutput('Export path not available', 'error')\n return\n }\n\n const baseFilePath = generateBaseFilePath(recording.startDateTime, false, recording.customFolderName, sessionUuid)\n if (!baseFilePath) {\n vscode.window.showErrorMessage('Could not generate file path')\n logToOutput('Could not generate file path', 'error')\n return\n }\n\n const filePath = path.join(exportPath, `${baseFilePath}.csv`)\n\n try {\n // Check if file exists\n if (!fs.existsSync(filePath)) {\n vscode.window.showWarningMessage('No recording file found to remove data from')\n logToOutput('No recording file found to remove data from', 'info')\n return\n }\n\n // Read the file\n const content = fs.readFileSync(filePath, 'utf-8')\n const lines = content.split('\n')\n \n if (lines.length <= 1) {\n vscode.window.showWarningMessage('Recording file is empty, nothing to remove')\n logToOutput('Recording file is empty, nothing to remove', 'info')\n return\n }\n\n // Calculate how many lines to remove (10 seconds per press)\n const linesToRemove = Math.min((panicButtonPressCount + 1) * 10, lines.length - 1)\n const newLines = lines.slice(0, lines.length - linesToRemove)\n \n // Capture the lines that will be removed for display\n const removedLines = lines.slice(lines.length - linesToRemove)\n\n // Write back to file\n fs.writeFileSync(filePath, newLines.join('\n'))\n\n // Update panic button state\n panicButtonPressCount++\n \n // Set up timeout to reset the counter after 3 seconds of inactivity\n if (panicButtonTimeoutId) {\n clearTimeout(panicButtonTimeoutId)\n }\n panicButtonTimeoutId = setTimeout(() => {\n panicButtonPressCount = 0\n accumulatedRemovedContent = [] // Clear accumulated content\n updatePanicButton()\n }, PANIC_BUTTON_TIMEOUT)\n \n updatePanicButton()\n\n const secondsToRemove = panicButtonPressCount * 10\n const actualLinesRemoved = lines.length - newLines.length\n \n // Accumulate removed content and show immediate popup\n if (removedLines.length > 0) {\n const nonEmptyLines = removedLines.filter(line => line.trim())\n if (nonEmptyLines.length > 0) {\n // Create a simple, readable summary of removed content\n const contentSummary = nonEmptyLines.map(line => {\n // 
Extract just the text content from CSV for cleaner display\n const parts = line.split(',')\n if (parts.length >= 6) {\n const textContent = parts[5].replace(/^""|""$/g, '') // Remove quotes\n // Clean up common escape sequences\n const cleanText = textContent\n .replace(/\\n/g, '\n')\n .replace(/\\t/g, '\t')\n .replace(/\\r/g, '\r')\n return { content: cleanText, sequence: Number.parseInt(parts[0]) }\n }\n return { content: line, sequence: Number.parseInt(line.split(',')[0]) }\n }).filter(item => item.content.trim().length > 0)\n \n // Add to accumulated content\n accumulatedRemovedContent.push(...contentSummary)\n \n // Sort by sequence number to show in original file order\n const sortedContent = accumulatedRemovedContent.sort((a, b) => a.sequence - b.sequence)\n \n // Show immediate popup with accumulated content\n const totalContent = sortedContent.map(item => item.content).join(' ')\n const summaryText = totalContent.length > 100 \n ? totalContent.substring(0, 100) + '...' \n : totalContent\n \n vscode.window.showInformationMessage(\n `Removed content: ""${summaryText}""`,\n 'Dismiss'\n )\n }\n }\n\n } catch (error) {\n const errorMessage = `Error during panic button operation: ${error}`\n vscode.window.showErrorMessage(errorMessage)\n logToOutput(errorMessage, 'error')\n }\n}",typescript,tab
|
3 |
+
2,464,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,0,"8:34:24 AM [info] Activating crowd-code\n8:34:24 AM [info] Recording started\n8:34:24 AM [info] Initializing git provider using file system watchers...\n8:34:24 AM [info] Git repository found\n8:34:24 AM [info] Git provider initialized successfully\n8:34:24 AM [info] Initial git state: [object Object]\n",Log,tab
|
4 |
+
3,1593,"extension-output-pdoom-org.crowd-code-#1-crowd-code",297,0,"",Log,selection_mouse
|
5 |
+
4,2363,"src/recording.ts",0,0,"",typescript,tab
|
6 |
+
5,2366,"src/recording.ts",10018,0,"",typescript,selection_mouse
|
7 |
+
6,2463,"src/recording.ts",10017,0,"",typescript,selection_command
|
8 |
+
7,2894,"src/recording.ts",10056,0,"",typescript,selection_mouse
|
9 |
+
8,2921,"src/recording.ts",10055,0,"",typescript,selection_command
|
10 |
+
9,3382,"src/recording.ts",10018,0,"",typescript,selection_mouse
|
11 |
+
10,3463,"src/recording.ts",10017,0,"",typescript,selection_command
|
12 |
+
11,3908,"src/recording.ts",10054,0,"",typescript,selection_mouse
|
13 |
+
12,3941,"src/recording.ts",10053,0,"",typescript,selection_command
|
14 |
+
13,4225,"src/recording.ts",10054,0,"",typescript,selection_command
|
15 |
+
14,4596,"src/recording.ts",10047,7,"",typescript,content
|
16 |
+
15,5010,"src/recording.ts",10045,2,"",typescript,content
|
17 |
+
16,5622,"src/recording.ts",10045,0,"5",typescript,content
|
18 |
+
17,5626,"src/recording.ts",10046,0,"",typescript,selection_keyboard
|
19 |
+
18,5684,"src/recording.ts",10046,0," ",typescript,content
|
20 |
+
19,5689,"src/recording.ts",10047,0,"",typescript,selection_keyboard
|
21 |
+
20,5817,"src/recording.ts",10047,0,"m",typescript,content
|
22 |
+
21,5822,"src/recording.ts",10048,0,"",typescript,selection_keyboard
|
23 |
+
22,5990,"src/recording.ts",10048,0,"i",typescript,content
|
24 |
+
23,5994,"src/recording.ts",10049,0,"",typescript,selection_keyboard
|
25 |
+
24,6104,"src/recording.ts",10049,0,"n",typescript,content
|
26 |
+
25,6108,"src/recording.ts",10050,0,"",typescript,selection_keyboard
|
27 |
+
26,6228,"src/recording.ts",10050,0,"u",typescript,content
|
28 |
+
27,6233,"src/recording.ts",10051,0,"",typescript,selection_keyboard
|
29 |
+
28,6410,"src/recording.ts",10051,0,"t",typescript,content
|
30 |
+
29,6416,"src/recording.ts",10052,0,"",typescript,selection_keyboard
|
31 |
+
30,6483,"src/recording.ts",10052,0,"e",typescript,content
|
32 |
+
31,6488,"src/recording.ts",10053,0,"",typescript,selection_keyboard
|
33 |
+
32,6637,"src/recording.ts",10053,0,"s",typescript,content
|
34 |
+
33,6642,"src/recording.ts",10054,0,"",typescript,selection_keyboard
|
35 |
+
34,16299,"TERMINAL",0,0,"ls /tmp/crowd-code/",,terminal_command
|
36 |
+
35,16315,"TERMINAL",0,0,"]633;E;ls /tmp/crowd-code/;a808ac8a-c4ad-476e-8ef8-0a741b3ad045]633;C",,terminal_output
|
37 |
+
36,34028,"TERMINAL",0,0,"echo crowd-code-12bded65-cc97-4d6e-a5a9-8eb203cdf5b21750746850455-2025_06_24-08.34.24.180",,terminal_command
|
38 |
+
37,34034,"TERMINAL",0,0,"]633;E;echo crowd-code-12bded65-cc97-4d6e-a5a9-8eb203cdf5b21750746850455-2025_06_24-08.34.24.180;a808ac8a-c4ad-476e-8ef8-0a741b3ad045]633;Ccrowd-code-12bded65-cc97-4d6e-a5a9-8eb203cdf5b21750746850455-2025_06_24-08.34.24.180\r\n]0;maharajamihir@mihir-xps139305:~/Projects/coding-extension/crowd-code]633;D;0]633;P;Cwd=/home/maharajamihir/Projects/coding-extension/crowd-code",,terminal_output
|
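The TERMINAL rows above embed what appear to be VS Code shell-integration markers (OSC 633 command sequences plus an OSC 0 title update) whose escape and terminator bytes were lost in the CSV rendering. The heuristic below recovers just the visible output; stripShellIntegration is an illustrative helper, not part of crowd-code, and it assumes the Text field has already been unescaped.

function stripShellIntegration(raw: string): string {
  return raw
    // OSC 633 marks such as ]633;E;<command>;<nonce>, ]633;C, ]633;D;0, ]633;P;Cwd=...
    .replace(/\]633;[A-Z](;[^\]]*)?/g, '')
    // OSC 0 window-title updates such as ]0;user@host:~/path
    .replace(/\]0;[^\]]*/g, '')
}

// Applied to the terminal_output of sequence 37 above, this leaves only the
// echoed recording folder name and its trailing line break.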
69a563db57051868fc3ecdda3a43f162385be48f5447fe691a10177ee4dc3a0e/crowd-code-1f4d4a04-f881-43ae-915a-b4684ec9fba71750685322384-2025_06_23-15.28.52.723/source.csv
ADDED
@@ -0,0 +1,404 @@
|
1 |
+
Sequence,Time,File,RangeOffset,RangeLength,Text,Language,Type
|
2 |
+
1,8,"src/recording.ts",0,0,"import * as fs from 'node:fs'\nimport * as util from 'node:util'\nimport * as path from 'node:path'\nimport * as vscode from 'vscode'\nimport * as readline from 'node:readline'\nimport axios from 'axios';\nimport {\n getEditorFileName,\n escapeString,\n getEditorLanguage,\n notificationWithProgress,\n generateFileName,\n formatDisplayTime,\n getExportPath,\n logToOutput,\n formatSrtTime,\n getConfig,\n removeDoubleQuotes,\n unescapeString,\n addToGitignore,\n} from './utilities'\nimport { type File, ChangeType, type CSVRowBuilder, type Change, type Recording } from './types'\nimport { extContext, statusBarItem, actionsProvider } from './extension'\n\nexport const commands = {\n openSettings: 'vs-code-recorder.openSettings',\n startRecording: 'vs-code-recorder.startRecording',\n stopRecording: 'vs-code-recorder.stopRecording',\n}\n\nexport const recording: Recording = {\n isRecording: false,\n timer: 0,\n startDateTime: null,\n endDateTime: null,\n sequence: 0,\n customFolderName: '',\n activatedFiles: new Set<string>(),\n}\n\nlet intervalId: NodeJS.Timeout\nconst fileQueue: File[] = []\nlet isAppending = false\n\nlet uploadIntervalId: NodeJS.Timeout;\nconst sessionUuid = vscode.env.sessionId;\n\nconst API_GATEWAY_URL = 'https://knm3fmbwbi.execute-api.us-east-1.amazonaws.com/v1/recordings';\n\n\n/**\n * Builds a CSV row with the given parameters.\n *\n * @param {CSVRowBuilder} sequence - The sequence number of the change.\n * @param {CSVRowBuilder} rangeOffset - The offset of the changed range.\n * @param {CSVRowBuilder} rangeLength - The length of the changed range.\n * @param {CSVRowBuilder} text - The text of the change.\n * @param {string} type - The type of the change (optional, defaults to 'content').\n * @return {string} A CSV row string with the provided information.\n */\nexport function buildCsvRow({\n sequence,\n rangeOffset,\n rangeLength,\n text,\n type = ChangeType.CONTENT,\n}: CSVRowBuilder): string | undefined {\n if (!recording.startDateTime) {\n return\n }\n\n const time = new Date().getTime() - recording.startDateTime.getTime()\n\n if (type === ChangeType.HEADING) {\n return 'Sequence,Time,File,RangeOffset,RangeLength,Text,Language,Type\n'\n }\n\n if (type === ChangeType.TERMINAL_FOCUS || type === ChangeType.TERMINAL_COMMAND || type === ChangeType.TERMINAL_OUTPUT) {\n return `${sequence},${time},""TERMINAL"",${rangeOffset},${rangeLength},""${escapeString(text)}"",,${type}\n`\n }\n\n const editorFileName = getEditorFileName()\n return `${sequence},${time},""${editorFileName}"",${rangeOffset},${rangeLength},""${escapeString(text)}"",${getEditorLanguage()},${type}\n`\n}\n\n/**\n * Checks if the current file being edited is within the configured export path.\n * This is used to determine if the current file should be recorded or not.\n *\n * @returns {boolean} `true` if the current file is within the export path, `false` otherwise.\n */\nexport function isCurrentFileExported(): boolean {\n const editor = vscode.window.activeTextEditor\n const filename = editor?.document.fileName.replaceAll('\\', '/')\n const exportPath = getExportPath()\n if (!editor || !filename || !exportPath) {\n return false\n }\n return filename.startsWith(exportPath)\n}\n\nconst onChangeSubscription = vscode.workspace.onDidChangeTextDocument(event => {\n if (!recording.isRecording) {\n return\n }\n\n if (isCurrentFileExported()) {\n return\n }\n const editor = vscode.window.activeTextEditor\n if (editor && event.document === editor.document) {\n for (const change of 
event.contentChanges) {\n recording.sequence++\n addToFileQueue(\n buildCsvRow({\n sequence: recording.sequence,\n rangeOffset: change.rangeOffset,\n rangeLength: change.rangeLength,\n text: change.text,\n })\n )\n appendToFile()\n }\n }\n})\n\n/**\n * Creates the recording folder if it doesn't exist.\n * @param folderPath - The path to the recording folder.\n */\nfunction createRecordingFolder(folderPath: string): void {\n if (!fs.existsSync(folderPath)) {\n fs.mkdirSync(folderPath, { recursive: true })\n }\n}\n\n/**\n * Starts the recording process and initializes necessary variables.\n */\nexport async function startRecording(): Promise<void> {\n if (!vscode.window.activeTextEditor) {\n vscode.window.showErrorMessage(vscode.l10n.t('No active text editor'))\n logToOutput(vscode.l10n.t('No active text editor'), 'info')\n return\n }\n if (recording.isRecording) {\n notificationWithProgress(vscode.l10n.t('Already recording'))\n logToOutput(vscode.l10n.t('Already recording'), 'info')\n return\n }\n const exportPath = getExportPath()\n if (!exportPath) {\n return\n }\n\n // If the setting is enabled and the path is inside the workspace, add it to .gitignore\n if (\n getConfig().get<boolean>('export.addToGitignore') &&\n getConfig().get<string>('export.exportPath')?.startsWith('${workspaceFolder}')\n ) {\n await addToGitignore()\n }\n\n recording.startDateTime = new Date()\n recording.activatedFiles = new Set<string>()\n\n // Ask for folder name if enabled in settings\n let customFolderName: string | undefined\n if (getConfig().get('recording.askFolderName')) {\n customFolderName = await vscode.window.showInputBox({\n prompt: vscode.l10n.t('Enter a name for the recording folder'),\n placeHolder: vscode.l10n.t('Enter recording folder name'),\n })\n if (!customFolderName) {\n stopRecording(true)\n return\n }\n recording.customFolderName = customFolderName\n }\n\n const fileName = generateFileName(recording.startDateTime, false, recording.customFolderName, sessionUuid)\n if (!fileName) {\n stopRecording(true)\n return\n }\n\n // Create the recording folder\n const folderPath = path.dirname(path.join(exportPath, fileName))\n createRecordingFolder(folderPath)\n\n recording.isRecording = true\n recording.timer = 0\n recording.endDateTime = null\n recording.sequence = 0\n intervalId = setInterval(() => {\n recording.timer++\n updateStatusBarItem()\n }, 1000)\n notificationWithProgress(vscode.l10n.t('Recording started'))\n logToOutput(vscode.l10n.t('Recording started'), 'info')\n\n const editorText = vscode.window.activeTextEditor?.document.getText()\n const activeEditorUri = vscode.window.activeTextEditor?.document.uri.toString()\n\n if (editorText !== undefined && activeEditorUri) {\n recording.sequence++\n const csvRow = {\n sequence: recording.sequence,\n rangeOffset: 0,\n rangeLength: 0,\n text: editorText,\n type: ChangeType.TAB,\n }\n addToFileQueue(buildCsvRow({ ...csvRow, type: ChangeType.HEADING }))\n addToFileQueue(buildCsvRow(csvRow))\n appendToFile()\n recording.activatedFiles.add(activeEditorUri)\n actionsProvider.setCurrentFile(vscode.window.activeTextEditor.document.fileName)\n }\n\n extContext.subscriptions.push(onChangeSubscription)\n updateStatusBarItem()\n actionsProvider.setRecordingState(true)\n\n // Set up a timer to send data to the Lambda endpoint periodically\n uploadIntervalId = setInterval(async () => {\n if (!exportPath) {\n return;\n }\n\n const filePath = path.join(exportPath, `${fileName}.csv`);\n const extensionVersion = extContext.extension.packageJSON.version as 
string;\n const userId = extContext.globalState.get<string>('userId');\n\n\n try {\n const fileContent = await fs.promises.readFile(filePath, 'utf-8');\n\n if (fileContent) {\n const payload = {\n fileName: `${fileName}.csv`,\n content: fileContent,\n version: extensionVersion,\n userId: userId\n };\n await axios.post(API_GATEWAY_URL, payload);\n console.log(`Successfully sent ${payload.fileName} to Lambda endpoint.`);\n }\n } catch (error: any) {\n if (error.code === 'ENOENT') {\n console.warn(`File not found at ${filePath}. It might be created on first write.`);\n } else {\n console.error(`Error sending data to Lambda: ${error.message}`);\n if (axios.isAxiosError(error) && error.response) {\n console.error(""Lambda response status:"", error.response.status);\n console.error(""Lambda response data:"", error.response.data);\n }\n }\n }\n }, 5 * 60 * 1000); // 5 minutes\n}\n\n/**\n * Stops the recording process and finalizes the recording data.\n * @param context - The extension context.\n */\nexport function stopRecording(force = false): Promise<void> | void {\n if (!recording.isRecording) {\n notificationWithProgress(vscode.l10n.t('Not recording'))\n return\n }\n\n recording.isRecording = false\n clearInterval(intervalId)\n clearInterval(uploadIntervalId); // Clear the upload timer\n recording.timer = 0\n recording.activatedFiles?.clear()\n const index = extContext.subscriptions.indexOf(onChangeSubscription)\n if (index !== -1) {\n extContext.subscriptions.splice(index, 1)\n }\n updateStatusBarItem()\n actionsProvider.setRecordingState(false)\n if (force) {\n notificationWithProgress(vscode.l10n.t('Recording cancelled'))\n logToOutput(vscode.l10n.t('Recording cancelled'), 'info')\n recording.customFolderName = undefined\n return\n }\n notificationWithProgress(vscode.l10n.t('Recording finished'))\n logToOutput(vscode.l10n.t('Recording finished'), 'info')\n recording.endDateTime = new Date()\n return processCsvFile().then(() => {\n // Reset customFolderName after processing is complete\n recording.customFolderName = undefined\n }).catch(err => {\n logToOutput(vscode.l10n.t('Error processing CSV file during stop: {0}', String(err)), 'error')\n recording.customFolderName = undefined\n });\n}\n\n/**\n * Appends data from the file queue to the appropriate file in the workspace.\n */\nexport async function appendToFile(): Promise<void> {\n if (isAppending) {\n return\n }\n isAppending = true\n\n const exportPath = getExportPath()\n if (!exportPath) {\n logToOutput('Export path not available in appendToFile, stopping recording.', 'error')\n stopRecording(true)\n isAppending = false\n return\n }\n\n while (fileQueue.length > 0) {\n const itemToAppend = fileQueue.shift()\n if (!itemToAppend) {\n continue\n }\n\n const filePath = path.join(exportPath, itemToAppend.name)\n\n try {\n const directory = path.dirname(filePath)\n if (!fs.existsSync(directory)) {\n fs.mkdirSync(directory, { recursive: true })\n }\n await fs.promises.appendFile(filePath, itemToAppend.content)\n } catch (err) {\n logToOutput(\n `Failed to append to file ${filePath}: ${err}. Item dropped. 
Content: ${itemToAppend.content.substring(0, 100)}...`,\n 'error'\n )\n }\n }\n isAppending = false\n}\n\n/**\n * Appends an SRT line to the file queue for the previous change.\n *\n * This function is responsible for generating the SRT format line for the previous change and adding it to the file queue.\n * It checks if the SRT export format is enabled, and if so, it generates the SRT line for the previous change and adds it to the file queue.\n *\n * @param processedChanges - An array of processed changes.\n * @param i - The index of the current change in the processedChanges array.\n * @param exportInSrt - A boolean indicating whether the SRT export format is enabled.\n */\nfunction addToSRTFile(processedChanges: Change[], i: number, exportInSrt: boolean) {\n if (!exportInSrt) {\n return\n }\n if (i === 0) {\n return\n }\n addToFileQueue(\n addSrtLine(\n processedChanges[i - 1].sequence,\n processedChanges[i - 1].startTime,\n processedChanges[i - 1].endTime,\n JSON.stringify({\n text: processedChanges[i - 1].text,\n file: processedChanges[i - 1].file,\n language: processedChanges[i - 1].language,\n })\n ),\n 'srt',\n true\n )\n}\n\n/**\n * Returns the new text content based on the change type and the previous change.\n * @param type - The type of the change.\n * @param text - The text of the change.\n * @param previousChange - The previous change.\n * @param rangeOffset - The offset of the range.\n * @param rangeLength - The length of the range.\n */\nfunction getNewTextContent(\n type: string,\n text: string,\n previousChange: Change | null,\n rangeOffset: number,\n rangeLength: number\n): string {\n if (type === ChangeType.TAB) {\n return text\n }\n if (!previousChange) {\n return ''\n }\n return getUpdatedText(previousChange.text, rangeOffset, rangeLength, text)\n}\n\n/**\n * Processes a single CSV line and returns the processed change\n */\nasync function processCSVLine(line: string, previousChange: Change | null): Promise<Change | null> {\n const lineArr = line.split(/,(?=(?:[^""]*""[^""]*"")*[^""]*$)/)\n\n if (Number.isNaN(Number.parseInt(lineArr[0]))) {\n return null\n }\n\n const time = Number.parseInt(lineArr[1])\n const file = removeDoubleQuotes(lineArr[2])\n const rangeOffset = Number.parseInt(lineArr[3])\n const rangeLength = Number.parseInt(lineArr[4])\n const text = unescapeString(removeDoubleQuotes(lineArr[5]))\n const language = lineArr[6]\n const type = lineArr[7]\n\n const newText = getNewTextContent(type, text, previousChange, rangeOffset, rangeLength)\n\n /**\n * Skip exporting changes with the same values to the previous change.\n */\n if (\n previousChange &&\n time === previousChange.startTime &&\n file === previousChange.file &&\n newText === previousChange.text &&\n language === previousChange.language\n ) {\n return null\n }\n\n return {\n sequence: previousChange ? 
previousChange.sequence + 1 : 1,\n file,\n startTime: time,\n endTime: 0,\n language,\n text: newText,\n }\n}\n\n/**\n * Returns the updated text content based on the previous text, range offset, range length, and new text.\n * @param previousText - The previous text.\n * @param rangeOffset - The offset of the range.\n * @param rangeLength - The length of the range.\n * @param newText - The new text.\n */\nfunction getUpdatedText(\n previousText: string,\n rangeOffset: number,\n rangeLength: number,\n newText: string\n): string {\n const textArray = previousText.split('')\n textArray.splice(rangeOffset, rangeLength, newText)\n return textArray.join('')\n}\n\n/**\n * Processes the CSV file and generates the necessary output files.\n */\nasync function processCsvFile(): Promise<void> {\n if (!validateRecordingState()) {\n return\n }\n\n const exportFormats = getConfig().get<string[]>('export.exportFormats', [])\n if (exportFormats.length === 0) {\n logToOutput(vscode.l10n.t('No export formats specified'), 'info')\n vscode.window.showWarningMessage(vscode.l10n.t('No export formats specified'))\n return\n }\n\n const exportPath = getExportPath()\n if (!exportPath) {\n return\n }\n\n if (!recording.startDateTime) {\n return\n }\n\n // Use the same custom folder name for reading the source file\n const sourceFileName = generateFileName(\n recording.startDateTime,\n false,\n recording.customFolderName,\n sessionUuid\n )\n if (!sourceFileName) {\n return\n }\n\n const filePath = path.join(exportPath, `${sourceFileName}.csv`)\n\n try {\n if (!fs.existsSync(filePath)) {\n throw new Error(`Source file not found: ${filePath}`)\n }\n\n const processedChanges: Change[] = []\n\n const rl = readline.createInterface({\n input: fs.createReadStream(filePath),\n crlfDelay: Number.POSITIVE_INFINITY,\n })\n\n for await (const line of rl) {\n const previousChange = processedChanges[processedChanges.length - 1]\n const change = await processCSVLine(line, previousChange)\n\n if (change) {\n if (previousChange) {\n previousChange.endTime = change.startTime\n if (exportFormats.includes('SRT')) {\n addToSRTFile(processedChanges, processedChanges.length, true)\n }\n }\n processedChanges.push(change)\n }\n }\n\n rl.close();\n\n return finalizeRecording(processedChanges, exportFormats);\n\n } catch (err) {\n vscode.window.showErrorMessage(`Error processing recording: ${err}`)\n logToOutput(vscode.l10n.t('Error processing CSV file: {0}', String(err)), 'error')\n return Promise.resolve(); // Resolve even on error after showing message\n }\n}\n\nfunction validateRecordingState(): boolean {\n if (!vscode.workspace.workspaceFolders) {\n logToOutput(\n vscode.l10n.t(\n 'No workspace folder found. 
To process the recording is needed a workspace folder'\n ),\n 'error'\n )\n return false\n }\n if (!recording.endDateTime || !recording.startDateTime) {\n logToOutput(vscode.l10n.t('Recording date time is not properly set'), 'error')\n return false\n }\n return true\n}\n\nfunction finalizeRecording(processedChanges: Change[], exportFormats: string[]): Promise<void> {\n const lastChange = processedChanges[processedChanges.length - 1]\n if (lastChange && recording.endDateTime && recording.startDateTime) {\n lastChange.endTime = recording.endDateTime.getTime() - recording.startDateTime.getTime()\n if (exportFormats.includes('SRT')) {\n addToSRTFile(processedChanges, processedChanges.length, true)\n }\n }\n if (exportFormats.includes('JSON')) {\n addToFileQueue(JSON.stringify(processedChanges), 'json', true)\n }\n return appendToFile().then(() => {\n // Refresh the recordFiles view after export is complete\n vscode.commands.executeCommand('vs-code-recorder.refreshRecordFiles')\n })\n}\n\n/**\n * Adds a line to the SRT file format.\n * @param sequence - The sequence number of the change.\n * @param start - The start time of the change.\n * @param end - The end time of the change.\n * @param text - The text of the change.\n * @returns A string representing a line in the SRT file format.\n */\nfunction addSrtLine(sequence: number, start: number, end: number, text: string): string {\n return `${sequence}\n${formatSrtTime(start)} --> ${formatSrtTime(end)}\n${text}\n\n`\n}\n\n/**\n * Adds content to the file queue.\n * @param content - The content to add.\n * @param fileExtension - The file extension (optional, defaults to 'csv').\n */\nexport function addToFileQueue(\n content: string | undefined,\n fileExtension = 'csv',\n isExport = false\n): void {\n if (!content) {\n return\n }\n if (!recording.startDateTime) {\n return\n }\n // Use the same custom name throughout the recording session\n const fileName = generateFileName(recording.startDateTime, isExport, recording.customFolderName, sessionUuid)\n if (!fileName) {\n return\n }\n fileQueue.push({\n name: `${fileName}.${fileExtension}`,\n content: content,\n })\n}\n\n/**\n * Updates the status bar item with the current recording status and time.\n */\nexport function updateStatusBarItem(): void {\n const editor = vscode.window.activeTextEditor\n if (!editor && !recording) {\n statusBarItem.hide()\n return\n }\n if (recording.isRecording) {\n if (getConfig().get('appearance.showTimer') === false) {\n statusBarItem.text = '$(debug-stop)'\n statusBarItem.tooltip = vscode.l10n.t('Current time: {0}', formatDisplayTime(recording.timer))\n }\n if (getConfig().get('appearance.showTimer') === true) {\n statusBarItem.text = `$(debug-stop) ${formatDisplayTime(recording.timer)}`\n statusBarItem.tooltip = vscode.l10n.t('Stop Recording')\n }\n statusBarItem.command = commands.stopRecording\n } else {\n if (getConfig().get('appearance.minimalMode') === true) {\n statusBarItem.text = '$(circle-large-filled)'\n } else {\n statusBarItem.text = `$(circle-large-filled) ${vscode.l10n.t('Start Recording')}`\n }\n statusBarItem.tooltip = vscode.l10n.t('Start Recording')\n statusBarItem.command = commands.startRecording\n }\n statusBarItem.show()\n}",typescript,tab
2,772,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,0,"",Log,tab
3,1544,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,0,"3:28:52 PM [info] Activating crowd-code\n3:28:52 PM [info] Welcome back maharajamihir. Your user-id is '69a563db57051868fc3ecdda3a43f162385be48f5447fe691a10177ee4dc3a0e'. Happy coding!\n3:28:52 PM [info] Recording started\n",Log,content
4,93787,"src/recording.ts",0,0,"",typescript,tab
5,103779,"src/recording.ts",7567,488,"\n try {\n const fileContent = await fs.promises.readFile(filePath, 'utf-8');\n\n if (fileContent) {\n const payload = {\n fileName: `${fileName}.csv`,\n content: fileContent\n",typescript,content
6,111969,"src/recording.ts",1938,0,"",typescript,selection_mouse
7,111995,"src/recording.ts",1937,0,"",typescript,selection_command
8,112325,"src/recording.ts",1938,0,"",typescript,selection_mouse
9,112338,"src/recording.ts",1937,0,"",typescript,selection_command
10,112471,"src/recording.ts",1937,1,",",typescript,selection_mouse
11,112495,"src/recording.ts",1938,0,"",typescript,selection_command
12,112514,"src/recording.ts",1897,41,"\n text,\n type = ChangeType.CONTENT,",typescript,selection_mouse
13,112539,"src/recording.ts",1359,579,"with the given parameters.\n *\n * @param {CSVRowBuilder} sequence - The sequence number of the change.\n * @param {CSVRowBuilder} rangeOffset - The offset of the changed range.\n * @param {CSVRowBuilder} rangeLength - The length of the changed range.\n * @param {CSVRowBuilder} text - The text of the change.\n * @param {string} type - The type of the change (optional, defaults to 'content').\n * @return {string} A CSV row string with the provided information.\n */\nexport function buildCsvRow({\n sequence,\n rangeOffset,\n rangeLength,\n text,\n type = ChangeType.CONTENT,",typescript,selection_mouse
14,112562,"src/recording.ts",938,1000," startDateTime: null,\n endDateTime: null,\n sequence: 0,\n customFolderName: '',\n activatedFiles: new Set<string>(),\n}\n\nlet intervalId: NodeJS.Timeout\nconst fileQueue: File[] = []\nlet isAppending = false\n\nlet uploadIntervalId: NodeJS.Timeout;\nconst sessionUuid = vscode.env.sessionId;\n\nconst API_GATEWAY_URL = 'https://knm3fmbwbi.execute-api.us-east-1.amazonaws.com/v1/recordings';\n\n\n/**\n * Builds a CSV row with the given parameters.\n *\n * @param {CSVRowBuilder} sequence - The sequence number of the change.\n * @param {CSVRowBuilder} rangeOffset - The offset of the changed range.\n * @param {CSVRowBuilder} rangeLength - The length of the changed range.\n * @param {CSVRowBuilder} text - The text of the change.\n * @param {string} type - The type of the change (optional, defaults to 'content').\n * @return {string} A CSV row string with the provided information.\n */\nexport function buildCsvRow({\n sequence,\n rangeOffset,\n rangeLength,\n text,\n type = ChangeType.CONTENT,",typescript,selection_mouse
15,112603,"src/recording.ts",751,1187," startRecording: 'vs-code-recorder.startRecording',\n stopRecording: 'vs-code-recorder.stopRecording',\n}\n\nexport const recording: Recording = {\n isRecording: false,\n timer: 0,\n startDateTime: null,\n endDateTime: null,\n sequence: 0,\n customFolderName: '',\n activatedFiles: new Set<string>(),\n}\n\nlet intervalId: NodeJS.Timeout\nconst fileQueue: File[] = []\nlet isAppending = false\n\nlet uploadIntervalId: NodeJS.Timeout;\nconst sessionUuid = vscode.env.sessionId;\n\nconst API_GATEWAY_URL = 'https://knm3fmbwbi.execute-api.us-east-1.amazonaws.com/v1/recordings';\n\n\n/**\n * Builds a CSV row with the given parameters.\n *\n * @param {CSVRowBuilder} sequence - The sequence number of the change.\n * @param {CSVRowBuilder} rangeOffset - The offset of the changed range.\n * @param {CSVRowBuilder} rangeLength - The length of the changed range.\n * @param {CSVRowBuilder} text - The text of the change.\n * @param {string} type - The type of the change (optional, defaults to 'content').\n * @return {string} A CSV row string with the provided information.\n */\nexport function buildCsvRow({\n sequence,\n rangeOffset,\n rangeLength,\n text,\n type = ChangeType.CONTENT,",typescript,selection_mouse
16,112674,"src/recording.ts",209,1729," getEditorFileName,\n escapeString,\n getEditorLanguage,\n notificationWithProgress,\n generateFileName,\n formatDisplayTime,\n getExportPath,\n logToOutput,\n formatSrtTime,\n getConfig,\n removeDoubleQuotes,\n unescapeString,\n addToGitignore,\n} from './utilities'\nimport { type File, ChangeType, type CSVRowBuilder, type Change, type Recording } from './types'\nimport { extContext, statusBarItem, actionsProvider } from './extension'\n\nexport const commands = {\n openSettings: 'vs-code-recorder.openSettings',\n startRecording: 'vs-code-recorder.startRecording',\n stopRecording: 'vs-code-recorder.stopRecording',\n}\n\nexport const recording: Recording = {\n isRecording: false,\n timer: 0,\n startDateTime: null,\n endDateTime: null,\n sequence: 0,\n customFolderName: '',\n activatedFiles: new Set<string>(),\n}\n\nlet intervalId: NodeJS.Timeout\nconst fileQueue: File[] = []\nlet isAppending = false\n\nlet uploadIntervalId: NodeJS.Timeout;\nconst sessionUuid = vscode.env.sessionId;\n\nconst API_GATEWAY_URL = 'https://knm3fmbwbi.execute-api.us-east-1.amazonaws.com/v1/recordings';\n\n\n/**\n * Builds a CSV row with the given parameters.\n *\n * @param {CSVRowBuilder} sequence - The sequence number of the change.\n * @param {CSVRowBuilder} rangeOffset - The offset of the changed range.\n * @param {CSVRowBuilder} rangeLength - The length of the changed range.\n * @param {CSVRowBuilder} text - The text of the change.\n * @param {string} type - The type of the change (optional, defaults to 'content').\n * @return {string} A CSV row string with the provided information.\n */\nexport function buildCsvRow({\n sequence,\n rangeOffset,\n rangeLength,\n text,\n type = ChangeType.CONTENT,",typescript,selection_mouse
17,112777,"src/recording.ts",0,1938,"import * as fs from 'node:fs'\nimport * as util from 'node:util'\nimport * as path from 'node:path'\nimport * as vscode from 'vscode'\nimport * as readline from 'node:readline'\nimport axios from 'axios';\nimport {\n getEditorFileName,\n escapeString,\n getEditorLanguage,\n notificationWithProgress,\n generateFileName,\n formatDisplayTime,\n getExportPath,\n logToOutput,\n formatSrtTime,\n getConfig,\n removeDoubleQuotes,\n unescapeString,\n addToGitignore,\n} from './utilities'\nimport { type File, ChangeType, type CSVRowBuilder, type Change, type Recording } from './types'\nimport { extContext, statusBarItem, actionsProvider } from './extension'\n\nexport const commands = {\n openSettings: 'vs-code-recorder.openSettings',\n startRecording: 'vs-code-recorder.startRecording',\n stopRecording: 'vs-code-recorder.stopRecording',\n}\n\nexport const recording: Recording = {\n isRecording: false,\n timer: 0,\n startDateTime: null,\n endDateTime: null,\n sequence: 0,\n customFolderName: '',\n activatedFiles: new Set<string>(),\n}\n\nlet intervalId: NodeJS.Timeout\nconst fileQueue: File[] = []\nlet isAppending = false\n\nlet uploadIntervalId: NodeJS.Timeout;\nconst sessionUuid = vscode.env.sessionId;\n\nconst API_GATEWAY_URL = 'https://knm3fmbwbi.execute-api.us-east-1.amazonaws.com/v1/recordings';\n\n\n/**\n * Builds a CSV row with the given parameters.\n *\n * @param {CSVRowBuilder} sequence - The sequence number of the change.\n * @param {CSVRowBuilder} rangeOffset - The offset of the changed range.\n * @param {CSVRowBuilder} rangeLength - The length of the changed range.\n * @param {CSVRowBuilder} text - The text of the change.\n * @param {string} type - The type of the change (optional, defaults to 'content').\n * @return {string} A CSV row string with the provided information.\n */\nexport function buildCsvRow({\n sequence,\n rangeOffset,\n rangeLength,\n text,\n type = ChangeType.CONTENT,",typescript,selection_mouse
18,113693,"src/recording.ts",481,0,"",typescript,selection_mouse
19,113744,"src/recording.ts",480,0,"",typescript,selection_command
20,114586,"src/recording.ts",1880,0,"",typescript,selection_mouse
21,114601,"src/recording.ts",1879,0,"",typescript,selection_command
22,115605,"src/recording.ts",1428,0,"",typescript,selection_mouse
23,115728,"src/recording.ts",1427,1,"h",typescript,selection_mouse
24,115749,"src/recording.ts",1388,40,"\n * @param {CSVRowBuilder} sequence - Th",typescript,selection_mouse
25,115769,"src/recording.ts",1338,90,"\n * Builds a CSV row with the given parameters.\n *\n * @param {CSVRowBuilder} sequence - Th",typescript,selection_mouse
26,115785,"src/recording.ts",1334,94,"\n/**\n * Builds a CSV row with the given parameters.\n *\n * @param {CSVRowBuilder} sequence - Th",typescript,selection_mouse
27,115802,"src/recording.ts",1333,95,"\n\n/**\n * Builds a CSV row with the given parameters.\n *\n * @param {CSVRowBuilder} sequence - Th",typescript,selection_mouse
28,115843,"src/recording.ts",1247,181,"GATEWAY_URL = 'https://knm3fmbwbi.execute-api.us-east-1.amazonaws.com/v1/recordings';\n\n\n/**\n * Builds a CSV row with the given parameters.\n *\n * @param {CSVRowBuilder} sequence - Th",typescript,selection_mouse
29,115862,"src/recording.ts",1245,183,"I_GATEWAY_URL = 'https://knm3fmbwbi.execute-api.us-east-1.amazonaws.com/v1/recordings';\n\n\n/**\n * Builds a CSV row with the given parameters.\n *\n * @param {CSVRowBuilder} sequence - Th",typescript,selection_mouse
30,115874,"src/recording.ts",1244,184,"PI_GATEWAY_URL = 'https://knm3fmbwbi.execute-api.us-east-1.amazonaws.com/v1/recordings';\n\n\n/**\n * Builds a CSV row with the given parameters.\n *\n * @param {CSVRowBuilder} sequence - Th",typescript,selection_mouse
31,115887,"src/recording.ts",1236,192,"\nconst API_GATEWAY_URL = 'https://knm3fmbwbi.execute-api.us-east-1.amazonaws.com/v1/recordings';\n\n\n/**\n * Builds a CSV row with the given parameters.\n *\n * @param {CSVRowBuilder} sequence - Th",typescript,selection_mouse
32,115932,"src/recording.ts",1197,231,"st sessionUuid = vscode.env.sessionId;\n\nconst API_GATEWAY_URL = 'https://knm3fmbwbi.execute-api.us-east-1.amazonaws.com/v1/recordings';\n\n\n/**\n * Builds a CSV row with the given parameters.\n *\n * @param {CSVRowBuilder} sequence - Th",typescript,selection_mouse
33,115953,"src/recording.ts",1196,232,"nst sessionUuid = vscode.env.sessionId;\n\nconst API_GATEWAY_URL = 'https://knm3fmbwbi.execute-api.us-east-1.amazonaws.com/v1/recordings';\n\n\n/**\n * Builds a CSV row with the given parameters.\n *\n * @param {CSVRowBuilder} sequence - Th",typescript,selection_mouse
34,115971,"src/recording.ts",1156,272,"let uploadIntervalId: NodeJS.Timeout;\nconst sessionUuid = vscode.env.sessionId;\n\nconst API_GATEWAY_URL = 'https://knm3fmbwbi.execute-api.us-east-1.amazonaws.com/v1/recordings';\n\n\n/**\n * Builds a CSV row with the given parameters.\n *\n * @param {CSVRowBuilder} sequence - Th",typescript,selection_mouse
35,119227,"src/recording.ts",1397,0,"",typescript,selection_mouse
36,119682,"src/recording.ts",1334,0,"",typescript,selection_mouse
37,119870,"src/recording.ts",1333,1,"\n",typescript,selection_mouse
38,119906,"src/recording.ts",1241,93,"t API_GATEWAY_URL = 'https://knm3fmbwbi.execute-api.us-east-1.amazonaws.com/v1/recordings';\n\n",typescript,selection_mouse
39,119925,"src/recording.ts",1238,96,"onst API_GATEWAY_URL = 'https://knm3fmbwbi.execute-api.us-east-1.amazonaws.com/v1/recordings';\n\n",typescript,selection_mouse
40,119943,"src/recording.ts",1236,98,"\nconst API_GATEWAY_URL = 'https://knm3fmbwbi.execute-api.us-east-1.amazonaws.com/v1/recordings';\n\n",typescript,selection_mouse
41,1197061,"src/recording.ts",1377,0,"",typescript,selection_mouse
42,1921598,"src/recording.ts",861,0,"",typescript,selection_mouse
43,1922403,"src/recording.ts",858,0,"",typescript,selection_mouse
44,1922419,"src/recording.ts",857,0,"",typescript,selection_command
45,1927795,"src/recording.ts",699,0,"",typescript,selection_mouse
46,1927814,"src/recording.ts",698,0,"",typescript,selection_command
47,1928319,"src/recording.ts",750,0,"",typescript,selection_mouse
48,1928323,"src/recording.ts",749,0,"",typescript,selection_command
49,1928867,"src/recording.ts",858,0,"",typescript,selection_mouse
50,1928871,"src/recording.ts",857,0,"",typescript,selection_command
51,1929665,"src/recording.ts",805,0,"",typescript,selection_mouse
52,1929683,"src/recording.ts",804,0,"",typescript,selection_command
53,1930233,"src/recording.ts",673,0,"",typescript,selection_mouse
54,2416235,"webpack.config.js",0,0,"const path = require('node:path')\n\n//@ts-check\n/** @typedef {import('webpack').Configuration} WebpackConfig **/\n\n/** @type WebpackConfig */\nconst extensionConfig = {\n\ttarget: 'node', // VS Code extensions run in a Node.js-context 📖 -> https://webpack.js.org/configuration/node/\n\tmode: 'none', // this leaves the source code as close as possible to the original (when packaging we set this to 'production')\n\n\tentry: './src/extension.ts', // the entry point of this extension, 📖 -> https://webpack.js.org/configuration/entry-context/\n\toutput: {\n\t\t// the bundle is stored in the 'out' folder (check package.json), 📖 -> https://webpack.js.org/configuration/output/\n\t\tpath: path.resolve(__dirname, 'out'),\n\t\tfilename: 'extension.js',\n\t\tlibraryTarget: 'commonjs2',\n\t},\n\texternals: {\n\t\tvscode: 'commonjs vscode', // the vscode-module is created on-the-fly and must be excluded. Add other modules that cannot be webpack'ed, 📖 -> https://webpack.js.org/configuration/externals/\n\t\t// modules added here also need to be added in the .vscodeignore file\n\t},\n\tresolve: {\n\t\t// support reading TypeScript and JavaScript files, 📖 -> https://github.com/TypeStrong/ts-loader\n\t\textensions: ['.ts', '.js'],\n\t},\n\tmodule: {\n\t\trules: [\n\t\t\t{\n\t\t\t\ttest: /\.ts$/,\n\t\t\t\texclude: /node_modules/,\n\t\t\t\tuse: [\n\t\t\t\t\t{\n\t\t\t\t\t\tloader: 'ts-loader',\n\t\t\t\t\t},\n\t\t\t\t],\n\t\t\t},\n\t\t],\n\t},\n\tdevtool: 'nosources-source-map',\n\tinfrastructureLogging: {\n\t\tlevel: 'log', // enables logging required for problem matchers\n\t},\n}\nmodule.exports = [extensionConfig]\n",javascript,tab
55,2425901,"webpack.config.js",1280,0,"",javascript,selection_mouse
56,2425908,"webpack.config.js",1279,0,"",javascript,selection_command
57,2427882,"webpack.config.js",1375,1,"p",javascript,selection_command
58,2432738,"webpack.config.js",1373,1,"m",javascript,selection_command
59,2432923,"webpack.config.js",1476,2,"mo",javascript,selection_command
60,2433056,"webpack.config.js",1476,3,"mod",javascript,selection_command
61,2433177,"webpack.config.js",1476,4,"modu",javascript,selection_command
62,2433348,"webpack.config.js",1476,5,"modul",javascript,selection_command
63,2433450,"webpack.config.js",1476,6,"module",javascript,selection_command
64,2434821,"webpack.config.js",1476,7,"module.",javascript,selection_command
65,2435024,"webpack.config.js",1476,8,"module.e",javascript,selection_command
66,2435234,"webpack.config.js",1476,9,"module.ex",javascript,selection_command
67,2435331,"webpack.config.js",1476,10,"module.exp",javascript,selection_command
68,2435515,"webpack.config.js",1476,11,"module.expo",javascript,selection_command
69,2436917,"webpack.config.js",1476,12,"module.expor",javascript,selection_command
70,2437101,"webpack.config.js",1476,13,"module.export",javascript,selection_command
71,2437372,"webpack.config.js",1476,14,"module.exports",javascript,selection_command
72,2447128,"webpack.config.js",1194,0,"",javascript,selection_mouse
73,2447152,"webpack.config.js",1193,0,"",javascript,selection_command
74,2515687,"webpack.config.js",1475,0,"",javascript,selection_mouse
75,2515710,"webpack.config.js",1474,0,"",javascript,selection_command
76,2516253,"webpack.config.js",1475,0,"",javascript,selection_mouse
77,2516257,"webpack.config.js",1474,0,"",javascript,selection_command
78,2534878,"webpack.config.js",1511,0,"",javascript,selection_mouse
79,2535066,"webpack.config.js",1475,36,"\nmodule.exports = [extensionConfig]\n",javascript,selection_mouse
80,2535087,"webpack.config.js",1223,288,"\t\t\ttest: /\.ts$/,\n\t\t\t\texclude: /node_modules/,\n\t\t\t\tuse: [\n\t\t\t\t\t{\n\t\t\t\t\t\tloader: 'ts-loader',\n\t\t\t\t\t},\n\t\t\t\t],\n\t\t\t},\n\t\t],\n\t},\n\tdevtool: 'nosources-source-map',\n\tinfrastructureLogging: {\n\t\tlevel: 'log', // enables logging required for problem matchers\n\t},\n}\nmodule.exports = [extensionConfig]\n",javascript,selection_mouse
81,2535114,"webpack.config.js",545,966,"\t\t// the bundle is stored in the 'out' folder (check package.json), 📖 -> https://webpack.js.org/configuration/output/\n\t\tpath: path.resolve(__dirname, 'out'),\n\t\tfilename: 'extension.js',\n\t\tlibraryTarget: 'commonjs2',\n\t},\n\texternals: {\n\t\tvscode: 'commonjs vscode', // the vscode-module is created on-the-fly and must be excluded. Add other modules that cannot be webpack'ed, 📖 -> https://webpack.js.org/configuration/externals/\n\t\t// modules added here also need to be added in the .vscodeignore file\n\t},\n\tresolve: {\n\t\t// support reading TypeScript and JavaScript files, 📖 -> https://github.com/TypeStrong/ts-loader\n\t\textensions: ['.ts', '.js'],\n\t},\n\tmodule: {\n\t\trules: [\n\t\t\t{\n\t\t\t\ttest: /\.ts$/,\n\t\t\t\texclude: /node_modules/,\n\t\t\t\tuse: [\n\t\t\t\t\t{\n\t\t\t\t\t\tloader: 'ts-loader',\n\t\t\t\t\t},\n\t\t\t\t],\n\t\t\t},\n\t\t],\n\t},\n\tdevtool: 'nosources-source-map',\n\tinfrastructureLogging: {\n\t\tlevel: 'log', // enables logging required for problem matchers\n\t},\n}\nmodule.exports = [extensionConfig]\n",javascript,selection_mouse
82,2535149,"webpack.config.js",534,977,"\toutput: {\n\t\t// the bundle is stored in the 'out' folder (check package.json), 📖 -> https://webpack.js.org/configuration/output/\n\t\tpath: path.resolve(__dirname, 'out'),\n\t\tfilename: 'extension.js',\n\t\tlibraryTarget: 'commonjs2',\n\t},\n\texternals: {\n\t\tvscode: 'commonjs vscode', // the vscode-module is created on-the-fly and must be excluded. Add other modules that cannot be webpack'ed, 📖 -> https://webpack.js.org/configuration/externals/\n\t\t// modules added here also need to be added in the .vscodeignore file\n\t},\n\tresolve: {\n\t\t// support reading TypeScript and JavaScript files, 📖 -> https://github.com/TypeStrong/ts-loader\n\t\textensions: ['.ts', '.js'],\n\t},\n\tmodule: {\n\t\trules: [\n\t\t\t{\n\t\t\t\ttest: /\.ts$/,\n\t\t\t\texclude: /node_modules/,\n\t\t\t\tuse: [\n\t\t\t\t\t{\n\t\t\t\t\t\tloader: 'ts-loader',\n\t\t\t\t\t},\n\t\t\t\t],\n\t\t\t},\n\t\t],\n\t},\n\tdevtool: 'nosources-source-map',\n\tinfrastructureLogging: {\n\t\tlevel: 'log', // enables logging required for problem matchers\n\t},\n}\nmodule.exports = [extensionConfig]\n",javascript,selection_mouse
83,2535183,"webpack.config.js",35,1476,"//@ts-check\n/** @typedef {import('webpack').Configuration} WebpackConfig **/\n\n/** @type WebpackConfig */\nconst extensionConfig = {\n\ttarget: 'node', // VS Code extensions run in a Node.js-context 📖 -> https://webpack.js.org/configuration/node/\n\tmode: 'none', // this leaves the source code as close as possible to the original (when packaging we set this to 'production')\n\n\tentry: './src/extension.ts', // the entry point of this extension, 📖 -> https://webpack.js.org/configuration/entry-context/\n\toutput: {\n\t\t// the bundle is stored in the 'out' folder (check package.json), 📖 -> https://webpack.js.org/configuration/output/\n\t\tpath: path.resolve(__dirname, 'out'),\n\t\tfilename: 'extension.js',\n\t\tlibraryTarget: 'commonjs2',\n\t},\n\texternals: {\n\t\tvscode: 'commonjs vscode', // the vscode-module is created on-the-fly and must be excluded. Add other modules that cannot be webpack'ed, 📖 -> https://webpack.js.org/configuration/externals/\n\t\t// modules added here also need to be added in the .vscodeignore file\n\t},\n\tresolve: {\n\t\t// support reading TypeScript and JavaScript files, 📖 -> https://github.com/TypeStrong/ts-loader\n\t\textensions: ['.ts', '.js'],\n\t},\n\tmodule: {\n\t\trules: [\n\t\t\t{\n\t\t\t\ttest: /\.ts$/,\n\t\t\t\texclude: /node_modules/,\n\t\t\t\tuse: [\n\t\t\t\t\t{\n\t\t\t\t\t\tloader: 'ts-loader',\n\t\t\t\t\t},\n\t\t\t\t],\n\t\t\t},\n\t\t],\n\t},\n\tdevtool: 'nosources-source-map',\n\tinfrastructureLogging: {\n\t\tlevel: 'log', // enables logging required for problem matchers\n\t},\n}\nmodule.exports = [extensionConfig]\n",javascript,selection_mouse
84,2535246,"webpack.config.js",0,1511,"const path = require('node:path')\n\n//@ts-check\n/** @typedef {import('webpack').Configuration} WebpackConfig **/\n\n/** @type WebpackConfig */\nconst extensionConfig = {\n\ttarget: 'node', // VS Code extensions run in a Node.js-context 📖 -> https://webpack.js.org/configuration/node/\n\tmode: 'none', // this leaves the source code as close as possible to the original (when packaging we set this to 'production')\n\n\tentry: './src/extension.ts', // the entry point of this extension, 📖 -> https://webpack.js.org/configuration/entry-context/\n\toutput: {\n\t\t// the bundle is stored in the 'out' folder (check package.json), 📖 -> https://webpack.js.org/configuration/output/\n\t\tpath: path.resolve(__dirname, 'out'),\n\t\tfilename: 'extension.js',\n\t\tlibraryTarget: 'commonjs2',\n\t},\n\texternals: {\n\t\tvscode: 'commonjs vscode', // the vscode-module is created on-the-fly and must be excluded. Add other modules that cannot be webpack'ed, 📖 -> https://webpack.js.org/configuration/externals/\n\t\t// modules added here also need to be added in the .vscodeignore file\n\t},\n\tresolve: {\n\t\t// support reading TypeScript and JavaScript files, 📖 -> https://github.com/TypeStrong/ts-loader\n\t\textensions: ['.ts', '.js'],\n\t},\n\tmodule: {\n\t\trules: [\n\t\t\t{\n\t\t\t\ttest: /\.ts$/,\n\t\t\t\texclude: /node_modules/,\n\t\t\t\tuse: [\n\t\t\t\t\t{\n\t\t\t\t\t\tloader: 'ts-loader',\n\t\t\t\t\t},\n\t\t\t\t],\n\t\t\t},\n\t\t],\n\t},\n\tdevtool: 'nosources-source-map',\n\tinfrastructureLogging: {\n\t\tlevel: 'log', // enables logging required for problem matchers\n\t},\n}\nmodule.exports = [extensionConfig]\n",javascript,selection_mouse
85,2535277,"webpack.config.js",1475,36,"\nmodule.exports = [extensionConfig]\n",javascript,selection_command
86,2535312,"webpack.config.js",0,1511,"const path = require('node:path')\n\n//@ts-check\n/** @typedef {import('webpack').Configuration} WebpackConfig **/\n\n/** @type WebpackConfig */\nconst extensionConfig = {\n\ttarget: 'node', // VS Code extensions run in a Node.js-context 📖 -> https://webpack.js.org/configuration/node/\n\tmode: 'none', // this leaves the source code as close as possible to the original (when packaging we set this to 'production')\n\n\tentry: './src/extension.ts', // the entry point of this extension, 📖 -> https://webpack.js.org/configuration/entry-context/\n\toutput: {\n\t\t// the bundle is stored in the 'out' folder (check package.json), 📖 -> https://webpack.js.org/configuration/output/\n\t\tpath: path.resolve(__dirname, 'out'),\n\t\tfilename: 'extension.js',\n\t\tlibraryTarget: 'commonjs2',\n\t},\n\texternals: {\n\t\tvscode: 'commonjs vscode', // the vscode-module is created on-the-fly and must be excluded. Add other modules that cannot be webpack'ed, 📖 -> https://webpack.js.org/configuration/externals/\n\t\t// modules added here also need to be added in the .vscodeignore file\n\t},\n\tresolve: {\n\t\t// support reading TypeScript and JavaScript files, 📖 -> https://github.com/TypeStrong/ts-loader\n\t\textensions: ['.ts', '.js'],\n\t},\n\tmodule: {\n\t\trules: [\n\t\t\t{\n\t\t\t\ttest: /\.ts$/,\n\t\t\t\texclude: /node_modules/,\n\t\t\t\tuse: [\n\t\t\t\t\t{\n\t\t\t\t\t\tloader: 'ts-loader',\n\t\t\t\t\t},\n\t\t\t\t],\n\t\t\t},\n\t\t],\n\t},\n\tdevtool: 'nosources-source-map',\n\tinfrastructureLogging: {\n\t\tlevel: 'log', // enables logging required for problem matchers\n\t},\n}\nmodule.exports = [extensionConfig]\n",javascript,selection_mouse
87,2607682,"webpack.config.js",112,0,"",javascript,selection_mouse
88,2617206,"TERMINAL",0,0,"git status",,terminal_command
89,2617228,"TERMINAL",0,0,"]633;E;git status;a808ac8a-c4ad-476e-8ef8-0a741b3ad045]633;C",,terminal_output
90,2642884,"webpack.config.js",1240,0,"",javascript,selection_mouse
91,2642911,"webpack.config.js",1239,0,"",javascript,selection_command
92,2646548,"webpack.config.js",33,0,"",javascript,selection_mouse
93,2646552,"webpack.config.js",32,0,"",javascript,selection_command
94,2647516,"webpack.config.js",33,0,"\n",javascript,content
95,2647689,"webpack.config.js",34,0,"const webpack = require('webpack');",javascript,content
96,2652049,"webpack.config.js",68,1,"",javascript,content
97,2659852,"webpack.config.js",1510,0,"",javascript,selection_mouse
98,2660383,"webpack.config.js",1508,0,"",javascript,selection_mouse
99,2660796,"webpack.config.js",1508,0,"\n\t",javascript,content
100,2661060,"webpack.config.js",1510,0,"plugins: [\n new webpack.DefinePlugin({\n // This replaces all instances of process.env.API_GATEWAY_URL in your code\n // with the actual value of the environment variable at build time.\n // JSON.stringify is crucial to ensure the value is injected as a string literal.\n 'process.env.API_GATEWAY_URL': JSON.stringify(process.env.API_GATEWAY_URL)\n })\n ]\n",javascript,content
101,2667038,"webpack.config.js",1574,0,"",javascript,selection_mouse
102,2667181,"webpack.config.js",1571,4,"This",javascript,selection_mouse
103,2667709,"webpack.config.js",1555,0,"",javascript,selection_mouse
104,2668235,"webpack.config.js",1593,0,"",javascript,selection_mouse
105,2668375,"webpack.config.js",1592,0,"",javascript,selection_command
106,2673165,"webpack.config.js",1556,167,"",javascript,content
107,2673217,"webpack.config.js",1568,0,"",javascript,selection_command
108,2675021,"webpack.config.js",1617,0,"",javascript,selection_mouse
109,2675581,"webpack.config.js",1612,0,"",javascript,selection_mouse
110,2676105,"webpack.config.js",1603,0,"",javascript,selection_mouse
111,2676854,"webpack.config.js",1556,94,"",javascript,content
112,2676905,"webpack.config.js",1568,0,"",javascript,selection_command
113,2677875,"webpack.config.js",1653,0,"",javascript,selection_mouse
114,2677879,"webpack.config.js",1652,0,"",javascript,selection_command
115,2678380,"webpack.config.js",1660,0,"",javascript,selection_mouse
116,2679060,"webpack.config.js",1610,0,"",javascript,selection_mouse
117,2679619,"webpack.config.js",1659,0,"",javascript,selection_mouse
118,2679640,"webpack.config.js",1658,0,"",javascript,selection_command
119,2684479,"webpack.config.js",1581,0,"",javascript,selection_mouse
120,2685959,"webpack.config.js",1581,0,"C",javascript,content
121,2685970,"webpack.config.js",1582,0,"",javascript,selection_keyboard
122,2686293,"webpack.config.js",1582,0,"R",javascript,content
123,2686298,"webpack.config.js",1583,0,"",javascript,selection_keyboard
124,2686535,"webpack.config.js",1583,0,"O",javascript,content
125,2686541,"webpack.config.js",1584,0,"",javascript,selection_keyboard
126,2686651,"webpack.config.js",1584,0,"W",javascript,content
127,2686656,"webpack.config.js",1585,0,"",javascript,selection_keyboard
128,2686802,"webpack.config.js",1585,0,"D",javascript,content
129,2686806,"webpack.config.js",1586,0,"",javascript,selection_keyboard
130,2687144,"webpack.config.js",1586,0,"-",javascript,content
131,2687148,"webpack.config.js",1587,0,"",javascript,selection_keyboard
132,2687522,"webpack.config.js",1587,0,"C",javascript,content
133,2687526,"webpack.config.js",1588,0,"",javascript,selection_keyboard
134,2687648,"webpack.config.js",1588,0,"O",javascript,content
135,2687653,"webpack.config.js",1589,0,"",javascript,selection_keyboard
136,2687781,"webpack.config.js",1589,0,"D",javascript,content
137,2687785,"webpack.config.js",1590,0,"",javascript,selection_keyboard
138,2687868,"webpack.config.js",1590,0,"E",javascript,content
139,2687872,"webpack.config.js",1591,0,"",javascript,selection_keyboard
140,2688510,"webpack.config.js",1590,0,"",javascript,selection_command
141,2688700,"webpack.config.js",1589,0,"",javascript,selection_command
142,2688858,"webpack.config.js",1588,0,"",javascript,selection_command
143,2689013,"webpack.config.js",1587,0,"",javascript,selection_command
144,2689269,"webpack.config.js",1586,1,"",javascript,content
145,2690004,"webpack.config.js",1586,0,"_",javascript,content
146,2690011,"webpack.config.js",1587,0,"",javascript,selection_keyboard
147,2690429,"webpack.config.js",1588,0,"",javascript,selection_command
148,2690598,"webpack.config.js",1589,0,"",javascript,selection_command
149,2690745,"webpack.config.js",1590,0,"",javascript,selection_command
150,2691065,"webpack.config.js",1591,0,"",javascript,selection_command
151,2691646,"webpack.config.js",1591,0,"_",javascript,content
152,2691654,"webpack.config.js",1592,0,"",javascript,selection_keyboard
153,2701818,"webpack.config.js",1582,0,"",javascript,selection_mouse
154,2701996,"webpack.config.js",1581,26,"CROWD_CODE_API_GATEWAY_URL",javascript,selection_mouse
155,2704695,"webpack.config.js",1645,0,"",javascript,selection_mouse
156,2704837,"webpack.config.js",1637,15,"API_GATEWAY_URL",javascript,selection_mouse
157,2705153,"webpack.config.js",1637,15,"",javascript,content
158,2705431,"webpack.config.js",1637,0,"CROWD_CODE_API_GATEWAY_URL",javascript,content
159,2731780,"src/recording.ts",0,0,"",typescript,tab
160,2734234,"src/recording.ts",1333,0,"",typescript,selection_mouse
161,2734553,"src/recording.ts",1237,96,"const API_GATEWAY_URL = 'https://knm3fmbwbi.execute-api.us-east-1.amazonaws.com/v1/recordings';\n",typescript,selection_mouse
162,2735798,"src/recording.ts",1237,97,"",typescript,content
163,2736300,"src/recording.ts",1237,0,"const API_GATEWAY_URL = process.env.API_GATEWAY_URL;\n\n// You might want to add a fallback for local development or a check if it's undefined\nif (!API_GATEWAY_URL) {\n console.error(""API Gateway URL is not defined. Please check your build configuration."");\n // Potentially disable the upload feature if the URL is not set.\n}",typescript,content
164,2737402,"src/recording.ts",1302,0,"",typescript,selection_mouse
165,2737524,"src/recording.ts",1298,5,"might",typescript,selection_mouse
166,2737669,"src/recording.ts",1291,87,"// You might want to add a fallback for local development or a check if it's undefined\n",typescript,selection_mouse
167,2743331,"src/recording.ts",1291,87,"",typescript,content
168,2744267,"src/recording.ts",1290,1,"",typescript,content
169,2744892,"src/recording.ts",1290,0,"\n",typescript,content
170,2745644,"src/recording.ts",1315,0,"",typescript,selection_command
171,2745758,"src/recording.ts",1406,0,"",typescript,selection_command
172,2746162,"src/recording.ts",1406,67,"",typescript,content
173,2746276,"src/recording.ts",1315,0,"",typescript,selection_command
174,2747466,"src/recording.ts",1317,0,"",typescript,selection_command
175,2757138,"src/recording.ts",1315,0,"",typescript,selection_command
176,2758615,"src/recording.ts",1315,0," logToOutput(""API Gateway URL is not defined. Please check your build configuration."", 'error');\n",typescript,content
177,2758663,"src/recording.ts",1413,91,"",typescript,content
178,2762614,"src/recording.ts",1414,0,"",typescript,selection_mouse
179,2762631,"src/recording.ts",1413,0,"",typescript,selection_command
180,2763196,"src/recording.ts",1414,0,"",typescript,selection_mouse
181,2763212,"src/recording.ts",1413,0,"",typescript,selection_command
182,2766245,"src/recording.ts",1343,0,"",typescript,selection_mouse
183,2767001,"src/recording.ts",1333,0,"",typescript,selection_mouse
184,2767144,"src/recording.ts",1330,3,"API",typescript,selection_mouse
185,2767311,"src/recording.ts",1330,4,"API ",typescript,selection_mouse
186,2767333,"src/recording.ts",1330,11,"API Gateway",typescript,selection_mouse
187,2767367,"src/recording.ts",1330,12,"API Gateway ",typescript,selection_mouse
188,2767381,"src/recording.ts",1330,18,"API Gateway URL is",typescript,selection_mouse
189,2767396,"src/recording.ts",1330,22,"API Gateway URL is not",typescript,selection_mouse
190,2767417,"src/recording.ts",1330,30,"API Gateway URL is not defined",typescript,selection_mouse
191,2767438,"src/recording.ts",1330,32,"API Gateway URL is not defined. ",typescript,selection_mouse
192,2767462,"src/recording.ts",1330,38,"API Gateway URL is not defined. Please",typescript,selection_mouse
193,2767485,"src/recording.ts",1330,44,"API Gateway URL is not defined. Please check",typescript,selection_mouse
194,2767509,"src/recording.ts",1330,49,"API Gateway URL is not defined. Please check your",typescript,selection_mouse
195,2767532,"src/recording.ts",1330,55,"API Gateway URL is not defined. Please check your build",typescript,selection_mouse
196,2767556,"src/recording.ts",1330,69,"API Gateway URL is not defined. Please check your build configuration",typescript,selection_mouse
197,2767656,"src/recording.ts",1330,71,"API Gateway URL is not defined. Please check your build configuration.""",typescript,selection_mouse
198,2767703,"src/recording.ts",1330,73,"API Gateway URL is not defined. Please check your build configuration."", ",typescript,selection_mouse
199,2767723,"src/recording.ts",1330,74,"API Gateway URL is not defined. Please check your build configuration."", '",typescript,selection_mouse
200,2767807,"src/recording.ts",1330,79,"API Gateway URL is not defined. Please check your build configuration."", 'error",typescript,selection_mouse
201,2768219,"src/recording.ts",1406,0,"",typescript,selection_mouse
202,2768232,"src/recording.ts",1404,5,"error",typescript,selection_mouse
203,2768376,"src/recording.ts",1402,7," 'error",typescript,selection_mouse
204,2768393,"src/recording.ts",1386,23,"configuration."", 'error",typescript,selection_mouse
205,2768426,"src/recording.ts",1380,29,"build configuration."", 'error",typescript,selection_mouse
206,2768441,"src/recording.ts",1375,34,"your build configuration."", 'error",typescript,selection_mouse
207,2768454,"src/recording.ts",1369,40,"check your build configuration."", 'error",typescript,selection_mouse
208,2768485,"src/recording.ts",1368,41," check your build configuration."", 'error",typescript,selection_mouse
209,2768511,"src/recording.ts",1362,47,"Please check your build configuration."", 'error",typescript,selection_mouse
210,2768531,"src/recording.ts",1353,56,"defined. Please check your build configuration."", 'error",typescript,selection_mouse
211,2768580,"src/recording.ts",1352,57," defined. Please check your build configuration."", 'error",typescript,selection_mouse
212,2768604,"src/recording.ts",1349,60,"not defined. Please check your build configuration."", 'error",typescript,selection_mouse
213,2768624,"src/recording.ts",1348,61," not defined. Please check your build configuration."", 'error",typescript,selection_mouse
214,2768651,"src/recording.ts",1346,63,"is not defined. Please check your build configuration."", 'error",typescript,selection_mouse
215,2768677,"src/recording.ts",1345,64," is not defined. Please check your build configuration."", 'error",typescript,selection_mouse
216,2768702,"src/recording.ts",1342,67,"URL is not defined. Please check your build configuration."", 'error",typescript,selection_mouse
217,2768731,"src/recording.ts",1341,68," URL is not defined. Please check your build configuration."", 'error",typescript,selection_mouse
218,2768762,"src/recording.ts",1334,75,"Gateway URL is not defined. Please check your build configuration."", 'error",typescript,selection_mouse
219,2768858,"src/recording.ts",1333,76," Gateway URL is not defined. Please check your build configuration."", 'error",typescript,selection_mouse
220,2768909,"src/recording.ts",1330,79,"API Gateway URL is not defined. Please check your build configuration."", 'error",typescript,selection_mouse
221,2769352,"src/recording.ts",1331,0,"",typescript,selection_mouse
222,2769364,"src/recording.ts",1330,3,"API",typescript,selection_mouse
223,2769542,"src/recording.ts",1330,84,"API Gateway URL is not defined. Please check your build configuration."", 'error');\n}",typescript,selection_mouse
224,2769576,"src/recording.ts",1330,12,"API Gateway ",typescript,selection_mouse
225,2769598,"src/recording.ts",1330,19,"API Gateway URL is ",typescript,selection_mouse
226,2769613,"src/recording.ts",1330,30,"API Gateway URL is not defined",typescript,selection_mouse
227,2769634,"src/recording.ts",1330,32,"API Gateway URL is not defined. ",typescript,selection_mouse
228,2769649,"src/recording.ts",1330,44,"API Gateway URL is not defined. Please check",typescript,selection_mouse
229,2769663,"src/recording.ts",1330,45,"API Gateway URL is not defined. Please check ",typescript,selection_mouse
230,2769683,"src/recording.ts",1330,50,"API Gateway URL is not defined. Please check your ",typescript,selection_mouse
231,2769696,"src/recording.ts",1330,55,"API Gateway URL is not defined. Please check your build",typescript,selection_mouse
232,2769710,"src/recording.ts",1330,56,"API Gateway URL is not defined. Please check your build ",typescript,selection_mouse
233,2769739,"src/recording.ts",1330,69,"API Gateway URL is not defined. Please check your build configuration",typescript,selection_mouse
234,2770197,"src/recording.ts",1399,0,"",typescript,selection_mouse
235,2770982,"src/recording.ts",1245,0,"",typescript,selection_mouse
236,2771760,"src/recording.ts",1243,0,"",typescript,selection_mouse
237,2774248,"src/recording.ts",1242,0,"",typescript,selection_command
238,2774532,"src/recording.ts",1243,0,"",typescript,selection_mouse
239,2774702,"src/recording.ts",1243,15,"API_GATEWAY_URL",typescript,selection_mouse
240,2788335,"src/recording.ts",1258,0,"",typescript,selection_command
241,2788379,"src/recording.ts",7949,0,"CROWD_CODE_",typescript,content
242,2788380,"src/recording.ts",1296,0,"CROWD_CODE_",typescript,content
243,2788380,"src/recording.ts",1243,0,"CROWD_CODE_",typescript,content
244,2789416,"src/recording.ts",1287,0,"",typescript,selection_mouse
245,2789557,"src/recording.ts",1284,15,"API_GATEWAY_URL",typescript,selection_mouse
246,2790016,"src/recording.ts",1284,15,"",typescript,content
247,2790883,"src/recording.ts",1284,0,"CROWD_CODE_API_GATEWAY_URL",typescript,content
248,2791951,"src/recording.ts",1301,0,"",typescript,selection_mouse
249,2792101,"src/recording.ts",1284,26,"CROWD_CODE_API_GATEWAY_URL",typescript,selection_mouse
250,2792652,"src/recording.ts",1347,0,"",typescript,selection_mouse
251,2793600,"src/recording.ts",1346,0,"",typescript,selection_command
252,2803514,"src/recording.ts",7996,0,"",typescript,selection_mouse
253,2803660,"src/recording.ts",7982,26,"CROWD_CODE_API_GATEWAY_URL",typescript,selection_mouse
254,2804609,"src/recording.ts",7992,0,"",typescript,selection_mouse
255,2813646,"src/recording.ts",7991,0,"",typescript,selection_command
256,3339766,"src/recording.ts",272,0,"",typescript,selection_mouse
257,3339795,"src/recording.ts",271,0,"",typescript,selection_command
258,3344656,"src/recording.ts",402,0,"",typescript,selection_mouse
259,3344674,"src/recording.ts",401,0,"",typescript,selection_command
260,3346597,"src/recording.ts",1451,0,"",typescript,selection_mouse
261,3346620,"src/recording.ts",1450,0,"",typescript,selection_command
262,3347135,"src/recording.ts",1447,0,"",typescript,selection_mouse
263,3347139,"src/recording.ts",1446,0,"",typescript,selection_command
264,3347287,"src/recording.ts",1447,0,"",typescript,selection_mouse
265,3347302,"src/recording.ts",1446,0,"",typescript,selection_command
266,3347443,"src/recording.ts",1446,1,"}",typescript,selection_mouse
267,3347463,"src/recording.ts",1350,96,"logToOutput(""API Gateway URL is not defined. Please check your build configuration."", 'error');\n",typescript,selection_mouse
268,3347485,"src/recording.ts",1447,0,"",typescript,selection_command
269,3347509,"src/recording.ts",1315,132," (!CROWD_CODE_API_GATEWAY_URL) {\n logToOutput(""API Gateway URL is not defined. Please check your build configuration."", 'error');\n}",typescript,selection_mouse
270,3347700,"src/recording.ts",1313,134,"if (!CROWD_CODE_API_GATEWAY_URL) {\n logToOutput(""API Gateway URL is not defined. Please check your build configuration."", 'error');\n}",typescript,selection_mouse
271,3349002,"src/recording.ts",1351,0,"",typescript,selection_mouse
272,3352769,"src/recording.ts",1451,0,"",typescript,selection_mouse
273,3352781,"src/recording.ts",1450,0,"",typescript,selection_command
274,3353313,"src/recording.ts",1447,0,"",typescript,selection_mouse
275,3353333,"src/recording.ts",1446,0,"",typescript,selection_command
276,3358474,"src/recording.ts",1261,0,"",typescript,selection_mouse
277,3358630,"src/recording.ts",1243,26,"CROWD_CODE_API_GATEWAY_URL",typescript,selection_mouse
278,3372309,"src/recording.ts",1447,0,"",typescript,selection_mouse
279,3372327,"src/recording.ts",1446,0,"",typescript,selection_command
280,3373322,"src/recording.ts",1446,1,"}",typescript,selection_command
281,3373541,"src/recording.ts",1348,99," logToOutput(""API Gateway URL is not defined. Please check your build configuration."", 'error');\n}",typescript,selection_command
282,3373696,"src/recording.ts",1313,134,"if (!CROWD_CODE_API_GATEWAY_URL) {\n logToOutput(""API Gateway URL is not defined. Please check your build configuration."", 'error');\n}",typescript,selection_command
283,3373932,"src/recording.ts",1313,135,"",typescript,content
284,3378515,"src/recording.ts",7466,0,"",typescript,selection_mouse
285,3378538,"src/recording.ts",7465,0,"",typescript,selection_command
286,3379323,"src/recording.ts",7476,0,"",typescript,selection_mouse
287,3379338,"src/recording.ts",7475,0,"",typescript,selection_command
288,3380133,"src/recording.ts",7476,0,"\nif (!CROWD_CODE_API_GATEWAY_URL) {\n logToOutput(""API Gateway URL is not defined. Please check your build configuration."", 'error');\n}",typescript,content
289,3380172,"src/recording.ts",7477,0,"",typescript,selection_command
290,3381359,"src/recording.ts",7477,34,"if (!CROWD_CODE_API_GATEWAY_URL) {",typescript,selection_command
291,3381576,"src/recording.ts",7477,132,"if (!CROWD_CODE_API_GATEWAY_URL) {\n logToOutput(""API Gateway URL is not defined. Please check your build configuration."", 'error');",typescript,selection_command
292,3381873,"src/recording.ts",7477,134,"if (!CROWD_CODE_API_GATEWAY_URL) {\n logToOutput(""API Gateway URL is not defined. Please check your build configuration."", 'error');\n}",typescript,selection_command
293,3382081,"src/recording.ts",7477,0,"",typescript,selection_command
294,3382242,"src/recording.ts",7610,0," ",typescript,content
295,3382242,"src/recording.ts",7514,0," ",typescript,content
296,3382242,"src/recording.ts",7477,0," ",typescript,content
297,3382531,"src/recording.ts",7620,0," ",typescript,content
298,3382531,"src/recording.ts",7520,0," ",typescript,content
299,3382532,"src/recording.ts",7481,0," ",typescript,content
300,3383056,"src/recording.ts",7484,0,"",typescript,selection_command
301,3383318,"src/recording.ts",7474,0,"",typescript,selection_command
302,3383541,"src/recording.ts",7484,0,"",typescript,selection_command
303,3383999,"src/recording.ts",7527,0,"",typescript,selection_command
304,3384438,"src/recording.ts",7527,0," ",typescript,content
305,3385942,"src/recording.ts",7528,0," ",typescript,content
306,3386713,"src/recording.ts",7532,1,"",typescript,content
307,3387171,"src/recording.ts",7531,0,"",typescript,selection_command
308,3387615,"src/recording.ts",7488,0,"",typescript,selection_command
309,3387819,"src/recording.ts",7475,0,"",typescript,selection_command
310,3389497,"src/recording.ts",7476,0,"\n ",typescript,content
311,3389771,"src/recording.ts",7477,8,"",typescript,content
312,3390055,"src/recording.ts",7478,0,"",typescript,selection_command
313,3390248,"src/recording.ts",7521,0,"",typescript,selection_command
314,3392603,"src/recording.ts",7477,0,"",typescript,selection_mouse
315,3393281,"src/recording.ts",7513,0,"",typescript,selection_mouse
316,3393436,"src/recording.ts",7491,26,"CROWD_CODE_API_GATEWAY_URL",typescript,selection_mouse
317,3394390,"src/recording.ts",7555,0,"",typescript,selection_mouse
318,3394540,"src/recording.ts",7550,7,"Gateway",typescript,selection_mouse
319,3394759,"src/recording.ts",7550,8,"Gateway ",typescript,selection_mouse
320,3394775,"src/recording.ts",7550,11,"Gateway URL",typescript,selection_mouse
321,3394800,"src/recording.ts",7550,14,"Gateway URL is",typescript,selection_mouse
322,3394816,"src/recording.ts",7550,18,"Gateway URL is not",typescript,selection_mouse
323,3394832,"src/recording.ts",7550,19,"Gateway URL is not ",typescript,selection_mouse
324,3394849,"src/recording.ts",7550,26,"Gateway URL is not defined",typescript,selection_mouse
325,3395284,"src/recording.ts",7575,0,"",typescript,selection_mouse
326,3395316,"src/recording.ts",7569,7,"defined",typescript,selection_mouse
327,3395543,"src/recording.ts",7565,11,"not defined",typescript,selection_mouse
328,3395566,"src/recording.ts",7562,14,"is not defined",typescript,selection_mouse
329,3395581,"src/recording.ts",7558,18,"URL is not defined",typescript,selection_mouse
330,3395595,"src/recording.ts",7550,26,"Gateway URL is not defined",typescript,selection_mouse
331,3395710,"src/recording.ts",7549,27," Gateway URL is not defined",typescript,selection_mouse
332,3395734,"src/recording.ts",7546,30,"API Gateway URL is not defined",typescript,selection_mouse
333,3396245,"src/recording.ts",7547,0,"",typescript,selection_mouse
334,3396451,"src/recording.ts",7546,3,"API",typescript,selection_mouse
335,3396660,"src/recording.ts",7546,11,"API Gateway",typescript,selection_mouse
336,3396695,"src/recording.ts",7546,12,"API Gateway ",typescript,selection_mouse
337,3396716,"src/recording.ts",7546,15,"API Gateway URL",typescript,selection_mouse
338,3396731,"src/recording.ts",7546,19,"API Gateway URL is ",typescript,selection_mouse
339,3396753,"src/recording.ts",7546,22,"API Gateway URL is not",typescript,selection_mouse
340,3396768,"src/recording.ts",7520,29,"\n logToOutput(""API",typescript,selection_mouse
341,3396804,"src/recording.ts",7546,30,"API Gateway URL is not defined",typescript,selection_mouse
342,3396838,"src/recording.ts",7546,32,"API Gateway URL is not defined. ",typescript,selection_mouse
343,3396859,"src/recording.ts",7546,38,"API Gateway URL is not defined. Please",typescript,selection_mouse
344,3396911,"src/recording.ts",7546,44,"API Gateway URL is not defined. Please check",typescript,selection_mouse
345,3396978,"src/recording.ts",7546,45,"API Gateway URL is not defined. Please check ",typescript,selection_mouse
346,3396999,"src/recording.ts",7546,49,"API Gateway URL is not defined. Please check your",typescript,selection_mouse
347,3397033,"src/recording.ts",7546,50,"API Gateway URL is not defined. Please check your ",typescript,selection_mouse
348,3397047,"src/recording.ts",7546,55,"API Gateway URL is not defined. Please check your build",typescript,selection_mouse
349,3397098,"src/recording.ts",7546,56,"API Gateway URL is not defined. Please check your build ",typescript,selection_mouse
350,3397112,"src/recording.ts",7546,69,"API Gateway URL is not defined. Please check your build configuration",typescript,selection_mouse
351,3397604,"src/recording.ts",7610,0,"",typescript,selection_mouse
352,3397615,"src/recording.ts",7602,13,"configuration",typescript,selection_mouse
353,3397779,"src/recording.ts",7602,36,"configuration."", 'error');\n }",typescript,selection_mouse
354,3397828,"src/recording.ts",7595,20," build configuration",typescript,selection_mouse
355,3397845,"src/recording.ts",7590,25," your build configuration",typescript,selection_mouse
356,3397868,"src/recording.ts",7585,30,"check your build configuration",typescript,selection_mouse
357,3397884,"src/recording.ts",7578,37,"Please check your build configuration",typescript,selection_mouse
358,3397944,"src/recording.ts",7569,46,"defined. Please check your build configuration",typescript,selection_mouse
359,3398058,"src/recording.ts",7602,36,"configuration."", 'error');\n }",typescript,selection_mouse
360,3398224,"src/recording.ts",7557,58," URL is not defined. Please check your build configuration",typescript,selection_mouse
361,3398247,"src/recording.ts",7550,65,"Gateway URL is not defined. Please check your build configuration",typescript,selection_mouse
362,3398456,"src/recording.ts",7549,66," Gateway URL is not defined. Please check your build configuration",typescript,selection_mouse
363,3398489,"src/recording.ts",7546,69,"API Gateway URL is not defined. Please check your build configuration",typescript,selection_mouse
364,3398999,"src/recording.ts",7546,0,"",typescript,selection_mouse
365,3399711,"src/recording.ts",7553,0,"",typescript,selection_mouse
366,3400356,"src/recording.ts",7505,0,"",typescript,selection_mouse
367,3401445,"src/recording.ts",7638,0,"",typescript,selection_mouse
368,3401465,"src/recording.ts",7637,0,"",typescript,selection_command
369,3402320,"src/recording.ts",7638,0,"\n ",typescript,content
370,3402851,"src/recording.ts",7643,4,"",typescript,content
371,3403040,"src/recording.ts",7639,4,"",typescript,content
372,3403207,"src/recording.ts",7638,1,"",typescript,content
373,3403652,"src/recording.ts",7638,0,"\n ",typescript,content
374,3410819,"src/recording.ts",7499,0,"",typescript,selection_mouse
375,3410972,"src/recording.ts",7491,26,"CROWD_CODE_API_GATEWAY_URL",typescript,selection_mouse
376,3411217,"src/recording.ts",7516,0,"",typescript,selection_command
377,3416684,"src/recording.ts",7639,0,"",typescript,selection_command
378,3417954,"src/recording.ts",7639,0," // INSERT_YOUR_CODE\n",typescript,content
379,3418282,"src/recording.ts",7667,0," // Ensure CROWD_CODE_API_GATEWAY_URL is a string\n",typescript,content
380,3418580,"src/recording.ts",7724,0," if (typeof CROWD_CODE_API_GATEWAY_URL !== 'string' || !CROWD_CODE_API_GATEWAY_URL.trim()) {\n",typescript,content
381,3418847,"src/recording.ts",7824,0," logToOutput(""CROWD_CODE_API_GATEWAY_URL must be a non-empty string. Please check your build configuration."", 'error');\n",typescript,content
382,3418901,"src/recording.ts",7955,0," return;\n",typescript,content
383,3418956,"src/recording.ts",7975,0," }\n",typescript,content
384,3418986,"src/recording.ts",7985,9,"",typescript,content
385,3420897,"src/recording.ts",7688,0,"",typescript,selection_mouse
386,3442637,"src/recording.ts",7639,85,"",typescript,content
387,3442694,"src/recording.ts",7647,0,"",typescript,selection_command
388,3443201,"src/recording.ts",7637,0,"",typescript,selection_command
389,3443560,"src/recording.ts",7521,118,"",typescript,content
390,3443615,"src/recording.ts",7529,0,"",typescript,selection_command
391,3443944,"src/recording.ts",7499,0,"",typescript,selection_command
392,3444352,"src/recording.ts",7478,43,"",typescript,content
393,3444449,"src/recording.ts",7486,0,"",typescript,selection_command
394,3465330,"src/recording.ts",7966,0,"",typescript,selection_mouse
395,3465360,"src/recording.ts",7965,0,"",typescript,selection_command
396,3492209,"src/recording.ts",7612,0,"",typescript,selection_mouse
397,3492346,"src/recording.ts",7603,26,"CROWD_CODE_API_GATEWAY_URL",typescript,selection_mouse
398,3577415,"TERMINAL",0,0,"source ~/.bashrc",,terminal_command
399,3577493,"TERMINAL",0,0,"]633;E;source ~/.bashrc ;a808ac8a-c4ad-476e-8ef8-0a741b3ad045]633;C]0;maharajamihir@mihir-xps139305:~/Projects/coding-extension/crowd-code[?2004h[01;32m[maharajamihir@mihir-xps139305[01;37m crowd-code[01;32m]$[00m ",,terminal_output
400,3578555,"src/recording.ts",1671,0,"",typescript,selection_mouse
401,3579083,"src/recording.ts",1236,0,"",typescript,selection_mouse
402,3579650,"src/recording.ts",1345,0,"",typescript,selection_mouse
403,3599375,"TERMINAL",0,0,"\r[K\r[01;32m[maharajamihir@mihir-xps139305[01;37m crowd-code[01;32m]$[00m \r[K\r[01;32m[maharajamihir@mihir-xps139305[01;37m crowd-code[01;32m]$[00m \r[K\r[01;32m[maharajamihir@mihir-xps139305[01;37m crowd-code[01;32m]$[00m ",,terminal_output
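Each source.csv in these sessions follows the header Sequence,Time,File,RangeOffset,RangeLength,Text,Language,Type: Time is milliseconds since the recording started, File is a workspace path or "TERMINAL", and Type tags the event (content, tab, selection_mouse, selection_command, terminal_command, terminal_output, ...). A minimal loading sketch, assuming Python's csv module (which already collapses the doubled-quote escaping seen in these files) and a hypothetical local path:

    import csv

    def load_events(path):
        # Yield one dict per recorded event from a crowd-code source.csv;
        # field names mirror the header row of these files.
        with open(path, newline='', encoding='utf-8') as f:
            for row in csv.DictReader(f):
                yield {
                    'sequence': int(row['Sequence']),
                    'time_ms': int(row['Time']),            # ms since recording start
                    'file': row['File'],                    # e.g. "src/recording.ts" or "TERMINAL"
                    'range_offset': int(row['RangeOffset']),
                    'range_length': int(row['RangeLength']),
                    'text': row['Text'],                    # may still hold \n, \t escapes
                    'language': row['Language'],            # empty for TERMINAL rows
                    'type': row['Type'],
                }

    # Hypothetical usage:
    # for ev in load_events('source.csv'):
    #     print(ev['sequence'], ev['type'], ev['file'])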
69a563db57051868fc3ecdda3a43f162385be48f5447fe691a10177ee4dc3a0e/crowd-code-27a851dc-3a84-4cfb-867a-eef6b63ee7ef1750746742858-2025_06_24-08.32.35.909/source.csv
ADDED
@@ -0,0 +1,38 @@
Sequence,Time,File,RangeOffset,RangeLength,Text,Language,Type
1,76,"extract_vid_list.py",0,0,"from yt_dlp import YoutubeDL\n\ndef extract_video_links_from_playlist(playlist_url, output_file):\n ydl_opts = {\n 'quiet': True,\n 'extract_flat': True, # Do not download the videos, only extract metadata\n 'skip_download': True, # Skip actual download\n }\n\n with YoutubeDL(ydl_opts) as ydl:\n result = ydl.extract_info(playlist_url, download=False)\n \n if 'entries' in result:\n video_urls = [entry['url'] for entry in result['entries']]\n \n with open(output_file, 'w') as f:\n for video_url in video_urls:\n f.write(f""{video_url}\n"")\n print(f""Video links extracted to {output_file}"")\n else:\n print(""No videos found in the playlist."")\n\nif __name__ == ""__main__"":\n playlist_url = input(""Enter the YouTube playlist URL: "")\n output_file = ""links/tmp-1.txt""\n extract_video_links_from_playlist(playlist_url, output_file)\n\n# hello hello bla bla",python,tab
2,438,"tasks",0,0,"",Log,tab
3,454,"extract_vid_list.py",0,0,"",python,tab
4,471,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,0,"",Log,tab
5,779,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,0,"8:32:35 AM [info] Activating crowd-code\n8:32:35 AM [info] Recording started\n8:32:35 AM [info] Initializing git provider using file system watchers...\n8:32:36 AM [info] Git repository found\n8:32:36 AM [info] Git provider initialized successfully\n8:32:36 AM [info] Initial git state: [object Object]\n",Log,content
6,2818,"extract_vid_list.py",0,0,"",python,tab
7,9122,"extract_vid_list.py",982,3,"",python,content
8,9781,"extract_vid_list.py",982,0,"b",python,content
9,9794,"extract_vid_list.py",983,0,"",python,selection_keyboard
10,9959,"extract_vid_list.py",983,0,"l",python,content
11,9969,"extract_vid_list.py",984,0,"",python,selection_keyboard
12,10077,"extract_vid_list.py",984,0,"a",python,content
13,10088,"extract_vid_list.py",985,0,"",python,selection_keyboard
14,10167,"extract_vid_list.py",985,0," ",python,content
15,10177,"extract_vid_list.py",986,0,"",python,selection_keyboard
16,10321,"extract_vid_list.py",986,0,"b",python,content
17,10332,"extract_vid_list.py",987,0,"",python,selection_keyboard
18,10482,"extract_vid_list.py",987,0,"l",python,content
19,10494,"extract_vid_list.py",988,0,"",python,selection_keyboard
20,10572,"extract_vid_list.py",988,0,"a",python,content
21,10581,"extract_vid_list.py",989,0,"",python,selection_keyboard
22,26120,"extract_vid_list.py",989,0," ",python,content
23,26132,"extract_vid_list.py",990,0,"",python,selection_keyboard
24,26275,"extract_vid_list.py",990,0,"b",python,content
25,26287,"extract_vid_list.py",991,0,"",python,selection_keyboard
26,26365,"extract_vid_list.py",991,0,"a",python,content
27,26375,"extract_vid_list.py",992,0,"",python,selection_keyboard
28,26430,"extract_vid_list.py",992,0,"l",python,content
29,26441,"extract_vid_list.py",993,0,"",python,selection_keyboard
30,26552,"extract_vid_list.py",993,0," ",python,content
31,26564,"extract_vid_list.py",994,0,"",python,selection_keyboard
32,26678,"extract_vid_list.py",994,0,"b",python,content
33,26686,"extract_vid_list.py",995,0,"",python,selection_keyboard
34,26743,"extract_vid_list.py",995,0,"a",python,content
35,26751,"extract_vid_list.py",996,0,"",python,selection_keyboard
36,26857,"extract_vid_list.py",996,0,"l",python,content
37,26867,"extract_vid_list.py",997,0,"",python,selection_keyboard
|
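The content rows above replay as in-place splices against the snapshot taken by the opening tab event: sequence 7 removes three characters at offset 982 (RangeLength 3, empty Text), and each later row inserts one typed character at the recorded offset with RangeLength 0. A one-line sketch of that splice, mirroring the getUpdatedText helper visible in the src/recording.ts payloads further down (assumption: offsets are character indices into the current document text):

    def apply_content_change(doc, range_offset, range_length, text):
        # Replace range_length characters at range_offset with text,
        # as getUpdatedText does via splice() in src/recording.ts.
        return doc[:range_offset] + text + doc[range_offset + range_length:]

    # Replaying sequence 7 then 8 from the rows above:
    # doc = apply_content_change(doc, 982, 3, '')    # three-character deletion
    # doc = apply_content_change(doc, 982, 0, 'b')   # single keystroke insert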
69a563db57051868fc3ecdda3a43f162385be48f5447fe691a10177ee4dc3a0e/crowd-code-66c1dffb-e395-48ae-8676-da72a2b6a5cb1751540512935-2025_07_03-13.02.33.440/source.csv
ADDED
The diff for this file is too large to render.
See raw diff
69a563db57051868fc3ecdda3a43f162385be48f5447fe691a10177ee4dc3a0e/crowd-code-89e3ea93-acec-4320-9c1b-eafc7c47155f1750747464127-2025_06_24-08.44.49.850/source.csv
ADDED
@@ -0,0 +1,9 @@
Sequence,Time,File,RangeOffset,RangeLength,Text,Language,Type
1,9,"src/recording.ts",0,0,"import * as fs from 'node:fs'\nimport * as path from 'node:path'\nimport * as vscode from 'vscode'\nimport * as readline from 'node:readline'\nimport axios from 'axios'\nimport { hasConsent, showConsentChangeDialog } from './consent'\nimport {\n getEditorFileName,\n escapeString,\n getEditorLanguage,\n notificationWithProgress,\n generateBaseFilePath,\n formatDisplayTime,\n getExportPath,\n logToOutput,\n formatSrtTime,\n getConfig,\n removeDoubleQuotes,\n unescapeString,\n addToGitignore,\n} from './utilities'\nimport { type File, ChangeType, type CSVRowBuilder, type Change, type Recording, type ConsentStatus } from './types'\nimport { extContext, statusBarItem, actionsProvider } from './extension'\n\nexport const commands = {\n openSettings: 'crowd-code.openSettings',\n startRecording: 'crowd-code.startRecording',\n stopRecording: 'crowd-code.stopRecording',\n panicButton: 'crowd-code.panicButton',\n}\n\nexport const recording: Recording = {\n isRecording: false,\n timer: 0,\n startDateTime: null,\n endDateTime: null,\n sequence: 0,\n customFolderName: '',\n activatedFiles: new Set<string>(),\n}\n\nlet intervalId: NodeJS.Timeout\nconst fileQueue: File[] = []\nlet isAppending = false\n\nlet uploadIntervalId: NodeJS.Timeout;\nconst sessionUuid = vscode.env.sessionId;\n\nlet panicStatusBarItem: vscode.StatusBarItem | undefined;\nlet panicButtonPressCount = 0;\nlet panicButtonTimeoutId: NodeJS.Timeout | undefined;\nlet accumulatedRemovedContent: Array<{content: string, sequence: number}> = []; // Store content with sequence numbers\n\nconst CROWD_CODE_API_GATEWAY_URL = process.env.CROWD_CODE_API_GATEWAY_URL;\n\nconst PANIC_BUTTON_TIMEOUT = 3000; // 3 seconds timeout for successive presses\n\n/**\n * Builds a CSV row with the given parameters.\n *\n * @param {CSVRowBuilder} sequence - The sequence number of the change.\n * @param {CSVRowBuilder} rangeOffset - The offset of the changed range.\n * @param {CSVRowBuilder} rangeLength - The length of the changed range.\n * @param {CSVRowBuilder} text - The text of the change.\n * @param {string} type - The type of the change (optional, defaults to 'content').\n * @return {string} A CSV row string with the provided information.\n */\nexport function buildCsvRow({\n sequence,\n rangeOffset,\n rangeLength,\n text,\n type = ChangeType.CONTENT,\n}: CSVRowBuilder): string | undefined {\n if (!recording.startDateTime) {\n return\n }\n\n const time = new Date().getTime() - recording.startDateTime.getTime()\n\n if (type === ChangeType.HEADING) {\n return 'Sequence,Time,File,RangeOffset,RangeLength,Text,Language,Type\n'\n }\n\n if (type === ChangeType.TERMINAL_FOCUS || type === ChangeType.TERMINAL_COMMAND || type === ChangeType.TERMINAL_OUTPUT) {\n return `${sequence},${time},""TERMINAL"",${rangeOffset},${rangeLength},""${escapeString(text)}"",,${type}\n`\n }\n\n const editorFileName = getEditorFileName()\n return `${sequence},${time},""${editorFileName}"",${rangeOffset},${rangeLength},""${escapeString(text)}"",${getEditorLanguage()},${type}\n`\n}\n\n/**\n * Checks if the current file being edited is within the configured export path.\n * This is used to determine if the current file should be recorded or not.\n *\n * @returns {boolean} `true` if the current file is within the export path, `false` otherwise.\n */\nexport function isCurrentFileExported(): boolean {\n const editor = vscode.window.activeTextEditor\n const filename = editor?.document.fileName.replaceAll('\\', '/')\n const exportPath = getExportPath()\n if (!editor || 
!filename || !exportPath) {\n return false\n }\n return filename.startsWith(exportPath)\n}\n\nconst onChangeSubscription = vscode.workspace.onDidChangeTextDocument(event => {\n if (!recording.isRecording) {\n return\n }\n\n if (isCurrentFileExported()) {\n return\n }\n const editor = vscode.window.activeTextEditor\n if (editor && event.document === editor.document) {\n for (const change of event.contentChanges) {\n recording.sequence++\n addToFileQueue(\n buildCsvRow({\n sequence: recording.sequence,\n rangeOffset: change.rangeOffset,\n rangeLength: change.rangeLength,\n text: change.text,\n })\n )\n appendToFile()\n }\n }\n})\n\n/**\n * Creates the recording folder if it doesn't exist.\n * @param folderPath - The path to the recording folder.\n */\nfunction createRecordingFolder(folderPath: string): void {\n if (!fs.existsSync(folderPath)) {\n fs.mkdirSync(folderPath, { recursive: true })\n }\n}\n\n/**\n * Starts the recording process and initializes necessary variables.\n */\nexport async function startRecording(): Promise<void> {\n if (recording.isRecording) {\n notificationWithProgress('Already recording')\n logToOutput('Already recording', 'info')\n return\n }\n const exportPath = getExportPath()\n if (!exportPath) {\n return\n }\n\n // If the setting is enabled and the path is inside the workspace, add it to .gitignore\n if (\n getConfig().get<boolean>('export.addToGitignore') &&\n getConfig().get<string>('export.exportPath')?.startsWith('${workspaceFolder}')\n ) {\n await addToGitignore()\n }\n\n recording.startDateTime = new Date()\n recording.activatedFiles = new Set<string>()\n\n // Ask for folder name if enabled in settings\n let customFolderName: string | undefined\n if (getConfig().get('recording.askFolderName')) {\n customFolderName = await vscode.window.showInputBox({\n prompt: 'Enter a name for the recording folder',\n placeHolder: 'Enter recording folder name',\n })\n if (!customFolderName) {\n stopRecording(true)\n return\n }\n recording.customFolderName = customFolderName\n }\n\n const baseFilePath = generateBaseFilePath(recording.startDateTime, false, recording.customFolderName, sessionUuid)\n if (!baseFilePath) {\n stopRecording(true)\n return\n }\n\n // Create the recording folder\n const folderPath = path.dirname(path.join(exportPath, baseFilePath))\n createRecordingFolder(folderPath)\n\n recording.isRecording = true\n recording.timer = 0\n recording.endDateTime = null\n recording.sequence = 0\n panicButtonPressCount = 0 // Reset panic button counter for new recording\n accumulatedRemovedContent = [] // Clear accumulated content for new recording\n if (panicButtonTimeoutId) {\n clearTimeout(panicButtonTimeoutId)\n panicButtonTimeoutId = undefined\n }\n intervalId = setInterval(() => {\n recording.timer++\n updateStatusBarItem()\n }, 1000)\n notificationWithProgress('Recording started')\n logToOutput('Recording started', 'info')\n\n // Only log initial editor content if there's an active text editor\n const editorText = vscode.window.activeTextEditor?.document.getText()\n const activeEditorUri = vscode.window.activeTextEditor?.document.uri.toString()\n\n if (editorText !== undefined && activeEditorUri) {\n recording.sequence++\n const csvRow = {\n sequence: recording.sequence,\n rangeOffset: 0,\n rangeLength: 0,\n text: editorText,\n type: ChangeType.TAB,\n }\n addToFileQueue(buildCsvRow({ ...csvRow, type: ChangeType.HEADING }))\n addToFileQueue(buildCsvRow(csvRow))\n appendToFile()\n recording.activatedFiles.add(activeEditorUri)\n 
actionsProvider.setCurrentFile(vscode.window.activeTextEditor?.document.fileName || '')\n } else {\n // If no active editor, just add the header row\n recording.sequence++\n addToFileQueue(buildCsvRow({ \n sequence: recording.sequence,\n rangeOffset: 0,\n rangeLength: 0,\n text: '',\n type: ChangeType.HEADING \n }))\n appendToFile()\n }\n\n extContext.subscriptions.push(onChangeSubscription)\n updateStatusBarItem()\n updatePanicButton()\n actionsProvider.setRecordingState(true)\n\n // Set up a timer to send data to the Lambda endpoint periodically\n uploadIntervalId = setInterval(async () => {\n if (!exportPath) {\n return;\n }\n \n if (typeof CROWD_CODE_API_GATEWAY_URL !== 'string' || !CROWD_CODE_API_GATEWAY_URL.trim()) {\n logToOutput(""CROWD_CODE_API_GATEWAY_URL must be a non-empty string. Please check your build configuration."", 'error');\n return;\n }\n\n // Only upload data if user has given consent\n if (!hasConsent()) {\n return;\n }\n\n const filePath = path.join(exportPath, `${baseFilePath}.csv`);\n const extensionVersion = extContext.extension.packageJSON.version as string;\n const userId = extContext.globalState.get<string>('userId');\n\n try {\n const fileContent = await fs.promises.readFile(filePath, 'utf-8');\n\n if (fileContent) {\n const payload = {\n fileName: `${baseFilePath}.csv`,\n content: fileContent,\n version: extensionVersion,\n userId: userId\n };\n await axios.post(CROWD_CODE_API_GATEWAY_URL, payload);\n console.log(`Successfully sent ${payload.fileName} to Lambda endpoint.`);\n }\n } catch (error: any) {\n if (error.code === 'ENOENT') {\n console.warn(`File not found at ${filePath}. It might be created on first write.`);\n } else {\n console.error(`Error sending data to Lambda: ${error.message}`);\n if (axios.isAxiosError(error) && error.response) {\n console.error(""Lambda response status:"", error.response.status);\n console.error(""Lambda response data:"", error.response.data);\n }\n }\n }\n }, 5 * 60 * 1000); // 5 minutes\n}\n\n/**\n * Stops the recording process and finalizes the recording data.\n * @param context - The extension context.\n */\nexport function stopRecording(force = false): Promise<void> | void {\n if (!recording.isRecording) {\n notificationWithProgress('Not recording')\n return\n }\n\n recording.isRecording = false\n clearInterval(intervalId)\n clearInterval(uploadIntervalId); // Clear the upload timer\n recording.timer = 0\n recording.activatedFiles?.clear()\n panicButtonPressCount = 0 // Reset panic button counter when recording stops\n accumulatedRemovedContent = [] // Clear accumulated content when recording stops\n if (panicButtonTimeoutId) {\n clearTimeout(panicButtonTimeoutId)\n panicButtonTimeoutId = undefined\n }\n const index = extContext.subscriptions.indexOf(onChangeSubscription)\n if (index !== -1) {\n extContext.subscriptions.splice(index, 1)\n }\n updateStatusBarItem()\n updatePanicButton()\n actionsProvider.setRecordingState(false)\n if (force) {\n notificationWithProgress('Recording cancelled')\n logToOutput('Recording cancelled', 'info')\n recording.customFolderName = undefined\n return\n }\n notificationWithProgress('Recording finished')\n logToOutput('Recording finished', 'info')\n recording.endDateTime = new Date()\n return processCsvFile().then(() => {\n // Reset customFolderName after processing is complete\n recording.customFolderName = undefined\n }).catch(err => {\n logToOutput(`Error processing CSV file during stop: ${String(err)}`, 'error')\n recording.customFolderName = undefined\n });\n}\n\n/**\n * Appends 
data from the file queue to the appropriate file in the workspace.\n */\nexport async function appendToFile(): Promise<void> {\n if (isAppending) {\n return\n }\n isAppending = true\n\n const exportPath = getExportPath()\n if (!exportPath) {\n logToOutput('Export path not available in appendToFile, stopping recording.', 'error')\n stopRecording(true)\n isAppending = false\n return\n }\n\n while (fileQueue.length > 0) {\n const itemToAppend = fileQueue.shift()\n if (!itemToAppend) {\n continue\n }\n\n const filePath = path.join(exportPath, itemToAppend.name)\n\n try {\n const directory = path.dirname(filePath)\n if (!fs.existsSync(directory)) {\n fs.mkdirSync(directory, { recursive: true })\n }\n await fs.promises.appendFile(filePath, itemToAppend.content)\n } catch (err) {\n logToOutput(\n `Failed to append to file ${filePath}: ${err}. Item dropped. Content: ${itemToAppend.content.substring(0, 100)}...`,\n 'error'\n )\n }\n }\n isAppending = false\n}\n\n/**\n * Appends an SRT line to the file queue for the previous change.\n *\n * This function is responsible for generating the SRT format line for the previous change and adding it to the file queue.\n * It checks if the SRT export format is enabled, and if so, it generates the SRT line for the previous change and adds it to the file queue.\n *\n * @param processedChanges - An array of processed changes.\n * @param i - The index of the current change in the processedChanges array.\n * @param exportInSrt - A boolean indicating whether the SRT export format is enabled.\n */\nfunction addToSRTFile(processedChanges: Change[], i: number, exportInSrt: boolean) {\n if (!exportInSrt) {\n return\n }\n if (i === 0) {\n return\n }\n addToFileQueue(\n addSrtLine(\n processedChanges[i - 1].sequence,\n processedChanges[i - 1].startTime,\n processedChanges[i - 1].endTime,\n JSON.stringify({\n text: processedChanges[i - 1].text,\n file: processedChanges[i - 1].file,\n language: processedChanges[i - 1].language,\n })\n ),\n 'srt',\n true\n )\n}\n\n/**\n * Returns the new text content based on the change type and the previous change.\n * @param type - The type of the change.\n * @param text - The text of the change.\n * @param previousChange - The previous change.\n * @param rangeOffset - The offset of the range.\n * @param rangeLength - The length of the range.\n */\nfunction getNewTextContent(\n type: string,\n text: string,\n previousChange: Change | null,\n rangeOffset: number,\n rangeLength: number\n): string {\n if (type === ChangeType.TAB) {\n return text\n }\n if (!previousChange) {\n return ''\n }\n return getUpdatedText(previousChange.text, rangeOffset, rangeLength, text)\n}\n\n/**\n * Processes a single CSV line and returns the processed change\n */\nasync function processCSVLine(line: string, previousChange: Change | null): Promise<Change | null> {\n const lineArr = line.split(/,(?=(?:[^""]*""[^""]*"")*[^""]*$)/)\n\n if (Number.isNaN(Number.parseInt(lineArr[0]))) {\n return null\n }\n\n const time = Number.parseInt(lineArr[1])\n const file = removeDoubleQuotes(lineArr[2])\n const rangeOffset = Number.parseInt(lineArr[3])\n const rangeLength = Number.parseInt(lineArr[4])\n const text = unescapeString(removeDoubleQuotes(lineArr[5]))\n const language = lineArr[6]\n const type = lineArr[7]\n\n const newText = getNewTextContent(type, text, previousChange, rangeOffset, rangeLength)\n\n /**\n * Skip exporting changes with the same values to the previous change.\n */\n if (\n previousChange &&\n time === previousChange.startTime &&\n file === 
previousChange.file &&\n newText === previousChange.text &&\n language === previousChange.language\n ) {\n return null\n }\n\n return {\n sequence: previousChange ? previousChange.sequence + 1 : 1,\n file,\n startTime: time,\n endTime: 0,\n language,\n text: newText,\n }\n}\n\n/**\n * Returns the updated text content based on the previous text, range offset, range length, and new text.\n * @param previousText - The previous text.\n * @param rangeOffset - The offset of the range.\n * @param rangeLength - The length of the range.\n * @param newText - The new text.\n */\nfunction getUpdatedText(\n previousText: string,\n rangeOffset: number,\n rangeLength: number,\n newText: string\n): string {\n const textArray = previousText.split('')\n textArray.splice(rangeOffset, rangeLength, newText)\n return textArray.join('')\n}\n\n/**\n * Processes the CSV file and generates the necessary output files.\n */\nasync function processCsvFile(): Promise<void> {\n if (!validateRecordingState()) {\n return\n }\n\n const exportFormats = getConfig().get<string[]>('export.exportFormats', [])\n if (exportFormats.length === 0) {\n logToOutput('No export formats specified', 'info')\n vscode.window.showWarningMessage('No export formats specified')\n return\n }\n\n const exportPath = getExportPath()\n if (!exportPath) {\n return\n }\n\n if (!recording.startDateTime) {\n return\n }\n\n // Use the same custom folder name for reading the source file\n const baseFilePathSource = generateBaseFilePath(\n recording.startDateTime,\n false,\n recording.customFolderName,\n sessionUuid\n )\n if (!baseFilePathSource) {\n return\n }\n\n const filePath = path.join(exportPath, `${baseFilePathSource}.csv`)\n\n try {\n if (!fs.existsSync(filePath)) {\n throw new Error(`Source file not found: ${filePath}`)\n }\n\n const processedChanges: Change[] = []\n\n const rl = readline.createInterface({\n input: fs.createReadStream(filePath),\n crlfDelay: Number.POSITIVE_INFINITY,\n })\n\n for await (const line of rl) {\n const previousChange = processedChanges[processedChanges.length - 1]\n const change = await processCSVLine(line, previousChange)\n\n if (change) {\n if (previousChange) {\n previousChange.endTime = change.startTime\n if (exportFormats.includes('SRT')) {\n addToSRTFile(processedChanges, processedChanges.length, true)\n }\n }\n processedChanges.push(change)\n }\n }\n\n rl.close();\n\n return finalizeRecording(processedChanges, exportFormats);\n\n } catch (err) {\n vscode.window.showErrorMessage(`Error processing recording: ${err}`)\n logToOutput('Error processing CSV file: ' + String(err), 'error')\n return Promise.resolve(); // Resolve even on error after showing message\n }\n}\n\nfunction validateRecordingState(): boolean {\n if (!vscode.workspace.workspaceFolders) {\n logToOutput(\n 'No workspace folder found. 
To process the recording is needed a workspace folder',\n 'error'\n )\n return false\n }\n if (!recording.endDateTime || !recording.startDateTime) {\n logToOutput('Recording date time is not properly set', 'error')\n return false\n }\n return true\n}\n\nfunction finalizeRecording(processedChanges: Change[], exportFormats: string[]): Promise<void> {\n const lastChange = processedChanges[processedChanges.length - 1]\n if (lastChange && recording.endDateTime && recording.startDateTime) {\n lastChange.endTime = recording.endDateTime.getTime() - recording.startDateTime.getTime()\n if (exportFormats.includes('SRT')) {\n addToSRTFile(processedChanges, processedChanges.length, true)\n }\n }\n if (exportFormats.includes('JSON')) {\n addToFileQueue(JSON.stringify(processedChanges), 'json', true)\n }\n return appendToFile().then(() => {\n // Refresh the recordFiles view after export is complete\n vscode.commands.executeCommand('crowd-code.refreshRecordFiles')\n })\n}\n\n/**\n * Adds a line to the SRT file format.\n * @param sequence - The sequence number of the change.\n * @param start - The start time of the change.\n * @param end - The end time of the change.\n * @param text - The text of the change.\n * @returns A string representing a line in the SRT file format.\n */\nfunction addSrtLine(sequence: number, start: number, end: number, text: string): string {\n return `${sequence}\n${formatSrtTime(start)} --> ${formatSrtTime(end)}\n${text}\n\n`\n}\n\n/**\n * Adds content to the file queue.\n * @param content - The content to add.\n * @param fileExtension - The file extension (optional, defaults to 'csv').\n */\nexport function addToFileQueue(\n content: string | undefined,\n fileExtension = 'csv',\n isExport = false\n): void {\n if (!content) {\n return\n }\n if (!recording.startDateTime) {\n return\n }\n // Use the same custom name throughout the recording session\n const baseFilePath = generateBaseFilePath(recording.startDateTime, isExport, recording.customFolderName, sessionUuid)\n if (!baseFilePath) {\n return\n }\n fileQueue.push({\n name: `${baseFilePath}.${fileExtension}`,\n content: content,\n })\n}\n\n/**\n * Updates the status bar item with the current recording status and time.\n */\nexport function updateStatusBarItem(): void {\n if (recording.isRecording) {\n if (getConfig().get('appearance.showTimer') === false) {\n statusBarItem.text = '$(debug-stop)'\n statusBarItem.tooltip = 'Current time: ' + formatDisplayTime(recording.timer)\n }\n if (getConfig().get('appearance.showTimer') === true) {\n statusBarItem.text = '$(debug-stop) ' + formatDisplayTime(recording.timer)\n statusBarItem.tooltip = 'Stop Recording'\n }\n statusBarItem.command = commands.stopRecording\n statusBarItem.show()\n } else {\n const editor = vscode.window.activeTextEditor\n if (!editor) {\n statusBarItem.hide()\n return\n }\n if (getConfig().get('appearance.minimalMode') === true) {\n statusBarItem.text = '$(circle-large-filled)'\n } else {\n statusBarItem.text = '$(circle-large-filled) Start Recording'\n }\n statusBarItem.tooltip = 'Start Recording'\n statusBarItem.command = commands.startRecording\n statusBarItem.show()\n }\n}\n\n/**\n * Creates and updates the panic button status bar item.\n */\nexport function updatePanicButton(): void {\n if (!recording.isRecording) {\n if (panicStatusBarItem) {\n panicStatusBarItem.hide()\n }\n return\n }\n\n // Create panic button if it doesn't exist\n if (!panicStatusBarItem) {\n panicStatusBarItem = vscode.window.createStatusBarItem(vscode.StatusBarAlignment.Right, 8999) 
// Position it to the left of the recording button\n extContext.subscriptions.push(panicStatusBarItem)\n }\n\n const secondsToRemove = (panicButtonPressCount + 1) * 10 // Show what the next press will remove\n panicStatusBarItem.text = '$(refresh)'\n panicStatusBarItem.tooltip = `Remove last ${secondsToRemove} seconds of recording (click again within 3 seconds to remove more)`\n panicStatusBarItem.command = commands.panicButton\n panicStatusBarItem.show()\n}\n\n/**\n * Deletes the last N seconds of recording data from the CSV file.\n * This is a ""panic button"" feature that allows users to quickly remove recent sensitive data.\n * Each successive press within 3 seconds removes more time: 10s, 20s, 30s, etc.\n * After 3 seconds of inactivity, the next press will be treated as a fresh press (10s).\n */\nexport async function panicButton(): Promise<void> {\n if (!recording.isRecording) {\n vscode.window.showWarningMessage('No active recording to remove data from')\n logToOutput('No active recording to remove data from', 'info')\n return\n }\n\n if (!recording.startDateTime) {\n vscode.window.showErrorMessage('Recording start time not available')\n logToOutput('Recording start time not available', 'error')\n return\n }\n\n const exportPath = getExportPath()\n if (!exportPath) {\n vscode.window.showErrorMessage('Export path not available')\n logToOutput('Export path not available', 'error')\n return\n }\n\n const baseFilePath = generateBaseFilePath(recording.startDateTime, false, recording.customFolderName, sessionUuid)\n if (!baseFilePath) {\n vscode.window.showErrorMessage('Could not generate file path')\n logToOutput('Could not generate file path', 'error')\n return\n }\n\n const filePath = path.join(exportPath, `${baseFilePath}.csv`)\n\n try {\n // Check if file exists\n if (!fs.existsSync(filePath)) {\n vscode.window.showWarningMessage('No recording file found to remove data from')\n logToOutput('No recording file found to remove data from', 'info')\n return\n }\n\n // Read the file\n const content = fs.readFileSync(filePath, 'utf-8')\n const lines = content.split('\n')\n \n if (lines.length <= 1) {\n vscode.window.showWarningMessage('Recording file is empty, nothing to remove')\n logToOutput('Recording file is empty, nothing to remove', 'info')\n return\n }\n\n // Calculate how many lines to remove (10 seconds per press)\n const linesToRemove = Math.min((panicButtonPressCount + 1) * 10, lines.length - 1)\n const newLines = lines.slice(0, lines.length - linesToRemove)\n \n // Capture the lines that will be removed for display\n const removedLines = lines.slice(lines.length - linesToRemove)\n\n // Write back to file\n fs.writeFileSync(filePath, newLines.join('\n'))\n\n // Update panic button state\n panicButtonPressCount++\n \n // Set up timeout to reset the counter after 3 seconds of inactivity\n if (panicButtonTimeoutId) {\n clearTimeout(panicButtonTimeoutId)\n }\n panicButtonTimeoutId = setTimeout(() => {\n panicButtonPressCount = 0\n accumulatedRemovedContent = [] // Clear accumulated content\n updatePanicButton()\n }, PANIC_BUTTON_TIMEOUT)\n \n updatePanicButton()\n\n const secondsToRemove = panicButtonPressCount * 10\n const actualLinesRemoved = lines.length - newLines.length\n \n // Accumulate removed content and show immediate popup\n if (removedLines.length > 0) {\n const nonEmptyLines = removedLines.filter(line => line.trim())\n if (nonEmptyLines.length > 0) {\n // Create a simple, readable summary of removed content\n const contentSummary = nonEmptyLines.map(line => {\n // 
Extract just the text content from CSV for cleaner display\n const parts = line.split(',')\n if (parts.length >= 6) {\n const textContent = parts[5].replace(/^""|""$/g, '') // Remove quotes\n // Clean up common escape sequences\n const cleanText = textContent\n .replace(/\\n/g, '\n')\n .replace(/\\t/g, '\t')\n .replace(/\\r/g, '\r')\n return { content: cleanText, sequence: Number.parseInt(parts[0]) }\n }\n return { content: line, sequence: Number.parseInt(line.split(',')[0]) }\n }).filter(item => item.content.trim().length > 0)\n \n // Add to accumulated content\n accumulatedRemovedContent.push(...contentSummary)\n \n // Sort by sequence number to show in original file order\n const sortedContent = accumulatedRemovedContent.sort((a, b) => a.sequence - b.sequence)\n \n // Show immediate popup with accumulated content\n const totalContent = sortedContent.map(item => item.content).join(' ')\n const summaryText = totalContent.length > 100 \n ? totalContent.substring(0, 100) + '...' \n : totalContent\n \n vscode.window.showInformationMessage(\n `Removed content: ""${summaryText}""`,\n 'Dismiss'\n )\n }\n }\n\n } catch (error) {\n const errorMessage = `Error during panic button operation: ${error}`\n vscode.window.showErrorMessage(errorMessage)\n logToOutput(errorMessage, 'error')\n }\n}",typescript,tab
2,399,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,0,"8:44:49 AM [info] Activating crowd-code\n8:44:49 AM [info] Recording started\n8:44:49 AM [info] Initializing git provider using file system watchers...\n8:44:49 AM [info] Git repository found\n8:44:49 AM [info] Git provider initialized successfully\n8:44:49 AM [info] Initial git state: [object Object]\n",Log,tab
3,18241,"extension-output-pdoom-org.crowd-code-#1-crowd-code",298,0,"",Log,selection_mouse
4,20715,"src/recording.ts",0,0,"",typescript,tab
5,29085,"TERMINAL",0,0,"ls /tmp/crowd-code/",,terminal_command
6,29131,"TERMINAL",0,0,"]633;E;ls /tmp/crowd-code/;a808ac8a-c4ad-476e-8ef8-0a741b3ad045]633;C[0m[01;34mcrowd-code-12bded65-cc97-4d6e-a5a9-8eb203cdf5b21750746850455-2025_06_24-08.34.24.180[0m\r\n[01;34mcrowd-code-18a372f6-6ffb-4516-b3fc-5744a55e595b1750745384029-2025_06_24-08.09.58.114[0m\r\n[01;34mcrowd-code-1f4d4a04-f881-43ae-915a-b4684ec9fba71750685322384-2025_06_23-15.28.52.723[0m\r\n[01;34mcrowd-code-27a851dc-3a84-4cfb-867a-eef6b63ee7ef1750746742858-2025_06_24-08.32.35.909[0m\r\n[01;34mcrowd-code-47a52c8a-7a01-4d44-92cc-7fc1efc87b2c1750743549472-2025_06_24-07.42.17.399[0m\r\n[01;34mcrowd-code-54d64c52-45ee-426c-9f3f-dd28c172e3b51750744418870-2025_06_24-07.53.52.964[0m\r\n[01;34mcrowd-code-7bbba152-dac5-47a3-9097-97280b2bb6671750745644900-2025_06_24-08.14.17.536[0m\r\n[01;34mcrowd-code-89e3ea93-acec-4320-9c1b-eafc7c47155f1750747464127-2025_06_24-08.44.49.850[0m\r\n[01;34mcrowd-code-a9c5e8e2-875d-4952-a3b9-18141ac818031750745077064-2025_06_24-08.04.52.29[0m\r\n[01;34mcrowd-code-bc8678ed-7352-41d3-8ab3-bf70f7958a0b1750745114983-2025_06_24-08.05.29.186[0m\r\n[01;34mcrowd-code-beb94361-bc52-424a-8726-24deb6772d751750744193601-2025_06_24-07.50.06.945[0m\r\n[01;34mcrowd-code-dc056f6b-886b-43e5-9b02-48725c7fb94f1750744782149-2025_06_24-07.59.55.809[0m\r\n[01;34mcrowd-code-dc67f66a-b65d-4784-baf6-471ba1e2ce651750745212705-2025_06_24-08.07.06.565[0m\r\n[01;34mcrowd-code-df370d00-88e7-4103-a3c0-8706ce572e0c1750744643799-2025_06_24-07.57.37.38[0m\r\n[01;34mcrowd-code-ecd6bcde-7cf4-4819-b2c7-c5b474828daa1750689105661-2025_06_23-16.31.57.981[0m\r\n[01;34mcrowd-code-ecd6bcde-7cf4-4819-b2c7-c5b474828daa1750689105661-2025_06_23-16.58.57.326[0m\r\n[01;34mvs-code-recorder-2025_06_23-18.34.26.367[0m\r\n]0;maharajamihir@mihir-xps139305:~/Projects/coding-extension/crowd-code]633;D;0",,terminal_output
7,44088,"TERMINAL",0,0,"echo crowd-code-89e3ea93-acec-4320-9c1b-eafc7c47155f1750747464127-2025_06_24-08.44.49.850",,terminal_command
8,44092,"TERMINAL",0,0,"]633;E;echo crowd-code-89e3ea93-acec-4320-9c1b-eafc7c47155f1750747464127-2025_06_24-08.44.49.850;a808ac8a-c4ad-476e-8ef8-0a741b3ad045]633;C",,terminal_output
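Large Text payloads such as the recording.ts snapshot above keep their newlines, tabs, and carriage returns encoded as \n, \t, and \r inside the quoted field; escapeString/unescapeString in the captured source are the round-trip pair. A rough decoding sketch, assuming those three sequences are the only backslash escapes and that the csv reader has already collapsed the doubled quotes:

    def unescape_text(field):
        # Undo the backslash escapes written by escapeString in recording.ts
        # (assumption: \n, \t, \r are the only sequences used in these files).
        return (field.replace('\\r', '\r')
                     .replace('\\t', '\t')
                     .replace('\\n', '\n'))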
69a563db57051868fc3ecdda3a43f162385be48f5447fe691a10177ee4dc3a0e/crowd-code-89e3ea93-acec-4320-9c1b-eafc7c47155f1750747464127-2025_06_24-12.16.44.223/source.csv
ADDED
@@ -0,0 +1,3 @@
Sequence,Time,File,RangeOffset,RangeLength,Text,Language,Type
1,14,"src/recording.ts",0,0,"import * as fs from 'node:fs'\nimport * as path from 'node:path'\nimport * as vscode from 'vscode'\nimport * as readline from 'node:readline'\nimport axios from 'axios'\nimport { hasConsent, showConsentChangeDialog } from './consent'\nimport {\n getEditorFileName,\n escapeString,\n getEditorLanguage,\n notificationWithProgress,\n generateBaseFilePath,\n formatDisplayTime,\n getExportPath,\n logToOutput,\n formatSrtTime,\n getConfig,\n removeDoubleQuotes,\n unescapeString,\n addToGitignore,\n} from './utilities'\nimport { type File, ChangeType, type CSVRowBuilder, type Change, type Recording, type ConsentStatus } from './types'\nimport { extContext, statusBarItem, actionsProvider } from './extension'\n\nexport const commands = {\n openSettings: 'crowd-code.openSettings',\n startRecording: 'crowd-code.startRecording',\n stopRecording: 'crowd-code.stopRecording',\n panicButton: 'crowd-code.panicButton',\n}\n\nexport const recording: Recording = {\n isRecording: false,\n timer: 0,\n startDateTime: null,\n endDateTime: null,\n sequence: 0,\n customFolderName: '',\n activatedFiles: new Set<string>(),\n}\n\nlet intervalId: NodeJS.Timeout\nconst fileQueue: File[] = []\nlet isAppending = false\n\nlet uploadIntervalId: NodeJS.Timeout;\nconst sessionUuid = vscode.env.sessionId;\n\nlet panicStatusBarItem: vscode.StatusBarItem | undefined;\nlet panicButtonPressCount = 0;\nlet panicButtonTimeoutId: NodeJS.Timeout | undefined;\nlet accumulatedRemovedContent: Array<{content: string, sequence: number}> = []; // Store content with sequence numbers\n\nconst CROWD_CODE_API_GATEWAY_URL = process.env.CROWD_CODE_API_GATEWAY_URL;\n\nconst PANIC_BUTTON_TIMEOUT = 3000; // 3 seconds timeout for successive presses\n\n/**\n * Builds a CSV row with the given parameters.\n *\n * @param {CSVRowBuilder} sequence - The sequence number of the change.\n * @param {CSVRowBuilder} rangeOffset - The offset of the changed range.\n * @param {CSVRowBuilder} rangeLength - The length of the changed range.\n * @param {CSVRowBuilder} text - The text of the change.\n * @param {string} type - The type of the change (optional, defaults to 'content').\n * @return {string} A CSV row string with the provided information.\n */\nexport function buildCsvRow({\n sequence,\n rangeOffset,\n rangeLength,\n text,\n type = ChangeType.CONTENT,\n}: CSVRowBuilder): string | undefined {\n if (!recording.startDateTime) {\n return\n }\n\n const time = new Date().getTime() - recording.startDateTime.getTime()\n\n if (type === ChangeType.HEADING) {\n return 'Sequence,Time,File,RangeOffset,RangeLength,Text,Language,Type\n'\n }\n\n if (type === ChangeType.TERMINAL_FOCUS || type === ChangeType.TERMINAL_COMMAND || type === ChangeType.TERMINAL_OUTPUT) {\n return `${sequence},${time},""TERMINAL"",${rangeOffset},${rangeLength},""${escapeString(text)}"",,${type}\n`\n }\n\n const editorFileName = getEditorFileName()\n return `${sequence},${time},""${editorFileName}"",${rangeOffset},${rangeLength},""${escapeString(text)}"",${getEditorLanguage()},${type}\n`\n}\n\n/**\n * Checks if the current file being edited is within the configured export path.\n * This is used to determine if the current file should be recorded or not.\n *\n * @returns {boolean} `true` if the current file is within the export path, `false` otherwise.\n */\nexport function isCurrentFileExported(): boolean {\n const editor = vscode.window.activeTextEditor\n const filename = editor?.document.fileName.replaceAll('\\', '/')\n const exportPath = getExportPath()\n if (!editor 
|| !filename || !exportPath) {\n return false\n }\n return filename.startsWith(exportPath)\n}\n\nconst onChangeSubscription = vscode.workspace.onDidChangeTextDocument(event => {\n if (!recording.isRecording) {\n return\n }\n\n if (isCurrentFileExported()) {\n return\n }\n const editor = vscode.window.activeTextEditor\n if (editor && event.document === editor.document) {\n for (const change of event.contentChanges) {\n recording.sequence++\n addToFileQueue(\n buildCsvRow({\n sequence: recording.sequence,\n rangeOffset: change.rangeOffset,\n rangeLength: change.rangeLength,\n text: change.text,\n })\n )\n appendToFile()\n }\n }\n})\n\n/**\n * Creates the recording folder if it doesn't exist.\n * @param folderPath - The path to the recording folder.\n */\nfunction createRecordingFolder(folderPath: string): void {\n if (!fs.existsSync(folderPath)) {\n fs.mkdirSync(folderPath, { recursive: true })\n }\n}\n\n/**\n * Starts the recording process and initializes necessary variables.\n */\nexport async function startRecording(): Promise<void> {\n if (recording.isRecording) {\n notificationWithProgress('Already recording')\n logToOutput('Already recording', 'info')\n return\n }\n const exportPath = getExportPath()\n if (!exportPath) {\n return\n }\n\n // If the setting is enabled and the path is inside the workspace, add it to .gitignore\n if (\n getConfig().get<boolean>('export.addToGitignore') &&\n getConfig().get<string>('export.exportPath')?.startsWith('${workspaceFolder}')\n ) {\n await addToGitignore()\n }\n\n recording.startDateTime = new Date()\n recording.activatedFiles = new Set<string>()\n\n // Ask for folder name if enabled in settings\n let customFolderName: string | undefined\n if (getConfig().get('recording.askFolderName')) {\n customFolderName = await vscode.window.showInputBox({\n prompt: 'Enter a name for the recording folder',\n placeHolder: 'Enter recording folder name',\n })\n if (!customFolderName) {\n stopRecording(true)\n return\n }\n recording.customFolderName = customFolderName\n }\n\n const baseFilePath = generateBaseFilePath(recording.startDateTime, false, recording.customFolderName, sessionUuid)\n if (!baseFilePath) {\n stopRecording(true)\n return\n }\n\n // Create the recording folder\n const folderPath = path.dirname(path.join(exportPath, baseFilePath))\n createRecordingFolder(folderPath)\n\n recording.isRecording = true\n recording.timer = 0\n recording.endDateTime = null\n recording.sequence = 0\n panicButtonPressCount = 0 // Reset panic button counter for new recording\n accumulatedRemovedContent = [] // Clear accumulated content for new recording\n if (panicButtonTimeoutId) {\n clearTimeout(panicButtonTimeoutId)\n panicButtonTimeoutId = undefined\n }\n intervalId = setInterval(() => {\n recording.timer++\n updateStatusBarItem()\n }, 1000)\n notificationWithProgress('Recording started')\n logToOutput('Recording started', 'info')\n\n // Only log initial editor content if there's an active text editor\n const editorText = vscode.window.activeTextEditor?.document.getText()\n const activeEditorUri = vscode.window.activeTextEditor?.document.uri.toString()\n\n if (editorText !== undefined && activeEditorUri) {\n recording.sequence++\n const csvRow = {\n sequence: recording.sequence,\n rangeOffset: 0,\n rangeLength: 0,\n text: editorText,\n type: ChangeType.TAB,\n }\n addToFileQueue(buildCsvRow({ ...csvRow, type: ChangeType.HEADING }))\n addToFileQueue(buildCsvRow(csvRow))\n appendToFile()\n recording.activatedFiles.add(activeEditorUri)\n 
actionsProvider.setCurrentFile(vscode.window.activeTextEditor?.document.fileName || '')\n } else {\n // If no active editor, just add the header row\n recording.sequence++\n addToFileQueue(buildCsvRow({ \n sequence: recording.sequence,\n rangeOffset: 0,\n rangeLength: 0,\n text: '',\n type: ChangeType.HEADING \n }))\n appendToFile()\n }\n\n extContext.subscriptions.push(onChangeSubscription)\n updateStatusBarItem()\n updatePanicButton()\n actionsProvider.setRecordingState(true)\n\n // Set up a timer to send data to the Lambda endpoint periodically\n uploadIntervalId = setInterval(async () => {\n if (!exportPath) {\n return;\n }\n \n if (typeof CROWD_CODE_API_GATEWAY_URL !== 'string' || !CROWD_CODE_API_GATEWAY_URL.trim()) {\n logToOutput(""CROWD_CODE_API_GATEWAY_URL must be a non-empty string. Please check your build configuration."", 'error');\n return;\n }\n\n // Only upload data if user has given consent\n if (!hasConsent()) {\n return;\n }\n\n const filePath = path.join(exportPath, `${baseFilePath}.csv`);\n const extensionVersion = extContext.extension.packageJSON.version as string;\n const userId = extContext.globalState.get<string>('userId');\n\n try {\n const fileContent = await fs.promises.readFile(filePath, 'utf-8');\n\n if (fileContent) {\n const payload = {\n fileName: `${baseFilePath}.csv`,\n content: fileContent,\n version: extensionVersion,\n userId: userId\n };\n await axios.post(CROWD_CODE_API_GATEWAY_URL, payload);\n console.log(`Successfully sent ${payload.fileName} to Lambda endpoint.`);\n }\n } catch (error: any) {\n if (error.code === 'ENOENT') {\n console.warn(`File not found at ${filePath}. It might be created on first write.`);\n } else {\n console.error(`Error sending data to Lambda: ${error.message}`);\n if (axios.isAxiosError(error) && error.response) {\n console.error(""Lambda response status:"", error.response.status);\n console.error(""Lambda response data:"", error.response.data);\n }\n }\n }\n }, 5 * 60 * 1000); // 5 minutes\n}\n\n/**\n * Stops the recording process and finalizes the recording data.\n * @param context - The extension context.\n */\nexport function stopRecording(force = false): Promise<void> | void {\n if (!recording.isRecording) {\n notificationWithProgress('Not recording')\n return\n }\n\n recording.isRecording = false\n clearInterval(intervalId)\n clearInterval(uploadIntervalId); // Clear the upload timer\n recording.timer = 0\n recording.activatedFiles?.clear()\n panicButtonPressCount = 0 // Reset panic button counter when recording stops\n accumulatedRemovedContent = [] // Clear accumulated content when recording stops\n if (panicButtonTimeoutId) {\n clearTimeout(panicButtonTimeoutId)\n panicButtonTimeoutId = undefined\n }\n const index = extContext.subscriptions.indexOf(onChangeSubscription)\n if (index !== -1) {\n extContext.subscriptions.splice(index, 1)\n }\n updateStatusBarItem()\n updatePanicButton()\n actionsProvider.setRecordingState(false)\n if (force) {\n notificationWithProgress('Recording cancelled')\n logToOutput('Recording cancelled', 'info')\n recording.customFolderName = undefined\n return\n }\n notificationWithProgress('Recording finished')\n logToOutput('Recording finished', 'info')\n recording.endDateTime = new Date()\n return processCsvFile().then(() => {\n // Reset customFolderName after processing is complete\n recording.customFolderName = undefined\n }).catch(err => {\n logToOutput(`Error processing CSV file during stop: ${String(err)}`, 'error')\n recording.customFolderName = undefined\n });\n}\n\n/**\n * Appends 
data from the file queue to the appropriate file in the workspace.\n */\nexport async function appendToFile(): Promise<void> {\n if (isAppending) {\n return\n }\n isAppending = true\n\n const exportPath = getExportPath()\n if (!exportPath) {\n logToOutput('Export path not available in appendToFile, stopping recording.', 'error')\n stopRecording(true)\n isAppending = false\n return\n }\n\n while (fileQueue.length > 0) {\n const itemToAppend = fileQueue.shift()\n if (!itemToAppend) {\n continue\n }\n\n const filePath = path.join(exportPath, itemToAppend.name)\n\n try {\n const directory = path.dirname(filePath)\n if (!fs.existsSync(directory)) {\n fs.mkdirSync(directory, { recursive: true })\n }\n await fs.promises.appendFile(filePath, itemToAppend.content)\n } catch (err) {\n logToOutput(\n `Failed to append to file ${filePath}: ${err}. Item dropped. Content: ${itemToAppend.content.substring(0, 100)}...`,\n 'error'\n )\n }\n }\n isAppending = false\n}\n\n/**\n * Appends an SRT line to the file queue for the previous change.\n *\n * This function is responsible for generating the SRT format line for the previous change and adding it to the file queue.\n * It checks if the SRT export format is enabled, and if so, it generates the SRT line for the previous change and adds it to the file queue.\n *\n * @param processedChanges - An array of processed changes.\n * @param i - The index of the current change in the processedChanges array.\n * @param exportInSrt - A boolean indicating whether the SRT export format is enabled.\n */\nfunction addToSRTFile(processedChanges: Change[], i: number, exportInSrt: boolean) {\n if (!exportInSrt) {\n return\n }\n if (i === 0) {\n return\n }\n addToFileQueue(\n addSrtLine(\n processedChanges[i - 1].sequence,\n processedChanges[i - 1].startTime,\n processedChanges[i - 1].endTime,\n JSON.stringify({\n text: processedChanges[i - 1].text,\n file: processedChanges[i - 1].file,\n language: processedChanges[i - 1].language,\n })\n ),\n 'srt',\n true\n )\n}\n\n/**\n * Returns the new text content based on the change type and the previous change.\n * @param type - The type of the change.\n * @param text - The text of the change.\n * @param previousChange - The previous change.\n * @param rangeOffset - The offset of the range.\n * @param rangeLength - The length of the range.\n */\nfunction getNewTextContent(\n type: string,\n text: string,\n previousChange: Change | null,\n rangeOffset: number,\n rangeLength: number\n): string {\n if (type === ChangeType.TAB) {\n return text\n }\n if (!previousChange) {\n return ''\n }\n return getUpdatedText(previousChange.text, rangeOffset, rangeLength, text)\n}\n\n/**\n * Processes a single CSV line and returns the processed change\n */\nasync function processCSVLine(line: string, previousChange: Change | null): Promise<Change | null> {\n const lineArr = line.split(/,(?=(?:[^""]*""[^""]*"")*[^""]*$)/)\n\n if (Number.isNaN(Number.parseInt(lineArr[0]))) {\n return null\n }\n\n const time = Number.parseInt(lineArr[1])\n const file = removeDoubleQuotes(lineArr[2])\n const rangeOffset = Number.parseInt(lineArr[3])\n const rangeLength = Number.parseInt(lineArr[4])\n const text = unescapeString(removeDoubleQuotes(lineArr[5]))\n const language = lineArr[6]\n const type = lineArr[7]\n\n const newText = getNewTextContent(type, text, previousChange, rangeOffset, rangeLength)\n\n /**\n * Skip exporting changes with the same values to the previous change.\n */\n if (\n previousChange &&\n time === previousChange.startTime &&\n file === 
previousChange.file &&\n newText === previousChange.text &&\n language === previousChange.language\n ) {\n return null\n }\n\n return {\n sequence: previousChange ? previousChange.sequence + 1 : 1,\n file,\n startTime: time,\n endTime: 0,\n language,\n text: newText,\n }\n}\n\n/**\n * Returns the updated text content based on the previous text, range offset, range length, and new text.\n * @param previousText - The previous text.\n * @param rangeOffset - The offset of the range.\n * @param rangeLength - The length of the range.\n * @param newText - The new text.\n */\nfunction getUpdatedText(\n previousText: string,\n rangeOffset: number,\n rangeLength: number,\n newText: string\n): string {\n const textArray = previousText.split('')\n textArray.splice(rangeOffset, rangeLength, newText)\n return textArray.join('')\n}\n\n/**\n * Processes the CSV file and generates the necessary output files.\n */\nasync function processCsvFile(): Promise<void> {\n if (!validateRecordingState()) {\n return\n }\n\n const exportFormats = getConfig().get<string[]>('export.exportFormats', [])\n if (exportFormats.length === 0) {\n logToOutput('No export formats specified', 'info')\n vscode.window.showWarningMessage('No export formats specified')\n return\n }\n\n const exportPath = getExportPath()\n if (!exportPath) {\n return\n }\n\n if (!recording.startDateTime) {\n return\n }\n\n // Use the same custom folder name for reading the source file\n const baseFilePathSource = generateBaseFilePath(\n recording.startDateTime,\n false,\n recording.customFolderName,\n sessionUuid\n )\n if (!baseFilePathSource) {\n return\n }\n\n const filePath = path.join(exportPath, `${baseFilePathSource}.csv`)\n\n try {\n if (!fs.existsSync(filePath)) {\n throw new Error(`Source file not found: ${filePath}`)\n }\n\n const processedChanges: Change[] = []\n\n const rl = readline.createInterface({\n input: fs.createReadStream(filePath),\n crlfDelay: Number.POSITIVE_INFINITY,\n })\n\n for await (const line of rl) {\n const previousChange = processedChanges[processedChanges.length - 1]\n const change = await processCSVLine(line, previousChange)\n\n if (change) {\n if (previousChange) {\n previousChange.endTime = change.startTime\n if (exportFormats.includes('SRT')) {\n addToSRTFile(processedChanges, processedChanges.length, true)\n }\n }\n processedChanges.push(change)\n }\n }\n\n rl.close();\n\n return finalizeRecording(processedChanges, exportFormats);\n\n } catch (err) {\n vscode.window.showErrorMessage(`Error processing recording: ${err}`)\n logToOutput('Error processing CSV file: ' + String(err), 'error')\n return Promise.resolve(); // Resolve even on error after showing message\n }\n}\n\nfunction validateRecordingState(): boolean {\n if (!vscode.workspace.workspaceFolders) {\n logToOutput(\n 'No workspace folder found. 
To process the recording is needed a workspace folder',\n 'error'\n )\n return false\n }\n if (!recording.endDateTime || !recording.startDateTime) {\n logToOutput('Recording date time is not properly set', 'error')\n return false\n }\n return true\n}\n\nfunction finalizeRecording(processedChanges: Change[], exportFormats: string[]): Promise<void> {\n const lastChange = processedChanges[processedChanges.length - 1]\n if (lastChange && recording.endDateTime && recording.startDateTime) {\n lastChange.endTime = recording.endDateTime.getTime() - recording.startDateTime.getTime()\n if (exportFormats.includes('SRT')) {\n addToSRTFile(processedChanges, processedChanges.length, true)\n }\n }\n if (exportFormats.includes('JSON')) {\n addToFileQueue(JSON.stringify(processedChanges), 'json', true)\n }\n return appendToFile().then(() => {\n // Refresh the recordFiles view after export is complete\n vscode.commands.executeCommand('crowd-code.refreshRecordFiles')\n })\n}\n\n/**\n * Adds a line to the SRT file format.\n * @param sequence - The sequence number of the change.\n * @param start - The start time of the change.\n * @param end - The end time of the change.\n * @param text - The text of the change.\n * @returns A string representing a line in the SRT file format.\n */\nfunction addSrtLine(sequence: number, start: number, end: number, text: string): string {\n return `${sequence}\n${formatSrtTime(start)} --> ${formatSrtTime(end)}\n${text}\n\n`\n}\n\n/**\n * Adds content to the file queue.\n * @param content - The content to add.\n * @param fileExtension - The file extension (optional, defaults to 'csv').\n */\nexport function addToFileQueue(\n content: string | undefined,\n fileExtension = 'csv',\n isExport = false\n): void {\n if (!content) {\n return\n }\n if (!recording.startDateTime) {\n return\n }\n // Use the same custom name throughout the recording session\n const baseFilePath = generateBaseFilePath(recording.startDateTime, isExport, recording.customFolderName, sessionUuid)\n if (!baseFilePath) {\n return\n }\n fileQueue.push({\n name: `${baseFilePath}.${fileExtension}`,\n content: content,\n })\n}\n\n/**\n * Updates the status bar item with the current recording status and time.\n */\nexport function updateStatusBarItem(): void {\n if (recording.isRecording) {\n if (getConfig().get('appearance.showTimer') === false) {\n statusBarItem.text = '$(debug-stop)'\n statusBarItem.tooltip = 'Current time: ' + formatDisplayTime(recording.timer)\n }\n if (getConfig().get('appearance.showTimer') === true) {\n statusBarItem.text = '$(debug-stop) ' + formatDisplayTime(recording.timer)\n statusBarItem.tooltip = 'Stop Recording'\n }\n statusBarItem.command = commands.stopRecording\n statusBarItem.show()\n } else {\n const editor = vscode.window.activeTextEditor\n if (!editor) {\n statusBarItem.hide()\n return\n }\n if (getConfig().get('appearance.minimalMode') === true) {\n statusBarItem.text = '$(circle-large-filled)'\n } else {\n statusBarItem.text = '$(circle-large-filled) Start Recording'\n }\n statusBarItem.tooltip = 'Start Recording'\n statusBarItem.command = commands.startRecording\n statusBarItem.show()\n }\n}\n\n/**\n * Creates and updates the panic button status bar item.\n */\nexport function updatePanicButton(): void {\n if (!recording.isRecording) {\n if (panicStatusBarItem) {\n panicStatusBarItem.hide()\n }\n return\n }\n\n // Create panic button if it doesn't exist\n if (!panicStatusBarItem) {\n panicStatusBarItem = vscode.window.createStatusBarItem(vscode.StatusBarAlignment.Right, 8999) 
// Position it to the left of the recording button\n extContext.subscriptions.push(panicStatusBarItem)\n }\n\n const secondsToRemove = (panicButtonPressCount + 1) * 10 // Show what the next press will remove\n panicStatusBarItem.text = '$(refresh)'\n panicStatusBarItem.tooltip = `Remove last ${secondsToRemove} seconds of recording (click again within 3 seconds to remove more)`\n panicStatusBarItem.command = commands.panicButton\n panicStatusBarItem.show()\n}\n\n/**\n * Deletes the last N seconds of recording data from the CSV file.\n * This is a ""panic button"" feature that allows users to quickly remove recent sensitive data.\n * Each successive press within 3 seconds removes more time: 10s, 20s, 30s, etc.\n * After 3 seconds of inactivity, the next press will be treated as a fresh press (10s).\n */\nexport async function panicButton(): Promise<void> {\n if (!recording.isRecording) {\n vscode.window.showWarningMessage('No active recording to remove data from')\n logToOutput('No active recording to remove data from', 'info')\n return\n }\n\n if (!recording.startDateTime) {\n vscode.window.showErrorMessage('Recording start time not available')\n logToOutput('Recording start time not available', 'error')\n return\n }\n\n const exportPath = getExportPath()\n if (!exportPath) {\n vscode.window.showErrorMessage('Export path not available')\n logToOutput('Export path not available', 'error')\n return\n }\n\n const baseFilePath = generateBaseFilePath(recording.startDateTime, false, recording.customFolderName, sessionUuid)\n if (!baseFilePath) {\n vscode.window.showErrorMessage('Could not generate file path')\n logToOutput('Could not generate file path', 'error')\n return\n }\n\n const filePath = path.join(exportPath, `${baseFilePath}.csv`)\n\n try {\n // Check if file exists\n if (!fs.existsSync(filePath)) {\n vscode.window.showWarningMessage('No recording file found to remove data from')\n logToOutput('No recording file found to remove data from', 'info')\n return\n }\n\n // Read the file\n const content = fs.readFileSync(filePath, 'utf-8')\n const lines = content.split('\n')\n \n if (lines.length <= 1) {\n vscode.window.showWarningMessage('Recording file is empty, nothing to remove')\n logToOutput('Recording file is empty, nothing to remove', 'info')\n return\n }\n\n // Calculate how many lines to remove (10 seconds per press)\n const linesToRemove = Math.min((panicButtonPressCount + 1) * 10, lines.length - 1)\n const newLines = lines.slice(0, lines.length - linesToRemove)\n \n // Capture the lines that will be removed for display\n const removedLines = lines.slice(lines.length - linesToRemove)\n\n // Write back to file\n fs.writeFileSync(filePath, newLines.join('\n'))\n\n // Update panic button state\n panicButtonPressCount++\n \n // Set up timeout to reset the counter after 3 seconds of inactivity\n if (panicButtonTimeoutId) {\n clearTimeout(panicButtonTimeoutId)\n }\n panicButtonTimeoutId = setTimeout(() => {\n panicButtonPressCount = 0\n accumulatedRemovedContent = [] // Clear accumulated content\n updatePanicButton()\n }, PANIC_BUTTON_TIMEOUT)\n \n updatePanicButton()\n\n const secondsToRemove = panicButtonPressCount * 10\n const actualLinesRemoved = lines.length - newLines.length\n \n // Accumulate removed content and show immediate popup\n if (removedLines.length > 0) {\n const nonEmptyLines = removedLines.filter(line => line.trim())\n if (nonEmptyLines.length > 0) {\n // Create a simple, readable summary of removed content\n const contentSummary = nonEmptyLines.map(line => {\n // 
Extract just the text content from CSV for cleaner display\n const parts = line.split(',')\n if (parts.length >= 6) {\n const textContent = parts[5].replace(/^""|""$/g, '') // Remove quotes\n // Clean up common escape sequences\n const cleanText = textContent\n .replace(/\\n/g, '\n')\n .replace(/\\t/g, '\t')\n .replace(/\\r/g, '\r')\n return { content: cleanText, sequence: Number.parseInt(parts[0]) }\n }\n return { content: line, sequence: Number.parseInt(line.split(',')[0]) }\n }).filter(item => item.content.trim().length > 0)\n \n // Add to accumulated content\n accumulatedRemovedContent.push(...contentSummary)\n \n // Sort by sequence number to show in original file order\n const sortedContent = accumulatedRemovedContent.sort((a, b) => a.sequence - b.sequence)\n \n // Show immediate popup with accumulated content\n const totalContent = sortedContent.map(item => item.content).join(' ')\n const summaryText = totalContent.length > 100 \n ? totalContent.substring(0, 100) + '...' \n : totalContent\n \n vscode.window.showInformationMessage(\n `Removed content: ""${summaryText}""`,\n 'Dismiss'\n )\n }\n }\n\n } catch (error) {\n const errorMessage = `Error during panic button operation: ${error}`\n vscode.window.showErrorMessage(errorMessage)\n logToOutput(errorMessage, 'error')\n }\n}",typescript,tab
2,229309,"src/recording.ts",8711,0,"",typescript,selection_mouse
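The record above captures the extension's panic-button implementation: each press removes the last ten seconds of recording (the recorded code approximates ten seconds as ten CSV lines), and presses within a three-second window escalate the removal (10s, 20s, 30s, ...). A minimal standalone sketch of that escalation logic follows; the constant names are assumptions, not the extension's own identifiers.

```typescript
// Sketch of the escalating panic-button removal (assumed constant names;
// the recorded code hard-codes 10 and approximates "ten seconds" as ten CSV lines).
const LINES_PER_PRESS = 10
const RESET_AFTER_MS = 3000 // 3 seconds of inactivity resets the counter

let pressCount = 0
let resetTimer: NodeJS.Timeout | undefined

function onPanicPress(totalLines: number): number {
  // Never remove the CSV header line, hence totalLines - 1.
  const linesToRemove = Math.min((pressCount + 1) * LINES_PER_PRESS, totalLines - 1)
  pressCount++
  // A fresh press after three quiet seconds starts over at ten lines.
  if (resetTimer) clearTimeout(resetTimer)
  resetTimer = setTimeout(() => { pressCount = 0 }, RESET_AFTER_MS)
  return linesToRemove
}
```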
69a563db57051868fc3ecdda3a43f162385be48f5447fe691a10177ee4dc3a0e/crowd-code-98469f8a-b7a1-4997-8c1b-664c4f92dfac1751019295411-2025_06_27-12.14.59.339/source.csv
ADDED
The diff for this file is too large to render. See raw diff.
69a563db57051868fc3ecdda3a43f162385be48f5447fe691a10177ee4dc3a0e/crowd-code-b6479f87-a6dd-4f48-918b-47aeda5068fc1750926520523-2025_06_26-10.29.13.179/source.csv
ADDED
The diff for this file is too large to render. See raw diff.
69a563db57051868fc3ecdda3a43f162385be48f5447fe691a10177ee4dc3a0e/crowd-code-bc8678ed-7352-41d3-8ab3-bf70f7958a0b1750745114983-2025_06_24-08.05.29.186/source.csv
ADDED
@@ -0,0 +1,24 @@
Sequence,Time,File,RangeOffset,RangeLength,Text,Language,Type
1,90,"extract_vid_list.py",0,0,"from yt_dlp import YoutubeDL\n\ndef extract_video_links_from_playlist(playlist_url, output_file):\n ydl_opts = {\n 'quiet': True,\n 'extract_flat': True, # Do not download the videos, only extract metadata\n 'skip_download': True, # Skip actual download\n }\n\n with YoutubeDL(ydl_opts) as ydl:\n result = ydl.extract_info(playlist_url, download=False)\n \n if 'entries' in result:\n video_urls = [entry['url'] for entry in result['entries']]\n \n with open(output_file, 'w') as f:\n for video_url in video_urls:\n f.write(f""{video_url}\n"")\n print(f""Video links extracted to {output_file}"")\n else:\n print(""No videos found in the playlist."")\n\nif __name__ == ""__main__"":\n playlist_url = input(""Enter the YouTube playlist URL: "")\n output_file = ""links/tmp-1.txt""\n extract_video_links_from_playlist(playlist_url, output_file)\n\n# hello hello bla bla bla bla",python,tab
2,283,"tasks",0,0,"",Log,tab
3,297,"extract_vid_list.py",0,0,"",python,tab
4,314,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,0,"",Log,tab
5,573,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,0,"8:05:29 AM [info] Activating crowd-code\n8:05:29 AM [info] Recording started\n8:05:29 AM [info] Initializing git provider using file system watchers...\n8:05:29 AM [info] Git repository found\n8:05:29 AM [info] Git provider initialized successfully\n8:05:29 AM [info] Initial git state: [object Object]\n",Log,content
6,751,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,0,"",Log,selection_command
7,2939,"extract_vid_list.py",0,0,"",python,tab
8,3510,"extract_vid_list.py",992,0,"",python,selection_command
9,3673,"extract_vid_list.py",991,0,"",python,selection_command
10,3756,"extract_vid_list.py",990,0,"",python,selection_command
11,4504,"extract_vid_list.py",993,0,"",python,selection_command
12,4765,"extract_vid_list.py",992,1,"",python,content
13,4896,"extract_vid_list.py",991,1,"",python,content
14,5040,"extract_vid_list.py",990,1,"",python,content
15,5336,"extract_vid_list.py",990,0,"b",python,content
16,5350,"extract_vid_list.py",991,0,"",python,selection_keyboard
17,5584,"extract_vid_list.py",991,0,"a",python,content
18,5598,"extract_vid_list.py",992,0,"",python,selection_keyboard
19,6012,"extract_vid_list.py",991,1,"",python,content
20,6146,"extract_vid_list.py",991,0,"l",python,content
21,6159,"extract_vid_list.py",992,0,"",python,selection_keyboard
22,6250,"extract_vid_list.py",992,0,"a",python,content
23,6262,"extract_vid_list.py",993,0,"",python,selection_keyboard
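Every source.csv in this dataset begins with the header row shown above (Sequence,Time,File,RangeOffset,RangeLength,Text,Language,Type); the Text field is double-quoted with inner quotes doubled and newlines escaped as \n. A minimal row parser is sketched below, reusing the quote-aware split visible in the recorded processCSVLine; the further unescaping of \n, \t, and \r that the recorded code performs is omitted here for brevity.

```typescript
interface CsvEvent {
  sequence: number
  time: number      // ms since the recording started
  file: string
  rangeOffset: number
  rangeLength: number
  text: string
  language: string
  type: string      // e.g. 'tab', 'content', 'selection_command'
}

// Split on commas that fall outside double-quoted fields, mirroring the
// regex in the recorded processCSVLine.
function parseRow(line: string): CsvEvent | null {
  const cols = line.split(/,(?=(?:[^"]*"[^"]*")*[^"]*$)/)
  if (cols.length < 8 || Number.isNaN(Number.parseInt(cols[0]))) {
    return null // header row or malformed line
  }
  // Strip the surrounding quotes and undo CSV quote-doubling.
  const unquote = (s: string) => s.replace(/^"|"$/g, '').replace(/""/g, '"')
  return {
    sequence: Number.parseInt(cols[0]),
    time: Number.parseInt(cols[1]),
    file: unquote(cols[2]),
    rangeOffset: Number.parseInt(cols[3]),
    rangeLength: Number.parseInt(cols[4]),
    text: unquote(cols[5]),
    language: cols[6],
    type: cols[7],
  }
}
```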
69a563db57051868fc3ecdda3a43f162385be48f5447fe691a10177ee4dc3a0e/crowd-code-eb4593ba-8717-4311-aac2-0669058b8e141750152994551-2025_06_17-11.36.51.515/source.csv
ADDED
@@ -0,0 +1,5 @@
Sequence,Time,File,RangeOffset,RangeLength,Text,Language,Type
1,9,"crowd-code/make-ext.sh",0,0,"rm -rf node_modules/\nnpm install\nnpm run compile\nvsce package\nchmod 644 crowd-code-*.vsix\nscp crowd-code-1.1.3.vsix [email protected]:/u/halle/mahajanm/home_at/home_page/html-data",shellscript,tab
2,338,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,0,"11:36:51 AM [info] Activating crowd-code\n11:36:51 AM [info] Welcome back maharajamihir. Your user-id is '69a563db57051868fc3ecdda3a43f162385be48f5447fe691a10177ee4dc3a0e'. Happy coding!\n11:36:51 AM [info] Recording started\n",Log,tab
3,2290,"extension-output-pdoom-org.crowd-code-#1-crowd-code",223,0,"",Log,selection_mouse
4,40994,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,0,"",Log,tab
69a563db57051868fc3ecdda3a43f162385be48f5447fe691a10177ee4dc3a0e/crowd-code-ecd6bcde-7cf4-4819-b2c7-c5b474828daa1750689105661-2025_06_23-16.31.57.981/source.csv
ADDED
@@ -0,0 +1,8 @@
Sequence,Time,File,RangeOffset,RangeLength,Text,Language,Type
1,7,"src/recording.ts",0,0,"import * as fs from 'node:fs'\nimport * as util from 'node:util'\nimport * as path from 'node:path'\nimport * as vscode from 'vscode'\nimport * as readline from 'node:readline'\nimport axios from 'axios';\nimport {\n getEditorFileName,\n escapeString,\n getEditorLanguage,\n notificationWithProgress,\n generateFileName,\n formatDisplayTime,\n getExportPath,\n logToOutput,\n formatSrtTime,\n getConfig,\n removeDoubleQuotes,\n unescapeString,\n addToGitignore,\n} from './utilities'\nimport { type File, ChangeType, type CSVRowBuilder, type Change, type Recording } from './types'\nimport { extContext, statusBarItem, actionsProvider } from './extension'\n\nexport const commands = {\n openSettings: 'vs-code-recorder.openSettings',\n startRecording: 'vs-code-recorder.startRecording',\n stopRecording: 'vs-code-recorder.stopRecording',\n}\n\nexport const recording: Recording = {\n isRecording: false,\n timer: 0,\n startDateTime: null,\n endDateTime: null,\n sequence: 0,\n customFolderName: '',\n activatedFiles: new Set<string>(),\n}\n\nlet intervalId: NodeJS.Timeout\nconst fileQueue: File[] = []\nlet isAppending = false\n\nlet uploadIntervalId: NodeJS.Timeout;\nconst sessionUuid = vscode.env.sessionId;\n\nconst CROWD_CODE_API_GATEWAY_URL = process.env.CROWD_CODE_API_GATEWAY_URL;\n\n/**\n * Builds a CSV row with the given parameters.\n *\n * @param {CSVRowBuilder} sequence - The sequence number of the change.\n * @param {CSVRowBuilder} rangeOffset - The offset of the changed range.\n * @param {CSVRowBuilder} rangeLength - The length of the changed range.\n * @param {CSVRowBuilder} text - The text of the change.\n * @param {string} type - The type of the change (optional, defaults to 'content').\n * @return {string} A CSV row string with the provided information.\n */\nexport function buildCsvRow({\n sequence,\n rangeOffset,\n rangeLength,\n text,\n type = ChangeType.CONTENT,\n}: CSVRowBuilder): string | undefined {\n if (!recording.startDateTime) {\n return\n }\n\n const time = new Date().getTime() - recording.startDateTime.getTime()\n\n if (type === ChangeType.HEADING) {\n return 'Sequence,Time,File,RangeOffset,RangeLength,Text,Language,Type\n'\n }\n\n if (type === ChangeType.TERMINAL_FOCUS || type === ChangeType.TERMINAL_COMMAND || type === ChangeType.TERMINAL_OUTPUT) {\n return `${sequence},${time},""TERMINAL"",${rangeOffset},${rangeLength},""${escapeString(text)}"",,${type}\n`\n }\n\n const editorFileName = getEditorFileName()\n return `${sequence},${time},""${editorFileName}"",${rangeOffset},${rangeLength},""${escapeString(text)}"",${getEditorLanguage()},${type}\n`\n}\n\n/**\n * Checks if the current file being edited is within the configured export path.\n * This is used to determine if the current file should be recorded or not.\n *\n * @returns {boolean} `true` if the current file is within the export path, `false` otherwise.\n */\nexport function isCurrentFileExported(): boolean {\n const editor = vscode.window.activeTextEditor\n const filename = editor?.document.fileName.replaceAll('\\', '/')\n const exportPath = getExportPath()\n if (!editor || !filename || !exportPath) {\n return false\n }\n return filename.startsWith(exportPath)\n}\n\nconst onChangeSubscription = vscode.workspace.onDidChangeTextDocument(event => {\n if (!recording.isRecording) {\n return\n }\n\n if (isCurrentFileExported()) {\n return\n }\n const editor = vscode.window.activeTextEditor\n if (editor && event.document === editor.document) {\n for (const change of event.contentChanges) {\n 
recording.sequence++\n addToFileQueue(\n buildCsvRow({\n sequence: recording.sequence,\n rangeOffset: change.rangeOffset,\n rangeLength: change.rangeLength,\n text: change.text,\n })\n )\n appendToFile()\n }\n }\n})\n\n/**\n * Creates the recording folder if it doesn't exist.\n * @param folderPath - The path to the recording folder.\n */\nfunction createRecordingFolder(folderPath: string): void {\n if (!fs.existsSync(folderPath)) {\n fs.mkdirSync(folderPath, { recursive: true })\n }\n}\n\n/**\n * Starts the recording process and initializes necessary variables.\n */\nexport async function startRecording(): Promise<void> {\n if (!vscode.window.activeTextEditor) {\n vscode.window.showErrorMessage(vscode.l10n.t('No active text editor'))\n logToOutput(vscode.l10n.t('No active text editor'), 'info')\n return\n }\n if (recording.isRecording) {\n notificationWithProgress(vscode.l10n.t('Already recording'))\n logToOutput(vscode.l10n.t('Already recording'), 'info')\n return\n }\n const exportPath = getExportPath()\n if (!exportPath) {\n return\n }\n\n // If the setting is enabled and the path is inside the workspace, add it to .gitignore\n if (\n getConfig().get<boolean>('export.addToGitignore') &&\n getConfig().get<string>('export.exportPath')?.startsWith('${workspaceFolder}')\n ) {\n await addToGitignore()\n }\n\n recording.startDateTime = new Date()\n recording.activatedFiles = new Set<string>()\n\n // Ask for folder name if enabled in settings\n let customFolderName: string | undefined\n if (getConfig().get('recording.askFolderName')) {\n customFolderName = await vscode.window.showInputBox({\n prompt: vscode.l10n.t('Enter a name for the recording folder'),\n placeHolder: vscode.l10n.t('Enter recording folder name'),\n })\n if (!customFolderName) {\n stopRecording(true)\n return\n }\n recording.customFolderName = customFolderName\n }\n\n const fileName = generateFileName(recording.startDateTime, false, recording.customFolderName, sessionUuid)\n if (!fileName) {\n stopRecording(true)\n return\n }\n\n // Create the recording folder\n const folderPath = path.dirname(path.join(exportPath, fileName))\n createRecordingFolder(folderPath)\n\n recording.isRecording = true\n recording.timer = 0\n recording.endDateTime = null\n recording.sequence = 0\n intervalId = setInterval(() => {\n recording.timer++\n updateStatusBarItem()\n }, 1000)\n notificationWithProgress(vscode.l10n.t('Recording started'))\n logToOutput(vscode.l10n.t('Recording started'), 'info')\n\n const editorText = vscode.window.activeTextEditor?.document.getText()\n const activeEditorUri = vscode.window.activeTextEditor?.document.uri.toString()\n\n if (editorText !== undefined && activeEditorUri) {\n recording.sequence++\n const csvRow = {\n sequence: recording.sequence,\n rangeOffset: 0,\n rangeLength: 0,\n text: editorText,\n type: ChangeType.TAB,\n }\n addToFileQueue(buildCsvRow({ ...csvRow, type: ChangeType.HEADING }))\n addToFileQueue(buildCsvRow(csvRow))\n appendToFile()\n recording.activatedFiles.add(activeEditorUri)\n actionsProvider.setCurrentFile(vscode.window.activeTextEditor.document.fileName)\n }\n\n extContext.subscriptions.push(onChangeSubscription)\n updateStatusBarItem()\n actionsProvider.setRecordingState(true)\n\n // Set up a timer to send data to the Lambda endpoint periodically\n uploadIntervalId = setInterval(async () => {\n if (!exportPath) {\n return;\n }\n\n if (typeof CROWD_CODE_API_GATEWAY_URL !== 'string' || !CROWD_CODE_API_GATEWAY_URL.trim()) {\n logToOutput(""CROWD_CODE_API_GATEWAY_URL must be a non-empty 
string. Please check your build configuration."", 'error');\n logToOutput(`CROWD_CODE_API_GATEWAY_URL: ${CROWD_CODE_API_GATEWAY_URL}`, 'info');\n return;\n }\n\n const filePath = path.join(exportPath, `${fileName}.csv`);\n\n try {\n const fileContent = await fs.promises.readFile(filePath, 'utf-8');\n\n if (fileContent) {\n const payload = {\n fileName: `${fileName}.csv`,\n content: fileContent\n };\n await axios.post(CROWD_CODE_API_GATEWAY_URL, payload);\n console.log(`Successfully sent ${payload.fileName} to Lambda endpoint.`);\n }\n } catch (error: any) {\n if (error.code === 'ENOENT') {\n console.warn(`File not found at ${filePath}. It might be created on first write.`);\n } else {\n console.error(`Error sending data to Lambda: ${error.message}`);\n if (axios.isAxiosError(error) && error.response) {\n console.error(""Lambda response status:"", error.response.status);\n console.error(""Lambda response data:"", error.response.data);\n }\n }\n }\n }, 0.2 * 60 * 1000); // 5 minutes\n}\n\n/**\n * Stops the recording process and finalizes the recording data.\n * @param context - The extension context.\n */\nexport function stopRecording(force = false): Promise<void> | void {\n if (!recording.isRecording) {\n notificationWithProgress(vscode.l10n.t('Not recording'))\n return\n }\n\n recording.isRecording = false\n clearInterval(intervalId)\n clearInterval(uploadIntervalId); // Clear the upload timer\n recording.timer = 0\n recording.activatedFiles?.clear()\n const index = extContext.subscriptions.indexOf(onChangeSubscription)\n if (index !== -1) {\n extContext.subscriptions.splice(index, 1)\n }\n updateStatusBarItem()\n actionsProvider.setRecordingState(false)\n if (force) {\n notificationWithProgress(vscode.l10n.t('Recording cancelled'))\n logToOutput(vscode.l10n.t('Recording cancelled'), 'info')\n recording.customFolderName = undefined\n return\n }\n notificationWithProgress(vscode.l10n.t('Recording finished'))\n logToOutput(vscode.l10n.t('Recording finished'), 'info')\n recording.endDateTime = new Date()\n return processCsvFile().then(() => {\n // Reset customFolderName after processing is complete\n recording.customFolderName = undefined\n }).catch(err => {\n logToOutput(vscode.l10n.t('Error processing CSV file during stop: {0}', String(err)), 'error')\n recording.customFolderName = undefined\n });\n}\n\n/**\n * Appends data from the file queue to the appropriate file in the workspace.\n */\nexport async function appendToFile(): Promise<void> {\n if (isAppending) {\n return\n }\n isAppending = true\n\n const exportPath = getExportPath()\n if (!exportPath) {\n logToOutput('Export path not available in appendToFile, stopping recording.', 'error')\n stopRecording(true)\n isAppending = false\n return\n }\n\n while (fileQueue.length > 0) {\n const itemToAppend = fileQueue.shift()\n if (!itemToAppend) {\n continue\n }\n\n const filePath = path.join(exportPath, itemToAppend.name)\n\n try {\n const directory = path.dirname(filePath)\n if (!fs.existsSync(directory)) {\n fs.mkdirSync(directory, { recursive: true })\n }\n await fs.promises.appendFile(filePath, itemToAppend.content)\n } catch (err) {\n logToOutput(\n `Failed to append to file ${filePath}: ${err}. Item dropped. 
Content: ${itemToAppend.content.substring(0, 100)}...`,\n 'error'\n )\n }\n }\n isAppending = false\n}\n\n/**\n * Appends an SRT line to the file queue for the previous change.\n *\n * This function is responsible for generating the SRT format line for the previous change and adding it to the file queue.\n * It checks if the SRT export format is enabled, and if so, it generates the SRT line for the previous change and adds it to the file queue.\n *\n * @param processedChanges - An array of processed changes.\n * @param i - The index of the current change in the processedChanges array.\n * @param exportInSrt - A boolean indicating whether the SRT export format is enabled.\n */\nfunction addToSRTFile(processedChanges: Change[], i: number, exportInSrt: boolean) {\n if (!exportInSrt) {\n return\n }\n if (i === 0) {\n return\n }\n addToFileQueue(\n addSrtLine(\n processedChanges[i - 1].sequence,\n processedChanges[i - 1].startTime,\n processedChanges[i - 1].endTime,\n JSON.stringify({\n text: processedChanges[i - 1].text,\n file: processedChanges[i - 1].file,\n language: processedChanges[i - 1].language,\n })\n ),\n 'srt',\n true\n )\n}\n\n/**\n * Returns the new text content based on the change type and the previous change.\n * @param type - The type of the change.\n * @param text - The text of the change.\n * @param previousChange - The previous change.\n * @param rangeOffset - The offset of the range.\n * @param rangeLength - The length of the range.\n */\nfunction getNewTextContent(\n type: string,\n text: string,\n previousChange: Change | null,\n rangeOffset: number,\n rangeLength: number\n): string {\n if (type === ChangeType.TAB) {\n return text\n }\n if (!previousChange) {\n return ''\n }\n return getUpdatedText(previousChange.text, rangeOffset, rangeLength, text)\n}\n\n/**\n * Processes a single CSV line and returns the processed change\n */\nasync function processCSVLine(line: string, previousChange: Change | null): Promise<Change | null> {\n const lineArr = line.split(/,(?=(?:[^""]*""[^""]*"")*[^""]*$)/)\n\n if (Number.isNaN(Number.parseInt(lineArr[0]))) {\n return null\n }\n\n const time = Number.parseInt(lineArr[1])\n const file = removeDoubleQuotes(lineArr[2])\n const rangeOffset = Number.parseInt(lineArr[3])\n const rangeLength = Number.parseInt(lineArr[4])\n const text = unescapeString(removeDoubleQuotes(lineArr[5]))\n const language = lineArr[6]\n const type = lineArr[7]\n\n const newText = getNewTextContent(type, text, previousChange, rangeOffset, rangeLength)\n\n /**\n * Skip exporting changes with the same values to the previous change.\n */\n if (\n previousChange &&\n time === previousChange.startTime &&\n file === previousChange.file &&\n newText === previousChange.text &&\n language === previousChange.language\n ) {\n return null\n }\n\n return {\n sequence: previousChange ? 
previousChange.sequence + 1 : 1,\n file,\n startTime: time,\n endTime: 0,\n language,\n text: newText,\n }\n}\n\n/**\n * Returns the updated text content based on the previous text, range offset, range length, and new text.\n * @param previousText - The previous text.\n * @param rangeOffset - The offset of the range.\n * @param rangeLength - The length of the range.\n * @param newText - The new text.\n */\nfunction getUpdatedText(\n previousText: string,\n rangeOffset: number,\n rangeLength: number,\n newText: string\n): string {\n const textArray = previousText.split('')\n textArray.splice(rangeOffset, rangeLength, newText)\n return textArray.join('')\n}\n\n/**\n * Processes the CSV file and generates the necessary output files.\n */\nasync function processCsvFile(): Promise<void> {\n if (!validateRecordingState()) {\n return\n }\n\n const exportFormats = getConfig().get<string[]>('export.exportFormats', [])\n if (exportFormats.length === 0) {\n logToOutput(vscode.l10n.t('No export formats specified'), 'info')\n vscode.window.showWarningMessage(vscode.l10n.t('No export formats specified'))\n return\n }\n\n const exportPath = getExportPath()\n if (!exportPath) {\n return\n }\n\n if (!recording.startDateTime) {\n return\n }\n\n // Use the same custom folder name for reading the source file\n const sourceFileName = generateFileName(\n recording.startDateTime,\n false,\n recording.customFolderName,\n sessionUuid\n )\n if (!sourceFileName) {\n return\n }\n\n const filePath = path.join(exportPath, `${sourceFileName}.csv`)\n\n try {\n if (!fs.existsSync(filePath)) {\n throw new Error(`Source file not found: ${filePath}`)\n }\n\n const processedChanges: Change[] = []\n\n const rl = readline.createInterface({\n input: fs.createReadStream(filePath),\n crlfDelay: Number.POSITIVE_INFINITY,\n })\n\n for await (const line of rl) {\n const previousChange = processedChanges[processedChanges.length - 1]\n const change = await processCSVLine(line, previousChange)\n\n if (change) {\n if (previousChange) {\n previousChange.endTime = change.startTime\n if (exportFormats.includes('SRT')) {\n addToSRTFile(processedChanges, processedChanges.length, true)\n }\n }\n processedChanges.push(change)\n }\n }\n\n rl.close();\n\n return finalizeRecording(processedChanges, exportFormats);\n\n } catch (err) {\n vscode.window.showErrorMessage(`Error processing recording: ${err}`)\n logToOutput(vscode.l10n.t('Error processing CSV file: {0}', String(err)), 'error')\n return Promise.resolve(); // Resolve even on error after showing message\n }\n}\n\nfunction validateRecordingState(): boolean {\n if (!vscode.workspace.workspaceFolders) {\n logToOutput(\n vscode.l10n.t(\n 'No workspace folder found. 
To process the recording is needed a workspace folder'\n ),\n 'error'\n )\n return false\n }\n if (!recording.endDateTime || !recording.startDateTime) {\n logToOutput(vscode.l10n.t('Recording date time is not properly set'), 'error')\n return false\n }\n return true\n}\n\nfunction finalizeRecording(processedChanges: Change[], exportFormats: string[]): Promise<void> {\n const lastChange = processedChanges[processedChanges.length - 1]\n if (lastChange && recording.endDateTime && recording.startDateTime) {\n lastChange.endTime = recording.endDateTime.getTime() - recording.startDateTime.getTime()\n if (exportFormats.includes('SRT')) {\n addToSRTFile(processedChanges, processedChanges.length, true)\n }\n }\n if (exportFormats.includes('JSON')) {\n addToFileQueue(JSON.stringify(processedChanges), 'json', true)\n }\n return appendToFile().then(() => {\n // Refresh the recordFiles view after export is complete\n vscode.commands.executeCommand('vs-code-recorder.refreshRecordFiles')\n })\n}\n\n/**\n * Adds a line to the SRT file format.\n * @param sequence - The sequence number of the change.\n * @param start - The start time of the change.\n * @param end - The end time of the change.\n * @param text - The text of the change.\n * @returns A string representing a line in the SRT file format.\n */\nfunction addSrtLine(sequence: number, start: number, end: number, text: string): string {\n return `${sequence}\n${formatSrtTime(start)} --> ${formatSrtTime(end)}\n${text}\n\n`\n}\n\n/**\n * Adds content to the file queue.\n * @param content - The content to add.\n * @param fileExtension - The file extension (optional, defaults to 'csv').\n */\nexport function addToFileQueue(\n content: string | undefined,\n fileExtension = 'csv',\n isExport = false\n): void {\n if (!content) {\n return\n }\n if (!recording.startDateTime) {\n return\n }\n // Use the same custom name throughout the recording session\n const fileName = generateFileName(recording.startDateTime, isExport, recording.customFolderName, sessionUuid)\n if (!fileName) {\n return\n }\n fileQueue.push({\n name: `${fileName}.${fileExtension}`,\n content: content,\n })\n}\n\n/**\n * Updates the status bar item with the current recording status and time.\n */\nexport function updateStatusBarItem(): void {\n const editor = vscode.window.activeTextEditor\n if (!editor && !recording) {\n statusBarItem.hide()\n return\n }\n if (recording.isRecording) {\n if (getConfig().get('appearance.showTimer') === false) {\n statusBarItem.text = '$(debug-stop)'\n statusBarItem.tooltip = vscode.l10n.t('Current time: {0}', formatDisplayTime(recording.timer))\n }\n if (getConfig().get('appearance.showTimer') === true) {\n statusBarItem.text = `$(debug-stop) ${formatDisplayTime(recording.timer)}`\n statusBarItem.tooltip = vscode.l10n.t('Stop Recording')\n }\n statusBarItem.command = commands.stopRecording\n } else {\n if (getConfig().get('appearance.minimalMode') === true) {\n statusBarItem.text = '$(circle-large-filled)'\n } else {\n statusBarItem.text = `$(circle-large-filled) ${vscode.l10n.t('Start Recording')}`\n }\n statusBarItem.tooltip = vscode.l10n.t('Start Recording')\n statusBarItem.command = commands.startRecording\n }\n statusBarItem.show()\n}",typescript,tab
2,823,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,0,"4:31:57 PM [info] Activating crowd-code\n4:31:57 PM [info] Welcome back maharajamihir. Your user-id is '69a563db57051868fc3ecdda3a43f162385be48f5447fe691a10177ee4dc3a0e'. Happy coding!\n4:31:57 PM [info] Recording started\n",Log,tab
3,35676,"src/recording.ts",0,0,"",typescript,tab
4,79594,"webpack.config.js",0,0,"const path = require('node:path')\nconst webpack = require('webpack')\n\n//@ts-check\n/** @typedef {import('webpack').Configuration} WebpackConfig **/\n\n/** @type WebpackConfig */\nconst extensionConfig = {\n\ttarget: 'node', // VS Code extensions run in a Node.js-context 📖 -> https://webpack.js.org/configuration/node/\n\tmode: 'none', // this leaves the source code as close as possible to the original (when packaging we set this to 'production')\n\n\tentry: './src/extension.ts', // the entry point of this extension, 📖 -> https://webpack.js.org/configuration/entry-context/\n\toutput: {\n\t\t// the bundle is stored in the 'out' folder (check package.json), 📖 -> https://webpack.js.org/configuration/output/\n\t\tpath: path.resolve(__dirname, 'out'),\n\t\tfilename: 'extension.js',\n\t\tlibraryTarget: 'commonjs2',\n\t},\n\texternals: {\n\t\tvscode: 'commonjs vscode', // the vscode-module is created on-the-fly and must be excluded. Add other modules that cannot be webpack'ed, 📖 -> https://webpack.js.org/configuration/externals/\n\t\t// modules added here also need to be added in the .vscodeignore file\n\t},\n\tresolve: {\n\t\t// support reading TypeScript and JavaScript files, 📖 -> https://github.com/TypeStrong/ts-loader\n\t\textensions: ['.ts', '.js'],\n\t},\n\tmodule: {\n\t\trules: [\n\t\t\t{\n\t\t\t\ttest: /\.ts$/,\n\t\t\t\texclude: /node_modules/,\n\t\t\t\tuse: [\n\t\t\t\t\t{\n\t\t\t\t\t\tloader: 'ts-loader',\n\t\t\t\t\t},\n\t\t\t\t],\n\t\t\t},\n\t\t],\n\t},\n\tdevtool: 'nosources-source-map',\n\tinfrastructureLogging: {\n\t\tlevel: 'log', // enables logging required for problem matchers\n\t},\n\tplugins: [\n new webpack.DefinePlugin({\n 'process.env.CROWD_CODE_API_GATEWAY_URL': JSON.stringify(process.env.CROWD_CODE_API_GATEWAY_URL)\n })\n ]\n\n}\nmodule.exports = [extensionConfig]\n",javascript,tab
5,864791,"TERMINAL",0,0,"ls ../INFRA_SETUP.md",,terminal_command
6,864820,"TERMINAL",0,0,"]633;E;ls ../INFRA_SETUP.md ;a808ac8a-c4ad-476e-8ef8-0a741b3ad045]633;C../INFRA_SETUP.md\r\n]0;maharajamihir@mihir-xps139305:~/Projects/coding-extension/crowd-code]633;D;0",,terminal_output
7,865821,"/home/maharajamihir/Projects/coding-extension/INFRA_SETUP.md",0,0,"# AWS Serverless Backend Setup for VS Code Recorder Extension\n\nThis guide provides step-by-step instructions to set up the necessary AWS infrastructure for the VS Code Recorder extension's serverless backend. This includes creating an IAM Role for the Lambda function, deploying the Lambda function, and configuring API Gateway to expose the Lambda function as a public endpoint with guardrails.\n\n**Assumptions:**\n\n* You have an AWS account and access to the AWS Management Console.\n* You have the AWS CLI configured (optional, but good for local testing/packaging).\n* You have Node.js and npm/yarn installed to build the Lambda package.\n\n---\n\n## 1. Prepare the Lambda Deployment Package\n\nBefore deploying the Lambda function, you need to package its code and dependencies into a ZIP file.\n\n1. **Navigate to the `lambda-backend` directory:**\n ```bash\n cd lambda-backend\n ```\n\n2. **Install dependencies and build the TypeScript code:**\n ```bash\n npm install\n npm run build\n ```\n\n3. **Create the deployment package (ZIP file):**\n Navigate into the `dist` folder, then zip its contents along with `node_modules` and `package.json`.\n ```bash\n cd dist\n zip -r ../lambda-package.zip . ../node_modules ../package.json\n # Alternatively, for better control, ensure node_modules is directly in the root of the zip\n # From the lambda-backend directory:\n # zip -r lambda-package.zip dist node_modules package.json\n ```\n Ensure the `lambda-package.zip` file is created in the `lambda-backend` directory. The `index.js` (compiled from `index.ts`) should be at the root of the ZIP file when extracted.\n\n---\n\n## 2. Create IAM Role for AWS Lambda\n\nThis role will grant your Lambda function the necessary permissions to write to S3 and log to CloudWatch.\n\n1. **Go to the IAM Console:**\n * Open the AWS Management Console.\n * Search for ""IAM"" in the search bar and select ""IAM"".\n\n2. **Create a new Role:**\n * In the left-hand navigation pane, click on **Roles**.\n * Click the **Create role** button.\n\n3. **Select trusted entity:**\n * For ""Trusted entity type"", choose **AWS service**.\n * For ""Use case"", select **Lambda**.\n * Click **Next**.\n\n4. **Add permissions:**\n * In the ""Add permissions"" section, search for and select:\n * `AWSLambdaBasicExecutionRole` (This allows Lambda to write logs to CloudWatch Logs).\n * Click **Next**.\n\n5. **Name the role and create:**\n * For ""Role name"", enter `LambdaS3UploadRole` (or a similar descriptive name).\n * (Optional) Add a ""Description"" like ""Role for Lambda to upload recording data to S3.""\n * Click **Create role**.\n\n6. **Attach Custom Inline Policy for S3:**\n * Once the role is created, find and click on the role you just created (`LambdaS3UploadRole`).\n * On the role's summary page, click **Add permissions** -> **Create inline policy**.\n * Choose the **JSON** tab and paste the following policy:\n\n ```json\n {\n ""Version"": ""2012-10-17"",\n ""Statement"": [\n {\n ""Effect"": ""Allow"",\n ""Action"": [\n ""s3:PutObject""\n ],\n ""Resource"": ""arn:aws:s3:::crowd-code-bucket/*""\n }\n ]\n }\n ```\n * Click **Next**.\n * For ""Policy name"", enter `S3PutObjectPolicy` (or a similar name).\n * Click **Create policy**.\n\n *Note: Ensure the S3 bucket `crowd-code-bucket` exists before the Lambda function attempts to write to it. If it doesn't, create it now in the S3 console.*\n\n---\n\n## 3. 
Deploy the AWS Lambda Function\n\nNow you will create and deploy your Node.js Lambda function.\n\n1. **Go to the Lambda Console:**\n * Open the AWS Management Console.\n * Search for ""Lambda"" and select ""Lambda"".\n\n2. **Create a new Function:**\n * Click the **Create function** button.\n * Select **Author from scratch**.\n\n3. **Configure basic settings:**\n * **Function name:** `VSCodeRecordingUploader`\n * **Runtime:** Choose `Node.js 20.x` (or the latest LTS version available).\n * **Architecture:** `x86_64` (default)\n * **Permissions:**\n * Under ""Change default execution role"", select **Use an existing role**.\n * Choose the `LambdaS3UploadRole` (or the name you gave it) from the dropdown.\n * Click **Create function**.\n\n4. **Upload your code:**\n * Once the function is created, you'll be on its configuration page.\n * In the ""Code source"" section, click **Upload from** -> **.zip file**.\n * Click **Upload**, then select the `lambda-package.zip` file you created earlier.\n * Click **Save**.\n\n5. **Configure Handler:**\n * Make sure the **Runtime settings** (below ""Code source"") show the ""Handler"" as `index.handler`. If it's different (e.g., `app.handler`), click **Edit** and change it to `index.handler`.\n\n---\n\n## 4. Set up API Gateway\n\nThis will create a public HTTP endpoint that triggers your Lambda function.\n\n1. **Go to the API Gateway Console:**\n * Open the AWS Management Console.\n * Search for ""API Gateway"" and select ""API Gateway"".\n\n2. **Create a new REST API:**\n * Under ""REST API"", click **Build**.\n * Select **New API**.\n * **API name:** `VSCodeRecordingAPI`\n * **Endpoint Type:** `Regional` (default)\n * Click **Create API**.\n\n3. **Create a Resource:**\n * In the left-hand navigation pane, select **Resources** (under your API name).\n * Click **Actions** -> **Create Resource**.\n * **Resource Name:** `recordings`\n * **Resource Path:** `/recordings`\n * Click **Create Resource**.\n\n4. **Create a POST Method:**\n * With the `/recordings` resource selected, click **Actions** -> **Create Method**.\n * Select **POST** from the dropdown and click the checkmark.\n\n5. **Configure POST Method Integration:**\n * **Integration type:** `Lambda Function`\n * **Use Lambda Proxy integration:** Check this box (important for full event forwarding).\n * **Lambda Region:** Select the region where you deployed your Lambda function.\n * **Lambda Function:** Start typing `VSCodeRecordingUploader` and select your Lambda function from the dropdown.\n * **Timeout:** (Optional) Adjust if you expect long-running Lambda executions, but for this, default is usually fine.\n * Click **Save**.\n * When prompted, click **OK** to grant API Gateway permissions to invoke the Lambda function.\n\n6. **Enable CORS for the POST method:**\n * With the `/recordings` resource selected, click **Actions** -> **Enable CORS**.\n * Keep the default settings (or adjust `Access-Control-Allow-Origin` if you want to restrict specific origins, but for an ""Open Endpoint"", `*` is fine).\n * Click **Enable CORS and replace existing CORS headers**.\n * Click **Yes, replace existing values** when prompted.\n\n7. 
**Configure Throttling (Guardrails):**\n * In the API Gateway console, go to **Stages** (under your API name in the left navigation).\n * If you haven't deployed your API yet, deploy it first (see step 8).\n * Select your stage (e.g., `v1`).\n * Go to the **Throttling** tab.\n * Set the **Rate** (requests per second) and **Burst** (maximum concurrent requests) according to your expected traffic. For example:\n * **Rate:** `10`\n * **Burst:** `5`\n * Click **Save Changes**.\n\n8. **Deploy the API:**\n * In the left-hand navigation pane, select **Resources**.\n * Click **Actions** -> **Deploy API**.\n * **Deployment stage:**\n * Select `[New Stage]`\n * **Stage name:** `v1` (or another appropriate name like `prod`, `dev`)\n * Click **Deploy**.\n\n---\n\n## 5. Final Steps: Get Invoke URL and Update VS Code Extension\n\n1. **Retrieve the Invoke URL:**\n * After deploying, you will be taken to the Stage Editor page.\n * The **Invoke URL** for your API will be displayed at the top. It will look something like:\n `https://xxxxxxxxx.execute-api.your-region.amazonaws.com/v1/recordings`\n * **Copy this entire URL.**\n\n2. **Update the VS Code Extension:**\n * Open your VS Code extension project.\n * Go to `src/recording.ts`.\n * Replace the placeholder `YOUR_API_GATEWAY_ENDPOINT_URL_HERE` with the copied Invoke URL:\n\n ```typescript\n const API_GATEWAY_URL = '[https://xxxxxxxxx.execute-api.your-region.amazonaws.com/v1/recordings](https://xxxxxxxxx.execute-api.your-region.amazonaws.com/v1/recordings)';\n ```\n * Save the file.\n\n3. **Re-package and install the VS Code Extension:**\n * You'll need to rebuild and re-package your VS Code extension with the updated API Gateway URL.\n * From your extension's root directory, run:\n ```bash\n npm install\n npm run compile\n vsce package # If you have vsce installed\n ```\n * Then, install the new `.vsix` file into VS Code.\n\nNow your VS Code extension will send recording data to your secure and scalable serverless backend!\n\n---",markdown,tab
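The guide above wires a POST /recordings endpoint (Lambda proxy integration) to a function whose handler is index.handler and whose role may only s3:PutObject into crowd-code-bucket, while the recorded extension code posts a JSON payload of { fileName, content } to it. The dataset does not include the Lambda source itself, so the following is a hypothetical handler consistent with that contract, not the deployed implementation.

```typescript
// index.ts — hypothetical sketch of the VSCodeRecordingUploader handler
// described in the setup guide; the real source is not in this dataset.
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3'
import type { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda'

const s3 = new S3Client({})
const BUCKET = 'crowd-code-bucket' // bucket named in the IAM policy above

export const handler = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  try {
    // The extension posts { fileName, content } as JSON (see the upload
    // timer in the recorded src/recording.ts).
    const { fileName, content } = JSON.parse(event.body ?? '{}')
    if (!fileName || typeof content !== 'string') {
      return { statusCode: 400, body: 'fileName and content are required' }
    }
    await s3.send(new PutObjectCommand({
      Bucket: BUCKET,
      Key: fileName,
      Body: content,
      ContentType: 'text/csv',
    }))
    return { statusCode: 200, body: `Stored ${fileName}` }
  } catch (err) {
    return { statusCode: 500, body: `Upload failed: ${err}` }
  }
}
```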
927a8af5474e5654810c00ce2e09fd2de87d3e5722f33fa1090d867db114e403/crowd-code-68ed574d-e77d-4b90-972b-cb14379b3ba21752587095408-2025_07_15-15.45.46.648/source.csv
ADDED
The diff for this file is too large to render. See raw diff.