Karesis committed
Commit c15d41d · verified · 1 Parent(s): 9df5223

Upload 15 files

dataset1000/README.md ADDED
@@ -0,0 +1,96 @@
+ # Go Games Dataset for PyTorch Neural Network Training
+
+ ## Overview
+
+ This dataset contains Go game positions extracted from high-quality SGF files for training neural networks. The positions are organized into three strength categories based on game quality.
+
+ ## Dataset Statistics
+
+ - **Total SGF Files Processed**: 61149
+ - **Valid SGF Files**: 0
+ - **Total Positions**: 29884
+ - **Processing Time**: 14.90 seconds
+
+ ## Strength Categories
+
+ The dataset is divided into three strength categories:
+
+ - **Standard** (Quality 80-85): 2704 games, 9934 positions
+ - **Strong** (Quality 86-92): 3397 games, 9958 positions
+ - **Elite** (Quality 93-100): 55048 games, 9992 positions
+
+ ## Directory Structure
+
+ ```
+ dataset/
+ ├── train/
+ │   ├── boards.pt       # Board state tensors (N, C, H, W)
+ │   ├── moves.pt        # Move labels (N,)
+ │   ├── colors.pt       # Player colors (N,)
+ │   └── metadata.json   # Additional information
+ ├── val/
+ │   ├── boards.pt
+ │   ├── moves.pt
+ │   ├── colors.pt
+ │   └── metadata.json
+ ├── test/
+ │   ├── boards.pt
+ │   ├── moves.pt
+ │   ├── colors.pt
+ │   └── metadata.json
+ ├── stats.json          # Processing statistics
+ └── README.md           # This file
+ ```
+
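+ As a quick orientation, `stats.json` and the per-split `metadata.json` files are plain JSON and can be inspected before writing any training code. A minimal sketch (the `dataset/` root path is an assumption; point it at wherever the files were downloaded):
+
+ ```python
+ import json
+
+ # Dataset-wide processing statistics (keys as written in stats.json)
+ with open("dataset/stats.json", "r", encoding="utf-8") as f:
+     stats = json.load(f)
+ print(stats["total_positions"], stats["positions_by_category"])
+
+ # Per-split metadata; the exact fields depend on the conversion script,
+ # so just load it and look at what is there.
+ with open("dataset/train/metadata.json", "r", encoding="utf-8") as f:
+     metadata = json.load(f)
+ print(type(metadata))
+ ```
+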
+ ## Board Representation
+
+ The board state is represented as a tensor with 3 channels:
+ 1. Black stones (1 where a black stone is present, 0 elsewhere)
+ 2. White stones (1 where a white stone is present, 0 elsewhere)
+ 3. Next player (all 1s if black is to play, all 0s if white is to play)
+
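+ To make the encoding concrete, here is a minimal sketch of how such a tensor could be built from stone coordinates (the `encode_board` helper, the coordinate lists, and the 19x19 board size are illustrative assumptions, not part of the dataset files):
+
+ ```python
+ import torch
+
+ def encode_board(black_stones, white_stones, black_to_play, board_size=19):
+     """Return a (3, board_size, board_size) float tensor as described above."""
+     board = torch.zeros(3, board_size, board_size)
+     for r, c in black_stones:      # channel 0: black stones
+         board[0, r, c] = 1.0
+     for r, c in white_stones:      # channel 1: white stones
+         board[1, r, c] = 1.0
+     if black_to_play:              # channel 2: all 1s when black is to play
+         board[2].fill_(1.0)
+     return board
+
+ # Example: one black stone on the 4-4 point, white to play
+ x = encode_board([(3, 3)], [], black_to_play=False)
+ print(x.shape)  # torch.Size([3, 19, 19])
+ ```
+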
+ ## Usage with PyTorch
+
+ ```python
+ import torch
+ import json
+ import os
+ from torch.utils.data import Dataset, DataLoader
+
+ class GoDataset(Dataset):
+     def __init__(self, data_dir):
+         self.boards = torch.load(os.path.join(data_dir, "boards.pt"))
+         self.moves = torch.load(os.path.join(data_dir, "moves.pt"))
+         self.colors = torch.load(os.path.join(data_dir, "colors.pt"))
+
+         with open(os.path.join(data_dir, "metadata.json"), 'r', encoding='utf-8') as f:
+             self.metadata = json.load(f)
+
+     def __len__(self):
+         return len(self.moves)
+
+     def __getitem__(self, idx):
+         return {
+             'board': self.boards[idx],
+             'move': self.moves[idx],
+             'color': self.colors[idx]
+         }
+
+ # Create datasets
+ train_dataset = GoDataset('dataset/train')
+ val_dataset = GoDataset('dataset/val')
+ test_dataset = GoDataset('dataset/test')
+
+ # Create data loaders
+ train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True)
+ val_loader = DataLoader(val_dataset, batch_size=64)
+ test_loader = DataLoader(test_dataset, batch_size=64)
+ ```
+
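+ A quick sanity check after loading (a sketch; the printed shapes assume 19x19 boards, the three-channel encoding above, and the batch size of 64 used here):
+
+ ```python
+ # Fetch one batch and confirm the tensor shapes look as expected
+ batch = next(iter(train_loader))
+ print(batch['board'].shape)  # expected: torch.Size([64, 3, 19, 19])
+ print(batch['move'].shape)   # expected: torch.Size([64])
+ print(batch['color'].shape)  # expected: torch.Size([64])
+ ```
+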
+ ## License
+
+ The dataset is intended for research and educational purposes only.
+
+ ## Creation Date
+
+ This dataset was created on 2025-03-13.
dataset1000/README_CN.md ADDED
@@ -0,0 +1,151 @@
+ # Go Games Dataset (for PyTorch Neural Network Training)
+
+ ## Overview
+
+ This dataset contains Go game positions extracted from high-quality SGF game records and is designed for training neural networks. Positions are grouped into three strength categories by game quality, with roughly 1000 samples extracted per category.
+
+ ## Dataset Statistics
+
+ - **Total SGF files processed**: depends on the number of source files
+ - **Valid SGF files**: the number of files that pass quality screening
+ - **Total positions**: roughly 3000 (about 1000 per strength category)
+ - **Processing time**: depends on the actual run
+
+ ## Strength Categories
+
+ The dataset is divided into three strength categories according to game-record quality:
+
+ - **Standard** (Quality 80-85): games by high-dan amateur and entry-level professional players
+ - **Strong** (Quality 86-92): games by mid- to high-dan professional players
+ - **Elite** (Quality 93-100): games by top professional players
+
+ ## Directory Structure
+
+ ```
+ dataset/
+ ├── train/
+ │   ├── boards.pt       # Board state tensors (N, C, H, W)
+ │   ├── moves.pt        # Move labels (N,)
+ │   ├── colors.pt       # Player colors (N,)
+ │   └── metadata.json   # Additional information
+ ├── val/
+ │   ├── boards.pt
+ │   ├── moves.pt
+ │   ├── colors.pt
+ │   └── metadata.json
+ ├── test/
+ │   ├── boards.pt
+ │   ├── moves.pt
+ │   ├── colors.pt
+ │   └── metadata.json
+ ├── stats.json          # Processing statistics
+ └── README.md           # This file
+ ```
+
+ ## Board Representation
+
+ The board state is represented as a tensor with 3 channels:
+ 1. Black stones (1 where a black stone is present, 0 elsewhere)
+ 2. White stones (1 where a white stone is present, 0 elsewhere)
+ 3. Next player (all 1s if black is to play, all 0s if white is to play)
+
+ ## PyTorch Usage Example
+
+ ```python
+ import torch
+ import json
+ import os
+ from torch.utils.data import Dataset, DataLoader
+
+ class GoDataset(Dataset):
+     def __init__(self, data_dir):
+         self.boards = torch.load(os.path.join(data_dir, "boards.pt"))
+         self.moves = torch.load(os.path.join(data_dir, "moves.pt"))
+         self.colors = torch.load(os.path.join(data_dir, "colors.pt"))
+
+         with open(os.path.join(data_dir, "metadata.json"), 'r', encoding='utf-8') as f:
+             self.metadata = json.load(f)
+
+     def __len__(self):
+         return len(self.moves)
+
+     def __getitem__(self, idx):
+         return {
+             'board': self.boards[idx],
+             'move': self.moves[idx],
+             'color': self.colors[idx]
+         }
+
+ # Create datasets
+ train_dataset = GoDataset('dataset/train')
+ val_dataset = GoDataset('dataset/val')
+ test_dataset = GoDataset('dataset/test')
+
+ # Create data loaders
+ train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True)
+ val_loader = DataLoader(val_dataset, batch_size=64)
+ test_loader = DataLoader(test_dataset, batch_size=64)
+ ```
+
+ ## Model Training Example
+
+ The following example code trains a simple Go policy network on this dataset:
+
+ ```python
+ import torch
+ import torch.nn as nn
+ import torch.optim as optim
+
+ # Define a simple Go policy network
+ class SimplePolicyNet(nn.Module):
+     def __init__(self, board_size=19):
+         super(SimplePolicyNet, self).__init__()
+         self.conv1 = nn.Conv2d(3, 64, 3, padding=1)
+         self.conv2 = nn.Conv2d(64, 64, 3, padding=1)
+         self.conv3 = nn.Conv2d(64, 128, 3, padding=1)
+         self.fc = nn.Linear(128 * board_size * board_size, board_size * board_size)
+
+     def forward(self, x):
+         x = torch.relu(self.conv1(x))
+         x = torch.relu(self.conv2(x))
+         x = torch.relu(self.conv3(x))
+         x = x.view(x.size(0), -1)
+         x = self.fc(x)
+         return x
+
+ # Initialize the model, loss function, and optimizer
+ device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+ model = SimplePolicyNet().to(device)
+ criterion = nn.CrossEntropyLoss()
+ optimizer = optim.Adam(model.parameters(), lr=0.001)
+
+ # Training loop
+ for epoch in range(10):
+     model.train()
+     running_loss = 0.0
+
+     for batch in train_loader:
+         boards = batch['board'].to(device)
+         moves = batch['move'].to(device)
+
+         optimizer.zero_grad()
+         outputs = model(boards)
+         loss = criterion(outputs, moves)
+         loss.backward()
+         optimizer.step()
+
+         running_loss += loss.item()
+
+     print(f"Epoch {epoch+1}, Loss: {running_loss/len(train_loader):.4f}")
+
+ # Save the model
+ torch.save(model.state_dict(), "go_policy_model.pth")
+ ```
+
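+ After training, the held-out split can be used for a simple top-1 accuracy check. A minimal sketch (it reuses `model`, `device`, and `val_loader` from the examples above; the accuracy metric itself is an assumption, not something shipped with the dataset):
+
+ ```python
+ # Evaluate top-1 move-prediction accuracy on the validation split
+ model.eval()
+ correct = 0
+ total = 0
+ with torch.no_grad():
+     for batch in val_loader:
+         boards = batch['board'].to(device)
+         moves = batch['move'].to(device)
+         predictions = model(boards).argmax(dim=1)
+         correct += (predictions == moves).sum().item()
+         total += moves.size(0)
+ print(f"Validation top-1 accuracy: {correct / total:.4f}")
+ ```
+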
+ ## License
+
+ This dataset is intended for research and educational purposes only.
+
+ ## Creation Date
+
+ Dataset created on: 2025-03-13
dataset1000/stats.json ADDED
@@ -0,0 +1,17 @@
+ {
+   "total_sgf_files": 61149,
+   "valid_sgf_files": 0,
+   "processed_games": 0,
+   "total_positions": 29884,
+   "positions_by_category": {
+     "standard": 9934,
+     "strong": 9958,
+     "elite": 9992
+   },
+   "games_by_category": {
+     "standard": 2704,
+     "strong": 3397,
+     "elite": 55048
+   },
+   "processing_time": 14.901111
+ }
dataset1000/test/boards.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:aed553001b0386325a0f7165a08928ba6725f7f512fa354eb2d220529c948019
+ size 12953815
dataset1000/test/colors.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:58ef6d6ed61d9e73521cb401c564e35a935a95be93de8e536834afed5a17df6d
+ size 4119
dataset1000/test/metadata.json ADDED
The diff for this file is too large to render. See raw diff
 
dataset1000/test/moves.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a9517910984f011eba6123adddec4ade022caa1ed2b3859c7aa358aedc0338d2
+ size 25042
dataset1000/train/boards.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ea16d6e9336f3c2b6bb9cae26bd28e7feaa9a9c9f76d2eb4e84afc3f2307ba43
+ size 103561943
dataset1000/train/colors.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:223d51806905b9fc6e2b1a1196f6d2f48cc984a881f8e54eb24044044d3749ce
+ size 25047
dataset1000/train/metadata.json ADDED
The diff for this file is too large to render. See raw diff
 
dataset1000/train/moves.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0a505415d7bc46d38b59b136abbdb50ca915ae25384fdef30a0b7c87eaa6c6ff
+ size 192402
dataset1000/val/boards.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ea3cbb5460898c8c8f0acf42ba5fe35406e22f4a73ac182a595b8e872124d276
+ size 12945175
dataset1000/val/colors.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8b185a613783b5afeea2e72c2411416eef0780f8778c694278772adceeee418d
+ size 4119
dataset1000/val/metadata.json ADDED
The diff for this file is too large to render. See raw diff
 
dataset1000/val/moves.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cbd036aadac6afae9b97e7abb84295c31e607092cedf935c99b58c371e16b0ea
+ size 25042