hieunguyen1053 committed
Commit 275b694 · 1 Parent(s): c7e3245

Update README.md

Files changed (1):
1. README.md +42 -4
README.md CHANGED
@@ -1,10 +1,48 @@
  ---
  title: README
- emoji: 🦀
- colorFrom: green
- colorTo: green
  sdk: static
  pinned: false
  ---

- Edit this `README.md` markdown file to author your organization card.
  ---
  title: README
+ emoji: 📈
+ colorFrom: blue
+ colorTo: indigo
  sdk: static
  pinned: false
  ---

+ # VLSP 2023 VLLMs: Vietnamese Large Language Models
+
+ ## 1. Important dates
+
+ - Aug 05, 2023: Team registration opens
+ - Aug 31, 2023: Team registration closes
+ - Sep 30, 2023: Release of test samples and evaluation instructions
+ - Nov 10, 2023: System submission deadline (API only)
+ - Nov 26, 2023: Technical report submission
+ - Dec 15, 2023: Result announcement (workshop day)
+
+ ## 2. Task Description
+
+ In recent years, Large Language Models (LLMs) have gained widespread recognition and popularity worldwide, with models such as GPT-X, Bard, and LLaMA making significant strides in natural language processing tasks. In Vietnam, there is also growing interest in developing LLMs specifically tailored to the Vietnamese language. However, unlike for other languages, publicly accessible evaluation data for Vietnamese LLMs is very limited, which presents a substantial obstacle for organizations seeking to establish uniform evaluation standards.
+
+ The goal of VLSP2023-VLLMs is to promote the development of large language models for Vietnamese by constructing an evaluation dataset for VLLMs. Unlike conventional datasets for downstream NLP tasks, this dataset will focus on testing the following capabilities:
+ - Overcoming the hallucination phenomenon of LLMs.
+ - Generating text based on broad contextual understanding.
+ - Accurately answering open questions.
+
+ The participating teams will build their own LLMs for Vietnamese and will be provided with a test dataset and instructions on how to evaluate their models. The models remain the copyright of their respective development teams and are not required to be open source.
+
+ ## 3. Registration
+ Link:
+
+ ## 4. Resource
+
+ We will provide the following resources to the participating groups (see the illustrative loading sketch below the note):
+ - The publicly shared pre-trained LLMs.
+ - Plain-text datasets for Vietnamese.
+ - Instruction datasets.
+ - Sample examples of the evaluation dataset.
+
+ **Note that the participating teams can use any resources to train their models.**
+
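+ For illustration only, here is a minimal sketch of how a participating team might load a publicly shared pre-trained LLM and an instruction dataset from the Hugging Face Hub. The repository IDs below are placeholders, not official VLSP 2023 resources.
+
+ ```python
+ # Illustrative sketch only: the repository IDs are placeholders,
+ # not official VLSP 2023 VLLMs resources.
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ from datasets import load_dataset
+
+ model_id = "your-org/vietnamese-llm-base"  # placeholder pre-trained LLM
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id)
+
+ # Placeholder instruction dataset for supervised fine-tuning
+ instructions = load_dataset("your-org/vietnamese-instructions", split="train")
+
+ # Quick generation check with the base model
+ # Prompt: "Question: What is the capital of Vietnam? Answer:"
+ prompt = "Câu hỏi: Thủ đô của Việt Nam là gì?\nTrả lời:"
+ inputs = tokenizer(prompt, return_tensors="pt")
+ outputs = model.generate(**inputs, max_new_tokens=64)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```
+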
+ ## Organizers
+ - Lê Anh Cường (TDTU)
+ - Nguyễn Trọng Hiếu (TDTU)
+ - Nguyễn Việt Cường (Intelligent Integration Co., Ltd.)
+ - Le-Minh Nguyen (JAIST, Japan Advanced Institute of Science and Technology)