
RACE Dataset: A Large-Scale Benchmark for Machine Reading Comprehension

An analysis of the RACE dataset, a large-scale reading comprehension benchmark drawn from English exams for Chinese students, featuring expert-designed questions that require reasoning.
learn-en.org | PDF Size: 0.1 MB

1. Introduction & Overview

This article examines the influential paper "RACE: Large-scale ReAding Comprehension Dataset From Examinations," presented at EMNLP 2017. The work introduced the RACE dataset, built to address major shortcomings in existing machine reading comprehension (MRC) benchmarks. Its central argument is that earlier datasets, which relied largely on crowdsourced or extractive questions, failed to properly test models' reasoning ability, producing inflated performance figures that did not reflect genuine language understanding.

Dataset Size

~28,000 Passages

Number of Questions

~100,000 Questions

Human Performance

95% Accuracy Ceiling

State of the Art (2017)

43% Model Accuracy

2. The RACE Dataset

2.1. Data Collection & Source

RACE was sourced from English exams designed for Chinese secondary-school students (ages 12-18). Questions and passages were authored by domain experts (English teachers), ensuring high quality and educational relevance. This expert sourcing was a deliberate move to avoid the noise inherent in crowdsourced or automatically generated datasets such as SQuAD or NewsQA.

2.2. Dataset Statistics & Format

  • Passages: 27,933
  • Questions: 97,687
  • Format: Multiple choice (4 options, 1 correct)
  • Splits: RACE-M (middle school) and RACE-H (high school), with standard train/dev/test partitions.
  • Topic Coverage: Broad and varied, as dictated by the curriculum, avoiding the topical bias of datasets drawn from a single source such as news articles or children's stories.
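As a concrete illustration of this format, the sketch below builds one RACE-style item as a plain Python dict. The field names (`article`, `question`, `options`, `answer`) follow the public RACE release; the passage text itself is invented for illustration.

```python
# one RACE-style item: a passage, a question, four options, one gold letter label
example = {
    "article": "Tom missed the bus, so he walked to school and arrived late.",
    "question": "Why was Tom late for school?",
    "options": [
        "He overslept.",
        "He missed the bus.",
        "School started early.",
        "He walked slowly.",
    ],
    "answer": "B",  # letter label indexing into options via "ABCD"
}

# recover the gold option text from the letter label
gold_text = example["options"]["ABCD".index(example["answer"])]
```

The letter-label convention keeps the gold answer separate from the option texts, which matters because RACE options are paraphrases rather than passage spans.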

2.3. Key Distinguishing Features

RACE was designed to be a "hard" benchmark. Its main distinguishing features are:

  • Non-Extractive Answers: Questions and answer options are not spans copied from the passage; they are paraphrased or summarized, forcing models to reason rather than simply pattern-match. This directly counters a major flaw in datasets such as SQuAD v1.1, where models could often locate answers through surface-level word overlap.
  • High Proportion of Reasoning: A large share of the questions requires logical inference, synthesis, and cause-and-effect understanding, compared with contemporaries such as CNN/Daily Mail or the Children's Book Test.
  • Expert-Grounded Ceiling: The human performance ceiling, established by the exam designers and high-performing students, is 95%. This provides a clear, meaningful target for model performance, unlike datasets where human agreement is low.

3. Technical Details & Methodology

3.1. Problem Formulation

The reading comprehension task in RACE is formulated as a multiple-choice question-answering problem. Given a passage $P$ of $n$ tokens $\{p_1, p_2, ..., p_n\}$, a question $Q$ of $m$ tokens $\{q_1, q_2, ..., q_m\}$, and a set of $k$ candidate answers $A = \{a_1, a_2, a_3, a_4\}$, the model must select the correct answer $a_{correct} \in A$.

The probability that answer $a_i$ is correct can be modeled as a function of joint representations of $P$, $Q$, and $a_i$: $$P(a_i \text{ is correct} \mid P, Q) = \text{Softmax}(f(\phi(P), \psi(Q), \omega(a_i)))$$ where $\phi, \psi, \omega$ are encoding functions (e.g., from RNNs or Transformers) and $f$ is a scoring function.
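A minimal toy sketch of this formulation, not the paper's actual models: hash-based bag-of-words vectors stand in for the encoders $\phi, \psi, \omega$, and a dot product stands in for the scoring function $f$. A real system would use a trained RNN or Transformer encoder instead.

```python
import numpy as np

def softmax(scores):
    # numerically stable softmax over the candidate scores
    e = np.exp(scores - np.max(scores))
    return e / e.sum()

def encode(tokens, dim=16):
    # toy encoder: mean of fixed pseudo-random per-token embeddings
    # (a stand-in for phi/psi/omega; no learning happens here)
    vecs = []
    for tok in tokens:
        rng = np.random.default_rng(abs(hash(tok)) % (2**32))
        vecs.append(rng.standard_normal(dim))
    return np.mean(vecs, axis=0)

def answer_probs(passage, question, options):
    # f: score each option by its dot product with a combined
    # passage+question representation, then normalize with softmax
    ctx = encode(passage) + encode(question)
    scores = np.array([ctx @ encode(opt) for opt in options])
    return softmax(scores)
```

The softmax over exactly four option scores mirrors the multiple-choice setup: the model outputs a distribution over $A$, and the argmax is its prediction.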

3.2. Evaluation Metric

The primary evaluation metric is accuracy: the fraction of questions answered correctly. This simple metric matches the exam's original grading scheme and allows direct comparison with the performance of human students.
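The metric reduces to a few lines; the letter labels in the usage example are illustrative.

```python
def accuracy(predictions, gold):
    # fraction of questions answered correctly
    assert len(predictions) == len(gold), "prediction/gold length mismatch"
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)

# e.g. three of four letter choices match the gold labels
score = accuracy(["A", "C", "B", "D"], ["A", "B", "B", "D"])  # 0.75
```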

4. Experimental Results & Analysis

4.1. Baseline Model Performance

The paper established strong baselines in 2017, including models such as Sliding Window, the Stanford Attentive Reader, and the GA Reader. The best baseline achieved roughly 43% accuracy on the RACE test set. This contrasted sharply with models that were reaching near-human or super-human performance on simpler extractive datasets at the time.

4.2. Human Performance Ceiling

The human performance ceiling, derived from the performance of top students and experts, is 95%. This established a 52-percentage-point gap between the best state-of-the-art (SOTA) models and human ability, underscoring the dataset's difficulty and the long road ahead for machine comprehension.

4.3. Analysis of the Performance Gap

The ~43% vs. 95% gap is the paper's strongest argument. It demonstrated concretely that the MRC models of the day, despite their success on easier tasks, lacked genuine reasoning and comprehension ability. The gap became a clear call for the NLP community to develop more capable architectures.

Chart description (inferred): A bar chart would show two bars, "Best Model (2017)" at ~43% and "Human Ceiling" at 95%, with a striking gap between them. A third bar for "Random Guessing" at 25% would add further context.
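The three values can be sketched as a quick text chart; the numbers (25% random guessing over four options, 43% best model, 95% human ceiling) come from the text above, and `ascii_bar_chart` is a throwaway helper.

```python
def ascii_bar_chart(data, width=50):
    # render each (label, percent) pair as a proportional bar of '#' marks
    rows = []
    for label, pct in data:
        bar = "#" * round(pct / 100 * width)
        rows.append(f"{label:>18} | {bar} {pct}%")
    return "\n".join(rows)

chart = ascii_bar_chart([
    ("Random Guessing", 25),
    ("Best Model (2017)", 43),
    ("Human Ceiling", 95),
])
print(chart)
```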

5. Analysis Framework & Case Study

A Framework for Evaluating MRC Datasets: To assess the quality and difficulty of an MRC benchmark, researchers should examine:

  1. Answer Source: Are answers extractive (word spans copied from the text) or abstractive/generated?
  2. Question Type: What fraction of questions requires factual recall versus inference (e.g., causal, logical, hypothetical)?
  3. Data Provenance: Was the data expert-authored, crowdsourced, or synthetic? What is the noise level?
  4. Performance Gap: What is the difference between SOTA model performance and the human ceiling?
  5. Topic & Style Diversity: Is the dataset drawn from a narrow domain (e.g., Wikipedia) or from many domains?
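The checklist above can be captured in a small data structure for side-by-side comparisons. `MRCBenchmarkProfile` and its field names are hypothetical; the 43% and 95% figures come from the text, while `reasoning_fraction` is purely illustrative.

```python
from dataclasses import dataclass

@dataclass
class MRCBenchmarkProfile:
    # hypothetical rubric mirroring the five checklist items above
    name: str
    extractive_answers: bool   # 1. answer source
    reasoning_fraction: float  # 2. share of questions needing inference (illustrative)
    expert_authored: bool      # 3. data provenance
    sota_accuracy: float       # 4a. best model accuracy (%)
    human_ceiling: float       # 4b. human performance (%)
    multi_domain: bool         # 5. topic diversity

    def performance_gap(self) -> float:
        # 4. difference between the human ceiling and SOTA
        return self.human_ceiling - self.sota_accuracy

race = MRCBenchmarkProfile(
    name="RACE",
    extractive_answers=False,
    reasoning_fraction=0.6,  # illustrative; the paper reports a large share
    expert_authored=True,
    sota_accuracy=43.0,
    human_ceiling=95.0,
    multi_domain=True,
)
```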

Case Study: RACE vs. SQuAD 1.1
Applying this framework: SQuAD 1.1 answers are fixed extractive spans; its questions are mostly factual; its data was crowdsourced (introducing some ambiguity); the 2017 SOTA model (BiDAF) approached human performance (roughly 77% F1 against a human F1 of about 91%); and its topics are limited to Wikipedia articles. RACE scores higher on difficulty (abstractive answers, heavy reasoning), quality (expert-authored), and diversity (curricular texts), yielding a large, meaningful performance gap better suited to exposing model weaknesses.

6. Deeper Analysis & Expert Perspective

Central Thesis: The RACE paper did not merely introduce another dataset; it was a strategic intervention that exposed a critical weakness in the field's progress narrative. By 2017, impressive results on SQuAD were creating the illusion that machines were approaching human-level reading comprehension. RACE revealed this as a mirage built on benchmarks that rewarded shallow pattern matching over deep understanding. Its 52-point performance gap served as a striking reality check, arguing forcefully that genuine machine reasoning remained a distant goal.

Logical Flow: The authors' reasoning is sound. 1) Identify the flaw: existing datasets are too easy and too noisy. 2) Propose a remedy: build a dataset from a source explicitly designed to test comprehension, namely standardized exams. 3) Validate the hypothesis: show that SOTA models fail badly on this new, more rigorous test. This parallels the creation of "adversarial" datasets in computer vision to break overfitted models, as seen with the introduction of ImageNet-C for testing robustness to corruptions. RACE performed a similar service for NLP.

Strengths & Weaknesses: RACE's greatest strength is its provenance: it leverages decades of expertise embedded in educational assessment, giving it unmatched construct validity for measuring comprehension. Its main limitation, one its creators acknowledged, is cultural and linguistic specificity: the passages and reasoning patterns are filtered through Chinese English-language education. While this does not invalidate the dataset, it may introduce biases absent from native English exams. Later datasets such as DROP (requiring discrete reasoning over paragraphs) and BoolQ (yes/no questions) built on RACE's philosophy while seeking broader cultural grounding.

Actionable Insight: For practitioners and researchers, the lesson is clear: benchmark choice determines the trajectory of progress. Relying solely on "solved" benchmarks breeds complacency. The field must continually develop and prioritize "challenge sets" that probe specific capabilities, as the HELM (Holistic Evaluation of Language Models) framework does today. When evaluating a new model, its performance on RACE (or successors such as RACE++, or modern reasoning benchmarks) should be weighted more heavily than its performance on extractive QA tasks. Investment should be directed toward architectures that explicitly model reasoning chains and world knowledge, moving beyond surface-level question matching. RACE's longevity, reflected in citations from foundational works such as the original BERT paper, confirms that building a hard, well-constructed benchmark is among the most impactful contributions to AI research.

7. Future Applications & Research Directions

  • Training for Reasoning: RACE and its successors are well-suited training grounds for models that perform multi-step reasoning. This applies directly to legal document review, medical literature analysis, and technical support systems where answers are not stated verbatim in the text.
  • Educational Technology: A direct application is in intelligent tutoring systems (ITS). Models trained on RACE could provide personalized reading comprehension assistance, generate practice questions, or diagnose specific weaknesses in a student's reasoning.
  • Benchmarking Large Language Models (LLMs): RACE remains a relevant benchmark for evaluating the reasoning abilities of modern LLMs such as GPT-4, Claude, or Gemini. While these models far exceed the 2017 baselines, examining their error patterns on RACE can still reveal persistent gaps in logical inference or in understanding implicit information.
  • Cross-Lingual & Multimodal Extension: Future work includes building RACE-style benchmarks in other languages and for multimodal comprehension (text plus diagrams and charts), pushing the boundaries of machine understanding further.
  • Explainable AI (XAI): The complexity of RACE questions makes it a good testbed for models that not only answer correctly but also provide human-readable explanations or reasoning traces for their choices.
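For the LLM-benchmarking direction above, a minimal sketch of turning a RACE item into a multiple-choice prompt; `format_race_prompt` is a hypothetical helper, the passage shown is invented, and the returned string would be passed to whatever chat API is being evaluated.

```python
def format_race_prompt(article: str, question: str, options: list) -> str:
    # lay out the passage, the question, and lettered options as one prompt
    letters = "ABCD"
    lines = [f"Passage:\n{article}", f"\nQuestion: {question}"]
    for letter, option in zip(letters, options):
        lines.append(f"{letter}. {option}")
    lines.append("\nAnswer with a single letter (A-D).")
    return "\n".join(lines)

prompt = format_race_prompt(
    "The cat sat on the roof all afternoon.",
    "Where did the cat sit?",
    ["mat", "roof", "box", "tree"],
)
```

Scoring then reduces to comparing the model's letter reply against the item's gold label with the accuracy metric from Section 3.2.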

8. References

  1. Lai, G., Xie, Q., Liu, H., Yang, Y., & Hovy, E. (2017). RACE: Large-scale ReAding Comprehension Dataset From Examinations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP) (pp. 785-794).
  2. Rajpurkar, P., Zhang, J., Lopyrev, K., & Liang, P. (2016). SQuAD: 100,000+ Questions for Machine Comprehension of Text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP).
  3. Hermann, K. M., et al. (2015). Teaching Machines to Read and Comprehend. In Advances in Neural Information Processing Systems (NeurIPS).
  4. Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of NAACL-HLT.
  5. Dua, D., et al. (2019). DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs. In Proceedings of NAACL-HLT.
  6. Hendrycks, D., & Dietterich, T. (2019). Benchmarking Neural Network Robustness to Common Corruptions and Perturbations. In International Conference on Learning Representations (ICLR). (Cited for the ImageNet-C comparison.)
  7. Liang, P., et al. (2022). Holistic Evaluation of Language Models (HELM). arXiv preprint arXiv:2211.09110.