KCI-indexed Academic Journal

Not Yet as Native as Native Speakers: Comparing Deep Learning Predictions and Human Judgments


This paper examines the feasibility of replacing humans with deep learning in nativeness judgments and explores how such a model should be developed to reach human-level performance, by comparing nativeness judgments made by a deep learning model and by humans on English data. The controlled items, 210 sentences in total, fall into two types: well-formedness test items (i.e., no syntactic violation) and plausibility test items (i.e., no awkwardness), most of which are excerpted from prior linguistics literature. The deep learning model and five native English speakers are asked to classify the nativeness of the same stimulus sentences, and the results reveal both differences and similarities: although humans clearly outperform the deep learning model overall, the two are quite similar in judging plausibility items and learner data. Longer response time, that is, hesitating before a nativeness decision, does not guarantee accuracy, which suggests that judging nativeness depends on something like intuition rather than deliberation.
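The abstract does not name the model or its decision rule, so the sketch below is only an illustrative assumption of how a deep learning system can be asked to make a binary nativeness judgment: it thresholds the mean per-token log-likelihood assigned to a sentence by a pretrained language model (GPT-2 here is assumed purely for illustration, as is the threshold value).

```python
# A minimal sketch, not the authors' model: nativeness judged by thresholding
# the mean per-token log-likelihood from a pretrained language model.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def nativeness_score(sentence: str) -> float:
    """Mean per-token log-likelihood; higher suggests more native-like."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    # out.loss is the mean cross-entropy over predicted tokens; negate it
    # so that larger values correspond to more probable (more native) text.
    return -out.loss.item()

def judge_native(sentence: str, threshold: float = -4.0) -> bool:
    """Binary nativeness judgment; the threshold is an illustrative assumption."""
    return nativeness_score(sentence) > threshold

if __name__ == "__main__":
    for s in ["The cat sat on the mat.", "Cat the mat on sat the."]:
        print(f"{s!r}: score={nativeness_score(s):.2f}, native={judge_native(s)}")
```

Such a score-and-threshold setup yields the same kind of binary output the human judges produce, which is what makes the model's classifications and the speakers' judgments directly comparable.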

1. Introduction

2. Nativeness of Sentences

3. Language Experiment

4. Divergence of Scores

5. Response Time

6. Discussion

7. Conclusion
