DIGITAL LIBRARY ARCHIVE
Korean Sentence Generation Using Phoneme-Level LSTM Language Model
SungMahn Ahn (School of Business Administration, Kookmin University)
Yeojin Chung (School of Business Administration, Kookmin University)
Jaejoon Lee (Department of Data Science, Kookmin University)
Jiheon Yang (Department of Data Science, Kookmin University)
Vol. 23, No. 2, Page: 71 ~ 88
http://dx.doi.org/10.13088/jiis.2017.23.2.071
Keywords
Language model, Recurrent neural network, Long short-term memory model, Sentence generation model
Abstract
Language models were originally developed for speech recognition and language processing. Trained on a set of example sentences, a language model predicts the next word or character from sequential input data. N-gram models have been widely used, but because they are probabilistic models based on the frequency of each unit in the training set, they cannot efficiently capture correlations between input units. Recently, with the development of deep learning algorithms, the recurrent neural network (RNN) and the long short-term memory (LSTM) model have been widely used as neural language models (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can reflect dependencies between the objects entered sequentially into the model (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). To train a neural language model, texts need to be decomposed into words or morphemes. However, since a training set of sentences generally contains a huge number of distinct words or morphemes, the dictionary becomes very large and model complexity increases accordingly. In addition, word-level or morpheme-level models can generate only the vocabulary contained in the training set. Furthermore, for highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morpheme analyzers are more likely to introduce errors during decomposition (Lankinen et al., 2016).
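The frequency-based behavior of n-gram models described above can be sketched in a few lines of Python (a minimal bigram illustration, not a model from the paper): prediction is just a lookup of the most frequent successor, so any context character unseen in training yields no prediction at all.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count character bigrams: each character -> Counter of its successors."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(text, text[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, context):
    """Most frequent successor of the last context character, or None if that
    character never occurred in training -- the sparsity problem of n-grams."""
    successors = counts.get(context[-1])
    if not successors:
        return None
    return successors.most_common(1)[0][0]

model = train_bigram("abracadabra")
print(predict_next(model, "a"))   # 'b' ('a' is followed by 'b' most often)
```

This is exactly the limitation the abstract points out: the model memorizes frequencies rather than learning dependencies between units.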
Therefore, this paper proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, such as a vowel or a consonant, is the smallest unit that composes Korean text. We construct the language model using three or four LSTM layers. Each model was trained with stochastic gradient descent as well as more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was conducted on Old Testament texts using the deep learning package Keras with the Theano backend. After pre-processing, the dataset contained 74 unique characters, including vowels, consonants, and punctuation marks. We then constructed input vectors of 20 consecutive characters, each paired with the following 21st character as the output. In total, 1,023,411 input-output pairs were included in the dataset, divided into training, validation, and test sets in a 70:15:15 ratio. All simulations were conducted on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU.
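The two preprocessing steps described above can be sketched with standard Unicode arithmetic: composed Hangul syllables (U+AC00 onward) decompose deterministically into an initial consonant, a vowel, and an optional final consonant, and the resulting phoneme sequence is cut into 20-unit windows each labeled with the following unit. This is a minimal sketch under those assumptions; function names are illustrative and the paper's actual preprocessing may differ in detail.

```python
# 19 initial consonants, 21 vowels, 28 finals (index 0 = no final consonant).
CHO = "ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ"
JUNG = "ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ"
JONG = ["", "ㄱ", "ㄲ", "ㄳ", "ㄴ", "ㄵ", "ㄶ", "ㄷ", "ㄹ", "ㄺ", "ㄻ", "ㄼ",
        "ㄽ", "ㄾ", "ㄿ", "ㅀ", "ㅁ", "ㅂ", "ㅄ", "ㅅ", "ㅆ", "ㅇ", "ㅈ",
        "ㅊ", "ㅋ", "ㅌ", "ㅍ", "ㅎ"]

def to_phonemes(text):
    """Split each composed Hangul syllable into its jamo; pass others through."""
    out = []
    for ch in text:
        code = ord(ch) - 0xAC00
        if 0 <= code < 11172:                 # within the Hangul syllable block
            out.append(CHO[code // 588])      # 588 = 21 vowels * 28 finals
            out.append(JUNG[(code % 588) // 28])
            if code % 28:                     # only if a final consonant exists
                out.append(JONG[code % 28])
        else:
            out.append(ch)
    return out

def windows(seq, size=20):
    """Yield (input, target) pairs: `size` consecutive units and the next one."""
    for i in range(len(seq) - size):
        yield seq[i:i + size], seq[i + size]

print(to_phonemes("한국"))  # ['ㅎ', 'ㅏ', 'ㄴ', 'ㄱ', 'ㅜ', 'ㄱ']
```

In the paper's setup each such 20-unit window would then be one-hot encoded over the 74-character dictionary before being fed to the LSTM layers.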
We compared the loss function evaluated on the validation set, the perplexity evaluated on the test set, and the time required to train each model. All optimization algorithms except stochastic gradient descent showed similar validation loss and perplexity, clearly superior to those of stochastic gradient descent, which also took the longest to train for both the 3- and 4-layer LSTM models. On average, the 4-layer model took 69% longer to train than the 3-layer model, yet its validation loss and perplexity were not significantly improved and even worsened under certain conditions. On the other hand, when comparing the automatically generated sentences, the 4-layer model tended to generate sentences closer to natural language than the 3-layer model. Although the completeness of the generated sentences differed slightly between models, sentence generation performance was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and the conjugation of verbs were almost grammatically perfect. The results of this study are expected to be widely applicable to Korean-language processing in the fields of natural language processing and speech recognition, which underlie artificial intelligence systems.
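The perplexity used above to compare models is the exponential of the average per-unit cross-entropy (negative log-likelihood); a minimal sketch of the computation, for illustration only:

```python
import math

def perplexity(probs):
    """Perplexity from the probabilities a model assigned to the true next
    units: exp of the mean negative log-likelihood. Lower is better; a model
    that is uniform over V units has perplexity exactly V."""
    nll = -sum(math.log(p) for p in probs) / len(probs)
    return math.exp(nll)

# A model that guesses uniformly over the 74-character dictionary used here
# achieves perplexity 74; a trained model should score far lower.
print(perplexity([1 / 74] * 100))
```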
Cite this article
JIIS Style
Ahn, S., Y. Chung, J. Lee, and J. Yang, "Korean Sentence Generation Using Phoneme-Level LSTM Language Model", Journal of Intelligence and Information Systems, Vol. 23, No. 2 (2017), 71~88.

IEEE Style
SungMahn Ahn, Yeojin Chung, Jaejoon Lee, and Jiheon Yang, "Korean Sentence Generation Using Phoneme-Level LSTM Language Model", Journal of Intelligence and Information Systems, vol. 23, no. 2, pp. 71~88, 2017.

ACM Style
Ahn, S., Chung, Y., Lee, J., and Yang, J., 2017. Korean Sentence Generation Using Phoneme-Level LSTM Language Model. Journal of Intelligence and Information Systems. 23, 2, 71--88.
Export Formats: BibTeX, EndNote

@article{Ahn:JIIS:2017:690,
author = {Ahn, SungMahn and Chung, Yeojin and Lee, Jaejoon and Yang, Jiheon},
title = {Korean Sentence Generation Using Phoneme-Level LSTM Language Model},
journal = {Journal of Intelligence and Information Systems},
issue_date = {June 2017},
volume = {23},
number = {2},
month = jun,
year = {2017},
issn = {2288-4866},
pages = {71--88},
url = {http://dx.doi.org/10.13088/jiis.2017.23.2.071},
doi = {10.13088/jiis.2017.23.2.071},
publisher = {Korea Intelligent Information System Society},
address = {Seoul, Republic of Korea},
keywords = {Language model, Recurrent neural network, Long short-term memory model, Sentence generation model},
}
%0 Journal Article
%1 690
%A SungMahn Ahn
%A Yeojin Chung
%A Jaejoon Lee
%A Jiheon Yang
%T Korean Sentence Generation Using Phoneme-Level LSTM Language Model
%J Journal of Intelligence and Information Systems
%@ 2288-4866
%V 23
%N 2
%P 71-88
%D 2017
%R 10.13088/jiis.2017.23.2.071
%I Korea Intelligent Information System Society