Subject-Balanced Intelligent Text Summarization Scheme
Yeoil Yun (College of Business Administration, Kookmin University)
Eunjung Ko (Graduate School of Business IT, Kookmin University)
Namgyu Kim (College of Business Administration, Kookmin University)
Vol. 25, No. 2, Page: 141 ~ 166
Document Summarization, Review Summarization, Text Mining, Topic Modeling, Word Embedding
Recently, channels such as social media and SNS have been generating enormous amounts of data, and the portion represented as unstructured text has grown geometrically. Because it is impractical to read all of this text, methods for accessing it rapidly and grasping its key points have become important. To meet this need for efficient understanding, many studies on text summarization for handling and using tremendous amounts of text data have been proposed. In particular, many methods using machine learning and artificial intelligence algorithms, known as "automatic summarization," have recently been proposed to generate summaries objectively and effectively. However, almost all text summarization methods proposed to date construct the summary according to the frequency of contents in the original documents.
Such summaries tend to omit low-weight subjects, that is, subjects mentioned less often in the original text.
If a summary covers only the major subjects, bias occurs and information is lost, making it difficult to identify every subject the documents contain. To avoid this bias, one can summarize with balance among the topics of a document so that every subject is represented, but an imbalanced distribution across those subjects may still remain. To retain subject balance in the summary, it is necessary to consider the proportion of every subject the documents originally have and to allocate portions to subjects evenly, so that even sentences on minor subjects are sufficiently included in the summary.
In this study, we propose a "subject-balanced" text summarization method that preserves balance among all subjects and minimizes the omission of low-frequency subjects. For subject-balanced summarization, we use two summary evaluation criteria: "completeness" and "succinctness." Completeness means that the summary should fully cover the contents of the original documents, while succinctness means that the summary contains minimal internal duplication. The proposed method consists of three phases. The first phase constructs subject term dictionaries. Topic modeling is used to calculate topic-term weights, which indicate the degree to which each term is related to each topic. From these weights, highly related terms for every topic can be identified, and the subjects of the documents emerge from topics composed of semantically similar terms. A few terms that represent each subject well are then selected; in this method, these are called "seed terms." However, the seed terms alone are too few to describe each subject sufficiently, so additional terms similar to the seed terms are needed for a well-constructed subject dictionary. Word2Vec is used for this expansion: after training a Word2Vec model, word vectors are obtained, and the similarity between any two terms can be derived from those vectors using cosine similarity.
The higher the cosine similarity between two terms, the more strongly related they are defined to be. Terms with high similarity to the seed terms of each subject are therefore selected, and after filtering these expanded terms, the subject dictionary is finally constructed. The next phase allocates a subject to every sentence of the original documents. To grasp the contents of each sentence, frequency analysis is first conducted on the terms composing the subject dictionaries. TF-IDF weights for each subject are then calculated, which indicate how much each sentence explains each subject.
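As a minimal sketch of the dictionary-expansion step, the following uses tiny hand-made vectors as stand-ins for trained Word2Vec embeddings; the terms, threshold, and function names are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

# Toy word vectors standing in for trained Word2Vec embeddings.
vectors = {
    "clean":    np.array([0.9, 0.1, 0.0]),
    "tidy":     np.array([0.85, 0.2, 0.05]),
    "spotless": np.array([0.8, 0.15, 0.1]),
    "noisy":    np.array([0.1, 0.9, 0.2]),
    "loud":     np.array([0.05, 0.95, 0.1]),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def expand_seed(seed, vectors, threshold=0.9):
    """Return terms whose cosine similarity with the seed exceeds the threshold."""
    sv = vectors[seed]
    return sorted(
        t for t, v in vectors.items()
        if t != seed and cosine(sv, v) >= threshold
    )

# Expanding the seed term "clean" yields one subject's dictionary entry.
subject_dictionary = {"cleanliness": ["clean", *expand_seed("clean", vectors)]}
```

In practice the vectors would come from a Word2Vec model trained on the review corpus, and the similarity threshold (or a top-k cutoff) controls how aggressively each seed term is expanded.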
However, TF-IDF weights can grow without bound, so the per-subject weights of each sentence are normalized to values between 0 and 1. Each sentence is then assigned to the subject with the maximum normalized TF-IDF weight, finally yielding a sentence group for each subject. The last phase is summary generation. Sen2Vec is used to measure the similarity between the sentences of each subject, and a similarity matrix is formed. Through iterative sentence selection, a summary can be generated that fully covers the contents of the original documents while minimizing duplication within the summary itself.
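A compact sketch of the allocation and selection phases: raw dictionary-term counts stand in for TF-IDF weights and Jaccard word overlap stands in for Sen2Vec similarity, so both are simplifications of the paper's components, and all names and thresholds here are hypothetical:

```python
# Toy subject dictionaries; real ones come from the expansion phase.
dictionaries = {
    "cleanliness": {"clean", "tidy", "spotless"},
    "noise": {"noisy", "loud", "quiet"},
}

sentences = [
    "the room was clean and tidy",
    "the room was spotless and clean",
    "the street outside was loud and noisy",
    "a quiet floor but a noisy lobby",
]

def subject_scores(sentence, dictionaries):
    """Count dictionary hits per subject, normalized to [0, 1]."""
    words = set(sentence.split())
    raw = {s: len(words & terms) for s, terms in dictionaries.items()}
    top = max(raw.values()) or 1
    return {s: c / top for s, c in raw.items()}

# Phase 2: assign each sentence to its highest-scoring subject.
groups = {s: [] for s in dictionaries}
for sent in sentences:
    scores = subject_scores(sent, dictionaries)
    groups[max(scores, key=scores.get)].append(sent)

def jaccard(a, b):
    """Word-overlap similarity, a crude stand-in for Sen2Vec similarity."""
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / len(wa | wb)

# Phase 3: pick one sentence per subject in turn, skipping near-duplicates.
def balanced_summary(groups, max_similarity=0.5):
    summary = []
    for subject, sents in groups.items():
        for sent in sents:
            if all(jaccard(sent, s) < max_similarity for s in summary):
                summary.append(sent)
                break  # one sentence per subject keeps the summary balanced
    return summary
```

Taking one sentence per subject group before moving on is what enforces the balance property; the duplicate check against already-selected sentences is what enforces succinctness.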
To evaluate the proposed method, 50,000 TripAdvisor reviews were used to construct the subject dictionaries and 23,087 reviews were used to generate summaries. A comparison between summaries from the proposed method and frequency-based summaries verified that the proposed method better retains the balance among all subjects that the documents originally have.
Cite this article
JIIS Style
Yun, Y., E. Ko, and N. Kim, "Subject-Balanced Intelligent Text Summarization Scheme", Journal of Intelligence and Information Systems, Vol. 25, No. 2 (2019), 141~166.

IEEE Style
Yeoil Yun, Eunjung Ko, and Namgyu Kim, "Subject-Balanced Intelligent Text Summarization Scheme", Journal of Intelligence and Information Systems, vol. 25, no. 2, pp. 141~166, 2019.

ACM Style
Yun, Y., Ko, E., and Kim, N., 2019. Subject-Balanced Intelligent Text Summarization Scheme. Journal of Intelligence and Information Systems. 25, 2, 141--166.
Export Formats: BibTeX, EndNote

@article{yun2019,
  author = {Yun, Yeoil and Ko, Eunjung and Kim, Namgyu},
  title = {Subject-Balanced Intelligent Text Summarization Scheme},
  journal = {Journal of Intelligence and Information Systems},
  issue_date = {June 2019},
  volume = {25},
  number = {2},
  month = jun,
  year = {2019},
  issn = {2288-4866},
  pages = {141--166},
  url = {},
  doi = {},
  publisher = {Korea Intelligent Information System Society},
  address = {Seoul, Republic of Korea},
  keywords = {Document Summarization, Review Summarization, Text Mining, Topic Modeling and Word Embedding},
}
%0 Journal Article
%1 777
%A Yeoil Yun
%A Eunjung Ko
%A Namgyu Kim
%T Subject-Balanced Intelligent Text Summarization Scheme
%J Journal of Intelligence and Information Systems
%@ 2288-4866
%V 25
%N 2
%P 141-166
%D 2019
%I Korea Intelligent Information System Society