Journal of Intelligence and Information Systems,
Vol. 21, No. 2, June 2015
A Collaborative Filtering System Combined with Users’ Review Mining: Application to the Recommendation of Smartphone Apps
ByeoungKug Jeon, and Hyunchul Ahn
Vol. 21, No. 2, Page: 1 ~ 18
Keywords : Recommender system, Collaborative filtering, Text mining, TF-IDF, App Store
The collaborative filtering (CF) algorithm has been widely used for recommender systems in both academic and practical applications. A general CF system compares users based on how similar they are, and creates recommendation results from the items favored by other people with similar tastes. It is therefore very important for CF to measure the similarities between users accurately, because the recommendation quality depends on it. In most cases, only users' explicit numeric ratings of items (i.e., quantitative information) have been used to calculate these similarities. However, several studies have indicated that qualitative information, such as users' reviews of the items, may help measure the similarities more accurately. Considering that, with the advent of Web 2.0, many people are likely to share honest opinions on items they have recently purchased, user reviews can be regarded as an informative source for identifying user preferences accurately.
Against this background, this study proposes a new hybrid recommender system that incorporates users' review mining. Our proposed system is based on conventional memory-based CF, but it is designed to use both a user's numeric ratings and his or her text reviews of the items when calculating similarities between users. Specifically, our system creates not only a user-item rating matrix but also a user-item review term matrix. It then calculates rating similarity and review similarity from each matrix, and computes the final user-to-user similarity from these two similarities. As methods for calculating review similarity between users, we propose two alternatives: one uses the frequency of the commonly used terms, and the other uses the sum of the importance weights of the commonly used terms in users' reviews. For the importance weights of terms, we propose using average TF-IDF (Term Frequency - Inverse Document Frequency) weights.
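The combination of rating similarity and review-term similarity described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the linear combination with weight `alpha`, the cosine measure, and the toy data are all assumptions, since the abstract does not specify the exact combination function.

```python
import math

def cosine(u, v):
    # Cosine similarity between two sparse vectors stored as dicts.
    common = set(u) & set(v)
    dot = sum(u[k] * v[k] for k in common)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def hybrid_similarity(ratings_a, ratings_b, terms_a, terms_b, alpha=0.5):
    # Assumed linear blend of rating similarity and review-term similarity;
    # terms_* would hold average TF-IDF weights of each user's review terms.
    return alpha * cosine(ratings_a, ratings_b) + (1 - alpha) * cosine(terms_a, terms_b)

# Toy users: numeric app ratings plus TF-IDF-weighted review terms.
ratings = {"u1": {"app1": 5, "app2": 3}, "u2": {"app1": 4, "app2": 2}}
terms = {"u1": {"fast": 1.2, "fun": 0.8}, "u2": {"fast": 0.9, "buggy": 1.1}}

sim = hybrid_similarity(ratings["u1"], ratings["u2"], terms["u1"], terms["u2"])
```

With `alpha = 0.5` the two information sources contribute equally; tuning this weight would itself be an experimental question.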
To validate the applicability of the proposed system, we applied it to the implementation of a recommender system for smartphone applications (hereafter, apps). At present, over a million apps are offered in each of the app stores operated by Google and Apple. Due to this information overload, users have difficulty selecting the apps they really want. Furthermore, app store operators like Google and Apple have by now accumulated a huge amount of user reviews of apps. Thus, we chose smartphone app stores as the application domain of our system. To collect the experimental data set, we built and operated a Web-based data collection system for about two weeks. As a result, we obtained 1,246 valid responses (ratings and reviews) from 78 users. The experimental system was implemented using Microsoft Visual Basic for Applications (VBA) and SAS Text Miner. To avoid distortion due to human intervention, we did not apply any manual refinement during the review mining process. To examine the effectiveness of the proposed system, we compared its performance to that of a conventional CF system. The performance of the recommender systems was evaluated using average MAE (mean absolute error).
The experimental results showed that our proposed system (MAE = 0.7867 ~ 0.7881) slightly outperformed a conventional CF system (MAE = 0.7939). They also showed that calculating review similarity between users based on TF-IDF weights (MAE = 0.7867) led to better recommendation accuracy than calculating it based on the frequency of the commonly used terms in reviews (MAE = 0.7881). A paired-samples t-test showed that our proposed system with review similarity calculated from the frequency of commonly used terms outperformed the conventional CF system at the 10% statistical significance level. Our study sheds light on the application of user review information for facilitating electronic commerce by recommending proper items to users.
Membership Fluidity and Knowledge Collaboration in Virtual Communities: A Multilateral Approach to Membership Fluidity
Hyun-jung Park, and Kyung-shik Shin
Vol. 21, No. 2, Page: 19 ~ 47
Keywords : Virtual community, fluidity, turnover, member retention, knowledge collaboration
In this era of the knowledge economy, a variety of virtual communities are proliferating for the purpose of knowledge creation and utilization. Since the voluntary contributions of members are the essential source of knowledge, member turnover can have significant implications for the survival and success of virtual communities. However, there is a dearth of research on the effect of membership turnover, and even the method of measuring membership turnover in virtual communities remains unclear. In a traditional context, membership turnover is calculated as the ratio of the number of departing members to the average number of members for a given time period. In virtual communities, while the influx of newcomers can be clearly measured, the magnitude of departure is elusive, since explicit withdrawals are seldom executed. In addition, there is no common way to determine the average number of community members who return and contribute intermittently at will.
This study initially examines the limitations in applying the concept of traditional turnover to virtual communities, and proposes five membership fluidity measures based on a preliminary analysis of editing behaviors of 2,978 featured articles in English Wikipedia. Subsequently, this work investigates the relationships between three selected membership fluidity measures and group collaboration performance, reflecting a moderating effect dependent on work characteristic. We obtained the following results: First, membership turnover relates to collaboration efficiency in a right-shortened U-shaped manner, with a moderating effect from work characteristic; given the same turnover rate, the promotion likelihood for a more professional task is lower than that for a less professional task, and the likelihood difference diminishes as the turnover rate increases. Second, contribution period relates to collaboration efficiency in a left-shortened U-shaped manner, with a moderating effect from work characteristic; the marginal performance change per unit change of contribution period is greater for a less professional task. Third, the number of new participants per month relates to collaboration efficiency in a left-shortened reversed U-shaped manner, for which the moderating effect from work characteristic appears to be insignificant.
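The traditional turnover ratio discussed above can be made concrete with a short sketch. The sampled membership counts and the departure figure are hypothetical; in a virtual community, the `departures` count is precisely the quantity that is hard to observe, which motivates the fluidity measures proposed in the study.

```python
def traditional_turnover(departures, member_counts):
    # Turnover = departing members / average membership over the period.
    # member_counts: membership size sampled at several points in the period.
    avg_members = sum(member_counts) / len(member_counts)
    return departures / avg_members

# Example: 15 members left while membership averaged (100+110+90)/3 = 100.
rate = traditional_turnover(15, [100, 110, 90])  # 0.15
```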
Building a Korean Sentiment Lexicon Using Collective Intelligence
Jungkook An, and Hee-Woong Kim
Vol. 21, No. 2, Page: 49 ~ 67
Keywords : Sentiment Lexicon, Korean Natural Language Processing, Sentiment Analysis, Collective Intelligence, Folksonomy
Recently, the emergence of big data and social media has led us into a big bang of data. Social networking services are widely used by people around the world, and they have become a major communication tool for all ages. Over the last decade, as online social networking sites have become increasingly popular, companies have tended to focus on advanced social media analysis for their marketing strategies. In addition to social media analysis, companies are concerned about the propagation of negative opinions on social networking sites such as Facebook and Twitter, as well as on e-commerce sites. The effect of online word of mouth (WOM), such as product ratings, product reviews, and product recommendations, is very influential, and negative opinions have a significant impact on product sales. This trend has drawn researchers' attention to natural language processing, particularly sentiment analysis. Sentiment analysis, also referred to as opinion mining, is the process of identifying the polarity of subjective information, and it has been applied to various research and practical fields. However, obstacles arise when the Korean language (Hangul) is used in natural language processing, because it is an agglutinative language whose rich morphology poses problems. As a result, there is a lack of Korean natural language processing resources such as sentiment lexicons, which has significantly limited researchers and practitioners considering sentiment analysis. Our study builds a Korean sentiment lexicon with collective intelligence, and provides an API (Application Programming Interface) service to open and share the sentiment lexicon data with the public. For the pre-processing, we created a Korean lexicon database of 517,178 words and classified them into sentiment and non-sentiment words.
To classify them, we first identified stop words, which are quite likely to play a negative role in sentiment analysis, and excluded them from our sentiment scoring. In general, sentiment words are nouns, adjectives, verbs, and adverbs, as they carry sentimental expressions that are positive, neutral, or negative. On the other hand, non-sentiment words are interjections, determiners, numerals, postpositions, etc., as they generally carry no sentimental expression. To build a reliable sentiment lexicon, we adopted the concept of collective intelligence as a model for crowdsourcing. In addition, the concept of folksonomy was implemented in the taxonomy process to support collective intelligence. To make up for an inherent weakness of folksonomy, we adopted a majority rule by building a voting system. Participants, as voters, were offered three voting options (positive, negative, and neutral), and the voting was conducted on one of the largest social networking sites for college students in Korea. More than 35,000 votes have been cast by college students in Korea, and we keep this voting system open by maintaining the project as a perpetual study. Moreover, any change in the sentiment score of a word is an important observation, because it enables us to track temporal changes in Korean as a natural language. Lastly, our study offers a RESTful, JSON-based API service through a web platform to better support users such as researchers, companies, and developers. Finally, our study makes important contributions to both research and practice. In terms of research, our Korean sentiment lexicon serves as a resource for Korean natural language processing. In terms of practice, practitioners such as managers and marketers can implement sentiment analysis effectively by using the Korean sentiment lexicon we built.
Moreover, our study sheds new light on the value of folksonomy by combining it with collective intelligence, and we also expect to give a new direction and a new start to the development of Korean natural language processing.
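The majority-rule voting step described above can be sketched roughly as follows. The tie-handling policy (returning no label) is an assumption on our part; the abstract does not specify how ties among the three options are resolved.

```python
from collections import Counter

def majority_polarity(votes):
    # Assign a word's polarity by majority rule over crowd votes.
    # votes: list of "positive" / "negative" / "neutral" labels.
    # Returns None for an empty vote list or a tie (an assumed policy).
    if not votes:
        return None
    ranked = Counter(votes).most_common()
    if len(ranked) > 1 and ranked[0][1] == ranked[1][1]:
        return None
    return ranked[0][0]

label = majority_polarity(["positive", "positive", "neutral", "negative", "positive"])
```

In the actual system, each new vote would update the stored sentiment score, so the label of a word can shift over time, which is exactly the temporal change the authors propose to track.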
A Study of 'Emotion Trigger' by Text Mining Techniques
Juyoung An, Junghwan Bae, Namgi Han, and Song Min
Vol. 21, No. 2, Page: 69 ~ 92
Keywords : Emotion Trigger, Word2Vec, Sentimental Analysis, Text Mining, Social Issues
The explosion of social media data has led researchers to apply text-mining techniques to analyze big social media data more rigorously. Although social media text analysis algorithms have improved, previous approaches still have limitations. In the field of sentiment analysis of social media written in Korean, there are two typical approaches. One is the linguistic approach using machine learning, which is the most common; some studies have added grammatical factors to the feature sets used to train classification models. The other approach applies semantic analysis to sentiment analysis, but it has mainly been applied to English texts. To overcome these limitations, this study applies the Word2Vec algorithm, an extension of neural network algorithms, to deal with the more extensive semantic features that were underestimated in existing sentiment analysis. The result of adopting the Word2Vec algorithm is compared to the result of co-occurrence analysis to identify the difference between the two approaches. The results show that the Word2Vec algorithm extracts about three times as many words expressing emotion about a keyword as co-occurrence analysis does. The difference between the two results comes from Word2Vec's vectorization of semantic features. Therefore, the Word2Vec algorithm can catch hidden related words that have not been found in traditional analysis. In addition, Part-Of-Speech (POS) tagging for Korean is used to detect adjectives as "emotion words" in Korean. The emotion words extracted from the text are then converted into word vectors by the Word2Vec algorithm to find related words. Among these related words, nouns are selected, because each of them may have a causal relationship with the emotion word in the sentence.
The process of extracting these trigger factors of emotion words is named "Emotion Trigger" in this study. As a case study, the data sets used in the study were collected by searching with three keywords (professor, prosecutor, and doctor), as these keywords are associated with rich public emotion and opinion. Preliminary data collection was conducted to select secondary keywords for data gathering. The secondary keywords used to gather the data for each keyword are as follows: professor (sexual assault, misappropriation of research money, recruitment irregularities, polifessor); doctor (Shin Hae-chul Sky Hospital, drinking and plastic surgery, rebate); prosecutor (lewd behavior, sponsor). The text data comprise about 100,000 documents (professor: 25,720; doctor: 35,110; prosecutor: 43,225), gathered from news, blogs, and Twitter to reflect various levels of public emotion. Gephi was used for visualization, and all programs used for text processing and analysis were written in Java. The contributions of this study are as follows. First, different approaches to sentiment analysis are integrated to overcome the limitations of existing approaches. Second, finding Emotion Triggers can detect hidden connections to public emotion that existing methods cannot. Finally, the approach used in this study can be generalized regardless of the type of text data. The limitation of this study is that it is hard to confirm that a word extracted by Emotion Trigger processing has a significant causal relationship with the emotion word in a sentence. A future study will clarify the causal relationship between emotion words and the words extracted by Emotion Trigger by comparing them with manually tagged relationships. Furthermore, the text data used in Emotion Trigger include Twitter posts, which have a number of distinctive features that we did not deal with in this study. These features will be considered in a further study.
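The Emotion Trigger selection step (rank words by similarity to an emotion word, then keep nouns) can be sketched as below. The toy vectors stand in for the output of a trained Word2Vec model, and the POS table stands in for a Korean POS tagger; both the words and the numbers are invented for illustration.

```python
import math

def cosine(u, v):
    # Cosine similarity between two dense vectors (lists of floats).
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Toy embeddings and POS tags standing in for Word2Vec output and a tagger.
vectors = {
    "angry":     [0.9, 0.1, 0.0],
    "professor": [0.8, 0.2, 0.1],
    "scandal":   [0.7, 0.3, 0.0],
    "quickly":   [0.0, 0.1, 0.9],
}
pos = {"professor": "noun", "scandal": "noun", "quickly": "adverb"}

def emotion_triggers(emotion_word, topn=2):
    # Rank words by similarity to the emotion word, keep only nouns,
    # which are the candidate trigger factors in the paper's pipeline.
    scores = [(w, cosine(vectors[emotion_word], v))
              for w, v in vectors.items() if w != emotion_word]
    scores.sort(key=lambda t: t[1], reverse=True)
    return [w for w, _ in scores if pos.get(w) == "noun"][:topn]

triggers = emotion_triggers("angry")
```

In practice the same query would be issued against a model trained on the collected news, blog, and Twitter corpus rather than a hand-built dictionary.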
Strategy for Store Management Using SOM Based on RFM
Yoon Jeong Jeong, Il Young Choi, Jae Kyeong Kim, and Ju Choel Choi
Vol. 21, No. 2, Page: 93 ~ 112
Keywords : Product Segmentation, RFM, SOM, Clustering.
As consumers' consumption patterns have changed, traditional retail shops have evolved into hypermarkets and convenience stores that mostly offer groceries and daily products. It is therefore important to maintain proper inventory levels and product configuration in order to effectively utilize the limited space in a retail store and increase sales. Accordingly, this study proposes a product configuration and inventory level strategy based on the RFM (Recency, Frequency, Monetary) model and the SOM (self-organizing map) for managing retail shops effectively. The RFM model is an analytic model for analyzing customer behavior based on past buying activities, and it can distinguish important customers in large data sets by three variables. R represents recency, referring to the last purchase; the customer who purchased most recently has the largest R value. F represents frequency, the number of transactions in a particular period, and M represents monetary value, the amount of money spent in a particular period. The RFM method is thus known to be a very effective model for customer segmentation. In this study, SOM cluster analysis was performed on the normalized values of the RFM variables. The SOM is regarded as one of the most distinguished artificial neural network models for unsupervised learning. It is a popular tool for clustering and visualizing high-dimensional data in such a way that similar items are grouped spatially close to one another, and it has been successfully applied in various technical fields for finding patterns. In our research, the procedure tries to find sales patterns by analyzing product sales records with recency, frequency, and monetary values. To suggest a business strategy, we then build a decision tree on the SOM results. To validate the proposed procedure, we adopted M-mart data collected between 2014.01.01 and 2014.12.31.
Each product is assigned R, F, and M values, and the products are grouped into nine clusters using the SOM. We also performed three tests using weekday data, weekend data, and the whole data set in order to analyze changes in sales patterns. To propose a strategy for each cluster, we examined the criteria of the product clustering; the clusters produced by the SOM can be explained by the characteristics identified by the decision tree. As a result, we can suggest an inventory management strategy for each of the nine clusters through the proposed procedure. Products in the cluster with the highest R, F, and M values need a high inventory level and should be placed where they can increase customer traffic. In contrast, products in the cluster with the lowest R, F, and M values need a low inventory level and can be placed where visibility is low. Products in the cluster with the highest R value are usually newly released products and should be placed at the front of the store. For products in the cluster with the highest F value, which were purchased heavily in the past, managers should decrease inventory levels gradually: this cluster has lower R and M values than average, from which it can be deduced that these products have sold poorly recently and that total sales will be lower than the purchase frequency suggests. The procedure presented in this study is expected to contribute to raising the profitability of retail stores. The paper is organized as follows. The second chapter briefly reviews the literature related to this study. The third chapter presents the proposed procedure, and the fourth chapter applies it to actual product sales data. Finally, the fifth chapter presents the conclusions of the study and further research.
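The per-product R, F, M computation described above can be sketched as follows. The toy sales records are invented, and the normalization and SOM clustering steps (e.g., a 3x3 map giving nine clusters) are omitted; this shows only how the three variables are derived from raw transactions.

```python
from datetime import date

def rfm(transactions, today):
    # Compute (R, F, M) per product from (product, sale_date, amount) records:
    # R = days since last sale, F = number of transactions, M = total amount.
    last_sale, freq, money = {}, {}, {}
    for product, day, amount in transactions:
        last_sale[product] = max(last_sale.get(product, day), day)
        freq[product] = freq.get(product, 0) + 1
        money[product] = money.get(product, 0) + amount
    return {p: ((today - last_sale[p]).days, freq[p], money[p]) for p in freq}

# Hypothetical sales log over the 2014 analysis period.
sales = [
    ("milk",  date(2014, 12, 30), 2.5),
    ("milk",  date(2014, 12, 20), 2.5),
    ("bread", date(2014, 11, 1),  1.8),
]
scores = rfm(sales, today=date(2014, 12, 31))
# scores["milk"] == (1, 2, 5.0); scores["bread"] == (60, 1, 1.8)
```

After min-max normalizing these triples, each product would be mapped onto the SOM grid, and the resulting cluster labels fed to the decision tree.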
An Efficient Estimation of Place Brand Image Power Based on Text Mining Technology
Sukjae Choi, Jongshik Jeon, Biswas Subrata, and Ohbyung Kwon
Vol. 21, No. 2, Page: 113 ~ 129
Keywords : Text Mining, Brand Image, Location Brand, Keyword Analysis, Anholt’s Brand Index
Location branding is an important income-generating activity that gives special meaning to a specific location while producing identity and communal value. Many areas, such as marketing, architecture, and city construction, exert an influence on creating an impressive brand image. A place brand that is well recognized by both Koreans and foreigners creates significant economic effects. There has been research on creating a strategic and detailed place brand image; the representative research was carried out by Anholt, who surveyed two million people from 50 different countries. However, such investigations, including survey research, require a great deal of labor and significant expense. As a result, there is a need for more affordable, objective, and effective research methods. The purpose of this paper is to find a way to measure the intensity of a brand image objectively and at low cost through text mining. The proposed method extracts keywords and the factors constructing the location brand image from related web documents. In this way, we can measure the brand image intensity of a specific location. The performance of the proposed methodology was verified through comparison with Anholt's city image consistency index ranking of 50 cities around the world. Four methods were applied in the test. First, the RANDOM method artificially ranks the cities included in the experiment. The HUMAN method constructs a questionnaire and selects nine volunteers who are well acquainted with brand management and with the cities to be evaluated; they are asked to rank the cities, and their rankings are compared with Anholt's evaluation results. The TM method applies the proposed method to evaluate the cities with all evaluation criteria.
TM-LEARN, an extension of TM, selects significant evaluation items from the items in every criterion and then evaluates the cities with the selected evaluation criteria only. RMSE is used as a metric to compare the evaluation results. The experimental results for this paper's methodology are as follows. First, compared to the evaluation method that targets ordinary people, this method appeared to be more accurate. Second, compared to the traditional survey method, the time and cost are much lower because we used automated means. Third, the proposed methodology is timely, because evaluation can be repeated at any time. Fourth, compared to Anholt's method, which evaluates only pre-specified cities, the proposed methodology is applicable to any location. Finally, the proposed methodology has relatively high objectivity, because our research was conducted on open source data. As a result, our text mining approach to city image evaluation has shown validity in terms of accuracy, cost-effectiveness, timeliness, scalability, and reliability.
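The RMSE comparison against the Anholt reference ranking can be sketched as below. The city ranks here are hypothetical; in the study, each of the four methods (RANDOM, HUMAN, TM, TM-LEARN) would produce a `predicted` list to score against Anholt's ranking.

```python
import math

def rmse(predicted, reference):
    # Root-mean-square error between two equal-length rank lists.
    assert len(predicted) == len(reference)
    se = sum((p - r) ** 2 for p, r in zip(predicted, reference))
    return math.sqrt(se / len(predicted))

# Hypothetical ranks for five cities under one method vs. the reference.
method_ranks = [1, 3, 2, 5, 4]
anholt_ranks = [1, 2, 3, 4, 5]
error = rmse(method_ranks, anholt_ranks)  # sqrt((0+1+1+1+1)/5)
```

A lower RMSE means the method's ranking agrees more closely with the survey-based reference.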
The proposed method provides managers with clear guidelines regarding brand management in the public and private sectors. In the public sector, local officials could use the proposed method to formulate strategies and enhance the image of their places in an efficient manner. Rather than conducting heavy questionnaires, local officials could quickly monitor the current place image beforehand and decide to proceed with a formal place image test only when the evaluation results from the proposed method are out of the ordinary, whether the results indicate an opportunity or a threat to the place. Moreover, by combining the proposed method with morphological analysis, extraction of meaningful facets of a place brand from text, sentiment analysis, and more, marketing strategy planners or civil engineering professionals may obtain deeper and more abundant insights for better place brand images. In the future, a prototype system will be implemented to show the feasibility of the idea proposed in this paper.
Restoring Omitted Sentence Constituents in Encyclopedia Documents Using Structural SVM
Min-Kook Hwang, Youngtae Kim, Dongyul Ra, Soojong Lim, and Hyunki Kim
Vol. 21, No. 2, Page: 131 ~ 150
Keywords : Encyclopedia, Noun phrase omission, Zero anaphora resolution, Structural SVM, Sequence labeling
Omission of noun phrases for obligatory cases is a common phenomenon in Korean and Japanese sentences that is not observed in English. When an argument of a predicate can be filled with a noun phrase co-referential with the title, the argument is more easily omitted in encyclopedia texts. The omitted noun phrase is called a zero anaphor or zero pronoun. Encyclopedias like Wikipedia are a major source for information extraction by intelligent application systems such as information retrieval and question answering systems. However, the omission of noun phrases degrades the quality of information extraction. This paper deals with the problem of developing a system that can restore omitted noun phrases in encyclopedia documents. The problem our system deals with is very similar to zero anaphora resolution, one of the important problems in natural language processing. A noun phrase existing in the text that can be used for restoration is called an antecedent; an antecedent must be co-referential with the zero anaphor. While the candidates for the antecedent are only noun phrases in the same text in zero anaphora resolution, the title is also a candidate in our problem.
In our system, the first stage is in charge of detecting the zero anaphor. In the second stage, antecedent search is carried out over the candidates. If antecedent search fails, an attempt is made, in the third stage, to use the title as the antecedent. The main characteristic of our system is its use of a structural SVM for finding the antecedent. The noun phrases in the text that appear before the position of the zero anaphor comprise the search space. The main technique used in previous research works is to perform binary classification over all the noun phrases in the search space; the noun phrase classified as an antecedent with the highest confidence is selected. In this paper, however, we propose that antecedent search be viewed as the problem of assigning antecedent indicator labels to a sequence of noun phrases. In other words, sequence labeling is employed in antecedent search in the text; we are the first to suggest this idea. To perform sequence labeling, we use a structural SVM that receives a sequence of noun phrases as input and returns the sequence of labels as output. An output label takes one of two values: one indicating that the corresponding noun phrase is the antecedent, and the other indicating that it is not. The structural SVM we used is based on the modified Pegasos algorithm, which exploits a subgradient descent methodology for optimization problems.
To train and test our system, we selected a set of Wikipedia texts and constructed an annotated corpus providing gold-standard answers, such as zero anaphors and their possible antecedents. Training examples were prepared using the annotated corpus and used to train the SVMs and test the system. For zero anaphor detection, sentences are parsed by a syntactic analyzer, and omitted subject or object cases are identified. The performance of our system is thus dependent on that of the syntactic analyzer, which is a limitation of our system. When an antecedent is not found in the text, our system tries to use the title to restore the zero anaphor; this is based on binary classification using a regular SVM. The experiment showed that our system's performance is F1 = 68.58%, which means that a state-of-the-art system can be developed with our technique. It is expected that future work enabling the system to utilize semantic information can lead to a significant performance improvement.
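For orientation, the per-candidate binary-classification baseline described above (score every preceding noun phrase, pick the most confident, fall back to the title) can be sketched as follows. This is not the paper's structural-SVM sequence labeler; the candidate phrases, scores, and threshold are all hypothetical.

```python
def select_antecedent(candidates, score, title, threshold=0.5):
    # Baseline antecedent search: score each preceding noun phrase with a
    # classifier and pick the most confident one; fall back to the document
    # title when no candidate clears the threshold. The paper's contribution
    # replaces this per-candidate classification with sequence labeling.
    if candidates:
        best = max(candidates, key=score)
        if score(best) >= threshold:
            return best
    return title

# Hypothetical confidence scores from a trained classifier.
scores = {"the composer": 0.8, "the opera": 0.3}
antecedent = select_antecedent(list(scores), scores.get, title="Giuseppe Verdi")
```

In the sequence-labeling view, the whole candidate list would instead be labeled jointly, letting the model account for interactions between candidates.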
A Study on the establishment of IoT management process in terms of business according to Paradigm Shift
Min-Eui Jeong, and Song-Jin Yu
Vol. 21, No. 2, Page: 151 ~ 171
Keywords : Internet of Things, Paradigm Shift, Real Time Enterprise, Management Strategy, Business Process Management
This study examined the concepts of the Internet of Things (IoT) and the major issues and IoT trends in the domestic and international markets. It also reviewed the advent of the IoT era, which has caused a paradigm shift, and proposed an appropriate corresponding strategy for enterprises. Global competition has begun in the IoT market, so for businesses to be competitive and responsive, the government's efforts as well as the efforts of the companies themselves are needed. In particular, to cope with the dynamic environment appropriately, a faster and more efficient strategy is required. In other words, we propose a management strategy that can respond to the competitive IoT era at its tipping point through the vision of a paradigm shift.
We forecast and describe the emergence of a paradigm shift through a comparative analysis of past management paradigms and the IoT management paradigm, as follows: Ⅰ) knowledge- and learning-oriented management, Ⅱ) technology- and innovation-oriented management, Ⅲ) demand-driven management, and Ⅳ) global collaboration management. The knowledge- and learning-oriented management paradigm is expected to become a new management paradigm due to the development of IT and information processing technology. Along with the rapid development of IT infrastructure and data processing, storage, knowledge sharing, and learning have become more important. The current hardware-oriented management paradigm will change to a software-oriented paradigm; in particular, the software and platform market, a key component of the IoT ecosystem, is expected to be led by technology- and innovation-oriented management. In 2011, Gartner announced the concept of "Demand-Driven Value Networks (DDVN)", which emphasizes the value of the network as a whole. The demand-driven management paradigm therefore creates demand through advanced processes, rather than simply running processes that respond to demand. The global collaboration management paradigm creates value through fusion between technologies, between countries, and between industries. In particular, cooperation between enterprises with financial resources and brand power and venture companies with creative ideas and technical capabilities will generate positive synergies. Through this, a win-win environment for large enterprises and small companies can be built.
To cope with the paradigm shift and establish a management strategy for enterprise processes, this study utilized the RTE cyclone model proposed by Gartner. The RTE concept consists of three stages: Lead, Operate, and Manage. The Lead stage utilizes capital to strengthen business competitiveness; its goal is to link external stimuli to strategy development and to execute the company's business strategy for capital and investment activities and environmental changes. The Manage stage responds appropriately to threats and internalizes the goals of the enterprise. The Operate stage takes action to increase the efficiency of services across the enterprise, achieving the integration and simplification of processes with real-time data capture. The RTE (Real Time Enterprise) concept has practical value as a management strategy. Applying it appropriately in this study, we propose an 'IoT-RTE Cyclone model' that emphasizes the agility of the enterprise, based on real-time monitoring, analysis, and action through IT and IoT technology. The IoT-RTE Cyclone model can integrate the business processes of each sector of the enterprise and support the overall service; therefore, the model can be used as an effective response strategy for enterprises. In particular, as the IoT-RTE Cyclone model responds to external events, wasteful elements are removed each time the process is repeated, making the operation of the process more efficient and agile.
This IoT-RTE Cyclone model can be used as an effective response strategy for enterprises in the rapidly changing IoT era because it supports the overall service of the enterprise. When this model leverages a collaborative system among enterprises, breakthrough cost savings can be expected through improved competitiveness, shorter global lead times, and minimized duplication.
A Hybrid Under-sampling Approach for Better Bankruptcy Prediction
Taehoon Kim, and Hyunchul Ahn
Vol. 21, No. 2, Page: 173 ~ 190
Keywords : Bankruptcy Prediction, Under-sampling, k-Reverse Nearest Neighbor, One-class Support Vector Machine, Classification
The purpose of this study is to improve bankruptcy prediction models by using a novel hybrid under-sampling approach. Most prior studies have tried to enhance the accuracy of bankruptcy prediction models by improving the classification methods involved. In contrast, we focus on appropriate data preprocessing as a means of enhancing accuracy. In particular, we aim to develop an effective sampling approach for bankruptcy prediction, since most prediction models suffer from class imbalance problems. The approach proposed in this study is a hybrid under-sampling method that combines the k-Reverse Nearest Neighbor (k-RNN) and one-class support vector machine (OCSVM) approaches. k-RNN can effectively eliminate outliers, while OCSVM contributes to the selection of informative training samples from the majority-class data. To validate our proposed approach, we applied it to data on H Bank's non-externally-audited companies in Korea, and compared the performance of classifiers trained on data selected by the proposed under-sampling approach with that of classifiers trained on randomly sampled data. The empirical results show that the proposed under-sampling approach generally improves the accuracy of classifiers such as logistic regression, discriminant analysis, decision trees, and support vector machines. They also show that the proposed approach reduces the risk of false negative errors, which lead to higher misclassification costs.
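The two filtering steps described above can be sketched as follows. This is a minimal illustration in Python using scikit-learn, not the study's actual implementation; the parameter choices (k, nu, and the reverse-neighbor cutoff) are assumptions for demonstration only.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import OneClassSVM

def hybrid_undersample(X_major, k=5, nu=0.5, min_rnn=1):
    """Sketch of hybrid under-sampling for the majority class.

    Step 1 (k-RNN): drop points that appear in fewer than min_rnn other
    points' k-nearest-neighbor lists -- such points are likely outliers.
    Step 2 (OCSVM): keep points the one-class SVM flags as inliers,
    as a proxy for informative training samples.
    """
    # Step 1: count reverse nearest neighbors for every point.
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_major)
    _, idx = nn.kneighbors(X_major)  # column 0 is the self-match
    rnn_count = np.bincount(idx[:, 1:].ravel(), minlength=len(X_major))
    X_filtered = X_major[rnn_count >= min_rnn]

    # Step 2: select inliers with a one-class SVM on the filtered data.
    ocsvm = OneClassSVM(kernel="rbf", nu=nu, gamma="scale").fit(X_filtered)
    return X_filtered[ocsvm.predict(X_filtered) == 1]
```

On a toy data set with one far-off outlier, step 1 removes the outlier (no other point counts it among its neighbors) and step 2 further thins the sample toward its dense core.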
A Study of Factors Associated with Software Developers Job Turnover
In-Ho Jeon, Sun W. Park, and Yoon-Joo Park
Vol. 21, No. 2, Page: 191 ~ 204
Keywords : software developers, job continuity, IT policy, datamining, human resource management
According to the ‘2013 Performance Assessment Report on the Financial Program’ from the National Assembly Budget Office, the unfilled recruitment ratio of software (SW) developers in South Korea was 25% in the 2012 fiscal year. Moreover, the unfilled recruitment ratio of highly-qualified SW developers reaches almost 80%. This phenomenon is intensified in small and medium enterprises with fewer than 300 employees. Young job-seekers in South Korea increasingly avoid becoming SW developers, and even current SW developers want to change careers, which hinders the national development of the IT industry.
The Korean government has recently recognized the problem and implemented policies to foster young SW developers. Thanks to this effort, it has become easier to find beginning-level SW developers. However, it is still hard for many IT companies to recruit highly-qualified SW developers, because becoming an expert SW developer requires long-term experience. Thus, improving the job continuity intentions of current SW developers is more important than fostering new ones.
Therefore, this study surveyed the job continuity intentions of SW developers and analyzed the factors associated with them. We carried out a survey from September 2014 to October 2014, targeting 130 SW developers working in the IT industry in South Korea. We gathered the respondents' demographic information and personal characteristics, the work environment of the SW industry, and the perceived social position of SW developers.
Afterward, a regression analysis and a decision tree method were performed to analyze the data. These two widely used data mining techniques offer explanatory power and are mutually complementary. We first performed a linear regression to find the important factors associated with the job continuity intention of SW developers. The result showed that the ‘expected age’ up to which a respondent expects to work as a SW developer was the most significant factor associated with the job continuity intention. We suppose that the major cause of this phenomenon is a structural problem of the IT industry in South Korea, which requires SW developers to move from development into management as they are promoted. The ‘motivation’ to become a SW developer and the ‘personality (introverted tendency)’ of a SW developer were also highly important factors associated with the job continuity intention.
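The regression step can be sketched with a minimal ordinary-least-squares significance test. This is an illustration on synthetic data only, not the study's survey data; the variable names and effect sizes are assumptions.

```python
import numpy as np
from scipy import stats

def ols_pvalues(X, y):
    """Fit y = X @ beta by least squares; return coefficients and
    two-sided p-values. X is assumed to include an intercept column."""
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - p)                   # residual variance
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    t = beta / se
    return beta, 2 * stats.t.sf(np.abs(t), df=n - p)

# Synthetic example with 130 respondents (matching the survey's sample size).
rng = np.random.default_rng(1)
n = 130
expected_age = rng.normal(45, 8, n)   # hypothetical survey items
motivation = rng.normal(3, 1, n)
unrelated = rng.normal(0, 1, n)
# Synthetic outcome driven by expected_age and motivation, plus noise.
y = 0.05 * expected_age + 0.4 * motivation + rng.normal(0, 0.5, n)
X = np.column_stack([np.ones(n), expected_age, motivation, unrelated])
beta, pvals = ols_pvalues(X, y)
```

Factors with small p-values (here, the two variables that truly drive the outcome) would be reported as significantly associated with the job continuity intention.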
Next, the decision tree method was performed to extract the characteristics of highly motivated developers and less motivated ones, using the well-known C4.5 algorithm. The results showed that ‘motivation’, ‘personality’, and ‘expected age’ were again important factors influencing the job continuity intention, consistent with the regression analysis. In addition, the ‘ability to learn’ new technology was a crucial factor in the decision rules for job continuity: a person with a high ability to learn new technology tends to work as a SW developer for a longer period of time. The decision rules also showed that the ‘social position’ of SW developers and the ‘prospects’ of the SW industry were minor factors influencing the job continuity intention. On the other hand, the ‘type of employment (regular/non-regular position)’ and the ‘type of company (ordering company/service-providing company)’ did not affect the job continuity intention in either method.
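The decision tree step can be illustrated as follows, using scikit-learn's DecisionTreeClassifier (which implements CART, a close relative of C4.5) on synthetic survey-like data. The feature names, Likert scales, and label rule are illustrative assumptions, not the study's actual data or findings.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(42)
n = 300
# Hypothetical survey items on 1-5 Likert scales, plus an expected age.
motivation = rng.integers(1, 6, n)
learn_skill = rng.integers(1, 6, n)
expected_age = rng.integers(30, 61, n)
X = np.column_stack([motivation, learn_skill, expected_age])
# Synthetic label: high continuity intention if the respondent is both
# motivated and willing to keep learning new technology.
y = ((motivation >= 3) & (learn_skill >= 3)).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
rules = export_text(
    tree, feature_names=["motivation", "learn_skill", "expected_age"]
)
print(rules)  # human-readable decision rules, analogous to C4.5 output
```

Reading off the printed rules shows which thresholds on which features separate the high-continuity group from the low-continuity group, which is the kind of interpretation the study draws from its tree.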
In this research, we examined the job continuity intentions of SW developers actually working at IT companies in South Korea, and analyzed the factors associated with them. These results can be used for human resource management in IT companies when recruiting or fostering highly-qualified SW experts. They can also help to shape SW developer fostering policies and to solve the problem of unfilled recruitment of SW developers in South Korea.
