DIGITAL LIBRARY ARCHIVE
Journal of Intelligence and Information Systems,
Vol. 20, No. 3, September 2014
A Smoothing Data Cleaning based on Adaptive Window Sliding for Intelligent RFID Middleware Systems
DongCheon Shin, Dongok Oh, SeungWan Ryu, and Seikwon Park
Vol. 20, No. 3, Page: 1 ~ 18
10.13088/jiis.2014.20.3.001
Keywords : data cleaning, RFID middleware system, window time slide, tag transition
Abstract
Over the past years, RFID/SN has been an elementary technology in a diversity of applications for ubiquitous environments, especially for the Internet of Things. However, one of the obstacles to widespread deployment of RFID technology is the inherent unreliability of the RFID data streams produced by tag readers. In particular, the problem of false readings, such as lost readings and mistaken readings, needs to be treated by RFID middleware systems, because false readings ultimately degrade the quality of application services through the dirty data delivered by the middleware. As a result, to achieve higher quality of service, an RFID middleware system is responsible for intelligently dealing with false readings and delivering clean data to applications in accordance with the tag reading environment. One popular technique used to compensate for false readings is the sliding window filter. In a sliding window scheme, determining the optimal window size intelligently is a nontrivial and important task for RFID middleware systems seeking to reduce false readings, especially in mobile environments. In this paper, for the purpose of reducing false readings through intelligent window adaptation, we propose a new adaptive RFID data cleaning scheme based on window sliding for a single tag. Unlike previous works based on a binomial sampling model, we introduce weighted averaging. Our insight starts from the need to differentiate past readings from current readings, since more recent readings may indicate tag transitions more accurately. Owing to weighted averaging, our scheme is expected to adapt the window size dynamically and efficiently even for non-homogeneous reading patterns in mobile environments. In addition, we analyze reading patterns in the window and the effects of a decreased window so that a more accurate and efficient decision on window adaptation can be made. With our scheme, RFID middleware systems can be expected to provide applications with cleaner data so that they can ensure the high quality of their intended services.
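The following is a minimal Python sketch of the kind of weighted-average window adaptation the abstract describes, assuming per-epoch binary read results (1 = tag observed, 0 = missed) for a single tag; the decay factor, thresholds, and window-size bounds are illustrative assumptions rather than the authors' actual parameters.

```python
from collections import deque

def weighted_read_rate(window, decay=0.7):
    """Exponentially weighted average of per-epoch read results.
    Recent epochs get larger weights, so a genuine tag transition pulls the
    rate down faster than a uniform (binomial-model) average would."""
    num = den = 0.0
    w = 1.0
    for seen in reversed(window):          # newest epoch first
        num += w * seen
        den += w
        w *= decay
    return num / den if den else 0.0

def adapt_window(window, size, lo=0.3, hi=0.8, min_size=2, max_size=32):
    """Shrink the window when the weighted rate hints at a tag transition,
    grow it when readings look like random drops that need more smoothing."""
    rate = weighted_read_rate(window)
    if rate < lo:
        size = max(min_size, size // 2)    # likely transition: react quickly
    elif rate > hi:
        size = min(max_size, size + 1)     # stable presence: smooth harder
    while len(window) > size:
        window.popleft()                   # discard readings outside the window
    return size

# Example: feed one 0/1 reading per epoch.
window, size = deque(), 8
for reading in [1, 1, 0, 1, 0, 0, 0, 0]:
    window.append(reading)
    size = adapt_window(window, size)
```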

Pareto Ratio and Inequality Level of Knowledge Sharing in Virtual Knowledge Collaboration: Analysis of Behaviors on Wikipedia
Hyun-jung Park, and Kyung-shik Shin
Vol. 20, No. 3, Page: 19 ~ 43
10.13088/jiis.2014.20.3.019
Keywords : Virtual Knowledge Collaboration, Knowledge Sharing, Pareto Ratio, Gini Coefficient, Online Community
Abstract
The Pareto principle, also known as the 80-20 rule, states that roughly 80% of the effects come from 20% of the causes for many events, including natural phenomena. It has been recognized as a golden rule in business, with wide application of findings such as 20 percent of customers accounting for 80 percent of total sales. On the other hand, the Long Tail theory, pointing out that "the trivial many" produce more value than "the vital few," has gained popularity in recent times thanks to a tremendous reduction of distribution and inventory costs through the development of ICT (Information and Communication Technology). This study started with a view to illuminating how these two primary business paradigms, the Pareto principle and the Long Tail theory, relate to the success of virtual knowledge collaboration. The importance of virtual knowledge collaboration is soaring in this era of globalization and virtualization, transcending geographical and temporal constraints. Many previous studies on knowledge sharing have focused on the factors that affect knowledge sharing, seeking to boost individual knowledge sharing and resolve the social dilemma caused by the fact that rational individuals are likely to consume rather than contribute knowledge. Knowledge collaboration can be defined as the creation of knowledge not only by sharing knowledge, but also by transforming and integrating such knowledge. From this perspective of knowledge collaboration, the relative distribution of knowledge sharing among participants can count as much as the absolute amount of individual knowledge sharing. In particular, whether a greater contribution by the upper 20 percent of participants in knowledge sharing enhances the efficiency of overall knowledge collaboration is an issue of interest. This study deals with the effect of this kind of knowledge sharing distribution on the efficiency of knowledge collaboration and is extended to reflect work characteristics. All analyses were conducted based on actual data instead of self-reported questionnaire surveys. More specifically, we analyzed the collaborative behaviors of the editors of 2,978 English Wikipedia featured articles, which are the best quality grade of articles in English Wikipedia. We adopted the Pareto ratio, the ratio of the number of knowledge contributions made by the upper 20 percent of participants to the total number of knowledge contributions made by all participants of an article group, to examine the effect of the Pareto principle. In addition, the Gini coefficient, which represents the inequality of income among a group of people, was applied to reveal the effect of inequality of knowledge contribution. Hypotheses were set up based on the assumption that a higher ratio of knowledge contribution by more highly motivated participants will lead to higher collaboration efficiency, but that if the ratio gets too high, the collaboration efficiency will deteriorate because overall informational diversity is threatened and the knowledge contribution of less motivated participants is discouraged. Cox regression models were formulated for each of the focal variables, Pareto ratio and Gini coefficient, with seven control variables such as the number of editors involved in an article, the average time between successive edits of an article, and the number of sections a featured article has. The dependent variable of the Cox models is the time from article initiation to promotion to the featured article level, indicating the efficiency of knowledge collaboration.
To examine whether the effects of the focal variables vary depending on the characteristics of a group task, we classified the 2,978 featured articles into two categories: academic and non-academic. Academic articles are those that refer to at least one paper published in an SCI, SSCI, A&HCI, or SCIE journal. We assumed that academic articles are more complex, entail more information processing and problem solving, and thus require more skill variety and expertise. The analysis results indicate the following. First, the Pareto ratio and the inequality of knowledge sharing relate in a curvilinear fashion to collaboration efficiency in an online community, promoting it up to an optimal point and undermining it thereafter. Second, the curvilinear effect of the Pareto ratio and the inequality of knowledge sharing on collaboration efficiency is more pronounced for more academic tasks in an online community.
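As a rough illustration of the two focal measures, the short Python sketch below computes a Pareto ratio and a Gini coefficient from per-editor edit counts of a single article; the example counts are hypothetical and the variable names are not taken from the paper.

```python
def pareto_ratio(edit_counts, top_share=0.2):
    """Share of all contributions made by the top `top_share` of editors."""
    counts = sorted(edit_counts, reverse=True)
    k = max(1, round(top_share * len(counts)))
    return sum(counts[:k]) / sum(counts)

def gini(edit_counts):
    """Gini coefficient of the contribution distribution
    (0 = perfectly equal, close to 1 = highly unequal)."""
    xs = sorted(edit_counts)
    n, total = len(xs), sum(xs)
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

edits = [120, 45, 30, 8, 5, 3, 2, 1, 1, 1]   # hypothetical per-editor edit counts
print(pareto_ratio(edits), gini(edits))
```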
System Development for Measuring Group Engagement in the Art Center
Joon Mo Ryu, Il Young Choi, Lee Kwon Choi, and Jae Kyeong Kim
Vol. 20, No. 3, Page: 45 ~ 58
10.13088/jiis.2014.20.3.045
Keywords : Multi-Audience, Arousal, Differential Image, Multi-Arousal, MAEI(Multi-Audience Engagement Index)
Abstract
Korean cultural contents are spreading worldwide as the Korean Wave sweeps across the globe, and such contents stand at the center of that wave. Each country is working to improve its national brand and create high added value through its culture industry. Performance contents are an important factor in arousing audiences in the entertainment industry. Building high arousal, confidence in a product, and a positive public attitude is an important goal for advertisers, and cultural contents are no different: if audiences trust the contents, they will pass information on to the people around them through word of mouth. Accordingly, many researchers have studied how to measure and analyze a person's arousal through statistical surveys, physiological responses, body movement, and facial expression. First, a statistical survey cannot measure each person's arousal in real time, and it is hard to obtain reliable survey results after the audience has already watched the contents. Second, physiological responses require sensors to be set up on each person's chair or in their space, and it is difficult to handle the large amount of information provided by the sensors in real time. Third, body movement is easy to capture with a camera, but it is difficult to set up the experimental conditions, to measure body language, and to interpret its meaning. Lastly, many researchers have studied facial expression, measuring facial expressions, eye tracking, and face pose. Most previous studies on arousal and interest are limited to the reaction of a single person and are difficult to apply to multiple audience members. They use particular methods, for example requiring a well-lit room, but are restricted to one person under special laboratory conditions. We also need to measure arousal with respect to the contents, which is hard to define, and it is not easy to collect audience reactions immediately. Many audience members watch a performance together in a theater, so we suggest a system to measure a multi-audience's reaction in real time during a performance. We use a difference image analysis method for the multi-audience, but it is weak in a dark environment; to overcome the dark environment during recording, an IR camera can capture images in dark areas. In addition, we present a Multi-Audience Engagement Index (MAEI) and an algorithm to calculate it from sound, audience movement, and eye tracking values. The algorithm calculates audience arousal from the mobile survey, the sound value, the audience's reaction, and the audience's eye tracking. To improve the accuracy of the Multi-Audience Engagement Index, we compare it with the mobile survey, and the result is then sent to a reporting system and presented to interested persons. Mobile surveys are easy and fast, and visitors' discomfort can be minimized; additional information can also be provided, which is an advantage of mobile. The mobile application communicates with the database, and real-time information on visitors' attitudes toward the content is stored. The database can provide a different survey each time based on the provided information. Examples of survey items are as follows: Impressive scene, Satisfied, Touched, Interested, Didn't pay attention, and so on. The suggested system consists of three parts: an External Device, a Server, and an Internal Device. The External Device records the multi-audience in the dark with an IR camera and captures the sound signal. We also collect a survey through the mobile application and send the data to the ERD server database.
The Server part contains the contents' data, such as each scene's weight value and the group audience weight index, the camera control program, and the algorithm, and it calculates the Multi-Audience Engagement Index. The Internal Device presents the Multi-Audience Engagement Index through a Web UI, in print, and on a field display monitor. Our system has been test-operated by Mogencelab in the DMC display exhibition hall located in Sangam-dong, Mapo-gu, Seoul, and we are still collecting data from visitors daily. If we can identify audience arousal factors with this system, the findings will be very useful for creating contents.
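A minimal sketch of the difference-image idea behind the proposed index, assuming consecutive grayscale IR frames as NumPy arrays; the pixel threshold and the way the three signals are weighted into an engagement score are illustrative assumptions, since the abstract does not give the actual MAEI formula.

```python
import numpy as np

def movement_score(prev_frame, curr_frame, thresh=25):
    """Fraction of pixels that changed noticeably between two consecutive
    IR frames (2-D uint8 grayscale arrays): a crude audience-movement signal."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return float((diff > thresh).mean())

def engagement_index(movement, sound_level, survey_score, weights=(0.4, 0.3, 0.3)):
    """Toy weighted combination of movement, sound, and mobile-survey signals;
    all inputs are assumed to be normalized to [0, 1]."""
    w_m, w_s, w_q = weights
    return w_m * movement + w_s * sound_level + w_q * survey_score
```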
An Interactive Cooking Video Query Service System with Linked Data
Woo-Ri Park, Kyeong-Jin Oh, Myung-Duk Hong, and Geun-Sik Jo
Vol. 20, No. 3, Page: 59 ~ 76
10.13088/jiis.2014.20.3.059
Keywords : Linked Data, Interaction, Cooking Video, Interactive Video, UI/UX
Abstract
The revolution of smart media such as smartphones, smart TVs, and tablets has made it easy for people to get contents and related information anywhere and anytime. The characteristics of smart media have changed user behavior for watching contents from a passive attitude into an active one. Video is a kind of multimedia resource that is widely used to provide information effectively. People not only watch video contents, but also search for information related to specific objects that appear in the contents. However, people have to use extra views or devices to find this information because existing video contents provide no information through the contents themselves. Therefore, the interaction between user and media is becoming a major concern, and the demand for direct interaction and instant information is increasing. The digital media environment is no longer expected to serve as a one-way information service, which requires the user to search the internet manually for the information they need. To solve this inconvenience, an interactive service is needed to provide an information exchange function between people and video contents, or between people themselves. Recently, many researchers have recognized the importance of the requirements for interactive services, but only a few services provide interactive video, and only with restricted functionality.
Only the cooking domain is chosen for the interactive cooking video query service in this research. Cooking continuously receives a great deal of attention, and by using smart media devices users can easily watch cooking videos. Although, owing to the characteristics of video, cooking videos provide various information such as cooking scenes and explanations for each recipe step, their one-way nature does not allow viewers to interactively obtain more information about particular contents. Cooking video has indeed attracted academic research aiming to study and solve several cooking-related problems. However, only a few studies have focused on interactive services for cooking video, and they are still not sufficient to provide interaction with users.
In this paper, we propose an interactive cooking video query service system with linked data to provide interaction functionalities to users. A linked recipe schema is used to handle the linked data. The linked data approach is applied to construct queries in a systematic manner when users interact with cooking videos. We add some classes, data properties, and relations to the linked recipe schema because the current version of the schema is not sufficient to serve user interaction. A web crawler extracts recipe information from allrecipes.com, and all extracted recipe information is transformed into ontology instances by the instance generator we developed. To provide a query function, hundreds of questions from cooking video web sites such as BBC food, Foodista, and Fine cooking were investigated and analyzed. After analyzing the investigated questions, we summarized them into four categories through question generalization; for this generalization, the questions were clustered into eleven question types. The proposed system provides an environment combining UI (User Interface) and UX (User Experience) that allows users to watch cooking videos while obtaining the necessary additional information through an extra information layer. Users can use the proposed interactive cooking video system in both PC and mobile environments because responsive web design is applied to the proposed system. In addition, the proposed system enables interaction between user and video on various smart media devices by employing linked data to provide information matching the current context. Two methods are used to evaluate the proposed system. First, through a questionnaire-based method, computer system usability is measured by comparing the proposed system with an existing web site. Second, the answer accuracy for user interaction is measured to inspect the information to be offered. The experimental results show that the proposed system receives a favorable evaluation and provides accurate answers for user interaction.
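As a sketch of how linked data can back such queries, the Python snippet below loads a hypothetical recipe graph with rdflib and asks for the ingredients of a dish; the namespace, class, and property names are illustrative stand-ins, not the paper's actual linked recipe schema.

```python
from rdflib import Graph

g = Graph()
g.parse("recipes.ttl", format="turtle")   # assumed local dump of crawled recipe instances

# "Which ingredients does the dish in the current video use?"
# rec:Recipe, rec:name, and rec:hasIngredient are hypothetical schema terms.
query = """
PREFIX rec: <http://example.org/recipe#>
SELECT ?ingredientName WHERE {
    ?recipe a rec:Recipe ;
            rec:name "Bibimbap" ;
            rec:hasIngredient ?ing .
    ?ing rec:name ?ingredientName .
}
"""
for row in g.query(query):
    print(row.ingredientName)
```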
A Methodology for Automatic Multi-Categorization of Single-Categorized Documents
Jin-Sung Hong, Namgyu Kim, and Sangwon Lee
Vol. 20, No. 3, Page: 77 ~ 92
10.13088/jiis.2014.20.3.077
Keywords : Multi-Category, Document Classification, Big Data Analysis, Text Mining, Topic Analysis
Abstract
Recently, numerous documents, including unstructured data and text, have been created due to the rapid increase in the usage of social media and the Internet. Each document is usually assigned a specific category for the convenience of users. In the past, this categorization was performed manually. However, with manual categorization, not only can the accuracy of the categorization not be guaranteed, but the categorization also requires a large amount of time and incurs huge costs. Many studies have been conducted on the automatic creation of categories to overcome the limitations of manual categorization. Unfortunately, most of these methods cannot be applied to categorizing complex documents with multiple topics because they assume that one document can be categorized into only one category. To overcome this limitation, some studies have attempted to categorize each document into multiple categories. However, they are also limited in that their learning process requires training on a multi-categorized document set. These methods therefore cannot be applied to the multi-categorization of most documents unless multi-categorized training sets are provided.
To overcome traditional multi-categorization algorithms' requirement of a multi-categorized training set, we propose a new methodology that can extend the category of a single-categorized document to multiple categories by analyzing the relationships among categories, topics, and documents. First, we find the relationship between documents and topics by using the results of topic analysis on single-categorized documents. Second, we construct a correspondence table between topics and categories by investigating the relationship between them. Finally, we calculate matching scores between each document and multiple categories. A document is classified into a certain category if and only if its matching score is higher than a predefined threshold; for example, a document can be classified into the three categories whose matching scores exceed the predefined threshold. The main contribution of our study is that our methodology can improve the applicability of traditional multi-category classifiers by generating multi-categorized documents from single-categorized documents.
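The thresholding step can be pictured with the short sketch below: given per-document topic proportions and a topic/category correspondence table, matching scores come from a matrix product and every category above the threshold is kept. The threshold value and array shapes are illustrative assumptions.

```python
import numpy as np

def multi_categorize(doc_topic, topic_category, threshold=0.25):
    """doc_topic: (n_docs, n_topics) topic proportions from topic analysis.
    topic_category: (n_topics, n_categories) correspondence table built from
    single-categorized documents. Returns, per document, the indices of every
    category whose matching score reaches the threshold."""
    scores = doc_topic @ topic_category            # document/category matching scores
    return [np.where(row >= threshold)[0].tolist() for row in scores]
```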
Additionally, we propose a module for verifying the accuracy of the proposed methodology. For performance evaluation, we performed intensive experiments with news articles. News articles are clearly categorized by theme, and their use of vulgar language and slang is lower than in other typical text documents. We collected news articles from July 2012 to June 2013. The number of articles varies greatly across categories, because readers have different levels of interest in each category and because events occur with different frequencies in each category. In order to minimize the distortion of the results caused by differing numbers of articles across categories, we extracted 3,000 articles equally from each of eight categories, for a total of 24,000 articles used in our experiments. The eight categories were "IT Science," "Economy," "Society," "Life and Culture," "World," "Sports," "Entertainment," and "Politics."
Using the news articles that we collected, we calculated document/category correspondence scores from topic/category and document/topic correspondence scores. The document/category correspondence score indicates the degree of correspondence of each document to a certain category. As a result, we could present two additional categories for each of 23,089 documents. Precision, recall, and F-score were 0.605, 0.629, and 0.617, respectively, when only the top 1 predicted category was evaluated, whereas they were 0.838, 0.290, and 0.431 when the top 1-3 predicted categories were considered. Interestingly, precision, recall, and F-score varied greatly across the eight categories.
A Study on the Application of Outlier Analysis for Fraud Detection: Focused on Transactions of Auction Exception Agricultural Products
Dongsung Kim, Kitae Kim, Jongwoo Kim, and Steve Park
Vol. 20, No. 3, Page: 93 ~ 108
10.13088/jiis.2014.20.3.093
Keywords : Fraud Detection, Outlier Detection, Auction Exception Products
Abstract
To support business decision making, interest in and efforts to analyze and use transaction data from different perspectives are increasing. Such efforts are not limited to customer management or marketing, but are also used for monitoring and detecting fraudulent transactions. Fraudulent transactions are evolving into various patterns by taking advantage of information technology. To reflect this evolution, there have been many efforts to develop fraud detection methods and advanced application systems in order to improve the accuracy and ease of fraud detection. As a case of fraud detection, this study aims to provide effective fraud detection methods for auction exception agricultural products in the largest Korean agricultural wholesale market. The auction exception products policy exists to complement auction-based trades in the agricultural wholesale market. That is, most trades of agricultural products are performed by auction; however, specific products are designated as auction exception products when the total volume of the product is relatively small, the number of wholesalers is small, or it is difficult for wholesalers to purchase the products. However, the auction exception products policy raises several problems regarding the fairness and transparency of transactions, which calls for fraud detection.
In this study, to generate fraud detection rules, real large-scale agricultural product trade transaction data from 2008 to 2010 in the market were analyzed, comprising more than 1 million transactions and more than 1 billion US dollars in transaction volume. Agricultural transaction data has unique characteristics, such as frequent changes in supply volumes and turbulent time-dependent changes in price. Since this was the first attempt to identify fraudulent transactions in this domain, there was no training data set for supervised learning, so fraud detection rules were generated using an outlier detection approach. We assume that outlier transactions are more likely to be fraudulent than normal transactions. Outlier transactions are identified by comparing the daily average unit price, weekly average unit price, and quarterly average unit price of product items. The quarterly average unit prices of product items of specific wholesalers are also used to identify outlier transactions. The reliability of the generated fraud detection rules was confirmed by domain experts.
To determine whether a transaction is fraudulent or not, the normal distribution and the normalized Z-value concept are applied. That is, the unit price of a transaction is transformed into a Z-value to calculate its occurrence probability when we approximate the distribution of unit prices by a normal distribution. A modified Z-value of the unit price, rather than the original Z-value, is used. The reason is that in the case of auction exception agricultural products, Z-values are influenced by the outlier fraud transactions themselves because the number of wholesalers is small. The modified Z-values are called Self-Eliminated Z-scores because they are calculated excluding the unit price of the specific transaction that is being checked for fraud.
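A minimal sketch of the Self-Eliminated Z-score idea: the statistics are computed from all transactions except the one being checked, so an extreme unit price cannot mask itself. The sample prices and the cut-off of 3 are illustrative assumptions.

```python
import statistics

def self_eliminated_z(prices, i):
    """Z-score of prices[i] against the mean and standard deviation of all
    *other* transactions (leave-one-out), so a single extreme fraud candidate
    does not inflate the statistics it is compared against."""
    others = prices[:i] + prices[i + 1:]
    mu = statistics.mean(others)
    sd = statistics.stdev(others)
    return (prices[i] - mu) / sd if sd > 0 else 0.0

unit_prices = [1200, 1180, 1250, 1210, 5400, 1190]   # hypothetical daily unit prices
suspects = [i for i in range(len(unit_prices))
            if abs(self_eliminated_z(unit_prices, i)) > 3]
```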
To show the usefulness of the proposed approach, a prototype fraud transaction detection system was developed using Delphi. The system consists of five main menus and related submenus. The first functionality of the system is to import transaction databases. The next important functions are for setting up fraud detection parameters; by changing these parameters, system users can control the number of potential fraud transactions. Execution functions provide the fraud detection results found based on the fraud detection parameters. The potential fraud transactions can be viewed on screen or exported as files.
This study is an initial attempt to identify fraudulent transactions in auction exception agricultural products, and many research topics remain. First, the scope of the analysis data was limited by data availability; it is necessary to include more data on transactions, wholesalers, and producers to detect fraudulent transactions more accurately. Next, the scope of fraud transaction detection needs to be extended to fishery products. There are also many possibilities for applying different data mining techniques to fraud detection; for example, a time series approach is a potential technique to apply to the problem. Finally, although outlier transactions are currently detected based on the unit prices of transactions, it is also possible to derive fraud detection rules based on transaction volumes.
Export Control System based on Case Based Reasoning: Design and Evaluation
Woneui Hong, Uihyun Kim, Sinhee Cho, Sansung Kim, Mun Yong Yi, and Donghoon Shin
Vol. 20, No. 3, Page: 109 ~ 131
10.13088/jiis.2014.20.3.109
Keywords : Expert System, Export Control System, Nuclear Nonproliferation and Control, Case Based Reasoning
Abstract
As the demand for nuclear power plant equipment continuously grows worldwide, the importance of handling nuclear strategic materials is also increasing. While the number of cases submitted for the export of nuclear-power commodities and technology is dramatically increasing, preadjudication (or prescreening, to be simple) of strategic materials has so far been done by experts with long experience and extensive field knowledge. However, there is a severe shortage of experts in this domain, not to mention that it takes a long time to develop an expert. Because human experts must manually evaluate all the documents submitted for export permission, the current practice of nuclear material export control is neither time-efficient nor cost-effective. To alleviate the problem of relying only on costly human experts, our research proposes a new system designed to help field experts make their decisions more effectively and efficiently. The proposed system is built upon case-based reasoning, which in essence extracts key features from existing cases, compares these features with the features of a new case, and derives a solution for the new case by referencing similar cases and their solutions. Our research proposes a framework for a case-based reasoning system, designs a case-based reasoning system for the control of nuclear material exports, and evaluates the performance of alternative keyword extraction methods (fully automatic, fully manual, and semi-automatic). A keyword extraction method is an essential component of the case-based reasoning system, as it is used to extract the key features of the cases. The fully automatic method was implemented using TF-IDF, which is a widely used de facto standard method for representative keyword extraction in text mining. TF (Term Frequency) is based on the frequency count of a term within a document, showing how important the term is within that document, while IDF (Inverse Document Frequency) is based on the infrequency of the term within the document set, showing how uniquely the term represents the document. The results show that the semi-automatic approach, which is based on the collaboration of machine and human, is the most effective solution regardless of whether the human is a field expert or a student majoring in nuclear engineering. Moreover, we propose a new approach for computing nuclear document similarity along with a new framework for document analysis. The proposed nuclear document similarity algorithm considers both document-to-document similarity (α) and document-to-nuclear-system similarity (β) in order to derive the final score (γ) for deciding whether the presented case concerns strategic material or not. The final score (γ) represents the document similarity between past cases and the new case. The score is induced not only by exploiting conventional TF-IDF, but also by utilizing a nuclear system similarity score, which takes the context of the nuclear system domain into account. Finally, the system retrieves the top-3 documents stored in the case base that are considered the most similar cases with regard to the new case, and provides them with a degree of credibility. With this final score and the credibility score, it becomes easier for a user to see which documents in the case base are more worth looking up, so that the user can make a proper decision at relatively lower cost. The evaluation of the system has been conducted by developing a prototype and testing it with field data.
The system workflows and outcomes have been verified by field experts. This research is expected to contribute to the growth of the knowledge service industry by proposing a new system that can effectively reduce the burden of relying on costly human experts for the export control of nuclear materials and that can be considered a meaningful example of a knowledge service application.
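A rough sketch of how the final score γ could combine the two similarity components, using scikit-learn's TF-IDF for the document-to-document part (α) and an externally supplied document-to-nuclear-system score (β); the 50/50 weighting and variable names are assumptions, since the abstract does not state how α and β are mixed.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_similar_cases(case_texts, new_text, system_sim, weight=0.5, top_k=3):
    """alpha: TF-IDF cosine similarity between the new case and each stored case.
    system_sim (beta): one document-to-nuclear-system similarity value per stored
    case. gamma mixes the two and the top_k most similar cases are returned."""
    tfidf = TfidfVectorizer().fit_transform(case_texts + [new_text])
    alpha = cosine_similarity(tfidf[-1], tfidf[:-1]).ravel()
    gamma = weight * alpha + (1 - weight) * np.asarray(system_sim, dtype=float)
    top = np.argsort(gamma)[::-1][:top_k]
    return [(int(i), float(gamma[i])) for i in top]
```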
The Knowledge and Human Resources Distribution System for University-Industry Cooperation
Yoon-Joo Park
Vol. 20, No. 3, Page: 133 ~ 149
10.13088/jiis.2014.20.3.133
Keywords : intellectual resources, university, university-industry cooperation, knowledge distribution systems, human resources
Abstract
One of the main purposes of universities is to create new intellectual resources that will increase social value. These intellectual resources include academic research papers, lecture notes, patents, and creative ideas produced by both professors and students. However, intellectual resources in universities are often not distributed to actual users or companies; moreover, they are often not even systematically managed inside the universities. Therefore, it is almost impossible for companies to access and utilize the knowledge created by university students and professors. Thus, the current level of knowledge sharing between universities and industries is very low.
This wastes high-quality intellectual and human resources and leads to a considerable social loss in modern society. In the 21st century, creative ideas are the key growth drivers for many industries. Many globally leading companies such as Fedex, Dell, and Facebook have established their business models based on innovative ideas created by university students in undergraduate courses. This indicates that unconventional ideas from younger generations can create new growth engines for companies and immensely increase social value.
Therefore, this paper suggests a new platform for the distribution of intellectual properties through university-industry cooperation. The suggested platform distributes the intellectual resources of universities to industries and has the following characteristics. First, it distributes not only the intellectual resources, but also the human resources associated with the knowledge. Second, it diversifies the types of compensation for utilizing the intellectual properties, which is beneficial for both university students and companies; for example, it extends the conventional monetary rewards to non-monetary rewards such as opportunities to participate in internship programs or job interviews. Third, it suggests a new knowledge map based on the relationships between keywords, so that the various types of intellectual properties can be searched efficiently.
In order to design the system platform, we surveyed 120 potential users to obtain the system requirements. First, 50 university students and 30 professors in humanities and social sciences departments were surveyed. We asked what types of intellectual resources they produce per year, how many intellectual resources they produce, whether they are willing to distribute their intellectual properties to industry, and what types of compensation they expect in return. Second, 40 entrepreneurs, who are potential consumers of the intellectual properties of universities, were surveyed. We asked what types of intellectual resources they want, what types of compensation they are willing to provide in return, and what main factors they consider important when searching for intellectual properties.
The implications of this survey are as follows. First, entrepreneurs are willing to utilize intellectual properties created by both professors and students, and they are more interested in creative ideas from universities than in academic papers or educational class materials. Second, non-monetary rewards, such as participation in internship programs or job interviews, can be appropriate types of compensation to replace monetary rewards. The survey results showed that the majority of university students were willing to provide their intellectual properties without any monetary reward in order to build industrial networks with companies. The entrepreneurs, in turn, were willing to provide non-monetary compensation and hoped to build networks with university students for recruiting. Thus, non-monetary rewards are mutually beneficial for both sides. Third, classifying the intellectual resources of universities by academic area is inappropriate for efficient searching, and the various types of intellectual resources cannot be categorized by a single standard.
Based on these survey results, this paper suggests a new platform for the distribution of intellectual materials and human resources through university-industry cooperation. The suggested platform contains four major components, namely a knowledge schema, a knowledge map, a system interface, and a GUI (Graphic User Interface), and the paper presents the overall system architecture.
Effects of firm strategies on customer acquisition of Software as a Service (SaaS) providers: A mediating and moderating role of SaaS technology maturity
SeongWook Chae, and Sungbum Park
Vol. 20, No. 3, Page: 151 ~ 171
10.13088/jiis.2014.20.3.151
Keywords : SaaS, differentiation strategy, low-cost strategy, SaaS technology maturity, customer acquisition
Abstract
Firms today seek management effectiveness and efficiency by utilizing information technologies (IT). Numerous firms outsource specific information systems functions to cope with their shortage of information resources or IT experts, or to reduce their capital costs. Recently, Software-as-a-Service (SaaS), a new type of information system, has become one of the most powerful outsourcing alternatives. SaaS is software deployed as a hosted service and accessed over the internet. It is grounded in the ideas of on-demand, pay-per-use, and utility computing and is now being applied to support the core competencies of clients in areas ranging from individual productivity to vertical industry and e-commerce.
In this study, therefore, we seek to quantify the value that SaaS has for business performance by examining the relationships among firm strategies, SaaS technology maturity, and the business performance of SaaS providers. We begin by drawing on prior literature on SaaS, technology maturity, and firm strategy. SaaS technology maturity is classified into three phases: application service providing (ASP), Web-native application, and Web-service application. Firm strategies are operationalized as the low-cost strategy and the differentiation strategy. Finally, we consider customer acquisition as the business performance measure. In this sense, the specific objectives of this study are as follows. First, we examine the relationships between customer acquisition performance and both the low-cost strategy and the differentiation strategy of SaaS providers. Second, we investigate the mediating and moderating effects of SaaS technology maturity on those relationships.
For this purpose, the study collects data from SaaS providers and their lines of applications registered in the database of CNK (Commerce net Korea) in Korea, using a questionnaire administered by a professional research institution. The unit of analysis in this study is the SBU (strategic business unit) within the software provider. A total of 199 SBUs are used for analyzing and testing our hypotheses. With regard to the measurement of firm strategy, we use three measurement items for the differentiation strategy, namely application uniqueness (whether an application aims to differentiate within just one or a small number of target industries), supply channel diversification (whether the SaaS vendor has diversified its supply chain), and the number of specialized experts, and two items for the low-cost strategy, namely the subscription fee and the initial set-up fee. We employ a hierarchical regression analysis technique for testing the moderation effects of SaaS technology maturity and follow Baron and Kenny's procedure for determining whether firm strategies affect customer acquisition through technology maturity.
The empirical results revealed that, firstly, when the differentiation strategy is applied to attain business performance such as customer acquisition, the effects of the strategy are moderated by the technology maturity level of the SaaS provider. In other words, securing a higher level of SaaS technology maturity is essential for higher business performance. For instance, given that firms implement application uniqueness or distribution channel diversification as a differentiation strategy, they can acquire more customers when their level of SaaS technology maturity is higher rather than lower. Secondly, the results indicate that pursuing a differentiation strategy or a low-cost strategy effectively helps SaaS providers obtain customers, which means that continuously differentiating their service from others or lowering their service fees (subscription fee or initial set-up fee) is helpful for business success in terms of acquiring customers. Lastly, the results show that the level of SaaS technology maturity mediates the relationship between the low-cost strategy and customer acquisition. That is, based on our research design, customers usually perceive the real value of a low subscription fee or initial set-up fee only through the SaaS service provided by the vendor, and this, in turn, affects their decision on whether to subscribe or not.
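The moderation and mediation tests described above might look like the following statsmodels sketch; the data file and column names are hypothetical stand-ins for the SBU-level measures, not the study's actual items.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical SBU-level data: customer acquisition, strategy scores, maturity.
df = pd.read_csv("saas_sbu.csv")  # columns: acquisition, differentiation, low_cost, maturity

# Moderation: hierarchical regression, then add the strategy x maturity interaction.
base = smf.ols("acquisition ~ differentiation + low_cost + maturity", data=df).fit()
moderated = smf.ols("acquisition ~ differentiation * maturity + low_cost", data=df).fit()

# Mediation (Baron & Kenny steps for low-cost strategy -> maturity -> acquisition).
step1 = smf.ols("acquisition ~ low_cost", data=df).fit()             # X -> Y
step2 = smf.ols("maturity ~ low_cost", data=df).fit()                # X -> M
step3 = smf.ols("acquisition ~ low_cost + maturity", data=df).fit()  # X and M -> Y
print(moderated.summary())
```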