In the research community, the estimation of the scholarly impact of an individual is based on either citation-based indicators or network centrality measures. Network-based centrality measures such as degree, closeness...
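A well-known example of the citation-based indicators mentioned above is the h-index (the largest h such that the author has h papers with at least h citations each). A minimal sketch, using an illustrative citation list:

```python
def h_index(citations):
    """Return the h-index: the largest h such that at least
    h papers have at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:  # this paper still supports an h of `rank`
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4
```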
Processing large volumes of data is difficult, and the MapReduce framework has emerged as a solution to this problem. The framework can be used to analyze and process vast amounts of data. It does this by...
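The MapReduce programming model can be sketched in miniature as three phases over a toy word-count job (the canonical example; the documents here are illustrative, and a real deployment distributes these phases across a cluster):

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    # Map: emit an intermediate (word, 1) pair for each word.
    return [(word, 1) for word in document.split()]

def shuffle(pairs):
    # Shuffle: group intermediate values by key.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(key, values):
    # Reduce: combine all values observed for one key.
    return key, sum(values)

docs = ["the cat sat", "the cat ran"]
intermediate = chain.from_iterable(map_phase(d) for d in docs)
counts = dict(reduce_phase(k, v) for k, v in shuffle(intermediate).items())
print(counts)  # {'the': 2, 'cat': 2, 'sat': 1, 'ran': 1}
```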
Association Rule Mining (ARM) is one of the most significant and active research areas in data mining. Recently, the Whale Optimization Algorithm (WOA) has been successfully applied in the field of data mining; however, it easily...
The purpose of this paper is to explore the emotional composition and psychological characteristics of social media users, and the consistency between their information behavior and attitudes, and to provide a reference for online public...
An autonomous acoustic system based on two bottom-moored hydrophones, a two-input audio board, and a small single-board computer was installed at the entrance of a marina to detect entering and exiting boats. Windowed time-lagged...
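The core of a windowed time-lagged cross-correlation is estimating the lag at which two hydrophone windows best align. A minimal sketch with synthetic signals (the pulse shapes and lag range are illustrative, not from the paper):

```python
def best_lag(x, y, max_lag):
    """Cross-correlate two equal-length windows over lags in
    [-max_lag, max_lag]; return the lag maximizing the correlation.
    A positive lag means y is a delayed copy of x."""
    n = len(x)
    top_lag, top_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = sum(x[i] * y[i + lag]
                    for i in range(n)
                    if 0 <= i + lag < n)
        if score > top_score:
            top_lag, top_score = lag, score
    return top_lag

# A pulse arriving 3 samples later on the second hydrophone.
h1 = [0, 0, 1, 2, 1, 0, 0, 0, 0, 0]
h2 = [0, 0, 0, 0, 0, 1, 2, 1, 0, 0]
print(best_lag(h1, h2, max_lag=5))  # 3
```

In a deployed system this inter-hydrophone delay, combined with the sensor geometry, gives the bearing of the sound source.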
Mapping image-based object textures to ASCII characters can be a new variation on visual cryptography. Naor and Shamir proposed visual cryptography, a new dimension of information security, which is a secret-sharing...
This article aims to recognize Odia handwritten digits using gradient-based feature extraction techniques and a Clonal Selection Algorithm (CSA)-based multilayer artificial neural network (MANN) classifier. For the extraction of...
The classification of data streams has become a significant and active research area. The principal characteristics of data streams are the large volume of arriving data, its high arrival speed and rate, and the change of...
Recent years have witnessed the rise of group recommender systems (GRSs) in many e-commerce and tourism applications, such as Booking.com, Traveloka.com, and Amazon. One of the most pressing problems in GRSs is to...
With the advent of deep neural networks, document classification research has aimed to predict the overall sentiment polarity of a text. Various deep learning algorithms have been employed in current studies to improve...
This paper presents Versus, which is the first automatic method for generating comparison tables from knowledge bases of the Semantic Web. For this purpose, it introduces the contextual reference level to evaluate whether a...
This article introduces a service that helps provide context and an explanation for the outlier score given to any network flow record selected by the analyst. The authors propose a service architecture for the delivery of...
Open Domain Question Answering (ODQA) over a large-scale corpus of documents (e.g., Wikipedia) is a key challenge in computer science. Although Transformer-based language models such as BERT have shown an ability to outperform...
One potential approach for crime analysis that has shown promising results is data analytics, particularly descriptive and predictive techniques. Data analytics can explore former criminal incidents seeking hidden correlations...
The paper explains the need for a standard way of defining modelling constructs from different enterprise modelling languages and proposes a template for defining enterprise modelling constructs in a way that facilitates...
A code smell is an inherent property of software that results in design problems, making the software hard to extend, understand, and maintain. In the literature, several tools are used to detect code smells that are informally...
Ever since Pawlak introduced the concept of rough sets, it has attracted many researchers and scientists from various fields of science and technology, particularly algebraists, for whom it presented a gold mine to explore the...
This article compares the performance of different Partial Distance Search-based (PDS) kNN classifiers on a benchmark Kyoto 2006+ dataset for Network Intrusion Detection Systems (NIDS). These PDS classifiers are named based on...
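The Partial Distance Search idea behind these kNN classifiers can be sketched as follows: while accumulating a squared Euclidean distance dimension by dimension, abandon the candidate as soon as the running sum exceeds the best distance found so far. A minimal sketch with illustrative data (the PDS variants in the article add further refinements):

```python
def partial_distance(x, y, threshold):
    """Accumulate squared distance term by term, abandoning early
    once the running sum reaches the current best (threshold)."""
    total = 0.0
    for a, b in zip(x, y):
        total += (a - b) ** 2
        if total >= threshold:
            break  # this candidate cannot be the nearest neighbor
    return total

def nearest_neighbor_pds(query, candidates):
    best_idx, best_dist = -1, float("inf")
    for idx, cand in enumerate(candidates):
        d = partial_distance(query, cand, best_dist)
        if d < best_dist:
            best_idx, best_dist = idx, d
    return best_idx

data = [[0.0, 0.0], [5.0, 5.0], [1.0, 1.0]]
print(nearest_neighbor_pds([0.9, 1.1], data))  # 2
```

The early abandon never changes the answer, because a partial sum of squares can only grow; it only saves the remaining per-dimension work.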
A common model used in addressing today's overwhelming amounts of data is the OLAP Cube. The OLAP community has proposed several cube algebras, although a standard has yet to be adopted. This study focuses on a recent...
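The core CUBE operation underlying these algebras aggregates a measure over every subset of the dimensions. A minimal sketch over illustrative sales rows (real OLAP engines compute this incrementally, not by brute force):

```python
from collections import defaultdict
from itertools import combinations

def cube(rows, dims, measure):
    """Aggregate `measure` over every subset of `dims` (the CUBE operator).
    The key () holds the grand total; a full key holds one detail cell."""
    result = defaultdict(float)
    for row in rows:
        for r in range(len(dims) + 1):
            for subset in combinations(dims, r):
                key = tuple((d, row[d]) for d in subset)
                result[key] += row[measure]
    return dict(result)

sales = [
    {"region": "EU", "year": 2020, "amount": 10.0},
    {"region": "EU", "year": 2021, "amount": 20.0},
    {"region": "US", "year": 2020, "amount": 5.0},
]
agg = cube(sales, ["region", "year"], "amount")
print(agg[()])                   # 35.0 (grand total)
print(agg[(("region", "EU"),)])  # 30.0 (EU roll-up)
```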
Program slicing is a technique for decomposing programs based on the control flow and data flow among the lines of code in a program. Conditioned slicing is a generalization of static and dynamic slicing. A variable...
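Once control and data dependences are known, a backward slice is just a transitive closure over them. A minimal sketch on a toy dependence map (line numbers and dependences here are illustrative, not the paper's algorithm):

```python
def backward_slice(criterion, deps):
    """Collect every line the slicing criterion transitively depends on
    via control/data dependences, including the criterion itself."""
    slice_set, worklist = set(), [criterion]
    while worklist:
        line = worklist.pop()
        if line in slice_set:
            continue
        slice_set.add(line)
        worklist.extend(deps.get(line, []))
    return sorted(slice_set)

# line -> lines it depends on, for a toy four-line program:
# line 4 uses values defined at lines 2 and 3, which both use line 1
deps = {4: [2, 3], 3: [1], 2: [1]}
print(backward_slice(4, deps))  # [1, 2, 3, 4]
```

Conditioned slicing further prunes this set by discarding dependences that cannot arise under a given condition on the input.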
One of the extensions of the basic rough set model introduced by Pawlak in 1982 is the notion of rough sets on fuzzy approximation spaces, which is based upon a fuzzy proximity relation defined over a universe. As is well known, an...
Dialectal Arabic and Modern Standard Arabic lack sufficient standardized language resources to enable Arabic language processing tasks, despite this being an active research area. This work addresses this issue...
Software testing effort and cost can be mitigated by appropriate automatic defect prediction models. So far, many automatic software defect prediction (SDP) models have been developed using machine learning methods. However, it is...
The ability to predict patients with a long length of stay (LOS) can aid a hospital's admission management, maintain effective resource utilization, and provide a high quality of inpatient care. Hospital discharge data...
Software must evolve to remain useful and functional. However, the quality of evolving software may degrade due to improper incorporation of changes. Quality can be monitored by analyzing trends in software...