INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY (Editorial Office, Manbir Singh) Sun, 28 Jan 2024 05:56:19 +0000 A NEW ROBUST HOMOMORPHIC ENCRYPTION SCHEME BASED ON PAILLIER, RESIDUE NUMBER SYSTEM AND EL-GAMAL <div class="page" title="Page 1"> <div class="layoutArea"> <div class="column"> <p>With the arrival of cloud computing, cryptographic research has shifted its focus to encryption schemes that can withstand cyber-attacks. The widely used public key encryption system designed by Taher El Gamal, based on the discrete logarithm problem, has long been deployed in sectors such as internet security and e-voting systems. However, considering the potential data security threats in cloud computing, cryptologists are developing new and more robust cryptographic algorithms. To this end, a new robust homomorphic encryption scheme based on Paillier, the Residue Number System (RNS), and El Gamal (PRE) is proposed in this paper, which is expected to be highly effective and resistant to cyber-attacks. The proposed scheme is composed of a three-layer encryption process and a three-layer decryption process, making it robust. It employs an existing RNS moduli set {2^n + 1, 2^n, 2^n − 1, 2^(n−1) − 1}, passing data through the Paillier encryption process for forward conversion and then through the El Gamal cryptosystem to encrypt it. The decryption process reverses these steps, starting from El Gamal and ending with a reverse conversion over the same moduli set using the Chinese Remainder Theorem (CRT). 
The simulation results show that the proposed scheme outperforms similar existing schemes in terms of robustness, making it more secure, although this comes at the cost of increased execution time in the same comparison.</p> </div> </div> </div> Peter Awonnatemi Agbedemnab, Abdul Somed Safianu, and Abdul-Mumin Selanwiah Salifu Copyright (c) 2024 Peter Awonnatemi Agbedemnab, Abdul Somed Safianu, and Abdul-Mumin Selanwiah Salifu Wed, 17 Apr 2024 00:00:00 +0000 Convolutional Neural Networks for Deep Sleep Detection Based on Data Augmentation <p>Sleep is a necessary process that individuals undergo daily for physical recovery, and the proportion of deep sleep among the sleep stages is a critical aspect of that recovery. Convolutional Neural Networks (CNNs) have shown remarkable success in automatically identifying deep sleep stages through the analysis of electroencephalogram (EEG) signals. This article introduces three data augmentation techniques, namely time shifting, amplitude scaling, and noise addition, to enhance the diversity and features of the data. These techniques aim to enable machine learning models to extract features from various aspects of sleep EEG data, thus improving model accuracy. Three deep learning models, DeepConvNet, ShallowConvNet, and EEGNet, are introduced for the identification of deep sleep. The proposed methods were evaluated on the public Sleep-EDF dataset. Experimental results demonstrate that the enhanced dataset formed by applying the three data augmentation techniques achieved higher accuracy with all deep learning models compared to the original dataset. 
This highlights the feasibility and effectiveness of these methods in deep sleep detection.</p> Ruixuan Chen, Linfeng Sui, Mo Xia, Jinsha Liu, Tao Zhang, Jianting Cao Copyright (c) 2024 Ruixuan Chen, Linfeng Sui, Mo Xia, Jinsha Liu, Tao Zhang, Jianting Cao Sun, 28 Jan 2024 00:00:00 +0000 On Defining Smart Cities using Transformer Neural Networks <p><span style="font-weight: 400;">Cities worldwide are rapidly adopting “smart” technologies, transforming urban life. Despite this trend, a universally accepted definition of “smart city” remains elusive. Past efforts to define it have not yielded a consensus, as evidenced by the numerous definitions in use. In this paper, we endeavored to create a new “compromise” definition that should resonate with most experts previously involved in defining the concept, and we aimed to validate one of the existing definitions. We reviewed 60 definitions of smart cities from industry, academia, and various relevant organizations, employing transformer-based generative AI and semantic text analysis to reach this compromise. We proposed a semantic similarity measure as an evaluation technique, which can more generally be used to compare different smart city definitions and assess their uniqueness or resemblance. Our methodology employed generative AI to analyze the existing definitions of smart cities and generate a list of potential new composite definitions. Each of these new definitions was then tested against the pre-existing individual definitions we had gathered, using cosine similarity as our metric. This process identified the smart city definitions with the highest average cosine similarity, semantically positioning them as the closest, on average, to all 60 individual definitions selected.</span></p> Andrei Khurshudov Copyright (c) 2024 Andrei Khurshudov Sun, 28 Jan 2024 00:00:00 +0000
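The Paillier layer of the PRE scheme described in the first abstract relies on Paillier's additive homomorphism: multiplying two ciphertexts yields an encryption of the sum of their plaintexts. The abstract gives no implementation details, so the following is a minimal textbook sketch, not the authors' code; the function names and the toy primes are illustrative assumptions (real deployments use keys of roughly 2048 bits).

```python
import math
import random

def paillier_keygen(p, q):
    """Generate a Paillier key pair from two primes (toy sizes here)."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1                               # standard simple choice of generator
    L = lambda u: (u - 1) // n
    mu = pow(L(pow(g, lam, n * n)), -1, n)  # modular inverse via three-arg pow
    return (n, g), (lam, mu)

def encrypt(pk, m):
    """Encrypt m with fresh randomness r coprime to n."""
    n, g = pk
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    L = lambda u: (u - 1) // n
    return (L(pow(c, lam, n * n)) * mu) % n

pk, sk = paillier_keygen(293, 433)
c1, c2 = encrypt(pk, 15), encrypt(pk, 27)
product = (c1 * c2) % (pk[0] ** 2)   # multiplying ciphertexts adds plaintexts
assert decrypt(pk, sk, product) == 42
```

The homomorphic property is what lets computation happen on encrypted data in the cloud without exposing the plaintexts.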
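The RNS forward conversion and CRT reverse conversion with the moduli set {2^n + 1, 2^n, 2^n − 1, 2^(n−1) − 1} stated in the same abstract can be sketched as below. Note that this set is pairwise coprime only for even n (for odd n, 3 divides both 2^n + 1 and 2^(n−1) − 1), so n = 4 is used here; the function names are illustrative assumptions.

```python
from math import prod

def forward_convert(x, moduli):
    """RNS forward conversion: integer -> vector of residues."""
    return [x % m for m in moduli]

def reverse_convert(residues, moduli):
    """CRT reverse conversion: residues -> integer (moduli pairwise coprime)."""
    M = prod(moduli)
    total = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        total += r * Mi * pow(Mi, -1, m)  # pow(..., -1, m) = modular inverse
    return total % M

n = 4                                                  # pairwise coprime for even n
moduli = [2**n + 1, 2**n, 2**n - 1, 2**(n - 1) - 1]    # [17, 16, 15, 7]
residues = forward_convert(1234, moduli)
assert reverse_convert(residues, moduli) == 1234
```

Any integer below the dynamic range (the product of the moduli, 28560 here) round-trips exactly through this conversion pair.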
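The three augmentation techniques named in the sleep-detection abstract (time shifting, amplitude scaling, and noise addition) can be sketched as follows. The paper does not publish parameter values, so the shift range, gain bounds, and noise level below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

def time_shift(x, max_shift):
    """Circularly shift the signal by a random number of samples."""
    return np.roll(x, rng.integers(-max_shift, max_shift + 1))

def amplitude_scale(x, low=0.8, high=1.2):
    """Multiply the whole epoch by a random gain."""
    return x * rng.uniform(low, high)

def add_noise(x, sigma=0.05):
    """Add zero-mean Gaussian noise to every sample."""
    return x + rng.normal(0.0, sigma, size=x.shape)

epoch = rng.standard_normal(3000)   # one 30 s EEG epoch at 100 Hz (Sleep-EDF rate)
augmented = add_noise(amplitude_scale(time_shift(epoch, max_shift=250)))
assert augmented.shape == epoch.shape   # the sleep-stage label is unchanged
```

Because each transform preserves the epoch's length and label, the augmented copies can simply be appended to the training set to enlarge it.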
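The cosine-similarity evaluation in the smart-city paper can be illustrated with a toy example. The authors use transformer embeddings of 60 collected definitions; here a simple bag-of-words vector and two stand-in definitions are substituted purely so the sketch is self-contained, and all names are assumptions.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def embed(text, vocab):
    """Toy bag-of-words vector; the paper uses transformer embeddings instead."""
    words = text.lower().split()
    return np.array([words.count(w) for w in vocab], dtype=float)

definitions = [   # stand-ins for the 60 collected definitions
    "a city that uses technology to improve urban services",
    "an urban area that applies digital technology to improve quality of life",
]
candidate = "a city that applies technology to improve urban services and life"
vocab = sorted({w for d in definitions + [candidate] for w in d.lower().split()})
scores = [cosine_similarity(embed(candidate, vocab), embed(d, vocab))
          for d in definitions]
average_similarity = sum(scores) / len(scores)  # highest average picks the compromise
assert 0.0 < average_similarity <= 1.0
```

Ranking candidate composite definitions by this average identifies the one semantically closest, on average, to the whole collection.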