Program and Abstract for 2024 International Symposium on Information Theory and Its Applications

Time (Taipei) / Rooms: Room 1 (IB101), Room 2 (IB201), Room 3 (IB202), Room 4 (IB301), IBB

Monday, November 11

09:00 am-10:00 am  Mo-AM-P: Plenary Talk 1 (Room 1, IB101)
10:20 am-12:00 pm  Mo-AM-1-1: Coding Theory 1 (Room 2, IB201); Mo-AM-1-2: Quantum Information Theory (Room 3, IB202); Mo-AM-1-3: Statistical Learning 1 (Room 4, IB301)
02:00 pm-03:00 pm  Mo-PM-P: Plenary Talk 2 (Room 1, IB101)
03:30 pm-04:50 pm  Mo-PM-1-1: Quantum Insertion/Deletion Errors (Room 2, IB201); Mo-PM-1-2: AI Security and Privacy (Room 3, IB202); Mo-PM-1-3: Communication Theory (Room 4, IB301)

Tuesday, November 12

09:00 am-10:00 am  Tu-AM-P: Plenary Talk 3 (Room 1, IB101)
10:20 am-12:00 pm  Tu-AM-1-1: Coding for Storage/Group Testing (Room 2, IB201); Tu-AM-1-2: Information Theory and Security (Room 3, IB202); Tu-AM-1-3: Statistical Learning 2 (Room 4, IB301)
01:50 pm-03:10 pm  Tu-PM-1-1: Integrated Circuit Design for Information Theory (Room 2, IB201); Tu-PM-1-2: Cryptography (Room 3, IB202); Tu-PM-1-3: Signal Processing (Room 4, IB301)
03:30 pm-04:50 pm  Tu-PM-2-1: Code Based Cryptography/Wiretap Coding (Room 2, IB201); Tu-PM-2-2: Biometrics and Authentication (Room 3, IB202); Tu-PM-2-3: Source Coding (Room 4, IB301)

Wednesday, November 13

09:00 am-10:00 am  We-AM-Poster: Poster Session (IBB)
10:10 am-11:50 am  We-AM-1-1: Coding Theory 2 (Room 2, IB201); We-AM-1-2: Symmetric-Key Encryption (Room 3, IB202); We-AM-1-3: Shannon Theory (Room 4, IB301)

Monday, November 11

Monday, November 11 9:00 - 10:00

Mo-AM-P: Plenary Talk 1

Theory and Examples of One-Bit Secret Sharing
Hiroki Koga (University of Tsukuba, Japan)
Room 1 (IB101)

Secret sharing is an important cryptographic primitive. In a secret sharing scheme, a dealer generates n shares from a secret S and distributes the n shares to n respective participants. Under the t-out-of-n threshold access structure, S is correctly recovered from any t or more shares, while no information on S is revealed from fewer than t shares. It is well known that Shamir's scheme, which uses a random polynomial of degree at most t-1, realizes the t-out-of-n threshold access structure. In Shamir's scheme, the secret takes values in the same alphabet as the n shares.
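To make the threshold mechanism concrete, here is a minimal sketch of Shamir's t-out-of-n scheme over a prime field; the modulus and parameters are toy choices for illustration, not anything specified in the talk.

    # Minimal sketch of Shamir's t-out-of-n secret sharing over GF(P).
    # Toy parameters; a real deployment would choose P to fit the secret alphabet.
    import random

    P = 2**31 - 1  # prime modulus (toy choice)

    def share(secret, t, n):
        """Split `secret` into n shares; any t of them recover it."""
        coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
        # Share i is the degree-(t-1) polynomial evaluated at x = i.
        return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
                for i in range(1, n + 1)]

    def reconstruct(shares):
        """Lagrange interpolation at x = 0 recovers the secret."""
        s = 0
        for xi, yi in shares:
            num, den = 1, 1
            for xj, _ in shares:
                if xj != xi:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            s = (s + yi * num * pow(den, -1, P)) % P
        return s

    shares = share(12345, t=3, n=5)
    assert reconstruct(shares[:3]) == 12345  # any 3 shares suffice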

In my talk, we focus on nonconventional secret sharing schemes for the case where the secret S is a single bit. Visual cryptography schemes, which originate from Naor and Shamir's paper in 1994, can be regarded as one-bit secret sharing schemes. Recently, Bogdanov, Guo and Komargodski gave a new lower bound on the share size by developing a new theory for one-bit secret sharing.

We first give interesting examples of one-bit secret sharing schemes. Next, we give a construction of a one-bit secret sharing scheme realizing the t-out-of-p access structure, where p is an arbitrary prime number. In this scheme we construct a function in p variables that corresponds to a multiple of the difference between the conditional probabilities of the shares given S=0 and given S=1. Surprisingly, the discrete Fourier transform of this function takes only two values, 0 and 1. We use this property to guarantee the security of the one-bit secret S against any t-1 shares. We also discuss an extension of this scheme to multi-valued secrets and its relationship to orthogonal arrays.

Monday, November 11 10:20 - 12:00

Mo-AM-1-1: Coding Theory 1

Room 2 (IB201)
Improving Convergence Speed Of Neural Polar Decoder Using Weighted Loss Function
Yi-Wei Lu (National Tsing Hua University, Taiwan); Shan Lu and Takaya Yamazato (Nagoya University, Japan); Yeong-Luh Ueng (National Tsing Hua University, Taiwan)

In recent years, neural network decoding of polar codes, such as neural belief propagation (BP), has been introduced. These methods use deep learning to transform the factor graph into a neural network model by unfolding the decoding iterations, thereby enhancing the accuracy of traditional decoding processes. However, current prevalent methodologies for loss function calculation only take into account the output of the final layer. In our analysis, we found that when calculating the loss function using only the output from the last layer, the convergence speed of the decoder significantly decreases, especially when the number of unfolded iterations is higher. In this paper, we incorporate the output of all iterations into the loss function in the original neural BP structure. Additionally, we optimize the loss function by assigning different weights to losses at different iterations. As a result, the weighted loss function not only provides a lower Bit Error Rate (BER) compared to the original neural BP decoder at lower SNR, but also accelerates convergence speed.
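As a hedged illustration of the idea, the sketch below computes a weighted sum of per-iteration binary cross-entropies over the outputs of an unfolded decoder; the linear weight ramp and the LLR sign convention are assumptions for illustration, not the paper's exact choices.

    # Weighted multi-iteration loss for an unfolded decoder (illustrative).
    import numpy as np

    def weighted_multi_iteration_loss(llrs_per_iter, bits):
        """llrs_per_iter: list of T arrays of output LLRs (one per unfolded
        iteration, with LLR = log P(0)/P(1)); bits: transmitted bits in {0,1}."""
        T = len(llrs_per_iter)
        weights = np.arange(1, T + 1, dtype=float)  # later iterations weigh more
        weights /= weights.sum()
        total = 0.0
        for w, llr in zip(weights, llrs_per_iter):
            p1 = np.clip(1.0 / (1.0 + np.exp(llr)), 1e-12, 1 - 1e-12)
            bce = -(bits * np.log(p1) + (1 - bits) * np.log(1 - p1)).mean()
            total += w * bce
        return total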

BP Decoding and SGRAND for Partially Permuted Factor Graphs of Polar Codes
Kohei Yoshida and Hiroshi Kamabe (Gifu University, Japan); Shan Lu (Nagoya University, Japan)

Polar codes are provably capacity-achieving error-correcting codes, suitable not only for error correction but also for source coding, constrained coding, and multiple access channels. Many decoding schemes have been proposed for polar codes. Among these, CRC-aided successive cancellation list decoding (CA-SCL) is known to have almost the best error-correcting performance. Belief propagation (BP) techniques have also been explored for decoding polar codes. Recently, BP decoding with partially permuted factor graphs (PPFG) has shown good performance, although it falls slightly short of CA-SCL decoding. Guessing Random Additive Noise Decoding (GRAND) is a general decoding scheme for linear codes. While Soft-GRAND (SGRAND) offers excellent performance for polar codes, it suffers from high time complexity. This paper proposes a novel decoding scheme that combines BP decoding with PPFG and SGRAND. We show that our scheme achieves performance close to CA-SCL decoding.

Recursive Algorithm for Maximum Likelihood Decoding of Non-Binary Codes
Yingtao Zhou and Yuichi Kaji (Nagoya University, Japan)

The maximum likelihood (ML) decoding is essential for error-correcting codes, but the ML decoding problem is NP-hard in general, which suggests that no single universal ML decoding algorithm works efficiently for every code. Researchers have therefore developed many ML decoding algorithms with different characteristics so that they can be used complementarily. This study proposes an ML decoding algorithm for general non-binary linear codes. The algorithm proposed in this paper is an extension and successor of the trellis-based recursive ML decoding algorithm that was studied for binary codes. The proposed algorithm inherits the advantages of the preceding algorithm and is enhanced so that it can be used for non-binary codes. This paper introduces the structure of codes that helps realize recursive decoding, explains the proposed algorithm, and reports that the algorithm indeed works for several non-binary codes.

Probabilistic Density Evolution Analysis of IRSA
Jin Wei Liu and Po-Ning Chen (National Yang Ming Chiao Tung University, Taiwan); Shin-Lin Shieh (National Tsing Hua University, Taiwan); Yu-Chih Huang (National Yang Ming Chiao Tung University, Taiwan)

In this paper, by considering the effect of error correcting codes in addition to collision resolution, a novel probabilistic density-evolution analysis of the irregular repetition slotted ALOHA (IRSA) is proposed. Simulation results confirm that the proposed extension analysis can accurately recover the efficiency of the iterative successive interference cancellation (iSIC) scheme for a satellite Internet-of-Things (IoT) system endowed with an error correcting code, and therefore can be used to determine the corresponding optimal degree distributions.
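For orientation, the sketch below implements the baseline density-evolution recursion for plain IRSA on the collision channel (in the spirit of Liva's 2011 analysis); the paper's probabilistic extension, which additionally models the error-correcting code, is not reproduced here.

    # Baseline IRSA density evolution on the collision channel (illustrative).
    import math

    def irsa_plr(Lmb, G, iters=1000):
        """Lmb: dict degree -> probability (repetition distribution Lambda);
        G: offered load (users/slot). Returns approximate packet-loss rate."""
        Lp1 = sum(d * c for d, c in Lmb.items())              # Lambda'(1)
        lam = lambda x: sum(d * c * x**(d - 1) for d, c in Lmb.items()) / Lp1
        q = 1.0                                               # P(user-side edge unresolved)
        for _ in range(iters):
            p = 1.0 - math.exp(-G * Lp1 * q)                  # slot-side update
            q = lam(p)                                        # user-side update
        return sum(c * p**d for d, c in Lmb.items())          # P(all replicas lost)

    # Example distribution from the IRSA literature: 0.5x^2 + 0.28x^3 + 0.22x^8.
    print(irsa_plr({2: 0.5, 3: 0.28, 8: 0.22}, G=0.8))        # ~0 below threshold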

ARQ Schemes for Broadcast Channel with Random Feedback Delay
Hikaru Shioda and Toshihiro Niinomi (Tokyo City University, Japan)

We discuss a model that treats the feedback delay as a random variable; that is, the feedback delay differs for each packet and receiver. Although window-based algorithms have been used in conventional studies, they have not been applied to this model. In this study, we propose window-based algorithms for this model, which extends the range of applications of index codes. We also find that there is a trade-off between throughput and acceptance time in the proposed algorithms.

Mo-AM-1-2: Quantum Information Theory

Room 3 (IB202)
Generalized trace inequalities related to q uncertainty relations
Kenjiro Yanagi (Yamaguchi University, Japan)

In 2015 we obtained non-hermitian extensions of Heisenberg-type and Schrödinger-type uncertainty relations for the generalized metric adjusted skew information or generalized metric adjusted correlation measure, and gave the results of Dou-Du as corollaries. In this paper we define a generalized quasi-metric adjusted q skew information for two different generalized states and obtain the corresponding uncertainty relation. The result is applied to the inequalities related to fidelity and trace distance for two different generalized states, which were given by Audenaert et al. and by Powers-Størmer. Finally we state q uncertainty relations for the generalized metric adjusted q skew information or generalized metric adjusted q correlation measure.

Entanglement detection based on the generalized Landau-Pollak uncertainty relation for (N, M)-POVMs
Masakazu Yoshida (Osaka Sangyo University, Japan)

We introduce a criterion for detecting entanglement based on an uncertainty relation. Our focus is on the generalized Landau-Pollak uncertainty relation and the generalized quantum measurements, (N, M)-POVMs, used for quantum state tomography. We introduce a generalized Landau-Pollak uncertainty relation for (N, M)-POVMs, which is tighter than other inequalities of its kind. Using this relation, we introduce the criterion for detecting entanglement, in which a relation between the certainty of a bipartite system and that of its subsystem plays an important role. We also demonstrate the ability to detect both the maximally entangled state and the isotropic state under certain parameters.

On Optimal Ternary Signal Constellation Maximizing Capacity under Energy Constraints
Shion Kitamura (Aichi Prefectural University, Japan); Tiancheng Wang (Kanagawa University, Japan); Souichi Takahira (Meijo University, Japan); Tsuyoshi Usuda (Aichi Prefectural University, Japan)

For efficient use of energy in quantum communication, it is desirable to use discrete signals that approach the classical capacity achieved by a continuous input of coherent states. The analytical treatment of a classical communication channel requires discretization, and among such discrete signals, it is better to use signals as close as possible to the quantum channel capacity. For binary discretization, BPSK coherent-state signals have been proven to be optimal. However, for multiary discretization with three or more signals, the optimal states are not known. In this study, we consider the case of ternary discretization and numerically show that 3PSK is the optimal signal constellation within two dimensions.

Property of classical reliability function for BPSK signals coded by simplex codes with SRM
Ken Masaki (Meijo University, Japan); Tiancheng Wang (Kanagawa University, Japan); Shogo Usami and Souichi Takahira (Meijo University, Japan); Tsuyoshi Usuda (Aichi Prefectural University, Japan)

There is a gap between the properties of the quantum and classical reliability functions: the upper bound of the quantum reliability function based on the quantum optimum receiver diverges, whereas that of the classical one based on the classical optimum receiver does not, even if the same coherent signal is used.
We are interested in the origin of this gap. In this paper, we investigate the upper bound of the reliability function for quantum signals coded by simplex codes when using the square-root measurement (SRM) as quantum collective decoding. The reliability function is much steeper with respect to the rate than in the uncoded case. This result supports the divergence of the upper bound of the quantum reliability function observed in the previous study.

Advance Sharing with Ogawa et al.'s Ramp Quantum Secret Sharing Scheme for Prime-Dimensional Quantum Systems
Satoshi Masumori and Ryutaroh Matsumoto (Tokyo Institute of Technology, Japan)

The ramp quantum secret sharing scheme proposed by Ogawa et al. has the highest possible coding rate for a threshold-type access structure. On the other hand, in some quantum secret sharing schemes, it is known that some shares can be distributed to participants before a secret is given to the dealer. However, it has been unclear whether some shares can be distributed before a secret is given in Ogawa et al.'s scheme. In this paper, we propose a method to distribute some shares before a secret is given in Ogawa et al.'s scheme, and then determine a necessary and sufficient condition on the sets of shares that can be distributed before a secret is given.

Mo-AM-1-3: Statistical Learning 1

Room 4 (IB301)
Out-of-Distribution Detection using Maximum Entropy Coding and Generative Networks
Mojtaba Abolfazli, Mohammad Amirani, Anders Høst-Madsen, June Zhang and Andras Bratincsak (University of Hawaii, USA)

Given a default distribution \(P\) and a set of test data \(x^M=\{x_1, x_2, \ldots, x_M\}\), this paper seeks to answer the question of whether it is likely that \(x^M\) was generated by \(P\). For discrete distributions, the definitive answer is in principle given by Kolmogorov-Martin-Löf randomness. In this paper we seek to generalize this to continuous distributions. We consider a set of statistics \(T_1(x^M), T_2(x^M), \ldots\). To each statistic we associate its maximum entropy distribution and with this a universal source coder. The maximum entropy distributions are subsequently combined to give a total codelength, which is compared with \(-\log P(x^M)\). We show that this approach satisfies a number of theoretical properties.

For real-world data, \(P\) is usually unknown. We transform the data into a standard distribution in the latent space using a bidirectional generative network and use maximum entropy coding there. We compare the resulting method to other methods that also use generative neural networks to detect anomalies. In most cases, our results show better performance.

Bounding Excess Minimum Risk via Rényi's Divergence
Ananya S Omanwar, Fady Alajaji and Tamás Linder (Queen's University, Canada)

Given finite-dimensional random vectors \(\boldsymbol{Y}\), \(\boldsymbol{X}\), and \(\boldsymbol{Z}\) that form a Markov chain in that order (\(\boldsymbol{Y}\to \boldsymbol{X}\to \boldsymbol{Z}\)), we derive Rényi divergence based upper bounds on the excess minimum risk, where \(\boldsymbol{Y}\) is a (target) vector that is to be estimated from an observed (feature) vector \(\boldsymbol{X}\) or its (stochastically) degraded version \(\boldsymbol{Z}\). We define the excess minimum risk as the difference between the minimum expected loss in estimating \(\boldsymbol{Y}\) from \(\boldsymbol{X}\) and the minimum expected loss in estimating \(\boldsymbol{Y}\) from \(\boldsymbol{Z}\). We obtain a family of bounds which generalize the bounds developed by Györfi et al. (2023), which are expressed in terms of Shannon's mutual information. Our bounds are similar to the bounds by Modak et al. (2021) obtained in the context of the generalization error of learning algorithms, but unlike the latter they do not involve fixed sub-Gaussian parameters and therefore hold for more general joint distributions of \(\boldsymbol{Y}\), \(\boldsymbol{X}\), and \(\boldsymbol{Z}\). We also provide an example with Bernoulli random variables where the Rényi divergence based upper bounds are tighter than the mutual information bounds.
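For reference, the order-\(\alpha\) Rényi divergence underlying such bounds is, in the discrete case,

\[ D_\alpha(P\|Q) = \frac{1}{\alpha-1}\,\log \sum_x P(x)^{\alpha} Q(x)^{1-\alpha}, \qquad \alpha\in(0,1)\cup(1,\infty), \]

and it converges to the Kullback-Leibler divergence as \(\alpha\to 1\), consistent with the bounds above generalizing the mutual information based ones.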

A Bound for Learning Lossless Source Coding with Online Learning
Anders Høst-Madsen and Mohammad Amirani (University of Hawaii, USA); Narayana Prasad Santhanam (University of Hawaii at Manoa, USA)

This paper develops bounds for learning lossless source coding under the PAC (probably approximately correct) framework. The paper considers iid sources with online learning: first the coder learns the data structure from training sequences; when presented with a test sequence for compression, it continues to learn from and adapt to the test sequence. The results show, not surprisingly, that there is little gain from online learning when the training sequence is much longer than the test sequence. But if the test sequence is longer than the training sequence, there is a significant gain. Coders for online learning have a somewhat surprising structure: the training sequence is used to estimate a confidence interval for the distribution, and the coding distribution is found through a prior distribution over this interval.

Bayesian Decision-Theoretic Prediction with Ensemble of Meta-Trees for Classification Problems
Naoki Ichijo (Waseda University, Japan); Ryota Maniwa (KDDI Corporation, Japan); Yuta Nakahara, Koshi Shimada and Toshiyasu Matsushima (Waseda University, Japan)

Decision tree algorithms are among the most popular methods in machine learning. However, most decision tree algorithms do not assume a stochastic model behind the data. On the other hand, a meta-tree was recently proposed as a stochastic model with a tree structure, and the prediction under the assumption of a meta-tree is made in the manner of Bayesian decision theory. Although the optimal prediction can be calculated when the meta-tree is known, an approximation is necessary when the meta-tree is unknown, because of the marginalization over all possible meta-trees. In this paper, we propose an approximation method in which a subset of meta-trees is sequentially constructed and the prediction is made by weighting the meta-trees. For the approximation, we clarify the approaches for constructing the subset and for making predictions with weighted meta-trees. We also examine the effectiveness of these approaches in an experiment using synthetic data. In addition, we conduct an experiment on benchmark data to confirm the performance of the proposed method.

Variational Bayesian Methods for a Tree-Structured Stick-Breaking Process Mixture of Gaussians by Application of the Bayes Codes for Context Tree Models
Yuta Nakahara (Waseda University, Japan)

The tree-structured stick-breaking process (TS-SBP) mixture model is a non-parametric Bayesian model that can represent tree-like hierarchical structures among the mixture components. For TS-SBP mixture models, only a Markov chain Monte Carlo (MCMC) method has been proposed, and no variational Bayesian (VB) method has been proposed. In general, MCMC methods are computationally more expensive than VB methods, so a large computational cost is required to learn the TS-SBP mixture model. In this paper, we propose a learning algorithm with a smaller computational cost for the TS-SBP mixture of Gaussians by using a VB method under an assumption of finite tree width and depth. In constructing such a VB method, the main challenge is the efficient calculation of a sum over all possible trees. To solve this challenge, we utilize a subroutine in the Bayes coding algorithm for context tree models. We confirm the computational efficiency of our VB method through experiments on a benchmark dataset.

Monday, November 11 2:00 - 3:00

Mo-PM-P: Plenary Talk 2

A Journey in Information Theory: From Second-Order Rates to Common Information
Vincent Y. F. Tan (National University of Singapore, Singapore)
Room 1 (IB101)

In this talk, I will discuss my journey in information theory. The story starts from my initial interest in establishing second-order coding rates for various network information theory problems including the Slepian-Wolf distributed lossless source coding problem and various (simple) variants of multiple access and interference channels. These led to my interest in using Rényi resolvability to establish second-order rates in problems involving equivocations such as the wiretap channel. The above studies eventually inspired me to the study of various common information quantities and the resolution of some open problems therein.

Monday, November 11 3:30 - 4:50

Mo-PM-1-1: Quantum Insertion/Deletion Errors

Room 2 (IB201)
Pure States in Quantum Deletion Error-Correction
Manabu Hagiwara (Chiba University, Japan)

This paper discusses the harnessing of pure states in the study of quantum deletion. Quantum deletion is a recently prominent error model for quantum error-correction codes, and pure states are a class of quantum states widely used in the study of other error models. This study reveals that, under specific encoding and decoding operations, deletion error-correction for pure states extends to deletion error-correction for mixed states. It is hoped that this paper deepens the understanding of quantum deletion errors and contributes to the advancement of research on quantum deletion error-correction codes.

Decoding Algorithm Correcting Single-Insertion Plus Single-Deletion for Non-binary Quantum Codes
Ken Nakamura and Takayuki Nozaki (Yamaguchi University, Japan)

In this paper, we assume an error such that a single insertion occurs and then a single deletion occurs. Under such an error model, this paper provides a decoding algorithm for non-binary quantum codes constructed by Matsumoto and Hagiwara.

Insertion Correcting Algorithm for Quantum Deletion Correcting Codes Based on Quantum Reed-Solomon Codes
Koki Sasaki (Yamaguchi University & Dept. of Informatics, Japan); Takayuki Nozaki (Yamaguchi University, Japan)

Research on quantum insertion or deletion error correcting codes has become active in recent years. In 2023, Hagiwara constructed a binary quantum code that corrects multiple deletion errors and has a more flexible code rate than conventional ones. The contribution of this study is to provide an insertion correcting algorithm for the code constructed by Hagiwara.

Quantum Deletion/Insertion Errors in Optical Qubits due to Synchronization Error
Masaki Takekoshi and Shion Kitamura (Aichi Prefectural University, Japan); Tiancheng Wang (Kanagawa University, Japan); Tsuyoshi Usuda (Aichi Prefectural University, Japan)

In recent years, studies on errors occurring in quantum deletion/insertion channels, which pose practical challenges, have been advancing, and several codes that correct these errors are being discussed. However, there are cases in which the current channel model cannot be applied in specific situations, such as when coherent-state qubits are used as the fundamental qubit. In this paper, we generalize the conventional model of synchronization errors for coherent-state qubits and their superposition states. Furthermore, we quantitatively investigate the difference between the conventional model and our more realistic model and point out the limitations in applications of the former.

Mo-PM-1-2: AI Security and Privacy

Room 3 (IB202)
Generative Adversarial Networks for Imbalanced Dataset Intrusion Detection in Software-Defined Networking
S. M. Shamim, Muhammad Bisri Musthafa, Samsul Huda, Yuta Kodera and Yasuyuki Nogami (Okayama University, Japan)

Software-defined networks (SDN) have become prominent technologies in recent times owing to their centralized network management, flexibility, and rapidity. The centralized structure of the SDN architecture may introduce vulnerabilities and threats, which can affect normal users through resource depletion, decreased internet speeds, and memory consumption on controllers and switches. An efficient intrusion detection system (IDS) is required for actively monitoring and identifying malicious activities or potential threats within SDN networks. Current machine learning techniques in IDS often face challenges when dealing with imbalanced datasets, which can bias model performance toward the dominant class, causing inadequate detection of minority-class instances such as anomalies or intrusions. Moreover, a large number of features in the dataset increases computational cost and may adversely affect the model's performance. This work presents a deep learning based technique, generative adversarial networks (GAN), to generate synthetic data for balancing imbalanced datasets in IDS, which helps improve detection performance, especially for minority classes. A statistical chi-square test is also used to select the most significant features, which enhances model performance and decreases both training and testing time. We evaluate the model's performance using multiple machine learning algorithms, including Naive Bayes (NB), Extra Trees (ET), Random Forest (RF), and XGBoost (XGB). Our evaluation demonstrates improved accuracy and reduced training and testing times across these algorithms. Notably, XGB achieves the highest accuracy of 0.99.
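A hedged sketch of the chi-square feature-selection step described above, using scikit-learn; the feature count k=20 and the synthetic stand-in data are placeholder assumptions, not the paper's dataset.

    # Chi-square feature selection for an IDS-style dataset (illustrative).
    import numpy as np
    from sklearn.feature_selection import SelectKBest, chi2

    rng = np.random.default_rng(0)
    X = rng.integers(0, 10, size=(1000, 40))   # stand-in for flow features (non-negative)
    y = rng.integers(0, 2, size=1000)          # stand-in for normal/attack labels

    selector = SelectKBest(score_func=chi2, k=20)  # keep the 20 most significant features
    X_reduced = selector.fit_transform(X, y)
    print(X_reduced.shape, selector.get_support(indices=True))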

\(k^m\)-Anonymization Meets Differential Privacy under Sampling
Masaya Kobayashi and Atsushi Fujioka (Kanagawa University, Japan); Koji Chida (Gunma University, Japan); Akira Nagai and Kan Yasuda (NTT, Japan)

Various models for evaluating anonymity have been proposed so far. Among them, \(k\)-anonymity is widely known as a typical anonymity measure, guaranteeing that at least \(k\) individuals in a database have the same values.
Unfortunately, it is difficult to create highly useful anonymized data satisfying \(k\)-anonymity for high-dimensional data because of the curse of dimensionality. Several approaches relaxing \(k\)-anonymity, such as \(k^m\)-anonymity, have been proposed to overcome this problem. On the other hand, differential privacy is another privacy protection metric, developed by Dwork et al. However, the full protection index for differential privacy, i.e., the level of noise that satisfies the desired privacy, has not been clarified. This paper shows relationships between \(k^m\)-anonymity and differential privacy under sampling, a weakened notion of differential privacy proposed by Li et al. Numerical experiments are then performed to give relations among the parameters of \(k^m\)-anonymity and differential privacy under sampling. These experiments also show relationships between \(k\)-anonymity and \(k^m\)-anonymity, as \(k\)-anonymity is a special case of \(k^m\)-anonymity in some sense.
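To fix intuition, here is a toy check of \(k^m\)-anonymity: every combination of values on any \(m\) quasi-identifier columns must occur at least \(k\) times. The table and parameters below are illustrative assumptions.

    # Toy k^m-anonymity check (quasi-identifier columns only).
    from collections import Counter
    from itertools import combinations

    def is_km_anonymous(rows, k, m):
        """rows: list of equal-length tuples, one per record."""
        n_cols = len(rows[0])
        for cols in combinations(range(n_cols), m):
            counts = Counter(tuple(r[c] for c in cols) for r in rows)
            if min(counts.values()) < k:
                return False
        return True

    table = [("30s", "F", "Osaka"), ("30s", "F", "Osaka"),
             ("30s", "M", "Tokyo"), ("30s", "M", "Tokyo")]
    print(is_km_anonymous(table, k=2, m=2))   # True
    print(is_km_anonymous(table, k=2, m=3))   # True; m = #columns is plain k-anonymity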

Simple Privacy-Preserving Federated Learning with Different Encryption Keys
Haruto Yokomitsu and Ryo Nojima (Ritsumeikan University, Japan); Lihua Wang (National Institute of Information and Communications Technology (NICT), Japan)

Federated learning is a method in which multiple participants collaboratively train a model using machine learning techniques, such as deep learning, while keeping each participant's data private through the use of a central server. However, there are cases in which participants' input information is leaked to the server. To solve this problem, several protocols have been proposed. In particular, Phong et al. (IEEE TIFS 2018) proposed a protocol that protects participant information against the server using homomorphic encryption. Later, Park et al. (ICTC 2022) proposed an improved protocol in which each client has a different secret key for the homomorphic encryption. In this paper, we show that Park et al.'s protocol has some weaknesses and propose a secure protocol based on the previously proposed ones.
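As a hedged sketch of the underlying idea, the snippet below aggregates encrypted model updates with the additively homomorphic Paillier scheme via the python-paillier (phe) library; the single shared keypair is a deliberate simplification (the protocols discussed above use per-client keys and more involved interactions).

    # Additively homomorphic aggregation of model updates (illustrative).
    from functools import reduce
    from operator import add
    from phe import paillier

    public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

    client_updates = [[0.10, -0.20], [0.30, 0.05]]   # toy gradient vectors
    encrypted = [[public_key.encrypt(v) for v in upd] for upd in client_updates]

    # The server adds ciphertexts without learning any individual update.
    aggregated = [reduce(add, col) for col in zip(*encrypted)]
    print([private_key.decrypt(c) for c in aggregated])   # ~[0.40, -0.15]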

Privacy-preserving Machine Learning for Generic Data
Tomoshi Yagishita and Atsuko Miyaji (Osaka University, Japan)

With the increasing use of personal data in recent years, privacy protection has become increasingly important. Differential privacy (DP) is a technique that protects privacy by adding noise. While DP manages privacy on a central server, local differential privacy (LDP) manages privacy locally, which is preferable in terms of user privacy protection. Previous work proposed SUPM, a framework for applying LDP to machine learning, together with WALDP, a privacy mechanism that treats all attributes uniformly regardless of data type; however, WALDP is specialized for continuous data. In this study, we propose a privacy-preserving machine learning method that can be applied to databases containing discrete values.
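For orientation, the textbook LDP mechanism for discrete values is randomized response, sketched below; this is a classical baseline, not the mechanism proposed in the paper.

    # Randomized response: an epsilon-LDP mechanism for a discrete domain.
    import math, random

    def randomized_response(value, domain, epsilon):
        """Report the true value with prob. e^eps / (e^eps + |domain| - 1),
        otherwise a uniformly random other value."""
        e = math.exp(epsilon)
        if random.random() < e / (e + len(domain) - 1):
            return value
        return random.choice([v for v in domain if v != value])

    reports = [randomized_response("A", ["A", "B", "C"], epsilon=1.0)
               for _ in range(10000)]
    print(reports.count("A") / len(reports))   # ~ e/(e+2) = 0.576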

Mo-PM-1-3: Communication Theory

Room 4 (IB301)
DPC-Coded MIMO Multiuser Communications using RIS with Discrete Phase Shifts
Chia-cheng Wu (National Yang Ming Chiao Tung University, Taiwan); Hsiao-feng Francis Lu (Taiwan)

A novel design of an RIS-assisted MIMO multiuser communication system using dirty paper coding (DPC) is presented in this paper. The design is applicable to any numbers of transmit and receive antennas and to any number of users, and requires only discrete phase shifts for the RIS elements. For any given phase configuration of the RIS, the proposed design always achieves the corresponding channel capacity. The proposed framework involves two alternating optimization steps. Initially, we optimize the input covariance matrices of the DPC encoders to maximize the sum-rate of downlink communication while keeping the RIS coefficient matrix fixed. Subsequently, we employ the alternating direction method of multipliers to identify optimal discrete RIS phases that maximize an upper bound on the sum-rate, with the DPC covariance matrices fixed. Simulation results show that the proposed design is highly robust and delivers near-optimal sum-rate performance using only quad-phase shifts for the RIS elements.

Throughput Analysis of SIC-Based Two-Device Slotted ALOHA with Feedback Over Nakagami-m Fading Channels
Daiki Fukui and Yuhei Takahashi (Doshisha University, Japan); Guanghui Song (Xidian University, China); Tomotaka Kimura (Doshisha University, Japan); Zilong Liu (University of Essex, United Kingdom (Great Britain)); Jun Cheng (Doshisha University, Japan)

Throughput analysis for successive interference cancellation-based two-device slotted ALOHA with feedback is studied over Nakagami-m fading channels. Explicit expressions for the state transition probabilities are derived for a Markov process, thus facilitating the computation of the throughput. Through optimization of the transmission probability, it is shown that the maximum throughput is achieved for a given code rate. Simulations verify the analytical results.

Modulation Scheme of Delayed Bit-Interleaved Coded Modulation for Continuous Transmission
Gou Hosoya (Osaka Sangyo University, Japan)

In this study, we focus on delayed bit-interleaved coded modulation (DBICM). Unlike bit-interleaved coded modulation (BICM), DBICM can achieve the coded modulation capacity.
The difference between DBICM and BICM lies in the transmission of multiple codewords. However, the decoding result of the previous codeword is used for the next decoding, so bad information may be fed back if the decoding fails, which leads to deterioration of the overall system. In the DBICM, common sequences, which are shared between the transmitter and the receiver, are used at the early and termination stages of transmission. By determining the common sequences and delay time slots, the DBICM capacity of the entire system can be specified. To ensure that the DBICM capacity for each codeword is higher than the BICM capacity, the common sequences and delay time slots of DBICM are determined accordingly for the amplitude phase modulation and quadrature phase amplitude modulation.

Golay complementary sequences of all sequence lengths
Ying Li (Yuan Ze University, Taiwan); Shu-Jui Yang (ASML Holding N. V., Taiwan)

Two sequences whose aperiodic autocorrelation functions sum to form a perfect impulse constitute a Golay complementary sequence pair, with each sequence being a Golay complementary sequence. These sequences have many applications in wireless communication, ultrasound, and radar systems. We show that the paraunitary construction proposed by Budišin and Spasojević can be used to create 8-QAM Golay complementary sequences for all lengths where BPSK and QPSK Golay complementary sequences were previously unknown. Through recursive and paraunitary constructions, Golay complementary sequences can be generated for all sequence lengths. Furthermore, this result enables the construction of 8-QAM unitary matrices of all even orders, expanding the potential applications of Golay complementary sequences.
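The defining property is easy to check numerically; the sketch below verifies that the aperiodic autocorrelations of the classical length-2 BPSK Golay pair sum to a perfect impulse (the paper's 8-QAM constructions are not reproduced here).

    # Verify the Golay complementary property for a small BPSK pair.
    import numpy as np

    def aperiodic_acf(seq):
        """Aperiodic autocorrelation at shifts 0..len(seq)-1."""
        n = len(seq)
        return np.array([np.sum(seq[:n - u] * np.conj(seq[u:])) for u in range(n)])

    a = np.array([1, 1])     # classical Golay pair of length 2
    b = np.array([1, -1])
    print(aperiodic_acf(a) + aperiodic_acf(b))   # [4, 0]: impulse of height 2N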

Tuesday, November 12

Tuesday, November 12 9:00 - 10:00

Tu-AM-P: Plenary Talk 3

A Journey in belief propagation: From practical LDPC decoders to near-maximum likelihood Reed-Solomon decoders
Yeong-Luh Ueng (National Tsing Hua University, Taiwan)
Room 1 (IB101)

In this talk, I will present an overview of several results that I have gradually developed over the last decade for message passing scheduling in belief propagation (BP) decoders for both low-density and high-density parity-check codes. Low-density parity-check (LDPC) codes, because of BP decoding, have become one of the most popular coding schemes for both communication and storage systems. Hardware implementation widely employs several fixed scheduling techniques, including layered and shuffled scheduling, to enhance convergence rate and error-rate performance. Although BP is very successful in LDPC codes, it performs poorly when applied to high-density parity-check codes like Reed-Solomon (RS). Recently, we applied the Nested-Polling Residual Belief Propagation (NP-RBP) method to the high-density Tanner graph of RS codes. The NP-RBP decoding approach, like the popular adaptive BP (ABP) decoding approach, involves adaptation in a binary parity-check matrix based on the reliability of variable nodes, and is able to provide an error rate performance close to the maximum-likelihood (ML) bound for short codes. For long codes, we use a bit-flipping (BF) technique to correct a subset of errors in the most reliable variable nodes, thereby improving error-rate performance. When the proposed NP-RBP-BF decoding is applied to the (255, 239) RS code, it can provide a gain of about 0.4 dB compared to the ABP decoding and the performance gap to the ML bound can be narrowed to about 0.25 dB at a frame error rate of 2 × 10^{-3}.

Tuesday, November 12 10:20 - 12:00

Tu-AM-1-1: Coding for Storage/Group Testing

Room 2 (IB201)
Private Information Delivery from Coded Storage against Byzantine and Eavesdropping Attacks
Koki Nakano (University of Hyogo, Japan); Jun Kurihara (University of Hyogo & Zettant Inc., Japan); Toshiaki Tanaka (University of Hyogo, Japan)

Private information delivery (PID) from coded storage is the problem of delivering a message to a receiver without revealing the locations of the servers storing encoded pieces of the message. Existing research focuses only on keeping the server locations secret from the receiver and does not consider other types of security against storage servers, namely, the security of stored messages against Byzantine servers and eavesdropping servers. In this paper, we first define the security properties against these attackers in PID: 1) \(t\)-Byzantine resistance and 2) \(\mu\)-message confidentiality, where \(t\) and \(\mu\) are the maximum numbers of Byzantine servers and eavesdropping servers that can be tolerated, respectively. We also introduce an explicit scheme guaranteeing these properties with no deterioration of the PID's server-location privacy. The scheme can be viewed as a pre-coding of messages using a maximum rank distance (MRD) code, and hence it can be applied to any PID scheme from coded storage. Furthermore, we characterize the maximum possible \(t\) and \(\mu\) in terms of the parameters of the underlying MRD code, and clarify the rate degradation introduced by these properties.

Aggregable Generalized Deduplication
Riki Aoshima (University of Hyogo, Japan); Jun Kurihara (University of Hyogo & Zettant Inc., Japan); Toshiaki Tanaka (University of Hyogo, Japan)

Considering an edge-cloud environment in which large amounts of data are uploaded from IoT devices through an edge node towards the cloud, this paper investigates a method for the edge node to aggregate data streams compressed by Generalized Deduplication (GD). The simplest way is to decode the GD-compressed streams, aggregate the raw streams, and re-encode them into a single GD-compressed stream. However, this involves a large computational complexity for aggregation due to the decoding and re-encoding of the linear codes underlying GD. From this observation, this paper presents a novel aggregation method, called Aggregable GD (AGD). AGD is designed to aggregate multiple GD streams into a single AGD stream without the decoding and re-encoding operations of GD, and it removes duplicated information among GD streams. This paper also shows that AGD involves a smaller computational complexity for data aggregation than the ordinary simple scheme. Furthermore, through a preliminary computer simulation using the Hamming code as the underlying linear code of GD, we demonstrate that AGD performs comparably to the ordinary scheme in terms of compression rate.

One Bit-Flipping/Insertion/Deletion Correcting Code for Substrings of Binary Circular String
Takahiro Ota (Senshu University, Japan); Akiko Manada (Nagaoka University of Technology, Japan)

Compression by Substring Enumeration (CSE) is a lossless data compression algorithm, and various versions of CSE have been proposed. In CSE encoding, the substrings of a given fixed length and their frequencies within the circular string of an input string are output as a codeword. The circular string is made by connecting the first symbol and the last symbol of the input string. In CSE decoding, the circular string is reconstructed from its substrings and their frequencies. Furthermore, the minimum length of substrings for which decoding does reconstruct the circular string has been proved, together with a reconstruction algorithm. However, the algorithm requires the substrings to have no errors.

Therefore, in this paper, we propose an error-correcting algorithm that can detect the substring containing a single bit-flip, bit-insertion, or bit-deletion error and correct that bit error. By applying the proposed algorithm, we can reconstruct a circular string from a set of substrings and their frequencies that includes only one substring with at most one bit error.

On flow equivalence of the subshifts associated with the stream version of asymmetric binary systems
Hiroshi Fujisaki (Kanazawa University, Japan)

The stream version of asymmetric binary systems (ABS) developed by Duda is an entropy coder for information sources with a finite alphabet. It has a state parameter \(l\), a nonnegative integer, and a probability parameter \(p\) with \(0<p<1\). The stream-encoding algorithm yields the edge shift \(X_G\) associated with the stream version of ABS, while it also produces the edge shift \(X_H\) associated with output blocks from the stream version of ABS. Previously, we have shown that \(X_G\) and \(X_H\) have the same topological entropy. In this research, we find that \(X_G\) and \(X_H\) are flow equivalent. We consider the case where \(p=1/\beta\) with the golden mean \(\beta=(1+\sqrt{5})/2\). For several \(l\)'s, we compute the Parry-Sullivan quantity \(D(A_G)\) and the Bowen-Franks group \(BF(A_G)\), which are a complete set of invariants of flow equivalence. The results imply that for a fixed \(p\), \(D(A_G)\) and \(BF(A_G)\) depend on \(l\).
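For reference, for an \(n\times n\) nonnegative integer matrix \(A\), the two invariants mentioned above are

\[ D(A) = \det(I-A), \qquad BF(A) = \mathbb{Z}^n/(I-A)\mathbb{Z}^n, \]

the Parry-Sullivan number and the Bowen-Franks group; by Franks' classification, they form a complete set of invariants of flow equivalence for irreducible (nontrivial) shifts of finite type.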

Improved Quasi-Random Designs with Constant Pool Sizes for Group Testing
Ryouji Hamada (The University of Electro-Communications, Japan); Xiao-Nan Lu (Gifu University, Japan)

Group testing is a well-studied approach for identifying defective items among a large number of items by conducting a relatively small number of tests on pools of items. In this paper, we propose a novel method for generating quasi-random designs with constant pool sizes for noiseless non-adaptive group testing. Our approach incorporates insights from combinatorial properties to optimize the distribution of Hamming distances in pooling designs. Simulations with established practical decoding algorithms show that our quasi-random designs outperform existing random and quasi-random methods in both identification accuracy and stability. These improved designs have the potential to provide significant benefits for real-world applications of group testing, such as PCR testing in large populations and network security.
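As a baseline for the pooling-design setting above, the sketch below implements the standard COMP decoder for noiseless non-adaptive group testing; the paper's designs and decoding algorithms are more elaborate.

    # COMP decoding for noiseless non-adaptive group testing (illustrative).
    import numpy as np

    def comp_decode(pools, results):
        """pools: t x n 0/1 matrix (test i uses item j iff pools[i, j] = 1);
        results: length-t 0/1 outcomes. Items in any negative test are clean;
        the rest are declared (possible) defectives."""
        candidate = np.ones(pools.shape[1], dtype=bool)
        for row, positive in zip(pools, results):
            if not positive:
                candidate &= (row == 0)
        return np.flatnonzero(candidate)

    pools = np.array([[1, 1, 0, 0],
                      [0, 1, 1, 0],
                      [0, 0, 1, 1]])
    results = np.array([1, 0, 0])        # outcomes when item 0 is defective
    print(comp_decode(pools, results))   # [0]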

Tu-AM-1-2: Information Theory and Security

Room 3 (IB202)
Achievable Cost Region for Erasure of Distributed Information Using a Common Random Number
Tetsunao Matsuta (Saitama University, Japan)

Confidential information held by companies and individuals is often stored on storage devices. When the migration or disposal of such devices is required, it is necessary to overwrite and erase the information from the devices. In order to avoid degradation or to reduce the erasure time, it is desirable to minimize the cost of information erasure, such as the number of overwriting locations. In this paper, we consider the case where confidential information is distributed and stored in multiple storage devices. To analyze the costs of information erasure for this setting, we consider the achievable cost region, i.e., the region of possible cost values. We then show that this region can be characterized using single-letter random variables of bounded cardinalities. Here, we assume that the confidential information is generated by a stationary memoryless source and that a common random number is available in storage devices for erasure.

Bounds for message authentication with partially leaked secret key using conditional Rényi entropy
Yuta Saito and Shun Watanabe (Tokyo University of Agriculture and Technology, Japan)

In message authentication, we consider a situation where a sender sends messages to a receiver through an insecure channel, in which there is a risk of impersonation or substitution attacks by an adversary. Message authentication is a scheme to detect such attacks and to accept legitimate messages as legitimate. In Igawa's study, upper and lower bounds on the product of the success probabilities of the attacks were derived under the condition that each party observes correlated i.i.d. sequences as secret information, and the upper and lower bounds were shown to be asymptotically equal under certain conditions. In Shikata's study, the success probabilities of attacks were evaluated in terms of the Rényi entropy in the situation where the sender and receiver share a secret key of finite length. In this study, we evaluate the success probabilities of attacks using the conditional Rényi entropy for the case where each party observes correlated information of finite length and the sender and receiver observe the same information, which is one of the open questions in Igawa's study. Such a situation can also be viewed as one where the secret key is partially leaked. First, we extend the definitions of the attack success probabilities to our situation. Next, we evaluate these probabilities using the conditional Rényi entropy, dividing the analysis into cases according to the size of the secret-key alphabet.

Constructions of a Visual Cryptography Scheme Using an Affine Resolvable BIBD
Naoki Mitarai and Hiroki Koga (University of Tsukuba, Japan)

A visual cryptography scheme (VCS) is one of the secret sharing schemes for digital images. In a VCS, n images called shares are generated from a secret image and distributed to n respective participants. In this paper, we first give a new construction of basis matrices of a VCS realizing the (2, n)-threshold access structure, using a combinatorial design known as an affine resolvable BIBD. In this construction, if we use a (4l, 2l, 2l-1) affine resolvable BIBD, the relative difference is almost equal to the optimal value while the pixel expansion is almost half that of an existing construction attaining the optimal value. We also give a construction of basis matrices using a (4l, 2l, 2l-1) affine resolvable BIBD for a certain access structure, in which participants are partitioned into disjoint groups of size two and any two participants in the same group can reproduce the secret image. In addition, any three participants from different groups can obtain no information on the secret image.

The Space Complexity of Generating Random Tent Codes
Naoaki Okada (Kyushu University, Japan); Shuji Kijima (Shiga University, Japan)

This paper is motivated by the question of whether it is possible to compute a chaotic sequence efficiently, e.g., is it possible to get the \(n\)-th bit of a bit sequence generated by a chaotic map, such as the \(\beta\)-expansion, tent map, or logistic map, in \(\mathrm{o}(n)\) time/space? From the viewpoint of theoretical computer science, this paper gives an affirmative answer to the question about the space complexity of the tent map. We prove that a tent code of \(n\) bits with an initial condition chosen uniformly at random is exactly generated in \(\mathrm{O}(\log^2 n)\) space in expectation.
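For concreteness, the naive tent-code generator is shown below: iterate the tent map and record which half of [0, 1] the orbit visits. This direct method (with exact arithmetic) uses space growing with \(n\); the paper's point is that the random-initial-condition case can be generated in \(\mathrm{O}(\log^2 n)\) space.

    # Naive tent-code generation with exact rational arithmetic.
    from fractions import Fraction

    def tent_code(x0, n):
        """First n bits of the tent code of x0 in [0, 1]."""
        x, bits = Fraction(x0), []
        for _ in range(n):
            if x < Fraction(1, 2):
                bits.append(0)
                x = 2 * x              # tent map, left branch
            else:
                bits.append(1)
                x = 2 * (1 - x)        # tent map, right branch
        return bits

    print(tent_code(Fraction(1, 3), 8))   # [0, 1, 1, 1, 1, 1, 1, 1]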

Complexity of Solving Hash Puzzles Using Revised Boyer Quantum Algorithm
Chris Idota and Yuichi Kaji (Nagoya University, Japan)

This study investigates the computational complexity of solving hash puzzles using quantum algorithms. A hash puzzle is formalized as the problem of finding a pre-image whose hash value falls within a specified target set. It is known that Grover's quantum search algorithm and Boyer's algorithm, a Bayesian utilization of Grover's algorithm, can compute the pre-image of a given hash value more efficiently than non-quantum algorithms. However, those quantum algorithms use implicit assumptions and approximations that are not compatible with a general hash puzzle. This study revises Boyer's algorithm by excluding those assumptions and approximations, and clarifies the relation between the size of the target set of a hash puzzle and the complexity necessary for solving the puzzle. It is shown that, in the quantum framework, a hash puzzle with fewer targets can be easier to solve than a hash puzzle with more targets, which contradicts the classical understanding that puzzles with fewer targets are more difficult to solve.
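For orientation, in the idealized Grover setting with N candidates and t targets, the optimal iteration count is roughly (pi/4)*sqrt(N/t); the point of the paper is precisely that such idealized assumptions break down for general hash puzzles. A hedged numerical comparison:

    # Idealized Grover iteration count vs. expected classical guessing.
    import math

    def grover_iterations(N, t):
        theta = math.asin(math.sqrt(t / N))        # amplitude angle
        return math.floor(math.pi / (4 * theta))   # ~ (pi/4) * sqrt(N / t)

    N = 2**32
    for t in (1, 16, 256):
        print(f"t={t:4d}: ~{grover_iterations(N, t):8d} Grover iterations, "
              f"~{N // t:12d} classical guesses on average")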

Tu-AM-1-3: Statistical Learning 2

Room 4 (IB301)
An Efficient Image Segmentation Algorithm Based on Bayes Decision Theory and Its Application for Region Detection
Yilun Wang, Koshi Shimada and Toshiyasu Matsushima (Waseda University, Japan)

In this study, we aim to extract rectangular regions using a method that labels regions as single-label or multi-label. To extract rectangles, we assume a quadtree model for the image. Furthermore, we assume prior distributions for the image, the labels, and the quadtree. Based on Bayesian decision theory, we derive the optimal estimation solution. Few studies assume prior distributions on images for optimal estimation. By setting an appropriate prior distribution for the quadtree, we reduce the computational complexity. Additionally, we estimate various hyperparameters from the training data.

Adversarial Examples with Missing Perturbation Using Laser Irradiation
Daisuke Kosuge, Hayato Watanabe, Taiga Manabe, Yoshihisa Takayama and Toshihiro Ohigashi (Tokai University, Japan)

Adversarial Examples (AEs) are inputs crafted so that neural networks intentionally misclassify images into a different class. We propose an AE-based attack method that uses lasers to facilitate control of the misclassification. A previous work showed that road signs can be misclassified by irradiating them with lasers. We focus on effective attacks in situations where the attacker can prepare the road signs. Namely, in our attack, the attacker prepares road signs made using AEs with missing perturbation and causes misclassification effectively by laser irradiation. In this paper, we present the principle and method for misclassifying road signs made from AEs with missing perturbation using laser irradiation. In addition, we demonstrate the effectiveness of the proposed method through experiments.

Semantic Segmentation with Attention-Modulated Feature Fusion in HRNet V2
Weijie Zhang and Shuhei Kaneko (Tokyo City University, Japan); Shuichi Arai (Graduate School of Engineering, Tokyo City University, Japan)

Semantic segmentation is pivotal for precise object identification and localization within images, a cornerstone for automated analysis and machine vision. Despite advancements, challenges persist, particularly in segmenting small objects and recognizing multi-class large objects. This paper introduces an approach that enhances HRNet V2 by integrating a Squeeze-and-Attention (SA) Block, enabling dynamic feature selection and integration across resolutions, thereby improving segmentation accuracy. We propose three strategies: the Master Strategy, focusing on output-resolution features; the Slave Strategy, sharing features from other resolutions; and the Dual Strategy, integrating features determined by weights from two resolutions. Our experiments on the Cityscapes dataset demonstrate significant improvements, with the Dual Strategy outperforming conventional methods by 4.86 points in mIoU. These results underscore the potential of our approach to significantly advance semantic segmentation.

Object Detection for Low-illumination Scenes Based on Enhanced YOLOv7 Architecture
Chia Hsien Lin, Guan-Jie Peng and Chang Hong Lin (National Taiwan University of Science and Technology, Taiwan)

This paper proposes a novel approach to object detection in low-illumination conditions, an area where existing methods often fall short. By recognizing the crucial role of image quality in subsequent detection accuracy, we introduce preprocessing steps involving image enhancement and restoration networks. Our detection network then focuses on enhancing both channel and spatial feature information, integrating adaptive attention mechanisms and a Transformer architecture for contextual understanding. A multi-scale fusion strategy is employed to merge these features effectively. Additionally, we implement a partial cross-stage network to facilitate CNN learning while minimizing model size. Experimental results demonstrate the superiority of our approach over previous methods, showcasing significant accuracy improvements, particularly in challenging scenarios. This work underscores the importance of addressing image quality concerns in object detection tasks, especially in low-illumination environments, and offers a promising solution with practical implications across various domains, including autonomous driving and surveillance.

Enhancing Proximal Decoding for LDPC Codes through Deep Unfolding
Asahi Ito, Lantian Wei and Tadashi Wadayama (Nagoya Institute of Technology, Japan)

This paper introduces deep unfolding-assisted proximal decoding for low-density parity-check (LDPC) codes. Proximal decoding is a decoding algorithm based on the proximal gradient method. Our proposal, deep unfolding-assisted proximal (DU-Proximal) decoding, is obtained by applying deep unfolding to proximal decoding. We especially focus on the decoding performance in multiple-input multiple-output (MIMO) channels. In numerical experiments, we compare the error-correcting capability of the proposed algorithm with the minimum mean squared error (MMSE) signal detection method, the tanh signal detection method, and MMSE combined with the belief propagation (BP) algorithm for LDPC decoding. The experimental results demonstrate that parameter optimization via DU yields significant performance gains, especially at high signal-to-noise ratio levels.
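To illustrate the deep-unfolding principle itself (not the paper's decoder), the sketch below unrolls proximal-gradient (ISTA) iterations for a toy sparse-recovery problem and learns a per-iteration step size by stochastic gradient descent.

    # Deep-unfolded proximal gradient with learnable step sizes (toy example).
    import torch

    torch.manual_seed(0)
    m, n, T = 20, 40, 10
    A = torch.randn(m, n) / m**0.5

    def soft(x, tau):                    # proximal operator of tau * ||.||_1
        return torch.sign(x) * torch.clamp(x.abs() - tau, min=0.0)

    steps = torch.nn.Parameter(0.1 * torch.ones(T))   # learnable step sizes
    opt = torch.optim.Adam([steps], lr=1e-2)

    for it in range(500):
        x_true = (torch.rand(64, n) < 0.1).float() * torch.randn(64, n)
        y = x_true @ A.T
        x = torch.zeros_like(x_true)
        for t in range(T):               # T unfolded ISTA iterations
            grad = (x @ A.T - y) @ A
            x = soft(x - steps[t] * grad, 0.01)
        loss = ((x - x_true) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()

    print(loss.item(), steps.data)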

Tuesday, November 12 1:50 - 3:10

Tu-PM-1-1: Integrated Circuit Design for Information Theory

Room 2 (IB201)
Reformulated Euclidean Algorithm and Optimized (OREA) Architecture for Reed-Solomon Decoding
Chao Chen (Xidian University, China); Zhongfeng Wang (Sun Yat-sen University, China); Yunghsiang Sam Han (University of Electronic Science and Technology of China, China); Baoming Bai (Xidian University, China)

In this paper, we present a Reformulated Euclidean Algorithm (REA) and its optimized architecture for Reed-Solomon decoding. Through algorithm transformations on a modified Euclidean algorithm by Berlekamp et al., the REA is derived; it is free of inversion operations, has a fixed number 2t of iterations (t is the error-correction capability), and has a very simple description. By generalizing the Horiguchi-Koetter formula and exploiting an early termination mechanism, we present the optimized reformulated Euclidean algorithm (OREA). The derived architecture is systolic, consisting of 2t+1 processing elements (PEs) with a critical path of one multiplier and one adder. Complexity comparisons show that the proposed OREA saves 30% resources over sDCMEA, the state-of-the-art architecture based on the Euclidean algorithm, and has almost the same (actually slightly lower) complexity as ePIBMA, the state-of-the-art architecture based on the Berlekamp-Massey algorithm. Thus this work fills an important gap in hardware implementation between the two RS decoding algorithms.

Block-layered Sign-flipping Belief Propagation Decoder Architecture for Surface Codes
Hsuan Ku, Ting-An Hu, Tzu-Hsuan Huang and Yeong-Luh Ueng (National Tsing Hua University, Taiwan)

Quantum computing requires error correction codes to protect messages against quantum noise. Surface codes are suitable for quantum error correction because of their local qubit connections. Decoding via the conventional belief propagation (BP) algorithm is deficient for highly degenerate codes. Using the proposed block-layered sign-flipping (SF) technique, this problem can be mitigated and the error-rate performance can be improved. The decoder hardware implemented for the [[181, 1, 10]] surface code occupies an area of 0.97 mm^2 and achieves a latency of 315 ns in a 90 nm process.

Investigating the Role of D Flip-Flop as a Synchronization Circuit for Enhancing Randomness and Stabilizing Bit Distribution in RO-Based RNG on FPGA
Mitsuki Fujiwara (Okayama University, Japan); Ryoichi Sato (Okayama University & None, Japan); Samsul Huda, Yasuyuki Nogami and Yuta Kodera (Okayama University, Japan)

Random number generators (RNGs) are crucial in applications requiring unpredictable sequences, including cryptography, simulations, and gaming. Among the different types of RNGs, ring oscillator (RO)-based RNGs have gained popularity due to their simplicity and suitability for FPGA implementation. Previous research by Wold et al. suggested that directly connecting a D flip-flop (D-FF) to the RO circuit enhances randomness. However, the precise factors contributing to the improved randomness remain unclear. We hypothesize that the D-FF acts as a synchronization circuit that enhances randomness. To verify this hypothesis, we design and implement RO-based RNG circuits with and without a D-FF directly connected to the RO circuit on an FPGA platform. By analyzing and comparing the bit distributions of the generated number sequences, we investigate the impact of the D-FF on the stability of the generated numbers. The results confirm that incorporating a D-FF into the RO-based RNG circuit stabilizes the random number sequence, leading to the conclusion that the D-FF plays a synchronization role in these circuits.

Integrated High-gain Broadband Low Noise Amplifier Receiver with Ultra-wideband Antenna
Wen Cheng Lai (National Taiwan University of Science and Technology, Taiwan)

This article presents an RF integrated receiver consisting of a high-gain broadband low noise amplifier (LNA) with a balun, a Gm-C band-pass filter, and an ultra-wideband (UWB) antenna with a band-notch design. The proposed integrated CMOS design is chosen as the receiver RF circuit to enhance the conversion gain of the low noise amplifier while maintaining high flatness over a broad band. The integrated circuit is fabricated in a TSMC 0.18-μm process. From 3 to 6 GHz, the measured results exhibit a high gain of 16.2 to 17.4 dB, a noise figure of 4.0 dB to 4.4 dB, and an input-referred third-order intercept point (IIP3) of -20.2 dBm while consuming 14 mA from a 1.8 V supply; and a gain of 11.7 dB to 13.7 dB, a noise figure of 4.6 dB to 4.9 dB, and an IIP3 of -20.9 dBm while consuming 6 mA from a 1.0 V supply, respectively.

Tu-PM-1-2: Cryptography

Room 3 (IB202)
Checkable Key Generation and its Application to Hierarchical Identity-Based Signature
Atsushi Fujioka (Kanagawa University, Japan); Takahiro Oshima and Taiichi Saitoh (Tokyo Denki University, Japan)

This paper extends the concept of the specific verification function (SVF), introduced by Cui et al. to formally describe the Naor transformation, from hierarchical identity-based encryption (HIBE) to hierarchical identity-based cryptography (HIBC), including hierarchical identity-based signature (HIBS), where the Naor transformation converts a secure (H)IBE scheme into a secure (H)IBS one.

We formulate this extended concept, checkable key generation (CKG), give its syntax, and classify it into two types: fully checkable and partially checkable. We also define security notions of CKG, named unextractability (UE), against several attacks.

Next, we show that an $\ell$-level CKG component can be constructed from an $\ell$-level HIBS scheme, and an $\ell$-level HIBS scheme can be constructed from an $(\ell+1)$-level CKG component. Both constructions give a security enhancement of HIBS from a weakly secure scheme to a strongly secure one.

In addition, we discuss how the security notions of CKG relate to total unbreakability (TUB) in HIBC. A new notion, partial unbreakability (PUB), relates to the notion of partially checkable CKG. In particular, an HIBS scheme with constant-size signatures can be constructed from a partially checkable CKG component.

Chameleon Hashing Security Enhancement to Hierarchical Identity-Based Identification
Atsushi Fujioka (Kanagawa University, Japan); Keisuke Saito and Taiichi Saitoh (Tokyo Denki University, Japan); Keita Xagawa (Technology Innovation Institute, United Arab Emirates)

This paper examines a security enhancement technique from a passively secure hierarchical identity-based identification (HIBI) protocol to a concurrently secure one. Two types of security enhancement techniques for identification protocols have been proposed: one based on the OR-proof technique and the other using a chameleon hash function.

The former has been examined in detail, while the latter has not been formulated for HIBI protocols, and a close evaluation of its applicability, especially in identity-selecting settings, and of its reduction efficiency has not been made public.

We describe a transformation using a chameleon hash function and compare it with those based on the OR-proof technique in terms of applicability and reduction efficiency.

Fast and Secure Scalar Multiplication on Elliptic Curve GLS254
Ryosuke Kido and Atsuko Miyaji (Osaka University, Japan)

Elliptic curve cryptosystems (ECCs), based on the discrete logarithm problem, give compact and fast cryptosystems with smaller key sizes than Rivest-Shamir-Adleman (RSA). They are expected to be used in IoT devices, where available memory is limited. In addition, it is necessary to construct ECCs secure against side-channel attacks (SCA). Therefore, more compact and fast ECCs secure against SCA are needed. Elliptic curve scalar multiplication (ECSM) is the dominant computation of ECCs, and thus it is important to build a secure, compact, and fast ECSM. GLS254, defined over $\mathbb{F}_{q^2}$ (with $q=2^m$), was proposed to give a fast ECSM by using an endomorphism. Recently, GLS254 was improved by newly introducing (x, s)-coordinates, in which an addition formula can be executed without any exception. In this study, we further improve secure, compact, and fast GLS254 by introducing additional new coordinates and optimizing the ECSM. As a result, our ECSM achieves 29371 cycles on an Intel x86 CPU, improving on the previous best record by 7.1%.
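
The abstract does not reproduce GLS254's formulas, but the uniform operation pattern underlying SCA-resistant scalar multiplication can be illustrated with a generic Montgomery ladder. The sketch below uses a toy short-Weierstrass curve over a small prime field (all parameters hypothetical); a real implementation such as GLS254 additionally needs constant-time field arithmetic, the GLS endomorphism, and the (x, s)-coordinates mentioned above.

    # Toy Montgomery ladder: the same add/double pattern is executed for
    # every scalar bit, which is the basic idea behind SCA-resistant ECSM.
    p, a = 97, 2                      # curve y^2 = x^3 + 2x + 3 over F_97

    def add(P, Q):
        if P is None: return Q        # None represents the point at infinity
        if Q is None: return P
        (x1, y1), (x2, y2) = P, Q
        if x1 == x2 and (y1 + y2) % p == 0:
            return None
        if P == Q:
            lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
        else:
            lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
        x3 = (lam * lam - x1 - x2) % p
        return (x3, (lam * (x1 - x3) - y1) % p)

    def ladder(k, P, nbits=8):
        R0, R1 = None, P              # invariant: R1 = R0 + P
        for i in reversed(range(nbits)):
            if (k >> i) & 1:
                R0, R1 = add(R0, R1), add(R1, R1)
            else:
                R0, R1 = add(R0, R0), add(R0, R1)
        return R0

    print(ladder(5, (0, 10)))         # (0, 10) lies on the toy curve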

How to Certify Deletion with Constant-Length Verification Key
Duo XU (Nagoya University, Japan)

The certified deletion process involves transmitting a ciphertext and confirming its deletion via a digital certificate. In this context, the term "deletion" refers to the fact that no information-processing procedure can decipher the ciphertext if the certificate is valid. Those familiar with cryptography will recognize this type of cipher as being computationally secure before and information-theoretically secure after deletion. Let λ and n be the security parameter and the length of the message sent. We propose a method for certified deletion that needs only O(λ) bits of verification key, while, to our knowledge, all previous work needs at least O(λn) bits of verification key. Imagine someone who wants to store 1 GB of data on a server in a deletable manner; the previous methods need to store 1 GB of metadata, while ours does not. Our method uses a PRF (pseudorandom function) to generate certificates and uses its key as the verification key. We also state our result in a general form, as previous research did, so applying our result in the same way as in prior works yields the same applications. The key's length affects storage costs and key-leakage risks, and we are the first to attempt to reduce it. General ways to realize certified deletion are known [1], [2], and our result can be seen as an essential adaptation to improve efficiency in practice.

Tuesday, November 12 1:50 - 2:50

Tu-PM-1-3: Signal Processing

Room 4 (IB301)
MZI-based Optical Circuit for MIMO Signal Detector Immune to Fabrication Errors
Takumi Nishiyama, Tadashi Wadayama and Ayano Nakai-Kasai (Nagoya Institute of Technology, Japan)

This paper presents a design method for Mach-Zehnder interferometer (MZI)-based optical circuits intended for MIMO signal detection that are robust against fabrication errors in MZIs. Typically, faulty MZIs can critically impair the functioning of optical circuits. Our primary objective is to devise a design strategy that ensures the reliability of optical circuits for MIMO linear detection even when faced with faulty MZIs. The cornerstone of our approach is the adjustment of the parameters of the functional MZIs by minimizing the receiver's mean squared error using a gradient descent method. Numerical experiments underscore that our method enables signal detectors, even with faulty MZIs, to approach the performance of an ideal, fault-free MMSE MIMO detector.

Deep Unfolding-Aided Design for Optical MIMO Signal Detection Circuit
Tomoya Kuno and Tadashi Wadayama (Nagoya Institute of Technology, Japan)

In this paper, we study an optical MIMO (Multiple-Input Multiple-Output) signal detection circuit based on the projected gradient method. The projected gradient method is an iterative algorithm that alternately performs a gradient descent step and a projection step. In the proposed optical MIMO signal detection circuit, the gradient descent step is implemented by an optical analog circuit in the optical domain, and the projection step is implemented by a nonlinear processing circuit in the electrical domain. The system noise due to the optical amplifier significantly degrades the detection performance of the optical MIMO detection circuit, which is a crucial issue for the optical implementation. In order to mitigate the effect of the system noise, we present how to apply deep unfolding, a technique for improving the performance of iterative algorithms, to the design of an optical MIMO signal detection circuit. The results of numerical experiments show that the proposed deep unfolding-based design achieves improved convergence performance and reduces the influence of the system noise.
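
As a rough illustration of the algorithm described above (not of the optical circuit itself), here is a minimal numpy sketch of a projected gradient detector for BPSK MIMO; in the deep-unfolded version, the per-iteration step sizes, and possibly the projection softness, would be treated as trainable parameters. All sizes and constants are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 4
    H = rng.normal(size=(n, n)) / np.sqrt(n)   # hypothetical channel matrix
    x = rng.choice([-1.0, 1.0], size=n)        # BPSK transmit symbols
    y = H @ x + 0.05 * rng.normal(size=n)      # received signal

    gammas = [0.3] * 10                        # step sizes (trainable in deep unfolding)
    s = np.zeros(n)
    for g in gammas:
        s = s + g * H.T @ (y - H @ s)          # gradient step on ||y - Hs||^2 / 2
        s = np.tanh(3.0 * s)                   # soft projection toward {-1, +1}
    print(np.sign(s) == x)                     # detected symbols vs. truth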

A Prototype of Proximity Warning System on the Sea, Utilizing Received Signal Strength
Hiroomi Isozaki (Japan Coast Guard Academy, Japan); Yasuhiro Sato (Osaka Electro-Communication University, Japan); Yoshinori Matsuura and Masaaki Yamanaka (Japan Coast Guard Academy, Japan)

Our research group aims to realize a low-cost alert system that informs the crews of small ships of approaching floating objects on the sea. We previously proposed a maritime object identification system that uses a Radio Frequency Identification (RFID) tag. However, we concluded that it is difficult to measure the received signal strength on the sea, due to the effects of waves and other factors, and to miniaturize the transmitter and receiver devices. In this paper, we discuss the construction of a high-precision positioning system necessary to measure received signal strength on the sea. We use an ESP32, a low-power-consumption Wi-Fi controller, to elucidate the characteristics of the received signal strength of Wi-Fi on the sea, which is necessary for a proximity warning system. Finally, we build a prototype of the proximity warning system.

Tuesday, November 12 3:30 - 4:50

Tu-PM-2-1: Code Based Cryptography/Wiretap Coding

Room 2 (IB201)
Encrypted Numeric Vector Computations Based on Non-Systematic QC-MDPC Code Over Prime Field
Haruhiko Kaneko (Tokyo Institute of Technology, Japan)

This paper presents three types of post-quantum encrypted vector computations over the prime field. Based on the linearity and quasi-cyclicity of non-binary QC-MDPC code, the method can perform vector addition (VA), scalar multiplication (SM), and cyclic shift (CS) without decrypting ciphertexts, where the scalar value and shift width are public plaintext data. Combinations of these computations enable some important practical computations, e.g., the batch of weighted sums and convolution of vector elements.
The parity-check matrix of the QC-MDPC code used in the key encapsulation is modified to remove the systematic information part of the ciphertext. The decoding failure rate is evaluated by simulation for QC-MDPC codes over $\mathbb{F}_q$ with the non-binary sum-product algorithm and parallel symbol-flipping decoding. The work factors of information set decoding for key-recovery and decoding attacks are also calculated. The numerical results show that, using a (9602, 4801) code over $\mathbb{F}_{251}$, the method can perform up to three VAs and arbitrary numbers of SMs and CSs.
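
The additive homomorphism the paper exploits comes from the linearity of the code: adding two ciphertexts adds the underlying messages while the error vectors also add, which is why only a bounded number of VAs can be chained before decoding fails. The toy numpy sketch below (random generator matrix, single-symbol errors, parameters nothing like the (9602, 4801) code) demonstrates only that principle, not the actual cryptosystem.

    import numpy as np

    p, k, n = 251, 4, 8                         # toy parameters over F_251
    rng = np.random.default_rng(1)
    G = rng.integers(0, p, size=(k, n))         # stand-in generator matrix

    def encrypt(m):
        e = np.zeros(n, dtype=int)
        e[rng.integers(n)] = rng.integers(1, p) # sparse error vector
        return (m @ G + e) % p

    m1 = rng.integers(0, p, size=k)
    m2 = rng.integers(0, p, size=k)
    c_sum = (encrypt(m1) + encrypt(m2)) % p
    # c_sum = (m1 + m2) G + (e1 + e2) mod p: the message part adds linearly,
    # while the error weight grows with each VA, bounding how many can be chained.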

Enhancement of Physical Layer Security based on NB LDPC Coded Modulation
Irina Bocharova, Boris D. Kudryashov and Vitaly Skachek (University of Tartu, Estonia); Stefano Alberico (Skudo OÜ, Estonia)

In this work, we propose a system for joint security and error correction at the physical layer. The proposed approach combines ideas from the area of wiretap channel coding with a variant of the McEliece public key cryptosystem. The system makes use of randomly chosen nonbinary low-density parity-check codes with quasi-cyclic base matrices. Such a choice allows for efficient hiding of the structure of the generator matrix of the code, thus allowing for protection of the data against the eavesdropper's attacks. At the same time, it also supports efficient error correction. The analysis of the complexity of the attacks and the simulation results demonstrate the advantages of the proposed system.

Lee Metric Code-based Signature
Chik How Tan and Theo Fanuela Prabowo (National University of Singapore, Singapore)

We propose a new Lee-metric code-based signature scheme (called LMQCS) based on 2-quasi-cyclic codes (2QC-codes) and assuming the hardness of the Lee metric syndrome decoding problem for 2QC-codes (2QC-SDP). Furthermore, we also give a detailed security analysis and a brief security proof of the LMQCS signature scheme. Based on the complexities of solving the underlying 2QC-SDP problem, the public key size and signature size of the LMQCS signature scheme are 2217 bytes and 4434 bytes respectively at 128-bit security level.

Achieving Optimal Short-Blocklength Secrecy Rate Using Multi-Kernel PAC Codes for the Binary Erasure Wiretap Channel
Hsuan-Yin Lin (Simula UiB, Norway); Yi-Sheng Su and Mao-Ching Chiu (National Chung Cheng University, Taiwan)

We investigate practical short-blocklength coding for the semi-deterministic binary erasure wiretap channel (BE-WTC), where the main channel to the legitimate receiver is noiseless, and the eavesdropper's channel is a binary erasure channel (BEC). It is shown that under the average total variation distance secrecy metric, multi-kernel polarization-adjusted convolutional (PAC) codes can achieve the best possible theoretical secrecy rate at blocklengths of 16, 32, 64, and 128 if the secrecy leakage is less than or equal to certain values.

Tu-PM-2-2: Biometrics and Authentication

Room 3 (IB202)
Proof of Origin: Creating Data Authenticity by Biometric Information
Takayuki Suzuki (Hitachi America, Ltd., USA); Ken Naganuma (Hitachi, The University of Tokyo, Japan); Kaiho Fukuchi and Takatoshi Ohara (Hitachi, Ltd., Japan)

Non-Fungible Tokens (NFTs) issued on the blockchain have made it possible to add scarcity to digital data. However, as NFT transactions soared, illegal use of other people's content increased. To address this situation, this paper proposes the Proof of Origin concept, which adds authenticity to NFTs by utilizing a technology that generates the cryptographic keys required for digital signatures directly from biometric information, and describes the system architecture, data structure, and implementation of this concept. Using biometric information to link NFTs to actual persons makes it possible to provide trustworthiness and high added value to data. At the same time, the creator does not need to manage secret keys and can safely and efficiently claim the originality of the data.

Leakable Mnemonic Phrase: A New Orientation of Wallets Backup based on Biometric-key
Ken Naganuma (Hitachi Ltd., Japan); Takayuki Suzuki (Hitachi America, Ltd., USA); Takatoshi Ohara (Hitachi, Ltd., Japan)

A mnemonic phrase is a set of secret words that corresponds one-to-one with the secret key of a user's crypto wallet. If this phrase is lost, the user cannot access their cryptocurrencies. Thus, the user wants to back up the phrase to multiple storage locations, but the risk of leakage increases. To solve this dilemma, this paper proposes a leakable mnemonic phrase system in which the phrase is encrypted with a secret key generated from the user's biometric information. Since the leakable mnemonic phrase can be decrypted only with the user's biometric information, it can be backed up to various external storage devices, solving the contradictory problem of keeping the contents of the phrase secret while preventing its loss. In this paper, we also show the results of evaluating the performance of this system by implementing it on an actual biometric authentication device. The encryption/decryption processing time is fast (200 ms), and the system successfully decrypted the mnemonic phrase six months after enrollment.
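
To make the construction concrete, here is a minimal sketch of the encrypt-with-a-biometric-derived-key idea using the third-party Python cryptography library; the byte string standing in for the biometric key material is a hypothetical placeholder, and a real system would use a fuzzy extractor or similar primitive to obtain a stable key from noisy biometric readings.

    import base64, hashlib
    from cryptography.fernet import Fernet

    bio_key_material = b"stable-output-of-biometric-key-generator"  # placeholder
    key = base64.urlsafe_b64encode(hashlib.sha256(bio_key_material).digest())

    f = Fernet(key)
    phrase = b"abandon ability able about above absent ..."         # mnemonic words
    token = f.encrypt(phrase)          # the "leakable" ciphertext, safe to back up
    assert f.decrypt(token) == phrase  # recoverable only with the biometric key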

Make Seeds Expire in TOTP Authentication
Keishiro Noda and Yuichi Kaji (Nagoya University, Japan)

Time-based One-Time Password (TOTP) is commonly used in many digital services to meet the increasing demands of security in user authentication. The security of TOTP owes much to the management of TOTP seeds, from which one-time passwords are computed, but there is a certain risk of seed leakage in practice. This study aims to provide a mechanism that virtually realizes the expiration of TOTP seeds: even if a seed is left unattended or exposed to somebody at a certain point in time, the seed expires as time passes. The mechanism is developed by using the logistic map, together with careful control of numeric values, which is necessary to avoid issues caused by finite-precision calculations. The paper sketches the proposed scheme and presents the results of numerical investigations for discussing choices of good parameters.
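
The following sketch shows only the logistic-map building block, not the full protocol: the map parameter and the rounding precision are hypothetical, and the explicit rounding stands in for the paper's careful control of numeric values under finite precision.

    def evolve(x: float, steps: int, r: float = 3.99, digits: int = 12) -> float:
        """Iterate the logistic map x <- r x (1 - x) with controlled rounding,
        so that two parties computing independently stay in sync."""
        for _ in range(steps):
            x = round(r * x * (1.0 - x), digits)
        return x

    seed = 0.123456789
    print(evolve(seed, steps=100))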

Proposal of CAPTCHA using the Munker Illusion and Prototype Implementation
Yasuyuki Murakami, Kotaro Yamaguchi, Hiroki Izumi and Motonori Doi (Osaka Electro-Communication University, Japan)

CAPTCHA is an authentication test that distinguishes between humans and machines. CAPTCHA is essential to prevent fraudulent account acquisition by BOTs. Conventional CAPTCHAs were based on the assumption that human abilities were superior to those of BOTs, but with the evolution of AI technology, this assumption no longer holds true. As CAPTCHAs have become more complex, usability has been compromised. The Munker illusion is a visual illusion in which shapes of the same color appear to be different colors to the human eye due to the synergistic effect of color assimilation and color contrast. In this paper, we propose a new CAPTCHA using the Munker illusion.

Tu-PM-2-3: Source Coding

Room 4 (IB301)
An Explicit Construction of a Coded Caching Scheme with Heterogeneous Cache Sizes
Akihito Nagaya and Hiroki Koga (University of Tsukuba, Japan)

In the coded caching scheme proposed by Maddah-Ali and Niesen, we usually consider the setup where $K$ users have respective cache memories of equal size and request an arbitrary one of $N$ files from a server. The server broadcasts a signal to the $K$ users so that each user can reproduce the requested file from the transmitted signal together with the contents of their cache memory. Finding the memory-rate tradeoff is one of the fundamental problems in the coded caching problem. In this paper, we extend the coded caching problem to the case where $K$ users have cache memories of unequal sizes. We succeed in giving a new upper bound on the memory-rate tradeoff under a certain assumption. We establish the upper bound by developing a new scheme in which $N$ files are divided into subfiles of unequal sizes according to the sizes of the cache memories. The validity of this scheme is established by using combinatorial arguments. We also show that adding new users with no cache memory to the $K$ users can reduce the rate of the transmitted signal.

An Upper Bound of Cumulant Generating Function of Codeword Lengths in Variable-Length Lossy Source Coding Under Logarithmic Loss
Shota Saito (Gunma University, Japan)

This paper deals with variable-length lossy source coding. We consider a single-shot approach to source coding in which the source to be compressed is a random variable $X$. The performance criterion for codeword lengths is a cumulant generating function of codeword lengths, and the performance criterion for a distortion measure is the logarithmic loss distortion. We derive an upper bound on the cumulant generating function of codeword lengths under the condition that the logarithmic loss distortion is less than or equal to $D$, where $D \geq 0$ is a given distortion level. The upper bound is characterized by the Rényi entropy of the random variable $\left\lceil X / \lfloor \exp(D) \rfloor \right\rceil$. Numerical examples show that there are cases where the new upper bound is tighter than the upper bound derived in a previous study.
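
The quantized random variable appearing in the bound is easy to experiment with numerically. The sketch below computes the Rényi entropy of $\lceil X / \lfloor \exp(D) \rfloor \rceil$ for a small hypothetical pmf; the order alpha = 0.5 is illustrative only, since the order used in the paper's bound depends on its parameterization of the cumulant generating function.

    import math
    from collections import defaultdict

    def renyi_entropy(pmf, alpha):
        """H_alpha(X) = log(sum_x p(x)^alpha) / (1 - alpha), in nats."""
        return math.log(sum(p ** alpha for p in pmf.values())) / (1 - alpha)

    def quantize(pmf, D):
        """pmf of ceil(X / floor(exp(D))) for an integer-valued X."""
        m = math.floor(math.exp(D))
        q = defaultdict(float)
        for x, p in pmf.items():
            q[math.ceil(x / m)] += p
        return q

    pmf = {1: 0.5, 2: 0.25, 3: 0.125, 4: 0.125}   # hypothetical source
    print(renyi_entropy(quantize(pmf, D=1.0), alpha=0.5))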

A Simple Proof of Multi-letter Converse Theorem for Distributed Lossless Source Coding
Jun Muramatsu (NTT Corporation, Japan)

This paper provides a simple proof of the multi-letter converse theorem for distributed lossless source coding, covering Slepian-Wolf source coding and Wyner-Ahlswede-Körner source coding, where the number of reproduced sources is arbitrary. The theorem is shown only by using basic inequalities of the limit superior/inferior in probability, the non-negativity of the divergence, and the single-source version of the Fano inequality, where the epsilon-delta arguments are hidden in the above inequalities.

Asymptotic Optimality of the Asymmetric Encoding-Decoding Scheme
Hirosuke Yamamoto (The University of Tokyo, Japan); Ken-ichi Iwata (University of Fukui, Japan)

The Asymmetric Encoding-Decoding Scheme (AEDS) recently proposed by the authors is a lossless data compression scheme that performs encoding and decoding in reverse order of each other. We have shown that the optimal AEDS can attain a compression ratio better than (or at least the same as) the Huffman code and tANS (the tabled variant of Asymmetric Numeral Systems). In this paper we show that for any stationary memoryless source with a finite discrete alphabet, the average codeword length of the optimal AEDS attains the source entropy asymptotically as the number of states used in the AEDS increases.

Wednesday, November 13

Wednesday, November 13 9:00 - 10:00

We-AM-Poster: Poster Session

Room: IBB
Rate-Compatible Polar Coding Performance for LED-Propeller Based Image Sensor Communication
Shin Asaoka, Shan Lu, Takaya Yamazato and Zhengqiang Tang (Nagoya University, Japan)

This paper proposes the use of rate-compatible polar codes in image sensor communication (ISC) using a propeller LED transmitter (P-Tx). In a traditional zone-segment data transmission method, the length of the light trails for each bit signal is controlled according to the rotation radius, which improves transmission efficiency in the system. However, the length of the light trails and the received brightness values per bit vary with the rotation radius, resulting in variations in the error probability across different zones. Therefore, this paper proposes the use of polar codes to achieve optimal data transmission in each zone. By employing polar codes with different code lengths and code rates according to the rotation radius, we maximize the transmission performance specific to each zone. We evaluate the effectiveness of the proposed method through indoor experiments.

On the Hadamard-Type Matrices of Even Orders on Finite Fields
Iori Kodama and Tetsuya Kojima (National Institute of Technology, Tokyo College, Japan)

A Hadamard-type matrix on a finite field is a multi-valued square matrix $H$ over the finite field satisfying $HH^{T} \equiv nI \pmod{p}$, where $n$ is the order of the matrix and $I$ is an identity matrix. All additions and multiplications are executed modulo $p$. It has been shown that the order of a Hadamard-type matrix of odd order is limited to the quadratic residues of the given prime $p$. In this study, we show that it is possible to generate a Hadamard-type matrix of any even order.
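
For concreteness, the defining congruence can be checked mechanically. In the numpy sketch below, the order-2 example is the classical ±1 Hadamard matrix with -1 written as 4 in $\mathbb{F}_5$; it satisfies $HH^T \equiv 2I \pmod 5$.

    import numpy as np

    def is_hadamard_type(H: np.ndarray, p: int) -> bool:
        """Check H H^T == n I (mod p), where n is the order of H."""
        n = H.shape[0]
        return np.array_equal((H @ H.T) % p, (n * np.eye(n, dtype=int)) % p)

    H = np.array([[1, 1],
                  [1, 4]])             # -1 is represented by 4 in F_5
    print(is_hadamard_type(H, p=5))    # True: H H^T = 2I (mod 5)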

Faster Computation of Private Key in ID-NIKS Using DLP with Distinguished Points and Montgomery Multiplication
Hayato Inoue, Mizuki Miki, Ryuichi Sakai and Yasuyuki Murakami (Osaka Electro-Communication University, Japan)

In 1990, Murakami and Kasahara proposed an ID-based non-interactive key-sharing scheme (MK ID-NIKS) that uses the discrete logarithm problem with a composite number as the modulus. In this study, we implement faster private-key generation for the MK scheme by using the distinguished point method and Montgomery multiplication.

Distributed LT Coding of Correlated Sources for a Single-Output MAC with Erasures
Bo-Yu Lin (National Taiwan University of Science and Technology, Taiwan); Jen-Huan Huang and Jian-Jia Weng (National Chung Cheng University, Taiwan)

This work considers distributed encoding of binary correlated sources using Luby-transform (LT) codes, where the coded symbols are independently generated and transmitted over a single-output multiple-access channel (MAC) with erasures. Despite the independent encoding, a high source correlation reduces the reception overhead for the same error-rate requirement. Here, three decoding strategies are used to demonstrate this correlation benefit. Moreover, the number of received symbols from different encoders also impacts the data-recovery performance.

Maximal-Information Channel Optimized Scalar Quantization with Side-Information
Chen-An Cheng (National Chung Cheng University, Taiwan); Jian-Yu Lai (National Taiwan Ocean University, Taiwan); Jian-Jia Weng (National Chung Cheng University, Taiwan)

In a communication network consisting of a source node, a relay, and a destination, this work presents a design of channel-optimized scalar quantization (COSQ) at the relay, incorporating available side information (SI). The objective of the COSQ is to maximize the mutual information (MI) between the source data and its reconstruction at the destination. Numerical results demonstrate that SI generally increases the MI achieved by our COSQ, especially when the quality of the direct link (i.e., the source-to-destination channel) is poor. When the direct link is of high quality, the benefit of SI diminishes, and the MI is mainly determined by the quality of the relay-to-destination channel.

State-Dependent Joint Source-Channel Coding for Gilbert-Elliot Channels
Ting-Jhong Sie (National Taiwan Ocean University, Taiwan); Chen-An Cheng and Jian-Jia Weng (National Chung Cheng University, Taiwan)

This paper investigates a joint source-channel coding (JSCC) problem for transmitting a Gaussian source over Gilbert-Elliot channels. Under the mean square error (MSE) distortion measure and the optimal decoder, four cases of our JSCC scheme are compared. Results show that full state information at both encoder and decoder yields the highest signal-to-distortion ratio (SDR), while limited state information reduces SDR performance. Particularly, having state information at the decoder is more useful than at the encoder for this problem setting.

Light-Trail Based Polar Coding Scheme of Visual Optical Communication in Drone Propellers
Chenyu CAO, Zhengqiang Tang, Takaya Yamazato and Shan Lu (Nagoya University, Japan)

The purpose of this research is to transmit signals to the ground using light trails generated by the rotation of LED-equipped propellers on a drone. The blinking frequency of the LEDs automatically adapts to the rotation speed of the propeller. The system can display a specified pattern, with a single-bit signal represented by light trails at different angles of the circle. In this study, we compare the BER of 11° and 22° circular angles of light trails for 1-bit transmissions in an indoor environment at flight heights from 8 m to 12 m and analyze the causes of the errors. Based on these results, we propose a polar-coding communication scheme to improve the performance based on the error rates.

Secure Spectrogram Masking Based on Extended Visual Cryptography
Yuuki Tachioka (Denso IT Laboratory, Japan)

Masking-based audio watermarking is an important technique, which mixes many interference signals with a secret signal to prevent the secret signal from being extracted. We propose spectrogram masking in the time-frequency domain using the extended visual cryptography scheme, which is theoretically secure. This method can extract secret signals by adding the required number of share signals. The experiment shows that the proposed method can restore the secret signals.

Efficient Secret Reconstruction Quantum Circuit for Binary Stabilizer-Based Quantum Secret Sharing
Shogo Chiwaki and Ryutaroh Matsumoto (Tokyo Institute of Technology, Japan)

Stabilizer-based quantum secret sharing (QSS) has two procedures to reconstruct a quantum secret: the erasure correcting procedure, which requires measurements and auxiliary systems, and the unitary procedure, which does not. The unitary procedure is more efficient. This research focuses on designing efficient quantum circuits for the unitary procedure in binary stabilizer-based QSS. We propose a method to design a special encoding circuit that ensures the inverse of the encoding circuit outputs the secret unentangled with the extra qubits. We demonstrate our proposal using the [[7,1,3]] binary stabilizer code. The circuit designed by the proposed method is compared with the circuit designed by the straightforward erasure correcting procedure and is shown to be more efficient.

Sequential Matrix Completion Method Based on Variational Bayesian Methods
Shunsuke Obana, Yasuhiro Mochizuki and Shunsuke Horii (Waseda University, Japan)

We propose a model that applies the concept of sequential experimental design in statistics to the problem of matrix completion where elements are observed sequentially. By defining an information gain and observing at the location of its maximum, the method is expected to be highly accurate with a small sample.

A Computational Method to Approximate the Von Neumann Entropy of ASK Signals
Keisuke Goto and Shion Kitamura (Aichi Prefectural University, Japan); Tiancheng Wang (Kanagawa University, Japan); Tsuyoshi Usuda (Aichi Prefectural University, Japan)

Calculating the von Neumann entropy of a quantum information source is a critical problem in both quantum communication and quantum cryptography. As the number of signals increases, the calculation becomes increasingly challenging. This difficulty arises because the computation requires solving an eigenvalue problem for the Gram matrix, the size of which is proportional to the number of signals. In this study, we propose an approximation method for eigenvalues and the von Neumann entropy for amplitude-shift-keying (ASK) coherent-state signals by approximating the Gram matrix with a tridiagonal matrix. Our results demonstrate the effectiveness and practicality of this approximation.
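
A small numerical sketch of the idea (the exact construction in the paper may differ): for real ASK amplitudes, the coherent-state overlaps are ⟨α_i|α_j⟩ = exp(−(α_i − α_j)²/2), so the Gram matrix of an equiprobable ensemble can be built, truncated to its tridiagonal part, and the resulting entropies compared. The amplitudes below are hypothetical.

    import numpy as np

    amps = np.array([-3.0, -1.0, 1.0, 3.0])      # hypothetical 4-ASK amplitudes
    M = len(amps)
    G = np.exp(-0.5 * (amps[:, None] - amps[None, :]) ** 2)  # Gram matrix
    G_tri = np.triu(np.tril(G, 1), -1)           # keep only the tridiagonal part

    def von_neumann_entropy(mat):
        lam = np.linalg.eigvalsh(mat / M)        # eigenvalues of the ensemble state
        lam = lam[lam > 1e-12]
        return float(-(lam * np.log2(lam)).sum())

    print(von_neumann_entropy(G), von_neumann_entropy(G_tri))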

Uniform Side Information Gain via Multilevel Lattice Codes over Integers
Juliana Gomes Ferreira de Souza (University of Campinas, Brazil)

Index coding is a technique used to enhance efficiency in transmissions when the receiver has access to side information. We propose a coding scheme called CRT lattice index coding, which uses Construction $\pi_A$ lattices over $\mathbb{Z}$. This strategy addresses the index coding problem by providing uniform side information gain and reducing the decoding complexity associated with Construction $\pi_A$ lattices.

Comparison of CT Numbers Using Quantum Ghost Imaging and Ordinary Imaging
Yasushi Horiba (Aichi Prefectural University, Japan); Tiancheng Wang (Kanagawa University, Japan); Tsuyoshi Usuda (Aichi Prefectural University, Japan)

Quantum ghost imaging is a protocol that uses quantum entanglement. We compare classical measurements and quantum ghost imaging by conducting Geant4 simulations. In previous studies, results were calculated and evaluated by the number of photons in two- and three-dimensional object simulations. In this study, the object is a part of the human body and the results are quantified in terms of CT number which can be applied to medical applications.

Card-Based Threshold Function Protocols with a Fewer Cards by Using Directional Cards
Kanta Moriyama, Tomohiro Taguchi, Hayato Inoue and Yasuyuki Murakami (Osaka Electro-Communication University, Japan)

In this study, we propose five card-based protocols for threshold functions that use fewer cards by using directional cards.

Implementation of Deep Learning Using Homomorphic Encryption Library
Masanori Hirotomo (Saga University, Japan); Yoshiaki Shiraishi (Kobe University, Japan)

In this paper, we present an implementation of deep learning using the homomorphic encryption library Microsoft SEAL. For an efficient implementation, we propose an algorithm of matrix-matrix multiplication with diagonal packing.
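
The diagonal method referred to above can be sketched in plain numpy, with a cyclic rotation of the plaintext vector standing in for a rotation of ciphertext slots; it is shown here for a matrix-vector product, the building block that matrix-matrix multiplication applies column by column. This illustrates the packing idea only, not the SEAL API.

    import numpy as np

    def diag_matvec(M, v):
        """Compute M @ v as sum_i diag_i(M) * rot_i(v) (the diagonal method)."""
        n = len(v)
        acc = np.zeros(n)
        for i in range(n):
            d = np.array([M[j, (j + i) % n] for j in range(n)])  # i-th diagonal
            acc += d * np.roll(v, -i)   # np.roll models a ciphertext slot rotation
        return acc

    M = np.arange(16.0).reshape(4, 4)
    v = np.array([1.0, 2.0, 3.0, 4.0])
    assert np.allclose(diag_matvec(M, v), M @ v)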

Improvement of Fractal Compression with Domain Block Classification Based on DCT
Takashi Sakai and Natsuo Hayashi (The University of Kitakyushu, Japan); Takeru Miyazaki (Kyushu Institute of Information Sciences, Japan); Shunsuke Araki (Kyushu Institute of Technology, Japan); Satoshi Uehara (The University of Kitakyushu, Japan)

With the aim of shortening the encoding time of fractal image compression while maintaining the quality of the decoded image, we propose an encoding method that reduces the number of comparisons by classifying blocks with K-means using DCT components. As a consequence, we achieve a thirty-fold speed increase in encoding compared to non-classifying methods while maintaining a high PSNR. Additionally, we discuss the optimal number of classes in the K-means method with respect to encoding time.
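
A minimal sketch of the classification step, assuming SciPy is available: each 8×8 block is summarized by a few low-frequency DCT coefficients, and a plain K-means pass groups the blocks so that range blocks are later compared only against domain blocks of the same class. Block size, feature choice, and class count are hypothetical.

    import numpy as np
    from scipy.fft import dctn

    def block_features(img, bs=8):
        """Three low-frequency DCT coefficients per bs x bs block (DC excluded)."""
        feats = []
        for r in range(0, img.shape[0] - bs + 1, bs):
            for c in range(0, img.shape[1] - bs + 1, bs):
                d = dctn(img[r:r + bs, c:c + bs], norm="ortho")
                feats.append([d[0, 1], d[1, 0], d[1, 1]])
        return np.array(feats)

    def kmeans(X, k, iters=20, seed=0):
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(iters):
            labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
            centers = np.array([X[labels == j].mean(0) if (labels == j).any()
                                else centers[j] for j in range(k)])
        return labels

    img = np.random.default_rng(1).random((64, 64))   # stand-in image
    print(kmeans(block_features(img), k=4))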

Other Variations on a Card Protocol for 6-Card 3-Valued Equality Function (Six-Card Trick)
Yuji Suga (Internet Initiative Japan Inc., Japan)

This paper discusses a card protocol for 6-card 3-valued equality functions (the six-card trick). The six-card trick, proposed by Shinagawa et al. at ICISC 2018, is a simple method consisting of a pre-processing permutation and random cuts (cyclic permutations). In this paper, we provide other variations on the pre-processing permutation and report the discovery of new methods that are superior to the conventional method in terms of usability.

A Modulation-Level Coding Design for Downlink SCMA Systems
Yi-Feng Chen (National Taiwan Ocean University, Taiwan); Jian-Jia Weng (National Chung Cheng University, Taiwan)

Sparse code multiple access (SCMA) enables the simultaneous transmission of multiple users over shared channel resources by exploiting codeword sparsity. This work proposes a modulation-level coding method for downlink SCMA systems that introduces time correlation among consecutive codewords to reduce interference. Implemented within a trellis framework, this method allows for efficient joint detection using the Viterbi algorithm. Simulations show a significant error rate improvement in slow Rayleigh fading channels using the same SCMA codebook.

Wednesday, November 13 10:10 - 11:50

We-AM-1-1: Coding Theory 2

Room 2 (IB201)
Object Tracking Using Multiset Color Coding
Chung Shue Chen (Nokia Bell Labs, France); Yuan-Hsun Lo (National Pingtung University, Taiwan); Wing Shing Wong (The Chinese University of Hong Kong, China); Yijin Zhang (Nanjing University of Science and Technology, China)

We consider the problem of tracking an object that can randomly appear on a line of fixed length. We want to determine the position of the object and rely on sensors that can detect objects within a predefined range and report to a remote observer. Sensors are equipped with a transmitter that can transmit at a limited data rate. Once a sensor is triggered, it transmits its own ID. A straightforward protocol is to label each sensor with a different ID; however, this would require a large number of unique IDs and many bits to represent them, and hence a higher data rate. We propose a newly defined protocol using multiset color coding, designed for optimal efficiency in reusing a much smaller number of IDs across the whole system, so that only the minimal number of bits is required for labeling each sensor. We derive the factor of reduction and show its significance. We present some optimal constructions for the required multiset color coding sequence. Besides, we derive general upper and lower bounds on the maximum length (size) of the system. Numerical examples also demonstrate the effectiveness and improvement of the proposed method.

A Note on the Theoretical Support to Compute Dimension in Abelian Codes
José Joaquín Bernal (University of Murcia, Spain); Juan Jacobo Simón (Universidad de Murcia, Spain)

In this note we give theoretical support, by means of quotient polynomial rings, for the computation formulas of the dimension of abelian codes.

The Tight Upper Bound for the Size of Single Deletion Error Correcting Codes of Length 11
Kazuhisa Nakasho (Yamaguchi University, Japan); Manabu Hagiwara (Chiba University, Japan); Austin Anderson (Kapiolani Community College, USA); James Nation (University of Hawaii at Manoa, USA)

A single deletion error correcting code (SDECC) over the binary alphabet is a set of fixed-length sequences consisting of two types of symbols, 0 and 1, such that the original sequence can be recovered after at most one deletion error. There is a conjecture that the upper bound for the size of an SDECC equals the size of the Varshamov-Tenengolts (VT) code. This conjecture had been shown to be true when the code length is ten or less. In this paper, we discuss a method for calculating this upper bound by providing an integer linear programming solver with several linear constraints. As a new result, we obtained that the tight upper bound for the size of a single deletion error correcting code of length 11 is 172. In other words, we proved that the conjecture is true for length 11.
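
The VT codes mentioned in the conjecture are simple to enumerate for small lengths. The sketch below counts the sizes of the codes VT_a(n) = {x in {0,1}^n : sum_i i·x_i ≡ a (mod n+1)} and reports the largest; for n = 11 this gives 172, matching the bound proved in the paper.

    from itertools import product

    def vt_sizes(n):
        """Sizes of the Varshamov-Tenengolts codes VT_a(n) for a = 0..n."""
        sizes = [0] * (n + 1)
        for x in product((0, 1), repeat=n):
            syndrome = sum(i * xi for i, xi in enumerate(x, start=1)) % (n + 1)
            sizes[syndrome] += 1
        return sizes

    print(max(vt_sizes(11)))   # 172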

Necessary and Sufficient Conditions for Capacity-Achieving Private Information Retrieval and Its Construction Method
Atsushi Miki (Waseda University & Fujitsu, Japan); Yusuke Morishita and Toshiyasu Matsushima (Waseda University, Japan)

Private Information Retrieval (PIR) is a mechanism for efficiently downloading messages while keeping the index secret. The information-theoretic upper bound on efficiency has been proved in previous studies. In many previous studies, PIR properties and the proofs of capacity were notated in terms of entropy and probability. However, in order to construct a linear PIR, it is necessary to clarify the properties of the query matrix. In this paper, we prove the necessary and sufficient conditions for the PIR properties, and represent the PIR properties in matrix form. We also show a PIR construction method that satisfies the conditions.

A Note on Parity Check Matrices of Private Information Retrieval Codes
Koki Kazama (Shonan Institute of Technology, Japan); Takahiro Yoshida (Nihon University, Japan)

A private information retrieval (PIR) is an information retrieval scheme that allows a user to retrieve messages from information databases while keeping secret which one the user wants to retrieve. Sun et al. formulated the download rate and showed that there is an upper limit to it (capacity). Previous construction methods of a capacity-achieving linear PIR (CALPIR) are ad hoc. We propose conditions for a CALPIR using the parity check matrix.

We-AM-1-2: Symmetric-Key Encryption

Room 3 (IB202)
Development of an Outsourcing Password Evaluation System using Intel SGX
Takeki Kitayama and Yuma Nishihira (Tokai University, Japan); Akira Kanaoka (Toho University, Japan); Yoshio Kakizaki and Toshihiro Ohigashi (Tokai University, Japan)

With the recent increase in unauthorized access due to password-list attacks, users are required to set passwords that are both complex and not included in leaked lists. When password security is evaluated by the user, the burden on the user is large and impractical due to the increased size of compromised password lists. There are therefore methods to evaluate passwords on an external server, but passwords may then be compromised through eavesdropping by the server administrator. In this paper, we propose, implement and evaluate an outsourcing password evaluation system using Intel SGX, which can prevent passwords from being eavesdropped.

Security Analysis on End-to-End Encryption of Zoom Mail
Shogo Shiraki and Takanori Isobe (University of Hyogo, Japan)

Zoom Mail, an email service provided by Zoom Video Communications, features a proprietary end-to-end encryption (E2EE) scheme, which is detailed in a whitepaper available on GitHub. To date, there has been no detailed discussion or third-party evaluation of the security of the E2EE implementation in Zoom Mail. In this paper, we conduct a comprehensive security analysis of Zoom Mail's E2EE. Specifically, we establish four types of adversary models: insiders, outsiders, To/CC (carbon copy) members, and BCC (blind carbon copy) members. We focus on three security goals: confidentiality, integrity, and authenticity. Based on these adversary models and security goals, we present the results of a security evaluation of Zoom Mail's E2EE for the first time.

Formal Security Verification for Searchable Symmetric Encryption Using ProVerif
Mieno Takehiko (Shimonogo Otsu, Ueda City, Nagano & Epson Avasys Company, Japan); Hiroyuki Okazaki (Shinshu University Graduate School, Japan); Kenichi Arai (Nagasaki University, Japan); Yuichi Futa (Tokyo University of Technology, Japan); Yamamoto Hiroaki (Shinshu University, Japan)

With the rapid proliferation of various cloud storage services in recent years, the development of technology to efficiently search data while ensuring its confidentiality during cloud usage is an important issue. The technology that enables keyword searches on encrypted files using previously set keywords is called searchable symmetric encryption (SSE). In this paper, we propose a method for formally representing encrypted documents, and verify the security of SSE using the formal verification tool ProVerif. Our proposed method treats the channel-type terms of ProVerif as documents that include different keywords, in order to verify the indistinguishability of encrypted documents.

Practical Key Management for Searchable Symmetric Encryption using Fuzzy Extractor
Dai Watanabe and Masayuki Yoshino (Hitachi, Ltd., Japan)

With the spread of cloud computing, information leakage from the cloud is recognized as one of the significant issues. Searchable symmetric encryption (SSE) has been expected to be a core technology that fundamentally solves this problem. On the other hand, the technology achieves the security advantage of protecting data in the cloud in exchange for the users being responsible for strictly controlling their secret keys. Computer environments and IT literacy vary from user to user, so users may be exposed to the risk of attackers gaining unauthorized access to their secret keys. This background underlines the importance of users' key management. In this paper, we propose a method that secures and simplifies this process by encrypting and storing the secret key of SSE using a fuzzy extractor. This method not only enhances the security of SSE but also offers significant benefits for end users, such as the ability to operate in multiple computer environments, thereby providing a more flexible and user-friendly experience. We implemented the proposed method on a laptop computer running Windows 10 and a smartphone running Android OS, which are typical user execution environments. As a result, we confirmed that the proposed method is suitable for practical application.

Integral Attack with Bit-Based Division Property on Block Cipher LBC-3
Naoki Shibayama and Yasutaka Igarashi (Tokyo University of Science, Japan)

The block cipher LBC-3, proposed by Nyssanbayeva et al. in 2022, has a 64-bit block size, an 80-bit secret key, and 20 rounds. The designers investigated the avalanche effect of LBC-3, and Yasushi et al. evaluated its security against differential and linear cryptanalysis. On the other hand, its security against integral cryptanalysis, one of the most powerful cryptanalysis techniques for block ciphers, has not been evaluated. In this paper, we apply integral cryptanalysis to LBC-3. By establishing a Mixed Integer Linear Programming (MILP) model based on the bit-based division property, we discovered an 18-round integral characteristic, the longest integral characteristic of LBC-3. Then, based on a newly found 16-round characteristic, we show that an integral attack on full-round LBC-3 is possible with $2^{50.6}$ data and $2^{51.4}$ encryptions.

We-AM-1-3: Shannon Theory

Room 4 (IB301)
Capacity Region for Multiple-Access Channels Employing Relay Nodes
Kaito Suzuki and Tomohiko Uyematsu (Institute of Science Tokyo, Japan)

Almost all problems in multi-terminal information theory stem from the Slepian-Wolf coding problem and the coding problem for multiple access channels (MACs). Cover et al. considered a joint source-channel coding problem and integrated these two problems. One typical example of the communication model proposed by Cover et al. is a sensor network. However, in usual sensor networks, the transmission power of sensors is so small that a sensor cannot send its message to the data center directly, but only to the nearest relay or edge node, and the relay node resends the message to the data center through a backbone network. We model such a sensor network as a MAC employing relay nodes, which consists of a cascade of point-to-point channels and a MAC. We give a sufficient condition and a necessary condition for reliable communication over a MAC employing relay nodes. Moreover, we show that both conditions coincide if the sources are independent.

A New Algorithm for Computing α-Capacity
Akira Kamatsuka and Koki Kazama (Shonan Institute of Technology, Japan); Takahiro Yoshida (Nihon University, Japan)

The problem of computing the α-capacity for α>1 is equivalent to that of computing the correct decoding exponent. Various algorithms for computing them have been proposed, such as the Arimoto and the Jitsumatsu-Oohama algorithms. In this study, we propose a new alternating optimization algorithm for computing the α-capacity for α>1 based on a variational characterization of the Augustin-Csiszár mutual information. A comparison of the convergence performance of these algorithms is demonstrated through numerical examples.
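
For orientation, the sketch below shows the classic Arimoto-Blahut alternating optimization for ordinary channel capacity (the α → 1 case); the algorithms discussed above generalize this style of iteration to the α-capacity for α > 1. The channel matrix is hypothetical.

    import numpy as np

    def arimoto_blahut(W, iters=300):
        """Alternating optimization of I(p, W); W[x, y] = W(y|x), rows sum to 1."""
        p = np.full(W.shape[0], 1.0 / W.shape[0])      # start from uniform input
        for _ in range(iters):
            D = (W * np.log(W / (p @ W))).sum(axis=1)  # D(W(.|x) || pW)
            p = p * np.exp(D)
            p /= p.sum()
        D = (W * np.log(W / (p @ W))).sum(axis=1)
        return (p * D).sum() / np.log(2)               # capacity in bits

    W = np.array([[0.9, 0.1],
                  [0.2, 0.8]])                         # hypothetical DMC
    print(arimoto_blahut(W))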

Coding Theorems for Source Coding with Cost
Shigeaki Kuzuoka (Wakayama University, Japan)

Source coding with a cost function for code symbols is studied, and the moment function of the cost is investigated. Our coding theorems show that the optimal exponential moment of the cost is characterized by the Rényi entropy. Further, coding theorems for epsilon-coding are also given.

Sum Rate Computation for the Gaussian Many-Help-One Problem
Yasutada Oohama (University of Electro-Communications, Japan)

In this paper we consider the separate coding problem for $L+1$ correlated Gaussian memoryless sources. We deal with the case where $L$ separately encoded data streams work as side information at the decoder for the reconstruction of the remaining source. We study the case where the $L+1$ sources satisfy a kind of tree structure on their correlation; we call this tree structure of information sources the TS condition. In his previous work, the author determined the sum rate part of the rate-distortion region for the case where the information sources satisfy the TS condition. In this paper we derive an explicit recursive formula for this sum rate part. This result completes the author's previous work, in which the formula was established only for some limited cases.

Simulation of Individual Sequences Using Training Data
Tomohiko Uyematsu (Institute of Science Tokyo, Japan)

Simulation of a random process is one of the classic problems in information theory. The simulation problem finds wide application in generative AI, as well as in the generation of noise for simulating various physical systems. We revisit the simulation problem for individual sequences, and propose an efficient universal simulation scheme for a given training sequence $x^n$ of length $n$. In other words, our goal is to find a good deterministic mapping which generates sequences $y^n$ simulating $x^n$ from uniform random numbers. As criteria for good simulation schemes, we consider the following two conditions: 1) $y^n$ is statistically similar to $x^n$; 2) if $y^n$ satisfies condition 1), there is as much uncertainty as possible in the choice of $y^n$. We propose two simulation schemes based on the interval algorithm which can be executed with computational complexity of order $O(n)$. Both schemes satisfy condition 1), and their output entropy rates are clarified. Further, any simulation scheme which satisfies condition 1) cannot provide a significantly larger output entropy rate than our proposed schemes. This reveals the asymptotic optimality of the proposed simulation schemes.

Program last updated on 22:28 Asia/Tokyo