This is the implementation of the paper "Error Correction Coding for One-Bit Quantization With CNN-Based AutoEncoder".
@ARTICLE{9791347,
author={Zeng, Rui and Lu, Zhilin and Wang, Jintao and Song, Jian},
journal={IEEE Communications Letters},
title={Error Correction Coding for One-Bit Quantization With CNN-Based AutoEncoder},
year={2022},
volume={26},
number={8},
pages={1814-1818},
doi={10.1109/LCOMM.2022.3181502}}
This project jointly uses MATLAB and Python, so the matlab.engine package is needed by
getdata.py (data generation) and system_main.py (bit error rate testing).
Find the path like MATLAB\R2017b\extern\engines\python in your MATLAB installation, and run the command python setup.py install there.
pytorch >= 1.7.1
torchvision >= 0.8.2
python >= 3.6
home
├── data
│ ├── mydataset.py
│ ├── gen_data_4.npz (Data Files, generated by getdata.py, for QPSK modulation)
│ ├── gen_data_16.npz (Data Files, generated by getdata.py, for 16QAM modulation)
│ ├── *.npz (BER Files, generated by system_main.py)
│ ├── model_cnn/ (Model Files, generated by system_cnn.py)
├── models
│ ├── ECCNet.py (Encoder and Decoder)
│ ├── quantization.py (soft and hard quantization)
├── setting
│ ├── settings.py (random seed, gpu)
├── tools
│ ├── draw_BER.py
│ ├── draw_loss.py
│ ├── logger.py
│ ├── parse.py
│ ├── utils.py
├── traditional
│ ├── channel.py
│ ├── match_filtering.py
│ ├── pulse_shaping.py
│ ├── r_filter.py (root-raised cosine filter)
├── getdata.py
├── system_cnn.py
├── system_main.py
...
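The traditional/ chain performs pulse shaping and matched filtering with the root-raised cosine filter in r_filter.py. As a hedged illustration of what such a filter computes (the function name, arguments, and defaults below are ours, not necessarily the repo's API), here is a minimal NumPy sketch of the RRC impulse response with the two singular points of the closed-form expression handled explicitly:

```python
import numpy as np

def rrc_filter(beta, span, sps):
    """Root-raised cosine impulse response (illustrative sketch).

    beta: roll-off factor (0 < beta <= 1)
    span: filter length in symbol periods
    sps:  samples per symbol
    """
    n = np.arange(-span * sps // 2, span * sps // 2 + 1)
    t = n / sps  # time normalized to the symbol period
    h = np.zeros_like(t, dtype=float)
    # The closed-form RRC expression is 0/0 at t = 0 and |t| = 1/(4*beta);
    # evaluate those points with their known limits instead.
    sing0 = np.isclose(t, 0.0)
    sing1 = np.isclose(np.abs(t), 1.0 / (4.0 * beta))
    reg = ~(sing0 | sing1)
    tr = t[reg]
    h[reg] = (np.sin(np.pi * tr * (1 - beta))
              + 4 * beta * tr * np.cos(np.pi * tr * (1 + beta))) \
             / (np.pi * tr * (1 - (4 * beta * tr) ** 2))
    h[sing0] = 1.0 - beta + 4.0 * beta / np.pi
    h[sing1] = (beta / np.sqrt(2)) * (
        (1 + 2 / np.pi) * np.sin(np.pi / (4 * beta))
        + (1 - 2 / np.pi) * np.cos(np.pi / (4 * beta)))
    return h / np.sqrt(np.sum(h ** 2))  # normalize to unit energy

h = rrc_filter(beta=0.35, span=8, sps=4)
```

Applying the same filter at the transmitter (pulse_shaping.py) and receiver (match_filtering.py) yields an overall raised-cosine response, which is why the root-raised form is used on each side.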
python getdata.py -modem_num 4
python getdata.py -modem_num 16
The generated data files are data/gen_data_4.npz and data/gen_data_16.npz. We have already generated these two files, so you can use them directly.
Modifiable Parameters:
- train_len: train_len × len is the total training length.
- test_len: test_len × len is the total testing length.
- val_len: val_len × len is the total validation length.
- ber_len: ber_len × len is the final testing length.
- len: Turbo code input length (6144 bits)
- code_len: Turbo code output length (18444 bits)
- train_cut: generate training dataset with dimension of [train_cut, N]
- test_cut: generate testing dataset with dimension of [test_cut, N]
- val_cut: generate validation dataset with dimension of [val_cut, N]
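As a rough sketch of how the parameters above relate (the real getdata.py runs Turbo encoding and modulation through MATLAB; the block size N = 64 and the random placeholder symbols below are our assumptions for illustration only): train_len × len information bits become train_len × code_len coded bits after the 6144 → 18444 Turbo code, are mapped to symbols at log2(modem_num) bits per symbol, and are finally sliced into a [train_cut, N] array.

```python
import numpy as np

# Defaults mirroring the example command in this README
train_len, LEN, code_len = 300, 6144, 18444
train_cut, modem_num = 40000, 4

bits_per_symbol = int(np.log2(modem_num))    # 2 for QPSK, 4 for 16QAM
info_bits = train_len * LEN                  # total uncoded training bits
coded_bits = train_len * code_len            # after Turbo coding (incl. tail bits)
n_symbols = coded_bits // bits_per_symbol    # modulated symbols

# Placeholder symbols standing in for the MATLAB-generated ones,
# sliced into [train_cut, N] blocks (N = 64 is assumed, not the repo's value)
N = 64
symbols = np.random.randn(n_symbols) + 1j * np.random.randn(n_symbols)
usable = (n_symbols // N) * N
blocks = symbols[:usable].reshape(-1, N)[:train_cut]
```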
python getdata.py -train_len 300 -test_len 200 -val_len 100 -ber_len 100 -len 6144 -code_len 18444 -train_cut 40000 -test_cut 25000 -val_cut 10000 -modem_num 4
python system_cnn.py
Generated model files are in data/model_cnn_awgn_4/ (or data/model_cnn_awgn_16/).
Modifiable Parameters:
- epoch: total number of training epochs
- batch_size: batch size
- mode: 'train' or 'test'
- G: G-fold symbol extension
- N: number of symbols per block
- modem_num: modulation order (4 for QPSK, and 16 for 16QAM)
- unit_T: unit increment of soft quantization
- lr: learning rate of Autoencoder
- lr_step: decay step of learning rate
- snr: Eb/N0 when training
- snr_start: start of Eb/N0 when testing
- snr_step: step of Eb/N0 when testing
- snr_end: end of Eb/N0 when testing
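The unit_T parameter controls how the soft quantizer hardens during training: a smooth surrogate of the one-bit sign() function is used so gradients can flow, and its steepness grows by unit_T as training proceeds. A minimal PyTorch sketch of that idea, using tanh as the surrogate (our choice for illustration; quantization.py may parameterize the soft quantizer differently):

```python
import torch

def soft_sign(x, T):
    """Differentiable surrogate for the one-bit sign() quantizer.

    As the steepness T grows, tanh(T * x) approaches sign(x), so early
    training gets useful gradients while the quantizer hardens later.
    """
    return torch.tanh(T * x)

def hard_sign(x):
    """Straight-through one-bit quantizer: sign() forward, identity gradient."""
    return x + (torch.sign(x) - x).detach()

x = torch.linspace(-1.0, 1.0, 5)
T = 1.0
unit_T = 0.5  # assumed per-epoch steepness increment (cf. the -unit_T flag)
for epoch in range(3):
    y = soft_sign(x, T)  # train with the soft quantizer at the current T
    T += unit_T          # anneal toward the hard sign function
```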
python system_cnn.py -channel_mode awgn -modem_num 4 -snr 5 -lr 5e-4
python system_cnn.py -channel_mode awgn -modem_num 16 -snr 10 -lr 1e-4
python system_main.py
The resulting BER is saved in data/unquantized_4.npz (and similarly named files).
Modifiable Parameters:
- ber_len: ber_len × len is the final testing length
- curve: measure BER for 'unquantized', 'quantized', or 'cnn'
- G: G-fold symbol extension
- N: number of symbols per block
- modem_num: modulation order (4 for QPSK, and 16 for 16QAM)
- snr_start: start of Eb/N0 when testing
- snr_step: step of Eb/N0 when testing
- snr_end: end of Eb/N0 when testing
- model_path: path of trained model
python system_main.py -channel_mode awgn -curve cnn -modem_num 4 -snr_start -1 -snr_end 3 -snr_step 1 -model_path ./data/model_cnn_awgn_4
python system_main.py -channel_mode awgn -curve cnn -modem_num 16 -snr_start -2 -snr_end 13 -snr_step 1 -model_path ./data/model_cnn_awgn_16
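The BER values written to the .npz files are, in essence, empirical error rates over the ber_len × len test bits at each Eb/N0 point. A minimal sketch of that measurement (variable names and the output filename are illustrative, not the repo's):

```python
import numpy as np

def measure_ber(tx_bits, rx_bits):
    """Empirical bit error rate: fraction of bits decoded incorrectly."""
    return float(np.mean(np.asarray(tx_bits) != np.asarray(rx_bits)))

rng = np.random.default_rng(0)
tx = rng.integers(0, 2, size=6144)   # one Turbo input block of bits
rx = tx.copy()
rx[:61] ^= 1                         # flip 61 of 6144 bits -> BER ~ 1e-2
ber = measure_ber(tx, rx)

# One BER value per Eb/N0 point, saved like the data/*.npz result files
snrs = np.arange(-1, 4, 1)           # e.g. -snr_start -1 -snr_end 3 -snr_step 1
np.savez("ber_demo.npz", snr=snrs, ber=np.full(snrs.shape, ber))
```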