Machine Learning for Millimeter Wave and Massive MIMO Systems

1) Mapping Channels in Space and Frequency

Illustration of a general system model of a user communicating with a set of antennas. Assume the user is communicating with
only one of the two sets of antennas. Can we map the channels at one set of antennas and one frequency band to the channels at
another set of antennas and possibly at a different frequency band?

◾ Paper: Muhammad Alrabeiah and Ahmed Alkhateeb, "Deep Learning for TDD and FDD Massive MIMO: Mapping
Channels in Space and Frequency," arXiv e-prints, p. arXiv:1905.03761, May 2019.
◾ Key Idea: Can we map the channels at one set of antennas and one frequency band to the channels at another set of
antennas---possibly at a different location and a different frequency band? If this channel-to-channel mapping is
possible, we can expect dramatic gains for massive MIMO systems. For example, in FDD massive MIMO, the uplink
channels can be mapped to the downlink channels or the downlink channels at one subset of antennas can be mapped to
the downlink channels at all the other antennas. This can significantly reduce (or even eliminate) the downlink
training/feedback overhead. In the context of cell-free/distributed massive MIMO systems, this channel mapping can be
leveraged to reduce the fronthaul signaling overhead as only the channels at a subset of the distributed terminals need to
be fed to the central unit which can map them to the channels at all the other terminals. This mapping can also find
interesting applications in mmWave beam prediction, MIMO radar, and massive MIMO based positioning.

In this paper, we introduce the new concept of channel mapping in space and frequency, where the channels at one set of
antennas and one frequency band are mapped to the channels at another set of antennas and frequency band. First, we
prove that this channel-to-channel mapping function exists under the condition that the mapping from the candidate
user positions to the channels at the first set of antennas is bijective, a condition that can be achieved with high
probability in several practical MIMO communication scenarios. Then, we note that the channel-to-channel mapping
function, even if it exists, is typically unknown and very hard to characterize analytically as it heavily depends on the
various elements of the surrounding environment. With this motivation, we propose to leverage the powerful learning
capabilities of deep neural networks to learn (approximate) this complex channel mapping function. For a case study of
a distributed/cell-free massive MIMO system with 64 antennas, the results show that the channels acquired at only 4-8
antennas can be efficiently mapped to the channels at all 64 distributed antennas, even if the 64 antennas operate at a
different frequency band. Further, the 3D ray-tracing based simulations show that the achievable rates with the
predicted channels are near-optimal when compared to the upper bound with perfect channel knowledge.
This highlights a novel solution for reducing the training and feedback overhead in mmWave and massive MIMO systems,
thanks to the powerful learning capabilities of deep neural networks.
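
To make the idea concrete, the following is a minimal MATLAB sketch (requiring the Deep Learning Toolbox) of such a channel-to-channel mapping network. It is an illustrative placeholder rather than the paper's exact architecture: the layer sizes are assumptions, and the random channels below stand in for DeepMIMO-generated ones.

    % Minimal sketch of learning a channel-to-channel mapping (illustrative,
    % not the paper's exact architecture). Requires the Deep Learning Toolbox.
    numIn    = 8;      % antennas whose channels are observed
    numOut   = 64;     % antennas whose channels are predicted
    numUsers = 5000;   % training samples (one channel pair per user position)

    % Placeholder channels; in practice these come from the DeepMIMO dataset.
    Hin  = (randn(numUsers,numIn)  + 1j*randn(numUsers,numIn)) /sqrt(2);
    Hout = (randn(numUsers,numOut) + 1j*randn(numUsers,numOut))/sqrt(2);

    % Stack real/imaginary parts so the network operates on real vectors.
    X = [real(Hin)  imag(Hin)];
    Y = [real(Hout) imag(Hout)];

    layers = [
        featureInputLayer(2*numIn)
        fullyConnectedLayer(512)
        reluLayer
        fullyConnectedLayer(512)
        reluLayer
        fullyConnectedLayer(2*numOut)   % predicted real/imag channel entries
        regressionLayer];

    opts = trainingOptions('adam','MaxEpochs',20,'MiniBatchSize',256);
    net  = trainNetwork(X, Y, layers, opts);   % learns the mapping X -> Y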

◾ To reproduce the results, please follow these steps:


1. Generate a dataset for scenario I1_2p4 using the settings in this table. The number of paths should be set to 1.
2. Organize the data into a MATLAB structure named "rawData" with the following fields: channel and userLoc. "channel" is a 3D array with dimensions # of antennas x # of sub-carriers x # of users, while "userLoc" is a 2D array with dimensions 3 x # of users (a sketch of steps 2-4 is shown below).
3. Save the data structure into a .mat file.
4. In main.m, set the option options.rawDataFile1 to point to the .mat file.
5. Run main.m.
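
As a concrete illustration of steps 2-4, here is a short MATLAB sketch. The sizes and the .mat file name are arbitrary, and the random arrays are hypothetical stand-ins for the data produced by the DeepMIMO generation script.

    % Hypothetical placeholders for the arrays produced by the DeepMIMO
    % generation script (replace with the real generated data).
    numAnt = 64; numSub = 32; numUsers = 1000;
    channels       = (randn(numAnt,numSub,numUsers) + 1j*randn(numAnt,numSub,numUsers))/sqrt(2);
    user_locations = rand(3,numUsers);

    rawData.channel = channels;         % # of antennas x # of sub-carriers x # of users
    rawData.userLoc = user_locations;   % 3 x # of users
    save('I1_2p4_rawData.mat','rawData');

    % Then, in main.m, point to the saved file:
    % options.rawDataFile1 = 'I1_2p4_rawData.mat';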

◾ GitHub link: Source code of the paper


◾ If you have any questions regarding the code and the dataset used, please contact Muhammad Alrabeiah.

2) mmWave Beam and Blockage Prediction Using Sub-6GHz Channels

The adopted system model where a base station and a mobile user communicate over both sub-6GHz and mmWave bands. For both
bands, the base station implements co-located antenna arrays; both arrays are connected to the same central unit.

◾ Paper: Muhammad Alrabeiah and Ahmed Alkhateeb, "Deep Learning for mmWave Beam and Blockage Prediction
Using Sub-6GHz Channels," arXiv e-prints, p. arXiv:1910.02900, Oct. 2019.
◾ Key Idea: Predicting the millimeter wave (mmWave) beams and blockages using sub-6GHz channels has the potential
of enabling mobility and reliability in scalable mmWave systems. Prior work has focused on extracting spatial channel
characteristics at the sub-6GHz band and then using them to reduce the mmWave beam training overhead. This approach
still requires beam refinement at mmWave and does not normally account for the different dielectric properties at the
different bands. In this paper, we first prove that under certain conditions, there exist mapping functions that can
predict the optimal mmWave beam and blockage status directly from the sub-6GHz channel. These mapping functions,
however, are hard to characterize analytically which motivates exploiting deep neural network models to learn them. For
that, we prove that a large enough neural network can predict mmWave beams and blockages with success probabilities
that can be made arbitrarily close to one. Then, we develop a deep learning model and empirically evaluate its
beam/blockage prediction performance using a publicly available dataset. The results show that the proposed solution
can predict the mmWave blockages with more than 90% success probability and can predict the optimal mmWave
beams to approach the upper bounds while requiring no beam training overhead.
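
The following is a minimal MATLAB sketch (Deep Learning Toolbox) of the beam-prediction idea: a classifier that maps a sub-6GHz channel to one of the mmWave codebook beams. The layer sizes and synthetic data are illustrative assumptions, not the paper's exact model.

    % Minimal sketch of sub-6GHz-to-mmWave beam prediction as classification
    % (illustrative, not the paper's exact model). Requires the DL Toolbox.
    numSub6  = 4;      % sub-6GHz antennas
    numBeams = 64;     % mmWave beam codebook size
    numUsers = 5000;

    % Placeholder sub-6GHz channels and optimal-beam labels; in practice
    % both are generated from the DeepMIMO O1_3p5 / O1_28 scenarios.
    h = (randn(numUsers,numSub6) + 1j*randn(numUsers,numSub6))/sqrt(2);
    X = [real(h) imag(h)];
    Y = categorical(randi(numBeams,numUsers,1), 1:numBeams);

    layers = [
        featureInputLayer(2*numSub6)
        fullyConnectedLayer(256)
        reluLayer
        fullyConnectedLayer(256)
        reluLayer
        fullyConnectedLayer(numBeams)   % one score per candidate beam
        softmaxLayer
        classificationLayer];

    opts = trainingOptions('adam','MaxEpochs',20,'MiniBatchSize',256);
    net  = trainNetwork(X, Y, layers, opts);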

◾ To reproduce the results, please follow these steps:


1. Generate the datasets using the scenarios 'O1_28' and 'O1_3p5' from the DeepMIMO datasets. Use the parameters illustrated in Table 1 in Section VII-B of the paper. (Note that the DeepMIMO source data are available on this link.)
2. Prepare two MATLAB structures, one for the sub-6GHz data and the other for the 28GHz data. Please refer to the comments at the beginning of main.m for more information on the data structures.
3. Assign the paths of the two .mat files to the two parameters options.dataFile1 and options.dataFile2 at the beginning of main.m (see the sketch below).
4. Run main.m to get the figure on the right.
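
For illustration, step 3 amounts to something like the following. The file names are hypothetical; the exact fields the two structures must contain are documented in the comments at the top of main.m.

    % Hypothetical file names; see the comments at the top of main.m for
    % the exact fields the two structures must contain.
    options.dataFile1 = 'sub6GHz_data.mat';   % sub-6GHz (O1_3p5) structure
    options.dataFile2 = 'mmWave28_data.mat';  % 28GHz (O1_28) structure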

◾ GitHub link: Source code of the paper


◾ If you have any questions regarding the code and the dataset used, please contact Muhammad Alrabeiah.

3) Deep Learning for Massive MIMO with 1-Bit ADCs

The adopted massive MIMO system where the base station receiver uses 1-bit ADCs. The uplink quantized received
measurement matrix is fed to a deep learning model that predicts the channel vector.

◾ Paper: Yu Zhang, Muhammad Alrabeiah, and Ahmed Alkhateeb, "Deep Learning for Massive MIMO with 1-Bit ADCs:
When More Antennas Need Fewer Pilots," arXiv e-prints, p. arXiv:1910.06960, Oct. 2019.
◾ Key Idea: In this paper, we propose a deep-learning based framework for the channel estimation problem in massive
MIMO systems with 1-bit ADCs. In this framework, the prior channel estimation observations and deep neural networks
are exploited to learn the mapping from the received quantized measurements to the channels. Learning this mapping,
however, requires its existence in the first place. For that, we derive the sufficient length and structure of the pilot
sequence that guarantee the existence of this quantized measurement to channel mapping. Then, we make the
interesting observation that for the same set of candidate user locations, more antennas require fewer pilots to guarantee
the mapping existence. This means that increasing the number of base station antennas reduces the number of required
pilots to estimate their channels, which may seem counter-intuitive. The intuition justifying this observation, however, is
that with more antennas, the quantized measurement vectors become more distinctive across the different channels. Hence,
they can be efficiently mapped to their corresponding channels with lower error probability. This observation is also
proved analytically for the case of single-path channels. Simulation results, based on a publicly available 3D ray-tracing
dataset, highlight the promising gains of the proposed deep learning approach that requires only a few pilots to
efficiently estimate the massive MIMO channels. Further, the results confirm that more antennas lead to better channel
estimates, both in terms of normalized mean-squared error (NMSE) and per-antenna signal-to-noise ratio (SNR).
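
To make the measurement model concrete, a minimal MATLAB sketch of 1-bit quantization is shown below; the channel, pilot sequence, and noise level are synthetic placeholders for the paper's DeepMIMO-based setup.

    % Minimal sketch of the 1-bit ADC measurement model: each ADC keeps only
    % the signs of the real and imaginary parts of the received signal.
    M  = 64;   % base station antennas
    Np = 8;    % pilot sequence length

    h = (randn(M,1) + 1j*randn(M,1))/sqrt(2);      % placeholder channel
    s = exp(1j*2*pi*rand(1,Np));                   % unit-modulus pilot sequence
    n = sqrt(0.05)*(randn(M,Np) + 1j*randn(M,Np)); % noise

    Y  = h*s + n;                              % unquantized receive matrix
    Yq = sign(real(Y)) + 1j*sign(imag(Y));     % quantized measurements
    % Yq (suitably reshaped) is the deep model's input; the model learns the
    % inverse mapping from Yq back to the channel vector h.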

◾ To reproduce the results, please follow these steps:


1. Download all the files of this GitHub repository.
2. Create two empty folders in the same directory as the downloaded code and name them "Networks" and "Data", respectively. As the names indicate, "Networks" will store the trained neural networks and "Data" will store the predicted channels for evaluation.
3. Run "main.m" in MATLAB.
4. When "main.m" finishes, execute "Fig3_Generator.m", which will generate the figure on the right (the full flow is sketched below).
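
The full flow, as a short MATLAB sketch of steps 2-4:

    % Steps 2-4 in code form: create the two folders, then run the scripts.
    if ~exist('Networks','dir'), mkdir('Networks'); end  % trained networks
    if ~exist('Data','dir'),     mkdir('Data');     end  % predicted channels
    run('main.m');
    run('Fig3_Generator.m');   % generates the figure on the right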

◾ GitHub link: Source code of the paper


◾ If you have any questions regarding the code and the dataset used, please contact Yu Zhang.

4) Deep Learning for Large Intelligent Surfaces

Illustration of the proposed large intelligent surface (LIS) architecture, where a few active channel sensors are randomly
distributed over the LIS. These active elements have two modes of operation: (i) a channel sensing mode, where they are
connected to the baseband and are used to estimate the channels, and (ii) a reflection mode, where they simply reflect the
incident signal by applying a phase shift. The rest of the LIS elements are passive reflectors and are not connected to the
baseband.

◾ Paper: Abdelrahman Taha, Muhammad Alrabeiah, and Ahmed Alkhateeb, "Enabling Large Intelligent Surfaces with
Compressive Sensing and Deep Learning," arXiv e-prints, p. arXiv:1904.10136, Apr 2019.
◾ Key Idea: Employing large intelligent surfaces (LISs) is a promising solution for improving the coverage and rate of
future wireless systems. These surfaces comprise a massive number of nearly-passive elements that interact with the
incident signals, for example by reflecting them, in a smart way that improves the wireless system performance. Prior
work focused on the design of the LIS reflection matrices assuming full knowledge of the channels. Estimating these
channels at the LIS, however, is a key challenge, and is associated with large training overhead given the
massive number of LIS elements. This paper proposes efficient solutions for these problems by leveraging tools from
compressive sensing and deep learning. First, a novel LIS architecture based on sparse channel sensors is proposed. In
this architecture, all the LIS elements are passive except for a few elements that are active (connected to the baseband of
the LIS controller). We then develop two solutions that design the LIS reflection matrices with negligible training
overhead. In the first approach, we leverage compressive sensing tools to construct the channels at all the LIS elements
from the channels seen only at the active elements. These full channels can then be used to design the LIS reflection
matrices with no training overhead. In the second approach, we develop a deep learning based solution where the LIS
learns how to optimally interact with the incident signal given the channels at the active elements, which represent the
current state of the environment and transmitter/receiver locations. We show that the achievable rates of the proposed
compressive sensing and deep learning solutions approach the upper bound, which assumes perfect channel knowledge,
with negligible training overhead and with less than 1% of the elements being active.
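
To make the reflection design concrete, here is a minimal MATLAB sketch of the principle the paper builds on: with known channels, each LIS element co-phases the cascaded transmitter-LIS-receiver link. The channels below are synthetic placeholders, and the sketch ignores the active/passive element split.

    % Minimal sketch of LIS reflection design with known channels
    % (illustrative; channels are placeholders, not DeepMIMO data).
    N  = 256;                                    % LIS elements
    ht = (randn(N,1) + 1j*randn(N,1))/sqrt(2);   % transmitter -> LIS channel
    hr = (randn(N,1) + 1j*randn(N,1))/sqrt(2);   % LIS -> receiver channel

    theta = -angle(ht.*hr);                      % co-phasing phase shifts
    gOpt  = sum(ht .* exp(1j*theta) .* hr);      % aligned cascaded channel
    gRand = sum(ht .* exp(1j*2*pi*rand(N,1)) .* hr);

    fprintf('Co-phased : %.2f bps/Hz\n', log2(1 + abs(gOpt)^2));
    fprintf('Random    : %.2f bps/Hz\n', log2(1 + abs(gRand)^2));
    % The paper's challenge is obtaining ht and hr (or the reflection design
    % directly) from the channels at only a few active elements.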

◾ To reproduce the results, please follow these steps:


1. Download all the files of this GitHub project and add them to
the "DeepMIMO_Dataset_Generation" folder. (Note that the
DeepMIMO source data are available on this link).
2. Run the file named "Fig10_generator.m" in MATLAB and
the script will sequentially execute the following tasks:
a. Generate the inputs and outputs of the deep learning
model
b. Build, train, and test the deep learning model
c. Process the deep learning outputs and generate the
performance results.

◾ GitHub link: Source code of the paper


◾ If you have any questions regarding the code and the dataset used, please contact Abdelrahman Taha.

5) Millimeter Wave Beam Prediction Based on Multipath Signature

Block diagram of the proposed mmWave coordinated beamforming system. The transmitted signal at every subcarrier is first
precoded at the central/cloud processing unit using digital precoders, and then transmitted jointly from the terminals/BSs
employing the RF beamforming vectors.

◾ Paper: Ahmed Alkhateeb, Sam Alex, Paul Varkey, Ying Li, Qi Qu, and Djordje Tujkovic, "Deep Learning Coordinated
Beamforming for Highly-Mobile Millimeter Wave Systems," in IEEE Access, vol. 6, pp. 37328-37348, 2018.
◾ Key Idea: Supporting high mobility in millimeter wave (mmWave) systems enables a wide range of important
applications such as vehicular communications and wireless virtual/augmented reality. Realizing this in practice,
though, requires overcoming several challenges. First, the use of narrow beams and the sensitivity of mmWave signals to
blockage greatly impact the coverage and reliability of highly-mobile links. Second, highly-mobile users in dense
mmWave deployments need to frequently hand off between base stations (BSs), which is associated with critical control
and latency overhead. Further, identifying the optimal beamforming vectors in large antenna array mmWave systems
requires considerable training overhead, which significantly affects the efficiency of these mobile systems. In this paper,
a novel integrated machine learning and coordinated beamforming solution is developed to overcome these challenges
and enable highly-mobile mmWave applications. In the proposed solution, a number of distributed yet coordinating BSs
simultaneously serve a mobile user. This user ideally needs to transmit only one uplink training pilot sequence that will
be jointly received at the coordinating BSs using omni or quasi-omni beam patterns. These received signals draw a
defining signature not only for the user location, but also for its interaction with the surrounding environment. The
developed solution then leverages a deep learning model that learns how to use these signatures to predict the
beamforming vectors at the BSs. This renders a comprehensive solution that supports highly-mobile mmWave
applications with reliable coverage, low latency, and negligible training overhead. Extensive simulation results, based on
accurate ray-tracing, show that the proposed deep-learning coordinated beamforming strategy approaches the
achievable rate of the genie-aided solution that knows the optimal beamforming vectors with no training overhead, and
attains higher rates compared to traditional mmWave beamforming techniques.
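
As a rough illustration of how such a signature can be turned into a deep learning input, the MATLAB sketch below stacks the omni-received uplink pilots from all coordinating BSs into one real-valued feature vector per user; all dimensions and the normalization are assumptions, not the paper's exact pipeline.

    % Rough sketch of building the input "signature" from omni-received
    % uplink pilots at N coordinating BSs (dimensions are illustrative).
    N = 4;    % coordinating BSs
    K = 64;   % OFDM subcarriers
    numUsers = 5000;

    % Placeholder received pilots: one complex sample per BS and subcarrier.
    R = (randn(numUsers,N*K) + 1j*randn(numUsers,N*K))/sqrt(2);

    % Normalize per user and stack real/imag parts; the resulting feature
    % vector is what the deep model maps to the best beam at every BS.
    R = R ./ max(abs(R),[],2);
    X = [real(R) imag(R)];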

◾ To reproduce the results, please follow these steps:
1. Download the "BeamPrediction_Signature.zip" file,
expand/uncompress it, and then add the folder to
the "DeepMIMO_Dataset_Generation" folder. (Note
that the DeepMIMO source data are available on
this link).
2. Run the file named "DL_CoordinatedBeamforming.m" in MATLAB to generate the inputs/outputs of the deep learning model.
3. Run the file named "DLmodel_python_code.py" to build, train, and test the deep learning model. This step
requires Python 3.6, Keras, and TensorFlow.
4. Run the file named "Figure_Generator.m" in MATLAB to process the deep learning outputs and generate the
performance results/figures.

◾ GitHub link: Source code of the paper


◾ If you have any questions regarding the code and used dataset, please contact Ahmed Alkhateeb.

6) Deep Learning for mmWave Channel Estimation and Hybrid Precoding Prediction

The proposed auto-precoder neural network consists of two sections: the channel encoder and the precoder. The channel
encoder takes the vectorized channel as an input and passes it through two complex-valued convolutional layers that
mimic the Kronecker product operation of the transmit and receive measurement matrices. The output of the channel encoder
(the received sensing vector) is then fed to the precoder network, which predicts the optimal hybrid beamforming vectors.

◾ Paper: Xiaofeng Li and Ahmed Alkhateeb, "Deep Learning for Direct Hybrid Precoding in Millimeter Wave Massive
MIMO Systems," submitted to Asilomar, 2019.
◾ Key Idea: This paper proposes a novel neural network architecture, that we call an auto-precoder, and a deep-learning
based approach that jointly senses the millimeter wave (mmWave) channel and designs the hybrid precoding matrices
with only a few training pilots. More specifically, the proposed machine learning model leverages the prior observations
of the channel to achieve two objectives. First, it optimizes the compressive channel sensing vectors based on the
surrounding environment in an unsupervised manner to focus the sensing power on the most promising spatial
directions. This is enabled by a novel neural network architecture that accounts for the constraints on the RF chains and
models the transmitter/receiver measurement matrices as two complex-valued convolutional layers. Second, the
proposed model learns how to construct the RF beamforming vectors of the hybrid architectures directly from the
projected channel vector (the received sensing vector). The auto-precoder neural network that incorporates both the
channel sensing and beam prediction is trained end-to-end as a multi-task classification problem. Thanks to this design
methodology that leverages the prior channel observations and the implicit awareness about the surrounding
environment/user distributions, the proposed approach significantly reduces the training overhead compared to
classical (non-machine learning) solutions. For example, for a system of 64 transmit and 64 receive antennas, with 3 RF
chains at both sides, the proposed solution needs only 8 or 16 channel training pilots to directly predict the RF
beamforming/combining vectors of the hybrid architectures and achieve near-optimal achievable rates. This highlights a
promising solution for the channel estimation and hybrid precoding design problem in mmWave and massive MIMO
systems.
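
A small MATLAB sketch of the linear-algebra fact behind the two convolutional layers: a beamformed measurement is a linear projection of the vectorized channel. The beamforming vectors below are random unit-modulus placeholders.

    % Why compressive measurements are linear in the vectorized channel:
    % for a combiner w and precoder f, w'*H*f = kron(f.', w') * H(:).
    Nt = 64; Nr = 64;
    H = (randn(Nr,Nt) + 1j*randn(Nr,Nt))/sqrt(2);    % placeholder channel
    f = exp(1j*2*pi*rand(Nt,1))/sqrt(Nt);            % analog precoder
    w = exp(1j*2*pi*rand(Nr,1))/sqrt(Nr);            % analog combiner

    y1 = w' * H * f;               % measurement via beamforming
    y2 = kron(f.', w') * H(:);     % identical measurement as a projection
    fprintf('difference: %.2e\n', abs(y1 - y2));   % ~0 up to round-off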

◾ To reproduce the results, please follow these steps:


1. Download all the files of this GitHub project.
2. Quick run: Run "python main_train_beamforming.py -train 1" in a terminal to train the model, and run "python
main_train_beamforming.py -train 0" for testing. The default parameters are:
dataset='DeepMIMO_dataset_train20.mat' and 'DeepMIMO_dataset_test20.mat' (which correspond to a
total transmit power of 20 dB), epochs=15, batch_size=512, learning_rate=0.002.

◾ GitHub link: Source code of the paper


◾ If you have any questions regarding the code and the dataset used, please contact Ahmed Alkhateeb.

7) Machine Learning for Blockage Prediction and Proactive Handoff: Towards Reliable mmWave Systems

The figure on the left illustrates the system model that considers one user moving along a trajectory, served by one out of N
candidate BSs at every step in the trajectory, with some probability that the serving link is blocked. The figure on the right
shows the proposed recurrent neural network model for blockage prediction and proactive handoff.

◾ Paper: Ahmed Alkhateeb and Iz Beltagy, "Machine Learning for Reliable mmWave Systems: Blockage Prediction and
Proactive Handoff," in Proc. of IEEE GlobalSIP, 2018.
◾ Key Idea: The sensitivity of millimeter wave (mmWave) signals to blockages is a fundamental challenge for mobile
mmWave communication systems. The sudden blockage of the line-of-sight (LOS) link between the base station and the
mobile user normally leads to disconnecting the communication session, which highly impacts the system reliability.
Further, reconnecting the user to another LOS base station incurs high beam training overhead and critical latency
problems. In this paper, we leverage machine learning tools and propose a novel solution for these reliability and latency
challenges in mmWave MIMO systems. In the developed solution, the base stations learn how to predict that a certain
link will experience blockage in the next few time frames using their past observations of adopted beamforming vectors.
This allows the serving base station to proactively hand over the user to another base station with a highly probable LOS
link. Simulation results show that the developed deep learning based strategy successfully predicts blockage/hand-off
close to 95% of the time. This reduces the probability of communication session disconnection, which ensures high
reliability and low latency in mobile mmWave systems.
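
A minimal MATLAB sketch (Deep Learning Toolbox) of the recurrent idea: an LSTM reads a short history of beam observations and classifies whether the link is about to be blocked. The sequences below are synthetic placeholders, and the feature encoding and layer sizes are assumptions rather than the paper's exact model.

    % Minimal sketch of sequence-based blockage prediction with an LSTM
    % (illustrative placeholders, not the paper's exact model).
    numBeams = 64; seqLen = 8; numSeq = 2000;

    XTrain = cell(numSeq,1);
    for i = 1:numSeq
        % Each sequence: normalized indices of the beams used over the past
        % seqLen time frames (a crude stand-in for the beam observations).
        XTrain{i} = randi(numBeams,1,seqLen)/numBeams;   % 1 x seqLen
    end
    YTrain = categorical(randi(2,numSeq,1), 1:2, {'LOS','blocked'});

    layers = [
        sequenceInputLayer(1)
        lstmLayer(128,'OutputMode','last')
        fullyConnectedLayer(2)
        softmaxLayer
        classificationLayer];

    opts = trainingOptions('adam','MaxEpochs',10,'MiniBatchSize',128);
    net  = trainNetwork(XTrain, YTrain, layers, opts);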

◾ To reproduce the results, please follow these steps: Coming soon!


◾ If you have any questions regarding the code and the dataset used, please contact Ahmed Alkhateeb.

8) Channel Covariance Prediction using Generative Adversarial Networks for mmWave Massive MIMO

The figure on the left illustrates the system model that considers one user/vehicle moving and communicating with N BSs that
simultaneously receive uplink pilot signals from the user. The figure on the right shows the proposed conditional generative
adversarial neural network model for the channel covariance estimation problem.

◾ Paper: Xiaofeng Li, Ahmed Alkhateeb, and Cihan Tepedelenlioğlu, "Generative Adversarial Estimation of Channel
Covariance in Vehicular Millimeter Wave Systems," in Proc. of Asilomar, 2018.
◾ Key Idea: Enabling highly-mobile millimeter wave (mmWave) systems is challenging because of the huge training
overhead associated with acquiring the channel knowledge or designing the narrow beams. Current mmWave beam
training and channel estimation techniques do not normally make use of the prior beam training or channel estimation
observations. Intuitively, though, the channel matrices are functions of the various elements of the environment.
Learning these functions can dramatically reduce the training overhead needed to obtain the channel knowledge. In this
paper, a novel solution that exploits machine learning tools, namely conditional generative adversarial networks (GAN),
is developed to learn these functions between the environment and the channel covariance matrices. More specifically,
the proposed machine learning model treats the covariance matrices as 2D images and learns the mapping function
relating the uplink received pilots, which act as RF signatures of the environment, and these images. Simulation results
show that the developed strategy efficiently predicts the covariance matrices of the large-dimensional mmWave channels
with negligible training overhead.
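
For intuition about the data representation, the following MATLAB sketch estimates a sample covariance matrix from channel realizations and scales it like a 2D image, the form in which the conditional GAN operates; the channel samples are placeholders.

    % Sketch of the covariance-as-image representation (placeholder data).
    M = 64;              % mmWave antennas
    numSamples = 200;    % channel realizations
    H = (randn(M,numSamples) + 1j*randn(M,numSamples))/sqrt(2);

    R   = (H*H')/numSamples;     % sample covariance matrix (M x M)
    img = abs(R)/max(abs(R(:))); % normalized magnitude as a [0,1] "image"
    imagesc(img); colorbar;      % the GAN learns to generate such images,
                                 % conditioned on the uplink received pilots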

◾ To reproduce the results, please follow these steps: Coming soon!


◾ If you have any questions regarding the code and the dataset used, please contact Ahmed Alkhateeb.

More applications will be posted soon!

Sign-up for the DeepMIMO mailing list!


Please provide your contact information to get the latest updates about the DeepMIMO dataset.


If you have a problem with signing up to the mailing list, please use this form.

Towards a Community Dataset!


◾ The main objective of this dataset is to advance the machine/deep learning research in mmWave/massive MIMO by
enabling the reproducibility of the results, setting benchmarks, and comparing different solutions based on a common
publicly-available dataset.
◾ If you are interested in contributing to the dataset codes or benchmarks (tasks), please contact Ahmed Alkhateeb.

Last updated on 2019-10
