Coordinator:
Prof. Dr. Ing. Șerban Obreja
Student:
Cosmin Tudorache
Master's programme: TSAC II
Policy and Charging Rules Function (PCRF): The PCRF provides the policy and access control rules for the network's users, by determining their access to resources and limiting resource usage according to the user profile. The PCRF connects to the IMS (IP Multimedia Subsystem) services over the Rx interface, in order to give users access to IMS. The main functions of the PCRF are shown in the following figure:
the aggregation of traffic flows onto these bearers. Since the QCI is an 8-bit field, it can be extended to provide up to 256 different QoS sets.
When scheduling the physical resources, the scheduler first checks all active flows, i.e. those that have data waiting to be transferred in the RLC buffer. If a flow is active, the scheduler then checks the priority tag assigned to that flow. Based on the priority tag, the scheduler computes the metrics so that the smallest tag value receives the highest priority, i.e. the largest metric. Based on the computed metrics, the scheduler makes the resource allocation decision.
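The priority-tag rule described above can be sketched as follows; this is a minimal illustration, and the `Flow`/`SelectFlow` names are hypothetical rather than taken from the simulator source:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative flow descriptor; names are not from the simulator source.
struct Flow {
    uint32_t rlcBufferBytes; // data waiting in the RLC buffer (0 = inactive)
    uint32_t priorityTag;    // lower tag = higher priority (>= 1)
};

// Returns the index of the flow to schedule, or -1 if no flow is active.
// The metric is chosen so that the smallest tag yields the largest metric.
int SelectFlow (const std::vector<Flow>& flows)
{
    int best = -1;
    double bestMetric = 0.0;
    for (std::size_t i = 0; i < flows.size (); ++i) {
        if (flows[i].rlcBufferBytes == 0) continue; // only active flows compete
        double metric = 1.0 / flows[i].priorityTag; // smaller tag -> larger metric
        if (metric > bestMetric) { bestMetric = metric; best = static_cast<int> (i); }
    }
    return best;
}
```

With three flows tagged 1, 5 and 2, where the first has an empty RLC buffer, the flow tagged 2 wins the allocation.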
On the downlink, LTE uses Orthogonal Frequency Division Multiple Access (OFDMA) to allow multiple users to share the same bandwidth. The idea behind OFDMA is to split one large data stream into several smaller streams, which can be transmitted over a large number of subcarriers. Each subcarrier is modulated at a low symbol rate, carrying one symbol of the modulation format, such as Quadrature Phase-Shift Keying (QPSK), 16-QAM (quadrature amplitude modulation) or 64-QAM in LTE. These subcarriers are orthogonal, which allows them to overlap in the frequency domain; this overlap is very beneficial in terms of bandwidth savings.
With OFDMA, multiple users can share the same resource in both the frequency and the time domain. In LTE, the network resources are divided into resource blocks, measured in both time and frequency, as illustrated in the following figure:
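For reference, the size of this time-frequency grid follows directly from the standardized channel bandwidths: each resource block spans 12 subcarriers of 15 kHz (180 kHz) and one 0.5 ms slot. A small lookup sketch (illustrative helper, not simulator code):

```cpp
// Resource-grid widths for the standardized LTE channel bandwidths
// (3GPP TS 36.101): each resource block spans 12 subcarriers x 15 kHz
// = 180 kHz in frequency and one 0.5 ms slot in time.
int NumResourceBlocks (double bandwidthMHz)
{
    if (bandwidthMHz == 1.4)  return 6;
    if (bandwidthMHz == 3.0)  return 15;
    if (bandwidthMHz == 5.0)  return 25;
    if (bandwidthMHz == 10.0) return 50;
    if (bandwidthMHz == 15.0) return 75;
    if (bandwidthMHz == 20.0) return 100;
    return -1; // not a standardized bandwidth
}
```

A 20 MHz carrier thus offers 100 resource blocks per slot for the scheduler to distribute among users.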
the bearer and the QCI (QoS Class Identifier). A bearer can be seen as a combination of several QoS requirements, which are indicated by the QCI number. Each QCI is characterized by a priority, a packet delay budget and an acceptable packet loss rate. The QCI label of a bearer determines how that bearer is handled in the eNodeB. The following table summarizes the basic QCI values defined for the traffic classes of an LTE network. GBR (Guaranteed Bit Rate) means that a dedicated network resource is permanently allocated when this type of bearer is established.
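The per-QCI characteristics can be sketched as a small table; the three rows below reproduce standardized values from 3GPP TS 23.203 (Table 6.1.7), while the struct layout itself is an illustration, not simulator code:

```cpp
#include <cstdint>

// Per-QCI characteristics; values from 3GPP TS 23.203, Table 6.1.7.
struct QciProfile {
    uint8_t  qci;       // QoS class identifier
    bool     gbr;       // true for Guaranteed Bit Rate bearers
    uint8_t  priority;  // lower value = higher priority
    uint16_t delayMs;   // packet delay budget in milliseconds
    double   lossRate;  // acceptable packet error loss rate
};

const QciProfile kQciTable[] = {
    {1, true,  2, 100, 1e-2},  // conversational voice (GBR)
    {5, false, 1, 100, 1e-6},  // IMS signalling (non-GBR)
    {9, false, 9, 300, 1e-6},  // default bearer / best effort (non-GBR)
};
```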
it is possible to define several eNBs, each with its own backhaul connection of different capacity; therefore, the data-plane protocols between the eNBs and the S-GW/P-GW have to be modelled very accurately;
a single UE may run different applications with different QoS requirements, so several EPS bearers have to be set up for the same UE;
an accurate model of the EPC data plane is a primary goal, while the EPC control plane is developed in a simplified way;
the main objective of the EPC simulations is the management of users in ECM-CONNECTED mode, so the functionalities that are relevant only for ECM-IDLE mode are not fully modelled;
the model should allow an X2 handover between two eNBs to be performed.
The following figure shows the LTE-EPC data-plane protocol stack as implemented in the simulated LENA model.
of RBGs, not all flows can be scheduled within a given subframe; in the following subframe, the allocation then resumes from the last flow that was not served. The Modulation and Coding Scheme (MCS) to be adopted for each user is chosen according to the channel quality indicated by the CQI (Channel Quality Indicator). The following figure shows how the Round Robin interfaces are used in the eNodeB:
the ChannelRealization objects in a std::map container, where each realization is identified by the pointers to the mobility models of the pair of nodes it refers to. Before delivering the transmitted packets to all attached physical instances, the SpectrumChannel uses the CalcRxPowerSpectralDensity () function of the LteSpectrumPropagationLoss class to compute the power spectral density of the signal.
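The container layout described above can be sketched as follows; `MobilityModel` and `ChannelRealization` here are placeholder stand-ins for the real ns-3 classes, and the helper names are hypothetical:

```cpp
#include <map>
#include <utility>

// Placeholder types standing in for ns-3's mobility model and per-link
// channel realization; helper names are hypothetical.
struct MobilityModel {};
struct ChannelRealization { double pathLossDb; };

// One realization per (tx, rx) pair, keyed by the pair of mobility-model
// pointers, as the text describes.
using LinkKey = std::pair<const MobilityModel*, const MobilityModel*>;
std::map<LinkKey, ChannelRealization> g_channels;

void AddRealization (const MobilityModel* tx, const MobilityModel* rx, double lossDb)
{
    g_channels[LinkKey (tx, rx)] = ChannelRealization {lossDb};
}

double GetLoss (const MobilityModel* tx, const MobilityModel* rx)
{
    return g_channels.at (LinkKey (tx, rx)).pathLossDb;
}
```

Keying by the pointer pair means the map needs no extra link identifier: two nodes and a direction uniquely select the realization.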
The LTE device, modelled by the LteNetDevice class, implements the NetDevice class provided by ns-3. In order to perform all the functions of an ns-3 device, it contains and manages the main entities of the E-UTRAN protocol stack, namely RRC, RLC, MAC and PHY. Two dedicated classes, UeLteNetDevice and EnbLteNetDevice, inherit from LteNetDevice and implement the UE-specific and eNB-specific functions.
Since most of the functions belong to the eNB, the implementation of the UEs is simple. The UE stores information about the eNB it is attached to; for this purpose, a variable m_targetEnb has been defined. This variable should be set at the beginning of the simulation by calling the SetTargetEnb () function. It is used to implement the control channel mentioned earlier.
The eNB plays an essential role in the implementation of the E-UTRAN. The most important task of the eNB is Radio Resource Management (RRM), which is carried out by the resource scheduler. The eNB uses the UE Manager component to handle the UEs in its various operations. First of all, through the UE Manager, the eNB learns about all the registered UEs. For each of them, a UE Record is created and stored in the UE Manager. It is important to note that the UE Record is used to store information about the most recent CQI feedback sent by the UE. This information is used by the packet scheduler to allocate resources to the UEs while taking the channel conditions into account.
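The UE Record bookkeeping can be sketched like this; `UeManager` and `UeRecord` here are simplified stand-ins for the simulator's classes, modelling only the "latest CQI per registered UE" behaviour (0..15 is the standard wideband CQI range):

```cpp
#include <cstdint>
#include <map>

// Simplified stand-ins for the UE Manager / UE Record described above.
struct UeRecord {
    uint64_t imsi;    // subscriber identity of the UE
    uint8_t  lastCqi; // most recent wideband CQI feedback (0..15)
};

class UeManager {
public:
    // Create a UE Record when the UE registers with the eNB.
    void Register (uint64_t imsi) { m_records[imsi] = UeRecord {imsi, 0}; }
    // Store the most recent CQI feedback sent by the UE.
    void UpdateCqi (uint64_t imsi, uint8_t cqi) { m_records.at (imsi).lastCqi = cqi; }
    // The packet scheduler reads this to account for channel conditions.
    uint8_t GetCqi (uint64_t imsi) const { return m_records.at (imsi).lastCqi; }
private:
    std::map<uint64_t, UeRecord> m_records; // one record per registered UE
};
```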
In LTE, the traffic flows between the UE and the eNB are grouped into logical entities called bearers, which are identified by their corresponding QoS requirements. The RadioBearerInstance class models the radio bearer established between the UE and the eNB; its direction is stored in the m_direction variable. Each radio bearer is identified by a socket pair, i.e. the source and destination IP addresses, the transport protocol type, and the source and destination ports. To manage this information, the IpcsClassiferRecord class is used. The BearerQoSParameters class was designed to represent the QoS requirements associated with each bearer. It is a data structure holding the parameters that characterize the bearers: (i) m_bearerType, which describes the type of the radio bearer; (ii) the QoS class m_qci, which identifies the different treatments that can be applied to each bearer; (iii) the guaranteed bit rate m_gbr, which represents the bit rate guaranteed to a GBR bearer; and (iv) m_mbr, which represents the maximum bit rate offered to a GBR bearer.
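Following the field list above, the structure can be sketched as below; only the four field names come from the text, while the concrete types are simplified assumptions:

```cpp
#include <cstdint>

// Sketch of the QoS parameter structure described above; types are
// simplified assumptions, field names follow the text.
struct BearerQoSParameters {
    int      m_bearerType; // type of the radio bearer (e.g. GBR / non-GBR)
    uint8_t  m_qci;        // QoS class identifier of the bearer
    uint64_t m_gbr;        // guaranteed bit rate of a GBR bearer (bit/s)
    uint64_t m_mbr;        // maximum bit rate offered to a GBR bearer (bit/s)
};
```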
The RRC (Radio Resource Control) entity is implemented by the RrcEntity class and provides only the radio bearer management functionality. The Classify function of the RRC entity classifies the packets arriving from the upper layer onto the corresponding bearer. This classification is based on the information provided by the IpcsClassiferRecord class. The MacQueue class implements the MAC queue where all the packets coming from the application layer and belonging to a given radio bearer are stored. The MacQueue class implements a First In First Out (FIFO) management policy. The interaction between the MAC and the radio bearer is handled by the RLC entity, which is implemented by the RlcEntity class.
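The FIFO policy of the MAC queue can be sketched with std::queue; this stand-in stores packet sizes rather than real ns-3 packets:

```cpp
#include <cstdint>
#include <queue>

// Minimal FIFO queue mirroring the MacQueue policy described above;
// it stores packet sizes instead of real ns-3 packets.
class FifoMacQueue {
public:
    void Enqueue (uint32_t packetBytes) { m_q.push (packetBytes); }
    // Removes and returns the oldest packet (FIFO order).
    uint32_t Dequeue () { uint32_t front = m_q.front (); m_q.pop (); return front; }
    bool Empty () const { return m_q.empty (); }
private:
    std::queue<uint32_t> m_q;
};
```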
The PacketScheduler class defines the interface for implementing the MAC scheduler. The implemented downlink and uplink schedulers are plugged into the eNB by setting the m_downlinkScheduler and m_uplinkScheduler variables of the EnbMacEntity component.
// UDP rate in scheduler: (payload + RLC header + PDCP header + IP header + UDP header) * 1000 byte/sec -> 232000 byte/sec
// Total bandwidth: 24 PRB at Itbs 20 -> 1383 -> 1383000 byte/sec
// 1 user -> 232000 * 1 = 232000 < 1383000 -> throughput = 232000 byte/sec
// 3 users -> 232000 * 3 = 696000 < 1383000 -> throughput = 232000 byte/sec
// 6 users -> 232000 * 6 = 1392000 > 1383000 -> throughput = 1383000 / 6 = 230500 byte/sec
// 12 users -> 232000 * 12 = 2784000 > 1383000 -> throughput = 1383000 / 12 = 115250 byte/sec
// UPLINK - DISTANCE 4800 -> MCS 14 -> Itbs 13 (from table 7.1.7.2.1-1 of 36.213)
// 1 user -> 25 PRB at Itbs 13 -> 807 -> 807000 > 232000 -> throughput = 232000 bytes/sec
// 3 users -> 8 PRB at Itbs 13 -> 253 -> 253000 > 232000 -> throughput = 232000 bytes/sec
// 6 users -> 4 PRB at Itbs 13 -> 125 -> 125000 < 232000 -> throughput = 125000 bytes/sec
// after the patch enforcing min 3 PRBs per UE:
// 12 users -> 3 PRB at Itbs 13 -> 93 bytes * 8/12 UE/TTI -> 62000 < 232000 -> throughput = 62000 bytes/sec
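The downlink throughput figures in the comments above all follow one rule: each user gets min(demand, capacity / n). A small helper makes the arithmetic explicit (an illustrative sketch, not part of the test suite):

```cpp
#include <algorithm>
#include <cstdint>

// Condenses the downlink saturation arithmetic from the comments above:
// each of nUsers demands demandBps; once total demand exceeds the cell
// capacity, the capacity is shared equally among users.
uint64_t PerUserThroughput (uint64_t demandBps, uint64_t capacityBps, uint32_t nUsers)
{
    return std::min<uint64_t> (demandBps, capacityBps / nUsers);
}
```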
AddTestCase (new LenaCqaFfMacSchedulerTestCase1 (1,4800,232000,232000,200,1,errorModel), TestCase::EXTENSIVE);
AddTestCase (new LenaCqaFfMacSchedulerTestCase1 (3,4800,232000,232000,200,1,errorModel), TestCase::EXTENSIVE);
AddTestCase (new LenaCqaFfMacSchedulerTestCase1 (6,4800,230500,125000,200,1,errorModel), TestCase::EXTENSIVE);
//AddTestCase (new LenaCqaFfMacSchedulerTestCase1 (12,4800,115250,62000,200,1,errorModel)); // simulation time = 1.5, otherwise, ul test will fail
// DOWNLINK - DISTANCE 6000 -> MCS 20 -> Itbs 18 (from table 7.1.7.2.1-1 of 36.213)
// Traffic info
// UDP traffic: payload size = 200 bytes, interval = 1 ms
// UDP rate in scheduler: (payload + RLC header + PDCP header + IP header + UDP header) * 1000 byte/sec -> 232000 byte/sec
// Total bandwidth: 24 PRB at Itbs 18 -> 1191 -> 1191000 byte/sec
// 1 user -> 232000 * 1 = 232000 < 1191000 -> throughput = 232000 byte/sec
// 3 users -> 232000 * 3 = 696000 < 1191000 -> throughput = 232000 byte/sec
// 6 users -> 232000 * 6 = 1392000 > 1191000 -> throughput = 1191000 / 6 = 198500 byte/sec
// 12 users -> 232000 * 12 = 2784000 > 1191000 -> throughput = 1191000 / 12 = 99250 byte/sec
// UPLINK - DISTANCE 6000 -> MCS 12 -> Itbs 11 (from table 7.1.7.2.1-1 of 36.213)
// 1 user -> 25 PRB at Itbs 11 -> 621 -> 621000 > 232000 -> throughput = 232000 bytes/sec
// 3 users -> 8 PRB at Itbs 11 -> 201 -> 201000 < 232000 -> throughput = 201000 bytes/sec
// 6 users -> 4 PRB at Itbs 11 -> 97 -> 97000 < 232000 -> throughput = 97000 bytes/sec
// after the patch enforcing min 3 PRBs per UE:
// 12 users -> 3 PRB at Itbs 11 -> 73 bytes * 8/12 UE/TTI -> 48667 < 232000 -> throughput = 48667 bytes/sec
AddTestCase (new LenaCqaFfMacSchedulerTestCase1 (1,6000,232000,232000,200,1,errorModel), TestCase::EXTENSIVE);
std::vector<uint16_t> packetSize1;
packetSize1.push_back (100);
packetSize1.push_back (100);
packetSize1.push_back (100);
packetSize1.push_back (100);
std::vector<uint32_t> estThrCqaDl1;
estThrCqaDl1.push_back (132000); // User 0 estimated TTI throughput from CQA
estThrCqaDl1.push_back (132000); // User 1 estimated TTI throughput from CQA
estThrCqaDl1.push_back (132000); // User 2 estimated TTI throughput from CQA
estThrCqaDl1.push_back (132000); // User 3 estimated TTI throughput from CQA
AddTestCase (new LenaCqaFfMacSchedulerTestCase2 (dist1,estThrCqaDl1,packetSize1,1,errorModel), TestCase::QUICK);
// Traffic2 info
// UDP traffic: payload size = 200 bytes, interval = 1 ms
// UDP rate in scheduler: (payload + RLC header + PDCP header + IP header + UDP header) * 1000 byte/sec -> 232000 byte/sec
// Maximum throughput = 4 / ( 1/2196000 + 1/1191000 + 1/1383000 + 1/775000 ) = 1209046 byte/s
// 232000 * 4 = 928000 < 1209046 -> estimated throughput in downlink = 928000 / 4 = 230000 byte/sec
std::vector<uint16_t> dist2;
dist2.push_back (0);     // User 0 distance --> MCS 28
dist2.push_back (4800);  // User 1 distance --> MCS 22
dist2.push_back (6000);  // User 2 distance --> MCS 20
dist2.push_back (10000); // User 3 distance --> MCS 14
std::vector<uint16_t> packetSize2;
packetSize2.push_back (200);
packetSize2.push_back (200);
packetSize2.push_back (200);
packetSize2.push_back (200);
std::vector<uint32_t> estThrCqaDl2;
estThrCqaDl2.push_back (230000); // User 0 estimated TTI throughput from CQA
estThrCqaDl2.push_back (230000); // User 1 estimated TTI throughput from CQA
estThrCqaDl2.push_back (230000); // User 2 estimated TTI throughput from CQA
estThrCqaDl2.push_back (230000); // User 3 estimated TTI throughput from CQA
AddTestCase (new LenaCqaFfMacSchedulerTestCase2 (dist2,estThrCqaDl2,packetSize2,1,errorModel), TestCase::QUICK);
// Test Case 3: heterogeneous flow test in CQA
// UDP traffic: payload size = [100,200,300] bytes, interval = 1 ms
// UDP rate in scheduler: (payload + RLC header + PDCP header + IP header + UDP header) * 1000 byte/sec -> [132000, 232000, 332000] byte/sec
// Maximum throughput = 3 / ( 1/2196000 + 1/1191000 + 1/1383000) = 1486569 byte/s
// 132000 + 232000 + 332000 = 696000 < 1486569 -> estimated throughput in downlink = [132000, 232000, 332000] byte/sec
std::vector<uint16_t> dist3;
dist3.push_back (0); // User 0 distance --> MCS 28
dist3.push_back (4800); // User 1 distance --> MCS 22
dist3.push_back (6000); // User 2 distance --> MCS 20
std::vector<uint16_t> packetSize3;
packetSize3.push_back (100);
packetSize3.push_back (200);
packetSize3.push_back (300);
std::vector<uint32_t> estThrCqaDl3;
estThrCqaDl3.push_back (132000); // User 0 estimated TTI throughput from CQA
estThrCqaDl3.push_back (232000); // User 1 estimated TTI throughput from CQA
estThrCqaDl3.push_back (332000); // User 2 estimated TTI throughput from CQA
AddTestCase (new LenaCqaFfMacSchedulerTestCase2 (dist3,estThrCqaDl3,packetSize3,1,errorModel), TestCase::QUICK);
}
static LenaTestCqaFfMacSchedulerSuite lenaTestCqaFfMacSchedulerSuite;
// --------------- T E S T - C A S E # 1 ------------------------------
std::string
LenaCqaFfMacSchedulerTestCase1::BuildNameString (uint16_t nUser, uint16_t dist)
{
std::ostringstream oss;
oss << nUser << " UEs, distance " << dist << " m";
return oss.str ();
}
LenaCqaFfMacSchedulerTestCase1::~LenaCqaFfMacSchedulerTestCase1 ()
{
}
void
LenaCqaFfMacSchedulerTestCase1::DoRun (void)
{
NS_LOG_FUNCTION (this << GetName ());
if (!m_errorModelEnabled)
{
Config::SetDefault ("ns3::LteSpectrumPhy::CtrlErrorModelEnabled", BooleanValue (false));
Config::SetDefault ("ns3::LteSpectrumPhy::DataErrorModelEnabled", BooleanValue (false));
}
Config::SetDefault ("ns3::LteHelper::UseIdealRrc", BooleanValue (true));
Ptr<LteHelper> lteHelper = CreateObject<LteHelper> ();
Ptr<PointToPointEpcHelper> epcHelper = CreateObject<PointToPointEpcHelper> ();
lteHelper->SetEpcHelper (epcHelper);
//LogComponentEnable ("CqaFfMacScheduler", LOG_DEBUG);
Ptr<Node> pgw = epcHelper->GetPgwNode ();
// Create a single RemoteHost
NodeContainer remoteHostContainer;
remoteHostContainer.Create (1);
Ptr<Node> remoteHost = remoteHostContainer.Get (0);
InternetStackHelper internet;
internet.Install (remoteHostContainer);
// Create the Internet
PointToPointHelper p2ph;
p2ph.SetDeviceAttribute ("DataRate", DataRateValue (DataRate ("100Gb/s")));
p2ph.SetDeviceAttribute ("Mtu", UintegerValue (1500));
p2ph.SetChannelAttribute ("Delay", TimeValue (Seconds (0.001)));
NetDeviceContainer internetDevices = p2ph.Install (pgw, remoteHost);
Ipv4AddressHelper ipv4h;
ipv4h.SetBase ("1.0.0.0", "255.0.0.0");
Ipv4InterfaceContainer internetIpIfaces = ipv4h.Assign (internetDevices);
// interface 0 is localhost, 1 is the p2p device
Ipv4Address remoteHostAddr = internetIpIfaces.GetAddress (1);
Ipv4StaticRoutingHelper ipv4RoutingHelper;
Ptr<Ipv4StaticRouting> remoteHostStaticRouting = ipv4RoutingHelper.GetStaticRouting (remoteHost->GetObject<Ipv4> ());
NS_LOG_INFO ("DL - Test with " << m_nUser << " user(s) at distance " << m_dist);
std::vector <uint64_t> dlDataRxed;
for (int i = 0; i < m_nUser; i++)
{
// get the imsi
uint64_t imsi = ueDevs.Get (i)->GetObject<LteUeNetDevice> ()->GetImsi ();
// get the lcId
uint8_t lcId = 4;
uint64_t data = rlcStats->GetDlRxData (imsi, lcId);
dlDataRxed.push_back (data);
NS_LOG_INFO ("\tUser " << i << " imsi " << imsi << " bytes rxed " << (double)dlDataRxed.at (i) << " thr " << (double)dlDataRxed.at (i) / statsDuration << " ref " << m_thrRefDl);
}
for (int i = 0; i < m_nUser; i++)
{
NS_TEST_ASSERT_MSG_EQ_TOL ((double)dlDataRxed.at (i) / statsDuration, m_thrRefDl, m_thrRefDl * tolerance, " Unfair Throughput!");
}
/**
* Check that the uplink assignation is done in a "round robin" manner
*/
NS_LOG_INFO ("UL - Test with " << m_nUser << " user(s) at distance " << m_dist);
std::vector <uint64_t> ulDataRxed;
for (int i = 0; i < m_nUser; i++)
{
// get the imsi
uint64_t imsi = ueDevs.Get (i)->GetObject<LteUeNetDevice> ()->GetImsi ();
// get the lcId
uint8_t lcId = 4;
ulDataRxed.push_back (rlcStats->GetUlRxData (imsi, lcId));
NS_LOG_INFO ("\tUser " << i << " imsi " << imsi << " bytes rxed " << (double)ulDataRxed.at (i) << " thr " << (double)ulDataRxed.at (i) / statsDuration << " ref " << m_thrRefUl);
}
for (int i = 0; i < m_nUser; i++)
{
NS_TEST_ASSERT_MSG_EQ_TOL ((double)ulDataRxed.at (i) / statsDuration, m_thrRefUl, m_thrRefUl * tolerance, " Unfair Throughput!");
}
Simulator::Destroy ();
}
// --------------- T E S T - C A S E # 2 ------------------------------
std::string
LenaCqaFfMacSchedulerTestCase2::BuildNameString (uint16_t nUser, std::vector<uint16_t> dist)
{
std::ostringstream oss;
oss << "distances (m) = [ " ;
for (std::vector<uint16_t>::iterator it = dist.begin (); it != dist.end (); ++it)
{
oss << *it << " ";
}
oss << "]";
return oss.str ();
}
LenaCqaFfMacSchedulerTestCase2::LenaCqaFfMacSchedulerTestCase2 (std::vector<uint16_t> dist, std::vector<uint32_t> estThrCqaDl, std::vector<uint16_t> packetSize, uint16_t interval, bool errorModelEnabled)
: TestCase (BuildNameString (dist.size (), dist)),
m_nUser (dist.size ()),
m_dist (dist),
m_packetSize (packetSize),
m_interval (interval),
m_estThrCqaDl (estThrCqaDl),
m_errorModelEnabled (errorModelEnabled)
{
}
LenaCqaFfMacSchedulerTestCase2::~LenaCqaFfMacSchedulerTestCase2 ()
{
}
void
LenaCqaFfMacSchedulerTestCase2::DoRun (void)
{
if (!m_errorModelEnabled)
{
Config::SetDefault ("ns3::LteSpectrumPhy::CtrlErrorModelEnabled", BooleanValue (false));
Config::SetDefault ("ns3::LteSpectrumPhy::DataErrorModelEnabled", BooleanValue (false));
}
// LogComponentDisableAll (LOG_LEVEL_ALL);
//LogComponentEnable ("LenaTestCqaFfMacCheduler", LOG_LEVEL_ALL);
lteHelper->SetAttribute ("PathlossModel", StringValue ("ns3::FriisSpectrumPropagationLossModel"));
// Create Nodes: eNodeB and UE
NodeContainer enbNodes;
NodeContainer ueNodes;
enbNodes.Create (1);
ueNodes.Create (m_nUser);
// Install Mobility Model
MobilityHelper mobility;
mobility.SetMobilityModel ("ns3::ConstantPositionMobilityModel");
mobility.Install (enbNodes);
mobility.SetMobilityModel ("ns3::ConstantPositionMobilityModel");
mobility.Install (ueNodes);
// Create Devices and install them in the Nodes (eNB and UE)
NetDeviceContainer enbDevs;
NetDeviceContainer ueDevs;
lteHelper->SetSchedulerType ("ns3::CqaFfMacScheduler");
enbDevs = lteHelper->InstallEnbDevice (enbNodes);
ueDevs = lteHelper->InstallUeDevice (ueNodes);
Ptr<LteEnbNetDevice> lteEnbDev = enbDevs.Get (0)->GetObject<LteEnbNetDevice> ();
Ptr<LteEnbPhy> enbPhy = lteEnbDev->GetPhy ();
enbPhy->SetAttribute ("TxPower", DoubleValue (30.0));
enbPhy->SetAttribute ("NoiseFigure", DoubleValue (5.0));
// Set UEs' position and power
for (int i = 0; i < m_nUser; i++)
{
Ptr<ConstantPositionMobilityModel> mm = ueNodes.Get (i)->GetObject<ConstantPositionMobilityModel> ();
mm->SetPosition (Vector (m_dist.at (i), 0.0, 0.0));
Ptr<LteUeNetDevice> lteUeDev = ueDevs.Get (i)->GetObject<LteUeNetDevice> ();
Ptr<LteUePhy> uePhy = lteUeDev->GetPhy ();
uePhy->SetAttribute ("TxPower", DoubleValue (23.0));
uePhy->SetAttribute ("NoiseFigure", DoubleValue (9.0));
}
// Install the IP stack on the UEs
internet.Install (ueNodes);
Ipv4InterfaceContainer ueIpIface;
ueIpIface = epcHelper->AssignUeIpv4Address (NetDeviceContainer (ueDevs));
// Assign IP address to UEs
for (uint32_t u = 0; u < ueNodes.GetN (); ++u)
{
Ptr<Node> ueNode = ueNodes.Get (u);
// Set the default gateway for the UE