TY - JOUR
T1 - Distributed and Collaborative High Speed Inference Deep Learning for Mobile Edge with Topological Dependencies
AU - Henna, Shagufta
AU - Davy, Alan
N1 - Publisher Copyright:
CC BY
PY - 2020
Y1 - 2020
N2 - Ubiquitous computing has the potential to harness the flexibility of distributed computing systems, including cloud, edge, and Internet of Things devices. Mobile edge computing (MEC) benefits time-critical applications by providing low-latency connections. However, most resource-constrained edge devices lack the computational capacity to host deep learning (DL) solutions. Further, edge devices in dense deployments may exhibit topological dependencies which, if not taken into account, can adversely affect MEC performance. To bring more intelligence to the edge under topological dependencies, in contrast to optimization heuristics, this work proposes a novel collaborative distributed DL approach. The proposed approach exploits the topological dependencies of the edge using a resource-optimized graph neural network (GNN) variant with accelerated inference. By exploiting collaborative learning at the edge using stochastic gradient descent (SGD), the proposed approach, called CGNN-edge, ensures fast convergence and high accuracy. Collaborative learning of the deployed CGNN-edge incurs extra communication overhead and latency. To cope, this work proposes compressed collaborative learning based on momentum correction, called cCGNN-edge, which offers better scalability while preserving accuracy. Performance evaluation under an IEEE 802.11ax high-density wireless local area network deployment demonstrates that both schemes outperform cloud-based GNN inference in response time, satisfaction of latency requirements, and communication overhead.
AB - Ubiquitous computing has the potential to harness the flexibility of distributed computing systems, including cloud, edge, and Internet of Things devices. Mobile edge computing (MEC) benefits time-critical applications by providing low-latency connections. However, most resource-constrained edge devices lack the computational capacity to host deep learning (DL) solutions. Further, edge devices in dense deployments may exhibit topological dependencies which, if not taken into account, can adversely affect MEC performance. To bring more intelligence to the edge under topological dependencies, in contrast to optimization heuristics, this work proposes a novel collaborative distributed DL approach. The proposed approach exploits the topological dependencies of the edge using a resource-optimized graph neural network (GNN) variant with accelerated inference. By exploiting collaborative learning at the edge using stochastic gradient descent (SGD), the proposed approach, called CGNN-edge, ensures fast convergence and high accuracy. Collaborative learning of the deployed CGNN-edge incurs extra communication overhead and latency. To cope, this work proposes compressed collaborative learning based on momentum correction, called cCGNN-edge, which offers better scalability while preserving accuracy. Performance evaluation under an IEEE 802.11ax high-density wireless local area network deployment demonstrates that both schemes outperform cloud-based GNN inference in response time, satisfaction of latency requirements, and communication overhead.
KW - deep learning in cloud computing
KW - deep learning in edge computing
KW - edge inference
KW - edge with topological dependencies
KW - intelligent cloud computing
KW - intelligent edge
UR - http://www.scopus.com/inward/record.url?scp=85081971447&partnerID=8YFLogxK
U2 - 10.1109/TCC.2020.2978846
DO - 10.1109/TCC.2020.2978846
M3 - Article
AN - SCOPUS:85081971447
JO - IEEE Transactions on Cloud Computing
JF - IEEE Transactions on Cloud Computing
SN - 2168-7161
ER -