This year we are going "green": we will not be printing programs or including CDs in the registration gift bag. Instead, a password-protected "program page" on our website will host the PowerPoint presentations from the tutorials and the poster abstract book. The PowerPoint presentations for our tutorials will be available only to symposium participants. A username and password will be sent to you via email in the registration confirmation you receive after you register and pay the symposium fees. The page will be activated on May 31, 2016 with the poster abstract book; the PowerPoint presentations themselves will not be posted until after the symposium ends. The page will remain available until June 30, 2016, so please be sure to download any presentations you may need before then.
Standard participant fee: $800
Early Discount: $700 (must register by May 15, 2016)
One Day Registration: $500
Full-time non-VT student: $300 (must register by May 15, 2016; a $100 late fee will be applied after May 15, 2016)
7:00 a.m. - 1:00 p.m.: Registration, Main Hallway; 1:00 p.m. - 5:00 p.m.: Registration, Floor 2 Corridor
9:00 a.m. - 9:10 a.m.: Welcome Address and Opening Remarks, Alumni Assembly Hall
Dr. Michael Buehrer, Director, Wireless @ Virginia Tech: Introduction of Keynote Speaker, Dr. Alan Gatherer
9:10 a.m. - 10:00 a.m.: Keynote Address, Dr. Alan Gatherer, Huawei Technologies, USA
Title: Chemical reactions in the organic network: strategic directions for developing chips and systems in next generation cellular
Abstract: In our quest for more and more bandwidth per customer and per unit cube of space, we are starting to break down the basic cellular structure of the wireless network. People talk of the death of cellular due to technologies such as Coordinated Multipoint, massive MIMO, and densification. But the implications of this breakdown go well beyond algorithmic changes and fuzzy cell boundaries. The way we construct, control, and optimize the network will see fundamental changes in the next decade. In this presentation I will discuss two main ideas. First, the idea that the network will become organic and will spread and evolve in concert with the community of users and services it supports, in a manner that we will only be able to observe and align to rather than control. Minimizing network cost will be replaced by enabling network evolution as the main goal of the network developer. Cloud RAN, edge computing, and user-centric networks will be discussed. Second, because of the massive parallelism in the system, the role of the compute and storage elements in the radio access part of the network will become like that of atoms in a larger chemical reaction, and this will change the way we define, program, and use Systems on Chip (SoCs). I will discuss Service Oriented Radio (SOR) and the emerging division of SoC functionality into "Bernoulli" and "Newtonian" functionality, and give some examples.
Alan Gatherer is the CTO for Baseband System on Chip at Huawei Technologies, USA and a Fellow of the IEEE. He is responsible for R&D efforts in the US to develop next-generation baseband chips and software for 4G and 5G base station modems. His group is presently developing new technologies for baseband SoC in the areas of massive parallelism, programming models and languages, and real-time isolation for multimode virtualization and Service Oriented Radio. Alan joined Huawei in January 2010. Prior to that he was a TI Fellow and CTO at Texas Instruments, where he led the development of high-performance multicore DSPs and worked on various telecommunication standards. Alan has authored multiple journal and conference papers and is regularly asked to give keynote and plenary talks at communication equipment conferences. In addition, he holds over 60 awarded patents and is author of the book “The Application of Programmable DSPs in Mobile Communications.” Alan holds a bachelor of engineering in microprocessor engineering from Strathclyde University in Scotland. He also attended Stanford University in California, where he received a master’s in electrical engineering in 1989 and his doctorate in electrical engineering in 1993.
10:00 a.m. - 11:00 a.m.:
Ettus Research Demo, Alumni Assembly Hall
Presenter: Dr. Martin Braun, Ettus Research
RFNoC is the core architecture for Ettus Research's third-generation USRP devices, and it aims to make FPGA acceleration more easily accessible. Users can create modular, FPGA-accelerated SDR applications by chaining blocks into a flow graph, in a fashion similar to many GPP SDR suites. One such suite, GNU Radio, fully supports RFNoC: users can create flow graphs containing both GNU Radio blocks and RFNoC blocks that communicate seamlessly. Parameters, such as FFT size and FIR filter coefficients, can be set easily from within the driving application without having to develop much application-side code.
In this demo, we will showcase basic RFNoC usage, and give some overview of how it works both on the user side and under the hood. We will show some applications, highlighting RFNoC's capabilities, and also give some basic introduction into how developing with RFNoC works.
Martin Braun received his Dipl.-Ing. and Dr.-Ing. (PhD) degrees from the Karlsruhe Institute of Technology (KIT), Germany, working on software defined radio, radar systems, and digital signal processing problems. Since 2014, he has been a Senior Software Design Engineer for Ettus Research, with a focus on driver software development and RF Network-on-Chip (RFNoC). He is a long-time contributor to the GNU Radio project, as well as the GNU Radio community manager.
11:00 a.m. - 11:15 a.m.: Break
11:15 a.m. - 12:15 p.m.: Panel Session, Alumni Assembly Hall
Title: Do we really need 5G by 2020?
Abstract: With first rollouts expected by 2020, the next generation of cellular networks (termed "5G") is on the horizon. In addition to handling ever-increasing mobile data traffic, 5G is expected to cater to new use cases, such as enabling massive access management for the Internet of Things. 5G is usually differentiated from 4G in terms of these new use cases, some of which are merely predictions that may not live up to their initial hype. For instance, it is not clear that we will have 50 billion IoT devices by 2020 as initially predicted. In order to take an objective view of these trends, this panel will bring together world-renowned wireless experts from industry and academia. The discussions will focus on the need for 5G, the realism of the tentative timeline (first rollout by 2020), the components of 5G, and key differences from 4G. In addition to the 5G proponents, the panel will also include LTE experts who will discuss the current state of LTE deployments and the realistic shortcomings that 5G will address.
Moderator: Dr. Jeff Reed (Virginia Tech)
Panelists: Alan Gatherer, Phil Fleming, Ivan Seskar (Additional panelist TBA)
12:15 p.m. - 12:30 p.m.: A Brief Overview of Wireless Research Interests at DARPA/MTO
Dr. Tom Rondeau is a DARPA program manager in the Microsystems Technology Office. His research interests include adaptive and reconfigurable radios, improving the development cycle for new signal processing techniques, and creating general-purpose electromagnetic systems.
Prior to joining DARPA, Dr. Rondeau was the maintainer and lead developer of the GNU Radio project and a consultant on signal processing and wireless communications. He worked as a visiting researcher with the University of Pennsylvania and as an Adjunct with the IDA Center for Communications Research in Princeton, NJ. In these roles, he helped push forward architectures and algorithms in signal processing for communications, signal analysis, and spectrum monitoring and usage.
Dr. Rondeau is active in many conferences and workshops around the world to help further research and technology in these areas, and he has consulted with many companies and government organizations on new techniques in wireless signal processing. He has published widely in the fields of wireless communications, software radio, and cognitive radio. Dr. Rondeau holds a Ph.D. in electrical engineering from Virginia Tech and won the 2007 Outstanding Dissertation Award in math, science, and engineering from the Council of Graduate Schools for his work in artificial intelligence in wireless communications.
12:30 p.m. - 1:30 p.m.: Lunch, Latham Ballroom
1:30 p.m. - 5:00 p.m.: Registration, Floor 2 Corridor
1:30 p.m. - 5:30 p.m.: Spectrum-ShaRC Student Cognitive Radio Contest Semi-Finals, Cascades Room (Full agenda available here)
1:30 p.m. - 5:30 p.m.: Tutorial Sessions Begin (3 hours long with a break halfway through)
Tutorial Session 1A - Drillfield Room
Presenter: Dr. Michael Buehrer, Virginia Tech
Abstract: Real-time location awareness has become essential for many wireless applications, particularly for 5G, Wi-Fi, and sensor networks. Reliable localization and navigation is a critical component for a diverse set of applications including smart cities, logistics, security tracking, medical services, search and rescue operations, automotive safety, and military systems. The coming years will see the emergence of location awareness in challenging environments with sub-meter accuracy and minimal infrastructure requirements. This tutorial will address the foundations and trends of localization and navigation technologies. In particular we will examine foundational concepts related to localization and explore recent advances in the field including techniques related to cellular networks, sensor networks, GNSS, and Wi-Fi. Further, we will look at the challenges and emerging solutions for indoor navigation. Although the focus will be on “active” localization approaches, where a device is cooperating with the localization process, we will also examine “passive” approaches where either the device is not cooperating, or the person being localized does not have a device at all.
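As a toy illustration of the foundational concepts mentioned above (not drawn from the tutorial materials), the sketch below shows the simplest form of "active" time-of-arrival localization: trilateration from three anchors, solved by linearizing the range equations. The anchor layout and target position are made up for the example, and the ranges are noise-free.

```python
import math

def trilaterate(anchors, ranges):
    """2-D time-of-arrival localization: subtracting the first range equation
    from the others removes the quadratic terms, leaving a linear 2x2 system."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = ranges
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21  # non-zero as long as the anchors are not collinear
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Hypothetical anchor layout; the "measured" ranges are exact distances here
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
target = (4.0, 3.0)
ranges = [math.dist(a, target) for a in anchors]
est = trilaterate(anchors, ranges)
print(est)  # recovers the target position (4.0, 3.0) up to rounding
```

With noisy ranges and more than three anchors the same linearization yields an overdetermined system solved by least squares, which is where the estimation-theoretic material of the tutorial takes over.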
Dr. R. Michael Buehrer joined Virginia Tech from Bell Labs as an Assistant Professor with the Bradley Department of Electrical Engineering in 2001. He is currently a Professor and the director of Wireless @ Virginia Tech, a comprehensive research group focusing on wireless communications. His current research interests include position location networks, localization, direction finding, dynamic spectrum sharing, and cognitive radio, among others. More specifically, Dr. Buehrer has done extensive work in received signal strength and time-of-arrival based device positioning as well as network (i.e., collaborative) positioning. His position location work has been funded by the National Science Foundation, federal laboratories and industrial sponsors. He has also served as an expert witness in position location related patent litigation.
Dr. Buehrer is the co-editor of the Handbook of Position Location: Theory, Practice and Advances, was the lead guest editor for a recent issue on Non-cooperative Localization Networks in the IEEE Journal on Selected Topics in Signal Processing, has authored or co-authored over 50 journal and approximately 125 conference papers and holds 11 patents in the area of wireless communications. He has served as an organizer or technical program chair for multiple localization-oriented workshops including the IEEE GLOBECOM Workshop on Localization for Indoors, Outdoors, and Emerging Networks (LION) and Workshop on Positioning Navigation and Communication (WPNC). He is currently a Senior Member of IEEE and an Associate Editor for IEEE Transactions on Wireless Communications. He was formerly an associate editor for IEEE Wireless Communication Letters, IEEE Transactions on Vehicular Technologies, IEEE Transactions on Signal Processing, IEEE Transactions on Communications, and IEEE Transactions on Education. He is a recipient of the Fred Ellersick Best Paper Award. In 2014 he was presented the Dean’s Award for Teaching Excellence and in 2003 he was named Outstanding New Assistant Professor both by the Virginia Tech College of Engineering.
Tutorial Session 1B - Duck Pond Room
Title: On System-Level Analysis & Design of Cellular Networks: The Magic of Stochastic Geometry
Abstract: This tutorial provides a comprehensive crash course on the essential importance of spatial models for accurate system-level analysis and optimization of emerging 5G ultra-dense and heterogeneous cellular networks. Due to the increased heterogeneity and deployment density, new flexible and scalable approaches for modeling, simulating, analyzing, and optimizing cellular networks are needed. Recently, a new approach has been proposed: it is based on the theory of point processes and leverages tools from stochastic geometry for tractable system-level modeling, performance evaluation, and optimization. The potential of stochastic geometry for modeling and analyzing cellular networks will be investigated for application to several emerging case studies, including massive MIMO, mmWave communication, and wireless power transfer. In addition, the accuracy of this emerging abstraction for modeling cellular networks will be experimentally validated using base station locations and building footprints from two publicly available databases in the United Kingdom (OFCOM and Ordnance Survey). This topic is highly relevant to graduate students and researchers from academia and industry who are interested in understanding the potential of a variety of candidate communication technologies for 5G networks.
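To give a small taste of the point-process toolkit (a sketch only; the density and window size are arbitrary choices, not values from the tutorial), the following plain-Python simulation scatters base stations as a homogeneous Poisson point process and checks the empirical distance from a typical user to the nearest base station against the known result P(R > r) = exp(-lambda * pi * r^2):

```python
import math
import random

random.seed(7)
lam = 1.0   # base-station density per unit area (arbitrary for this demo)
side = 6.0  # square observation window [0, side] x [0, side]

def poisson(mean):
    """Knuth's multiplication method for a Poisson draw (fine for small means)."""
    limit, k, prod = math.exp(-mean), 0, random.random()
    while prod > limit:
        k += 1
        prod *= random.random()
    return k

def nearest_bs_distance():
    """One realization: scatter a PPP of base stations over the window and
    measure the distance from the centre (the 'typical user') to the nearest."""
    n = poisson(lam * side * side)
    pts = [(random.uniform(0, side), random.uniform(0, side)) for _ in range(n)]
    c = side / 2
    return min(math.hypot(x - c, y - c) for x, y in pts) if pts else float("inf")

r = 0.5
trials = 20000
empirical = sum(nearest_bs_distance() > r for _ in range(trials)) / trials
theory = math.exp(-lam * math.pi * r * r)  # P(R > r) for a homogeneous PPP
print(round(empirical, 3), round(theory, 3))  # the two agree closely
```

This nearest-distance distribution is the first step in the standard stochastic-geometry derivations of coverage probability and rate that the tutorial develops.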
Marco Di Renzo received the Laurea (cum laude) and the Ph.D. degrees in electrical engineering from the University of L’Aquila, L’Aquila, Italy, in 2003 and in 2007, respectively, and the Habilitation à Diriger des Recherches (Doctor of Science) degree from University Paris-Sud, France, in 2013. He has held various research and academic positions in Italy at the University of L’Aquila, in the United States at Virginia Tech, in Spain at CTTC, and in the United Kingdom at The University of Edinburgh. Since 2010, he has been a CNRS Associate Professor (“Chargé de Recherche Titulaire CNRS”) in the Laboratory of Signals and Systems of Paris-Saclay University—CNRS, CentraleSupélec, University Paris Sud, France. He is a Distinguished Visiting Fellow of the Royal Academy of Engineering, U.K. He is a co-founder of the university spin-off company WEST Aquila s.r.l., Italy. He is a recipient of several awards, including Best Paper Awards at IEEE CAMAD (2012 and 2014), IEEE VTC-Fall (2013), IEEE ATC (2014), and IEEE ComManTel (2015), the 2013 Network of Excellence NEWCOM# Best Paper Award, the 2013 IEEE COMSOC Best Young Researcher Award for Europe, Middle East and Africa (EMEA Region), the 2015 IEEE Jack Neubauer Memorial Best System Paper Award, and the 2015-2018 CNRS Award for Excellence in Research and in Advising Doctoral Students.
Currently, he serves as an Editor of the IEEE Communications Letters and the IEEE Transactions on Communications, and as the Editor for Heterogeneous Networks Modeling and Analysis of the IEEE Communications Society. He is a Senior Member of the IEEE (COMSOC and VTS) and a Member of the European Association for Communications and Networking (EURACON).
Tutorial Session 1C - Smithfield Room
Title: Open-Source SDR on Embedded Platforms
Abstract: In the past ten years, low-power embedded computers capable of running a Linux-based operating system, known as single-board computers (SBCs), have become increasingly available, capable, and low-cost. They are quickly becoming the platform of choice for projects that require modest computing capabilities in the do-it-yourself, maker, and hacker communities. Popular news sites targeted toward these communities publish articles like the one entitled “Ringing in 2015 with 40 Linux-friendly hacker SBCs”, and a cursory search through the catalogs of popular electronics distributors in the US reveals the existence of at least hundreds of different Linux-compatible SBCs. The vast majority of SBC processors are based on ARM or x86-derived instruction sets, and Linux-based operating systems for these SBCs are widely available. By being compatible with Linux, these SBCs are easily programmable using any of a plethora of standard, familiar programming languages and interfaces, such as C++, Python, and OpenCL. Any software that is readily available, open source, and runs on desktop computers running Linux can generally be made to run on the current generation of SBCs, making for an unprecedented availability of software resources on embedded platforms. Other embedded platforms have also become far more popular in the last ten years. Tablets, smartphones, and smart-TV boxes and sticks are product categories that did not exist ten years ago, and again, the vast majority of these use ARM or x86 instruction sets and can run Linux-based operating systems. If an embedded system can run Linux, it can generally be used and programmed just as easily as any desktop computer.
The past ten years have also seen an unprecedented rise in commercial off-the-shelf software-defined radio (SDR) technology. At the time of writing, a non-exhaustive, crowd-sourced list of commercially available SDRs lists 84 different models, ranging in price from $8 to $6000. These have RF transmitting and/or receiving capabilities, and interface with a computer that performs the signal processing. Among the do-it-yourself, maker, and hacker communities, the de facto standard tool for interfacing with SDRs and performing signal processing for wireless communications is GNU Radio. It is also popular in the wireless telecommunications research sector and in academia. GNU Radio is an open-source tool written in C++ and Python using software libraries that are readily available on Linux-based operating systems, and can thus be made to run on the aforementioned embedded, low-cost SBCs.
An SBC or other embedded smart device with a compatible SDR is an incredibly general, and potentially incredibly low-cost, platform for learning, development, and experimentation. For example, the Raspberry Pi Foundation’s “Raspberry Pi Zero” ($5 at the time of writing) can be coupled with a USB SDR based on Realtek’s “RTL2832U” chip (approx. $10 at the time of writing) to form a very general receiver that can tune from 25 MHz to 1700 MHz and cover 3.2 MHz of instantaneous bandwidth. The complexity of the modulation types that can be handled by such a system is limited only by the computational power of the ARM CPU that powers the Pi Zero. At the other end of the price spectrum, one can couple an Ettus Research “X310” with an NVIDIA “Tegra TX1” embedded computer for upwards of $6000, and potentially transmit and receive waveforms with several tens of MHz of instantaneous bandwidth.
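For readers who want to experiment before attending: the RTL2832U-based receivers mentioned above store raw captures as interleaved unsigned 8-bit I/Q bytes, so a few lines of Python suffice to turn a capture into complex baseband samples. In this sketch the capture file name is hypothetical and a handful of synthetic bytes stand in for a real recording.

```python
def iq_bytes_to_complex(raw):
    """Convert interleaved unsigned 8-bit I/Q bytes (the rtl_sdr capture
    format) into complex samples scaled to roughly [-1, 1]."""
    # Each byte is offset-binary: 0..255 maps to about -1..+1 around 127.5.
    return [complex((raw[i] - 127.5) / 127.5, (raw[i + 1] - 127.5) / 127.5)
            for i in range(0, len(raw) - 1, 2)]

# A real capture would come from e.g. raw = open("capture.bin", "rb").read()
# (file name hypothetical); here three synthetic I/Q pairs stand in for it.
raw = bytes([127, 127, 255, 0, 0, 255])
samples = iq_bytes_to_complex(raw)
print(samples)  # the first pair is mid-scale, i.e. (almost) zero signal
```

The same list of complex samples is what GNU Radio's signal-processing blocks operate on once a hardware source block has done this conversion for you.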
This tutorial will cover a selection of topics in commercial embedded computers and commercial SDR technology. A nearly exhaustive list of available, Linux-compatible, SDR-compatible embedded computers will be presented, as will a nearly exhaustive list of commercial SDR products. Some combinations of these technologies will be discussed and demonstrated, chiefly focusing on the low-cost alternatives. There will be hands-on demonstrations featuring GNU Radio running on a low-cost SBC+SDR combination, and the tutorial will cover some common pitfalls in Linux and GNU Radio on embedded platforms. The general capabilities of various platforms at different price points will be discussed in terms of the tradeoffs involved, which include factors such as modulation types, data rates, tuning frequency ranges, noise characteristics, and sensitivity.
Dr. Raj Bhattacharjea received the BS, MSECE, and Ph.D. degrees from Georgia Tech. He specializes in the modeling and simulation of long-range radiofrequency links for wireless telecommunications, radar, and electronic warfare applications. While pursuing his Ph.D., he was a member of the research faculty and a research consultant for the US Naval Academy, where he developed a novel computational model for radiofrequency propagation prediction near the surface of the earth in the presence of atmospheric refractive effects. He currently is a research engineer in the Georgia Tech Research Institute Information and Communications Laboratory, where he works on propagation modeling and software defined radio for electronic warfare scenarios. His other research interests include transformation electromagnetics, wideband wireless channel measurements, and backscatter communications. He is an active reviewer for several IEEE journals and magazines.
3:00 p.m. - 3:30 p.m.: Refreshment Break
3:30 p.m. - 5:00 p.m.: Conclusion of Tutorial Sessions 1A, 1B, and 1C
5:30 p.m. - 8:00 p.m.: Virginia Tech Student Poster Session and Beer/Pizza Reception, Old Dominion Ballroom - Squires Student Center
7:00 a.m. - 9:00 a.m.: Registration - Main Hall
8:00 a.m. - 8:05 a.m.: Alumni Assembly Hall, Dr. Jeffrey Reed introduces Keynote Speaker, Dr. Theodore "Ted" Rappaport
8:05 a.m. - 9:00 a.m.: Keynote Address, Dr. Ted Rappaport, New York University
Title: Millimeter Wave Wireless Communications and 5G: It will work!
Abstract: Since 2011-2012, when pioneering channel measurements by the author first proved to the world that the millimeter wave spectrum could support future mobile wireless access, corporations and governments have invested billions of dollars in the development of 5G wireless systems. This keynote demonstrates the demand drivers and technology innovations that will enable future 5G mobile systems to operate with carrier frequencies and bandwidths that are orders of magnitude greater than what has ever been used in the 40-year history of the cellular industry. Some key disconnects between the legacy 3GPP standardization process and its existing channel models, and the channel characteristics of millimeter wave frequencies, are also presented, with the upshot being that capacity predictions and system analysis can be dramatically erroneous if legacy channel models are used.
Theodore (Ted) S. Rappaport is the David Lee/Ernst Weber Professor of Electrical and Computer Engineering at the Polytechnic Institute of New York University (NYU-Poly) and is a professor of computer science at New York University’s Courant Institute of Mathematical Sciences. He is also a professor of radiology at the NYU School of Medicine.
Rappaport is the founding director of NYU WIRELESS, one of the world’s first academic research centers to combine engineering, computer science, and medicine. Earlier, he founded two of the world’s largest academic wireless research centers: the Wireless Networking and Communications Group (WNCG) at the University of Texas at Austin in 2002, and the Mobile and Portable Radio Research Group (MPRG), now known as Wireless@Virginia Tech, in 1990.
9:00 a.m. - 12:30 p.m.: Tutorial Sessions 2A, 2B, and 2C Begin (3 hours long with a break halfway through)
Tutorial Session 2A: Drillfield Room
Title: Recent Developments in Artificial Intelligence Applications of Deep Learning for Signal Processing
Abstract: In this tutorial, we will provide a quick introduction to the artificial intelligence topic known as deep learning. Deep learning is a rapidly moving field with many recent developments, including recent theoretical work done by Virginia Tech Hume Center faculty. Specifically, a new signal processing application using deep learning will be described, as well as new applications of the tools of probability to help understand what is happening in deep learning networks.
Dr. Bob McGwier is Director of Research for the Ted and Karyn Hume Center for National Security and Technology. He is a Research Professor of Electrical and Computer Engineering as well as Aerospace and Ocean Engineering. Dr. McGwier earned his undergraduate degrees at Auburn University and his Ph.D. in Applied Mathematics from Brown University in 1988. He was on the research staff of the Institute for Defense Analyses, Center for Communications Research from June 1986 until June 2011, when he joined Virginia Tech. His research interests are numerous but are most closely related to cognitive radio, software radio, machine learning, and satellite systems. He is a former VP of Engineering and a current director of AMSAT, Inc. He is a co-founder and technical director of Federated Wireless, Inc. and a co-founder of Hawkeye 360, Inc.
Tutorial Session 2B: Duck Pond Room
Title: Security Principles for LTE Device-to-Device (D2D) Proximity Services (ProSe)
Abstract: With Release 12 of its set of technical specifications, the 3rd Generation Partnership Project (3GPP) added two features that were standardized within a work item called “LTE Device-To-Device (D2D) Proximity Services (ProSe)”. These two features target two very different applications. First, there is Direct Discovery, a mechanism that allows two or more devices in close proximity to detect each other. This functionality is intended for commercial applications, such as advanced social media and advertisement. Second, there is Direct Communication, which enables two or more devices to communicate with each other, including group communication, which is important for critical-communication applications such as public safety. Security is an important aspect of all types of communications. For Direct Communication, however, lives may depend on the communication link, and thus robust security principles need to be in place to ensure proper and reliable communication. In this tutorial we will explore the basics and fundamentals behind LTE D2D ProSe, with an emphasis on Direct Communication as intended for critical communication and tactical communication as used by the armed forces. We will further review the security principles embedded in Direct Communication to establish and maintain a secured communication link between two or more devices, with and without the LTE network being involved.
Andreas Roessler is technology manager for North America at Rohde & Schwarz USA, Inc., with a focus on LTE/LTE-Advanced and now 5G. His responsibilities include strategic marketing and product portfolio development. Andreas graduated from Otto-von-Guericke University in Magdeburg, Germany, and holds a master's degree in communication engineering. He has more than 12 years of experience in the mobile industry and in wireless technologies.
Tutorial Session 2C: Smithfield Room
Title: Throughput Maximization in Wireless Networks Through Cross-layer Design and Optimization
Abstract: Modern wireless networks are usually built upon advanced technologies and algorithms at all layers. The performance of such networks depends on the interactions of technologies and algorithms across different layers. This tutorial focuses on how to maximize throughput of wireless networks through cross-layer design and optimization. We show how to develop tractable models at the physical, link, and network layers so that the behavior of each layer and the intricate relationships across different layers can be characterized. More important, we show how such complex cross-layer optimization problems can be solved through some powerful analytical tools from mathematical programming. The optimal solutions of these problems allow the network designer to understand the performance limits of such networks. They also serve as benchmarks for the design of distributed algorithms in the field. A number of case studies will be presented.
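A classic physical-layer building block that feeds into such cross-layer formulations is water-filling power allocation across parallel channels. The sketch below (plain Python; the channel gains and power budget are made up for illustration, and this is not code from the tutorial) finds the water level by bisection:

```python
import math

def water_filling(gains, total_power, iters=100):
    """Maximize the sum rate over parallel channels subject to a power budget:
    the optimum is p_i = max(0, mu - 1/g_i); find the water level mu by
    bisection so that the allocated powers sum to the budget."""
    lo, hi = 0.0, total_power + max(1.0 / g for g in gains)
    for _ in range(iters):
        mu = (lo + hi) / 2
        used = sum(max(0.0, mu - 1.0 / g) for g in gains)
        lo, hi = (mu, hi) if used < total_power else (lo, mu)
    return [max(0.0, mu - 1.0 / g) for g in gains]

gains = [2.0, 1.0, 0.25]  # hypothetical per-channel gains (SNR per unit power)
power = water_filling(gains, total_power=4.0)
rate = sum(math.log2(1 + p * g) for p, g in zip(power, gains))
print([round(p, 3) for p in power], round(rate, 3))
# the weakest channel gets no power; the stronger ones share the budget
```

Cross-layer formulations of the kind the tutorial addresses embed building blocks like this within link-scheduling and routing constraints, which is what makes the resulting mathematical programs challenging.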
Dr. Thomas Hou is the Bradley Distinguished Professor of Electrical and Computer Engineering at Virginia Tech, USA. His research interests are to develop innovative solutions to complex cross-layer optimization problems in wireless networks. He is particularly interested in exploring new limits of network performance by exploiting advances at the physical layer and other new enabling technologies.
Prof. Hou was named an IEEE Fellow for contributions to modeling and optimization of wireless networks. He has published two textbooks: Cognitive Radio Communications and Networks: Principles and Practices (Academic Press/Elsevier, 2009) and Applied Optimization Methods for Wireless Networks (Cambridge University Press, 2014). The first book has been selected as one of the Best Readings on Cognitive Radio by the IEEE Communications Society. Prof. Hou’s research was recognized by five best paper awards from the IEEE and two paper awards from the ACM. He holds five U.S. patents.
Prof. Hou was an Area Editor of IEEE Transactions on Wireless Communications (Wireless Networking area), and an Editor of IEEE Transactions on Mobile Computing, IEEE Journal on Selected Areas in Communications – Cognitive Radio Series, and IEEE Wireless Communications. Currently, he is an Editor of IEEE/ACM Transactions on Networking and ACM Transactions on Sensor Networks. He is the Steering Committee Chair of IEEE INFOCOM conference, a member of the IEEE Communications Society Board of Governors, and the Chair of GLOBECOM/ICC Technical Committee (GITC). He is a Distinguished Lecturer of the IEEE Communications Society.
10:30 a.m. - 11:00 a.m.: Refreshment Break, Poster Session
11:00 a.m. - 12:30 p.m.: Tutorial Sessions 2A, 2B, and 2C Conclude
12:30 p.m. - 1:30 p.m.: Lunch, Latham Ballroom
1:30 p.m. - 4:00 p.m.: Spectrum-ShaRC Student Cognitive Radio Contest Finals, Cascades Room (Full agenda available here)
1:30 p.m. - 3:00 p.m.: Tutorial Sessions 3A, 3B, and 3C Begin (3 hours long with a break halfway through)
Tutorial Session 3A: Drillfield Room
Title: Radio Frequency Spectrum Management and Recent Developments in Broadband Wireless Spectrum
Abstract: The basics include explanations of key definitions such as allocation, assignment, and allotment. An overview is presented of the relevant legal documents and regulatory agencies, and how they intersect: the International Telecommunication Union (ITU), the Federal Communications Commission (FCC), and the National Telecommunications and Information Administration (NTIA). The Table of Frequency Allocations is presented and explained, with emphasis on current broadband wireless spectrum. Spectrum repurposing, changing allocations, and licensing processes, including auctions, are presented. The Commercial Spectrum Enhancement Act (CSEA) and recent amendments are discussed.
Recent Developments Concerning Broadband Wireless Spectrum Include:
- Legacy and modern spectrum sharing techniques are detailed, along with why modern techniques are necessary to provide spectrum for broadband wireless services while sharing with incumbent spectrum users.
- The Presidential Memoranda concerning spectrum management and broadband wireless spectrum are highlighted. Recent legislation and possible impact are briefly summarized. A brief status of achieving the President’s broadband wireless goals is discussed.
- Identifying spectrum for 5G broadband wireless services is discussed including the results of the ITU World Radiocommunications Conference (WRC-15).
Fredrick Matos is a senior spectrum manager with the National Telecommunications and Information Administration (NTIA), Office of Spectrum Management, a unit of the Dept. of Commerce. Mr. Matos has been with NTIA for over 20 years.
During June 2003-April 2004, Mr. Matos was detailed to the Coalition Provisional Authority (CPA) in Iraq where he was a senior advisor on telecommunications in the CPA’s nation-building efforts. He was the national telecommunications regulator and director of all Iraqi national spectrum management, including licensing and policy development and implementation for all spectrum-related services. He wrote the cellular telecommunications licenses for the three cellular operators, and negotiated the license provisions. He led an Iraqi team in developing the national channel allotment plans for FM and TV broadcasting. He was in a Senior Executive Service Position for several months, advising Ambassador Bremer, the CPA Administrator.
In previous positions at NTIA, he developed policies and participated as a member of the U.S. delegation to international spectrum management treaty conferences of the International Telecommunication Union, a United Nations organization, where he negotiated various complex issues. Prior to NTIA, he was a spectrum engineer and project manager with the IIT Research Institute at the Department of Defense Electromagnetic Compatibility Analysis Center (ECAC).
Mr. Matos served as senior telecommunications policy adviser and Legislative Assistant to U.S. Congressman Tom Tauke where he supported the Congressman’s activities on the House Subcommittee on Telecommunications.
He initiated and teaches a course in radio frequency spectrum management at the George Washington University. He was the NTIA director of the spectrum management training course conducted with the United States Telecommunications Training Institute (USTTI), training spectrum managers from developing countries.
He is the author of the book, Spectrum Management and Engineering, published in 1985. He was Director of Engineering for the MediAmerica Corporation, a chain of radio broadcasting stations.
Mr. Matos advised the government of Egypt by conducting an extensive evaluation of the spectrum management organization and processes. At the request of the Egyptian Telecommunications Minister, he reviewed the draft new telecommunications law, providing extensive comments in all areas of the law.
Mr. Matos coordinated and led a team of experts presenting a special seminar in radio frequency spectrum management to the Panama Canal Commission in Panama. He subsequently led another team of experts presenting a special seminar to the Ente Regulador, the newly established telecommunications and spectrum management regulatory agency of the Republic of Panama. Mr. Matos was an adviser to the Ente Regulador in establishing and organizing the spectrum management organization and its procedures, and it is now the most modern spectrum management organization in Latin America.
Mr. Matos was part of a team of experts presenting a spectrum management seminar in Tel Aviv, Israel that included participants from Israel, the Palestine Authority, and Jordan.
Mr. Matos holds Bachelor’s and Master’s degrees in electrical engineering. His graduate school research was in computer communications networks. He is a licensed radio amateur, first obtaining a license as a teenager. He holds the extra class license.
Tutorial Session 3B: Duck Pond Room
Title: Towards 5G: Slicing a 4G LTE Network
Abstract: Experimentation has played a very important role to advance research in mobile computing. Although simulation is an important tool in studying network behavior and analyzing new protocols and algorithms, it is essential that new research ideas be validated on real systems. The evolution of the 5G standard needs a test network that provides researchers the opportunity to develop, test, and evaluate network technologies and architectures in a live environment.
Today the most significant infrastructure for wireless experimentation at scale is available through the NSF GENI program. Researchers have found access to GENI’s 4G WiMAX and LTE deployment, connected with distributed cloud infrastructure (GENI racks) over high-speed Internet2 networks, to be a very fruitful environment for experiments and service deployments in a wide range of disciplines, including:
- Novel, non-IP mobile internetworking protocols, e.g., MobilityFirst
- Vehicular networking and connected vehicle applications
- Emergency, public safety, and healthcare IT communications
The deployment consists of 30 base stations (BSs) across 12 campuses nationwide and 4 indoor small-cell LTE BSs deployed in the ORBIT Lab at Rutgers University.
This tutorial will begin by introducing the GENI experimental framework and infrastructure layout to attendees. The tutorial will have a strong hands-on experimental component, and attendees will be encouraged to sign up for accounts that allow them to use the testbed even after the tutorial is over. We will then explore concepts of network slicing in 4G networks using SDN techniques developed in GENI, followed by an introduction to the deployed LTE core network Evolved Packet Core (EPC) instance, which is exposed to experimenters through a well-defined API. Attendees will run sample experiments in different network slices that configure the infrastructure end to end using SDN protocols such as OpenFlow/OVS. These experiments will be instrumented, allowing attendees to analyze performance metrics in an LTE network; static nodes with LTE devices give attendees remote access to the client side of the link. The tutorial concludes by laying out open challenges as we continue to evolve towards the 5G standard.
Abhimanyu Gosain is a Network Scientist at Raytheon BBN Technologies. He is the wireless system engineer for the NSF GENI project and co-principal investigator for the NSF Orbit LTE project. His current research focus is on applying software defined networking techniques to mobile 4G LTE networks. Abhimanyu received his M.Sc. degree in Electrical Engineering from Tufts University in 2007.
Ivan Seskar, Associate Director at WINLAB, Rutgers University, is responsible for experimental systems and prototyping projects. Mr. Seskar is currently PI for two NSF GENI projects - the “meso-scale” OpenFlow virtual network deployment at Rutgers University and the Open WiMAX/LTE base station project, which resulted in campus deployments at several US universities - as well as the PI of an ongoing NSF CRI project aimed at deployment of a wideband cognitive radio platform (“WiSER”). He is also co-PI for the NSF-supported ORBIT project at WINLAB and has led technology development and operations activities since the testbed was released as a community resource in 2005, for which the team received the 2008 NSF Alexander Schwarzkopf Prize for Technological Innovation. His technical interests include experimental protocol evaluation, radio technology, software defined and cognitive radios, vehicular networking and wireless systems in general. Ivan is a Senior Member of the IEEE, a member of the ACM, and co-founder and CTO of Upside Wireless Inc.
Tutorial Session 3C: Smithfield Room
Presenters: Dr. Allen MacKenzie and Dr. Mohammad J. Abdel-Rahman, Virginia Tech
Abstract: Emerging wireless networks operate using dynamic and uncertain resources that render them susceptible to severe performance degradation. Managing resources in such stochastic networks while ensuring a certain level of network performance is challenging. In this tutorial, we introduce stochastic/robust optimization as a powerful tool for resource allocation in such uncertain networks. We will discuss various approaches to modeling uncertainty and explain different feasibility and optimality notions under uncertainty, covering both static and adaptive stochastic optimization. In particular, we focus on (i) chance-constrained, (ii) simple recourse, and (iii) two-stage as well as multi-stage stochastic optimization approaches. Throughout the tutorial, we will illustrate how to use some of these techniques to formulate example resource allocation problems in emerging wireless networks. Specifically, we will consider (i) joint channel and base station allocation in opportunistic LTE networks, (ii) orchestrating a robust virtual LTE-U network from hybrid half/full-duplex Wi-Fi APs, and (iii) link scheduling in cellular networks augmented with millimeter-wave APs.
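For readers unfamiliar with the first and third approaches named above, the standard textbook forms look like the following (the symbols here are generic illustrations, not the tutorial's own notation): a chance constraint requires feasibility with high probability, while a two-stage model adds a recourse cost for corrective actions taken after the uncertainty is revealed.

```latex
% Chance-constrained program: the allocation x must satisfy the
% random constraint with probability at least 1 - \epsilon.
\min_{x \in X} \; c^{\top} x
\quad \text{s.t.} \quad
\Pr\!\left( a(\omega)^{\top} x \le b \right) \ge 1 - \epsilon

% Two-stage program with recourse: x is chosen before the random
% outcome \omega is revealed; the recourse decision y corrects for it.
\min_{x \in X} \; c^{\top} x + \mathbb{E}_{\omega}\!\left[ Q(x, \omega) \right],
\qquad
Q(x, \omega) = \min_{y \ge 0} \left\{ q^{\top} y \;:\; W y \ge h(\omega) - T(\omega)\, x \right\}
```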
Allen B. MacKenzie received his bachelor’s degree in electrical engineering and mathematics from Vanderbilt University in 1999. In 2003 he earned his Ph.D. in electrical engineering at Cornell University and joined the faculty of the Bradley Department of Electrical and Computer Engineering at Virginia Tech, where he is now an associate professor.
Prof. MacKenzie’s research focuses on wireless communications systems and networks. His current research interests include cognitive radio and cognitive network algorithms, architectures, and protocols and the analysis of such systems and networks using game theory. His past and current research sponsors include the National Science Foundation, the Defense Advanced Research Projects Agency, and the National Institute of Justice.
Prof. MacKenzie is an associate editor of the IEEE Transactions on Communications and the IEEE Transactions on Mobile Computing. He also serves on the technical program committee of several international conferences in the areas of communications and networking, and is a regular reviewer for journals in these areas.
Prof. MacKenzie is a senior member of the IEEE and a member of the ASEE and the ACM. In 2006, he received the Dean’s Award for Outstanding New Assistant Professor in the College of Engineering at Virginia Tech. He is the author of more than 45 refereed conference and journal papers and a co-author of the book Game Theory for Wireless Engineers.
Mohammad Abdel-Rahman is currently a Postdoctoral Research Associate in the Department of Electrical and Computer Engineering at Virginia Tech. He received his PhD degree from the Electrical and Computer Engineering Department at the University of Arizona in November 2014. He is the recipient of the College of Engineering Fall 2014 Outstanding Graduate Student Award. He received his M.Sc. degree from the Electrical Engineering Department at Jordan University of Science and Technology, Jordan, in May 2010, and his B.Sc. degree from the Communication Engineering Department at Yarmouk University, Jordan, in September 2008.
Dr. Abdel-Rahman’s research is in the broad area of wireless communications and networking, with particular emphasis on resource management, adaptive protocols, and security issues. He is currently involved with projects related to virtualized wireless networks and millimeter-wave networking. He was involved with projects related to cognitive radios and dynamic spectrum access; wireless security (e.g., insider attacks, jamming, game theoretic countermeasures); secure satellite communications; and target detection in mobile wireless sensor networks. He is a member of the IEEE.
3:00 p.m. - 3:30 p.m.: Refreshment Break
3:30 p.m. - 5:00 p.m.: Tutorial Sessions 3A, 3B, and 3C Conclude
6:00 p.m. - 9:00 p.m.: Wireless @ Virginia Tech Dinner Party, The German Club
8:00 a.m. - 8:05 a.m.: Alumni Assembly Hall, Dr. Allen MacKenzie Introduces Keynote Speaker, Dr. Phil Fleming
8:05 a.m. - 8:55 a.m.: Keynote Address, Dr. Phil Fleming, Nokia Networks
Title: Future Directions for Wireless Networks
Abstract: Almost all communication networks will have a wireless link in the next several years driven by an explosion of use cases and radio technologies. In this talk, we will discuss the broad set of requirements that this implies for future networks as well as a view into the innovative solutions being proposed to meet those requirements.
Dr. Phil Fleming is Chief Technology Officer of North America at Nokia Networks. He earned his Ph.D. in Mathematics from the University of Michigan, after which he worked for nine years (1982-1991) at Bell Labs as a Distinguished Member of Technical Staff (DMTS). For the next two decades Dr. Fleming led a team of researchers and advanced technologists in designing and developing innovative solutions in packet scheduling and radio receiver/transmitters for WiMAX and LTE as a Fellow of the Technical Staff and Senior Director at Motorola Solutions. In 2011 he joined Nokia Solutions and Networks as their head of Advanced Radio Technology and Engineering, and later head of Advanced Technology. Currently, as CTO, Dr. Fleming looks for opportunities for research and strategic collaborations with telecom technologists in North America and worldwide.
9:00 a.m. - 12:30 p.m.: Tutorial Sessions 4A, 4B, and 4C Begin (3 hours long with a break halfway through)
Tutorial Session 4A: Drillfield Room
Presenters: Dr. Chris Anderson, US Naval Academy and Dr. Greg Durgin, Georgia Tech
Abstract: Over the past decade, wireless communications has undergone a revolutionary transformation from voice-centric, point-to-point networking to a world of ubiquitous data communications. This rapid growth has been driven by users’ insatiable demand for real-time, high-speed data in increasingly complex environments. Current predictions indicate that by 2020 users will expect 1 Gbps average throughput and 10 Gbps peak throughput everywhere, regardless of location, with demand primarily driven by machine-to-machine and Internet of Things communications. Numerous ideas for these future 5G systems have been proposed to meet these demands, including Spectrum Sharing, Femtocellular systems, Massive MIMO, mm-Wave systems, and ever more complex wireless networking standards. A vitally important requirement for the accurate, reliable, and efficient operation of these 5G systems is robust and reliable propagation models that can precisely predict (i) coverage for desired (primary) signals regardless of environmental complexity, (ii) aggregate effects of numerous in-band secondary signals, and (iii) real-time propagation for mobile users or highly dynamic environments. These problems are exacerbated at the anticipated 5G mm-Wave frequencies due to dense multipath scattering, greater atmospheric effects, and antenna considerations. In order to effectively educate the next generation of technical professionals on this topic, it is imperative to have a firm understanding of what wireless propagation encompasses and why there is a compelling need for multidisciplinary research in this field.
This tutorial provides an overview of foundational propagation models, modern “big data” approaches to propagation modeling, narrowband and broadband propagation measurements, and presents applications of these techniques to proposed 3.5 GHz spectrum sharing and mm-Wave 5G communications. The first part of the tutorial will cover the development of “classical” models such as free-space, two-ray, knife-edge diffraction, and log-distance scattering; many of these are deeply embedded in even the most advanced prediction software and a deep understanding of them is vital to interpret prediction results. From there, we will discuss the development of empirical and semi-empirical models such as the Really Extended Hata, Irregular Terrain Model, and Attenuation Factor models currently in use by U.S. Government labs and DoD for propagation prediction of spectrum sharing systems. Next, we will cover modern approaches to propagation modeling that incorporate Geographic Information Systems terrain maps, land-use/land-cover databases, building databases, and LIDAR data in an effort to more accurately predict propagation in complex environments. Then, we will cover techniques for precision narrowband and broadband propagation measurements, as well as their limitations, sources of errors, and how that can adversely impact model development. Finally, we will discuss the unique aspects of mm-wave propagation, both for indoor femtocellular systems as well as outdoor massive-MIMO applications. The tutorial will end with case studies of propagation prediction for both 3.5 GHz spectrum sharing as well as mm-wave systems.
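The classical models surveyed above reduce to simple closed forms. As a minimal sketch (the 3.5 GHz example frequency matches the spectrum-sharing band discussed above, but the reference distance and path loss exponent n = 3 are illustrative assumptions, not values from the tutorial), free-space and log-distance path loss can be computed as:

```python
import math

def free_space_path_loss_db(d_m: float, f_hz: float) -> float:
    """Friis free-space path loss (dB) at distance d_m meters, frequency f_hz Hz."""
    c = 299_792_458.0  # speed of light (m/s)
    return 20.0 * math.log10(4.0 * math.pi * d_m * f_hz / c)

def log_distance_path_loss_db(d_m: float, f_hz: float,
                              d0_m: float = 1.0, n: float = 3.0) -> float:
    """Log-distance model: free-space loss up to a reference distance d0_m,
    plus a 10*n*log10(d/d0) term (n = path loss exponent of the environment)."""
    return free_space_path_loss_db(d0_m, f_hz) + 10.0 * n * math.log10(d_m / d0_m)

# At the proposed 3.5 GHz sharing band, 100 m from the transmitter:
fspl = free_space_path_loss_db(100.0, 3.5e9)    # ~83 dB
ldpl = log_distance_path_loss_db(100.0, 3.5e9)  # larger, since n = 3 > 2
```

A useful sanity check on either model: doubling the distance adds 20 log10(2) ≈ 6 dB in free space, and 10·n·log10(2) ≈ 9 dB for n = 3.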
Christopher R. Anderson joined the United States Naval Academy from Virginia Tech as an Assistant Professor in 2007. In 2013, he was promoted to Associate Professor of Electrical Engineering. He is the founder and current director of the Wireless Measurements Group, a focused research group that specializes in spectrum, propagation, and field strength measurements in diverse environments and at frequencies ranging from 300 MHz to over 20 GHz.
Anderson's current research interests include radiowave propagation measurements and modeling, embedded software-defined radios, dynamic spectrum sharing, and ultra wideband communications. He is a Senior Member of IEEE, has authored or co-authored over 30 refereed publications, and his research has been funded by the National Science Foundation, the Office of Naval Research, NASA, Defense Spectrum Organization, and the Federal Railroad Administration. He is currently an Editor for IEEE Transactions on Wireless Communications and was a Guest Editor for IEEE Journal on Selected Areas in Signal Processing Special Issue on Non-Cooperative Localization Networks.
Gregory D. Durgin received the B.S. degree in electrical engineering, M.S. degree in electrical engineering, and Ph.D. degree from Virginia Tech in 1996, 1998, and 2000, respectively. In Fall 2003, he joined the faculty of the School of Electrical and Computer Engineering at Georgia Tech. In 2001, he spent one year as a Visiting Researcher with the Morinaga Laboratory, Osaka University. He regularly serves as a consultant to industry. He authored the textbook Space-Time Wireless Channels (Prentice-Hall, 2002), which is the first in the field of space-time channel modeling.
Prof. Durgin serves on the IEEE Wave Propagation Standards Committee. He was a co-recipient of the 1998 Stephen O. Rice Prize for best original journal paper in the IEEE TRANSACTIONS ON COMMUNICATIONS. He was the recipient of a 2001 Japanese Society for the Promotion of Science (JSPS) Post-Doctoral Fellowship. He was also the recipient of several teaching awards, as well as the National Science Foundation CAREER Research Award.
Tutorial Session 4B: Cascades Room
Title: Wireless Testbeds in Research and Education
Abstract: Wireless testbeds play a major role in developing and testing new wireless communications technologies and systems. Future mobile broadband systems will need to be of high availability and flexibility to support mission-critical applications and new spectrum management paradigms. Over the years, several large-scale testbeds have emerged at universities and research centers supporting research and education on several aspects of wireless communications using custom hardware and software-defined radio (SDR) technology. At Virginia Tech, the 48-node cognitive radio network (CORNET) testbed spans 4 floors of a campus building, whereas 11 outdoor nodes have been deployed on rooftops across campus for outdoor experiments. An LTE testbed and a Cognitive Medical Wireless Testbed System (COMWITS) are currently being developed. All testbed nodes are remotely accessible. CORNET is currently used for the student spectrum sharing radio contest (Spectrum-ShaRC), where student teams develop and test cognitive engines using the in-house developed cognitive radio test system (CRTS) software. CORNET is also used for developing visualization and gamification tools (CORNET-3D) for undergraduate STEM education. LTE-CORNET features industry-grade/commercial and SDR LTE systems, open-source waveforms and channel emulators that support experimental research on LTE for mission-critical communications.
This tutorial will discuss the use of wireless testbeds in research and education. It will provide a survey of some of the existing wireless testbeds and trials with focus on spectrum sharing before presenting Virginia Tech’s testbed infrastructure, its usability and example use cases. The audience will learn about the diverse capabilities of wireless testbeds, experience some of the available hardware and software tools and be involved in discussions on new applications and desired capabilities to support current and emerging needs for advancing research and education.
Vuk Marojevic graduated from the University of Hannover (M.S.), Germany, and Polytechnic University of Catalonia (Ph.D.), Spain, both in electrical engineering. He joined Wireless @ Virginia Tech in 2013. His research interests are in software-defined radio, spectrum sharing, 4G/5G cellular technology, and resource management with application to public safety and mission-critical networks and unmanned aircraft systems.
Tutorial Session 4C: Smithfield Room
Title: Massive MIMO
Abstract: MIMO (Multiple-Input Multiple-Output) technology enables spatial multiplexing and diversity through the use of multiple transmit and receive antennas at each communications station. In Multiple User MIMO (MU-MIMO), a multi-antenna base station (BS), typically incorporating fewer than 10 antennas, simultaneously communicates with multiple single-antenna user terminals. Massive MIMO increases the number of BS antennas by an order of magnitude (hundreds of antennas), providing much finer spatial resolution, which enables an increase in the number of concurrent streams (i.e., an increase in the number of users as well as data throughput). In addition, the large number of antennas enables a reduction in the power transmitted per antenna, which allows the use of less expensive radios. This tutorial will present the theory of massive MIMO, including the ability to accurately estimate the channel information, the use of precoding to focus the energy toward each user terminal, and signal detection and recovery techniques to enable multi-user reception at the BS. The benefits and implementation challenges will be explored, including a discussion of the effect of pilot signal contamination from adjacent cells on the ability to estimate the channel.
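The array-gain intuition behind the abstract's power-reduction claim can be sketched in a few lines. This is a simplified single-user illustration, not the tutorial's material: it assumes a conjugate (matched-filter) precoder and an i.i.d. Rayleigh channel, and the antenna and trial counts are arbitrary.

```python
import random

def conjugate_beamforming_gain(M: int, trials: int = 200) -> float:
    """Average array gain |h^T w|^2 for an M-antenna BS serving one
    single-antenna user with conjugate precoding w = conj(h)/||h||."""
    random.seed(1)  # reproducible channel draws
    total = 0.0
    for _ in range(trials):
        # i.i.d. CN(0,1) Rayleigh channel coefficients
        h = [complex(random.gauss(0.0, 0.5 ** 0.5),
                     random.gauss(0.0, 0.5 ** 0.5)) for _ in range(M)]
        norm = sum(abs(x) ** 2 for x in h) ** 0.5
        # the conjugate precoder aligns the transmit phases with the channel
        w = [x.conjugate() / norm for x in h]
        # coherent sum: h^T w = ||h||, so the realized gain is ||h||^2
        total += abs(sum(hi * wi for hi, wi in zip(h, w))) ** 2
    return total / trials
```

On average the gain is about M, so a 64-antenna array delivers roughly 8x the gain of an 8-antenna one; that linear scaling is why per-antenna transmit power can be reduced as the array grows.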
Jim Costabile is the founder and CEO of Syncopated Engineering Inc., a creative solution provider of software applications and embedded systems for wireless communications, signal processing and data analytics. He has over 20 years of experience leading business development and operations, product management, and research and development. Specialized engineering skills include wireless communications, embedded software development, and statistical signal processing. He is an adjunct faculty member at Johns Hopkins University where he teaches Adaptive Signal Processing and Communication Systems Engineering in the electrical engineering graduate program. He received his BSEE from the University of Akron, OH, MSEE from Johns Hopkins University, and an MBA as part of the Executive program at Loyola. He is a senior member of the IEEE and a graduate of the MIT Entrepreneurial Development Program. He previously presented a tutorial on Wireless Position Location Systems at the 2010 Virginia Tech Wireless Symposium.
10:30 a.m. - 11:00 a.m.: Refreshment Break
11:00 a.m. - 12:30 p.m.: Tutorial Sessions 4A, 4B, and 4C Conclude
12:30 p.m.: Event Concludes