Thursday, October 3, 2019
Design Of Wifi Based Tdma Protocol Information Technology Essay
Time division multiple access (TDMA) is a multiple access method for sharing a channel by dividing the signal into different time slots. TDMA has worked successfully in cellular mobile communication for many years and has recently been combined with OFDM to produce OFDMA. TDMA also ensures fairness between the nodes in the network. For the vehicular scenario, we propose a TDMA protocol that works alongside CSMA/CA to mitigate some of the challenges in vehicular communications. In this chapter we discuss the design of the protocol, the connection messages, the protocol flow and the cross-layer communication between the new TDMA sublayer and the CSMA/CA and PHY layers. Section 4.2 gives a general explanation of the proposed TDMA protocol; the design of the Wi-Fi-based 802.11p protocol is discussed in Section 4.3. Section 4.4 presents the implementation of the TDMA protocol in the simulation environment. Simulation problems and implementation improvements are discussed in Section 4.5, and the chapter summary is given in Section 4.6. 4.2. EXPLANATION OF THE TDMA PROTOCOL The proposed TDMA protocol is a provider-client protocol, i.e. it is centralized. The alternative would be a distributed or ad hoc protocol, as is done in (Fan Yu, 2007) and (Katrin, 2009). By centralized we mean that the provider is the only node that handles the information carried on both channels. This does not mean that all communication is unidirectional (from the provider to the client); some of it is bidirectional. The provider in our case is the RSU (Road Side Unit) and the clients/stations are the OBUs (On-Board Units). From now on we use the term RSU for the provider or centralized node and OBU for the client or mobile station.
Here we implement a frame of 10 ms. Each frame consists of two main time slots, one for the control channel and one for the service (data) channel, each with a different duration. We chose 10 ms because this value is already used in other TDMA implementations, e.g. WiMAX. 4.3. DESIGN OF THE WI-FI-BASED TDMA PROTOCOL IEEE 802.11 has two modes, DCF and PCF. The Distributed Coordination Function (DCF) relies on the distributed CSMA/CA algorithm and an optional virtual carrier sense using RTS and CTS control frames (IEEE Std 802.11, 1999). If the channel is busy during the DIFS (DCF Interframe Space) interval, the station defers its transmission. The Point Coordination Function (PCF) is used in infrastructure mode and provides contention-free frame transfer for time-critical information (W. Wang, 2003). PCF is optional in the standard, and only a few vendors implemented it in their adapters. Two periods are defined in PCF mode: the contention-free period (CFP) and the contention period (CP). During the CFP, contention-free poll frames give stations permission to transmit. However, PCF has many drawbacks and limitations in long-distance applications (i.e. up to tens of kilometres), because the acknowledgement (ACK) mechanism, designed for contention-free local area networks, is sensitive to propagation delay. Also, once a station reserves access to the medium, it may occupy it for a long time without any other station being able to interrupt its transmissions, even in the case of high-priority traffic; for example, if a remote station has a lower data rate due to distance, it will take a long time to release the channel (Pravin, 2003). Consequently, it has been shown (S. Sharma, 2002) (Sridhar, 2006) that a TDMA-based MAC is suitable for long propagation delays. Most implemented solutions for long-distance Wi-Fi-based networks used a WiMAX-like TDMA frame for the PMP scenario.
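The 10 ms frame split described above can be sketched as a small structure. The text fixes only the total frame length, so the fraction given to the control slot is an illustrative assumption, not the thesis's actual value.

```cpp
#include <cassert>
#include <cstdint>

// Sketch of the 10 ms TDMA frame described in the text: one control slot
// and one service/data slot of different durations. The control fraction
// passed in is an assumption for illustration only.
struct TdmaFrame {
    uint32_t frame_us;    // total frame duration in microseconds
    uint32_t control_us;  // control-channel slot duration
    uint32_t service_us;  // service/data-channel slot duration
};

// Build a frame in which the control slot takes a given fraction of the frame.
TdmaFrame make_frame(uint32_t frame_us, double control_fraction) {
    TdmaFrame f;
    f.frame_us = frame_us;
    f.control_us = static_cast<uint32_t>(frame_us * control_fraction);
    f.service_us = frame_us - f.control_us;  // remainder goes to the data slot
    return f;
}
```

With a 10 000 µs frame and a hypothetical 20 % control slot, the service slot is the remaining 8 000 µs.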
However, running WiMAX/TDMA above Wi-Fi increases system complexity and overhead, since WiMAX/TDMA was built for licensed spectrum while Wi-Fi was built for an unlicensed environment. In this research a design of TDMA over 802.11 is presented. The function of the proposed TDMA is to disable the contention behaviour of 802.11 (CSMA/CA) to obtain a contention-free MAC. A new cross-layer design is introduced between CSMA/CA and a new logical TDMA layer, in which the Wi-Fi MAC frame is encapsulated in a logical TDMA header before being forwarded to the IP layer. The proposed protocol stack is shown in Figure 4.1. The CSMA/CA peer-to-peer protocol is disabled and replaced with the TDMA peer-to-peer protocol, as shown by the dotted lines. Figure 4.1. Protocol flow of the TDMA-based PMP. The logical TDMA header is added between the IP header and the MAC header. The function of the new header is to disable the random-access feature of CSMA/CA in 802.11 and replace it with a logical TDMA function, which maintains the synchronization of the local timers in the stations and delivers protocol-related parameters. The frame is shown in Figure 4.2. The proposed TDMA header contains the BCCH (broadcast control channel), FCCH (frame control channel) and RA (random access) fields. BCCH: contains general information, i.e. the timestamp via time_stamp_update(), the SSID, the BS-node capabilities and the random-access time interval ra_interval(). All these parameters (except the RA time interval) are prepared and copied from the beacon frame (using beacon_content()) by the Wi-Fi MAC device driver. The BCCH information tells the APs the sleep, wake-up, transmit and receive times. Figure 4.2. Additional TDMA header added to the Wi-Fi frame. FCCH: carries information about the structure and format of the ongoing frame, i.e. scheduler() and time_slot_builder(), containing the exact position of all slots, the Tx/Rx times, the guard times between them and the scheduling.
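One way to picture the logical TDMA header is as a plain struct carrying the BCCH and FCCH contents named in the text. The field names follow the description above; all concrete types and the helper function are assumptions made for illustration.

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical layout of the logical TDMA header inserted between the IP
// header and the Wi-Fi MAC header. Field names follow the BCCH/FCCH/RA
// description in the text; the types are assumptions.
struct Bcch {
    uint64_t timestamp_us;    // copied from the beacon (time_stamp_update())
    std::string ssid;         // copied from the beacon
    uint32_t ra_interval_us;  // random-access interval announced by the BS-node
};

struct SlotDesc {
    uint32_t start_us;  // slot position within the frame
    uint32_t len_us;    // Tx/Rx duration
    uint32_t guard_us;  // guard time after the slot
};

struct Fcch {
    std::vector<SlotDesc> slots;  // built by time_slot_builder()
};

struct TdmaHeader {
    Bcch bcch;
    Fcch fcch;
    uint8_t ra_slots;  // number of random-access channels in this frame
};

// End time of the last scheduled slot, useful as a frame-layout sanity check.
uint32_t frame_busy_until(const Fcch& f) {
    uint32_t end = 0;
    for (const SlotDesc& s : f.slots) {
        uint32_t e = s.start_us + s.len_us + s.guard_us;
        if (e > end) end = e;
    }
    return end;
}
```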
RACH: contains a number of random-access channels (RCH). This field is used when no schedule has been assigned to the APs in the UL fields. Non-associated APs use the RA period for their first contact with an AP using slot_time_request(). The flow diagram of the logical control and data channels is shown in Figure 4.3. Figure 4.3: Flow of the virtual channels in the TDMA frame. First, the RACH frame is received if there is any connection request from the APs to the BS. Then BCCH, FCCH and AGCH broadcast their information, after which the user payload is transmitted and received. A timer controls all transmitted and received signals. Although the new TDMA header comes at a cost in performance due to its overhead, in long-distance point-to-multipoint infrastructure scenarios the number of stations is usually not high compared with the end-user part. In our scenario we consider four remote access points and one central access point (the BS-node). Through TDMA_module(), each AP is assigned a time slot within the TDMA frame. TDMA also saves power, because each STA only needs to wake up during its time slots in each frame. If a new node (AP) wants to join the network, it listens to the BCCH frame to get the initial parameters from the BS-node. It then uses the RA period to send a time_slot_request() to the BS-node to request a time slot. The BS-node uses the FCCH field to update the new scheduling table in scheduler(). The TDMA_module() assigns time slots to APs by taking a copy of the NAV (network allocation vector) information (NAV_update()) from the Wi-Fi MAC layer and modifying it according to the schedule. The NAV acts as virtual carrier sensing, which limits the need for contention-based physical carrier sensing. This is done by setting a new back_off_counter() and NAV_new() in the TDMA_module(), which indicate the amount of time the medium will be reserved for each time slot.
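The NAV manipulation described above can be sketched as follows. The names mirror NAV_new()/NAV_old() from the text; the arithmetic (frame length plus closing messages) is taken from the next paragraph, and the state layout is an assumption.

```cpp
#include <cassert>
#include <cstdint>

// Sketch of how TDMA_module() could derive the NAV value that keeps
// CSMA/CA stations off the air for a whole TDMA frame.
struct NavState {
    uint32_t nav_us;      // current NAV seen by the Wi-Fi MAC
    uint32_t nav_old_us;  // saved copy (the text's NAV_old())
};

// Reserve the medium for the frame plus any messages needed to close
// the current operation.
void nav_update(NavState& st, uint32_t frame_us, uint32_t closing_us) {
    st.nav_old_us = st.nav_us;          // keep a copy of the old NAV
    st.nav_us = frame_us + closing_us;  // medium busy for the whole frame
}

// The scheduler reports "busy" to the Wi-Fi MAC while the NAV is nonzero.
bool medium_busy(const NavState& st) { return st.nav_us != 0; }
```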
The BS-node sets the NAV value to the frame length plus any other messages necessary to complete the current operation, to make sure that no station (AP) accesses the channel during the frame transmission. Other stations count down from the NAV to 0. While the NAV is nonzero, the scheduler() sends the Wi-Fi MAC an indication that the medium is busy; before the NAV reaches 0, back_off() and NAV_new() update the Wi-Fi MAC with the new NAV. The destination address (DA) and source address (SA) in the MAC frame header and the SSID are modified according to the new NAV and round-robin scheduling information. Figure 4.4 illustrates the flow of the process in the cross-layer concept, which consists of three layers: the TDMA source code, the wireless driver and the hardware abstraction layer (HAL). The cross-layer interaction takes place between the wireless driver and the source code; the HAL is different for each hardware platform. The procedure of this approach, in pseudocode, is as follows:

Core Module:
  Repoint the WiFi_MAC_SAP to TDMA_MAC
  Point the MAC-TDMA_SAP to IP

TDMA_module() {
  // modify the NAV vector so the network appears (virtually) busy
  if NAV() != 0 {
    // copy the NAV value aside so it can be restored for new-AP network entry
    copy CSMA/CA NAV() to CSMA/CA NAV_old()
    copy TDMA NAV_new() to CSMA/CA NAV()
  }
  if NAV() == 0 {
    // call NAV_update()
    TDMA NAV_update()
    set back_off_counter()
    send NAV_new() to scheduler()
  }
}

scheduler() {
  // round-robin queue scheme
  round_robin()
}

time_slot_builder() {
  random_access() {
    // check whether there are any time-slot requests
    if time_slot_request() {
      time_slot()++
    } else {
      traffic()
    }
  }
}

// add the new TDMA header
// send the broadcast control channel (BCCH)
bcch() { timestamp(); ra_interval; SSID; BS-node capability }

// the RA uses the same etiquette as the contention period (CP) at the MAC level
fcch() {
  slots_time_builder()
  set frame_format() {
    slot_time_interval
  }
}

4.3.1 TDMA Protocol Flowchart in the Vehicular Environment The RSU periodically sends the beacon frame containing the free slots available in the TDMA frame. The OBU scans for RSU beacons. If more than one RSU responds, their RSSIs (received signal strength indicators) are compared and the best one is selected; the remaining RSUs are ranked as first, second, etc. candidates according to their RSSI. The OBU uses the beacon to synchronize its frame with the RSU, after which the OBU sends its data in the free slots of the coming uplink frame. A check is then performed on the RSSI of the current RSU. Figure 4.4. Implementation of TDMA in the 802.11p protocol stack. Figure 4.5. TDMA protocol flowchart (scan for RSU beacons; if more than one beacon is received, compare the RSSIs and select the best one above the threshold; synchronize clocks with the RSU; send data in the free slots of the UL sub-frame). The TDMA frame structure is shown in Figure 4.3. The TDMA frame encapsulates the 802.11 frames in its payload section. The frame is repeated periodically every 20 ms (the length of the frame). Each frame contains the beacon field, i.e. the broadcast control channel (BCCH), which comprises the timestamp, SSID and BS-node capabilities. The frame also contains the frame control channel, which carries information on the structure and format of the ongoing frame, i.e. the slot scheduler, which contains the exact position of all slots, the Tx/Rx times and the guard times. The GACH and the RACH are used as random-access channels when an OBU needs to join the WBSS. The RACH is the channel the OBUs use for association requests; the GACH is the grant access channel, which lists all the OBUs accepted for transmission in the next frame. The TDMA uses the transmission opportunity (TXOP) mechanism, originally provided by IEEE 802.11e, to calculate the DL and UL time-slot durations.
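The RSU-selection step of the flowchart can be sketched as a small ranking function: among the beacons heard, keep those above an RSSI threshold and order them best-first, so the second-best becomes the fallback candidate. The dBm values and threshold used here are illustrative, not from the thesis.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// One heard RSU beacon, identified by the RSU id and its received signal
// strength. Values in dBm are illustrative.
struct Rsu {
    int id;
    double rssi_dbm;
};

// Return the candidate RSUs sorted best-first, dropping any whose RSSI
// does not exceed the threshold (these cannot be selected at all).
std::vector<Rsu> rank_rsus(std::vector<Rsu> heard, double threshold_dbm) {
    std::vector<Rsu> ok;
    for (const Rsu& r : heard)
        if (r.rssi_dbm > threshold_dbm) ok.push_back(r);
    std::sort(ok.begin(), ok.end(),
              [](const Rsu& a, const Rsu& b) { return a.rssi_dbm > b.rssi_dbm; });
    return ok;  // ok[0] is selected; ok[1], ok[2], ... are fallback candidates
}
```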
The TXOP is a predefined start time and a maximum duration for which a station may access the medium. The RSU sets its own NAV to prevent its own transmission during a TXOP that has been granted to an OBU (Figure 4.6). Rather than categorizing the data traffic as voice, data and video as in 802.11e, the traffic priority categories are based on the OBUs' channel quality. The RSU gives high priority to vehicles with high speed so that they can send more frames before leaving the WBSS, and a vehicle experiencing strong channel fading gets a larger number of slots. Of course, this mechanism introduces a performance anomaly, but any of the solutions available in the literature for the performance anomaly can be applied (Tavanti, 2007) (IEEE P802.11p/D3.0, 2007). Figure 4.6. Channel-fading parameters fed back (DCD feedback) for vehicle transmission priority and TXOP setting, across the TDMA, DCF/EDCA and PMD/PLCP layers. 4.4. IMPLEMENTATION OF THE TDMA PROTOCOL IN THE SIMULATION ENVIRONMENT The previous section theoretically described the main characteristics of the protocol we want to design. This section explains how we implemented the ideas of this thesis by modifying C++ code in the network simulator. Although the protocol we want to design is basically a MAC protocol, we must keep in mind that the changes are not confined to the MAC layer: we also have to deal with the physical and application layers. On the provider side, the MAC layer is the one responsible for handling the various types of packets (from the service and control channels). In fact, the MAC layer on the provider side is the one that carries out the TDMA multiplexing between the two channels (and also between the various services inside the service channel). The application, in this case, produces the packets presented on each channel.
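The priority rule described above (before Section 4.4) — more slots for fast vehicles and for vehicles with strong fading — could be expressed as a simple slot-count function. The additive weighting below is one possible policy sketched for illustration, not the formula used in the thesis.

```cpp
#include <cassert>

// Sketch of a slot-allocation rule: a vehicle gets extra slots if it is
// fast (it will leave the WBSS soon) or if its channel fading is high
// (it needs more transmission opportunities). Thresholds are assumptions.
int slots_for_obu(double speed_mps, double fading_db,
                  double speed_ref, double fading_ref, int base_slots) {
    int extra = 0;
    if (speed_mps > speed_ref) ++extra;   // leaving the WBSS soon
    if (fading_db > fading_ref) ++extra;  // poor channel
    return base_slots + extra;
}
```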
From the client's point of view, the MAC and application layers are simple. The MAC layer is basically responsible for passing up to the application the packets the client wants to receive (packets that belong to the control channel or to a desired service) and discarding the packets the client did not request (packets from a broadcast or unicast service the client is not interested in). The application layer produces the request packets that clients send to the provider when they are interested in a unicast service. Although we described the characteristics of the designed MAC protocol in detail earlier, that description corresponds to the last version of our protocol. To reach this implementation we programmed and tested simpler versions first; the changes and improvements made between versions relate to the definition of the various services offered in the service channel: The first version only considered unidirectional communication between provider and client, because we only defined broadcast services in the service channel. A second version defined unicast services and hence introduced bidirectional communication between provider and client. This new version was more complex than the previous one, so we decided to split the application we had until then (called pbc3) into two sides: the application on the provider side (pbc3) and the application on the client side (pbc3Sink). This idea of defining two sides of an application or protocol layer (to simplify its implementation) is already used in other applications or protocol layers included in the simulator, such as TCP. The third and last version implemented the algorithm that handles the access of more than one client to the same unicast service, which was not considered in version two.
While programming our MAC protocol, several problems arose, not only on our side but also from the limitations and restrictions of the network simulator. Although the next section (4.5) describes them all, it is worth mentioning here one principal limitation, since it deeply influenced the implementation of our protocol. The limitation appears as soon as several parallel data flows exist in the same node (the node can be a client or a provider). In our case we decided to have only one provider, and this provider produces the data of both services. This means the provider would need several applications running in parallel and several queues in the MAC layer (see Figure 3.2), each associated with a different data flow; that is why we are interested in parallelizing. The problem is that, as far as we know, installing parallel flows in the same node is not an easy task in the simulator. The most common workaround is to use as many nodes as parallel flows, an idea used when a protocol stack is defined on two or more planes (like the data and management planes shown in Figure 2.3), as explained in (GMPLS, 2006), or when a multi-interface node is needed, as in (NS2 Notebook). The question is: if we cannot have more than one application running in parallel in the same node, what is a possible solution? The answer, as also emphasized in Section 4.5, is to have only one application running in the provider, which produces the various types of packets according to the execution time. This approach also solves the problem of having more than one queue (in parallel) in the MAC layer: we do not need several queues to store the different packets, because the packets arrive at the MAC layer already in the order in which they must be sent. This solution simplifies the definition of the MAC layer but makes the definition of the application layer more complex.
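The single-application workaround can be sketched as one sender that decides, from the position of the current simulation time inside the frame, whether the next packet belongs to the control channel or the service channel. The 2 ms control-slot duration used here is an assumed value for illustration.

```cpp
#include <cassert>
#include <cstdint>

// Sketch of the "one application, packet type chosen by execution time"
// workaround: the position of the current time inside the TDMA frame
// determines which channel the next generated packet belongs to.
enum class PacketType { Control, Service };

PacketType packet_type_at(uint64_t now_us, uint64_t frame_us,
                          uint64_t control_us) {
    uint64_t offset = now_us % frame_us;  // position inside the current frame
    return offset < control_us ? PacketType::Control : PacketType::Service;
}
```

Because packets are generated in slot order, they arrive at the MAC layer already in the order in which they must be sent, which is exactly what removes the need for parallel queues.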
Although this solution may seem rudimentary, the difference between the theoretical design and the implementation is not large, particularly considering that what we want is to examine the protocol once it is implemented. Having explained this problem, we can now specify how the protocol was built. We start by explaining how the application is defined on both sides, client and provider. On both sides the application has two principal functions: one responsible for creating and sending packets to the lower layers, and one responsible for receiving packets from the lower layers. The application on the provider side is called pbc3 and has two principal functions: one to send and one to receive frames. To send a frame we basically create a packet (by defining its header) and send it. The provider sends various types of packets according to the execution time. Basically there are two types: management packets in the control-channel time slot and data packets in the service-channel time slot. These packets have different headers. For data packets, the header fields are as in Figure 4.7: Type | Service_id | Time_slot | seqNum | lastPacket | node_id | Send_time | Payload. Figure 4.7: Fields of the application header for data frames. Service_id: used by the provider to indicate the service whose payload is included in the frame. Time_slot: shows the sub-time slot in which the service is offered. SeqNum: sequence number of the packet sent. It is currently only used in data packets belonging to unicast services; the provider uses it when more than one client wants to receive the same private information. LastPacket: related to seqNum; it indicates that the packet sent is the last one. For management frames, the header is defined by the fields shown in Figure 4.8.
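The data-frame application header of Figure 4.7 can be written as a struct. The field names come from the text; the field widths are assumptions made for illustration.

```cpp
#include <cassert>
#include <cstdint>

// The data-frame application header from Figure 4.7 as a struct.
// Field widths are assumed; field names follow the text.
struct DataHeader {
    uint8_t  type;        // data vs. management frame
    uint16_t service_id;  // service whose payload follows
    uint8_t  time_slot;   // sub-time slot where the service is offered
    uint32_t seq_num;     // sequence number (unicast services only)
    bool     last_packet; // set on the final packet of a transfer
    uint16_t node_id;
    uint64_t send_time_us;
};

// A receiver can combine seqNum and lastPacket to detect that a unicast
// transfer finished at the expected sequence number.
bool transfer_complete(const DataHeader& h, uint32_t expected_seq) {
    return h.last_packet && h.seq_num == expected_seq;
}
```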
Type | Services_information | Num_services | node_id | Send_time | Payload. Figure 4.8. Fields of the application header for management frames. services_information: only used in management frames. It is a vector containing the basic information about the services offered by the provider. This basic information is defined by three fields: the first is the identifier of the service, the second is the sub-time-slot identifier, and the third is the type of the service (as stated before, the type indicates whether the service is broadcast or unicast). These three fields must be defined for each service available at the provider, as Figure 4.9 shows: [Service 1 identifier | time slot where Service 1 is offered | type of Service 1] [Service 2 identifier | time slot where Service 2 is offered | type of Service 2]. Figure 4.9: Example of the services_information buffer when two services are offered. Although in our implementation the identifier of the service and the identifier of the time slot are the same (meaning the service with identifier one is offered in sub-time slot one), we decided to define two variables because they could have different values in future versions of the protocol. Num_services: indicates the total number of services that will be offered by the provider during the service-channel time slot. Having explained how our application works on both sides (provider and client), we must now explain the main changes made in the MAC layer. The NS2.33 release already included an implementation of the IEEE 802.11a protocol. We did not want to use this code because it is entirely oriented towards CSMA/CA with the virtual carrier sense mechanism, which we are not interested in. There was also a simple TDMA implementation included, which we decided to adapt to our requirements.
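The services_information vector of Figure 4.9 can be sketched as follows. The three fields per service come from the text; the numeric encoding of the service type (0 = broadcast, 1 = unicast) is an assumed convention.

```cpp
#include <cassert>
#include <vector>

// One entry of the services_information vector from Figure 4.9.
// Type encoding is an assumption: 0 = broadcast, 1 = unicast.
struct ServiceInfo {
    int service_id;
    int time_slot;  // sub-time slot where the service is offered
    int type;       // 0 = broadcast, 1 = unicast
};

// Build the advertisement vector; in the current implementation the
// service identifier and the time-slot identifier are the same.
std::vector<ServiceInfo> advertise(int num_services) {
    std::vector<ServiceInfo> v;
    for (int i = 1; i <= num_services; ++i)
        v.push_back({i, i, 0});  // id == slot, broadcast by default
    return v;
}
```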
We basically had to change the definition of the TDMA frame and set up both the data and management MAC headers. In contrast to the application layer, there are no variables defined to configure the MAC layer through the Tcl script. Turning to the physical layer, our version of NS2 already provided two physical layers for wireless communications: the first called WirelessPhy and the second called WirelessPhyExt. We are interested in the latter basically because it introduces a concept important to us: it supports multiple modulation schemes. WirelessPhyExt can operate with BPSK, QPSK, QAM16 and QAM64, as described in (Qi Chen, 2008). The modulation determines important characteristics such as the data rate and the minimum receiver sensitivity (see Figure 3.3), and consequently the transmission time of the data and the SINR necessary to receive and decode it. The only problem is that this new version of the wireless channel must be used together with an extension of the MAC layer called Mac802_11Ext. We were not interested in using it, for the same reason we were not interested in the Mac802_11 version, so we decided to introduce the multiple modulation schemes into the WirelessPhy layer instead. Another important point when working with the low layers of the WAVE protocol stack is how the channel is modelled in NS2. Four types of channel propagation model are defined and included in NS2.33: the free-space model, the two-ray ground reflection model, the shadowing model and the Nakagami model. The first three are well described in (The NS Manual, 2008).
4.5 SIMULATION PROBLEMS AND IMPLEMENTATION IMPROVEMENTS We have explained why we are interested in studying TDMA technology for V2I communications and the process followed to define and implement our protocol. When creating a new protocol, it is sometimes not possible to realize the theoretical idea because of limitations of the simulator; it is also possible that our implementation could be improved in places. Although the protocol may sometimes seem complex, many improvements could be made to obtain better and more specific results. The aim of this section is to explain the main problems found while developing our protocol and to suggest some future improvements. Regarding the problems we encountered, we must clarify that most of them are not really problems (in the sense of bugs found when executing the protocol) but limitations of the simulator that did not allow us to define the protocol as we wanted. There are three main limitations we want to point out: The first was already mentioned in the previous section and relates to parallel data flows in a node. We were interested not only in having several applications running in parallel in the same node (as explained in Section 4.4) but also in defining the two protocol planes, the data plane and the management plane, in the same node (see Figure 2.3), although this last idea was dropped because it required too much work. In Section 4.4 we adopted an easy and fast solution that did not affect the results obtained. But there are other solutions. The simplest consists of defining as many nodes, in the Tcl code, as data flows we need, and linking these nodes through a router. To put it simply: we would have one node per data queue (see Figure 3.3) and one router handling the information from each node.
This solution is based on the current implementation of the DiffServ queues in NS (Definition of physical queues), where virtual and physical queues are used (Implementing multiqueue). In our case we would need at least two nodes: one for the control-channel data and one for the service-channel data, assuming only one service is offered. Note that this solution involves changes in the Tcl code but leads to a simple C++ implementation. Another solution, which could be considered an improvement, is to have only one application running on the provider that generates the different types of packets, but instead of doing so as a function of the execution time, it could generate them randomly and leave the job of ordering them to the link layer. In this case we would need an algorithm in charge of finding the desired packets in the single queue that exists in the link layer and sending them in the correct order to the MAC layer. There is a third solution, which allows an implementation closer to the one specified in the standards ((IEEE 802.11, 2007) and (Implementing multiqueue)). The idea consists of adapting the queue definition used in the 802.11e implementation included in the simulator. As we can see in (Design and verification, 2003), this implementation requires changes in the definition of the queue class, allowing multiple queues by creating them in an array (Evaluation of IEEE 802.11e). The source code of this new type of queue can be found in (Evaluation of IEEE 802.11e). The second limitation relates to the synchronization of the nodes, which can be OBUs or RSUs. NS is an event-driven simulator controlled by timers. As pointed out in the IEEE 1609.4 standard and in (Yunpeng, 2007), all nodes must be synchronized before communicating.
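The 802.11e-style queue array mentioned above can be sketched as a class holding several queues drained in a fixed priority order. The two-queue setup (control channel first, service channel second) and the integer "packets" are assumptions made to keep the sketch small.

```cpp
#include <cassert>
#include <queue>
#include <vector>

// Sketch of the 802.11e-style multi-queue: one physical queue per data
// flow, drained in priority order (lower index = higher priority).
class MultiQueue {
public:
    explicit MultiQueue(int n) : queues_(n) {}

    void enqueue(int flow, int pkt) { queues_[flow].push(pkt); }

    // Serve the highest-priority non-empty queue first.
    int dequeue() {
        for (std::queue<int>& q : queues_) {
            if (!q.empty()) {
                int p = q.front();
                q.pop();
                return p;
            }
        }
        return -1;  // all queues empty
    }

private:
    std::vector<std::queue<int>> queues_;
};
```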
Synchronization is especially important when using TDMA, and it is a process that must be carried out whenever an OBU enters the communication area of a new RSU in a centralized system. In the NS tool, however, all the nodes implemented (in the Tcl code) share the same time base, which means they do not need any synchronization because they are already synchronized. If we want to model the synchronization process, we should first desynchronize the nodes by manipulating their timers. In our case we consider the RSU time base to be the one the other nodes must synchronize to; each OBU would have to follow a synchronization process before receiving data frames from the RSU. The idea could be the following: the first time an OBU receives frames from a new RSU, it reads the RSU's timestamp and, after adding the delay produced by the propagation of the frame, adjusts its own timers. Calculating the delay or time difference between the RSU and the OBU is not complicated; the only unclear point is how to set up different time bases in the nodes. The third and last limitation relates to the anti-collision mechanism used in the MAC layer, mainly based on the CSMA/CA algorithm. We detected the problem when executing our code: we found collisions between request frames when a considerable number of OBUs were interested in receiving information about the same unicast service. Those collisions should not take place if each node is able to sense the medium and see whether it is busy before sending any kind of frame. Why, then, are these collisions produced?
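The synchronization idea described above — beacon timestamp plus propagation delay, compared against the OBU's local receive time — can be sketched as a clock-offset calculation. The microsecond units and the radio-propagation assumption are illustrative.

```cpp
#include <cassert>
#include <cstdint>

// Sketch of the OBU synchronization step: the offset to apply to the
// OBU's timers is the RSU beacon timestamp plus the propagation delay,
// minus the OBU's local receive time. Propagation delay is computed as
// distance over the speed of light (radio propagation assumed).
int64_t clock_offset_us(int64_t rsu_timestamp_us, int64_t obu_rx_time_us,
                        double distance_m) {
    const double c = 3.0e8;  // propagation speed in m/s
    int64_t prop_us = static_cast<int64_t>(distance_m * 1e6 / c);
    return (rsu_timestamp_us + prop_us) - obu_rx_time_us;
}
```

At 300 m the propagation delay is 1 µs, so an OBU whose clock read 990 µs when the RSU stamped 1000 µs must advance its timers by 11 µs.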
As we explained earlier, guard intervals must be inserted at the end of each time slot to avoid collisions between the transmissions of the different devices (OBUs and the RSU in our case). In this case, however, the collisions occur because different OBUs send request frames very close together in time and therefore cannot detect whether the medium is idle; a collision results and is detected by the RSU. When working with the NS tool we cannot do everything we want, not only because of the restrictions and limitations of the tool (explained above) but also because of time: we could not implement all the ideas that came to mind and had to simplify and focus our work. Because of this lack of time there are many points in our protocol that could be improved; some of them are explained in the following paragraphs. In the implementation of the control channel, the most important improvement would be to introduce critical frames and implement the process each node must follow when receiving them. Introducing such frames would be very interesting, because we could see how to handle both types of information (critical and non-critical) and would make better use of the control channel than our current implementation does. For the service channel, the ideas for improvement are summarized in the following points: In the current implementation there is no prioritization between nodes: when two or more nodes want to receive the same (unicast) information, the first to ask for it is the first to receive the data. One possibility is to define the priority as a function of the position of each OBU with respect to the RSU. It seems sensible to give higher priority to the nodes closer to the RSU, because of their smaller latency (the time needed to consume a service).
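The suggested distance-based prioritization can be sketched as a selection function: among the OBUs requesting the same unicast service, serve the closest one first. The distances and the tie-handling are illustrative assumptions.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// One pending request for a unicast service, with the OBU's current
// distance to the RSU. Distances in metres are illustrative.
struct Request {
    int obu_id;
    double distance_m;
};

// Serve the closest OBU first (smallest latency); -1 if no requests.
int next_to_serve(const std::vector<Request>& reqs) {
    auto it = std::min_element(
        reqs.begin(), reqs.end(),
        [](const Request& a, const Request& b) {
            return a.distance_m < b.distance_m;
        });
    return it == reqs.end() ? -1 : it->obu_id;
}
```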
Basically the latency is smaller because the propagation time (one of the terms