SPAWARSYSCEN - SAN DIEGO
e-mail address: nms@spawar.navy.mil
Technical Representatives: {nik,rafael}@spawar.navy.mil
Maureen Battern, Contracting Officer, SPAWARSYSCEN SAN DIEGO CONTRACTS D212
PHONE 619-553-4489 FAX 619-553-7822
battern@spawar.navy.mil

University of California, San Diego (UCSD)
9500 Gilman Drive, La Jolla, CA 92093-0505
Dr. Kimberly Claffy
PHONE 858-534-8333 FAX 858-822-0861
kc@caida.org
Contract/Financial Contact: Pamela J. Alexander
PHONE 858-534-0240 FAX 858-534-0280
pjalexander@ucsd.edu
Quarterly Status Report #Qtr1
Macroscopic Internet Data Collection and Analysis in Support of the NMS Community
1.0 Purpose of Report
This status report is the quarterly cooperative agreement report that summarizes the effort expended by UCSD's Cooperative Association for Internet Data Analysis (CAIDA) program in support of SPAWARSYSCEN-SAN DIEGO and DARPA on Agreement N66001-01-1-8909 during Jul - Sep 2001.
2.0 Project Members
UCSD hours: | |
---|---|
CAIDA Staff: | 754 |
Total Hours: | 754 |
3.0 Project Description
UCSD/CAIDA is focusing on advancing the capacity to monitor, depict, and predict traffic behavior on current and advanced networks, through developing and deploying tools to better engineer and operate networks and to identify traffic anomalies in real time. CAIDA will concentrate efforts in the development of tools to automate the discovery and visualization of Internet topology and peering relationships, monitor and analyze Internet traffic behavior on high speed links, detect and control resource use (security), and provide for storage and analysis of data collected in aforementioned efforts.
4.0 Performance Against Plan
Status | Task 1 Year 1 Milestones | Notes |
---|---|---|
Planned | Add 5 additional skitter source sites | Placing skitter monitors at DNS root server sites is a priority. |
Planned | Add 5 workload monitor sites | Both CoralReef and NeTraMet passive monitors will be added. |
Complete | Develop comprehensive website(s) for public availability of data | |
Status | Task 2 Year 1 Milestones | Notes |
---|---|---|
Begun | Establish archive and interactive database for community access to skitter, mantra, routing, and CoralReef data. | |
Ongoing | Solicit community feedback regarding needed data types, formats, and dataset sizes. | Discussions occurred while preparing for the SPAWAR RLBTS demo. |
Ongoing | Work with the NMS community to design common experiments. | Integrated CoralReef into the RLBTS test lab demo. |
5.0 Major Accomplishments to Date
Task 1. Monitoring Task
A. Topology Measurement
Approach
skitter is a CAIDA tool that measures both the forward path and round trip time (RTT) to a set of destination hosts by sending probe packets through the network. It does not require any configuration or cooperation from the remote sites on its target list. In order to reveal global IP topology, the skitter project:
- Collects path and RTT data
- Acquires infrastructure-wide global connectivity information
- Analyzes the visibility and frequency of IP routing changes
- Visualizes network-wide IP connectivity
An essential design goal of skitter is to execute its pervasive measurement while placing minimal load on the infrastructure and upon final destination hosts. To achieve this goal, skitter packets are small (52 bytes in length), and we restrict the frequency of probing to 1 packet every 2 minutes per destination and 300 packets per second to all destinations. To improve the accuracy of its round trip time calculations, CAIDA added a kernel module to the FreeBSD operating system platform used by its skitter monitors. Kernel timestamping does not solve the synchronization issue required for one-way measurements, but reduces variance caused by multitasking processing when taking round trip measurements. This feature helps to capture performance variations across the infrastructure more effectively. By comparing data from various sources, we can identify points of congestion and performance degradation or areas for potential improvements in the infrastructure.
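The probing parameters above imply simple bounds on monitor coverage and injected load. The back-of-the-envelope sketch below is our own arithmetic derived from those stated parameters, not skitter code:

```python
# Derived bounds from the skitter probing parameters stated above
# (52-byte packets, 1 probe per destination per 2 minutes, 300 pps cap).
PACKET_SIZE_BYTES = 52      # probe packet size
PER_DEST_INTERVAL_S = 120   # one packet every 2 minutes per destination
MAX_RATE_PPS = 300          # aggregate cap across all destinations

# The aggregate cap bounds how many destinations a single monitor can
# cover while still probing each one every 2 minutes:
max_destinations = MAX_RATE_PPS * PER_DEST_INTERVAL_S    # 36,000

# Worst-case probe traffic the monitor injects into the network:
max_bandwidth_bps = MAX_RATE_PPS * PACKET_SIZE_BYTES * 8  # 124,800 b/s

print(max_destinations)   # 36000 destinations per monitor
print(max_bandwidth_bps)  # about 125 kbit/s of probe traffic
```

These bounds are consistent with destination lists of the size described below (more than 58,000 IPs spread across multiple monitors).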
skitter Monitor Status as of 30-Sep-01 (19 monitors active):
Status | skitter monitor name/location (org) |
---|---|
Active | a-root.skitter.caida.org Herndon, VA, US (Verisign) |
Active | d-root.skitter.caida.org College Park, MD, US (Univ. of Maryland) |
Active | e-root.skitter.caida.org Moffett Field, CA, US (NASA) |
Active | f-root.skitter.caida.org Palo Alto, CA, US (VIX) |
Inactive | g-root.skitter.caida.org Vienna, VA, US (NIC.mil) |
Inactive | h-root.skitter.caida.org Aberdeen, MD, US (US Army Research Lab) |
Active | k-peer.skitter.caida.org Amsterdam, North Holland, NL (RIPE) |
Active | k-root.skitter.caida.org London, UK (RIPE) |
Broken | l-root.skitter.caida.org Marina del Rey, CA, US (ISI) |
Active | m-root.skitter.caida.org Tokyo, Kanto, JP (WIDE) |
Active | sjc.skitter.caida.org San Jose, CA, US (MFN) |
Active | yto.skitter.caida.org Ottawa, CA (CANet) |
Active | lhr.skitter.caida.org London, UK (MFN) |
Active | skitter.uoregon.edu Eugene, OR, US (Univ. of Oregon) |
Active | waikato.skitter.caida.org Hamilton, NZ (Univ. of Waikato) |
Active | champagne.caida.org Urbana, IL, US (VBNS) |
Active | apan-jp.skitter.caida.org Tokyo, Kanto, JP (APAN) |
Active | iad.skitter.caida.org Washington, DC, US (MFN) |
Active | nrt.skitter.caida.org Tokyo, Kanto, JP (MFN) |
Active | riesling.caida.org San Diego, CA, US (CAIDA) |
Active | skitter.kaist.kr.apan.net Taejon, KR (APAN) |
Deactivated | ams-skt-01.carrier.net Amsterdam, North Holland (Carrier 1) |
Deactivated | fra-skt-01.carrier1.net Frankfurt, DE (Carrier 1) |
Deactivated | lon-skt-01.carrier1.net London, UK (Carrier 1) |
Deactivated | mw.skitter.caida.org San Jose, CA, US (Worldcom) |
Deactivated | chenin.caida.org Boulder, CO, US (NCAR) |
Deactivated | galahad.caida.org Ann Arbor, MI, US (CAIDA) |
Deactivated | nyc-engr-01.inet.ipwest.net New York, NY, US (Qwest) |
Deactivated | sin.skitter.caida.org Singapore, SG |
Deactivated | sjo-engr-01.inet.qwest.net San Jose, CA, US (Qwest) |
Analysis Results
Marina Fomenkov led CAIDA's RSSAC investigations, reported in "Macroscopic Internet Topology and Performance Measurements from the DNS root name servers", a paper to be presented at the USENIX LISA 2001 conference. The methodology employs a common skitter destination list for all skitter monitors co-located with root name servers, containing more than 58,000 IP destinations covering 8406 origin Autonomous Systems (ASes) and 184 countries. In addition to providing representative address prefix coverage, use of this common "DNS clients" destination list serves as a yardstick against which we can make performance comparisons. If a set of destinations shows high latency from all root servers and clusters either geographically or topologically, without systematic regional bandwidth problems or other political constraints, this might suggest a region meriting a new root name server. However, the collected data cannot be used to decide how well a particular root server responds to its own specific clients, due to an internal BIND load-balancing feature. Nevertheless, by knowing which destinations in this list are frequent clients of which particular root server, local subsets of the DNS clients list can still be used to study individual server-specific issues.
The first set of traces was collected from December 1 through December 30, 2000; the second was gathered between March 6 and April 4, 2001. In December, we had not yet started M-root monitoring. In March, the L-root monitor experienced local connectivity problems and was temporarily disconnected. In March, each monitor probed destinations in the DNS Clients list between 7 and 13 times per day, a rate 15-60% higher than in December, when an older, slower version of skitter was in use.
Two metrics of connectivity, hop count and round trip time, were calculated from each root name server to the hosts in the target set. The IP hop count distribution for each root server monitor can indicate whether it sits near the edge of its local network, near a major exchange point, or both (see the peak positions for the A, E, F, and L root server monitors in Figure 1), or is further away from its destinations (see the peak positions for the K and M root servers in Figure 1).
Figure 1. IP path length distributions for DNS root server monitors.
Clusters of hosts having particularly large latencies from all root name servers suggest a potential deficiency in the current Internet infrastructure. A destination is classified as having high latency during a given day if, on that day, it had RTTs in the 90th percentile in at least half the probe cycles on all root server monitors. Results are then aggregated over a month to filter out transient problems. In Figure 2, the left side maximum is due to random variations in connectivity while the right-side maximum reflects destinations that consistently have high latency on every (or almost every) day during the 30-day collection period.
Figure 2. The persistence of high latency destinations.
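The classification rule described above can be sketched as follows. This is our own reconstruction of the two-stage rule from the text (per-day flagging, then monthly aggregation), not CAIDA's actual analysis code, and all names are illustrative:

```python
def is_high_latency_day(cycles_by_monitor, thresholds):
    """Flag one destination on one day.

    cycles_by_monitor: {monitor: [RTT per probe cycle]} for the destination.
    thresholds: {monitor: that day's 90th-percentile RTT across all
    destinations}. The destination is high-latency if, on EVERY monitor,
    at least half of its probe cycles had an RTT at or above threshold.
    """
    for monitor, rtts in cycles_by_monitor.items():
        slow = sum(1 for r in rtts if r >= thresholds[monitor])
        if slow * 2 < len(rtts):   # fewer than half the cycles were slow
            return False
    return True

def persistent_high_latency(daily_flags, min_days):
    """Aggregate daily flags over the month to filter transient problems:
    keep only destinations flagged on at least min_days days."""
    return sum(daily_flags) >= min_days

# A destination slow from both monitors on most cycles is flagged:
print(is_high_latency_day({"a-root": [500, 600, 10],
                           "k-root": [700, 800, 900]},
                          {"a-root": 400, "k-root": 650}))  # True
```

The right-side maximum in Figure 2 corresponds to destinations for which `persistent_high_latency` holds for nearly every day of the 30-day window.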
Figure 3 depicts high latency destinations by continent. It shows that IP addresses in Africa, Asia, and South America account for over 60% of high latency destinations, but less than 14% of the total DNS client list.
(Two panels: December 1 - December 30, 2000 and March 6 - April 4, 2001.)
Figure 3. High-latency destinations compared to entire target list by continent
CAIDA's skitter measurements can be used with local client lists as a baseline against which to analyze topology and performance characteristics of the network between a root name server and its clients. Other placement issues, such as distance to the edge of the local network, peering relationships, and choice of upstream transit providers, can be discerned from the graphs provided by the daily summaries generated automatically from each skitter monitor's data.
Other skitter analysis projects:
- https://www.caida.org/tools/measurement/skitter/sample_code/
- https://www.caida.org/tools/visualization/otter/otter_plots/all_iad.png
- skitter daily summaries archive: http://sk-summary.caida.org/cgi-bin/main.pl
- skitter data access for researchers: https://www.caida.org/data/skitter/skitter_data_use.xml
B. Workload Measurement
OC48 Hardware Interface:
Of the four working prototype Dag4.1 capture cards, one has been deployed for testing on an OC48 link at Metromedia Fiber Network (collaborating with Brett Watson).
Another Dag board was set up in SDSC on a link between a CISCO GSR 12000 and Juniper M20 router, running in CISCO HDLC mode. Although data rates on this link are not high, it was sufficient for the purposes of debugging the firmware of the card. The system has been used at SDSC for further development.
We are testing two of the boards at Sprint ATL in Burlingame. Two boards were set up there on a POS link from a CISCO router that could take input from an Agilent OC48 router tester. Measurements on the throughput of the Dag 4.1 showed that with a 64-bit 33 MHz interface it could sustain typical 100 byte packets at full line rate, and 40 byte packets (the worst case with respect to data reduction) at 80% of full line rate. The two boards were left at Sprint for further testing and development, but will be moved to an existing CAIDA monitor upon completion of testing.
In 2001 we continue to improve the firmware, especially in the areas of packet filtering, and to make performance enhancements. This design will meet the original performance requirements for an OC48 measurement board. While the team was constructing the prototype boards, they continued to use completed Dag 4.0 and 4.1p boards to test firmware and software. The main aim in software development will be to integrate the Dag4 with CoralReef.
Traces were successfully captured from the OC48 Dag4 cards.
The Dag4.1 has the following characteristics:
- OC48 SMF optical interface.
- ATM and POS traffic capture
- Conditioned clock with GPS time pulse input for cell/packet timestamping
- 1 Mbyte cell/packet FIFO
- Separate FPGA for cell/packet processing, with 2 Mbytes SSRAM
- 64-bit 66 MHz PCI interface, standard PCI board form factor
- StrongARM 233 MHz processor with 2 Mbytes SSRAM
- LINUX device driver and applications software.
Gigabit Ethernet Interface Hardware:
We worked with Lucent Technologies to test their beta Gigabit Ethernet and OC48 cards. Although we hoped to use these cards to develop passive network monitors, we ultimately abandoned that approach and returned to using DAG cards.
We are also collaborating with Narus to test their OC48 and Gigabit Ethernet capture hardware.
CoralReef Software Suite:
During August, we released version 3.4.7 of the CoralReef software package to CAIDA members and the public. CoralReef is a comprehensive software package from CAIDA for passive monitoring of ATM, POS, and other network interfaces and reading "crl" and pcap tracefiles. It includes FreeBSD drivers for Apptel POINT (OC12 and OC3 ATM) and FORE FATM (OC3 ATM) cards, support for WAND DAG (OC3 and OC12, POS and ATM) cards, programming APIs for C and perl, and software applications for capture, analysis, and reporting of ATM, IP, and TCP/UDP traffic.
For additional updates and fixes, see https://www.caida.org/tools/measurement/coralreef/doc/doc/CHANGELOG.
In order to facilitate increased understanding of network use (and misuse), we plan to expand our tools for analysis of passively collected data. Part of this is made possible by aggregating traffic data into small (but still informative) tables and archiving them. This will be achieved with an API for data aggregation (called simply Tables), which facilitates both automated storage of useful data and simple later access to that data by researchers in myriad ways. This API was originally written in Perl. While a C++ backend exists, it is currently limited in terms of flexibility and future growth. Rewriting this API using templates, algorithms, and containers from the C++ Standard Library should make it easier for programmers to extend and adapt it to their specific analysis needs.
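As a rough illustration of the Tables concept (keyed aggregation of traffic records into small summary tables that can later be queried flexibly), consider the sketch below. The real API is Perl with a C++ backend; this Python class and its method names are purely illustrative:

```python
from collections import defaultdict

class Table:
    """Toy keyed traffic-aggregation table, in the spirit of the
    CoralReef Tables API. Illustrative only; not the real interface."""
    def __init__(self):
        self._rows = defaultdict(lambda: {"pkts": 0, "bytes": 0})

    def add(self, key, pkts, nbytes):
        """Fold one traffic record into the row for `key`."""
        row = self._rows[key]
        row["pkts"] += pkts
        row["bytes"] += nbytes

    def top(self, n, by="bytes"):
        """Return the n heaviest rows, e.g. for a top-N report."""
        return sorted(self._rows.items(),
                      key=lambda kv: kv[1][by], reverse=True)[:n]

# Aggregate a stream of (port, pkts, bytes) records by port:
t = Table()
for port, pkts, nbytes in [(80, 10, 15000), (6699, 4, 52000), (80, 2, 3000)]:
    t.add(port, pkts, nbytes)
print(t.top(2))  # heaviest ports by bytes: 6699, then 80
```

The point of the design is that the aggregated table is small enough to archive routinely, while still supporting many later analyses (top-N reports, per-key time series, and so on).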
NeTraMet Software Development:
NeTraMet was tested on an OC48 link at Metromedia Fiber Network (MFN) in San Jose. NeTraMet successfully merged packets from two OC48 Dag4 cards. Further development of NeTraMet yielded improved processing speed. Several rulesets can now run concurrently, sustaining 5-second data rates of around 900 Mb/s (200 Kp/s) and coping with one-second peaks of 400 Kp/s.
Analysis of OC48 tracefile:
CoralReef was used in conjunction with DAG cards to characterize Internet traffic from an OC48 commercial backbone link. We present data sampled from the Metromedia Fiber Network in San Jose (http://www.mmfn.com/mfn/index.jsp). The trace was recorded on 08/05/2001; the total trace time was 75 minutes and the trace size was 32 Gigabytes.
Figure 4: Distribution of applications by bytes, top 25 applications.
Figure 4 presents data in bytes, stratified by applications, from the MFN OC48 link. The graph shows that while traffic is dominated by "second wave" network technologies (in particular WWW traffic), "third wave" applications like peer-to-peer file sharing (e.g. KaZaA, Gnutella, Napster) and gaming (e.g. Asherons, Quake, Starcraft) comprise a large portion of the remaining traffic.
For methodological reasons, our statistics reflect lower bounds on the traffic volume attributable to peer-to-peer applications. For instance, Napster does not use a fixed set of ports for file transfers, so we identified the three most commonly used TCP ports (6688, 6697, and 6699) and mapped all such traffic to the Napster category. Bulk transfer traffic may be either sent to or received from these ports, since Napster supports both active and passive mode transfers. However, it is possible that some traffic is sent on alternative ports and reported in the "Unclassified" category.
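The port-based classification just described can be sketched as follows. The Napster ports come from the text above; the rest of the category table is illustrative, and real CoralReef reports use a far fuller mapping:

```python
# Illustrative port-to-application table. Only the Napster ports
# (6688, 6697, 6699) are taken from the report; other entries are
# common well-known ports added here for the sake of the example.
PORT_TO_APP = {
    80: "WWW", 443: "WWW",
    6688: "Napster", 6697: "Napster", 6699: "Napster",
    6346: "Gnutella",
}

def classify(src_port, dst_port):
    # Napster bulk transfers may be sent to OR from these ports
    # (active vs. passive mode), so check both ends before giving up.
    return (PORT_TO_APP.get(dst_port)
            or PORT_TO_APP.get(src_port)
            or "Unclassified")

print(classify(51234, 6699))  # Napster (transfer sent to a Napster port)
print(classify(6688, 51234))  # Napster (transfer sent from a Napster port)
print(classify(51234, 7777))  # Unclassified
```

Any flow on a port outside the table falls into "Unclassified", which is why the per-application byte counts are lower bounds.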
Figure 5: Flow of traffic, in bytes, from source to destination country.
A large amount of traffic in this figure flows to Asian destinations.
Figure 5 shows the distribution of traffic, in bytes, from source to destination countries. A significant portion of the traffic destinations are Asian due to the ISP's routing policy for this particular backbone link. The data suggest that commercial backbones carry traffic to and from a wide range of geographic locations regardless of physical location. Understanding the effects of routing policies and the nature of their dynamics (how much and how often they change) could offer a significant advantage in deploying performance-sensitive collaborative applications.
Figure 6: Flow of traffic, in bytes, from source and destination countries within Asia.
Figure 6 shows that a significant amount of traffic traverses routes that may be unexpected by users or application developers. This figure shows traffic between sources and destinations within Asia (i.e., intra-Asia traffic) that is routed through San Jose, California (USA). A developer of collaborative applications might assume that traffic with both source and destination within the same country or region would have low latency, since it need not cross borders outside that country or region (e.g., continent). Figure 6 demonstrates that actual routing violates this assumption: many countries (e.g., Taiwan) have local traffic routed through North America. This is driven by the economic and regulatory realities of the underlying global telecommunication system.
C. Routing Measurement
Two complementary studies reveal that the current common research practice of using routing tables for connectivity analysis is somewhat suspect. CAIDA finds that routing tables capture only a very small fraction of actual connectivity. Instead, CAIDA offers the beginnings of a new calculus for routing and connectivity analysis. "Internet Topology: Connectivity of IP Graphs," by Andre Broido and k claffy, presented at the ACM SIGCOMM Internet Measurement Workshop in August, introduced a framework for analyzing local properties of Internet connectivity by comparing BGP and probed topology data. A second talk at this conference, "Complexity of Global Routing Policies," analyzed BGP connectivity and evaluated a number of new complexity measures for a union of core backbone BGP tables. Sensitive to engineering resource limitations of router memory and CPU cycles, the authors focused on techniques to estimate redundancy of the merged tables, in particular how many entries are essential for complete and correct routing. The notion of "policy atoms" is also introduced as part of this new calculus for routing table analysis.
Analysis Results
Resilience of graphs to removal of nodes has been the subject of a number of recent studies. In our study, we tested how resilient the giant component (the combinatorial backbone) of the IP-only graph is when we remove nodes with the largest outdegrees. These nodes exhibit the smallest average distance to the rest of the graph.
Figure 7: Topological resilience of IP giant component with respect to deactivated nodes.
Preliminary results, shown in Figure 7, suggest that the IP giant component size decays smoothly as nodes are removed in order of outdegree. The "g.c.diameter.hops" curve decays approximately linearly relative to the number of deactivated nodes. To our knowledge, this property of the IP topology graph does not match any previously published data.
Figure 8: Topological resilience of IP giant component with respect to width measures.
Figure 8 shows several plots comparing nodes removed by outdegree (on the x-axis) with different values plotted on the y-axis. For example, the two curves with a central peak suggest that diameter (see "g.c.diameter.hops") and average distance (see "ave.dist"), both expressed in hops, increase as nodes are deactivated. The peaks on these curves indicate the total number of nodes that must be removed before connectivity within the giant component breaks down. In this case the value is approximately 12,500 (of a total of 52,505), suggesting that the giant component can sustain removal of roughly 25% of its nodes before it finally breaks down. These results differ qualitatively from the behavior described for models of scale-free networks (Albert, Jeong, and Barabasi, "Error and attack tolerance of complex networks," Nature v405, 27 Jul 2000).
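The resilience experiment behind Figures 7 and 8 can be sketched as follows. This is our own reconstruction of the procedure described in the text (remove nodes in descending outdegree order and track the largest remaining connected component), not the analysis code used for the study:

```python
from collections import deque

def giant_component_size(adj, removed):
    """Size of the largest connected component of the graph `adj`
    (dict: node -> set of neighbors) after deleting `removed` nodes."""
    alive = set(adj) - removed
    best, seen = 0, set()
    for start in alive:
        if start in seen:
            continue
        size, queue = 0, deque([start])   # BFS over one component
        seen.add(start)
        while queue:
            node = queue.popleft()
            size += 1
            for nbr in adj[node]:
                if nbr in alive and nbr not in seen:
                    seen.add(nbr)
                    queue.append(nbr)
        best = max(best, size)
    return best

def resilience_curve(adj):
    """Giant component size after removing 0, 1, 2, ... nodes in
    descending degree order (the x-axis of Figures 7 and 8)."""
    removed = set()
    order = sorted(adj, key=lambda n: len(adj[n]), reverse=True)
    curve = [giant_component_size(adj, removed)]
    for node in order:
        removed.add(node)
        curve.append(giant_component_size(adj, removed))
    return curve

# Toy hub-and-spoke graph: removing the hub fragments it immediately,
# unlike the smooth decay observed for the IP giant component.
adj = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0, 5}, 5: {4}}
print(resilience_curve(adj))  # [6, 2, 1, 1, 1, 1, 0]
```

On the toy graph the curve collapses after one removal; the measured IP graph instead decays smoothly until roughly a quarter of its nodes are gone, which is the qualitative difference from the scale-free models cited above.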
We are currently analyzing results to confirm whether a Weibull approximation fits several different Internet topology object size distributions. The Weibull approximation generally appears to apply to local size measures (e.g., immediately adjacent connectivity), for parameters intrinsically controlled by an object and not dependent upon the global environment. Several open research questions remain as to whether this approximation has a general cause or many unrelated contributors.

AS Connectivity Graph
We generated a new AS connectivity graph in January 2001. See:
https://www.caida.org/research/topology/as_core_network/
We are experimenting with an animated version of the AS Internet Graph made from data collected over the course of a year. Additionally, several new AS Internet graph visualizations have been incorporated into the skitter daily summaries. Users are able to get a view of this visualization from any of the skitter monitors on any given day on which data has been collected. The skitter daily summaries also have a new user interface which makes it more convenient to search for the specific, desired datasets.
Task 2, Archiving and Storage Task
Approach for Archiving skitter Data
- CAIDA provides interactive access to skitter daily summaries on its public web site at: http://sk-summary.caida.org/cgi-bin/main.pl.
- CAIDA grants access to archived skitter data to researchers who agree to our Acceptable Use Policy. Summaries of their skitter-related research projects can be found at: https://www.caida.org/data/skitter/skitter_data_use.xml.
Analysis of skitter Data
Researcher John Byers of Boston University has commented:
"The Skitter datasets provided by CAIDA have proven an invaluable tool for us, by virtue of their breadth and the frequency with which data is collected. Our work is essentially focused on the quality with which one can map the Internet topology from a set of distributed vantage points. However, setting up such a set of vantage points and conducting the measurements is prohibitively time-consuming and expensive for virtually any academic research group; thus the CAIDA datasets (which are ideally suited to our study) have enabled us to conduct research which we otherwise would not have been able to pursue."
Approach for Archiving CoralReef Data
- CAIDA provides a demonstration of CoralReef data collection, analysis, and reporting at: https://www.caida.org/dynamic/analysis/workload/sdnap/. Results are updated every 5 minutes.
- CAIDA archives CoralReef data for special purpose studies as needed, but must limit data collection to available disk space.
Analysis of CoralReef Data
1. Code-Red Analysis
Beginning in July, CAIDA archived CoralReef traffic traces in order to study the propagation of the Code-Red worms. Significant press coverage was generated after the analysis was released. In fact, CNN used the animation on their front page for a time.
The first incarnation of the Code-Red worm (CRv1) began to infect hosts running unpatched versions of Microsoft's IIS webserver on July 12th, 2001. This first version used a static seed for its random number generator. Then, around 10:00 UTC on the morning of July 19th, 2001, a random-seed variant of the Code-Red worm (CRv2) appeared and spread. This second version shared almost all of its code with the first, but spread much more rapidly. Finally, on August 4th, a new worm began to infect machines by exploiting the same vulnerability in Microsoft's IIS webserver as the original Code-Red worm. Although the new worm shared almost no code with the two earlier versions, its source code contained the string "CodeRedII" and it was thus named CodeRedII.
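The static-seed versus random-seed distinction matters for propagation speed: with a fixed seed, every CRv1 instance regenerates the identical pseudo-random target sequence, so infected hosts repeatedly probe the same addresses rather than fanning out. The toy sketch below illustrates this; the worm actually used the Windows C runtime's rand(), and the generator constants and seeds here are generic stand-ins:

```python
def lcg_targets(seed, n, a=1103515245, c=12345, m=2**31):
    """Generate n pseudo-random 'target addresses' (as integers) from a
    linear congruential generator. Illustrative constants, not the
    worm's actual generator."""
    targets, state = [], seed
    for _ in range(n):
        state = (a * state + c) % m
        targets.append(state)
    return targets

crv1_host_a = lcg_targets(seed=0x1234, n=5)  # static seed: same list...
crv1_host_b = lcg_targets(seed=0x1234, n=5)  # ...on every infected host
crv2_host = lcg_targets(seed=0x9999, n=5)    # random seed: distinct list

print(crv1_host_a == crv1_host_b)  # True: massive probe overlap (CRv1)
print(crv1_host_a == crv2_host)    # False: independent coverage (CRv2)
```

Because each CRv2 instance scans a different address sequence, the population of infected hosts collectively covers the address space far faster, consistent with the much more rapid spread described above.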
CAIDA's analysis covers spread of the worm during the 24 hour period beginning July 19th at midnight UTC. The data used for this preliminary study were collected from two locations: a /8 network at UCSD and two /16 networks at Lawrence Berkeley Laboratory (LBL). Two types of data from the UCSD network are used to maximize coverage of the expansion of the worm. Between midnight and 16:30 UTC, a passive network monitor recorded headers of all packets destined for the /8 research network. After 16:30 UTC, a filter installed on a campus router to reduce congestion caused by the worm blocked all external traffic to this network. Because this filter was put into place upstream of the monitor, we were unable to capture IP packet headers after 16:30 UTC. However, a second UCSD data set consisting of sampled netflow output from the filtering router was available at the UCSD site throughout the 24 hour period. Vern Paxson provided probe information collected by Bro on the LBL networks between 10:00 UTC on July 19th and 7:00 on July 20th. Unless otherwise specified, we have merged these three sources into a single dataset to produce the following results.
A full analysis by David Moore of the spread of the Code-Red worm (CRv2) can be found at https://www.caida.org/research/security/code-red/coderedv2_analysis.xml. An animation depicting the geographic spread of the worm was created by Jeff Brown (UCSD CSE Department) and is available from the analysis page.
CAIDA's ongoing analysis of the Code-Red worms includes a detailed analysis of the spread of Code-Red version 2 on July 19, 2001, a follow-up survey of the patch rate of machines infected on July 19th, and dynamic graphs showing the prevalence of Code-Red version 2 and CodeRedII worldwide.
2. Top 25 Applications
CAIDA is refining a CoralReef based report to more efficiently track the top 25 applications seen in traffic on a University link: https://www.caida.org/dynamic/analysis/workload/sdnap/0_0_/ts_top_n_app_bytes.html.
6.0 Artifacts Developed During the Past Quarter
None

7.0 Issues
None

8.0 Near-term Plan
The following work is planned for 01-Oct-01 through 31-Dec-01:
General/Administrative Outreach and Reporting Plans
- Submit Quarterly Report to SPAWAR covering progress, status and management.
Task 1. Monitoring Task Plans
- A. Topology Measurement
- B. Workload Measurement
- The University of Waikato DAG development team (http://dag.cs.waikato.ac.nz/; Ian Graham, David Miller, and Joerg Micheel) will test the prototype cards deployed on OC48 links at Metromedia Fiber Network (MFN) in San Jose under operational networking conditions.
- Refinement of the CoralReef software suite (https://www.caida.org/tools/measurement/coralreef/) will continue, especially improving the CoralReef Report Generator tool (https://www.caida.org/tools/measurement/coralreef/components.xml#HTML) and optimizing interoperability with NeTraMet and Narus software.
- C. Routing Measurement
CAIDA will continue to collect and analyze data from the skitter project.
CAIDA will refine methodology and results from ongoing routing studies. Results will be presented at the ISMA Routing and Topology Analysis Workshop to be held at SDSC Dec 17-19. (This ISMA workshop is sponsored by NSF under grant number ANI-9996248.)
Task 2, Archiving and Storage Task Plans
- We will continue to collect and analyze data collected from skitter sources deployed in the field
- We will continue to make skitter topology and performance data available to researchers, via a Certificate Authority, for use in their research, and will monitor results. See: https://www.caida.org/data/skitter/skitter_data_use.xml
- We will continue briefings to the Internet community on the purpose and results of skitter active monitoring and will solicit their feedback.
- We will continue to re-design the structure and user interface of skitter daily summaries to improve quality of access to collected data. See: http://sk-summary.caida.org/cgi-bin/main.pl
- We will make additional improvements on the Walrus viewer. See: https://www.caida.org/tools/visualization/walrus/ We will add the ability to load a more complete file format, add more filtering and other interactive processing, and add rendering labels and other attributes for nodes and links.
9.0 Completed Travel
The following travel occurred during Year 1, 1-Jul-01 through 30-Sep-01:
- Nevil Brownlee - 8/5 IEPG Meeting, London, UK
- kc claffy and Andre Broido - 8/6,7 Mathematical Opportunities in Large-Scale Network Dynamics, Minneapolis, MN
- David Moore - 8/13 - 8/17 10th Usenix Security Symposium Washington, D.C. (David presented a paper on the spread of the Code-Red worm.)
- kc claffy and Andre Broido 8/23 - 8/24 SPIE Itcom Conference, Denver, CO
- Andre Broido and kc claffy 9/10 - 9/14 Multiresolution Analysis of Global Internet Measurements Workshop, Leiden University, Leiden, NL
Other related travel occurred but was not charged to this award.
10.0 Equipment Purchases and Description
None.
11.0 Significant Events
- CAIDA participated in a series of conference calls and meetings in support of a SPAWAR demo organized by Nikhil Dave. CAIDA's Ken Keys supported installation and configuration of a CoralReef passive traffic monitor in the San Diego RLBTS test lab. CAIDA's crl_flow (part of CoralReef) was integrated into a SPAWAR ship-to-ship and ship-to-shore networking demo.
- k claffy participated by phone in the London RSSAC meeting held in July. k was asked to present an invited talk to a National Academy of Sciences panel and was invited to sit on the "Committee on Internet Searching and the Domain Name System: Technical Alternatives and Policy Implications".
- k claffy discussed possible collaboration with Rolf Riedi of Rice University (who is separately funded by Sri Kumar). Rolf's chirp tool uses a semi-active probing mechanism. CAIDA's skitter active measurement methodology is similar. A joint proposal has been submitted to Sri Kumar.
- CAIDA organized and participated in the ACM SIGCOMM2001 Conference held at UCSD Aug 27 - 31.
12.0 Publications:
- The following paper was published:
- David Moore, Geoffrey M. Voelker, and Stefan Savage, "Inferring Internet Denial-of-Service Activity", USENIX Security Symposium, August 13-17, 2001, Washington, D.C. See: https://www.caida.org/publications/papers/2001/BackScatter/
- Nevil Brownlee presented his DNS measurement work at the IEPG Meeting in London on Aug 5.
- The following papers were presented at the SPIE Itcom Conference in Denver (21 Aug - 24 Aug), the ACM SIGCOMM Internet Measurement Workshop in San Diego (27 Aug - 31 Aug), and the Multiresolution Analysis of Global Internet Measurements workshop at Leiden University (10 Sep - 14 Sep):
- Andre Broido, Daniel Plummer and k claffy "Internet Topology: Connectivity of IP Graphs"
- Andre Broido, Daniel Plummer and k claffy "Complexity of Global Routing Policies"
- The following papers were accepted for presentation:
- N. Brownlee, k claffy, and E. Nemeth. "DNS Root/gTLD Performance Measurements" accepted at Usenix LISA 2001. NeTraMet passive traffic meters have provided an effective way to monitor the performance of the global nameservers as seen from the client side.
- K. Keys, D. Moore, R. Koga, E. Lagache, M. Tesch, and k claffy, "The Architecture of the CoralReef Internet Traffic Monitoring Software Suite" accepted at Usenix LISA 2001. The CoralReef passive monitoring and analysis tool suite provides convenient tools for a diverse audience, from network administrators to researchers.
- M. Fomenkov, kc claffy, B. Huffaker and D. Moore, "Macroscopic Internet Topology and Performance Measurements from the DNS root name servers" accepted at Usenix LISA 2001. skitter measurements using a specially constructed combined DNS clients list can identify clients that have high latency to each of the current root server locations being monitored.
- N. Brownlee, k claffy, and E. Nemeth, "DNS Measurements at a Root Server" accepted at Globecom 2001. We passively measured the performance of one root server, F.root-servers.net. Measurements show an astounding number of bogus queries: 60-85% of observed queries were repeated from the same host within the measurement interval. Over 14% of a root server's query load is due to queries that violate the DNS specification. Denial of service attacks using root servers are common and occurred throughout our measurement period (7-24 Jan 2001).
- kc claffy and Rob Beverly of MCI submitted a paper on multicast to the IEEE Journal on Selected Areas in Communications.
13.0 FINANCIAL INFORMATION:
Contract #: N66001-01-1-8909
Contract Period of Performance: 5 Jun 2001 to 5 Jun 2004
Ceiling Value: $2,924,958
Current Obligated Funds: $2,924,958
Reporting Period: 1 Jul 2001 to 30 Sep 2001
Actual Costs Incurred: $157,253
Current Period:
UCSD | Hours | Cost |
---|---|---|
Labor: | 754 | $92,137 |
ODC's: | | $3,519 |
IDC's: | | $49,741 |
TOTAL: | 754 | $145,398 |
Cumulative to date:
UCSD | Hours | Cost |
---|---|---|
Labor: | 926 | $99,954 |
ODC's: | | $3,528 |
IDC's: | | $53,771 |
TOTAL: | | $157,253 |