
These are unedited transcripts and may contain errors.




PLENARY SESSION CONTINUED AT 2 P.M., 4 MAY 2010:

CHAIR: We are ready for the third and last session of the Plenary slots. My name is Kurtis Lindqvist, I am from Netnod, and I'll be chairing this session. The first presentation is George Michaelson of APNIC.

GEORGE MICHAELSON: Hi, everyone. You are an intimidating audience, I have got to tell you. I have spoken here many times and every single time I get flop sweats so I probably have a large amount of dark marks on my shirt, but anyway, it's nice to see you all again. I am actually presenting a body of work which, in large part, is actually Geoff's activity. This is a responsibility that he carried as the Chief Scientist to analyse an emerging problem that came from the use of this particular historical net block. He was kind enough to let me join this activity, but I want to stress that the informative parts of this presentation, the substantive body of work that's actually guiding operational deployment of the network is essentially based on his work and I am getting to have the fun of presenting.

So, the usual chief scientist disclaimer applies.

What do we normally do when we get resources from the IANA? When we get assigned a /8, we try to find out what the problems are that people are going to face in the use of that network block. We test reachability, and because the RIPE NCC has a huge investment in information systems, processes to analyse the behaviour of network routing, and a monitoring system worldwide that is absolutely best of breed, rather than attempting to replicate this, we entered into a collaborative relationship with the RIPE NCC, and I would like to say thank you for that kind of substantive assistance, which comes from you, the community. I mean, Danny gets to code the stuff, but the bottom line is that because you are prepared to have a body that does this kind of activity, we get the benefit of it, so thank you very much; it's a real deliverable for us.

What we do is we typically take four networks from this range, announce them as small subnets and find out what's going on. We invite people to reach them, we look into the resource and find out what's going on. Once we understand that this is no longer treated as a bogon and people have lifted the filters, we get on the lists and say this network block is being released, please update your prefix lists. Once the work is done, we then put it into our service deployment framework and actually start making allocations and assignments. We had a feeling network 1 was going to be different. We didn't have any immediate sense of what the problems were going to be, but there was good reason to believe this was not going to be straightforward. Now, I don't know the exact wording. I don't think Leo is here, but there is a phrase that's been floating around saying something to the effect of "Nobody ever said the water in the bottom of the barrel was going to be very fresh or tasty." So, here we are scraping the last drops of dew out of a leaky wooden barrel, and we are starting to notice that there are some newts swimming around in there and a couple of frogs. Now, a favourite author of mine points out that, in olden times, people would say that because the frogs were alive in the water, it must be healthy. And they'd obviously never asked themselves the question: where do the frogs go to the toilet?

So we knew that things were going to be bad, but we had not really quite worked out what was going to happen.

The RIPE NCC gave us an amazingly strong signal that things were going to be radically different when they announced the four normal /24s at AMS-IX, and they obviously rapidly realised it was appropriate to withdraw this announcement.

If I point the laser here at this one, you will see that this is the classic flat-line signal of a port where you have exceeded all reasonable bounds of traffic through the facility, and it is not appropriate to continue, and they wisely withdrew that announcement. And they wrote up a really good report on RIPE Labs that encouraged a lot of discussion; they reviewed a lot of behaviours that they had seen in their networks. A good piece of work.

So we kind of thought, well, let's actually talk about this as bad traffic. You know, what's really happening here? Are these networks a magnet for all the bad behaviour in the world? What is bad? How bad is it? Is it really bad? Who knows? So we thought, let's find out how bad the whole /8 is.

We want to make this the baddest network in the world. So we knew we had to go out and find relationships that could assert a large prefix and receive a lot of traffic, way, way, way above our transit capacity. So we went out and said we are interested in having this experiment and we'd really like to work with people who have got the ability to sink a very large amount of traffic, and we need to do this now. The reason, as you might have appreciated as network operators, is that people actually want IPv4 addresses, so we can't exactly hold back this block for, like, a year because we want to find out how bad it is. We have a process obligation to get the resource in play; we needed it to work quickly.

These four entities, first cabs off the rank, Merit, AARNet, Google and YouTube, were able to provide us facilities within about two days of the request, for which we are grateful. They really pulled out all the stops to get this working for us. They worked very, very hard to provide facilities to us and did a lot of coding work on this. But it let us do week-long announcements, which meant we could explore the long-baseline behaviours, look at patterns, look at the weekend behaviours and actually get some depth of knowledge around what was going on in this network range.

We also had this idea that comes out of Geoff's work on the bad idea code. You might remember a presentation that's been floating around about not really doing TCP but doing just enough TCP to make people think you are there and then tell them to go away. There is a way you can use this to tickle extra traffic out of the network, if you imagine that you have got a route that's receiving something you didn't expect to receive. If you send them back just the right kind of packet, every now and then you get a bit more. That's useful because it potentially allows you to differentiate between the one-way flows, where this is just someone's DoS attack and they are never going to come back to you, and the people who are doing substantive work and have real packet flows; if you answer with the right sequential behaviour, they'll send you another bit of data. Maybe you get to see a little bit more about what they were trying to do. It was an interesting idea.
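The "just enough TCP" idea described above can be sketched as a tiny decision table: answer an inbound SYN with a SYN-ACK so the sender reveals its first data segment, then reset the flow. This is a hedged illustration in Python, not the actual responder code from the experiment; the flag names and the fixed initial sequence number are assumptions for the sketch.

```python
def respond(flags, seq, ack=0, isn=1000):
    """Decide how to answer one inbound TCP segment.

    Returns (reply_flags, reply_seq, reply_ack), or None to stay silent.
    """
    if flags == "SYN":
        # Complete just enough of the handshake to draw out the first payload.
        return ("SYN-ACK", isn, seq + 1)
    if flags in ("ACK", "PSH-ACK"):
        # We have seen what the sender wanted to say; tear the flow down.
        return ("RST", ack, 0)
    # Everything else (bare RSTs, FINs, spoofed one-way floods) is ignored,
    # which is exactly how the one-way flows get separated out.
    return None
```

A spoofed-source flood never answers the SYN-ACK, so it stays a one-way flow; a real sender completes the handshake and leaks one segment of payload before the reset.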

The other thing we were doing was the classic PCAP capture: capture everything and write it to disk.

This is maybe the first diagram. All of this first half of the talk is in the paper that's published on Potaroo; this is the substantive work that Geoff has done that's informed our operational behaviour. If you really want to understand what's going on here, you are going to have to read the document. I am just a traffic fairy presenting the information.

So, have a look at those numbers on the left-hand side. You are going to see some stuff here; it's a little hard to make out because it's very, very small, but up high in this region there is a single spike event that is close to a gig of traffic coming in. That's an awful lot of traffic, and we have no reason to believe it was someone's fat fingers on a router dumping the traffic. This was really out there. You will also notice that there is a fairly large distinction between the UDP volume, which is the green line, and the TCP volume, which is the blue line. It's really massively swamped by UDP traffic. And the other thing to take notice of here is that this traffic is spread pretty well equally across the whole duration of the experiment that's shown here: 150 megabits of sustained load across this network range.

So, when you do activity on the net... I do DNS monitoring all the time, and my signals routinely have an incredibly strong diurnal behaviour, a 24-hour cycle, through human interaction; things that are swamped or dominated by a single time zone. If you look at this data, you will see that, at this level, there might be things you could begin to say are signals, but they are not strong. This is just a continuous load of traffic that's coming out of the system. If we look at the packet rate rather than the megabyte count, it actually looks subtly different. In the UDP, there actually is a very strong diurnal signal. It's clear that whatever is going on there, it's human-centric. It's a very strong message. TCP? Well, you could maybe argue there are some interesting flicks there, but it's essentially a flat baseline. There is an immediate difference. Remember, this is kind of freaky: in most situations, when I get packets on the wire, the packet count and the megabyte count are going to show similar behaviours, because the traffic mix works out that way. Here, we have a graph where the byte count is absolutely flat, and the packet count has got really strong swings in it.

AUDIENCE SPEAKER: Sleeping people and bigger packets.

GEORGE MICHAELSON: Yes, the elephants of your dreams; when you are dreaming of Dumbo, that's in the packets, and when you wake up in the morning, you are thinking about a little apple, so it's a small packet. Isn't that wonderful? I don't know. I mean, it's the network. You tell me.

Okay. So, this analysis needed to actually be informative. We were looking for a way to get rapidly to some information to inform the operational process of giving out these blocks, and Geoff came up with a wonderful piece of C code that did a rapid sum over this, summarising the traffic volumes and packet counts into the /16s and /24s, which gave us a kind of representation that we could defend in the process of asking what's the worst case, looking at this from an allocation process. I know you guys routinely operate on different segment sizes, super-blocks over the /16s or the /24s, but our processes tend to reflect that kind of structural thing. I am not saying it's a classful network world. It just gave us a handle on what was going on.
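The rapid summation just described can be sketched like this, in Python rather than the original C, rolling per-packet byte counts up into /16 and /24 buckets so the worst sub-blocks stand out. The input record shape (destination address string, byte count) is an assumption for illustration.

```python
from collections import Counter

def bucket_counts(packets):
    """packets: iterable of (dst_ip_string, byte_count) pairs.

    Returns (per_16, per_24): Counters of bytes keyed by prefix string.
    """
    per_16, per_24 = Counter(), Counter()
    for ip, nbytes in packets:
        o = ip.split(".")
        per_16["%s.%s.0.0/16" % (o[0], o[1])] += nbytes
        per_24["%s.%s.%s.0/24" % (o[0], o[1], o[2])] += nbytes
    return per_16, per_24
```

Sorting either Counter by value then gives the "worst case" list an allocation process can be defended against.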

And it asks this question: how uniform is it? Is it equally awful? Is the whole of this network bad everywhere? Well, the answer is, it's kind of complicated. Now, I know this looks like a Jackson Pollock painting, but I actually think it's really interesting. I mean, I have seen elephant art and I'll tell you it's not as good as random packet data. There are actually three classes going on here. There is a very small group of four or five networks here that are really very, very bad. You will notice one of them, and only one of them, has a very strong diurnal swing. There is a weaker one, but the essential quality of these guys, and remember, I believe this is logarithmic, is that they are really truly atrocious. Then there is a second camp of networks that are hovering in this region here, and they are not nice networks, but they are distinctly less bad. And then there is this bottom layer here, which is where almost every one of the nets in this space is. So, what's normal for traffic in network 1 is really quite distinct from the other two classes, and we have a small number of things that are very bad.

So, if you plot this into the sub-splits by the network blocks, what we are looking at here on the X axis is the specific subnet, so that's 0.0, 50.0, etc., and then if you plot the average and the peak, you get this really interesting little grouping down the bottom here where the average is floating up into the peak, and there is a very strong sense that these are the really bad guys. You have got some really bad spikes of peak load going on and you have got some interesting spikes of average load going on, but there is a really clear distinction here between what's going on in the body of the net and what's going on in just a small number of places, and, in the address plan, those are a very specific group of addresses.

Okay, so what are people sending in these packets? What are they actually doing? Well, there is an interesting tweak in the distribution. We are getting a lot of small packets, but there is this weird number of absolutely exactly 255-byte packets: 31%. I have no idea why. Do you know a protocol that says I do 255, I don't do 254 and I don't do 256? There they are. They stand out like a sore thumb. There is also a 60-byte peak, and there is a really weird outlier over here, a 1,400-byte peak; that's a huge peak of someone that says I only talk in 1,400-byte units. Anyone got any clue? We'd love to know.
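Spotting modes like the ones described above only needs a packet-size histogram with a share threshold. This is an illustrative sketch, not the actual analysis code; the 10% threshold is an assumption.

```python
from collections import Counter

def size_spikes(sizes, threshold=0.1):
    """Return packet sizes that account for more than `threshold`
    of all packets, i.e. the sore-thumb modes in the distribution."""
    hist = Counter(sizes)
    total = len(sizes)
    return sorted(s for s, n in hist.items() if n / total > threshold)
```

Run over a capture, anything this returns besides the usual minimum-size and MTU-size modes is worth a closer look at the payloads.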

So, if you do the broad-range protocol distribution, this thing is absolutely swamped by UDP, and that's really confirming what the RIPE NCC saw. They gave that as a clear signal in their write-up. It's not typical. We have also been looking at some of the other network blocks and we are seeing other ratios in nets, which are interesting, but they are not network 1. So something is definitely particular here. The volume of ICMP is kind of understandable, but the tunnelling is kind of interesting. That was a little unexpected, perhaps.

If we look at the port distribution, the overwhelming majority is SIP, SIP with RTP. Wonderful number here, 33368, which we have looked at the payload of, and it's DNS. The interesting thing about these numbers is it comes from Mexico. It's a cable rollout, it's a large number plan in Mexico, and they are using weird magic numbers, half the number of the beast, to do DNS. Now, what is going on? There is a theory that's been put here that there is a particular brand of broken equipment that made a coding error somewhere along the line, and the easiest way out was just to let it flow, and tune their BIND to listen on that port and cope, because it's easier than replacing your CPE. And as long as no one was using network 1, they didn't have to worry, because they owned the customer. Go figure.

So we have a lot of port zero. There is some initialisation behaviour that takes place in some code: they got it wrong and they didn't do a "give me any port". They said let the compiler give me a number, and there you go: port 0. We see quite a lot of that. We see syslog. UDP port 80. Way cool. I mean, the web is atomic, right, so there is no need for a TCP protocol, because the packet you send and get, that's the transaction. Why didn't we make it UDP to start with? Be that as it may, the assumption here is this is just a way to find what you can get through the firewall. It looks like probing. Then we have another number that's pseudo DNS. It's quite strange how much is out there.

Here is this port, port 15206. This looks to be music streaming services. There is an awful lot of traffic out there, but someone on network 1 is running muzak, although I don't think we have decoded and played it. Maybe we should.

There is an awful lot of SIP. If those are your details in there, I am very sorry that I am publishing that you do SIP on network 1. There are parasite ports. Maybe the SIP guys know why, but this thing uses a lot of ports, and a lot of associated service calls take place, but the nexus of telephony seems to be cropping up here.

We thought Slammer was dead and gone; it's alive and well and kicking through the network, and we see a lot of it. 6112, what are you doing up in that number space, guys?

Badness: what's the badness toll here? There are five stand-outs on that traffic graph that we were seeing from the two AS test points. Here, we were consistently soaking a large amount of traffic. So that's the net 1.1 group, and 1.10 crept in. Now, Geoff has this cunning theory that this is a fat-finger mistake and they really wanted to be in 1.1, but they mistyped it.

I mean, you know, you guys are the cream of the crop. Surely you would never do that. Well, I don't know.

Well, there is the second observation, which is: we are going out there saying, bad, bad guy, bad network, but you know what, maybe it's not bad; maybe this is just the stuff that happens, because that's the stuff that happens. So the qualitative aspect here of saying it's bad, well, it's bad for you if you have got one of these blocks. I mean, you don't want to take the surplus traffic, but no one meant to do this. You know, it's not so clear. It looks like it's leakage. It's private use, it's stuff that anyone routinely could have done. We do actually see the scanning, the viruses, the worms, but it's leakage.

Okay, where did we go with this? Well, we had to make a pragmatic decision as a registry, and I stress there is a distinction between APNIC the registry and Geoff and me, the science group. So, what we said is, let's hold off on these /16s for a while; we are going to have to do some more work on this. If there is ever an opportunity in the testing on these blocks that they could be viable for use in the community, we are going to give them back. We don't think it's appropriate to put these into community use right now. It's important that they are not left as unused or reserved, because if we do that, people are going to continue to use them, so we have actually made an assignment of record to R&D for further testing. We also now realise that testing is going to have to be absolutely routine for all of these blocks, and the RIPE NCC knows this too; network 2 has recently been in test. I believe you are already aware of substantial traffic issues in that block. It's a new world; we are all scraping in the barrel.

I am going to rapidly get to the visualisation part. This was a second strand of attack. Let's look and see if we can look at this as time series and understand what's going on.

This is a map of the entire routed address space. The yellow represents fully announced blocks. The two black stripes are the remaining reserve that is yet to be given out. You can see the classic B space with breaks, you can see the classic C space and swamp with breaks, and you can see a white area that is the multicast and unusable upper part of the network. This is a picture of the whole space, where this corner is 0.0.0.0 and this is 255.255.255.255.

Okay. So there you go in detail. In fact, we are probably camping on this little corner down here, because we are using the RIPE network, I'd imagine, which is 193.

Okay. So if you look at net 1, you can do the same thing, but, instead of plotting it as /16s, you do it as a map of the /24s, so every pixel is a /24 in the range. You are going to see there are some clear signals that come out. There are particular groupings of behaviour where you'll see a striking pattern, which is a whole stride of activity in a /16. You are going to see intense isolated corners of activity. And you are going to see particular distinct nets that may be taking a hit. I might add that this colour map kind of grades out a lot of the small packet rates. The rest of this background is taking 1 or 2 packets a second. I am only showing the stuff which is really, really heavy.
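One pixel per /24 means a /8 fits exactly into a 256x256 grid. A minimal sketch of that mapping is below; the simple row-by-column layout (second octet as the row, third as the column, so a busy /16 shows up as a whole stride) is an assumption about how the heat map was laid out, for illustration only.

```python
def pixel_for(ip):
    """Map an address inside a /8 to (row, col) in a 256x256 /24 grid:
    second octet picks the row, third octet picks the column."""
    o = [int(x) for x in ip.split(".")]
    return o[1], o[2]

def heat_grid(addresses):
    """Count packets per /24 pixel; returns {(row, col): packet_count}."""
    grid = {}
    for ip in addresses:
        key = pixel_for(ip)
        grid[key] = grid.get(key, 0) + 1
    return grid
```

With this layout, a hot /16 fills one whole row, while a single hammered /24 is an isolated bright pixel, which is exactly the visual distinction the talk describes.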

Now we go to the bit where it's a "now watch this".

We are going to have some sounds now.
So this is just to give you a sense of the overall level of activity you see in the net. And, in particular, you are going to notice that the distribution of source ports is actually remarkably even across large swaths of the entire number plan. So when people mail us saying, where is the traffic coming from? The answer is: It's coming from you guys. It's coming from everywhere. There are, however, obvious particular hot spots. You are going to notice that there are some very, very strong signals coming out of swamp land and there are some other highlights that you see here. The second thing is you'll notice that on the receive side, it's really quite a different picture. So that's just to give you a sense of the kind of background view of activity we see on the network.

This is an example of a rapid-fire scan on the net, and you can see the circled block here; that is the origin net that's causing this process, okay. So this guy is walking through the entire global address space and he happened to hit network 1, so here he is. I have edited it down, which is why the time is changing so fast. Basically, they walked across the entire network 1 range with high-intensity scanning in pretty much 8 minutes, and then went into network 2.

So, did you see that little flash there? That is a beautiful example of a DDoS, right? It's not possible for this many synchronised sources to actually achieve a single packet event within five seconds of each other. This is somewhere around 700,000 IPs swamping one destination net in five seconds. So, it's reasonable to assume this is spoofed-source behaviour. But I thought I'd show you what it looks like, so you could get a sense of just how bad things could be.
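A flash like that can be flagged mechanically: look for any destination whose distinct-source count inside a short window is implausibly large. This sketch is illustrative, not the analysis code from the talk, and the window and threshold values are assumptions.

```python
def ddos_windows(packets, window=5, min_sources=1000):
    """packets: iterable of (timestamp, src_ip, dst_net) tuples.

    Returns destinations that were hit by at least `min_sources`
    distinct sources within any single `window`-second bucket.
    """
    buckets = {}
    for ts, src, dst in packets:
        # Key each packet by (time bucket, destination) and collect sources.
        buckets.setdefault((int(ts) // window, dst), set()).add(src)
    return sorted({dst for (_, dst), srcs in buckets.items()
                   if len(srcs) >= min_sources})
```

A count of hundreds of thousands of "sources" in five seconds is itself the tell: no real population synchronises that tightly, so the source addresses are almost certainly spoofed.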

And I think we are nearly there. There was an obvious conclusion I had to draw. But I also realised that we needed a serious message. So, it has a serious message.

Thank you.
(Applause)

CHAIR: So any questions?

OLAF: So this is now done for net 1 and net 2. Do we have something similar for other networks which are sort of non-special, so to speak? Wrong word, but...

GEORGE MICHAELSON: We attempted a survey of a /12 from a non-special network to gain some sense of what background might mean, because clearly a measure of what background radiation is would inform this hugely, and that did give us an indication: the scanning behaviour is general, and that kind of speckledy look, that's also quite general. So there is a sense that, yes, we can differentiate what a normal net block sees and what a bad net sees. We have recently started testing net 14, which is Telecom Italia, and net 223, which, irony of ironies, is a network block that APNIC was given and, because one /24 had been previously used, we gave it back; and yet the spin of the dice from Leo has given it back to us. Had we kept it, we'd actually have understood this problem a bit better. The things you learn in hindsight. But those networks attract subtly different patterns of traffic, and we are looking at them very intensively to try and get some sense of the variations here. We want to be informative about what background is, what noise is, what the behaviour is, and to tell everyone so that they can inform their expectations of this declining resource.

AUDIENCE SPEAKER: Daniel Karrenberg, RIPE NCC. You gave me a little bit too much credit there in passing for the debogonising effort. I actually didn't code it; that was James Aldridge, who some of you may know, and he can't be with us today, or this week, because he has some health issues, so for those who know him, you might want to drop him a note.

GEORGE MICHAELSON: I certainly didn't mean a misattribution, but I respect RIPE as a complete entity. I respect the work you do.

AUDIENCE SPEAKER: I needed to correct that. The other thing is that, in the debogonising, 1/8 was the only one that generated that much traffic, absolutely.

GEORGE MICHAELSON: It was a clear stand-out.

AUDIENCE SPEAKER: It stood out very, very clearly, so that's also something that I think people should be aware of, that not everything is toxic waste...

GEORGE MICHAELSON: That's a very good observation. You should not necessarily read from this that everywhere is going to be equally as bad, but I would like to say we are already seeing emerging behaviours that could be very unpleasant, so this story is unquestionably going to be more complicated.

AUDIENCE SPEAKER: Steve Kent, BBN. I have a suggestion for the large amount of traffic you are seeing coming out of Mexico in particular. It's an attempt to smuggle digitised cocaine, and the trick is to figure out how to undo it.

GEORGE MICHAELSON: Isn't that the basis of TRON?

AUDIENCE SPEAKER: I have two thoughts on that. Number one, looking at this in terms of reachability: how many people on the Internet are filtering 1? And the other one was, have you thought about this in terms of the security holes for the people who are sending traffic to 1, if it is potentially allocated to a bad guy?

GEORGE MICHAELSON: I would like to talk about those both separately. We did think about that filtering question, and we were very careful in the experiment to make sure that the norms of routing registry announcement and statements on the lists were made, but obviously there is a question that can be asked: were all of those filters removed? At this stage it's too early to say. But I would observe that the source address distribution of the non-spoofed traffic is extremely wide-ranging. And to the extent that you'd argue there is a clear AS filtering behaviour here, I'd say that may be true and it's something we should look at in depth, but I also see a very, very large and diverse source address population. Genuinely, almost every net we can see out there has people that will send packets into net 1. Now, as to the second one, you may not have noticed it, but in some of those diagrams that I showed in the movies, there was a huge thick stripe down the left-hand side on the source range, which is net 1 sending to net 1. So, for the internalised use, which is what we are thinking is the not-bad guys, yeah, I'd say there is a fairly substantial internal risk here. If these guys have routing that would allow their packets to leave their IGP and get to us, then, yeah, we could be receiving some very interesting data, and there are potential risks here, and that has to be thought about, but I don't know what the answer is. But yeah, that has some potential to be a real issue. I am not convinced it's necessarily different to any of the other nets that have been previously used. If you were camping on Telecom Italia space and you never vacated it because you didn't feel you had to, maybe you are exposed to the same risk.

MARTIN LEVY: Martin Levy, Hurricane Electric. Could you talk a little bit about normalising this noise against, let's say, a random allocation that would come from APNIC today, or a very old allocation, whether it be swamp or otherwise that's been around for a long time?

GEORGE MICHAELSON: I would like to defer to the chief scientist.

AUDIENCE SPEAKER: I would, too.

GEOFF HUSTON: I'll wait my turn, but I'll answer that. Geoff Huston, APNIC. I'll answer the first one. What's normal appears to be that every single /32 in v4 gets a packet every two to three seconds that it didn't ask for. This is based on studies in net 27, and it's also based on studies in net 14 and net 223. The other thing I'd like to note, though, about net 14 and net 223 is that 1/8 isn't that special, and that's not good news. When we saw 150 megabits a second coming into 1/8, we thought, well, that's all this sort of 1.1.1.1 stuff; you guys just can't add up. But net 14 gets a peak of 35 megs, it's diurnal between 20 and 35, and you go, maybe that's just net 14. Net 223: diurnal, 18 to 35 megs. And I suspect that this leakage of traffic is actually quite widespread across a huge amount of the v4 space. The only thing that's radically different with net 1 as against the others is that net 1 has this strong UDP RTP sort of signal; there is a whole lot of audio crap. Everything else is TCP SYNs, and when you start answering the SYNs with an ACK, it gets a lot of fun. And there is some bizarre stuff flying around, and it's predominantly port 80 going, are you there, kind of stuff. So, the comment is, I don't think net 1 is that special. We are seeing it everywhere.

AUDIENCE SPEAKER: So the natural question then, to follow up on that, is: I thought that 1.1.17, or 1717, or something, is so easy to type that the numbers would be out to lunch compared to other numbers. You are saying something different.

GEOFF HUSTON: I am. I am saying I see a degree of background radiation across the entire net. When you take it as a /8 there is an awful lot of traffic that's unsolicited.

CHAIR: I had one question for you as well, and then Daniel, and then we have to close the microphones. On the source address distribution, it looked like there was actually an equal amount of traffic coming out of the old Class E space; is that correct?

GEORGE MICHAELSON: Yes. There is a small amount of behaviour. I have got to say that the heat map that you are looking at is an exceptionally crude approximation of what is really going on. This was a rapid visualisation but not all of the artefacts are a function of the graphing tool. There is traffic in that space.

DANIEL: More a comment on what Geoff just said; Daniel, the other chief scientist. The only sure time-series data that we have, that I am aware of, is from the debogonising effort at the RIPE NCC, which is in a specific place in network 1. Network 1 was where we maxed out the link and saw a lot of bad, unsolicited traffic. Of course, if you go off at different places like YouTube, Google, wherever, and you have more bandwidth available, you will find more. So, I am not so sure whether the other places are just as polluted as 1/8; the only thing we know from the past is the debogonising thing, which was systematically done, and there 1/8 stood out fairly clearly.

CHAIR: Next is the BGP presentation.

I come from Simula Research Laboratory, which is in Oslo, Norway, and I will be speaking for half an hour about BGP and the evolution of churn in BGP.

That will be a challenging task, with a Norwegian accent, after this excellent presentation that we just heard.

This is joint work with my student, Ahmed, and Constantine from Georgia Tech.

So the background for our work on this is that, as you know, the Internet is growing. We illustrate the growth here: in the measurement period that we are looking at, the number of ASes in the Internet more than doubled. And the number of routable prefixes also more than doubled in this six-year period that we are looking at. And there has been concern, as Geoff Huston spoke about yesterday, that this growth will be a problem for the routing system. People are afraid that, as the routing table size gets bigger and bigger and the amount of churn, routing updates in BGP, increases, this will be a problem in the sense that you will need very fast, very expensive routers to keep up with it.

So, Geoff showed a quote from this IAB report yesterday; here is another quote saying that there is a need to devise a scalable routing and addressing system because of these two reasons: the growth in the routing table size and the growth in the churn rate.

So this is our motivation: how is BGP churn really evolving or increasing over time? We wanted to try to find out.

So, to do this, here is our approach. We take data from the Route Views project, publicly available data, and from that we identify monitors. So, Route Views: most of you are probably familiar with it. What they do is they place monitors in many networks around the Internet; these are multihop BGP sessions from the monitors to a collector, and the monitors basically report all the BGP updates that they see; they send them to the collector, and the collector then gathers all the updates sent from these monitors. So, we identified four monitors placed in the core of the Internet, large tier-1 ISP networks, and these four monitors have been alive for this whole measurement period that we are talking about, six years, from the beginning of 2003 till the beginning of 2009.
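The measurement just described reduces, at its simplest, to counting the updates each monitor delivered to its collector per day, which gives the churn time series the talk analyses. This is a minimal sketch; the record shape (unix timestamp, monitor id) is an assumption, since the real input would be parsed from the collector's dump files.

```python
from collections import Counter

SECONDS_PER_DAY = 86400

def churn_per_day(updates, monitor):
    """updates: iterable of (unix_timestamp, monitor_id) pairs.

    Returns a Counter mapping day index -> number of BGP updates
    seen from the given monitor on that day.
    """
    per_day = Counter()
    for ts, mon in updates:
        if mon == monitor:
            per_day[ts // SECONDS_PER_DAY] += 1
    return per_day
```

Running this for each of the four core monitors over the six-year window yields one churn time series per vantage point, which can then be compared and decomposed.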

So, this is the data that we are looking at. I felt I had to put in a slide here, because the topic of this talk is of course very similar to what you heard about yesterday, when Geoff presented his work on BGP evolution in 2009 and also before that. So, just to explain a little bit about the differences between these works.

You see here, Geoff looked at both the RIB size and the churn; we only look at churn. Geoff explained yesterday how you can take different approaches to this, and we have taken one of the other approaches on his list: instead of monitoring from a single point, we look at several monitoring points.

We look at a longer time period, because we are interested in how this evolves over time and we believe we need a long time period to say something about that.

We wanted to look at how churn is evolving in the core of the Internet, that is, at the most well connected and largest networks.

Of course, since we use RouteViews data, we don't have complete control over the monitoring setup. And you will see that our graphs look different from the graphs you saw yesterday.

And that is also part of the reason for the last point: we need to try to decompose this churn and understand what causes its shape before we can draw any conclusions.

So, before we start looking at the data, let me just go through: BGP churn, BGP updates, what causes this churn? What causes the level of churn that you observe at any given point in the network? Well, this is actually complex. There are many factors that influence it.

The size of the network: it is natural to believe, at least as a starting point, that as the network grows, you have more elements in the network, you have more prefixes that can be withdrawn, more links that can fail, more sessions that can be reset, so as the network grows, you would expect to see more churn.

The structure of the Internet topology plays an important role in determining how many updates reach the point you are looking at the Internet from. So, which networks peer with which networks, how many providers does every network in the Internet have, what about the path lengths in the Internet, as Geoff spoke about yesterday? And that translates into the depth of the Internet hierarchy: the Internet as a hierarchical routing system with the large tier-1s at the top and customers of customers of customers below.

Policies and protocol configuration: the routing protocol plays an important role, of course, for churn levels. Whether you try to rate-limit churn, whether you enable other mechanisms like route flap damping, whether you filter routes, all this will influence the level of churn that you observe. And of course the types of events that take place in the network, things like policy changes, failures and prefix withdrawals, actually cause this churn.

So, let me just give a quick example here to show how some of these factors will influence churn.

Here is a simple illustration. Two transit networks, X and Y, have three different customers, A, B and C. A is connected to both of them; the other two have one provider each. So what happens now if this border router at AS A fails? X and Y will both discover this and start sending updates to the other networks and, as you can see, there will be a convergence process going on. The thing to observe here is that this event had to be propagated globally, the whole network had to learn about it, because there is no route any more to the prefixes announced by network A.

The other thing to notice is that sometimes there is more than one update going over the same link; there is this convergence sequence. And this has to do with several things, among them that there is more than one route to these destinations: there is one route here and there is another route going here, so when there are more routes to a destination, there are more paths to explore.

Here is another event. What if it isn't the border router that fails, but the link between A and X? In this case you will also have a convergence sequence, but we note that this convergence is different: this event doesn't have to be propagated globally. There is no message ending up here at network C, because the best path from C to A was not affected by the failure. So keep in mind that different events will cause different amounts of churn, and the structure, how many providers a network has, will also influence this.
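The two failure scenarios can be sketched as a toy model in a few lines of code. This is purely our illustration (the path lists and function names are ours, not from the measurement study): each AS keeps a preference-ordered list of paths to A's prefix, and an event only has to reach the ASes whose best path it breaks.

```python
# Toy model of the talk's example: A is multihomed to transit networks X
# and Y; B is a customer of X, C is a customer of Y. Each AS stores its
# paths to A's prefix in order of preference (best path first).
paths_to_A = {
    "X": [["X", "A"], ["X", "Y", "A"]],
    "Y": [["Y", "A"], ["Y", "X", "A"]],
    "B": [["B", "X", "A"], ["B", "X", "Y", "A"]],
    "C": [["C", "Y", "A"], ["C", "Y", "X", "A"]],
}

def affected_by(failed_link):
    """Return the ASes whose current best path traverses the failed link.
    Only these ASes need to converge onto a new best path."""
    ends = set(failed_link)
    hit = []
    for asn, paths in paths_to_A.items():
        best = paths[0]
        if any({best[i], best[i + 1]} == ends for i in range(len(best) - 1)):
            hit.append(asn)
    return hit

# Link A-X fails: X and B must converge, but C's best path via Y is untouched.
print(affected_by(("A", "X")))
# If A's border router dies, both of A's links fail and every AS is affected.
print(sorted(set(affected_by(("A", "X")) + affected_by(("A", "Y")))))
```

Running this shows the asymmetry the speaker describes: a single-link failure touches only part of the topology, while the border-router failure reaches everyone.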

Here is the first look at the data. This shows the number of updates received each day over our six-year monitoring period from a monitor in AT&T. As you can see, this is not an easy time series to reason about. It is dominated by large spikes, it fluctuates wildly, and there are several level shifts that you can see: periods where churn is at a sustained high level for weeks or months or even years at a time, and then it drops down to what seems to be the normal level.

So, how can we try to understand what is going on here? Clearly there is no use trying to directly fit a line or do such a thing with this time series; we need to clean it in a sense. I should say that in this series we have already removed the effect of session resets between the monitor and the collector.

Another observation about this time series: as I said earlier, we have looked at four different monitors, but I will use the time series from only one of them as I go through this. Here we see that the same period from another monitor shows a completely different picture. It is similar in the sense that it has all these spikes, but the spikes are not correlated at all between the two monitors. And these level shifts exist in both of them, but at different times and for different reasons.

So, there is very little correlation between these monitors.

So here is our approach when we try to reason about these time series. We start with the raw time series that I just showed you, and then we put that time series through a number of filtering steps, if you will. First we pick out all the updates that are redundant, that are duplicates, a duplicate meaning an update that is an exact copy of the previous update. Our goal here is to boil this down to what we would call the baseline churn: the churn that is not due to local effects around the monitor, but that can be seen from many monitoring points around the network. This is the real underlying churn in the Internet that we are trying to get at and characterise. So we try to remove other local effects, and finally we try to remove and explain these level shifts, to see what we end up with. We call that the baseline churn that we are interested in.
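The first two filtering steps could be sketched along these lines. This is only an illustrative reconstruction: the update representation, the use of a shared timestamp as the notion of a "burst", and the way the 2,000-prefix threshold is applied are our simplifications, not the paper's exact method.

```python
from itertools import groupby

def drop_duplicates(updates):
    """Step 1: drop updates that are exact copies of the preceding update."""
    cleaned, prev = [], None
    for u in updates:
        if u != prev:
            cleaned.append(u)
        prev = u
    return cleaned

def drop_large_local_events(updates, threshold=2000):
    """Step 2: drop bursts touching more than `threshold` prefixes.
    Here a 'burst' is naively all consecutive updates sharing one timestamp."""
    kept = []
    for _, grp in groupby(updates, key=lambda u: u["time"]):
        grp = list(grp)
        if len({u["prefix"] for u in grp}) <= threshold:
            kept.extend(grp)
    return kept

raw = [
    {"time": 1, "prefix": "p1"},
    {"time": 1, "prefix": "p1"},   # exact copy, removed in step 1
    {"time": 2, "prefix": "p2"},
]
baseline = drop_large_local_events(drop_duplicates(raw), threshold=2000)
print(len(baseline))  # 2
```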

So first, let's look at the redundant updates, the duplicates. These are, of course, not needed for correct protocol operation. They don't carry any meaning; nothing happens when you receive one, you just drop it.

So, surprisingly, I would say, duplicates across our four monitors in this six-year period account for about 40% of churn. So 40% of BGP updates are complete copies of the previous update and are unnecessary. You see here that this varies from network to network and over time. Here is the highest value: more than 60% of updates in Level 3 in 2005 were duplicates. Sprint in 2003: only 7.2%. But there is no trend over time; this number doesn't grow or decrease on average. What I show here is the original time series and then the time series with all the duplicates removed, and we see that the duplicates are actually responsible for most of these large spikes. So most large spikes are simply duplicates. Now we end up with this time series, but still this is not a smooth and nice time series; there are still spikes left and there are still these level shifts with periods of sustained high churn.

So, what causes these duplicates? Well, obviously a router that sends a duplicate announcement to its neighbour doesn't check that it has already sent that same message earlier. It doesn't keep the state needed to say: I have already sent you this update, so I don't need to send it again, I can filter it.

Another cause is interactions between iBGP and eBGP. Something that is sent in iBGP as a distinct update, because it carries information that is useful internally in the network, translates into the same eBGP message that was already sent to your neighbour.

So could these be filtered out? Of course they could. You could keep state and check every message: did I already send this exact copy? But this has a cost: you need to keep this RIB-out and you need to run this processing every time. So it's a matter of cost.
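The state the speaker mentions can be made concrete with a small sketch. Assuming a per-peer "Adj-RIB-Out" keyed by prefix (a detail we are supplying for illustration, not something from the talk), duplicate suppression is just a lookup before sending:

```python
# Per-peer state: the last update sent for each prefix (a minimal Adj-RIB-Out).
last_sent = {}

def send_if_new(prefix, as_path, attrs):
    """Return True if the update differs from what we last sent for this
    prefix (so it must go out), False if it is an exact duplicate."""
    msg = (tuple(as_path), tuple(sorted(attrs.items())))
    if last_sent.get(prefix) == msg:
        return False              # exact copy of the previous update: suppress
    last_sent[prefix] = msg       # the memory cost the speaker refers to
    return True

assert send_if_new("10.0.0.0/8", [65001, 65002], {"med": 10}) is True
assert send_if_new("10.0.0.0/8", [65001, 65002], {"med": 10}) is False  # duplicate
assert send_if_new("10.0.0.0/8", [65001, 65002], {"med": 20}) is True   # real change
```

The trade-off is visible directly: suppressing 40% of updates costs one stored message per prefix per peer, plus a comparison on every send.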

So, that's the first topic. Let's go on to the large local events. What do I mean by large local events? A large event here is an event in the routing system that affects many prefixes at the same time: typically the failure of a link that carries many prefixes, or things like that.

What we see when looking at these large events is that they are almost always caused by events in the monitored AS, things that happened internally in the AS, or very close, meaning in the session to the neighbour of the monitored AS. When looking into the causes of these large events, we see that changes in the MED value are typical. They can also be caused by communities, different communities signalling, for instance, the exit point in the network, or by failures in or close to the monitored AS, meaning link failures at the neighbouring session.

We also see that the four different monitors we look at experience these large events with no correlation among them, which is also an indication that these events are local to the monitored AS.

What I show here is again the previous time series where we had removed the duplicates. Here we have also removed these large events, those affecting more than 2,000 prefixes and being local to the monitored AS. We see that doing this removes most of the remaining spikes in the time series, so we are getting closer to what we believe is the baseline churn time series. But still we are left with these level shifts, these periods of sustained high churn, so that is next.

How about these levels? What causes them? We did this manually: for each network we identified these level shifts, looking at the periods where you see them, and asked what causes them in each case. Here is an example for AT&T. We identified these four level shifts and plotted, more for illustration, the fraction of the churn contributed by each and every prefix in the network, sorted by how much churn that prefix contributes. This is a normal period for reference, and these are the four shifted periods. We can clearly see that in these four periods there are very few network prefixes that create a lot of churn: they are flapping, they are being updated all the time.

And the causes for these level shifts were found to be, in one case here, a leakage of internal AS numbers: they used private AS numbers for some purpose within their network and started announcing them publicly. In other cases we discovered links that were flapping up and down. So we were able to pinpoint these effects, and once we removed all churn caused by the exact effects that we discovered, we ended up with this time series. This is what we call the baseline time series.
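Identifying the prefixes behind a level shift amounts to ranking prefixes by their share of the churn. A possible sketch, with synthetic data and our own function names:

```python
from collections import Counter

def churn_share(updates, top=3):
    """Fraction of total churn contributed by the `top` busiest prefixes."""
    counts = Counter(u["prefix"] for u in updates)
    total = sum(counts.values())
    return [(p, n / total) for p, n in counts.most_common(top)]

# One flapping prefix dominating a period, as in the AT&T level shifts
# (prefixes here are documentation ranges, purely illustrative):
period = [{"prefix": "192.0.2.0/24"}] * 80 + [{"prefix": "198.51.100.0/24"}] * 20
print(churn_share(period, top=1))  # [('192.0.2.0/24', 0.8)]
```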

So these level shifts are caused by specific failures or misconfigurations in or near the monitor. These are local effects; they are not really part of the global churn. So we filtered out the duplicates, we filtered out these large local events, and we filtered out these level shifts that we identified. What we are left with is what we call the baseline churn.

So what characterises this baseline? Well, it is growing. We have run several different statistical tests on all four monitors, and it is growing, but it is growing slowly, much slower than you would expect. Here we compare it to the growth in the number of prefixes in the RIB, and it grows much, much slower: over this period of six years it increases between 20 and 80%, depending on which monitor you look at.

So, the main conclusion from this is that the most severe churn, the most intense periods of BGP updates, are not caused by global events that propagate through the whole network and that everyone can see. They are caused by events and configuration mistakes happening in or close to the monitored networks. So the take-away message here is that the increase in this background churn does not pose a threat to the scalability of the routing system. The Internet can keep growing, if it continues as it has done so far, and this background churn is not what is going to kill the Internet.

If you want to reduce churn, then what you need to do something about are all these local effects: the churn that is caused either by events in your own network or in the session with your neighbour.

So, what I have presented so far is a paper we wrote that was presented earlier this spring, and of course we are working further on this. I just want to give you a glimpse of what we are doing now. There is the big question here that Geoff was asked yesterday: this is counter-intuitive. It puzzles me. The Internet is growing so fast and you would expect churn to grow much faster; why is it growing so slowly? This, of course, is what we are trying to find out. So, a disclaimer: what I am going to present now is ongoing work and the conclusions are not final, but I will give you a glimpse of the directions we are working in to explain why churn is growing so slowly.

So, what we are looking at is the topology, the effect of the topology and the densification that is going on in the network. So what do we know about the Internet topology, the structure of the Internet topology?

Well, we know that as the Internet grows larger, it is getting denser, meaning that each node on average has more interconnections with other networks, more providers basically; multihoming is increasing, and it is increasing faster in the centre of the Internet than at the edges. This, of course, gives more paths, more ways to reach each destination in the network. We also know that the average path length is constant and has stayed constant for the last ten years.

So, how does this influence churn? Well, this is actually complex, because as you get more paths to each destination, there are more paths that can potentially be explored. Remember, Geoff told you yesterday about the convergence sequences, the number of updates that are needed to capture one event in the Internet. So, potentially there is more path exploration when you have a denser network. At the same time there is another effect: not all events need to be propagated globally any more. As I showed in the example, if you have more than one provider, then when something fails, it is not given that this will affect your preferred path. So some events will be confined locally; not the whole Internet needs to know about them.

So, what I plot here is the fraction of all events seen by a monitor, AT&T here, that end with a complete withdrawal of the path or that go from not having a path to having a path. These are radical events in the sense that they probably need to be advertised globally.

The other ones are simply path changes or path disturbances, and this is the majority of events. And their fraction is actually increasing over time, indicating that, as the network grows, fewer and fewer events need to be propagated globally.
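The classification just described can be expressed as a tiny function. This is our own paraphrase of the distinction drawn on the slide, not code from the study:

```python
def classify_event(path_before, path_after):
    """Radical events (reachability gained or lost) probably must propagate
    globally; mere path changes can stay confined near their origin."""
    if path_before and not path_after:
        return "withdrawal"      # down: the whole network must learn this
    if path_after and not path_before:
        return "announcement"    # up: likewise global
    return "path-change"         # the majority of events; may stay local

assert classify_event(["X", "A"], None) == "withdrawal"
assert classify_event(None, ["Y", "A"]) == "announcement"
assert classify_event(["X", "A"], ["Y", "A"]) == "path-change"
```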

So, these are the two points that I made: more paths will lead to possibly more path exploration, and at the same time more events can be confined locally so that they don't have to be distributed globally in the network. Can we find support for these two predictions in the data?

Well, it seems that we can. Here is the number of updates received after the withdrawal of a prefix. This is a beacon prefix, which Geoff also used yesterday: a beacon prefix is withdrawn and announced in a regular pattern, so every time it's withdrawn, this will cause a sequence of updates. What we see here is that, yes, it seems that as the network grows denser, there is an increased number of updates received after the beacon prefix is withdrawn. But you will only see this if you don't use the MRAI timer, or rate limiting in general (the MRAI timer on Cisco boxes), on your monitoring session. If you do use rate limiting, and I am sorry that the scale on the axis is different in the two plots, which makes the difference hard to see, it stays pretty constant. So it seems that this rate limiting in the protocol configuration is very effective at masking this trend.

On the other hand, how does densification limit the visibility of routing changes? What we did is, for each event that happened in the network, we looked at all our four monitors and saw how many of them observed the event. Not all monitors observe all events. Some events give updates that are seen by all monitors. Some, the red ones, are only seen by a single monitor, by this single monitor in this case, and some, the green ones, are seen by two or three out of four. What we observe here is that there is a growing trend in events that are only seen by a single monitor, meaning that there is an increase in locality in update propagation.

Correspondingly, there is a decreasing trend in those that are seen by more than one monitor but not all of them. The fraction seen by all monitors is relatively stable; these might be events that need to be globally propagated, for instance the complete withdrawal of a prefix.

So, this is the end of my talk. Thanks for listening. Any questions?
(Applause)

CHAIR: Thank you very much. Next presentation then is Ben on the v6 PMTUD behaviour.

BEN: So, I am representing the WAND network research group, and recently I was involved in an investigation into path MTU discovery with a particular focus on IPv6. We also tested IPv4, but I don't have time to present those results in this presentation; if you want to know them, feel free to e-mail me and I'll let you know.

So, Internet communications are most efficient when the largest possible packet size is used. Path MTU discovery (PMTUD) is the mechanism used by end hosts to find the largest packet size that an Internet path can accommodate. Based on experiences in IPv4, there is a common perception that PMTUD is unreliable in IPv6. But is this really the case? We decided to find out. So, we implemented a PMTUD test and used it to survey a number of dual-stack servers on the Internet.

So, just to recap on PMTU discovery: when a router receives a packet that is too big for the next-hop link, it sends a packet-too-big message informing the sender. When the sender receives this, it will reduce its packet size accordingly and send smaller packets.

Now, there is an important difference between IPv4 and IPv6, and that's in fragmentation. In IPv4, intermediate routers can fragment packets, provided that the IP DF (don't fragment) bit is not set.

Our testing of the top 1 million web servers found that about 97% of them set the DF bit, i.e. have PMTUD enabled.

Now, in IPv6, intermediate routers cannot fragment IPv6 packets; only the sending node can. A packet whose size exceeds the next-hop MTU will be discarded and cause an ICMP packet-too-big message to be sent.

The success of PMTUD is particularly important in IPv6. The reason is that tunnelled IPv6 connectivity (6to4, 6in4, Teredo, etc.) is currently common, and these tunnels have small MTUs to allow for the extra overheads. Packets are then likely to be too big and discarded, and therefore PMTUD is needed more often in IPv6.
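The small tunnel MTUs follow directly from the encapsulation overhead. A quick back-of-the-envelope calculation (the overhead figures are the standard header sizes, supplied by us, over an assumed 1,500-byte underlying link):

```python
# Effective MTU over a 1,500-byte link, minus encapsulation overhead:
# 6to4/6in4 add an outer IPv4 header (20 bytes); Teredo adds IPv4 + UDP (28).
LINK_MTU = 1500
overhead = {"6to4": 20, "6in4": 20, "Teredo": 20 + 8}

for tunnel, oh in overhead.items():
    print(f"{tunnel}: {LINK_MTU - oh} bytes")  # e.g. Teredo: 1472 bytes
```

Any 1,500-byte packet from a native host therefore exceeds the tunnel MTU and triggers exactly the packet-too-big exchange described next.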

Okay. So, this diagram here shows a path between a client and a server, and most of the links are 1,500 bytes except for the one between R1 and R2, which is 1,400. To begin with, the server sends a 1,500-byte packet. However, R2 can't forward it, due to it being too big for the next-hop link, so it discards it and sends a packet-too-big message. The server, when it receives this message, reduces its packet size accordingly.

So that's when it works well. However, there are some problems that can break PMTUD, the prime one being overzealous firewall administrators filtering all ICMP, including the packet-too-big messages. Another is IPv6 tunnels not sending packet-too-big messages in the first place.

Such situations cause what's known as PMTUD black holes, where the server will continue to send large packets that are discarded by a router, while the packet-too-big messages are being filtered.

This is particularly bewildering to the end user, because what they'll see is the connection successfully established; that's because the TCP SYN packets are small enough to get through. However, the large packets do not get through and the connection hangs.

There is also a diagram there that shows this: R3 is filtering the ICMP messages and the server, as a result, isn't learning to reduce its packet size to 1,400 bytes, which is the path MTU in this case.

A few workarounds have been used, the first being clamping the MTU on the IPv6 interface to 1,280 bytes; this limits packets to 1,280 bytes and effectively bypasses PMTUD. Another option is to reduce the MSS in TCP SYN packets to 1,220 bytes, which only affects TCP. Both of these solutions aren't ideal because they come with a performance hit. It is certainly preferable to fix the ICMP filtering problem, especially if we want to use larger MTUs one day.

So this diagram here shows the same path. Although the path supports a packet size of up to 1,400 bytes, the server is conservative and uses 1,280-byte packets, which get to the client. No worries.

So, what I did: I created a PMTUD test and implemented it in Scamper. This is an Internet measurement tool; it does ping and alias resolution, amongst other things. Scamper is free and open source and you can get it at the URL on the slide there. My test adds the ability to do PMTU discovery, and it can do this in both IPv4 and IPv6. It can be used to test HTTP, SMTP and DNS servers, and it's been implemented in a way that's generic: for example, DNS is just a small page of code.

The test runs on systems that use the IPFW firewall, and Linux support is planned for the near future.

So here is how the PMTUD test operates. We first establish a TCP connection to the target server, specifying a TCP maximum segment size of 1,440 bytes. After that we send a request packet, and this packet is specially crafted in an attempt to elicit a large response from the server; you'll see why this is necessary in a minute. Determining PMTUD success or failure depends on the response packet size: if the response packet is larger than 1,280 bytes, we use what is known as the reduced packet size technique; otherwise we use the fragmentation header technique.

Finally, after the test, we do some extra analysis to detect additional successes and failures. But this is not part of Scamper.

The basic idea of the reduced packet size algorithm is: does the server use smaller response packets after we send a packet-too-big message asking it to do so? If yes, we infer PMTUD success; otherwise we infer PMTUD failure, which is likely due to filtering. This technique requires large response packets from the server: in the case of IPv6, they must be larger than 1,280 bytes, because hosts will not reduce their packet size below this in response to a packet-too-big message specifying a smaller MTU.

This idea is taken from the paper "Measuring the Evolution of Transport Protocols in the Internet".

So here is an example of the reduced packet size algorithm being used to infer success. These diagrams don't show the TCP control packets. Initially we send a request packet to the server and get a response which is 1,500 bytes in size. We then send a packet-too-big message asking it to reduce its packets to 1,280 bytes. The server does this, and when we receive the retransmitted packet, now 1,280 bytes, we infer PMTUD success.

And here is an example where we infer failure. Like before, we send a request and get a response that is 1,500 bytes in size. We send the packet-too-big message; however, this time it is filtered somewhere between the server and the client. Eventually the server will time out and retransmit the response, using the same size of 1,500 bytes. We retransmit the packet-too-big message up to two times, and if after the second retransmission the server still hasn't reduced its packet size, we infer failure.
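Both outcomes of the reduced packet size algorithm can be condensed into one inference routine. This is a sketch under our own naming, driven by the sizes of the server's (re)transmissions observed after each PTB we send; Scamper's real implementation is more involved.

```python
def infer_reduced_packet_size(sizes_after_ptb, ptb_mtu=1280, ptb_retries=2):
    """Success if any response seen after our PTBs has shrunk to <= ptb_mtu;
    failure if the server keeps retransmitting full-size packets even after
    the PTB has been re-sent `ptb_retries` times (likely ICMP filtering)."""
    for size in sizes_after_ptb[: ptb_retries + 1]:
        if size <= ptb_mtu:
            return "success"
    return "failure"

assert infer_reduced_packet_size([1280]) == "success"            # shrank at once
assert infer_reduced_packet_size([1500, 1280]) == "success"      # after one PTB retry
assert infer_reduced_packet_size([1500, 1500, 1500]) == "failure"
```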

Okay, here is the second algorithm: frag header. The basic idea is: does the server include a fragmentation header in its response packets after we send a PTB specifying an MTU less than 1,280 bytes? This behaviour might seem bizarre, and if you don't believe that hosts actually do this, see RFC 2460, section 5.

So, if yes, the server does include a fragmentation header, we infer PMTUD success; otherwise the result is inconclusive. This algorithm can only be used to infer success. The reason is that we tested 688 IPv6-enabled web servers, sending each of them a PTB specifying an MTU of 1,000 bytes, and found that less than half of them exhibited the behaviour mentioned in the IPv6 RFC.

So, using it to infer failure would result in many incorrect results. And the key advantage of this algorithm is that it does not require large response packets.

So here is the frag header algorithm being used to infer success. We send the request and this time get a response that is 1,100 bytes in size. So we send a packet-too-big message specifying an MTU of 1,000 bytes, and you'll see here that the server sends a 1,108-byte packet which includes an IPv6 fragmentation header; the packet isn't actually fragmented.
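Detecting that fragmentation header boils down to walking the IPv6 extension-header chain looking for protocol number 44. A minimal parser sketch (ours, not Scamper's actual code; it handles only the common chained headers):

```python
FRAGMENT = 44                 # IPv6 Fragment header protocol number
CHAINED = {0, 43, 60}         # hop-by-hop, routing, destination options

def has_fragment_header(pkt: bytes) -> bool:
    """Walk the extension-header chain of a raw IPv6 packet."""
    nxt = pkt[6]              # Next Header field of the fixed IPv6 header
    off = 40                  # the fixed header is 40 bytes long
    while True:
        if nxt == FRAGMENT:
            return True
        if nxt not in CHAINED or off + 2 > len(pkt):
            return False      # reached the upper-layer protocol (e.g. TCP)
        nxt = pkt[off]                        # this header's own Next Header
        off += (pkt[off + 1] + 1) * 8         # its length, in 8-octet units

# Synthetic headers for illustration:
tcp_only = bytes([0] * 6 + [6] + [0] * 33)    # Next Header = 6 (TCP), no frag
fragged = bytes([0] * 6 + [44] + [0] * 41)    # Fragment header follows
assert has_fragment_header(tcp_only) is False
assert has_fragment_header(fragged) is True
```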

Also, successful PMTUD can occur before Scamper has a chance to get involved: if PMTUD happens on the path before Scamper sends a PTB, Scamper will only ever see small response packets. But we can detect when this happens by checking the following criteria: if the server's advertised MSS is greater than 1,220 bytes, and we receive a 1,280-byte response packet from the server, and another data packet followed it, we can infer that the server learnt of a 1,280-byte tunnel, because it is very unlikely that the server would use a 1,280-byte packet otherwise.

We also have a post-analysis technique for inferring failure. PMTUD failure can mean that Scamper never receives the server's response packet, so a test in this case would result in no data. We can detect when this happens by repeating the test using a smaller MSS of 1,220 bytes, which means that all server response packets can make it to Scamper without being discarded for being too big. If, in the second test, a response packet is received, then we can change the no-data result to a PMTUD failure.

As I mentioned before, the reduced packet size technique needs large packets, so for HTTP, SMTP and DNS we devised techniques for eliciting them. For HTTP we wrote a script that searches the web server for a URL to a large object that it serves, either an HTML page or an image or something like that; requesting such an object should result in a large packet from the web server. We do this separately for IPv4 and IPv6, because some web servers will serve different content depending on whether they are accessed by v4 or v6.

SMTP is a bit trickier; there is no general technique that can be used for all MTAs, so we chose three popular ones. Anybody who has sent a HELP command to Sendmail will know that the responses are very long, and it turns out that sending that command is enough to elicit a large response packet from the server. Exim is a bit trickier, as it has short help messages, but it turns out that specifying a really long domain name in the EHLO command means that the server echoes this large domain name in its response, and that's a large enough packet.

With Postfix, we send multiple EHLO commands in the same packet.

All three techniques were implemented, but in the end we only tested Sendmail, as the techniques for Exim and Postfix might be considered a breach of mail server etiquette. I'd be interested to hear your opinions on this.

Finally, for DNS, you will see taking up half the slide there a very large TXT record. This was configured on our name server, and a recursive query for it should result in a large packet. We can therefore use this to test recursive name servers.

Okay. So, this slide describes how we collected the web servers, mail servers and DNS servers that we used in the final testing. To qualify for testing, a server must be dual stacked, have unicast IPv4 and IPv6 addresses and be reachable on those addresses. We started with the top 1 million web sites list, which consisted of 987,000 unique domains.

To get the web servers, we simply prepended www. to each of the domains and then queried each of those for an A and a AAAA record. For mail servers we queried each domain for MX records, and for name servers we queried for NS records.

The batch test was run from five different vantage points around the world. Four of them had native IPv6 connectivity and one of them was tunnelled using 6to4; that was actually a vantage point in my flat at home.

It turns out that the vantage point has a significant effect on the results. For example, the New Zealand one is behind a transparent web proxy, which means that all requests, instead of going to their intended target, went to the same host. And one vantage point has a 1,280-byte tunnel configured on the next hop, which means that the server response packets were limited to 1,280 bytes.

Okay, so this shows the test population for the batch test; these are the servers that met the criteria I mentioned in the previous slide. As you can see, there are not a hell of a lot of them, and this really shows that IPv6 adoption amongst content providers has a long way to go.

So, we ended up with 825 dual-stacked web servers, 643 dual-stacked mail servers and 1,504 dual-stacked name servers.

For each test we collected the result of the PMTUD test, so that's success, failure or other, and also the MSS that the server advertised to us.

As well as all packets sent and received during the test.

Now, on to the results. The Y axis there shows the number of tests that resulted in a particular result, so we have got three bars there: success, failure and other. Success and failure are divided into the techniques that were used to infer them, so you have got post-test analysis, fragmentation header and reduced packet size. The other bar is also divided into categories such as TCP reset, no connection, which means we couldn't establish a connection to the server, and those whose responses were too small.

As you can see, over three quarters of them resulted in success, and just 2% in failure. If we only take into consideration the successes and failures, we can calculate a failure rate of 2.6%.

And here is SMTP over IPv6. As you will notice, the number of results represented here is quite small, because we only tested Sendmail. Here we have got a failure rate of 4.4%.

And for DNS over IPv6, you will see that a large number of the servers sent us packets that were too small, and this is because many of the servers tested didn't support recursion. And that's a good thing, because we all know that authoritative name servers should not also support recursive queries.

And once again, the failure rate was very small. 1.1%.

Okay. And here, this pie chart shows the distribution of maximum segment sizes that were advertised by the servers. So over three quarters of them advertised 1,440 bytes, and these are hosts with native IPv6 connectivity. The 9% of servers that advertised an MSS of 1,220 appear to have taken the recommendation of clamping their MTU to 1,280. The 5% that advertised an MSS of 1,420 bytes appear to have tunnelled connectivity. And 1,380 bytes: Cisco firewalls, by default, will rewrite the MSS in TCP SYN segments to 1,380 bytes.

The 1212 one was a tricky one, and it turns out that it's the MSS used by Google, and it appears that some other organisations have followed suit.
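The relationship between these MSS values and the underlying link MTUs is straightforward: for TCP over IPv6 with no options, the MSS is the MTU minus the 40-byte IPv6 header and the 20-byte TCP header. A small sketch showing how the common MTUs map onto the values in the pie chart:

```python
# For TCP over IPv6, MSS = link MTU minus the fixed 40-byte IPv6
# header and the 20-byte TCP header (assuming no TCP options).
IPV6_HEADER = 40
TCP_HEADER = 20

def ipv6_tcp_mss(mtu: int) -> int:
    """Maximum TCP segment size for a given IPv6 path MTU."""
    return mtu - IPV6_HEADER - TCP_HEADER

print(ipv6_tcp_mss(1500))  # native Ethernet MTU → 1440
print(ipv6_tcp_mss(1280))  # IPv6 minimum MTU → 1220
print(ipv6_tcp_mss(1480))  # typical tunnel MTU → 1420
```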

So, I have created a web interface for the PMTUD test. You can go to it at the URL there. I warn you that it is in the very early stages, but it should still work, and if you do manage to break it, please send me an e-mail.

So, what you do is, you enter the e-mail address that you'd like the test results to be sent to, and you specify a URL of an object to be requested from the host to be tested. This should be a URL to a large object, an image for example, because the test will use the reduced packet size technique. You choose whether you want to do IPv4 or IPv6 or both, and you can optionally specify an IPv4 or IPv6 address. If you leave these blank, then we'll simply resolve the domain name in the URL to an address.

And yes, you need to register first before you can use the test.

So, in conclusion, my results suggest that PMTUD failure in IPv6 is not as prevalent as widely believed. And so the take home point is the combined failure rate: if we include all the successes and failures for HTTP, SMTP and DNS, it's just 1.9%, and I am very interested to hear your opinions as to whether this is good, bad, terrible, etc.

So, what can you do to help? Well, you can run the PMTUD test to a host on your network. This can be done either using Scamper on the command line, or alternatively you can use the web interface that I just showed you.

I also highly recommend you read and implement RFC 4890. This includes filtering recommendations for ICMPv6, not just packet too big messages. And to make life easier for you, I have, for common firewalls and router operating systems, given instructions to enable packet too big messages. So we have got here IPFW, Linux iptables, etc. So there is no excuse now for filtering packet too big messages.
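As a concrete illustration of that advice for one of the systems mentioned, a minimal Linux ip6tables fragment to permit the essential ICMPv6 error messages might look like the following. This is a sketch, not the speaker's published ruleset; adapt the chain names and ordering to your own configuration:

```shell
# Allow ICMPv6 "Packet Too Big" (type 2) so that IPv6 PMTUD works.
# These rules must come before any blanket ICMPv6 drop rules.
ip6tables -A INPUT -p ipv6-icmp --icmpv6-type packet-too-big -j ACCEPT

# RFC 4890 recommends permitting the other essential ICMPv6 error
# messages as well, not just Packet Too Big.
ip6tables -A INPUT -p ipv6-icmp --icmpv6-type destination-unreachable -j ACCEPT
ip6tables -A INPUT -p ipv6-icmp --icmpv6-type time-exceeded -j ACCEPT
ip6tables -A INPUT -p ipv6-icmp --icmpv6-type parameter-problem -j ACCEPT
```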

And I'd just like to acknowledge the following people:

Those who provided machines for my use: Dan Wing at Cisco, Bill Walker at Snap Internet in New Zealand; and those who ran PMTUD tests on my behalf: Emile at the RIPE NCC and David Malone. And last but not least, a huge thanks to RIPE for giving me the opportunity to present at this conference. I am having an awesome time in Prague.

(Applause)

So, are there any questions?

AUDIENCE SPEAKER: Did you do or consider any tests with an MTU larger than 1,500, i.e. jumbo frames?

SPEAKER: I didn't, but it's an interesting idea.

AUDIENCE SPEAKER: We did this about a year ago and the results were pretty bad. We had a large carrier that just dropped anything over 4,000, so maybe you should just do it. You could run a test in our network.

SPEAKER: Absolutely. Thanks.

AUDIENCE SPEAKER: Does this service support testing DNS requests? Because you mentioned other protocols. I just ran a test on my DNS server...

SPEAKER: The web interface currently does only HTTP tests. However, if you use the Scamper application, which I have provided a link to, that can do HTTP, SMTP and DNS, so yes, you will be able to test your DNS server using that.

AUDIENCE SPEAKER: The tool, yes, but not the web interface?

SPEAKER: Not currently in the web interface, but soon.

AUDIENCE SPEAKER: The second question is: is this public, or do you want to keep it low key?

SPEAKER: Public is fine, yes, but there is lots more work to do on it. As I said, if you break it, be sure to tell me.

AUDIENCE SPEAKER: Thanks.

AUDIENCE SPEAKER: George Michaelson, APNIC. I am speaking in a sort of personal role. This isn't about address management policy or anything. I am pretty glued onto the other camp on this, you know. I am not saying that your conclusions are wrong and I am not saying that your research work is not sound. I welcome you doing this. I think it's great to have this exercise. Nonetheless, I still walk away saying the efficiency question isn't enough, that I don't feel I want to risk the consequences of a potential failure, and I know that you say it's only 1.8 or 1.9 or 1.6, somewhere around that level, but I think you haven't sufficiently accounted for the potential impact of behaviours even at that low number. For instance, we are quite aware now there is a behaviour in DNS facing systems that is application specific behaviour; it's not the network stack, it's application reaction. They try to do two things in parallel, one v4 and one v6, and the one that wins the race determines their future binding to services in 4 or 6. Now, IPv6 fragmentation, and I may be wrong, I may misunderstand, but my understanding is it doesn't permit intermediate systems to do reassembly and passing on. It has to say sorry, drop, and the originator has to say oh well, I'll do that all again. Which is a state overhead and which is a delay overhead. So, there is an element here where causing this fragmentation and failure, because of the architecture in 6, has implications for decisions about what arrives first, the 4 or the 6. Or you could consider the DNS. If there is something in DNS that doesn't work because of a PMTUD problem, you are going to have a preference to use the alternative that did work. So, I think I still subscribe to a feeling that what you are doing is important and you should continue to do it, but I don't entirely buy the conclusions that you are coming to. And that's just the difference between us.
It will be part of a debate, I imagine, in the community, because there is traffic on lists, right; we talk about this stuff. So the work is good; I am not there on the conclusions, but that's a personal opinion.
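The parallel-connection behaviour described above (later standardised as "Happy Eyeballs") can be sketched roughly as follows. This is a simplified illustration, not any particular application's implementation: real implementations stagger the attempts and cache the winning address family.

```python
# Rough sketch of the application behaviour described above: race a
# TCP connection over IPv4 and IPv6 and keep whichever completes
# first. Whichever family "wins the race" is the one the application
# binds to for subsequent traffic.
import socket
import concurrent.futures

def connect(family, host, port, timeout=3.0):
    """Resolve host for one address family and open a TCP connection."""
    infos = socket.getaddrinfo(host, port, family, socket.SOCK_STREAM)
    sock = socket.socket(family, socket.SOCK_STREAM)
    sock.settimeout(timeout)
    sock.connect(infos[0][4])
    return sock

def race(host, port=80):
    """Return (family, socket) for whichever family connects first."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
        futures = {pool.submit(connect, fam, host, port): fam
                   for fam in (socket.AF_INET, socket.AF_INET6)}
        for fut in concurrent.futures.as_completed(futures):
            try:
                return futures[fut], fut.result()
            except OSError:
                continue  # this family failed; wait for the other
    raise OSError("neither address family connected")
```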

SPEAKER: Okay. Thanks.

CHAIR: All right. Thanks.
(Applause)

CHAIR: So, that's the end of this session. And we have a coffee break for 25 minutes. And then it's the IPv6 Working Group.

(Coffee break)

Live captioning by Mary McKeon
Doyle Court Reporters Limited,
Dublin, Ireland.