
MAT WORKING GROUP ‑‑
13 MAY 2015
2:00 P.M.:


CHAIR: Good afternoon, welcome to the MAT Working Group. As we usually overrun by five or ten minutes, I thought I'd really start on time and see how we are doing this time. I already got a stop sign from the first row.

What is different this time, and I'll actually talk about that for two or three minutes afterwards, is that we have a new co‑chair. This is Nina in the first row, say hello to Nina. We also have, as usual, a scribe over here. And a stenographer.

When you go to the microphone and you either have a question or a comment later on, then please state your name so that we know who you are and who you are working for.

Next point would be the minutes. I'm sure you all read the minutes multiple times because they are lovely. Any questions or comments on them? Is anyone against approving the minutes? Good, therefore the minutes from RIPE 69 in London are approved.

Having a look at what is on the agenda today. So, the first part is the new co‑chair, we already quickly talked about it. For the ones who haven't been on the mailing list or weren't at the London meeting: the former co‑chair, Richard, stepped down because, well, apparently he had too much work with his day job, so the last session I was leading alone. And then, like all the other Working Groups, we started to publish a little process which helps us find new co‑chairs, instead of having them just randomly appear from somewhere. The people on the mailing list found consensus, so now we have the little process. And then I sent out a call for nominations for people who are interested. I talked to probably five or six people who asked what it was all about and how it works, and I think some of them were scared off because they had never been on the mailing list. And Nina showed interest, and as you have seen from the mailing list, we had consensus on that, so from now on she is my new, well, our new co‑chair and helps me run the session.

The people who paid attention to the agenda in the past might have figured out that we always had a little bit of a theme, at least most of the time. So, some agendas we had before had some IPv6 in them, some were more on the mobile side, and if you have a look this time, besides RIPE Atlas, which we always have, it has more to do with ISPs, BGP and a little bit of IX, so we are more on the BGP side. With that I'm welcoming George Michaelson, who should be here in the room. That means ‑‑ here we go ‑‑ so let's see if we can change the order and ask Vesna: can you do your talk now? It doesn't make a big difference, I guess, and then we come back to George afterwards.

VESNA MANOJLIVIC: Hi everybody. I'm Vesna, I am a community builder for measurement tools at RIPE NCC, and I'm here to give you very, very short news and update about RIPE Atlas.

So, at this meeting already, there were several technical presentations about the use of RIPE Atlas, about the new features, we had the workshop on Monday evening.

So, I will start with the community news, because this is my favourite. And I'm very grateful and I want to use this opportunity to thank everybody who participates in RIPE Atlas and makes it such a great project: all the volunteers who are hosting the hardware probes, people who are hosting RIPE Atlas anchors, ambassadors who help us distribute the probes in the local communities and at the conferences where they are going, the users that are using the RIPE Atlas data, sponsors that help us with money, people who develop software. A big thank you to all of you.

So this year we had three sponsors until now, there are in total 230 ambassadors, and there is a new feature for that so they can actually say which conference they are going to go to, and you can see that list if you follow the link from the slides; the slides are already online. And there are a lot of people who are contributing code and it's all on GitHub. There is also an initiative to share the workshops and tutorials, and I'm inviting people from universities to share their syllabus and tell us which exercises they are giving to students, so that other people can reuse them; some of that is already there on GitHub.

So, our goal is to make RIPE Atlas bigger and bigger, but we have noticed that there is a large concentration in Europe, so we are now trying to kind of spread it out a little bit and to reach more networks rather than pile up more probes in the same network. So, we introduced a new guideline, let's say, and we are trying to limit the distribution of probes to one probe per AS number. And it's not very strict, so let's see how that goes. And at the same time, we are trying to cooperate with the other regional registries to reach their membership and to make RIPE Atlas spread in other parts of the globe.

We also have changed the way how we purchase the probes, so this year we are actually not going to use the RIPE NCC membership money to buy new probes, we only spend the money that we receive from sponsors. So, if you want to be able to get these probes shipped to you for free, please approach me later and we can make some deals.

And some numbers ‑‑ well, I won't read this, just spend a few seconds looking at it. I hope you're impressed by the huge numbers. What I like specifically is that we keep getting one more anchor every week; it's an average, but it's really consistent, and there are many more applications coming in. So we can expect this growth to continue. And we see a steady growth in the probes that are being shipped, but as I said, it's somewhat in conflict with the goal of spreading them out a bit more throughout the world, so the probes are not growing as fast as before. And there are many more measurements being scheduled all the time.

And a very short overview of the new features. I have some additional slides after the question mark slide, so you can actually study this in detail if you download it. So, we introduced NTP measurements, and we modified the SSL measurement to be a TLS measurement, so it's not exactly a new type, but it might be news for some of you.

Then, the major new feature was the data streaming. There was a lot of talk about it, so I won't go into details, but there are two kinds of realtime data that you can get: you can get measurement results in realtime for any measurement type, and you can also get just the probe status, whether it's connected or disconnected. Based on this, there were interesting use cases, for example about network outages, and we did it for a country outage, or the electricity going down, or a specific network having an outage. We improved a lot in the user interface, more APIs, and we also enabled probe tagging: there are some system tags where we say this is a probe at home, or it's in the backbone, or it's behind a NAT, but you can also come up with your own tag: "This is a blue probe"; "this is the living room probe", whatever, and then you can find your probe later based on those tags.
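The probe-status stream described above lends itself to simple outage detection. As a minimal sketch (the event shape, the function name and the 50% threshold are all invented for illustration; the real RIPE Atlas stream delivers richer JSON objects per event), one could watch connect/disconnect events and flag the moment too many probes in a network go dark:

```python
def outage_windows(events, total_probes, threshold=0.5):
    """events: list of (timestamp, probe_id, connected) status updates.
    Returns the timestamps at which the share of disconnected probes
    first crosses `threshold`. Illustrative only, not the Atlas API."""
    state = {}      # probe_id -> last known connected state
    alarms = []
    alarmed = False
    for ts, probe, connected in sorted(events):
        state[probe] = connected
        down = sum(1 for c in state.values() if not c)
        frac = down / total_probes
        if frac >= threshold and not alarmed:
            alarms.append(ts)   # outage alarm raised at this timestamp
            alarmed = True
        elif frac < threshold:
            alarmed = False     # network recovered; re-arm the alarm
    return alarms
```

With four probes and a 50% threshold, two probes disconnecting raises an alarm, a reconnect re-arms it, and a further disconnect raises it again.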

And then another interesting use case is something that Emile Aben was working on very much, and that is a tool called IXP Country Jedi; you can see where he got his inspiration. He was trying to find out, of the traceroute paths between the probes in a certain country, how many of them stay local and how many of them go through the local IXP. And there are a lot of use cases later on in the slides in the additional material.
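The core computation behind a tool like IXP Country Jedi can be sketched in a few lines. Everything below is illustrative, not the tool's actual code: hop data is assumed to be already resolved to (ASN, country) pairs, which in reality is the hard part (and where geolocation projects like OpenIPMap come in).

```python
def classify_paths(paths, country, ixp_asns):
    """paths: list of traceroute paths, each a list of (asn, country_code)
    hops. Returns (fraction staying in-country, fraction crossing one
    of the given IXP ASNs). A toy sketch of the IXP Country Jedi idea."""
    stays = via_ixp = 0
    for path in paths:
        # a path "stays local" if every resolved hop is in the country
        if all(cc == country for _, cc in path):
            stays += 1
        # it "uses the local IXP" if any hop belongs to an IXP ASN
        if any(asn in ixp_asns for asn, _ in path):
            via_ixp += 1
    n = len(paths)
    return stays / n, via_ixp / n
```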

And we are also still busy, and this is what we will be working on for the rest of the year. We are planning to introduce HTTP measurements, with the measurements being enabled only towards RIPE Atlas anchors, and later on we are going to work on the wifi‑enabled probe. More APIs, a historical view of the data streaming, OpenIPMap, another important project: the crowdsourced geolocation of infrastructure IP addresses, so please take a look at this and tell us where your IP addresses are located geographically, so we can put these traceroutes on the map with more accuracy. We are planning to have a security review of the whole of RIPE Atlas. We are working on webinars, and we want to reach 10,000 active probes by the end of the year, or sooner, and 150 anchors. We have a roadmap; take a look, all of this is documented there.

These are our contact details. Thank you for your attention. Any questions?

CHAIR: Any questions to Vesna? Or comments?

AUDIENCE SPEAKER: I'm Andrei from Success Net and also an Atlas anchor host. As the Atlas anchor project goes on and time keeps passing, we are going to reach the limit of three years lifetime per anchor. After that it will be deactivated according to the current rules of Atlas anchor hosting. And I'm not happy with the idea of throwing away hardware that is completely functional just because it reached the end of its warranty; even with the warranty, if it's there, it's still without any support, so if it breaks, for instance now, the Atlas anchor will still be unreachable for days or weeks until replacement hardware is shipped. So, I don't actually see any point in this strict enforcement of a three‑year lifetime per anchor, and if you are not going to reconsider it, we are going to face this this time.

VESNA MANOJLIVIC: Thank you. That's a very good point. I'm sure we won't be very strict; we will rely on some best operational practice and discuss this with the rest of the anchor host community to see what the best solution for it is. We had the goal of providing stability; that was the only reason why we put in three years. We didn't know at the beginning how it was going to go, and that was more like a recommendation. So, now that we are nearing that time, we will look into it again.

AUDIENCE SPEAKER: Jen: thank you very much. I'm really glad to see all these new features coming in. Just a quick feature request, if I may, because I actually spend some time trying to get probes evenly distributed across different AS numbers and different subnets, like, say, a /19 in IPv4 or a /22 in IPv6. When I request X probes, I'd like them to be located in different subnets and different AS numbers across the world; I think it would really help worldwide measurements.

VESNA MANOJLIVIC: That's an interesting idea, but again this doesn't depend on us; it's more for the community as a recommendation: please put the probes in a topologically diverse way.

AUDIENCE SPEAKER: When I use the API and request, like, 20 probes, I would like to be able to say I want those probes to be as evenly distributed as possible.

VESNA MANOJLIVIC: That's a good feature request. I will record it.

AUDIENCE SPEAKER: Thomas King. I just want to say thank you for the cool IXP mapping tool that's really helpful for us and also for the IXP community. So that's really cool. Thank you.

VESNA MANOJLIVIC: Thank you. And all the gratitude goes to Emile, can you stand up. There he is.

(Applause)

AUDIENCE SPEAKER: Marco, just another probe host. Can you elaborate a bit on the security review you mentioned? Is it going to be some code review or penetration test or...

VESNA MANOJLIVIC: I'm going to defer this question to my colleague, Robert, he is there at the mic.

AUDIENCE SPEAKER: Robert Kisteleki, RIPE NCC. Before answering that, to Jen: it's definitely on our agenda. You have to keep in mind that on the map it looks nice if you have a fully distributed set of probes, you know, from all over the world. But, as an extreme example, if we had one probe in Antarctica, then everyone would want to use it and it would quickly become overloaded. So the system also has to balance the current usage of the probes, which most of the time leads to an unbalanced probe selection. That's something that we are trying to tackle.

VESNA MANOJLIVIC: Just another answer: stay tuned. In my talk about the hackathon results there is somebody who actually suggested a solution to this, so...

AUDIENCE SPEAKER: Robert: And he ignored the busyness of the probes, which makes it a lot easier. For the security review, what we are planning to do is ask external consultants to look at what we did and how we did it, and to tell us if they see holes that we haven't seen before. It will probably include code review as well; we'll see how much we can negotiate with them and who they are going to be. We are going to put out an RFP very soon to start that process. But the point is that we want an independent eye, so it shouldn't only be us saying yes, it's fine, no worries, nothing wrong can happen. Because that would be a bit bullish.

For the moment we believe we are good but it's nice to have someone telling us if we have overlooked something. That's the goal.

CHAIR: Good. Thanks a lot. We'll hear later more from Vesna. Let's see our next speaker ‑‑ here we go, our first speaker is here.

GEORGE MICHAELSON: Please accept my apologies, I was too enthusiastic manning the ISOC booth.

Before I actually talk, I'd like to point you to the talk that you really should be reading, which is not my talk. This is a paper Mark Allman wrote for this year's PAM, and it's an amazing summary of views around network measurement, data sharing, what we're trying to do, empiricism, science, methodology. It's a brilliant paper. If you haven't read this paper, read this paper. In fact, don't listen to me, just go and read this paper.

So, I'm not going to go to any great length about what APNIC is doing, because I'm sure you have heard us talking a lot about how we use adverts and placement to measure user behaviour. But as a quick summary, this is the current state of play of our ads: we are seeing around 500,000 to 600,000 randomized measurements a day across the global Internet, and we have been developing this system continuously. The basic technique: we are presenting pixel fetches that request resources that can lie behind different, unique names, and they test qualities like dual‑stack behaviour, v6‑only, various different attributes, and we do a basic correlation technique. We're doing this using an advertising channel. The goal of advertising is to make money, I mean, that's why Google is in that business, and they have a basic metric they use, cost per mille, which drives their model of how to place adverts, but we are using this technique to gain impressions. We're not actually trying to get people to buy things. So, our bidding adjustment is a mechanism that's kind of playing inside Google's bidding engine, and Google wants that revenue, so it's driven to give us a very large number of impressions worldwide. And it's really quite interesting how they go about doing that.
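The correlation step behind the pixel-fetch technique can be illustrated with a toy classifier. This is a guess at the shape of the logic, not APNIC's actual code or schema: assume each test label records which IP version the fetch arrived over, or None if it never arrived.

```python
def classify_session(results):
    """results maps a test label to the IP version the fetch arrived
    over (6 or 4), or None if it never arrived. The labels ("v6only",
    "dual", "v4only") and the rules are illustrative only."""
    capable = results.get("v6only") == 6       # fetched a v6-only resource
    prefers_v6 = results.get("dual") == 6      # chose v6 when both offered
    if capable and prefers_v6:
        return "v6-capable, v6-preferred"
    if capable:
        return "v6-capable, v4-preferred"
    if results.get("v4only") == 4:
        return "v4-only"
    return "unknown"                           # nothing usable arrived
```

Aggregating such per-session verdicts per ASN or per economy gives the capability tables discussed below.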

So, the placement mechanism is that they have to find new eyeballs; that's their commitment as good brokers of ads: clean eyes every day seeing our advert, and we pay for that placement. The first time you lodge an ad with them you get this huge spike of placement, as they say, oh, it's the billing cycle, I'd better get all of your money. Then they back off and present a lower rate. Then the next day their model does a much better job of smoothing out the placement, and over successive days you can actually see their model improving the presentation rate and flattening down. We have tracked this both at the launch of a new ad and then across a long run. And we think we can show that there's a lovely consistent, regular placement of these tests in front of people. The spiking you see there is the fairly normal diurnal pattern you would expect to see when you interact with real people. We have looked at that, decomposed it; there are the three regions we run in, and we can see the separate diurnal waves for an Asian footprint, a European footprint and an American footprint. This is really quite nice; we are getting to see people quite evenly, actually.

So, we have this technique and we are running multiple adverts in bands to try and even out display time. But we are aware of some problems. We're not actually getting data in quite the volume we'd like from everywhere in the world, particularly from the emerging Internet economies. We are doing very well with the G20; we have to think about how we're going to get better numbers for the rest of the world.

So, the uniqueness question: are we really getting uniques? We did a basic plot and we have been able to demonstrate that we're getting an almost linear rate of uniquely new presented addresses. If you look down the bottom here in red you can see the presentation rate; the blue one, the variant there, that's using our JavaScript method, and it shows kind of a daily differential hump, and you can see the classic logistic supply curve tailing off here in the JavaScript presentation. So yes, we do get lots of unique IPs. Success, achievement unlocked. Wonderful, let's do science with this. Let's look a bit deeper.
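Checking that the ads keep reaching fresh addresses is simple cumulative set accounting, along these lines (a sketch with made-up names; the real pipeline obviously works at far larger scale):

```python
def daily_new_uniques(days):
    """days: an iterable of per-day collections of client addresses.
    Returns, for each day, the count of never-before-seen addresses,
    the quantity plotted to verify an (almost) linear unique-IP rate."""
    seen, out = set(), []
    for addrs in days:
        fresh = {a for a in addrs if a not in seen}  # first-time addresses
        out.append(len(fresh))
        seen |= fresh                                # remember them
    return out
```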

This is the table of the distribution of addresses that came to our experiment, sorted by the most popular. So this is the weighting of where the addresses are coming from in the world. Keep an eye on that list. And have a look at this, which is the ranking of the population of declared users of the Internet as lodged with the ITU. Now, there are issues to be had with how the ITU collects and updates this data; this is not a frequently updated survey. As an example, last year China discovered 150 million extra users half‑way through the year. 150 million out of 535 million is a significant increase. So, there are some question marks, but nonetheless this is the declared public ranking of the population of the Internet.

This is the overlap between the ranking we got and the ones that occur in the ITU stats. So we did agree on some of the members of the top 20, but that's the only one that we successfully positioned according to its true ranking by the declared Internet population. We actually do have quite a problem with the skew of presentation of adverts, and there is a reason for this. Google has to do the best job it can; it's looking for fresh eyes to present ads to, and it's also looking for cheap eyes, because we drove the bidding engine there, and unfortunately, cheap eyeballs aren't necessarily in economies that have the right ranking for relative numbers. There are a lot of people in Indonesia visiting websites that will take our ad placement; they are quite cheap to present to, so we pick up a lot of Indonesian traffic. However, we have worked out a way to adjust our information using those regional world totals.

But then that introduced this question: we don't think any of the other people who are regularly displaying world Internet uptake are actually doing this class of adjustment. So, our idea of the population of the world and the distribution of v6 has been adjusted to take account of the relative population ranking of each economy on the Internet. And we consistently see one or two percent variance from the headline figure that a lot of other people are presenting. And we think, because on an individual economy level we have very strong agreement, that this variance is because we are performing a population adjustment. So, this is the Google population trend, the green line at the bottom; it's a lovely clean signal, the spiking is the real trending of the behaviour, it's a weekend spike because domestic use is the predominant factor. We're the top line, and it's obviously a much more noisy signal, and these are classic experimental problems. But the overall trends are really very good. The problem is the absolute value: if you look here, we are reporting at this point 2.5 when the rest of the world is being reported as 4.5, and we think the variance is because of our adjustment factor.
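The adjustment described here amounts to re-weighting per-economy capability rates by ITU user populations instead of by raw sample counts. A minimal sketch with made-up numbers shows how the two global figures can diverge by a couple of points when ad sampling is skewed toward cheap markets:

```python
def v6_estimates(per_economy):
    """per_economy: {name: (samples_seen, itu_users, v6_rate_percent)}.
    Returns (raw, adjusted) global v6 percentages. 'raw' weights each
    economy by how many samples the ads happened to reach; 'adjusted'
    re-weights by ITU-declared user population. All numbers invented."""
    raw_n = sum(s for s, _, _ in per_economy.values())
    pop_n = sum(p for _, p, _ in per_economy.values())
    raw = sum(s * r for s, _, r in per_economy.values()) / raw_n
    adj = sum(p * r for _, p, r in per_economy.values()) / pop_n
    return raw, adj
```

An over-sampled low-v6 economy drags the raw figure down; population weighting restores the larger economy's influence.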

So, there's another problem under the covers. We know that quite a lot of ISPs deliberately deconstruct their routing down to individual ASNs, and a good example of this is Time Warner (Road Runner): they have 12 ASNs that are visible in BGP, and we have seen nine of them. So, if I look here at the table of all the autonomous system numbers that are assigned to Time Warner, I can show you the kind of levels of activity we have seen, and you can see here that, quite overtly, some of their regional networks haven't yet had v6 deployed, whereas other ones are showing remarkably high levels of measurement. So, if we did an aggregation and said, oh, no, no, no, we know Time Warner is just one company, let's put all of this data together, we would completely mask all the properties of a staged rollout that's going on in that company. We know that Time Warner is heading to a place where they may have a remarkably high level of v6 penetration, but at the point where we know, in engineering terms, they haven't deployed to half of their ASNs, we'd seriously undercount them because we are measuring at random across all of their ASNs.
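The masking effect of aggregating a staged rollout is plain weighted-average arithmetic. In this made-up sketch, a company with two equally sized ASNs, one at 0% and one at 40% v6 capability, reports only 20% once pooled, hiding the region that has actually deployed:

```python
def company_views(asn_rates):
    """asn_rates: {asn: (samples, v6_rate_percent)} for one company.
    Returns (aggregate_rate, best_per_asn_rate) to contrast the pooled
    view against per-ASN detail. Illustrative numbers and names only."""
    total = sum(s for s, _ in asn_rates.values())
    # sample-weighted average across all of the company's ASNs
    agg = sum(s * r for s, r in asn_rates.values()) / total
    return agg, max(r for _, r in asn_rates.values())
```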

So we actually do have to think about some of these reporting differences and structural differences in BGP. But there's another version of this problem. This is the table of ASNs where an operator has one ASN but is in lots and lots of different economies. So, for instance, if we take Level 3, they have a significant footprint of announcement visibility in Argentina, Brazil and Chile using this one ASN, and also in France. Hurricane Electric, for obvious reasons, are visible in a huge number of economies because they are mediating connectivity on behalf of lots of people worldwide.

So here we have the problem that when we talk about what an ASN is doing, we are misattributing that traffic economically, because in registry terms we believe an ASN is tagged to a single economy, not because it's used in one economy, but because the formalism of registry practice is: where are you registered for business purposes? But we're using that data to inform some of our economic tagging. And this problem, we think, is actually going to get substantially worse over time, as more international behaviours emerge and as more address transfers emerge. The economy of registration is going to stop being a good model for how to do this.

So, we have a way to get uniqueness and to integrate data, and it includes doing a whole bunch of work with the DNS and timing and collating keys, and we also take these declarations of economy of use and databases like MaxMind, but that is starting to become a bit of an issue. So, I would like, at this point, to mention the activity Emile Aben has been doing in your region developing the OpenIPMap project, I may not have called it right, but his work to encourage more people to accurately identify where resources are is really critical, and I'd encourage everyone to get involved in supporting that activity.

So, we have a problem that we only measure in Flash, and we are mitigating this by looking at moving to HTML5. We also don't measure people who block adverts; that's probably not a bad thing if they did it consciously.

So, what's actually going on? Well, we decided that the previous charts we were showing the world didn't look very nice, so we took an idea from Eric Vyncke, took his look and feel and built on that, and we have identified what we think are four or five different cases. So, an economy like Britain: we think we can make a fairly strong case, even allowing for noise, that there really isn't a significant level of penetration; we're down at the low numbers. China looks broadly similar, but it has certain behaviours that make us suspect we actually cannot measure behind the Great Firewall of China.

France is interesting because we can measure a very solid deployment at national level based on one ISP. This is essentially all 6RD in Proxad (Free), and they have reached the maximum penetration that they are going to get for their market share. This economy is functionally stalled in its deployment.

America, this is totally on. This is amazing, this is now up at 15% plus deployment.

Malaysia. Small economy with one large provider, very rapid up take, up to 8%.

Germany, nice steady growth. I'm afraid, for the people from the local economy, that although it looks like a nice curve, it's actually not a high figure.

But your southern neighbours, tonnerre de Brest, the Belgians have really got something happening. Okay, so, can we tell why?

Well, this is the list of capability measured down in Belgium, and you can see here that the IPv6 capability of the top ASes, in terms of the numbers of samples we see, is really quite high. It's astronomically high if you go down the list, Telenet and Brutélé.

So, the Dutch table: at first glance it looks like similar kinds of figures. But what if we sort it by sample? If we sort by sample, you see that the top listing is now Ziggo, which I believe was UPC, but the high v6 uptake figure, which was XS4ALL, has dropped down the table markedly.

Let's just think about this. We're doing a random sample; we are randomly measuring what users can do. The eyeballs get seen randomly across an economy. We know the population from ITU figures, we can do adjustment, and we can work out relative ranking by the number of eyeballs seen. That's interesting. So, Geoff did quite a lot of work to make a service that can actually display this ranking as a first‑class service. Here, for instance, is the ranking for America. And I have spoken with people in the States and they say that the first five or six here are a fairly reasonable reflection of relative market density in that economy. The reason that Time Warner isn't there is because you can see all the separate Time Warner ASes as discrete entities, but broadly speaking, I have been told this is actually quite a good reflection of market share. This is the one for Australia, which is where I am based. This is the figure for Great Britain, and again I have been told that the top two or three are a fairly good reflection of the kind of customer densities these people have.

Japan, quite interesting to see that the second ranking organisation, KDDI has really really good v6 capability. But NTT's issue with distribution still stands out and in market share terms they are about twice the size.

Germany: Not looking too bad as a distribution.

We can do the same thing with DNSSEC, because the tests are presenting DNSSEC‑signed information and we can test DNSSEC capability. So, this is the chart of how DNSSEC is doing in the world, and you'll notice that it's actually looking quite good; we're talking about 6% or better figures. If we look at the UK, for instance, who were not doing well in v6, we can actually see quite high penetration of DNSSEC capability. It's really quite nicely distributed. And if you sort that by market share, although the top figures are low, there are some very, very high numbers emerging here, significantly higher than v6.

If you look at America, for instance, we can see the separation of who is performing DNSSEC and who is using Google. We can do the ranking. We can see charts of what's going on in Europe. The colouring here is a little unfortunate: red is not meant to imply none, it's just the relative colour coding you get from Google charting. But we also get a sense of what's going on in the European system as a whole.

So, we think that we have a reliable measurement framework, and we think we're able to get some insights into market share that we don't think other people are currently telling you. I know that some economies have statutory reporting obligations, but we're not aware of an independent source of this class of information that's publicly visible. Google would know this. Akamai, CloudFlare, people in that position of mediating end‑user behaviour with content, they know this, but it's not necessarily something they want to share, because it's business intelligence. And we're quite interested in what you would think of a relative market share measure being publicly available and how it might inform what we're doing. So we'd like to invite you all to explore the data, have a look around and see what you can see.

We are aware of the bias in the ads; we know that this is not a perfect count, and we know that we have to do some work here to try and fix this. But the motivation is that it's not just measurement for measurement's sake. We're actually quite concerned about how we can use this kind of information to help set address policy, because the network we're building as a result of address shortage in v4 may not have the kind of properties that we want. I'm quite attached to the idea that the network was essentially flat in terms of its behaviour: you might have more or less, but what you could do, I could do. And we seem to be emerging into a world where the Internet economy is at least two tiers, through addressing and CGN, and possibly three or four differential qualities of behaviour, and we don't think that this is ‑‑ well, I don't personally think this is a desirable outcome. I think it would be nicer if we preserved some of those end‑to‑end qualities.

On the other hand, you have got to be realistic: we understand CGN is real, cheap, deployable, easy. But also, with the ARIN runout, we have to think about what's going to happen here. I'd like to thank the people who have helped us with the systems, finance and back‑end technologies to do this experiment. We are very grateful for the research relationships we have, and we'd love to talk to people.

Science is fun. Thank you.

(Applause)

CHAIR: Thanks a lot, George. Any questions or comments?

EMILE ABEN: Emile Aben from the RIPE NCC. I just wanted to mention OpenIPMap: if people are interested in crowdsourcing geolocation of infrastructure, please come talk.

GEORGE MICHAELSON: I think it is an extremely useful and valuable project, and I strongly encourage people to look at that URL and get involved. I think it helps everyone.

AUDIENCE SPEAKER: Hi. Bart from iMinds, Belgium. So, actually, we are the proud leaders in IPv6 deployment, but I strongly encourage you to investigate strategies to study the mobile landscape.

GEORGE MICHAELSON: I'm aware that there are many people who come up to say: I have complete dual stack deployed in our mobile network and you are not accounting for it. We actually think we are going to be able to do that, but the compliance processes to modify how we currently measure to become acceptable for mobiles are taking longer than we thought. I think the next big story we have is: what's the world like including mobiles? And I'd love to get that figure.

AUDIENCE SPEAKER: Me too. And actually, you have to admit, for the Belgians in this case the problems are probably even worse than in the Netherlands, so it's really worth investigating.

GEORGE MICHAELSON: My assumption is that it would improve the figures overall, but you don't think that's the case?

AUDIENCE SPEAKER: We know from the operators in the meetings of the IPv6 Council, the Belgian IPv6 Council, that it's ongoing, but we're not there yet. So it's certainly not at the rates which we currently have at our CPE and at the fixed deployments.

GEORGE MICHAELSON: If I can just waste the room's time a little bit on this one. We had a good conversation in New Zealand about why people don't deploy v6, and a common story we heard there is that most modern mobile deployments are what's called a virtual overlay, and secondly, even the prime provider is no longer in‑house; you have to be a big economy to run your own, they are outsourced. And when you have a list of the cost issues, going to v6 is a long way down the list, so it's not the tick box that moves up the list when you actually deploy. Is that where you sort of think you might be going?

AUDIENCE SPEAKER: It might be. There are some other factors playing there too but perhaps we can discuss it off line since it's a long discussion.

AUDIENCE SPEAKER: Dave Wilson. Gosh, that's cool. How do you calculate the number of users? What I'm really getting at here is: is there an assumption that every user has exactly one ISP?

GEORGE MICHAELSON: So, the variance that you have got to the heart of is the other side of the ITU reporting figure, because that number is a very crude measure and it could indeed be registered customer premises rather than actual true consuming IP end points. And the numbers could be out by a factor of 2. But the quality they have is, we believe, that it's a universally consistent measure; it's applied the same way by the reporting agencies that choose to respond. Certainly for the G20 there is a consistent basis for how they report on that number. So, whilst the absolute values are in question, the relativities between them ‑‑ that problem of China and 140 million new ones ‑‑ the relativities are the best figures we have for the relative weighting, and we can use the relativities as an adjustment factor against random absolutes. Statistically, arguably, it works. First approximation. Second thing: because so many people I talk to say, yes, your top 10, your top 20 ranking for my economy is broadly speaking right, and given the strong random story that it's random eyeballs who are visiting the ad, I believe that property is being kept. So: random measures, roughly equating to balance of market share within an economy; population, not so happy, but in relative terms a consistent way of calculating. That's the essential model, but in absolute terms those numbers could be way out because, like you say, it could be registered premises rather than actual IP addresses, devices, whatever. Do you know a better way?

DAVE WILSON: No. I think I'm with you. I have the same feeling with the relative numbers. I got a bit of a shock when I looked and I saw what the absolute numbers were, and I thought, hang on a second, could there be a user going from the ‑‑

PAUL RENDEK: To their mobile to their home and ‑‑

GEORGE MICHAELSON: If we take Australia, for instance, there are 12 million declared Internet users out of a population of 25 million. I suspect the Internet use is well north of that, and I expect 12 million domiciles or end points are declared in the provider broadband plans.

CHAIR: Good. Thank you. Thanks George.

Our next speaker is Chiara Orsini, talking about BGP Stream, especially in the context of an open framework for data analysis, which probably makes it one of the longest titles we have ever had.

CHIARA ORSINI: I'm going to talk about BGP Stream, which is a framework for BGP analysis. It's a framework that we are currently developing at CAIDA and it's carried on by these people. The idea of BGP Stream started about two years ago, when we started thinking about developing a platform for the detection of large scale events in the Internet, and it ended up in the development of a more general framework for BGP analysis, for realtime and historic analysis of BGP data.

The main goal of the framework is to create a sorted stream of BGP information in order to support the creation of a BGP state over time. This is the main goal, and we want to achieve this goal having some features in mind, like abstracting from the underlying source: we want to filter BGP data based on the user needs, we want to be able to tag unreliable data, and all of these features should support realtime.

As I said, it's currently work in progress at CAIDA, and we are planning to release the different components of the software as open source; in particular, we plan to release version 1 of our core library this summer. But in case you are interested in the current version of the code, please come talk to me after this presentation.

So, today I'm going to focus on two layers of the BGP Stream framework. The first layer is a C library that gives the name to the entire framework. It takes as input MRT data from different data feeds, and then it creates as output a sorted sequence of BGP records that are then further processed by layer 2. In layer 2 we have a command line tool that outputs ASCII information, we provide Python bindings for the C library, and we also developed an interval‑driven processing tool that allows users to create plug‑ins for processing this BGP data.

BGP Stream is designed to transparently access several MRT data sources. So, for example, the MRT data can be provided in the form of previously downloaded local files, or it can be provided in the form of a realtime stream. We are planning for version 1 of our BGP Stream library to support the Colorado State BGPmon, and we want to support the future realtime stream from RIS.

What we're using right now to input data to our stack is a hybrid approach. It's called the BGP downloader: it's a programme that downloads the near‑realtime files, like updates and ribs, from RIS and Route Views, and it does that by polling the websites periodically. So, as soon as a file is there, we download the file and we insert a new entry in a MySQL database that we call the BGP archive, recording the location of the file in our file system, and also some information such as the file type ‑‑ the file can be either a rib or an update ‑‑ some information about the time stamp of the file, and the collector name.
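The bookkeeping step just described ‑‑ recording each downloaded dump file in an archive database ‑‑ can be sketched roughly as below. This is a minimal illustration, not the real downloader: sqlite3 stands in for the MySQL "BGP archive", and the table and function names are assumptions.

```python
import sqlite3

# Stand-in for the "BGP archive" MySQL database described in the talk:
# each downloaded MRT file gets an entry recording its location, its
# type (rib or update), its timestamp and the collector it came from.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE bgp_archive (
    path TEXT, file_type TEXT, ts INTEGER, collector TEXT)""")

def register_file(path, file_type, ts, collector):
    """Record a downloaded dump file so the library can later find it."""
    db.execute("INSERT INTO bgp_archive VALUES (?, ?, ?, ?)",
               (path, file_type, ts, collector))
    db.commit()

# Example entry for one freshly downloaded update file (made-up path).
register_file("/data/rrc00/updates.20150513.1400.gz", "update",
              1431525600, "rrc00")
rows = db.execute("SELECT collector, file_type FROM bgp_archive").fetchall()
```

A periodic polling loop would call `register_file` each time a new file appears on the RIS or Route Views website.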

Using this near‑realtime approach we are able to be 20 minutes behind realtime, which means that we are able to process an announcement 20 minutes after the same announcement has been registered by a collector.

From now on, consider the BGP archive as the data feed that provides information to our stack, and let's start talking about the BGP Stream library. The BGP Stream library is responsible for creating a sorted output; this output would be like a stream of BGP records. It's also responsible for filtering information ‑‑ filtering the BGP data based on the user needs ‑‑ and it also tags the BGP data that are not reliable. It achieves this goal in five steps.

First of all, it accesses the MySQL archive and it selects files based on project ‑‑ so RIS or Route Views in this context ‑‑ it can select updates or ribs or both of them, it can select different collectors, and it can select data based on time.

The second step is using a modified version of BGPdump to open a group of dump files in parallel. It then creates BGP records, which are wrappers around MRT data containing BGPdump entries. Then it marshals these records in output according to their time stamp. And if the user needs to extract atomic BGP information from the records, we provide functions in the public API to transform these records.

The BGP record is a structure that looks like this. It contains information coming from the BGP archive, so every record is tagged with the collector, the type and the dump time. We also register where this MRT information was originally in the dump, and we carry, along with that, the BGPdump entry itself. And, as you see, we report the status of the record, meaning that the MRT information carried within this record could be either valid or coming from a corrupted source.

The BGPdump entry, as I said, can be further processed, and we can transform MRT data into something more readable, which are the BGP elems. These are atomic pieces of BGP information; they can be a rib entry, an announcement, a withdrawal or a state message.

Every elem has a time stamp that refers to when this information was collected on the collector. It has information about the peer IP address and the peer AS number that generated this information. And, depending on the type, it contains the prefix, the next hop, the AS path field of the BGP message, and, if it's a state message, it has encoded the BGP finite state machine state of the peer.
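The record and elem structures just described could be mirrored in Python roughly as follows. The real framework defines these in C; all field names here are assumptions for illustration.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative Python mirror of the BGP record / elem structures
# described in the talk; field names are guesses, not the real C API.
@dataclass
class BGPElem:
    elem_type: str                 # "rib", "announcement", "withdrawal", "state"
    timestamp: int                 # when the collector recorded the information
    peer_address: str              # peer that generated the information
    peer_asn: int
    prefix: Optional[str] = None
    next_hop: Optional[str] = None
    as_path: Optional[List[int]] = None
    fsm_state: Optional[str] = None  # only for "state" messages

@dataclass
class BGPRecord:
    collector: str                 # tagged from the BGP archive
    record_type: str               # "rib" or "update"
    dump_time: int
    status: str                    # "valid" or e.g. "corrupted-source"
    elems: List[BGPElem] = field(default_factory=list)

rec = BGPRecord("rrc00", "update", 1431525600, "valid",
                [BGPElem("announcement", 1431525601, "192.0.2.1", 64500,
                         prefix="198.51.100.0/24", as_path=[64500, 64501])])
```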

Let's see the BGP Stream library in action. I'm going to present now the C API. Suppose we want to process some updates ‑‑ updates from two different collectors, one from the RIS project, the other from the Route Views project, that are provided at different times ‑‑ and we just want to process updates, no ribs, and only a small window of time. So what we have to do is, first of all, allocate memory for the BGP stream. Second, we have to set the filters we want ‑‑ in this case the collectors, the type and the time. And then, finally, we have to ask the BGP stream to give us, one by one, the ordered records that we want to process. As you see from the figure, BGP Stream does the work of combining different sources and providing a sorted stream in output. So, for example, Route Views and RIS updates will be interleaved together in such a way that the BGP records in output are sorted by time.

Finally, of course, we have to deallocate the memory for the BGP stream.

On top of BGP Stream we build several software tools, and each one serves different needs. The first one, called BGP reader, is the simplest one. It's a command line tool that takes some filters as input from the command line and outputs ASCII information. For example, let's say I want to have, in ASCII format, the stream of all the updates coming from two different collectors ‑‑ here highlighted with red and green ‑‑ in a four‑minute period. All I have to do as a user is to specify the filters at the command line and then read the output. As you see, again, the output is interleaved. We support different kinds of output; one of them is compatible with BGPdump.

If what we want to do is something different, something more complex ‑‑ for example, we want to rapidly prototype new ideas ‑‑ then the simplest solution is to use the Python bindings. For each function in the public API of BGP Stream, we provide a Python function; in this way, none of the functionality provided by the C library is lost. I'm going to show you now some examples of how we can use the Python library.

Suppose we want to have the list of AS links as seen by a specific collector. What that means is that I am importing an entire rib in memory, collecting all the AS paths, and saving all the adjacent ASes in a set. So, from a code point of view, what it costs is less than 50 lines of code. If I want to process one collector, the entire process on a normal PC takes about two minutes. If I want to process the information for all RIS collectors, for example, the entire process takes about 15 minutes; in terms of code, what it costs is just commenting out one line of code.
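The core of the AS‑links computation just described can be sketched as below. The rib here is a hard‑coded stand‑in for what the Python bindings would stream from a collector; the AS numbers are invented for illustration.

```python
# Walk every AS path seen in a rib and collect each pair of adjacent
# ASes into a set of undirected links (the talk's "AS links" example).
rib_as_paths = [
    [64500, 64501, 64502],
    [64500, 64503, 64502],
    [64501, 64502],
]

links = set()
for path in rib_as_paths:
    for left, right in zip(path, path[1:]):
        if left != right:  # skip AS-path prepending (AS repeated in a row)
            # store each link in a canonical order so A-B == B-A
            links.add((min(left, right), max(left, right)))
```

With the real bindings, the inner loop would run over the AS path of each rib elem instead of a hard‑coded list.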

Let's take a look at a more complex example. Let's suppose I want to list all the multi‑origin AS events that happen in a period of three hours. What that means is that I am importing rib and update files, and for each prefix I'm saving a structure that tells, for each peer, what's the origin AS observed in the AS path. And of course, if I see, at the same time, two peers that say this prefix is announced by two different origin ASes, then I have to signal this event.

The entire programme, which is on the right, is less than 100 lines of code. In terms of time, processing one collector ‑‑ I'm taking as a reference rrc00 ‑‑ takes about five minutes. If I want to process all RIS collectors, it takes about one hour to have the list of all the events in three hours.
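The multi‑origin AS (MOAS) check described above reduces to a small amount of bookkeeping, sketched here. The announcements are made‑up stand‑ins for the elems the bindings would stream; the origin AS is taken as the last hop of the AS path, as in the talk.

```python
from collections import defaultdict

# (peer, prefix, AS path) triples standing in for announcement elems.
announcements = [
    ("peer-a", "198.51.100.0/24", [64500, 64501]),
    ("peer-b", "198.51.100.0/24", [64510, 64502]),  # different origin!
    ("peer-a", "203.0.113.0/24", [64500, 64501]),
    ("peer-b", "203.0.113.0/24", [64510, 64501]),   # same origin: fine
]

# For each prefix, remember the origin AS each peer currently reports.
origins = defaultdict(dict)           # prefix -> {peer: origin AS}
for peer, prefix, as_path in announcements:
    origins[prefix][peer] = as_path[-1]

# A prefix whose peers disagree on the origin AS is a MOAS event.
moas = {p for p, by_peer in origins.items()
        if len(set(by_peer.values())) > 1}
```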

An interesting thing to note is that if I want to transform this programme from an historical analysis to a realtime programme ‑‑ let's say I want to know what the events are from now until three hours from now ‑‑ what I have to change in the code is three lines. First of all, the end time is going to be in the future, and second, I have to add one more line of configuration, which is this: stream.set_blocking. And I already have a realtime processing script.

The last tool I'm going to talk about is called BGP Corsaro; it's an interval‑driven command line tool which has a modular architecture based on plug‑ins. This tool is useful when the aim of the computation is to maintain a BGP state over time, and to output some statistics or some metrics at specific time intervals.

BGP Corsaro is based on a C library that supports the interval‑driven processing, and, as I said, it has this architecture which is based on plug‑ins. A user can write his own plug‑in, activate one or more plug‑ins, and these plug‑ins can be activated in cascade.

The main components of BGP Corsaro are three. There is the BGP Stream instance that generates a sequence of BGP records sorted in time. There is the main logic of BGP Corsaro, which we call the BGP Corsaro core, that takes BGP records as input and then hands these records to the active BGP Corsaro plug‑ins, along with the interval start and interval end signals. Of course, the implementation of the plug‑in is up to the user, but usually what happens is that the plug‑in itself maintains a state over time that is updated every time a new BGP record is provided by the core, and then it generates some statistics or outputs some information at the end of every interval.
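The interval‑driven core/plug‑in interaction just described can be sketched as a toy Python version. The one‑method plug‑in interface, the class names and the `(timestamp, payload)` record shape are all illustrative, not the real C API.

```python
# A toy plug-in: keeps state across records and emits a statistic at
# each interval boundary, mirroring the core/plug-in split in the talk.
class CountingPlugin:
    def __init__(self):
        self.count = 0
        self.stats = []            # (interval start, records in interval)

    def process_record(self, record):
        self.count += 1            # update state for every record

    def interval_end(self, interval_start):
        self.stats.append((interval_start, self.count))
        self.count = 0             # reset state for the next interval

def run(records, plugin, interval=60):
    """Toy 'core': feed sorted records to the plug-in, signalling
    interval boundaries as BGP time advances."""
    current = None
    for ts, payload in records:    # records arrive sorted by time
        bucket = ts - ts % interval
        if current is None:
            current = bucket
        while current < bucket:    # close any elapsed intervals
            plugin.interval_end(current)
            current += interval
        plugin.process_record(payload)
    if current is not None:
        plugin.interval_end(current)

plugin = CountingPlugin()
run([(0, "r1"), (10, "r2"), (65, "r3"), (130, "r4")], plugin)
# plugin.stats is now [(0, 2), (60, 1), (120, 1)]
```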

The main plug‑in that we have developed at CAIDA is called routing tables. It maintains the state of the peer and the state of the routing table for each peer in the system. To do that, it has to deal with information about the BGP finite state machine, it deals of course with ribs and updates, and it is also able to recover from out‑of‑order and corrupted data. Currently we output statistics every minute of BGP time and we maintain about 600 IPv4 routing tables and 300 IPv6 routing tables in memory.

In order to give you a flavour of what kind of information can be extracted from this routing tables plug‑in, I'm going to show you some examples of metrics that we monitor over time.

The first one, shown in this figure, is the number of active prefixes in a routing table as seen from a peer. In this specific case, we are looking at three different peers that belong to Level 3. As you see, they are three full‑feed peers; they provide about 500k prefixes. And as you can observe, there's a huge drop between 9:30 and 11 a.m. This graph refers to August 2014, in which Time Warner Cable underwent an outage, and of course, from a full‑feed peer's routing table, what we see is a drop in the number of prefixes observed in the routing table.

Other information that we track over time is the number of announcements and the number of withdrawals that a peer receives every minute. A more complex metric is the number of prefixes that a peer sees in announcements or withdrawals over time.

Today I described two layers of our BGP framework. I'm really looking forward to having some feedback from the RIPE meeting audience. I hope this presentation raised some curiosity, so please ask questions or feel free to contact me offline. Thank you.

CHAIR: Thanks, Chiara. So, any feedback and comments already sparking up, or any questions? Here we go ‑‑

AUDIENCE SPEAKER: Martin Levy, CloudFlare. First of all, thank you, cool work. Two very quick questions. The first one is: can you use your own source of BGP data within this framework? You have talked about RIPE and Route Views as collectors; could you put your own private collector into this framework of code?

CHIARA ORSINI: Yes, so BGP Stream supports data in MRT format. So, the easiest way to do it right now would be to have MRT files saved somewhere in the file system and then some way for BGP Stream to know where these files are. A CSV file would be the easiest solution, and it would work.

AUDIENCE SPEAKER: Okay. And then the second question is: why MySQL and not some of the time‑series databases that are starting to show up in use?

CHIARA ORSINI: It was just easy to implement this way. There are no specific requirements for that metadata. As for the metrics that we output every minute, for those we do have a time‑series database.

AUDIENCE SPEAKER: Hi. Colin Petrie from the RIPE NCC. It's very interesting work. We have been doing some similar stuff internally for the RIS project as well. And I just wanted to mention that I'm doing a presentation tomorrow at the Routing Working Group about how we're developing the realtime streaming interface to RIS, which would hopefully then plug into a system like this. If people are interested in that, they should come along to that presentation.

CHAIR: Cool. Well, for the others, if you have ‑‑

AUDIENCE SPEAKER: Where can I download this stuff and try it?

CHIARA ORSINI: Right now, we don't have this software available online, but if you can contact me, we can talk about sharing software definitely.

AUDIENCE SPEAKER: Perfect. Thank you.

CHAIR: Good. If someone else has comments or feedback, I'm sure Chiara will be somewhere in the hallway. Thanks again.

(Applause)

So the next topic is about measuring delay and packet loss at an IXP, and it will be presented by Christoph.

CHRISTOPH DIETZEL: So, hi, I'm Chris, I'm with DE‑CIX, and I'd like to talk about measuring delay and packet loss at an IXP ‑‑ and of course especially at DE‑CIX.

So, let me first outline what I'm going to talk about. At first, I'll introduce the agreed service levels, and then a history, which is some sort of motivation why we did what we did. Then I'll talk about the challenges we faced, and finally I'll present our implementation.

Our agreed service levels. We have these requirements: we need a delay which is supposed to be less than 500 microseconds for at least 97.5% of the packets ‑‑ and actually, we want to have the one‑way delay. Additionally, we need the jitter, which is supposed to be less than 100 microseconds for the same share of packets. Another requirement is the packet loss, which is supposed to be less than 0.05% on a daily average ‑‑ daily being defined as 24 hours, of course. And another requirement, which is more or less big‑IXP or DE‑CIX specific, is that we need to cover all physical links; I'm going to talk about that later a bit more. The end result is supposed to be a graph on our customer portal, which allows our customers to verify whether we meet these service levels or not.

So, the history, or some sort of motivation for our work: we used the RIPE TTM boxes, but unfortunately that service was discontinued in 2014, so we had to come up with a different solution. We evaluated the Accedian MetroNODE or MetroNID boxes; they have a rich feature set, but unfortunately for our use case we could not adopt them, because there were some issues regarding the path selection ‑‑ it comes with limitations from the protocol they are using. And additionally, for our specific use case, it was too pricey. I mean, those boxes do a lot of stuff, but we don't need all the features; we just need to measure the few SLAs I introduced before.

So, we had to come up with a custom implementation. Basically, what we need to do is measure the round‑trip time; from the round‑trip time we can derive the delay, and the jitter, which is basically the average deviation from the mean latency. Additionally, we want to measure the packet loss, and we want to ensure that all links over our platform are covered ‑‑ so not just testing the same link all the time; we want to make sure that we do our measurements over all the links.
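The derivation just described ‑‑ halving round‑trip times to estimate one‑way delay, and taking jitter as the average deviation from the mean latency ‑‑ can be sketched as below. The RTT samples (in microseconds) are invented for illustration, and applying the check to a whole sample rather than a daily 97.5th percentile is a simplification.

```python
# Made-up round-trip-time samples in microseconds.
rtt_us = [800, 820, 790, 900, 810]

# One-way delay is approximated as half the round-trip time.
one_way = [rtt / 2 for rtt in rtt_us]
mean_delay = sum(one_way) / len(one_way)

# Jitter as described in the talk: average deviation from the mean latency.
jitter = sum(abs(d - mean_delay) for d in one_way) / len(one_way)

# SLA-style check: fraction of packets whose one-way delay is < 500 us
# (the talk requires this for at least 97.5% of packets).
within_delay = sum(d < 500 for d in one_way) / len(one_way)
```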

So, here are our challenges in a nutshell. We have multiple paths: if we want to send a probe from A to B, we are going to have multiple paths over our platform, so we need to know how many exactly. Additionally, we have limited control over the choice of which LAG is used and, specifically, which LAG member is chosen for each run of a probe from A to B. We want to be nice to our platform, so we don't want to put too much load on it and we don't want to consume too much bandwidth.

Additionally, we have some limitations arising from the OS which are platform specific, such as the protocol stack: if the message goes up the protocol stack, it will take some time, which we actually don't want to measure if we want to do a one‑way delay.

So, here is, first of all, a figure which depicts a simplified overview of our system, our measurement environment. Let's say we have 4 edge switches at DE‑CIX, so we're going to use 4 probing systems, with one probing system attached to each edge switch, which gives us this picture. What we're going to do then is, if we want to send a probe from A to B, we send a UDP packet, and host B responds with an ICMP port unreachable, and so we get the round‑trip time. However, we actually want to measure one‑way delay, so we need to divide it by 2 ‑‑ that's obvious ‑‑ but the thing is, we actually measure unidirectionally in terms of path coverage: if I send a probe from A to B, I don't consider the path back. That comes from some limitations which I'm going to explain on the next slides.

So, first, our real set‑up. Here we have the 4 edge switches which I mentioned earlier, connected to the probing systems ‑‑ like I said, each probing system is connected to one edge switch ‑‑ and then we have a LAG from each edge to each core, and each core is connected to each edge again.

So, again, those are the four LAGs which we see on the left side, and if we look at one LAG closer, we see that one LAG has up to 12 LAG members. So you see, for instance from A to B, there are several paths, and now we need to determine exactly how many, in order to set up a measurement environment which takes the path coverage into consideration. So, for instance, sending a probe from A to B, we have 4 LAGs which we can choose, and each LAG again has 12 members. On the way from a core back to an edge switch which is connected to the receiver instance B, we have 12 LAG members again.

So, this sums up to a total of 576 different paths from A to B. The point is ‑‑ and this is why I mentioned earlier that we won't consider the response from B to A ‑‑ of course, practically we are going to get that response, because we need the round‑trip time, but we don't consider it for path coverage, since then we would need to check 576 squared paths, and that's quite too many paths to really measure. So, from here on, we focus on the 576 paths from one probing system to the other.
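The path arithmetic above is simple enough to spell out: 4 LAGs towards the cores, 12 members on the chosen LAG up, and 12 members on the LAG down to the receiving edge.

```python
# Path count from the slides: choice of LAG towards a core, then a LAG
# member up, then a LAG member down to the receiving edge switch.
lags_to_core = 4
members_per_lag = 12

paths_a_to_b = lags_to_core * members_per_lag * members_per_lag  # 576

# Counting the return path too would square the space, which is why
# the measurement only tracks coverage in one direction.
bidirectional_paths = paths_a_to_b ** 2
```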



CHAIR: I'm sure it's coming back in a second but it looks good...

SPEAKER: Is it clear so far? Maybe we can just answer a question already, to use the time.

CHAIR: I think you're back online.

CHRISTOPH DIETZEL: So, we know the number of paths, but we don't know how the choice is done. First, we have a choice between 4 LAGs, and this choice is done by ECMP ‑‑ but that's a vendor secret, so we could not really determine how it's done, so we need to go with an assumption. Our assumption is an equal chance for each LAG when a packet goes from the probing system to the edge switch and from there to the core, based on the ECMP. Then again, the choice of which LAG member is used: we know that this is done by a hash, and the hash space is divided among the LAG members ‑‑ our 12 LAG members ‑‑ and this hash is calculated on fields such as the destination and source addresses and the ports. Unfortunately, six of those values are fixed, since we want to go from probing system A to B and those are servers in our environment ‑‑ we can't change those values for each run of each probe ‑‑ so what we have left is the port.

So, we need to generate a lot of entropy with our port, again under the assumption that the hash space is equally distributed over all LAG members.
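The idea ‑‑ with addresses fixed, only the varying source port drives the LAG‑member hash ‑‑ can be simulated as below. Python's built‑in `hash` stands in for the (secret) vendor hash, and the endpoint names and port range are illustrative.

```python
import random

# With source and destination fixed, only the UDP source port varies,
# so the LAG member is effectively chosen by hashing the port into a
# 12-way hash space. hash() is a stand-in for the vendor's hash.
members = 12
ports = random.Random(42).sample(range(32768, 61000), 5000)

chosen = [hash(("probe-a", "probe-b", port)) % members for port in ports]
per_member = [chosen.count(m) for m in range(members)]
# With enough port entropy, every LAG member gets exercised.
```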

AUDIENCE SPEAKER: Can I ask a question? The return packets, are these also UDP, or ICMP?

CHRISTOPH DIETZEL: This is a hack from us; it's an ICMP port unreachable.

AUDIENCE SPEAKER: What you see a lot with ECMP is that for ICMP packets the load balancing actually happens on the ‑‑ we check some of the ‑‑ so that can be a problem for this kind of stuff.

CHRISTOPH DIETZEL: Yeah, right, but let me clarify: for the actual measurement, or for the idea of covering all paths, when we go from A to B it's a UDP packet, and just the response to measure the round‑trip time is ICMP.

AUDIENCE SPEAKER: Right. But then can you be sure, even if you send a whole bunch of probes with the same port number, that ‑‑

CHRISTOPH DIETZEL: No, we're not going to send to the same port number.

AUDIENCE SPEAKER: Okay.

CHRISTOPH DIETZEL: Just one probe per port number.

AUDIENCE SPEAKER: The only point I want to make is that it's hard to know that all the ICMP messages are going back over the same path, because of this strange implementation.

CHRISTOPH DIETZEL: Okay... thank you for your hint.

And now we know the number of paths and we know how the decision is made, so we need to calculate how many probes give us a certainty of 95% that all our paths are covered. Therefore, we make use of the coupon collector's problem ‑‑ I guess fathers know it: if your child wants to collect soccer player cards and put them in a nice booklet, that's the problem. We could map it to our problem and use it. But additionally, we had to use the limit theorem, because with the classic formulation the chance of covering all coupons is only about 50%, and we need 95% ‑‑ you can look it up in a nice book on classical problems of probability theory. So we just applied this formula, rearranged it a bit, and came to our conclusion: to cover all paths from A to B with a certainty of 95%, we are going to need 5,372 probes.
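The 5,372 figure can be reproduced from the limit theorem for the coupon collector's problem: the probability of covering all n coupons within n·ln(n) + c·n draws tends to exp(‑exp(‑c)), so for 95% certainty we solve exp(‑exp(‑c)) = 0.95 for c. A sketch of that calculation, assuming all 576 paths are equally likely:

```python
import math

n = 576          # equally likely paths from A to B
certainty = 0.95

# exp(-exp(-c)) = certainty  =>  c = -ln(-ln(certainty))
c = -math.log(-math.log(certainty))

# Draws needed: n*ln(n) + c*n, rounded up to whole probes.
probes = math.ceil(n * math.log(n) + c * n)
# probes == 5372, matching the figure in the talk
```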

And finally I come to our implementation. On each of our four provisioning systems ‑‑ which double as the probing systems ‑‑ we have three sending instances, and we send 5,372 packets via UDP, and on the way to B we ensure that 95% of the paths are covered.

Therefore, we choose a random port for each probe. The port range is somewhat limited, as you can see ‑‑ but it's the best we could do, and it's the only way to generate our entropy, because the probe systems are also in use for provisioning, so we can't use all the ports on those systems.

Then there is the iptables rule to reduce the protocol stack delay ‑‑ that's what we just discussed: we want to avoid these packets being processed too high up the protocol stack. So we just put an iptables rule in place which directly responds with the ICMP port unreachable, and to make that work we had to remove the ICMP kernel rate limit. And so our system was complete and we got it running, and now we have those nice graphs for our customers, so they can evaluate whether our SLAs are violated or not.

Thank you. Do you have any questions? Anything to discuss?

CHAIR: First of all, thanks for your presentation.

(Applause)

Any questions about the math? Because that was the part which, when I looked at the presentation, I didn't get, so I trusted that it's good enough.

AUDIENCE SPEAKER: Erik Vyncke, Cisco. Just one question: you are using random UDP port numbers; why not use random IPv6 addresses as a source?

CHRISTOPH DIETZEL: Because ‑‑ ah you mean spoofing?

AUDIENCE SPEAKER: No, multiple addresses on the machine. Or spoofing if you want, but the ICMP message will come back ‑‑

CHRISTOPH DIETZEL: The issue is ‑‑ in theory you would like to do that, but we have some requirements within the system as we calculated and set it up. In theory it's all brilliant, but there are some limitations in practice, and of course you could have some more IP addresses and some more systems, but since we use the provisioning systems for our measurements, we stick to the IP addresses we've got, and those are fixed, unfortunately.

AUDIENCE SPEAKER: Gert Döring. I'm quite happy to not have seen the neighbour discovery path for trying to resolve 65,000 IP addresses in addition to all the noise we already have. So I can see this for one way ‑‑ it might actually work out ‑‑ but for having the ICMPs come back, don't go there.

AUDIENCE SPEAKER: Roman. Could you please elaborate a bit more on the measurement ‑‑ the nodes from which you are measuring ‑‑ and how do you measure time, and how can you actually ensure your time measurements are precise if you are doing it in software? Since such measuring equipment is really expensive for a reason.

CHRISTOPH DIETZEL: We measure it on the same system, so...

AUDIENCE SPEAKER: I mean, do you measure time in the kernel, in the driver, when you pick up packets ‑‑

CHRISTOPH DIETZEL: Okay. We actually used the default M ping environment and we have an NTP time synchronisation and we use that as a basis for the measurement.

CHAIR: Good. No further questions. Then thanks again for your presentation.

(Applause)

So, the last presentation is Vesna's. To also prove that, you know, MAT is actually a very time‑critical Working Group, we also ‑‑ that is not Vesna, by the way ‑‑ we also have a little presentation from Robert about ‑‑ should I call it the AMS‑IX outage, or should I call it interesting behaviours, or...

ROBERT: The realtime Working Group. I just wanted to spend two minutes of Vesna's time, because she usually has some left over, as you know, to show you a couple of examples of what we saw. This was just before lunch time ‑‑ five minutes or so before the lunch break started. This is the traffic statistics of one of the Atlas anchors; I think this one is in Finland. What you can see here is, obviously, there is a gap where there was just, you know, no traffic flowing. That's easy. The next one: this is the so‑called seismograph of our own anchor which is hosted in the RIPE NCC network ‑‑ so this is NL ‑‑ MA ‑‑ 3333 ‑‑ as seen by every probe that is measuring it. So this is the result of pings, and I think it's quite clear that there was a problem here. Some of the probes could still get to it because there was some kind of network connectivity ‑‑ most likely not going through AMS‑IX, but I wouldn't want to speculate too much. The point is that this really clearly shows that there was a problem, and the problem was closer to the anchor rather than to the probes.

This is DNSMON; some of you might know it. This is DNSMON measuring, I think in this example, k.root‑servers.net, and the interesting behaviour here is that on the Y axis we see results submitted by the Atlas anchors all over the world. However, in this time period the anchors couldn't connect ‑‑ they couldn't send the data that they collected ‑‑ but they were still collecting data. I actually stopped this, so it's not updating any more; if I reloaded it, we would see fewer gaps. The gaps that you see are there because the anchors, after reconnecting, try to send in the data that they saw while they were disconnected, so they will eventually fill in these gaps ‑‑ and let's see if that's actually true. Yeah, so when I reload it you see less and less gaps. So, over time, they will catch up with the data that they also want to submit to us.

Finally, this is something very interesting and very, very simple. Andreas, one of our colleagues, made this map because he just could, in five minutes. What you are going to see here ‑‑ now it's a paused animation ‑‑ is probe connection and disconnection events as we see them on the infrastructure. The expectation is that, since AMS‑IX had a general problem and we are connected to AMS‑IX ‑‑ most of the probes are driven by infrastructure which is hosted at the NCC network ‑‑ we will see a whole lot of red dots. And let me just click... streaming is not starting. Obviously we still have a problem. We will publish this as a RIPE Labs article, and that will actually have a working illustration. Thank you, that's all I wanted to say.

CHAIR: Thanks a lot. That is a new way to show what a loop looks like. This said ‑‑ this is Vesna, talking about the hackathon.

VESNA MANOJLOVIC: Hi again. For the people who were not here, I'm Vesna, I'm a community manager for RIPE Atlas, and I love hackathons. So, why should anybody do this? Well, you could do it differently ‑‑ you could just hire more people or pay somebody to use the data to create visualisations ‑‑ but we and the sponsor, Comcast, thought that putting a hackathon together would be a better idea. So we could bring the operators, programmers, academics and designers together, and then they can inspire each other, share ideas, work on it together, have fun ‑‑ and it works; it actually produces amazing results.

So, hacking is not only breaking into computers. It's actually finding creative solutions to problems that we all have. Our problem was: how do we create more software to visualise the open data that RIPE Atlas is collecting and generating, measuring the health of the Internet, and how can we share that with everybody who is interested in it?

So, RIPE NCC provided the tools and the data, staff to organise the logistics, and developers to help out with the details about the data. Comcast was generous enough to provide the money, and also staff who took part in the hackathon. With that money we gave awards to the winning teams and paid some of the travel expenses. We also got in contact with the local hackerspace here and used their location for part of the hackathon.

So, out of the 70 or more people who applied, we selected 25; in the end one didn't show up. There were also a lot of us locally, people from RIPE NCC and from Comcast. We put a special effort into increasing the diversity, making it a mixed crowd, so we had a lot of women involved. Unfortunately, most of the people came from Europe, because we didn't have a lot of money to spend on paying for tickets for people to come from Asia, Africa and the States, but next time we can do it in your location.

And so, what were the results? Well, there were ten projects, and 14 actual GitHub tools were submitted, and these are the winning ones. Starting from, let's say, third place: that was shared between two projects that were both working on traceroute. I really have very little time, so I'm going to go quickly through this. The second place went to Jan, somebody who also noticed that there is no equal distribution among the probes: when you say you want 100 probes from all around the world, you basically get them mostly from Europe and a few scattered around. So he thought, let's make this different, and aimed for more quality in the probe selection. And the first place actually went to a team of people who were really multi‑disciplinary, and they showed, combining multiple data sources, that we actually had an outage on that Friday in Amsterdam, so they had a real-life use case to practice on. And it looks very similar to what we saw again today. So there you go.

So, these were the three leading ones, and then there were many more interesting visualisations. These are the people in their work environment, brainstorming and so on. Here are all the links, and as the Dutch people say, next time at your place. So please invite us to organise a smaller event in your location. You just need to provide the office space and some help, and the RIPE NCC will send our developers there, we will help you with the registration and with announcing it, and we'll bring the T‑shirts, of course.

So... if you like this T‑shirt, and you are an ambassador or a sponsor or an anchor host for RIPE Atlas, you can still get one of these T‑shirts at the info desk during the break, or tomorrow, or on Friday. And if you are interested in hosting a hackathon, please get in touch. Thank you.

(Applause)

CHAIR: Any questions, comments, to hackathon?

VESNA MANOJLIVIC: There are actually at least five of the hackathon participants at this meeting, so you can also ask them how it went.

CHAIR: I actually was there. So even though I can't program and I can't do math, I had time to configure my name server again. It was actually a great and fun event. No other comments? Good. Thanks, Vesna.

That actually brings us to the end of the MAT Working Group session, and we are just one or two minutes over time. I guess that's a new record, probably because Nina is keeping the numbers.

So, one last point before we close. Since last time we actually have a rating system, like the Plenary does, so if you give us a little bit of feedback, we, as the little Programme Committee of the Working Group, have a better idea of what you want to see. So please use the rating system, and then we know what we can do for you next time. Okay. Thanks again for coming, and see you in the coffee break.

(Coffee break)

LIVE CAPTIONING BY MARY McKEON RMR, CRR, CBC

DOYLE COURT REPORTERS LTD, DUBLIN, IRELAND.

WWW.DCR.IE