

Plenary Session:
11th May 2015 ‑‑ 4 p.m.



CHAIR: To kick us off, we have Thomas King, with making route servers aware of data link failure at IXPs.

THOMAS KING: So, welcome to my talk. As they said, I am Thomas King from DE‑CIX, and this presentation is about an Internet draft we're working on. I will go through it quickly and I hope we'll have a good discussion afterwards. So, nearly one and a half years ago, it was discussed that there was a problem with route servers not being aware of data link failures, and during that discussion it was realised that we needed a solution here, so they added myself and John and we started working on this. Before I show you what our solution is, let me quickly show you the motivation for this work.

So in a typical BGP session you usually have two routers and a cable, and a BGP session that runs on top of this cable, and after a while, hopefully, traffic will also be exchanged. If a data link failure occurs, so let's say the cable breaks, then BGP will recognise that and it will stop the traffic flow, so not a lot of traffic should be dropped and the problem should be recovered quickly. This works pretty well in a traditional peer‑to‑peer setup, but if you involve a route server, like at an IXP, there is a challenge, because here you see that we have an IXP cloud; nowadays many IXPs are distributed IXPs, so the data paths that different packets are travelling might be different, and you see that if the peers connect to the route server over BGP, then there is a different path for each BGP packet, and the data that is then exchanged between the two routers might also take a different path. So if there is a data link failure which means only the data plane is affected, then the route server and BGP will never detect that, and the problem will never be resolved until someone manually intervenes.

And that's a big problem especially because we see that at IXPs, this kind of scenario sometimes happens, and we thought we should come up with a solution for that.

And our solution contains two building blocks. The first is that we need a mechanism for the client routers so that they can verify that the connectivity to the next hops is still there and still working, and for this, there is something out there called bidirectional forwarding detection (BFD). It's designed for doing exactly this; it's already an RFC and many router vendors have implemented it. The second building block we are using is a mechanism that allows the client routers to send the knowledge they gained about reachability back to the route server, so that the route server can take this information into account during the route selection process.

And for that, we are using the north‑bound distribution of Link‑State and traffic engineering information using BGP; that's an Internet draft, but it might quite soon be an RFC. And let me dig a little bit deeper into how these two building blocks are working for us.

As I said, bidirectional forwarding detection is something that was designed for doing that job; it is a protocol that exchanges hello packets at a high rate and detects if a link breaks. It's like the BGP keepalive messages you already know, but a dedicated protocol for this purpose. We recommend using the default mode with one packet per second, so that after three seconds a data link failure is detected.
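
To make the timing concrete, here is a minimal sketch of that detection logic in Python ‑‑ a toy model rather than the real BFD state machine, with made‑up class and method names: with one hello per second and a detection multiplier of three, a dead link is declared down after roughly three seconds.

```python
import time

DETECT_MULT = 3        # declare failure after three missed hellos
TX_INTERVAL = 1.0      # one hello packet per second, as recommended above

class BfdSession:
    """Toy model of BFD-style liveness detection, not the real protocol."""

    def __init__(self):
        self.last_rx = time.monotonic()
        self.state = "UP"

    def on_hello(self):
        """Called whenever a hello packet arrives from the far end."""
        self.last_rx = time.monotonic()
        self.state = "UP"

    def poll(self):
        """With the defaults above, a dead link is declared DOWN after ~3 s."""
        if time.monotonic() - self.last_rx > DETECT_MULT * TX_INTERVAL:
            self.state = "DOWN"
        return self.state
```

The real protocol also negotiates intervals and multipliers per session; the fixed constants here are only for illustration.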

If you look at the north‑bound distribution of Link‑State and traffic engineering information using BGP, which is also called BGP Link‑State, the thing is that this proposal comes with the two elements we need here: we need something that allows us to model the network, and then, once we have a model of the network, something that lets us transfer this information from one BGP speaker to another one. And BGP Link‑State provides exactly that: we can model the IXP network as nodes ‑‑ that's the term BGP Link‑State is using; by a node we mean here a client router or the route server ‑‑ and we can use the BGP Link‑State term 'links', which allows us to model the data plane information.

Together with what we call a next hop information base ‑‑ that's a table that records the reachability of the next hops at the client router, and also at the route server, where we need that data structure for each peer ‑‑ we have all we need to get our solution working. Let me quickly show how this works in practice.
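
Before the slide walkthrough, the mechanism can be compressed into a few lines of Python ‑‑ a rough illustration only, where the peer names, prefix and table layout are invented for the example: a next hop information base per peer, and a route server that filters routes per peer against it.

```python
# Toy model: a next hop information base (NHIB) kept per peer, and a route
# server that excludes routes whose next hop a peer reported as unreachable.

nhib = {
    "peer_a": {"peer_b": True},   # peer A can currently reach next hop B
}

rib = [
    {"prefix": "203.0.113.0/24", "next_hop": "peer_b"},
]

def routes_for(peer):
    """Route selection for one peer: drop routes via unreachable next hops."""
    reachable = nhib.get(peer, {})
    return [r for r in rib if reachable.get(r["next_hop"], False)]

print(routes_for("peer_a"))       # the prefix of B is announced to A

nhib["peer_a"]["peer_b"] = False  # BFD reports the A->B data link as down
print(routes_for("peer_a"))       # [] -- the prefix via B is withdrawn

nhib["peer_a"]["peer_b"] = True   # BFD session re-established
print(routes_for("peer_a"))       # the prefix of B is announced again
```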

So, here we have the scenario again with the route server and the two peers, and the BGP session is already set up, so the route server will update its next hop information base and it sends out to peer A a BGP announcement saying: this is the IP prefix from peer B; and peer A adds to its next hop information base that there is a potential next hop for it, which is B.

So, peer A receives that and automatically sets up a BFD session and tries to verify the connectivity, and during that process it updates its next hop information base and sends out an update to the route server saying that there is indeed a link between peer A and peer B.

So, now the route server is aware of the reachability information as peer A sees it, and during the route selection process it does for peer A, it can take that kind of information into account and exclude all routes whose next hop is not reachable for peer A.

So, now what happens if a data link breaks? In this case, the BFD session will go down because there is a data link failure, right, so peer A recognises this and its next hop information base is updated, which results in a withdrawal of the link that was seen between peer A and peer B, and this information is forwarded to the route server. The route server takes that information into the route selection process and excludes the routes for which the next hop is marked as unreachable. And in this case, as you see in the BGP announcement here on the slide, the prefix of B is not announced any more, or actually it's withdrawn.

So, what happens if the data link failure gets resolved? In this case, the BFD session will get re‑established; this is because the client router will keep trying to set up a new BFD session for some time, and if it's successful, then it knows the data link is back again. In this case, it also updates its next hop information base and sends out an update saying that the link is available again, so a link between A and B is there, and this information is sent to the route server, where it is then included in the route selection process. This time there is no route for which the next hop is marked as unreachable, and the route server accordingly announces the prefix of B to A again.

So, that's how the solution works in practice, and as I said, this presentation is about an Internet draft we are currently working on, and the Internet draft was already adopted by the Inter‑Domain Routing Working Group of the IETF. The document is available at this link, and there's a mailing list, the Inter‑Domain Routing Working Group mailing list, so if you have feedback or comments or want to discuss something with us, please use that mailing list.

And in the document that's currently available on the IETF website, the change we just made, from carrying the next hop cost information in BGP to BGP Link‑State, is not yet reflected. The reason we switched to BGP Link‑State is that it provides a similar mechanism to the next hop cost, but it is well supported by the router vendors and most of them have already implemented it, so it made sense for us to switch to that one. And we welcome any comments on that; BGP Link‑State is still an Internet draft, so we can probably still modify things if that helps us make it even better.

That's pretty much it from my side. Do you have any questions, comments, feedback?

So thank you very much for your time. Any questions?

(Applause)

AUDIENCE SPEAKER: Peter Hessler from Hostserver. My question is: in your example you have network A and network B; what happens if network B does not support bidirectional forwarding detection or this mechanism at all, but A does?

THOMAS KING: Right. In that case this mechanism is not working, right, because if there is no BFD support, we can do nothing about that. In the document that's on the IETF website we describe how to resolve that situation: this feature is announced with BGP capabilities, so the route server knows which capabilities are supported by the client routers, and it actually distributes this information as well, so the different client routers know what capabilities the different next hops have, and based on that, the BFD session is set up or not. Does that answer your question?

AUDIENCE SPEAKER: Perfectly, yes.


AUDIENCE SPEAKER: I have a question: why did you try to use BGP‑LS and not, for example, ADD‑PATH to convey which next hops are available? If the client router has to support BFD anyway, it could make that decision.

THOMAS KING: Right, so you are saying: why do we need BGP Link‑State to transport the information back to the route server, and why not just use ADD‑PATH and then let the decision be made by the client router?

AUDIENCE SPEAKER: What I see as the advantage is that you get a picture of which next hops fail. But I don't know if that's the intention?

THOMAS KING: You mean by ‑‑

AUDIENCE SPEAKER: You can aggregate the information ‑‑ the Link‑State information will tell you which next hops fail on the IXP, right?

THOMAS KING: Exactly. But the route server will know that, right? And what you are saying is that with ADD‑PATH, the route server distributes not only one best path for a given route but distributes many, and then the selection should be done by the client side. Yeah, but still in this case you need something to verify whether the data link failure happened or didn't happen, and for that you would need something like BFD anyway.

AUDIENCE SPEAKER: You would need BFD, but you would not need BGP‑LS; that would not be necessary.

THOMAS KING: Right. We discussed things like that; ADD‑PATH would also be a good solution, but we thought it might be even better if the route server, which is the one that does the route selection process, knows which links are down and just removes those next hops from the route selection process.

AUDIENCE SPEAKER: So there was a draft a while ago that myself and John Scudder and a couple of other folk wrote, something like this, where basically all that the route server does is inform all the other participants how to reach each other, and then they go BGP directly between themselves; that way the data plane and control plane follow the same path. I wasn't sure if you had seen that or...

THOMAS KING: I think I have at least heard of it. So what you are saying is that the route server is no longer the one that does all the BGP route selection, but it helps set up direct, bidirectional BGP sessions between the clients?

AUDIENCE SPEAKER: Yes.

THOMAS KING: Yes, that's also interesting work, that ‑‑

AUDIENCE SPEAKER: Basically it just introduces them to each other, says hello, there's where the other guys are, and then steps aside.

AUDIENCE SPEAKER: Michael from the RIPE NCC, I have got a question on the chat if that's okay. It's either a two‑part question or two separate questions, I'm not sure, so I'll just go ahead and ask both.

From Sandy Murphy at Parsons, the questions are: isn't the data path versus control path problem true for any sort of third‑party next hop? Is the client to route server modelling here applicable for all the situations where the data path breaks?

THOMAS KING: I'm not sure if I got your second question but first, the answer for the first question is yes, that's true.

AUDIENCE SPEAKER: And is the client to route server modelling here applicable for other situations where the data path breaks?

THOMAS KING: Probably, yes. True, I could think of situations where it makes sense, but... the question is a bit abstract.

AUDIENCE SPEAKER: I have a small addendum to the question. Is the route server assistance not possible in other situations, without the next hop database...

THOMAS KING: Is that a question or... is that a comment? Yeah, I think we have a mailing list for that: the Inter‑Domain Routing Working Group at the IETF has a mailing list, so please use that one if you have questions about the draft.

AUDIENCE SPEAKER: Thanks very much.

CHAIR: Any more questions for Thomas? Okay. Thank you very much.

(Applause)

So, next up will be Ivan; he is talking about software defined networks and we'll have a look at how it's doing now. Thank you.

IVAN PEPELNJAK: So, some of you might remember me standing here on this exact podium two‑and‑a‑half years ago telling you what this new SDN stuff is all about, and it's actually just over four years ago that this thing erupted, when the Open Networking Foundation came out and said that they would revolutionise the networks. So let's see what has happened in those four years, and whether we are doing anything new.

And by the way, most of what I have been doing in these two and a half years is trying to figure out this technology: first, what is SDN? Second, whether this technology has any chance of actually working. And third, whether anyone is using it. And it turns out that I got some interesting answers that I would like to share with you.

First, what is SDN? It turns out that four years later it still stands for 'still don't know'. There is no good answer. The official definition is the physical separation of the control and the forwarding plane ‑‑ this is from the Open Networking Foundation ‑‑ where the control plane would control multiple physical devices. Anyone who has ever tried to implement this in practice knows that this is mostly useless.

Any distributed system with a centralised control plane does not scale, and it fails when it partitions. So, this is good for a router manufacturer, this is good for people like Google who became router manufacturers, this is good for academics; in practice, in large networks, I don't see much of this.

There are other people who say, well, software defined networking is packet forwarding done in software, and it's amazing, you can actually do 40 gig or more on a Xeon server these days. Well, we have been doing software‑based packet forwarding since day one; I still remember my AGS router, it was doing software‑based forwarding. So, yes, it is exciting that we can do this now at 40 gig or at 100 gig, but this is not something new.

Then there are people who are claiming that SDN is white box switching. The idea is that you buy hardware from vendor X ‑‑ and X could be HP or Dell these days, or it can be any one of the Taiwanese manufacturers ‑‑ and then you take software from vendor Y ‑‑ that would be Cumulus or Big Switch Networks ‑‑ and you put them together and now you are free, because you can choose whatever vendor for the hardware and whatever vendor for the software. Now, this sounds awesome, and it does simplify sparing and a few other things, but this is just a margin‑shifting exercise. Instead of paying all the money to one company, these guys want you to pay some small money to the hardware vendors and most of the money to them. So, yes, this solution might be a bit cheaper, but when you take a look at the total cost of ownership, eventually we'll get where we are today, more or less.

Then there are people who claim that SDN and network automation are the same thing, and all we need is programmatic access to network devices and we're done. And these people forget that, well, you know, having an API is not SDN. We have had APIs on reasonable boxes forever, and there are other vendors that have only recently learned how to spell NETCONF, but we are making progress; this is no different from traditional networking.

Then I have this one, this is from Wikipedia, and it says that SDN is an approach to computer networking that allows the administrators to manage the network through an abstraction layer, so that they don't have to think about the underlying primitives. For example, if I want connectivity between here and there, why do I have to think about VLANs or VPLS or EVPN or whatever I need? I just want connectivity. And this, to me, sounds exciting and it makes sense, because if we get to the stage where we can actually start provisioning networks like this, then we are making progress; then we can step back and allow the users to do the provisioning themselves, because they don't have to know the technical details.
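
As a sketch of what such an abstraction layer could look like ‑‑ everything here is hypothetical; the Fabric class and its connect() call are invented for illustration ‑‑ the user asks for connectivity and the layer silently picks the primitive:

```python
from itertools import count

class Fabric:
    """Hypothetical intent-style API: the operator says *what* they want,
    and the layer below chooses the primitive (here simply a VLAN)."""

    def __init__(self):
        self._vlan_ids = count(100)   # implementation detail, kept hidden

    def connect(self, port_a, port_b):
        vlan = next(self._vlan_ids)
        return {"intent": f"{port_a} <-> {port_b}",
                "realised_as": f"vlan {vlan}"}

fabric = Fabric()
# The user asks for connectivity; VLANs/VPLS/EVPN stay below the abstraction.
print(fabric.connect("leaf01:Eth1", "leaf07:Eth3"))
```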

But if you look at this in a bit more detail, some people are already doing that, and they are calling these orchestration systems, so how is SDN different from a glorified orchestration system? The best things I have seen so far are glorified orchestration systems. And then I was talking with people who are actually doing things like what I have been describing so far in these slides, and, for them, this doesn't matter. They don't even want to call whatever they are doing SDN; they are too smart for that. For them, this whole thing is a lifestyle change. Instead of carefully crafting each service out of commands on individual boxes, like people were making shoes in the middle ages, these people want to automate whatever they're doing and step away and allow the orchestration systems to do their stuff, which is how we should be doing things today.

So, for the people that actually work on these things in production networks, it's a lifestyle change, it's a change in mentality, and I'll mention a few of those later on.

And as I was talking with these people, it turned out that there are like four or five different architectures that are commonly used. The first one is device provisioning systems: something template‑driven, where people use an open source tool that has awesome templating support, so they would build router or switch configurations and just deploy them onto their switches or routers as they are provisioned.

And I know someone who is managing a large data centre, and he said he hasn't logged into his switches in two years. All he is doing is changing the parameters when he is adding a new switch; configurations are built automatically, deployed and replaced automatically during the day, no maintenance windows; he hasn't logged into one of his switches in years. There are obviously vendor solutions like Arista's provisioning service or Dell Fabric Manager and so on. The idea is you template your configs, you build your configs automatically, you deploy them automatically. Some people have extended this to service provisioning: yet again you would template something, you would build your service definition, and the service definition based on the template would generate the config files, which are pushed down to the boxes. Most Cloud orchestration systems work this way; NCS, which is now Cisco, is one of the well‑known examples of an orchestration and provisioning system that works this way, and there are a few others on the slides. And there are actually people who have already implemented these solutions and use this in practice.
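
A minimal sketch of that template‑driven approach, using the Jinja2 templating library that tools like Ansible build on; the template, hostname and VLANs below are made up for the example.

```python
from jinja2 import Template

# A made-up switch template; in practice this would live in a file per role.
TEMPLATE = """\
hostname {{ hostname }}
{% for vlan in vlans -%}
vlan {{ vlan.id }}
 name {{ vlan.name }}
{% endfor -%}
interface {{ uplink }}
 description uplink to {{ upstream }}
"""

params = {
    "hostname": "leaf01",
    "vlans": [{"id": 100, "name": "servers"}, {"id": 200, "name": "storage"}],
    "uplink": "Ethernet49",
    "upstream": "spine01",
}

# Build the configuration; deploying it (SSH, NETCONF, ZTP...) is a separate step.
print(Template(TEMPLATE).render(**params))
```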

So yet again the networking team can step away, and the operators can provision services without the involvement of the networking team.

The next ones that I'm seeing in practice are things that adjust the routing or the forwarding tables. And, I mean, this fits all the vague definitions of SDN, and anyone who is doing remote triggered blackholing in his network is actually doing this: you are pushing something out into the network to stop some traffic. Only we didn't know it was called SDN in those days.
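
For example, one common way people script remote triggered blackholing is with ExaBGP, which runs a helper process and reads announce/withdraw commands from it. The sketch below is an assumption‑laden illustration: the prefix and next hop are documentation addresses, 65535:666 is the well‑known blackhole community value, and the exact command grammar should be checked against the ExaBGP documentation.

```python
import sys

BLACKHOLE_NEXT_HOP = "192.0.2.1"    # points at a discard route on the edges
BLACKHOLE_COMMUNITY = "65535:666"   # well-known blackhole community value

def blackhole(prefix, announce=True):
    # ExaBGP would run this script as an API process and read one command
    # per line on its stdin (our stdout); verify the grammar against the docs.
    verb = "announce" if announce else "withdraw"
    sys.stdout.write(f"{verb} route {prefix} next-hop {BLACKHOLE_NEXT_HOP} "
                     f"community [{BLACKHOLE_COMMUNITY}]\n")
    sys.stdout.flush()

if __name__ == "__main__":
    blackhole("203.0.113.66/32")    # start dropping traffic to the victim host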

People are usually using BGP for this. Cariden, which is another excellent tool, yet again bought by Cisco not so long ago, would recalculate the traffic flows across your network based on the metrics on your links; they would build a new network model with different costs on the links and even deploy that automatically into the network, so they would be working on the configuration level. Whereas, for example, Microsoft, in one of their data centres, is using BGP as a mechanism from the controller to the switches to change the forwarding tables in the switches. If you want to know more about that one, there was a presentation at NANOG from Petr Lapukhov, and there are two documents written on this: one on how he was using BGP to build his data centre, so he used BGP as the IGP in his data centre, and the second on how he is using BGP as his SDN tool. He left Microsoft, now he is at Facebook, and he can't tell me what he is working on.

And then, of course, there is the centralised control plane stuff, with all the OpenFlow hype around it, where the intelligence is supposed to be in the central controller and all the switches are dumb. And guess what, anyone who tried to push this into production figured out this doesn't work. NEC was the first one who had a commercial product, and they used a number of OpenFlow extensions to get it working. Big Switch Networks was working on this for years, and finally they got the epiphany and started implementing OpenFlow extensions like crazy, so that they offloaded LACP and ARP and LLDP and all that stuff to the switches, and now their stuff actually works. HP, similarly, with their SDN VAN controller, are using OpenFlow in campus networks to programme exceptions into the switches. For example, they had this very simple application where they would redirect DNS to their intrusion detection engine, so that if a machine gets infected and starts generating weird DNS queries, they would collect those queries and respond to them. It's really PBR on steroids, driven by the controller.

So, I have mentioned a number of different architectures and I already mentioned a few tools. So if you are interested in any one of these things, and you either want to work with a vendor or you want to do it yourself ‑‑ and a lot of people are building these things themselves ‑‑ do you really have to throw out all your existing gear and bring in something new? Well, if you talk to the vendors, the answer is obvious: yes. And you can buy the gear from us.

In reality, you can use most of the existing things, like NETCONF. Any vendor supports NETCONF these days; you can use it to sort of reliably configure your switches. There are no standardised data models around it, but you can work around that. BGP is commonly used to programme the forwarding table; PCEP for people who want to do traffic engineering; and we heard about BGP Link‑State just in the previous presentation. There is BGP FlowSpec that you can use to push really specific traffic filters into your routers if you happen to have Juniper routers; CloudFlare was or is using BGP FlowSpec: they figure out who is attacking them, they push FlowSpec rules into the routers so they drop the traffic at the edges, not in the data centres. Or, if you like, MPLS, another mechanism to install paths across the network. But there are a few things you cannot do with the existing tools. So, for example, it's really hard with the existing tools to redirect OSPF traffic to a central controller to implement your own implementation of OSPF, if you want to do that.

For that you need something like OpenFlow.

There is I2RS, which is the interface to the routing system. There is OF‑Config, which is the configuration part of OpenFlow, which is just a NETCONF data model. People are using XMPP to pass information around. Arista is using XMPP so you can send one command to all the switches and all of them will respond; so, for example, you are looking for a MAC address and you don't know where it is, you send 'show MAC something' to all the switches and you get one response back, and now you know where that MAC address is ‑‑ simple things like that. Juniper is using XMPP to implement a form of scalable multi‑protocol BGP: what they are doing is using a pub/sub model where the edge nodes subscribe to certain messages about certain VPNs and the controller just pushes those messages out through XMPP, and they use a third‑party XMPP server to get the job done. Cisco has onePK; I would love to have an API like that from every vendor out there. But be careful: if you start using that one, you will never change your equipment vendor. You will be too closely tied into Cisco.

And then there are all sorts of 'open something' initiatives; everyone is open these days. And the question that I have to ask myself when I look at all these open initiatives is: will this really save the day? Will our networks get better if we follow some open mantra? It turns out that yes, there are certain things that get better. So, for example, once you get Linux on a switch, you can manage that switch like any other Linux host. You can use source control, you can use Puppet, whatever your server guys like; you can use all the Linux tools to manage your switches and routers, which is awesome. But will this make for better networking? Probably not.

So, do keep in mind that technology is never a solution. It's always an enabler; it allows you to do something else, but if you don't change your business processes, you won't gain anything. For example, that guy I mentioned before, who is deploying his configurations during the day: if he had to follow some ITIL practice and deploy his configurations every fifth Friday of the month, then obviously he would not get anywhere, because he would be doing the same thing with new tools. He would gain maybe 30 seconds because the configs would be built automatically, and he would still have to wait the whole week to deploy the configuration, because the process says you can only deploy the configuration on Friday evening. So we need to change the way we work; this is the main message of this presentation.

If you don't change how you work, if you don't manage to persuade your management that the whole organisation needs to change, you will not get anything done. You might experience a 10 to 20% decrease of whatever your metric is; using white box switching you might decrease your acquisition costs and you may increase your support costs. But without changing the model you will not get anywhere. This is what I'm always telling people to do:

First, throw away everything that's not making money. So, simplify as much as you can. Once you have simplified what services you are offering in your network, you can standardise them. Once you have standardised them, you can automate them. Once they are automated, someone can write a nice GUI that your operators can use to deploy those services. I know that this is not new for ‑‑ I hope ‑‑ the majority of the room, but it's still amazing how many people go, like, this is a good idea.

And I already mentioned this a few times: automate everything. Whatever you are doing a few times a day, it's worth automating. Because, if you automate everything, then your mistakes will be consistent. And if your mistakes are consistent, it's easy to identify a mistake, it's easy to fix it, and then that mistake is never made again. Whereas, if you rely on humans to do manual configurations, the mistakes will be different every time. And, well, that does make for interesting troubleshooting and job security, so maybe don't automate.

And by the way, start now. There is absolutely no reason not to do something very simple today. And something very simple might be as easy as what one of my friends did: he wrote a script that did nothing else than analyse the configuration files of all the devices in the network, documenting where every subnet is and which router, switch, firewall or load balancer has that subnet configured on it. So when someone says, well, I can't get from here to there, at least the operations team can immediately figure out where that subnet is supposed to be, so they can start troubleshooting. Yet again, the answer is good documentation, but how many of you actually update documentation in realtime?
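
A sketch of what such a script might look like, assuming IOS‑style 'ip address' lines in saved config files; the directory name, file extension and regex are illustrative, not from the original script.

```python
import ipaddress
import pathlib
import re

# Matches IOS-style "ip address 192.0.2.1 255.255.255.0" lines; other vendors
# (and IPv6) would need their own patterns.
ADDR_RE = re.compile(r"^\s*ip address (\S+) (\S+)", re.MULTILINE)

def index_subnets(config_dir):
    """Map each configured subnet to the devices whose configs mention it."""
    subnets = {}
    for path in pathlib.Path(config_dir).glob("*.cfg"):
        for addr, mask in ADDR_RE.findall(path.read_text()):
            net = ipaddress.ip_interface(f"{addr}/{mask}").network
            subnets.setdefault(net, set()).add(path.stem)
    return subnets

if __name__ == "__main__":
    index = index_subnets("./configs")
    for net, devices in sorted(index.items(), key=lambda kv: str(kv[0])):
        print(net, "->", ", ".join(sorted(devices)))
```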

So, there is always something really simple that you can do, and there is no reason not to do that, so start now.

And finally, please do stay in touch. If you are doing something interesting, I want to hear about it. I'm doing a podcast on actual SDN solutions and there are a lot of people who like to talk about them, so I'm trying to collect as many things that make sense as possible. So, whoever is doing something interesting, whoever is doing something along these lines that makes your life easier, please let me know.

And my e‑mail address is really simple so you can't miss it.

Thank you. And I hope you'll go back home and start working on some network automation next Monday.

(Applause)

CHAIR: Thank you. Do we have questions? We are running ahead of schedule.

AUDIENCE SPEAKER: Kristian Van Der Vliet, Cumulus Networks. I think you and I probably agree on basically everything, though I would disagree with your assertion that putting Linux on switches doesn't really buy you anything. You know, I think if things are going to change and it's going to work, we should be looking at what the operations guys have been doing for near on the last ten years. So, if you are putting Linux on your switches you can leverage all the tooling and the practices and the workflows, and you get some really, really cool integrations there that at the moment you just don't have. The network layer is basically a black box, and the operations and the application guys are above it and they never sort of meet. So if you put Linux on your switches you can just have the whole stack and, you know, solve a huge number of problems.

IVAN PEPELNJAK: I totally agree with you. The only problem I have with discussions like this, where we're all in violent agreement, is that some people think that Linux on a switch means white box switching, which is not true. You can have Linux on an Arista switch, you can have Linux on a Juniper router, so if you wanted to use Linux on a switch, you could have started years ago. And the problem is, why hasn't anyone started? Because the managers told them, like, no, you can't do that.

AUDIENCE SPEAKER: Benedikt Stockebrand. I'm currently working in an environment where there are network administrators and there are Linux administrators and there are Windows administrators, and the problem there is that in the network team I'm basically the only one who has some sort of experience using tools like ‑‑ well, Linux or whatever ‑‑ to automate things. The good news is that I'm paid there to do an IPv6 deployment instead, so at least I'm spared trying to convince all the network people or teaching them how to do this.

Generally speaking, I fully agree with what you're saying, but it's sometimes tremendously difficult to convince people that they should actually stop touching every single router, switch, access point or whatever, and it's really not so much a technical issue but actually making people change their mind and change the way they work, and actually learn something new that they are unfamiliar with, where they actually have to ask their weird colleagues down the floor who actually appear to know about this. So that's, in a lot of cases, a huge problem. Probably not so much in an ISP context, because things are much better there, but in an enterprise environment this is quite an issue. I have seen that a number of times, especially with some older customers before. It's sometimes really difficult, and when company politics enter the game you just want to get out of there anyway.

IVAN PEPELNJAK: Yeah, but for the last part I would say run away.

AUDIENCE SPEAKER: Definitely, as fast as you can.

IVAN PEPELNJAK: You are absolutely right. And there are at least two parts to that, as you identified. One is the technical, one is the cultural, and we, as networking engineers, always have a hard time admitting we don't know something, and stepping out of our comfort zone and actually asking the server guy what this Puppet thing is, is a scary thing. So get over it.

The other part, you don't actually need Linux on a switch to get the benefit of these ideas.

AUDIENCE SPEAKER: Sure, but you need it on the management whatever to build the abstraction.

IVAN PEPELNJAK: Yes. Well... I know people who are doing these things on OS X or, God forbid, Windows, but I totally agree with you. So, the thing you can do immediately is start templating your configurations. This is operating‑system‑independent, this is vendor‑independent, this is deployment‑method‑independent. If you just do that, you are way better off than most of the people in this world so far.

AUDIENCE SPEAKER: Hi, Michael from the RIPE NCC again. I have got a couple of questions on chat.

First one from Daniel Karrenberg of the RIPE NCC: can you give a rundown of the current favourite automation tools in your toolbox and what they are good for?

IVAN PEPELNJAK: Is this a trick question? Okay, yeah I will.

So, most people are using Ansible for network automation, for a very simple reason: Ansible is the most straightforward tool, and it does not require an agent on the target device, so with Ansible you can start templating things and pushing things to the devices without any vendor support on the device. There are people who are working on linking Ansible with NETCONF, so that you would build the configuration and push it to the device through NETCONF. Chef and Puppet are the other two tools that I would put in my toolbox, with Puppet supposedly being a little bit simpler to use; people are telling me that Chef can get really complex. The only problem with these two tools is that you need an agent on the device, which means that, unless you are running Cumulus Linux, where you can do whatever you wish with the device, you require vendor support. And, yes, all vendors claim they support Puppet and Chef on their boxes, but you have to look into what they are actually able to provision with Puppet and Chef on their boxes, and usually it's interfaces and VLANs, because they are implementing Puppet and Chef for server administrators, not for networking engineers. So if you are okay with provisioning interfaces and VLANs on data centre switches, then maybe Puppet and Chef are the right tools to do that; otherwise stick with Ansible.

AUDIENCE SPEAKER: I have got one more question, from Giacomo Bernardi from NGI. The first comment is: good point from the Cumulus guy, sorry I missed your name. Also, what about the packet forwarding acceleration coming up lately, i.e. DPDK for x86? It looks promising?

IVAN PEPELNJAK: Okay. So, first let's talk about Cumulus and then I'll go into DPDK. I didn't understand the question about ‑‑ what was the question about Cumulus again?

AUDIENCE SPEAKER: What about packet forwarding acceleration coming up lately?

IVAN PEPELNJAK: Okay. So from the Cumulus perspective, it's very simple: they are using the existing hardware forwarding. So Cumulus is just a control plane for the existing switch; they build the routing table, they put the routing table into the Linux kernel, and then they have their own tool that pushes that routing table down into the forwarding ASIC, which today is the Trident II from Broadcom ‑‑ and I think they are working on others as well ‑‑ so the switch actually works like any other Trident II switch that you buy on the market. Only the control plane is based on Linux.

As we go into the pure x86‑based solutions, which is where DPDK comes in, there are a number of solutions that accelerate the packet forwarding on x86, because the Linux kernel is just bad at forwarding packets. It's an awesome server operating system, but it's really bad when it comes to packet forwarding. So, you have to get rid of the Linux kernel forwarding path; you have to replace it with something which is way faster, with a way better code path, and there are a number of solutions on the market: Intel's Data Plane Development Kit is one of them, there is PF_RING from the guys that wrote ntop and ntopng, there is Snabb Switch, which is used in some of the NFV trials, and there is one other whose name I can't remember. So all these things replace the Linux kernel forwarding with their own code, either in kernel or in user space, and most of them can process 10 gig ‑‑ well, 10 gig at line rate per CPU core. So, I hope that answers that question.

CHAIR: Anyone else? We are ahead of schedule and now it's time for the lightning talks, so...

(Applause)

We have Massimo up with the description of RIPE Atlas streaming, which is...

MASSIMO CANDELA: Hello. My name is Massimo Candela. I work for the R&D department of the RIPE NCC. Today, I want to introduce you to something completely new that we developed in the last months, something that I think is useful, easy to use, and that can help you to get the maximum from the RIPE Atlas measurement network.

So, probably a lot of you already know about the RIPE Atlas measurement network. It's this huge network of measurement probes distributed worldwide. This is the map of their distribution, with connected and disconnected probes. We have two kinds of devices: the small ones, distributed in end‑user connections, and the big ones, called anchors; they are professional hardware in professional environments. So they have different use cases, different performance, different connectivity, and we have around 8,200 probes connected, and 119 anchors, so we collect around 2,500 measurement results per second. And especially when you host one of these devices, you can schedule measurements ‑‑ periodic measurements, active measurements ‑‑ to your target, service, whatever. We have five types of measurements: ping, traceroute, DNS, SSL, and a new one, NTP. And, of course, even if you don't host any of these devices, you have access to all the public data collected from all the people that flagged their measurements as public.

So, after you schedule your measurement, we collect the results, we process them and we store them, and after a few minutes you can download the results in JSON, or you can click on one of the tabs and visualise some representations of the measurement. Now, one piece of feedback that we received often is that it would be nice to monitor your network in realtime, and we thought this feature would be really nice. So, the idea is that, for example, you have your screen, your phone or whatever, your monitor, your visualisation on a screen on a wall, and you want this visualisation to be updated in realtime while the results are coming in. That's why we developed the new RIPE Atlas streaming architecture. Basically, it's a way to receive the measurement results as soon as they are sent by the probes. So, after a probe sends a result, you receive it in realtime.

It's based on a publish/subscribe model, over WebSockets. You can get both the measurement results, so the single samples of your measurement, and the connection and disconnection events of the probes. And you also have ‑‑ this is a prototype feature ‑‑ the possibility to replay history. The objective was that basically we wanted to push people to use our data, and a way to do it is to make their life easier, so what we did is create a simple API for the streaming that is the same for all the data types that we have, so you have to remember really few keywords, and we put some logic on our side. So, for example, you can ask to receive in the streaming channel only ‑‑ I don't know ‑‑ the ping results that have more than 60 milliseconds of round‑trip time; just an example. Or, for example, if you have your visualisation on a monitor and you have your script and you want to replay something that already happened ‑‑ because normally your visualisation is only able to see the new samples arriving ‑‑ you don't have to change or emulate anything, because we are doing the emulation on our side; you just have to put some additional parameters in the subscription phase. So that was the idea: to keep it easy.

I'll give you just a few examples. This is a map where, when we receive a connection or a disconnection event of a probe, we put a green or a red dot, and you can also filter by autonomous system number. This instead is a prototype of our realtime ping visualisation, so these are trends of ping results. You schedule your measurement ‑‑ in this case we group the probes by country, but you can group by whatever you want ‑‑ and every time there is a new sample (the new samples on the right are the bigger dots), the lines are updated in realtime. That's just an example of stuff that you can do. This is in the browser.

So, just a few words about the architecture. We have the normal probes connected to the controllers as usual, but in this case the controllers are pushing the results to some RabbitMQ queues ‑‑ there are two servers for now, they are clones ‑‑ and basically we have some Python consumers grabbing the results and pushing them on. We have some Node.js instances doing the logic for the filtering, keeping track of the subscriptions and the connection handling. That was the architecture on the server side. What about the client side? The client side is just something easy, for example Socket.IO with a few lines of JavaScript code. If you like, you can do it with Python or whatever.
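
For illustration, the Python‑consumer piece of such a pipeline might look roughly like this, using the pika RabbitMQ client library; the queue name, host and message fields are assumptions, not the actual RIPE Atlas internals.

```python
import json
import pika  # RabbitMQ client library for Python

def on_result(channel, method, properties, body):
    # Each message would be one measurement result pushed by a controller.
    result = json.loads(body)
    print("probe", result.get("prb_id"), "sent a", result.get("type"), "result")

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="atlas-results")   # queue name is made up
channel.basic_consume(queue="atlas-results",
                      on_message_callback=on_result,
                      auto_ack=True)
channel.start_consuming()
```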

Just to give you an idea about the size, this is basically a working example you can copy and paste: in the first line you load the library, after that you connect to the streaming, you start listening to the channel, and you ask what you want to see in the channel, so you subscribe ‑‑ in this case to all the results of this measurement ID. So if you copy and paste this, you are going to get this: in the console log, every time a probe sends a measurement result, you are going to receive it.
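
The same idea in Python, as a sketch using the python‑socketio client package; the endpoint, path and event names follow the pattern of the streaming documentation described here, but treat them ‑‑ and the measurement ID ‑‑ as assumptions to verify against that documentation.

```python
import socketio  # pip install "python-socketio[client]"

sio = socketio.Client()

@sio.on("atlas_result")
def on_result(result):
    # One measurement sample, delivered as soon as the probe sent it.
    print(result.get("prb_id"), result.get("type"), result.get("timestamp"))

# Endpoint and path as per the streaming documentation; verify before use.
sio.connect("https://atlas-stream.ripe.net",
            socketio_path="/stream/socket.io")

# Subscribe to the realtime results of one measurement ID (5001 is made up);
# additional server-side filtering parameters would go in this dict.
sio.emit("atlas_subscribe", {"stream_type": "result", "msm": 5001})
sio.wait()
```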

And that's all. Well, just ‑‑ if you are interested in this technology and all the details, we are going to do a workshop today at 6, and there are going to be exercises too. There are these two repositories ‑‑ the second one is where you can share what you have done with our data, or see what other people did ‑‑ and there is also the streaming documentation, where there are working examples. And one of the most important slides is about feedback: we really care about feedback, we base our work on feedback, so please see how you can send us feedback, and please do it.

So that's all. And now if you have questions, I'm happy to answer.

CHAIR: Any questions?

MASSIMO CANDELA: Today in the side room at 6 p.m., the workshop.

AUDIENCE SPEAKER: Do you use any existing standards for that pub/sub mechanism, like MQTT or CoAP?

MASSIMO CANDELA: That's a good question. For now, we are basically using the Socket.IO event subscription model, so we base everything on events; the publish/subscribe is based on events in this case, and ‑‑ well... we have a channel, you start listening for the results, and you decide what you want to see in the channel. It's based on Socket.IO, so for now this is the only implementation that we have.

CHAIR: Any more questions? Okay. Thanks a lot.

(Applause)

CHAIR: Next up is George Michaelson, he is talking about Cryptech.

GEORGE MICHAELSON: Maybe not that...

So, this is something of a call to arms, but it's actually a call to arms in your own interest. It's not Lord Kitchener asking you what you can do for your country, but what you might need to do for yourself.

So, this guy ‑‑ he doesn't have a tattoo on his forehead, by the way ‑‑ he did actually find some interesting things, some quite scary things. What he essentially found is that the assumption that we have some sense of privacy in our transactions on the Internet is essentially not true. And in respect of the hardware that we are depending on to implement fundamental cryptography ‑‑ although I don't actually feel that at any particular point my cryptographic sessions are under threat ‑‑ the general sense is: should I trust this hardware? Well, the answer actually is no. And the reason is that some of the processes that we thought we understood, and in particular the role of agencies like NIST, who are a standards and review body, have been really seriously called into question as a result of what we learnt from Snowden. NIST had a functional role specifying cryptography in conjunction with other agencies, in the interests of the Federal Government's needs for cryptography, both in Government and in the military, and also in commerce and in the wider world, because it's a useful thing. But NIST ‑‑ according to what I have read, and I admit it's only what I have read in the public media; I'm not directly aware of this, and I'm not a cryptographer and I don't play one on TV ‑‑ NIST essentially allowed the NSA to undermine the integrity of the cryptographic algorithm development process, and they are sufficiently embarrassed about this that they are formally withdrawing their badging from algorithms and leaving them in the control of the NSA, and they are currently undergoing a review of all of their historical cryptographic work in recent years. But to all intents and purposes, their credibility as an agency who are looking after even their own citizens' goodwill ‑‑ and I would say my goodwill ‑‑ is blown. Algorithms have demonstrably been played with, and we now have to question fundamental design choices around technology and systems that we are using.

Now, I'm only partly paranoid. I do have a small number of enemies, and I know paranoia can go to a strange place, but I'm very unhappy about this. I feel like something that I was going to depend on in the long term ‑‑ belief and trust in a process of review and development of algorithms ‑‑ I believed in that. Actually, I was quite naive: I believed that when they said they were doing this in the wider public interest, that's what they meant. And I no longer feel that. I feel this has been compromised and I'm extremely unhappy. I'm so unhappy that when I got up to speak about this at the last IETF meeting in Dallas, I tripped over my own feet and fell down on the floor, I was so enraged to speak at the microphone, which was very amusing and embarrassing for me, but I'm really very unhappy about this development.

And the fundamental position for me is that we can no longer routinely trust equipment or technology we buy off the shelf. If we are going to regain some trust in this, we are going to have to make it for ourselves. So there is a group of people ‑‑ and I know some of these people, so while on a personal level I say I'm not a cryptographer and I don't know any, I can actually say I know some people who do, I know some people who are good cryptographers, and I know some people I trust who are involved in this endeavour. If you have ever gone to this website ‑‑ you have to refresh your browser state; it was quite ironic going to a secure website and being told 'bad certificate', but they have fixed that ‑‑ these people have a publicly reviewable website where they are discussing the design and implementation of a trustable hardware security module, which is a fairly fundamental building block of trust in cryptography on the net, and they have done a design in Verilog and VHDL which is testable, and you can go and review this. If you understand these languages, you can actually understand what this is doing, or maybe you know someone who does, and you can say to them: is this actually doing what it says it does?

And unlike a lot of commercial solutions, where we're told 'this conforms with FIPS 140 level B option 2' ‑‑ I mean, this is a description that's just completely out there ‑‑ if you want to understand whether it's trustable, you can talk to people that you have some faith in. It's not as good as knowing for yourself ‑‑ I know there are one or two cryptographers here and maybe they have already done this, I don't know ‑‑ but at least there is the potential for us to develop some community trust in the technology.

Now, these guys have actually got to a really amazing point. They have hardware based on the Novena board, which is a development system using ARM technology with an FPGA, and they actually have this running the crypto engine. Separately, they have been working on the creation of sources of randomness. This is a very strange place to be: how do you make truly random numbers on a system? A personal story here: a long time ago, my dad was working on a computer system in the fifties, and he was quite interested in the software random number generator, so he contacted the post office in Britain and said, could we compare our random numbers with ERNIE, which is a thermionic valve based random number source that was used by the post office to run the national lottery, and the post office said no, and their reasoning was that if he found a non‑random distribution of numbers in the lottery system, he might be able to game the system and win. So, this thing of how do you test randomness? It's actually a really quite well understood problem; it's been a big issue for a very long time. These guys have actually had their design tested using the standard profiles for testing randomness, and it's passed, which is a really lovely moment. They have still got to do some stuff to harden this system, so that if you try to take keying material off it, it destroys keys or wipes or behaves in an appropriate way. This is a work in progress, and the problem is, they have hit their funding limits. So, here is an example of what they are building, and it must be cool because it's got flashing lights. So, the critical point here is that, rather than get captured ‑‑ it would be amazing if there was a millionaire in the audience who leapt up and said, yes, I will put millions of dollars on the table now to protect fundamental privacy; that would be lovely, but they actually don't want that. What they would like is for quite a lot of people to fund them, so they don't have any risk of capture. So if you have a lot of money, you could use ISOC as a clearing house ‑‑ they are prepared to channel the money through that ‑‑ but in practice they want a lot of diverse funding sources to guarantee some independence and some public perception of neutrality, and they'll take individuals too.
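
To give a flavour of what those standard randomness profiles check, here is a toy version of the simplest test in the NIST SP 800‑22 battery, the frequency (monobit) test; real validation runs a whole battery of such tests over far more data than this.

```python
import math
import secrets

def monobit_pvalue(bits):
    """Frequency (monobit) test: are zeros and ones balanced overall?"""
    n = len(bits)
    s_obs = abs(sum(1 if b else -1 for b in bits))
    return math.erfc(s_obs / math.sqrt(2 * n))

# 100,000 bits from the OS CSPRNG should pass comfortably (p >= 0.01).
bits = [secrets.randbits(1) for _ in range(100_000)]
p = monobit_pvalue(bits)
print(f"p-value = {p:.4f}", "(pass)" if p >= 0.01 else "(fail)")
```

A strongly biased source (say, 60% ones) drives the p‑value towards zero, which is exactly the kind of defect these profiles are designed to catch.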

If, like me, you are not a cryptographer and you don't play one on TV, there is actually stuff you can do. You could do what I'm doing: you could talk about it at meetings, you could promote the idea, you could do lightning talks, or you could get involved in the review process or in the porting process. This is going to talk PKCS#11, so if you are going to do software development, there is an opportunity for you to use this technology, to use this device, to use this interface. Do stuff. Test it.

So, some very nice people have already made a commitment to do this. IANA got involved, PIR got involved, SUNET, SURFnet, Afilias; a certain agency very dear to all our hearts has been underlined and bolded here because they also made a commitment, and if you know the board of the RIPE NCC, or the senior staff, do feel free to talk to them about this. I'm not asking you to hassle them to spend more money, but they obviously like to understand the community engagement, because potentially this is something they'd want to think about a continuing relationship with, so feedback is useful. Google made a big commitment; they were very, very concerned about this, so well done to Google. I know it's been discussed in the APNIC region as well. If you want to get real information, there's a much better description there of what's going on, and you can also look at the website.

So, it really is the classic call‑out ‑‑ what did you do, Daddy? ‑‑ you really need to think about what you did in this situation, and it's not just about daddies, it's about everybody.

Thank you.

(Applause)

I'd take questions but I'm probably going to have to say I'm not a cryptographer a lot of times.

CHAIR: Questions?

FILIZ YILMAZ: Well, I'm not a cryptographer either, but I studied a bit of cryptography way back, as a mathematician at the time, and what I want to share is ‑‑ sorry, this is Filiz Yilmaz speaking on her own behalf, not representing anybody ‑‑ I really congratulate you for taking this up here. This issue was raised two years ago at an IGF; I remember hearing from technical community members that this is outside of their scope, and there were voices there telling people that, well, those people whose actions you are now concerned about are also part of the technical community, they are engineers, they are developers. So, I think this is great that you are taking ‑‑ or the group is taking ‑‑ active responsibility.

GEORGE MICHAELSON: I'm not in any formal sense a member of this group. The people who are really doing the hard yards are Cryptech; they deserve the ‑‑

FILIZ YILMAZ: I just want to say out loud that this is a needed move, in my personal opinion, and I'm really happy that the technical community is moving.

GEORGE MICHAELSON: I'm essentially moved to speak as an individual who uses the network. The fact that I'm a technologist is, to me, irrelevant. I'm glad you're a mathematician; that means I can trust things you say about cryptography. As a person on the global Internet, I'm just appalled at the erosion of trust in a fundamental way. So, as a user, I want this initiative to succeed. As a user, sorry...

AUDIENCE SPEAKER: Benedikt Stockebrand. I have been kind of involved with the Cryptech project on the random numbers side, and, yes, what you say is right, but actually we wound up with secondary issues, like the fact that most of the random number stuff ‑‑ the real hardware random number stuff ‑‑ that you find in the literature is basically based on The Art of Computer Programming, Volume 2, from '69 or so ‑‑ I have the second edition only, I'm not that old ‑‑

GEORGE MICHAELSON: I got my dad's ‑‑

AUDIENCE SPEAKER: My dad didn't do this sort of stuff, unfortunately. But anyway, there is actually more to this project than doing the hardware. We are actually finding ourselves doing some kind of research, as a side area, on cryptography that has been long neglected. If that's kind of more motivating than anything you have already said, I would ‑‑

GEORGE MICHAELSON: I would absolutely echo that. If there's anyone here who is interested, or who knows people in the academic community interested in pursuing this space, I think it would be fantastic to regain control of fundamental research in these questions.

AUDIENCE SPEAKER: Exactly. By the way, FIPS 140 has quite obviously screwed up big time on this as well, regarding random numbers, not just some of the other funny stuff that has been more well known.

AUDIENCE SPEAKER: Eric Vyncke, Cisco. So, I'm a simple engineer, even doing IPv6, so don't trust me. It's very nice to have an open crypto algorithm, it's nice to have open VHDL, but once the chip is built, how is the Cryptech project sure that what is being built is built and shipped to specification?

GEORGE MICHAELSON: I have no idea how they are going to make that fly, none. It's a good question ‑‑ luckily, there is someone smarter than me on the floor ‑‑

AUDIENCE SPEAKER: So, Warren Kumari, Google. I'm sort of vaguely involved in this. Much of what Cryptech is doing is actually not building hardware. It's going to be designing more of a reference implementation, and then, because lots of it is being designed to be done on FPGAs, you can use whatever you like. FPGAs in theory are low‑level enough that it should be fairly hard to design in a back door that will work, you know, regardless of what stuff gets put on them. A lot of this work is also to try and design a trusted tool chain, because, you know, a lot of FPGA work goes through a compiler and you have no idea how that works, so there are cross‑compiling type ideas to try and end up with a trusted tool chain. Hopefully, you won't get a Cryptech device by buying a big box and installing it. You will go to someone, potentially not in the US, that you trust, and say: here is a design, I would like you to build it, I trust you; or: here is a design, I'm going to compile this and build it myself. That, hopefully, gets around some of these concerns.

CHAIR: Did you have a question ‑‑

AUDIENCE SPEAKER: I was going to stress what George was saying, that this is more of a reference design, a reference implementation ‑‑ go build your own ‑‑ but I said that to Eric.

CHAIR: Then Rob next, because you spoke.

AUDIENCE SPEAKER: My name is Rob Blokzijl, I'm a past Chairman of RIPE, I just walked in from the street. I fully support this project, and I find it amusing that the message got, somehow, lost, George. We are now discussing how to build random generators and other aspects of fascinating crypto technology. I understood your message was we need more sponsors.

GEORGE MICHAELSON: Well, yes, I felt ‑‑ I felt constrained in this community, as a member of another RIR, to come with a begging bowl, but the message is important, you're right: they need the money, they need diverse money, they need the funds, and it's really important.

ROB BLOKZIJL: And as you said, unused cryptographic brain cycles are also welcome, but this project needs money, so that message should not be lost while we continue discussing random number generators. In a previous life I was a particle physicist; I know the problems with good random number generators.

GEORGE MICHAELSON: And I repeat the RIPE NCC made a very significant contribution early in the life of this project and I think we should recognise that.

AUDIENCE SPEAKER: Benedikt Stockebrand once more. Both in answer to the money aspect and to Eric's question. One of the ways to make it more difficult to subvert these designs is by diversity, so have lots of manufacturers, which is economically completely crazy and which is another reason why this, hopefully, works: because if we have, like, 20 dozen different manufacturers using the design we have come up with, it will be much harder to subvert than if it was just a single one. And that also means that, yes, funding is actually important, because there's very little chance that we'll ever make whatever money we put into this development out of it again, or otherwise make it a business case.

GEORGE MICHAELSON: Thank you. Send money now.

CHAIR: Money seems to be important. Anything from the chat forums? George, one more...

AUDIENCE SPEAKER: A couple of things on the chat. Firstly, from Sandy again, it's just a comment really: important research is ensuring the hardware design follows the spec and inserts nothing other than what is in the model. Very hard, and recognised by Cryptech.

GEORGE MICHAELSON: I think that echoes what Warren said about them trying to develop trusted tools and the qualities around how you can verify what's in the system, so I think that's pertinent, yes.

AUDIENCE SPEAKER: That's all I had. No further questions.

CHAIR: I actually have just one. Looking at this presentation, and some of the comments, I think this topic of validation and verification is sort of the false bottom on which a lot of this is resting. You notice the "we want small, discrete sums because we don't want to be captured by one funder" ‑‑ right, we don't want people from this agency or that agency ‑‑ and I think it points to the inability to validate that something really is secure in X context or Y context. In concert with working on things like Cryptech, we really need to look at better tool chains, like Warren was talking about ‑‑ you know, reproducible builds, other things that allow us to ensure that this is secure and that we can guarantee that with some sort of seal, because at this point we are relying on networks of goodwill and trust that break apart ‑‑

GEORGE MICHAELSON: I think it goes back to a concept of what the academic discipline used to be, when people audited each other's outcomes and results and there was consensus around behaviour. So there is a role for people who are prepared to say: well, I can't do this, but I will look at the VHDL and check that it's valid and legal, and that when I run it in your tools I get the same outputs ‑‑ and then stand up in public and say, I'm a member of the community in good standing, I did this and this is what I found. These kinds of things are very important. But mainly money.

(Applause)

CHAIR: Next up will be [] Berry.

SPEAKER: Thank you, and welcome, and thank you for having me here. I am fairly new to this community and to this platform, and this is a late entry, so bear with me.

As we stand here as a community, we're very interested in technology and its advances, so I'm preaching a bit to the choir when I say that putting IPv6 out into the public domain, DNSSEC, and all kinds of other advances is a good thing. However, what we see in actual use, and especially IPv6 deployment, is in a poor state: a very small percentage of HTTP traffic is actually going over IPv6. I have yet to find a Dutch website, a Dutch web shop, that actually has DNSSEC enabled, and most sites we see have poor security standards. So, the adoption is actually quite low.

There is a need and a push from a technology point of view, but it's not getting out there fast enough. We need better incentives. Just having a technological incentive is not good enough, and the business incentive is not very strong. In the end, it will be good, but if you are the first ISP to introduce IPv6 you are going to take all the pain, and only after a large number of ISPs, registrars, and other carriers have adopted it does the benefit start coming.

So there's a lot of cost involved in trying to get it out there, but no immediate benefit from it. There is another way to get a benefit, though, and that is to create public interest and awareness ‑‑ and it's currently not there, this awareness. People do care about their own privacy and security: as soon as your tax returns can be viewed by everyone, or your healthcare records are viewable by anyone, and those systems get broken into, then there is generally an outcry: how can this happen?

So, it is important to people to have their security guaranteed, but it hasn't reached the individual level yet. If you say: hey, your website ‑‑ your private web shop ‑‑ doesn't have DNSSEC enabled, people say: yeah, that's nice, what's for dinner? It's not generally a topic of interest at a family dinner.

So we need to grow some interest among the general public. And there's an initiative, started by a number of organisations ‑‑ I'll come back to them later ‑‑ which tries to create this public awareness; just like the previous talk, trying to raise awareness, but aimed at the general public audience. Create a hall of fame, create a hall of shame, and what you generally see is that if a web shop or an ISP gets targeted, their customer care will generally respond to it. So, there is interest from this, and we can create some driving force.

Now, this consortium was not started by companies; it is a Dutch initiative, so there are many Dutch partners involved: SIDN, the operator of the .nl ccTLD; SURFnet, the partner that runs the network for the universities; the RIPE NCC; the Internet Society ‑‑ all partners that are not for profit, either Government related or umbrella organisations that have some say in this, and that carries weight with the general public. As you can see, we are not yet a member of this set of partners; we are just an implementer of this site.

So, there is a website available to the general audience, without extra plug‑ins or difficult language; it's easy: get a score. How is your ISP doing? How is this website you are visiting doing? Analysis of famous web shops ‑‑ the equivalent of Amazon, or just a public grocery, or how, in fact, DHL behaves ‑‑ and the Government websites: how are they doing, how is the tax office doing on its security? And note: without exception, they are not getting a 100% score. This allows us to gather relevant data about the websites that people actually visit, and we can grow these tests.

Now, I had planned a little bit of a demo, but that's out ‑‑ this is only a PDF, so that won't work; it shows only a very simple picture. There is a detailed report showing: there are a number of name servers not IPv6 enabled, DNSSEC is not enabled, you're bogus, whatever. But that doesn't matter to the general audience; for them it is: you are not able to reach the entire Internet, you are not secure enough, people can listen in. Your score improves as your technology advances ‑‑ and this isn't even a high score, for the bank under scrutiny here ‑‑ but most sites are actually scoring very low.
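(Two of the checks mentioned in the report ‑‑ whether the name servers are IPv6 enabled and whether DNSSEC is enabled ‑‑ can be sketched in a few lines of Python with the dnspython library. This is a simplification of what such a site does: the domain below is a placeholder, and DS‑record presence is only a proxy for full DNSSEC validation.)

    import dns.exception
    import dns.resolver  # pip install dnspython

    def has_record(name: str, rdtype: str) -> bool:
        try:
            return len(dns.resolver.resolve(name, rdtype)) > 0
        except dns.exception.DNSException:
            return False

    def check(domain: str) -> dict:
        results = {}
        # IPv6: does at least one authoritative name server have an
        # AAAA record?
        try:
            ns = [str(r.target) for r in dns.resolver.resolve(domain, "NS")]
        except dns.exception.DNSException:
            ns = []
        results["ns_ipv6"] = any(has_record(n, "AAAA") for n in ns)
        # DNSSEC: is a DS record published in the parent zone? (Presence
        # only; real validation also checks DNSKEY and signatures.)
        results["dnssec"] = has_record(domain, "DS")
        return results

    print(check("example.nl"))  # placeholder domain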

Now, this is not the end. We are growing this site; we'd like to get more tests, and we are adding tests to the platform. We are getting a lot of criticism, which is good, because we are very strict ‑‑ a lot of criticism, actually from many people here, saying this is not realistic because it's too strict ‑‑ but that's on purpose: we are very strict in trying to raise awareness of this. We see from the statistics that a large number of websites have already been updated, and what we hope is that by attracting more and more visitors ‑‑ we have had some good spikes in our visitor numbers from publicity ‑‑ we can create additional awareness. Thank you.

(Applause)

CHAIR: Do you have any questions?

AUDIENCE SPEAKER: Benno Overeinder, NLnet Labs. The software behind this is intended to be open source, etc., etc.? It's not yet available, but...

SPEAKER: I deliberately did not talk about all the plans. We are expanding the tests, and in the end we want to make this test open source. We are adding tests to this platform not by reinventing the wheel all of the time: there are actually a number of test sites out there for IPv6, for DNSSEC, etc. We are trying to get all these tests in one place for the general audience.

CHAIR: No further questions?

AUDIENCE SPEAKER: From the chat room, from Antoine [], no affiliation. The question says: My domain is all green, except for the fact that I have a self‑signed certificate. Why do I need to spend money on a CA to be considered secure?

SPEAKER: Because the end user will not see you as such. You will get a browser pop‑up saying: this site has a self‑signed certificate, do you trust it? And it would be good for DANE to get out there in the real world, but it's not there yet.
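(The behaviour described is exactly what a standard TLS client enforces. A minimal Python sketch ‑‑ the hostname below is a placeholder ‑‑ of the verification that a self‑signed certificate fails:)

    import socket
    import ssl

    def tls_verifies(host: str, port: int = 443) -> bool:
        # True only if the server's certificate chains to a trusted CA
        # and matches the hostname; a self-signed certificate fails.
        ctx = ssl.create_default_context()
        try:
            with socket.create_connection((host, port), timeout=5) as sock:
                with ctx.wrap_socket(sock, server_hostname=host):
                    return True
        except ssl.SSLCertVerificationError:
            return False

    print(tls_verifies("self-signed.example"))  # placeholder host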

AUDIENCE SPEAKER: Thank you.

CHAIR: I'm going to interject. You should check out the Let's Encrypt project, by the EFF with a lot of others, trying to solve this issue.

AUDIENCE SPEAKER: I just want to add that Let's Encrypt is the big way forward, but I got a hundred percent score and I have not spent a single dime. So you can get free certificates right now. They suck, but they are free. And thank you for the project, I think it's awesome ‑‑ but we are also one of the sponsors, so I'm not entirely...

AUDIENCE SPEAKER: From my experience in Germany, it's always good to show statistics to the guys who don't believe that IPv6, or any new technology, is coming. What are the Americans doing? There are big differences: the US is doing quite well on IPv6, the Canadians are not. Show them statistics. Germany is doing quite well. Belgium is doing quite well; why not the Netherlands? What are we doing wrong? So make them fear that missing the train on IPv6 is bad for their business; they have to fear it, and then they will start their projects. Don't try to educate them. That's not the way we can go.

SPEAKER: We are not in the business of educating people in general, but if there is an incentive for businesses not to lose money ‑‑ because customers leave for a better provider ‑‑ then that might push the technology forward. And Belgium is doing a lot better than the Netherlands on IPv6, and there is big interest in this site from Germany.

CHAIR: Thank you very much. So, we come to the end of the Plenary Sessions for today. Coming up, the Best Current Operational Practices BoF ‑‑ or is it a taskforce? ‑‑ will be in this room, and in the side room we'll have the workshop on recent topics in RIPE Atlas usage. Please go to the Plenary site and rate the talks, help us make the Plenary Sessions even better, and good luck at the workshops and the BoFs.

(Applause)

LIVE CAPTIONING BY MARY McKEON RMR, CRR, CBC

DOYLE COURT REPORTERS LTD, DUBLIN, IRELAND.

WWW.DCR.IE