OpenSource Working Group, Thursday, 14 May, 2015, at 11:00 a.m.:

ONDREJ FILIP: Good morning. Thank you for coming to this Working Group of the RIPE meeting. We have a pretty full agenda, so I don't want to make the introduction very long.

Here is Martin Winter, we are the co‑chairs of this Working Group. Again, thank you for coming for this Working Group.

We have some administrative matters at the beginning. First of all, we need to select a scribe, which I think is Alex, right? Alex volunteered, or was volunteered, I don't know. We also have an IRC master, which is Oliver. We posted the agenda on time; I hope there are no additions, at least we are not aware of any. And the last item: we would like to have approval of the minutes. The only problem is, they came quite late and I only sent them to the mailing list two minutes ago. What I suggest for this Working Group is that we wait until the end of the session; there will be a last moment where you can comment on the minutes. Please try to read them. If you have any comments, let us know; if not, we will take it that they are approved. They were sent to us late, so it's a very last-minute action. Let's go to the agenda. It's pretty packed, we had a lot more than we could handle, and that's great, it says there is big interest in this Working Group. So again, thank you very much, and I am passing the mike to Martin, who will introduce the next speaker.

MARTIN WINTER: I just want to give a quick overview first. The first presentation is Thomas King; I am not sure, I hope he is in the room. OK, great. We see the agenda here; as I said, it's very packed, and I hope we have time for a few questions. We also have a few lightning talks, which are basically these few items. So that you don't get too hungry, we will definitely be filling up to the last second. Without much further ado, let's start with Thomas King, talking about jFlowLib.

THOMAS KING: Thank you very much for the introduction. I will talk about jFlowLib, which is a library for parsing and generating sFlow and IPFIX data. jFlowLib consists of two sub-libraries: the sFlow library, which supports version 5 with counter and sampling records, and which we tested with Force10 E-series and Alcatel-Lucent 7750 machines. We heard it also works with other machines; we know for sure it works with these ones. And the jIPFIX library, which is supposed to work with IPFIX and in this special case supports the L2-IP template which is coming from the Alcatel-Lucent boxes. To be honest, the jIPFIX library is not a comprehensive IPFIX library, so if you want to use it for your purposes, beyond the L2-IP template it might be the case that you have to do some work to get it working for your environment. But still, it's a starting point, and let me quickly show you why we started this work.

The thing is that at DE-CIX we have many switches and routers that export IPFIX or sFlow, and we have many internal and external monitoring systems that want to get this data, so we want to make sure of two things: first, that if we add a new monitoring system, we don't have to change the configuration of the production switches; and second, that all the monitoring systems we are working with can get the IPFIX or sFlow stream from the switches. We know that some switches have limited capabilities in terms of how many streams they can export, so we came up with the idea of starting a project to build an IPFIX and sFlow multiplexer, and that was the main reason to start the jFlowLib library.

And as you see, that is our architecture, how we use this library; we built the multiplexer based on this library.

For this main use case of having the multiplexer, we had some requirements. We wanted to make sure that the multiplexer is able to multiplex up to 5,000 IPFIX packets per second and up to 1,000 sFlow packets per second; the data comes in via UDP and this has to be done without packet loss. We have up to 10 switches or routers that are exporting IPFIX or sFlow, and currently we have up to 10 collectors that are collecting these IPFIX or sFlow streams.

So, it's quite challenging in terms of the bandwidth we are handling here, and we wanted to make sure that the multiplexer is as transparent as possible, so it does IP spoofing: for the monitoring systems it looks like the data is coming directly from the switches or routers. For the IP spoofing we are using raw sockets, and we wanted to have an easy configuration so that we can quickly reconfigure the multiplexer.
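The fan-out idea described here can be sketched in a few lines. This is a minimal sketch, not jFlowLib code: the function name is made up, and it uses plain UDP sockets instead of the raw sockets the real multiplexer needs for source-IP spoofing, so it runs without root privileges.

```python
import socket

def fan_out(packet: bytes, collectors, sock=None):
    """Forward one received flow packet to every collector (host, port).

    The real DE-CIX multiplexer additionally spoofs the exporting
    switch's source IP via RAW sockets; plain UDP keeps this runnable.
    """
    own = sock is None
    if own:
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        for addr in collectors:
            sock.sendto(packet, addr)
    finally:
        if own:
            sock.close()
```

A receiving loop would read each UDP datagram from the switch-facing socket and call `fan_out` once per packet, which is essentially all the multiplexing logic there is.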

And as we are here in the OpenSource Working Group, I will show some source code in the next couple of slides.

Here you see the configuration file that allows you to configure the multiplexer, and the last line shows you how easy it is to start it: just run the file and point it to the configuration file, and that is it. But besides this main use case, we realised this library is also useful for other use cases. One is that we are now able to read data directly from the network, whether it's IPFIX or sFlow, and this allows us to quickly debug things and prototype tools we have built internally. Reading data from the network is quite easy: you open a socket and parse the IPFIX packet in this case, and you can output the packet to standard out so you get a textual representation of what the packet contains.

We also added support for reading PCAP files, which is quite easy because there are libraries out there which support that, and as you see, reading from PCAP files is also just a few lines of code.

Writing data to the network, so generating IPFIX or sFlow packets, is also quite easy: you just have an object for the message header, which is the base object for IPFIX, then you set your properties and send it out to the network. So it's quite easy to get something like that working.

Writing IPFIX or sFlow data to a PCAP file is also easy now; we are using the PCAP Java libraries that are available, and writing this kind of stuff is just a few lines of code.

What is also interesting for us is anonymising IP addresses, especially when we have to give away sFlow or IPFIX data to our contractors, who also sometimes ask us for specific data to look into special cases. As IP addresses are considered private data in Germany, we have to make sure that this stuff is randomised, and for that we implemented two different randomisers: one completely randomises the IP address, so there is no link back to the original IP address, and the second gives a constant mapping between the randomised value and the original one. And as you see, it's actually quite easy to do in source code.
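The two randomiser flavours mentioned here can be illustrated as follows. This is a hypothetical sketch, not jFlowLib's API: the names `randomize_ip` and `ConsistentAnonymizer` are mine, and the keyed-hash construction is just one common way to get a constant mapping.

```python
import hashlib
import ipaddress
import secrets

def randomize_ip(ip: str) -> str:
    """Fully random replacement: no link back to the original address."""
    return str(ipaddress.IPv4Address(secrets.randbits(32)))

class ConsistentAnonymizer:
    """Keyed, constant mapping: the same input always yields the same
    pseudonym, so flows stay correlatable without exposing real IPs."""

    def __init__(self, key: bytes):
        self.key = key

    def anonymize(self, ip: str) -> str:
        packed = ipaddress.IPv4Address(ip).packed
        digest = hashlib.sha256(self.key + packed).digest()
        return str(ipaddress.IPv4Address(int.from_bytes(digest[:4], "big")))
```

The constant mapping is what you want when a contractor still needs to group packets by host; the fully random one is for cases where no correlation at all may survive.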

So what is the status of this library? It's released under the Apache 2 licence and it's available on GitHub; we have 47 commits so far, four contributors, and major parts (maybe a little bit optimistic) are covered by software tests. All relevant use cases, especially the ones I just presented, are already implemented and working. jFlowLib is actively used at DE-CIX and maintained by us, so we will make sure that jFlowLib stays active for the next couple of months at least, I think years. If you want to contribute, that is highly appreciated, so please check out the source code and have a look whether it's of interest for you.

So that is pretty much it. I hope I stayed in time. Do we have any questions, comments, feedback on that? Thank you.

MARTIN WINTER: Questions? I think no questions.

Next we have Gianni Antichi. He is talking about OSNT for packet generation and, I think, also the capture part, as an alternative OpenSource tool.

GIANNI ANTICHI: Hello, so I am working at the University of Cambridge and I am going to present OSNT, the OpenSource Network Tester. Well, let's start with the context. I think you already know why we need testers, and there are plenty of things that do the job. So why did we start? We came to see the need for an open source network tester because at the university we need performance, but solutions like the ones provided by IXIA and similar vendors are quite expensive for us, so we wanted to build something that is community based, that is open, and that we can improve and adapt to our needs. So that was the starting point.

And then we ended up with this system, the OpenSource Network Tester; you can find the website. The concept is to create an open source hardware/software co-design for the research community, where people can find the code, download it and improve it depending on their needs. We provide a starting point, I will show you the API that we provide as a starting point, and then people can improve and commit code and share use cases. So the idea is flexible and community based.

But, as I said, we need something that is high performance. What do I mean? The starting point is that we want to do something in hardware and software, because with hardware we can get the performance, so we started with NetFPGA. For people that do not know what it is: it is a platform with an FPGA inside, it is open, and it enables fast prototyping of networking devices. Essentially it is a board with four 10-gig ports and an FPGA inside. It has PCI Express so you can connect the board, and it provides tools, reference designs and contributed projects. NetFPGA is a community-based project itself and is also used for teaching networking. So this was a good starting point for us to develop OSNT, and that is the reason we started with NetFPGA.

Now, I said the concept, but what do we provide essentially? We provide a toolkit that gives you the possibility to programme the NetFPGA from the hardware perspective, and you also have the driver and the GUI to use the board for network testing. What do I mean with network testing? We started by providing support for some use cases; again, this is a starting point, we are not aiming for a product, it is a research project. So, you can use OSNT, the board and the related software, as a high performance traffic generator: think about using the board to generate traffic at high speed on the four 10-gig ports. You can use the board as a high performance traffic capture system, so you can use it to receive packets in user space. I will show you later the features that we implemented for traffic generation and traffic capture. You can use the board to both capture and generate packets, or, the idea that we have in mind and will start exploring in the next months, you can create a scalable system: think about more boards that are connected together and synchronised by a reference clock. But yes, what about the traffic generator, what can you find if you start digging into the code? Right now, we provide the capability to replay pcap traces directly in hardware, so think about a sort of hardware-assisted tcpreplay, the common UNIX tool, but something that replays in hardware, which gives you the possibility to replay at 10 gig, or 20 gig if you consider two of the 10-gig ports. Of course, here you are limited by the amount of memory that you have inside the board, which itself has 27 megabytes of RAM, but within that you can replay the pcap trace that you want. It also gives you the possibility to embed inside the packets of the pcap trace, along with a packet count, a timestamp taken in hardware just after the transmission queue.
And this timestamp can be GPS-corrected, so this means you can have hardware timestamping in transmission, at the offset that you want. Depending on what kind of thing you want to test: if you want to test a Layer 2 switch, for example, it is obvious that you cannot embed the timestamp at Layer 2, because otherwise you would screw up all the forwarding of the switch.
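The configurable-offset idea can be sketched in software, even though OSNT does it in hardware. This is a hypothetical illustration (function names are mine): an 8-byte nanosecond timestamp is written into the payload at a chosen offset, past whatever headers the device under test needs to read.

```python
import struct

def embed_timestamp(packet: bytes, offset: int, ts_ns: int) -> bytes:
    """Overwrite 8 bytes at `offset` with a nanosecond timestamp,
    mimicking what the hardware does after the TX queue. Choosing the
    offset past the L2 header leaves switch forwarding untouched."""
    if offset + 8 > len(packet):
        raise ValueError("packet too short for timestamp at this offset")
    return packet[:offset] + struct.pack("!Q", ts_ns) + packet[offset + 8:]

def extract_timestamp(packet: bytes, offset: int) -> int:
    """Read the timestamp back on the receive side for latency math."""
    return struct.unpack("!Q", packet[offset:offset + 8])[0]
```

Latency is then just the receive-side hardware timestamp minus the value extracted from the packet.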

And this is helpful, and gives you the possibility to have full line rate regardless of the packet length on two ports, so up to 20 gigabits per second.

As for the monitoring and measurement part, we provide packet capture functionality with GPS-corrected hardware timestamps in reception, so you have a hardware timestamp that is taken before the receiving queues, which gives you high precision. It also gives you two traffic thinning approaches: from software you can decide what kind of flows you want to receive in user space, in terms of the 5-tuple, layer-4 ports and protocol, and it also gives you the possibility to choose in hardware the snap length. So you say, I want the first 40 bytes of each packet, and in user space you will receive just those first 40 bytes, with the rest discarded, so you can still recognise what kind of packet the board received.
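Both thinning mechanisms are simple to express in software, which OSNT then pushes into hardware. A minimal sketch, with made-up function names; the filter uses `None` as a wildcard for any 5-tuple field.

```python
def matches(pkt_tuple, flow_filter):
    """5-tuple flow filter (src, dst, sport, dport, proto):
    each filter field is either None (wildcard) or a value
    that must match the packet's field exactly."""
    return all(f is None or f == v for f, v in zip(flow_filter, pkt_tuple))

def snap(packet: bytes, snaplen: int) -> bytes:
    """Snap-length thinning: deliver only the first `snaplen`
    bytes of each packet to user space."""
    return packet[:snaplen]
```

Together these cut both the packet rate (flow filter) and the per-packet volume (snap length) that has to cross the PCIe bus.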

You have high-level statistics calculated in hardware; with high level I mean packet counts per port, IP packets per port, UDP packets, TCP packets, stuff like that. Of course we also provide a libpcap patch for nanosecond granularity: the next generation of libpcap has nanosecond granularity, but the old one only provides microsecond granularity, and it would be useless to have hardware timestamps but only microsecond granularity in user space.

This is an evaluation of what we get from the board in terms of capture. As you see, the main limitation essentially is the PCIe bandwidth of the board itself, so you are limited by that. But again, the code is open source and written in Verilog, so it is portable across different FPGA devices; if you take the code you can port it to a different board that may have a better PCIe engine, and then you can get better performance. A good thing is that no matter the packet length, you are able to grab all the packets, so it does not depend on the packet length.

So, we built this and we said, OK, what can we do with OSNT essentially, how can I use it? Well, you can do traffic characterisation: you have a traffic capture system, so you can use the receive timestamps to start understanding how the traffic behaves. You can do network device testing, because it is a traffic generator. You can adapt it to your needs. So we started thinking: what about using OSNT for switch performance evaluation and characterisation, in terms of latency? How can you do this? You can embed the transmission timestamp in the packet and receive the packet back, and if you connect OSNT to the switch under test, you can measure the latency of the switch itself at different traffic loads. And yeah, we were able to measure the switching latency of different switches; we had Pica8 and Arista switches in our lab. The NetFPGA-10G switch is a Layer 2 switch on top of the board, so we were able to check, depending on the packet size and at different loads, the forwarding latency in microseconds of the switch under test.

Yeah, cool. We started building something, and we can extend OSNT with new features. What can we do from here, how can we fully exploit OSNT? We decided to build a framework for OpenFlow switch evaluation. OSNT OFLOPS-Turbo is a use case to show how to use OSNT to build something more complex, in this case OpenFlow switch testing. The code will be available soon at that link. Talking about OFLOPS, it is essentially a holistic measurement platform that lets developers build custom experiments. If you use OFLOPS with OSNT, that gives you the power to generate traffic at high rates with transmission and receive timestamps. On the software side you can use the OSNT API and back the experiments with the power of the hardware given by OSNT. And what you can do, essentially, is test for example the latency of a flow table insertion or a flow table modification, or you can create your own test. For example, given this scenario, and then I will conclude, where you have the OFLOPS box with the OFLOPS software and OSNT connected to the switch under test, you can test different switches; in our lab we have a Pica8 P-3922 and a Dell Force10, and we tried to understand the insertion delay: how long it takes, depending on the number of flows, to insert a rule, and how long it takes to modify a rule. And yeah, we can argue about these results, and I will conclude: at first they looked scary to us. Why can just a modification take up to two seconds, while an addition can take up to 20?
Then we started thinking about the different kinds of implementation, and the reason, we think, is that the forwarding on a switch is usually done with TCAM. If you have to modify a rule, you only need to write the RAM that is referred to by the entry in the TCAM, but if you have to add a rule, you write both the TCAM and the RAM, so it is a double write. And depending on the type of rule you want to insert, this can mean rewriting different entries in the TCAM, which can give higher insertion delays. I can show a demo to whoever is interested on my laptop. So that is all.

MARTIN WINTER: OK. Questions? Please remember to state your name and affiliation.

PETER HESSLER: Does this require the special FPGA hardware, or can it work on a commodity Linux system with a 10-gig Intel card or something like this?

GIANNI ANTICHI: We decided to go for this to have high precision timestamping; essentially we want to enable high precision measurement, so we wanted to do it in hardware so that you get the timestamp directly at transmission and reception. But the code for the hardware is open source, so if you have the NetFPGA you can reuse the code and reload it onto the board you have. If you have a different FPGA-based board, you can start with the code and readapt it: probably different APIs, maybe a different memory controller, so you have to readapt that part, but essentially you have the code, so the porting should not be so difficult.

SPEAKER: You are focusing right now on performance. Are you also looking at compliance or fuzzing for protocol layers, to investigate the behaviour with possibly malformed packets or to verify that something is RFC compliant?

GIANNI ANTICHI: This is what we want to start digging into. The starting point was performance, because my background is more related to switching performance and latency, so I started working on this for that reason. Of course, once we built the system we realised that we can also improve it to do something more, and that will be future work; we want to start on that. If anyone is interested, we are also happy to collaborate to get different use cases on that. It is a community-based system in the end.

SPEAKER: Thank you.

SPEAKER: ECIX Berlin. One question, basically on the same topic, about frame testing and whatnot. Have you already implemented, or are you thinking about implementing, RFC 2544 testing, or is that something that the community should then look at?

GIANNI ANTICHI: Right now, we don't have that in the plan. But of course, if we receive this use case from someone who is keen to implement it and upload it: the idea is to create, as for the NetFPGA, a sort of app store on GitHub, where you can go and see, OK, let me see what the community provides; is there something that is quite close to what I need? Yes, I can start from that, I can download the code and then just readapt it a little bit. That is the idea, so we provide wiki access to everyone, and people upload their kind of use case for network testing and put up a wiki page: we modified this and inserted these new features, because this way you can do these kinds of tests. So people that want to do something similar can just start from that without building everything from scratch. What we are providing is a starting point that is more or less useful for everything that is high performance generation and monitoring, and then, you know, we can start building on top of that.

SPEAKER: Alex. You have mentioned these NetFPGA boards. I am curious, is it possible to connect a few such boards to each other to have more ports, and if it is possible, then how, which kind of backplane will be used there?

GIANNI ANTICHI: There are a lot of FPGA boards. When I started this project the most powerful board was the so-called NetFPGA-10G, but now there is a new board coming, actually it is available, that has four 10-gig ports as well but also has the possibility to connect more boards together, as you said, through FMCs and so on. So the plan is to port all of this framework to the new board, so we can also connect more boards together and do more than just 40 gig of traffic generation, which this board cannot do. The new board will also have a faster PCIe interface, which means that, if you write the DMA properly, and this is what we are doing right now, you will have the bandwidth to receive all the traffic, the 40 gigabits of traffic, directly in user space, but with hardware timestamping at reception.

MARTIN WINTER: OK. No more questions. Then, thank you.

Next up we have Patrik, who is talking about Zone Master, a DNS testing tool for testing DNS delegations.

PATRIK WALSTROM: I am Patrik Walstrom. I work for .SE, and we started this tool together with AFNIC. The reason for this was that we had DNSCheck, which was not giving us deterministic results any more and required a total rewrite. AFNIC had their own tool, Zonecheck, written in Ruby, old legacy code with no developers left. We decided to join forces and create a better tool together. Our decision was that this should be a reference tool for anybody who wants to do this type of testing; this should be the tool they want to use.

So, we joined forces and created a joint set of requirements and specifications for the tool. The collaboration is that we did this completely together, and we decided on a new name for this tool rather than keeping Zonecheck or DNSCheck, a completely new name, which was the hardest part really, so we have Zone Master now.

The requirements come from both tools combined, so we collected all the test functionality, test cases and everything else that both tools did, and created a new set of requirements from this. We also decided this should be very modular code, so that we can have distributed development going on and create everything separately from each other. And it should also be fast and have a low impact on the network when you perform the actual tests.

So we started out with a long list of requirements, and if you look at the requirements list right now it is missing pieces here and there; that is due to a lot of duplicates and things that are not relevant in this decade any longer.

So, when we had the list of requirements, we also decided that we wanted very explicit test specifications for each test that we wanted to implement, and this is one example. We have a test case identifier, you have the inputs needed for the test, and the objective is very important: this is what you really want to test, and we decided that we should have references to RFCs explaining exactly why we are testing this. We also have ordered descriptions of how you test the objective.

So, the implementation. We decided to write it in Perl because that was the common language between the organisations, and we ended up with four different modules: the engine, which is the core test framework with all the test specifications implemented as test cases, and on which everything else is built; the CLI; the back end, which I will explain more; and the web interface.

So the Zone Master engine is a Perl library that implements all functionality for testing: the complete test framework for actually writing test cases, all the test cases that we have, and different functionality for logging results as well. And it has its own resolver; we decided to wrap LDNS instead of Net::DNS, because that gave us more of the access that was needed to write this test framework.

The CLI takes input from the user and executes a test.

The back end is a JSON-RPC interface for the engine. It is web-based, so you can write any tool that wants to run a test through this API instead, but it is mainly used by the graphical user interface.

The GUI runs a test and can present the result. When I published this, the web interface that is available crashed, so let's see if that happens today if you want to run a test. It looks like this, and it should look like this if you go to that interface now. What you have is the domain you want to test, and the results after a while. You can also do pre-delegated tests: if you want to have a delegation done but have not performed it yet, you can give information about the name server records, the glue and the DS records, and run the delegation test with this fake parent data before you actually do the delegation.

We have a number of releases done; the modules are released as separate releases, so the current versions are not on this list, but it is quite recent, we had a new release this Monday. And we have this almost-distribution, which is all the components bundled together as something we know works perfectly together.

Installation requires lots of external dependencies, but it should be as easy as this: install the necessary packages and use CPAN to install the rest; it will fetch the modules from CPAN and pull in the dependencies that are not included. And the same goes for the CLI. When you have done this you have a working system where you can run actual tests.

How to run a test. It is as easy as this: you run the command with the domain you want to test and you have results. This is the log result in English; it is also available in Swedish or French if you prefer that.

The CLI can run pre-delegation tests, and you can record the session, which means you can run a complete test, record the DNS data required to perform the test, and then later go back and rerun the test from this recorded dump, which is very useful for debugging purposes. If you want to change the test policy, you can reuse a previous dump to see what happens when you change the policy, and you can also use this file if you find a bug or a strange domain: you can send the dump to us and we can debug the issue.
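The record-and-replay pattern behind this is worth sketching. This is a generic illustration of the idea, not Zone Master's actual dump format or API; class and method names are hypothetical.

```python
import json

class RecordingResolver:
    """Wrap a live lookup function and store every answer keyed by the
    query, so a test run can later be replayed offline from the dump."""

    def __init__(self, lookup):
        self.lookup = lookup
        self.cache = {}

    def query(self, name, rrtype):
        key = f"{name}/{rrtype}"
        if key not in self.cache:
            self.cache[key] = self.lookup(name, rrtype)
        return self.cache[key]

    def dump(self) -> str:
        return json.dumps(self.cache)

class ReplayResolver:
    """Answer queries purely from a recorded dump; no network needed."""

    def __init__(self, dump: str):
        self.cache = json.loads(dump)

    def query(self, name, rrtype):
        return self.cache[f"{name}/{rrtype}"]
```

Rerunning the test suite against a `ReplayResolver` gives deterministic results, which is exactly what makes a recorded dump useful for debugging a strange domain or trying out a changed policy.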

You can also use different policies and configurations for running tests. You can record the name servers' response times, so if you suspect that one of the name servers is running slowly, we can see which one it is.


We have a policy which describes exactly which test case gives which log level. So if you decide that a certain message should not be an error but just an information message or a notice, you can change this configuration completely.
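The policy mechanism amounts to a mapping from message tag to severity, with user overrides applied on top. A minimal sketch of that idea; the tags and function name here are illustrative, not Zone Master's real tag names.

```python
# Illustrative default policy: message tag -> log level.
DEFAULT_POLICY = {
    "NAMESERVER_NO_TCP": "ERROR",
    "SOA_REFRESH_LOW": "NOTICE",
}

def apply_policy(tags, policy, overrides=None):
    """Assign each raw test-case message tag the log level the
    (possibly overridden) policy maps it to; unknown tags get INFO."""
    effective = {**policy, **(overrides or {})}
    return [(effective.get(tag, "INFO"), tag) for tag in tags]
```

Demoting a message is then just an override entry, with no change to the test code itself.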

This is part of the default policy that we have. It might not be perfect, but it makes sense based on the kind of DNS delegations we have today.

Here is the log; you can have the log output in JSON, which is very useful if you want to write another interface for using this tool.

It is also extremely easy to write your own applications: just include the Zone Master module and you can have the log by running the test zone command, or whatever. It is extremely easy to make use of Zone Master.

I made my own batch tool for mass collecting of domains, which is a threaded module; this is very useful if you want to collect results for a large number of domains and then do some sort of analysis, and you can have the results stored in a MongoDB database for analysis. There were a lot of imperfections in the log output; we are going to fix that and implement new test requirements from user requests and so on. We are also creating an IANA test profile, so you can run the same tests as IANA does when you are doing a root delegation, and other features such as HTML.

What do we need from this room for doing tests? Batch testing from the CLI, Anycast testing, mobile apps; for any specific test cases you are more than welcome to contact us, and go through the GitHub page and make your requests there, and we will try to fix all the problems. We have a task force within CENTR, which is the European collaboration between the ccTLDs, and our goal is to have the list of test requirements published as an RFC, hopefully soon. We are still working on it.

This is all the URLs if you want to get involved. Thank you.

MARTIN WINTER: OK. Thank you. Any quick questions? I don't see any. OK. Thank you.

Next up we have Steven Barth from OpenWrt; he is talking about the IPv6 configuration framework in OpenWrt. Thank you.

STEVEN BARTH: So, thanks for having me. Yeah, I was actually asked to maybe give a little intro into OpenWrt before talking about v6. Why do we think we need OpenWrt? Well, the thing is that many home routers and small enterprise routers are very bad; they are often outdated and they have countless security issues. Actually, a few months ago when I got my new cable connection at home, the tech guy that installed the plug and gave me the modem said: oh yes, you should probably reboot it every week or so, because it gets slower over time and if you don't reboot it, it will fail at some point. I said: oh, that is nice, can you get me a better thing? Sorry, we can't, the vendor does it and they won't fix it or they don't care enough.

So, what we want to accomplish is to have an open source framework, or reference design, or SDK for routers that does things right, that is up to date and offers the latest features. Over time we see that with those old vendor SDKs, the old kernels and user land, they can't get any new features in there. IPv6 support is like, well, so-so in those boxes usually, and then you want advanced features, bufferbloat fixes and so on; it's very hard to fix that based on those old SDKs, and the CPE vendors usually take what they get from their chip vendor's SDK, which is why things are hard to fix.

So, OpenWrt in a nutshell. The project is now over ten years old and it's basically a custom Linux, or Linux with a custom user land; we try to upstream all our changes. But nevertheless, there is a lot of stuff we do differently than other Linux distributions. We provide a variety of user land tools and daemons; for example, when you look at desktop Linux distributions you see things like D-Bus for message exchange, or all kinds of network managers for network interface management. We have our own tools for that, specifically designed to be useful on embedded devices, because we want things to be small; we have routers that only have four or eight megabytes of flash, so to get all those features in there we need a really lean and lightweight system to support all of them.

But maybe a little administrative stuff: OpenWrt is registered as a project of Software in the Public Interest, which is kind of like the umbrella organisation that also covers Debian and so on, but they don't do that much for us: we have a trademark registered in the US, they handle donations for us, and that is about it. OpenWrt is mainly a loosely associated group of core developers and a lot of other contributors, from all over the world, but it seems the core developers are mainly based in Europe, mainly Germany and neighbouring countries, so it's really the RIPE area, if you want to call it that. And actually, even if it doesn't say so, there are many million devices out there that run OpenWrt, but most of them do not claim to, because the vendor or the manufacturer used OpenWrt at some point years ago and just wrote their name on it, which is fine with us as long as they release the GPL sources, which is another issue.

But OK, I guess that's enough about OpenWrt, and by the way, we should have our next release candidate out in a few days, hopefully; we are always late and we apologise for that, but usually it's done when it's done.

So, talking about IPv6 now. What is the difficulty of building an IPv6 router? Well, back in the IPv4 days, on home routers, there was this thing called static configuration. You just configured your LAN addresses, your prefix or subnet, and it always stayed the same. You got your IP address from your ISP, which was changing maybe, but the router didn't really care, because the NAT was hiding those changes, and maybe the only thing that might change a bit was the DNS server address from the ISP or so. Maybe there were some failover capabilities that added some more logic. But all in all it was pretty straightforward: you got a DHCP or PPP connection from your ISP and that got you an IP address, and then you did DHCP to your clients and you could also handle host names, so that when you plugged in your printer you could type in http://printer and reach the web interface of your printer from any device.

So, that was neat and easy and clean. And now, native v6, right?

So, on this slide I try to more or less cover what we have to do now just to set up a v6 connection with the ISP, and there was an interesting talk in the IPv6 session earlier about client configuration, about router advertisements and DHCPv6 and all the interconnectedness between them. What we see here is: you get a prefix from your ISP using DHCPv6, and a public address for the router, and that is where the trouble starts; you don't know which method to use, really. So you just try one and then try the other if it fails, because there is no sane fallback path really. And usually there is no way to signal routes in DHCPv6; you need an RA for that, so you can either fail, or just assume "I will magically send my packets to the DHCPv6 source address" and hope that works, and in most cases it did when there was no RA, so that is what we had to do then. Another thing we have seen with ISPs is that there are ISPs that send us RAs every three seconds, even though there are no changes; but they still reset the timers for the routes, they reset the timers for the address, so you basically get an update every three seconds, and if you trickle that update through all your processes internally you get a lot of excess CPU load if you don't filter them out properly. And even then, you have to think about it: if you filter too much, you are losing some events, and that is bad as well.
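The filtering trade-off just described, dropping repeated RAs but never dropping a real change, can be sketched roughly like this. This is a minimal Python illustration, not OpenWrt's actual code, and the class and parameter names are invented:

```python
import time

class RAFilter:
    """Suppress router advertisements that carry no new information,
    while still letting real configuration changes through."""

    def __init__(self, min_interval=30):
        self.min_interval = min_interval  # seconds between identical updates
        self.last_content = None
        self.last_time = 0.0

    def should_process(self, ra_content):
        """ra_content is the RA with volatile timer fields stripped,
        so that pure timer refreshes compare as identical."""
        now = time.monotonic()
        if ra_content != self.last_content:
            # Real change (new prefix, new route, renumbering event):
            # always process it, or we lose the event entirely.
            self.last_content = ra_content
            self.last_time = now
            return True
        if now - self.last_time >= self.min_interval:
            # Identical content, but refresh occasionally so that
            # route and address lifetimes do not silently expire.
            self.last_time = now
            return True
        return False  # identical RA repeated too soon: drop it
```

The key design point is comparing the RA content with timer fields masked out: compare the raw packet and every lifetime refresh looks like a change; mask too much and you miss real renumbering.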

So, back a few years ago, I decided, OK, handling RAs and DHCPv6 is an issue; better create a new project, which, well, doesn't really have a good name. It handles RAs, DHCPv6 and prefix delegation, and tries to be clever about ISPs not sending RAs or doing nasty things with PD. And it also tries to get you onto all of the funny transitioning technologies, which is what I will cover next.

Because, you know, just simple IPv6 connections wouldn't be fun enough, or dual stack or whatever, so ISPs and network people and the IETF came up with, I don't know, maybe a good dozen transitional technologies, which probably all still have their place somewhere. You could see 6in4, which is the stuff you get with tunnel brokers; 6RD, which was usually used in early rollouts of IPv6 by bigger ISPs, based on encapsulating v6 on top of v4, configured statically or using DHCP; you can run them in parallel; and also all kinds of encapsulating or translating v4-over-v6 things like Lightweight 4over6, MAP-E, and 464XLAT, which is mainly used in mobile networks these days. The difficulty here is that all these things have different configuration mechanisms and different methods to interact with the firewall. So you need a really flexible network configuration system; you need to be able to stack protocols on top of one another: if you connect a 4G or 3G modem, you set up a PPP connection, run DHCP on top, and then run 464XLAT on top of that. That is really difficult, and the difficulty doesn't stop there, because you need to configure clients, right? As we have seen earlier, there are RAs and DHCPv6, and they are a bit intertwined: some features are supported by one and some by the other. And there are many differences in handling these in different OSes, and there is prefix delegation if you want to support downstream routers behind your CPE, so you have to work around a lot of quirks. For example, with router advertisements you can push an update if there is a renumbering event from the ISP or if you fail over from one uplink to another, but with DHCPv6 you can't, because your clients only pull updates at a certain interval; you have to work around that. So you could say, why do I care about stateful DHCPv6 at my home? There is a naming issue: there is no good way to add name entries for IPv6 addresses other than DHCPv6.
You could use mDNS, but it isn't cross-platform; it maybe works on Linux and Mac but not on Windows. You are really stuck there and you have to find a way out. What we usually do in our defaults is support SLAAC for all the usual clients (for example, Android doesn't do DHCPv6 at all) and offer stateful addresses in parallel, but then we only hand out a ULA address using DHCPv6 and global addresses using RAs, so we can still renumber but still do the naming using the ULAs. And there is also an OpenSource project for that, a DHCPv6 and RA server which supports reconfiguration and so on.

So, what do we do if you have multiple uplinks or multiple routers? In the v4 world you had this NAT thing, which just translated the source addresses, but now with v6 you don't have that any more; I mean, you could do stateless NAT, but do you really want to? The other way around is to do source address dependent routing: when you get a packet, you not only examine the destination of the packet but also the source address, and if it has a source address from ISP A, you cannot send it out the interface to ISP B, because it might just get source-filtered. So you have to either do fancy policy routing, or use the real support for source-aware routing that operating systems like Linux now have; you have to generate, from the usual RAs, not only destination routes but also source routes, and correlate them. But then again, what if you have multiple routers in your home? What do you do? You could do Layer 2 bridging, which gets nasty if you have multiple link types, and especially on wi-fi you don't want that much Layer 2 traffic going on anyway, especially broadcast and multicast. So what you can do is DHCPv6-PD, cascaded like the NAT44 cascades before. That limits you to tree topologies, and if you have multiple routers with uplinks, then you can see that not every device in your network will get addresses from both uplinks, so there is that.
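The source-address-dependent forwarding decision described above can be illustrated with a small sketch. The prefixes and interface names here are invented documentation values, and a real router would of course do this in the kernel's routing tables rather than in Python:

```python
import ipaddress

# Source-dependent routing table: delegated prefix -> egress interface.
# 2001:db8::/32 is the IPv6 documentation prefix; these are not real ISPs.
SOURCE_ROUTES = {
    ipaddress.ip_network("2001:db8:a::/48"): "wan-isp-a",
    ipaddress.ip_network("2001:db8:b::/48"): "wan-isp-b",
}

def select_egress(src_addr):
    """Pick the uplink whose delegated prefix covers the packet's
    source address, so the packet is not dropped by the other ISP's
    ingress source-address filtering."""
    src = ipaddress.ip_address(src_addr)
    for prefix, iface in SOURCE_ROUTES.items():
        if src in prefix:
            return iface
    return None  # no matching uplink: the packet would be source-filtered
```

This is the lookup that "source routes" add on top of ordinary destination routes: the same destination can go out different interfaces depending on which ISP's prefix the source address came from.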

How can we tackle that? Well, as you have seen, we can more or less deal with all the transitional technologies, and the router can more or less configure itself, but what we really want to do is build plug-and-play routers. We don't want to care about what a WAN port or a LAN bridge is; we want to have network ports you plug in somewhere, and the router should figure it out by itself: is it an ISP connection, another internal router, is it something else? And also, if you start bringing multiple ISPs into your network, you have the issue of who actually owns the router. Many ISPs have technologies like TR-069 or NETCONF to manage a whole lot of features. When you bring in multiple routers from different ISPs, there is a kind of conflict of authority there. So what you really have to do is try to find a consensus, and this is what I and colleagues of mine are working on in the IETF Homenet Working Group, where we try to create protocols for this distributed consensus mechanism; I think Martin gave a talk about the Homenet stuff at one of the last RIPE meetings, if I remember correctly.

So how does that work? What the network should do is this: all these routers have to figure out the topology and the borders of the network (am I connected to the ISP, is this link internal?) to set up routing at home. This might sound scary when I start with, I don't know, OSPF or IS-IS in your home, but really, what other ways are there to solve this problem, other than introducing routing or doing nasty things on Layer 2 with bridges and so on? Once we have set this up, we can easily add features like naming and service discovery, which usually runs on the local link, so you have to proxy around so that a device on one router can detect a printer or some multimedia thing on another router. So we actually implemented some proxies there which hook into our HNCP and DNCP systems, which announce the names for all of these links, so we actually have an interconnected network. And the nice thing about this, since it is designed a bit like a link-state protocol, is that you can just go to any router in your Homenet, connect to it, get its status, and you instantly have information (topology, addresses and so on) from all the other routers in your Homenet. So even in the ISP case, if an ISP were to run this technology, it could see not only its own router that it sent to the customer, but also the whole network topology behind it, and when the customer makes a support call, it can maybe find out more easily what is going wrong there. We also do security bootstrapping, so there are mechanisms in there that prevent you from falsely detecting "oh, there is an ISP" when, no, it's just some person trying to hijack your Internet traffic, and it also prevents you from introducing routers that are trying to impersonate your routers.

If you are interested in that, and I hope so, you can go to our little project home page for this Homenet stuff; please also go to the IETF Home Networking Working Group. We have Internet drafts for all of that, and they are in last call or soon to be in last call, so if you want to give feedback on this, please do so now or in the near future, because otherwise it's basically finished.

So, what do we see in the future here? Probably more routers at home, right, because if we only had single routers at home, why would we care about multi-router Homenets? We have IoT devices with different link technologies; at some point, hopefully, multipath TCP, which makes good use of having multiple uplinks. As I said, heterogeneous link types, all with their different characteristics, especially in connection with broadcast and multicast. And at some point we may see client applications actively selecting certain ISPs for specific services or features. At the moment you may think, well, would the user bother to select something? But have a look at your smartphone, your iPhone or Android: there are many apps which let you decide "I only want to use this over wi-fi" or "it's OK to use this on my 4G network", and I guess we will see something similar on regular client devices as well.

So, I guess that was a lot of stuff here, I hope I could enlighten you about OpenWrt and IPv6. Thank you.

MARTIN WINTER: Thank you. Any questions? I don't see any questions. Thank you.

So we are getting to our lightning talks so the first person is Willem Toorop.

WILLEM TOOROP: First, I am going to give a quick recap of what getDNS is and why. Along the way I will go over all the things that have been updated during the last year.

So, getDNS is an API specification designed by and for application developers. Verisign Labs and NLnet Labs have collaborated in an effort to implement this specification, so we are not the writers of the specification itself.

‑‑ also on board.

This is from the specification. The motivation for it was that they didn't like what was around for resolution in applications, and application developers at the IETF basically decided to design their own application programming interface, the one they would like to have for their applications.

Now, why would an application want a specific API for DNS resolving, besides getaddrinfo? There is the last-mile issue with DNSSEC; there is a piece missing from DNSSEC: the local network is not protected, it does not protect you against cache poisoning. And also, even when it is working, and it successfully prevents a bogus name from reaching your computer, the user doesn't know about it; to the user it appears to be a connection error, and this is not very friendly, right? In the example here, the user tries twice to access a site; validation is failing and the user cannot get to it, but might blame the network instead of the operator who made the mistake in the first place.

Also, DNS is a great place to store data that can be authenticated, but then you have to be sure that it really is DNSSEC-authenticated. You could have a hash of the certificate authority that is signing your certificate in the DNS, and you'd better check the signature yourself, to be sure. GetDNS can do all this; it can do DNSSEC validation also as a stub. It doesn't need a DNSSEC-validating resolver in the network to which it is talking as a stub, but if your network resolver is not working, then it can also fall back to full recursion. So it can do both stub and full recursive mode, and it gives fine-grained control over the DNS answers. There is a URL you can check on our website to see what it looks like; you basically submit a query and can look at the data, which is like a JSON dictionary.

You can also set your own custom memory functions with the getDNS API, and this is special and what sets it apart from other resolver libraries. Also, it does asynchronous input/output by default; not only can you register the event base that you are using in your application with getDNS, but it is also possible, the other way around, to ask getDNS to schedule its events in your own custom event mechanism. I think this is pretty special, and also very appealing to developers of serious applications, because they spend a lot of time making sure that memory management and I/O management are optimal; getDNS can hook into that when it is incorporated into the application, and behaves like a first-class citizen in this case.

So, since then we have updated the stub resolver; it no longer uses a forwarder, and this enables all sorts of communication options: it does DNS cookies, or rather the library can use DNS cookies for you; TCP fast open; new transport options; keeping TCP connections open; and TLS as well. I see that I have to stop the presentation, so I am going to do that now. There are a bunch more slides, also telling about the hackathon we did; maybe you can have a look.

MARTIN WINTER: Sorry, we are out of time. Too many slides. These are the lightning talks which should be like five minutes or something on it. You are on about eight minutes. OK.

So we have Guillaume Valadon on Scapy, a packet manipulation tool.

GUILLAUME VALADON: A very brief introduction to Scapy. The goal is to create packets, send them, save them in pcap files, and modify packets on the fly and send them back. In Scapy we implement default values that work: if you want to send a TCP segment, Scapy will select the destination port for you. Inside, it does some stuff for you: checksum computations, interface selection, MAC addresses and so on. Scapy has been developed since 2003, and it has been maintained together with Pierre since 2013; I have been involved in Scapy development on the IPv6 side.

So first, Scapy can be used as a command-line tool. The goal here is to build a packet layer by layer, so in this simple example we build a DNS query towards a server. We add three parts: an IP layer, where we specify the destination; a UDP layer, where we specify nothing, Scapy will take care of everything; and the DNS query. At the end we get a Python object, which is the packet we send.
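To make concrete what Scapy assembles for you, here is roughly what the DNS query layer looks like on the wire, built by hand with only the Python standard library. Scapy additionally builds the IP and UDP layers and fills in checksums, none of which is shown here, and the transaction ID is an arbitrary example value:

```python
import struct

def build_dns_query(name, txid=0x1234):
    """Build the wire format of a simple DNS query for an A record."""
    # Header: id, flags (RD=1), QDCOUNT=1, AN/NS/AR counts all zero.
    header = struct.pack("!HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # Question section: length-prefixed labels, a zero terminator,
    # then QTYPE=A (1) and QCLASS=IN (1).
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.split(".")
    ) + b"\x00"
    question = qname + struct.pack("!HH", 1, 1)
    return header + question

pkt = build_dns_query("example.org")
```

With Scapy, the equivalent is a one-liner stacking layers with the `/` operator; the point of the sketch is only to show how much bookkeeping the library hides.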

Scapy can of course send packets, and it also matches queries with replies. Here we are sending the query using the sr1 (send and receive one) function. And because the reply is also a Scapy object, we are able to navigate in the packet; here we are accessing the DNS layer and the name server field, so we get the list of the name servers.

You also have useful functions in Scapy; you can write packets to a pcap file with the wrpcap function. So that was the command-line tool; you can also use Scapy as a module. As a simple example, we are trying to show here that you can do your own ping6 command in ten lines. What you need to do is import Scapy as a module; that is the first line, "from scapy.all import *". In the last two lines you are sending an echo request to the destination, and this displays the reply. It's a simple example.
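As a rough illustration of what such a ping6 tool has to produce on the wire, this sketch builds an ICMPv6 Echo Request by hand, including the RFC 4443 pseudo-header checksum that Scapy normally computes for you. The addresses come from the documentation prefix, and actually sending the packet would need a raw socket, which is omitted:

```python
import socket
import struct

def icmpv6_checksum(src, dst, payload):
    """RFC 4443 checksum: ones-complement sum over the IPv6
    pseudo-header (src, dst, length, next header 58) plus the message."""
    pseudo = (
        socket.inet_pton(socket.AF_INET6, src)
        + socket.inet_pton(socket.AF_INET6, dst)
        + struct.pack("!I", len(payload))
        + b"\x00\x00\x00" + bytes([58])  # next header = ICMPv6
    )
    data = pseudo + payload
    if len(data) % 2:
        data += b"\x00"  # pad to an even number of bytes
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)  # fold carries
    return ~total & 0xFFFF

def build_echo_request(src, dst, ident=1, seq=1, data=b"ping"):
    """ICMPv6 Echo Request (type 128, code 0) with checksum filled in."""
    msg = struct.pack("!BBHHH", 128, 0, 0, ident, seq) + data
    csum = icmpv6_checksum(src, dst, msg)
    return msg[:2] + struct.pack("!H", csum) + msg[4:]
```

A handy property of the ones-complement checksum is that recomputing it over a correctly checksummed message yields zero, which is how a receiver verifies it.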

You can do many things; you can do Mobile IPv6 with the same kind of packet, you just need to add one more layer. You can put, for example, an IPv6 routing header on a regular ping6 packet.

As for supported protocols, we have many of them: IPv6 of course, IP, ICMPv6, including funny protocols, and DHCPv6; and contributions are welcome, for example for OpenFlow and MPLS, most specifically the MPLS ICMP extensions, and HomePlug. If you have a specific wi-fi card, Scapy is able to inject packets at a low level, so you can run access points directly from Python, which is kind of cool.

Scapy is used by researchers, and sometimes by networking people when they want to add a new protocol. So let's add a new protocol; I call it NewProtocol here. What we do is create a Python object which represents one packet: we need to give it a name, which is NewProtocol, and we need to describe its fields; the first one is a byte, and then a MAC address. And then it's easy to bind the layer using the bind_layers function, which is a function of Scapy: I want to put NewProtocol there, and the type will be ABCD, and that is it; you have a new protocol in Scapy.
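The layer-binding idea described here can be illustrated without Scapy itself. The following toy sketch mimics the spirit of Scapy's bind_layers (a registry from a lower layer plus a discriminator value to the class that parses the next layer); the classes, field layout and the 0xABCD type value are all invented for the example, and real Scapy code would subclass scapy's Packet instead:

```python
import struct

# Registry mapping (lower-layer class, discriminator) -> upper-layer class.
BINDINGS = {}

def bind_layers(lower, upper, proto_type):
    """Register which class parses the payload of `lower` when the
    discriminator field (here: EtherType) equals `proto_type`."""
    BINDINGS[(lower, proto_type)] = upper

class Ether:
    """Minimal Ethernet header parser: dst MAC, src MAC, EtherType."""
    def __init__(self, raw):
        self.dst, self.src = raw[:6], raw[6:12]
        (self.type,) = struct.unpack("!H", raw[12:14])
        upper = BINDINGS.get((Ether, self.type))
        # Dispatch to the bound upper layer, or keep raw bytes.
        self.payload = upper(raw[14:]) if upper else raw[14:]

class NewProtocol:
    """The 'new protocol' from the talk: one version byte, then a MAC."""
    def __init__(self, raw):
        self.version = raw[0]
        self.origin = raw[1:7]

# One registration line, analogous to Scapy's bind_layers(Ether, ..., type=...).
bind_layers(Ether, NewProtocol, 0xABCD)
```

In Scapy proper, the same dispatch happens automatically when you dissect a frame whose EtherType matches the binding, and the field description doubles as the packet builder.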

Again, that was a short presentation, so there are more features available: answering machines and so on. We have a 3D display, traceroute in 3D; if you have the right goggles you can see the effect. If you teach, you can display packet dumps; it's good for pointing things out to students. It works on Linux; please don't use it on Windows, the port is kind of broken. You have the link to the repository. And finally, how can you help? I think many people know Scapy, so if you use it, please tell us; it's always nice to know people are using our tool. If you give a presentation, please put the name of the module in it. If you have issues, you can come to me and we can try to fix them today. Again, contributions are welcome, so you can share your protocols. And if you don't know anything about Scapy and you want to, you can invite us or talk to me and I will show you more stuff on the command line. Thank you.

MARTIN WINTER: Thank you. And last person we have Leslie from Cumulus networks talking about all the cool things on automation.

LESLIE CARR: I have a lot of slides, so I am going to gloss over a lot of them; if you are interested in more, download the PDF.

So, Cumulus Networks: we do a Linux distribution available on Broadcom-based switches, and a lot of us, current and former members of the team, have written a bunch of automation modules for Chef, Puppet and Ansible. The most important thing is that we think you should treat your switches just like your servers. If you don't know, Ansible is a cool automation tool, and Puppet and Chef are agentful automation tools. We wanted a unified design, so that if you knew one language you could easily use the same tools in the other languages; we tried to make the configuration options as similar as possible, for the ease of humans going between configuration languages. Ansible has their repository called Galaxy, Chef has the Supermarket, and Puppet has the Forge.

There are four tools. CL license: we require a license to enable the switching features, so this checks whether it's installed, up to date, the expiration dates and all that fun stuff. Ports: because we have a lot of 40-gig platforms, sometimes you want to have a breakout cable, and sometimes you want to take four 10G ports and have a break-in cable for backhaul to combine them into a 40-gig port, and so this will write out all of your port configuration. Then the interfaces, which we all know need managing; probably the most important part of running a switch, so this will write out interfaces into /etc/network. We decided to go with one file per interface; we felt that was a little bit easier. And since we are using functions instead of templates, my favourite thing is that we could do ranges right there; if you are doing a template in all of these languages, it's hard to write a human-readable range.
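The "one file per interface, with a human-readable range" idea could be sketched like this in plain Python. This is not the actual Puppet/Chef/Ansible module code; the stanza format, port naming and MTU value are just illustrative assumptions:

```python
def expand_range(prefix, first, last):
    """Expand a human-readable port range: ('swp', 1, 4) becomes
    ['swp1', 'swp2', 'swp3', 'swp4']."""
    return ["%s%d" % (prefix, n) for n in range(first, last + 1)]

def render_interface(name, mtu=9216):
    """Render an ifupdown-style stanza for one interface, destined
    for its own file under an /etc/network-style directory."""
    return "auto %s\niface %s\n    mtu %d\n" % (name, name, mtu)

# One config per interface, generated from a single readable range.
configs = {name: render_interface(name) for name in expand_range("swp", 1, 4)}
```

Generating the stanzas from functions rather than templates is what makes the range notation possible: the range is expanded in code, instead of being spelled out port by port in a template.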

And then, of course, we also wanted to have an interface policy, to enforce the fact that you don't want people manually configuring interfaces on there, because what is the point of automation, right? It gets overwritten; so this will also enforce and make sure that nobody is writing interfaces where they should not be; they should all be using the configuration management tool.

So, like I said, treat all of your systems like servers, partner with your sysadmins, and use all the same tooling and monitoring.

I made this last night, and automate all the things.

MARTIN WINTER: OK. Thank you, Leslie. So that is basically it; we have a few closing remarks from Ondrej. If you have not yet voted for the PC chairs, the vote is still open until the end of today, so please cast your vote there.

ONDREJ FILIP: I would like to comment on two things. First, again, the meeting minutes: we haven't received any comments, and if we do not receive anything in a very short time, we will take them as approved. So please, if you want to comment, do it as soon as possible; otherwise the minutes are going to be approved. And one more administrative topic: we sent to the list the plan for how we think the chairs of the Working Group should be selected or elected. We have not received any comments so far. There is still some time, but we would like to finish this discussion on the mailing list quite soon, definitely before the next meeting, because by then we believe this procedure should be worked out and approved. So please, if you have an opinion, comment on the mailing list. We don't have much time during this meeting, as we had so many presentations, but still, we have a few seconds now, so if you have something important to say to the Working Group on this topic, please do it now.

SPEAKER: Peter Hessler. There was a comment earlier about one of the projects doing custom memory management, and I would like to remind everyone that that is how Heartbleed happened; we need to be very careful when attempting to override the system's built-in malloc or anything like that. This could prevent either the system from detecting and preventing a problem, or a monitoring tool from being able to find it and report it to the developer.

ONDREJ FILIP: OK. Thank you very much. Any other comments? If not, then we are moving on to any other business. Are there any final remarks people would like to share with the Working Group? As expected, I don't see anybody. So now is the best time to conclude the meeting. Thank you very much for coming. I have to thank all the speakers and all the people that made this possible, the transcribers and scribes, and see you at the next meeting.

MARTIN WINTER: If you want to give a presentation at the next meeting, I appreciate early notifications, if you send something early that would really help us, too. Thank you.