DNS Working Group, Wednesday 13th May, 2015, at 9 a.m.:
PETER KOCH: Good morning, Amsterdam. So, good morning. You can see me, I can't see you, because I'm blinded by the light. So good morning, welcome everybody to the first of two sessions of the DNS Working Group at RIPE 70. If you would like to discuss Address Policy, now is your chance to change rooms. One person left, OK. So, a short introduction: this Working Group has three co‑chairs. One of them is speaking at the mic: my name is Peter Koch, I work for DENIC. The other co‑chairs are Jaap Akkerhuis and Jim Reid, somewhere — some random person off the street, as he prefers to refer to himself. So as I said, we are going to have two sessions; this is the agenda for today's session. We are going to have a few administrative topics. We have a report from the RIPE NCC as usual, which will be given by Anand, and then Romeo is adding a bit to that. We might have a very short report about the IETF DNSOP virtual interim that was held yesterday — it was virtual, it was held everywhere, but with so many people being here there was a side room. Jan from CZ.NIC is going to give us an update on Knot. Sandoche Balakrichenan will be talking about Zonemaster, a test tool, its criteria and so on. We will have Olafur with an update about using the ECDSA algorithm, and then Jim from ISC will give us news about the DLV sunset. Just for the record, more or less literally: this session is going to be recorded. We have a videotape for your safety and security; it will be audio taped and wiretapped and everything. So if you come to the mic, please state your name, and your affiliation if you so desire, for the benefit of remote participants. Do we have remote participants? Tim is watching the Jabber and Emile is going to be our scribe. We will again have our perfect stenographer, and that should be it. Now, for the administrivia, I am expecting Jim, because he is our process person. That was an insult — I didn't even call him to the stage, so sorry about that.
The only point that remains there was communicated to the mailing list a couple of days ago. You know that for good governance and a couple of other reasons, all the Working Groups are expected to have a chair election/appointment procedure in place. We had a bit of discussion with a couple of suggestions, and Jim is actually leading this. If I understand his plans correctly, the idea is to leave this open for discussion; if somebody would like to contribute right now you are welcome to do so — we will cut the microphone after a few persons, though, since we don't have too much time for that. Preferably this is all going to happen on the list, and a couple of days after the meeting we will probably need to call it a day, because we need to deliver something. Most of the other Working Groups have a process in place, and if you eventually want to get rid of us one by one, this is your chance to have that. This really doesn't look like an audience out for revolution, but it can probably improve. Anyway, is Anand in the room? Cool. So I hand over the stage to Anand for the RIPE NCC report.
ANAND BUDDHDEV: Thanks, Peter. Good morning everyone, I am Anand of the RIPE NCC, and I am here to give a short update on DNS goings-on at the RIPE NCC. The second part of this presentation will be done by Romeo, and he will tell you a bit more about our plans for K‑root. I will start with our Reverse‑DNS and secondary DNS services. So, the RIPE NCC runs the Reverse‑DNS servers for all the address space that is allocated to the RIPE NCC. We run this on a cluster of three Anycast sites with a total of nine servers; these sites are in London, Amsterdam and Stockholm. At each site we have routers, and behind these routers we have the servers; the load is balanced internally using either ExaBGP or Quagga. This cluster receives a big query rate of about 100,000 queries per second, so that's not too shabby. The cluster has approximately 5,000 zones loaded. Most of these are Reverse‑DNS zones of the LIRs, who request secondary DNS service from the RIPE NCC by requesting the use of ns.ripe.net as a secondary. We also have 76 ccTLD zones and their sub-zones on these servers. We have ripe.net, we have e164.arpa, we have in‑addr.arpa, ip6.arpa and various other secondary zones of the other RIRs. The servers in this cluster are running a mix of BIND, Knot and NSD. Plans for this cluster include a refresh of the London site — the servers there are a little bit old, so we are going to replace them very soon — and we are also considering a fourth site somewhere else; we don't quite know where this will be, we are still thinking about it. The other thing we have been trying to get off the ground is provisioning resiliency. At the moment we have one provisioning server in Amsterdam, and this is where all the zones are provisioned and all the dynamic updates happen.
This is obviously a little bit fragile, because if this server were to fail, or something were to happen to Amsterdam, then all provisioning would fail. So we have added a second server, at our backup site in Stockholm. One of the things we wanted was a stable IP address for this server in Stockholm; one reason is that lots of our users configure their ACLs and notifies with IP addresses, so the IP address of the second server can't really change too often. We therefore recently acquired a v4 and a v6 prefix for this purpose, and we are announcing them from Stockholm. This prefix is independent of any provider, so in case we need to move this server we can take the addresses with us, without clients having to reconfigure their servers.
So, we have lots of slave zones — the ccTLDs, the reverse zones of the other RIRs and so on — and these will be moved first, because this is the low-hanging fruit: we just have to configure the servers and communicate with all the master server operators out there to additionally send notifies to our second server, but also to allow zone transfers to it, for example. So this mainly involves a lot of communication, and it is expected to last several months because we have loads of users to contact; we are going to start with this first. We also have our dynamically provisioned Reverse‑DNS zones, and then the manually maintained ones such as ripe.net, and we will move those later this year. One of the issues we are facing is how to keep the two servers synchronised and yet independent of each other. We have some ideas, and we are hoping to write some RIPE Labs articles and invite community input on this, because there are lots of experts out there — several of you are sitting here — so your input on these Labs articles will be very welcome.
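For a master server operator on the other side of this arrangement, the change described here amounts to a couple of extra lines of configuration: notify the new Stockholm server in addition to the existing one, and allow it to transfer the zone. A minimal, hypothetical BIND-style sketch — the addresses below are documentation placeholders, not the RIPE NCC's actual provisioning addresses:

```
// Hypothetical named.conf fragment on a master that slaves a zone to
// the RIPE NCC. 192.0.2.10 / 2001:db8::10 stand in for the existing
// Amsterdam server, 192.0.2.20 / 2001:db8::20 for the new Stockholm one.
zone "example" {
    type master;
    file "db.example";
    // send NOTIFY to both provisioning servers
    also-notify    { 192.0.2.10; 192.0.2.20; 2001:db8::10; 2001:db8::20; };
    // permit zone transfers (AXFR/IXFR) to both
    allow-transfer { 192.0.2.10; 192.0.2.20; 2001:db8::10; 2001:db8::20; };
};
```

The point of the stable, provider-independent prefix mentioned above is precisely that ACL entries like these never need to change if the server moves.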
At the last RIPE meeting we mentioned that we wanted to do an algorithm roll‑over. At that time, we were unable to do so because the vendor of the signer product we use was not able to support this. We heard very strong opinions from the community that we should get the vendor to support this or move away, so we have voiced our opinion to the vendor and they are now adding support for algorithm roll‑over. We should be able to do some tests this summer, and if all goes well, then we plan to roll over from SHA‑1 to SHA‑256 in November 2015, when we do the next batch of KSK roll‑overs.
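As background to these roll-overs: both a KSK roll-over and an algorithm roll-over are tracked through the key tag that appears in DS and RRSIG records, which is just a checksum over the DNSKEY RDATA defined in RFC 4034, Appendix B. A minimal sketch of that computation (illustrative only; any real signer computes this for you):

```python
def dnskey_key_tag(rdata: bytes) -> int:
    """RFC 4034 Appendix B key tag over DNSKEY RDATA
    (flags | protocol | algorithm | public key), for all
    algorithms except the obsolete algorithm 1."""
    acc = 0
    for i, byte in enumerate(rdata):
        # even-indexed octets contribute the high byte, odd-indexed the low byte
        acc += byte << 8 if i % 2 == 0 else byte
    acc += (acc >> 16) & 0xFFFF
    return acc & 0xFFFF

# Tiny worked example: 0x01 0x02 -> (1 << 8) + 2 = 258
print(dnskey_key_tag(bytes([0x01, 0x02])))  # → 258
```

During a roll-over, old and new keys are published side by side, and it is their differing key tags that let validators match each RRSIG and DS to the right DNSKEY.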
Something that came up recently: the RIPE NCC does assisted registry checks, where the RIPE NCC helps an LIR with an audit of all its resources, and this includes Reverse‑DNS. One of the things someone noticed was that the Reverse‑DNS check results we were pointing users to were a little bit old, because we weren't automatically doing Reverse‑DNS checks for all the reverse zones in the RIPE database; sometimes a check would be up to two years old. It was suggested that we could run these checks periodically ourselves, so that the data presented by RIPEstat is fresh. So we have recently started doing monthly bulk updates: every month we run through close to 600,000 zones in the RIPE database and check them automatically, and the results are all available via RIPEstat. We have also recently refreshed our AS112 server at the Amsterdam Internet Exchange, and the timing was actually quite opportune, because we wanted to refresh the hardware and start doing IPv6, and as it turns out the IPv6 prefixes for AS112 had just been allocated, so we were able to start announcing them. This server receives about 3,000 queries per second, and about 10% of them are over IPv6. Since we were refreshing this anyway, we took the opportunity to set the server up to be fully compliant with the relevant RFCs — it can also do the DNAME redirection for those zones. That was it from me; Romeo will now cover our expansion plans for K‑root.
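For context on the DNAME redirection mentioned here: RFC 7535 redirects arbitrary zones into a single empty zone, empty.as112.arpa, served by the AS112 nodes, so that new zones can join AS112 without every node being reconfigured. A rough, illustrative zone-file sketch of the idea (the redirected name and the SOA timer values are placeholders, not the registered ones):

```
; In the delegating parent (illustrative): redirect a tree into the
; AS112 empty zone via DNAME, RFC 7535 style.
some-unused-tree.example.  IN  DNAME  empty.as112.arpa.

; On the AS112 node itself, the target zone is essentially empty
; (illustrative SOA values):
empty.as112.arpa.  IN  SOA  blackhole.as112.arpa. hostmaster.as112.arpa. (
                            1 3600 900 604800 3600 )
empty.as112.arpa.  IN  NS   blackhole.as112.arpa.
```

Any query under the redirected tree then synthesises into empty.as112.arpa and receives the expected empty/NXDOMAIN-style answers from the AS112 node.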
ROMEO ZWART: Good morning, everyone. Thank you, Anand. That leaves two topics for me, and I will try to take you through them quickly. Primarily the idea is to share these ideas and plans with you and get some feedback from the people here. I suspect most of you in the audience will have seen our announcements about what we are planning to do with K‑root expansion: this has been sent to the DNS Working Group mailing list, and there have been some publications on RIPE Labs, so I will only briefly summarise it in the next few slides, as I am sure people are mostly aware of what we are doing. The feedback that we have been getting in the meantime was mostly in the nature of: great idea, when can we start talking to you about having a local K‑root instance? The good news for the people with that question is that we actually start doing that today: later today we will announce to the DNS Working Group mailing list that we are now finally opening up for people that would like to have a local K‑root instance.
That applies basically to any organisation. There are some caveats, of course: we do expect a professional co-location environment, professional security, etc. It's not that we would like to see K‑root distributed over every broom closet and bedroom globally. But in principle, any organisation can come to us and talk to us about the possibilities.
So, the next slide, yes. For those of you that haven't been able to look at the RIPE Labs articles that we have published about this, a very, very condensed summary of what we have considered and what we are doing. We previously also had what we called a K‑root local instance, which was smaller than our core sites but was still a fairly large deployment. The new model is a much more condensed one: it will be a single Dell server in your network, as a host, and that single server will have one BGP peering with your local host router. In the discussions on the publications that we made earlier on RIPE Labs, some people responded to us explaining that this could be a complicated or less than optimal set‑up, for example in an IXP configuration, at least for some IXPs. So we have worked with some people on an alternative for the IXP set‑up — Nick Hilliard has been very helpful in working with us on testing it. The alternative that we now have is that, in an IXP scenario, we can talk to route servers on the exchange and work with one default route to the local host. As I said, this is the very, very condensed summary; if you are interested to learn more, there are some links at the back of this slide pack, and of course you can talk to us during the week and contact us as well. Then, a few additional notes that I think are relevant to share with you. One of the really important boundary conditions that we have set ourselves is that this is principally going to be a budget-neutral exercise, which means any additional K‑root instance will be funded by the local operator, the local party that we work with. The RIPE NCC will not pay for that. And also, importantly, we do not plan to expand RIPE NCC staff to be able to do this.
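The single-server model described here boils down to one BGP session: the host's router learns the K-root service prefix from the node and hands the node a default route. A hypothetical BIRD-style sketch of the host-router side — the neighbour address and the host's ASN are placeholders (K-root's own AS25152 and the 193.0.14.0/24 service prefix are public facts), and the real requirements come from the RIPE NCC's application process:

```
# Hypothetical BIRD config on the HOST's router (placeholder addresses).
protocol bgp kroot_node {
    local as 64512;                  # placeholder host ASN
    neighbor 192.0.2.14 as 25152;    # the K-root server (placeholder address)
    import where net = 193.0.14.0/24;   # accept only the K-root service prefix
    export where net = 0.0.0.0/0;       # send the node just a default route
}
```

In the IXP variant, the same announcement would instead be exchanged with the route servers, again with a single default route back towards the node.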
I think we have achieved a lot of additional efficiencies in how we do configuration management in the team, and we think we have ample capacity in the back-end systems to accommodate all this without having to increase our budget. Should it be the case at a certain point that there is an overwhelming demand that leads us to a situation where we think we would require additional budget, that will of course be discussed in the regular way with the RIPE NCC members.
One thing to keep in mind is that, because of this limitation that we have set upon ourselves — that we will not have additional staff to handle this — in the case that we do have an overwhelming demand, we will need to limit the number of nodes, the number of locations, that we can work with in any single period. So that means we will do some prioritisation of whom we work with initially, and basically spread that out over the coming months.
The choices in how to prioritise this will basically be made by focusing on areas that are currently under-served. As you may have seen, we did some work late last year to look at that a bit more carefully, and there are some locations, even within the RIPE service region, where round-trip times to K‑root services are up in the several hundreds of milliseconds range. If we can work with hosts in such locations, such regions, we think it makes a lot of sense to go there first, and so that is the plan. We will of course have Atlas — RIPE Atlas, a nice tool if you haven't heard of it yet — which can of course be used very well in this context as well, and it will be the basis of our decisions there, should that be needed. In addition, we will emphasise rolling out nodes in the RIPE service region, but it's not that we will exclude any host outside the service region.
As I said, there are some more details in the Labs articles, if you are interested. The application process — how we will work with additional locations, additional potential hosts — will be published to the DNS Working Group mailing list later today, so keep an eye on that and come back to us when you are interested.
Obviously, there is RIPE NCC staff here this week, so you can talk to me, Anand, or Kaveh, who is around this week, as well as some of our other colleagues. Basically you can talk to any RIPE NCC staff member and they will direct you to the right people if necessary.
I am interested to hear feedback, but I will skip on to the second topic that I have, and then if there are any questions or comments, I am happy to take them. So the other topic, completely unrelated to the previous one, that I wanted to briefly raise with you, is the visualisation of the old DNSMON system. For those of you that worked with DNSMON, this will still look a bit familiar: the good old times of DNSMON. Things have changed a bit — this is the old interface — and I think we have come a long way with the new Atlas-based DNSMON visualisations; we are able to do a lot more, nicer things. However, we still have the old visualisations in place. When we announced last year that the data collection on the old system would be ended in July 2014, we had a request from this group in particular, this community, to keep the old visualisations in place for those that would want to look at the older data for some time. We are now basically a year further down the line, we still have those visualisations, and basically, very bluntly said, we would like to get rid of them. They are a bit of a pain in the neck, really. We run these visualisations on old systems, old hardware. It still basically runs very nicely, but there are occasional operational issues that require engineering time that we would like to spend on other things. There are obviously security risks with running old library versions, code, operating systems, etc., and moving away from that just to keep providing these visualisations of old data doesn't, I think, make a lot of sense — particularly considering the fact that we don't see any real usage of these old visualisations.
Well, we do still see a relatively high number of hits per day on these sites — up to a few hundred hits per day, which is a lot more than I actually expected — but when we started to dig into that more deeply, we noticed that the vast majority of those are server farms from search engines. We also see network management environments from people that are, my suspicion is, not really keeping a close eye on them, because the scripts are basically homing in on the main page of the old DNSMON visualisation and pulling in all of the graphs from there. And if you remember, we stopped collecting data on this a year ago, so the graphs that these scripts are collecting have actually been blank for over a year now; some of these scripts come back several times a day to pull down the same blank graphs over and over again. We don't have the feeling that that is a very useful use of the facilities.
So, what we propose to do is to keep them in place for just a little bit longer — have the visualisations there until the end of this calendar year — and by that time basically disable them and dismantle these old servers. Obviously, the raw data for these older data sets, related to the old DNSMON, will still be available for those people that have an interest, researchers or anyone else. The raw data will not be retracted; it is just the old visualisations. And also, obviously, the new visualisations — the real DNSMON that we currently have — are certainly intended to be kept in place.
So with that, can I ask you if you have any comments, questions or suggestions for us on both topics, also in relation to Anand's earlier topics?
AUDIENCE SPEAKER: Do you realise, in respect of these old visualisations, that the scripts won't go away? You can remove the systems, but they will still try.
ROMEO ZWART: I am not really bothered by the scripts coming in; it's just keeping the visualisations I am bothered with. But yes, I realise that.
GEORGE MICHAELSON: I was really, really interested in the relative query loads on your AS112 server now that it's visible on dual-stack A and AAAA records. We have seen a far larger asymmetry in favour of v4 in our query patterns as an authority — and an AS112 node is an authoritative server, I mean it is, it's listed in the delegations of the zones. So that was very interesting for us, because 10% over v6 is significantly higher than we have been seeing, and we had assumed that there was behaviour in the BIND resolvers on the client side favouring v4, so I am going to have to think about that. That was very interesting to hear. Do you have any feeling for how that relates to the distribution of query load in other zones served on dual-stack A and AAAA servers?
ROMEO ZWART: I think Anand will be able to answer that in a bit more detail, but from my perspective I was also somewhat surprised by the relatively high level of v6 queries, yes.
GEORGE MICHAELSON: We should talk about that.
ANAND BUDDHDEV: I would just like to respond to George's question quickly. Yes, we do have approximately similar amounts of queries over v6 on our other authoritative servers, so I think it seems to be just how the world is going.
JIM REID: Just another guy who wandered in off the street.
ROMEO ZWART: You seem to wander into these meetings a lot.
JIM REID: I think it must be something in the air. I think it's a good idea that we have a discussion about this DNSMON thing and killing off the old visualisations. We had a discussion about this back in Warsaw, and we felt we would keep this open because we thought there were still some people potentially using it, and it's good to see what the statistics are. If it's at all possible, could you put something on those web pages that says this service is going away at the end of the year, so that in the unlikely event there is a human being looking, they can't say they haven't been warned? If it's people that are actually looking at that data, they can then be told this is no longer going to be happening, it's going to go away: make alternative arrangements.
ROMEO ZWART: I think that is a very good suggestion. In fact, we did that a year ago.
JIM REID: You still keep coming.
ROMEO ZWART: They still keep coming. What is somewhat surprising is that we see different people coming. Maybe from old bookmarks that people still have that point to this particular location, maybe from search engines that still refer to it. The bulk of the queries that we still get is from these server farms from search engines, but we also see some individual IP addresses pop up: the main page is requested, and then they go away and never return. So we see a relatively high number of source IP addresses for these queries that come back once and then never again. It seems to work: surprisingly, there is still apparently a pool of people that come in, see that message, and then go to the real — I would assume — the now current DNSMON. If you look at the pattern of behaviour — OK, from the sampling that we took, which is a couple of dozen — we actually never saw people come into the old visualisation and then start digging into the historic data; that really doesn't happen. What we plan to do, when we actually dismantle this, is to have the main landing page of the old DNSMON just point to the new DNSMON, because my suspicion is that the vast majority of anyone hitting it wants to go to current data and see the new DNSMON.
JIM REID: Thanks.
PETER KOCH: Thanks, Romeo, thanks, Anand.
PETER KOCH: And I guess the take-away from this is that we are not going to have an action item; if anybody is really unhappy, they can go to the RIPE NCC and help operate the old system.
Otherwise it will be down by the end of the year, maybe with that hint that Jim suggested. You are looking into this, and we are done with that; and thanks, Anand, for the first part of the presentation.
So, the agenda now shows an IETF report. The reason is, as some of you heard — how many of you are on the IETF DNSOP mailing list? Wow. Since you are already up to date with reading that list, you have seen there was an interim meeting yesterday, a virtual interim meeting — a virtual interim meeting with a very physical annex, so to speak, because a couple of us met here in Amsterdam in a side room to jointly participate in the virtual meeting. This was a very technical topic — not. It was about registration, opportunities, and actual applications for so-called special names, all of which happen to be top-level domains, so there is obviously lots of layer-9 stuff involved. Instead of going through all this right now and here, and because we are over time, I would like to point everybody who is really interested to the voice recordings and the Jabber archive that should be available by now on the IETF site. I haven't looked at the list this morning, but pointers to that will be posted shortly, and the discussion will be ongoing on the IETF DNSOP mailing list. It's a topic that is probably interesting for those looking at the overlap between the IETF and the names community and, as I said, it is very much layer-9 stuff. There is probably a result or progress to report during the next meeting, but this is just too fresh to dig into deeper right now. With that, I suggest we call it a day on that item, and I'd like to invite Jan to the stage to go back to a real technical and operational topic: Knot DNS 2.0.
JAN VCELAK: So, hello. If we haven't met: I work for CZ.NIC in the research and development department, and I am currently leading the development of Knot DNS, our alternative DNS server. Today I am here to present the new features we are planning for the upcoming release, Knot DNS 2.0.
So, just to put you into context: we have been here for some time — the first release of Knot DNS happened in 2011 — and last year we were thinking about the new features we wanted to implement. We were aware that a lot of our users are already satisfied with the things we are doing, and we thought it was almost feature-complete. But there were still voices from the community that requested huge changes which were incompatible with the existing architecture of the server. So in October we released version 1.6 and said: this will be the long-term support version, we will provide only bug fixes and security fixes for it, and let's do some more significant, incompatible changes to the code — which is basically Knot DNS 2.0. The first version related to Knot DNS 2.0 was released in February; that was version 1.99, which was basically version 1.6 with the new DNSSEC implementation, which I will talk about in a few minutes. And two weeks ago we released the first beta of 2.0, which brings the second largest feature in this new version, and that is the new configuration format.
So, I will start with the configuration, and with the motivation — why we are actually doing this. We have some users who serve, I don't know, ten zones, and these are quite happy with the configuration we had. But we also have users who serve thousands of zones, or maybe a million, and with the old configuration it was very problematic to reconfigure the server. For example, if you just wanted to add one zone, it meant that you had to update the configuration file, and with a million zones, even parsing the configuration file took a few minutes, so reconfiguration took a long time. So we decided to switch to a binary configuration, because binary configuration means that we can change the config on the fly and we don't have to parse the configuration file. With Knot DNS 2.0 we internally use a binary database, plus a new text format which can be used on an import and export basis. So if you are a user with just a few zones, you can use the text format; if you are a user with a million zones, you will be able to use the binary format in the upcoming release.
And while we were at it, we also revised the configuration scheme: we added support for zone templates, which I will also mention in a few minutes, and we reorganised how the remotes and ACLs are defined.
Yes, this is an example of a configuration, how it looks in practice. We use a simplified YAML format — we have our own parser, so at the moment it can't parse everything that your scripts could write, but this is basically how it looks. This is the sample configuration for a master server which pulls the zone knot-dns.cz from a hidden master and allows transfers of this zone to the distribution servers. And this example demonstrates how the zone templates work. With Knot DNS 1.6 we kind of supported zone templates, but there was only one: you could define some variables which set defaults for a zone configuration, so when a zone was instantiated, all these default parameters were applied. With Knot DNS 2.0 you can define multiple templates, so in this example there are two templates defined: the default, and a second template called slave, which is used for slave zones. As you can see, the location for the zone files is different in these templates, and in the slave template we allow pulling the zones from some master server. Then you can just list the zones and apply these templates to them, so in this example the server ends up serving both master and slave zones.
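To make the template idea concrete, here is a minimal sketch in the spirit of the Knot DNS 2.0 format shown on the slide. The zone names, paths and the remote are illustrative, and the exact schema should be checked against the Knot documentation:

```yaml
# Illustrative Knot 2.0-style configuration (names and paths are placeholders)
remote:
  - id: hidden_master
    address: 192.0.2.1          # placeholder address

template:
  - id: default                 # applied when a zone names no template
    storage: /var/lib/knot/zones
  - id: slave                   # slave zones live elsewhere and have a master
    storage: /var/lib/knot/slave-zones
    master: hidden_master

zone:
  - domain: example.cz          # uses the "default" template
  - domain: example.net
    template: slave             # transferred in from hidden_master
```

The win over the 1.6-era single set of defaults is that each zone picks whichever bundle of settings fits it, rather than every zone inheriting the same global defaults.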
That is probably all on the configuration, and now I will talk about the new DNSSEC, which is, as I said, the second largest feature in the upcoming release. We switched from OpenSSL to GnuTLS. The reason we did this is not because of security or something we might not like about OpenSSL: we were completely rewriting the engine anyway, and GnuTLS has much better documentation and support for the PKCS#11 interface, which allows us in the future to add support for hardware security modules or smart tokens to store your private keys. With Knot DNS 1.6, you generated the signing keys yourself, and the server signed the zone automatically based on the key metadata. With Knot DNS 2.0, we use something called KASP, a key and signature policy, which is a concept similar to OpenDNSSEC: instead of generating the keys manually, you say: I want to sign my zones with this algorithm, with these key sizes, and I want to roll the keys every month, or something. This is what the key and signature policy allows. The policy is stored in something we call the KASP database; it's basically a directory on your file system which contains some metadata and the private keys in the format you are used to from the X.509/PKI world. We no longer depend on any utilities from BIND or ldns; everything can be done with our own utility. At the moment, the automatic signing supports only generating the initial signing keys and signing-key rotation. We didn't add any other features for the moment, because we want to know if this works for our current users, and we haven't decided yet how to communicate with the parent zone, which is necessary for KSK rotation or a single-type signing scheme, etc.
So, this was just an overview, and now I have a few examples of how it actually works in practice. The keymgr utility is used to manage the KASP database. So if you want to use this automatic signing based on a key and signature policy, you just create a new directory, go to the directory and issue keymgr init, which creates a database. The second command you can see, keymgr policy add, actually creates a new policy; this policy will be called "lab" and it uses all defaults except the algorithm, which is in this case RSA-SHA-256. And the third command just creates an entry for the zone named "test zone" and applies this policy to the zone. If your server is configured correctly — the zone is in the config and DNSSEC is enabled for the zone — you just start the server and everything happens automatically. The output on the bottom is a snippet from the log file, which basically says that we are loading the zone, the engine finds out there is no signing key, so it generates the initial keys and signs the zone. That's all you have to do to sign the zone.
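Put together, the three commands just described look roughly like this. The option spelling is approximate, reconstructed from the talk rather than from the slide, so check keymgr's own help in your Knot 2.0 installation before relying on it:

```shell
# Sketch of the KASP workflow described above (approximate syntax)
mkdir kasp-db && cd kasp-db
keymgr init                                  # create an empty KASP database
keymgr policy add lab algorithm RSASHA256    # new policy: defaults + algorithm
keymgr zone add test.zone policy lab         # register the zone under the policy
# ...then make sure the zone is in knot.conf with DNSSEC enabled and
# (re)start the server; it generates the initial keys and signs the zone.
```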
The next step would probably be to add the DS record for your key-signing key to the parent zone. At the moment, you have to do that manually.
But you are still able to do manual signing, as you were used to with Knot DNS 1.6 or BIND or anything older. It can be done by disabling the automatic policy, or by creating the zone without a policy. That means no automatic actions will happen, and all actions have to be performed manually. I won't read out all the commands, but the third one is the equivalent of dnssec-keygen from ISC BIND, and the last one is the equivalent of dnssec-settime. So the principle is the same as with Knot DNS 1.6 or BIND.
As we are using a different format for the keys, you can also import existing keys in the legacy format. This can be done with the second command, keymgr zone key import, and in the future we will probably add export as well, to allow you to migrate back to other signing solutions.
And we are also thinking about DNSSEC on-line signing. We originally didn't think that this would fit into the 2.0 version, but we still have some time and we found out that it's not that difficult to implement, so we expect that the final release will contain an experimental module which will do DNSSEC on-line signing as an alternative to the pre-signed zones I was talking about before. The reason why we also want support for on-line signing is that our server supports something we call answer modules, which can alter the processing of the query in the server. So, you can write your own module to modify the answer or to synthesize some records; for example, we have a module which synthesizes PTR and AAAA records, so if you want to use this module and you want to use DNSSEC with it, then you need on-line signing. So, here, this is just a demonstration, just a snippet which I captured from my laptop yesterday; it's not faked, it's a real synthesized answer and it's on-line signed by Knot DNS. It's not in the repository yet, it needs some clean-up, but I think that in the final version this will be available for testing.
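As an illustration of such an answer module, a knot.conf fragment along these lines would synthesize reverse (PTR) answers that then need on-line signing. This is an assumption-laden sketch, not a tested configuration: the module name and option spellings follow the style of the Knot DNS 2.x documentation but may not match any given release exactly, and all names and the prefix are hypothetical.

```yaml
mod-synthrecord:
  - id: reverse-v6
    type: reverse            # synthesize PTR records for the network below
    prefix: dynamic-
    origin: example.com
    network: 2001:db8::/32

zone:
  - domain: 0.8.b.d.0.1.0.0.2.ip6.arpa
    module: mod-synthrecord/reverse-v6
```

Because these answers are generated at query time, they cannot be pre-signed; an on-line signing module has to produce the RRSIGs on the fly, which is the motivation described above.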
And the last thing I want to mention, or maybe I wanted to say, is that I want to thank our community; this is the list of probably our most significant users. In the first place, of course, RIPE NCC. Anand was talking about it, on K-root and various top level domains, and I would like to especially thank Anand because he is always the first one in the row testing our new releases and pre-releases, which is great. Thank you. Also, Knot DNS is used by some top level domain operators, the CZ domain, DK domain, CL domain, and we are finally allowed to say Microsoft is using Knot DNS. Also Telefonica O2 in the Czech Republic; they use the module for PTR record synthesis for IPv6. Netriplex, and yesterday I talked to people from ICANN who are testing Knot DNS currently with the intention to use it on L-root, which is great news for us. Various web hosters are using Knot DNS. If you are not on this list and want to be on this list, or if you are just using Knot DNS and we don't know about you, we will be glad to talk to you about what you need and what we can do for you. And that is all. Thank you for your attention. I probably saved some time, so I am ready to answer all your questions, if there are any. I hope so.
OLAFUR GUDMUNDSSON: I love your server. I have a question about your key management. Is it timer driven or predicate driven, i.e. does it check whether the parent has modified the DS records or not before it rolls the keys over?
JAN VCELAK: We actually, at the moment we don't communicate with parent zone so we can't check the DS records.
OLAFUR GUDMUNDSSON: Can you check it?
JAN VCELAK: We probably can check it. Our plan is to add some hooks or callbacks: you just define your own script, for example, to check if the DS record is published, and the same way for publishing the DS records. At the moment we don't check it from the server. But, for example, the key rotation doesn't work in a timely manner only; if your server was shut down and the zone wasn't updated in time, or something went wrong, then the steps of the rollover are postponed. Yes.
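As a sketch of what such a user-supplied hook could look like: no such interface existed at the time, so the script shape, its arguments, and the parent name server are all hypothetical. The idea is simply that the server would call a script before completing a KSK rollover, and the script polls the parent for the DS record, reporting back through its exit code.

```shell
#!/bin/sh
# Hypothetical DS-check hook: called with the zone name and the key tag.
# Exits 0 once a DS record with the expected key tag is visible at the
# parent's name server, non-zero otherwise.
zone="$1"
keytag="$2"

dig +short DS "$zone" @ns.parent.example \
    | awk '{print $1}' \
    | grep -qx "$keytag"
```

With this in place, the rollover step that depends on the DS being published becomes predicate driven, as Olafur's question suggested, instead of purely timer driven.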
OLAFUR GUDMUNDSSON: Consider it a feature request. Thank you.
PATRIK FALSTROM: Random person, hobby user, something. Anyway, I have said this before, but I want to emphasise what you just said, that you are going to add a callback mechanism so it's possible to have external software that does something whenever you have internal events in the key management. I just repeat that: please do that, and the rest of us can start to do really cool things. Thank you.
JAN VCELAK: OK, we will do it, thank you.
PETER KOCH: That is probably it. Thank you very much, Jan.
And the next speaker is Sandoche, I think. You have all seen now that you can read what everybody says, so thanks for fixing that.
There is one new thing this time, which you have experienced through the plenary: you can rate talks, and starting this meeting you can also rate the Working Group talks and contributions, so please do so. You need a RIPE NCC Access account for that, which can be created in a second. Please do rate the talks; that gives us Chairs hints on how to make these meetings better, or keep them at quality.
SANDOCHE BALAKRICHENAN: I work with AFNIC, the French network information centre. The objective of this presentation is to give an idea of what the TRTF, the Test Requirements Task Force, is, and to have your thoughts and feedback on this task force. The task force is now created under the scope of CENTR, that is, the Council of European National Top Level Domain Registries. So, before going into the task force, just some background on DNS zone validation software. I am sure that many of you have an idea of what DNS zone validation software is, but for people who are not aware of it, this type of software helps to validate the health of a DNS zone. Actually, AFNIC had already developed DNS validation software called ZoneCheck, and .SE was a partner working in this area and developed software called DNSCheck, and now we are developing new software, leaving the legacy ones behind, called Zonemaster, in partnership with .SE. The idea for the task force came to mind when we were in the process of developing this software, so that is why I will just give a brief introduction of why we do the TRTF.
So, we did a survey of how people are using DNS validation software. There are users who use this software to check whether their domain's zone is valid. There are registries who use this software for validating the zone; for example, at AFNIC, until three years ago, to register a domain under .FR you had to pass the validation of the ZoneCheck software, but now we are not doing it because there were complaints from the customers. But still, many of the registries and registrars validate the zone after the zone is registered, and if there is an issue they indicate to the customers that there is a problem with the zone.
So, DNS validation software is quite useful for the DNS community and that is why we wanted to develop it. After that, in this presentation, you will see, as I said earlier, why we have to have the TRTF, and finally the feedback.
So, when we presented Zonemaster, the new validation software, at RIPE, the first question everybody asks is: why do you want to develop new software? Actually, with ZoneCheck and DNSCheck, which had already been developed by AFNIC and .SE, we had issues: for example, ZoneCheck was developed by a single software developer, in Ruby, and it was not easy to modify or extend. DNSCheck also had similar issues. That is why we formed a team and asked ourselves what we should do: upgrade one of the existing software packages, or develop one from scratch? Finally, when we looked at the trade-off, the decision was to develop one from scratch. And that is where Zonemaster comes from.
Just to give you the high-level architecture: we have different types of input interfaces, the command line interface, where you can just run the Zonemaster CLI with your domain name to verify whether the zone is valid, and a web interface; if you go to zonemaster.net you can test it. And then we also make it possible for people to use a batch method, with hundreds of domains in one instance, and to get the output for all those domains.
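For instance, the command line interface can be used like this. The command name zonemaster-cli matches the published Zonemaster repositories; the batch loop is just one way to script it, not a built-in feature, and the file and domain names are placeholders.

```shell
# Single-zone test from the command line
zonemaster-cli example.com

# A simple batch run: test many domains listed one per line in a file,
# writing each result to its own output file
while read -r domain; do
    zonemaster-cli "$domain" > "results-$domain.txt"
done < domains.txt
```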
So, the heart of the tool is the test engine, which is like a black box: it gets input, processes it, and provides output which you can convert to any format you want, as you see on the right side. So the engine has a framework, and there is a battery of tests which are run to validate a zone.
Some useful information: when we started developing Zonemaster, we said that we have to have documentation. So there is documentation in GitHub, all created under a Creative Commons licence, and even the code is under the BSD licence. The Zonemaster project has four repositories: the engine, the GUI part, the command line part and the back end. Since last October, 2014, we have a stable release, and the good news is that both .SE and AFNIC have planned to invest in maintaining this tool until the end of 2016; after that it will be on a best-effort basis, and we ask the community to provide us feedback.
So, if you want to give us any feedback regarding the software, you have these two mailing lists: zonemaster-users and zonemaster-devel.
Before implementing Zonemaster we had a dilemma. There was no specific document which we could take up and say: OK, these are the test requirements, the tests we have to run to validate a DNS zone. So what we did was collect the test requirements. How did we collect them? We first took the existing test requirements from both DNSCheck and ZoneCheck, and when we discussed them there were seven tests that were no longer needed; they were needed earlier but not now. And we removed all the mail test requirements. Then there are some which came from the team, and there is the IANA profile (and note, this is the IANA profile, not IANA policy) which IANA has given for the new gTLDs; we included that too, and some external input from people outside was also added.
So, once the test requirements have been collected, we tried to classify it into different categories, like connectivity, where we have connectivity issues, addressing issues, problems in the syntax, delegation issues and DNS issues, etc., etc.
For each test requirement, we wrote a test specification, so that there are no false positive or false negative results. We made sure that each test case is structured like this: we have an objective, we anchor the test case to some RFCs or BCPs from RIPE, we say what the input for the test case is, then we have a description of what has to be done, and we say that this is the output that we want; if we don't get this output then there is a problem.
Finally, we specify what happens if the test case fails; for example, if this is a basic test case and it fails, the other tests do not continue.
So, when we write these test specifications, there are two things that we wanted. One: is there any document where, if somebody wants to develop DNS validation software, one can go and see "these are the test requirements that I have to implement"? As far as we know, such a document does not exist. There are about 200 RFCs for the DNS, and other BCPs; you have to look at all of these, and you need the long experience that is available at places like AFNIC or .SE. The other thing is that the RFCs do not clearly indicate what the output should be, so we also wanted to specify, for each test case, the output that is needed, since it has to be coded. That is why, when we were presenting Zonemaster in different venues like DNSOP or CENTR, there were some suggestions: why don't you create a BCP or informational RFC for that? From this, the task force idea was born. But there are issues. For example, last year when we presented in Los Angeles, there were some people saying: you are going to a dead end, many people have tried this and nothing has materialised. And another issue was that different people have different requirements; for example, AFNIC has a different profile of test cases, and DENIC has different test cases.
Another thing is that we have created a task force and there is a mailing list under CENTR, but there is not much discussion going on; the mailing list is getting a lukewarm response. There are some people, but there is nothing great going on there. And another thing is that people, even inside the team, are worried that this might be a huge task and that dedicated resources might be needed. So these are the issues that exist with the TRTF. So, what we want to know is: are we going in the correct direction, does the DNS community think that this is needed? Are we sure that we can have a best practice for validating the delegation of a domain? For example, tomorrow, if somebody uses this document and creates two tools and one goes wrong, we can clearly say that this is because of the code, not the specification. So, this is the reason why I am presenting here, and we want your feedback, either here or on the mailing list, on whether we should do this. Thank you.
PETER KOCH: Thank you, Sandoche. We should have a minute or two for questions or comments.
AUDIENCE SPEAKER: New Zealand Registry Services. How open are you to adding features, not to test for correctness or compliance, but for discovery?
SANDOCHE BALAKRICHENAN: Actually, we sent this list of requested features to the mailing lists, asking: we have this set of features, does the community think these are the features that are needed, or do you want to add new ones? Actually we got only 18 responses, and we are taking all those 18 responses into consideration.
AUDIENCE SPEAKER: Yes. I saw the message; I didn't have the time to reply. We use DNSCheck, and at some point we asked for a feature; we actually had the code ready for the feature, we sent it, saying here it is, do you want it, and the answer we got was no, because it wasn't a compliance check, it was more like a discovery check.
SANDOCHE BALAKRICHENAN: Actually, as I said, everything is in GitHub. If you have the code and you say that this is what is needed, and if everybody agrees, if there is a consensus that we need it, we will add it.
AUDIENCE SPEAKER: OK. We can have this conversation later. Thank you.
PETER KOCH: Thanks. One more question.
AUDIENCE SPEAKER: Yelta. In answering these questions, I don't know, no, yes, yes, no, I don't know.
PETER KOCH: Can you repeat that?
AUDIENCE SPEAKER: No. I think this is a very good effort, and I am actually already in the group, so I may be a bit biased, but I don't think it's bad if different tool sets reach slightly different conclusions. I am not convinced that we can come to a generally accepted set of requirements, but I think we should at least try. I am very happy with the initiative.
PETER KOCH: Good. Thank you. Thanks, Sandoche for that presentation.
And also thanks to the task force for taking up that work. It's not a RIPE task force, though; not another one. There are two more presentations, and I think the next in line is Olafur, giving us some insight into elliptic curves.
OLAFUR GUDMUNDSSON: I am from CloudFlare, working with my co-workers on deploying DNSSEC in our infrastructure, and one of the things we looked at is that we like to keep answers small, and the current crypto algorithms, i.e. RSA, get bigger and bigger; to make a reasonably strong key these days, you have to have really large RSA keys. That is a problem in our opinion, mainly because of packet size: it makes us a much more attractive amplifier. So we want to keep answers as small as possible, so we selected ECDSA, and I am going to tell you a bit about what we have found out in this process, which is still ongoing.
So, with ECC the keys are much smaller; a 256-bit key is equivalent to about a 3,000-bit RSA key. It was defined for DNSSEC about three years ago, so you would expect it to be accepted by everybody and everywhere and all that. And for us, with a gazillion zones under management that change a lot, where we may have geographical answers and other stuff like that, we want to sign on the fly: we do the absolute minimum at the central site, we just move data to the edges when it is needed, and we sign it when somebody asks for it. ECC has the property that signing is really fast. And Geoff Huston has been doing all of these experiments on how things are working in the Internet, and about a year ago he came out with results that said only about half of the resolvers in the world understood ECDSA; it's now down to one in five that are ignorant of it, and the numbers are getting better, so we are happy and we consider this an acceptable risk, and hopefully by deploying lots of ECDSA-signed zones we will enable others to jump on the bandwagon.
OK. People say ECDSA performance sucks. Well, it doesn't, that much. There have been amazing advances in the quality of the code that is doing cryptographic operations. We are seeing enormous speedups, and this is not even the most recent or best code, but this is what happens if you compare OpenSSL version 0.9.8, which is three or four-year-old code, with what it is doing today in the latest and greatest: every algorithm has improved, and there are better executables available. We see the ECDSA performance is so good on our servers that we don't need to rate limit; we can generate more signatures than answers we can send out. So, it's not a limiting factor.
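The kind of comparison behind that slide can be reproduced with OpenSSL's built-in benchmark; the exact numbers depend entirely on the OpenSSL version and the CPU, which is the speaker's point about code quality improving over time.

```shell
# Compare signing and verification throughput of RSA-2048 vs ECDSA P-256.
# On recent OpenSSL builds, P-256 signing is far faster than RSA-2048
# signing, while verification remains faster for RSA; that verification
# gap is the validator-side cost discussed in the Q&A.
openssl speed rsa2048 ecdsap256
```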
There are lots of things that we are going to be doing. Here, you can see what the features are and what the time line for them is. It will be happening. This is good. This is not good: there are servers that don't like this, so if somebody wanted to slave data from us there can be problems. We have registries that do not allow you to specify DS records with ECDSA algorithms. There are registrars that don't accept them, and registrars can have two reasons why: their user interface doesn't allow it, or they don't get the right EPP command from the registry to enable it. And even if the registry is at fault, the registrar gets yelled at. I have a registrar that I use, and I discovered this problem when I couldn't put the DS record on one of my registrations that was at a different registry operator. Some of the registries are very amenable and say yes, we will fix it; others say no.
So, if you have any influence in any registries or registrars, please, add support, it will be good. RIPE would like to use ECDSA in the not‑too‑distant future, let's get the support everywhere.
Validators. Yeah, some don't support it. There are two resolvers on this network here; one of them did not support ECDSA on Sunday, it does today. The other one looks like this. I have this little check programme that you can download from GitHub; it checks every combination, and I have published zones signed with all the possible algorithm combinations. The tests are all against local resolvers, if you want to complain about them. Don't test Google; they already fixed theirs after I reported it to them. The latest versions of BIND and Unbound, if they are compiled with the right crypto libraries, will support everything, including GOST, which nobody seems to enable by default; that is also a reasonable ECC algorithm. So ECDSA is coming: be ready, and then start taking advantage of it. Questions?
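A manual version of that check is to query a name in an ECDSA-signed zone through the resolver under test and look at the AD bit in the reply. The zone name here is hypothetical; the tool on GitHub that the speaker mentions automates this across all the algorithm combinations.

```shell
# Ask the local resolver for a name in an ECDSA-signed zone,
# requesting DNSSEC records and authenticated data
dig +dnssec +adflag ecdsa-signed.example.com A

# AD flag present in the reply -> the resolver validated the ECDSA signature
# AD flag absent, answer given -> the resolver treated the zone as insecure
#                                 because it does not know the algorithm
```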
PETER KOCH: Thank you, Olafur. Nobody is running to the mic. Good. I need to cut the mic lines now.
ANAND BUDDHDEV: Thanks for this presentation. Two issues. The resolvers: I have talked upstream, and they fixed one; I am not sure why the other hasn't been fixed, but before the next meeting we will definitely make sure they are all OK. The other thing: you presented a slide on signing and validation numbers and speed, which was very interesting. One of the things that is obvious is that verification with ECDSA is slower, so you can sign quickly, but the burden is now shifting to the validating resolvers, and I would like to ask if you can get one of your whiz kids to improve validation speeds for ECDSA and contribute that back to the community.
OLAFUR GUDMUNDSSON: We have released code for OpenSSL; sometimes it gets compiled in by distributions and sometimes it doesn't. We have released the same code for the Go libraries, and this is all publicly available. It's written in assembler for Intel, so yes, it helps Intel; we haven't started improving things on RISC, but we are very aware of it and the work has begun. We are taking care of some of the theoretical attacks based on timings against the keys. And also, on validators, I am very sympathetic to that, but this is one core; they usually have multiple cores, unless they are server implementations that are single threaded, and how many of them are doing 9,000 validations a second?
ANAND BUDDHDEV: Maybe not yet.
OLAFUR GUDMUNDSSON: How many of them are limited to one core?
GEOFF HUSTON: Firstly, I want to comment: I seem to recall things like the NGN programme at the ITU that took all the shit work and combined it in a fancy acronym, and it was going to work. You seem to have the opposite approach, of taking decent technologies and bundling them; thank you, this is the right way of going about it. I wanted to address what you are doing, which I think is really valuable. I keep hearing there is no DNSSEC out there, no one is using it, and that is actually bullshit. One quarter of all the world's users send their queries to resolvers that validate. You may not like the ServFail answer and go to another resolver that doesn't validate, but signing zones is really important, and it helps. We are getting there with ECDSA. Last September, one in three folk were not doing ECDSA when they did RSA; they had the old libraries. We are now down to one in five; in other words, the story is improving, and I think a lot of it is thanks to the efforts of folks like you: fast code, well distributed, it's making a difference. So this is really good and what we need. Because the real thing is, you are doing TLSA, and if there is one thing we know that is completely crap in the Internet, it's domain name certificates and CAs, and the only thing that is secure about going to a site that has a green bar on it is that the colour green is very secure. Thank you.
OLAFUR GUDMUNDSSON: Thank you.
PETER KOCH: Finally, and as is fully appropriate for the end of the session, we are hearing a contribution about a sunset.
JIM MARTIN: I want to talk about what we are planning on doing with the ISC DLV registry. DLV, for anybody who is not familiar with it, DNSSEC look-aside validation, was really about a chicken-and-egg problem. The idea was, DNSSEC had just rebooted from the protocol development process, and you had no keys in the root, no keys in the parent zones and no keys in your zone. So why in the world would the root be signed when there is nobody using it, and why would anybody on the edge actually be able to validate? So it was sort of: how do we get started?
So, as you know, DNSSEC goes all the way up the chain; it requires a chain of trust all the way up to the root. With that not in place before the root was signed, there had to be some other mechanism, if we wanted to be able to validate, to actually get to a trust anchor. So there was an alternate anchor that was put into the DLV, and that was a mechanism that would allow you to look to the side to actually get your key information. This was a perfectly valid, reasonable thing to do in 2006 when it started. There was nothing out there; the .se guys were the only ones who had signed anything at the TLD level, and people on the edge who wanted to do it had really no viable way of doing it. So we introduced a registry; there was a web form that you would go to, you would submit your information and you would be in the DLV registry. That registry allowed validation and everything was great, and it allowed there to be a way for people to play with things. Well, other ccTLDs eventually became signed and more and more work was done to build the infrastructure for DNSSEC. Ultimately, in 2010, the root was signed, and beyond that we keep seeing more zones, the TLDs as well as the edge zones, all being signed, and you see a much greater deployment of validating resolvers.
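Concretely, the look-aside was switched on in the resolver configuration. In BIND this was the dnssec-lookaside option, anchored at ISC's dlv.isc.org zone; a named.conf fragment from that era looked roughly like this (a sketch of the historical configuration, shown for illustration):

```conf
options {
    dnssec-validation yes;
    // Consult dlv.isc.org for trust anchors of zones whose
    // parents publish no DS record
    dnssec-lookaside "." trust-anchor dlv.isc.org;
};
```

The resolver would then, on a validation failure up the normal chain, look the zone up in the DLV registry instead, which is the extra round trip and the single point of control discussed below.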
So, where we are now is that the infrastructure is all there and the real need for DLV we believe, is no longer there. And in fact, we believe that it's delaying the deployment of DNSSEC.
So, the advantage of DLV is that it allows a signed zone to be validated even if the parent is not signed; it will definitely accept DS records, where your registrar may or may not; and it's free. The downside is that it reduces the pressure on the parents to actually get signed, and it reduces the pressure on the registrars to actually accept DS records. And it requires additional infrastructure: an additional round trip to the ISC servers to do the potential look-aside, and it's yet another single point of failure, a single area where there is one domain of control that could cause problems. And so we think it's probably a good idea to step away from it.
So the question: who needs it right now? Entities that have signed zones whose parents aren't signed still need some kind of look-aside, and entities whose registrars don't accept DS records could still make good use of it. And then there is a corner case that has been brought up: if you are transitioning from one registrar to another, having DLV as a constant way to do DNSSEC lookup is a way to manage the migration between registrars. That is more of a corner case than a primary case.
So let's look at these two main issues. The unsigned parent issue: well, if you look at that curve, it's pretty clear that we are getting there in terms of getting the TLDs signed, with keys in the root. So the parent not being signed is probably not a huge issue, and if we give it a little bit of time we should be OK on that.
On the registrar support level, there certainly are registrars out there that don't accept DS records. However, as of the 2013 Registrar Accreditation Agreement with ICANN, it's required that you accept DS records. It may be via e-mail or web form, but they must accept them. Now, not everyone has signed that agreement; there is a five-year cycle on those agreements, so it may be up to 2018 before this entirely works its way through. But frankly, if you are really into DNSSEC, you might want to consider changing your registrar, and we think this is a good way to put pressure on registrars to make this possible.
So, we are thinking: the root is signed, most of the TLDs are signed, a lot of registrars support it, and frankly, we think by announcing the sunset for DLV we can spur more registrars to do this, so we think it's the right thing to do. So, this is what we are planning on doing, and this is what we are looking for feedback on from the community. Currently in the DLV there are 4,500 or so zones configured, of which about 2,800 are fully configured and working entirely. Of those, only 397 have an unsigned parent, so everything except the 397 could go into the DNS and validate down from the root, as would be appropriate. In fact, 20% of that 4,500 already does. There is old cruft in the DLV that never got cleaned up.
Our plan is to go through the DLV right now (and I mean our registry; this is separate from the DLV code that is currently in BIND, which is a separate discussion). We are going to go through the DLV registry and identify everyone who could potentially register with a parent, or in fact already has and just may not remember that they have this data in there, and we are going to send out some e-mails on that. We are then going to stop adding new zones that could potentially validate all the way up, and eventually we are going to remove all the zones. The time line is this. Right now, we are requesting that the owner remove the zone if they can validate all the way up, or if their registry would support it; their registrar may not, but if their registry supports it, then we are going to ask them to do whatever internal work they can to get to a properly signed state and then move off the DLV. That is right now. In early 2016, the plan is to accept no more registrations of zones that could validate all the way up, just because we don't want to encourage anyone to do the wrong thing. Then, after having given that one year's worth of notice, we intend to purge any zones that could be validated all the way up. At the beginning of 2017, the intention is to remove all of the records in the zone except for the SOA; we will keep the zone around just so that the lookups will work. The idea is that by 2017 we should be completely clean and completely out of the game of providing DLV. Our plan is to discuss it in several major venues, so we have been to ICANN, I presented at DNS OARC last weekend, I am here; the idea is to get feedback from people on whether this is a good or bad idea. We are e-mailing out to various mailing lists, and we are certainly going to notify all the current DLV users.
We are also going to be talking to the validating resolver publishers as well as packagers, to make sure defaults are set in a reasonable way moving forward.
So ultimately, the goal: by 2017 it's all gone. And I'd love feedback from anybody in the room that has an opinion, one way or the other.
PETER KOCH: Thank you. If we are running into the break it's your time, think of our support staff, they have probably deserved a break.
AUDIENCE SPEAKER: I think you could make the time line more aggressive; what you are doing is entirely reasonable, you are being kind. For the 350-odd zones that potentially have not yet got a signable parent, what is the resolver-side failure mode if the DLV went away? Does it default to unsigned or to invalid?
JIM MARTIN: It depends on which failure mode you are describing. If you do a lookup to the DLV and you get any valid response ‑‑
AUDIENCE SPEAKER: No if the service did not respond.
JIM MARTIN: If the service ceases to respond, then you get a ServFail. Let me rephrase that: if you are running BIND, that is what happens.
AUDIENCE SPEAKER: For the BIND population, does it go to the classic unverified, insecure mode, or to the bad, don't-go-there, this-cannot-be-resolved outcome? If it goes to ServFail, you didn't fall back to the true root state and parse down the name as an unsigned name, because there is no verifiable chain down to the label?
JIM MARTIN: I have had this discussion; there is an interesting conflict within the protocol community. The development side believes that, from a security perspective, the right thing to do is to ServFail, while failing in a more reasonable fashion from an operator's perspective is...
AUDIENCE SPEAKER: OK. And I think the people who are provably signed from the root, get them out now; basically boot them off.
WARREN: Google. Kind of following up on what George said: you made this presentation at DNS OARC and here, and I suspect at least a number of the people using DLV are folk who are in this room. What would be interesting would be to run those same numbers again and present them tomorrow, showing how many fewer zones there are now, if any.
JIM REID: Just a DNS engineer from Scotland. Jim, I disagree with what George has said. I think the time you have arranged for the retirement of the registry gives people plenty of time; there are a whole bunch of other factors involved, releases and so on. I think the sort of timescale you have put in place is very reasonable. I would also like to thank the ISC, because the death of DLV has been long, long, long overdue in my opinion, and I will quite happily dance on its grave. I would like you to think about, as part of the retirement plan, also making in your announcement some kind of statement about the removal of the DLV code from BIND: whenever a BIND 9 release comes around about the time when this is retired, remove it from the code base.
JIM MARTIN: On that point, we haven't fully discussed this internally, but the BIND code that does DLV and the DLV registry are inherently separate things, and there are potentially use cases where people have private DLV registries, and hence the code is a separate question. There have been some internal discussions on potentially making it a compile-time option, but again, these are just discussions at this point.
JIM REID: Yes, but I think some more data is needed. If there is nobody else out there that has a DLV registry, what is the point in keeping that crufty code around and keeping it alive? It's still in your archive and repository, so if there is a need to exhume it you could do that, but it's important to make it clear this stuff is going away. I think there has to be a fairly clear statement: we are taking the code out of the code base, just as you have done in the past with A6 records and stuff of that nature.
JIM MARTIN: I certainly will take that back to the development side of the house. Any other questions? Go get some coffee. Thank you.
PETER KOCH: Thanks, everybody.
Thanks, Jim. We will adjourn the meeting and reconvene tomorrow morning, 9 o'clock, with another big basket of interesting presentations. Enjoy your coffee.
LIVE CAPTIONING BY AOIFE DOWNES RPR
DOYLE COURT REPORTERS LTD, DUBLIN IRELAND.