Hello all,
We currently have a planned feature to clean up/improve our config format for known (remote) machines:
https://trello.com/c/WSEOANNY/268-finalize-machines-json-format
This is a bit thin and not easy to understand or rationalize. So I took this, plus what I remembered from last week's discussion and some thoughts about what would make sense, and wrote a draft of what we want to achieve and what it should look like:
https://github.com/cockpit-project/cockpit/wiki/Config-format-for-known-mach...
I'd appreciate some feedback about whether this makes sense, and opinions about the per-user → global SSH host key transfer [1].
Thanks!
Martin
[1] My personal opinion is that we should not second-guess what SSH does and STOP doing it.
Thanks for summarizing this well.
It's worth noting that Peter Volpe is moving the libssh code that consumes SSH keys ... and I believe he has working code for using /etc/ssh/known_hosts.
On 07.02.2017 15:17, Martin Pitt wrote:
> Hello all,
> We currently have a planned feature to clean up/improve our config format for known (remote) machines:
> https://trello.com/c/WSEOANNY/268-finalize-machines-json-format
> This is a bit thin and not easy to understand or rationalize. So I took this, plus what I remembered from last week's discussion and some thoughts about what would make sense, and wrote a draft of what we want to achieve and what it should look like:
> https://github.com/cockpit-project/cockpit/wiki/Config-format-for-known-mach...
> I'd appreciate some feedback about whether this makes sense, and opinions about the per-user → global SSH host key transfer [1].
The intent has been that once the dashboard is set up by a single user on the system, it becomes usable by all users on the system; the global storage of SSH keys was driven by that.
However we should have always used /etc/ssh/known_hosts. I believe /var/lib was an implementation detail from before we had "{ superuser: true }" [0] style privilege escalation.
Not an opinion per se ... but I just wanted to make sure the decision reflects the end user experience goal.
Cheers,
Stef
[0]
Hello,
Stef Walter [2017-02-07 15:49 +0100]:
> It's worth noting that Peter Volpe is moving the libssh code that consumes SSH keys ... and I believe he has working code for using /etc/ssh/known_hosts.
Noted, I'll coordinate with him.
>> I'd appreciate some feedback about whether this makes sense, and opinions about the per-user → global SSH host key transfer [1].
> The intent has been that once the dashboard is set up by a single user on the system, it becomes usable by all users on the system; the global storage of SSH keys was driven by that.
That's the sort of design feedback I was looking for; I wasn't aware of that design goal. As this is restricted to admins (I just checked that as a non-admin I cannot add servers at all), this makes more sense and justifies being different to ssh's default behaviour. I adjusted the wiki:
> However we should have always used /etc/ssh/known_hosts. I believe /var/lib was an implementation detail from before we had "{ superuser: true }" [0] style privilege escalation.
Makes sense to use that for new hosts, and either move or always read the file in /var too.
It still seems that for the purpose of programmatically pre-seeding machines it makes sense to be able to put a host key right into a machines/foo.json file, as otherwise we'd have the same "concurrent racy access" problem as for machines.json itself.
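To make the pre-seeding idea concrete, a drop-in could look roughly like this. This is only a sketch: the `hostKey` field name (and `visible`) are illustrative, not an agreed-on part of the format.

```json
{
    "web1": {
        "address": "web1.example.com",
        "hostKey": "web1.example.com ssh-rsa AAAAB3NzaC1yc2E...",
        "visible": true
    }
}
```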
Thanks,
Martin
On 07.02.2017 21:25, Martin Pitt wrote:
> Hello,
> Stef Walter [2017-02-07 15:49 +0100]:
>> It's worth noting that Peter Volpe is moving the libssh code that consumes SSH keys ... and I believe he has working code for using /etc/ssh/known_hosts.
> Noted, I'll coordinate with him.
>>> I'd appreciate some feedback about whether this makes sense, and opinions about the per-user → global SSH host key transfer [1].
>> The intent has been that once the dashboard is set up by a single user on the system, it becomes usable by all users on the system; the global storage of SSH keys was driven by that.
> That's the sort of design feedback I was looking for; I wasn't aware of that design goal. As this is restricted to admins (I just checked that as a non-admin I cannot add servers at all), this makes more sense and justifies being different to ssh's default behaviour. I adjusted the wiki:
>> However we should have always used /etc/ssh/known_hosts. I believe /var/lib was an implementation detail from before we had "{ superuser: true }" [0] style privilege escalation.
> Makes sense to use that for new hosts, and either move or always read the file in /var too.
> It still seems that for the purpose of programmatically pre-seeding machines it makes sense to be able to put a host key right into a machines/foo.json file, as otherwise we'd have the same "concurrent racy access" problem as for machines.json itself.
In large and properly set-up systems/clusters, all the known hosts will be made globally available: either via a shared drop-in file ... or, as a better example, how FreeIPA uses the following configuration option to look up known hosts from the directory:
ProxyCommand /usr/bin/sss_ssh_knownhostsproxy -p %p %h
So I would be cautious here about having another way to distribute SSH known hosts. /etc/ssh/known_hosts is a well-known format and there's lots of tooling for populating it.
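For reference, the pieces that FreeIPA/SSSD typically drop into /etc/ssh/ssh_config look roughly like this (paths as shipped by sssd; exact details vary by version):

```
# Installed by ipa-client-install / SSSD (approximate):
GlobalKnownHostsFile /var/lib/sss/pubconf/known_hosts
ProxyCommand /usr/bin/sss_ssh_knownhostsproxy -p %p %h
```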
Note that in the corner case where the dashboard machines were prepopulated but the SSH host keys were missing, the user will be prompted in Cockpit with the fingerprint when accessing the machine.
Lastly, if we do need to add this feature later (i.e. an additional host key field in the machines/foo.json file) we can certainly add it without backwards compatibility problems.
Stef
Stef Walter [2017-02-08 8:36 +0100]:
>> It still seems that for the purpose of programmatically pre-seeding machines it makes sense to be able to put a host key right into a machines/foo.json file, as otherwise we'd have the same "concurrent racy access" problem as for machines.json itself.
> In large and properly set-up systems/clusters, all the known hosts will be made globally available: either via a shared drop-in file ... or, as a better example, how FreeIPA uses the following configuration option to look up known hosts from the directory:
> ProxyCommand /usr/bin/sss_ssh_knownhostsproxy -p %p %h
> So I would be cautious here about having another way to distribute SSH known hosts. /etc/ssh/known_hosts is a well-known format and there's lots of tooling for populating it.
If our interface for pre-populating hosts is CLI-only, then using /etc/ssh/known_hosts is indeed sufficient -- but then we also don't need to split machines.json either, as that tool could just as well change machines.json in place (and do locking, etc.) in the same way as known_hosts is updated.
I thought the goal was to provide a drop-in dir where machine snippets can be added -- but then we should also offer providing the host keys as well. Otherwise...
> Note that in the corner case where the dashboard machines were prepopulated but the SSH host keys were missing, the user will be prompted in Cockpit with the fingerprint when accessing the machine.
... you'll get this, and this feels like an incomplete solution. In practice, nobody will actually verify the fingerprints. How is it only a corner case? It will happen every time machines.json gets updated and the first time someone connects to that new machine. This will also affect e. g. VMs that get dynamically added to the list (if someone uses that feature for this), so this won't only be a "do it once, ever" effect.
> Lastly, if we do need to add this feature later (i.e. an additional host key field in the machines/foo.json file) we can certainly add it without backwards compatibility problems.
Sure, this doesn't need to be in the first PR, but I think we need to keep it in mind for designing this: It seems of questionable utility to me to split machines.json without also splitting known_hosts (or even just merging the two).
Thanks,
Martin
Martin Pitt <martin@piware.de> writes:
> https://github.com/cockpit-project/cockpit/wiki/Config-format-for-known-mach...
Thanks!
If someone drops /etc/cockpit/machines.d/10-foo.json, will those machines immediately show up on the dashboard? Do we expect 10-foo.json to specify a reasonable color for each?
Or would those machines only be available for autocompletion when the user explicitly adds a machine to the dashboard?
If the user now adds or removes such a machine or changes the color for it, where does Cockpit store this? In 10-foo.json or in 99-webui.json?
I think this is implicitly addressed in the wiki page, since it proposes to merge all the json files. So changing a color or visibility would go to 99-webui.json, which would overwrite what is already in 10-foo.json.
Correct?
Hello Marius,
Marius Vollmer [2017-02-08 14:27 +0200]:
> If someone drops /etc/cockpit/machines.d/10-foo.json, will those machines immediately show up on the dashboard?
Would be nice at some point, with an inotify watch. My gut feeling was that this was a version 2 thing, as we don't do this right now either.
> Do we expect 10-foo.json to specify a reasonable color for each?
They can of course, but the UI should pick an unused color if they don't.
> Or would those machines only be available for autocompletion when the user explicitly adds a machine to the dashboard?
I think they should appear in full, like an entry from our current /var/.../machines.json.
> If the user now adds or removes such a machine or changes the color for it, where does Cockpit store this? In 10-foo.json or in 99-webui.json?
Hmm, good question. My original intent was to change the file that defines the host entry, i. e. 10-foo.json in this example. This would look less confusing, but it might cause conflicts if 10-foo.json is being managed by puppet or similar, which would then stomp over the config again. However, the same is true for pretty well every other setting that cockpit (indirectly) can make, such as hostname or time zone.
> I think this is implicitly addressed in the wiki page, since it proposes to merge all the json files. So changing a color or visibility would go to 99-webui.json, which would overwrite what is already in 10-foo.json.
That would technically work too, yes. When reading we must support merging, the question is just where we write updates to. I don't have a strong opinion about that, though.
Thanks,
Martin
Martin Pitt <martin@piware.de> writes:
> Marius Vollmer [2017-02-08 14:27 +0200]:
>> If someone drops /etc/cockpit/machines.d/10-foo.json, will those machines immediately show up on the dashboard?
> Would be nice at some point, with an inotify watch. My gut feeling was that this was a version 2 thing, as we don't do this right now either.
We do, actually. If you change /var/lib/cockpit/machines.json with vi, say, this has immediate effect on the dashboard. The magic is provided by cockpit.file(...).watch():
http://cockpit-project.org/guide/latest/cockpit-file.html
We have equivalent magic for watching directories, but there is no nice JavaScript API for that yet, I think. (The "fslist1" channel payload, documented in .../cockpit/doc/protocol.md.)
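For reference, the cockpit.file() watching mentioned above is used roughly like this. Treat it as a sketch: it only runs inside a Cockpit session (cockpit.js is not available standalone), and refreshDashboard() is a hypothetical callback.

```javascript
// Sketch: react to changes of machines.json from a Cockpit page.
const file = cockpit.file("/var/lib/cockpit/machines.json", { syntax: JSON });

file.watch(function (content, tag) {
    // Called with the parsed content initially and on every change;
    // content is null if the file does not exist.
    refreshDashboard(content || {});   // refreshDashboard() is hypothetical
});
```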
>> Or would those machines only be available for autocompletion when the user explicitly adds a machine to the dashboard?
> I think they should appear in full, like an entry from our current /var/.../machines.json.
The current machines.json has a "visibility" property, and removing a machine actually just sets that property to false. This way, Cockpit remembers the address and color of the machine and can offer it for autocompletion and the machine gets the same color again next time it is added.
>> If the user now adds or removes such a machine or changes the color for it, where does Cockpit store this? In 10-foo.json or in 99-webui.json?
> Hmm, good question. My original intent was to change the file that defines the host entry, i. e. 10-foo.json in this example. This would look less confusing, but it might cause conflicts if 10-foo.json is being managed by puppet or similar, which would then stomp over the config again.
> However, the same is true for pretty well every other setting that cockpit (indirectly) can make, such as hostname or time zone.
Yeah, but that's a bug, not a feature. :-) Ideally, we would like to prevent people from running into conflicts with other management tools, see
https://trello.com/c/lPweIUlQ/325-spike-markers-for-config-management-tools
>> I think this is implicitly addressed in the wiki page, since it proposes to merge all the json files. So changing a color or visibility would go to 99-webui.json, which would overwrite what is already in 10-foo.json.
> That would technically work too, yes. When reading we must support merging, the question is just where we write updates to. I don't have a strong opinion about that, though.
I would write to 99-webui.json only.
Hello Marius,
Marius Vollmer [2017-02-10 10:31 +0200]:
>> Would be nice at some point, with an inotify watch. My gut feeling was that this was a version 2 thing, as we don't do this right now either.
> We do, actually. If you change /var/lib/cockpit/machines.json with vi, say, this has immediate effect on the dashboard. The magic is provided by cockpit.file(...).watch():
> http://cockpit-project.org/guide/latest/cockpit-file.html
Thanks for pointing that out. So v1 then :-)
> We have equivalent magic for watching directories, but there is no nice JavaScript API for that yet, I think. (The "fslist1" channel payload, documented in .../cockpit/doc/protocol.md.)
I'm moving the whole reading/writing logic from JS into the bridge behind a D-Bus interface (as per Stef's recommendation), so that the handling/watching/merging of the entire machines/ directory does not need to happen on the client side. So my thought was to add a GFileMonitor to that in the bridge, and send a D-Bus signal to the client on any change, then the frontend can get the Machines property again.
We don't seem to use a lot of D-Bus signals in cockpit, but protocol.md documents them and e. g. pkg/networkmanager/interfaces.js subscribes to one, so I figure that works in general?
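The frontend side of that design could look roughly like this. The interface name, object path, and renderMachines() callback are made up for illustration, and the snippet only runs inside a Cockpit session:

```javascript
// Sketch: subscribe to change notifications from the bridge's
// hypothetical machines D-Bus interface on the internal bus.
const client = cockpit.dbus(null, { bus: "internal" });
const proxy = client.proxy("cockpit.Machines", "/machines");

proxy.wait(function () {
    // The proxy emits "changed" whenever the bridge signals a property
    // update, e.g. after a GFileMonitor event on the machines.d directory.
    proxy.addEventListener("changed", function () {
        renderMachines(proxy.Machines);   // renderMachines() is hypothetical
    });
});
```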
>>> If the user now adds or removes such a machine or changes the color for it, where does Cockpit store this? In 10-foo.json or in 99-webui.json?
>> Hmm, good question. My original intent was to change the file that defines the host entry, i. e. 10-foo.json in this example. This would look less confusing, but it might cause conflicts if 10-foo.json is being managed by puppet or similar, which would then stomp over the config again.
>> However, the same is true for pretty well every other setting that cockpit (indirectly) can make, such as hostname or time zone.
> Yeah, but that's a bug, not a feature. :-) Ideally, we would like to prevent people from running into conflicts with other management tools, see
> https://trello.com/c/lPweIUlQ/325-spike-markers-for-config-management-tools
OK, fair enough, but for now I'd treat that as a user bug (trying to simultaneously manage a machine with a config management tool and interactively) :-) Anyway, this should be moot with the following.
> I would write to 99-webui.json only.
OK, works for me.
Thanks for your input!
Martin
cockpit-devel@lists.fedorahosted.org