Hi,
On Mon, Jun 04, 2012 at 10:30:35AM -0400, Jeff Darcy wrote:
Sorry for the delay getting back to you.
No problem: days only have 24h :) and with the release of GlusterFS 3.3, I think you were
a little bit busy :)
On Fri, 1 Jun 2012 15:48:48 +0200
Yves Pagani <ypagani(a)aps.edu.pl> wrote:
> - man 8 hekafs:
> for creating keys, it is written "openssl genrsa 1024 -out
> server.key". In fact the number of bits must be at the end of the
> command or the output file will not be created, so the line has to be
> "openssl genrsa -out server.key 1024".
Correct.
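For the archive, the difference in practice is only the argument order (nothing else
changed):

  # as currently written in the man page -- server.key is not created:
  openssl genrsa 1024 -out server.key
  # corrected -- the key size must come last:
  openssl genrsa -out server.key 1024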
> - following the link given in the fedora wiki
> (https://fedoraproject.org/wiki/Features/CloudFS), I have access to
> the file named README.ssl. But when I cloned the git repository, this
> file does not exist. I tried to search in the git history but I can
> not find it (but my knowledge of git is (very) low, so I could have
> missed something).
http://git.fedorahosted.org/git/?p=CloudFS.git;a=blob;f=scripts/README.ss...
works for me. Did it not work for you? I've attached the file, but be
aware that it's slightly out of date. Some of the updated information
is now in the main man page, and some is implicit in the following
commands, so see their man pages:
hfs_update_cert
hfs_start_volume
hfs_mount
Thanks for the file. The link you gave works, but when I clone the repo (git clone -v
git://git.fedorahosted.org/CloudFS.git), I can't find this file. I tried:
- git checkout 3547f6b354d8b3455359ea0 (thinking the hash in the link was the
  commit hash)
- git log
- gitk, just in case I missed something
without success.
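For reference, the commands I probably should have tried instead (the path
scripts/README.ssl is taken from your link; I am not claiming they would have found it):

  $ git log --all --full-history -- scripts/README.ssl
  $ git show <commit>:scripts/README.ssl     # view the file as it was at a given commit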
> For my configuration, I got the information that the different
> files server*.pem must be concatenated into a file named "root.pem". But
> this information is not in the hekafs man page. So must the file
> containing the different (server) certificates be named
> "root.pem", or can we give it another name?
The combination of certificates (specified with hfs_update_cert or
through the GUI) is done automatically in hfs_start_volume.
Thanks for this information. It is great that hfs_start_volume does all the required magic :)
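If I understand correctly, the effect is roughly equivalent to something like this (file
names purely illustrative), so I never need to build "root.pem" by hand:

  cat server1.pem server2.pem server3.pem > root.pem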
> - from the hfs_mount manpage, I was not able to use the data key
> parameter. Does this parameter work only with the aes branch of the git
> repository?
What error did you get? The data key must be in a specific format, and
our diagnostics for an incorrect format are probably not very good.
I did a dd if=/dev/random of=data.key bs=1M count=1...
After your mail, I dug a little bit into the source code. By doing an openssl enc
-aes-256-cbc -k secret -P -md sha1 and copying the output (key part only) into a file, I
was able to do a hfs_mount.
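Concretely (with placeholder values, not my real key):

  $ openssl enc -aes-256-cbc -k secret -P -md sha1
  salt=<16 hex digits>
  key=<64 hex digits>
  iv =<32 hex digits>

I copied only the hex string after "key=" into a file and passed that file to hfs_mount.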
Nevertheless I obtained a strange result: from the client, I copied some files to the mount
point, and the files were copied to the different servers. On the client, I then unmounted the
filesystem and remounted it without giving the data key, and of course the files appeared
encrypted, as expected. But on the servers, the files are always in clear form! Is this
normal behaviour? I thought that the files were encrypted before being sent to the
servers.
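To be concrete about what I mean by "in clear form": reading a file directly from a
server's brick directory shows the original plain text (the path below is just from my
setup, not anything standard):

  # on one of the servers
  $ cat /bricks/tenant1/somefile.txt
  (readable plain text, not ciphertext)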
> During my testing, maybe the most annoying thing was that if
> something went wrong during a hfs_mount command, no indication that
> a problem occurred is given. For example, if you launch a hfs_mount
> command with a wrong password or a bad client certificate, you always
> get 0 with echo $?. The log file contains something like this:
> " [2012-06-01 13:37:39.634914] E
> [graph.c:526:glusterfs_graph_activate] 0-graph: init failed
> [2012-06-01 13:37:39.635219] W [glusterfsd.c:727:cleanup_and_exit]
> (-->/usr/sbin/glusterfs(main+0x295) [0x405e85]
> (-->/usr/sbin/glusterfs(glusterfs_volumes_init+0x145) [0x404d45]
> (-->/usr/sbin/glusterfs(glusterfs_process_volfp+0x198) [0x404bf8])))
> 0-: received signum (0), shutting down [2012-06-01 13:37:39.635290] I
> [fuse-bridge.c:3727:fini] 0-fuse: Unmounting '/gluster/'. "
> So fuse silently unmounts "/gluster" (which succeeds), and $? then
> contains 0 instead of an error code. Maybe, in this case, the function
> which unmounts the folder should "remember" that something went
> wrong?
I'll look into that.
If you need more verbose logs, I can send them.
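Until then, a workaround that would catch this in a script might be something like the
following (assuming /gluster as the mount point, as in my logs; mountpoint is from
util-linux):

  hfs_mount ...    # same invocation as before
  if ! mountpoint -q /gluster; then
      echo "hfs_mount exited 0 but /gluster is not mounted" >&2
      exit 1
  fi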
> - when I want to do a hfs_add_node (cli), I get an SSH error: "The
> authenticity of host '192.168.1.199 (192.168.1.199)' can't be
> established". So I created an SSH key and copied it to the host with
> ssh-copy-id, and after that hfs_add_node works without any
> problem.
This should only affect first-time setup. We need to be able to enable
the HekaFS daemon on the remote node before we can do that. The
default implementation of make_remote (in hfs_utils.py) does this using
ssh, but it should be easy to make it use any other method you want.
Ok
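For the record, what I did was roughly the following (the account is an assumption on my
part; use whatever user hfs_add_node connects as):

  $ ssh-keygen -t rsa                   # generate a key pair, accepting the defaults
  $ ssh-copy-id root@192.168.1.199      # install the public key on the new node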
> - in order to test the self-healing facility, I shut down a server
> and copied some files from the client. When I turned the server back on,
> the "missing" files did not appear (I waited 10 minutes since I read
> that glusterfs runs a self-heal every 600s). On the client, I need
> to unmount and remount the folder (then the files appear on the server but
> have size 0) and then launch a "find /gluster/ -noleaf -print0 |
> xargs --null stat >/dev/null" to "really" get the files onto the
> server.
Yes, in GlusterFS 3.2.x (on which HekaFS is based), there's no
automatic self-heal so an explicit find/ls/whatever is necessary to
touch the files. GlusterFS 3.3 does have automatic self-heal, but is
currently incompatible with HekaFS. We're trying to avoid doing a 3.3
version of HekaFS, because all of that work would become obsolete when
the functionality is fully integrated into GlusterFS itself (probably
in 3.4 but I can't promise that).
Ok. It will be very nice to see them merged :)
> - I tried to compile hekafs from source via the fedora-ize script, but
> it fails. It seems that some folders have been renamed
> (pkg -> packaging?). A "make fedora" in the packaging folder does the
> trick.
Yes, Kaleb sort of went a different route with the packaging/* stuff,
so "fedora-ize" is basically obsolete.
Ok
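In case it is useful to others, the sequence that worked for me was roughly this (the
directory name after the clone is an assumption; adjust if yours differs):

  $ git clone git://git.fedorahosted.org/CloudFS.git
  $ cd CloudFS/packaging
  $ make fedora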
> - each server is running Django on port 8080, but I could not
> find a way to turn it off (except with an iptables rule). I think it
> could be a security problem, because a local attacker can run an nmap
> scan on the local network and then gain access to the
> configuration of the servers.
Yes, this is an issue we've inherited from GlusterFS (which is
similarly insecure). A while ago I did some work to enable SSL for the
management interfaces, and Pete Zaitcev did something similar, but
neither has made it into the master branch yet.
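For now, the iptables rule I mentioned looks something like this (assuming the other
nodes live on 192.168.1.0/24, as in my setup, and should keep access to the management
port):

  # allow the cluster's own subnet, drop everyone else on port 8080
  iptables -A INPUT -p tcp --dport 8080 -s 192.168.1.0/24 -j ACCEPT
  iptables -A INPUT -p tcp --dport 8080 -j DROP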
> - In glusterfs, you can share files via ACLs with "a few people". Is
> it possible to do this kind of thing across different tenants? (I
> suspect the answer is no, since on the server side each tenant has
> its own folder.)
Tenants are *fully* isolated from one another, by design. Thus, there
is no sharing at all between them.
Ok
> I am writing some documentation of my setup. Do you think it could be
> interesting to other users?
I'm sure it would.
I hope to finish it in a few days and will then send it to the mailing list. Which
format do you prefer: plain text or LaTeX+PDF?
> Sorry for this quite long mail, and many thanks to all for this great
> piece of software.
You're quite welcome, and thank you for your feedback.
Thanks for your time and answers.
--
Your heart is pure, and your mind clear, and your soul devout.