Ok, what's the "correct" way to deal with systems developed in-house, that have their own sets of subdirectories?
And why, for that matter, does running sealert give me the full path to the executable, like openjdk... but *not* the full path to the file it's trying to operate on? I'm left going "ok, where was the file it deleted?" (We're running in permissive mode - overwhelmingly, developers and subject matter experts know less than nothing about selinux.)
mark
On Wed, 16 May 2018 13:25:55 -0400 m.roth@5-cent.us wrote:
And why, for that matter, does running sealert give me the full path to the executable, like openjdk... but *not* the full path to the file it's trying to operate on? I'm left going "ok, where was the file it deleted?"
Ok, there is a current working directory and a relative path, but I believe that in such cases, the SELinux denial or error is thrown before the full path to the file is even spelled out or made available for the application.
On 05/16/2018 01:25 PM, m.roth@5-cent.us wrote:
Ok, what's the "correct" way to deal with systems developed in-house, that have their own sets of subdirectories?
By "systems developed in-house", do you mean application software that you are developing? And if so, do you mean something that runs as a service/daemon or something that runs as a user application? What besides the application itself needs access to these directories? What if any security concerns do you have for the application (either to protect it from other processes running on the system or to confine/restrict it)?
And why, for that matter, does running sealert give me the full path to the executable, like openjdk... but *not* the full path to the file it's trying to operate on? I'm left going "ok, where was the file it deleted?" (We're running in permissive mode - overwhelmingly, developers and subject matter experts know less than nothing about selinux.)
Kernel limitation; we don't have that information readily available (e.g. no vfsmount and/or dentry at that layer) or safely generatable (e.g. calling d_path at certain points can trigger a deadlock) at the point where the permission check occurs. It can be captured and reported however by turning on system call auditing and pathname collection in the following manner. This audit functionality is disabled by default due to its performance overhead.
1) Enable system call auditing and pathname reporting.
Edit /etc/audit/rules.d/audit.rules (or /etc/audit/audit.rules if running a version that predates the introduction of /etc/audit/rules.d), and comment out the last line: #-a task,never
Then add a watch or filter to turn on pathname collection: echo "-w /etc/shadow -p w" > /etc/audit/rules.d/shadow.rules
The particular watch or filter is unimportant; it is the presence of any such watch/filter that turns on audit pathname collection. Just don't pick one that will trigger too often or you'll generate lots of irrelevant audit messages.
Then you can run: service auditd reload
And it should regenerate /etc/audit/audit.rules and reload that.
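Put together, step 1 amounts to something like the following. (A sketch, not a definitive recipe: the sed invocation assumes the modern /etc/audit/rules.d layout and the "-a task,never" line exactly as shipped, and the /etc/shadow watch is just the example above - any watch/filter will do.)

```shell
# Comment out the "-a task,never" line that suppresses syscall auditing
# (edits /etc/audit/rules.d/audit.rules in place, keeping a .bak copy).
sed -i.bak 's/^-a task,never/#-a task,never/' /etc/audit/rules.d/audit.rules

# Add any watch to turn on pathname collection; writes to /etc/shadow
# are rare enough not to flood the logs.
echo "-w /etc/shadow -p w" > /etc/audit/rules.d/shadow.rules

# Regenerate /etc/audit/audit.rules and reload the daemon.
service auditd reload
```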
2) Re-run your application that was triggering the errors. Then run "ausearch -m avc -i -ts recent" (or similar) to view the audit records.
You should then get a series of records, including a type=PROCTITLE (if supported by your kernel), type=PATH, type=CWD, type=SYSCALL, and type=AVC for each permission denial. These are generated at different points in the processing. All records with the same timestamp/serial number were generated during the same system call.
(alternatively sealert might collect these up for you and present them more nicely; I don't know as I don't use it myself)
3) Restore your audit configuration to its original state if you want to avoid the performance overhead.
rm /etc/audit/rules.d/shadow.rules
Uncomment the last line of /etc/audit/rules.d/audit.rules.
service auditd reload
Stephen Smalley wrote:
On 05/16/2018 01:25 PM, m.roth@5-cent.us wrote:
Ok, what's the "correct" way to deal with systems developed in-house, that have their own sets of subdirectories?
By "systems developed in-house", do you mean application software that you are developing?
That's usually what I mean. And I'm the sysadmin - there are multiple teams developing and enhancing a good number of systems, from websites to computation.
And if so, do you mean something that runs as a service/daemon or something that runs as a user application?
Website applications, tomcat applications, and user applications.
What besides the application itself needs access to these directories?
I'm not involved, so I don't know, but I would think the application itself is what needs access; it may create directories and subdirectories on the fly, and possibly delete them.
What if any security concerns do you have for the application (either to protect it from other processes running on the system or to confine/restrict it)?
We're not that worried about other processes. Our security is already being scanned by the IRT and pen testers. Me... I just want to shut up selinux so it stops flooding our logs, and, in the event that some day we're required to make selinux enforcing in production, I REALLY don't want that to break anything.
And why, for that matter, does running sealert give me the full path to the executable, like openjdk... but *not* the full path to the file it's trying to operate on? I'm left going "ok, where was the file it deleted?" (We're running in permissive mode - overwhelmingly, developers and subject matter experts know less than nothing about selinux.)
Kernel limitation; we don't have that information readily available (e.g. no vfsmount and/or dentry at that layer) or safely generatable (e.g. calling d_path at certain points can trigger a deadlock) at the point where the permission check occurs. It can be captured and reported however by turning on system call auditing and pathname collection in the following manner. This audit functionality is disabled by default due to its performance overhead.
Given that some jobs can run hours or days, I'm not going there. <snip>
- Re-run your application that was triggering the errors.
Then run "ausearch -m avc -i -ts recent" (or similar) to view the audit records.
You should then get a series of records, including a type=PROCTITLE (if supported by your kernel), type=PATH, type=CWD, type=SYSCALL, and type=AVC for each permission denial. These are generated at different points in the processing. All records with the same timestamp/serial number were generated during the same system call.
That's what I'll probably try.
Thanks a lot, more things added to my to-do list. <g>
mark
On Wed, May 16, 2018 at 01:25:55PM -0400, m.roth@5-cent.us wrote:
Ok, what's the "correct" way to deal with systems developed in-house, that have their own sets of subdirectories?
Assuming the directories fit a standard use-case, add fcontexts so those directories get labeled properly for that use case. E.g. if you have web content in a custom directory:
semanage fcontext -a -t httpd_sys_content_t '/custom/path/to/html/files(/.*)?'
semanage fcontext -a -t httpd_sys_script_exec_t '/custom/path/to/script/files(/.*)?'
Then just relabel those directories:
restorecon -R /custom/path/to/*
Also check selinux booleans to see if there is already one available to enable some functionality you need, e.g. for http:
getsebool -a | grep http
If those two steps aren't applicable, you have to develop your own policy. Start in permissive mode and use audit2allow.
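The usual audit2allow round trip looks roughly like this. (A sketch; "mycustomapp" is a placeholder module name, and the generated .te file should be reviewed by hand - audit2allow will happily allow anything it saw denied.)

```shell
# Collect recent AVC denials and turn them into a local policy module
# (generates mycustomapp.te and mycustomapp.pp in the current directory).
ausearch -m avc -ts recent | audit2allow -M mycustomapp

# Review the generated rules before loading anything.
cat mycustomapp.te

# Install the compiled module.
semodule -i mycustomapp.pp
```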
Use a config management system like Puppet to automate the above steps when deploying custom code.
selinux@lists.fedoraproject.org