[389-ds-base] branch 389-ds-base-1.3.9 updated: Ticket 49561 - MEP plugin, upon direct op failure, will delete twice the same managed entry
by pagure@pagure.io
This is an automated email from the git hooks/post-receive script.
tbordaz pushed a commit to branch 389-ds-base-1.3.9
in repository 389-ds-base.
The following commit(s) were added to refs/heads/389-ds-base-1.3.9 by this push:
new d0877d2 Ticket 49561 - MEP plugin, upon direct op failure, will delete twice the same managed entry
d0877d2 is described below
commit d0877d2a719c2ccf3921c0ac2d8e5bd9ed1bab22
Author: Thierry Bordaz <tbordaz(a)redhat.com>
AuthorDate: Mon Mar 18 13:48:03 2019 +0100
Ticket 49561 - MEP plugin, upon direct op failure, will delete twice the same managed entry
Bug Description:
When a failure occurs during a betxn_post plugin callback, the betxn_post plugins are called again.
This is to process a kind of undo action (for example, USN or DNA, which manage counters).
If the MEP plugin is called for a managing entry, it deletes the managed entry (which becomes a tombstone).
If another betxn_post plugin later fails, MEP is called again.
But because it does not detect the operation failure (for DEL and ADD), it tries again
to delete the managed entry, which is already a tombstone.
Fix Description:
The MEP betxn_post plugin callbacks (ADD and DEL) should catch the operation failure
and return early.
This is already in place for MODRDN and MOD.
https://pagure.io/389-ds-base/issue/49561
Reviewed by: Mark Reynolds, thanks!!
Platforms tested: F28
Flag Day: no
Doc impact: no
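The guard the fix adds can be modeled in a few lines. This is a hedged sketch, not the real slapi code: `op_result` stands in for what the actual `mep_oktodo()` helper reads from the pblock, and `PLUGIN_SUCCESS`, `delete_managed_entry`, and `mep_post_op_sketch` are illustrative names. It shows why bailing out on a failed operation prevents the double delete.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical, simplified model of the guard: a betxn_post callback must
 * bail out when the operation it post-processes has already failed,
 * otherwise the undo pass would delete the managed entry a second time. */
#define PLUGIN_SUCCESS 0

static int managed_entry_deletions = 0;

static void delete_managed_entry(void) { managed_entry_deletions++; }

static int mep_post_op_sketch(int op_result)
{
    if (op_result != 0) {
        /* Operation already failed: nothing for this callback to do. */
        return PLUGIN_SUCCESS;
    }
    delete_managed_entry();
    return PLUGIN_SUCCESS;
}
```

With the guard in place, the undo pass (called with a nonzero result) leaves the deletion count untouched.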
---
ldap/servers/plugins/mep/mep.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/ldap/servers/plugins/mep/mep.c b/ldap/servers/plugins/mep/mep.c
index 7f30f41..a7b60e1 100644
--- a/ldap/servers/plugins/mep/mep.c
+++ b/ldap/servers/plugins/mep/mep.c
@@ -2471,6 +2471,11 @@ mep_add_post_op(Slapi_PBlock *pb)
slapi_log_err(SLAPI_LOG_TRACE, MEP_PLUGIN_SUBSYSTEM,
"--> mep_add_post_op\n");
+ /* Just bail if we aren't ready to service requests yet. */
+ if (!mep_oktodo(pb)) {
+ return SLAPI_PLUGIN_SUCCESS;
+ }
+
/* Reload config if a config entry was added. */
if ((sdn = mep_get_sdn(pb))) {
if (mep_dn_is_config(sdn)) {
@@ -2543,6 +2548,11 @@ mep_del_post_op(Slapi_PBlock *pb)
slapi_log_err(SLAPI_LOG_TRACE, MEP_PLUGIN_SUBSYSTEM,
"--> mep_del_post_op\n");
+ /* Just bail if we aren't ready to service requests yet. */
+ if (!mep_oktodo(pb)) {
+ return SLAPI_PLUGIN_SUCCESS;
+ }
+
/* Reload config if a config entry was deleted. */
if ((sdn = mep_get_sdn(pb))) {
if (mep_dn_is_config(sdn))
--
To stop receiving notification emails like this one, please contact
the administrator of this repository.
[389-ds-base] branch 389-ds-base-1.4.0 updated: Ticket 49561 - MEP plugin, upon direct op failure, will delete twice the same managed entry
by pagure@pagure.io
This is an automated email from the git hooks/post-receive script.
tbordaz pushed a commit to branch 389-ds-base-1.4.0
in repository 389-ds-base.
The following commit(s) were added to refs/heads/389-ds-base-1.4.0 by this push:
new 906e093 Ticket 49561 - MEP plugin, upon direct op failure, will delete twice the same managed entry
906e093 is described below
commit 906e093f5a3598c90bdc875dcf7d2623d0401309
Author: Thierry Bordaz <tbordaz(a)redhat.com>
AuthorDate: Mon Mar 18 13:48:03 2019 +0100
Ticket 49561 - MEP plugin, upon direct op failure, will delete twice the same managed entry
Bug Description:
When a failure occurs during a betxn_post plugin callback, the betxn_post plugins are called again.
This is to process a kind of undo action (for example, USN or DNA, which manage counters).
If the MEP plugin is called for a managing entry, it deletes the managed entry (which becomes a tombstone).
If another betxn_post plugin later fails, MEP is called again.
But because it does not detect the operation failure (for DEL and ADD), it tries again
to delete the managed entry, which is already a tombstone.
Fix Description:
The MEP betxn_post plugin callbacks (ADD and DEL) should catch the operation failure
and return early.
This is already in place for MODRDN and MOD.
https://pagure.io/389-ds-base/issue/49561
Reviewed by: Mark Reynolds, thanks!!
Platforms tested: F28
Flag Day: no
Doc impact: no
---
ldap/servers/plugins/mep/mep.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/ldap/servers/plugins/mep/mep.c b/ldap/servers/plugins/mep/mep.c
index 7f30f41..a7b60e1 100644
--- a/ldap/servers/plugins/mep/mep.c
+++ b/ldap/servers/plugins/mep/mep.c
@@ -2471,6 +2471,11 @@ mep_add_post_op(Slapi_PBlock *pb)
slapi_log_err(SLAPI_LOG_TRACE, MEP_PLUGIN_SUBSYSTEM,
"--> mep_add_post_op\n");
+ /* Just bail if we aren't ready to service requests yet. */
+ if (!mep_oktodo(pb)) {
+ return SLAPI_PLUGIN_SUCCESS;
+ }
+
/* Reload config if a config entry was added. */
if ((sdn = mep_get_sdn(pb))) {
if (mep_dn_is_config(sdn)) {
@@ -2543,6 +2548,11 @@ mep_del_post_op(Slapi_PBlock *pb)
slapi_log_err(SLAPI_LOG_TRACE, MEP_PLUGIN_SUBSYSTEM,
"--> mep_del_post_op\n");
+ /* Just bail if we aren't ready to service requests yet. */
+ if (!mep_oktodo(pb)) {
+ return SLAPI_PLUGIN_SUCCESS;
+ }
+
/* Reload config if a config entry was deleted. */
if ((sdn = mep_get_sdn(pb))) {
if (mep_dn_is_config(sdn))
--
To stop receiving notification emails like this one, please contact
the administrator of this repository.
[389-ds-base] branch 389-ds-base-1.3.9 updated: Ticket 50282 - OPERATIONS ERROR when trying to delete a group with automember members
by pagure@pagure.io
This is an automated email from the git hooks/post-receive script.
tbordaz pushed a commit to branch 389-ds-base-1.3.9
in repository 389-ds-base.
The following commit(s) were added to refs/heads/389-ds-base-1.3.9 by this push:
new bcacbf2 Ticket 50282 - OPERATIONS ERROR when trying to delete a group with automember members
bcacbf2 is described below
commit bcacbf24bdfca1716c0cb033535fcb8836306c38
Author: Thierry Bordaz <tbordaz(a)redhat.com>
AuthorDate: Thu Mar 14 17:33:35 2019 +0100
Ticket 50282 - OPERATIONS ERROR when trying to delete a group with automember members
Bug Description:
When automember and memberof are both enabled, a user can be a member of a group
because of an automember rule. When that group is deleted,
memberof updates the member (to update its 'memberof' attribute), which
triggers automember to reevaluate the automember rule and re-add the member
to the group. But at this point the group is already deleted.
The failure is chained back up to the top-level operation, so the deletion
of the group fails.
Fix Description:
When an automember rule tries to add a user to a group, first check
that the group still exists before updating it.
https://pagure.io/389-ds-base/issue/50282
Reviewed by: Mark Reynolds, William Brown
Platforms tested: F29
Flag Day: no
Doc impact: no
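The shape of the fix is a lookup-before-update with one result treated specially. This is a hedged sketch with illustrative names (the `SKETCH_` constants and `update_group_member_sketch` are not the real slapi API): a "no such object" lookup result means the group was already deleted, so the definition is skipped and success is returned rather than chaining the error back up.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative result codes, numerically matching the usual LDAP values. */
#define SKETCH_LDAP_SUCCESS        0
#define SKETCH_LDAP_NO_SUCH_OBJECT 32
#define SKETCH_LDAP_OTHER          80

static int update_group_member_sketch(int lookup_rc)
{
    if (lookup_rc != SKETCH_LDAP_SUCCESS) {
        if (lookup_rc == SKETCH_LDAP_NO_SUCH_OBJECT) {
            /* Group already deleted: skip this definition, not an error. */
            return 0;
        }
        return lookup_rc; /* real failure: propagate it */
    }
    /* ... perform the member-value update here ... */
    return 0;
}
```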
---
ldap/servers/plugins/automember/automember.c | 23 +++++++++++++++++++++++
1 file changed, 23 insertions(+)
diff --git a/ldap/servers/plugins/automember/automember.c b/ldap/servers/plugins/automember/automember.c
index bb6ff1f..fcf0cdb 100644
--- a/ldap/servers/plugins/automember/automember.c
+++ b/ldap/servers/plugins/automember/automember.c
@@ -1636,6 +1636,29 @@ automember_update_member_value(Slapi_Entry *member_e, const char *group_dn, char
char *member_value = NULL;
int freeit = 0;
int rc = 0;
+ Slapi_DN *group_sdn;
+ Slapi_Entry *group_entry = NULL;
+
+ /* First thing check that the group still exists */
+ group_sdn = slapi_sdn_new_dn_byval(group_dn);
+ rc = slapi_search_internal_get_entry(group_sdn, NULL, &group_entry, automember_get_plugin_id());
+ slapi_sdn_free(&group_sdn);
+ if (rc != LDAP_SUCCESS || group_entry == NULL) {
+ if (rc == LDAP_NO_SUCH_OBJECT) {
+ /* the automember group (default or target) does not exist, just skip this definition */
+ slapi_log_err(SLAPI_LOG_PLUGIN, AUTOMEMBER_PLUGIN_SUBSYSTEM,
+ "automember_update_member_value - group (default or target) does not exist (%s)\n",
+ group_dn);
+ rc = 0;
+ } else {
+ slapi_log_err(SLAPI_LOG_ERR, AUTOMEMBER_PLUGIN_SUBSYSTEM,
+ "automember_update_member_value - group (default or target) can not be retrieved (%s) err=%d\n",
+ group_dn, rc);
+ }
+ slapi_entry_free(group_entry);
+ return rc;
+ }
+ slapi_entry_free(group_entry);
/* If grouping_value is dn, we need to fetch the dn instead. */
if (slapi_attr_type_cmp(grouping_value, "dn", SLAPI_TYPE_CMP_EXACT) == 0) {
--
To stop receiving notification emails like this one, please contact
the administrator of this repository.
[389-ds-base] branch 389-ds-base-1.3.9 updated: Ticket 50077 - Do not automatically turn automember postop modifies on
by pagure@pagure.io
This is an automated email from the git hooks/post-receive script.
mreynolds pushed a commit to branch 389-ds-base-1.3.9
in repository 389-ds-base.
The following commit(s) were added to refs/heads/389-ds-base-1.3.9 by this push:
new 6eae9e0 Ticket 50077 - Do not automatically turn automember postop modifies on
6eae9e0 is described below
commit 6eae9e02b969c6f459cc263d3d1ae0b52af9bcd3
Author: Mark Reynolds <mreynolds(a)redhat.com>
AuthorDate: Tue Mar 12 16:03:29 2019 -0400
Ticket 50077 - Do not automatically turn automember postop modifies on
Description: Although we have set the new postop processing on by
default in the template-dse.ldif, we do not want to
enable it by default for upgrades (only new installs).
So if the attribute is not set, it is assumed "off".
https://pagure.io/389-ds-base/issue/50077
Reviewed by: firstyear(Thanks!)
(cherry picked from commit d318d060f49b67ed1b10f22b52f98e038afa356a)
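The behavioral change is small but easy to get backwards: the flag now starts off and is only raised on an explicit "on". A minimal sketch of that parsing, with `parse_do_modify` as a hypothetical stand-in for the config handling in `automember_start` (a NULL value models an unset attribute):

```c
#include <assert.h>
#include <stddef.h>
#include <strings.h> /* strcasecmp (POSIX) */

/* The flag defaults to 0 ("off") and is raised only when the config
 * attribute is present and equals "on", case-insensitively. */
static int parse_do_modify(const char *value)
{
    int do_modify = 0; /* default is now "off" for upgrades */
    if (value != NULL && strcasecmp(value, "on") == 0) {
        do_modify = 1;
    }
    return do_modify;
}
```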
---
ldap/servers/plugins/automember/automember.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/ldap/servers/plugins/automember/automember.c b/ldap/servers/plugins/automember/automember.c
index abd6df8..bb6ff1f 100644
--- a/ldap/servers/plugins/automember/automember.c
+++ b/ldap/servers/plugins/automember/automember.c
@@ -90,7 +90,7 @@ static void automember_task_export_destructor(Slapi_Task *task);
static void automember_task_map_destructor(Slapi_Task *task);
#define DEFAULT_FILE_MODE PR_IRUSR | PR_IWUSR
-static uint64_t plugin_do_modify = 1;
+static uint64_t plugin_do_modify = 0;
static uint64_t plugin_is_betxn = 0;
/*
@@ -345,15 +345,14 @@ automember_start(Slapi_PBlock *pb)
}
/* Check and set if we should process modify operations */
- plugin_do_modify = 1; /* default is "on" */
if ((slapi_pblock_get(pb, SLAPI_ADD_ENTRY, &plugin_entry) == 0) && plugin_entry){
if ((do_modify = slapi_fetch_attr(plugin_entry, AUTOMEMBER_DO_MODIFY, NULL)) ) {
if (strcasecmp(do_modify, "on") && strcasecmp(do_modify, "off")) {
slapi_log_err(SLAPI_LOG_ERR, AUTOMEMBER_PLUGIN_SUBSYSTEM,
"automember_start - %s: invalid value \"%s\". Valid values are \"on\" or \"off\". Using default of \"on\"\n",
AUTOMEMBER_DO_MODIFY, do_modify);
- } else if (strcasecmp(do_modify, "off") == 0 ){
- plugin_do_modify = 0;
+ } else if (strcasecmp(do_modify, "on") == 0 ){
+ plugin_do_modify = 1;
}
}
}
--
To stop receiving notification emails like this one, please contact
the administrator of this repository.
[389-ds-base] branch 389-ds-base-1.4.0 updated: Ticket 50077 - Do not automatically turn automember postop modifies on
by pagure@pagure.io
This is an automated email from the git hooks/post-receive script.
mreynolds pushed a commit to branch 389-ds-base-1.4.0
in repository 389-ds-base.
The following commit(s) were added to refs/heads/389-ds-base-1.4.0 by this push:
new 4ab9bd5 Ticket 50077 - Do not automatically turn automember postop modifies on
4ab9bd5 is described below
commit 4ab9bd595e1d9d09d13487d5807a6b59ec32b62f
Author: Mark Reynolds <mreynolds(a)redhat.com>
AuthorDate: Tue Mar 12 16:03:29 2019 -0400
Ticket 50077 - Do not automatically turn automember postop modifies on
Description: Although we have set the new postop processing on by
default in the template-dse.ldif, we do not want to
enable it by default for upgrades (only new installs).
So if the attribute is not set, it is assumed "off".
https://pagure.io/389-ds-base/issue/50077
Reviewed by: firstyear(Thanks!)
(cherry picked from commit d318d060f49b67ed1b10f22b52f98e038afa356a)
---
ldap/servers/plugins/automember/automember.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/ldap/servers/plugins/automember/automember.c b/ldap/servers/plugins/automember/automember.c
index c7b83e8..fcf0cdb 100644
--- a/ldap/servers/plugins/automember/automember.c
+++ b/ldap/servers/plugins/automember/automember.c
@@ -90,7 +90,7 @@ static void automember_task_export_destructor(Slapi_Task *task);
static void automember_task_map_destructor(Slapi_Task *task);
#define DEFAULT_FILE_MODE PR_IRUSR | PR_IWUSR
-static uint64_t plugin_do_modify = 1;
+static uint64_t plugin_do_modify = 0;
static uint64_t plugin_is_betxn = 0;
/*
@@ -345,15 +345,14 @@ automember_start(Slapi_PBlock *pb)
}
/* Check and set if we should process modify operations */
- plugin_do_modify = 1; /* default is "on" */
if ((slapi_pblock_get(pb, SLAPI_ADD_ENTRY, &plugin_entry) == 0) && plugin_entry){
if ((do_modify = slapi_fetch_attr(plugin_entry, AUTOMEMBER_DO_MODIFY, NULL)) ) {
if (strcasecmp(do_modify, "on") && strcasecmp(do_modify, "off")) {
slapi_log_err(SLAPI_LOG_ERR, AUTOMEMBER_PLUGIN_SUBSYSTEM,
"automember_start - %s: invalid value \"%s\". Valid values are \"on\" or \"off\". Using default of \"on\"\n",
AUTOMEMBER_DO_MODIFY, do_modify);
- } else if (strcasecmp(do_modify, "off") == 0 ){
- plugin_do_modify = 0;
+ } else if (strcasecmp(do_modify, "on") == 0 ){
+ plugin_do_modify = 1;
}
}
}
--
To stop receiving notification emails like this one, please contact
the administrator of this repository.
[389-ds-base] branch 389-ds-base-1.4.0 updated: Ticket 50282 - OPERATIONS ERROR when trying to delete a group with automember members
by pagure@pagure.io
This is an automated email from the git hooks/post-receive script.
tbordaz pushed a commit to branch 389-ds-base-1.4.0
in repository 389-ds-base.
The following commit(s) were added to refs/heads/389-ds-base-1.4.0 by this push:
new ada0f84 Ticket 50282 - OPERATIONS ERROR when trying to delete a group with automember members
ada0f84 is described below
commit ada0f84baac0286db64a413bb8896cd458eccacd
Author: Thierry Bordaz <tbordaz(a)redhat.com>
AuthorDate: Thu Mar 14 17:33:35 2019 +0100
Ticket 50282 - OPERATIONS ERROR when trying to delete a group with automember members
Bug Description:
When automember and memberof are both enabled, a user can be a member of a group
because of an automember rule. When that group is deleted,
memberof updates the member (to update its 'memberof' attribute), which
triggers automember to reevaluate the automember rule and re-add the member
to the group. But at this point the group is already deleted.
The failure is chained back up to the top-level operation, so the deletion
of the group fails.
Fix Description:
When an automember rule tries to add a user to a group, first check
that the group still exists before updating it.
https://pagure.io/389-ds-base/issue/50282
Reviewed by: Mark Reynolds, William Brown
Platforms tested: F29
Flag Day: no
Doc impact: no
---
.../suites/automember_plugin/automember_test.py | 114 ++++++++++++++++++++-
ldap/servers/plugins/automember/automember.c | 23 +++++
2 files changed, 136 insertions(+), 1 deletion(-)
diff --git a/dirsrvtests/tests/suites/automember_plugin/automember_test.py b/dirsrvtests/tests/suites/automember_plugin/automember_test.py
index b13c1b2..1659ab6 100644
--- a/dirsrvtests/tests/suites/automember_plugin/automember_test.py
+++ b/dirsrvtests/tests/suites/automember_plugin/automember_test.py
@@ -4,7 +4,7 @@ import os
import ldap
from lib389.utils import ds_is_older
from lib389._constants import *
-from lib389.plugins import AutoMembershipPlugin, AutoMembershipDefinition, AutoMembershipDefinitions
+from lib389.plugins import AutoMembershipPlugin, AutoMembershipDefinition, AutoMembershipDefinitions, AutoMembershipRegexRule
from lib389._mapped_object import DSLdapObjects, DSLdapObject
from lib389 import agreement
from lib389.idm.user import UserAccount, UserAccounts, TEST_USER_PROPERTIES
@@ -137,3 +137,115 @@ def test_adduser(automember_fixture, topo):
user = users.create(properties=TEST_USER_PROPERTIES)
assert group.is_member(user.dn)
+ user.delete()
+
+def test_delete_default_group(automember_fixture, topo):
+ """If memberof is enable and a user became member of default group
+ because of automember rule then delete the default group should succeeds
+
+ :id: 8b55d077-8851-45a2-a547-b28a7983a3c2
+ :setup: Standalone instance, enabled Auto Membership Plugin
+ :steps:
+ 1. Enable memberof plugin
+ 2. Create a user
+ 3. Assert that the user is member of the default group
+ 4. Delete the default group
+ :expectedresults:
+ 1. Should be success
+ 2. Should be success
+ 3. Should be success
+ 4. Should be success
+ """
+
+ (group, automembers, automember) = automember_fixture
+
+ from lib389.plugins import MemberOfPlugin
+ memberof = MemberOfPlugin(topo.standalone)
+ memberof.enable()
+ topo.standalone.restart()
+ topo.standalone.setLogLevel(65536)
+
+ users = UserAccounts(topo.standalone, DEFAULT_SUFFIX)
+ user_1 = users.create_test_user(uid=1)
+
+ try:
+ assert group.is_member(user_1.dn)
+ group.delete()
+ error_lines = topo.standalone.ds_error_log.match('.*auto-membership-plugin - automember_update_member_value - group .default or target. does not exist .%s.$' % group.dn)
+ assert (len(error_lines) == 1)
+ finally:
+ user_1.delete()
+ topo.standalone.setLogLevel(0)
+
+def test_delete_target_group(automember_fixture, topo):
+ """If memberof is enabld and a user became member of target group
+ because of automember rule then delete the target group should succeeds
+
+ :id: bf5745e3-3de8-485d-8a68-e2fd460ce1cb
+ :setup: Standalone instance, enabled Auto Membership Plugin
+ :steps:
+ 1. Recreate the default group if it was deleted before
+ 2. Create a target group (using regex)
+ 3. Create a target group automember rule (regex)
+ 4. Enable memberof plugin
+ 5. Create a user that goes into the target group
+ 6. Assert that the user is member of the target group
+ 7. Delete the target group
+ 8. Check automember skipped the regex automember rule because target group did not exist
+ :expectedresults:
+ 1. Should be success
+ 2. Should be success
+ 3. Should be success
+ 4. Should be success
+ 5. Should be success
+ 6. Should be success
+ 7. Should be success
+ 8. Should be success
+ """
+
+ (group, automembers, automember) = automember_fixture
+
+ # default group that may have been deleted in previous tests
+ try:
+ groups = Groups(topo.standalone, DEFAULT_SUFFIX)
+ group = groups.create(properties={'cn': 'testgroup'})
+ except:
+ pass
+
+ # target group that will receive regex automember
+ groups = Groups(topo.standalone, DEFAULT_SUFFIX)
+ group_regex = groups.create(properties={'cn': 'testgroup_regex'})
+
+ # regex automember definition
+ automember_regex_prop = {
+ 'cn': 'automember regex',
+ 'autoMemberTargetGroup': group_regex.dn,
+ 'autoMemberInclusiveRegex': 'uid=.*1',
+ }
+ automember_regex_dn = 'cn=automember regex, %s' % automember.dn
+ automember_regexes = AutoMembershipRegexRule(topo.standalone, automember_regex_dn)
+ automember_regex = automember_regexes.create(properties=automember_regex_prop)
+
+ from lib389.plugins import MemberOfPlugin
+ memberof = MemberOfPlugin(topo.standalone)
+ memberof.enable()
+
+ topo.standalone.restart()
+ topo.standalone.setLogLevel(65536)
+
+ # create a user that goes into the target group but not in the default group
+ users = UserAccounts(topo.standalone, DEFAULT_SUFFIX)
+ user_1 = users.create_test_user(uid=1)
+
+ try:
+ assert group_regex.is_member(user_1.dn)
+ assert not group.is_member(user_1.dn)
+
+ # delete that target filter group
+ group_regex.delete()
+ error_lines = topo.standalone.ds_error_log.match('.*auto-membership-plugin - automember_update_member_value - group .default or target. does not exist .%s.$' % group_regex.dn)
+ # one line for default group and one for target group
+ assert (len(error_lines) == 1)
+ finally:
+ user_1.delete()
+ topo.standalone.setLogLevel(0)
diff --git a/ldap/servers/plugins/automember/automember.c b/ldap/servers/plugins/automember/automember.c
index abd6df8..c7b83e8 100644
--- a/ldap/servers/plugins/automember/automember.c
+++ b/ldap/servers/plugins/automember/automember.c
@@ -1637,6 +1637,29 @@ automember_update_member_value(Slapi_Entry *member_e, const char *group_dn, char
char *member_value = NULL;
int freeit = 0;
int rc = 0;
+ Slapi_DN *group_sdn;
+ Slapi_Entry *group_entry = NULL;
+
+ /* First thing check that the group still exists */
+ group_sdn = slapi_sdn_new_dn_byval(group_dn);
+ rc = slapi_search_internal_get_entry(group_sdn, NULL, &group_entry, automember_get_plugin_id());
+ slapi_sdn_free(&group_sdn);
+ if (rc != LDAP_SUCCESS || group_entry == NULL) {
+ if (rc == LDAP_NO_SUCH_OBJECT) {
+ /* the automember group (default or target) does not exist, just skip this definition */
+ slapi_log_err(SLAPI_LOG_PLUGIN, AUTOMEMBER_PLUGIN_SUBSYSTEM,
+ "automember_update_member_value - group (default or target) does not exist (%s)\n",
+ group_dn);
+ rc = 0;
+ } else {
+ slapi_log_err(SLAPI_LOG_ERR, AUTOMEMBER_PLUGIN_SUBSYSTEM,
+ "automember_update_member_value - group (default or target) can not be retrieved (%s) err=%d\n",
+ group_dn, rc);
+ }
+ slapi_entry_free(group_entry);
+ return rc;
+ }
+ slapi_entry_free(group_entry);
/* If grouping_value is dn, we need to fetch the dn instead. */
if (slapi_attr_type_cmp(grouping_value, "dn", SLAPI_TYPE_CMP_EXACT) == 0) {
--
To stop receiving notification emails like this one, please contact
the administrator of this repository.
[389-ds-base] branch 389-ds-base-1.4.0 updated: Ticket 49873: (cont) Contention on virtual attribute lookup
by pagure@pagure.io
This is an automated email from the git hooks/post-receive script.
tbordaz pushed a commit to branch 389-ds-base-1.4.0
in repository 389-ds-base.
The following commit(s) were added to refs/heads/389-ds-base-1.4.0 by this push:
new b998fed Ticket 49873: (cont) Contention on virtual attribute lookup
b998fed is described below
commit b998fed9cb1123849486ff897c9b53fe9979cfdb
Author: Thierry Bordaz <tbordaz(a)redhat.com>
AuthorDate: Wed Mar 6 16:07:53 2019 +0100
Ticket 49873: (cont) Contention on virtual attribute lookup
Bug Description:
The previous fix was incomplete.
It created the thread private counter before the fork.
The deamon process was not inheriting it.
There is a possiblity that an callback of an internal search
tries to update the map. (cos thread monitoring cos definition)
In such case the RW lock was first acquired in read at the top level
of the internal search, then later the callback try to acquire it in write.
this created a deadlock
It stored in in private counter a value (int) rather than the address of
of the value (int*).
Fix Description:
Create the thread-private counter after the daemon is created.
In addition, when acquiring the lock in write, if the lock was already acquired
at the top level (in read), release the lock and reset the counter, then acquire
the lock in write.
Conversely, when releasing the lock in read, if the lock was not acquired in read,
assume it was acquired in write and do nothing.
https://pagure.io/389-ds-base/issue/49873
Reviewed by: Mark Reynolds, William Brown (thanks !!)
Platforms tested: F30
Flag Day: no
Doc impact: no
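The locking protocol described above can be modeled without NSPR at all. This is a toy sketch under stated assumptions: `rw_readers`/`rw_writer` stand in for the real `slapi_rwlock` state, and `nb_acquired` plays the role of the thread-private counter (here a plain global, since the model is single-threaded). It shows the three rules: only the first nested rdlock takes the lock, only the last unlock releases it, and wrlock drops a read hold first to avoid self-deadlock.

```c
#include <assert.h>

static int rw_readers = 0, rw_writer = 0;  /* fake rwlock state */
static int nb_acquired = 0;                /* would be thread-private */

static void vattr_rdlock_sketch(void)
{
    if (nb_acquired == 0)
        rw_readers++;          /* only the first nested call takes the lock */
    nb_acquired++;
}

static void vattr_rd_unlock_sketch(void)
{
    if (nb_acquired >= 1 && --nb_acquired == 0)
        rw_readers--;          /* only the last nested call releases it */
    /* nb_acquired == 0: lock was re-taken in write meanwhile; do nothing */
}

static void vattr_wrlock_sketch(void)
{
    if (nb_acquired) {         /* held in read by this thread: drop it */
        rw_readers--;
        nb_acquired = 0;
    }
    rw_writer = 1;             /* now safe to take the write lock */
}
```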
---
ldap/servers/slapd/connection.c | 1 -
ldap/servers/slapd/main.c | 4 ++
ldap/servers/slapd/opshared.c | 2 +-
ldap/servers/slapd/proto-slap.h | 6 +-
ldap/servers/slapd/psearch.c | 1 -
ldap/servers/slapd/vattr.c | 126 +++++++++++++++++++++++++++++-----------
6 files changed, 102 insertions(+), 38 deletions(-)
diff --git a/ldap/servers/slapd/connection.c b/ldap/servers/slapd/connection.c
index fcc46cd..8b88568 100644
--- a/ldap/servers/slapd/connection.c
+++ b/ldap/servers/slapd/connection.c
@@ -1509,7 +1509,6 @@ connection_threadmain()
long bypasspollcnt = 0;
enable_nunc_stans = config_get_enable_nunc_stans();
- vattr_global_lock_init();
#if defined(hpux)
/* Arrange to ignore SIGPIPE signals. */
SIGNAL(SIGPIPE, SIG_IGN);
diff --git a/ldap/servers/slapd/main.c b/ldap/servers/slapd/main.c
index 185ba90..5a86e2e 100644
--- a/ldap/servers/slapd/main.c
+++ b/ldap/servers/slapd/main.c
@@ -950,6 +950,10 @@ main(int argc, char **argv)
return_value = 1;
goto cleanup;
}
+ /* The thread private counter needs to be allocated after the fork
+ * it is not inherited from parent process
+ */
+ vattr_global_lock_create();
/*
* Create our thread pool here for tasks to utilise.
diff --git a/ldap/servers/slapd/opshared.c b/ldap/servers/slapd/opshared.c
index 8b895a1..dd69173 100644
--- a/ldap/servers/slapd/opshared.c
+++ b/ldap/servers/slapd/opshared.c
@@ -987,7 +987,7 @@ free_and_return:
slapi_be_Unlock(be_single);
}
if (vattr_lock_acquired) {
- vattr_unlock();
+ vattr_rd_unlock();
}
free_and_return_nolock:
diff --git a/ldap/servers/slapd/proto-slap.h b/ldap/servers/slapd/proto-slap.h
index 2029f41..f87c747 100644
--- a/ldap/servers/slapd/proto-slap.h
+++ b/ldap/servers/slapd/proto-slap.h
@@ -1419,9 +1419,11 @@ void subentry_create_filter(Slapi_Filter **filter);
* vattr.c
*/
void vattr_init(void);
-void vattr_global_lock_init(void);
+void vattr_global_lock_create(void);
void vattr_rdlock();
-void vattr_unlock();
+void vattr_rd_unlock();
+void vattr_wrlock();
+void vattr_wr_unlock();
void vattr_cleanup(void);
/*
diff --git a/ldap/servers/slapd/psearch.c b/ldap/servers/slapd/psearch.c
index e7b97a7..8ad268a 100644
--- a/ldap/servers/slapd/psearch.c
+++ b/ldap/servers/slapd/psearch.c
@@ -267,7 +267,6 @@ ps_send_results(void *arg)
Operation *pb_op = NULL;
g_incr_active_threadcnt();
- vattr_global_lock_init();
slapi_pblock_get(ps->ps_pblock, SLAPI_CONNECTION, &pb_conn);
slapi_pblock_get(ps->ps_pblock, SLAPI_OPERATION, &pb_op);
diff --git a/ldap/servers/slapd/vattr.c b/ldap/servers/slapd/vattr.c
index 155afca..ce63f50 100644
--- a/ldap/servers/slapd/vattr.c
+++ b/ldap/servers/slapd/vattr.c
@@ -119,20 +119,45 @@ void
vattr_init()
{
statechange_api = 0;
- PR_NewThreadPrivateIndex(&thread_private_global_vattr_lock, NULL);
vattr_map_create();
#ifdef VATTR_TEST_CODE
vattr_basic_sp_init();
#endif
}
-
+/* Create a private variable for each individual thread of the current process */
void
-vattr_global_lock_init()
+vattr_global_lock_create()
+{
+ if (PR_NewThreadPrivateIndex(&thread_private_global_vattr_lock, NULL) != PR_SUCCESS) {
+ slapi_log_err(SLAPI_LOG_ALERT,
+ "vattr_global_lock_create", "Failure to create global lock for virtual attribute !\n");
+ PR_ASSERT(0);
+ }
+}
+static int
+global_vattr_lock_get_acquired_count()
{
- if (thread_private_global_vattr_lock) {
- PR_SetThreadPrivate(thread_private_global_vattr_lock, (void *) 0);
+ int *nb_acquired;
+ nb_acquired = (int *) PR_GetThreadPrivate(thread_private_global_vattr_lock);
+ if (nb_acquired == NULL) {
+ /* if it was not initialized set it to zero */
+ nb_acquired = (int *) slapi_ch_calloc(1, sizeof(int));
+ PR_SetThreadPrivate(thread_private_global_vattr_lock, (void *) nb_acquired);
}
+ return *nb_acquired;
+}
+static void
+global_vattr_lock_set_acquired_count(int nb_acquired)
+{
+ int *val;
+ val = (int *) PR_GetThreadPrivate(thread_private_global_vattr_lock);
+ if (val == NULL) {
+ /* if it was not initialized set it to zero */
+ val = (int *) slapi_ch_calloc(1, sizeof(int));
+ }
+ *val = nb_acquired;
+ PR_SetThreadPrivate(thread_private_global_vattr_lock, (void *) val);
}
/* The map lock can be acquired recursively. So only the first rdlock
* will acquire the lock.
@@ -142,18 +167,15 @@ vattr_global_lock_init()
void
vattr_rdlock()
{
- if (thread_private_global_vattr_lock) {
- int nb_acquire = (int) PR_GetThreadPrivate(thread_private_global_vattr_lock);
+ int nb_acquire = global_vattr_lock_get_acquired_count();
- if (nb_acquire == 0) {
- /* The lock was not held just acquire it */
- slapi_rwlock_rdlock(the_map->lock);
- }
- nb_acquire++;
- PR_SetThreadPrivate(thread_private_global_vattr_lock, (void *) nb_acquire);
- } else {
+ if (nb_acquire == 0) {
+ /* The lock was not held just acquire it */
slapi_rwlock_rdlock(the_map->lock);
}
+ nb_acquire++;
+ global_vattr_lock_set_acquired_count(nb_acquire);
+
}
/* The map lock can be acquired recursively. So only the last unlock
* will release the lock.
@@ -161,25 +183,63 @@ vattr_rdlock()
* later calls during the operation processing will just increase/decrease a counter.
*/
void
-vattr_unlock()
+vattr_rd_unlock()
{
- if (thread_private_global_vattr_lock) {
- int nb_acquire = (int) PR_GetThreadPrivate(thread_private_global_vattr_lock);
+ int nb_acquire = global_vattr_lock_get_acquired_count();
- if (nb_acquire >= 1) {
- nb_acquire--;
- if (nb_acquire == 0) {
- slapi_rwlock_unlock(the_map->lock);
- }
- PR_SetThreadPrivate(thread_private_global_vattr_lock, (void *) nb_acquire);
- } else {
- slapi_log_err(SLAPI_LOG_CRIT,
- "vattr_unlock", "The lock was not acquire. We should not be here\n");
- PR_ASSERT(nb_acquire >= 1);
+ if (nb_acquire >= 1) {
+ nb_acquire--;
+ if (nb_acquire == 0) {
+ slapi_rwlock_unlock(the_map->lock);
}
+ global_vattr_lock_set_acquired_count(nb_acquire);
} else {
+ /* this is likely the consequence of lock acquire in read during an internal search
+ * but the search callback updated the map and release the readlock and acquired
+ * it in write.
+ * So after the update the lock was no longer held but when completing the internal
+ * search we release the global read lock, that now has nothing to do
+ */
+ slapi_log_err(SLAPI_LOG_INFO,
+ "vattr_rd_unlock", "vattr lock no longer acquired in read.\n");
+ }
+}
+
+/* The map lock is acquired in write (updating the map)
+ * It exists a possibility that lock is acquired in write while it is already
+ * hold in read by this thread (internal search with updating callback)
+ * In such situation, the we must abandon the read global lock and acquire in write
+ */
+void
+vattr_wrlock()
+{
+ int nb_read_acquire = global_vattr_lock_get_acquired_count();
+
+ if (nb_read_acquire) {
+ /* The lock was acquired in read but we need it in write
+ * release it and set the global vattr_lock counter to 0
+ */
slapi_rwlock_unlock(the_map->lock);
+ global_vattr_lock_set_acquired_count(0);
}
+ slapi_rwlock_wrlock(the_map->lock);
+}
+/* The map lock is release from a write write (updating the map)
+ */
+void
+vattr_wr_unlock()
+{
+ int nb_read_acquire = global_vattr_lock_get_acquired_count();
+
+ if (nb_read_acquire) {
+ /* The lock being acquired in write, the private thread counter
+ * (that count the number of time it was acquired in read) should be 0
+ */
+ slapi_log_err(SLAPI_LOG_INFO,
+ "vattr_unlock", "The lock was acquired in write. We should not be here\n");
+ PR_ASSERT(nb_read_acquire == 0);
+ }
+ slapi_rwlock_unlock(the_map->lock);
}
/* Called on server shutdown, free all structures, inform service providers that we're going down etc */
void
@@ -1999,7 +2059,7 @@ vattr_map_lookup(const char *type_to_find, vattr_map_entry **result)
*result = (vattr_map_entry *)PL_HashTableLookupConst(the_map->hashtable,
(void *)basetype);
/* Release ze lock */
- vattr_unlock();
+ vattr_rd_unlock();
if (tmp) {
slapi_ch_free_string(&tmp);
@@ -2018,13 +2078,13 @@ vattr_map_insert(vattr_map_entry *vae)
{
PR_ASSERT(the_map);
/* Get the writer lock */
- slapi_rwlock_wrlock(the_map->lock);
+ vattr_wrlock();
/* Insert the thing */
/* It's illegal to call this function if the entry is already there */
PR_ASSERT(NULL == PL_HashTableLookupConst(the_map->hashtable, (void *)vae->type_name));
PL_HashTableAdd(the_map->hashtable, (void *)vae->type_name, (void *)vae);
/* Unlock and we're done */
- slapi_rwlock_unlock(the_map->lock);
+ vattr_wr_unlock();
return 0;
}
@@ -2161,13 +2221,13 @@ schema_changed_callback(Slapi_Entry *e __attribute__((unused)),
void *caller_data __attribute__((unused)))
{
/* Get the writer lock */
- slapi_rwlock_wrlock(the_map->lock);
+ vattr_wrlock();
/* go through the list */
PL_HashTableEnumerateEntries(the_map->hashtable, vattr_map_entry_rebuild_schema, 0);
/* Unlock and we're done */
- slapi_rwlock_unlock(the_map->lock);
+ vattr_wr_unlock();
}
@@ -2204,7 +2264,7 @@ slapi_vattr_schema_check_type(Slapi_Entry *e, char *type)
obj = obj->pNext;
}
- vattr_unlock();
+ vattr_rd_unlock();
}
slapi_valueset_free(vs);
--
To stop receiving notification emails like this one, please contact
the administrator of this repository.
5 years, 2 months
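The read-to-write upgrade scheme in the vattr patch above (a per-thread counter of read acquisitions, consulted by vattr_wrlock() and vattr_wr_unlock()) can be sketched in Python. This is a simplified, illustrative model: `UpgradableRWLock` and its method names are invented for the sketch, the underlying lock is exclusive rather than a true reader/writer lock, and none of this is the slapi API.

```python
import threading

class UpgradableRWLock:
    """Illustrative model of the vattr map locking scheme: each thread
    counts its own read acquisitions so that a write request can first
    drop the thread's read hold, avoiding self-deadlock on upgrade.
    Simplification: the underlying lock is exclusive, not shared."""

    def __init__(self):
        self._lock = threading.Lock()    # stand-in for the_map->lock
        self._reads = threading.local()  # per-thread acquire counter

    def _count(self):
        return getattr(self._reads, "n", 0)

    def rd_lock(self):
        if self._count() == 0:
            self._lock.acquire()
        self._reads.n = self._count() + 1

    def rd_unlock(self):
        self._reads.n = self._count() - 1
        if self._count() == 0:
            self._lock.release()

    def wr_lock(self):
        # Mirrors vattr_wrlock(): if this thread holds the lock in read,
        # release it and zero the counter before taking it in write.
        if self._count() > 0:
            self._lock.release()
            self._reads.n = 0
        self._lock.acquire()

    def wr_unlock(self):
        # Mirrors vattr_wr_unlock(): a writer must hold no read count.
        assert self._count() == 0
        self._lock.release()
```

A thread that already holds the lock in read can now call `wr_lock()` without deadlocking on its own read hold, which is the failure mode the counter exists to prevent.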
[389-ds-base] branch 389-ds-base-1.3.9 updated: Ticket 50260 - backend txn plugins can corrupt entry cache
by pagure@pagure.io
This is an automated email from the git hooks/post-receive script.
mreynolds pushed a commit to branch 389-ds-base-1.3.9
in repository 389-ds-base.
The following commit(s) were added to refs/heads/389-ds-base-1.3.9 by this push:
new 679f0fe Ticket 50260 - backend txn plugins can corrupt entry cache
679f0fe is described below
commit 679f0fe1af9b85c1d8565189739d9813f78ed398
Author: Mark Reynolds <mreynolds(a)redhat.com>
AuthorDate: Thu Mar 7 15:38:25 2019 -0500
Ticket 50260 - backend txn plugins can corrupt entry cache
Bug Description: If a nested backend txn plugin fails, any updates
it made that went into the entry cache still persist
after the database transaction is aborted.
Fix Description: In order to be sure the entry cache is not corrupted
after a backend txn plugin failure we need to flush
all the cache entries that were added to the cache
after the parent operation was started.
To do this we record the start time of the original
(or parent) operation, and we record the time any entry
is added to the cache. Then on failure we do a comparison
and remove the entry from the cache if it's not in use.
If it is in use we add an "invalid" flag which triggers
the entry to be removed when the cache entry is returned
by the owner.
https://pagure.io/389-ds-base/issue/50260
CI tested and ASAN approved.
Reviewed by: firstyear, tbordaz, and lkrispen (Thanks!!!)
(cherry picked from commit 7ba8a80cfbaed9f6d727f98ed8c284943b3295e1)
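The shape of the fix (timestamp entries as they enter the cache, timestamp the parent operation, and on abort flush anything newer) can be modeled in a few lines of Python. A sketch under stated assumptions: `TxnCache` and the counter "clock" are invented for illustration, the server uses a monotonic hr clock instead, and refcounting and the INVALID flag are omitted here.

```python
import itertools

_clock = itertools.count()

def now():
    # deterministic stand-in for the monotonic slapi_current_rel_time_hr()
    return next(_clock)

class TxnCache:
    """Model of revert_cache(): every entry is stamped when added, and
    revert(start) drops entries stamped at or after the parent
    operation's start time."""

    def __init__(self):
        self._entries = {}  # id -> (entry, create_time)

    def add(self, eid, entry):
        # add_hash() stamps ep_create_time at insertion time
        self._entries[eid] = (entry, now())

    def get(self, eid):
        rec = self._entries.get(eid)
        return rec[0] if rec else None

    def revert(self, start_time):
        # flush everything added after the failed parent op began
        stale = [eid for eid, (_, t) in self._entries.items()
                 if t >= start_time]
        for eid in stale:
            del self._entries[eid]
```

Entries that predate the parent operation survive the revert; anything a nested betxn plugin slipped into the cache during the aborted transaction is flushed.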
---
dirsrvtests/tests/suites/betxns/betxn_test.py | 114 +++++++++++++++++++++++--
ldap/servers/slapd/back-ldbm/back-ldbm.h | 68 ++++++++-------
ldap/servers/slapd/back-ldbm/backentry.c | 3 +-
ldap/servers/slapd/back-ldbm/cache.c | 112 ++++++++++++++++++++++--
ldap/servers/slapd/back-ldbm/ldbm_add.c | 14 +++
ldap/servers/slapd/back-ldbm/ldbm_delete.c | 14 +++
ldap/servers/slapd/back-ldbm/ldbm_modify.c | 14 +++
ldap/servers/slapd/back-ldbm/ldbm_modrdn.c | 32 ++++---
ldap/servers/slapd/back-ldbm/proto-back-ldbm.h | 1 +
ldap/servers/slapd/slapi-plugin.h | 6 ++
10 files changed, 321 insertions(+), 57 deletions(-)
diff --git a/dirsrvtests/tests/suites/betxns/betxn_test.py b/dirsrvtests/tests/suites/betxns/betxn_test.py
index 48181a9..f03fb93 100644
--- a/dirsrvtests/tests/suites/betxns/betxn_test.py
+++ b/dirsrvtests/tests/suites/betxns/betxn_test.py
@@ -7,12 +7,10 @@
# --- END COPYRIGHT BLOCK ---
#
import pytest
-import six
import ldap
from lib389.tasks import *
from lib389.utils import *
from lib389.topologies import topology_st
-
from lib389._constants import DEFAULT_SUFFIX, PLUGIN_7_BIT_CHECK, PLUGIN_ATTR_UNIQUENESS, PLUGIN_MEMBER_OF
logging.getLogger(__name__).setLevel(logging.DEBUG)
@@ -249,8 +247,8 @@ def test_betxn_memberof(topology_st, dynamic_plugins):
log.info('test_betxn_memberof: PASSED')
-def test_betxn_modrdn_memberof(topology_st):
- """Test modrdn operartions and memberOf
+def test_betxn_modrdn_memberof_cache_corruption(topology_st):
+ """Test modrdn operations and memberOf
:id: 70d0b96e-b693-4bf7-bbf5-102a66ac5994
@@ -285,18 +283,18 @@ def test_betxn_modrdn_memberof(topology_st):
# Create user and add it to group
users = UserAccounts(topology_st.standalone, basedn=DEFAULT_SUFFIX)
- user = users.create(properties=TEST_USER_PROPERTIES)
+ user = users.ensure_state(properties=TEST_USER_PROPERTIES)
if not ds_is_older('1.3.7'):
user.remove('objectClass', 'nsMemberOf')
group.add_member(user.dn)
# Attempt modrdn that should fail, but the original entry should stay in the cache
- with pytest.raises(ldap.OBJECTCLASS_VIOLATION):
+ with pytest.raises(ldap.OBJECT_CLASS_VIOLATION):
group.rename('cn=group_to_people', newsuperior=peoplebase)
# Should fail, but not with NO_SUCH_OBJECT as the original entry should still be in the cache
- with pytest.raises(ldap.OBJECTCLASS_VIOLATION):
+ with pytest.raises(ldap.OBJECT_CLASS_VIOLATION):
group.rename('cn=group_to_people', newsuperior=peoplebase)
#
@@ -305,6 +303,108 @@ def test_betxn_modrdn_memberof(topology_st):
log.info('test_betxn_modrdn_memberof: PASSED')
+def test_ri_and_mep_cache_corruption(topology_st):
+ """Test RI plugin aborts change after MEP plugin fails.
+ This is really testing the entry cache for corruption
+
+ :id: 70d0b96e-b693-4bf7-bbf5-102a66ac5995
+
+ :setup: Standalone instance
+
+ :steps: 1. Enable and configure mep and ri plugins
+ 2. Add user and add it to a group
+ 3. Disable MEP plugin and remove MEP group
+ 4. Delete user
+ 5. Check that user is still a member of the group
+
+ :expectedresults:
+ 1. Success
+ 2. Success
+ 3. Success
+ 4. It fails with NO_SUCH_OBJECT
+ 5. Success
+
+ """
+ # Start plugins
+ topology_st.standalone.config.set('nsslapd-dynamic-plugins', 'on')
+ mep_plugin = ManagedEntriesPlugin(topology_st.standalone)
+ mep_plugin.enable()
+ ri_plugin = ReferentialIntegrityPlugin(topology_st.standalone)
+ ri_plugin.enable()
+
+ # Add our org units
+ ous = OrganizationalUnits(topology_st.standalone, DEFAULT_SUFFIX)
+ ou_people = ous.create(properties={'ou': 'managed_people'})
+ ou_groups = ous.create(properties={'ou': 'managed_groups'})
+
+ # Configure MEP
+ mep_templates = MEPTemplates(topology_st.standalone, DEFAULT_SUFFIX)
+ mep_template1 = mep_templates.create(properties={
+ 'cn': 'MEP template',
+ 'mepRDNAttr': 'cn',
+ 'mepStaticAttr': 'objectclass: posixGroup|objectclass: extensibleObject'.split('|'),
+ 'mepMappedAttr': 'cn: $cn|uid: $cn|gidNumber: $uidNumber'.split('|')
+ })
+ mep_configs = MEPConfigs(topology_st.standalone)
+ mep_configs.create(properties={'cn': 'config',
+ 'originScope': ou_people.dn,
+ 'originFilter': 'objectclass=posixAccount',
+ 'managedBase': ou_groups.dn,
+ 'managedTemplate': mep_template1.dn})
+
+ # Add an entry that meets the MEP scope
+ users = UserAccounts(topology_st.standalone, DEFAULT_SUFFIX,
+ rdn='ou={}'.format(ou_people.rdn))
+ user = users.create(properties={
+ 'uid': 'test-user1',
+ 'cn': 'test-user',
+ 'sn': 'test-user',
+ 'uidNumber': '10011',
+ 'gidNumber': '20011',
+ 'homeDirectory': '/home/test-user1'
+ })
+
+ # Add group
+ groups = Groups(topology_st.standalone, DEFAULT_SUFFIX)
+ user_group = groups.ensure_state(properties={'cn': 'group', 'member': user.dn})
+
+ # Check if a managed group entry was created
+ mep_group = Group(topology_st.standalone, dn='cn={},{}'.format(user.rdn, ou_groups.dn))
+ if not mep_group.exists():
+ log.fatal("MEP group was not created for the user")
+ assert False
+
+ # Mess with MEP so it fails
+ mep_plugin.disable()
+ mep_group.delete()
+ mep_plugin.enable()
+
+ # Add another group to verify the entry cache is not corrupted
+ test_group = groups.create(properties={'cn': 'test_group'})
+
+ # Delete user, should fail, and user should still be a member
+ with pytest.raises(ldap.NO_SUCH_OBJECT):
+ user.delete()
+
+ # Verify membership is intact
+ if not user_group.is_member(user.dn):
+ log.fatal("Member was incorrectly removed from the group!! Or so it seems")
+
+ # Restart server and test again in case this was a cache issue
+ topology_st.standalone.restart()
+ if user_group.is_member(user.dn):
+ log.info("The entry cache was corrupted")
+ assert False
+
+ # Verify test group is still found in entry cache by deleting it
+ test_group.delete()
+
+ # Success
+ log.info("Test PASSED")
+
+
if __name__ == '__main__':
# Run isolated
# -s for DEBUG mode
diff --git a/ldap/servers/slapd/back-ldbm/back-ldbm.h b/ldap/servers/slapd/back-ldbm/back-ldbm.h
index 4727961..6cac605 100644
--- a/ldap/servers/slapd/back-ldbm/back-ldbm.h
+++ b/ldap/servers/slapd/back-ldbm/back-ldbm.h
@@ -312,48 +312,52 @@ typedef struct
struct backcommon
{
- int ep_type; /* to distinguish backdn from backentry */
- struct backcommon *ep_lrunext; /* for the cache */
- struct backcommon *ep_lruprev; /* for the cache */
- ID ep_id; /* entry id */
- char ep_state; /* state in the cache */
-#define ENTRY_STATE_DELETED 0x1 /* entry is marked as deleted */
-#define ENTRY_STATE_CREATING 0x2 /* entry is being created; don't touch it */
-#define ENTRY_STATE_NOTINCACHE 0x4 /* cache_add failed; not in the cache */
- int ep_refcnt; /* entry reference cnt */
- size_t ep_size; /* for cache tracking */
+ int32_t ep_type; /* to distinguish backdn from backentry */
+ struct backcommon *ep_lrunext; /* for the cache */
+ struct backcommon *ep_lruprev; /* for the cache */
+ ID ep_id; /* entry id */
+ uint8_t ep_state; /* state in the cache */
+#define ENTRY_STATE_DELETED 0x1 /* entry is marked as deleted */
+#define ENTRY_STATE_CREATING 0x2 /* entry is being created; don't touch it */
+#define ENTRY_STATE_NOTINCACHE 0x4 /* cache_add failed; not in the cache */
+#define ENTRY_STATE_INVALID 0x8 /* cache entry is invalid and needs to be removed */
+ int32_t ep_refcnt; /* entry reference cnt */
+ size_t ep_size; /* for cache tracking */
+ struct timespec ep_create_time; /* the time the entry was added to the cache */
};
-/* From ep_type through ep_size MUST be identical to backcommon */
+/* From ep_type through ep_create_time MUST be identical to backcommon */
struct backentry
{
- int ep_type; /* to distinguish backdn from backentry */
- struct backcommon *ep_lrunext; /* for the cache */
- struct backcommon *ep_lruprev; /* for the cache */
- ID ep_id; /* entry id */
- char ep_state; /* state in the cache */
- int ep_refcnt; /* entry reference cnt */
- size_t ep_size; /* for cache tracking */
- Slapi_Entry *ep_entry; /* real entry */
+ int32_t ep_type; /* to distinguish backdn from backentry */
+ struct backcommon *ep_lrunext; /* for the cache */
+ struct backcommon *ep_lruprev; /* for the cache */
+ ID ep_id; /* entry id */
+ uint8_t ep_state; /* state in the cache */
+ int32_t ep_refcnt; /* entry reference cnt */
+ size_t ep_size; /* for cache tracking */
+ struct timespec ep_create_time; /* the time the entry was added to the cache */
+ Slapi_Entry *ep_entry; /* real entry */
Slapi_Entry *ep_vlventry;
- void *ep_dn_link; /* linkage for the 3 hash */
- void *ep_id_link; /* tables used for */
- void *ep_uuid_link; /* looking up entries */
- PRMonitor *ep_mutexp; /* protection for mods; make it reentrant */
+ void *ep_dn_link; /* linkage for the 3 hash */
+ void *ep_id_link; /* tables used for */
+ void *ep_uuid_link; /* looking up entries */
+ PRMonitor *ep_mutexp; /* protection for mods; make it reentrant */
};
-/* From ep_type through ep_size MUST be identical to backcommon */
+/* From ep_type through ep_create_time MUST be identical to backcommon */
struct backdn
{
- int ep_type; /* to distinguish backdn from backentry */
- struct backcommon *ep_lrunext; /* for the cache */
- struct backcommon *ep_lruprev; /* for the cache */
- ID ep_id; /* entry id */
- char ep_state; /* state in the cache; share ENTRY_STATE_* */
- int ep_refcnt; /* entry reference cnt */
- size_t ep_size; /* for cache tracking */
+ int32_t ep_type; /* to distinguish backdn from backentry */
+ struct backcommon *ep_lrunext; /* for the cache */
+ struct backcommon *ep_lruprev; /* for the cache */
+ ID ep_id; /* entry id */
+ uint8_t ep_state; /* state in the cache; share ENTRY_STATE_* */
+ int32_t ep_refcnt; /* entry reference cnt */
+ size_t ep_size; /* for cache tracking */
+ struct timespec ep_create_time; /* the time the entry was added to the cache */
Slapi_DN *dn_sdn;
- void *dn_id_link; /* for hash table */
+ void *dn_id_link; /* for hash table */
};
/* for the in-core cache of entries */
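The repeated "From ep_type through ep_create_time MUST be identical to backcommon" comments encode C's common-initial-sequence idiom: a backentry or backdn pointer may be treated as a backcommon pointer only while the leading members line up, which is why the new ep_create_time member had to be added to all three structs. A rough ctypes illustration (field types abbreviated and names shortened for the sketch; this is not the real layout):

```python
import ctypes

class BackCommon(ctypes.Structure):
    # abbreviated stand-in for struct backcommon
    _fields_ = [
        ("ep_type", ctypes.c_int32),
        ("ep_id", ctypes.c_uint32),
        ("ep_state", ctypes.c_uint8),
        ("ep_refcnt", ctypes.c_int32),
        ("ep_create_time_sec", ctypes.c_int64),
    ]

class BackEntry(ctypes.Structure):
    # The leading fields must be identical to BackCommon so that a
    # cast from (BackEntry *) to (BackCommon *) reads the right offsets.
    _fields_ = BackCommon._fields_ + [
        ("ep_entry", ctypes.c_void_p),
    ]

e = BackEntry(ep_type=1, ep_id=42, ep_state=0x8, ep_refcnt=0,
              ep_create_time_sec=100)
common = ctypes.cast(ctypes.pointer(e),
                     ctypes.POINTER(BackCommon)).contents
```

If a member were added to backcommon but not to the other two structs, the cast in code like flush_hash() would silently read garbage, which is exactly what the MUST comments guard against.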
diff --git a/ldap/servers/slapd/back-ldbm/backentry.c b/ldap/servers/slapd/back-ldbm/backentry.c
index f2fe780..972842b 100644
--- a/ldap/servers/slapd/back-ldbm/backentry.c
+++ b/ldap/servers/slapd/back-ldbm/backentry.c
@@ -23,7 +23,8 @@ backentry_free(struct backentry **bep)
return;
}
ep = *bep;
- PR_ASSERT(ep->ep_state & (ENTRY_STATE_DELETED | ENTRY_STATE_NOTINCACHE));
+
+ PR_ASSERT(ep->ep_state & (ENTRY_STATE_DELETED | ENTRY_STATE_NOTINCACHE | ENTRY_STATE_INVALID));
if (ep->ep_entry != NULL) {
slapi_entry_free(ep->ep_entry);
}
diff --git a/ldap/servers/slapd/back-ldbm/cache.c b/ldap/servers/slapd/back-ldbm/cache.c
index 86e1f7b..458d791 100644
--- a/ldap/servers/slapd/back-ldbm/cache.c
+++ b/ldap/servers/slapd/back-ldbm/cache.c
@@ -56,11 +56,14 @@
#define LOG(...)
#endif
-#define LRU_DETACH(cache, e) lru_detach((cache), (void *)(e))
+typedef enum {
+ ENTRY_CACHE,
+ DN_CACHE,
+} CacheType;
+#define LRU_DETACH(cache, e) lru_detach((cache), (void *)(e))
#define CACHE_LRU_HEAD(cache, type) ((type)((cache)->c_lruhead))
#define CACHE_LRU_TAIL(cache, type) ((type)((cache)->c_lrutail))
-
#define BACK_LRU_NEXT(entry, type) ((type)((entry)->ep_lrunext))
#define BACK_LRU_PREV(entry, type) ((type)((entry)->ep_lruprev))
@@ -185,6 +188,7 @@ new_hash(u_long size, u_long offset, HashFn hfn, HashTestFn tfn)
int
add_hash(Hashtable *ht, void *key, uint32_t keylen, void *entry, void **alt)
{
+ struct backcommon *back_entry = (struct backcommon *)entry;
u_long val, slot;
void *e;
@@ -202,6 +206,7 @@ add_hash(Hashtable *ht, void *key, uint32_t keylen, void *entry, void **alt)
e = HASH_NEXT(ht, e);
}
/* ok, it's not already there, so add it */
+ back_entry->ep_create_time = slapi_current_rel_time_hr();
HASH_NEXT(ht, entry) = ht->slot[slot];
ht->slot[slot] = entry;
return 1;
@@ -492,6 +497,89 @@ cache_make_hashes(struct cache *cache, int type)
}
}
+/*
+ * Helper function for flush_hash() to calculate if the entry should be
+ * removed from the cache.
+ */
+static int32_t
+flush_remove_entry(struct timespec *entry_time, struct timespec *start_time)
+{
+ struct timespec diff;
+
+ slapi_timespec_diff(entry_time, start_time, &diff);
+ if (diff.tv_sec >= 0) {
+ return 1;
+ } else {
+ return 0;
+ }
+}
+
+/*
+ * Flush all the cache entries that were added after the "start time"
+ * This is called when a backend transaction plugin fails, and we need
+ * to remove all the possible invalid entries in the cache.
+ *
+ * If the ref count is 0, we can remove it from the cache right away, but
+ * if the ref count is greater than 0, then the entry is currently in use.
+ * In the latter case we set the entry state to ENTRY_STATE_INVALID, and
+ * when the owning thread cache_returns() it, the cache entry is automatically
+ * removed so another thread cannot use/lock the invalid cache entry.
+ */
+static void
+flush_hash(struct cache *cache, struct timespec *start_time, int32_t type)
+{
+ void *e, *laste = NULL;
+ Hashtable *ht = cache->c_idtable;
+
+ cache_lock(cache);
+
+ for (size_t i = 0; i < ht->size; i++) {
+ e = ht->slot[i];
+ while (e) {
+ struct backcommon *entry = (struct backcommon *)e;
+ uint64_t remove_it = 0;
+ if (flush_remove_entry(&entry->ep_create_time, start_time)) {
+ /* Mark the entry to be removed */
+ slapi_log_err(SLAPI_LOG_CACHE, "flush_hash", "[%s] Removing entry id (%d)\n",
+ type ? "DN CACHE" : "ENTRY CACHE", entry->ep_id);
+ remove_it = 1;
+ }
+ laste = e;
+ e = HASH_NEXT(ht, e);
+
+ if (remove_it) {
+ /* since we have the cache lock we know we can trust refcnt */
+ entry->ep_state |= ENTRY_STATE_INVALID;
+ if (entry->ep_refcnt == 0) {
+ entry->ep_refcnt++;
+ lru_delete(cache, laste);
+ if (type == ENTRY_CACHE) {
+ entrycache_remove_int(cache, laste);
+ entrycache_return(cache, (struct backentry **)&laste);
+ } else {
+ dncache_remove_int(cache, laste);
+ dncache_return(cache, (struct backdn **)&laste);
+ }
+ } else {
+ /* Entry flagged for removal */
+ slapi_log_err(SLAPI_LOG_CACHE, "flush_hash",
+ "[%s] Flagging entry to be removed later: id (%d) refcnt: %d\n",
+ type ? "DN CACHE" : "ENTRY CACHE", entry->ep_id, entry->ep_refcnt);
+ }
+ }
+ }
+ }
+
+ cache_unlock(cache);
+}
+
+void
+revert_cache(ldbm_instance *inst, struct timespec *start_time)
+{
+ flush_hash(&inst->inst_cache, start_time, ENTRY_CACHE);
+ flush_hash(&inst->inst_dncache, start_time, DN_CACHE);
+}
+
/* initialize the cache */
int
cache_init(struct cache *cache, uint64_t maxsize, long maxentries, int type)
@@ -1142,7 +1230,7 @@ entrycache_return(struct cache *cache, struct backentry **bep)
} else {
ASSERT(e->ep_refcnt > 0);
if (!--e->ep_refcnt) {
- if (e->ep_state & ENTRY_STATE_DELETED) {
+ if (e->ep_state & (ENTRY_STATE_DELETED | ENTRY_STATE_INVALID)) {
const char *ndn = slapi_sdn_get_ndn(backentry_get_sdn(e));
if (ndn) {
/*
@@ -1154,6 +1242,13 @@ entrycache_return(struct cache *cache, struct backentry **bep)
LOG("entrycache_return -Failed to remove %s from dn table\n", ndn);
}
}
+ if (e->ep_state & ENTRY_STATE_INVALID) {
+ /* Remove it from the hash table before we free the back entry */
+ slapi_log_err(SLAPI_LOG_CACHE, "entrycache_return",
+ "Finally flushing invalid entry: %d (%s)\n",
+ e->ep_id, backentry_get_ndn(e));
+ entrycache_remove_int(cache, e);
+ }
backentry_free(bep);
} else {
lru_add(cache, e);
@@ -1535,7 +1630,7 @@ cache_lock_entry(struct cache *cache, struct backentry *e)
/* make sure entry hasn't been deleted now */
cache_lock(cache);
- if (e->ep_state & (ENTRY_STATE_DELETED | ENTRY_STATE_NOTINCACHE)) {
+ if (e->ep_state & (ENTRY_STATE_DELETED | ENTRY_STATE_NOTINCACHE | ENTRY_STATE_INVALID)) {
cache_unlock(cache);
PR_ExitMonitor(e->ep_mutexp);
LOG("<= cache_lock_entry (DELETED)\n");
@@ -1696,7 +1791,14 @@ dncache_return(struct cache *cache, struct backdn **bdn)
} else {
ASSERT((*bdn)->ep_refcnt > 0);
if (!--(*bdn)->ep_refcnt) {
- if ((*bdn)->ep_state & ENTRY_STATE_DELETED) {
+ if ((*bdn)->ep_state & (ENTRY_STATE_DELETED | ENTRY_STATE_INVALID)) {
+ if ((*bdn)->ep_state & ENTRY_STATE_INVALID) {
+ /* Remove it from the hash table before we free the back dn */
+ slapi_log_err(SLAPI_LOG_CACHE, "dncache_return",
+ "Finally flushing invalid entry: %d (%s)\n",
+ (*bdn)->ep_id, slapi_sdn_get_dn((*bdn)->dn_sdn));
+ dncache_remove_int(cache, (*bdn));
+ }
backdn_free(bdn);
} else {
lru_add(cache, (void *)*bdn);
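flush_hash() above cannot free an entry another thread still holds, so it only flags the entry and lets the owner's cache_return() complete the removal. A minimal Python model of that protocol follows; the class and method names are invented for the sketch, and the real code also manages LRU lists and separate entry/DN caches.

```python
ENTRY_STATE_INVALID = 0x8

class CacheEntry:
    def __init__(self, eid):
        self.eid = eid
        self.state = 0
        self.refcnt = 0

class Cache:
    """Sketch of the INVALID-flag handoff between flush_hash() and
    entrycache_return()/dncache_return()."""

    def __init__(self):
        self._table = {}

    def add(self, e):
        self._table[e.eid] = e

    def acquire(self, eid):
        e = self._table.get(eid)
        if e is not None:
            e.refcnt += 1
        return e

    def flush(self, should_remove):
        for e in list(self._table.values()):
            if should_remove(e):
                e.state |= ENTRY_STATE_INVALID
                if e.refcnt == 0:
                    # nobody holds it: safe to drop immediately
                    del self._table[e.eid]

    def cache_return(self, e):
        e.refcnt -= 1
        if e.refcnt == 0 and (e.state & ENTRY_STATE_INVALID):
            # deferred removal: the flag set by flush() fires here
            self._table.pop(e.eid, None)
```

The key property is that no new thread can ever lock an invalid entry, yet the owning thread's reference stays valid until it is returned.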
diff --git a/ldap/servers/slapd/back-ldbm/ldbm_add.c b/ldap/servers/slapd/back-ldbm/ldbm_add.c
index 32c8e71..aa5b59a 100644
--- a/ldap/servers/slapd/back-ldbm/ldbm_add.c
+++ b/ldap/servers/slapd/back-ldbm/ldbm_add.c
@@ -97,6 +97,8 @@ ldbm_back_add(Slapi_PBlock *pb)
PRUint64 conn_id;
int op_id;
int result_sent = 0;
+ int32_t parent_op = 0;
+ struct timespec parent_time;
if (slapi_pblock_get(pb, SLAPI_CONN_ID, &conn_id) < 0) {
conn_id = 0; /* connection is NULL */
@@ -147,6 +149,13 @@ ldbm_back_add(Slapi_PBlock *pb)
slapi_entry_delete_values(e, numsubordinates, NULL);
dblayer_txn_init(li, &txn);
+
+ if (txn.back_txn_txn == NULL) {
+ /* This is the parent operation, get the time */
+ parent_op = 1;
+ parent_time = slapi_current_rel_time_hr();
+ }
+
/* the calls to perform searches require the parent txn if any
so set txn to the parent_txn until we begin the child transaction */
if (parent_txn) {
@@ -1212,6 +1221,11 @@ ldbm_back_add(Slapi_PBlock *pb)
slapi_pblock_set(pb, SLAPI_PLUGIN_OPRETURN, ldap_result_code ? &ldap_result_code : &retval);
}
slapi_pblock_get(pb, SLAPI_PB_RESULT_TEXT, &ldap_result_message);
+
+ /* Revert the caches if this is the parent operation */
+ if (parent_op) {
+ revert_cache(inst, &parent_time);
+ }
goto error_return;
}
diff --git a/ldap/servers/slapd/back-ldbm/ldbm_delete.c b/ldap/servers/slapd/back-ldbm/ldbm_delete.c
index f5f6c1e..3f687eb 100644
--- a/ldap/servers/slapd/back-ldbm/ldbm_delete.c
+++ b/ldap/servers/slapd/back-ldbm/ldbm_delete.c
@@ -79,6 +79,8 @@ ldbm_back_delete(Slapi_PBlock *pb)
ID tomb_ep_id = 0;
int result_sent = 0;
Connection *pb_conn;
+ int32_t parent_op = 0;
+ struct timespec parent_time;
if (slapi_pblock_get(pb, SLAPI_CONN_ID, &conn_id) < 0) {
conn_id = 0; /* connection is NULL */
@@ -100,6 +102,13 @@ ldbm_back_delete(Slapi_PBlock *pb)
dblayer_txn_init(li, &txn);
/* the calls to perform searches require the parent txn if any
so set txn to the parent_txn until we begin the child transaction */
+
+ if (txn.back_txn_txn == NULL) {
+ /* This is the parent operation, get the time */
+ parent_op = 1;
+ parent_time = slapi_current_rel_time_hr();
+ }
+
if (parent_txn) {
txn.back_txn_txn = parent_txn;
} else {
@@ -1270,6 +1279,11 @@ replace_entry:
slapi_pblock_set(pb, SLAPI_PLUGIN_OPRETURN, &retval);
}
slapi_pblock_get(pb, SLAPI_PB_RESULT_TEXT, &ldap_result_message);
+
+ /* Revert the caches if this is the parent operation */
+ if (parent_op) {
+ revert_cache(inst, &parent_time);
+ }
goto error_return;
}
if (parent_found) {
diff --git a/ldap/servers/slapd/back-ldbm/ldbm_modify.c b/ldap/servers/slapd/back-ldbm/ldbm_modify.c
index cc4319e..b90b3e0 100644
--- a/ldap/servers/slapd/back-ldbm/ldbm_modify.c
+++ b/ldap/servers/slapd/back-ldbm/ldbm_modify.c
@@ -412,6 +412,8 @@ ldbm_back_modify(Slapi_PBlock *pb)
int fixup_tombstone = 0;
int ec_locked = 0;
int result_sent = 0;
+ int32_t parent_op = 0;
+ struct timespec parent_time;
slapi_pblock_get(pb, SLAPI_BACKEND, &be);
slapi_pblock_get(pb, SLAPI_PLUGIN_PRIVATE, &li);
@@ -426,6 +428,13 @@ ldbm_back_modify(Slapi_PBlock *pb)
dblayer_txn_init(li, &txn); /* must do this before first goto error_return */
/* the calls to perform searches require the parent txn if any
so set txn to the parent_txn until we begin the child transaction */
+
+ if (txn.back_txn_txn == NULL) {
+ /* This is the parent operation, get the time */
+ parent_op = 1;
+ parent_time = slapi_current_rel_time_hr();
+ }
+
if (parent_txn) {
txn.back_txn_txn = parent_txn;
} else {
@@ -864,6 +873,11 @@ ldbm_back_modify(Slapi_PBlock *pb)
slapi_pblock_set(pb, SLAPI_PLUGIN_OPRETURN, ldap_result_code ? &ldap_result_code : &retval);
}
slapi_pblock_get(pb, SLAPI_PB_RESULT_TEXT, &ldap_result_message);
+
+ /* Revert the caches if this is the parent operation */
+ if (parent_op) {
+ revert_cache(inst, &parent_time);
+ }
goto error_return;
}
retval = plugin_call_mmr_plugin_postop(pb, NULL,SLAPI_PLUGIN_BE_TXN_POST_MODIFY_FN);
diff --git a/ldap/servers/slapd/back-ldbm/ldbm_modrdn.c b/ldap/servers/slapd/back-ldbm/ldbm_modrdn.c
index e4d0337..73e50eb 100644
--- a/ldap/servers/slapd/back-ldbm/ldbm_modrdn.c
+++ b/ldap/servers/slapd/back-ldbm/ldbm_modrdn.c
@@ -97,6 +97,8 @@ ldbm_back_modrdn(Slapi_PBlock *pb)
int op_id;
int result_sent = 0;
Connection *pb_conn = NULL;
+ int32_t parent_op = 0;
+ struct timespec parent_time;
if (slapi_pblock_get(pb, SLAPI_CONN_ID, &conn_id) < 0) {
conn_id = 0; /* connection is NULL */
@@ -134,6 +136,13 @@ ldbm_back_modrdn(Slapi_PBlock *pb)
/* dblayer_txn_init needs to be called before "goto error_return" */
dblayer_txn_init(li, &txn);
+
+ if (txn.back_txn_txn == NULL) {
+ /* This is the parent operation, get the time */
+ parent_op = 1;
+ parent_time = slapi_current_rel_time_hr();
+ }
+
/* the calls to perform searches require the parent txn if any
so set txn to the parent_txn until we begin the child transaction */
if (parent_txn) {
@@ -1208,6 +1217,11 @@ ldbm_back_modrdn(Slapi_PBlock *pb)
slapi_pblock_set(pb, SLAPI_PLUGIN_OPRETURN, ldap_result_code ? &ldap_result_code : &retval);
}
slapi_pblock_get(pb, SLAPI_PB_RESULT_TEXT, &ldap_result_message);
+
+ /* Revert the caches if this is the parent operation */
+ if (parent_op) {
+ revert_cache(inst, &parent_time);
+ }
goto error_return;
}
retval = plugin_call_mmr_plugin_postop(pb, NULL,SLAPI_PLUGIN_BE_TXN_POST_MODRDN_FN);
@@ -1353,8 +1367,13 @@ error_return:
slapi_pblock_set(pb, SLAPI_PLUGIN_OPRETURN, ldap_result_code ? &ldap_result_code : &retval);
}
slapi_pblock_get(pb, SLAPI_PB_RESULT_TEXT, &ldap_result_message);
+
+ /* Revert the caches if this is the parent operation */
+ if (parent_op) {
+ revert_cache(inst, &parent_time);
+ }
}
- retval = plugin_call_mmr_plugin_postop(pb, NULL,SLAPI_PLUGIN_BE_TXN_POST_MODRDN_FN);
+ retval = plugin_call_mmr_plugin_postop(pb, NULL,SLAPI_PLUGIN_BE_TXN_POST_MODRDN_FN);
/* Release SERIAL LOCK */
dblayer_txn_abort(be, &txn); /* abort crashes in case disk full */
@@ -1411,17 +1430,6 @@ common_return:
"operation failed, the target entry is cleared from dncache (%s)\n", slapi_entry_get_dn(ec->ep_entry));
CACHE_REMOVE(&inst->inst_dncache, bdn);
CACHE_RETURN(&inst->inst_dncache, &bdn);
- /*
- * If the new/invalid entry (ec) is in the cache, that means we need to
- * swap it out with the original entry (e) --> to undo the swap that
- * modrdn_rename_entry_update_indexes() did.
- */
- if (cache_is_in_cache(&inst->inst_cache, ec)) {
- if (cache_replace(&inst->inst_cache, ec, e) != 0) {
- slapi_log_err(SLAPI_LOG_ALERT, "ldbm_back_modrdn",
- "failed to replace cache entry after error\n");
- }
- }
}
if (ec && inst) {
diff --git a/ldap/servers/slapd/back-ldbm/proto-back-ldbm.h b/ldap/servers/slapd/back-ldbm/proto-back-ldbm.h
index b56f6ef..e68765b 100644
--- a/ldap/servers/slapd/back-ldbm/proto-back-ldbm.h
+++ b/ldap/servers/slapd/back-ldbm/proto-back-ldbm.h
@@ -55,6 +55,7 @@ void cache_unlock_entry(struct cache *cache, struct backentry *e);
int cache_replace(struct cache *cache, void *oldptr, void *newptr);
int cache_has_otherref(struct cache *cache, void *bep);
int cache_is_in_cache(struct cache *cache, void *ptr);
+void revert_cache(ldbm_instance *inst, struct timespec *start_time);
#ifdef CACHE_DEBUG
void check_entry_cache(struct cache *cache, struct backentry *e);
diff --git a/ldap/servers/slapd/slapi-plugin.h b/ldap/servers/slapd/slapi-plugin.h
index 54c195e..0bc3a6f 100644
--- a/ldap/servers/slapd/slapi-plugin.h
+++ b/ldap/servers/slapd/slapi-plugin.h
@@ -6766,6 +6766,12 @@ time_t slapi_current_time(void) __attribute__((deprecated));
*/
struct timespec slapi_current_time_hr(void);
/**
+ * Returns the current relative time as a monotonic hr clock.
+ *
+ * \return timespec of the current monotonic time.
+ */
+struct timespec slapi_current_rel_time_hr(void);
+/**
* Returns the current system time as a hr clock in UTC timezone.
* This clock adjusts with ntp steps, and should NOT be
* used for timer information.
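flush_remove_entry() reduces to "was the entry stamped at or after the operation start", computed with slapi_timespec_diff() on the monotonic timestamps that slapi_current_rel_time_hr() provides (a wall clock could step backwards under NTP and break the comparison). A Python analog of the normalized timespec comparison; the argument order is inferred from the call site, so treat this as a sketch rather than the slapi implementation:

```python
NSEC_PER_SEC = 1_000_000_000

def timespec_diff(a, b):
    """(a - b) as a (sec, nsec) pair with nsec normalized to
    [0, 1e9); rough analog of slapi_timespec_diff()."""
    sec = a[0] - b[0]
    nsec = a[1] - b[1]
    if nsec < 0:
        sec -= 1
        nsec += NSEC_PER_SEC
    return (sec, nsec)

def flush_remove_entry(entry_time, start_time):
    # mirror of the C helper: remove iff entry_time >= start_time
    sec, _ = timespec_diff(entry_time, start_time)
    return sec >= 0
```

An entry stamped even one nanosecond before the parent operation started normalizes to tv_sec == -1 and is kept; anything at or after the start is flushed.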
--
[389-ds-base] branch 389-ds-base-1.4.0 updated: Ticket 50260 - backend txn plugins can corrupt entry cache
by pagure@pagure.io
This is an automated email from the git hooks/post-receive script.
mreynolds pushed a commit to branch 389-ds-base-1.4.0
in repository 389-ds-base.
The following commit(s) were added to refs/heads/389-ds-base-1.4.0 by this push:
new 09b5a2c Ticket 50260 - backend txn plugins can corrupt entry cache
09b5a2c is described below
commit 09b5a2c32435f02aa69546256a6fe3631891111c
Author: Mark Reynolds <mreynolds(a)redhat.com>
AuthorDate: Thu Mar 7 15:38:25 2019 -0500
Ticket 50260 - backend txn plugins can corrupt entry cache
Bug Description: If a nested backend txn plugin fails, any updates
it made that went into the entry cache still persist
after the database transaction is aborted.
Fix Description: In order to be sure the entry cache is not corrupted
after a backend txn plugin failure we need to flush
all the cache entries that were added to the cache
after the parent operation was started.
To do this we record the start time of the original
(or parent) operation, and we record the time any entry
is added to the cache. Then on failure we do a comparison
and remove the entry from the cache if it's not in use.
If it is in use we add an "invalid" flag which triggers
the entry to be removed when the cache entry is returned
by the owner.
https://pagure.io/389-ds-base/issue/50260
CI tested and ASAN approved.
Reviewed by: firstyear, tbordaz, and lkrispen (Thanks!!!)
(cherry picked from commit 7ba8a80cfbaed9f6d727f98ed8c284943b3295e1)
---
dirsrvtests/tests/suites/betxns/betxn_test.py | 161 ++++++++++++++++-----
.../tests/suites/plugins/acceptance_test.py | 2 +-
ldap/servers/slapd/back-ldbm/back-ldbm.h | 68 +++++----
ldap/servers/slapd/back-ldbm/backentry.c | 3 +-
ldap/servers/slapd/back-ldbm/cache.c | 112 +++++++++++++-
ldap/servers/slapd/back-ldbm/ldbm_add.c | 14 ++
ldap/servers/slapd/back-ldbm/ldbm_delete.c | 14 ++
ldap/servers/slapd/back-ldbm/ldbm_modify.c | 14 ++
ldap/servers/slapd/back-ldbm/ldbm_modrdn.c | 32 ++--
ldap/servers/slapd/back-ldbm/proto-back-ldbm.h | 1 +
ldap/servers/slapd/slapi-plugin.h | 6 +
11 files changed, 343 insertions(+), 84 deletions(-)
diff --git a/dirsrvtests/tests/suites/betxns/betxn_test.py b/dirsrvtests/tests/suites/betxns/betxn_test.py
index 3b81434..2aaddde 100644
--- a/dirsrvtests/tests/suites/betxns/betxn_test.py
+++ b/dirsrvtests/tests/suites/betxns/betxn_test.py
@@ -7,22 +7,23 @@
# --- END COPYRIGHT BLOCK ---
#
import pytest
-import six
import ldap
from lib389.tasks import *
from lib389.utils import *
from lib389.topologies import topology_st
-
-from lib389.plugins import SevenBitCheckPlugin, AttributeUniquenessPlugin, MemberOfPlugin
-
+from lib389.plugins import (SevenBitCheckPlugin, AttributeUniquenessPlugin,
+ MemberOfPlugin, ManagedEntriesPlugin,
+ ReferentialIntegrityPlugin, MEPTemplates,
+ MEPConfigs)
from lib389.idm.user import UserAccounts, TEST_USER_PROPERTIES
-from lib389.idm.group import Groups
-
-from lib389._constants import DEFAULT_SUFFIX, PLUGIN_7_BIT_CHECK, PLUGIN_ATTR_UNIQUENESS, PLUGIN_MEMBER_OF
+from lib389.idm.organizationalunit import OrganizationalUnits
+from lib389.idm.group import Groups, Group
+from lib389._constants import DEFAULT_SUFFIX
logging.getLogger(__name__).setLevel(logging.DEBUG)
log = logging.getLogger(__name__)
+
def test_betxt_7bit(topology_st):
"""Test that the 7-bit plugin correctly rejects an invalid update
@@ -52,7 +53,6 @@ def test_betxt_7bit(topology_st):
sevenbc.enable()
topology_st.standalone.restart()
-
users = UserAccounts(topology_st.standalone, basedn=DEFAULT_SUFFIX)
user = users.create(properties=TEST_USER_PROPERTIES)
@@ -69,7 +69,7 @@ def test_betxt_7bit(topology_st):
user_check = users.get("testuser")
- assert user_check.dn == user.dn
+ assert user_check.dn.lower() == user.dn.lower()
#
# Cleanup - remove the user
@@ -100,9 +100,6 @@ def test_betxn_attr_uniqueness(topology_st):
5. Test user entry should be removed
"""
- USER1_DN = 'uid=test_entry1,' + DEFAULT_SUFFIX
- USER2_DN = 'uid=test_entry2,' + DEFAULT_SUFFIX
-
attruniq = AttributeUniquenessPlugin(topology_st.standalone)
attruniq.enable()
topology_st.standalone.restart()
@@ -110,26 +107,22 @@ def test_betxn_attr_uniqueness(topology_st):
users = UserAccounts(topology_st.standalone, basedn=DEFAULT_SUFFIX)
user1 = users.create(properties={
'uid': 'testuser1',
- 'cn' : 'testuser1',
- 'sn' : 'user1',
- 'uidNumber' : '1001',
- 'gidNumber' : '2001',
- 'homeDirectory' : '/home/testuser1'
+ 'cn': 'testuser1',
+ 'sn': 'user1',
+ 'uidNumber': '1001',
+ 'gidNumber': '2001',
+ 'homeDirectory': '/home/testuser1'
})
- try:
- user2 = users.create(properties={
+ with pytest.raises(ldap.LDAPError):
+ users.create(properties={
'uid': ['testuser2', 'testuser1'],
- 'cn' : 'testuser2',
- 'sn' : 'user2',
- 'uidNumber' : '1002',
- 'gidNumber' : '2002',
- 'homeDirectory' : '/home/testuser2'
+ 'cn': 'testuser2',
+ 'sn': 'user2',
+ 'uidNumber': '1002',
+ 'gidNumber': '2002',
+ 'homeDirectory': '/home/testuser2'
})
- log.fatal('test_betxn_attr_uniqueness: The second entry was incorrectly added.')
- assert False
- except ldap.LDAPError as e:
- log.error('test_betxn_attr_uniqueness: Failed to add test user as expected:')
user1.delete()
@@ -191,8 +184,8 @@ def test_betxn_memberof(topology_st):
log.info('test_betxn_memberof: PASSED')
-def test_betxn_modrdn_memberof(topology_st):
- """Test modrdn operartions and memberOf
+def test_betxn_modrdn_memberof_cache_corruption(topology_st):
+ """Test modrdn operations and memberOf
:id: 70d0b96e-b693-4bf7-bbf5-102a66ac5994
@@ -227,18 +220,18 @@ def test_betxn_modrdn_memberof(topology_st):
# Create user and add it to group
users = UserAccounts(topology_st.standalone, basedn=DEFAULT_SUFFIX)
- user = users.create(properties=TEST_USER_PROPERTIES)
+ user = users.ensure_state(properties=TEST_USER_PROPERTIES)
if not ds_is_older('1.3.7'):
user.remove('objectClass', 'nsMemberOf')
group.add_member(user.dn)
# Attempt modrdn that should fail, but the original entry should stay in the cache
- with pytest.raises(ldap.OBJECTCLASS_VIOLATION):
+ with pytest.raises(ldap.OBJECT_CLASS_VIOLATION):
group.rename('cn=group_to_people', newsuperior=peoplebase)
# Should fail, but not with NO_SUCH_OBJECT as the original entry should still be in the cache
- with pytest.raises(ldap.OBJECTCLASS_VIOLATION):
+ with pytest.raises(ldap.OBJECT_CLASS_VIOLATION):
group.rename('cn=group_to_people', newsuperior=peoplebase)
#
@@ -247,6 +240,108 @@ def test_betxn_modrdn_memberof(topology_st):
log.info('test_betxn_modrdn_memberof: PASSED')
+def test_ri_and_mep_cache_corruption(topology_st):
+ """Test RI plugin aborts change after MEP plugin fails.
+ This is really testing the entry cache for corruption
+
+ :id: 70d0b96e-b693-4bf7-bbf5-102a66ac5995
+
+ :setup: Standalone instance
+
+ :steps: 1. Enable and configure mep and ri plugins
+ 2. Add user and add it to a group
+ 3. Disable MEP plugin and remove MEP group
+ 4. Delete user
+ 5. Check that user is still a member of the group
+
+ :expectedresults:
+ 1. Success
+ 2. Success
+ 3. Success
+ 4. It fails with NO_SUCH_OBJECT
+ 5. Success
+
+ """
+ # Start plugins
+ topology_st.standalone.config.set('nsslapd-dynamic-plugins', 'on')
+ mep_plugin = ManagedEntriesPlugin(topology_st.standalone)
+ mep_plugin.enable()
+ ri_plugin = ReferentialIntegrityPlugin(topology_st.standalone)
+ ri_plugin.enable()
+
+ # Add our org units
+ ous = OrganizationalUnits(topology_st.standalone, DEFAULT_SUFFIX)
+ ou_people = ous.create(properties={'ou': 'managed_people'})
+ ou_groups = ous.create(properties={'ou': 'managed_groups'})
+
+ # Configure MEP
+ mep_templates = MEPTemplates(topology_st.standalone, DEFAULT_SUFFIX)
+ mep_template1 = mep_templates.create(properties={
+ 'cn': 'MEP template',
+ 'mepRDNAttr': 'cn',
+ 'mepStaticAttr': 'objectclass: posixGroup|objectclass: extensibleObject'.split('|'),
+ 'mepMappedAttr': 'cn: $cn|uid: $cn|gidNumber: $uidNumber'.split('|')
+ })
+ mep_configs = MEPConfigs(topology_st.standalone)
+ mep_configs.create(properties={'cn': 'config',
+ 'originScope': ou_people.dn,
+ 'originFilter': 'objectclass=posixAccount',
+ 'managedBase': ou_groups.dn,
+ 'managedTemplate': mep_template1.dn})
+
+ # Add an entry that meets the MEP scope
+ users = UserAccounts(topology_st.standalone, DEFAULT_SUFFIX,
+ rdn='ou={}'.format(ou_people.rdn))
+ user = users.create(properties={
+ 'uid': 'test-user1',
+ 'cn': 'test-user',
+ 'sn': 'test-user',
+ 'uidNumber': '10011',
+ 'gidNumber': '20011',
+ 'homeDirectory': '/home/test-user1'
+ })
+
+ # Add group
+ groups = Groups(topology_st.standalone, DEFAULT_SUFFIX)
+ user_group = groups.ensure_state(properties={'cn': 'group', 'member': user.dn})
+
+ # Check if a managed group entry was created
+ mep_group = Group(topology_st.standalone, dn='cn={},{}'.format(user.rdn, ou_groups.dn))
+ if not mep_group.exists():
+ log.fatal("MEP group was not created for the user")
+ assert False
+
+ # Mess with MEP so it fails
+ mep_plugin.disable()
+ mep_group.delete()
+ mep_plugin.enable()
+
+ # Add another group for verify entry cache is not corrupted
+ test_group = groups.create(properties={'cn': 'test_group'})
+
+ # Delete user, should fail, and user should still be a member
+ with pytest.raises(ldap.NO_SUCH_OBJECT):
+ user.delete()
+
+ # Verify membership is intact
+ if not user_group.is_member(user.dn):
+ log.fatal("Member was incorrectly removed from the group!! Or so it seems")
+
+        # Restart server and test again in case this was a cache issue
+        topology_st.standalone.restart()
+        if user_group.is_member(user.dn):
+            log.info("The entry cache was corrupted")
+            assert False
+
+        assert False
+
+ # Verify test group is still found in entry cache by deleting it
+ test_group.delete()
+
+ # Success
+ log.info("Test PASSED")
+
+
if __name__ == '__main__':
# Run isolated
# -s for DEBUG mode
diff --git a/dirsrvtests/tests/suites/plugins/acceptance_test.py b/dirsrvtests/tests/suites/plugins/acceptance_test.py
index 894c0ff..f44b684 100644
--- a/dirsrvtests/tests/suites/plugins/acceptance_test.py
+++ b/dirsrvtests/tests/suites/plugins/acceptance_test.py
@@ -18,7 +18,7 @@ from lib389.utils import *
from lib389.plugins import *
from lib389._constants import *
from lib389.dseldif import DSEldif
-from lib389.idm.user import UserAccounts, UserAccount
+from lib389.idm.user import UserAccounts
from lib389.idm.group import Groups
from lib389.idm.organizationalunit import OrganizationalUnits
from lib389.idm.domain import Domain
diff --git a/ldap/servers/slapd/back-ldbm/back-ldbm.h b/ldap/servers/slapd/back-ldbm/back-ldbm.h
index 79115fe..f690ad9 100644
--- a/ldap/servers/slapd/back-ldbm/back-ldbm.h
+++ b/ldap/servers/slapd/back-ldbm/back-ldbm.h
@@ -312,48 +312,52 @@ typedef struct
struct backcommon
{
- int ep_type; /* to distinguish backdn from backentry */
- struct backcommon *ep_lrunext; /* for the cache */
- struct backcommon *ep_lruprev; /* for the cache */
- ID ep_id; /* entry id */
- char ep_state; /* state in the cache */
-#define ENTRY_STATE_DELETED 0x1 /* entry is marked as deleted */
-#define ENTRY_STATE_CREATING 0x2 /* entry is being created; don't touch it */
-#define ENTRY_STATE_NOTINCACHE 0x4 /* cache_add failed; not in the cache */
- int ep_refcnt; /* entry reference cnt */
- size_t ep_size; /* for cache tracking */
+ int32_t ep_type; /* to distinguish backdn from backentry */
+ struct backcommon *ep_lrunext; /* for the cache */
+ struct backcommon *ep_lruprev; /* for the cache */
+ ID ep_id; /* entry id */
+ uint8_t ep_state; /* state in the cache */
+#define ENTRY_STATE_DELETED 0x1 /* entry is marked as deleted */
+#define ENTRY_STATE_CREATING 0x2 /* entry is being created; don't touch it */
+#define ENTRY_STATE_NOTINCACHE 0x4 /* cache_add failed; not in the cache */
+#define ENTRY_STATE_INVALID 0x8 /* cache entry is invalid and needs to be removed */
+ int32_t ep_refcnt; /* entry reference cnt */
+ size_t ep_size; /* for cache tracking */
+ struct timespec ep_create_time; /* the time the entry was added to the cache */
};
-/* From ep_type through ep_size MUST be identical to backcommon */
+/* From ep_type through ep_create_time MUST be identical to backcommon */
struct backentry
{
- int ep_type; /* to distinguish backdn from backentry */
- struct backcommon *ep_lrunext; /* for the cache */
- struct backcommon *ep_lruprev; /* for the cache */
- ID ep_id; /* entry id */
- char ep_state; /* state in the cache */
- int ep_refcnt; /* entry reference cnt */
- size_t ep_size; /* for cache tracking */
- Slapi_Entry *ep_entry; /* real entry */
+ int32_t ep_type; /* to distinguish backdn from backentry */
+ struct backcommon *ep_lrunext; /* for the cache */
+ struct backcommon *ep_lruprev; /* for the cache */
+ ID ep_id; /* entry id */
+ uint8_t ep_state; /* state in the cache */
+ int32_t ep_refcnt; /* entry reference cnt */
+ size_t ep_size; /* for cache tracking */
+ struct timespec ep_create_time; /* the time the entry was added to the cache */
+ Slapi_Entry *ep_entry; /* real entry */
Slapi_Entry *ep_vlventry;
- void *ep_dn_link; /* linkage for the 3 hash */
- void *ep_id_link; /* tables used for */
- void *ep_uuid_link; /* looking up entries */
- PRMonitor *ep_mutexp; /* protection for mods; make it reentrant */
+ void *ep_dn_link; /* linkage for the 3 hash */
+ void *ep_id_link; /* tables used for */
+ void *ep_uuid_link; /* looking up entries */
+ PRMonitor *ep_mutexp; /* protection for mods; make it reentrant */
};
-/* From ep_type through ep_size MUST be identical to backcommon */
+/* From ep_type through ep_create_time MUST be identical to backcommon */
struct backdn
{
- int ep_type; /* to distinguish backdn from backentry */
- struct backcommon *ep_lrunext; /* for the cache */
- struct backcommon *ep_lruprev; /* for the cache */
- ID ep_id; /* entry id */
- char ep_state; /* state in the cache; share ENTRY_STATE_* */
- int ep_refcnt; /* entry reference cnt */
- uint64_t ep_size; /* for cache tracking */
+ int32_t ep_type; /* to distinguish backdn from backentry */
+ struct backcommon *ep_lrunext; /* for the cache */
+ struct backcommon *ep_lruprev; /* for the cache */
+ ID ep_id; /* entry id */
+ uint8_t ep_state; /* state in the cache; share ENTRY_STATE_* */
+ int32_t ep_refcnt; /* entry reference cnt */
+ uint64_t ep_size; /* for cache tracking */
+ struct timespec ep_create_time; /* the time the entry was added to the cache */
Slapi_DN *dn_sdn;
- void *dn_id_link; /* for hash table */
+ void *dn_id_link; /* for hash table */
};
/* for the in-core cache of entries */
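The struct comments above insist that every field from ep_type through ep_create_time be laid out identically in backcommon, backentry, and backdn, because the cache code casts backentry*/backdn* pointers to backcommon*. As an illustration only (field names transcribed from the diff, not the server's API), the invariant can be expressed as a shared-prefix check:

```python
# The cache casts backentry*/backdn* to backcommon*, so the leading fields of
# all three structs must match exactly. Field lists transcribed from the diff.
BACKCOMMON = ['ep_type', 'ep_lrunext', 'ep_lruprev', 'ep_id', 'ep_state',
              'ep_refcnt', 'ep_size', 'ep_create_time']
BACKENTRY = BACKCOMMON + ['ep_entry', 'ep_vlventry', 'ep_dn_link',
                          'ep_id_link', 'ep_uuid_link', 'ep_mutexp']
BACKDN = BACKCOMMON + ['dn_sdn', 'dn_id_link']


def shares_common_prefix(fields):
    """True if the struct's leading fields mirror backcommon exactly."""
    return fields[:len(BACKCOMMON)] == BACKCOMMON
```

This is why the patch updates the "From ep_type through ep_size" comments to "through ep_create_time": the new timestamp extends the shared prefix and must appear at the same offset in all three structs.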
diff --git a/ldap/servers/slapd/back-ldbm/backentry.c b/ldap/servers/slapd/back-ldbm/backentry.c
index f2fe780..972842b 100644
--- a/ldap/servers/slapd/back-ldbm/backentry.c
+++ b/ldap/servers/slapd/back-ldbm/backentry.c
@@ -23,7 +23,8 @@ backentry_free(struct backentry **bep)
return;
}
ep = *bep;
- PR_ASSERT(ep->ep_state & (ENTRY_STATE_DELETED | ENTRY_STATE_NOTINCACHE));
+
+ PR_ASSERT(ep->ep_state & (ENTRY_STATE_DELETED | ENTRY_STATE_NOTINCACHE | ENTRY_STATE_INVALID));
if (ep->ep_entry != NULL) {
slapi_entry_free(ep->ep_entry);
}
diff --git a/ldap/servers/slapd/back-ldbm/cache.c b/ldap/servers/slapd/back-ldbm/cache.c
index a27505c..ba9d26f 100644
--- a/ldap/servers/slapd/back-ldbm/cache.c
+++ b/ldap/servers/slapd/back-ldbm/cache.c
@@ -56,11 +56,14 @@
#define LOG(...)
#endif
-#define LRU_DETACH(cache, e) lru_detach((cache), (void *)(e))
+typedef enum {
+ ENTRY_CACHE,
+ DN_CACHE,
+} CacheType;
+#define LRU_DETACH(cache, e) lru_detach((cache), (void *)(e))
#define CACHE_LRU_HEAD(cache, type) ((type)((cache)->c_lruhead))
#define CACHE_LRU_TAIL(cache, type) ((type)((cache)->c_lrutail))
-
#define BACK_LRU_NEXT(entry, type) ((type)((entry)->ep_lrunext))
#define BACK_LRU_PREV(entry, type) ((type)((entry)->ep_lruprev))
@@ -185,6 +188,7 @@ new_hash(u_long size, u_long offset, HashFn hfn, HashTestFn tfn)
int
add_hash(Hashtable *ht, void *key, uint32_t keylen, void *entry, void **alt)
{
+ struct backcommon *back_entry = (struct backcommon *)entry;
u_long val, slot;
void *e;
@@ -202,6 +206,7 @@ add_hash(Hashtable *ht, void *key, uint32_t keylen, void *entry, void **alt)
e = HASH_NEXT(ht, e);
}
/* ok, it's not already there, so add it */
+ back_entry->ep_create_time = slapi_current_rel_time_hr();
HASH_NEXT(ht, entry) = ht->slot[slot];
ht->slot[slot] = entry;
return 1;
@@ -492,6 +497,89 @@ cache_make_hashes(struct cache *cache, int type)
}
}
+/*
+ * Helper function for flush_hash() to calculate if the entry should be
+ * removed from the cache.
+ */
+static int32_t
+flush_remove_entry(struct timespec *entry_time, struct timespec *start_time)
+{
+ struct timespec diff;
+
+ slapi_timespec_diff(entry_time, start_time, &diff);
+ if (diff.tv_sec >= 0) {
+ return 1;
+ } else {
+ return 0;
+ }
+}
+
+/*
+ * Flush all the cache entries that were added after the "start time"
+ * This is called when a backend transaction plugin fails, and we need
+ * to remove all the possible invalid entries in the cache.
+ *
+ * If the ref count is 0, we can straight up remove it from the cache, but
+ * if the ref count is greater than 1, then the entry is currently in use.
+ * In the later case we set the entry state to ENTRY_STATE_INVALID, and
+ * when the owning thread cache_returns() the cache entry is automatically
+ * removed so another thread can not use/lock the invalid cache entry.
+ */
+static void
+flush_hash(struct cache *cache, struct timespec *start_time, int32_t type)
+{
+ void *e, *laste = NULL;
+ Hashtable *ht = cache->c_idtable;
+
+ cache_lock(cache);
+
+ for (size_t i = 0; i < ht->size; i++) {
+ e = ht->slot[i];
+ while (e) {
+ struct backcommon *entry = (struct backcommon *)e;
+ uint64_t remove_it = 0;
+ if (flush_remove_entry(&entry->ep_create_time, start_time)) {
+ /* Mark the entry to be removed */
+ slapi_log_err(SLAPI_LOG_CACHE, "flush_hash", "[%s] Removing entry id (%d)\n",
+ type ? "DN CACHE" : "ENTRY CACHE", entry->ep_id);
+ remove_it = 1;
+ }
+ laste = e;
+ e = HASH_NEXT(ht, e);
+
+ if (remove_it) {
+ /* since we have the cache lock we know we can trust refcnt */
+ entry->ep_state |= ENTRY_STATE_INVALID;
+ if (entry->ep_refcnt == 0) {
+ entry->ep_refcnt++;
+ lru_delete(cache, laste);
+ if (type == ENTRY_CACHE) {
+ entrycache_remove_int(cache, laste);
+ entrycache_return(cache, (struct backentry **)&laste);
+ } else {
+ dncache_remove_int(cache, laste);
+ dncache_return(cache, (struct backdn **)&laste);
+ }
+ } else {
+ /* Entry flagged for removal */
+ slapi_log_err(SLAPI_LOG_CACHE, "flush_hash",
+ "[%s] Flagging entry to be removed later: id (%d) refcnt: %d\n",
+ type ? "DN CACHE" : "ENTRY CACHE", entry->ep_id, entry->ep_refcnt);
+ }
+ }
+ }
+ }
+
+ cache_unlock(cache);
+}
+
+void
+revert_cache(ldbm_instance *inst, struct timespec *start_time)
+{
+ flush_hash(&inst->inst_cache, start_time, ENTRY_CACHE);
+ flush_hash(&inst->inst_dncache, start_time, DN_CACHE);
+}
+
/* initialize the cache */
int
cache_init(struct cache *cache, uint64_t maxsize, int64_t maxentries, int type)
@@ -1142,7 +1230,7 @@ entrycache_return(struct cache *cache, struct backentry **bep)
} else {
ASSERT(e->ep_refcnt > 0);
if (!--e->ep_refcnt) {
- if (e->ep_state & ENTRY_STATE_DELETED) {
+ if (e->ep_state & (ENTRY_STATE_DELETED | ENTRY_STATE_INVALID)) {
const char *ndn = slapi_sdn_get_ndn(backentry_get_sdn(e));
if (ndn) {
/*
@@ -1154,6 +1242,13 @@ entrycache_return(struct cache *cache, struct backentry **bep)
LOG("entrycache_return -Failed to remove %s from dn table\n", ndn);
}
}
+ if (e->ep_state & ENTRY_STATE_INVALID) {
+ /* Remove it from the hash table before we free the back entry */
+ slapi_log_err(SLAPI_LOG_CACHE, "entrycache_return",
+ "Finally flushing invalid entry: %d (%s)\n",
+ e->ep_id, backentry_get_ndn(e));
+ entrycache_remove_int(cache, e);
+ }
backentry_free(bep);
} else {
lru_add(cache, e);
@@ -1535,7 +1630,7 @@ cache_lock_entry(struct cache *cache, struct backentry *e)
/* make sure entry hasn't been deleted now */
cache_lock(cache);
- if (e->ep_state & (ENTRY_STATE_DELETED | ENTRY_STATE_NOTINCACHE)) {
+ if (e->ep_state & (ENTRY_STATE_DELETED | ENTRY_STATE_NOTINCACHE | ENTRY_STATE_INVALID)) {
cache_unlock(cache);
PR_ExitMonitor(e->ep_mutexp);
LOG("<= cache_lock_entry (DELETED)\n");
@@ -1696,7 +1791,14 @@ dncache_return(struct cache *cache, struct backdn **bdn)
} else {
ASSERT((*bdn)->ep_refcnt > 0);
if (!--(*bdn)->ep_refcnt) {
- if ((*bdn)->ep_state & ENTRY_STATE_DELETED) {
+ if ((*bdn)->ep_state & (ENTRY_STATE_DELETED | ENTRY_STATE_INVALID)) {
+ if ((*bdn)->ep_state & ENTRY_STATE_INVALID) {
+ /* Remove it from the hash table before we free the back dn */
+ slapi_log_err(SLAPI_LOG_CACHE, "dncache_return",
+ "Finally flushing invalid entry: %d (%s)\n",
+ (*bdn)->ep_id, slapi_sdn_get_dn((*bdn)->dn_sdn));
+ dncache_remove_int(cache, (*bdn));
+ }
backdn_free(bdn);
} else {
lru_add(cache, (void *)*bdn);
diff --git a/ldap/servers/slapd/back-ldbm/ldbm_add.c b/ldap/servers/slapd/back-ldbm/ldbm_add.c
index f269115..8c0439c 100644
--- a/ldap/servers/slapd/back-ldbm/ldbm_add.c
+++ b/ldap/servers/slapd/back-ldbm/ldbm_add.c
@@ -97,6 +97,8 @@ ldbm_back_add(Slapi_PBlock *pb)
PRUint64 conn_id;
int op_id;
int result_sent = 0;
+ int32_t parent_op = 0;
+ struct timespec parent_time;
if (slapi_pblock_get(pb, SLAPI_CONN_ID, &conn_id) < 0) {
conn_id = 0; /* connection is NULL */
@@ -147,6 +149,13 @@ ldbm_back_add(Slapi_PBlock *pb)
slapi_entry_delete_values(e, numsubordinates, NULL);
dblayer_txn_init(li, &txn);
+
+ if (txn.back_txn_txn == NULL) {
+ /* This is the parent operation, get the time */
+ parent_op = 1;
+ parent_time = slapi_current_rel_time_hr();
+ }
+
/* the calls to perform searches require the parent txn if any
so set txn to the parent_txn until we begin the child transaction */
if (parent_txn) {
@@ -1212,6 +1221,11 @@ ldbm_back_add(Slapi_PBlock *pb)
slapi_pblock_set(pb, SLAPI_PLUGIN_OPRETURN, ldap_result_code ? &ldap_result_code : &retval);
}
slapi_pblock_get(pb, SLAPI_PB_RESULT_TEXT, &ldap_result_message);
+
+ /* Revert the caches if this is the parent operation */
+ if (parent_op) {
+ revert_cache(inst, &parent_time);
+ }
goto error_return;
}
diff --git a/ldap/servers/slapd/back-ldbm/ldbm_delete.c b/ldap/servers/slapd/back-ldbm/ldbm_delete.c
index 3a27fd0..98b3d82 100644
--- a/ldap/servers/slapd/back-ldbm/ldbm_delete.c
+++ b/ldap/servers/slapd/back-ldbm/ldbm_delete.c
@@ -79,6 +79,8 @@ ldbm_back_delete(Slapi_PBlock *pb)
ID tomb_ep_id = 0;
int result_sent = 0;
Connection *pb_conn;
+ int32_t parent_op = 0;
+ struct timespec parent_time;
if (slapi_pblock_get(pb, SLAPI_CONN_ID, &conn_id) < 0) {
conn_id = 0; /* connection is NULL */
@@ -100,6 +102,13 @@ ldbm_back_delete(Slapi_PBlock *pb)
dblayer_txn_init(li, &txn);
/* the calls to perform searches require the parent txn if any
so set txn to the parent_txn until we begin the child transaction */
+
+ if (txn.back_txn_txn == NULL) {
+ /* This is the parent operation, get the time */
+ parent_op = 1;
+ parent_time = slapi_current_rel_time_hr();
+ }
+
if (parent_txn) {
txn.back_txn_txn = parent_txn;
} else {
@@ -1270,6 +1279,11 @@ replace_entry:
slapi_pblock_set(pb, SLAPI_PLUGIN_OPRETURN, &retval);
}
slapi_pblock_get(pb, SLAPI_PB_RESULT_TEXT, &ldap_result_message);
+
+ /* Revert the caches if this is the parent operation */
+ if (parent_op) {
+ revert_cache(inst, &parent_time);
+ }
goto error_return;
}
if (parent_found) {
diff --git a/ldap/servers/slapd/back-ldbm/ldbm_modify.c b/ldap/servers/slapd/back-ldbm/ldbm_modify.c
index cc4319e..b90b3e0 100644
--- a/ldap/servers/slapd/back-ldbm/ldbm_modify.c
+++ b/ldap/servers/slapd/back-ldbm/ldbm_modify.c
@@ -412,6 +412,8 @@ ldbm_back_modify(Slapi_PBlock *pb)
int fixup_tombstone = 0;
int ec_locked = 0;
int result_sent = 0;
+ int32_t parent_op = 0;
+ struct timespec parent_time;
slapi_pblock_get(pb, SLAPI_BACKEND, &be);
slapi_pblock_get(pb, SLAPI_PLUGIN_PRIVATE, &li);
@@ -426,6 +428,13 @@ ldbm_back_modify(Slapi_PBlock *pb)
dblayer_txn_init(li, &txn); /* must do this before first goto error_return */
/* the calls to perform searches require the parent txn if any
so set txn to the parent_txn until we begin the child transaction */
+
+ if (txn.back_txn_txn == NULL) {
+ /* This is the parent operation, get the time */
+ parent_op = 1;
+ parent_time = slapi_current_rel_time_hr();
+ }
+
if (parent_txn) {
txn.back_txn_txn = parent_txn;
} else {
@@ -864,6 +873,11 @@ ldbm_back_modify(Slapi_PBlock *pb)
slapi_pblock_set(pb, SLAPI_PLUGIN_OPRETURN, ldap_result_code ? &ldap_result_code : &retval);
}
slapi_pblock_get(pb, SLAPI_PB_RESULT_TEXT, &ldap_result_message);
+
+ /* Revert the caches if this is the parent operation */
+ if (parent_op) {
+ revert_cache(inst, &parent_time);
+ }
goto error_return;
}
retval = plugin_call_mmr_plugin_postop(pb, NULL,SLAPI_PLUGIN_BE_TXN_POST_MODIFY_FN);
diff --git a/ldap/servers/slapd/back-ldbm/ldbm_modrdn.c b/ldap/servers/slapd/back-ldbm/ldbm_modrdn.c
index e4d0337..73e50eb 100644
--- a/ldap/servers/slapd/back-ldbm/ldbm_modrdn.c
+++ b/ldap/servers/slapd/back-ldbm/ldbm_modrdn.c
@@ -97,6 +97,8 @@ ldbm_back_modrdn(Slapi_PBlock *pb)
int op_id;
int result_sent = 0;
Connection *pb_conn = NULL;
+ int32_t parent_op = 0;
+ struct timespec parent_time;
if (slapi_pblock_get(pb, SLAPI_CONN_ID, &conn_id) < 0) {
conn_id = 0; /* connection is NULL */
@@ -134,6 +136,13 @@ ldbm_back_modrdn(Slapi_PBlock *pb)
/* dblayer_txn_init needs to be called before "goto error_return" */
dblayer_txn_init(li, &txn);
+
+ if (txn.back_txn_txn == NULL) {
+ /* This is the parent operation, get the time */
+ parent_op = 1;
+ parent_time = slapi_current_rel_time_hr();
+ }
+
/* the calls to perform searches require the parent txn if any
so set txn to the parent_txn until we begin the child transaction */
if (parent_txn) {
@@ -1208,6 +1217,11 @@ ldbm_back_modrdn(Slapi_PBlock *pb)
slapi_pblock_set(pb, SLAPI_PLUGIN_OPRETURN, ldap_result_code ? &ldap_result_code : &retval);
}
slapi_pblock_get(pb, SLAPI_PB_RESULT_TEXT, &ldap_result_message);
+
+ /* Revert the caches if this is the parent operation */
+ if (parent_op) {
+ revert_cache(inst, &parent_time);
+ }
goto error_return;
}
retval = plugin_call_mmr_plugin_postop(pb, NULL,SLAPI_PLUGIN_BE_TXN_POST_MODRDN_FN);
@@ -1353,8 +1367,13 @@ error_return:
slapi_pblock_set(pb, SLAPI_PLUGIN_OPRETURN, ldap_result_code ? &ldap_result_code : &retval);
}
slapi_pblock_get(pb, SLAPI_PB_RESULT_TEXT, &ldap_result_message);
+
+ /* Revert the caches if this is the parent operation */
+ if (parent_op) {
+ revert_cache(inst, &parent_time);
+ }
}
- retval = plugin_call_mmr_plugin_postop(pb, NULL,SLAPI_PLUGIN_BE_TXN_POST_MODRDN_FN);
+ retval = plugin_call_mmr_plugin_postop(pb, NULL,SLAPI_PLUGIN_BE_TXN_POST_MODRDN_FN);
/* Release SERIAL LOCK */
dblayer_txn_abort(be, &txn); /* abort crashes in case disk full */
@@ -1411,17 +1430,6 @@ common_return:
"operation failed, the target entry is cleared from dncache (%s)\n", slapi_entry_get_dn(ec->ep_entry));
CACHE_REMOVE(&inst->inst_dncache, bdn);
CACHE_RETURN(&inst->inst_dncache, &bdn);
- /*
- * If the new/invalid entry (ec) is in the cache, that means we need to
- * swap it out with the original entry (e) --> to undo the swap that
- * modrdn_rename_entry_update_indexes() did.
- */
- if (cache_is_in_cache(&inst->inst_cache, ec)) {
- if (cache_replace(&inst->inst_cache, ec, e) != 0) {
- slapi_log_err(SLAPI_LOG_ALERT, "ldbm_back_modrdn",
- "failed to replace cache entry after error\n");
- }
- }
}
if (ec && inst) {
diff --git a/ldap/servers/slapd/back-ldbm/proto-back-ldbm.h b/ldap/servers/slapd/back-ldbm/proto-back-ldbm.h
index 5749e26..00d4aea 100644
--- a/ldap/servers/slapd/back-ldbm/proto-back-ldbm.h
+++ b/ldap/servers/slapd/back-ldbm/proto-back-ldbm.h
@@ -55,6 +55,7 @@ void cache_unlock_entry(struct cache *cache, struct backentry *e);
int cache_replace(struct cache *cache, void *oldptr, void *newptr);
int cache_has_otherref(struct cache *cache, void *bep);
int cache_is_in_cache(struct cache *cache, void *ptr);
+void revert_cache(ldbm_instance *inst, struct timespec *start_time);
#ifdef CACHE_DEBUG
void check_entry_cache(struct cache *cache, struct backentry *e);
diff --git a/ldap/servers/slapd/slapi-plugin.h b/ldap/servers/slapd/slapi-plugin.h
index 4bf2268..9135a12 100644
--- a/ldap/servers/slapd/slapi-plugin.h
+++ b/ldap/servers/slapd/slapi-plugin.h
@@ -6748,6 +6748,12 @@ time_t slapi_current_time(void) __attribute__((deprecated));
*/
struct timespec slapi_current_time_hr(void);
/**
+ * Returns the current system time as a hr clock
+ *
+ * \return timespec of the current monotonic time.
+ */
+struct timespec slapi_current_rel_time_hr(void);
+/**
* Returns the current system time as a hr clock in UTC timezone.
* This clock adjusts with ntp steps, and should NOT be
* used for timer information.
--
To stop receiving notification emails like this one, please contact
the administrator of this repository.
[389-ds-base] branch 389-ds-base-1.4.0 updated: Issue 50041 - Add CLI functionality for special plugins
by pagure@pagure.io
This is an automated email from the git hooks/post-receive script.
mreynolds pushed a commit to branch 389-ds-base-1.4.0
in repository 389-ds-base.
The following commit(s) were added to refs/heads/389-ds-base-1.4.0 by this push:
new 1f15e96 Issue 50041 - Add CLI functionality for special plugins
1f15e96 is described below
commit 1f15e966cb8265fb636e12e18ac516bb127c2db0
Author: Simon Pichugin <spichugi(a)redhat.com>
AuthorDate: Mon Feb 18 22:45:01 2019 +0100
Issue 50041 - Add CLI functionality for special plugins
Description: Add the functionality for
account-policy, attr-uniq, automember, dna, linked-attr,
managed-entries, memberof, pass-through-auth, refer-init,
retro-changelog, root-dn, usn commands.
Make DSLdapObject create an entry with only DN and attributes
(cases when RDN is not specified).
Fix two small typos in pwpolicy CLI's arguments.
Port test for DNA plugin.
https://pagure.io/389-ds-base/issue/50041
Reviewed by: wibrown, mreynolds, mhonek (Thanks!)
(cherry picked from commit 46e28cb4229f590c225f2a52bc8169e6fcc2d65b)
---
dirsrvtests/tests/suites/plugins/dna_test.py | 84 +++++
dirsrvtests/tests/tickets/ticket47937_test.py | 122 -------
.../389-console/src/lib/plugins/accountPolicy.jsx | 2 +-
.../src/lib/plugins/attributeUniqueness.jsx | 2 +-
.../src/lib/plugins/linkedAttributes.jsx | 2 +-
.../389-console/src/lib/plugins/managedEntries.jsx | 2 +-
.../389-console/src/lib/plugins/memberOf.jsx | 4 +-
.../src/lib/plugins/passthroughAuthentication.jsx | 2 +-
.../src/lib/plugins/referentialIntegrity.jsx | 2 +-
.../389-console/src/lib/plugins/retroChangelog.jsx | 2 +-
.../src/lib/plugins/rootDNAccessControl.jsx | 2 +-
src/cockpit/389-console/src/plugins.jsx | 2 +-
src/lib389/lib389/_mapped_object.py | 6 +-
src/lib389/lib389/cli_conf/__init__.py | 23 +-
src/lib389/lib389/cli_conf/plugin.py | 10 +-
.../lib389/cli_conf/plugins/accountpolicy.py | 112 ++++++-
src/lib389/lib389/cli_conf/plugins/attruniq.py | 116 ++++++-
src/lib389/lib389/cli_conf/plugins/automember.py | 363 ++++++++++++---------
src/lib389/lib389/cli_conf/plugins/dna.py | 240 +++++++++++++-
src/lib389/lib389/cli_conf/plugins/linkedattr.py | 119 ++++++-
.../lib389/cli_conf/plugins/managedentries.py | 225 ++++++++++++-
src/lib389/lib389/cli_conf/plugins/memberof.py | 101 +++---
.../lib389/cli_conf/plugins/passthroughauth.py | 76 ++++-
src/lib389/lib389/cli_conf/plugins/referint.py | 208 ++----------
.../lib389/cli_conf/plugins/retrochangelog.py | 42 ++-
src/lib389/lib389/cli_conf/plugins/rootdn_ac.py | 265 +++------------
src/lib389/lib389/cli_conf/plugins/usn.py | 39 ++-
src/lib389/lib389/cli_conf/plugins/whoami.py | 16 -
src/lib389/lib389/cli_conf/pwpolicy.py | 4 +-
src/lib389/lib389/plugins.py | 208 +++++++++++-
30 files changed, 1609 insertions(+), 792 deletions(-)
diff --git a/dirsrvtests/tests/suites/plugins/dna_test.py b/dirsrvtests/tests/suites/plugins/dna_test.py
new file mode 100644
index 0000000..3418048
--- /dev/null
+++ b/dirsrvtests/tests/suites/plugins/dna_test.py
@@ -0,0 +1,84 @@
+# --- BEGIN COPYRIGHT BLOCK ---
+# Copyright (C) 2019 Red Hat, Inc.
+# All rights reserved.
+#
+# License: GPL (version 3 or any later version).
+# See LICENSE for details.
+# --- END COPYRIGHT BLOCK ---
+#
+"""Test DNA plugin functionality"""
+
+import logging
+import pytest
+from lib389._constants import DEFAULT_SUFFIX
+from lib389.plugins import DNAPlugin, DNAPluginSharedConfigs, DNAPluginConfigs
+from lib389.idm.organizationalunit import OrganizationalUnits
+from lib389.idm.user import UserAccounts
+from lib389.topologies import topology_st
+import ldap
+
+log = logging.getLogger(__name__)
+
+
+@pytest.mark.ds47937
+def test_dnatype_only_valid(topology_st):
+ """Test that DNA plugin only accepts valid attributes for "dnaType"
+
+ :id: 0878ecff-5fdc-47d7-8c8f-edf4556f9746
+ :setup: Standalone Instance
+ :steps:
+ 1. Create a use entry
+ 2. Create DNA shared config entry container
+ 3. Create DNA shared config entry
+ 4. Add DNA plugin config entry
+ 5. Enable DNA plugin
+ 6. Restart the instance
+ 7. Replace dnaType with invalid value
+ :expectedresults:
+ 1. Successful
+ 2. Successful
+ 3. Successful
+ 4. Successful
+ 5. Successful
+ 6. Successful
+ 7. Unwilling to perform exception should be raised
+ """
+
+ inst = topology_st.standalone
+ plugin = DNAPlugin(inst)
+
+ log.info("Creating an entry...")
+ users = UserAccounts(inst, DEFAULT_SUFFIX)
+ users.create_test_user(uid=1)
+
+ log.info("Creating \"ou=ranges\"...")
+ ous = OrganizationalUnits(inst, DEFAULT_SUFFIX)
+ ou_ranges = ous.create(properties={'ou': 'ranges'})
+ ou_people = ous.get("People")
+
+ log.info("Creating DNA shared config entry...")
+ shared_configs = DNAPluginSharedConfigs(inst, ou_ranges.dn)
+ shared_configs.create(properties={'dnaHostName': str(inst.host),
+ 'dnaPortNum': str(inst.port),
+ 'dnaRemainingValues': '9501'})
+
+ log.info("Add dna plugin config entry...")
+ configs = DNAPluginConfigs(inst, plugin.dn)
+ config = configs.create(properties={'cn': 'dna config',
+ 'dnaType': 'description',
+ 'dnaMaxValue': '10000',
+ 'dnaMagicRegen': '0',
+ 'dnaFilter': '(objectclass=top)',
+ 'dnaScope': ou_people.dn,
+ 'dnaNextValue': '500',
+ 'dnaSharedCfgDN': ou_ranges.dn})
+
+ log.info("Enable the DNA plugin...")
+ plugin.enable()
+
+ log.info("Restarting the server...")
+ inst.restart()
+
+ log.info("Apply an invalid attribute to the DNA config(dnaType: foo)...")
+ with pytest.raises(ldap.UNWILLING_TO_PERFORM):
+ config.replace('dnaType', 'foo')
diff --git a/dirsrvtests/tests/tickets/ticket47937_test.py b/dirsrvtests/tests/tickets/ticket47937_test.py
deleted file mode 100644
index 0a4c18d..0000000
--- a/dirsrvtests/tests/tickets/ticket47937_test.py
+++ /dev/null
@@ -1,122 +0,0 @@
-# --- BEGIN COPYRIGHT BLOCK ---
-# Copyright (C) 2016 Red Hat, Inc.
-# All rights reserved.
-#
-# License: GPL (version 3 or any later version).
-# See LICENSE for details.
-# --- END COPYRIGHT BLOCK ---
-#
-import logging
-import time
-
-import ldap
-import pytest
-from lib389 import Entry
-from lib389._constants import *
-from lib389.topologies import topology_st
-
-log = logging.getLogger(__name__)
-
-
-def test_ticket47937(topology_st):
- """
- Test that DNA plugin only accepts valid attributes for "dnaType"
- """
-
- log.info("Creating \"ou=people\"...")
- try:
- topology_st.standalone.add_s(Entry(('ou=people,' + SUFFIX, {
- 'objectclass': 'top organizationalunit'.split(),
- 'ou': 'people'
- })))
-
- except ldap.ALREADY_EXISTS:
- pass
- except ldap.LDAPError as e:
- log.error('Failed to add ou=people org unit: error ' + e.args[0]['desc'])
- assert False
-
- log.info("Creating \"ou=ranges\"...")
- try:
- topology_st.standalone.add_s(Entry(('ou=ranges,' + SUFFIX, {
- 'objectclass': 'top organizationalunit'.split(),
- 'ou': 'ranges'
- })))
-
- except ldap.LDAPError as e:
- log.error('Failed to add ou=ranges org unit: error ' + e.args[0]['desc'])
- assert False
-
- log.info("Creating \"cn=entry\"...")
- try:
- topology_st.standalone.add_s(Entry(('cn=entry,ou=people,' + SUFFIX, {
- 'objectclass': 'top groupofuniquenames'.split(),
- 'cn': 'entry'
- })))
-
- except ldap.LDAPError as e:
- log.error('Failed to add test entry: error ' + e.args[0]['desc'])
- assert False
-
- log.info("Creating DNA shared config entry...")
- try:
- topology_st.standalone.add_s(Entry(('dnaHostname=localhost.localdomain+dnaPortNum=389,ou=ranges,%s' % SUFFIX, {
- 'objectclass': 'top dnaSharedConfig'.split(),
- 'dnaHostname': 'localhost.localdomain',
- 'dnaPortNum': '389',
- 'dnaSecurePortNum': '636',
- 'dnaRemainingValues': '9501'
- })))
-
- except ldap.LDAPError as e:
- log.error('Failed to add shared config entry: error ' + e.args[0]['desc'])
- assert False
-
- log.info("Add dna plugin config entry...")
- try:
- topology_st.standalone.add_s(
- Entry(('cn=dna config,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config', {
- 'objectclass': 'top dnaPluginConfig'.split(),
- 'dnaType': 'description',
- 'dnaMaxValue': '10000',
- 'dnaMagicRegen': '0',
- 'dnaFilter': '(objectclass=top)',
- 'dnaScope': 'ou=people,%s' % SUFFIX,
- 'dnaNextValue': '500',
- 'dnaSharedCfgDN': 'ou=ranges,%s' % SUFFIX
- })))
-
- except ldap.LDAPError as e:
- log.error('Failed to add DNA config entry: error ' + e.args[0]['desc'])
- assert False
-
- log.info("Enable the DNA plugin...")
- try:
- topology_st.standalone.plugins.enable(name=PLUGIN_DNA)
- except e:
- log.error("Failed to enable DNA Plugin: error " + e.args[0]['desc'])
- assert False
-
- log.info("Restarting the server...")
- topology_st.standalone.stop(timeout=120)
- time.sleep(1)
- topology_st.standalone.start(timeout=120)
- time.sleep(3)
-
- log.info("Apply an invalid attribute to the DNA config(dnaType: foo)...")
-
- try:
- topology_st.standalone.modify_s('cn=dna config,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config',
- [(ldap.MOD_REPLACE, 'dnaType', b'foo')])
- except ldap.LDAPError as e:
- log.info('Operation failed as expected (error: %s)' % e.args[0]['desc'])
- else:
-        log.error('Operation incorrectly succeeded! Test Failed!')
- assert False
-
-
-if __name__ == '__main__':
- # Run isolated
- # -s for DEBUG mode
- CURRENT_FILE = os.path.realpath(__file__)
- pytest.main("-s %s" % CURRENT_FILE)
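
The removed test above relies on the try/except/else idiom: an operation that should be rejected only passes the test when the expected exception is raised. A minimal standalone sketch of that pattern (`UnwillingToPerform` and `fake_modify` are stand-ins, not real python-ldap or lib389 names):

```python
# Sketch of the expected-failure pattern from the removed DNA test.
# UnwillingToPerform stands in for ldap.UNWILLING_TO_PERFORM, and
# fake_modify stands in for an LDAP modify call against the server.

class UnwillingToPerform(Exception):
    """Stand-in for ldap.UNWILLING_TO_PERFORM."""

def fake_modify(attr, value):
    # Pretend the server validates dnaType and rejects unknown values.
    if attr == "dnaType" and value == "foo":
        raise UnwillingToPerform("invalid dnaType")

def expect_failure(attr, value):
    try:
        fake_modify(attr, value)
    except UnwillingToPerform:
        return True   # operation failed as expected
    else:
        return False  # operation incorrectly succeeded

print(expect_failure("dnaType", "foo"))       # True
print(expect_failure("description", "500"))   # False
```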
diff --git a/src/cockpit/389-console/src/lib/plugins/accountPolicy.jsx b/src/cockpit/389-console/src/lib/plugins/accountPolicy.jsx
index 90ff501..fae8652 100644
--- a/src/cockpit/389-console/src/lib/plugins/accountPolicy.jsx
+++ b/src/cockpit/389-console/src/lib/plugins/accountPolicy.jsx
@@ -13,7 +13,7 @@ class AccountPolicy extends React.Component {
serverId={this.props.serverId}
cn="Account Policy Plugin"
pluginName="Account Policy"
- cmdName="accountpolicy"
+ cmdName="account-policy"
savePluginHandler={this.props.savePluginHandler}
pluginListHandler={this.props.pluginListHandler}
addNotification={this.props.addNotification}
diff --git a/src/cockpit/389-console/src/lib/plugins/attributeUniqueness.jsx b/src/cockpit/389-console/src/lib/plugins/attributeUniqueness.jsx
index 3d708de..0521a89 100644
--- a/src/cockpit/389-console/src/lib/plugins/attributeUniqueness.jsx
+++ b/src/cockpit/389-console/src/lib/plugins/attributeUniqueness.jsx
@@ -13,7 +13,7 @@ class AttributeUniqueness extends React.Component {
serverId={this.props.serverId}
cn="attribute uniqueness"
pluginName="Attribute Uniqueness"
- cmdName="attruniq"
+ cmdName="attr-uniq"
savePluginHandler={this.props.savePluginHandler}
pluginListHandler={this.props.pluginListHandler}
addNotification={this.props.addNotification}
diff --git a/src/cockpit/389-console/src/lib/plugins/linkedAttributes.jsx b/src/cockpit/389-console/src/lib/plugins/linkedAttributes.jsx
index 5982fcc..5216b15 100644
--- a/src/cockpit/389-console/src/lib/plugins/linkedAttributes.jsx
+++ b/src/cockpit/389-console/src/lib/plugins/linkedAttributes.jsx
@@ -13,7 +13,7 @@ class LinkedAttributes extends React.Component {
serverId={this.props.serverId}
cn="Linked Attributes"
pluginName="Linked Attributes"
- cmdName="linkedattr"
+ cmdName="linked-attr"
savePluginHandler={this.props.savePluginHandler}
pluginListHandler={this.props.pluginListHandler}
addNotification={this.props.addNotification}
diff --git a/src/cockpit/389-console/src/lib/plugins/managedEntries.jsx b/src/cockpit/389-console/src/lib/plugins/managedEntries.jsx
index 4bd5657..11771b7 100644
--- a/src/cockpit/389-console/src/lib/plugins/managedEntries.jsx
+++ b/src/cockpit/389-console/src/lib/plugins/managedEntries.jsx
@@ -13,7 +13,7 @@ class ManagedEntries extends React.Component {
serverId={this.props.serverId}
cn="Managed Entries"
pluginName="Managed Entries"
- cmdName="managedentries"
+ cmdName="managed-entries"
savePluginHandler={this.props.savePluginHandler}
pluginListHandler={this.props.pluginListHandler}
addNotification={this.props.addNotification}
diff --git a/src/cockpit/389-console/src/lib/plugins/memberOf.jsx b/src/cockpit/389-console/src/lib/plugins/memberOf.jsx
index 51ecd59..d838054 100644
--- a/src/cockpit/389-console/src/lib/plugins/memberOf.jsx
+++ b/src/cockpit/389-console/src/lib/plugins/memberOf.jsx
@@ -369,7 +369,7 @@ class MemberOf extends React.Component {
}
editConfig() {
- this.cmdOperation("edit");
+ this.cmdOperation("set");
}
handleCheckboxChange(e) {
@@ -473,7 +473,7 @@ class MemberOf extends React.Component {
"ldapi://%2fvar%2frun%2fslapd-" + this.props.serverId + ".socket",
"plugin",
"memberof",
- "edit",
+ "set",
"--scope",
memberOfEntryScope || "delete",
"--exclude",
diff --git a/src/cockpit/389-console/src/lib/plugins/passthroughAuthentication.jsx b/src/cockpit/389-console/src/lib/plugins/passthroughAuthentication.jsx
index dfa08c7..5b6f76c 100644
--- a/src/cockpit/389-console/src/lib/plugins/passthroughAuthentication.jsx
+++ b/src/cockpit/389-console/src/lib/plugins/passthroughAuthentication.jsx
@@ -13,7 +13,7 @@ class PassthroughAuthentication extends React.Component {
serverId={this.props.serverId}
cn="Pass Through Authentication"
pluginName="Pass Through Authentication"
- cmdName="passthroughauth"
+ cmdName="pass-through-auth"
savePluginHandler={this.props.savePluginHandler}
pluginListHandler={this.props.pluginListHandler}
addNotification={this.props.addNotification}
diff --git a/src/cockpit/389-console/src/lib/plugins/referentialIntegrity.jsx b/src/cockpit/389-console/src/lib/plugins/referentialIntegrity.jsx
index 20e97ff..96e8464 100644
--- a/src/cockpit/389-console/src/lib/plugins/referentialIntegrity.jsx
+++ b/src/cockpit/389-console/src/lib/plugins/referentialIntegrity.jsx
@@ -13,7 +13,7 @@ class ReferentialIntegrity extends React.Component {
serverId={this.props.serverId}
cn="referential integrity postoperation"
pluginName="Referential Integrity"
- cmdName="referint"
+ cmdName="referential-integrity"
savePluginHandler={this.props.savePluginHandler}
pluginListHandler={this.props.pluginListHandler}
addNotification={this.props.addNotification}
diff --git a/src/cockpit/389-console/src/lib/plugins/retroChangelog.jsx b/src/cockpit/389-console/src/lib/plugins/retroChangelog.jsx
index 51d7bb4..4e3490b 100644
--- a/src/cockpit/389-console/src/lib/plugins/retroChangelog.jsx
+++ b/src/cockpit/389-console/src/lib/plugins/retroChangelog.jsx
@@ -13,7 +13,7 @@ class RetroChangelog extends React.Component {
serverId={this.props.serverId}
cn="Retro Changelog Plugin"
pluginName="Retro Changelog"
- cmdName="retrochangelog"
+ cmdName="retro-changelog"
savePluginHandler={this.props.savePluginHandler}
pluginListHandler={this.props.pluginListHandler}
addNotification={this.props.addNotification}
diff --git a/src/cockpit/389-console/src/lib/plugins/rootDNAccessControl.jsx b/src/cockpit/389-console/src/lib/plugins/rootDNAccessControl.jsx
index 27c1d37..3e4d820 100644
--- a/src/cockpit/389-console/src/lib/plugins/rootDNAccessControl.jsx
+++ b/src/cockpit/389-console/src/lib/plugins/rootDNAccessControl.jsx
@@ -13,7 +13,7 @@ class RootDNAccessControl extends React.Component {
serverId={this.props.serverId}
cn="RootDN Access Control"
pluginName="RootDN Access Control"
- cmdName="rootdn"
+ cmdName="root-dn"
savePluginHandler={this.props.savePluginHandler}
pluginListHandler={this.props.pluginListHandler}
addNotification={this.props.addNotification}
diff --git a/src/cockpit/389-console/src/plugins.jsx b/src/cockpit/389-console/src/plugins.jsx
index d2b6932..5481d1a 100644
--- a/src/cockpit/389-console/src/plugins.jsx
+++ b/src/cockpit/389-console/src/plugins.jsx
@@ -196,7 +196,7 @@ export class Plugins extends React.Component {
"-j",
"ldapi://%2fvar%2frun%2fslapd-" + this.props.serverId + ".socket",
"plugin",
- "edit",
+ "set",
data.name,
"--type",
data.type || "delete",
diff --git a/src/lib389/lib389/_mapped_object.py b/src/lib389/lib389/_mapped_object.py
index bc0c8e6..6e2538e 100644
--- a/src/lib389/lib389/_mapped_object.py
+++ b/src/lib389/lib389/_mapped_object.py
@@ -693,7 +693,11 @@ class DSLdapObject(DSLogging):
if self._must_attributes is not None:
for attr in self._must_attributes:
if properties.get(attr, None) is None:
- raise ldap.UNWILLING_TO_PERFORM('Attribute %s must not be None' % attr)
+ # Put RDN to properties
+ if attr == self._rdn_attribute and rdn is not None:
+ properties[self._rdn_attribute] = ldap.dn.str2dn(rdn)[0][0][1]
+ else:
+ raise ldap.UNWILLING_TO_PERFORM('Attribute %s must not be None' % attr)
# Make sure the naming attribute is present
if properties.get(self._rdn_attribute, None) is None and rdn is None:
diff --git a/src/lib389/lib389/cli_conf/__init__.py b/src/lib389/lib389/cli_conf/__init__.py
index 1ba3b4a..9e1daed 100644
--- a/src/lib389/lib389/cli_conf/__init__.py
+++ b/src/lib389/lib389/cli_conf/__init__.py
@@ -1,5 +1,5 @@
# --- BEGIN COPYRIGHT BLOCK ---
-# Copyright (C) 2018 Red Hat, Inc.
+# Copyright (C) 2019 Red Hat, Inc.
# All rights reserved.
#
# License: GPL (version 3 or any later version).
@@ -23,19 +23,30 @@ def _args_to_attrs(args, arg_to_attr):
return attrs
-def generic_object_add(dsldap_object, log, args, arg_to_attr, props={}):
- """Create an entry using DSLdapObject interface
+def generic_object_add(dsldap_objects_class, inst, log, args, arg_to_attr, dn=None, basedn=None, props={}):
+ """Create an entry using DSLdapObjects interface
- dsldap_object should be a single instance of DSLdapObject with a set dn
+    dsldap_objects_class should be a class inherited from the DSLdapObjects class
"""
log = log.getChild('generic_object_add')
# Gather the attributes
attrs = _args_to_attrs(args, arg_to_attr)
- # Update the parameters (which should have at least 'cn') with arg attributes
props.update({attr: value for (attr, value) in attrs.items() if value != ""})
- new_object = dsldap_object.create(properties=props)
+
+    # Get the RDN attribute and base DN from the DN if the base DN is not specified
+    rdn = None
+    if dn is not None and basedn is None:
+        dn_parts = ldap.dn.explode_dn(dn)
+        rdn = dn_parts[0]
+        basedn = ",".join(dn_parts[1:])
+    elif basedn is None:
+        raise ValueError('If the base DN is not specified, the DN parameter should be')
+
+ new_object = dsldap_objects_class(inst, dn=dn)
+ new_object.create(rdn=rdn, basedn=basedn, properties=props)
log.info("Successfully created the %s", new_object.dn)
+ return new_object
def generic_object_edit(dsldap_object, log, args, arg_to_attr):
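
The reworked `generic_object_add` splits a full DN into its leading RDN and the remaining base DN. A naive pure-Python sketch of that split (the real code uses `ldap.dn.explode_dn`, which also handles escaped commas; this version assumes no escaped `,` in attribute values):

```python
# Naive sketch of the DN -> (rdn, basedn) split done in generic_object_add.
# Assumption: no escaped commas in the DN; the real implementation relies
# on ldap.dn.explode_dn for correct DN parsing.

def split_dn(dn):
    dn_parts = dn.split(",")         # assumption: no escaped ','
    rdn = dn_parts[0]                # e.g. "cn=config"
    basedn = ",".join(dn_parts[1:])  # everything after the first RDN
    return rdn, basedn

rdn, basedn = split_dn("cn=config,cn=Account Policy Plugin,cn=plugins,cn=config")
print(rdn)     # cn=config
print(basedn)  # cn=Account Policy Plugin,cn=plugins,cn=config
```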
diff --git a/src/lib389/lib389/cli_conf/plugin.py b/src/lib389/lib389/cli_conf/plugin.py
index 5cc7c8c..9509b84 100644
--- a/src/lib389/lib389/cli_conf/plugin.py
+++ b/src/lib389/lib389/cli_conf/plugin.py
@@ -17,7 +17,6 @@ from lib389.cli_conf import generic_object_edit
from lib389.cli_conf.plugins import memberof as cli_memberof
from lib389.cli_conf.plugins import usn as cli_usn
from lib389.cli_conf.plugins import rootdn_ac as cli_rootdn_ac
-from lib389.cli_conf.plugins import whoami as cli_whoami
from lib389.cli_conf.plugins import referint as cli_referint
from lib389.cli_conf.plugins import accountpolicy as cli_accountpolicy
from lib389.cli_conf.plugins import attruniq as cli_attruniq
@@ -42,7 +41,8 @@ arg_to_attr = {
'vendor': 'nsslapd-pluginVendor',
'description': 'nsslapd-pluginDescription',
'depends_on_type': 'nsslapd-plugin-depends-on-type',
- 'depends_on_named': 'nsslapd-plugin-depends-on-named'
+ 'depends_on_named': 'nsslapd-plugin-depends-on-named',
+ 'precedence': 'nsslapd-pluginPrecedence'
}
@@ -111,16 +111,15 @@ def create_parser(subparsers):
cli_managedentries.create_parser(subcommands)
cli_passthroughauth.create_parser(subcommands)
cli_retrochangelog.create_parser(subcommands)
- cli_whoami.create_parser(subcommands)
list_parser = subcommands.add_parser('list', help="List current configured (enabled and disabled) plugins")
list_parser.set_defaults(func=plugin_list)
- get_parser = subcommands.add_parser('get', help='Get the plugin data')
+ get_parser = subcommands.add_parser('show', help='Show the plugin data')
get_parser.set_defaults(func=plugin_get)
get_parser.add_argument('selector', nargs='?', help='The plugin to search for')
- edit_parser = subcommands.add_parser('edit', help='Edit the plugin')
+ edit_parser = subcommands.add_parser('set', help='Edit the plugin')
edit_parser.set_defaults(func=plugin_edit)
edit_parser.add_argument('selector', nargs='?', help='The plugin to edit')
edit_parser.add_argument('--type', help='The type of plugin.')
@@ -138,3 +137,4 @@ def create_parser(subparsers):
edit_parser.add_argument('--depends-on-named',
help='The plug-in name matching one of the following values will be '
'started by the server prior to this plug-in')
+ edit_parser.add_argument('--precedence', help='The priority it has in the execution order of plug-ins')
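
The patch renames the `edit`/`get` subcommands to `set`/`show` and adds `--precedence`. A minimal argparse sketch of that subparser layout (the handler bodies here are illustrative stand-ins, not the real lib389 `plugin_edit`/`plugin_get` functions):

```python
# Sketch of the dsconf-style subcommand layout after the rename:
# 'set' and 'show' subparsers bind handlers via set_defaults(func=...).
import argparse

def plugin_set(args):
    # Stand-in for the real plugin_edit handler.
    return "set %s precedence=%s" % (args.selector, args.precedence)

def plugin_show(args):
    # Stand-in for the real plugin_get handler.
    return "show %s" % args.selector

parser = argparse.ArgumentParser(prog="dsconf")
subcommands = parser.add_subparsers()

set_parser = subcommands.add_parser("set", help="Edit the plugin")
set_parser.set_defaults(func=plugin_set)
set_parser.add_argument("selector")
set_parser.add_argument("--precedence")

show_parser = subcommands.add_parser("show", help="Show the plugin data")
show_parser.set_defaults(func=plugin_show)
show_parser.add_argument("selector")

args = parser.parse_args(["set", "memberof", "--precedence", "50"])
print(args.func(args))  # set memberof precedence=50
```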
diff --git a/src/lib389/lib389/cli_conf/plugins/accountpolicy.py b/src/lib389/lib389/cli_conf/plugins/accountpolicy.py
index d33e054..e2fa1e1 100644
--- a/src/lib389/lib389/cli_conf/plugins/accountpolicy.py
+++ b/src/lib389/lib389/cli_conf/plugins/accountpolicy.py
@@ -1,16 +1,118 @@
# --- BEGIN COPYRIGHT BLOCK ---
-# Copyright (C) 2018 Red Hat, Inc.
+# Copyright (C) 2019 Red Hat, Inc.
# All rights reserved.
#
# License: GPL (version 3 or any later version).
# See LICENSE for details.
# --- END COPYRIGHT BLOCK ---
-from lib389.plugins import AccountPolicyPlugin
-from lib389.cli_conf import add_generic_plugin_parsers
+import ldap
+from lib389.plugins import AccountPolicyPlugin, AccountPolicyConfigs, AccountPolicyConfig
+from lib389.cli_conf import add_generic_plugin_parsers, generic_object_edit, generic_object_add
+
+arg_to_attr = {
+ 'config_entry': 'nsslapd-pluginConfigArea'
+}
+
+arg_to_attr_config = {
+ 'alt_state_attr': 'altstateattrname',
+ 'always_record_login': 'alwaysRecordLogin',
+ 'always_record_login_attr': 'alwaysRecordLoginAttr',
+ 'limit_attr': 'limitattrname',
+ 'spec_attr': 'specattrname',
+ 'state_attr': 'stateattrname'
+}
+
+def accountpolicy_edit(inst, basedn, log, args):
+ log = log.getChild('accountpolicy_edit')
+ plugin = AccountPolicyPlugin(inst)
+ generic_object_edit(plugin, log, args, arg_to_attr)
+
+
+def accountpolicy_add_config(inst, basedn, log, args):
+ log = log.getChild('accountpolicy_add_config')
+ targetdn = args.DN
+ config = generic_object_add(AccountPolicyConfig, inst, log, args, arg_to_attr_config, dn=targetdn)
+ plugin = AccountPolicyPlugin(inst)
+    plugin.replace('nsslapd-pluginConfigArea', config.dn)
+ log.info('Account Policy attribute nsslapd-pluginConfigArea (config_entry) '
+ 'was set in the main plugin config')
+
+
+def accountpolicy_edit_config(inst, basedn, log, args):
+ log = log.getChild('accountpolicy_edit_config')
+ targetdn = args.DN
+ config = AccountPolicyConfig(inst, targetdn)
+ generic_object_edit(config, log, args, arg_to_attr_config)
+
+
+def accountpolicy_show_config(inst, basedn, log, args):
+ log = log.getChild('accountpolicy_show_config')
+ targetdn = args.DN
+ config = AccountPolicyConfig(inst, targetdn)
+
+ if not config.exists():
+        raise ldap.NO_SUCH_OBJECT("Entry %s doesn't exist" % targetdn)
+ if args and args.json:
+ o_str = config.get_all_attrs_json()
+ print(o_str)
+ else:
+ print(config.display())
+
+
+def accountpolicy_del_config(inst, basedn, log, args):
+ log = log.getChild('accountpolicy_del_config')
+ targetdn = args.DN
+ config = AccountPolicyConfig(inst, targetdn)
+ config.delete()
+ log.info("Successfully deleted the %s", targetdn)
+
+
+def _add_parser_args(parser):
+ parser.add_argument('--always-record-login', choices=['yes', 'no'],
+ help='Sets that every entry records its last login time (alwaysRecordLogin)')
+ parser.add_argument('--alt-state-attr',
+ help='Provides a backup attribute for the server to reference '
+ 'to evaluate the expiration time (altStateAttrName)')
+ parser.add_argument('--always-record-login-attr',
+                        help='Specifies the attribute in the user directory entry that stores '
+                             'the time of the last successful login (alwaysRecordLoginAttr)')
+ parser.add_argument('--limit-attr',
+ help='Specifies the attribute within the policy to use '
+ 'for the account inactivation limit (limitAttrName)')
+ parser.add_argument('--spec-attr',
+ help='Specifies the attribute to identify which entries '
+ 'are account policy configuration entries (specAttrName)')
+ parser.add_argument('--state-attr',
+ help='Specifies the primary time attribute used to evaluate an account policy (stateAttrName)')
def create_parser(subparsers):
- accountpolicy_parser = subparsers.add_parser('accountpolicy', help='Manage and configure Account Policy plugin')
- subcommands = accountpolicy_parser.add_subparsers(help='action')
+ accountpolicy = subparsers.add_parser('account-policy', help='Manage and configure Account Policy plugin')
+ subcommands = accountpolicy.add_subparsers(help='action')
add_generic_plugin_parsers(subcommands, AccountPolicyPlugin)
+
+ edit = subcommands.add_parser('set', help='Edit the plugin')
+ edit.set_defaults(func=accountpolicy_edit)
+ edit.add_argument('--config-entry', help='The value to set as nsslapd-pluginConfigArea')
+
+ config = subcommands.add_parser('config-entry', help='Manage the config entry')
+ config_subcommands = config.add_subparsers(help='action')
+
+ add_config = config_subcommands.add_parser('add', help='Add the config entry')
+ add_config.set_defaults(func=accountpolicy_add_config)
+ add_config.add_argument('DN', help='The config entry full DN')
+ _add_parser_args(add_config)
+
+ edit_config = config_subcommands.add_parser('set', help='Edit the config entry')
+ edit_config.set_defaults(func=accountpolicy_edit_config)
+ edit_config.add_argument('DN', help='The config entry full DN')
+ _add_parser_args(edit_config)
+
+ show_config_parser = config_subcommands.add_parser('show', help='Display the config entry')
+ show_config_parser.set_defaults(func=accountpolicy_show_config)
+ show_config_parser.add_argument('DN', help='The config entry full DN')
+
+ del_config_parser = config_subcommands.add_parser('delete', help='Delete the config entry')
+ del_config_parser.set_defaults(func=accountpolicy_del_config)
+ del_config_parser.add_argument('DN', help='The config entry full DN')
diff --git a/src/lib389/lib389/cli_conf/plugins/attruniq.py b/src/lib389/lib389/cli_conf/plugins/attruniq.py
index 4c04b05..17dac15 100644
--- a/src/lib389/lib389/cli_conf/plugins/attruniq.py
+++ b/src/lib389/lib389/cli_conf/plugins/attruniq.py
@@ -1,16 +1,122 @@
# --- BEGIN COPYRIGHT BLOCK ---
-# Copyright (C) 2018 Red Hat, Inc.
+# Copyright (C) 2019 Red Hat, Inc.
# All rights reserved.
#
# License: GPL (version 3 or any later version).
# See LICENSE for details.
# --- END COPYRIGHT BLOCK ---
-from lib389.plugins import AttributeUniquenessPlugin
-from lib389.cli_conf import add_generic_plugin_parsers
+import json
+import ldap
+from lib389.plugins import AttributeUniquenessPlugin, AttributeUniquenessPlugins
+from lib389.cli_conf import add_generic_plugin_parsers, generic_object_edit, generic_object_add
+from lib389._constants import DN_PLUGIN
+
+arg_to_attr = {
+    'attr_name': 'uniqueness-attribute-name',
+    'subtree': 'uniqueness-subtrees',
+    'across_all_subtrees': 'uniqueness-across-all-subtrees',
+    'top_entry_oc': 'uniqueness-top-entry-oc',
+    'subtree_entries_oc': 'uniqueness-subtree-entries-oc'
+}
+
+
+def attruniq_list(inst, basedn, log, args):
+ log = log.getChild('attruniq_list')
+ plugins = AttributeUniquenessPlugins(inst)
+ result = []
+ result_json = []
+ for plugin in plugins.list():
+ if args.json:
+ result_json.append(plugin.get_all_attrs_json())
+ else:
+ result.append(plugin.rdn)
+ if args.json:
+ print(json.dumps({"type": "list", "items": result_json}))
+ else:
+ if len(result) > 0:
+ for i in result:
+ print(i)
+ else:
+ print("No Attribute Uniqueness plugin instances")
+
+
+def attruniq_add(inst, basedn, log, args):
+ log = log.getChild('attruniq_add')
+ props = {'cn': args.NAME}
+ generic_object_add(AttributeUniquenessPlugin, inst, log, args, arg_to_attr, basedn=DN_PLUGIN, props=props)
+
+
+def attruniq_edit(inst, basedn, log, args):
+ log = log.getChild('attruniq_edit')
+ plugins = AttributeUniquenessPlugins(inst)
+ plugin = plugins.get(args.NAME)
+ generic_object_edit(plugin, log, args, arg_to_attr)
+
+
+def attruniq_show(inst, basedn, log, args):
+ log = log.getChild('attruniq_show')
+ plugins = AttributeUniquenessPlugins(inst)
+ plugin = plugins.get(args.NAME)
+
+ if not plugin.exists():
+        raise ldap.NO_SUCH_OBJECT("Entry %s doesn't exist" % args.NAME)
+ if args and args.json:
+ o_str = plugin.get_all_attrs_json()
+ print(o_str)
+ else:
+ print(plugin.display())
+
+
+def attruniq_del(inst, basedn, log, args):
+ log = log.getChild('attruniq_del')
+ plugins = AttributeUniquenessPlugins(inst)
+ plugin = plugins.get(args.NAME)
+ plugin.delete()
+ log.info("Successfully deleted the %s", plugin.dn)
+
+
+def _add_parser_args(parser):
+ parser.add_argument('NAME', help='Sets the name of the plug-in configuration record. (cn) You can use any string, '
+ 'but "attribute_name Attribute Uniqueness" is recommended.')
+ parser.add_argument('--attr-name', nargs='+',
+ help='Sets the name of the attribute whose values must be unique. '
+ 'This attribute is multi-valued. (uniqueness-attribute-name)')
+ parser.add_argument('--subtree', nargs='+',
+ help='Sets the DN under which the plug-in checks for uniqueness of '
+                             'the attribute value. This attribute is multi-valued (uniqueness-subtrees)')
+ parser.add_argument('--across-all-subtrees', choices=['on', 'off'],
+ help='If enabled (on), the plug-in checks that the attribute is unique across all subtrees '
+ 'set. If you set the attribute to off, uniqueness is only enforced within the subtree '
+ 'of the updated entry (uniqueness-across-all-subtrees)')
+ parser.add_argument('--top-entry-oc',
+ help='Verifies that the value of the attribute set in uniqueness-attribute-name '
+ 'is unique in this subtree (uniqueness-top-entry-oc)')
+ parser.add_argument('--subtree-entries-oc',
+ help='Verifies if an attribute is unique, if the entry contains the object class '
+ 'set in this parameter (uniqueness-subtree-entries-oc)')
def create_parser(subparsers):
- attruniq_parser = subparsers.add_parser('attruniq', help='Manage and configure Attribute Uniqueness plugin')
- subcommands = attruniq_parser.add_subparsers(help='action')
+ attruniq = subparsers.add_parser('attr-uniq', help='Manage and configure Attribute Uniqueness plugin')
+ subcommands = attruniq.add_subparsers(help='action')
add_generic_plugin_parsers(subcommands, AttributeUniquenessPlugin)
+
+ list = subcommands.add_parser('list', help='List available plugin configs')
+ list.set_defaults(func=attruniq_list)
+
+ add = subcommands.add_parser('add', help='Add the config entry')
+ add.set_defaults(func=attruniq_add)
+ _add_parser_args(add)
+
+ edit = subcommands.add_parser('set', help='Edit the config entry')
+ edit.set_defaults(func=attruniq_edit)
+ _add_parser_args(edit)
+
+ show = subcommands.add_parser('show', help='Display the config entry')
+ show.add_argument('NAME', help='The name of the plug-in configuration record')
+ show.set_defaults(func=attruniq_show)
+
+ delete = subcommands.add_parser('delete', help='Delete the config entry')
+ delete.add_argument('NAME', help='Sets the name of the plug-in configuration record')
+ delete.set_defaults(func=attruniq_del)
diff --git a/src/lib389/lib389/cli_conf/plugins/automember.py b/src/lib389/lib389/cli_conf/plugins/automember.py
index a4e757e..d9fe1dd 100644
--- a/src/lib389/lib389/cli_conf/plugins/automember.py
+++ b/src/lib389/lib389/cli_conf/plugins/automember.py
@@ -1,5 +1,5 @@
# --- BEGIN COPYRIGHT BLOCK ---
-# Copyright (C) 2018 Red Hat, Inc.
+# Copyright (C) 2019 Red Hat, Inc.
# All rights reserved.
#
# License: GPL (version 3 or any later version).
@@ -8,165 +8,226 @@
import ldap
import json
-from lib389.plugins import AutoMembershipPlugin, AutoMembershipDefinitions
-from lib389.cli_conf import add_generic_plugin_parsers
+from lib389.plugins import (AutoMembershipPlugin, AutoMembershipDefinition, AutoMembershipDefinitions,
+ AutoMembershipRegexRule, AutoMembershipRegexRules)
+from lib389.cli_conf import add_generic_plugin_parsers, generic_object_edit, generic_object_add
-def list_definition(inst, basedn, log, args):
- """List automember definition if instance name
- is given else show all automember definitions.
+arg_to_attr_definition = {
+    'default_group': 'autoMemberDefaultGroup',
+    'filter': 'autoMemberFilter',
+    'grouping_attr': 'autoMemberGroupingAttr',
+    'scope': 'autoMemberScope'
+}
- :param name: An instance
- :type name: lib389.DirSrv
- """
-
- automembers = AutoMembershipDefinitions(inst)
-
- if args.name is not None:
- if args.json:
- print(automembers.get_all_attrs_json(args.name))
- else:
- automember = automembers.get(args.name)
- log.info(automember.display())
- else:
- all_definitions = automembers.list()
- if args.json:
- result = {'type': 'list', 'items': []}
- if len(all_definitions) > 0:
- for definition in all_definitions:
- if args.json:
- result['items'].append(definition)
- else:
- log.info(definition.display())
- else:
- log.info("No automember definitions were found")
-
- if args.json:
- print(json.dumps(result))
-
-
-def create_definition(inst, basedn, log, args):
- """
- Create automember definition.
-
- :param name: An instance
- :type name: lib389.DirSrv
- :param groupattr: autoMemberGroupingAttr value
- :type groupattr: str
- :param defaultgroup: autoMemberDefaultGroup value
- :type defaultgroup: str
- :param scope: autoMemberScope value
- :type scope: str
- :param filter: autoMemberFilter value
- :type filter: str
-
- """
- automember_prop = {
- 'cn': args.name,
- 'autoMemberScope': args.scope,
- 'autoMemberFilter': args.filter,
- 'autoMemberDefaultGroup': args.defaultgroup,
- 'autoMemberGroupingAttr': args.groupattr,
- }
-
- plugin = AutoMembershipPlugin(inst)
- plugin.enable()
-
- automembers = AutoMembershipDefinitions(inst)
-
- try:
- automember = automembers.create(properties=automember_prop)
- log.info("Automember definition created successfully!")
- except Exception as e:
- log.info("Failed to create Automember definition: {}".format(str(e)))
- raise e
-
-
-def edit_definition(inst, basedn, log, args):
- """
- Edit automember definition
-
- :param name: An instance
- :type name: lib389.DirSrv
- :param groupattr: autoMemberGroupingAttr value
- :type groupattr: str
- :param defaultgroup: autoMemberDefaultGroup value
- :type defaultgroup: str
- :param scope: autoMemberScope value
- :type scope: str
- :param filter: autoMemberFilter value
- :type filter: str
-
- """
- automembers = AutoMembershipDefinitions(inst)
- automember = automembers.get(args.name)
+arg_to_attr_regex = {
+    'exclusive': 'autoMemberExclusiveRegex',
+    'inclusive': 'autoMemberInclusiveRegex',
+    'target_group': 'autoMemberTargetGroup'
+}
- if args.scope is not None:
- automember.replace("automemberscope", args.scope)
- if args.filter is not None:
- automember.replace("automemberfilter", args.filter)
- if args.defaultgroup is not None:
- automember.replace("automemberdefaultgroup", args.defaultgroup)
- if args.groupattr is not None:
- automember.replace("automembergroupingattr", args.groupattr)
- log.info("Definition updated successfully.")
-
-def remove_definition(inst, basedn, log, args):
- """
- Remove automember definition for the given
- instance.
-
- :param name: An instance
- :type name: lib389.DirSrv
-
- """
+def definition_list(inst, basedn, log, args):
automembers = AutoMembershipDefinitions(inst)
- automember = automembers.get(args.name)
-
- automember.delete()
- log.info("Definition deleted successfully.")
+ all_definitions = automembers.list()
+ if args.json:
+ result = {'type': 'list', 'items': []}
+ if len(all_definitions) > 0:
+ for definition in all_definitions:
+ if args.json:
+ result['items'].append(definition)
+ else:
+ log.info(definition.rdn)
+ else:
+ log.info("No automember definitions were found")
+
+ if args.json:
+ print(json.dumps(result))
+
+
+def definition_add(inst, basedn, log, args):
+ log = log.getChild('definition_add')
+ plugin = AutoMembershipPlugin(inst)
+ props = {'cn': args.DEF_NAME}
+ generic_object_add(AutoMembershipDefinition, inst, log, args, arg_to_attr_definition, basedn=plugin.dn, props=props)
+
+
+def definition_edit(inst, basedn, log, args):
+ log = log.getChild('definition_edit')
+ definitions = AutoMembershipDefinitions(inst)
+ definition = definitions.get(args.DEF_NAME)
+ generic_object_edit(definition, log, args, arg_to_attr_definition)
+
+
+def definition_show(inst, basedn, log, args):
+ log = log.getChild('definition_show')
+ definitions = AutoMembershipDefinitions(inst)
+ definition = definitions.get(args.DEF_NAME)
+
+ if not definition.exists():
+        raise ldap.NO_SUCH_OBJECT("Entry %s doesn't exist" % args.DEF_NAME)
+ if args and args.json:
+ o_str = definition.get_all_attrs_json()
+ print(o_str)
+ else:
+ print(definition.display())
+
+
+def definition_del(inst, basedn, log, args):
+ log = log.getChild('definition_del')
+ definitions = AutoMembershipDefinitions(inst)
+ definition = definitions.get(args.DEF_NAME)
+ definition.delete()
+    log.info("Successfully deleted the %s definition", args.DEF_NAME)
+
+
+def regex_list(inst, basedn, log, args):
+ definitions = AutoMembershipDefinitions(inst)
+ definition = definitions.get(args.DEF_NAME)
+ regexes = AutoMembershipRegexRules(inst, definition.dn)
+ all_regexes = regexes.list()
+ if args.json:
+ result = {'type': 'list', 'items': []}
+ if len(all_regexes) > 0:
+ for regex in all_regexes:
+ if args.json:
+ result['items'].append(regex)
+ else:
+ log.info(regex.rdn)
+ else:
+ log.info("No automember regexes were found")
+
+ if args.json:
+ print(json.dumps(result))
+
+
+def regex_add(inst, basedn, log, args):
+ log = log.getChild('regex_add')
+ definitions = AutoMembershipDefinitions(inst)
+ definition = definitions.get(args.DEF_NAME)
+ props = {'cn': args.REGEX_NAME}
+ generic_object_add(AutoMembershipRegexRule, inst, log, args, arg_to_attr_regex, basedn=definition.dn, props=props)
+
+
+def regex_edit(inst, basedn, log, args):
+ log = log.getChild('regex_edit')
+ definitions = AutoMembershipDefinitions(inst)
+ definition = definitions.get(args.DEF_NAME)
+ regexes = AutoMembershipRegexRules(inst, definition.dn)
+ regex = regexes.get(args.REGEX_NAME)
+ generic_object_edit(regex, log, args, arg_to_attr_regex)
+
+
+def regex_show(inst, basedn, log, args):
+ log = log.getChild('regex_show')
+ definitions = AutoMembershipDefinitions(inst)
+ definition = definitions.get(args.DEF_NAME)
+ regexes = AutoMembershipRegexRules(inst, definition.dn)
+ regex = regexes.get(args.REGEX_NAME)
+
+ if not regex.exists():
+        raise ldap.NO_SUCH_OBJECT("Entry %s doesn't exist" % args.REGEX_NAME)
+ if args and args.json:
+ o_str = regex.get_all_attrs_json()
+ print(o_str)
+ else:
+ print(regex.display())
+
+
+def regex_del(inst, basedn, log, args):
+ log = log.getChild('regex_del')
+ definitions = AutoMembershipDefinitions(inst)
+ definition = definitions.get(args.DEF_NAME)
+ regexes = AutoMembershipRegexRules(inst, definition.dn)
+ regex = regexes.get(args.REGEX_NAME)
+ regex.delete()
+ log.info("Successfully deleted the %s regex", regex.dn)
+
+
+def fixup(inst, basedn, log, args):
+ plugin = AutoMembershipPlugin(inst)
+ log.info('Attempting to add task entry... This will fail if Automembership plug-in is not enabled.')
+    if not plugin.status():
+        log.error("'%s' is disabled. Rebuild membership task can't be executed" % plugin.rdn)
+        return
+ fixup_task = plugin.fixup(args.DN, args.filter)
+ fixup_task.wait()
+ exitcode = fixup_task.get_exit_code()
+ if exitcode != 0:
+        log.error('Rebuild membership task for %s has failed. Please check the logs' % args.DN)
+ else:
+ log.info('Successfully added task entry')
+
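
The `fixup` handler above submits a task entry, waits for it, then checks the exit code. A standalone sketch of that submit-and-wait pattern (`FakeTask` is a stand-in for the task object returned by `plugin.fixup()`, not the real lib389 class):

```python
# Sketch of the submit-and-wait pattern used by the fixup handler.
# FakeTask stands in for the lib389 task object returned by
# plugin.fixup(basedn, filter).

class FakeTask:
    def __init__(self, exit_code):
        self._exit_code = exit_code

    def wait(self):
        # The real task object polls the task entry until it completes.
        pass

    def get_exit_code(self):
        return self._exit_code

def run_fixup(task):
    task.wait()
    return task.get_exit_code() == 0  # True means the rebuild succeeded

print(run_fixup(FakeTask(0)))  # True
print(run_fixup(FakeTask(1)))  # False
```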
+
+def _add_parser_args_definition(parser):
+ parser.add_argument('--grouping-attr',
+ help='Specifies the name of the member attribute in the group entry and '
+ 'the attribute in the object entry that supplies the member attribute value, '
+ 'in the format group_member_attr:entry_attr (autoMemberGroupingAttr)')
+    parser.add_argument('--default-group', required=True,
+                        help='Sets the default or fallback group to add the entry to '
+                             'as a member (autoMemberDefaultGroup)')
+ parser.add_argument('--scope', required=True,
+ help='Sets the subtree DN to search for entries (autoMemberScope)')
+ parser.add_argument('--filter',
+ help='Sets a standard LDAP search filter to use to search for '
+ 'matching entries (autoMemberFilter)')
+
+
+def _add_parser_args_regex(parser):
+ parser.add_argument("--exclusive",
+ help='Sets a single regular expression to use to identify '
+ 'entries to exclude (autoMemberExclusiveRegex)')
+ parser.add_argument('--inclusive', required=True,
+ help='Sets a single regular expression to use to identify '
+ 'entries to include (autoMemberInclusiveRegex)')
+ parser.add_argument('--target-group', required=True,
+ help='Sets which group to add the entry to as a member, if it meets '
+ 'the regular expression conditions (autoMemberTargetGroup)')
def create_parser(subparsers):
- automember_parser = subparsers.add_parser('automember', help="Manage and configure automember plugin")
-
- subcommands = automember_parser.add_subparsers(help='action')
-
+ automember = subparsers.add_parser('automember', help="Manage and configure Automembership plugin")
+ subcommands = automember.add_subparsers(help='action')
add_generic_plugin_parsers(subcommands, AutoMembershipPlugin)
- create_parser = subcommands.add_parser('create', help='Create automember definition.')
- create_parser.set_defaults(func=create_definition)
-
- create_parser.add_argument("name", help='Set cn for group entry.')
-
- create_parser.add_argument("--groupattr", help='Set member attribute in group entry.', default='member:dn')
-
- create_parser.add_argument('--defaultgroup', required=True, help='Set default group to add member to.')
-
- create_parser.add_argument('--scope', required=True, help='Set automember scope.')
-
- create_parser.add_argument('--filter', help='Set automember filter.', default= '(objectClass=*)')
-
- show_parser = subcommands.add_parser('list', help='List automember definition.')
- show_parser.set_defaults(func=list_definition)
-
- show_parser.add_argument("--name", help='Set cn for group entry. If not specified show all automember definitions.')
-
- edit_parser = subcommands.add_parser('edit', help='Edit automember definition.')
- edit_parser.set_defaults(func=edit_definition)
-
- edit_parser.add_argument("name", help='Set cn for group entry.')
-
- edit_parser.add_argument("--groupattr", help='Set member attribute in group entry.')
-
- edit_parser.add_argument('--defaultgroup', help='Set default group to add member to.')
-
- edit_parser.add_argument('--scope', help='Set automember scope.')
-
- edit_parser.add_argument('--filter', help='Set automember filter.')
-
- remove_parser = subcommands.add_parser('remove', help='Remove automember definition.')
- remove_parser.set_defaults(func=remove_definition)
+ list = subcommands.add_parser('list', help='List Automembership definitions or regex rules.')
+ subcommands_list = list.add_subparsers(help='action')
+ list_definitions = subcommands_list.add_parser('definitions', help='List Automembership definitions.')
+ list_definitions.set_defaults(func=definition_list)
+ list_regexes = subcommands_list.add_parser('regexes', help='List Automembership regex rules.')
+ list_regexes.add_argument('DEF-NAME', help='The definition entry CN.')
+ list_regexes.set_defaults(func=regex_list)
+
+ definition = subcommands.add_parser('definition', help='Manage Automembership definition.')
+ definition.add_argument('DEF-NAME', help='The definition entry CN.')
+ subcommands_definition = definition.add_subparsers(help='action')
+
+ add_def = subcommands_definition.add_parser('add', help='Create Automembership definition.')
+ add_def.set_defaults(func=definition_add)
+ _add_parser_args_definition(add_def)
+ edit_def = subcommands_definition.add_parser('set', help='Edit Automembership definition.')
+ edit_def.set_defaults(func=definition_edit)
+ _add_parser_args_definition(edit_def)
+ delete_def = subcommands_definition.add_parser('delete', help='Remove Automembership definition.')
+ delete_def.set_defaults(func=definition_del)
+
+ regex = subcommands_definition.add_parser('regex', help='Manage Automembership regex rules.')
+ regex.add_argument('REGEX-NAME', help='The regex entry CN.')
+ subcommands_regex = regex.add_subparsers(help='action')
+
+ add_regex = subcommands_regex.add_parser('add', help='Create Automembership regex.')
+ add_regex.set_defaults(func=regex_add)
+    _add_parser_args_regex(add_regex)
+ edit_regex = subcommands_regex.add_parser('set', help='Edit Automembership regex.')
+ edit_regex.set_defaults(func=regex_edit)
+    _add_parser_args_regex(edit_regex)
+ delete_regex = subcommands_regex.add_parser('delete', help='Remove Automembership regex.')
+ delete_regex.set_defaults(func=regex_del)
+
+    fixup_parser = subcommands.add_parser('fixup', help='Run a rebuild membership task.')
+    fixup_parser.set_defaults(func=fixup)
+    fixup_parser.add_argument('DN', help="Base DN that contains entries to fix up")
+    fixup_parser.add_argument('-f', '--filter', required=True, help='LDAP filter for entries to fix up.')
+    fixup_parser.add_argument('-s', '--scope', required=True, choices=['sub', 'base', 'one'], type=str.lower,
+                              help='LDAP search scope for entries to fix up')
- remove_parser.add_argument("name", help='Set cn for group entry.')
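The automember CLI above is built from two levels of nested argparse subparsers, each handler wired in with `set_defaults(func=...)`. A minimal standalone sketch of that dispatch pattern (hypothetical command names, not the real dsconf wiring, and a stub handler instead of the real `(inst, basedn, log, args)` signature):

```python
import argparse

def definition_add(args):
    # Stand-in handler; the real lib389 handlers take (inst, basedn, log, args).
    return "add %s" % args.NAME

parser = argparse.ArgumentParser(prog="dsconf-sketch")
subparsers = parser.add_subparsers(help="action")

# First level: the object being managed, with its own positional argument.
definition = subparsers.add_parser("definition", help="Manage definitions")
definition.add_argument("NAME", help="The definition entry CN")
sub_def = definition.add_subparsers(help="action")

# Second level: the action, which selects the handler via set_defaults.
add_def = sub_def.add_parser("add", help="Create a definition")
add_def.add_argument("--scope", required=True)
add_def.set_defaults(func=definition_add)

args = parser.parse_args(["definition", "demo", "add", "--scope", "dc=example,dc=com"])
print(args.func(args))  # dispatches to the handler chosen by the subcommands
```

The positional `NAME` sits on the intermediate parser, so it is consumed before the action keyword, matching the `config NAME add ...` shape used throughout these files.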
diff --git a/src/lib389/lib389/cli_conf/plugins/dna.py b/src/lib389/lib389/cli_conf/plugins/dna.py
index 50dd37f..08f66a4 100644
--- a/src/lib389/lib389/cli_conf/plugins/dna.py
+++ b/src/lib389/lib389/cli_conf/plugins/dna.py
@@ -1,16 +1,246 @@
# --- BEGIN COPYRIGHT BLOCK ---
-# Copyright (C) 2018 Red Hat, Inc.
+# Copyright (C) 2019 Red Hat, Inc.
# All rights reserved.
#
# License: GPL (version 3 or any later version).
# See LICENSE for details.
# --- END COPYRIGHT BLOCK ---
-from lib389.plugins import DNAPlugin
-from lib389.cli_conf import add_generic_plugin_parsers
+import json
+import ldap
+from lib389.plugins import DNAPlugin, DNAPluginConfig, DNAPluginConfigs, DNAPluginSharedConfig, DNAPluginSharedConfigs
+from lib389.cli_conf import add_generic_plugin_parsers, generic_object_edit, generic_object_add, _args_to_attrs
+
+arg_to_attr = {
+ 'type': 'dnaType',
+ 'prefix': 'dnaPrefix',
+ 'next_value': 'dnaNextValue',
+ 'max_value': 'dnaMaxValue',
+ 'interval': 'dnaInterval',
+ 'magic_regen': 'dnaMagicRegen',
+ 'filter': 'dnaFilter',
+ 'scope': 'dnaScope',
+ 'remote_bind_dn': 'dnaRemoteBindDN',
+ 'remote_bind_cred': 'dnaRemoteBindCred',
+ 'shared_config_entry': 'dnaSharedCfgDN',
+ 'threshold': 'dnaThreshold',
+ 'next_range': 'dnaNextRange',
+ 'range_request_timeout': 'dnaRangeRequestTimeout'
+}
+
+arg_to_attr_config = {
+ 'hostname': 'dnaHostname',
+ 'port': 'dnaPortNum',
+ 'secure_port': 'dnaSecurePortNum',
+ 'remaining_values': 'dnaRemainingValues',
+ 'remote_bind_method': 'dnaRemoteBindMethod',
+ 'remote_conn_protocol': 'dnaRemoteConnProtocol'
+}
+
+
+def dna_list(inst, basedn, log, args):
+ log = log.getChild('dna_list')
+ configs = DNAPluginConfigs(inst)
+ config_list = configs.list()
+ if args.json:
+ result = {'type': 'list', 'items': []}
+    if len(config_list) > 0:
+        for config in config_list:
+            if args.json:
+                result['items'].append(config.get_all_attrs_json())
+ else:
+ log.info(config.rdn)
+ else:
+ log.info("No DNA configurations were found")
+
+ if args.json:
+ print(json.dumps(result))
+
+
+def dna_add(inst, basedn, log, args):
+ log = log.getChild('dna_add')
+ plugin = DNAPlugin(inst)
+ props = {'cn': args.NAME}
+ generic_object_add(DNAPluginConfig, inst, log, args, arg_to_attr, basedn=plugin.dn, props=props)
+
+
+def dna_edit(inst, basedn, log, args):
+ log = log.getChild('dna_edit')
+ configs = DNAPluginConfigs(inst)
+ config = configs.get(args.NAME)
+ generic_object_edit(config, log, args, arg_to_attr)
+
+
+def dna_show(inst, basedn, log, args):
+ log = log.getChild('dna_show')
+ configs = DNAPluginConfigs(inst)
+ config = configs.get(args.NAME)
+
+ if not config.exists():
+        raise ldap.NO_SUCH_OBJECT("Entry %s doesn't exist" % args.NAME)
+ if args and args.json:
+ o_str = config.get_all_attrs_json()
+ print(o_str)
+ else:
+ print(config.display())
+
+
+def dna_del(inst, basedn, log, args):
+ log = log.getChild('dna_del')
+ configs = DNAPluginConfigs(inst)
+ config = configs.get(args.NAME)
+ config.delete()
+ log.info("Successfully deleted the %s", config.dn)
+
+
+def dna_config_list(inst, basedn, log, args):
+    log = log.getChild('dna_config_list')
+ configs = DNAPluginSharedConfigs(inst, args.BASEDN)
+ config_list = configs.list()
+ if args.json:
+ result = {'type': 'list', 'items': []}
+ if len(config_list) > 0:
+ for config in config_list:
+ if args.json:
+ result['items'].append(config.get_all_attrs_json())
+ else:
+ log.info(config.dn)
+ else:
+ log.info("No DNA shared configurations were found")
+
+ if args.json:
+ print(json.dumps(result))
+
+
+def dna_config_add(inst, basedn, log, args):
+ log = log.getChild('dna_config_add')
+ targetdn = args.BASEDN
+
+ shared_configs = DNAPluginSharedConfigs(inst, targetdn)
+ attrs = _args_to_attrs(args, arg_to_attr_config)
+ props = {attr: value for (attr, value) in attrs.items() if value != ""}
+
+ shared_config = shared_configs.create(properties=props)
+    log.info("Successfully created the %s", shared_config.dn)
+
+ configs = DNAPluginConfigs(inst)
+ config = configs.get(args.NAME)
+    config.replace('dnaSharedCfgDN', shared_config.dn)
+    log.info('DNA attribute dnaSharedCfgDN (shared-config-entry) '
+             'was set in the %s plugin config' % config.rdn)
+
+
+def dna_config_edit(inst, basedn, log, args):
+ log = log.getChild('dna_config_edit')
+ targetdn = args.DN
+ shared_config = DNAPluginSharedConfig(inst, targetdn)
+ generic_object_edit(shared_config, log, args, arg_to_attr_config)
+
+
+def dna_config_show(inst, basedn, log, args):
+ log = log.getChild('dna_config_show')
+ targetdn = args.DN
+ shared_config = DNAPluginSharedConfig(inst, targetdn)
+
+ if not shared_config.exists():
+        raise ldap.NO_SUCH_OBJECT("Entry %s doesn't exist" % targetdn)
+ if args and args.json:
+ o_str = shared_config.get_all_attrs_json()
+ print(o_str)
+ else:
+ print(shared_config.display())
+
+
+def dna_config_del(inst, basedn, log, args):
+ log = log.getChild('dna_config_del')
+ targetdn = args.DN
+ shared_config = DNAPluginSharedConfig(inst, targetdn)
+ shared_config.delete()
+ log.info("Successfully deleted the %s", targetdn)
+
+
+def _add_parser_args(parser):
+ parser.add_argument('--type', help='Sets which attributes have unique numbers being generated for them (dnaType)')
+ parser.add_argument('--prefix', help='Defines a prefix that can be prepended to the generated '
+ 'number values for the attribute (dnaPrefix)')
+ parser.add_argument('--next-value', help='Gives the next available number which can be assigned (dnaNextValue)')
+ parser.add_argument('--max-value', help='Sets the maximum value that can be assigned for the range (dnaMaxValue)')
+ parser.add_argument('--interval', help='Sets an interval to use to increment through numbers in a range (dnaInterval)')
+ parser.add_argument('--magic-regen', help='Sets a user-defined value that instructs the plug-in '
+ 'to assign a new value for the entry (dnaMagicRegen)')
+ parser.add_argument('--filter', help='Sets an LDAP filter to use to search for and identify the entries '
+ 'to which to apply the distributed numeric assignment range (dnaFilter)')
+ parser.add_argument('--scope', help='Sets the base DN to search for entries to which '
+ 'to apply the distributed numeric assignment (dnaScope)')
+ parser.add_argument('--remote-bind-dn', help='Specifies the Replication Manager DN (dnaRemoteBindDN)')
+ parser.add_argument('--remote-bind-cred', help='Specifies the Replication Manager\'s password (dnaRemoteBindCred)')
+ parser.add_argument('--shared-config-entry', help='Defines a shared identity that the servers can use '
+ 'to transfer ranges to one another (dnaSharedCfgDN)')
+ parser.add_argument('--threshold', help='Sets a threshold of remaining available numbers in the range. When the '
+ 'server hits the threshold, it sends a request for a new range (dnaThreshold)')
+ parser.add_argument('--next-range',
+ help='Defines the next range to use when the current range is exhausted (dnaNextRange)')
+ parser.add_argument('--range-request-timeout',
+                        help='Sets a timeout period, in seconds, for range requests so that the server '
+ 'does not stall waiting on a new range from one server and '
+ 'can request a range from a new server (dnaRangeRequestTimeout)')
+
+
+def _add_parser_args_config(parser):
+ parser.add_argument('--hostname',
+ help='Identifies the host name of a server in a shared range, as part of the DNA '
+ 'range configuration for that specific host in multi-master replication (dnaHostname)')
+ parser.add_argument('--port', help='Gives the standard port number to use to connect to '
+ 'the host identified in dnaHostname (dnaPortNum)')
+ parser.add_argument('--secure-port', help='Gives the secure (TLS) port number to use to connect '
+ 'to the host identified in dnaHostname (dnaSecurePortNum)')
+ parser.add_argument('--remote-bind-method', help='Specifies the remote bind method (dnaRemoteBindMethod)')
+ parser.add_argument('--remote-conn-protocol', help='Specifies the remote connection protocol (dnaRemoteConnProtocol)')
+ parser.add_argument('--remaining-values', help='Contains the number of values that are remaining and '
+ 'available to a server to assign to entries (dnaRemainingValues)')
def create_parser(subparsers):
- dna_parser = subparsers.add_parser('dna', help='Manage and configure DNA plugin')
- subcommands = dna_parser.add_subparsers(help='action')
+ dna = subparsers.add_parser('dna', help='Manage and configure DNA plugin')
+ subcommands = dna.add_subparsers(help='action')
add_generic_plugin_parsers(subcommands, DNAPlugin)
+
+ list = subcommands.add_parser('list', help='List available plugin configs')
+ subcommands_list = list.add_subparsers(help='action')
+ list_configs = subcommands_list.add_parser('configs', help='List main DNA plugin config entries')
+ list_configs.set_defaults(func=dna_list)
+ list_shared_configs = subcommands_list.add_parser('shared-configs', help='List DNA plugin shared config entries')
+ list_shared_configs.add_argument('BASEDN', help='The search DN')
+ list_shared_configs.set_defaults(func=dna_config_list)
+
+ config = subcommands.add_parser('config', help='Manage plugin configs')
+ config.add_argument('NAME', help='The DNA configuration name')
+ config_subcommands = config.add_subparsers(help='action')
+ add = config_subcommands.add_parser('add', help='Add the config entry')
+ add.set_defaults(func=dna_add)
+ _add_parser_args(add)
+ edit = config_subcommands.add_parser('set', help='Edit the config entry')
+ edit.set_defaults(func=dna_edit)
+ _add_parser_args(edit)
+ show = config_subcommands.add_parser('show', help='Display the config entry')
+ show.set_defaults(func=dna_show)
+ delete = config_subcommands.add_parser('delete', help='Delete the config entry')
+ delete.set_defaults(func=dna_del)
+ shared_config = config_subcommands.add_parser('shared-config-entry', help='Manage the shared config entry')
+ shared_config_subcommands = shared_config.add_subparsers(help='action')
+
+ add_config = shared_config_subcommands.add_parser('add', help='Add the shared config entry')
+ add_config.add_argument('BASEDN', help='The shared config entry BASE DN. The new DN will be constructed with '
+ 'dnaHostname and dnaPortNum')
+ add_config.set_defaults(func=dna_config_add)
+ _add_parser_args_config(add_config)
+ edit_config = shared_config_subcommands.add_parser('edit', help='Edit the shared config entry')
+ edit_config.add_argument('DN', help='The shared config entry DN')
+ edit_config.set_defaults(func=dna_config_edit)
+ _add_parser_args_config(edit_config)
+ show_config_parser = shared_config_subcommands.add_parser('show', help='Display the shared config entry')
+ show_config_parser.add_argument('DN', help='The shared config entry DN')
+ show_config_parser.set_defaults(func=dna_config_show)
+ del_config_parser = shared_config_subcommands.add_parser('delete', help='Delete the shared config entry')
+ del_config_parser.add_argument('DN', help='The shared config entry DN')
+ del_config_parser.set_defaults(func=dna_config_del)
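The `arg_to_attr` tables in the dna.py hunk above drive a generic translation from argparse namespace fields to LDAP attribute names via the `_args_to_attrs` helper imported from `lib389.cli_conf`. A plausible sketch of that translation, assuming the helper simply looks up each supplied argument in the mapping (the real implementation may differ):

```python
from types import SimpleNamespace

# A trimmed copy of the mapping from the diff above.
arg_to_attr = {
    'type': 'dnaType',
    'next_value': 'dnaNextValue',
    'max_value': 'dnaMaxValue',
}

def args_to_attrs(args, mapping):
    # Keep only the arguments the user actually supplied (non-None),
    # renamed to their LDAP attribute names.
    attrs = {}
    for arg, attr in mapping.items():
        value = getattr(args, arg, None)
        if value is not None:
            attrs[attr] = value
    return attrs

args = SimpleNamespace(type='uidNumber', next_value='1000', max_value=None)
print(args_to_attrs(args, arg_to_attr))  # {'dnaType': 'uidNumber', 'dnaNextValue': '1000'}
```

Keeping the mapping as a module-level dict means the same `add`/`set` handlers can serve every config type just by swapping the table, which is exactly how `arg_to_attr` and `arg_to_attr_config` are used here.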
diff --git a/src/lib389/lib389/cli_conf/plugins/linkedattr.py b/src/lib389/lib389/cli_conf/plugins/linkedattr.py
index 7581e80..716f0e2 100644
--- a/src/lib389/lib389/cli_conf/plugins/linkedattr.py
+++ b/src/lib389/lib389/cli_conf/plugins/linkedattr.py
@@ -1,16 +1,127 @@
# --- BEGIN COPYRIGHT BLOCK ---
-# Copyright (C) 2018 Red Hat, Inc.
+# Copyright (C) 2019 Red Hat, Inc.
+# Copyright (C) 2019 William Brown <william(a)blackhats.net.au>
# All rights reserved.
#
# License: GPL (version 3 or any later version).
# See LICENSE for details.
# --- END COPYRIGHT BLOCK ---
-from lib389.plugins import LinkedAttributesPlugin
-from lib389.cli_conf import add_generic_plugin_parsers
+import json
+import ldap
+from lib389.plugins import LinkedAttributesPlugin, LinkedAttributesConfig, LinkedAttributesConfigs
+from lib389.cli_conf import add_generic_plugin_parsers, generic_object_edit, generic_object_add
+
+arg_to_attr = {
+ 'link_type': 'linkType',
+ 'managed_type': 'managedType',
+ 'link_scope': 'linkScope',
+}
+
+
+def linkedattr_list(inst, basedn, log, args):
+ log = log.getChild('linkedattr_list')
+ configs = LinkedAttributesConfigs(inst)
+ result = []
+ result_json = []
+ for config in configs.list():
+ if args.json:
+ result_json.append(config.get_all_attrs_json())
+ else:
+ result.append(config.rdn)
+ if args.json:
+ print(json.dumps({"type": "list", "items": result_json}))
+ else:
+ if len(result) > 0:
+ for i in result:
+ print(i)
+ else:
+ print("No Linked Attributes plugin instances")
+
+
+def linkedattr_add(inst, basedn, log, args):
+ log = log.getChild('linkedattr_add')
+ plugin = LinkedAttributesPlugin(inst)
+ props = {'cn': args.NAME}
+ generic_object_add(LinkedAttributesConfig, inst, log, args, arg_to_attr, basedn=plugin.dn, props=props)
+
+
+def linkedattr_edit(inst, basedn, log, args):
+ log = log.getChild('linkedattr_edit')
+ configs = LinkedAttributesConfigs(inst)
+ config = configs.get(args.NAME)
+ generic_object_edit(config, log, args, arg_to_attr)
+
+
+def linkedattr_show(inst, basedn, log, args):
+ log = log.getChild('linkedattr_show')
+ configs = LinkedAttributesConfigs(inst)
+ config = configs.get(args.NAME)
+
+ if not config.exists():
+        raise ldap.NO_SUCH_OBJECT("Entry %s doesn't exist" % args.NAME)
+ if args and args.json:
+ o_str = config.get_all_attrs_json()
+ print(o_str)
+ else:
+ print(config.display())
+
+
+def linkedattr_del(inst, basedn, log, args):
+ log = log.getChild('linkedattr_del')
+ configs = LinkedAttributesConfigs(inst)
+ config = configs.get(args.NAME)
+ config.delete()
+ log.info("Successfully deleted the %s", config.dn)
+
+
+def fixup(inst, basedn, log, args):
+ plugin = LinkedAttributesPlugin(inst)
+ log.info('Attempting to add task entry... This will fail if LinkedAttributes plug-in is not enabled.')
+    if not plugin.status():
+        log.error("'%s' is disabled. Fix up task can't be executed" % plugin.rdn)
+        return
+    fixup_task = plugin.fixup(args.basedn, args.filter)
+ fixup_task.wait()
+ exitcode = fixup_task.get_exit_code()
+ if exitcode != 0:
+        log.error('LinkedAttributes fixup task for %s has failed. Please check the logs' % args.basedn)
+ else:
+ log.info('Successfully added fixup task')
+
+
+def _add_parser_args(parser):
+ parser.add_argument('--link-type',
+ help='Sets the attribute that is managed manually by administrators (linkType)')
+ parser.add_argument('--managed-type',
+ help='Sets the attribute that is created dynamically by the plugin (managedType)')
+ parser.add_argument('--link-scope',
+ help='Sets the scope that restricts the plugin to a specific part of the directory tree (linkScope)')
def create_parser(subparsers):
- linkedattr_parser = subparsers.add_parser('linkedattr', help='Manage and configure Linked Attributes plugin')
+ linkedattr_parser = subparsers.add_parser('linked-attr', help='Manage and configure Linked Attributes plugin')
subcommands = linkedattr_parser.add_subparsers(help='action')
add_generic_plugin_parsers(subcommands, LinkedAttributesPlugin)
+
+ fixup_parser = subcommands.add_parser('fixup', help='Run the fix-up task for linked attributes plugin')
+    fixup_parser.add_argument('basedn', help="Base DN that contains entries to fix up")
+ fixup_parser.add_argument('-f', '--filter', help='Filter for entries to fix up linked attributes.')
+ fixup_parser.set_defaults(func=fixup)
+
+ list = subcommands.add_parser('list', help='List available plugin configs')
+ list.set_defaults(func=linkedattr_list)
+
+ config = subcommands.add_parser('config', help='Manage plugin configs')
+ config.add_argument('NAME', help='The Linked Attributes configuration name')
+ config_subcommands = config.add_subparsers(help='action')
+ add = config_subcommands.add_parser('add', help='Add the config entry')
+ add.set_defaults(func=linkedattr_add)
+ _add_parser_args(add)
+ edit = config_subcommands.add_parser('set', help='Edit the config entry')
+ edit.set_defaults(func=linkedattr_edit)
+ _add_parser_args(edit)
+ show = config_subcommands.add_parser('show', help='Display the config entry')
+ show.set_defaults(func=linkedattr_show)
+ delete = config_subcommands.add_parser('delete', help='Delete the config entry')
+ delete.set_defaults(func=linkedattr_del)
+
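Every `*_show` handler in these files follows one shape: fetch the entry, raise `ldap.NO_SUCH_OBJECT` if it is missing, then print either the JSON attribute dump or the plain display. A minimal sketch of that shape with a stub entry object (the real classes come from `lib389.plugins`, and the real code raises `ldap.NO_SUCH_OBJECT` where the stub raises `ValueError`):

```python
import json

class StubConfig:
    # Stand-in for a lib389 DSLdapObject with just the methods the handlers use.
    def __init__(self, dn, attrs, present=True):
        self.dn, self.attrs, self.present = dn, attrs, present

    def exists(self):
        return self.present

    def get_all_attrs_json(self):
        return json.dumps({'dn': self.dn, 'attrs': self.attrs})

    def display(self):
        return '\n'.join('%s: %s' % (k, v) for k, v in self.attrs.items())

def show(config, as_json):
    if not config.exists():
        # ldap.NO_SUCH_OBJECT in the real handlers
        raise ValueError("Entry %s doesn't exist" % config.dn)
    return config.get_all_attrs_json() if as_json else config.display()

cfg = StubConfig('cn=demo,cn=plugins,cn=config', {'linkType': 'directReport'})
print(show(cfg, as_json=False))  # linkType: directReport
```

Checking `exists()` before formatting is what turns a confusing empty display into a clean "no such entry" error for the CLI user.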
diff --git a/src/lib389/lib389/cli_conf/plugins/managedentries.py b/src/lib389/lib389/cli_conf/plugins/managedentries.py
index 18dca1b..cb5235b 100644
--- a/src/lib389/lib389/cli_conf/plugins/managedentries.py
+++ b/src/lib389/lib389/cli_conf/plugins/managedentries.py
@@ -1,16 +1,231 @@
# --- BEGIN COPYRIGHT BLOCK ---
-# Copyright (C) 2018 Red Hat, Inc.
+# Copyright (C) 2019 Red Hat, Inc.
# All rights reserved.
#
# License: GPL (version 3 or any later version).
# See LICENSE for details.
# --- END COPYRIGHT BLOCK ---
-from lib389.plugins import ManagedEntriesPlugin
-from lib389.cli_conf import add_generic_plugin_parsers
+import ldap
+import json
+from lib389.plugins import ManagedEntriesPlugin, MEPConfig, MEPConfigs, MEPTemplate, MEPTemplates
+from lib389.cli_conf import add_generic_plugin_parsers, generic_object_edit, generic_object_add
+
+arg_to_attr = {
+ 'config_area': 'nsslapd-pluginConfigArea'
+}
+
+arg_to_attr_config = {
+ 'scope': 'originScope',
+ 'filter': 'originFilter',
+ 'managed_base': 'managedBase',
+ 'managed_template': 'managedTemplate'
+}
+
+arg_to_attr_template = {
+ 'rdn_attr': 'mepRDNAttr',
+ 'static_attr': 'mepStaticAttr',
+ 'mapped_attr': 'mepMappedAttr'
+}
+
+
+def mep_edit(inst, basedn, log, args):
+ log = log.getChild('mep_edit')
+ plugin = ManagedEntriesPlugin(inst)
+ generic_object_edit(plugin, log, args, arg_to_attr)
+
+
+def mep_config_list(inst, basedn, log, args):
+ log = log.getChild('mep_config_list')
+ plugin = ManagedEntriesPlugin(inst)
+ config_area = plugin.get_attr_val_utf8_l('nsslapd-pluginConfigArea')
+ configs = MEPConfigs(inst, config_area)
+ result = []
+ result_json = []
+ for config in configs.list():
+ if args.json:
+ result_json.append(config.get_all_attrs_json())
+ else:
+ result.append(config.rdn)
+ if args.json:
+ print(json.dumps({"type": "list", "items": result_json}))
+ else:
+ if len(result) > 0:
+ for i in result:
+ print(i)
+ else:
+            print("No Managed Entries plugin configs were found")
+
+
+def mep_config_add(inst, basedn, log, args):
+ log = log.getChild('mep_config_add')
+ plugin = ManagedEntriesPlugin(inst)
+ config_area = plugin.get_attr_val_utf8_l('nsslapd-pluginConfigArea')
+ if config_area is None:
+ config_area = plugin.dn
+ props = {'cn': args.NAME}
+ generic_object_add(MEPConfig, inst, log, args, arg_to_attr_config, basedn=config_area, props=props)
+
+
+def mep_config_edit(inst, basedn, log, args):
+ log = log.getChild('mep_config_edit')
+ plugin = ManagedEntriesPlugin(inst)
+ config_area = plugin.get_attr_val_utf8_l('nsslapd-pluginConfigArea')
+ configs = MEPConfigs(inst, config_area)
+ config = configs.get(args.NAME)
+ generic_object_edit(config, log, args, arg_to_attr_config)
+
+
+def mep_config_show(inst, basedn, log, args):
+ log = log.getChild('mep_config_show')
+ plugin = ManagedEntriesPlugin(inst)
+ config_area = plugin.get_attr_val_utf8_l('nsslapd-pluginConfigArea')
+ configs = MEPConfigs(inst, config_area)
+ config = configs.get(args.NAME)
+
+ if not config.exists():
+        raise ldap.NO_SUCH_OBJECT("Entry %s doesn't exist" % args.NAME)
+ if args and args.json:
+ o_str = config.get_all_attrs_json()
+ print(o_str)
+ else:
+ print(config.display())
+
+
+def mep_config_del(inst, basedn, log, args):
+ log = log.getChild('mep_config_del')
+ plugin = ManagedEntriesPlugin(inst)
+ config_area = plugin.get_attr_val_utf8_l('nsslapd-pluginConfigArea')
+ configs = MEPConfigs(inst, config_area)
+ config = configs.get(args.NAME)
+ config.delete()
+ log.info("Successfully deleted the %s", config.dn)
+
+
+def mep_template_list(inst, basedn, log, args):
+ log = log.getChild('mep_template_list')
+ templates = MEPTemplates(inst, args.BASEDN)
+ result = []
+ result_json = []
+ for template in templates.list():
+ if args.json:
+ result_json.append(template.get_all_attrs_json())
+ else:
+ result.append(template.rdn)
+ if args.json:
+ print(json.dumps({"type": "list", "items": result_json}))
+ else:
+ if len(result) > 0:
+ for i in result:
+ print(i)
+ else:
+            print("No Managed Entries plugin templates were found")
+
+
+def mep_template_add(inst, basedn, log, args):
+ log = log.getChild('mep_template_add')
+ targetdn = args.DN
+    generic_object_add(MEPTemplate, inst, log, args, arg_to_attr_template, dn=targetdn)
+ log.info('Don\'t forget to assign the template to Managed Entry Plugin config '
+ 'attribute - managedTemplate')
+
+
+def mep_template_edit(inst, basedn, log, args):
+ log = log.getChild('mep_template_edit')
+ targetdn = args.DN
+ templates = MEPTemplates(inst)
+ template = templates.get(targetdn)
+    generic_object_edit(template, log, args, arg_to_attr_template)
+
+
+def mep_template_show(inst, basedn, log, args):
+ log = log.getChild('mep_template_show')
+ targetdn = args.DN
+ templates = MEPTemplates(inst)
+ template = templates.get(targetdn)
+
+ if not template.exists():
+        raise ldap.NO_SUCH_OBJECT("Entry %s doesn't exist" % targetdn)
+ if args and args.json:
+ o_str = template.get_all_attrs_json()
+ print(o_str)
+ else:
+ print(template.display())
+
+
+def mep_template_del(inst, basedn, log, args):
+ log = log.getChild('mep_template_del')
+ targetdn = args.DN
+ templates = MEPTemplates(inst)
+ template = templates.get(targetdn)
+ template.delete()
+ log.info("Successfully deleted the %s", targetdn)
+
+
+def _add_parser_args_config(parser):
+ parser.add_argument('--scope', help='Sets the scope of the search to use to see '
+ 'which entries the plug-in monitors (originScope)')
+ parser.add_argument('--filter', help='Sets the search filter to use to search for and identify the entries '
+ 'within the subtree which require a managed entry (originFilter)')
+ parser.add_argument('--managed-base', help='Sets the subtree under which to create '
+ 'the managed entries (managedBase)')
+ parser.add_argument('--managed-template', help='Identifies the template entry to use to create '
+ 'the managed entry (managedTemplate)')
+
+
+def _add_parser_args_template(parser):
+ parser.add_argument('--rdn-attr', help='Sets which attribute to use as the naming attribute '
+ 'in the automatically-generated entry (mepRDNAttr)')
+ parser.add_argument('--static-attr', help='Sets an attribute with a defined value that must be added '
+ 'to the automatically-generated entry (mepStaticAttr)')
+ parser.add_argument('--mapped-attr', nargs='+',
+ help='Sets an attribute in the Managed Entries template entry which must exist '
+ 'in the generated entry (mepMappedAttr)')
def create_parser(subparsers):
- managedentries_parser = subparsers.add_parser('managedentries', help='Manage and configure Managed Entries plugin')
- subcommands = managedentries_parser.add_subparsers(help='action')
+ mep = subparsers.add_parser('managed-entries', help='Manage and configure Managed Entries Plugin')
+ subcommands = mep.add_subparsers(help='action')
add_generic_plugin_parsers(subcommands, ManagedEntriesPlugin)
+
+ edit = subcommands.add_parser('set', help='Edit the plugin')
+ edit.set_defaults(func=mep_edit)
+ edit.add_argument('--config-area', help='The value to set as nsslapd-pluginConfigArea')
+
+ list = subcommands.add_parser('list', help='List Managed Entries Plugin configs and templates')
+ subcommands_list = list.add_subparsers(help='action')
+ list_configs = subcommands_list.add_parser('configs', help='List Managed Entries Plugin configs (list config-area '
+ 'if specified in the main plugin entry)')
+ list_configs.set_defaults(func=mep_config_list)
+ list_templates = subcommands_list.add_parser('templates',
+ help='List Managed Entries Plugin templates in the directory')
+    list_templates.add_argument('BASEDN', help='The base DN to search for templates.')
+ list_templates.set_defaults(func=mep_template_list)
+
+ config = subcommands.add_parser('config', help='Handle Managed Entries Plugin configs')
+ config.add_argument('NAME', help='The config entry CN.')
+ config_subcommands = config.add_subparsers(help='action')
+ add = config_subcommands.add_parser('add', help='Add the config entry')
+ add.set_defaults(func=mep_config_add)
+ _add_parser_args_config(add)
+ edit = config_subcommands.add_parser('set', help='Edit the config entry')
+ edit.set_defaults(func=mep_config_edit)
+ _add_parser_args_config(edit)
+ show = config_subcommands.add_parser('show', help='Display the config entry')
+ show.set_defaults(func=mep_config_show)
+ delete = config_subcommands.add_parser('delete', help='Delete the config entry')
+ delete.set_defaults(func=mep_config_del)
+
+ template = subcommands.add_parser('template', help='Handle Managed Entries Plugin templates')
+ template.add_argument('DN', help='The template entry DN.')
+ template_subcommands = template.add_subparsers(help='action')
+ add = template_subcommands.add_parser('add', help='Add the template entry')
+ add.set_defaults(func=mep_template_add)
+ _add_parser_args_template(add)
+ edit = template_subcommands.add_parser('set', help='Edit the template entry')
+ edit.set_defaults(func=mep_template_edit)
+ _add_parser_args_template(edit)
+ show = template_subcommands.add_parser('show', help='Display the template entry')
+ show.set_defaults(func=mep_template_show)
+ delete = template_subcommands.add_parser('delete', help='Delete the template entry')
+ delete.set_defaults(func=mep_template_del)
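All the `*_list` handlers above emit the same envelope when `--json` is given: `{"type": "list", "items": [...]}`. Because each item is the string returned by `get_all_attrs_json()`, the items end up as embedded JSON documents that a consumer must decode a second time. A sketch of that double-encoding with made-up entry data:

```python
import json

# Each entry's attribute dump is itself a JSON string, matching what the
# handlers append via get_all_attrs_json().
entry_dumps = [json.dumps({'dn': 'cn=a'}), json.dumps({'dn': 'cn=b'})]

envelope = json.dumps({'type': 'list', 'items': entry_dumps})

# A consumer decodes twice: once for the envelope, once per item.
decoded = json.loads(envelope)
items = [json.loads(i) for i in decoded['items']]
print([e['dn'] for e in items])  # ['cn=a', 'cn=b']
```

Appending `json.loads(config.get_all_attrs_json())` instead would produce a flat JSON document, but the string-per-item form shown here is what the handlers in this patch actually emit.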
diff --git a/src/lib389/lib389/cli_conf/plugins/memberof.py b/src/lib389/lib389/cli_conf/plugins/memberof.py
index 0ccbed3..666dd74 100644
--- a/src/lib389/lib389/cli_conf/plugins/memberof.py
+++ b/src/lib389/lib389/cli_conf/plugins/memberof.py
@@ -1,5 +1,6 @@
# --- BEGIN COPYRIGHT BLOCK ---
-# Copyright (C) 2018 Red Hat, Inc.
+# Copyright (C) 2019 Red Hat, Inc.
+# Copyright (C) 2019 William Brown <william(a)blackhats.net.au>
# All rights reserved.
#
# License: GPL (version 3 or any later version).
@@ -7,7 +8,7 @@
# --- END COPYRIGHT BLOCK ---
import ldap
-from lib389.plugins import MemberOfPlugin, Plugins, MemberOfSharedConfig
+from lib389.plugins import MemberOfPlugin, Plugins, MemberOfSharedConfig, MemberOfSharedConfigs
from lib389.cli_conf import add_generic_plugin_parsers, generic_object_edit, generic_object_add
arg_to_attr = {
@@ -25,18 +26,15 @@ arg_to_attr = {
def memberof_edit(inst, basedn, log, args):
log = log.getChild('memberof_edit')
- plugins = Plugins(inst)
- plugin = plugins.get("MemberOf Plugin")
+ plugin = MemberOfPlugin(inst)
generic_object_edit(plugin, log, args, arg_to_attr)
def memberof_add_config(inst, basedn, log, args):
log = log.getChild('memberof_add_config')
targetdn = args.DN
- config = MemberOfSharedConfig(inst, targetdn)
- generic_object_add(config, log, args, arg_to_attr)
- plugins = Plugins(inst)
- plugin = plugins.get("MemberOf Plugin")
+ config = generic_object_add(MemberOfSharedConfig, inst, log, args, arg_to_attr, dn=targetdn)
+ plugin = MemberOfPlugin(inst)
plugin.replace('nsslapd-pluginConfigArea', config.dn)
log.info('MemberOf attribute nsslapd-pluginConfigArea (config-entry) '
'was set in the main plugin config')
@@ -83,50 +81,61 @@ def fixup(inst, basedn, log, args):
def _add_parser_args(parser):
- parser.add_argument('--attr', nargs='+', help='The value to set as memberOfAttr')
- parser.add_argument('--groupattr', nargs='+', help='The value to set as memberOfGroupAttr')
+ parser.add_argument('--attr', nargs='+',
+ help='Specifies the attribute in the user entry for the Directory Server '
+ 'to manage to reflect group membership (memberOfAttr)')
+ parser.add_argument('--groupattr', nargs='+',
+ help='Specifies the attribute in the group entry to use to identify '
+ 'the DNs of group members (memberOfGroupAttr)')
parser.add_argument('--allbackends', choices=['on', 'off'], type=str.lower,
- help='The value to set as memberOfAllBackends')
+                        help='Specifies whether to search for user entries in all available '
+                             'suffixes rather than only the local suffix (memberOfAllBackends)')
parser.add_argument('--skipnested', choices=['on', 'off'], type=str.lower,
- help='The value to set as memberOfSkipNested')
- parser.add_argument('--scope', help='The value to set as memberOfEntryScope')
- parser.add_argument('--exclude', help='The value to set as memberOfEntryScopeExcludeSubtree')
- parser.add_argument('--autoaddoc', type=str.lower, help='The value to set as memberOfAutoAddOC')
+                        help='Specifies whether to skip nested groups (memberOfSkipNested)')
+ parser.add_argument('--scope', help='Specifies backends or multiple-nested suffixes '
+ 'for the MemberOf plug-in to work on (memberOfEntryScope)')
+ parser.add_argument('--exclude', help='Specifies backends or multiple-nested suffixes '
+ 'for the MemberOf plug-in to exclude (memberOfEntryScopeExcludeSubtree)')
+ parser.add_argument('--autoaddoc', type=str.lower,
+ help='If an entry does not have an object class that allows the memberOf attribute '
+ 'then the memberOf plugin will automatically add the object class listed '
+ 'in the memberOfAutoAddOC parameter')
def create_parser(subparsers):
- memberof_parser = subparsers.add_parser('memberof', help='Manage and configure MemberOf plugin')
+ memberof = subparsers.add_parser('memberof', help='Manage and configure MemberOf plugin')
- subcommands = memberof_parser.add_subparsers(help='action')
+ subcommands = memberof.add_subparsers(help='action')
add_generic_plugin_parsers(subcommands, MemberOfPlugin)
- edit_parser = subcommands.add_parser('edit', help='Edit the plugin')
- edit_parser.set_defaults(func=memberof_edit)
- _add_parser_args(edit_parser)
- edit_parser.add_argument('--config-entry', help='The value to set as nsslapd-pluginConfigArea')
-
- config_parser = subcommands.add_parser('config-entry', help='Manage the config entry')
- config_subcommands = config_parser.add_subparsers(help='action')
- add_config_parser = config_subcommands.add_parser('add', help='Add the config entry')
- add_config_parser.set_defaults(func=memberof_add_config)
- add_config_parser.add_argument('DN', help='The config entry full DN')
- _add_parser_args(add_config_parser)
- edit_config_parser = config_subcommands.add_parser('edit', help='Edit the config entry')
- edit_config_parser.set_defaults(func=memberof_edit_config)
- edit_config_parser.add_argument('DN', help='The config entry full DN')
- _add_parser_args(edit_config_parser)
- show_config_parser = config_subcommands.add_parser('show', help='Display the config entry')
- show_config_parser.set_defaults(func=memberof_show_config)
- show_config_parser.add_argument('DN', help='The config entry full DN')
- del_config_parser = config_subcommands.add_parser('delete', help='Delete the config entry')
- del_config_parser.set_defaults(func=memberof_del_config)
- del_config_parser.add_argument('DN', help='The config entry full DN')
-
- fixup_parser = subcommands.add_parser('fixup', help='Run the fix-up task for memberOf plugin')
- fixup_parser.set_defaults(func=fixup)
- fixup_parser.add_argument('DN', help="base DN that contains entries to fix up")
- fixup_parser.add_argument('-f', '--filter',
- help='Filter for entries to fix up.\n If omitted, all entries with objectclass '
- 'inetuser/inetadmin/nsmemberof under the specified base will have '
- 'their memberOf attribute regenerated.')
+ edit = subcommands.add_parser('set', help='Edit the plugin')
+ edit.set_defaults(func=memberof_edit)
+ _add_parser_args(edit)
+ edit.add_argument('--config-entry', help='The value to set as nsslapd-pluginConfigArea')
+
+ config = subcommands.add_parser('config-entry', help='Manage the config entry')
+ config_subcommands = config.add_subparsers(help='action')
+ add_config = config_subcommands.add_parser('add', help='Add the config entry')
+ add_config.set_defaults(func=memberof_add_config)
+ add_config.add_argument('DN', help='The config entry full DN')
+ _add_parser_args(add_config)
+ edit_config = config_subcommands.add_parser('set', help='Edit the config entry')
+ edit_config.set_defaults(func=memberof_edit_config)
+ edit_config.add_argument('DN', help='The config entry full DN')
+ _add_parser_args(edit_config)
+ show_config = config_subcommands.add_parser('show', help='Display the config entry')
+ show_config.set_defaults(func=memberof_show_config)
+ show_config.add_argument('DN', help='The config entry full DN')
+    del_config = config_subcommands.add_parser('delete', help='Delete the config entry')
+    del_config.set_defaults(func=memberof_del_config)
+    del_config.add_argument('DN', help='The config entry full DN')
+
+    fixup_parser = subcommands.add_parser('fixup', help='Run the fix-up task for memberOf plugin')
+    fixup_parser.set_defaults(func=fixup)
+    fixup_parser.add_argument('DN', help="Base DN that contains entries to fix up")
+    fixup_parser.add_argument('-f', '--filter',
+ help='Filter for entries to fix up.\n If omitted, all entries with objectclass '
+ 'inetuser/inetadmin/nsmemberof under the specified base will have '
+ 'their memberOf attribute regenerated.')
+
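The memberof changes above route every option through a single arg_to_attr mapping plus generic_object_edit. As a rough illustration of that pattern (the helper below is a sketch with example names, not lib389's actual generic_object_edit):

```python
# Illustrative sketch of the arg_to_attr pattern used by the CLI above:
# translate the options a user actually set into (attribute, value)
# pairs that can be applied to the plugin entry.

arg_to_attr = {
    'attr': 'memberOfAttr',
    'groupattr': 'memberOfGroupAttr',
    'allbackends': 'memberOfAllBackends',
}

def args_to_mods(args_dict, mapping):
    """Return (attribute, value) pairs for every option the user set."""
    mods = []
    for arg_name, value in args_dict.items():
        if value is None:
            continue  # option was not given on the command line
        attr = mapping.get(arg_name)
        if attr is not None:
            mods.append((attr, value))
    return mods
```

A call like args_to_mods({'attr': 'memberOf'}, arg_to_attr) yields [('memberOfAttr', 'memberOf')], mirroring how the mapping keys line up with the argparse destinations.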
diff --git a/src/lib389/lib389/cli_conf/plugins/passthroughauth.py b/src/lib389/lib389/cli_conf/plugins/passthroughauth.py
index ef6729e..616119a 100644
--- a/src/lib389/lib389/cli_conf/plugins/passthroughauth.py
+++ b/src/lib389/lib389/cli_conf/plugins/passthroughauth.py
@@ -1,16 +1,88 @@
# --- BEGIN COPYRIGHT BLOCK ---
-# Copyright (C) 2018 Red Hat, Inc.
+# Copyright (C) 2019 Red Hat, Inc.
# All rights reserved.
#
# License: GPL (version 3 or any later version).
# See LICENSE for details.
# --- END COPYRIGHT BLOCK ---
+import json
+import ldap
from lib389.plugins import PassThroughAuthenticationPlugin
from lib389.cli_conf import add_generic_plugin_parsers
+def pta_list(inst, basedn, log, args):
+ log = log.getChild('pta_list')
+ plugin = PassThroughAuthenticationPlugin(inst)
+    urls = plugin.get_urls()
+    if args.json:
+        print(json.dumps({"type": "list", "items": urls}))
+    else:
+        if len(urls) > 0:
+            for i in urls:
+ print(i)
+ else:
+ print("No Pass Through Auth attributes were found")
+
+
+def pta_add(inst, basedn, log, args):
+ log = log.getChild('pta_add')
+ plugin = PassThroughAuthenticationPlugin(inst)
+ urls = list(map(lambda url: url.lower(), plugin.get_urls()))
+ if args.URL.lower() in urls:
+ raise ldap.ALREADY_EXISTS("Entry %s already exists" % args.URL)
+ plugin.add("nsslapd-pluginarg%s" % len(urls), args.URL)
+
+
+def pta_edit(inst, basedn, log, args):
+ log = log.getChild('pta_edit')
+ plugin = PassThroughAuthenticationPlugin(inst)
+ urls = list(map(lambda url: url.lower(), plugin.get_urls()))
+ old_url_l = args.OLD_URL.lower()
+ if old_url_l not in urls:
+        log.info("Entry %s doesn't exist. Adding a new value.", args.OLD_URL)
+ url_num = len(urls)
+ else:
+ url_num = urls.index(old_url_l)
+ plugin.remove("nsslapd-pluginarg%s" % url_num, old_url_l)
+ plugin.add("nsslapd-pluginarg%s" % url_num, args.NEW_URL)
+
+
+def pta_del(inst, basedn, log, args):
+ log = log.getChild('pta_del')
+ plugin = PassThroughAuthenticationPlugin(inst)
+ urls = list(map(lambda url: url.lower(), plugin.get_urls()))
+ old_url_l = args.URL.lower()
+ if old_url_l not in urls:
+        raise ldap.NO_SUCH_OBJECT("Entry %s doesn't exist" % args.URL)
+
+ plugin.remove_all("nsslapd-pluginarg%s" % urls.index(old_url_l))
+ log.info("Successfully deleted %s", args.URL)
+
+
def create_parser(subparsers):
- passthroughauth_parser = subparsers.add_parser('passthroughauth', help='Manage and configure Pass-Through Authentication plugin')
+ passthroughauth_parser = subparsers.add_parser('pass-through-auth', help='Manage and configure Pass-Through Authentication plugin')
subcommands = passthroughauth_parser.add_subparsers(help='action')
add_generic_plugin_parsers(subcommands, PassThroughAuthenticationPlugin)
+
+    list_parser = subcommands.add_parser('list', help='List available plugin configs')
+    list_parser.set_defaults(func=pta_list)
+
+ add = subcommands.add_parser('add', help='Add the config entry')
+ add.add_argument('URL', help='The full LDAP URL in format '
+ '"ldap|ldaps://authDS/subtree maxconns,maxops,timeout,ldver,connlifetime,startTLS". '
+                     'If one optional parameter is specified, the rest must be specified too')
+ add.set_defaults(func=pta_add)
+
+ edit = subcommands.add_parser('modify', help='Edit the config entry')
+ edit.add_argument('OLD_URL', help='The full LDAP URL you get from the "list" command')
+ edit.add_argument('NEW_URL', help='The full LDAP URL in format '
+ '"ldap|ldaps://authDS/subtree maxconns,maxops,timeout,ldver,connlifetime,startTLS". '
+                      'If one optional parameter is specified, the rest must be specified too')
+ edit.set_defaults(func=pta_edit)
+
+ delete = subcommands.add_parser('delete', help='Delete the config entry')
+ delete.add_argument('URL', help='The full LDAP URL you get from the "list" command')
+ delete.set_defaults(func=pta_del)
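The pass-through auth URLs are stored in numbered attributes (nsslapd-pluginarg0, nsslapd-pluginarg1, ...), so pta_edit and pta_del above must first locate a URL's index case-insensitively. A minimal standalone sketch of that lookup (illustrative; the real code works on plugin.get_urls()):

```python
# Sketch of the case-insensitive index lookup used by pta_edit/pta_del
# above to pick the right nsslapd-pluginargN attribute.

def find_url_index(urls, target):
    """Return the 0-based index of target among urls, ignoring case,
    or None when the URL is not configured."""
    lowered = [u.lower() for u in urls]
    try:
        return lowered.index(target.lower())
    except ValueError:
        return None
```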
diff --git a/src/lib389/lib389/cli_conf/plugins/referint.py b/src/lib389/lib389/cli_conf/plugins/referint.py
index bf4d07c..9482a14 100644
--- a/src/lib389/lib389/cli_conf/plugins/referint.py
+++ b/src/lib389/lib389/cli_conf/plugins/referint.py
@@ -1,197 +1,57 @@
# --- BEGIN COPYRIGHT BLOCK ---
-# Copyright (C) 2018 Red Hat, Inc.
+# Copyright (C) 2019 Red Hat, Inc.
# All rights reserved.
#
# License: GPL (version 3 or any later version).
# See LICENSE for details.
# --- END COPYRIGHT BLOCK ---
-import ldap
-
from lib389.plugins import ReferentialIntegrityPlugin
-from lib389.cli_conf import add_generic_plugin_parsers
-
-
-def manage_update_delay(inst, basedn, log, args):
- plugin = ReferentialIntegrityPlugin(inst)
- if args.value is None:
- val = plugin.get_update_delay_formatted()
- log.info(val)
- else:
- plugin.set_update_delay(args.value)
- log.info('referint-update-delay set to "{}"'.format(args.value))
-
-def display_membership_attr(inst, basedn, log, args):
- plugin = ReferentialIntegrityPlugin(inst)
- log.info(plugin.get_membership_attr_formatted())
-
-def add_membership_attr(inst, basedn, log, args):
- plugin = ReferentialIntegrityPlugin(inst)
- try:
- plugin.add_membership_attr(args.value)
- except ldap.TYPE_OR_VALUE_EXISTS:
- log.info('Value "{}" already exists.'.format(args.value))
- else:
- log.info('successfully added membership attribute "{}"'.format(args.value))
-
-def remove_membership_attr(inst, basedn, log, args):
- plugin = ReferentialIntegrityPlugin(inst)
- try:
- plugin.remove_membership_attr(args.value)
- except ldap.OPERATIONS_ERROR:
- log.error("Error: Failed to delete. At least one value for membership attribute should exist.")
- except ldap.NO_SUCH_ATTRIBUTE:
- log.error('Error: Failed to delete. No value "{0}" found.'.format(args.value))
- else:
- log.info('successfully removed membership attribute "{}"'.format(args.value))
-
-def display_scope(inst, basedn, log, args):
- plugin = ReferentialIntegrityPlugin(inst)
- val = plugin.get_entryscope_formatted()
- if not val:
- log.info("nsslapd-pluginEntryScope is not set")
- else:
- log.info(val)
-
-def add_scope(inst, basedn, log, args):
- plugin = ReferentialIntegrityPlugin(inst)
- try:
- plugin.add_entryscope(args.value)
- except ldap.TYPE_OR_VALUE_EXISTS:
- log.info('Value "{}" already exists.'.format(args.value))
- else:
- log.info('successfully added nsslapd-pluginEntryScope value "{}"'.format(args.value))
-
-def remove_scope(inst, basedn, log, args):
- plugin = ReferentialIntegrityPlugin(inst)
- try:
- plugin.remove_entryscope(args.value)
- except ldap.NO_SUCH_ATTRIBUTE:
- log.error('Error: Failed to delete. No value "{0}" found.'.format(args.value))
- else:
- log.info('successfully removed nsslapd-pluginEntryScope value "{}"'.format(args.value))
-
-def remove_all_scope(inst, basedn, log, args):
- plugin = ReferentialIntegrityPlugin(inst)
- plugin.remove_all_entryscope()
- log.info('successfully removed all nsslapd-pluginEntryScope values')
+from lib389.cli_conf import add_generic_plugin_parsers, generic_object_edit
-def display_excludescope(inst, basedn, log, args):
- plugin = ReferentialIntegrityPlugin(inst)
- val = plugin.get_excludescope_formatted()
- if not val:
- log.info("nsslapd-pluginExcludeEntryScope is not set")
- else:
- log.info(val)
+arg_to_attr = {
+ 'update_delay': 'referint-update-delay',
+ 'membership_attr': 'referint-membership-attr',
+ 'entry_scope': 'nsslapd-pluginEntryScope',
+ 'exclude_entry_scope': 'nsslapd-pluginExcludeEntryScope',
+ 'container_scope': 'nsslapd-pluginContainerScope',
+}
-def add_excludescope(inst, basedn, log, args):
- plugin = ReferentialIntegrityPlugin(inst)
- try:
- plugin.add_excludescope(args.value)
- except ldap.TYPE_OR_VALUE_EXISTS:
- log.info('Value "{}" already exists.'.format(args.value))
- else:
- log.info('successfully added nsslapd-pluginExcludeEntryScope value "{}"'.format(args.value))
-def remove_excludescope(inst, basedn, log, args):
+def referint_edit(inst, basedn, log, args):
+ log = log.getChild('referint_edit')
plugin = ReferentialIntegrityPlugin(inst)
- try:
- plugin.remove_excludescope(args.value)
- except ldap.NO_SUCH_ATTRIBUTE:
- log.error('Error: Failed to delete. No value "{0}" found.'.format(args.value))
- else:
- log.info('successfully removed nsslapd-pluginExcludeEntryScope value "{}"'.format(args.value))
+ generic_object_edit(plugin, log, args, arg_to_attr)
-def remove_all_excludescope(inst, basedn, log, args):
- plugin = ReferentialIntegrityPlugin(inst)
- plugin.remove_all_excludescope()
- log.info('successfully removed all nsslapd-pluginExcludeEntryScope values')
-def display_container_scope(inst, basedn, log, args):
- plugin = ReferentialIntegrityPlugin(inst)
- val = plugin.get_container_scope_formatted()
- if not val:
- log.info("nsslapd-pluginContainerScope is not set")
- else:
- log.info(val)
-
-def add_container_scope(inst, basedn, log, args):
- plugin = ReferentialIntegrityPlugin(inst)
- try:
- plugin.add_container_scope(args.value)
- except ldap.TYPE_OR_VALUE_EXISTS:
- log.info('Value "{}" already exists.'.format(args.value))
- else:
- log.info('successfully added nsslapd-pluginContainerScope value "{}"'.format(args.value))
-
-def remove_container_scope(inst, basedn, log, args):
- plugin = ReferentialIntegrityPlugin(inst)
- try:
- plugin.remove_container_scope(args.value)
- except ldap.NO_SUCH_ATTRIBUTE:
- log.error('Error: Failed to delete. No value "{0}" found.'.format(args.value))
- else:
- log.info('successfully removed nsslapd-pluginContainerScope value "{}"'.format(args.value))
-
-def remove_all_container_scope(inst, basedn, log, args):
- plugin = ReferentialIntegrityPlugin(inst)
- plugin.remove_all_container_scope()
- log.info('successfully removed all nsslapd-pluginContainerScope values')
+def _add_parser_args(parser):
+ parser.add_argument('--update-delay',
+ help='Sets the update interval. Special values: 0 - The check is performed immediately, '
+ '-1 - No check is performed (referint-update-delay)')
+ parser.add_argument('--membership-attr', nargs='+',
+ help='Specifies attributes to check for and update (referint-membership-attr)')
+ parser.add_argument('--entry-scope',
+ help='Defines the subtree in which the plug-in looks for the delete '
+ 'or rename operations of a user entry (nsslapd-pluginEntryScope)')
+ parser.add_argument('--exclude-entry-scope',
+ help='Defines the subtree in which the plug-in ignores any operations '
+ 'for deleting or renaming a user (nsslapd-pluginExcludeEntryScope)')
+    parser.add_argument('--container-scope',
+ help='Specifies which branch the plug-in searches for the groups to which the user belongs. '
+ 'It only updates groups that are under the specified container branch, '
+ 'and leaves all other groups not updated (nsslapd-pluginContainerScope)')
def create_parser(subparsers):
- referint_parser = subparsers.add_parser('referint', help='Manage and configure Referential Integrity plugin')
+ referint = subparsers.add_parser('referential-integrity',
+ help='Manage and configure Referential Integrity Postoperation plugin')
- subcommands = referint_parser.add_subparsers(help='action')
+ subcommands = referint.add_subparsers(help='action')
add_generic_plugin_parsers(subcommands, ReferentialIntegrityPlugin)
- delay_parser = subcommands.add_parser('delay', help='get or set update delay')
- delay_parser.set_defaults(func=manage_update_delay)
- delay_parser.add_argument('value', nargs='?', help='The value to set as update delay')
-
- attr_parser = subcommands.add_parser('attrs', help='get or manage membership attributes')
- attr_parser.set_defaults(func=display_membership_attr)
- attr_subcommands = attr_parser.add_subparsers(help='action')
- add_attr_parser = attr_subcommands.add_parser('add', help='add membership attribute')
- add_attr_parser.set_defaults(func=add_membership_attr)
- add_attr_parser.add_argument('value', help='membership attribute to add')
- del_attr_parser = attr_subcommands.add_parser('del', help='remove membership attribute')
- del_attr_parser.set_defaults(func=remove_membership_attr)
- del_attr_parser.add_argument('value', help='membership attribute to remove')
-
- scope_parser = subcommands.add_parser('scope', help='get or manage referint scope')
- scope_parser.set_defaults(func=display_scope)
- scope_subcommands = scope_parser.add_subparsers(help='action')
- add_scope_parser = scope_subcommands.add_parser('add', help='add entry scope value')
- add_scope_parser.set_defaults(func=add_scope)
- add_scope_parser.add_argument('value', help='The value to add in referint entry scope')
- del_scope_parser = scope_subcommands.add_parser('del', help='remove entry scope value')
- del_scope_parser.set_defaults(func=remove_scope)
- del_scope_parser.add_argument('value', help='The value to remove from entry scope')
- delall_scope_parser = scope_subcommands.add_parser('delall', help='remove all entry scope values')
- delall_scope_parser.set_defaults(func=remove_all_scope)
+ edit = subcommands.add_parser('set', help='Edit the plugin')
+ edit.set_defaults(func=referint_edit)
+ _add_parser_args(edit)
- exclude_parser = subcommands.add_parser('exclude', help='get or manage referint exclude scope')
- exclude_parser.set_defaults(func=display_excludescope)
- exclude_subcommands = exclude_parser.add_subparsers(help='action')
- add_exclude_parser = exclude_subcommands.add_parser('add', help='add exclude scope value')
- add_exclude_parser.set_defaults(func=add_excludescope)
- add_exclude_parser.add_argument('value', help='The value to add in exclude scope')
- del_exclude_parser = exclude_subcommands.add_parser('del', help='remove exclude scope value')
- del_exclude_parser.set_defaults(func=remove_excludescope)
- del_exclude_parser.add_argument('value', help='The value to remove from exclude scope')
- delall_exclude_parser = exclude_subcommands.add_parser('delall', help='remove all exclude scope values')
- delall_exclude_parser.set_defaults(func=remove_all_excludescope)
- container_parser = subcommands.add_parser('container', help='get or manage referint container scope')
- container_parser.set_defaults(func=display_container_scope)
- container_subcommands = container_parser.add_subparsers(help='action')
- add_container_parser = container_subcommands.add_parser('add', help='add container scope value')
- add_container_parser.set_defaults(func=add_container_scope)
- add_container_parser.add_argument('value', help='The value to add in container scope')
- del_container_parser = container_subcommands.add_parser('del', help='remove container scope value')
- del_container_parser.set_defaults(func=remove_container_scope)
- del_container_parser.add_argument('value', help='The value to remove from container scope')
- delall_container_parser = container_subcommands.add_parser('delall', help='remove all container scope values')
- delall_container_parser.set_defaults(func=remove_all_container_scope)
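The arg_to_attr keys above use underscores ('update_delay', 'container_scope') while the CLI options use hyphens; this works because argparse converts hyphens in long option names to underscores in the parsed namespace. A small demonstration:

```python
# Demonstrates the hyphen-to-underscore conversion argparse performs on
# long option names, which the arg_to_attr lookup above depends on.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--update-delay')
parser.add_argument('--membership-attr', nargs='+')
ns = parser.parse_args(['--update-delay', '0',
                        '--membership-attr', 'member', 'uniqueMember'])
# ns.update_delay == '0'; ns.membership_attr == ['member', 'uniqueMember']
```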
diff --git a/src/lib389/lib389/cli_conf/plugins/retrochangelog.py b/src/lib389/lib389/cli_conf/plugins/retrochangelog.py
index 133d811..912c127 100644
--- a/src/lib389/lib389/cli_conf/plugins/retrochangelog.py
+++ b/src/lib389/lib389/cli_conf/plugins/retrochangelog.py
@@ -1,5 +1,5 @@
# --- BEGIN COPYRIGHT BLOCK ---
-# Copyright (C) 2018 Red Hat, Inc.
+# Copyright (C) 2019 Red Hat, Inc.
# All rights reserved.
#
# License: GPL (version 3 or any later version).
@@ -7,10 +7,44 @@
# --- END COPYRIGHT BLOCK ---
from lib389.plugins import RetroChangelogPlugin
-from lib389.cli_conf import add_generic_plugin_parsers
+from lib389.cli_conf import add_generic_plugin_parsers, generic_object_edit
+
+arg_to_attr = {
+ 'is-replicated': 'isReplicated',
+ 'attribute': 'nsslapd-attribute',
+ 'directory': 'nsslapd-changelogdir',
+ 'max-age': 'nsslapd-changelogmaxage',
+}
+
+
+def retrochangelog_edit(inst, basedn, log, args):
+ log = log.getChild('retrochangelog_edit')
+ plugin = RetroChangelogPlugin(inst)
+ generic_object_edit(plugin, log, args, arg_to_attr)
+
+
+def _add_parser_args(parser):
+ parser.add_argument('--is-replicated', choices=['true', 'false'],
+                        help='Sets a flag on each entry in the changelog to indicate whether the change was '
+                             'newly made on that server or replicated over from another server (isReplicated)')
+ parser.add_argument('--attribute',
+ help='Specifies another Directory Server attribute which must be included in '
+ 'the retro changelog entries (nsslapd-attribute)')
+ parser.add_argument('--directory',
+ help='Specifies the name of the directory in which the changelog database '
+                             'is created the first time the plug-in is run (nsslapd-changelogdir)')
+ parser.add_argument('--max-age',
+ help='This attribute specifies the maximum age of any entry '
+ 'in the changelog (nsslapd-changelogmaxage)')
def create_parser(subparsers):
- retrochangelog_parser = subparsers.add_parser('retrochangelog', help='Manage and configure Retro Changelog plugin')
- subcommands = retrochangelog_parser.add_subparsers(help='action')
+ retrochangelog = subparsers.add_parser('retro-changelog', help='Manage and configure Retro Changelog plugin')
+ subcommands = retrochangelog.add_subparsers(help='action')
add_generic_plugin_parsers(subcommands, RetroChangelogPlugin)
+
+ edit = subcommands.add_parser('set', help='Edit the plugin')
+ edit.set_defaults(func=retrochangelog_edit)
+ _add_parser_args(edit)
+
+
diff --git a/src/lib389/lib389/cli_conf/plugins/rootdn_ac.py b/src/lib389/lib389/cli_conf/plugins/rootdn_ac.py
index 7e1c017..63838a9 100644
--- a/src/lib389/lib389/cli_conf/plugins/rootdn_ac.py
+++ b/src/lib389/lib389/cli_conf/plugins/rootdn_ac.py
@@ -1,229 +1,68 @@
# --- BEGIN COPYRIGHT BLOCK ---
-# Copyright (C) 2018 Red Hat, Inc.
+# Copyright (C) 2019 Red Hat, Inc.
# All rights reserved.
#
# License: GPL (version 3 or any later version).
# See LICENSE for details.
# --- END COPYRIGHT BLOCK ---
-import ldap
-
from lib389.plugins import RootDNAccessControlPlugin
-from lib389.cli_conf import add_generic_plugin_parsers
-
-
-def display_time(inst, basedn, log, args):
- plugin = RootDNAccessControlPlugin(inst)
- val = plugin.get_open_time_formatted()
- if not val:
- log.info("rootdn-open-time is not set")
- else:
- log.info(val)
- val = plugin.get_close_time_formatted()
- if not val:
- log.info("rootdn-close-time is not set")
- else:
- log.info(val)
-
-def set_open_time(inst, basedn, log, args):
- plugin = RootDNAccessControlPlugin(inst)
- plugin.set_open_time(args.value)
- log.info('rootdn-open-time set to "{}"'.format(args.value))
-
-def set_close_time(inst, basedn, log, args):
- plugin = RootDNAccessControlPlugin(inst)
- plugin.set_close_time(args.value)
- log.info('rootdn-close-time set to "{}"'.format(args.value))
-
-def clear_time(inst, basedn, log, args):
- plugin = RootDNAccessControlPlugin(inst)
- plugin.remove_open_time()
- plugin.remove_close_time()
- log.info('time-based policy was cleared')
-
-def display_ips(inst, basedn, log, args):
- plugin = RootDNAccessControlPlugin(inst)
- allowed_ips = plugin.get_allow_ip_formatted()
- denied_ips = plugin.get_deny_ip_formatted()
- if not allowed_ips and not denied_ips:
- log.info("No ip-based access control policy has been configured")
- else:
- log.info(allowed_ips)
- log.info(denied_ips)
-
-def allow_ip(inst, basedn, log, args):
- plugin = RootDNAccessControlPlugin(inst)
-
- # remove ip from denied ips
- try:
- plugin.remove_deny_ip(args.value)
- except ldap.NO_SUCH_ATTRIBUTE:
- pass
-
- try:
- plugin.add_allow_ip(args.value)
- except ldap.TYPE_OR_VALUE_EXISTS:
- pass
- log.info('{} added to rootdn-allow-ip'.format(args.value))
-
-def deny_ip(inst, basedn, log, args):
- plugin = RootDNAccessControlPlugin(inst)
-
- # remove ip from allowed ips
- try:
- plugin.remove_allow_ip(args.value)
- except ldap.NO_SUCH_ATTRIBUTE:
- pass
-
- try:
- plugin.add_deny_ip(args.value)
- except ldap.TYPE_OR_VALUE_EXISTS:
- pass
- log.info('{} added to rootdn-deny-ip'.format(args.value))
-
-def clear_all_ips(inst, basedn, log, args):
- plugin = RootDNAccessControlPlugin(inst)
- plugin.remove_all_allow_ip()
- plugin.remove_all_deny_ip()
- log.info('ip-based policy was cleared')
-
-def display_hosts(inst, basedn, log, args):
- plugin = RootDNAccessControlPlugin(inst)
- allowed_hosts = plugin.get_allow_host_formatted()
- denied_hosts = plugin.get_deny_host_formatted()
- if not allowed_hosts and not denied_hosts:
- log.info("No host-based access control policy has been configured")
- else:
- log.info(allowed_hosts)
- log.info(denied_hosts)
-
-def allow_host(inst, basedn, log, args):
- plugin = RootDNAccessControlPlugin(inst)
-
- # remove host from denied hosts
- try:
- plugin.remove_deny_host(args.value)
- except ldap.NO_SUCH_ATTRIBUTE:
- pass
-
- try:
- plugin.add_allow_host(args.value)
- except ldap.TYPE_OR_VALUE_EXISTS:
- pass
- log.info('{} added to rootdn-allow-host'.format(args.value))
-
-def deny_host(inst, basedn, log, args):
- plugin = RootDNAccessControlPlugin(inst)
-
- # remove host from allowed hosts
- try:
- plugin.remove_allow_host(args.value)
- except ldap.NO_SUCH_ATTRIBUTE:
- pass
-
- try:
- plugin.add_deny_host(args.value)
- except ldap.TYPE_OR_VALUE_EXISTS:
- pass
- log.info('{} added to rootdn-deny-host'.format(args.value))
-
-def clear_all_hosts(inst, basedn, log, args):
- plugin = RootDNAccessControlPlugin(inst)
- plugin.remove_all_allow_host()
- plugin.remove_all_deny_host()
- log.info('host-based policy was cleared')
-
-def display_days(inst, basedn, log, args):
- plugin = RootDNAccessControlPlugin(inst)
- days = plugin.get_days_allowed_formatted()
- if not days:
- log.info("No day-based access control policy has been configured")
- else:
- log.info(days)
-
-def allow_day(inst, basedn, log, args):
- plugin = RootDNAccessControlPlugin(inst)
- args.value = args.value[0:3]
- plugin.add_allow_day(args.value)
- log.info('{} added to rootdn-days-allowed'.format(args.value))
-
-def deny_day(inst, basedn, log, args):
- plugin = RootDNAccessControlPlugin(inst)
- args.value = args.value[0:3]
- plugin.remove_allow_day(args.value)
- log.info('{} removed from rootdn-days-allowed'.format(args.value))
-
-def clear_all_days(inst, basedn, log, args):
- plugin = RootDNAccessControlPlugin(inst)
- plugin.remove_days_allowed()
- log.info('day-based policy was cleared')
+from lib389.cli_conf import add_generic_plugin_parsers, generic_object_edit
+
+arg_to_attr = {
+ 'allow_host': 'rootdn-allow-host',
+ 'deny_host': 'rootdn-deny-host',
+ 'allow_ip': 'rootdn-allow-ip',
+ 'deny_ip': 'rootdn-deny-ip',
+ 'open_time': 'rootdn-open-time',
+ 'close_time': 'rootdn-close-time',
+ 'days_allowed': 'rootdn-days-allowed'
+}
+
+
+def rootdn_edit(inst, basedn, log, args):
+ log = log.getChild('rootdn_edit')
+ plugin = RootDNAccessControlPlugin(inst)
+ generic_object_edit(plugin, log, args, arg_to_attr)
+
+
+def _add_parser_args(parser):
+ parser.add_argument('--allow-host',
+ help='Sets what hosts, by fully-qualified domain name, the root user is allowed to use '
+ 'to access the Directory Server. Any hosts not listed are implicitly denied '
+ '(rootdn-allow-host)')
+ parser.add_argument('--deny-host',
+                        help='Sets what hosts, by fully-qualified domain name, the root user is not allowed to use '
+                             'to access the Directory Server. Any hosts not listed are implicitly allowed '
+                             '(rootdn-deny-host). If a host address is listed in both the rootdn-allow-host and '
+                             'rootdn-deny-host attributes, it is denied access.')
+ parser.add_argument('--allow-ip',
+                        help='Sets what IP addresses, either IPv4 or IPv6, for machines the root user is allowed '
+                             'to use to access the Directory Server. Any IP addresses not listed are implicitly '
+                             'denied (rootdn-allow-ip)')
+ parser.add_argument('--deny-ip',
+ help='Sets what IP addresses, either IPv4 or IPv6, for machines the root user is not allowed '
+ 'to use to access the Directory Server. Any IP addresses not listed are implicitly '
+                             'allowed (rootdn-deny-ip). If an IP address is listed in both the rootdn-allow-ip and '
+ 'rootdn-deny-ip attributes, it is denied access.')
+ parser.add_argument('--open-time',
+ help='Sets part of a time period or range when the root user is allowed to access '
+ 'the Directory Server. This sets when the time-based access begins (rootdn-open-time)')
+ parser.add_argument('--close-time',
+ help='Sets part of a time period or range when the root user is allowed to access '
+ 'the Directory Server. This sets when the time-based access ends (rootdn-close-time)')
+ parser.add_argument('--days-allowed',
+                        help='Gives a comma-separated list of what days the root user is allowed to use to access '
+                             'the Directory Server. Any days not listed are implicitly denied (rootdn-days-allowed)')
def create_parser(subparsers):
- rootdnac_parser = subparsers.add_parser('rootdn', help='Manage and configure RootDN Access Control plugin')
+ rootdnac_parser = subparsers.add_parser('root-dn', help='Manage and configure RootDN Access Control plugin')
subcommands = rootdnac_parser.add_subparsers(help='action')
add_generic_plugin_parsers(subcommands, RootDNAccessControlPlugin)
- time_parser = subcommands.add_parser('time', help='get or set rootdn open and close times')
- time_parser.set_defaults(func=display_time)
-
- time_subcommands = time_parser.add_subparsers(help='action')
-
- open_time_parser = time_subcommands.add_parser('open', help='set open time value')
- open_time_parser.set_defaults(func=set_open_time)
- open_time_parser.add_argument('value', help='Value to set as open time')
-
- close_time_parser = time_subcommands.add_parser('close', help='set close time value')
- close_time_parser.set_defaults(func=set_close_time)
- close_time_parser.add_argument('value', help='Value to set as close time')
-
- time_clear_parser = time_subcommands.add_parser('clear', help='reset time-based access policy')
- time_clear_parser.set_defaults(func=clear_time)
-
- ip_parser = subcommands.add_parser('ip', help='get or set ip access policy')
- ip_parser.set_defaults(func=display_ips)
-
- ip_subcommands = ip_parser.add_subparsers(help='action')
-
- ip_allow_parser = ip_subcommands.add_parser('allow', help='allow IP addr or IP addr range')
- ip_allow_parser.set_defaults(func=allow_ip)
- ip_allow_parser.add_argument('value', help='IP addr or IP addr range')
-
- ip_deny_parser = ip_subcommands.add_parser('deny', help='deny IP addr or IP addr range')
- ip_deny_parser.set_defaults(func=deny_ip)
- ip_deny_parser.add_argument('value', help='IP addr or IP addr range')
-
- ip_clear_parser = ip_subcommands.add_parser('clear', help='reset IP-based access policy')
- ip_clear_parser.set_defaults(func=clear_all_ips)
-
- host_parser = subcommands.add_parser('host', help='get or set host access policy')
- host_parser.set_defaults(func=display_hosts)
-
- host_subcommands = host_parser.add_subparsers(help='action')
-
- host_allow_parser = host_subcommands.add_parser('allow', help='allow host address')
- host_allow_parser.set_defaults(func=allow_host)
- host_allow_parser.add_argument('value', help='host address')
-
- host_deny_parser = host_subcommands.add_parser('deny', help='deny host address')
- host_deny_parser.set_defaults(func=deny_host)
- host_deny_parser.add_argument('value', help='host address')
-
- host_clear_parser = host_subcommands.add_parser('clear', help='reset host-based access policy')
- host_clear_parser.set_defaults(func=clear_all_hosts)
-
- day_parser = subcommands.add_parser('day', help='get or set days access policy')
- day_parser.set_defaults(func=display_days)
-
- day_subcommands = day_parser.add_subparsers(help='action')
-
- day_allow_parser = day_subcommands.add_parser('allow', help='allow day of the week')
- day_allow_parser.set_defaults(func=allow_day)
- day_allow_parser.add_argument('value', type=str.capitalize, help='day of the week')
+ edit = subcommands.add_parser('set', help='Edit the plugin')
+ edit.set_defaults(func=rootdn_edit)
+ _add_parser_args(edit)
- day_deny_parser = day_subcommands.add_parser('deny', help='deny day of the week')
- day_deny_parser.set_defaults(func=deny_day)
- day_deny_parser.add_argument('value', type=str.capitalize, help='day of the week')
- day_clear_parser = day_subcommands.add_parser('clear', help='reset day-based access policy')
- day_clear_parser.set_defaults(func=clear_all_days)
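[Editor's note: the rootdn parsers removed above all follow the same argparse dispatch pattern that the replacement `set`/`rootdn_edit` parser keeps using. A minimal stand-alone sketch of that pattern; the handler bodies here are illustrative stubs, not the real lib389 functions:]

```python
# Sketch of the subparser + set_defaults(func=...) dispatch pattern used
# throughout the lib389 CLI modules. A nested subcommand's set_defaults
# overrides the parent's, so 'time' alone dispatches to display_time while
# 'time open <value>' dispatches to set_open_time.
import argparse


def display_time(args):
    return 'display'


def set_open_time(args):
    return 'open:%s' % args.value


def build_parser():
    parser = argparse.ArgumentParser(prog='rootdn-demo')
    subcommands = parser.add_subparsers(help='action')

    time_parser = subcommands.add_parser('time', help='get or set time policy')
    time_parser.set_defaults(func=display_time)

    time_subcommands = time_parser.add_subparsers(help='action')
    open_time_parser = time_subcommands.add_parser('open', help='set open time value')
    open_time_parser.set_defaults(func=set_open_time)
    open_time_parser.add_argument('value', help='Value to set as open time')
    return parser


if __name__ == '__main__':
    parser = build_parser()
    args = parser.parse_args(['time', 'open', '0800'])
    print(args.func(args))
```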
diff --git a/src/lib389/lib389/cli_conf/plugins/usn.py b/src/lib389/lib389/cli_conf/plugins/usn.py
index 59349fe..634ca7f 100644
--- a/src/lib389/lib389/cli_conf/plugins/usn.py
+++ b/src/lib389/lib389/cli_conf/plugins/usn.py
@@ -1,5 +1,5 @@
# --- BEGIN COPYRIGHT BLOCK ---
-# Copyright (C) 2018 Red Hat, Inc.
+# Copyright (C) 2019 Red Hat, Inc.
# All rights reserved.
#
# License: GPL (version 3 or any later version).
@@ -17,40 +17,55 @@ def display_usn_mode(inst, basedn, log, args):
else:
log.info("USN global mode is disabled")
+
def enable_global_mode(inst, basedn, log, args):
plugin = USNPlugin(inst)
plugin.enable_global_mode()
log.info("USN global mode enabled")
+
def disable_global_mode(inst, basedn, log, args):
plugin = USNPlugin(inst)
plugin.disable_global_mode()
log.info("USN global mode disabled")
+
def tombstone_cleanup(inst, basedn, log, args):
plugin = USNPlugin(inst)
- log.info('Attempting to add task entry... This will fail if replication is enabled or if USN plug-in is disabled.')
+ log.info('Attempting to add task entry...')
+ if not plugin.status():
+ log.error("'%s' is disabled. Fix-up task can't be executed" % plugin.rdn)
task = plugin.cleanup(args.suffix, args.backend, args.maxusn)
- log.info('Successfully added task entry ' + task.dn)
+ task.wait()
+ exitcode = task.get_exit_code()
+ if exitcode != 0:
+ log.error('USN tombstone cleanup task has failed. Please check the logs')
+ else:
+ log.info('Successfully added task entry')
+
def create_parser(subparsers):
usn_parser = subparsers.add_parser('usn', help='Manage and configure USN plugin')
-
subcommands = usn_parser.add_subparsers(help='action')
-
add_generic_plugin_parsers(subcommands, USNPlugin)
- global_mode_parser = subcommands.add_parser('global', help='get or manage global usn mode')
+ global_mode_parser = subcommands.add_parser('global', help='Get or manage global usn mode (nsslapd-entryusn-global)')
global_mode_parser.set_defaults(func=display_usn_mode)
global_mode_subcommands = global_mode_parser.add_subparsers(help='action')
- on_global_mode_parser = global_mode_subcommands.add_parser('on', help='enable usn global mode')
+ on_global_mode_parser = global_mode_subcommands.add_parser('on', help='Enable usn global mode')
on_global_mode_parser.set_defaults(func=enable_global_mode)
- off_global_mode_parser = global_mode_subcommands.add_parser('off', help='disable usn global mode')
+ off_global_mode_parser = global_mode_subcommands.add_parser('off', help='Disable usn global mode')
off_global_mode_parser.set_defaults(func=disable_global_mode)
- cleanup_parser = subcommands.add_parser('cleanup', help='run the USN tombstone cleanup task')
+ cleanup_parser = subcommands.add_parser('cleanup', help='Run the USN tombstone cleanup task')
cleanup_parser.set_defaults(func=tombstone_cleanup)
cleanup_group = cleanup_parser.add_mutually_exclusive_group(required=True)
- cleanup_group.add_argument('-s', '--suffix', help="suffix where USN tombstone entries are cleaned up")
- cleanup_group.add_argument('-n', '--backend', help="backend instance in which USN tombstone entries are cleaned up (alternative to suffix)")
- cleanup_parser.add_argument('-m', '--maxusn', type=int, help="USN tombstone entries are deleted up to the entry with maxusn")
+ cleanup_group.add_argument('-s', '--suffix',
+ help='Gives the suffix or subtree in the Directory Server to run the cleanup operation '
+ 'against. If the suffix is not specified, then the back end must be given (suffix)')
+ cleanup_group.add_argument('-n', '--backend',
+ help='Gives the Directory Server instance back end, or database, to run the cleanup '
+ 'operation against. If the back end is not specified, then the suffix must be '
+ 'specified (backend)')
+ cleanup_parser.add_argument('-m', '--maxusn', type=int, help='Gives the highest USN value to delete when '
+ 'removing tombstone entries (max_usn_to_delete)')
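[Editor's note: the change above makes `tombstone_cleanup` block on the task and inspect its exit code instead of merely logging the task DN. A stand-alone sketch of that wait-and-check flow; `CleanupTask` is a stub standing in for the object returned by `plugin.cleanup()`, not the real lib389 task class:]

```python
# Sketch of the wait-and-check pattern used by the new tombstone_cleanup.
# The real lib389 task polls the task entry in the server and reads its
# exit code from it; this stub only models the control flow.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger('usn-demo')


class CleanupTask:
    def __init__(self, exit_code):
        self._exit_code = exit_code

    def wait(self):
        # Real implementation: poll the task entry until it completes.
        pass

    def get_exit_code(self):
        return self._exit_code


def run_cleanup(task, log):
    """Return True on success, mirroring the CLI flow in usn.py."""
    task.wait()
    if task.get_exit_code() != 0:
        log.error('USN tombstone cleanup task has failed. Please check the logs')
        return False
    log.info('Successfully added task entry')
    return True


if __name__ == '__main__':
    print(run_cleanup(CleanupTask(0), log))
```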
diff --git a/src/lib389/lib389/cli_conf/plugins/whoami.py b/src/lib389/lib389/cli_conf/plugins/whoami.py
deleted file mode 100644
index 2c3e62a..0000000
--- a/src/lib389/lib389/cli_conf/plugins/whoami.py
+++ /dev/null
@@ -1,16 +0,0 @@
-# --- BEGIN COPYRIGHT BLOCK ---
-# Copyright (C) 2018 Red Hat, Inc.
-# All rights reserved.
-#
-# License: GPL (version 3 or any later version).
-# See LICENSE for details.
-# --- END COPYRIGHT BLOCK ---
-
-from lib389.plugins import WhoamiPlugin
-from lib389.cli_conf import add_generic_plugin_parsers
-
-
-def create_parser(subparsers):
- whoami_parser = subparsers.add_parser('whoami', help='Manage and configure whoami plugin')
- subcommands = whoami_parser.add_subparsers(help='action')
- add_generic_plugin_parsers(subcommands, WhoamiPlugin)
diff --git a/src/lib389/lib389/cli_conf/pwpolicy.py b/src/lib389/lib389/cli_conf/pwpolicy.py
index ae97626..5755fb8 100644
--- a/src/lib389/lib389/cli_conf/pwpolicy.py
+++ b/src/lib389/lib389/cli_conf/pwpolicy.py
@@ -260,8 +260,8 @@ def create_parser(subparsers):
set_parser.add_argument('--pwdlockout', help="Set to \"on\" to enable account lockout")
set_parser.add_argument('--pwdunlock', help="Set to \"on\" to allow an account to become unlocked after the lockout duration")
set_parser.add_argument('--pwdlockoutduration', help="The number of seconds an account stays locked out")
- set_parser.add_argument('--pwdmaxfailures', help="The maximum number of allowed failed password attempts beforet the acocunt gets locked")
- set_parser.add_argument('--pwdresetfailcount', help="The number of secondsto wait before reducingthe failed login count on an account")
+ set_parser.add_argument('--pwdmaxfailures', help="The maximum number of allowed failed password attempts before the account gets locked")
+ set_parser.add_argument('--pwdresetfailcount', help="The number of seconds to wait before reducing the failed login count on an account")
# Syntax settings
set_parser.add_argument('--pwdchecksyntax', help="Set to \"on\" to Enable password syntax checking")
set_parser.add_argument('--pwdminlen', help="The minimum number of characters required in a password")
diff --git a/src/lib389/lib389/plugins.py b/src/lib389/lib389/plugins.py
index adc5eb3..71615e0 100644
--- a/src/lib389/lib389/plugins.py
+++ b/src/lib389/lib389/plugins.py
@@ -6,6 +6,7 @@
# See LICENSE for details.
# --- END COPYRIGHT BLOCK ---
+import collections
import ldap
import copy
import os.path
@@ -13,6 +14,7 @@ import os.path
from lib389 import tasks
from lib389._mapped_object import DSLdapObjects, DSLdapObject
from lib389.lint import DSRILE0001
+from lib389.utils import ensure_str, ensure_list_bytes
from lib389._constants import DN_PLUGIN
from lib389.properties import (
PLUGINS_OBJECTCLASS_VALUE, PLUGIN_PROPNAME_TO_ATTRNAME,
@@ -158,6 +160,72 @@ class AttributeUniquenessPlugin(Plugin):
self.set('uniqueness-across-all-subtrees', 'off')
+class AttributeUniquenessPlugins(DSLdapObjects):
+ """A DSLdapObjects entity which represents Attribute Uniqueness plugin instances
+
+ :param instance: An instance
+ :type instance: lib389.DirSrv
+ :param basedn: Base DN for all account entries below
+ :type basedn: str
+ """
+
+ def __init__(self, instance, basedn="cn=plugins,cn=config"):
+ super(AttributeUniquenessPlugins, self).__init__(instance=instance)
+ self._objectclasses = ['top', 'nsslapdplugin']
+ self._filterattrs = ['cn', 'nsslapd-pluginPath']
+ self._childobject = AttributeUniquenessPlugin
+ self._basedn = basedn
+ # This is used to allow entry to instance to work
+ self._list_attrlist = ['dn', 'nsslapd-pluginPath']
+ self._search_filter = "(nsslapd-pluginId=NSUniqueAttr)"
+
+ def list(self):
+ """Get a list of all plugin instances where nsslapd-pluginId: NSUniqueAttr
+
+ :returns: A list of children entries
+ """
+
+ try:
+ results = self._instance.search_ext_s(
+ base=self._basedn,
+ scope=ldap.SCOPE_ONELEVEL,
+ filterstr=self._search_filter,
+ attrlist=self._list_attrlist,
+ serverctrls=self._server_controls, clientctrls=self._client_controls
+ )
+ insts = [self._entry_to_instance(dn=r.dn, entry=r) for r in results]
+ except ldap.NO_SUCH_OBJECT:
+ # There are no objects to select from, so we return an empty array
+ insts = []
+ return insts
+
+ def _get_dn(self, dn):
+ # This will yield an '&' filter for objectClass with as many terms as needed.
+ self._log.debug('_get_dn filter = %s' % self._search_filter)
+ self._log.debug('_get_dn dn = %s' % dn)
+ return self._instance.search_ext_s(
+ base=dn,
+ scope=ldap.SCOPE_BASE,
+ filterstr=self._search_filter,
+ attrlist=self._list_attrlist,
+ serverctrls=self._server_controls, clientctrls=self._client_controls
+ )
+
+ def _get_selector(self, selector):
+ # Filter based on the objectclasses and the basedn
+ # Based on the selector, we should filter on that too.
+ # This will yield an '&' filter for objectClass with as many terms as needed.
+ filterstr = "(&(cn=%s)%s)" % (selector, self._search_filter)
+ self._log.debug('_get_selector filter = %s' % filterstr)
+ return self._instance.search_ext_s(
+ base=self._basedn,
+ scope=self._scope,
+ filterstr=filterstr,
+ attrlist=self._list_attrlist,
+ serverctrls=self._server_controls, clientctrls=self._client_controls
+ )
+
+
class LdapSSOTokenPlugin(Plugin):
"""An instance of ldapssotoken plugin entry
@@ -222,11 +290,14 @@ class MEPConfigs(DSLdapObjects):
:type basedn: str
"""
- def __init__(self, instance, basedn="cn=managed entries,cn=plugins,cn=config"):
+ def __init__(self, instance, basedn=None):
super(MEPConfigs, self).__init__(instance)
self._objectclasses = ['top', 'extensibleObject']
self._filterattrs = ['cn']
self._childobject = MEPConfig
+ # So we can set the configArea easily
+ if basedn is None:
+ basedn = "cn=managed entries,cn=plugins,cn=config"
self._basedn = basedn
@@ -243,7 +314,7 @@ class MEPTemplate(DSLdapObject):
super(MEPTemplate, self).__init__(instance, dn)
self._rdn_attribute = 'cn'
self._must_attributes = ['cn']
- self._create_objectclasses = ['top', 'extensibleObject', 'mepTemplateEntry']
+ self._create_objectclasses = ['top', 'mepTemplateEntry']
self._protected = False
@@ -258,7 +329,7 @@ class MEPTemplates(DSLdapObjects):
def __init__(self, instance, basedn):
super(MEPTemplates, self).__init__(instance)
- self._objectclasses = ['top', 'extensibleObject']
+ self._objectclasses = ['top', 'mepTemplateEntry']
self._filterattrs = ['cn']
self._childobject = MEPTemplate
self._basedn = basedn
@@ -774,6 +845,23 @@ class MemberOfSharedConfig(DSLdapObject):
self._exit_code = None
+class MemberOfSharedConfigs(DSLdapObjects):
+ """A DSLdapObjects entity which represents MemberOf config entry
+
+ :param instance: An instance
+ :type instance: lib389.DirSrv
+ :param basedn: Base DN for all account entries below
+ :type basedn: str
+ """
+
+ def __init__(self, instance, basedn=None):
+ super(MemberOfSharedConfigs, self).__init__(instance)
+ self._objectclasses = ['top', 'extensibleObject']
+ self._filterattrs = ['cn']
+ self._childobject = MemberOfSharedConfig
+ self._basedn = basedn
+
+
class RetroChangelogPlugin(Plugin):
"""An instance of Retro Changelog plugin entry
@@ -1163,6 +1251,19 @@ class PassThroughAuthenticationPlugin(Plugin):
def __init__(self, instance, dn="cn=Pass Through Authentication,cn=plugins,cn=config"):
super(PassThroughAuthenticationPlugin, self).__init__(instance, dn)
+ def get_urls(self):
+ """Get all URLs from nsslapd-pluginargNUM attributes
+
+ :returns: a list
+ """
+
+ attr_dict = collections.OrderedDict(sorted(self.get_all_attrs().items()))
+ result = []
+ for attr, value in attr_dict.items():
+ if attr.startswith("nsslapd-pluginarg"):
+ result.append(ensure_str(value[0]))
+ return result
+
class USNPlugin(Plugin):
"""A single instance of USN (Update Sequence Number) plugin entry
@@ -1629,6 +1730,107 @@ class DNAPluginConfigs(DSLdapObjects):
self._basedn = basedn
+class DNAPluginSharedConfig(DSLdapObject):
+ """A single instance of DNA Plugin config entry
+
+ :param instance: An instance
+ :type instance: lib389.DirSrv
+ :param dn: Entry DN
+ :type dn: str
+ """
+
+ def __init__(self, instance, dn=None):
+ super(DNAPluginSharedConfig, self).__init__(instance, dn)
+ self._rdn_attribute = 'dnaHostname'
+ self._must_attributes = ['dnaHostname', 'dnaPortNum']
+ self._create_objectclasses = ['top', 'dnaSharedConfig']
+ self._protected = False
+
+ def create(self, properties=None, basedn=None, ensure=False):
+ """The shared config DNA plugin entry has two RDN values
+ The function takes care of that special case
+ """
+
+ for attr in self._must_attributes:
+ if properties.get(attr, None) is None:
+ raise ldap.UNWILLING_TO_PERFORM('Attribute %s must not be None' % attr)
+
+ assert basedn is not None, "Base DN should be specified"
+
+ # Make a DN with the two items RDN and base DN
+ decomposed_dn = [[('dnaHostname', properties['dnaHostname'], 1),
+ ('dnaPortNum', properties['dnaPortNum'], 1)]] + ldap.dn.str2dn(basedn)
+ dn = ldap.dn.dn2str(decomposed_dn)
+
+ exists = False
+ if ensure:
+ # If we are running in stateful ensure mode, we need to check if the object exists, and
+ # we can see the state that it is in.
+ try:
+ self._instance.search_ext_s(dn, ldap.SCOPE_BASE, self._object_filter, attrsonly=1,
+ serverctrls=self._server_controls, clientctrls=self._client_controls,
+ escapehatch='i am sure')
+ exists = True
+ except ldap.NO_SUCH_OBJECT:
+ pass
+
+ if exists and ensure:
+ # update properties
+ self._log.debug('Exists %s' % dn)
+ self._dn = dn
+ # Now use replace_many to setup our values
+ mods = []
+ for k, v in list(properties.items()):
+ mods.append((ldap.MOD_REPLACE, k, v))
+ self._instance.modify_ext_s(self._dn, mods, serverctrls=self._server_controls,
+ clientctrls=self._client_controls, escapehatch='i am sure')
+ else:
+ self._log.debug('Creating %s' % dn)
+ mods = [('objectclass', ensure_list_bytes(self._create_objectclasses))]
+ # Bring our mods to one type and do ensure bytes on the list
+ for attr, value in properties.items():
+ if not isinstance(value, list):
+ value = [value]
+ mods.append((attr, ensure_list_bytes(value)))
+ # We rely on exceptions here to indicate failure to the parent.
+ self._log.debug('Creating entry %s : %s' % (dn, mods))
+ self._instance.add_ext_s(dn, mods, serverctrls=self._server_controls, clientctrls=self._client_controls,
+ escapehatch='i am sure')
+ # If it worked, we need to fix our instance dn
+ self._dn = dn
+
+ return self
+
+
+class DNAPluginSharedConfigs(DSLdapObjects):
+ """A DSLdapObjects entity which represents DNA Plugin config entry
+
+ :param instance: An instance
+ :type instance: lib389.DirSrv
+ :param basedn: Base DN for all account entries below
+ :type basedn: str
+ """
+
+ def __init__(self, instance, basedn=None):
+ super(DNAPluginSharedConfigs, self).__init__(instance)
+ self._objectclasses = ['top', 'dnaSharedConfig']
+ self._filterattrs = ['dnaHostname', 'dnaPortNum']
+ self._childobject = DNAPluginSharedConfig
+ self._basedn = basedn
+
+ def create(self, properties=None):
+ """Create an object under base DN of our entry
+
+ :param properties: Attributes for the new entry
+ :type properties: dict
+
+ :returns: DSLdapObject of the created entry
+ """
+
+ co = self._entry_to_instance(dn=None, entry=None)
+ return co.create(properties, self._basedn)
+
+
class Plugins(DSLdapObjects):
"""A DSLdapObjects entity which represents plugin entry
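[Editor's note: `DNAPluginSharedConfig.create` above builds a DN whose RDN carries two attribute-value pairs (dnaHostname plus dnaPortNum) joined with '+', via `ldap.dn.str2dn`/`ldap.dn.dn2str`. A simplified stand-in for that round trip that does not require python-ldap; the escaping here is deliberately naive and for illustration only:]

```python
# Sketch of the multi-valued RDN construction done in
# DNAPluginSharedConfig.create. ldap.dn.dn2str joins the AVAs of one RDN
# with '+' and RDNs with ','; this stand-in does the same string join but
# performs no RFC 4514 escaping, so values must not contain special
# characters such as ',', '+', or '='.
def multi_rdn_dn(properties, basedn):
    """Build 'dnaHostname=<h>+dnaPortNum=<p>,<basedn>' from properties."""
    rdn = 'dnaHostname=%s+dnaPortNum=%s' % (
        properties['dnaHostname'], properties['dnaPortNum'])
    return '%s,%s' % (rdn, basedn)


if __name__ == '__main__':
    dn = multi_rdn_dn({'dnaHostname': 'ldap1.example.com', 'dnaPortNum': '389'},
                      'ou=ranges,dc=example,dc=com')
    print(dn)
```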