On 9/10/19, 04:44, "Ludwig Krispenz" <lkrispen(a)redhat.com> wrote:
On 09/10/2019 02:37 AM, William Brown wrote:
> On 10 Sep 2019, at 09:58, Morgan, Iain (ARC-TN)[InuTeq, LLC] <iain.morgan(a)nasa.gov> wrote:
>
>
>
> On 9/8/19, 15:40, "William Brown" <wbrown(a)suse.de> wrote:
>
>
>
>> On 7 Sep 2019, at 08:33, Morgan, Iain (ARC-TN)[InuTeq, LLC] <iain.morgan(a)nasa.gov> wrote:
>>
>> Hello Marc,
>>
>> Yes, it is 389-ds-base-1.3.9.1-10.el7, but we are not using IPA and there are no memberOf plugin errors. The actual modrdn operations have error=0.
> If we may, can we ask what the modrdn operations were, and a little about your tree layout, so we can try to understand more about this?
>
>
> Sure, there's nothing unusual regarding the tree layout. There's a single backend with ou=People,.... and ou=Groups,.... One thing that may be relevant is that this is a stand-alone server -- replication has not been configured.
Have you configured a replica ID, a replication agreement, or the changelog, though? I want to follow up with our replication expert about why URP is getting involved here, when you say replication isn't configured.
If this is really a standalone consumer without replication configured, this is probably a consequence of splitting the multimaster (MMR) plugin calls from the other plugin calls: the MMR plugin can still be invoked even though it has nothing to do, and it just tries anyway :-(
So the message is harmless, although annoying, and we should fix it.
Yes, this is truly a stand-alone server. It's currently being used for testing locally written administrative scripts, and no replication ID, replication agreements, or changelog modifications have been configured. The changes thus far have largely been limited to the password policy and TLS configuration.
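
For reference, here is one way to confirm that no replica, agreement, or changelog entries exist under cn=config. This is a minimal python-ldap sketch; the server URL and credentials are placeholders, not the actual configuration of this instance:

    import ldap

    # Placeholders: adjust the URL and credentials for the instance under test.
    conn = ldap.initialize("ldap://localhost:389")
    conn.simple_bind_s("cn=Directory Manager", "password")

    # Replica and agreement entries live under cn=mapping tree,cn=config;
    # the 1.3.x changelog, if configured, is cn=changelog5,cn=config.
    checks = [
        ("cn=mapping tree,cn=config", "(objectClass=nsDS5Replica)"),
        ("cn=mapping tree,cn=config", "(objectClass=nsDS5ReplicationAgreement)"),
        ("cn=config", "(cn=changelog5)"),
    ]
    for base, flt in checks:
        entries = conn.search_s(base, ldap.SCOPE_SUBTREE, flt, ["cn"])
        print(flt, "->", len(entries), "entries")  # expect 0 on a standalone server

    conn.unbind_s()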
Your hypothesis seems reasonable. The next step in our testing is to create another instance and test replication. Hopefully, these errors will disappear once we reach that stage.
--
Iain Morgan
> From the access log:
>
> [09/Sep/2019:15:05:08.577399600 -0700] conn=2233897 op=2 SRCH base="ou=Groups,dc=nas,dc=nasa,dc=gov" scope=2 filter="(|(cn=temp)(cn=testgroup))" attrs="cn"
> [09/Sep/2019:15:05:08.577630700 -0700] conn=2233897 op=2 RESULT err=0 tag=101 nentries=1 etime=0.0000323261
> [09/Sep/2019:15:05:08.578316196 -0700] conn=2233897 op=3 MODRDN dn="cn=temp,ou=Groups,dc=nas,dc=nasa,dc=gov" newrdn="cn=testgroup" newsuperior="(null)"
> [09/Sep/2019:15:05:08.581595207 -0700] conn=2233897 op=3 RESULT err=0 tag=109 nentries=0 etime=0.0003326527
>
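As a point of reference, the MODRDN in that access log corresponds to a rename along these lines (a python-ldap sketch; the URL and credentials are placeholders, and newsuperior=None matches the "(null)" in the log):

    import ldap

    conn = ldap.initialize("ldap://localhost:389")          # placeholder URL
    conn.simple_bind_s("cn=Directory Manager", "password")  # placeholder credentials

    # Rename cn=temp to cn=testgroup under the same parent, as in op=3 above.
    # newsuperior=None leaves the entry under ou=Groups ("(null)" in the log);
    # delold=1 removes the old RDN value from the entry.
    conn.rename_s("cn=temp,ou=Groups,dc=nas,dc=nasa,dc=gov",
                  "cn=testgroup",
                  newsuperior=None,
                  delold=1)
    conn.unbind_s()
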
> From the errors log:
>
> [09/Sep/2019:15:05:08.580235717 -0700] - ERR - conn=2233897 op=3 - urp_fixup_add_cenotaph - failed to add cenotaph, err= 21
I think you can ignore this: with no replication, cenotaphs not being added is not an issue. If you were in a replication topology, this would be a more significant error. Cenotaphs are needed to resolve some replication edge cases in update resolution and are an internal part of our replication system. But as mentioned, I will follow up on why URP is being invoked here, so I need some more information from you.
Thanks, hope this helps,
> --
> Iain Morgan
>