04-19-2012, 05:42 PM
Rich Megginson
389 vs Sun DS ldapmodify performance

On 04/19/2012 11:38 AM, Russell Beall wrote:
> That is by far the largest example we have. We use groups with the uniquemember attribute linking to account entries, and more than a few of the groups have tens of thousands of values for uniquemember. We create more of these groups regularly, and it will be a problem for it to take many hours to construct such a group versus seconds or minutes with Sun DS. Our metadirectory process does not use ldapadd to create the group pre-populated; the group is created and then ldapmodify is run to add members. Three times a year, at the change of semester, many thousands of group membership changes are processed, and we already have a problem with it taking multiple days to process the entire set...


OK. If you've ruled out the possibility that some plugin is interfering with the processing, then it must be something we will have to fix in the core server. Please file a ticket at https://fedorahosted.org/389







> We also have large quantities of eduPersonEntitlement on account records, but those sets are not nearly as numerously populated. I can delete and re-add the eduPersonEntitlement attribute across 110,000 records in about 40 minutes (20 minutes each way) with 389.
>
> Russ.
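The workflow quoted above (create an empty group, then ldapmodify the members in) can be reproduced with a short generator script. This is only a sketch; the group and member DNs are hypothetical placeholders:

```shell
# Sketch: emit one LDIF modify record that adds N uniquemember values
# in a single ldapmodify operation. All DNs are hypothetical
# placeholders, not names from this thread.
gen_member_add() {
    n=$1
    printf 'dn: cn=testgroup,ou=groups,dc=example,dc=com\n'
    printf 'changetype: modify\n'
    printf 'add: uniquemember\n'
    i=1
    while [ "$i" -le "$n" ]; do
        printf 'uniquemember: uid=user%05d,ou=people,dc=example,dc=com\n' "$i"
        i=$((i + 1))
    done
}

gen_member_add 3
```

To reproduce the batch member-add against a live server, pipe the output to the client tool, e.g. `gen_member_add 100000 | ldapmodify -x -D "cn=Directory Manager" -W` (bind DN is a placeholder).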



> On Apr 19, 2012, at 10:18 AM, Rich Megginson wrote:
>
> On 04/19/2012 10:50 AM, Russell Beall wrote:
> Thanks for the tips. I scanned the dse.ldif for those plugins and I found definitions for them all, but they all have nsslapd-pluginEnabled: off.
>
> There is something special about the uniquemember attribute that requires additional processing different from other attributes... Ldapmodify of other attributes runs pretty quickly.
>
> Is uniquemember the only attribute using large numbers of multiple values in ldapmodify operations?
>
> Regards,
> Russ.
>
> On Apr 19, 2012, at 2:20 AM, Andrey Ivanov wrote:

> Hi Russell,
>
> On 18 April 2012 at 23:06, Russell Beall <beall@usc.edu> wrote:
>
> On Apr 18, 2012, at 11:15 AM, Rich Megginson wrote:
> Yeah, this particular operation has not been optimized. I believe Sun DS added explicit optimizations for this particular case.
>
> It is becoming painfully apparent as I write more detailed tests. 389 takes time to add or delete uniquemember values in proportion to the number of values being operated on, and uses about twice as much time to delete as it does to add. Sun DS appears to have an almost O(1) algorithm in play for both adding and deleting values.
>
> Is there perhaps some kind of referential integrity setting in use that forces a lookup of each value, one that we could turn off? We wouldn't need such a check because our metadirectory process handles the integrity/consistency checking already.
>
> There is a memberOf plugin that maintains the memberOf attribute for groups. I don't know whether it is activated by default. You could try to disable it. There are also the referential integrity plugin, the attribute uniqueness plugin, and maybe the USN plugin or custom indexes that could consume a lot of CPU. Make sure you've disabled them if you don't need them.
>
> @+
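A quick way to verify which of those plugins are actually enabled is to query the plugin entries under cn=config directly rather than reading dse.ldif. A sketch; the bind DN is a placeholder:

```shell
# Sketch: list plugin entries and their enabled state.
# nsslapd-pluginEnabled is the standard on/off flag on entries under
# cn=plugins,cn=config; the bind credentials are placeholders.
ldapsearch -x -D "cn=Directory Manager" -W \
    -b "cn=plugins,cn=config" -s one \
    "(objectClass=nsSlapdPlugin)" cn nsslapd-pluginEnabled
```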



--
389 users mailing list
389-users@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/389-users
 
04-19-2012, 05:45 PM
Michael R. Gettes
389 vs Sun DS ldapmodify performance

Ditto!
/mrg

On Apr 19, 2012, at 13:38, Russell Beall wrote:
> That is by far the largest example we have. We use groups with the uniquemember attribute linking to account entries, and more than a few of the groups have tens of thousands of values for uniquemember. [...]
 
04-19-2012, 06:47 PM
Andrey Ivanov
389 vs Sun DS ldapmodify performance

I forgot the Linked Attributes plugin; you could also disable it.

Do you have some exotic type of index activated for uniqueMember (like substring)? The default in dse.ldif is only the equality index:



dn: cn=uniquemember,cn=default indexes,cn=config,cn=ldbm database,cn=plugins,cn=config
objectClass: top
objectClass: nsIndex
cn: uniquemember
nsSystemIndex: false
nsIndexType: eq


In any case, batch write loads are quite particular. You could try playing with the nsslapd-db-checkpoint-interval and nsslapd-db-durable-transaction config attributes while you run your batch uniqueMember modifications. You could also try disabling logging completely, or limiting its intensity (http://directory.fedoraproject.org/wiki/Named_Pipe_Log_Script).
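The tuning just mentioned can be applied at runtime with ldapmodify. A minimal sketch, assuming the standard ldbm database config entry; the values are purely illustrative, and durable transactions should be turned back on once the batch load finishes:

```shell
# Sketch: relax BDB write durability for the duration of a batch load.
# The target DN is the standard ldbm database config entry; the bind
# DN and the values shown are illustrative, not recommendations.
ldapmodify -x -D "cn=Directory Manager" -W <<'EOF'
dn: cn=config,cn=ldbm database,cn=plugins,cn=config
changetype: modify
replace: nsslapd-db-checkpoint-interval
nsslapd-db-checkpoint-interval: 300
-
replace: nsslapd-db-durable-transaction
nsslapd-db-durable-transaction: off
EOF
```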



In my VM tests (RHEL 5, 1 vCPU Xeon E5640 @ 2.67 GHz, 1 GB memory, 389 DS v1.2.10.6) with our production data (~20k users, with groups of ~6000 members created, as in your case, by Perl scripts running ldapmodify on the same VM), a group of 6000 uniqueMembers is created in 3 minutes 10 seconds (190 s) from scratch. Using "dstat" I see that the main problem is disk writes (the db4 transaction logs):


----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai hiq siq| read  writ| recv  send|  in   out | int   csw
 91   0   0   9   0   0|   0    40M| 120B  978B|   0     0 | 316   325
 90   3   0   7   0   0|   0    40M|  60B  310B|   0     0 | 184   294
 92   2   0   5   0   1|   0    40M|  60B  310B|   0     0 | 160   297
 93   1   0   5   0   1|   0    40M|  60B  310B|   0     0 | 159   312
 93   3   1   1   0   2|   0    24M|  60B  310B|   0     0 | 206   372
 76   4   2  18   0   0|   0    40M|  60B  310B|   0     0 | 165   265
 94   0   0   6   0   0|   0    40M|  60B  310B|   0     0 | 221   275
 90   1   0   7   1   1|   0    41M|  60B  310B|   0     0 | 479   313
 86   2   0  11   1   0|   0    40M|  60B  310B|   0     0 | 403   306
 93   0   0   6   0   1|   0    20M| 120B  364B|   0     0 | 489   298
 90   1   0   9   0   0|   0    40M|  60B  310B|   0     0 | 389   296
 88   1   0  11   0   0|   0    40M|  60B  310B|   0     0 | 358   319
 76   0   0  23   1   0|   0    41M|  60B  310B|   0     0 | 403   303

@+



On 19 April 2012 at 18:50, Russell Beall <beall@usc.edu> wrote:
> Thanks for the tips. I scanned the dse.ldif for those plugins and I found definitions for them all, but they all have nsslapd-pluginEnabled: off. [...]
 
04-19-2012, 06:54 PM
Russell Beall
389 vs Sun DS ldapmodify performance

Hmm... Thanks again for the data. I will see what happens with a modification of those attributes.

However, I tried running the modifications with and without the index (I deleted it entirely). When it was present, it matched the default index, which uses only the eq index type. I think it went slightly faster without the index, but the difference was negligible compared to the overall time.

Regards,
Russ.
On Apr 19, 2012, at 11:47 AM, Andrey Ivanov wrote:
> I forgot the Linked Attributes plugin; you could also disable it. [...]
 
04-23-2012, 02:01 PM
Russell Beall
389 vs Sun DS ldapmodify performance

I've been running some more tests before setting up the ticket, but I think I have enough information now. The uniqueMember attribute has extra processing overhead, but the necessary optimization might apply across the board for all attributes. I found that adding large sets of values for other attributes also increases modification times heavily, though not quite as much as for uniqueMember. Luckily, the modification delay is based on the size of the modification rather than the size of the entry, so even if the modification is made to a 100K-value attribute, if it only removes a few members and adds a few others, the change is still relatively quick. The delay is most noticeable when first setting up a group: adding 100K members to an empty group takes 2.5 hours on 389, as opposed to 1 minute on Sun DS.

Also during this testing I have noticed a memory leak when running large quantities of ldapmodify operations. When I set up a loop to delete and then re-add the eduPersonEntitlement attribute across 100K entries, I found that memory consumption continuously increased and the server crashed after the fifth iteration of the loop. (And this one really is with ldapmodify; it is not related to my earlier issue with excessive tombstone creation from deleting and adding entire entries.) Before digging into this too deeply and filing another ticket, I wanted to ask whether this had been noticed and fixed in the 1.2.10 release. I am using the default 1.2.9.16 release. I'm guessing it hasn't, since I didn't see it in the release notes.

I am starting the server under the valgrind command you recommended a few messages back to see if I can spot the leak, though of course with valgrind in the mix the overhead and runtimes are, as might be expected, much increased.

Regards,
Russ.

On Apr 19, 2012, at 1:42 PM, Rich Megginson wrote:
> OK. If you've ruled out the possibility that some plugin is interfering with the processing, then it must be something we will have to fix in the core server. Please file a ticket at https://fedorahosted.org/389
 
04-23-2012, 02:28 PM
Rich Megginson
389 vs Sun DS ldapmodify performance

On 04/23/2012 08:01 AM, Russell Beall wrote:
> I've been running some more tests before setting up the ticket, but I think I have enough information now. The uniqueMember attribute has extra processing overhead, but the necessary optimization might apply across the board for all attributes. I found that adding large sets of values for other attributes also increases modification times heavily, though not quite as much as for uniqueMember.

uniqueMember is a DN syntax attribute. DN syntax values are "expensive" to handle due to normalization overhead.



> Luckily, the modification delay is based on the size of the modification rather than the size of the entry, so even if the modification is made to a 100K-value attribute, if it only removes a few members and adds a few others, the change is still relatively quick. The delay is most noticeable when first setting up a group: adding 100K members to an empty group takes 2.5 hours on 389, as opposed to 1 minute on Sun DS.

That's very interesting. Does Sun DS have some sort of tuning parameter for the number of values? That is, they may have some threshold for the number of values in an attribute; once the number hits that threshold, they may switch to some sort of ADT to store the values, like an AVL tree or a hash table, rather than the simple linked list used by default.







> Also during this testing I have noticed a memory leak when running large quantities of ldapmodify operations. When I set up a loop to delete and then re-add the eduPersonEntitlement attribute across 100K entries, I found that memory consumption continuously increased and the server crashed after the fifth iteration of the loop. [...] I wanted to ask whether this had been noticed and fixed in the 1.2.10 release. I am using the default 1.2.9.16 release.

Try increasing your nsslapd-cachememsize and monitoring it closely. Using the size of id2entry.db4 is a good place to start, but that will not be enough.



http://docs.redhat.com/docs/en-US/Red_Hat_Directory_Server/9.0/html/Administration_Guide/Monitoring_Server_and_Database_Activity-Monitoring_Database_Activity.html

See also https://fedorahosted.org/389/ticket/51 and https://bugzilla.redhat.com/show_bug.cgi?id=697701







> I am starting the server under the valgrind command you recommended a few messages back to see if I can spot the leak, though of course with valgrind in the mix the overhead and runtimes are much increased.

Yes, and valgrind will report many false positives that are hard to weed through.

The issue you are seeing may not be a memory leak per se - see the ticket/bug above.







> Regards,
> Russ.
 
04-23-2012, 06:20 PM
Russell Beall
389 vs Sun DS ldapmodify performance

On Apr 23, 2012, at 10:28 AM, Rich Megginson wrote:
> That's very interesting. Does Sun DS have some sort of tuning parameter for the number of values? That is, they may have some threshold for the number of values in an attribute; once the number hits that threshold, they may switch to some sort of ADT to store the values, like an AVL tree or a hash table, rather than the simple linked list used by default.

I've compared the dse.ldif of both servers, looking specifically for attributes I should transfer from our production environment to 389. The configurations for the major components are virtually identical, and I have seen no attribute relating to the number of values in a multi-valued attribute. I expect the optimization is a behind-the-scenes code improvement.

> Also during this testing I have noticed a memory leak when running large quantities of ldapmodify operations. [...]
>
> Try increasing your nsslapd-cachememsize and monitoring it closely. Using the size of id2entry.db4 is a good place to start, but that will not be enough.

Early on in the process of setting up 389 I optimized the cachememsize. I configured a 12GB cache, and the cache usage after loading all 600K entries is just under 10GB. While the ldapmodify operations are in progress, I am pretty sure I did not see an increase in the cacheentryusage monitor attribute under cn=config, but I'd have to re-check to be sure.
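One way to re-check that monitor data while a batch runs is to poll the backend's monitor entry. A sketch, assuming a backend named "userRoot" (adjust for the actual database name) and a placeholder bind DN:

```shell
# Sketch: poll entry-cache statistics for one ldbm backend while a
# batch modify runs. "userRoot" is a placeholder backend name; the
# attributes are the standard per-backend entry-cache counters.
ldapsearch -x -D "cn=Directory Manager" -W \
    -b "cn=monitor,cn=userRoot,cn=ldbm database,cn=plugins,cn=config" \
    -s base currententrycachesize maxentrycachesize currententrycachecount
```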
Unfortunately, with valgrind attached, the server uses much extra memory on startup and does not complete the startup operation before running out of memory on my 32GB machine. I have had to reduce the cachememsize so that it will start. It has been starting up for two hours and finally stopped allocating more memory at 24GB (with only a 3GB cachememsize configured). I'll probably have to delete a large quantity of entries to run the test within the bounds of the cachememsize.
> http://docs.redhat.com/docs/en-US/Red_Hat_Directory_Server/9.0/html/Administration_Guide/Monitoring_Server_and_Database_Activity-Monitoring_Database_Activity.html
>
> See also https://fedorahosted.org/389/ticket/51 and https://bugzilla.redhat.com/show_bug.cgi?id=697701

This bug appears very different from what I am looking at. The ldapmodify I run makes a single connection and transmits a large file of operations performing value deletions on 100K entries, followed by a new connection transmitting value additions to 100K entries contained within a single large file, and then loops around to do the same thing again. This emulates the behavior of our directory synchronization script, which calculates large quantities of necessary modifications and then submits them all in an LDIF file.

> Yes, and valgrind will report many false positives that are hard to weed through.
>
> The issue you are seeing may not be a memory leak per se - see the ticket/bug above.

Ok. I'll see if there is anything I can pull from the rough.

Regards,
Russ.
 
04-24-2012, 01:53 PM
Rich Megginson
389 vs Sun DS ldapmodify performance

On 04/23/2012 12:20 PM, Russell Beall wrote:
> Early on in the process of setting up 389 I optimized the cachememsize. I configured a 12GB cache, and the cache usage after loading all 600K entries is just under 10GB. While the ldapmodify operations are in progress, I am pretty sure I did not see an increase in the cacheentryusage monitor attribute under cn=config, but I'd have to re-check to be sure.

You will see an increase due to replication metadata and possibly other factors.







> Unfortunately, with valgrind attached, the server uses much extra memory on startup and does not complete the startup operation before running out of memory on my 32GB machine. [...]

Ok - so valgrind is probably not an option.







> This bug appears very different from what I am looking at. The ldapmodify I run makes a single connection and transmits a large file of operations performing value deletions on 100K entries, followed by a new connection transmitting value additions to 100K entries, and then loops around to do the same thing again. [...]

The thing in common is this: when the cache usage hits the cache max size, you see unbounded memory growth.









 
05-23-2012, 04:27 PM
Russell Beall
389 vs Sun DS ldapmodify performance

Hi,

I've been doing a lot more testing to try to flesh out the issue here. I upgraded to the latest stable version from the rmeggins repo, but found the same memory-growth behavior in that instance.

I have a few more details which clarify much better what I'm experiencing.

Unbounded memory growth over an endless chain of ldapmodify operations is seen both when the cache size limit is reached and when the maximum cache size is well above the total data size of all entries and all entries are loaded.

On the contrary, when I reduce the cachememsize to nothing (which is then reset for me to the minimum value of 512000), I see no memory growth at all, and the only memory consumed is just slightly larger than the DB cache size set.

I found that I can use some cache and still get a stable configuration by setting a cache size of only 3GB; the memory usage then reaches a plateau of 24GB (including a DB cache size that I don't remember).

A similar ratio is seen when setting a cachememsize of 1GB. The server starts out grabbing 4GB (including the 2GB of DB cache I set), then grows to 9GB, and then oscillates between 8 and 9GB while processing.

It seems that the server believes it can have an in-memory workspace equivalent to (6 * cachememsize), and this behavior seems directly linked to the cache management code.

I need to be able to set my server to use cachememsize=12GB or more, but I can't have the server believing it then has a right to 72GB of working memory. With 12GB set, the server quickly eats up the 32GB of RAM and goes well into the 16GB of swap before finally crashing.

Is this something I should just go ahead and file as a bug?

Thanks,
Russ.

==============================
Russell Beall
Programmer Analyst IV
Enterprise Identity Management
University of Southern California
beall@usc.edu
==============================

On Apr 24, 2012, at 6:53 AM, Rich Megginson wrote:
> The thing in common is this: when the cache usage hits the cache max size, you see unbounded memory growth.
 
05-23-2012, 04:36 PM
Rich Megginson
389 vs Sun DS ldapmodify performance

On 05/23/2012 10:27 AM, Russell Beall wrote:
> Hi,
>
> I've been doing a lot more testing to try to flesh out the issue here. I upgraded to the latest stable version from the rmeggins repo, but found the same memory-growth behavior in that instance.
>
> I have a few more details which clarify much better what I'm experiencing.
>
> Unbounded memory growth over an endless chain of ldapmodify operations is seen both when the cache size limit is reached and when the maximum cache size is well above the total data size of all entries and all entries are loaded.

But based on what you say later in the post, it's not unbounded, it's just not bounded by what you set as the cache size?







> On the contrary, when I reduce the cachememsize to nothing (which is then reset for me to the minimum value of 512000), I see no memory growth at all. [...]
>
> Is this something I should just go ahead and file as a bug?

Yes, please.







 
