Eddie Lascu
2009-07-27 15:32:35 UTC
Hi guys,
I need your help with an issue I am having right now. I have several windows
services deployed. These services gather some XML files from the internet
and then parse the content and save the information in the database. The
database is Oracle 11g, so the access is done through ODP.NET. I had some
issues before (and some of you with good memory may recall my previous
questions) where I noticed the memory used by these services growing very
high. At the time, I understood that some classes in the ODP.NET library wrap unmanaged code and are known for not cleaning up properly after themselves. As a result, I rewrote my code so that every dynamically allocated ODP.NET object was wrapped in a "using" statement, like this:
using (OracleCommand objDbCommand = new OracleCommand(strSqlSelectStatement, objConnection))
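To give a fuller picture of the pattern I settled on (this is just a sketch; the connection string, SQL, and column names below are placeholders, not my real ones), every connection, command and reader is disposed deterministically:

```csharp
using System;
using Oracle.DataAccess.Client; // ODP.NET provider

class FeedImporter
{
    static void Main()
    {
        // Placeholder connection string -- not my real credentials.
        string connStr = "Data Source=ORCL;User Id=scott;Password=tiger;";

        using (OracleConnection objConnection = new OracleConnection(connStr))
        {
            objConnection.Open();

            using (OracleCommand objDbCommand =
                new OracleCommand("SELECT id, payload FROM feed_items", objConnection))
            using (OracleDataReader objReader = objDbCommand.ExecuteReader())
            {
                while (objReader.Read())
                {
                    // Parse and save each row here.
                    Console.WriteLine(objReader.GetValue(0));
                }
            }
        } // Dispose() runs even if an exception is thrown, releasing the handles.
    }
}
```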
All was fine for a very long time. All my services kept a constant memory footprint, hovering around 55-60 MB. This was the case for months. At the beginning I checked the size almost daily; after a while my confidence grew and I only checked it once a week or so. Every time I looked at the size of all my processes (I had 4 that followed pretty much the same approach to dealing with those Oracle objects), the size was around that 55-60 MB mark.
Now, all of a sudden, this last weekend, the size of some of the services shot up to about 690 MB, despite the fact that the service was running the exact same code it ran a month ago. This is a potentially fatal problem for the server, which could reach a point where it can no longer allocate memory, so I need to find a way to understand what the heck is going on. Unfortunately, I do not have any metrics implemented in my application to log the total amount of memory used. That would have been a very good indication of when exactly this surge started to happen. I could then have checked the Event Viewer to see if the server got hit by a nuclear bomb or something at that precise moment. I really don't get this. I mean, that's why we moved to .NET in the first place: to be rid of such concerns, like making sure every dynamically allocated object is eventually destroyed.
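For what it's worth, the kind of metric I wish I had been logging all along is simple enough; something like this sketch, run on a timer inside the service (the one-minute interval and log destination are arbitrary choices, not anything from my actual code):

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class MemoryLogger
{
    static void Main()
    {
        while (true)
        {
            Process proc = Process.GetCurrentProcess();
            proc.Refresh(); // pick up the current OS-level figures

            // Managed heap vs. OS-level working set / private bytes:
            // a gap between them points at unmanaged (e.g. OCI) allocations.
            Console.WriteLine("{0:u} managed={1} KB workingSet={2} KB private={3} KB",
                DateTime.UtcNow,
                GC.GetTotalMemory(false) / 1024,
                proc.WorkingSet64 / 1024,
                proc.PrivateMemorySize64 / 1024);

            Thread.Sleep(60 * 1000);
        }
    }
}
```

With a log like that, pinpointing the exact moment the surge started would be trivial.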
Can anyone give me an idea of how to investigate this issue? Are there tools out there that could be used to peek at the memory allocated by a process and see which objects take up so much memory? For the time being, I have the processes running, but two of them are at or close to 690 MB and I don't know how long I will be able to keep them running.
If you had an issue like this, what would you try to do?
Any suggestion will be greatly appreciated.
TIA,
Eddie
===================================
View archives and manage your subscription(s) at http://peach.ease.lsoft.com/archives