Before any questions or explanations from my side, please forgive me if I wasn't clear in the question title.
I just started using the 64-bit CLRProfiler that can be downloaded from Microsoft's website, and I was trying to investigate how many bytes my application allocates, how many of those are collected by the GC in each generation, and what finally ends up on the heap.
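(As a sanity check against the profiler's numbers, I have also been dumping GC counters from inside the app. This is only a minimal sketch; GC.CollectionCount and GC.GetTotalMemory are the standard BCL calls, everything else is just illustrative:)

    using System;

    static class GcStats
    {
        public static void Dump()
        {
            // Forcing a full, blocking collection first means the figure below
            // counts only objects that are still reachable (the surviving heap).
            long liveBytes = GC.GetTotalMemory(forceFullCollection: true);
            Console.WriteLine($"Live heap after full GC: {liveBytes:N0} bytes");

            // How many collections each generation has run since startup.
            for (int gen = 0; gen <= GC.MaxGeneration; gen++)
                Console.WriteLine($"Gen {gen} collections: {GC.CollectionCount(gen)}");
        }
    }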
I used the application for several minutes and then generated the report from CLRProfiler.
I got all the information I was asking for here: Clrprofile summary.
Yet the problem is that I want to understand whether my object allocation logic throughout the application resulting in such a huge final heap is necessarily a bad thing, or whether it depends on factors like lingering references (a and b both hold a reference to c; a releases its reference to c, yet the GC can't collect c because b still references it). Also, how can the objects I allocate total X bytes while the final heap size ends up much larger than X? Or am I simply not understanding heap allocation properly?
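To make that concrete, here is a minimal sketch of my mental model (the Node class and the numbers are made up for illustration; GC.GetTotalMemory is the real BCL call). My understanding is that the CLR GC traces reachability from roots rather than counting references, so an unreachable cycle is still collectable, and the total bytes allocated over a run can dwarf the heap that survives a collection; please correct me if this is wrong:

    using System;
    using System.Collections.Generic;

    class Node
    {
        public Node Other;                    // lets us build a reference cycle
        public byte[] Payload = new byte[1024];
    }

    class AllocationVsHeap
    {
        // Anything reachable from this static field is a GC root and survives.
        static readonly List<Node> Roots = new List<Node>();

        static void Main()
        {
            // 1) ~100 MB of short-lived allocations. The cumulative allocated
            //    figure is huge, but nothing roots these, so none survive.
            long checksum = 0;
            for (int i = 0; i < 100_000; i++)
            {
                var temp = new byte[1024];
                checksum += temp.Length;      // use it so it isn't optimised away
            }

            // 2) A reference cycle that is NOT reachable from any root. The CLR
            //    GC traces from roots rather than counting references, so this
            //    cycle is still collectable once a and b are no longer referenced.
            var a = new Node();
            var b = new Node();
            a.Other = b;
            b.Other = a;
            a = null;
            b = null;

            // 3) Only these rooted objects contribute to the final heap.
            for (int i = 0; i < 10; i++)
                Roots.Add(new Node());

            long liveBytes = GC.GetTotalMemory(forceFullCollection: true);
            Console.WriteLine($"Checksum: {checksum:N0}");
            Console.WriteLine($"Surviving heap: {liveBytes:N0} bytes"); // far below the ~100 MB allocated
        }
    }

If that model is right, then a final heap of X live bytes says nothing about how many bytes were allocated in total over the run, only about what is still reachable at the end.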
Any guidance on how to assess whether I am handling my allocations and heap size properly would be appreciated as well.
I know I am not asking a very specific question, but please do ask if you want more elaboration.
EDIT:
I did something terribly silly: I generated the report before the application terminated, so the number of allocated bytes wasn't calculated properly. I re-generated it after terminating the application, and the results make more sense now.
But my questions are still valid, I suppose; any help in building a better understanding of allocation vs. final heap size would be appreciated.