Search Results

Search found 17770 results on 711 pages for 'inside the concurrent collections'.

Page 10/711 | < Previous Page | 6 7 8 9 10 11 12 13 14 15 16 17  | Next Page >

  • Understanding G1 GC Logs

    - by poonam
    The purpose of this post is to explain the meaning of GC logs generated with some tracing and diagnostic options for G1 GC. We will take a look at the output generated with PrintGCDetails which is a product flag and provides the most detailed level of information. Along with that, we will also look at the output of two diagnostic flags that get enabled with the -XX:+UnlockDiagnosticVMOptions option - G1PrintRegionLivenessInfo that prints the occupancy and the amount of space used by live objects in each region at the end of the marking cycle and G1PrintHeapRegions that provides detailed information on the heap regions being allocated and reclaimed. We will be looking at the logs generated with JDK 1.7.0_04 using these options. Option -XX:+PrintGCDetails Here's a sample log of a G1 collection generated with PrintGCDetails. 0.522: [GC pause (young), 0.15877971 secs] [Parallel Time: 157.1 ms] [GC Worker Start (ms): 522.1 522.2 522.2 522.2 Avg: 522.2, Min: 522.1, Max: 522.2, Diff: 0.1] [Ext Root Scanning (ms): 1.6 1.5 1.6 1.9 Avg: 1.7, Min: 1.5, Max: 1.9, Diff: 0.4] [Update RS (ms): 38.7 38.8 50.6 37.3 Avg: 41.3, Min: 37.3, Max: 50.6, Diff: 13.3] [Processed Buffers : 2 2 3 2 Sum: 9, Avg: 2, Min: 2, Max: 3, Diff: 1] [Scan RS (ms): 9.9 9.7 0.0 9.7 Avg: 7.3, Min: 0.0, Max: 9.9, Diff: 9.9] [Object Copy (ms): 106.7 106.8 104.6 107.9 Avg: 106.5, Min: 104.6, Max: 107.9, Diff: 3.3] [Termination (ms): 0.0 0.0 0.0 0.0 Avg: 0.0, Min: 0.0, Max: 0.0, Diff: 0.0] [Termination Attempts : 1 4 4 6 Sum: 15, Avg: 3, Min: 1, Max: 6, Diff: 5] [GC Worker End (ms): 679.1 679.1 679.1 679.1 Avg: 679.1, Min: 679.1, Max: 679.1, Diff: 0.1] [GC Worker (ms): 156.9 157.0 156.9 156.9 Avg: 156.9, Min: 156.9, Max: 157.0, Diff: 0.1] [GC Worker Other (ms): 0.3 0.3 0.3 0.3 Avg: 0.3, Min: 0.3, Max: 0.3, Diff: 0.0] [Clear CT: 0.1 ms] [Other: 1.5 ms] [Choose CSet: 0.0 ms] [Ref Proc: 0.3 ms] [Ref Enq: 0.0 ms] [Free CSet: 0.3 ms] [Eden: 12M(12M)->0B(10M) Survivors: 0B->2048K Heap: 13M(64M)->9739K(64M)] [Times: user=0.59 sys=0.02, real=0.16 secs] This is the typical log of an Evacuation Pause (G1 collection) in which live objects are copied from one set of regions (young OR young+old) to another set. It is a stop-the-world activity and all the application threads are stopped at a safepoint during this time. This pause is made up of several sub-tasks indicated by the indentation in the log entries. Here is the topmost line that gets printed for the Evacuation Pause. 0.522: [GC pause (young), 0.15877971 secs] This is the highest level information telling us that it is an Evacuation Pause that started at 0.522 secs from the start of the process, in which all the regions being evacuated are Young i.e. Eden and Survivor regions. This collection took 0.15877971 secs to finish. Evacuation Pauses can be mixed as well, in which case the set of regions selected includes all of the young regions as well as some old regions. 1.730: [GC pause (mixed), 0.32714353 secs] Let's take a look at all the sub-tasks performed in this Evacuation Pause. [Parallel Time: 157.1 ms] Parallel Time is the total elapsed time spent by all the parallel GC worker threads. The following lines correspond to the parallel tasks performed by these worker threads in this total parallel time, which in this case is 157.1 ms. [GC Worker Start (ms): 522.1 522.2 522.2 522.2 Avg: 522.2, Min: 522.1, Max: 522.2, Diff: 0.1] The first line tells us the start time of each of the worker threads in milliseconds. 
The start times are ordered with respect to the worker thread ids – thread 0 started at 522.1ms and thread 1 started at 522.2ms from the start of the process. The second line tells the Avg, Min, Max and Diff of the start times of all of the worker threads. [Ext Root Scanning (ms): 1.6 1.5 1.6 1.9 Avg: 1.7, Min: 1.5, Max: 1.9, Diff: 0.4] This gives us the time spent by each worker thread scanning the roots (globals, registers, thread stacks and VM data structures). Here, thread 0 took 1.6ms to perform the root scanning task and thread 1 took 1.5 ms. The second line clearly shows the Avg, Min, Max and Diff of the times spent by all the worker threads. [Update RS (ms): 38.7 38.8 50.6 37.3 Avg: 41.3, Min: 37.3, Max: 50.6, Diff: 13.3] Update RS gives us the time each thread spent in updating the Remembered Sets. Remembered Sets are the data structures that keep track of the references that point into a heap region. Mutator threads keep changing the object graph and thus the references that point into a particular region. We keep track of these changes in buffers called Update Buffers. The Update RS sub-task processes the update buffers that were not able to be processed concurrently, and updates the corresponding remembered sets of all regions. [Processed Buffers : 2 2 3 2 Sum: 9, Avg: 2, Min: 2, Max: 3, Diff: 1] This tells us the number of Update Buffers (mentioned above) processed by each worker thread. [Scan RS (ms): 9.9 9.7 0.0 9.7 Avg: 7.3, Min: 0.0, Max: 9.9, Diff: 9.9] These are the times each worker thread spent in scanning the Remembered Sets. The Remembered Set of a region contains cards that correspond to the references pointing into that region. This phase scans those cards looking for the references pointing into all the regions of the collection set. [Object Copy (ms): 106.7 106.8 104.6 107.9 Avg: 106.5, Min: 104.6, Max: 107.9, Diff: 3.3] These are the times spent by each worker thread copying live objects from the regions in the Collection Set to the other regions. [Termination (ms): 0.0 0.0 0.0 0.0 Avg: 0.0, Min: 0.0, Max: 0.0, Diff: 0.0] Termination time is the time spent by the worker thread offering to terminate. But before terminating, it checks the work queues of other threads and if there are still object references in other work queues, it tries to steal object references, and if it succeeds in stealing a reference, it processes that and offers to terminate again. [Termination Attempts : 1 4 4 6 Sum: 15, Avg: 3, Min: 1, Max: 6, Diff: 5] This gives the number of times each thread has offered to terminate. [GC Worker End (ms): 679.1 679.1 679.1 679.1 Avg: 679.1, Min: 679.1, Max: 679.1, Diff: 0.1] These are the times in milliseconds at which each worker thread stopped. [GC Worker (ms): 156.9 157.0 156.9 156.9 Avg: 156.9, Min: 156.9, Max: 157.0, Diff: 0.1] These are the total lifetimes of each worker thread. [GC Worker Other (ms): 0.3 0.3 0.3 0.3 Avg: 0.3, Min: 0.3, Max: 0.3, Diff: 0.0] These are the times that each worker thread spent performing other tasks that we have not accounted for above in the total Parallel Time. [Clear CT: 0.1 ms] This is the time spent in clearing the Card Table. This task is performed in serial mode. [Other: 1.5 ms] Time spent in some other tasks, listed below. The following sub-tasks (which individually may be parallelized) are performed serially. [Choose CSet: 0.0 ms] Time spent in selecting the regions for the Collection Set. [Ref Proc: 0.3 ms] Total time spent in processing Reference objects. 
[Ref Enq: 0.0 ms] Time spent in enqueuing references to the ReferenceQueues. [Free CSet: 0.3 ms] Time spent in freeing the collection set data structure. [Eden: 12M(12M)->0B(13M) Survivors: 0B->2048K Heap: 14M(64M)->9739K(64M)] This line gives the details on the heap size changes with the Evacuation Pause. This shows that Eden had an occupancy of 12M and its capacity was also 12M before the collection. After the collection, its occupancy got reduced to 0 since everything is evacuated/promoted from Eden during a collection, and its target size grew to 13M. The new Eden capacity of 13M is not reserved at this point. This value is the target size of the Eden. Regions are added to Eden as the demand is made and when the added regions reach the target size, we start the next collection. Similarly, Survivors had an occupancy of 0 bytes and it grew to 2048K after the collection. The total heap occupancy and capacity were 14M and 64M respectively before the collection, and they became 9739K and 64M after the collection. Apart from the evacuation pauses, G1 also performs concurrent-marking to build the live data information of regions. 1.416: [GC pause (young) (initial-mark), 0.62417980 secs] ….... 2.042: [GC concurrent-root-region-scan-start] 2.067: [GC concurrent-root-region-scan-end, 0.0251507] 2.068: [GC concurrent-mark-start] 3.198: [GC concurrent-mark-reset-for-overflow] 4.053: [GC concurrent-mark-end, 1.9849672 sec] 4.055: [GC remark 4.055: [GC ref-proc, 0.0000254 secs], 0.0030184 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 4.088: [GC cleanup 117M->106M(138M), 0.0015198 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 4.090: [GC concurrent-cleanup-start] 4.091: [GC concurrent-cleanup-end, 0.0002721] The first phase of a marking cycle is Initial Marking where all the objects directly reachable from the roots are marked and this phase is piggy-backed on a fully young Evacuation Pause. 2.042: [GC concurrent-root-region-scan-start] This marks the start of a concurrent phase that scans the set of root-regions which are directly reachable from the survivors of the initial marking phase. 2.067: [GC concurrent-root-region-scan-end, 0.0251507] End of the concurrent root region scan phase and it lasted for 0.0251507 seconds. 2.068: [GC concurrent-mark-start] Start of the concurrent marking at 2.068 secs from the start of the process. 3.198: [GC concurrent-mark-reset-for-overflow] This indicates that the global marking stack had become full and there was an overflow of the stack. Concurrent marking detected this overflow and had to reset the data structures to start the marking again. 4.053: [GC concurrent-mark-end, 1.9849672 sec] End of the concurrent marking phase and it lasted for 1.9849672 seconds. 4.055: [GC remark 4.055: [GC ref-proc, 0.0000254 secs], 0.0030184 secs] This corresponds to the remark phase which is a stop-the-world phase. It completes the left over marking work (SATB buffers processing) from the previous phase. In this case, this phase took 0.0030184 secs, out of which 0.0000254 secs were spent on Reference processing. 4.088: [GC cleanup 117M->106M(138M), 0.0015198 secs] This is the Cleanup phase, which is again a stop-the-world phase. It goes through the marking information of all the regions, computes the live data information of each region, resets the marking data structures and sorts the regions according to their gc-efficiency. In this example, the total heap size is 138M and after the live data counting it was found that the total live data size dropped down from 117M to 106M. 
4.090: [GC concurrent-cleanup-start] This concurrent cleanup phase frees up the regions that were found to be empty (didn't contain any live data) during the previous stop-the-world phase. 4.091: [GC concurrent-cleanup-end, 0.0002721] Concurrent cleanup phase took 0.0002721 secs to free up the empty regions. Option -XX:G1PrintRegionLivenessInfo Now, let's look at the output generated with the flag G1PrintRegionLivenessInfo. This is a diagnostic option and gets enabled with -XX:+UnlockDiagnosticVMOptions. G1PrintRegionLivenessInfo prints the live data information of each region during the Cleanup phase of the concurrent-marking cycle. 26.896: [GC cleanup ### PHASE Post-Marking @ 26.896### HEAP committed: 0x02e00000-0x0fe00000 reserved: 0x02e00000-0x12e00000 region-size: 1048576 Cleanup phase of the concurrent-marking cycle started at 26.896 secs from the start of the process and this live data information is being printed after the marking phase. Committed G1 heap ranges from 0x02e00000 to 0x0fe00000 and the total G1 heap reserved by JVM is from 0x02e00000 to 0x12e00000. Each region in the G1 heap is of size 1048576 bytes. ### type address-range used prev-live next-live gc-eff### (bytes) (bytes) (bytes) (bytes/ms) This is the header of the output that tells us about the type of the region, address-range of the region, used space in the region, live bytes in the region with respect to the previous marking cycle, live bytes in the region with respect to the current marking cycle and the GC efficiency of that region. ### FREE 0x02e00000-0x02f00000 0 0 0 0.0 This is a Free region. ### OLD 0x02f00000-0x03000000 1048576 1038592 1038592 0.0 Old region with address-range from 0x02f00000 to 0x03000000. Total used space in the region is 1048576 bytes, live bytes as per the previous marking cycle are 1038592 and live bytes with respect to the current marking cycle are also 1038592. The GC efficiency has been computed as 0. ### EDEN 0x03400000-0x03500000 20992 20992 20992 0.0 This is an Eden region. ### HUMS 0x0ae00000-0x0af00000 1048576 1048576 1048576 0.0### HUMC 0x0af00000-0x0b000000 1048576 1048576 1048576 0.0### HUMC 0x0b000000-0x0b100000 1048576 1048576 1048576 0.0### HUMC 0x0b100000-0x0b200000 1048576 1048576 1048576 0.0### HUMC 0x0b200000-0x0b300000 1048576 1048576 1048576 0.0### HUMC 0x0b300000-0x0b400000 1048576 1048576 1048576 0.0### HUMC 0x0b400000-0x0b500000 1001480 1001480 1001480 0.0 These are the continuous set of regions called Humongous regions for storing a large object. HUMS (Humongous starts) marks the start of the set of humongous regions and HUMC (Humongous continues) tags the subsequent regions of the humongous regions set. ### SURV 0x09300000-0x09400000 16384 16384 16384 0.0 This is a Survivor region. ### SUMMARY capacity: 208.00 MB used: 150.16 MB / 72.19 % prev-live: 149.78 MB / 72.01 % next-live: 142.82 MB / 68.66 % At the end, a summary is printed listing the capacity, the used space and the change in the liveness after the completion of concurrent marking. In this case, G1 heap capacity is 208MB, total used space is 150.16MB which is 72.19% of the total heap size, live data in the previous marking was 149.78MB which was 72.01% of the total heap size and the live data as per the current marking is 142.82MB which is 68.66% of the total heap size. Option -XX:+G1PrintHeapRegions G1PrintHeapRegions option logs the regions related events when regions are committed, allocated into or are reclaimed. 
COMMIT/UNCOMMIT events G1HR COMMIT [0x6e900000,0x6ea00000]G1HR COMMIT [0x6ea00000,0x6eb00000] Here, the heap is being initialized or expanded and the region (with bottom: 0x6eb00000 and end: 0x6ec00000) is being freshly committed. COMMIT events are always generated in order i.e. the next COMMIT event will always be for the uncommitted region with the lowest address. G1HR UNCOMMIT [0x72700000,0x72800000]G1HR UNCOMMIT [0x72600000,0x72700000] Opposite to COMMIT. The heap got shrunk at the end of a Full GC and the regions are being uncommitted. Like COMMIT, UNCOMMIT events are also generated in order i.e. the next UNCOMMIT event will always be for the committed region with the highest address. GC Cycle events G1HR #StartGC 7G1HR CSET 0x6e900000G1HR REUSE 0x70500000G1HR ALLOC(Old) 0x6f800000G1HR RETIRE 0x6f800000 0x6f821b20G1HR #EndGC 7 This shows start and end of an Evacuation pause. This event is followed by a GC counter tracking both evacuation pauses and Full GCs. Here, this is the 7th GC since the start of the process. G1HR #StartFullGC 17G1HR UNCOMMIT [0x6ed00000,0x6ee00000]G1HR POST-COMPACTION(Old) 0x6e800000 0x6e854f58G1HR #EndFullGC 17 Shows start and end of a Full GC. This event is also followed by the same GC counter as above. This is the 17th GC since the start of the process. ALLOC events G1HR ALLOC(Eden) 0x6e800000 The region with bottom 0x6e800000 just started being used for allocation. In this case it is an Eden region and allocated into by a mutator thread. G1HR ALLOC(StartsH) 0x6ec00000 0x6ed00000G1HR ALLOC(ContinuesH) 0x6ed00000 0x6e000000 Regions being used for the allocation of Humongous object. The object spans over two regions. G1HR ALLOC(SingleH) 0x6f900000 0x6f9eb010 Single region being used for the allocation of Humongous object. G1HR COMMIT [0x6ee00000,0x6ef00000]G1HR COMMIT [0x6ef00000,0x6f000000]G1HR COMMIT [0x6f000000,0x6f100000]G1HR COMMIT [0x6f100000,0x6f200000]G1HR ALLOC(StartsH) 0x6ee00000 0x6ef00000G1HR ALLOC(ContinuesH) 0x6ef00000 0x6f000000G1HR ALLOC(ContinuesH) 0x6f000000 0x6f100000G1HR ALLOC(ContinuesH) 0x6f100000 0x6f102010 Here, Humongous object allocation request could not be satisfied by the free committed regions that existed in the heap, so the heap needed to be expanded. Thus new regions are committed and then allocated into for the Humongous object. G1HR ALLOC(Old) 0x6f800000 Old region started being used for allocation during GC. G1HR ALLOC(Survivor) 0x6fa00000 Region being used for copying old objects into during a GC. Note that Eden and Humongous ALLOC events are generated outside the GC boundaries and Old and Survivor ALLOC events are generated inside the GC boundaries. Other Events G1HR RETIRE 0x6e800000 0x6e87bd98 Retire and stop using the region having bottom 0x6e800000 and top 0x6e87bd98 for allocation. Note that most regions are full when they are retired and we omit those events to reduce the output volume. A region is retired when another region of the same type is allocated or we reach the start or end of a GC(depending on the region). So for Eden regions: For example: 1. ALLOC(Eden) Foo2. ALLOC(Eden) Bar3. StartGC At point 2, Foo has just been retired and it was full. At point 3, Bar was retired and it was full. If they were not full when they were retired, we will have a RETIRE event: 1. ALLOC(Eden) Foo2. RETIRE Foo top3. ALLOC(Eden) Bar4. StartGC G1HR CSET 0x6e900000 Region (bottom: 0x6e900000) is selected for the Collection Set. The region might have been selected for the collection set earlier (i.e. when it was allocated). 
However, we generate the CSET events for all regions in the CSet at the start of a GC to make sure there's no confusion about which regions are part of the CSet. G1HR POST-COMPACTION(Old) 0x6e800000 0x6e839858 A POST-COMPACTION event is generated for each non-empty region in the heap after a full compaction. A full compaction moves objects around, so we don't know what the resulting shape of the heap is (which regions were written to, which were emptied, etc.). To deal with this, we generate a POST-COMPACTION event for each non-empty region with its type (old/humongous) and the heap boundaries. At this point we should only have Old and Humongous regions, as we have collapsed the young generation, so we should not have eden and survivors. POST-COMPACTION events are generated within the Full GC boundary. G1HR CLEANUP 0x6f400000G1HR CLEANUP 0x6f300000G1HR CLEANUP 0x6f200000 These regions were found empty after the remark phase of Concurrent Marking and are reclaimed shortly afterwards. G1HR #StartGC 5G1HR CSET 0x6f400000G1HR CSET 0x6e900000G1HR REUSE 0x6f800000 At the end of a GC we retire the old region we are allocating into. Given that it's not full, we will carry on allocating into it during the next GC. This is what REUSE means. In the above case 0x6f800000 should have been the last region with an ALLOC(Old) event during the previous GC and should have been retired before the end of the previous GC. G1HR ALLOC-FORCE(Eden) 0x6f800000 A specialization of ALLOC which indicates that we have reached the max desired number of the particular region type (in this case: Eden), but we decided to allocate one more. Currently it's only used for Eden regions when we extend the young generation because we cannot do a GC as the GC-Locker is active. G1HR EVAC-FAILURE 0x6f800000 During a GC, we have failed to evacuate an object from the given region as the heap is full and there is no space left to copy the object. This event is generated within GC boundaries and exactly once for each region from which we failed to evacuate objects. When are Heap Regions reclaimed? It is also worth mentioning when the heap regions in the G1 heap are reclaimed. All regions that are in the CSet (the ones that appear in CSET events) are reclaimed at the end of a GC. The exception to that are regions with EVAC-FAILURE events. All regions with CLEANUP events are reclaimed. After a Full GC some regions get reclaimed (the ones from which we moved the objects out). But that is not shown explicitly; instead, the non-empty regions that are left in the heap are printed out with the POST-COMPACTION events.
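A note not in the original post: the following is a minimal, hypothetical Java program (the class name and allocation sizes are invented for illustration) that you could run with the flags discussed above to produce similar G1 logs, e.g. java -XX:+UseG1GC -XX:+PrintGCDetails -XX:+UnlockDiagnosticVMOptions -XX:+G1PrintRegionLivenessInfo -XX:+G1PrintHeapRegions -Xmx64m GcLogDemo.

    import java.util.ArrayList;
    import java.util.List;

    // Allocates a mix of short-lived garbage and a few survivors so that G1
    // performs young collections and promotions while the logging flags are on.
    public class GcLogDemo {
        public static void main(String[] args) {
            List<byte[]> retained = new ArrayList<byte[]>();
            for (int i = 0; i < 200000; i++) {
                byte[] chunk = new byte[1024];       // short-lived garbage
                if (i % 100 == 0) {
                    retained.add(chunk);             // a few survivors to promote
                }
                if (retained.size() > 2000) {
                    retained.clear();                // let older data die off as well
                }
            }
            System.out.println("done, retained " + retained.size() + " chunks");
        }
    }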

    Read the article

  • Is throwing an error in unpredictable subclass-specific circumstances a violation of LSP?

    - by Motti Strom
    Say, I wanted to create a Java List<String> (see spec) implementation that uses a complex subsystem, such as a database or file system, for its store so that it becomes a simple persistent collection rather than a basic in-memory one. (We're limiting it specifically to a List of Strings for the purposes of discussion, but it could be extended to automatically de-/serialise any object, with some help. We can also provide persistent Sets, Maps and so on in this way too.) So here's a skeleton implementation: class DbBackedList implements List<String> { private DbBackedList() {} /** Returns a list, possibly non-empty */ public static List<String> getList() { return new DbBackedList(); } public String get(int index) { return Db.getTable().getRow(index).asString(); // may throw DbExceptions! } // add(String), add(int, String), etc. ... } My problem lies with the fact that the underlying DB API may encounter connection errors that the List interface does not specify it should throw. My question is whether this violates Liskov's Substitution Principle (LSP). Bob Martin actually gives an example of a PersistentSet in his paper on LSP that violates LSP. The difference is that his newly-specified Exception there is determined by the inserted value and so is strengthening the precondition. In my case the connection/read error is unpredictable and due to external factors and so is not technically a new precondition, merely an error of circumstance, perhaps like OutOfMemoryError which can occur even when unspecified. In normal circumstances, the new Error/Exception might never be thrown. (The caller could catch it if it is aware of the possibility, just as a memory-restricted Java program might specifically catch OOME.) Is this therefore a valid argument for throwing an extra error and can I still claim to be a valid java.util.List (or pick your SDK/language/collection in general) and not in violation of LSP? If this does indeed violate LSP and is thus not practically usable, I have provided two less-palatable alternative solutions as answers that you can comment on, see below. Footnote: Use Cases In the simplest case, the goal is to provide a familiar interface for cases when (say) a database is just being used as a persistent list, and allow regular List operations such as search, subList and iteration. Another, more adventurous, use-case is as a slot-in replacement for libraries that work with basic Lists, e.g. if we have a third-party task queue that usually works with a plain List: new TaskWorkQueue(new ArrayList<String>()).start() which is susceptible to losing all its queue in the event of a crash, if we just replace this with: new TaskWorkQueue(new DbBackedList()).start() we get instant persistence and the ability to share the tasks amongst more than one machine. In either case, we could either handle connection/read exceptions that are thrown, perhaps retrying the connection/read first, or allow them to throw and crash the program (e.g. if we can't change the TaskWorkQueue code).
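    A sketch (not from the question) of one way to keep the List contract's method signatures intact while still surfacing the unpredictable backend failure: wrap the checked error in an unchecked exception, so aware callers can catch it much as they would OOME. Db and DbException stand in for the question's hypothetical backend API.

        /** Unchecked wrapper for backend failures; the name is hypothetical. */
        public class DbBackedListException extends RuntimeException {
            public DbBackedListException(Throwable cause) {
                super(cause);
            }
        }

        // inside DbBackedList:
        public String get(int index) {
            try {
                return Db.getTable().getRow(index).asString();
            } catch (DbException e) {
                // an error of circumstance, not a strengthened precondition
                throw new DbBackedListException(e);
            }
        }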

    Read the article

  • Shallow Copy vs DeepCopy in C#.NET

    Hope the example below helps to understand the difference. Please drop a comment if you have any doubts.

    using System;
    using System.IO;
    using System.Runtime.Serialization.Formatters.Binary;

    namespace ShallowCopyVsDeepCopy
    {
        class Program
        {
            static void Main(string[] args)
            {
                var e1 = new Emp { EmpNo = 10, EmpName = "Smith", Department = new Dep { DeptNo = 100, DeptName = "Finance" } };
                var e2 = e1.ShallowClone();
                e1.Department.DeptName = "Accounts";
                Console.WriteLine(e2.Department.DeptName);

                var e3 = new Emp { EmpNo = 10, EmpName = "Smith", Department = new Dep { DeptNo = 100, DeptName = "Finance" } };
                var e4 = e3.DeepClone();
                e3.Department.DeptName = "Accounts";
                Console.WriteLine(e4.Department.DeptName);
            }
        }

        [Serializable]
        class Dep
        {
            public int DeptNo { get; set; }
            public String DeptName { get; set; }
        }

        [Serializable]
        class Emp
        {
            public int EmpNo { get; set; }
            public String EmpName { get; set; }
            public Dep Department { get; set; }

            public Emp ShallowClone()
            {
                return (Emp)this.MemberwiseClone();
            }

            public Emp DeepClone()
            {
                MemoryStream ms = new MemoryStream();
                BinaryFormatter bf = new BinaryFormatter();
                bf.Serialize(ms, this);
                ms.Seek(0, SeekOrigin.Begin);
                object copy = bf.Deserialize(ms);
                ms.Close();
                return copy as Emp;
            }
        }
    }

    Read the article

  • How can a collection class instantiate many objects with one database call?

    - by Buttle Butkus
    I have a baseClass where I do not want public setters. I have a load($id) method that will retrieve the data for that object from the db. I have been using static class methods like getBy($property,$values) to return multiple class objects using a single database call. But some people say that static methods are not OOP. So now I'm trying to create a baseClassCollection that can do the same thing. But it can't, because it cannot access protected setters. I don't want everyone to be able to set the object's data. But it seems that it is an all-or-nothing proposition. I cannot give just the collection class access to the setters. I've seen a solution using debug_backtrace() but that seems inelegant. I'm moving toward just making the setters public. Are there any other solutions? Or should I even be looking for other solutions?

    Read the article

  • ArrayList in Java [on hold]

    - by JNL
    I was implementing a program to remove the duplicates from 2 character arrays. I implemented these 2 solutions. Solution 1 worked fine, but Solution 2 gave me an UnsupportedOperationException. I am wondering why that is so. The two solutions are given below:

    public void getDiffernce(Character[] inp1, Character[] inp2){
        // Solution 1:
        // **********************************************************************************
        List<Character> list1 = new ArrayList<Character>(Arrays.asList(inp1));
        List<Character> list2 = new ArrayList<Character>(Arrays.asList(inp2));
        list1.removeAll(list2);
        System.out.println(list1);
        System.out.println("*********************************************************************************");

        // Solution 2:
        Character a[] = {'f', 'x', 'l', 'b', 'y'};
        Character b[] = {'x', 'b','d'};
        List<Character> al1 = new ArrayList<Character>();
        List<Character> al2 = new ArrayList<Character>();
        al1 = (Arrays.asList(a));
        System.out.println(al1);
        al2 = (Arrays.asList(b));
        System.out.println(al2);
        al1.removeAll(al2); // retainAll(al2);
        System.out.println(al1);
    }
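    The usual explanation for the exception above: Arrays.asList returns a fixed-size list backed by the array, so structural modifications such as removeAll throw UnsupportedOperationException; Solution 1 works because it copies the elements into a real ArrayList first. A small sketch:

        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.List;

        public class RemoveAllDemo {
            public static void main(String[] args) {
                Character[] a = {'f', 'x', 'l', 'b', 'y'};
                Character[] b = {'x', 'b', 'd'};

                // Fixed-size view backed by the array: structural changes are not supported.
                List<Character> fixedSize = Arrays.asList(a);
                // fixedSize.removeAll(Arrays.asList(b));  // throws UnsupportedOperationException

                // Copying into a real ArrayList makes removal work.
                List<Character> copy = new ArrayList<Character>(Arrays.asList(a));
                copy.removeAll(Arrays.asList(b));
                System.out.println(copy);                  // prints [f, l, y]
            }
        }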

    Read the article

  • 421 Concurrent Connections - Ratelimit from helpdesk to rackspace server

    - by g18c
    We have Kayako helpdesk running on our WHM Linux server. When e-mails come in from customers, notifications are sent out by Kayako to a number of staff whose mailboxes are hosted on Rackspace mail servers. I noticed a large queue in the Exim queued message viewer of WHM - when looking in the Exim logs I can see many lines like 2012-10-13 20:06:56 1TN72s-0007Cw-1l SMTP error from remote mail server after initial connection: host mx2.emailsrvr.com [173.203.2.32]: 421 Too many concurrent connections from this client. One client email results in about 5 emails to Rackspace servers, perhaps 60 emails per 1 hour on average - not a huge amount but enough to cause messages to be rejected when sent in short bursts. Ideally, if we can limit the connections sent to the Rackspace servers, we can comply with their limit. For our requirements, sending 1 email every 10 seconds or so would be OK. Messages to all other servers should go through at normal rates; only mx1.emailsrvr.com and mx2.emailsrvr.com should have this connection limit policy applied. Is this possible?

    Read the article

  • wifi routers and concurrent devices

    - by Joelio
    We have a Linksys WRT54G WiFi router in our office which was working great when we had 5-6 folks. Now on peak days we have 10-15 people, each with a computer, smartphone, etc, and an ooma VOIP device. On average 1-2 times a day I need to hard reboot the router, and sometimes the border router (Cox-supplied device). I assume this is just because the router can't handle this many concurrent users. So my question is: can these consumer routers handle this kind of load? If not, would adding more devices solve the problem, and how close together can I put 2 routers without having interference problems (our office area is not that big physically)?

    Read the article

  • Limit number of concurrent user logins in Windows Server 2008 Active Directory

    - by smhnaji
    Is there a way to limit Active Directory users' max concurrent login sessions? I've read many articles and discussions about the solution, but none of them seem to be working. Many suggested a UserLogin script, which doesn't work in Windows Server 2008. Some others suggested CConnect, which is not good enough and is also very complicated. Some others have introduced UserLock, which has to be paid for. It's surprising that Windows Server 2003 DOES have the feature (albeit via a third party), but Windows Server 2008 doesn't! One of the articles I've read: http://www.edugeek.net/forums/windows-server-2008-r2/61216-multiple-logins.html

    Read the article

  • Remove Windows 7's limitation on number of concurrent tcp connections (http web requests)

    - by Ghita
    I have an application that tries to open as many http requests as possible (in order to stress test a proxy implementation). It seems to me that Win7 (SP1) may have a limitation on the number of concurrently open connections (it may be the so-called half-open state, if I'm not wrong). Is there something I can do on the client? I also test using a Vista PC that acts as the proxy server. It would be great if I could configure it to sustain at least 50 new connections initiated / second on the client side and many more on the server. I made the modification according to this technet article by setting TcpNumConnections = 150 but it doesn't make a difference. I still only see about 20 tcp sockets associated with my http client when using tcpview.

    Read the article

  • include component inside an article in joomla

    - by user1212
    How can one include a component inside an article in Joomla? I have tried using the Plugin Include Component in Joomla. I have a Phoca guestbook set up which works fine on its own. But when I try to include it inside an article using {component url='index.php?option=com_phocaguestbook'}, it just does not display any component. The same procedure works fine for other components. Any idea what I am missing here?

    Read the article

  • Need to extract thousands of folders and mass rename files inside folder corresponding to folder title

    - by user140393
    I did a site rip with wget and all the files I want are arranged under a web ID which I do not want, for example site.com/ID/title. It appears like this in the directory layout: site/ID/Foldername/File. It's just a single file labelled index.html for the 10k+ IDs. So essentially I want to remove the ID folder while preserving the folder (file) inside. Then I want to rename the index.html inside each folder to match the folder title.

    Read the article

  • How do I implement a collection in Scala 2.8?

    - by Simon Reinhardt
    In trying to write an API I'm struggling with Scala's collections in 2.8(.0-beta1). Basically what I need is to write something that: adds functionality to immutable sets of a certain type where all methods like filter and map return a collection of the same type without having to override everything (which is why I went for 2.8 in the first place) where all collections you gain through those methods are constructed with the same parameters the original collection had (similar to how SortedSet hands through an ordering via implicits) which is still a trait in itself, independent of any set implementations. Additionally I want to define a default implementation, for example based on a HashSet. The companion object of the trait might use this default implementation. I'm not sure yet if I need the full power of builder factories to map my collection type to other collection types. I read the paper on the redesign of the collections API but it seems like things have changed a bit since then and I'm missing some details in there. I've also digged through the collections source code but I'm not sure it's very consistent yet. Ideally what I'd like to see is either a hands-on tutorial that tells me step-by-step just the bits that I need or an extensive description of all the details so I can judge myself which bits I need. I liked the chapter on object equality in "Programming in Scala". :-) But I appreciate any pointers to documentation or examples that help me understand the new collections design better.

    Read the article

  • How do I use a Lambda expression to sort INTEGERS inside a object?

    - by punkouter
    I have a collection of objects and I know that I can sort by NAME (string type) by saying collEquipment.Sort((x, y) => string.Compare(x.ItemName, y.ItemName)); that WORKS. But I want to sort by an ID (integer type) and there is no such thing as Int32.Compare. So how do I do this? This doesn't work: collEquipment.Sort((x, y) => (x.ID < y.ID)); //error I know the answer is going to be really simple. Lambda expressions confuse me.

    Read the article

  • Concurrent cartesian product algorithm in Clojure

    - by jqno
    Is there a good algorithm to calculate the cartesian product of three seqs concurrently in Clojure? I'm working on a small hobby project in Clojure, mainly as a means to learn the language, and its concurrency features. In my project, I need to calculate the cartesian product of three seqs (and do something with the results). I found the cartesian-product function in clojure.contrib.combinatorics, which works pretty well. However, the calculation of the cartesian product turns out to be the bottleneck of the program. Therefore, I'd like to perform the calculation concurrently. Now, for the map function, there's a convenient pmap alternative that magically makes the thing concurrent. Which is cool :). Unfortunately, such a thing doesn't exist for cartesian-product. I've looked at the source code, but I can't find an easy way to make it concurrent myself. Also, I've tried to implement an algorithm myself using map, but I guess my algorithmic skills aren't what they used to be. I managed to come up with something ugly for two seqs, but three was definitely a bridge too far. So, does anyone know of an algorithm that's already concurrent, or one that I can parallelize myself?
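    The question is about Clojure, but the common parallelization idea is language-neutral: fan out over the first seq and build the product of the remaining two inside each task. A rough Java sketch of that idea (class and method names are made up for illustration):

        import java.util.Arrays;
        import java.util.List;
        import java.util.stream.Collectors;

        public class CartesianDemo {
            // Parallelism comes from splitting the work across elements of the first list;
            // each task computes the product of the remaining two lists sequentially.
            static <A, B, C> List<List<Object>> product(List<A> as, List<B> bs, List<C> cs) {
                return as.parallelStream()
                         .flatMap(a -> bs.stream()
                                         .flatMap(b -> cs.stream()
                                                         .map(c -> Arrays.<Object>asList(a, b, c))))
                         .collect(Collectors.toList());
            }

            public static void main(String[] args) {
                System.out.println(product(Arrays.asList(1, 2), Arrays.asList("x", "y"), Arrays.asList(true)));
            }
        }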

    Read the article

  • Google Collections sources don't compile

    - by Carl Rosenberger
    I just downloaded the Google Collections sources and imported them into a new Eclipse project with JDK 1.6. They don't compile for a couple of reasons: javax.annotation.Nullable can not be found javax.annotation.ParametersAreNonnullByDefault can not be found Cannot reduce the visibility of the inherited method #createCollection() from AbstractMultimap + 11 similar ones Name clash: The method forcePut(K, V) of type AbstractBiMap has the same erasure as forcePut(Object, Object) of type BiMap but does not override it + 2 similar ones What am I missing? I also wonder if unit tests for these collections are available to the public.
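    For what it's worth, the first two errors at least point to a missing dependency rather than a problem in the sources: javax.annotation.Nullable and javax.annotation.ParametersAreNonnullByDefault are JSR-305 annotations (shipped in the FindBugs jsr305 jar), not part of the JDK, so the imports only resolve once that jar is on the project's build path. A minimal illustration of the imports in question:

        // Resolving these requires the JSR-305 jar on the classpath; they are not in the JDK.
        import javax.annotation.Nullable;
        import javax.annotation.ParametersAreNonnullByDefault;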

    Read the article

  • Globe Trotters: Asian Healthcare CIOs need ‘Security Inside Out’ Approach

    - by Tanu Sood
    In our second edition of Globe trotters, wanted to share a feature article that was recently published in Enterprise Innovation. EnterpriseInnovation.net, part of Questex Media Group, is Asia's premier business and technology publication. The article featured MOH Holdings (a holding company of Singapore’s Public Healthcare Institutions) and highlighted the project around National Electronic Health Record (NEHR) system currently being deployed within Singapore.  According to the feature, the NEHR system was built to facilitate seamless exchanges of medical information as patients move across different healthcare settings and to give healthcare providers more timely access to patient’s healthcare records in Singapore. The NEHR consolidates all clinically relevant information from patients’ visits across the healthcare system throughout their lives and pulls them in as a single record. It allows for data sharing, making it accessible to authorized healthcare providers, across the continuum of care throughout the country. In healthcare, patient data privacy is critical as is the need to avoid unauthorized access to the electronic medical records. As Alan Dawson, director for infrastructure and operations at MOH Holdings is quoted in the feature, “Protecting the perimeter is no longer enough. Healthcare CIOs today need to adopt a ‘security inside out’ approach that protects information assets all the way from databases to end points.” Oracle has long advocated the ‘Security Inside Out’ approach. From operating systems, infrastructure to databases, middleware all the way to applications, organizations need to build in security at every layer and between these layers. This comprehensive approach to security has never been as important as it is today in the social, mobile, cloud (SoMoClo) world. To learn more about Oracle’s Security Inside Out approach, visit our Security page. And for more information on how to prevent unauthorized access, streamline user administration, bolster security and enforce compliance in healthcare, learn more about Oracle Identity Management.

    Read the article

  • Caching in the .NET Stack: Inside-Out

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/06/28/caching-in-the-.net-stack-inside-out.aspxI'm delighted to have my first course published on Pluralsight - Caching in the .NET Stack: Inside-out.   It's a pretty comprehensive look at caching in .NET solutions. The first half covers using local, remote and persistent cache stores inside the solution, including the .NET MemoryCache, NCache Express, AppFabric Caching, memcached, Azure Table Storage and local disk stores. The second half covers caching outside the solution in HTTP clients and proxies, and how to set up ASP.NET WebForms, MVC, Web API and WCF projects to use HTTP validation and expiration caching.   The course takes a hands-on approach, starting with a distributed solution that has no caching, analysing key points which can benefit from caching, and adding different types of cache. At the end of the course I run through a set of before and after performance tests, stressing the solution under load. Without caching and with 60 concurrent users the page response time maxes out at 18 seconds - with caching that falls to 2 seconds, so it's a huge improvement from very little effort. I’d be glad to hear feedback if you watch the course, especially if it’s as positive as my editor’s.

    Read the article

  • How do I restrict concurrent statistics gathering to a small set of tables from a single schema?

    - by Maria Colgan
    I got an interesting question from one of my colleagues in the performance team last week about how to restrict a concurrent statistics gather to a small subset of tables from one schema, rather than the entire schema. I thought I would share the solution we came up with because it was rather elegant, and took advantage of concurrent statistics gathering, incremental statistics, and the not so well known “obj_filter_list” parameter in DBMS_STATS.GATHER_SCHEMA_STATS procedure. You should note that the solution outline below with “obj_filter_list” still applies, even when concurrent statistics gathering and/or incremental statistics gathering is disabled. The reason my colleague had asked the question in the first place was because he wanted to enable incremental statistics for 5 large partitioned tables in one schema. The first time you gather statistics after you enable incremental statistics on a table, you have to gather statistics for all of the existing partitions so that a synopsis may be created for them. If the partitioned table in question is large and contains a lot of partition, this could take a considerable amount of time. Since my colleague only had the Exadata environment at his disposal overnight, he wanted to re-gather statistics on 5 partition tables as quickly as possible to ensure that it all finished before morning. Prior to Oracle Database 11g Release 2, the only way to do this would have been to write a script with an individual DBMS_STATS.GATHER_TABLE_STATS command for each partition, in each of the 5 tables, as well as another one to gather global statistics on the table. Then, run each script in a separate session and manually manage how many of this session could run concurrently. Since each table has over one thousand partitions that would definitely be a daunting task and would most likely keep my colleague up all night! In Oracle Database 11g Release 2 we can take advantage of concurrent statistics gathering, which enables us to gather statistics on multiple tables in a schema (or database), and multiple (sub)partitions within a table concurrently. By using concurrent statistics gathering we no longer have to run individual statistics gathering commands for each partition. Oracle will automatically create a statistics gathering job for each partition, and one for the global statistics on each partitioned table. With the use of concurrent statistics, our script can now be simplified to just five DBMS_STATS.GATHER_TABLE_STATS commands, one for each table. This approach would work just fine but we really wanted to get this down to just one command. So how can we do that? You may be wondering why we didn’t just use the DBMS_STATS.GATHER_SCHEMA_STATS procedure with the OPTION parameter set to ‘GATHER STALE’. Unfortunately the statistics on the 5 partitioned tables were not stale and enabling incremental statistics does not mark the existing statistics stale. Plus how would we limit the schema statistics gather to just the 5 partitioned tables? So we went to ask one of the statistics developers if there was an alternative way. The developer told us the advantage of the “obj_filter_list” parameter in DBMS_STATS.GATHER_SCHEMA_STATS procedure. The “obj_filter_list” parameter allows you to specify a list of objects that you want to gather statistics on within a schema or database. The parameter takes a collection of type DBMS_STATS.OBJECTTAB. 
Each entry in the collection has 5 fields: the schema name or the object owner, the object type (i.e., 'TABLE' or 'INDEX'), object name, partition name, and subpartition name. You don't have to specify all five fields for each entry. Empty fields in an entry are treated as wildcard fields (similar to the '*' character in LIKE predicates). Each entry corresponds to one set of filter conditions on the objects. If you have more than one entry, an object is qualified for statistics gathering as long as it satisfies the filter conditions in one entry. You first must create the collection of objects, and then gather statistics for the specified collection. It's probably easier to explain this with an example. I'm using the SH sample schema but needed a couple of additional partitioned tables to recreate my colleague's scenario of 5 partitioned tables. So I created SALES2, SALES3, and COSTS2 as copies of the SALES and COSTS tables respectively (setup.sql). I also deleted statistics on all of the tables in the SH schema beforehand to more easily demonstrate our approach. Step 0. Delete the statistics on the tables in the SH schema. Step 1. Enable concurrent statistics gathering. Remember, this has to be done at the global level. Step 2. Enable incremental statistics for the 5 partitioned tables. Step 3. Create the DBMS_STATS.OBJECTTAB and pass it to the DBMS_STATS.GATHER_SCHEMA_STATS command. Here, you will notice that we defined two variables of DBMS_STATS.OBJECTTAB type. The first, filter_lst, will be used to pass the list of tables we want to gather statistics on, and will be the value passed to the obj_filter_list parameter. The second, obj_lst, will be used to capture the list of tables that have had statistics gathered on them by this command, and will be the value passed to the objlist parameter. In Oracle Database 11g Release 2, you need to specify the objlist parameter in order to get the obj_filter_list parameter to work correctly due to bug 14539274. We also needed to define the number of objects we would supply in the obj_filter_list. In our case we were specifying 5 tables (filter_lst.extend(5)). Finally, we need to specify the owner name and object name for each of the objects in the list. Once the list definition is complete we can issue the DBMS_STATS.GATHER_SCHEMA_STATS command. Step 4. Confirm statistics were gathered on the 5 partitioned tables. Here are a couple of other things to keep in mind when specifying the entries for the obj_filter_list parameter. If a field in the entry is empty, i.e., null, it means there is no condition on this field. In the above example, suppose you remove the statement Obj_filter_lst(1).ownname := 'SH'; You will get the same result: since you have specified gather_schema_stats, there is no need to further specify ownname in the obj_filter_lst. All of the names in the entry are normalized, i.e., uppercased if they are not double quoted. So in the above example, it is OK to use Obj_filter_lst(1).objname := 'sales';. However, if you have a table called 'MyTab' instead of 'MYTAB', then you need to specify Obj_filter_lst(1).objname := '"MyTab"'; As I said before, although we have illustrated the usage of the obj_filter_list parameter for partitioned tables, with concurrent and incremental statistics gathering turned on, the obj_filter_list parameter is generally applicable to any gather_database_stats, gather_dictionary_stats and gather_schema_stats command. You can get a copy of the script I used to generate this post here. 
+Maria Colgan

    Read the article

  • SSH tunnel for socks5 proxy is slow with concurrent load

    - by RawwrBag
    I ssh to a remote AWS server using Ubuntu. I use ssh's port forwarding capabilities to do this. I have tried forwarding a dynamic port (ssh -D) or a single port (ssh -L with dante running as a remote socks server). Both are equally slow. I also tried different ciphers (ssh -c). Concurrent TCP connections pretty much do not work. For example, I can go to speedtest.net and start a test (which is fairly fast, probably maxes out my line speed) and if I try and do anything (i.e. load google.com) while the test is still running, all the additional connections seem to hang until the speed test is over. I realize OpenSSH is single-threaded. Is this the problem? It doesn't even show up on my top. Same goes for sshd on the remote server -- no processor hit. Is there anyway to bump ssh performance or should I step up to OpenVPN or something better suited for this?

    Read the article

  • 100% CPU when doing 4 or more concurrent requests with Magento

    - by pancake
    Currently I'm having trouble with a server running Magento, it's unbelievably slow. It's a VPS with a few Magento installations on it used for development, so I'm the only one using them. When I do 4 requests, each 2 seconds after the other, I'm finished in 10 seconds. Slow, but still within the limits of my patience. When I do 4 "concurrent" requests, however (opening 4 tabs in a row, very quickly) all four cores go to 100% and stay there for like a minute. How is this possible? I know that there are a lot of possibilities here, so any tips on how to make an Apache/PHP server go faster are also welcome. It used to go a lot faster before, and I've also tried APC, but it kept causing problems (PHP errors, something with memory pools) so I've disabled it. By the way, the Magento cache is off and compiling is also off. I know this makes Magento slower than usual, but I don't think a 60 second response time is normal for any Magento installation. Virtual hardware: 4 Cores and 4096MB RAM Swap is never used (checked with htop) 100GB disk space, of which 10% is in use Software: Debian 6 DirectAdmin and apache custombuild PHP 5.2.17 (CLI) If you need more info, please tell me how to get it, because I probably don't know how. I do know how to use the command line in linux and the usage of quite a few commands, but my experience with managing a server is limited.

    Read the article

  • Does .NET have a built in IEnumerable for multiple collections?

    - by Bryce Wagner
    I need an easy way to iterate over multiple collections without actually merging them, and I couldn't find anything built into .NET that looks like it does that. It feels like this should be a somewhat common situation. I don't want to reinvent the wheel. Is there anything built in that does something like this: public class MultiCollectionEnumerable<T> : IEnumerable<T> { private MultiCollectionEnumerator<T> enumerator; public MultiCollectionEnumerable(params IEnumerable<T>[] collections) { enumerator = new MultiCollectionEnumerator<T>(collections); } public IEnumerator<T> GetEnumerator() { enumerator.Reset(); return enumerator; } IEnumerator IEnumerable.GetEnumerator() { enumerator.Reset(); return enumerator; } private class MultiCollectionEnumerator<T> : IEnumerator<T> { private IEnumerable<T>[] collections; private int currentIndex; private IEnumerator<T> currentEnumerator; public MultiCollectionEnumerator(IEnumerable<T>[] collections) { this.collections = collections; this.currentIndex = -1; } public T Current { get { if (currentEnumerator != null) return currentEnumerator.Current; else return default(T); } } public void Dispose() { if (currentEnumerator != null) currentEnumerator.Dispose(); } object IEnumerator.Current { get { return Current; } } public bool MoveNext() { if (currentIndex >= collections.Length) return false; if (currentIndex < 0) { currentIndex = 0; if (collections.Length > 0) currentEnumerator = collections[0].GetEnumerator(); else return false; } while (!currentEnumerator.MoveNext()) { currentEnumerator.Dispose(); currentEnumerator = null; currentIndex++; if (currentIndex >= collections.Length) return false; currentEnumerator = collections[currentIndex].GetEnumerator(); } return true; } public void Reset() { if (currentEnumerator != null) { currentEnumerator.Dispose(); currentEnumerator = null; } this.currentIndex = -1; } } }

    Read the article

  • Web Service gets unavailable after several concurrent calls

    - by Roman
    We are testing GoDaddy Virtual Data Center and came across a very strange issue where our web site becomes unavailable. GoDaddy Support keeps saying the issue is in our web server settings, but looking at the result of our tests I doubt it. TEST ENVIRONMENT Virtual DataCenter with Windows hosted at GoDaddy.com. All servers have Windows Server 2008 R2 Datacenter, IIS 7. Server One with IP address 10.1.0.4 Server Two with IP address 10.1.0.3 Both servers are in a private network not visible from outside. Port Forward with IP address 50.62.13.174. Port Forward is assigned to Server One TEST DESCRIPTION JMeter is used as a Client App to simulate 30 concurrent users sending 100 SOAP requests each. Interval between requests is 1 second. Http link used for testing: http://50.62.13.174/v2/webservices.asmx TEST ONE Test is run from a computer in our office. After JMeter starts running the test, almost immediately, the link above becomes unavailable in a browser. After test completion, the link is not available in a browser for about 5 more minutes. Remote Desktop is working well, so we can connect to Server One remotely. About 5 minutes after test completion, the link becomes available in a browser again. TEST TWO Test is run from Server Two (that is part of our virtual data center). Test works very well, no visible delays in processing. The link is available in a browser all the time. TEST THREE Test is run from Server One using localhost. The result is the same as in TEST TWO - no issues. TEST FOUR We repeated TEST ONE from other computers that we have located in different countries, all with the same result as TEST ONE. CONCLUSION As the test works well from Server Two, but does not work from outside our virtual data center, we feel there are issues with the network or its capacity. The whole behaviour looks like our requests from outside get stuck somewhere before reaching our virtual data center. Has anybody had similar issues in the past? Are there chances that something is wrong with our server settings?

    Read the article

  • WCF: Serializing and Deserializing generic collections

    - by Fabiano
    I have a class Team that holds a generic list: [DataContract(Name = "TeamDTO", IsReference = true)] public class Team { [DataMember] private IList<Person> members = new List<Person>(); public Team() { Init(); } private void Init() { members = new List<Person>(); } [System.Runtime.Serialization.OnDeserializing] protected void OnDeserializing(StreamingContext ctx) { Log("OnDeserializing of Team called"); Init(); if (members != null) Log(members.ToString()); } [System.Runtime.Serialization.OnSerializing] private void OnSerializing(StreamingContext ctx) { Log("OnSerializing of Team called"); if (members != null) Log(members.ToString()); } [System.Runtime.Serialization.OnDeserialized] protected void OnDeserialized(StreamingContext ctx) { Log("OnDeserialized of Team called"); if (members != null) Log(members.ToString()); } [System.Runtime.Serialization.OnSerialized] private void OnSerialized(StreamingContext ctx) { Log("OnSerialized of Team called"); Log(members.ToString()); } When I use this class in a WCF service, I get the following log output: OnSerializing of Team called System.Collections.Generic.List`1[Person] OnSerialized of Team called System.Collections.Generic.List`1[Person] OnDeserializing of Team called System.Collections.Generic.List`1[ENetLogic.ENetPerson.Model.FirstPartyPerson] OnDeserialized of Team called ENetLogic.ENetPerson.Model.Person[] After the deserialization members is an Array and no longer a generic list although the field type is IList<Person> (?!) When I try to send this object back over the WCF service I get the log output OnSerializing of Team called ENetLogic.ENetPerson.Model.FirstPartyPerson[] After this my unit test crashes with a System.ExecutionEngineException, which means the WCF service is not able to serialize the array. (maybe because it expected an IList<Person>) So, my question is: Does anybody know why the type of my IList<Person> is an array after deserializing and why I can't serialize my Team object any longer after that? Thanks

    Read the article

  • Deserialize generic collections - coming up empty

    - by AC
    I've got a settings object for my app that has two collections in it. The collections are simple List generics that contain a collection of property bags. When I serialize it, everything is saved with no problem: XmlSerializer x = new XmlSerializer(settings.GetType()); TextWriter tw = new StreamWriter(@"c:\temp\settings.cpt"); x.Serialize(tw, settings); However when I deserialize it, everything is restored except for the two collections (verified by setting a breakpoint on the setters: XmlSerializer x = new XmlSerializer(typeof(CourseSettings)); XmlReader tr = XmlReader.Create(@"c:\temp\settings.cpt"); this.DataContext = (CourseSettings)x.Deserialize(tr); What would cause this? Everything is pretty vanilla... here's a snippet from the settings object... omitting most of it. The PresentationSourceDirectory works just fine, but the PresentationModules' setter isn't hit: private string _presentationSourceDirectory = string.Empty; public string PresentationSourceDirectory { get { return _presentationSourceDirectory; } set { if (_presentationSourceDirectory != value) { OnPropertyChanged("PresentationSourceDirectory"); _presentationSourceDirectory = value; } } } private List<Module> _presentationModules = new List<Module>(); public List<Module> PresentationModules { get { var sortedModules = from m in _presentationModules orderby m.ModuleOrder select m; return sortedModules.ToList<Module>(); } set { if (_presentationModules != value) { _presentationModules = value; OnPropertyChanged("PresentationModules"); } } }

    Read the article

  • Best practice when removing entity regarding mappedBy collections?

    - by Daniel Bleisteiner
    I'm still kind of undecided about the best practice for handling em.remove(entity) with this entity being in several collections mapped using mappedBy in JPA. Consider an entity like a Property that references three other entities: a Descriptor, a BusinessObject and a Level entity. The mapping is defined using @ManyToOne in the Property entity and using @OneToMany(mappedBy...) in the other three objects. That inverse mapping is defined because there are some situations where I need to access those collections. Whenever I remove a Property using em.remove(prop), this element is not automatically removed from managed entities of the other three types. If I don't care about that and the following page load (webapp) doesn't reload those entities, the Property is still found and some decisions might be taken that are no longer true. The inverse mappings may become quite large, though, and I don't want to use something like descriptor.getProperties().remove(prop) because it will load all those properties that might have been lazily loaded until then. So my currently preferred way is to refresh the entity if it is managed: if (em.contains(descriptor)) em.refresh(descriptor) - which unloads a possibly loaded collection and triggers a reload upon the next access. Is there another feasible way to handle all those mappedBy collections of already loaded entities?
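    A minimal sketch of the approach described above (entity names follow the question; the helper method and accessor are hypothetical): remove the Property, then refresh its managed owner so the mappedBy collection is dropped and lazily reloaded on the next access instead of still containing the removed element.

        public void removeProperty(EntityManager em, Property prop) {
            Descriptor descriptor = prop.getDescriptor();   // hypothetical accessor

            em.remove(prop);

            // the same idea would apply to the BusinessObject and Level sides
            if (descriptor != null && em.contains(descriptor)) {
                em.refresh(descriptor);   // unloads the possibly loaded collection
            }
        }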

    Read the article

< Previous Page | 6 7 8 9 10 11 12 13 14 15 16 17  | Next Page >