Search Results

Search found 4274 results on 171 pages for 'references'.

  • C# Threading Background Process - Programming - How to?

    - by Magic
    Hello... I have been given the horrible task of doing this: launch the website, take a screenshot, fill in the form details, click Next, take a screenshot... rinse, repeat. Now, with various combinations, this comes to 300 screenshots, and I have to do this for 4 different browsers: Chrome, Firefox, IE 6 and IE 7. I cannot use tools which capture the screenshot and store it, such as SnagIt. I need to take a screenshot, copy it into a Word document, take the second screenshot, copy that into the Word document, and so on. I thought I would write a tiny utility to help me do this. Here is the requirement spec I put up for it: an executable which, once launched, seats itself in the system tray; while it is active, every press of Print Scrn should write the captured contents to a Word document as defined (either a default path or a user-defined one); and the document should be saved periodically. Now, my question is: if I am going to develop this using C# (a WinForms application), how do I go about doing it? I can do a fair bit of C# programming and I am willing to learn, but I am not able to locate references on how to make a process run in the background, and while it runs it has to capture the Print Scrn key. Can you folks point me to the right material where I can learn this? Theoretical references should suffice, but if there are practical references, then nothing like it. Thanks!
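    (A minimal sketch of the kind of utility described above, assuming a WinForms tray application that registers Print Scrn as a global hotkey via user32's RegisterHotKey; the class name, hotkey id and the save-to-PNG step are placeholders, and writing into a Word document, e.g. through Office interop, is not shown.)

        // Illustrative sketch only: a tray-resident WinForms app that treats
        // Print Scrn as a global hotkey and captures the primary screen.
        // Copying each capture into a Word document is left out; the capture
        // is saved to a PNG file here as a stand-in for that step.
        using System;
        using System.Drawing;
        using System.Drawing.Imaging;
        using System.Runtime.InteropServices;
        using System.Windows.Forms;

        class TrayCapture : Form
        {
            [DllImport("user32.dll")]
            static extern bool RegisterHotKey(IntPtr hWnd, int id, uint fsModifiers, uint vk);

            [DllImport("user32.dll")]
            static extern bool UnregisterHotKey(IntPtr hWnd, int id);

            const int WM_HOTKEY = 0x0312;
            const uint VK_SNAPSHOT = 0x2C;          // Print Scrn
            readonly NotifyIcon tray = new NotifyIcon();
            int shotCount;

            public TrayCapture()
            {
                // No visible window; only a tray icon.
                ShowInTaskbar = false;
                WindowState = FormWindowState.Minimized;
                tray.Icon = SystemIcons.Application;
                tray.Text = "Screenshot helper";
                tray.Visible = true;
                RegisterHotKey(Handle, 1, 0, VK_SNAPSHOT);
            }

            protected override void WndProc(ref Message m)
            {
                if (m.Msg == WM_HOTKEY)
                {
                    // Capture the whole primary screen and persist it.
                    Rectangle bounds = Screen.PrimaryScreen.Bounds;
                    using (Bitmap bmp = new Bitmap(bounds.Width, bounds.Height))
                    using (Graphics g = Graphics.FromImage(bmp))
                    {
                        g.CopyFromScreen(bounds.Location, Point.Empty, bounds.Size);
                        shotCount++;
                        bmp.Save("screenshot_" + shotCount + ".png", ImageFormat.Png);
                    }
                }
                base.WndProc(ref m);
            }

            protected override void OnFormClosed(FormClosedEventArgs e)
            {
                UnregisterHotKey(Handle, 1);
                tray.Visible = false;
                base.OnFormClosed(e);
            }

            [STAThread]
            static void Main()
            {
                Application.Run(new TrayCapture());
            }
        }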

    Read the article

  • Minimizing use of paper

    - by Abody97
    I've recently faced this problem in a dynamic programming curriculum, and I honestly have no idea about how to determine the appropriate state. You're given N (1 <= N <= 70) paragraphs and M (1 <= M <= N) figures. Each paragraph i requires PL_i (1 <= PL_i <= 100) lines and references at most one figure. Each figure is referenced exactly once (i.e., no two paragraphs can reference the same figure, and for each figure there's a paragraph that references it.) Each figure requires PF_i (1 <= PF_i <= 100) lines. The task is to distribute those figures and paragraphs on paper in the order they're given, where one paper fits for L lines at most. No paragraph or figure is too large to fit on one paper. If a paragraph x placed on paper x_p references a figure y then y must be placed on either the paper x_p - 1 or x_p or x_p + 1. We have to find the minimum number of lines (and thus pages) to allocate in order to distribute all the figures and paragraphs. Any help would be extremely appreciated. Thanks in advance!

    Read the article

  • What does this WCF error mean: "Custom tool warning: Cannot import wsdl:portType"

    - by stiank81
    I created a WCF service library project in my solution, and have service references to this. I use the services from a class library, so I have references from my WPF application project in addition to the class library. Services are set up straight forward - only changed to get async service functions. Everything was working fine - until I wanted to update my service references. It failed, so I eventually rolled back and retried, but it failed even then! So - updating the service references fails without doing any changes to it. Why?! The error I get is this one: Custom tool error: Failed to generate code for the service reference 'MyServiceReference'. Please check other error and warning messages for details. The warning gives more information: Custom tool warning: Cannot import wsdl:portType Detail: An exception was thrown while running a WSDL import extension: System.ServiceModel.Description.DataContractSerializerMessageContractImporter Error: List of referenced types contains more than one type with data contract name 'Patient' in namespace 'http://schemas.datacontract.org/2004/07/MyApp.Model'. Need to exclude all but one of the following types. Only matching types can be valid references: "MyApp.Dashboard.MyServiceReference.Patient, Medski.Dashboard, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" (matching) "MyApp.Model.Patient, MyApp.Model, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" (matching) XPath to Error Source: //wsdl:definitions[@targetNamespace='http://tempuri.org/']/wsdl:portType[@name='ISomeService'] There are two similar warnings too saying: Custom tool warning: Cannot import wsdl:binding Detail: There was an error importing a wsdl:portType that the wsdl:binding is dependent on. XPath to wsdl:portType: //wsdl:definitions[@targetNamespace='http://tempuri.org/']/wsdl:portType[@name='ISomeService'] XPath to Error Source: //wsdl:definitions[@targetNamespace='http://tempuri.org/']/wsdl:binding[@name='WSHttpBinding_ISomeService'] And the same for: Custom tool warning: Cannot import wsdl:port .. I find this all confusing.. I don't have a Patient class on the client side Dashboard except the one I got through the service reference. So what does it mean? And why does it suddenly show? Remember: I didn't even change anything! Now, the solution to this was found here, but without an explanation to what this means. So; in the "Configure service reference" for the service I uncheck the "Reuse types in the referenced assemblies" checkbox. Rebuilding now it all works fine without problems. But what did I really change? Will this make an impact on my application? And when should one uncheck this? I do want to reuse the types I've set up DataContract on, but no more. Will I still get access to those without this checked?
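    (As an aside, a small illustration of the collision the importer is reporting; the property shown is invented, but the class and namespace names come from the error text. Two CLR types end up with the data contract name 'Patient' in the same data contract namespace, so the "Reuse types in referenced assemblies" option cannot decide which one to map the WSDL type to, which is why excluding one of them, or turning reuse off as described above, makes the error go away.)

        // Illustrative only; the real class members are unknown. The model type gets
        // its contract namespace from its CLR namespace by default, while the proxy
        // type declares the same namespace explicitly, so the two contracts collide.
        using System.Runtime.Serialization;

        namespace MyApp.Model
        {
            [DataContract]   // contract name "Patient", namespace
                             // http://schemas.datacontract.org/2004/07/MyApp.Model
            public class Patient
            {
                [DataMember]
                public string Name { get; set; }
            }
        }

        namespace MyApp.Dashboard.MyServiceReference
        {
            // Generated proxy type: same contract name and namespace as the model type.
            [DataContract(Name = "Patient",
                          Namespace = "http://schemas.datacontract.org/2004/07/MyApp.Model")]
            public class Patient
            {
                [DataMember]
                public string Name { get; set; }
            }
        }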

    Read the article

  • Understanding G1 GC Logs

    - by poonam
    The purpose of this post is to explain the meaning of GC logs generated with some tracing and diagnostic options for G1 GC. We will take a look at the output generated with PrintGCDetails which is a product flag and provides the most detailed level of information. Along with that, we will also look at the output of two diagnostic flags that get enabled with -XX:+UnlockDiagnosticVMOptions option - G1PrintRegionLivenessInfo that prints the occupancy and the amount of space used by live objects in each region at the end of the marking cycle and G1PrintHeapRegions that provides detailed information on the heap regions being allocated and reclaimed. We will be looking at the logs generated with JDK 1.7.0_04 using these options. Option -XX:+PrintGCDetails Here's a sample log of G1 collection generated with PrintGCDetails. 0.522: [GC pause (young), 0.15877971 secs] [Parallel Time: 157.1 ms] [GC Worker Start (ms): 522.1 522.2 522.2 522.2 Avg: 522.2, Min: 522.1, Max: 522.2, Diff: 0.1] [Ext Root Scanning (ms): 1.6 1.5 1.6 1.9 Avg: 1.7, Min: 1.5, Max: 1.9, Diff: 0.4] [Update RS (ms): 38.7 38.8 50.6 37.3 Avg: 41.3, Min: 37.3, Max: 50.6, Diff: 13.3] [Processed Buffers : 2 2 3 2 Sum: 9, Avg: 2, Min: 2, Max: 3, Diff: 1] [Scan RS (ms): 9.9 9.7 0.0 9.7 Avg: 7.3, Min: 0.0, Max: 9.9, Diff: 9.9] [Object Copy (ms): 106.7 106.8 104.6 107.9 Avg: 106.5, Min: 104.6, Max: 107.9, Diff: 3.3] [Termination (ms): 0.0 0.0 0.0 0.0 Avg: 0.0, Min: 0.0, Max: 0.0, Diff: 0.0] [Termination Attempts : 1 4 4 6 Sum: 15, Avg: 3, Min: 1, Max: 6, Diff: 5] [GC Worker End (ms): 679.1 679.1 679.1 679.1 Avg: 679.1, Min: 679.1, Max: 679.1, Diff: 0.1] [GC Worker (ms): 156.9 157.0 156.9 156.9 Avg: 156.9, Min: 156.9, Max: 157.0, Diff: 0.1] [GC Worker Other (ms): 0.3 0.3 0.3 0.3 Avg: 0.3, Min: 0.3, Max: 0.3, Diff: 0.0] [Clear CT: 0.1 ms] [Other: 1.5 ms] [Choose CSet: 0.0 ms] [Ref Proc: 0.3 ms] [Ref Enq: 0.0 ms] [Free CSet: 0.3 ms] [Eden: 12M(12M)->0B(10M) Survivors: 0B->2048K Heap: 13M(64M)->9739K(64M)] [Times: user=0.59 sys=0.02, real=0.16 secs] This is the typical log of an Evacuation Pause (G1 collection) in which live objects are copied from one set of regions (young OR young+old) to another set. It is a stop-the-world activity and all the application threads are stopped at a safepoint during this time. This pause is made up of several sub-tasks indicated by the indentation in the log entries. Here's is the top most line that gets printed for the Evacuation Pause. 0.522: [GC pause (young), 0.15877971 secs] This is the highest level information telling us that it is an Evacuation Pause that started at 0.522 secs from the start of the process, in which all the regions being evacuated are Young i.e. Eden and Survivor regions. This collection took 0.15877971 secs to finish. Evacuation Pauses can be mixed as well. In which case the set of regions selected include all of the young regions as well as some old regions. 1.730: [GC pause (mixed), 0.32714353 secs] Let's take a look at all the sub-tasks performed in this Evacuation Pause. [Parallel Time: 157.1 ms] Parallel Time is the total elapsed time spent by all the parallel GC worker threads. The following lines correspond to the parallel tasks performed by these worker threads in this total parallel time, which in this case is 157.1 ms. [GC Worker Start (ms): 522.1 522.2 522.2 522.2Avg: 522.2, Min: 522.1, Max: 522.2, Diff: 0.1] The first line tells us the start time of each of the worker thread in milliseconds. 
The start times are ordered with respect to the worker thread ids – thread 0 started at 522.1ms and thread 1 started at 522.2ms from the start of the process. The second line tells the Avg, Min, Max and Diff of the start times of all of the worker threads. [Ext Root Scanning (ms): 1.6 1.5 1.6 1.9 Avg: 1.7, Min: 1.5, Max: 1.9, Diff: 0.4] This gives us the time spent by each worker thread scanning the roots (globals, registers, thread stacks and VM data structures). Here, thread 0 took 1.6ms to perform the root scanning task and thread 1 took 1.5 ms. The second line clearly shows the Avg, Min, Max and Diff of the times spent by all the worker threads. [Update RS (ms): 38.7 38.8 50.6 37.3 Avg: 41.3, Min: 37.3, Max: 50.6, Diff: 13.3] Update RS gives us the time each thread spent in updating the Remembered Sets. Remembered Sets are the data structures that keep track of the references that point into a heap region. Mutator threads keep changing the object graph and thus the references that point into a particular region. We keep track of these changes in buffers called Update Buffers. The Update RS sub-task processes the update buffers that were not able to be processed concurrently, and updates the corresponding remembered sets of all regions. [Processed Buffers : 2 2 3 2Sum: 9, Avg: 2, Min: 2, Max: 3, Diff: 1] This tells us the number of Update Buffers (mentioned above) processed by each worker thread. [Scan RS (ms): 9.9 9.7 0.0 9.7 Avg: 7.3, Min: 0.0, Max: 9.9, Diff: 9.9] These are the times each worker thread had spent in scanning the Remembered Sets. Remembered Set of a region contains cards that correspond to the references pointing into that region. This phase scans those cards looking for the references pointing into all the regions of the collection set. [Object Copy (ms): 106.7 106.8 104.6 107.9 Avg: 106.5, Min: 104.6, Max: 107.9, Diff: 3.3] These are the times spent by each worker thread copying live objects from the regions in the Collection Set to the other regions. [Termination (ms): 0.0 0.0 0.0 0.0 Avg: 0.0, Min: 0.0, Max: 0.0, Diff: 0.0] Termination time is the time spent by the worker thread offering to terminate. But before terminating, it checks the work queues of other threads and if there are still object references in other work queues, it tries to steal object references, and if it succeeds in stealing a reference, it processes that and offers to terminate again. [Termination Attempts : 1 4 4 6 Sum: 15, Avg: 3, Min: 1, Max: 6, Diff: 5] This gives the number of times each thread has offered to terminate. [GC Worker End (ms): 679.1 679.1 679.1 679.1 Avg: 679.1, Min: 679.1, Max: 679.1, Diff: 0.1] These are the times in milliseconds at which each worker thread stopped. [GC Worker (ms): 156.9 157.0 156.9 156.9 Avg: 156.9, Min: 156.9, Max: 157.0, Diff: 0.1] These are the total lifetimes of each worker thread. [GC Worker Other (ms): 0.3 0.3 0.3 0.3Avg: 0.3, Min: 0.3, Max: 0.3, Diff: 0.0] These are the times that each worker thread spent in performing some other tasks that we have not accounted above for the total Parallel Time. [Clear CT: 0.1 ms] This is the time spent in clearing the Card Table. This task is performed in serial mode. [Other: 1.5 ms] Time spent in the some other tasks listed below. The following sub-tasks (which individually may be parallelized) are performed serially. [Choose CSet: 0.0 ms] Time spent in selecting the regions for the Collection Set. [Ref Proc: 0.3 ms] Total time spent in processing Reference objects. 
[Ref Enq: 0.0 ms] Time spent in enqueuing references to the ReferenceQueues. [Free CSet: 0.3 ms] Time spent in freeing the collection set data structure. [Eden: 12M(12M)->0B(13M) Survivors: 0B->2048K Heap: 14M(64M)->9739K(64M)] This line gives the details on the heap size changes with the Evacuation Pause. This shows that Eden had the occupancy of 12M and its capacity was also 12M before the collection. After the collection, its occupancy got reduced to 0 since everything is evacuated/promoted from Eden during a collection, and its target size grew to 13M. The new Eden capacity of 13M is not reserved at this point. This value is the target size of the Eden. Regions are added to Eden as the demand is made and when the added regions reach to the target size, we start the next collection. Similarly, Survivors had the occupancy of 0 bytes and it grew to 2048K after the collection. The total heap occupancy and capacity was 14M and 64M receptively before the collection and it became 9739K and 64M after the collection. Apart from the evacuation pauses, G1 also performs concurrent-marking to build the live data information of regions. 1.416: [GC pause (young) (initial-mark), 0.62417980 secs] ….... 2.042: [GC concurrent-root-region-scan-start] 2.067: [GC concurrent-root-region-scan-end, 0.0251507] 2.068: [GC concurrent-mark-start] 3.198: [GC concurrent-mark-reset-for-overflow] 4.053: [GC concurrent-mark-end, 1.9849672 sec] 4.055: [GC remark 4.055: [GC ref-proc, 0.0000254 secs], 0.0030184 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 4.088: [GC cleanup 117M->106M(138M), 0.0015198 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 4.090: [GC concurrent-cleanup-start] 4.091: [GC concurrent-cleanup-end, 0.0002721] The first phase of a marking cycle is Initial Marking where all the objects directly reachable from the roots are marked and this phase is piggy-backed on a fully young Evacuation Pause. 2.042: [GC concurrent-root-region-scan-start] This marks the start of a concurrent phase that scans the set of root-regions which are directly reachable from the survivors of the initial marking phase. 2.067: [GC concurrent-root-region-scan-end, 0.0251507] End of the concurrent root region scan phase and it lasted for 0.0251507 seconds. 2.068: [GC concurrent-mark-start] Start of the concurrent marking at 2.068 secs from the start of the process. 3.198: [GC concurrent-mark-reset-for-overflow] This indicates that the global marking stack had became full and there was an overflow of the stack. Concurrent marking detected this overflow and had to reset the data structures to start the marking again. 4.053: [GC concurrent-mark-end, 1.9849672 sec] End of the concurrent marking phase and it lasted for 1.9849672 seconds. 4.055: [GC remark 4.055: [GC ref-proc, 0.0000254 secs], 0.0030184 secs] This corresponds to the remark phase which is a stop-the-world phase. It completes the left over marking work (SATB buffers processing) from the previous phase. In this case, this phase took 0.0030184 secs and out of which 0.0000254 secs were spent on Reference processing. 4.088: [GC cleanup 117M->106M(138M), 0.0015198 secs] Cleanup phase which is again a stop-the-world phase. It goes through the marking information of all the regions, computes the live data information of each region, resets the marking data structures and sorts the regions according to their gc-efficiency. In this example, the total heap size is 138M and after the live data counting it was found that the total live data size dropped down from 117M to 106M. 
4.090: [GC concurrent-cleanup-start] This concurrent cleanup phase frees up the regions that were found to be empty (didn't contain any live data) during the previous stop-the-world phase. 4.091: [GC concurrent-cleanup-end, 0.0002721] Concurrent cleanup phase took 0.0002721 secs to free up the empty regions. Option -XX:G1PrintRegionLivenessInfo Now, let's look at the output generated with the flag G1PrintRegionLivenessInfo. This is a diagnostic option and gets enabled with -XX:+UnlockDiagnosticVMOptions. G1PrintRegionLivenessInfo prints the live data information of each region during the Cleanup phase of the concurrent-marking cycle. 26.896: [GC cleanup ### PHASE Post-Marking @ 26.896### HEAP committed: 0x02e00000-0x0fe00000 reserved: 0x02e00000-0x12e00000 region-size: 1048576 Cleanup phase of the concurrent-marking cycle started at 26.896 secs from the start of the process and this live data information is being printed after the marking phase. Committed G1 heap ranges from 0x02e00000 to 0x0fe00000 and the total G1 heap reserved by JVM is from 0x02e00000 to 0x12e00000. Each region in the G1 heap is of size 1048576 bytes. ### type address-range used prev-live next-live gc-eff### (bytes) (bytes) (bytes) (bytes/ms) This is the header of the output that tells us about the type of the region, address-range of the region, used space in the region, live bytes in the region with respect to the previous marking cycle, live bytes in the region with respect to the current marking cycle and the GC efficiency of that region. ### FREE 0x02e00000-0x02f00000 0 0 0 0.0 This is a Free region. ### OLD 0x02f00000-0x03000000 1048576 1038592 1038592 0.0 Old region with address-range from 0x02f00000 to 0x03000000. Total used space in the region is 1048576 bytes, live bytes as per the previous marking cycle are 1038592 and live bytes with respect to the current marking cycle are also 1038592. The GC efficiency has been computed as 0. ### EDEN 0x03400000-0x03500000 20992 20992 20992 0.0 This is an Eden region. ### HUMS 0x0ae00000-0x0af00000 1048576 1048576 1048576 0.0### HUMC 0x0af00000-0x0b000000 1048576 1048576 1048576 0.0### HUMC 0x0b000000-0x0b100000 1048576 1048576 1048576 0.0### HUMC 0x0b100000-0x0b200000 1048576 1048576 1048576 0.0### HUMC 0x0b200000-0x0b300000 1048576 1048576 1048576 0.0### HUMC 0x0b300000-0x0b400000 1048576 1048576 1048576 0.0### HUMC 0x0b400000-0x0b500000 1001480 1001480 1001480 0.0 These are the continuous set of regions called Humongous regions for storing a large object. HUMS (Humongous starts) marks the start of the set of humongous regions and HUMC (Humongous continues) tags the subsequent regions of the humongous regions set. ### SURV 0x09300000-0x09400000 16384 16384 16384 0.0 This is a Survivor region. ### SUMMARY capacity: 208.00 MB used: 150.16 MB / 72.19 % prev-live: 149.78 MB / 72.01 % next-live: 142.82 MB / 68.66 % At the end, a summary is printed listing the capacity, the used space and the change in the liveness after the completion of concurrent marking. In this case, G1 heap capacity is 208MB, total used space is 150.16MB which is 72.19% of the total heap size, live data in the previous marking was 149.78MB which was 72.01% of the total heap size and the live data as per the current marking is 142.82MB which is 68.66% of the total heap size. Option -XX:+G1PrintHeapRegions G1PrintHeapRegions option logs the regions related events when regions are committed, allocated into or are reclaimed. 
COMMIT/UNCOMMIT events G1HR COMMIT [0x6e900000,0x6ea00000]G1HR COMMIT [0x6ea00000,0x6eb00000] Here, the heap is being initialized or expanded and the region (with bottom: 0x6eb00000 and end: 0x6ec00000) is being freshly committed. COMMIT events are always generated in order i.e. the next COMMIT event will always be for the uncommitted region with the lowest address. G1HR UNCOMMIT [0x72700000,0x72800000]G1HR UNCOMMIT [0x72600000,0x72700000] Opposite to COMMIT. The heap got shrunk at the end of a Full GC and the regions are being uncommitted. Like COMMIT, UNCOMMIT events are also generated in order i.e. the next UNCOMMIT event will always be for the committed region with the highest address. GC Cycle events G1HR #StartGC 7G1HR CSET 0x6e900000G1HR REUSE 0x70500000G1HR ALLOC(Old) 0x6f800000G1HR RETIRE 0x6f800000 0x6f821b20G1HR #EndGC 7 This shows start and end of an Evacuation pause. This event is followed by a GC counter tracking both evacuation pauses and Full GCs. Here, this is the 7th GC since the start of the process. G1HR #StartFullGC 17G1HR UNCOMMIT [0x6ed00000,0x6ee00000]G1HR POST-COMPACTION(Old) 0x6e800000 0x6e854f58G1HR #EndFullGC 17 Shows start and end of a Full GC. This event is also followed by the same GC counter as above. This is the 17th GC since the start of the process. ALLOC events G1HR ALLOC(Eden) 0x6e800000 The region with bottom 0x6e800000 just started being used for allocation. In this case it is an Eden region and allocated into by a mutator thread. G1HR ALLOC(StartsH) 0x6ec00000 0x6ed00000G1HR ALLOC(ContinuesH) 0x6ed00000 0x6e000000 Regions being used for the allocation of Humongous object. The object spans over two regions. G1HR ALLOC(SingleH) 0x6f900000 0x6f9eb010 Single region being used for the allocation of Humongous object. G1HR COMMIT [0x6ee00000,0x6ef00000]G1HR COMMIT [0x6ef00000,0x6f000000]G1HR COMMIT [0x6f000000,0x6f100000]G1HR COMMIT [0x6f100000,0x6f200000]G1HR ALLOC(StartsH) 0x6ee00000 0x6ef00000G1HR ALLOC(ContinuesH) 0x6ef00000 0x6f000000G1HR ALLOC(ContinuesH) 0x6f000000 0x6f100000G1HR ALLOC(ContinuesH) 0x6f100000 0x6f102010 Here, Humongous object allocation request could not be satisfied by the free committed regions that existed in the heap, so the heap needed to be expanded. Thus new regions are committed and then allocated into for the Humongous object. G1HR ALLOC(Old) 0x6f800000 Old region started being used for allocation during GC. G1HR ALLOC(Survivor) 0x6fa00000 Region being used for copying old objects into during a GC. Note that Eden and Humongous ALLOC events are generated outside the GC boundaries and Old and Survivor ALLOC events are generated inside the GC boundaries. Other Events G1HR RETIRE 0x6e800000 0x6e87bd98 Retire and stop using the region having bottom 0x6e800000 and top 0x6e87bd98 for allocation. Note that most regions are full when they are retired and we omit those events to reduce the output volume. A region is retired when another region of the same type is allocated or we reach the start or end of a GC(depending on the region). So for Eden regions: For example: 1. ALLOC(Eden) Foo2. ALLOC(Eden) Bar3. StartGC At point 2, Foo has just been retired and it was full. At point 3, Bar was retired and it was full. If they were not full when they were retired, we will have a RETIRE event: 1. ALLOC(Eden) Foo2. RETIRE Foo top3. ALLOC(Eden) Bar4. StartGC G1HR CSET 0x6e900000 Region (bottom: 0x6e900000) is selected for the Collection Set. The region might have been selected for the collection set earlier (i.e. when it was allocated). 
However, we generate the CSET events for all regions in the CSet at the start of a GC to make sure there's no confusion about which regions are part of the CSet. G1HR POST-COMPACTION(Old) 0x6e800000 0x6e839858 POST-COMPACTION event is generated for each non-empty region in the heap after a full compaction. A full compaction moves objects around, so we don't know what the resulting shape of the heap is (which regions were written to, which were emptied, etc.). To deal with this, we generate a POST-COMPACTION event for each non-empty region with its type (old/humongous) and the heap boundaries. At this point we should only have Old and Humongous regions, as we have collapsed the young generation, so we should not have eden and survivors. POST-COMPACTION events are generated within the Full GC boundary. G1HR CLEANUP 0x6f400000G1HR CLEANUP 0x6f300000G1HR CLEANUP 0x6f200000 These regions were found empty after remark phase of Concurrent Marking and are reclaimed shortly afterwards. G1HR #StartGC 5G1HR CSET 0x6f400000G1HR CSET 0x6e900000G1HR REUSE 0x6f800000 At the end of a GC we retire the old region we are allocating into. Given that its not full, we will carry on allocating into it during the next GC. This is what REUSE means. In the above case 0x6f800000 should have been the last region with an ALLOC(Old) event during the previous GC and should have been retired before the end of the previous GC. G1HR ALLOC-FORCE(Eden) 0x6f800000 A specialization of ALLOC which indicates that we have reached the max desired number of the particular region type (in this case: Eden), but we decided to allocate one more. Currently it's only used for Eden regions when we extend the young generation because we cannot do a GC as the GC-Locker is active. G1HR EVAC-FAILURE 0x6f800000 During a GC, we have failed to evacuate an object from the given region as the heap is full and there is no space left to copy the object. This event is generated within GC boundaries and exactly once for each region from which we failed to evacuate objects. When Heap Regions are reclaimed ? It is also worth mentioning when the heap regions in the G1 heap are reclaimed. All regions that are in the CSet (the ones that appear in CSET events) are reclaimed at the end of a GC. The exception to that are regions with EVAC-FAILURE events. All regions with CLEANUP events are reclaimed. After a Full GC some regions get reclaimed (the ones from which we moved the objects out). But that is not shown explicitly, instead the non-empty regions that are left in the heap are printed out with the POST-COMPACTION events.
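    (For reference, a command line that would produce the kind of output walked through above might look roughly like this; the heap sizes and the application class name are placeholders, while the GC flags are the ones discussed in the post.)

        java -Xms64m -Xmx64m -XX:+UseG1GC \
             -XX:+PrintGCDetails \
             -XX:+UnlockDiagnosticVMOptions \
             -XX:+G1PrintRegionLivenessInfo \
             -XX:+G1PrintHeapRegions \
             MyApplication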

    Read the article

  • Having Fun with Coding4Fun’s Windows Phone 7 Controls

    - by mbcrump
    I’m a big believer in having a hobby project as you can probably tell from the first sentence in my “personal webpage using Silverlight” article. One of my current hobby projects is to re-do my current WP7 application in the marketplace. I knew up front that I needed a “Loading” animation and a better “About” box. After starting to develop my own, I noticed a great set of WP7 controls by Coding4Fun and decided to use them in my new application. Before I go any further they are FREE and Open-Source. It is really simple to get started, just go to the CodePlex site and click the download button. After you have downloaded it then extract it to a Folder and you will have 4 DLL files. They are listed below: Now create a Windows Phone 7 Project and add references to the DLL’s by right clicking on the References folder and clicking “Add references”.   After adding the references, we can get started. I needed a ProgressOverlay animation or “Loading Screen” while my RSS feed is downloading. Basically, you just need to add the following namespace to whatever page you want the control on: xmlns:Controls="clr-namespace:Coding4Fun.Phone.Controls;assembly=Coding4Fun.Phone.Controls" And then the code inside your Grid or wherever you want the Loading screen placed. <Controls:ProgressOverlay Name="progressOverlay" > <Controls:ProgressOverlay.Content> <TextBlock>Loading</TextBlock> </Controls:ProgressOverlay.Content> </Controls:ProgressOverlay> Bam, you now have a great looking loading screen. Of course inside the ProgressOverlay, you may want to add a Visibility property to turn it off after your data loads if you are using MVVM or similar pattern.   Next up, I needed a nice clean “About Box” that looks good but is also functional. Meaning, if they click on my twitter name, web or email to launch the appropriate task. Again, this is only a few lines of code: var p = new AboutPrompt(); p.VersionNumber = "2.0"; p.Show("Michael Crump", "@mbcrump", "[email protected]", @"http://michaelcrump.net"); A nice clean “About” box with just a few lines of code! I’m all for code that I don’t have to write. It also comes with a pretty sweet InputPrompt for grabbing info from a user: The code for this is also very simple: InputPrompt input = new InputPrompt(); input.Completed += (s, e) => { MessageBox.Show(e.Result.ToString()); }; input.Title = "Input Box"; input.Message = "What does a \"Developer Large\" T-Shirt Mean? "; input.Show(); I also enjoyed the PhoneHelper that allows you to get data out of the WMAppManifest File very easy. So for example if I wanted the Version info from the WMAppManifest file. I could write one line and get it. PhoneHelper.GetAppAttribute("Version") Of course you would want to make sure you add the following using statement: using Coding4Fun.Phone.Controls.Data; You can’t have all these cool controls without a great set of Converters. The included BooleanToVisibility converter will convert a Boolean to and from a Visibility value. This is excellent when using something like a CheckBox to display a TextBox when its checked. See the example below: The code is below: <phone:PhoneApplicationPage.Resources> <Converters:BooleanToVisibilityConverter x:Key="BooleanToVisibilityConverter"/> </phone:PhoneApplicationPage.Resources> <CheckBox x:Name="checkBox"/> <TextBlock Text="Display Text" Visibility="{Binding ElementName=checkBox, Path=IsChecked, Converter={StaticResource BooleanToVisibilityConverter} }"/> That’s not all the goodies included. 
They also provide a RoundedButton, TimePicker and several other converters. The documentation is great and I would recommend you give them a shot if you need any of this functionality. Btw, thank Brian Peek for his awesome work on Coding4Fun!  Subscribe to my feed

    Read the article

  • F# WPF Form – the basics

    - by MarkPearl
    I was listening to Dot Net Rocks show #560 about F# and during the podcast Richard Campbell brought up a good point with regards to F# and a GUI. In essence what I understood his point to be was that until one could write an end to end application in F#, it would be a hard sell to developers to take it on. In part I agree with him, while I am beginning to really enjoy learning F#, I can’t but help feel that I would be a lot further into the language if I could do my Windows Forms like I do in C# or VB.NET for the simple reason that in “playing” applications I spend the majority of the time in the UI layer… So I have been keeping my eye out for some examples of creating a WPF form in a F# project and came across Tim’s F# Twitter Stream Sample, which had exactly this…. of course he actually had a bit more than a basic form… but it was enough for me to scrap the insides and glean what I needed. So today I am going to make just the very basic WPF form with all the goodness of a XAML window. Getting Started First thing we need to do is create a new solution with a blank F# application project – I have made mine called FSharpWPF. Once you have the project created you will need to change the project type from a Console Application to a Windows Application. You do this by right clicking on the project file and going to its properties… Once that is done you will need to add the appropriate references. You do this by right clicking on the References in the Solution Explorer and clicking “Add Reference'”. You should add the appropriate .Net references below for WPF & XAMl to work. Once these references are added you then need to add your XAML file to the project. You can do this by adding a new item to the project of type xml and simply changing the file extension from xml to xaml. Once the xaml file has been added to the project you will need to add valid window XAML. Example of a very basic xaml file is shown below… <Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="F# WPF WPF Form" Height="350" Width="525"> <Grid> </Grid> </Window> Once your xaml file is done… you need to set the build action of the xaml file from “None” to “Resource” as depicted in the picture below. If you do not set this you will get an IOException error when running the completed project with a message along the lines of “Cannot locate resource ‘window.xaml’ You then need to tie everything up by putting the correct F# code in the Program.fs to load the xaml window. In the Program.fs put the following code… module Program open System open System.Collections.ObjectModel open System.IO open System.Windows open System.Windows.Controls open System.Windows.Markup [<STAThread>] [<EntryPoint>] let main(_) = let w = Application.LoadComponent(new System.Uri("/FSharpWPF;component/Window.xaml", System.UriKind.Relative)) :?> Window (new Application()).Run(w) Once all this is done you should be able to build and run your project. What you have done is created a WPF based window inside a FSharp project. It should look something like below…   Nothing to exciting, but sufficient to illustrate the very basic WPF form in F#. Hopefully in future posts I will build on this to expose button events etc.

    Read the article

  • MySQL Error 1452 - Cannot add or update a child row: a foreign key constraint fails

    - by dscher
    I've looked at other people's questions on this topic but can't seem to find where my error is coming from. Any help would be greatly appreciated. I'm including as much as I can think of that might help find the problem: CREATE TABLE stocks ( id INT AUTO_INCREMENT NOT NULL, user_id INT(11) UNSIGNED NOT NULL, ticker VARCHAR(20) NOT NULL, name VARCHAR(20), rating INT(11), position ENUM("strong buy", "buy", "sell", "strong sell", "neutral"), next_look DATE, privacy ENUM("public", "private"), PRIMARY KEY(id), FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE ) ENGINE=InnoDB DEFAULT CHARSET=utf8; CREATE TABLE IF NOT EXISTS `stocks_tags` ( `stock_id` INT NOT NULL, `tag_id` INT NOT NULL, PRIMARY KEY (`stock_id`,`tag_id`), KEY `fk_stock_tag` (`tag_id`), KEY `fk_tag_stock` (`stock_id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; ALTER TABLE `stocks_tags` ADD CONSTRAINT `fk_stock_tag` FOREIGN KEY (`tag_id`) REFERENCES `tags` (`id`) ON DELETE CASCADE ON UPDATE CASCADE, ADD CONSTRAINT `fk_tag_stock` FOREIGN KEY (`stock_id`) REFERENCES `stocks` (`id`) ON DELETE CASCADE ON UPDATE CASCADE; CREATE TABLE tags( id INT AUTO_INCREMENT NOT NULL PRIMARY KEY, tags VARCHAR(30), UNIQUE(tags) ) ENGINE=INNODB DEFAULT CHARSET=utf8; And the error I'm getting: Database_Exception [ 1452 ]: Cannot add or update a child row: a foreign key constraint fails (`ddmachine`.`stocks_tags`, CONSTRAINT `fk_stock_tag` FOREIGN KEY (`tag_id`) REFERENCES `tags` (`id`) ON DELETE CASCADE ON UPDATE CASCADE) [ INSERT INTO `stocks_tags` (`stock_id`, `tag_id`) VALUES (19, 'cash') ] I did see that someone else had a similar problem based on their enum columns but don't think that's it.

    Read the article

  • How should I define a composite foreign key for domain constraints in the presence of surrogate keys

    - by Samuel Danielson
    I am writing a new app with Rails so I have an id column on every table. What is the best practice for enforcing domain constraints using foreign keys? I'll outline my thoughts and frustration. Here's what I would imagine as "The Rails Way". It's what I started with. Companies: id: integer, serial company_code: char, unique, not null Invoices: id: integer, serial company_id: integer, not null Products: id: integer, serial sku: char, unique, not null company_id: integer, not null LineItems: id: integer, serial invoice_id: integer, not null, references Invoices (id) product_id: integer, not null, references Products (id) The problem with this is that a product from one company might appear on an invoice for a different company. I added a (company_id: integer, not null) to LineItems, sort of like I'd do if only using natural keys and serials, then added a composite foreign key. LineItems (product_id, company_id) references Products (id, company_id) LineItems (invoice_id, company_id) references Invoices (id, company_id) This properly constrains LineItems to a single company but it seems over-engineered and wrong. company_id in LineItems is extraneous because the surrogate foreign keys are already unique in the foreign table. Postgres requires that I add a unique index for the referenced attributes so I am creating a unique index on (id, company_id) in Products and Invoices, even though id is simply unique. The following table with natural keys and a serial invoice number would not have these issues. LineItems: company_code: char, not null sku: char, not null invoice_id: integer, not null I can ignore the surrogate keys in the LineItems table but this also seems wrong. Why make the database join on char when it has an integer already there to use? Also, doing it exactly like the above would require me to add company_code, a natural foreign key, to Products and Invoices. The compromise... LineItems: company_id: integer, not null sku: integer, not null invoice_id: integer, not null does not require natural foreign keys in other tables but it is still joining on char when there is a integer available. Is there a clean way to enforce domain constraints with foreign keys like God intended, but in the presence of surrogates, without turning the schema and indexes into a complicated mess?

    Read the article

  • Two n x m relationships with the same table in mysql

    - by Christian
    I want to create a database in which there's an n x m relationship between the table drug and the table article and an n x m relationship between the table target and the table article. I get the error: Cannot delete or update a parent row: a foreign key constraint fails What do I have to change in my code? DROP TABLE IF EXISTS `textmine`.`article`; CREATE TABLE `textmine`.`article` ( `id` int(10) unsigned NOT NULL AUTO_INCREMENT COMMENT 'Pubmed ID', `abstract` blob NOT NULL, `authors` blob NOT NULL, `journal` varchar(256) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; DROP TABLE IF EXISTS `textmine`.`drugs`; CREATE TABLE `textmine`.`drugs` ( `id` int(10) unsigned NOT NULL COMMENT 'This ID is taken from the biosemantics dictionary', `primaryName` varchar(256) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; DROP TABLE IF EXISTS `textmine`.`targets`; CREATE TABLE `textmine`.`targets` ( `id` int(10) unsigned NOT NULL AUTO_INCREMENT, `primaryName` varchar(256) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; DROP TABLE IF EXISTS `textmine`.`containstarget`; CREATE TABLE `textmine`.`containstarget` ( `targetid` int(10) unsigned NOT NULL, `articleid` int(10) unsigned NOT NULL, KEY `target` (`targetid`), KEY `article` (`articleid`), CONSTRAINT `article` FOREIGN KEY (`articleid`) REFERENCES `article` (`id`), CONSTRAINT `target` FOREIGN KEY (`targetid`) REFERENCES `targets` (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; DROP TABLE IF EXISTS `textmine`.`contiansdrug`; CREATE TABLE `textmine`.`contiansdrug` ( `drugid` int(10) unsigned NOT NULL, `articleid` int(10) unsigned NOT NULL, KEY `drug` (`drugid`), KEY `article` (`articleid`), CONSTRAINT `article` FOREIGN KEY (`articleid`) REFERENCES `article` (`id`), CONSTRAINT `drug` FOREIGN KEY (`drugid`) REFERENCES `drugs` (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

    Read the article

  • Divide and conquer of large objects for GC performance

    - by Aperion
    At my work we're discussing different approaches to cleaning up a large amount of managed ~50-100MB memory.There are two approaches on the table (read: two senior devs can't agree) and not having the experience the rest of the team is unsure of what approach is more desirable, performance or maintainability. The data being collected is many small items, ~30000 which in turn contains other items, all objects are managed. There is a lot of references between these objects including event handlers but not to outside objects. We'll call this large group of objects and references as a single entity called a blob. Approach #1: Make sure all references to objects in the blob are severed and let the GC handle the blob and all the connections. Approach #2: Implement IDisposable on these objects then call dispose on these objects and set references to Nothing and remove handlers. The theory behind the second approach is since the large longer lived objects take longer to cleanup in the GC. So, by cutting the large objects into smaller bite size morsels the garbage collector will processes them faster, thus a performance gain. So I think the basic question is this: Does breaking apart large groups of interconnected objects optimize data for garbage collection or is better to keep them together and rely on the garbage collection algorithms to processes the data for you? I feel this is a case of pre-optimization, but I do not know enough of the GC to know what does help or hinder it.
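    (For concreteness, a bare-bones sketch of what approach #2 amounts to in C#; the type and event names here are invented, not taken from the codebase being discussed. Each item in the blob unhooks its event handlers and drops its references when disposed, instead of leaving the whole interconnected graph for the collector to trace at once.)

        // Hypothetical blob item, for illustration only.
        using System;

        class BlobItem : IDisposable
        {
            public event EventHandler Changed;
            BlobItem parent;

            public BlobItem(BlobItem parent)
            {
                this.parent = parent;
                if (parent != null)
                    parent.Changed += OnParentChanged;   // cross-reference within the blob
            }

            void OnParentChanged(object sender, EventArgs e)
            {
                // react to the parent changing
            }

            public void Dispose()
            {
                // Approach #2: remove handlers and null out references so each
                // item stops keeping its neighbours reachable.
                if (parent != null)
                {
                    parent.Changed -= OnParentChanged;
                    parent = null;
                }
                Changed = null;
            }
        }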

    Read the article

  • Database design MySQL using foreign keys

    - by dscher
    I'm having some a little trouble understanding how to handle the database end of a program I'm making. I'm using an ORM in Kohana, but am hoping that a generalized understanding of how to solve this issue will lead me to an answer with the ORM. I'm writing a program for users to manage their stock research information. My tables are basically like so: CREATE TABLE tags( id INT AUTO_INCREMENT NOT NULL PRIMARY KEY, tags VARCHAR(30), UNIQUE(tags) ) ENGINE=INNODB DEFAULT CHARSET=utf8; CREATE TABLE stock_tags( id INT AUTO_INCREMENT NOT NULL PRIMARY KEY, tag_id INT NOT NULL, stock_id INT NOT NULL, FOREIGN KEY (tag_id) REFERENCES tags(id), FOREIGN KEY(stock_id) REFERENCES stocks(id) ON DELETE CASCADE ) ENGINE=INNODB DEFAULT CHARSET=utf8; CREATE TABLE notes( id INT AUTO_INCREMENT NOT NULL, stock_id INT NOT NULL, notes TEXT NOT NULL, FOREIGN KEY (stock_id) REFERENCES stocks(id) ON DELETE CASCADE, PRIMARY KEY(id) ) ENGINE=INNODB DEFAULT CHARSET=utf8; CREATE TABLE links( id INT AUTO_INCREMENT NOT NULL, stock_id INT NOT NULL, links VARCHAR(2083) NOT NULL, FOREIGN KEY (stock_id) REFERENCES stocks(id) ON DELETE CASCADE, PRIMARY KEY(id) ) ENGINE=INNODB DEFAULT CHARSET=utf8; How would I get all the attributes of a single stock, including its links, notes, and tags? Do I have to add links, notes, and tags columns to the stocks table and then how do you call it? I know this differs using an ORM and I'd assume that I can use join tables in SQL. Thanks for any help, this will really help me understand the issue a lot better.

    Read the article

  • Do I need to write a trigger for such a simple constraint?

    - by Paul Hanbury
    I really had a hard time knowing what words to put into the title of my question, as I am not especially sure if there is a database pattern related to my problem. I will try to simplify matters as much as possible to get directly to the heart of the issue. Suppose I have some tables. The first one is a list of widget types: create table widget_types ( widget_type_id number(7,0) primary key, description varchar2(50) ); The next one contains icons: create table icons ( icon_id number(7,0) primary key, picture blob ); Even though the users get to select their preferred widget, there is a predefined subset of widgets that they can choose from for each widget type. create table icon_associations ( widget_type_id number(7,0) references widget_types, icon_id number(7,0) references icons, primary key (widget_type_id, icon_id) ); create table icon_prefs ( user_id number(7,0) references users, widget_type_id number(7,0), icon_id number(7,0), primary key (user_id, widget_type_id), foreign key (widget_type_id, icon_id) references icon_associations ); Pretty simple so far. Let us now assume that if we are displaying an icon to a user who has not set up his preferences, we choose one of the appropriate images associated with the current widget. I'd like to specify the preferred icon to display in such a case, and here's where I run into my problem: alter table icon_associations add ( is_preferred char(1) check( is_preferred in ('y','n') ) ) ; I do not see how I can enforce that for each widget_type there is one, and only one, row having is_preferred set to 'y'. I know that in MySQL, I am able to write a subquery in my check constraint to quickly resolve this issue. This is not possible with Oracle. Is my mistake that this column has no business being in the icon_associations table? If not where should it go? Is this a case where, in Oracle, the constraint can only be handled with a trigger? I ask only because I'd like to go the constraint route if at all possible. Thanks so much for your help, Paul

    Read the article

  • StructureMap Configuration Per Thread/Request for the Full Dependency Chain

    - by Phil Sandler
    I've been using StructureMap for a few months now, and it has worked great on some smaller (green field) projects. Most of the configurations I have had to set up involve a single implementation per interface. In the cases where I needed to choose a specific implementation at runtime, I created a factory class that used ObjectFactory.GetNamedInstance<T>(). In the smaller projects, there were few enough of these cases that I was comfortable with the references to ObjectFactory. My understanding is that you want to limit these references as much as possible, and ideally only reference the ObjectFactory once. I am working to refactor a larger codebase to use IoC/StructureMap, and am finding that I may need many of these factory classes with ObjectFactory references to get what I need. Essentially, I am creating a "root service" with the ObjectFactory, so that everything in the dependency chain is managed by the container. The root service is created by name (i.e. "BuildCar", "BuildTruck"), and the services needed deeper in the dependency chain could also be constructed using the same name--so the "IAttachWheels" service could vary based on whether a car or truck is being built. Since the class that depends on IAttachWheels is the same in both configurations, I don't think I can use ConstructedBy in the registry to choose the implementation. Also, to be clear, the IAttachWheels implementations need to be managed by the container as well, because the dependency chain runs fairly deep. I looked briefly at Profiles as an option, but read (here on StackOverflow) that changing profiles essentially changes implementations for all threads. Is there a feature similar to profiles that is thread/request specific? Is the approach of a factory class that references ObjectFactory the right way to go? Any thoughts would be appreciated.
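    (A rough sketch of the kind of factory class the question refers to, as it might look against StructureMap 2.x; the IAttachWheels interface and the instance names are assumptions taken from the question's own example, and the registry syntax for naming instances varies between StructureMap versions.)

        // Hypothetical named-instance factory; one of the ObjectFactory touch
        // points the question is trying to keep to a minimum.
        using StructureMap;

        public interface IAttachWheels
        {
            void Attach();
        }

        public class AttachWheelsFactory
        {
            public IAttachWheels For(string vehicleType)
            {
                // Resolves whichever IAttachWheels implementation was registered
                // under this name (e.g. "BuildCar" or "BuildTruck") in the registry.
                return ObjectFactory.GetNamedInstance<IAttachWheels>(vehicleType);
            }
        }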

    Read the article

  • Actionscript - combining AS2 assets into a single SWF

    - by Justin Bachorik
    Hi guys, I have a flash project that I'm trying to export as a single SWF. There's a main SWF file that loads about 6 other SWFs, and both the main and the child SWFs reference other external assets (images, sounds, etc). I'd like to package everything as a single .swf file so I don't have to tote the other assets around with the .swf. All the coding is done in the timeline, but the assets haven't been imported into the Flash Authoring environment and I don't have time to do that right now (there are too many references to them everywhere). I'm hoping that there's just an option I'm missing that allows this sort of packaged export, but I haven't found anything like that. I don't have access to Flex or mxmlc (and as the AS is timeline-based, they wouldn't necessarily help me). Any thoughts? Thanks! PS...if there's no way of doing exactly what I'm saying, I could deal with having all the assets in a "assets" folder or something like that, so I'd just be toting around main.swf and an assets folder. The problem here is that all the references to the assets assume that they're in the same folder as the main.swf file, so everything's assumed to be local...is there a way to change the scope of all external references in Flash (so, for example, all local references in the code are actually searched in /assets)?

    Read the article

  • SQL - Query to display average as either "longer than" or "shorter than"

    - by user1840801
    Here are the tables I've created: CREATE TABLE Plane_new (Pnum char(3), Feature varchar2(20), Ptype varchar2(15), primary key (Pnum)); CREATE TABLE Employee_new (eid char(3), ename varchar(10), salary number(7,2), mid char(3), PRIMARY KEY (eid), FOREIGN KEY (mid) REFERENCES Employee_new); CREATE TABLE Pilot_new (eid char(3), Licence char(9), primary key (eid), foreign key (eid) references Employee_new on delete cascade); CREATE TABLE FlightI_new (Fnum char(4), Fdate date, Duration number(2), Pid char(3), Pnum char(3), primary key (Fnum), foreign key (Pid) references Pilot_new (eid), foreign key (Pnum) references Plane_new); And here is the query I must complete: For each flight, display its number, the name of the pilot who implemented the flight and the words ‘Longer than average’ if the flight duration was longer than average or the words ‘Shorter than average’ if the flight duration was shorter than or equal to the average. For the column holding the words ‘Longer than average’ or ‘Shorter than average’ make a header Length. Here is what I've come up with - with no luck! SELECT F.Fnum, E.ename, CASE Length WHEN F.Duration>(SELECT AVG(F.Duration) FROM FlightI_new F) THEN "Longer than average" WHEN F.Duration<=(SELECT AVG(F.Duration) FROM FlightI_new F) THEN 'Shorter than average' END FROM FlightI_new F LEFT OUTER JOIN Employee_new E ON F.Pid=E.eid GROUP BY F.Fnum, E.ename; Where am I going wrong?

    Read the article

  • Refer to the same footnote multiple times in Pages '09

    - by Zsub
    I would like to refer to the same footnote multiple times in Pages '09. What it does now is create a new footnote (identical except for the little number), so I have three or four identical footnotes, which is a waste of space. It looks like this: "These are the first[1], second[2] and third[3] references", with each number referring to its own (otherwise identical) footnote. I would like to have it like this: "These are the first[1], second[1] and third[1] references", with all numbers referring to the same footnote.

    Read the article

  • Do browsers change URLs of saved bookmarks in response to 301 redirection?

    - by elliot100
    HTTP status code 301 is used to indicate that content has moved permanently, and that the returned URL should be used to access the requested content in future. RFC 2616 says Clients with link editing capabilities ought to automatically re-link references to the request-URI to one or more of the new references returned by the server, where possible. Do any browsers actually implement this and change a bookmark's URL?

    Read the article

  • How to change the style of a source reference in Word ?

    - by ldigas
    I have a number of references in Word 2007; there are several ways of referencing them, and the one I find most fitting is purely numeric, e.g. (3) for reference number 3 in the list. But since I reference equations with round parentheses, I'd like to change the literature references to square brackets. How can I do that?

    Read the article

  • Access to content of Lotus Notes database without Lotus Notes software installed

    - by Fortilan
    I am looking for a programmatic way to access content in a Lotus Notes database (.nsf file) without having Lotus Notes software installed. Python would be preferred, but I'm also willing to look at other languages (e.g. C/C++) or other means (e.g. SQL). From what I have read, all of the methods (e.g. Python COM access, pyodbc) rely on having Lotus Notes server software installed. The problem I am trying to solve is to read the content and look for references (URLs back to a web site that is undergoing maintenance, whose addresses will change). As a start, I want to get a list of references and hope to be able to replace them with the new references to the modified web site. Any ideas on how best to do this are welcome :)

    Read the article

  • Grails / GORM, read-only cache and transient fields

    - by Stephen Swensen
    Suppose I have the following Domain object mapping to a legacy table, utilizing read-only second-level cache, and having a transient field: class DomainObject { static def transients = ['userId'] Long id Long userId static mapping = { cache usage: 'read-only' table 'SOME_TABLE' } } I have a problem, references to DomainObject instances seem to be shared due to the caching, and thus transient fields are writing over each other. For example, def r1 = DomainObject.get(1) r1.userId = 22 def r2 = DomainObject.get(1) r2.userId = 34 assert r1.userId == 34 That is, r1 and r2 are references to the same instance. This is undesirable, I would like to cache the table data without sharing references. Any ideas?

    Read the article

  • How do I upgrade my Crystal Report libraries in a .NET 3.5 project to CR XI R2?

    - by Stuart B
    Our project currently uses Crystal Reports for Visual Studio 2008. We need to upgrade to XI R2, but I'm having problems doing so. Here are the steps I followed: Install Crystal Reports XI R2. Collect updated assemblies from the GAC. I did this because I couldn't find version XI libraries in the "Add References..." dialog. I verified that these assemblies were of version 11.5.*. The libraries I gathered were: CrystalDecisions.CrystalReports.Engine CrystalDecisions.Enterprise.Framework CrystalDecisions.Enterprise.InfoStore CrystalDecisions.ReportSource CrystalDecisions.Shared CrystalDecisions.Windows.Forms Replace all references in my projects to version 10.5 Crystal libraries with references to the newer assemblies. Everything builds fine, but when I try to instantiate a ReportDocument, I get this error: The type initializer for 'CrystalDecisions.CrystalReports.Engine.ReportDocument' threw an exception. Is there anything I'm missing? Will this just not work?

    Read the article

  • EF4 generates invalid script

    - by Jaxidian
    When I right-click in a .EDMX file and click Generate Database From Model, the resulting script is obviously wrong because of the table names. What it generates is the following script. Note the table names in the DROP TABLE part versus the CREATE TABLE part. Why is this inconsistent? This is obviously not a reusable script. What I created was an Entity named "Address" and an Entity named "Company", etc (all singular). The EntitySet names are pluralized. The "Pluralize New Objects" boolean does not change this either. So what's the deal? For what it's worth, I originally generated the EDMX by pointing it to a database that had tables with non-pluralized names and now that I've made some changes, I want to go back the other way. I'd like to have the option to go back and forth as neither the db-first nor the model-first model is ideal in all scenarios, and I have the control to ensure that there will be no merging issues from multiple people going both ways at the same time. -- -------------------------------------------------- -- Dropping existing FOREIGN KEY constraints -- NOTE: if the constraint does not exist, an ignorable error will be reported. -- -------------------------------------------------- ALTER TABLE [Address] DROP CONSTRAINT [FK_Address_StateID-State_ID]; GO ALTER TABLE [Company] DROP CONSTRAINT [FK_Company_AddressID-Address_ID]; GO ALTER TABLE [Employee] DROP CONSTRAINT [FK_Employee_BossEmployeeID-Employee_ID]; GO ALTER TABLE [Employee] DROP CONSTRAINT [FK_Employee_CompanyID-Company_ID]; GO ALTER TABLE [Employee] DROP CONSTRAINT [FK_Employee_PersonID-Person_ID]; GO ALTER TABLE [Person] DROP CONSTRAINT [FK_Person_AddressID-Address_ID]; GO -- -------------------------------------------------- -- Dropping existing tables -- NOTE: if the table does not exist, an ignorable error will be reported. 
-- -------------------------------------------------- DROP TABLE [Address]; GO DROP TABLE [Company]; GO DROP TABLE [Employee]; GO DROP TABLE [Person]; GO DROP TABLE [State]; GO -- -------------------------------------------------- -- Creating all tables -- -------------------------------------------------- -- Creating table 'Addresses' CREATE TABLE [Addresses] ( [ID] int IDENTITY(1,1) NOT NULL, [StreetAddress] nvarchar(100) NOT NULL, [City] nvarchar(100) NOT NULL, [StateID] int NOT NULL, [Zip] nvarchar(10) NOT NULL ); GO -- Creating table 'Companies' CREATE TABLE [Companies] ( [ID] int IDENTITY(1,1) NOT NULL, [Name] nvarchar(100) NOT NULL, [AddressID] int NOT NULL ); GO -- Creating table 'People' CREATE TABLE [People] ( [ID] int IDENTITY(1,1) NOT NULL, [FirstName] nvarchar(100) NOT NULL, [LastName] nvarchar(100) NOT NULL, [AddressID] int NOT NULL ); GO -- Creating table 'States' CREATE TABLE [States] ( [ID] int IDENTITY(1,1) NOT NULL, [Name] nvarchar(100) NOT NULL, [Abbreviation] nvarchar(2) NOT NULL ); GO -- Creating table 'Employees' CREATE TABLE [Employees] ( [ID] int IDENTITY(1,1) NOT NULL, [PersonID] int NOT NULL, [CompanyID] int NOT NULL, [Position] nvarchar(100) NOT NULL, [BossEmployeeID] int NULL ); GO -- -------------------------------------------------- -- Creating all PRIMARY KEY constraints -- -------------------------------------------------- -- Creating primary key on [ID] in table 'Addresses' ALTER TABLE [Addresses] ADD CONSTRAINT [PK_Addresses] PRIMARY KEY ([ID] ); GO -- Creating primary key on [ID] in table 'Companies' ALTER TABLE [Companies] ADD CONSTRAINT [PK_Companies] PRIMARY KEY ([ID] ); GO -- Creating primary key on [ID] in table 'People' ALTER TABLE [People] ADD CONSTRAINT [PK_People] PRIMARY KEY ([ID] ); GO -- Creating primary key on [ID] in table 'States' ALTER TABLE [States] ADD CONSTRAINT [PK_States] PRIMARY KEY ([ID] ); GO -- Creating primary key on [ID] in table 'Employees' ALTER TABLE [Employees] ADD CONSTRAINT [PK_Employees] PRIMARY KEY ([ID] ); GO -- -------------------------------------------------- -- Creating all FOREIGN KEY constraints -- -------------------------------------------------- -- Creating foreign key on [StateID] in table 'Addresses' ALTER TABLE [Addresses] ADD CONSTRAINT [FK_Address_StateID_State_ID] FOREIGN KEY ([StateID]) REFERENCES [States] ([ID]) ON DELETE NO ACTION ON UPDATE NO ACTION; -- Creating non-clustered index for FOREIGN KEY 'FK_Address_StateID_State_ID' CREATE INDEX [IX_FK_Address_StateID_State_ID] ON [Addresses] ([StateID]); GO -- Creating foreign key on [AddressID] in table 'Companies' ALTER TABLE [Companies] ADD CONSTRAINT [FK_Company_AddressID_Address_ID] FOREIGN KEY ([AddressID]) REFERENCES [Addresses] ([ID]) ON DELETE NO ACTION ON UPDATE NO ACTION; -- Creating non-clustered index for FOREIGN KEY 'FK_Company_AddressID_Address_ID' CREATE INDEX [IX_FK_Company_AddressID_Address_ID] ON [Companies] ([AddressID]); GO -- Creating foreign key on [AddressID] in table 'People' ALTER TABLE [People] ADD CONSTRAINT [FK_Person_AddressID_Address_ID] FOREIGN KEY ([AddressID]) REFERENCES [Addresses] ([ID]) ON DELETE NO ACTION ON UPDATE NO ACTION; -- Creating non-clustered index for FOREIGN KEY 'FK_Person_AddressID_Address_ID' CREATE INDEX [IX_FK_Person_AddressID_Address_ID] ON [People] ([AddressID]); GO -- Creating foreign key on [CompanyID] in table 'Employees' ALTER TABLE [Employees] ADD CONSTRAINT [FK_Employee_CompanyID_Company_ID] FOREIGN KEY ([CompanyID]) REFERENCES [Companies] ([ID]) ON DELETE NO ACTION ON UPDATE NO ACTION; -- 
Creating non-clustered index for FOREIGN KEY 'FK_Employee_CompanyID_Company_ID' CREATE INDEX [IX_FK_Employee_CompanyID_Company_ID] ON [Employees] ([CompanyID]); GO -- Creating foreign key on [BossEmployeeID] in table 'Employees' ALTER TABLE [Employees] ADD CONSTRAINT [FK_Employee_BossEmployeeID_Employee_ID] FOREIGN KEY ([BossEmployeeID]) REFERENCES [Employees] ([ID]) ON DELETE NO ACTION ON UPDATE NO ACTION; -- Creating non-clustered index for FOREIGN KEY 'FK_Employee_BossEmployeeID_Employee_ID' CREATE INDEX [IX_FK_Employee_BossEmployeeID_Employee_ID] ON [Employees] ([BossEmployeeID]); GO -- Creating foreign key on [PersonID] in table 'Employees' ALTER TABLE [Employees] ADD CONSTRAINT [FK_Employee_PersonID_Person_ID] FOREIGN KEY ([PersonID]) REFERENCES [People] ([ID]) ON DELETE NO ACTION ON UPDATE NO ACTION; -- Creating non-clustered index for FOREIGN KEY 'FK_Employee_PersonID_Person_ID' CREATE INDEX [IX_FK_Employee_PersonID_Person_ID] ON [Employees] ([PersonID]); GO -- -------------------------------------------------- -- Script has ended -- --------------------------------------------------

    Read the article

  • How to automatically fix MISSING reference in a dll when a referenced library is broken in VB6?

    - by systempuntoout
    What do you do when you break compatibility on a common library used by many other libraries? What I usually do is, for every DLL that references the broken one: check out the DLL, check out the .vbp project, open the .vbp project with the VB6 IDE, click the References button, uncheck the MISSING reference and click OK, click the References button again, re-check the reference and click OK, click Make dll, and close the project. This can be a PITA activity when you have many DLLs to recompile, and it can be error prone because you could miss some DLL (anyway, we have continuous integration that alerts us to those cases). What's your best practice for handling this scenario?

    Read the article

  • Rails uniqueness constraint and matching db unique index for null column

    - by Dave
    I have the following in my migration file def self.up create_table :payment_agreements do |t| t.boolean :automatic, :default => true, :null => false t.string :payment_trigger_on_order t.references :supplier t.references :seller t.references :product t.timestamps end end I want to ensure that if a product_id is specified it is unique but I also want to allow null so I have the following in my model: validates :product_id, :uniqueness => true, :allow_nil => true Works great but I should then add an index to the migration file add_index :payment_agreements, :product_id, :unique => true Obviously this will throw an exception when two null values are inserted for product_id. I could just simply omit the index in the migration but then there's the chance that I'll get two PaymentAgreements with the same product_id as shown here: Concurrency and integrity My question is what is the best/most common way to deal with this problem

    Read the article
