Search Results

Search found 13829 results on 554 pages for 'temporary objects'.


  • Do Not Optimize Without Measuring

    - by Alois Kraus
    Recently I had to do some performance work which included reading a lot of code. It is fascinating what ideas people come up with to solve a problem. Especially when there is no problem. When you look at other people's code you will not be able to tell whether it performs well just by reading it. You need to execute it with some sort of tracing, or even better, under a profiler. The first rule of the performance club is not to think and then optimize, but to measure, think and then optimize. The second rule is to do this in a loop, to prevent bad things from slipping into your code base for too long. If you skip the measure step for some reason and optimize directly, it is like changing the wave function in quantum mechanics. This has no observable effect in our world, since it represents only a probability distribution of all possible values. In quantum mechanics you need to let the wave function collapse to a single value. A collapsed wave function therefore has not many but one distinct value. This is what we physicists call a measurement. If you optimize your application without measuring it, you are just changing the probability distribution of your potential performance values. Which performance your application actually has is still unknown. You only know that it will be within a specific range with a certain probability. As usual, there are unlikely values within your distribution, like a startup time of 20 minutes which should happen only once in 100,000 years. 100,000 years is a very short time when the first customer tries to run your heavily distributed networking application over a slow Wi-Fi network…

    What is the point of this? Every programmer/architect has a mental performance model in his head. A model always has a set of explicit preconditions and a lot more implicit assumptions baked into it. When the model is good it will help you think of good designs, but it can also be the source of problems. In real-world systems, not all assumptions of your performance model (implicit or explicit) hold true any longer. The only way to connect your performance model to the real world is to measure it. In the Wi-Fi example the model assumed a low-latency, high-bandwidth LAN connection. When this assumption became wrong, the system's startup time changed drastically.

    Let's look at an example. Let's assume we want to cache some expensive UI resource like font objects. For this undertaking we create a cache class with the UI themes we want to support. Since fonts are expensive objects, we create them on demand the first time the theme is requested. A simple example of a theme cache might look like this:

        using System;
        using System.Collections.Generic;
        using System.Drawing;

        struct Theme
        {
            public Color Color;
            public Font Font;
        }

        static class ThemeCache
        {
            static Dictionary<string, Theme> _Cache = new Dictionary<string, Theme>
            {
                { "Default", new Theme { Color = Color.AliceBlue } },
                { "Theme12", new Theme { Color = Color.Aqua } },
            };

            public static Theme Get(string theme)
            {
                Theme cached = _Cache[theme];
                if (cached.Font == null)
                {
                    Console.WriteLine("Creating new font");
                    cached.Font = new Font("Arial", 8);
                }
                return cached;
            }
        }

        class Program
        {
            static void Main(string[] args)
            {
                Theme item = ThemeCache.Get("Theme12");
                item = ThemeCache.Get("Theme12");
            }
        }

    This cache creates font objects only once, since on first retrieval of the Theme object the font is added to the Theme object. When we let the application run it should print "Creating new font" only once. Right? Wrong!
    The vigilant readers have spotted the issue already. The creator of this cache class wanted to get maximum performance, so he decided that the Theme object should be a value type (struct) to not put too much pressure on the garbage collector. The code

        Theme cached = _Cache[theme];
        if (cached.Font == null)
        {
            Console.WriteLine("Creating new font");
            cached.Font = new Font("Arial", 8);
        }

    works with a copy of the value stored in the dictionary. This means we mutate a copy of the Theme object and return it to our caller. But the original Theme object in the dictionary will always have null in its Font field! The solution is to change the declaration of struct Theme to class Theme, or to update the theme object in the dictionary. Our cache, as it currently stands, is actually a non-caching cache. The funny thing is that I found this out with a profiler, by looking at which objects were finalized: I found way too many font objects being finalized. After a bit of debugging I found that the allocation source for the Font objects was this cache. Since this cache had been there for years, it means the cache was never needed: I found no perf issue due to the creation of font objects, the cache was never profiled to see whether it brought any performance gain, and to make the cache beneficial it would need to be accessed much more often. That was the story of the non-caching cache. Next time I will write something about measuring.
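    For illustration, a minimal sketch of the second fix mentioned above (writing the mutated copy back into the dictionary, so Theme can stay a struct) could look like this; it reuses the names from the example:

        public static Theme Get(string theme)
        {
            Theme cached = _Cache[theme];
            if (cached.Font == null)
            {
                Console.WriteLine("Creating new font");
                cached.Font = new Font("Arial", 8);
                // Write the mutated copy back; without this line the dictionary
                // keeps a Theme whose Font field is still null.
                _Cache[theme] = cached;
            }
            return cached;
        }

    With this one-line change the program really does print "Creating new font" only once, at the cost of one extra dictionary write on the first access.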

    Read the article

  • Help with force close occurrences in my app

    - by Ken
    This is the last issue with this app: periodic force close situations. I think something should be on another thread but I'm not sure what. Anyway, I can always count on a freeze on first install. If I wait, eventually (maybe 10 seconds, maybe more) the app comes around. Here is an excerpt from logcat; the first three lines occur after the full layout is displayed and I attempt to touch a [game] 'peg', which should spawn a sprite, but the freeze occurs there. Can anybody tell what the issue might be?

        I/System.out( 279): TouchDown (17.0,106.0)
        I/System.out( 279): checking (17,106
        I/System.out( 279): hit for bounds Rect(3, 98 - 32, 130)
        [FREEZE BEGINS]
        W/webcore ( 279): Can't get the viewWidth after the first layout
        W/WindowManager( 60): Key dispatching timed out sending to com.live.brainbuilderfree/com.live.brainbuilderfree.BrainBuilderFree
        W/WindowManager( 60): Previous dispatch state: null
        W/WindowManager( 60): Current dispatch state: {{null to Window{43fd87a0 com.live.brainbuilderfree/com.live.brainbuilderfree.BrainBuilderFree paused=false} @ 1295232880017 lw=Window{43fd87a0 com.live.brainbuilderfree/com.live.brainbuilderfree.BrainBuilderFree paused=false} lb=android.os.BinderProxy@440523b8 fin=false gfw=true ed=true tts=0 wf=false fp=false mcf=Window{43fd87a0 com.live.brainbuilderfree/com.live.brainbuilderfree.BrainBuilderFree paused=false}}}
        I/Process ( 60): Sending signal. PID: 279 SIG: 3
        I/dalvikvm( 279): threadid=3: reacting to signal 3
        D/dalvikvm( 124): GC_EXPLICIT freed 1754 objects / 106104 bytes in 7365ms
        I/Process ( 60): Sending signal. PID: 60 SIG: 3
        I/dalvikvm( 60): threadid=3: reacting to signal 3
        I/dalvikvm( 60): Wrote stack traces to '/data/anr/traces.txt'
        I/Process ( 60): Sending signal. PID: 263 SIG: 3
        I/dalvikvm( 263): threadid=3: reacting to signal 3
        I/dalvikvm( 279): Wrote stack traces to '/data/anr/traces.txt'
        I/Process ( 60): Sending signal. PID: 117 SIG: 3
        I/dalvikvm( 117): threadid=3: reacting to signal 3
        I/dalvikvm( 117): Wrote stack traces to '/data/anr/traces.txt'
        I/Process ( 60): Sending signal. PID: 254 SIG: 3
        I/Process ( 60): Sending signal. PID: 121 SIG: 3
        I/dalvikvm( 121): threadid=3: reacting to signal 3
        D/AudioSink( 34): bufferCount (4) is too small and increased to 12
        I/System.out( 279): making white sprite
        I/Process ( 60): Sending signal. PID: 186 SIG: 3
        I/Process ( 60): Sending signal. PID: 232 SIG: 3
        D/MillennialMediaAdSDK( 279): size: 1
        D/MillennialMediaAdSDK( 279): num: 1
        D/AdWhirl SDK( 279): Millennial success
        D/AdWhirl SDK( 279): Will call rotateAd() in 120 seconds
        I/dalvikvm( 232): threadid=3: reacting to signal 3
        I/dalvikvm( 121): Wrote stack traces to '/data/anr/traces.txt'
        I/Process ( 60): Sending signal. PID: 222 SIG: 3
        I/MillennialMediaAdSDK( 279): Millennial ad return success
        D/MillennialMediaAdSDK( 279): View height: 0
        D/MillennialMediaAdSDK( 279): nextUrl: [deleted]
        I/Process ( 60): Sending signal. PID: 239 SIG: 3
        I/Process ( 60): Sending signal. PID: 213 SIG: 3
        D/AdWhirl SDK( 279): Added subview
        D/AdWhirl SDK( 279): Pinging URL: [deleted]
        I/Process ( 60): Sending signal. PID: 197 SIG: 3
        I/dalvikvm( 197): threadid=3: reacting to signal 3
        I/Process ( 60): Sending signal. PID: 164 SIG: 3
        I/dalvikvm( 164): threadid=3: reacting to signal 3
        D/dalvikvm( 279): GC_FOR_MALLOC freed 7735 objects / 639688 bytes in 217ms
        I/Process ( 60): Sending signal. PID: 124 SIG: 3
        I/dalvikvm( 124): threadid=3: reacting to signal 3
        I/Process ( 60): Sending signal. PID: 158 SIG: 3
        I/dalvikvm( 158): threadid=3: reacting to signal 3
        I/Process ( 60): Sending signal. PID: 127 SIG: 3
        E/ActivityManager( 60): ANR in com.live.brainbuilderfree (com.live.brainbuilderfree/.BrainBuilderFree)
        E/ActivityManager( 60): Reason: keyDispatchingTimedOut
        E/ActivityManager( 60): Load: 3.46 / 1.69 / 0.65
        E/ActivityManager( 60): CPU usage from 28095ms to 140ms ago:
        E/ActivityManager( 60): system_server: 30% = 25% user + 4% kernel / faults: 3119 minor 66 major
        E/ActivityManager( 60): mediaserver: 11% = 7% user + 4% kernel / faults: 746 minor 17 major
        E/ActivityManager( 60): com.svox.pico: 1% = 0% user + 1% kernel / faults: 2833 minor 8 major
        E/ActivityManager( 60): d.process.acore: 1% = 0% user + 0% kernel / faults: 1146 minor 36 major
        E/ActivityManager( 60): ndroid.launcher: 1% = 0% user + 0% kernel / faults: 852 minor 6 major
        E/ActivityManager( 60): m.android.phone: 0% = 0% user + 0% kernel / faults: 621 minor 7 major
        E/ActivityManager( 60): kswapd0: 0% = 0% user + 0% kernel
        E/ActivityManager( 60): ronsoft.openwnn: 0% = 0% user + 0% kernel / faults: 337 minor 2 major
        E/ActivityManager( 60): adbd: 0% = 0% user + 0% kernel / faults: 3 minor
        E/ActivityManager( 60): zygote: 0% = 0% user + 0% kernel / faults: 169 minor
        E/ActivityManager( 60): events/0: 0% = 0% user + 0% kernel
        E/ActivityManager( 60): rild: 0% = 0% user + 0% kernel / faults: 103 minor 3 major
        E/ActivityManager( 60): pdflush: 0% = 0% user + 0% kernel
        E/ActivityManager( 60): .quicksearchbox: 0% = 0% user + 0% kernel / faults: 61 minor
        E/ActivityManager( 60): id.defcontainer: 0% = 0% user + 0% kernel / faults: 12 minor
        E/ActivityManager( 60): +rainbuilderfree: 0% = 0% user + 0% kernel
        E/ActivityManager( 60): +sh: 0% = 0% user + 0% kernel
        E/ActivityManager( 60): +app_process: 0% = 0% user + 0% kernel
        E/ActivityManager( 60): TOTAL: 100% = 76% user + 21% kernel + 2% iowait + 0% irq + 0% softirq
        I/dalvikvm( 127): threadid=3: reacting to signal 3
        I/dalvikvm( 186): threadid=3: reacting to signal 3
        D/dalvikvm( 60): GC_FOR_MALLOC freed 3747 objects / 228920 bytes in 609ms
        I/dalvikvm-heap( 60): Grow heap (frag case) to 4.759MB for 36896-byte allocation
        I/dalvikvm( 239): threadid=3: reacting to signal 3
        D/dalvikvm( 60): GC_FOR_MALLOC freed 226 objects / 9952 bytes in 546ms
        I/dalvikvm( 213): threadid=3: reacting to signal 3
        D/dalvikvm( 60): GC_FOR_MALLOC freed 105 objects / 5816 bytes in 492ms
        I/dalvikvm-heap( 60): Grow heap (frag case) to 4.815MB for 49188-byte allocation
        I/dalvikvm( 222): threadid=3: reacting to signal 3
        D/dalvikvm( 60): GC_FOR_MALLOC freed 77 objects / 5232 bytes in 546ms
        I/dalvikvm( 254): threadid=3: reacting to signal 3
        D/dalvikvm( 60): GC_FOR_MALLOC freed 105 objects / 55856 bytes in 521ms
        I/dalvikvm-heap( 60): Grow heap (frag case) to 4.876MB for 98360-byte allocation
        D/dalvikvm( 60): GC_FOR_MALLOC freed 58 objects / 3632 bytes in 340ms
        D/dalvikvm( 60): GC_FOR_MALLOC freed 1093 objects / 185256 bytes in 572ms
        W/WindowManager( 60): Continuing to wait for key to be dispatched
        I/System.out( 279): TouchMove (117.0,124.0)
        I/System.out( 279): TouchUP (117.0,124.0)
        D/dalvikvm( 60): GC_FOR_MALLOC freed 141 objects / 108328 bytes in 564ms
        I/ARMAssembler( 60): generated scanline__00000077:03515104_00000000_00000000 [ 33 ipp] (47 ins) at [0x313d78:0x313e34] in 11621593 ns
        W/InputManagerService( 60): Window already focused, ignoring focus gain of: com.android.internal.view.IInputMethodClient$Stub$Proxy@43f66a10
        I/dalvikvm( 239): Wrote stack traces to '/data/anr/traces.txt'
        I/dalvikvm( 263): Wrote stack traces to '/data/anr/traces.txt'
        etc...

    Read the article

  • Django unable to update model

    - by user292652
    I have the following function to override the default save method in my Match model:

        def save(self, *args, **kwargs):
            if self.Match_Status == "F":
                Team.objects.filter(pk=self.Team_one.id).update(Played=F('Played')+1)
                Team.objects.filter(pk=self.Team_two.id).update(Played=F('Played')+1)
                if self.Winner != "":
                    Team.objects.filter(pk=self.Winner.id).update(Win=F('Win')+1, Points=F('Points')+3)
                else:
                    return
            if self.Match_Status == "D":
                Team.objects.filter(pk=self.Team_one.id).update(Played=F('Played')+1, Draw=F('Draw')+1, Points=F('Points')+1)
                Team.objects.filter(pk=self.Team_two.id).update(Played=F('Played')+1, Draw=F('Draw')+1, Points=F('Points')+1)
            super(Match, self).save(*args, **kwargs)

    I am able to save the Match model just fine, but the Team model does not seem to be updating at all, and no error is being thrown. Am I missing something here?

    Read the article

  • Eager loading vs. many queries with PHP, SQLite

    - by Mike
    I have an application that has an n+1 query problem, but when I implemented a way to load the data eagerly, I found absolutely no performance gain. I do use an identity map, so objects are only created once. Here's a benchmark of ~3000 objects:

        first query + first object creation: 0.00636100769043 sec. memory usage: 190008 bytes
        iterate through all objects (queries + object creation): 1.98003697395 sec. memory usage: 7717116 bytes

    And here's one when I use eager loading:

        query: 0.0881109237671 sec. memory usage: 6948004 bytes
        object creation: 1.91053009033 sec. memory usage: 12650368 bytes
        iterate through all objects: 1.96605396271 sec. memory usage: 12686836 bytes

    So my questions are:

    1. Is SQLite just magically lightning fast when it comes to small queries? (I'm used to working with MySQL.)
    2. Does this just seem wrong to anyone? Shouldn't eager loading have given much better performance?

    Read the article

  • Circular Dependency Solution

    - by gfoley
    Our current project has run into a circular dependency issue. Our business logic assembly is using classes and static methods from our SharedLibrary assembly. The SharedLibrary contains a whole bunch of helper functions, such as a SQL reader class, enumerators, global variables, error handling, logging and validation. The SharedLibrary needs access to the business objects, but the business objects need access to SharedLibrary. The old developers solved this obvious code smell by replicating the functionality of the business objects in the shared library (very anti-DRY). I've spent a day now trying to read about my options to solve this, but I'm hitting a dead end. I'm open to the idea of an architecture redesign, but only as a last resort. So how can I have a shared helper library which can access the business objects, with the business objects still accessing the shared helper library?
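    One textbook way out, sketched below with invented names, is dependency inversion: move the abstractions the shared library needs into a third, dependency-free contracts assembly that both projects reference, so neither assembly references the other directly. This is only an illustration of the pattern, not the original project's code:

        // Contracts assembly: referenced by both BusinessLogic and SharedLibrary.
        public interface IAuditable
        {
            string EntityName { get; }
            string Describe();
        }

        // BusinessLogic assembly: business objects implement the contract and may
        // still call SharedLibrary helpers freely.
        public class Order : IAuditable
        {
            public string EntityName { get { return "Order"; } }
            public string Describe() { return "Order object state"; }
        }

        // SharedLibrary assembly: references only Contracts, so the cycle is broken.
        public static class AuditLogger
        {
            public static void Log(IAuditable item)
            {
                System.Console.WriteLine("{0}: {1}", item.EntityName, item.Describe());
            }
        }

    The helpers then work against interfaces rather than concrete business classes, which also removes the need for the duplicated business-object code in the shared library.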

    Read the article

  • Best way to access a database from Android

    - by Brandon Delany
    I am working on an Android app and I have a dilemma. I have a list of objects, and I have to update each of these objects against a database. I can see two methods:

    Method 1: Loop through the objects. For each object, connect to the server, update it, and then move on to the next object, and so forth.

    Method 2: Store the objects in a list, send the whole list to the server, update them on the server side, then return a list of updated objects.

    My questions are: Which method is faster? Which method is easier on the phone's battery? By the way, Method 1 is easier for me to code :). Thank you.

    Read the article

  • Changing coding style due to Android GC performance, how far is too far?

    - by Benju
    I keep hearing that Android applications should try to limit the number of objects created in order to reduce the workload on the garbage collector. It makes sense that you may not want to create massive numbers of objects to track on a limited memory footprint; for example, on a traditional server application, creating 100,000 objects within a few seconds would not be unheard of. The problem is, how far should I take this? I've seen tons of examples of Android applications relying on static state in order to supposedly "speed things up". Does increasing the number of instances that need to be garbage collected from dozens to hundreds really make that big of a difference? I can imagine changing my coding style if I were creating hundreds of thousands of objects like you might on a full-blown Java EE server, but relying on a bunch of static state to (supposedly) reduce the number of objects to be garbage collected seems odd. How much is it really necessary to change your coding style in order to create performant Android apps?

    Read the article

  • ADF Reusable Artefacts

    - by Arda Eralp
    Primary reusable ADF Business Components:

      • Entity Objects (EOs)
      • View Objects (VOs)
      • Application Modules (AMs)
      • Framework Extension Classes

    Primary reusable ADF Controller:

      • Bounded Task Flows (BTFs)
      • Task Flow Templates

    Primary reusable ADF Faces:

      • Page Templates
      • Skins
      • Declarative Components
      • Utility Classes

    Certain components will often be used more than once. Whether the reuse happens within the same application, or across different applications, it is often advantageous to package these reusable components into a library that can be shared between different developers, across different teams, and even across departments within an organization. In the world of Java object-oriented programming, reusing classes and objects is just standard procedure. With the introduction of the model-view-controller (MVC) architecture, applications can be further modularized into separate model, view, and controller layers. By separating the data (model and business services layers) from the presentation (view and controller layers), you ensure that changes to any one layer do not affect the integrity of the other layers. You can change business logic without having to change the UI, or redesign the web pages or front end without having to recode domain logic. Oracle ADF and JDeveloper support the MVC design pattern. When you create an application in JDeveloper, you can choose from many application templates that automatically set up data model and user interface projects. Because the different MVC layers are decoupled from each other, development can proceed on different projects in parallel and with a certain amount of independence. ADF Library further extends this modularity of design by providing a convenient and practical way to create, deploy, and reuse high-level components. When you first design your application, you design it with component reusability in mind. If you created components that can be reused, you can package them into JAR files and add them to a reusable component repository. If you need a component, you may look in the repository for it and then add it into your project or application. For example, you can create an application module for a domain and package it to be used as the data model project in several different applications. Or, if your application will be consuming components, you may be able to load a page template component from a repository of ADF Library JARs to create common look-and-feel pages. Then you can put your page flow together by stringing together several task flow components pulled from the library. An ADF Library JAR contains ADF components and does not, and cannot, contain other JARs. It should not be confused with the JDeveloper library, Java EE library, or Oracle WebLogic shared library.

    Reusable components:

      • Data Control: Any data control can be packaged into an ADF Library JAR. Some of the data controls supported by Oracle ADF include application modules, Enterprise JavaBeans, web services, URL services, JavaBeans, and placeholder data controls.
      • Application Module: When you are using ADF Business Components and you generate an application module, an associated application module data control is also generated. When you package an application module data control, you also package up the ADF Business Components associated with that application module. The relevant entity objects, view objects, and associations will be a part of the ADF Library JAR and available for reuse.
      • Business Components: Business components are the entity objects, view objects, and associations used in the ADF Business Components data model project. You can package business components by themselves or together with an application module.
      • Task Flows & Task Flow Templates: Task flows can be packaged into an ADF Library JAR for reuse. If you drop a bounded task flow that uses page fragments, JDeveloper adds a region to the page and binds it to the dropped task flow. ADF bounded task flows built using pages can be dropped onto pages. The drop will create a link to call the bounded task flow. A task flow call activity and control flow will automatically be added to the task flow, with the view activity referencing the page. If there is more than one existing task flow with a view activity referencing the page, it will prompt you to select the one to automatically add a task flow call activity and control flow. If an ADF task flow template was created in the same project as the task flow, the ADF task flow template will be included in the ADF Library JAR and will be reusable.
      • Page Templates: You can package a page template and its artifacts into an ADF Library JAR. If the template uses image files and they are included in a directory within your project, these files will also be available for the template during reuse.
      • Declarative Components: You can create declarative components and package them for reuse. The tag libraries associated with the component will be included and loaded into the consuming project.

    You can also package up projects that have several different reusable components if you expect that more than one component will be consumed. For example, you can create a project that has both an application module and a bounded task flow. When this ADF Library JAR file is consumed, the application will have both the application module and the task flow available for use. You can package multiple components into one JAR file, or you can package a single component into a JAR file. Oracle ADF and JDeveloper give you the option and flexibility to create reusable components that best suit you and your organization. You create a reusable component by using JDeveloper to package and deploy the project that contains the components into an ADF Library JAR file. You use the components by adding that JAR to the consuming project. At design time, the JAR is added to the consuming project's class path and so is available for reuse. At runtime, the reused component runs from the JAR file by reference.

    Read the article

  • WMemoryProfiler is Released

    - by Alois Kraus
    What is it? WMemoryProfiler is a managed profiling API to aid integration testing. This free library can get managed heap statistics and memory usage for your own process (remember: testing) and for other processes as well. The best thing is that it works from .NET 2.0 up to .NET 4.5, in x86 and x64. To make it more interesting, it can attach to any running .NET process. The reason I mention this is that commercial profilers support this functionality only in their professional editions, and normally only for .NET 4.0 and later, since only since then does the profiling API support attaching to a running process. This thing differs in many aspects from "normal" profilers, because while profiling yourself you can get all objects from all managed heaps back as an object array. If you ever wanted to change the state of an object which exists only as a method local in another thread, you can get your hands on it now…

    Enough theory. Show me some code:

        /// <summary>
        /// Show feature to not only get statistics out of a process but also the newly allocated
        /// instances since the last call to MarkCurrentObjects.
        /// GetNewObjects does return the newly allocated objects as object array.
        /// </summary>
        static void InstanceTracking()
        {
            using (var dumper = new MemoryDumper()) // if you have problems use new MemoryDumper(true, true) to see the debugger windows
            {
                dumper.MarkCurrentObjects();
                Allocate();
                ILookup<Type, object> newObjects = dumper.GetNewObjects()
                                                         .ToLookup(x => x.GetType());
                Console.WriteLine("New Strings:");
                foreach (var newStr in newObjects[typeof(string)])
                {
                    Console.WriteLine("Str: {0}", newStr);
                }
            }
        }

    …

        New Strings:
        Str: qqd
        Str: String data:
        Str: String data: 0
        Str: String data: 1
        …

    This is really hot stuff. Not only can you get heap statistics, but you can directly examine the new objects and make queries upon them. When I find more time, I will reconstruct the object root graph from it in my own process. Is this cool or what? You can also peek into the finalization queue to check whether you accidentally forgot to dispose a whole bunch of objects…

        /// <summary>
        /// .NET 4.0 or above only. Get all finalizable objects which are ready for finalization
        /// and have no other object roots anymore.
        /// </summary>
        static void NotYetFinalizedObjects()
        {
            using (var dumper = new MemoryDumper())
            {
                object[] finalizable = dumper.GetObjectsReadyForFinalization();
                Console.WriteLine("Currently {0} objects of types {1} are ready for finalization. Consider disposing them before.",
                    finalizable.Length,
                    String.Join(",", finalizable.ToLookup(x => x.GetType())
                                                .Select(x => x.Key.Name)));
            }
        }

    How does it work? The W of WMemoryProfiler is a good hint. It employs Windbg and SOS dll to do the heavy lifting and concentrates on an easy-to-use API which hides Windbg completely. If you do not want to see Windbg, you will never see it. In my experience the most complex thing is actually to download Windbg from the Windows 8 Standalone SDK. This is described in the Readme, and in much greater detail in the exception you are greeted with if it is missing, so I will not go into this here.

    What next? Depending on the feedback I get, I can imagine some features which might be useful as well:

      • Calculate first-order GC roots from the actual object graph
      • Identify global statics in types in the object graph
      • Support reading out the finalization queue of .NET 2.0 as well
      • Support memory dump analysis (again a feature supported only by commercial profilers in their professional editions, if it is supported at all)
      • Deserialize objects from a memory dump back into a live process (this would need some more investigation, but it is doable)

    The last item needs some explanation. Why on earth would you want to do that? The basic idea is to store in your live process some logging/tracing data which can become quite big, but since it is never written out it is very fast to generate. When your process crashes with a memory dump, you could transfer this data structure back into a live viewer, which can then nicely display your program state at the point it crashed. This is an advanced troubleshooting technique I have not seen anywhere yet, but it could be quite useful. You can have a look at the current feature list of WMemoryProfiler with some examples.

    How to get started? First I would download the released source package (it is tiny) and compile the complete project. Then you can compile the Example project (it has this name) and uncomment in the main method the scenario you want to check out. If you are greeted with an exception, it is time to install the Windows 8 Standalone SDK, which is described in great detail in the exception text. That's it for the first round. I have seen something more limited in the Java world some years ago (now I cannot find the link anymore), but anyway: now we have something much better.

    Read the article

  • Forcing a method to be non-transactional in JPA (Eclipselink)

    - by rhinds
    Hi, I am developing an application using EclipseLink, and as part of the app I need to be able to manipulate some of the objects, which involves changing data without it being persisted to the database (I'm merging/changing objects for some batch generation processes). I am reluctant to change the data in the entity objects: even though I have not marked the methods as @Transactional, there is a risk that such a method could in the future be inadvertently called from within a transactional method, and these changes could be persisted. So my question is, is there any way to get around this? For example, to force a method to always be non-transactional regardless, or to terminate any transactionality as soon as the method is started, etc. I know there is a .detach() method that can detach the objects from the entity manager; however, there are many objects, and this seems like a potentially error-prone fail-safe for my code.

    Read the article

  • 2 (or more) ComboBoxes dependent on each other

    - by Mcad001
    Hi, I have an Organisation entity and a Region entity. An object of type Organisation can have one or more Region objects connected to it; thus I have a foreign key in my Region entity to the Organisation entity. The Organisation and Region objects are pulled from my database using WCF RIA and Entity Framework. I want to put the Organisation objects in one ComboBox and the Region objects in another ComboBox, and when an organisation is selected, have the Region ComboBox automatically show only the regions that are connected to the selected organisation. This should be pretty basic, but the way I've designed it right now it doesn't work at all. So, any hint on how I can achieve this? A simple code example is much appreciated! (I'm using SL4, WCF RIA, MVVM)
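    For what it's worth, one common MVVM-style shape for this (a sketch only, with invented names, assuming the Region entity exposes its Organisation foreign key) is to refilter the second list in the setter of the first selection:

        using System.Collections.Generic;
        using System.Collections.ObjectModel;
        using System.ComponentModel;

        public class Organisation { public int Id { get; set; } public string Name { get; set; } }
        public class Region { public int Id { get; set; } public int OrganisationId { get; set; } public string Name { get; set; } }

        public class SelectionViewModel : INotifyPropertyChanged
        {
            public ObservableCollection<Organisation> Organisations { get; private set; }
            public ObservableCollection<Region> FilteredRegions { get; private set; }

            private readonly IList<Region> _allRegions;
            private Organisation _selectedOrganisation;

            public SelectionViewModel(IEnumerable<Organisation> organisations, IEnumerable<Region> regions)
            {
                Organisations = new ObservableCollection<Organisation>(organisations);
                FilteredRegions = new ObservableCollection<Region>();
                _allRegions = new List<Region>(regions);
            }

            // The Organisation ComboBox binds SelectedItem here; setting it refills
            // the collection the Region ComboBox is bound to.
            public Organisation SelectedOrganisation
            {
                get { return _selectedOrganisation; }
                set
                {
                    _selectedOrganisation = value;
                    OnPropertyChanged("SelectedOrganisation");
                    FilteredRegions.Clear();
                    if (value == null) return;
                    foreach (Region region in _allRegions)
                    {
                        if (region.OrganisationId == value.Id) // assumed FK property
                            FilteredRegions.Add(region);
                    }
                }
            }

            public event PropertyChangedEventHandler PropertyChanged;
            private void OnPropertyChanged(string name)
            {
                var handler = PropertyChanged;
                if (handler != null) handler(this, new PropertyChangedEventArgs(name));
            }
        }

    The two ComboBoxes then bind their ItemsSource to Organisations and FilteredRegions respectively, and the SelectedItem of the first to SelectedOrganisation.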

    Read the article

  • How to make my save view better in Django

    - by user558251
    Hi guys, sorry for this post, but I need help with my application: I need to optimize my view. I have 5 models. How can I do this?

        def save(request):
            # get the request.POST in content
            if request.POST:
                content = request.POST
                dicionario = {}
                # create a dict to get the values in content
                for key, value in content.items():
                    # get my fk Course.objects
                    if key == 'curso':
                        busca_curso = Curso.objects.get(id=value)
                        dicionario.update({key: busca_curso})
                    else:
                        dicionario.update({key: value})
                # create the new teacher
                Professor.objects.create(**dicionario)

    My question is: how can I write this function in a generic way? Can I pass a variable in a %s to create and get? Like this:

        foo = "Teacher"
        bar = "Course"

        def save(request, bar, foo):
            if request.POST:
                ...
                if key == 'course':
                    get_course = (%s.objects.get=id=value) % bar
                ...
                (%s.objects.create(**dict)) % foo  # ???

    I tried to do this in my view but it doesn't work =/. Can somebody help me make this work? Thanks

    Read the article

  • a couple of Makefile issues

    - by user1623249
    I've got this Makefile:

        CFLAGS = -c -Wall
        CC = g++
        EXEC = main
        SOURCES = main.cpp listpath.cpp Parser.cpp
        OBJECTS = $(SOURCES: .cpp=.o)
        EXECUTABLE = tp
        DIR_SRC = /src/
        DIR_OBJ = /obj/

        all: $(SOURCES) $(OBJECTS)

        $(EXECUTABLE): $(OBJECTS)
        	$(CC) $(CFLAGS) $(OBJECTS) -o $@

        .cpp.o:
        	$(CC) $(CFLAGS) $< -o $@

        clean:
        	rm $(OBJECTS) $(EXECUTABLE)

    Note this:

      • I'm in the directory "." which contains the Makefile
      • The folder "./src" EXISTS, and has all the .h and .cpp files
      • The folder "./obj" doesn't exist; I want the Makefile to create it and put all the .o files there

    The error I get is:

        No rules to build "main.cpp", necessary for "all". Stopping.

    Help!

    Read the article

  • How to model a relationship that NHibernate (or Hibernate) doesn’t easily support

    - by MylesRip
    I have a situation in which the ideal relationship, I believe, would involve value object inheritance. This is unfortunately not supported in NHibernate, so any solution I come up with will be less than perfect. Let's say that:

      • "Item" entities have a "Location" that can be in one of multiple different formats. These formats are completely different, with no overlapping fields. We will deal with each Location in the format that is provided in the data, with no attempt to convert from one format to another. Each Item has exactly one Location.
      • "SpecialItem" is a subtype of Item that is unique in that it has exactly two Locations.
      • "Group" entities aggregate Items.
      • "LocationGroup" is a subtype of Group. LocationGroup also has a single Location that can be in any of the formats described above.
      • Although I'm interested in Items by Group, I'm also interested in being able to find all Items with the same Location, regardless of which Group they are in.

    I apologize for the number of stipulations listed above, but I'm afraid that simplifying it any further wouldn't really reflect the difficulties of the situation. Here is how the above could be diagrammed:

    Mapping Dilemma Diagram: (http://www.freeimagehosting.net/uploads/592ad48b1a.jpg) (I tried placing the diagram inline, but Stack Overflow won't allow that until I have accumulated more points. I understand the reasoning behind it, but it is a bit inconvenient for now. Apparently I can't have multiple links either. :-()

    Analyzing the above, I make the following observations:

      • I treat Locations polymorphically, referring to the supertype rather than the subtype.
      • Logically, Locations should be "value objects" rather than entities, since it is meaningless to differentiate between two Location objects that have all the same values. Thus equality between Locations should be based on field comparisons, not identifiers. Also, value objects should be immutable, and shared references should not be allowed.
      • Using NHibernate (or Hibernate) one would typically map value objects using the "component" keyword, which causes the fields of the class to be mapped directly into the database table that represents the containing class. Put another way, there would not be a separate "Locations" table in the database (and Locations would therefore have no identifiers).
      • NHibernate (or Hibernate) do not currently support inheritance for value objects.

    My choices as I see them are:

    1. Ignore the fact that Locations should be value objects and map them as entities. This would take care of the inheritance mapping issues, since NHibernate supports entity inheritance. The downside is that I then have to deal with aliasing issues, meaning that if multiple objects share a reference to the same Location, then changing values on one object's Location would change the Location of the other objects sharing that reference. I want to avoid this if possible. Another downside is that entities are typically compared by their IDs. This would mean that two Location objects would be considered not equal even if the values of all their fields are the same. This would be invalid and unacceptable from the business perspective.

    2. Flatten Locations into a single class so that there are no longer inheritance relationships for Locations. This would allow Locations to be treated as value objects, which could easily be handled by using "component" mapping in NHibernate. The downside in this case would be that the domain model becomes weaker, more fragile and less maintainable.

    3. Do some "creative" mapping in the hbm files in order to force Location fields to be mapped into the containing entities' tables without using the "component" keyword. This approach is described by Colin Jack here. My situation is more complicated than the one he describes due to the fact that SpecialItem has a second Location, and the fact that a different entity, LocationGroup, also has Locations. I could probably get it to work, but the mappings would be non-intuitive and therefore hard to understand and maintain by other developers in the future. Also, I suspect that these tricky mappings would likely not be possible in Fluent NHibernate, so I would lose the advantages of using that tool, at least in this situation.

    Surely others out there have run into similar situations. I'm hoping someone who has "been there, done that" can share some wisdom. :-) So here's the question… Which approach should be preferred in this situation? Why?
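    To make option 2 concrete, here is a minimal sketch of a flattened Location value object; the two formats and all field names are invented for illustration, since the post doesn't give them:

        // Option 2 sketch: one class carries a format discriminator plus the union
        // of all format fields; only the fields matching Format are meaningful.
        public enum LocationFormat { Coordinates, PostalAddress }

        public class Location
        {
            public LocationFormat Format { get; private set; }
            public double? Latitude { get; private set; }   // Coordinates format only
            public double? Longitude { get; private set; }  // Coordinates format only
            public string Street { get; private set; }      // PostalAddress format only
            public string City { get; private set; }        // PostalAddress format only

            public Location(LocationFormat format, double? latitude, double? longitude,
                            string street, string city)
            {
                Format = format;
                Latitude = latitude;
                Longitude = longitude;
                Street = street;
                City = city;
            }

            // Value-object semantics: equality by fields, never by identifier.
            public override bool Equals(object obj)
            {
                var other = obj as Location;
                if (other == null) return false;
                return Format == other.Format
                    && Latitude == other.Latitude
                    && Longitude == other.Longitude
                    && Street == other.Street
                    && City == other.City;
            }

            public override int GetHashCode()
            {
                int hash = Format.GetHashCode();
                if (Latitude.HasValue) hash ^= Latitude.Value.GetHashCode();
                if (Longitude.HasValue) hash ^= Longitude.Value.GetHashCode();
                if (Street != null) hash ^= Street.GetHashCode();
                if (City != null) hash ^= City.GetHashCode();
                return hash;
            }
        }

    Each containing class would then map this with a component element, and the weakness the poster worries about is visible in the type system: nothing stops a Coordinates-format Location from also carrying a Street.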

    Read the article

  • Domain entities into (ASP.NET) Session, or better some kind of DTOs?

    - by Robert
    Currently we put domain objects into our ASP.NET sessions. Now we're considering moving from InProc sessions to a state server. This requires that all objects inside the session are serializable. Instead of annotating all objects with the [Serializable] attribute, we thought about creating custom session objects (DTO session objects?) which contain only the information we need.

    CONS:

      • Entities must be reloaded, which requires additional DB round-trips

    PROS:

      • Session state is smaller
      • Session information is more specific (could be a CON)
      • No unneeded annotation of domain entities

    What do you think? Should we use some kind of DTOs to store inside the session, or should we stick with good old entities?
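    A minimal sketch of the DTO idea, with type and property names invented for illustration: only the small DTO needs the [Serializable] attribute, and the ID it carries is what drives the extra round-trip listed under CONS:

        using System;

        // Stand-in domain entity for the sketch.
        public class Customer
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        // Hypothetical session DTO: only the fields the UI needs between requests.
        [Serializable]
        public class CustomerSessionDto
        {
            public int CustomerId { get; set; }
            public string DisplayName { get; set; }

            public static CustomerSessionDto FromEntity(Customer entity)
            {
                return new CustomerSessionDto
                {
                    CustomerId = entity.Id,
                    DisplayName = entity.Name
                };
            }
        }

        // Usage sketch:
        //   Session["customer"] = CustomerSessionDto.FromEntity(customer);
        //   // next request: reload the full entity via CustomerId; this is the
        //   // additional DB round-trip noted under CONS above.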

    Read the article

  • Is this a php memory leak?

    - by mseifert
    I have memory_get_usage() in the footer of my page, and with each refresh of the page I watch it increase by about 100k each time. My page load creates many objects and destroys them when done. My parent objects each have __destruct(), which uses unset() on all child objects. Child objects with a reference back to the parent have __destruct() to unset() these references. Inserting memory_get_usage() before and after processing different parts of my page only tells me how much of the total usage was added due to that part of the script. How do I go about determining what memory is lost and not recycled for garbage collection after the page finishes loading? I have one global $_SESSION var containing objects storing user info, but I have verified using strlen(serialize($object)) that this object is not growing in size. I presume that what I am seeing is a memory leak and that PHP garbage collection should be in effect after the script ends. Any ideas how to debug this?

    Read the article

  • Getting a Database into Source Control

    - by Grant Fritchey
    For any number of reasons, from simple auditing, to change tracking, to automated deployment, to integration with application development processes, you're going to want to place your database into source control. Using Red Gate SQL Source Control, this process is extremely simple. SQL Source Control works within your SQL Server Management Studio (SSMS) interface, which means you can work with your databases in any way that you're used to working with them. If you prefer scripts to using the GUI, not a problem. If you prefer using the GUI to having to learn T-SQL, again, that's fine. After installing SQL Source Control, it appears as a direct piece of the SSMS environment. The key point initially is that I currently don't have a database selected. You can even see that in the SQL Source Control window where it shows, in red, "No database selected – select a database in Object Explorer." If I expand my Databases list in the Object Explorer, I'll immediately be able to see which databases have been integrated with source control and which have not; there are visible differences between the databases. To add a database to source control, I first have to select it. For this example, I'm going to add the AdventureWorks2012 database to an instance of the SVN source control software (I'm using uberSVN). When I click on the AdventureWorks2012 database, the SQL Source Control screen changes, and I need to click on the "Link database to source control" text, which opens a window for connecting this database to the source control system of my choice. You can pick from the default source control systems on the left, or define one of your own. I also have to provide the connection string for the location within the source control system where I'll be storing my database code. I set these up in advance. You'll need two: one for the main set of scripts, and one for special scripts called Migrations that deal with different kinds of changes between versions of the code. Migrations help you solve problems like having to create or modify data in columns as part of a structural change. I'll talk more about them another day. Finally, I have to determine if this is an isolated environment that I'm going to be the only one to use, a dedicated database, or if I'm sharing the database in a shared environment with other developers, a shared database. The main difference is that under a dedicated database I will need to regularly get any changes that other developers have made from source control and integrate them into my database, while under a shared database all changes for all developers are made at the same time, which means you could commit other people's work without proper testing. It all depends on the type of environment you work within. When it's all set, SQL Source Control will compare the results between the empty folders in source control and the database, AdventureWorks2012. You'll get a report showing exactly the list of differences, and you can choose which ones will get checked into source control. Each of the database objects is scripted individually, and you'll be able to modify them later in the same way. You can select/deselect all the objects or each object individually. You also get a report showing the differences between what's in the database and what's in source control.
    If there was already a database in source control, you'd only see changes to database objects rather than every single object. You can see that the database objects can be sorted by name, by type, or other choices. I'm going to add a comment such as "Initial creation of database in source control," and then click on the Commit button, which will put all the objects in my database into the source control system. That's all it takes to get the objects into source control initially. Now is when things can get fun with breaking changes to code, automated deployments, unit testing and all the rest.

    Read the article

  • DAC pack up all your troubles

    - by Tony Davis
    Visual Studio 2010, or perhaps its apparently-forthcoming sister, "SQL Studio", is being geared up to become the natural way for developers to create databases. Central to this drive is the introduction of 'data-tier application components', or DACs. Applications are developed as normal, but when it comes to deployment, instead of supplying the DBA with a bunch of scripts to create the required database objects, the developer creates a single DAC Package ("DAC Pack"); a zipped XML file containing all the database objects needed by the application, along with versioning information, policies for deployment, and so on. It's an intriguing prospect. Developers can work on their development database using their existing tools and source control, and then package up the changes into a single DACPAC for deployment and management. DBAs get an "application level view" of how their instances are being used and the ability to collectively, rather than individually, manage the objects. The DBA needing to manage a large number of relatively small databases can use "DAC snapshots" to get a quick overview of what has changed across all the databases they manage. The reason that DAC packs haven't caused more excitement is that they can only be pushed to SQL Server 2008 R2, and they must be developed or inspected using Visual Studio 2010. Furthermore, what we see right now in VS2010 is more of a 'work-in-progress' or 'vision of the future', with serious shortcomings and restrictions that render it unsuitable for anything but small 'non-critical' departmental databases. The first problem is that DAC packs support a limited set of schema objects (corresponding closely to the features available on 'Azure'). This means that Service Broker queues, CLR objects, and perhaps most critically security (permissions, certificates etc.), are off-limits. Applications that require these objects will need to add them via a post-deployment TSQL script, rather defeating the whole idea. More worrying still is the process for altering a database with a DAC pack. The grand 'collective' philosophy, whereby a single XML file can be used for deploying and managing builds and changes, extends, unfortunately, to database upgrades. Any change to a database object will result in the creation of a new database, copying the data from the old version, nuking the previous one, and then renaming the new one. Simple, eh? The problem is that even something as trivial as adding a comment to a stored procedure in a 5GB database will require the server to find at least twice as much space, as well as sufficient elbow-room in the transaction log for copying the largest table. Of course, you'll need to take the database offline for the full course of the deployment, which is likely to take a long time if there is a lot of data. This upgrade/rename process breaks the log chain, makes any subsequent full restore operation highly complicated, and will also break log shipping. As with any grand vision, the devil is always in the detail. It's hard to fathom why Microsoft hasn't used a SQL Compare-style approach to the upgrade process, altering a database with a change script, and this will surely be adopted in the near future. Something had to be in place for VS2010, but right now DAC packs only make sense for Azure. For this, they're cute, but hardly compelling. Nevertheless, DBAs would do well to get familiar with VS 2010 and DAC packs. Like it or not, they're both coming. Cheers, Tony.

    Read the article

  • Handling inheritance with overriding efficiently

    - by Fyodor Soikin
    I have the following two data structures. First, a list of properties applied to object triples:

        Object1  Object2  Object3  Property  Value
        O1       O2       O3       P1        "abc"
        O1       O2       O3       P2        "xyz"
        O1       O3       O4       P1        "123"
        O2       O4       O5       P1        "098"

    Second, an inheritance tree:

        O1
        ├── O2
        │   └── O4
        └── O3
            └── O5

    Or viewed as a relation:

        Object  Parent
        O2      O1
        O4      O2
        O3      O1
        O5      O3
        O1      null

    The semantics of this being that O2 inherits properties from O1; O4 from O2 and O1; O3 from O1; and O5 from O3 and O1, in that order of precedence.

    NOTE 1: I have an efficient way to select all children or all parents of a given object. This is currently implemented with left and right indexes, but hierarchyid could also work. This does not seem important right now.

    NOTE 2: I have triggers in place that make sure the "Object" column always contains all possible objects, even when they do not really have to be there (i.e. have no parent or children defined). This makes it possible to use inner joins rather than severely less efficient outer joins.

    The objective is: given a pair of (Property, Value), return all object triples that have that property with that value, either defined explicitly or inherited from a parent.

    NOTE 1: An object triple (X,Y,Z) is considered a "parent" of triple (A,B,C) when it is true that either X = A or X is a parent of A, and the same is true for (Y,B) and (Z,C).

    NOTE 2: A property defined on a closer parent "overrides" the same property defined on a more distant parent.

    NOTE 3: When (A,B,C) has two parents, (X1,Y1,Z1) and (X2,Y2,Z2), then (X1,Y1,Z1) is considered a "closer" parent when: (a) X2 is a parent of X1, or (b) X2 = X1 and Y2 is a parent of Y1, or (c) X2 = X1 and Y2 = Y1 and Z2 is a parent of Z1. In other words, the "closeness" in ancestry for triples is defined based on the first components of the triples first, then on the second components, then on the third components. This rule establishes an unambiguous partial order for triples in terms of ancestry.

    For example, given the pair (P1, "abc"), the result set of triples will be:

        O1, O2, O3 -- Defined explicitly
        O1, O2, O5 -- Because O5 inherits from O3
        O1, O4, O3 -- Because O4 inherits from O2
        O1, O4, O5 -- Because O4 inherits from O2 and O5 inherits from O3
        O2, O2, O3 -- Because O2 inherits from O1
        O2, O2, O5 -- Because O2 inherits from O1 and O5 inherits from O3
        O2, O4, O3 -- Because O2 inherits from O1 and O4 inherits from O2
        O3, O2, O3 -- Because O3 inherits from O1
        O3, O2, O5 -- Because O3 inherits from O1 and O5 inherits from O3
        O3, O4, O3 -- Because O3 inherits from O1 and O4 inherits from O2
        O3, O4, O5 -- Because O3 inherits from O1 and O4 inherits from O2 and O5 inherits from O3
        O4, O2, O3 -- Because O4 inherits from O1
        O4, O2, O5 -- Because O4 inherits from O1 and O5 inherits from O3
        O4, O4, O3 -- Because O4 inherits from O1 and O4 inherits from O2
        O5, O2, O3 -- Because O5 inherits from O1
        O5, O2, O5 -- Because O5 inherits from O1 and O5 inherits from O3
        O5, O4, O3 -- Because O5 inherits from O1 and O4 inherits from O2
        O5, O4, O5 -- Because O5 inherits from O1 and O4 inherits from O2 and O5 inherits from O3

    Note that the triple (O2, O4, O5) is absent from this list. This is because property P1 is defined explicitly for the triple (O2, O4, O5), and this prevents that triple from inheriting that property from (O1, O2, O3). Also note that the triple (O4, O4, O5) is absent as well. This is because that triple inherits its value of P1 = "098" from (O2, O4, O5), because that is a closer parent than (O1, O2, O3).

    The straightforward way to do it is the following. First, for every triple that a property is defined on, select all possible child triples:

        select Children1.Id as O1, Children2.Id as O2, Children3.Id as O3, tp.Property, tp.Value
        from TriplesAndProperties tp
        -- Select corresponding objects of the triple
        inner join Objects as Objects1 on Objects1.Id = tp.O1
        inner join Objects as Objects2 on Objects2.Id = tp.O2
        inner join Objects as Objects3 on Objects3.Id = tp.O3
        -- Then add all possible children of all those objects
        inner join Objects as Children1 on Objects1.Id [isparentof] Children1.Id
        inner join Objects as Children2 on Objects2.Id [isparentof] Children2.Id
        inner join Objects as Children3 on Objects3.Id [isparentof] Children3.Id

    But this is not the whole story: if some triple inherits the same property from several parents, this query will yield conflicting results. Therefore, the second step is to select just one of those conflicting results:

        select * from
        (
            select
                Children1.Id as O1, Children2.Id as O2, Children3.Id as O3,
                tp.Property, tp.Value,
                row_number() over(
                    partition by Children1.Id, Children2.Id, Children3.Id, tp.Property
                    order by Objects1.[depthInTheTree] descending,
                             Objects2.[depthInTheTree] descending,
                             Objects3.[depthInTheTree] descending
                ) as InheritancePriority
            from
                ... (see above)
        ) result
        where InheritancePriority = 1

    The window function row_number() over(...) does the following: for every unique combination of object triple and property, it sorts all values by the ancestral distance from the triple to the parents that the value is inherited from, and then I select only the very first of the resulting list of values. A similar effect can be achieved with GROUP BY and ORDER BY statements, but I just find the window function semantically cleaner (the execution plans they yield are identical). The point is, I need to select the closest of the contributing ancestors, and for that I need to group and then sort within the group. And finally, now I can simply filter the result set by Property and Value.

    This scheme works. Very reliably and predictably. It has proven to be very powerful for the business task it implements. The only trouble is, it is awfully slow. One might point out that the join of seven tables might be slowing things down, but that is actually not the bottleneck. According to the actual execution plan I'm getting from SQL Management Studio (as well as SQL Profiler), the bottleneck is the sorting. The problem is, in order to satisfy my window function, the server has to sort by Children1.Id, Children2.Id, Children3.Id, tp.Property, Parents1.[depthInTheTree] descending, Parents2.[depthInTheTree] descending, Parents3.[depthInTheTree] descending, and there can be no indexes it can use, because the values come from a cross join of several tables.

    EDIT: Per Michael Buen's suggestion (thank you, Michael), I have posted the whole puzzle to sqlfiddle here. One can see in the execution plan that the Sort operation accounts for 32% of the whole query, and that is going to grow with the number of total rows, because all the other operations use indexes.

    Usually in such cases I would use an indexed view, but not in this case, because indexed views cannot contain self-joins, of which there are six. The only way I can think of so far is to create six copies of the Objects table and then use them for the joins, thus enabling an indexed view. Has the time come that I shall be reduced to that kind of hack? The despair sets in.

    Read the article

  • Managing persistent data on an Amazon EC2 web server

    - by Derek
    I've just started trying out Amazon's EC2 service for running an ASP.NET web app which uses a SQL Server 2005 Express database. I have some questions about how to configure and operate it best for reliability, and I'm hoping to tap into some collective wisdom here, as this is my first foray into EC2. Here's how I have it configured currently:

      • OS: Windows 2003
      • SQL Server Express 2005
      • Web content stored on an EBS volume (E drive)
      • Database data stored on an EBS volume (E drive)
      • Database backups to "C drive" and then copied off to S3
      • Elastic IP address attached to the production instance

    Now, when I make a change to the OS configuration, I make a new AMI using the bundle feature. Unfortunately, I found that this results in significant downtime while the bundle is created and the new instance is started. It seems that when I'm ready to make a new AMI, I should:

      1. Start up a new temporary instance.
      2. Detach the EBS volume from the production instance.
      3. Detach the IP address from the production instance.
      4. Attach the IP address to the temporary instance.
      5. Attach the EBS volume to the temporary instance.
      6. Create an AMI from the production instance.
      7. After the production instance restarts, reverse the attach/detach steps to put it back in production.

    Is this the right order of events to prevent any chance of corrupting the EBS volume? Will the EBS volume become corrupt if I detach it while a database write is taking place? Should I snapshot the EBS volume of the production instance and attach it to the temporary instance instead? Or could taking a snapshot of the EBS volume while it's in use cause corruption? Any suggestions to improve the reliability and operations?

    Read the article

  • Is it possible with ajax to send one JSON array (array of JSON objects) and also include a separate parameter to receive in an MVC action method?

    - by david2342
    Is it possible with ajax to send one JSON array (an array of JSON objects) and also include a separate parameter to receive in an MVC action method?

        var n = { number: 1 };
        $.ajax({
            type: "POST",
            url: url,
            contentType: "application/json; charset=utf-8",
            dataType: "html",
            data: JSON.stringify({ jsonObjects: json, number: n }),
            success: function (response) {
                $('#body').html(response);
            }
        });

        public ActionResult Create(List jsonObjects, int? number)

    jsonObjects is coming in like it's supposed to, but number is null.
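    For what it's worth, note that in the snippet above n is itself an object ({ number: 1 }), so the posted payload nests it as number: { number: 1 } rather than number: 1, which the default model binder cannot map onto an int?. A hedged sketch of the likely fix, assuming an MVC version with JSON model binding enabled and with JsonObjectDto standing in for the real, unshown item type:

        public class JsonObjectDto { /* hypothetical stand-in for the real item type */ }

        // In the ajax call, post the scalar itself so the default model binder can
        // map it straight onto the int? parameter:
        //     data: JSON.stringify({ jsonObjects: json, number: n.number })
        public ActionResult Create(List<JsonObjectDto> jsonObjects, int? number)
        {
            // jsonObjects binds as before; number now arrives as 1 instead of null.
            return View();
        }

    The alternative is to keep the payload as-is and bind number to a small wrapper class with a Number property, but sending the scalar is the smaller change.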

    Read the article

  • Repository pattern with lazy loading using POCOs

    - by Simon G
    Hi, I'm in the process of starting a new project, creating the business objects, data access, etc. I'm just using plain old CLR objects rather than any ORM. I've created two class libraries:

    1) Business Objects - holds all my business objects; all these objects are lightweight, with only properties and business rules.
    2) Repository - this is for all my data access.

    The majority of my objects will have child lists in them, and my question is: what is the best way to lazy load these values, as I don't want to bring back unnecessary information if I don't need it? I've thought about checking, in the "get" of the child property, whether it is null, and if it is, calling my repository to get the child information. This has two problems as far as I can see:

    1) The object "knows" how to load itself; I would rather no data access logic be held in the object.
    2) This requires both classes to reference each other, which in Visual Studio throws a circular dependency error.

    Does anyone have any suggestions on how to overcome this issue, or any recommendations on my project layout and where it can be improved? Thanks
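    One way to square that circle, sketched below with invented names, is to have the repository hand the object a loader delegate at construction time: the business object still lazy loads on first access, but it only sees a delegate, so the business-objects assembly never has to reference the repository assembly:

        using System;
        using System.Collections.Generic;

        // Business Objects assembly: no reference to the Repository assembly.
        public class Order
        {
            private List<OrderLine> _lines;
            private readonly Func<List<OrderLine>> _linesLoader;

            public Order(Func<List<OrderLine>> linesLoader)
            {
                _linesLoader = linesLoader;
            }

            public IList<OrderLine> Lines
            {
                get
                {
                    // Lazy load on first access; the object never sees the repository type.
                    if (_lines == null)
                        _lines = _linesLoader();
                    return _lines;
                }
            }
        }

        public class OrderLine { /* properties and business rules */ }

        // Repository assembly: references Business Objects, not vice versa.
        public class OrderRepository
        {
            public Order GetById(int id)
            {
                // ...load the order's own fields here, then wire up the child loader:
                return new Order(() => LoadLines(id));
            }

            private List<OrderLine> LoadLines(int id)
            {
                // data access for the child list goes here
                return new List<OrderLine>();
            }
        }

    This keeps the data access logic in the repository and breaks the circular reference, at the cost of constructing business objects through the repository rather than directly.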

    Read the article

  • application-context.xml problem for Spring ActionScript

    - by jiri
    The content of application-context.xml is:

        <?xml version="1.0" encoding="utf-8"?>
        <objects xmlns="http://www.springactionscript.org/schema/objects"
                 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xsi:schemaLocation="http://www.springactionscript.org/schema/objects
                                     http://www.springactionscript.org/schema/objects/spring-actionscript-objects-1.0.xsd">
            <property file="strings.properties" />
            <object id="string1" class="String"/>
            <object id="string2" class="String"/>
            <object id="nlslzf" class="com.sgb.model.MyClass"/>
        </objects>

    The content of strings.properties is:

        s1=Firststring
        s2=Secondstring

    But I get this error:

        Main Thread (Suspended: Error: An object definition for 'string1' was not found.)
            org.springextensions.actionscript.ioc.factory.support::AbstractObjectFactory/getObject
            SpringActionScript/test
            SpringActionScript/onCreationComplete
            SpringActionScript/___SpringActionScript_Application1_creationComplete
            flash.events::EventDispatcher/dispatchEventFunction [no source]
            mx.core::UIComponent/dispatchEvent
            mx.core::UIComponent/set initialized
            mx.managers::LayoutManager/doPhasedInstantiation
            Function/http://adobe.com/AS3/2006/builtin::apply [no source]
            mx.core::UIComponent/callLaterDispatcher2
            mx.core::UIComponent/callLaterDispatcher2
            mx.core::UIComponent/callLaterDispatcher

    It runs normally if I remove the <property file="strings.properties" /> element. Why is that? Is it a SpringActionScript bug?

    Read the article
