Search Results

Search found 4580 results on 184 pages for 'faster'.

Page 147 of 184

  • WPF: Enable/disable controls of the main window from a page loaded inside it

    - by toni
    Hi! I have a WPF application with a main window. On the left side of the window there is a listbox of buttons, a kind of menu for quick access to pages. Each button belongs to a page that is loaded inside the window when the user selects it. The main window also has a main menu at the top for other tasks. When a page is loaded in the main window and the user clicks a button on that page, it starts a task that takes a long time. While this long task is executing I want to prevent the user from selecting (or pressing) any of the buttons in the listbox, because the long task is also updating the UI of the loaded page. I would like to disable the listbox (IsEnabled = false) while the long task is executing and not enable it again until the task has finished. How can I do this? That is, from the currently loaded page I want to disable the listbox placed in the main window that owns the page; the listbox does not belong to the loaded page. Thanks!
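    One way to approach this (a sketch of my own, not from the original post): fetch the main window from the page, disable the listbox, and run the long task on a background thread so the menu stays locked until the work completes. The type name MainWindow, the listbox name MenuListBox and the DoLongTask method are assumptions for illustration; any UI updates the task makes must still go through the Dispatcher.

    ```csharp
    // Sketch only: assumes the main window class is MainWindow and its menu
    // listbox is named MenuListBox in XAML, in the same assembly as this page.
    using System.ComponentModel;
    using System.Windows;
    using System.Windows.Controls;

    public partial class MyPage : Page
    {
        private void StartLongTask_Click(object sender, RoutedEventArgs e)
        {
            var mainWindow = (MainWindow)Application.Current.MainWindow;
            mainWindow.MenuListBox.IsEnabled = false;      // lock the menu

            var worker = new BackgroundWorker();
            worker.DoWork += (s, args) => DoLongTask();    // runs off the UI thread
            worker.RunWorkerCompleted += (s, args) =>
                mainWindow.MenuListBox.IsEnabled = true;   // unlock when finished (back on the UI thread)
            worker.RunWorkerAsync();
        }

        // Hypothetical long-running operation; UI updates inside it need Dispatcher.Invoke.
        private void DoLongTask() { }
    }
    ```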

    Read the article

  • Subversion versus Vault

    - by WebDude
    I'm currently reviewing the benefits of moving from SVN to SourceGear Vault. Has anyone got advice, or a link to a detailed comparison between the two? Bear in mind I would have to move my current source control system across, which works strongly in SVN's favor.

    Here is some info I have found out thus far from my own investigations. I have been taking some time tests between the two, and Vault seems to perform most operations much faster. The time tests used the same server as the repository, the same workstation client, and the same project.

    Time comparisons

    SVN
    - Add/Commit: 12:30
    - Get Latest Revision: 5:35
    - Tagging/Labelling: 0:01
    - Branching: N/A - I don't think true branching exists in SVN

    Vault
    - Add/Commit: 4:45
    - Get Latest Revision: 0:51
    - Tagging/Labelling: 0:30
    - Branching: 3:23

    I also found an online source comparing some other points. This is the kind of information I'm looking for.

    Usage comparisons
    - Subversion is edit/merge/commit only. Vault allows you to do either edit/merge/commit or checkout/edit/checkin.
    - Vault looks and acts just like VSS, which makes the learning curve effectively zero for VSS users.
    - Vault has a VS plugin, but it only works if you're going to run in checkout mode.
    - Subversion has clients for pretty much every OS you can imagine; Vault has a GUI client for Windows and a command line client for Mono.
    - Both will support remote work, since both use HTTP as their transport (Subversion uses extended DAV, Vault uses SOAP).
    - Subversion installation, especially with Apache, is more complex.
    - Subversion has a lot of third party support. Vault has just a few things.

    My question: has anyone got advice or a link to a detailed comparison between the two?

    Read the article

  • Generating the SQL query plan takes 5 minutes, the query itself runs in milliseconds. What's up?

    - by TheImirOfGroofunkistan
    I have a fairly complex (or ugly, depending on how you look at it) stored procedure running on SQL Server 2008. It bases a lot of the logic on a view that has a pk table and a fk table. The fk table is left joined to the pk table slightly more than 30 times (the fk table has a poor design - it uses name-value pairs that I need to flatten out; unfortunately, it's 3rd party and I cannot change it).

    Anyway, it had been running fine for weeks until I periodically noticed a run that would take 3-5 minutes. It turns out that this is the time it takes to generate the query plan. Once the query plan exists and is cached, the stored procedure itself runs very efficiently. Things run smoothly until there is a reason to regenerate and cache the query plan again.

    Has anyone seen this? Why does it take so long to generate the plan? Are there ways to make it come up with a plan faster?

    Read the article

  • Fast way to get a list of group members in Active Directory with C#

    - by Jeremy
    In a web app, we're looking to display a list of sam accounts for users that are members of a certain group. Groups could have 500 or more members in many cases, and we need the page to be responsive. With a group of about 500 members it takes 7-8 seconds to get a list of sam accounts for all members of the group. Are there faster ways? I know the Active Directory Management Console does it in under a second. I've tried a few methods:

    1)

    ```csharp
    PrincipalContext pcRoot = new PrincipalContext(ContextType.Domain);
    GroupPrincipal grp = GroupPrincipal.FindByIdentity(pcRoot, "MyGroup");
    List<string> lst = grp.Members.Select(g => g.SamAccountName).ToList();
    ```

    2)

    ```csharp
    PrincipalContext pcRoot = new PrincipalContext(ContextType.Domain);
    GroupPrincipal grp = GroupPrincipal.FindByIdentity(pcRoot, "MyGroup");
    PrincipalSearchResult<Principal> lstMembers = grp.GetMembers(true);
    List<string> lst = new List<string>();
    foreach (Principal member in lstMembers)
    {
        if (member.StructuralObjectClass.Equals("user"))
        {
            lst.Add(member.SamAccountName);
        }
    }
    ```

    3)

    ```csharp
    PrincipalContext pcRoot = new PrincipalContext(ContextType.Domain);
    GroupPrincipal grp = GroupPrincipal.FindByIdentity(pcRoot, "MyGroup");
    System.DirectoryServices.DirectoryEntry de = (System.DirectoryServices.DirectoryEntry)grp.GetUnderlyingObject();
    List<string> lst = new List<string>();
    foreach (string sDN in de.Properties["member"])
    {
        System.DirectoryServices.DirectoryEntry deMember = new System.DirectoryServices.DirectoryEntry("LDAP://" + sDN);
        lst.Add(deMember.Properties["samAccountName"].Value.ToString());
    }
    ```
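    One approach that is often faster than going through the AccountManagement API is a single attribute-scoped query against the group's member attribute with DirectorySearcher, so every member's sAMAccountName comes back from one search instead of one bind per member. This is a sketch of my own, not code from the original question; the group DN is a placeholder, it requires a Windows Server 2003 or later domain controller, and it should be measured rather than assumed to win.

    ```csharp
    // Sketch: list sAMAccountName for the members of a group with one
    // attribute-scoped query (ASQ) instead of binding to each member.
    // Returns direct user members only (no nested-group recursion, unlike GetMembers(true)).
    // For very large groups you may also need result paging (searcher.PageSize).
    using System.Collections.Generic;
    using System.DirectoryServices;

    static class GroupMembers
    {
        public static List<string> GetSamAccounts(string groupDn)   // e.g. "CN=MyGroup,OU=Groups,DC=example,DC=com" (placeholder)
        {
            var names = new List<string>();
            using (var group = new DirectoryEntry("LDAP://" + groupDn))
            using (var searcher = new DirectorySearcher(group))
            {
                searcher.SearchScope = SearchScope.Base;      // ASQ requires base scope
                searcher.AttributeScopeQuery = "member";      // search the entries the member attribute points to
                searcher.Filter = "(objectClass=user)";       // users only
                searcher.PropertiesToLoad.Add("sAMAccountName");

                using (SearchResultCollection results = searcher.FindAll())
                {
                    foreach (SearchResult result in results)
                        names.Add(result.Properties["sAMAccountName"][0].ToString());
                }
            }
            return names;
        }
    }
    ```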

    Read the article

  • Finding distance to the closest point in a point cloud on a uniform grid

    - by erik
    I have a 3D grid of size AxBxC with equal distance, d, between the points in the grid. Given a number of points, what is the best way of finding the distance to the closest point for each grid point (every grid point should contain the distance to the closest point in the point cloud), given the assumptions below?

    Assume that A, B and C are quite big in relation to d, giving a grid of maybe 500x500x500, and that there will be around 1 million points. Also assume that if the distance to the nearest point exceeds a distance of D, we do not care about the nearest point distance, and it can safely be set to some large number (D is maybe 2 to 10 times d).

    Since there will be a great number of grid points and points to search from, a simple exhaustive search:

    ```
    for each grid point:
        for each point:
            if distance between points < minDistance:
                minDistance = distance between points
    ```

    is not a good alternative. I was thinking of doing something along the lines of:

    ```
    create a container of size A*B*C where each element holds a container of points

    for each point:
        define indexX = round((point position x - grid min position x)/d)   // same for y and z
        add the point to the correct index of the container

    for each grid point:
        search the container of that grid point and find the closest point
        if no points in container and D > 0.5d:
            search the 26 container indices nearest to the grid point for a closest point
            ... continue with the next layer until a point is found or the distance to that layer is greater than D
    ```

    Basically: put the points in buckets and do a radial search outwards until a point is found for each grid point. Is this a good way of solving the problem, or are there better/faster ways? A solution which is good for parallelisation is preferred.
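    To make the bucket-and-radial-search idea concrete, here is a rough C# sketch of my own (not from the question). It assumes the grid starts at the origin with spacing d, clamps out-of-range points into the grid, and, like the scheme described above, stops at the first non-empty shell of cells, which is approximate near shell boundaries (searching one extra shell tightens the result).

    ```csharp
    using System;
    using System.Collections.Generic;

    struct P3
    {
        public double X, Y, Z;
        public P3(double x, double y, double z) { X = x; Y = y; Z = z; }
    }

    static class GridDistance
    {
        // For every grid node (i,j,k) at position (i*d, j*d, k*d), return the distance
        // to the nearest point, or bigValue if no point lies within maxDist.
        public static double[,,] Compute(IList<P3> points, int nx, int ny, int nz,
                                         double d, double maxDist, double bigValue)
        {
            // 1. Bucket the points by their nearest grid cell.
            var buckets = new List<P3>[nx, ny, nz];
            foreach (var p in points)
            {
                int i = Math.Min(nx - 1, Math.Max(0, (int)Math.Round(p.X / d)));
                int j = Math.Min(ny - 1, Math.Max(0, (int)Math.Round(p.Y / d)));
                int k = Math.Min(nz - 1, Math.Max(0, (int)Math.Round(p.Z / d)));
                if (buckets[i, j, k] == null) buckets[i, j, k] = new List<P3>();
                buckets[i, j, k].Add(p);
            }

            int maxLayer = (int)Math.Ceiling(maxDist / d) + 1;
            var result = new double[nx, ny, nz];

            // 2. For each grid node, search outward one shell of cells at a time.
            for (int i = 0; i < nx; i++)
            for (int j = 0; j < ny; j++)
            for (int k = 0; k < nz; k++)
            {
                double gx = i * d, gy = j * d, gz = k * d;
                double best = double.MaxValue;
                for (int layer = 0; layer <= maxLayer && best == double.MaxValue; layer++)
                {
                    for (int di = -layer; di <= layer; di++)
                    for (int dj = -layer; dj <= layer; dj++)
                    for (int dk = -layer; dk <= layer; dk++)
                    {
                        // Visit only the outer shell of this layer.
                        if (Math.Max(Math.Abs(di), Math.Max(Math.Abs(dj), Math.Abs(dk))) != layer) continue;
                        int ci = i + di, cj = j + dj, ck = k + dk;
                        if (ci < 0 || cj < 0 || ck < 0 || ci >= nx || cj >= ny || ck >= nz) continue;
                        var bucket = buckets[ci, cj, ck];
                        if (bucket == null) continue;
                        foreach (var p in bucket)
                        {
                            double dx = p.X - gx, dy = p.Y - gy, dz = p.Z - gz;
                            best = Math.Min(best, Math.Sqrt(dx * dx + dy * dy + dz * dz));
                        }
                    }
                }
                result[i, j, k] = best <= maxDist ? best : bigValue;
            }
            return result;
        }
    }
    ```

    The outer loop over grid nodes is independent per node, so it parallelises naturally (for example with Parallel.For over i).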

    Read the article

  • Per-pixel per-component alpha blending in Windows

    - by Crend King
    I have a 24-bit bitmap with R, G, B color channels and a 24-bit bitmap with R, G, B alpha channels. I want to alpha blend the first bitmap onto an HDC in GDI, or a RenderTarget in Direct2D, using the respective alpha channel for each component. For example, suppose for one pixel the bitmap color is (192, 192, 192), the HDC color is (0, 255, 255) and the alpha channels are (30, 40, 50). The final HDC color should be (22, 245, 242).

    I know I can BitBlt the HDC to a memory HDC first, do the alpha blending by manually calculating the color of each pixel, and finally BitBlt back. I just want to avoid the additional blitting and let the APIs do their job (faster, since they are in kernel space). The first idea that comes to mind is to split the source bitmap into three red-only, green-only and blue-only 8-bit bitmaps, do normal alpha blending, then composite the 3 output bitmaps into the HDC. But I can't find a way to do the splitting and composition natively in Windows (would a Direct2D layer help?). Also, the splitting and compositing may require a lot of additional copying; the performance overhead would be too high. Or maybe do the alpha blending in 3 passes, each pass applying the blend for one channel while leaving the other 2 unchanged. Thanks for any comment.

    EDIT: I found this question, and the answer should be a good reference for this problem. However, besides AC_SRC_OVER, there is no other blending operation supported. Why doesn't Microsoft improve their API?
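    If the manual route turns out to be unavoidable, the per-channel arithmetic itself is simple. Below is a sketch of my own (not from the post) that blends one 24bpp System.Drawing bitmap over another in memory, using a third 24bpp bitmap as the per-channel alpha mask and the formula out = (src*a + dst*(255-a))/255 for every byte; it reproduces the (22, 245, 242) example above. It avoids GetPixel/SetPixel by locking the bits, but it is still a CPU-side loop, not a kernel-space blit.

    ```csharp
    // Sketch: per-channel alpha blend of src over dst, with the alpha for each channel
    // taken from a third bitmap. All bitmaps are assumed 24bpp and the same size;
    // dst is modified in place. Requires compiling with /unsafe.
    using System.Drawing;
    using System.Drawing.Imaging;

    static class ChannelBlend
    {
        public static void Blend(Bitmap dst, Bitmap src, Bitmap alpha)
        {
            var rect = new Rectangle(0, 0, dst.Width, dst.Height);
            BitmapData dd = dst.LockBits(rect, ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
            BitmapData sd = src.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format24bppRgb);
            BitmapData ad = alpha.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format24bppRgb);
            try
            {
                unsafe
                {
                    for (int y = 0; y < rect.Height; y++)
                    {
                        byte* dp = (byte*)dd.Scan0 + y * dd.Stride;
                        byte* sp = (byte*)sd.Scan0 + y * sd.Stride;
                        byte* ap = (byte*)ad.Scan0 + y * ad.Stride;
                        for (int x = 0; x < rect.Width * 3; x++)   // 3 bytes (B, G, R) per pixel
                        {
                            // e.g. src 192 over dst 0 at alpha 30 -> (192*30 + 0*225)/255 = 22
                            dp[x] = (byte)((sp[x] * ap[x] + dp[x] * (255 - ap[x])) / 255);
                        }
                    }
                }
            }
            finally
            {
                dst.UnlockBits(dd);
                src.UnlockBits(sd);
                alpha.UnlockBits(ad);
            }
        }
    }
    ```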

    Read the article

  • UIScrollView notifications

    - by ryyst
    Hi, I'm coding an app that works much like Apple's Weather.app: there's a UIPageControl at the bottom and a UIScrollView in the middle of the screen. In my code, I implemented the - (void)scrollViewDidEndDecelerating:(UIScrollView *)scrollView method to figure out when the user has moved to a new page. If they move to a new page, I load the adjacent pages' data, so as to make further page switching faster. (In one of Apple's examples, - (void)scrollViewDidScroll:(UIScrollView *)sender is used, but that causes my app to hang briefly when loading a new page, so it's not suitable.) That code works very well.

    I'm using scrollRectToVisible: to programmatically scroll inside the scroll view when the user taps the UIPageControl. The problem is that scrollRectToVisible: doesn't notify the UIScrollViewDelegate when it's done scrolling - so the code responsible for loading adjacent pages never gets called when using the UIPageControl. Is there any way to make the UIScrollView notify its delegate when it gets scrolled by the scrollRectToVisible: method? Or will I have to use threads in order to prevent my app from freezing? Thanks! -- Ry

    Read the article

  • Why is calling Process.killProcess(Process.myPid()) a bad idea?

    - by Tal Kanel
    I've read some posts saying that using this method is "not good", that it shouldn't be used, that it's not the right way to "close" the application and it's not how Android works... I understand and accept the fact that the Android OS knows better than me when it's the right time to terminate the process, but I haven't yet heard a good explanation of why it's wrong to use the killProcess() method. After all, it's part of the Android API...

    What I do know is that calling this method while other threads are potentially doing important work (operations on files, writing to a DB, HTTP requests, running services...) means they can be terminated in the middle of it, and that's clearly not good. I also know I can benefit from the fact that "re-opening" the application will be faster, because the system may still hold the process state in memory from the last time it was used, and killProcess() prevents that. Besides these reasons, and assuming I don't have such operations and I don't care that my application loads from scratch on each run, are there other reasons not to use the killProcess() method?

    I know about the finish() method for closing an Activity, so please don't write to me about that. finish() is only for an Activity, not for the whole application, and I think I know exactly why and when to use it.

    And another thing - I also develop games with the Unity3D framework and export the projects to Android. When I decompiled the generated apk, I was very surprised to find that the Java source code generated by Unity implements Unity's Application.Quit() method with Process.killProcess(Process.myPid()). Application.Quit() is supposed to be the right way to close a game according to the Unity3D guides (is it really? maybe I'm wrong and missed something), so how is it that the Unity framework developers, who seem to do very good work, implemented this on native Android as killProcess()?

    Anyway - I would like a "list of reasons" not to use the killProcess() method, so please write down your answer if you have something interesting to say about it. TIA

    Read the article

  • Invalidating Memcached Keys on save() in Django

    - by Zack
    I've got a view in Django that uses memcached to cache data for the more highly trafficked views that rely on a relatively static set of data. The key word is relatively: I need to invalidate the memcached key for that particular URL's data when it's changed in the database. To be as clear as possible, here's the meat an' potatoes of the view (Person is a model, cache is django.core.cache.cache):

    ```python
    def person_detail(request, slug):
        if request.is_ajax():
            cache_key = "%s_ABOUT_%s" % (settings.SITE_PREFIX, slug)
            # Check the cache to see if we've already got this result made.
            json_dict = cache.get(cache_key)
            # Was it a cache hit?
            if json_dict is None:
                # That's a negative Ghost Rider
                person = get_object_or_404(Person, display=True, slug=slug)
                json_dict = {
                    'name': person.name,
                    'bio': person.bio_html,
                    'image': person.image.extra_thumbnails['large'].absolute_url,
                }
                cache.set(cache_key, json_dict)
            # json_dict will now exist, whether it's from the cache or not
            response = HttpResponse()
            response['Content-Type'] = 'text/javascript'
            # Make sure it's all properly formatted for JS by using simplejson
            response.write(simplejson.dumps(json_dict))
            return response
        else:
            # This is where the fully templated response is generated
    ```

    What I want to do is get at that cache_key variable in its "unformatted" form, but I'm not sure how to do this - if it can be done at all. Just in case there's already something to do this, here's what I want to do with it (this is from the Person model's hypothetical save method):

    ```python
    def save(self):
        # If this is an update, the key will be cached; otherwise it won't. Let's see if we can't find me.
        try:
            old_self = Person.objects.get(pk=self.id)
            cache_key = # Voodoo magic to get that variable
            old_key = cache_key.format(settings.SITE_PREFIX, old_self.slug)  # Generate the key currently cached
            cache.delete(old_key)  # Hit it with both barrels of rock salt
        # Turns out this doesn't already exist; let's make that first request even faster by making this cache right now
        except DoesNotExist:
            # I haven't gotten to this yet.
        super(Person, self).save()
    ```

    I'm thinking about making a view class for this sorta stuff, and having functions in it like remove_cache or generate_cache, since I do this sorta stuff a lot. Would that be a better idea? If so, how would I call the views in the URLconf if they're in a class?

    Read the article

  • Differences between iPhone/iPod Simulator and Devices

    - by Allisone
    Hi, since I started iPhone/iPod development I have come across some differences between how the simulator and a real device behave. Maybe I will come across other differences I will have to figure out as well; maybe other people haven't met these problems here (yet) and can profit from the knowledge; and maybe you know some problems/differences that you would have been happy to know about earlier, before you spent several hours or days figuring out what the heck was going on. So here is what I came across:

    - The simulator is not case sensitive; devices are case sensitive. This means a default.png or Icon.png will work in the simulator, but not on a device, where they must be named Default.png and icon.png (if it's still not working, read this answer).
    - The simulator has different codecs to play audio and video. If you use e.g. MPMoviePlayerController, you might be able to play a certain video in the simulator while on the device it won't work (use Handbrake's "iPhone & iPod Touch" preset to create videos playable on both the simulator and the device). If you play audio with AudioServicesPlaySystemSound(&soundID), you might hear the sound in the simulator but not on a device (use Audacity to open your sound file, export it as WAV, and run afconvert -f caff -d LEI16@44100 -c 1 audacity.wav output.caf in Terminal). Also, there is the flickering-on-second-run problem, which can be resolved with playerViewCtrl.initialPlaybackTime = -1.0; either at the end of playback or before each start.
    - The simulator is mostly much faster, because it doesn't simulate the hardware but uses the Mac's resources. Therefore, e.g., sio2 apps (OpenGL, OpenAL, etc. framework) run much better in the simulator; in general, everything that uses more resources will run visibly better in the simulator than on a device.

    I hope we can add some more to this.

    Read the article

  • Surprising results with a .NET multi-threading algorithm

    - by Myles J
    Hi, I recently wrote a C# console timetabling algorithm that is based on a combination of a genetic algorithm with a few brute force routines thrown in. The initial results were promising, but I figured I could improve the performance by splitting the brute force routines up to run in parallel on multi-processor architectures. To do this I used the well documented producer/consumer model (as documented in this fantastic article: http://www.albahari.com/threading/part2.aspx#_ProducerConsumerQWaitHandle). I changed my code to create one thread per logical processor during the brute force routines. The performance gains on my workstation were very pleasing.

    I am running Windows XP on the following hardware:
    - Intel Core 2 Quad CPU 2.33 GHz
    - 3.49 GB RAM

    Initial tests indicated average performance gains of approx 40% when using 4 threads. The next step was to deploy the new multi-threading version of the algorithm to our higher spec UAT server. Here is the spec of our UAT server:
    - Windows 2003 Server R2 Enterprise x64
    - 8 CPU (quad-core) AMD Opteron 2.70 GHz
    - 255 GB RAM

    After running the first round of tests we were all extremely surprised to find that the algorithm actually runs slower on the high spec W2003 server than on my local XP workstation! In fact, the tests seem to indicate that it doesn't matter how many threads are generated (tests were run with the app spawning between 2 and 32 threads); the algorithm always runs significantly slower on the UAT W2003 server. How could this be? Surely the app should run faster on an 8-CPU (quad-core) server than on my Core 2 Quad workstation? Why are we seeing no performance gains from the multi-threading on the W2003 server while the XP workstation tests show gains of up to 40%?

    Any help or pointers would be appreciated. Regards, Myles

    Read the article

  • Dividing sections inside an omp parallel for : OpenMP

    - by Sayan Ghosh
    Hi, I have a situation like:

    ```c
    #pragma omp parallel for private(i, j, k, val, p, l)
    for (i = 0; i < num1; i++) {
        for (j = 0; j < num2; j++) {
            for (k = 0; k < num3; k++) {
                val = m[i + j*somenum + k*2];
                if (val != 0)
                    for (l = start; l <= end; l++) {
                        someFunctionThatWritesIntoGlobalArray((i + l), j, k,
                            (someFunctionThatGetsValueFromAnotherArray((i + l), j, k) * val));
                    }
            }
        }
        for (p = 0; p < num4; p++) {
            m[p] = 0;
        }
    }
    ```

    Thanks for reading, phew! Well, I am noticing a very minor difference in the results (0.999967 [omp] against 1 [serial]) when I use the above (which is 3 times faster) against the serial implementation. Now I know I am making a mistake here... especially the connection between the loops is evident. Is it possible to parallelize this using omp sections? I tried some options, like making p shared (doing this, I got correct values, as in the serial form), but there was no speedup then. Any general advice on handling OpenMP pragmas over a slew of for loops would also be great for me!

    Read the article

  • How should I go about implementing a points-to analysis in Maude?

    - by reprogrammer
    I'm going to implement a points-to analysis algorithm. I'd like to implement this analysis mainly based on the algorithm by Whaley and Lam. Whaley and Lam use a BDD-based implementation of Datalog to represent and compute the points-to analysis relations. The following lists some of the relations that are used in a typical points-to analysis. Note that D(w, z) :- A(w, x), B(x, y), C(y, z) means D(w, z) is true if A(w, x), B(x, y), and C(y, z) are all true. A BDD is the data structure used to represent these relations.

    Relations

    ```
    input  vP0    (variable : V, heap : H)
    input  store  (base : V, field : F, source : V)
    input  load   (base : V, field : F, dest : V)
    input  assign (dest : V, source : V)
    output vP     (variable : V, heap : H)
    output hP     (base : H, field : F, target : H)
    ```

    Rules

    ```
    vP(v, h)      :- vP0(v, h)
    vP(v1, h)     :- assign(v1, v2), vP(v2, h)
    hP(h1, f, h2) :- store(v1, f, v2), vP(v1, h1), vP(v2, h2)
    vP(v2, h2)    :- load(v1, f, v2), vP(v1, h1), hP(h1, f, h2)
    ```

    I need to understand whether Maude is a good environment for implementing points-to analysis. I noticed that Maude uses a BDD library called BuDDy, but it looks like Maude uses BDDs for a different purpose, i.e. unification. So I thought I might be able to use Maude instead of a Datalog engine to compute the relations of my points-to analysis. I assume Maude propagates independent information concurrently, and this concurrency could potentially make my points-to analysis faster than sequential processing of rules. But I don't know the best way to represent my relations in Maude. Should I implement BDDs in Maude myself, or does Maude's internal BDD-based unification have the same effect?

    Read the article

  • Dynamically created iframe used to download file triggers onload with firebug but not without

    - by justkt
    EDIT: as this problem is now "solved" to the point of working, I am looking for information on why. For the fix, see my comment below.

    I have a web application which repeatedly downloads wav files dynamically (after a timeout, or as instructed by the user) into an iframe, in order to trigger the default audio player to play them. The application targets only FF 2 or 3. In order to determine when the file has downloaded completely, I am hoping to use the window.onload handler for the iframe. Based on this stackoverflow.com answer I am creating a new iframe each time. As long as Firebug is enabled in the browser using the application, everything works great. Without Firebug, the onload never fires. The version of Firebug is 1.3.1, and I've tested Firefox 2.0.0.19 and 3.0.7.

    Any ideas how I can get the onload from the iframe to reliably trigger when the wav file has downloaded? Or is there another way to signal the completion of the download? Here's the pertinent code.

    HTML (hidden's only attribute is display:none;):

    ```html
    <div id="audioContainer" class="hidden"> </div>
    ```

    JavaScript (could also use jQuery, but innerHTML is faster than html() from what I've read):

    ```javascript
    waitingForFile = true; // (declared at the beginning of closure)
    $("#loading").removeClass("hidden");
    var content = "<iframe id='audioPlayer' name='audioPlayer' src='" + /path/to/file.wav + "' onload='notifyLoaded()'></iframe>";
    document.getElementById("audioContainer").innerHTML = content;
    ```

    And the content of notifyLoaded:

    ```javascript
    function notifyLoaded() {
        waitingForFile = false; // (declared at beginning of the closure)
        $("#loading").addClass("hidden");
    }
    ```

    I have also tried creating the iframe via document.createElement, but I found the same behavior: the onload triggered each time with Firebug enabled and never without it.

    EDIT: Fixed the information on how the iframe is being declared and added the callback function code. No, no console.log calls here.

    Read the article

  • Are Large iPhone Ping Times Indicative of Application Latency?

    - by yar
    I am contemplating creating a realtime app where an iPod Touch/iPhone/iPad talks to a server-side component (which produces MIDI, and sends it onward within the host). When I ping my iPod Touch over WiFi I get huge latency (and an enormous variance, too):

    ```
    64 bytes from 192.168.1.3: icmp_seq=9 ttl=64 time=38.616 ms
    64 bytes from 192.168.1.3: icmp_seq=10 ttl=64 time=61.795 ms
    64 bytes from 192.168.1.3: icmp_seq=11 ttl=64 time=85.162 ms
    64 bytes from 192.168.1.3: icmp_seq=12 ttl=64 time=109.956 ms
    64 bytes from 192.168.1.3: icmp_seq=13 ttl=64 time=31.452 ms
    64 bytes from 192.168.1.3: icmp_seq=14 ttl=64 time=55.187 ms
    64 bytes from 192.168.1.3: icmp_seq=15 ttl=64 time=78.531 ms
    64 bytes from 192.168.1.3: icmp_seq=16 ttl=64 time=102.342 ms
    64 bytes from 192.168.1.3: icmp_seq=17 ttl=64 time=25.249 ms
    ```

    Even if this is double what the iPhone-to-host or host-to-iPhone time would be, 15 ms+ is too long for the app I'm considering. Is there any faster way around this (e.g., a USB cable)? If not, would building the app on Android offer any other options?

    Traceroute reports more workable times:

    ```
    traceroute to 192.168.1.3 (192.168.1.3), 64 hops max, 52 byte packets
     1  192.168.1.3 (192.168.1.3)  4.662 ms  3.182 ms  3.034 ms
    ```

    Can anyone decipher this difference between ping and traceroute for me, and what it might mean for an application that needs to talk to (and from) a host?

    Read the article

  • SQL Where Clause Against View

    - by Adam Carr
    I have a view (actually, it's a table-valued function, but the observed behavior is the same in both) that inner joins and left outer joins several other tables. When I query this view with a WHERE clause similar to

    ```sql
    SELECT * FROM [v_MyView] WHERE [Name] LIKE '%Doe, John%'
    ```

    the query is very slow, but if I do the following...

    ```sql
    SELECT * FROM [v_MyView]
    WHERE [ID] IN
    (
        SELECT [ID] FROM [v_MyView] WHERE [Name] LIKE '%Doe, John%'
    )
    ```

    it is MUCH faster. The first query takes at least 2 minutes to return, if not longer, whereas the second query returns in less than 5 seconds. Any suggestions on how I can improve this? If I run the whole command as one SQL statement (without the use of a view) it is very fast as well. I believe this result comes from how a view has to behave like a table: if a view contains OUTER JOINs, GROUP BYs or TOP ##, then interpreting the WHERE clause before versus after the execution of the view could give different results. My question is: why wouldn't SQL optimize my first query into something as efficient as my second query?

    Read the article

  • Maintain order of messages via proxies to app servers

    - by David Turner
    Hi, I am receiving messages from a 3rd party via an HTTP POST, and it is important that the order in which the messages hit our infrastructure is maintained through the load balancers and proxies until they reach our application server. Quick diagram (proxies are in place due to security requirements):

    ```
    [ACE load balancer] - [2 proxies] - [Application servers]
    ```

    or maybe

    ```
    [ACE load balancer] - [2 proxies] - [ACE load balancer] - [Application servers]
    ```

    My idea was to set up the load balancers in active-passive mode, to force all messages to use one proxy, and then both proxies would hit a second load balancer configured active-passive to hit one application server. While the above is not ideal, it does give me resilience, and once a message is in my app servers I enter a stateless world and load balance across both nodes of my cluster.

    However, I am concerned that even a single proxy could send messages out of order; perhaps if 2 messages are received very close together, message 2 might get processed faster than message 1. Is this possible? Likely?

    Is there a simple open source proxy (mod_proxy?) that can be easily configured to just pass messages through, and that guarantees to send the messages through in the order they are received? If so, which one? Finally, links to how I should configure it would be great. In fact, any links to articles about avoiding "out of order" messages using hardware would be gratefully received.

    Thanks. PS: for those who are interested, the app is a Java Spring Integration application, currently on an application server.
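    For the mod_proxy part, a minimal pass-through reverse-proxy configuration looks roughly like the sketch below (my own illustration; the backend host name and module paths are placeholders and vary by distribution). Note that the proxy by itself does not guarantee ordering across connections; it forwards requests as it receives them, so end-to-end ordering still depends on the sender issuing the messages sequentially.

    ```apache
    # Minimal Apache httpd reverse-proxy pass-through (sketch; backend host is a placeholder)
    LoadModule proxy_module modules/mod_proxy.so
    LoadModule proxy_http_module modules/mod_proxy_http.so

    # Reverse proxy only (never a forward proxy), and pass the original Host header on
    ProxyRequests Off
    ProxyPreserveHost On
    ProxyPass / http://appserver.internal:8080/
    ProxyPassReverse / http://appserver.internal:8080/
    ```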

    Read the article

  • database design to speed up hibernate querying of large dataset

    - by paddydub
    I currently have the tables below, representing a bus network, mapped in Hibernate and accessed from a Spring MVC based bus route planner. I'm trying to make my route planner application perform faster; I load all of these tables into Lists to perform the route planner logic. I would appreciate any ideas on how to improve performance, or any suggestions of another way to approach this problem of handling a large set of data.

    Coordinate Connections table (INT, INT, INT), containing 50,000 coordinate connections:

    ```
    ID  FROMCOORDID  TOCOORDID
    1   1            2
    2   1            17
    3   1            63
    4   1            64
    5   1            65
    6   1            95
    ```

    Coordinate table (INT, DECIMAL, DECIMAL), containing 4,700 coordinates:

    ```
    ID  LAT        LNG
    0   59.352669  -7.264341
    1   59.352669  -7.264341
    2   59.350012  -7.260653
    3   59.337585  -7.189798
    4   59.339221  -7.193582
    5   59.341408  -7.205888
    ```

    Bus Stop table (INT, INT, INT), containing 15,000 stops:

    ```
    StopID      RouteID  COORDINATEID
    1000100001  100      17
    1000100002  100      18
    1000100003  100      19
    1000100004  100      20
    1000100005  100      21
    1000100006  100      22
    1000100007  100      23
    ```

    This is how long it takes to load all the data from each table:

    ```
    stop.findAll = 148ms, stops.size: 15670
    Hibernate: select coordinate0_.COORDINATEID as COORDINA1_2_, coordinate0_.LAT as LAT2_, coordinate0_.LNG as LNG2_ from COORDINATES coordinate0_
    coord.findAll = 51ms, coordinates.size: 4704
    Hibernate: select coordconne0_.COORDCONNECTIONID as COORDCON1_3_, coordconne0_.DISTANCE as DISTANCE3_, coordconne0_.FROMCOORDID as FROMCOOR3_3_, coordconne0_.TOCOORDID as TOCOORDID3_ from COORDCONNECTIONS coordconne0_
    coordinateConnectionDao.findAll = 238ms; coordConnectioninates.size: 48132
    ```

    Hibernate annotations:

    ```java
    @Entity
    @Table(name = "STOPS")
    public class Stop implements Serializable {

        @Id
        @GeneratedValue
        @Column(name = "COORDINATEID")
        private Integer CoordinateID;

        @Column(name = "LAT")
        private double latitude;

        @Column(name = "LNG")
        private double longitude;
    }

    @Entity
    @Table(name = "COORDINATES")
    public class Coordinate {

        @Id
        @GeneratedValue
        @Column(name = "COORDINATEID")
        private Integer CoordinateID;

        @Column(name = "LAT")
        private double latitude;

        @Column(name = "LNG")
        private double longitude;
    }

    @Entity
    @Table(name = "COORDCONNECTIONS")
    public class CoordConnection {

        @Id
        @GeneratedValue
        @Column(name = "COORDCONNECTIONID")
        private Integer CoordinateID;

        /**
         * From Coordinate_id value
         */
        @Column(name = "FROMCOORDID", nullable = false)
        private int fromCoordID;

        /**
         * To Coordinate_id value
         */
        @Column(name = "TOCOORDID", nullable = false)
        private int toCoordID;

        //private Coordinate toCoordID;
    }
    ```

    Read the article

  • Project Euler 7 Scala Problem

    - by Nishu
    I was trying to solve Project Euler problem number 7 using Scala 2.8.

    The first solution I implemented takes ~8 seconds:

    ```scala
    def problem_7: Int = {
      var num = 17
      var primes = new ArrayBuffer[Int]()
      primes += 2
      primes += 3
      primes += 5
      primes += 7
      primes += 11
      primes += 13
      while (primes.size < 10001) {
        if (isPrime(num, primes)) primes += num
        if (isPrime(num + 2, primes)) primes += num + 2
        num += 6
      }
      return primes.last
    }

    def isPrime(num: Int, primes: ArrayBuffer[Int]): Boolean = {
      // if n == 2 return false;
      // if n == 3 return false;
      var r = Math.sqrt(num)
      for (i <- primes) {
        if (i <= r) {
          if (num % i == 0) return false
        }
      }
      return true
    }
    ```

    Later I tried the same problem without storing prime numbers in an ArrayBuffer. This takes 0.118 seconds:

    ```scala
    def problem_7_alt: Int = {
      var limit = 10001
      var count = 6
      var num: Int = 17
      while (count < limit) {
        if (isPrime2(num)) count += 1
        if (isPrime2(num + 2)) count += 1
        num += 6
      }
      return num
    }

    def isPrime2(n: Int): Boolean = {
      // if n == 2 return false;
      // if n == 3 return false;
      var r = Math.sqrt(n)
      var f = 5
      while (f <= r) {
        if (n % f == 0) {
          return false
        } else if (n % (f + 2) == 0) {
          return false
        }
        f += 6
      }
      return true
    }
    ```

    I tried using various mutable array/list implementations in Scala but was not able to make the first solution any faster. I do not think that storing Ints in an array of size 10001 should make the program this slow. Is there some better way to use lists/arrays in Scala?

    Read the article

  • PostgreSQL - Why are some queries on large datasets so incredibly slow

    - by Brad Mathews
    Hello, I have two types of queries I run often on two large datasets. They run much slower than I would expect them to.

    The first type is a sequential scan updating all records:

    ```sql
    Update rcra_sites Set street = regexp_replace(street,'/','','i')
    ```

    rcra_sites has 700,000 records. It takes 22 minutes from pgAdmin! I wrote a vb.net function that loops through each record and sends an update query for each record (yes, 700,000 update queries!) and it runs in less than half the time. Hmmm....

    The second type is a simple update with a relation and then a sequential scan:

    ```sql
    Update rcra_sites as sites Set violations='No'
    From narcra_monitoring as v
    Where sites.agencyid=v.agencyid and v.found_violation_flag='N'
    ```

    narcra_monitoring has 1,700,000 records. This takes 8 minutes. The query planner refuses to use my indexes. The query runs much faster if I start with set enable_seqscan = false;. I would prefer the query planner to do its job.

    I have appropriate indexes, and I have vacuumed and analyzed. I optimized my shared_buffers and effective_cache_size as best I know how, to use more memory, since I have 4GB. My hardware is pretty darn good. I am running v8.4 on Windows 7.

    Is PostgreSQL just this slow? Or am I still missing something? Thanks! Brad

    Read the article

  • Is there a standard mapping between JSON and Protocol Buffers?

    - by Daniel Earwicker
    From a comment on the announcement blog post: Regarding JSON: JSON is structured similarly to Protocol Buffers, but protocol buffer binary format is still smaller and faster to encode. JSON makes a great text encoding for protocol buffers, though -- it's trivial to write an encoder/decoder that converts arbitrary protocol messages to and from JSON, using protobuf reflection. This is a good way to communicate with AJAX apps, since making the user download a full protobuf decoder when they visit your page might be too much. It may be trivial to cook up a mapping, but is there a single "obvious" mapping between the two that any two separate dev teams would naturally settle on? If two products supported PB data and could interoperate because they shared the same .proto spec, I wonder if they would still be able to interoperate if they independently introduced a JSON reflection of the same spec. There might be some arbitrary decisions to be made, e.g. should enum values be represented by a string (to be human-readable a la typical JSON) or by their integer value? So is there an established mapping, and any open source implementations for generating JSON encoder/decoders from .proto specs?

    Read the article

  • Is it practical to program with your feet?

    - by bmm
    Has anyone tried using foot pedals in addition to the traditional keyboard and mouse combo to improve your effectiveness in the editor? Any actual experiences out there? Does it work, or is it just for carpal tunnel relief?

    I found one blog entry from a programmer who actually tried it:

        So now I can type using my feet for most of the modifier keys. I am using the pedals as I type this. I am still getting used to them, but the burning in my left wrist has definitely reduced. I think I can also type a little faster, but I am too lazy to do the speed tests with and without the pedals to verify this. On the negative side: working out where to put your feet when you aren't typing can be a little awkward; the pedals tend to move around the carpet, despite being metal and quite heavy (some small spikes might have helped); and although the travel on the pedals is small, they are surprisingly stiff.

    Another programmer's experience:

        Anybody with hand pain must get foot pedals, since they can remove a tremendous load from your hands. I have two foot pedals, and use one for the SHIFT key, and the other for the CONTROL key. (I still type META by hand.) I have found that in the process of using the Emacs text editor to compose computer programs, I tend to use the SHIFT, CONTROL and META keys constantly, and it is easy to remove most of this load from one's hands.

    Some foot switch products: Savant Elite Triple Foot Switch, FragPedal, Bilbo, Step On It!

    Read the article

  • Javascript scope chain

    - by Geromey
    Hi, I am trying to optimize my program. I think I understand the basics of closures. I am confused about the scope chain, though. I know that in general you want a short scope chain (so variables can be accessed quickly). Say I have the following object:

    ```javascript
    var my_object = (function () {
        // private variables
        var a_private = 0;

        return {
            // public variables
            a_public: 1,

            // public methods
            some_public: function () {
                debugger;
                alert(this.a_public);
                alert(a_private);
            }
        };
    })();
    ```

    My understanding is that if I am in the some_public method, I can access the private variables faster than the public ones. Is this correct? My confusion comes with the scope level of this. When the code stops at the debugger statement, Firebug shows the public variable inside the this keyword, and this is not inside a scope level. How fast is accessing this? Right now I am storing any this properties in local variables to avoid accessing them multiple times. Thanks very much!

    Read the article

  • Optimising RSS parsing on App Engine to avoid high CPU warnings

    - by Danny Tuppeny
    I'm pulling some RSS feeds into a datastore in App Engine to serve up to an iPhone app. I use cron to schedule updating the RSS every x minutes. Each task only parses one RSS feed (which has 15-20 items). I frequently get warnings about high CPU usage in the App Engine dashboard, so I'm looking for ways to optimise my code. Currently, I use minidom (since it's already there on App Engine), but I suspect it's not very efficient! Here's the code:

    ```python
    dom = minidom.parseString(urlfetch.fetch(url).content)
    if dom:
        items = []
        for node in dom.getElementsByTagName('item'):
            item = RssItem(
                key_name = self.getText(node.getElementsByTagName('guid')[0].childNodes),
                title = self.getText(node.getElementsByTagName('title')[0].childNodes),
                description = self.getText(node.getElementsByTagName('description')[0].childNodes),
                modified = datetime.now(),
                link = self.getText(node.getElementsByTagName('link')[0].childNodes),
                categories = [self.getText(category.childNodes) for category in node.getElementsByTagName('category')]
            )
            items.append(item)
        db.put(items)

    def getText(self, nodelist):
        rc = ''
        for node in nodelist:
            if node.nodeType == node.TEXT_NODE:
                rc = rc + node.data
        return rc
    ```

    There isn't much going on, but the scripts often take 2-6 seconds of CPU time, which seems a bit excessive for looping through 20-ish items and reading a few attributes.

    What can I do to make this faster? Is there anything particularly bad in the above code, or should I change to another way of parsing? Are there any libraries (that work on App Engine) that would be better, or would I be better off parsing the RSS myself?

    Read the article

  • How do we greatly optimize our MySQL database (or replace it) when using joins?

    - by jkaz
    Hi there, this is the first time I'm approaching an extremely high-volume situation. This is an ad server based on MySQL. However, the query that is used incorporates a lot of JOINs and is generally just slow. (This is Rails ActiveRecord, btw.)

    ```ruby
    sel = Ads.find(:all,
      :select => '*',
      :joins => "JOIN campaigns ON ads.campaign_id = campaigns.id
                 JOIN users ON campaigns.user_id = users.id
                 LEFT JOIN countries ON countries.campaign_id = campaigns.id
                 LEFT JOIN keywords ON keywords.campaign_id = campaigns.id",
      :conditions => [flashstr + "keywords.word = ? AND ads.format = ? AND campaigns.cenabled = 1 AND
                      (countries.country IS NULL OR countries.country = ?) AND
                      ads.enabled = 1 AND campaigns.dailyenabled = 1 AND users.uenabled = 1",
                      kw, format, viewer['country'][0]],
      :order => order,
      :limit => limit)
    ```

    My questions:

    1. Is there an alternative database like MySQL that has JOIN support, but is much faster? (I know there's Postgres, still evaluating it.)
    2. Otherwise, would firing up a MySQL instance, loading a local database into memory, and re-loading that every 5 minutes help?
    3. Otherwise, is there any way I could switch this entire operation to Redis or Cassandra, and somehow change the JOIN behavior to match the (non-JOIN-able) nature of NoSQL?

    Thank you!

    Read the article
