Search Results

Search found 19055 results on 763 pages for 'high performance'.

  • How to minimize total cost of shortest path tree

    - by Michael
    I have a directed acyclic graph with positive edge-weights. It has a single source and a set of targets (vertices furthest from the source). I find the shortest paths from the source to each target. Some of these paths overlap. What I want is a shortest path tree which minimizes the total sum of weights over all edges. For example, consider two of the targets. Given all edge weights equal, if they share a single shortest path for most of their length, then that is preferable to two mostly non-overlapping shortest paths (fewer edges in the tree equals lower overall cost). Another example: two paths are non-overlapping for a small part of their length, with high cost for the non-overlapping paths, but low cost for the long shared path (low combined cost). On the other hand, two paths are non-overlapping for most of their length, with low costs for the non-overlapping paths, but high cost for the short shared path (also, low combined cost). There are many combinations. I want to find solutions with the lowest overall cost, given all the shortest paths from source to target. Does this ring any bells with anyone? Can anyone point me to relevant algorithms or analogous applications? Cheers!
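    One way to frame this (my framing, not from the question): first compute shortest-path distances over the DAG in topological order and keep only the "tight" edges that lie on some shortest path. Every valid shortest-path tree is a subgraph of that tight-edge subgraph, so what remains is picking the minimum-total-weight subtree of it that covers all targets, which is a directed Steiner tree problem (NP-hard in general, so heuristics or exact search on small instances apply). A sketch of the first step:

        from collections import defaultdict

        def tight_edge_subgraph(edges, source):
            """edges: list of (u, v, w) for a DAG with positive weights.
            Returns distances and the edges lying on some shortest path."""
            adj, indeg, nodes = defaultdict(list), defaultdict(int), set()
            for u, v, w in edges:
                adj[u].append((v, w))
                indeg[v] += 1
                nodes.update((u, v))
            # Kahn's algorithm for a topological order.
            order = []
            queue = [n for n in nodes if indeg[n] == 0]
            while queue:
                u = queue.pop()
                order.append(u)
                for v, _ in adj[u]:
                    indeg[v] -= 1
                    if indeg[v] == 0:
                        queue.append(v)
            dist = {n: float("inf") for n in nodes}
            dist[source] = 0
            for u in order:                      # relax edges in topological order
                for v, w in adj[u]:
                    dist[v] = min(dist[v], dist[u] + w)
            tight = [(u, v, w) for u, v, w in edges
                     if dist[u] != float("inf") and dist[u] + w == dist[v]]
            return dist, tight

    "Directed Steiner tree" is probably the search term wanted here; restricted to the tight-edge subgraph, any approximation for it directly answers the stated objective.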

  • My timer code is failing when IAR is configured to do max optimization

    - by Vishal
    Hi, I have used Timer A on the MSP430 with high compiler optimization, and found that my timer code fails when high compiler optimization is used. When no optimization is used, the code works fine. This code is used to achieve a 1 ms timer tick; timeOutCNT is incremented in the interrupt. Following is the code:

        [code]
        // Disable interrupt and clear CCR0
        TIMER_A_TACTL = TIMER_A_TASSEL |        // set the clock source as SMCLK
                        TIMER_A_ID |            // set the divider to 8
                        TACLR |                 // clear the timer
                        MC_1;                   // continuous mode
        TIMER_A_TACTL &= ~TIMER_A_TAIE;         // timer interrupt disabled
        TIMER_A_TACTL &= 0;                     // timer interrupt flag disabled
        CCTL0 = CCIE;                           // CCR0 interrupt enabled
        CCR0 = 500;
        TIMER_A_TACTL &= TIMER_A_TAIE;          // enable timer interrupt
        TIMER_A_TACTL &= TIMER_A_TAIFG;         // enable timer interrupt
        TACTL = TIMER_A_TASSEL + MC_1 + ID_3;   // SMCLK, upmode

        timeOutCNT = 0;                         // timeOutCNT is increased in the timer interrupt
        while (timeOutCNT <= 1);                // delay of 1 millisecond

        TIMER_A_TACTL = TIMER_A_TASSEL |        // set the clock source as SMCLK
                        TIMER_A_ID |            // set the divider to 8
                        TACLR |                 // clear the timer
                        MC_1;                   // continuous mode
        TIMER_A_TACTL &= ~TIMER_A_TAIE;         // timer interrupt disabled
        TIMER_A_TACTL &= 0x00;                  // timer interrupt flag disabled
        [/code]

    Can anybody help me resolve this issue? Is there any other way to use Timer A so that it works correctly in optimized builds? Or am I using it wrongly to achieve a 1 ms interrupt? Thanks in advance. Vishal N

  • Is a red-black tree my ideal data structure?

    - by Hugo van der Sanden
    I have a collection of items (big rationals) that I'll be processing. In each case, processing will consist of removing the smallest item in the collection, doing some work, and then adding 0-2 new items (which will always be larger than the removed item). The collection will be initialised with one item, and work will continue until it is empty. I'm not sure what size the collection is likely to reach, but I'd expect it in the range of 1M-100M items. I will not need to locate any item other than the smallest. I'm currently planning to use a red-black tree, possibly tweaked to keep a pointer to the smallest item. However, I've never used one before, and I'm unsure whether my pattern of use fits its characteristics well.
    1) Is there a danger that the pattern of deletion from the left plus random insertion will affect performance, e.g. by requiring a significantly higher number of rotations than random deletion would? Or will delete and insert operations still be O(log n) with this pattern of use?
    2) Would some other data structure give me better performance, either because of the deletion pattern or by taking advantage of the fact that I only ever need to find the smallest item?
    Update: glad I asked; the binary heap is clearly a better solution for this case, and as promised it turned out to be very easy to implement. Hugo
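    For what it's worth, the workload described (pop the minimum, push zero to two larger items) is exactly the binary-heap API, and both operations stay O(log n) regardless of the skewed deletion pattern. A minimal sketch, assuming Python's heapq and Fraction stand in for whatever big-rational type and language are actually in use:

        import heapq
        from fractions import Fraction

        def process(start, step):
            """step(x) returns 0-2 new items, each strictly larger than x."""
            heap = [start]
            while heap:
                smallest = heapq.heappop(heap)   # O(log n) extract-min
                for item in step(smallest):
                    heapq.heappush(heap, item)   # O(log n) insert

        # Toy example: each item spawns up to two larger rationals, capped at 4.
        process(Fraction(1, 2),
                lambda x: [y for y in (x + Fraction(1, 3), x * 2) if y < 4])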

  • Boost::Mutex & Malloc

    - by M. Tibbits
    Hi all, I'm trying to use a faster memory allocator in C++. I can't use Hoard due to licensing / cost. I was using NEDMalloc in a single-threaded setting and got excellent performance, but I'm wondering if I should switch to something else: as I understand things, NEDMalloc is just a replacement for the C-based malloc() and free(), not the C++-based new and delete operators (which I use extensively). The problem is that I now need to be thread-safe, so I'm trying to malloc an object which is reference counted (to prevent excess copying), but which also contains a mutex pointer. That way, if you're about to delete the last copy, you first need to lock the pointer, then free the object, and lastly unlock and free the mutex. However, using malloc to create a boost::mutex appears impossible, because I can't initialize the private object: calling the constructor directly is verboten. So I'm left with this odd situation, where I'm using new to allocate the lock and nedmalloc to allocate everything else. But when I allocate a large amount of memory, I run into allocation errors (which disappear when I switch to malloc instead of nedmalloc, but then the performance is terrible). My guess is that this is due to fragmentation in the memory and an inability of nedmalloc and new to play nicely side by side. There has to be a better solution. What would you suggest?

  • How can I test that my hash function is good in terms of max-load?

    - by philcolbourn
    I have read through various papers on the 'balls and bins' problem, and it seems that if a hash function is working right (i.e. it is effectively a random distribution), then the following should/must be true if I hash n values into a hash table with n slots (or bins):
    - The probability that a bin is empty, for large n, is 1/e.
    - The expected number of empty bins is n/e.
    - The probability that a bin has k collisions is <= 1/k!.
    - The probability that a bin has at least k collisions is <= (e/k)**k.
    These look easy to check. But the max-load test (the maximum number of collisions with high probability) is usually stated vaguely. Most texts state that the maximum number of collisions in any bin is O(ln(n) / ln(ln(n))). Some say it is 3*ln(n) / ln(ln(n)). Other papers mix ln and log, usually without defining them, or state that log is log base e and then use ln elsewhere. Is ln the log to base e or base 2? Is this max-load formula right? And how big should n be to run a test? This lecture seems to cover it best, but I am no mathematician: http://pages.cs.wisc.edu/~shuchi/courses/787-F07/scribe-notes/lecture07.pdf BTW, 'with high probability' seems to mean 1 - 1/n.
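    An empirical check is quick to write. In these bounds ln is the natural log (a different base only changes the constant, so the O(·) form is unaffected), and for n balls in n bins the maximum load concentrates around ln(n)/ln(ln(n)). A sketch, modeling the hash under test as a uniformly random bin choice; substitute your_hash(key) % n where indicated to test a real hash function:

        import math
        import random
        from collections import Counter

        def max_load_trial(n):
            """Throw n balls into n bins and return the largest bin count.
            Replace randrange(n) with your_hash(key) % n to test a real hash."""
            counts = Counter(random.randrange(n) for _ in range(n))
            return max(counts.values())

        for n in (10**4, 10**5, 10**6):
            observed = max(max_load_trial(n) for _ in range(5))
            predicted = math.log(n) / math.log(math.log(n))
            print(f"n={n:>8}  max load={observed}  ln(n)/ln(ln(n))={predicted:.2f}")

    Even at n around 10**6 the observed maximum typically lands within a small constant factor of ln(n)/ln(ln(n)), which is all the O(·) statement promises.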

  • Delphi 6 OleServer.pas Invoke memory leak

    - by Mike Davis
    There's a bug in Delphi 6, for which you can find some references online: when you import a TLB, the order of the parameters in an event invocation is reversed. It is reversed once in the imported header and once in TServerEventDispatch.Invoke. You can find more information about it here: http://cc.embarcadero.com/Item/16496 Somewhat related to this issue, there appears to be a memory leak in TServerEventDispatch.Invoke with a parameter that is a Variant of type VarArray (maybe others, but this is the most obvious one I could see). The Invoke code copies the args into a VarArray to be passed to the event handler and then copies the VarArray back to the args after the call; the relevant code is pasted below:

        // Set our array to appropriate length
        SetLength(VarArray, ParamCount);
        // Copy over data
        for I := Low(VarArray) to High(VarArray) do
          VarArray[I] := OleVariant(TDispParams(Params).rgvarg^[I]);
        // Invoke Server proxy class
        if FServer <> nil then
          FServer.InvokeEvent(DispID, VarArray);
        // Copy data back
        for I := Low(VarArray) to High(VarArray) do
          OleVariant(TDispParams(Params).rgvarg^[I]) := VarArray[I];
        // Clean array
        SetLength(VarArray, 0);

    There are some obvious work-arounds in my case: if I skip the copying back when the parameter is a VarArray, it fixes the leak. To preserve the functionality, I thought I should copy the data in the array, rather than the Variant, back to the params, but that can get complicated, since the array can hold other Variants, and it seems to me that would need to be done recursively. Since a change in OleServer will have a ripple effect, I want to make sure my change here is strictly correct. Can anyone shed some light on exactly why memory is being leaked here? I can't seem to look up the call stack any lower than TServerEventDispatch.Invoke (why is that?). I imagine that in the process of copying the Variant holding the VarArray back to the param list, a reference to the array is added, preventing it from being released as normal, but that's just a rough guess and I can't track down the code to back it up. Maybe someone with a better understanding of all this could shed some light?

  • Which Java library for Binary Decision Diagrams?

    - by reprogrammer
    A Binary Decision Diagram (BDD) is a data structure for representing boolean functions. I'd like to use this data structure in a Java program. My search for Java-based BDD libraries turned up the following packages: Java Decision Diagram Libraries, JavaBDD, and JDD. If you know of any other BDD libraries available for Java programs, please let me know so that I can add them to the list above. If you have used any of these libraries, please tell me about your experience with the library. In particular, I'd like you to compare the available libraries along the following dimensions:
    - Quality. Is the library mature and reasonably bug free?
    - Performance. How do you evaluate the performance of the library?
    - Support. Could you easily get support whenever you encountered a problem with the library? Was the library well documented?
    - Ease of use. Was the API well designed? Could you install and use the library quickly and easily?
    Please mention the version of the library that you are evaluating.

  • jquery-sortable using the behavior of a linked list

    - by BabaBooey
    I suspect I'm not looking at this issue in the right way, so here goes. I have essentially a linked list of data on a web page (http://en.wikipedia.org/wiki/Linked_list) that I'd like to manipulate with traditional linked-list behavior (i.e. just updating the reference/id of the "next" object) for performance reasons. Where this gets a bit tricky is that I'd ideally like to use jQuery's sortable to do this. The user would drag something up or down, and I could just make an Ajax call to the server with the id of the object that moved and the new parent id of that object (and then behind the scenes I could figure out how to reconnect things... maybe I need more data than that...). But every example I've seen where sortable is used sends the whole re-indexed list to the database to update, which seems unnecessary to me. With a linked list, changing an element's "index" requires only 3 updates, which, depending on the size of the list, could be a big performance savings. Anyone have an example of what I'm trying to do... am I too far in left field?
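    The "3 updates" claim is easy to make concrete. If each row stores the id of the node that follows it (or precedes it; the bookkeeping is symmetric), then moving node x so it sits directly after node p touches exactly three links. A sketch of the server-side logic, assuming a plain dict of next-pointers stands in for the table and that x is not the head:

        def move_after(next_id, x, p):
            """next_id maps node id -> id of the following node (None at the tail).
            Relocate x to sit directly after p: exactly three writes."""
            prev = next(k for k, v in next_id.items() if v == x)
            next_id[prev] = next_id[x]   # 1. unlink x from its old position
            next_id[x] = next_id[p]      # 2. x adopts p's old successor
            next_id[p] = x               # 3. p points at x

        order = {"a": "b", "b": "c", "c": "d", "d": None}
        move_after(order, "d", "a")      # list is now a, d, b, c

    In SQL those three writes are three single-row UPDATEs keyed by id, so the Ajax payload really does only need the moved id and the new predecessor's id.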

  • Which articles should I read before starting to make my custom-drawn WinForms app?

    - by Dmitriy Matveev
    Hello! I'm currently developing a Windows Forms application with a lot of user controls. Some of them are just custom-drawn buttons or panels, and some are compositions of these buttons and panels inside FlowLayoutPanels and TableLayoutPanels. The window itself is also custom drawn. I don't have much experience in WinForms development, but I've made a proper decomposition of the proposed design into user controls, and the implementation is already almost finished. I've solved many of the problems that arose during development with the help of Google, MSDN, SO and several dirty hacks (when nothing else helped), and I'm still experiencing some of them. There are a lot of gaps in my knowledge, since I don't know the answers to questions like: When should I use things like double buffering, suspended layout, suspended redraw? What should I do with controls which shouldn't be visible at some moment? What are the common performance pitfalls (I think I've fallen into several)? So I think there should be some great articles which can give enough knowledge to avoid the most common problems and improve the performance and maintainability of my application. Maybe some of you can recommend a few?

  • What are the benefits of the different PHP compression libraries?

    - by Christopher W. Allen-Poole
    I've been looking into ways to compress PHP libraries, and I've found several which might be useful, but I really don't know much about them. I've specifically been reading about bcompiler and PHAR. Is there any performance benefit to either of these? Are there any "gotchas" I need to watch out for? What are the relative benefits? Does either of them add to or detract from performance? I'm also interested in learning of other libraries which might be out there but are not obvious in the documentation. As an aside, does anyone happen to know whether these work more like zip files which just happen to have the code in there, or whether they operate more like Python's pre-compiling, which actually runs a pseudo-compiler? ======================= EDIT ======================= I've been asked, "What are you trying to accomplish?" Well, I suppose the answer is that this is all hypothetical. It is a combination of these: What if my pet project becomes the most popular web project on earth and I want to distribute it quickly and easily? (Hey, a man can dream, right?) It also seems that if PHAR can be used easily, it would be the best way to create a subversion snapshot. Python has this really cool pre-compiling policy; I wonder if PHP has something like that? These libraries seem to do something similar. Will they do that? Hey, these libraries seem pretty neat, but I'd like clarification on the differences, as they seem to do the same thing.

  • Flash External Interface issue with Firefox

    - by majestiq
    I am having a hard time getting ExternalInterface to work in Firefox. I am trying to call an AS3 function from JavaScript. The SWF is set up with the right callbacks, and it works in IE. I am using AC_RunActiveContent.js to embed the SWF into my page; however, I have modified it to add an ID to the object / embed tags. Below are the object and embed tags generated for IE and for Firefox, respectively:

        <object codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=9,0,0,0"
                width="400" height="400" align="middle" id="jpeg_encoder2" name="jpeg_encoder3"
                classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000">
          <param name="movie" value="/jpeg_encoder/jpeg_encoder3.swf" />
          <param name="quality" value="high" />
          <param name="play" value="true" />
          <param name="loop" value="true" />
          <param name="scale" value="showall" />
          <param name="wmode" value="window" />
          <param name="devicefont" value="false" />
          <param name="bgcolor" value="#ffffff" />
          <param name="menu" value="false" />
          <param name="allowFullScreen" value="false" />
          <param name="allowScriptAccess" value="always" />
        </object>

        <embed width="400" height="400" src="/jpeg_encoder/jpeg_encoder3.swf" quality="high"
               pluginspage="http://www.macromedia.com/go/getflashplayer" align="middle"
               play="true" loop="true" scale="showall" wmode="window" devicefont="false"
               id="jpeg_encoder2" bgcolor="#ffffff" name="jpeg_encoder3" menu="false"
               allowFullScreen="false" allowScriptAccess="always"
               type="application/x-shockwave-flash">
        </embed>

    I am calling the function like this...

        <script>
        try {
            document.getElementById('jpeg_encoder2').processImage(z);
        } catch (e) {
            alert(e.message);
        }
        </script>

    In Firefox, I get an error saying "document.getElementById("jpeg_encoder2").processImage is not a function". Any ideas?

  • SQL Selects on subsets

    - by Adam
    I need to check if a row exists in a database; however, I am trying to find the way to do this that offers the best performance. This is best summarised with an example. Let's assume I have the following table:

        dbo.Person(
            FirstName varchar(50),
            LastName varchar(50),
            Company varchar(50)
        )

    Assume this table has millions of rows, but ONLY the column Company has an index. I want to find out if a particular combination of FirstName, LastName and Company exists. I know I can do this:

        IF EXISTS(SELECT 1 FROM dbo.Person
                  WHERE FirstName = @FirstName
                    AND LastName = @LastName
                    AND Company = @Company)
        BEGIN
            ....
        END

    However, unless I'm mistaken, that will do a full table scan. What I'd really like is a query that utilises the index. With the table above, I know that the following query will have great performance, since it uses the index:

        SELECT * FROM dbo.Person WHERE Company = @Company

    Is there any way to make the search run only on that subset of the data? E.g. something like this:

        SELECT * FROM (
            SELECT * FROM dbo.Person WHERE Company = @Company
        ) WHERE FirstName = @FirstName AND LastName = @LastName

    That way, it would only be scanning a much narrower collection of data. I know the query above won't work, but is there a query that would? Oh, and I am unable to create temporary tables, as the user only has read access.

  • Perl Regex - Condensing groups of find/replace

    - by brydgesk
    I'm using Perl to perform some file cleansing and am running into some performance issues. One of the major parts of my code involves standardizing name fields. I have several sections that look like this:

        sub substitute_titles {
            my ($inStr) = @_;
            ${$inStr} =~ s/ PHD./ PHD /;
            ${$inStr} =~ s/ P H D / PHD /;
            ${$inStr} =~ s/ PROF./ PROF /;
            ${$inStr} =~ s/ P R O F / PROF /;
            ${$inStr} =~ s/ DR./ DR /;
            ${$inStr} =~ s/ D.R./ DR /;
            ${$inStr} =~ s/ HON./ HON /;
            ${$inStr} =~ s/ H O N / HON /;
            ${$inStr} =~ s/ MR./ MR /;
            ${$inStr} =~ s/ MRS./ MRS /;
            ${$inStr} =~ s/ M R S / MRS /;
            ${$inStr} =~ s/ MS./ MS /;
            ${$inStr} =~ s/ MISS./ MISS /;
        }

    I'm passing by reference to try to get at least a little speed, but I fear that running so many (literally hundreds) of specific string replacements on tens of thousands (likely hundreds of thousands, eventually) of records is going to hurt performance. Is there a better way to implement this kind of logic than what I'm doing currently? Thanks. Edit: Quick note, not all the replace functions just remove periods and spaces. There are string deletions, soundex groups, etc.
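    For the straightforward substitutions, the usual optimization is a single pass with one alternation built from a lookup table, so each record is scanned once instead of once per pattern. A sketch of the idea in Python (the same shape works in Perl with a hash plus one compiled alternation); the table here is a made-up subset, not the full rule set, and it treats the patterns as literals rather than regexes:

        import re

        # Hypothetical subset of the normalization table.
        TITLES = {
            " PHD.": " PHD ", " P H D ": " PHD ",
            " DR.": " DR ",   " D.R.": " DR ",
            " MR.": " MR ",   " MRS.": " MRS ",
        }

        # Longest alternatives first so overlapping keys resolve predictably.
        pattern = re.compile("|".join(
            re.escape(k) for k in sorted(TITLES, key=len, reverse=True)))

        def substitute_titles(s):
            return pattern.sub(lambda m: TITLES[m.group(0)], s)

        print(substitute_titles("JOHN SMITH PHD. AND JANE DOE MRS."))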

  • Should I persist images on EBS or S3?

    - by javanes
    I am migrating my Java, Tomcat, MySQL server to AWS EC2. I have already attached an EBS volume for storing the MySQL data. In my web application people may upload images, so I need to persist them. There are two alternatives in my mind: save uploaded images to the EBS volume, or use the S3 service. The following are my notes; please be skeptical about them, as my expertise is not in servers but in software development.
    - EBS plus: S3 storage is more expensive (0.15 $/GB vs. 0.10 $/GB).
    - S3 plus: serving statics from EBS may influence my web server's performance negatively. Is this true? Does serving images affect server performance notably? With S3, my server is not responsible for serving statics.
    - S3 plus: serving statics from EBS will incur I/O costs; they will probably be minor.
    - EBS plus: people say EBS is faster.
    - S3 plus: people say S3 is safer for persistence.
    - EBS plus: no need to learn an API; it is straightforward to save the images to the EBS volume.
    Namely, I cannot decide; I will be happy if you guide me. Thanks

  • UNIX-style RegExp Replace running extremely slowly under Windows. Help? EDIT: Negative lookahead assertion

    - by John Sullivan
    I'm trying to run a Unix regexp on every log file in a 1.12 GB directory, then replace the matched pattern with ''. A test run on a 4 MB file took about 10 minutes, but worked. Obviously something is murdering performance by several orders of magnitude. Find: ^(?!.*155[0-2][0-9]{4}\s.*).*$ -- NOTE: match any line NOT containing 155[0-2]NNNN, where N is a digit 0-9. Replace with: ''. Is there some justifiable reason for my regexp to take this long to replace matching text, or is the program I am using (this is Windows / a program called "grepWin") most likely poorly optimized? Thanks. UPDATE: I am noticing that searching for ^(155[0-2]).*$ takes ~7 seconds on a 5.6 MB file with 77 matches. Adding the negative lookahead assertion, ?!, so that the regexp becomes ^(?!155[0-2]).*$, causes it to take at least 5-10 minutes; granted, there will be thousands and thousands of matches. Should a negative lookahead assertion be that detrimental to performance, and/or a large quantity of matches?
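    Incidentally, the same job is usually far faster as a line filter than as a whole-file negative-lookahead substitution: deleting the non-matching lines is equivalent to keeping the matching ones, and a keep-filter does one cheap search per line instead of evaluating a lookahead at every position. A sketch, with the file paths as placeholders:

        import re

        keep = re.compile(r"155[0-2][0-9]{4}\s")  # lines to keep mention 155[0-2]NNNN

        def filter_log(src_path, dst_path):
            # Stream line by line; same effect as replacing non-matching lines with ''.
            with open(src_path) as src, open(dst_path, "w") as dst:
                for line in src:
                    if keep.search(line):
                        dst.write(line)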

  • Change flash src with jquery?

    - by Elliott
    Hi, I have a Flash menu showing a few links, but when the user is logged in I want to change the menu from menu1 to menu2, so that it will display "My Account" rather than "Signup". The code below is for my Flash:

        <div id="menu">
          <object classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000"
                  codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=9,0,0,0"
                  width="825" height="69" id="menu1" align="middle">
            <param name="allowScriptAccess" value="sameDomain" />
            <param name="allowFullScreen" value="false" />
            <param name="movie" value="menu1.swf" />
            <param name="quality" value="high" />
            <param name="wmode" value="transparent" />
            <param name="bgcolor" value="#ffffff" />
            <embed src="menu1.swf" quality="high" wmode="transparent" bgcolor="#ffffff"
                   width="825" height="69" name="menu1" align="middle"
                   allowScriptAccess="sameDomain" allowFullScreen="false"
                   type="application/x-shockwave-flash"
                   pluginspage="http://www.macromedia.com/go/getflashplayer" />
          </object>
        </div>

    PHP:

        if (loggedin()) {
            echo '<script type="text/javascript"> CHANGE FLASH LINK HERE </script>';
        }

    Could this be done without having to write all the above code out again? Thanks :)

  • Writing a JavaScript zip code validation function

    - by mkoryak
    I would like to write a JavaScript function that validates a zip code by checking whether the zip code actually exists. Here is a list of all zip codes: http://www.census.gov/tiger/tms/gazetteer/zips.txt (I only care about the 2nd column). This is really a compression problem. I would like to do this for fun. OK, now that's out of the way, here is a list of optimizations over a straight hashtable that I can think of; feel free to add anything I have not thought of:
    - Break the zip code into 2 parts, the first 2 digits and the last 3 digits. Make a giant if-else statement first checking the first 2 digits, then checking ranges within the last 3 digits.
    - Or, convert the zips into hex, and see if I can do the same thing using smaller groups.
    - Find out whether, within the range of all valid zip codes, there are more valid zip codes than invalid ones. Write the above code targeting the smaller group.
    - Break up the hash into separate files, and load them via Ajax as the user types in the zip code. So perhaps break it into 2 parts, the first for the first 2 digits, the second for the last 3.
    Lastly, I plan to generate the JavaScript files using another program, not by hand. Edit: performance matters here. I do want to use this, if it doesn't suck. Performance of the JavaScript code execution + download time. Edit 2: JavaScript-only solutions please. I don't have access to the application server, plus, that would make this into a whole other problem =)
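    One more option in the same spirit (my suggestion, not from the question): a five-digit zip is just an integer below 100,000, so the whole validity set fits in a 100,000-bit bitmap, about 12.5 KB raw (roughly 17 KB once base64-encoded for embedding in a generated file). A sketch of the generator side in Python; the JavaScript side would decode the string and index it the same way:

        import base64

        def build_bitmap(valid_zips):
            bits = bytearray(100000 // 8 + 1)     # one bit per possible 5-digit zip
            for z in valid_zips:
                bits[z >> 3] |= 1 << (z & 7)
            return bits

        def is_valid(bits, z):
            return bool(bits[z >> 3] & (1 << (z & 7)))

        bitmap = build_bitmap({90210, 10001, 60601})
        print(is_valid(bitmap, 90210), is_valid(bitmap, 12345))   # True False

        # Emit as base64 text for the generated .js file.
        encoded = base64.b64encode(bytes(bitmap)).decode("ascii")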

  • Insert MANY key-value pairs fast into Berkeley DB with hash access

    - by Kungi
    Hi, I'm trying to build a hash with Berkeley DB, which will contain many tuples (approx. 18 GB of key-value pairs), but in all my tests the performance of the insert operations degrades drastically over time. I've written this script to test the performance:

        #include <iostream>
        #include <db_cxx.h>
        #include <ctime>

        #define MILLION 1000000

        int main()
        {
            long long a = 0;
            long long b = 0;
            int passes = 0;
            int i = 0;
            u_int32_t flags = DB_CREATE;

            Db* dbp = new Db(NULL, 0);
            dbp->set_cachesize(0, 1024 * 1024 * 1024, 1);
            int ret = dbp->open(NULL, "test.db", NULL, DB_HASH, flags, 0);

            time_t time1 = time(NULL);
            while (passes < 100) {
                while (i < MILLION) {
                    Dbt key(&a, sizeof(long long));
                    Dbt data(&b, sizeof(long long));
                    dbp->put(NULL, &key, &data, 0);
                    a++;
                    b++;
                    i++;
                }
                DbEnv* dbep = dbp->get_env();
                int tmp;
                dbep->memp_trickle(50, &tmp);
                i = 0;
                passes++;
                std::cout << "Inserted one million --> pass: " << passes
                          << " took: " << time(NULL) - time1 << "sec" << std::endl;
                time1 = time(NULL);
            }
        }

    Perhaps you can tell me why, after some time, the put operation takes increasingly longer, and maybe how to fix this. Thanks for your help, Andreas

  • Better way of looping to detect change.

    - by Dremation
    As of now I'm using a while(true) loop to detect changes in memory. The problem with this is that it kills the application's performance. I have a list of 30 pointers that need to be checked as rapidly as possible for changes, without sacrificing a huge amount of performance. Anyone have ideas on this?

        memScan = new Thread(ScanMem);

        public static void ScanMem()
        {
            int i = addy.Length;
            while (true)
            {
                Thread.Sleep(30000); // I do this to cut down on CPU usage
                for (int j = 0; j < i; j++)
                {
                    string[] values = addy[j].Split(new char[] { Convert.ToChar(",") });
                    // MessageBox.Show(values[2]);
                    try
                    {
                        if (Memory.Scanner.getIntFromMem(hwnd,
                                (IntPtr)Convert.ToInt32(values[0], 16), 32).ToString()
                            != values[1].ToString())
                        {
                            // OK, it changed; let's do our work
                            if (Globals.Working) return;
                            SomeFunction("Results: " + values[2].ToString(), "Memory");
                            Globals.Working = true;
                        }
                    }
                    catch { }
                }
            }
        }

  • Groovy / Scala / Java under the hood

    - by Jack
    I used Java for six or seven years, then some months ago I discovered Groovy and started to save a lot of typing. Then I wondered how certain things worked under the hood (because Groovy performance is really poor) and understood that, to give you dynamic typing, every Groovy object carries a MetaClass that handles all the things the JVM couldn't handle by itself. Of course this introduces a layer between what you write and what you execute that slows everything down. Then some days ago I started getting some information about Scala. How do these two languages compare in their bytecode translations? How much do they add to the normal structure that would be obtained from plain Java code? I mean, Scala is statically typed, so wrappers of Java classes should be lighter, since many things are checked at compile time, but I'm not sure about the real differences in what goes on inside. (I'm not talking about the functional aspect of Scala compared to the other ones; that's a different thing.) Can someone enlighten me? From WizardOfOdds it seems that the only way to get less typing and the same performance would be to write an intermediate translator that translates something into Java code (letting javac compile it) without altering how things are executed, just adding syntactic sugar without caring about other fallbacks of the language itself.

  • How can I work around WinXP using ports 1025-5000 as ephemeral?

    - by Chris Dolan
    If you create a TCP client socket with port 0 instead of a non-zero port, then the operating system chooses any free ephemeral port for you. Most OSes choose ephemeral ports from the IANA dynamic port range of 49152-65535. However in Windows Server 2003 and earlier (including XP) Microsoft used ports 1025-5000 as the ephemeral range, according to their bind() documentation. I run multiple Java services on the same hardware. On rare occasions, this range collides with well-known ports that I use for other services (e.g. port 4160 for Jini discovery). While rare, this has caused real problems. Is there any easy way to tell Windows or Java to use a different port range for client sockets? Microsoft's docs indicate that I can change the high end of that range via the MaxUserPort TcpIP registry setting, but I see no way to change the low end. Update: I've made some progress on this. It looks like Microsoft has a concept of reserved ports that are exceptions to the ephemeral port range. There's a registry setting that lets you change this permanently and apparently there must be an API to do the same thing because there's a data structure that holds high/low values for reserved port ranges, but I can't find the actual function call anywhere... The registry solution may work, but now I'm fixated on this API.
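    One application-level workaround (my suggestion; the registry route is what the question describes) is to skip the OS's ephemeral allocation entirely and bind each client socket to a port from a range you control before connecting. A sketch of the idea in Python; Java's java.net.Socket supports the same trick via the constructor that takes a local address and local port:

        import socket

        def connect_from_range(host, port, lo=49152, hi=65535):
            """Bind the client socket to a source port we choose, then connect."""
            for local_port in range(lo, hi + 1):
                s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                try:
                    s.bind(("", local_port))   # claim a port outside 1025-5000
                except OSError:                # port already in use; try the next
                    s.close()
                    continue
                s.connect((host, port))
                return s
            raise RuntimeError("no free local port in range")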

  • How to Avoid PHP Object Nesting/Creation Limit?

    - by Will Shaver
    I've got a handmade ORM in PHP that seems to be bumping up against an object limit and causing PHP to crash. Here's a simple script that will cause crashes:

        <?
        class Bob {
            protected $parent;
            public function Bob($parent) {
                $this->parent = $parent;
            }
            public function __toString() {
                if ($this->parent)
                    return (string) "x " . $this->parent;
                return "top";
            }
        }

        $bobs = array();
        for ($i = 1; $i < 40000; $i++) {
            $bobs[] = new Bob($bobs[$i - 1]);
        }
        ?>

    Even running this from the command line will cause issues. Some boxes take more than 40,000 objects. I've tried it on Linux/Apache (fail), but my app runs on IIS/FastCGI. On FastCGI this causes the famous "The FastCGI process exited unexpectedly" error. Obviously 20k objects is a bit high, but it crashes with far fewer objects if they have data and nested complexity. FastCGI isn't the issue; I've tried running it from the command line. I've tried setting the memory limit to something really high (6,000 MB) and to something really low (24 MB). If I set it low enough I'll get the "allocated memory size xxx bytes exhausted" error. I'm thinking that it has to do with the number of functions that are called: some kind of nesting prevention. I didn't think that my ORM's nesting was that complicated, but perhaps it is. I've got some pretty clear cases where if I load just ONE more object it dies, but it loads in under 3 seconds if it works.

  • Referencing object's identity before submitting changes in LINQ

    - by Axarydax
    Hi, is there a way of knowing the ID of the identity column of a record inserted via InsertOnSubmit beforehand, e.g. before calling the data context's SubmitChanges? Imagine I'm populating some kind of hierarchy in the database, but I wouldn't want to submit changes on each recursive call for each child node (e.g. if I had a Directories table and a Files table and were recreating my filesystem structure in the database). I'd like to do it this way: I create a Directory object, set its name and attributes, then InsertOnSubmit it into the DataContext.Directories collection, then reference Directory.ID in its child Files. Currently I need to call SubmitChanges to insert the 'directory' into the database so that the database mapping fills its ID column. But this creates a lot of transactions and accesses to the database, and I imagine that if I did this inserting in a batch, the performance would be better. What I'd like to do is to somehow use Directory.ID before committing changes, create all my File and Directory objects in advance, and then do one big submit that puts all the stuff into the database. I'm also open to solving this problem via a stored procedure; I assume the performance would be even better if all operations were done directly in the database.

  • mysql subselect alternative

    - by Arnold
    Hi, let's say I am analyzing how high school sports records affect school attendance. I have a table in which each row corresponds to a high school basketball game. Each game has an away team id and a home team id (FKs to another "team" table), a home score, an away score and a date. I am writing a query that matches attendance with this season's basketball games. My sample output will be (#_students_missed_class, day_of_game, home_team, away_team, home_team_wins_this_season, away_team_wins_this_season). I now want to add how each team did the previous season to my analysis. I have their previous season stored in the game table, so I should be able to accomplish that with a subselect. So in my main select statement I add the subselect:

        SELECT COUNT(*)
        FROM game_table
        WHERE game_table.date BETWEEN 'start of previous season' AND 'end of previous season'
          AND (   (game_table.home_team = team_table.id AND game_table.home_score > game_table.away_score)
               OR (game_table.away_team = team_table.id AND game_table.away_score > game_table.home_score))

    In this case team_table.id refers to the id of the home team, so I now have all their wins calculated for the previous year. This method of calculation is neither time- nor resource-efficient: the EXPLAIN output shows ALL in the Type field, no Key is used, and the query times out. I'm not sure how I can write a more efficient query with a subselect. It seems preposterously inefficient to have to write 4 of these queries (for home wins, home losses, away wins, away losses). I am sure this could be more lucid; I'll absolutely add color tomorrow if anyone has questions.

  • Rails' page caching vs. HTTP reverse proxy caches

    - by John Topley
    I've been catching up with the Scaling Rails screencasts. In episode 11, which covers advanced HTTP caching (using reverse proxy caches such as Varnish and Squid etc.), they recommend only considering a reverse proxy cache once you've already exhausted the possibilities of page, action and fragment caching within your Rails application (as well as memcached etc., but that's not relevant to this question). What I can't quite understand is how using an HTTP reverse proxy cache can provide a performance boost for an application that already uses page caching. To simplify matters, let's assume that I'm talking about a single host here. This is my understanding of how both techniques work (maybe I'm wrong):
    - With page caching, the Rails process is hit initially and then generates a static HTML file that is served directly by the web server for subsequent requests, for as long as the cache for that request is valid. If the cache has expired, then Rails is hit again and the static file is regenerated with the updated content, ready for the next request.
    - With an HTTP reverse proxy cache, the Rails process is hit when the proxy needs to determine whether the content is stale or not. This is done using various HTTP headers such as ETag, Last-Modified etc. If the content is fresh, then Rails responds to the proxy with an HTTP 304 Not Modified and the proxy serves its cached content to the browser, or even better, responds with its own HTTP 304. If the content is stale, then Rails serves the updated content to the proxy, which caches it and then serves it to the browser.
    If my understanding is correct, then doesn't page caching result in fewer hits to the Rails process? There isn't all that back and forth to determine whether the content is stale, meaning better performance than reverse proxy caching. Why might you use both techniques in conjunction?
