Search Results

Search found 6104 results on 245 pages for 'fast esp'.

Page 224 of 245

  • Sysadmin 101: How can I figure out why my server crashes and monitor performance?

    - by bflora
    I have a Drupal-powered site that seems to have never-ending performance problems. It was butt-slow about 5 months ago. I brought in some guys who installed nginx for anonymous visitors, ajaxified a few queries so they wouldn't fire during page load, and helped me find a few bottlenecks in the code. For about a month, the site was significantly faster, though not "fast" by any stretch of the word. Meanwhile, I'm now shelling out $400/month to Slicehost to host a site that gets fewer than 5,000 uniques a day. Yes, you read that right. Go Drupal. Recently the site started crashing again and is slow again. I can't afford to hire people to come in, study my code from top to bottom, and make changes that may or may not help anymore. And I can't afford to throw more hardware at the problem. So I need to figure out what the problem is myself. Questions: When Apache crashes, is it possible to find out what caused it to crash? There has to be a way, right? If so, how can I do this? Is there software I can use that will tell me which process caused my server to die? (e.g. "Apache crashed because someone visited page X." or "Apache crashed because you were importing too many RSS items from feed X.") There's got to be a way to learn this, right? What's a good, noob-friendly way to monitor my current Apache performance? My developer friends tell me to "just use top, dude," but top shows me a bunch of numbers without any context. I have no clue what qualifies as a bad number or a good number in top, or which processes are relevant and which aren't. Are there any noob-friendly server monitoring tools out there? Ideally, I could have a page that would give me a color-coded indicator of how Apache is performing and then show me a list of processes or pages that are sucking right now. This way, I could know when performance is bad and then what's causing it to be so bad. Why does PHP memory matter? My app apparently has a 30MB memory footprint. Will it run faster if I bring that number down? Thanks for any advice. I spent a year or so trying to boost my advertising income so I could hire a contractor to solve my performance woes. I didn't want to have to learn all this sysadmin voodoo. I'm now resigned to the fact that I might not have a choice.
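
    A few hedged starting points for the crash questions (the log paths are assumptions and vary by distro; on RHEL/CentOS the error log is usually /var/log/httpd/error_log):

      # The last things Apache said before it died; segfault notices and MaxClients warnings show up here
      tail -n 200 /var/log/apache2/error.log

      # Was Apache or MySQL killed by the kernel's out-of-memory killer?
      dmesg | grep -iE "killed process|out of memory"

      # Live CPU/memory view; press Shift+M inside top to sort processes by memory use
      top

    For a friendlier picture than raw top, tools such as Munin or Cacti graph Apache and MySQL activity over time, which makes "when did it get bad, and what was busy then" much easier to answer.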

    Read the article

  • simple jquery dropdown - clearTimeout, setTimeout issues

    - by user210757
    HTML: <ul class="topnav"> <li><a href="#"><span>One</span></a></li> <li><a href="#"><span>Two</span></a></li> <li> <li><a href="#"><span>Three</span></a></li> <ul class="subnav"> <li><a href="#">A</a></li> <li><a href="#">B</a></li> <li><a href="#">C</a></li> </ul> </li> </ul> jquery: var timeout = null; $(document).ready(function() { $("ul.topnav li").mouseover(function() { if (timeout) clearTimeout(timeout); $(this).find("ul.subnav").slideDown('fast').show(); }).mouseout(function() { timeout = setTimeout(closemenu, 500); }); // sub menu mouseovers keep dropdown open $("ul.subnav li").mouseover(function() { if (timeout) clearTimeout(timeout); } ).mouseout(function() { timeout = setTimeout(closemenu, 500); // alert(timeout); }); // any click closes $(document).click(closemenu); }); // Closes all open menus function closemenu() { $('ul.subnav:visible').hide(); if (timeout) clearTimeout(timeout); } I'm having issues with timeout. In use, if i mouseover "Three", the dropdown stays out forever. if i mouseover "A", dropdown will stay out forever, but if I mouseover "B" or anything lower, the menu will close on me. if you uncomment "// alert(timeout);" it gets there for B, (and A) but timeout will have a value. why is this? i thought clearTimeout would null the timeout variable?
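
    One thing worth knowing here: clearTimeout() cancels the pending callback, but it does not reset the variable, so if (timeout) stays truthy forever afterwards; the stray extra <li> before "Three" in the markup may also be worth fixing, since it leaves the subnav nested oddly. A minimal sketch of the usual fix (same names as above; the only addition is nulling the variable after clearing it):

      function cancelClose() {
          if (timeout) {
              clearTimeout(timeout); // stops the pending closemenu() call...
              timeout = null;        // ...but only this line makes the if-guard false again
          }
      }

      $("ul.topnav li, ul.subnav li").mouseover(cancelClose).mouseout(function () {
          timeout = setTimeout(closemenu, 500);
      });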

    Read the article

  • Tracking upstream svn changes with git-svn and github?

    - by Joseph Turian
    How do I track upstream SVN changes using git-svn and github? I used git-svn to convert an SVN repo to git on github: $ git svn clone -s http://svn.osqa.net/svnroot/osqa/ osqa $ cd osqa $ git remote add origin [email protected]:turian/osqa.git $ git push origin master I then made a few changes in my git repo, committed, and pushed to github. Now, I am on a new machine. I want to take upstream SVN changes, merge them with my github repo, and push them to my github repo. This documentation says: "If you ever lose your local copy, just run the import again with the same settings, and you’ll get another working directory with all the necessary SVN metainfo." So I did the following. But none of the commands work as desired. How do I track upstream SVN changes using git-svn and github? What am I doing wrong? $ git svn clone -s http://svn.osqa.net/svnroot/osqa/ osqa $ cd osqa $ git remote add origin [email protected]:turian/osqa.git $ git push origin master To [email protected]:turian/osqa.git ! [rejected] master -> master (non-fast forward) error: failed to push some refs to '[email protected]:turian/osqa.git' $ git pull remote: Counting objects: 21, done. remote: Compressing objects: 100% (17/17), done. remote: Total 17 (delta 7), reused 9 (delta 0) Unpacking objects: 100% (17/17), done. From [email protected]:turian/osqa * [new branch] master -> origin/master From [email protected]:turian/osqa * [new tag] master -> master You asked me to pull without telling me which branch you want to merge with, and 'branch.master.merge' in your configuration file does not tell me either. Please name which branch you want to merge on the command line and try again (e.g. 'git pull <repository> <refspec>'). See git-pull(1) for details on the refspec. ... $ /usr//lib/git-core/git-svn rebase warning: refname 'master' is ambiguous. First, rewinding head to replay your work on top of it... Applying: Added forum/management/commands/dumpsettings.py error: Ref refs/heads/master is at 6acd747f95aef6d9bce37f86798a32c14e04b82e but expected a7109d94d813b20c230a029ecd67801e6067a452 fatal: Cannot lock the ref 'refs/heads/master'. Could not move back to refs/heads/master rebase refs/remotes/trunk: command returned error: 1
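
    A hedged sketch of the round trip the git-svn documentation describes, assuming master is meant to track SVN trunk (rebasing rather than merging is the generally recommended way to keep git-svn metadata intact):

      # replay any new SVN revisions underneath your local commits
      git svn rebase

      # pick up commits already pushed to GitHub from another machine, keeping history linear
      git pull --rebase origin master

      # publish the combined result
      git push origin master

    The "refname 'master' is ambiguous" warning suggests there is both a branch and a tag named master (the pull output above shows "[new tag] master -> master"), so deleting the stray tag with git tag -d master may be needed before the rebase behaves.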

    Read the article

  • ASP.Net JSON Web Service Post Form Data

    - by Will D
    I have a ASP.NET web service decorated with System.Web.Script.Services.ScriptService() so it can return json formatted data. This much is working for me, but ASP.Net has a requirement that parameters to the web service must be in json in order to get json out. I'm using jquery to run my ajax calls and there doesn't seem to be an easy way to create a nice javascript object from the form elements. I have looked at serialiseArray in the json2 library but it doesn't encode the field names as property name in the object. If you have 2 form elements like this <input type="text" name="namefirst" id="namefirst" value="John"/> <input type="text" name="namelast" id="namelast" value="Doe"/> calling $("form").serialize() will get you the standard query string namefirst=John&namelast=Doe calling JSON.stringify($("form").serializeArray()) will get you the (bulky) json representation [{"name":"namefirst","value":"John"},{"name":"namelast","value":"Doe"}] This will work when passing to the web service but its ugly as you have to have code like this to read it in: Public Class NameValuePair Public name As String Public value As String End Class <WebMethod()> _ Public Function GetQuote(ByVal nvp As NameValuePair()) As String End Function You would also have to wrap that json text inside another object nameed nvp to make the web service happy. Then its more work as all you have is an array of NameValuePair when you want an associative array. I might be kidding myself but i imagined something more elegant when i started this project - more like this Public Class Person Public namefirst As String Public namelast As String End Class which would require the json to look something like this: {"namefirst":"John","namelast":"Doe"} Is there an easy way to do this? Obviously it is simple for a form with two parameters but when you have a very large form concatenating strings gets ugly. Having nested objects would also complicate things The cludge I have settled on for the moment is to use the standard name value pair format stuffed inside a json object. This is compact and fast {"q":"namefirst=John&namelast=Doe"} then have a web method like this on the server that parses the query string into an associate array. <WebMethod()> _ Public Function AjaxForm(ByVal q As String) as string Dim params As NameValueCollection = HttpUtility.ParseQueryString(q) 'do stuff return "Hello" End Sub As far a cludges go this one seems reasonably elegant in terms of amount of code, but my question is: is there a better way? Is there a generally accepted way of passing form data to asp.net web/script services?
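
    For the "nice javascript object from the form elements" part, a small helper (hypothetical name formToObject) that folds serializeArray() into plain name/value properties is a common pattern; it assumes flat, non-repeating field names, and that the web method's parameter is named person:

      // {"namefirst":"John","namelast":"Doe"} from the two inputs above
      function formToObject($form) {
          var obj = {};
          $.each($form.serializeArray(), function (i, field) {
              obj[field.name] = field.value; // last value wins for repeated names
          });
          return obj;
      }

      var payload = JSON.stringify({ person: formToObject($("form")) });

    On the server side that shape lines up with the Person class sketched above, rather than an array of NameValuePair.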

    Read the article

  • jQuery ajaxSubmit ignored by IE8

    - by George Burrell
    Hi there, I am combining the jQuery validation plug-in with the jQuery Form Plugin to submit the form via AJAX. This works perfectly in Firefox & Chrome, but (as usual) Internet Explorer is being a pain. For reasons that are eluding me, IE is ignoring the ajaxSubmit, and as a result it submits the form in the normal fashion. I've followed the validation plug-in's documentation when constructing my code: JS: $(document).ready(function() { var validator = $("#form_notify").validate({ messages: { email: { required: 'Please insert your email address. Without your email address we will not be able to contact you!', email:'Please enter a valid email address. Without a valid email address we will not be able to contact you!' } }, errorLabelContainer: "#error", success: "valid", submitHandler: function(form) {$(form).ajaxSubmit();} }); $('#email').blur(function() { if (validator.numberOfInvalids() > 0) { $("#label").addClass("label_error"); return false; } else {$("#label").removeClass("label_error");} }); $('#form_notify').submit(function() { if (validator.numberOfInvalids() == 0) { $(this).fadeOut('fast', function() {$('#thank-you').fadeIn();}); return true; } return false; }); }); Form HTML: <form id="form_notify" class="cmxform" name="form_notify" action="optin.pl" method="get"> <fieldset> <div class="input"> <label id="label" for="email">Email Address:</label> <input type="text" id="email" name="email" value="" title="email address" class="{required:true, email:true}"/> <div class="clearfix"></div> </div> <input type="hidden" name="key" value="sub-745-9.224;1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0;;subscribe-224.htm"> <input type="hidden" name="followup" value="19"> <input type="submit" name="submit" id="submit-button" value="Notify Me"> <div id="error"></div> </fieldset> </form> I can't understand what is causing IE to act differently; any assistance would be greatly appreciated. I can provide more information if needed. Thanks in advance!

    Read the article

  • How to pass ctor args in Activator.CreateInstance?

    - by thames
    I need a performance enhanced Activator.CreateInstance() and came across this article by Miron Abramson that uses a factory to create the instance in IL and then cache it. (I've included code below from Miron Abramson's site in case it somehow disappears). I'm new to IL Emit code and anything beyond Activator.CreateInstance() for instantiating a class and any help would be much appreciative. My problem is that I need to create an instance of an object that takes a ctor with a parameter. I see there is a way to pass in the Type of the parameter, but is there a way to pass in the value of the ctor parameter as well? If possible, I would like to use a method similar to CreateObjectFactory<T>(params object[] constructorParams) as some objects I want to instantiate may have more than 1 ctor param. // Source: http://mironabramson.com/blog/post/2008/08/Fast-version-of-the-ActivatorCreateInstance-method-using-IL.aspx public static class FastObjectFactory { private static readonly Hashtable creatorCache = Hashtable.Synchronized(new Hashtable()); private readonly static Type coType = typeof(CreateObject); public delegate object CreateObject(); /// /// Create an object that will used as a 'factory' to the specified type T /// public static CreateObject CreateObjectFactory() where T : class { Type t = typeof(T); FastObjectFactory.CreateObject c = creatorCache[t] as FastObjectFactory.CreateObject; if (c == null) { lock (creatorCache.SyncRoot) { c = creatorCache[t] as FastObjectFactory.CreateObject; if (c != null) { return c; } DynamicMethod dynMethod = new DynamicMethod("DM$OBJ_FACTORY_" + t.Name, typeof(object), null, t); ILGenerator ilGen = dynMethod.GetILGenerator(); ilGen.Emit(OpCodes.Newobj, t.GetConstructor(Type.EmptyTypes)); ilGen.Emit(OpCodes.Ret); c = (CreateObject)dynMethod.CreateDelegate(coType); creatorCache.Add(t, c); } } return c; } } Update to Miron's code from commentor on his post 2010-01-11 public static class FastObjectFactory2<T> where T : class, new() { public static Func<T> CreateObject { get; private set; } static FastObjectFactory2() { Type objType = typeof(T); var dynMethod = new DynamicMethod("DM$OBJ_FACTORY_" + objType.Name, objType, null, objType); ILGenerator ilGen = dynMethod.GetILGenerator(); ilGen.Emit(OpCodes.Newobj, objType.GetConstructor(Type.EmptyTypes)); ilGen.Emit(OpCodes.Ret); CreateObject = (Func<T>) dynMethod.CreateDelegate(typeof(Func<T>)); } }
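
    If IL emit feels heavy, a hedged alternative that keeps the compile-once-and-cache idea but accepts constructor arguments is a compiled expression tree (System.Linq.Expressions, .NET 3.5+). This is a sketch, not Abramson's code, and it assumes the first public constructor is the one you want:

      using System;
      using System.Linq;
      using System.Linq.Expressions;
      using System.Reflection;

      public static class FastCtor<T>
      {
          // args => new T((P0)args[0], (P1)args[1], ...), compiled once per closed type
          private static readonly Func<object[], T> creator = Build();

          private static Func<object[], T> Build()
          {
              ConstructorInfo ctor = typeof(T).GetConstructors()[0]; // assumption: first public ctor
              ParameterExpression args = Expression.Parameter(typeof(object[]), "args");
              var ctorArgs = ctor.GetParameters().Select((p, i) =>
                  (Expression)Expression.Convert(
                      Expression.ArrayIndex(args, Expression.Constant(i)), p.ParameterType));
              return Expression.Lambda<Func<object[], T>>(
                  Expression.New(ctor, ctorArgs), args).Compile();
          }

          public static T Create(params object[] ctorParams)
          {
              return creator(ctorParams);
          }
      }

    Usage would look like FastCtor<Foo>.Create(42, "bar"); the reflection and compilation cost is paid once per type, and afterwards the cached delegate does the work.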

    Read the article

  • Any tool(s) for knowing the layout (segments) of running process in Windows?

    - by claws
    I've always been curious about How exactly the process looks in memory? What are the different segments(parts) in it? How exactly will be the program (on the disk) & process (in the memory) are related? My previous question: http://stackoverflow.com/questions/1966920/more-info-on-memory-layout-of-an-executable-program-process In my quest, I finally found a answer. I found this excellent article that cleared most of my queries: http://www.linuxforums.org/articles/understanding-elf-using-readelf-and-objdump_125.html In the above article, author shows how to get different segments of the process (LINUX) & he compares it with its corresponding ELF file. I'm quoting this section here: Courious to see the real layout of process segment? We can use /proc//maps file to reveal it. is the PID of the process we want to observe. Before we move on, we have a small problem here. Our test program runs so fast that it ends before we can even dump the related /proc entry. I use gdb to solve this. You can use another trick such as inserting sleep() before it calls return(). In a console (or a terminal emulator such as xterm) do: $ gdb test (gdb) b main Breakpoint 1 at 0x8048376 (gdb) r Breakpoint 1, 0x08048376 in main () Hold right here, open another console and find out the PID of program "test". If you want the quick way, type: $ cat /proc/`pgrep test`/maps You will see an output like below (you might get different output): [1] 0039d000-003b2000 r-xp 00000000 16:41 1080084 /lib/ld-2.3.3.so [2] 003b2000-003b3000 r--p 00014000 16:41 1080084 /lib/ld-2.3.3.so [3] 003b3000-003b4000 rw-p 00015000 16:41 1080084 /lib/ld-2.3.3.so [4] 003b6000-004cb000 r-xp 00000000 16:41 1080085 /lib/tls/libc-2.3.3.so [5] 004cb000-004cd000 r--p 00115000 16:41 1080085 /lib/tls/libc-2.3.3.so [6] 004cd000-004cf000 rw-p 00117000 16:41 1080085 /lib/tls/libc-2.3.3.so [7] 004cf000-004d1000 rw-p 004cf000 00:00 0 [8] 08048000-08049000 r-xp 00000000 16:06 66970 /tmp/test [9] 08049000-0804a000 rw-p 00000000 16:06 66970 /tmp/test [10] b7fec000-b7fed000 rw-p b7fec000 00:00 0 [11] bffeb000-c0000000 rw-p bffeb000 00:00 0 [12] ffffe000-fffff000 ---p 00000000 00:00 0 Note: I add number on each line as reference. Back to gdb, type: (gdb) q So, in total, we see 12 segment (also known as Virtual Memory Area--VMA). But I want to know about Windows Process & PE file format. Any tool(s) for getting the layout (segments) of running process in Windows? Any other good resources for learning more on this subject? EDIT: Are there any good articles which shows the mapping between PE file sections & VA segments?
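
    On Windows, the Sysinternals tools VMMap and Process Explorer give a GUI view of a live process's memory regions, and dumpbin /headers shows the sections of the PE file they were mapped from. For a rough programmatic equivalent of /proc/<pid>/maps, a hedged C sketch using VirtualQueryEx (it assumes a PID argument; most error handling is omitted):

      #include <windows.h>
      #include <stdio.h>
      #include <stdlib.h>

      int main(int argc, char **argv)
      {
          DWORD pid = (DWORD)atoi(argv[1]);
          HANDLE h = OpenProcess(PROCESS_QUERY_INFORMATION, FALSE, pid);
          MEMORY_BASIC_INFORMATION mbi;
          unsigned char *addr = 0;

          if (h == NULL)
              return 1;

          /* walk the address space region by region, like reading /proc/<pid>/maps */
          while (VirtualQueryEx(h, addr, &mbi, sizeof(mbi)) == sizeof(mbi)) {
              printf("%p - %p  state=%08lx protect=%08lx type=%08lx\n",
                     mbi.BaseAddress,
                     (void *)((unsigned char *)mbi.BaseAddress + mbi.RegionSize),
                     mbi.State, mbi.Protect, mbi.Type);
              addr = (unsigned char *)mbi.BaseAddress + mbi.RegionSize;
          }
          CloseHandle(h);
          return 0;
      }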

    Read the article

  • boost multi_index_container and erase performance

    - by rjoshi
    I have a boost multi_index_container declared as below which is indexed by hash_unique id(unsigned long) and hash_non_unique transaction id(long). Insertion and retrieval of elements is fast but when I delete elements, it is much slower. I was expecting it to be constant time as key is hashed. e.g To erase elements from container for 10,000 elements, it takes around 2.53927016 seconds for 15,000 elements, it takes around 7.137100068 seconds for 20,000 elements, it takes around 21.391720757 seconds Is it something I am missing or is it expected behavior? class Session { public: Session() { //increment unique id static unsigned long counter = 0; boost::mutex::scoped_lock guard(mx); counter++; m_nId = counter; } unsigned long GetId() { return m_nId; } long GetTransactionHandle(){ return m_nTransactionHandle; } .... private: unsigned long m_nId; long m_nTransactionHandle; boost::mutext mx; .... }; typedef multi_index_container< Session*, indexed_by< hashed_unique< mem_fun<Session,unsigned long,&Session::GetId> >, hashed_non_unique< mem_fun<Session,unsigned long,&Session::GetTransactionHandle> > > //end indexed_by > SessionContainer; typedef SessionContainer::nth_index<0>::type SessionById; int main() { ... SessionContainer container; SessionById *pSessionIdView = &get<0>(container); unsigned counter = atoi(argv[1]); vector<Session*> vSes(counter); //insert for(unsigned i = 0; i < counter; i++) { Session *pSes = new Session(); container.insert(pSes); vSes.push_back(pSes); } timespec ts; lock_settime(CLOCK_PROCESS_CPUTIME_ID, &ts); //erase for(unsigned i = 0; i < counter; i++) { pSessionIdView->erase(vSes[i]->getId()); delete vSes[i]; } lock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts); std::cout << "Total time taken for erase:" << ts.tv_sec << "." << ts.tv_nsec << "\n"; return (EXIST_SUCCESS); }
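
    One thing visible in the code above: the constructor never initialises m_nTransactionHandle, so most Sessions may share a single key in the hashed_non_unique index, and very large groups of equivalent keys in a hashed index may be worth ruling out as the cause. A hedged way to narrow it down is to erase by iterator after a find(), so the lookup and the unlink can be timed separately (names follow the question; the null guard is there because vSes is pre-sized with nulls before the push_back calls):

      SessionById& idx = container.get<0>();

      for (std::size_t i = 0; i < vSes.size(); ++i) {
          if (!vSes[i])
              continue;                                          // skip the pre-sized null slots
          SessionById::iterator it = idx.find(vSes[i]->GetId()); // hash lookup, expected O(1)
          if (it != idx.end())
              idx.erase(it);                                     // unlinks from *both* indices
          delete vSes[i];
      }

    If erase(it) alone still shows the super-linear growth, the cost is in maintaining the indices (most likely the transaction-handle one); if not, the original erase-by-key path deserves a closer look.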

    Read the article

  • Running out of memory with UIImage creation on an offscreen Bitmap Context by NSOperation

    - by sigsegv
    I have an app with multiple UIView subclasses that acts as pages for a UIScrollView. UIViews are moved back and forth to provide a seamless experience to the user. Since the content of the views is rather slow to draw, it's rendered on a single shared CGBitmapContext guarded by locks by NSOperation subclasses - executed one at once in an NSOperationQueue - wrapped up in an UIImage and then used by the main thread to update the content of the views. -(void)main { NSAutoreleasePool * pool = [[NSAutoreleasePool alloc]init]; if([self isCancelled]) { return; } if(nil == data) { return; } // Buffer is the shared instance of a CG Bitmap Context wrapper class // data is a dictionary CGImageRef img = [buffer imageCreateWithData:data]; UIImage * image = [[UIImage alloc]initWithCGImage:img]; CGImageRelease(img); if([self isCancelled]) { [image release]; return; } NSDictionary * result = [[NSDictionary alloc]initWithObjectsAndKeys:image,@"image",id,@"id",nil]; // target is the instance of the UIView subclass that will use // the image [target performSelectorOnMainThread:@selector(updateContentWithData:) withObject:result waitUntilDone:NO]; [result release]; [image release]; [pool release]; } The updateContentWithData: of the UIView subclass performed on the main thread is just as simple -(void)updateContentWithData:(NSDictionary *)someData { NSDictionary * data = [someData retain]; if([[data valueForKey:@"id"]isEqualToString:[self pendingRequestId]]) { UIImage * image = [data valueForKey:@"image"]; [self setCurrentImage:image]; [self setNeedsDisplay]; } // If the image has not been retained, it should be released together // with the dictionary retaining it [data release]; } The drawLayer:inContext: method of the subclass will just get the CGImage from the UIImage and use it to update the backing layer or part of it. No retain or release is involved in the process. The problem is that after a while I run out of memory. The number of the UIViews is static. CGImageRef and UIImage are created, retained and released correctly (or so it seems to me). Instruments does not show any leaks, just the free memory available dip constantly, rise a few times, and then dip even lower until the application is terminated. The app cycles through about 2-300 of the aforementioned pages before that, but I would expect to have the memory usage reach a more or less stable level of used memory after a bunch of pages have been already skimmed at fast speed or, since the images are up to 3MB in size, deplete way earlier. Any suggestion will be greatly appreciated.

    Read the article

  • Is it too early to start designing for Task Parallel Library?

    - by Joe Erickson
    I have been following the development of the .NET Task Parallel Library (TPL) with great interest since Microsoft first announced it. There is no doubt in my mind that we will eventually take advantage of TPL. What I am questioning is whether it makes sense to start taking advantage of TPL when Visual Studio 2010 and .NET 4.0 are released, or whether it makes sense to wait a while longer. Why Start Now? The .NET 4.0 Task Parallel Library appears to be well designed and some relatively simple tests demonstrate that it works well on today's multi-core CPUs. I have been very interested in the potential advantages of using multiple lightweight threads to speed up our software since buying my first quad processor Dell Poweredge 6400 about seven years ago. Experiments at that time indicated that it was not worth the effort, which I attributed largely to the overhead of moving data between each CPU's cache (there was no shared cache back then) and RAM. Competitive advantage - some of our customers can never get enough performance and there is no doubt that we can build a faster product using TPL today. It sounds fun. Yes, I realize that some developers would rather poke themselves in the eye with a sharp stick, but we really enjoy maximizing performance. Why Wait? Are today's Intel Nehalem CPUs representative of where we are going as multi-core support matures? You can purchase a Nehalem CPU with 4 cores which share a single level 3 cache today, and most likely a 6 core CPU sharing a single level 3 cache by the time Visual Studio 2010 / .NET 4.0 are released. Obviously, the number of cores will go up over time, but what about the architecture? As the number of cores goes up, will they still share a cache? One issue with Nehalem is the fact that, even though there is a very fast interconnect between the cores, they have non-uniform memory access (NUMA) which can lead to lower performance and less predictable results. Will future multi-core architectures be able to do away with NUMA? Similarly, will the .NET Task Parallel Library change as it matures, requiring modifications to code to fully take advantage of it? Limitations Our core engine is 100% C# and has to run without full trust, so we are limited to using .NET APIs.

    Read the article

  • jQuery fadeIn after src changed, but fadeIn runs on the previous src anyway

    - by Anna
    Hello ! I have a jquery bug and I've been looking for hours now, I can't figure out what's wrong... I have this code : $(document).ready(function(){ $('#ulPhotos a').click(function() { var newSrc= $(this).find('img').attr('src').split("/"); bigPictureName = 'big'+newSrc[2]; $('#pho').hide(); $('#imageBig').attr("src", "images/photos/"+bigPictureName); $('#pho').fadeIn('slow'); var alt = $(this).find('img').attr('alt'); $('#legend').html(alt); }); }); and this in html : <ul id="ulPhotos"> <li><a href="#centre"><img src="images/photos/09.jpg" title="La Reine de la Nuit au Comedia" alt="<em>La Reine de la Nuit</em> au Comedia"/></a> <a href="#centre"><img src="images/photos/03.jpg" title="Manuelita, La Périchole à l&rsquo;Opéra Comique" alt="Manuelita, <em>La Périchole</em> à l&#8217;Opéra Comique" /></a></li> <li><a href="#centre" ><img src="images/photos/12.png" title="" alt="Marion Baglan Carnac Ré" /></a> and this in for bigImage : </div> <div id="pho" a name="centre"> <p id="legend"> La Reine de la Nuit</p> <img src="images/photos/big09.jpg" alt="Marion Baglan" id="imageBig"/> </div> It simply changes the source of my img in a div named pho... but sometimes when the new image is too heavy, the fadeIn executes on the previous src !! so we see the fadeIn first on the previous image, and then, the right picture appears without fadeIn.... am I missing something? ps : the page is here http://www.marion-baglan.net/photos.htm#centre if you click fast you can see it... and when I try to put some bigger photos, it's very obvious...
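
    fadeIn starts as soon as it is called, not when the new file has arrived, so on a heavy image the fade runs over whatever is still in the img tag. A hedged sketch of the usual fix: attach a one-shot load handler before swapping the src and fade in from the callback (attaching it before setting src also matters for images served from cache):

      $('#ulPhotos a').click(function () {
          var newSrc = $(this).find('img').attr('src').split('/');
          var bigPictureName = 'big' + newSrc[2];

          $('#pho').hide();
          $('#imageBig')
              .one('load', function () {       // runs once the *new* picture has actually loaded
                  $('#pho').fadeIn('slow');
              })
              .attr('src', 'images/photos/' + bigPictureName);

          $('#legend').html($(this).find('img').attr('alt'));
      });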

    Read the article

  • A question about the basics of LINQ to SQL

    - by Alex
    I just started learning LINQ to SQL, and so far I'm impressed with the easy of use and good performance. I used to think that when doing LINQ queries like from Customer in DB.Customers where Customer.Age > 30 select Customer LINQ gets all customers from the database ("SELECT * FROM Customers"), moves them to the Customers array and then makes a search in that Array using .NET methods. This is very inefficient, what if there are hundreds of thousands of customers in the database? Making such big SELECT queries would kill the web application. Now after experiencing how actually fast LINQ to SQL is, I start to suspect that when doing that query I just wrote, LINQ somehow converts it to a SQL Query string SELECT * FROM Customers WHERE Age > 30 And only when necessary it will run the query. So my question is: am I right? And when is the query actually run? The reason why I'm asking is not only because I want to understand how it works in order to build good optimized applications, but because I came across the following problem. I have 2 tables, one of them is Books, the other has information on how many books were sold on certain days. My goal is to select books that had at least 50 sales/day in past 10 days. It's done with this simple query: from Book in DB.Books where (from Sale in DB.Sales where Sale.SalesAmount >= 50 && Sale.DateOfSale >= DateTime.Now.AddDays(-10) select Sale.BookID).Contains(Book.ID) select Book The point is, I have to use the checking part in several queries and I decided to create an array with IDs of all popular books: var popularBooksIDs = from Sale in DB.Sales where Sale.SalesAmount >= 50 && Sale.DateOfSale >= DateTime.Now.AddDays(-10) select Sale.BookID; BUT when I try to do the query now: from Book in DB.Books where popularBooksIDs.Contains(Book.ID) select Book It doesn't work! That's why I think that we can't use thins kinds of shortcuts in LINQ to SQL queries, like we can't use them in real SQL. We have to create straightforward queries, am I right?
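
    The suspicion is right: the query expression is only translated and sent when something enumerates it, typically as a single SQL statement. A hedged way to watch this happen is the DataContext.Log property (DB below stands for whatever your DataContext instance is):

      DB.Log = Console.Out;            // the generated SQL is written here

      var over30 = from c in DB.Customers
                   where c.Age > 30
                   select c;           // nothing hits the database yet

      var list = over30.ToList();      // SQL with "WHERE ... > 30" is sent on enumeration

    For the popular-books case, keeping popularBooksIDs as the un-enumerated query (no ToList or ToArray) is generally what lets Contains compose into one SQL statement; to say why the composed version fails here, the exact exception text would be needed.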

    Read the article

  • What is the Fastest Way to Check for a Keyword in a List of Keywords in Delphi?

    - by lkessler
    I have a small list of keywords. What I'd really like to do is akin to: case MyKeyword of 'CHIL': (code for CHIL); 'HUSB': (code for HUSB); 'WIFE': (code for WIFE); 'SEX': (code for SEX); else (code for everything else); end; Unfortunately the CASE statement can't be used like that for strings. I could use the straight IF THEN ELSE IF construct, e.g.: if MyKeyword = 'CHIL' then (code for CHIL) else if MyKeyword = 'HUSB' then (code for HUSB) else if MyKeyword = 'WIFE' then (code for WIFE) else if MyKeyword = 'SEX' then (code for SEX) else (code for everything else); but I've heard this is relatively inefficient. What I had been doing instead is: P := pos(' ' + MyKeyword + ' ', ' CHIL HUSB WIFE SEX '); case P of 1: (code for CHIL); 6: (code for HUSB); 11: (code for WIFE); 17: (code for SEX); else (code for everything else); end; This, of course is not the best programming style, but it works fine for me and up to now didn't make a difference. So what is the best way to rewrite this in Delphi so that it is both simple, understandable but also fast? (For reference, I am using Delphi 2009 with Unicode strings.) Followup: Toby recommended I simply use the If Then Else construct. Looking back at my examples that used a CASE statement, I can see how that is a viable answer. Unfortunately, my inclusion of the CASE inadvertently hid my real question. I actually don't care which keyword it is. That is just a bonus if the particular method can identify it like the POS method can. What I need is to know whether or not the keyword is in the set of keywords. So really I want to know if there is anything better than: if pos(' ' + MyKeyword + ' ', ' CHIL HUSB WIFE SEX ') > 0 then The If Then Else equivalent does not seem better in this case being: if (MyKeyword = 'CHIL') or (MyKeyword = 'HUSB') or (MyKeyword = 'WIFE') or (MyKeyword = 'SEX') then In Barry's comment to Kornel's question, he mentions the TDictionary Generic. I've not yet picked up on the new Generic collections and it looks like I should delve into them. My question here would be whether they are built for efficiency and how would using TDictionary compare in looks and in speed to the above two lines? In later profiling, I have found that the concatenation of strings as in: (' ' + MyKeyword + ' ') is VERY expensive time-wise and should be avoided whenever possible. Almost any other solution is better than doing this.
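
    Since the question already mentions TDictionary: Delphi 2009 ships Generics.Collections, and a hashed lookup avoids both the string concatenation and the linear chain of comparisons. A hedged sketch (untested syntax, but only stock TDictionary calls):

      uses
        Generics.Collections;

      var
        Keywords: TDictionary<string, Integer>;

      procedure InitKeywords;
      begin
        Keywords := TDictionary<string, Integer>.Create;
        Keywords.Add('CHIL', 1);
        Keywords.Add('HUSB', 2);
        Keywords.Add('WIFE', 3);
        Keywords.Add('SEX', 4);
      end;

      procedure HandleKeyword(const MyKeyword: string);
      var
        Code: Integer;
      begin
        if Keywords.TryGetValue(MyKeyword, Code) then
          case Code of
            1: ; // code for CHIL
            2: ; // code for HUSB
            3: ; // code for WIFE
            4: ; // code for SEX
          end
        else
          ; // code for everything else
      end;

    If only membership matters, ContainsKey(MyKeyword) does the same job without the case statement.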

    Read the article

  • How are you using IronPython?

    - by Will Dean
    I'm keen to drink some modern dynamic language koolaid, so I've believed all the stuff on Michael Foord's blog and podcasts, I've bought his book (and read some of it), and I added an embedded IPy runtime to a large existing app a year or so ago (though that was for someone else and I didn't really use it myself). Now I need to do some fairly simple code generation stuff, where I'm going to call a few methods on a few .net objects (custom, C#-authored objects), create a few strings, write some files, etc. The experience of trying this leaves me feeling like the little boy who thinks he's the only one who can see that The Emperor has no clothes on. If you're using IronPython, I'd really appreciate knowing how you deal with the following aspects of it: Code editing - do you use the .NET framework without Intellisense? Refactoring - I know a load of 'refactoring' is about working around language-related busywork, so if Python is sufficiently lightweight then we won't need that, But things like renames seem to me to be essential to iteratively developing quality code regardless of language. Crippling startup time - One of the things which is supposed to be good about interpreted languages is the lack of compile time leading to fast interactive development. Unfortunately I can compile a C# application and launch it quicker than IPy can start up. Interactive hacking - the IPy console/repl is supposed to be good for this, but I haven't found a good way to take the code you've interactively arrived at and persist it into a file - cut and paste from the console is fairly miserable. And the console seems to hold references to .NET assemblies you've imported, so you have to quit it and restart it if you're working on the C# stuff as well. Hacking on C# in something like LinqPad seems a much faster and easier way to try things out (and has proper Intellisense). Do you use the console? Debugging - what's the story here? I know someone on the IPy team is working on a command-line hobby-project, but let's just say I'm not immediately attracted to a command line debugger. I don't really need a debugger from little Python scripts, but I would if I were to use IPy for scripting unit tests, for example. Unit testing - I can see that dynamic languages could be great for this, but is there any IDE test-runner integration (like for Resharper, etc). The Foord book has a chapter about this, which I'll admit I have not yet read properly, but it does seem to involve driving a console-mode test-runner from the command prompt, which feels to be an enormous step back from using an integrated test runner like TestDriven.net or Resharper. I really want to believe in this stuff, so I am still working on the assumption that I've missed something. I would really like to know how other people are dealing with IPy, particularly if they're doing it in a way which doesn't feel like we've just lost 15 years'-worth of tool development.

    Read the article

  • SharedObject (Flex 3.2) behaving unexpectedly when query string present in URL

    - by rhtx
    Summary: The behavior detailed below seems to indicate that if your app at www.someplace.com sets/retrieves data via a SharedObject, there is some sort of .sol collision if the user hits your app at someplace.com, and then later at someplace.com?name=value. Can anyone confirm or refute this? I'm working on a Flex web app that presents the user with a login page. When the user has logged in, he/she is presented with a 'room' which is associated with a 'group'. We store the last-visited room/group combination in a SharedObject - so when a given user logs in, they are taken into the most recent room in which they were active. That works fine, but we also have an auto-login system which involves the user clicking on a link to the app url with a query string attached. There are two types of these links. 1) the query string includes username, groupId, and roomId 2) the query string includes only the username Because we are working fast and have only a few developers, the auto-login system is built on the last-vist system. During the auto-login process, the url is inspected and if groupId and roomId values are found in the query string, the SharedObject is opened and the last-visit group/room id values are overwritten by the param values. That works fine, also, when the app is hit with a query string of the second type (no groupId and roomId params), the app goes to the SharedObject to get the stored room and group id values, as it normally would. And here's the problem: The values it comes back with are whatever the last room/group param values were, not whatever the last last-visit room/group values are. And if the given user has never hit the app with query string that included group and room id values, the app gets null values from the SharedObject. It took some digging around, but what it looks like is happening is that a second set of data is being stored/expected in the SharedObject if a query string is present in the URL. Looking at the .sol file in a text editor I see more untranslated code, and additional group and room values, once I've hit the app with URLs that contain query strings. I'm not finding anything on the web about this, but that may just be due to a lack of necessary search skills. Has anyone else run into anything similar? Or do you know how to address this? I've tried setting Security.exactSettings to false, already - was really hoping that was going to work.

    Read the article

  • Why is cell phone software still so primitive?

    - by Tomislav Nakic-Alfirevic
    I don't do mobile development, but it strikes me as odd that features like this aren't available by default on most phones: full text search: searches all address book contents, messages, anything else being a plus better call management: e.g. a rotating audio call log, meaning you always have the last N calls recorded for your listening pleasure later (your little girl just said her first "da-da" while you were on a business trip, you had a telephone job interview, you received complex instructions to do something etc.) bluetooth remote control (like e.g. anyRemote, but available by default on a bluetooth phone) no multitasking capabilities worth mentioning and in general no e.g. weekly software updates, making the phone much more usable (even if it had to be done over USB, rather than over the network). I'm sure I was dumbfounded by the lack or design of other features as well, but they don't come to mind right now. To clarify, I'm not talking about smartphones here: my plain, 2-year old phone has a CPU an order of magnitude faster than my first PC, about as much storage space and it's ridiculous how bad (slow, unwieldy) the software is and it's not one phone or one manufacturer. What keeps the (to me) obvious software functionality vacuum on a capable hardware platform from being filled up? Edit: I believe a clarification on the multitasking point might be beneficial. I'll use my phone as an example, although the point is much more general. The phone can multitask and in fact does: you can listen to music and do something else at the same time. On the other hand, the way the software has been designed makes multitasking next to useless. (Ditto with the external touch screen: it can take touch commands, but only one application makes use of it, and only with 3 commands.) To take the multitasking example to the extreme, if I plug my phone into my laptop and it registers as an external disk, it doesn't allow any kind of operation: messages, calling, calendar, everything out of reach, although I can receive a call. No "battery life" issue there: it's charging while connected. BTW, another example of design below the current state of the art: I don't see a phone on the horizon which will remember where in an audio or video file you were when you stopped listening/watching it last time (podcasts are a good use case). Simplistic rewind/fast forward functionality only aggravates the problem.

    Read the article

  • Clicking issue in Flash and XML

    - by ortho
    Hi, I have the php backend that displays an xml page with data for flash consuming. Flash takes it and creates a textfields dynamicaly based on this information. I have a few items in menu on top and when I click one of them, data is taken from php and everything is displayed in scroll in flash. The problem is that if I click too fast between menu items, then I get buggy layout. The text (only the text) is becoming part of the layout and is displayed no metter what item in menu I am currently in and only refreshing the page helps. var myXML:XML = new XML(); myXML.ignoreWhite=true; myXML.load("/getBud1.php"); myXML.onLoad = function(success){ if (success){ var myNode = this.firstChild.childNodes; var myTxt:Array = Array(0); for (var i:Number = 0; i<myNode.length; i++) { myTxt[i] = "text"+i+"content"; createTextField(myTxt[i],i+1,65,3.5,150, 20); var pole = eval(myTxt[i]); pole.embedFonts = true; var styl:TextFormat = new TextFormat(); styl.font = "ArialFont"; pole.setNewTextFormat(styl); pole.text = String(myNode[i].childNodes[1].firstChild.nodeValue); pole.wordWrap = true; pole.autoSize = "left"; if(i > 0) { var a:Number = eval(myTxt[i-1])._height + eval(myTxt[i-1])._y + 3; pole._y = a; } attachMovie("kropka2", "test"+i+"th", i+1000); eval("test"+i+"th")._y = pole._y + 5; eval("test"+i+"th")._x = 52; } } } I tried to load the info and ceate text fields from top frame and then refer to correct place by instance names string e.g. budData.dataHolder.holder.createTextField , but then when I change between items in menu the text dissapears completely untill I refresh the page. Please help

    Read the article

  • SET game odds simulation (MATLAB)

    - by yuk
    Here is an interesting problem for your weekend. :) I recently find the great card came - SET. Briefly, there are 81 cards with the four features: symbol (oval, squiggle or diamond), color (red, purple or green), number (one, two or three) or shading (solid, striped or open). The task is to find (from selected 12 cards) a SET of 3 cards, in which each of the four features is either all the same on each card or all different on each card (no 2+1 combination). In my free time I've decided to code it in MATLAB to find a solution and to estimate odds of having a set in randomly selected cards. Here is the code: %% initialization K = 12; % cards to draw NF = 4; % number of features (usually 3 or 4) setallcards = unique(nchoosek(repmat(1:3,1,NF),NF),'rows'); % all cards: rows - cards, columns - features setallcomb = nchoosek(1:K,3); % index of all combinations of K cards by 3 %% test tic NIter=1e2; % number of test iterations setexists = 0; % test results holder % C = progress('init'); % if you have progress function from FileExchange for d = 1:NIter % C = progress(C,d/NIter); % cards for current test setdrawncardidx = randi(size(setallcards,1),K,1); setdrawncards = setallcards(setdrawncardidx,:); % find all sets in current test iteration for setcombidx = 1:size(setallcomb,1) setcomb = setdrawncards(setallcomb(setcombidx,:),:); if all(arrayfun(@(x) numel(unique(setcomb(:,x))), 1:NF)~=2) % test one combination setexists = setexists + 1; break % to find only the first set end end end fprintf('Set:NoSet = %g:%g = %g:1\n', setexists, NIter-setexists, setexists/(NIter-setexists)) toc 100-1000 iterations are fast, but be careful with more. One million iterations takes about 15 hours on my home computer. Anyway, with 12 cards and 4 features I've got around 13:1 of having a set. This is actually a problem. The instruction book said this number should be 33:1. And it was recently confirmed by Peter Norvig. He provides the Python code, but I didn't test it. So can you find an error?

    Read the article

  • log4j performance

    - by Bob
    Hi, I'm developing a web app, and I'd like to log some information to help me improve and observe the app. (I'm using Tomcat6) First I thought I would use StringBuilders, append the logs to them and a task would persist them into the database like every 2 minutes. Because I was worried about the out-of-the-box logging system's performance. Then I made some test. Especially with log4j. Here is my code: Main.java public static void main(String[] args) { Thread[] threads = new Thread[LoggerThread.threadsNumber]; for(int i = 0; i < LoggerThread.threadsNumber; ++i){ threads[i] = new Thread(new LoggerThread("name - " + i)); } LoggerThread.startTimestamp = System.currentTimeMillis(); for(int i = 0; i < LoggerThread.threadsNumber; ++i){ threads[i].start(); } LoggerThread.java public class LoggerThread implements Runnable{ public static int threadsNumber = 10; public static long startTimestamp; private static int counter = 0; private String name; public LoggerThread(String name) { this.name = name; } private Logger log = Logger.getLogger(this.getClass()); @Override public void run() { for(int i=0; i<10000; ++i){ log.info(name + ": " + i); if(i == 9999){ int c = increaseCounter(); if(c == threadsNumber){ System.out.println("Elapsed time: " + (System.currentTimeMillis() - startTimestamp)); } } } } private synchronized int increaseCounter(){ return ++counter; } } } log4j.properties log4j.logger.main.LoggerThread=debug, f log4j.appender.f=org.apache.log4j.RollingFileAppender log4j.appender.f.layout=org.apache.log4j.PatternLayout log4j.appender.f.layout.ConversionPattern=%d{ABSOLUTE} %5p %c{1}:%L - %m%n log4j.appender.f.File=c:/logs/logging.log log4j.appender.f.MaxFileSize=15000KB log4j.appender.f.MaxBackupIndex=50 I think this is a very common configuration for log4j. First I used log4j 1.2.14 then I realized there was a newer version, so I switched to 1.2.16 Here are the figures (all in millisec) LoggerThread.threadsNumber = 10 1.2.14: 4235, 4267, 4328, 4282 1.2.16: 2780, 2781, 2797, 2781 LoggerThread.threadsNumber = 100 1.2.14: 41312, 41014, 42251 1.2.16: 25606, 25729, 25922 I think this is very fast. Don't forget that: in every cycle the run method not just log into the file, it has to concatenate strings (name + ": " + i), and check an if test (i == 9999). When threadsNumber is 10, there are 100.000 loggings and if tests and concatenations. When it is 100, there are 1.000.000 loggings and if tests and concatenations. (I've read somewhere JVM uses StringBuilder's append for concatenation, not simple concatenation). Did I missed something? Am I doing something wrong? Did I forget any factor that could decrease the performance? If these figures are correct I think, I don't have to worry about log4j's performance even if I heavily log, do I?
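
    The numbers look plausible, and at that volume log4j is rarely the bottleneck in a web app. If it ever shows up in a profile, the two standard levers are a level guard, so the message string is never built when the level is off, and an org.apache.log4j.AsyncAppender (typically configured via the XML configurator) in front of a buffered file appender, so callers do not wait on disk:

      // the concatenation only happens when INFO is actually enabled
      if (log.isInfoEnabled()) {
          log.info(name + ": " + i);
      }

    That async-plus-buffered combination is essentially the "collect in memory, persist periodically" scheme described at the top, already implemented for you.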

    Read the article

  • Programming graphics and sound on PC - Total newbie questions, and lots of them!

    - by Russel
    Hello, This isn't exactly specifically a programming question (or is it?) but I was wondering: How are graphics and sound processed from code and output by the PC? My guess for graphics: There is some reserved memory space somewhere that holds exactly enough room for a frame of graphics output for your monitor. IE: 800 x 600, 24 bit color mode == 800x600x3 = ~1.4MB memory space Between each refresh, the program writes video data to this space. This action is completed before the monitor refresh. Assume a simple 2D game: the graphics data is stored in machine code as many bytes representing color values. Depending on what the program(s) being run instruct the PC, the processor reads the appropriate data and writes it to the memory space. When it is time for the monitor to refresh, it reads from each memory space byte-for-byte and activates hardware depending on those values for each color element of each pixel. All of this of course happens crazy-fast, and repeats x times a second, x being the monitor's refresh rate. I've simplified my own likely-incorrect explanation by avoiding talk of double buffering, etc Here are my questions: a) How close is the above guess (the three steps)? b) How could one incorporate graphics in pure C++ code? I assume the practical thing that everyone does is use a graphics library (SDL, OpenGL, etc), but, for example, how do these libraries accomplish what they do? Would manual inclusion of graphics in pure C++ code (say, a 2D spite) involve creating a two-dimensional array of bit values (or three dimensional to include multiple RGB values per pixel)? Is this how it would be done waaay back in the day? c) Also, continuing from above, do libraries such as SDL etc that use bitmaps actual just build the bitmap/etc files into machine code of the executable and use them as though they were build in the same matter mentioned in question b above? d) In my hypothetical step 3 above, is there any registers involved? Like, could you write some byte value to some register to output a single color of one byte on the screen? Or is it purely dedicated memory space (=RAM) + hardware interaction? e) Finally, how is all of this done for sound? (I have no idea :) )
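
    The three-step guess is roughly the right mental model for a bare framebuffer; real systems add a GPU, a driver and compositing on top. To make step 2 concrete, here is a toy C sketch of "a reserved block of memory, three bytes per pixel" that a display path could scan out; nothing in it talks to real hardware:

      #include <stdint.h>
      #include <string.h>

      #define W 800
      #define H 600

      static uint8_t framebuffer[H][W][3];   /* ~1.4 MB of RGB bytes, as in the estimate above */

      static void put_pixel(int x, int y, uint8_t r, uint8_t g, uint8_t b)
      {
          framebuffer[y][x][0] = r;
          framebuffer[y][x][1] = g;
          framebuffer[y][x][2] = b;
      }

      int main(void)
      {
          memset(framebuffer, 0, sizeof framebuffer);   /* clear to black */
          for (int x = 0; x < W; x++)
              put_pixel(x, H / 2, 255, 0, 0);           /* draw one red scanline */
          /* a real display path would now hand this buffer to the video hardware,
             which is the part libraries like SDL do for you */
          return 0;
      }

    Libraries such as SDL hand you exactly this kind of pixel buffer plus the machinery to get it onto the screen, which covers most of (b) and (c); bitmap assets are usually shipped as data files alongside the executable rather than compiled into machine code.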

    Read the article

  • PostgreSQL - fetch the row which has the Max value for a column

    - by Joshua Berry
    I'm dealing with a Postgres table (called "lives") that contains records with columns for time_stamp, usr_id, transaction_id, and lives_remaining. I need a query that will give me the most recent lives_remaining total for each usr_id There are multiple users (distinct usr_id's) time_stamp is not a unique identifier: sometimes user events (one by row in the table) will occur with the same time_stamp. trans_id is unique only for very small time ranges: over time it repeats remaining_lives (for a given user) can both increase and decrease over time example: time_stamp|lives_remaining|usr_id|trans_id ----------------------------------------- 07:00 | 1 | 1 | 1 09:00 | 4 | 2 | 2 10:00 | 2 | 3 | 3 10:00 | 1 | 2 | 4 11:00 | 4 | 1 | 5 11:00 | 3 | 1 | 6 13:00 | 3 | 3 | 1 As I will need to access other columns of the row with the latest data for each given usr_id, I need a query that gives a result like this: time_stamp|lives_remaining|usr_id|trans_id ----------------------------------------- 11:00 | 3 | 1 | 6 10:00 | 1 | 2 | 4 13:00 | 3 | 3 | 1 As mentioned, each usr_id can gain or lose lives, and sometimes these timestamped events occur so close together that they have the same timestamp! Therefore this query won't work: SELECT b.time_stamp,b.lives_remaining,b.usr_id,b.trans_id FROM (SELECT usr_id, max(time_stamp) AS max_timestamp FROM lives GROUP BY usr_id ORDER BY usr_id) a JOIN lives b ON a.max_timestamp = b.time_stamp Instead, I need to use both time_stamp (first) and trans_id (second) to identify the correct row. I also then need to pass that information from the subquery to the main query that will provide the data for the other columns of the appropriate rows. This is the hacked up query that I've gotten to work: SELECT b.time_stamp,b.lives_remaining,b.usr_id,b.trans_id FROM (SELECT usr_id, max(time_stamp || '*' || trans_id) AS max_timestamp_transid FROM lives GROUP BY usr_id ORDER BY usr_id) a JOIN lives b ON a.max_timestamp_transid = b.time_stamp || '*' || b.trans_id ORDER BY b.usr_id Okay, so this works, but I don't like it. It requires a query within a query, a self join, and it seems to me that it could be much simpler by grabbing the row that MAX found to have the largest timestamp and trans_id. The table "lives" has tens of millions of rows to parse, so I'd like this query to be as fast and efficient as possible. I'm new to RDBM and Postgres in particular, so I know that I need to make effective use of the proper indexes. I'm a bit lost on how to optimize. I found a similar discussion here. Can I perform some type of Postgres equivalent to an Oracle analytic function? Any advice on accessing related column information used by an aggregate function (like MAX), creating indexes, and creating better queries would be much appreciated! P.S. You can use the following to create my example case: create TABLE lives (time_stamp timestamp, lives_remaining integer, usr_id integer, trans_id integer); insert into lives values ('2000-01-01 07:00', 1, 1, 1); insert into lives values ('2000-01-01 09:00', 4, 2, 2); insert into lives values ('2000-01-01 10:00', 2, 3, 3); insert into lives values ('2000-01-01 10:00', 1, 2, 4); insert into lives values ('2000-01-01 11:00', 4, 1, 5); insert into lives values ('2000-01-01 11:00', 3, 1, 6); insert into lives values ('2000-01-01 13:00', 3, 3, 1);
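
    One Postgres-specific option worth benchmarking against the self-join is DISTINCT ON, which keeps exactly one row per usr_id, chosen by the ORDER BY; this is an alternative query, not a tweak of the one above:

      -- newest row per usr_id; ties on time_stamp are broken by trans_id
      SELECT DISTINCT ON (usr_id)
             time_stamp, lives_remaining, usr_id, trans_id
      FROM   lives
      ORDER  BY usr_id, time_stamp DESC, trans_id DESC;

      -- an index matching that sort order usually helps on a table this size
      CREATE INDEX lives_usr_latest_idx ON lives (usr_id, time_stamp DESC, trans_id DESC);

    Either way, avoiding the time_stamp || '*' || trans_id concatenation matters: once the join key is an expression like that, a plain index on the underlying columns can no longer be used for the lookup.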

    Read the article

  • Combinations and Permutations in F#

    - by Noldorin
    I've recently written the following combinations and permutations functions for an F# project, but I'm quite aware they're far from optimised. /// Rotates a list by one place forward. let rotate lst = List.tail lst @ [List.head lst] /// Gets all rotations of a list. let getRotations lst = let rec getAll lst i = if i = 0 then [] else lst :: (getAll (rotate lst) (i - 1)) getAll lst (List.length lst) /// Gets all permutations (without repetition) of specified length from a list. let rec getPerms n lst = match n, lst with | 0, _ -> seq [[]] | _, [] -> seq [] | k, _ -> lst |> getRotations |> Seq.collect (fun r -> Seq.map ((@) [List.head r]) (getPerms (k - 1) (List.tail r))) /// Gets all permutations (with repetition) of specified length from a list. let rec getPermsWithRep n lst = match n, lst with | 0, _ -> seq [[]] | _, [] -> seq [] | k, _ -> lst |> Seq.collect (fun x -> Seq.map ((@) [x]) (getPermsWithRep (k - 1) lst)) // equivalent: | k, _ -> lst |> getRotations |> Seq.collect (fun r -> List.map ((@) [List.head r]) (getPermsWithRep (k - 1) r)) /// Gets all combinations (without repetition) of specified length from a list. let rec getCombs n lst = match n, lst with | 0, _ -> seq [[]] | _, [] -> seq [] | k, (x :: xs) -> Seq.append (Seq.map ((@) [x]) (getCombs (k - 1) xs)) (getCombs k xs) /// Gets all combinations (with repetition) of specified length from a list. let rec getCombsWithRep n lst = match n, lst with | 0, _ -> seq [[]] | _, [] -> seq [] | k, (x :: xs) -> Seq.append (Seq.map ((@) [x]) (getCombsWithRep (k - 1) lst)) (getCombsWithRep k xs) Does anyone have any suggestions for how these functions (algorithms) can be sped up? I'm particularly interested in how the permutation (with and without repetition) ones can be improved. The business involving rotations of lists doesn't look too efficient to me in retrospect. Update Here's my new implementation for the getPerms function, inspired by Tomas's answer. Unfortunately, it's not really any fast than the existing one. Suggestions? let getPerms n lst = let rec getPermsImpl acc n lst = seq { match n, lst with | k, x :: xs -> if k > 0 then for r in getRotations lst do yield! getPermsImpl (List.head r :: acc) (k - 1) (List.tail r) if k >= 0 then yield! getPermsImpl acc k [] | 0, [] -> yield acc | _, [] -> () } getPermsImpl List.empty n lst

    Read the article

  • Choosing the right .NET architecture. WCF? WPF/Forms, ASP.NET (MVC)?

    - by Tommy Jakobsen
    I’m in the situation that I have to design and implement a rather big system from the bottom. I’m having some (a lot actually) questions about the architecture that I would like your comments and thoughts on. I don’t hope that I’ve written too much here, but I wanted to give you all an idea of what the system is. Quick info about the applications, read if you want: I can’t share much detail about the project, but basically it’s a system where we offer our customer a service to manage their users. We have a hotline where the users call and our hotline uses an (windows) application (intranet) to manage the user’s data, etc. The customer also has a web application where they can see reports, information about their business and users, and the ability to modify their data. Modifying data is not just user data like address and so, but also information about the products/services the user has, which can be complicated. The applications will be built on Microsoft .NET Framework 4, with a MS SQL Server 2008 database. There will be a few applications that have to access this database, such as: Intranet application (used by us and our hotline) Customer web application type 1 Customer web application type 2 Customer web application type n different applications) … Now my big problem is what .NET parts I should use for such a system. For the “backend” I’ve considered using Windows Communication Foundation: Would WCF be a good choice? The intranet application will be an application that has to edit lots of records in the database. It has to be easy to navigate using the keyboard (fast to work with). Has a feature such as “find customer, find that, lookup this, choose this and update that”. What would be the best choice to develop this application in? Would it be WPF or good old Windows Forms? I don’t need all of the fancy graphics features in WPF, like 3D, but the application has to look nice (maybe something like the new Visual Studio/Office tools). And the same question goes for the web pages. They have much the same work to do, but not as many features as the intranet application, and not the same amount of data (much less). That is my questions for now. I’m hoping to get a discussion going that will open my eyes to some of these technologies, helping me decide architecture to go with. I would like to say thanks in advance, and let you all know that any thoughts will be much appreciated.

    Read the article

  • Can GPU capabilities impact virtual machine performance?

    - by Dave White
    While this many not seem like a programming question directly, it impacts my development activities and so it seems like it belongs here. It seems that more and more developers are turning to virtual environments for development activities on their computers, SharePoint development being a prime example. Also, as a trainer, I have virtual training environments for all of the classes that I teach. I recently purchased a new Dell E6510 to travel around with. It has the i7 620M (Dual core, HyperThreaded cpu running at 2.66GHz) and 8 GB of memory. Reading the spec sheet, it sounded like it would be a great laptop to carry around and run virtual machines on. Getting the laptop though, I've been pretty disappointed with the user experience of developing in a virtual machine. Giving the Virtual Machine 4 GB of memory, it was slow and I could type complete sentences and watch the VM "catchup". My company has training laptops that we provide for our classes. They are Dell Precision M6400 Intel Core 2 Duo P8700 running at 2.54Ghz with 8 GB of memory and the experience on this laptops is night and day compared to the E6510. They are crisp and you barely aware that you are running in a virtual environment. Since the E6510 should be faster in all categories than the M6400, I couldn't understand why the new laptop was slower, so I did a component by component comparison and the only place where the E6510 is less performant than the M6400 is the graphics department. The M6400 is running a nVidia FX 2700m GPU and the E6510 is running a nVidia 3100M GPU. Looking at benchmarks of the two GPUs suggest that the FX 2700M is twice as fast as the 3100M. http://www.notebookcheck.net/Mobile-Graphics-Cards-Benchmark-List.844.0.html 3100M = 111th (E6510) FX 2700m = 47th (Precision M6400) Radeon HD 5870 = 8th (Alienware) The host OS is Windows 7 64bit as is the guest OS, running in Virtual Box 3.1.8 with Guest Additions installed on the guest. The IDE being used in the virtual environment is VS 2010 Premium. So after that long setup, my question is: Is the GPU significantly impacting the virtual machine's performance or are there other factors that I'm not looking at that I can use to boost the vm's performance? Do we now have to consider GPU performance when purchasing laptops where we expect to use virtualized development environments? Thanks in advance. Cheers, Dave

    Read the article

  • Should we denormalize database to improve performance?

    - by Groo
    We have a requirement to store 500 measurements per second, coming from several devices. Each measurement consists of a timestamp, a quantity type, and several vector values. Right now there is 8 vector values per measurement, and we may consider this number to be constant for needs of our prototype project. We are using HNibernate. Tests are done in SQLite (disk file db, not in-memory), but production will probably be MsSQL. Our Measurement entity class is the one that holds a single measurement, and looks like this: public class Measurement { public virtual Guid Id { get; private set; } public virtual Device Device { get; private set; } public virtual Timestamp Timestamp { get; private set; } public virtual IList<VectorValue> Vectors { get; private set; } } Vector values are stored in a separate table, so that each of them references its parent measurement through a foreign key. We have done a couple of things to ensure that generated SQL is (reasonably) efficient: we are using Guid.Comb for generating IDs, we are flushing around 500 items in a single transaction, ADO.Net batch size is set to 100 (I think SQLIte does not support batch updates? But it might be useful later). The problem Right now we can insert 150-200 measurements per second (which is not fast enough, although this is SQLite we are talking about). Looking at the generated SQL, we can see that in a single transaction we insert (as expected): 1 timestamp 1 measurement 8 vector values which means that we are actually doing 10x more single table inserts: 1500-2000 per second. If we placed everything (all 8 vector values and the timestamp) into the measurement table (adding 9 dedicated columns), it seems that we could increase our insert speed up to 10 times. Switching to SQL server will improve performance, but we would like to know if there might be a way to avoid unnecessary performance costs related to the way database is organized right now. [Edit] With in-memory SQLite I get around 350 items/sec (3500 single table inserts), which I believe is about as good as it gets with NHibernate (taking this post for reference: http://ayende.com/Blog/archive/2009/08/22/nhibernate-perf-tricks.aspx). But I might as well switch to SQL server and stop assuming things, right? I will update my post as soon as I test it.
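
    If the per-row ORM overhead turns out to be the real cost, rather than the one-row-versus-ten-rows layout, one option on the SQL Server side is to keep the normalised schema but bypass NHibernate for this single hot path and stream the rows in with SqlBulkCopy. A hedged sketch; the table and column names are assumptions:

      using System.Data;
      using System.Data.SqlClient;

      public static class MeasurementBulkWriter
      {
          // rows: one DataRow per measurement, pre-flattened by the caller
          public static void Write(DataTable rows, string connectionString)
          {
              using (var bulk = new SqlBulkCopy(connectionString))
              {
                  bulk.DestinationTableName = "Measurement";   // assumed table name
                  bulk.BatchSize = 500;                        // matches the 500/second batches
                  bulk.WriteToServer(rows);
              }
          }
      }

    This keeps the denormalisation decision separate from the decision about how the inserts are issued, so the two effects can be measured on their own.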

    Read the article
