Search Results

Search found 19055 results on 763 pages for 'high performance'.


  • Recommended textbook for machine-level programming?

    - by Norman Ramsey
    I'm looking at textbooks for an undergraduate course in machine-level programming. If the perfect book existed, this is what it would look like:

    - Uses examples written in C or assembly language, or both.
    - Covers machine-level operations such as two's-complement integer arithmetic, bitwise operations, and floating-point arithmetic.
    - Explains how caches work and how they affect performance.
    - Explains machine instructions or assembly instructions. Bonus if the example assembly language includes x86; triple bonus if it includes x86-64 (aka AMD64).
    - Explains how C values and data structures are represented using hardware registers and memory.
    - Explains how C control structures are translated into assembly language using conditional and unconditional branch instructions.
    - Explains something about procedure calling conventions and how procedure calls are implemented at the machine level.

    Books I might be interested in would probably have the words "machine organization" or "computer architecture" in the title. Here are some books I'm considering but am not quite happy with:

    Computer Systems: A Programmer's Perspective by Randy Bryant and Dave O'Hallaron. This is quite a nice book, but it's a book for a broad, shallow course in systems programming, and it contains a great deal of material my students don't need. Also, it is just out in a second edition, which will make it expensive.

    Computer Organization and Design: The Hardware/Software Interface by Dave Patterson and John Hennessy. This is also a very nice book, but it contains way more information about how the hardware works than my students need. Also, the exercises look boring. Finally, it has a show-stopping bug: it is based very heavily on MIPS hardware and the use of a MIPS simulator. My students need to learn how to use DDD, and I can't see getting this to work on a simulator. Not to mention that I can't see them cross-compiling their code for the simulator, and so on and so forth. Another flaw is that the book mentions the x86 architecture only to sneer at it. I am entirely sympathetic to this point of view, but news flash! You guys lost!

    Write Great Code Vol I: Understanding the Machine by Randall Hyde. I haven't evaluated this book as thoroughly as the other two. It has a lot of what I need, but the translation from high-level language to assembler is deferred to Volume Two, which has mixed reviews. My students will be annoyed if I make them buy a two-volume series, even if the price of those two volumes is smaller than the price of other books.

    I would really welcome other suggestions of books that would help students in a class where they are to learn how C-language data structures and code are translated to machine-level data structures and code and where they learn how to think about performance, with an emphasis on the cache.

    Read the article

  • Exploding by Array of Delimiters

    - by JoeC
    Is there any way to explode() using an array of delimiters? The PHP manual gives: array explode ( string $delimiter , string $string [, int $limit ] ). Instead of using a string $delimiter, is there any way to use an array $delimiter without affecting performance too much?
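
    PHP's explode() itself only takes a single string delimiter, so the usual workaround is a regular-expression split whose pattern is built from the delimiter list (in PHP, preg_split with the delimiters run through preg_quote and joined with |). Below is a minimal sketch of that idea in Python rather than PHP; the delimiter list and sample string are made up for illustration.

        # Split on any of several delimiters with one regex; PHP's preg_split with a
        # pattern built from preg_quote'd delimiters joined by | is the analogous tool.
        import re

        def explode_many(delimiters, text):
            """Split text on any delimiter in the list (delimiters are matched literally)."""
            pattern = "|".join(re.escape(d) for d in delimiters)
            return re.split(pattern, text)

        print(explode_many([", ", ";", " and "], "red, green;blue and yellow"))
        # ['red', 'green', 'blue', 'yellow']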

    Read the article

  • Is DateTime.ParseExact() faster than DateTime.Parse()?

    - by Nassign
    I would like to know if ParseExact is faster than Parse. I think it should be ParseExact, since you already gave the format, but I also think all the checking of the culture info would slow it down. Does Microsoft say anything in any document about the performance difference between the two? The format to be used is a generic 'yyyy/MM/dd' format.

    Read the article

  • PostgreSQL or MS SQL Server?

    - by mmiika
    I'm considering using PostgreSQL with a .Net web app, basically for three reasons: mature geo queries, small footprint + Linux, and price. I'm wondering a bit about tooling, though; SQL Server Profiler, query plans, and performance monitors have been helpful. What is this world like with Postgres? Are there other things I should consider? Edit: I will most likely use NHibernate as the ORM.

    Read the article

  • nokogiri vs hpricot?

    - by roshan
    Which one would you choose? My important attributes are (not in order):

    - support and future enhancements
    - community and general knowledge base (on the Internet)
    - comprehensive (i.e. proven to parse a wide range of *.*ml pages)
    - performance
    - memory footprint (at runtime, not the code base)

    Read the article

  • Edit very large xml files

    - by Matt
    I would like to create a text box which loads XML files and lets users edit them. However, I cannot use XmlDocument to load them, since the files can be very large. I am looking for options to stream/load the XML document in chunks so that I do not get out-of-memory errors -- at the same time, performance is important too. Could you let me know what good options would be?
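
    The usual answer here is a pull/streaming parser rather than a DOM: read and hand off the document piece by piece instead of materializing it all at once (in .NET, XmlReader plays that role). Below is a rough sketch of the pattern in Python rather than .NET; the file name and tag name are made up.

        # Stream a large XML file element by element instead of building the whole
        # document in memory; discard each element once it has been handed off.
        import xml.etree.ElementTree as ET

        def stream_records(path, tag):
            """Yield the text of each <tag> element while keeping memory use flat."""
            for _, elem in ET.iterparse(path, events=("end",)):
                if elem.tag == tag:
                    yield elem.text
                    elem.clear()   # drop children/text we no longer need

        for chunk in stream_records("huge.xml", "record"):
            pass  # feed each chunk to the editor view instead of loading everything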

    Read the article

  • Pruning data for better viewing on loglog graph - Matlab

    - by Geodesic
    Hi guys, just wondering if anyone has any ideas about an issue I'm having. I have a fair amount of data that needs to be displayed on one graph. Two theoretical lines that are bold and solid are displayed on top, then 10 experimental data sets that converge to these lines are graphed, each using a different identifier (e.g. +, o, a square, etc.). These graphs are on a log scale that goes up to 1e6. The first few decades of the graph (< 1e3) look fine, but as all the data sets converge (> 1e3) it's really difficult to see which data is which. There are over 1000 data points per decade, which I can prune linearly to an extent, but if I do this too much the lower end of the graph will suffer in resolution.

    What I'd like to do is prune logarithmically, strongest at the high end, working back to 0. My question is: how can I get a logarithmically scaled index vector rather than a linear one? My initial assumption was that as my data is linear I could just use a linear index to prune, which led to something like this (but for all decades):

        % grab indices per decade
        ind12 = find(y >= 1e1 & y <= 1e2);
        indlow = find(y < 1e2);
        indhigh = find(y > 1e4);
        ind23 = find(y >= 1e2 & y <= 1e3);
        ind34 = find(y >= 1e3 & y <= 1e4);

        % we want ind12 indexes in this decade, find spacing
        tot23 = round(length(ind23)/length(ind12));
        tot34 = round(length(ind34)/length(ind12));

        % grab ones to keep
        ind23keep = ind23(1):tot23:ind23(end);
        ind34keep = ind34(1):tot34:ind34(end);

        indnew = [indlow' ind23keep ind34keep indhigh'];
        loglog(x(indnew), y(indnew));

    But this causes the prune to behave in a jumpy fashion, obviously. Each decade has the number of points that I'd like, but as it's a linear distribution, the points tend to be clumped at the high end of the decade on the log scale. Any ideas on how I can do this?
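
    One way to get a logarithmically scaled index vector is to sample the data at logarithmically spaced target values rather than at every k-th point, so each decade keeps roughly the same number of points. Below is a sketch of that idea in Python/NumPy (Matlab's logspace plus interpolation/unique would do the same job); the function and parameter names are made up, and it assumes y is sorted ascending and positive.

        # Keep ~points_per_decade samples per decade by aiming at log-spaced targets.
        import numpy as np

        def log_prune_indices(y, points_per_decade=50):
            """Indices into sorted, positive y that are roughly evenly spaced in log10(y)."""
            lo, hi = np.log10(y[0]), np.log10(y[-1])
            n = int(np.ceil((hi - lo) * points_per_decade))
            targets = np.logspace(lo, hi, n)               # log-spaced values to aim for
            idx = np.searchsorted(y, targets)              # nearest data index at/above each target
            return np.unique(np.clip(idx, 0, len(y) - 1))  # dedupe and stay in bounds

        # Usage sketch (matplotlib's loglog is the Python counterpart of Matlab's):
        # keep = log_prune_indices(y, points_per_decade=100)
        # plt.loglog(x[keep], y[keep])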

    Read the article

  • What is the need of collection framework in java?

    - by JavaUser
    Hi, what is the need for the Collection framework in Java, given that all the data operations (sorting/adding/deleting) are possible with arrays, and moreover arrays are better for memory consumption and their performance is also better compared with Collections? Can anyone point me to a real-world, data-oriented example that shows the difference between the two implementations (array vs. Collections)? Thx

    Read the article

  • How to reduce UIImage size as much as possible

    - by Tharindu Madushanka
    I am using the following code to resize the image: Resize a UIImage Right Way. I use kCGInterpolationLow as the interpolation quality, and then I use UIImageJPEGRepresentation(image, 0.0) to get the NSData of that image. It is still a little high in size, around 100 KB, when I send it over the network. Can I reduce it further? If I were to reduce it more, what could I do? Thanks and kind regards,
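
    For what it's worth, the encoded size is driven largely by the pixel dimensions, so scaling the image down further before JPEG-encoding usually saves more bytes than pushing the quality parameter to 0. Below is a quick illustration of that trade-off in Python/Pillow rather than iOS code (on iOS the equivalent knobs are the size you resize to and the quality argument of UIImageJPEGRepresentation); the file name and 320 px target are made up.

        # Compare byte sizes: full-resolution low quality vs. downscaled moderate quality.
        from io import BytesIO
        from PIL import Image

        def jpeg_bytes(img, quality):
            buf = BytesIO()
            img.save(buf, "JPEG", quality=quality, optimize=True)
            return len(buf.getvalue())

        img = Image.open("photo.jpg")        # hypothetical input image
        small = img.copy()
        small.thumbnail((320, 320))          # cap the longer side at 320 px

        print("full size, quality 10:", jpeg_bytes(img, 10), "bytes")
        print("320 px,    quality 40:", jpeg_bytes(small, 40), "bytes")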

    Read the article

  • Information Gain and Entropy

    - by dhorn
    I recently read this question regarding information gain and entropy. I think I have a semi-decent grasp of the main idea, but I'm curious as to what to do with situations such as the following: if we have a bag of 7 coins, 1 of which is heavier than the others, and 1 of which is lighter than the others, and we know the heavier coin + the lighter coin weigh the same as 2 normal coins, what is the information gain associated with picking two random coins and weighing them against each other? Our goal here is to identify the two odd coins. I've been thinking this problem over for a while, and can't frame it correctly in a decision tree, or any other way for that matter. Any help?

    EDIT: I understand the formula for entropy and the formula for information gain. What I don't understand is how to frame this problem in a decision tree format.

    EDIT 2: Here is where I'm at so far:

    Assuming we pick two coins and they both end up weighing the same, we can assume our new chances of picking H+L come out to 1/5 * 1/4 = 1/20, easy enough.

    Assuming we pick two coins and the left side is heavier, there are three different cases where this can occur:
    - HM: which gives us a 1/2 chance of picking H and a 1/4 chance of picking L: 1/8
    - HL: 1/2 chance of picking high, 1/1 chance of picking low: 1/1
    - ML: 1/2 chance of picking low, 1/4 chance of picking high: 1/8

    However, the odds of us picking HM are 1/7 * 5/6, which is 5/42. The odds of us picking HL are 1/7 * 1/6, which is 1/42. And the odds of us picking ML are 1/7 * 5/6, which is 5/42.

    If we weight the overall probabilities with these odds, we are given: (1/8) * (5/42) + (1/1) * (1/42) + (1/8) * (5/42) = 3/56. The same holds true for option B.

    option A = 3/56
    option B = 3/56
    option C = 1/20

    However, option C should be weighted heavier because there is a 5/7 * 4/6 chance of picking two mediums. So I'm assuming from here I weight THOSE odds. I am pretty sure I've messed up somewhere along the way, but I think I'm on the right path!

    EDIT 3: More stuff. Assuming the scale is unbalanced, the odds are (10/11) that only one of the coins is the H or L coin, and (1/11) that both coins are H/L. Therefore we can conclude: (10/11) * (1/2 * 1/5) and (1/11) * (1/2)

    EDIT 4: Going to go ahead and say that it is a total 4/42 increase.
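
    One way to sanity-check the decision-tree arithmetic is to compute the same quantity by brute force: enumerate the 42 equally likely (heavy, light) assignments, partition them by the outcome of weighing two particular coins, and take information gain = prior entropy minus the expected entropy of the partition. Below is a minimal sketch in Python; the choice of weighing coins 0 and 1 and the helper names are made up for the example.

        # Enumerate all (heavy, light) assignments and measure the information gained
        # by one weighing of coin 0 against coin 1.
        from collections import Counter
        from itertools import permutations
        from math import log2

        states = list(permutations(range(7), 2))        # (heavy index, light index), 42 states

        def outcome(state, a=0, b=1):
            """'left', 'right', or 'balanced' for weighing coin a against coin b."""
            h, l = state
            weight = lambda c: 1 + (c == h) - (c == l)  # normal=1, heavy=2, light=0
            wa, wb = weight(a), weight(b)
            return "left" if wa > wb else "right" if wa < wb else "balanced"

        def uniform_entropy(n):
            return log2(n) if n else 0.0                 # entropy of n equally likely states

        prior = uniform_entropy(len(states))
        split = Counter(outcome(s) for s in states)      # states grouped by observable result
        remaining = sum(n / len(states) * uniform_entropy(n) for n in split.values())

        print(f"prior entropy      : {prior:.3f} bits")
        print(f"expected remaining : {remaining:.3f} bits")
        print(f"information gain   : {prior - remaining:.3f} bits")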

    Read the article

  • Reasons for Parallel Extensions working slowly

    - by darja
    I am trying to make my calculating application faster by using Parallel Extensions. I am new to it, so I have just replaced the main foreach loop with Parallel.ForEach. But the calculation became slower. What can be common reasons for decreased performance with Parallel Extensions? Thanks
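
    The most common reason is that each loop iteration does too little work, so the cost of dispatching iterations to other threads (plus any locking or contention on shared state inside the body) outweighs the work itself; batching iterations into larger chunks is the usual fix. The sketch below illustrates the same trade-off in Python rather than .NET (the function, item count, and chunk sizes are made up): handing workers one tiny item at a time is far slower than handing them large chunks, and for work this cheap even the chunked version may not beat the plain loop.

        # Tiny per-item work: per-dispatch overhead dominates unless items are chunked.
        import time
        from concurrent.futures import ProcessPoolExecutor

        def tiny(x):
            return x * x          # far too little work to pay for inter-process dispatch

        def timed(label, fn):
            start = time.perf_counter()
            fn()
            print(f"{label}: {time.perf_counter() - start:.2f}s")

        if __name__ == "__main__":
            data = range(50_000)
            with ProcessPoolExecutor() as pool:
                timed("sequential           ", lambda: [tiny(x) for x in data])
                timed("parallel, 1 per task ", lambda: list(pool.map(tiny, data, chunksize=1)))
                timed("parallel, chunked    ", lambda: list(pool.map(tiny, data, chunksize=5_000)))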

    Read the article

  • Error in playing a swf file on internet explorer

    - by Rajeev
    With the code below I get an error saying "Error #2007: Parameter url must be non-null", on Internet Explorer only. What am I doing wrong here?

    html

        <OBJECT classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" WIDTH="50" HEIGHT="50" id="myMovieName">
          <PARAM NAME="movie" VALUE="/media/players/testsound.swf" />
          <PARAM NAME="quality" VALUE="high" />
          <PARAM NAME="bgcolor" VALUE="#FFFFFF" />
          <EMBED href="/media/players/testsound.swf" src="/media/players/testsound.swf"
                 flashvars="soundUrl=sound.mp3" quality=high bgcolor=#FFFFFF
                 NAME="myMovieName" ALIGN="" TYPE="application/x-shockwave-flash">
          </EMBED>
        </OBJECT>

    mxml

        <fx:Script>
          <![CDATA[
            import flash.net.*;
            import mx.controls.Alert;
            import mx.controls.Button;
            import flash.events.Event;
            import flash.media.Sound;
            import flash.net.URLRequest;

            private function clickhandler(event:Event):void {
                var musicfile:String;
                var s:Sound = new Sound();
                s.addEventListener(Event.COMPLETE, onSoundLoaded);
                var req:URLRequest = new URLRequest("/opt/cloodon/site/media/players/sound.mp3");
                //musicfile = stage.loaderInfo.parameters["soundUrl"];
                //var req:URLRequest = new URLRequest(musicfile);
                s.load(req);
            }

            private function onSoundLoaded(event:Event):void {
                //Alert.show("1");
                //var localSound:Sound = event.currentTarget as Sound;
                var localSound:Sound = event.target as Sound;
                localSound.play();
            }
          ]]>
        </fx:Script>

        <fx:Declarations>
          <!-- Place non-visual elements (e.g., services, value objects) here -->
        </fx:Declarations>

        <!--<mx:Button id="play" label="PLAY" click="clickhandler(event)" />-->
        <mx:Image id="loader1" source="@Embed(source='/opt/cloodon/site/media/img/speaker.gif')"
                  click="clickhandler(event)" />
    </s:Application>

    Read the article

  • WPF: ComboBox Max Items?

    - by Jonathan Allen
    What is the maximum number of items you can put in a WPF ComboBox before it starts suffering serious performance degradation? (Assume a bare-bones XP business-class computer.) What is the maximum number of items you can put in a WPF ComboBox before a typical user will start complaining?

    Read the article

  • Uploading youtube videos through a common account

    - by Dave
    Is it possible to have users of my site upload videos from their home computers onto MY YouTube account? What does that involve, and can someone point me to some relevant examples? I DON'T want to host the videos myself, even temporarily, unless it's completely unavoidable. These will be short videos though, maybe about a minute or two long, not high def, so it's not too unimaginable...

    Read the article

  • How to create sleek customized buttons, tables and other views for iPhone/iPad apps?

    - by wgpubs
    I'm looking to know both what can be customized and the recommended way to customize some of the major UIView subclasses in the iPhone SDK (in particular UIButton and UITableView/UITableViewCell, but I'm really open to any of the views in the SDK). Any recommended tutorials? Examples? Are there bad practices that can actually hinder performance and/or destabilize your app in any way and should be avoided? Thanks

    Read the article
