Search Results

Search found 17045 results on 682 pages for 'high cpu usage'.

Page 107/682

  • Looking for a good book on microprocessor internals

    - by David Holm
    I'm looking for a good book on how modern microprocessors are designed and work, as I would like to increase my understanding of what makes them tick. Something that covers pipelines, superscalar architectures, caches, etc. A book suitable for a programmer with several years of experience who has done assembly programming and understands machine language, so basically not "CPUs for Dummies" or anything like that. What books do the people who design today's processors read, for instance?

    Read the article

  • branch prediction

    - by Alexander
    Consider the following sequence of actual outcomes for a single static branch. T means the branch is taken; N means the branch is not taken. For this question, assume that this is the only branch in the program. T T T N T N T T T N T N T T T N T N Assume a two-level branch predictor that uses one bit of branch history—i.e., a one-bit BHR. Since there is only one branch in the program, it does not matter how the BHR is concatenated with the branch PC to index the BHT. Assume that the BHT uses one-bit counters and that, again, all entries are initialized to N. Which of the branches in this sequence would be mis-predicted? Use the table below. Now I am not asking for answers to this question, but rather for guides and pointers on it. What does a two-level branch predictor mean and how does it work? What do BHR and BHT stand for? (A small illustrative sketch follows below.)

    Read the article
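    The following is a minimal sketch, in C, of the predictor described above, under the assumption that the BHR, like the BHT entries, starts at N: one bit of history selects one of two one-bit counters, the selected counter gives the prediction, and both the counter and the history are updated with the actual outcome.

        #include <stdio.h>

        int main(void) {
            const char seq[] = "TTTNTNTTTNTNTTTNTN";   /* the sequence from the question */
            int bht[2] = {0, 0};   /* one-bit counters: 0 = predict N, 1 = predict T */
            int bhr = 0;           /* one-bit history: 0 = last outcome N, 1 = T (assumed to start at N) */
            int mispredicts = 0;

            for (int i = 0; seq[i] != '\0'; i++) {
                int outcome = (seq[i] == 'T');
                int prediction = bht[bhr];
                if (prediction != outcome) {
                    mispredicts++;
                    printf("branch %2d: predicted %c, actual %c -> mispredict\n",
                           i + 1, prediction ? 'T' : 'N', outcome ? 'T' : 'N');
                }
                bht[bhr] = outcome;   /* train the one-bit counter that was used */
                bhr = outcome;        /* shift the outcome into the history register */
            }
            printf("total mispredictions: %d\n", mispredicts);
            return 0;
        }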

  • cache memory performance

    - by Krewie
    Hello, I just have a general question about cache memory. How could a program perform badly on a cache-based system, given that the cache stores data from the main-memory addresses that are requested, as well as data from the addresses immediately around the one copied from main memory? (A small sketch follows below.)

    Read the article
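    One common answer, sketched below in C as an assumption about what "performs badly" means here, is an access pattern with poor spatial locality. Traversing a row-major array column by column touches a new cache line on almost every access, so the program gets little benefit from the neighbouring data the cache pulled in with each line.

        #include <stdio.h>
        #define N 1024

        static int grid[N][N];

        int main(void) {
            long sum = 0;

            /* Good locality: consecutive accesses fall in the same cache line. */
            for (int row = 0; row < N; row++)
                for (int col = 0; col < N; col++)
                    sum += grid[row][col];

            /* Poor locality: each access jumps N * sizeof(int) bytes ahead,
               so nearly every access misses in a small cache. */
            for (int col = 0; col < N; col++)
                for (int row = 0; row < N; row++)
                    sum += grid[row][col];

            printf("%ld\n", sum);
            return 0;
        }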

  • Linux HA / cluster: what are the differences between Pacemaker, Heartbeat, Corosync, wackamole?

    - by Continuation
    Can you help me understand Linux HA? Pacemaker, Heartbeat, Corosync seem to be part of a whole HA stack, but how do they fit together? How does wackamole differ from Pacemaker/Heartbeat/Corosync? I've seen opinions that wackamole is better than Heartbeat because it's peer-based. Is that valid? The last release of wackamole was 2.5 years ago. Is it still being maintained or active? What would you recommend for HA setup for web/application/database servers?

    Read the article

  • Why is the JVM stack-based and the DalvikVM register based?

    - by aioobe
    I'm curious: why did Sun decide to make the JVM stack-based and Google decide to make the DalvikVM register-based? I suppose the JVM can't really assume that a certain number of registers are available on the target platform, since it is supposed to be platform independent. Therefore it just postpones register allocation, etc., to the JIT compiler. (Correct me if I'm wrong.) So the Android guys thought, "hey, that's inefficient, let's go for a register-based VM right away..."? But wait, there are multiple different Android devices; what number of registers did Dalvik target? Are the Dalvik opcodes hardcoded for a certain number of registers? Do all current Android devices on the market have about the same number of registers? Or is there a register re-allocation performed during dex-loading? How does all this fit together? (A small sketch follows below.)

    Read the article
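    To make the stack-based vs. register-based distinction concrete, here is a toy sketch in C (not real JVM or Dalvik bytecode) of the same statement, c = a + b, evaluated by a stack machine and by a register machine. The register form names all its operands in one instruction, while the stack form moves every operand through an operand stack; the registers here are virtual-machine registers, not hardware ones.

        #include <stdio.h>

        int main(void) {
            int a = 3, b = 4;

            /* Stack machine: every operand travels through an explicit stack. */
            int stack[8], sp = 0;
            stack[sp++] = a;               /* iload a  */
            stack[sp++] = b;               /* iload b  */
            int rhs = stack[--sp];
            int lhs = stack[--sp];
            stack[sp++] = lhs + rhs;       /* iadd     */
            int c_stack = stack[--sp];     /* istore c */

            /* Register machine: a single three-operand instruction. */
            int regs[8];
            regs[0] = a;
            regs[1] = b;
            regs[2] = regs[0] + regs[1];   /* add v2, v0, v1 */
            int c_reg = regs[2];

            printf("%d %d\n", c_stack, c_reg);
            return 0;
        }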

  • Best CPUs for speeding up compiling times of C++ w/ DistGCC

    - by Jay
    I'm putting together a distributed build farm with DistGCC to speed up our team's compile times and am just looking for thoughts on which processors to use in the hosts. Are we going to get a noticeable decrease in time using 8 cores vs. 4 hyperthreaded cores? Is there a big difference in time between i7 and Xeon? Etc., etc. I just need advice from people who've put together kick-a build clusters. We've got a majority of the usual things to speed up builds in place (pre-compiled headers, ccache, local gigabit connections between the hosts, tons of RAM, etc.), so please just give advice on the best processor to use. Money is a factor, but anything's doable if the performance increase is noticeable. Thanks. Jay

    Read the article

  • P6 Architecture - Register renaming aside, do the limited user registers result in more ops spent

    - by mrjoltcola
    I'm studying JIT design with regard to dynamic-language VM implementation. I haven't done much assembly since the 8086/8088 days, just a little here or there, so be nice if I'm out of sorts. As I understand it, the x86 (IA-32) architecture still has the same basic limited register set today that it always did, while the internal register count has grown tremendously; but those internal registers are not generally available, and are used with register renaming to achieve parallel pipelining of code that otherwise could not be parallelized. I understand this optimization pretty well, but my feeling is that, while these optimizations help overall throughput and parallel algorithms, the limited register set we are still stuck with results in more register-spilling overhead, such that if x86 had double or quadruple the registers available to us, there might be significantly fewer push/pop opcodes in a typical instruction stream. Or are there other processor optimizations that also optimize this away that I am unaware of? Basically, if I have a unit of code that has 4 registers to work with for integer work, but my unit has a dozen variables, I've got potentially a push/pop for every 2 or so instructions. Any references to studies, or better yet, personal experiences? (A small sketch follows below.)

    Read the article
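    A small, hedged illustration of the register-pressure point: IA-32 gives the compiler only a handful of allocatable general-purpose registers (eax, ebx, ecx, edx, esi, edi, and sometimes ebp), so a function with more simultaneously live values than that forces some of them onto the stack between uses. The function below is only a sketch of such pressure; the exact spill behaviour depends on the compiler and its options.

        #include <stdio.h>

        /* All eight inputs stay live until the final expression. Compiled for
           IA-32 this typically spills some values to the stack; compiled for
           x86-64, with 16 general-purpose registers, it usually does not. */
        static int many_live_values(int a, int b, int c, int d,
                                    int e, int f, int g, int h) {
            int t1 = a * b, t2 = c * d, t3 = e * f, t4 = g * h;
            return (t1 + t2) * (t3 + t4) + a + b + c + d + e + f + g + h;
        }

        int main(void) {
            printf("%d\n", many_live_values(1, 2, 3, 4, 5, 6, 7, 8));
            return 0;
        }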

  • Could this code damage my processor??!!

    - by Osama Gamal
    A friend sent me that code and alleges that it could damage the processor. Is that true?

        void damage_processor() {
            while (true) {
                // Assembly code that sets the five control registers bits to ones,
                // which causes a bunch of exceptions in the system and then damages
                // the processor
                Asm(
                    "mov cr0, 0xffffffff \n\t"
                    "mov cr1, 0xffffffff \n\t"
                    "mov cr2, 0xffffffff \n\t"
                    "mov cr3, 0xffffffff \n\t"
                    "mov cr4, 0xffffffff \n\t"
                )
            }
        }

    Is that true?

    Read the article

  • Am I "wasting" my time learning C and other low-level stuff?

    - by Andreas Grech
    I have just recently started learning C, and the reason I did that was because, frankly, I consider myself to be less of a developer than the people who know and work with C. So I planned to start learning ASM, C, and C++, bought the K&R book, and started pushing myself to learn the C programming language, and up till now I'm doing great...learning about arrays the low-level way (i.e. the pointer + offset thing), pointers and all that, and obviously asking questions on stackoverflow for guidance. My problem is that sometimes I get to thinking that instead of learning this low-level stuff, maybe I should spend more time learning newer, more widely used technologies...basically, more web stuff. Now I am well versed in both C# and ASP.Net, and currently that's what I do for a living, but there still exist Microsoft technologies that I haven't quite touched upon, such as ASP.Net MVC, the Entity Framework, etc. And those are only Microsoft technologies; obviously there is other stuff that I would like to touch upon, stuff like Ruby (which would lead me to Ruby on Rails), or Python for Django, or even Java and J2EE, or maybe PHP; basically, mainly web stuff. Mind you, I did touch upon some of the stuff I mentioned earlier, such as PHP and Java, but I am still not as well versed in them as I am in C# and ASP.Net. Still, I think that learning other languages used in the web environment would broaden my horizons, both as a developer who loves learning and career-wise. My point is: am I really using my time correctly by learning older, lower-level stuff, stuff that for my current line of work I will most probably never use, but that is still interesting to know? To be frank, I am also learning C so that I could maybe someday get into electronics and microcontroller programming, but that is a whole new world for me and, if I choose to go there, will take some time to get adjusted to. And even then, I don't know if I can make a career out of that line of work. But I still wonder about this question over and over: am I doing the right thing by learning C instead of something (web stuff) that will most probably be more useful for me career-wise? I'm sorry for asking such a long and probably boring question, but I feel as if this is the only place where I can ask it and get an honest answer from experts in the field. Thank you for your time.

    Read the article

  • Zero Downtime with Hibernate

    - by Stephan Schmidt
    What changes to a database (MySQL in this case) does Hibernate survive (data, schema, ...)? I ask because I want zero downtime with Hibernate: change the database, split the app servers into two clusters, redeploy the application on one of the clusters, and switch over. Thanks, Stephan

    Read the article

  • lowest latency, least overhead app server?

    - by Mark Harrison
    I'm designing an application which will have a network interface for serving large numbers of very small metadata requests. The application code itself is very fast, basically looking up data cached in memory and sending it to the client. What's the absolute lowest latency I can get for a network application server running on a Linux box? This will be an internal app running on gigE with no authentication. Any language/framework will be considered, with a preference for C, C++, or Python. Likewise for the protocol, although HTTP would be nice. (A small sketch follows below.)

    Read the article
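    For reference, the lowest-overhead shape this usually takes on Linux is a single-threaded, non-blocking event loop over epoll speaking a minimal protocol. The sketch below (in C, with error handling trimmed and an arbitrary port number) just echoes each request; the echo stands in for the in-memory metadata lookup.

        #define _GNU_SOURCE
        #include <netinet/in.h>
        #include <sys/epoll.h>
        #include <sys/socket.h>
        #include <unistd.h>

        int main(void) {
            int listener = socket(AF_INET, SOCK_STREAM | SOCK_NONBLOCK, 0);
            int one = 1;
            setsockopt(listener, SOL_SOCKET, SO_REUSEADDR, &one, sizeof one);

            struct sockaddr_in addr = {0};
            addr.sin_family = AF_INET;
            addr.sin_addr.s_addr = htonl(INADDR_ANY);
            addr.sin_port = htons(9000);               /* arbitrary internal port */
            bind(listener, (struct sockaddr *)&addr, sizeof addr);
            listen(listener, SOMAXCONN);

            int ep = epoll_create1(0);
            struct epoll_event ev = { .events = EPOLLIN, .data.fd = listener };
            epoll_ctl(ep, EPOLL_CTL_ADD, listener, &ev);

            struct epoll_event events[64];
            char buf[4096];
            for (;;) {
                int n = epoll_wait(ep, events, 64, -1);
                for (int i = 0; i < n; i++) {
                    int fd = events[i].data.fd;
                    if (fd == listener) {
                        /* New connection: register it with the event loop. */
                        int client = accept4(listener, NULL, NULL, SOCK_NONBLOCK);
                        struct epoll_event cev = { .events = EPOLLIN, .data.fd = client };
                        epoll_ctl(ep, EPOLL_CTL_ADD, client, &cev);
                    } else {
                        ssize_t len = read(fd, buf, sizeof buf);
                        if (len <= 0) { close(fd); continue; }
                        /* The in-memory lookup would happen here; echo stands in for it. */
                        write(fd, buf, (size_t)len);
                    }
                }
            }
        }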

  • Working with ieee format numbers in ARM

    - by Jake Sellers
    I'm trying to write an ARM program that will convert an IEEE number to a TNS format number. TNS is a format used by some supercomputers, and is similar to IEEE but different. I'm trying to use several masks to place the three different "parts" of the IEEE number in separate registers so I can move them around accordingly. Here is my unpack subroutine:

        UnpackIEEE
            LDR r1, SMASK       ;load the sign bit mask into r1
            LDR r2, EMASK       ;load the exponent mask into r2
            LDR r3, GMASK       ;load the significand mask into r3
            AND r4, r0, r1      ;apply sign mask to IEEE and save into r4
            AND r5, r0, r2      ;apply exponent mask to IEEE and save into r5
            AND r6, r0, r3      ;apply significand mask to IEEE and save into r6
            MOV pc, r14         ;return

    And here are the masks and number declarations so you can understand:

        IEEE  DCD 0x40300000   ;2.75 decimal, or 01000000001100000000000000000000 binary
        SMASK DCD 0x80000000   ;Sign bit mask
        EMASK DCD 0x7F800000   ;Exponent mask
        GMASK DCD 0x007FFFFF   ;Significand mask

    When I step through with the debugger, the results I get are not what I expect after working through it on paper. EDIT: What I mean is that after the subroutine runs, registers 4, 5, and 6 all remain 0. I can't figure out why the masks are not working. I think I either do not fully understand how the number is stored in the register or am using the masks wrong. Any help appreciated. If you need more info, just ask. EDIT: Entry point (very simple, just trying to get these subroutines working):

        ENTRY
            LDR r1, IEEE        ;load IEEE num into r1
            BL  UnpackIEEE      ;call unpack sub
            SWI SWI_Exit        ;finish

    (A small sketch of the expected field values follows below.)

    Read the article
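    As a cross-check (not a fix for the ARM code itself), here is the same three-way unpack written in C, so the values the masks should produce for 0x40300000 (2.75 in IEEE-754 single precision) can be compared against what the debugger shows.

        #include <stdint.h>
        #include <stdio.h>

        int main(void) {
            uint32_t ieee = 0x40300000u;                          /* 2.75 */
            uint32_t sign        = (ieee & 0x80000000u) >> 31;
            uint32_t exponent    = (ieee & 0x7F800000u) >> 23;    /* biased by 127 */
            uint32_t significand =  ieee & 0x007FFFFFu;           /* without the hidden 1 */

            printf("sign=%u exponent=%u (unbiased %d) significand=0x%06X\n",
                   (unsigned)sign, (unsigned)exponent,
                   (int)exponent - 127, (unsigned)significand);
            return 0;
        }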

  • Squid handling of concurrent cache misses

    - by Oliver H-H
    We're using a Squid cache to off-load traffic from our web servers; i.e., it's set up as a reverse proxy responding to inbound requests before they hit our web servers. When we get blitzed with concurrent requests for the same object that's not in the cache, Squid proxies all of the requests through to our web ("origin") servers. For us, this behavior isn't ideal: our origin servers get bogged down trying to fulfill N identical requests concurrently. Instead, we'd like the first request to proxy through to the origin server, the rest of the requests to queue at the Squid layer, and then all of them to be fulfilled by Squid once the origin server has responded to that first request. Does anyone know how to configure Squid to do this? We've read through the documentation multiple times and thoroughly web-searched the topic, but can't figure out how to do it. We use Akamai too and, interestingly, this is its default behavior. (However, Akamai has so many nodes that we still see lots of concurrent requests in certain traffic-spike scenarios, even with Akamai's super-node feature enabled.) This behavior is clearly configurable for some other caches; e.g., the Ehcache documentation offers the option "Concurrent Cache Misses: A cache miss will cause the filter chain, upstream of the caching filter to be processed. To avoid threads requesting the same key to do useless duplicate work, these threads block behind the first thread." Some folks call this behavior a "blocking cache," since the subsequent concurrent requests block behind the first request until it's fulfilled or timed out. Thx for looking over my noob question! Oliver (A small sketch follows below.)

    Read the article
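    This doesn't answer the Squid-configuration question itself, but to pin down the "blocking cache" semantics being asked for, here is a small C/pthreads sketch: the first thread to miss on a key fetches from the origin, and concurrent requesters for the same key block until that fetch completes instead of hitting the origin themselves.

        #include <pthread.h>
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>

        static pthread_mutex_t lock   = PTHREAD_MUTEX_INITIALIZER;
        static pthread_cond_t  filled = PTHREAD_COND_INITIALIZER;
        static char cached[64];
        static int  state = 0;              /* 0 = empty, 1 = being fetched, 2 = ready */

        static void fetch_from_origin(char *out) {
            sleep(1);                       /* stand-in for the slow origin request */
            strcpy(out, "origin response");
        }

        static void *request(void *arg) {
            (void)arg;
            pthread_mutex_lock(&lock);
            if (state == 0) {
                state = 1;                  /* this thread owns the miss */
                pthread_mutex_unlock(&lock);
                char value[64];
                fetch_from_origin(value);   /* only this thread hits the origin */
                pthread_mutex_lock(&lock);
                strcpy(cached, value);
                state = 2;
                pthread_cond_broadcast(&filled);
            } else {
                while (state != 2)          /* everyone else queues behind it */
                    pthread_cond_wait(&filled, &lock);
            }
            printf("got: %s\n", cached);
            pthread_mutex_unlock(&lock);
            return NULL;
        }

        int main(void) {
            pthread_t t[4];
            for (int i = 0; i < 4; i++)
                pthread_create(&t[i], NULL, request, NULL);
            for (int i = 0; i < 4; i++)
                pthread_join(t[i], NULL);
            return 0;
        }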

  • Unary NOT/Integersize of the architecture

    - by sid_com
    From "Mastering Perl", Chapter 16, "Bit Operators", on unary NOT (~): "The unary NOT operator (sometimes called the complement operator), ~, returns the bitwise negation, or 1's complement, of the value, based on integer size of the architecture." Why does the following script output two different values?

        #!/usr/local/bin/perl
        use warnings;
        use 5.012;
        use Config;

        my $int_size   = $Config{intsize} * 8;
        my $value      = 0b1111_1111;
        my $complement = ~ $value;

        say length sprintf "%${int_size}b", $value;
        say length sprintf "%${int_size}b", $complement;

    Output:

        32
        64

    (A small sketch follows below.)

    Read the article
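    As an aside, the same integer-width dependence can be seen in C, where complementing the same bit pattern gives different results at 32 and 64 bits; that is the kind of "integer size of the architecture" effect the quoted passage describes.

        #include <stdint.h>
        #include <stdio.h>

        int main(void) {
            uint32_t v32 = 0xFF;
            uint64_t v64 = 0xFF;
            /* Same value, complemented at two different widths. */
            printf("~0xFF at 32 bits: 0x%08X\n",    (unsigned)~v32);
            printf("~0xFF at 64 bits: 0x%016llX\n", (unsigned long long)~v64);
            return 0;
        }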

  • Why don't stacks grow upwards (for security)?

    - by AshleysBrain
    This is related to the question 'Why do stacks typically grow downwards?', but more from a security point of view. I'm generally referring to x86. It strikes me as odd that the stack would grow downwards when buffers are usually written upwards in memory. For example, a typical C++ string has its end at a higher memory address than its beginning. This means that if there's a buffer overflow, you're overwriting further up the call stack, which I understand is a security risk, since it opens the possibility of changing return addresses and local variable contents. If the stack grew upwards in memory, wouldn't buffer overflows simply run into dead memory? Would this improve security? If so, why hasn't it been done? What about x64: do those stacks grow upwards, and if not, why not? (A small sketch follows below.)

    Read the article
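    A quick way to see the current behaviour (illustrative only, since where locals live is implementation-defined) is to print the address of a local variable at increasing call depth. On a typical x86/x86-64 Linux build the addresses decrease, i.e. the stack grows downwards.

        #include <stdio.h>

        static void frame(int depth) {
            int local;
            printf("depth %d: local at %p\n", depth, (void *)&local);
            if (depth < 3)
                frame(depth + 1);       /* each deeper frame sits at a lower address */
        }

        int main(void) {
            frame(0);
            return 0;
        }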

  • Is there any .Net JIT Support from chip vendors?

    - by NoMoreZealots
    I know that ARM actually has some support for Java (and Sun obviously does), but I haven't really seen any references to a chip vendor supporting a .Net JIT compiler. I know IBM and Intel both support C compilers, as do TI and many of the embedded chip vendors. When you think of it, all a JIT compiler is, is the last stages of compilation and optimization, which you would think would be a good match for a chip vendor's expertise. Perhaps a standardized plug-in compilation engine for the VM would make sense. Microsoft is targeting .Net to embedded Windows platforms as well, so they are fair game. Pete

    Read the article

  • How to hand-over a TCP listening socket with minimal downtime?

    - by Shtééf
    While this question is tagged EventMachine, generic BSD-socket solutions in any language are much appreciated too. Some background: I have an application listening on a TCP socket. It is started and shut down with a regular System V style init script. My problem is that it needs some time to start up before it is ready to service the TCP socket. It's not too long, perhaps only 5 seconds, but that's 5 seconds too long when a restart needs to be performed during a workday. It's also crucial that existing connections remain open and are finished normally. Reasons for a restart of the application are patches, upgrades, and the like. I unfortunately find myself in the position that, every once in a while, I need to do this kind of thing in production. The question: I'm looking for a way to do a neat hand-over of the TCP listening socket from one process to another, and as a result get only a split second of downtime. I'd like existing connections / sockets to remain open and finish processing in the old process, while the new process starts servicing new connections. Is there some proven method of doing this using BSD sockets? (Bonus points for an EventMachine solution.) Are there perhaps open-source libraries out there implementing this that I can use as is, or use as a reference? (Again, non-Ruby and non-EventMachine solutions are appreciated too!) (A small sketch follows below.)

    Read the article
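    One proven BSD-socket mechanism for this is descriptor passing: the old process sends the listening file descriptor to the new one over an AF_UNIX socket with an SCM_RIGHTS control message, keeps servicing its already-accepted connections, and the new process starts accepting on the inherited descriptor. Below is a C sketch of the two helpers involved; establishing the AF_UNIX connection between the processes (e.g. via a named socket or socketpair) and all error handling are left out.

        #include <string.h>
        #include <sys/socket.h>
        #include <sys/uio.h>

        /* Send one file descriptor over a connected AF_UNIX socket. */
        int send_fd(int unix_sock, int fd_to_pass) {
            char byte = 0;
            struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
            char ctrl[CMSG_SPACE(sizeof(int))];
            struct msghdr msg = {0};
            msg.msg_iov = &iov;
            msg.msg_iovlen = 1;
            msg.msg_control = ctrl;
            msg.msg_controllen = sizeof ctrl;

            struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
            cmsg->cmsg_level = SOL_SOCKET;
            cmsg->cmsg_type  = SCM_RIGHTS;
            cmsg->cmsg_len   = CMSG_LEN(sizeof(int));
            memcpy(CMSG_DATA(cmsg), &fd_to_pass, sizeof(int));

            return (int)sendmsg(unix_sock, &msg, 0);
        }

        /* Receive a file descriptor sent by send_fd(); returns it, or -1. */
        int recv_fd(int unix_sock) {
            char byte;
            struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
            char ctrl[CMSG_SPACE(sizeof(int))];
            struct msghdr msg = {0};
            msg.msg_iov = &iov;
            msg.msg_iovlen = 1;
            msg.msg_control = ctrl;
            msg.msg_controllen = sizeof ctrl;

            if (recvmsg(unix_sock, &msg, 0) <= 0)
                return -1;

            struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
            if (cmsg == NULL || cmsg->cmsg_type != SCM_RIGHTS)
                return -1;

            int fd;
            memcpy(&fd, CMSG_DATA(cmsg), sizeof(int));
            return fd;
        }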

  • How to make active services highly available?

    - by Jader Dias
    I know that with Network Load Balancing and Failover Clustering we can make passive services highly available. But what about active apps? Example: one of my apps retrieves some content from an external resource at a fixed interval. I have imagined the following scenarios: 1) Run it on a single machine. Problem: if this instance goes down, the content won't be retrieved. 2) Run it on each machine of the cluster. Problem: the content will be retrieved multiple times. 3) Have it on each machine of the cluster, but run it only on one of them. Each instance will have to check some sort of common resource to decide whether it is its turn to do the task or not. When I was thinking about solution #3, I wondered what the common resource should be. I have thought of creating a table in the database that we could use to get a global lock. Is this the best solution? How do people usually do this? By the way, it's a C# .NET WCF app running on Windows Server 2008.

    Read the article

  • Should I be regularly shrinking my DB or at least my log file?

    - by Tom
    My question is: should I be running one or both of the shrink commands regularly, DBCC SHRINKDATABASE or DBCC SHRINKFILE? Background: SQL Server; the database is 200 GB and the logs are 150 GB. Running this command

        SELECT name,
               size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int) / 128.0 AS AvailableSpaceInMB
        FROM sys.database_files;

    produces this output:

        MyDB:     159.812500 MB free
        MyDB_Log: 149476.390625 MB free

    So it seems there is some free space. We back up transaction logs every hour, take a differential backup 5 nights a week, and a full backup the other 2 nights of the week.

    Read the article

  • Best scaling methodologies for a high-traffic web application?

    - by tester2001
    We have a new project for a web app that will display banner ads on websites (as a network), and our estimate is for it to handle 20 to 40 billion impressions a month. Our current language is ASP, but we are moving to PHP. Does PHP 5 have limits when scaling a web application? Or should I have our team invest in picking up JSP? Or is it a matter of the app server and/or DB? We plan to use Oracle 10g as the database.

    Read the article

  • How is external memory, internal memory, and cache organized?

    - by goldenmean
    Consider a system as follows: a hardware board with, say, an ARM Cortex-A8 and a NEON vector coprocessor, and an embedded Linux OS running on the Cortex-A8. In this environment, if some application, say a video decoder, is executing, then: How is it decided which buffers end up in external memory and which ones are allocated in internal SRAM, etc.? When one calls calloc/malloc on such a system, is the returned pointer from internal or external memory? Can a user have buffers allocated in the memory of his choice (internal/external)? In ARM architectures there is also a memory called tightly coupled memory (TCM); what is it, how can a user enable and use it, and can I declare buffers in this memory? Do I need to look at the memory map (if any) of the hardware board to understand all the different physical memories present on a typical board? How much of a role does the OS play in distinguishing these different memories? Sorry for the multiple questions, but I think they are all interlinked. (A small sketch follows below.)

    Read the article
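    Regarding placing a buffer in a memory of one's choice: with GCC and GNU ld, the usual embedded approach is to mark the buffer with a named section and map that section to the desired region in the board's linker script. The section name below (".sram_data") is an assumption for illustration; the real names come from the board support package's linker script and memory map. Ordinary malloc/calloc allocations come from the heap the OS manages, which normally lives in external DRAM, and the kernel, not the application, decides the physical placement of those pages.

        #include <stdint.h>

        /* Placed by the linker into whatever region the linker script maps
           ".sram_data" to, e.g. on-chip SRAM instead of external DDR.
           (Hypothetical section name; depends on the board's linker script.) */
        static uint8_t fast_buffer[4096] __attribute__((section(".sram_data")));

        /* An ordinary static buffer: ends up wherever .bss is placed,
           typically external DRAM on a board like the one described. */
        static uint8_t normal_buffer[4096];

        int main(void) {
            fast_buffer[0]   = 1;
            normal_buffer[0] = 2;
            return fast_buffer[0] + normal_buffer[0];
        }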
