Search Results

Search found 740 results on 30 pages for 'processors'.

Page 21/30

  • How to estimate the execution time of x86 and PowerPC instructions?

    - by Goofy
    Hello! I have to approximate the execution time of PowerPC and x86 assembler code. I understand that I cannot compute it exactly; it depends on many factors (the current processor state, the fact that x86 processors decode instructions into micro-operations, memory access times when fetching code from the cache or from slower memory, etc.). I found some information in the Intel Optimization Reference Manual (Appendix C), but it does not cover all general-purpose instructions. Is there a complete reference for this? What about PowerPC processors? Where can I find such information?
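
    One hedged fallback when no complete latency table exists is to measure empirically with the timestamp counter. A minimal sketch (not from the thread), assuming GCC or Clang on x86; it includes loop and load/store overhead, so treat the result as a rough approximation only:

        // Approximate per-iteration cost of an instruction sequence via rdtsc.
        #include <x86intrin.h>
        #include <cstdio>

        int main() {
            const int kIters = 1000000;
            volatile long long acc = 1;
            unsigned long long start = __rdtsc();
            for (int i = 0; i < kIters; ++i)
                acc = acc * 3;   // sequence under test (imul plus loop overhead)
            unsigned long long end = __rdtsc();
            std::printf("~%.2f cycles per iteration\n",
                        (double)(end - start) / kIters);
            return 0;
        }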

    Read the article

  • Read a remote zipped xml with just XSL

    - by Emaguel
    Hello there. I want to know if it's possible for an XSLT file to read data from an XML file located within the folders of a remote zip (from the server at work), without any external processors (Saxon and so forth) and without downloading it. Failing that, I'll resort to just reading the information from the zip... which brings me to my other (newbie) issue. I currently have an XSLT that accesses and gets the data from the downloaded and extracted XML file, but I can't do this without extracting it. I've read that with Altova and XSLT 2.0 it is possible to read from within a zip file using the document() function, though as of yet I have not been able to achieve this. This is how I'm trying to do it: document('name.zip|zip/folder/folder2/iwantthis.xml') It just doesn't seem to find the file. I'd be almost eternally grateful if you could show me the error of my ways and guide me into XSLThood. Thank you kindly

    Read the article

  • Hardware-specific questions

    - by overflow
    I'm good at programming, yet I feel like I don't know enough about the architecture of the hardware I'm working on. What does the Northbridge on the mainboard do? What does the L2 cache of my processor do? Can Windows XP use multiple processors? Not in the sense of multitasking across programs, but can it use the capacity of all cores when needed instead of always just one? How can my processor/mainboard interact with multiple kinds of graphics/sound cards?
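
    On the multi-core question, a program can at least ask the OS how many logical processors it may use. A minimal portable sketch (C++11; names and wording are added here, not from the post):

        // Query the number of logical processors the OS exposes to this process.
        #include <thread>
        #include <cstdio>

        int main() {
            // May return 0 if the count cannot be determined.
            unsigned n = std::thread::hardware_concurrency();
            std::printf("Logical processors visible to this process: %u\n", n);
            return 0;
        }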

    Read the article

  • Is there any kind of standard for 8086 multiprocessing?

    - by Earlz
    Back when I made an 8086 emulator I noticed that there was the LOCK prefix, intended for synchronization in a multiprocessor environment. Yet the only x86 multiprocessing I know of involves the APIC, which didn't come around until the 486s or Pentiums. Was there any kind of standard for 8086 multiprocessing, or was it done through manufacturer-specific extensions to the instruction set and/or special ports? By standard, I mean things like: how do you separate the two processors if they both use the same memory? That is impossible without some way to make each processor execute a different piece of code (or to cause an interrupt on only one processor).
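
    For context (an illustrative sketch, not an answer from the thread): the LOCK prefix survives on modern x86, where compilers emit it for atomic read-modify-write operations, making a shared-memory update indivisible across processors:

        // std::atomic::fetch_add compiles to a lock-prefixed xadd on x86.
        #include <atomic>
        #include <cstdio>

        std::atomic<int> counter{0};

        int main() {
            counter.fetch_add(1);   // emitted as: lock xadd [counter], reg
            std::printf("counter = %d\n", counter.load());
            return 0;
        }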

    Read the article

  • What application domains are CPU bound and will tend to benefit from multi-core technologies?

    - by Glomek
    I hear a lot of people talking about the revolution that is coming in programming due to multi-core processors and parallelism, but I can't shake the feeling that for most of us, CPU cycles aren't the bottleneck. Pretty much all of my programs have been I/O bound in one way or another (database, filesystem, network, user interaction, etc.) for a very long time. Now I can think of a few areas where CPU cycles are a limiting factor, like code breaking, graphics, sound, some forms of simulation (weather, physics, etc.), and some forms of mathematical research, but they all seem like fairly specialized application domains. My general impression is that most programs are still I/O bound and that for most of our industry CPUs have been plenty fast for quite a while now. Am I off my rocker? What other application domains are CPU bound today? Do any of them include a large portion of the programming population? In essence, I'm wondering whether the multi-core CPUs will impact very many of us, and if so, how?

    Read the article

  • Any ideas for developing a RISC-processor-friendly string allocator?

    - by Richard Fabian
    I'm working on some tools to enable high-throughput data-oriented development, and one thing I don't have an immediate answer for is how to allocate strings quickly. On RISC processors you have the additional implementation problem that the CPU doesn't like branching, which is what I'm trying to minimise or avoid. Cache coherence is also important on most CPUs, so that has to influence the design too. So, how would you go about reducing the overhead of a generic string allocator? Sometimes it's easier to solve a more specific problem, so: any ideas for string sizes of 5-30 bytes?
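
    A minimal sketch of one possible direction (assumptions added here, not from the post: fixed 32-byte slots covering the 5-30 byte range, bump allocation, free-all-at-once). The hot path is a bounds check and a pointer bump, with no per-size branching and good cache locality:

        #include <cstddef>
        #include <cstring>
        #include <vector>

        class StringPool {
            static const std::size_t kSlot = 32;   // fits 5-30 byte strings plus NUL
            std::vector<char> arena_;
            std::size_t next_;
        public:
            explicit StringPool(std::size_t capacity)
                : arena_(capacity * kSlot), next_(0) {}

            // Returns a stable pointer, or nullptr when the pool is exhausted.
            char* alloc(const char* s, std::size_t len) {
                if (len >= kSlot || next_ + kSlot > arena_.size())
                    return nullptr;
                char* p = &arena_[next_];
                next_ += kSlot;                    // bump: the only hot-path state change
                std::memcpy(p, s, len);
                p[len] = '\0';
                return p;
            }

            void reset() { next_ = 0; }            // free every string at once
        };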

    Read the article

  • Are indivisible operations still indivisible on multiprocessor and multicore systems?

    - by Steve314
    As per the title, plus what are the limitations and gotchas. For example, on x86 processors, alignment for most data types is optional - an optimisation rather than a requirement. That means that a pointer may be stored at an unaligned address, which in turn means that pointer might be split over a cache page boundary. Obviously this could be done if you work hard enough on any processor (picking out particular bytes etc), but not in a way where you'd still expect the write operation to be indivisible. I seriously doubt that a multicore processor can ensure that other cores can guarantee a consistent all-before or all-after view of a written pointer in this unaligned-write-crossing-a-page-boundary situation. Am I right? And are there any similar gotchas I haven't thought of?
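
    A short illustration of the aligned case (an added sketch, not from the post): the standard atomics sidestep exactly this gotcha by enforcing natural alignment, which is what makes plain loads and stores indivisible on x86:

        #include <atomic>
        #include <cstdint>
        #include <cstdio>

        int main() {
            // std::atomic<T> requires natural alignment, so the value can
            // never straddle a cache line or page boundary.
            std::atomic<std::uint64_t> p{0};
            std::printf("lock-free here: %d\n", (int)p.is_lock_free());
            p.store(42, std::memory_order_release);   // never torn across cores
            return 0;
        }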

    Read the article

  • Primary reasons why programming language runtimes use stacks?

    - by manuel aldana
    Many programming language runtime environments use stacks as their primary storage structure (e.g. see the JVM bytecode-to-runtime example). Quickly recalling, I see the following advantages: it is a simple structure (pop/push), trivial to implement; most processors are optimized for stack operations anyway, so it is very fast; and there are fewer problems with memory fragmentation, since allocation just moves the memory pointer up and down, and complete blocks of memory are freed by resetting the pointer to the last entry offset. Is the list complete, or did I miss something? Are there programming language runtime environments which do not use stacks for storage at all?
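
    To make the first two advantages concrete, a toy sketch (added here, not from the post): both operand storage and wholesale freeing are single pointer adjustments.

        #include <cstdio>

        int main() {
            int stack[64];
            int top = 0;              // the allocator's entire state

            stack[top++] = 2;         // push: one store plus one increment
            stack[top++] = 3;
            int b = stack[--top];     // pop: one decrement plus one load
            int a = stack[--top];
            stack[top++] = a * b;

            std::printf("2 * 3 = %d\n", stack[top - 1]);
            top = 0;                  // "free" everything: reset the pointer
            return 0;
        }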

    Read the article

  • Working with QWORDs

    - by Glenn1234
    I'm learning, and in the course of that I'm working on an assembler conversion which uses QWORDs a lot (x86, 32-bit). My reference material doesn't have anything on working with such values beyond the obvious approach of splitting them up into the 32-bit registers; I guess the references are on the old side. Newer processors have MMX and SSE instructions and the like. Would I be well served to look into those instructions for this? What is the best way to handle doing work on QWORD values?
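
    For the plain split-register approach, it may help to see what a compiler does with a 64-bit add on a 32-bit target (a sketch, not from the post): the value lives in a register pair and the carry is chained, no MMX/SSE required.

        #include <cstdint>
        #include <cstdio>

        // On x86-32 this compiles to an `add` of the low halves followed by
        // an `adc` (add-with-carry) of the high halves.
        std::uint64_t add64(std::uint64_t a, std::uint64_t b) {
            return a + b;
        }

        int main() {
            std::printf("%llu\n", (unsigned long long)add64(0xFFFFFFFFull, 1));
            return 0;
        }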

    Read the article

  • What information do you capture when your software crashes in the field?

    - by Russ
    I am working on rewriting my unexpected-error handling process, and I would like to ask the community: what information do you capture, both automatically and manually, when software you have written crashes? Right now, I capture a few items. Automatic: name of the app that crashed; version of the app that crashed; stack trace; operating system version; RAM used by the application; number of processors; screen shot (only on non-public applications); user name and contact information (from Active Directory). Manual: what context the user is in (i.e. what company, tech support call number, RA number, etc.); what the user expected to happen (typical response: "Not to crash"); steps to reproduce. What other bits of information do you capture that help you discover the true cause of an application's problem, especially given that most users simply mash the keyboard when asked to tell you what happened? For the record, I'm using C#, WPF and .NET version 4, but I don't necessarily want to limit myself to those.
    Related: http://stackoverflow.com/questions/1226671/what-to-collect-information-when-software-crashes
    Related: http://stackoverflow.com/questions/701596/what-should-be-included-in-the-state-of-the-art-error-and-exception-handling-stra

    Read the article

  • MPI Barrier C++

    - by aryan
    Dear all, I want to use MPI (MPICH2) on Windows. I write this command: MPI_Barrier(MPI_COMM_WORLD); and I expect it to block all processes until all group members have called it. But that is not what happens. Here is a schematic of my code:

        int a;
        if (myrank == RootProc)
            a = 4;
        MPI_Barrier(MPI_COMM_WORLD);
        cout << "My Rank = " << myrank << "\ta = " << a << endl;

    (With 2 processors:) The root processor (0) acts correctly, but the processor with rank 1 doesn't know the a variable, so it displays -858993460 instead of 4. Can anyone help me? Regards
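
    A likely explanation (a sketch against the standard MPI API, not from the thread): MPI_Barrier only synchronizes timing; it never moves data between ranks. To make a visible on all ranks, broadcast it from the root:

        #include <mpi.h>
        #include <cstdio>

        int main(int argc, char** argv) {
            MPI_Init(&argc, &argv);
            int myrank;
            MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

            int a = 0;
            if (myrank == 0)
                a = 4;
            // Every rank receives the root's value of a.
            MPI_Bcast(&a, 1, MPI_INT, 0, MPI_COMM_WORLD);
            std::printf("My Rank = %d\ta = %d\n", myrank, a);

            MPI_Finalize();
            return 0;
        }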

    Read the article

  • RAM questions on the iPhone

    - by senthilmuthu
    Hi, does every process's RAM have to contain a stack and a heap (like the four x86 segments CS, ES, DS, SS)? Is there a notion of stack size on the iPhone, or is only the heap available? Some tutorials say that when you increase the stack size the heap shrinks, and when you increase the heap size the stack shrinks. Is that true, or are the stack and heap sizes fixed? Do the segments change depending on the processor; is there no need for all four segments on every processor? And if the stack and heap are not in RAM, where are they, and where are they managed? Any help please?

    Read the article

  • Where does the compiler store methods for C++ classes?

    - by Mashmagar
    This is more a curiosity than anything else... Suppose I have a C++ class Kitty as follows:

        class Kitty {
            void Meow() {
                // Do stuff
            }
        };

    Does the compiler place the code for Meow() in every instance of Kitty? Obviously repeating the same code everywhere requires more memory. But on the other hand, on modern processors branching to a relative location in nearby memory requires fewer assembly instructions than branching to an absolute location in memory, so this is potentially faster. I suppose this is an implementation detail, so different compilers may behave differently. Keep in mind, I'm not considering static or virtual methods here.
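
    One way to see that methods are not stored per instance (an added sketch): adding non-virtual member functions does not change the object's size, because the code lives once in the program's text segment and this is passed as a hidden argument.

        #include <cstdio>

        class Empty {};
        class Kitty {
        public:
            void Meow() { /* do stuff */ }
            void Purr() { /* do stuff */ }
        };

        int main() {
            // Both typically print 1: member functions add nothing per object.
            std::printf("sizeof(Empty)=%zu sizeof(Kitty)=%zu\n",
                        sizeof(Empty), sizeof(Kitty));
            return 0;
        }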

    Read the article

  • Byte order (endian) of int in NSLog?

    - by Eonil
    The NSLog function accepts printf format specifiers. My question is about the %x specifier. Does it print the hex digits in the order the bytes appear in memory, or does it have its own printing order?

        unsigned int a = 0x000000FF;
        NSLog(@"%x", a);

    Are the results of the above code equal or different on little- and big-endian processors? And what about NSString's -initWithFormat: method? Does it follow the same rule?
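
    A quick way to check (a sketch in C++ rather than Objective-C, but printf's %x behaves the same): %x prints the numeric value, not the raw byte sequence, so its output is identical on little- and big-endian machines; only inspecting memory shows the difference.

        #include <cstdio>
        #include <cstring>

        int main() {
            unsigned int a = 0x000000FF;
            std::printf("%x\n", a);             // "ff" on any endianness

            unsigned char bytes[sizeof a];
            std::memcpy(bytes, &a, sizeof a);   // memory order depends on endianness
            for (unsigned i = 0; i < sizeof a; ++i)
                std::printf("%02x ", bytes[i]); // little-endian: ff 00 00 00
            std::printf("\n");
            return 0;
        }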

    Read the article

  • Does anyone know of a vim plugin or script to convert special characters to their corresponding HTML entities?

    - by Alan
    I develop websites for corporate clients, so we see the ®, ™, etc. characters a whole lot. Sometimes I paste in huge blocks of copy, which might even contain pretty quotes (“ ”) or other strange characters from word processors. So, my question is this: does anyone know of a vim plugin or script that can, in one fell swoop, convert all these characters to HTML entities? I think this covers all the bases of the entities it would be nice to have: http://web.forret.com/tools/charmap.asp So, the characters above would be replaced with &reg;, &trade;, &ldquo;, &rdquo;, etc. I tried the htmlspecialchars vimball (http://www.vim.org/scripts/script.php?script_id=2377), but no dice. It only performs its replacement like the PHP htmlspecialchars function, replacing HTML-conflicting characters, and doesn't cover any additional special characters.

    Read the article

  • Which is faster in memory, ints or chars? And file-mapping or chunk reading?

    - by Nick
    Okay, so I've written a (rather unoptimized) program before to encode images to JPEGs; however, now I am working with MPEG-2 transport streams and the H.264 encoded video within them. Before I dive into programming all of this, I am curious what the fastest way to deal with the actual file is. Currently I am file-mapping the .mts file into memory to work on it, although I am not sure if it would be faster to (for example) read 100 MB of the file into memory in chunks and deal with it that way. These files require a lot of bit-shifting and such to read flags, so I am wondering, when I reference some of the memory, whether it is faster to read 4 bytes at once as an integer or 1 byte at a time as a character. I thought I read somewhere that x86 processors are optimized to a 4-byte granularity, but I'm not sure if this is true... Thanks!
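
    On the 4-bytes-versus-1-byte point, a small sketch (assumptions added here): memcpy into a uint32_t lets the compiler emit a single 32-bit load even from an unaligned mapped buffer, which is both safe and as fast as the hardware allows.

        #include <cstdint>
        #include <cstring>
        #include <cstdio>

        // Reads four bytes as one integer; compiles to a single mov on x86.
        std::uint32_t read_u32(const unsigned char* p) {
            std::uint32_t v;
            std::memcpy(&v, p, sizeof v);
            return v;
        }

        int main() {
            unsigned char buf[] = {0x47, 0x40, 0x11, 0x10};  // e.g. a TS packet header
            std::printf("0x%08x\n", read_u32(buf));
            return 0;
        }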

    Read the article

  • XNA Multi-Thread Jitters

    - by Ice Phoenix
    Hi guys, brand new question. I've just implemented multi-threading in my XNA game, as it was unable to keep up using one processor. The multi-threading is all implemented fine, but the player seems to jitter all over the place every now and then. I originally thought it was a loss of data between the update and render, but even when I did the player update in the render it did the same thing. It's not a memory/processor issue, as I'm nowhere near maxing out my RAM or processors. It's strange as well, because none of the other entities in the game seem to have any of these issues. Any ideas at all?

    Read the article

  • Does the XML specification state that parsers must always convert \r\n to \n, even when \r\n appears in a CDATA section?

    - by mic.sca
    Hi, I've stumbled on a problem handling the line-feed (\n) and carriage-return (\r) characters in XML. I know that, according to http://www.w3.org/TR/REC-xml/#sec-line-ends, XML processors are required to replace any "\r\n" sequence, or any lone "\r", with "\n". The specification states that this is the required behaviour for handling any "external parsed entity"; does this apply to CDATA sections inside an element as well? Thank you, Michele. I'm sure that the msxml library, for example, converts every "\r\n" sequence or lone "\r" to "\n", regardless of whether it is in a CDATA section or not.

    Read the article

  • Implementing traceback on i386

    - by markelliott2000
    Hi, I am currently porting our code from an Alpha (Tru64) to an i386 processor (Linux) in C. Everything had gone pretty smoothly until I looked into porting our exception handling routine. Currently we have a parent process which spawns lots of sub-processes, and when one of these sub-processes dies on an unfielded signal I have routines to catch the failure. I am currently struggling to find the best method of implementing a traceback routine which can list the function addresses in the error log; at the moment my routine just prints the signal which caused the exception and the exception qualifier code. Any help would be greatly received. Ideally I would write error handling for all processors, but at this stage I only really care about i386 and x86_64. Thanks, Mark
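
    On Linux, one commonly used building block is glibc's backtrace facility. A hedged sketch (build with -rdynamic so addresses resolve to names; note that backtrace_symbols is not async-signal-safe, so a production signal handler would use backtrace_symbols_fd instead):

        #include <execinfo.h>
        #include <cstdio>
        #include <cstdlib>

        // Print the current call chain's addresses/symbols to stderr.
        void print_traceback() {
            void* frames[64];
            int n = backtrace(frames, 64);
            char** symbols = backtrace_symbols(frames, n);
            for (int i = 0; i < n; ++i)
                std::fprintf(stderr, "  %s\n", symbols ? symbols[i] : "(unknown)");
            std::free(symbols);
        }

        int main() {
            print_traceback();   // in real use, call from the error-handling path
            return 0;
        }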

    Read the article

  • Is dynamic evaluation of an XPath string possible using the .NET 2.0 XSLT implementation?

    - by Crocked
    Hi, I'm trying to evaluate an XPath variable that I'm building dynamically based on the position of the node. I can create the XPath string in a variable, but when I select the value of it I just get the string, not the node set I need. I use the following to create the XPath:

        <xsl:variable name="xpathstring" select="normalize-space(concat(&quot;//anAttribute[@key='pos&quot;,position(),&quot;']&quot;))"/>

    and try to output the value with the following:

        <xsl:value-of select="$xpathstring"/>

    If I execute the XPath in my debugger I get the node set, but in my XML output I only get the XPath string, which looks like this: //anAttribute[@key='pos1'] I had a look at EXSLT dyn:evaluate, which seems to enable this, but it is only supported by certain processors and doesn't provide a standalone implementation, at least as far as I could see (I'm currently using the standard .NET 2.0 XSLT, which is only XSLT 1.0 as far as I recall). Is there any way to handle this without changing processor? Kind regards, Crocked

    Read the article

  • Paperclip: "can't dump File" error when uploading a video in Rails

    - by user3510728
    When I try to upload a video using Paperclip, I get the error message "can't dump File". My Video model:

        class Video < ActiveRecord::Base
          has_attached_file :avatar,
            :storage => :s3,
            :styles => {
              :mp4 => { :geometry => "640x480", :format => 'mp4' },
              :thumb => { :geometry => "300x300>", :format => 'jpg', :time => 5 }
            },
            :processors => [:ffmpeg]
          validates_attachment_presence :avatar
          validates_attachment_content_type :avatar,
            :content_type => /video/,
            :message => "Video not supported"
        end

    When I try to create a video, I get this error. Why?

    Read the article

  • Visual Studio C++ adds "junk" to my programs

    - by sub
    I have looked into the binaries produced by MSVC 2010 from my source code and saw them filled with "junk". I don't know how else to explain it, but too much unnecessary information is being added to my executables, like: lots of default Microsoft error messages, which I don't want; XML schema settings (why!?); other things unimportant for the execution of the main program. How can I stop MSVC doing this? Do I have to switch to GCC? In all other programs (also written in C++, from word processors to games), this junk simply doesn't exist.

    Read the article

  • ARM TechCon 2013: Oracle, ARM expand collaboration on servers, Internet of Things

    - by Henrik Stahl
    If you have been following Java news, you are already aware of the fact that there has been a lot of investment in Java for ARM-based devices and servers over the last couple of years (news, more news, even more, and lots more). We have released Java ME Embedded binaries for ARM Cortex-M microcontrollers, Java SE Embedded for ARM application processors, and a port of the Oracle JDK for ARM-based servers. We have been making Java available to the Beagleboard, Raspberry Pi and Lego Mindstorms/LeJOS communities and worked with them and the Java User Groups to evangelize Java as a great development environment for IoT devices. We have announced commercial relationships with Freescale, Qualcomm, Gemalto M2M and SIMCom, to name a few. ARM and Freescale, on their side, have joined the JCP, recently been voted in as members of the Executive Committee, and worked with Oracle to evangelize Java in their ecosystem. It is against this background that Nandini Ramani, Vice President, Java Platform at Oracle, announced an expanded collaboration with ARM in a TechCon 2013 keynote titled "Enabling Compelling Services for IoT". To summarize the announcement:

    - ARM and Oracle will work together on interoperability between the ARM Sensinode communications stack (based on CoAP, DTLS and 6LoWPAN) and Oracle's Java ME, Java SE and middleware products.
    - ARM will donate the Sensinode CoAP protocol engine to OpenJDK to stimulate broad adoption of the CoAP protocol, and work with Oracle to extend the relevant Java specifications with CoAP support. CoAP (Constrained Application Protocol) is an IETF specification that provides a low-bandwidth request/response protocol suitable for IoT applications.
    - ARM will work with Oracle and Freescale to enable the mbed Hardware Abstraction Layer (HAL) to act as a portability layer for Java ME Embedded, and Oracle will enable mbed as a tier-one platform for Java ME Embedded. Over time, this effort will allow any mbed-enabled platform (mostly based on Cortex-M microcontrollers) to work with off-the-shelf Java ME Embedded binaries, extending the reach of Java ME into IoT edge nodes.
    - In Nandini's keynote, Oracle showed a roadmap to port the Oracle JDK to Linux on 64-bit ARMv8 servers in the 2015 time frame, preceded by an extended early-access program. We expect this binary to have full feature parity with the Oracle JDK on other platforms and to be available under the same royalty-free license. This effort has been going on for some time, but is now accelerating due to the availability of hardware from Applied Micro. Oracle will be working with Applied Micro on the ARMv8 port, and on optimizing Java for their X-Gene products.
    - Oracle and ARM will work closely on IoT architecture, and on evangelizing Java on ARM for both servers and IoT devices.

    These announcements reinforce Java's position as a first-class citizen in the ARM ecosystem, and signal a commitment from us to collaborate on driving standards and an open ecosystem for the Internet of Things. If you are active in this area and not already in touch with us, or interested in learning more, please reach out to us!

    Read the article

  • Multiple Monitors

    - by mroberts
    At my workplace, .NET developers get pretty much the same equipment: a decent Dell workstation/desktop. Mine is a Dell Precision 390: one dual-core 2.40 GHz processor, eight GB RAM, Windows 7 Enterprise 64-bit, and two Dell 20.1-inch monitors. I'm happy with this. The machine is about 3 years old but still runs at a decent speed. New developers are getting a Dell workstation with dual quad-core processors. I just put in a request for myself and three other developers for an upgraded video card and two additional monitors, for a total of four monitors per person. We suggested this card, by the way, mainly for the cost. The move from one monitor to two was fantastic (one might even say life (or work) changing) and truly did increase productivity. Now what about going from two monitors to four? I'm sure the change is not as dramatic as one to two, but I can't help but think four monitors is better than two. But if four is better than two, should we have asked for six?!? Also, what about mixing monitor types? Right now my monitors are the older square type rather than wide-screen. It's been rumored that we will be getting monitors out of current stock and they will be 22-inch wide-screens. I understand this, recession and all. Two 20-inch square monitors with two 22-inch wide-screen monitors... hmmmm. I'm thinking I'd rather get two additional 17-inch square monitors to put on each side of my 20-inch ones. Also, a question was raised about the layout of four monitors. By default, my thought was I'll just put them all on my desk, kind of in a line. I've heard others say they want to stack them in a 2 x 2 square. By the way, loving the multi-monitor support in Visual Studio 2010! I'd love some comments on your experience with one, two, four, or however many monitors, from a developer's perspective.

    Read the article
