Search Results

Search found 6912 results on 277 pages for 'assembly resolution'.


  • Insight into how things get printed onto the screen (cout,printf) and origin of really complex stuff

    - by sil3nt
    I've always wondered this and still haven't found the answer. Whenever we use "cout" or "printf", how exactly does that get printed on the screen? How does the text come out the way it does? (Probably quite a vague question here; I'll work with whatever you give me.) So basically, how are those functions made? Is it assembly? If so, where does that begin? This brings on more questions, like how on earth have they made OpenGL/DirectX functions... break it down, people, break it down. :)
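
    (Editorial sketch, not part of the question: in broad strokes, on a POSIX system both printf and cout format the text into a buffer in user space and then hand the bytes to the kernel with the write system call on file descriptor 1, standard output; the kernel and the terminal emulator take it from there. A minimal example of that last step, skipping the formatting layers entirely:)

        #include <unistd.h>   // write(): the POSIX system call printf and cout eventually reach
        #include <cstring>

        int main() {
            const char msg[] = "hello from write()\n";
            // File descriptor 1 is standard output; the kernel hands these bytes to
            // whatever is attached to it (a terminal emulator, a pipe, a file, ...).
            write(1, msg, std::strlen(msg));
            return 0;
        }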

    Read the article

  • an x86 question

    - by wide
    I'm studying for my exam and couldn't solve this question. Can anyone help me? Assume that there are two blocks, BLOCK1 and BLOCK2, and every block has 50 bytes. Write a program to add BLOCK1 to BLOCK2 and store the result in BLOCK2, using the LODS, STOS, LOOP, etc. assembly instructions.

    Read the article

  • learn ia32 for C

    - by David Lee
    I am trying to translate the following:

    Action:
        pushl %ebp
        movl %esp, %eax
        subl $32, %esp
        movl $0, -8(%eax)
        movl $0, -4(%eax)
        movl -4(%eax), %eax
        cmpl 32(%eax), %ebp
        movl -4(%ebp), %eax
        sall $2, %ebp
        addl 8(%ebp), %ebp
        movl (%ebp), %ebp
        addl %ebp, -8(%eax)
        addl $1, -4(%eax)

    What is the best way to learn assembly and to translate this code?
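
    (Editorial note, not from the question: one practical way to learn how such listings map back to C is to write a small function yourself, compile it with g++ -S -m32 -O0, and read the generated .s file next to the source. The loop below is only a hypothetical example of the general shape, an accumulator and counter in stack slots plus an array indexed with a scaled offset, not a verified translation of the listing above.)

        // sum.cpp -- compile with: g++ -S -m32 -O0 sum.cpp
        // then read sum.s and match each statement to its instructions.
        long sum_first_n(const long *values, long n) {
            long total = 0;                  // lives in a stack slot such as -8(%ebp)
            for (long i = 0; i < n; ++i) {   // the counter occupies another slot, e.g. -4(%ebp)
                total += values[i];          // the index is scaled by 4 (sall $2) before the add
            }
            return total;
        }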

    Read the article

  • Printing a string and variable in MIPS

    - by Matt
    Here's the C representation of what I'm trying to do in MIPS assembly:

        printf("x=%d\n", x);

    I know that I can do a syscall to easily print "x=" and I can also do a syscall to print the int x (which is stored in a register). However, that prints them like this (let's say x is 5):

        x=
        5

    How can I make them print on the same line?

    Read the article

  • Do any FASM veterans want to become a mentor?

    - by Sam152
    Learning assembly has so far been pretty hard. I have read every tutorial I could find and I'm still having trouble getting some of the basics down. Does anyone out there want to mentor me and answer a few questions every now and then? Thanks to anyone considering it.

    Read the article

  • Necessity of push and pop operations on CPUs

    - by Hawken
    Why do we have commands like push and pop? From what I understand, pop and push are basically the same as doing a (mov then add) and a (sub then mov) on esp, respectively. For example, wouldn't:

        pushl %eax

    be equivalent to:

        subl $4, %esp
        movl %eax, (%esp-4)

    Please correct me if the stack access is not (%esp-4); I'm still learning assembly. The only true benefit I can see is if doing both operations simultaneously offers some advantage; however, I don't see how it could.

    Read the article

  • writing boot sector code

    - by JGC
    Hi, I want to write code that puts something in the boot sector, but when I run the 8086 assembly code that is supposed to do this, nothing happens. Does anyone know what I can do, or does anyone have code (in any language) that answers my need?

    Read the article

  • How can the AssemblyName class be used for existing Assemblies?

    - by IbrarMumtaz
    This is another exam-related question. I want to know how I can use the AssemblyName class to represent an assembly that already exists on disk. I am talking about it from the perspective of using AppDomain's instance method .Load(), which takes an AssemblyName object as a parameter. I know what MSDN has to say about what the .Load() method was designed for, but that aside, I still want to know how to use it!

    Read the article

  • How do machine code instructions get transferred to the CPU?

    - by user3711789
    I'm currently investigating what the runtime of different programming languages looks like behind the scenes. For a compiled language like C, people usually give the explanation of "Code is compiled to assembly which is assembled and linked into a binary executable. The executable is then loaded into memory and the CPU interprets it." My question is how does the CPU know where to look for the next instruction to execute? Is it a memory address stored in one of the registers?
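
    (Editorial sketch, not part of the question: yes, conceptually the CPU keeps the address of the next instruction in a dedicated register, the program counter (EIP/RIP on x86), and the fetch-decode-execute cycle advances it after every instruction or overwrites it on a jump. A toy software model of that cycle, just to illustrate the idea:)

        #include <cstdint>
        #include <cstdio>
        #include <vector>

        // A toy machine: each "instruction" is one byte; the program counter (pc)
        // always holds the address of the next instruction to fetch.
        constexpr uint8_t NOP = 0, INC = 1, JMP = 2, HALT = 3;

        int main() {
            std::vector<uint8_t> memory = {INC, INC, NOP, HALT};  // the "executable" loaded into memory
            std::size_t pc = 0;   // program counter register: address of the next instruction
            unsigned acc = 0;     // an accumulator register

            while (pc < memory.size()) {
                uint8_t instruction = memory[pc];  // fetch: read the byte the pc points at
                pc += 1;                           // advance to the following instruction
                switch (instruction) {             // decode and execute
                    case NOP:  break;
                    case INC:  acc += 1; break;
                    case JMP:  pc = 0; break;      // a branch simply overwrites the pc
                    case HALT: std::printf("acc = %u\n", acc); return 0;
                }
            }
            return 0;
        }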

    Read the article

  • How can I go about writing to the console in fasm?

    - by codinggoose
    The code I currently have can be found at: http://fasm.pastebin.com/yY3C0aVF I'm exceptionally new to assembly; I only picked it up yesterday. I've looked through many examples and still can't figure out for myself how to write to the console. I always get an error when I try to replicate them in my own way. If I'm not on the right track at all, please let me know; also, if you can suggest a good book on FASM, it would be greatly appreciated.

    Read the article

  • Using .align in inline assembly

    - by tech74
    Hi, I'm using ".align 16 \n\t" in some inline ARM assembly that implements some loops, to align them on a 16-byte boundary, but the GCC assembler complains that the alignment is too large. I want the effect of -falign-loops=16 in asm for one particular loop. Thanks
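
    (Editorial note, not from the question: on ARM targets the GNU assembler treats ".align n" as a power of two, so ".align 16" requests a 2^16-byte boundary and is rejected; ".balign 16" (or ".align 4") asks for an actual 16-byte boundary. A minimal sketch of how that might look in GCC extended inline assembly; the loop itself is a made-up example, not the asker's code:)

        // Builds only for ARM targets, e.g. arm-linux-gnueabihf-g++ -O2 -c align_loop.cpp
        void add_one(int *data, int n) {
            if (n <= 0) return;              // the loop below assumes at least one element
            asm volatile(
                ".balign 16\n"               // 16-byte boundary; ".align 4" means the same on ARM
                "1:\n"
                "ldr   r3, [%[p]]\n"         // load *p
                "add   r3, r3, #1\n"
                "str   r3, [%[p]], #4\n"     // store and post-increment p by 4 bytes
                "subs  %[n], %[n], #1\n"     // decrement the counter and set the flags
                "bne   1b\n"                 // loop back to the aligned label
                : [p] "+r"(data), [n] "+r"(n)
                :
                : "r3", "cc", "memory");
        }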

    Read the article

  • Pin Control in HCS12

    - by Brian Lindsey
    An HCS12 microcontroller I had to buy for a class I recently took has 40 pins on the back side of it. The class was merely about computer organization, so unfortunately we never had a chance to cover all the capabilities of the chip itself. Now that the class is over, I have been thinking about using the chip to familiarize myself with the assembly language. I haven't found any sources that cover pin control and was wondering if anyone could provide me with a hands-on pin tutorial.

    Read the article

  • What's the proper way of accessing a database through an assembly?

    - by H4mm3rHead
    Hi, I have an ASP.NET MVC application that is built as an assembly that queries the database, plus an ASP.NET frontend that references this assembly; the assembly abstracts the underlying database. This means that my assembly contains an app.config file holding the connection string to the database (LINQ to SQL data model). How do I go about making this more flexible? Should I make an "initialize()" method somewhere in my assembly that takes the connection string from the ASP.NET MVC application and thereby controls which database to use? Or how is this done?

    Read the article

  • How to reference an assembly that is not in the GAC from a T4MVC template (.tt)

    - by stephen
    I have found the place near the very top of a T4MVC template file (.tt) where assembly references can be added, which looks like:

        <#@ assembly name="System.Core" #>
        <#@ import namespace="System.Collections.Generic" #>

    However, it seems that I can only reference assemblies that are in the GAC. That is, if I have an assembly MyProject.Stuff.dll (not in the GAC) added as a reference to the VS project containing the template, then I would expect to be able to add something like the following:

        <#@ assembly name="MyProject.Stuff" #>
        <#@ import namespace="MyProject.Stuff" #>

    If I do this, I get the following error:

        Error 1 Compiling transformation: Metadata file 'MyProject.Stuff' could not be found C:\Work\Development\DotNetSolution\MyProject\Utils\T4MVC\T4MVC.tt 1 1

    How can I add a reference to an assembly that isn't in the GAC?

    Read the article

  • How to copy a referenced assembly's dependencies to the ASP.NET output bin folder?

    - by LD2008
    Hi all, in Visual Studio 2010 I have project A (an ASP.NET application). Project A references project B (a class library). Project B references assembly C (a direct reference to a DLL). When building project A, only the project A and project B binaries are present in the /bin directory of project A, but not assembly C. Why is that? If project B depends on assembly C, why is assembly C not copied to the output folder as well? "Copy Local" is already set to "True" for assembly C. Any information would be appreciated. Thanks!

    Read the article

  • Is there a screen sharing/remote desktop app for mac that lets you use a different host screen resolution?

    - by MarqueIV
    Ok, there are tons and tons of questions about remote desktop for Mac and they're all being closed as duplicates. I however am specifically looking for one that will let me use a different resolution than the host, the way you can with Remote Desktop for Windows.

    For instance, when I connect to my 11" MacBook Air booted into Windows 7 from my quad-screen desktop, also booted into Windows 7, using Microsoft's Remote Desktop Client, it blanks out the screen on the notebook, then virtualizes the video across all four of my desktop's monitors at their native resolutions (2560x1600, 2 x 1920x1200 and 1600x1200), and the notebook now acts as if it has four physical monitors connected to it. All of this from a notebook that only has a 1366x768 native resolution. Even when running OS X on the client running RDC, while it doesn't support multiple monitors like its Windows counterpart, it still lets me run at the native resolution of the client screen of 2560x1600. Again, it just blanks out the host screen while doing so.

    However, when using Mac's screen sharing, since that is just glorified VNC, it just mirrors what's already on the host's screen, meaning it will always be a single screen with the resolution of 1366x768. This of course makes sense, since VNC is a mirroring solution, not a video-virtualizing one like RDC, but it means that on my quad-monitor setup the remote window isn't even large enough to fill up a single monitor, let alone four (unless you have a client that can scale it up, but that's video scaling; it's still only 1366x768).

    So what I'm looking for is whether there is a solution on the Mac that lets me do the same thing as RDC in a Windows environment. I don't care if I have to pay; I'd gladly pay several hundred dollars for this. I just need that specific feature.

    Note: People have suggested various VNC clients, but the VNC host still runs at 1366x768, so that will not work here. Ever. Also, people have suggested Synergy/Synergy+/Teleport and such, which share the keyboard and mouse, not video. Completely different animal, unrelated to what I'm looking for.

    Read the article

  • C++ Optimize if/else condition

    - by Heye
    I have a single line of code that consumes 25%-30% of the runtime of my application. It is a less-than comparator for a std::set (the set is implemented with a red-black tree). It is called about 180 million times within 52 seconds.

        struct Entry {
            const float _cost;
            const long _id;
            // some other vars
            Entry(float cost, float id) : _cost(cost), _id(id) { }
        };

        template<class T>
        struct lt_entry : public binary_function<T, T, bool> {
            bool operator()(const T &l, const T &r) const {
                // Most readable shape
                if (l._cost != r._cost) {
                    return r._cost < l._cost;
                } else {
                    return l._id < r._id;
                }
            }
        };

    The entries should be sorted by cost and, if the cost is the same, by their id. I have many insertions for each extraction of the minimum. I thought about using Fibonacci heaps, but I have been told that they are theoretically nice but suffer from high constants and are pretty complicated to implement. And since insert is in O(log(n)), the runtime increase is nearly constant with large n. So I think it's okay to stick with the set.

    To improve performance I tried to express it in different shapes:

        return l._cost < r._cost || r._cost > l._cost || l._id < r._id;

        return l._cost < r._cost || (l._cost == r._cost && l._id < r._id);

    Even this:

        typedef union {
            float _f;
            int _i;
        } flint;
        //...
        flint diff;
        diff._f = (l._cost - r._cost);
        return (diff._i && diff._i >> 31) || l._id < r._id;

    But the compiler seems to be smart enough already, because I haven't been able to improve the runtime. I also thought about SSE, but this problem is really not very applicable for SSE... The assembly looks somewhat like this:

        movss   (%rbx),%xmm1
        mov     $0x1,%r8d
        movss   0x20(%rdx),%xmm0
        ucomiss %xmm1,%xmm0
        ja      0x410600 <_ZNSt8_Rb_tree[..]+96>
        ucomiss %xmm0,%xmm1
        jp      0x4105fd <_ZNSt8_Rb_[..]_+93>
        jne     0x4105fd <_ZNSt8_Rb_[..]_+93>
        mov     0x28(%rdx),%rax
        cmp     %rax,0x8(%rbx)
        jb      0x410600 <_ZNSt8_Rb_[..]_+96>
        xor     %r8d,%r8d

    I have a tiny bit of experience with assembly language, but not really much. I thought this would be the best (only?) point to squeeze out some performance, but is it really worth the effort? Can you see any shortcuts that could save some cycles? The platform the code will run on is Ubuntu 12 with gcc 4.6 (-std=c++0x) on a many-core Intel machine. The only libraries available are boost, openmp and tbb. I am really stuck on this one; it seems so simple, but takes that much time. I have been racking my brain for days thinking about how I could improve this line... Can you give me a suggestion on how to improve this part, or is it already at its best?
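
    (Editorial aside, not from the original post: the same ordering, descending cost then ascending id on ties, can also be written with std::tie. This usually compiles to the same comparisons, so it is a readability aid rather than a speed-up, and it assumes the costs are never NaN. The name lt_entry_tie below is made up for illustration.)

        #include <tuple>

        template <class T>
        struct lt_entry_tie {
            bool operator()(const T &l, const T &r) const {
                // Lexicographic comparison: r._cost < l._cost decides first (descending cost);
                // on equal costs it falls through to l._id < r._id (ascending id).
                return std::tie(r._cost, l._id) < std::tie(l._cost, r._id);
            }
        };

    Whether any rewriting of this comparator actually helps is best settled by profiling; with that many calls, the branch predictability of the cost comparison tends to matter more than the exact C++ spelling.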

    Read the article
