Search Results

Search found 5819 results on 233 pages for 'compiler theory'.

  • What is a .NET managed module?

    - by Abhijeet Patel
    I know it's a Windows PE32 file, but I also know that the unit of deployment in .NET is an assembly, which in turn has a manifest and can be made up of multiple managed modules. My questions are:

    1) How would you create multiple managed modules when building a project such as a class library or a console app?
    2) Is there a way to specify this to the compiler (via the project properties, for example) to partition your source code files into multiple managed modules? If so, what is the benefit of doing so?
    3) Can managed modules span assemblies?
    4) Are separate files created on disk when the source code is compiled, or are they created in memory and embedded directly in an assembly?
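
    For what it's worth, Visual Studio doesn't expose a project setting for this; multi-module assemblies are normally built from the command line with the C# compiler's /target:module and /addmodule switches. A hedged sketch (the file names are made up for illustration):

        REM compile one source file into a standalone managed module
        csc /target:module /out:Helpers.netmodule Helpers.cs

        REM build the main assembly and record the module in its manifest
        csc /target:library /addmodule:Helpers.netmodule /out:MyLib.dll MyLib.cs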

    Read the article

  • Determining whether compiling on Windows or other system

    - by NumberFour
    Hi, I'm currently developing a cross-platform C application. Is there any compiler macro which is defined only during compilation on Windows, so I can #ifdef some Windows-specific #includes? A typical example is selecting between WinSock and Berkeley sockets headers:

        #ifdef _WINDOWS
        #include <winsock.h>
        #else
        #include <sys/socket.h>
        #include <netinet/in.h>
        #include <sys/un.h>
        #include <arpa/inet.h>
        #include <netdb.h>
        #endif

    So the thing I'm looking for is something like that _WINDOWS macro. Thanks for any tips.
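
    For reference, the macro that actually is predefined on Windows by both MSVC and MinGW (for 32- and 64-bit targets alike) is _WIN32, so a minimal sketch of the guard would be:

        #ifdef _WIN32                /* predefined by MSVC and MinGW */
        #include <winsock2.h>        /* winsock2.h supersedes the old winsock.h */
        #else
        #include <sys/socket.h>
        #include <netinet/in.h>
        #include <sys/un.h>
        #include <arpa/inet.h>
        #include <netdb.h>
        #endif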

    Read the article

  • Which format do static library (*.lib) files use? Where can I find "official" specifications of *.lib files?

    - by claws
    Just now I found that static libraries on *nix systems, in other words *.a libraries, are nothing but archives of relocatables (*.o files) in ar format. What about static libraries (*.lib files) on Windows? Which format are they in? I found an article: http://www.microsoft.com/msj/0498/hood0498.aspx which explains the *.lib file structure. But where can I find "official" specifications of the *.lib file structure/format? Other than ar.exe of MinGW, is there any tool from Microsoft which extracts the relocatable objects from *.lib and *.a files? EDIT: I wonder why I'm unable to get an answer to this question. If there are no official specifications, then how do the compiler (or, to be more correct, linker) writers work with *.LIB files?
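
    As a pointer for anyone landing here: Microsoft's PE/COFF specification contains an "Archive (Library) File Format" section that covers .lib files, and LIB.EXE shipped with Visual Studio can both list and extract members. A sketch of the commands (library and member names are hypothetical):

        REM list the object files inside a static library
        lib /LIST mylib.lib

        REM extract one member, using the exact name /LIST printed
        lib /EXTRACT:debug\foo.obj mylib.lib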

    Read the article

  • java.lang.UnsupportedClassVersionError in eclipse

    - by Derek
    Hi all, I am not really a Java programmer, so I am posting this question. A java.lang.UnsupportedClassVersionError is being thrown in the main class of my Eclipse project. If I comment out the imports that this class has, it compiles and runs fine. If I put the imports back in, it does not work. Does this mean that the libraries I am importing were compiled with a newer or older version of Java than I have? When I do java -version on the system I get 1.5_07. I could've sworn this was actually working last week, but maybe some setting in Eclipse got tweaked? Is the Java Build Path in Eclipse what I need to look at to check the JRE and compiler versions?
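
    As background, UnsupportedClassVersionError means a class file's version is newer than the running JVM understands (major version 49 = Java 5, 50 = Java 6). One way to check what a library was compiled for, assuming the JDK's javap tool is available (the jar and class names here are hypothetical):

        # prints e.g. "major version: 50" for a Java 6 class file
        javap -verbose -classpath mylib.jar com.example.SomeClass | grep major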

    Read the article

  • Two questions on ensuring EndInvoke() gets called on a list of IAsyncResult objects

    - by RobV
    So this question is regarding the .NET IAsyncResult design pattern and the necessity of calling EndInvoke, as covered in this question.

    Background: I have some code where I'm firing off potentially many asynchronous calls to a particular function and then waiting for all these calls to finish before using EndInvoke() to get back all the results.

    Question 1: I don't know whether any of the calls has encountered an exception until I call EndInvoke(), and in the event that an exception occurs in one of the calls, the entire method should fail and the exception gets wrapped into an API-specific exception and thrown upwards. So my first question is: what's the best way to ensure that the remainder of the async calls get properly terminated? Is a finally block which calls EndInvoke() on the remainder of the unterminated calls (and ignores any further exceptions) the best way to do this?

    Question 2: Secondly, when I first fire off all my async calls, I then call WaitHandle.WaitAll() on the array of WaitHandle instances that I've got from my IAsyncResult instances. The method which is firing all these async calls has a timeout to adhere to, so I provide this to the WaitAll() method. Then I test whether all the calls have completed; if not, then the timeout must have been reached, so the method should also fail and throw another API-specific exception. So my second question is: what should I do in this case? I need to call EndInvoke() to terminate all these async calls before I throw the error, but at the same time I don't want the code to get stuck, since EndInvoke() is blocking. In theory at least, if the WaitAll() call times out then all the async calls should themselves have timed out and thrown exceptions (thus completing the call), since they are also governed by a timeout, but this timeout is potentially different from the main timeout.

    Read the article

  • Understanding Ruby Enumerable#map (with more complex blocks)

    - by mstksg
    Let's say I have a function

        def odd_or_even n
          if n%2 == 0
            return :even
          else
            return :odd
          end
        end

    and I had a simple enumerable array

        simple = [1,2,3,4,5]

    and I ran it through map, with my function, using a do-end block:

        simple.map do |n|
          odd_or_even(n)
        end
        # => [:odd,:even,:odd,:even,:odd]

    How could I do this without, say, defining the function in the first place? For example,

        # does not work
        simple.map do |n|
          if n%2 == 0
            return :even
          else
            return :odd
          end
        end
        # Desired result:
        # => [:odd,:even,:odd,:even,:odd]

    is not valid Ruby, and the compiler gets mad at me for even thinking about it. But how would I implement an equivalent sort of thing that works?

    Read the article

  • Is Stream.Write thread-safe?

    - by Mike Spross
    I'm working on a client/server library for a legacy RPC implementation and was running into issues where the client would sometimes hang when waiting to receive a response message to an RPC request message. It turns out the real problem was in my message framing code (I wasn't handling message boundaries correctly when reading data off the underlying NetworkStream), but it also made me suspicious of the code I was using to send data across the network, specifically in the case where the RPC server sends a large amount of data to a client as the result of a client RPC request.

    My send code uses a BinaryWriter to write a complete "message" to the underlying NetworkStream. The RPC protocol also implements a heartbeat algorithm, where the RPC server sends out PING messages every 15 seconds. The pings are sent out by a separate thread, so, at least in theory, a ping can be sent while the server is in the middle of streaming a large response back to a client. Suppose I have a Send method as follows, where stream is a NetworkStream:

        public void Send(Message message)
        {
            //Write the message to a temporary stream so we can send it all-at-once
            MemoryStream tempStream = new MemoryStream();
            message.WriteToStream(tempStream);

            //Write the serialized message to the stream.
            //The BinaryWriter is a little redundant in this
            //simplified example, but is here because
            //the production code uses it.
            byte[] data = tempStream.ToArray();
            BinaryWriter bw = new BinaryWriter(stream);
            bw.Write(data, 0, data.Length);
            bw.Flush();
        }

    So the question I have is: is the call to bw.Write (and by implication the call to the underlying Stream's Write method) atomic? That is, if a lengthy Write is still in progress on the sending thread, and the heartbeat thread kicks in and sends a PING message, will that thread block until the original Write call finishes, or do I have to add explicit synchronization to the Send method to prevent the two Send calls from clobbering the stream?

    Read the article

  • How many instructions does it take to access a pointer in C?

    - by Derek
    Hi all, I am trying to figure out how many clock cycles or total instructions it takes to access a pointer in C. I don't think I know how to figure out, for example:

        p->x = d->a + f->b;

    I would assume two loads per pointer: a load for the pointer itself, and a load for the value. So in this operation, the pointer resolution would be a much larger factor than the actual addition, as far as trying to speed this code up, right? This may depend on the compiler and the architecture, but am I on the right track? I have seen some code where each value used in, say, 3 additions came from a

        f2->sum = p1->p2->p3->x + p1->p2->p3->a + p1->p2->p3->m;

    type of structure, and I am trying to determine how bad this is.
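
    To make the cost concrete: each -> in a chain is a dependent load, so p1->p2->p3->x performs two dependent pointer loads before the value itself is read, and the whole chain repeats for every term. A minimal sketch (the struct layout is invented for illustration) of hoisting the common prefix into a local, which optimizing compilers will often do themselves when aliasing rules allow:

        #include <stdio.h>

        struct P3 { int x, a, m; };
        struct P2 { struct P3 *p3; };
        struct P1 { struct P2 *p2; };

        int sum_naive(struct P1 *p1) {
            /* the p1->p2->p3 chain (two dependent loads) repeats per term */
            return p1->p2->p3->x + p1->p2->p3->a + p1->p2->p3->m;
        }

        int sum_hoisted(struct P1 *p1) {
            struct P3 *q = p1->p2->p3;   /* walk the chain once */
            return q->x + q->a + q->m;   /* then three value loads off one base */
        }

        int main(void) {
            struct P3 p3 = { 1, 2, 3 };
            struct P2 p2 = { &p3 };
            struct P1 p1 = { &p2 };
            printf("%d %d\n", sum_naive(&p1), sum_hoisted(&p1));
            return 0;
        }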

    Read the article

  • Target .NET 3.5 C++/CLI in Visual Studio 2010 Beta 2

    - by jeffora
    Has anyone had any success converting a VS 2008 C++/CLI (vcproj) project to a VS 2010 project (vcxproj), whilst maintaining .NET 3.5 as the target framework? I haven't been able to do this and get the project to build successfully. The project compiles fine in VS2008 as .NET 3.5, and fine in VS2010 as .NET 4.0, but I am unable to target .NET 3.5 in 2010. The IDE doesn't seem to provide an option for it, and modifying the vcxproj file by adding

        <TargetFrameworkVersion>v3.5</TargetFrameworkVersion>

    causes compilation to fail with the following error:

        Error 1 error C1001: An internal error has occurred in the compiler.

    According to this link, there are apparently some differences in the compilers used between VS2008 and 2010, but manually editing the project file was still suggested as a solution. Does anyone have any idea on this?

    Read the article

  • Boost::Container::Vector with Enum Template Argument - Not Legal Base Class

    - by CuppM
    Hi, I'm using Visual Studio 2008 with the Boost v1.42.0 library. If I use an enum as the template argument, I get a compile error when adding a value using push_back(). The compiler error is:

        'T': is not a legal base class

    and the location of the error is move.hpp line 79.

        #include <boost/interprocess/containers/vector.hpp>

        class Test
        {
        public:
            enum Types
            {
                Unknown = 0,
                First = 1,
                Second = 2,
                Third = 3
            };

            typedef boost::container::vector<Types> TypesVector;
        };

        int main()
        {
            Test::TypesVector o;
            o.push_back(Test::First);
            return 0;
        }

    If I use a std::vector instead it works. And if I resize the Boost version first and then set the values using the [] operator it also works. Is there some way to make this work using push_back()?

    Read the article

  • What should I do if an IOException is thrown?

    - by Roman
    I have the following 3 lines of code:

        ServerSocket listeningSocket = new ServerSocket(earPort);
        Socket serverSideSocket = listeningSocket.accept();
        BufferedReader in = new BufferedReader(new InputStreamReader(serverSideSocket.getInputStream()));

    The compiler complains about all 3 of these lines, and its complaint is the same for each:

        unreported exception java.io.IOException;

    In more detail, these exceptions are thrown by new ServerSocket, accept() and getInputStream(). I know I need to use try ... catch .... But for that I need to know what these exceptions mean in each particular case (how should I interpret them). When do they happen? I mean, not in general, but in these 3 particular cases.

    Read the article

  • performSelector:withObject:afterDelay: not working from scrollViewDidZoom

    - by oldbeamer
    Hi all, I feel like I should know this, but I've been stumped for hours now and I've run out of ideas. The theory is simple: the user manipulates the zoom and positioning in a scrollview using a pinch. If they hold that pinch for a short period of time, then the scrollview records the zoom level and content offsets. So I thought I'd start a performSelector:withObject:afterDelay: in the scrollViewDidZoom delegate method. If it expires, I record the settings. I cancel the scheduled call every time scrollViewDidZoom is called and when the pinch gesture finishes. This is what I have:

        - (void)scrollViewDidZoom:(UIScrollView *)scrollView{
            NSLog(@"resetting timer");
            [NSObject cancelPreviousPerformRequestsWithTarget:self selector:@selector(positionLock) object:nil];
            [self performSelector:@selector(positionLock) withObject:nil afterDelay:0.4];
        }

        -(void)positionLock{
            NSLog(@"position locked ");
        }

        - (void)scrollViewDidEndZooming:(UIScrollView *)scrollView withView:(UIView *)view atScale:(float)scale{
            NSLog(@"deleting timer");
            [NSObject cancelPreviousPerformRequestsWithTarget:self selector:@selector(positionLock) object:nil];
        }

    This is the output:

        2010-05-19 22:43:01.931 resetting timer
        2010-05-19 22:43:01.964 resetting timer
        2010-05-19 22:43:02.231 resetting timer
        2010-05-19 22:43:02.253 resetting timer
        2010-05-19 22:43:02.269 resetting timer
        2010-05-19 22:43:02.298 resetting timer
        2010-05-19 22:43:05.399 deleting timer

    As you can see, the delay between the last and second-last events should have been more than enough for the delayed selector call to execute. If I replace performSelector:withObject:afterDelay: with plain old performSelector: I get this:

        2010-05-19 23:08:30.333 resetting timer
        2010-05-19 23:08:30.333 position locked
        2010-05-19 23:08:30.366 resetting timer
        2010-05-19 23:08:30.367 position locked
        2010-05-19 23:08:30.688 deleting timer

    This isn't what I want, but it serves to show that it's only the delay that's causing it to not function, not something like the selector not being declared in the header, being misspelt, or any other reason. Any ideas as to why it's not working?

    Read the article

  • Opening Large (24 GB) File In C

    - by zacaj
    I'm trying to read in a 24 GB XML file in C, but it won't work. I'm printing out the current position using ftell() as I read it in, but once it gets to a big enough number, it goes back to a small number and starts over, never even getting 20% through the file. I assume this is a problem with the range of the variable used to store the position (long), which can go up to about 4,000,000,000 according to http://msdn.microsoft.com/en-us/library/s3f49ktz%28VS.80%29.aspx, while my file is 25,000,000,000 bytes in size. A long long should work, but how would I change what my compiler (Cygwin/mingw32) uses, or get it to have fopen64?
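
    For what it's worth, with MinGW (which uses the Microsoft CRT) the usual answer is the 64-bit file functions rather than fopen64; Cygwin's own CRT, if I remember right, already provides a 64-bit off_t via ftello()/fseeko(). A minimal sketch for the MinGW case, assuming _ftelli64/_fseeki64 are declared in your headers (the file name is hypothetical):

        #include <stdio.h>

        int main(void)
        {
            FILE *f = fopen("huge.xml", "rb");
            if (!f) return 1;

            char buf[1 << 16];
            while (fread(buf, 1, sizeof buf, f) == sizeof buf) {
                /* _ftelli64 returns a 64-bit position, so it keeps counting
                   past the 4 GB mark where plain long/ftell() wraps */
                long long pos = _ftelli64(f);
                if (pos % (1LL << 30) < (long long)sizeof buf)   /* ~once per GiB */
                    printf("at %lu MiB\n", (unsigned long)(pos >> 20));
            }
            fclose(f);
            return 0;
        }

    On glibc-based systems the equivalent is compiling with -D_FILE_OFFSET_BITS=64 and using ftello()/fseeko().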

    Read the article

  • bridge methods explanation

    - by xdevel2000
    If I override the clone method, the compiler creates a bridge method to guarantee correct polymorphism:

        class Point {
            Point() { }

            protected Point clone() throws CloneNotSupportedException {
                return this; // not good, only for example!!!
            }

            // the compiler-generated bridge method:
            protected volatile Object clone() throws CloneNotSupportedException {
                return clone();
            }
        }

    So when the clone method is invoked, the bridge method is invoked, and inside it the correct clone method is invoked. But my question is: when return clone() is called inside the bridge method, how does the VM know that it must invoke Point clone() and not the bridge method itself again?

    Read the article

  • Why might a C++ virtual function defined in a header not be compiled and linked into the vtable?

    - by 0xDEAD BEEF
    The situation is the following. I have a shared library which contains a class definition:

        QueueClass : IClassInterface {
            virtual void LOL() { /* do some magic */ }
        }

    My shared library initializes a class member:

        QueueClass *globalMember = new QueueClass();

    My shared library exports a C function which returns a pointer to globalMember:

        void * getGlobalMember(void) { return globalMember; }

    My application uses globalMember like this:

        ((IClassInterface*)getGlobalMember())->LOL();

    Now the very uber stuff: if I do not reference LOL from the shared library, then LOL is not linked in, and calling it from the application raises an exception. The reason: the VTABLE contains null in place of the pointer to the LOL() function. When I move the LOL() definition from the .h file to the .cpp file, it suddenly appears in the VTABLE and everything works just great. What explains this behavior? (GCC compiler + ARM architecture)
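
    A plausible explanation worth checking here: under the Itanium C++ ABI that GCC follows, the vtable is emitted in the translation unit that defines the class's first non-inline virtual member function (the "key function"). If every virtual is defined inline in the header, the vtable is only emitted weakly in each TU that uses the class, and a class never fully referenced on the library side may end up with no vtable at all. A minimal sketch of anchoring the vtable with a key function (names follow the question; the out-of-line destructors are the addition):

        // QueueClass.h
        class IClassInterface {
        public:
            virtual ~IClassInterface();   // defined out of line => key function
            virtual void LOL() = 0;
        };

        class QueueClass : public IClassInterface {
        public:
            ~QueueClass();                // declared here, defined in the .cpp,
            void LOL() { /* magic */ }    // so the vtable lands in QueueClass.cpp
        };

        // QueueClass.cpp
        IClassInterface::~IClassInterface() {}
        QueueClass::~QueueClass() {}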

    Read the article

  • How does compilation work with AOP?

    - by alee
    I need a quick answer to a simple thing in AOP. If I have code deployed on the client side and I have written new aspects which I want in the client-side software, do I have to "recompile" the complete software with the "original" code and the new "AOP" code (with an AOP compiler)? I.e., do I need the source code of the original program along with the source code of the new aspects, and compile them both? P.S.: I am asking in general, not being specific to any language.

    Read the article

  • About enumerations in Delphi and C++ in 64-bit environments

    - by sum1stolemyname
    I recently had to work around the different default sizes used for enumerations in Delphi and C++, since I have to use a C++ DLL from a Delphi application. One function call returns an array of structs (or records in Delphi), the first element of which is an enum. To make this work, I use packed records (or aligned(1) structs). However, since Delphi selects the size of an enum variable dynamically by default and uses the smallest datatype possible (it was a byte in my case), but C++ uses an int for enums, my data was not interpreted correctly. Delphi offers a compiler switch to work around this, so the declaration of the enum becomes:

        {$Z4}
        TTypeofLight = (
          V3d_AMBIENT,
          V3d_DIRECTIONAL,
          V3d_POSITIONAL,
          V3d_SPOT
        );
        {$Z1}

    My questions are: What will become of my structs when they are compiled on/for a 64-bit environment? Does the default C++ int grow to 8 bytes? Are there other memory alignment / data type size modifications (other than pointers)?
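
    For context, on both common 64-bit data models (LLP64 on Windows, LP64 on Unix) int stays 4 bytes; it is long (on LP64) and pointers that grow. A small sketch of pinning the layout down on the C++ side so a mismatch fails at compile time instead of at run time (the struct is hypothetical, and the fixed underlying type syntax needs C++11):

        #include <cstdint>

        enum class TypeOfLight : std::int32_t {   // C++11: force a 4-byte enum
            Ambient, Directional, Positional, Spot
        };

        #pragma pack(push, 1)                     // match Delphi's packed record
        struct LightRecord {
            TypeOfLight type;
            double      intensity;                // hypothetical payload field
        };
        #pragma pack(pop)

        static_assert(sizeof(TypeOfLight) == 4, "enum must stay 4 bytes");
        static_assert(sizeof(LightRecord) == 12, "packed layout drifted");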

    Read the article

  • EHsc vs EHa (synchronous vs asynchronous exception handling)

    - by watson1180
    Could you give a bullet list of the practical differences/implications? I read the relevant MSDN article, but my understanding of asynchronous exceptions is still a bit hazy. I am writing a test suite using Boost.Test, and my compiler emits a warning that /EHa should be enabled:

        warning C4535: calling _set_se_translator() requires /EHa

    The project itself uses only plain exceptions (from the STL) and doesn't need the /EHa switch. Do I have to recompile it with /EHa to make the test suite work properly? My feeling is that I need /EHa for the test suite only. Thank you and happy new year.
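
    For reference, /EHsc lets the compiler assume only C++ throw statements can raise exceptions, while /EHa also lets structured exceptions (access violations, divide by zero, ...) unwind C++ frames - which is what _set_se_translator() hooks into. A minimal sketch of what such a translator does, which only behaves as intended when the file installing it is compiled with /EHa:

        #include <windows.h>
        #include <eh.h>
        #include <stdexcept>
        #include <cstdio>

        // called for SEH exceptions; rethrows them as C++ exceptions
        void se_translator(unsigned int code, EXCEPTION_POINTERS*)
        {
            throw std::runtime_error(code == EXCEPTION_ACCESS_VIOLATION
                                         ? "access violation" : "SEH exception");
        }

        int main()
        {
            _set_se_translator(se_translator);   // warning C4535 without /EHa
            try {
                volatile int *p = 0;
                *p = 42;                          // raises an access violation
            } catch (const std::exception& e) {
                std::printf("caught: %s\n", e.what());
            }
            return 0;
        }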

    Read the article

  • problems with overloaded member functions in C++

    - by Dr Deo
    I have declared a class as:

        class DCFrameListener : public FrameListener, public OIS::MouseListener, public OIS::KeyListener
        {
            bool keyPressed(const OIS::KeyEvent & kEvt);
            bool keyReleased(const OIS::KeyEvent & kEvt);
            //*******some code missing************************
        };

    But if I try defining the members like this:

        bool DCFrameListener::keyPressed(const OIS::KeyEvent kEvt)
        {
            return true;
        }

    the compiler refuses with this error:

        error C2511: 'bool DCFrameListener::keyPressed(const OIS::KeyEvent)' : overloaded member function not found in 'DCFrameListener'
        see declaration of 'DCFrameListener'

    Why is this happening, when I declared the member keyPressed in my class declaration? Any help will be appreciated. Thanks
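
    Worth noting: the declaration in the class takes const OIS::KeyEvent & (a reference), while the definition takes const OIS::KeyEvent by value, and those are two different signatures - hence C2511. A sketch of the definition that matches:

        // the parameter type must match the declaration exactly: a const reference
        bool DCFrameListener::keyPressed(const OIS::KeyEvent & kEvt)
        {
            return true;
        }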

    Read the article

  • how to create a new variant in bjam

    - by steve jaffe
    I've tried reading the documentation, but it is rather impenetrable, so I'm hoping someone may have a simple answer. I want to define a new 'variant', based on 'debug', which just adds some macro definitions to the compiler command line, e.g. "-DSOMEMACRO". I think I may be able to do this as a "sub-variant" of debug, or else just define a new variant copying 'debug', but I'm not even sure where to do this. It looks like feature.jam in $BOOST_BUILD_DIR/build may be the place. Perhaps what I really want is simply a new 'feature', but it's still not clear to me exactly what I need to do and where, and I don't know whether a 'feature' allows me to direct the build products to a different directory than the 'debug' build. Any suggestions will be appreciated. (In case you're wondering, I have to use bjam since it has been adopted as our corporate standard.)
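
    If memory serves, Boost.Build lets you declare a derived variant in your own Jamroot instead of editing feature.jam; a hedged sketch (the variant name is invented, and the exact rule syntax should be checked against your Boost.Build version):

        # Jamroot: a new variant inheriting everything from 'debug' plus one
        # extra <define>; targets get their own bin/.../debug-custom directory
        variant debug-custom : debug : <define>SOMEMACRO ;

        # build with: bjam variant=debug-custom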

    Read the article

  • Technology and language for a stable Digital Audio Workstation development

    - by Kill KRT
    Hi, I'm designing a cross-platform (Windows/Linux/OS X) application, something like a digital audio workstation. I'd like to create a piece of software where users have a fully featured sequencer (multiple tracks with automation) and where it is possible to create instruments using a visual language (as in Pure Data/Max MSP). Ehm... I know that I've already posted a question about a related issue... But in order to decide which technology I should use, I think I'd better do more investigation. I'm quite an experienced user of audio trackers (Renoise, Protracker, ...) and sequencers (FL Studio, Cubase 5), but I have never tried to develop even a basic audio tracker. I know just the basic theory of mixing sound and how a DSP basically works. My questions are: Where can I find a good tutorial/guide/book about this issue? Do you think using C# (with NAudio) could dramatically reduce performance? I know C++ would be the best choice, but I find C# so elegant and easy to build and port, while C++ is so powerful and fast, but there are too many #defines and bad things for my taste! ;-) Thank you.

    Read the article

  • 2-byte (UCS-2) wide strings under GCC

    - by Seva Alekseyev
    Hi all, when porting my Visual C++ project to GCC, I found out that the wchar_t datatype is 4-byte UTF-32 by default. I could override that with a compiler option, but then the whole wcs* part of the RTL (wcslen, wcscmp, etc.) is rendered unusable, since it assumes 4-byte wide strings. For now, I've reimplemented 5-6 of these functions from scratch and #defined my implementations in. But is there a more elegant option - say, a build of the GCC RTL with 2-byte wchar_t quietly sitting somewhere, waiting to be linked? The specific flavors of GCC I'm after are Xcode on Mac OS X, Cygwin, and the one that comes with Debian Linux Etch.
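
    For reference, the compiler option being alluded to is -fshort-wchar, and the reimplementation route the question describes looks roughly like this minimal sketch (the function name is made up; on newer toolchains char16_t makes this less painful):

        #include <stddef.h>
        #include <stdint.h>

        /* wcslen equivalent for 2-byte wide strings; the RTL's wcslen
           can't be reused because GCC's wchar_t is 4 bytes by default */
        size_t u16len(const uint16_t *s)
        {
            const uint16_t *p = s;
            while (*p)
                ++p;
            return (size_t)(p - s);
        }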

    Read the article

  • NES Programming - Nametables?

    - by Jeffrey Kern
    Hello everyone, I'm wondering about how the NES displays its graphical muscle. I've researched stuff online and read through it, but I'm wondering about one last thing: nametables. Basically, from what I've read, each 8x8 block in a NES nametable points to a location in the pattern table, which holds graphic memory. In addition, the nametable also has an attribute table which sets a certain color palette for each 16x16 block. They're linked up together like this (assuming 16 8x8 blocks):

    Nametable, with A B C D = pointers to sprite data:

        ABBB
        CDCC
        DDDD
        DDDD

    Attribute table, with 1 2 3 = pointers to color palette data, with < referencing the value to the left, ^ above, and ' to the left and above:

        1<2<
        ^'^'
        3<3<
        ^'^'

    So, in the example above, the blocks would be colored as so:

        1A 1B 2B 2B
        1C 1D 2C 2C
        3D 3D 3D 3D
        3D 3D 3D 3D

    Now, if I have this on a fixed screen - it works great! Because the NES resolution is 256x240 pixels. Now, how do these tables get adjusted for scrolling? Because nametable 0 can scroll into nametable 1, and if you keep scrolling, nametable 0 will wrap around again. That I get. But what I don't get is how the attribute table wraps around as well. From what I've read online, the 16x16 blocks it assigns attributes to will cause color distortions on the edge tiles of the screen (as seen when you scroll left to right and vice-versa in SMB3). The concern I have is that I understand how to scroll the nametables, but how do you scroll the attribute table? For instance, if I have a green block on the left side of the screen, moving the screen to the right should in theory cause the tiles to the right to be green as well until they move more into frame, at which point they'll revert to their normal colors.
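
    As a concrete supplement: the attribute table is just the last 64 bytes of each 1 KB nametable, and both the byte and the 2-bit quadrant inside it are derived from the coarse tile coordinates, so attributes "scroll" simply because the same arithmetic is applied to the scrolled coordinates within each nametable. A sketch of the standard lookup (this follows the formula documented on the nesdev wiki; ppu_read stands in for your PPU memory accessor):

        #include <stdint.h>

        /* palette index (0-3) for 8x8 tile (tx, ty) in nametable nt (0-3) */
        uint8_t tile_palette(int nt, int tx, int ty,
                             uint8_t (*ppu_read)(uint16_t))
        {
            uint16_t base = 0x2000 + nt * 0x400;        /* nametable start  */
            uint16_t attr = base + 0x3C0                /* attribute table  */
                          + (ty / 4) * 8 + (tx / 4);    /* one byte per 4x4 tiles */
            uint8_t  b = ppu_read(attr);

            /* bits 0-1: top-left 16x16 quadrant, 2-3: top-right,
               4-5: bottom-left, 6-7: bottom-right */
            int shift = ((ty & 2) << 1) | (tx & 2);
            return (uint8_t)((b >> shift) & 3);
        }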

    Read the article

  • Why does calling ISet<dynamic>.Contains() compile, but throw an exception at runtime?

    - by Andrey Breslav
    Please help me to explain the following behavior:

        dynamic d = 1;
        ISet<dynamic> s = new HashSet<dynamic>();
        s.Contains(d);

    The code compiles with no errors/warnings, but at the last line I get the following exception:

        Unhandled Exception: Microsoft.CSharp.RuntimeBinder.RuntimeBinderException:
        'System.Collections.Generic.ISet<object>' does not contain a definition for 'Contains'
           at CallSite.Target(Closure , CallSite , ISet`1 , Object )
           at System.Dynamic.UpdateDelegates.UpdateAndExecuteVoid2[T0,T1](CallSite site, T0 arg0, T1 arg1)
           at FormulaToSimulation.Program.Main(String[] args) in

    As far as I can tell, this is related to dynamic overload resolution, but the strange things are: (1) if the type of s is HashSet<dynamic>, no exception occurs; (2) if I use a non-generic interface with a method accepting a dynamic argument, no exception occurs. Thus, it looks like this problem is related particularly to generic interfaces, but I could not find out what exactly causes the problem. Is it a bug in the compiler/type system, or legitimate behavior?

    Read the article

  • Problem with array of elements of a structure type

    - by kobac
    I'm writing an app in Visual Studio C++ and I have a problem with assigning values to the elements of an array whose elements are of a structure type. The compiler reports a syntax error for the assignment part of the code. Is it possible in any way to assign elements of an array which are of a structure type?

        typedef struct
        {
            CString x;
            double y;
        } Point;

        Point p[3];
        p[0] = {"first", 10.0};
        p[1] = {"second", 20.0};
        p[2] = {"third", 30.0};
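
    For context: before C++11, braces are only aggregate initialization syntax - they can initialize the array when it is declared, but can't appear on the right-hand side of an assignment afterwards. A minimal sketch of the forms that do compile (reusing the Point type from the question):

        // initialize at the point of declaration (valid C++98)
        Point p[3] = {
            { "first",  10.0 },
            { "second", 20.0 },
            { "third",  30.0 }
        };

        // or assign member by member after the fact
        Point q[3];
        q[0].x = "first";
        q[0].y = 10.0;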

    Read the article
