Search Results

Search found 5842 results on 234 pages for 'compiler warnings'.

Page 162/234 | < Previous Page | 158 159 160 161 162 163 164 165 166 167 168 169  | Next Page >

  • What is a .NET managed module?

    - by Abhijeet Patel
    I know it's a Windows PE32, but I also know that the unit of deployment in .NET is an assembly, which in turn has a manifest and can be made up of multiple managed modules. My questions are: 1) How would you create multiple managed modules when building a project such as a class library or a console app? 2) Is there a way to tell the compiler (via the project properties, for example) to partition your source code files into multiple managed modules? If so, what is the benefit of doing so? 3) Can managed modules span assemblies? 4) Are separate files created on disk when the source code is compiled, or are they created in memory and embedded directly in an assembly?
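
    A command-line sketch (file names hypothetical) of how multi-module assemblies are usually produced; the Visual Studio project system does not appear to expose this, but the C# command-line compiler does via /target:module and /addmodule:

        :: compile one source file into a standalone managed module (a .netmodule PE file on disk)
        csc /target:module MathHelpers.cs
        :: build the assembly that carries the manifest and folds the module in
        csc /target:library /addmodule:MathHelpers.netmodule MyLib.cs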

    Read the article

  • Extract Generic types from extended Generic

    - by Brigham
    I'm trying to refactor a class and a set of subclasses where the M type does not extend anything, even though we know it has to be a subclass of a certain type. That type is parametrized, and I would like its type parameters to be available to subclasses that already supply values for M. Is there any way to define this class without having to include the redundant K and V generic types in the parameter list? I'd like the compiler to be able to infer them from whatever M is mapped to by subclasses. public abstract class NewParametrized<K, V, M extends SomeParametrized<K, V>> { public void someMethodThatTakesKAndV(K k1, V v1) { } } In other words, I'd like the class declaration to look something like: public class NewParametrized<M extends SomeParametrized<K, V>> { and K and V's types would be inferred from the definition of M.
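
    For illustration, a hypothetical subclass under the existing three-parameter declaration; once M is fixed, K and V are fully determined, yet Java still requires them to be spelled out, because the language does not infer one type parameter from the bound of another:

        // hypothetical concrete subclass: K and V repeat information already implied by M
        public class ConcreteParametrized
                extends NewParametrized<String, Integer, SomeParametrized<String, Integer>> {
        }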

    Read the article

  • CString error, 'CString': is not a member of 'ATL::CStringT<BaseType, StringTraits>'

    - by flavour404
    Hi, I am trying to do this: #include <atlstr.h> CHAR Filename; // [sp+26Ch] [bp-110h]@1 char v31; // [sp+36Ch] [bp-10h]@1 int v32; // [sp+378h] [bp-4h]@1 GetModuleFileNameA(0, &Filename, 0x100u); CString::CString(&v31, &Filename); But I am getting compiler error C2039: 'CString': is not a member of 'ATL::CStringT'. This is a non-MFC DLL, but according to the docs you should be able to use CString functionality by including <atlstr.h>. How do I make it work? Thanks
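
    The fragment looks like decompiler output; a constructor is never called by name like CString::CString in hand-written C++. A minimal hedged sketch of the equivalent source-level code (buffer size chosen arbitrarily):

        #include <atlstr.h>

        CHAR filename[MAX_PATH] = {0};
        // fetch the path of the current module into the local buffer
        GetModuleFileNameA(NULL, filename, MAX_PATH);
        // construct the CString from the buffer instead of invoking the constructor explicitly
        CString path(filename);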

    Read the article

  • Boost::Container::Vector with Enum Template Argument - Not Legal Base Class

    - by CuppM
    Hi, I'm using Visual Studio 2008 with the Boost v1.42.0 library. If I use an enum as the template argument, I get a compile error when adding a value using push_back(). The compiler error is: 'T': is not a legal base class and the location of the error is move.hpp line 79. #include <boost/interprocess/containers/vector.hpp> class Test { public: enum Types { Unknown = 0, First = 1, Second = 2, Third = 3 }; typedef boost::container::vector<Types> TypesVector; }; int main() { Test::TypesVector o; o.push_back(Test::First); return 0; } If I use a std::vector instead it works. And if I resize the Boost version first and then set the values using the [] operator it also works. Is there some way to make this work using push_back()?
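
    For reference, the workaround already confirmed to compile in this setup, spelled out; it sidesteps push_back() and therefore the move-emulation machinery in move.hpp that tries to derive from the element type:

        Test::TypesVector o;
        o.resize(3);            // default-constructs the slots
        o[0] = Test::First;     // assignment through operator[] never touches move.hpp
        o[1] = Test::Second;
        o[2] = Test::Third;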

    Read the article

  • Declaring pointers; asterisk on the left or right of the space between the type and name?

    - by GenTiradentes
    I've seen mixed versions of this in a lot of code. (This applies to C and C++, by the way.) People seem to declare pointers in one of two ways, and I have no idea which one is correct, or if it even matters. The first way is to put the asterisk adjacent to the type name, like so: someType* somePtr; The second way is to put the asterisk adjacent to the name of the variable, like so: someType *somePtr; This has been driving me nuts for some time now. Is there any standard way of declaring pointers? Does it even matter how pointers are declared? I've used both declarations before, and I know that the compiler doesn't care which way it is. However, the fact that I've seen pointers declared in two different ways leads me to believe that there's a reason behind it. I'm curious if either method is more readable or logical in some way that I'm missing.
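
    A small sketch of the one place where the spacing argument has technical weight: in a multi-variable declaration the asterisk binds to the declarator (the name), not the type, which is the usual case made for the someType *somePtr style:

        int *p, v;     /* p is a pointer to int, v is a plain int             */
        int* q;        /* equally legal; whitespace around '*' never matters  */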

    Read the article

  • C# method generic params parameter bug?

    - by Mike M
    Hey, it appears to me as though there is a bug/inconsistency in the C# compiler. This works fine (the first method gets called): public void SomeMethod(string message, object data); public void SomeMethod(string message, params object[] data); // .... SomeMethod("woohoo", item); Yet this causes a "The call is ambiguous between the following methods" error: public void SomeMethod(string message, T data); public void SomeMethod(string message, params T[] data); // .... SomeMethod("woohoo", (T)item); I could just dump the first method entirely, but since this is a very performance-sensitive library and the first method will be used about 75% of the time, I would rather not always wrap things in an array and instantiate an iterator to go over a foreach when there is only one item. Splitting these into differently named methods would be messy at best, IMO. Thoughts?
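
    One hedged workaround sketch that keeps a fast single-item path while avoiding the overload pair that resolution complains about: give the variadic form a distinct shape, so a one-argument call can only ever bind to the scalar overload (assumes the enclosing class declares T):

        // scalar fast path: no array allocation, no iterator
        public void SomeMethod(string message, T data) { /* ... */ }
        // variadic path: only considered when the caller passes two or more items
        public void SomeMethod(string message, T first, T second, params T[] rest) { /* ... */ }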

    Read the article

  • The rules to connect to a web service through SSL and certificates

    - by blgnklc
    There is a web service running on Tomcat on a server. It is built with Java servlets and listens for callers on an SSL-enabled HTTP port, so its web service address looks like: https://172.29.12.12/axis/services/XYZClient?wsdl On the other hand, I want to connect to this web service from a Windows application built on the .NET Framework. When I try to connect to the web service from my computer, I get some specific errors. First I got a proxy authentication error, so I added some new lines to my code: Dim cr As System.Net.NetworkCredential = New System.Net.NetworkCredential("xname", "xsurname", "xdomainname") Dim myProxy As New WebProxy("http://mar.xxxyyy.com", True) myProxy.Credentials = cr Secondly, after these modifications it says "bad request", and I have not gotten past that error. I also tried to connect to the web service on the same computer: I copied my executable program to the machine where the web service runs, and the error there was: The underlying connection was closed: Could not establish trust relationship for SSL/TLS secure channel PS: When I try to connect to the web service using Internet Explorer, I first see some warnings about accepting an unknown certificate; I click "take me to the web service" and I get there fine. I want to know what the basic elements for connecting to such a web service are; could you please tell me the requirements I have to meet in my Windows project? regards bk
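
    The "Could not establish trust relationship" error usually means the client does not trust the server's (often self-signed) certificate. A hedged VB.NET sketch of the usual test-environment escape hatch: install a certificate-validation callback before calling the service (for production, importing the certificate into the Windows trusted store is the cleaner fix):

        ' accept the remote certificate unconditionally -- test environments only
        Private Function AcceptAnyCertificate(ByVal sender As Object, _
                ByVal cert As System.Security.Cryptography.X509Certificates.X509Certificate, _
                ByVal chain As System.Security.Cryptography.X509Certificates.X509Chain, _
                ByVal errors As System.Net.Security.SslPolicyErrors) As Boolean
            Return True
        End Function

        ' wire the callback up once, before the first call to the service proxy
        System.Net.ServicePointManager.ServerCertificateValidationCallback = _
            AddressOf AcceptAnyCertificate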

    Read the article

  • How does compilation work with AOP?

    - by alee
    I need a quick answer to a simple question about AOP. If I have code deployed at the client side and I have written new aspects that I want in the client-side software, do I have to recompile the complete software from the original code plus the new AOP code (with an AOP compiler)? In other words, do I need the source code of the original program as well as the source code of the new aspects, and then compile them both? P.S.: I am asking in general, not being specific to any language.
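
    In general it depends on the weaver: source weaving needs the original sources, but bytecode (binary) weaving and load-time weaving do not. As a concrete hedged example, AspectJ can weave new aspects into an already compiled jar (file names hypothetical):

        # weave new aspects into existing compiled classes -- no original source required
        ajc -inpath deployed-app.jar -aspectpath new-aspects.jar -outjar woven-app.jar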

    Read the article

  • Determining whether compiling on Windows or other system

    - by NumberFour
    Hi, I'm currently developing a cross-platform C application. Is there any compiler macro which is defined only during compilation on Windows, so I can #ifdef some Windows-specific #includes? A typical example is selecting between the WinSock and Berkeley sockets headers: #ifdef _WINDOWS #include <winsock.h> #else #include <sys/socket.h> #include <netinet/in.h> #include <sys/un.h> #include <arpa/inet.h> #include <netdb.h> #endif So the thing I'm looking for is something like that _WINDOWS macro. Thanks for any tips.
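
    _WIN32 is the macro predefined by MSVC, MinGW/GCC and most other Windows toolchains (for 64-bit targets as well), so the guard is usually written as:

        #ifdef _WIN32
        #include <winsock.h>        /* Windows sockets  */
        #else
        #include <sys/socket.h>     /* Berkeley sockets */
        #include <netinet/in.h>
        #include <arpa/inet.h>
        #include <netdb.h>
        #endif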

    Read the article

  • vector<vector<largeObject>> vs. vector<vector<largeObject>*> in c++

    - by Leif Andersen
    Obviously it will vary depending on the compiler you use, but I'm curious about the performance issues when using vector<vector<largeObject>> vs. vector<vector<largeObject>*>, especially in C++. Specifically: let's say that the outer vector is full and you want to start inserting elements into the first inner vector. How will that be stored in memory if the outer vector is just storing pointers, as opposed to storing the whole inner vector? Will the whole outer vector have to be moved to gain more space, or will the inner vector be moved (assuming that space wasn't pre-allocated), causing problems with the outer vector? Thank you
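
    A small sketch of what actually sits where (LargeObject is a hypothetical stand-in): the outer vector stores only the inner vectors' small headers, and each inner vector's elements live in its own separately allocated heap buffer, so growing one never forces the other's buffer to reallocate:

        #include <vector>

        struct LargeObject { char payload[4096]; };   // hypothetical large element

        int main() {
            std::vector<std::vector<LargeObject> > outer(10);
            // growing an inner vector reallocates only that inner vector's own buffer;
            // the outer vector's contiguous block of headers is untouched
            outer[0].push_back(LargeObject());
            // growing the outer vector copies (or, with C++11 move semantics, simply adopts)
            // the inner vectors; the inner buffers are what make the pre-C++11 copy expensive
            outer.push_back(std::vector<LargeObject>());
            return 0;
        }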

    Read the article

  • FreeTDS runs out of memory from DBD::Sybase

    - by skiphoppy
    When I add client charset = UTF-8 to my freetds.conf file, my DBD::Sybase program emits: Out of memory! and terminates. This happens when I call execute() on an SQL query statement that returns any ntext fields. I can return numeric data, datetimes, and nvarchars just fine, but whenever one of the output fields is ntext, I get this error. All these queries work perfectly fine without the UTF-8 setting, but I do need to handle some characters that throw warnings under the default character set. (See related question.) The error message is not formatted the same way other DBD::Sybase error messages seem to be formatted. I do get a message that a rollback() is being issued, though. (My false AutoCommit flag is being honored.) I think I read somewhere that FreeTDS uses the iconv program to convert between character sets; is it possible that this message is being emitted from iconv? If I execute the same query with the same freetds.conf settings in tsql (FreeTDS's command-line SQL shell), I don't get the error. I'm connecting to SQL Server. What do I need to do to get these queries to return successfully?
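
    One hedged direction to try in freetds.conf (values illustrative): ntext columns are Unicode, so they want a TDS protocol version of 7.0 or higher and a generous text size; an old protocol version combined with the iconv charset conversion triggered by client charset is a common source of exactly this kind of failure:

        [myserver]
            host = sqlserver.example.com
            port = 1433
            tds version = 8.0
            client charset = UTF-8
            text size = 20971520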

    Read the article

  • Do I have to create a static library to test my application?

    - by Christopher Gateley
    I'm just getting started with TDD and am curious as to what approaches others take to run their tests. For reference, I am using the Google testing framework, but I believe the question is applicable to most other testing frameworks and to languages other than C/C++. My general approach so far has been to do one of three things: Write the majority of the application in a static library, then create two executables. One executable is the application itself, while the other is the test runner with all of the tests. Both link to the static library. Embed the testing code directly into the application itself, and enable or disable the testing code using compiler flags. This is probably the best approach I've used so far, but it clutters up the code a bit. Embed the testing code directly into the application itself, and, given certain command-line switches, either run the application itself or run the tests embedded in the application. None of these solutions is particularly elegant... How do you do it?
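
    A minimal build-file sketch of the first (static library) layout, using CMake and googletest with hypothetical file names; the production binary and the test runner share all the real logic through the library, so nothing test-related leaks into the shipped executable:

        # core logic lives in a static library
        add_library(core STATIC src/core.cpp)

        # the real application is a thin main() over the library
        add_executable(app src/main.cpp)
        target_link_libraries(app core)

        # the test runner links the same library plus googletest
        add_executable(unit_tests test/core_test.cpp)
        target_link_libraries(unit_tests core gtest gtest_main)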

    Read the article

  • What should I do if an IOException is thrown?

    - by Roman
    I have the following 3 lines of code: ServerSocket listeningSocket = new ServerSocket(earPort); Socket serverSideSocket = listeningSocket.accept(); BufferedReader in = new BufferedReader(new InputStreamReader(serverSideSocket.getInputStream())); The compiler complains about all 3 of these lines, and its complaint is the same for each: unreported exception java.io.IOException; In more detail, these exceptions are thrown by new ServerSocket, accept() and getInputStream(). I know I need to use try ... catch .... But for that I need to know what these exceptions mean in each particular case (how should I interpret them), and when they happen. I mean, not in general, but in these 3 particular cases.
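
    A minimal sketch of the try/catch the compiler is asking for (earPort assumed defined elsewhere). Roughly: the ServerSocket constructor fails if the port cannot be bound (already in use, no permission), accept() fails if the listening socket is closed or the incoming connection is aborted, and getInputStream() fails if the accepted socket is no longer usable:

        try {
            ServerSocket listeningSocket = new ServerSocket(earPort);
            Socket serverSideSocket = listeningSocket.accept();
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(serverSideSocket.getInputStream()));
            // ... read from 'in' ...
        } catch (IOException e) {
            // log it, close any open resources, and decide whether to retry or give up
            e.printStackTrace();
        }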

    Read the article

  • typename resolution in cases of ambiguity

    - by parapura rajkumar
    I was playing with Visual Studio and templates. Consider this code: struct Foo { struct Bar { }; static const int Bar=42; }; template<typename T> void MyFunction() { typename T::Bar f; } int main() { MyFunction<Foo>(); return 0; } When I compile this in either Visual Studio 2008 or 11, I get the following error: error C2146: syntax error : missing ';' before identifier 'f' Is Visual Studio correct in this regard? Is the code violating any standard? If I change the code to struct Foo { struct Bar { }; static const int Bar=42; }; void SecondFunction( const int& ) { } template<typename T> void MyFunction() { SecondFunction( T::Bar ); } int main() { MyFunction<Foo>(); return 0; } it compiles without any warnings. Is a member of Foo preferred over a type in case of a name conflict?
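
    A non-template sketch of the same hiding rule, for intuition: when a class declares both a nested type and a data member named Bar, ordinary lookup finds the data member, and reaching the type again takes an elaborated-type-specifier (the same mechanism used for things like POSIX's struct stat vs. the stat() function):

        struct Foo {
            struct Bar { };
            static const int Bar = 42;   // hides the nested type in ordinary lookup
        };

        int n = Foo::Bar;                // OK: the data member, i.e. 42
        struct Foo::Bar obj;             // OK: elaborated-type-specifier names the type again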

    Read the article

  • How can I get this week's dates in Perl?

    - by ABach
    I have the following loop to calculate the dates of the current week and print them out. It works, but I am swimming in the number of date/time possibilities in Perl and want to get your opinion on whether there is a better way. Here's the code I've written: #!/usr/bin/env perl use warnings; use strict; use DateTime; # Calculate numeric value of today and the # target day (Monday = 1, Sunday = 7); the # target, in this case, is Monday, since that's # when I want the week to start my $today_dt = DateTime->now; my $today = $today_dt->day_of_week; my $target = 1; # Create DateTime copies to act as the "bookends" # for the date range my ($start, $end) = ($today_dt->clone(), $today_dt->clone()); if ($today == $target) { # If today is the target, "start" is already set; # we simply need to set the end date $end->add( days => 6 ); } else { # Otherwise, we calculate the Monday preceding today # and the Sunday following today my $delta = ($target - $today + 7) % 7; $start->add( days => $delta - 7 ); $end->add( days => $delta - 1 ); } # I clone the DateTime object again because, for some reason, # I'm wary of using $start directly... my $cur_date = $start->clone(); while ($cur_date <= $end) { my $date_ymd = $cur_date->ymd; print "$date_ymd\n"; $cur_date->add( days => 1 ); } As mentioned, this works, but is it the quickest or most efficient? I'm guessing that quickness and efficiency may not necessarily go together, but your feedback is very much appreciated.
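
    A shorter sketch of the same result using DateTime's own truncate(): truncating to 'week' snaps the date to the Monday that starts the current week, which removes the day-of-week arithmetic and the bookend clones:

        #!/usr/bin/env perl
        use strict;
        use warnings;
        use DateTime;

        # Monday of the current week, then the following six days
        my $day = DateTime->now->truncate( to => 'week' );
        for ( 1 .. 7 ) {
            print $day->ymd, "\n";
            $day->add( days => 1 );
        }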

    Read the article

  • Problem importing Oracle .dmp file

    - by BitFiddler
    So I have looked at all the suggested ways of importing .dmp files and none of them seem to answer this question: where does the data go once you import it? Context: I created a user like so: SQL> create user IMPORTER identified by "12345"; SQL> grant connect, unlimited tablespace, resource to IMPORTER; I then ran the 'imp' command as follows: C:\>imp system/password FROMUSER=OVIEDOE TOUSER=IMPORTER file=c:\database1.dmp Now there were 9 .dmp files; after each one it asked me for the next one, and then I received the message "Import terminated successfully with warnings." The warning was: Warning: the objects were exported by OVIEDOE, not by you import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set export client uses WE8ISO8859P1 character set (possible charset conversion) IMP-00046: using FILESIZE value from export file of 2147483648 Now it says it was terminated successfully, so my assumption (I am new to Oracle, so this may be wrong) is that the data was loaded. However, when I use SQL Developer to connect to the database and look under the 'tables' node under the IMPORTER user, there is nothing there. What is going on? Did the data load? If so, where can I find it?
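
    A hedged first check: ask the data dictionary where the tables actually landed, since SQL Developer's tables node only shows objects owned by the schema you expand, and a FROMUSER/TOUSER import that hits permission or tablespace problems can leave objects under a different owner (or create none at all):

        -- run while connected as SYSTEM (or any user with access to the DBA views)
        SELECT owner, table_name
          FROM dba_tables
         WHERE owner IN ('IMPORTER', 'OVIEDOE')
         ORDER BY owner, table_name;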

    Read the article

  • java.lang.UnsupportedClassVersionError in eclipse

    - by Derek
    Hi all, I am not really a Java programmer, so I am posting this question. The exception java.lang.UnsupportedClassVersionError is being thrown in my main class in an Eclipse project. If I comment out the imports that this class has, it compiles and runs fine. If I put the imports back in, it does not work. Does this mean that the libraries I am importing were compiled with a newer or older version of Java than I have? When I do java -version on the system I get 1.5_07. I could've sworn this was actually working last week, but maybe some setting in Eclipse got tweaked? Is the Java Build Path in Eclipse what I need to look at to check the JRE and compiler versions?
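
    Yes -- this error means a class file was built for a newer JVM than the one running it, so the places to check in Eclipse are Project > Properties > Java Compiler (compliance level) and Java Build Path > Libraries (which JRE). A hedged way to confirm which side is off, using javap on one of the imported library classes (jar and class names hypothetical); major version 49 corresponds to Java 5, 50 to Java 6:

        # print the class-file version of a class inside the imported library
        javap -verbose -classpath some-imported-library.jar com.example.SomeImportedClass | grep major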

    Read the article

  • Understanding Ruby Enumerable#map (with more complex blocks)

    - by mstksg
    Let's say I have a function def odd_or_even n if n%2 == 0 return :even else return :odd end end And I have a simple enumerable array simple = [1,2,3,4,5] And I ran it through map, with my function, using a do-end block: simple.map do |n| odd_or_even(n) end # => [:odd,:even,:odd,:even,:odd] How could I do this without, say, defining the function in the first place? For example, # does not work simple.map do |n| if n%2 == 0 return :even else return :odd end end # Desired result: # => [:odd,:even,:odd,:even,:odd] is not valid Ruby, and the compiler gets mad at me for even thinking about it. But how would I implement an equivalent sort of thing that works?
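
    The explicit return is the problem: inside a block, return tries to return from the enclosing method (or raises a LocalJumpError at the top level), whereas map simply collects the value of the block's last evaluated expression. A sketch of the inline version:

        simple = [1, 2, 3, 4, 5]

        result = simple.map do |n|
          if n % 2 == 0     # no `return` -- the last expression is the block's value
            :even
          else
            :odd
          end
        end
        # result == [:odd, :even, :odd, :even, :odd]

        # or, more compactly
        simple.map { |n| n.even? ? :even : :odd }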

    Read the article

  • How to redefine symbol names in objects with RVCT?

    - by Batuu
    I currently develop a small OS for an embedded platform based on an ARM Cortex-M3 microcontroller. The OS provides an API for customer application development. The OS kernel and the API are compiled into a static lib by the ARMCC compiler, and customers can link their applications against it. The lib and the object files it contains expose the complete list of symbols used in the kernel. To "protect" the kernel and its inner state from external code hooking into obvious variables and functions, I would like to do some easy obfuscation by renaming the symbols randomly. GNU binutils seems to do this by calling objcopy with the --redefine-sym flag, but GNU binutils cannot read ARMCC / RVCT objects. Is there any solution to do this kind of obfuscation with RVCT?

    Read the article

  • AudioRecord problems with non-HTC devices

    - by Marc
    I'm having trouble using AudioRecord. An example using some of the code derived from the splmeter project: private static final int FREQUENCY = 8000; private static final int CHANNEL = AudioFormat.CHANNEL_CONFIGURATION_MONO; private static final int ENCODING = AudioFormat.ENCODING_PCM_16BIT; private int BUFFSIZE = 50; private AudioRecord recordInstance = null; ... android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO); recordInstance = new AudioRecord(MediaRecorder.AudioSource.MIC, FREQUENCY, CHANNEL, ENCODING, 8000); recordInstance.startRecording(); short[] tempBuffer = new short[BUFFSIZE]; int retval = 0; while (this.isRunning) { for (int i = 0; i < BUFFSIZE - 1; i++) { tempBuffer[i] = 0; } retval = recordInstance.read(tempBuffer, 0, BUFFSIZE); ... // process the data } This works on the HTC Dream and the HTC Magic perfectly, without any log warnings/errors, but causes problems on the emulators and the Nexus One device. On the Nexus One, it simply never returns useful data. I cannot provide any other useful information, as I'm having a remote friend do the testing. On the emulators (Android 1.5, 2.1 and 2.2), I get weird errors from the AudioFlinger and buffer overflows with the AudioRecordThread. I also get a major slowdown in UI responsiveness (even though the recording takes place in a separate thread from the UI). Is there something apparent that I'm doing incorrectly? Do I have to do anything special for the Nexus One hardware?
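
    One thing that stands out (hedged, since the failure can't be reproduced here): the record buffer size and the read size are hard-coded, and hardware other than the two HTC models may require larger internal buffers. A sketch that sizes both from AudioRecord.getMinBufferSize(), reusing the constants above:

        // minimum internal buffer the hardware/driver will accept for this format
        int minBuf = AudioRecord.getMinBufferSize(FREQUENCY, CHANNEL, ENCODING);
        AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
                FREQUENCY, CHANNEL, ENCODING, minBuf * 2);   // some headroom over the minimum
        recorder.startRecording();

        short[] tempBuffer = new short[minBuf];
        int read = recorder.read(tempBuffer, 0, tempBuffer.length);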

    Read the article

  • How many instructions to access pointer in C?

    - by Derek
    Hi All, I am trying to figure out how many clock cycles or total instructions it takes to access a pointer in C. I don't think I know how to figure it out; for example, for p->x = d->a + f->b I would assume two loads per pointer, just guessing that there would be a load for the pointer and a load for the value. So in this operation the pointer resolution would be a much larger factor than the actual addition, as far as trying to speed this code up, right? This may depend on the compiler and the architecture, but am I on the right track? I have seen some code where each value used in, say, 3 additions came from a f2->sum = p1->p2->p3->x + p1->p2->p3->a + p1->p2->p3->m type of structure, and I am trying to determine how bad this is.
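
    A sketch of the usual hand-optimization for the chained case (the types here are hypothetical): hoist the common pointer chain into a local so the sequence of dependent loads happens once rather than once per term; optimizers often do this themselves, but only when they can prove the intermediate pointers aren't modified by other stores:

        /* before: three walks of the same chain, i.e. three dependent-load sequences */
        f2->sum = p1->p2->p3->x + p1->p2->p3->a + p1->p2->p3->m;

        /* after: one walk, then three loads off the cached pointer */
        const struct Leaf *q = p1->p2->p3;   /* 'struct Leaf' is a hypothetical type */
        f2->sum = q->x + q->a + q->m;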

    Read the article

  • bridge methods explanation

    - by xdevel2000
    If I override a clone method, the compiler creates a bridge method to guarantee correct polymorphism: class Point { Point() { } protected Point clone() throws CloneNotSupportedException { return this; // not good, only for example!!! } protected volatile Object clone() throws CloneNotSupportedException { return clone(); } } So when the clone method is invoked, the bridge method is invoked, and inside it the correct clone method is called. But my question is: when the bridge method executes return clone(), how does the VM know that it must invoke Point clone() and not the bridge method itself again???

    Read the article

  • Flash doesn't connect to socket even though policy allows it

    - by Bart van Heukelom
    In my Flash app, I'm connecting to my server like this: Security.loadPolicyFile("xmlsocket://example.com:12860"); socket = new Socket("example.com", 12869); socket.writeByte(...); ... socket.flush(); At port 12860 I'm running a socket policy server, which (according to this document) correctly serves up my policy like this: 00000000 3c 70 6f 6c 69 63 79 2d 66 69 6c 65 2d 72 65 71 <policy- file-req 00000010 75 65 73 74 2f 3e 00 uest/>. 00000000 3c 63 72 6f 73 73 2d 64 6f 6d 61 69 6e 2d 70 6f <cross-d omain-po 00000010 6c 69 63 79 3e 3c 73 69 74 65 2d 63 6f 6e 74 72 licy><si te-contr 00000020 6f 6c 20 70 65 72 6d 69 74 74 65 64 2d 63 72 6f ol permi tted-cro 00000030 73 73 2d 64 6f 6d 61 69 6e 2d 70 6f 6c 69 63 69 ss-domai n-polici 00000040 65 73 3d 22 6d 61 73 74 65 72 2d 6f 6e 6c 79 22 es="mast er-only" 00000050 20 2f 3e 3c 61 6c 6c 6f 77 2d 61 63 63 65 73 73 /><allo w-access 00000060 2d 66 72 6f 6d 20 64 6f 6d 61 69 6e 3d 22 2a 22 -from do main="*" 00000070 20 74 6f 2d 70 6f 72 74 73 3d 22 31 32 38 36 39 to-port s="12869 00000080 22 20 2f 3e 3c 2f 63 72 6f 73 73 2d 64 6f 6d 61 " /></cr oss-doma 00000090 69 6e 2d 70 6f 6c 69 63 79 3e 00 in-polic y>. I get no security warnings, which I used to get before the policy server was in place. Still, the connection to port 12869 doesn't work. It's made (I can see with Wireshark and on the server), but no data is sent by Flash. It might be worth knowing that the SWF itself is served from example.com as well.

    Read the article

  • how to create a new variant in bjam

    - by steve jaffe
    I've tried reading the documentation, but it is rather impenetrable, so I'm hoping someone may have a simple answer. I want to define a new 'variant', based on 'debug', which just adds some macro definitions to the compiler command line, e.g. "-DSOMEMACRO". I think I may be able to do this as a "sub-variant" of debug, or else just define a new variant copying 'debug', but I'm not even sure where to do this. It looks like feature.jam in $BOOST_BUILD_DIR/build may be the place. Perhaps what I really want is simply a new 'feature', but it's still not clear to me exactly what I need to do and where, and I don't know whether a 'feature' allows me to direct the build products to a different directory from the 'debug' build. Any suggestions will be appreciated. (In case you're wondering, I have to use bjam since it has been adopted as our corporate standard.)
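
    A hedged Jamroot/Jamfile sketch of one way this is usually done in Boost.Build: declare a new variant derived from debug that adds the define. Build products then land in their own variant-named subdirectory automatically, and no edit to feature.jam inside the Boost.Build installation should be needed (target and macro names here are illustrative):

        # new variant: everything 'debug' does, plus the extra macro on the compiler line
        variant debug-instrumented : debug : <define>SOMEMACRO ;

        exe myapp : main.cpp ;
        # built with:  bjam variant=debug-instrumented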

    Read the article

  • Configuring an offscreen framebuffer fails the completeness test

    - by randallmeadows
    I'm trying to create an offscreen framebuffer into which I can do some OpenGL drawing, and then pull the bits out manually. I'm following the instructions here, but in step 4, status is 0 instead of GL_FRAMEBUFFER_COMPLETE_OES. If I insert a call to glGetError() after every gl call, it returns 0 (GL_NO_ERROR) every time. But the values of variables do not change during the calls. E.g., GLuint framebuffer; glGenFramebuffersOES(1, &framebuffer); glBindFramebufferOES(GL_FRAMEBUFFER_OES, framebuffer); the value of framebuffer does not get altered at all (even when I change it to some arbitrary value and re-execute). It's almost like the gl calls are not actually being made. I'm linking against the OpenGLES framework, and get no compile, link, or run-time errors (or warnings). I'm at a loss as to what to do to fix this. I've tried continuing on with my drawing, but I do not see the results I expect, and at this point I can't tell whether it's because of the above error or the conversion to a UIImage.
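
    The symptom that gl calls return no error yet change nothing (glGenFramebuffersOES not even writing the generated name) usually means no OpenGL ES context is current on the calling thread, in which case every gl call is silently dropped. A hedged sketch of the setup order, including the color attachment the completeness check also needs (width and height assumed defined elsewhere):

        #import <OpenGLES/EAGL.h>
        #import <OpenGLES/ES1/gl.h>
        #import <OpenGLES/ES1/glext.h>

        // 1. make a context current on this thread *before* any gl* calls
        EAGLContext *ctx = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1];
        [EAGLContext setCurrentContext:ctx];

        // 2. framebuffer plus a color renderbuffer attached to it
        GLuint framebuffer = 0, colorRenderbuffer = 0;
        glGenFramebuffersOES(1, &framebuffer);
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, framebuffer);
        glGenRenderbuffersOES(1, &colorRenderbuffer);
        glBindRenderbufferOES(GL_RENDERBUFFER_OES, colorRenderbuffer);
        glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_RGBA8_OES, width, height);
        glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES,
                                     GL_RENDERBUFFER_OES, colorRenderbuffer);

        // 3. only now is the completeness check meaningful
        GLenum status = glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES);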

    Read the article
