Search Results

Search found 5842 results on 234 pages for 'compiler warnings'.

Page 203/234 | < Previous Page | 199 200 201 202 203 204 205 206 207 208 209 210  | Next Page >

  • Macro to improve callback registration readability

    - by Warren Seine
    I'm trying to write a macro to make a specific usage of callbacks in C++ easier. All my callbacks are member functions that take this as the first argument and a second one whose type inherits from a common base class. The usual way to go is:

        register_callback(boost::bind(&my_class::member_function, this, _1));

    I'd love to write:

        register_callback(HANDLER(member_function));

    Note that it will always be used within the same class. Even if typeof is considered bad practice, it sounds like a pretty solution to the lack of a __class__ macro for getting the current class name. The following code works:

        typedef typeof(*this) CLASS;
        boost::bind(&CLASS::member_function, this, _1)(my_argument);

    but I can't use this code in a macro which will be given as an argument to register_callback. I've tried:

        #define HANDLER(FUN) \
            boost::bind(& typeof(*this) :: FUN, this, _1)

    which doesn't work for reasons I don't understand. Quoting the GCC documentation: "A typeof-construct can be used anywhere a typedef name could be used." My compiler is GCC 4.4, and even if I'd prefer something standard, GCC-specific solutions are accepted.
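    One workaround that fits a macro (a sketch with hypothetical helper names, not from the original post): since the handler is always registered from inside the class, a class-local typedef can stand in for the missing __class__, and typeof is not needed at the call site at all:

        #include <boost/bind.hpp>
        #include <boost/function.hpp>
        #include <iostream>

        struct event { int code; };

        // Assumed registration hook, for illustration only.
        boost::function<void(event)> g_callback;
        void register_callback(boost::function<void(event)> cb) { g_callback = cb; }

        // Hypothetical helpers: declare the alias once per class,
        // then let the macro refer to it.
        #define DECLARE_HANDLER_CLASS(C) typedef C handler_class
        #define HANDLER(FUN) boost::bind(&handler_class::FUN, this, _1)

        class my_class {
            DECLARE_HANDLER_CLASS(my_class);
        public:
            void on_event(event e) { std::cout << "got " << e.code << "\n"; }
            // expands to boost::bind(&my_class::on_event, this, _1)
            void init() { register_callback(HANDLER(on_event)); }
        };

        int main() {
            my_class m;
            m.init();
            event e = { 42 };
            g_callback(e); // prints "got 42"
            return 0;
        }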

    Read the article

  • C/C++ Control Structure Limitations?

    - by STingRaySC
    I have heard of a limitation in VC++ (not sure which version) on the number of nested if statements (somewhere in the ballpark of 300). The code was of the form:

        if (a) ...
        else if (b) ...
        else if (c) ...
        ...

    I was surprised to find out there is a limit to this sort of thing, and that the limit is so small. I'm not looking for comments about coding practice and why to avoid this sort of thing altogether. Here's a list of things that I'd imagine could have some limitation:

        - Number of functions in a scope (global, class, or namespace).
        - Number of expressions in a single statement (e.g., compound conditionals).
        - Number of cases in a switch.
        - Number of parameters to a function.
        - Number of classes in a single hierarchy (either inheritance or containment).

    What other control structures/language features have limits such as this? Do the language standards say anything about these limits (perhaps minimum requirements for an implementation)? Has anyone run into a particular language limitation like this with a particular compiler/implementation?

    EDIT: Please note that the above form of if statements is indeed "nested." It is equivalent to:

        if (a) {
            //...
        } else {
            if (b) {
                //...
            } else {
                if (c) {
                    //...
                } else {
                    //...
                }
            }
        }
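    For reference, the standards do address this. C99 lists translation limits (at least 127 nesting levels of blocks, 1023 case labels in a switch, 127 parameters in one function definition), and C++'s Annex B gives informative, non-binding recommended minimums (for example, 256 nesting levels of compound statements). Actual ceilings are compiler-specific; a quick empirical probe (a sketch) is to generate the chain mechanically and compile the output:

        #include <fstream>

        int main() {
            const int depth = 500; // raise until the target compiler gives up
            std::ofstream out("nested.cpp");
            out << "int f(int x) {\n";
            for (int i = 0; i < depth; ++i)
                out << "if (x == " << i << ") return " << i << "; else\n";
            out << "return -1;\n}\nint main() { return f(0); }\n";
            return 0;
        }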

    Read the article

  • Does cout need to be terminated with a semicolon?

    - by Philippe Harewood
    I am reading Bjarne Stroustrup's Programming: Principles and Practice Using C++. The drill section for Chapter 2 talks about various ways to look at typing errors when compiling the hello_world program:

        #include "std_lib_facilities.h"

        int main() // C++ programs start by executing the function main
        {
            cout << "Hello, World!\n", // output "Hello, World!"
            keep_window_open();        // wait for a character to be entered
            return 0;
        }

    In particular this section asks: "Think of at least five more errors you might have made typing in your program (e.g. forget keep_window_open(), leave the Caps Lock key on while typing a word, or type a comma instead of a semicolon) and try each to see what happens when you try to compile and run those versions."

    For the cout line, you can see that there is a comma instead of a semicolon. This compiles and runs (for me). Is it making an assumption (like in the JavaScript question "Why use semicolon?") that the statement has been terminated? Because when I try the same substitution on the keep_window_open(); line, the compiler does inform me of the missing semicolon.
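    For what it's worth, the likely explanation of the snippet above: the comma is the built-in comma operator, so the two calls collapse into a single expression statement that is terminated by the semicolon after keep_window_open(). No termination is being inferred. A minimal standalone illustration:

        #include <iostream>

        int f() { std::cout << "first\n";  return 1; }
        int g() { std::cout << "second\n"; return 2; }

        int main() {
            int x = (f(), g()); // built-in comma: evaluate f(), discard, yield g()
            std::cout << x << '\n'; // prints 2

            // The book's typo works the same way:
            //   cout << "Hello, World!\n",   <- comma joins this expression...
            //   keep_window_open();          <- ...to this call; this semicolon ends the statement
            // A comma before 'return 0;' would NOT compile: return is a statement, not an expression.
            return 0;
        }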

    Read the article

  • (Cpp) Linker, Libraries & Directories Information

    - by m00st
    I've finished both my C++ 1/2 classes and we did not cover anything on linking to libraries or adding additional libraries to C++ code. I've been having a hard time trying to figure this out; I've been unable to find basic information on linking to objects. Initially I thought the problem was the IDE (Netbeans; and Code::Blocks). However, I've been unable to get wxWidgets and GTKMM set up. Can someone point me in the right direction on the terminology and basic information about #including files and linking files in a C++ application? Basically I want/need to know everything in regard to this process:

        - The difference between .dll, .lib, .o, .lib.a, and .dll.a.
        - The difference between a .h and a "library" (.dll, .lib, correct?).

    I understand I need to read the documentation for the compiler I am using; however, all compilers (that I know of) use linkers and headers, so I need to learn this information. Please point me in the right direction! :] Thanks
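    A minimal picture of how the pieces relate (a sketch; the file names and build commands are illustrative GCC usage): headers give the compiler declarations at #include time, while libraries give the linker the compiled definitions:

        // greet.h -- header: declarations only; consumed at compile time
        #ifndef GREET_H
        #define GREET_H
        void greet();
        #endif

        // greet.cpp -- compiled into an object file (.o), then archived into a
        // static library (.a / .lib) or linked into a shared one (.so / .dll)
        #include "greet.h"
        #include <iostream>
        void greet() { std::cout << "hello\n"; }

        // main.cpp -- uses the declaration; the linker resolves the definition
        #include "greet.h"
        int main() { greet(); }

        // Build steps (GCC, illustrative):
        //   g++ -c greet.cpp            -> greet.o
        //   ar rcs libgreet.a greet.o   -> static library
        //   g++ main.cpp -L. -lgreet -o app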

    Read the article

  • Boost Binary Endian parser not working?

    - by Hai
    I am studying how to use the boost spirit Qi binary endian parser. I wrote a small test parser program according to here and the basics examples, but it doesn't work properly. It gives me the message "Error: not match!!". Here is my code:

        #include "boost/spirit/include/qi.hpp"
        #include "boost/spirit/include/phoenix_core.hpp"
        #include "boost/spirit/include/phoenix_operator.hpp"
        #include "boost/spirit/include/qi_binary.hpp"

        // parsing binary data in various endianness
        template <typename P, typename T>
        void binary_parser(char const* input, P const& endian_word_type, T& voxel,
                           bool full_match = true)
        {
            using boost::spirit::qi::parse;
            char const* f(input);
            char const* l(f + strlen(f));
            bool result1 = parse(f, l, endian_word_type, voxel);
            bool result2 = ((!full_match) || (f == l));
            if (result1 && result2) {
                // doing nothing, parsed data has been passed to voxel already
            } else {
                std::cerr << "Error: not match!!" << std::endl;
                exit(1);
            }
        }

        typedef boost::uint16_t bs_int16;
        typedef boost::uint32_t bs_int32;

        int main(int argc, char *argv[])
        {
            namespace qi = boost::spirit::qi;
            namespace ascii = boost::spirit::ascii;
            using qi::big_word;
            using qi::big_dword;

            boost::uint32_t ui;
            float uf;
            binary_parser("\x01\x02\x03\x04", big_word, ui);
            assert(ui = 0x01020304);
            binary_parser("\x01\x02\x03\x04", big_word, uf);
            assert(uf = 0x01020304);
            return 0;
        }

    I almost copied the example, so why doesn't this binary parser work? I am using Mac OS 10.5.8 and the gcc 4.0.1 compiler.
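    One hedged observation, based on the documented attribute sizes in Qi: big_word matches a 16-bit big-endian value and big_dword a 32-bit one, so parsing four bytes with big_word consumes only two and the full-match test (f == l) fails. Also note the asserts use = where == was intended. A sketch of the 32-bit case:

        #include <boost/spirit/include/qi.hpp>
        #include <boost/cstdint.hpp>
        #include <cassert>

        int main() {
            namespace qi = boost::spirit::qi;
            char const input[] = "\x01\x02\x03\x04";
            char const* f = input;
            char const* l = input + 4; // known length; avoid strlen on binary data
            boost::uint32_t ui = 0;
            bool ok = qi::parse(f, l, qi::big_dword, ui); // 32-bit big-endian parser
            assert(ok && f == l);
            assert(ui == 0x01020304); // note ==, not =
            return 0;
        }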

    Read the article

  • adding virtual function to the end of the class declaration avoids binary incompatibility?

    - by bob
    Could someone explain to me why adding a virtual function to the end of a class declaration avoids binary incompatibility? If I have:

        class A {
        public:
            virtual ~A();
            virtual void someFuncA() = 0;
            virtual void someFuncB() = 0;
            virtual void other1() = 0;
        private:
            int someVal;
        };

    And later modify this to:

        class A {
        public:
            virtual ~A();
            virtual void someFuncA();
            virtual void someFuncB();
            virtual void someFuncC();
            virtual void other1() = 0;
        private:
            int someVal;
        };

    I get a coredump from another .so compiled against the previous declaration. But if I put someFuncC() at the end of the class declaration (after "int someVal"), I don't see the coredump anymore. Could someone tell me why this is? And does this trick always work? PS. The compiler is gcc; does this work with other compilers?
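    A self-contained analogy (the indexing idea only, not the real ABI machinery): virtual dispatch goes through a table of function pointers indexed in declaration order, and code compiled against the old layout keeps using the old indices:

        #include <cstdio>

        void dtor()   { std::puts("~A"); }
        void funcA()  { std::puts("someFuncA"); }
        void funcB()  { std::puts("someFuncB"); }
        void funcC()  { std::puts("someFuncC"); }
        void other1() { std::puts("other1"); }

        int main() {
            // "Old" layout: other1 lives in slot 3. A caller compiled against
            // this layout hard-codes index 3.
            void (*old_vtable[])() = { dtor, funcA, funcB, other1 };
            const int baked_in_index_of_other1 = 3;
            old_vtable[baked_in_index_of_other1](); // "other1"

            // New layout with someFuncC inserted in the middle: slots shift.
            void (*new_vtable[])() = { dtor, funcA, funcB, funcC, other1 };
            new_vtable[baked_in_index_of_other1](); // "someFuncC" -- wrong call!

            // New layout with someFuncC appended: old indices stay valid.
            void (*appended_vtable[])() = { dtor, funcA, funcB, other1, funcC };
            appended_vtable[baked_in_index_of_other1](); // "other1" -- still correct
            return 0;
        }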

    Read the article

  • Why is TRest in Tuple<T1... TRest> not constrained?

    - by Anthony Pegram
    In a Tuple, if you have more than 7 items, you can provide an 8th item that is another tuple, define up to 7 items in it, and then another tuple as its 8th, and on and on down the line. However, there is no constraint on the 8th item at compile time. For example, this is legal code for the compiler:

        var tuple = new Tuple<int, int, int, int, int, int, int, double>
            (1, 1, 1, 1, 1, 1, 1, 1d);

    Even though the IntelliSense documentation says that TRest must be a Tuple, you do not get any error when writing or building the code; it does not manifest until runtime, in the form of an ArgumentException. You can roughly implement a Tuple in a few minutes, complete with a Tuple-constrained 8th item. I just wonder why it was left off the current implementation? Is it possibly a forward-compatibility issue where they could add more elements with a hypothetical C# 5? Short version of a rough implementation:

        interface IMyTuple { }

        class MyTuple<T1> : IMyTuple {
            public T1 Item1 { get; private set; }
            public MyTuple(T1 item1) { Item1 = item1; }
        }

        class MyTuple<T1, T2> : MyTuple<T1> {
            public T2 Item2 { get; private set; }
            public MyTuple(T1 item1, T2 item2) : base(item1) { Item2 = item2; }
        }

        class MyTuple<T1, T2, TRest> : MyTuple<T1, T2> where TRest : IMyTuple {
            public TRest Rest { get; private set; }
            public MyTuple(T1 item1, T2 item2, TRest rest) : base(item1, item2) { Rest = rest; }
        }

        ...

        var mytuple = new MyTuple<int, int, MyTuple<int>>(1, 1, new MyTuple<int>(1)); // legal
        var mytuple2 = new MyTuple<int, int, int>(1, 2, 3); // illegal

    Read the article

  • map operator [] operands

    - by Jamie Cook
    Hi all, I have the following in a member function:

        int tt = 6;
        vector<set<int>>& temp = m_egressCandidatesByDestAndOtMode[tt];
        set<int>& egressCandidateStops = temp.at(dest);

    and the following declaration of a member variable:

        map<int, vector<set<int>>> m_egressCandidatesByDestAndOtMode;

    However, I get an error when compiling (Intel Compiler 11.0):

        1>C:\projects\svn\bdk\Source\ZenithAssignment\src\Iteration\PtBranchAndBoundIterationOriginRunner.cpp(85): error: no operator "[]" matches these operands
        1>      operand types are: const std::map<int, std::vector<std::set<int, std::less<int>, std::allocator<int>>, std::allocator<std::set<int, std::less<int>, std::allocator<int>>>>, std::less<int>, std::allocator<std::pair<const int, std::vector<std::set<int, std::less<int>, std::allocator<int>>, std::allocator<std::set<int, std::less<int>, std::allocator<int>>>>>>> [ const int ]
        1>      vector<set<int>>& temp = m_egressCandidatesByDestAndOtMode[tt];
        1>                               ^

    I know it's got to be something silly, but I can't see what I've done wrong.
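    The const in the reported operand type is the giveaway (hedged reading): the member function is presumably declared const, making the map const in that context, and std::map::operator[] has no const overload because it may insert. A sketch using find() instead:

        #include <map>
        #include <set>
        #include <vector>
        #include <stdexcept>

        typedef std::map<int, std::vector<std::set<int> > > CandidateMap;

        // const-safe lookup: find() works on a const map, operator[] does not
        const std::vector<std::set<int> >& lookup(const CandidateMap& m, int tt) {
            CandidateMap::const_iterator it = m.find(tt);
            if (it == m.end())
                throw std::out_of_range("no entry for key");
            return it->second;
        }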

    Read the article

  • Boost ASIO Headache

    - by bobber205
    Man... I thought using ASIO in Boost was going to be easy and intuitive. :P I am starting to get it finally, but I am having some trouble. Here's a snippet; I am getting several compiler errors on the async_accept line. What am I doing wrong? :P I've based my code off of this page: http://www.boost.org/doc/libs/1_43_0/doc/html/boost_asio/tutorial/tutdaytime3/src.html

        bool TestSocket::StartListening(int port)
        {
            bool didStart = false;
            if (!this->listening) {
                // try to listen
                acceptor = new tcp::acceptor(this->myService, tcp::endpoint(tcp::v4(), port));
                didStart = true; // probably change?

                tcp::socket* tempNewSocket = new tcp::socket(this->myService);
                acceptor->async_accept(tempNewSocket,
                    boost::bind(&AlexSocket::NewConnection, this, tempNewSocket,
                                boost::asio::placeholders::error));
            }
            else // already started!
                return false;

            this->listening = didStart;
            return didStart;
        }

        void TestSocket::NewConnection(tcp::socket* s, const boost::system::error_code& error)
        {
        }
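    Two likely culprits (hedged guesses from the snippet): async_accept takes the socket by reference, not as a pointer, and the bind names AlexSocket:: where the class is TestSocket. A compiling sketch of the accept half, with member names assumed:

        #include <boost/asio.hpp>
        #include <boost/bind.hpp>
        using boost::asio::ip::tcp;

        class TestSocket {
        public:
            TestSocket() : acceptor_(io_), listening_(false) {}

            bool StartListening(int port) {
                if (listening_) return false;
                acceptor_.open(tcp::v4());
                acceptor_.bind(tcp::endpoint(tcp::v4(), port));
                acceptor_.listen();
                tcp::socket* s = new tcp::socket(io_);
                acceptor_.async_accept(*s,                        // reference, not pointer
                    boost::bind(&TestSocket::NewConnection, this, // class name must match
                                s, boost::asio::placeholders::error));
                listening_ = true;
                return true;
            }

            void NewConnection(tcp::socket* s, const boost::system::error_code& ec) {
                // handle the connection; typically schedule another async_accept here
            }

        private:
            boost::asio::io_service io_;
            tcp::acceptor acceptor_;
            bool listening_;
        };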

    Read the article

  • erlide, which eclipse/which packages?

    - by KevinDTimm
    I have downloaded Eclipse 3.4 (Java version) for Mac OS X (Carbon). I have tried to 'update' to erlide, but see many (duplicated) options (many erlide options, options that say 'only for erl SDK updates', etc.). Sometimes I get 403 errors when attempting to access http://erlide.org/update and http://erlide.sourceforge.net/update. Finally, when I get some set of options installed, I either get errors like:

        Loading of /Users/kevindtimm/Documents/eclipse-java-ganymede-SR2-macosx-carbon/eclipse/plugins/org.erlide.kernel.common_0.8.1.201005250801/ebin/erlide_kernel_common.beam failed: badfile
        (hello_world@ktmac)1>
        =ERROR REPORT==== 24-Nov-2010::19:17:32 ===
        beam/beam_load.c(1768): Error loading function erlide_kernel_common:monitor/0: op put_string u u x:
          please re-compile this module with an R14B compiler

    or, when I've done different installations of erlide, I get no response in the console to hello:hello(). Does anybody have a good reference for how to load this plug-in and which items I should install?

        -module(hello).
        -export([hello/0]).

        hello() ->
            io:write("Hello World\n").

    [edit] I have installed Eclipse 3.6 (C++) as requested below, and the following code still can't find hello:hello().

        %% file_comment
        -module(hello).

        %% Include files

        %% Exported Functions
        -export([hello/0]).

        %% API Functions

        %% Local Functions

        hello() ->
            io:write("Hello World\n").

    [/edit]

    Read the article

  • Mutual class instances in C++

    - by SepiDev
    Hi guys. What is the issue with this code? Here we have two files: classA.h and classB.h.

    classA.h:

        #ifndef _class_a_h_
        #define _class_a_h_

        #include "classB.h"

        class B; //????

        class A {
        public:
            A() {
                ptr_b = new B(); //????
            }
            virtual ~A() {
                if (ptr_b)
                    delete ptr_b; //????
                num_a = 0;
            }
            int num_a;
            B* ptr_b; //????
        };

        #endif //_class_a_h_

    classB.h:

        #ifndef _class_b_h_
        #define _class_b_h_

        #include "classA.h"

        class A; //????

        class B {
        public:
            B() {
                ptr_a = new A(); //????
                num_b = 0;
            }
            virtual ~B() {
                if (ptr_a)
                    delete ptr_a; //????
            }
            int num_b;
            A* ptr_a; //????
        };

        #endif //_class_b_h_

    When I try to compile it, the compiler (g++) says:

        classB.h: In constructor 'B::B()':
        classB.h:12: error: invalid use of incomplete type 'struct A'
        classB.h:6: error: forward declaration of 'struct A'
        classB.h: In destructor 'virtual B::~B()':
        classB.h:16: warning: possible problem detected in invocation of delete operator:
        classB.h:16: warning: invalid use of incomplete type 'struct A'
        classB.h:6: warning: forward declaration of 'struct A'
        classB.h:16: note: neither the destructor nor the class-specific operator delete will be called, even if they are declared when the class is defined.
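    Separating the two problems may help (a sketch, not the only fix): the compile error exists because new A() needs A's complete definition, which a header-only cycle cannot provide; moving constructor and destructor bodies into .cpp files breaks the cycle. Note also that even a compiling version would recurse forever, since A's constructor news a B whose constructor news an A:

        // classA.h
        #ifndef CLASS_A_H
        #define CLASS_A_H
        class B;              // forward declaration suffices for a pointer member
        class A {
        public:
            A();
            virtual ~A();
            int num_a;
            B* ptr_b;
        };
        #endif

        // classA.cpp
        #include "classA.h"
        #include "classB.h"   // B's complete type is available here
        A::A() : num_a(0), ptr_b(0) {}  // do NOT new B() here if B's ctor news an A:
                                        // the pair would recurse until memory runs out
        A::~A() { delete ptr_b; }       // deleting a complete type; null delete is safe

        // classB.h / classB.cpp mirror the same structure.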

    Read the article

  • Derived interface from generic method

    - by Sunit
    I'm trying to do this:

        public interface IVirtualInterface { }

        public interface IFabricationInfo : IVirtualInterface
        {
            int Type { get; set; }
            int Requirement { get; set; }
        }

        public interface ICoatingInfo : IVirtualInterface
        {
            int Type { get; set; }
            int Requirement { get; set; }
        }

        public class FabInfo : IFabricationInfo
        {
            public int Requirement { get { return 1; } set { } }
            public int Type { get { return 1; } set { } }
        }

        public class CoatInfo : ICoatingInfo
        {
            public int Type { get { return 1; } set { } }
            public int Requirement { get { return 1; } set { } }
        }

        public class BusinessObj
        {
            public T VirtualInterface<T>() where T : IVirtualInterface
            {
                Type targetInterface = typeof(T);
                if (targetInterface.IsAssignableFrom(typeof(IFabricationInfo)))
                {
                    var oFI = new FabInfo();
                    return (T)oFI;
                }
                if (targetInterface.IsAssignableFrom(typeof(ICoatingInfo)))
                {
                    var oCI = new CoatInfo();
                    return (T)oCI;
                }
                return default(T);
            }
        }

    But I'm getting a compiler error: "Cannot convert type 'GenericIntf.FabInfo' to T". How do I fix this? Thanks, Sunit

    Read the article

  • what is meant by normalization in huge pointers

    - by wrapperm
    Hi, I have a lot of confusion in understanding the difference between a "far" pointer and a "huge" pointer. I searched for it all over in Google and could not find a solution. Can anyone explain the difference between the two? Also, what is the exact normalization concept related to huge pointers? Please do not give me the following or any similar answer:

    "The only difference between a far pointer and a huge pointer is that a huge pointer is normalized by the compiler. A normalized pointer is one that has as much of the address as possible in the segment, meaning that the offset is never larger than 15. A huge pointer is normalized only when pointer arithmetic is performed on it. It is not normalized when an assignment is made. You can cause it to be normalized without changing the value by incrementing and then decrementing it. The offset must be less than 16 because the segment can represent any value greater than or equal to 16 (e.g. absolute address 0x17 in normalized form is 0001:0007. While a far pointer could address the absolute address 0x17 with 0000:0017, this is not a valid huge (normalized) pointer because the offset is greater than 0x000F.). Huge pointers can also be incremented and decremented using arithmetic operators, but since they are normalized they will not wrap like far pointers."

    Here the normalization concept is not very well explained, or maybe I'm unable to understand it very well. Can anyone try explaining this concept from a beginner's point of view? Thanks, Rahamath
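    The arithmetic behind normalization in one worked example (real-mode x86; the values are illustrative): a 20-bit linear address equals segment * 16 + offset, so many segment:offset pairs alias the same byte, and normalizing picks the unique pair whose offset is below 16:

        #include <cstdio>

        typedef unsigned short u16;
        typedef unsigned long  u32;

        // segment:offset -> the 20-bit linear address it names
        u32 to_linear(u16 seg, u16 off) { return ((u32)seg << 4) + off; }

        // Normalize: push as much of the address as possible into the segment,
        // leaving an offset in the range [0, 15].
        void normalize(u32 linear, u16& seg, u16& off) {
            seg = (u16)(linear >> 4);
            off = (u16)(linear & 0xF);
        }

        int main() {
            // Two different far pointers naming the same byte (linear 0x17):
            std::printf("%05lX %05lX\n",
                        to_linear(0x0000, 0x0017), to_linear(0x0001, 0x0007));

            u16 s, o;
            normalize(0x17, s, o);
            std::printf("normalized: %04X:%04X\n", (unsigned)s, (unsigned)o); // 0001:0007, the unique form
            return 0;
        }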

    Read the article

  • Can't set Visible attribute in ASP.NET Panels

    - by RichW
    I am having trouble with the Visible attribute of an ASP.NET Panel control. I have a page that queries a database table and returns the results in a datagrid.

    Requirements: if some of the returned values are null, I need to hide the image that's next to it. I am using a Panel to determine whether to hide or show the image, but am having trouble with the statement:

        visible='<%# Eval("addr1") <> DBNull.Value %>'

    I have tried these as well:

        visible='<%# Eval("addr1") <> DBNull.Value %>'
        visible='<%# IIf(Eval("addr1") Is DbNull.Value, "False","True") %>'

    Code is below:

        <asp:TemplateField>
            <ItemTemplate>
                <%# Eval("Name")%>
                <p>
                <asp:Panel runat="server" ID="Panel1" visible='<%# Eval("addr1") <> DBNull.Value %>'>
                    <asp:Image Id="imgHouse" runat="server" AlternateText="Address" SkinId="imgHouse"/>
                </asp:Panel>
                <%# Eval("addr1") %><p>
            </ItemTemplate>
        </asp:TemplateField>

    What am I doing wrong?

    Edit: If I use

        visible='<%# IIf(Eval("addr1") Is DbNull.Value, "False","True") %>'

    I get the following error:

        Compiler Error Message: CS1026: ) expected

    Read the article

  • Why do C# containers and GUI classes use int and not uint for size-related members?

    - by smerlin
    I usually program in C++, but for school I have to do a project in C#. So I went ahead and coded like I was used to in C++, but was surprised when the compiler complained about code like the following:

        const uint size = 10;
        ArrayList myarray = new ArrayList(size); // Arg 1: cannot convert from 'uint' to 'int'

    OK, they expect int as the argument type, but why? I would feel much more comfortable with uint as the argument type, because uint fits much better in this case. Why do they use int as the argument type pretty much everywhere in the .NET library, even though for many cases negative numbers don't make any sense (since no container nor GUI element can have a negative size)? If the reason they used int is that they didn't expect the average user to care about signedness, why didn't they additionally add overloads for uint? Is this just MS not caring about sign correctness, or are there cases where negative values make some sense or carry some information (an error code?) for container/GUI widget sizes?

    Read the article

  • how to store a file handle in perl class

    - by Haiyuan Zhang
    Please look at the following code first:

        #!/usr/bin/perl
        package foo;

        sub new {
            my $pkg = shift;
            my $self = {};
            my $self->{_fd} = undef;
            bless $self, $pkg;
            return $self;
        }

        sub Setfd {
            my $self = shift;
            my $fd = shift;
            $self_->{_fd} = $fd;
        }

        sub write {
            my $self = shift;
            print $self->{_fd} "hello word";
        }

        my $foo = new foo;

    My intention is to store a file handle within a class using a hash. The file handle is undefined at first, but can be initialized afterwards by calling the Setfd function. Then write can be called to actually write the string "hello word" to the file indicated by the file handle, supposing that the file handle is the result of a successful open. But the Perl compiler just complains that there is a syntax error in the "print" line. Can anyone tell me what's wrong here? Thanks in advance.

    Read the article

  • Keeping sync in multiplayer RTS game that uses floating point arithmetic

    - by Calmarius
    I'm writing a 2D space RTS game in C#. Single player works. Now I want to add some multiplayer functionality. I googled for it, and it seems there is only one way to have thousands of units continuously moving without a powerful net connection: send only the commands through the network while running the same simulation on every player's machine.

    And now there is a problem: the entire engine uses doubles everywhere, and floating point calculations depend heavily on compiler optimizations and CPU architecture, so it is very hard to keep things synchronized. The game is not grid based at all, and it has a simple physics engine to move the space-ships (the ships have impulse and angular momentum...), so recoding the entire thing to use fixed point would be quite cumbersome (but probably the only solution). So I have two options so far:

        1. Say bye to the current code and restart from scratch using integers.
        2. Make the game LAN only, where there is enough bandwidth to have 8 players with thousands of units, sending the positions and orientations etc. in (almost) every frame...

    So I'm looking for better options, or even tips on migrating the code to fixed point without messing everything up...

    Read the article

  • ParseKit.framework won't work, Foundation.h not found

    - by Jeremy
    I'm really stumped trying to get ParseKit.framework (this) to work in general; I'm not even trying to implement it until it runs the demo app that comes with it. What happens is the compiler can't locate <Foundation/Foundation.h> or something, which I thought was in the linked framework. Exact error: "Lexical or Preprocessor Issue: 'Foundation/Foundation.h' file not found." Here's the code, just from the ParseKit_Prefix.pch:

        //
        // Prefix header for all source files of the 'ParseKit' target
        // in the 'ParseKit' project.
        //

        #ifdef __OBJC__
            #import <Foundation/Foundation.h>
        #endif

    Nothing unusual about it. Did I mess up the file paths somehow? I've reinstalled Xcode, re-downloaded ParseKit, and nothing is helping. The suggestions here did nothing, and it's not this. When I make a new project or use a different project, load Foundation.framework, and #import the header, it works just fine. If I unlink the framework, I can't find it to re-link again. Has anyone else had this kind of problem? Did I download it wrong somewhere? I have a very difficult time finding where exactly the Xcode UI links stuff (Apple must get a kick out of frustrating people), so if anyone has anything they can think of please give me some feedback; I'm horribly confused right now. Thanks,

    Read the article

  • Boost Date_Time problem compiling a simple program

    - by Andry
    Hello! I'm writing a very simple program using the Boost Date_Time library:

        int main(int argc, char** argv)
        {
            using namespace boost::posix_time;
            date d(2002, Feb, 1);                   // an arbitrary date
            ptime t1(d, hours(5) + nanosec(100));   // date + time-of-day offset
            ptime t2 = t1 - minutes(4) + seconds(2);
            ptime now = second_clock::local_time(); // use the clock
            date today = now.date();                // get the date part out of the time
        }

    Well, I cannot compile it; the compiler does not recognize a type. I have used many features of Boost libraries, like serialization and more. I built them correctly and, looking in my /usr/local/lib folder, I can see that libboost_date_time.so is there (a good sign, which means I was able to build that library). When I compile I write the following:

        g++ -lboost_date_time main.cpp

    But the errors it shows when I specify the lib are the same as those when I do not specify any lib. What is this? Anyone know? The error is:

        main.cpp: In function 'int main(int, char**)':
        main.cpp:9: error: 'date' was not declared in this scope
        main.cpp:9: error: expected ';' before 'd'
        main.cpp:10: error: 'd' was not declared in this scope
        main.cpp:10: error: 'nanosec' was not declared in this scope
        main.cpp:13: error: expected ';' before 'today'
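    A hedged note: those are compile-time name-lookup errors, so -lboost_date_time never even comes into play. date lives in boost::gregorian, not boost::posix_time, and needs its own header. A sketch under that assumption (millisec is used because nanosec requires the nanosecond-resolution build of the library):

        #include <boost/date_time/posix_time/posix_time.hpp>
        #include <boost/date_time/gregorian/gregorian.hpp>

        int main()
        {
            using namespace boost::posix_time;
            using namespace boost::gregorian;       // 'date' lives here

            date d(2002, Feb, 1);                   // an arbitrary date
            ptime t1(d, hours(5) + millisec(100));  // date + time-of-day offset
            ptime t2 = t1 - minutes(4) + seconds(2);
            ptime now = second_clock::local_time(); // use the clock
            date today = now.date();                // date part of the time
            return 0;
        }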

    Read the article

  • C++ iterators, default initialization and what to use as an uninitialized sentinel.

    - by Hassan Syed
    The Context: I have a custom template container class put together from a map and a vector. The map resolves a string to an ordinal, and the vector resolves an ordinal to the entry (only an initial string-to-ordinal lookup is done; future references go through the vector). The entries are modified intrusively to contain a bool "assigned" and an iterator_type, which is a const_iterator into the container class's map. My container class will use RCF's serialization code (which models boost::serialization) to serialize my container classes to nodes in a network. Serializing iterators is not possible, or a can of worms, and I can easily regenerate them once the vectors and maps are serialized on the remote site.

    The Question: I need to default-initialize the iterator, and be able to test that it has not been assigned to (if it is assigned it is valid; if not, it is invalid). Since map iterators are not invalidated by operations performed on the map (unless, of course, items are removed :D), am I to assume that map<x,y>::end() is a valid sentinel (regardless of the state of the map; i.e., it could be empty) to initialize to? I will always have access to the parent map; I'm just unsure whether end() stays the same as the map's contents change. I don't want to use another level of indirection (i.e., boost::optional) to achieve my goal; I'd rather forgo compiler checks for correct logic, but it would be nice if I didn't need to.

    Misc: This question exists, but most of its content seems nonsense. Assigning NULL to an iterator is invalid according to g++ and clang++. This is another similar question, but it focuses on the common use-case of iterators, which generally tends to be using the iterator to iterate; of course, in this use-case the state of the container isn't meant to change while iteration is going on.
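    A sketch of the sentinel pattern under that assumption (std::map's end() is not invalidated by inserting or erasing other elements, so comparing a stored iterator against the owning map's end() stays meaningful; the names here are illustrative):

        #include <cassert>
        #include <map>
        #include <string>

        class Registry {
        public:
            typedef std::map<std::string, int>::const_iterator Iter;

            Registry() : entry_(map_.end()) {} // end() of the owning map as "unassigned"

            void assign(const std::string& key, int ordinal) {
                entry_ = map_.insert(std::make_pair(key, ordinal)).first;
            }

            // end() survives inserts/erases of other keys, so this comparison
            // stays valid for the lifetime of map_.
            bool assigned() const { return entry_ != map_.end(); }

        private:
            std::map<std::string, int> map_;
            Iter entry_;
        };

        int main() {
            Registry r;
            assert(!r.assigned());
            r.assign("alpha", 0);
            assert(r.assigned());
            return 0;
        }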

    Read the article

  • C++ casted realloc causing memory leak

    - by wyatt
    I'm using a function I found here to save a webpage to memory with cURL:

        struct WebpageData {
            char *pageData;
            size_t size;
        };

        size_t storePage(void *input, size_t size, size_t nmemb, void *output) {
            size_t realsize = size * nmemb;
            struct WebpageData *page = (struct WebpageData *)output;
            page->pageData = (char *)realloc(page->pageData, page->size + realsize + 1);
            if (page->pageData) {
                memcpy(&(page->pageData[page->size]), input, realsize);
                page->size += realsize;
                page->pageData[page->size] = 0;
            }
            return realsize;
        }

    and find that the line:

        page->pageData = (char *)realloc(page->pageData, page->size + realsize + 1);

    is causing a memory leak of a few hundred bytes per call. The only real change I've made from the original source is casting the line in question to (char *), which my compiler insisted upon (gcc, g++ specifically if it's a C/C++ issue, but gcc also wouldn't compile the uncast statement), so I assume this is the source of the leak. Can anyone elucidate? Thanks
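    Two hedged observations: the cast itself only placates the C++ compiler and cannot leak, and the classic suspects with this pattern are (a) realloc failing and orphaning the old block because its only pointer was overwritten, and (b) the final buffer never being freed once the transfer completes. A leak-safer variant of the same callback:

        #include <cstdlib>
        #include <cstring>

        struct WebpageData {
            char *pageData;
            size_t size;
        };

        size_t storePage(void *input, size_t size, size_t nmemb, void *output) {
            size_t realsize = size * nmemb;
            WebpageData *page = static_cast<WebpageData *>(output);

            // Keep the old pointer until realloc is known to have succeeded,
            // so a failure does not orphan the existing buffer.
            char *grown = static_cast<char *>(
                std::realloc(page->pageData, page->size + realsize + 1));
            if (!grown)
                return 0; // signal failure to libcurl; old buffer is still owned
            page->pageData = grown;

            std::memcpy(page->pageData + page->size, input, realsize);
            page->size += realsize;
            page->pageData[page->size] = '\0';
            return realsize;
        }

        // The caller must std::free(page.pageData) when finished with it;
        // forgetting that is exactly a few-hundred-bytes-per-call leak.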

    Read the article

  • How do actually castings work at the CLR level?

    - by devoured elysium
    When doing an upcast or downcast, what really happens behind the scenes? I had the idea that when doing something like:

        string myString = "abc";
        object myObject = myString;
        string myStringBack = (string)myObject;

    the cast in the last line would have as its only purpose telling the compiler we are safe and not doing anything wrong. So I had the idea that no casting code would actually be embedded in the compiled output. It seems I was wrong:

        .maxstack 1
        .locals init (
            [0] string myString,
            [1] object myObject,
            [2] string myStringBack)
        L_0000: nop
        L_0001: ldstr "abc"
        L_0006: stloc.0
        L_0007: ldloc.0
        L_0008: stloc.1
        L_0009: ldloc.1
        L_000a: castclass string
        L_000f: stloc.2
        L_0010: ret

    Why does the CLR need something like castclass string? There are two possible implementations for a downcast:

        1. You require a castclass something. When you get to the line of code that does a castclass, the CLR tries to make the cast. But then, what would happen had I omitted the castclass string line and tried to run the code?
        2. You don't require a castclass. As all reference types have a similar internal structure, if you try to use a string on a Form instance, it will throw an exception of wrong usage (because it detects a Form is not a string or any of its subtypes).

    Also, is the following statement from C# 4.0 in a Nutshell correct? "Upcasting and downcasting between compatible reference types performs reference conversions: a new reference is created that points to the same object." Does it really create a new reference? I thought it'd be the same reference, only stored in a different type of variable. Thanks

    Read the article

  • Trying to make a plugin system in C++/Qt

    - by Pirate for Profit
    I'm making a task-based program that needs to have plugins. Tasks need to have properties which can be easily edited; I think this can be done with Qt's Meta-Object Compiler reflection capabilities (I could be wrong, but I should be able to stick this in a QtPropertyBrowser?). So here's the base:

        class Task : public QObject
        {
            Q_OBJECT
        public:
            explicit Task(QObject *parent = 0) : QObject(parent) {}
            virtual void run() = 0;
        signals:
            void taskFinished(bool success = true);
        };

    Then a plugin might have this task:

        class PrinterTask : public Task
        {
            Q_OBJECT
        public:
            explicit PrinterTask(QObject *parent = 0) : Task(parent) {}

            void run()
            {
                Printer::getInstance()->Print(this->getData()); // fictional
                emit taskFinished(true);
            }

            inline const QString &getData() const;
            inline void setData(QString data);

            Q_PROPERTY(QString data READ getData WRITE setData) // for reflection
        };

    In a nutshell, here's what I want to do:

        // load plugin
        // find all the Task interface implementations in it
        // have user able to choose a Task and edit its specific Q_PROPERTYs
        // run the Task

    It's important that one .dll has multiple tasks, because I want them to be associated by their module. For instance, "FileTasks.dll" could have tasks for deleting files, making files, etc. The only problem with Qt's plugin setup is that I want to store X amount of Tasks in one .dll module. As far as I can tell, you can only load one interface per plugin (I could be wrong?). If so, the only possible way to accomplish what I want is to create a FactoryInterface with string-based keys which return the objects (as in Qt's Plug-And-Paint example), which is terrible boilerplate that I would like to avoid. Does anyone know a cleaner C++ plugin architecture than Qt's to do what I want? Also, am I safely assuming Qt's reflection capabilities will do what I want (i.e. being able to edit an unknown, dynamically loaded task's properties with the QtPropertyBrowser before dispatching)?
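    One way to keep a single plugin root while avoiding per-task string keys (a sketch against Qt 4-era APIs; TaskModule and its method names are hypothetical): let the module report the QMetaObjects of its task types and instantiate them through QMetaObject::newInstance, which also gives the property browser what it needs:

        #include <QtCore/QObject>
        #include <QtCore/QMetaObject>
        #include <QtCore/QList>
        #include <QtCore/QString>
        #include <QtCore/QtPlugin>

        class Task; // the Task base from the post

        // Hypothetical module interface: the plugin's single root object
        // implements this once and reports every task type it contains.
        class TaskModule {
        public:
            virtual ~TaskModule() {}
            virtual QString moduleName() const = 0;
            virtual QList<const QMetaObject*> tasks() const = 0;
        };
        Q_DECLARE_INTERFACE(TaskModule, "com.example.TaskModule/1.0")

        // Inside a "FileTasks" plugin (illustrative):
        //   QList<const QMetaObject*> FileTasksModule::tasks() const {
        //       return QList<const QMetaObject*>()
        //           << &DeleteFileTask::staticMetaObject
        //           << &MakeFileTask::staticMetaObject;
        //   }
        //
        // Host side, after QPluginLoader::instance() and qobject_cast<TaskModule*>:
        //   const QMetaObject* meta = module->tasks().at(userChoice);
        //   Task* t = qobject_cast<Task*>(meta->newInstance()); // ctor must be Q_INVOKABLE
        //   // t->metaObject()->propertyCount() / property(i) feed the property browser
        //   t->run();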

    Read the article

  • Install h5py in Mac OS X 10.6.3

    - by zyq524
    I'm trying to install h5py on Mac OS X 10.6.3. First I installed HDF5 1.8, using the following commands:

        ./configure \
            --prefix=/Library/Frameworks/Python.framework/Versions/Current \
            --enable-shared \
            --enable-production \
            --enable-threadsafe \
            CPPFLAGS=-I/Library/Frameworks/Python.framework/Versions/Current/include \
            LDFLAGS=-L/Library/Frameworks/Python.framework/Versions/Current/lib
        make
        make check
        sudo make install

    Then I installed h5py:

        /Library/Frameworks/Python.framework/Versions/Current/bin/python \
            setup.py \
            build \
            --api=18 \
            --hdf5=/Library/Frameworks/Python.framework/Versions/Current

    Then I got the errors:

        Configure: Autodetecting HDF5 settings...
        Custom HDF5 dir: /Library/Frameworks/Python.framework/Versions/Current
        Custom API level: (1, 8)
        ld: warning: in detect/vers.o, file was built for unsupported file format which is not the architecture being linked (i386)
        ld: warning: in /Library/Frameworks/Python.framework/Versions/Current/lib/libhdf5.dylib, file was built for unsupported file format which is not the architecture being linked (i386)
        Undefined symbols:
          "_main", referenced from:
              start in crt1.10.5.o
        ld: symbol(s) not found
        collect2: ld returned 1 exit status
        Failed to compile HDF5 test program.  Please check to make sure:
        * You have a C compiler installed
        * A development version of Python is installed (including header files)
        * A development version of HDF5 is installed (including header files)
        * If HDF5 is not in a default location, supply the argument --hdf5=<path>
        error: command 'cc' failed with exit status 1

    I just updated my Xcode; I don't know whether this is because of my gcc's default settings. If so, how can I get rid of this error? Thanks.

    Read the article

  • Opa app does not load in Internet Explorer when compiled with Opa 1.1.1

    - by Marcin Skórzewski
    I made a minor update to an already working application and then had problems using the new version of the Opa compiler.

    First problem: runtime exception. Since the original deployment, Opa 1.1.1 has been released, and it resulted in this error:

        events.js:72
            throw er; // Unhandled 'error' event
                  ^
        Error: listen EADDRINUSE
            at errnoException (net.js:901:11)
            at Server._listen2 (net.js:1039:14)
            at listen (net.js:1061:10)
            at Server.listen (net.js:1127:5)
            at global.BslNet_Http_server_init_server (/opt/mlstate/lib/opa/stdlib/server.opp/serverNodeJsPackage.js:223:1405)
            at global.BslNet_Http_server_init_server_cps (/opt/mlstate/lib/opa/stdlib/server.opp/serverNodeJsPackage.js:226:15)
            at __v1_bslnet_http_server_init_server_cps_b970f080 (/opt/mlstate/lib/opa/stdlib/stdlib.qmljs/stdlib.core.web.server.opx/main.js:1:175)
            at /opt/mlstate/lib/opa/stdlib/stdlib.qmljs/stdlib.core.web.server.opx/main.js:440:106
            at global.execute_ (/opt/mlstate/lib/opa/static/opa-js-runtime-cps/main.js:19:49)
            at /opt/mlstate/lib/opa/static/opa-js-runtime-cps/main.js:17:78

    I decided to build Opa from sources, and that helped, but another problem occurred :(

    Second problem: the application stopped working in Internet Explorer. I tried two different machines (Windows XP and 7) with IE 8 and 10. The web page does not load at all (it looks like a network problem, but the same URL works fine in Firefox). I confirmed the same problem with "Hello world" from the Opa tutorial, compiled with both the stable Opa 1.1.1 and the build from sources. I suspected the problem was due to the Node.js update (Opa >= 1.1.1 requires Node 0.10.*; I am now using 0.10.12, but I also tried other 0.10 versions), but "Hello world" from Node's front page works fine. I am running the app on an OS X developer box and a Debian 7.0 server.

    Any suggestions as to what I am doing wrong?

    PS. I was away from the project for a while. Does anyone know what happened to the Opa forum? Signing in seems not to work.

    Read the article
