Search Results

Search found 1221 results on 49 pages for 'argv'.

Page 36/49

  • Compiling Objective-C project on Linux (Ubuntu)

    - by Alex
    How to make an Objective-C project work on Ubuntu? My files are: Fraction.h #import <Foundation/NSObject.h> @interface Fraction: NSObject { int numerator; int denominator; } -(void) print; -(void) setNumerator: (int) n; -(void) setDenominator: (int) d; -(int) numerator; -(int) denominator; @end Fraction.m #import "Fraction.h" #import <stdio.h> @implementation Fraction -(void) print { printf( "%i/%i", numerator, denominator ); } -(void) setNumerator: (int) n { numerator = n; } -(void) setDenominator: (int) d { denominator = d; } -(int) denominator { return denominator; } -(int) numerator { return numerator; } @end main.m #import <stdio.h> #import "Fraction.h" int main( int argc, const char *argv[] ) { // create a new instance Fraction *frac = [[Fraction alloc] init]; // set the values [frac setNumerator: 1]; [frac setDenominator: 3]; // print it printf( "The fraction is: " ); [frac print]; printf( "\n" ); // free memory [frac release]; return 0; } I've tried two approaches to compile it: Pure gcc: $ sudo apt-get install gobjc gnustep gnustep-devel $ gcc `gnustep-config --objc-flags` -o main main.m -lobjc -lgnustep-base /tmp/ccIQKhfH.o:(.data.rel+0x0): undefined reference to `__objc_class_name_Fraction' I created a GNUmakefile: include ${GNUSTEP_MAKEFILES}/common.make TOOL_NAME = main main_OBJC_FILES = main.m include ${GNUSTEP_MAKEFILES}/tool.make ... and ran: $ source /usr/share/GNUstep/Makefiles/GNUstep.sh $ make Making all for tool main... Linking tool main ... ./obj/main.o:(.data.rel+0x0): undefined reference to `__objc_class_name_Fraction' So in both cases the build fails with an undefined reference to `__objc_class_name_Fraction'. Do you have any idea how to resolve this issue?
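    A likely fix, sketched only from the commands shown above: the undefined __objc_class_name_Fraction symbol usually means Fraction.m was never compiled or linked, so it needs to be passed to gcc (or listed in the GNUmakefile) alongside main.m:

      $ gcc `gnustep-config --objc-flags` -o main main.m Fraction.m -lobjc -lgnustep-base

      # or, for the GNUstep make approach, list both sources in the GNUmakefile:
      TOOL_NAME = main
      main_OBJC_FILES = main.m Fraction.m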

    Read the article

  • boost::asio::async_resolve Problem

    - by Moo-Juice
    Hi All, I'm in the process of constructing a Socket class that uses boost::asio. To start with, I made a connect method that took a host and a port and resolved it to an IP address. This worked well, so I decided to look in to async_resolve. However, my callback always gets an error code of 995 (using the same destination host/port as when it worked synchronously). code: Function that starts the resolution: // resolve a host asynchronously template<typename ResolveHandler> void resolveHost(const String& _host, Port _port, ResolveHandler _handler) const { boost::asio::ip::tcp::endpoint ret; boost::asio::ip::tcp::resolver::query query(_host, boost::lexical_cast<std::string>(_port)); boost::asio::ip::tcp::resolver r(m_IOService); r.async_resolve(query, _handler); }; // eo resolveHost Code that calls this function: void Socket::connect(const String& _host, Port _port) { // Anon function for resolution of the host-name and asynchronous calling of the above auto anonResolve = [this](const boost::system::error_code& _errorCode, boost::asio::ip::tcp::resolver_iterator _epIt) { // raise event onResolve.raise(SocketResolveEventArgs(*this, !_errorCode ? (*_epIt).host_name() : String(""), _errorCode)); // perform connect, calling back to anonymous function if(!_errorCode) connect(*_epIt); }; // Resolve the host calling back to anonymous function Root::instance().resolveHost(_host, _port, anonResolve); }; // eo connect The message() function of the error_code is: The I/O operation has been aborted because of either a thread exit or an application request And my main.cpp looks like this: int _tmain(int argc, _TCHAR* argv[]) { morse::Root root; TextSocket s; s.connect("somehost.com", 1234); while(true) { root.performIO(); // calls io_service::run_one() } return 0; } Thanks in advance!
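    A hedged observation, not a verified diagnosis: error 995 is boost::asio::error::operation_aborted, which is what a pending async_resolve reports when its resolver is destroyed, and the resolver r above is a local that dies as soon as resolveHost returns. A sketch (the class name is illustrative) that keeps the resolver alive for the duration of the operation:

      #include <boost/asio.hpp>
      #include <boost/lexical_cast.hpp>
      #include <string>

      class HostResolver
      {
      public:
          explicit HostResolver(boost::asio::io_service& io) : m_resolver(io) {}

          // resolve a host asynchronously; the resolver member outlives this call
          template<typename ResolveHandler>
          void resolveHost(const std::string& host, unsigned short port, ResolveHandler handler)
          {
              boost::asio::ip::tcp::resolver::query query(
                  host, boost::lexical_cast<std::string>(port));
              m_resolver.async_resolve(query, handler);
          }

      private:
          boost::asio::ip::tcp::resolver m_resolver;  // not destroyed until its owner is
      };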

    Read the article

  • How to pass a function in a function?

    - by SoulBeaver
    That's an odd title. I would greatly appreciate it if somebody could clarify what exactly I'm asking because I'm not so sure myself. I'm watching the Stanford videos on Programming Paradigms(that teacher is awesome) and I'm up to video five when he started doing this: void *lSearch( void* key, void* base, int elemSize, int n, int (*cmpFn)(void*, void*)) Naturally, I thought to myself, "Oi, I didn't know you could declare a function and define it later!". So I created my own C++ test version. int foo(int (*bar)(void*, void*)); int bar(void* a, void* b); int main(int argc, char** argv) { int *func = 0; foo(bar); cin.get(); return 0; } int foo(int (*bar)(void*, void*)) { int c(10), d(15); int *a = &c; int *b = &d; bar(a, b); return 0; } int bar(void* a, void* b) { cout << "Why hello there." << endl; return 0; } The question about the code is this: it fails if I declare function int *bar as a parameter of foo, but not int (*bar). Why!? Also, the video confuses me in the fact that his lSearch definition void* lSearch( /*params*/ , int (*cmpFn)(void*, void*)) is calling cmpFn in the definition, but when calling the lSearch function lSearch( /*params*/, intCmp ); also calls the defined function int intCmp(void* elem1, void* elem2); and I don't get how that works. Why, in lSearch, is the function called cmpFn, but defined as intCmp, which is of type int, not int* and still works? And why does the function in lSearch not have to have defined parameters?
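    A hedged note on the first question: int *bar declares a plain pointer to int, while int (*bar)(void*, void*) declares a pointer to a function; the parentheses are what bind the * to the name. The parameter name (cmpFn) and the function actually passed (intCmp) can differ because the parameter is just a local name for whatever function pointer the caller supplies. A small sketch with a typedef (names are illustrative):

      #include <iostream>

      // a typedef makes the "pointer to function" syntax easier to read
      typedef int (*CmpFn)(void*, void*);  // pointer to a function taking two void*, returning int

      int intCmp(void* a, void* b)
      {
          return *static_cast<int*>(a) - *static_cast<int*>(b);
      }

      // foo accepts any function with that signature; "cmp" is only foo's local name for it
      int foo(CmpFn cmp)
      {
          int x = 10, y = 15;
          return cmp(&x, &y);              // calls whatever function was passed in
      }

      int main()
      {
          std::cout << foo(intCmp) << std::endl;  // intCmp decays to a function pointer here
          return 0;
      }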

    Read the article

  • Incompatible types when assigning to type 'struct compartido'

    - by user1660559
    I have one problem with this code. I should create one structure and share it across 5 new processes created from the parent: #include <stdio.h> #include <stdlib.h> #include <sys/wait.h> #include <unistd.h> #include <sys/types.h> #include <sys/ipc.h> #include <sys/shm.h> #include <sys/sem.h> #include <time.h> struct compartido { int pid1, pid2, pid3, pid4, pid5; int propietario; int contador; int pidpadre; }; struct compartido var; int main(int argc, char *argv[]) { key_t llave1,llavesem; int idmem,idsem; llave1=ftok("/tmp",'a'); idmem=shmget(llave1,sizeof(int),IPC_CREAT|0600); if (idmem==-1) { perror ("shmget"); return 1; } var=shmat(idmem,0,0); /*This line is giving the error*/ /*rest of the code*/ } The exact error it gives is: error: incompatible types when assigning to type 'struct compartido' from type 'void *' I need to put this structure in the shared variable so that all 6 processes (5 children and the parent) can see and modify that data. What am I doing wrong? Thanks in advance and best regards,
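    A likely fix, shown as an untested sketch built from the code above: shmat returns a void*, so the result belongs in a pointer to the struct rather than in a struct object, and the segment has to be sized for the whole struct rather than for an int:

      #include <stdio.h>
      #include <sys/types.h>
      #include <sys/ipc.h>
      #include <sys/shm.h>

      struct compartido {
          int pid1, pid2, pid3, pid4, pid5;
          int propietario;
          int contador;
          int pidpadre;
      };

      int main(void)
      {
          key_t llave1 = ftok("/tmp", 'a');
          /* the segment must hold the whole struct, not just an int */
          int idmem = shmget(llave1, sizeof(struct compartido), IPC_CREAT | 0600);
          if (idmem == -1) { perror("shmget"); return 1; }

          /* shmat returns void*: keep it in a pointer, not in a struct object */
          struct compartido *var = (struct compartido *) shmat(idmem, NULL, 0);
          if (var == (void *) -1) { perror("shmat"); return 1; }

          var->contador = 0;  /* every process that attaches sees these fields */
          return 0;
      }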

    Read the article

  • Mac OS X and static boost libs -> std::string fail

    - by Ionic
    Hi all, I'm experiencing some very weird problems with static boost libraries under Mac OS X 10.6.6. The error message is main(78485) malloc: *** error for object 0x1000e0b20: pointer being freed was not allocated *** set a breakpoint in malloc_error_break to debug [1] 78485 abort (core dumped) and a tiny bit of example code which will trigger this problem: #define BOOST_FILESYSTEM_VERSION 3 #include <boost/filesystem.hpp> #include <iostream> int main (int argc, char **argv) { std::cout << boost::filesystem::current_path ().string () << '\n'; } This problem always occurs when linking the static boost libraries into the binary. Linking dynamically will work fine, though. I've seen various reports for quite a similar OS X bug with GCC 4.2 and the _GLIBCXX_DEBUG macro set, but this one seems even more generic, as I'm neither using XCode, nor setting the macro (even undefining it does not help. I tried it just to make sure it's really not related to this problem.) Does anybody have any pointers to why this is happening or even maybe a solution (rather than using the dynamic library workaround)? Best regards, Mihai

    Read the article

  • How can you get the call tree with python profilers?

    - by Oliver
    I used to use a nice Apple profiler that is built into the System Monitor application. As long as your C++ code was compiled with debug information, you could sample your running application and it would print out an indented tree telling you what percent of the parent function's time was spent in this function (and the body vs. other function calls). For instance, if main called function_1 and function_2, function_2 calls function_3, and then main calls function_3: main (100%, 1% in function body): function_1 (9%, 9% in function body): function_2 (90%, 85% in function body): function_3 (100%, 100% in function body) function_3 (1%, 1% in function body) I would see this and think, "Something is taking a long time in the code in the body of function_2. If I want my program to be faster, that's where I should start." Does anyone know how I can most easily get this exact profiling output for a python program? I've seen people say to do this: import cProfile, pstats prof = cProfile.Profile() prof = prof.runctx("real_main(argv)", globals(), locals()) stats = pstats.Stats(prof) stats.sort_stats("time") # Or cumulative stats.print_stats(80) # 80 = how many to print but it's quite messy compared to that elegant call tree. Please let me know if you can easily do this, it would help quite a bit. Cheers!
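    A hedged option using only the standard library: pstats can at least break each function down by its callers and callees, which is close to, though not the same as, the indented tree above; third-party tools such as gprof2dot or pycallgraph can render a full call graph if that is needed.

      import cProfile, pstats

      prof = cProfile.Profile()
      prof = prof.runctx("real_main(argv)", globals(), locals())

      stats = pstats.Stats(prof)
      stats.sort_stats("cumulative")
      stats.print_stats(30)      # overall hot spots
      stats.print_callees(30)    # for each function, where its time went
      stats.print_callers(30)    # and who called it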

    Read the article

  • Call Python From PHP And Get Return Code

    - by seaboy
    Hello everyone, I am calling a Python script from PHP. The Python program has to return some value according to the arguments passed to it. Here is a sample Python program, which will give you a basic idea of what I am doing currently: #!/usr/bin/python import sys #get the arguments passed argList = sys.argv #Not enough arguments. Exit with a value of 1. if len(argList) < 3: #Return with a value of 1. sys.exit(1) arg1 = argList[1] arg2 = argList[2] #Check arguments. Exit with the appropriate value. if len(arg1) > 255: #Exit with a value of 4. sys.exit(4) if len(arg2) < 2: #Exit with a value of 8. sys.exit(8) #Do further coding using the arguments------ #If program works successfully, exit with a value of 0 As you can see from the above code, my basic aim is for the Python program to return some values (0, 1, 4, 8, etc.) depending on the arguments. The calling PHP program should then access these returned values and do the appropriate operation. Currently I have used "sys.exit(n)" for that purpose. Am I right in using sys.exit, or do I need to use something else? Also, what method exists in PHP so that I can access the return code from Python? Sorry for the long question, but hopefully it helps you understand my dilemma. Thanks a ton
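    A hedged sketch of the PHP side (the paths are illustrative): sys.exit(n) is a reasonable way to return a status, and PHP's exec() exposes that status through its third argument:

      <?php
      $output = array();
      $status = 0;
      exec('/usr/bin/python /path/to/script.py arg1 arg2', $output, $status);

      if ($status === 0) {
          echo "Arguments accepted";
      } elseif ($status === 4) {
          echo "First argument too long";
      } elseif ($status === 8) {
          echo "Second argument too short";
      } else {
          echo "Not enough arguments";
      }
      ?>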

    Read the article

  • When is memory actually freed?

    - by zhyk
    Hello all. I'm trying to understand memory management stuff in Objective-C. If I see the memory usage listed by Activity Monitor, it looks like memory is not being freed (I mean column rsize). But in "Object Allocations" everything looks fine. Here is my simple code: #import <Foundation/Foundation.h> int main (int argc, const char * argv[]) { NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init]; NSInteger i, k=10000; while (k>0) { NSMutableArray *array = [[NSMutableArray alloc]init]; for (i=0;i<1000*k; i++) { NSString *srtring = [[NSString alloc] initWithString:@"string...."]; [array addObject:srtring]; [srtring release]; srtring = nil; } [array release]; array = nil; k-=500; } [NSThread sleepForTimeInterval:5]; [pool release]; return 0; } As for retain and release it's cool, everything is balanced. But rsize decreases only after quitting from this little program. Is it possible to "clean" memory somehow before quitting?

    Read the article

  • A Few Basic Questions on Overriding

    - by Dahlia
    I have a few problems with my basics and would be thankful if someone could clear them up. What does it mean when I say base *b = new derived; Why would one go for this? We can very well create objects for class base and class derived separately and then call the functions accordingly. I know that this base *b = new derived; is called Object Slicing, but why and when would one go for this? I know why it is not advisable to convert the base class object to a derived class object (because the base class is not aware of the derived class members and methods). I even read in other StackOverflow threads that if this is going to be the case then we have to change/re-visit our design. I understand all that; however, I am just curious, is there any way to do this? class base { public: void f(){cout << "In Base";} }; class derived:public base { public: void f(){cout << "In Derived";} }; int _tmain(int argc, _TCHAR* argv[]) { base b1, b2; derived d1, d2; b2 = d1; d2 = reinterpret_cast<derived*>(b1); //gives error C2440 b1.f(); // Prints In Base d1.f(); // Prints In Derived b2.f(); // Prints In Base d1.base::f(); //Prints In Base d2.f(); getch(); return 0; } In the case of my above example, is there any way I could call the base class f() using a derived class object? I used d1.base::f(). I just want to know if there is any way without using the scope resolution operator? Thanks a lot for your time in helping me out!
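    A hedged note on the last question: base *b = new derived; is not itself object slicing (slicing is what happens in b2 = d1, where the derived part is cut off), and the base version of f() can be reached without the scope resolution operator by going through a base pointer or reference, because f() is not virtual here. A small sketch:

      #include <iostream>

      class base {
      public:
          void f() { std::cout << "In Base"; }
      };

      class derived : public base {
      public:
          void f() { std::cout << "In Derived"; }
      };

      int main()
      {
          derived d1;
          base& br = d1;               // view the derived object through its base part
          br.f();                      // prints "In Base" (no virtual dispatch involved)
          static_cast<base&>(d1).f();  // same effect, written as a cast
          return 0;
      }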

    Read the article

  • Why can I call a non-const member function pointer from a const method?

    - by sdg
    A co-worker asked about some code like this that originally had templates in it. I have removed the templates, but the core question remains: why does this compile OK? #include <iostream> class X { public: void foo() { std::cout << "Here\n"; } }; typedef void (X::*XFUNC)() ; class CX { public: explicit CX(X& t, XFUNC xF) : object(t), F(xF) {} void execute() const { (object.*F)(); } private: X& object; XFUNC F; }; int main(int argc, char* argv[]) { X x; const CX cx(x,&X::foo); cx.execute(); return 0; } Given that CX is a const object, and its member function execute is const, therefore inside CX::execute the this pointer is const. But I am able to call a non-const member function through a member function pointer. Are member function pointers a documented hole in the const-ness of the world? What (presumably obvious to others) issue have we missed?

    Read the article

  • dynamic lib can't find static lib

    - by renyufei
    env: gcc version 4.4.1 (Ubuntu 4.4.1-4ubuntu9) app: Bin (main) calls a dynamic lib (testb.so), and testb.so contains a static lib (libtesta.a). file list: main.c test.h a.c b.c I then compile as: gcc -o testa.o -c a.c ar -r libtesta.a testa.o gcc -shared -fPIC -o testb.so b.c gcc -o main main.c -L. -ltesta -ldl The compile succeeds, but running it gives an error: ./main: symbol lookup error: ./testb.so: undefined symbol: print The code is as follows: test.h #include <stdio.h> #include <stdlib.h> #include <errno.h> #include <string.h> #include <dlfcn.h> int printa(const char *msg); int printb(const char *msg); a.c #include "test.h" int printa(const char *msg) { printf("\tin printa\n"); printf("\t%s\n", msg); } b.c #include "test.h" int printb(const char *msg) { printf("in printb\n"); printa("called by printb\n"); printf("%s\n", msg); } main.c #include "test.h" int main(int argc, char **argv) { void *handle; int (*dfn)(const char *); printf("before dlopen\n"); handle = dlopen("./testb.so", RTLD_LOCAL | RTLD_LAZY); printf("after dlopen\n"); if (handle == NULL) { printf("dlopen fail: [%d][%s][%s]\n", \ errno, strerror(errno), dlerror()); exit(EXIT_FAILURE); } printf("before dlsym\n"); dfn = dlsym(handle, "printb"); printf("after dlsym\n"); if (dfn == NULL) { printf("dlsym fail: [%d][%s][%s]\n", \ errno, strerror(errno), dlerror()); exit(EXIT_FAILURE); } printf("before dfn\n"); dfn("printb func\n"); printf("after dfn\n"); exit(EXIT_SUCCESS); }
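    A likely fix, shown as an untested sketch: testb.so is never linked against libtesta.a, so printa stays unresolved when the binary dlopens it; linking the static archive into the shared object (and building the archive's object with -fPIC so it can live inside a shared library) is the usual cure:

      $ gcc -fPIC -o testa.o -c a.c
      $ ar -r libtesta.a testa.o
      $ gcc -shared -fPIC -o testb.so b.c -L. -ltesta
      $ gcc -o main main.c -ldl
      $ ./main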

    Read the article

  • GTK: Trying to set a background on a GtkEventBox, but it fails

    - by PP
    I am trying to set a background image on an event box, but I am getting the error `GDK_IS_WINDOW (window)' failed. What am I doing wrong? int main() { GtkWidget *g_winMain; GError *error = NULL; gtk_init(&argc, &argv); g_winMain = gtk_window_new(GTK_WINDOW_TOPLEVEL); gtk_widget_set_size_request(g_winMain, 300, 300); g_signal_connect(G_OBJECT(g_winMain), "destroy", G_CALLBACK(gtk_main_quit), NULL); GtkWidget *event_box; event_box = gtk_event_box_new(); gtk_event_box_set_visible_window( GTK_EVENT_BOX(event_box), TRUE ); GtkWidget *image; GdkPixmap *pixmap; GdkPixbuf *pixbuf; GdkBitmap *bitmap; pixbuf = gdk_pixbuf_new_from_file("test.jpg", &error); gdk_pixbuf_render_pixmap_and_mask(pixbuf, &pixmap, &bitmap, 0); gdk_window_set_back_pixmap( event_box->window, pixmap, true); //Error On this Line******** gtk_container_add( GTK_CONTAINER(g_winMain), event_box); gtk_widget_show(g_winMain); gtk_main(); }
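    A hedged guess at the cause (untested sketch, GTK+ 2 API): event_box->window is NULL until the widget has been realized, which is why the GDK_IS_WINDOW assertion fires; adding the event box to its parent and realizing it before touching its GdkWindow is the usual workaround:

      #include <gtk/gtk.h>

      int main(int argc, char *argv[])
      {
          GError *error = NULL;
          gtk_init(&argc, &argv);

          GtkWidget *win = gtk_window_new(GTK_WINDOW_TOPLEVEL);
          gtk_widget_set_size_request(win, 300, 300);
          g_signal_connect(G_OBJECT(win), "destroy", G_CALLBACK(gtk_main_quit), NULL);

          GtkWidget *event_box = gtk_event_box_new();
          gtk_event_box_set_visible_window(GTK_EVENT_BOX(event_box), TRUE);
          gtk_container_add(GTK_CONTAINER(win), event_box);

          /* realize first so event_box->window exists before it is used */
          gtk_widget_realize(event_box);

          GdkPixbuf *pixbuf = gdk_pixbuf_new_from_file("test.jpg", &error);
          GdkPixmap *pixmap = NULL;
          GdkBitmap *bitmap = NULL;
          gdk_pixbuf_render_pixmap_and_mask(pixbuf, &pixmap, &bitmap, 0);

          gdk_window_set_back_pixmap(event_box->window, pixmap, FALSE);

          gtk_widget_show_all(win);
          gtk_main();
          return 0;
      }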

    Read the article

  • Boost Binary Endian parser not working?

    - by Hai
    I am studying how to use the Boost Spirit Qi binary endian parsers. I wrote a small test parser program following the example here and the basic examples, but it doesn't work properly. It gives me the message "Error: not match!!". Here is my code. #include "boost/spirit/include/qi.hpp" #include "boost/spirit/include/phoenix_core.hpp" #include "boost/spirit/include/phoenix_operator.hpp" #include "boost/spirit/include/qi_binary.hpp" // parsing binary data in various endianness template <typename P, typename T> void binary_parser( char const* input, P const& endian_word_type, T& voxel, bool full_match = true) { using boost::spirit::qi::parse; char const* f(input); char const* l(f + strlen(f)); bool result1 = parse(f,l,endian_word_type,voxel); bool result2 =((!full_match) || (f ==l)); if ( result1 && result2) { //doing nothing, parsing data is pass to voxel alreay } else { std::cerr << "Error: not match!!" << std::endl; exit(1); } } typedef boost::uint16_t bs_int16; typedef boost::uint32_t bs_int32; int main ( int argc, char *argv[] ) { namespace qi = boost::spirit::qi; namespace ascii = boost::spirit::ascii; using qi::big_word; using qi::big_dword; boost::uint32_t ui; float uf; binary_parser("\x01\x02\x03\x04",big_word,ui); assert(ui=0x01020304); binary_parser("\x01\x02\x03\x04",big_word,uf); assert(uf=0x01020304); return 0; } I almost copied the example, so why doesn't this binary parser work? I am using Mac OS 10.5.8 and the gcc 4.01 compiler.
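    A hedged observation, not a verified fix: in Qi's binary parsers big_word matches a 16-bit value and big_dword a 32-bit one, so parsing the four input bytes with big_word leaves two bytes unconsumed and the full-match check fails; the asserts also use = where == was probably intended, and strlen is unreliable on binary data that may contain zero bytes. A minimal sketch of the 32-bit case:

      #include <boost/spirit/include/qi.hpp>
      #include <boost/spirit/include/qi_binary.hpp>
      #include <boost/cstdint.hpp>
      #include <cassert>

      int main()
      {
          namespace qi = boost::spirit::qi;

          char const input[] = "\x01\x02\x03\x04";
          char const* f = input;
          char const* l = input + 4;   // binary data: pass the length explicitly

          boost::uint32_t ui = 0;
          bool ok = qi::parse(f, l, qi::big_dword, ui);  // 32-bit big-endian parser

          assert(ok && f == l);        // all four bytes consumed
          assert(ui == 0x01020304);    // note ==, not =
          return 0;
      }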

    Read the article

  • No Debug information of an iPhone app

    - by Markus Pilman
    Hi all, I wrote an iPhone app which uses a third party library. I crosscompiled this library successfully and everything works smoothly. But when I want to debug the application, it would make sense to also be able to debug the library. So I compiled also the external library with debuging information (usign the gcc option -ggdb). But when I want to debug it, I get the correct symbol names, but the positions are always wrong/extremly wierd (locale_facets.tcc:2505 or iostream:76). For example a stack trace could look like this: #0 0x000045e8 in zorba::serialization::SerializeBaseClass::SerializeBaseClass () at iostream:76 #1 0x0001d990 in zorba::RCObject::RCObject () at iostream:76 #2 0x00025187 in zorba::xqpStringStore::xqpStringStore () at iostream:76 #3 0x000719e4 in zorba::String::String () at locale_facets.tcc:2505 #4 0x00030513 in iphone::iLabelModule::getURI (this=0x533f710) at /Users/sausalito/eth/izorba/sandbox/ilabel.cpp:19 #5 0x00356766 in zorba::static_context::bind_external_module () at locale_facets.tcc:2505 #6 0x0006139d in zorba::StaticContextImpl::registerModule () at locale_facets.tcc:2505 #7 0x000333e5 in -[ZorbaCaller init] (self=0x53405c0, _cmd=0x95583398) at /Users/sausalito/eth/izorba/sandbox/ZorbaCaller.mm:61 #8 0x00033180 in +[ZorbaCaller instance] (self=0x11dc2bc, _cmd=0x93679591) at /Users/sausalito/eth/izorba/sandbox/ZorbaCaller.mm:37 #9 0x0003d998 in -[testOne execute:] (self=0x530d560, _cmd=0x9366b126, sender=0x5121da0) at /Users/sausalito/eth/izorba/sandbox/generator/testOne.mm:13 #10 0x01a21405 in -[UIApplication sendAction:to:from:forEvent:] () #11 0x01a84b4e in -[UIControl sendAction:to:forEvent:] () #12 0x01a86d6f in -[UIControl(Internal) _sendActionsForEvents:withEvent:] () #13 0x01a85abb in -[UIControl touchesEnded:withEvent:] () #14 0x01a3addf in -[UIWindow _sendTouchesForEvent:] () #15 0x01a247c8 in -[UIApplication sendEvent:] () #16 0x01a2b061 in _UIApplicationHandleEvent () #17 0x03b6fd59 in PurpleEventCallback () #18 0x034a8b80 in CFRunLoopRunSpecific () #19 0x034a7c48 in CFRunLoopRunInMode () #20 0x03b6e615 in GSEventRunModal () #21 0x03b6e6da in GSEventRun () #22 0x01a2bfaf in UIApplicationMain () #23 0x0002dd7e in main (argc=1, argv=0xbffff044) at /Users/sausalito/eth/izorba/sandbox/main.m:16 Does anybody have an idea, where these wrong locations come from?

    Read the article

  • Providing a less-than operator for one element of a pair

    - by Koszalek Opalek
    What would be the most elegant way to fix the following code: #include <vector> #include <map> #include <set> using namespace std; typedef map< int, int > row_t; typedef vector< row_t > board_t; typedef row_t::iterator area_t; bool operator< ( area_t const& a, area_t const& b ) { return( a->first < b->first ); }; int main( int argc, char* argv[] ) { int row_num; area_t it; set< pair< int, area_t > > queue; queue.insert( make_pair( row_num, it ) ); // does not compile }; One way to fix it is moving the definition of operator< to namespace std (I know, you are not supposed to do it.) namespace std { bool operator< ( area_t const& a, area_t const& b ) { return( a->first < b->first ); }; }; Another obvious solution is defining operator< for pair< int, area_t >, but I'd like to avoid that and be able to define the operator only for the one element of the pair where it is not defined.
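    A hedged sketch of a third option, reusing the typedefs above: the global operator< is not found by the lookup done inside std::pair's own operator< (the iterator's associated namespace is std), so handing the set an explicit comparison type sidesteps the lookup problem while keeping the "compare one element" logic in one place:

      #include <map>
      #include <set>
      #include <utility>

      typedef std::map<int, int> row_t;
      typedef row_t::iterator area_t;

      // comparison type handed to the set: only the iterator part needs custom logic
      struct row_area_less {
          bool operator()(std::pair<int, area_t> const& a,
                          std::pair<int, area_t> const& b) const
          {
              if (a.first != b.first)
                  return a.first < b.first;
              return a.second->first < b.second->first;  // compare through the iterator's key
          }
      };

      int main(int argc, char* argv[])
      {
          row_t row;
          row[0] = 42;
          std::set< std::pair<int, area_t>, row_area_less > queue;
          queue.insert(std::make_pair(1, row.begin()));  // compiles; the comparator is used
          return 0;
      }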

    Read the article

  • struct and rand()

    - by teoz
    I have a struct with an array of 100 int (b) and a variable of type int (a). I have a function that checks if the value of "a" is in the array, and I have generated the array elements and the variable with random values, but it doesn't work. Can someone help me fix it? #include <stdio.h> #include <stdlib.h> #include <time.h> typedef struct { int a; int b[100]; } h; int func(h v){ int i; for (i=0;i<100;i++){ if(v.b[i]==v.a) return 1; else return 0; } } int main(int argc, char** argv) { h str; srand(time(0)); int i; for(i=0;0<100;i++){ str.b[i]=(rand() % 10) + 1; } str.a=(rand() % 10) + 1; str.a=1; printf("%d\n",func(str)); return 0; }
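    A hedged sketch of the usual fixes: the fill loop's condition is 0<100, which is always true and walks off the end of the array, and func returns on the very first element instead of scanning the whole array; the second assignment to str.a also overwrites the random value (dropped below on the assumption that it was leftover test code):

      #include <stdio.h>
      #include <stdlib.h>
      #include <time.h>

      typedef struct {
          int a;
          int b[100];
      } h;

      /* return 1 as soon as a match is found; return 0 only after checking every element */
      int func(h v)
      {
          int i;
          for (i = 0; i < 100; i++) {
              if (v.b[i] == v.a)
                  return 1;
          }
          return 0;
      }

      int main(void)
      {
          h str;
          int i;
          srand(time(0));
          for (i = 0; i < 100; i++)          /* was "0 < 100", which never terminates */
              str.b[i] = (rand() % 10) + 1;
          str.a = (rand() % 10) + 1;
          printf("%d\n", func(str));
          return 0;
      }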

    Read the article

  • Boost Date_Time problem compiling a simple program

    - by Andry
    Hello! I'm writing a very stupid program using the Boost Date_Time library. int main(int srgc, char** argv) { using namespace boost::posix_time; date d(2002,Feb,1); //an arbitrary date ptime t1(d, hours(5)+nanosec(100)); //date + time of day offset ptime t2 = t1 - minutes(4)+seconds(2); ptime now = second_clock::local_time(); //use the clock date today = now.date(); //Get the date part out of the time } I cannot compile it; the compiler does not recognize a type... I have used many features of the Boost libs, like serialization and more... I built them correctly and, looking in my /usr/local/lib folder, I can see that libboost_date_time.so is there (a good sign, which means I was able to build that library). When I compile I write the following: g++ -lboost_date_time main.cpp But the errors it shows when I specify the lib are the same as those when I do not specify any lib. What is this? Does anyone know? The error is main.cpp: In function ‘int main(int, char**)’: main.cpp:9: error: ‘date’ was not declared in this scope main.cpp:9: error: expected ‘;’ before ‘d’ main.cpp:10: error: ‘d’ was not declared in this scope main.cpp:10: error: ‘nanosec’ was not declared in this scope main.cpp:13: error: expected ‘;’ before ‘today’
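    A hedged sketch of what the errors point at: these are compile errors, not link errors, and date lives in boost::gregorian, whose header the snippet never includes; nanosec is also only available when the library is built with nanosecond resolution, so it is left out below:

      #include <boost/date_time/posix_time/posix_time.hpp>
      #include <boost/date_time/gregorian/gregorian.hpp>

      int main()
      {
          using namespace boost::posix_time;
          using namespace boost::gregorian;   // provides date, Feb, ...

          date d(2002, Feb, 1);               // now declared in this scope
          ptime t1(d, hours(5));
          ptime t2 = t1 - minutes(4) + seconds(2);
          ptime now = second_clock::local_time();
          date today = now.date();
          return 0;
      }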

    Read the article

  • Why does this program take up so much memory?

    - by Adrian
    I am learning Objective-C. I am trying to release all of the memory that I use. So, I wrote a program to test if I am doing it right: #import <Foundation/Foundation.h> #define DEFAULT_NAME @"Unknown" @interface Person : NSObject { NSString *name; } @property (copy) NSString * name; @end @implementation Person @synthesize name; - (void) dealloc { [name release]; [super dealloc]; } - (id) init { if (self = [super init]) { name = DEFAULT_NAME; } return self; } @end int main (int argc, const char * argv[]) { NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init]; Person *person = [[Person alloc] init]; NSString *str; int i; for (i = 0; i < 1e9; i++) { str = [NSString stringWithCString: "Name" encoding: NSUTF8StringEncoding]; person.name = str; [str release]; } [person release]; [pool drain]; return 0; } I am using a mac with snow leopard. To test how much memory this is using, I open Activity Monitor at the same time that it is running. After a couple of seconds, it is using gigabytes of memory. What can I do to make it not use so much?
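    A hedged sketch of the usual explanation: stringWithCString:encoding: returns an autoreleased object, so it should not be released manually, and every iteration parks one more string in the single outer pool, which is only drained at the very end; draining a pool inside the loop keeps the footprint flat. Loop body only, meant to drop into the main above:

      for (i = 0; i < 1e9; i++) {
          NSAutoreleasePool *loopPool = [[NSAutoreleasePool alloc] init];
          NSString *str = [NSString stringWithCString: "Name"
                                             encoding: NSUTF8StringEncoding];
          person.name = str;     // do not release str: it is autoreleased, not owned here
          [loopPool drain];      // frees the objects accumulated during this iteration
      }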

    Read the article

  • PyQt QAbstractListModel seems to ignore tristate flags

    - by mcieslak
    I've been trying for a couple days to figure out why my QAbstractLisModel won't allow a user to toggle a checkable item in three states. The model returns the Qt.IsTristate and Qt.ItemIsUserCheckable in the flags() method, but when the program runs only Qt.Checked and Qt.Unchecked are toggled on edit. class cboxModel(QtCore.QAbstractListModel): def __init__(self, parent=None): super(cboxModel, self).__init__(parent) self.cboxes = [ ['a',0], ['b',1], ['c',2], ['d',0] ] def rowCount(self,index=QtCore.QModelIndex()): return len(self.cboxes) def data(self,index,role): if not index.isValid: return QtCore.QVariant() myname,mystate = self.cboxes[index.row()] if role == QtCore.Qt.DisplayRole: return QtCore.QVariant(myname) if role == QtCore.Qt.CheckStateRole: if mystate == 0: return QtCore.QVariant(QtCore.Qt.Unchecked) elif mystate == 1: return QtCore.QVariant(QtCore.Qt.PartiallyChecked) elif mystate == 2: return QtCore.QVariant(QtCore.Qt.Checked) return QtCore.QVariant() def setData(self,index,value,role=QtCore.Qt.EditRole): if index.isValid(): self.cboxes[index.row()][1] = value.toInt()[0] self.emit(QtCore.SIGNAL("dataChanged(QModelIndex,QModelIndex)"), index, index) print self.cboxes return True return False def flags(self,index): if not index.isValid(): return QtCore.Qt.ItemIsEditable return QtCore.Qt.ItemIsEnabled | QtCore.Qt.ItemIsEditable | QtCore.Qt.ItemIsUserCheckable | QtCore.Qt.ItemIsTristate You can test it with this, class MainForm(QtGui.QMainWindow): def __init__(self, parent=None): super(MainForm, self).__init__(parent) model = cboxModel(self) self.view = QtGui.QListView() self.view.setModel(model) self.setCentralWidget(self.view) app = QtGui.QApplication(sys.argv) form = MainForm() form.show() app.exec_() and see that only 2 states are available. I'm assuming there's something simple I'm missing. Any ideas? Thanks!
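    A hedged sketch (PyQt4, replacing only setData in the model above): the standard views toggle a checkable item between checked and unchecked only, regardless of the tristate flag, so the model has to advance the third state itself when the check-state role is edited:

      def setData(self, index, value, role=QtCore.Qt.EditRole):
          if not index.isValid():
              return False
          if role == QtCore.Qt.CheckStateRole:
              # ignore the two-state value the view sends; cycle 0 -> 1 -> 2 -> 0 ourselves
              self.cboxes[index.row()][1] = (self.cboxes[index.row()][1] + 1) % 3
          else:
              self.cboxes[index.row()][1] = value.toInt()[0]
          self.emit(QtCore.SIGNAL("dataChanged(QModelIndex,QModelIndex)"), index, index)
          return True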

    Read the article

  • My QFileSystemModel doesn't work as expected in PyQt

    - by Skilldrick
    I'm learning the Qt Model/View architecture at the moment, and I've found something that doesn't work as I'd expect it to. I've got the following code (adapted from Qt Model Classes): from PyQt4 import QtCore, QtGui model = QtGui.QFileSystemModel() parentIndex = model.index(QtCore.QDir.currentPath()) print model.isDir(parentIndex) #prints True print model.data(parentIndex).toString() #prints name of current directory childIndex = model.index(0, 0, parentIndex) print model.data(childIndex).toString() rows = model.rowCount(parentIndex) print rows #prints 0 (even though the current directory has directory and file children) The question: Is this a problem with PyQt, have I just done something wrong, or am I completely misunderstanding QFileSystemModel? According to the documentation, model.rowCount(parentIndex) should return the number of children in the current directory. The QFileSystemModel docs say that it needs an instance of a Gui application, so I've also placed the above code in a QWidget as follows, but with the same result: import sys from PyQt4 import QtCore, QtGui class Widget(QtGui.QWidget): def __init__(self, parent=None): QtGui.QWidget.__init__(self, parent) model = QtGui.QFileSystemModel() parentIndex = model.index(QtCore.QDir.currentPath()) print model.isDir(parentIndex) print model.data(parentIndex).toString() childIndex = model.index(0, 0, parentIndex) print model.data(childIndex).toString() rows = model.rowCount(parentIndex) print rows def main(): app = QtGui.QApplication(sys.argv) widget = Widget() widget.show() sys.exit(app.exec_()) if __name__ == '__main__': main()
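    A hedged sketch of the likely cause: QFileSystemModel populates lazily on a background thread, so rowCount() reports 0 until a root path has been set and the directory has actually been loaded, which needs a running event loop (the directoryLoaded signal used below requires Qt 4.7 or later):

      import sys
      from PyQt4 import QtCore, QtGui

      app = QtGui.QApplication(sys.argv)
      model = QtGui.QFileSystemModel()

      def on_loaded(path):
          parentIndex = model.index(path)
          print model.rowCount(parentIndex)   # now reflects the directory's children
          app.quit()

      model.directoryLoaded.connect(on_loaded)
      model.setRootPath(QtCore.QDir.currentPath())
      app.exec_()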

    Read the article

  • Why can't I bind an IPv6 socket to a link-local address?

    - by Haiyuan Zhang
    #include <sys/types.h> #include <sys/socket.h> #include <netinet/in.h> #include <netdb.h> #include <stdio.h> void error(char *msg) { perror(msg); exit(0); } int main(int argc, char *argv[]) { int sock, length, fromlen, n; struct sockaddr_in6 server; struct sockaddr_in6 from; int portNr = 5555; char buf[1024]; length = sizeof (struct sockaddr_in6); sock=socket(AF_INET6, SOCK_DGRAM, 0); if (sock < 0) error("Opening socket"); bzero((char *)&server, length); server.sin6_family=AF_INET6; server.sin6_addr=in6addr_any; server.sin6_port=htons(portNr); inet_pton( AF_INET6, "fe80::21f:29ff:feed:2f7e", (void *)&server.sin6_addr.s6_addr); //inet_pton( AF_INET6, "::1", (void *)&server.sin6_addr.s6_addr); if (bind(sock,(struct sockaddr *)&server,length)<0) error("binding"); fromlen = sizeof(struct sockaddr_in6); while (1) { n = recvfrom(sock,buf,1024,0,(struct sockaddr *)&from,&fromlen); if (n < 0) error("recvfrom"); write(1,"Received a datagram: ",21); write(1,buf,n); n = sendto(sock,"Got your message\n",17, 0,(struct sockaddr *)&from,fromlen); if (n < 0) error("sendto"); } } When I compile and run the above code I get: binding: Invalid argument. If I change it to bind ::1 and leave everything else in the source unchanged, the code works! Could you tell me what's wrong with my code? Thanks in advance.
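    A hedged sketch of the usual fix (only the setup portion of main is shown, and the interface name is illustrative): a link-local address is only unambiguous together with an interface, so sin6_scope_id has to be filled in before bind(), otherwise the kernel rejects the address with EINVAL:

      #include <net/if.h>   /* if_nametoindex */

      server.sin6_family = AF_INET6;
      server.sin6_port = htons(portNr);
      inet_pton(AF_INET6, "fe80::21f:29ff:feed:2f7e", &server.sin6_addr);
      server.sin6_scope_id = if_nametoindex("eth0");  /* the interface that owns the address */

      if (bind(sock, (struct sockaddr *)&server, sizeof(server)) < 0)
          error("binding");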

    Read the article

  • How to write this snippet in Python?

    - by morpheous
    I am learning Python (I have a C/C++ background). I need to write something practical in Python though, whilst learning. I have the following pseudocode (my first attempt at writing a Python script, since reading about Python yesterday). Hopefully, the snippet details the logic of what I want to do. BTW I am using python 2.6 on Ubuntu Karmic. Assume the script is invoked as: script_name.py directory_path import csv, sys, os, glob # Can I declare that the function accepts a dictionary as first arg? def getItemValue(item, key, defval) return !item.haskey(key) ? defval : item[key] dirname = sys.argv[1] # declare some default values here weight, is_male, default_city_id = 100, true, 1 # fetch some data from a database table into a nested dictionary, indexed by a string curr_dict = load_dict_from_db('foo') #iterate through all the files matching *.csv in the specified folder for infile in glob.glob( os.path.join(dirname, '*.csv') ): #get the file name (without the '.csv' extension) code = infile[0:-4] # open file, and iterate through the rows of the current file (a CSV file) f = open(infile, 'rt') try: reader = csv.reader(f) for row in reader: #lookup the id for the code in the dictionary id = curr_dict[code]['id'] name = row['name'] address1 = row['address1'] address2 = row['address2'] city_id = getItemValue(row, 'city_id', default_city_id) # insert row to database table finally: f.close() I have the following questions: Is the code written in a Pythonic enough way (is there a better way of implementing it)? Given a table with a schema like shown below, how may I write a Python function that fetches data from the table and returns is in a dictionary indexed by string (name). How can I insert the row data into the table (actually I would like to use a transaction if possible, and commit just before the file is closed) Table schema: create table demo (id int, name varchar(32), weight float, city_id int); BTW, my backend database is postgreSQL
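    A hedged sketch of the two helper pieces asked about (psycopg2 is assumed for the PostgreSQL access; note also that csv.reader yields lists, so row['name'] needs csv.DictReader instead):

      import csv
      import psycopg2

      def get_item_value(item, key, defval):
          # dict.get already expresses "default unless the key exists"
          return item.get(key, defval)

      def load_dict_from_db(conn, table_name):
          """Return the rows of table_name as a dict keyed by name."""
          cur = conn.cursor()
          cur.execute("SELECT id, name, weight, city_id FROM " + table_name)
          result = {}
          for row_id, name, weight, city_id in cur.fetchall():
              result[name] = {'id': row_id, 'weight': weight, 'city_id': city_id}
          return result

      # reading the CSV rows by column name:
      # for row in csv.DictReader(open(infile, 'rb')):
      #     city_id = get_item_value(row, 'city_id', 1)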

    Read the article

  • Calling Python app/script from C#

    - by Maxim Z.
    I'm building an ASP.NET MVC (C#) site where I want to implement STV (Single Transferable Vote) voting. I've used OpenSTV for voting scenarios before, with great success, but I've never used it programmatically. The OpenSTV Google Code project offers a Python script that allows usage of OpenSTV from other applications: import sys sys.path.append("path to openstv package") from openstv.ballots import Ballots from openstv.ReportPlugins.TextReport import TextReport from openstv.plugins import getMethodPlugins (ballotFname, method, reportFname) = sys.argv[1:] methods = getMethodPlugins("byName") f = open(reportFname, "w") try: b = Ballots() b.loadUnknown(ballotFname) except Exception, msg: print >> f, ("Unable to read ballots from %s" % ballotFname) print >> f, msg sys.exit(-1) try: e = methods[method](b) e.runElection() except Exception, msg: print >> f, ("Unable to count votes using %s" % method) print >> f, msg sys.exit(-1) try: r = TextReport(e, outputFile=f) r.generateReport(); except Exception, msg: print >> f, "Unable to write report" print >> f, msg sys.exit(-1) f.close() Is there a way for me to make such a Python call from my C# ASP.NET MVC site? If so, how? Thanks in advance!
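    A hedged sketch of one common approach (the wrapper script name, arguments, and paths are illustrative): launch the Python interpreter with System.Diagnostics.Process from the C# side and read back the report the script writes:

      using System.Diagnostics;
      using System.IO;

      public static class OpenStvRunner
      {
          public static string RunCount(string ballotFile, string method, string reportFile)
          {
              var psi = new ProcessStartInfo
              {
                  FileName = "python",                                       // interpreter on PATH
                  Arguments = string.Format("openstv_wrapper.py \"{0}\" {1} \"{2}\"",
                                            ballotFile, method, reportFile), // hypothetical wrapper
                  UseShellExecute = false,
                  RedirectStandardOutput = true,
                  CreateNoWindow = true
              };

              using (var process = Process.Start(psi))
              {
                  process.StandardOutput.ReadToEnd();  // drain so the child cannot block on a full pipe
                  process.WaitForExit();
              }

              return File.ReadAllText(reportFile);     // the wrapper writes its result here
          }
      }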

    Read the article

  • Sharing runtime variables between files

    - by nightcracker
    I have a project with a few files that all include the header global.hpp. Those files want to share and update information that is relevant for the whole program during runtime (that data is gathered progressively during the program runs but the fields of data are known at compile-time). Now my idea was to use a struct like this: global.hpp #include <string> #ifndef _GLOBAL_SESSION_STRUCT #define _GLOBAL_SESSION_STRUCT struct session_struct { std::string username; std::string password; std::string hostname; unsigned short port; // more data fields as needed }; #endif extern struct session_struct session; main.cpp #include "global.hpp" struct session_struct session; int main(int argc, char* argv[]) { session.username = "user"; session.password = "secret"; session.hostname = "example.com"; session.port = 80; // other stuff, etc return 0; } Now every file that includes global.hpp can just read & write the fields of the session struct and easily share information. Is this the correct way to do this? NOTE: For this specific project no threading is used. But please (for future projects and other people reading) clarify in your answer how this (or your proposed) solution works when threaded. Also, for this example/project session variables are shared. But this should also apply to any other form of shared variables.
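    A hedged sketch for the threading note (boost::thread is assumed, since no C++11 is implied above): the extern pattern itself is fine single-threaded, but once several threads write these fields every access needs to go through a shared mutex; as an aside, identifiers such as _GLOBAL_SESSION_STRUCT that begin with an underscore and a capital letter are reserved for the implementation.

      #include <boost/thread/mutex.hpp>
      #include <iostream>
      #include <string>

      struct session_struct {
          std::string username;
          std::string hostname;
          unsigned short port;
      };

      session_struct session;      // the one shared definition (extern'd from the header)
      boost::mutex session_mutex;  // guards every read and write of session

      void set_host(const std::string& host, unsigned short port)
      {
          boost::mutex::scoped_lock lock(session_mutex);  // unlocked when lock leaves scope
          session.hostname = host;
          session.port = port;
      }

      int main()
      {
          set_host("example.com", 80);
          boost::mutex::scoped_lock lock(session_mutex);
          std::cout << session.hostname << ":" << session.port << "\n";
          return 0;
      }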

    Read the article

  • Usage of Visual Leak Detector

    - by Yan Cheng CHEOK
    I found a very interesting memory leak detector for use with Visual C++. http://www.codeproject.com/KB/applications/visualleakdetector.aspx I tried it out, but cannot make it detect a memory leak in my code. I am using MS Visual Studio 2008. Is there any step I have missed? #include "stdafx.h" #include "vld.h" #include <iostream> void fun() { new int[1000]; } int _tmain(int argc, _TCHAR* argv[]) { fun(); std::cout << "lead?" << std::endl; getchar(); return 0; } The output when I run in debug mode is: ... ... 'Test.exe': Loaded 'C:\WINDOWS\WinSxS\x86_Microsoft.VC80.CRT_1fc8b3b9a1e18e3b_8.0.50727.4053_x-ww_e6967989\msvcr80.dll', Symbols loaded. 'Test.exe': Loaded 'C:\WINDOWS\system32\msvcrt.dll', Symbols loaded (source information stripped). 'Test.exe': Loaded 'C:\WINDOWS\WinSxS\x86_Microsoft.VC90.DebugCRT_1fc8b3b9a1e18e3b_9.0.30729.1_x-ww_f863c71f\msvcp90d.dll', Symbols loaded. 'Test.exe': Loaded 'C:\Program Files\Visual Leak Detector\bin\dbghelp.dll', Symbols loaded (source information stripped). Visual Leak Detector Version 1.9d installed. No memory leaks detected. Visual Leak Detector is now exiting. The program '[5468] Test.exe: Native' has exited with code 0 (0x0).

    Read the article
