Search Results

Search found 2566 results on 103 pages for 'struct'.

Page 95/103 | < Previous Page | 91 92 93 94 95 96 97 98 99 100 101 102  | Next Page >

  • How to use CFNetwork to get byte array from sockets?

    - by Vic
    Hi, I'm working in a project for the iPad, it is a small program and I need it to communicate with another software that runs on windows and act like a server; so the application that I'm creating for the iPad will be the client. I'm using CFNetwork to do sockets communication, this is the way I'm establishing the connection: char ip[] = "192.168.0.244"; NSString *ipAddress = [[NSString alloc] initWithCString:ip]; /* Build our socket context; this ties an instance of self to the socket */ CFSocketContext CTX = { 0, self, NULL, NULL, NULL }; /* Create the server socket as a TCP IPv4 socket and set a callback */ /* for calls to the socket's lower-level connect() function */ TCPClient = CFSocketCreate(NULL, PF_INET, SOCK_STREAM, IPPROTO_TCP, kCFSocketDataCallBack, (CFSocketCallBack)ConnectCallBack, &CTX); if (TCPClient == NULL) return; /* Set the port and address we want to listen on */ struct sockaddr_in addr; memset(&addr, 0, sizeof(addr)); addr.sin_len = sizeof(addr); addr.sin_family = AF_INET; addr.sin_port = htons(PORT); addr.sin_addr.s_addr = inet_addr([ipAddress UTF8String]); CFDataRef connectAddr = CFDataCreate(NULL, (unsigned char *)&addr, sizeof(addr)); CFSocketConnectToAddress(TCPClient, connectAddr, -1); CFRunLoopSourceRef sourceRef = CFSocketCreateRunLoopSource(kCFAllocatorDefault, TCPClient, 0); CFRunLoopAddSource(CFRunLoopGetCurrent(), sourceRef, kCFRunLoopCommonModes); CFRelease(sourceRef); CFRunLoopRun(); And this is the way I sent the data, which basically is a byte array /* The native socket, used for various operations */ // TCPClient is a CFSocketRef member variable CFSocketNativeHandle sock = CFSocketGetNative(TCPClient); Byte byteData[3]; byteData[0] = 0; byteData[1] = 4; byteData[2] = 0; send(sock, byteData, strlen(byteData)+1, 0); Finally, as you may have noticed, when I create the server socket, I registered a callback for the kCFSocketDataCallBack type, this is the code. void ConnectCallBack(CFSocketRef socket, CFSocketCallBackType type, CFDataRef address, const void *data, void *info) { // SocketsViewController is the class that contains all the methods SocketsViewController *obj = (SocketsViewController*)info; UInt8 *unsignedData = (UInt8 *) CFDataGetBytePtr(data); char *value = (char*)unsignedData; NSString *text = [[NSString alloc]initWithCString:value length:strlen(value)]; [obj writeToTextView:text]; [text release]; } Actually, this callback is being invoked in my code, the problem is that I don't know how can I get the data that the windows client sent me, I'm expecting to receive an array of bytes, but I don't know how can I get those bytes from the data param. If anyone can help me to find a way to do this, or maybe me point me to another way to get the data from the server in my client application I would really appreciate it. Thanks.
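
    For a kCFSocketDataCallBack, CoreFoundation delivers the received bytes to the callback as a CFDataRef in the data parameter, so one way to read them out looks roughly like the sketch below (hedged: the fixed 3-byte buffer is only illustrative, and a real handler should cope with partial reads):

      /* Sketch: pulling raw bytes out of the callback's data parameter */
      #include <CoreFoundation/CoreFoundation.h>
      #include <string.h>

      void ConnectCallBack(CFSocketRef socket, CFSocketCallBackType type,
                           CFDataRef address, const void *data, void *info)
      {
          CFDataRef payload = (CFDataRef)data;             /* bytes sent by the server */
          CFIndex length = CFDataGetLength(payload);
          const UInt8 *bytes = CFDataGetBytePtr(payload);  /* not NUL-terminated */

          UInt8 buffer[3];                                 /* illustrative size only */
          if (length >= (CFIndex)sizeof(buffer)) {
              memcpy(buffer, bytes, sizeof(buffer));
              /* buffer[0..2] now hold the first three bytes of the message */
          }
      }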

    Read the article

  • Corner Cases, Unexpected and Unusual Matlab

    - by Mikhail
    Over the years, reading others' code, I have encountered and collected some examples of Matlab syntax which can at first seem unusual and counterintuitive. Please feel free to comment on or complement this list. I verified it in R2006a. set([], 'Background:Color','red') Matlab is very forgiving sometimes. In this case, setting properties on an array of objects works even with nonsense properties, at least when the array is empty. myArray([1,round(end/2)]) This use of the end keyword may seem unclean but is sometimes very handy instead of using length(myArray). any([]) ~= all([]) Surprisingly, any([]) returns false and all([]) returns true. And I always thought that all is stronger than any. EDIT: with a non-empty argument, all() returns true for a subset of the values for which any() returns true (see the truth table). This means that any() being false implies all() being false. This simple rule is violated by Matlab with [] as argument. Loren also blogged about it. Select(Range(ExcelComObj)) Procedural-style COM object method dispatch. Do not wonder that exist('Select') returns zero! [myString, myCell] In this case Matlab makes an implicit cast of the string variable myString to the cell type {myString}. It works, although I would not have expected it to. [double(1.8), uint8(123)] => 2 123 Another cast example. Everybody would probably expect the uint8 value to be cast to double, but Mathworks has another opinion. a = 5; b = a(); It looks silly, but you can call a variable with round brackets. Actually it makes sense, because this way you can execute a function given its handle. a = {'aa', 'bb' 'cc', 'dd'}; Surprisingly, this code neither returns a vector nor raises an error but defines a matrix, using just the code layout. It is probably a relic from ancient times. set(hobj, {'BackgroundColor','ForegroundColor'},{'red','blue'}) This code does what you probably expect it to do. That the function set accepts a struct as its second argument is a known fact and makes sense, and this syntax is just a cell2struct away. Equivalence rules are sometimes unexpected at first. For example, 'A'==65 returns true (although for C experts it is self-evident). Which further unexpected/unusual Matlab features are you aware of?

    Read the article

  • Segmentation fault in std function std::_Rb_tree_rebalance_for_erase ()

    - by Sarah
    I'm somewhat new to programming and am unsure how to deal with a segmentation fault that appears to be coming from a std function. I hope I'm doing something stupid (i.e., misusing a container), because I have no idea how to fix it. The precise error is Program received signal EXC_BAD_ACCESS, Could not access memory. Reason: KERN_INVALID_ADDRESS at address: 0x000000000000000c 0x00007fff8062b144 in std::_Rb_tree_rebalance_for_erase () (gdb) backtrace #0 0x00007fff8062b144 in std::_Rb_tree_rebalance_for_erase () #1 0x000000010000e593 in Simulation::runEpidSim (this=0x7fff5fbfcb20) at stl_tree.h:1263 #2 0x0000000100016078 in main () at main.cpp:43 The function that exits successfully just before the segmentation fault updates the contents of two containers. One is a boost::unordered_multimap called carriage; it contains one or more struct Infection objects that contain two doubles. The other container is of type std::multiset< Event, std::less< Event > >, typedef'd as EventPQ and called ce. It is full of Event structs. void Host::recover( int s, double recoverTime, EventPQ & ce ) { // Clearing all serotypes in carriage // and their associated recovery events in ce // and then updating susceptibility to each serotype double oldRecTime; int z; for ( InfectionMap::iterator itr = carriage.begin(); itr != carriage.end(); itr++ ) { z = itr->first; oldRecTime = (itr->second).recT; EventPQ::iterator epqItr = ce.find( Event(oldRecTime) ); assert( epqItr != ce.end() ); ce.erase( epqItr ); immune[ z ]++; } carriage.clear(); calcSusc(); // a function that edits an array cout << "Done with sync_recovery event." << endl; } The last cout << line appears immediately before the seg fault. I hope this is enough (but not too much) information. My idea so far is that the rebalancing is being attempted on ce after this function, but I am unsure why it would be failing. (It's unfortunately very hard for me to test this code by removing particular lines, since they would create logical inconsistencies and further problems, but if experienced programmers still think this is the way to go, I'll try.)
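
    The backtrace points at an erase done later, back in Simulation::runEpidSim, so one thing worth checking (a guess, not a diagnosis) is whether the caller still holds an iterator into ce that referred to one of the events erased here - erasing an element leaves any saved iterator to it dangling, and a later erase through it can blow up inside _Rb_tree_rebalance_for_erase. For reference, the safe pre-C++11 erase-while-iterating idiom for associative containers looks like this (shouldRemove is a placeholder predicate):

      for (EventPQ::iterator it = ce.begin(); it != ce.end(); /* no ++ here */) {
          if (shouldRemove(*it))
              ce.erase(it++);   // post-increment hands erase() a copy; 'it' stays valid
          else
              ++it;
      }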

    Read the article

  • A couple questions using fwrite/fread with data structures

    - by Nazgulled
    Hi, I'm using fwrite() and fread() for the first time to write some data structures to disk and I have a couple of questions about best practices and proper ways of doing things. What I'm writing to disk (so I can later read it back) is all user profiles inserted in a Graph structure. Each graph vertex is of the following type: typedef struct sUserProfile { char name[NAME_SZ]; char address[ADDRESS_SZ]; int socialNumber; char password[PASSWORD_SZ]; HashTable *mailbox; short msgCount; } UserProfile; And this is how I'm currently writing all the profiles to disk: void ioWriteNetworkState(SocialNetwork *social) { Vertex *currPtr = social->usersNetwork->vertices; UserProfile *user; FILE *fp = fopen("save/profiles.dat", "w"); if(!fp) { perror("fopen"); exit(EXIT_FAILURE); } fwrite(&(social->usersCount), sizeof(int), 1, fp); while(currPtr) { user = (UserProfile*)currPtr->value; fwrite(&(user->socialNumber), sizeof(int), 1, fp); fwrite(user->name, sizeof(char)*strlen(user->name), 1, fp); fwrite(user->address, sizeof(char)*strlen(user->address), 1, fp); fwrite(user->password, sizeof(char)*strlen(user->password), 1, fp); fwrite(&(user->msgCount), sizeof(short), 1, fp); break; currPtr = currPtr->next; } fclose(fp); } Notes: The first fwrite() you see will write the total user count in the graph so I know how much data I need to read back. The break is there for testing purposes. There's thousands of users and I'm still experimenting with the code. My questions: After reading this I decided to use fwrite() on each element instead of writing the whole structure. I also avoid writing the pointer to to the mailbox as I don't need to save that pointer. So, is this the way to go? Multiple fwrite()'s instead of a global one for the whole structure? Isn't that slower? How do I read back this content? I know I have to use fread() but I don't know the size of the strings, cause I used strlen() to write them. I could write the output of strlen() before writing the string, but is there any better way without extra writes?
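
    One common answer is to length-prefix each string: the prefix costs a couple of bytes per field but makes reading back unambiguous. A minimal sketch (helper names are illustrative, and the reader assumes the string fits the destination buffer, which holds here because the struct uses fixed-size arrays):

      #include <stdio.h>
      #include <string.h>

      static void write_string(FILE *fp, const char *s)
      {
          unsigned short len = (unsigned short)strlen(s);
          fwrite(&len, sizeof len, 1, fp);   /* length first... */
          fwrite(s, 1, len, fp);             /* ...then the characters */
      }

      static void read_string(FILE *fp, char *dest, size_t dest_size)
      {
          unsigned short len = 0;
          fread(&len, sizeof len, 1, fp);
          fread(dest, 1, len, fp);           /* assumes len < dest_size, as with the fixed-size fields above */
          dest[len] = '\0';
      }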

    Read the article

  • Containers of reference_wrappers (comparison operators required?)

    - by kloffy
    If you use stl containers together with reference_wrappers of POD types, the following code works just fine: int i = 3; std::vector< boost::reference_wrapper<int> > is; is.push_back(boost::ref(i)); std::cout << (std::find(is.begin(),is.end(),i)!=is.end()) << std::endl; However, if you use non-POD types such as (contrived example): struct Integer { int value; bool operator==(const Integer& rhs) const { return value==rhs.value; } bool operator!=(const Integer& rhs) const { return !(*this == rhs); } }; It doesn't suffice to declare those comparison operators, instead you have to declare: bool operator==(const boost::reference_wrapper<Integer>& lhs, const Integer& rhs) { return boost::unwrap_ref(lhs)==rhs; } And possibly also: bool operator==(const Integer& lhs, const boost::reference_wrapper<Integer>& rhs) { return lhs==boost::unwrap_ref(rhs); } In order to get the equivalent code to work: Integer j = { 0 }; std::vector< boost::reference_wrapper<Integer> > js; js.push_back(boost::ref(j)); std::cout << (std::find(js.begin(),js.end(),j)!=js.end()) << std::endl; Now, I'm wondering if this is really the way it's meant to be done, since it seems impractical. It just seems there should be a simpler solution, e.g. templates: template<class T> bool operator==(const boost::reference_wrapper<T>& lhs, const T& rhs) { return boost::unwrap_ref(lhs)==rhs; } template<class T> bool operator==(const T& lhs, const boost::reference_wrapper<T>& rhs) { return lhs==boost::unwrap_ref(rhs); } There's probably a good reason why reference_wrapper behaves the way it does (possibly to accomodate non-POD types without comparison operators?). Maybe there already is an elegant solution and I just haven't found it.
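
    One way to sidestep the operator overloads entirely is to search with std::find_if and a small predicate that does the unwrapping explicitly - a sketch (functor style to stay pre-C++11, name illustrative):

      struct equals_unwrapped {
          const Integer& target;
          explicit equals_unwrapped(const Integer& t) : target(t) {}
          bool operator()(const boost::reference_wrapper<Integer>& r) const {
              return boost::unwrap_ref(r) == target;
          }
      };

      std::cout << (std::find_if(js.begin(), js.end(), equals_unwrapped(j)) != js.end())
                << std::endl;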

    Read the article

  • Fread binary file dynamic size string [C]

    - by Blackbinary
    I've been working on this assignment, where I need to read in "records" and write them to a file, and then have the ability to read/find them later. On each run of the program, the user can decide to write a new record, or read an old record (either by Name or #) The file is binary, here is its definition: typedef struct{ char * name; char * address; short addressLength, nameLength; int phoneNumber; }employeeRecord; employeeRecord record; The way the program works, it will store the structure, then the name, then the address. Name and address are dynamically allocated, which is why it is necessary to read the structure first to find the size of the name and address, allocate memory for them, then read them into that memory. For debugging purposes I have two programs at the moment. I have my file writing program, and file reading. My actual problem is this, when I read a file I have written, i read in the structure, print out the phone # to make sure it works (which works fine), and then fread the name (now being able to use record.nameLength which reports the proper value too). Fread however, does not return a usable name, it returns blank. I see two problems, either I haven't written the name to the file correctly, or I haven't read it in correctly. Here is how i write to the file: where fp is the file pointer. record.name is a proper value, so is record.nameLength. Also i am writing the name including the null terminator. (e.g. 'Jack\0') fwrite(&record,sizeof record,1,fp); fwrite(record.name,sizeof(char),record.nameLength,fp); fwrite(record.address,sizeof(char),record.addressLength,fp); And i then close the file. here is how i read the file: fp = fopen("employeeRecord","r"); fread(&record,sizeof record,1,fp); printf("Number: %d\n",record.phoneNumber); char *nameString = malloc(sizeof(char)*record.nameLength); printf("\nName Length: %d",record.nameLength); fread(nameString,sizeof(char),record.nameLength,fp); printf("\nName: %s",nameString); Notice there is some debug stuff in there (name length and number, both of which are correct). So i know the file opened properly, and I can use the name length fine. Why then is my output blank, or a newline, or something like that? (The output is just Name: with nothing after it, and program finishes just fine) Thanks for the help.
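
    Two things are worth checking here, sketched below: open the file in binary mode ("wb"/"rb" rather than "w"/"r") so nothing in the raw struct bytes gets translated, and make sure the buffer handed to printf("%s") is NUL-terminated - allocating one extra byte and terminating after fread is the defensive form, in case the stored nameLength does not include the terminator:

      /* Sketch of the read side (the write side would use "wb" the same way) */
      fp = fopen("employeeRecord", "rb");                 /* "rb", not "r" */
      fread(&record, sizeof record, 1, fp);

      char *nameString = malloc(record.nameLength + 1);   /* +1 for a safety terminator */
      fread(nameString, 1, record.nameLength, fp);
      nameString[record.nameLength] = '\0';
      printf("Name: %s\n", nameString);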

    Read the article

  • error C2065: 'CComQIPtr' : undeclared identifier

    - by Ken Smith
    I'm still feeling my way around C++, and am a complete ATL newbie, so I apologize if this is a basic question. I'm starting with an existing VC++ executable project that has functionality I'd like to expose as an ActiveX object (while sharing as much of the source as possible between the two projects). I've approached this by adding an ATL project to the solution in question, and in that project have referenced all the .h and .cpp files from the executable project, added all the appropriate references, and defined all the preprocessor macros. So far so good. But I'm getting a compiler error in one file (HideDesktop.cpp). The relevant parts look like this: #include "stdafx.h" #define WIN32_LEAN_AND_MEAN #include <Windows.h> #include <WinInet.h> // Shell object uses INTERNET_MAX_URL_LENGTH (go figure) #if _MSC_VER < 1400 #define _WIN32_IE 0x0400 #endif #include <atlbase.h> // ATL smart pointers #include <shlguid.h> // shell GUIDs #include <shlobj.h> // IActiveDesktop #include "stdhdrs.h" struct __declspec(uuid("F490EB00-1240-11D1-9888-006097DEACF9")) IActiveDesktop; #define PACKVERSION(major,minor) MAKELONG(minor,major) static HRESULT EnableActiveDesktop(bool enable) { CoInitialize(NULL); HRESULT hr; CComQIPtr<IActiveDesktop, &IID_IActiveDesktop> pIActiveDesktop; // <- Problematic line (throws errors 2065 and 2275) hr = pIActiveDesktop.CoCreateInstance(CLSID_ActiveDesktop, NULL, CLSCTX_INPROC_SERVER); if (!SUCCEEDED(hr)) { return hr; } COMPONENTSOPT opt; opt.dwSize = sizeof(opt); opt.fActiveDesktop = opt.fEnableComponents = enable; hr = pIActiveDesktop->SetDesktopItemOptions(&opt, 0); if (!SUCCEEDED(hr)) { CoUninitialize(); // pIActiveDesktop->Release(); return hr; } hr = pIActiveDesktop->ApplyChanges(AD_APPLY_REFRESH); CoUninitialize(); // pIActiveDesktop->Release(); return hr; } This code is throwing the following compiler errors: error C2065: 'CComQIPtr' : undeclared identifier error C2275: 'IActiveDesktop' : illegal use of this type as an expression error C2065: 'pIActiveDesktop' : undeclared identifier The two weird bits: (1) CComQIPtr is defined in atlcomcli.h, which is included in atlbase.h, which is included in HideDesktop.cpp; and (2) this file is only throwing these errors when it's referenced in my new ATL/AX project: it's not throwing them in the original executable project, even though they have basically the same preprocessor definitions. (The ATL AX project, naturally enough, defines _ATL_DLL, but I can't see where that would make a difference.) My current workaround is to use a normal "dumb" pointer, like so: IActiveDesktop *pIActiveDesktop; HRESULT hr = ::CoCreateInstance(CLSID_ActiveDesktop, NULL, // no outer unknown CLSCTX_INPROC_SERVER, IID_IActiveDesktop, (void**)&pIActiveDesktop); And that works, provided I remember to release it. But I'd rather be using the ATL smart stuff. Any thoughts?
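
    One quick thing to rule out (an assumption, not a confirmed cause): ATL's classes live in the ATL namespace, and atlbase.h normally injects a using-directive for it unless _ATL_NO_AUTOMATIC_NAMESPACE is defined somewhere in the project. If the new ATL project defines that macro, the name is only reachable fully qualified:

      // Sketch: qualify the smart pointer explicitly to test the namespace theory
      ATL::CComQIPtr<IActiveDesktop, &IID_IActiveDesktop> pIActiveDesktop;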

    Read the article

  • Variable sized packet structs with vectors

    - by Rev316
    Lately I've been diving into network programming, and I'm having some difficulty constructing a packet with a variable "data" property. Several prior questions have helped tremendously, but I'm still lacking some implementation details. I'm trying to avoid using variable sized arrays, and just use a vector. But I can't get it to be transmitted correctly, and I believe it's somewhere during serialization. Now for some code. Packet Header class Packet { public: void* Serialize(); bool Deserialize(void *message); unsigned int sender_id; unsigned int sequence_number; std::vector<char> data; }; Packet ImpL typedef struct { unsigned int sender_id; unsigned int sequence_number; std::vector<char> data; } Packet; void* Packet::Serialize(int size) { Packet* p = (Packet *) malloc(8 + 30); p->sender_id = htonl(this->sender_id); p->sequence_number = htonl(this->sequence_number); p->data.assign(size,'&'); //just for testing purposes } bool Packet::Deserialize(void *message) { Packet *s = (Packet*)message; this->sender_id = ntohl(s->sender_id); this->sequence_number = ntohl(s->sequence_number); this->data = s->data; } During execution, I simply create a packet, assign it's members, and send/receive accordingly. The above methods are only responsible for serialization. Unfortunately, the data never gets transferred. Couple of things to point out here. I'm guessing the malloc is wrong, but I'm not sure how else to compute it (i.e. what other value it would be). Other than that, I'm unsure of the proper way to use a vector in this fashion, and would love for someone to show me how (code examples please!) :) Edit: I've awarded the question to the most comprehensive answer regarding the implementation with a vector data property. Appreciate all the responses!
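
    A struct that contains a std::vector cannot be sent by casting the object to bytes: the vector holds a pointer to heap storage, so the payload never makes it into the packet. One common approach is to flatten the header fields and the payload into one contiguous byte buffer and send that; a sketch under that assumption (signatures differ from the originals and the names are illustrative):

      // Assumes <vector>, <cstring>, <stdint.h> and htonl/ntohl are available.
      std::vector<char> Packet::Serialize() const {
          std::vector<char> buf;
          uint32_t sender = htonl(sender_id);
          uint32_t seq    = htonl(sequence_number);
          buf.insert(buf.end(), (const char*)&sender, (const char*)&sender + sizeof sender);
          buf.insert(buf.end(), (const char*)&seq,    (const char*)&seq    + sizeof seq);
          buf.insert(buf.end(), data.begin(), data.end());  // payload follows the header
          return buf;                                       // send &buf[0], buf.size() bytes
      }

      bool Packet::Deserialize(const char* bytes, size_t len) {
          if (len < 8) return false;                        // header is two 32-bit fields
          uint32_t sender, seq;
          std::memcpy(&sender, bytes,     sizeof sender);
          std::memcpy(&seq,    bytes + 4, sizeof seq);
          sender_id       = ntohl(sender);
          sequence_number = ntohl(seq);
          data.assign(bytes + 8, bytes + len);              // the rest is the payload
          return true;
      }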

    Read the article

  • Problem with combination boost::exception and boost::variant

    - by Rick
    Hello all, I have strange problem with two-level variant struct when boost::exception is included. I have following code snippet: #include <boost/variant.hpp> #include <boost/exception/all.hpp> typedef boost::variant< int > StoredValue; typedef boost::variant< StoredValue > ExpressionItem; inline std::ostream& operator << ( std::ostream & os, const StoredValue& stvalue ) { return os;} inline std::ostream& operator << ( std::ostream & os, const ExpressionItem& stvalue ) { return os; } When I try to compile it, I have following error: boost/exception/detail/is_output_streamable.hpp(45): error C2593: 'operator <<' is ambiguous test.cpp(11): could be 'std::ostream &operator <<(std::ostream &,const ExpressionItem &)' [found using argument-dependent lookup] test.cpp(8): or 'std::ostream &operator <<(std::ostream &,const StoredValue &)' [found using argument-dependent lookup] 1> while trying to match the argument list '(std::basic_ostream<_Elem,_Traits>, const boost::error_info<Tag,T>)' 1> with 1> [ 1> _Elem=char, 1> _Traits=std::char_traits<char> 1> ] 1> and 1> [ 1> Tag=boost::tag_original_exception_type, 1> T=const type_info * 1> ] Code snippet is simplified as much as possible, in the real code are structures much more complicated and each variant has five sub-types. When i remove #include and try following test snippet, program is compiled correctly: void TestVariant() { ExpressionItem test; std::stringstream str; str << test; } Could someone please advise me how to define operators << in order to function even when using boost::Exception ? Thanks and regards Rick
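
    One way to keep the streaming logic out of the overload set altogether (a sketch only - it sidesteps the ambiguity rather than explaining it) is to print through a visitor with boost::apply_visitor instead of defining operator<< for both variant types:

      struct print_visitor : boost::static_visitor<> {
          std::ostream& os;
          explicit print_visitor(std::ostream& o) : os(o) {}
          void operator()(int value) const { os << value; }      // leaf type of StoredValue
          void operator()(const StoredValue& inner) const {      // nested variant: recurse
              boost::apply_visitor(*this, inner);
          }
      };

      // usage:
      ExpressionItem item;
      print_visitor printer(std::cout);
      boost::apply_visitor(printer, item);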

    Read the article

  • What is the proper use of boost::fusion::push_back?

    - by Kyle
    // ... snipped includes for iostream and fusion ... namespace fusion = boost::fusion; class Base { protected: int x; public: Base() : x(0) {} void chug() { x++; cout << "I'm a base.. x is now " << x << endl; } }; class Alpha : public Base { public: void chug() { x += 2; cout << "Hi, I'm an alpha, x is now " << x << endl; } }; class Bravo : public Base { public: void chug() { x += 3; cout << "Hello, I'm a bravo; x is now " << x << endl; } }; struct chug { template<typename T> void operator()(T& t) const { t->chug(); } }; int main() { typedef fusion::vector<Base*, Alpha*, Bravo*, Base*> Stuff; Stuff stuff(new Base, new Alpha, new Bravo, new Base); fusion::for_each(stuff, chug()); // Mutates each element in stuff as expected /* Output: I'm a base.. x is now 1 Hi, I'm an alpha, x is now 2 Hello, I'm a bravo; x is now 3 I'm a base.. x is now 1 */ cout << endl; // If I don't put 'const' in front of Stuff... typedef fusion::result_of::push_back<const Stuff, Alpha*>::type NewStuff; // ... then this complains because it wants stuff to be const: NewStuff newStuff = fusion::push_back(stuff, new Alpha); // ... But since stuff is now const, I can no longer mutate its elements :( fusion::for_each(newStuff, chug()); return 0; }; How do I get for_each(newStuff, chug()) to work? (Note: I'm only assuming from the overly brief documentation on boost::fusion that I am supposed to create a new vector every time I call push_back.)
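
    fusion::push_back does not build a new container - it returns a lazy view over the (const) source sequence, which is why the elements come back const. Materializing the view with fusion::as_vector gives an independent sequence whose elements can be mutated again; a sketch of that approach:

      #include <boost/fusion/include/as_vector.hpp>

      typedef fusion::result_of::as_vector<
                  fusion::result_of::push_back<const Stuff, Alpha*>::type
              >::type NewStuff;

      NewStuff newStuff = fusion::as_vector(fusion::push_back(stuff, new Alpha));
      fusion::for_each(newStuff, chug());   // elements are plain pointers again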

    Read the article

  • How do I destruct data associated with an object after the object no longer exists?

    - by Phineas
    I'm creating a class (say, C) that associates data (say, D) with an object (say, O). When O is destructed, O will notify C that it soon will no longer exist :( ... Later, when C feels it is the right time, C will let go of what belonged to O, namely D. If D can be any type of object, what's the best way for C to be able to execute "delete D;"? And what if D is an array of objects? My solution is to have D derive from a base class that C has knowledge of. When the time comes, C calls delete on a pointer to the base class. I've also considered storing void pointers and calling delete, but I found out that's undefined behavior and doesn't call D's destructor. I considered that templates could be a novel solution, but I couldn't work that idea out. Here's what I have so far for C, minus some details: // This class is C in the above description. There may be many instances of C. class Context { public: // D will inherit from this class class Data { public: virtual ~Data() {} }; Context(); ~Context(); // Associates an owner (O) with its data (D) void add(const void* owner, Data* data); // O calls this when he knows its the end (O's destructor). // All instances of C are now aware that O is gone and its time to get rid // of all associated instances of D. static void purge (const void* owner); // This is called periodically in the application. It checks whether // O has called purge, and calls "delete D;" void refresh(); // Side note: sometimes O needs access to D Data *get (const void *owner); private: // Used for mapping owners (O) to data (D) std::map _data; }; // Here's an example of O class Mesh { public: ~Mesh() { Context::purge(this); } void init(Context& c) const { Data* data = new Data; // GL initialization here c.add(this, new Data); } void render(Context& c) const { Data* data = c.get(this); } private: // And here's an example of D struct Data : public Context::Data { ~Data() { glDeleteBuffers(1, &vbo); glDeleteTextures(1, &texture); } GLint vbo; GLint texture; }; }; P.S. If you're familiar with computer graphics and VR, I'm creating a class that separates an object's per-context data (e.g. OpenGL VBO IDs) from its per-application data (e.g. an array of vertices) and frees the per-context data at the appropriate time (when the matching rendering context is current).
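
    An alternative to requiring a common base class (sketched below, member names illustrative): let the smart pointer remember how to destroy D. A shared_ptr<void> captures the correct deleter at construction time, so destroying an arbitrary type needs no inheritance from Context::Data at all:

      #include <map>
      #include <boost/shared_ptr.hpp>   // or std::tr1::shared_ptr

      class Context {
      public:
          template <typename D>
          void add(const void* owner, D* data) {
              // the shared_ptr<void> stores D's deleter, so ~D() runs when the entry is dropped
              m_data[owner] = boost::shared_ptr<void>(data);
          }
          void purge_now(const void* owner) { m_data.erase(owner); }  // destroys the data
      private:
          std::map<const void*, boost::shared_ptr<void> > m_data;
      };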

    Read the article

  • compiling a program to run in DOS mode

    - by dygi
    I wrote a simple program to run in DOS mode. Everything works under the emulated console in Win XP / Vista / Seven, but not in DOS. The error says: this program cannot be run in DOS mode. I wonder whether that is a problem with compiler flags or something bigger. For programming I use Code::Blocks v8.02 with these compilation settings: -Wall -W -pedantic -pedantic-errors in Project \ Build options \ Compiler settings. I've tried a clean DOS mode, booting from CD, and also setting up DOS in a virtual machine. The same error appears. Should I turn on some more compiler flags? Some specific 386 / 486 optimizations? UPDATE OK, I've downloaded, installed and configured DJGPP. I even resolved some problems with libs and includes. I still have two questions. 1) I can't compile code which calls _strdate and _strtime; I've double-checked the includes. MSDN says it needs time.h, but the error still says: _strdate was not declared in this scope. I even tried to add std::_strdate, but then I have 4, not 2, errors saying the same. 2) The second piece of code is about gotoxy; it looks like this: #include <windows.h> void gotoxy(int x, int y) { COORD position; position.X = x; position.Y = y; SetConsoleCursorPosition(GetStdHandle(STD_OUTPUT_HANDLE), position); } The error says there is no windows.h, so I've put it in place, but then there are many more errors saying something is missing from windows.h. I suppose it won't work because this function is strictly for Windows, right? Is there any way to write a similar gotoxy for DOS? UPDATE2 1) Solved using time() instead of _strdate() and _strtime(); here's the code: time_t rawtime; struct tm * timeinfo; char buffer [20]; time ( &rawtime ); timeinfo = localtime ( &rawtime ); strftime (buffer,80,"%Y.%m.%d %H:%M:%S\0",timeinfo); string myTime(buffer); It now compiles under DJGPP. UPDATE3 Still need to solve the gotoxy code - I replaced it with some other code that compiles (under DJGPP). Thank you all for the help. I just learnt some new things about compiling (flags, old IDEs like DJGPP, OpenWatcom) and refreshed memories setting up DOS to work :--)
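
    On the gotoxy point: windows.h and SetConsoleCursorPosition are Win32-only, so they cannot work in a real DOS build, but DJGPP ships its own conio.h (in the Borland tradition), which already provides gotoxy() and clrscr(). A sketch for the DOS target, assuming the DJGPP conio is what gets linked:

      #include <conio.h>   /* DJGPP's conio: gotoxy(), clrscr(), cprintf(), ... */

      /* coordinates are 1-based, as in the old Borland conio */
      gotoxy(10, 5);
      cprintf("here");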

    Read the article

  • Using a function with reference as a function with pointers?

    - by epatel
    Today I stumbled over a piece of code that looked horrifying to me. The pieces were scattered across different files; I have tried to write the gist of it in a simple test case below. The code base is routinely scanned with FlexeLint on a daily basis, but this construct has been lying in the code since 2004. The thing is that a function implemented with a parameter passed by reference is called as a function with a parameter passed by pointer... due to a function cast. The construct has worked since 2004 on Irix, and now that it is being ported it actually works on Linux/gcc too. My question now: is this a construct one can trust? I can understand if compiler writers implement reference passing as if it were a pointer, but is it reliable? Are there hidden risks? Should I change fref(..) to use pointers and risk breaking anything in the process? What do you think? #include <iostream> using namespace std; // ---------------------------------------- // This will be passed as a reference in fref(..) struct string_struct { char str[256]; }; // ---------------------------------------- // Using pointer here! void fptr(const char *str) { cout << "fptr: " << str << endl; } // ---------------------------------------- // Using reference here! void fref(string_struct &str) { cout << "fref: " << str.str << endl; } // ---------------------------------------- // Cast to f(const char*) and call with pointer void ftest(void (*fin)()) { void (*fcall)(const char*) = (void(*)(const char*))fin; fcall("Hello!"); } // ---------------------------------------- // Let's go for a test int main() { ftest((void (*)())fptr); // test with fptr that's using pointer ftest((void (*)())fref); // test with fref that's using reference return 0; }

    Read the article

  • XPointers in SVG

    - by Nycto
    I've been trying to get XPointer URIs working in an SVG file, but haven't had any luck so far. After trying something more complicated and failing, I simplified it down to just referencing an ID. However, this still fails. The spec seems pretty clear about this implementation: http://www.w3.org/TR/SVG/struct.html#URIReference I found an example online of what should be a working XPointer reference within an svg document. Here is the Original. Here is the version I copied out: <?xml version="1.0" standalone="no"?> <!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd"> <svg width="500" height="200" version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"> <defs> <rect id="simpleRect" width="100px" height="75px"/> </defs> <use xlink:href="#simpleRect" x="50" y="50" style="fill:red"/> <use xlink:href="#xpointer(id('simpleRect'))" x="250" y="50" style="fill:yellow"/> </svg> This should display two rectangles... one red and one yellow. I tried rendering with Firefox 3.6 and Inkscape 0.47. No success. Only the Red rectangle shows. What am I missing? Thanks for any help you can offer

    Read the article

  • Inserting newlines into a GtkTextView widget (GTK+ programming)

    - by Mark Roberts
    I've got a button which when clicked copies and appends the text from a GtkEntry widget into a GtkTextView widget. (This code is a modified version of an example found in the "The Text View Widget" chapter of Foundations of GTK+ Development.) I'm looking to insert a newline character before the text which gets copied and appended, such that each line of text will be on its own line in the GtkTextView widget. How would I do this? I'm brand new to GTK+. Here's the code sample: #include <gtk/gtk.h> typedef struct { GtkWidget *entry, *textview; } Widgets; static void insert_text (GtkButton*, Widgets*); int main (int argc, char *argv[]) { GtkWidget *window, *scrolled_win, *hbox, *vbox, *insert; Widgets *w = g_slice_new (Widgets); gtk_init (&argc, &argv); window = gtk_window_new (GTK_WINDOW_TOPLEVEL); gtk_window_set_title (GTK_WINDOW (window), "Text Iterators"); gtk_container_set_border_width (GTK_CONTAINER (window), 10); gtk_widget_set_size_request (window, -1, 200); w->textview = gtk_text_view_new (); w->entry = gtk_entry_new (); insert = gtk_button_new_with_label ("Insert Text"); g_signal_connect (G_OBJECT (insert), "clicked", G_CALLBACK (insert_text), (gpointer) w); scrolled_win = gtk_scrolled_window_new (NULL, NULL); gtk_container_add (GTK_CONTAINER (scrolled_win), w->textview); hbox = gtk_hbox_new (FALSE, 5); gtk_box_pack_start_defaults (GTK_BOX (hbox), w->entry); gtk_box_pack_start_defaults (GTK_BOX (hbox), insert); vbox = gtk_vbox_new (FALSE, 5); gtk_box_pack_start (GTK_BOX (vbox), scrolled_win, TRUE, TRUE, 0); gtk_box_pack_start (GTK_BOX (vbox), hbox, FALSE, TRUE, 0); gtk_container_add (GTK_CONTAINER (window), vbox); gtk_widget_show_all (window); gtk_main(); return 0; } /* Insert the text from the GtkEntry into the GtkTextView. */ static void insert_text (GtkButton *button, Widgets *w) { GtkTextBuffer *buffer; GtkTextMark *mark; GtkTextIter iter; const gchar *text; buffer = gtk_text_view_get_buffer (GTK_TEXT_VIEW (w->textview)); text = gtk_entry_get_text (GTK_ENTRY (w->entry)); mark = gtk_text_buffer_get_insert (buffer); gtk_text_buffer_get_iter_at_mark (buffer, &iter, mark); gtk_text_buffer_insert (buffer, &iter, text, -1); } You can compile this command (assuming the file is named file.c): gcc file.c -o file `pkg-config --cflags --libs gtk+-2.0` Thanks everybody!
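
    One way to do it (a sketch of the tail of insert_text, not the only approach): insert a "\n" at the cursor before the copied text whenever the buffer already holds something, so the first entry does not start with a blank line. gtk_text_buffer_get_char_count() tells you whether the buffer is still empty:

      mark = gtk_text_buffer_get_insert (buffer);
      gtk_text_buffer_get_iter_at_mark (buffer, &iter, mark);

      if (gtk_text_buffer_get_char_count (buffer) > 0)
          gtk_text_buffer_insert (buffer, &iter, "\n", -1);   /* start a new line first */

      gtk_text_buffer_insert (buffer, &iter, text, -1);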

    Read the article

  • Can my loop be optimized any more? (C++)

    - by Sagekilla
    Below is one of my inner loops that's run several thousand times, with input sizes of 20 - 1000 or more. Is there anything I can do to help squeeze any more performance out of this? I'm not looking to move this code to something like using tree codes (Barnes-Hut), but towards optimizing the actual calculations happening inside, since the same calculations occur in the Barnes-Hut algorithm. Any help is appreciated! typedef double real; struct Particle { Vector pos, vel, acc, jerk; Vector oldPos, oldVel, oldAcc, oldJerk; real mass; }; class Vector { private: real vec[3]; public: // Operators defined here }; real Gravity::interact(Particle *p, size_t numParticles) { PROFILE_FUNC(); real tau_q = 1e300; for (size_t i = 0; i < numParticles; i++) { p[i].jerk = 0; p[i].acc = 0; } for (size_t i = 0; i < numParticles; i++) { for (size_t j = i+1; j < numParticles; j++) { Vector r = p[j].pos - p[i].pos; Vector v = p[j].vel - p[i].vel; real r2 = lengthsq(r); real v2 = lengthsq(v); // Calculate inverse of |r|^3 real r3i = Constants::G * pow(r2, -1.5); // da = r / |r|^3 // dj = (v / |r|^3 - 3 * (r . v) * r / |r|^5 Vector da = r * r3i; Vector dj = (v - r * (3 * dot(r, v) / r2)) * r3i; // Calculate new acceleration and jerk p[i].acc += da * p[j].mass; p[i].jerk += dj * p[j].mass; p[j].acc -= da * p[i].mass; p[j].jerk -= dj * p[i].mass; // Collision estimation // Metric 1) tau = |r|^2 / |a(j) - a(i)| // Metric 2) tau = |r|^4 / |v|^4 real mij = p[i].mass + p[j].mass; real tau_est_q1 = r2 / (lengthsq(da) * mij * mij); real tau_est_q2 = (r2*r2) / (v2*v2); if (tau_est_q1 < tau_q) tau_q = tau_est_q1; if (tau_est_q2 < tau_q) tau_q = tau_est_q2; } } return sqrt(sqrt(tau_q)); }
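
    One drop-in change that usually pays off here (worth profiling rather than taking on faith): pow(r2, -1.5) goes through the general pow routine, while the same quantity can be had from a single square root. Mathematically identical, and it leaves everything else in the loop untouched:

      // r^-3 from one sqrt instead of pow(r2, -1.5)
      real rinv = 1.0 / std::sqrt(r2);
      real r3i  = Constants::G * rinv * rinv * rinv;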

    Read the article

  • Retrieve names of running processes

    - by Dave DeLong
    Hi everyone, First off, I know that similar questions have been asked, but the answers provided haven't been very helpful so far (they all recommend one of the following options). I have a user application that needs to determine if a particular process is running. Here's what I know about the process: The name The user (root) It should already be running, since it's a LaunchDaemon, which means Its parent process should be launchd (pid 1) I've tried several ways to get this, but none have worked so far. Here's what I've tried: Running ps and parsing the output. This works, but it's slow (fork/exec is expensive), and I'd like this to be as fast as possible. Using the GetBSDProcessList function listed here. This also works, but the way in which they say to retrieve the process name (accessing kp_proc.p_comm from each kinfo_proc structure) is flawed. The resulting char* only contains the first 16 characters of the process name, which can be seen in the definition of the kp_proc structure: #define MAXCOMLEN 16 //defined in param.h struct extern_proc { //defined in proc.h ...snip... char p_comm[MAXCOMLEN+1]; ...snip... }; Using libProc.h to retrieve process information: pid_t pids[1024]; int numberOfProcesses = proc_listpids(PROC_ALL_PIDS, 0, NULL, 0); proc_listpids(PROC_ALL_PIDS, 0, pids, sizeof(pids)); for (int i = 0; i < numberOfProcesses; ++i) { if (pids[i] == 0) { continue; } char name[1024]; proc_name(pids[i], name, sizeof(name)); printf("Found process: %s\n", name); } This works, except it has the same flaw as GetBSDProcessList. Only the first portion of the process name is returned. Using the ProcessManager function in Carbon: ProcessSerialNumber psn; psn.lowLongOfPSN = kNoProcess; psn.highLongOfPSN = 0; while (GetNextProcess(&psn) == noErr) { CFStringRef procName = NULL; if (CopyProcessName(&psn, &procName) == noErr) { NSLog(@"Found process: %@", (NSString *)procName); } CFRelease(procName); } This does not work. It only returns process that are registered with the WindowServer (or something like that). In other words, it only returns apps with UIs, and only for the current user. I can't use -[NSWorkspace launchedApplications], since this must be 10.5-compatible. In addition, this only returns information about applications that appear in the Dock for the current user. I know that it's possible to retrieve the name of running processes (since ps can do it), but the question is "Can I do it without forking and exec'ing ps?". Any suggestions?
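
    The 16-character truncation comes from p_comm being capped at MAXCOMLEN; libproc also exposes proc_pidpath(), which returns the full executable path, so the last path component gives the untruncated name without forking ps. A sketch of that variant (same loop as above, error handling elided):

      #include <libproc.h>
      #include <string.h>

      char path[PROC_PIDPATHINFO_MAXSIZE];
      if (proc_pidpath(pids[i], path, sizeof(path)) > 0) {
          const char *slash = strrchr(path, '/');
          const char *name  = slash ? slash + 1 : path;   /* untruncated process name */
          printf("Found process: %s\n", name);
      }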

    Read the article

  • Problem in finalizing the linked list in C#

    - by Yasin
    My code was almost finished that a maddening bug came up! when i nullify the last node to finalize the link list , it actually dumps all the links and make the first node link Null ! when i trace it , its working totally fine creating the list in the loop but after the loop is done that happens and it destroys the rest of the link by doing so, and i don't understand why something so obvious is becoming problematic! (last line) struct poly { public int coef; public int pow; public poly* link;} ; poly* start ; poly* start2; poly* p; poly* second; poly* result; poly* ptr; poly* start3; poly* q; poly* q2; private void button1_Click(object sender, EventArgs e) { string holder = ""; IntPtr newP = Marshal.AllocHGlobal(sizeof(poly)); q = (poly*)newP.ToPointer(); start = q; int i = 0; while (this.textBox1.Text[i] != ',') { holder += this.textBox1.Text[i]; i++; } q->coef = int.Parse(holder); i++; holder = ""; while (this.textBox1.Text[i] != ';') { holder += this.textBox1.Text[i]; i++; } q->pow = int.Parse(holder); holder = ""; p = start; //creation of the first node finished! i++; for (; i < this.textBox1.Text.Length; i++) { newP = Marshal.AllocHGlobal(sizeof(poly)); poly* test = (poly*)newP.ToPointer(); while (this.textBox1.Text[i] != ',') { holder += this.textBox1.Text[i]; i++; } test->coef = int.Parse(holder); holder = ""; i++; while (this.textBox1.Text[i] != ';' && i < this.textBox1.Text.Length - 1) { holder += this.textBox1.Text[i]; if (i < this.textBox1.Text.Length - 1) i++; } test->pow = int.Parse(holder); holder = ""; p->link = test; //the addresses are correct and the list is complete } p->link = null; //only the first node exists now with a null link! }
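
    The symptom described - the final assignment wiping out the links and leaving only the first node's link null - is what you would see if p never advances: every p->link = test inside the loop writes to the first node, and the closing p->link = null then overwrites that same field. Advancing p after linking each node makes the final null land on the genuine last node; a sketch of the two lines involved:

      // inside the loop, after wiring in the new node:
      p->link = test;
      p = test;          // advance, so the next node is linked from this one

      // after the loop, p is now the last node, so this terminates the list correctly:
      p->link = null;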

    Read the article

  • Runge-Kutta Method with adaptive step

    - by infoholic_anonymous
    I am implementing Runge-Kutta method with adaptive step in matlab. I get different results as compared to matlab's own ode45 and my own implementation of Runge-Kutta method with fixed step. What am I doing wrong in my code? Is it possible? function [ result ] = rk4_modh( f, int, init, h, h_min ) % % f - function handle % int - interval - pair (x_min, x_max) % init - initial conditions - pair (y1(0),y2(0)) % h_min - lower limit for h (step length) % h - initial step length % x - independent variable ( for example time ) % y - dependent variable - vertical vector - in our case ( y1, y2 ) function [ k1, k2, k3, k4, ka, y ] = iteration( f, h, x, y ) % core functionality performed within loop k1 = h * f(x,y); k2 = h * f(x+h/2, y+k1/2); k3 = h * f(x+h/2, y+k2/2); k4 = h * f(x+h, y+k3); ka = (k1 + 2*k2 + 2*k3 + k4)/6; y = y + ka; end % constants % relative error eW = 1e-10; % absolute error eB = 1e-10; s = 0.9; b = 5; % initialization i = 1; x = int(1); y = init; while true hy = y; hx = x; %algorithm [ k1, k2, k3, k4, ka, y ] = iteration( f, h, x, y ); % error estimation for j=1:2 [ hk1, hk2, hk3, hk4, hka, hy ] = iteration( f, h/2, hx, hy ); hx = hx + h/2; end err(:,i) = abs(hy - y); % step adjustment e = abs( hy ) * eW + eB; a = min( e ./ err(:,i) )^(0.2); mul = a * s; if mul >= 1 % step length admitted keepH(i) = h; k(:,:,i) = [ k1, k2, k3, k4, ka ]; previous(i,:) = [ x+h, y' ]; %' i = i + 1; if floor( x + h + eB ) == int(2) break; else h = min( [mul*h, b*h, int(2)-x] ); x = x + keepH(i-1); end else % step length requires further adjustments h = mul * h; if ( h < h_min ) error('Computation with given precision impossible'); end end end result = struct( 'val', previous, 'k', k, 'err', err, 'h', keepH ); end The function in question is: function [ res ] = fun( x, y ) % res(1) = y(2) + y(1) * ( 0.9 - y(1)^2 - y(2)^2 ); res(2) = -y(1) + y(2) * ( 0.9 - y(1)^2 - y(2)^2 ); res = res'; %' end The call is: res = rk4( @fun, [0,20], [0.001; 0.001], 0.008 ); The resulting plot for x1 : The result of ode45( @fun, [0, 20], [0.001, 0.001] ) is:

    Read the article

  • Using boost::iterator

    - by Neil G
    I wrote a sparse vector class (see #1, #2.) I would like to provide two kinds of iterators: The first set, the regular iterators, can point any element, whether set or unset. If they are read from, they return either the set value or value_type(), if they are written to, they create the element and return the lvalue reference. Thus, they are: Random Access Traversal Iterator and Readable and Writable Iterator The second set, the sparse iterators, iterate over only the set elements. Since they don't need to lazily create elements that are written to, they are: Random Access Traversal Iterator and Readable and Writable and Lvalue Iterator I also need const versions of both, which are not writable. I can fill in the blanks, but not sure how to use boost::iterator_adaptor to start out. Here's what I have so far: template<typename T> class sparse_vector { public: typedef size_t size_type; typedef T value_type; private: typedef T& true_reference; typedef const T* const_pointer; typedef sparse_vector<T> self_type; struct ElementType { ElementType(size_type i, T const& t): index(i), value(t) {} ElementType(size_type i, T&& t): index(i), value(t) {} ElementType(size_type i): index(i) {} ElementType(ElementType const&) = default; size_type index; value_type value; }; typedef vector<ElementType> array_type; public: typedef T* pointer; typedef T& reference; typedef const T& const_reference; private: size_type size_; mutable typename array_type::size_type sorted_filled_; mutable array_type data_; // lots of code for various algorithms... public: class sparse_iterator : public boost::iterator_adaptor< sparse_iterator // Derived , array_type::iterator // Base (the internal array) (this paramater does not compile! -- says expected a type, got 'std::vector::iterator'???) , boost::use_default // Value , boost::random_access_traversal_tag? // CategoryOrTraversal > class iterator_proxy { ??? }; class iterator : public boost::iterator_facade< iterator // Derived , ????? // Base , ????? // Value , boost::?????? // CategoryOrTraversal > { }; };
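
    The "expected a type, got 'std::vector::iterator'" complaint is the classic dependent-name problem: array_type is built from the template parameter T, so its nested iterator needs the typename keyword inside the iterator_adaptor argument list. A sketch of the declaration with that one change (the rest left as in the question):

      class sparse_iterator
          : public boost::iterator_adaptor<
                sparse_iterator,                      // Derived
                typename array_type::iterator,        // Base - 'typename' required here
                boost::use_default,                   // Value
                boost::random_access_traversal_tag    // CategoryOrTraversal
            >
      {
          // ...
      };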

    Read the article

  • How can we make a class with linked list recursion?

    - by epsilon_G
    Hi , I'm newbee in The C++ heaven and the OOP, ... I'd like to make some stuffs with the Data structures ... However,I'd like to merge two linked listes ... I made a class before "List" wich contain all what I need to programme something with List ... Add , Display .. The probleme is that I programmed the function "Merge2lists" which give us the third list .. How can I display the third list in the main program after using "Merge2lists" Plz , my prob with the Class and the OOP syntaxe ... try to give me the implementation in the main program ??? Otherwise,How can I apply function given pointer in main program wich declared in class .. Thankx class Liste { private: struct node { int elem ; node *next; }*p; public: LLC(); void Merge2lists (node* a, node * b,node *&result); ~LLC(); }; void List::Merge2lists (node* a, node * b,node *&result) { result = NULL; if (a==NULL) { result=b; return;} else if (b==NULL) { result=a; return;} if (a->elem <= b->elem) { result = a; Merge2lists(a->next, b,result->next); } else { result = b; Merge2lists(a, b->next,result->next); } return; } int main() { liste a ; a.Aff_Val(46); a.Aff_Val(54); a.add_as_first(2); a.add_as_first(1); a.Display(); /*This to displat the elemnts ... Don't care about it it's easy to make*/ list liste2; b.Add(2); b.Add(14); b.Add(16); list result; Merge2lists (a,b,result); /*The probleme is here , how can I use this in my program */
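
    Since node is private, main cannot build or hold node* values itself, so one way (a sketch only, assuming the head pointer member is p as in the class above) is to keep Merge2lists as an internal detail and expose a list-level wrapper that main can call on Liste objects:

      class Liste {
          // ... as above ...
      public:
          void MergeFrom(Liste& a, Liste& b) {
              Merge2lists(a.p, b.p, this->p);   // private members of the same class are accessible here
          }
      };

      // in main():
      Liste result;
      result.MergeFrom(a, b);
      result.Display();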

    Read the article

  • Idiomatic use of auto_ptr to transfer ownership to a container

    - by heycam
    I'm refreshing my C++ knowledge after not having used it in anger for a number of years. In writing some code to implement some data structure for practice, I wanted to make sure that my code was exception safe. So I've tried to use std::auto_ptrs in what I think is an appropriate way. Simplifying somewhat, this is what I have: class Tree { public: ~Tree() { /* delete all Node*s in the tree */ } void insert(const string& to_insert); ... private: struct Node { ... vector<Node*> m_children; }; Node* m_root; }; template<T> void push_back(vector<T*>& v, auto_ptr<T> x) { v.push_back(x.get()); x.release(); } void Tree::insert(const string& to_insert) { Node* n = ...; // find where to insert the new node ... push_back(n->m_children, auto_ptr<Node>(new Node(to_insert)); ... } So I'm wrapping the function that would put the pointer into the container, vector::push_back, and relying on the by-value auto_ptr argument to ensure that the Node* is deleted if the vector resize fails. Is this an idiomatic use of auto_ptr to save a bit of boilerplate in my Tree::insert? Any improvements you can suggest? Otherwise I'd have to have something like: Node* n = ...; // find where to insert the new node auto_ptr<Node> new_node(new Node(to_insert)); n->m_children.push_back(new_node.get()); new_node.release(); which kind of clutters up what would have been a single line of code if I wasn't worrying about exception safety and a memory leak. (Actually I was wondering if I could post my whole code sample (about 300 lines) and ask people to critique it for idiomatic C++ usage in general, but I'm not sure whether that kind of question is appropriate on stackoverflow.)
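
    If Boost is available, another option often suggested for this situation is boost::ptr_vector, which owns its elements outright and has a push_back overload taking std::auto_ptr, so the ownership transfer reads as a single line and ~Tree() no longer needs to walk the children manually. A sketch under that assumption:

      #include <boost/ptr_container/ptr_vector.hpp>

      // member:  boost::ptr_vector<Node> m_children;   // owns and deletes the children

      // in Tree::insert():
      n->m_children.push_back(std::auto_ptr<Node>(new Node(to_insert)));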

    Read the article

  • Cached ObjectDataSource not firing Select Event even when Cache Dependency Removed

    - by John Polvora
    I have the following scenario. A Page with a DetailsView binded to an ObjectDatasource with cache-enabled. The SelectMethod is assigned at Page_Load event, depending on my User Level Logic. After assigned the selectMethod and Parameters for the ODS, if Cache not exists, then ODS will be cached the first time. The next time, the cache will be applied to the ODS and the select event don't need to be fired since the dataresult is cached. The problem is, the ODS Cache works fine, but I have a Refresh button to clear the cache and rebind the DetailsView. Am I doing correctly ? Below is my code. <asp:DetailsView ID="DetailsView1" runat="server" DataSourceID="ObjectDataSource_Summary" EnableModelValidation="True" EnableViewState="False" ForeColor="#333333" GridLines="None"> </asp:DetailsView> <asp:ObjectDataSource ID="ObjectDataSource_Summary" runat="server" SelectMethod="" TypeName="BL.BusinessLogic" EnableCaching="true"> <SelectParameters> <asp:Parameter Name="idCompany" Type="String" /> <SelectParameters> </asp:ObjectDataSource> <asp:ImageButton ID="ImageButton_Refresh" runat="server" OnClick="RefreshClick" ImageUrl="~/img/refresh.png" /> And here is the code behind public partial class Index : Page { protected void Page_Load(object sender, EventArgs e) { ObjectDataSource_Summary.SelectMethod = ""; ObjectDataSource_Summary.SelectParameters[0].DefaultValue = ""; switch (this._loginData.UserLevel) //this is a struct I use for control permissions e pages behaviour { case OperNivel.SysAdmin: case OperNivel.SysOperator: { ObjectDataSource_Summary.SelectMethod = "SystemSummary"; ObjectDataSource_Summary.SelectParameters[0].DefaultValue = "0"; break; } case OperNivel.CompanyAdmin: case OperNivel.CompanyOperator: { ObjectDataSource_Summary.SelectMethod = "CompanySummary"; ObjectDataSource_Summary.SelectParameters[0].DefaultValue = this._loginData.UserLevel.ToString(); break; } default: break; } } protected void Page_LoadComplete(object sender, EventArgs e) { if (Cache[ObjectDataSource_Summary.CacheKeyDependency] == null) { this._loginData.LoginDatetime = DateTime.Now; Session["loginData"] = _loginData; Cache[ObjectDataSource_Summary.CacheKeyDependency] = _loginData; DetailsView1.DataBind(); } } protected void RefreshClick(object sender, ImageClickEventArgs e) { Cache.Remove(ObjectDataSource_Summary.CacheKeyDependency); } } Can anyone help me? The Select() Event of the ObjectDasource is not firing even I Remove the CacheKey Dependency

    Read the article

  • Using a map with set_intersection

    - by Robin Welch
    Not used set_intersection before, but I believe it will work with maps. I wrote the following example code but it doesn't give me what I'd expect: #include <map> #include <string> #include <iostream> #include <algorithm> using namespace std; struct Money { double amount; string currency; bool operator< ( const Money& rhs ) const { if ( amount != rhs.amount ) return ( amount < rhs.amount ); return ( currency < rhs.currency ); } }; int main( int argc, char* argv[] ) { Money mn[] = { { 2.32, "USD" }, { 2.76, "USD" }, { 4.30, "GBP" }, { 1.21, "GBP" }, { 1.37, "GBP" }, { 6.74, "GBP" }, { 2.55, "EUR" } }; typedef pair< int, Money > MoneyPair; typedef map< int, Money > MoneyMap; MoneyMap map1; map1.insert( MoneyPair( 1, mn[1] ) ); map1.insert( MoneyPair( 2, mn[2] ) ); map1.insert( MoneyPair( 3, mn[3] ) ); // (3) map1.insert( MoneyPair( 4, mn[4] ) ); // (4) MoneyMap map2; map1.insert( MoneyPair( 3, mn[3] ) ); // (3) map1.insert( MoneyPair( 4, mn[4] ) ); // (4) map1.insert( MoneyPair( 5, mn[5] ) ); map1.insert( MoneyPair( 6, mn[6] ) ); map1.insert( MoneyPair( 7, mn[7] ) ); MoneyMap out; MoneyMap::iterator out_itr( out.begin() ); set_intersection( map1.begin(), map1.end(), map2.begin(), map2.end(), inserter( out, out_itr ) ); cout << "intersection has " << out.size() << " elements." << endl; return 0; } Since the pair labelled (3) and (4) appear in both maps, I was expecting that I'd get 2 elements in the intersection, but no, I get: intersection has 0 elements. I'm sure this is something to do with the comparitor on the map / pair but can't figure it out.
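
    Two things stand out in the listing (worth checking before suspecting set_intersection): the block meant to fill map2 actually calls map1.insert each time, so map2 stays empty and an empty intersection is the correct answer; and mn[7] reads past the end of the array, which has seven elements (valid indices 0..6). A sketch of the corrected map2 block:

      MoneyMap map2;
      map2.insert( MoneyPair( 3, mn[3] ) );   // was map1.insert in the original
      map2.insert( MoneyPair( 4, mn[4] ) );
      map2.insert( MoneyPair( 5, mn[5] ) );
      map2.insert( MoneyPair( 6, mn[6] ) );   // mn[7] would be out of bounds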

    Read the article

  • Creating .lib files in CUDA Toolkit 5

    - by user1683586
    I am taking my first faltering steps with CUDA Toolkit 5.0 RC using VS2010. Separate compilation has me confused. I tried to set up a project as a Static Library (.lib), but when I try to build it, it does not create a device-link.obj and I don't understand why. For instance, there are 2 files: A caller function that uses a function f #include "thrust\host_vector.h" #include "thrust\device_vector.h" using namespace thrust::placeholders; extern __device__ double f(double x); struct f_func { __device__ double operator()(const double& x) const { return f(x); } }; void test(const int len, double * data, double * res) { thrust::device_vector<double> d_data(data, data + len); thrust::transform(d_data.begin(), d_data.end(), d_data.begin(), f_func()); thrust::copy(d_data.begin(),d_data.end(), res); } And a library file that defines f __device__ double f(double x) { return x+2.0; } If I set the option generate relocatable device code to No, the first file will not compile due to unresolved extern function f. If I set it to -rdc, it will compile, but does not produce a device-link.obj file and so the linker fails. If I put the definition of f into the first file and delete the second it builds successfully, but now it isn't separate compilation anymore. How can I build a static library like this with separate source files? [Updated here] I called the first caller file "caller.cu" and the second "libfn.cu". The compiler lines that VS2010 outputs (which I don't fully understand) are (for caller): nvcc.exe -ccbin "C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\bin" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v5.0\include" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v5.0\include" -G --keep-dir "Debug" -maxrregcount=0 --machine 32 --compile -g -D_MBCS -Xcompiler "/EHsc /W3 /nologo /Od /Zi /RTC1 /MDd " -o "Debug\caller.cu.obj" "G:\Test_Linking\caller.cu" -clean and the same for libfn, then: nvcc.exe -gencode=arch=compute_20,code=\"sm_20,compute_20\" --use-local-env --cl-version 2010 -ccbin "C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\bin" -rdc=true -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v5.0\include" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v5.0\include" -G --keep-dir "Debug" -maxrregcount=0 --machine 32 --compile -g -D_MBCS -Xcompiler "/EHsc /W3 /nologo /Od /Zi /RTC1 /MDd " -o "Debug\caller.cu.obj" "G:\Test_Linking\caller.cu" and again for libfn.
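
    With -rdc=true the device-side link is a separate step that normally runs when the final executable is linked, not when a static library is archived, which matches the missing device-link.obj here; the project that consumes the .lib typically has to perform the device link as well, or it can be done by hand with nvcc's --device-link (-dlink). A command-line sketch of the manual flow, with illustrative file names:

      nvcc -arch=sm_20 -rdc=true -c caller.cu libfn.cu
      nvcc -arch=sm_20 -dlink caller.obj libfn.obj -o device_link.obj
      lib /OUT:mylib.lib caller.obj libfn.obj device_link.obj

    (The first step produces relocatable device code, the second resolves the cross-file __device__ call, and the librarian then archives all three object files.)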

    Read the article
