Search Results

Search found 34893 results on 1396 pages for 'const method'.

  • Removing "Using temporary; Using filesort" from this MySQL select+join+group by query

    - by claytontstanley
    I have the following query:

        select t.Chunk as LeftChunk,
               t.ChunkHash as LeftChunkHash,
               q.Chunk as RightChunk,
               q.ChunkHash as RightChunkHash,
               count(t.ChunkHash) as ChunkCount
        from chunksubset as t
        join chunksubset as q on t.ID = q.ID
        group by LeftChunkHash, RightChunkHash

    And the following EXPLAIN output:

        id  select_type  table    type    possible_keys                key          key_len  ref                      rows    Extra
        1   SIMPLE       subsets  ref     PRIMARY,IDIndex,SubsetIndex  SubsetIndex  767      const                    522014  Using where; Using temporary; Using filesort
        1   SIMPLE       subsets  eq_ref  PRIMARY,IDIndex,SubsetIndex  PRIMARY      771      sotero.subsets.Id,const  1       Using where; Using index
        1   SIMPLE       c        ref     IDIndex                      IDIndex      4        sotero.subsets.Id        12      Using where
        1   SIMPLE       c        ref     IDIndex                      IDIndex      4        sotero.subsets.Id        12

    Note the "Using temporary; Using filesort". When this query runs, I quickly run out of RAM (presumably because of the temp table), then the HDD kicks in and the query slows to a halt. I thought it might be an index issue, so I started adding a few that seemed to make sense:

        Table   Non_unique  Key_name                   Seq_in_index  Column_name  Collation  Cardinality  Sub_part  Packed  Null  Index_type
        chunks  0           PRIMARY                    1             ChunkId      A          17796190     NULL      NULL          BTREE
        chunks  1           ChunkHashIndex             1             ChunkHash    A          243783       NULL      NULL          BTREE
        chunks  1           IDIndex                    1             Id           A          1483015      NULL      NULL          BTREE
        chunks  1           ChunkIndex                 1             Chunk        A          243783       NULL      NULL          BTREE
        chunks  1           ChunkTypeIndex             1             ChunkType    A          2            NULL      NULL          BTREE
        chunks  1           chunkHashByChunkIDIndex    1             ChunkHash    A          243783       NULL      NULL          BTREE
        chunks  1           chunkHashByChunkIDIndex    2             ChunkId      A          17796190     NULL      NULL          BTREE
        chunks  1           chunkHashByChunkTypeIndex  1             ChunkHash    A          243783       NULL      NULL          BTREE
        chunks  1           chunkHashByChunkTypeIndex  2             ChunkType    A          261708       NULL      NULL          BTREE
        chunks  1           chunkHashByIDIndex         1             ChunkHash    A          243783       NULL      NULL          BTREE
        chunks  1           chunkHashByIDIndex         2             Id           A          17796190     NULL      NULL          BTREE

    But the query is still using the temporary table. The db engine is MyISAM. How can I get rid of the "Using temporary; Using filesort" in this query? Just changing to InnoDB without explaining the underlying cause is not a particularly satisfying answer. Besides, if the solution is just to add the proper index, then that's much easier than migrating to another db engine.

  • [C++] My First Go With Function Templates

    - by bobber205
    Thought it was pretty straightforward, but I get an "iterator not dereferencable" error when running the code below. What's wrong?

        template<typename T>
        struct SumsTo : public std::binary_function<T, T, bool>
        {
            int myInt;

            SumsTo(int a) { myInt = a; }

            bool operator()(const T& l, const T& r)
            {
                cout << l << " + " << r;
                if ((l + r) == myInt) {
                    cout << " does add to " << myInt;
                } else {
                    cout << " DOES NOT add to " << myInt;
                }
                return true;
            }
        };

        void main()
        {
            list<int> l1;
            l1.push_back(1);
            l1.push_back(2);
            l1.push_back(3);
            l1.push_back(4);

            list<int> l2;
            l2.push_back(9);
            l2.push_back(8);
            l2.push_back(7);
            l2.push_back(6);

            transform(l1.begin(), l1.end(), l2.begin(), l2.end(), SumsTo<int>(10));
        }
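
    A note on the likely cause: the five-argument overload of std::transform is transform(first1, last1, first2, d_first, binary_op); the fourth argument is an output iterator, not the end of the second range. As written, the call writes each result through l2.end(), which is exactly a non-dereferenceable iterator. A minimal sketch of one corrected call, assuming the SumsTo functor from the question (and its headers) is in scope; the results container is illustrative:

        #include <algorithm>
        #include <iterator>
        #include <list>
        #include <vector>

        int main()  // int, not void: "void main" is non-standard
        {
            std::list<int> l1;
            l1.push_back(1); l1.push_back(2); l1.push_back(3); l1.push_back(4);

            std::list<int> l2;
            l2.push_back(9); l2.push_back(8); l2.push_back(7); l2.push_back(6);

            std::vector<bool> results;

            // back_inserter supplies a valid output iterator; the original
            // code passed l2.end() here, which transform then wrote through.
            std::transform(l1.begin(), l1.end(), l2.begin(),
                           std::back_inserter(results), SumsTo<int>(10));
            return 0;
        }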

  • Template compilation error in Sun Studio 12

    - by Jagannath
    We are migrating to Sun Studio 12.1, and with the new compiler [CC: Sun C++ 5.10 SunOS_sparc 2009/06/03] I am getting a compilation error in code that compiled fine with an earlier version of the Sun compiler [CC: Sun WorkShop 6 update 2 C++ 5.3 2001/05/15]. This is the compilation error I get:

        "Sample.cc": Error: Could not find a match for LoopThrough(int[2]) needed in main().
        1 Error(s) detected.
        * Error code 1.

    CODE:

        #include <iostream>

        #define PRINT_TRACE(STR) \
            std::cout << __FILE__ << ":" << __LINE__ << ":" << STR << "\n";

        template<size_t SZ>
        void LoopThrough(const int (&Item)[SZ])
        {
            PRINT_TRACE("Specialized version");
            for (size_t index = 0; index < SZ; ++index) {
                std::cout << Item[index] << "\n";
            }
        }

        /*
        template<typename Type, size_t SZ>
        void LoopThrough(const Type (&Item)[SZ])
        {
            PRINT_TRACE("Generic version");
        }
        */

        int main()
        {
            {
                int arr[] = { 1, 2 };
                LoopThrough(arr);
            }
        }

    If I uncomment the code with the generic version, the code compiles fine and the generic version is called. I don't see this problem with MSVC 2010 (with extensions disabled), and the same case with ideone here: the specialized version of the function is called. Now the question is, is this a bug in the Sun compiler? If yes, how could we file a bug report?

  • STL vector performance

    - by iAdam
    The STL vector class stores a copy of the object, using the copy constructor, each time I call push_back. Wouldn't that slow down the program? I could have a custom linked-list kind of class which deals with pointers to objects. Though it would not have some benefits of the STL, it still should be faster. See this code below:

        #include <vector>
        #include <iostream>
        #include <cstring>
        using namespace std;

        class myclass
        {
        public:
            char* text;

            myclass(const char* val)
            {
                text = new char[10];
                strcpy(text, val);
            }

            myclass(const myclass& v)
            {
                cout << "copy\n";
                // copy data
            }
        };

        int main()
        {
            vector<myclass> list;
            myclass m1("first");
            myclass m2("second");

            cout << "adding first...";
            list.push_back(m1);
            cout << "adding second...";
            list.push_back(m2);

            cout << "returning...";
            myclass& ret1 = list.at(0);
            cout << ret1.text << endl;
            return 0;
        }

    Its output comes out as:

        adding first...copy
        adding second...copy copy

    The output shows the copy constructor is called both times when adding, and even when retrieving the value. Does it have any effect on performance, especially when we have larger objects?
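
    A note on that output: the second "copy" during the second push_back comes from reallocation (the vector copy-constructs its existing elements into new storage), not from at(), which returns a reference without copying. A hedged sketch of the usual mitigation, reserving capacity up front (names are illustrative):

        #include <string>
        #include <vector>

        struct Big { std::string payload; };

        int main()
        {
            std::vector<Big> v;
            v.reserve(100);     // pre-allocate: no reallocation copies while size <= 100

            Big b;
            v.push_back(b);     // one copy of b, no shuffling of existing elements
            // In C++11 and later, v.push_back(std::move(b)) moves instead of
            // copying, and v.emplace_back(...) constructs in place.
            return 0;
        }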

  • Constraint to array dimension in C language

    - by Summer_More_More_Tea
    This is a KMP algorithm I once coded:

        int KMP(const char *original, int o_len, const char *substring, int s_len)
        {
            if (o_len < s_len)
                return -1;

            int k = 0;
            int cur = 1;
            int fail[s_len];

            fail[k] = -1;
            while (cur < s_len) {
                k = cur - 1;
                do {
                    if (substring[cur] == substring[k]) {
                        fail[cur] = k;
                        break;
                    } else {
                        k = fail[k] + 1;
                    }
                } while (k);

                if (!k && (substring[cur] != substring[0])) {
                    fail[cur] = -1;
                } else if (!k) {
                    fail[cur] = 0;
                }
                cur++;
            }

            k = 0;
            cur = 0;
            while ((k < s_len) && (cur < o_len)) {
                if (original[cur] == substring[k]) {
                    cur++;
                    k++;
                } else {
                    if (k == 0) {
                        cur++;
                    } else {
                        k = fail[k - 1] + 1;
                    }
                }
            }

            if (k == s_len)
                return cur - k;
            else
                return -1;
        }

    When I reviewed it this morning, I found it strange that an integer array is defined as int fail[s_len]. Doesn't the specification require the dimension of an array to be a compile-time constant? How can this code pass compilation? By the way, my gcc version is 4.4.1. Thanks in advance!
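
    For the curious reader: int fail[s_len] is a C99 variable-length array (VLA). C89 required array dimensions to be integer constant expressions, but gcc 4.4 accepts VLAs in C99 mode, and also as an extension in C++. A sketch of the portable alternative in C++, where VLAs do not exist:

        #include <vector>

        void buildFailureTable(int s_len)
        {
            // A vector gives the same run-time-sized array with automatic
            // cleanup; use fail[i] exactly as the VLA was used.
            std::vector<int> fail(s_len, 0);
            // ... fill and use fail ...
        }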

  • How to use boost::fusion::transform on heterogeneous containers?

    - by Kyle
    Boost.org's example given for fusion::transform is as follows:

        struct triple
        {
            typedef int result_type;

            int operator()(int t) const
            {
                return t * 3;
            };
        };

        // ...
        assert(transform(make_vector(1,2,3), triple()) == make_vector(3,6,9));

    Yet I'm not "getting it." The vector in their example contains elements all of the same type, but a major point of using fusion is containers of heterogeneous types. What if they had used make_vector(1, 'a', "howdy") instead? Then

        int operator()(int t)

    would need to become

        template<typename T> T& operator()(T& const t)

    But how would I write the result_type?

        template<typename T> typedef T& result_type

    certainly isn't valid syntax, and it wouldn't make sense even if it were, because it's not tied to the function.
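
    One plausible direction, sketched under the assumption that Boost.Fusion resolves polymorphic function objects through boost::result_of: instead of a single result_type, provide a nested result<> template specialized on the call signature. The functor name here is illustrative, and the exact protocol should be checked against your Boost version:

        #include <boost/fusion/include/make_vector.hpp>
        #include <boost/fusion/include/transform.hpp>
        #include <boost/type_traits/remove_const.hpp>
        #include <boost/type_traits/remove_reference.hpp>

        struct echo    // hypothetical polymorphic functor
        {
            // The nested result<> template plays the role that result_type
            // plays for monomorphic functors: Fusion asks result<F(T)>::type.
            template <typename Sig>
            struct result;

            template <typename F, typename T>
            struct result<F(T)>
            {
                // T may arrive as a (const) reference; strip to a plain value.
                typedef typename boost::remove_const<
                    typename boost::remove_reference<T>::type>::type type;
            };

            template <typename T>
            T operator()(const T& t) const { return t; }
        };

        int main()
        {
            namespace fusion = boost::fusion;
            // Works across heterogeneous element types:
            fusion::transform(fusion::make_vector(1, 'a', "howdy"), echo());
            return 0;
        }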

  • Memory leaks after using typeinfo::name()

    - by icabod
    I have a program in which, partly for informational logging, I output the names of some classes as they are used (specifically, I add an entry to a log saying something along the lines of "Messages::CSomeClass transmitted to 127.0.0.1"). I do this with code similar to the following:

        std::string getMessageName(void) const
        {
            return std::string(typeid(*this).name());
        }

    And yes, before anyone points it out, I realise that the output of typeinfo::name() is implementation-specific. According to MSDN:

        The type_info::name member function returns a const char* to a null-terminated
        string representing the human-readable name of the type. The memory pointed to
        is cached and should never be directly deallocated.

    However, when I exit my program in the debugger, any "new" use of typeinfo::name() shows up as a memory leak. If I output the information for 2 classes, I get 2 memory leaks, and so on. This hints that the cached data is never being freed. While this is not a major issue, it looks messy, and after a long debugging session it could easily hide genuine memory leaks. I have looked around and found some useful information (one SO answer gives some interesting information about how typeinfo may be implemented), but I'm wondering if this memory should normally be freed by the system, or if there is something I can do to "not notice" the leaks when debugging. I do have a back-up plan, which is to code the getMessageName method myself and not rely on typeinfo::name, but I'd like to know anyway if there's something I've missed.

  • Retrieve class name hierarchy as string

    - by Jeff Wain
    Our system complexity has risen to the point that we need to make permission names tied to the client from the database more specific. In the client, permissions are referenced from a static class, since a lot of client functionality is dependent on the permissions each user has, and the roles have a ton of variety. I've referenced this post as an example, but I'm looking for a more specific use case. Take for instance this reference, where PermissionAlpha would be a const string:

        return HasPermission(PermissionNames.PermissionAlpha);

    Which is great, except now that things are growing more complex, the classes are being structured like this:

        public static class PermissionNames
        {
            public static class PermissionAlpha
            {
                public const string SubPermission;
            }
        }

    I'm trying to find an easy way to reference PermissionAlpha in this new setup that will act similar to the first declaration above. Would the only way to do this be to resort to pulling the value of the class name, as in the example below? I'm trying to keep all the names in one place that can be referenced anywhere in the application.

        public static class PermissionAlpha
        {
            public static string Name
            {
                get { return typeof(PermissionAlpha).Name; }
            }
        }

  • What C++ templates issue is going on with this error?

    - by WilliamKF
    Running gcc v3.4.6 on Botan v1.8.8, I get the following compile-time error building my application, after successfully building Botan and running its self-test:

        ../../src/Botan-1.8.8/build/include/botan/secmem.h: In member function
        `Botan::MemoryVector<T>& Botan::MemoryVector<T>::operator=(const Botan::MemoryRegion<T>&)':
        ../../src/Botan-1.8.8/build/include/botan/secmem.h:310: error: missing template arguments before '(' token

    What is this compiler error telling me? Here is a snippet of secmem.h that includes line 130:

        [...]
        /**
        * This class represents variable length buffers that do not
        * make use of memory locking.
        */
        template<typename T>
        class MemoryVector : public MemoryRegion<T>
        {
        public:
            /**
            * Copy the contents of another buffer into this buffer.
            * @param in the buffer to copy the contents from
            * @return a reference to *this
            */
            MemoryVector<T>& operator=(const MemoryRegion<T>& in)
            {
                if(this != &in) set(in);
                return (*this);
            }   // This is line 130!
        [...]
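
    An educated guess at the cause, not confirmed against the Botan sources: gcc 3.4 introduced stricter two-phase name lookup, so the unqualified call set(in) inside the template no longer searches the dependent base MemoryRegion<T>; if the class template std::set is visible at that point, lookup finds it instead, which matches the "missing template arguments before '(' token" wording. A self-contained sketch of the effect and the usual this-> fix:

        #include <set>
        using namespace std;        // makes the class template 'set' visible unqualified

        template <typename T>
        struct Base { void set(const T&) {} };

        template <typename T>
        struct Derived : Base<T>
        {
            void assign(const T& in)
            {
                // set(in);         // lookup skips the dependent base, finds std::set:
                                    //   "missing template arguments before '(' token"
                this->set(in);      // qualified: resolved at instantiation,
                                    // finds Base<T>::set
            }
        };

        int main()
        {
            Derived<int> d;
            d.assign(42);
            return 0;
        }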

  • What limits scaling in this simple OpenMP program?

    - by Douglas B. Staple
    I'm trying to understand the limits to parallelization on a 48-core system (4x AMD Opteron 6348, 2.8 GHz, 12 cores per CPU). I wrote this tiny OpenMP code to test the speedup in what I thought would be the best possible situation (the task is embarrassingly parallel):

        // Compile with: gcc scaling.c -std=c99 -fopenmp -O3
        #include <stdio.h>
        #include <stdint.h>

        int main(){
            const uint64_t umin = 1;
            const uint64_t umax = 10000000000LL;
            double sum = 0.;

            #pragma omp parallel for reduction(+:sum)
            for(uint64_t u = umin; u < umax; u++)
                sum += 1./u/u;

            printf("%e\n", sum);
        }

    I was surprised to find that the scaling is highly nonlinear. It takes about 2.9 s for the code to run with 48 threads, 3.1 s with 36 threads, 3.7 s with 24 threads, 4.9 s with 12 threads, and 57 s with 1 thread. Unfortunately I have to say that there is one process running on the computer using 100% of one core, so that might be affecting it. It's not my process, so I can't end it to test the difference, but somehow I doubt that's making the difference between a 19-20x speedup and the ideal 48x speedup. To make sure it wasn't an OpenMP issue, I ran two copies of the program at the same time with 24 threads each (one with umin=1, umax=5000000000, and the other with umin=5000000000, umax=10000000000). In that case both copies of the program finish after 2.9 s, so it's exactly the same as running 48 threads with a single instance of the program. What's preventing linear scaling with this simple program?

  • Getting ellipses function parameters without an initial argument

    - by Tox1k
    So I've been making a custom parser for a scripting language, and I wanted to be able to pass only ellipsis arguments. I don't need or want an initial variable; however, Microsoft and C seem to want something else. FYI, see bottom for info. I've looked at the va_* definitions:

        #define _crt_va_start(ap,v) ( ap = (va_list)_ADDRESSOF(v) + _INTSIZEOF(v) )
        #define _crt_va_arg(ap,t)   ( *(t *)((ap += _INTSIZEOF(t)) - _INTSIZEOF(t)) )
        #define _crt_va_end(ap)     ( ap = (va_list)0 )

    and the part I don't want is the v in va_start. As a little background, I'm competent in goasm and I know how the stack works, so I know what's happening here. I was wondering if there is a way to get the function stack base without having to use inline assembly. Ideas I've had:

        #define im_va_start(ap) (__asm { mov [ap], ebp })

    and so on... but really I feel like that's messy and I'm doing it wrong.

        struct function_table {
            const char* fname;
            void (*fptr)(...);
            unsigned char maxArgs;
        };

        function_table mytable[] = {
            { "MessageBox", &tMessageBoxA, 4 }
        };

    ... then some function that sorts through a const char* passed to it to find the matching function in mytable and calls tMessageBoxA with the params. Also, the maxArgs argument is just so I can check that a valid number of parameters is being sent. I have personal reasons for not wanting to send it in the function, but in the meantime we can just say it's because I'm curious. This is just an example; custom libraries are what I would be implementing, so it wouldn't just be calling WinAPI stuff.

        void tMessageBoxA(...)
        {
            // stuff to load args passed
            MessageBoxA(arg1, arg2, arg3, arg4);
        }

    I'm using the __cdecl calling convention, and I've looked up ways to reliably get a pointer to the base of the stack (not the top), but I can't seem to find any. Also, I'm not worried about function security or typechecking.
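
    For contrast, the standard-compliant route requires at least one named parameter before the ellipsis, because va_start must anchor on it. A minimal sketch in which the dispatcher, not the script author, supplies an argument count as that anchor (names are illustrative):

        #include <cstdarg>
        #include <cstdio>

        // One named parameter (the count) gives va_start its anchor.
        void tPrintInts(int argc, ...)
        {
            va_list ap;
            va_start(ap, argc);      // legal: anchored on the named parameter
            for (int i = 0; i < argc; ++i)
                std::printf("%d\n", va_arg(ap, int));
            va_end(ap);
        }

        int main()
        {
            tPrintInts(3, 10, 20, 30);
            return 0;
        }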

  • display multiple errors via bool flag c++

    - by igor
    Been a long night, but stuck on this and now am getting a "segmentation fault" in my compiler. Basically I'm trying to display all the errors (the cout) needed. If there is more than one error, I am to display all of them.

        bool validMove(const Square board[BOARD_SIZE][BOARD_SIZE], int x, int y, int value)
        {
            int index;
            bool moveError = true;
            const int row_conflict(0), column_conflict(1), grid_conflict(2);
            int v_subgrid = x/3;
            int h_subgrid = y/3;

            getCoords(x, y);

            for(index = 0; index < 9; index++)
                if(board[x][index].number == value){
                    cout << "That value is in conflict in this row\n";
                    moveError = false;
                }

            for(index = 0; index < 9; index++)
                if(board[index][y].number == value){
                    cout << "That value is in conflict in this column\n";
                    moveError = false;
                }

            for(int i = v_subgrid*3; i < (v_subgrid*3 + 3); i++){
                for(int j = h_subgrid*3; j < (h_subgrid*3 + 3); j++){
                    if(board[i][j].number == value){
                        cout << "That value is in conflict in this subgrid\n";
                        moveError = false;
                    }
                }
            }
            return true;
        }
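
    Two observations, offered as a sketch rather than a diagnosis: out-of-range x or y would explain the segmentation fault, and the function as posted always returns true, discarding the flag it just computed. A hedged rework (BOARD_SIZE and Square are assumed here, and the flag's sense is inverted for readability):

        #include <iostream>
        using namespace std;

        const int BOARD_SIZE = 9;            // assumption: 9x9 sudoku board
        struct Square { int number; };

        bool validMove(const Square board[BOARD_SIZE][BOARD_SIZE], int x, int y, int value)
        {
            // Indexing board[x][...] or board[...][y] with out-of-range
            // coordinates is one plausible source of the crash, so guard first.
            if (x < 0 || x >= BOARD_SIZE || y < 0 || y >= BOARD_SIZE)
                return false;

            bool conflict = false;
            for (int i = 0; i < BOARD_SIZE; i++)
                if (board[x][i].number == value) {
                    cout << "That value is in conflict in this row\n";
                    conflict = true;
                }
            for (int i = 0; i < BOARD_SIZE; i++)
                if (board[i][y].number == value) {
                    cout << "That value is in conflict in this column\n";
                    conflict = true;
                }
            for (int i = (x/3)*3; i < (x/3)*3 + 3; i++)
                for (int j = (y/3)*3; j < (y/3)*3 + 3; j++)
                    if (board[i][j].number == value) {
                        cout << "That value is in conflict in this subgrid\n";
                        conflict = true;
                    }

            return !conflict;                // report the flag, not a hard-coded true
        }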

  • How to keep the CPU usage down while running an SDL program?

    - by budwiser
    I've done a very basic window with SDL and want to keep it running until I press the X on the window.

        #include "SDL.h"

        const int SCREEN_WIDTH  = 640;
        const int SCREEN_HEIGHT = 480;

        int main(int argc, char **argv)
        {
            SDL_Init( SDL_INIT_VIDEO );
            SDL_Surface* screen = SDL_SetVideoMode( SCREEN_WIDTH, SCREEN_HEIGHT, 0,
                                                    SDL_HWSURFACE | SDL_DOUBLEBUF );
            SDL_WM_SetCaption( "SDL Test", 0 );

            SDL_Event event;
            bool quit = false;

            while (quit != false)
            {
                if (SDL_PollEvent(&event)) {
                    if (event.type == SDL_QUIT) {
                        quit = true;
                    }
                }
                SDL_Delay(80);
            }

            SDL_Quit();
            return 0;
        }

    I tried adding SDL_Delay() at the end of the while clause and it worked quite well. However, 80 ms seemed to be the highest value I could use to keep the program running smoothly, and even then the CPU usage is about 15-20%. Is this the best way to do this, and do I have to just live with the fact that it eats this much CPU already at this point?
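
    Two notes. The loop condition as posted, while (quit != false), would exit immediately, so presumably quit == false was intended. More importantly, a program that only reacts to events can block in SDL_WaitEvent instead of polling and sleeping, which drops idle CPU use to roughly zero. A sketch against the SDL 1.2 API:

        #include "SDL.h"

        int main(int argc, char **argv)
        {
            SDL_Init(SDL_INIT_VIDEO);
            SDL_SetVideoMode(640, 480, 0, SDL_HWSURFACE | SDL_DOUBLEBUF);
            SDL_WM_SetCaption("SDL Test", 0);

            SDL_Event event;
            bool quit = false;

            while (!quit)                       // note: !quit, not quit != false
            {
                // Blocks until an event arrives, so the loop consumes no CPU
                // while idle; returns 0 on error.
                if (SDL_WaitEvent(&event)) {
                    if (event.type == SDL_QUIT)
                        quit = true;
                }
            }

            SDL_Quit();
            return 0;
        }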

  • Trying to make a plugin system in C++

    - by Pirate for Profit
    I'm making a task-based program that needs to have plugins. Tasks need to have properties which can be easily edited; I think this can be done with Qt's Meta-Object Compiler reflection capabilities (I could be wrong, but I should be able to stick this in a QtPropertyBrowser?). So here's the base:

        class Task : public QObject
        {
            Q_OBJECT
        public:
            explicit Task(QObject *parent = 0) : QObject(parent) {}
            virtual void run() = 0;
        signals:
            void taskFinished(bool success = true);
        };

    Then a plugin might have this task:

        class PrinterTask : public Task
        {
            Q_OBJECT
        public:
            explicit PrinterTask(QObject *parent = 0) : Task(parent) {}

            void run()
            {
                Printer::getInstance()->Print(this->getData()); // fictional
                emit taskFinished(true);
            }

            inline const QString &getData() const;
            inline void setData(QString data);

            Q_PROPERTY(QString data READ getData WRITE setData) // for reflection
        };

    In a nutshell, here's what I want to do:

        // load plugin
        // find all the Tasks interface implementations in it
        // have user able to choose a Task and edit its specific Q_PROPERTY's
        // run the Task

    It's important that one .dll have multiple tasks, because I want them to be associated by their module. For instance, "FileTasks.dll" could have tasks for deleting files, making files, etc. The only problem with Qt's plugin setup is that I want to store X amount of Tasks in one .dll module. As far as I can tell, you can only load one interface per plugin (I could be wrong?). If so, the only possible way to accomplish what I want is to create a FactoryInterface with string-based keys which return the objects (as in Qt's Plug-And-Paint example), which is a terrible boilerplate that I would like to avoid. Does anyone know a cleaner C++ plugin architecture than Qt's to do what I want? Also, am I safely assuming Qt's reflection capabilities will do what I want (i.e. able to edit an unknown dynamically loaded task's properties with the QtPropertyBrowser before dispatching)?
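
    One way to read the constraint: a Qt plugin does expose a single root object, but nothing stops that root from being a small factory that enumerates and creates all of the module's tasks. A sketch of that arrangement (interface name and identifier string are illustrative; the macros are Qt 4's plugin machinery):

        #include <QObject>
        #include <QStringList>
        #include <QtPlugin>

        class Task;  // the Task base class from above

        // One root interface per .dll; it enumerates and creates that module's tasks.
        class TaskFactory
        {
        public:
            virtual ~TaskFactory() {}
            virtual QStringList taskNames() const = 0;   // e.g. "DeleteFile", "MakeFile"
            virtual Task* createTask(const QString& name, QObject* parent = 0) = 0;
        };

        Q_DECLARE_INTERFACE(TaskFactory, "com.example.TaskFactory/1.0")

        // In FileTasks.dll, roughly:
        // class FileTaskFactory : public QObject, public TaskFactory
        // {
        //     Q_OBJECT
        //     Q_INTERFACES(TaskFactory)
        // public:
        //     QStringList taskNames() const
        //         { return QStringList() << "DeleteFile" << "MakeFile"; }
        //     Task* createTask(const QString& name, QObject* parent)
        //         { /* switch on name, new the matching Task */ return 0; }
        // };
        // Q_EXPORT_PLUGIN2(filetasks, FileTaskFactory)   // Qt 4 export macro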

  • C++: Reference and Pointer question (example regarding OpenGL)

    - by Jay
    I would like to load textures, and then have them be used by multiple objects. Would this work?

        class Sprite
        {
            GLuint* mTextures; // do I need this to also be a reference?

            Sprite( GLuint* textures ) // do I need this to also be a reference?
            {
                mTextures = textures;
            }

            void Draw( textureNumber )
            {
                glBindTexture( GL_TEXTURE_2D, mTextures[ textureNumber ] );
                // drawing code
            }
        };

        // normally these variables would be inputed, but I did this for simplicity.
        const int NUMBER_OF_TEXTURES = 40;
        const int WHICH_TEXTURE = 10;

        void main()
        {
            std::vector<GLuint> the_textures;
            the_textures.resize( NUMBER_OF_TEXTURES );
            glGenTextures( NUMBER_OF_TEXTURES, &the_textures[0] );
            // texture loading code

            Sprite the_sprite( &the_textures[0] );
            the_sprite.Draw( WHICH_TEXTURE );
        }

    And is there a different way I should do this, even if it would work? Thanks.

  • Coordinating typedefs and structs in std::multiset (C++)

    - by Sarah
    I'm not a professional programmer, so please don't hesitate to state the obvious. My goal is to use a std::multiset container (typedef EventMultiSet), called currentEvents, to organize a list of structs of type Event, and to have members of class Host occasionally add new Event structs to currentEvents. The structs are supposed to be sorted by one of their members, time. I am not sure how much of what I am trying to do is legal; the g++ compiler reports (in "Host.h") "error: 'EventMultiSet' has not been declared." Here's what I'm doing:

        // Event.h
        struct Event {
        public:
            bool operator < ( const Event & rhs ) const {
                return ( time < rhs.time );
            }
            double time;
            int eventID;
            int hostID;
        };

        // Host.h
        ...
        void calcLifeHist( double, EventMultiSet * ); // produces compiler error
        ...
        void addEvent( double, int, int, EventMultiSet * ); // produces compiler error

        // Host.cpp
        #include "Event.h"
        ...

        // main.cpp
        #include "Event.h"
        ...
        typedef std::multiset< Event, std::less< Event > > EventMultiSet;
        EventMultiSet currentEvents;
        EventMultiSet * cePtr = &currentEvents;
        ...

    Major questions:

      • Where should I include the EventMultiSet typedef?
      • Are my EventMultiSet pointers obviously problematic?
      • Is the compare function within my Event struct (in theory) okay?

    Thank you very much in advance.
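
    On the first question: a typedef is only visible in translation units that see its declaration, so defining it in main.cpp hides it from Host.h. A conventional arrangement, sketched under the assumption that every user includes Event.h:

        // Event.h
        #ifndef EVENT_H_INCLUDED
        #define EVENT_H_INCLUDED

        #include <set>

        struct Event {
            bool operator<(const Event& rhs) const { return time < rhs.time; }
            double time;
            int eventID;
            int hostID;
        };

        // Declared right after the struct, so any file that includes Event.h
        // (Host.h, main.cpp, ...) can use the name.
        typedef std::multiset<Event> EventMultiSet;  // std::less<Event> is the default

        #endif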

  • Preprocessor "macro function" vs. function pointer - best practice?

    - by Dustin
    I recently started a small personal project (an RGB-value-to-BGR-value conversion program) in C, and I realised that a function that converts from RGB to BGR can not only perform the conversion but also the inversion. Obviously that means I don't really need two functions, rgb2bgr and bgr2rgb. However, does it matter whether I use a function pointer instead of a macro? For example:

        int rgb2bgr (const int rgb);

        /*
         * Should I do this because it allows the compiler to issue
         * appropriate error messages using the proper function name,
         * not to mention possible debugging benefits?
         */
        int (*bgr2rgb) (const int bgr) = rgb2bgr;

        /*
         * Or should I do this since it is merely a convenience
         * and they're really the same function anyway?
         */
        #define bgr2rgb(bgr) (rgb2bgr (bgr))

    I'm not necessarily looking for a change in execution efficiency; it's more of a subjective question out of curiosity. I am well aware of the fact that type safety is neither lost nor gained using either method. Would the function pointer merely be a convenience, or are there more practical benefits to be gained of which I am unaware?
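
    A third option worth noting: a thin named wrapper function gives the reverse direction its own real, debuggable symbol, with none of the macro's textual-substitution pitfalls and no pointer indirection. A sketch (the byte-swap body is illustrative):

        // Thin named wrapper instead of a macro or a function pointer.
        int rgb2bgr(int rgb)
        {
            // Swap the R and B bytes; symmetric, so it inverts itself.
            return ((rgb & 0xFF) << 16) | (rgb & 0xFF00) | ((rgb >> 16) & 0xFF);
        }

        inline int bgr2rgb(int bgr)
        {
            // Same conversion both directions; the compiler will typically
            // inline this, so the wrapper costs nothing at run time.
            return rgb2bgr(bgr);
        }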

  • Cross-platform iteration of Unicode string

    - by kizzx2
    I want to iterate over each character of a Unicode string, treating each surrogate pair and each combining character sequence as a single unit (one grapheme).

    Example

    The text "नमस्ते" is comprised of the code points U+0928, U+092E, U+0938, U+094D, U+0924, U+0947, of which U+094D and U+0947 are combining marks.

        static void Main(string[] args)
        {
            const string s = "नमस्ते";
            Console.WriteLine(s.Length); // Outputs "6"

            var l = 0;
            var e = System.Globalization.StringInfo.GetTextElementEnumerator(s);
            while (e.MoveNext()) l++;
            Console.WriteLine(l); // Outputs "4"
        }

    So there we have it in .NET. We also have Win32's CharNextW():

        #include <Windows.h>
        #include <iostream>
        #include <string>

        int main()
        {
            const wchar_t * s = L"नमस्ते";
            std::cout << std::wstring(s).length() << std::endl; // Gives "6"

            int l = 0;
            while (CharNextW(s) != s) {
                s = CharNextW(s);
                ++l;
            }
            std::cout << l << std::endl; // Gives "4"

            return 0;
        }

    Question

    Both ways I know of are specific to Microsoft. Are there portable ways to do it? I heard about ICU but I couldn't quickly find something related (UnicodeString(s).length() still gives 6). It would be an acceptable answer to point to the related function/module in ICU. C++ doesn't have a notion of Unicode, so a lightweight cross-platform library for dealing with these issues would make an acceptable answer.
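
    In ICU, the relevant module appears to be the character (grapheme) BreakIterator. A hedged sketch, to be checked against your ICU version:

        #include <unicode/brkiter.h>
        #include <unicode/unistr.h>
        #include <iostream>

        int main()
        {
            icu::UnicodeString s = icu::UnicodeString::fromUTF8("नमस्ते");

            UErrorCode status = U_ZERO_ERROR;
            icu::BreakIterator* bi = icu::BreakIterator::createCharacterInstance(
                icu::Locale::getDefault(), status);
            if (U_FAILURE(status)) return 1;

            bi->setText(s);

            // Each step of the character break iterator is one user-perceived
            // character, so surrogate pairs and combining sequences count once.
            bi->first();
            int graphemes = 0;
            while (bi->next() != icu::BreakIterator::DONE)
                ++graphemes;

            std::cout << graphemes << std::endl;   // expected: 4
            delete bi;
            return 0;
        }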

  • Simplest way to mix sequences of types with iostreams?

    - by Kylotan
    I have a function void write<typename T>(const T&), which is implemented in terms of writing the T object to an ostream, and a matching function T read<typename T>() that reads a T from an istream. I am basically using iostreams as a plain-text serialisation format, which obviously works fine for most built-in types, although I'm not sure how to effectively handle std::strings just yet. I'd like to be able to write out a sequence of objects too, e.g. void write<typename T>(const std::vector<T>&), or an iterator-based equivalent (although in practice it would always be used with a vector). However, while writing an overload that iterates over the elements and writes them out is easy enough to do, this doesn't add enough information to allow the matching read operation to know how each element is delimited, which is essentially the same problem that I have with a single std::string. Is there a single approach that can work for all basic types and std::string? Or perhaps I can get away with 2 overloads: one for numerical types and one for strings? (Either using different delimiters, or the string using a delimiter-escaping mechanism, perhaps.)
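
    One common approach, sketched here as an assumption about the intent: length-prefix every variable-sized value. The reader then never needs delimiters, and strings and vectors fall out of the same scheme:

        #include <iostream>
        #include <sstream>
        #include <string>
        #include <vector>

        // Length-prefixed text encoding: "<size> <payload>".
        void write(std::ostream& os, const std::string& s)
        {
            os << s.size() << ' ' << s;   // the reader knows exactly how much to take
        }

        std::string readString(std::istream& is)
        {
            std::string::size_type n;
            is >> n;
            is.get();                     // skip the single separator space
            std::string s(n, '\0');
            is.read(&s[0], static_cast<std::streamsize>(n));
            return s;
        }

        template <typename T>
        void write(std::ostream& os, const std::vector<T>& v)
        {
            os << v.size();               // element count first...
            for (typename std::vector<T>::size_type i = 0; i < v.size(); ++i)
                os << ' ' << v[i];        // ...then space-separated elements
        }

        int main()
        {
            std::ostringstream out;
            write(out, std::string("hello world"));
            std::istringstream in(out.str());
            std::cout << readString(in) << '\n';  // prints: hello world
            return 0;
        }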

  • Universal oAuth for objective-c class?

    - by phpnerd211
    I have an app that connects to 6+ social networks via APIs. What I want to do is move my oAuth calls over so they are made directly from the phone (not from the server). Here's what I have (for tumblr):

        // Set some variables
        NSString *consumerKey = CONSUMER_KEY_HERE;
        NSString *sharedSecret = SHARED_SECRET_HERE;
        NSString *callToURL = @"https://tumblr.com/oauth/access_token";
        NSString *thePassword = PASSWORD_HERE;
        NSString *theUsername = USERNAME_HERE;

        // Calculate nonce & timestamp
        NSString *nonce = [[NSString stringWithFormat:@"%d", arc4random()] retain];
        time_t t;
        time(&t);
        mktime(gmtime(&t));
        NSString *timestamp = [[NSString stringWithFormat:@"%d",
            (int)(((float)([[NSDate date] timeIntervalSince1970])) + 0.5)] retain];

        // Generate signature
        NSString *baseString = [NSString stringWithFormat:@"GET&%@&%@",
            [callToURL urlEncode],
            [[NSString stringWithFormat:@"oauth_consumer_key=%@&oauth_nonce=%@&oauth_signature_method=HMAC-SHA1&oauth_timestamp=%@&oauth_version=1.0&x_auth_mode=client_auth&x_auth_password=%@&x_auth_username=%@",
                consumerKey, nonce, timestamp, thePassword, theUsername] urlEncode]];
        NSLog(@"baseString: %@", baseString);

        const char *cKey = [sharedSecret cStringUsingEncoding:NSASCIIStringEncoding];
        const char *cData = [baseString cStringUsingEncoding:NSASCIIStringEncoding];
        unsigned char cHMAC[CC_SHA256_DIGEST_LENGTH];
        CCHmac(kCCHmacAlgSHA256, cKey, strlen(cKey), cData, strlen(cData), cHMAC);
        NSData *HMAC = [[NSData alloc] initWithBytes:cHMAC length:sizeof(cHMAC)];
        NSString *signature = [HMAC base64EncodedString];

        NSString *theUrl = [NSString stringWithFormat:@"%@?oauth_consumer_key=%@&oauth_nonce=%@&oauth_signature=%@&oauth_signature_method=HMAC-SHA1&oauth_timestamp=%@&oauth_version=1.0&x_auth_mode=client_auth&x_auth_password=%@&x_auth_username=%@",
            callToURL, consumerKey, nonce, signature, timestamp, thePassword, theUsername];

    From tumblr, I get this error:

        oauth_signature does not match expected value

    I've done some forum scouring, and no oAuth-for-Objective-C classes worked for what I want to do. I also don't want to have to download and implement 6+ social API classes into my project and do it that way.

  • What is the rationale for not allowing overloading of C++ conversion operators with non-member functions?

    - by Vicente Botet Escriba
    C++0x has added explicit conversion operators, but they must always be defined as members of the Source class. The same applies to the assignment operator: it must be defined on the Target class. When the Source and Target classes of the needed conversion are independent of each other, the Source cannot define a conversion operator, nor can the Target define a constructor from a Source. Usually we get around this by defining a specific function such as:

        Target ConvertToTarget(Source& v);

    If C++0x allowed conversion operators to be overloaded by non-member functions, we could, for example, define the conversion implicitly or explicitly between unrelated types:

        template < typename To, typename From >
        operator To(const From& val);

    For example, we could specialize the conversion from chrono::time_point to posix_time::ptime as follows:

        template < class Clock, class Duration>
        operator boost::posix_time::ptime(
            const boost::chrono::time_point<Clock, Duration>& from)
        {
            using namespace boost;
            typedef chrono::time_point<Clock, Duration> time_point_t;
            typedef chrono::nanoseconds duration_t;
            typedef duration_t::rep rep_t;

            rep_t d = chrono::duration_cast<duration_t>(
                from.time_since_epoch()).count();
            rep_t sec = d / 1000000000;
            rep_t nsec = d % 1000000000;

            return posix_time::from_time_t(0) +
                   posix_time::seconds(static_cast<long>(sec)) +
                   posix_time::nanoseconds(nsec);
        }

    And use the conversion as any other conversion. For a more complete description of the problem, see here or my Boost.Conversion library. So the question is: what is the rationale for not allowing C++ conversion operators to be overloaded with non-member functions?

  • Enumeration trouble: redeclared as different kind of symbol

    - by Matt
    Hello all. I am writing a program that is supposed to help me learn about enumeration data types in C++. The current trouble is that the compiler doesn't like my enum usage when trying to use the new data type as I would other data types. I am getting the error "redeclared as different kind of symbol" when compiling my triangleShape function. Take a look at the relevant code. Any insight is appreciated! Thanks! (All functions are in their own .cpp files.)

    header file:

        #ifndef HEADER_H_INCLUDED
        #define HEADER_H_INCLUDED

        #include <iostream>
        #include <iomanip>

        using namespace std;

        enum triangleType {noTriangle, scalene, isoceles, equilateral};

        // prototypes
        void extern input(float&, float&, float&);
        triangleType extern triangleShape(float, float, float);
        /* void extern output(float, float, float); */
        void extern myLabel(const char *, const char *);

        #endif // HEADER_H_INCLUDED

    main function:

        //8.1 main
        // this progam...
        #include "header.h"

        int main()
        {
            float sideLength1, sideLength2, sideLength3;
            char response;

            do // main loop
            {
                input(sideLength1, sideLength2, sideLength3);
                triangleShape(sideLength1, sideLength2, sideLength3);
                //output(sideLength1, sideLength2, sideLength3);

                cout << "\nAny more triangles to analyze? (y,n) ";
                cin >> response;
            } while (response == 'Y' || response == 'y');

            myLabel("8.1", "2/11/2011");
            return 0;
        }

    triangleShape function:

        #include "header.h"

        triangleType triangleShape(sideLenght1, sideLength2, sideLength3)
        {
            triangleType triangle;
            return triangle;
        }
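
    The likely culprit is the definition of triangleShape: its parameters have no types (note also the sideLenght1 spelling), which is not legal C++ and confuses the compiler's view of those names. A sketch of a definition that matches the prototype in header.h:

        #include "header.h"

        // Parameters now carry their types, matching the prototype
        // triangleType triangleShape(float, float, float);
        triangleType triangleShape(float sideLength1, float sideLength2, float sideLength3)
        {
            triangleType triangle = noTriangle;  // initialise instead of returning garbage
            // ... classify the triangle from the three side lengths ...
            return triangle;
        }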

  • Currently using View, Should I use a hard table instead?

    - by 1001010101
    I am currently debating whether my table mapping_uGroups_uProducts, which is a view formed by the following definition, should be a hard table instead:

        CREATE ALGORITHM=UNDEFINED DEFINER=`root`@`localhost` SQL SECURITY DEFINER
        VIEW `db`.`mapping_uGroups_uProducts` AS
        select distinct `X`.`upID` AS `upID`, `Z`.`ugID` AS `ugID`
        from ((`db`.`mapping_uProducts_Products` `X`
            join `db`.`productsInfo` `Y` on ((`X`.`pID` = `Y`.`pID`)))
            join `db`.`mapping_uGroups_Groups` `Z` on ((`Y`.`gID` = `Z`.`gID`)));

    My current query is:

        SELECT upID FROM uProductsInfo
        JOIN fs_uProducts USING (upID)
        JOIN mapping_uGroups_uProducts USING (upID) -- could be faster with a hard table and index
        JOIN mapping_fs_key USING (fsKeyID)
        WHERE fsName="OVERALL"
        AND ugID=1
        ORDER BY score DESC
        LIMIT 0,30;

    which is pretty slow (about 10 seconds for 30 results). I think the reason for my query being so slow is definitely that it relies on a VIEW, which has no index to speed things up.

        +----+-------------+----------------+--------+----------------+---------+---------+--------------------------------+-------+---------------------------------+
        | id | select_type | table          | type   | possible_keys  | key     | key_len | ref                            | rows  | Extra                           |
        +----+-------------+----------------+--------+----------------+---------+---------+--------------------------------+-------+---------------------------------+
        |  1 | PRIMARY     | mapping_fs_key | const  | PRIMARY,fsName | fsName  | 386     | const                          |     1 | Using temporary; Using filesort |
        |  1 | PRIMARY     | <derived2>     | ALL    | NULL           | NULL    | NULL    | NULL                           | 19706 | Using where                     |
        |  1 | PRIMARY     | uProductsInfo  | eq_ref | PRIMARY        | PRIMARY | 4       | mapping_uGroups_uProducts.upID |     1 | Using index                     |
        |  1 | PRIMARY     | fs_uProducts   | ref    | upID           | upID    | 4       | db.uProductsInfo.upID          |   221 | Using where                     |
        |  2 | DERIVED     | X              | ALL    | PRIMARY        | NULL    | NULL    | NULL                           | 40772 | Using temporary                 |
        |  2 | DERIVED     | Y              | eq_ref | PRIMARY        | PRIMARY | 4       | db.X.pID                       |     1 | Distinct                        |
        |  2 | DERIVED     | Z              | ref    | PRIMARY        | PRIMARY | 4       | db.Y.gID                       |     2 | Using index; Distinct           |
        +----+-------------+----------------+--------+----------------+---------+---------+--------------------------------+-------+---------------------------------+
        7 rows in set (0.48 sec)

    The EXPLAIN here looks pretty cryptic, and I don't know whether I should drop the view and write a script to just insert everything from the view into a hard table (obviously, that would lose the flexibility of the view, since the mapping changes quite frequently). Does anyone have any idea how I can optimize my schema better?

  • parse string with regular expression

    - by llamerr
    I'm trying to parse this string:

        $right = '34601)S(1,6)[2] - 34601)(11)[2] + 34601)(3)[2,4]';

    with the following regexp:

        const word = '(\d{3}\d{2}\)S{0,1}\([^\)]*\)S{0,1}\[[^\]]*\])';
        preg_match('/'.word.'{1}(?:\s{1}([+-]{1})\s{1}'.word.'){0,}/', $right, $matches);
        print_r($matches);

    I want it to return an array like this:

        Array
        (
            [0] => 34601)S(1,6)[2] - 34601)(11)[2] + 34601)(3)[2,4]
            [1] => 34601)S(1,6)[2]
            [2] => -
            [3] => 34601)(11)[2]
            [4] => +
            [5] => 34601)(3)[2,4]
        )

    but it returns only the following:

        Array
        (
            [0] => 34601)S(1,6)[2] - 34601)(11)[2] + 34601)(3)[2,4]
            [1] => 34601)S(1,6)[2]
            [2] => +
            [3] => 34601)(3)[2,4]
        )

    I think it's because of the [^)]* or [^]]* in word, but how should I correct the regexp to match this in another way? I tried to make it more specific:

        \d+(?:[,#]\d+){0,}

    so word becomes

        const word = '(\d{3}\d{2}\)S{0,1}\(\d+(?:[,#]\d+){0,}\)S{0,1}\[\d+(?:[,#]\d+){0,}\])';

    but it gives nothing.

  • Template trick to optimize out allocations

    - by anon
    I have:

        struct DoubleVec {
            std::vector<double> data;
        };

        DoubleVec operator+(const DoubleVec& lhs, const DoubleVec& rhs) {
            DoubleVec ans(lhs.size());
            for (int i = 0; i < lhs.size(); ++i) {
                ans[i] = lhs[i] + rhs[i]; // assume lhs.size() == rhs.size()
            }
            return ans;
        }

        DoubleVec someFunc(DoubleVec a, DoubleVec b, DoubleVec c, DoubleVec d) {
            DoubleVec ans = a + b + c + d;
        }

    Now, in the above, the "a + b + c + d" will cause the creation of 3 temporary DoubleVecs. Is there a way to optimize this away with some type of template magic, i.e. to optimize it down to something equivalent to:

        DoubleVec ans(a.size());
        for (int i = 0; i < ans.size(); i++)
            ans[i] = a[i] + b[i] + c[i] + d[i];

    You can assume all DoubleVecs have the same number of elements. The high-level idea is to do some type of templated magic on "+" which "delays the computation" until the "=", at which point it looks into itself, goes "hmm, I'm just adding these numbers", and synthesizes a[i] + b[i] + c[i] + d[i] instead of all the temporaries. Thanks!
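
    The technique being described is known as expression templates. A minimal sketch of the idea, assuming DoubleVec gains an indexing interface; the operator+ here is left unconstrained for brevity, which a real implementation would restrict:

        #include <cstddef>
        #include <vector>

        struct DoubleVec {
            std::vector<double> data;

            explicit DoubleVec(std::size_t n) : data(n) {}
            double  operator[](std::size_t i) const { return data[i]; }
            double& operator[](std::size_t i)       { return data[i]; }
            std::size_t size() const { return data.size(); }

            // Assigning from any expression walks it element by element:
            // a + b + c + d collapses into one loop, no temporaries.
            template <typename Expr>
            DoubleVec& operator=(const Expr& e) {
                for (std::size_t i = 0; i < size(); ++i)
                    data[i] = e[i];
                return *this;
            }
        };

        // A node representing "l + r"; evaluation is deferred until indexed.
        template <typename L, typename R>
        struct AddExpr {
            const L& l;
            const R& r;
            AddExpr(const L& l_, const R& r_) : l(l_), r(r_) {}
            double operator[](std::size_t i) const { return l[i] + r[i]; }
            std::size_t size() const { return l.size(); }
        };

        template <typename L, typename R>
        AddExpr<L, R> operator+(const L& l, const R& r) {
            return AddExpr<L, R>(l, r);
        }

        DoubleVec someFunc(const DoubleVec& a, const DoubleVec& b,
                           const DoubleVec& c, const DoubleVec& d) {
            DoubleVec ans(a.size());
            ans = a + b + c + d;   // one fused loop: ans[i] = a[i]+b[i]+c[i]+d[i]
            return ans;
        }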
