Search Results

Search found 5165 results on 207 pages for 'const cast'.

Page 97/207 | < Previous Page | 93 94 95 96 97 98 99 100 101 102 103 104  | Next Page >

  • Getting ellipsis function parameters without an initial argument

    - by Tox1k
    So I've been making a custom parser for a scripting language, and I want to be able to pass only ellipsis arguments. I don't need or want an initial named parameter, but Microsoft and C seem to want something else (FYI, see the bottom for more info). I've looked at the va_* definitions:

        #define _crt_va_start(ap,v) ( ap = (va_list)_ADDRESSOF(v) + _INTSIZEOF(v) )
        #define _crt_va_arg(ap,t)   ( *(t *)((ap += _INTSIZEOF(t)) - _INTSIZEOF(t)) )
        #define _crt_va_end(ap)     ( ap = (va_list)0 )

    and the part I don't want is the v in va_start. As a little background, I'm competent in GoAsm and I know how the stack works, so I know what's happening here. I was wondering if there is a way to get the function's stack base without having to use inline assembly. Ideas I've had:

        #define im_va_start(ap) (__asm { mov [ap], ebp })

    and so on, but really I feel like that's messy and I'm doing it wrong.

        struct function_table {
            const char* fname;
            void (*fptr)(...);
            unsigned char maxArgs;
        };

        function_table mytable[] = { { "MessageBox", &tMessageBoxA, 4 } };

    ... plus some function that sorts through a const char* passed to it to find the matching function in mytable and calls tMessageBoxA with the params. The maxArgs member is just so I can check that a valid number of parameters is being sent. I have personal reasons for not wanting to send the count into the function, but in the meantime we can just say it's because I'm curious. This is just an example; custom libraries are what I would be implementing, so it wouldn't just be calling WinAPI stuff.

        void tMessageBoxA(...)
        {
            // stuff to load the args passed
            MessageBoxA(arg1, arg2, arg3, arg4);
        }

    I'm using the __cdecl calling convention, and I've looked up ways to reliably get a pointer to the base of the stack (not the top) but I can't seem to find any. Also, I'm not worried about function security or type checking.
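
    One portable alternative worth noting, sketched below: instead of reading the caller's stack through va_* or ebp, the dispatcher collects the evaluated script arguments into an array and hands every bound function that array plus a count. The names script_value and script_fn are hypothetical stand-ins, not part of the original design.

        #include <cstdio>

        // Hypothetical tagged value the parser would fill in while evaluating arguments.
        struct script_value {
            long        as_int;
            const char* as_str;
        };

        // Every scriptable function takes the same signature: an argument array and a count.
        typedef void (*script_fn)(const script_value* args, unsigned char argc);

        struct function_table {
            const char*   fname;
            script_fn     fptr;
            unsigned char maxArgs;
        };

        void tMessageBoxA(const script_value* args, unsigned char argc)
        {
            if (argc >= 2)  // stand-in for the real MessageBoxA call
                std::printf("MessageBox(\"%s\", \"%s\")\n", args[0].as_str, args[1].as_str);
        }

        function_table mytable[] = {
            { "MessageBox", &tMessageBoxA, 4 },
        };

        int main()
        {
            script_value args[2] = { { 0, "Hello" }, { 0, "Title" } };
            mytable[0].fptr(args, 2);
            return 0;
        }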

    Read the article

  • C++: Reference and Pointer question (example regarding OpenGL)

    - by Jay
    I would like to load textures and then have them be used by multiple objects. Would this work?

        class Sprite
        {
            GLuint* mTextures; // do I need this to also be a reference?

            Sprite( GLuint* textures ) // do I need this to also be a reference?
            {
                mTextures = textures;
            }

            void Draw( int textureNumber )
            {
                glBindTexture( GL_TEXTURE_2D, mTextures[ textureNumber ] );
                // drawing code
            }
        };

        // normally these variables would be input, but I did this for simplicity.
        const int NUMBER_OF_TEXTURES = 40;
        const int WHICH_TEXTURE = 10;

        void main()
        {
            std::vector<GLuint> the_textures;
            the_textures.resize( NUMBER_OF_TEXTURES );
            glGenTextures( NUMBER_OF_TEXTURES, &the_textures[0] );
            // texture loading code

            Sprite the_sprite( &the_textures[0] );
            the_sprite.Draw( WHICH_TEXTURE );
        }

    And is there a different way I should do this, even if it would work? Thanks.

    Read the article

  • How to keep the CPU usage down while running an SDL program?

    - by budwiser
    I've made a very basic window with SDL and want to keep it running until I press the X on the window.

        #include "SDL.h"

        const int SCREEN_WIDTH = 640;
        const int SCREEN_HEIGHT = 480;

        int main(int argc, char **argv)
        {
            SDL_Init( SDL_INIT_VIDEO );
            SDL_Surface* screen = SDL_SetVideoMode( SCREEN_WIDTH, SCREEN_HEIGHT, 0,
                                                    SDL_HWSURFACE | SDL_DOUBLEBUF );
            SDL_WM_SetCaption( "SDL Test", 0 );

            SDL_Event event;
            bool quit = false;
            while (quit == false)
            {
                if (SDL_PollEvent(&event))
                {
                    if (event.type == SDL_QUIT)
                    {
                        quit = true;
                    }
                }
                SDL_Delay(80);
            }
            SDL_Quit();
            return 0;
        }

    I tried adding SDL_Delay() at the end of the loop body and it worked quite well. However, 80 ms seemed to be the highest value I could use to keep the program running smoothly, and even then the CPU usage is about 15-20%. Is this the best way to do this, or do I just have to live with the fact that it eats this much CPU at this point?
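
    A minimal sketch of one common fix, assuming SDL 1.2's blocking SDL_WaitEvent is acceptable here (i.e. nothing needs to animate while the window is idle):

        #include "SDL.h"

        int main(int argc, char **argv)
        {
            SDL_Init(SDL_INIT_VIDEO);
            SDL_Surface* screen = SDL_SetVideoMode(640, 480, 0, SDL_HWSURFACE | SDL_DOUBLEBUF);
            (void)screen;
            SDL_WM_SetCaption("SDL Test", 0);

            SDL_Event event;
            bool quit = false;
            while (!quit)
            {
                // Blocks until an event arrives, so the process sleeps instead of spinning.
                if (SDL_WaitEvent(&event))
                {
                    if (event.type == SDL_QUIT)
                        quit = true;
                }
            }
            SDL_Quit();
            return 0;
        }

    For a loop that must keep rendering, the usual compromise is to keep polling but cap the frame rate with a small SDL_Delay (for example 10-20 ms), which already brings idle CPU usage down considerably.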

    Read the article

  • Preprocessor "macro function" vs. function pointer - best practice?

    - by Dustin
    I recently started a small personal project (an RGB-to-BGR value conversion program) in C, and I realised that a function that converts from RGB to BGR can not only perform the conversion but also the inversion. Obviously that means I don't really need two functions rgb2bgr and bgr2rgb. However, does it matter whether I use a function pointer instead of a macro? For example:

        int rgb2bgr (const int rgb);

        /*
         * Should I do this because it allows the compiler to issue
         * appropriate error messages using the proper function name,
         * not to mention possible debugging benefits?
         */
        int (*bgr2rgb) (const int bgr) = rgb2bgr;

        /*
         * Or should I do this since it is merely a convenience
         * and they're really the same function anyway?
         */
        #define bgr2rgb(bgr) (rgb2bgr (bgr))

    I'm not necessarily looking for a change in execution efficiency; it's more of a subjective question out of curiosity. I am well aware of the fact that type safety is neither lost nor gained using either method. Would the function pointer merely be a convenience, or are there more practical benefits to be gained of which I am unaware?
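
    A third option worth weighing, sketched below under the assumption that the conversion is a simple byte swap of a packed 0xRRGGBB int: a thin wrapper function. Like the function pointer it keeps real names in compiler diagnostics, and like the macro it can be folded away entirely; unlike the pointer it cannot be accidentally reassigned.

        /* The conversion assumed here: swap the red and blue bytes of a 0xRRGGBB value. */
        int rgb2bgr(const int rgb)
        {
            return ((rgb & 0xFF) << 16) | (rgb & 0xFF00) | ((rgb >> 16) & 0xFF);
        }

        /* Thin, type-checked alias; a decent compiler inlines this into a direct call. */
        static inline int bgr2rgb(const int bgr)
        {
            return rgb2bgr(bgr);
        }

        int main(void)
        {
            return bgr2rgb(rgb2bgr(0x123456)) == 0x123456 ? 0 : 1;
        }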

    Read the article

  • Cross-platform iteration of Unicode string

    - by kizzx2
    I want to iterate over each character of a Unicode string, treating each surrogate pair and combining character sequence as a single unit (one grapheme). Example: the text "नमस्ते" is comprised of the code points U+0928, U+092E, U+0938, U+094D, U+0924, U+0947, of which U+094D and U+0947 are combining marks.

        static void Main(string[] args)
        {
            const string s = "नमस्ते";
            Console.WriteLine(s.Length); // Outputs "6"
            var l = 0;
            var e = System.Globalization.StringInfo.GetTextElementEnumerator(s);
            while(e.MoveNext()) l++;
            Console.WriteLine(l); // Outputs "4"
        }

    So there we have it in .NET. We also have Win32's CharNextW():

        #include <Windows.h>
        #include <iostream>
        #include <string>

        int main()
        {
            const wchar_t * s = L"नमस्ते";
            std::cout << std::wstring(s).length() << std::endl; // Gives "6"
            int l = 0;
            while(CharNextW(s) != s)
            {
                s = CharNextW(s);
                ++l;
            }
            std::cout << l << std::endl; // Gives "4"
            return 0;
        }

    Question: both ways I know of are specific to Microsoft. Are there portable ways to do it? I heard about ICU but I couldn't quickly find something related (UnicodeString(s).length() still gives 6). It would be an acceptable answer to point to the related function/module in ICU. C++ doesn't have a notion of Unicode, so a lightweight cross-platform library for dealing with these issues would also make an acceptable answer.
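
    For reference, a minimal sketch of the ICU route (assuming ICU4C is installed and linked): grapheme boundaries come from the "character" break iterator, not from UnicodeString::length(), which counts UTF-16 code units.

        #include <unicode/brkiter.h>
        #include <unicode/locid.h>
        #include <unicode/unistr.h>
        #include <iostream>

        int main()
        {
            // U+0928 U+092E U+0938 U+094D U+0924 U+0947 ("नमस्ते")
            const UChar text[] = { 0x0928, 0x092E, 0x0938, 0x094D, 0x0924, 0x0947, 0 };
            icu::UnicodeString s(text);

            UErrorCode status = U_ZERO_ERROR;
            icu::BreakIterator* bi =
                icu::BreakIterator::createCharacterInstance(icu::Locale::getDefault(), status);
            if (U_FAILURE(status)) return 1;

            bi->setText(s);
            int32_t graphemes = 0;
            for (bi->first(); bi->next() != icu::BreakIterator::DONE; )
                ++graphemes;  // each step is one grapheme cluster

            std::cout << "UTF-16 code units: " << s.length()   // 6
                      << ", graphemes: "       << graphemes    // 4
                      << std::endl;
            delete bi;
            return 0;
        }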

    Read the article

  • Simplest way to mix sequences of types with iostreams?

    - by Kylotan
    I have a function template<typename T> void write(const T&), which is implemented in terms of writing the T object to an ostream, and a matching template<typename T> T read() that reads a T back from an istream. I am basically using iostreams as a plain-text serialisation format, which obviously works fine for most built-in types, although I'm not sure how to effectively handle std::string just yet. I'd like to be able to write out a sequence of objects too, e.g. template<typename T> void write(const std::vector<T>&), or an iterator-based equivalent (although in practice it would always be used with a vector). However, while writing an overload that iterates over the elements and writes them out is easy enough to do, this doesn't add enough information to allow the matching read operation to know how each element is delimited, which is essentially the same problem that I have with a single std::string. Is there a single approach that can work for all basic types and std::string? Or perhaps I can get away with two overloads, one for numeric types and one for strings? (Either using different delimiters, or the string using a delimiter-escaping mechanism, perhaps.)
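
    A minimal sketch of one common answer, assuming a length prefix is acceptable in the text format: prefix both strings and sequences with their size, so the reader never has to guess at delimiters. The function names below mirror the question, but the exact signatures are an assumption.

        #include <cstddef>
        #include <iostream>
        #include <string>
        #include <vector>

        // Strings: "<length> <raw characters>", so embedded spaces need no escaping.
        void write(std::ostream& os, const std::string& s)
        {
            os << s.size() << ' ' << s;
        }

        std::string readString(std::istream& is)
        {
            std::string::size_type n = 0;
            is >> n;
            is.get();                               // consume the single separator
            std::string s(n, '\0');
            if (n) is.read(&s[0], static_cast<std::streamsize>(n));
            return s;
        }

        // Sequences: "<count> <elem> <elem> ...", reusing the element-level overloads.
        template <typename T>
        void write(std::ostream& os, const std::vector<T>& v)
        {
            os << v.size();
            for (std::size_t i = 0; i < v.size(); ++i)
                os << ' ' << v[i];
        }

        template <typename T>
        std::vector<T> readVector(std::istream& is)
        {
            std::size_t n = 0;
            is >> n;
            std::vector<T> v(n);
            for (std::size_t i = 0; i < n; ++i)
                is >> v[i];
            return v;
        }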

    Read the article

  • Universal OAuth for Objective-C class?

    - by phpnerd211
    I have an app that connects to 6+ social networks via APIs. What I want to do is transfer over my oAuth calls to call directly from the phone (not from the server). Here's what I have (for tumblr): // Set some variables NSString *consumerKey = CONSUMER_KEY_HERE; NSString *sharedSecret = SHARED_SECRET_HERE; NSString *callToURL = @"https://tumblr.com/oauth/access_token"; NSString *thePassword = PASSWORD_HERE; NSString *theUsername = USERNAME_HERE; // Calculate nonce & timestamp NSString *nonce = [[NSString stringWithFormat:@"%d", arc4random()] retain]; time_t t; time(&t); mktime(gmtime(&t)); NSString *timestamp = [[NSString stringWithFormat:@"%d", (int)(((float)([[NSDate date] timeIntervalSince1970])) + 0.5)] retain]; // Generate signature NSString *baseString = [NSString stringWithFormat:@"GET&%@&%@",[callToURL urlEncode],[[NSString stringWithFormat:@"oauth_consumer_key=%@&oauth_nonce=%@&oauth_signature_method=HMAC-SHA1&oauth_timestamp=%@&oauth_version=1.0&x_auth_mode=client_auth&x_auth_password=%@&x_auth_username=%@",consumerKey,nonce,timestamp,thePassword,theUsername] urlEncode]]; NSLog(@"baseString: %@",baseString); const char *cKey = [sharedSecret cStringUsingEncoding:NSASCIIStringEncoding]; const char *cData = [baseString cStringUsingEncoding:NSASCIIStringEncoding]; unsigned char cHMAC[CC_SHA256_DIGEST_LENGTH]; CCHmac(kCCHmacAlgSHA256, cKey, strlen(cKey), cData, strlen(cData), cHMAC); NSData *HMAC = [[NSData alloc] initWithBytes:cHMAC length:sizeof(cHMAC)]; NSString *signature = [HMAC base64EncodedString]; NSString *theUrl = [NSString stringWithFormat:@"%@?oauth_consumer_key=%@&oauth_nonce=%@&oauth_signature=%@&oauth_signature_method=HMAC-SHA1&oauth_timestamp=%@&oauth_version=1.0&x_auth_mode=client_auth&x_auth_password=%@&x_auth_username=%@",callToURL,consumerKey,nonce,signature,timestamp,thePassword,theUsername]; From tumblr, I get this error: oauth_signature does not match expected value I've done some forum scouring, and no oAuth for objective-c classes worked for what I want to do. I also don't want to have to download and implement 6+ social API classes into my project and do it that way.

    Read the article

  • What is the rationale for not allowing overloading of the C++ conversion operator with non-member functions?

    - by Vicente Botet Escriba
    C++0x has added explicit conversion operators, but they must always be defined as members of the source class. The same applies to the assignment operator: it must be defined on the target class. When the source and target classes of the needed conversion are independent of each other, the source cannot define a conversion operator and the target cannot define a constructor from the source. Usually we get around this by defining a specific function such as

        Target ConvertToTarget(Source& v);

    If C++0x allowed conversion operators to be overloaded by non-member functions, we could, for example, define the conversion implicitly or explicitly between unrelated types:

        template < typename To, typename From >
        operator To(const From& val);

    For example, we could specialize the conversion from chrono::time_point to posix_time::ptime as follows:

        template < class Clock, class Duration>
        operator boost::posix_time::ptime(
            const boost::chrono::time_point<Clock, Duration>& from)
        {
            using namespace boost;
            typedef chrono::time_point<Clock, Duration> time_point_t;
            typedef chrono::nanoseconds duration_t;
            typedef duration_t::rep rep_t;
            rep_t d = chrono::duration_cast<duration_t>(from.time_since_epoch()).count();
            rep_t sec = d/1000000000;
            rep_t nsec = d%1000000000;
            return posix_time::from_time_t(0)+
                   posix_time::seconds(static_cast<long>(sec))+
                   posix_time::nanoseconds(nsec);
        }

    and then use the conversion like any other conversion. For a more complete description of the problem, see here or my Boost.Conversion library. So the question is: what is the rationale for not allowing C++ conversion operators to be overloaded with non-member functions?

    Read the article

  • parse string with regular expression

    - by llamerr
    I am trying to parse this string:

        $right = '34601)S(1,6)[2] - 34601)(11)[2] + 34601)(3)[2,4]';

    with the following regexp:

        const word = '(\d{3}\d{2}\)S{0,1}\([^\)]*\)S{0,1}\[[^\]]*\])';
        preg_match('/'.word.'{1}(?:\s{1}([+-]{1})\s{1}'.word.'){0,}/', $right, $matches);
        print_r($matches);

    I want it to return an array like this:

        Array
        (
            [0] => 34601)S(1,6)[2] - 34601)(11)[2] + 34601)(3)[2,4]
            [1] => 34601)S(1,6)[2]
            [2] => -
            [3] => 34601)(11)[2]
            [4] => +
            [5] => 34601)(3)[2,4]
        )

    but it returns only the following:

        Array
        (
            [0] => 34601)S(1,6)[2] - 34601)(11)[2] + 34601)(3)[2,4]
            [1] => 34601)S(1,6)[2]
            [2] => +
            [3] => 34601)(3)[2,4]
        )

    I think it's because of the [^\)]* or [^\]]* in word, but how should I correct the regexp to match this another way? I tried to be more specific with \d+(?:[,#]\d+){0,}, so word becomes

        const word = '(\d{3}\d{2}\)S{0,1}\(\d+(?:[,#]\d+){0,}\)S{0,1}\[\d+(?:[,#]\d+){0,}\])';

    but it matches nothing.

    Read the article

  • Eliminate full table scan due to BETWEEN (and GROUP BY)

    - by Dave Jarvis
    Description According to the explain command, there is a range that is causing a query to perform a full table scan (160k rows). How do I keep the range condition and reduce the scanning? I expect the culprit to be: Y.YEAR BETWEEN 1900 AND 2009 AND Code Here is the code that has the range condition (the STATION_DISTRICT is likely superfluous). SELECT COUNT(1) as MEASUREMENTS, AVG(D.AMOUNT) as AMOUNT, Y.YEAR as YEAR, MAKEDATE(Y.YEAR,1) as AMOUNT_DATE FROM CITY C, STATION S, STATION_DISTRICT SD, YEAR_REF Y FORCE INDEX(YEAR_IDX), MONTH_REF M, DAILY D WHERE -- For a specific city ... -- C.ID = 10663 AND -- Find all the stations within a specific unit radius ... -- 6371.009 * SQRT( POW(RADIANS(C.LATITUDE_DECIMAL - S.LATITUDE_DECIMAL), 2) + (COS(RADIANS(C.LATITUDE_DECIMAL + S.LATITUDE_DECIMAL) / 2) * POW(RADIANS(C.LONGITUDE_DECIMAL - S.LONGITUDE_DECIMAL), 2)) ) <= 50 AND -- Get the station district identification for the matching station. -- S.STATION_DISTRICT_ID = SD.ID AND -- Gather all known years for that station ... -- Y.STATION_DISTRICT_ID = SD.ID AND -- The data before 1900 is shaky; insufficient after 2009. -- Y.YEAR BETWEEN 1900 AND 2009 AND -- Filtered by all known months ... -- M.YEAR_REF_ID = Y.ID AND -- Whittled down by category ... -- M.CATEGORY_ID = '003' AND -- Into the valid daily climate data. -- M.ID = D.MONTH_REF_ID AND D.DAILY_FLAG_ID <> 'M' GROUP BY Y.YEAR Update The SQL is performing a full table scan, which results in MySQL performing a "copy to tmp table", as shown here: +----+-------------+-------+--------+-----------------------------------+--------------+---------+-------------------------------+--------+-------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------+--------+-----------------------------------+--------------+---------+-------------------------------+--------+-------------+ | 1 | SIMPLE | C | const | PRIMARY | PRIMARY | 4 | const | 1 | | | 1 | SIMPLE | Y | range | YEAR_IDX | YEAR_IDX | 4 | NULL | 160422 | Using where | | 1 | SIMPLE | SD | eq_ref | PRIMARY | PRIMARY | 4 | climate.Y.STATION_DISTRICT_ID | 1 | Using index | | 1 | SIMPLE | S | eq_ref | PRIMARY | PRIMARY | 4 | climate.SD.ID | 1 | Using where | | 1 | SIMPLE | M | ref | PRIMARY,YEAR_REF_IDX,CATEGORY_IDX | YEAR_REF_IDX | 8 | climate.Y.ID | 54 | Using where | | 1 | SIMPLE | D | ref | INDEX | INDEX | 8 | climate.M.ID | 11 | Using where | +----+-------------+-------+--------+-----------------------------------+--------------+---------+-------------------------------+--------+-------------+ Related http://dev.mysql.com/doc/refman/5.0/en/how-to-avoid-table-scan.html http://dev.mysql.com/doc/refman/5.0/en/where-optimizations.html http://stackoverflow.com/questions/557425/optimize-sql-that-uses-between-clause Thank you!

    Read the article

  • Currently using View, Should I use a hard table instead?

    - by 1001010101
    I am currently debating whether my table, mapping_uGroups_uProducts, which is a view formed by the following table: CREATE ALGORITHM=UNDEFINED DEFINER=`root`@`localhost` SQL SECURITY DEFINER VIEW `db`.`mapping_uGroups_uProducts` AS select distinct `X`.`upID` AS `upID`,`Z`.`ugID` AS `ugID` from ((`db`.`mapping_uProducts_Products` `X` join `db`.`productsInfo` `Y` on((`X`.`pID` = `Y`.`pID`))) join `db`.`mapping_uGroups_Groups` `Z` on((`Y`.`gID` = `Z`.`gID`))); My current query is: SELECT upID FROM uProductsInfo \ JOIN fs_uProducts USING (upID) column \ JOIN mapping_uGroups_uProducts USING (upID) -- could be faster if we use hard table and index \ JOIN mapping_fs_key USING (fsKeyID) \ WHERE fsName="OVERALL" \ AND ugID=1 \ ORDER BY score DESC \ LIMIT 0,30; which is pretty slow. (for 30 results, it requires about 10 secondes). I think the reason for my query being so slow is definitely due to the fact that that particular query relies on a VIEW which has no index to speed things up. +----+-------------+----------------+--------+----------------+---------+---------+---------------------------------------+-------+---------------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+----------------+--------+----------------+---------+---------+---------------------------------------+-------+---------------------------------+ | 1 | PRIMARY | mapping_fs_key | const | PRIMARY,fsName | fsName | 386 | const | 1 | Using temporary; Using filesort | | 1 | PRIMARY | <derived2> | ALL | NULL | NULL | NULL | NULL | 19706 | Using where | | 1 | PRIMARY | uProductsInfo | eq_ref | PRIMARY | PRIMARY | 4 | mapping_uGroups_uProducts.upID | 1 | Using index | | 1 | PRIMARY | fs_uProducts | ref | upID | upID | 4 | db.uProductsInfo.upID | 221 | Using where | | 2 | DERIVED | X | ALL | PRIMARY | NULL | NULL | NULL | 40772 | Using temporary | | 2 | DERIVED | Y | eq_ref | PRIMARY | PRIMARY | 4 | db.X.pID | 1 | Distinct | | 2 | DERIVED | Z | ref | PRIMARY | PRIMARY | 4 | db.Y.gID | 2 | Using index; Distinct | +----+-------------+----------------+--------+----------------+---------+---------+---------------------------------------+-------+---------------------------------+ 7 rows in set (0.48 sec) The explain here looks pretty cryptic, and I don't know whether I should drop view and write a script to just insert everything in the view to a hard table. ( obviously, it will lose the flexibility of the view since the mapping changes quite frequently). Does anyone have any idea to how I can optimize my schema better?

    Read the article

  • Saving ntext data from SQL Server to file directory using asp

    - by April
    A variety of files (pdf, images, etc.) are stored in a ntext field on a MS SQL Server. I am not sure what type is in this field, other than it shows question marks and undefined characters, I am assuming they are binary type. The script is supposed to iterate through the rows and extract and save these files to a temp directory. "filename" and "contenttype" are given, and "data" is whatever is in the ntext field. I have tried several solutions: 1) data.SaveToFile "/temp/"&filename, 2 Error: Object required: '????????????????????' ??? 2) File.WriteAllBytes "/temp/"&filename, data Error: Object required: 'File' I have no idea how to import this, or the Server for MapPath. (Cue: what a noob!) 3) Const adTypeBinary = 1 Const adSaveCreateOverWrite = 2 Dim BinaryStream Set BinaryStream = CreateObject("ADODB.Stream") BinaryStream.Type = adTypeBinary BinaryStream.Open BinaryStream.Write data BinaryStream.SaveToFile "C:\temp\" & filename, adSaveCreateOverWrite Error: Arguments are of the wrong type, are out of acceptable range, or are in conflict with one another. 4) Response.ContentType = contenttype Response.AddHeader "content-disposition","attachment;" & filename Response.BinaryWrite data response.end This works, but the file should be saving to the server instead of popping up save-as dialog. I am not sure if there is a way to save the response to file. Thanks for shedding light on any of these problems!

    Read the article

  • Enumeration trouble: redeclared as different kind of symbol

    - by Matt
    Hello all. I am writing a program that is supposed to help me learn about enumeration data types in C++. The current trouble is that the compiler doesn't like my enum usage when I try to use the new data type as I would other data types. I am getting the error "redeclared as different kind of symbol" when compiling my triangleShape function. Take a look at the relevant code; any insight is appreciated. Thanks! (All functions are in their own .cpp files.)

    Header file:

        #ifndef HEADER_H_INCLUDED
        #define HEADER_H_INCLUDED

        #include <iostream>
        #include <iomanip>
        using namespace std;

        enum triangleType {noTriangle, scalene, isoceles, equilateral};

        //prototypes
        void extern input(float&, float&, float&);
        triangleType extern triangleShape(float, float, float);
        /*void extern output (float, float, float);*/
        void extern myLabel(const char *, const char *);

        #endif // HEADER_H_INCLUDED

    Main function:

        //8.1 main
        // this progam...
        #include "header.h"

        int main()
        {
            float sideLength1, sideLength2, sideLength3;
            char response;

            do //main loop
            {
                input (sideLength1, sideLength2, sideLength3);
                triangleShape (sideLength1, sideLength2, sideLength3);
                //output (sideLength1, sideLength2, sideLength3);

                cout << "\nAny more triangles to analyze? (y,n) ";
                cin >> response;
            }
            while (response == 'Y' || response == 'y');

            myLabel ("8.1", "2/11/2011");

            return 0;
        }

    triangleShape function:

        # include "header.h"

        triangleType triangleShape(sideLenght1, sideLength2, sideLength3)
        {
            triangleType triangle;

            return triangle;
        }
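
    For reference, a sketch of what the definition could look like with the parameter types supplied, which is what the "redeclared as different kind of symbol" error is pointing at; the classification logic shown is only an illustrative guess at what the exercise wants.

        #include "header.h"

        // The definition must repeat the parameter types from the prototype
        // "triangleType triangleShape(float, float, float);" in header.h.
        triangleType triangleShape(float side1, float side2, float side3)
        {
            // Reject non-positive sides and triples that violate the triangle inequality.
            if (side1 <= 0 || side2 <= 0 || side3 <= 0 ||
                side1 + side2 <= side3 || side1 + side3 <= side2 || side2 + side3 <= side1)
                return noTriangle;
            if (side1 == side2 && side2 == side3)
                return equilateral;
            if (side1 == side2 || side2 == side3 || side1 == side3)
                return isoceles;
            return scalene;
        }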

    Read the article

  • Template trick to optimize out allocations

    - by anon
    I have:

        struct DoubleVec {
            std::vector<double> data;
        };

        DoubleVec operator+(const DoubleVec& lhs, const DoubleVec& rhs) {
            DoubleVec ans(lhs.size());
            for(int i = 0; i < lhs.size(); ++i) {
                ans[i] = lhs[i] + rhs[i]; // assume lhs.size() == rhs.size()
            }
            return ans;
        }

        DoubleVec someFunc(DoubleVec a, DoubleVec b, DoubleVec c, DoubleVec d) {
            DoubleVec ans = a + b + c + d;
        }

    Now, in the above, "a + b + c + d" will cause the creation of three temporary DoubleVecs. Is there a way to optimize this away with some type of template magic, i.e. to optimize it down to something equivalent to:

        DoubleVec ans(a.size());
        for(int i = 0; i < ans.size(); i++)
            ans[i] = a[i] + b[i] + c[i] + d[i];

    You can assume all DoubleVecs have the same number of elements. The high-level idea is to do some kind of templated magic on "+" which delays the computation until the "=", at which point it looks into itself, sees that it is just adding these numbers, and synthesizes a[i] + b[i] + c[i] + d[i] instead of all the temporaries. Thanks!
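
    What is being described is the classic expression-template technique. A minimal sketch follows, assuming (as the question does) that all vectors have the same size; the member layout and element access are simplified relative to the original struct.

        #include <cstddef>
        #include <vector>

        // CRTP base so operator+ only matches vector expressions, never arbitrary types.
        template <typename E>
        struct VecExpr {
            const E& self() const { return static_cast<const E&>(*this); }
            double operator[](std::size_t i) const { return self()[i]; }
            std::size_t size() const { return self().size(); }
        };

        struct DoubleVec : VecExpr<DoubleVec> {
            std::vector<double> data;

            explicit DoubleVec(std::size_t n = 0) : data(n) {}

            // Building from any expression evaluates it element by element,
            // with no intermediate DoubleVec temporaries.
            template <typename E>
            DoubleVec(const VecExpr<E>& e) : data(e.size()) {
                for (std::size_t i = 0; i < data.size(); ++i) data[i] = e[i];
            }

            double  operator[](std::size_t i) const { return data[i]; }
            double& operator[](std::size_t i)       { return data[i]; }
            std::size_t size() const { return data.size(); }
        };

        // Node representing "l + r"; the addition is deferred until operator[] is called.
        template <typename L, typename R>
        struct AddExpr : VecExpr<AddExpr<L, R> > {
            const L& l;
            const R& r;
            AddExpr(const L& l_, const R& r_) : l(l_), r(r_) {}
            double operator[](std::size_t i) const { return l[i] + r[i]; }
            std::size_t size() const { return l.size(); }
        };

        template <typename L, typename R>
        AddExpr<L, R> operator+(const VecExpr<L>& l, const VecExpr<R>& r) {
            return AddExpr<L, R>(l.self(), r.self());
        }

        DoubleVec someFunc(const DoubleVec& a, const DoubleVec& b,
                           const DoubleVec& c, const DoubleVec& d) {
            return a + b + c + d;   // one loop, one allocation, no DoubleVec temporaries
        }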

    Read the article

  • How can variadic char template arguments from user defined literals be converted back into numeric types?

    - by Pubby
    This question is being asked because of this one. C++11 allows you to define literal suffixes like this for numeric literals:

        template<char...> OutputType operator "" _suffix();

    which means that 503_suffix becomes <'5','0','3'>. This is nice, although it isn't very useful in the form it's in. How can I transform this back into a numeric type? This would turn <'5','0','3'> into a constexpr 503. Additionally, it must also work on floating-point literals: <'5','.','3'> would turn into int 5 or float 5.3. A partial solution was found in the previous question, but it doesn't work on non-integers:

        template <typename t>
        constexpr t pow(t base, int exp) {
            return (exp > 0) ? base * pow(base, exp-1) : 1;
        };

        template <char...> struct literal;

        template <> struct literal<> {
            static const unsigned int to_int = 0;
        };

        template <char c, char ...cv> struct literal<c, cv...> {
            static const unsigned int to_int =
                (c - '0') * pow(10, sizeof...(cv)) + literal<cv...>::to_int;
        };

        // use: literal<...>::to_int
        // literal<'1','.','5'>::to_int doesn't work
        // literal<'1','.','5'>::to_float not implemented
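
    One possible direction, sketched below: fold the character pack left to right with a constexpr function, switching into "fractional" mode when a '.' is seen. This only handles plain decimal literals (no digit separators, exponents, or hex), and the helper names are made up here.

        #include <cstdio>

        // Base case: no characters left, the accumulated value is the result.
        constexpr double parse_digits(double value, double) { return value; }

        // scale == 0 means "still in the integer part"; afterwards it is the weight
        // of the next fractional digit (0.1, 0.01, ...).
        template <typename... Rest>
        constexpr double parse_digits(double value, double scale, char c, Rest... rest) {
            return c == '.'     ? parse_digits(value, 0.1, rest...)
                 : scale == 0.0 ? parse_digits(value * 10.0 + (c - '0'), 0.0, rest...)
                                : parse_digits(value + (c - '0') * scale, scale * 0.1, rest...);
        }

        template <char... Cs>
        constexpr double operator"" _num() { return parse_digits(0.0, 0.0, Cs...); }

        static_assert(503_num == 503.0, "integer literals work");
        static_assert(5.3_num > 5.29 && 5.3_num < 5.31, "fractional literals work");

        int main() {
            std::printf("%f %f\n", 503_num, 5.3_num);
            return 0;
        }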

    Read the article

  • How come some C++ functions with unspecified linkage build with C linkage?

    - by christoffer
    This is something that makes me fairly perplexed. I have a C++ file that implements a set of functions, and a header file that defines prototypes for them. When building with Visual Studio or MingW-gcc, I get linking errors on two of the functions, and adding an 'extern "C"' qualifier resolved the error. How is this possible? Header file, "some_header.h": // Definition of struct DEMO_GLOBAL_DATA omitted DWORD WINAPI ThreadFunction(LPVOID lpData); void WriteLogString(void *pUserData, const char *pString, unsigned long nStringLen); void CheckValid(DEMO_GLOBAL_DATA *pData); int HandleStart(DEMO_GLOBAL_DATA * pDAta, TCHAR * pLogFileName); void HandleEnd(DEMO_GLOBAL_DATA *pData); C++ file, "some_implementation.cpp" #include "some_header.h" DWORD WINAPI ThreadFunction(LPVOID lpData) { /* omitted */ } void WriteLogString(void *pUserData, const char *pString, unsigned long nStringLen) { /* omitted */ } void CheckValid(DEMO_GLOBAL_DATA *pData) { /* omitted */ } int HandleStart(DEMO_GLOBAL_DATA * pDAta, TCHAR * pLogFileName) { /* omitted */ } void HandleEnd(DEMO_GLOBAL_DATA *pData) { /* omitted */ } The implementations compile without warnings, but when linking with the UI code that calls these, I get a normal error LNK2001: unresolved external symbol "int __cdecl HandleStart(struct _DEMO_GLOBAL_DATA *, wchar_t *) error LNK2001: unresolved external symbol "void __cdecl CheckValid(struct _DEMO_MAIN_GLOBAL_DATA * What really confuses me, now, is that only these two functions (HandleStart and CheckValid) seems to be built with C linkage. Explicitly adding "extern 'C'" declarations for only these two resolved the linking error, and the application builds and runs. Adding "extern 'C'" on some other function, such as HandleEnd, introduces a new linking error, so that one is obviously compiled correctly. The implementation file is never modified in any of this, only the prototypes.

    Read the article

  • Compile time type determination in C++

    - by dicroce
    A coworker recently showed me some code that he found online. It appears to allow compile-time determination of whether a type has an "is a" relationship with another type. I think this is totally awesome, but I have to admit that I'm clueless as to how this actually works. Can anyone explain this to me?

        template<typename BaseT, typename DerivedT>
        inline bool isRelated(const DerivedT&)
        {
            DerivedT derived();
            char test(const BaseT&); // sizeof(test()) == sizeof(char)
            char (&test(...))[2];    // sizeof(test()) == sizeof(char[2])
            struct conversion
            {
                enum { exists = (sizeof(test(derived())) == sizeof(char)) };
            };
            return conversion::exists;
        }

    Once this function is defined, you can use it like this:

        #include <iostream>

        class base {};
        class derived : public base {};
        class unrelated {};

        int main()
        {
            base b;
            derived d;
            unrelated u;

            if( isRelated<base>( b ) )
                std::cout << "b is related to base" << std::endl;
            if( isRelated<base>( d ) )
                std::cout << "d is related to base" << std::endl;
            if( !isRelated<base>( u ) )
                std::cout << "u is not related to base" << std::endl;
        }
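
    In brief, the snippet relies on overload resolution happening inside an unevaluated sizeof: the two local test declarations differ only in the size of their return type, so sizeof(test(derived())) asks which overload would be chosen for a DerivedT value, the char version if DerivedT converts to const BaseT&, the char[2] catch-all otherwise, without running any code. The same compile-time check is available off the shelf as a type trait (std::is_base_of in C++11, boost::is_base_of for older compilers); a minimal sketch:

        #include <iostream>
        #include <type_traits>

        class base {};
        class derived : public base {};
        class unrelated {};

        int main() {
            std::cout << std::boolalpha
                      << std::is_base_of<base, derived>::value   << '\n'   // true
                      << std::is_base_of<base, base>::value      << '\n'   // true
                      << std::is_base_of<base, unrelated>::value << '\n';  // false
            return 0;
        }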

    Read the article

  • Add uchar values to a ushort array with SSE2 or SSE3

    - by pompolus
    I have an unsigned short dst[16][16] matrix and a larger unsigned char src[m][n] matrix. Now I have to access the src matrix and add a 16x16 submatrix to dst, using SSE2 or SSE3. In an older implementation, I was sure that my summed values were never greater than 256, so I could do this:

        for (int row = 0; row < 16; ++row)
        {
            __m128i subMat = _mm_lddqu_si128(reinterpret_cast<const __m128i*>(src));
            dst[row] = _mm_add_epi8(dst[row], subMat);
            src += W; // Step to the next row I need to add
        }

    where W is an offset to reach the desired rows. This code works, but now my values in src are larger and the sums could be greater than 256, so I need to store them as ushort. I've tried this:

        for (int row = 0; row < 16; ++row)
        {
            __m128i subMat = _mm_lddqu_si128(reinterpret_cast<const __m128i*>(src));
            dst[row] = _mm_add_epi16(dst[row], subMat);
            src += W; // Step to the next row I need to add
        }

    but it doesn't work. I'm not so good with SSE, so any help will be appreciated.
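
    A sketch of the usual SSE2 approach (the function name is made up, dst is treated as one contiguous row of 16 ushorts, and unaligned loads/stores are used to stay on the safe side): the 16 source bytes have to be widened to two vectors of eight 16-bit lanes, by interleaving with zero, before they can be added to the ushort destination.

        #include <emmintrin.h>  // SSE2 intrinsics

        // Add one row of 16 unsigned chars from src into 16 unsigned shorts at dst.
        void addRowU8ToU16(unsigned short* dst, const unsigned char* src)
        {
            const __m128i zero  = _mm_setzero_si128();
            const __m128i bytes = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src));

            // Zero-extend: bytes 0..7 and 8..15 each become eight 16-bit lanes.
            const __m128i lo = _mm_unpacklo_epi8(bytes, zero);
            const __m128i hi = _mm_unpackhi_epi8(bytes, zero);

            const __m128i d0 = _mm_loadu_si128(reinterpret_cast<const __m128i*>(dst));
            const __m128i d1 = _mm_loadu_si128(reinterpret_cast<const __m128i*>(dst + 8));

            _mm_storeu_si128(reinterpret_cast<__m128i*>(dst),     _mm_add_epi16(d0, lo));
            _mm_storeu_si128(reinterpret_cast<__m128i*>(dst + 8), _mm_add_epi16(d1, hi));
        }

    Calling this once per row (advancing src by W and dst by 16) mirrors the structure of the loop in the question.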

    Read the article

  • Improving File Read Performance (single file, C++, Windows)

    - by david
    I have large (hundreds of MB or more) files that I need to read blocks from using C++ on Windows. Currently the relevant functions are: errorType LargeFile::read( void* data_out, __int64 start_position, __int64 size_bytes ) const { if( !m_open ) { // return error } else { seekPosition( start_position ); DWORD bytes_read; BOOL result = ReadFile( m_file, data_out, DWORD( size_bytes ), &bytes_read, NULL ); if( size_bytes != bytes_read || result != TRUE ) { // return error } } // return no error } void LargeFile::seekPosition( __int64 position ) const { LARGE_INTEGER target; target.QuadPart = LONGLONG( position ); SetFilePointerEx( m_file, target, NULL, FILE_BEGIN ); } The performance of the above does not seem to be very good. Reads are on 4K blocks of the file. Some reads are coherent, most are not. A couple questions: Is there a good way to profile the reads? What things might improve the performance? For example, would sector-aligning the data be useful? I'm relatively new to file i/o optimization, so suggestions or pointers to articles/tutorials would be helpful.

    Read the article

  • overloading "<<" with a struct (no class) cout style

    - by monkeyking
    I have a struct that I'd like to output using either std::cout or some other output stream. Is this possible without using classes? Thanks.

        #include <iostream>
        #include <fstream>

        template <typename T>
        struct point{
            T x;
            T y;
        };

        template <typename T>
        std::ostream& dump(std::ostream &o,point<T> p) const{
            o<<"x: " << p.x <<"\ty: " << p.y <<std::endl;
        }

        template<typename T>
        std::ostream& operator << (std::ostream &o,const point<T> &a){
            return dump(o,a);
        }

        int main(){
            point<double> p;
            p.x=0.1;
            p.y=0.3;
            dump(std::cout,p);
            std::cout << p ;//how?
            return 0;
        }

    I've tried different syntaxes but I can't seem to make it work.
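
    For reference, a sketch of a version that does compile: the trailing const has to go (only member functions can be const-qualified) and dump must actually return the stream; no class is needed.

        #include <iostream>

        template <typename T>
        struct point {
            T x;
            T y;
        };

        // Free function: no trailing const, and the stream is returned for chaining.
        template <typename T>
        std::ostream& dump(std::ostream& o, const point<T>& p) {
            return o << "x: " << p.x << "\ty: " << p.y;
        }

        template <typename T>
        std::ostream& operator<<(std::ostream& o, const point<T>& p) {
            return dump(o, p);
        }

        int main() {
            point<double> p;
            p.x = 0.1;
            p.y = 0.3;
            std::cout << p << std::endl;  // works with any std::ostream
            return 0;
        }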

    Read the article

  • C++ class with char pointers returning garbage

    - by JMP
    I created a class "Entry" to handle dictionary entries, but in my main() I create an Entry and try to cout the char* public members, and I get garbage. When I look at the watch list in the debugger I see the values being set, but as soon as I access the values there is garbage. Can anyone elaborate on what I might be missing?

        #include <iostream>
        using namespace std;

        class Entry
        {
        public:
            Entry(const char *line);
            char *Word;
            char *Definition;
        };

        Entry::Entry(const char *line)
        {
            char tmp[100];
            strcpy(tmp, line);
            Word = strtok(tmp, ",") + '\0';
            Definition = strtok(0,",") + '\0';
        }

        int main()
        {
            Entry *e = new Entry("drink,What you need after a long day's work");
            cout << "Word: " << e->Word << endl;
            cout << "Def: " << e->Definition << endl;
            cout << endl;
            delete e;
            e = 0;
            return 0;
        }
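
    A sketch of one likely explanation and fix (assuming the goal is simply to keep the two tokens): strtok returns pointers into the local tmp buffer, which is destroyed when the constructor returns, so Word and Definition end up dangling; storing copies, for example as std::string, avoids that.

        #include <iostream>
        #include <string>

        class Entry {
        public:
            explicit Entry(const std::string& line) {
                const std::string::size_type comma = line.find(',');
                Word = line.substr(0, comma);
                Definition = (comma == std::string::npos) ? "" : line.substr(comma + 1);
            }
            std::string Word;        // owns its characters, unlike a char* into a dead buffer
            std::string Definition;
        };

        int main() {
            Entry e("drink,What you need after a long day's work");
            std::cout << "Word: " << e.Word << std::endl;
            std::cout << "Def: " << e.Definition << std::endl;
            return 0;
        }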

    Read the article

  • Generating authentication header from azure table through objective-c

    - by user923370
    I'm fetching data from iCloud and for that I need to generate a header (azure table storage). I used the code below for that and it is generating the headers. But when I use these headers in my project it is showing "make sure that the value of authorization header is formed correctly including the signature." I googled a lot and tried many codes but in vain. Can anyone kindly please help me with where I'm going wrong in this code. -(id)generat{ NSString *messageToSign = [NSString stringWithFormat:@"%@/%@/%@", dateString,AZURE_ACCOUNT_NAME, tableName]; NSString *key = @"asasasasasasasasasasasasasasasasasasasasas=="; const char *cKey = [key cStringUsingEncoding:NSUTF8StringEncoding]; const char *cData = [messageToSign cStringUsingEncoding:NSUTF8StringEncoding]; unsigned char cHMAC[CC_SHA256_DIGEST_LENGTH]; CCHmac(kCCHmacAlgSHA256, cKey, strlen(cKey), cData, strlen(cData), cHMAC); NSData *HMAC = [[NSData alloc] initWithBytes:cHMAC length:sizeof(cHMAC)]; NSString *hash = [Base64 encode:HMAC]; NSLog(@"Encoded hash: %@", hash); NSURL *url=[NSURL URLWithString: @"http://my url"]; NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:url]; [request addValue:[NSString stringWithFormat:@"SharedKeyLite %@:%@",AZURE_ACCOUNT_NAME, hash] forHTTPHeaderField:@"Authorization"]; [request addValue:dateString forHTTPHeaderField:@"x-ms-date"]; [request addValue:@"application/atom+xml, application/xml"forHTTPHeaderField:@"Accept"]; [request addValue:@"UTF-8" forHTTPHeaderField:@"Accept-Charset"]; NSLog(@"Headers: %@", [request allHTTPHeaderFields]); NSLog(@"URL: %@", [[request URL] absoluteString]); return request; } -(NSString*)rfc1123String:(NSDate *)date { static NSDateFormatter *df = nil; if(df == nil) { df = [[NSDateFormatter alloc] init]; df.locale = [[[NSLocale alloc] initWithLocaleIdentifier:@"en_US"] autorelease]; df.timeZone = [NSTimeZone timeZoneWithAbbreviation:@"GMT"]; df.dateFormat = @"EEE',' dd MMM yyyy HH':'mm':'ss 'GMT'"; } return [df stringFromDate:date]; }

    Read the article

  • C++ specialized overload?

    - by acidzombie24
    (Edit: I am trying to close the question; I solved the problem with boost::is_base_and_derived.) In my class I want to do two things: 1) copy ints, floats and other normal values, and 2) copy structs that supply a special copy function (template<class T> T copyAs()). The struct MUST NOT return ints unless I explicitly ask for ints; I do not want the programmer making the mistake of writing int a = thatClass;. (Edit: someone mentioned that classes don't return anything; I mean using an operator Type() conversion overload.) How do I write my copy operator in such a way that I can copy both 1) ints, floats, etc. and 2) the structs, restricted in the way I mentioned? I tried

        template <class T2> T operator = (const T2& v);

    which would cover my ints, floats, etc., but how would it tell them apart from the structs? So I wrote

        T operator = (const SomeGenericBase& v);

    The idea was that the generic base would be used instead, and then I could call v.Whatever. But that backfires because the functions I want wouldn't exist unless I made them virtual, and virtual templates don't exist (also, I would hate to use virtual). I think the solution is to get rid of plain ints and have them convert to something that supports .as(), but I wrote something up and now I have the same problem: how do I differentiate ints from structs that have the .as() function template?
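
    A sketch of the shape such an assignment operator can take with SFINAE, along the lines of the boost::is_base_and_derived solution mentioned in the edit, shown here with the C++11 equivalents std::is_base_of and std::enable_if; Holder, SomeGenericBase and copyAs are stand-in names.

        #include <type_traits>

        struct SomeGenericBase {};   // structs that opt in derive from this tag type

        template <typename T>
        class Holder {
        public:
            // 1) Arithmetic types (int, float, ...) are copied directly.
            template <typename U>
            typename std::enable_if<std::is_arithmetic<U>::value, Holder&>::type
            operator=(const U& v) {
                value = static_cast<T>(v);
                return *this;
            }

            // 2) Types derived from SomeGenericBase must go through their copyAs<T>().
            template <typename U>
            typename std::enable_if<std::is_base_of<SomeGenericBase, U>::value, Holder&>::type
            operator=(const U& v) {
                value = v.template copyAs<T>();
                return *this;
            }

            T get() const { return value; }

        private:
            T value;
        };

        // Example of a conforming struct.
        struct Celsius : SomeGenericBase {
            double degrees;
            template <typename T>
            T copyAs() const { return static_cast<T>(degrees); }
        };

        int main() {
            Holder<double> h;
            h = 3;              // picks overload (1)
            Celsius c;
            c.degrees = 21.5;
            h = c;              // picks overload (2), via c.copyAs<double>()
            return h.get() == 21.5 ? 0 : 1;
        }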

    Read the article

  • Handling Apache Thrift list/map Return Types in C++

    - by initzero
    First off, I'll say I'm not the most competent C++ programmer, but I'm learning, and enjoying the power of Thrift. I've implemented a Thrift service with some basic functions that return void, i32, and list<string>. I'm using a Python client controlled by a Django web app to make RPC calls and it works pretty well. The generated code is pretty straightforward, except for list returns:

        namespace cpp Remote

        enum N_PROTO {
            N_TCP,
            N_UDP,
            N_ANY
        }

        service Rcon {
            i32 ping()
            i32 KillFlows()
            i32 RestartDispatch()
            i32 PrintActiveFlows()
            i32 PrintActiveListeners(1:i32 proto)
            list<string> ListAllFlows()
        }

    The generated signatures from Rcon.h:

        int32_t ping();
        int32_t KillFlows();
        int32_t RestartDispatch();
        int32_t PrintActiveFlows();
        int32_t PrintActiveListeners(const int32_t proto);
        int64_t ListenerBytesReceived(const int32_t id);
        void ListAllFlows(std::vector<std::string> & _return);

    As you see, the generated ListAllFlows() takes a reference to a vector of strings. I guess I expected it to return a vector of strings as laid out in the .thrift description. I'm wondering if I am meant to provide the function a vector of strings to modify, and whether Thrift will then handle returning it to my client despite the function returning void. I can find absolutely no resources or example usages of Thrift list<> types in C++. Any guidance would be appreciated.
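
    For reference, a sketch of how the generated method is typically called from a C++ client; the header paths and boost::shared_ptr usage follow Thrift 0.8/0.9-era conventions and may differ by version. The _return parameter is an out-parameter that the call fills in, which is how Thrift returns containers and strings in C++.

        #include <iostream>
        #include <string>
        #include <vector>

        #include <boost/shared_ptr.hpp>
        #include <thrift/protocol/TBinaryProtocol.h>
        #include <thrift/transport/TBufferTransports.h>
        #include <thrift/transport/TSocket.h>

        #include "Rcon.h"  // generated by the Thrift compiler from the .thrift file

        using namespace apache::thrift::protocol;
        using namespace apache::thrift::transport;

        int main() {
            boost::shared_ptr<TSocket>    socket(new TSocket("localhost", 9090));
            boost::shared_ptr<TTransport> transport(new TBufferedTransport(socket));
            boost::shared_ptr<TProtocol>  protocol(new TBinaryProtocol(transport));
            Remote::RconClient client(protocol);

            transport->open();

            std::vector<std::string> flows;
            client.ListAllFlows(flows);              // the call fills 'flows' in place
            for (std::size_t i = 0; i < flows.size(); ++i)
                std::cout << flows[i] << std::endl;

            transport->close();
            return 0;
        }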

    Read the article

  • C++ design question, container of instances and pointers

    - by Tom
    Hi all, I'm wondering something. I have a class Polygon, which composes a vector of Line (another class):

        class Polygon {
            std::vector<Line> lines;
        public:
            const_iterator begin() const;
            const_iterator end() const;
        };

    On the other hand, I have a function that calculates a vector of pointers to lines, and based on those lines it should return a pointer to a Polygon:

        Polygon* foo(Polygon& p){
            std::vector<Line> lines = bar (p.begin(),p.end());
            return new Polygon(lines);
        }

    Here's the question: I can always add a Polygon constructor that takes a vector of Lines, but is there a better way than dereferencing each element of the vector<Line*> and pushing it into an existing vector<Line> container?

        //for each line in vector<Line*> v
        //vcopy is an instance of vector<Line>
        vcopy.push_back(*(v.at(i)));

    I think not, but I don't really like that approach. Hopefully I will be able to convince the author of the class to change it, but I can't base my coding right now on that fact (and I'm scared of a performance hit). Thanks in advance.

    Read the article
