Search Results

Search found 7500 results on 300 pages for 'const char'.

  • How to keep the CPU usage down while running an SDL program?

    - by budwiser
    I've done a very basic window with SDL and want to keep it running until I press the X on the window.

        #include "SDL.h"

        const int SCREEN_WIDTH = 640;
        const int SCREEN_HEIGHT = 480;

        int main(int argc, char **argv)
        {
            SDL_Init(SDL_INIT_VIDEO);
            SDL_Surface* screen = SDL_SetVideoMode(SCREEN_WIDTH, SCREEN_HEIGHT, 0,
                                                   SDL_HWSURFACE | SDL_DOUBLEBUF);
            SDL_WM_SetCaption("SDL Test", 0);

            SDL_Event event;
            bool quit = false;
            while (quit == false)
            {
                if (SDL_PollEvent(&event))
                {
                    if (event.type == SDL_QUIT)
                    {
                        quit = true;
                    }
                }
                SDL_Delay(80);
            }
            SDL_Quit();
            return 0;
        }

    I tried adding SDL_Delay() at the end of the while loop and it worked quite well. However, 80 ms seemed to be the highest value I could use while keeping the program running smoothly, and even then the CPU usage is about 15-20%. Is this the best way to do it, and do I just have to live with the fact that it eats this much CPU already at this point?
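    A common way to keep an event-only SDL 1.2 program from spinning is to block in SDL_WaitEvent() instead of polling and sleeping. A minimal sketch of that idea (it assumes nothing needs to be redrawn between events):

        #include "SDL.h"

        int main(int argc, char **argv)
        {
            SDL_Init(SDL_INIT_VIDEO);
            SDL_SetVideoMode(640, 480, 0, SDL_HWSURFACE | SDL_DOUBLEBUF);
            SDL_WM_SetCaption("SDL Test", 0);

            SDL_Event event;
            bool quit = false;
            while (!quit)
            {
                // SDL_WaitEvent() sleeps until an event arrives, so the loop
                // uses essentially no CPU while the window sits idle.
                if (SDL_WaitEvent(&event) && event.type == SDL_QUIT)
                {
                    quit = true;
                }
            }
            SDL_Quit();
            return 0;
        }

    If the program later needs to animate, the usual compromise is SDL_PollEvent() plus a small SDL_Delay() (for example 10-20 ms) per frame rather than one long delay.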

  • display multiple errors via bool flag c++

    - by igor
    It's been a long night, but I'm stuck on this and now I am getting a "segmentation fault" when I run it. Basically I'm trying to display all of the error messages (the couts) that apply: if there is more than one error, I am supposed to display all of them.

        bool validMove(const Square board[BOARD_SIZE][BOARD_SIZE], int x, int y, int value)
        {
            int index;
            bool moveError = true;
            const int row_conflict(0), column_conflict(1), grid_conflict(2);
            int v_subgrid = x/3;
            int h_subgrid = y/3;
            getCoords(x, y);

            for (index = 0; index < 9; index++)
                if (board[x][index].number == value) {
                    cout << "That value is in conflict in this row\n";
                    moveError = false;
                }

            for (index = 0; index < 9; index++)
                if (board[index][y].number == value) {
                    cout << "That value is in conflict in this column\n";
                    moveError = false;
                }

            for (int i = v_subgrid*3; i < (v_subgrid*3 + 3); i++) {
                for (int j = h_subgrid*3; j < (h_subgrid*3 + 3); j++) {
                    if (board[i][j].number == value) {
                        cout << "That value is in conflict in this subgrid\n";
                        moveError = false;
                    }
                }
            }
            return true;
        }
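    For the report-every-error part, the usual pattern is to keep one flag and return it (the posted function always returns true, so the caller can never see the result). A trimmed-down sketch of that pattern, using a plain int board so it compiles on its own:

        #include <iostream>

        // Sketch: check one row and one column, report every conflict found,
        // and return false if any check failed.
        bool validMove(const int board[9][9], int x, int y, int value)
        {
            bool ok = true;
            for (int i = 0; i < 9; ++i) {
                if (board[x][i] == value) {
                    std::cout << "That value is in conflict in this row\n";
                    ok = false;
                }
                if (board[i][y] == value) {
                    std::cout << "That value is in conflict in this column\n";
                    ok = false;
                }
            }
            return ok;   // the accumulated result, not a constant
        }

    The segmentation fault itself is more likely to come from x or y arriving out of range (the loops index board[x][...] and board[...][y] with no bounds check), so the values passed in at the call site are worth verifying.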

  • While loop not reading in the last item

    - by Gandalf StormCrow
    I'm trying to read in a multi-line string, split it, and then print it. Here is the string:

        1T1b5T!1T2b1T1b2T!1T1b1T2b2T!1T3b1T1b1T!3T3b1T!1T3b1T1b1T!5T1*1T 11X21b1X 4X1b1X

    When I split the string on ! I get this, without the last line:

        1T1b5T
        1T1b5T1T2b1T1b2T
        1T2b1T1b2T1T1b1T2b2T
        1T1b1T2b2T1T3b1T1b1T
        1T3b1T1b1T3T3b1T
        3T3b1T1T3b1T1b1T
        1T3b1T1b1T5T1*1T
        5T1*1T11X21b1X
        11X21b1X

    Here is my code:

        import java.io.BufferedInputStream;
        import java.util.Scanner;

        public class Main {
            public static void main(String args[]) {
                Scanner stdin = new Scanner(new BufferedInputStream(System.in));
                while (stdin.hasNext()) {
                    for (String line : stdin.next().split("!")) {
                        System.out.println(line);
                        for (int i = 0; i < line.length(); i++) {
                            System.out.print(line.charAt(i));
                        }
                    }
                }
            }
        }

    Where did I make the mistake, and why is it not reading the last line? After I read in all lines properly, I should go through each line and, whenever I encounter a number, print the next char that many times; but that is a long way ahead, first I need help with this. Thank you.

    UPDATE: Here is how the output should look:

        1T1b5T
        1T2b1T1b2T
        1T1b1T2b2T
        1T3b1T1b1T
        3T3b1T
        1T3b1T1b1T
        5T1*1T
        11X21b1X
        4X1b1X

    Here is a solution in C (my friend solved it, not me), but I'd still like to do it in Java:

        #include <stdio.h>

        int main (void)
        {
            char row[134];
            for (; fgets(row, 134, stdin) != NULL; )
            {
                int i, j = 0;
                for (i = 0; row[i] != '\0'; i++)
                {
                    if (row[i] <= '9' && row[i] >= '1')
                        j += (row[i] - '0');
                    else if ((row[i] <= 'Z' && row[i] >= 'A') || row[i] == '*')
                        for (; j; j--) printf("%c", row[i]);
                    else if (row[i] == 'b')
                        for (; j; j--) printf(" ");
                    else if (row[i] == '!' || row[i] == '\n')
                        printf("\n");
                }
            }
            return 0;
        }

  • Preprocessor "macro function" vs. function pointer - best practice?

    - by Dustin
    I recently started a small personal project (an RGB value to BGR value conversion program) in C, and I realised that a function that converts from RGB to BGR can not only perform the conversion but also the inversion. Obviously that means I don't really need two functions rgb2bgr and bgr2rgb. However, does it matter whether I use a function pointer instead of a macro? For example:

        int rgb2bgr (const int rgb);

        /*
         * Should I do this because it allows the compiler to issue
         * appropriate error messages using the proper function name,
         * not to mention possible debugging benefits?
         */
        int (*bgr2rgb) (const int bgr) = rgb2bgr;

        /*
         * Or should I do this since it is merely a convenience
         * and they're really the same function anyway?
         */
        #define bgr2rgb(bgr) (rgb2bgr (bgr))

    I'm not necessarily looking for a change in execution efficiency, as it's more of a subjective question out of curiosity. I am well aware of the fact that type safety is neither lost nor gained using either method. Would the function pointer merely be a convenience, or are there more practical benefits to be gained of which I am unaware?
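    A third spelling worth comparing is a plain wrapper function: it keeps a real name for error messages and backtraces without a mutable global pointer, and the compiler can still inline it. A sketch, assuming the usual 0xRRGGBB packing (the bit layout is an assumption, not something stated in the question):

        /* Swaps the red and blue channels of a 0xRRGGBB value; applying it
         * twice restores the original, so the same routine converts both ways. */
        static inline int rgb2bgr(int rgb)
        {
            return ((rgb & 0x0000FF) << 16) |
                    (rgb & 0x00FF00)        |
                   ((rgb & 0xFF0000) >> 16);
        }

        /* A named wrapper instead of a pointer or a macro. */
        static inline int bgr2rgb(int bgr)
        {
            return rgb2bgr(bgr);
        }

    Unlike the macro, the wrapper evaluates its argument exactly once and can have its address taken; unlike the pointer, it cannot be reassigned at run time.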

  • Setting synthesized arrays causing memory leaks using nested arrays

    - by webtoad
    Hello:

    Why is the following code causing a memory leak in an iPhone app? All of the objects initialized below leak, including the arrays, the strings and the numbers. So I'm thinking it has something to do with the synthesized array property not releasing the old object when I set the property again the second and subsequent times this piece of code is called.

    Here is the code ("controller" below is my custom view controller class, which I have a reference to, and which I am setting with this snippet):

        sqlite3_stmt *statement;

        NSMutableArray *foo_IDs = [[NSMutableArray alloc] init];
        NSMutableArray *foo_Names = [[NSMutableArray alloc] init];
        NSMutableArray *foo_IDsBySection = [[NSMutableArray alloc] init];
        NSMutableArray *foo_NamesBySection = [[NSMutableArray alloc] init];

        // Get data:
        NSString *sql = @"select distinct p.foo_ID, p.foo_Name from foo as p ";
        if (sqlite3_prepare_v2(...) == SQLITE_OK) {
            while (sqlite3_step(statement) == SQLITE_ROW) {
                int p_id;
                NSString *foo_Name;
                p_id = sqlite3_column_int(statement, 0);
                char *str2 = (char *)sqlite3_column_text(statement, 1);
                foo_Name = [NSString stringWithCString:str2];

                [foo_IDs addObject:[NSNumber numberWithInt:p_id]];
                [foo_Names addObject:foo_Name];
            }
            sqlite3_finalize(statement);
        }

        // Pass the array itself into another array:
        // (normally there is more than one array in each array)
        [foo_IDsBySection addObject: foo_IDs];
        [foo_NamesBySection addObject: foo_Names];

        [foo_IDs release];
        [foo_Names release];

        // Set some synthesized properties (of type NSArray, nonatomic,
        // retain) in controller:
        controller.foo_IDsBySection = foo_IDsBySection;
        controller.foo_NamesBySection = foo_NamesBySection;

        [foo_IDsBySection release];
        [foo_NamesBySection release];

    Thanks for any help!

  • Passing an array of structs in C

    - by lelouch
    I'm having trouble passing an array of structs to a function in C. I've created the struct like this in main:

        int main()
        {
            struct Items
            {
                char code[10];
                char description[30];
                int stock;
            };

            struct Items MyItems[10];
        }

    I then access it like:

        MyItems[0].stock = 10;

    etc. I want to pass it to a function like so:

        ReadFile(MyItems);

    The function should read the array and be able to edit it. Then I should be able to access the same array from other functions. I've tried heaps of declarations but none of them work, e.g.

        void ReadFile(struct Items[10])

    I've had a look around at other questions, but the thing is they're all done differently, with typedefs and asterisks. My teacher hasn't taught us pointers yet, so I'd like to do it with what I know. Any ideas?

    EDIT: Salvatore's answer works after I fixed my prototype to:

        void ReadFile(struct Items[9]);
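    For reference, the arrangement that usually makes this work is to move the struct definition to file scope (or a header) so both main() and the function see the same type, and then pass the array itself; a minimal sketch, with the element count passed as an extra illustrative parameter:

        #include <stdio.h>

        #define NUM_ITEMS 10

        struct Items {
            char code[10];
            char description[30];
            int  stock;
        };

        /* The array argument decays to a pointer to its first element, so
         * the function reads and writes the caller's actual array. */
        void ReadFile(struct Items items[], int count)
        {
            for (int i = 0; i < count; i++) {
                items[i].stock = i;
            }
        }

        int main(void)
        {
            struct Items MyItems[NUM_ITEMS];
            ReadFile(MyItems, NUM_ITEMS);
            printf("%d\n", MyItems[3].stock);   /* prints 3 */
            return 0;
        }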

  • Cross-platform iteration of Unicode string

    - by kizzx2
    I want to iterate over each character of a Unicode string, treating each surrogate pair and each combining character sequence as a single unit (one grapheme).

    Example

    The text "नमस्ते" is comprised of the code points U+0928, U+092E, U+0938, U+094D, U+0924, U+0947, of which U+094D and U+0947 are combining marks.

        static void Main(string[] args)
        {
            const string s = "नमस्ते";
            Console.WriteLine(s.Length);            // Outputs "6"

            var l = 0;
            var e = System.Globalization.StringInfo.GetTextElementEnumerator(s);
            while (e.MoveNext()) l++;
            Console.WriteLine(l);                   // Outputs "4"
        }

    So there we have it in .NET. We also have Win32's CharNextW():

        #include <Windows.h>
        #include <iostream>
        #include <string>

        int main()
        {
            const wchar_t * s = L"नमस्ते";
            std::cout << std::wstring(s).length() << std::endl;  // Gives "6"

            int l = 0;
            while (CharNextW(s) != s)
            {
                s = CharNextW(s);
                ++l;
            }
            std::cout << l << std::endl;                         // Gives "4"
            return 0;
        }

    Question

    Both ways I know of are specific to Microsoft. Are there portable ways to do it? I have heard about ICU but I couldn't quickly find anything related (UnicodeString(s).length() still gives 6). Pointing to the related function/module in ICU would be an acceptable answer. C++ doesn't have a notion of Unicode, so a lightweight cross-platform library for dealing with these issues would also make an acceptable answer.
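    In ICU the relevant module is the BreakIterator family (unicode/brkiter.h): a "character" break iterator walks grapheme-cluster boundaries rather than code units. A sketch of counting graphemes with it (the exact count for this sample depends on the grapheme rules of the ICU version, so the value in the comment is indicative):

        #include <unicode/brkiter.h>
        #include <unicode/locid.h>
        #include <unicode/unistr.h>
        #include <iostream>

        int main()
        {
            icu::UnicodeString s =
                icu::UnicodeString::fromUTF8("\u0928\u092E\u0938\u094D\u0924\u0947");

            UErrorCode status = U_ZERO_ERROR;
            icu::BreakIterator* it =
                icu::BreakIterator::createCharacterInstance(icu::Locale::getDefault(), status);
            if (U_FAILURE(status)) return 1;

            it->setText(s);
            int32_t graphemes = 0;
            it->first();
            while (it->next() != icu::BreakIterator::DONE)
                ++graphemes;                          // one step per grapheme cluster

            std::cout << s.length() << " code units, "
                      << graphemes << " graphemes\n"; // 6 code units, 4 graphemes here
            delete it;
            return 0;
        }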

  • How do you compare using .NET types in an NHibernate ICriteria query for an ICompositeUserType?

    - by gabe
    I have an answered Stack Overflow question about how to combine two legacy CHAR database date and time fields into one .NET DateTime property in my POCO (thanks much, Berryl!). Now I am trying to get a custom ICriteria query to work against that very DateTime property, to no avail. Here's my query:

        ICriteria criteria = Session.CreateCriteria<InputFileLog>()
            .Add(Expression.Gt(MembersOf<InputFileLog>.GetName(x => x.FileCreationDateTime),
                               DateTime.Now.AddDays(-14)))
            .AddOrder(Order.Desc(Projections.Id()))
            .CreateCriteria(typeof(InputFile).Name)
            .Add(Expression.Eq(MembersOf<InputFile>.GetName(x => x.Id), inputFileName));

        IList<InputFileLog> list = criteria.List<InputFileLog>();

    And here's the query it generates:

        SELECT this_.input_file_token as input1_9_2_, this_.file_creation_date as file2_9_2_,
               this_.file_creation_time as file3_9_2_, this_.approval_ind as approval4_9_2_,
               this_.file_id as file5_9_2_, this_.process_name as process6_9_2_,
               this_.process_status as process7_9_2_, this_.input_file_name as input8_9_2_,
               gonogo3_.input_file_token as input1_6_0_, gonogo3_.go_nogo_ind as go2_6_0_,
               inputfile1_.input_file_name as input1_3_1_, inputfile1_.src_code as src2_3_1_,
               inputfile1_.process_cat_code as process3_3_1_
        FROM input_file_log this_
            left outer join go_nogo gonogo3_ on this_.input_file_token=gonogo3_.input_file_token
            inner join input_file inputfile1_ on this_.input_file_name=inputfile1_.input_file_name
        WHERE this_.file_creation_date > :p0 and this_.file_creation_time > :p1
            and inputfile1_.input_file_name = :p2
        ORDER BY this_.input_file_token desc;
        :p0 = '20100401', :p1 = '15:15:27', :p2 = 'LMCONV_JR'

    The query is actually exactly what I would expect, except that it doesn't give me what I want (all the rows in the last two weeks), because in the DB it is doing a greater-than comparison using CHARs instead of DATEs. I have no idea how to get the query to convert the CHAR values into a DATE without resorting to CreateSQLQuery(), which I would like to avoid. Does anyone know how to do this?

  • Coordinating typedefs and structs in std::multiset (C++)

    - by Sarah
    I'm not a professional programmer, so please don't hesitate to state the obvious. My goal is to use a std::multiset container (typedef EventMultiSet) called currentEvents to organize a list of structs, of type Event, and to have members of class Host occasionally add new Event structs to currentEvents. The structs are supposed to be sorted by one of their members, time. I am not sure how much of what I am trying to do is legal; the g++ compiler reports (in "Host.h") "error: 'EventMultiSet' has not been declared." Here's what I'm doing:

        // Event.h
        struct Event {
        public:
            bool operator < ( const Event & rhs ) const {
                return ( time < rhs.time );
            }
            double time;
            int eventID;
            int hostID;
        };

        // Host.h
        ...
        void calcLifeHist( double, EventMultiSet * );    // produces compiler error
        ...
        void addEvent( double, int, int, EventMultiSet * );    // produces compiler error

        // Host.cpp
        #include "Event.h"
        ...

        // main.cpp
        #include "Event.h"
        ...
        typedef std::multiset< Event, std::less< Event > > EventMultiSet;
        EventMultiSet currentEvents;
        EventMultiSet * cePtr = &currentEvents;
        ...

    Major questions

    Where should I include the EventMultiSet typedef?
    Are my EventMultiSet pointers obviously problematic?
    Is the compare function within my Event struct (in theory) okay?

    Thank you very much in advance.
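    One arrangement that fixes the reported error is to put the typedef next to the Event definition, so every file that includes Event.h (Host.h included) already knows the name. A single-file sketch of that layout:

        #include <set>

        // What Event.h would contain: the struct and the typedef together.
        struct Event {
            bool operator<(const Event& rhs) const { return time < rhs.time; }
            double time;
            int eventID;
            int hostID;
        };

        // std::less<Event> is the default comparator, so it may be spelled out or omitted.
        typedef std::multiset<Event> EventMultiSet;

        // What a Host member (declared in Host.h after #include "Event.h") can now do.
        void addEvent(double time, int eventID, int hostID, EventMultiSet* events)
        {
            Event e;
            e.time = time;
            e.eventID = eventID;
            e.hostID = hostID;
            events->insert(e);             // kept ordered by time automatically
        }

        int main()
        {
            EventMultiSet currentEvents;
            addEvent(2.5, 0, 7, &currentEvents);
            addEvent(0.5, 1, 9, &currentEvents);
            return currentEvents.begin()->eventID;   // 1: the earliest event comes first
        }

    Passing EventMultiSet* the way the question does is fine once the name is visible, and the operator< shown is a perfectly good ordering by time.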

  • Simplest way to mix sequences of types with iostreams?

    - by Kylotan
    I have a function void write<typename T>(const T&) which is implemented in terms of writing the T object to an ostream, and a matching function T read<typename T>() that reads a T from an istream. I am basically using iostreams as a plain-text serialisation format, which obviously works fine for most built-in types, although I'm not sure how to effectively handle std::strings just yet.

    I'd like to be able to write out a sequence of objects too, e.g. void write<typename T>(const std::vector<T>&), or an iterator-based equivalent (although in practice it would always be used with a vector). However, while writing an overload that iterates over the elements and writes them out is easy enough to do, this doesn't add enough information to allow the matching read operation to know how each element is delimited, which is essentially the same problem that I have with a single std::string.

    Is there a single approach that can work for all basic types and std::string? Or perhaps I can get away with two overloads: one for numerical types and one for strings? (Either using different delimiters, or having the string use a delimiter-escaping mechanism, perhaps.)
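    One convention that covers numbers, strings, and sequences uniformly is to length-prefix anything whose extent is not self-evident: write a count, then the payload, and never rely on a delimiter that could appear in the data. A sketch of the idea using free functions with an explicit stream parameter, which is a simplification of the question's actual API:

        #include <iostream>
        #include <sstream>
        #include <string>
        #include <vector>

        // Arithmetic types: plain formatted I/O with a trailing space.
        template <typename T>
        void write(std::ostream& os, const T& value) { os << value << ' '; }

        template <typename T>
        void read(std::istream& is, T& value) { is >> value; }

        // Strings: length first, then the raw bytes, so embedded spaces survive.
        void write(std::ostream& os, const std::string& s)
        {
            os << s.size() << ' ';
            os.write(s.data(), static_cast<std::streamsize>(s.size()));
            os << ' ';
        }

        void read(std::istream& is, std::string& s)
        {
            std::size_t n = 0;
            is >> n;
            is.get();                      // consume the separator after the length
            s.resize(n);
            is.read(&s[0], static_cast<std::streamsize>(n));
        }

        // Sequences: element count first, then each element via the rules above.
        template <typename T>
        void write(std::ostream& os, const std::vector<T>& v)
        {
            write(os, v.size());
            for (std::size_t i = 0; i < v.size(); ++i) write(os, v[i]);
        }

        template <typename T>
        void read(std::istream& is, std::vector<T>& v)
        {
            std::size_t n = 0;
            read(is, n);
            v.resize(n);
            for (std::size_t i = 0; i < n; ++i) read(is, v[i]);
        }

        int main()
        {
            std::stringstream ss;
            std::vector<std::string> out, in;
            out.push_back("hello world");
            out.push_back("two words");
            write(ss, out);
            read(ss, in);
            std::cout << in[1] << '\n';    // prints "two words"
            return 0;
        }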

  • Suggestions for a django db structure

    - by rh0dium
    Hi. Say I have an unknown number of questions, for example:

        Is the sky blue? [y/n]
        What date were you born on? [date]
        What is pi? [3.14]
        What is a large integer? [100]

    Now each of these questions poses a different but very type-specific answer (boolean, date, float, int). Natively, Django can happily deal with these in a model:

        class SkyModel(models.Model):
            question = models.CharField("Is the sky blue")
            answer = models.BooleanField(default=False)

        class BirthModel(models.Model):
            question = models.CharField("What date were your born on")
            answer = models.DateTimeField(default=today)

        class PiModel(models.Model):
            question = models.CharField("What is pi")
            answer = models.FloatField()

    But this has the obvious problem that each question needs its own model, so if we need to add a question later I have to change the database. Yuck. So now I want to get fancy: how do I set up a model where the answer type conversion happens automagically?

        ANSWER_TYPES = (
            ('boolean', 'boolean'),
            ('date', 'date'),
            ('float', 'float'),
            ('int', 'int'),
            ('char', 'char'),
        )

        class Questions(models.model):
            question = models.CharField()
            answer = models.CharField()
            answer_type = models.CharField(choices = ANSWER_TYPES)
            default = models.CharField()

    In theory this would work as follows: when I build up my views, I look at the answer type and ensure that I only put in a value of that type. But when I pull the answer back out, it should come back in the format specified by answer_type; for example, 3.14 comes back out as a float, not as a str. How can I perform this sort of automagic transformation? Or can someone suggest a better way to do this? Thanks much!!

  • How to store a linked list in a struct in C

    - by LuckySlevin
        typedef struct child_list {
            int count;
            char vo[100];
            child_list *next;
        } child_list;

        typedef struct parent_list {
            char vo[100];
            child_list *head;
            int count;
            parent_list *next;
        } parent_list;

    As you can see, there are two structures. child_list is used to create a linked list, and each such list is stored in a node of the linked list of parent_list. My problem is displaying the child list that is inside the parent_list. Here is what I want to get when displaying the parent_list, for example when I enter "ab cd ab ja cd ab":

        Word  Count  List
        ab    3      cd->ja
        cd    2      ab->ab
        ja    1      cd

    The lists work with this logic, and I have already written append and the other operations. The problematic part is displaying the child_list stored in the parent_list nodes (the List column of the output). If my question isn't clear, please ask for further info.
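    A sketch of the display pass, assuming the two structures above and that the counts and words are already filled in: walk the parent list, and for every node walk its head chain, printing "->" between child words.

        #include <stdio.h>

        /* Structures as in the question, with the self-referencing pointers
         * spelled out so this compiles on its own. */
        typedef struct child_list {
            int count;
            char vo[100];
            struct child_list *next;
        } child_list;

        typedef struct parent_list {
            char vo[100];
            child_list *head;
            int count;
            struct parent_list *next;
        } parent_list;

        void displayLists(const parent_list *p)
        {
            printf("%-6s %-6s %s\n", "Word", "Count", "List");
            for (; p != NULL; p = p->next) {
                printf("%-6s %-6d ", p->vo, p->count);
                for (const child_list *c = p->head; c != NULL; c = c->next)
                    printf("%s%s", c->vo, c->next ? "->" : "");
                printf("\n");
            }
        }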

  • How do I prevent qFatal() from aborting the application?

    - by Dave
    My Qt application uses Q_ASSERT_X, which calls qFatal(), which (by default) aborts the application. That's great for the application, but I'd like to suppress that behavior when unit testing the application. (I'm using the Google Test framework.) I have my unit tests in a separate project, statically linking to the class I'm testing. The documentation for qFatal() reads:

        Calls the message handler with the fatal message msg. If no message handler has been
        installed, the message is printed to stderr. Under Windows, the message is sent to the
        debugger.

        If you are using the default message handler this function will abort on Unix systems
        to create a core dump. On Windows, for debug builds, this function will report a
        _CRT_ERROR enabling you to connect a debugger to the application.
        ...
        To suppress the output at runtime, install your own message handler with
        qInstallMsgHandler().

    So here's my main.cpp file:

        #include <gtest/gtest.h>
        #include <QApplication>

        void testMessageOutput(QtMsgType type, const char *msg)
        {
            switch (type) {
            case QtDebugMsg:
                fprintf(stderr, "Debug: %s\n", msg);
                break;
            case QtWarningMsg:
                fprintf(stderr, "Warning: %s\n", msg);
                break;
            case QtCriticalMsg:
                fprintf(stderr, "Critical: %s\n", msg);
                break;
            case QtFatalMsg:
                fprintf(stderr, "My Fatal: %s\n", msg);
                break;
            }
        }

        int main(int argc, char **argv)
        {
            qInstallMsgHandler(testMessageOutput);
            testing::InitGoogleTest(&argc, argv);
            return RUN_ALL_TESTS();
        }

    But my application is still stopping at the assert. I can tell that my custom handler is being called, because the output when running my tests is:

        My Fatal: ASSERT failure in MyClass::doSomething: "doSomething()", file myclass.cpp, line 21
        The program has unexpectedly finished.

    What can I do so that my tests keep running even when an assert fails?
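    The observed behavior is consistent with qFatal() going on to abort() after the installed handler returns, so returning normally from the handler is not enough. One workaround people use (a sketch, not an officially supported pattern) is to throw from the handler on fatal messages, so control never gets back to qFatal(); Google Test then reports the assertion as a thrown exception rather than having the whole run killed:

        #include <cstdio>
        #include <stdexcept>
        #include <QtGlobal>

        void testMessageOutput(QtMsgType type, const char *msg)
        {
            if (type == QtFatalMsg) {
                // Leaving the handler via an exception means qFatal() never
                // reaches its abort() call.
                throw std::runtime_error(msg);
            }
            std::fprintf(stderr, "%s\n", msg);
        }

    The other common route is to build the code under test with QT_NO_DEBUG defined, since Q_ASSERT_X compiles to nothing in that configuration; that only makes sense if the tests are not meant to exercise the assertions themselves.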

  • What are the default return values for operator< and operator[] in C++ (Visual Studio 6)?

    - by DustOff
    I've inherited a large Visual Studio 6 C++ project that needs to be translated to VS2005. Some of the classes define operator< and operator[] but don't specify return types in the declarations. VS6 allows this, but VS2005 does not. I am aware that the C standard specifies that the default return type for normal functions is int, and I assumed VS6 might have been following that, but would this apply to C++ operators as well? Or could VS6 figure out the return type on its own? For example, the code defines a custom string class like this:

        class String
        {
            char arr[16];
        public:
            operator<(const String& other)
            {
                return something1 < something2;
            }
            operator[](int index)
            {
                return arr[index];
            }
        };

    Would VS6 have simply made both return int, or would it have been smart enough to figure out that operator[] should return a char and operator< should return a bool (and not convert both results to int all the time)? Of course I have to add return types to make this code VS2005-compliant, but I want to make sure to specify the same type as before, so as not to immediately change program behavior (we're going for compatibility at the moment; we'll standardize things later).
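    For reference, if VC6 was simply applying the old pre-standard implicit-int rule (the natural reading of the question, but worth confirming against the old build), the behavior-preserving translation spells the int out, and the conventionally typed versions can wait for the later cleanup pass. A sketch of both, with the comparison body made up so the snippet stands alone:

        #include <cstring>

        class String
        {
            char arr[16];
        public:
            // Behavior-preserving translation under the implicit-int assumption:
            // operator[] returns the char promoted to int, operator< returns 0 or 1.
            int operator<(const String& other) const
            {
                return std::strcmp(arr, other.arr) < 0;   // placeholder comparison
            }
            int operator[](int index) const
            {
                return arr[index];
            }

            // The conventional signatures, for the later standardization pass:
            //   bool  operator<(const String& other) const;
            //   char& operator[](int index);
        };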

  • C: socket connection timeout

    - by The.Anti.9
    I have a simple program to check if a port is open, but I want to shorten the timeout length on the socket connection because the default is far too long. I'm not sure how to do this though. Here's the code:

        #include <sys/socket.h>
        #include <sys/time.h>
        #include <sys/types.h>
        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <errno.h>
        #include <fcntl.h>
        #include <stdio.h>
        #include <netdb.h>
        #include <stdlib.h>
        #include <string.h>
        #include <unistd.h>

        int main(int argc, char **argv)
        {
            u_short port;                /* user specified port number */
            char addr[1023];             /* will be a copy of the address entered by u */
            struct sockaddr_in address;  /* the libc network address data structure */
            short int sock = -1;         /* file descriptor for the network socket */

            if (argc != 3) {
                fprintf(stderr, "Usage %s <port_num> <address>", argv[0]);
                return EXIT_FAILURE;
            }

            address.sin_addr.s_addr = inet_addr(argv[2]); /* assign the address */
            address.sin_port = htons(atoi(argv[2]));      /* translate int2port num */

            sock = socket(AF_INET, SOCK_STREAM, 0);
            if (connect(sock, (struct sockaddr *)&address, sizeof(address)) == 0) {
                printf("%i is open\n", port);
            }
            close(sock);
            return 0;
        }
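    The usual way to bound the connect time is to put the socket in non-blocking mode, start the connect, and wait for writability with select() for at most the timeout you want. A sketch of that approach (the helper name and error handling are illustrative):

        #include <arpa/inet.h>
        #include <errno.h>
        #include <fcntl.h>
        #include <netinet/in.h>
        #include <string.h>
        #include <sys/select.h>
        #include <sys/socket.h>
        #include <unistd.h>

        /* Returns 0 if the port accepted the connection within `seconds`,
         * -1 otherwise. */
        static int connect_timeout(const char *ip, unsigned short port, int seconds)
        {
            struct sockaddr_in addr;
            memset(&addr, 0, sizeof(addr));
            addr.sin_family = AF_INET;
            addr.sin_addr.s_addr = inet_addr(ip);
            addr.sin_port = htons(port);

            int sock = socket(AF_INET, SOCK_STREAM, 0);
            if (sock < 0)
                return -1;

            /* Non-blocking, so connect() returns immediately with EINPROGRESS. */
            fcntl(sock, F_SETFL, fcntl(sock, F_GETFL, 0) | O_NONBLOCK);

            int rc = connect(sock, (struct sockaddr *)&addr, sizeof(addr));
            if (rc < 0 && errno == EINPROGRESS) {
                fd_set wfds;
                FD_ZERO(&wfds);
                FD_SET(sock, &wfds);
                struct timeval tv = { seconds, 0 };

                rc = -1;
                if (select(sock + 1, NULL, &wfds, NULL, &tv) == 1) {
                    int err = 0;
                    socklen_t len = sizeof(err);
                    getsockopt(sock, SOL_SOCKET, SO_ERROR, &err, &len);
                    rc = (err == 0) ? 0 : -1;   /* SO_ERROR says how connect() ended */
                }
            } else if (rc < 0) {
                rc = -1;
            }

            close(sock);
            return rc;
        }

    With this, a filtered port is reported after at most `seconds` instead of the kernel's default connect retry period.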

  • memcmp,strcmp,strncmp in C

    - by el10780
    I wrote this small piece of code in C to test the memcmp(), strncmp() and strcmp() functions:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        int main(int argc, char** argv)
        {
            char *word1 = "apple", *word2 = "atoms";

            if (strncmp(word1, word2, 5) == 0)
                printf("strncmp result.\n");
            if (memcmp(word1, word2, 5) == 0)
                printf("memcmp result.\n");
            if (strcmp(word1, word2) == 0)
                printf("strcmp result.\n");
        }

    Can somebody explain the differences to me, because I am confused by these three functions? My main problem is this: I have a file whose lines I tokenize, and when I tokenize the word "atoms" I have to stop the process. I first tried strcmp(), but unfortunately when it reached the point in the file where the word "atoms" was, it did not stop and continued. When I used either memcmp() or strncmp(), it stopped and I was happy.

    But then I thought: what if there is a string whose first five letters are a, t, o, m, s, followed by other letters? Unfortunately my worry was right, as I tested it using the above code by initializing word1 to "atomsaaaaa" and word2 to "atoms": memcmp() and strncmp() in the if statements returned 0, while strcmp() did not. It seems that I must use strcmp().

    I have done Google searches, but I got more confused because various sites and forums define these three functions differently. If someone could give me correct explanations/definitions so I can use them correctly in my source code, I would be really grateful.
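    In short: strcmp() compares until it finds a difference or the terminating '\0'; strncmp() compares at most n characters but still stops at a '\0'; memcmp() compares exactly n bytes and pays no attention to terminators at all. A small self-contained check with the strings from the question:

        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            const char *word1 = "atomsaaaaa";
            const char *word2 = "atoms";

            /* Only the first 5 bytes are examined, so both report a match. */
            printf("strncmp equal: %d\n", strncmp(word1, word2, 5) == 0);   /* 1 */
            printf("memcmp  equal: %d\n", memcmp(word1, word2, 5) == 0);    /* 1 */

            /* strcmp() keeps going past the fifth character, so the extra
             * "aaaaa" makes the strings unequal. */
            printf("strcmp  equal: %d\n", strcmp(word1, word2) == 0);       /* 0 */
            return 0;
        }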

  • C/C++ I18N mbstowcs question

    - by bogertron
    I am working on internationalizing the input for a C/C++ application. I have currently hit an issue converting from a multi-byte string to a wide-character string. The code needs to be cross-platform compatible, so I am using mbstowcs() and wcstombs() as much as possible. I am currently working on a Win32 machine and I have set the locale to a non-English locale (Japanese). When I attempt to convert a multibyte character string, I seem to be having some conversion issues. Here is an example of the code:

        int main(int argc, char** argv)
        {
            wchar_t *wcsVal = NULL;
            char *mbsVal = NULL;

            /* Get the current code page, in my case 932; runs only on Windows */
            TCHAR szCodePage[10];
            int cch = GetLocaleInfo(GetSystemDefaultLCID(),
                                    LOCALE_IDEFAULTANSICODEPAGE,
                                    szCodePage, sizeof(szCodePage));

            /* verify locale is set */
            if (setlocale(LC_CTYPE, "") == 0) {
                fprintf(stderr, "Failed to set locale\n");
                return 1;
            }

            mbsVal = argv[1];

            /* validate multibyte string and convert to wide character */
            int size = mbstowcs(NULL, mbsVal, 0);
            if (size == -1) {
                printf("Invalid multibyte\n");
                return 1;
            }

            wcsVal = (wchar_t*) malloc(sizeof(wchar_t) * (size + 1));
            if (wcsVal == NULL) {
                printf("memory issue \n");
                return 1;
            }

            mbstowcs(wcsVal, mbsVal, size + 1);
            wprintf(L"%ls \n", wcsVal);
            return 0;
        }

    At the end of execution, the wide-character string does not contain the converted data. I believe there is an issue with the code page settings, because when I use MultiByteToWideChar with the current code page passed in, e.g.

        MultiByteToWideChar(CP_ACP, 0, mbsVal, -1, wcsVal, size + 1);

    in place of the mbstowcs() calls, the conversion succeeds. My question is: how do I use the generic mbstowcs() call instead of the MultiByteToWideChar() call?

  • parse string with regular expression

    - by llamerr
    I am trying to parse this string:

        $right = '34601)S(1,6)[2] - 34601)(11)[2] + 34601)(3)[2,4]';

    with the following regexp:

        const word = '(\d{3}\d{2}\)S{0,1}\([^\)]*\)S{0,1}\[[^\]]*\])';
        preg_match('/'.word.'{1}(?:\s{1}([+-]{1})\s{1}'.word.'){0,}/', $right, $matches);
        print_r($matches);

    I want it to return an array like this:

        Array
        (
            [0] => 34601)S(1,6)[2] - 34601)(11)[2] + 34601)(3)[2,4]
            [1] => 34601)S(1,6)[2]
            [2] => -
            [3] => 34601)(11)[2]
            [4] => +
            [5] => 34601)(3)[2,4]
        )

    but I get only the following:

        Array
        (
            [0] => 34601)S(1,6)[2] - 34601)(11)[2] + 34601)(3)[2,4]
            [1] => 34601)S(1,6)[2]
            [2] => +
            [3] => 34601)(3)[2,4]
        )

    I think it's because of the [^)]* or [^]]* in the word pattern, but how should I correct the regexp to match this in another way? I tried to make it more specific with \d+(?:[,#]\d+){0,}, so word becomes

        const word = '(\d{3}\d{2}\)S{0,1}\(\d+(?:[,#]\d+){0,}\)S{0,1}\[\d+(?:[,#]\d+){0,}\])';

    but it gives nothing.

  • What is the rationale for not allowing overloading of the C++ conversion operator with non-member functions?

    - by Vicente Botet Escriba
    C++0x has added explicit conversion operators, but they must always be defined as members of the source class. The same applies to the assignment operator: it must be defined on the target class. When the source and target classes of a needed conversion are independent of each other, the source cannot define a conversion operator and the target cannot define a constructor from the source. Usually we get around this by defining a specific function such as

        Target ConvertToTarget(Source& v);

    If C++0x allowed the conversion operator to be overloaded by non-member functions, we could for example define the conversion, implicitly or explicitly, between unrelated types:

        template < typename To, typename From >
        operator To(const From& val);

    For example, we could specialize the conversion from chrono::time_point to posix_time::ptime as follows:

        template < class Clock, class Duration>
        operator boost::posix_time::ptime(
            const boost::chrono::time_point<Clock, Duration>& from)
        {
            using namespace boost;
            typedef chrono::time_point<Clock, Duration> time_point_t;
            typedef chrono::nanoseconds duration_t;
            typedef duration_t::rep rep_t;
            rep_t d = chrono::duration_cast<duration_t>(from.time_since_epoch()).count();
            rep_t sec = d / 1000000000;
            rep_t nsec = d % 1000000000;
            return posix_time::from_time_t(0)
                 + posix_time::seconds(static_cast<long>(sec))
                 + posix_time::nanoseconds(nsec);
        }

    and use the conversion like any other conversion. For a more complete description of the problem, see here or on my Boost.Conversion library. So the question is: what is the rationale for not allowing C++ conversion operators to be overloaded as non-member functions?

  • Currently using View, Should I use a hard table instead?

    - by 1001010101
    I am currently debating whether my view, mapping_uGroups_uProducts, should be replaced by a hard table. It is defined as follows:

        CREATE ALGORITHM=UNDEFINED DEFINER=`root`@`localhost` SQL SECURITY DEFINER
        VIEW `db`.`mapping_uGroups_uProducts` AS
        select distinct `X`.`upID` AS `upID`, `Z`.`ugID` AS `ugID`
        from ((`db`.`mapping_uProducts_Products` `X`
            join `db`.`productsInfo` `Y` on ((`X`.`pID` = `Y`.`pID`)))
            join `db`.`mapping_uGroups_Groups` `Z` on ((`Y`.`gID` = `Z`.`gID`)));

    My current query is:

        SELECT upID FROM uProductsInfo
            JOIN fs_uProducts USING (upID)
            JOIN mapping_uGroups_uProducts USING (upID)  -- could be faster if we use a hard table and index
            JOIN mapping_fs_key USING (fsKeyID)
        WHERE fsName="OVERALL"
            AND ugID=1
        ORDER BY score DESC
        LIMIT 0,30;

    which is pretty slow (about 10 seconds for 30 results). I think the reason my query is so slow is that it relies on a view which has no index to speed things up.

        +----+-------------+----------------+--------+----------------+---------+---------+--------------------------------+-------+---------------------------------+
        | id | select_type | table          | type   | possible_keys  | key     | key_len | ref                            | rows  | Extra                           |
        +----+-------------+----------------+--------+----------------+---------+---------+--------------------------------+-------+---------------------------------+
        |  1 | PRIMARY     | mapping_fs_key | const  | PRIMARY,fsName | fsName  | 386     | const                          |     1 | Using temporary; Using filesort |
        |  1 | PRIMARY     | <derived2>     | ALL    | NULL           | NULL    | NULL    | NULL                           | 19706 | Using where                     |
        |  1 | PRIMARY     | uProductsInfo  | eq_ref | PRIMARY        | PRIMARY | 4       | mapping_uGroups_uProducts.upID |     1 | Using index                     |
        |  1 | PRIMARY     | fs_uProducts   | ref    | upID           | upID    | 4       | db.uProductsInfo.upID          |   221 | Using where                     |
        |  2 | DERIVED     | X              | ALL    | PRIMARY        | NULL    | NULL    | NULL                           | 40772 | Using temporary                 |
        |  2 | DERIVED     | Y              | eq_ref | PRIMARY        | PRIMARY | 4       | db.X.pID                       |     1 | Distinct                        |
        |  2 | DERIVED     | Z              | ref    | PRIMARY        | PRIMARY | 4       | db.Y.gID                       |     2 | Using index; Distinct           |
        +----+-------------+----------------+--------+----------------+---------+---------+--------------------------------+-------+---------------------------------+
        7 rows in set (0.48 sec)

    The EXPLAIN output here looks pretty cryptic, and I don't know whether I should drop the view and write a script to insert everything from the view into a hard table (obviously losing the flexibility of the view, since the mapping changes quite frequently). Does anyone have an idea how I can optimize my schema better?

  • Eliminate full table scan due to BETWEEN (and GROUP BY)

    - by Dave Jarvis
    Description

    According to the EXPLAIN command, there is a range that is causing a query to perform a full table scan (160k rows). How do I keep the range condition and reduce the scanning? I expect the culprit to be:

        Y.YEAR BETWEEN 1900 AND 2009 AND

    Code

    Here is the code that has the range condition (the STATION_DISTRICT is likely superfluous):

        SELECT
            COUNT(1) as MEASUREMENTS,
            AVG(D.AMOUNT) as AMOUNT,
            Y.YEAR as YEAR,
            MAKEDATE(Y.YEAR,1) as AMOUNT_DATE
        FROM
            CITY C,
            STATION S,
            STATION_DISTRICT SD,
            YEAR_REF Y FORCE INDEX(YEAR_IDX),
            MONTH_REF M,
            DAILY D
        WHERE
            -- For a specific city ...
            --
            C.ID = 10663 AND
            -- Find all the stations within a specific unit radius ...
            --
            6371.009 * SQRT(
                POW(RADIANS(C.LATITUDE_DECIMAL - S.LATITUDE_DECIMAL), 2) +
                (COS(RADIANS(C.LATITUDE_DECIMAL + S.LATITUDE_DECIMAL) / 2) *
                 POW(RADIANS(C.LONGITUDE_DECIMAL - S.LONGITUDE_DECIMAL), 2))) <= 50 AND
            -- Get the station district identification for the matching station.
            --
            S.STATION_DISTRICT_ID = SD.ID AND
            -- Gather all known years for that station ...
            --
            Y.STATION_DISTRICT_ID = SD.ID AND
            -- The data before 1900 is shaky; insufficient after 2009.
            --
            Y.YEAR BETWEEN 1900 AND 2009 AND
            -- Filtered by all known months ...
            --
            M.YEAR_REF_ID = Y.ID AND
            -- Whittled down by category ...
            --
            M.CATEGORY_ID = '003' AND
            -- Into the valid daily climate data.
            --
            M.ID = D.MONTH_REF_ID AND
            D.DAILY_FLAG_ID <> 'M'
        GROUP BY
            Y.YEAR

    Update

    The SQL is performing a full table scan, which results in MySQL performing a "copy to tmp table", as shown here:

        +----+-------------+-------+--------+-----------------------------------+--------------+---------+-------------------------------+--------+-------------+
        | id | select_type | table | type   | possible_keys                     | key          | key_len | ref                           | rows   | Extra       |
        +----+-------------+-------+--------+-----------------------------------+--------------+---------+-------------------------------+--------+-------------+
        |  1 | SIMPLE      | C     | const  | PRIMARY                           | PRIMARY      | 4       | const                         |      1 |             |
        |  1 | SIMPLE      | Y     | range  | YEAR_IDX                          | YEAR_IDX     | 4       | NULL                          | 160422 | Using where |
        |  1 | SIMPLE      | SD    | eq_ref | PRIMARY                           | PRIMARY      | 4       | climate.Y.STATION_DISTRICT_ID |      1 | Using index |
        |  1 | SIMPLE      | S     | eq_ref | PRIMARY                           | PRIMARY      | 4       | climate.SD.ID                 |      1 | Using where |
        |  1 | SIMPLE      | M     | ref    | PRIMARY,YEAR_REF_IDX,CATEGORY_IDX | YEAR_REF_IDX | 8       | climate.Y.ID                  |     54 | Using where |
        |  1 | SIMPLE      | D     | ref    | INDEX                             | INDEX        | 8       | climate.M.ID                  |     11 | Using where |
        +----+-------------+-------+--------+-----------------------------------+--------------+---------+-------------------------------+--------+-------------+

    Related

        http://dev.mysql.com/doc/refman/5.0/en/how-to-avoid-table-scan.html
        http://dev.mysql.com/doc/refman/5.0/en/where-optimizations.html
        http://stackoverflow.com/questions/557425/optimize-sql-that-uses-between-clause

    Thank you!

  • creating QT gui using a thread in c++?

    - by rashid
    I am trying to create this Qt GUI using a thread, but no luck. Below is my code. The problem is that the GUI never shows up.

        /* INCLUDES HERE...
           ... */

        using namespace std;

        struct mainStruct {
            int s_argc;
            char ** s_argv;
        };
        typedef struct mainStruct mas;

        void *guifunc(void * arg);

        int main(int argc, char * argv[])
        {
            mas m;
            m.s_argc = argc;
            m.s_argv = argv;

            pthread_t threadGUI;

            // start a new thread for the gui
            int result = pthread_create(&threadGUI, NULL, guifunc, (void *) &m);
            if (result) {
                printf("Error creating gui thread");
                exit(0);
            }
            return 0;
        }

        void *guifunc(void * arg)
        {
            mas m = *(mas *)arg;
            QApplication app(m.s_argc, m.s_argv);

            // object instantiation
            guiClass *gui = new guiClass();

            // show gui
            gui->show();

            app.exec();
        }
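    Two things stand out before any sketch: main() returns (and the process exits) immediately after pthread_create(), and Qt expects QApplication and all widgets to live in the main (GUI) thread. The usual structure is therefore the other way around, with the GUI in main() and the background work in a thread; a sketch, where the worker's contents and the QWidget stand-in for guiClass are illustrative:

        #include <QApplication>
        #include <QThread>
        #include <QWidget>

        // Illustrative worker: anything that is not GUI work can run here.
        class Worker : public QThread {
        protected:
            void run() {
                // ... background computation, I/O, etc. ...
            }
        };

        int main(int argc, char *argv[])
        {
            QApplication app(argc, argv);   // GUI objects stay in the main thread

            QWidget gui;                    // stand-in for guiClass
            gui.show();

            Worker worker;
            worker.start();                 // background work runs alongside the GUI

            int rc = app.exec();            // event loop blocks until the window closes
            worker.wait();
            return rc;
        }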

  • pointer to a structure in a nested structure

    - by dpka6
    I have six levels of nested structures, and I am having a problem with the last three levels. The program compiles fine, but when I run it, it crashes with a segmentation fault. I feel there is some problem in the assignments. Kindly point out the error.

        typedef struct {
            char addr[6];
            int32_t rs;
            uint16_t ch;
            uint8_t ap;
        } C;

        typedef struct {
            C *ap_info;
        } B;

        typedef struct {
            union {
                B wi;
            } u;
        } A;

        function1(char addr, int32_t rs, uint16_t ch, uint8_t ap)
        {
            A la;
            la.u.wi.ap_info->addr[6] = addr;
            la.u.wi.ap_info->rs = rs;
            la.u.wi.ap_info->ch = ch;
            la.u.wi.ap_info->ap = ap;
        }
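    The crash is consistent with ap_info never being pointed at an actual C object before it is dereferenced (and addr[6] also writes one element past a 6-byte array). A sketch of one way to fix both, treating the address as a 6-byte buffer, which is an assumption about what the single char parameter was meant to be:

        #include <stdint.h>
        #include <stdlib.h>
        #include <string.h>

        typedef struct {
            char addr[6];
            int32_t rs;
            uint16_t ch;
            uint8_t ap;
        } C;

        typedef struct {
            C *ap_info;
        } B;

        typedef struct {
            union {
                B wi;
            } u;
        } A;

        void function1(const char addr[6], int32_t rs, uint16_t ch, uint8_t ap)
        {
            A la;

            /* Give the pointer something to point to before writing through it. */
            la.u.wi.ap_info = (C *)malloc(sizeof(C));
            if (la.u.wi.ap_info == NULL)
                return;

            memcpy(la.u.wi.ap_info->addr, addr, sizeof(la.u.wi.ap_info->addr));
            la.u.wi.ap_info->rs = rs;
            la.u.wi.ap_info->ch = ch;
            la.u.wi.ap_info->ap = ap;

            /* ... use la ... */

            free(la.u.wi.ap_info);
        }

    If the C object does not need to outlive the function, embedding it by value (C ap_info; instead of C *ap_info;) avoids the allocation entirely.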

  • Template trick to optimize out allocations

    - by anon
    I have:

        struct DoubleVec {
            std::vector<double> data;
        };

        DoubleVec operator+(const DoubleVec& lhs, const DoubleVec& rhs) {
            DoubleVec ans(lhs.size());
            for (int i = 0; i < lhs.size(); ++i) {
                ans[i] = lhs[i] + rhs[i]; // assume lhs.size() == rhs.size()
            }
            return ans;
        }

        DoubleVec someFunc(DoubleVec a, DoubleVec b, DoubleVec c, DoubleVec d) {
            DoubleVec ans = a + b + c + d;
        }

    Now, in the above, "a + b + c + d" will cause the creation of three temporary DoubleVecs. Is there a way to optimize this away with some type of template magic, i.e. to optimize it down to something equivalent to:

        DoubleVec ans(a.size());
        for (int i = 0; i < ans.size(); i++)
            ans[i] = a[i] + b[i] + c[i] + d[i];

    You can assume all DoubleVecs have the same number of elements. The high-level idea is to do some kind of templated magic on "+" which delays the computation until the "=", at which point it looks into itself, notices it is just adding these numbers, and synthesizes a[i] + b[i] + c[i] + d[i] instead of all the temporaries. Thanks!
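    This is the classic use case for expression templates: operator+ returns a lightweight node that records its operands, and the elements are only computed when the whole expression is assigned into a DoubleVec, so the sum of any number of vectors makes one pass and one allocation. A minimal sketch (not production-ready: the nodes hold references, so an expression must be consumed in the same statement that builds it, and assigning into one of its own operands is not handled):

        #include <cstddef>
        #include <iostream>
        #include <vector>

        // Node recording a pending element-wise addition; nothing is computed
        // until operator[] is called on it.
        template <typename L, typename R>
        struct Sum {
            const L& lhs;
            const R& rhs;
            double operator[](std::size_t i) const { return lhs[i] + rhs[i]; }
            std::size_t size() const { return lhs.size(); }
        };

        struct DoubleVec {
            std::vector<double> data;

            explicit DoubleVec(std::size_t n = 0) : data(n) {}

            double  operator[](std::size_t i) const { return data[i]; }
            double& operator[](std::size_t i)       { return data[i]; }
            std::size_t size() const { return data.size(); }

            // Evaluation happens here: one loop over the elements, one buffer.
            template <typename L, typename R>
            DoubleVec& operator=(const Sum<L, R>& e) {
                data.resize(e.size());
                for (std::size_t i = 0; i < e.size(); ++i)
                    data[i] = e[i];
                return *this;
            }
        };

        // Builds a Sum node instead of a temporary DoubleVec.
        template <typename L, typename R>
        Sum<L, R> operator+(const L& lhs, const R& rhs) {
            Sum<L, R> node = { lhs, rhs };
            return node;
        }

        int main() {
            DoubleVec a(3), b(3), c(3), d(3);
            for (std::size_t i = 0; i < 3; ++i)
                a[i] = b[i] = c[i] = d[i] = 1.0;

            DoubleVec ans;
            ans = a + b + c + d;           // one loop, no intermediate vectors
            std::cout << ans[0] << "\n";   // prints 4
            return 0;
        }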

  • Capabilities & Linux & Java

    - by Marek Jelen
    Hi, I am experimenting with Linux capabilities for a Java application. I do not want to add capabilities to the interpreter (the JVM), so I tried to write a simple wrapper (with debugging information printed to stdout):

        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/capability.h>
        #include <unistd.h>

        int main(int argc, char *argv[]) {
            cap_t cap = cap_get_proc();
            if (!cap) {
                perror("cap_get_proc");
                exit(1);
            }
            printf("%s: running with caps %s\n", argv[0], cap_to_text(cap, NULL));
            return execlp("/usr/bin/java", "-server", "-jar", "project.jar", (char *)NULL);
        }

    This way I can see that the capability is set for this executable:

        ./runner: running with caps = cap_net_bind_service+p

    and getcap shows:

        runner = cap_net_bind_service+ip

    I have the capability set to be inheritable, so there should be no problem. However, Java still does not want to bind to privileged ports. I am getting this error:

        sun/nio/ch/Net.java:-2:in `bind': java.net.SocketException: Permission denied (NativeException)

    Can someone help me resolve this? Thanks in advance.
