Search Results

Search found 6123 results on 245 pages for 'unsigned char'.

Page 212/245

  • Counting string length in javascript and Ruby on Rails

    - by williamjones
    I've got a text area on a web site that should be limited in length. I'm allowing users to enter 255 characters, and am enforcing that limit with a Rails validation: validates_length_of :body, :maximum => 255 At the same time, I added a javascript char counter like you see on Twitter, to give feedback to the user on how many characters he has already used, and to disable the submit button when over length, and am getting that length in Javascript with a call like this: element.length Lastly, to enforce data integrity, in my Postgres database, I have created this field as a varchar(255) as a last line of defense. Unfortunately, these methods of counting characters do not appear to be directly compatible. Javascript counts the best, in that it counts what users consider as number of characters where everything is a single character. Once the submission hits Rails, however, all of the carriage returns have been converted to \r\n, now taking up 2 characters worth of space, which makes a close call fail Rails validations. Even if I were to handcode a different length validation in Rails, it would still fail when it hits the database I think, though I haven't confirmed this yet. What's the best way for me to make all this work the way the user would want? Best Solution: an approach that would enable me to meet user expectations, where each character of any type is only one character. If this means increasing the length of the varchar database field, a user should not be able to sneakily send a hand-crafted post that creates a row with more than 255 letters. Somewhat Acceptable Solution: a javascript change that enables the user to see the real character count, such that hitting return increments the counter 2 characters at a time, while properly handling all symbols that might have these strange behaviors.

    Read the article

  • pthread_exit and/or pthread_join causing Abort and SegFaults.

    - by MJewkes
    The following code is a simple thread game that switches between threads, causing the timer to decrease. It works fine for 3 threads, causes an Abort (core dumped) for 4 threads, and causes a seg fault for 5 or more threads. Does anyone have any idea why this might be happening? #include <stdio.h> #include <stdlib.h> #include <pthread.h> #include <errno.h> #include <assert.h> int volatile num_of_threads; int volatile time_per_round; int volatile time_left; int volatile turn_id; int volatile thread_running; int volatile can_check; void * player (void * id_in){ int id= (int)id_in; while(1){ if(can_check){ if (time_left<=0){ break; } can_check=0; if(thread_running){ if(turn_id==id-1){ turn_id=random()%num_of_threads; time_left--; } } can_check=1; } } pthread_exit(NULL); } int main(int argc, char *args[]){ int i; int buffer; pthread_t * threads =(pthread_t *)malloc(num_of_threads*sizeof(pthread_t)); thread_running=0; num_of_threads=atoi(args[1]); can_check=0; time_per_round = atoi(args[2]); time_left=time_per_round; srandom(time(NULL)); //Create Threads for (i=0;i<num_of_threads;i++){ do{ buffer=pthread_create(&threads[i],NULL,player,(void *)(i+1)); }while(buffer == EAGAIN); } can_check=1; time_left=time_per_round; turn_id=random()%num_of_threads; thread_running=1; for (i=0;i<num_of_threads;i++){ assert(!pthread_join(threads[i], NULL)); } return 0; }
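
    A minimal sketch of the same create/join pattern, with the thread count read from the arguments before the pthread_t array is sized (the snippet above sizes the array from num_of_threads before atoi(args[1]) has run). Everything below is illustrative, not the poster's program:

        #include <stdio.h>
        #include <stdlib.h>
        #include <pthread.h>

        static void *player(void *id_in) {
            long id = (long)id_in;              /* id passed by value, as in the post */
            printf("thread %ld running\n", id);
            return NULL;
        }

        int main(int argc, char *argv[]) {
            if (argc < 2) return 1;
            int n = atoi(argv[1]);              /* read the count first ...            */
            if (n <= 0) return 1;
            pthread_t *threads = malloc((size_t)n * sizeof(pthread_t)); /* ... then size the array */
            if (threads == NULL) return 1;
            for (long i = 0; i < n; i++)
                pthread_create(&threads[i], NULL, player, (void *)(i + 1));
            for (int i = 0; i < n; i++)
                pthread_join(threads[i], NULL); /* join before freeing the array */
            free(threads);
            return 0;
        }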

    Read the article

  • Why do I get empty request from the Jakarta Commons HttpClient?

    - by polyurethan
    I have a problem with the Jakarta Commons HttpClient. Before my self-written HttpServer gets the real request there is one request which is completely empty. That's the first problem. The second problem is, sometimes the request data ends after the third or fourth line of the http request: POST / HTTP/1.1 User-Agent: Jakarta Commons-HttpClient/3.1 Host: 127.0.0.1:4232 For debugging I am using the Axis TCPMonitor. There everything is fine except for the empty request. How I process the stream: StringBuffer requestBuffer = new StringBuffer(); InputStreamReader is = new InputStreamReader(socket.getInputStream(), "UTF-8"); int byteIn = -1; do { byteIn = is.read(); if (byteIn > 0) { requestBuffer.append((char) byteIn); } } while (byteIn != -1 && is.ready()); String requestData = requestBuffer.toString(); How I send the request: client.getParams().setSoTimeout(30000); method = new PostMethod(url.getPath()); method.getParams().setContentCharset("utf-8"); method.setRequestHeader("Content-Type", "application/xml; charset=utf-8"); method.addRequestHeader("Connection", "close"); method.setFollowRedirects(false); byte[] requestXml = getRequestXml(); method.setRequestEntity(new InputStreamRequestEntity(new ByteArrayInputStream(requestXml))); client.executeMethod(method); int statusCode = method.getStatusCode(); Does anyone have an idea how to solve these problems? Alex

    Read the article

  • Android XmlSerializer limitation?

    - by Rexb
    Hi all, My task is just to get an xml string using XmlSerializer. The problem is that the serializer seems to stop adding any new element and/or attribute to the xml document when it reaches a certain length (perhaps 10,000 chars?). My questions: have you experienced this kind of problem? What could the possible solutions be? Here is my sample test code: public void doSerialize() throws XmlPullParserException, IllegalArgumentException, IllegalStateException, IOException { StringWriter writer = new StringWriter(); XmlSerializer serializer = XmlPullParserFactory.newInstance().newSerializer(); serializer.setOutput(writer); serializer.startDocument(null, null); serializer.startTag(null, "START"); for (int i = 0; i < 20; i++) { serializer.attribute(null, "ATTR" + i, "VAL " + i); } serializer.startTag(null, "DATA"); for (int i = 0; i < 500; i++) { serializer.attribute(null, "attr" + i, "value " + i); } serializer.endTag(null, "DATA"); serializer.endTag(null, "START"); serializer.endDocument(); String xml = writer.toString(); // value: until 493rd attribute int n = xml.length(); // value: 10125 } Any help will be greatly appreciated.

    Read the article

  • Recursive Maze in Java

    - by Api
    I have written some short Java code for solving a simple maze problem, going from S to G. I do not understand where the program is going wrong. import java.util.Scanner; public class tester { static char [][] grid={ {'.','.'}, {'.','.'}, {'S','G'}, }; static int a=2; static int b=2; static boolean findpath(int x, int y) { if((x > grid.length-1) || (y > grid[0].length-1) || (x < 0 || y < 0)) { return false; } else if(x==a && y==b){ return true; } else if (findpath(x,y-1) == true){ return true; } else if (findpath(x+1,y) == true){ return true; } else if (findpath(x,y+1) == true) { return true; } else if (findpath(x-1,y) == true){ return true; } return false; } public static void main(String[] args){ boolean result=findpath(2,0); System.out.print(result); } } I am giving the starting position directly and the goal is defined in a & b. Do help.

    Read the article

  • When is memory actually freed?

    - by zhyk
    Hello all. I'm trying to understand memory management stuff in Objective-C. If I see the memory usage listed by Activity Monitor, it looks like memory is not being freed (I mean column rsize). But in "Object Allocations" everything looks fine. Here is my simple code: #import <Foundation/Foundation.h> int main (int argc, const char * argv[]) { NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init]; NSInteger i, k=10000; while (k>0) { NSMutableArray *array = [[NSMutableArray alloc]init]; for (i=0;i<1000*k; i++) { NSString *srtring = [[NSString alloc] initWithString:@"string...."]; [array addObject:srtring]; [srtring release]; srtring = nil; } [array release]; array = nil; k-=500; } [NSThread sleepForTimeInterval:5]; [pool release]; return 0; } As for retain and release it's cool, everything is balanced. But rsize decreases only after quitting from this little program. Is it possible to "clean" memory somehow before quitting?

    Read the article

  • How to use setTimeout / .delay() to wait for typing between characters

    - by Darcy
    Hi all, I am creating a simple listbox filter that takes the user input and returns the matching results in a listbox via javascript/jquery (roughly 5000+ items in the listbox). Here is the code snippet: var Listbox1 = $('#Listbox1'); var commands = document.getElementById('DatabaseCommandsHidden'); //using js for speed $('#CommandsFilter').bind('keyup', function() { Listbox1.children().remove(); for (var i = 0; i < commands.options.length; i++) { if (commands.options[i].text.toLowerCase().match($(this).val().toLowerCase())) { Listbox1.append($('<option></option>').val(i).html(commands.options[i].text)); } } }); This works pretty well, but slows down somewhat when the 1st/2nd chars are being typed since there are so many items. I thought a solution would be to add a delay to the textbox that prevents the 'keyup' event from being called until the user stops typing. The problem is, I'm not sure how to do that, or if it's even a good idea. Any suggestions/help is greatly appreciated.

    Read the article

  • Compilation errors calling find_if using a functor

    - by Jim Wong
    We are having a bit of trouble using find_if to search a vector of pairs for an entry in which the first element of the pair matches a particular value. To make this work, we have defined a trivial functor whose operator() takes a pair as input and compares the first entry against a string. Unfortunately, when we actually add a call to find_if using an instance of our functor constructed using a temporary string value, the compiler produces a raft of error messages. Oddly (to me, anyway), if we replace the temporary with a string that we've created on the stack, things seem to work. Here's what the code (including both versions) looks like: typedef std::pair<std::string, std::string> MyPair; typedef std::vector<MyPair> MyVector; struct MyFunctor: std::unary_function <const MyPair&, bool> { explicit MyFunctor(const std::string& val) : m_val(val) {} bool operator() (const MyPair& p) { return p.first == m_val; } const std::string m_val; }; bool f(const char* s) { MyFunctor f(std::string(s)); // ERROR // std::string str(s); // MyFunctor f(str); // OK MyVector vec; MyVector::const_iterator i = std::find_if(vec.begin(), vec.end(), f); return i != vec.end(); } And here's what the most interesting error message looks like: /usr/include/c++/4.2.1/bits/stl_algo.h:260: error: conversion from ‘std::pair, std::allocator , std::basic_string, std::allocator ’ to non-scalar type ‘std::string’ requested Because we have a workaround, we're mostly curious as to why the first form causes problems. I'm sure we're missing something, but we haven't been able to figure out what it is.

    Read the article

  • Why can I call a non-const member function pointer from a const method?

    - by sdg
    A co-worker asked about some code like this that originally had templates in it. I have removed the templates, but the core question remains: why does this compile OK? #include <iostream> class X { public: void foo() { std::cout << "Here\n"; } }; typedef void (X::*XFUNC)() ; class CX { public: explicit CX(X& t, XFUNC xF) : object(t), F(xF) {} void execute() const { (object.*F)(); } private: X& object; XFUNC F; }; int main(int argc, char* argv[]) { X x; const CX cx(x,&X::foo); cx.execute(); return 0; } Given that CX is a const object, and its member function execute is const, therefore inside CX::execute the this pointer is const. But I am able to call a non-const member function through a member function pointer. Are member function pointers a documented hole in the const-ness of the world? What (presumably obvious to others) issue have we missed?

    Read the article

  • Vim: yank and replace -- the same yanked input -- multiple times, and two other questions

    - by Hassan Syed
    Now that I am using vim for everything I type, rather than just for configuring servers, I want to sort out the following trivialities. I tried to formulate Google search queries but the results didn't address my questions :D. Question one: How do I yank and replace multiple times? Once I have something in the yank history (if that is what it's called) and then highlight and use the 'p' char in command mode, the replaced text is put at the front of the yank history; therefore subsequent replace operations do not use the text I intended. I imagine this to be a useful feature under certain circumstances but I do not have a need for it in my workflow. Question two: How do I type text without causing the line to ripple forward? I use hard-tab stops to align my code in a certain way -- e.g., FunctionNameX ( lala * land ); FunctionNameProto ( ); When I figure out what needs to go into the second function, how do I insert it without moving the text up? Question three: Is there a way of having a uniform yank history across gvim instances on the same machine? I have 1 monitor. Just wondering, atm I am using highlight + mouse middle click.

    Read the article

  • C socket programming: client send() but server select() doesn't see it

    - by Fantastic Fourier
    Hey all, I have a server and a client running on two different machines where the client send()s but the server doesn't seem to receive the message. The server employs select() to monitor sockets for any incoming connections/messages. I can see that when the server accepts a new connection, it updates the fd_set array but always returns 0 despite the client's send() messages. The connection is TCP and the machines are separated by like one router so dropped packets are highly unlikely. I have a feeling that it's not select() but perhaps send()/sendto() from the client that may be the problem, but I'm not sure how to go about localizing the problem area. while(1) { readset = info->read_set; ready = select(info->max_fd+1, &readset, NULL, NULL, &timeout); } above is the server side code where the server has a thread that runs select() indefinitely. rv = connect(sockfd, (struct sockaddr *) &server_address, sizeof(server_address)); printf("rv = %i\n", rv); if (rv < 0) { printf("MAIN: ERROR connect() %i: %s\n", errno, strerror(errno)); exit(1); } else printf("connected\n"); sleep(3); char * somemsg = "is this working yet?\0"; rv = send(sockfd, somemsg, sizeof(somemsg), NULL); if (rv < 0) printf("MAIN: ERROR send() %i: %s\n", errno, strerror(errno)); printf("MAIN: rv is %i\n", rv); rv = sendto(sockfd, somemsg, sizeof(somemsg), NULL, &server_address, sizeof(server_address)); if (rv < 0) printf("MAIN: ERROR sendto() %i: %s\n", errno, strerror(errno)); printf("MAIN: rv is %i\n", rv); and this is the client side where it connects and sends messages and returns connected MAIN: rv is 4 MAIN: rv is 4 any comments or insights are appreciated.
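
    The "rv is 4" lines match sizeof applied to a char pointer (4 bytes on a 32-bit build) rather than the length of the string, so only part of the message goes out. A hedged sketch of sending the whole buffer, assuming sockfd is already a connected TCP socket as in the post:

        #include <stdio.h>
        #include <string.h>
        #include <sys/types.h>
        #include <sys/socket.h>

        /* sockfd is assumed to be connected already; returns 0 on success */
        static int send_all(int sockfd, const char *msg) {
            size_t len = strlen(msg) + 1;   /* include the NUL only if the protocol expects it */
            size_t off = 0;
            while (off < len) {
                ssize_t rv = send(sockfd, msg + off, len - off, 0);
                if (rv < 0) { perror("send"); return -1; }
                off += (size_t)rv;          /* send() may accept fewer bytes than requested */
            }
            return 0;
        }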

    Read the article

  • scanf segfaults and various other anomalies inside while loop

    - by Shadow
    while(1){ //Command prompt char *command; printf("%s>",current_working_directory); scanf("%s",command);<--seg faults after input has been received. printf("\ncommand:%s\n",command); } I am getting a few different errors and they don't really seem reproducible (except for the segfault at this point). This code worked fine about 10 minutes ago, then it looped infinitely on the printf command and now it seg faults on the line mentioned above. The only thing I changed was scanf("%s",command); to what it currently is. If I change the command variable to be an array it works, obviously because the storage is set aside for it. 1) I got criticized for telling someone that they needed to malloc a pointer* (But that usually seems to solve the problem, such as making it an array) 2) the command I am entering is "magic", 5 characters, so there shouldn't be any crazy stack overflow. 3) I am running on Mac OS X 10.6 with the newest version of Xcode (non-OS4) and standard gcc 4) this is how I compile the program: gcc --std=c99 -W sfs.c Just trying to figure out what is going on. Since this is for a school project I am never going to have to see again, I will just code some noob workaround that would make my boss cry :) But afterwards I would love to figure out why this is happening and not just make some fix for it, and if there is some fix for it, why that fix works.
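
    A minimal sketch of the usual workaround: give the input real storage (an array or malloc'd buffer) and bound the read; fgets is used here instead of scanf, and the 80-byte size and the "quit" command are arbitrary choices for the example:

        #include <stdio.h>
        #include <string.h>

        int main(void) {
            char command[80];                           /* storage, not an uninitialized pointer */
            for (;;) {
                printf("> ");
                if (fgets(command, sizeof command, stdin) == NULL)
                    break;                              /* EOF */
                command[strcspn(command, "\n")] = '\0'; /* strip the trailing newline */
                printf("command:%s\n", command);
                if (strcmp(command, "quit") == 0)
                    break;
            }
            return 0;
        }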

    Read the article

  • Warning: cast increases required alignment

    - by dash-tom-bang
    I'm recently working on this platform for which a legacy codebase issues a large number of "cast increases required alignment to N" warnings, where N is the size of the target of the cast. struct Message { int32_t id; int32_t type; int8_t data[16]; }; int32_t GetMessageInt(const Message& m) { return *reinterpret_cast<int32_t*>(&data[0]); } Hopefully it's obvious that a "real" implementation would be a bit more complex, but the basic point is that I've got data coming from somewhere, I know that it's aligned (because I need the id and type to be aligned), and yet I get the message that the cast is increasing the alignment, in the example case, to 4. Now I know that I can suppress the warning with an argument to the compiler, and I know that I can cast the bit inside the parentheses to void* first, but I don't really want to go through every bit of code that needs this sort of manipulation (there's a lot because we load a lot of data off of disk, and that data comes in as char buffers so that we can easily pointer-advance), but can anyone give me any other thoughts on this problem? I mean, to me it seems like such an important and common option that you wouldn't want to warn, and if there is actually the possibility of doing it wrong then suppressing the warning isn't going to help. Finally, can't the compiler know as I do how the object in question is actually aligned in the structure, so it should be able to not worry about the alignment on that particular object unless it got bumped a byte or two?
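
    One commonly suggested alternative is to memcpy out of the byte buffer instead of casting, which avoids the aligned-load question entirely (at the cost of a small copy the compiler can usually optimize away). A sketch using the struct from the post, written in plain C:

        #include <stdint.h>
        #include <string.h>

        struct Message {
            int32_t id;
            int32_t type;
            int8_t  data[16];
        };

        /* copy the first four bytes of data[] into a properly aligned int32_t */
        int32_t GetMessageInt(const struct Message *m) {
            int32_t value;
            memcpy(&value, &m->data[0], sizeof value);
            return value;
        }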

    Read the article

  • How does the PATH environment variable change my executable from using msvcr90 to msvcr80?

    - by Runner
    #include <gtk/gtk.h> int main( int argc, char *argv[] ) { GtkWidget *window; gtk_init (&argc, &argv); window = gtk_window_new (GTK_WINDOW_TOPLEVEL); gtk_widget_show (window); gtk_main (); return 0; } I tried putting various versions of MSVCR80.dll under the same directory as the generated executable(via cmake),but none matched. Is there a general solution for this kinda problem? UPDATE Some answers recommend install the VS redist,but I'm not sure whether or not it will affect my installed Visual Studio 9, can someone confirm? Manifest file of the executable <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0"> <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3"> <security> <requestedPrivileges> <requestedExecutionLevel level="asInvoker" uiAccess="false"></requestedExecutionLevel> </requestedPrivileges> </security> </trustInfo> <dependency> <dependentAssembly> <assemblyIdentity type="win32" name="Microsoft.VC90.DebugCRT" version="9.0.21022.8" processorArchitecture="x86" publicKeyToken="1fc8b3b9a1e18e3b"></assemblyIdentity> </dependentAssembly> </dependency> </assembly> It seems the manifest file says it should use the MSVCR90, why it always reporting missing MSVCR80.dll? FOUND After spending several hours on it,finally I found it's caused by this setting in PATH: D:\MATLAB\R2007b\bin\win32 After removing it all works fine.But why can that setting affect my running executable from using msvcr90 to msvcr80 ???

    Read the article

  • select failing with C program but not shell

    - by Gary
    So I have a parent and child process, and the parent can read output from the child and send to the input of the child. So far, everything has been working fine with shell scripts, testing commands which input and output data. I just tested with a simple C program and couldn't get it to work. Here's the C program: #include <stdio.h> int main( void ) { char stuff[80]; printf("Enter some stuff:\n"); scanf("%s", stuff); return 0; } The problem with with the C program is that my select fails to read from the child fd and hence the program cannot finish. Here's the bit that does the select.. //wait till child is ready fd_set set; struct timeval timeout; FD_ZERO( &set ); // initialize fd set FD_SET( PARENT_READ, &set ); // add child in to set timeout.tv_sec = 3; timeout.tv_usec = 0; int r = select(FD_SETSIZE, &set, NULL, NULL, &timeout); if( r < 1 ) { // we didn't get any input exit(1); } Does anyone have any idea why this would happen with the C program and not a shell one? Thanks!
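
    One possible cause when the child is a plain C program on a pipe: stdio switches stdout to block buffering when it is not a terminal, so the printf prompt never reaches the parent and select() times out. A sketch of the child flushing explicitly; whether this is the actual cause here is only a guess:

        #include <stdio.h>

        int main(void) {
            char stuff[80];
            /* setvbuf(stdout, NULL, _IONBF, 0); would disable buffering entirely */
            printf("Enter some stuff:\n");
            fflush(stdout);                 /* push the prompt through the pipe now */
            if (scanf("%79s", stuff) == 1)
                printf("Got: %s\n", stuff);
            fflush(stdout);
            return 0;
        }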

    Read the article

  • How to make increasing numbers in filenames in C?

    - by zaplec
    Hi, I have a little problem. I need to do some little operations on quite a few files in one little program. So far I have decided to process them in a single loop where I just change the number after the name. The files are all named TFxx.txt where xx is an increasing number from 1 to 80. So how can I open them all in a single loop, one after another? I have tried this: for(i=0; i<=80; i++) { char name[8] = "TF"+i+".txt"; FILE = open(name, r); /* Do something */ } As you can see, the second line would work in Python but not in C. I have tried to do similar running numbering in C for this program, but I haven't yet found out how to do that. The format doesn't need to be as it is on the second line, but I'd like some advice on how I can solve this problem. All I need to do is open many files and do the same operations to them.
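
    A minimal sketch of the usual C idiom: build each name with snprintf into a buffer that is large enough, then fopen it. The processing step is left as a comment; the 1 to 80 range follows the post:

        #include <stdio.h>

        int main(void) {
            char name[32];                                   /* "TF80.txt" plus NUL fits easily */
            for (int i = 1; i <= 80; i++) {
                snprintf(name, sizeof name, "TF%d.txt", i);  /* "TF1.txt" ... "TF80.txt" */
                FILE *fp = fopen(name, "r");
                if (fp == NULL) {
                    fprintf(stderr, "could not open %s\n", name);
                    continue;
                }
                /* ... do something with fp ... */
                fclose(fp);
            }
            return 0;
        }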

    Read the article

  • Non-blocking TCP connection issues.

    - by Poni
    Hi! I think I have a problem. I have two TCP apps connected to each other which use winsock I/O completion ports to send/receive data (non-blocking sockets). Everything works just fine until there's a data transfer burst. The sender starts sending incorrect/malformed data. I allocate the buffers I'm sending on the stack, and if I understand correctly, that's the wrong thing to do, because these buffers should remain as I sent them until I get the "write complete" notification from IOCP. Take this for example: void some_function() { char cBuff[1024]; // filling cBuff with some data WSASend(...); // sending cBuff, non-blocking mode // filling cBuff with other data WSASend(...); // again, sending cBuff // ..... and so forth! } If I understand correctly, each of these WSASend() calls should have its own unique buffer, and that buffer can be reused only when the send completes. Correct? Now, what strategies can I implement in order to maintain a big sack of such buffers, and how should I handle them, etc.? And, if I am to use such buffers, that means I should copy the data to be sent from the source buffer to the temporary one; thus, I'd set SO_SNDBUF on each socket to zero, so the system will not re-copy what I already copied. Are you with me? Please let me know if I wasn't clear.
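
    One common strategy is to give every outstanding WSASend() its own heap-allocated context holding the OVERLAPPED, the WSABUF, and a private copy of the bytes, and to free that context only when the completion is dequeued from the IOCP. A sketch under that assumption (error handling trimmed, Windows/Winsock only, link against ws2_32):

        #include <winsock2.h>
        #include <stdlib.h>
        #include <string.h>

        typedef struct SendContext {
            OVERLAPPED overlapped;   /* first member, so the dequeued LPOVERLAPPED casts back to us */
            WSABUF     wsabuf;
            char       data[1024];   /* private copy of the outgoing bytes */
        } SendContext;

        static int post_send(SOCKET s, const char *src, size_t len) {
            SendContext *ctx = (SendContext *)calloc(1, sizeof *ctx);
            DWORD sent = 0;
            if (ctx == NULL || len > sizeof ctx->data) {
                free(ctx);                        /* free(NULL) is a no-op */
                return -1;
            }
            memcpy(ctx->data, src, len);          /* the caller's buffer may be reused right away */
            ctx->wsabuf.buf = ctx->data;
            ctx->wsabuf.len = (ULONG)len;
            if (WSASend(s, &ctx->wsabuf, 1, &sent, 0, &ctx->overlapped, NULL) == SOCKET_ERROR
                && WSAGetLastError() != WSA_IO_PENDING) {
                free(ctx);                        /* the send failed outright */
                return -1;
            }
            return 0;                             /* free(ctx) when its completion arrives */
        }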

    Read the article

  • how to fix my error saying expected expression before 'else'

    - by user292489
    This program is intended to read a set of numbers from a .txt file and write them to another two .txt files called even and odd, as follows: #include <stdio.h> #include <stdlib.h> int main(int argc, char *argv[]) { int i=0,even,odd; int number[i]; // check to make sure that all the file names are entered if (argc != 3) { printf("Usage: executable in_file output_file\n"); exit(0); } FILE *dog = fopen(argv[1], "r"); FILE *feven= fopen(argv[2], "w"); FILE *fodd= fopen (argv[3], "w"); // check whether the file has been opened successfully if (dog == NULL) { printf("File %s cannot open!\n", argv[1]); exit(0); } //odd = fopen(argv[2], "w"); { if (i%2!=1) i++;} fprintf(feven, "%d", even); fscanf(dog, "%d", &number[i]); else { i%2==1; i++;} fprintf(fodd, "%d", odd); fscanf(dog, "%d", &number[i]); fclose(feven); fclose(fodd);
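
    For reference, a sketch of how the read-and-split loop is usually structured (read one number, then decide which file it goes to); this is a guess at the intended behaviour, not a fix of the exact code above:

        #include <stdio.h>

        int main(int argc, char *argv[]) {
            if (argc != 4) {                             /* program, input, even file, odd file */
                printf("Usage: executable in_file even_file odd_file\n");
                return 1;
            }
            FILE *in    = fopen(argv[1], "r");
            FILE *feven = fopen(argv[2], "w");
            FILE *fodd  = fopen(argv[3], "w");
            if (in == NULL || feven == NULL || fodd == NULL) {
                printf("Could not open one of the files!\n");
                return 1;
            }
            int number;
            while (fscanf(in, "%d", &number) == 1) {     /* read first, then branch */
                if (number % 2 == 0)
                    fprintf(feven, "%d\n", number);
                else
                    fprintf(fodd, "%d\n", number);
            }
            fclose(in);
            fclose(feven);
            fclose(fodd);
            return 0;
        }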

    Read the article

  • What are some methods to prevent double posting in a form? (PHP)

    - by jpjp
    I want to prevent users from accidentally posting a comment twice. I use the PRG (post redirect get) method, so that I insert the data on another page then redirect the user back to the page which shows the comment. This allows users to refresh as many times as they want. However this doesn't work when the user goes back and clicks submit again or when they click submit 100 times really fast. I don't want 100 of the same comments. I looked at related questions on SO and found that a token is best. But I am having trouble using it. //makerandomtoken(20) returns a random 20 length char. <form method="post" ... > <input type="text" id="comments" name="comments" class="commentbox" /><br/> <input type="hidden" name="_token" value="<?php echo $token=makerandomtoken(20); ?>" /> <input type="submit" value="submit" name="submit" /> </form> if (isset($_POST['submit']) && !empty($comments)) { $comments= mysqli_real_escape_string($dbc,trim($_POST['comments'])); //how do I make the if-statment to check if the token has been already set once? if ( ____________){ //don't insert comment because already clicked submit } else{ //insert the comment into the database } } So I have the token as a hidden value, but how do I use that to prevent multiple clicking of submit. METHODS: someone suggested using sessions. I would set the random token to $_SESSION['_token'] and check if that session token is equal to the $_POST['_token'], but how do I do that? When I tried, it still doesn't check

    Read the article

  • error message The URI does not identify an external Java class

    - by iHeartGreek
    Hi! I am new to XSL, and thus new to using scripts within the XSL. I have taken example code (also using C#) and adapted it for my own use.. but it does not work. The error message is: The URI urn:cs-scripts does not identify an external Java class The relevant code I have is: <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:msxsl="urn:schemas-microsoft-com:xslt" exclude-result-prefixes="msxsl" xmlns:strTok="urn:cs-scripts"> ... ... ... </xsl:template> <xsl:variable name="temp"> <xsl:value-of select="tok:getList('AAA BBB CCC', ' ')"/> </xsl:variable> <msxsl:script language="C#" implements-prefix="tok"> <![CDATA[ public string[] getList(string str, char[] delim) { return str.Split(delim, StringSplitOptions.None); } public string getString(string[] list, int i) { return list[i]; } ]]> </msxsl:script> </xsl:stylesheet>

    Read the article

  • Program repeats each time a character is scanned. How do I stop it?

    - by ZaZu
    Hello there, I have a program that has this code: #include<stdio.h> main(){ int input; char g; do{ printf("Choose a numeric value"); printf(">"); scanf("\n%c",&input); g=input-'0'; }while((g>=-16 && g<=-1)||(g>=10 && g<=42)||(g>=43 && g<=79)); } It basically uses ASCII manipulation to allow the program to accept numbers only: '0' has the ASCII value 48 by default, and the ASCII value - 48 gives the ranges of numbers above (in the while statement). Anyway, whenever a user inputs numbers AND letters, such as: abr39293afakvmienb23 The program ignores a, b, and r, but takes '3' as the first input. For a, b and r, the code under the do loop repeats. So for the above example, I get: Choose a numeric value >Choose a numeric value> Choose a numeric value >3 Is there a way I can stop this? I tried using \n%c to scan the character and account for whitespace, but that didn't work :( Please help, thank you very much!
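
    A hedged sketch of one way to avoid the repeated prompt: read a whole line with fgets and validate it with strtol, so a run of letters is rejected in a single pass instead of one character at a time. The accepted-range check from the post is omitted here:

        #include <stdio.h>
        #include <stdlib.h>

        int main(void) {
            char line[64];
            char *end;
            long value;
            for (;;) {
                printf("Choose a numeric value>");
                if (fgets(line, sizeof line, stdin) == NULL)
                    return 0;                       /* EOF */
                value = strtol(line, &end, 10);
                if (end != line && (*end == '\n' || *end == '\0'))
                    break;                          /* the whole line was a number */
                printf("Numbers only, please.\n");  /* "abr39293..." is rejected in one go */
            }
            printf("You chose %ld\n", value);
            return 0;
        }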

    Read the article

  • Chinese records of sqlite database are not accessible in iPhone app

    - by Mas
    Hi! I'm creating an iPhone app in which I'm reading data from the sqlite db and presenting it in the tableview control. The problem is that the data is in Chinese. For an unknown reason, when I read/fetch records from the sqlite table, some records are presented and others are missed by Obj-C. Obj-C even reads some columns of the missing rows, but returns nil results for some columns. In reality these columns are not empty. I tried many solutions but didn't find a suitable one. The same database works perfectly in the Android version of the app. Can anyone help? Here is the code for reading the Chinese records from the table: Quote *q = [[Quote alloc] init]; q.catId = sqlite3_column_int(statement, 0); q.subCatId = sqlite3_column_int(statement, 1); q.quote = [NSString stringWithUTF8String:(char *)sqlite3_column_text(statement, 2)]; and here are some of the records which the iPhone misses ??????·??? ??????,????????! ??????,??????? Thanks
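
    If the "missing" values come back as NULL from sqlite3_column_text, passing that NULL to stringWithUTF8String: would explain the nil results. A C-level sketch of guarding the read (column index 2 as in the post; whether this is the cause here is an assumption):

        #include <stdio.h>
        #include <sqlite3.h>

        /* call after sqlite3_step(statement) has returned SQLITE_ROW */
        static void read_quote_column(sqlite3_stmt *statement) {
            const unsigned char *text = sqlite3_column_text(statement, 2);
            int nbytes = sqlite3_column_bytes(statement, 2);   /* UTF-8 byte length */
            if (text == NULL) {
                printf("column 2 is NULL for this row\n");     /* guard before wrapping in an NSString */
                return;
            }
            printf("quote (%d bytes of UTF-8): %s\n", nbytes, text);
        }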

    Read the article

  • Write to memory buffer instead of file with libjpeg?

    - by Richard Knop
    I have found this function which uses libjpeg to write to a file: int write_jpeg_file( char *filename ) { struct jpeg_compress_struct cinfo; struct jpeg_error_mgr jerr; /* this is a pointer to one row of image data */ JSAMPROW row_pointer[1]; FILE *outfile = fopen( filename, "wb" ); if ( !outfile ) { printf("Error opening output jpeg file %s\n!", filename ); return -1; } cinfo.err = jpeg_std_error( &jerr ); jpeg_create_compress(&cinfo); jpeg_stdio_dest(&cinfo, outfile); /* Setting the parameters of the output file here */ cinfo.image_width = width; cinfo.image_height = height; cinfo.input_components = bytes_per_pixel; cinfo.in_color_space = color_space; /* default compression parameters, we shouldn't be worried about these */ jpeg_set_defaults( &cinfo ); /* Now do the compression .. */ jpeg_start_compress( &cinfo, TRUE ); /* like reading a file, this time write one row at a time */ while( cinfo.next_scanline < cinfo.image_height ) { row_pointer[0] = &raw_image[ cinfo.next_scanline * cinfo.image_width * cinfo.input_components]; jpeg_write_scanlines( &cinfo, row_pointer, 1 ); } /* similar to read file, clean up after we're done compressing */ jpeg_finish_compress( &cinfo ); jpeg_destroy_compress( &cinfo ); fclose( outfile ); /* success code is 1! */ return 1; } I would actually need to write the jpeg compressed image just to memory buffer, without saving it to a file, to save time. Could somebody give me an example how to do it? I have been searching the web for a while but the documentation is very rare if any and examples are also difficult to come by.
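
    If the libjpeg build in use provides jpeg_mem_dest (libjpeg 8 and newer, and libjpeg-turbo, do), the stdio destination can be swapped for a memory destination and the rest of the function stays the same. A sketch under that assumption, with width/height/bytes_per_pixel/color_space passed in rather than taken from globals as in the post:

        #include <stdio.h>
        #include <stdlib.h>
        #include <jpeglib.h>

        /* Compress raw_image into a malloc'd buffer; requires jpeg_mem_dest
           (libjpeg 8+ / libjpeg-turbo). The caller frees *out_buf with free(). */
        int write_jpeg_to_memory(unsigned char *raw_image, int width, int height,
                                 int bytes_per_pixel, J_COLOR_SPACE color_space,
                                 unsigned char **out_buf, unsigned long *out_size) {
            struct jpeg_compress_struct cinfo;
            struct jpeg_error_mgr jerr;
            JSAMPROW row_pointer[1];

            cinfo.err = jpeg_std_error(&jerr);
            jpeg_create_compress(&cinfo);

            *out_buf = NULL;                            /* let libjpeg allocate and grow the buffer */
            *out_size = 0;
            jpeg_mem_dest(&cinfo, out_buf, out_size);   /* instead of jpeg_stdio_dest */

            cinfo.image_width = width;
            cinfo.image_height = height;
            cinfo.input_components = bytes_per_pixel;
            cinfo.in_color_space = color_space;
            jpeg_set_defaults(&cinfo);

            jpeg_start_compress(&cinfo, TRUE);
            while (cinfo.next_scanline < cinfo.image_height) {
                row_pointer[0] = &raw_image[cinfo.next_scanline * width * bytes_per_pixel];
                jpeg_write_scanlines(&cinfo, row_pointer, 1);
            }
            jpeg_finish_compress(&cinfo);
            jpeg_destroy_compress(&cinfo);
            return 1;
        }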

    Read the article

  • How to use dirent.h correctly.

    - by Nick
    Hello, I am new to C++ and I am experimenting with the dirent.h header to manipulate directory entries. The following little app compiles but pukes after you supply a directory name. Can someone give me a hint? The int quit is there to provide a while loop. I removed the loop in an attempt to isolate my problem. Thanks! #include <iostream> #include <dirent.h> using namespace std; int main() { char *dirname = 0; DIR *pd = 0; struct dirent *pdirent = 0; int quit = 1; cout<< "Enter a directory path to open (leave blank to quit):\n"; cin >> dirname; if(dirname == NULL) { quit = 0; } pd = opendir(dirname); if(pd == NULL) { cout << "ERROR: Please provide a valid directory path.\n"; } return 0; }
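
    A minimal sketch of the usual shape of this program, in plain C: the path gets real storage (the snippet above reads into a null char*), and readdir is used to walk the entries. The buffer size and the blank-line-quits behaviour are illustrative:

        #include <stdio.h>
        #include <string.h>
        #include <dirent.h>

        int main(void) {
            char dirname[256];                          /* real storage for the path */
            printf("Enter a directory path to open (leave blank to quit):\n");
            if (fgets(dirname, sizeof dirname, stdin) == NULL)
                return 0;
            dirname[strcspn(dirname, "\n")] = '\0';     /* trim the newline */
            if (dirname[0] == '\0')
                return 0;                               /* blank line: quit */
            DIR *pd = opendir(dirname);
            if (pd == NULL) {
                printf("ERROR: Please provide a valid directory path.\n");
                return 1;
            }
            struct dirent *entry;
            while ((entry = readdir(pd)) != NULL)       /* list the entries */
                printf("%s\n", entry->d_name);
            closedir(pd);
            return 0;
        }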

    Read the article

  • Providing less than operator for one element of a pair

    - by Koszalek Opalek
    What would be the most elegant way to fix the following code: #include <vector> #include <map> #include <set> using namespace std; typedef map< int, int > row_t; typedef vector< row_t > board_t; typedef row_t::iterator area_t; bool operator< ( area_t const& a, area_t const& b ) { return( a->first < b->first ); }; int main( int argc, char* argv[] ) { int row_num; area_t it; set< pair< int, area_t > > queue; queue.insert( make_pair( row_num, it ) ); // does not compile }; One way to fix it is moving the definition of operator< to namespace std (I know, you are not supposed to do it.) namespace std { bool operator< ( area_t const& a, area_t const& b ) { return( a->first < b->first ); }; }; Another obvious solution is defining operator< for pair< int, area_t > but I'd like to avoid that and be able to define the operator only for the one element of the pair where it is not defined.

    Read the article
