Search Results

Search found 5202 results on 209 pages for 'char'.


  • how to find the height of html content

    - by ganapati hegde
    Hi, I am trying to display 10 HTML pages as a single document with 10 chapters. I am using the WebKitGTK+ engine to render the HTML pages. I get the content of each HTML file and concatenate them all into a single 'char *content', then load everything as one document with webkit_web_view_load_html_string(WebKitWebView *web_view, const gchar *content, const gchar *base_uri). Now I am trying to build a table of contents (TOC) for this document, listing all the chapter names in order. My requirement is that when I click a particular chapter in the TOC, the document should scroll to that point. For that I need to know the height of each chapter (the height of each HTML file's content). The point to note here is that the width of the document can change, and since HTML is reflowable, as width increases height decreases and vice versa. So each time the width changes I need to find the current height of each HTML page (chapter) and from that calculate the distance to scroll. How can I find out the height of an HTML page's content dynamically? Thank you.
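
    A possible approach (a sketch, not from the original post): if each chapter's HTML is wrapped in a div with a known id before concatenation, the engine can be asked for that div's current offsetTop via injected JavaScript, which tracks reflow automatically. The wrapper ids and the helper below are assumptions for illustration:

        #include <webkit/webkit.h>

        /* Hypothetical helper: scroll to the chapter whose wrapper div
         * has id "chN". Assumes each concatenated file was wrapped in
         * <div id="ch1">...</div>, <div id="ch2">...</div>, etc. before
         * being passed to webkit_web_view_load_html_string(). */
        static void scroll_to_chapter(WebKitWebView *web_view, int chapter)
        {
            gchar *script = g_strdup_printf(
                "window.scrollTo(0, document.getElementById('ch%d').offsetTop);",
                chapter);
            /* offsetTop is recomputed by the engine on every reflow, so
             * the scroll target stays correct when the view width (and
             * therefore the chapter heights) change. */
            webkit_web_view_execute_script(web_view, script);
            g_free(script);
        }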

    Read the article

  • ObjC: Alloc instance of Class and performing selectors on it leads to __CFRequireConcreteImplementation

    - by Arakyd
    Hi, I'm new to Objective-C and I'd like to abstract my database access using a model class like this: @interface LectureModel : NSMutableDictionary { } -(NSString*)title; -(NSDate*)begin; ... @end I use the dictionary method setValue:forKey: to store attributes and return them in the getters. Now I want to read these models from an sqlite database, using the Class dynamically. + (NSArray*)findBySQL:(NSString*)sql intoModelClass:(Class)modelClass { NSMutableArray* models = [[[NSMutableArray alloc] init] autorelease]; sqlite3* db = sqlite3_open(...); sqlite3_stmt* result = NULL; sqlite3_prepare_v2(db, [sql UTF8String], -1, &result, NULL); while(sqlite3_step(result) == SQLITE_ROW) { id modelInstance = [[modelClass alloc] init]; for (int i = 0; i < sqlite3_column_count(result); ++i) { NSString* key = [NSString stringWithUTF8String:sqlite3_column_name(result, i)]; NSString* value = [NSString stringWithUTF8String:(const char*)sqlite3_column_text(result, i)]; if([modelInstance respondsToSelector:@selector(setValue:forKey:)]) [modelInstance setValue:value forKey:key]; } [models addObject:modelInstance]; } sqlite3_finalize(result); sqlite3_close(db); return models; } The funny thing is, respondsToSelector: works, but if I try (in the debugger) to step over [modelInstance setValue:value forKey:key], it throws an exception, and the stack trace looks like: #0 0x302ac924 in ___TERMINATING_DUE_TO_UNCAUGHT_EXCEPTION___ #1 0x991d9509 in objc_exception_throw #2 0x302d6e4d in __CFRequireConcreteImplementation #3 0x00024d92 in +[DBManager findBySQL:intoModelClass:] at DBManager.m:114 #4 0x0001ea86 in -[FavoritesViewController initializeTableData:] at FavoritesViewController.m:423 #5 0x0001ee41 in -[FavoritesViewController initializeTableData] at FavoritesViewController.m:540 #6 0x305359da in __NSFireDelayedPerform #7 0x302454a0 in CFRunLoopRunSpecific #8 0x30244628 in CFRunLoopRunInMode #9 0x32044c31 in GSEventRunModal #10 0x32044cf6 in GSEventRun #11 0x309021ee in UIApplicationMain #12 0x00002988 in main at main.m:14 So, what's wrong with this? Presumably I'm doing something really stupid and just don't see it... Many thanks in advance for your answers, Arakyd
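
    For context: NSMutableDictionary is a class cluster, and __CFRequireConcreteImplementation is what Foundation raises when a cluster subclass fails to override the primitive methods (-count, -objectForKey:, -keyEnumerator, -setObject:forKey:, -removeObjectForKey:). A minimal sketch of the usual alternative, wrapping a dictionary by composition instead of subclassing:

        @interface LectureModel : NSObject {
            NSMutableDictionary *attributes; // backing store for the model
        }
        - (NSString *)title;
        @end

        @implementation LectureModel
        - (id)init {
            if ((self = [super init]))
                attributes = [[NSMutableDictionary alloc] init];
            return self;
        }
        - (void)dealloc {
            [attributes release];
            [super dealloc];
        }
        // Forward key-value storage into the backing dictionary so the
        // findBySQL:intoModelClass: loop keeps working unchanged.
        - (void)setValue:(id)value forKey:(NSString *)key {
            [attributes setValue:value forKey:key];
        }
        - (id)valueForKey:(NSString *)key {
            return [attributes valueForKey:key];
        }
        - (NSString *)title {
            return [attributes objectForKey:@"title"];
        }
        @end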

    Read the article

  • Oracle T4CPreparedStatement memory leaks?

    - by Jay
    A little background on the application I am going to talk about in the next few lines: XYZ is a data masking workbench Eclipse RCP application. You give it a source table column and a target table column; it applies a transformation (encryption/shuffling/etc.) and copies the row data from the source table to the target table. Now, when I mask n tables at a time, n threads are launched by this app. Here is the issue: I ran into a production issue on the first rollout of the above app. Unfortunately, I don't have any logs to get to the root cause. However, I tried to run the app in the test region and do a stress test. When I collected .hprof files and ran them through an analyzer (YourKit), I noticed that objects of oracle.jdbc.driver.T4CPreparedStatement were retaining heap. The analysis also tells me that one of my classes is holding a reference to this prepared statement object, and thereby n threads have n such objects. T4CPreparedStatement seemed to have character arrays lastBoundChars and bindChars, each of size char[300000]. So, I researched a bit (Google!), obtained ojdbc6.jar, and tried decompiling T4CPreparedStatement. I see that T4CPreparedStatement extends OraclePreparedStatement, which dynamically manages the array sizes of lastBoundChars and bindChars. So, my questions are: Have you ever run into an issue like this? Do you know the significance of lastBoundChars / bindChars? I am new to profiling, so do you think I am not doing it correctly? (I also ran the hprofs through MAT, and this was the main identified issue, so I don't really think I could be wrong?) I have found something similar on the web here: http://forums.oracle.com/forums/thread.jspa?messageID=2860681 Appreciate your suggestions / advice.
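
    lastBoundChars and bindChars are the driver's bind buffers; they grow to fit the largest value ever bound and live exactly as long as the statement object. So if each worker thread caches a PreparedStatement in a field and never closes it, the retained heap seen in the profiler is expected behavior, not a driver leak. A hedged sketch of the usual remedy (names here are illustrative, not from the app):

        PreparedStatement ps = null;
        try {
            ps = connection.prepareStatement(maskAndCopySql); // hypothetical SQL
            ps.setString(1, maskedValue);
            ps.executeUpdate();
        } finally {
            // Closing releases the T4CPreparedStatement's internal
            // lastBoundChars/bindChars arrays back to the heap.
            if (ps != null) {
                try { ps.close(); } catch (SQLException ignored) { }
            }
        }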

    Read the article

  • string parsing to double fails in C++ (Xcode problem?)

    - by helixed
    Here's a fun one I've been trying to figure out. I have the following program: #include <iostream> #include <string> #include <sstream> using namespace std; int main(int argc, char *argv[]) { string s("5"); istringstream stream(s); double theValue; stream >> theValue; cout << theValue << endl; cout << stream.fail(); } The output is: 0 1 I don't understand why this is failing. Could somebody please tell me what I'm doing wrong? Thanks, helixed EDIT: Okay, sorry to turn this into a double post, but this looks like a problem specific to Xcode. If I compile this in g++, the code works without a problem. Does anybody have an idea why this is happening in Xcode, and how I could possibly fix it? Thanks again, helixed
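
    For what it's worth, here is a workaround sketch that sidesteps the stream facets entirely. (Failures like this under Xcode have been attributed to libstdc++ preprocessor flags such as _GLIBCXX_DEBUG differing between the app and the library; that diagnosis is an assumption here, not confirmed by the post.)

        #include <cstdlib>
        #include <string>

        // Parse with strtod() instead of istringstream's num_get facet.
        bool parse_double(const std::string& s, double& out)
        {
            const char* begin = s.c_str();
            char* end = 0;
            out = std::strtod(begin, &end);
            return end != begin; // true if at least one character was consumed
        }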

    Read the article

  • Instantiating class with custom allocator in shared memory

    - by recipriversexclusion
    I'm pulling my hair out over the following problem: I am following the example given in the boost.interprocess documentation to instantiate, in shared memory, a fixed-size ring buffer class that I wrote. The skeleton constructor for my class is: template<typename ItemType, class Allocator > SharedMemoryBuffer<ItemType, Allocator>::SharedMemoryBuffer( unsigned long capacity ){ m_capacity = capacity; // Create the buffer nodes. m_start_ptr = this->allocator->allocate(); // allocate first buffer node BufferNode* ptr = m_start_ptr; for( int i = 0 ; i < this->capacity()-1; i++ ) { BufferNode* p = this->allocator->allocate(); // allocate a buffer node } } My first question: does this sort of allocation guarantee that the buffer nodes are allocated in contiguous memory locations, i.e. when I try to access the n'th node at address m_start_ptr + n*sizeof(BufferNode) in my Read() method, will it work? If not, what's a better way to keep the nodes, such as creating a linked list? My test harness is the following: // Define an STL compatible allocator of ints that allocates from the managed_shared_memory. // This allocator will allow placing containers in the segment typedef allocator<int, managed_shared_memory::segment_manager> ShmemAllocator; //Alias a vector that uses the previous STL-like allocator so that allocates //its values from the segment typedef SharedMemoryBuffer<int, ShmemAllocator> MyBuf; int main(int argc, char *argv[]) { shared_memory_object::remove("MySharedMemory"); //Create a new segment with given name and size managed_shared_memory segment(create_only, "MySharedMemory", 65536); //Initialize shared memory STL-compatible allocator const ShmemAllocator alloc_inst (segment.get_segment_manager()); //Construct a buffer named "MyBuffer" in shared memory with argument alloc_inst MyBuf *pBuf = segment.construct<MyBuf>("MyBuffer")(100, alloc_inst); } This gives me all kinds of compilation errors related to templates on the last statement. What am I doing wrong?
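
    Two hedged observations that may explain both questions: separate allocate() calls carry no contiguity guarantee, but a single call for all nodes does; and the posted constructor takes only `capacity`, so segment.construct<MyBuf>("MyBuffer")(100, alloc_inst) passes one argument too many, which by itself produces template errors. A sketch of a two-argument constructor (m_allocator is a hypothetical member holding a node-rebound copy of the allocator):

        template<typename ItemType, class Allocator>
        SharedMemoryBuffer<ItemType, Allocator>::SharedMemoryBuffer(
                unsigned long capacity, const Allocator& alloc)
            : m_allocator(alloc), m_capacity(capacity)
        {
            // One allocation of `capacity` nodes is contiguous by
            // definition, so the n'th node really is at m_start_ptr + n.
            // Separate allocate() calls carry no such guarantee.
            m_start_ptr = m_allocator.allocate(m_capacity);
        }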

    Read the article

  • is delete p, where p is a pointer to an array, a memory leak?

    - by Eli
    Following a discussion in a software meeting, I set out to find out whether deleting a dynamically allocated primitive array with plain delete causes a memory leak. I wrote this tiny program and compiled it with Visual Studio 2008 running on Windows XP: #include "stdafx.h" #include "Windows.h" const unsigned long BLOCK_SIZE = 1024*100000; int _tmain() { for (unsigned int i =0; i < 1024*1000; i++) { int* p = new int[1024*100000]; for (int j =0;j<BLOCK_SIZE;j++) p[j]= j % 2; Sleep(1000); delete p; } } I then monitored the memory consumption of my application using Task Manager. Surprisingly, the memory was allocated and freed correctly; allocated memory did not steadily increase as expected. I modified my test program to allocate a non-primitive type array: #include "stdafx.h" #include "Windows.h" struct aStruct { aStruct() : i(1), j(0) {} int i; char j; } NonePrimitive; const unsigned long BLOCK_SIZE = 1024*100000; int _tmain() { for (unsigned int i =0; i < 1024*100000; i++) { aStruct* p = new aStruct[1024*100000]; Sleep(1000); delete p; } } After running for 10 minutes there was no meaningful increase in memory. I compiled the project with warning level 4 and got no warnings. Is it possible that the Visual Studio runtime keeps track of the allocated objects' types, so there is no difference between delete and delete[] in that environment?
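
    For the record, `delete p` on a `new[]` allocation is undefined behaviour rather than a guaranteed leak; for trivially destructible types most CRT builds happen to free the whole block, which matches the observations above. The difference becomes visible with a non-trivial destructor, as in this sketch:

        #include <cstdio>

        struct Noisy {
            ~Noisy() { std::puts("~Noisy"); }
        };

        int main() {
            Noisy* p = new Noisy[3];
            delete[] p; // prints "~Noisy" three times
            // `delete p;` instead would be undefined behaviour: at most
            // one destructor runs, and the heap bookkeeping may be
            // corrupted.
            return 0;
        }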

    Read the article

  • Unknown Exception on trying to initialize the web service stub created by Axis C++

    - by Harsha Reddy
    Hi, I am trying out the sample calculator program that ships with Axis C++. I am mainly interested in the client side, so I used the WSDL to create the stubs, and my main is pretty much the same as in the sample. However, on executing the call Calculator ws(endpoint) I get an unknown exception: "First-chance exception at 0x7c0024b9 in CalculatorClient.exe: 0xC0000005: Access violation reading location 0x00000000. First-chance exception at 0x7c812afb in CalculatorClient.exe: Microsoft C++ exception: [rethrow] at memory location 0x00000000.. Unhandled exception at 0x7c0024b9 in CalculatorClient.exe: 0xC0000005: Access violation reading location 0x00000000." The code raising the exception is: Calculator::Calculator(const char* pchEndpointUri, AXIS_PROTOCOL_TYPE eProtocol) :Stub(pchEndpointUri, eProtocol) { } I had earlier tried to run a web service using Axis C++ but received the same error. At that time my web service was a Java WS on WAS. This time I tried the calculator client without any server hosting the WS, as I just wanted to check whether the client could initialize. Is the problem caused by the web service not being hosted on Apache in C++ (though I highly doubt it)? Any help would be appreciated. Thanks, Harsha

    Read the article

  • Weird switch behavior in .NET 4

    - by RaYell
    I have a problem understanding what causes the compilation error in the code below: static class Program { static void Main() { dynamic x = ""; var test = foo(x); if (test == "test") { Console.WriteLine(test); } switch (test) { case "test": Console.WriteLine(test); break; } } private static string foo(object item) { return "bar"; } } The error I get is on the switch (test) line: A switch expression or case label must be a bool, char, string, integral, enum, or corresponding nullable type. IntelliSense shows me that the foo call will be resolved at runtime, which is fine because I'm using a dynamic type as a param. However, I don't understand why the if condition compiles fine when the switch doesn't. The code above is just a simplified version of what I have in my application (VSTO), which appeared after migrating the app from VSTO3 to VSTO4, when one method in VSTO was changed to return dynamic values instead of object. Can anyone explain what's happening? I know how to resolve it, but I'd like to understand it.
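
    The short explanation: because x is dynamic, foo(x) is bound at run time, so test is statically typed dynamic. An if can compile into a runtime-dispatched == operation, but a switch governing expression must have a compile-time type from the allowed list, hence the error. A minimal sketch of one fix, pinning the static type:

        string test = (string)foo(x);   // or: var test = foo((object)x);
        switch (test)
        {
            case "test":
                Console.WriteLine(test);
                break;
        }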

    Read the article

  • Why is Swing Parser's handleText not handling nested tags?

    - by Jim P
    I need to transform some HTML text that has nested tags to decorate 'matches' with a CSS attribute to highlight them (like Firefox search). I can't just do a simple replace (think of a user searching for "img", for example), so I'm trying to do the replacement only within the body text (not on tag attributes). I have a pretty straightforward HTML parser that I think should do this: final Pattern pat = Pattern.compile(srch, Pattern.CASE_INSENSITIVE); Matcher m = pat.matcher(output); if (m.find()) { final StringBuffer ret = new StringBuffer(output.length()+100); lastPos=0; try { new ParserDelegator().parse(new StringReader(output.toString()), new HTMLEditorKit.ParserCallback () { public void handleText(char[] data, int pos) { ret.append(output.subSequence(lastPos, pos)); Matcher m = pat.matcher(new String(data)); ret.append(m.replaceAll("<span class=\"search\">$0</span>")); lastPos=pos+data.length; } }, false); ret.append(output.subSequence(lastPos, output.length())); return ret; } catch (Exception e) { return output; } } return output; My problem is, when I debug this, handleText is getting called with text that includes tags! It's like it's only going one level deep. Anyone know why? Is there some simple thing I need to do to HTMLParser (haven't used it much) to enable 'proper' behavior with nested tags? PS - I figured it out myself - see answer below. The short answer is, it works fine if you pass it HTML, not pre-escaped HTML. Doh! Hope this helps someone else. <span>example with <a href="#">nested</a> <p>more nesting</p> </span> <!-- all this gets thrown together -->

    Read the article

  • OpenGL triangle instead of square

    - by Dave
    I'm trying to create a spinning square inside of Xcode using OpenGL, but instead, for some reason, I have a spinning triangle. I'm doing this inside of SIO2 but I don't think that is the problem. Here is the triangle: http://img220.imageshack.us/img220/7051/snapzproxscreensnapz001.png Here is my code: void templateRender( void ) { const GLfloat squareVertices[] ={ 100.0f, -100.0f, 100.0f, -100.0f, -100.0f, 100.0f, 100.0f, 100.0f, }; const unsigned char squareColors[] = { 255, 255, 0, 255, 0, 255, 255, 255, 0, 0, 0, 0, 255, 0, 255, 255, }; glMatrixMode( GL_MODELVIEW ); glLoadIdentity(); glClear( GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT ); // Your rendering code here... sio2WindowEnter2D( sio2->_SIO2window, 0.0f, 1.0f ); { glVertexPointer( 2, GL_FLOAT, 0, squareVertices ); glEnableClientState(GL_VERTEX_ARRAY); //set up the color array glColorPointer( 4, GL_UNSIGNED_BYTE, 0, squareColors ); glEnableClientState( GL_COLOR_ARRAY ); glTranslatef( sio2->_SIO2window->scl->x * 0.5f, sio2->_SIO2window->scl->y * 0.5f, 0.0f ); static float rotz = 0.0f; glRotatef( rotz, 0.0f, 0.0f, 1.0f ); rotz += 90.0f * sio2->_SIO2window->d_time; glDrawArrays(GL_TRIANGLE_STRIP, 0, 4); } sio2WindowLeave2D(); }
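
    One likely culprit, offered as a guess: the first two entries of squareVertices are the same point (100, -100), so the strip degenerates into a single visible triangle. A vertex order that covers the full square with GL_TRIANGLE_STRIP:

        const GLfloat squareVertices[] = {
            -100.0f, -100.0f,   /* bottom left  */
             100.0f, -100.0f,   /* bottom right */
            -100.0f,  100.0f,   /* top left     */
             100.0f,  100.0f,   /* top right    */
        };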

    Read the article

  • iPhone app crashes on start-up, in stack-trace only messages from built-in frameworks

    - by Aleksejs
    My app sometimes crashes at start-up. The stack trace contains only messages from built-in frameworks. An excerpt from a crash log: OS Version: iPhone OS 3.1.3 (7E18) Report Version: 104 Exception Type: EXC_BAD_ACCESS (SIGBUS) Exception Codes: KERN_PROTECTION_FAILURE at 0x000e6000 Crashed Thread: 0 Thread 0 Crashed: 0 CoreGraphics 0x339305d8 argb32_image_mark_RGB32 + 704 1 CoreGraphics 0x338dbcd4 argb32_image + 1640 2 libRIP.A.dylib 0x320d99f0 ripl_Mark 3 libRIP.A.dylib 0x320db3ac ripl_BltImage 4 libRIP.A.dylib 0x320cc2a0 ripc_RenderImage 5 libRIP.A.dylib 0x320d5238 ripc_DrawImage 6 CoreGraphics 0x338d7da4 CGContextDelegateDrawImage + 80 7 CoreGraphics 0x338d7d14 CGContextDrawImage + 364 8 UIKit 0x324ee68c compositeCGImageRefInRect 9 UIKit 0x324ee564 -[UIImage(UIImageDeprecated) compositeToRect:fromRect:operation:fraction:] 10 UIKit 0x32556f44 -[UINavigationBar drawBackButtonBackgroundInRect:withStyle:pressed:] 11 UIKit 0x32556b00 -[UINavigationItemButtonView drawRect:] 12 UIKit 0x324ecbc4 -[UIView(CALayerDelegate) drawLayer:inContext:] 13 QuartzCore 0x311cacfc -[CALayer drawInContext:] 14 QuartzCore 0x311cab00 backing_callback 15 QuartzCore 0x311ca388 CABackingStoreUpdate 16 QuartzCore 0x311c978c -[CALayer _display] 17 QuartzCore 0x311c941c -[CALayer display] 18 QuartzCore 0x311c9368 CALayerDisplayIfNeeded 19 QuartzCore 0x311c8848 CA::Context::commit_transaction(CA::Transaction*) 20 QuartzCore 0x311c846c CA::Transaction::commit() 21 QuartzCore 0x311c8318 +[CATransaction flush] 22 UIKit 0x324f5e94 -[UIApplication _reportAppLaunchFinished] 23 UIKit 0x324a7a80 -[UIApplication _runWithURL:sourceBundleID:] 24 UIKit 0x324f8df8 -[UIApplication handleEvent:withNewEvent:] 25 UIKit 0x324f8634 -[UIApplication sendEvent:] 26 UIKit 0x324f808c _UIApplicationHandleEvent 27 GraphicsServices 0x335067dc PurpleEventCallback 28 CoreFoundation 0x323f5524 CFRunLoopRunSpecific 29 CoreFoundation 0x323f4c18 CFRunLoopRunInMode 30 UIKit 0x324a6c00 -[UIApplication _run] 31 UIKit 0x324a5228 UIApplicationMain 32 Journaler 0x000029ac main (main.m:14) 33 Journaler 0x00002948 start + 44 File main.m is as simple as possible: #import <UIKit/UIKit.h> int main(int argc, char *argv[]) { NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init]; int retVal = UIApplicationMain(argc, argv, nil, nil); // line 14 [pool release]; return retVal; } What may cause the app to crash?

    Read the article

  • Java Access Token PKCS11 Not found Provider

    - by oracleruiz
    Hello, I'm trying to access the keystore of my smartcard in Java, using the following code. I'm using the PKCS11 implementation of OpenSC, http://www.opensc-project.org/opensc File windows.cnf: name=dnie library=C:\WINDOWS\system32\opensc-pkcs11.dll Java code: String configName = "windows.cnf" String PIN = "####"; Provider p = new sun.security.pkcs11.SunPKCS11(configName); Security.addProvider(p); KeyStore keyStore = KeyStore.getInstance("PKCS11", "SunPKCS11-dnie"); =)(= char[] pin = PIN.toCharArray(); keyStore.load(null, pin); When execution reaches the line marked with =)(= it throws me the following exception: java.security.KeyStoreException: PKCS11 not found at java.security.KeyStore.getInstance(KeyStore.java:635) at ObtenerDatos.LeerDatos(ObtenerDatos.java:52) at ObtenerDatos.obtenerNombre(ObtenerDatos.java:19) at main.main(main.java:27) Caused by: java.security.NoSuchAlgorithmException: no such algorithm: PKCS11 for provider SunPKCS11-dnie at sun.security.jca.GetInstance.getService(GetInstance.java:70) at sun.security.jca.GetInstance.getInstance(GetInstance.java:190) at java.security.Security.getImpl(Security.java:662) at java.security.KeyStore.getInstance(KeyStore.java:632) I think the problem is "SunPKCS11-dnie", but I don't know what to put there. I have tried a lot of combinations... Can anyone help me?
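
    A hedged suggestion: passing the freshly constructed Provider object to getInstance, instead of looking it up by a name string, removes any dependency on how the name in the config file maps to the registered provider name:

        Provider p = new sun.security.pkcs11.SunPKCS11(configName);
        Security.addProvider(p);
        // Look the keystore up via the provider instance,
        // not the string "SunPKCS11-dnie".
        KeyStore keyStore = KeyStore.getInstance("PKCS11", p);
        keyStore.load(null, PIN.toCharArray());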

    Read the article

  • sqlite3 no insert is done after --> insert into --> SQLITE_DONE

    - by Fra
    Hi all, I'm trying to insert some data into a table using sqlite3 on an iPhone... All the error codes I get from the various steps indicate that the operation should have been successful, but in fact the table remains empty... I think I'm missing something... Here is the code: sqlite3 *database = nil; NSString *dbPath = [[[ NSBundle mainBundle ] resourcePath ] stringByAppendingPathComponent:@"placemarks.sql"]; if(sqlite3_open([dbPath UTF8String], &database) == SQLITE_OK){ sqlite3_stmt *insert_statement = nil; //where int pk, varchar name,varchar description, blob picture static char *sql = "INSERT INTO placemarks (pk, name, description, picture) VALUES(99,'nnnooooo','dddooooo', '');"; if (sqlite3_prepare_v2(database, sql, -1, &insert_statement, NULL) != SQLITE_OK) { NSAssert1(0, @"Error: failed to prepare statement with message '%s'.", sqlite3_errmsg(database)); } int success = sqlite3_step(insert_statement); int finalized = sqlite3_finalize(insert_statement); NSLog(@"success: %i finalized: %i",success, finalized); NSAssert1(101, @"Error: failed to insert into the database with message '%s'.", sqlite3_errmsg(database)); sqlite3_step returns 101, which is SQLITE_DONE, so it should be OK. If I execute the SQL statement on the command line it works properly... Anyone have an idea? Could it be that there's a problem writing placemarks.sql because it's in the resources folder? Rgds, Fra
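
    The closing guess is a good one: on the device the app bundle is read-only, and a common cause of phantom inserts is writing to a database opened from the resources folder. The usual pattern, sketched below, is to copy the database to Documents on first run and open that writable copy instead:

        NSString *docsDir = [NSSearchPathForDirectoriesInDomains(
            NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
        NSString *writablePath =
            [docsDir stringByAppendingPathComponent:@"placemarks.sql"];
        NSFileManager *fm = [NSFileManager defaultManager];
        if (![fm fileExistsAtPath:writablePath]) {
            // First launch: copy the seed database out of the bundle.
            NSString *bundlePath = [[[NSBundle mainBundle] resourcePath]
                stringByAppendingPathComponent:@"placemarks.sql"];
            [fm copyItemAtPath:bundlePath toPath:writablePath error:NULL];
        }
        if (sqlite3_open([writablePath UTF8String], &database) == SQLITE_OK) {
            // ...prepare, step, and finalize the INSERT exactly as before...
        }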

    Read the article

  • Simple Java calculator

    - by Kevin Duke
    Firstly this is not a homework question. I am practicing my knowledge on java. I figured a good way to do this is to write a simple program without help. Unfortunately, my compiler is telling me errors I don't know how to fix. Without changing much logic and code, could someone kindly point out where some of my errors are? Thanks import java.lang.*; import java.util.*; public class Calculator { private int solution; private int x; private int y; private char operators; public Calculator() { solution = 0; Scanner operators = new Scanner(System.in); Scanner operands = new Scanner(System.in); } public int addition(int x, int y) { return x + y; } public int subtraction(int x, int y) { return x - y; } public int multiplication(int x, int y) { return x * y; } public int division(int x, int y) { solution = x / y; return solution; } public void main (String[] args) { System.out.println("What operation? ('+', '-', '*', '/')"); System.out.println("Insert 2 numbers to be subtracted"); System.out.println("operand 1: "); x = operands; System.out.println("operand 2: "); y = operands.next(); switch(operators) { case('+'): addition(operands); operands.next(); break; case('-'): subtraction(operands); operands.next(); break; case('*'): multiplication(operands); operands.next(); break; case('/'): division(operands); operands.next(); break; } } }
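
    Without changing the four arithmetic methods, one way the main method could look once it compiles is sketched below. The original assigns a Scanner to an int (x = operands;) and switches on an operator that is never read, and main must be static. The sketch is illustrative, not the only fix:

        public static void main(String[] args) {
            Scanner in = new Scanner(System.in);      // one Scanner is enough
            Calculator calc = new Calculator();
            System.out.println("What operation? ('+', '-', '*', '/')");
            char op = in.next().charAt(0);            // actually read the operator
            System.out.print("operand 1: ");
            int x = in.nextInt();                     // was: x = operands;
            System.out.print("operand 2: ");
            int y = in.nextInt();                     // was: y = operands.next();
            int result = 0;
            switch (op) {
                case '+': result = calc.addition(x, y); break;
                case '-': result = calc.subtraction(x, y); break;
                case '*': result = calc.multiplication(x, y); break;
                case '/': result = calc.division(x, y); break;
            }
            System.out.println("Result: " + result);
        }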

    Read the article

  • How do I get the URI of an HTTP packet with winpcap?

    - by Gtker
    Based on this article I can get all incoming packets. /* Callback function invoked by libpcap for every incoming packet */ void packet_handler(u_char *param, const struct pcap_pkthdr *header, const u_char *pkt_data) { struct tm *ltime; char timestr[16]; ip_header *ih; udp_header *uh; u_int ip_len; u_short sport,dport; time_t local_tv_sec; /* convert the timestamp to readable format */ local_tv_sec = header->ts.tv_sec; ltime=localtime(&local_tv_sec); strftime( timestr, sizeof timestr, "%H:%M:%S", ltime); /* print timestamp and length of the packet */ printf("%s.%.6d len:%d ", timestr, header->ts.tv_usec, header->len); /* retrieve the position of the ip header */ ih = (ip_header *) (pkt_data + 14); //length of ethernet header /* retrieve the position of the udp header */ ip_len = (ih->ver_ihl & 0xf) * 4; uh = (udp_header *) ((u_char*)ih + ip_len); /* convert from network byte order to host byte order */ sport = ntohs( uh->sport ); dport = ntohs( uh->dport ); /* print ip addresses and udp ports */ printf("%d.%d.%d.%d.%d -> %d.%d.%d.%d.%d\n", ih->saddr.byte1, ih->saddr.byte2, ih->saddr.byte3, ih->saddr.byte4, sport, ih->daddr.byte1, ih->daddr.byte2, ih->daddr.byte3, ih->daddr.byte4, dport); } But how do I extract the URI information in packet_handler?
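
    The URI is not a packet header field; it lives in the HTTP request line inside the TCP payload, so the handler has to step past the Ethernet, IP, and TCP headers first (the sample above parses UDP, but HTTP rides on TCP). A rough sketch, where tcp_header is a hypothetical struct whose data_offset byte carries the header length in its high nibble:

        /* inside packet_handler, after locating ih and ip_len as above */
        tcp_header *th = (tcp_header *)((u_char *)ih + ip_len);
        u_int tcp_len = (th->data_offset >> 4) * 4; /* TCP header bytes */
        const u_char *payload = (const u_char *)th + tcp_len;

        /* "GET /index.html HTTP/1.1" -- the URI is the second token */
        if (header->caplen > (u_int)(payload - pkt_data) + 4 &&
            memcmp(payload, "GET ", 4) == 0)
        {
            const u_char *uri = payload + 4;
            const u_char *end = (const u_char *)memchr(
                uri, ' ', header->caplen - (uri - pkt_data));
            if (end)
                printf("URI: %.*s\n", (int)(end - uri), uri);
        }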

    Read the article

  • how do I print the values from NSArray objects in CGContextShowTextAtPoint()?

    - by Rajendra Bhole
    Hi, I am developing an application in which I want to print values at intervals along a line. For that I use an NSArray with multiple objects and pass those objects into the CGContextShowTextAtPoint() method. The code is: CGContextMoveToPoint(ctx, 30.0, 200.0); CGContextAddLineToPoint(ctx, 30.0, 440.0); NSArray *hoursInDays = [[NSArray alloc] initWithObjects:@"0",@"1",@"3",@"4",@"5",@"6",@"7",@"8",@"9",@"10",@"11",@"12", nil]; int intHoursInDays = 0; for(float y = 400.0; y >= 200.0; y-=18, intHoursInDays++) { CGContextSetRGBStrokeColor(ctx, 2.0, 2.0, 2.0, 1.0); CGContextMoveToPoint(ctx, 28, y); CGContextAddLineToPoint(ctx, 32, y); CGContextSelectFont(ctx, "Helvetica", 12.0, kCGEncodingMacRoman); CGContextSetTextDrawingMode(ctx, kCGTextFill); CGContextSetRGBFillColor(ctx, 0, 255, 255, 1); CGAffineTransform xform = CGAffineTransformMake( 1.0, 0.0, 0.0, -1.0, 0.0, 0.0); CGContextSetTextMatrix(ctx, xform); NSString *arrayDataForYAxis = [hoursInDays objectAtIndex:intHoursInDays]; CGContextShowTextAtPoint(ctx, 10.0, y+20, [arrayDataForYAxis UTF8String], strlen((char *)arrayDataForYAxis)); CGContextStrokePath(ctx); } The code executes, but the output it gives is {oo, 1o, 2o, ..., 11}; I want the output {0, 1, 2, 3, ..., 11, 12}. It gives me one extra character "o" after each single digit. I think the problem is in the type casting of the 5th parameter of CGContextShowTextAtPoint(). How do I resolve the type-casting problem so the NSArray objects print correctly in CGContextShowTextAtPoint()?
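
    The stray character most likely comes from the last argument: strlen((char *)arrayDataForYAxis) measures the NSString object's raw memory rather than its text. Measuring the same UTF-8 bytes that are actually drawn should give {0, 1, 2, ..., 12}:

        NSString *arrayDataForYAxis = [hoursInDays objectAtIndex:intHoursInDays];
        const char *label = [arrayDataForYAxis UTF8String];
        // Pass the same bytes to both the text and the length arguments.
        CGContextShowTextAtPoint(ctx, 10.0, y + 20, label, strlen(label));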

    Read the article

  • C# Error reading two dates from a binary file

    - by Jamie
    Hi all, When reading two dates from a binary file I'm seeing the error below: "The output char buffer is too small to contain the decoded characters, encoding 'Unicode (UTF-8)' fallback 'System.Text.DecoderReplacementFallback'. Parameter name: chars" My code is below: static DateTime[] ReadDates() { System.IO.FileStream appData = new System.IO.FileStream( appDataFile, System.IO.FileMode.Open, System.IO.FileAccess.Read); List<DateTime> result = new List<DateTime>(); using (System.IO.BinaryReader br = new System.IO.BinaryReader(appData)) { while (br.PeekChar() > 0) { result.Add(new DateTime(br.ReadInt64())); } br.Close(); } return result.ToArray(); } static void WriteDates(IEnumerable<DateTime> dates) { System.IO.FileStream appData = new System.IO.FileStream( appDataFile, System.IO.FileMode.Create, System.IO.FileAccess.Write); List<DateTime> result = new List<DateTime>(); using (System.IO.BinaryWriter bw = new System.IO.BinaryWriter(appData)) { foreach (DateTime date in dates) bw.Write(date.Ticks); bw.Close(); } } What could be the cause? Thanks
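
    A plausible cause: PeekChar() decodes the next bytes as UTF-8 characters, and arbitrary Int64 tick values often are not valid UTF-8, which produces exactly this decoder error. Comparing stream position against length avoids decoding entirely, as in this sketch of the read loop:

        using (var br = new System.IO.BinaryReader(appData))
        {
            // Loop on raw position instead of PeekChar(), which tries
            // to decode binary tick data as text.
            while (br.BaseStream.Position < br.BaseStream.Length)
                result.Add(new DateTime(br.ReadInt64()));
        }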

    Read the article

  • Bitrate Switching in xml playlist not loading video

    - by James Williams
    Using JW Player ver 5.4 and the JW embedder. Plugins are grid by Dabbler and fbit (Facebook). Bitrate switching is not working. It works fine for one video with the HTML5 video tag; with more than one video, it only shows the first video's picture. It works fine with no bitrate switching. Code - HTML5 <script type="text/javascript" src="jwplayer/jwplayer.js"></script> <video id="container"></div> <script type="text/javascript"> jwplayer("container").setup({ flashplayer: "jwplayer/player.swf", streamer: "rtmp://server/location/", playlistfile: "playlists/playlist.xml", plugins: { grid: { rows: 4, distance: 60, horizontalmargin: 75, verticalmargin: 75 }, fbit: { link: "http://www.domain.com" } }, height: 375, width: 850, dock: true }); </script> XML - ATOM/MEDIA xmlns:jwplayer='http://developer.longtailvideo.com/trac/wiki/FlashFormats' Demostration Playlist Video 1 rtmp rtmp://server/location/ Video 2 rtmp rtmp://server/location/ I have tried it with both video and div tags for the container. The div tag just shows a blank video area and a null exception at line 1, char 1863, which is probably the jwplayer.js file. The XML is larger than this; this is just to give you a brief idea of my code and XML. I have searched for over 6 hrs on both LongTail and search engines. Thank you in advance.

    Read the article

  • Convert function to read from string instead of file in C

    - by Dusty
    I've been tasked with updating a function which currently reads in a configuration file from disk and populates a structure: static int LoadFromFile(FILE *Stream, ConfigStructure *cs) { int tempInt; ... if ( fscanf( Stream, "Version: %d\n",&tempInt) != 1 ) { printf("Unable to read version number\n"); return 0; } cs->Version = tempInt; ... } to one which allows us to bypass writing the configuration to disk and instead pass it directly in memory, roughly equivalent to this: static int LoadFromString(char *Stream, ConfigStructure *cs) A few things to note: The current LoadFromFile function is incredibly dense and complex, reading dozens of versions of the config file in a backward compatible manner, which makes duplication of the overall logic quite a pain. The functions that generate the config file and those that read it originate in totally different parts of the old system and therefore don't share any data structures so I can't pass those directly. I could potentially write a wrapper, but again, it would need to handle any structure passed in in a backwards compatible manner. I'm tempted to just pass the file as is in as a string (as in the prototype above) and convert all the fscanf's to sscanf's but then I have to handle incrementing the pointer along (and potentially dealing with buffer overrun errors) manually. This has to remain in C, so no C++ functionality like streams can help here Am I missing a better option? Is there some way to create a FILE * that actually just points to a location in memory instead of on disk? Any pointers, suggestions or other help is greatly appreciated.
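
    One low-churn option, assuming the target platform is POSIX: fmemopen() wraps a memory buffer in a FILE*, so the dense LoadFromFile logic can be reused verbatim with no fscanf-to-sscanf conversion (Windows would need a different shim). A sketch:

        #include <stdio.h>
        #include <string.h>

        static int LoadFromString(char *buffer, ConfigStructure *cs)
        {
            /* Present the in-memory config as a read-only stream. */
            FILE *stream = fmemopen(buffer, strlen(buffer), "r");
            if (stream == NULL)
                return 0;
            int ret = LoadFromFile(stream, cs); /* existing parser, unchanged */
            fclose(stream);
            return ret;
        }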

    Read the article

  • Help me find an appropriate ruby/python parser generator

    - by Geo
    The first parser generator I've worked with was Parse::RecDescent, and the guides/tutorials available for it were great, but its most useful feature was its debugging tools, specifically the tracing capabilities (activated by setting $RD_TRACE to 1). I am looking for a parser generator that can help you debug its rules. The thing is, it has to be written in Python or Ruby, and have a verbose mode/trace mode or very helpful debugging techniques. Does anyone know such a parser generator? EDIT: when I said debugging, I wasn't referring to debugging Python or Ruby. I was referring to debugging the parser generator: seeing what it's doing at every step, every char it's reading, the rules it's trying to match. Hope you get the point. BOUNTY EDIT: to win the bounty, please show a parser generator framework and illustrate some of its debugging features. I repeat, I'm not interested in pdb, but in the parser's debugging framework. Also, please don't mention treetop. I'm not interested in it.

    Read the article

  • jQuery and XHTML layout problems in ie7

    - by abysslogic
    Hi there, I am back again with more layout problems on my up-and-coming website. I am able to achieve the proper animation, positioning and results with my layout/splash on every modern browser (excluding IE7 or older). I have an image in the center of the page that is text-align: center'd and pushed to the vertical center by a div (#SPLASH_HEAD) set to 50% on the top half of the page. The loading animation changes the height of #SPLASH_HEAD to 0px to drag the image to the top (and then do other things). In IE7 (or compatibility mode), it appears that there is an error in jquery-1.4.2.min.js, line 116 char 165 (which I don't think has anything to do with the actual jQuery file itself). The splash is not centered either vertically (#SPLASH_HEAD does not register at 50% of window height) or horizontally with margin-left. Also, none of the other elements are hidden properly (with .hide()), as IE7 does not appear to be loading all of my jQuery/JavaScript. Here's a link: www.voidsync.com/test (it would be easier to view the source there). Thanks!

    Read the article

  • VERY simple C program won't compile with VC 64

    - by Paperflyer
    Here is a very simple C program: #include <stdio.h> int main (int argc, char *argv[]) { printf("sizeof(short) = %d\n",(int)sizeof(short)); printf("sizeof(int) = %d\n",(int)sizeof(int)); printf("sizeof(long) = %d\n",(int)sizeof(long)); printf("sizeof(long long) = %d\n",(int)sizeof(long long)); printf("sizeof(float) = %d\n",(int)sizeof(float)); printf("sizeof(double) = %d\n",(int)sizeof(double)); return 0; } While it compiles fine on Win32 (command line: cl main.c), it does not when using the Win64 compiler ("c:\Program Files(x86)\Microsoft Visual Studio 9.0\VC\bin\amd64\cl.exe" main.c). Specifically, it says "error LNK2019: unresolved external symbol printf referenced in function main". As far as I understand this, it cannot link to printf, right? Obviously, I have the Microsoft Visual C++ 2008 compiler (Standard enu) installed in both x86 and x64 flavors, and am using the 64-bit flavor of Windows (7). What is the problem here?
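
    A likely explanation, offered tentatively: invoking the amd64 cl.exe directly leaves the LIB environment variable pointing at the 32-bit CRT libraries, so the linker never finds the x64 printf. Initializing the x64 environment first usually resolves it (paths per VS 2008):

        rem Set up the x64 toolchain environment, then compile:
        "C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\vcvarsall.bat" amd64
        cl main.c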

    Read the article

  • Non-blocking TCP buffer issues.

    - by Poni
    Hi! I think I'm in trouble. I have two TCP apps connected to each other which use winsock I/O completion ports to send/receive data (non-blocking sockets). Everything works just fine until there's a data transfer burst: then the sender starts sending incorrect/malformed data. I allocate the buffers I'm sending on the stack, and if I understand correctly, that's the wrong thing to do, because these buffers should remain as I sent them until I get the "write complete" notification from IOCP. Take this for example: void some_function() { char cBuff[1024]; // filling cBuff with some data WSASend(...); // sending cBuff, non-blocking mode // filling cBuff with other data WSASend(...); // again, sending cBuff // ..... and so forth! } If I understand correctly, each of these WSASend() calls should have its own unique buffer, and that buffer can be reused only when the send completes. Correct? Now, what strategies can I implement to maintain a big pool of such buffers? How should I handle them, and how can I avoid a performance penalty? And if I am to use such buffers, that means I must copy the data to be sent from the source buffer into a temporary one, so I'd set SO_SNDBUF on each socket to zero so the system will not re-copy what I already copied. Are you with me? Please let me know if I wasn't clear.
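
    Correct: each overlapped WSASend needs a buffer that stays untouched until its completion arrives. A common pattern, sketched here with assumed names, is a per-operation context that owns both the OVERLAPPED and the bytes, freed only in the completion handler:

        #include <winsock2.h>
        #include <stdlib.h>
        #include <string.h>

        typedef struct SendContext {
            WSAOVERLAPPED overlapped; /* first member: the completion's
                                         LPOVERLAPPED is also the context */
            WSABUF wsaBuf;
            char data[1024];          /* private copy of the outgoing bytes */
        } SendContext;

        int queue_send(SOCKET s, const char *src, unsigned long len)
        {
            SendContext *ctx = (SendContext *)calloc(1, sizeof *ctx);
            if (ctx == NULL)
                return -1;
            if (len > sizeof ctx->data) {
                free(ctx);
                return -1;
            }
            memcpy(ctx->data, src, len);   /* not the caller's stack buffer */
            ctx->wsaBuf.buf = ctx->data;
            ctx->wsaBuf.len = len;
            return WSASend(s, &ctx->wsaBuf, 1, NULL, 0,
                           &ctx->overlapped, NULL);
            /* free(ctx) when the IOCP completion for this send is
               dequeued, never here */
        }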

    Read the article

  • C malloc assertion help

    - by Chris
    I am implementing a divide-and-conquer polynomial algorithm so I can benchmark it against an OpenCL implementation, but I can't seem to get malloc to work. When I run the program it allocates a bunch of stuff, checks some things, then sends size/2 to the algorithm. Then when I hit the malloc line again it spits out this: malloc.c:3096: sYSMALLOc: Assertion `(old_top == (((mbinptr) (((char *) &((av)->bins[((1) - 1) * 2])) - __builtin_offsetof (struct malloc_chunk, fd)))) && old_size == 0) || ((unsigned long) (old_size) >= (unsigned long)((((__builtin_offsetof (struct malloc_chunk, fd_nextsize))+((2 * (sizeof(size_t))) - 1)) & ~((2 * (sizeof(size_t))) - 1))) && ((old_top)->size & 0x1) && ((unsigned long)old_end & pagemask) == 0)' failed. Aborted The line in question is: int *out, .....other vars....; out = (int *)malloc(sizeof(int) * size * 2); I have checked size with fprintf and it is a positive int (usually 50 at that point). I have tried calling malloc with a plain number as well and I still get the error. I'm just stumped at what's going on, and nothing I have found on Google so far has been much help. Any ideas what's going on? I'm trying to figure out how to compile a newer GCC in case it's a compiler error, but I really doubt it.
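
    That assertion almost always means earlier code overran a heap block and trashed malloc's bookkeeping; the malloc that aborts is the victim, not the offender, so a newer GCC is unlikely to help. A minimal sketch of the corrupting pattern to look for:

        #include <stdlib.h>

        int main(void)
        {
            int *a = malloc(50 * sizeof *a);
            a[50] = 42;           /* one element past the end: stomps the
                                     chunk header of the neighbouring block */
            int *b = malloc(100); /* may now abort inside malloc with the
                                     sYSMALLOc assertion seen above */
            free(b);
            free(a);
            return 0;
        }

    Running the program under valgrind, or with the glibc environment variable MALLOC_CHECK_ set to 2, usually points at the real overrun rather than the victim allocation.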

    Read the article

  • Filtering Wikipedia's XML dump: error on some accents

    - by streetpc
    I'm trying to index Wikipedia dumps. My SAX parser makes Article objects for the XML with only the fields I care about, then sends them to my ArticleSink, which produces Lucene Documents. I want to filter special/meta pages like those prefixed with Category: or Wikipedia:, so I made an array of those prefixes and test the title of each page against this array in my ArticleSink, using article.getTitle.startsWith(prefix). In English, everything works fine: I get a Lucene index with all the pages except those with matching prefixes. In French, the prefixes with no accent also work (i.e. filter the corresponding pages), some of the accented prefixes don't work at all (like Catégorie:), and some work most of the time but fail on some pages (like Wikipédia:), yet I cannot see any difference between the corresponding lines (in less). I can't really inspect all the differences in the file because of its size (5 GB), but it looks like correct UTF-8 XML. If I take a portion of the file using grep or head, the accents are correct (even on the incriminated pages; the <title>Catégorie:something</title> is correctly displayed by grep). On the other hand, when I recreate a wiki XML by tail/head-cutting the original file, the same page (here Catégorie:Rock par ville) gets filtered in the small file, but not in the original… Any ideas? Alternatives I tried: Getting the file (commented lines were tried without success): FileInputStream fis = new FileInputStream(new File(xmlFileName)); //ReaderInputStream ris = ReaderInputStream.forceEncodingInputStream(fis, "UTF-8" ); //(custom function opening the stream, reading it as UFT-8 into a Reader and returning another byte stream) //InputSource is = new InputSource( fis ); is.setEncoding("UTF-8"); parser.parse(fis, handler); Filtered prefixes: ignoredPrefix = new String[] {"Catégorie:", "Modèle:", "Wikipédia:", "Cat\uFFFDgorie:", "Mod\uFFFDle:", "Wikip\uFFFDdia:", //invalid char "Catégorie:", "Modèle:", "Wikipédia:", // UTF-8 as ISO-8859-1 "Image:", "Portail:", "Fichier:", "Aide:", "Projet:"}; // those last always work
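
    One explanation that fits "fails only on some pages": the dump can carry the same accented title in composed (é) or decomposed (e plus combining acute) Unicode form; the two render identically in a terminal yet make startsWith() fail. Normalizing both sides before comparing would rule that out, sketched here with java.text.Normalizer (Java 6+):

        import java.text.Normalizer;

        static boolean hasIgnoredPrefix(String title, String[] prefixes) {
            // Compare in one canonical form so é equals e + U+0301.
            String t = Normalizer.normalize(title, Normalizer.Form.NFC);
            for (String p : prefixes) {
                if (t.startsWith(Normalizer.normalize(p, Normalizer.Form.NFC)))
                    return true;
            }
            return false;
        }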

    Read the article
