Search Results

Search found 1486 results on 60 pages for 'unsigned'.

Page 41/60 | < Previous Page | 37 38 39 40 41 42 43 44 45 46 47 48  | Next Page >

  • Division, Remainders and only Real Numbers Allowed

    - by Senica Gonzalez
    Trying to figure out this pseudo code. The following is assumed: I can only use unsigned and signed integers (or long). Division returns a whole number with no remainder. MOD returns the remainder as a whole number. Fractions and decimals are not handled. INT I = 41828; INT C = 15; INT D = 0; D = (I / 65535) * C; How would you handle a fraction (or decimal value) in this situation? Is there a way to use a negative value to represent the remainder? In this example I/65535 should be 0.638; however, with the limitations, I get 0 with a MOD of 638. How can I then multiply by C to get the correct answer? Hope that makes sense.
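
    A minimal sketch of one way to do this with integers only (shown in C++; the 64-bit intermediate is an assumption, not part of the stated constraints): multiply before dividing so the fractional part of I/65535 is not thrown away, and keep the MOD result if the leftover matters.

        #include <cstdint>
        #include <cstdio>

        int main() {
            // Values from the question.
            uint32_t I = 41828, C = 15, MAX = 65535;

            // Multiply first, then divide: (I * C) / MAX keeps the precision that
            // (I / MAX) * C loses. The 64-bit intermediate guards against overflow.
            uint64_t product = static_cast<uint64_t>(I) * C;
            uint32_t D   = static_cast<uint32_t>(product / MAX);  // 9 (instead of 0)
            uint32_t rem = static_cast<uint32_t>(product % MAX);  // leftover, if needed

            std::printf("D = %u, remainder = %u\n", D, rem);
            return 0;
        }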

    Read the article

  • Delphi Unicode String Type Stored Directly at its Address

    - by Andreas Rejbrand
    I want a string type that is Unicode and that stores the string directly at the address of the variable, as is the case with the (Ansi-only) ShortString type. I mean, if I declare S: ShortString and let S := 'My String', then, at @S, I will find the length of the string (as one byte, so the string cannot contain more than 255 characters) followed by the ANSI-encoded string itself. What I would like is a Unicode variant of this. That is, I want a string type such that, at @S, I will find an unsigned 32-bit integer containing the length of the string in bytes (or in characters, which is half the number of bytes) followed by the Unicode representation of the string. I have tried WideString, UnicodeString, and RawByteString, but they all appear only to store an address at @S, and the actual string somewhere else (I guess this has to do with reference counting and such). I suspect that there is no built-in type to use, and that I have to come up with my own way of storing text the way I want (which actually is fun). Am I right?
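
    A rough C++ analogy of the layout being asked for (not Delphi; the type name is made up for illustration): a 32-bit length stored at the start of the object, followed by the UTF-16 data in place, with no separate heap allocation.

        #include <cstdint>
        #include <cstring>

        // Hypothetical fixed-capacity, length-prefixed wide string: &s points at the
        // length field, and the character data sits immediately after it, inside the
        // object itself rather than behind a pointer.
        template <uint32_t Capacity>
        struct InlineWideString {
            uint32_t length;           // number of char16_t units in use
            char16_t data[Capacity];   // characters stored in place, no indirection

            void assign(const char16_t* src, uint32_t n) {
                length = (n > Capacity) ? Capacity : n;
                std::memcpy(data, src, length * sizeof(char16_t));
            }
        };

        // Usage: InlineWideString<255> s; s.assign(u"My String", 9);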

    Read the article

  • Memory leak in Qt signal and slots

    - by Ajay
    Hello, I am running valgrind on my Qt code, and even on successful exit of the application, I get the following report from valgrind: 8,832 bytes in 92 blocks are still reachable in loss record 12 of 12 at 0x4025390: operator new(unsigned int) (vg_replace_malloc.c:214) ==3339== by 0x4B75F05: QMutex::QMutex(QMutex::RecursionMode) (qmutex.cpp:123) ==3339== by 0x4B77602: QMutexPool::get(void const*) (qmutexpool.cpp:137) ==3339== by 0x4CA0EC2: signalSlotLock(QObject const*) (qobject.cpp:112) ==3339== by 0x4CA3939: QMetaObjectPrivate::connect(QObject const*, int, QObject const*, int, int, int*) (qobject.cpp:2900) ==3339== by 0x4CA5C00: QObject::connect(QObject const*, char const*, QObject const*, char const*, Qt::ConnectionType) (qobject.cpp:2599) I disconnect all signal connections and also delete the objects. The above-mentioned leak increases if I increase the number of signal and slot connections. Can anybody help with this?

    Read the article

  • On-the-fly lossless image compression

    - by geschema
    I have an embedded application where an image scanner sends out a stream of 16-bit pixels that are later assembled to a grayscale image. As I need to both save this data locally and forward it to a network interface, I'd like to compress the data stream to reduce the required storage space and network bandwidth. Is there a simple algorithm that I can use to losslessly compress the pixel data? I first thought of computing the difference between two consecutive pixels and then encoding this difference with a Huffman code. Unfortunately, the pixels are unsigned 16-bit quantities so the difference can be anywhere in the range -65535 .. +65535 which leads to potentially huge codeword lengths. If a few really long codewords occur in a row, I'll run into buffer overflow problems.
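
    One common building block, shown here as a minimal C++ sketch (the zigzag mapping and a Rice/Golomb-style back end are assumptions, not something from the question): delta-code the pixels, then remap the signed differences so small magnitudes become small unsigned values, which keeps typical codewords short and lets an escape code bound the worst case.

        #include <cstdint>
        #include <vector>

        // Map a signed delta to an unsigned value so 0,-1,1,-2,2,... become 0,1,2,3,4,...
        static inline uint32_t zigzag(int32_t d) {
            return (static_cast<uint32_t>(d) << 1) ^ static_cast<uint32_t>(d >> 31);
        }

        // Hypothetical helper: produce the symbols to feed into an entropy coder of
        // your choice (e.g. a Rice/Golomb coder with an escape code to cap length).
        std::vector<uint32_t> delta_encode(const std::vector<uint16_t>& pixels) {
            std::vector<uint32_t> out;
            out.reserve(pixels.size());
            uint16_t prev = 0;
            for (uint16_t p : pixels) {
                int32_t delta = static_cast<int32_t>(p) - static_cast<int32_t>(prev);
                out.push_back(zigzag(delta));
                prev = p;
            }
            return out;
        }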

    Read the article

  • how to select the min value using the HAVING keyword

    - by LOVE_KING
    I have created the table stu_dep_det CREATE TABLE `stu_dept_cs` ( `s_d_id` int(10) unsigned NOT NULL auto_increment, `stu_name` varchar(15) , `gender` varchar(15) , `address` varchar(15),`reg_no` int(10) , `ex_no` varchar(10) , `mark1` varchar(10) , `mark2` varchar(15) , `mark3` varchar(15) , `total` varchar(15) , `avg` double(2,0), PRIMARY KEY (`s_d_id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 ROW_FORMAT=DYNAMIC AUTO_INCREMENT=8 ; then inserted the values INSERT INTO `stu_dept_cs` (`s_d_id`, `stu_name`, `gender`, `address`, `reg_no`, `ex_no`, `mark1`, `mark2`, `mark3`, `total`, `avg`) VALUES (1, 'alex', 'm', 'chennai', 5001, 's1', '70', '90', '95', '255', 85), (2, 'peter', 'm', 'chennai', 5002, 's1', '80', '70', '90', '240', 80), (6, 'parv', 'f', 'mumbai', 5003, 's1', '88', '60', '80', '228', 76), (7, 'basu', 'm', 'kolkatta', 5004, 's1', '85', '95', '56', '236', 79); I want to select the min(avg) using the HAVING keyword, and I have used the following SQL statement: SELECT * FROM stu_dept_cs s having min(avg) Is it correct or not? Please write the correct answer.

    Read the article

  • image scaling with C

    - by sa125
    Hi - I'm trying to read an image file and scale its pixel levels by some absolute factor, multiplying each byte by the scale. I'm not sure I'm doing it right, though - void scale_file(char *infile, char *outfile, float scale) { // open files for reading FILE *infile_p = fopen(infile, 'r'); FILE *outfile_p = fopen(outfile, 'w'); // init data holders char *data; char *scaled_data; // read each byte, scale and write back while ( fread(&data, 1, 1, infile_p) != EOF ) { *scaled_data = (*data) * scale; fwrite(&scaled_data, 1, 1, outfile); } // close files fclose(infile_p); fclose(outfile_p); } What gets me is how to do each byte multiplication (scale is 0-1.0 float) - I'm pretty sure I'm either reading it wrong or missing something big. Also, data is assumed to be unsigned (0-255). Please don't judge my poor code :) thanks
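
    A corrected sketch of the same loop (C++/C, assuming raw 8-bit pixel data): fopen takes string modes rather than characters, fread returns an element count and never EOF, and the byte has to be read into a real object instead of through uninitialized pointers.

        #include <cstdio>

        void scale_file(const char* infile, const char* outfile, float scale) {
            FILE* in  = std::fopen(infile,  "rb");   // string mode, binary
            FILE* out = std::fopen(outfile, "wb");
            if (!in || !out) {
                if (in)  std::fclose(in);
                if (out) std::fclose(out);
                return;                              // could not open one of the files
            }

            unsigned char pixel;                          // a real byte to read into
            while (std::fread(&pixel, 1, 1, in) == 1) {   // compare the count, not EOF
                float scaled = pixel * scale;
                if (scaled > 255.0f) scaled = 255.0f;     // clamp to the 0-255 range
                unsigned char outPixel = static_cast<unsigned char>(scaled);
                std::fwrite(&outPixel, 1, 1, out);
            }

            std::fclose(in);
            std::fclose(out);
        }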

    Read the article

  • MySQL Error #1064

    - by 01010011
    Hi, I keep getting this error: MySQL said: #1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'INSERT INTO books.book(isbn10,isbn13,title,edition,author_f_name,author_m_na' at line 15 with this query: USE books; DROP TABLE IF EXISTS book; CREATE TABLE `books`.`book`( `book_id` INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY, `isbn10` VARCHAR(15) NOT NULL, `isbn13` VARCHAR(15) NOT NULL, `title` VARCHAR(50) NOT NULL, `edition` VARCHAR(50) NOT NULL, `author_f_name` VARCHAR(50) NOT NULL, `author_m_name` VARCHAR(50) NOT NULL, `author_l_name` VARCHAR(50) NOT NULL, `cond` ENUM('as new','very good','good','fair','poor') NOT NULL, `price` DECIMAL(8,2) NOT NULL, `genre` VARCHAR(50) NOT NULL, `quantity` INT NOT NULL) INSERT INTO books.book(isbn10,isbn13,title,edition,author_f_name,author_m_name,author_l_name,cond,price,genre,quantity)** VALUES ('0136061699','978-0136061694','Software Engineering: Theory and Practice','4','Shari','Lawrence','Pfleeger','very good','50','Computing','2'); Any idea what the problem is?

    Read the article

  • migrating from mysql to oracle9i. Equivalent create table syntax

    - by Android_Crazy
    Hi Following is the syntax for creating table in mysql. I want to create table with same properties in oracle9i. Can anyone provide me the equivalent syntax for oracle? CREATE TABLE IF NOT EXISTS "tbl_audit_trail" ( "id" int(11) unsigned NOT NULL, "old_value" text NOT NULL, "new_value" text NOT NULL, "action" varchar(20) CHARACTER SET latin1 NOT NULL, "model" varchar(255) CHARACTER SET latin1 NOT NULL, "field" varchar(64) CHARACTER SET latin1 NOT NULL, "stamp" timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, "user_id" int(11) NOT NULL, "model_id" varchar(65) CHARACTER SET latin1 NOT NULL, PRIMARY KEY ("id"), KEY "idx_user_id" ("user_id"), KEY "idx_model_id" ("model_id"), KEY "idx_model" ("model"), KEY "idx_field" ("field"), KEY "idx_old_value" ("old_value"(16)), KEY "idx_new_value" ("new_value"(16)), KEY "idx_action" ("action") ) AUTO_INCREMENT=168 ;

    Read the article

  • Why does this crash with access violation to 0xcccccc...?

    - by Mike
    I have a random piece of code, I use for reading from CSV files... and it's fine... until after about 2000 reads... then the getline line fails with an access violation to 0xcccccc... which I assume means that the input stream (file) has been cleared... Not that I know why :) int CCSVManager::ReadCSVLine ( fstream * fsInputFile, vector <string> * recordData ) { string s; getline ( *fsInputFile, s ); stringstream iss( s ); for ( unsigned int i = 0; i < getNumFields (); i++ ) { getline ( iss, s, ',' ); (*recordData)[i] = s; } return 0; } Any ideas why?
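
    A defensive sketch of the same routine (C++; passing references instead of pointers is an assumption): 0xCCCCCCCC is the fill pattern MSVC writes into uninitialized stack memory in debug builds, so a crash at that address usually points to an uninitialized pointer (the stream or the output vector) rather than the stream being "cleared". Checking the getline result and growing the vector instead of indexing into it also removes two common failure modes.

        #include <fstream>
        #include <sstream>
        #include <string>
        #include <vector>

        int ReadCSVLine(std::fstream& in, std::vector<std::string>& recordData) {
            std::string line;
            if (!std::getline(in, line))       // check the stream instead of assuming success
                return -1;                     // EOF or read error

            std::stringstream iss(line);
            std::string field;
            recordData.clear();
            while (std::getline(iss, field, ','))
                recordData.push_back(field);   // grow the vector rather than indexing blindly
            return 0;
        }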

    Read the article

  • WebKit and npapi and mingw-w64

    - by rubenvb
    Hi, The problem is the following: On Windows x64, pointers are 64-bit, but type long is 32-bit. MSVC doesn't seem to care, and even omits warnings about pointer truncation on the default warning level. Recently, a GCC has become available that targets x86_64-w64-mingw32, in other words native Windows x64. GCC produces errors when pointers are truncated (which is the logical thing to do...), and this is causing trouble in WebKit and more specifically, the Netscape Plugin API: First, there are the files (I can only post one hyperlink...): http://trac.webkit.org/browser/trunk/WebCore/ bridge/npapi.h -- defines uint32 as 32-bit int type (~line 145) plugins/win/PluginViewWin.cpp -- casts Windows window handles to 32-bit int, truncating them (~line 450) My proposed fix was to change the uint32 casts to uintptr_t, which makes GCC happy, but still puts a 64-bit value in a uint32 (=unsigned long). I have no clue how to solve this, because clearly WebKit is happy truncating pointers on Win64... How can I solve this the right way? Thanks!
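
    A small illustration of the LLP64 point above (the function is hypothetical): on Win64, long and unsigned long stay 32-bit while pointers are 64-bit, so only a pointer-sized integer type can round-trip a handle without truncation.

        #include <cstdint>

        // Hypothetical helper showing lossy vs. lossless storage of a handle.
        void* round_trip(void* hwnd, uint32_t* out32, uintptr_t* outPtr) {
            *out32  = static_cast<uint32_t>(reinterpret_cast<uintptr_t>(hwnd)); // may drop high bits
            *outPtr = reinterpret_cast<uintptr_t>(hwnd);                        // lossless
            return reinterpret_cast<void*>(*outPtr);                            // safe to cast back
        }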

    Read the article

  • Move million records from MEMORY table to MYISAM table.

    - by Prashant
    Hi, I am looking for a fast way to move records from a MEMORY table to MYISAM table. MEMORY table has around 0.5 million records. Both tables have exactly the same structure (same number of columns, data types etc.). But the MYISAM table is indexed (B-TREE) on a few columns. There are around 25 columns most of which are unsigned integers. I have already tried using "INSERT INTO SELECT * FROM " query. But is there any faster way to do this? Appreciate your help. Prashant

    Read the article

  • NSNotificationCenter and ASIHTTPRequest

    - by user262325
    Hello everyone: I hope to get the HTTP header info (file size) in asynchronous mode. So I initialize as follows: [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(processReadResponseHeaders:) name:@"readResponseHeaders" object:nil]; My code to read the HTTP header: -(void)processReadResponseHeaders: (ASIHTTPRequest *)request ;//(id)sender; { unsigned long long contentLength = [request contentLength]; //error occurs here } This requires changing the source code of ASIHTTPRequest.m; I added my code in the readResponseHeaders function to post a notification when the event is triggered: - (void)readResponseHeaders { //......................... [[NSNotificationCenter defaultCenter] postNotificationName:@"readResponseHeaders" object:self];// } The log file reports: 2010-05-15 13:47:38.034 myapp[2187:6a63] * -[NSConcreteNotification contentLength]: unrecognized selector sent to instance 0x46e5bb0 Any comments welcome. Thanks interdev

    Read the article

  • how to get the http header in asynchronous request mode using ASIHTTPRequest

    - by user262325
    Hello everyone, I hope to display the download file size by reading the HTTP header. I know there is a way to do this: ASIHTTPRequest *request = [ASIHTTPRequest requestWithURL:url]; [request startSynchronous]; NSString *poweredBy = [[request responseHeaders] objectForKey:@"X-Powered-By"]; NSString *contentType = [[request responseHeaders] objectForKey:@"Content-Type"]; but this is synchronous mode; in asynchronous mode it can be done as below: - (void)requestFinished:(ASIHTTPRequest *)request { unsigned long long contentLength = [request contentLength]; } but 'requestFinished' fires at the end of the download. Is there an event to get the HTTP header info at the beginning of the download? Thanks interdev

    Read the article

  • Getting the PC value in ARM assembly

    - by PaulH
    I have a Windows Mobile 6 ARMV4I project where I would like to get the value of the program counter. The function is declared like this: extern "C" unsigned __int32 GetPC(); My assembly code looks like this: GetPC FUNCTION EXPORT GetPC ldr r0, [r15] ; load the PC value in to r0 mov pc, lr ; return the value of r0 ENDFUNC But, when I call the GetPC() function, I get the same number every time. So, I'm assuming my assembly isn't doing what I think it's doing. Can anybody point out what I may be doing wrong? Thanks, PaulH

    Read the article

  • [ebp + 6] instead of +8 in a JIT compiler

    - by David Titarenco
    I'm implementing a simplistic JIT compiler in a VM I'm writing for fun (mostly to learn more about language design) and I'm getting some weird behavior, maybe someone can tell me why. First I define a JIT "prototype" both for C and C++: #ifdef __cplusplus typedef void* (*_JIT_METHOD) (...); #else typedef (*_JIT_METHOD) (); #endif I have a compile() function that will compile stuff into ASM and stick it somewhere in memory: void* compile (void* something) { // grab some memory unsigned char* buffer = (unsigned char*) malloc (1024); // xor eax, eax // inc eax // inc eax // inc eax // ret -> eax should be 3 /* WORKS! buffer[0] = 0x67; buffer[1] = 0x31; buffer[2] = 0xC0; buffer[3] = 0x67; buffer[4] = 0x40; buffer[5] = 0x67; buffer[6] = 0x40; buffer[7] = 0x67; buffer[8] = 0x40; buffer[9] = 0xC3; */ // xor eax, eax // mov eax, 9 // ret 4 -> eax should be 9 /* WORKS! buffer[0] = 0x67; buffer[1] = 0x31; buffer[2] = 0xC0; buffer[3] = 0x67; buffer[4] = 0xB8; buffer[5] = 0x09; buffer[6] = 0x00; buffer[7] = 0x00; buffer[8] = 0x00; buffer[9] = 0xC3; */ // push ebp // mov ebp, esp // mov eax, [ebp + 6] ; wtf? shouldn't this be [ebp + 8]!? // mov esp, ebp // pop ebp // ret -> eax should be the first value sent to the function /* WORKS! */ buffer[0] = 0x66; buffer[1] = 0x55; buffer[2] = 0x66; buffer[3] = 0x89; buffer[4] = 0xE5; buffer[5] = 0x66; buffer[6] = 0x66; buffer[7] = 0x8B; buffer[8] = 0x45; buffer[9] = 0x06; buffer[10] = 0x66; buffer[11] = 0x89; buffer[12] = 0xEC; buffer[13] = 0x66; buffer[14] = 0x5D; buffer[15] = 0xC3; // mov eax, 5 // add eax, ecx // ret -> eax should be 50 /* WORKS! buffer[0] = 0x67; buffer[1] = 0xB8; buffer[2] = 0x05; buffer[3] = 0x00; buffer[4] = 0x00; buffer[5] = 0x00; buffer[6] = 0x66; buffer[7] = 0x01; buffer[8] = 0xC8; buffer[9] = 0xC3; */ return buffer; } And finally I have the main chunk of the program: void main (int argc, char **args) { DWORD oldProtect = (DWORD) NULL; int i = 667, j = 1, k = 5, l = 0; // generate some arbitrary function _JIT_METHOD someFunc = (_JIT_METHOD) compile(NULL); // windows only #if defined _WIN64 || defined _WIN32 // set memory permissions and flush CPU code cache VirtualProtect(someFunc,1024,PAGE_EXECUTE_READWRITE, &oldProtect); FlushInstructionCache(GetCurrentProcess(), someFunc, 1024); #endif // this asm just for some debugging/testing purposes __asm mov ecx, i // run compiled function (from wherever *someFunc is pointing to) l = (int)someFunc(i, k); // did it work? printf("result: %d", l); free (someFunc); _getch(); } As you can see, the compile() function has a couple of tests I ran to make sure I get expected results, and pretty much everything works but I have a question... On most tutorials or documentation resources, to get the first value of a function passed (in the case of ints) you do [ebp+8], the second [ebp+12] and so forth. For some reason, I have to do [ebp+6] then [ebp+10] and so forth. Could anyone tell me why?
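
    One likely explanation (an interpretation of the byte sequence shown above, not something stated in the question): 0x66 is the operand-size override prefix, so the bytes 66 55 encode push bp rather than push ebp; only 2 bytes are pushed for the saved frame pointer, which would put the first argument at [ebp + 6] instead of the usual [ebp + 8]. A sketch of the unprefixed 32-bit prologue:

        // 32-bit prologue without 0x66 prefixes; first argument at [ebp + 8].
        unsigned char prologue[] = {
            0x55,               // push ebp
            0x89, 0xE5,         // mov  ebp, esp
            0x8B, 0x45, 0x08,   // mov  eax, [ebp + 8]   ; first argument
            0x89, 0xEC,         // mov  esp, ebp
            0x5D,               // pop  ebp
            0xC3                // ret
        };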

    Read the article

  • Why does this XML validation via XSD fail in libxml2 (but succeed in xmllint) and how do I fix it?

    - by mtree
    If I run this XML validation via xmllint: xmllint --noout --schema schema.xsd test.xml I get this success message: .../test.xml validates However if I run the same validation via libxml2's C API: int result = xmlSchemaValidateDoc(...) I get a return value of 1845 and this failure message: Element '{http://example.com/XMLSchema/1.0}foo': No matching global declaration available for the validation root. Which I can make absolutely no sense of. :( schema.xsd: <?xml version="1.0" encoding="utf-8" ?> <!DOCTYPE xs:schema PUBLIC "-//W3C//DTD XMLSCHEMA 200102//EN" "XMLSchema.dtd" > <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns="http://example.com/XMLSchema/1.0" targetNamespace="http://example.com/XMLSchema/1.0" elementFormDefault="qualified" attributeFormDefault="unqualified"> <xs:element name="foo"> </xs:element> </xs:schema> test.xml: <?xml version="1.0" encoding="UTF-8"?> <foo xmlns="http://example.com/XMLSchema/1.0"> </foo> main.c: #include <stdio.h> #include <sys/stat.h> #include <sys/types.h> #include <string.h> #include <libxml/parser.h> #include <libxml/valid.h> #include <libxml/xmlschemas.h> u_int32_t get_file_size(const char *file_name) { struct stat buf; if ( stat(file_name, &buf) != 0 ) return(0); return (unsigned int)buf.st_size; } void handleValidationError(void *ctx, const char *format, ...) { char *errMsg; va_list args; va_start(args, format); vasprintf(&errMsg, format, args); va_end(args); fprintf(stderr, "Validation Error: %s", errMsg); free(errMsg); } int main (int argc, const char * argv[]) { const char *xsdPath = argv[1]; const char *xmlPath = argv[2]; printf("\n"); printf("XSD File: %s\n", xsdPath); printf("XML File: %s\n", xmlPath); int xmlLength = get_file_size(xmlPath); char *xmlSource = (char *)malloc(sizeof(char) * xmlLength); FILE *p = fopen(xmlPath, "r"); char c; unsigned int i = 0; while ((c = fgetc(p)) != EOF) { xmlSource[i++] = c; } printf("\n"); printf("XML Source:\n\n%s\n", xmlSource); fclose(p); printf("\n"); int result = 42; xmlSchemaParserCtxtPtr parserCtxt = NULL; xmlSchemaPtr schema = NULL; xmlSchemaValidCtxtPtr validCtxt = NULL; xmlDocPtr xmlDocumentPointer = xmlParseMemory(xmlSource, xmlLength); parserCtxt = xmlSchemaNewParserCtxt(xsdPath); if (parserCtxt == NULL) { fprintf(stderr, "Could not create XSD schema parsing context.\n"); goto leave; } schema = xmlSchemaParse(parserCtxt); if (schema == NULL) { fprintf(stderr, "Could not parse XSD schema.\n"); goto leave; } validCtxt = xmlSchemaNewValidCtxt(schema); if (!validCtxt) { fprintf(stderr, "Could not create XSD schema validation context.\n"); goto leave; } xmlSetStructuredErrorFunc(NULL, NULL); xmlSetGenericErrorFunc(NULL, handleValidationError); xmlThrDefSetStructuredErrorFunc(NULL, NULL); xmlThrDefSetGenericErrorFunc(NULL, handleValidationError); result = xmlSchemaValidateDoc(validCtxt, xmlDocumentPointer); leave: if (parserCtxt) { xmlSchemaFreeParserCtxt(parserCtxt); } if (schema) { xmlSchemaFree(schema); } if (validCtxt) { xmlSchemaFreeValidCtxt(validCtxt); } printf("\n"); printf("Validation successful: %s (result: %d)\n", (result == 0) ? "YES" : "NO", result); return 0; } console output: XSD File: /Users/dephiniteloop/Desktop/xml_validate/schema.xsd XML File: /Users/dephiniteloop/Desktop/xml_validate/test.gkml XML Source: <?xml version="1.0" encoding="UTF-8"?> <foo xmlns="http://example.com/XMLSchema/1.0"> </foo> Validation Error: Element '{http://example.com/XMLSchema/1.0}foo': No matching global declaration available for the validation root. 
Validation successful: NO (result: 1845) In case it matters: I'm on OSX 10.6.7 with its default libxml2.dylib (/Developer/SDKs/MacOSX10.6.sdk/usr/lib/libxml2.2.7.3.dylib)

    Read the article

  • error: ‘struct mcontext_t’ has no member named ‘eip’

    - by user353573
    The original declaration was struct sigcontext *sc; after changing it to mcontext_t, the following error occurs. How do I fix it? error: ‘struct mcontext_t’ has no member named ‘eip’ #include <stdio.h> #include <signal.h> #include <asm/ucontext.h> static unsigned long target; void handler(int signum, siginfo_t *siginfo, void *uc0){ struct ucontext *uc; mcontext_t *sc; uc = (struct ucontext *)uc0; sc = &uc->uc_mcontext; sc->eip = target; }
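
    A sketch of the glibc way to reach the saved instruction pointer (assuming 32-bit x86 Linux; the field layout differs per architecture): include <ucontext.h> instead of <asm/ucontext.h> and go through the general-register array, since glibc's mcontext_t has no eip member.

        #define _GNU_SOURCE            // REG_* register indices need this on some setups
        #include <signal.h>
        #include <ucontext.h>

        static unsigned long target;

        static void handler(int signum, siginfo_t *siginfo, void *uc0) {
            ucontext_t *uc = (ucontext_t *)uc0;
            uc->uc_mcontext.gregs[REG_EIP] = (greg_t)target;   // saved EIP lives in gregs[]
        }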

    Read the article

  • int vs size_t on 64bit

    - by MK
    Porting code from 32bit to 64bit. Lots of places with int len = strlen(pstr); These all generate warnings now because strlen() returns size_t which is 64bit and int is still 32bit. So I've been replacing them with size_t len = strlen(pstr); But I just realized that this is not safe, as size_t is unsigned and it can be treated as signed by the code (I actually ran into one case where it caused a problem, thank you, unit tests!). Blindly casting strlen return to (int) feels dirty. Or maybe it shouldn't? So the question is: is there an elegant solution for this? I probably have a thousand lines of code like that in the codebase; I can't manually check each one of them and the test coverage is currently somewhere between 0.01 and 0.001%.
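
    One possible middle ground, as a C++ sketch (the helper name is made up): keep int at the call sites but funnel every conversion through a checked narrowing function, so a value that does not fit trips an assert in debug builds instead of silently wrapping.

        #include <cassert>
        #include <cstring>
        #include <limits>

        // Hypothetical helper: fails loudly if the size_t value cannot fit in an int.
        inline int checked_int(std::size_t n) {
            assert(n <= static_cast<std::size_t>(std::numeric_limits<int>::max()));
            return static_cast<int>(n);
        }

        // Usage: int len = checked_int(std::strlen(pstr));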

    Read the article

  • How to prevent application thievery (specific to Android applications)?

    - by Berdon Magnus
    Hey, I was wondering what the most effective way is of preventing people from stealing my application (downloading a copy of the .apk online rather than buying it). I've spent a lot of time on one in particular (Droidbox) and won't be releasing Sync until I can guarantee that the people who are providing illegal copies of the pro version aren't able to. Has anyone implemented this? I've tried checking my package signature versus the signature of an unsigned copy, but it appears to be the same - perhaps I'm doing something incorrectly here. I'm unsure whether people actually distribute the signed .apk, in which case I don't think signature validation would work to begin with... Please note, this question is specific to Android Marketplace Applications - the difference being, application delivery is out of my hands and I have no way of linking between a legitimate purchase and an illegal download.

    Read the article

  • using check constraint in MySQL for controlling string length not working

    - by ptrn
    Dear stackoverflow, I'm tumbled with a problem! I've set up my first check constraint using MySQL, but unfortunately I'm having a problem. When inserting a row that should fail the test, the row is inserted anyway. The structure: CREATE TABLE user ( id INT UNSIGNED NOT NULL AUTO_INCREMENT, uname VARCHAR(10) NOT NULL, fname VARCHAR(50) NOT NULL, lname VARCHAR(50) NOT NULL, mail VARCHAR(50) NOT NULL, PRIMARY KEY (id), CHECK (LENGTH(fname) > 3) ); The insert statement: INSERT INTO user VALUES (null, 'user', 'Fname', 'Lname', '[email protected]'); I'm pretty sure I'm missing something basic here.

    Read the article

  • Dynamically allocated structure and casting.

    - by Simone Margaritelli
    Let's say I have a first structure like this: typedef struct { int ivalue; char cvalue; } Foo; And a second one: typedef struct { int ivalue; char cvalue; unsigned char some_data_block[0xFF]; } Bar; Now let's say I do the following: Foo *pfoo; Bar *pbar; pbar = new Bar; pfoo = (Foo *)pbar; delete pfoo; Now, when I call the delete operator, how much memory does it free? sizeof(int) + sizeof(char) Or sizeof(int) + sizeof(char) + sizeof(char) * 0xFF ? And if it's the first case due to the casting, is there any way to prevent this memory leak from happening? Note: please don't answer "use C++ polymorphism" or similar, I am using this method for a reason.
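
    A sketch of the well-defined variant (C++): the standard makes deleting a Bar through an unrelated Foo* undefined behaviour, and in any case it is the allocator's record of the block size, not the sizeof of the pointer's static type, that decides how much memory is released. Casting back to the allocated type before delete sidesteps the question.

        struct Foo { int ivalue; char cvalue; };
        struct Bar { int ivalue; char cvalue; unsigned char some_data_block[0xFF]; };

        int main() {
            Bar* pbar = new Bar;
            Foo* pfoo = reinterpret_cast<Foo*>(pbar);   // view the common prefix as a Foo

            // ... use pfoo->ivalue / pfoo->cvalue ...

            delete reinterpret_cast<Bar*>(pfoo);        // delete with the type that was allocated
            return 0;
        }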

    Read the article

  • Detecting that the stack is full in C/C++

    - by Martin Kristiansen
    When writing C++ code I've learned that using the stack to store memory is a good idea. But recently I ran into a problem: I had an experiment that had code that looked like this: void fun(unsigned int N) { float data_1[N*N]; float data_2[N*N]; /* Do magic */ } The code exploded with a segmentation fault at random, and I had no idea why. It turned out that the problem was that I was trying to store things that were too big on my stack. Is there a way of detecting this? Or at least detecting that it has gone wrong?
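
    A minimal sketch of the usual remedy (C++): put large, runtime-sized buffers on the heap via std::vector instead of the stack. An allocation that cannot be satisfied then throws std::bad_alloc, which is detectable, whereas blowing the stack is not reliably catchable.

        #include <cstddef>
        #include <vector>

        void fun(unsigned int N) {
            std::size_t n = static_cast<std::size_t>(N) * N;   // widen before multiplying
            std::vector<float> data_1(n);                      // throws std::bad_alloc on failure
            std::vector<float> data_2(n);
            /* Do magic */
        }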

    Read the article

  • GetDiskFreeSpaceEx reports wrong number of free bytes

    - by rboorgapally
    __int64 i64FreeBytes; unsigned __int64 lpFreeBytesAvailableToCaller, lpTotalNumberOfBytes, lpTotalNumberOfFreeBytes; // variables used to obtain // the free space on the drive GetDiskFreeSpaceEx (Manager.capDir, (PULARGE_INTEGER)&lpFreeBytesAvailableToCaller, (PULARGE_INTEGER)&lpTotalNumberOfBytes, (PULARGE_INTEGER)&lpTotalNumberOfFreeBytes); i64FreeBytes = lpTotalNumberOfFreeBytes; _tprintf(_T ("Number of bytes free on the drive:%I64u \n"), lpTotalNumberOfFreeBytes); I am working on a data management routine which is a Windows CE command line application. The above code shows how I get the number of free bytes on a particular drive which contains the folder Manager.capdir (it is the variable containing the full path name of the directory). My question is, the number of free bytes reported by the above code (the _tprintf statement) doesn't match the number of free bytes of the drive (which I check by doing a right click on the drive). I would like to know the reason for this difference.
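
    A sketch using ULARGE_INTEGER directly and checking the return value (standard Win32/CE API; the function name printFreeSpace is made up). Note the first out-parameter is the free space available to the calling user, which can be smaller than the total free space when disk quotas apply - one common source of a mismatch with what the shell shows.

        #include <windows.h>
        #include <tchar.h>
        #include <stdio.h>

        // Hypothetical helper for printing the free-space figures for a path.
        void printFreeSpace(LPCTSTR path) {
            ULARGE_INTEGER availToCaller, totalBytes, totalFree;
            if (GetDiskFreeSpaceEx(path, &availToCaller, &totalBytes, &totalFree)) {
                _tprintf(_T("Total free bytes on volume: %I64u\n"), totalFree.QuadPart);
                _tprintf(_T("Free bytes available to caller: %I64u\n"), availToCaller.QuadPart);
            } else {
                _tprintf(_T("GetDiskFreeSpaceEx failed, error %lu\n"), GetLastError());
            }
        }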

    Read the article

  • MYSQL: Error: Cannot add or update a child row: a foreign key constraint fails

    - by DalivDali
    Hi all, Using MySQL on Windows OS, and am getting an error upon attempting to create a foreign key between two tables: CREATE TABLE tf_traffic_stats ( domain_name char(100) NOT NULL, session_count int(11) NULL, search_count int(11) NULL, click_count int(11) NULL, revenue float NULL, rpm float NULL, cpc float NULL, traffic_date date NOT NULL DEFAULT '0000-00-00', PRIMARY KEY(domain_name,traffic_date)) and CREATE TABLE td_domain_name ( domain_id int(10) UNSIGNED AUTO_INCREMENT NOT NULL, domain_name char(100) NOT NULL, update_date date NOT NULL, PRIMARY KEY(domain_id)) The following statement gives me the error present in the subject line (cannot add or update a child row: a foreign key constraint fails): ALTER TABLE td_domain_name ADD CONSTRAINT FK_domain_name FOREIGN KEY(domain_name) REFERENCES tf_traffic_stats(domain_name) ON DELETE RESTRICT ON UPDATE RESTRICT Can someone point me in the right direction of what may be causing the error. I also have a foreign key referencing td_domain_name.domain_id, but I don't think this should be interfering... Appreciate it!

    Read the article

  • std::string == operator not working

    - by Paul
    Hello, I've been using std::string's == operator for years on Windows and Linux. Now I am compiling one of my libraries on Linux, and it uses == heavily. On Linux the following function fails, because == returns false even when the strings are equal (in a case-sensitive comparison): const Data* DataBase::getDataByName( const std::string& name ) const { for ( unsigned int i = 0 ; i < m_dataList.getNum() ; i++ ) { if ( m_dataList.get(i)->getName() == name ) { return m_dataList.get(i); } } return NULL; } The getName() method is declared as follows: virtual const std::string& getName() const; I am building with gcc 4.4.1 and libstdc++44-4.4.1. Any ideas? It looks perfectly valid to me. Paul
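
    A small debugging sketch (C++; the helper is hypothetical): when == "fails" on strings that print identically, the usual culprits are hidden differences - trailing '\r' from Windows line endings, embedded NUL characters, or stray spaces. Dumping the sizes and raw bytes of both operands makes such differences visible.

        #include <cstdio>
        #include <string>

        // Hypothetical helper: print sizes and a hex dump of both strings.
        void dumpMismatch(const std::string& a, const std::string& b) {
            std::printf("a.size()=%zu  b.size()=%zu\n", a.size(), b.size());
            for (unsigned char c : a) std::printf("%02X ", c);
            std::printf("\n");
            for (unsigned char c : b) std::printf("%02X ", c);
            std::printf("\n");
        }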

    Read the article

< Previous Page | 37 38 39 40 41 42 43 44 45 46 47 48  | Next Page >