Search Results

Search found 6123 results on 245 pages for 'unsigned char'.

Page 7/245 | < Previous Page | 3 4 5 6 7 8 9 10 11 12 13 14  | Next Page >

  • Signed and unsigned, and how bit extension works

    - by hatorade
    unsigned short s; s = 0xffff; int i = s; How does the extension work here? Two higher-order bytes are added, but I'm confused about whether 1s or 0s are extended there. This is probably platform dependent, so let's focus on what Unix does. Would the two higher-order bytes of the int be filled with 1s or 0s, and why? Basically, does the computer know that s is unsigned and correctly assign 0s to the higher-order bits of the int, so that i is now 0x0000ffff? Or, since ints are signed by default on Unix, does it take the sign bit from s (a 1) and copy it to the higher-order bytes?
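
    A minimal sketch (not from the question, assuming a 16-bit short and a 32-bit int) of what actually happens: the conversion is value-preserving, so an unsigned short is zero-extended; only a signed source holding a negative value gets sign-extended.

        #include <cstdio>

        int main() {
            unsigned short s = 0xffff;   // value 65535, fits in a 32-bit int
            int i = s;                   // value-preserving: upper bytes become 0
            short t = -1;                // signed source, bit pattern 0xffff
            int j = t;                   // sign-extended: upper bytes become 1s
            std::printf("%08x %08x\n", (unsigned)i, (unsigned)j);  // 0000ffff ffffffff
            return 0;
        }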

    Read the article

  • Using unsigned drivers in Windows 8

    - by T. Fabre
    I just migrated from Windows 7 x64 to 8, but I can't get my VPN software to run anymore: the SafeNet IKE service (installed by SafeNet SoftRemoteLT GA, used by my VPN provider) cannot start. I found that unsigned drivers are disabled by default on Win8, and that is what is blocking the driver. The System event log tells me that the driver (apparently C:\WINDOWS\SysWow64\Drivers\IPSECDRV.sys) was blocked when I try to manually start the service (SafeNet IKE Service). I get the same messages for another driver, crypto.sys, found in the same folder. I tried using bcdedit to enable unsigned drivers: bcdedit /set loadoptions DDISABLE_INTEGRITY_CHECKS bcdedit /set testsigning ON After reboot, same error. I tried booting into Win 8's test mode, same issue. Applying the code signing policy (Enabled, Ignore) does not help either. Running gpresult does show that the policy is applied. Any help welcome.

    Read the article

  • Why is passing a string literal into a char* argument only sometimes a compiler error?

    - by Brian Postow
    I'm working on a C and C++ program. We used to be compiling without the make-strings-writable option, but that was producing a bunch of warnings, so I turned it off. Then I got a whole bunch of errors of the form "Cannot convert const char* to char* in argument 3 of function foo". So, I went through and made a whole lot of changes to fix those. However, today the program CRASHED because the literal "" was getting passed into a function that expected a char* and set the 0th character to 0. It wasn't doing anything bad, just trying to edit a constant, and crashing. My question is, why wasn't that a compiler error? In case it matters, this was on a Mac compiled with gcc 4.0.
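
    For reference, a minimal sketch (not the poster's code) of why this compiles at all: in C, and as a deprecated conversion in older C++ such as gcc 4.0, a string literal still converts to char* with at most a warning, but the storage stays read-only, so writing through it crashes at run time.

        #include <cstring>

        void clearFirst(char *s) {
            s[0] = '\0';              // fine for a writable buffer, crashes for a literal
        }

        int main() {
            char buf[] = "foo";       // writable copy of the literal
            clearFirst(buf);          // OK
            // clearFirst("");        // accepted by C / old C++ (deprecated conversion),
            //                        // but typically segfaults when s[0] is written
            return 0;
        }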

    Read the article

  • "const char *" is incompatible with parameter of type "LPCWSTR" error

    - by N0xus
    I'm trying to incorporate some code from Programming an RTS Game With Direct3D into my game. Before anyone says it, I know the book is kinda old, but it's the particle effects system he creates that I'm trying to use. He initialises his shader class like this: void SHADER::Init(IDirect3DDevice9 *Dev, const char fName[], int typ) { m_pDevice = Dev; m_type = typ; if(m_pDevice == NULL)return; // Assemble and set the pixel or vertex shader HRESULT hRes; LPD3DXBUFFER Code = NULL; LPD3DXBUFFER ErrorMsgs = NULL; if(m_type == PIXEL_SHADER) hRes = D3DXCompileShaderFromFile(fName, NULL, NULL, "Main", "ps_2_0", D3DXSHADER_DEBUG, &Code, &ErrorMsgs, &m_pConstantTable); else hRes = D3DXCompileShaderFromFile(fName, NULL, NULL, "Main", "vs_2_0", D3DXSHADER_DEBUG, &Code, &ErrorMsgs, &m_pConstantTable); } However, this generates the following error: Error 1 error C2664: 'D3DXCompileShaderFromFileW' : cannot convert parameter 1 from 'const char []' to 'LPCWSTR' The compiler states the issue is with fName in the D3DXCompileShaderFromFile line. I know this has something to do with the character set, and my project was already using the Unicode character set. I read that to solve the above problem I need to switch to a multi-byte character set. But if I do that, I get other errors in my code, like so: Error 2 error C2664: 'D3DXCreateEffectFromFileA' : cannot convert parameter 2 from 'const wchar_t *' to 'LPCSTR' It is attributed to the following line of code: if(FAILED(D3DXCreateEffectFromFile(m_pD3DDevice9,effectFileName.c_str(),NULL,NULL,0,NULL,&m_pCurrentEffect,&pErrorBuffer))) This if is nested within another if statement checking my effectmap list, though it is the FAILED macro that gets the red underline. Likewise I get another error with the following line of code: wstring effectFileName = TEXT("Sky.fx"); with the error message being: Error 1 error C2440: 'initializing' : cannot convert from 'const char [7]' to 'std::basic_string<_Elem,_Traits,_Ax' If I change it back to a Unicode character set, I get the original (fewer) errors. Leaving it as multi-byte, I get more errors. Does anyone know of a way I can fix this issue?
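
    One way out that avoids flipping the whole project's character set (a minimal sketch with names simplified from the book's SHADER class; assumes the D3DX9 headers): make the file-name parameters TCHAR-based so they match whichever A or W variant the D3DX macros expand to.

        #include <tchar.h>
        #include <d3dx9.h>

        void InitShader(IDirect3DDevice9 *dev, LPCTSTR fileName)   // was: const char fName[]
        {
            (void)dev;   // the book's class stores this in m_pDevice
            LPD3DXBUFFER code = NULL, errors = NULL;
            LPD3DXCONSTANTTABLE constants = NULL;
            // In a Unicode build fileName is const wchar_t*, matching
            // D3DXCompileShaderFromFileW; in a multi-byte build it matches the A variant.
            D3DXCompileShaderFromFile(fileName, NULL, NULL, "Main", "vs_2_0",
                                      D3DXSHADER_DEBUG, &code, &errors, &constants);
        }

        // call site: InitShader(device, TEXT("Sky.fx"));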

    Read the article

  • OpenAL causing leaks in my iPhone game. Help appreciated

    - by AptoTech
    Hi, I am integrating OpenAL in my iPhone game from code I found in this post, but the compiler gave me an error on this line of code: unsigned char *outData = malloc(fileSize);, so I changed it to this: unsigned char *outData = (unsigned char*) malloc(fileSize);. This got rid of the compiler errors, but seems to have thrown up two leaks: Malloc 32 Bytes 0x505cb40 AudioToolbox SimAggregateDevice::CreateAggregateDevice(__CFString const*, __CFString const*, unsigned long&) and NSCFDictionary 0x505be30 64 AudioToolbox SimAggregateDevice::CreateAggregateDevice(__CFString const*, __CFString const*, unsigned long&) Is this due to me changing the unsigned char line? I would be very grateful if someone could help me to remove these leaks.
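
    For what it's worth, a minimal sketch (not the poster's code) of why the cast is harmless: it only satisfies C++/Objective-C++ type checking and does not change what malloc allocates, so a leak can only come from a buffer that is never freed, or from allocations outside your own code.

        #include <cstdlib>

        void loadSound(std::size_t fileSize) {
            unsigned char *outData = (unsigned char *)std::malloc(fileSize);
            if (outData == NULL) return;
            // ... read the file into outData and hand the samples to OpenAL ...
            std::free(outData);   // without this, Instruments would report a real leak here
        }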

    Read the article

  • EVP_PKEY from char buffer in x509 (PKCS7)

    - by sid
    Hi all, I have a DER certificate from which I am retrieving the public key into an unsigned char buffer as follows; is this the right way to get it? pStoredPublicKey = X509_get_pubkey(x509); if(pStoredPublicKey == NULL) { printf(": publicKey is NULL\n"); } if(pStoredPublicKey->type == EVP_PKEY_RSA) { RSA *x = pStoredPublicKey->pkey.rsa; bn = x->n; } else if(pStoredPublicKey->type == EVP_PKEY_DSA) { } else if(pStoredPublicKey->type == EVP_PKEY_EC) { } else { printf(" : Unkown publicKey\n"); } //extracts the bytes from public key & convert into unsigned char buffer buf_len = (size_t) BN_num_bytes (bn); key = (unsigned char *)malloc (buf_len); n = BN_bn2bin (bn, (unsigned char *) key); for (i = 0; i < n; i++) { printf("%02x\n", (unsigned char) key[i]); } keyLen = EVP_PKEY_size(pStoredPublicKey); EVP_PKEY_free(pStoredPublicKey); And with this unsigned char buffer, how do I get back the EVP_PKEY for RSA? Or can I use the following? EVP_PKEY *d2i_PublicKey(int type, EVP_PKEY **a, unsigned char **pp, long length); int i2d_PublicKey(EVP_PKEY *a, unsigned char **pp);
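
    To get back from a byte buffer to an EVP_PKEY, the i2d/d2i pair quoted in the question is the intended route; a minimal sketch (assuming the OpenSSL prototypes shown above, function name hypothetical) of serialising and reparsing the whole key rather than hand-copying only the RSA modulus:

        #include <openssl/evp.h>
        #include <openssl/x509.h>
        #include <cstdlib>

        EVP_PKEY *roundTrip(EVP_PKEY *pub)
        {
            int len = i2d_PublicKey(pub, NULL);            // first call: length only
            if (len <= 0) return NULL;

            unsigned char *buf = (unsigned char *)std::malloc(len);
            unsigned char *p = buf;
            i2d_PublicKey(pub, &p);                        // p is advanced past the encoded data

            unsigned char *q = buf;                        // keep buf itself for free()
            EVP_PKEY *copy = d2i_PublicKey(EVP_PKEY_RSA, NULL, &q, len);
            std::free(buf);
            return copy;                                   // NULL if parsing failed
        }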

    Read the article

  • What makes signed integers behave differently?

    - by 000
    In this example of x86_64 hex/disassembled code I see: 48B80000000000000000 mov rax, 0x0, which a data inspector reports as: Signed/Unsigned Byte 52, Signed/Unsigned Short 14388, Signed/Unsigned Int 943863860, Signed/Unsigned Int64 3472328296363079732, Float 4.630555e-05, Double 1.39804332763832e-76, String "48B80000000000000000". To me this appears to have the same functionality as: 48C7C000000000 mov rax, 0x0, reported as: Signed/Unsigned Byte 52, Signed/Unsigned Short 14388, Signed/Unsigned Int 927152180, Signed/Unsigned Int64 3472328377950746676, Float 1.163599e-05, Double 1.39806836023098e-76, String "48C7C000000000". How is the first example treated differently from the second example?

    Read the article

  • Why is the Objective-C Boolean data type defined as a signed char?

    - by EddieCatflap
    Something that has piqued my interest is Objective-C's BOOL type definition. Why is it defined as a signed char (which could cause unexpected behaviour if a value greater than 1 byte in length is assigned to it) rather than as an int, as C does (much less margin for error: a zero value is false, a non-zero value is true)? The only reason I can think of is the Objective-C designers micro-optimising storage because the char will use less memory than the int. Please can someone enlighten me?
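
    A minimal sketch of the surprise hinted at in the question (BOOL_t is a stand-in for Objective-C's typedef signed char BOOL): any int whose low byte happens to be zero truncates to NO.

        #include <cstdio>

        typedef signed char BOOL_t;            // stand-in for Objective-C's BOOL
        const BOOL_t NO_t = 0;

        int main()
        {
            int flags = 0x0100;                // non-zero as an int...
            BOOL_t ok = (BOOL_t)flags;         // ...but its low byte is 0
            std::printf("%d\n", ok == NO_t);   // prints 1: the "true" value became NO
            return 0;
        }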

    Read the article

  • Initializing a char array through passed pointer segfaults

    - by Bitgarden
    I.e., why does the following work: char* char_array(size_t size){ return new char[size]; } int main(){ const char* foo = "foo"; size_t len = strlen(foo); char* bar=char_array(len); memset(bar, 0, len+1); } But the following segfaults: void char_array(char* out, size_t size){ out= new char[size]; } int main(){ const char* foo = "foo"; size_t len = strlen(foo); char* bar; char_array(bar, len); memset(bar, 0, len+1); }
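
    The second version only overwrites the function's local copy of the pointer, so bar in main stays uninitialised and memset faults. A minimal sketch (not the poster's code) of the two common fixes: pass the pointer by reference, or pass its address.

        #include <cstring>

        void char_array_ref(char *&out, std::size_t size) {   // reference to the caller's pointer
            out = new char[size];
        }

        void char_array_ptr(char **out, std::size_t size) {   // address of the caller's pointer
            *out = new char[size];
        }

        int main() {
            std::size_t len = std::strlen("foo");
            char *a = NULL;
            char *b = NULL;
            char_array_ref(a, len + 1);
            char_array_ptr(&b, len + 1);
            std::memset(a, 0, len + 1);                       // fine now
            std::memset(b, 0, len + 1);
            delete[] a;
            delete[] b;
            return 0;
        }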

    Read the article

  • Making a char function parameter const?

    - by Helper Method
    Consider this function declaration: int IndexOf(const char *, char); where char * is a string and char is the character to find within the string (returns -1 if the char is not found, otherwise its position). Does it make sense to make the char also const? I always try to use const on pointer parameters, but when something is passed by value, I normally leave the const out. What are your thoughts?
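
    A minimal sketch of why it is purely a matter of taste: top-level const on a by-value parameter is ignored in the function's type, so it only stops the function body from modifying its local copy.

        #include <cstring>

        int IndexOf(const char *s, char c);              // declaration without const

        int IndexOf(const char *s, const char c)         // same function; const is a local detail
        {
            const char *p = std::strchr(s, c);
            return p ? (int)(p - s) : -1;
        }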

    Read the article

  • to store char* from function return value

    - by samprat
    Hi folks, I am trying to implement a function which reads from a serial port (Linux) and returns a char*. The function works fine, but how would I store the return value from the function? An example of the function is: char *ReadToSerialPort() { char *bufptr; char buffer[256]; // Input buffer/ / //char *bufptr; // Current char in buffer // int nbytes; // Number of bytes read // bufptr = buffer; while ((nbytes = read(fd, bufptr, buffer+sizeof(buffer)-bufptr -1 )) > 0) { bufptr += nbytes; // if (bufptr[-1] == '\n' || bufptr[-1] == '\r') /*if ( bufptr[sizeof(buffer) -1] == '*' && bufptr[0] == '$' ) { break; }*/ } // while ends if ( nbytes ) return bufptr; else return 0; *bufptr = '\0'; } // end ReadAdrPort //In main int main( int argc , char *argv[]) { char *letter; if(strcpy(letter, ReadToSerialPort()) >0 ) { printf("Response is %s\n",letter); } }
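
    As written, the function returns bufptr (which points into a stack buffer that dies when the function returns) and main then copies through an uninitialised letter pointer. A minimal sketch of a safer shape (names hypothetical, fd assumed already open): let the caller own the buffer.

        #include <cstddef>
        #include <unistd.h>

        // Reads up to size-1 bytes from fd into the caller's buffer; returns bytes read.
        ssize_t ReadFromSerialPort(int fd, char *buffer, std::size_t size)
        {
            if (size == 0) return 0;
            std::size_t used = 0;
            ssize_t nbytes;
            while (used < size - 1 &&
                   (nbytes = read(fd, buffer + used, size - 1 - used)) > 0)
                used += (std::size_t)nbytes;
            buffer[used] = '\0';
            return (ssize_t)used;
        }

        // in main:
        //   char letter[256];
        //   if (ReadFromSerialPort(fd, letter, sizeof letter) > 0)
        //       printf("Response is %s\n", letter);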

    Read the article

  • Error while linking libvorbisfile.dylib into Mac application

    - by computergeek6
    I'm working on a program that loads sounds from Ogg Vorbis files, but whatever I do, the XCode project just doesn't seem to want to link libvorbisfile.a into my program. I keep getting linking errors: "_ov_read", referenced from: GSound::GSound(GWorld*, std::basic_string<char, std::char_traits<char>, std::allocator<char> >)in GSound.o GSound::GSound(GWorld*, std::basic_string<char, std::char_traits<char>, std::allocator<char> >)in GSound.o "_ov_clear", referenced from: GSound::GSound(GWorld*, std::basic_string<char, std::char_traits<char>, std::allocator<char> >)in GSound.o GSound::GSound(GWorld*, std::basic_string<char, std::char_traits<char>, std::allocator<char> >)in GSound.o "_ov_info", referenced from: GSound::GSound(GWorld*, std::basic_string<char, std::char_traits<char>, std::allocator<char> >)in GSound.o GSound::GSound(GWorld*, std::basic_string<char, std::char_traits<char>, std::allocator<char> >)in GSound.o "_ov_open", referenced from: GSound::GSound(GWorld*, std::basic_string<char, std::char_traits<char>, std::allocator<char> >)in GSound.o GSound::GSound(GWorld*, std::basic_string<char, std::char_traits<char>, std::allocator<char> >)in GSound.o ld: symbol(s) not found

    Read the article

  • SQLite, python, unicode, and non-utf data

    - by Nathan Spears
    I started by trying to store strings in sqlite using python, and got the message: sqlite3.ProgrammingError: You must not use 8-bit bytestrings unless you use a text_factory that can interpret 8-bit bytestrings (like text_factory = str). It is highly recommended that you instead just switch your application to Unicode strings. Ok, I switched to Unicode strings. Then I started getting the message: sqlite3.OperationalError: Could not decode to UTF-8 column 'tag_artist' with text 'Sigur Rós' when trying to retrieve data from the db. More research and I started encoding it in utf8, but then 'Sigur Rós' starts looking like 'Sigur Rós' note: My console was set to display in 'latin_1' as @John Machin pointed out. What gives? After reading this, describing exactly the same situation I'm in, it seems as if the advice is to ignore the other advice and use 8-bit bytestrings after all. I didn't know much about unicode and utf before I started this process. I've learned quite a bit in the last couple hours, but I'm still ignorant of whether there is a way to correctly convert 'ó' from latin-1 to utf-8 and not mangle it. If there isn't, why would sqlite 'highly recommend' I switch my application to unicode strings? I'm going to update this question with a summary and some example code of everything I've learned in the last 24 hours so that someone in my shoes can have an easy(er) guide. If the information I post is wrong or misleading in any way please tell me and I'll update, or one of you senior guys can update. Summary of answers Let me first state the goal as I understand it. The goal in processing various encodings, if you are trying to convert between them, is to understand what your source encoding is, then convert it to unicode using that source encoding, then convert it to your desired encoding. Unicode is a base and encodings are mappings of subsets of that base. utf_8 has room for every character in unicode, but because they aren't in the same place as, for instance, latin_1, a string encoded in utf_8 and sent to a latin_1 console will not look the way you expect. In python the process of getting to unicode and into another encoding looks like: str.decode('source_encoding').encode('desired_encoding') or if the str is already in unicode str.encode('desired_encoding') For sqlite I didn't actually want to encode it again, I wanted to decode it and leave it in unicode format. Here are four things you might need to be aware of as you try to work with unicode and encodings in python. The encoding of the string you want to work with, and the encoding you want to get it to. The system encoding. The console encoding. The encoding of the source file Elaboration: (1) When you read a string from a source, it must have some encoding, like latin_1 or utf_8. In my case, I'm getting strings from filenames, so unfortunately, I could be getting any kind of encoding. Windows XP uses UCS-2 (a Unicode system) as its native string type, which seems like cheating to me. Fortunately for me, the characters in most filenames are not going to be made up of more than one source encoding type, and I think all of mine were either completely latin_1, completely utf_8, or just plain ascii (which is a subset of both of those). So I just read them and decoded them as if they were still in latin_1 or utf_8. It's possible, though, that you could have latin_1 and utf_8 and whatever other characters mixed together in a filename on Windows. 
Sometimes those characters can show up as boxes, other times they just look mangled, and other times they look correct (accented characters and whatnot). Moving on. (2) Python has a default system encoding that gets set when python starts and can't be changed during runtime. See here for details. Dirty summary ... well here's the file I added: \# sitecustomize.py \# this file can be anywhere in your Python path, \# but it usually goes in ${pythondir}/lib/site-packages/ import sys sys.setdefaultencoding('utf_8') This system encoding is the one that gets used when you use the unicode("str") function without any other encoding parameters. To say that another way, python tries to decode "str" to unicode based on the default system encoding. (3) If you're using IDLE or the command-line python, I think that your console will display according to the default system encoding. I am using pydev with eclipse for some reason, so I had to go into my project settings, edit the launch configuration properties of my test script, go to the Common tab, and change the console from latin-1 to utf-8 so that I could visually confirm what I was doing was working. (4) If you want to have some test strings, eg test_str = "ó" in your source code, then you will have to tell python what kind of encoding you are using in that file. (FYI: when I mistyped an encoding I had to ctrl-Z because my file became unreadable.) This is easily accomplished by putting a line like so at the top of your source code file: # -*- coding: utf_8 -*- If you don't have this information, python attempts to parse your code as ascii by default, and so: SyntaxError: Non-ASCII character '\xf3' in file _redacted_ on line 81, but no encoding declared; see http://www.python.org/peps/pep-0263.html for details Once your program is working correctly, or, if you aren't using python's console or any other console to look at output, then you will probably really only care about #1 on the list. System default and console encoding are not that important unless you need to look at output and/or you are using the builtin unicode() function (without any encoding parameters) instead of the string.decode() function. I wrote a demo function I will paste into the bottom of this gigantic mess that I hope correctly demonstrates the items in my list. Here is some of the output when I run the character 'ó' through the demo function, showing how various methods react to the character as input. My system encoding and console output are both set to utf_8 for this run: '?' = original char <type 'str'> repr(char)='\xf3' '?' = unicode(char) ERROR: 'utf8' codec can't decode byte 0xf3 in position 0: unexpected end of data 'ó' = char.decode('latin_1') <type 'unicode'> repr(char.decode('latin_1'))=u'\xf3' '?' = char.decode('utf_8') ERROR: 'utf8' codec can't decode byte 0xf3 in position 0: unexpected end of data Now I will change the system and console encoding to latin_1, and I get this output for the same input: 'ó' = original char <type 'str'> repr(char)='\xf3' 'ó' = unicode(char) <type 'unicode'> repr(unicode(char))=u'\xf3' 'ó' = char.decode('latin_1') <type 'unicode'> repr(char.decode('latin_1'))=u'\xf3' '?' = char.decode('utf_8') ERROR: 'utf8' codec can't decode byte 0xf3 in position 0: unexpected end of data Notice that the 'original' character displays correctly and the builtin unicode() function works now. Now I change my console output back to utf_8. '?' = original char <type 'str'> repr(char)='\xf3' '?' = unicode(char) <type 'unicode'> repr(unicode(char))=u'\xf3' '?' 
= char.decode('latin_1') <type 'unicode'> repr(char.decode('latin_1'))=u'\xf3' '?' = char.decode('utf_8') ERROR: 'utf8' codec can't decode byte 0xf3 in position 0: unexpected end of data Here everything still works the same as last time but the console can't display the output correctly. Etc. The function below also displays more information that this and hopefully would help someone figure out where the gap in their understanding is. I know all this information is in other places and more thoroughly dealt with there, but I hope that this would be a good kickoff point for someone trying to get coding with python and/or sqlite. Ideas are great but sometimes source code can save you a day or two of trying to figure out what functions do what. Disclaimers: I'm no encoding expert, I put this together to help my own understanding. I kept building on it when I should have probably started passing functions as arguments to avoid so much redundant code, so if I can I'll make it more concise. Also, utf_8 and latin_1 are by no means the only encoding schemes, they are just the two I was playing around with because I think they handle everything I need. Add your own encoding schemes to the demo function and test your own input. One more thing: there are apparently crazy application developers making life difficult in Windows. #!/usr/bin/env python # -*- coding: utf_8 -*- import os import sys def encodingDemo(str): validStrings = () try: print "str =",str,"{0} repr(str) = {1}".format(type(str), repr(str)) validStrings += ((str,""),) except UnicodeEncodeError as ude: print "Couldn't print the str itself because the console is set to an encoding that doesn't understand some character in the string. See error:\n\t", print ude try: x = unicode(str) print "unicode(str) = ",x validStrings+= ((x, " decoded into unicode by the default system encoding"),) except UnicodeDecodeError as ude: print "ERROR. unicode(str) couldn't decode the string because the system encoding is set to an encoding that doesn't understand some character in the string." print "\tThe system encoding is set to {0}. See error:\n\t".format(sys.getdefaultencoding()), print ude except UnicodeEncodeError as uee: print "ERROR. Couldn't print the unicode(str) because the console is set to an encoding that doesn't understand some character in the string. See error:\n\t", print uee try: x = str.decode('latin_1') print "str.decode('latin_1') =",x validStrings+= ((x, " decoded with latin_1 into unicode"),) try: print "str.decode('latin_1').encode('utf_8') =",str.decode('latin_1').encode('utf_8') validStrings+= ((x, " decoded with latin_1 into unicode and encoded into utf_8"),) except UnicodeDecodeError as ude: print "The string was decoded into unicode using the latin_1 encoding, but couldn't be encoded into utf_8. See error:\n\t", print ude except UnicodeDecodeError as ude: print "Something didn't work, probably because the string wasn't latin_1 encoded. See error:\n\t", print ude except UnicodeEncodeError as uee: print "ERROR. Couldn't print the str.decode('latin_1') because the console is set to an encoding that doesn't understand some character in the string. See error:\n\t", print uee try: x = str.decode('utf_8') print "str.decode('utf_8') =",x validStrings+= ((x, " decoded with utf_8 into unicode"),) try: print "str.decode('utf_8').encode('latin_1') =",str.decode('utf_8').encode('latin_1') except UnicodeDecodeError as ude: print "str.decode('utf_8').encode('latin_1') didn't work. 
The string was decoded into unicode using the utf_8 encoding, but couldn't be encoded into latin_1. See error:\n\t", validStrings+= ((x, " decoded with utf_8 into unicode and encoded into latin_1"),) print ude except UnicodeDecodeError as ude: print "str.decode('utf_8') didn't work, probably because the string wasn't utf_8 encoded. See error:\n\t", print ude except UnicodeEncodeError as uee: print "ERROR. Couldn't print the str.decode('utf_8') because the console is set to an encoding that doesn't understand some character in the string. See error:\n\t",uee print print "Printing information about each character in the original string." for char in str: try: print "\t'" + char + "' = original char {0} repr(char)={1}".format(type(char), repr(char)) except UnicodeDecodeError as ude: print "\t'?' = original char {0} repr(char)={1} ERROR PRINTING: {2}".format(type(char), repr(char), ude) except UnicodeEncodeError as uee: print "\t'?' = original char {0} repr(char)={1} ERROR PRINTING: {2}".format(type(char), repr(char), uee) print uee try: x = unicode(char) print "\t'" + x + "' = unicode(char) {1} repr(unicode(char))={2}".format(x, type(x), repr(x)) except UnicodeDecodeError as ude: print "\t'?' = unicode(char) ERROR: {0}".format(ude) except UnicodeEncodeError as uee: print "\t'?' = unicode(char) {0} repr(char)={1} ERROR PRINTING: {2}".format(type(x), repr(x), uee) try: x = char.decode('latin_1') print "\t'" + x + "' = char.decode('latin_1') {1} repr(char.decode('latin_1'))={2}".format(x, type(x), repr(x)) except UnicodeDecodeError as ude: print "\t'?' = char.decode('latin_1') ERROR: {0}".format(ude) except UnicodeEncodeError as uee: print "\t'?' = char.decode('latin_1') {0} repr(char)={1} ERROR PRINTING: {2}".format(type(x), repr(x), uee) try: x = char.decode('utf_8') print "\t'" + x + "' = char.decode('utf_8') {1} repr(char.decode('utf_8'))={2}".format(x, type(x), repr(x)) except UnicodeDecodeError as ude: print "\t'?' = char.decode('utf_8') ERROR: {0}".format(ude) except UnicodeEncodeError as uee: print "\t'?' = char.decode('utf_8') {0} repr(char)={1} ERROR PRINTING: {2}".format(type(x), repr(x), uee) print x = 'ó' encodingDemo(x) Much thanks for the answers below and especially to @John Machin for answering so thoroughly.

    Read the article

  • MySQL "ERROR 1005 (HY000): Can't create table 'foo.#sql-12c_4' (errno: 150)"

    - by Ankur Banerjee
    Hi, I was working on creating some tables in database foo, but every time I end up with errno 150 regarding the foreign key. Firstly, here's my code for creating tables: CREATE TABLE Clients ( client_id CHAR(10) NOT NULL , client_name CHAR(50) NOT NULL , provisional_license_num CHAR(50) NOT NULL , client_address CHAR(50) NULL , client_city CHAR(50) NULL , client_county CHAR(50) NULL , client_zip CHAR(10) NULL , client_phone INT NULL , client_email CHAR(255) NULL , client_dob DATETIME NULL , test_attempts INT NULL ); CREATE TABLE Applications ( application_id CHAR(10) NOT NULL , office_id INT NOT NULL , client_id CHAR(10) NOT NULL , instructor_id CHAR(10) NOT NULL , car_id CHAR(10) NOT NULL , application_date DATETIME NULL ); CREATE TABLE Instructors ( instructor_id CHAR(10) NOT NULL , office_id INT NOT NULL , instructor_name CHAR(50) NOT NULL , instructor_address CHAR(50) NULL , instructor_city CHAR(50) NULL , instructor_county CHAR(50) NULL , instructor_zip CHAR(10) NULL , instructor_phone INT NULL , instructor_email CHAR(255) NULL , instructor_dob DATETIME NULL , lessons_given INT NULL ); CREATE TABLE Cars ( car_id CHAR(10) NOT NULL , office_id INT NOT NULL , engine_serial_num CHAR(10) NULL , registration_num CHAR(10) NULL , car_make CHAR(50) NULL , car_model CHAR(50) NULL ); CREATE TABLE Offices ( office_id INT NOT NULL , office_address CHAR(50) NULL , office_city CHAR(50) NULL , office_County CHAR(50) NULL , office_zip CHAR(10) NULL , office_phone INT NULL , office_email CHAR(255) NULL ); CREATE TABLE Lessons ( lesson_num INT NOT NULL , client_id CHAR(10) NOT NULL , date DATETIME NOT NULL , time DATETIME NOT NULL , milegage_used DECIMAL(5, 2) NULL , progress CHAR(50) NULL ); CREATE TABLE DrivingTests ( test_num INT NOT NULL , client_id CHAR(10) NOT NULL , test_date DATETIME NOT NULL , seat_num INT NOT NULL , score INT NULL , test_notes CHAR(255) NULL ); ALTER TABLE Clients ADD PRIMARY KEY (client_id); ALTER TABLE Applications ADD PRIMARY KEY (application_id); ALTER TABLE Instructors ADD PRIMARY KEY (instructor_id); ALTER TABLE Offices ADD PRIMARY KEY (office_id); ALTER TABLE Lessons ADD PRIMARY KEY (lesson_num); ALTER TABLE DrivingTests ADD PRIMARY KEY (test_num); ALTER TABLE Applications ADD CONSTRAINT FK_Applications_Offices FOREIGN KEY (office_id) REFERENCES Offices (office_id); ALTER TABLE Applications ADD CONSTRAINT FK_Applications_Clients FOREIGN KEY (client_id) REFERENCES Clients (client_id); ALTER TABLE Applications ADD CONSTRAINT FK_Applications_Instructors FOREIGN KEY (instructor_id) REFERENCES Instructors (instructor_id); ALTER TABLE Applications ADD CONSTRAINT FK_Applications_Cars FOREIGN KEY (car_id) REFERENCES Cars (car_id); ALTER TABLE Lessons ADD CONSTRAINT FK_Lessons_Clients FOREIGN KEY (client_id) REFERENCES Clients (client_id); ALTER TABLE Cars ADD CONSTRAINT FK_Cars_Offices FOREIGN KEY (office_id) REFERENCES Offices (office_id); ALTER TABLE Clients ADD CONSTRAINT FK_DrivingTests_Clients FOREIGN KEY (client_id) REFERENCES Clients (client_id); These are the errors that I get: mysql> ALTER TABLE Applications ADD CONSTRAINT FK_Applications_Cars FOREIGN KEY (car_id) REFERENCES Cars (car_id); ERROR 1005 (HY000): Can't create table 'foo.#sql-12c_4' (errno: 150) I ran SHOW ENGINE INNODB STATUS which gives a more detailed error description: ------------------------ LATEST FOREIGN KEY ERROR ------------------------ 100509 20:59:49 Error in foreign key constraint of table practice9/#sql-12c_4: FOREIGN KEY (car_id) REFERENCES Cars (car_id): Cannot find an index in the 
referenced table where the referenced columns appear as the first columns, or column types in the table and the referenced table do not match for constraint. Note that the internal storage type of ENUM and SET changed in tables created with >= InnoDB-4.1.12, and such columns in old tables cannot be referenced by such columns in new tables. See http://dev.mysql.com/doc/refman/5.1/en/innodb-foreign-key-constraints.html for correct foreign key definition. ------------ I searched around on StackOverflow and elsewhere online - came across a helpful blog post here with pointers on how to resolve this error - but I can't figure out what's going wrong. Any help would be appreciated!

    Read the article

  • how to cast an array of char into a single integer number?

    - by SepiDev
    Hi guys, I'm trying to read the contents of a PNG file. As you may know, all data is written in a 4-byte manner in PNG files, both text and numbers. So if we have the number 35234 it is saved this way: [1000][1001][1010][0010]. But sometimes numbers are shorter, so the first bytes are zero, and when I read the array and cast it from char* to integer I get a wrong number, for example [0000] [0000] [0001] [1011]. Sometimes numbers are misinterpreted as negative numbers and sometimes as zero! Let me give you an intuitive example: char s_num[4] = {120, 80, 40, 1}; int t_num = 0; t_num = int(s_num); => 3215279148 ?????? The result should be 241 but the output is 3215279148? I wish I could explain my problem well! How can I convert such an array into a single integer value?
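
    Casting the array (really its address) just reinterprets a pointer, which is why the huge number appears. A minimal sketch (example bytes hypothetical) of assembling the value explicitly, most significant byte first as PNG stores it:

        #include <cstdio>

        int main()
        {
            unsigned char s_num[4] = {0x00, 0x00, 0x01, 0x1B};      // 0x0000011B = 283
            unsigned long t_num = ((unsigned long)s_num[0] << 24) |
                                  ((unsigned long)s_num[1] << 16) |
                                  ((unsigned long)s_num[2] << 8)  |
                                   (unsigned long)s_num[3];
            std::printf("%lu\n", t_num);                            // prints 283
            return 0;
        }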

    Read the article

  • Why aren't unsigned OpenMP index variables allowed?

    - by Moe
    I have a loop in my C++/OpenMP code that looks like this: #pragma omp parallel for for(unsigned int i=0; i<count; i++) { // do stuff } When I compile it (with Visual Studio 2005) I get the following error: error C3016: 'i' : index variable in OpenMP 'for' statement must have signed integral type I understand that the error occurs because i is unsigned instead of signed, and changing i to be signed removed this error. What I want to know is why is this an error? Why aren't unsigned index variables allowed? Looking at the MSDN page for this error gives me no clues.
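
    Visual C++ 2005 implements OpenMP 2.0, and that version of the spec only allows signed integral loop indices; later versions of the spec relaxed this, but the compiler cannot use them. A minimal sketch of the usual workaround: iterate on a signed counter and convert inside the loop.

        #include <omp.h>

        void work(unsigned int count)
        {
            // assumes count fits in an int
            #pragma omp parallel for
            for (int i = 0; i < (int)count; i++) {
                unsigned int u = (unsigned int)i;   // use u wherever an unsigned value is needed
                // do stuff with u
            }
        }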

    Read the article

  • Unsigned long with negative value

    - by egiakoum1984
    Please see the simple code below: #include <iostream> #include <stdlib.h> using namespace std; int main(void) { unsigned long currentTrafficTypeValueDec; long input; input=63; currentTrafficTypeValueDec = (unsigned long) 1LL << input; cout << currentTrafficTypeValueDec << endl; printf("%u \n", currentTrafficTypeValueDec); printf("%ld \n", currentTrafficTypeValueDec); return 0; } Why does printf() display currentTrafficTypeValueDec (an unsigned long) as a negative value? The output is: 9223372036854775808 0 -9223372036854775808
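
    The shift itself is fine; the stray output comes from the mismatched conversion specifiers (%u and %ld applied to an unsigned long). A minimal sketch showing the matching specifier:

        #include <cstdio>

        int main()
        {
            unsigned long v = 1UL << 63;      // assumes a 64-bit unsigned long, as on the poster's system
            std::printf("%lu\n", v);          // 9223372036854775808
            std::printf("%ld\n", (long)v);    // -9223372036854775808: same bits read as signed
            return 0;
        }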

    Read the article

  • Construct a variadic template of unsigned int recursively

    - by Vincent
    I need a tricky thing in C++11 code. Currently, I have a metafunction of this kind: template<unsigned int N, unsigned int M> static constexpr unsigned int myFunction() This function can generate numbers based on N and M. I would like to write a metafunction with inputs N and M that will recursively construct a variadic template by decrementing M. For example, calling this function with M = 3 will construct a variadic template called List equal to: List... = myFunction<N, 3>, myFunction<N, 2>, myFunction<N, 1>, myFunction<N, 0> How can I do that (if it is possible, of course)?
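
    A minimal C++11 sketch of one way to do it (all names other than myFunction are hypothetical): recurse on M, accumulating the values in a non-type parameter pack, and stop at 0.

        template<unsigned int N, unsigned int M>
        static constexpr unsigned int myFunction() { return N + M; }     // placeholder body

        template<unsigned int...> struct List {};

        template<unsigned int N, unsigned int M, unsigned int... Acc>
        struct MakeList {
            using type = typename MakeList<N, M - 1, Acc..., myFunction<N, M>()>::type;
        };

        template<unsigned int N, unsigned int... Acc>
        struct MakeList<N, 0, Acc...> {                                   // M == 0: stop
            using type = List<Acc..., myFunction<N, 0>()>;
        };

        // MakeList<7, 3>::type is
        //   List<myFunction<7,3>(), myFunction<7,2>(), myFunction<7,1>(), myFunction<7,0>()>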

    Read the article

  • how to copy char * into a string and vice versa

    - by user295030
    If I pass a char * into a function, I want to take that char *, convert it to a std::string, and once I have my result, convert it back from std::string to char * to show the result. I don't know how to do this conversion (I am not talking about const char *, just char *), and I am not sure how to manipulate the value of the pointer I send in. So the steps I need are: take in a char *; convert it into a string; take the result of that string and put it back in the form of a char *; return the result such that the value is available outside the function and does not get destroyed. If possible, can I see how it could be done via a reference vs. a pointer (whose address I pass in by value, but whose pointed-to value I can still modify, so even though the copy of the pointer in the function gets destroyed I still see the changed value outside)? Thanks!
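
    A minimal sketch of the round trip (names hypothetical): the caller keeps ownership of a writable buffer, the function works on a std::string internally, and the result is copied back into that buffer, so nothing points at memory that gets destroyed.

        #include <cstring>
        #include <string>

        void process(char *inOut, std::size_t bufSize)       // pointer version
        {
            std::string s(inOut);                            // char*  -> std::string
            s += "!";                                        // ...whatever processing is needed...
            std::strncpy(inOut, s.c_str(), bufSize - 1);     // std::string -> char*
            inOut[bufSize - 1] = '\0';
        }

        void processRef(std::string &s)                      // reference version: no copy-back needed
        {
            s += "!";
        }

        // usage:
        //   char buf[32] = "hello";
        //   process(buf, sizeof buf);                       // buf now holds "hello!"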

    Read the article

  • C++ HW - defining classes - objects that have objects of other class problem in header file (out of

    - by kitfuntastik
    This is my first time with much of this code. With this instancepool.h file below I get errors saying I can't use vector (line 14) or have instance& as a return type (line 20). It seems it can't use the instance objects despite the fact that I have included them. #ifndef _INSTANCEPOOL_H #define _INSTANCEPOOL_H #include "instance.h" #include <iostream> #include <string> #include <vector> #include <stdlib.h> using namespace std; class InstancePool { private: unsigned instances;//total number of instance objects vector<instance> ipp;//the collection of instance objects, held in a vector public: InstancePool();//Default constructor. Creates an InstancePool object that contains no Instance objects InstancePool(const InstancePool& original);//Copy constructor. After copying, changes to original should not affect the copy that was created. ~InstancePool();//Destructor unsigned getNumberOfInstances() const;//Returns the number of Instance objects the the InstancePool contains. const instance& operator[](unsigned index) const; InstancePool& operator=(const InstancePool& right);//Overloading the assignment operator for InstancePool. friend istream& operator>>(istream& in, InstancePool& ip);//Overloading of the >> operator. friend ostream& operator<<(ostream& out, const InstancePool& ip);//Overloading of the << operator. }; #endif Here is the instance.h : #ifndef _INSTANCE_H #define _INSTANCE_H ///////////////////////////////#include "instancepool.h" #include <iostream> #include <string> #include <stdlib.h> using namespace std; class Instance { private: string filenamee; bool categoryy; unsigned featuress; unsigned* featureIDD; unsigned* frequencyy; string* featuree; public: Instance (unsigned features = 0);//default constructor unsigned getNumberOfFeatures() const; //Returns the number of the keywords that the calling Instance object can store. Instance(const Instance& original);//Copy constructor. After copying, changes to the original should not affect the copy that was created. ~Instance() { delete []featureIDD; delete []frequencyy; delete []featuree;}//Destructor. void setCategory(bool category){categoryy = category;}//Sets the category of the message. Spam messages are represented with true and and legit messages with false.//easy bool getCategory() const;//Returns the category of the message. void setFileName(const string& filename){filenamee = filename;}//Stores the name of the file (i.e. “spam/spamsga1.txt”, like in 1st assignment) in which the message was initially stored.//const string& trick? string getFileName() const;//Returns the name of the file in which the message was initially stored. void setFeature(unsigned i, const string& feature, unsigned featureID,unsigned frequency) {//i for array positions featuree[i] = feature; featureIDD[i] = featureID; frequencyy[i] = frequency; } string getFeature(unsigned i) const;//Returns the keyword which is located in the ith position.//const string unsigned getFeatureID(unsigned i) const;//Returns the code of the keyword which is located in the ith position. unsigned getFrequency(unsigned i) const;//Returns the frequency Instance& operator=(const Instance& right);//Overloading of the assignment operator for Instance. friend ostream& operator<<(ostream& out, const Instance& inst);//Overloading of the << operator for Instance. friend istream& operator>>(istream& in, Instance& inst);//Overloading of the >> operator for Instance. 
}; #endif Also, if it is helpful here is instance.cpp: // Here we implement the functions of the class apart from the inline ones #include "instance.h" #include <iostream> #include <string> #include <stdlib.h> using namespace std; Instance::Instance(unsigned features) { //Constructor that can be used as the default constructor. featuress = features; if (features == 0) return; featuree = new string[featuress]; // Dynamic memory allocation. featureIDD = new unsigned[featuress]; frequencyy = new unsigned[featuress]; return; } unsigned Instance::getNumberOfFeatures() const {//Returns the number of the keywords that the calling Instance object can store. return featuress;} Instance::Instance(const Instance& original) {//Copy constructor. filenamee = original.filenamee; categoryy = original.categoryy; featuress = original.featuress; featuree = new string[featuress]; for(unsigned i = 0; i < featuress; i++) { featuree[i] = original.featuree[i]; } featureIDD = new unsigned[featuress]; for(unsigned i = 0; i < featuress; i++) { featureIDD[i] = original.featureIDD[i]; } frequencyy = new unsigned[featuress]; for(unsigned i = 0; i < featuress; i++) { frequencyy[i] = original.frequencyy[i];} } bool Instance::getCategory() const { //Returns the category of the message. return categoryy;} string Instance::getFileName() const { //Returns the name of the file in which the message was initially stored. return filenamee;} string Instance::getFeature(unsigned i) const { //Returns the keyword which is located in the ith position.//const string return featuree[i];} unsigned Instance::getFeatureID(unsigned i) const { //Returns the code of the keyword which is located in the ith position. return featureIDD[i];} unsigned Instance::getFrequency(unsigned i) const { //Returns the frequency return frequencyy[i];} Instance& Instance::operator=(const Instance& right) { //Overloading of the assignment operator for Instance. if(this == &right) return *this; delete []featureIDD; delete []frequencyy; delete []featuree; filenamee = right.filenamee; categoryy = right.categoryy; featuress = right.featuress; featureIDD = new unsigned[featuress]; frequencyy = new unsigned[featuress]; featuree = new string[featuress]; for(unsigned i = 0; i < featuress; i++) { featureIDD[i] = right.featureIDD[i]; } for(unsigned i = 0; i < featuress; i++) { frequencyy[i] = right.frequencyy[i]; } for(unsigned i = 0; i < featuress; i++) { featuree[i] = right.featuree[i]; } return *this; } ostream& operator<<(ostream& out, const Instance& inst) {//Overloading of the << operator for Instance. out << endl << "<message file=" << '"' << inst.filenamee << '"' << " category="; if (inst.categoryy == 0) out << '"' << "legit" << '"'; else out << '"' << "spam" << '"'; out << " features=" << '"' << inst.featuress << '"' << ">" <<endl; for (int i = 0; i < inst.featuress; i++) { out << "<feature id=" << '"' << inst.featureIDD[i] << '"' << " freq=" << '"' << inst.frequencyy[i] << '"' << "> " << inst.featuree[i] << " </feature>"<< endl; } out << "</message>" << endl; return out; } istream& operator>>(istream& in, Instance& inst) { //Overloading of the >> operator for Instance. 
string word; string numbers = ""; string filenamee2 = ""; bool categoryy2 = 0; unsigned featuress2; string featuree2; unsigned featureIDD2; unsigned frequencyy2; unsigned i; unsigned y; while(in >> word) { if (word == "<message") {//if at beginning of message in >> word;//grab filename word for (y=6; word[y]!='"'; y++) {//pull out filename from between quotes filenamee2 += word[y];} in >> word;//grab category word if (word[10] == 's') categoryy2 = 1; in >> word;//grab features word for (y=10; word[y]!='"'; y++) { numbers += word[y];} featuress2 = atoi(numbers.c_str());//convert string of numbers to integer Instance tempp2(featuress2);//make a temporary Instance object to hold values read in tempp2.setFileName(filenamee2);//set temp object to filename read in tempp2.setCategory(categoryy2); for (i=0; i<featuress2; i++) {//loop reading in feature reports for message in >> word >> word >> word;//skip two words numbers = "";//reset numbers string for (int y=4; word[y]!='"'; y++) {//grab feature ID numbers += word[y];} featureIDD2 = atoi(numbers.c_str()); in >> word;// numbers = ""; for (int y=6; word[y]!='"'; y++) {//grab frequency numbers += word[y];} frequencyy2 = atoi(numbers.c_str()); in >> word;//grab actual feature string featuree2 = word; tempp2.setFeature(i, featuree2, featureIDD2, frequencyy2); }//all done reading in and setting features in >> word;//read in last part of message : </message> inst = tempp2;//set inst (reference) to tempp2 (tempp2 will be destroyed at end of function call) return in; } } } and instancepool.cpp: // Here we implement the functions of the class apart from the inline ones #include "instancepool.h" #include "instance.h" #include <iostream> #include <string> #include <vector> #include <stdlib.h> using namespace std; InstancePool::InstancePool()//Default constructor. Creates an InstancePool object that contains no Instance objects { instances = 0; ipp.clear(); } InstancePool::~InstancePool() { ipp.clear();} InstancePool::InstancePool(const InstancePool& original) {//Copy constructor. instances = original.instances; for (int i = 0; i<instances; i++) { ipp.push_back(original.ipp[i]); } } unsigned InstancePool::getNumberOfInstances() const {//Returns the number of Instance objects the the InstancePool contains. return instances;} const Instance& InstancePool::operator[](unsigned index) const {//Overloading of the [] operator for InstancePool. return ipp[index];} InstancePool& InstancePool::operator=(const InstancePool& right) {//Overloading the assignment operator for InstancePool. if(this == &right) return *this; ipp.clear(); instances = right.instances; for(unsigned i = 0; i < instances; i++) { ipp.push_back(right.ipp[i]); } return *this; } istream& operator>>(istream& in, InstancePool& ip) {//Overloading of the >> operator. ip.ipp.clear(); string word; string numbers; int total;//int to hold total number of messages in collection while(in >> word) { if (word == "<messagecollection"){ in >> word;//reads in total number of all messages for (int y=10; word[y]!='"'; y++){ numbers = ""; numbers += word[y]; } total = atoi(numbers.c_str()); for (int x = 0; x<total; x++) {//do loop for each message in collection in >> ip.ipp[x];//use instance friend function and [] operator to fill in values and create Instance objects and read them intot he vector } } } } ostream& operator<<(ostream& out, const InstancePool& ip) {//Overloading of the << operator. 
out << "<messagecollection messages=" << '"' << '>' << ip.instances << '"'<< endl << endl; for (int z=0; z<ip.instances; z++) { out << ip[z];} out << endl<<"</messagecollection>\n"; } This code is currently not writing to files correctly either at least, I'm sure it has many problems. I hope my posting of so much is not too much, and any help would be very much appreciated. Thanks!

    Read the article

  • C - memset segfault for statically allocated char array

    - by user1327031
    I get a segfault when trying to memset an array of chars that was allocated statically, but not for an array of the same length that was allocated using malloc. variable definitions: //static char inBuff[IN_BUFF_LEN]; //dynamic char * inBuffD; function call: //static, cast used because char** != char (*) [n] serverInit(portNum, (char**) &inBuff, &serv_addr, &sockfd) //dynamic serverInit(portNum, &inBuffD, &serv_addr, &sockfd) use within the function: memset(*inBuffAdr, 0, IN_BUFF_LEN); I suspect that my problem lies in the difference between the function calls, or to be more precise, my incomplete understanding of the "char** != char (*) [n]" situation. But I have been banging at this for too long and can't see the forest for the trees, so any hints and advice would be much appreciated.
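
    The crash comes from the cast: &inBuff has type char (*)[IN_BUFF_LEN], so forcing it to char** and dereferencing makes memset treat the array's first bytes as if they were a pointer. A minimal sketch (names follow the question, the rest is hypothetical) of keeping the two cases apart:

        #include <cstring>

        const std::size_t IN_BUFF_LEN = 256;
        static char inBuff[IN_BUFF_LEN];

        void serverInitStatic(char *buf)              // the array decays to a plain char*
        {
            std::memset(buf, 0, IN_BUFF_LEN);
        }

        void serverInitDynamic(char **bufAdr)         // only valid for a real char* variable
        {
            std::memset(*bufAdr, 0, IN_BUFF_LEN);
        }

        int main()
        {
            serverInitStatic(inBuff);                 // OK
            char *inBuffD = new char[IN_BUFF_LEN];
            serverInitDynamic(&inBuffD);              // OK: &inBuffD really is a char**
            delete[] inBuffD;
            // serverInitDynamic((char **)&inBuff);   // the original cast: *bufAdr is garbage -> segfault
            return 0;
        }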

    Read the article

  • How to initialize an unsigned long long type?

    - by Sujay
    Hello all, I'm trying to initialize an unsigned long long int type, but the compiler is throwing the error "error: integer constant is too large for "long" type". The initialization is shown below: unsigned long long temp = 1298307964911120440; Can anybody please let me know what the problem is and suggest a solution? With regards, Sujay
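
    A minimal sketch of the fix: the unsuffixed constant is typed before it is assigned, and on this compiler it is given type long, which is too small to hold it, so spell the literal as unsigned long long with the ULL suffix.

        int main()
        {
            unsigned long long temp = 1298307964911120440ULL;   // ULL makes the literal itself 64-bit
            (void)temp;
            return 0;
        }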

    Read the article
