Search Results

Search found 3768 results on 151 pages for 'lite byte'.

Page 112 of 151

  • C++ char* returned via SWIG breaks in Python 3.0

    - by gpliu3
    Our C++ lib works fine with Python 2.4 using SWIG, returning a C++ char* back to a Python str. But this solution hits a problem in Python 3.0; the error is: Exception=(, UnicodeDecodeError('utf8', b"\xb6\x9d\xa.....", 0, 1, 'unexpected code byte') Our definition looks like this (working fine in Python 2.4): void cGetPubModulus( void* pSslRsa, char* cMod, int* nLen ); %include "cstring.i" %cstring_output_withsize( char* cMod, int* nLen ); I suspect SWIG is doing a bytes-to-str conversion automatically. In Python 2.4 it could be implicit, but in Python 3.0 it's no longer allowed. Anyone got a good idea? Thanks.

  • Python: Unpack arbitrary length bits for database storage

    - by sberry2A
    I have a binary data format consisting of 18,000+ packed int64s, ints, shorts, bytes and chars. The data is packed to minimize its size, so fields don't always fall on byte-sized boundaries. For example, a number whose min and max values are 31 and 32 respectively might be stored in a single bit, where the actual value is bitvalue + min, so 0 is 31 and 1 is 32. I am looking for the most efficient way to unpack all of these for subsequent processing and database storage. Right now I am able to read any value by using either struct.unpack or BitBuffer. I use struct.unpack for any field that starts on a bit where (bit-offset % 8 == 0 and data-length % 8 == 0), and I use BitBuffer for anything else. I know the offset and size of every packed piece of data, so what is going to be the fastest way to completely unpack them? Many thanks.
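
    The extraction step itself is language-agnostic: pull out the bits of the field, then add the field's minimum back. Since most of this page is .NET-flavoured, here is a minimal sketch in C# rather than Python (the helper name, the MSB-first bit order, and the min-bias example are assumptions mirroring the description above, not the asker's actual format):

        // Read bitCount bits starting at absolute bitOffset (MSB-first within each byte).
        static ulong ReadBits(byte[] data, int bitOffset, int bitCount)
        {
            ulong value = 0;
            for (int i = 0; i < bitCount; i++)
            {
                int bit = bitOffset + i;
                int b = (data[bit / 8] >> (7 - bit % 8)) & 1;   // extract one bit
                value = (value << 1) | (ulong)b;                // accumulate MSB-first
            }
            return value;
        }

        // Example: a field stored in 1 bit with min = 31, so raw 0 -> 31, raw 1 -> 32:
        // long actual = (long)ReadBits(packet, offset, 1) + 31;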

  • Why do C++ streams use char instead of unsigned char?

    - by Johannes Schaub - litb
    I've always wondered why the C++ Standard library has instantiated basic_[io]stream and all its variants using the char type instead of the unsigned char type. Using char means (depending on whether it is signed or not) you can have overflow and underflow for operations like get(), which leads to implementation-defined values for the variables involved. Another example is when you want to output a byte, unformatted, to an ostream using its put function. Any ideas? Note: I'm still not really convinced. So if you know the definitive answer, please do post it.

  • How to get the Host OS (operating system) parameter of a ZIP archive in C#

    - by bao
    Look. I have ZIP archives prepared on different OSes: Mac, Linux, and Windows. On Windows, file names are encoded in DOS CP866; on Mac and Linux, in UTF-8. I need to know (in code) on which OS the zip file was prepared, so I can decode the file names correctly. There is a Host OS parameter in the "central directory structure" of a zip file (see http://www.fileformat.info/format/zip/corion.htm): offset 0005h, 1 byte. How do I read that Host OS parameter in C#?
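
    For illustration, a minimal C# sketch of reading that byte directly: it scans for the central directory file header signature (0x02014b50, stored on disk as the bytes 50 4B 01 02) and returns the high byte of the two-byte "version made by" field that follows it; per the ZIP application note, 0 means MS-DOS/FAT and 3 means Unix. A production version should locate the central directory via the end-of-central-directory record rather than scanning, since the signature bytes could in principle also occur inside compressed data:

        using System.IO;

        static byte GetZipHostOs(string path)
        {
            byte[] zip = File.ReadAllBytes(path);
            for (int i = 0; i < zip.Length - 5; i++)
            {
                // central directory file header: 50 4B 01 02
                if (zip[i] == 0x50 && zip[i + 1] == 0x4B &&
                    zip[i + 2] == 0x01 && zip[i + 3] == 0x02)
                    return zip[i + 5];   // high byte of "version made by" = host OS
            }
            throw new InvalidDataException("No central directory header found.");
        }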

  • Breaking the SQL Compact 8K Limit?

    - by David Veeneman
    I am creating a desktop application that stores rich text documents in a SQL Compact database. Documents are converted to a byte array and stored in a Binary column, and I am running into SQL Compact's 8K limit on Binary field length. Is there a simple way to get around the 8K limit? I can come up with lots of complicated ways to do it, such as parsing into 8K chunks for storage and reassembling on fetch. But before I get into something that complex, I would like to make sure I can't solve the problem more simply, such as by changing the data type. If there is no simple way of getting around the 8K limit, is there a best practice for storing documents greater than 8K? Thanks for your help.
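
    If memory serves, the 8K cap applies to SQL Compact's binary/varbinary types, while the image type is intended for large values, so changing the column type may be the simple fix. A hedged sketch (the table and column names are invented):

        using System.Data;
        using System.Data.SqlServerCe;

        // Hypothetical schema: CREATE TABLE Documents (Id INT, Content IMAGE)
        using (var cmd = new SqlCeCommand(
                   "INSERT INTO Documents (Id, Content) VALUES (@id, @content)", conn))
        {
            cmd.Parameters.Add("@id", SqlDbType.Int).Value = 1;
            cmd.Parameters.Add("@content", SqlDbType.Image).Value = documentBytes;
            cmd.ExecuteNonQuery();
        }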

  • Resolving Assemblies, the fuzzy way

    - by David Rutten
    Here's the setup: a pure .NET class library is loaded by an unmanaged desktop application. The class library acts as a plugin. This plugin loads little baby plugins of its own (all .NET class libraries), and it does so by reading the dll into memory as a byte stream, then Assembly asm = Assembly.Load(COFF_Image); The problem arises when those little baby plugins have references to other dlls. Since they are loaded from memory rather than directly from disk, the framework often cannot find these referenced assemblies and is thus incapable of loading them. I can add an AssemblyResolve handler to my project, and I can see the requests for these referenced assemblies come through. I have a reasonably good idea about where to find these referenced assemblies on disk, but how can I make sure that the Assembly I load is the correct one? In short, how do I reliably go from the System.ResolveEventArgs.Name field to a dll file path, presuming I have a list of all the folders where this dll could be hiding?
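
    A sketch of that handler, assuming searchFolders is the plugin's own list of candidate directories: parse the requested identity out of ResolveEventArgs.Name with AssemblyName, then check each candidate file's identity from disk before loading it. Note that AssemblyName.ReferenceMatchesDefinition is looser than comparing FullName, so swap in a strict comparison if you need an exact version match:

        using System;
        using System.IO;
        using System.Reflection;

        AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
        {
            var requested = new AssemblyName(args.Name);   // name, version, culture, token
            foreach (string folder in searchFolders)
            {
                string candidate = Path.Combine(folder, requested.Name + ".dll");
                if (!File.Exists(candidate)) continue;

                // Reads the identity from disk without loading the assembly.
                AssemblyName found = AssemblyName.GetAssemblyName(candidate);
                if (AssemblyName.ReferenceMatchesDefinition(requested, found))
                    return Assembly.Load(File.ReadAllBytes(candidate));
            }
            return null;   // let the resolution fail normally
        };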

  • How can I avoid mutable variables in Scala when using ZipInputStreams and ZipOutputStreams?

    - by pr1001
    I'm trying to read a zip file, check that it has some required files, and then write all valid files out to another zip file. The basic introduction to java.util.zip has a lot of Java-isms and I'd love to make my code more Scala-native. Specifically, I'd like to avoid the use of vars. Here's what I have: val fos = new FileOutputStream("new.zip"); val zipOut = new ZipOutputStream(new BufferedOutputStream(fos)); while (zipIn.available == 1) { val entry = zipIn.getNextEntry if (entryIsValid(entry)) { val fos = new FileOutputStream("subdir/" + entry.getName()); val dest = new BufferedOutputStream(fos); // read chunks into the data array val data = new Array[Byte](1024) var count = zipIn.read(data) while (count != -1) { dest.write(data, 0, count) count = zipIn.read(data) } dest.flush dest.close } }

  • Need ILMerge hint

    - by lakhlaniprashant.blogspot.com
    Hi all, I'm trying to merge the VintaSoft Barcode SDK with my data access DLL, and it's not working after ILMerge. Any ideas are welcome. Here is the error:

        [IndexOutOfRangeException: Index was outside the bounds of the array.]
           2.+.©(Byte[] param0) in :0
           2.+..cctor() in :0
        [TypeInitializationException: The type initializer for '2.+' threw an exception.]
           2.+.¥S() in :0
           Vintasoft.Barcode.WriterSettings..cctor() in :0
        [TypeInitializationException: The type initializer for 'Vintasoft.Barcode.WriterSettings' threw an exception.]
           Vintasoft.Barcode.WriterSettings..ctor() in :0
           Vintasoft.Barcode.BarcodeWriter..ctor() in :0
           _Default.buttonGenerateBarcode_Click(Object sender, EventArgs e) in E:\ILMergeSample\WebBarcodeWriterDemo\QRBarcode.aspx.vb:27
           System.EventHandler.Invoke(Object sender, EventArgs e) +0
           System.Web.UI.WebControls.Button.OnClick(EventArgs e) +111
           System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument) +110
           System.Web.UI.WebControls.Button.System.Web.UI.IPostBackEventHandler.RaisePostBackEvent(String eventArgument) +10
           System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument) +13
           System.Web.UI.Page.RaisePostBackEvent(NameValueCollection postData) +36
           System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +1565

    Thanks in advance.

  • CryptGenRandom to generate an ASP.NET session id

    - by DoDo
    Hi! Does anyone have a working example of the CryptGenRandom function to generate a session id (I need to use it in my IIS module)? HCRYPTPROV hCryptProv; BYTE pbData[16]; if(CryptAcquireContext( &hCryptProv, NULL, NULL, PROV_RSA_FULL, CRYPT_VERIFYCONTEXT)) { if(CryptGenRandom(hCryptProv, sizeof(pbData), pbData)) { /* the random bytes are raw binary, not a NUL-terminated string, so print them hex-encoded rather than via std::string((const char*)pbData) */ for(int i = 0; i < sizeof(pbData); ++i) printf("%02X", pbData[i]); } else { MyHandleError("Error during CryptGenRandom."); } } else { MyHandleError("Error during CryptAcquireContext!\n"); } I tried this code, but it's not working quite well (I got it from MSDN), and this example doesn't work for me either ( http://www.codeproject.com/KB/security/plaintextsessionkey.aspx ). So if anyone knows how to generate a session id using this function, please let me know. Thanks anyway!
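
    If a managed helper happens to be an option, the same idea in .NET is considerably shorter; a sketch for comparison (not a native IIS-module answer):

        using System;
        using System.Security.Cryptography;

        byte[] data = new byte[16];
        using (var rng = new RNGCryptoServiceProvider())
            rng.GetBytes(data);                                       // cryptographically strong bytes
        string sessionId = BitConverter.ToString(data).Replace("-", "");  // hex-encoded id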

  • Arrays/Lists and computing hash values (VB, C#)

    - by Jeffrey Kern
    I feel bad asking this question, but I am currently not able to program and test this, as I'm writing this on my cell phone and not on my dev machine :P (Easy rep points if someone answers! XD) Anyway, I've had experience with using hash values from String objects. E.g., if I have StringA and StringB both equal to "foo", they'll both compute the same hash value, because they're set to equal values. Now what if I have a List<T>, with T being a native data type? If I tried to compute the hash values of ListA and ListB, assuming that they'd both be the same size and contain the same information, wouldn't they have equal hash values as well? Assume a sample dataset of byte with a length of 5: {5,2,0,1,3}.
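
    For what it's worth, the answer in .NET is no: List<T> inherits Object.GetHashCode, which is reference-based, so equal contents do not give equal hash codes. Arrays, however, implement IStructuralEquatable and can be hashed by content. A small demonstration:

        using System;
        using System.Collections;
        using System.Collections.Generic;
        using System.Linq;

        class HashDemo
        {
            static void Main()
            {
                // Equal strings hash equally...
                Console.WriteLine("foo".GetHashCode() == "foo".GetHashCode());  // True

                // ...but lists with equal contents do not.
                var a = new List<byte> { 5, 2, 0, 1, 3 };
                var b = new List<byte> { 5, 2, 0, 1, 3 };
                Console.WriteLine(a.GetHashCode() == b.GetHashCode());          // False

                // Arrays can be hashed structurally, by content.
                var cmp = StructuralComparisons.StructuralEqualityComparer;
                Console.WriteLine(cmp.GetHashCode(a.ToArray()) ==
                                  cmp.GetHashCode(b.ToArray()));                // True
            }
        }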

  • How to store passwords in a database?

    - by rgksugan
    I use JSP and servlets in my web application. I need to store passwords in the database. I found that hashing would be the best way to do that. I used this code to do it: java.security.MessageDigest d = null; d = java.security.MessageDigest.getInstance("SHA-1"); d.reset(); d.update(pass.getBytes("UTF-8")); byte b[] = d.digest(); String tmp = (new BASE64Encoder()).encode(b); When I tried to print the value of tmp, I got some other value; I guess it's the hash value of the password. But when I persist this data to the database, the original password gets saved there rather than the value in tmp. What is the problem?

  • getResourceAsStream returns HttpInputStream not of the entire file

    - by khue
    Hi, I have a web application with an applet which copies a file packed with the applet to the client machine. When I deploy it to the web server and use: InputStream in = getClass().getResourceAsStream("filename"); the in.available() always returns a size of 8192 bytes for every file I tried, which means the file is corrupted when it is copied to the client computer. The InputStream is of type HttpInputStream (sun.net.protocol.http.HttpUrlConnection$HttpInputStream). But when I test the applet in the applet viewer, the files are copied fine, and the InputStream returned is of type BufferedInputStream, which reports the file's size in bytes. I guess that when getResourceAsStream reads from the file system a BufferedInputStream is used, and over HTTP an HttpInputStream is used. How can I copy the file completely? Is there a size limit for HttpInputStream? Thanks a lot.
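
    A note for context: available() only reports how many bytes can be read without blocking (here, one 8192-byte HTTP buffer), not the total length of the resource, so the copy must drain the stream in a loop. The pattern is the same in any language; a minimal C# sketch of it:

        using System.IO;

        // Copy an arbitrary stream to a file without trusting any length hint.
        static void CopyToFile(Stream input, string destination)
        {
            using (var output = File.Create(destination))
            {
                byte[] buffer = new byte[8192];
                int count;
                while ((count = input.Read(buffer, 0, buffer.Length)) > 0)
                    output.Write(buffer, 0, count);
            }
        }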

  • Import module stored in a cStringIO data structure vs. physical disk file

    - by Malcolm
    Is there a way to import a Python module stored in a cStringIO data structure vs. physical disk file? It looks like "imp.load_compiled(name, pathname[, file])" is what I need, but the description of this method (and similar methods) has the following disclaimer: Quote: "The file argument is the byte-compiled code file, open for reading in binary mode, from the beginning. It must currently be a real file object, not a user-defined class emulating a file." [1] I tried using a cStringIO object vs. a real file object, but the help documentation is correct - only a real file object can be used. Any ideas on why these modules would impose such a restriction or is this just an historical artifact? Are there any techniques I can use to avoid this physical file requirement? Thanks, Malcolm [1] http://docs.python.org/library/imp.html#imp.load_module

  • Encoding with URL and API

    - by user2950824
    So I have this web app set up and running, and it works fine for any username that you request, but when I try http://mrcastelo.pythonanywhere.com/lol/euw/Nazaré, it simply doesn't work - the error that I get on the server is the following: iddata = getJSON(urllolbase+region+urlid+username) #SummonerID UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 5: ordinal not in range(128) It is annoying me greatly; I've tried suggestions from some other threads, but none of them fixed it. The API that I am using (www.legendaryapi.com) does accept this kind of request. Any idea on how to fix this?

  • What are the ways to draw data structures in LaTeX?

    - by alicephacker
    I tried TikZ/PGF a bit but have not had much luck creating a nice diagram to visualize bitfields or byte fields of packed data structures (i.e. in memory). Essentially I want a set of rectangles representing ranges of bits, with labels inside, and offsets along the top. There should be multiple rows, one for each word of the data structure. This is similar to the diagrams in most processor manuals labelling opcode encodings, etc. Has anyone else tried to do this using LaTeX, or is there a package for this?
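
    One package worth checking (my suggestion, not something from the question) is bytefield on CTAN, which typesets exactly this kind of packed-structure diagram: bit offsets across the top, labelled boxes for bit ranges, one row per word. A minimal sketch, in LaTeX since that is what the question asks for:

        \documentclass{article}
        \usepackage{bytefield}
        \begin{document}
        \begin{bytefield}{16}
          \bitheader{0-15} \\                          % offsets along the top
          \bitbox{4}{opcode} & \bitbox{12}{address} \\ % labelled bit ranges
          \wordbox{1}{immediate value}                 % a full 16-bit row
        \end{bytefield}
        \end{document}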

  • Using C#, how to convert ISO 8859-1 encoded text files that contain Latin-1 accented characters to UTF-8

    - by Tim
    I am being sent text files saved in ISO 8859-1 format that contain accented characters from the Latin-1 range (as well as normal ASCII a-z, etc.). How do I convert these files to UTF-8 using C# so that the single-byte accented characters in ISO 8859-1 become valid UTF-8 characters? I have tried to use a StreamReader with ASCIIEncoding, and then converting the ASCII string to UTF-8 by instantiating an ASCII encoding and a UTF-8 encoding and then using Encoding.Convert(ascii, utf8, ascii.GetBytes(asciiString)), but the accented characters are being rendered as question marks. What step am I missing? Thanks.
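
    The question marks are the giveaway: ASCIIEncoding is 7-bit, so every byte above 0x7F is decoded to '?' before Encoding.Convert ever runs, destroying the accents at the reading step. Decoding with the Latin-1 codec instead is lossless; a minimal sketch (the file paths are placeholders):

        using System.IO;
        using System.Text;

        // Decode the input as ISO 8859-1 so bytes 0x80-0xFF map to the right
        // accented characters, then re-encode the text as UTF-8 on the way out.
        Encoding latin1 = Encoding.GetEncoding("iso-8859-1");
        string text = File.ReadAllText("input.txt", latin1);
        File.WriteAllText("output.txt", text, Encoding.UTF8);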

  • Making your own "int" or "string" class

    - by amerninja13
    I disassembled the .NET 'System' DLL and looked at the source code for the variable classes (string, int, byte, etc.) to see if I could figure out how to make a class that could take on a value. I noticed that the Int32 class implements the following: IComparable, IFormattable, IConvertible, IComparable<int>, IEquatable<int>. The String and Int32 classes are not inheritable, and I can't figure out what in these implemented interfaces allows the classes to hold a value. What I would want is something like this: public class MyVariable : //inherits here { //Code in here that allows it to get/set the value } public static void Main(string[] args) { MyVariable a = "This is my own custom variable!"; MyVariable b = 2976; if (a == "Hello") { } if (b == 10) { } Console.WriteLine(a.ToString()); Console.WriteLine(b.ToString()); }
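
    For what it's worth, none of those interfaces hold the value; int and string get their storage from the runtime itself, and the value in a wrapper class simply lives in an ordinary field. What makes assignments like MyVariable a = 2976; compile is a user-defined implicit conversion operator, plus an == overload for the comparisons. A minimal sketch:

        public class MyVariable
        {
            private readonly object value;
            private MyVariable(object value) { this.value = value; }

            // Implicit conversions let string and int literals be assigned directly.
            public static implicit operator MyVariable(string s) { return new MyVariable(s); }
            public static implicit operator MyVariable(int i) { return new MyVariable(i); }

            // Equality against raw values makes `a == "Hello"` and `b == 10` work.
            public static bool operator ==(MyVariable a, object b)
            {
                object other = b is MyVariable ? ((MyVariable)b).value : b;
                return !ReferenceEquals(a, null) && Equals(a.value, other);
            }
            public static bool operator !=(MyVariable a, object b) { return !(a == b); }

            public override bool Equals(object obj) { return this == obj; }
            public override int GetHashCode() { return value == null ? 0 : value.GetHashCode(); }
            public override string ToString() { return value == null ? "" : value.ToString(); }
        }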

  • Portable way to determine the platform's line separator

    - by Adrian McCarthy
    Different platforms use different line separator schemes (LF, CR-LF, CR, NEL, Unicode LINE SEPARATOR, etc.). C++ (and C) make a lot of this transparent to most programs, by converting '\n' to and from the target platform's native new line encoding. But if your program needs to determine the actual byte sequence used, how could you do it portably? The best method I've come up with is: Write a temporary file in text mode with just '\n' in it, letting the run-time do the translation. Read back the temporary file in binary mode to see the actual bytes. That feels kludgy. Is there a way to do it without temporary files? I tried stringstreams instead, but the run-time doesn't actually translate '\n' in that context (which makes sense). Does the run-time expose this information in some other way?
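
    As an aside, some managed runtimes do expose the separator directly, which sidesteps the probe entirely when you're not confined to C++. In .NET, for instance:

        using System;

        // The platform separator is surfaced as a string constant.
        foreach (char c in Environment.NewLine)
            Console.Write("0x{0:X2} ", (int)c);   // 0x0D 0x0A on Windows, 0x0A on Unix
        Console.WriteLine();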

  • Maximum number of bytes that can be sent on a TCP connection

    - by iamrohitbanga
    I initially assumed that since TCP has a 32-bit sequence number field, and each byte sent on a TCP connection is labeled with a unique number, the maximum number of bytes that can be sent on a TCP connection is about 2^32-1 or 2^32-2 (which?). But now I feel that since TCP is a sliding window protocol, the wraparound of sequence numbers during the connection should not have an effect on the maximum number of bytes that can be sent over a TCP connection, as long as, when wraparound occurs, the old packet bearing the same sequence number is no longer in the network (i.e., more than 2*MSL has passed since it was sent). What is the correct answer?
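
    The second intuition is the standard one, and the arithmetic behind it is short. Taking the 2*MSL figure from the question, the constraint is not on the total byte count (which is unbounded) but on the sending rate, so that a sequence number cannot be reused while an old segment carrying it may still be alive:

        % sequence space must not wrap within one segment lifetime
        \[
          R_{\max} \approx \frac{2^{32}\,\text{bytes}}{2 \cdot \mathrm{MSL}}
                   = \frac{2^{32}}{240\ \mathrm{s}}
                   \approx 17.9\ \mathrm{MB/s}
          \qquad (\mathrm{MSL} = 120\ \mathrm{s})
        \]

    This is also why RFC 1323 adds the timestamp option (PAWS) for links fast enough to wrap the sequence space within a segment lifetime.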

  • Storing images in a Microsoft SQL Server 2005 database

    - by Rekreativc
    Hello! I have a question about storing an image in a database. I know this topic has been discussed before; however, I feel that in my case this is actually a good idea - the images will be small (none should be as large as 1 MB) and there shouldn't be too many, and I like the idea of not worrying about IO permissions, etc. Anyway, I have a problem when storing the image (byte[]) into a column of the image type. Here is my code: OleDbCommand comm = new OleDbCommand(strSql, Program.GetConnection()); comm.Parameters.Add("?", SqlDbType.Image).Value = bytearr; comm.ExecuteNonQuery(); Everything compiles fine; however, when I run it, the code only saves the value 63 (0x3F) into the field - no matter which image I am trying to save. What could be the problem? Thank you for your help.
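
    A hint worth noting: 63 is the ASCII code for '?', which suggests the literal placeholder character is being stored rather than the parameter value. SqlDbType belongs to the SqlClient provider; passing it to an OleDbCommand goes through the loosely typed Add(string, object) overload instead of declaring a parameter type. A hedged fix, declaring the OLE DB type explicitly (the SQL text here is invented for the sketch):

        using System.Data.OleDb;

        // LongVarBinary is the OLE DB type that maps to SQL Server's image.
        using (var comm = new OleDbCommand(
                   "INSERT INTO Pictures (Img) VALUES (?)", Program.GetConnection()))
        {
            comm.Parameters.Add("@img", OleDbType.LongVarBinary, bytearr.Length).Value = bytearr;
            comm.ExecuteNonQuery();
        }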

  • Object to Network serialization - with an existing protocol

    - by cpf
    I'm writing a client for a server program written in C++. As is not unusual, the entire networking protocol is in a format where packets can easily be memcpy'd into/out of a C++ structure (a 1-byte packet code, then different arrangements per packet type). I could do the same thing in C#, but is there an easier way, especially considering that a lot of the data is fixed-length char arrays that I want to treat as strings? Or should I just suck it up and convert types as needed? I've looked at using the ISerializable interface, but it doesn't look as low-level as required.
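
    One workable middle ground is BinaryReader over the connection's stream: read the packet-code byte, then pull each fixed-length char field as raw bytes and trim the padding. Note that BinaryReader reads multi-byte integers little-endian, which happens to match structs memcpy'd on x86. A sketch against a made-up layout (one code byte, a char[32] name, then an int):

        using System.IO;
        using System.Text;

        static void ReadPacket(BinaryReader reader)
        {
            byte packetCode = reader.ReadByte();        // 1-byte packet type
            byte[] nameField = reader.ReadBytes(32);    // fixed-length char[32]
            string name = Encoding.ASCII.GetString(nameField).TrimEnd('\0');
            int payloadLength = reader.ReadInt32();     // little-endian
            // ... dispatch on packetCode ...
        }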

  • What platforms have something other than 8-bit char?

    - by Craig McQueen
    Every now and then, someone on SO points out that char (aka 'byte') isn't necessarily 8 bits. It seems that 8-bit char is almost universal. I would have thought that for mainstream platforms, it is necessary to have an 8-bit char to ensure its viability in the marketplace. Both now and historically, what platforms use a char that is not 8 bits, and why would they differ from the "normal" 8 bits? When writing code, and thinking about cross-platform support (e.g. for general-use libraries), what sort of consideration is it worth giving to platforms with non-8-bit char? In the past I've come across some Analog Devices DSPs for which char is 16 bits. DSPs are a bit of a niche architecture I suppose. (Then again, at the time hand-coded assembler easily beat what the available C compilers could do, so I didn't really get much experience with C on that platform.)

  • KeyListener problem

    - by rgksugan
    In my application I am using a JPanel to which I want to add a KeyListener. I did, but it does not work. Is it because I am using a SwingWorker to update the contents of the panel every second? Here is my code to update the panel: RenderedImage image = ImageIO.read(new ByteArrayInputStream((byte[]) get())); Graphics graphics = remote.rdpanel.getGraphics(); if (graphics != null) { Image readyImage = new ImageIcon(UtilityFunctions.convertRenderedImage(image)).getImage(); graphics.drawImage(readyImage, 0, 0, remote.rdpanel.getWidth(), remote.rdpanel.getHeight(), null); }

  • Map derived class as an independent one with FNH's Automap

    - by Anton Gogolev
    Hi! Basically, I have an ImageMetadata class and an Image class, which derives from ImageMetadata. Image adds one property: byte[] Content, which contains the actual binary data. What I want to do is map these two classes onto one table, but I absolutely do not want NHibernate's inheritance support to kick in. I want to tailor the FNH Automap to produce something like: <class name="ImageMetadata" ...> <property name="Name" ... /> < ... /> </class> <class name="Image" ...> <property name="Name" ... /> <property name="Content" ... /> < ... /> </class> Is this at all possible?

  • In Java, which is better: three arrays of booleans or one array of bytes?

    - by joe_shmoe
    I know the question sounds silly, but consider this: I have an array of items and a labelling algorithm. At any point an item is in one of three states. The current version holds these states in a byte array, where 0, 1 and 2 represent the three states. Alternatively, I could have three arrays of booleans, one for each state. Which is better (consumes less memory) depends on how the JVM (Sun's version) stores the arrays: is a boolean represented by one bit? (P.S. Don't start with all that "this is not the way OO/Java works" - I know, but here performance comes first. Plus, the algorithm is simple and perfectly readable even in this form.) Thanks a lot.
