Search Results

Search found 3443 results on 138 pages for 'byte'.


  • byte[] to image android

    - by Sephy
Hi everybody. My issue is as follows: I have stored a few pictures in the SQLite database, using the blob format, which seems to work OK. Now I want to get my pictures out of the DB and turn them back into images... To complicate the matter, their format is variable (PNG, JPG, maybe something else, I'm not sure). Is there a way of doing so in Android? Thank you.

    Read the article

  • Byte manipulation in PHP

    - by Michael Angstadt
In PHP, if you have a variable with binary data, how do you get specific bytes from the data? For example, if I have some data that is 30 bytes long, how do I get the first 8 bytes? Right now, I'm treating it like a string, using the substr() function:

        $data = //...
        $first8Bytes = substr($data, 0, 8);

    Is it safe to use substr with binary data? Or are there other functions that I should be using? Thanks.

    Read the article

  • using java.util.Scanner to read a file byte by byte

    - by openidsucks
I'm trying to read a one-line file character by character using java.util.Scanner. However, I'm getting this exception:

        Exception in thread "main" java.util.InputMismatchException: For input string: "contents of my file"
            at java.util.Scanner.nextByte(Scanner.java:1861)
            at java.util.Scanner.nextByte(Scanner.java:1814)
            at p008.main(p008.java:18) <-- line where I do scanner.nextByte()

    Here's my code:

        public static void main(String[] args) throws FileNotFoundException {
            File source = new File("file.txt");
            Scanner scanner = new Scanner(source);
            while (scanner.hasNext()) {
                System.out.println((char)scanner.nextByte());
            }
            scanner.close();
        }

    Does anyone have any ideas as to what I might be doing wrong? Edit: I realized I wrote hasNext() instead of hasNextByte(). However, if I do that, it doesn't print out anything.

    Read the article

  • Mac OS X Assembly Language Esoteria

    - by veryfoolish
I've been playing around with assembly and object files in general on Mac OS X and was wondering if somebody could provide some edification. Specifically, I'm wondering what the extra code GCC generates when compiling the C file in the following example does. I have a toy C program so I can comprehend the assembly output.

        int main() {
            int a = 5;
            int b = 5;
            int c = a + b;
        }

    Running this through gcc -S creates the following assembly:

        .text
        .globl _main
        _main:
        LFB2:
            pushq %rbp
        LCFI0:
            movq %rsp, %rbp
        LCFI1:
            movl $5, -4(%rbp)
            movl $5, -8(%rbp)
            movl -8(%rbp), %eax
            addl -4(%rbp), %eax
            movl %eax, -12(%rbp)
            leave
            ret
        LFE2:
            .section __TEXT,__eh_frame,coalesced,no_toc+strip_static_syms+live_support
        EH_frame1:
            .set L$set$0,LECIE1-LSCIE1
            .long L$set$0
        LSCIE1:
            .long 0x0
            .byte 0x1
            .ascii "zR\0"
            .byte 0x1
            .byte 0x78
            .byte 0x10
            .byte 0x1
            .byte 0x10
            .byte 0xc
            .byte 0x7
            .byte 0x8
            .byte 0x90
            .byte 0x1
            .align 3
        LECIE1:
            .globl _main.eh
        _main.eh:
        LSFDE1:
            .set L$set$1,LEFDE1-LASFDE1
            .long L$set$1
        LASFDE1:
            .long LASFDE1-EH_frame1
            .quad LFB2-.
            .set L$set$2,LFE2-LFB2
            .quad L$set$2
            .byte 0x0
            .byte 0x4
            .set L$set$3,LCFI0-LFB2
            .long L$set$3
            .byte 0xe
            .byte 0x10
            .byte 0x86
            .byte 0x2
            .byte 0x4
            .set L$set$4,LCFI1-LCFI0
            .long L$set$4
            .byte 0xd
            .byte 0x6
            .align 3
        LEFDE1:
            .subsections_via_symbols

    The code after LCFI1 seems to contain the actual logic of the program, but I'm not sure what the misc. other stuff is for... also, is there any scheme these labels are following? I'm sorry this is such a vague question. I'd appreciate anything, including being pointed to a resource where I can find out more about this. Thanks!

    Read the article

  • Using boost::iostreams to parse a binary file byte by byte

    - by Zsol
So I would like to parse a binary file and extract some data from it. The problem I am facing is that I need to convert a stream of chars to a stream of unsigned chars. Reading the boost documentation, it seems that boost::iostreams::code_converter should be the solution for this, so I tried this:

        typedef unsigned char uint8_t;
        typedef boost::iostreams::stream<boost::iostreams::code_converter<
            boost::iostreams::basic_array_source<uint8_t> >,
            std::codecvt<uint8_t, char, std::mbstate_t> > array_stream;

    The idea was to specify a codecvt with InternalType=uint8_t and ExternalType=char. Unfortunately this does not compile. So the question is: how do I convert a stream of chars to a stream of uint8_ts?

    Read the article

  • Using LINQ to search a byte array for all subarrays that start/stop with certain byte

    - by Joel B
I'm dealing with a COM port application, and we have a defined variable-length packet structure that I use to talk to a microcontroller. The packet has delimiters for the start and stop bytes. The trouble is that sometimes the read buffer can contain extraneous characters. It seems like I'll always get the whole packet, just some extra chatter before/after the actual data. So I have a buffer that I append data to whenever new data is received from the COM port. What is the best way to search this buffer for any possible occurrences of my packet? For example, say my packet delimiter is 0xFF and I have an array as such:

        { 0x00, 0xFF, 0x02, 0xDA, 0xFF, 0x55, 0xFF, 0x04 }

    How can I create a function/LINQ statement that returns all subarrays that start and end with the delimiter (almost like a sliding correlator with wildcards)? The sample would return the following 3 arrays:

        { 0xFF, 0x02, 0xDA, 0xFF }, { 0xFF, 0x55, 0xFF }, and { 0xFF, 0x02, 0xDA, 0xFF, 0x55, 0xFF }
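
    A minimal sketch of one way to phrase this in LINQ: collect the index of every delimiter byte, then pair each delimiter with every later one. Method and type names here are illustrative, not from the question.

        using System.Collections.Generic;
        using System.Linq;

        static class PacketScanner
        {
            // Returns every subarray that starts and ends with the delimiter.
            public static IEnumerable<byte[]> Candidates(byte[] buffer, byte delimiter)
            {
                // Positions of all delimiter bytes in the buffer.
                int[] marks = buffer
                    .Select((b, i) => new { b, i })
                    .Where(x => x.b == delimiter)
                    .Select(x => x.i)
                    .ToArray();

                // Pair every earlier delimiter with every later one.
                return from s in marks
                       from e in marks
                       where e > s
                       select buffer.Skip(s).Take(e - s + 1).ToArray();
            }
        }

    For the sample buffer above (delimiters at indices 1, 4, and 6), this yields exactly the three subarrays listed.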

    Read the article

  • Attempting to Convert Byte[] into Image... but are there platform issues involved?

    - by user305535
Greetings. Currently, I'm attempting to develop an application that takes a byte array streamed to us from a Linux C program across a TcpClient (stream) and reassembles it back into an image/JPG. The "sending" application was developed by an off-site developer who claims that the image reassembles without any problems or errors in his test environment (all Linux)... However, we are not so fortunate. I believe we successfully receive all of the data, storing it as a string (which lets us append the stream until it is complete), and then we convert it back into a byte[]. This appears to be working fine... But when we take the byte[] we get from the streaming (and our string assembly) and try to convert it into an image using System.Drawing.Image.FromStream(), we get errors. Anyone have any idea what we're doing wrong? Or does anyone know if this is a cross-platform issue? We're developing our app for Windows XP and C# .NET, but the off-site developer did his work in C and Linux... perhaps there's some difference as to how each operating system converts images into byte arrays? Anyway, here's the code for converting our received byte array (from the TcpClient stream) into an image. This code works when we send an image from a test machine we built that runs on XP, but not from the Linux box...

        System.Text.ASCIIEncoding encoding = new System.Text.ASCIIEncoding();
        byte[] imageBytes = encoding.GetBytes(data);
        MemoryStream ms = new MemoryStream(imageBytes, 0, imageBytes.Length);
        // Convert byte[] to Image
        ms.Write(imageBytes, 0, imageBytes.Length);
        System.Drawing.Image image = System.Drawing.Image.FromStream(ms, false); // <-- DIES here, throws a System.ArgumentException: Parameter is not valid.

    Any advice, suggestions, theories, or help would be greatly appreciated! Thanks in advance! Greg
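
    The string round-trip is a likely culprit: ASCIIEncoding.GetBytes maps every byte above 0x7F to '?' (0x3F), so binary JPEG data cannot survive it. A hedged sketch that buffers the raw stream instead ("client" is an assumed connected TcpClient; names are illustrative):

        NetworkStream stream = client.GetStream();
        MemoryStream ms = new MemoryStream();
        byte[] chunk = new byte[4096];
        int n;
        while ((n = stream.Read(chunk, 0, chunk.Length)) > 0)
            ms.Write(chunk, 0, n);   // append raw bytes; no text encoding involved
        ms.Position = 0;
        System.Drawing.Image image = System.Drawing.Image.FromStream(ms);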

    Read the article

  • BlockingCollection having issues with byte arrays

    - by MJLaukala
I am having an issue where an object with a byte[20] is being passed into a BlockingCollection on one thread, and another thread returns the object with a byte[0] from BlockingCollection.Take(). I think this is a threading issue, but I do not know where or why this is happening, considering that BlockingCollection is a concurrent collection. Sometimes on thread 2, myclass2.mybytes equals byte[0]. Any information on how to fix this is greatly appreciated.

    MessageBuffer.cs:

        public class MessageBuffer : BlockingCollection<Message>
        {
        }

    In the class that has Listener() and ReceivedMessageHandler(object messageProcessor):

        private MessageBuffer RecievedMessageBuffer;

    On Thread1:

        private void Listener() {
            while (this.IsListening) {
                try {
                    Message message = Message.ReadMessage(this.Stream, this);
                    if (message != null) {
                        this.RecievedMessageBuffer.Add(message);
                    }
                } catch (IOException ex) {
                    if (!this.Client.Connected) {
                        this.OnDisconnected();
                    } else {
                        Logger.LogException(ex.ToString());
                        this.OnDisconnected();
                    }
                } catch (Exception ex) {
                    Logger.LogException(ex.ToString());
                    this.OnDisconnected();
                }
            }
        }

    Message.ReadMessage(NetworkStream stream, iTcpConnectClient client):

        public static Message ReadMessage(NetworkStream stream, iTcpConnectClient client) {
            int ClassType = -1;
            Message message = null;
            try {
                ClassType = stream.ReadByte();
                if (ClassType == -1) {
                    return null;
                }
                if (!Message.IDTOCLASS.ContainsKey((byte)ClassType)) {
                    throw new IOException("Class type not found");
                }
                message = Message.GetNewMessage((byte)ClassType);
                message.Client = client;
                message.ReadData(stream);
                if (message.Buffer.Length < message.MessageSize + Message.HeaderSize) {
                    return null;
                }
            } catch (IOException ex) {
                Logger.LogException(ex.ToString());
                throw ex;
            } catch (Exception ex) {
                Logger.LogException(ex.ToString());
                //throw ex;
            }
            return message;
        }

    On Thread2:

        private void ReceivedMessageHandler(object messageProcessor) {
            if (messageProcessor != null) {
                while (this.IsListening) {
                    Message message = this.RecievedMessageBuffer.Take();
                    message.Reconstruct();
                    message.HandleMessage(messageProcessor);
                }
            } else {
                while (this.IsListening) {
                    Message message = this.RecievedMessageBuffer.Take();
                    message.Reconstruct();
                    message.HandleMessage();
                }
            }
        }

    PlayerStateMessage.cs:

        public class PlayerStateMessage : Message {
            public GameObject PlayerState;

            public override int MessageSize {
                get { return 12; }
            }

            public PlayerStateMessage() : base() {
                this.PlayerState = new GameObject();
            }

            public PlayerStateMessage(GameObject playerState) {
                this.PlayerState = playerState;
            }

            public override void Reconstruct() {
                this.PlayerState.Poisiton = this.GetVector2FromBuffer(0);
                this.PlayerState.Rotation = this.GetFloatFromBuffer(8);
                base.Reconstruct();
            }

            public override void Deconstruct() {
                this.CreateBuffer();
                this.AddToBuffer(this.PlayerState.Poisiton, 0);
                this.AddToBuffer(this.PlayerState.Rotation, 8);
                base.Deconstruct();
            }

            public override void HandleMessage(object messageProcessor) {
                ((MessageProcessor)messageProcessor).ProcessPlayerStateMessage(this);
            }
        }

    Message.GetVector2FromBuffer(int bufferlocation) is where the exception is thrown, because this.Buffer is byte[0] when it should be byte[20]:

        public Vector2 GetVector2FromBuffer(int bufferlocation) {
            return new Vector2(
                BitConverter.ToSingle(this.Buffer, Message.HeaderSize + bufferlocation),
                BitConverter.ToSingle(this.Buffer, Message.HeaderSize + bufferlocation + 4));
        }

    Read the article

  • To Obtain EPOCH Time Value from a Packed BIT Structure in C [migrated]

    - by xde0037
This is not a homework assignment! We have a binary data file with the following 12-byte structure, and I need to find the epoch time value (the total time value is packed into 42 bits as described below):

        Field-1 : Byte 1, Byte 2, + 6 bits from Byte 3
        Time-1  : 2 bits from Byte 3 + Byte 4
        Time-2  : Byte 5, Byte 6, Byte 7, Byte 8
        Field-2 : Byte 9, Byte 10, Byte 11, Byte 12

    For Field-1 and Field-2 I do not have an issue, as they can be taken out easily. I need the time value as epoch time (long); it has been packed into Bytes 5, 6, 7, 8 and Bytes 3 and 4 as follows. Bytes 5 to 8 (a 32-bit word) pack time value bits 0 through 31 (byte 5 has bits 0 to 7, byte 6 has 8 to 15, byte 7 has 16 to 23, byte 8 has 24 to 31). The remaining 10 bits of the time value are packed in Bytes 3 and 4: Byte 3 has 2 bits, 32 and 33, and Byte 4 has the remaining bits, 34 to 41. So the total for the time value is 42 bits, packed as above. I need to compute the epoch value coming out of these 42 bits. How do I do it? I have done something like this, but I am not sure it gives me the correct value:

        typedef struct P_HEADER {
            unsigned int tmuNumber : 21;
            unsigned int time1 : 10;  // Bits 6,7 from Byte-3 + 8 bits from Byte-4
            unsigned int time2 : 32;  // 32 bits: Bytes 5,6,7,8
            unsigned int traceKey : 32;
        } __attribute__((__packed__)) P_HEADER;

    Then in the code:

        P_HEADER *header1;
        // get input string in hexa, etc...
        // parse the input with the header as:
        header1 = (P_HEADER *)inputBuf;
        // then print the header1->time1, header1->time2 ...
        long ttime = header1->time1 | header1->time2; // ?? is this the way to get values out?

    Any hint or tip will be appreciated. Environment: gcc 4.1, Linux. Thanks in advance.
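
    Bit-field layout is compiler-dependent, so assembling the 42 bits with explicit shifts is the portable route. A hedged sketch (written in C# to match the other examples here; the byte order inside the 32-bit word and which two bits of Byte 3 carry bits 32-33 are assumptions to verify against the producer):

        static long PackedTime(byte[] b)
        {
            // b[4]..b[7] are Bytes 5..8: time bits 0..31, Byte 5 least significant.
            ulong low = (ulong)b[4]
                      | ((ulong)b[5] << 8)
                      | ((ulong)b[6] << 16)
                      | ((ulong)b[7] << 24);
            ulong mid  = (ulong)(b[2] & 0x03); // assumed: low two bits of Byte 3 are bits 32, 33
            ulong high = b[3];                 // Byte 4: bits 34..41
            return (long)(low | (mid << 32) | (high << 34));
        }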

    Read the article

  • byte + byte = int... why?

    - by Robert C. Cartaino
Looking at this C# code...

        byte x = 1;
        byte y = 2;
        byte z = x + y; // ERROR: Cannot implicitly convert type 'int' to 'byte'

    The result of any math performed on byte (or short) types is implicitly cast back to an integer. The solution is to explicitly cast the result back to a byte, so...

        byte z = (byte)(x + y); // works

    What I am wondering is why? Is it architectural? Philosophical? We have:

        int + int = int
        long + long = long
        float + float = float
        double + double = double

    So why not:

        byte + byte = byte
        short + short = short

    A bit of background: I am performing a long list of calculations on "small numbers" (i.e. < 8) and storing the intermediate results in a large array. Using a byte array (instead of an int array) is faster (because of cache hits). But the extensive byte casts spread through the code make it that much more unreadable.
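
    For background: C# defines no arithmetic operators on byte or short at all; the smallest predefined operator is int + int, so both operands are promoted and the sum is an int. One hedged sketch for keeping the casts out of the main calculation is a tiny helper:

        static byte Add(byte a, byte b)
        {
            // unchecked: wrap modulo 256 instead of throwing in a checked context
            return unchecked((byte)(a + b));
        }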

    Read the article

  • Creating 3DES key from bytes

    - by AO
I create a triple DES key from a byte array ("skBytes"), but when calling getEncoded on the triple DES key ("sk") and comparing it to the byte array, they differ! They are almost the same if you look at the console output, though. How would I create a triple DES key that is exactly the same as "skBytes"?

        byte[] skBytes = {
            (byte) 0x41, (byte) 0x0B, (byte) 0xF0, (byte) 0x9B, (byte) 0xBC, (byte) 0x0E,
            (byte) 0xC9, (byte) 0x4A, (byte) 0xB5, (byte) 0xCE, (byte) 0x0B, (byte) 0xEA,
            (byte) 0x05, (byte) 0xEF, (byte) 0x52, (byte) 0x31, (byte) 0xD7, (byte) 0xEC,
            (byte) 0x2E, (byte) 0x75, (byte) 0xC3, (byte) 0x1D, (byte) 0x3E, (byte) 0x61 };
        DESedeKeySpec keySpec = new DESedeKeySpec(skBytes);
        SecretKeyFactory keyFactory = SecretKeyFactory.getInstance("DESede");
        SecretKey sk = keyFactory.generateSecret(keySpec);
        for (int i = 0; i < skBytes.length; i++) {
            System.out.println("(sk.getEncoded()[i], skBytes[i]) = ("
                + sk.getEncoded()[i] + ", " + skBytes[i] + ")");
        }

    Console output, shown as (sk.getEncoded()[i], skBytes[i]) pairs:

        (64, 65)   (11, 11)   (-15, -16)  (-101, -101) (-68, -68)  (14, 14)
        (-56, -55) (74, 74)   (-75, -75)  (-50, -50)   (11, 11)    (-22, -22)
        (4, 5)     (-17, -17) (82, 82)    (49, 49)     (-42, -41)  (-20, -20)
        (47, 46)   (117, 117) (-62, -61)  (28, 29)     (62, 62)    (97, 97)

    Read the article

  • Are these two functions the same?

    - by Ranhiru
There is a function in the AES algorithm to multiply a byte by 2 in a Galois field. This is the function given on a website:

        private static byte gfmultby02(byte b)
        {
            if (b < 0x80)
                return (byte)(int)(b << 1);
            else
                return (byte)((int)(b << 1) ^ (int)(0x1b));
        }

    This is the function I wrote:

        private static byte MulGF2(byte x)
        {
            if (x < 0x80)
                return (byte)(x << 1);
            else
                return (byte)((x << 1) ^ 0x1b);
        }

    What I need to know is whether, given any byte, this will perform in the same manner. Actually, I am worried about the extra conversion to int and then back to byte. So far I have tested and it looks fine. Does the extra cast to int and then to byte make a difference?
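
    The int casts are no-ops here: C# promotes a byte operand to int before applying << or ^ anyway, so both functions compute the same value. Since the input space is only 256 bytes, a quick exhaustive sketch settles it directly:

        static bool SameForAllInputs()
        {
            for (int i = 0; i <= 255; i++)
            {
                byte b = (byte)i;
                if (gfmultby02(b) != MulGF2(b))
                    return false;   // report the first disagreement
            }
            return true;            // identical on every possible byte
        }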

    Read the article

  • processing: convert int to byte

    - by inspectorG4dget
Hello SO, I'm trying to convert an int into a byte in Processing 1.0.9. This is the snippet of code that I have been working with:

        byte xByte = byte(mouseX);
        byte yByte = byte(mouseY);
        byte setFirst = byte(128);
        byte resetFirst = byte(127);
        xByte = xByte | setFirst;
        yByte = yByte >> 1;
        port.write(xByte);
        port.write(yByte);

    According to the Processing API, this should work, but I keep getting an error at xByte = xByte | setFirst; that says:

        cannot convert from int to byte

    I have tried converting 128 and 127 to their respective hex values (0x80 and 0x7F), but that didn't work either. I have tried everything mentioned in the API as well as some other blogs, but I feel like I'm missing something very trivial. I would appreciate any help. Thank you.

    Read the article

  • Java default Integer value is int

    - by Chris Okyen
My code looks like this:

        import java.util.Scanner;

        public class StudentGrades {
            public static void main(String[] argv) {
                Scanner keyboard = new Scanner(System.in);
                byte q1 = keyboard.nextByte() * 10;
            }
        }

    It gives me an error: "Type mismatch: cannot convert from int to byte." Why the heck would Java store a literal operand that is small enough to fit in a byte into an int type? Do literals get stored in variables/registers before the ALU performs arithmetic operations?

    Read the article

  • C# Compress Triple Byte Array

    - by Mark
Hi. I currently have this code, which compresses byte arrays. But I need it rewritten so it can compress three-dimensional byte arrays [,,]. Thanks!

        public static byte[] Compress(byte[] buffer)
        {
            MemoryStream ms = new MemoryStream();
            GZipStream zip = new GZipStream(ms, CompressionMode.Compress, true);
            zip.Write(buffer, 0, buffer.Length);
            zip.Close();
            ms.Position = 0;
            MemoryStream outStream = new MemoryStream();
            byte[] compressed = new byte[ms.Length];
            ms.Read(compressed, 0, compressed.Length);
            byte[] gzBuffer = new byte[compressed.Length + 4];
            Buffer.BlockCopy(compressed, 0, gzBuffer, 4, compressed.Length);
            Buffer.BlockCopy(BitConverter.GetBytes(buffer.Length), 0, gzBuffer, 0, 4);
            return gzBuffer;
        }

        public static byte[] Decompress(byte[] gzBuffer)
        {
            MemoryStream ms = new MemoryStream();
            int msgLength = BitConverter.ToInt32(gzBuffer, 0);
            ms.Write(gzBuffer, 4, gzBuffer.Length - 4);
            byte[] buffer = new byte[msgLength];
            ms.Position = 0;
            GZipStream zip = new GZipStream(ms, CompressionMode.Decompress);
            zip.Read(buffer, 0, buffer.Length);
            return buffer;
        }
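
    A hedged sketch of one way to reuse the existing pair: Buffer.BlockCopy treats any primitive array, including a byte[,,], as flat bytes, so you can flatten, compress with the method above, and record the three dimensions for the rebuild. Method names are illustrative.

        public static byte[] Compress3D(byte[,,] data)
        {
            byte[] flat = new byte[data.Length];               // Length = product of all 3 dimensions
            Buffer.BlockCopy(data, 0, flat, 0, flat.Length);   // flatten in row-major order
            byte[] packed = Compress(flat);                    // the existing Compress(byte[])
            byte[] result = new byte[12 + packed.Length];      // prepend the 3 dimensions
            Buffer.BlockCopy(BitConverter.GetBytes(data.GetLength(0)), 0, result, 0, 4);
            Buffer.BlockCopy(BitConverter.GetBytes(data.GetLength(1)), 0, result, 4, 4);
            Buffer.BlockCopy(BitConverter.GetBytes(data.GetLength(2)), 0, result, 8, 4);
            Buffer.BlockCopy(packed, 0, result, 12, packed.Length);
            return result;
        }

        public static byte[,,] Decompress3D(byte[] buffer)
        {
            int d0 = BitConverter.ToInt32(buffer, 0);
            int d1 = BitConverter.ToInt32(buffer, 4);
            int d2 = BitConverter.ToInt32(buffer, 8);
            byte[] packed = new byte[buffer.Length - 12];
            Buffer.BlockCopy(buffer, 12, packed, 0, packed.Length);
            byte[] flat = Decompress(packed);                  // the existing Decompress(byte[])
            byte[,,] data = new byte[d0, d1, d2];
            Buffer.BlockCopy(flat, 0, data, 0, flat.Length);
            return data;
        }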

    Read the article

  • glTexImage2D + byte[]

    - by miniMe
How can I upload pixels from a simple byte array to an OpenGL texture? I'm using glTexImage2D, and all I get is a white rectangle instead of a pixelated texture. The 9th parameter (a 32-bit pointer to the pixel data) is IMO the problem. I tried lots of parameter types there (byte, ref byte, byte[], ref byte[], int and IntPtr + Marshal, out byte, out byte[], byte*). glGetError() always returns GL_NO_ERROR. There must be something I'm doing wrong, because it's never some gibberish pixels; it's always white. glGenTextures works correctly: the first id has the value 1, like always in OpenGL. And I draw colored lines without any problem. So something is wrong with my texturing. I'm in control of the DllImport, so I can change the parameter types if necessary.

        GL.glBindTexture(GL.GL_TEXTURE_2D, id);
        int w = 4;
        int h = 4;
        byte[] bytes = new byte[w * h * 4];
        for (int i = 0; i < bytes.Length; i++)
            bytes[i] = (byte)Utils.random(256);
        GL.glTexImage2D(GL.GL_TEXTURE_2D, 0, GL.GL_RGBA, w, h, 0, GL.GL_RGBA, GL.GL_UNSIGNED_BYTE, bytes);

        [DllImport(GL_LIBRARY)]
        public static extern void glTexImage2D(uint what, int level, int internalFormat, int width, int height, int border, int format, int type, byte[] bytes);
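
    The byte[] marshalling is probably fine; a texture that samples as solid white while glGetError stays clean is the classic symptom of an incomplete texture: the default minification filter is mipmapped, and only level 0 was uploaded. A hedged sketch of the usual fix (assumes matching DllImport entries for glTexParameteri and glPixelStorei exist or are added, with constants as in the GL headers):

        GL.glBindTexture(GL.GL_TEXTURE_2D, id);
        // Switch both filters to non-mipmapped sampling so level 0 alone is complete.
        GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR);
        GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, GL.GL_LINEAR);
        // Byte-tight rows; the default unpack alignment of 4 can skew odd row widths.
        GL.glPixelStorei(GL.GL_UNPACK_ALIGNMENT, 1);
        GL.glTexImage2D(GL.GL_TEXTURE_2D, 0, GL.GL_RGBA, w, h, 0, GL.GL_RGBA, GL.GL_UNSIGNED_BYTE, bytes);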

    Read the article

  • Design guideline for saving big byte stream in c# [migrated]

    - by Praveen
I have an application where I am receiving a big byte array very fast, around every 50 milliseconds. The byte array contains some information like the file name, etc. The data (byte array) may come from several sources. Each time I receive the data, I have to find the file name and save the data to that file. I need some guidelines on how I should design this so that it works efficiently. Following is my code...

        public class DataSaver
        {
            private static Dictionary<string, FileStream> _dictFileStream;

            public static void SaveData(byte[] byteArray)
            {
                string fileName = GetFileNameFromArray(byteArray);
                FileStream fs = GetFileStream(fileName);
                fs.Write(byteArray, 0, byteArray.Length);
            }

            private static FileStream GetFileStream(string fileName)
            {
                FileStream fs;
                bool hasStream = _dictFileStream.TryGetValue(fileName, out fs);
                if (!hasStream)
                {
                    fs = new FileStream(fileName, FileMode.Append);
                    _dictFileStream.Add(fileName, fs);
                }
                return fs;
            }

            public static void CloseSaver()
            {
                foreach (var key in _dictFileStream.Keys)
                {
                    _dictFileStream[key].Close();
                }
            }
        }

    How can I improve this code? Maybe I need to create a thread to do the saving.
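
    A hedged sketch of one common shape for this: decouple the receive loop from disk I/O with a bounded producer/consumer queue, so the 50 ms arrivals never block on a slow disk. It reuses the question's GetFileNameFromArray helper; everything else is illustrative, not a definitive implementation.

        using System.Collections.Concurrent;
        using System.IO;
        using System.Threading.Tasks;

        public class DataSaver
        {
            // Bounded so a stalled disk applies back-pressure instead of eating memory.
            private readonly BlockingCollection<byte[]> _queue =
                new BlockingCollection<byte[]>(1000);
            private readonly Task _writerTask;

            public DataSaver()
            {
                _writerTask = Task.Run(() =>
                {
                    foreach (byte[] data in _queue.GetConsumingEnumerable())
                    {
                        string fileName = GetFileNameFromArray(data); // existing helper
                        using (var fs = new FileStream(fileName, FileMode.Append))
                        {
                            fs.Write(data, 0, data.Length);
                        }
                    }
                });
            }

            // Called from the receive loop; returns immediately unless the queue is full.
            public void SaveData(byte[] byteArray) => _queue.Add(byteArray);

            public void CloseSaver()
            {
                _queue.CompleteAdding(); // let the writer drain the queue, then exit
                _writerTask.Wait();
            }
        }

    Opening the FileStream per packet trades a little speed for not having to manage a cache of open handles; the Dictionary cache from the question could be layered back in if profiling shows the opens dominate.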

    Read the article

  • Cannot convert CString to BYTE array

    - by chekalin-v
I need to convert a CString to a BYTE array. I don't know why, but everything that I found on the internet does not work :( For example, I have:

        CString str = _T("string");

    I've tried:

        1) BYTE *pbBuffer = (BYTE*)(LPCTSTR)str;
        2) BYTE *pbBuffer = new BYTE[str.GetLength()+1];
           memcpy(pbBuffer, (VOID*)(LPCTSTR)StrRegID, str.GetLength());
        3) BYTE *pbBuffer = (BYTE*)str.GetString();

    And pbBuffer always contains just the first letter of str:

        DWORD dwBufferLen = strlen((char *)pbBuffer)+1;

    is 2. But if I use a const string:

        BYTE *pbBuffer = (BYTE*)"string";

    pbBuffer contains the whole string. Where is my mistake?

    Read the article

  • Convert arbitrary size of byte[] to BigInteger[] and then safely convert back to exactly the same byte[]

    - by PatlaDJ
I believe conversion exactly to BigInteger[] would be optimal in my case. Has anyone done this, or found this written in Java, and is willing to share? So imagine I have an arbitrary-size byte array:

        byte[] = { 0xff, 0x3e, 0x12, 0x45, 0x1d, 0x11, 0x2a, 0x80, 0x81, 0x45, 0x1d, 0x11, 0x2a, 0x80, 0x81 }

    How do I convert it to an array of BigIntegers and then be able to recover the original byte array back safely? Thanks in advance.

    Read the article

  • error while converting the byte array to image after modifying the byte array

    - by geehta
Hi, this is my code. Here I have formed the byte array of an image. I am trying to add some value to this byte array, say 10, and I'll take care that the value does not exceed 255. Later, if I try to redraw the image via the following code, I get an error at the line marked below. What can be the problem? Without modification, if I try to draw the image, it comes out fine, but if I change some value it does not draw.

        public Image btoi(byte[] bt)
        {
            ms = new MemoryStream(bt, 0, bt.Length);
            img = Image.FromStream(ms, true); // error at this line
            ms.Close();
            return img;
        }

    Read the article

  • What's the cleanest way to do byte-level manipulation?

    - by Jurily
I have the following C struct from the source code of a server, and many similar:

        // preprocessing magic: 1-byte alignment
        typedef struct AUTH_LOGON_CHALLENGE_C
        {
            // 4 byte header
            uint8  cmd;
            uint8  error;
            uint16 size;

            // 30 bytes
            uint8  gamename[4];
            uint8  version1;
            uint8  version2;
            uint8  version3;
            uint16 build;
            uint8  platform[4];
            uint8  os[4];
            uint8  country[4];
            uint32 timezone_bias;
            uint32 ip;
            uint8  I_len;

            // I_len bytes
            uint8  I[1];
        } sAuthLogonChallenge_C;

        // usage (the actual code that will read my packets):
        sAuthLogonChallenge_C *ch = (sAuthLogonChallenge_C*)&buf[0]; // where buf is a raw byte array

    These are TCP packets, and I need to implement something that emits and reads them in C#. What's the cleanest way to do this? My current approach involves

        [StructLayout(LayoutKind.Sequential, Pack = 1)]
        unsafe struct foo { ... }

    and a lot of fixed statements to read and write it, but it feels really clunky, and since the packet itself is not fixed length, I don't feel comfortable using it. Also, it's a lot of work. However, it does describe the data structure nicely, and the protocol may change over time, so this may be ideal for maintenance. What are my options? Would it be easier to just write it in C++ and use some .NET magic to use that?
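
    One hedged alternative to the unsafe overlay is parsing field by field with BinaryReader, where the variable-length tail stops being a special case. A sketch under stated assumptions: the class and its names are illustrative mirrors of the C struct, and BinaryReader always reads little-endian, which matches this x86-origin layout. Emitting would be the mirror image with BinaryWriter.

        using System.IO;

        // Assumed plain C# mirror of the struct; field names follow the C version.
        class AuthLogonChallenge
        {
            public byte Cmd, Error;
            public ushort Size;
            public byte[] GameName;
            public byte Version1, Version2, Version3;
            public ushort Build;
            public byte[] Platform, Os, Country;
            public uint TimezoneBias, Ip;
            public byte ILen;
            public byte[] I;
        }

        static AuthLogonChallenge Parse(byte[] buf)
        {
            using (var r = new BinaryReader(new MemoryStream(buf)))
            {
                var c = new AuthLogonChallenge();
                c.Cmd = r.ReadByte();
                c.Error = r.ReadByte();
                c.Size = r.ReadUInt16();
                c.GameName = r.ReadBytes(4);
                c.Version1 = r.ReadByte();
                c.Version2 = r.ReadByte();
                c.Version3 = r.ReadByte();
                c.Build = r.ReadUInt16();
                c.Platform = r.ReadBytes(4);
                c.Os = r.ReadBytes(4);
                c.Country = r.ReadBytes(4);
                c.TimezoneBias = r.ReadUInt32();
                c.Ip = r.ReadUInt32();
                c.ILen = r.ReadByte();
                c.I = r.ReadBytes(c.ILen); // the variable-length tail falls out naturally
                return c;
            }
        }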

    Read the article

  • Need help manipulating WAV (RIFF) Files at a byte level

    - by Eric
I'm writing an application in C# that will record audio files (*.wav) and automatically tag and name them. Wave files are RIFF files (like AVI), which can contain metadata chunks in addition to the waveform data chunks. So now I'm trying to figure out how to read and write the RIFF metadata to and from recorded wave files. I'm using NAudio for recording the files, and asked on their forums as well as on SO for a way to read and write RIFF tags. While I received a number of good answers, none of the solutions allowed for reading and writing RIFF chunks as easily as I would like. But more importantly, I have very little experience dealing with files at a byte level, and think this could be a good opportunity to learn. So now I want to try writing my own class(es) that can read in a RIFF file and allow metadata to be read from, and written to, the file. I've used streams in C#, but always with the entire stream at once. So now I'm a little lost having to consider a file byte by byte. Specifically, how would I go about removing or inserting bytes to and from the middle of a file? I've tried reading a file through a FileStream into a byte array (byte[]), as shown in the code below.

        System.IO.FileStream waveFileStream = System.IO.File.OpenRead(@"C:\sound.wav");
        byte[] waveBytes = new byte[waveFileStream.Length];
        waveFileStream.Read(waveBytes, 0, waveBytes.Length);

    And I could see through the Visual Studio debugger that the first four bytes are the RIFF header of the file. But arrays are a pain to deal with when performing actions that change their size, like inserting or removing values. So I was thinking I could then turn the byte[] into a List like this:

        List<byte> list = waveBytes.ToList<byte>();

    which would make any manipulation of the file byte by byte a whole lot easier, but I'm worried I might be missing something like a class in the System.IO namespace that would make all this even easier. Am I on the right track, or is there a better way to do this? I should also mention that I'm not hugely concerned with performance, and would prefer not to deal with pointers or unsafe code blocks like this guy. If it helps at all, here is a good article on the RIFF/WAV file format.
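
    Rather than splicing a List<byte>, it can be easier to treat the file as what it is: a sequence of chunks, each a 4-byte ASCII id plus a 4-byte little-endian payload size. A hedged sketch that walks the top-level chunks (names illustrative); inserting a metadata chunk then becomes copying chunks to a new file and emitting your extra chunk where you want it:

        using System;
        using System.IO;

        static class RiffDump
        {
            static void DumpChunks(string path)
            {
                using (var r = new BinaryReader(File.OpenRead(path)))
                {
                    Console.WriteLine(new string(r.ReadChars(4))); // "RIFF"
                    r.ReadUInt32();                                // total payload size
                    Console.WriteLine(new string(r.ReadChars(4))); // "WAVE"
                    while (r.BaseStream.Position < r.BaseStream.Length)
                    {
                        string id = new string(r.ReadChars(4));    // "fmt ", "data", "LIST", ...
                        uint size = r.ReadUInt32();                // payload size of this chunk
                        Console.WriteLine("{0}: {1} bytes", id, size);
                        // skip the payload; chunks are padded to an even byte count
                        r.BaseStream.Seek(size + (size & 1), SeekOrigin.Current);
                    }
                }
            }
        }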

    Read the article
