Search Results

Search found 3768 results on 151 pages for 'lite byte'.

Page 8/151 | < Previous Page | 4 5 6 7 8 9 10 11 12 13 14 15  | Next Page >

  • Help me choose between XML and SQLite on Android

    - by Ngetha
    I have an Android app that periodically, say once a week, downloads content from a server in XML. The content is used by the app; different Activities use different parts of it. My question is a design one: should I save the data in SQLite or just keep it as an XML file, and which one would be faster to read? The app can only use one piece of content at a time, which means each new XML download replaces the old one.

    Read the article

  • Is encoding needed in this decryption?

    - by Lijo
    I have an encryption/decryption scenario as shown below.

    //[Clear text ID string as input] -- [(ASCII GetBytes) + Encoding] -- [Encryption as byte array] -- [Database column is VarBinary] -- [Pass byte[] as VarBinary parameter to SP for comparison]
    //[ID stored as VarBinary in database] -- [Read as byte array] -- [(Decrypt as byte array) + Encoding + (ASCII GetString)] -- [Show as string in the UI]

    My question is about the decryption scenario. After decryption I get a byte array, and I am doing an encoding conversion (IBM037) after that. Is that correct? Is there something wrong in the flow shown above?

        private static byte[] GetEncryptedID(string id)
        {
            Interface_Request input = new Interface_Request();
            input.RequestText = EncodeTo64(id);
            input.RequestType = Encryption;
            ProgramInterface inputRequest = new ProgramInterface();
            inputRequest.Test_Trial_Request = input;
            using (KTestService operation = new KTestService())
            {
                return operation.KTrialOperation(inputRequest).Test_Trial_Response.ResponseText;
            }
        }

        private static string GetDecryptedID(byte[] id)
        {
            Interface_Request input = new Interface_Request();
            input.RequestText = id;
            input.RequestType = Decryption;
            ProgramInterface request = new ProgramInterface();
            request.Test_Trial_Request = input;
            using (KTestService operationD = new KTestService())
            {
                ProgramInterface1 response = operationD.KI014Operation(request);
                byte[] decryptedValue = response.ICSF_AES_Response.ResponseText;
                Encoding sourceByteFormat = Encoding.GetEncoding("IBM037");
                Encoding destinationByteFormat = Encoding.ASCII;
                // Convert from one byte format to the other (IBM037 to ASCII)
                byte[] ibmEncodedBytes = Encoding.Convert(sourceByteFormat, destinationByteFormat, decryptedValue);
                return System.Text.ASCIIEncoding.ASCII.GetString(ibmEncodedBytes);
            }
        }

        private static byte[] EncodeTo64(string toEncode)
        {
            byte[] dataInBytes = System.Text.ASCIIEncoding.ASCII.GetBytes(toEncode);
            Encoding destinationByteFormat = Encoding.GetEncoding("IBM037");
            Encoding sourceByteFormat = Encoding.ASCII;
            // Convert from one byte format to the other (ASCII to IBM037)
            byte[] asciiBytes = Encoding.Convert(sourceByteFormat, destinationByteFormat, dataInBytes);
            return asciiBytes;
        }
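
    For the encoding question taken in isolation, here is a minimal C# round-trip sketch; it assumes the decrypted bytes really are IBM037 (EBCDIC) text, the sample ID is just a placeholder, and on .NET Core/.NET 5+ the IBM037 code page additionally requires registering System.Text.Encoding.CodePages:

        using System;
        using System.Text;

        class EncodingRoundTrip
        {
            static void Main()
            {
                string id = "ABC123"; // placeholder clear-text ID

                Encoding ascii = Encoding.ASCII;
                Encoding ebcdic = Encoding.GetEncoding("IBM037");

                // Before encryption: ASCII text -> IBM037 bytes
                byte[] ebcdicBytes = Encoding.Convert(ascii, ebcdic, ascii.GetBytes(id));

                // After decryption: IBM037 bytes -> ASCII bytes -> string
                byte[] asciiBytes = Encoding.Convert(ebcdic, ascii, ebcdicBytes);
                Console.WriteLine(ascii.GetString(asciiBytes)); // prints ABC123
            }
        }

    If the service decrypts back to the same IBM037 bytes that were encrypted, then converting IBM037 to ASCII as in the question (and in the sketch above) is the symmetric step.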

    Read the article

  • Now that Device Central is dead, how can I test my Flash Lite applications?

    - by Kirby
    I'm trying to use Flash Lite to make a simple game for my girlfriend, who only has a Nokia 5530, but I just realized that in CS6 Adobe killed Device Central, so there's no way for me to test it without the device (and it's supposed to be a surprise). Is there any other way for me to test it? I know I can just export the movie and use Flash Player, but Device Central allowed me to test drag and drop and memory/processor usage, for example... tl;dr: is there an alternative to Device Central for testing Flash Lite on older devices?

    Read the article

  • Does the compiler provide extra stack space for byte spilling?

    - by xuwicha
    In the sample code below, which I got here, I don't understand why the values of registers are moved to specific spots on the stack when byte spilling is performed.

        pushq   %rbp
        movq    %rsp, %rbp
        subq    $96, %rsp
        leaq    L__unnamed_cfstring_23(%rip), %rax
        leaq    L__unnamed_cfstring_26(%rip), %rcx
        movl    $42, %edx
        leaq    l_objc_msgSend_fixup_alloc(%rip), %r8
        movl    $0, -4(%rbp)
        movl    %edi, -8(%rbp)
        movq    %rsi, -16(%rbp)
        movq    %rax, -48(%rbp)    ## 8-byte Spill
        movq    %rcx, -56(%rbp)    ## 8-byte Spill
        movq    %r8, -64(%rbp)     ## 8-byte Spill
        movl    %edx, -68(%rbp)    ## 4-byte Spill
        callq   _objc_autoreleasePoolPush
        movq    L_OBJC_CLASSLIST_REFERENCES_$_(%rip), %rcx
        movq    %rcx, %rdi
        movq    -64(%rbp), %rsi    ## 8-byte Reload
        movq    %rax, -80(%rbp)    ## 8-byte Spill
        callq   *l_objc_msgSend_fixup_alloc(%rip)
        movq    L_OBJC_SELECTOR_REFERENCES_27(%rip), %rsi
        movq    %rax, %rdi
        movq    -56(%rbp), %rdx    ## 8-byte Reload
        movl    -68(%rbp), %ecx    ## 4-byte Reload

    Also, I don't know what the purpose of byte spilling is, since the program logic could still be achieved if the called function were the one saving the values of the registers it uses. I really have no idea why this is happening. Please help me understand it.

    Read the article

  • Converting a byte array to an X.509 certificate

    - by ddd
    I'm trying to port a piece of Java code into .NET that takes a Base64 encoded string, converts it to a byte array, and then uses it to make an X.509 certificate to get the modulus & exponent for RSA encryption. This is the Java code I'm trying to convert:

        byte[] externalPublicKey = Base64.decode("base 64 encoded string");
        KeyFactory keyFactory = KeyFactory.getInstance("RSA");
        EncodedKeySpec publicKeySpec = new X509EncodedKeySpec(externalPublicKey);
        Key publicKey = keyFactory.generatePublic(publicKeySpec);
        RSAPublicKey pbrtk = (java.security.interfaces.RSAPublicKey) publicKey;
        BigInteger modulus = pbrtk.getModulus();
        BigInteger pubExp = pbrtk.getPublicExponent();

    I've been trying to figure out the best way to convert this into .NET. So far, I've come up with this:

        byte[] bytes = Convert.FromBase64String("base 64 encoded string");
        X509Certificate2 x509 = new X509Certificate2(bytes);
        RSA rsa = (RSA)x509.PrivateKey;
        RSAParameters rsaParams = rsa.ExportParameters(false);
        byte[] modulus = rsaParams.Modulus;
        byte[] exponent = rsaParams.Exponent;

    Which to me looks like it should work, but it throws an exception when I use the Base64 encoded string from the Java code to generate the X.509 certificate. Is Java's X.509 implementation just incompatible with .NET's, or am I doing something wrong in my conversion from Java to .NET? Or is there simply no conversion from Java to .NET in this case?
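
    For reference, Java's X509EncodedKeySpec wraps a SubjectPublicKeyInfo blob (a bare public key), not a full certificate, so the X509Certificate2 constructor is not the right parser for those bytes. A hedged sketch of loading such a blob directly, assuming .NET Core 3.0 or later where RSA.ImportSubjectPublicKeyInfo exists (older frameworks need manual ASN.1 parsing or a third-party library):

        using System;
        using System.Security.Cryptography;

        class LoadRsaPublicKey
        {
            static void Main()
            {
                // Placeholder for the real Base64 string from the Java side
                byte[] spki = Convert.FromBase64String("base 64 encoded string");

                using (RSA rsa = RSA.Create())
                {
                    // Parses the SubjectPublicKeyInfo structure produced by X509EncodedKeySpec
                    rsa.ImportSubjectPublicKeyInfo(spki, out _);

                    RSAParameters p = rsa.ExportParameters(false); // public part only
                    byte[] modulus = p.Modulus;
                    byte[] exponent = p.Exponent;
                    Console.WriteLine(modulus.Length * 8); // approximate key size in bits
                }
            }
        }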

    Read the article

  • Java socket send & receive byte array

    - by quan
    On the server, I have sent a byte array to the client through a Java socket:

        byte[] message = ...;
        DataOutputStream dout = new DataOutputStream(client.getOutputStream());
        dout.write(message);

    How can I receive this byte array on the client side? Can anyone give me a code example for doing this? Thanks in advance.

    Read the article

  • Defining a byte array in JavaScript

    - by kumar
    Hi, how do I pass a byte array from JavaScript to an ActiveX control? My JavaScript will call a WCF service (method), and that method will return a byte array. After that I need to pass this byte array to the ActiveX control. Could anybody provide me with a solution for this?

    Read the article

  • Converting an MD5 hash byte array to a string

    - by Blankman
    How can I convert the hashed result, which is a byte array, to a string?

        byte[] bytePassword = Encoding.UTF8.GetBytes(password);
        using (MD5 md5 = MD5.Create())
        {
            byte[] byteHashedPassword = md5.ComputeHash(bytePassword);
        }

    So I need to convert byteHashedPassword to a string.
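
    A common way to get a printable string is to hex-encode (or Base64-encode) the hash bytes; a minimal sketch, with the password literal as a placeholder:

        using System;
        using System.Security.Cryptography;
        using System.Text;

        class Md5ToString
        {
            static void Main()
            {
                byte[] bytePassword = Encoding.UTF8.GetBytes("password");
                using (MD5 md5 = MD5.Create())
                {
                    byte[] byteHashedPassword = md5.ComputeHash(bytePassword);

                    // Uppercase hex: BitConverter gives "AB-CD-...", so strip the dashes
                    string hex = BitConverter.ToString(byteHashedPassword).Replace("-", "");

                    // Or Base64, if a shorter string is acceptable
                    string base64 = Convert.ToBase64String(byteHashedPassword);

                    Console.WriteLine(hex);
                    Console.WriteLine(base64);
                }
            }
        }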

    Read the article

  • Byte = 8 bits, but why doesn't BitConverter think so?

    - by Paul Farry
    Given the following information:

        Public Enum Request As Byte
            None = 0
            Identity = 1
            License = 2
        End Enum

        Protected mType As Communication.Request

        mType = Communication.Request.Identity
        Debug.Print(BitConverter.GetBytes(mType).Length.ToString)   ' prints 2

    Why does BitConverter report that mType has a length of 2? I would have thought that passing a Byte into BitConverter.GetBytes would just return the Byte. I mean, it's no big deal because it's only sending a very small block of data across a TCP socket, but I'm just intrigued as to why it thinks it's 2 bytes.
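
    The usual explanation is that BitConverter.GetBytes has no Byte overload, so the Byte value widens to Short (Int16) and a two-byte array comes back. A C# sketch of the same behaviour, with a one-byte workaround (the enum mirrors the one in the question):

        using System;

        class ByteWidening
        {
            enum Request : byte { None = 0, Identity = 1, License = 2 }

            static void Main()
            {
                Request mType = Request.Identity;

                // No GetBytes(byte) overload exists, so the byte widens to short
                Console.WriteLine(BitConverter.GetBytes((byte)mType).Length); // 2

                // To put exactly one byte on the wire, wrap it yourself
                byte[] single = { (byte)mType };
                Console.WriteLine(single.Length); // 1
            }
        }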

    Read the article

  • Representing a 16 byte variable

    - by Bobby
    I have to represent a 16 byte field as part of a data structure:

        struct Data_Entry {
            uint8  CUI_Type;
            uint8  CUI_Size;
            uint16 Src_Refresh_Period;
            uint16 Src_Buffer_Size;
            uint16 Src_CUI_Offset;
            uint32 Src_BCW_Address;
            uint32 Src_Previous_Timestamp;
            /* The field below should be a 16 byte field */
            uint32 Data;
        };

    How would I represent the "Data" field as a 16 byte field instead of the 4 byte field it currently is? Thanks, Bobby

    Read the article

  • How to convert a Bitmap into a byte array in Android

    - by satyamurthy
    Hi all, I am new to Android. I am retrieving an image from the SD card, converting the image into a Bitmap, and converting the Bitmap into a byte array. Please suggest a fix for this code:

        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.main);
            ImageView image = (ImageView) findViewById(R.id.picview);
            EditText value = (EditText) findViewById(R.id.EditText01);
            FileInputStream in;
            BufferedInputStream buf;
            try {
                in = new FileInputStream("/sdcard/pictures/1.jpg");
                buf = new BufferedInputStream(in, 1070);
                System.out.println("1.................." + buf);
                byte[] bMapArray = new byte[buf.available()];
                buf.read(bMapArray);
                Bitmap bMap = BitmapFactory.decodeByteArray(bMapArray, 0, bMapArray.length);
                for (int i = 0; i < bMapArray.length; i++) {
                    System.out.print("bytearray" + bMapArray[i]);
                }
                image.setImageBitmap(bMap);
                value.setText(bMapArray.toString());
                if (in != null) {
                    in.close();
                }
                if (buf != null) {
                    buf.close();
                }
            } catch (Exception e) {
                Log.e("Error reading file", e.toString());
            }
        }

    The output is:

        04-12 16:41:16.168: INFO/System.out(728): 4......................[B@435a2908

    This is the result for the byte array; it does not display the whole byte array. The array size is 1034. Please suggest a solution.

    Read the article

  • PHP: Convert a C# hex blob (hexadecimal string) back to a byte array prior to decryption

    - by PolishHurricane
    I have a piece of data that I am receiving in hexadecimal string format, for example: "65E0C8DEB69EA114567954". It was made this way in C# by converting a byte array to a hexadecimal string. However, I am using PHP to read this string and need to temporarily convert it back to the byte array. If it matters, I will be decrypting this byte array, then reconverting it to unencrypted hexadecimal and/or plaintext, but I will figure that out later. So the question is, how do I convert a string like the above back to an encoded byte array/blob in PHP? Thanks!

    Read the article

  • Misalignment in the output Bitmap created from a byte array

    - by Daniel
    I am trying to understand why I am having trouble creating a Bitmap from a byte array. I post this after careful scrutiny of the existing posts about Bitmap creation from byte arrays, like the following: Creating a bitmap from a byte[], Working with Image and Bitmap in c#?, C#: Bitmap Creation using bytes array. My code is meant to execute a filter on an 8bppIndexed digital image, writing the pixel values to a byte[] buffer that is then converted back (after some processing to manage gray levels) into an 8bppIndexed Bitmap. My input image is a trivial image created by means of specific Perl code: https://www.box.com/shared/zqt46c4pcvmxhc92i7ct Of course, after executing the filter the output image has lost the first and last rows and the first and last columns, due to the way the filter manages borders, so from the original 256 x 256 image I get a 254 x 254 image. Just to stay focused on the issue, I have commented out the code responsible for executing the filter, so that the operation really performed is the obvious:

        ComputedPixel = InputImage.GetPixel(myColumn, myRow).R;

    I know I should use LockBits/UnlockBits, but I prefer one headache at a time. Anyway, this code should be a sort of identity transform, and at the end I use:

        private unsafe void FillOutputImage()
        {
            OutputImage = new Bitmap(OutputImageCols, OutputImageRows, PixelFormat.Format8bppIndexed);
            ColorPalette ncp = OutputImage.Palette;
            for (int i = 0; i < 256; i++)
                ncp.Entries[i] = Color.FromArgb(255, i, i, i);
            OutputImage.Palette = ncp;
            Rectangle area = new Rectangle(0, 0, OutputImageCols, OutputImageRows);
            var data = OutputImage.LockBits(area, ImageLockMode.WriteOnly, OutputImage.PixelFormat);
            Marshal.Copy(byteBuffer, 0, data.Scan0, byteBuffer.Length);
            OutputImage.UnlockBits(data);
        }

    The output image I get is the following: https://www.box.com/shared/p6tubyi6dsf7cyregg9e It is quite clear that I am losing a pixel per row, but I cannot understand why: I have carefully checked all the parameters (OutputImageCols, OutputImageRows, and the byte[] byteBuffer length and content), even writing known values as a way to test. The code is nearly identical to other code posted on Stack Overflow and elsewhere. Could someone help identify where the problem is? Thanks a lot.
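
    One detail worth checking in cases like this is the stride: rows of a System.Drawing bitmap are padded to a multiple of 4 bytes, so a 254-pixel-wide 8bpp image has a 256-byte stride, and a single Marshal.Copy of a tightly packed buffer will shift every subsequent row. A hedged sketch of a row-by-row copy, assuming byteBuffer holds exactly OutputImageCols bytes per row:

        // Inside FillOutputImage, replacing the single Marshal.Copy call:
        var data = OutputImage.LockBits(area, ImageLockMode.WriteOnly, OutputImage.PixelFormat);
        try
        {
            for (int row = 0; row < OutputImageRows; row++)
            {
                IntPtr destRow = new IntPtr(data.Scan0.ToInt64() + (long)row * data.Stride);
                // Copy only the real pixels of this row; the padding bytes are left untouched
                Marshal.Copy(byteBuffer, row * OutputImageCols, destRow, OutputImageCols);
            }
        }
        finally
        {
            OutputImage.UnlockBits(data);
        }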

    Read the article

  • Passing an ActionScript JPG byte array to JavaScript (and eventually to PHP)

    - by Gus
    Our web application has a feature which uses Flash (AS3) to take photos with the user's web cam, then passes the resulting byte array to PHP, where it is reconstructed and saved on the server. However, we need to be able to take this web application offline, and we have chosen Gears to do so. The user takes the app offline, performs his tasks, and then, when he's reconnected to the server, we "sync" the data back with our central database. We don't have PHP to interact with Flash anymore, but we still need to allow users to take and save photos. We don't know how to save a JPG that Flash creates in a local database. Our hope was that we could save the byte array, a serialized string, or somehow actually persist the object itself, then pass it back to either PHP or Flash (and then PHP) to recreate the JPG. We have tried:

        - passing the byte array to JavaScript instead of PHP, but JavaScript doesn't seem to be able to do anything with it (the object seems to be stripped of its methods)
        - stringifying the byte array in Flash and then passing it to JavaScript, but we always get the same string: ÿØÿà

    Now we are thinking of serializing the string in Flash, passing it to JavaScript, then on the return route passing that string back to Flash, which will then pass it to PHP to be reconstructed as a JPG (whew). Since no one on our team has an extensive Flash background, we're a bit lost. Is serialization the way to go? Is there a more realistic way to do this? Does anyone have any experience with this sort of thing? Perhaps we can build a JavaScript class that is the same as the byte array class in AS?

    Read the article

  • How to retrieve a SID's byte array

    - by rursw1
    Hello experts, how can I convert a PSID type into a byte array that contains the byte value of the SID? Something like:

        PSID pSid;
        byte sidBytes[68]; // Max. length of a SID in bytes is 68

        if (GetAccountSid(
                NULL,        // default lookup logic
                AccountName, // account to obtain SID
                &pSid))      // buffer to allocate to contain resultant SID
        {
            ConvertPSIDToByteArray(pSid, sidBytes);
        }

    How should I write the function ConvertPSIDToByteArray? Thank you!

    Read the article

  • Built-in function to convert from a hex string to bytes

    - by Ngu Soon Hui
    This question is similar to the one here. One can easily convert from a hex string to bytes via the following method:

        public static byte[] HexStringToBytes(string hex)
        {
            byte[] data = new byte[hex.Length / 2];
            int j = 0;
            for (int i = 0; i < hex.Length; i += 2)
            {
                data[j] = Convert.ToByte(hex.Substring(i, 2), 16);
                ++j;
            }
            return data;
        }

    But is there a built-in function (inside the .NET Framework) for this?
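
    On newer runtimes there is; a minimal sketch assuming .NET 5 or later, where Convert.FromHexString and Convert.ToHexString were added (earlier .NET Framework versions still need a manual loop like the one above):

        using System;

        class HexBuiltIn
        {
            static void Main()
            {
                byte[] data = Convert.FromHexString("65E0C8DE"); // hex string -> bytes
                Console.WriteLine(Convert.ToHexString(data));    // prints 65E0C8DE
            }
        }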

    Read the article

  • Is there a simpler way to convert a byte array to a 2-digit-per-byte hexadecimal string?

    - by Tom Brito
    Is there a simpler way of implementing this? Or an already-implemented method in the JDK or another lib?

        /**
         * Convert a byte array to a 2-digit-per-byte hexadecimal String.
         */
        public static String to2DigitsHex(byte[] bytes) {
            String hexData = "";
            for (int i = 0; i < bytes.length; i++) {
                int intV = bytes[i] & 0xFF; // positive int
                String hexV = Integer.toHexString(intV);
                if (hexV.length() < 2) {
                    hexV = "0" + hexV;
                }
                hexData += hexV;
            }
            return hexData;
        }

        public static void main(String[] args) {
            System.out.println(to2DigitsHex(new byte[] {8, 10, 12}));
        }

    The output is: "08 0A 0C" (without the spaces).

    Read the article

  • Byte from string/int in C++

    - by Tim van Elsloo
    Hi, I'm a beginner in C++ and I want to know how to do this: how can I 'create' a byte from a string/int? So, for example, I have:

        string some_byte = "202";

    When I save that byte to a file, I want the file to be 1 byte instead of 3 bytes. How is that possible? Thanks in advance, Tim

    Read the article

  • How do I convert byte to string?

    - by HardCoder1986
    Hello! Is there any fast way to convert a given byte (say, the number 65) to its text hex representation? Basically, I want to convert an array of bytes into their code representation (I am hardcoding resources), like:

        BYTE data[] = {0x00, 0x0A, 0x00, 0x01, ... }

    How do I automate this byte -> "0x0A" string conversion?

    Read the article

  • Continue NSURLConnection/NSURLRequest from a given Byte

    - by Sj
    I am working with a web service right now that requires me to upload video binaries straight to their web form via the iPhone SDK. Simple enough. The part that is getting me, though, is that when the connection is interrupted, they want me to be able to continue the upload from a given byte. So here is what I have: the original data & the last byte uploaded. What I need to know: how can I continue the upload from that byte? It seems like it would be something similar to truncating the NSData at that byte, but I do not know how to do that for an NSURLConnection/NSURLRequest. Thank you!

    Read the article

  • Pointer is always byte-aligned

    - by kumar
    Hi, I read something like "the pointer must be byte-aligned". My understanding is that in a typical 32-bit architecture all pointers are byte-aligned... no? Please confirm: can there be a pointer which is not byte-aligned?

    Read the article

  • Convert jpg Byte[] to Texture2D

    - by Damien Sawyer
    I need to import jpeg images into a WP7/XNA app with associated metadata. The program which manages these images exports to an XML file with an encoded byte[] of the jpg files. I've written a custom importer/processor which successfully imports the reserialized objects into my XNA project. My question is, given the byte[] of the jpg, what is the best way to convert it back to Texture2D?

        // 'Standard' method for importing an image
        Texture2D texture1 = Content.Load<Texture2D>("artwork");
        // Uses the standard content processor "Texture - XNA Framework" to import an image.

        // 'Custom' method
        var myCustomObject = Content.Load<CompiledBNBImage>("gamedata");
        // Uses my custom content processor to return the POCO "CompiledBNBImage"
        byte[] myJPEGByteArray = myCustomObject.Image; // byte[] of jpeg
        Texture2D texture2 = ???? // What is the best way to convert myJPEGByteArray to a Texture2D?

    Thanks very much for your help. :-) DS
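
    One possible approach, assuming XNA 4.0 on WP7 where Texture2D.FromStream is available, is to decode the JPEG bytes at runtime through a MemoryStream (the helper name below is illustrative; GraphicsDevice comes from the game):

        using System.IO;
        using Microsoft.Xna.Framework.Graphics;

        static class TextureHelper
        {
            // Decodes JPEG (or PNG) bytes into a Texture2D at runtime
            public static Texture2D FromJpegBytes(GraphicsDevice device, byte[] jpegBytes)
            {
                using (var ms = new MemoryStream(jpegBytes))
                {
                    return Texture2D.FromStream(device, ms);
                }
            }
        }

    With the question's names, that would look something like: Texture2D texture2 = TextureHelper.FromJpegBytes(GraphicsDevice, myCustomObject.Image);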

    Read the article

  • C# Image.Clone to byte[] causes EDIT.COM to open on Windows XP

    - by JayDial
    It appears that cloning an Image and converting it to a byte array is causing EDIT.COM to open up on Windows XP machines. This does not happen on a Windows 7 machine. The application is a C# .NET 2.0 application. Does anyone have any idea why this may be happening? Here is my Image conversion code:

        public static byte[] CovertImageToByteArray(Image imageToConvert)
        {
            Image clone = imageToConvert.Clone() as Image;
            if (clone == null)
                return null;
            imageToConvert.Dispose();
            byte[] imageByteArray;
            using (MemoryStream ms = new MemoryStream())
            {
                clone.Save(ms, clone.RawFormat);
                imageByteArray = ms.ToArray();
            }
            return imageByteArray;
        }

        public static Image ConvertByteArrayToImage(byte[] imageByteArray, ImageFormat formatOfImage)
        {
            Image image;
            using (MemoryStream ms = new MemoryStream(imageByteArray, 0, imageByteArray.Length))
            {
                ms.Write(imageByteArray, 0, imageByteArray.Length);
                image = Image.FromStream(ms, true);
            }
            return image;
        }

    Thanks, Justin

    Read the article

< Previous Page | 4 5 6 7 8 9 10 11 12 13 14 15  | Next Page >