Search Results

Search found 3768 results on 151 pages for 'lite byte'.


  • AES Cipher not picking up IV

    - by timothyjc
    I am trying to use an IV with AES so that the encrypted text is unpredictable. However, the encrypted hex string is always the same. I have actually tried a few methods of attempting to add some randomness by passing some additional parameters to the cipher init call:

    1) Manual IV generation:

        byte[] iv = generateIv();
        IvParameterSpec ivspec = new IvParameterSpec(iv);

    2) Asking the cipher to generate the IV:

        AlgorithmParameters params = cipher.getParameters();
        params.getParameterSpec(IvParameterSpec.class);

    3) Using a PBEParameterSpec:

        byte[] encryptionSalt = generateSalt();
        PBEParameterSpec pbeParamSpec = new PBEParameterSpec(encryptionSalt, 1000);

    All of these seem to have no influence on the encrypted text... help!!! My code:

        package com.citc.testencryption;

        import java.security.NoSuchAlgorithmException;
        import java.security.SecureRandom;
        import javax.crypto.Cipher;
        import javax.crypto.SecretKey;
        import javax.crypto.SecretKeyFactory;
        import javax.crypto.spec.IvParameterSpec;
        import javax.crypto.spec.PBEKeySpec;
        import android.app.Activity;
        import android.os.Bundle;
        import android.util.Log;

        public class Main extends Activity {

            public static final int SALT_LENGTH = 20;
            public static final int PBE_ITERATION_COUNT = 1000;
            private static final String RANDOM_ALGORITHM = "SHA1PRNG";
            private static final String PBE_ALGORITHM = "PBEWithSHA256And256BitAES-CBC-BC";
            private static final String CIPHER_ALGORITHM = "PBEWithSHA256And256BitAES-CBC-BC";
            private static final String TAG = Main.class.getSimpleName();

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);
                try {
                    String password = "password";
                    String plainText = "plaintext message to be encrypted";
                    // byte[] salt = generateSalt();
                    byte[] salt = "dfghjklpoiuytgftgyhj".getBytes();
                    Log.i(TAG, "Salt: " + salt.length + " " + HexEncoder.toHex(salt));
                    PBEKeySpec pbeKeySpec = new PBEKeySpec(password.toCharArray(), salt, PBE_ITERATION_COUNT);
                    SecretKeyFactory keyFac = SecretKeyFactory.getInstance(PBE_ALGORITHM);
                    SecretKey secretKey = keyFac.generateSecret(pbeKeySpec);
                    byte[] key = secretKey.getEncoded();
                    Log.i(TAG, "Key: " + HexEncoder.toHex(key));
                    // PBEParameterSpec pbeParamSpec = new PBEParameterSpec(salt, ITERATION_COUNT);
                    Cipher encryptionCipher = Cipher.getInstance(CIPHER_ALGORITHM);
                    // byte[] encryptionSalt = generateSalt();
                    // Log.i(TAG, "Encrypted Salt: " + encryptionSalt.length + " " + HexEncoder.toHex(encryptionSalt));
                    // PBEParameterSpec pbeParamSpec = new PBEParameterSpec(encryptionSalt, 1000);
                    // byte[] iv = params.getParameterSpec(IvParameterSpec.class).getIV();
                    // Log.i(TAG, encryptionCipher.getParameters() + " ");
                    byte[] iv = generateIv();
                    IvParameterSpec ivspec = new IvParameterSpec(iv);
                    encryptionCipher.init(Cipher.ENCRYPT_MODE, secretKey, ivspec);
                    byte[] encryptedText = encryptionCipher.doFinal(plainText.getBytes());
                    Log.i(TAG, "Encrypted: " + HexEncoder.toHex(encryptedText)); // <== Why is this always the same :(
                    Cipher decryptionCipher = Cipher.getInstance(CIPHER_ALGORITHM);
                    decryptionCipher.init(Cipher.DECRYPT_MODE, secretKey, ivspec);
                    byte[] decryptedText = decryptionCipher.doFinal(encryptedText);
                    Log.i(TAG, "Decrypted: " + new String(decryptedText));
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }

            private byte[] generateSalt() throws NoSuchAlgorithmException {
                SecureRandom random = SecureRandom.getInstance(RANDOM_ALGORITHM);
                byte[] salt = new byte[SALT_LENGTH];
                random.nextBytes(salt);
                return salt;
            }

            private byte[] generateIv() throws NoSuchAlgorithmException {
                SecureRandom random = SecureRandom.getInstance(RANDOM_ALGORITHM);
                byte[] iv = new byte[16];
                random.nextBytes(iv);
                return iv;
            }
        }
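    For comparison, here is a minimal, self-contained sketch that keeps a PBE key derivation but runs the cipher as a plain "AES/CBC/PKCS5Padding" transform with an explicit random IV. The PBKDF2WithHmacSHA1 factory is a substitution for the BouncyCastle PBE string and is an assumption, not the poster's setup; the point is only to show that, with this transform, a fresh IV changes the ciphertext on every run, which helps isolate whether the problem is the IV generation or the way the PBE cipher string treats the IvParameterSpec.

        import java.security.SecureRandom;
        import javax.crypto.Cipher;
        import javax.crypto.SecretKeyFactory;
        import javax.crypto.spec.IvParameterSpec;
        import javax.crypto.spec.PBEKeySpec;
        import javax.crypto.spec.SecretKeySpec;

        public class CbcIvDemo {
            public static void main(String[] args) throws Exception {
                byte[] salt = "dfghjklpoiuytgftgyhj".getBytes("UTF-8");
                // Derive a 256-bit AES key from the password (PBKDF2 used here for illustration).
                // 256-bit keys may need the JCE unlimited-strength policy on older JREs.
                PBEKeySpec spec = new PBEKeySpec("password".toCharArray(), salt, 1000, 256);
                SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
                SecretKeySpec key = new SecretKeySpec(factory.generateSecret(spec).getEncoded(), "AES");

                // Encrypt the same plaintext twice with fresh random IVs.
                for (int run = 0; run < 2; run++) {
                    byte[] iv = new byte[16];
                    new SecureRandom().nextBytes(iv);
                    Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
                    cipher.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
                    byte[] ct = cipher.doFinal("plaintext message to be encrypted".getBytes("UTF-8"));
                    // Different IV => different ciphertext, even for an identical key and plaintext.
                    StringBuilder hex = new StringBuilder();
                    for (byte b : ct) hex.append(String.format("%02x", b));
                    System.out.println(hex);
                }
            }
        }

    If this prints two different hex strings while the original code still prints identical ones, the PBE cipher string (rather than the IV-generation code) is the place to look.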

    Read the article

  • retransmission of lost TCP segment

    - by tcpip
    What will happen in the following scenario (assume the connection is already established)?

    1. (stack A) sends 10 bytes of data
    2. (stack B) sends an ACK for the 10 bytes
    3. (stack B) sends 200 bytes of data
    4. (stack B) sends 100 bytes of data
    5. (stack B) sends 50 bytes of data
    6. (stack A) sends an ACK for the 350 bytes and also sends 70 bytes of data; this segment gets lost and does not reach machine B
    7. (stack B) retransmits the 200 bytes of data from step 3
    8. (stack A) sends an ACK for the 200 bytes, with the next expected sequence number being the one for the 70 bytes of data

    Question: should the 70 bytes of data also be transferred with the ACK in step 8? Note that the retransmission timer for the segment in step 6 has not expired yet.

    Read the article

  • PHP Calculating Text to Content Ratio

    - by James
    I am using the following code to calculate the text-to-code ratio. I think it is crazy that no one can agree on how to properly calculate the result. I am looking for any suggestions or ideas to improve this code that may make it more accurate.

        <?php
        // Returns the size of the content in bytes
        function findKb($content){
            $count = 0;
            $order = array("\r\n", "\n", "\r", "chr(13)", "\t", "\0", "\x0B");
            $content = str_replace($order, "12", $content);
            for ($index = 0; $index < strlen($content); $index++){
                $byte = ord($content[$index]);
                if ($byte <= 127) {
                    $count++;
                } else if ($byte >= 194 && $byte <= 223) {
                    $count = $count + 2;
                } else if ($byte >= 224 && $byte <= 239) {
                    $count = $count + 3;
                } else if ($byte >= 240 && $byte <= 244) {
                    $count = $count + 4;
                }
            }
            return $count;
        }

        // Collect size of the entire code
        $filesize = findKb($content);

        // Remove anything within script tags
        $code = preg_replace("@<script[^>]*>.+</script[^>]*>@i", "", $content);
        // Remove anything within style tags (note: applied to $code, not $content, so the script removal is kept)
        $code = preg_replace("@<style[^>]*>.+</style[^>]*>@i", "", $code);
        // Remove all tags
        $code = strip_tags($code);
        // Remove extra whitespace from the content
        $code = preg_replace('/\s+/', ' ', $code);
        // Find the size of the remaining text
        $codesize = findKb($code);

        // Calculate percentage
        $percent = $codesize / $filesize;
        $percentage = $percent * 100;
        echo $percentage;
        ?>

    I don't know the exact calculations that are used, so this function is just my guess. Does anyone know what the proper calculations are, or whether my function is close enough for a good judgement?
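    For what it's worth, here is the same ratio sketched in Java, leaning on the platform's UTF-8 encoder instead of classifying lead bytes by hand. The regexes and the definition of "text" are carried over from the PHP above as assumptions, not an agreed standard, so the number is best treated as a relative signal rather than an absolute one.

        import java.nio.charset.StandardCharsets;

        public class TextToCodeRatio {
            // Byte count of a string in UTF-8; this replaces the manual lead-byte classification.
            static int utf8Bytes(String s) {
                return s.getBytes(StandardCharsets.UTF_8).length;
            }

            static double ratio(String html) {
                int total = utf8Bytes(html);
                String text = html
                        .replaceAll("(?is)<script[^>]*>.*?</script[^>]*>", "")  // drop script blocks
                        .replaceAll("(?is)<style[^>]*>.*?</style[^>]*>", "")    // drop style blocks
                        .replaceAll("<[^>]+>", " ")                             // strip remaining tags
                        .replaceAll("\\s+", " ")                                // collapse whitespace
                        .trim();
                return total == 0 ? 0.0 : 100.0 * utf8Bytes(text) / total;
            }

            public static void main(String[] args) {
                String page = "<html><head><style>p{color:red}</style></head>"
                            + "<body><p>Hello, world.</p><script>var x=1;</script></body></html>";
                System.out.printf("text-to-content ratio: %.1f%%%n", ratio(page));
            }
        }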

    Read the article

  • Dilemma of Sending a Byte Array from JavaScript to COM.

    - by Voulnet
    Hello there, I'm having a bit of a problem because neither Javascript nor ActiveX (written in C++) are behaving like good little children. All I'm asking them to do is for Javascript to send a byte array and for the ActiveX to receive the byte array correctly in order to do more computation. This is how I declared my byte array in JS, and I verified it inside JS: var arr = new Array(0x00, 0xA4, 0x04, 0x00, 0x10,0xA0, 0x00, 00, 00, 0x18, 0x30, 0x03, 0x01, 00, 00, 00, 00, 00, 00, 00, 00); Javascript sends this array as an argument to an ActiveX method. Here is the tricky part; I want the ActiveX method to receive the byte array as a SAFEARRAY or VARIANT, but I can't get it to work for the life of me. I tried debugging and seeing the contents received inside the ActiveX as both SAFEARRAY or VARIANT, but to no avail. Here is the IDL segment: [id(7), helpstring("NEW Sends APDU to Card and Reads the Response")] HRESULT NewSendAPDU( [in] VARIANT pAPDU, [out, retval]VARIANT* pAPDUResponse); Any help is greatly appreciated. Thanks in advance!

    Read the article

  • What are these stray zero-byte files extracted from tarball? (OSX)

    - by Scott M
    I'm extracting a folder from a tarball, and I see these zero-byte files showing up in the result (where they are not in the source). Setup (all on OS X): on machine one, I have a directory /My/Stuff/Goes/Here/ containing several hundred files. I build the archive like this:

        tar -cZf mystuff.tgz /My/Stuff/Goes/Here/

    On machine two, I scp the tgz file to my local directory, then unpack it:

        tar -xZf mystuff.tgz

    It creates ~scott/My/Stuff/Goes/, but then under Goes I see two entries: Here/ (a directory) and Here.bGd (a zero-byte file). The "Here.bGd" zero-byte file has a random 3-character suffix with mixed upper- and lower-case characters. It has the same name as the lowest-level directory mentioned in the tar-creation command, and it only appears at that lowest-level directory. Anybody know where these come from, and how I can adjust my tar creation to get rid of them? Update: I checked the table of contents of the archive using tar tZvf: the TOC does not list the zero-byte files, so I'm leaning toward the suggestion that the uncompressing machine is at fault. OS X is version 10.5.5 on the unzip machine (not sure how to check the filesystem type). Tar is GNU tar 1.15.1, and it came with the machine.

    Read the article

  • Questions related to writing your own file downloader using multiple threads in Java

    - by Shekhar
    Hello In my current company, i am doing a PoC on how we can write a file downloader utility. We have to use socket programming(TCP/IP) for downloading the files. One of the requirements of the client is that a file(which will be large in size) should be transfered in chunks for example if we have a file of 5Mb size then we can have 5 threads which transfer 1 Mb each. I have written a small application which downloads a file. You can download the eclipe project from http://www.fileflyer.com/view/QM1JSC0 A brief explanation of my classes FileSender.java This class provides the bytes of file. It has a method called sendBytesOfFile(long start,long end, long sequenceNo) which gives the number of bytes. import java.io.File; import java.io.IOException; import java.util.zip.CRC32; import org.apache.commons.io.FileUtils; public class FileSender { private static final String FILE_NAME = "C:\\shared\\test.pdf"; public ByteArrayWrapper sendBytesOfFile(long start,long end, long sequenceNo){ try { File file = new File(FILE_NAME); byte[] fileBytes = FileUtils.readFileToByteArray(file); System.out.println("Size of file is " +fileBytes.length); System.out.println(); System.out.println("Start "+start +" end "+end); byte[] bytes = getByteArray(fileBytes, start, end); ByteArrayWrapper wrapper = new ByteArrayWrapper(bytes, sequenceNo); return wrapper; } catch (IOException e) { throw new RuntimeException(e); } } private byte[] getByteArray(byte[] bytes, long start, long end){ long arrayLength = end-start; System.out.println("Start : "+start +" end : "+end + " Arraylength : "+arrayLength +" length of source array : "+bytes.length); byte[] arr = new byte[(int)arrayLength]; for(int i = (int)start, j =0; i < end;i++,j++){ arr[j] = bytes[i]; } return arr; } public static long fileSize(){ File file = new File(FILE_NAME); return file.length(); } } Second Class is FileReceiver.java - This class receives the file. Small Explanation what this file does This class finds the size of the file to be fetched from Sender Depending upon the size of the file it finds the start and end position till the bytes needs to be read. It starts n number of threads giving each thread start,end, sequence number and a list which all the threads share. Each thread reads the number of bytes and creates a ByteArrayWrapper. ByteArrayWrapper objects are added to the list Then i have while loop which basically make sure that all threads have done their work finally it sorts the list based on the sequence number. then the bytes are joined, and a complete byte array is formed which is converted to a file. 
Code of File Receiver package com.filedownloader; import java.io.File; import java.io.IOException; import java.util.ArrayList; import java.util.Collections; import java.util.Comparator; import java.util.List; import java.util.zip.CRC32; import org.apache.commons.io.FileUtils; public class FileReceiver { public static void main(String[] args) { FileReceiver receiver = new FileReceiver(); receiver.receiveFile(); } public void receiveFile(){ long startTime = System.currentTimeMillis(); long numberOfThreads = 10; long filesize = FileSender.fileSize(); System.out.println("File size received "+filesize); long start = filesize/numberOfThreads; List<ByteArrayWrapper> list = new ArrayList<ByteArrayWrapper>(); for(long threadCount =0; threadCount<numberOfThreads ;threadCount++){ FileDownloaderTask task = new FileDownloaderTask(threadCount*start,(threadCount+1)*start,threadCount,list); new Thread(task).start(); } while(list.size() != numberOfThreads){ // this is done so that all the threads should complete their work before processing further. //System.out.println("Waiting for threads to complete. List size "+list.size()); } if(list.size() == numberOfThreads){ System.out.println("All bytes received "+list); Collections.sort(list, new Comparator<ByteArrayWrapper>() { @Override public int compare(ByteArrayWrapper o1, ByteArrayWrapper o2) { long sequence1 = o1.getSequence(); long sequence2 = o2.getSequence(); if(sequence1 < sequence2){ return -1; }else if(sequence1 > sequence2){ return 1; } else{ return 0; } } }); byte[] totalBytes = list.get(0).getBytes(); byte[] firstArr = null; byte[] secondArr = null; for(int i = 1;i<list.size();i++){ firstArr = totalBytes; secondArr = list.get(i).getBytes(); totalBytes = concat(firstArr, secondArr); } System.out.println(totalBytes.length); convertToFile(totalBytes,"c:\\tmp\\test.pdf"); long endTime = System.currentTimeMillis(); System.out.println("Total time taken with "+numberOfThreads +" threads is "+(endTime-startTime)+" ms" ); } } private byte[] concat(byte[] A, byte[] B) { byte[] C= new byte[A.length+B.length]; System.arraycopy(A, 0, C, 0, A.length); System.arraycopy(B, 0, C, A.length, B.length); return C; } private void convertToFile(byte[] totalBytes,String name) { try { FileUtils.writeByteArrayToFile(new File(name), totalBytes); } catch (IOException e) { throw new RuntimeException(e); } } } Code of ByteArrayWrapper package com.filedownloader; import java.io.Serializable; public class ByteArrayWrapper implements Serializable{ private static final long serialVersionUID = 3499562855188457886L; private byte[] bytes; private long sequence; public ByteArrayWrapper(byte[] bytes, long sequenceNo) { this.bytes = bytes; this.sequence = sequenceNo; } public byte[] getBytes() { return bytes; } public long getSequence() { return sequence; } } Code of FileDownloaderTask import java.util.List; public class FileDownloaderTask implements Runnable { private List<ByteArrayWrapper> list; private long start; private long end; private long sequenceNo; public FileDownloaderTask(long start,long end,long sequenceNo,List<ByteArrayWrapper> list) { this.list = list; this.start = start; this.end = end; this.sequenceNo = sequenceNo; } @Override public void run() { ByteArrayWrapper wrapper = new FileSender().sendBytesOfFile(start, end, sequenceNo); list.add(wrapper); } } Questions related to this code 1) Does file downloading becomes fast when multiple threads is used? In this code i am not able to see the benefit. 2) How should i decide how many threads should i create ? 
3) Are there any open-source libraries which do this?
4) The file which FileReceiver receives is valid and not corrupted, but the checksum (I used FileUtils from commons-io) does not match. What is the problem?
5) This code runs out of memory when used with a large file (above 100 MB) because of the byte array which is created. How can I avoid that?
I know this is very bad code, but I had to write it in one day :-). Please suggest any other good way to do this. Thanks, Shekhar
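    On question 5, the out-of-memory error comes from FileUtils.readFileToByteArray loading the entire file for every chunk, and from the full byte[] being reassembled on the receiving side. A sketch of one alternative is below (the class and paths are made up, not part of the posted code): each task reads only its own byte range with RandomAccessFile and writes it to the same offset of the output file, so no complete copy of the file is ever held in memory. This also bears on question 1: for a single local disk or a single connection, extra threads mostly add coordination overhead; the parallelism tends to pay off only when chunks arrive over separate connections or a high-latency link.

        import java.io.IOException;
        import java.io.RandomAccessFile;

        public class ChunkCopier implements Runnable {
            private final String src, dst;
            private final long start, end;   // half-open byte range [start, end)

            ChunkCopier(String src, String dst, long start, long end) {
                this.src = src; this.dst = dst; this.start = start; this.end = end;
            }

            @Override
            public void run() {
                try (RandomAccessFile in = new RandomAccessFile(src, "r");
                     RandomAccessFile out = new RandomAccessFile(dst, "rw")) {
                    in.seek(start);
                    out.seek(start);                   // each thread writes to its own region
                    byte[] buf = new byte[64 * 1024];  // small fixed buffer instead of a whole-file array
                    long remaining = end - start;
                    while (remaining > 0) {
                        int n = in.read(buf, 0, (int) Math.min(buf.length, remaining));
                        if (n < 0) break;
                        out.write(buf, 0, n);
                        remaining -= n;
                    }
                } catch (IOException e) {
                    throw new RuntimeException(e);
                }
            }
        }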

    Read the article

  • Is there a way to back up all my files and replace them with 0-byte files with the same name?

    - by laggingreflex
    My main drive on my laptop keeps filling up, so I take a backup on a USB drive and delete the original files. But then I find myself getting (downloading, or receiving from someone else) files that I already have backed up but couldn't recall at the moment. So is there a way I can keep a 0-byte file with the same name as the backed-up copy, so that when I'm asked whether to overwrite the existing file I can easily choose no, knowing I probably already have this file in the backup? EDIT: better yet, replace each file with a shortcut (.lnk) to the copy on the external drive, so I can access the files hassle-free and not get errors from 0-byte files being accidentally opened.
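    A sketch of the 0-byte placeholder idea using Java NIO, in case a small script is acceptable (both paths below are invented examples): it mirrors the backup's directory tree onto the laptop drive as empty files of the same names, so the "overwrite?" prompt still fires for anything already backed up.

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.util.stream.Stream;

        public class PlaceholderMaker {
            public static void main(String[] args) throws IOException {
                Path backup = Paths.get("E:/backup");          // where the real files were moved (assumed path)
                Path local  = Paths.get("C:/Users/me/Files");  // where the 0-byte markers should live

                try (Stream<Path> stream = Files.walk(backup)) {
                    for (Path p : (Iterable<Path>) stream::iterator) {
                        Path target = local.resolve(backup.relativize(p));
                        if (Files.isDirectory(p)) {
                            Files.createDirectories(target);
                        } else if (!Files.exists(target)) {
                            Files.createFile(target);          // 0-byte file with the same name
                        }
                    }
                }
            }
        }

    The shortcut (.lnk) variant would need a Windows-specific tool or COM scripting; plain empty files are the portable part of the idea.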

    Read the article

  • EXPORT AS INSERT STATEMENTS: But in SQL*Plus the line exceeds 2500 characters!

    - by The chicken in the kitchen
    Hello, I have to export an Oracle table as INSERT statements, but some of the generated INSERT statements exceed 2500 characters. I am obliged to execute them in SQL*Plus, so I receive an error message. This is my Oracle table:

        CREATE TABLE SAMPLE_TABLE
        (
          C01 VARCHAR2 (5 BYTE)    NOT NULL,
          C02 NUMBER (10)          NOT NULL,
          C03 NUMBER (5)           NOT NULL,
          C04 NUMBER (5)           NOT NULL,
          C05 VARCHAR2 (20 BYTE)   NOT NULL,
          c06 VARCHAR2 (200 BYTE)  NOT NULL,
          c07 VARCHAR2 (200 BYTE)  NOT NULL,
          c08 NUMBER (5)           NOT NULL,
          c09 NUMBER (10)          NOT NULL,
          c10 VARCHAR2 (80 BYTE),
          c11 VARCHAR2 (200 BYTE),
          c12 VARCHAR2 (200 BYTE),
          c13 VARCHAR2 (4000 BYTE),
          c14 VARCHAR2 (1 BYTE) DEFAULT 'N' NOT NULL,
          c15 CHAR (1 BYTE),
          c16 CHAR (1 BYTE)
        );

    Assumptions:
    a) I am OBLIGED to export the table data as INSERT statements; I am allowed to use UPDATE statements in order to avoid the SQL*Plus error "SP2-0027: input is too long (> 2499 characters)";
    b) I am OBLIGED to use SQL*Plus to execute the generated script;
    c) please assume that every record can contain special characters: CHR(10), CHR(13), and so on;
    d) I CAN'T use SQL*Loader;
    e) I CAN'T export and then import the table: I can only add the "delta" using INSERT / UPDATE statements through SQL*Plus.
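    One workaround that fits the stated constraints, sketched below as a small generator (this is a home-grown construction, not a feature of any export tool, and it assumes C01/C02 identify the row): emit an INSERT that carries only a first slice of the long c13 value, then append the remainder in UPDATE ... SET c13 = c13 || '...' statements of bounded length, so no single statement ever approaches the SP2-0027 limit. Only a couple of columns are shown for brevity; a real run would have to list every NOT NULL column and also translate CHR(10)/CHR(13) and other special characters.

        import java.util.ArrayList;
        import java.util.List;

        public class InsertSplitter {
            private static final int CHUNK = 1000;   // keep each literal well under SQL*Plus's 2499-char cap

            // Builds one INSERT plus as many UPDATEs as needed for a single row's long c13 value.
            static List<String> statementsFor(String c01, long c02, String c13) {
                List<String> out = new ArrayList<>();
                String first = c13.substring(0, Math.min(CHUNK, c13.length()));
                out.add("INSERT INTO SAMPLE_TABLE (C01, C02, C13) VALUES ('"
                        + escape(c01) + "', " + c02 + ", '" + escape(first) + "');");
                for (int pos = CHUNK; pos < c13.length(); pos += CHUNK) {
                    String piece = c13.substring(pos, Math.min(pos + CHUNK, c13.length()));
                    out.add("UPDATE SAMPLE_TABLE SET C13 = C13 || '" + escape(piece)
                            + "' WHERE C01 = '" + escape(c01) + "' AND C02 = " + c02 + ";");
                }
                return out;
            }

            static String escape(String s) {
                return s.replace("'", "''");   // minimal quoting; CHR(10)/CHR(13) handling omitted
            }

            public static void main(String[] args) {
                // "x".repeat(...) needs Java 11+; it only fabricates a long test value here.
                statementsFor("AB123", 42, "x".repeat(2600)).forEach(System.out::println);
            }
        }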

    Read the article

  • Alternative in PHP (from Java)

    - by Valdas
    I have a problem: I have this class in Java, but now I need it in PHP. Is it possible to rewrite it? Code:

        public class RealityStream {

            public RealityStream(byte abyte0[]) {
                m_dstream = null;
                m_dstream = new DataInputStream(new ByteArrayInputStream(abyte0));
            }

            public int available() throws IOException {
                return m_dstream.available();
            }

            public float readFloat() throws IOException {
                byte byte0 = m_dstream.readByte();
                byte byte1 = m_dstream.readByte();
                byte byte2 = m_dstream.readByte();
                byte byte3 = m_dstream.readByte();
                int i = (byte3 & 0xff) << 24 | (byte2 & 0xff) << 16 | (byte1 & 0xff) << 8 | byte0 & 0xff;
                return Float.intBitsToFloat(i);
            }

            public int readInt() throws IOException {
                byte byte0 = m_dstream.readByte();
                byte byte1 = m_dstream.readByte();
                byte byte2 = m_dstream.readByte();
                byte byte3 = m_dstream.readByte();
                return (byte3 & 0xff) << 24 | (byte2 & 0xff) << 16 | (byte1 & 0xff) << 8 | byte0 & 0xff;
            }

            public int readShort() throws IOException {
                byte byte0 = m_dstream.readByte();
                byte byte1 = m_dstream.readByte();
                return byte1 << 8 | byte0 & 0xff;
            }

            public int readUnsignedChar() throws IOException {
                return m_dstream.readUnsignedByte();
            }

            private DataInputStream m_dstream;
        }

    I really need this. Many thanks.

    Read the article

  • How to do RLE (run length encoding) in C# on a byte array?

    - by CuriousCoder
    I am trying to XOR two bitmap files (their byte arrays) to produce a byte array that can be used to change image A into image B or vice versa. I am sending this over the network so I would like to do some basic compression before this happens. Is there a way to do RLE (run length encoding) in C# (using a built-in, or fast reliable 3rd party library) on a byte array for this purpose? Notes: If you are going to suggest an alternative to my approach please keep in mind that the decompression and transformation on the remote machine has to be as quick and efficient as possible.
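    There is no RLE codec built into the framework as far as I know, so a hand-rolled one is the usual route; below is a minimal run-length encoder/decoder, written in Java for illustration (the logic ports to C# nearly line for line). For XOR diffs that are mostly zero bytes it does well; for anything less regular, a Deflate-based stream (java.util.zip here, System.IO.Compression.DeflateStream in .NET) is usually the safer first choice, and it is also cheap to decompress on the receiving machine.

        import java.io.ByteArrayOutputStream;

        public final class Rle {
            // Encodes as (count, value) pairs; runs longer than 255 bytes are split.
            public static byte[] encode(byte[] data) {
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                int i = 0;
                while (i < data.length) {
                    byte value = data[i];
                    int run = 1;
                    while (i + run < data.length && data[i + run] == value && run < 255) run++;
                    out.write(run);
                    out.write(value);
                    i += run;
                }
                return out.toByteArray();
            }

            public static byte[] decode(byte[] encoded) {
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                for (int i = 0; i < encoded.length; i += 2) {
                    int run = encoded[i] & 0xFF;
                    for (int j = 0; j < run; j++) out.write(encoded[i + 1]);
                }
                return out.toByteArray();
            }

            public static void main(String[] args) {
                byte[] diff = {0, 0, 0, 0, 7, 7, 0, 0, 0, 1};
                byte[] packed = encode(diff);
                System.out.println(diff.length + " -> " + packed.length + " bytes");
                System.out.println(java.util.Arrays.equals(diff, decode(packed)));  // true
            }
        }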

    Read the article

  • Performance: use a BinaryReader on a MemoryStream to read a byte array, or read directly?

    - by Virtlink
    I would like to know whether using a BinaryReader on a MemoryStream created from a byte array (byte[]) would reduce performance significantly. There is binary data I want to read, and I get that data as an array of bytes. I am currently deciding between two approaches to read the data, and have to implement many reading methods accordingly. After each reading action, I need the position right after the read data, and therefore I am considering using a BinaryReader. The first, non-BinaryReader approach:

        object Read(byte[] data, ref int offset);

    The second approach:

        object Read(BinaryReader reader);

    Such Read() methods will be called very often, in succession on the same data until all of it has been read. So using a BinaryReader feels more natural, but does it have much impact on performance?
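    The same trade-off exists in Java, where it shows up as a ByteBuffer (which tracks the position for you) versus hand-maintained offsets into the byte[]; a sketch of both styles is below. The wrapper costs one small object plus per-read bounds checks, which is usually negligible next to the convenience of automatic position tracking, but only a benchmark of the real Read() call pattern can confirm that for a given workload.

        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;

        public class WrapperDemo {
            public static void main(String[] args) {
                byte[] data = {0x01, 0x00, 0x00, 0x00, 0x41, 0x42};  // little-endian int 1, then "AB"

                // Wrapper style: the buffer tracks the read position for us.
                ByteBuffer buf = ByteBuffer.wrap(data).order(ByteOrder.LITTLE_ENDIAN);
                int id = buf.getInt();
                byte first = buf.get();
                System.out.println(id + " " + (char) first + " next offset=" + buf.position());

                // Manual style: the caller owns and advances the offset.
                int[] offset = {0};
                int id2 = readIntLE(data, offset);
                System.out.println(id2 + " next offset=" + offset[0]);
            }

            static int readIntLE(byte[] d, int[] offset) {
                int o = offset[0];
                offset[0] = o + 4;
                return (d[o] & 0xFF) | (d[o + 1] & 0xFF) << 8 | (d[o + 2] & 0xFF) << 16 | (d[o + 3] & 0xFF) << 24;
            }
        }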

    Read the article

  • How do you read data from an ADODB stream in ASP as byte values?

    - by user89691
    I have an ASP routine that gets a binary file's contents and writes it to a stream. The intention is to read it from the stream and process it at the server. So I have:

        ResponseBody = SomeRequest (SomeURL) ;
        var BinaryInputStream = Server.CreateObject ("ADODB.Stream") ;
        BinaryInputStream.Type = 1 ; // binary
        BinaryInputStream.Open ;
        BinaryInputStream.Write (ResponseBody) ;
        BinaryInputStream.Position = 0 ;
        var DataByte = BinaryInputStream.Read (1) ;
        Response.Write (typeof (DataByte)) ; // displays "unknown"

    How do I get the byte value of the byte I have just read from the stream? Asc() and byte() don't work (JScript). TIA

    Read the article

  • Are frameworks using byte-code generation creating leaky abstractions?

    - by Gabriel Šcerbák
    My point is, if you don't understand the abstraction of a framework, you can still decompile it and understand it, because you know the language, e.g. Java. However, when byte-code generation happens, you have to understand an even lower level: JVM byte code. I am really afraid of using any such frameworks, of which there are many. Most of the time I think the reason for byte-code generation is simply a lack of language features such as metaprogramming. Do you agree? What is your opinion and argument? How do you deal with the problem of leaky abstractions in those frameworks?

    Read the article

  • Interop Structure: Should Unsigned Short be Mapped to byte[]?

    - by Ngu Soon Hui
    I have this C++ structure:

        typedef struct _FILE_OP_BLOCK
        {
            unsigned short fid;       // objective file ID
            unsigned short offset;    // operating offset
            unsigned char  len;       // buffer length (update) / read length (read)
            unsigned char  buff[MAX_BUFF_SIZE];
        } FILE_OP_BLOCK;

    Now I want to map it in .NET. The tricky thing is that I should pass a 2-byte array for fid and an integer for len, even though in C# fid is an unsigned short and len is an unsigned char. I wonder whether my structure (in C#) below is correct?

        public struct File_OP_Block
        {
            [MarshalAs(UnmanagedType.ByValArray, SizeConst = 2)]
            public byte[] fid;
            public ushort offset;
            public byte length;
            [MarshalAs(UnmanagedType.ByValArray, SizeConst = 240)]
            public char[] buff;
        }

    Read the article

  • byte-sized bit pattern in C and its relevance?

    - by Nikunj Banka
    I am reading Kernighan and Ritchie's C programming language book, and on page 37 it mentions byte-sized bit patterns like '\013' for vertical tab and '\007' for the bell character. My doubts: what is byte-sized about them, and what is a bit pattern? What relevance does this hold and where can I apply it? Is it in any sense related to escape sequences? I can't seem to find any information whatsoever about these byte-sized bit patterns on the web. Please help. Thanks.
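    The "byte-sized bit pattern" in K&R is just the octal escape sequence: '\013' names the single byte whose bits are 00001011 (decimal 11, the vertical tab), and '\007' is 00000111 (the bell). So yes, it is exactly the escape-sequence mechanism; octal (and hex) escapes let you write any byte value, including ones with no printable character. Java happens to accept the same octal escapes, so here is a quick check sketched in Java rather than C:

        public class OctalEscapes {
            public static void main(String[] args) {
                char vt   = '\013';   // octal 13 = decimal 11 = vertical tab
                char bell = '\007';   // octal 7  = decimal 7  = bell
                // Print the numeric value and the bit pattern of the low byte.
                System.out.println((int) vt + " -> " + Integer.toBinaryString(vt));     // 11 -> 1011
                System.out.println((int) bell + " -> " + Integer.toBinaryString(bell)); // 7 -> 111
            }
        }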

    Read the article

  • Perl: How do I extract certain bits from a byte and then convert these bits to a hex value?

    - by Siegfried Hepp
    I need to extract certain bits of a byte and convert the extracted bits back to a hex value. Example (the value of the byte is 0xD2):

        76543210   bit position
        11010010   is 0xD2

    Bits 0-3 define the channel, which is 0010b, i.e. 0x2.
    Bits 4-5 define the controller, which is 01b, i.e. 0x1.
    Bits 6-7 define the port, which is 11b, i.e. 0x3.

    I somehow need to get from the byte 0xD2 to channel 0x2, controller 0x1, port 0x3. I googled a lot and found the functions pack/unpack, vec and sprintf, but I'm scratching my head over how to use them to achieve this. Any idea how to do this in Perl?
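    The extraction itself is just masking and shifting, and the same & and >> operators work on a numeric byte value in Perl (with sprintf to format the result as hex, and unpack('C', ...) first if the input arrives as a raw one-byte string). Here is the layout from the question worked through in Java, purely for illustration:

        public class BitFields {
            public static void main(String[] args) {
                int b = 0xD2;                      // 1101 0010

                int channel    = b & 0x0F;         // bits 0-3 -> 0x2
                int controller = (b >> 4) & 0x03;  // bits 4-5 -> 0x1
                int port       = (b >> 6) & 0x03;  // bits 6-7 -> 0x3

                System.out.printf("channel=0x%X controller=0x%X port=0x%X%n", channel, controller, port);
            }
        }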

    Read the article

  • Constructing a logical expression which will count the bits in a byte.

    - by danatel
    When interviewing new candidates, we usually ask them to write a piece of C code to count the number of bits with value 1 in a given byte variable (e.g. the byte 3 has two 1-bits). I know all the common answers, such as right-shifting eight times, or indexing a constant table of 256 precomputed results. But is there a smarter way that avoids the precomputed table? What is the shortest combination of byte operations (AND, OR, XOR, +, -, binary negation, left and right shift) that computes the number of bits?
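    Two table-free answers that come up in practice, sketched in Java (the expressions are plain C as well): Kernighan's loop, which clears the lowest set bit until nothing is left, and the branch-free SWAR form that adds neighbouring bit fields in place using only the listed operators.

        public class PopCount {
            // Kernighan: each iteration clears the lowest set bit, so it loops once per 1-bit.
            static int kernighan(int b) {
                int count = 0;
                for (b &= 0xFF; b != 0; b &= b - 1) count++;
                return count;
            }

            // Branch-free SWAR for a single byte: pairwise sums, then nibble sums, then the final add.
            static int swar(int b) {
                b &= 0xFF;
                b = (b & 0x55) + ((b >> 1) & 0x55);  // 2-bit fields
                b = (b & 0x33) + ((b >> 2) & 0x33);  // 4-bit fields
                return (b & 0x0F) + (b >> 4);        // 8-bit total
            }

            public static void main(String[] args) {
                for (int v : new int[]{3, 0xD2, 0xFF, 0}) {
                    System.out.println(v + ": " + kernighan(v) + " " + swar(v) + " " + Integer.bitCount(v));
                }
            }
        }

    Kernighan's loop is usually the expected interview answer; the SWAR version is branch-free and sticks to AND, +, and shifts.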

    Read the article

  • 4K sectors: Why are hard drives moving to 4096 byte sectors, vs. 512 byte sectors? Are 4K sectors a

    - by Chris W. Rea
    I've noticed that some Western Digital hard drives are now sporting 4K sectors, that is, the sectors are larger: 4096 bytes vs. the actual de facto standard of 512 bytes. So:

    - What's the big deal with 4K sectors? Is it marketing hype, or a real advantage?
    - Why should somebody building a new PC care, or not, about 4K sectors?
    - Why is this transition taking place now? Why didn't it happen sooner?
    - Are there things to look out for when buying a 4K sector hard drive, e.g. incompatibility?
    - Anything else we should know about 4K sectors?

    Read the article

  • AES decryption in Java - IvParameterSpec too big

    - by user1277269
    Im going to decrypt a plaintext with two keys. As you see in the picture were have one encrypted file wich contains KEY1(128 bytes),KEYIV(128 bytes),key2(128bytes) wich is not used in this case then we have the ciphertext. The error I get here is "Exception in thread "main" java.security.InvalidAlgorithmParameterException: Wrong IV length: must be 16 bytes long. but it is 64 bytes." Picture: http://i264.photobucket.com/albums/ii200/XeniuM05/bg_zps0a523659.png public class AES { public static void main(String[] args) throws Exception { byte[] encKey1 = new byte[128]; byte[] EncIV = new byte[256]; byte[] UnEncIV = new byte[128]; byte[] unCrypKey = new byte[128]; byte[] unCrypText = new byte[1424]; File f = new File("C://ftp//ciphertext.enc"); FileInputStream fis = new FileInputStream(F); byte[] EncText = new byte[(int) f.length()]; fis.read(encKey1); fis.read(EncIV); fis.read(EncText); EncIV = Arrays.copyOfRange(EncIV, 128, 256); EncText = Arrays.copyOfRange(EncText, 384, EncText.length); System.out.println(EncText.length); KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType()); char[] password = "lab1StorePass".toCharArray(); java.io.FileInputStream fos = new java.io.FileInputStream( "C://ftp//lab1Store"); ks.load(fos, password); char[] passwordkey1 = "lab1KeyPass".toCharArray(); PrivateKey Lab1EncKey = (PrivateKey) ks.getKey("lab1EncKeys", passwordkey1); Cipher rsaDec = Cipher.getInstance("RSA"); // set cipher to RSA decryption rsaDec.init(Cipher.DECRYPT_MODE, Lab1EncKey); // initalize cipher ti lab1key unCrypKey = rsaDec.doFinal(encKey1); // Decryps first key UnEncIV = rsaDec.doFinal(EncIV); //decryps encive byte array to undecrypted bytearray---- OBS! Error this is 64 BYTES big, we want 16? System.out.println("lab1key "+ unCrypKey +" IV " + UnEncIV); //-------CIPHERTEXT decryption--------- Cipher AESDec = Cipher.getInstance("AES/CBC/PKCS5Padding"); //---------convert decrypted bytearrays to acctual keys SecretKeySpec unCrypKey1 = new SecretKeySpec(unCrypKey, "AES"); IvParameterSpec ivSpec = new IvParameterSpec(UnEncIV); AESDec.init(Cipher.DECRYPT_MODE, unCrypKey1, ivSpec ); unCrypText = AESDec.doFinal(EncText); // Convert decrypted cipher bytearray to string String deCryptedString = new String(unCrypKey); System.out.println(deCryptedString); }
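    The exception narrows things down: for AES/CBC the IvParameterSpec must be exactly one block, 16 bytes, regardless of the key size, so a 64-byte value can never be passed as-is. If the RSA-decrypted block really holds the IV plus other material, only the 16 IV bytes belong in the spec; which 16 they are depends on the file layout, so the offset in this sketch is an assumption rather than something the question confirms.

        import java.util.Arrays;
        import javax.crypto.spec.IvParameterSpec;

        public class IvFix {
            // AES/CBC needs an IV of exactly one block (16 bytes), no matter what the key size is.
            // Taking the first 16 of the decrypted bytes is an assumption about the file layout.
            static IvParameterSpec ivFrom(byte[] decryptedIvBlock) {
                return new IvParameterSpec(Arrays.copyOfRange(decryptedIvBlock, 0, 16));
            }

            public static void main(String[] args) {
                byte[] sixtyFour = new byte[64];
                System.out.println(ivFrom(sixtyFour).getIV().length);  // 16
            }
        }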

    Read the article

  • More Efficient Data Structure for Large Layered Tile Map

    - by Stupac
    It seems like the popular method is to break the map up into regions and load them as needed, my problem is that in my game there are many AI entities other than the player out performing actions in virtually all the regions of the map. Let's just say I have a 5000x5000 map, when I use a 2D array of byte's to render it my game uses around 17 MB of memory, as soon as I change that data structure to a my own defined MapCell class (which only contains a single field: byte terrain) my game's memory consumption rockets up to 400+ MB. I plan on adding layering, so an array of byte's won't cut it and I figure I'd need to add a List of some sort to the MapCell class to provide objects in the layers. I'm only rendering tiles that are on screen, but I need the rest of the map to be represented in memory since it is constantly used in Update. So my question is, how can I reduce the memory consumption of my map while still maintaining the above requirements? Thank you for your time! Here's a few snippets my C# code in XNA4: public static void LoadMapData() { // Test map generations int xSize = 5000; int ySize = 5000; MapCell[,] map = new MapCell[xSize,ySize]; //byte[,] map = new byte[xSize, ySize]; Terrain[] terrains = new Terrain[4]; terrains[0] = grass; terrains[1] = dirt; terrains[2] = rock; terrains[3] = water; Random random = new Random(); for(int x = 0; x < xSize; x++) { for(int y = 0; y < ySize; y++) { //map[x,y] = new MapCell(terrains[random.Next(4)]); map[x,y] = new MapCell((byte)random.Next(4)); //map[x, y] = (byte)random.Next(4); } } testMap = new TileMap(map, xSize, ySize); // End test map setup currentMap = testMap; } public class MapCell { //public TerrainType terrain; public byte terrain; public MapCell(byte itsTerrain) { terrain = itsTerrain; } // the type of terrain this cell is treated as /*public Terrain terrain { get; set; } public MapCell(Terrain itsTerrain) { terrain = itsTerrain; }*/ }
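    The jump from ~17 MB to 400+ MB is mostly per-object overhead: a 5000x5000 grid of MapCell references means 25 million references plus 25 million small heap objects (each carrying a header and padding on top of the single byte of terrain), versus roughly 25 MB for one flat byte array. One way to keep the byte-per-cell representation and still get layers is a flat byte array per layer, sketched below in Java for brevity (the same shape works as byte[] fields in C#/XNA); anything genuinely sparse, such as items or AI markers, can then live in a dictionary keyed by y * width + x instead of in a per-cell list.

        public class LayeredTileMap {
            private final int width, height;
            private final byte[][] layers;          // one flat byte array per layer, no per-cell objects

            public LayeredTileMap(int width, int height, int layerCount) {
                this.width = width;
                this.height = height;
                this.layers = new byte[layerCount][width * height];
            }

            public byte get(int layer, int x, int y) {
                return layers[layer][y * width + x];
            }

            public void set(int layer, int x, int y, byte tile) {
                layers[layer][y * width + x] = tile;
            }

            public static void main(String[] args) {
                // 5000 x 5000 with 3 layers: 3 * 25,000,000 bytes, still far below the MapCell version.
                LayeredTileMap map = new LayeredTileMap(5000, 5000, 3);
                map.set(1, 123, 456, (byte) 2);
                System.out.println(map.get(1, 123, 456));
            }
        }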

    Read the article

  • Ubuntu 12.10 graphics does not work properly

    - by madox2
    My graphic on ubuntu 12.10 does not work as well as on 12.04. After upgrade I installed driver sudo apt-add-repository ppa:ubuntu-x-swat/x-updates sudo apt-get update sudo apt-get install nvidia-current for my Nvidia 450 GTS graphics card. But sometimes I see slight lag on my videos played in VLC player, some of desktop and window effects are lagging, sometimes I can see an indescribable souce of pixels on my screen at the start of ubuntu and so on. I feel difference between 12.04 and 12.10 in favour of former version. Does anyone know whats wrong or what I am missing? here is output of lspci -k: 00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09) 00:01.0 PCI bridge: Intel Corporation Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port (rev 09) Kernel driver in use: pcieport Kernel modules: shpchp 00:16.0 Communication controller: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1 (rev 04) Subsystem: Giga-byte Technology Device 1c3a Kernel driver in use: mei Kernel modules: mei 00:1a.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 (rev 05) Subsystem: Giga-byte Technology Device 5006 Kernel driver in use: ehci_hcd 00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 05) Subsystem: Giga-byte Technology Device a000 Kernel driver in use: snd_hda_intel Kernel modules: snd-hda-intel 00:1c.0 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1 (rev b5) Kernel driver in use: pcieport Kernel modules: shpchp 00:1c.4 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 5 (rev b5) Kernel driver in use: pcieport Kernel modules: shpchp 00:1d.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 (rev 05) Subsystem: Giga-byte Technology Device 5006 Kernel driver in use: ehci_hcd 00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev a5) 00:1f.0 ISA bridge: Intel Corporation H61 Express Chipset Family LPC Controller (rev 05) Subsystem: Giga-byte Technology Device 5001 Kernel driver in use: lpc_ich Kernel modules: lpc_ich 00:1f.2 IDE interface: Intel Corporation 6 Series/C200 Series Chipset Family 4 port SATA IDE Controller (rev 05) Subsystem: Giga-byte Technology Device b002 Kernel driver in use: ata_piix 00:1f.3 SMBus: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller (rev 05) Subsystem: Giga-byte Technology Device 5001 Kernel modules: i2c-i801 00:1f.5 IDE interface: Intel Corporation 6 Series/C200 Series Chipset Family 2 port SATA IDE Controller (rev 05) Subsystem: Giga-byte Technology Device b002 Kernel driver in use: ata_piix 01:00.0 VGA compatible controller: NVIDIA Corporation GF116 [GeForce GTS 450] (rev a1) Subsystem: CardExpert Technology Device 0401 Kernel driver in use: nvidia Kernel modules: nvidia_current, nouveau, nvidiafb 01:00.1 Audio device: NVIDIA Corporation GF116 High Definition Audio Controller (rev a1) Subsystem: CardExpert Technology Device 0401 Kernel driver in use: snd_hda_intel Kernel modules: snd-hda-intel 03:00.0 Ethernet controller: Atheros Communications Inc. AR8151 v2.0 Gigabit Ethernet (rev c0) Subsystem: Giga-byte Technology Device e000 Kernel driver in use: atl1c Kernel modules: atl1c

    Read the article

  • How to pass a byte array as a parameter in a URL on iPhone?

    - by Jindal
    I am using the following code to get a byte array (thanks to this post):

        NSData *data = [NSData dataWithContentsOfFile:filePath];
        NSUInteger len = [data length];
        Byte *byteData = (Byte *)malloc(len);
        memcpy(byteData, [data bytes], len);

    Is this the right method to do so? Now, how can I pass the byte array in a URL? Thank you for the help.

    Read the article
