Search Results

Search found 4304 results on 173 pages for 'bytes'.

Page 144/173 | < Previous Page | 140 141 142 143 144 145 146 147 148 149 150 151  | Next Page >

  • Can't write to physical drive in Win 7?

    - by matt
    I wrote a disk utility that allowed you to erase whole physical drives. it uses the windows file api, calling : destFile = CreateFile("\\.\PhysicalDrive1", GENERIC_WRITE, FILE_SHARE_READ | FILE_SHARE_WRITE, NULL, OPEN_EXISTING,createflags, NULL); and then just calling WriteFile, and making sure you write in multiples of sectors, i.e. 512 bytes. this worked fine in the past, on XP, and even on the Win7 RC, all you have to do is make sure you are running it as an administrator. but now I have retail Win7 professional, it doesn't work anymore! the drives still open fine for writing, but calling WriteFile on the successfully opened Drive now fails! does anyone know why this might be? could it have something to do with opening it with shared flags? this is always what I have done before, and its worked. could it be that something is now sharing the drive? blocking the writes? is there some way to properly "unmount" A drive, or atleast the partitions on it so that I would have exclusive access to it? some other tools that used to work don't any more either, but some do, like the WD Diagnostic's erase functionality. and after it has erased the drive, my tool then works on it too! leading me to belive there is some "unmount" process I need to be doing to the drive first, to free up permission to write to it. Any ideas?

    Read the article

  • GZipStream not reading the whole file

    - by Ed
    I have some code that downloads gzipped files and decompresses them. The problem is, I can't get it to decompress the whole file; it only reads the first 4096 bytes and then about 500 more. Byte[] buffer = new Byte[4096]; int count = 0; FileStream fileInput = new FileStream("input.gzip", FileMode.Open, FileAccess.Read, FileShare.Read); FileStream fileOutput = new FileStream("output.dat", FileMode.Create, FileAccess.Write, FileShare.None); GZipStream gzipStream = new GZipStream(fileInput, CompressionMode.Decompress, true); // Read from gzip stream while ((count = gzipStream.Read(buffer, 0, buffer.Length)) > 0) { // Write to output file fileOutput.Write(buffer, 0, count); } // Close the streams ... I've checked the downloaded file; it's 13MB when compressed, and contains one XML file. I've manually decompressed the XML file, and the content is all there. But when I do it with this code, it only outputs the very beginning of the XML file. Anyone have any ideas why this might be happening?

    Read the article

  • Binary file email attachment problem

    - by Alan Harris-Reid
    Hi there, Using Python 3.1.2 I am having a problem sending binary attachment files (jpeg, pdf, etc.); MIMEText attachments work fine. The code in question is as follows... for file in self.attachments: part = MIMEBase('application', "octet-stream") part.set_payload(open(file,"rb").read()) encoders.encode_base64(part) part.add_header('Content-Disposition', 'attachment; filename="%s"' % file) msg.attach(part) # msg is an instance of MIMEMultipart() server = smtplib.SMTP(host, port) server.login(username, password) server.sendmail(from_addr, all_recipients, msg.as_string()) However, way down in the call stack (see traceback below), it looks as though msg.as_string() has received an attachment whose payload is of 'bytes' type instead of string. Has anyone any idea what might be causing the problem? Any help would be appreciated. Alan builtins.TypeError: string payload expected: File "c:\Dev\CommonPY\Scripts\email_send.py", line 147, in send server.sendmail(self.from_addr, all_recipients, msg.as_string()) File "c:\Program Files\Python31\Lib\email\message.py", line 136, in as_string g.flatten(self, unixfrom=unixfrom) File "c:\Program Files\Python31\Lib\email\generator.py", line 76, in flatten self._write(msg) File "c:\Program Files\Python31\Lib\email\generator.py", line 101, in _write self._dispatch(msg) File "c:\Program Files\Python31\Lib\email\generator.py", line 127, in _dispatch meth(msg) File "c:\Program Files\Python31\Lib\email\generator.py", line 181, in _handle_multipart g.flatten(part, unixfrom=False) File "c:\Program Files\Python31\Lib\email\generator.py", line 76, in flatten self._write(msg) File "c:\Program Files\Python31\Lib\email\generator.py", line 101, in _write self._dispatch(msg) File "c:\Program Files\Python31\Lib\email\generator.py", line 127, in _dispatch meth(msg) File "c:\Program Files\Python31\Lib\email\generator.py", line 155, in _handle_text raise TypeError('string payload expected: %s' % type(payload))
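    For reference, the binary-payload handling in the email package was reworked in later Python 3 releases; a sketch of the same loop using MIMEApplication, which base64-encodes the raw bytes itself so msg.as_string() only ever sees text (host, credentials and file names below are hypothetical):

      # Sketch: binary attachments via MIMEApplication (modern Python 3).
      import smtplib
      from email.mime.multipart import MIMEMultipart
      from email.mime.application import MIMEApplication

      msg = MIMEMultipart()
      for path in ["photo.jpg", "report.pdf"]:          # hypothetical files
          with open(path, "rb") as f:
              part = MIMEApplication(f.read())          # base64-encoded for you
          part.add_header("Content-Disposition", "attachment", filename=path)
          msg.attach(part)

      with smtplib.SMTP("smtp.example.com", 587) as server:
          server.login("username", "password")
          server.sendmail("from@example.com", ["to@example.com"],
                          msg.as_string())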

    Read the article

  • How to exploit Diffie-Hellman to perform a man-in-the-middle attack

    - by jfisk
    I'm doing a project where Alice and Bob send each other messages using the Diffie-Hellman key exchange. What is throwing me for a loop is how to incorporate the certificate they are using so I can obtain their secret messages. From what I understand about MITM attacks, the MITM acts as an impostor, as seen in this diagram: Below are the details for my project. I understand that they both have g and p agreed upon before communicating, but how would I implement this when they both have a certificate to verify their signatures? Alice prepares ⟨signA(NA, Bob), pkA, certA⟩ where signA is the digital signature algorithm used by Alice, "Bob" is Bob's name, pkA is the public key of Alice, which equals g^x mod p encoded according to X.509 for the fixed g, p specified in the Diffie-Hellman key exchange, and certA is the certificate of Alice that contains Alice's public key that verifies the signature; finally, NA is a nonce (random string) that is 8 bytes long. Bob checks Alice's signature and responds with ⟨signB{NA, NB, Alice}, pkB, certB⟩. Alice gets the message, checks her nonce NA, and calculates the joint key based on pkA, pkB according to the Diffie-Hellman key exchange. Then Alice submits the message ⟨signA{NA, NB, Bob}, EK(MA), certA⟩ to Bob, and Bob responds with ⟨signB{NA, NB, Alice}, EK(MB), certB⟩, where MA and MB are their corresponding secret messages.
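    For the attack itself: the man in the middle runs a separate Diffie-Hellman exchange with each party, substituting his own public value in both directions; the signatures and certificates defeat this only if each side actually verifies them, so the exercise comes down to attacking a participant that skips or botches that check. A toy sketch of the key arithmetic, with deliberately tiny, insecure parameters:

      # Toy Diffie-Hellman man-in-the-middle key arithmetic; not real crypto.
      import random

      p, g = 23, 5                     # public parameters agreed in advance

      x = random.randrange(2, p - 1)   # Alice's secret
      y = random.randrange(2, p - 1)   # Bob's secret
      m = random.randrange(2, p - 1)   # Mallory's secret

      pkA = pow(g, x, p)               # Alice sends g^x mod p
      pkB = pow(g, y, p)               # Bob sends g^y mod p
      pkM = pow(g, m, p)               # Mallory forwards g^m mod p to both sides

      key_A = pow(pkM, x, p)           # the "joint" key Alice derives
      key_B = pow(pkM, y, p)           # the "joint" key Bob derives

      # Mallory can derive both session keys and re-encrypt traffic in transit.
      assert key_A == pow(pkA, m, p)
      assert key_B == pow(pkB, m, p)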

    Read the article

  • Add Hexadecimal Header Info to JPEG File Using Java

    - by jboyd
    I need to add header info to a JPEG file in order to get it to work properly when shared on some websites. I've tracked down the correct info through a lot of hex digging, but now I'm kind of stuck trying to get it into the file. I know where in the file it needs to go, and I know how long it is. My problem is that RandomAccessFile just overwrites existing data in the file and FileOutputStream appends the data to the end. I don't want either; I want to INSERT data starting at the third byte. My example code: File fileToChange = new File("someimage.jpg"); byte[] i = new byte[2]; i[0] = (byte)Integer.decode("0xcc"); i[1] = (byte)Integer.decode("0xcc"); RandomAccessFile f = new RandomAccessFile(fileToChange, "rw"); long aPositionWhereIWantToGo = 2; f.seek(aPositionWhereIWantToGo); // position the file pointer at the third byte f.write((byte[])i); f.close(); So this doesn't work because it overwrites and does not insert; I can't find any way to just insert data into a file.
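    No file system supports in-place insertion, so the standard trick is to read everything from the insertion point onward, write the new bytes, then write the saved tail back; the RandomAccessFile version is the same seek/read/write dance. A sketch of the splice:

      # Sketch: "insert" bytes at an offset by rewriting the tail of the file.
      def insert_bytes(path, offset, new_bytes):
          with open(path, "r+b") as f:
              f.seek(offset)
              tail = f.read()          # save everything after the insert point
              f.seek(offset)
              f.write(new_bytes)       # the header bytes to inject
              f.write(tail)            # shifted original data

      insert_bytes("someimage.jpg", 2, b"\xcc\xcc")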

    Read the article

  • Sync Vs. Async Sockets Performance in .NET

    - by Michael Covelli
    Everything that I read about sockets in .NET says that the asynchronous pattern gives better performance (especially with the new SocketAsyncEventArgs, which saves on allocation). I think this makes sense if we're talking about a server with many client connections, where it's not possible to allocate one thread per connection; then I can see the advantage of using the ThreadPool threads and getting async callbacks on them. But in my app, I'm the client and I just need to listen to one server sending market tick data over one TCP connection. Right now, I create a single thread, set the priority to Highest, and call Socket.Receive() with it. My thread blocks on this call and wakes up once new data arrives. If I were to switch this to an async pattern so that I get a callback when there's new data, I see two issues: (1) The threadpool threads will have default priority, so it seems they will be strictly worse than my own thread, which has Highest priority. (2) I'll still have to funnel everything through a single thread at some point. Say that I get N callbacks at almost the same time on N different threadpool threads notifying me that there's new data. The N byte arrays that they deliver can't be processed on the threadpool threads, because there's no guarantee that they represent N unique market data messages, since TCP is stream based. I'll have to lock, put the bytes into an array anyway, and signal some other thread that can process what's in the array. So I'm not sure what having N threadpool threads is buying me. Am I thinking about this wrong? Is there a reason to use the async pattern in my specific case of one client connected to one server?

    Read the article

  • (PL/SQL) Oracle stored procedure parameter size limit (VARCHAR2)

    - by Jude Lee
    Hello there. I have a problem with my Oracle stored procedure, as follows. Code: CREATE OR REPLACE PROCEDURE APP.pr_ap_gateway ( aiov_req IN OUT VARCHAR2, aov_rep01 OUT VARCHAR2, aov_rep02 OUT VARCHAR2, aov_rep03 OUT VARCHAR2, aov_rep04 OUT VARCHAR2, aov_rep05 OUT VARCHAR2 ) IS v_header VARCHAR (100); v_case VARCHAR (4); v_err_no_case VARCHAR (5) := '00004'; BEGIN DBMS_OUTPUT.ENABLE (1000000); aov_rep01 := lpad(' ', 190, ' '); dbms_output.put_line('>> ['||length(aov_rep01)||']'); aov_rep01 := lpad(' ', 199, ' '); dbms_output.put_line('>> ['||length(aov_rep01)||']'); aov_rep01 := lpad(' ', 200, ' '); dbms_output.put_line('>> ['||length(aov_rep01)||']'); aov_rep01 := lpad(' ', 201, ' '); dbms_output.put_line('>> ['||length(aov_rep01)||']'); END pr_ap_gateway; / Results: >> [190] >> [199] >> [200] and then the error 'buffer overflow': ORA-06502: PL/SQL: numeric or value error. I know that the VARCHAR2 type can hold 32KB in PL/SQL, but in my test the VARCHAR2 parameter holds only 200 bytes. What's wrong in this situation? This procedure will be called by a Java daemon program, so there's no declaration of the parameters' size before calling the procedure. Thanks in advance for your reply.

    Read the article

  • Learning AES: the KeyBytes

    - by Tom Brito
    I got the following example from here: import java.security.Security; import javax.crypto.Cipher; import javax.crypto.spec.SecretKeySpec; public class MainClass { public static void main(String[] args) throws Exception { Security.addProvider(new org.bouncycastle.jce.provider.BouncyCastleProvider()); byte[] input = "www.java2s.com".getBytes(); byte[] keyBytes = new byte[] { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17 }; SecretKeySpec key = new SecretKeySpec(keyBytes, "AES"); Cipher cipher = Cipher.getInstance("AES/ECB/PKCS7Padding", "BC"); System.out.println(new String(input)); // encryption pass cipher.init(Cipher.ENCRYPT_MODE, key); byte[] cipherText = new byte[cipher.getOutputSize(input.length)]; int ctLength = cipher.update(input, 0, input.length, cipherText, 0); ctLength += cipher.doFinal(cipherText, ctLength); System.out.println(new String(cipherText)); System.out.println(ctLength); // decryption pass cipher.init(Cipher.DECRYPT_MODE, key); byte[] plainText = new byte[cipher.getOutputSize(ctLength)]; int ptLength = cipher.update(cipherText, 0, ctLength, plainText, 0); ptLength += cipher.doFinal(plainText, ptLength); System.out.println(new String(plainText)); System.out.println(ptLength); } } I imagine that the byte[] keyBytes should be randomly generated, so I went to test the maximum size before doing that. When I add one more byte, 0x18, to the array, this exception is raised: InvalidKeyException: Key length not 128/192/256 bits. But the original bytes (0x00 to 0x17) don't obviously add up to 128, 192 or 256 either. I would like to understand the math here... can anyone explain it to me? Thanks!
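    The check is on the key length in bits, not on the byte values: 0x00 through 0x17 is 24 bytes, and 24 * 8 = 192 bits, one of AES's three legal key sizes (128/192/256 bits = 16/24/32 bytes); appending 0x18 makes 25 bytes = 200 bits, which no AES variant accepts. A quick sketch of the arithmetic:

      # AES keys must be exactly 16, 24 or 32 bytes (128/192/256 bits).
      import os

      key = bytes(range(0x18))       # 0x00 .. 0x17 -> 24 bytes, as in the sample
      print(len(key), len(key) * 8)  # 24 192  -> a legal 192-bit key

      bad = key + b"\x18"            # 25 bytes -> 200 bits, rejected
      print(len(bad) * 8 in (128, 192, 256))    # False

      random_key = os.urandom(24)    # how a random 192-bit key could be made
      assert len(random_key) * 8 == 192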

    Read the article

  • Download, pause and resume a download using Indy components

    - by Salvador
    I'm currently using the TIdHTTP component to download a file from the internet. I'm wondering whether it is possible to pause and resume the download using this component, or maybe another Indy component. This is my current code, which works fine for downloading a file (without resume), but now I want to pause the download, close my app, and when my app restarts, resume the download from the last saved position. var Http: TIdHTTP; MS : TMemoryStream; begin Result:= True; Http := TIdHTTP.Create(nil); MS := TMemoryStream.Create; try try Http.OnWork:= HttpWork; //this event reports the actual progress of the download Http.Head(Url); FSize := Http.Response.ContentLength; AddLog('Downloading File '+GetURLFilename(Url)+' - '+FormatFloat('#,',FSize)+' Bytes'); Http.Get(Url, MS); MS.SaveToFile(LocalFile); except on E : Exception do Begin Result:=False; AddLog(E.Message); end; end; finally Http.Free; MS.Free; end; end;
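    Resuming over HTTP rests on the Range request header (the server must answer 206 Partial Content): save how many bytes you already have, then request "bytes=N-" and append. A protocol-level sketch in Python rather than Indy, with a hypothetical URL:

      # Sketch: resume a download from the current size of the partial file.
      import os
      import urllib.request

      url, local = "http://example.com/big.zip", "big.zip"    # hypothetical
      pos = os.path.getsize(local) if os.path.exists(local) else 0

      req = urllib.request.Request(url, headers={"Range": "bytes=%d-" % pos})
      with urllib.request.urlopen(req) as resp, open(local, "ab") as out:
          while True:
              chunk = resp.read(64 * 1024)
              if not chunk:
                  break
              out.write(chunk)       # appended after the saved position

    The same idea applies with TIdHTTP: persist the byte count on pause, and on restart issue a ranged GET and append to the partial file.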

    Read the article

  • How to write PIL image filter for plain pgm format?

    - by Juha
    How can I write a filter for the Python Imaging Library for the PGM plain ASCII format (P2)? The problem here is that the basic PIL filter assumes a constant number of bytes per pixel. My goal is to open feep.pgm with Image.open(). See http://netpbm.sourceforge.net/doc/pgm.html or below. An alternative solution would be another well-documented ASCII grayscale format that is supported by PIL and all major graphics programs. Any suggestions? br, Juha feep.pgm: P2 # feep.pgm 24 7 15 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 3 3 3 3 0 0 7 7 7 7 0 0 11 11 11 11 0 0 15 15 15 15 0 0 3 0 0 0 0 0 7 0 0 0 0 0 11 0 0 0 0 0 15 0 0 15 0 0 3 3 3 0 0 0 7 7 7 0 0 0 11 11 11 0 0 0 15 15 15 15 0 0 3 0 0 0 0 0 7 0 0 0 0 0 11 0 0 0 0 0 15 0 0 0 0 0 3 0 0 0 0 0 7 7 7 7 0 0 11 11 11 11 0 0 15 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 edit: Thanks for the answer, it works... but I need a solution that uses Image.open(). Most Python programs out there use PIL for graphics manipulation (google: python image open), so I need to be able to register a filter with PIL; then I can use any software that uses PIL. I'm thinking mostly of programs that depend on scipy, pylab, etc.
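    Plain P2 is easy to parse by hand, since it is just whitespace-separated ASCII tokens with '#' comments; a sketch that reads it into a PIL image, so code built on PIL images still works even without a registered Image.open() plugin:

      # Sketch: parse plain-ASCII PGM (P2) into a PIL "L" (grayscale) image.
      from PIL import Image

      def load_plain_pgm(path):
          tokens = []
          with open(path) as f:
              for line in f:                     # drop '#' comments, tokenize
                  tokens.extend(line.split("#", 1)[0].split())
          if tokens[0] != "P2":
              raise ValueError("not a plain PGM file")
          width, height, maxval = (int(t) for t in tokens[1:4])
          pixels = [int(t) * 255 // maxval
                    for t in tokens[4:4 + width * height]]
          img = Image.new("L", (width, height))
          img.putdata(pixels)
          return img

      img = load_plain_pgm("feep.pgm")

    Registering this as a real PIL plugin, so that Image.open() itself works, means wrapping the parser in an ImageFile subclass and announcing it with Image.register_open() and Image.register_extension().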

    Read the article

  • Trouble with Berkeley DB JE Base API Secondary Databases and Sequences

    - by milosz
    I have a class Document which consists of an Id (int) and a Url (String). I would like to have a primary index on Id and a secondary index on Url. I would also like a sequence that auto-increments Id. So I create a SecondaryDatabase and then I create a Sequence, and during initialisation of the Sequence I get an exception: Exception in thread "main" java.lang.IllegalArgumentException at com.sleepycat.util.UtfOps.getCharLength(UtfOps.java:137) at com.sleepycat.util.UtfOps.bytesToString(UtfOps.java:259) at com.sleepycat.bind.tuple.TupleInput.readString(TupleInput.java:152) at pl.edu.mimuw.zbd.berkeley.zadanie.rozwiazanie.MyDocumentBiding.entryToObject(MyDocumentBiding.java:12) at pl.edu.mimuw.zbd.berkeley.zadanie.rozwiazanie.MyDocumentBiding.entryToObject(MyDocumentBiding.java:1) at com.sleepycat.bind.tuple.TupleBinding.entryToObject(TupleBinding.java:76) at pl.edu.mimuw.zbd.berkeley.zadanie.rozwiazanie.UrlKeyCreator.createSecondaryKey(UrlKeyCreator.java:20) at com.sleepycat.je.SecondaryDatabase.updateSecondary(SecondaryDatabase.java:835) at com.sleepycat.je.SecondaryTrigger.databaseUpdated(SecondaryTrigger.java:42) at com.sleepycat.je.Database.notifyTriggers(Database.java:2004) at com.sleepycat.je.Cursor.putNotify(Cursor.java:1692) at com.sleepycat.je.Cursor.putInternal(Cursor.java:1616) at com.sleepycat.je.Cursor.putNoOverwrite(Cursor.java:663) at com.sleepycat.je.Sequence.<init>(Sequence.java:188) at com.sleepycat.je.Database.openSequence(Database.java:546) at pl.edu.mimuw.zbd.berkeley.zadanie.rozwiazanie.MyFullTextSearchEngine.init(MyFullTextSearchEngine.java:131) at pl.edu.mimuw.zbd.berkeley.zadanie.testy.MyFullTextSearchEngineTest.main(MyFullTextSearchEngineTest.java:18) It seems that during the initialisation of the sequence, the secondary database is forced to update. When I debug the entryToObject method of MyDocumentBiding, the bytes it tries to convert to an object seem random. What am I doing wrong?

    Read the article

  • NSKeyedArchiver on NSArray has large size overhead

    - by redguy
    I'm using NSKeyedArchiver in a Mac OS X program which generates data for an iPhone application. I found out that, by default, the resulting archives are much bigger than I expected. Example: NSMutableArray * ar = [NSMutableArray arrayWithCapacity:10]; for (int i = 0; i < 100000; i++) { NSString * s = [NSString stringWithFormat:@"item%06d", i]; [ar addObject:s]; } [NSKeyedArchiver archiveRootObject:ar toFile: @"NSKeyedArchiver.test"]; This stores 10 * 100000 = 1M bytes of useful data, yet the size of the resulting file is almost three megabytes. The overhead seems to grow with the number of items in the array; in this case, for 1000 items, the file was about 22k. "file" reports that it is an "Apple binary property list" (not the XML format). Is there a simple way to prevent this huge overhead? I wanted to use NSKeyedArchiver for the simplicity it provides. I could write data in my own non-generic binary format, but that's not very elegant. Also, aggregating the data into large chunks and feeding those to the NSKeyedArchiver should work, but again, that rather defeats the point of using a simple, easy, ready-to-use archiver. Am I missing some method call or usage pattern that would reduce this overhead?

    Read the article

  • Python: Seeing all files in Hex.

    - by Recursion
    I am writing a Python script which looks at common computer files and examines them for similar bytes, words, and double words. I need/want to see the files in hex, and cannot really seem to get Python to open a simple file that way. I have tried codecs.open with "hex" as the encoding, but when I operate on the file descriptor it always spits back: File "main.py", line 41, in <module> main() File "main.py", line 38, in main process_file(sys.argv[1]) File "main.py", line 27, in process_file seeker(line.rstrip("\n")) File "main.py", line 15, in seeker for unit in f.read(2): File "/usr/lib/python2.6/codecs.py", line 666, in read return self.reader.read(size) File "/usr/lib/python2.6/codecs.py", line 472, in read newchars, decodedbytes = self.decode(data, self.errors) File "/usr/lib/python2.6/encodings/hex_codec.py", line 50, in decode return hex_decode(input,errors) File "/usr/lib/python2.6/encodings/hex_codec.py", line 42, in hex_decode output = binascii.a2b_hex(input) TypeError: Non-hexadecimal digit found def seeker(_file): f = codecs.open(_file, "rb", "hex") for LINE in f.read(): print LINE f.close() I really just want to see files and operate on them as if I were in a hex editor like xxd. Also, is it possible to read a file in increments of maybe a word at a time? No, this is not homework.
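    No codec is needed: open the file in binary mode ("rb") and format the bytes yourself. An xxd-flavoured sketch, written for Python 3 (the 2.x version differs only in bytes-vs-str details), that also shows reading in fixed increments, here 16 bytes per line grouped as 2-byte words:

      # Sketch: xxd-style hex dump of any file, read in binary mode.
      import binascii

      def hexdump(path, width=16):
          offset = 0
          with open(path, "rb") as f:
              while True:
                  chunk = f.read(width)          # read a fixed increment
                  if not chunk:
                      break
                  words = [binascii.hexlify(chunk[i:i + 2]).decode("ascii")
                           for i in range(0, len(chunk), 2)]
                  text = "".join(chr(b) if 32 <= b < 127 else "."
                                 for b in chunk)
                  print("%08x: %-40s %s" % (offset, " ".join(words), text))
                  offset += len(chunk)

      hexdump("somefile.bin")                    # hypothetical input file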

    Read the article

  • Apache Not Accepting a Path in My Home Folder

    - by Promather
    I have been trying to set up an Apache site to use a folder in my home folder, without any success. I followed exactly the steps on this page: https://help.ubuntu.com/community/ApacheMySQLPHP, yet I did not succeed; I keep getting error 403, which says that the server doesn't have permission to access the requested page. I searched forums and many suggested changing the permissions of the folder. I went straight away and set the permissions to 777, but that didn't solve the problem. I made another search and somebody gave me a clue: it could be because my home folder is encrypted. I believe this could be the problem, but: What is the relation between encryption and Apache? I suppose the Apache server requests the file from the system, rather than trying to access the file's bytes directly! Is there any way to solve this problem? I don't want to move the folder to /var/www because I am using this Apache for testing, so I want whatever change I make to be immediately reflected, rather than having to copy files, which is error prone.

    Read the article

  • valgrind complains about a very simple strtok in C

    - by monkeyking
    Hi, I'm trying to tokenize a string by loading an entire file into a char[] using fread. For some strange reason it does not always work, and valgrind complains in this very small sample program. Given an input file test.txt: first second And the following program: #include <stdio.h> #include <string.h> #include <stdlib.h> #include <sys/stat.h> //returns the filesize in bytes size_t fsize(const char* fname){ struct stat st ; stat(fname,&st); return st.st_size; } int main(int argc, char *argv[]){ FILE *fp = NULL; if(NULL==(fp=fopen(argv[1],"r"))){ fprintf(stderr,"\t-> Error reading file:%s\n",argv[1]); return 0; } char buffer[fsize(argv[1])]; fread(buffer,sizeof(char),fsize(argv[1]),fp); char *str = strtok(buffer," \t\n"); while(NULL!=str){ fprintf(stderr,"token is:%s with strlen:%lu\n",str,strlen(str)); str = strtok(NULL," \t\n"); } return 0; } compiled with gcc test.c -std=c99 -ggdb and run as ./a.out test.txt. Thanks

    Read the article

  • Reloading a JTree during runtime

    - by Patrick Kiernan
    I create a JTree and its model in a class separate from the GUI class. The data for the JTree is extracted from a file. In the GUI class the user can add files from the file system to an AWT List. After the user clicks on a file in the list, I want the JTree to update. The variable name for the JTree is schemaTree. I have the following code for when an item in the list is selected: private void schemaListItemStateChanged(java.awt.event.ItemEvent evt) { int selection = schemaList.getSelectedIndex(); File selectedFile = schemas.get(selection); long fileSize = selectedFile.length(); fileInfoLabel.setText("Size: " + fileSize + " bytes"); schemaParser = new XSDParser(selectedFile.getAbsolutePath()); TreeModel model = schemaParser.generateTreeModel(); schemaTree.setModel(model); } I've updated the code to correspond to the accepted answer, and the JTree now updates correctly based on which file I select in the list.

    Read the article

  • DjangoUnicodeDecodeError while storing pickle'd data.

    - by Jack M.
    I've got a simple dict object I'm trying to store in the database after it has been run through pickle. It seems that Django doesn't like trying to encode this data. I've checked with MySQL, and the query isn't even getting there before the error is thrown, so I don't believe the database is the problem. The dict I'm storing looks like this: { 'ordered': [ { 'value': u'First\xd1ame Last\xd1ame', 'label': u'Full Name' }, { 'value': u'123-456-7890', 'label': u'Phone Number' }, { 'value': u'[email protected]', 'label': u'Email Address' } ], 'cleaned_data': { u'Phone Number': u'123-456-7890', u'Full Name': u'First\xd1ame Last\xd1ame', u'Email Address': u'[email protected]' }, 'post_data': <QueryDict: { u'Phone Number': [u'1234567890'], u'Full Name_1': [u'Last\xd1ame'], u'Full Name_0': [u'First\xd1ame'], u'Email Address': [u'[email protected]'] }>, 'user': <User: itis> } The error that gets thrown is: 'utf8' codec can't decode bytes in position 52-53: invalid data. Positions 52-53 are the first instance of \xd1 (Ñ) in the pickled data. So far I've dug around StackOverflow and found a few questions where the database encoding for the objects was wrong. This doesn't help me because there is no MySQL query yet; this is happening before the database. Google also didn't help much when searching for unicode errors on pickled data. It is probably worth mentioning that if I don't use the Ñ, this code works fine.
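    Pickle output is arbitrary binary, so pushing it through a text/utf8 layer invites exactly this decode error; the usual fix is to base64-wrap the pickle so only ASCII ever reaches the text machinery. A sketch:

      # Sketch: base64-wrap pickles so they survive text columns and utf8 codecs.
      import base64
      import pickle

      def dumps_text(obj):
          return base64.b64encode(pickle.dumps(obj)).decode("ascii")

      def loads_text(text):
          return pickle.loads(base64.b64decode(text))

      blob = dumps_text({"label": "Full Name", "value": "First\xd1ame"})
      assert loads_text(blob)["value"] == "First\xd1ame"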

    Read the article

  • Best approach to storing image pixels in bottom-up order in Java

    - by finnw
    I have an array of bytes representing an image in Windows BMP format, and I would like my library to present it to the Java application as a BufferedImage without copying the pixel data. The main problem is that all implementations of Raster in the JDK store image pixels in top-down, left-to-right order, whereas BMP pixel data is stored bottom-up, left-to-right. If this is not compensated for, the resulting image will be flipped vertically. The most obvious "solution" is to set the SampleModel's scanlineStride property to a negative value and change the band offsets (or the DataBuffer's array offset) to point to the top-left pixel, i.e. the first pixel of the last line in the array. Unfortunately this does not work, because all of the SampleModel constructors throw an exception if given a negative scanlineStride argument. I am currently working around it by forcing the scanlineStride field to a negative value using reflection, but I would like to do it in a cleaner and more portable way if possible. E.g. is there another way to fool the Raster or SampleModel into arranging the pixels in bottom-up order without breaking encapsulation? Or is there a library somewhere that will wrap the Raster and SampleModel, presenting the pixel rows in reverse order? I would prefer to avoid the following approaches: Copying the whole image (for performance reasons: the code must process hundreds of large (≥ 1 Mpixel) images per second, and although the whole image must be available to the application, it will normally access only a tiny, but hard-to-predict, portion of it). Modifying the DataBuffer to perform coordinate transformation (this actually works, but is another "dirty" solution, because the buffer should not need to know about the scanline/pixel layout). Re-implementing the Raster and/or SampleModel interfaces from scratch (but I have a hunch that I will be unable to avoid this).

    Read the article

  • C#: coding with AES

    - by lolalola
    Hi, why can I process only 16 bytes (128 bits) of text? Works: string plainText = "1234567890123456"; Doesn't work: string plainText = "12345678901234561"; Doesn't work: string plainText = "123456789012345"; string plainText = "1234567890123456"; byte[] plainTextBytes = Encoding.UTF8.GetBytes(plainText); byte[] keyBytes = System.Text.Encoding.UTF8.GetBytes("1234567890123456"); byte[] initVectorBytes = System.Text.Encoding.UTF8.GetBytes("1234567890123456"); RijndaelManaged symmetricKey = new RijndaelManaged(); symmetricKey.Mode = CipherMode.CBC; symmetricKey.Padding = PaddingMode.Zeros; ICryptoTransform encryptor = symmetricKey.CreateDecryptor(keyBytes, initVectorBytes); MemoryStream memoryStream = new MemoryStream(); CryptoStream cryptoStream = new CryptoStream(memoryStream, encryptor, CryptoStreamMode.Write); cryptoStream.Write(plainTextBytes, 0, plainTextBytes.Length); cryptoStream.FlushFinalBlock(); byte[] cipherTextBytes = memoryStream.ToArray(); memoryStream.Close(); cryptoStream.Close(); string cipherText = Convert.ToBase64String(cipherTextBytes); Console.ReadLine();

    Read the article

  • Thread implemented as a Singleton

    - by rocknroll
    Hi all, I have a commercial application made with C, C++ and Qt on the Linux platform. The app collects data from different sensors and displays them on a GUI. Each of the protocols for interfacing with the sensors is implemented using the singleton pattern and threads from Qt's QThread class. All the protocols except one work fine. Each protocol's thread run function has the following structure: void <ProtocolClassName>::run() { while(!mStop) //check whether screen is closed or not { mutex.lock(); while(!waitcondition.wait(&mutex,5)) { if(mStop) return; } //Code for receiving and processing incoming data mutex.unlock(); } //end while } Hierarchy of the GUI: 1. Login screen. 2. Screen of action. When a user logs in from the login screen, we enter the action screen where all data is displayed and all the threads for the different sensors start. They wait on the mStop variable in idle time, and when data arrives they jump to receiving and processing it. Incoming data for the problem protocol is 117 bytes. In the main GUI thread there are timers which, when they time out, grab the running instance of the protocol using the <ProtocolName>::instance() function, check the update variable of the singleton class, and if it is true display the data. When the data display is done they reset the update variable in the singleton class to false. The problematic protocol has an update time of 1 second, which is also the frame rate of the protocol. When I comment out the display function it runs fine, but when display is activated the application hangs consistently after 6-7 hours. I have asked this question on many forums but haven't received any worthwhile suggestions. I hope that here I will get some help. Also, I have read a lot of literature on singletons and multithreading, and found that people always discourage the use of singletons, especially in C++; but in my application I can think of no other design. Thanks in advance, A hapless programmer
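    One detail in that run() loop worth a hard look: the early return fires while the mutex is still locked, so whoever tries to take that mutex next blocks forever, and an intermittent hang after hours of operation is exactly the symptom such a lock leak produces. The usual defence is a scope-bound lock that is released on every exit path; a Python sketch of the same loop shape (in Qt terms, QMutexLocker gives the equivalent guarantee):

      # Sketch: the receive loop with a scope-bound lock; the "with" block
      # releases the mutex on every exit path (timeout, stop flag, exception).
      import threading

      stop = threading.Event()
      mutex = threading.Lock()
      data_ready = threading.Condition(mutex)

      def run():
          while not stop.is_set():
              with data_ready:                   # lock taken here ...
                  while not data_ready.wait(timeout=0.005):
                      if stop.is_set():
                          return                 # ... and released here too
                  # receive and process incoming data while holding the lock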

    Read the article

  • Can you force a crash if a write occurs to a given memory location with finer than page granularity?

    - by Joseph Garvin
    I'm writing a program that for performance reasons uses shared memory (alternatives have been evaluated, and they are not fast enough for my task, so suggestions not to use it will be downvoted). In the shared memory region I am writing many structs of a fixed size. There is one program responsible for writing the structs into shared memory, and many clients that read from it. However, there is one member of each struct that clients need to write to (a reference count, which they will update atomically). All of the other members should be read-only to the clients. Because clients need to change that one member, they can't map the shared memory region as read-only. But they shouldn't be tinkering with the other members either, and since these programs are written in C++, memory corruption is possible. Ideally, it should be as difficult as possible for one client to crash another. I'm only worried about buggy clients, not malicious ones, so imperfect solutions are allowed. I can try to stop clients from overwriting by declaring the members in the header they use as const, but that won't prevent memory corruption (buffer overflows, bad casts, etc.) from overwriting them. I can insert canaries, but then I have to constantly pay the cost of checking them. Instead of storing the reference count member directly, I could store a pointer to the actual data in a separate write-only mapped page, while keeping the structs in read-only mapped pages. This will work; the OS will force my application to crash if I try to write to the pointed-to data. But indirect storage can be undesirable when trying to write lock-free algorithms, because needing to follow another level of indirection can change whether something can be done atomically. Is there any way to mark smaller areas of memory such that writing to them will cause your app to blow up? Some platforms have hardware watchpoints, and maybe I could activate one of those with inline assembly, but I'd be limited to only 4 at a time on 32-bit x86, and each one could only cover part of the struct because they're limited to 4 bytes. It'd also make my program painful to debug ;)

    Read the article

  • Actual long double precision does not agree with std::numeric_limits

    - by dmb
    Working on Mac OS X 10.6.2, Intel, with i686-apple-darwin10-g++-4.2.1, and compiling with the -arch x86_64 flag, I just noticed that while... std::numeric_limits<long double>::max_exponent10 = 4932 ...as expected, when a long double is actually set to a value with an exponent greater than 308 it becomes inf; i.e., in reality it only has 64-bit precision instead of 80-bit. Also, sizeof() shows long doubles to be 16 bytes, which they should be. Finally, both headers give the same results. Does anyone know where the discrepancy might be? long double x = 1e308, y = 1e309; cout << std::numeric_limits<long double>::max_exponent10 << endl; cout << x << '\t' << y << endl; cout << sizeof(x) << endl; gives 4932 1e+308 inf 16

    Read the article

  • Java: Cleaning up what causes a connection reset

    - by Zombies
    There seems to be some confusion, as well as contradicting statements, in various SO answers: http://stackoverflow.com/questions/585599/whats-causing-my-java-net-socketexception-connection-reset . You can see there that the accepted answer states that the connection was closed by the other side. But this is not true; closing a connection doesn't cause a connection reset. It is caused by "an underlying TCP/IP error." What I want to know is what a SocketException: Connection reset really means besides "underlying TCP/IP error." What really causes this? I doubt it has anything to do with the connection being closed (since closing a connection isn't an exception-worthy event, and reading from a closed connection is, but that isn't an "underlying TCP/IP error"). My hypothesis is this: Connection reset is caused by a server's failure to acknowledge an ACK packet (either wholly or just improperly, as per TCP/IP). And a SocketTimeoutException is generated only when no data is generated to be read (since this is thrown during a read after a certain duration, and read is waiting for data, but is not concerned with ACK packets). In other words, read() throws SocketTimeoutException if it didn't read any bytes of actual data (DATA LAYER) in its allotted time.

    Read the article

  • [Cocoa] Can't find leak in my code.

    - by ryyst
    Hi, I've been spending the last few hours trying to find the memory leak in my code. Here it is: NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init]; expression = [expression stringByTrimmingCharactersInSet: [NSCharacterSet whitespaceAndNewlineCharacterSet]]; // expression is an NSString object. NSArray *arguments = [NSArray arrayWithObjects:expression, [@"~/Desktop/file.txt" stringByExpandingTildeInPath], @"-n", @"--line-number", nil]; NSPipe *outPipe = [[NSPipe alloc] init]; NSTask *task = [[NSTask alloc] init]; [task setLaunchPath:@"/usr/bin/grep"]; [task setArguments:arguments]; [task setStandardOutput:outPipe]; [outPipe release]; [task launch]; NSData *data = [[outPipe fileHandleForReading] readDataToEndOfFile]; [task waitUntilExit]; [task release]; NSString *string = [[NSString alloc] initWithBytes:[data bytes] length:[data length] encoding:NSUTF8StringEncoding]; string = [string stringByReplacingOccurrencesOfString:@"\r" withString:@""]; int linesNum = 0; NSMutableArray *possibleMatches = [[NSMutableArray alloc] init]; if ([string length] > 0) { NSArray *lines = [string componentsSeparatedByString:@"\n"]; linesNum = [lines count]; for (int i = 0; i < [lines count]; i++) { NSString *currentLine = [lines objectAtIndex:i]; NSArray *values = [currentLine componentsSeparatedByString:@"\t"]; if ([values count] == 20) [possibleMatches addObject:currentLine]; } } [string release]; [pool release]; return [possibleMatches autorelease]; I tried to follow the few basic rules of Cocoa memory management, but somehow there still seems to be a leak; I believe it's an array that's leaking. It's noticeable if possibleMatches is large. You can try the code by using any large file as "~/Desktop/file.txt" and, as the expression, something that yields many results when grep-ing. What's the mistake I'm making? Thanks for any help! -- Ry

    Read the article

  • Database version is zero on initial install

    - by mahesh
    I have released an app (World Time) with an initial database. Now I want to update the app with a database upgrade. I have put the upgrade code in onUpgrade() and am checking for the newVersion, but it was not being called in my local testing... So I put in a debug statement to get the database version, and it is zero. Any idea why it is not being versioned? The following is the code that copies the database from my Assets folder: InputStream myInput = myContext.getAssets().open(DB_NAME); // Path to the just created empty db String outFileName = DB_PATH + DB_NAME; //Open the empty db as the output stream OutputStream myOutput = new FileOutputStream(outFileName); //transfer bytes from the input file to the output file byte[] buffer = new byte[1024]; int length; while ((length = myInput.read(buffer))>0){ myOutput.write(buffer, 0, length); } //Close the streams myOutput.flush(); myOutput.close(); myInput.close(); -- Mahesh http://android.maheshdixit.com

    Read the article
