Search Results

Search found 5946 results on 238 pages for 'heavy bytes'.

Page 203/238

  • valgrind complains about a very simple strtok in C

    - by monkeyking
    Hi, I'm trying to tokenize a string by loading an entire file into a char[] using fread. For some strange reason it is not always working, and valgrind complains in this very small sample program. Given an input file test.txt containing

        first second

    and the following program:

        #include <stdio.h>
        #include <string.h>
        #include <stdlib.h>
        #include <sys/stat.h>

        // returns the filesize in bytes
        size_t fsize(const char* fname){
          struct stat st;
          stat(fname, &st);
          return st.st_size;
        }

        int main(int argc, char *argv[]){
          FILE *fp = NULL;
          if(NULL == (fp = fopen(argv[1], "r"))){
            fprintf(stderr, "\t-> Error reading file:%s\n", argv[1]);
            return 0;
          }
          char buffer[fsize(argv[1])];
          fread(buffer, sizeof(char), fsize(argv[1]), fp);
          char *str = strtok(buffer, " \t\n");
          while(NULL != str){
            fprintf(stderr, "token is:%s with strlen:%lu\n", str, strlen(str));
            str = strtok(NULL, " \t\n");
          }
          return 0;
        }

    Compiling with gcc test.c -std=c99 -ggdb and running with ./a.out test.txt. Thanks
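
    A likely cause, for what it's worth: fread() does not NUL-terminate the buffer, so strtok() can scan past its end, which is exactly the kind of invalid read valgrind reports. A minimal sketch of the usual fix (heap allocation with one extra byte for the terminator):

        size_t len = fsize(argv[1]);
        char *buffer = malloc(len + 1);   /* one extra byte for the terminator */
        if (buffer == NULL) return 1;
        fread(buffer, 1, len, fp);
        buffer[len] = '\0';               /* strtok requires a NUL-terminated string */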


  • Reloading a JTree during runtime

    - by Patrick Kiernan
    I create a JTree and its model in a class separate from the GUI class. The data for the JTree is extracted from a file. In the GUI class the user can add files from the file system to an AWT List. After the user clicks on a file in the list I want the JTree to update. The variable name for the JTree is schemaTree. I have the following code for when an item in the list is selected:

        private void schemaListItemStateChanged(java.awt.event.ItemEvent evt) {
            int selection = schemaList.getSelectedIndex();
            File selectedFile = schemas.get(selection);
            long fileSize = selectedFile.length();
            fileInfoLabel.setText("Size: " + fileSize + " bytes");
            schemaParser = new XSDParser(selectedFile.getAbsolutePath());
            TreeModel model = schemaParser.generateTreeModel();
            schemaTree.setModel(model);
        }

    I've updated the code to correspond to the accepted answer. The JTree now updates correctly based on which file I select in the list.
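
    (A hedged aside, not from the thread: replacing the model via setModel is what the accepted answer settled on; if the same model object were instead updated in place, the usual sketch is a reload on a DefaultTreeModel.)

        // hypothetical alternative, assuming the model is a DefaultTreeModel
        DefaultTreeModel model = (DefaultTreeModel) schemaTree.getModel();
        model.reload();  // tells listeners the structure changed from the root down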


  • Apache Not Accepting a Path in My Home Folder

    - by Promather
    I have been trying to set up an Apache site to use a folder inside my home folder, without any success. I followed the steps on this page exactly: https://help.ubuntu.com/community/ApacheMySQLPHP, yet I keep getting error 403, which says that the server doesn't have permission to access the requested page. I searched forums and many suggested changing the permissions of the folder. I went straight away and set the permissions to 777, but that didn't solve the problem. After more searching, somebody gave me a clue: it could be because my home folder is encrypted. I believe this could be the problem, but:

    1. What is the relation between encryption and Apache? I would assume the Apache server requests the file from the system, rather than trying to access the file's bytes directly!
    2. Is there any way to solve this problem? I don't want to move the folder to /var/www, because I am using this Apache for testing, so I want every change I make to be reflected immediately, rather than having to copy files, which is error-prone.
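
    A hedged guess at the encryption link: on Ubuntu an encrypted home (ecryptfs) is only mounted while the owner is logged in, and the Apache user (www-data) needs search (execute) permission on every directory component of the path, so 777 on the folder alone never helps. A sketch of things worth checking, assuming the default Ubuntu layout ('youruser' and 'site' are placeholders):

        # does www-data have search permission on each path component?
        namei -m /home/youruser/site
        sudo -u www-data ls /home/youruser/site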


  • DjangoUnicodeDecodeError while storing pickle'd data.

    - by Jack M.
    I've got a simple dict object I'm trying to store in the database after it has been run through pickle. It seems that Django doesn't like trying to encode this data. I've checked with MySQL, and the query isn't even getting there before it throws the error, so I don't believe that is the problem. The dict I'm storing looks like this:

        {
            'ordered': [
                {'value': u'First\xd1ame Last\xd1ame', 'label': u'Full Name'},
                {'value': u'123-456-7890', 'label': u'Phone Number'},
                {'value': u'[email protected]', 'label': u'Email Address'}
            ],
            'cleaned_data': {
                u'Phone Number': u'123-456-7890',
                u'Full Name': u'First\xd1ame Last\xd1ame',
                u'Email Address': u'[email protected]'
            },
            'post_data': <QueryDict: {
                u'Phone Number': [u'1234567890'],
                u'Full Name_1': [u'Last\xd1ame'],
                u'Full Name_0': [u'First\xd1ame'],
                u'Email Address': [u'[email protected]']
            }>,
            'user': <User: itis>
        }

    The error that gets thrown is:

        'utf8' codec can't decode bytes in position 52-53: invalid data.

    Position 52-53 is the first instance of \xd1 (Ñ) in the pickled data. So far, I've dug around StackOverflow and found a few questions where the database encoding for the objects was wrong. This doesn't help me, because there is no MySQL query yet; this is happening before the database. Google also didn't help much when searching for unicode errors on pickled data. It is probably worth mentioning that if I don't use the Ñ, this code works fine.
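
    A workaround sketch, under the assumption that the destination field is a text column: pickle (protocol 0) emits raw 8-bit bytes for characters like \xd1, which Django then tries to decode as UTF-8 and fails. Base64-encoding the pickled bytes keeps the stored value pure ASCII:

        import base64
        import pickle

        blob = base64.b64encode(pickle.dumps(data))   # ASCII-safe for a text field
        data = pickle.loads(base64.b64decode(blob))   # round-trip back to the dict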


  • Best approach to storing image pixels in bottom-up order in Java

    - by finnw
    I have an array of bytes representing an image in Windows BMP format and I would like my library to present it to the Java application as a BufferedImage, without copying the pixel data. The main problem is that all implementations of Raster in the JDK store image pixels in top-down, left-to-right order, whereas BMP pixel data is stored bottom-up, left-to-right. If this is not compensated for, the resulting image will be flipped vertically.

    The most obvious "solution" is to set the SampleModel's scanlineStride property to a negative value and change the band offsets (or the DataBuffer's array offset) to point to the top-left pixel, i.e. the first pixel of the last line in the array. Unfortunately this does not work, because all of the SampleModel constructors throw an exception if given a negative scanlineStride argument.

    I am currently working around it by forcing the scanlineStride field to a negative value using reflection, but I would like to do it in a cleaner and more portable way if possible. E.g. is there another way to fool the Raster or SampleModel into arranging the pixels in bottom-up order without breaking encapsulation? Or is there a library somewhere that will wrap the Raster and SampleModel, presenting the pixel rows in reverse order?

    I would prefer to avoid the following approaches:

    - Copying the whole image (for performance reasons: the code must process hundreds of large (>= 1 Mpixel) images per second, and although the whole image must be available to the application, it will normally access only a tiny, but hard-to-predict, portion of it).
    - Modifying the DataBuffer to perform coordinate transformation (this actually works, but is another "dirty" solution, because the buffer should not need to know about the scanline/pixel layout).
    - Re-implementing the Raster and/or SampleModel interfaces from scratch (though I have a hunch that I will be unable to avoid this).


  • C#, encrypting with AES

    - by lolalola
    Hi, why can I only encrypt 16 bytes (128 bits) of text?

    Works:

        string plainText = "1234567890123456";

    Doesn't work:

        string plainText = "12345678901234561";

    Doesn't work:

        string plainText = "123456789012345";

    The code:

        string plainText = "1234567890123456";
        byte[] plainTextBytes = Encoding.UTF8.GetBytes(plainText);
        byte[] keyBytes = System.Text.Encoding.UTF8.GetBytes("1234567890123456");
        byte[] initVectorBytes = System.Text.Encoding.UTF8.GetBytes("1234567890123456");

        RijndaelManaged symmetricKey = new RijndaelManaged();
        symmetricKey.Mode = CipherMode.CBC;
        symmetricKey.Padding = PaddingMode.Zeros;
        ICryptoTransform encryptor = symmetricKey.CreateDecryptor(keyBytes, initVectorBytes);

        MemoryStream memoryStream = new MemoryStream();
        CryptoStream cryptoStream = new CryptoStream(memoryStream, encryptor, CryptoStreamMode.Write);
        cryptoStream.Write(plainTextBytes, 0, plainTextBytes.Length);
        cryptoStream.FlushFinalBlock();

        byte[] cipherTextBytes = memoryStream.ToArray();
        memoryStream.Close();
        cryptoStream.Close();

        string cipherText = Convert.ToBase64String(cipherTextBytes);
        Console.ReadLine();
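
    A hedged reading of the bug: the code asks for a *decryptor* (CreateDecryptor) and then feeds it plaintext. A decrypting transform only accepts input in whole 16-byte blocks, which is why only the exact 16-character string goes through. A minimal sketch of the fix:

        // use an encryptor for encryption, and PKCS7 padding so any length is accepted
        symmetricKey.Padding = PaddingMode.PKCS7;
        ICryptoTransform encryptor =
            symmetricKey.CreateEncryptor(keyBytes, initVectorBytes);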


  • Thread implemented as a Singleton

    - by rocknroll
    Hi all, I have a commercial application made with C, C++/Qt on the Linux platform. The app collects data from different sensors and displays them on a GUI. Each protocol for interfacing with the sensors is implemented using the singleton pattern and threads from Qt's QThread class. All the protocols except one work fine. Each protocol's run function has the following structure:

        void <ProtocolClassName>::run()
        {
            while(!mStop)  // check whether screen is closed or not
            {
                mutex.lock();
                while(!waitcondition.wait(&mutex, 5))
                {
                    if(mStop)
                        return;
                }
                // code for receiving and processing incoming data
                mutex.unlock();
            } // end while
        }

    Hierarchy of the GUI:

    1. Login screen.
    2. Screen of action.

    When a user logs in from the login screen, we enter the action screen where all data is displayed and all the threads for the different sensors start. They wait on the mStop variable in idle time, and when data arrives they jump to receiving and processing it. Incoming data for the problem protocol is 117 bytes. In the main GUI thread there are timers which, when they time out, grab the running instance of the protocol using the <ProtocolName>::instance() function, check the update variable of the singleton class, and if it is true display the data. When the data display is done they reset the update variable in the singleton class to false.

    The problematic protocol has an update time of 1 sec, which is also the frame rate of the protocol. When I comment out the display function it runs fine, but when display is activated the application hangs consistently after 6-7 hours. I have asked this question on many forums but haven't received any worthwhile suggestions. I hope that here I will get some help.

    Also, I have read a lot of literature on singletons and multithreading, and found that people always discourage the use of singletons, especially in C++. But in my application I can think of no other design for the implementation.

    Thanks in advance, A hapless programmer
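
    One detail in the posted run() that may be worth a look (an observation on the code as given, not a confirmed diagnosis of the hang): QWaitCondition::wait() re-locks the mutex before returning, so the early return leaves it locked forever. A sketch:

        while (!waitcondition.wait(&mutex, 5))
        {
            if (mStop)
            {
                mutex.unlock();  // without this, the mutex stays locked on exit
                return;
            }
        }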


  • Java: Cleaning up what causes a connection reset

    - by Zombies
    There seems to be some confusion, as well as contradictory statements, in various SO answers: http://stackoverflow.com/questions/585599/whats-causing-my-java-net-socketexception-connection-reset . You can see there that the accepted answer states that the connection was closed by the other side. But this is not true: closing a connection doesn't cause a connection reset; it is caused by "an underlying TCP/IP error." What I want to know is what a SocketException: Connection reset really means, besides "underlying TCP/IP error." I doubt it has anything to do with the connection simply being closed, since closing a connection isn't an exception-worthy event, and reading from a closed connection is, but that isn't an "underlying TCP/IP error."

    My hypothesis is this: a connection reset is caused by a server's failure to acknowledge an ACK packet (either wholly or just improperly, as per TCP/IP), and a SocketTimeoutException is generated only when no data is generated to be read (since this is thrown during a read after a certain duration, and read is waiting for data, but is not concerned with ACK packets). In other words, read() throws SocketTimeoutException if it didn't read any bytes of actual data (DATA LAYER) in its allotted time.


  • Can you force a crash if a write occurs to a given memory location with finer than page granularity?

    - by Joseph Garvin
    I'm writing a program that for performance reasons uses shared memory (alternatives have been evaluated, and they are not fast enough for my task, so suggestions not to use it will be downvoted). In the shared memory region I am writing many structs of a fixed size. There is one program responsible for writing the structs into shared memory, and many clients that read from it. However, there is one member of each struct that clients need to write to (a reference count, which they will update atomically). All of the other members should be read-only to the clients.

    Because clients need to change that one member, they can't map the shared memory region as read-only. But they shouldn't be tinkering with the other members either, and since these programs are written in C++, memory corruption is possible. Ideally, it should be as difficult as possible for one client to crash another. I'm only worried about buggy clients, not malicious ones, so imperfect solutions are allowed.

    I can try to stop clients from overwriting by declaring the members in the header they use as const, but that won't prevent memory corruption (buffer overflows, bad casts, etc.) from overwriting. I can insert canaries, but then I have to constantly pay the cost of checking them. Instead of storing the reference count member directly, I could store a pointer to the actual data in a separate mapped write-only page, while keeping the structs in read-only mapped pages. This will work (the OS will force my application to crash if I try to write to the pointed-to data), but indirect storage can be undesirable when trying to write lock-free algorithms, because needing to follow another level of indirection can change whether something can be done atomically.

    Is there any way to mark smaller areas of memory such that writing to them will cause your app to blow up? Some platforms have hardware watchpoints, and maybe I could activate one of those with inline assembly, but I'd be limited to only 4 at a time on 32-bit x86, and each one could only cover part of the struct because they're limited to 4 bytes. It'd also make my program painful to debug ;)


  • Actual long double precision does not agree with std::numeric_limits

    - by dmb
    Working on Mac OS X 10.6.2, Intel, with i686-apple-darwin10-g++-4.2.1, and compiling with the -arch x86_64 flag, I just noticed that while...

        std::numeric_limits<long double>::max_exponent10 = 4932

    ...as is expected, when a long double is actually set to a value with exponent greater than 308, it becomes inf, i.e. in reality it only has 64-bit precision instead of 80-bit. Also, sizeof() is showing long doubles to be 16 bytes, which they should be. Finally, using gives the same results as . Does anyone know where the discrepancy might be?

        long double x = 1e308, y = 1e309;
        cout << std::numeric_limits<long double>::max_exponent10 << endl;
        cout << x << '\t' << y << endl;
        cout << sizeof(x) << endl;

    gives

        4932
        1e+308  inf
        16
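
    A hedged explanation: the literal 1e309 has type double and already overflows to inf before the assignment, so the long double never sees the intended value. An L suffix keeps the constant in extended precision:

        long double y = 1e309L;   // prints 1e+309 instead of inf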


  • Database version is zero on initial install

    - by mahesh
    I have released an app (World Time) with an initial database. Now I want to update the app with a database upgrade. I have put the upgrade code in onUpgrade() and am checking for the newVersion, but it was not being called in my local testing. So I put in a debug statement to get the database version, and it is zero. Any idea why it is not being versioned? The following is the code that copies the database from my Assets folder:

        InputStream myInput = myContext.getAssets().open(DB_NAME);

        // Path to the just created empty db
        String outFileName = DB_PATH + DB_NAME;

        // Open the empty db as the output stream
        OutputStream myOutput = new FileOutputStream(outFileName);

        // transfer bytes from the input file to the output file
        byte[] buffer = new byte[1024];
        int length;
        while ((length = myInput.read(buffer)) > 0){
            myOutput.write(buffer, 0, length);
        }

        // Close the streams
        myOutput.flush();
        myOutput.close();
        myInput.close();

    -- Mahesh http://android.maheshdixit.com
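
    A possible explanation, sketched under the assumption that this copy overwrites the database SQLiteOpenHelper created: a copied database file carries whatever user_version it was built with (typically 0), so the framework never sees your version number and onUpgrade() never fires. Stamping the version after the copy is one fix:

        // hypothetical: stamp the schema version so onUpgrade() can fire later
        SQLiteDatabase db = SQLiteDatabase.openDatabase(outFileName, null,
                SQLiteDatabase.OPEN_READWRITE);
        db.setVersion(DATABASE_VERSION);  // DATABASE_VERSION is a placeholder constant
        db.close();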


  • Implementing parts of rfc4226 (HOTP) in mysql

    - by Moose Morals
    Like the title says, I'm trying to implement the programmatic parts of RFC4226 "HOTP: An HMAC-Based One-Time Password Algorithm" in SQL. I think I've got a version that works (in that for a small test sample it produces the same result as the Java version in the code), but it contains a nested pair of hex(unhex()) calls, which I feel can be done better. I am constrained by a) needing to do this algorithm, and b) needing to do it in MySQL; otherwise I'm happy to look at other ways of doing this. What I've got so far:

        -- From the inside out...
        -- Concatenate the user's secret and the number of times it's been used
        -- find the SHA1 hash of that string
        -- Turn a 40-byte hex encoding into a 20-byte binary string
        -- keep the first 4 bytes
        -- turn those back into a hex representation
        -- convert that into an integer
        -- Throw away the most-significant bit (solves signed/unsigned problems)
        -- Truncate to 6 digits
        -- store into otp
        -- from the otpsecrets table
        select (conv(hex(substr(unhex(sha1(concat(secret, uses))), 1, 4)), 16, 10)
                & 0x7fffffff) % 1000000
          into otp
        from otpsecrets;

    Is there a better (more efficient) way of doing this?
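
    One possible simplification (a sketch, equivalent as far as I can tell): sha1() already returns a hex string, and the first 4 bytes of the binary digest are just its first 8 hex digits, so the unhex/hex round trip can be dropped:

        select (conv(left(sha1(concat(secret, uses)), 8), 16, 10)
                & 0x7fffffff) % 1000000
          into otp
        from otpsecrets;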


  • Keeping track of leading zeros with BitSet in Java

    - by Ryan
    So, according to this question there are two ways to look at the size of a BitSet:

    1. size(), which is legacy and not really useful. I agree with this. The size is 64 after doing: BitSet b = new BitSet(8);
    2. length(), which returns the index of the highest set bit plus one. In the above example, length() will return 0. This is somewhat useful, but doesn't accurately reflect the number of bits the BitSet is supposed to be representing in the event you have leading zeros.

    The information I'm dealing with rarely (if ever) falls evenly into 8-bit bytes, and the leading 0s are just as important to me as the 1s. I have some data fields that are 333 bits long, some that are 20, etc. Is there a better way to deal with bit-level details in Java that will keep track of leading zeros? Otherwise I'm going to have to 'roll my own', so to speak. I have a few ideas already, but I'd prefer not to reinvent the wheel if possible.
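
    For what it's worth, a minimal sketch of the 'roll your own' wrapper (a hypothetical class, not a standard API): pair the BitSet with an explicit length so leading zeros survive:

        import java.util.BitSet;

        final class FixedBitSet {
            private final BitSet bits;
            private final int length;   // logical width, leading zeros included

            FixedBitSet(int length) {
                this.length = length;
                this.bits = new BitSet(length);
            }

            int length() { return length; }

            void set(int i) {
                if (i >= length) throw new IndexOutOfBoundsException();
                bits.set(i);
            }

            boolean get(int i) {
                if (i >= length) throw new IndexOutOfBoundsException();
                return bits.get(i);
            }
        }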


  • [Cocoa] Can't find leak in my code.

    - by ryyst
    Hi, I've been spending the last few hours trying to find the memory leak in my code. Here it is:

        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

        expression = [expression stringByTrimmingCharactersInSet:
            [NSCharacterSet whitespaceAndNewlineCharacterSet]]; // expression is an NSString object.

        NSArray *arguments = [NSArray arrayWithObjects:expression,
            [@"~/Desktop/file.txt" stringByExpandingTildeInPath], @"-n", @"--line-number", nil];

        NSPipe *outPipe = [[NSPipe alloc] init];

        NSTask *task = [[NSTask alloc] init];
        [task setLaunchPath:@"/usr/bin/grep"];
        [task setArguments:arguments];
        [task setStandardOutput:outPipe];
        [outPipe release];

        [task launch];

        NSData *data = [[outPipe fileHandleForReading] readDataToEndOfFile];
        [task waitUntilExit];
        [task release];

        NSString *string = [[NSString alloc] initWithBytes:[data bytes]
            length:[data length] encoding:NSUTF8StringEncoding];
        string = [string stringByReplacingOccurrencesOfString:@"\r" withString:@""];

        int linesNum = 0;
        NSMutableArray *possibleMatches = [[NSMutableArray alloc] init];
        if ([string length] > 0) {
            NSArray *lines = [string componentsSeparatedByString:@"\n"];
            linesNum = [lines count];
            for (int i = 0; i < [lines count]; i++) {
                NSString *currentLine = [lines objectAtIndex:i];
                NSArray *values = [currentLine componentsSeparatedByString:@"\t"];
                if ([values count] == 20)
                    [possibleMatches addObject:currentLine];
            }
        }

        [string release];
        [pool release];

        return [possibleMatches autorelease];

    I tried to follow the few basic rules of Cocoa memory management, but somehow there still seems to be a leak; I believe it's an array that's leaking. It's noticeable if possibleMatches is large. You can try the code by using any large file as "~/Desktop/file.txt" and, as the expression, something that yields many results when grep-ing. What's the mistake I'm making? Thanks for any help! -- Ry
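
    A hedged observation on the code as posted: the alloc'd NSString is lost on the very next line, because stringByReplacingOccurrencesOfString: returns a new autoreleased object, and the later [string release] then releases the wrong one. A sketch of the fix:

        NSString *raw = [[NSString alloc] initWithBytes:[data bytes]
            length:[data length] encoding:NSUTF8StringEncoding];
        NSString *string = [raw stringByReplacingOccurrencesOfString:@"\r"
            withString:@""];   // autoreleased; do not release it
        [raw release];         // release only the object you own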


  • Combining SQL Rows

    - by lumberjack4
    I've got a SQL Compact database that contains a table of IP packet headers. The table looks like this:

        Table: PacketHeaders
        ID  SrcAddress  SrcPort  DestAddress  DestPort  Bytes
        1   10.0.25.1   255      10.0.25.50   500       64
        2   10.0.25.50  500      10.0.25.1    255       80
        3   10.0.25.50  500      10.0.25.1    255       16
        4   75.48.0.25  387      74.26.9.40   198       72
        5   74.26.9.40  198      75.48.0.25   387       64
        6   10.0.25.1   255      10.0.25.50   500       48

    I need to perform a query to show 'conversations' going on across a local network. Packets going from A to B are part of the same conversation as packets going from B to A. Basically what I need is something that looks like this:

        SrcAddress  SrcPort  DestAddress  DestPort  TotalBytes  BytesA->B  BytesB->A
        10.0.25.1   255      10.0.25.50   500       208         112        96
        75.48.0.25  387      74.26.9.40   198       136         72         64

    As you can see, I need the query (or series of queries) to recognize that A->B is the same as B->A and break up the byte counts accordingly. I'm not a SQL guru by any means, but any help on this would be greatly appreciated.
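
    A sketch in generic SQL (untested; SQL Server Compact may require the CASE expressions to live in a subquery, as done here): canonicalize each row so the lexically smaller address comes first, then group:

        SELECT AddrA, PortA, AddrB, PortB,
               SUM(Bytes) AS TotalBytes,
               SUM(CASE WHEN Forward = 1 THEN Bytes ELSE 0 END) AS BytesAtoB,
               SUM(CASE WHEN Forward = 1 THEN 0 ELSE Bytes END) AS BytesBtoA
        FROM (
            SELECT CASE WHEN SrcAddress < DestAddress THEN SrcAddress ELSE DestAddress END AS AddrA,
                   CASE WHEN SrcAddress < DestAddress THEN SrcPort     ELSE DestPort   END AS PortA,
                   CASE WHEN SrcAddress < DestAddress THEN DestAddress ELSE SrcAddress END AS AddrB,
                   CASE WHEN SrcAddress < DestAddress THEN DestPort    ELSE SrcPort    END AS PortB,
                   CASE WHEN SrcAddress < DestAddress THEN 1 ELSE 0 END AS Forward,
                   Bytes
            FROM PacketHeaders
        ) AS t
        GROUP BY AddrA, PortA, AddrB, PortB;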


  • Should I move big data blobs in JSON or in separate binary connection?

    - by Amagrammer
    QUESTION: Is it better to send large data blobs in JSON for simplicity, or send them as binary data over a separate connection? If the former, can you offer tips on how to optimize the JSON to minimize size? If the latter, is it worth it to logically connect the JSON data to the binary data using an identifier that appears in both, e.g. as "data" : "<unique identifier>" in the JSON, with the first bytes of the data blob being <unique identifier>?

    CONTEXT: My iPhone application needs to receive JSON data over the 3G network. This means that I need to think seriously about efficiency of data transfer, as well as the load on the CPU. Most of the data transfers will be relatively small packets of text data for which JSON is a natural format and for which there is no point in worrying much about efficiency. However, some of the most critical transfers will be big blobs of binary data: definitely at least 100 kilobytes of data, and possibly closer to 1 megabyte as customers accumulate a longer history with the product. (Note: I will be caching what I can on the iPhone itself, but the data still has to be transferred at least once.) It is NOT streaming data. I will probably use a third-party JSON SDK; the one I am using during development is here. Thanks


  • Inserting an image into sqlserver gives an "operand type clash"

    - by Termedi
    I'm trying to save an image in a SQL Server 2000 database. The data type of the column is image. Here is the code:

    Image upload:

        <?php
        include('config.php');

        if(is_uploaded_file($_FILES['userfile']['tmp_name'])) {
            $fileName = $_FILES['userfile']['name'];
            $tmpName  = $_FILES['userfile']['tmp_name'];
            $fileSize = $_FILES['userfile']['size'];
            $fileType = $_FILES['userfile']['type'];
            $size = filesize($tmpName);

            // disable PHP's default escaping of special characters in external files
            set_magic_quotes_runtime(0);
            $img_binaire = base64_encode(fread(fopen(str_replace("'","''",$tmpName), "r"), $size));

            $query = "INSERT INTO test_image (image_name, image_content, image_size) ".
                     "VALUES ('{$fileName}','{$img_binaire}', '{$size}')";
            odbc_exec($conn, $query) or die('Error, query failed');

            echo "<br>File $fileName uploaded<br>";
            echo "<br>File Size: $fileSize <br>";
        }
        ?>

    Image show:

        <?php
        include('config.php');

        $sql = "select * from test_image where id = 2";
        $rsl = odbc_exec($conn, $sql);
        $image_info = odbc_fetch_array($rsl);

        //$count = sizeof($image_info['image_content']);
        //header('Accept-Ranges: bytes');
        //header('Content-Length: '.$image_info['image_size']);
        //header("Content-length: 17397");
        header('Content-Type: image/jpeg');

        echo base64_decode($image_info['image_content']);
        //echo bindec($image_info['image_content']);
        ?>

    It gives the following error:

        Warning: odbc_exec() [function.odbc-exec]: SQL error:
        [Microsoft][ODBC SQL Server Driver][SQL Server]Operand type clash:
        text is incompatible with image, SQL state 22005 in SQLExecDirect
        in C:\xampp\htdocs\test\upload.php on line 25
        Error, query failed

    What do I need to do differently?
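
    A hedged sketch of one way around the clash: the quoted string literal is typed as text, which SQL Server refuses to assign to an image column, whereas an unquoted 0x... hex literal is binary and is accepted. Assuming it is acceptable to inline the data (the display script would then echo the column directly instead of base64-decoding it):

        // hypothetical variant of the insert, sending the raw bytes as a hex literal
        $hex = bin2hex(file_get_contents($tmpName));
        $query = "INSERT INTO test_image (image_name, image_content, image_size) ".
                 "VALUES ('{$fileName}', 0x{$hex}, '{$size}')";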


  • Cassandra random read speed

    - by Jody Powlette
    We're still evaluating Cassandra for our data store. As a very simple test, I inserted a value for 4 columns into the Keyspace1/Standard1 column family on my local machine, amounting to about 100 bytes of data. Then I read it back as fast as I could by row key. I can read it back at 160,000 reads/second. Great.

    Then I put in a million similar records, all with keys in the form of X.Y where X in (1..10) and Y in (1..100,000), and I queried for a random record. Performance fell to 26,000 queries per second. This is still well above the number of queries we need to support (about 1,500/sec).

    Finally I put ten million records in, from 1.1 up through 10.1000000, and randomly queried for one of the 10 million records. Performance is abysmal at 60 queries per second, and my disk is thrashing around like crazy. I also verified that if I ask for a subset of the data, say the 1,000 records between 3,000,000 and 3,001,000, it returns slowly at first and then, as they cache, speeds right up to 20,000 queries per second, and my disk stops going crazy.

    I've read all over that people are storing billions of records in Cassandra and fetching them at 5-6k per second, but I can't get anywhere near that with only 10 million records. Any idea what I'm doing wrong? Is there some setting I need to change from the defaults? I'm on an overclocked Core i7 box with 6 gigs of RAM, so I don't think it's the machine.

    Here's my code to fetch records, which I'm spawning into 8 threads to ask for one value from one column via row key:

        ColumnPath cp = new ColumnPath();
        cp.Column_family = "Standard1";
        cp.Column = utf8Encoding.GetBytes("site");
        string key = (1 + sRand.Next(9)) + "." + (1 + sRand.Next(1000000));
        ColumnOrSuperColumn logline = client.get("Keyspace1", key, cp, ConsistencyLevel.ONE);

    Thanks for any insights


  • How CPU finds ISR and distinguishes between devices

    - by ripunjay-tripathi-gmail-com
    I should first share all that I know, which is complete chaos. There are several different questions on the topic, so please don't get irritated :).

    1) To find an ISR, the CPU is provided with an interrupt number. In x86 machines (286/386 and above) there is an IVT with ISRs in it; each entry is 4 bytes in size, so we need to multiply the interrupt number by 4 to find the ISR. So the first bunch of questions is: I am completely confused about the mechanism by which the CPU receives the interrupt. To raise an interrupt, the device first asserts the IRQ; then what? Does the interrupt number travel "on the IRQ" towards the CPU? I have also read something about the device putting the ISR address on the data bus; what is that, then? What is the concept of devices overriding the ISR? Can somebody name a few example devices where the CPU polls for interrupts? And where does it find the ISR for them?

    2) If two devices share an IRQ (which is very much possible), how does the CPU differentiate between them? What if both devices raise an interrupt of the same priority simultaneously? I understand there will be masking of same-type and lower-priority interrupts, but how does this communication happen between the CPU and the device controller? I studied the role of the PIC and APIC for this problem, but could not understand it.

    Thanks for reading. Thank you very much for answering.


  • Django upload failing on request data read error

    - by Jake
    Hi All, I've got a Django app that accepts uploads from jQuery Uploadify, a jQuery plugin that uses Flash to upload files and show a progress bar. Files under about 150k work, but bigger files always fail, almost always at around 192k (that's 3 chunks) completed, sometimes at around 160k. The exception I get is below.

        exceptions.IOError
        request data read error

        File "/usr/lib/python2.4/site-packages/django/core/handlers/wsgi.py", line 171, in _get_post
          self._load_post_and_files()
        File "/usr/lib/python2.4/site-packages/django/core/handlers/wsgi.py", line 137, in _load_post_and_files
          self._post, self._files = self.parse_file_upload(self.META, self.environ['wsgi.input'])
        File "/usr/lib/python2.4/site-packages/django/http/__init__.py", line 124, in parse_file_upload
          return parser.parse()
        File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 192, in parse
          for chunk in field_stream:
        File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 314, in next
          output = self._producer.next()
        File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 468, in next
          for bytes in stream:
        File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 314, in next
          output = self._producer.next()
        File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 375, in next
          data = self.flo.read(self.chunk_size)
        File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 405, in read
          return self._file.read(num_bytes)

    When running locally on the Django development server, big files work. I've tried setting

        FILE_UPLOAD_HANDLERS = ("django.core.files.uploadhandler.TemporaryFileUploadHandler",)

    in case it was the memory upload handler, but it made no difference. Does anyone know how to fix this?


  • php ftp upload problem

    - by Autobyte
    Hi, I am trying to write a small PHP function that will upload files to an FTP server, and I keep getting the same error. I cannot find any fix by googling the problem, so I am hoping you guys can help me here. The error I get is:

        Warning: ftp_put() [function.ftp-put]: Unable to build data connection: No route to host in .

    The file was created on the FTP server, but it is zero bytes. Here is the code:

        <?php
        $file = "test.dat";
        $ftp_server = "ftp.server.com";
        $ftp_user = "myname";
        $ftp_pass = "mypass";
        $destination_file = "test.dat";

        $cid = ftp_connect($ftp_server);
        if(!$cid) {
            exit("Could not connect to server: $ftp_server\n");
        }

        $login_result = ftp_login($cid, $ftp_user, $ftp_pass);
        if (!$login_result) {
            echo "FTP connection has failed!";
            echo "Attempted to connect to $ftp_server for user $ftp_user";
            exit;
        } else {
            echo "Connected to $ftp_server, for user $ftp_user";
        }

        $upload = ftp_put($cid, $destination_file, $file, FTP_BINARY);
        if (!$upload) {
            echo "Failed upload for $source_file to $ftp_server as $destination_file<br>";
            echo "FTP upload has failed!";
        } else {
            echo "Uploaded $source_file to $ftp_server as $destination_file";
        }

        ftp_close($cid);
        ?>
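
    A likely fix, for what it's worth: "Unable to build data connection" usually means active-mode FTP, where the server connects back to the client, is being blocked by a NAT or firewall. Passive mode makes the client open the data connection instead:

        // sketch: enable passive mode right after logging in
        ftp_pasv($cid, true);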


  • EVP_PKEY from char buffer in x509 (PKCS7)

    - by sid
    Hi All, I have a DER certificate from which I am retrieving the public key into an unsigned char buffer as follows; is this the right way of getting it?

        pStoredPublicKey = X509_get_pubkey(x509);
        if(pStoredPublicKey == NULL) {
            printf(": publicKey is NULL\n");
        }
        if(pStoredPublicKey->type == EVP_PKEY_RSA) {
            RSA *x = pStoredPublicKey->pkey.rsa;
            bn = x->n;
        } else if(pStoredPublicKey->type == EVP_PKEY_DSA) {
        } else if(pStoredPublicKey->type == EVP_PKEY_EC) {
        } else {
            printf(" : Unknown publicKey\n");
        }

        // extract the bytes from the public key & convert into an unsigned char buffer
        buf_len = (size_t) BN_num_bytes(bn);
        key = (unsigned char *)malloc(buf_len);
        n = BN_bn2bin(bn, (unsigned char *) key);
        for (i = 0; i < n; i++) {
            printf("%02x\n", (unsigned char) key[i]);
        }
        keyLen = EVP_PKEY_size(pStoredPublicKey);
        EVP_PKEY_free(pStoredPublicKey);

    And, with this unsigned char buffer, how do I get back the EVP_PKEY for RSA? Or can I use the following?

        EVP_PKEY *d2i_PublicKey(int type, EVP_PKEY **a, unsigned char **pp, long length);
        int i2d_PublicKey(EVP_PKEY *a, unsigned char **pp);
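
    A hedged note on that last idea: the buffer built with BN_bn2bin holds only the modulus, so the key cannot be rebuilt from it alone (the public exponent is lost). Serializing the whole key with the i2d/d2i pair is one way, sketched here against the prototypes quoted above:

        /* sketch: round-trip the full public key through DER */
        unsigned char *der = NULL;
        int derlen = i2d_PublicKey(pStoredPublicKey, &der);  /* OpenSSL allocates der */

        unsigned char *p = der;          /* d2i advances p, so keep der for freeing */
        EVP_PKEY *copy = d2i_PublicKey(EVP_PKEY_RSA, NULL, &p, derlen);
        OPENSSL_free(der);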


  • Python minidom and UTF-8 encoded XML with hash references

    - by Jakob Simon-Gaarde
    Hi, I am experiencing some difficulty in my home project, where I need to parse a SOAP request. The SOAP is generated with gSOAP and involves string parameters with special characters like the Danish letters "æøå". gSOAP builds SOAP requests with UTF-8 encoding by default, but instead of sending the special characters in raw form (i.e. bytes C3 A6 for the special character "æ"), it sends what I think are called character references (i.e. &#195;&#166;). I don't completely understand why gSOAP does it this way, as I can see that it has marked the incoming payload as UTF-8 encoded anyway (Content-Type: text/xml; charset=utf-8), but this is beside the question (I think). Anyway, I guess gSOAP is probably obeying transport rules, or what?

    When I parse the request from gSOAP in Python with xml.dom.minidom.parseString(), I get element values as unicode objects, which is fine, but the character references are not decoded as UTF-8 byte sequences. It unescapes the character references, but does not decode the string afterwards. In the end I have a unicode string object holding the UTF-8 code units. So if the string "æble" is contained in the XML, it arrives in the request like this: "&#195;&#166;ble". After parsing the XML, the unicode string in the DOM Text node's data member looks like this:

        u'\xc3\xa6ble'

    I would expect it to look like this:

        u'\xe6ble'

    What am I doing wrong? Should I unescape the SOAP XML before parsing it, or is it somewhere else I should be looking for the solution, maybe gSOAP? Thanks in advance. Best regards, Jakob Simon-Gaarde
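
    A workaround sketch (treating the symptom on the Python side): each reference decodes to one code point per UTF-8 byte, so mapping the code points back to bytes and decoding properly repairs the string:

        broken = u'\xc3\xa6ble'
        fixed = broken.encode('latin-1').decode('utf-8')   # -> u'\xe6ble'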


  • A UnicodeDecodeError that occurs with json in python on Windows, but not Mac.

    - by ventolin
    On Windows, I have the following problem:

        >>> string = "Don´t Forget To Breathe"
        >>> import json, os, codecs
        >>> f = codecs.open("C:\\temp.txt", "w", "UTF-8")
        >>> json.dump(string, f)
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
          File "C:\Python26\lib\json\__init__.py", line 180, in dump
            for chunk in iterable:
          File "C:\Python26\lib\json\encoder.py", line 294, in _iterencode
            yield encoder(o)
        UnicodeDecodeError: 'utf8' codec can't decode bytes in position 3-5: invalid data

    (Notice the non-ASCII apostrophe in the string.) However, my friend, on his Mac (also using Python 2.6), can run through this like a breeze:

        > string = "Don´t Forget To Breathe"
        > import json, os, codecs
        > f = codecs.open("/tmp/temp.txt", "w", "UTF-8")
        > json.dump(string, f)
        > f.close(); open('/tmp/temp.txt').read()
        '"Don\\u00b4t Forget To Breathe"'

    Why is this? I've also tried using UTF-16 and UTF-32 with json and codecs, but to no avail.
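
    A hedged explanation: on both machines the literal is a byte string in the terminal's encoding; the Mac terminal hands json UTF-8 bytes, while the Windows console hands it something like cp1252, which json's UTF-8 decode then rejects. Decoding explicitly sidesteps the difference:

        # assuming the Windows console encoding is cp1252; adjust if yours differs
        string = "Don´t Forget To Breathe".decode("cp1252")
        # or avoid the issue entirely with a unicode literal: u"Don´t ..."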


  • Creating a fixed length output string with sprintf containing floats

    - by Kungi
    Hi, I'm trying to create a file with the following structure:

    - each line is 32 bytes long
    - each line follows the format string "%10i %3.7f %3.7f\n"

    My problem is the following: when I have negative floating-point numbers, the line gets longer by one or even two characters, because the minus sign does not count towards the "%3.7f". Is there any way to do this more nicely than this?

        if( node->lng > 0 && node->lat > 0 ) {
            sprintf( osm_node_repr, "%10i %3.7f %3.7f\n", node->id, node->lng, node->lat );
        } else if (node->lng > 0 && node->lat < 0) {
            sprintf( osm_node_repr, "%10i %3.7f %3.6f\n", node->id, node->lng, node->lat );
        } else if (node->lng < 0 && node->lat > 0) {
            sprintf( osm_node_repr, "%10i %3.6f %3.7f\n", node->id, node->lng, node->lat );
        } else if ( node->lng < 0 && node->lat < 0 ) {
            sprintf( osm_node_repr, "%10i %3.6f %3.6f\n", node->id, node->lng, node->lat );
        }

    Thanks for your answers, Andreas
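
    A sketch of one standard trick: printf's space flag prints a blank where the minus sign would go, so negative values no longer grow the field by one character. A fixed field width (e.g. %12.7f, an illustrative value) additionally pins down the varying count of integer digits:

        /* ' ' flag reserves the sign column; width 12 right-aligns the whole field */
        sprintf(osm_node_repr, "%10i % 12.7f % 12.7f\n", node->id, node->lng, node->lat);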

