Search Results

Search found 223 results on 9 pages for 'wb'.

Page 6/9 | < Previous Page | 2 3 4 5 6 7 8 9  | Next Page >

  • exceptions with python unicode encode/decode functions (why doesn't errors=ignore actually ignore th

    - by gatoatigrado
    Does anyone know why the string conversion functions throw exceptions when errors="ignore" is passed? How can I convert from regular Python string objects to unicode without errors being thrown? Thanks very much!

        python -c "import codecs; codecs.open('tmp', 'wb', encoding='utf8', errors='ignore').write('?????')"

    returns

        Traceback (most recent call last):
          File "", line 1, in
          File "/usr/lib/python2.6/codecs.py", line 686, in write
            return self.writer.write(data)
          File "/usr/lib/python2.6/codecs.py", line 351, in write
            data, consumed = self.encode(object, self.errors)
        UnicodeDecodeError: 'ascii' codec can't decode byte 0xd0 in position 0: ordinal not in range(128)
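
    A minimal Python 2 sketch of the usual explanation: codecs writers expect unicode, so passing a byte string triggers an implicit ASCII decode before the requested encode, and errors='ignore' only applies to the encode step. Decoding the bytes first avoids the exception (the sample bytes below are a placeholder).

        # Python 2 sketch: make the data unicode before handing it to the
        # codecs writer; 'ignore' then applies to the encode step as expected.
        import codecs

        raw = '\xd0\x9f\xd1\x80'                    # placeholder UTF-8 bytes
        text = raw.decode('utf-8', 'ignore')        # drop anything undecodable

        f = codecs.open('tmp', 'wb', encoding='utf8', errors='ignore')
        f.write(text)                               # no implicit ASCII decode now
        f.close()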

    Read the article

  • Python: Retrieve Image from MSSQL

    - by KoRkOnY
    Dear All, I'm working on a Python project that retrieves an image from MSSQL. My code is able to retrieve the images successfully, but with a fixed size of 63KB. If the image is larger than that, it only brings back the first 63KB of the image! The following is my code:

        #!/usr/bin/python
        import _mssql

        mssql = _mssql.connect('<ServerIP>', '<UserID>', '<Password>')
        mssql.select_db('<Database>')
        x = 1
        while x == 1:
            query = "select TOP 1 * from table;"
            if mssql.query(query):
                rows = mssql.fetch_array()
                rowNumbers = rows[0][1]
                # print "Number of rows fetched: " + str(rowNumbers)
                for row in rows:
                    for i in range(rowNumbers):
                        FILE = open('/home/images/' + str(row[2][i][1]) + '-' + str(row[2][i][2]).strip() + ' (' + str(row[2][i][0]) + ').jpg', 'wb')
                        FILE.write(row[2][i][4])
                        FILE.close()
                        print 'Successfully downloaded image: ' + str(row[2][i][0]) + '\t' + str(row[2][i][2]).strip() + '\t' + str(row[2][i][1])
            else:
                print mssql.errmsg()
                print mssql.stdmsg()
        mssql.close()
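
    A hedged guess, based on the fixed ~64KB cutoff: with FreeTDS-based drivers the connection's TEXTSIZE limit (the "text size" setting in freetds.conf defaults to roughly 64KB) truncates text/image columns. Raising it before the SELECT is the usual workaround; the sketch below reuses the question's own query() call and assumes it accepts a SET statement.

        # Sketch -- assumes the ~64KB cutoff comes from the TEXTSIZE limit.
        import _mssql

        mssql = _mssql.connect('<ServerIP>', '<UserID>', '<Password>')
        mssql.select_db('<Database>')
        mssql.query('SET TEXTSIZE 2147483647')   # assumption: lifts the default truncation
        if mssql.query("select TOP 1 * from table;"):
            rows = mssql.fetch_array()
            # ... write the image bytes as before ...
        mssql.close()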

    Read the article

  • Django file uploads - Just can't work it out

    - by phoebebright
    OK, I give up - after 5 solid hours trying to get a Django form to upload a file, I've checked out all the links on Stack Overflow and googled and googled. Why is it so hard? I just want it to work like the admin file upload. So I get that I need code like:

        if submitForm.is_valid():
            handle_uploaded_file(request.FILES['attachment'])
            obj = submitForm.save()

    and I can see my file in request.FILES['attachment'] (yes, I have enctype set), but what am I supposed to do in handle_uploaded_file? The examples all have a fixed file name, but obviously I want to upload the file to the directory I defined in the model, and I can't see where I can find that.

        def handle_uploaded_file(f):
            destination = open('fyi.xml', 'wb+')
            for chunk in f.chunks():
                destination.write(chunk)
            destination.close()

    Bet I'm going to feel really stupid when someone points out the obvious!
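
    A hedged sketch of the usual answer: when the model has a FileField with upload_to and the form is a ModelForm bound to request.FILES, save() writes the file under MEDIA_ROOT itself, so no handle_uploaded_file is needed. The form and field names below mirror the question and are otherwise assumptions.

        # Sketch, assuming SubmitForm is a ModelForm over a model declaring
        # attachment = models.FileField(upload_to='attachments/'); `request`
        # is the view's request object as in the question.
        submitForm = SubmitForm(request.POST, request.FILES)
        if submitForm.is_valid():
            obj = submitForm.save()            # Django stores the file under upload_to
            stored_path = obj.attachment.path  # where it ended up on disk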

    Read the article

  • Scrapy Could not find spider Error

    - by Nacari
    I have been trying to get a simple spider to run with Scrapy, but I keep getting the error

        Could not find spider for domain: stackexchange.com

    when I run the code with the command scrapy-ctl.py crawl stackexchange.com. The spider is as follows:

        from scrapy.spider import BaseSpider
        from __future__ import absolute_import


        class StackExchangeSpider(BaseSpider):
            domain_name = "stackexchange.com"
            start_urls = [
                "http://www.stackexchange.com/",
            ]

            def parse(self, response):
                filename = response.url.split("/")[-2]
                open(filename, 'wb').write(response.body)

        SPIDER = StackExchangeSpider()

    Another person posted almost exactly the same problem months ago but did not say how they fixed it: http://stackoverflow.com/questions/1806990/scrapy-spider-is-not-working. I have been following the tutorial at http://doc.scrapy.org/intro/tutorial.html exactly, and cannot figure out why it is not working.
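
    A hedged guess at the cause, as a sketch: a from __future__ import must be the very first statement in a module, so placing it after another import raises a SyntaxError, the spider module never loads, and Scrapy then reports that it cannot find a spider for the domain. Reordering the module header avoids that:

        # Sketch of the reordered module header -- __future__ imports first.
        from __future__ import absolute_import

        from scrapy.spider import BaseSpider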

    Read the article

  • plupload variable upload path?

    - by SoulieBaby
    Hi all, I'm using plupload to upload files to my server (http://www.plupload.com/index.php), however I wanted to know if there was any way of making the upload path variable. Basically I need to select the upload path folder first, then choose the files using plupload and then upload to the initially selected folder. I've tried a few different ways but I can't seem to pass along the variable folder path to the upload.php file. I'm using the flash version of plupload. If someone could help me out, that would be fantastic!! :) Here's my plupload jquery:

        jQuery.noConflict();
        jQuery(document).ready(function() {
            jQuery("#flash_uploader").pluploadQueue({
                // General settings
                runtimes: 'flash',
                url: '/assets/upload/upload.php',
                max_file_size: '10mb',
                chunk_size: '1mb',
                unique_names: false,

                // Resize images on clientside if we can
                resize: {width: 500, height: 350, quality: 100},

                // Flash settings
                flash_swf_url: '/assets/upload/flash/plupload.flash.swf'
            });
        });

    And here's the upload.php file:

        <?php
        /**
         * upload.php
         *
         * Copyright 2009, Moxiecode Systems AB
         * Released under GPL License.
         *
         * License: http://www.plupload.com/license
         * Contributing: http://www.plupload.com/contributing
         */

        // HTTP headers for no cache etc
        header('Content-type: text/plain; charset=UTF-8');
        header("Expires: Mon, 26 Jul 1997 05:00:00 GMT");
        header("Last-Modified: ".gmdate("D, d M Y H:i:s")." GMT");
        header("Cache-Control: no-store, no-cache, must-revalidate");
        header("Cache-Control: post-check=0, pre-check=0", false);
        header("Pragma: no-cache");

        // Settings
        $targetDir = $_SERVER['DOCUMENT_ROOT']."/tmp/uploads";   // temp directory <- need these to be variable
        $finalDir = $_SERVER['DOCUMENT_ROOT']."/tmp/uploads2";   // final directory <- need these to be variable
        $cleanupTargetDir = true; // Remove old files
        $maxFileAge = 60 * 60; // Temp file age in seconds

        // 5 minutes execution time
        @set_time_limit(5 * 60);
        // usleep(5000);

        // Get parameters
        $chunk = isset($_REQUEST["chunk"]) ? $_REQUEST["chunk"] : 0;
        $chunks = isset($_REQUEST["chunks"]) ? $_REQUEST["chunks"] : 0;
        $fileName = isset($_REQUEST["name"]) ? $_REQUEST["name"] : '';

        // Clean the fileName for security reasons
        $fileName = preg_replace('/[^\w\._]+/', '', $fileName);

        // Create target dir
        if (!file_exists($targetDir))
            @mkdir($targetDir);

        // Remove old temp files
        if (is_dir($targetDir) && ($dir = opendir($targetDir))) {
            while (($file = readdir($dir)) !== false) {
                $filePath = $targetDir . DIRECTORY_SEPARATOR . $file;

                // Remove temp files if they are older than the max age
                if (preg_match('/\\.tmp$/', $file) && (filemtime($filePath) < time() - $maxFileAge))
                    @unlink($filePath);
            }
            closedir($dir);
        } else
            die('{"jsonrpc" : "2.0", "error" : {"code": 100, "message": "Failed to open temp directory."}, "id" : "id"}');

        // Look for the content type header
        if (isset($_SERVER["HTTP_CONTENT_TYPE"]))
            $contentType = $_SERVER["HTTP_CONTENT_TYPE"];

        if (isset($_SERVER["CONTENT_TYPE"]))
            $contentType = $_SERVER["CONTENT_TYPE"];

        if (strpos($contentType, "multipart") !== false) {
            if (isset($_FILES['file']['tmp_name']) && is_uploaded_file($_FILES['file']['tmp_name'])) {
                // Open temp file
                $out = fopen($targetDir . DIRECTORY_SEPARATOR . $fileName, $chunk == 0 ? "wb" : "ab");
                if ($out) {
                    // Read binary input stream and append it to temp file
                    $in = fopen($_FILES['file']['tmp_name'], "rb");
                    if ($in) {
                        while ($buff = fread($in, 4096))
                            fwrite($out, $buff);
                    } else
                        die('{"jsonrpc" : "2.0", "error" : {"code": 101, "message": "Failed to open input stream."}, "id" : "id"}');

                    fclose($out);
                    unlink($_FILES['file']['tmp_name']);
                } else
                    die('{"jsonrpc" : "2.0", "error" : {"code": 102, "message": "Failed to open output stream."}, "id" : "id"}');
            } else
                die('{"jsonrpc" : "2.0", "error" : {"code": 103, "message": "Failed to move uploaded file."}, "id" : "id"}');
        } else {
            // Open temp file
            $out = fopen($targetDir . DIRECTORY_SEPARATOR . $fileName, $chunk == 0 ? "wb" : "ab");
            if ($out) {
                // Read binary input stream and append it to temp file
                $in = fopen("php://input", "rb");
                if ($in) {
                    while ($buff = fread($in, 4096)) {
                        fwrite($out, $buff);
                    }
                } else
                    die('{"jsonrpc" : "2.0", "error" : {"code": 101, "message": "Failed to open input stream."}, "id" : "id"}');

                fclose($out);
            } else
                die('{"jsonrpc" : "2.0", "error" : {"code": 102, "message": "Failed to open output stream."}, "id" : "id"}');
        }

        // Moves the file from $targetDir to $finalDir after receiving the final chunk
        if ($chunk == ($chunks - 1)) {
            rename($targetDir . DIRECTORY_SEPARATOR . $fileName, $finalDir . DIRECTORY_SEPARATOR . $fileName);
        }

        // Return JSON-RPC response
        die('{"jsonrpc" : "2.0", "result" : null, "id" : "id"}');
        ?>

    Read the article

  • Logical python question - handling directories and files in them

    - by Konstantin
    Hello! I'm using this function to extract files from a .zip archive and store them on the server:

        def unzip_file_into_dir(file, dir):
            import sys, zipfile, os, os.path
            os.makedirs(dir, 0777)
            zfobj = zipfile.ZipFile(file)
            for name in zfobj.namelist():
                if name.endswith('/'):
                    os.mkdir(os.path.join(dir, name))
                else:
                    outfile = open(os.path.join(dir, name), 'wb')
                    outfile.write(zfobj.read(name))
                    outfile.close()

    And the usage:

        unzip_file_into_dir('/var/zips/somearchive.zip', '/var/www/extracted_zip')

    somearchive.zip has this structure:

        somearchive.zip
            1.jpeg
            2.jpeg
            another.jpeg

    or, sometimes, this one:

        somearchive.zip
            somedir/
                1.jpeg
                2.jpeg
                another.jpeg

    Question is: how do I modify my function so that my extracted_zip catalog always contains just images, not images in another subdirectory, even if the images are stored in somedir inside the archive?
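
    A minimal sketch of one way to do it: skip the directory entries and write every member under its base name, so any somedir/ prefix inside the archive is dropped.

        # Sketch: flatten the archive by stripping any directory prefix from
        # each member name before writing it out.
        import os, zipfile

        def unzip_file_into_dir(file, dir):
            os.makedirs(dir, 0777)
            zfobj = zipfile.ZipFile(file)
            for name in zfobj.namelist():
                if name.endswith('/'):
                    continue                        # skip directory entries
                flat_name = os.path.basename(name)  # somedir/1.jpeg -> 1.jpeg
                outfile = open(os.path.join(dir, flat_name), 'wb')
                outfile.write(zfobj.read(name))
                outfile.close()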

    Read the article

  • Weird problem: IE8 user can't authenticate with web service

    - by NovaJoe
    I have an ASP.NET app. It has a page that requires authentication, and the authenticated user can view the page. The page makes a jQuery Ajax call to a WCF service. The WCF service checks that the user is authenticated via HttpContext. I have a user who is running WinXP and IE8. This user can authenticate to the page, but when the Ajax call is made from the page to the web service, the user receives my "session not authenticated" message, generated by the service and displayed on the page. When I use the same OS/browser combo, the page and service work just fine, as expected; no errors. What option in this user's IE settings would cause this behavior?

    Read the article

  • Set a script to automatically detect character encoding in a plain-text-file in Python?

    - by Haidon
    I've set up a script that basically does a large-scale find-and-replace on a plain text document. At the moment it works fine with ASCII, UTF-8, and UTF-16 (and possibly others, but I've only tested these three) encoded documents, so long as the encoding is specified inside the script (the example code below specifies UTF-16). Is there a way to make the script automatically detect which of these character encodings is being used in the input file, and automatically set the character encoding of the output file to the same encoding used on the input file?

        findreplace = [
            ('term1', 'term2'),
        ]

        inF = open(infile, 'rb')
        s = unicode(inF.read(), 'utf-16')
        inF.close()

        for couple in findreplace:
            outtext = s.replace(couple[0], couple[1])
            s = outtext

        outF = open(outFile, 'wb')
        outF.write(outtext.encode('utf-16'))
        outF.close()

    Thanks!
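
    A sketch of one common approach, assuming the third-party chardet package is installed: guess the input encoding from the raw bytes and reuse the same encoding for the output. The file names below are placeholders.

        # Sketch: detect the encoding with chardet, then mirror it on output.
        import chardet

        infile, outFile = 'input.txt', 'output.txt'   # placeholder names
        findreplace = [('term1', 'term2')]

        raw = open(infile, 'rb').read()
        charenc = chardet.detect(raw)['encoding']     # e.g. 'UTF-16' or 'utf-8'

        s = unicode(raw, charenc)
        for old, new in findreplace:
            s = s.replace(old, new)

        outF = open(outFile, 'wb')
        outF.write(s.encode(charenc))                 # same encoding as the input
        outF.close()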

    Read the article

  • How to save bytes to an image and access it from Bottle

    - by Graham Smith
    I'm working on an API wrapper for Snapchat using Python and Bottle, but in order to return the file (retrieved by the Python script) I have to save the bytes (returned by Snapchat) to a .jpg file. I'm not quite sure how I will do this and still be able to access the file so that it can be returned. Here's what I have so far, but it returns a 404.

        @route('/image')
        def image():
            username = request.query.username
            token = request.query.auth_token
            img_id = request.query.id
            return get_blob(username, token, img_id)

        def get_blob(usr, token, img_id):
            # Form URL and download encrypted "blob"
            blob_url = "https://feelinsonice.appspot.com/ph/blob?id={}".format(img_id)
            blob_url += "&username=" + usr + "&timestamp=" + str(timestamp()) + "&req_token=" + req_token(token)
            enc_blob = requests.get(blob_url).content
            # Save decrypted image
            FileUpload.save('/images/' + img_id + '.jpg')
            img = open('images/' + img_id + '.jpg', 'wb')
            img.write(decrypt(enc_blob))
            img.close()
            return static_file(img_id + '.jpg', root='/images/')
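
    One hedged observation, as a sketch: the file is written to a relative images/ directory but served with root='/images/', an absolute path, which would explain the 404. Using a single absolute directory for both the write and the static_file() call keeps them in sync; the directory name below is an assumption.

        # Sketch: one absolute directory for both writing and serving.
        import os
        from bottle import static_file

        IMAGE_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'images')

        def save_and_serve(img_id, data):
            with open(os.path.join(IMAGE_DIR, img_id + '.jpg'), 'wb') as img:
                img.write(data)
            return static_file(img_id + '.jpg', root=IMAGE_DIR)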

    Read the article

  • Python Daemon Subprocess not working at boot

    - by Adam Richardson
    I am attempting to write a Python daemon that will launch at boot. The goal of the script is to receive a job from our Gearman load-balancing server and complete the job. I am using the python-daemon module from PyPI (http://pypi.python.org/pypi/python-daemon/). The job it completes is converting images from ORF (Olympus raw image format) to JPEG; to accomplish this, an outside program is used, ufraw in this case. The problem comes in when I start the daemon at boot: if I launch it from the shell it runs perfectly and completes the work, but when it starts at boot it is unable to launch the subprocess command.

        commandString = '/usr/bin/ufraw-batch --interpolation=four-color --wb=camera --compression=100 --output="' + outfile + '" --out-type=jpg --overwrite "' + infile + '"'
        args = shlex.split(commandString)
        process = subprocess.Popen(args).wait()

    I am not sure what I am doing wrong. Thanks for any help.
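
    A hedged sketch of one common culprit: a process started at boot (and daemonized) inherits almost no environment and no useful working directory, which can break tools like ufraw that expect HOME and a sane PATH. Passing an explicit environment and logging the child's output usually reveals what is failing. All paths and environment values below are placeholder assumptions.

        # Sketch: give the child an explicit environment and capture its
        # output so failures at boot become visible in a log file.
        import shlex
        import subprocess

        infile, outfile = '/data/raw/img.orf', '/data/jpg/img.jpg'          # placeholders
        env = {'PATH': '/usr/bin:/bin', 'HOME': '/var/lib/imageworker'}     # assumed values

        commandString = ('/usr/bin/ufraw-batch --interpolation=four-color --wb=camera '
                         '--compression=100 --output="' + outfile + '" --out-type=jpg '
                         '--overwrite "' + infile + '"')
        args = shlex.split(commandString)

        log = open('/var/log/imageworker.log', 'ab')
        process = subprocess.Popen(args, env=env, cwd='/tmp',
                                   stdout=log, stderr=subprocess.STDOUT).wait()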

    Read the article

  • Why is lua crashing after extracting zip files?

    - by Brian T Hannan
    I have the following code, but it crashes every time it reaches the end of the function, even though it successfully extracts all the files and puts them in the right location.

        require "zip"

        function ExtractZipAndCopyFiles(zipPath, zipFilename, destinationPath)
            local zfile, err = zip.open(zipPath .. zipFilename)

            -- iterate through each file inside the zip file
            for file in zfile:files() do
                local currFile, err = zfile:open(file.filename)
                local currFileContents = currFile:read("*a") -- read entire contents of current file
                local hBinaryOutput = io.open(destinationPath .. file.filename, "wb")

                -- write current file inside zip to a file outside zip
                if (hBinaryOutput) then
                    hBinaryOutput:write(currFileContents)
                    hBinaryOutput:close()
                end
            end

            zfile:close()
        end

        -- call the function
        ExtractZipAndCopyFiles("C:\\Users\\bhannan\\Desktop\\LUA\\", "example.zip", "C:\\Users\\bhannan\\Desktop\\ZipExtractionOutput\\")

    Why does it crash every time it reaches the end?

    Read the article

  • Rails - Permission denied when try to save uploaded file in windows

    - by logoin
    I'm writing my own file upload in Rails. I saw some related questions, but they don't answer my question. I use

        File.open("#{RAILS_ROOT}/public/docs/attachments/#{@file_name}", "wb") { |f| f.write(@temp_file.read) }

    to write the file on my local machine (OS: Windows XP) instead of saving it in the database. I get a Permission denied error on the File.open call. Since I have Cygwin installed, I chmod 777 the folder that the files should be written to and also made sure the file I upload can be read. But I'm still getting the same error. Any ideas? Thanks!

    Read the article

  • Improving the join of two wave files?

    - by kaki
    I have written code for joining two wave files. It works fine when I am joining larger segments, but as I need to join very small segments the clarity is not good. I have learned that a signal processing technique such as a windowed join can be used to improve the joining of the files:

        y[n] = w[n] * s[n]

    i.e. multiply the value of the signal at sample number n by the value of a windowing function, for example a Hamming window:

        w[n] = 0.54 - 0.46 * cos(2*pi*n / L),   0 <= n <= L

    I am not understanding how to get the value of the signal at sample n and how to implement this. The code I am using for joining is:

        import wave

        m = ['C:/begpython/S0001_0002.wav', 'C:/begpython/S0001_0001.wav']
        i = 1
        a = m[i]
        infiles = [a, "C:/begpython/S0001_0002.wav", a]
        outfile = "C:/begpython/S0001_00367.wav"

        data = []
        data1 = []
        for infile in infiles:
            w = wave.open(infile, 'rb')
            data1 = [w.getnframes]
            data.append([w.getparams(), w.readframes(w.getnframes())])
            # data1 = [ord(character) for character in data1]
            # print data1
            # data1 = ''.join(chr(character) for character in data1)
            w.close()

        output = wave.open(outfile, 'wb')
        output.setparams(data[0][0])
        output.writeframes(data[0][1])
        output.writeframes(data[1][1])
        output.writeframes(data[2][1])
        output.close()
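
    A hedged sketch of one way to apply this, assuming 16-bit mono PCM: unpack the frames into integers with struct, cross-fade the last 30 samples of the first clip against the first 30 of the second using half of a Hamming-style ramp, and repack. The file names mirror the question; the fade length of 30 samples is arbitrary.

        # Sketch: windowed cross-fade over the join, assuming 16-bit mono PCM.
        import math
        import struct
        import wave

        def read_samples(path):
            w = wave.open(path, 'rb')
            params = w.getparams()
            samples = list(struct.unpack('<%dh' % w.getnframes(),
                                         w.readframes(w.getnframes())))
            w.close()
            return params, samples

        params, first = read_samples('C:/begpython/S0001_0001.wav')
        _, second = read_samples('C:/begpython/S0001_0002.wav')

        L = 30                                       # samples to cross-fade
        fade = [0.54 - 0.46 * math.cos(math.pi * n / (L - 1)) for n in range(L)]

        for n in range(L):
            mixed = first[-L + n] * fade[L - 1 - n] + second[n] * fade[n]
            first[-L + n] = max(-32768, min(32767, int(mixed)))   # clamp to 16-bit

        joined = first + second[L:]
        out = wave.open('C:/begpython/joined.wav', 'wb')
        out.setparams(params)
        out.writeframes(struct.pack('<%dh' % len(joined), *joined))
        out.close()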

    Read the article

  • RSA encrypted data block size

    - by calccrypto
    How do you store an RSA-encrypted data block? The output might be significantly greater than the original input data block size, and I don't think people waste memory by padding bucket-loads of 0s in front of each data block. Besides, how would they be removed? Or is each block stored on a new line within the file? If that is the case, how would you tell the difference between a legitimate newline and a '\n' char written into the file? What am I missing? I'm writing the "write to file" part in Python, so maybe it's one of the differences between:

        open(file, 'w')
        open(file, 'w+b')
        open(file, 'wb')

    that I don't know. Or is it something else?
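
    A sketch of the usual scheme, assuming a 2048-bit key for concreteness: every RSA ciphertext block is exactly the modulus size (256 bytes here), so blocks can be written back-to-back in binary mode and sliced back out by position, with no separators or newlines involved.

        # Sketch: fixed-size ciphertext records, written and read in binary mode.
        BLOCK_SIZE = 256                  # bytes per block for a 2048-bit modulus (assumed)

        def write_blocks(path, blocks):
            f = open(path, 'wb')          # 'wb': no newline translation on Windows
            for block in blocks:
                f.write(block)            # each block is already BLOCK_SIZE bytes long
            f.close()

        def read_blocks(path):
            data = open(path, 'rb').read()
            return [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]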

    Read the article

  • Regular expressions in a Python find-and-replace script?

    - by Haidon
    I'm new to Python scripting, so please forgive me in advance if the answer to this question seems inherently obvious. I'm trying to put together a large-scale find-and-replace script using Python. I'm using code similar to the following:

        findreplace = [
            ('term1', 'term2'),
        ]

        inF = open(infile, 'rb')
        s = unicode(inF.read(), charenc)
        inF.close()

        for couple in findreplace:
            outtext = s.replace(couple[0], couple[1])
            s = outtext

        outF = open(outFile, 'wb')
        outF.write(outtext.encode('utf-8'))
        outF.close()

    How would I go about having the script do a find and replace for regular expressions? Specifically, I want it to find some information (metadata) specified at the top of a text file. Eg:

        Title: This is the title
        Author: This is the author
        Date: This is the date

    and convert it into LaTeX format. Eg:

        \title{This is the title}
        \author{This is the author}
        \date{This is the date}

    Maybe I'm tackling this the wrong way. If there's a better way than regular expressions, please let me know! Thanks!
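
    A minimal sketch of the regex route: re.sub with a capturing group and a backreference rewrites each metadata line, using the inline (?m) flag so ^ and $ match per line. The sample string stands in for the file contents.

        # Sketch: turn "Key: value" header lines into LaTeX commands with re.sub.
        import re

        s = u"Title: This is the title\nAuthor: This is the author\nDate: This is the date\n"

        for key, command in [('Title', 'title'), ('Author', 'author'), ('Date', 'date')]:
            s = re.sub(r'(?m)^' + key + r':\s*(.+)$',
                       r'\\' + command + r'{\1}', s)

        print s   # one LaTeX command per line, e.g. \title{This is the title}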

    Read the article

  • Cannot use fclose on output stream, input stream is fine.

    - by TeeJay
    Whenever I run my program with fclose(outputFile); at the very end, I get an error:

        glibc detected ... corrupted double-linked list

    The confusing thing about this, though, is that I have fclose(inputFile); directly above it and it works fine. Any suggestions?

        FILE* inputFile = fopen(fileName, "r");
        if (inputFile == NULL) {
            printf("inputFile did not open correctly.\n");
            exit(0);
        }

        FILE* outputFile = fopen("output.txt", "wb");
        if (outputFile == NULL) {
            printf("outputFile did not open correctly.\n");
            exit(0);
        }

        /* ... read in inputFile ... */
        /* ... some fprintf's to outputFile ... */

        fclose(inputFile);
        fclose(outputFile);

    Read the article

  • For improving the join of two wave files

    - by kaki
    I want to get the values of the last 30 frames of the first wav file and the first thirty frames of the second wave file, in integer format, stored in a list or array. I have written the code for joining, but during this manipulation I get the frames in byte format, and when I tried to convert them to integers I couldn't. As said before, I want the frame data of the first 30 and last 30 frames in integer format; by performing other operations on them, the join can be made more successful. Looking for your help in this, please... Thanking you,

        import wave

        m = ['C:/begpython/S0001_0002.wav', 'C:/begpython/S0001_0001.wav']
        i = 1
        a = m[i]
        infiles = [a]
        outfile = "C:/begpython/S0001_00367.wav"

        data = []
        data1 = []
        for infile in infiles:
            w = wave.open(infile, 'rb')
            data1 = [w.getnframes]
            # print w.readframes(100)
            data.append([w.getparams(), w.readframes(w.getnframes())])
            # print w.readframes(1)
            # data1 = [ord(character) for character in data1]
            # print data1
            # data1 = ''.join(chr(character) for character in data1)
            w.close()

        print data

        output = wave.open(outfile, 'wb')
        output.setparams(data[0][0])
        output.writeframes(data[0][1])
        output.writeframes(data[1][1])
        output.writeframes(data[2][1])
        output.close()
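
    A short sketch, assuming 16-bit mono PCM as in the earlier question: struct.unpack converts the raw frame bytes into integers, after which slicing gives the last 30 samples of one file and the first 30 of the other.

        # Sketch: raw frames -> list of ints, then slice the ends of interest.
        import struct
        import wave

        def frames_as_ints(path):
            w = wave.open(path, 'rb')
            values = struct.unpack('<%dh' % w.getnframes(),
                                   w.readframes(w.getnframes()))
            w.close()
            return list(values)

        last_30 = frames_as_ints('C:/begpython/S0001_0001.wav')[-30:]
        first_30 = frames_as_ints('C:/begpython/S0001_0002.wav')[:30]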

    Read the article

  • write 2d array to a file in C (Operating system)

    - by Bobj-C
    Hello All, I used to use the code below to write a 1D array to a file:

        FILE *fp;
        float floatValue[5] = { 1.1F, 2.2F, 3.3F, 4.4F, 5.5F };
        int i;

        if ((fp = fopen("test", "wb")) == NULL) {
            printf("Cannot open file.\n");
        }

        if (fwrite(floatValue, sizeof(float), 5, fp) != 5)
            printf("File read error.");
        fclose(fp);

        /* read the values */
        if ((fp = fopen("test", "rb")) == NULL) {
            printf("Cannot open file.\n");
        }

        if (fread(floatValue, sizeof(float), 5, fp) != 5) {
            if (feof(fp))
                printf("Premature end of file.");
            else
                printf("File read error.");
        }
        fclose(fp);

        for (i = 0; i < 5; i++)
            printf("%f ", floatValue[i]);

    My question is: what do I do if I want to write and read a 2D array?

    Read the article

  • App Engine - Save response from an API in the data store as file (blob)

    - by herrherr
    Hi there, I'm banging my head against the wall with this one: what I want to do is store a file that is returned from an API in the datastore as a blob. Here is the code that I use on my local machine (which of course works, thanks to an existing file system):

        client.convertHtml(html, open('html.pdf', 'wb'))

    Since I cannot write to a file on App Engine, I tried several ways to store the response, without success. Any hints on how to do this? I was trying to do it with StringIO and managed to store the response, but then wasn't able to store it as a blob in the data store. Thanks, Chris
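
    A hedged sketch of one route, assuming the API call accepts any file-like object: capture the bytes in a StringIO buffer, then wrap them in db.Blob on a db.BlobProperty field. The model name is hypothetical; client and html come from the question's context.

        # Sketch: buffer the API response in memory, then store it as a Blob.
        from StringIO import StringIO
        from google.appengine.ext import db

        class StoredPdf(db.Model):            # hypothetical model
            pdf = db.BlobProperty()

        buf = StringIO()
        client.convertHtml(html, buf)         # assumes a file-like object is accepted
        StoredPdf(pdf=db.Blob(buf.getvalue())).put()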

    Read the article

  • Optimizing memory usage and changing file contents with PHP

    - by errata
    In a function like this:

        function download($file_source, $file_target) {
            $rh = fopen($file_source, 'rb');
            $wh = fopen($file_target, 'wb');
            if (!$rh || !$wh) {
                return false;
            }
            while (!feof($rh)) {
                if (fwrite($wh, fread($rh, 1024)) === FALSE) {
                    return false;
                }
            }
            fclose($rh);
            fclose($wh);
            return true;
        }

    what is the best way to rewrite the last few bytes of a file with my custom string? Thanks!

    Read the article

  • Binding multiple objects in Grails

    - by WaZ
    I have three domain classes:

        :: Person (Person.ID, Name, Address)
        :: Designation (Designation.ID, Title, Band)
        :: SalarySlip (Person.ID, Designation.ID, totalIncome, Tax, etc.)

    In the update method of the Person controller, when I assign a person a designation from a list of designation values, I want to insert a new record into SalarySlip. Something like:

        def update = {
            def SalarySlipInstance = new SalarySlip()
            SalarySlipInstance.Person.ID = Params.ID       // is this correct?
            SalarySlipInstance.Designation.ID = ??         // since the value is coming from a list. How can I bind this field?
        }

    Much Appreciated, Thanks, WB

    Read the article

  • Python/Numpy - Save Array with Column AND Row Titles

    - by Scott B
    I want to save a 2D array to a CSV file with row and column "header" information (like a table). I know that I could use the header argument to numpy.savetxt to save the column names, but is there any easy way to also include some other array (or list) as the first column of data (like row titles)? Below is an example of how I currently do it. Is there a better way to include those row titles, perhaps some trick with savetxt I'm unaware of?

        import csv
        import numpy as np

        data = np.arange(12).reshape(3,4)

        # Add a '' for the first column because the row titles go there...
        cols = ['', 'col1', 'col2', 'col3', 'col4']
        rows = ['row1', 'row2', 'row3']

        with open('test.csv', 'wb') as f:
            writer = csv.writer(f)
            writer.writerow(cols)
            for row_title, data_row in zip(rows, data):
                writer.writerow([row_title] + data_row.tolist())
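
    A hedged alternative, as a sketch: stack the row titles as an extra string column and let a single savetxt call write the whole table (the numbers become strings in the process, which may or may not matter here).

        # Sketch: row titles as a first column, one savetxt call for everything.
        import numpy as np

        data = np.arange(12).reshape(3, 4)
        rows = np.array(['row1', 'row2', 'row3'])

        table = np.column_stack([rows, data.astype(str)])
        np.savetxt('test.csv', table, fmt='%s', delimiter=',',
                   header=',col1,col2,col3,col4', comments='')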

    Read the article

  • Writing to CSV issue in Spyder

    - by 0003
    I am doing the Kaggle Titanic beginner contest. I generally work in the Spyder IDE, but I came across a weird issue. The expected output is supposed to be 418 rows. When I run the script from the terminal, the output I get is 418 rows (as expected). When I run it in the Spyder IDE, the output is 408 rows, not 418. When I re-run it in the current Python process, it outputs the expected 418 rows. I posted a redacted portion of the code that has all of the relevant bits. Any ideas?

        import csv
        import numpy as np

        csvFile = open("/train.csv", "ra")
        csvFile = csv.reader(csvFile)
        header = csvFile.next()

        testFile = open("/test.csv", "ra")
        testFile = csv.reader(testFile)
        testHeader = testFile.next()

        writeFile = open("/gendermodelDebug.csv", "wb")
        writeFile = csv.writer(writeFile)

        count = 0
        for row in testFile:
            if row[3] == 'male':
                # do something to row
                writeFile.writerow(row)
                count += 1
            elif row[3] == 'female':
                # do something to row
                writeFile.writerow(row)
                count += 1
            else:
                raise ValueError("Did not find a male or female in %s" % row)
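
    A hedged sketch of the likely explanation: the file handle behind the csv.writer is rebound and never closed, so the last buffered rows only reach disk when the interpreter exits, which happens for a terminal run but not in a persistent IDE console. Keeping a reference to the handle and closing it (or using with) makes the row count deterministic.

        # Sketch: keep a reference to the output file and close it explicitly
        # so the tail of the write buffer reaches disk in any environment.
        import csv

        out_handle = open("/gendermodelDebug.csv", "wb")
        writeFile = csv.writer(out_handle)
        # ... write the rows exactly as before ...
        out_handle.close()       # flushes the remaining buffered rows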

    Read the article

  • ffmpeg conversion problem

    - by user33126
    I installed ffmpeg and it shows its version and configuration correctly, but even a plain info command such as

        ffmpeg -i Alice_In_Wonderland.mp4

    gives a message like:

        FFmpeg version 0.5, Copyright (c) 2000-2009 Fabrice Bellard, et al.
          configuration: --prefix=/usr --libdir=/usr/lib64 --shlibdir=/usr/lib64 --mandir=/usr/share/man --incdir=/usr/include --extra-cflags=-fPIC --enable-libamr-nb --enable-libamr-wb --enable-libdirac --enable-libfaac --enable-libfaad --enable-libmp3lame --enable-libtheora --enable-libx264 --enable-gpl --enable-nonfree --enable-postproc --enable-pthreads --enable-shared --enable-swscale --enable-x11grab
          libavutil     49.15. 0 / 49.15. 0
          libavcodec    52.20. 0 / 52.20. 0
          libavformat   52.31. 0 / 52.31. 0
          libavdevice   52. 1. 0 / 52. 1. 0
          libswscale     0. 7. 1 /  0. 7. 1
          libpostproc   51. 2. 0 / 51. 2. 0
          built on Nov 6 2009 19:11:04, gcc: 4.1.2 20080704 (Red Hat 4.1.2-46)
        Seems stream 1 codec frame rate differs from container frame rate: 49.93 (9986/200) -> 49.92 (599/12)
        Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'Alice_In_Wonderland.mp4':
          Duration: 00:01:39.65, start: 0.000000, bitrate: 542 kb/s
            Stream #0.0(und): Audio: aac, 44100 Hz, stereo, s16
            Stream #0.1(und): Video: h264, yuv420p, 480x270, 49.92 tbr, 24.96 tbn, 49.93 tbc
        At least one output file must be specified

    Please tell me what the problem is.

    Read the article
