Search Results

Search found 5303 results on 213 pages for 'encoding'.


  • Deterministic and non-uniform long string generation from a seed

    - by Limonup
    I had this weird idea for an encryption scheme that I wanted to try out. It may be bad, and it may have been done before, but I'm just doing it for fun. The short version of the question is: is it possible to generate a long, deterministic and non-uniformly distributed string/sequence of numbers from a small seed? Long(er) version: I was thinking of encrypting a text by changing its encoding. The new encoding would be generated via the Huffman algorithm. To work well, the Huffman algorithm needs a fairly long text with a non-uniform distribution. Then characters can have different bit lengths, which would be the primary strength of this encryption. The problem is that it's impractical to enter/remember a long text each time you want to decrypt the text. So I was wondering if it is possible to generate a text from a password seed? It doesn't matter what the text is, as long as it has a non-uniform distribution of characters and the exact same sequence can be recreated each time you give it the same seed. Preferably, are there any functions/extensions in Python that can do this? EDIT: To expand on the "strength" of varying bit lengths: if I have the string "test", the ASCII values are 116, 101, 115, 116, which gives the bit values 1110100 1100101 1110011 1110100. Then, say my Huffman algorithm generates an encoding like t = 101, e = 1100111, s = 10001. The final string is 101 1100111 10001 101; if we read this back as ASCII, we get 1011100 1111000 1101000, which is 3 entirely different characters. Obviously it's impossible to perform any kind of frequency analysis or anything like that on this.
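
    One minimal sketch of what the question asks for, using only Python's standard library: seed a private PRNG with the password and draw characters with deliberately skewed weights, so the same seed always reproduces the same long, non-uniform text. Assumptions: random.choices needs Python 3.6+, the alphabet and weights are arbitrary placeholders, and random is not cryptographically secure -- this only demonstrates determinism plus skew.

    ```python
    import random
    import string

    def seeded_text(seed, length=10000):
        """Deterministically expand a small seed into a long, non-uniform string."""
        rng = random.Random(seed)                  # private, reproducible PRNG
        alphabet = string.ascii_lowercase + " "
        # Skewed weights give the non-uniform character frequencies Huffman needs.
        weights = [2 ** (i % 5) for i in range(len(alphabet))]
        return "".join(rng.choices(alphabet, weights=weights, k=length))

    # Same seed, same text -- so the Huffman table can be rebuilt for decryption.
    assert seeded_text("hunter2") == seeded_text("hunter2")
    ```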

    Read the article

  • Detect remote charset in PHP

    - by yallaa
    Hello, I would like to determine a remote page's encoding by detecting the Content-Type meta tag <meta http-equiv="Content-Type" content="text/html; charset=XXXXX" /> if present. I retrieve the remote page and try a regex to find the required setting if it is there. I am still learning, hence the problem below. Here is what I have: $EncStart = 'charset='; $EncEnd = '" \/\>'; preg_match( "/$EncStart(.*)$EncEnd/s", $RemoteContent, $RemoteEncoding ); echo $RemoteEncoding[ 1 ]; The above does indeed echo the name of the encoding, but it does not know where to stop, so it prints the rest of the line and then most of the rest of the remote page in my test. Example: when testing a remote Russian page it printed: windows-1251" / rest of page .... Which means that $EncStart was okay, but the $EncEnd part of the regex failed to stop the matching. This meta tag usually ends in one of three ways after the name of the encoding: "> | "/> | " /> I do not know whether this is usable to mark the end of the match and, if yes, how to escape it. I played with different ways of doing it but none worked. Thank you in advance for lending a hand.
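
    For what it's worth, a sketch of the same extraction expressed in Python for illustration (the sample HTML string is made up): the usual fix is to stop the capture at whatever terminates the attribute value, using a negated character class or a non-greedy quantifier, rather than trying to match the exact tag ending.

    ```python
    import re

    html = '<meta http-equiv="Content-Type" content="text/html; charset=windows-1251" />'

    # Stop at whatever ends the attribute value (quote, slash, '>' or whitespace)
    # instead of trying to match the exact "/>" ending of the tag.
    pattern = r'''charset=([^"'\s/>]+)'''
    match = re.search(pattern, html, re.IGNORECASE)
    if match:
        print(match.group(1))   # windows-1251
    ```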

    Read the article

  • JMeter CSV Data Set is corrupting Japanese strings stored as proper UTF-8, I get Question Marks instead

    - by Mark Bennett
    I read in search terms from a simple text file to send to a search engine. It works fine in English, but gives me ???? for any Japanese text. Text with mixed English and Japanese does show the English text, so I know it's reading the file. What I'm seeing: Input text: Snow Leopard ??????????????? Turns into: Snow Leopard ??????????????? This is in the POST field of an HTTP request. If I set JMeter to encode the data, it just puts in the percent sequences for question marks. Interesting note: in the example above there are 15 Japanese characters, and then 15 question marks, so at some point it's being seen as full characters and not just bytes. About the data: the CSV file is very simple in structure. There's only one field / one column, which I name TERM and later use as ${TERM}. I don't really need full CSV because it's only one string per line; there are no commas or quotes. When I run the Unix "file" command on the file, it says UTF-8 text. I've also verified it in command-line and graphical mode on two machines. JMeter CSV Data Set config: Filename: japanese-searches.csv; File encoding: UTF-8 (also tried without); Variable names: TERM; Delimiter: ,; Allow quoted data: False (I also tried True; different, but still wrong); Recycle at EOF: True; Stop at EOF: False; Sharing mode: All threads. A few things I've tried: tried Allow quoted data (it changed to other strange characters); -Dfile.encoding=UTF-8; tried encoding the POST, but it just turned into a bunch of %nn for question marks. And I'm not sure how to debug just after each line of the CSV is read in. I think it's corrupted right away, but I'm not sure. If it's only mangled when I reference it, then instead of ${TERM} perhaps there's some other "to bytes" function call. I'll start checking into that. I haven't done anything with the JMeter functions yet.

    Read the article

  • Unconvert Text File from Binary Format

    - by Hammer Bro.
    I've got a rather large CSV file (~700MB) which I know to consist of lines of 27-character alphanumeric hashes; no commas or anything fancy. Somehow, during its migration from Windows to Linux (via WinSCP and then a few regular SCPs), it has been converted into some kind of binary format I am unfamiliar with. If I open the file in vi, everything appears fine, and it says [converted] at the bottom, although I know it's not a line-endings issue (and dos2unix doesn't help). If I 'head' the file, it looks proper except for a "ÿþ" at the beginning of the first line. If I open the file in nano, however, I see the "ÿþ" at the start and then "^@" before every character (even newlines and EOF). If I try to re-save or copy the file (say via: head file.csv > short.txt), this special encoding is preserved. I copied the first ten lines out of vi (which displays it properly) into my Windows clipboard via my SSH client, then pasted them into a new text file, test.txt. This file is visually identical when opened in vi (and similar through 'head', minus the "ÿþ"), although it's roughly half the filesize. Additionally: "file test.txt" → "test.txt: ASCII text"; "file short.txt" → "short.txt:". I have no idea what format this once-text file got converted to (it's notoriously hard to search the internet for symbols), but surely there must be some way to convert it back. Any ideas?
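
    The "ÿþ" bytes are the UTF-16 little-endian byte-order mark and the "^@" characters are the NUL high bytes of 16-bit code units, so the file has most likely become UTF-16 LE text. A minimal sketch of the round trip that would convert it back, assuming that diagnosis (file names are placeholders):

    ```python
    # Assumes file.csv is UTF-16 LE text whose content is really plain ASCII hashes.
    with open("file.csv", "r", encoding="utf-16") as src:       # the BOM is consumed here
        text = src.read()

    with open("file-fixed.csv", "w", encoding="ascii", newline="") as dst:
        dst.write(text)
    ```

    On the shell, iconv -f UTF-16 -t ASCII file.csv > file-fixed.csv should do the same conversion.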

    Read the article

  • Digitize video?

    - by kire
    I have some movies on a VCR that I want to move to my computer somehow (for personal use). I have a video capture card and all the hardware required. I'm looking for the software - programs, codecs, etc. I like the format that most torrents come in, and I have some of these I'd like to use as a reference for comparison. How can I see what codec, bitrate, etc. a movie is using, so I can pick the same and know that it will work and look good? For AVI files the bitrate is visible in Explorer, but it doesn't mention the codec used, and I also have a lot of MKV files that Explorer can't handle. All kinds of tips, tricks and other suggestions are welcome. This is completely new to me. How do I avoid the video and audio getting out of sync, for example? Many movies you download have audio out of sync, so I guess this can happen quite easily. The encoding program has to run on Windows, and for playback the movies should work at least in VLC for Windows.

    Read the article

  • Burn .srt subtitles to AVC encoded video (transcoding with hardsubs) [closed]

    - by Saxtus
    Possible Duplicate: How do I hard code a movie with subtitles? I am looking for software (or a combination of programs) that will let me hard-burn subtitles from an .srt file that has italic and bold typefaces into an H.264/AVC encoded video, so it can be played on a desktop player that can't display external subtitles correctly. Ideally it could use DirectShow as input, as DirectVobSub does a nice job of showing the subtitles as they should be (allowing me to globally adjust font and size). CUDA support, to speed up encoding, would be great but is not necessary. The video source is also H.264/AVC encoded. The audio is AC-3 5.1 and should be retained too, but I have no problem re-muxing it later as long as it stays in sync. Until now I've unsuccessfully tested: Avisynth 2.58 - unable to make DirectVobSub launch through it; the TextSub() command renders subtitles with a fixed font/size and doesn't decode tags; malformed audio. TMPGEnc 4.0 XPress 4.7.4.299 - audio downmixed to 2.0; importing of subtitles doesn't decode tags. Badaboom 1.2.1.7 - no importing of subtitles at all. SUPER © 2010.build.37 - "Directshow decode" has a similar effect to Avisynth above; the other modes don't appear to allow any subtitles in. Thank you.

    Read the article

  • ffmpeg - creating DNxHD MXF files with alphas

    - by Hugh
    I'm struggling with something in FFMpeg at the moment... I'm trying to make DNxHD 1080p/24, 36Mb/s MXF files from a sequence of PNG files. My current command-line is: ffmpeg -y -f image2 -i /tmp/temp.%04d.png -s 1920x1080 -r 24 -vcodec dnxhd -f mxf -pix_fmt rgb32 -b 36Mb /tmp/temp.mxf To which ffmpeg gives me the output: Input #0, image2, from '/tmp/temp.%04d.png': Duration: 00:00:01.60, start: 0.000000, bitrate: N/A Stream #0.0: Video: png, rgb32, 1920x1080, 25 tbr, 25 tbn, 25 tbc Output #0, mxf, to '/tmp/temp.mxf': Stream #0.0: Video: dnxhd, yuv422p, 1920x1080, q=2-31, 36000 kb/s, 90k tbn, 24 tbc Stream mapping: Stream #0.0 -> #0.0 [mxf @ 0x1005800]unsupported video frame rate Could not write header for output file #0 (incorrect codec parameters ?) There are a few things in here that concern me: The output stream is insisting on being yuv422p, which doesn't support alpha. 24fps is an unsupported video frame rate? I've tried 23.976 too, and get the same thing. I then tried the same thing, but writing to a quicktime (still DNxHD, though) with: ffmpeg -y -f image2 -i /tmp/temp.%04d.png -s 1920x1080 -r 24 -vcodec dnxhd -f mov -pix_fmt rgb32 -b 36Mb /tmp/temp.mov This gives me the output: Input #0, image2, from '/tmp/1274263259.28098.%04d.png': Duration: 00:00:01.60, start: 0.000000, bitrate: N/A Stream #0.0: Video: png, rgb32, 1920x1080, 25 tbr, 25 tbn, 25 tbc Output #0, mov, to '/tmp/1274263259.28098.mov': Stream #0.0: Video: dnxhd, yuv422p, 1920x1080, q=2-31, 36000 kb/s, 90k tbn, 24 tbc Stream mapping: Stream #0.0 -> #0.0 Press [q] to stop encoding frame= 39 fps= 9 q=1.0 Lsize= 7177kB time=1.62 bitrate=36180.8kbits/s video:7176kB audio:0kB global headers:0kB muxing overhead 0.013636% Which obviously works, to a certain extent, but still has the issue of being yuv422p, and therefore losing the alpha. If I'm going to QuickTime, then I can get what I need using Shake, but my main aim here is to be able to generate .mxf files. Any thoughts? Thanks

    Read the article

  • Uninitialized constant Encoding with sqlite3-ruby on Windows

    - by Ben Scheirman
    On a new machine, installed ruby with the 1-click installer for windows. Installed rails 2.3.2 and all associated gems, then I installed the sqlite3 binaries (into the c:\ruby\bin folder). Lastly I did gem install sqlite3-ruby -v=1.2.3 (which is apparently the latest version that works with windows) This error happens when I run rake db:migrate or when any ActiveRecord object is touched at runtime. The error looks like this: ** Invoke db:migrate (first_time) ** Invoke environment (first_time) ** Execute environment ** Execute db:migrate rake aborted! **uninitialized constant Encoding** <---- Any help resolving this error would be greatly appreciated! Trace: C:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/dependencies.rb:443:in `load_missing_constant' C:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/dependencies.rb:80:in `const_missing' C:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/dependencies.rb:92:in `const_missing' C:/Ruby/lib/ruby/gems/1.8/gems/sqlite3-0.0.3/lib/sqlite3/encoding.rb:9:in `find' C:/Ruby/lib/ruby/gems/1.8/gems/sqlite3-0.0.3/lib/sqlite3/database.rb:69:in `initialize' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/sqlite3_adapter.rb:13:in `new' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/sqlite3_adapter.rb:13:in `sqlite3_connection' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/connection_pool.rb:223:in `send' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/connection_pool.rb:223:in `new_connection' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/connection_pool.rb:245:in `checkout_new_connection' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/connection_pool.rb:188:in `checkout' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/connection_pool.rb:184:in `loop' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/connection_pool.rb:184:in `checkout' C:/Ruby/lib/ruby/1.8/monitor.rb:242:in `synchronize' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/connection_pool.rb:183:in `checkout' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/connection_pool.rb:98:in `connection' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/connection_pool.rb:326:in `retrieve_connection' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/connection_specification.rb:123:in `retrieve_connection' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/connection_specification.rb:115:in `connection' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/migration.rb:435:in `initialize' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/migration.rb:400:in `new' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/migration.rb:400:in `up' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/migration.rb:383:in `migrate' C:/Ruby/lib/ruby/gems/1.8/gems/rails-2.3.2/lib/tasks/databases.rake:116 C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `call' 
C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `execute' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `each' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `execute' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:597:in `invoke_with_call_chain' C:/Ruby/lib/ruby/1.8/monitor.rb:242:in `synchronize' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:590:in `invoke_with_call_chain' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:583:in `invoke' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2051:in `invoke_task' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `each' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2023:in `top_level' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2001:in `run' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:1998:in `run' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/bin/rake:31 C:/Ruby/bin/rake:19:in `load' C:/Ruby/bin/rake:19

    Read the article

  • Encoding in Scene Builder

    - by Agafonova Victoria
    I generate an FXML file with Scene Builder. I need it to contain some Cyrillic text. When I edit this file with Scene Builder I can see normal Cyrillic letters (screen 1). After compiling and running my program with this FXML file, I see not Cyrillic letters but some artifacts (screen 2). But, as you can see on screen 3, its XML declaration says the encoding is UTF-8. Also, you can see there that it is actually saved in ANSI. I've tried to open it with other editors (default Eclipse and Sublime Text 2) and they showed the wrong encoding too (screen 4 and screen 5). At first I tried to convert it from ANSI to UTF-8 (with Notepad++). After that, Eclipse and Sublime Text 2 started displaying the Cyrillic letters as they should be. But Scene Builder gave an error when I tried to open this file with it: Error loading file C:\eclipse\workspace\equification\src\main\java\ru\igs\ava\equification\test.fxml. C:\eclipse\workspace\equification\src\main\java\ru\igs\ava\equification\test.fxml:1: ParseError at [row,col]:[1,1] Message: Content is not allowed in prolog. And Java gave me an error at runtime: ??? 08, 2012 8:11:03 PM javafx.fxml.FXMLLoader logException SEVERE: javax.xml.stream.XMLStreamException: ParseError at [row,col]:[1,1] Message: Content is not allowed in prolog. /C:/eclipse/workspace/equification/target/classes/ru/igs/ava/equification/test.fxml:1 at javafx.fxml.FXMLLoader.load(Unknown Source) at javafx.fxml.FXMLLoader.load(Unknown Source) at javafx.fxml.FXMLLoader.load(Unknown Source) at javafx.fxml.FXMLLoader.load(Unknown Source) at javafx.fxml.FXMLLoader.load(Unknown Source) at javafx.fxml.FXMLLoader.load(Unknown Source) at ru.igs.ava.equification.EquificationFX.start(EquificationFX.java:22) at com.sun.javafx.application.LauncherImpl$5.run(Unknown Source) at com.sun.javafx.application.PlatformImpl$4.run(Unknown Source) at com.sun.javafx.application.PlatformImpl$3.run(Unknown Source) at com.sun.glass.ui.win.WinApplication._runLoop(Native Method) at com.sun.glass.ui.win.WinApplication.access$100(Unknown Source) at com.sun.glass.ui.win.WinApplication$2$1.run(Unknown Source) at java.lang.Thread.run(Unknown Source) Exception in Application start method Exception in thread "main" java.lang.RuntimeException: Exception in Application start method at com.sun.javafx.application.LauncherImpl.launchApplication1(Unknown Source) at com.sun.javafx.application.LauncherImpl.access$000(Unknown Source) at com.sun.javafx.application.LauncherImpl$1.run(Unknown Source) at java.lang.Thread.run(Unknown Source) Caused by: javafx.fxml.LoadException: javax.xml.stream.XMLStreamException: ParseError at [row,col]:[1,1] Message: Content is not allowed in prolog. at javafx.fxml.FXMLLoader.load(Unknown Source) at javafx.fxml.FXMLLoader.load(Unknown Source) at javafx.fxml.FXMLLoader.load(Unknown Source) at javafx.fxml.FXMLLoader.load(Unknown Source) at javafx.fxml.FXMLLoader.load(Unknown Source) at javafx.fxml.FXMLLoader.load(Unknown Source) at ru.igs.ava.equification.EquificationFX.start(EquificationFX.java:22) at com.sun.javafx.application.LauncherImpl$5.run(Unknown Source) at com.sun.javafx.application.PlatformImpl$4.run(Unknown Source) at com.sun.javafx.application.PlatformImpl$3.run(Unknown Source) at com.sun.glass.ui.win.WinApplication._runLoop(Native Method) at com.sun.glass.ui.win.WinApplication.access$100(Unknown Source) at com.sun.glass.ui.win.WinApplication$2$1.run(Unknown Source) ... 1 more Caused by: javax.xml.stream.XMLStreamException: ParseError at [row,col]:[1,1] Message: Content is not allowed in prolog.
    at com.sun.org.apache.xerces.internal.impl.XMLStreamReaderImpl.next(Unknown Source) at javax.xml.stream.util.StreamReaderDelegate.next(Unknown Source) ... 14 more So I converted it back to ANSI. And, having this file in ANSI, I changed its "artefacted" text to Cyrillic letters manually. Now I can see normal text when I run my program, but when I open this fixed file via Scene Builder, Scene Builder shows me the "artefacted" text again (screen 7). So, how can I fix this situation?
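
    For reference, a sketch of a conversion step that sidesteps both symptoms, assuming the "ANSI" file is really Windows-1251 (the usual Russian ANSI codepage): re-encode the FXML to UTF-8 without a byte-order mark, since a UTF-8 BOM in front of the XML declaration is a common cause of the "Content is not allowed in prolog" error. The path is the one from the error message above; the cp1251 guess is an assumption.

    ```python
    import codecs

    path = r"C:\eclipse\workspace\equification\src\main\java\ru\igs\ava\equification\test.fxml"

    with open(path, "rb") as f:
        raw = f.read()

    if raw.startswith(codecs.BOM_UTF8):
        # Already UTF-8 (e.g. after Notepad++'s conversion) -- just drop the BOM.
        text = raw[len(codecs.BOM_UTF8):].decode("utf-8")
    else:
        # Assumed ANSI codepage; cp1251 is a guess for Russian Windows.
        text = raw.decode("cp1251")

    with open(path, "wb") as f:
        f.write(text.encode("utf-8"))   # UTF-8, no BOM
    ```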

    Read the article

  • Encoding Problem with Zend Navigation using Zend Translate (Spanish) in XML TMX File - Special Characters

    - by Routy
    Hello, I have been attempting to use Zend Translate to display translated menu items to the user. It works fine until I introduce special characters into the translation files. I instantiate the Zend_Translate object in my bootstrap and pass it in as a translator to Zend_Navigation: $translate = new Zend_Translate( array('adapter' => 'tmx', 'content' => APPLICATION_PATH .'/languages/translation.tmx', 'locale' => 'es' ) ); $navigation->setUseTranslator($translate); I have used several different adapters (array, tmx) to see if that made a difference. I ended up with a TMX file that is encoded using ISO-8859-1 (otherwise it throws an XML parse error when introducing the menu item "Administrar Applicación"): <?xml version="1.0" encoding="ISO-8859-1"?> <!DOCTYPE tmx SYSTEM "tmx14.dtd"> <tmx version="1.4"> <header creationtoolversion="1.0.0" datatype="tbx" segtype="sentence" adminlang="en" srclang="en" o-tmf="unknown" creationtool="XYZTool" > </header> <body> <tu tuid='link_signout'> <tuv xml:lang="en"><seg>Sign Out</seg></tuv> <tuv xml:lang="es"><seg>Salir</seg></tuv> </tu> <tu tuid='link_signin'> <tuv xml:lang="en"><seg>Login</seg></tuv> <tuv xml:lang="es"><seg>Acceder</seg></tuv> </tu> <tu tuid='Manage Application'> <tuv xml:lang="en"><seg>Manage Application</seg></tuv> <tuv xml:lang="es"><seg>Administrar Applicación</seg></tuv> </tu> </body> </tmx> Once I display the menu in the layout: echo $this->navigation()->menu(); it will display all menu items just fine, EXCEPT the one using special characters. It will simply be blank. NOW - if I use PHP's utf8_encode inside of the Zend Framework class 'Menu', which I DO NOT want to do: Line 215 in Zend_View_Helper_Navigation_Menu: if ($this->getUseTranslator() && $t = $this->getTranslator()) { if (is_string($label) && !empty($label)) { $label = utf8_encode($t->translate($label)); } if (is_string($title) && !empty($title)) { $title = utf8_encode($t->translate($title)); } } then it works. The menu item displays correctly and all is joyful. The thing is, I do not want to modify the library. Is there some kind of encoding setting in either Zend_Translate or Zend_Navigation that I am not finding? Please help! Zend library version: 1.11

    Read the article

  • ffmpeg hangs when creating a video

    - by FearUs
    I am trying to insert an audio channel with a video: first of all I extract the audio from the original video for processing: ffmpeg -i lotr.mp4 lotr.wav I then extract all frames for later processing too: ffmpeg -i lotr.mp4 -f image2 %d.jpg When done processing audio and video streams, I try to create the video ffmpeg -f image2 -r 15 -i %d.jpg new.mp4 then merge with the audio: ffmpeg -i new.mp4 -i lotr.wav -map 0:0 -map 1:0 new_w_audio.mp4 Result: CPU activity = 100%, the process hangs and never returns. PS: I even tried it without modifying the images or the audio (so just trying to unpack then repack the video) but still the same output FFmpeg version SVN-r26400, Copyright (c) 2000-2011 the FFmpeg developers built on Jan 18 2011 04:07:05 with gcc 4.4.2 configuration: --enable-gpl --enable-version3 --enable-libgsm --enable-libvorb is --enable-libtheora --enable-libspeex --enable-libmp3lame --enable-libopenjpeg --enable-libschroedinger --enable-libopencore_amrwb --enable-libopencore_amrnb --enable-libvpx --disable-decoder=libvpx --arch=x86 --enable-runtime-cpudetect - -enable-libxvid --enable-libx264 --enable-librtmp --extra-libs='-lrtmp -lpolarss l -lws2_32 -lwinmm' --target-os=mingw32 --enable-avisynth --enable-w32threads -- cross-prefix=i686-mingw32- --cc='ccache i686-mingw32-gcc' --enable-memalign-hack libavutil 50.36. 0 / 50.36. 0 libavcore 0.16. 1 / 0.16. 1 libavcodec 52.108. 0 / 52.108. 0 libavformat 52.93. 0 / 52.93. 0 libavdevice 52. 2. 3 / 52. 2. 3 libavfilter 1.74. 0 / 1.74. 0 libswscale 0.12. 0 / 0.12. 0 Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'new.mp4': Metadata: major_brand : isom minor_version : 512 compatible_brands: isomiso2mp41 creation_time : 1970-01-01 00:00:00 encoder : Lavf52.93.0 Duration: 00:00:29.66, start: 0.000000, bitrate: 193 kb/s Stream #0.0(und): Video: mpeg4, yuv420p, 200x134 [PAR 1:1 DAR 100:67], 192 k b/s, 15 fps, 15 tbr, 15 tbn, 15 tbc Metadata: creation_time : 1970-01-01 00:00:00 [wav @ 01fed010] max_analyze_duration reached Input #1, wav, from 'lotr.wav': Duration: 00:00:29.90, bitrate: 176 kb/s Stream #1.0: Audio: pcm_s16le, 11025 Hz, 1 channels, s16, 176 kb/s File 'new_w_audio.mp4' already exists. Overwrite ? [y/N] y [buffer @ 01b03820] w:200 h:134 pixfmt:yuv420p Output #0, mp4, to 'new_w_audio.mp4': Metadata: major_brand : isom minor_version : 512 compatible_brands: isomiso2mp41 creation_time : 1970-01-01 00:00:00 encoder : Lavf52.93.0 Stream #0.0(und): Video: mpeg4, yuv420p, 200x134 [PAR 1:1 DAR 100:67], q=2-3 1, 200 kb/s, 15 tbn, 15 tbc Metadata: creation_time : 1970-01-01 00:00:00 Stream #0.1: Audio: aac, 11025 Hz, 1 channels, s16, 64 kb/s Stream mapping: Stream #0.0 -> #0.0 Stream #1.0 -> #0.1 Press [q] to stop encoding

    Read the article

  • How can I get the Terminal raster font to display alt codes in a text editor?

    - by grg-n-sox
    I am working on a project that involves making some ASCII art, except it isn't true ASCII art since I am using a fair amount of Windows Alt codes to make it. Anyway, I want to make sure that as I work on it, it looks exactly how it will in a Windows command prompt terminal session. So, since command prompt defaults to the Terminal raster font, I figured I would use that. But I quickly noticed that when I use the Terminal typeface in a text editor, it either fails to render the Alt-code characters at all (as is the case most of the time) or renders them incorrectly. Now, I understand if a font just doesn't support non-ASCII characters, but what I don't get is how the characters show up correctly in command prompt when they don't in a text editor. I checked the output of 'chcp' and it was set to 437 by default, which is what I need. Well, either that or 850, but preferably 437, since 850 got rid of some of the graphics characters and replaced them with other Latin characters. The command prompt terminal settings show I am using the Terminal raster font with an 8x12 glyph size. So I try using size 12 in the text editor, but no good, even after switching the text encoding to either MS-DOS OEM-US (supposedly an alternative name for CP437) or UTF-8. I just don't get why I am not getting the characters to show up. Also, if it helps, the art I am making is basically modified screenshots from a game I play called Dwarf Fortress that uses characters from the Terminal/Curses typeset, or at least that is how it is reported in the forums by those who make graphics sets to replace the default character set. However, the game doesn't actually use the system's Terminal font. The game's data files include a bitmap image that is a grid of all the characters the game uses, so it uses this bitmap to render graphics instead of an actual font file. And I basically want a text editor set up so that if I type up some ASCII art to look like a screenshot from Dwarf Fortress, it will actually look like Dwarf Fortress, other than the lack of color. Any help?

    Read the article

  • MySQL encoding problem

    - by Syom
    I have a problem when inserting something in a foreign language into the database. I have set the collation of the database to utf8_general_ci (tried utf8_unicode_ci too), but when I insert some text into a table it is saved like this: Õ€Õ¡ÕµÕ¥Ö€Õ¥Õ¶ Ô±Õ¶Õ¸Ö‚Õ¶. However, when I read it back from the database, the text shows in the correct form; it only looks like that in the database. I have set the encoding in my HTML document to charset=UTF-8: <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /> and I run mysql_query("SET NAMES UTF-8"); mysql_query("SET CHARACTER SET UTF-8"); when connecting to the database. So I think that I've done everything, but it is still saved in that unknown format. Could you help me? Thanks in advance.
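
    For comparison, a minimal sketch of the same round trip in Python (MySQLdb assumed; the database, table and column names and the sample text are placeholders), where the connection character set is declared explicitly -- the same thing the SET NAMES calls above are meant to achieve:

    ```python
    import MySQLdb

    # charset='utf8' makes the client<->server connection use UTF-8
    # (the equivalent of a successful "SET NAMES utf8" -- note: no hyphen).
    conn = MySQLdb.connect(host="localhost", user="user", passwd="secret",
                           db="mydb", charset="utf8", use_unicode=True)
    cur = conn.cursor()
    cur.execute("INSERT INTO pages (title) VALUES (%s)",
                (u"caf\u00e9 \u20ac",))       # any non-ASCII sample text
    conn.commit()

    cur.execute("SELECT title FROM pages")
    print(cur.fetchone()[0])                  # should round-trip unchanged
    ```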

    Read the article

  • Allowing asterisk in URL - ASP.NET MVC 2 - .NET 4.0 or encoding

    - by raRaRa
    I'm having trouble allowing an asterisk (*) in the URL of my website. I am running ASP.NET MVC 2 and .NET 4.0. Here's an example that describes the problem: http://mysite.com/profile/view/Nice* The username is Nice* and ASP.NET says there are illegal characters in the URL: Illegal characters in path. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.ArgumentException: Illegal characters in path. I have tried all the Web.config methods I've seen online, such as: <pages validateRequest="false"> and <httpRuntime requestPathInvalidCharacters="" requestValidationMode="2.0" /> So my question is: is it possible to allow an asterisk in the URL? If not, is there some encoding method in .NET that can encode the asterisk (*)? Thanks!

    Read the article

  • gVim doesn't recognize the Meta (Alt) Key in an imap after changing the encoding

    - by René Nyffenegger
    In order to edit HTML files, I have the following three imaps in a file that I source for filetype html: imap <buffer> <M-[> &uuml; imap <buffer> <M-;> &ouml; imap <buffer> <M-'> &auml; This works fine until I change the encoding of the HTML file with set enc=utf-8. Now, pressing Alt-[ for example gives me a Û. Interestingly, after sourcing the same file again, it expands the imaps correctly. This doesn't really make sense to me. So, why is this, and how can I have a more consistent environment regarding imap and utf-8? This is occurring with gVim 7.1 for Windows.

    Read the article

  • A definitive guide to Url Encoding in ASP .NET

    - by cbp
    I am starting to realise that there are about a bazillion different methods for encoding URLs in .NET. I keep finding new ones. They all work slightly differently, but they all have essentially the same summary comments. Does anyone have a definitive matrix that shows the exact differences between the following methods: HttpUtility.UrlEncode, HttpUtility.UrlPathEncode, Server.UrlEncode, Uri.EscapeUriString, Uri.EscapeDataString ... are there any more? It would also be good to match these up with use cases, e.g.: URLs in href attributes of <a> tags; URLs to be displayed to the user in HTML; URLs as querystring values (i.e. to be sent in GET requests); URLs to be sent in POST requests; etc.

    Read the article

  • Optimum encoding standard for flowplayer to play mp4

    - by renjucool
    I'm using Flowplayer 3.1.1 for streaming videos to my browser. The videos are uploaded by the users and they may upload different formats. What would be a solution to stream the videos as MP4, whatever format they upload? I'm currently using ffmpeg commands: ffmpeg -i "InputFile.mp4" -sameq -vcodec libx264 -r 35 -acodec libfaac -y "OutputFile.mp4" But larger video files (say 100 MB) take a minute or more to load into Flowplayer and buffer. I think the problem is with my encoding. Your suggestions are welcome!

    Read the article

  • FOP Encoding issue

    - by Ravi chandra
    Hi guys, I have a similar issue. I have HTML stored in a DB CLOB. I am retrieving that and converting it to XHTML using TIDY.jar. Once I have the XHTML, I convert it to XSL-FO using FOP. Finally the XSL-FO is rendered as PDF. Previously everything was working fine with Linux / WAS 5 / Java 1.4. Recently we migrated the apps to Linux / WAS 6 / Java 1.5. Now the XHTML to XSL-FO step is messing everything up. The XSL-FO contains ??? (question marks) in place of euro, space (nbsp), agrave, egrave, etc. I tried changing the JVM encoding to UTF-8 and I have also modified my servlet request and response to support UTF-8. I am helpless and unable to figure out where exactly the issue is coming from. Can someone please check this and suggest a solution? Thanks in advance.

    Read the article

  • C# web request with POST encoding question

    - by rlandster
    On the MSDN site there is an example of some C# code that shows how to make a web request with POSTed data. Here is an excerpt of that code: WebRequest request = WebRequest.Create ("http://www.contoso.com/PostAccepter.aspx "); request.Method = "POST"; string postData = "This is a test that posts this string to a Web server."; byte[] byteArray = Encoding.UTF8.GetBytes (postData); // (*) request.ContentType = "application/x-www-form-urlencoded"; request.ContentLength = byteArray.Length; Stream dataStream = request.GetRequestStream (); dataStream.Write (byteArray, 0, byteArray.Length); dataStream.Close (); WebResponse response = request.GetResponse (); ...more... The line marked (*) is the line that puzzles me. Shouldn't the data be encoded using the UrlEncode function rather than UTF8? Isn't that what application/x-www-form-urlencoded implies?
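
    To illustrate the distinction the question is getting at, here is a sketch in Python (the field name "message" is made up): a form-urlencoded body is name=value pairs with reserved characters percent-encoded, which is not the same thing as simply taking the UTF-8 bytes of the raw string.

    ```python
    from urllib.parse import urlencode

    post_data = "This is a test that posts this string to a Web server."

    # Raw UTF-8 bytes, as in the MSDN sample: spaces and punctuation go out unchanged.
    raw_body = post_data.encode("utf-8")

    # What application/x-www-form-urlencoded actually describes: name=value pairs
    # with reserved characters percent-encoded (spaces become '+').
    form_body = urlencode({"message": post_data}).encode("ascii")

    print(raw_body)    # b'This is a test that posts this string to a Web server.'
    print(form_body)   # b'message=This+is+a+test+that+posts+this+string+to+a+Web+server.'
    ```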

    Read the article

  • Accept-Encoding headers being sent by browser but not received by server

    - by Daniel Jacobs
    I have been trying to debug this for weeks. All of the browsers on all of the clients on my home network are sending 'Accept-Encoding: gzip,deflate'. However, that header is somehow, somewhere being dropped before the request makes it to a web server. For example, http://www.whatsmyip.org/http_compression/ says 'No, your browser is not requesting compressed content'. I've used Fiddler to make sure that all of my browsers are indeed sending the header. I've swapped out my router. I've turned off all anti-virus software. Brighthouse/Roadrunner (the local cable ISP) says they are not doing any filtering (and I can't see why they would in this case). Any suggestions would be most welcome!

    Read the article

  • How do I modify the -encoding argument to javac in the Android Ant build system

    - by Paul Butcher
    Apologies if this is a stupid question - I'm an Android and Ant newbie. I have utf8 encoded source files that I need to compile with the Android Ant build system. By default, the encoding is set to ascii. I'd be very grateful for a pointer to whatever I need to do to let the build system know that my files are utf8. Incidentally, it works fine if I build in Eclipse, but I need to build from the command line. Thanks!

    Read the article

  • fancybox encoding issue

    - by Noam Smadja
    I am using fancybox.net for my lightboxes. It works great with images, but when I use the iframe option I am having problems with encoding. I use Hebrew, English and Russian on my website. When I use UTF-8 on the iframe page I get squares instead of Hebrew letters. When I change to iso-8859-8 I get Hebrew but inverted, and Russian comes out as: ??????????? Any idea? My website is UTF-8 and everything works great... except fancybox :(

    Read the article

  • HTML encoding in MVC input

    - by fearofawhackplanet
    I'm working through NerdDinner and I'm a bit confused about the following section... First they've added a form for creating a new dinner, with a bunch of textboxes declared like: <%= Html.TextArea("Description") %> They then show two ways of binding form input to the model: [AcceptVerbs(HttpVerbs.Post)] public ActionResult Create() { Dinner dinner = new Dinner(); UpdateModel(dinner); ... } or: [AcceptVerbs(HttpVerbs.Post)] public ActionResult Create(Dinner dinner) { ... } OK, great, that all looks really easy so far. Then a bit later on they say: It is important to always be paranoid about security when accepting any user input, and this is also true when binding objects to form input. You should be careful to always HTML encode any user-entered values to avoid HTML and JavaScript injection attacks. Huh? MVC is managing the data binding for us. Where/how are you supposed to do the HTML encoding?

    Read the article

  • doc file created in iPhone documents encoding issue

    - by Saurabh Verma
    I'm trying to write an MS Word file into the Documents directory with the following code: NSArray* paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory , NSUserDomainMask, YES); NSString* path; NSString* gradeDoc = [self fetchCommentsDesc]; NSString* str = [self.objStudent.strName stringByAppendingFormat:@".doc"]; path = [[paths objectAtIndex:0] stringByAppendingPathComponent:str]; [gradeDoc writeToFile:path atomically:YES encoding:NSUTF8StringEncoding error:nil]; [self fetchCommentsDesc] returns an NSString. self.objStudent.strName is a string. Issue: when I open the .doc file created in the iPhone's Documents directory, all the special characters in the document appear as boxes or some Arabic-looking characters. Please help!

    Read the article

  • How can I add encoding to a Python-generated CSV file

    - by user1958218
    I am following this post http://stackoverflow.com/a/9016545 and I want to know how to do the same thing in Python. I don't know how to insert the BOM data there. This is my current code: response = HttpResponse(content_type='text/csv') response['Content-Type'] = 'application/octet-stream' response['Content-Disposition'] = 'attachment; filename="results.csv"' writer = UnicodeWriter(response, quoting=csv.QUOTE_ALL, encoding="utf-8") I want to convert to UTF-16. The BOM data is this, but I don't know how to insert it. From here http://stackoverflow.com/a/4440143: echo "\xEF\xBB\xBF"; // UTF-8 BOM But I want it for Python and UTF-16. I tried opening that CSV in Notepad, inserting \xef\xbb\xbf at the beginning, and Excel displayed it correctly. But it is also visible before the first column. How can I hide that? The user won't like it.
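
    A minimal sketch of the usual approach, reusing the HttpResponse and UnicodeWriter from the snippet above (those two names are assumed, not re-imported here): write the BOM bytes into the response before the writer emits any rows. codecs.BOM_UTF8 is b'\xef\xbb\xbf'; codecs.BOM_UTF16_LE is the UTF-16 equivalent, in which case the writer's encoding has to match.

    ```python
    import codecs
    import csv

    # HttpResponse and UnicodeWriter are assumed to be the ones from the question.
    response = HttpResponse(content_type='text/csv')
    response['Content-Disposition'] = 'attachment; filename="results.csv"'

    # Emit the BOM first, before any CSV data, so Excel uses it to pick the
    # encoding instead of showing it as a value in the first column.
    response.write(codecs.BOM_UTF8)           # or codecs.BOM_UTF16_LE for UTF-16

    writer = UnicodeWriter(response, quoting=csv.QUOTE_ALL, encoding="utf-8")
    writer.writerow([u"first", u"second"])    # hypothetical row
    ```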

    Read the article
