Search Results

Search found 5325 results on 213 pages for 'huffman encoding'.

Page 17 of 213

  • Mass audio encoder

    - by bessman
    I have a few thousand FLAC files which I would like to transcode to OGG Vorbis, but I can't find any suitable tools for the job. To name a few I have tried so far and why they are unsuitable: oggenc is single-threaded and would require me to automate it myself, mencoder requires the input to also contain video, and abcde assumes the input is a CD. The ideal tool should be multi-threaded and support inputting multiple files located in different directories simultaneously. CLI or GUI doesn't matter. Does such a tool exist?
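
    Absent a ready-made tool, a few lines of scripting can fan oggenc out across cores. A minimal sketch in Python; the directory paths and quality setting are placeholders, and it assumes an oggenc build that accepts FLAC input:

        # Sketch: run one oggenc process per file, a few at a time.
        import subprocess
        from pathlib import Path
        from concurrent.futures import ThreadPoolExecutor

        def transcode(flac: Path) -> int:
            # oggenc writes file.ogg next to file.flac by default
            return subprocess.run(["oggenc", "-q", "5", str(flac)]).returncode

        # gather FLAC files from several directories (placeholder paths)
        files = [p for root in ("/music/a", "/music/b") for p in Path(root).rglob("*.flac")]
        with ThreadPoolExecutor(max_workers=4) as pool:
            failures = sum(1 for code in pool.map(transcode, files) if code != 0)
        print(failures, "files failed")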

    Read the article

  • Are there disadvantages to a literal + instead of an encoded + (%2B) in a URL?

    - by M_rk
    A client of mine has a product whose name ends with a plus sign (e.g. Google+) and would like the product's webpage to have a human-readable URL (i.e. a URL that doesn't contain %2B). Since our projects use the following .htaccess rule: RewriteRule ^(.*)$ index.php?$1 it is possible to use a URL like that, with the plus sign standing in for an encoded space. However, while the URL would read like /google+, the actual meaning of the URL would be /google[space]. (The markup won't let me place a real space there.) Now my concern is that this would have disadvantages for SEO. Is this concern valid, and/or are there other drawbacks to this approach?
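
    For what it's worth, "+" only stands for a space in application/x-www-form-urlencoded data (query strings and form posts); in the path itself it is just a plus, and %2B always decodes to a literal plus. A small illustration with Python's urllib.parse:

        from urllib.parse import unquote, unquote_plus

        print(unquote("/google+"))        # '/google+'  (path-style decoding keeps the plus)
        print(unquote_plus("q=google+"))  # 'q=google ' (form/query decoding turns + into a space)
        print(unquote("/google%2B"))      # '/google+'  (%2B always means a literal plus)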

    Read the article

  • Cannot write log file 'ffmpeg2pass-0.log' for pass-1 encoding: Permission denied

    - by matt_tm
    Our PHP application is installed as 'root' on a RedHat 5/CentOS system at /var/www/html/beta/. After disabling SELinux in order to allow these scripts to execute other programs on the system - http://serverfault.com/questions/192951/what-permissions-are-needed-to-run-a-system-command-within-a-php-script-that-wr - the Apache error_log showed this error: Cannot write log file 'ffmpeg2pass-0.log' for pass-1 encoding: Permission denied
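
    ffmpeg writes ffmpeg2pass-0.log into the process's current working directory, so one common workaround is to point the pass log at a directory the Apache user can write to via -passlogfile. A minimal sketch in Python; the paths, bitrate and exact ffmpeg invocation are illustrative assumptions, not taken from the question:

        # Sketch: first x264 pass with the pass log redirected to a writable location.
        import subprocess

        src = "/var/www/html/beta/uploads/input.avi"  # hypothetical input file
        log_prefix = "/tmp/ffmpeg2pass"               # must be writable by the apache user

        subprocess.run([
            "ffmpeg", "-y", "-i", src,
            "-c:v", "libx264", "-b:v", "1000k",       # older builds spell these -vcodec / -b
            "-pass", "1", "-passlogfile", log_prefix,
            "-an", "-f", "null", "/dev/null",
        ], check=True)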

    Read the article

  • windows dvd maker encoding

    - by Greg Rains
    My Windows DVD Maker stops/freezes the encoding process on some movies that I have converted to AVI files. Is there an answer to this using Windows DVD Maker? Or is there another software product that is better? (using Windows 7 Professional)

    Read the article

  • Why do (Russian) characters in some received emails change when reading in David InfoCenter?

    - by waszkiewicz
    I'm using David InfoCenter as email software, and I have trouble with some of my emails in Russian. It's only a few letters, in some emails (sent from different people): for example, the "R" ("P" in Russian) will be shown as a "T". In other emails in Russian the problem doesn't appear. Isn't that strange? Has anyone had the same problem and found where it comes from? When I forward such an email to an external mailbox (internet email account), it's even worse, and gives me symbols instead of all the Russian letters... The default encoding was "Russian (ISO)"; I changed it to "Russian (Windows)", but the problem is the same. Another weird reaction: when I write an internal email and give it the subject TEST in Russian (Тест), with Тест in the text window, it changes the title to "Oano"? But the content stays in Russian... With Mailinator I got the following, for message and subject "Тест":

        Subject: ???? [..]
        MIME-Version: 1.0
        Content-Type: multipart/alternative; boundary="----_=_NextPart_000_00017783.4AF7FB71"

        This message is in MIME format. Since your mail reader does not understand this format, some or all of this message may not be legible.

        ------_=_NextPart_000_00017783.4AF7FB71
        Content-Type: text/plain; charset="utf-8"
        Content-Transfer-Encoding: base64

        0KLQtdGB0YI=

        ------_=_NextPart_000_00017783.4AF7FB71
        Content-Type: text/html; charset="utf-8"
        Content-Transfer-Encoding: base64

        PCFET0NUWVBFIEhUTUwgUFVCTElDICItLy9XM0MvL0RURCBIVE1MIDQuMCBUcmFuc2l0aW9uYWwv
        L0VOIj4NCjxIVE1MPjxIRUFEPg0KPE1FVEEgaHR0cC1lcXVpdj1Db250ZW50LVR5cGUgY29udGVu
        dD0idGV4dC9odG1sOyBjaGFyc2V0PXV0Zi04Ij4NCjxNRVRBIG5hbWU9R0VORVJBVE9SIGNvbnRl
        bnQ9Ik1TSFRNTCA4LjAwLjYwMDEuMTg4NTIiPjwvSEVBRD4NCjxCT0RZIHN0eWxlPSJGT05UOiAx
        MHB0IENvdXJpZXIgTmV3OyBDT0xPUjogIzAwMDAwMCIgbGVmdE1hcmdpbj01IHRvcE1hcmdpbj01
        Pg0KPERJViBzdHlsZT0iRk9OVDogMTBwdCBDb3VyaWVyIE5ldzsgQ09MT1I6ICMwMDAwMDAiPtCi
        0LXRgdGCPFNQQU4gDQppZD10b2JpdF9ibG9ja3F1b3RlPjxTUEFOIGlkPXRvYml0X2Jsb2NrcXVv
        dGU+PC9ESVY+PC9TUEFOPjwvU1BBTj48L0JPRFk+PC9IVE1MPg==

        ------_=_NextPart_000_00017783.4AF7FB71--
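
    For what it's worth, the base64 text/plain part quoted above decodes to valid UTF-8, which suggests the stored message itself is intact and the mangling happens in the displaying client. A quick check in Python:

        import base64

        part = "0KLQtdGB0YI="  # the text/plain body from the message above
        print(base64.b64decode(part).decode("utf-8"))  # prints the original Russian word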

    Read the article

  • What could cause the file command in Linux to report a text file as data?

    - by Jonah Bishop
    I have a couple of C++ source files (one .cpp and one .h) that are being reported as type data by the file command in Linux. When I run the file -bi command against these files, I'm given this output (same output for each file): application/octet-stream; charset=binary Each file is clearly plain-text (I can view them in vi). What's causing file to misreport the type of these files? Could it be some sort of Unicode thing? Both of these files were created in Windows-land (using Visual Studio 2005), but they're being compiled in Linux (it's a cross-platform application). Any ideas would be appreciated. Update: I don't see any null characters in either file. I found some extended characters in the .cpp file (in a comment block), removed them, but file still reports the same encoding. I've tried forcing the encoding in SlickEdit, but that didn't seem to have an effect. When I open the file in vim, I see a [converted] line as soon as I open the file. Perhaps I can get vim to force the encoding?
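
    file(1) falls back to "data" when it meets bytes it cannot reconcile with any text encoding it knows, typically NUL bytes or sequences that are not valid UTF-8. A rough sketch of that check in Python (an approximation, not file's actual algorithm):

        import sys

        def suspicious(path):
            data = open(path, "rb").read()
            nul = data.find(b"\x00")
            if nul != -1:
                yield nul, "NUL byte"
            try:
                data.decode("utf-8")
            except UnicodeDecodeError as e:
                yield e.start, "invalid UTF-8: byte 0x%02x" % data[e.start]

        for path in sys.argv[1:]:
            for offset, why in suspicious(path):
                print("%s: offset %d: %s" % (path, offset, why))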

    Read the article

  • Windows Server NTFS volume list file name encodings and any illegal file names

    - by benbradley
    I'm having to deal with a Windows Server (NTFS) file server and our backup application appears to be failing with certain files. According to this https://en.wikipedia.org/wiki/NTFS#Internals NTFS apparently supports file names encoded in UTF-16 but according to their support team, our backup application only supports UTF-8. I'd like to confirm whether this is actually the problem by seeing the file name encoding for myself. The files that are failing appear to be using plain English A-Z letters and other ASCII characters. No accents or non-English letters etc. I suppose even though the letters appear to be plain A-Z the file name could still be encoded in UTF-16. Does anyone know of a utility or script that can recursively go through all files in a directory and show the encoding of the file name? Then I could try renaming to UTF-8 to see if the backup can proceed. I'm not a Windows developer so can't write this up myself. Presumably the encoding of the file name should be stored in the FS somewhere and therefore it should be possible to expose this.
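
    Since NTFS stores every name as UTF-16 internally, the practical question is usually which characters the names contain. A sketch in Python (assumes Python 3.7+ for str.isascii()) that walks a tree and flags names that are not plain ASCII, or that cannot be encoded as UTF-8 at all (e.g. names containing unpaired surrogates):

        import os, sys

        root = sys.argv[1] if len(sys.argv) > 1 else "."
        for dirpath, dirnames, filenames in os.walk(root):
            for name in dirnames + filenames:
                full = os.path.join(dirpath, name)
                if not name.isascii():
                    print("non-ASCII name:", full)
                try:
                    name.encode("utf-8")
                except UnicodeEncodeError:
                    print("not representable as UTF-8:", full)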

    Read the article

  • SQL Error (1064) when importing data from SQL file

    - by mejpark
    I have a MySQL database, which was originally set up with the default latin1 character set and latin1_swedish_ci collation. I was using the database like this for some time, until I noticed strange characters on my production web site, which is powered by a database exported from my development machine. At this point, I changed the default character set of the database and tables to utf8 and the collation to utf8_unicode_ci, converted the latin1 data inside each table to utf8 (using the 'convert data' option) and exported the database as a single SQL file using HeidiSQL. When the resulting SQL file is opened in Notepad++, several characters are rendered incorrectly. For example, en dashes (-) are displayed as – and e with accent (é) are displayed as é. I changed the encoding of the file from ANSI to UTF-8 (using the encoding menu option in Notepad++) and the offending characters are rendered correctly. I saved the new utf8-encoded SQL file and attempted to import the contents into the MySQL database on my production server. The import process fails with the following error:

        /* SQL Error (1064): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '?# -------------------------------------------------------- # Host: ' at line 1 */
        /* Error with snippets directory: The specified path was not found */

    The head of the SQL file:

        # --------------------------------------------------------
        # Host: 127.0.0.1
        # Server version: 5.1.33-community
        # Server OS: Win32
        # HeidiSQL version: 6.0.0.3773
        # Date/time: 2011-04-20 09:48:36
        # --------------------------------------------------------

    It chokes on the first line of the file, which is commented out. Why is this happening? I didn't have a problem loading data from SQL files until I changed the character set and collation of the database. I came up with an ugly workaround to this problem by performing the following steps: export the database as a single SQL file using HeidiSQL; open the resulting file in Notepad++ and convert it from ANSI to UTF-8 encoding; create a new empty file in Notepad++, paste in the UTF-8 text and save the file normally. What am I missing here?
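
    The stray character in front of the '#' on line 1 is consistent with a UTF-8 byte-order mark, which Notepad++ writes for plain "UTF-8" (as opposed to "UTF-8 without BOM") and which the MySQL parser does not accept. A small sketch that strips it; this is an assumption about the cause, not a confirmed diagnosis:

        import codecs

        with open("dump.sql", "rb") as f:
            data = f.read()
        if data.startswith(codecs.BOM_UTF8):
            with open("dump-nobom.sql", "wb") as f:
                f.write(data[len(codecs.BOM_UTF8):])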

    Read the article

  • How can one prevent double encoding of html entities when they are allowed in the input

    - by Bob
    How can I prevent double encoding of html entities, or fix them programmatically? I am using the encode() function from the HTML::Entities perl module to encode HTML entities in user input. The problem here is that we also allow users to input HTML entities directly and these entities end up being double encoded. For example, a user may enter: Stackoverflow & Perl = Awesome&hellip; This ends up being encoded to Stackoverflow &amp; Perl = Awesome&amp;hellip; This renders in the browser as Stackoverflow & Perl = Awesome&hellip; We want this to render as Stackoverflow & Perl = Awesome... Is there a way to prevent this double encoding? Or is there a module or snippet of code that can easily correct these double encoding issues? Any help is greatly appreciated!
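
    One common approach is to decode all entities first and only then re-encode, so that "&amp;hellip;" and "&hellip;" normalize to the same text. A sketch of the idea using Python's html module; HTML::Entities' decode_entities/encode_entities can play the same roles in Perl:

        import html

        user_input = "Stackoverflow & Perl = Awesome&hellip;"
        normalized = html.unescape(user_input)  # '&hellip;' becomes a literal ellipsis, '&' stays '&'
        safe = html.escape(normalized)          # '&' becomes '&amp;'; the ellipsis stays a literal character
        print(safe)                             # Stackoverflow &amp; Perl = Awesome…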

    Read the article

  • UTF-16 strings printed in JSP are output as HTML encoding (&#xxxx)

    - by Ori Osherov
    Hello, when I try to print a UTF-16 string in JSP, specifically Hebrew, it ends up showing up as HTML encoding (&#xxxx). This happens because I print an array of variables into the web page and then parse them. The variables are all UTF-16 strings, but once the servlet prints the variables, they come out translated to HTML encoding. Is there any way to get rid of the encoding? Thanks in advance. Edit, for a bit more background: the JSP that I'm printing is not the entirety of the page. It's used, in a manner I don't quite understand, by a server app which prints the JSP's output into its built-in page. As a result, I can't, for instance, use a <meta> tag, because the <head> will have already been placed somewhere else. This isn't a frame or anything like that; it's just redirected output.

    Read the article

  • Generate an encoding String according to creation order

    - by Tony
    I need to generate an encoding String for each item I insert into the database, for example: x00001 for the first item, x00002 for the second item, x00003 for the third item. The way I chose to do this is by counting the rows. Before I insert the third item, I count the rows in the database; I know there are already 2 rows, so the next encoding ends with 3. But there is a problem: if I delete the second item, the fourth item will not be x00004 but x00003. I could add an additional column to the table to store the next encoding, but I don't know if there are other, better solutions?
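
    One alternative to counting rows is to derive the code from a value that only ever moves forward, such as an auto-increment key or a database sequence, so deleted rows never cause a value to be reused. A sketch using SQLite as a stand-in; the table layout and the x%05d format are illustrative:

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE item (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT, code TEXT)")

        def insert(name):
            cur = db.execute("INSERT INTO item (name) VALUES (?)", (name,))
            code = "x%05d" % cur.lastrowid            # derive the code from the key, not a row count
            db.execute("UPDATE item SET code = ? WHERE id = ?", (code, cur.lastrowid))
            return code

        print(insert("first"))   # x00001
        print(insert("second"))  # x00002
        print(insert("third"))   # x00003
        db.execute("DELETE FROM item WHERE code = 'x00002'")
        print(insert("fourth"))  # x00004, even though a row was deleted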

    Read the article

  • ISO-8859-1 to UTF8 in ASP.NET 2

    - by Gordon Carpenter-Thompson
    We've got a page which posts data to our ASP.NET app in ISO-8859-1:

        <head>
        <META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=iso-8859-1">
        <title>Sample Search Invoker</title>
        </head>
        <body>
        <form name="advancedform" method="post" action="SearchResults.aspx">
        <input class="field" name="SearchTextBox" type="text" />
        <input class="button" name="search" type="submit" value="Search &gt;" />
        </form>

    and in the code behind (SearchResults.aspx.cs):

        System.Collections.Specialized.NameValueCollection postedValues = Request.Form;
        String nextKey;
        for (int i = 0; i < postedValues.AllKeys.Length; i++)
        {
            nextKey = postedValues.AllKeys[i];
            if (nextKey.Substring(0, 2) != "__")
            {
                // Get basic search text
                if (nextKey.EndsWith(XAEConstants.CONTROL_SearchTextBox))
                {
                    // Get search text value
                    String sSentSearchText = postedValues[i];
                    System.Text.Encoding iso88591 = System.Text.Encoding.GetEncoding("iso-8859-1");
                    System.Text.Encoding utf8 = System.Text.Encoding.UTF8;
                    byte[] abInput = iso88591.GetBytes(sSentSearchText);
                    sSentSearchText = utf8.GetString(System.Text.Encoding.Convert(iso88591, utf8, abInput));
                    this.SearchText = sSentSearchText.Replace('<', ' ').Replace('>', ' ');
                    this.PreviousSearchText.Value = this.SearchText;
                }
            }
        }

    When we pass through Merkblätter it gets pulled out of postedValues[i] as Merkbl?tter. The raw string is Merkbl%ufffdtter. Any ideas?

    Read the article

  • Unable to Retrieve Simplified Chinese Characters From Form

    - by Bullines
    I have a page that displays content retrieved from XML with no problems: <?xml version="1.0" encoding="UTF-8"?> <Root> <Fields> <NamePrompt>??</NamePrompt> </Fields> </Root> Page encoding is set to GB18030 and it displays perfectly. However, when I retrieve inputted text from HttpContext.Current.Request.Form that's been entered with double-byte characters, the retrieved string contains unreadable characters. Single-byte characters are fine, obviously. I've tried the following to no avail: byte[] valueBytes = Encoding.UTF8.GetBytes(HttpContext.Current.Request.Form["fullName"]); string value = Encoding.UTF8.GetString(valueBytes); I don't see this problem with other double-byte languages like Japanese or Korean. How can I successfully retrieve double-byte characters from a page that's GB18030 encoded?

    Read the article

  • How can I determine file encodings on Windows / IIS ?

    - by Dylan Beattie
    From the answers to this question it appears there's a file somewhere on our server that's been saved with the wrong encoding. I've seen this happen before - most often when pasting from Word into Visual Studio, when "smart quotes" can interfere with Visual Studio's encoding settings when saving the file. Thing is - the problem I'm having involves 20-30 different script files, include files and so on (hey, that was how we kept it modular back in the day...) and I really don't want to open every one of them in Visual Studio and check the file encodings individually. Is there any way I can analyze a folder tree full of files and spit out a list of each filename along with the text encoding used to save the file? (Or - if encodings aren't clearly specified - work out what encoding Microsoft IIS thinks was used to save the file?)
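
    A sketch in Python that walks a folder tree and reports the encoding implied by each file's byte-order mark, which is the main signal Windows tools use to tell Unicode files from "ANSI" ones. Files without a BOM stay ambiguous, which is part of the original problem:

        import codecs, os, sys

        BOMS = [
            (codecs.BOM_UTF8, "UTF-8 with BOM"),
            (codecs.BOM_UTF16_LE, "UTF-16 LE"),
            (codecs.BOM_UTF16_BE, "UTF-16 BE"),
        ]

        root = sys.argv[1] if len(sys.argv) > 1 else "."
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                with open(path, "rb") as f:
                    head = f.read(4)
                label = next((lbl for bom, lbl in BOMS if head.startswith(bom)),
                             "no BOM (ANSI or UTF-8 without signature)")
                print(path + ": " + label)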

    Read the article

  • using NSString + stringWithContentsOfFile:usedEncoding:error:

    - by Sergey
    Hello, all! I've got a problem using +stringWithContentsOfFile:usedEncoding:error:. My problem is with the usedEncoding:(NSStringEncoding *)enc parameter: I don't know how to pass the pointer to the encoding. When I try, the program fails. For example, a similar method takes encoding:(NSStringEncoding)enc, without a pointer. I want to load a file (encoded as ISOLatin1) into an NSString and then use the NSString as a UTF8String. How can I do that? Thanks.

    Read the article

  • How do I configure encodings (UTF-8) for code executed by Quartz scheduled Jobs in Spring framework

    - by Martin
    I wonder how to configure Quartz scheduled job threads to use the proper encoding. Code which otherwise executes fine within Spring-framework-injected webapps (Java) gets encoding issues when run in threads scheduled by Quartz. Can anyone help me out? All source is compiled using Maven 2 with source and file encodings configured as UTF-8. In the Quartz threads, any string containing characters outside ISO 8859-1 will have encoding errors.

    Example config:

        <bean name="jobDetail" class="org.springframework.scheduling.quartz.JobDetailBean">
            <property name="jobClass" value="example.ExampleJob" />
        </bean>
        <bean id="jobTrigger" class="org.springframework.scheduling.quartz.SimpleTriggerBean">
            <property name="jobDetail" ref="jobDetail" />
            <property name="startDelay" value="1000" />
            <property name="repeatCount" value="0" />
            <property name="repeatInterval" value="1" />
        </bean>
        <bean class="org.springframework.scheduling.quartz.SchedulerFactoryBean">
            <property name="triggers">
                <list>
                    <ref bean="jobTrigger"/>
                </list>
            </property>
        </bean>

    Example implementation:

        public class ExampleJob extends QuartzJobBean {
            private Log log = LogFactory.getLog(ExampleJob.class);
            protected void executeInternal(JobExecutionContext ctx) throws JobExecutionException {
                log.info("ÅÄÖ");
                log.info(Charset.defaultCharset());
            }
        }

    Example output:

        2010-05-20 17:04:38,285 1342 INFO [QuartzScheduler_Worker-9] ExampleJob - vÖvÑvñ
        2010-05-20 17:04:38,286 1343 INFO [QuartzScheduler_Worker-9] ExampleJob - UTF-8

    The same lines of code executed within Spring-injected beans referenced by servlets in the web container output the proper encoding. What makes the Quartz threads encoding-dependent?

    Read the article

  • Is it possible to set two encodings for one html?

    - by Horace Ho
    Is there a way to specify that a certain part of an HTML file is in another encoding? The default encoding for the (generated) HTML is UTF-8. However, some of the data to be inserted into the HTML is in another encoding. It's something like: <div> the normal html in utf-8 </div> <div> <%= raw_data_in_another_encoding %> </div> Is there a way to hint to a browser that it should render the 2nd <div> in another encoding? Thanks
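
    Browsers apply a single encoding to the whole document, so the usual route is to transcode the raw fragment to the page's encoding on the server before it is inserted. A sketch in Python, with Shift_JIS standing in for whatever encoding the raw data actually uses:

        raw_data = b"\x93\xfa\x96\x7b\x8c\xea"   # example bytes in the other encoding (Shift_JIS here)
        fragment = raw_data.decode("shift_jis")  # now an ordinary Unicode string
        page = "<div>the normal html in utf-8</div>\n<div>%s</div>" % fragment
        print(page)                              # serve the assembled page as UTF-8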

    Read the article

  • Detecting Request that uses invalid Encoding using Modsecurity

    - by Ali Ahmad
    I am trying to write a virtual patch using ModSecurity for my hosted web application, using the following rules:

        <Location /index.php>
            SecDefaultAction phase:2,t:none,log,deny

            # Validate parameter names
            SecRule ARGS_NAMES "!^(articleid)$" \
                "msg:'Unknown parameter: %{MATCHED_VAR_NAME}'"

            # Expecting articleid only once
            SecRule &ARGS:articleid "!@eq 1" \
                "msg:'Parameter articleid seen more than once'"

            # Validate parameter articleid
            SecRule ARGS:articleid "!^[0-9]{1,10}$" \
                "msg:'Invalid parameter articleid'"
        </Location>

    The problem is: how can I reject requests that use invalid encoding, as a global WAF configuration, so that this patch cannot be circumvented?

    Read the article

  • Recovering a word file (Select the encoding that makes your document readable)

    - by HOY
    My girlfriend asked me to recover a Word file that represents two months of work :( and is her graduation thesis. When I try to open it, Word shows the "Select the encoding that makes your document readable" screen. I tried two recovery tools, but they didn't work. The file can be downloaded from the link below. http://s3.dosya.tc/server3/bmu4bi/glava.doc.html I kindly request your help. The history of the issue: she said she was copy-pasting from other files while creating this file (she copy-pasted from a PDF too). Two days ago she opened the file on a company PC and worked on it, wrote two pages and saved. The next morning she could not open it. It is possible that an error occurred while saving. The computer she was working on sometimes freezes, and while she was working the file was on a USB stick that she unplugged and plugged back in before continuing to work and then saving.

    Read the article

  • SMTP hacked by spammer using base64 encoding to authenticate

    - by Throlkim
    Over the past day we've detected someone from China using our server to send spam email. It's very likely that he's using a weak username/password to access our SMTP server, but the problem is that he appears to be using base64 encoding to prevent us from finding out which account he's using. Here's an example from the maillog: May 5 05:52:15 195396-app3 smtp_auth: SMTP connect from (null)@193.14.55.59.broad.gz.jx.dynamic.163data.com.cn [59.55.14.193] May 5 05:52:15 195396-app3 smtp_auth: smtp_auth: SMTP user info : logged in from (null)@193.14.55.59.broad.gz.jx.dynamic.163data.com.cn [59.55.14.193] Is there any way to detect which account it is that he's using?
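
    AUTH LOGIN and AUTH PLAIN only base64-encode the credentials, so AUTH strings captured from a fuller SMTP log or a packet capture can be decoded offline to reveal the account. A sketch with placeholder values (the log excerpt above does not include the AUTH exchange):

        import base64

        # AUTH PLAIN carries base64("\0username\0password") in a single string
        plain = base64.b64encode(b"\x00someuser\x00hunter2").decode()   # placeholder credentials
        _, user, password = base64.b64decode(plain).split(b"\x00")
        print("AUTH PLAIN user:", user.decode())

        # AUTH LOGIN sends the username and password as two separate base64 strings
        login_user = base64.b64encode(b"someuser").decode()
        print("AUTH LOGIN user:", base64.b64decode(login_user).decode())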

    Read the article

  • Filename encoding broken after unzip on windows

    - by flammi88
    I zipped a directory on my Linux server. Many files in the directory have German umlauts in their filenames. The filesystem is ext3 and the system locale is set to de_DE.utf8. I used the following command to create the zip file: zip -r somezip.zip somefolder/ I transferred this file via WinSCP to my Windows laptop and unzipped it. The issue: all filenames with German umlauts are broken. On my Linux server the filenames are displayed correctly. I assume that I made a mistake when I created the zip file. Does anyone have any ideas how I can preserve the correct filename encoding when I zip the files with the zip command on Linux?
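
    One common cause is that zip stores the entry names as UTF-8 bytes but the Windows extractor decodes them with its legacy code page. If that is what happened here, the names can be repaired while extracting. A sketch in Python (zipfile decodes unflagged entry names as cp437, so re-encoding to cp437 recovers the original bytes); if the archive already carries the UTF-8 flag, zipfile or 7-Zip will read the names correctly on their own:

        import os
        import zipfile

        with zipfile.ZipFile("somezip.zip") as zf:
            for info in zf.infolist():
                try:
                    fixed = info.filename.encode("cp437").decode("utf-8")
                except UnicodeError:
                    fixed = info.filename            # name was already fine (or flagged as UTF-8)
                target = os.path.join("out", fixed)
                if info.is_dir():
                    os.makedirs(target, exist_ok=True)
                    continue
                os.makedirs(os.path.dirname(target) or ".", exist_ok=True)
                with zf.open(info) as src, open(target, "wb") as dst:
                    dst.write(src.read())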

    Read the article

  • ffmpeg encoding on QuadCore

    - by Gotys
    I am trying to push my server's CPU cores to the max, but with no success. I am encoding 2-pass style and set "-threads" to 128. When running the 2nd pass, the CPU seems to be at 98% usage, but the first pass totally ignores the "-threads" option. I am using libx264. Here is my preset: flags=+loop+mv4 cmp=256 partitions=+parti4x4+parti8x8+partp4x4+partp8x8+partb8x8 me_method=hex subq=7 trellis=1 refs=5 bf=3 flags2=+bpyramid+wpred+mixed_refs+dct8x8 coder=1 me_range=16 g=250 keyint_min=25 sc_threshold=40 i_qfactor=0.71 qmin=10 qmax=51 qdiff=4 Is there any reason why the 1st pass is not utilizing my CPUs? Thank you in advance! This community has always been very kind to me.

    Read the article

  • Apache Web Server character encoding

    - by OBY
    I've recently transferred my webapp from my localhost (LH) to a VPS, and have had Hebrew character-encoding problems since. Whenever I send a request containing a Hebrew character, it results in "?????" being saved to the DB. My LH config was Tomcat 6, MySQL and CentOS 6.2, open to the web. In the VPS environment I'm behind an Apache Web Server, and the rest is much the same (though I haven't done anything to its installation). Please note that I have already had this problem before, on my LH, when the request was sent from IE/Chrome (not FF!). The solution then was to apply a filter on the context and change the char-type to UTF-8. My webapp's content encoding is utf-8, the MySQL server is set to utf8 using charset utf8;, and my CentOS is set to iw_IL.UTF8 using export LANG=iw_IL.UTF8. When I run locale, the bash output seems to be set correctly. Any suggestions?

    Read the article
