Search Results

Search found 5325 results on 213 pages for 'huffman encoding'.

Page 16/213 | < Previous Page | 12 13 14 15 16 17 18 19 20 21 22 23  | Next Page >

  • URL parameters aren't encoded correctly!

    - by Ivan90
    I'm using ASP.NET MVC version 1.0 and I have a problem with a parameter in a URL. My URL looks like http://localhost:2282/Tags/PostList/c# and the route is:

        routes.MapRoute(
            "TagsRoute",
            "Tags/PostList/{tag}",
            new { controller = "Tags", action = "PostList", tag = "" }
        );

    The problem is that the tag parameter isn't encoded, so the # symbol is ignored. I am using an ActionLink, but maybe version 1.0 doesn't encode the parameter directly:

        <%= Html.ActionLink(itemtags.Tags.TagName, "PostList", "Tags",
            new { tag = itemtags.Tags.TagName }, new { style = "color:red;" }) %>

    With this ActionLink only whitespace is encoded correctly; in fact "asp.net mvc" becomes "asp.net%20mvc" and works fine. But "c#" isn't encoded. So I tried Server.UrlEncode, and something does happen: "c#" becomes "c%2523", but that isn't correct either, because the hexadecimal encoding of # is %23. Do you have any solutions? Route constraints? Thanks
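
    The c%2523 result is the classic symptom of double encoding: the already-escaped value got escaped a second time. A minimal Java sketch (not the asker's ASP.NET stack, just an illustration of the mechanism):

        import java.net.URLEncoder;
        import java.nio.charset.StandardCharsets;

        public class DoubleEncodeDemo {
            public static void main(String[] args) {
                String tag = "c#";
                String once = URLEncoder.encode(tag, StandardCharsets.UTF_8);
                String twice = URLEncoder.encode(once, StandardCharsets.UTF_8);
                System.out.println(once);  // c%23   -- the correct escape for '#'
                System.out.println(twice); // c%2523 -- the '%' itself re-escaped to %25
            }
        }

    So if the routing layer encodes the value itself, pre-encoding it with Server.UrlEncode will always yield %2523 instead of %23. Note also that in a URL a raw # starts the fragment, which is why the unencoded tag appears to be ignored.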

    Read the article

  • Problem using AudioRecord with 8-bit encoding in Android

    - by maxsap
    Hello, I have made an application that records from the phone's microphone using AudioRecord and 16-bit encoding, and I am able to play back the recording. For compatibility reasons I need to use 8-bit encoding, but when I try to run the same program with that encoding I keep getting an "Invalid Audio Format" error. My code is:

        int bufferSize = AudioRecord.getMinBufferSize(11025,
                AudioFormat.CHANNEL_CONFIGURATION_MONO,
                AudioFormat.ENCODING_PCM_8BIT);
        AudioRecord recordInstance = new AudioRecord(
                MediaRecorder.AudioSource.MIC, 11025,
                AudioFormat.CHANNEL_CONFIGURATION_MONO,
                AudioFormat.ENCODING_PCM_8BIT, bufferSize);

    Does anyone know what the problem is? According to the documentation, AudioRecord is capable of 8-bit encoding. Thanks in advance, maxsap.
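
    One possible cause (an assumption, not a confirmed diagnosis: not every device supports every sample-rate/format combination) is that getMinBufferSize itself rejects the parameters. A hedged sketch that probes for this before constructing the recorder:

        import android.media.AudioFormat;
        import android.media.AudioRecord;
        import android.media.MediaRecorder;
        import android.util.Log;

        public class RecorderProbe {
            /** Returns a recorder for 8-bit mono @ 11025 Hz, or null if unsupported. */
            public static AudioRecord create8BitRecorder() {
                int bufferSize = AudioRecord.getMinBufferSize(11025,
                        AudioFormat.CHANNEL_CONFIGURATION_MONO,
                        AudioFormat.ENCODING_PCM_8BIT);
                if (bufferSize == AudioRecord.ERROR_BAD_VALUE) {
                    // The hardware rejected the parameter combination outright.
                    Log.e("Recorder", "8-bit mono @ 11025 Hz rejected by this device");
                    return null;
                }
                AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
                        11025, AudioFormat.CHANNEL_CONFIGURATION_MONO,
                        AudioFormat.ENCODING_PCM_8BIT, bufferSize);
                // A recorder that failed to initialize must not be started.
                if (recorder.getState() != AudioRecord.STATE_INITIALIZED) {
                    Log.e("Recorder", "AudioRecord failed to initialize");
                    recorder.release();
                    return null;
                }
                return recorder;
            }
        }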

    Read the article

  • Running SQL*Plus with bash causes wrong encoding

    - by Petr Mensik
    I have a problem running SQL*Plus from bash. Here is my code:

        #!/bin/bash
        #curl http://192.168.168.165:8080/api_test/xsql/f_exp_order_1016.xsql > script.sql
        wget -O script.sql 192.168.168.165:8080/api_test/xsql/f_exp_order_1016.xsql
        set NLS_LANG=_.UTF8
        sqlplus /nolog << ENDL
        connect login/password
        set sqlblanklines on
        start script.sql
        exit
        ENDL

    I download the insert statements from our intranet, put them into an SQL file and run it through SQL*Plus. That part works fine. My problem is that when I save the file script.sql, the encoding goes wrong: all special characters (like íášc) are broken, which causes wrong characters to be inserted into my DB. The encoding of the file is UTF-8, and UTF-8 is also set on the XSQL page on our intranet. So I really don't know where the problem could be. Any advice regarding my script is also welcome; I am a total newbie at Linux scripting :-)

    Read the article

  • Apache gzip with chunked encoding

    - by hoodoos
    I'm experiencing a problem with one of my data source services. According to the HTTP response headers, it runs on Apache-Coyote/1.1 and responds with Transfer-Encoding: chunked. Here is a sample response:

        HTTP/1.1 200 OK
        Server: Apache-Coyote/1.1
        Content-Type: text/xml;charset=utf-8
        Transfer-Encoding: chunked
        Date: Tue, 30 Mar 2010 06:13:52 GMT

    The problem is that when I ask the server for a gzipped response, it often sends an incomplete one. I receive the response and see that the last chunk has arrived, but after ungzipping I find the body is only partial. So my question is: is this a common Apache issue? Maybe one of its mod_deflate plugins or something? Ask questions if you need more info. Thanks.
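
    For reproducing this, a minimal Java probe (a sketch assuming the server honors the gzip request; the URL is a placeholder): a truncated gzip body typically surfaces as an EOFException while draining GZIPInputStream, which distinguishes a short compressed stream from a decompression bug on the client side.

        import java.io.EOFException;
        import java.io.IOException;
        import java.io.InputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.util.zip.GZIPInputStream;

        public class GzipProbe {
            public static void main(String[] args) throws IOException {
                URL url = new URL("http://example.com/service"); // placeholder URL
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setRequestProperty("Accept-Encoding", "gzip");
                try (InputStream in = new GZIPInputStream(conn.getInputStream())) {
                    byte[] buf = new byte[8192];
                    long total = 0;
                    for (int n; (n = in.read(buf)) != -1; ) {
                        total += n;
                    }
                    System.out.println("Decompressed " + total + " bytes cleanly");
                } catch (EOFException e) {
                    // The compressed stream ended before its declared end marker.
                    System.out.println("Truncated gzip stream: " + e.getMessage());
                }
            }
        }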

    Read the article

  • Firefox or Chrome - how to force a specific encoding for a page

    - by Mike
    Hi, I am accessing an intranet site built by amateurs, constructed to be "best viewed in IE" (arghhh!). The site is in Portuguese. All accented letters are jammed and do not appear as they should. As I create sites myself, I know that the best way to build a site in Portuguese and other Latin languages is to use charset=iso-8859-1 in the page's HTML encoding; this ensures cross-browser and cross-platform compatibility. But I have no way to change this, because I am a visitor on this site, and I don't know what encoding they are using. What I'm asking is: is there a way to force my browser (Chrome or Firefox) to reinterpret the page using the correct charset? I need this to work on Ubuntu.

    Read the article

  • Normalize Accept-Encoding via HAProxy for optimized Squid hit rate

    - by Matt Beckman
    Our website infrastructure uses HAProxy for load balancing and a Squid cluster for caching, with application data on an IIS cluster. We have HAProxy balance by URI to optimize the Squid hit rate, but Squid holds a separate copy of each page for every distinct Accept-Encoding header the browsers pass along: IE sends "gzip, deflate" (with a space), so it gets a different cached copy than Firefox ("gzip,deflate") or Chrome ("gzip,deflate,sdch"). We want to normalize the Accept-Encoding header, and I think the best place to do so would be HAProxy. I'd appreciate it if someone could offer some ideas on how to accomplish this without breaking support for clients without gzip or deflate support.
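
    A hedged sketch of one way to do this in HAProxy (directive syntax from the 1.3/1.4 era this setup dates from; newer releases express the same idea with http-request set-header, and the frontend/backend names here are made up, so treat this as an outline to check against your version's documentation):

        frontend www
            bind :80
            # If the client advertises gzip at all, collapse the header to one
            # canonical value so Squid keeps a single compressed variant...
            acl accepts_gzip hdr_sub(Accept-Encoding) gzip
            reqirep ^Accept-Encoding:.* Accept-Encoding:\ gzip if accepts_gzip
            # ...while clients that never mention gzip keep their original
            # header and continue to receive the uncompressed copy.
            default_backend squid_cluster

    Normalizing everyone who advertises gzip down to plain "gzip" collapses the common browsers onto a single cached variant per URI, while non-gzip clients fall through untouched.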

    Read the article

  • How to read special characters from stdin in Python?

    - by erickrf
    I'm having trouble reading special characters from stdin. Here are my attempts:

        >>> import os
        >>> dir = raw_input("Dir name: ")
        Dir name: c:/á
        >>> os.chdir(dir)
        WindowsError: [Error 2] The system cannot find the file specified: 'c:/\x81\xe1'

    OK, so I tried to get the default system encoding and decode the string from stdin with it:

        >>> import locale
        >>> encoding = locale.getdefaultlocale()[1]
        >>> print encoding
        cp1252
        >>> unicode(dir, encoding)
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
          File "c:\Python26\lib\encodings\cp1252.py", line 15, in decode
            return codecs.charmap_decode(input,errors,decoding_table)
        UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 3: character maps to <undefined>

    Now I don't know how to solve this. Nor can I understand why there is a problem at all when I try to access a directory whose name is written in the system default encoding itself.
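
    The failing byte is a hint: 0x81 is unmapped in cp1252, the ANSI code page the locale call reports, while the Windows console typically delivers input in a different OEM code page such as cp850 (an assumption about this particular setup, not a confirmed diagnosis). A small Java sketch making the mismatch visible by decoding the same bytes with both code pages:

        import java.nio.charset.Charset;

        public class CodePageDemo {
            public static void main(String[] args) {
                byte[] consoleBytes = { (byte) 0x81, (byte) 0xE1 };
                // IBM850 is the OEM code page many western Windows consoles use.
                System.out.println(new String(consoleBytes, Charset.forName("IBM850")));
                // windows-1252 leaves 0x81 undefined; it decodes to U+FFFD here.
                System.out.println(new String(consoleBytes, Charset.forName("windows-1252")));
            }
        }

    Decoding console input with the console's own code page, rather than the ANSI one, is therefore the direction to investigate.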

    Read the article

  • Job queueing in Toast Titanium 10?

    - by moonslug
    I have a bunch of .MP4 video files I'm burning to DVD-Video using Toast Titanium 10 on my MacBook Pro. Right now I'm doing them one at a time, and because my computer is several years old, encoding the video for a single DVD takes approximately six hours. I've discovered that I can apparently encode the video directly to a .toast format; however, I have yet to figure out whether I can burn those directly to DVD. Also, I have quite a bit of video left to burn, and even that method would require me to intervene manually to start a new encoding or burn job every six hours. Would it be possible to somehow queue up multiple DVD-Video encoding jobs at once and have the computer work through them automatically? The actual writing to DVD disc doesn't take nearly as long, and if all my video were encoded to begin with, the job would go a lot quicker. Maybe this can be accomplished with a different piece of software?

    Read the article

  • How to strip out the 0x0A special character from a UTF-8 file using C# and keep the file as UTF-8?

    - by user1013388
    The following is a line from a UTF-8 file from which I am trying to remove the special character 0x0A, which shows up as a black diamond with a question mark in it:

        2464577 ????? True s6620178 Unspecified <1?1009-672

    The file is generated when SSIS reads a SQL table and then writes it out using a flat file manager set to code page 65001. When I open the file in Notepad++, the character displays as 0x0A. I'm looking for some C# code that will definitely strip that character out and replace it with either nothing or a blank space. Here's what I have tried:

        string fileLocation = "c:\\MyFile.txt";
        var content = string.Empty;
        using (StreamReader reader = new System.IO.StreamReader(fileLocation))
        {
            content = reader.ReadToEnd();
            reader.Close();
        }
        content = content.Replace('\u00A0', ' ');
        //also tried: content.Replace((char)0X0A, ' ');
        //also tried: content.Replace((char)0X0A, '');
        //also tried: content.Replace((char)0X0A, (char)'\0');

        Encoding encoding = Encoding.UTF8;
        using (FileStream stream = new FileStream(fileLocation, FileMode.Create))
        {
            using (BinaryWriter writer = new BinaryWriter(stream, encoding))
            {
                writer.Write(encoding.GetPreamble()); //This is for writing the BOM
                writer.Write(content);
            }
        }

    I also tried this code to get the actual string value:

        byte[] bytes = { 0x0A };
        string text = Encoding.UTF8.GetString(bytes);

    It comes back as "\n". So in the code above I also tried replacing "\n" with " ", in both double quotes and single quotes, but still no change. At this point I'm out of ideas. Does anyone have any advice? Thanks.
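
    One detail visible in the attempts above: strings are immutable, so Replace returns a new string, and the "also tried" calls that discard the return value change nothing. A minimal Java sketch (not the asker's C#, and assuming the character really is a plain LF) of the same round trip with the result reassigned:

        import java.nio.charset.StandardCharsets;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;

        public class StripLf {
            public static void main(String[] args) throws Exception {
                Path file = Paths.get("c:/MyFile.txt"); // path taken from the question
                String content = new String(Files.readAllBytes(file), StandardCharsets.UTF_8);
                // 0x0A is simply the line-feed character; keep the replacement result.
                String cleaned = content.replace("\n", " ");
                Files.write(file, cleaned.getBytes(StandardCharsets.UTF_8));
            }
        }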

    Read the article

  • OpenConnect for Cisco VPN doesn't recognize private key file - asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag

    - by Alexander Skwar
    I'm trying to make my Synology DS212 NAS box also act as a VPN gateway to my company's VPN. Sadly, they only use Cisco ASA and, to complicate things even further, we have to use personal certificates (which is of course more secure, but more complicated to get going...). So I compiled OpenConnect v4.06 from http://www.infradead.org/openconnect/. As a very basic test, I tried to build a connection by manually invoking openconnect, passing along the key and cert files, like so:

        /lib/ld-linux.so.3 --library-path /opt/lib \
            /opt/openconnect/sbin/openconnect \
            --certificate=$VPN_CFG/alexander.crt \
            --sslkey=$VPN_CFG/alexander.key \
            --cafile=$VPN_CFG/Company_VPN_CA.crt \
            --user=alexander --verbose <ip>:443

    It fails :(

        Attempting to connect to <ip>:443
        Using certificate file $VPN_CFG/alexander.crt
        Using client certificate '/[email protected]/OU=Company VPN'
        5919:error:0D0680A8:asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag:tasn_dec.c:1315:
        Loading private key failed (see above errors)
        Loading certificate failed. Aborting.
        Failed to open HTTPS connection to <ip>
        Failed to obtain WebVPN cookie

    When I run the same command with the same cert/key files on an Ubuntu 12.04 box, it works:

        openconnect \
            --certificate=$VPN_CFG/alexander.crt \
            --sslkey=$VPN_CFG/alexander.key \
            --cafile=$VPN_CFG/Company_VPN_CA.crt \
            --user=alexander --verbose <ip>:443

        Attempting to connect to <ip>:443
        Using certificate file $VPN_CFG/alexander.crt
        Extra cert from cafile: '/CN=Company AG VPN CA/O=Company AG/L=Zurich/ST=ZH/C=CH'
        SSL negotiation with <ip>
        Server certificate verify failed: self signed certificate
        Certificate from VPN server "<ip>" failed verification.
        Reason: self signed certificate
        Enter 'yes' to accept, 'no' to abort; anything else to view: yes
        Connected to HTTPS on <ip>
        GET https://<ip>/
        […]

    Well... the error on the NAS is this:

        5919:error:0D0680A8:asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag:tasn_dec.c:1315:

    Any ideas what's causing this? On the Syno I use OpenConnect 4.06; on Ubuntu I also compiled OpenConnect 4.06 and installed it to a custom location. Thanks, Alexander

    Read the article

  • Java - Need help with binary/code string manipulation

    - by ShrimpCrackers
    For a project, I have to convert a binary string into (an array of) bytes and write it out to a file in binary form. Say I have a sentence converted into a code string using a Huffman encoding. For example, if the sentence was "hello" and the codes were

        h = 00, e = 01, l = 10, o = 11

    then the string representation would be "0001101011". How would I convert that into bytes? (If that question doesn't make sense, it's because I know little about bits, bytes, bitwise shifting, and everything else that has to do with manipulating 1s and 0s.)
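
    A minimal sketch of one common approach: walk the bit string and set the corresponding bit in the output array MSB-first, letting the final byte be zero-padded. Note that a decoder needs to know the true bit count (or symbol count), since the pad bits could otherwise decode as spurious Huffman codes.

        public class BitPacker {
            /** Packs a string of '0'/'1' characters into bytes, MSB-first. */
            static byte[] pack(String bits) {
                byte[] out = new byte[(bits.length() + 7) / 8]; // round up
                for (int i = 0; i < bits.length(); i++) {
                    if (bits.charAt(i) == '1') {
                        out[i / 8] |= 1 << (7 - (i % 8));
                    }
                }
                return out;
            }

            public static void main(String[] args) {
                byte[] packed = pack("0001101011"); // "hello" from the example
                for (byte b : packed) {
                    String s = Integer.toBinaryString(b & 0xFF);
                    System.out.println(String.format("%8s", s).replace(' ', '0'));
                }
                // prints 00011010 then 11000000 (the last 6 bits are padding)
            }
        }

    The resulting array can then be written with a FileOutputStream, which deals in raw bytes rather than characters.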

    Read the article

  • EGit (Eclipse) wrongly interpreting file names with non-ASCII characters?

    - by Stefan Seidel
    I recently switched to using a Git repository within Eclipse (Juno SR2), using EGit. In our project, some file names contain umlauts and other special non-ASCII characters. On the command line, git status shows no changes and a clean workspace, but Eclipse marks those files as changed. How can I make Eclipse/EGit use the correct encoding for file names? I tried setting LANG, file.encoding and the git config svn.pathnameencoding, all to no avail. And again, on the command line there are no such errors.

    Read the article

  • Base64 encoding breaks S/MIME-encrypted email data

    - by Streuner
    I'm using MIME::Lite to create and send e-mails. Now I need to add support for S/MIME encryption, and I finally managed to encrypt my e-mail (the only Perl lib I could install seems broken, so I'm using a system call to openssl smime). But when I try to create a MIME object from the result, the e-mail is broken as soon as I set the Content-Transfer-Encoding to base64. To make it even more curious, it only happens if I set it via $myMessage->attr; if I use the constructor MIME::Lite->new, everything is fine, apart from a little warning which I suppress with MIME::Lite->quiet(1). Is it a bug or my fault? Here are the two ways I create the MIME object.

    Setting the Content-Transfer-Encoding via the constructor and suppressing the warning:

        MIME::Lite->quiet(1);
        my $msgEncr = MIME::Lite->new(
            From    => '[email protected]',
            To      => '[email protected]',
            Subject => 'SMIME Test',
            Data    => $myEncryptedMessage,
            'Content-Transfer-Encoding' => 'base64',
        );
        $msgEncr->attr('Content-Disposition'          => 'attachment');
        $msgEncr->attr('Content-Disposition.filename' => 'smime.p7m');
        $msgEncr->attr('Content-Type'                 => 'application/x-pkcs7-mime');
        $msgEncr->attr('Content-Type.smime-type'      => 'enveloped-data');
        $msgEncr->attr('Content-Type.name'            => 'smime.p7m');
        $msgEncr->send;
        MIME::Lite->quiet(0);

    Setting the Content-Transfer-Encoding via $myMessage->attr, which breaks the encrypted data but causes no warning:

        my $msgEncr = MIME::Lite->new(
            From    => '[email protected]',
            To      => '[email protected]',
            Subject => 'SMIME Test',
            Data    => $myEncryptedMessage,
        );
        $msgEncr->attr('Content-Disposition'          => 'attachment');
        $msgEncr->attr('Content-Disposition.filename' => 'smime.p7m');
        $msgEncr->attr('Content-Type'                 => 'application/x-pkcs7-mime');
        $msgEncr->attr('Content-Type.smime-type'      => 'enveloped-data');
        $msgEncr->attr('Content-Type.name'            => 'smime.p7m');
        $msgEncr->attr('Content-Transfer-Encoding'    => 'base64');
        $msgEncr->send;

    I just don't get why my message is broken when I'm using the attribute setter. Thanks in advance for your help! Besides that, I'm unable to attach any file to this e-mail without breaking the encrypted message again.

    Read the article

  • What browser is sending a user agent beginning Mozilla/5.0+ and translates & into &amp;?

    - by Patrick
    We've got a website which has been running for a few years now. One of our customers has just started having an intermittent problem. Looking at our IIS 6.0 logs, the service works correctly when their user agent begins "Mozilla/4.0+" but fails when it begins "Mozilla/5.0+". This particular customer only started having the problem on Wednesday. Does anyone know of a browser or upgrade that changes the 4.0 to 5.0? The actual problem is that an "&" in a URL parameter list is being encoded as "&amp;". Has anyone seen anything similar? We have other users sending from browsers with the 5.0+ user agent without trouble. Sorry about the tags, but I don't have the rep to create new ones. Thanks in advance, Patrick

    Edit: Hi Viper_sb, it is most probably a custom script (I'm primarily a C++ developer, so I don't really understand). Our site services requests from other customer-developed sites; this one was done in JavaScript as far as I know. We're actually getting a variety of user agents (presumably depending on which of our customer's customers is accessing the service). Here are a few:

        Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+fr;+rv:1.9.1.11)+Gecko/20100701+Firefox/3.5.11
        Mozilla/5.0+(Windows;+U;+Windows+NT+5.1;+en-US)+AppleWebKit/533.4+(KHTML,+like+Gecko)+Chrome/5.0.375.126+Safari/533.4 302 0 0
        Mozilla/5.0+(Macintosh;+U;+PPC+Mac+OS+X;+fr)+AppleWebKit/523.12+(KHTML,+like+Gecko)+Version/3.0.4+Safari/523.12
        Mozilla/5.0+(Windows;+U;+Windows+NT+5.1;+en-US;+rv:1.9.2.8)+Gecko/20100722+Firefox/3.6.8
        Mozilla/5.0+(Windows;+U;+Windows+NT+5.1;+fr;+rv:1.9.2.8)+Gecko/20100722+Firefox/3.6.8+(.NET+CLR+3.5.30729)

    Read the article

  • JavaScript and PHP: how should I en/decode my data?

    - by Ron
    Hello everyone. I would like to know what encoding I should use for my data, and why. First of all, I use the GET method because it is a search engine inside a website. Second, I use an RTL language (Hebrew). Third, and this is basically why I'm asking: Firefox and Safari (as I understand it) encode and decode URLs automatically, so if I encode a URL, in Firefox I will see it decoded, which is good; but if I copy-paste the URL into the address bar and then enter the site, Firefox encodes the unencoded URL to UTF (I think). Anyway, what encoding/decoding should I use, and how can I overcome Firefox's automatic en/decoding?

    Read the article

  • How can I install a new locale on Ubuntu?

    - by UniMouS
    Running

        $ locale -a

    I get output like this:

        C
        C.UTF-8
        en_AG
        en_AG.utf8
        en_AU.utf8
        en_BW.utf8
        en_CA.utf8
        en_DK.utf8
        en_GB.utf8
        en_HK.utf8
        en_IE.utf8
        en_IN
        en_IN.utf8
        en_NG
        en_NG.utf8
        en_NZ.utf8
        en_PH.utf8
        en_SG.utf8
        en_US.utf8
        en_ZA.utf8
        en_ZM
        en_ZM.utf8
        en_ZW.utf8
        POSIX
        zh_CN.utf8
        zh_SG.utf8

    How can I install a new locale, for example en_US.ISO-8859-1? After this operation, I would like en_US.ISO-8859-1 to be listed in the locale -a output.

    Read the article

  • How does it matter whether a character is 8-bit, 16-bit, or 32-bit?

    - by vin
    Well, I am reading Programming Windows with MFC, and I came across Unicode and ASCII characters. I understand the point of using Unicode over ASCII, but what I do not get is how and why it is important to use 8-bit/16-bit/32-bit characters. What good does it do for the system? How does the operating system's processing differ for different character widths? My question is: what does it mean for a character to be an x-bit character?
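
    A character's bit width determines how many code points fit in one storage unit, and anything that doesn't fit must be split across several units. A small Java illustration (Java's char is a 16-bit UTF-16 code unit): a character outside the Basic Multilingual Plane occupies two chars, which is exactly where the 8/16/32-bit distinction becomes visible.

        import java.nio.charset.StandardCharsets;

        public class CharWidthDemo {
            public static void main(String[] args) {
                // 'A' (ASCII), 'é' (Latin-1), and an emoji (outside the BMP).
                String s = "A\u00E9\uD83D\uDE00";
                System.out.println(s.length());                      // 4 UTF-16 units
                System.out.println(s.codePointCount(0, s.length())); // 3 characters
                System.out.println(
                        s.getBytes(StandardCharsets.UTF_8).length);  // 7 bytes in UTF-8
            }
        }

    So "an x-bit character" really means "an x-bit code unit": 8-bit units (UTF-8) need up to four per character, 16-bit units (UTF-16) up to two, and 32-bit units (UTF-32) always exactly one.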

    Read the article

  • Was API hooking done as needed for Stuxnet to work? I don't think so

    - by The Kaykay
    Caveat: I am a political science student and I have tried my level best to understand the technicalities; if I still sound naive, please overlook that.

    In the Symantec report on Stuxnet, the authors say that once the worm infects a 32-bit Windows computer with a WinCC setup on it, Stuxnet does many things, and that it specifically hooks the function CreateFileA(). This function is the route the worm uses to actually infect the .s7p project files that are used to program the PLCs: when the PLC programmer opens a .s7p file, control transfers to the hooked function CreateFileA_hook() instead of CreateFileA(). Once Stuxnet gains control, it covertly inserts code blocks into the PLC without the programmer's knowledge and hides them from his view.

    However, it should be noted that there is also one more function, CreateFileW(), which does the same task as CreateFileA(), but the two work on different character sets: CreateFileA works with the ASCII character set and CreateFileW works with wide characters, i.e. the Unicode character set. Farsi (the language of the Iranians) is a language that needs the Unicode character set, not ASCII characters. I'm assuming that the developers of any famous commercial software (for example WinCC) that will be sold in many countries will take localization and/or internationalization into consideration during development in order to make the product fail-safe, i.e. the software developers would use Unicode while compiling their code and not just ASCII. Thus, I think that CreateFileW() would have been invoked on a WinCC system in Iran instead of CreateFileA(). Do you agree?

    My question is: if Stuxnet hooked only the function CreateFileA(), then based on the above assumption is there a significant chance that it did not work at all? I think my doubt will be clarified if my assumption is proved wrong, or the Symantec report is proved incorrect. Please help me clarify this doubt.

    Note: I had posted this question on the general Stack Exchange website and did not get the responses I was looking for, so I'm posting it here.

    Read the article

  • Is the carriage-return char considered obsolete

    - by Evan Plaice
    I wrote an open-source library that parses structured data, but I intentionally left out carriage-return detection because I didn't see the point: it adds complexity and overhead for little or no benefit. To my surprise, a user submitted a bug where the parser wasn't working, and I discovered the cause was that the data used CR line endings, as opposed to LF or CRLF. Hasn't OS X been using LF-style line endings since switching over to a Unix-based platform? I know there are applications like Notepad++ where line endings can be changed to use CR explicitly, but I don't see why anybody would want to. Is it safe to exclude support for the statistically insignificant percentage of users who decide (for whatever reason) to use the old Mac OS-style line endings?
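
    For what it's worth, supporting all three conventions can cost very little. A minimal Java sketch of a line splitter that treats CRLF, lone LF, and lone CR alike (ordering the alternation so CRLF is consumed as a single terminator):

        import java.util.Arrays;

        public class LineSplitDemo {
            public static void main(String[] args) {
                String data = "one\r\ntwo\nthree\rfour";
                // \r\n must come first so it isn't split into two empty lines.
                String[] lines = data.split("\r\n|\r|\n");
                System.out.println(Arrays.toString(lines)); // [one, two, three, four]
            }
        }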

    Read the article

  • convert video file to .ogg

    - by Levan
    I've been having trouble with this because I'm new to Linux. I would like to convert different video formats to ogv. I found some terminal commands like this:

        ffmpeg -i input.avi -acodec libvorbis -ac 1 -b 768k output.ogg

    The problem with these commands is that they are intended to change the bit rate, fps, or even resolution. I would like to just change the file format without changing anything else about the video. I looked at the man pages for ffmpeg and found some useful info, but I don't know how the command-line options should be spaced. Are there any easy ways to do this? In addition, is there a command to cap the bit rate so that it doesn't go over a certain rate?

    Read the article

  • What issues lead people to use Japanese-specific encodings rather than Unicode?

    - by Nicolas Raoul
    At work I come across a lot of Japanese text files in Shift-JIS and other encodings. It causes many mojibake (unreadable character) problems for all computer users. Unicode was intended to solve this sort of problem by defining a single character set for all languages, and the UTF-8 serialization is recommended for use on the Internet. So why doesn't everybody switch from Japanese-specific encodings to UTF-8? What issues with, or disadvantages of, UTF-8 are holding people back? EDIT: The W3C lists some known problems with Unicode; could this be a reason too?

    Read the article

  • Why do we need to put N before strings in Microsoft SQL Server?

    - by user61752
    I'm learning T-SQL. From the examples I've seen, to insert text into a varchar() column I can just write the string directly, but for nvarchar() columns every example prefixes the string with the letter N. I tried the following query on a table that has nvarchar() columns, and it works fine, so the N prefix appears not to be required:

        insert into [TableName] values ('Hello', 'World')

    Why are the strings prefixed with N in every example I've seen? What are the pros and cons of using this prefix?
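
    The difference only shows up once a literal contains characters outside the database's default code page: an un-prefixed literal is varchar and gets squeezed through that code page before reaching the nvarchar column. A short T-SQL illustration (assuming a typical non-UTF-8 default collation such as Latin1_General, where the first column comes back as ?):

        -- N'...' keeps the literal as Unicode; '...' converts it via the
        -- database's default code page first, losing unmappable characters.
        SELECT 'Ω' AS without_n, N'Ω' AS with_n;

    Plain ASCII like 'Hello' survives either way, which is why the insert above appears to work without the prefix.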

    Read the article
