Search Results

Search found 2224 results on 89 pages for 'charset'.

Page 2 of 89

  • Choosing a W3C valid DOCTYPE and charset combination?

    - by George Carter
    I have a homepage that begins with the following:

        <DOCTYPE html>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8">

    My choice of the DOCTYPE "html" is based on a recommendation for HTML pages using jQuery. My choice of charset=utf-8 is based on a recommendation to make my pages readable in most browsers. But these choices may be wrong. When I run this page through the W3C HTML validator, I get the messages you see below. Is there any way I can eliminate the 2 errors?

        ! Using experimental feature: HTML5 Conformance Checker.
        The validator checked your document with an experimental feature: HTML5 Conformance
        Checker. This feature has been made available for your convenience, but be aware
        that it may be unreliable, or not perfectly up to date with the latest development
        of some cutting-edge technologies. If you find any issue with this feature, please
        report them. Thank you.

        Validation Output: 2 Errors
        1. Error Line 18, Column 70: Changing character encoding utf-8 and reparsing.
           …ntent-Type" content="text/html; charset=utf-8">
        2. Error Line 18, Column 70: Changing encoding at this point would need
           non-streamable behavior.
           …ntent-Type" content="text/html; charset=utf-8">
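
    A hedged guess rather than a confirmed diagnosis: the doctype above is missing its "!", and HTML5 requires the encoding declaration to appear within the first 1024 bytes of the page, which can trigger exactly this kind of reparse complaint. A minimal valid head would look like this:

        <!DOCTYPE html>
        <html lang="en">
        <head>
          <meta charset="utf-8">
          <title>Example</title>
        </head>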

    Read the article

  • How to set Content-Type header charset in OpenRasta

    - by Sergey Mirvoda
    When I return my object as JSON via JsonDataContractCodec, OpenRasta sets the Content-Type header to application/json but ignores the charset part of the content type. When I use Chrome, it sends a GET request with the following header:

        Accept-Charset: windows-1251,utf-8;q=0.7,*;q=0.3

    and all my UTF-8-encoded JSON objects come through garbled. I tried to override OperationResult with no luck: OpenRasta overwrites my header with the codec's one.

    Read the article

  • Apache returns 304, I want it to ignore anything from client and send the page

    - by Ayman
    I am using Apache HTTPD 2.2 on Windows. mod_expires is commented out, gzip is on, and most other settings are left at their defaults. I made some changes to my .js files. My client gets one 304 response for one of the .js files and never gets the rest. How can I force Apache to, in effect, flush everything and send all the new files to the client?

    The main HTML file includes these scripts in its head section:

        <script src="js/jquery-1.7.1.min.js" type="text/javascript"></script>
        <script src="js/jquery-ui-1.8.17.custom.min.js" type="text/javascript"></script>
        <script src="js/trex.utils.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.core.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.codes.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.emv.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.b24xtokens.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.iso.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.span2.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.amex.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.abi.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.barclays.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.bnet.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.visa.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.atm.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.apacs.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.pstm.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.stm.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.thales.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.fps-saf.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.fps-iso.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.app.js" type="text/javascript" charset="utf-8"></script>

    The Apache access log has the following:

        [07/Jul/2013:16:50:40 +0300] "GET /trex/index.html HTTP/1.1" 200 2033 "-"
        [07/Jul/2013:16:50:40 +0300] "GET /trex/js/trex.fps-iso.js HTTP/1.1" 304
        [08/Jul/2013:07:54:35 +0300] "GET /trex/index.html HTTP/1.1" 304 - "-"
        [08/Jul/2013:07:54:35 +0300] "GET /trex/js/trex.iso.js HTTP/1.1" 200 12417
        [08/Jul/2013:07:54:35 +0300] "GET /trex/js/trex.amex.js HTTP/1.1" 200 6683
        [08/Jul/2013:07:54:35 +0300] "GET /trex/js/trex.fps-saf.js HTTP/1.1" 200 2925
        [08/Jul/2013:07:54:35 +0300] "GET /trex/js/trex.fps-iso.js HTTP/1.1" 304

    Chrome's request details are as below. This file is OK, latest:

        Request URL: http://localhost/trex/js/trex.iso.js
        Request Method: GET
        Status Code: 200 OK (from cache)

    This file is OK, latest:

        Request URL: http://localhost/trex/js/trex.amex.js
        Request Method: GET
        Status Code: 200 OK (from cache)

    This one is also OK:

        Request URL: http://localhost/trex/js/trex.fps-iso.js
        Request Method: GET
        Status Code: 200 OK (from cache)

    The rest of the scripts all show 200 OK (from cache).
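
    A sketch of one way to rule out client caching while developing, using stock mod_headers directives (the file pattern is illustrative, and this is a development setting rather than a production recommendation):

        # Requires mod_headers to be enabled.
        <FilesMatch "\.js$">
            Header set Cache-Control "no-cache, must-revalidate"
        </FilesMatch>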

    Read the article

  • Parse and charset: why my script doesn't work

    - by Rebol Tutorial
    I want to extract the attribute1 and attribute3 values only. I don't understand why charset doesn't seem to work in my case for "skipping" any other attributes (attribute3 is not extracted as I would like):

        content: {<tag attribute1="valueattribute1" attribute2="valueattribute2" attribute3="valueattribute3"> </tag>
        <tag attribute2="valueattribute21" attribute1="valueattribute11" > </tag>
        }
        attribute1: [{attribute1="} copy valueattribute1 to {"} thru {"}]
        attribute3: [{attribute3="} copy valueattribute3 to {"} thru {"}]
        spacer: charset reduce [tab newline #" "]
        letter: complement spacer
        to-space: [some letter | end]
        attributes-rule: [
            (valueattribute1: none valueattribute3: none)
            [attribute1 | none] to-space [attribute3 | none]
            (print valueattribute1 print valueattribute3)
            |
            [attribute3 | none] to-space [attribute1 | none]
            (print valueattribute3 print valueattribute1
             valueattribute1: none valueattribute3: none)
            |
            none
        ]
        rule: [any [to {<tag } thru {<tag } attributes-rule {>} to {</tag>} thru {</tag>}] to end]
        parse content rule

    The output is:

        >> parse content rule
        valueattribute1
        none
        == true

    Read the article

  • How to properly backup mediawiki database (mysql) without messing up the data?

    - by Toto
    I want to back up a MediaWiki database stored in a MySQL 5.1.36 server using mysqldump. Most of the wiki articles are written in Spanish and I don't want to mess them up by creating the dump with the wrong character set.

        mysql> status
        --------------
        ...
        Current database:   wikidb
        Current user:       root@localhost
        ...
        Server version:     5.1.36-community-log MySQL Community Server (GPL)
        ...
        Server characterset:    latin1
        Db     characterset:    utf8
        Client characterset:    latin1
        Conn.  characterset:    latin1
        ...

    Using the following command:

        mysql> show create table text;

    I see that the table's create statement sets the charset to binary:

        CREATE TABLE `text` (
          `old_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
          `old_text` mediumblob NOT NULL,
          `old_flags` tinyblob NOT NULL,
          PRIMARY KEY (`old_id`)
        ) ENGINE=InnoDB AUTO_INCREMENT=317 DEFAULT CHARSET=binary
          MAX_ROWS=10000000 AVG_ROW_LENGTH=10240

    How should I use mysqldump to properly generate a backup for that database?
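
    A sketch of the kind of invocation often suggested for MediaWiki dumps, hedged: verify it against your own schema first. Because the text table stores blobs with CHARSET=binary, dumping with --default-character-set=binary avoids any charset conversion of the article text on the way out:

        mysqldump --default-character-set=binary -u root -p wikidb > wikidb-backup.sql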

    Read the article

  • Detect remote charset in PHP

    - by yallaa
    Hello, I would like to determine a remote page's encoding by detecting the Content-Type meta tag, if present:

        <meta http-equiv="Content-Type" content="text/html; charset=XXXXX" />

    I retrieve the remote page and try to use a regex to find the setting. I am still learning, hence the problem below. Here is what I have:

        $EncStart = 'charset=';
        $EncEnd = '" \/\>';
        preg_match("/$EncStart(.*)$EncEnd/s", $RemoteContent, $RemoteEncoding);
        echo $RemoteEncoding[1];

    The above does indeed echo the name of the encoding, but it does not know where to stop, so in my test it prints the rest of the line and then most of the rest of the remote page. For example, when testing a remote Russian page it printed:

        windows-1251" / rest of page ....

    This means that $EncStart was okay, but the $EncEnd part of the regex failed to stop the matching. This meta header usually ends in one of three ways after the name of the encoding: "> or "/> or " />. I do not know whether this is usable to satisfy the end of the matching, and if so, how to escape it. I played with different ways of doing it but none worked. Thank you in advance for lending a hand.
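
    A hedged sketch of one way to make the match stop at the value itself rather than hunting for the closing markup: capture only the run of characters a charset name can contain, and let the pattern end there ($RemoteContent is assumed to already hold the fetched page):

        // Match charset=VALUE, with or without a surrounding quote.
        if (preg_match('/charset=["\']?([A-Za-z0-9_-]+)/i', $RemoteContent, $m)) {
            echo $m[1];   // e.g. windows-1251
        }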

    Read the article

  • Firefox 15.0 ignores meta charset="uft-8"

    - by flapjack
    I have this simple HTML:

        <!Doctype html>
        <head>
        <title>Uft-8</title>
        <meta charset="uft-8">
        <style type="text/css">
        .tr_deco{
            background-color:pink;
            border:1px solid red;
        }
        </style>
        </head>
        <body>
        <a class="new_krud_slider" href="">make new</a>
        </body>
        </html>

    When I try out the code in Firefox 15, I get this Firebug error:

        An unsupported character encoding was declared for the HTML document using a
        meta tag. The declaration was ignored.

    My Firebug version is 1.7.3. What could be causing this error?
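
    A hedged observation rather than a guaranteed fix: "uft-8" is not a registered encoding name, which is exactly what the warning reports. The intended declaration is presumably:

        <meta charset="utf-8">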

    Read the article

  • Apache Web Server character encoding

    - by OBY
    I've recently transferred my webapp from my localhost to a VPS, and have had Hebrew character-encoding problems since. Whenever I send a request containing a Hebrew character, it results in "?????" saved to the DB. My localhost setup was Tomcat 6, MySQL, and CentOS 6.2, open to the web. In the VPS environment I'm behind an Apache web server, and the rest is much the same (though I haven't changed anything in its installation). Please note that I have had this problem before, on my localhost, when the request was sent from IE/Chrome (not Firefox!). The solution then was to apply a filter on the context that changes the character encoding to UTF-8. My webapp's content charset is UTF-8, the MySQL server is set to utf8 (using charset utf8;), and my CentOS locale is set to iw_IL.UTF8 (using export LANG=iw_IL.UTF8). When I run locale, the bash output looks correct. Any suggestions?
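
    One thing worth checking, offered as an assumption rather than a diagnosis: Tomcat only decodes GET-parameter bytes as UTF-8 if its connector is told to, and putting Apache in front does not change that. The URIEncoding attribute is the relevant piece here; the other attributes are illustrative:

        <!-- server.xml (Tomcat 6) -->
        <Connector port="8080" protocol="HTTP/1.1" URIEncoding="UTF-8" />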

    Read the article

  • Charset and POST request

    - by jriff
    Hi all! I have a Rails 2.3.5 application that is working fine with UTF-8 and international characters. Now I have built an integration with a payment gateway where I POST some data, wait a while, and get a POST back. The problem is that when I get that POST back, the international characters are broken: instead of "sørensen" I get "sÃ¸rensen". If I do iconv -fISO-8859-1 -tUTF8 it gets correctly converted to the former (I do that from an OS X command prompt). I have examined the POST request with logger.info(request.headers.inspect) in my controller and I can see that no charset parameter is given. As far as I can see, the POST from the gateway must be UTF-8, since one character (ø) gets translated to two (Ã¸). So why does Rails think that the POST is ISO-8859-1? I know that one solution is to simply convert the params hash with Iconv in the controller, but I would like to know what is happening. Thanks in advance. Regards, Jacob
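
    For reference, a minimal sketch of the Iconv workaround the question already mentions (Ruby 1.8 / Rails 2.3 era; :name is an illustrative parameter key, and the conversion direction simply mirrors the iconv command above):

        require 'iconv'
        # Re-encode the mis-decoded parameter back to UTF-8.
        params[:name] = Iconv.conv('UTF-8', 'ISO-8859-1', params[:name])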

    Read the article

  • Determining server response encoding

    - by user121196
    Not Java-specific, but when I say:

        OutputStream os = sock.getOutputStream();

    is there a way to determine the stream's charset, or do I have to know the charset ahead of time to read it properly? This is for an arbitrary socket connection.
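
    A short sketch of the underlying point, illustrative rather than a drop-in fix: a socket stream is just bytes and carries no encoding of its own, so the charset has to be agreed out of band (a protocol header, a spec, a convention) and supplied explicitly when wrapping the stream:

        import java.io.*;
        import java.net.Socket;

        public class Echo {
            public static void main(String[] args) throws IOException {
                Socket sock = new Socket("example.com", 7); // illustrative host/port
                // The charset is chosen by us, never discovered from the stream.
                Writer out = new OutputStreamWriter(sock.getOutputStream(), "UTF-8");
                out.write("hello\n");
                out.flush();
                sock.close();
            }
        }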

    Read the article

  • Json_encode Charset problem

    - by Oguz
    When I use json_encode to encode my multilingual strings, it also changes special characters. What should I do to keep them as they are? For example:

        <?
        echo json_encode(array('şüğçö'));

    returns something like:

        ["\u015f\u00fc\u011f\u00e7\u00f6"]

    but I want:

        ["şüğçö"]
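
    A sketch of the usual fix, assuming PHP 5.4 or later where the flag exists (older PHP would need a manual decode of the \uXXXX escapes instead):

        <?php
        // JSON_UNESCAPED_UNICODE emits multibyte characters verbatim.
        echo json_encode(array('şüğçö'), JSON_UNESCAPED_UNICODE);
        // ["şüğçö"]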

    Read the article

  • How to set the charset for MySQL in RODBC

    - by lokheart
    I have data with Chinese characters as field names and values. I imported it from .xls into Access 2007 and exposed it via ODBC, then used RODBC to read it into R. The field names come through fine, but in the data itself all of the Chinese characters are shown as ?. The RODBC manual says:

        If it is possible to set the DBMS or ODBC driver to communicate in the character
        set of the R session then this should be done. For example, MySQL can set the
        communication character set via SQL, e.g. SET NAMES 'utf8'.

    I guess this is the problem, but how can I send this command to MySQL via RODBC? Thanks!
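
    A hedged sketch of one way to issue that statement over the same connection before querying ("mydsn" and "mytable" are illustrative names):

        library(RODBC)
        channel <- odbcConnect("mydsn")
        # sqlQuery can send arbitrary statements, not just SELECTs.
        sqlQuery(channel, "SET NAMES 'utf8'")
        data <- sqlQuery(channel, "SELECT * FROM mytable")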

    Read the article

  • HTML Encoding Charset Problem I think?

    - by ETFairfax
    Hi, people. I've been asked to add a testimonial to this page: http://www.orchardkitchens.com/Showroom/testimonials.html As you will see, there are funny characters showing up all over the place, and they have thrown the structure of the page out. I've since reloaded the backup and the funny characters are still appearing. Any ideas what I need to do? Please ask if you need more info from me about the problem at hand. Many thanks, ETFairfax.

    Read the article

  • Java InputStream encoding/charset

    - by Tobbe
    Running the following example code:

        import java.io.*;

        public class test {
            public static void main(String[] args) throws Exception {
                byte[] buf = {-27};
                InputStream is = new ByteArrayInputStream(buf);
                BufferedReader r = new BufferedReader(
                        new InputStreamReader(is, "ISO-8859-1"));
                String s = r.readLine();
                System.out.println("test.java:9 [byte] (char)" + (char)s.getBytes()[0]
                        + " (int)" + (int)s.getBytes()[0]);
                System.out.println("test.java:10 [char] (char)" + (char)s.charAt(0)
                        + " (int)" + (int)s.charAt(0));
                System.out.println("test.java:11 string below");
                System.out.println(s);
                System.out.println("test.java:13 string above");
            }
        }

    gives me this output:

        test.java:9 [byte] (char)? (int)63
        test.java:10 [char] (char)? (int)229
        test.java:11 string below
        ?
        test.java:13 string above

    How do I retain the correct byte value (-27) in the line-9 printout, and consequently receive the expected output of the System.out.println(s) command (å)?
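
    A hedged reading of what is going on, with a two-line sketch: the ISO-8859-1 decode is actually correct (0xE5 is å, and charAt(0) duly reports 229). The ? appears because both the no-argument getBytes() call and System.out re-encode using the platform default charset, which evidently cannot represent å. Asking for the bytes in an explicit charset preserves the value:

        byte[] b = s.getBytes("ISO-8859-1"); // b[0] == -27 again
        System.out.println((int) b[0]);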

    Read the article

  • How to allow the UTF-8 charset in preg_match?

    - by Shri.harry
    Hello everyone, I am using the preg_match() function to accept only specific characters. It is allowing all letters and numbers, but along with those I also want to allow UTF-8 characters such as "ÀÁÂÃÄÅÆÇÈÉÊËÌÍÎÏÐÑÒÓÔÕÖ". How can I allow these characters in preg_match()? Please advise. Thanks in advance. Regards, Shri
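
    A minimal sketch using PCRE's Unicode support (the character classes are the point; the anchors are illustrative): the u modifier makes the pattern and subject be treated as UTF-8, and \p{L} matches a letter in any script:

        if (preg_match('/^[\p{L}\p{N}]+$/u', $input)) {
            // $input contains only Unicode letters and digits
        }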

    Read the article

  • python parallel computing: split keyspace to give each node a range to work on

    - by MatToufoutu
    My question is rather complicated for me to explain, as I'm not really good at maths, but I'll try to be as clear as possible. I'm trying to code a cluster in Python which will generate words given a charset (i.e. with lowercase: aaaa, aaab, aaac, ..., zzzz) and perform various operations on them. I'm trying to work out how to calculate, given the charset and the number of nodes, what range each node should work on (e.g. node1: aaaa-azzz, node2: baaa-czzz, node3: daaa-ezzz, ...). Is it possible to make an algorithm that computes this, and if so, how could I implement it in Python? I really don't know how to do that, so any help would be much appreciated.
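
    A sketch of one approach (function and variable names are my own): treat each word as a number in base N, where N is the charset size, split the integer range 0 .. N^length - 1 evenly across the nodes, and convert the endpoints back into words:

        import string

        def index_to_word(i, charset, length):
            """Convert an integer to its fixed-length word in the charset."""
            base = len(charset)
            chars = []
            for _ in range(length):
                i, r = divmod(i, base)
                chars.append(charset[r])
            return ''.join(reversed(chars))

        def node_ranges(charset, length, nodes):
            """Yield (first_word, last_word) for each node."""
            total = len(charset) ** length
            step = total // nodes
            for n in range(nodes):
                lo = n * step
                hi = total - 1 if n == nodes - 1 else (n + 1) * step - 1
                yield (index_to_word(lo, charset, length),
                       index_to_word(hi, charset, length))

        for lo, hi in node_ranges(string.ascii_lowercase, 4, 3):
            print(lo, hi)
        # aaaa iriq
        # irir rirh
        # riri zzzz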

    Read the article

  • MySQL charset issue.

    - by Shagymoe
    I'm not sure exactly why this happened, but I'm assuming it was a dump and import. The db is full of characters like "—" where commas and such should be. I've tried various solutions from the web, but nothing seems to work. I've verified that the HTML header specifies utf8. Any ideas on how I can get the entire db back to normal characters?
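
    A sketch of the classic dump-and-reload repair for double-encoded MySQL data, hedged because the right direction depends on how the corruption happened (take a backup before trying it): export the data while pretending it is latin1 so no conversion occurs, then reload it labelled as utf8:

        mysqldump --default-character-set=latin1 --skip-set-charset mydb > dump.sql
        # Edit dump.sql so the CREATE TABLE statements say CHARSET=utf8, then:
        mysql --default-character-set=utf8 mydb < dump.sql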

    Read the article

  • Set character_set_results UTF8 in MySQL my.cnf

    - by Marc
    Hi folks, how can I set the variable character_set_results from latin1 to utf8? I thought it would be enough to add the following line to my.cnf:

        default-character-set=utf8

    but it seems not:

        mysql> SHOW VARIABLES LIKE 'character_set_%';
        +--------------------------+----------------------------+
        | Variable_name            | Value                      |
        +--------------------------+----------------------------+
        | character_set_client     | latin1                     |
        | character_set_connection | latin1                     |
        | character_set_database   | utf8                       |
        | character_set_filesystem | binary                     |
        | character_set_results    | latin1                     |
        | character_set_server     | utf8                       |
        | character_set_system     | utf8                       |
        | character_sets_dir       | /usr/share/mysql/charsets/ |
        +--------------------------+----------------------------+

    Does anybody have an idea how I can set character_set_results to utf8?
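
    A hedged sketch (MySQL 5.x-era option names): character_set_client, _connection and _results follow what the connecting client negotiates, so the setting has to reach the client side too, either via the [client] section or by telling the server to ignore the client's handshake:

        [client]
        default-character-set = utf8

        [mysqld]
        character-set-server = utf8
        # Alternatively, override whatever charset clients ask for:
        # skip-character-set-client-handshake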

    Read the article

  • SQL Error (1064) when importing data from SQL file

    - by mejpark
    I have a MySQL database which was originally set up with the default latin1 character set and latin1_swedish_ci collation. I used the database like this for some time, until I noticed strange characters on my production web site, which is powered by a database exported from my development machine. At this point, I changed the default character set of the database and tables to utf8 and the collation to utf8_unicode_ci, converted the latin1 data inside each table to utf8 (using the 'convert data' option) and exported the database as a single SQL file using HeidiSQL.

    When the resulting SQL file is opened in Notepad++, several characters are rendered incorrectly. For example, en dashes (–) are displayed as â€“ and e with accent (é) is displayed as é. I changed the encoding of the file from ANSI to UTF-8 (using the encoding menu option in Notepad++) and the offending characters are rendered correctly. I saved the new UTF-8-encoded SQL file and attempted to import the contents into the MySQL database on my production server. The import process fails with the following error:

        /* SQL Error (1064): You have an error in your SQL syntax; check the manual
           that corresponds to your MySQL server version for the right syntax to use
           near '?# --------------------------------------------------------
           # Host: ' at line 1 */
        /* Error with snippets directory: The specified path was not found */

    The head of the SQL file:

        # --------------------------------------------------------
        # Host:                         127.0.0.1
        # Server version:               5.1.33-community
        # Server OS:                    Win32
        # HeidiSQL version:             6.0.0.3773
        # Date/time:                    2011-04-20 09:48:36
        # --------------------------------------------------------

    It chokes on the first line of the file, which is commented out. Why is this happening? I didn't have a problem loading data from SQL files until I changed the character set and collation of the database. I came up with an ugly workaround to this problem by performing the following steps:

        1. Export the database as a single SQL file using HeidiSQL
        2. Open the resulting file in Notepad++ and convert it from ANSI to UTF-8 encoding
        3. Create a new empty file in Notepad++, paste in the UTF-8 text and save the file normally

    What am I missing here?
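
    A hedged diagnosis suggested by the stray ? in front of the # in the error text: step 2's conversion most likely wrote a UTF-8 byte-order mark at the start of the file, which the server does not expect, and which pasting into a fresh file (step 3) silently drops. If that is what happened, saving as "UTF-8 without BOM" in Notepad++ or stripping the first three bytes should make the direct import work:

        tail -c +4 dump.sql > dump-nobom.sql   # drop the 3-byte BOM
        mysql --default-character-set=utf8 mydb < dump-nobom.sql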

    Read the article

  • How can the Linux system's default character set be changed?

    - by JPCF
    Hi, I'm working in a software development team using SVN. Since many developers' computers run Windows, the text file encoding has to be agreed with everyone. I decided to use Linux, and I will probably have to change my machine's default character encoding. How can this be done in Linux? Thanks
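
    A sketch of the usual places this lives, with the caveat that the paths differ by distribution and should be verified locally: the system-wide default locale is read from a file such as /etc/default/locale (Debian/Ubuntu) or /etc/sysconfig/i18n (older Red Hat), while a per-user override goes in the shell profile:

        # system-wide (e.g. /etc/default/locale):
        LANG=en_US.UTF-8

        # per-user (e.g. ~/.bashrc):
        export LANG=en_US.UTF-8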

    Read the article

  • How to set a character set per application in *nix?

    - by SimmaDoWN
    I am attempting to set a character set of IBM850 on Slackware Linux for a particular application (epic5). I'm using rxvt-unicode and have set LANG/LC_*=en_US. Now, if I set the encoding to IBM850 in KDE's Konsole program, I'm able to display certain characters correctly. I'd rather not use IBM850 for everything; is there a way to set/alias a command for per-application execution? I've tried things like:

        LC_CTYPE=IBM850 epic5
        LC_ALL=IBM850 epic5

    No success. Any help would be appreciated
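
    One avenue worth trying, hedged because terminal emulators vary: the locale variables only tell the application what to emit, while the terminal keeps decoding as UTF-8. luit, which ships with X.Org, converts between the two on the fly, so urxvt can stay UTF-8 while epic5 speaks IBM850:

        luit -encoding IBM850 epic5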

    Read the article

  • jQuery AJAX Character Encoding Problem

    - by Salty
    Hi everyone, I'm currently coding a French website. There's a schedule page where a link on the side can be used to load another day's schedule: http://aquate.us/film/horaire.html (at the moment, only the links for November 13th and November 14th work). Here's the JS I'm using to do this:

        <script type="text/javascript">
        function load(y) {
            $.get(y, function(d) {
                $("#replace").html(d);
                mod();
            });
        }
        function mod() {
            $("#dates a").click(function() {
                y = $(this).attr("href");
                load(y);
                return false;
            });
        }
        mod();
        </script>

    The actual AJAX works like a charm. My problem lies with the response to the request. Because it is a French website, there are many accented letters, and I'm using the ISO-8859-15 charset for that very reason. However, in the response to my AJAX request, the accents are becoming ?'s, because the character encoding seems to be changed back to UTF-8. How do I avoid this? I've already tried adding some PHP at the top of the requested documents to set the character set:

        <?php header('Content-Type: text/html; charset=ISO-8859-15'); ?>

    But that doesn't seem to work either. Any thoughts? Also, while any of you are looking here: why does the rightmost column seem to become smaller when a new page is loaded, causing the table to distort and each <li> within the <td> to wrap to the next line? Cheers
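
    A hedged sketch of a server-side angle: XMLHttpRequest decodes responses as UTF-8 whenever no charset is declared, so if the header() call is being lost or overridden upstream, declaring the charset in Apache itself is another way to label the loaded fragments:

        # .htaccess or vhost config
        AddDefaultCharset ISO-8859-15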

    Read the article

  • mysql utf encoding

    - by user121196
    I get this exception:

        java.sql.SQLException: Incorrect string value: '\xAC\xED\x00\x05sr...' for column 'xxxx'

    The column is a longtext in MySQL with the utf8 charset and utf8_general_ci collation. What's wrong?
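
    A hedged observation: \xAC\xED\x00\x05 is the magic header of Java's object serialization format, so the value being inserted is binary data rather than UTF-8 text, and a utf8 text column will reject it. A sketch of the schema-side fix ('mytable' is an illustrative table name):

        ALTER TABLE mytable MODIFY xxxx LONGBLOB;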

    Read the article

  • Debugging ASP.NET Strings Downloaded to Browser (MontrÃ©al instead of Montréal)

    - by jdk
    I'm downloading a vCard to the browser using Response.Write to output .NET strings with special accented characters. The MIME type is text/x-vcard, and French characters are appearing wrong in Outlook: for example, Montréal;Québec in the .NET string shows as MontrÃ©al QuÃ©bec. I'm using this vCard generator code from CodeProject.com. I've played with the System.Encoding sample code at the bottom of the linked MSDN page to convert the Unicode string into bytes and then write the ASCII bytes, but then I get Montr?al Qu?bec (progress, but not a win). I've also tried setting the content type of the response to both us-ascii and utf-8. If I open the downloaded vCard in Windows Notepad, save it as ANSI text (instead of the default Unicode format) and open it in Outlook, it's okay. So my assumption is that I need to cause a download in the ANSI charset, but I'm unsure if I'm doing it wrong or have a misunderstanding of where to start. Update: looking at the raw HTTP, it appears my French characters are being downloaded in the unexpected format, so it looks like I need to do some work on the server side...
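
    A sketch of the server-side knob that matches the Notepad experiment, hedged since Outlook's vCard handling varies by version: HttpResponse.ContentEncoding controls the bytes actually written, so emitting the card in the Windows "ANSI" code page mirrors what saving as ANSI did by hand (vCardString stands in for the generated card):

        Response.ContentType = "text/x-vcard";
        Response.ContentEncoding = System.Text.Encoding.GetEncoding(1252); // Windows-1252, i.e. "ANSI"
        Response.Write(vCardString);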

    Read the article
