Search Results

Search found 25009 results on 1001 pages for 'content encoding'.

  • Dreaded python encoding errors, how to stop them?

    - by Rhubarb
    These have been plaguing me endlessly. Why? It seems that my console can't handle the encoding; I take it that my browser and word processor can. I don't have a master list of all the possible characters it's choking on. What is the best way to fix this without modifying my data? 'charmap' codec can't encode character u'\xca'
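
    A minimal sketch of one way out, assuming Python and a console whose code page cannot represent every character (the usual cause of 'charmap' errors on Windows): re-encode only for display, with a replacement policy, so the stored data is never modified.

        import sys

        def console_safe(text):
            # Encode for the console, substituting '?' for anything its
            # code page cannot represent; the data itself is untouched.
            enc = sys.stdout.encoding or "ascii"
            return text.encode(enc, errors="replace").decode(enc)

        print(console_safe(u"\xca"))   # 'Ê' where the console can show it, '?' where it can't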

    Read the article

  • Real-time wmv video encoding in C#

    - by Greg Roberts
    How can I encode video on the fly and send it through the network from C#? I can't find a suitable library. I need to encode to WMV, and I don't mind if the actual encoding is done in C++, as long as the library has a .NET assembly available. Thanks

    Read the article

  • ASP/VBScript ServerXmlHttp Encoding

    - by colinramsay
    I'm pulling an RSS feed from a remote location using ServerXmlHttp:

        Dim httpRequest
        Set httpRequest = Server.CreateObject("Msxml2.ServerXMLHTTP.6.0")
        httpRequest.open "GET", "http://www.someurl.com/feed.xml", false
        httpRequest.send()
        Response.Write httpRequest.responseXML.xml

    However, there must be an encoding issue somewhere along the line, as I'm seeing ???? where there should be Japanese characters. Does anyone have any guidance on working with ServerXmlHttp? Thanks.
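
    Not an MSXML fix, but a sketch (in Python) of where the ???? come from: when text is transcoded into a charset that has no Japanese characters, each one is replaced by '?'. In classic ASP the usual remedy is to keep the whole pipeline in UTF-8, e.g. setting Response.CodePage = 65001 and Response.Charset = "utf-8" before writing the response.

        title = u"\u65e5\u672c\u8a9e"                      # "Japanese", in Japanese
        print(title.encode("latin-1", errors="replace"))   # b'???' -- the symptom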

    Read the article

  • Encoding Issue [NSFW]

    - by azz0r
    Hello, I am having trouble correcting an encoding issue on a site. Unfortunately the site is not work-safe (gay porn). For the brave: http://www.alphamalemedia.com/index/news I've tried changing the meta charset from utf-8 to iso-8859-1, and I've switched the tables from latin1_swedish_ci to utf8, but no luck.

    Read the article

  • Java Unicode encoding

    - by Marcus
    A Java char is 2 bytes, which gives at most 65,536 values, but there are 95,221 Unicode characters. Does this mean that you can't handle certain Unicode characters in a Java application? Does this boil down to which character encoding you are using?
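
    You can still handle them; they simply occupy two chars. Java strings are UTF-16, so a code point above U+FFFF is stored as a surrogate pair of two 16-bit units (Java exposes code-point APIs such as String.codePointAt for this reason). A sketch in Python that makes the same representation visible:

        ch = u"\U0001D11E"             # MUSICAL SYMBOL G CLEF, outside the BMP
        data = ch.encode("utf-16-be")
        print(len(data))               # 4 bytes = two 16-bit code units
        print(data.hex())              # 'd834dd1e': high and low surrogate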

    Read the article

  • utf8 and encoding

    - by Dan
    I have a Unicode string, "hao123--??????"; its UTF-8 form in a C++ string is "hao123???????????>". I need to write it to a file in the format "hao123\uFF0D\uFF0D\u6211\u7684\u4E0A\u7F51\u4E3B\u9875". How can I do this? I know little about encodings. Can anyone help? Thanks!
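
    A sketch of the requested transformation in Python (the same loop ports directly to C++): decode the UTF-8 bytes to code points, pass ASCII through, and escape everything else as \uXXXX. The target string from the question serves as input.

        text = u"hao123\uFF0D\uFF0D\u6211\u7684\u4E0A\u7F51\u4E3B\u9875"
        escaped = "".join(
            c if ord(c) < 128 else "\\u%04X" % ord(c)
            for c in text
        )
        print(escaped)   # hao123\uFF0D\uFF0D\u6211\u7684\u4E0A\u7F51\u4E3B\u9875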

    Read the article

  • Ruby encoding problem

    - by Fossmo
    I'm just starting to learn Ruby and have a problem with encoding:

        require 'rubygems'
        require 'mechanize'

        agent = Mechanize.new
        agent.get('myurl.....')
        agent.page.search('#reciperesult a').each do |item|
          c = Mechanize.new
          c.get(item.attributes['href'])
          puts c.page.search('#ingredients li').text
        end

    The output text is shown as h+©nsekj+©tt when it should be hønsekjøtt. I'm using Ruby 1.8.7. Can anybody point me in the right direction?

    Read the article

  • PHP: simple form encoding/decoding

    - by Lennart
    Hi guys, this question has probably been asked before, but I'll ask it again. Currently, I'm facing a problem with form encoding. When posting my form, all spaces are replaced by the "+" character. I would like to replace this "+" character with a real space. Does anyone have a PHP solution for this? Thanks in advance. Cheers, Lennart
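
    For reference, a sketch (Python, for brevity) of the mechanism at work: form posts arrive as application/x-www-form-urlencoded, which encodes spaces as '+', and a form-aware decode turns them back. PHP's urldecode() performs the same '+'-to-space conversion; note that PHP already applies it to $_GET and $_POST, so a lingering '+' usually means the value was encoded twice somewhere.

        from urllib.parse import unquote_plus

        posted = "hello+world+from+a+form"
        print(unquote_plus(posted))   # hello world from a form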

    Read the article

  • .NET Weird character encoding issue

    - by born to hula
    Our globalization mechanism stores error messages in a SQL 2005 database. Some of the error messages are used as subjects of email messages sent to the development team. Recently, for no clear reason, we started receiving emails with strangely encoded subjects, such as:

        =?utf-8?B?Qm1mQm92ZXNwYS5Qb3NUcmFkaW5nRXNwZWNpZmljYWNhbyAtIFN1Y2Vzc28gbm8gcmVwcm 9jZXNzYW1lbnRvLiBEYXRhIFByZWfDo28gPSAzMS8wMy8yMDEwIDAwOjAwOjAwIC0gTsO6bWVyby BkbyBFdmVudG8gZGUgTmVnw7NjaW8gPSAxMDAyIC0gQ8OzZGlnbyBOYXR1cmV6YSBkYSBPcGVyY cOnw6NvID0gQyAtIFNlcn...

    We don't have any clue about why this is happening, nor which encoding scheme is being used here (maybe utf-8?). I'd really appreciate some help.
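
    That subject is an RFC 2047 "encoded-word": the =?utf-8?B?...?= wrapper marks a base64-encoded UTF-8 payload (B = base64, Q = quoted-printable), which mail libraries produce automatically once a subject contains non-ASCII text. A sketch of the mechanical decode in Python, using a short blob built for illustration since the one above is truncated:

        import base64

        subject = "=?utf-8?B?RGF0YSBQcmVnw6NvID0gMzEvMDMvMjAxMA==?="
        charset, encoding, payload = subject[2:-2].split("?")
        assert encoding.upper() == "B"   # base64 payload
        print(base64.b64decode(payload).decode(charset))   # Data Pregão = 31/03/2010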

    Read the article

  • Django Encoding Issues with MySQL

    - by Jordan Reiter
    Okay, so I have a MySQL database set up. Most of the tables are latin1 and Django handles them fine. But some of them are UTF-8, and Django does not handle those. Here's a sample table (these tables are all from django-geonames):

        DROP TABLE IF EXISTS `geoname`;
        SET @saved_cs_client = @@character_set_client;
        SET character_set_client = utf8;
        CREATE TABLE `geoname` (
          `id` int(11) NOT NULL,
          `name` varchar(200) NOT NULL,
          `ascii_name` varchar(200) NOT NULL,
          `latitude` decimal(20,17) NOT NULL,
          `longitude` decimal(20,17) NOT NULL,
          `point` point default NULL,
          `fclass` varchar(1) NOT NULL,
          `fcode` varchar(7) NOT NULL,
          `country_id` varchar(2) NOT NULL,
          `cc2` varchar(60) NOT NULL,
          `admin1_id` int(11) default NULL,
          `admin2_id` int(11) default NULL,
          `admin3_id` int(11) default NULL,
          `admin4_id` int(11) default NULL,
          `population` int(11) NOT NULL,
          `elevation` int(11) NOT NULL,
          `gtopo30` int(11) NOT NULL,
          `timezone_id` int(11) default NULL,
          `moddate` date NOT NULL,
          PRIMARY KEY (`id`),
          KEY `country_id_refs_iso_alpha2_e2614807` (`country_id`),
          KEY `admin1_id_refs_id_a28cd057` (`admin1_id`),
          KEY `admin2_id_refs_id_4f9a0f7e` (`admin2_id`),
          KEY `admin3_id_refs_id_f8a5e181` (`admin3_id`),
          KEY `admin4_id_refs_id_9cc00ec8` (`admin4_id`),
          KEY `fcode_refs_code_977fe2ec` (`fcode`),
          KEY `timezone_id_refs_id_5b46c585` (`timezone_id`),
          KEY `geoname_52094d6e` (`name`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8;
        SET character_set_client = @saved_cs_client;

    Now, if I try to get data from the table directly using MySQLdb and a cursor, I get the text with the proper encoding:

        >>> import MySQLdb
        >>> from django.conf import settings
        >>> conn = MySQLdb.connect(host="localhost",
        ...                        user=settings.DATABASES['default']['USER'],
        ...                        passwd=settings.DATABASES['default']['PASSWORD'],
        ...                        db=settings.DATABASES['default']['NAME'])
        >>> cursor = conn.cursor()
        >>> cursor.execute("select name from geoname where name like 'Uni%Hidalgo'")
        1L
        >>> g = cursor.fetchone()
        >>> g[0]
        'Uni\xc3\xb3n Hidalgo'
        >>> print g[0]
        Unión Hidalgo

    However, if I try to use the Geoname model (which is actually a django.contrib.gis.db.models.Model), it fails:

        >>> from geonames.models import Geoname
        >>> g = Geoname.objects.get(name__istartswith='Uni', name__icontains='Hidalgo')
        >>> g.name
        u'Uni\xc3\xb3n Hidalgo'
        >>> print g.name
        UniÃ³n Hidalgo

    There's pretty clearly an encoding error here. In both cases the database returns the bytes 'Uni\xc3\xb3n Hidalgo', but Django is (incorrectly?) decoding '\xc3\xb3' as the two characters 'Ã³'. What can I do to fix this?
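
    A sketch of the diagnosis, assuming the stored bytes are valid UTF-8: the ORM's connection layer is decoding them as latin-1, so each UTF-8 byte becomes one character ('\xc3\xb3' turns into 'Ã³'). The round trip below undoes the damage after the fact; the lasting fix is declaring the connection charset (MySQLdb accepts charset='utf8' and use_unicode=True, and Django's database OPTIONS can pass the same setting).

        # Undo the mis-decode (Python 2 literals, matching the question):
        s = u'Uni\xc3\xb3n Hidalgo'
        fixed = s.encode('latin-1').decode('utf-8')
        assert fixed == u'Uni\xf3n Hidalgo'   # displays as: Unión Hidalgo

        # The lasting fix -- declare the connection encoding up front:
        # conn = MySQLdb.connect(..., charset='utf8', use_unicode=True)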

    Read the article

  • Trouble with encoding and urllib

    - by Ockonal
    Hello, I'm loading a web page using urllib. There are Russian characters in it, and the page encoding is 'utf-8'.

    Attempt 1:

        pageData = unicode(requestHandler.read()).decode('utf-8')

        UnicodeDecodeError: 'ascii' codec can't decode byte 0xd0 in position 262: ordinal not in range(128)

    Attempt 2:

        pageData = requestHandler.read()
        soupHandler = BeautifulSoup(pageData)
        print soupHandler.findAll(...)

        UnicodeEncodeError: 'ascii' codec can't encode characters in position 340-345: ordinal not in range(128)
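
    A sketch of the usual fix, assuming Python 2 as in the question: unicode(...) without an encoding argument attempts an ASCII decode first, which is exactly what raises the first error. Decode the raw bytes once, explicitly, and encode explicitly when printing:

        pageData = requestHandler.read().decode('utf-8')   # bytes -> unicode, one step
        # when printing to a terminal that expects UTF-8 bytes:
        # print pageData.encode('utf-8')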

    Read the article

  • Encoding problem (Hebrew UTF8) in WordPress

    - by Tal Galili
    Hi all, I have a blog (of a friend) that I am failing to fix: http://www.nivcalderon.com/ The language of the site is Hebrew, but the encoding scrambles the output, and I can't find how to fix it. I tried changing the DB collation to utf8_general_ci. I added define('DB_COLLATE', 'utf8_general_ci'); to wp-config.php (and also define('DB_CHARSET', 'utf8');, but removed them later, since they didn't seem to fix the problem). Any ideas of what else to do? Thanks

    Read the article

  • In Python, how do I decode GZIP encoding?

    - by alex
    I downloaded a web page in my Python script. In most cases, this works fine. However, this one came back with the response header Content-Encoding: gzip, and when I tried to print the source of the page, it appeared as binary garbage in my PuTTY session. How do I decode this to regular text?
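
    A sketch, assuming the body has already been read into a byte string: a response sent with Content-Encoding: gzip must be decompressed before the text is usable. Python 3 has gzip.decompress(); on Python 2 the equivalent is gzip.GzipFile(fileobj=StringIO.StringIO(body)).read().

        import gzip

        # A stand-in for the raw bytes read from the response:
        body = gzip.compress(b"<html>regular text</html>")
        print(gzip.decompress(body).decode("utf-8"))   # <html>regular text</html>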

    Read the article

  • Parsing mail subject with inline specified encoding

    - by Sergej Andrejev
    Hi, I'm trying to parse email subjects that specify their encoding inline. I understand the format and can imagine how this could be done, but maybe a free .NET solution is already available, so I wouldn't waste time on it? Here is an example of a subject I want to parse: =?ISO-8859-13?Q?Fwd=3A_Dvira=E8iai_vasar=E0_vagiami_da=FEniau=2C_bet_draust?=
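
    No pointer to a ready-made .NET library here, but the format (an RFC 2047 encoded-word: charset, then Q for quoted-printable or B for base64, then the payload) decodes mechanically. A sketch with Python 3's standard library, applied to the exact subject above:

        from email.header import decode_header

        subject = ("=?ISO-8859-13?Q?Fwd=3A_Dvira=E8iai_vasar=E0"
                   "_vagiami_da=FEniau=2C_bet_draust?=")
        text = "".join(
            chunk.decode(charset or "ascii") if isinstance(chunk, bytes) else chunk
            for chunk, charset in decode_header(subject)
        )
        print(text)   # Fwd: Dviračiai vasarą vagiami dažniau, bet draust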

    Read the article

  • Apache/2.2.20 (Ubuntu 11.10) gzip compression won't work on php pages, content is chunked

    - by FamousInteractive
    I'm running into a problem with a new production server to which I'm transferring projects. The HTML output of the PHP applications isn't compressed by the Apache mod_deflate module. Other resources, such as stylesheet and javascript files, even HTML pages served with the same Content-Type (text/html) as the PHP output, are compressed! The projects use the following rules (from HTML5 Boilerplate) in the .htaccess:

        <IfModule mod_deflate.c>
            # Force deflate for mangled headers
            # developer.yahoo.com/blogs/ydn/posts/2010/12/pushing-beyond-gzipping/
            <IfModule mod_setenvif.c>
                <IfModule mod_headers.c>
                    SetEnvIfNoCase ^(Accept-EncodXng|X-cept-Encoding|X{15}|~{15}|-{15})$ ^((gzip|deflate)\s*,?\s*)+|[X~-]{4,13}$ HAVE_Accept-Encoding
                    RequestHeader append Accept-Encoding "gzip,deflate" env=HAVE_Accept-Encoding
                </IfModule>
            </IfModule>

            # HTML, TXT, CSS, JavaScript, JSON, XML, HTC:
            <IfModule filter_module>
                FilterDeclare COMPRESS
                FilterProvider COMPRESS DEFLATE resp=Content-Type $text/html
                FilterProvider COMPRESS DEFLATE resp=Content-Type $text/css
                FilterProvider COMPRESS DEFLATE resp=Content-Type $text/plain
                FilterProvider COMPRESS DEFLATE resp=Content-Type $text/xml
                FilterProvider COMPRESS DEFLATE resp=Content-Type $text/x-component
                FilterProvider COMPRESS DEFLATE resp=Content-Type $application/javascript
                FilterProvider COMPRESS DEFLATE resp=Content-Type $application/json
                FilterProvider COMPRESS DEFLATE resp=Content-Type $application/xml
                FilterProvider COMPRESS DEFLATE resp=Content-Type $application/xhtml+xml
                FilterProvider COMPRESS DEFLATE resp=Content-Type $application/rss+xml
                FilterProvider COMPRESS DEFLATE resp=Content-Type $application/atom+xml
                FilterProvider COMPRESS DEFLATE resp=Content-Type $application/vnd.ms-fontobject
                FilterProvider COMPRESS DEFLATE resp=Content-Type $image/svg+xml
                FilterProvider COMPRESS DEFLATE resp=Content-Type $image/x-icon
                FilterProvider COMPRESS DEFLATE resp=Content-Type $application/x-font-ttf
                FilterProvider COMPRESS DEFLATE resp=Content-Type $font/opentype
                FilterChain COMPRESS
                FilterProtocol COMPRESS DEFLATE change=yes;byteranges=no
            </IfModule>
        </IfModule>

    We have a testing machine that runs the same Apache, OS and PHP versions. On that machine the compression works just fine on the PHP output. I've checked and compared the Apache and PHP config files; they are the same as far as I can tell. I've tried several ways of outputting the content from PHP, using output buffering or just plain echoing. Same thing, no compression. Example response headers for PHP output:

        HTTP/1.1 200 OK
        Date: Wed, 25 Apr 2012 23:30:59 GMT
        Server: Apache
        Accept-Ranges: bytes
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Cache-Control: public
        Pragma: no-cache
        Vary: User-Agent
        Keep-Alive: timeout=5, max=98
        Connection: Keep-Alive
        Transfer-Encoding: chunked
        Content-Type: text/html; charset=utf-8

    Example response headers for a CSS file:

        HTTP/1.1 200 OK
        Date: Wed, 25 Apr 2012 23:30:59 GMT
        Server: Apache
        Last-Modified: Mon, 04 Jul 2011 19:12:36 GMT
        Vary: Accept-Encoding,User-Agent
        Content-Encoding: gzip
        Cache-Control: public
        Expires: Fri, 25 May 2012 23:30:59 GMT
        Content-Length: 714
        Keep-Alive: timeout=5, max=100
        Connection: Keep-Alive
        Content-Type: text/css; charset=utf-8

    Does anyone have a clue, or has anyone experienced the same "problem"? Thanks!
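
    A quick diagnostic, as a sketch in Python, for checking whether a given URL comes back compressed; handy for comparing the PHP pages against the static files (the URL is a placeholder; urllib sends Accept-Encoding: identity by default, so the header must be set explicitly):

        import urllib.request

        req = urllib.request.Request("http://example.com/index.php",
                                     headers={"Accept-Encoding": "gzip"})
        resp = urllib.request.urlopen(req)
        # 'gzip' when mod_deflate compressed the response, None when it did not:
        print(resp.headers.get("Content-Encoding"))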

    Read the article

  • Oracle UCM 11g

    - by [email protected]
    The latest version of Oracle UCM 11g has been released. There are major new features, above all in the product architecture, which make us very optimistic, especially after seeing the performance and scalability results. The link to all the information about the launch is here: Oracle Enterprise Content Management 11g. The most important new features are: Better integration with your working environment: new desktop integration, so content is managed using standard office tools; one-click web content management, which lets web developers and editors access and update content with a single click; and more functionality through integrations with other Oracle products. A unified content management technology stack: Oracle ECM Suite 11g now unifies all content repositories to make them easier to manage on a single infrastructure. Oracle Fusion Middleware infrastructure: Oracle ECM Suite 11g has moved entirely onto the Oracle Fusion Middleware platform, with all applications running on Oracle WebLogic Server and managed through the Oracle Enterprise Manager console. Extreme performance and scalability: the performance test figures, obtained on an Exadata machine, are spectacular. You can watch a performance video here. Well... 172 million documents per day, and 124 pages per second with 2 CPUs... who wants to be the first to try it?

    Read the article

  • Encoding MySQL text fields into UTF-8 text files - problems with special characters

    - by Matt Andrews
    I'm writing a PHP script to export MySQL database rows into a .txt file formatted for Adobe InDesign's internal markup. Exports work, but when I encounter special characters like é or umlauts, I get weird symbols (e.g. ChloÃ« Hanslip instead of Chloë Hanslip). Rather than run a search and replace for every possible weird character, I need a better method. I've checked that when the text hits the database, it's saved properly; in the database I see the special characters. My export code basically runs some regular expressions to put in the InDesign tags, and I'm left with the weird symbols. If I just output the text to the browser (rather than prompting for a text file download), it displays properly. When I save the file I use this code:

        header("Content-disposition: attachment; filename=test.txt");
        header("Content-Type: text/plain; charset=utf-8");

    I've tried various combinations of utf8_encode() and iconv() to no avail. Can anybody point me in the right direction here?
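
    A sketch of the two usual culprits, shown in Python for brevity (the question is PHP): fetching UTF-8 rows over a latin-1 connection, or running utf8_encode() over text that is already UTF-8; both produce exactly this Ã«-style garbage. On the PHP side, calling mysql_set_charset('utf8') before querying and dropping any extra utf8_encode() pass is typically the whole fix.

        good = u"Chlo\xeb Hanslip"                        # what the database stores
        mangled = good.encode("utf-8").decode("latin-1")
        print(mangled)                                    # ChloÃ« Hanslip

        # utf8_encode() on already-UTF-8 data inflicts the same damage:
        double_encoded = good.encode("utf-8").decode("latin-1").encode("utf-8")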

    Read the article

  • MediaFileUpload of HTML in UTF-8 encoding using Python and Google-Drive-SDK

    - by Victoria
    The MediaFileUpload example in the basic documentation covers creating/uploading a file to Google Drive, and I have code that creates files, converting from HTML to Google Doc format. It works perfectly when they contain only ASCII characters, but when I add a non-ASCII character, it fails with the following traceback:

        Traceback (most recent call last):
          File "d:\my\py\ckwort.py", line 949, in <module>
            rids, worker_documents = analyze( meta, gd )
          File "d:\my\py\ckwort.py", line 812, in analyze
            gd.mkdir( **iy )
          File "d:\my\py\ckwort.py", line 205, in mkdir
            self.create( **( kw['subop']))
          File "d:\my\py\ckwort.py", line 282, in create
            media_body=kw['media_body'],
          File "D:\my\py\gdrive2\oauth2client\util.py", line 120, in positional_wrapper
            return wrapped(*args, **kwargs)
          File "D:\my\py\gdrive2\apiclient\http.py", line 676, in execute
            headers=self.headers)
          File "D:\my\py\gdrive2\oauth2client\util.py", line 120, in positional_wrapper
            return wrapped(*args, **kwargs)
          File "D:\my\py\gdrive2\oauth2client\client.py", line 420, in new_request
            redirections, connection_type)
          File "D:\my\py\gdrive2\httplib2\__init__.py", line 1597, in request
            (response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
          File "D:\my\py\gdrive2\httplib2\__init__.py", line 1345, in _request
            (response, content) = self._conn_request(conn, request_uri, method, body, headers)
          File "D:\my\py\gdrive2\httplib2\__init__.py", line 1282, in _conn_request
            conn.request(method, request_uri, body, headers)
          File "C:\Python27\lib\httplib.py", line 958, in request
            self._send_request(method, url, body, headers)
          File "C:\Python27\lib\httplib.py", line 992, in _send_request
            self.endheaders(body)
          File "C:\Python27\lib\httplib.py", line 954, in endheaders
            self._send_output(message_body)
          File "C:\Python27\lib\httplib.py", line 812, in _send_output
            msg += message_body
        UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 370: ordinal not in range(128)

    I don't find any parameter that specifies the file encoding MediaFileUpload should use (my files are UTF-8). Am I missing something?
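
    One workaround worth trying, as a sketch: the traceback dies while httplib concatenates a str header block with a unicode body, and a resumable upload sends the file bytes in a separate request, sidestepping that concatenation. resumable is a real MediaFileUpload parameter; that it avoids this particular crash is an assumption.

        from apiclient.http import MediaFileUpload

        media = MediaFileUpload("page.html",
                                mimetype="text/html; charset=utf-8",
                                resumable=True)
        # then pass `media` as media_body to the files() insert call, as before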

    Read the article

  • Video games, content strategy, and failure - oh my.

    - by Roger Hart
Last night was the CS London group's event Content Strategy, Manhattan Style. Yes, it's a terrible title, feeling like a self-conscious grasp for chic, sadly commensurate with the venue. Fortunately, this was not commensurate with the event itself, which was lively, relevant, and engaging. Although mostly if you're a consultant. This is a strong strain in current content strategy discourse, and I think we're going to see it remedied quite soon. Not least in Paris on Friday. A lot of the bloggers, speakers, and commentators in the sphere are consultants, or part of agencies and other consulting organisations. A lot of the talk is about how you sell content strategy to your clients. This is completely acceptable. Of course it is. And it's actually useful if that's something you regularly have to do. To an extent, it's even portable to those of us who have to sell content strategy within an organisation. We're still competing for credibility and resource. What we're doing less is living in the beginning of a project. This was touched on by Jeffrey MacIntyre (albeit in a your-clients kind of a way), who described "the day two problem". Companies, he suggested, build websites for launch day, and forget about the need for them to be ongoing entities. Consultants, agencies, or even internal folks on short projects will live through Day Two quite often: the trainwreck moment where somebody realises that even if the content is right (which it often isn't), and on time (which it often isn't), it'll be redundant, outdated, or inaccurate by the end of the week/month/fickle social media attention cycle. The thing about living through a lot of Day Two is that you see a lot of failure. Nothing succeeds like failure? Failure is good. When it's structured right, it's an awesome tool for learning - that's kind of how video games work. I'm chewing over a whole blog post about this, but basically in game-like learning, you try, fail, go round the loop again. Success eventually yields joy. It's a relatively well-known phenomenon. It works best when that failing step is acutely felt, but extremely inexpensive. Dying in Portal is highly frustrating and surprisingly characterful, but the save-points are well designed and the reload unintrusive. The barrier to re-entry into the loop is very low, as is the cost of your failure out in meatspace. So it's easy (and fun) to learn. Yeah, spot the difference with business failure. As an external content strategist, you get to rock up with a big old folder full of other companies' Day Two (and ongoing day two hundred) failures. You can't send the client round the learning loop - although you may well be there because they've been round it once - but you can show other people's round trip. It's not as compelling, but it's not bad. What about internal content strategists? We can still point to things that are wrong, and there are some very compelling tools at our disposal - content inventories, user testing, and analytics, for instance. But if we're picking up big organically sprawling legacy content, Day Two may well be a distant memory, and the felt experience of web content failure is unlikely to be immediate to many people in the organisation. What to do? My hunch here is that the first task is to create something immediate and felt, but that it probably needs to be a success. Something quickly doable and visible - a content problem solved with a measurable business result. Now, that's a tall order; but scrape off the "quickly" and it's the whole reason we're here.
At Red Gate, I've started with the textbook fear and passion introduction to content strategy. In fact, I just typo'd that as "contempt strategy", and it isn't a bad description. Yelling "look at this, our website is rubbish!" gets you the initial attention, but it doesn't make you many friends. And if you don't produce something pretty sharp-ish, it's easy to lose the momentum you built up for change. The first thing I've done - after the visual content inventory - is to delete a bunch of stuff. About 70% of the SQL Compare web content has gone, in fact. This is a really, really cheap operation. It's visible, and it's powerful. It's cheap because you don't have to create any new content. It's not free, however, because you do have to validate your deletions. This means analytics, actually reading that content, and talking to people whose business purposes that content has to serve. If nobody outside the company uses it, and nobody inside the company thinks they ought to, that's a no-brainer for the delete list. The payoff here is twofold. There's the nebulous hard-to-illustrate "bad content does user experience and brand damage" argument; and there's the "nobody has to spend time (money) maintaining this now" argument. One or both are easily felt, and the second at least should be measurable. But that's just one approach, and I'd be interested to hear from any other internal content strategy folks about how they get buy-in, maintain momentum, and generally get things done.

    Read the article

  • Regex, encoding, and characters that look a like

    - by hack.augusto
    First, a brief example. Let's say I have the regex /[0-9]{2}°/ and the text "24º". The text won't match, obviously... (?) Really, it depends on the characters used: the degree sign ° and the masculine ordinal º are different characters that merely look alike. Here is my problem: I have no control over which characters the user types, so I need to cover all the possibilities in the regex, /[0-9]{2}[°º]/, or, even better, ensure the text contains only the characters I'm expecting, °. But I can't just remove the unknown characters, otherwise the regex won't work; I need to change each one to the expected character it looks like. I've done this with a little function that maps "looks like" to "what I expect" and converts it. The problem is that I haven't covered all the possibilities. For example, today I found a new "-"; now we've got three of them, just like LaTeX =D (- -- ---), cool, but the regex didn't work. Does anyone know how I might solve this?
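
    A sketch of the mapping approach described above, with an illustrative (not exhaustive) table: fold each look-alike character onto the one the pattern expects before matching, so the pattern itself stays simple.

        import re

        LOOKALIKES = {
            u"\u00ba": u"\u00b0",   # masculine ordinal 'º' -> degree sign '°'
            u"\u2013": u"-",        # en dash -> hyphen-minus
            u"\u2014": u"-",        # em dash -> hyphen-minus
        }

        def normalize(text):
            return u"".join(LOOKALIKES.get(c, c) for c in text)

        print(re.search(u"[0-9]{2}\u00b0", normalize(u"24\u00ba")))   # now a match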

    Read the article
