Search Results

Search found 1003 results on 41 pages for 'utf8'.

Page 11/41 | < Previous Page | 7 8 9 10 11 12 13 14 15 16 17 18  | Next Page >

  • Invalid UTF-8 for Postgres, Perl thinks they're ok

    - by gorilla
    I'm running perl 5.10.0 and Postgres 8.4.3, inserting strings into a database that sits behind DBIx::Class. These strings should be in UTF-8, and therefore my database is running in UTF-8. Unfortunately some of these strings are bad, containing malformed UTF-8, so when I run it I get an exception: DBI Exception: DBD::Pg::st execute failed: ERROR: invalid byte sequence for encoding "UTF8": 0xb5. I thought that I could simply ignore the invalid ones and worry about the malformed UTF-8 later, so with this code it should flag and ignore the bad titles: if(not utf8::valid($title)){ $title="Invalid UTF-8"; } $data->title($title); $data->update(); However Perl seems to think that the strings are valid, yet it still throws the exceptions. How can I get Perl to detect the bad UTF-8?
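
    One possible explanation (not part of the original question): utf8::valid() only checks Perl's internal string flags, not whether the bytes form well-formed UTF-8, so an actual decode attempt is needed. A minimal Perl sketch:

        use Encode;

        # Try to decode a copy of the bytes strictly; Encode::FB_CROAK makes
        # decode() die on the first malformed sequence instead of repairing it.
        my $is_valid = eval { Encode::decode('UTF-8', "$title", Encode::FB_CROAK); 1 };
        if (!$is_valid) {
            $title = 'Invalid UTF-8';
        }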

    Read the article

  • NpgSQL insert file path containing backslashes "\\"

    - by Chau
    I am trying to create a record containing the path to a file. The insertion is done into a Postgres database where UTF8 is enabled, using the Npgsql driver. My table definition: CREATE TABLE images ( id serial, file_location character varying NOT NULL ) My SQL statement (boiled down to a minimum): INSERT INTO images (file_location) VALUES (E'\\2010') When using pgAdmin to run the above statement, it works fine. Using the Npgsql driver through Visual Studio C#, it fails with this exception: "ERROR: 22021: invalid byte sequence for encoding \"UTF8\": 0x81" I also tried replacing my SQL statement with the following: INSERT INTO images (file_location) VALUES (E'\\a2010') (the E prefix is encouraged by pgAdmin when using double backslashes). So a quick recap: why is Npgsql stopping my insertion of the \\2010?
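
    A possible workaround (a sketch, not from the original post; it assumes Npgsql's standard ADO.NET-style API and an open NpgsqlConnection named connection): pass the path as a parameter so neither C# nor Postgres string-escaping rules ever touch the backslashes.

        using (var cmd = new NpgsqlCommand(
            "INSERT INTO images (file_location) VALUES (:path)", connection))
        {
            // The driver sends the value separately, so no E'...' escaping is needed.
            // @"\2010" is a single literal backslash plus 2010, the value E'\\2010' denotes.
            cmd.Parameters.Add(new NpgsqlParameter("path", @"\2010"));
            cmd.ExecuteNonQuery();
        }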

    Read the article

  • WCF Method is returning xml fragment but no xml UTF-8 header

    - by horls
    My method does not return the XML declaration, just the root element XML. internal Message CreateReturnMessage(string output, string contentType) { // create dictionaryReader for the Message byte[] resultBytes = Encoding.UTF8.GetBytes(output); XmlDictionaryReader xdr = XmlDictionaryReader.CreateTextReader(resultBytes, 0, resultBytes.Length, Encoding.UTF8, XmlDictionaryReaderQuotas.Max, null); if (WebOperationContext.Current != null) WebOperationContext.Current.OutgoingResponse.ContentType = contentType; // create Message return Message.CreateMessage(MessageVersion.None, "", xdr); } However, the output I get is: <Test> <Message>Hello World!</Message> </Test> I would like the output to render as: <?xml version="1.0" encoding="utf-8" standalone="yes"?> <Test> <Message>Hello World!</Message> </Test>
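
    One workaround (a sketch, not the original code): Message.CreateMessage with MessageVersion.None emits only the root element, so an operation that returns a raw Stream can write the declaration itself. The operation contract would have to return Stream instead of Message.

        internal Stream CreateReturnStream(string output, string contentType)
        {
            // Prepend the declaration by hand, then hand WCF the raw bytes.
            string xml = "<?xml version=\"1.0\" encoding=\"utf-8\" standalone=\"yes\"?>" + output;
            byte[] bytes = Encoding.UTF8.GetBytes(xml);

            if (WebOperationContext.Current != null)
                WebOperationContext.Current.OutgoingResponse.ContentType = contentType;

            return new MemoryStream(bytes);
        }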

    Read the article

  • Convert a Mysql database from latin to UTF-8

    - by Matthieu
    I am converting a website from ISO to UTF-8, so I need to convert the MySQL database too. On the internet I have read various solutions and I don't know which one to choose. Do I really need to convert my varchar columns to binary, then to utf8, like this: ALTER TABLE t MODIFY col BINARY(150); ALTER TABLE t MODIFY col CHAR(150) CHARACTER SET utf8; It takes a long time to do that for each column, of each table, of each database. I have 10 databases, with 20 tables each, with around 2-3 varchar columns (2 queries per column), which gives me around 1000 queries to write! Is there a faster way to do this?
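
    A possible shortcut (a sketch, assuming the data really is stored as latin1 rather than as UTF-8 bytes mislabelled as latin1): MySQL can convert every character column of a table in one statement, and those statements can be generated from information_schema.

        -- One statement per table converts all character columns plus the table default:
        ALTER TABLE t CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;

        -- Generate that statement for every table in every non-system database:
        SELECT CONCAT('ALTER TABLE `', table_schema, '`.`', table_name,
                      '` CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;')
        FROM information_schema.tables
        WHERE table_schema NOT IN ('mysql', 'information_schema');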

    Read the article

  • Delphi 10, .NET, how do I convert a hex UTF-8 string to its unicode character?

    - by Evan V.
    Hi all, I am trying to make my web app compatible with international languages and I am stuck trying to convert escaped characters in my Delphi .NET DLL. The front-end code passes the UTF-8 hex notation with an escape character, e.g. for お I pass \uE3818A. In my DLL I capture this and construct the string '$E3828A'. I need to convert this back to お and send it to my database. I've been trying to use Encoding.UTF8.GetBytes and Encoding.UTF8.GetString but with no luck. Can anyone help me figure this out? Thank you.
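
    A sketch of one approach (not the original code; it assumes a Delphi for .NET unit with SysUtils and System.Text in its uses clause): the hex digits are UTF-8 bytes, so they need to become a byte array first and then be decoded, rather than being handed to GetBytes as text.

        var
          hex: string;
          bytes: array of Byte;
          i: Integer;
          decoded: string;
        begin
          hex := 'E3818A';   // the captured value without the '\u' / '$' prefix
          SetLength(bytes, Length(hex) div 2);
          for i := 0 to High(bytes) do
            bytes[i] := Byte(StrToInt('$' + Copy(hex, i * 2 + 1, 2)));
          decoded := Encoding.UTF8.GetString(bytes);   // yields the original character
        end;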

    Read the article

  • Unable to Retrieve Simplified Chinese Characters From Form

    - by Bullines
    I have a page that displays content retrieved from XML with no problems: <?xml version="1.0" encoding="UTF-8"?> <Root> <Fields> <NamePrompt>??</NamePrompt> </Fields> </Root> Page encoding is set to GB18030 and it displays perfectly. However, when I retrieve inputted text from HttpContext.Current.Request.Form that's been entered with double-byte characters, the retrieved string contains unreadable characters. Single-byte characters are fine, obviously. I've tried the following to no avail: byte[] valueBytes = Encoding.UTF8.GetBytes(HttpContext.Current.Request.Form["fullName"]); string value = Encoding.UTF8.GetString(valueBytes); I don't see this problem with other double-byte languages like Japanese or Korean. How can I successfully retrieve double-byte characters from a page that's GB18030 encoded?
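
    One thing worth checking (an assumption, not from the original post): ASP.NET decodes posted form values using the requestEncoding from web.config, which defaults to UTF-8, and round-tripping through Encoding.UTF8.GetBytes/GetString afterwards cannot repair a value that was already decoded with the wrong charset. A sketch of the relevant globalization settings:

        <configuration>
          <system.web>
            <!-- decode postbacks and render responses as GB18030 -->
            <globalization requestEncoding="GB18030" responseEncoding="GB18030" />
          </system.web>
        </configuration>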

    Read the article

  • Encoding in python with lxml - complex solution

    - by Vojtech R.
    Hi, I need to download and parse a webpage with lxml and build UTF-8 XML output. I think a schema in pseudocode is more illustrative: from lxml import etree webfile = urllib2.urlopen(url) root = etree.parse(webfile.read(), parser=etree.HTMLParser(recover=True)) txt = my_process_text(etree.tostring(root.xpath('/html/body'), encoding=utf8)) output = etree.Element("out") output.text = txt outputfile.write(etree.tostring(output, encoding=utf8)) So webfile can be in any encoding (lxml should handle this). The output file has to be in UTF-8. I'm not sure where to apply the encoding/decoding. Is this schema OK? (I can't find a good tutorial about lxml and encoding, but I can find many problems with it...) I need a robust, proven solution, so I'm asking you seniors. Many thanks
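
    A possible arrangement (a sketch, not the original code; url, my_process_text and outputfile are assumed to exist as in the question): let lxml do the decoding on input, keep the text as unicode in between, and encode to UTF-8 only when serializing. Note that etree.parse() expects a file object or filename, so passing webfile.read() would be treated as a path.

        import urllib2
        from lxml import etree

        webfile = urllib2.urlopen(url)
        # Pass the file object itself; the HTML parser picks up the page's own encoding.
        root = etree.parse(webfile, parser=etree.HTMLParser(recover=True))
        body = root.xpath('/html/body')[0]

        # encoding=unicode keeps the intermediate text as a Python unicode string.
        txt = my_process_text(etree.tostring(body, encoding=unicode))

        output = etree.Element("out")
        output.text = txt
        # Encode exactly once, at the very end, when writing the result out.
        outputfile.write(etree.tostring(output, encoding='UTF-8', xml_declaration=True))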

    Read the article

  • Problem connecting to postgres with Kohana 3 database module on OS X Snow Leopard

    - by Bart Gottschalk
    Environment: Mac OS X 10.6 Snow Leopard PHP 5.3 Kohana 3.0.4 When I try to configure and use a connection to a postgresql database on localhost I get the following error: ErrorException [ Warning ]: mysql_connect(): [2002] No such file or directory (trying to connect via unix:///var/mysql/mysql.sock) Here is the configuration of the database in /modules/database/config/database.php (note the third instance named 'pgsqltest') return array ( 'default' => array ( 'type' => 'mysql', 'connection' => array( /** * The following options are available for MySQL: * * string hostname * string username * string password * boolean persistent * string database * * Ports and sockets may be appended to the hostname. */ 'hostname' => 'localhost', 'username' => FALSE, 'password' => FALSE, 'persistent' => FALSE, 'database' => 'kohana', ), 'table_prefix' => '', 'charset' => 'utf8', 'caching' => FALSE, 'profiling' => TRUE, ), 'alternate' => array( 'type' => 'pdo', 'connection' => array( /** * The following options are available for PDO: * * string dsn * string username * string password * boolean persistent * string identifier */ 'dsn' => 'mysql:host=localhost;dbname=kohana', 'username' => 'root', 'password' => 'r00tdb', 'persistent' => FALSE, ), 'table_prefix' => '', 'charset' => 'utf8', 'caching' => FALSE, 'profiling' => TRUE, ), 'pgsqltest' => array( 'type' => 'pdo', 'connection' => array( /** * The following options are available for PDO: * * string dsn * string username * string password * boolean persistent * string identifier */ 'dsn' => 'mysql:host=localhost;dbname=pgsqltest', 'username' => 'postgres', 'password' => 'dev1234', 'persistent' => FALSE, ), 'table_prefix' => '', 'charset' => 'utf8', 'caching' => FALSE, 'profiling' => TRUE, ), ); And here is the code to create the database instance, create a query and execute the query: $pgsqltest_db = Database::instance('pgsqltest'); $query = DB::query(Database::SELECT, 'SELECT * FROM test')->execute(); I'm continuing to research a solution for this error but thought I'd ask to see if someone else has already found a solution. Any ideas are welcome. One other note is that I know my build of PHP can access this postgresql db since I'm able to manage the db using phpPgAdmin. But I have yet to determine what phpPgAdmin is doing differently to connect to the db than what Kohana 3 is attempting. Bart
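
    One detail that stands out (an observation plus a sketch, not from the original post): the 'pgsqltest' group points PDO at the MySQL driver, and the mysql_connect error suggests the 'default' (MySQL) group is the one actually being used. A PostgreSQL PDO DSN would look like this:

        'pgsqltest' => array(
            'type'       => 'pdo',
            'connection' => array(
                // 'pgsql:' selects PDO's PostgreSQL driver; 'mysql:' selects MySQL.
                'dsn'        => 'pgsql:host=localhost;dbname=pgsqltest',
                'username'   => 'postgres',
                'password'   => 'dev1234',
                'persistent' => FALSE,
            ),
            'table_prefix' => '',
            'charset'      => 'utf8',
            'caching'      => FALSE,
            'profiling'    => TRUE,
        ),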

    Read the article

  • Problem with ColdFusion communicating with MySQL database

    - by Greg
    Hi, I have been working to migrate a non-profit website from a local server (running Windows XP) to a GoDaddy hosting account (running Linux). Most of the pages are written in ColdFusion. Things have gone smoothly, up until this point. There is a flash form within the site (see this page: http://www.preservenet.cornell.edu/employ/submitjob.cfm) which, when completed, takes the visitor to this page: submitjobaction.cfm . I'm not quite sure what to make of this error, since I copied exactly what had been in the old MySQL database, and the .cfm files are exactly as they had been when they worked on the old server. Am I missing something? Below is the code from the database that the error seems to be referring to. When I change "Positionlat" to some default value in the database as it suggests in the error, it says that another field needs a default value, and it's a neverending chain of errors as I try to correct it. This is probably a stupid error that I'm missing, but I've been working at it for days and can't find what I'm missing. I would really appreciate any help. Thanks! -Greg DROP TABLE IF EXISTS employopp; CREATE TABLE employopp ( POSTID int(10) NOT NULL auto_increment, USERID varchar(10) collate latin1_general_ci default NULL, STATUS varchar(10) collate latin1_general_ci NOT NULL default 'ACTIVE', TYPE varchar(50) collate latin1_general_ci default 'professional', JOBTITLE varchar(70) collate latin1_general_ci default NULL, NUMBER varchar(30) collate latin1_general_ci default NULL, SALARY varchar(40) collate latin1_general_ci default NULL, ORGNAME varchar(70) collate latin1_general_ci default NULL, DEPTNAME varchar(70) collate latin1_general_ci default NULL, ORGDETAILS mediumtext character set utf8 collate utf8_unicode_ci, ORGWEBSITE varchar(200) collate latin1_general_ci default NULL, ADDRESS varchar(60) collate latin1_general_ci default 'none given', ADDRESS2 varchar(60) collate latin1_general_ci default NULL, CITY varchar(30) collate latin1_general_ci default NULL, STATE varchar(30) collate latin1_general_ci default NULL, COUNTRY varchar(3) collate latin1_general_ci default 'USA', POSTALCODE varchar(10) collate latin1_general_ci default NULL, EMAIL varchar(75) collate latin1_general_ci default NULL, NOMAIL varchar(5) collate latin1_general_ci default NULL, PHONE varchar(20) collate latin1_general_ci default NULL, FAX varchar(20) collate latin1_general_ci default NULL, WEBSITE varchar(200) collate latin1_general_ci default NULL, POSTDATE varchar(10) collate latin1_general_ci default NULL, POSTUNTIL varchar(20) collate latin1_general_ci default 'select date', POSTUNTILFILLED varchar(20) collate latin1_general_ci NOT NULL default 'until filled', texteHTML mediumtext character set utf8 collate utf8_unicode_ci, HOWTOAPPLY mediumtext character set utf8 collate utf8_unicode_ci, CONFIRSTNM varchar(30) collate latin1_general_ci default NULL, CONLASTNM varchar(60) collate latin1_general_ci default NULL, POSITIONCITY varchar(30) collate latin1_general_ci default NULL, POSITIONSTATE varchar(30) collate latin1_general_ci default NULL, POSITIONCOUNTRY varchar(3) collate latin1_general_ci default 'USA', POSITIONLAT varchar(50) collate latin1_general_ci NOT NULL, POSITIONLNG varchar(50) collate latin1_general_ci NOT NULL, PRIMARY KEY (POSTID) ) ENGINE=MyISAM AUTO_INCREMENT=2007 DEFAULT CHARSET=latin1 COLLATE=latin1_general_ci;
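
    A guess at the cause, with a sketch (an assumption, since the error text itself is not quoted in the question): the chain of "needs a default value" complaints is typical of MySQL running in strict mode on the new host while the old server was lenient, combined with an INSERT that skips NOT NULL columns such as POSITIONLAT. Either supply those columns from submitjobaction.cfm, or give them explicit defaults:

        ALTER TABLE employopp
          MODIFY POSITIONLAT varchar(50) COLLATE latin1_general_ci NOT NULL DEFAULT '',
          MODIFY POSITIONLNG varchar(50) COLLATE latin1_general_ci NOT NULL DEFAULT '';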

    Read the article

  • How to use strange characters in a query string

    - by peter
    I am using Silverlight / ASP.NET and C#. Say I want to do this from Silverlight, for instance: // I have left out the quotes to show you literally what the characters // are that I want to use string password = vtakyoj#"5 string encodedPassword = HttpUtility.UrlEncode(encryptedPassword, Encoding.UTF8); // encoded password now = vtakyoj%23%225 URI uri = new URI("http://www.url.com/page.aspx@password=vtakyoj%23%225"); HttpPage.Window.Navigate(uri); If I debug and look at the value of uri it shows up as this (we are still inside the Silverlight app): http://www.url.com?password=vtakyoj%23"5 So the %22 has become a quote for some reason. If I then debug inside the page.aspx code (which of course is ASP.NET), the value of Request["password"] is actually this: vtakyoj#"5 which is the original value. How does that work? I would have thought that I would have to call HttpUtility.UrlDecode(Request["pswd"], Encoding.UTF8) to get the original value. Hope this makes sense? Thanks.
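
    A sketch of how this usually shakes out (an assumption, not from the original post): the Uri debugger display shows the unescaped form, and ASP.NET URL-decodes query-string values before they reach Request[...], so the value only needs to be encoded once on the way out and never decoded by hand on the way in.

        // Silverlight side: encode the value once and put it after '?', not '@'.
        // (HtmlPage lives in System.Windows.Browser.)
        string password = "vtakyoj#\"5";
        string url = "http://www.url.com/page.aspx?password="
                     + Uri.EscapeDataString(password);          // -> vtakyoj%23%225
        HtmlPage.Window.Navigate(new Uri(url));

        // ASP.NET side (page.aspx code-behind): already decoded, no UrlDecode needed.
        string original = Request["password"];                  // vtakyoj#"5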

    Read the article

  • MemoryStream, XmlTextWriter and Warning 4 CA2202 : Microsoft.Usage

    - by rasx
    The Run Code Analysis command in Visual Studio 2010 Ultimate returns a warning when seeing a certain pattern with MemoryStream and XmlTextWriter. This is the warning: Warning 7 CA2202 : Microsoft.Usage : Object 'ms' can be disposed more than once in method 'KinteWritePages.GetXPathDocument(DbConnection)'. To avoid generating a System.ObjectDisposedException you should not call Dispose more than one time on an object.: Lines: 421 C:\Visual Studio 2010\Projects\Songhay.DataAccess.KinteWritePages\KinteWritePages.cs 421 Songhay.DataAccess.KinteWritePages This is the form: static XPathDocument GetXPathDocument(DbConnection connection) { XPathDocument xpDoc = null; var ms = new MemoryStream(); try { using(XmlTextWriter writer = new XmlTextWriter(ms, Encoding.UTF8)) { using(DbDataReader reader = CommonReader.GetReader(connection, Resources.KinteRssSql)) { writer.WriteStartDocument(); writer.WriteStartElement("data"); do { while(reader.Read()) { writer.WriteStartElement("item"); for(int i = 0; i < reader.FieldCount; i++) { writer.WriteRaw(String.Format("<{0}>{1}</{0}>", reader.GetName(i), reader[i].ToString())); } writer.WriteFullEndElement(); } } while(reader.NextResult()); writer.WriteFullEndElement(); writer.WriteEndDocument(); writer.Flush(); ms.Position = 0; xpDoc = new XPathDocument(ms); } } } finally { ms.Dispose(); } return xpDoc; } The same kind of warning is produced for this form: XPathDocument xpDoc = null; using(var ms = new MemoryStream()) { using(XmlTextWriter writer = new XmlTextWriter(ms, Encoding.UTF8)) { using(DbDataReader reader = CommonReader.GetReader(connection, Resources.KinteRssSql)) { //... } } } return xpDoc; By the way, the following form produces another warning: XPathDocument xpDoc = null; var ms = new MemoryStream(); using(XmlTextWriter writer = new XmlTextWriter(ms, Encoding.UTF8)) { using(DbDataReader reader = CommonReader.GetReader(connection, Resources.KinteRssSql)) { //... } } return xpDoc; The above produces the warning: Warning 7 CA2000 : Microsoft.Reliability : In method 'KinteWritePages.GetXPathDocument(DbConnection)', object 'ms' is not disposed along all exception paths. Call System.IDisposable.Dispose on object 'ms' before all references to it are out of scope. C:\Visual Studio 2010\Projects\Songhay.DataAccess.KinteWritePages\KinteWritePages.cs 383 Songhay.DataAccess.KinteWritePages In addition to the following, what are my options?: Suppress warning CA2202. Suppress warning CA2000 and hope that Microsoft is disposing of MemoryStream (because Reflector is not showing me the source code). Rewrite my legacy code to recognize the wonderful XDocument and LINQ to XML.
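
    One more option (a sketch of the pattern the CA2202 documentation suggests, adapted to this method rather than taken from it): create the stream inside the try block and null out the outer reference once the writer owns it, so Dispose can only run once on any path.

        static XPathDocument GetXPathDocument(DbConnection connection)
        {
            XPathDocument xpDoc = null;
            MemoryStream ms = null;
            try
            {
                ms = new MemoryStream();
                using (var writer = new XmlTextWriter(ms, Encoding.UTF8))
                {
                    MemoryStream buffer = ms;
                    ms = null;   // from here on the writer's Dispose closes the stream
                    // ... write the document exactly as in the original method ...
                    writer.Flush();
                    buffer.Position = 0;
                    xpDoc = new XPathDocument(buffer);
                }
            }
            finally
            {
                if (ms != null)
                    ms.Dispose();
            }
            return xpDoc;
        }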

    Read the article

  • UTF-8 BOM in php response to mootools xmlhttprequest

    - by Jimmy
    Hi, I'm writing my first little AJAX-enabled Joomla component, using mootools. I have an xmlhttprequest that contacts my Joomla component, and the component returns a response - just plain text echoed by PHP, like echo 'Hello World!'; It's all working fine, except Wireshark tells me that the response is prepended with \357\273\277\357\273\277 when it gets read by the JavaScript on the client side. This shows up as a little square before the response in an alert box that the script shows. I don't explicitly set the encoding on the xmlhttprequest; the mootools docs say it defaults to UTF-8. What's the right way to handle this? Should I be setting the encoding on the request? The MIME type? Should the JavaScript get rid of it? I'm not planning to have any characters requiring UTF-8 in the response, so plain old ASCII would be OK for me too. Thanks
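
    A likely cause, with a sketch (an assumption, not from the original post): \357\273\277 is the UTF-8 byte order mark (EF BB BF), and seeing it twice usually means two of the PHP files involved were saved as "UTF-8 with BOM". The real fix is to re-save those files without a BOM; as a stopgap the component can strip it from its own output:

        <?php
        // Buffer the output and remove any leading BOMs before sending it.
        ob_start();
        echo 'Hello World!';
        $out = ob_get_clean();
        echo preg_replace('/^(\xEF\xBB\xBF)+/', '', $out);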

    Read the article

  • OneToOne JPA / Hibernate eager loading cause N+1 select

    - by Alexandre Lavoie
    I created a method to have multilingual text on different objects without creating field for each languages or tables for each objects types. Now the only problem I've got is N+1 select queries when doing a simple loading. Tables schema : CREATE TABLE `testentities` ( `keyTestEntity` int(11) NOT NULL, `keyMultilingualText` int(11) NOT NULL, PRIMARY KEY (`keyTestEntity`) ) ENGINE=MyISAM AUTO_INCREMENT=0 DEFAULT CHARSET=utf8; CREATE TABLE `common_multilingualtexts` ( `keyMultilingualText` int(11) NOT NULL auto_increment, PRIMARY KEY (`keyMultilingualText`) ) ENGINE=MyISAM AUTO_INCREMENT=0 DEFAULT CHARSET=utf8; CREATE TABLE `common_multilingualtexts_values` ( `languageCode` varchar(5) NOT NULL, `keyMultilingualText` int(11) NOT NULL, `value` text, PRIMARY KEY (`languageCode`,`keyMultilingualText`) ) ENGINE=MyISAM DEFAULT CHARSET=utf8; MultilingualText.java @Entity @Table(name = "common_multilingualtexts") public class MultilingualText implements Serializable { private Integer m_iKeyMultilingualText; private Map<String, String> m_lValues = new HashMap<String, String>(); public void setKeyMultilingualText(Integer p_iKeyMultilingualText) { m_iKeyMultilingualText = p_iKeyMultilingualText; } @Id @GeneratedValue @Column(name = "keyMultilingualText") public Integer getKeyMultilingualText() { return m_iKeyMultilingualText; } public void setValues(Map<String, String> p_lValues) { m_lValues = p_lValues; } @ElementCollection(fetch = FetchType.EAGER) @CollectionTable(name = "common_multilingualtexts_values", joinColumns = @JoinColumn(name = "keyMultilingualText")) @MapKeyColumn(name = "languageCode") @Column(name = "value") public Map<String, String> getValues() { return m_lValues; } public void put(String p_sLanguageCode, String p_sValue) { m_lValues.put(p_sLanguageCode,p_sValue); } public String get(String p_sLanguageCode) { if(m_lValues.containsKey(p_sLanguageCode)) { return m_lValues.get(p_sLanguageCode); } return null; } } And it is used like this on a object (having a foreign key to the multilingual text) : @Entity @Table(name = "testentities") public class TestEntity implements Serializable { private Integer m_iKeyEntity; private MultilingualText m_oText; public void setKeyEntity(Integer p_iKeyEntity) { m_iKeyEntity = p_iKeyEntity; } @Id @GeneratedValue @Column(name = "keyEntity") public Integer getKeyEntity() { return m_iKeyEntity; } public void setText(MultilingualText p_oText) { m_oText = p_oText; } @OneToOne(cascade = CascadeType.ALL) @JoinColumn(name = "keyText") public MultilingualText getText() { return m_oText; } } Now, when doing a simple HQL query : from TestEntity, I get a query selecting TestEntity's and one query for each MultilingualText that need to be loaded on each TestEntity. I've searched a lot and found absolutely no solutions. I have tested : @Fetch(FetchType.JOIN) optional = false @ManyToOne instead of @OneToOne Now I am out of idea!
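
    Two things that might help (a sketch, not from the original post): the Hibernate annotation for join fetching is @Fetch(FetchMode.JOIN) from org.hibernate.annotations (FetchType is the JPA enum), and a plain HQL query such as "from TestEntity" ignores the mapping's fetch strategy anyway, so the join has to be requested in the query itself:

        // Assumes an open org.hibernate.Session named session.
        List<TestEntity> entities = session.createQuery(
                "select distinct e from TestEntity e " +
                "left join fetch e.text t " +
                "left join fetch t.values")   // pull the element collection in the same query
            .list();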

    Read the article

  • VB.NET, make a function with return type generic ?

    - by Quandary
    Currently I have written a function to deserialize XML as seen below. How do I change it so I don't have to replace the type every time I want to deserialize another object type? The current object type is cToolConfig. How do I make this function generic? Public Shared Function DeserializeFromXML(ByRef strFileNameAndPath As String) As XMLhandler.XMLserialization.cToolConfig Dim deserializer As New System.Xml.Serialization.XmlSerializer(GetType(cToolConfig)) Dim srEncodingReader As IO.StreamReader = New IO.StreamReader(strFileNameAndPath, System.Text.Encoding.UTF8) Dim ThisFacility As cToolConfig ThisFacility = DirectCast(deserializer.Deserialize(srEncodingReader), cToolConfig) srEncodingReader.Close() srEncodingReader.Dispose() Return ThisFacility End Function Public Shared Function DeserializeFromXML1(ByRef strFileNameAndPath As String) As System.Collections.Generic.List(Of XMLhandler.XMLserialization.cToolConfig) Dim deserializer As New System.Xml.Serialization.XmlSerializer(GetType(System.Collections.Generic.List(Of cToolConfig))) Dim srEncodingReader As IO.StreamReader = New IO.StreamReader(strFileNameAndPath, System.Text.Encoding.UTF8) Dim FacilityList As System.Collections.Generic.List(Of cToolConfig) FacilityList = DirectCast(deserializer.Deserialize(srEncodingReader), System.Collections.Generic.List(Of cToolConfig)) srEncodingReader.Close() srEncodingReader.Dispose() Return FacilityList End Function
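
    A possible generic version (a sketch, not the original code): make the element type a type parameter so one function covers cToolConfig, List(Of cToolConfig), or anything else XmlSerializer can handle.

        Public Shared Function DeserializeFromXML(Of T)(ByVal strFileNameAndPath As String) As T
            Dim deserializer As New System.Xml.Serialization.XmlSerializer(GetType(T))
            Using reader As New IO.StreamReader(strFileNameAndPath, System.Text.Encoding.UTF8)
                Return DirectCast(deserializer.Deserialize(reader), T)
            End Using
        End Function

        ' Usage:
        '   Dim config = DeserializeFromXML(Of cToolConfig)("tool.xml")
        '   Dim list = DeserializeFromXML(Of System.Collections.Generic.List(Of cToolConfig))("tools.xml")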

    Read the article

  • CakePHP Bake association problem

    - by Apu
    I have only two tables in my database with a one-to-many relationship between them (user hasMany messages) and am trying to get basic CRUD functionality going. Bake detects the associations correctly and specifies them correctly inside the model classes, but in controllers and views it looks like Cake doesn't know anything about those associations -- I don't even get a select tag for user_id when I go add a new message. Has anyone come across this problem before? What can I be doing wrong? Table structure appears to be fine: CREATE TABLE users ( id int(11) NOT NULL AUTO_INCREMENT, username varchar(255) NOT NULL, `password` varchar(255) NOT NULL, email varchar(255) NOT NULL, created datetime NOT NULL, modified datetime NOT NULL, PRIMARY KEY (id) ) ENGINE=MyISAM DEFAULT CHARSET=utf8; CREATE TABLE IF NOT EXISTS `messages` ( `id` int(11) NOT NULL AUTO_INCREMENT, `user_id` int(11) NOT NULL, `content` varchar(255) NOT NULL, `created` datetime NOT NULL, `modified` datetime NOT NULL, PRIMARY KEY (`id`) ) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=1 ;
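
    One thing to verify (a sketch assuming CakePHP 1.3-era conventions, not from the original post): the user_id select only appears if the association is declared on the model and the add/edit actions hand the view a list of users under the pluralized name the FormHelper expects.

        // app/models/message.php
        class Message extends AppModel {
            public $belongsTo = array('User');
        }

        // In MessagesController::add() (and edit()):
        $users = $this->Message->User->find('list');
        $this->set(compact('users'));
        // The baked add view then renders the select via: $this->Form->input('user_id');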

    Read the article

  • Zend Framework and UTF-8 characters (æøå)

    - by Randy Mayer
    Hi, I use Zend Framework and I have a problem with JSON and UTF-8. Output: \u00c3\u00ad\u00c4\u008d Ã­Ä Here is what I use... JavaScript (jQuery) contentType : "application/json; charset=utf-8", dataType : "json" Zend Framework $view->setEncoding('UTF-8'); $view->headMeta()->appendHttpEquiv('Content-Type', 'text/html;charset=utf-8'); header('Content-Type: application/json; charset=utf-8'); utf8_encode(); Zend_Json::encode Database resources.db.params.charset = "utf8" resources.db.params.driver_options.1002 = "SET NAMES utf8" resources.db.isDefaultTableAdapter = true Collation utf8_unicode_ci Type MyISAM Server PHP Version 5.2.6 What did I do wrong? Thank you for your reply!
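
    A guess at the cause, with a sketch (an assumption, not from the original post): \u00c3\u00ad is "í" that has been UTF-8 encoded twice (and \u00c4\u008d is a double-encoded "č"), which is what happens when utf8_encode() is applied to data that is already UTF-8, for example rows coming through a connection with SET NAMES utf8. Dropping the extra encoding step is usually enough:

        // In the controller action: the data is already UTF-8, so encode to JSON only.
        $payload = array('text' => $row->text);   // $row is a hypothetical DB result
        $this->getResponse()->setHeader('Content-Type', 'application/json; charset=utf-8', true);
        echo Zend_Json::encode($payload);          // no utf8_encode() anywhere in the chain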

    Read the article

  • Perl Encode - UK characters

    - by Phill Pafford
    This is a part-two follow-up to an earlier question. So I'm trying out the :encode functionality but having no luck at all. use Encode; use utf8; # Should print: iso-8859-15 print "Latin-9 Encoding: ".find_encoding("latin9")->name."\n"; my $encUK = encode("iso-8859-15", "UK €"); print "Encoded UK: ".$encUK."\n"; Results: Encoded UK: UK € Shouldn't the results be encoded? What am I doing wrong here? EDIT: Added the suggested use utf8; and now I get this: Encoded UK: UK ? pulling hair out now :/
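
    A sketch of what may be going on (not from the original post): with use utf8; the "€" literal is the character U+20AC, and encode() really does turn it into the single Latin-9 byte 0xA4; it only looks wrong because those Latin-9 bytes are then printed to a terminal that expects UTF-8. Inspecting the bytes shows the encoding worked:

        use strict;
        use warnings;
        use utf8;
        use Encode qw(encode);

        my $encUK = encode('iso-8859-15', "UK \x{20AC}");   # same as "UK €" under use utf8
        printf "Encoded UK bytes: %s\n",
            join ' ', map { sprintf '%02X', ord } split //, $encUK;
        # prints: 55 4B 20 A4  -> the euro sign became the single Latin-9 byte 0xA4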

    Read the article

  • What are the alternative ways to export excel file

    - by fealin
    I'm trying to generate an Excel file from my web page, but I have some issues with Turkish chars such as "I" and "g": when I open the file, some chars are incorrect ("I" appears as "Ä°"). Here is my code: gvExpRequests.DataSource = dsExpRequests; gvExpRequests.DataBind(); gvExpRequests.GridLines = GridLines.Both; Page.EnableViewState = false; Response.Clear(); Response.AddHeader("content-disposition", "attachment;filename=export.xls"); Response.ContentType = "application/ms-excel"; Response.ContentEncoding = Encoding.UTF8; Response.BinaryWrite(Encoding.UTF8.GetPreamble()); StringWriter yaz = new StringWriter(); HtmlTextWriter htw = new HtmlTextWriter(yaz); gvExpRequests.RenderControl(htw); I don't know what's wrong here; I have tried a lot of encodings but got the same results every time. I want to try something different to do this: are there any other ways to export an Excel file from a GridView?
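
    One thing the snippet never does is write the rendered HTML into the response. A sketch completing the flow (an assumption about the intended pipeline, not the original code):

        Response.Clear();
        Response.AddHeader("content-disposition", "attachment;filename=export.xls");
        Response.ContentType = "application/ms-excel";
        Response.ContentEncoding = Encoding.UTF8;
        Response.Charset = "utf-8";
        Response.BinaryWrite(Encoding.UTF8.GetPreamble());   // BOM helps Excel pick UTF-8

        StringWriter yaz = new StringWriter();
        gvExpRequests.RenderControl(new HtmlTextWriter(yaz));
        Response.Write(yaz.ToString());                      // actually send the grid's HTML
        Response.End();

    For a sturdier export, a real spreadsheet library (EPPlus or the Open XML SDK, for instance) sidesteps the HTML-as-.xls encoding guesswork entirely.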

    Read the article

  • Avoiding dispose of underlying stream

    - by danbystrom
    I'm attempting to mock some file operations. In the "real" object I have: StreamWriter createFile( string name ) { return new StreamWriter( Path.Combine( _outFolder, name ), false, Encoding.UTF8 ); } In the mock object I'd like to have: StreamWriter createFile( string name ) { var ms = new MemoryStream(); _files.Add( Path.Combine( _outFolder, name ), ms ); return new StreamWriter( ms, Encoding.UTF8 ); } where _files is a dictionary to store created files for later inspection. However, when the consumer closes the StreamWriter, it also disposes the MemoryStream... :-( Any thoughts on how to pursue this?
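
    One possible approach (a sketch, not from the original post; _files and _outFolder are the fields from the question): hand the StreamWriter a MemoryStream whose Dispose is a no-op, so the buffer stays readable after the consumer closes the writer.

        class InspectableMemoryStream : MemoryStream
        {
            protected override void Dispose(bool disposing)
            {
                // Intentionally do nothing: keep the stream open so the test can
                // still read Position/Length after the StreamWriter is disposed.
            }
        }

        StreamWriter CreateFile(string name)
        {
            var ms = new InspectableMemoryStream();
            _files.Add(Path.Combine(_outFolder, name), ms);
            return new StreamWriter(ms, Encoding.UTF8);
        }

    On .NET 4.5 or later, the StreamWriter constructor that takes a leaveOpen flag achieves the same thing without a subclass.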

    Read the article

  • iphone's nsxmlparser parsing RSS causes encoding problems

    - by Tankista
    Hi, I'm working on a simple RSS reader. The reader loads data from the internet via this code: NSXMLParser *rss = [[NSXMLParser alloc] initWithURL:[NSURL URLWithString:@"http://twitter.com/statuses/user_timeline/50405236.rss"]]; My problem is with encoding. The RSS 2.0 file is supposed to be UTF-8 encoded according to the encoding attribute in the XML file: <?xml version="1.0" encoding="utf-8"?> So when I download the URL's content I get text truncated after the first occurrence of a char with diacritics, for example: l š c t ž ý á í é, etc. I tried to solve the problem by downloading the URL as a UTF-8 string, using this code: NSString *rssXmlString = [NSString stringWithContentsOfURL: [NSURL URLWithString: @"http://www.macblog.sk/rss.xml"] encoding:NSUTF8StringEncoding error: nil]; NSData *rssXmlData = [rssXmlString dataUsingEncoding: NSUTF8StringEncoding]; It did not help. Thanks for your responses.
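
    Two things worth trying (a sketch, not from the original post): feed the parser raw NSData so it honors the document's own encoding declaration, and make sure the delegate appends in foundCharacters:, since that callback can fire several times per element and often splits right at non-ASCII characters.

        // Where the parser is created:
        NSURL *feedURL = [NSURL URLWithString:@"http://twitter.com/statuses/user_timeline/50405236.rss"];
        NSData *rssData = [NSData dataWithContentsOfURL:feedURL];
        NSXMLParser *rss = [[NSXMLParser alloc] initWithData:rssData];

        // In the parser delegate (currentText is an assumed NSMutableString property):
        - (void)parser:(NSXMLParser *)parser foundCharacters:(NSString *)string {
            [self.currentText appendString:string];   // append, never assign
        }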

    Read the article

  • Syncronize an SVN repo (svnsync) with encoding errors

    - by Hamish
    Is it possible to fix or bypass non-UTF-8-encoded svn:log records when synchronizing repositories with svnsync? Background: I'm in the process of taking over the maintenance of an open source module that is stored within a large (well over 10,000 revisions) Subversion (1.5.5) repository. I do not have admin access to the remote repository to dump/filter/load the module. The old repository is being discontinued and I am trying to sync the original submodule to my local (1.6+) repository with svnsync. For example: svnsync file://home/svn/temp-repo/ http://path.to.repo/modulename/ The problem is that the old repository didn't enforce UTF-8 encoding and I'm hitting errors like: svnsync: Cannot accept 'svn:log' property because it is not encoded in UTF-8 I can't modify the log property in the source repository, so I need to somehow modify or ignore the property value when the encoding is unknown/invalid. Any ideas? For example, is it possible to write a pre-revprop-change script to modify the log property in transit?
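
    One option to look at (an assumption about client versions, not from the original post): Subversion 1.7 added a --source-prop-encoding option to svnsync that tells it which encoding the source's revision properties are really in, so it can convert them to UTF-8 on the way through. With a 1.7-or-newer client it would look roughly like:

        # Assumes the source log messages are actually Latin-1; adjust to taste.
        svnsync init file:///home/svn/temp-repo http://path.to.repo/modulename/
        svnsync sync --source-prop-encoding ISO-8859-1 file:///home/svn/temp-repo

    The destination's pre-revprop-change hook, by contrast, only gets to accept or reject each property change; it cannot rewrite the value in transit.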

    Read the article

  • C# web request with POST encoding question

    - by rlandster
    On the MSDN site there is an example of some C# code that shows how to make a web request with POST'ed data. Here is an excerpt of that code: WebRequest request = WebRequest.Create ("http://www.contoso.com/PostAccepter.aspx "); request.Method = "POST"; string postData = "This is a test that posts this string to a Web server."; byte[] byteArray = Encoding.UTF8.GetBytes (postData); // (*) request.ContentType = "application/x-www-form-urlencoded"; request.ContentLength = byteArray.Length; Stream dataStream = request.GetRequestStream (); dataStream.Write (byteArray, 0, byteArray.Length); dataStream.Close (); WebResponse response = request.GetResponse (); ...more... The line marked (*) is the line that puzzles me. Shouldn't the data be encoded using the UrlEncode function rather than UTF8? Isn't that what application/x-www-form-urlencoded implies?
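
    A sketch of the distinction (not from the MSDN page; it assumes System.Web for HttpUtility and the request object from the sample): UrlEncode escapes each value so it can sit safely inside name=value pairs, while GetBytes turns the finished form string into the bytes that travel on the wire, so the two steps are complementary rather than alternatives.

        // Build the body: percent-encode each value, then turn the string into bytes.
        string comment = "a test & some UTF-8: €";
        string postData = "comment=" + HttpUtility.UrlEncode(comment, Encoding.UTF8);
        byte[] byteArray = Encoding.UTF8.GetBytes(postData);

        request.ContentType = "application/x-www-form-urlencoded";
        request.ContentLength = byteArray.Length;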

    Read the article

  • Perl latin-9? Unicode - need to add support

    - by Phill Pafford
    I have an application that is being expanded to the UK, and I will need to add support for Latin-9 Unicode. I have done some Googling but found nothing solid about what is involved in the process. Any tips? Here is some code (just the bits for the Unicode stuff): use Unicode::String qw(utf8 latin1 utf16); # How to call $encoded_txt = $self->unicode_encode($item->{value}); # Function part sub unicode_encode { shift() if ref($_[0]); my $toencode = shift(); return undef unless defined($toencode); Unicode::String->stringify_as("utf8"); my $unicode_str = Unicode::String->new(); # encode Perl UTF-8 string into latin1 Unicode::String # - currently only Basic Latin and Latin 1 Supplement # are supported here due to issues with Unicode::String . $unicode_str->latin1( $toencode ); ... Any help would be great, thanks.
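
    A possible simplification (a sketch, not the original function): the core Encode module already knows Latin-9 as iso-8859-15, so the Unicode::String plumbing and its Latin-1-only limitation can be sidestepped.

        use Encode qw(encode);

        sub unicode_encode {
            shift() if ref($_[0]);
            my $toencode = shift();
            return undef unless defined($toencode);
            # $toencode is assumed to be a Perl character string; the result is a
            # byte string in Latin-9 (iso-8859-15), which covers € and UK characters.
            return encode('iso-8859-15', $toencode);
        }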

    Read the article

< Previous Page | 7 8 9 10 11 12 13 14 15 16 17 18  | Next Page >