Search Results

Search found 22988 results on 920 pages for 'url encoding'.


  • Can I use asterisks in URLs?

    - by KajMagnus
    Are there any reasons I shouldn't use an asterisk (*) in a URL? Background: with asterisks, I could provide these nice and user-friendly (or what do you think?) URLs:

        example.com/some/folder/search-phrase*   searches for pages whose names start with "search-phrase", located in /some/folder/
        example.com/some/**/*search-phrase*      searches for any page with "search-phrase" anywhere in its name
        example.com/some/folder/*                lists all pages in /some/folder/ (rather than showing the /some/folder/index page)
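
    For what it's worth, the asterisk is a legal URI character: RFC 3986 lists * among the sub-delims allowed in path and query components, and common encoders leave it untouched. A quick Java check (a sketch; the printed result reflects java.net.URLEncoder's documented behavior):

        import java.net.URLEncoder;
        import java.nio.charset.StandardCharsets;

        public class AsteriskCheck {
            public static void main(String[] args) {
                // URLEncoder leaves ".", "-", "*", and "_" unencoded, so an
                // asterisk survives form-encoding unchanged.
                String encoded = URLEncoder.encode("search-phrase*", StandardCharsets.UTF_8);
                System.out.println(encoded); // prints: search-phrase*
            }
        }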

    Read the article

  • Directory structure and script arguments

    - by felwithe
    I'll often see a URL that looks something like this: site.com/articles/may/05/02/2011/article-name.php. Surely all of those subdirectories don't actually exist? It seems like it would be a huge redundancy, even if each directory held only an identical index file: to change anything you'd have to change every single copy. I guess my question is, is there some more elegant way that sites usually accomplish this? A sketch of the usual approach follows.
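
    Typically those directories don't exist: a rewrite rule or front controller maps every such URL onto one script, which parses the path itself. A minimal sketch of the idea as a Java servlet (the class name and mapping are hypothetical):

        import java.io.IOException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        // Mapped to /articles/* in web.xml, so every "subdirectory" URL
        // such as /articles/2011/05/02/article-name hits this one servlet.
        public class ArticleServlet extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws IOException {
                // getPathInfo() returns the part after /articles, e.g. "/2011/05/02/article-name"
                String[] parts = req.getPathInfo().substring(1).split("/");
                String year = parts[0], month = parts[1], day = parts[2], slug = parts[3];
                resp.getWriter().printf("Article %s from %s-%s-%s%n", slug, year, month, day);
            }
        }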

    Read the article

  • Why does text from Assembly.GetManifestResourceStream() start with three junk characters?

    - by flipdoubt
    I have a SQL file added to my VS.NET 2008 project as an embedded resource. Whenever I use the following code to read the file's content, the string returned always starts with three junk characters and then the text I expect. I assume this has something to do with the Encoding.Default I am using, but that is just a guess. Why does this text keep showing up? Should I just trim off the first three characters, or is there a more informed approach?

        public string GetUpdateRestoreSchemaScript()
        {
            var type = GetType();
            var a = Assembly.GetAssembly(type);
            var script = "UpdateRestoreSchema.sql";
            var resourceName = String.Concat(type.Namespace, ".", script);
            using (Stream stream = a.GetManifestResourceStream(resourceName))
            {
                byte[] buffer = new byte[stream.Length];
                stream.Read(buffer, 0, buffer.Length);
                // UPDATE: Should be Encoding.UTF8
                return Encoding.Default.GetString(buffer);
            }
        }

    Update: I now know that my code works as expected if I simply change the last line to return a UTF-8 encoded string. It will always be true for this embedded file, but will it always be true? Is there a way to test any buffer to determine its encoding?
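
    The three junk characters are almost certainly the UTF-8 byte order mark (bytes EF BB BF) decoded with the wrong codepage. There is no fully reliable way to detect an arbitrary buffer's encoding, but checking for a BOM covers the common cases. A minimal sketch of that check (in Java here, for illustration):

        import java.nio.charset.Charset;
        import java.nio.charset.StandardCharsets;

        public final class BomSniffer {
            // Returns the charset indicated by a byte order mark, or null if none.
            // Note the converse does not hold: absence of a BOM proves nothing,
            // and only heuristics remain in that case.
            static Charset fromBom(byte[] b) {
                if (b.length >= 3 && b[0] == (byte) 0xEF && b[1] == (byte) 0xBB && b[2] == (byte) 0xBF)
                    return StandardCharsets.UTF_8;
                if (b.length >= 2 && b[0] == (byte) 0xFE && b[1] == (byte) 0xFF)
                    return StandardCharsets.UTF_16BE;
                if (b.length >= 2 && b[0] == (byte) 0xFF && b[1] == (byte) 0xFE)
                    return StandardCharsets.UTF_16LE;
                return null;
            }
        }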

    Read the article

  • Perl strings internals

    - by n0rd
    How are Perl strings represented internally? What encoding is used? How do I handle different encodings properly? I've been using Perl for quite a long time, but it didn't involve much string handling in different encodings, and whenever I encountered a minor problem that had something to do with encodings I usually resorted to some shamanic actions. Until this moment I thought of Perl strings as sequences of bytes, which fit my tasks pretty well. Now I need to do some processing of a UTF-8 encoded file and here the trouble starts. First, I read the file into a string like this:

        open(my $in, '<', $ARGV[0]) or die "cannot open file $ARGV[0] for reading";
        binmode($in, ':utf8');
        my $contents;
        {
            local $/;
            $contents = <$in>;
        }
        close($in);

    then simply print it:

        print $contents;

    And I get two things: a warning, "Wide character in print at <scriptname> line <n>", and garbage in the console. So I can conclude that Perl strings have a concept of a "character" that can be "wide" or not, but when printed these "wide" characters are represented in the console as multiple bytes, not as a single "character". (I wonder now why all my previous experience with binary files worked as I expected, without any "character" issues.) Why, then, do I see garbage in the console? If Perl stores strings as characters in some known encoding, I don't think it is a big problem to find out the console encoding and print the text properly. (I use Windows, BTW.) If Perl stores strings as multibyte sequences (e.g. using the same UTF-8 encoding), why is it done this way? From my C experience, handling multibyte strings is PAIN.
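
    The usual Perl-side fix for the console garbage is to declare an output encoding, e.g. binmode(STDOUT, ':encoding(cp950)') on a Traditional Chinese Windows console, so Perl's internal characters are re-encoded on the way out. The same idea sketched in Java (the console codepage is an assumption here, with "Big5" standing in for CP950):

        import java.io.PrintStream;

        public class ConsoleOut {
            public static void main(String[] args) throws Exception {
                String s = "\u7cfb\u7d71"; // two characters (not bytes) meaning "system"
                // Without an explicit charset the runtime encodes with a default
                // that may not match the console, producing mojibake: the
                // analogue of Perl's "Wide character in print" situation.
                PrintStream out = new PrintStream(System.out, true, "Big5");
                out.println(s);
            }
        }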

    Read the article

  • How is a relative JMP (x86) implemented in an Assembler?

    - by Pindatjuh
    While building my assembler for the x86 platform I encountered some problems encoding the JMP instruction:

        enc      instruction   size in bytes
        EB cb    JMP rel8      2
        E9 cw    JMP rel16     4  (because of the 0x66 16-bit operand-size prefix)
        E9 cd    JMP rel32     5

    (from my favourite x86 instruction website, http://siyobik.info/index.php?module=x86&id=147). All are relative jumps, where the size of each encoding (opcode plus operand) is in the third column. Now my original (and therefore flawed) design reserved the maximum space (5 bytes) for each instruction. The operand is not yet known, because it is a jump to a yet-unknown location, so I implemented a "rewrite" mechanism that writes the operand into the correct location in memory once the jump target is known, and fills the rest with NOPs. This is a somewhat serious concern in tight loops. Now my problem is with the following situation:

        b: XXX
        c: JMP a
        e: XXX
           ...
           XXX
        d: JMP b
        a: XXX

    (where XXX is any instruction, depending on the to-be-assembled program). The problem is that I want the smallest possible encoding for a JMP instruction (and no NOP filling). I have to know the size of the instruction at c before I can calculate the relative distance between a and b for the operand at d. The same applies for the JMP at c: it needs to know the size of d before it can calculate the relative distance between e and a. How do existing assemblers implement this, or how would you implement it? Here is what I am thinking, which would solve the problem: first encode all the instructions between the JMP and its target, and if this region contains a variable-sized opcode, use the maximum size (i.e. 5 for JMP). Then, under some conditions, a JMP is oversized (because it might fit a smaller encoding), so another pass searches for oversized JMPs, shrinks them, and moves all following instructions up; absolute branching instructions (i.e. external CALLs) are fixed up after this pass completes. I wonder whether this is an over-engineered solution, which is why I ask this question.
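
    Most assemblers resolve this with iterative branch relaxation: assume every jump takes its smallest form, lay out addresses, widen any jump whose displacement no longer fits, and repeat until a fixed point is reached (sizes only ever grow, so the loop terminates). A toy sketch of that loop in Java, with a hypothetical instruction model and only the rel8/rel32 forms for brevity:

        import java.util.List;

        class Instr {
            int size;     // current encoded size in bytes
            Instr target; // non-null for relative jumps
            int addr;     // assigned address, recomputed each pass
        }

        class Relaxer {
            static void relax(List<Instr> prog) {
                boolean changed = true;
                while (changed) {
                    changed = false;
                    int pc = 0;
                    for (Instr i : prog) { i.addr = pc; pc += i.size; }
                    for (Instr i : prog) {
                        if (i.target == null || i.size == 5) continue; // not a jump, or already rel32
                        // the rel8 displacement is measured from the end of the jump
                        int rel = i.target.addr - (i.addr + i.size);
                        if (rel < -128 || rel > 127) { i.size = 5; changed = true; }
                    }
                }
            }
        }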

    Read the article

  • How to work around the Python "WindowsError messages are not properly encoded" problem?

    - by Victor Lin
    When Python raises a WindowsError, the message of the exception is always in the OS-native encoding. For example:

        import os
        os.remove('does_not_exist.file')

    Here we get an exception:

        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        WindowsError: [Error 2] ???????????: 'does_not_exist.file'

    As the language of my Windows 7 is Traditional Chinese, the default error message I get is in Big5 encoding (also known as CP950):

        >>> try:
        ...     os.remove('abc.file')
        ... except WindowsError, value:
        ...     print value.args
        ...
        (2, '\xa8t\xb2\xce\xa7\xe4\xa4\xa3\xa8\xec\xab\xfc\xa9w\xaa\xba\xc0\xc9\xae\xd7\xa1C')
        >>>

    As you can see, the error message is not Unicode, so I get another encoding exception when I try to print it. The issue can be found in the Python bug tracker: http://bugs.python.org/issue1754. The question is: how do I work around this? How do I get the native encoding of a WindowsError? The version of Python I use is 2.6. Thanks.
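
    A common workaround is to decode the byte string with the ANSI codepage before printing; in Python that codepage can be obtained from locale.getpreferredencoding(). The decoding step itself, sketched in Java for illustration (CP950 is assumed, with "Big5" as its closest JRE charset):

        import java.nio.charset.Charset;

        public class NativeMessage {
            public static void main(String[] args) {
                // First bytes of the CP950 error message from the question.
                byte[] raw = { (byte) 0xA8, 0x74, (byte) 0xB2, (byte) 0xCE };
                // Decode with the native codepage to get a proper Unicode string
                // (likely the two characters meaning "system").
                String msg = new String(raw, Charset.forName("Big5"));
                System.out.println(msg);
            }
        }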

    Read the article

  • requesting ajax via HttpWebRequest

    - by Sami Abdelgadir Mohammed
    Hi guys: I'm writing a simple application that will download some data from a website so I can use it later for any purpose. The following request and response were copied from Firebug, exactly as the browser made them. When you request http://x5.travian.com.sa/ajax.php?f=k7&x=18&y=-186&xx=12&yy=-192 you get back a PHP response containing some data, but when I make the same request with HttpWebRequest I get wrong data (some unknown letters). Can anyone help me with that? Do I have to apply some decoding, or what? I would be so appreciative.

    Response headers:

        Server: nginx
        Date: Tue, 04 Jan 2011 23:03:49 GMT
        Content-Type: application/json; charset=UTF-8
        Transfer-Encoding: chunked
        Connection: keep-alive
        X-Powered-By: PHP/5.2.8
        Expires: Mon, 26 Jul 1997 05:00:00 GMT
        Last-Modified: Tue, 04 Jan 2011 23:03:49 GMT
        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma: no-cache
        Content-Encoding: gzip
        Vary: Accept-Encoding

    Request headers:

        Host: x5.travian.com.sa
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.13) Gecko/20101203 Firefox/3.6.13
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        Cookie: CAD=57878984%231292375897%230%230%23%230; T3E=%3DImYykTN2EzMmhjO5QTM2QDN2oDM1ITOyoDOxIjM4EDN5ITM6gjO4MDOxIWZyQWMipTZu9metl2ctl2c6MDNxADN6MDNxADNjMDNxADNjMDNxADN; orderby_b1=0; orderby_b=0; orderby2=0; orderby=0
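
    Note the Content-Encoding: gzip response header: the "unknown letters" are most likely the compressed body being read as text, so the response has to be decompressed before decoding. In .NET this is typically HttpWebRequest's AutomaticDecompression setting; the same idea sketched with Java's HttpURLConnection:

        import java.io.BufferedReader;
        import java.io.InputStream;
        import java.io.InputStreamReader;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.nio.charset.StandardCharsets;
        import java.util.zip.GZIPInputStream;

        public class GzipFetch {
            public static void main(String[] args) throws Exception {
                URL url = new URL("http://x5.travian.com.sa/ajax.php?f=k7&x=18&y=-186&xx=12&yy=-192");
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setRequestProperty("Accept-Encoding", "gzip");
                InputStream body = conn.getInputStream();
                // Unwrap gzip only if the server actually compressed the response.
                if ("gzip".equalsIgnoreCase(conn.getContentEncoding())) {
                    body = new GZIPInputStream(body);
                }
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(body, StandardCharsets.UTF_8))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        System.out.println(line);
                    }
                }
            }
        }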

    Read the article

  • Foreign/accented characters in sql query

    - by FromCanada
    I'm using Java and Spring's JdbcTemplate class to build an SQL query in Java that queries a Postgres database. However, I'm having trouble executing queries that contain foreign/accented characters. For example, the (trimmed) code:

        JdbcTemplate select = new JdbcTemplate(postgresDatabase);
        String query = "SELECT id FROM province WHERE name = 'Ontario';";
        Integer id = select.queryForObject(query, Integer.class);

    will retrieve the province id, but if instead I use name = 'Québec' the query fails to return any results (this value is in the database, so the problem isn't that it's missing). I believe the source of the problem is that the database I am required to use has its default client encoding set to SQL_ASCII, which (according to the docs) prevents automatic character-set conversions. (The Java environment's encoding is set to UTF-8, while I'm told the database uses LATIN1 / ISO-8859-1.) I was able to indicate the encoding manually when the result sets contained values with foreign characters, as a solution to a previous problem of a similar nature:

        String provinceName = new String(resultSet.getBytes("name"), "ISO-8859-1");

    But now that the foreign characters are part of the query itself, this approach hasn't been successful. (I suppose that since the query has to be stored in a String before being executed anyway, breaking it down into bytes and then changing the encoding only muddles the characters further.) Is there a way around this without having to change the properties of the database or reconstruct it? PostScript: I found a function on StackOverflow while making up a title; it didn't seem to work. (I might not have used it correctly, but even if it did work it doesn't seem like it could be the best solution.)
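
    One thing worth trying before touching the database: pass the accented value as a bound parameter instead of splicing it into the SQL text, so the JDBC driver (rather than a literal inside the query string) carries the value to the server. A sketch of the JdbcTemplate form (the surrounding class is hypothetical):

        import org.springframework.jdbc.core.JdbcTemplate;

        public class ProvinceDao {
            private final JdbcTemplate select;

            public ProvinceDao(JdbcTemplate select) {
                this.select = select;
            }

            // "Québec" travels as a typed parameter, not as characters
            // embedded in the SQL string.
            public Integer findId(String name) {
                return select.queryForObject(
                        "SELECT id FROM province WHERE name = ?",
                        new Object[] { name },
                        Integer.class);
            }
        }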

    Read the article

  • Use HttpGet with illegal characters in the URL

    - by kaciula
    I am trying to use DefaultHttpClient and HttpGet to make a request to a web service. Unfortunately the web service URL contains illegal characters such as { (e.g. domain.com/service/{username}). It's obvious that the web service naming isn't well designed, but I can't change it. When I call HttpGet(url), I get an error saying the URL contains an illegal character (namely { or }). If I encode the URL first, there is no error, but the request goes to a different URL where there is nothing. Although the URL contains illegal characters, it works from a browser; the HttpGet implementation just doesn't let me use it. What should I do, or what should I use instead, to avoid this problem?
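
    One possible workaround is to percent-encode only the path, turning { into %7B and } into %7D while leaving the URL structure intact; a compliant server decodes those back before matching the resource. Java's multi-argument URI constructor performs exactly this per-component quoting (domain.com and the path here are the question's placeholders):

        import java.net.URI;
        import org.apache.http.client.methods.HttpGet;

        public class BraceUrl {
            public static void main(String[] args) throws Exception {
                // The multi-argument constructor quotes characters that are
                // illegal in each component; the path becomes /service/%7Busername%7D.
                URI uri = new URI("http", "domain.com", "/service/{username}", null);
                HttpGet get = new HttpGet(uri); // HttpGet also accepts a ready-made URI
                System.out.println(get.getURI());
            }
        }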

    Read the article

  • Firefox logs invalid URL?

    - by thanks for help
    I'm writing an extension for Firefox. Using dom.location to keep track of visited search-results pages, I'm getting this URL: http://www.google.com/search?hl=en&source=hp&q=hi&aq=f&aqi=&oq=&fp=642c18fb4411ca2e . If you click it, the Google search results for "hi" should come up. You'll only know that from the title bar, because the rest of the page won't load. This happens with any Google search. Oddly enough, if you cut part of it off, say http://www.google.com/search?hl=en&source=hp&q=hi , it works! But Googling "hi" myself does give me a longish URL: http://www.google.com/#hl=en&source=hp&q=hi&aq=f&aqi=&oq=&fp=db658cc5049dc510 . I know for a fact that the first time that URL was visited the page loaded; I did it myself. Can anyone make sense of this? I just tried my experiment again, this time saving the original URL from the location bar. It turns out dom.location.href gives a different value. How is this happening?

    Original:

        http://www.google.com/#hl=en&source=hp&q=hi&aq=f&aqi=&oq=&fp=642c18fb4411ca2e

    dom.location.href:

        http://www.google.com/search?hl=en&source=hp&q=hi&aq=f&aqi=&oq=&fp=642c18fb4411ca2e

    The snippet below comes directly from Mozilla, with the matching and alerts my own. I apologize for not posting the code earlier.

        window.addEventListener("load", function() { myExtension.init(); }, false);

        var myExtension = {
            init: function() {
                var appcontent = document.getElementById("appcontent"); // browser
                if (appcontent)
                    appcontent.addEventListener("DOMContentLoaded", myExtension.onPageLoad, true);
                var messagepane = document.getElementById("messagepane"); // mail
                if (messagepane)
                    messagepane.addEventListener("load", function() { myExtension.onPageLoad(); }, true);
            },

            onPageLoad: function(aEvent) {
                var doc = aEvent.originalTarget; // the document that fired "onload"
                // doc.location is a Location object; it can be used to run
                // this code on certain pages only.
                var url = doc.location.href;
                if (url.match(/(?:p|q)(?:=)([^%]*)/)) {
                    alert("MATCH " + url);
                    resultsPages.push(url);
                } else {
                    alert(url);
                }
            }
        };

    Read the article

  • NSString to NSData Failing in Encoding

    - by Travis
    I'm trying to use NSXMLParser to parse ISO-8859-1 data. Using Apple's own example for parsing ISO-8859-1, I have the following:

        NSString *xmlFilePath = [[NSBundle mainBundle] pathForResource:sampleFileName ofType:@"xml"];
        NSString *xmlFileContents = [NSString stringWithContentsOfFile:xmlFilePath
                                                              encoding:NSISOLatin1StringEncoding
                                                                 error:nil];
        NSLog(@"contents: %@", xmlFileContents);

    I can see in the console that the contents of the string are accurate. However, I then try to convert it to an NSData object (for use with the parser):

        NSData *xmlData = [xmlFileContents dataUsingEncoding:NSISOLatin1StringEncoding];

    But when my didStartElement delegate gets called, I see stray characters showing up, which I think come from an encoding discrepancy. Can NSXMLParser handle ISO-8859-1, and if so, what am I doing wrong?

    Read the article

  • Unable to encode to iso-8859-1 encoding for some chars using Perl Encode module

    - by ppant
    I have an HTML string in ISO-8859-1 encoding. I need to pass this string to HTML::Entities::decode_entities() to convert some of the HTML entities to their respective characters. To do so I am using HTML::Entities from the HTML-Parser 3.65 distribution, but after the decode_entities() call my whole string changes to a utf-8 string. This behavior seems consistent with the HTML::Parser documentation. As I need this string back in ISO-8859-1 format for further processing, I have used Encode::encode("iso-8859-1", $str) to change the string back to ISO-8859-1 encoding. My results are fine except for some characters, where a question mark appears instead. One example is the right single quote, &rsquo; (U+2019). Can anybody tell me if there is a limitation of the Encode module here? Any other pointers to solve the problem would also be helpful. Thanks.
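
    This looks like a limitation of the target charset rather than of Encode: the right single quote (U+2019) has no ISO-8859-1 code point, so the encoder substitutes '?' by default (Encode's FB_DEFAULT behavior; an alternative such as Encode::FB_HTMLCREF keeps the character as a numeric entity instead). The same effect demonstrated with Java's CharsetEncoder, where REPORT turns the silent substitution into an explicit error:

        import java.nio.CharBuffer;
        import java.nio.charset.CharacterCodingException;
        import java.nio.charset.Charset;
        import java.nio.charset.CharsetEncoder;
        import java.nio.charset.CodingErrorAction;

        public class Latin1Limits {
            public static void main(String[] args) {
                CharsetEncoder enc = Charset.forName("ISO-8859-1")
                        .newEncoder()
                        .onUnmappableCharacter(CodingErrorAction.REPORT);
                try {
                    enc.encode(CharBuffer.wrap("it\u2019s")); // U+2019 has no Latin-1 byte
                    System.out.println("encoded cleanly");
                } catch (CharacterCodingException e) {
                    // With REPLACE instead of REPORT this would silently become "it?s".
                    System.out.println("unmappable character: " + e);
                }
            }
        }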

    Read the article

  • Can not set Character Encoding using sun-web.xml

    - by stck777
    I am trying to send special characters, such as Spanish characters, from my page to a JSP page as a form parameter. When I try to get the parameter which I sent, it shows up as "?" (question mark). After searching a java.net thread I came to know that I should have the following entry in my sun-web.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE sun-web-app PUBLIC
            "-//Sun Microsystems, Inc.//DTD Sun ONE Application Server 8.0 Servlet 2.4//EN"
            "http://www.sun.com/software/sunone/appserver/dtds/sun-web-app_2_4-0.dtd">
        <sun-web-app>
            <locale-charset-info default-locale="es">
                <locale-charset-map locale="es" charset="UTF-8"/>
                <parameter-encoding default-charset="UTF-8"/>
            </locale-charset-info>
        </sun-web-app>

    But it did not work: with this approach the character still arrives as "?".
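
    If the container mapping still doesn't take effect, a common fallback is to force the request encoding in code before any parameter is read, typically with a servlet filter. A minimal sketch (the filter still has to be registered in web.xml):

        import java.io.IOException;
        import javax.servlet.Filter;
        import javax.servlet.FilterChain;
        import javax.servlet.FilterConfig;
        import javax.servlet.ServletException;
        import javax.servlet.ServletRequest;
        import javax.servlet.ServletResponse;

        public class Utf8Filter implements Filter {
            public void init(FilterConfig cfg) {}
            public void destroy() {}

            public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
                    throws IOException, ServletException {
                // Must run before anything calls getParameter(); once parameters
                // are parsed, the default (often ISO-8859-1) is locked in and
                // unrepresentable characters degrade to '?'.
                req.setCharacterEncoding("UTF-8");
                chain.doFilter(req, resp);
            }
        }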

    Read the article

  • invalid token error while parsing an XML file with UTF-8 encoding

    - by Niranjan
    I get an "invalid token" error while parsing an XML file with UTF-8 encoding. The error appears when the parser encounters the extended ASCII character 'â'. When I change the declared encoding from UTF-8 to ISO-8859-1, parsing succeeds. But my application should support UTF-8, ASCII, and extended ASCII characters. What should I do about this? Any ideas are welcome. Thanks in advance for your time.
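
    A plausible root cause: the file declares UTF-8 but actually contains ISO-8859-1 bytes. In Latin-1, 'â' is the single byte 0xE2, and a lone 0xE2 is an invalid UTF-8 sequence (it announces a three-byte character that never follows), which is exactly the kind of "invalid token" a strict parser rejects. A quick Java demonstration:

        import java.nio.ByteBuffer;
        import java.nio.charset.CharacterCodingException;
        import java.nio.charset.StandardCharsets;

        public class InvalidUtf8 {
            public static void main(String[] args) {
                byte[] latin1 = { 'c', (byte) 0xE2, 't' }; // "cât" encoded as ISO-8859-1
                try {
                    StandardCharsets.UTF_8.newDecoder().decode(ByteBuffer.wrap(latin1));
                    System.out.println("decoded fine");
                } catch (CharacterCodingException e) {
                    // 0xE2 starts a three-byte UTF-8 sequence, but 't' is not a
                    // valid continuation byte, so decoding fails here.
                    System.out.println("invalid input: " + e);
                }
            }
        }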

    Read the article

  • Tomcat Compression Does Not Add a Content-Encoding: gzip in the Header

    - by Julien Chastang
    I am using Tomcat to compress my HTML content like this:

        <Connector port="8080" maxHttpHeaderSize="8192"
                   maxProcessors="150" maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
                   enableLookups="false" redirectPort="8443" acceptCount="150"
                   connectionTimeout="20000" disableUploadTimeout="true"
                   compression="on" compressionMinSize="128"
                   noCompressionUserAgents="gozilla, traviata"
                   compressableMimeType="text/html" URIEncoding="UTF-8" />

    In the HTTP response headers (as observed via YSlow), however, I am not seeing Content-Encoding: gzip, resulting in a poor YSlow score. All I see is:

        Server: Apache-Coyote/1.1
        Content-Type: text/html;charset=ISO-8859-1
        Content-Language: en-US
        Content-Length: 5251
        Date: Sat, 14 Feb 2009 23:33:51 GMT

    I am running an Apache mod_jk Tomcat configuration. How do I compress HTML content with Tomcat, and also have it add "Content-Encoding: gzip" to the header?

    Read the article

  • Intra-Unicode "lean" Encoding Converters

    - by Mystagogue
    Windows provides encoding-conversion functions (MultiByteToWideChar and WideCharToMultiByte) which are capable of UTF-8 to/from UTF-16 conversions, among other things. But I've seen people offer home-grown 30-to-40-line functions that claim to perform UTF-8/UTF-16 encoding conversions as well. My question is: how reliable are such tiny converters? Can such a small amount of code handle problems such as converting a UTF-16 surrogate pair into a single four-byte UTF-8 sequence (rather than making the mistake of converting it into a pair of three-byte sequences)? Can they correctly spot "unpaired" surrogate input and report an error? In short, are such tiny converters mere toys, or can they be taken seriously? For that matter, why does unicode.org seemingly offer no advice on an algorithm for accomplishing such conversions?
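
    For context, the trap being described: code points above U+FFFF occupy two UTF-16 code units, and a correct converter must merge the pair into one four-byte UTF-8 sequence (emitting two three-byte sequences is the CESU-8 mistake). Both cases illustrated in Java, whose library converter handles them correctly:

        import java.nio.charset.StandardCharsets;

        public class SurrogatePairs {
            public static void main(String[] args) {
                // One code point (U+1F600), two UTF-16 units, four UTF-8 bytes.
                String s = new String(Character.toChars(0x1F600));
                byte[] utf8 = s.getBytes(StandardCharsets.UTF_8);
                System.out.println(s.length() + " UTF-16 units -> " + utf8.length + " UTF-8 bytes"); // 2 -> 4

                // An unpaired surrogate is unmappable; String.getBytes substitutes '?'.
                byte[] bad = "\uD83D".getBytes(StandardCharsets.UTF_8);
                System.out.println(bad.length + " byte: " + (char) bad[0]); // 1 byte: ?
            }
        }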

    Read the article

  • Double encoded url being fully decoded in ASP.NET

    - by Brad R
    I have just come across something quite strange, and yet I haven't found any mention on the interwebs of others having the same problem. If I hit my ASP.NET application with a double-encoded URL, then Request["myQueryParam"] double-decodes the query for me. This is not desirable, as I have double-encoded my query string for a good reason. Can others confirm I'm not doing something obviously wrong, and explain why this happens? A solution to prevent it, without doing some nasty query-string parsing, would be great too! As an example, if you hit the URL:

        http://localhost/MyApp?originalUrl=http%3a%2f%2flocalhost%2fAction%2fRedirect%3fUrl%3d%252fsomeUrl%253futm_medium%253dabc%2526utm_source%253dabc%2526utm_campaign%253dabc

    (for reference, %25 is the % symbol) and then look at Request["originalUrl"] (page or controller), the string returned is:

        http://localhost/Action/Redirect?Url=/someUrl?utm_medium=abc&utm_source=abc&utm_campaign=abc

    I would expect:

        http://localhost/Action/Redirect?Url=%2fsomeUrl%3futm_medium%3dabc%26utm_source%3dabc%26utm_campaign%3dabc

    I have also checked in Fiddler, and the URL is being passed to the server correctly (one possible culprit could have been the browser decoding the URL before sending).
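
    The gap between the expected and actual values is exactly one extra decode pass, which is easy to reproduce outside ASP.NET (Java here for illustration):

        import java.net.URLDecoder;
        import java.nio.charset.StandardCharsets;

        public class DoubleDecode {
            public static void main(String[] args) {
                String once = URLDecoder.decode("%252fsomeUrl%253futm_medium%253dabc", StandardCharsets.UTF_8);
                System.out.println(once);  // %2fsomeUrl%3futm_medium%3dabc  (decoded once, as intended)
                String twice = URLDecoder.decode(once, StandardCharsets.UTF_8);
                System.out.println(twice); // /someUrl?utm_medium=abc       (the framework's second decode)
            }
        }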

    Read the article

  • Synchronize an SVN repo (svnsync) with encoding errors

    - by Hamish
    Is it possible to fix or bypass non-UTF-8-encoded svn:log records when synchronizing repositories with svnsync? Background: I'm in the process of taking over the maintenance of an open-source module that is stored within a large (well over 10,000 revisions) Subversion (1.5.5) repository. I do not have admin access to the remote repository to dump/filter/load the module. The old repository is being discontinued, and I am trying to sync the original submodule to my local (1.6+) repository with svnsync. For example:

        svnsync file://home/svn/temp-repo/ http://path.to.repo/modulename/

    The problem is that the old repository didn't enforce UTF-8 encoding, and I'm hitting errors like:

        svnsync: Cannot accept 'svn:log' property because it is not encoded in UTF-8

    I can't modify the log property in the source repository, so I need to somehow modify or ignore the property value when the encoding is unknown or invalid. Any ideas? For example, is it possible to write a pre-revprop-change script to modify the log property in transit?

    Read the article

  • Php/ODBC encoding problem

    - by JohnM2
    I use ODBC to connect to SQL Server from PHP. In PHP I read some string (nvarchar column) data from SQL Server and then want to insert it into a MySQL database. When I try to insert such a value into a MySQL table I get this MySQL error:

        Incorrect string value: '\xB3\xB9ow...' for column 'name' at row 1

    For strings of all-ASCII characters everything is fine; the problem occurs when non-ASCII characters (from some European languages) are present. So, in more general terms: there is a Unicode string in an MS SQL Server database, which is retrieved by PHP through ODBC. Then it is put into an SQL insert query (as a value for a utf-8 varchar column), which is executed against the MySQL database. Can someone explain what is happening in this situation in terms of encoding? At which step might which character-encoding conversions take place? I use PHP 5.2.5, MySQL 5.0.45-community-nt, and MS SQL Server 2005.

    Read the article

  • Video encoding by servlet with MEncoder

    - by Andrey
    Hello. I am developing an application for video encoding on the server and have hit a problem encoding video with MEncoder. The encoder does not work correctly when launched via a command line built with:

        Runtime.getRuntime().exec("D:\\mencoder\\mnc\\mencoder.exe video1.avi -o outvideo1.flv"
                + " -of lavf -oac mp3lame -lameopts abr:br=64 -srate 22050 -ovc lavc"
                + " -lavcopts vcodec=flv:vbitrate=300:mbd=2:mv0:trell:v4mv:cbp:last_pred=3"
                + " -vf scale=320:240,harddup -quiet");

    The encoder launches and works in a Windows console with my parameters, but when it's run from a servlet it just hangs in the process list and does nothing until the web server is stopped. When the encoder is run from a simple Java application, it runs correctly. Thanks for any help.
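
    A classic cause of a child process hanging under a servlet container is that nobody drains its stdout/stderr: once the pipe buffer fills, the process blocks forever. A sketch of the usual remedy (paths and options as in the question, with the option list shortened):

        import java.io.BufferedReader;
        import java.io.InputStreamReader;

        public class RunEncoder {
            public static void main(String[] args) throws Exception {
                ProcessBuilder pb = new ProcessBuilder(
                        "D:\\mencoder\\mnc\\mencoder.exe", "video1.avi",
                        "-o", "outvideo1.flv", "-of", "lavf", "-quiet");
                pb.redirectErrorStream(true); // merge stderr into stdout: one reader suffices
                Process p = pb.start();
                // Drain the output; left unread, a full pipe buffer blocks the child.
                try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
                    String line;
                    while ((line = r.readLine()) != null) {
                        System.out.println(line);
                    }
                }
                System.out.println("exit code: " + p.waitFor());
            }
        }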

    Read the article

  • Loading xml with encoding UTF 16 using XDocument

    - by Sangram
    Hi, I am trying to read an XML document using XDocument, but I get an error when the XML starts with:

        <?xml version="1.0" encoding="utf-16"?>

    When I remove the encoding declaration manually, it works perfectly. The error I get is: "There is no Unicode byte order mark. Cannot switch to Unicode." I tried searching and landed here: Why does C# XmlDocument.LoadXml(string) fail when an XML header is included? But I could not solve my problem. My code:

        XDocument xdoc = XDocument.Load(path);

    Any suggestions? Thank you.

    Read the article

  • FFmpeg + iPhone - Interesting (incorrect?) video encoding results

    - by jtrim
    I'm encoding some video on the iPhone by running the PNG image data through swscale to get YUV420P data, then encoding that frame using the MSMPEG4V1 codec. According to the API docs, avcodec_encode_video should return the number of bytes used from the output buffer by that encode operation. There are 234,000 bytes going into the encoder, but the result returned by avcodec_encode_video is simply "4", and the result is exactly the same over 24 frames. Something seems fishy here... any insight? Here's a pastebin link to the code: http://pastebin.com/ht94FWva (sorry for the link away from SO; I just didn't want the code duplicated in several places). EDIT: Also, I've set up a custom log callback for FFmpeg to use, and I have the log level set to "Verbose" (libavutil/log.h), so libavcodec should be logging any goofs to the console, but avcodec is quiet throughout the whole operation. (Note: I did test to make sure my log callback was working.)

    Read the article

  • Android Read contents of a URL (content missing after in result)

    - by josnidhin
    I have the following code that reads the contents of a URL:

        public static String DownloadText(String url) {
            StringBuffer result = new StringBuffer();
            try {
                URL jsonUrl = new URL(url);
                InputStreamReader isr = new InputStreamReader(jsonUrl.openStream());
                BufferedReader in = new BufferedReader(isr);
                String inputLine;
                while ((inputLine = in.readLine()) != null) {
                    result.append(inputLine);
                }
                // close inside the try block, where the readers are in scope
                in.close();
                isr.close();
            } catch (Exception ex) {
                result = new StringBuffer("TIMEOUT");
                Log.e(Util.AppName, ex.toString());
            }
            return result.toString();
        }

    The problem is that I am missing content after 4065 characters in the returned result. Can someone help me solve this? Note: the URL I am trying to read returns a JSON response, so everything is on one line; I think that's why some content is missing.

    Read the article
