Search Results

Search found 324 results on 13 pages for 'rfc 1918'.

Page 9/13 | < Previous Page | 5 6 7 8 9 10 11 12 13  | Next Page >

  • jQuery Globalization Plugin from Microsoft

    - by ScottGu
    Last month I blogged about how Microsoft is starting to make code contributions to jQuery, and about some of the first code contributions we were working on: jQuery Templates and Data Linking support. Today, we released a prototype of a new jQuery Globalization Plugin that enables you to add globalization support to your JavaScript applications. This plugin includes globalization information for over 350 cultures ranging from Scottish Gaelic, Frisian, Hungarian, Japanese, to Canadian English. We will be releasing this plugin to the community as open-source. You can download our prototype for the jQuery Globalization plugin from our Github repository: http://github.com/nje/jquery-glob You can also download a set of samples that demonstrate some simple use-cases with it here.

    Understanding Globalization

    The jQuery Globalization plugin enables you to easily parse and format numbers, currencies, and dates for different cultures in JavaScript. For example, you can use the Globalization plugin to display the proper currency symbol for a culture: You also can use the Globalization plugin to format dates so that the day and month appear in the right order and the day and month names are correctly translated: Notice above how the Arabic year is displayed as 1431. This is because the year has been converted to use the Arabic calendar. Some cultural differences, such as different currency or different month names, are obvious. Other cultural differences are surprising and subtle. For example, in some cultures, the grouping of numbers is done unevenly. In the "te-IN" culture (Telugu in India), groups have 3 digits and then 2 digits. The number 1000000 (one million) is written as "10,00,000". Some cultures do not group numbers at all. All of these subtle cultural differences are handled by the jQuery Globalization plugin automatically. Getting dates right can be especially tricky. Different cultures have different calendars such as the Gregorian and UmAlQura calendars. A single culture can even have multiple calendars. For example, the Japanese culture uses both the Gregorian calendar and a Japanese calendar that has eras named after Japanese emperors. The Globalization Plugin includes methods for converting dates between all of these different calendars.

    Using Language Tags

    The jQuery Globalization plugin uses the language tags defined in the RFC 4646 and RFC 5646 standards to identify cultures (see http://tools.ietf.org/html/rfc5646). A language tag is composed of one or more subtags separated by hyphens. For example:

        Language Tag    Language Name (in English)
        en-AU           English (Australia)
        en-BZ           English (Belize)
        en-CA           English (Canada)
        id              Indonesian
        zh-CHS          Chinese (Simplified) Legacy
        zu              isiZulu

    Notice that a single language, such as English, can have several language tags. Speakers of English in Canada format numbers, currencies, and dates using different conventions than speakers of English in Australia or the United States. You can find the language tag for a particular culture by using the Language Subtag Lookup tool located here: http://rishida.net/utils/subtags/ The jQuery Globalization plugin download includes a folder named globinfo that contains the information for each of the 350 cultures. Actually, this folder contains more than 700 files because the folder includes both minified and un-minified versions of each file.
    For example, the globinfo folder includes JavaScript files named jQuery.glob.en-AU.js for English Australia, jQuery.glob.id.js for Indonesia, and jQuery.glob.zh-CHS for Chinese (Simplified) Legacy.

    Example: Setting a Particular Culture

    Imagine that you have been asked to create a German website and want to format all of the dates, currencies, and numbers using German formatting conventions correctly in JavaScript on the client. The HTML for the page might look like this: Notice the span tags above. They mark the areas of the page that we want to format with the Globalization plugin. We want to format the product price, the date the product is available, and the units of the product in stock. To use the jQuery Globalization plugin, we’ll add three JavaScript files to the page: the jQuery library, the jQuery Globalization plugin, and the culture information for a particular language: In this case, I’ve statically added the jQuery.glob.de-DE.js JavaScript file that contains the culture information for German. The language tag “de-DE” is used for German as spoken in Germany. Now that I have all of the necessary scripts, I can use the Globalization plugin to format the product price, date available, and units in stock values using the following client-side JavaScript: The jQuery Globalization plugin extends the jQuery library with new methods - including new methods named preferCulture() and format(). The preferCulture() method enables you to set the default culture used by the jQuery Globalization plugin methods. Notice that the preferCulture() method accepts a language tag. The method will find the closest culture that matches the language tag. The $.format() method is used to actually format the currencies, dates, and numbers. The second parameter passed to the $.format() method is a format specifier. For example, passing “c” causes the value to be formatted as a currency. The ReadMe file at github details the meaning of all of the various format specifiers: http://github.com/nje/jquery-glob When we open the page in a browser, everything is formatted correctly according to German language conventions. A euro symbol is used for the currency symbol. The date is formatted using German day and month names. Finally, a period instead of a comma is used as the number separator: You can see a running example of the above approach with the 3_GermanSite.htm file in this samples download.

    Example: Enabling a User to Dynamically Select a Culture

    In the previous example we explicitly said that we wanted to globalize in German (by referencing the jQuery.glob.de-DE.js file). Let’s now look at the first of a few examples that demonstrate how to dynamically set the globalization culture to use. Imagine that you want to display a dropdown list of all of the 350 cultures in a page. When someone selects a culture from the dropdown list, you want all of the dates in the page to be formatted using the selected culture. Here’s the HTML for the page: Notice that all of the dates are contained in a <span> tag with a data-date attribute (data-* attributes are a new feature of HTML 5 that conveniently also still work with older browsers). We’ll format the date represented by the data-date attribute when a user selects a culture from the dropdown list. In order to display dates for any possible culture, we’ll include the jQuery.glob.all.js file like this: The jQuery Globalization plugin includes a JavaScript file named jQuery.glob.all.js.
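    As a rough sketch, the include plus the culture-dropdown wiring might look something like the following (the element IDs, the surrounding markup, and the jQuery.glob.js file name are assumptions here; $.preferCulture(), $.cultures, parseDate() and $.format() are the plugin APIs described in this post):

        <script src="jquery-1.4.2.js" type="text/javascript"></script>
        <script src="jQuery.glob.js" type="text/javascript"></script>
        <script src="jQuery.glob.all.js" type="text/javascript"></script>
        <script type="text/javascript">
            $(function () {
                // Fill the dropdown from the cultures that jQuery.glob.all.js loaded.
                $.each($.cultures, function (name, culture) {
                    $("#culture").append($("<option/>").val(name).text(name));
                });

                // Re-format every <span data-date="..."> when a culture is picked.
                $("#culture").change(function () {
                    $.preferCulture($(this).val());
                    $("span[data-date]").each(function () {
                        var date = $.parseDate($(this).attr("data-date"));
                        $(this).text($.format(date, "D")); // "D" = long date format
                    });
                });
            });
        </script>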
    This file contains globalization information for all of the more than 350 cultures supported by the Globalization plugin. At 367KB minified, this file is not small. Because of the size of this file, unless you really need to use all of these cultures at the same time, we recommend that you add the individual JavaScript files for particular cultures that you intend to support instead of the combined jQuery.glob.all.js to a page. In the next sample I’ll show how to dynamically load just the language files you need. Next, we’ll populate the dropdown list with all of the available cultures. We can use the $.cultures property to get all of the loaded cultures: Finally, we’ll write jQuery code that grabs every span element with a data-date attribute and formats the date: The jQuery Globalization plugin’s parseDate() method is used to convert a string representation of a date into a JavaScript date. The plugin’s format() method is used to format the date. The “D” format specifier causes the date to be formatted using the long date format. And now the content will be globalized correctly regardless of which of the 350 languages a user visiting the page selects. You can see a running example of the above approach with the 4_SelectCulture.htm file in this samples download.

    Example: Loading Globalization Files Dynamically

    As mentioned in the previous section, you should avoid adding the jQuery.glob.all.js file to a page whenever possible because the file is so large. A better alternative is to load the globalization information that you need dynamically. For example, imagine that you have created a dropdown list that displays a list of languages: The following jQuery code executes whenever a user selects a new language from the dropdown list. The code checks whether the globalization file associated with the selected language has already been loaded. If the globalization file has not been loaded then the globalization file is loaded dynamically by taking advantage of the jQuery $.getScript() method. The globalizePage() method is called after the requested globalization file has been loaded, and contains the client-side code to perform the globalization. The advantage of this approach is that it enables you to avoid loading the entire jQuery.glob.all.js file. Instead you only need to load the files that you need and you don’t need to load the files more than once. The 5_Dynamic.htm file in this samples download demonstrates how to implement this approach.

    Example: Setting the User Preferred Language Automatically

    Many websites detect a user’s preferred language from their browser settings and automatically use it when globalizing content. A user can set a preferred language for their browser. Then, whenever the user requests a page, this language preference is included in the request in the Accept-Language header. When using Microsoft Internet Explorer, you can set your preferred language by following these steps: Select the menu option Tools, Internet Options. Select the General tab. Click the Languages button in the Appearance section. Click the Add button to add a new language to the list of languages. Move your preferred language to the top of the list. Notice that you can list multiple languages in the Language Preference dialog. All of these languages are sent in the order that you listed them in the Accept-Language header:

        Accept-Language: fr-FR,id-ID;q=0.7,en-US;q=0.3

    Strangely, you cannot retrieve the value of the Accept-Language header from client JavaScript.
    Microsoft Internet Explorer and Mozilla Firefox support a bevy of language-related properties exposed by the window.navigator object, such as window.navigator.browserLanguage and window.navigator.language, but these properties represent either the language set for the operating system or the language edition of the browser. These properties don’t enable you to retrieve the language that the user set as his or her preferred language. The only reliable way to get a user’s preferred language (the value of the Accept-Language header) is to write server code. For example, the following ASP.NET page takes advantage of the server Request.UserLanguages property to assign the user’s preferred language to a client JavaScript variable named acceptLanguage (which then allows you to access the value using client-side JavaScript): In order for this code to work, the culture information associated with the value of acceptLanguage must be included in the page. For example, if someone’s preferred culture is fr-FR (French in France) then you need to include either the jQuery.glob.fr-FR.js or the jQuery.glob.all.js JavaScript file in the page or the culture information won’t be available. The “6_AcceptLanguages.aspx” sample in this samples download demonstrates how to implement this approach. If the culture information for the user’s preferred language is not included in the page then the $.preferCulture() method will fall back to using the neutral culture (for example, using jQuery.glob.fr.js instead of jQuery.glob.fr-FR.js). If the neutral culture information is not available then the $.preferCulture() method falls back to the default culture (English).

    Example: Using the Globalization Plugin with the jQuery UI DatePicker

    One of the goals of the Globalization plugin is to make it easier to build jQuery widgets that can be used with different cultures. We wanted to make sure that the jQuery Globalization plugin could work with existing jQuery UI plugins such as the DatePicker plugin. To that end, we created a patched version of the DatePicker plugin that can take advantage of the Globalization plugin when rendering a calendar. For example, the following figure illustrates what happens when you add the jQuery Globalization and the patched jQuery UI DatePicker plugin to a page and select Indonesian as the preferred culture: Notice that the headers for the days of the week are displayed using Indonesian day name abbreviations. Furthermore, the month names are displayed in Indonesian. You can download the patched version of the jQuery UI DatePicker from our github website. Or you can use the version included in this samples download and used by the 7_DatePicker.htm sample file.

    Summary

    I’m excited about our continuing participation in the jQuery community. This Globalization plugin is the third jQuery plugin that we’ve released. We’ve really appreciated all of the great feedback and design suggestions on the jQuery templating and data-linking prototypes that we released earlier this year. We also want to thank the jQuery and jQuery UI teams for working with us to create these plugins.

    Hope this helps, Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. You can follow me at: twitter.com/scottgu

    Read the article

  • OTP or S/KEY - Conversion of Hex string into 6 readable words

    - by Garbit
    As seen in RFC 2289 (S/KEY), there is a list of words that must be used when converting the hexadecimal string into a readable format. How would I go about doing so? The RFC mentions: The one-time password is therefore converted to, and accepted as, a sequence of six short (1 to 4 letter) English words. Each word is chosen from a dictionary of 2048 words; at 11 bits per word, all one-time passwords may be encoded. Read more: http://www.faqs.org/rfcs/rfc1760.html#ixzz0fu7QvXfe Does this mean converting the hex into decimal and then using that as an index into an array of words? The other thing it could be is using a text encoding, e.g. 1111 might equal dog in UTF-8 encoding. Thanks in advance for your help!
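    For reference, a rough JavaScript sketch of how the RFC 2289/1760 encoding reads to me (the WORDS array stands in for the RFC's 2048-word dictionary, and the two-bit checksum handling is my reading of the spec, so treat it as an assumption):

        // Encode a 64-bit one-time password (given as a hex string) into six
        // dictionary words: append a 2-bit checksum, then split the 66 bits
        // into six 11-bit indices into the 2048-word list.
        function otpToSixWords(hexOtp, WORDS) {
          var key = BigInt("0x" + hexOtp.replace(/\s+/g, ""));
          var sum = 0n;
          for (var i = 0n; i < 64n; i += 2n) {      // sum the 32 bit-pairs
            sum += (key >> i) & 3n;
          }
          var bits66 = (key << 2n) | (sum & 3n);    // checksum in the low 2 bits
          var words = [];
          for (var w = 5; w >= 0; w--) {            // six 11-bit groups, MSB first
            words.push(WORDS[Number((bits66 >> BigInt(11 * w)) & 2047n)]);
          }
          return words.join(" ");
        }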

    Read the article

  • Best way to represent Bit Arrays in C#??

    - by divinci
    Hi all, I am currently building a DHCPMessage class in C#. The RFC is available here: http://www.faqs.org/rfcs/rfc2131.html Pseudo-code:

        public object DHCPMessage
        {
            bool[8]    op;
            bool[8]    htype;
            bool[8]    hlen;
            bool[8]    hops;
            bool[32]   xid;
            bool[16]   secs;
            bool[16]   flags;
            bool[32]   ciaddr;
            bool[32]   yiaddr;
            bool[32]   siaddr;
            bool[32]   giaddr;
            bool[128]  chaddr;
            bool[512]  sname;
            bool[1024] file;
            bool[]     options;
        }

    If we imagine that each field is a fixed-length bit array, what is: the most versatile / best-practice way of representing this as a class? OR.. how would you write this? :)

    Read the article

  • Diff Algorithm

    - by Daniel Magliola
    I've been looking like crazy for an explanation of a diff algorithm that works and is efficient. The closest I got is this link to RFC 3284 (from several Eric Sink blog posts), which describes in perfectly understandable terms the data format in which the diff results are stored. However, it has no mention whatsoever as to how a program would reach these results while doing a diff. I'm trying to research this out of personal curiosity, because I'm sure there must be tradeoffs when implementing a diff algorithm, which are pretty clear sometimes when you look at diffs and wonder "why did the diff program choose this as a change instead of that?"... Does anyone know where I can find a description of an efficient algorithm that'd end up outputting VCDIFF? By the way, if you happen to find a description of the actual algorithm used by SourceGear's DiffMerge, that'd be even better. NOTE: longest common subsequence doesn't seem to be the algorithm used by VCDIFF; it looks like they're doing something smarter, given the data format they use. Thanks!

    Read the article

  • Replace letters in a secret text

    - by kame
    Hello! I want to change every letter in a text to the one two places further along in the alphabet. But this program doesn't work. Does anyone know why? Thanks in advance. There is also a minor problem with y and z.

        import string
        letters = string.ascii_lowercase
        text=("g fmnc wms bgblr rpylqjyrc gr zw fylb. rfyrq ufyr amknsrcpq ypc dmp. bmgle gr gl zw fylb gq glcddgagclr ylb rfyr'q ufw rfgq rcvr gq qm jmle. sqgle qrpgle.kyicrpylq() gq pcamkkclbcb. lmu ynnjw ml rfc spj. ")
        for x in range(1,24):
            text.replace(letters[x],letters[x+2])
        print(text)

    Read the article

  • SimpleDateFormat parsing date with 'Z' literal

    - by DanInDC
    I am trying to parse a date that looks like this: 2010-04-05T17:16:00Z This is a valid date per http://www.ietf.org/rfc/rfc3339.txt. The 'Z' literal is said to "imply that UTC is the preferred reference point for the specified time." If I try to parse it using SimpleDateFormat and this pattern:

        yyyy-MM-dd'T'HH:mm:ss

    it will be parsed as Mon Apr 05 17:16:00 EDT 2010. SimpleDateFormat is unable to parse the string with these patterns:

        yyyy-MM-dd'T'HH:mm:ssz
        yyyy-MM-dd'T'HH:mm:ssZ

    I can explicitly set the TimeZone to use on the SimpleDateFormat to get the expected output, but I don't think that should be necessary. Is there something I am missing? Is there an alternative date parser?

    Read the article

  • What one-time-password devices are compatible with mod_authn_otp?

    - by netvope
    mod_authn_otp is an Apache web server module for two-factor authentication using one-time passwords (OTP) generated via the HOTP/OATH algorithm defined in RFC 4226. The developer has listed only one compatible device (the Authenex A-Key 3600) on their website. If a device is fully compliant with the standard, and it allows you to recover the token ID, it should work. However, without testing, it's hard to tell whether a device is fully compliant. Have you ever tried other devices (software or hardware) with mod_authn_otp (or another open-source server-side OTP program)? If yes, please share your experience :)
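    For reference, the RFC 4226 HOTP computation itself is small enough to sanity-check a token against; a rough Node.js sketch (the secret/counter handling around it is an assumption, not anything mod_authn_otp-specific):

        var crypto = require("crypto");

        // RFC 4226 HOTP: HMAC-SHA-1 over an 8-byte big-endian counter,
        // dynamic truncation, then reduce modulo 10^digits.
        function hotp(secret, counter, digits) {
          var buf = Buffer.alloc(8);
          buf.writeBigUInt64BE(BigInt(counter));
          var h = crypto.createHmac("sha1", secret).update(buf).digest();
          var offset = h[h.length - 1] & 0x0f;
          var code = ((h[offset] & 0x7f) << 24) | (h[offset + 1] << 16) |
                     (h[offset + 2] << 8) | h[offset + 3];
          return String(code % Math.pow(10, digits || 6)).padStart(digits || 6, "0");
        }

        // Example: compare against the value a hardware/software token displays
        // for a known shared secret and counter.
        // console.log(hotp(Buffer.from("12345678901234567890"), 0));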

    Read the article

  • Email mime parsing

    - by Ashish
    Hi, I was trying to find a user-friendly MIME parser for Java that could just get rid of all that message-part parsing a user has to do. See this for more info about my requirement. Until now I have not been able to find one, so I think I need to write one for myself, and it should be robust enough to handle all kinds of emails. (I know this is not going to be easy.) Since there are a ton of email RFCs, can somebody guide me in the right direction on where I should start?

    Read the article

  • Multi-part gzip file random access (in Java)

    - by toluju
    This may fall in the realm of "not really feasible" or "not really worth the effort" but here goes. I'm trying to randomly access records stored inside a multi-part gzip file. Specifically, the files I'm interested in are compressed Heritrix ARC files. (In case you aren't familiar with multi-part gzip files, the gzip spec allows multiple gzip streams to be concatenated in a single gzip file. They do not share any dictionary information; it is simple binary appending.) I'm thinking it should be possible to do this by seeking to a certain offset within the file, then scanning for the gzip magic header bytes (i.e. 0x1f8b, as per the RFC), and attempting to read the gzip stream from the following bytes. The problem with this approach is that those same bytes can appear inside the actual data as well, so seeking for those bytes can lead to an invalid position to start reading a gzip stream from. Is there a better way to handle random access, given that the record offsets aren't known a priori?

    Read the article

  • How Does Entourage 2008 (for Mac) Decide Which Emails Form a Conversation?

    - by David M
    This is a little bit like http://stackoverflow.com/questions/288757/how-to-identify-email-belongs-to-existing-thread-or-conversation but I am more interested in how Entourage 2008 really does threading as opposed to how it ought to. I have the parent message that has something like Message-ID: <[email protected]/> then some replies that have (in addition to their own Message-ID) In-Reply-To: <[email protected]/> However, these show up as two conversations! The first conversation consists solely of the parent message, and the second conversation consists of the other replies. Would adding a References: header (as described in RFC 2822) resolve this?

    Read the article

  • Import and Export for CSV are both broken in Mathematica

    - by dreeves
    Consider the following 2 by 2 array: x = {{"a b c", "1,2,3"}, {"i \"comma-heart\" you", "i \",heart\" u, too"}} If we Export that to CSV and then Import it again we don't get the same thing back: Import[Export["tmp.csv", x]] Looking at tmp.csv it's clear that the Export didn't work, since the quotes are not escaped properly. According to the RFC, which I presume is summarized correctly on Wikipedia's entry on CSV, the right way to export the above array is as follows:

        a b c, "1,2,3"
        "i ""comma-heart"" you", "i "",heart"" u, too"

    Importing the above does not yield the original array either. So Import is broken as well. I've reported these bugs to [email protected] but I'm wondering if others have workarounds in the meantime. One workaround is to just use TSV instead of CSV. I tested the above with TSV and it seems to work (even with tabs embedded in the entries of the array).

    Read the article

  • Handling over-long UTF-8 sequences

    - by Grant McLean
    I've just been reworking my Encoding::FixLatin Perl module to handle over-long utf8 byte sequences and convert them to the shortest normal form. My question is quite simply "is this a bad idea"? A number of sources (including this RFC) suggest that any over-long utf8 should be treated as an error and rejected. They caution against "naive implementations" and leave me with the impression that these things are inherently unsafe. Since the whole purpose of my module is to clean up messy data files with mixed encodings and convert them to nice clean utf8, this seems like just one more thing I can clean up so the application layer doesn't have to deal with it. My code does not concern itself with any semantic meaning the resulting characters might have, it simply converts them into a normalised form. Am I missing something? Is there a hidden danger I haven't considered?
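    To make the "shortest normal form" idea concrete, here is a rough JavaScript sketch that handles only the simplest case, a two-byte overlong sequence (lead byte 0xC0 or 0xC1 can only encode code points below 0x80, so the single-byte form is always the shortest); the Perl module presumably covers the longer sequences as well:

        // bytes: an array of integers 0-255 representing raw input.
        function fixTwoByteOverlongs(bytes) {
          var out = [];
          for (var i = 0; i < bytes.length; i++) {
            var b = bytes[i];
            if ((b === 0xc0 || b === 0xc1) && i + 1 < bytes.length &&
                (bytes[i + 1] & 0xc0) === 0x80) {
              // Decode the overlong pair and emit the one-byte (shortest) form.
              out.push(((b & 0x1f) << 6) | (bytes[i + 1] & 0x3f));
              i++;
            } else {
              out.push(b);
            }
          }
          return out;
        }

        // Example: [0xc0, 0xae] is an overlong encoding of "." (0x2e).
        // fixTwoByteOverlongs([0xc0, 0xae]) -> [0x2e]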

    Read the article

  • how to handle CONNECT http request

    - by davidshen84
    Hi, I want to implement a simple web server for myself. I can handle GET and POST requests now, but I have no idea what to do with a CONNECT request. A CONNECT request is sent when the client is going to access an HTTPS site. According to http://muffin.doit.org/docs/rfc/tunneling_ssl.html, I should respond with '200 Connection established', but I got 'A TLS packet with unexpected length was received' on the client. The wiki described the SSL handshake protocol, but it did not mention how to implement it.
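    One way to picture what the server has to do after the 200 line is to relay raw bytes in both directions and never speak TLS itself; a rough Node.js sketch (clientSocket and target are assumptions about the surrounding server code, with target parsed from the CONNECT request line):

        var net = require("net");

        function handleConnect(clientSocket, target) {   // e.g. target = "example.com:443"
          var parts = target.split(":");
          var upstream = net.connect(Number(parts[1]) || 443, parts[0], function () {
            // Tell the client the tunnel is ready...
            clientSocket.write("HTTP/1.1 200 Connection established\r\n\r\n");
            // ...then shovel bytes both ways; the TLS handshake happens
            // end-to-end between the client and the origin server.
            clientSocket.pipe(upstream);
            upstream.pipe(clientSocket);
          });
          upstream.on("error", function () {
            clientSocket.end("HTTP/1.1 502 Bad Gateway\r\n\r\n");
          });
        }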

    Read the article

  • PCAP Web Service Usage Logging for Dummies

    - by nick
    I've been assigned the task (for work) of working with PCAP for the first time in my life. I've read through the tutorials and have hacked together a real simple capture program which, it turns out, isn't that hard. However, making use of the data is more difficult. My goal is to log incoming and outgoing web service requests. Are there libraries (C or C++) that stitch together the packets from PCAP that would make reporting on this simple? Barring that, is there something short of reading all of the RFCs from soup to nuts that will allow me to have an "ah-ha!" moment (all of the tutorials seem to stop at the raw packet level which isn't useful for me)? It looks like Perl has a library that may do this and I may eventually attempt to reverse-engineer it from Perl. NOTA BENE: Web server logs aren't acceptable here as I will be intercepting on a routing device. If I had access to those I'd be done and happy... I don't.

    Read the article

  • SDP media field format

    - by TacB0sS
    Hey, I would like to create an SDP media field with its attributes, and there are a few things I don't understand. I've skimmed and read the relevant RFC and I understand most of what each field means, but what I don't understand is how to derive, from the JMF Audio/Video Format, which parameters of the format compose the rtpmap registry entries I need to use. I see these fields many times:

        m=audio 12548 RTP/AVP 0 8 101
        a=rtpmap:0 PCMU/8000
        a=rtpmap:8 PCMA/8000
        a=rtpmap:101 telephone-event/8000
        a=fmtp:101 0-16
        a=silenceSupp:off - - - -
        a=ptime:20
        a=sendrecv

    These are received from the PBX server I'm connecting to; what do they mean in terms of the JMF audio format properties? (I do understand these are standard audio formats commonly used in telecommunication.) UPDATE: I was more wondering about the format parameter '0 8 101' at the end of m=audio 12548 RTP/AVP 0 8 101. Thanks in advance, Adam Zehavi.

    Read the article

  • Why aren't double quotes and backslashes allowed in strings in the JSON standard?

    - by Dan Herbert
    If I run this in a JavaScript console in Chrome or Firebug, it works fine.

        JSON.parse('"\u0027"') // Escaped single-quote

    But if I run either of these 2 lines in a JavaScript console, it throws an error.

        JSON.parse('"\u0022"') // Escaped double-quote
        JSON.parse('"\u005C"') // Escaped backslash

    RFC 4627 section 2.5 seems to imply that \ and " are allowed characters as long as they're properly escaped. The 2 browsers I've tried this in don't seem to allow it, however. Is there something I'm doing wrong here or are they really not allowed in strings? I've also tried using \" and \\ in place of \u0022 and \u005C respectively. I feel like I'm just doing something very wrong, because I find it hard to believe that JSON would not allow these characters in strings, especially since the specification doesn't seem to mention anything that I could find saying they're not allowed.
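    For anyone reproducing this, a small sketch of what JSON.parse actually receives in each case may help; the \uXXXX escapes in the outer string literal are consumed once by the JavaScript lexer before JSON.parse ever runs:

        JSON.parse('"\u0027"');   // JSON.parse sees "'"  -> a valid JSON string
        JSON.parse('"\u0022"');   // JSON.parse sees """  -> three bare quotes, invalid
        JSON.parse('"\u005C"');   // JSON.parse sees "\"  -> dangling escape, invalid
        // Escaping the backslash leaves the escape for the JSON parser itself:
        JSON.parse('"\\u0022"');  // -> '"'
        JSON.parse('"\\\\"');     // -> '\'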

    Read the article

  • Sending BCC emails using a SMTP server?

    - by Alix Axel
    I've had this noted down on some of my code for a while:

        /**
         * Add a BCC.
         *
         * Note that according to the conventions of the SMTP protocol all
         * addresses, including BCC addresses, are included in every email as it
         * is sent over the Internet. The BCC addresses are stripped off blind
         * copy email only at the destination email server.
         *
         * @param string $email
         * @param string $name
         * @return object Email
         */

    I don't remember where I got it from but that shouldn't be relevant to this question. Basically, whenever I try to send an email with BCCs via SMTP the BCC addresses are not hidden - I've read the whole RFC for the SMTP protocol (a couple years ago) and I don't think I'm missing anything. The strange thing is, if I send an email with BCCs using the built-in mail() function everything works just right and I've no idea why - I would like to roll my own email sender but I fail to understand this. Can someone please shed some light into this dark subject?

    Read the article

  • geographic location uri scheme

    - by diciu
    I'd like to use a URI scheme to enable the users of one of my apps to share geographic locations. I don't want to invent my own URI scheme, and "geo" seems the most appropriate, but there are only two Internet Drafts on the subject (draft-mayrhofer-geo-uri-01, draft-mayrhofer-geo-uri-02), both expired and wildly different in the way they approach the standard. Is there a URI scheme that's suited for encoding latitude and longitude and that has made it to RFC status? Should I use a generic URI such as the tag URI scheme?

    Read the article

  • Is there a lightweight multipart/form-data parser in C or C++?

    - by Hongli
    I'm looking at integrating multipart form-data parsing in a web server module so that I can relieve backend web applications (often written in dynamic languages) from parsing the multipart data themselves. The multipart grammar (RFC 2046) looks non-trivial and if I implement it by hand a lot of things can go wrong. Is there already a good, lightweight multipart/form-data parser written in C or C++? I'm looking for one with no external dependencies other than the C or C++ standard library. I don't need email attachment handling or buffered I/O classes or a portability runtime or whatever, just multipart/form-data parsing. Things that I've considered: GMime - depends on glib, so no go. libapreq - too large, depends on APR, badly documented, no unit tests. I've also looked at writing a parser with Ragel, but I can't figure out how to do it because the grammar is not static: the boundary can change arbitrarily.

    Read the article

  • Parsing content-disposition header's filename in multipart/form-data

    - by Artyom
    Hello. According to the RFC, in multipart/form-data the Content-Disposition header's filename field receives as its parameter an HTTP quoted-string - a string between quotes where the character '\' can escape any other ASCII character. The problem: web browsers don't do it. IE6 sends:

        Content-Disposition: form-data; name="file"; filename="z:\tmp\test.txt"

    Instead of the expected

        Content-Disposition: form-data; name="file"; filename="z:\\tmp\\test.txt"

    which should be parsed as z:tmptest.txt according to the rules, instead of z:\tmp\test.txt. Firefox, Konqueror and Chrome don't escape " characters, for example:

        Content-Disposition: form-data; name="file"; filename=""test".txt"

    Instead of the expected

        Content-Disposition: form-data; name="file"; filename="\"test\".txt"

    So... how would you suggest dealing with this issue?
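    One pragmatic approach (a rough JavaScript sketch, not tied to any particular server framework) is to give up on strict quoted-string parsing for this field: grab everything between filename=" and the last double quote, then keep only the basename, since the value is a client-side path anyway:

        function extractFilename(contentDisposition) {
          // Greedy match: runs from the first quote after filename= to the
          // last quote, so unescaped '"' and '\' inside survive intact.
          var m = /filename="(.*)"/.exec(contentDisposition);
          if (!m) return null;
          // Drop any Windows/Unix path prefix the browser leaked.
          var parts = m[1].split(/[\\\/]/);
          return parts[parts.length - 1];
        }

        // extractFilename('form-data; name="file"; filename="z:\\tmp\\test.txt"')
        //   -> "test.txt"
        // extractFilename('form-data; name="file"; filename=""test".txt"')
        //   -> '"test".txt'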

    Read the article

  • Twitter feed appears to be both RSS 2.0 and Atom?

    - by Greg K
    I'm parsing various site feeds, and putting together a small library to help me do it. Looking at the Atom RFC and the RSS 2.0 specification, feeds from Twitter seem to be a combination: Twitter specifies an Atom namespace in an RSS 2.0 structure? GitHub uses Atom, whereas Flickr (which offers multiple feeds, but the default 'Latest' feed from user profiles) appears to be RSS 2.0. How can Twitter specify an Atom namespace and then use RSS? This makes parsing feeds a little ambiguous, unless I ignore any specified namespace and just examine the document structure.

    Read the article

  • Valid HTTP header? `GET /page.html Http1.0`?

    - by Earlz
    Ok so I've been reading up on HTTP and found this page. This is an example HTTP request that was posted there:

        GET /http.html Http1.1
        Host: www.http.header.free.fr
        Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg,
        Accept-Language: Fr
        Accept-Encoding: gzip, deflate
        User-Agent: Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 4.0)
        Connection: Keep-Alive

    I tried it in telnet and it worked. But everywhere else I see this kind of request line:

        GET /http.html HTTP/1.1

    The important difference is that HTTP is all caps and the / character. Are they both correct? They both seem to work on the sites I've tested it on. I've skimmed the HTTP RFC but didn't find anything of use. Has anyone else seen this kind of request header? Is it officially supported?

    Read the article

  • In the JSON spec, what does "Since the first two characters of a JSON text will always be ASCII characters" mean?

    - by dan gibson
    The spec is http://www.ietf.org/rfc/rfc4627.txt?number=4627 It contains this, under "Encoding":

        JSON text SHALL be encoded in Unicode. The default encoding is UTF-8.
        Since the first two characters of a JSON text will always be ASCII
        characters [RFC0020], it is possible to determine whether an octet
        stream is UTF-8, UTF-16 (BE or LE), or UTF-32 (BE or LE) by looking
        at the pattern of nulls in the first four octets.

    What does "Since the first two characters of a JSON text will always be ASCII characters [RFC0020]" mean? I've looked at RFC0020 but couldn't find anything about it. JSON could be {" or { " (i.e. whitespace before the quote).
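    The null-pattern test the spec alludes to is easy to sketch; here is a rough JavaScript version that inspects the first four octets of a buffer (assuming at least four bytes are available):

        function detectJsonEncoding(bytes) {
          var a = bytes[0], b = bytes[1], c = bytes[2], d = bytes[3];
          if (a === 0 && b === 0 && c === 0) return "UTF-32BE";  // 00 00 00 xx
          if (b === 0 && c === 0 && d === 0) return "UTF-32LE";  // xx 00 00 00
          if (a === 0 && c === 0)            return "UTF-16BE";  // 00 xx 00 xx
          if (b === 0 && d === 0)            return "UTF-16LE";  // xx 00 xx 00
          return "UTF-8";                                        // xx xx xx xx
        }

        // This works because the first two characters are ASCII (code points
        // below 128), so any nulls among the first four octets can only come
        // from the encoding itself, never from the characters.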

    Read the article

  • Is there anything in the FTP protocol like the HTTP Range header?

    - by Cheeso
    Suppose I want to transfer just a portion of a file over FTP - is it possible using the standard FTP protocol? In HTTP I could use a Range header in the request to specify the data range of the remote resource. If it's a 1 MB file, I could ask for the bytes from 600k to 700k. Is there anything like that in FTP? I am reading the FTP RFC and don't see anything, but want to make sure I'm not missing anything. There's a Restart command in FTP - would that work?

    Read the article
