Search Results

Search found 5371 results on 215 pages for 'church encoding'.

Page 43 of 215

  • Conditional deserialization

    - by Yordan Pavlov
    I am not sure the title of my question is correct; it most probably is not. I have spent some time searching both the net and Stack Overflow and I cannot find a good description of the issue I am facing. What I want to achieve is the ability to read some raw bytes and, based on the value of some of them, interpret the rest in different ways. This is roughly how TLV works: you check the tag and, depending on it, interpret the value that follows. Of course I can always keep that logic in my C++ code, but I am looking for a solution that moves it out of the source code (maybe into some XML description). That would let me describe different encodings (protocols) more easily. I am familiar with Protocol Buffers and some other serialization libraries, but they all solve a different problem: they assume they sit on both ends of the communication, while I want to describe the communication itself (sort of). Is such a solution available, and if not, why not? Are there inherent difficulties I would face trying to implement one?

    Read the article
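
    A minimal Python sketch (illustration only, not tied to the question's C++ code) of the TLV-style dispatch described above, where the per-tag interpretation lives in a data table rather than in code. The 1-byte tag / 1-byte length layout, the tags and the field names are all made up for the example; the table could just as well be loaded from an XML file, which is the point of the question.

      import struct

      # Hypothetical declarative description of a protocol: tag -> (field name, decoder).
      # In a real setup this table would be loaded from an XML/JSON description
      # instead of being hard-coded.
      TAG_TABLE = {
          0x01: ("temperature", lambda b: struct.unpack(">h", b)[0] / 10.0),
          0x02: ("device_id",   lambda b: b.hex()),
          0x03: ("label",       lambda b: b.decode("ascii")),
      }

      def parse_tlv(data: bytes) -> dict:
          """Parse a stream of [1-byte tag][1-byte length][value] records."""
          out, i = {}, 0
          while i < len(data):
              tag, length = data[i], data[i + 1]
              value = data[i + 2 : i + 2 + length]
              name, decode = TAG_TABLE.get(tag, (f"tag_{tag:#x}", bytes))
              out[name] = decode(value)
              i += 2 + length
          return out

      print(parse_tlv(bytes([0x01, 0x02, 0x00, 0xFB,     # temperature = 25.1
                             0x03, 0x02, 0x4F, 0x4B])))  # label = "OK"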

  • PHP getting a bunch of weird code \u0644\u064a\u0646\u0643 \u0627\u0644

    - by Webby
    Hello, I'm getting a bunch of weird output in users' messages, e.g. \u0644\u064a\u0646\u0643 \u0627\u0644 \u0639\u0627\u0645\u0644. I assume these are Arabic characters in escaped form? How can I preg_replace all these codes with something a little more useful, because the search results are filled with pages and pages of this stuff? Perhaps even display them as they're supposed to appear? Any advice on what to do with such strings and how to handle them is appreciated. Please keep in mind this stuff is mixed in with common letters and numbers. Many thanks.

    Read the article
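
    Those sequences are standard \uXXXX Unicode escapes (the same form JSON uses), each naming an Arabic code point. The question is about PHP; the Python sketch below is only meant to show that decoding the escapes yields readable Arabic text, using the exact sequences from the question.

      import json
      import re

      raw = r"\u0644\u064a\u0646\u0643 \u0627\u0644 \u0639\u0627\u0645\u0644"

      # A JSON parser understands the escapes directly...
      print(json.loads('"%s"' % raw))

      # ...or they can be decoded by hand, leaving surrounding text untouched.
      decoded = re.sub(r"\\u([0-9a-fA-F]{4})",
                       lambda m: chr(int(m.group(1), 16)),
                       raw)
      print(decoded)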

  • C#, XmlDocument, and special ISO Latin characters

    - by Trent
    I'm trying to load XML into an XmlDocument, but it doesn't recognize the encoded '&eacute;' and throws the error 'An error occurred while parsing Entity Name'. I can add a custom entity set in a DTD of my XML so the XmlDocument loads properly, but what I'm hoping is that I can reference a URL that defines the common set of these ISO Latin entities. Is this possible, or do I need to inject a custom list of DTD entity definitions?

    Read the article
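
    For reference, XML itself predefines only five entities (&lt; &gt; &amp; &apos; &quot;), which is why &eacute; trips the parser. Below is a rough, language-agnostic illustration of the alternative of resolving HTML named entities before parsing, shown in Python rather than C# purely for brevity.

      import html
      import xml.etree.ElementTree as ET

      raw = "<note><body>Caf&eacute; au lait</body></note>"

      # Naive illustration: html.unescape() knows the full HTML entity table,
      # so &eacute; becomes a literal é before the XML parser sees it.
      # (Careful: it also turns &amp; into a bare &, so a real fix should
      # resolve only the non-XML entities.)
      resolved = html.unescape(raw)

      root = ET.fromstring(resolved)
      print(root.find("body").text)   # Café au lait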

  • Problems with encoding in ASP.NET MVC

    - by George
    Hello experts! I'm having a weird issue here. I have a bunch of views in which I have characters like é, á, ó, etc. In one of my views I can fetch data from the database with the accents intact, but in another one I simply get "weird" characters. What can I be doing wrong? Do I need to configure something in order for this to work? Thanks!

    Read the article
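
    The usual mechanism behind this symptom (offered here only as a hypothesis, since the question gives no code) is text stored as UTF-8 but decoded somewhere as a single-byte encoding such as Latin-1/Windows-1252: in the page charset, the connection charset, or the view file's own encoding. A tiny Python demonstration of the mechanism, unrelated to any ASP.NET API:

      text = "é, á, ó"

      utf8_bytes = text.encode("utf-8")        # how the text is stored/sent

      mojibake = utf8_bytes.decode("latin-1")  # read back with the wrong encoding
      print(mojibake)                          # Ã©, Ã¡, Ã³  <- the "weird" characters

      print(utf8_bytes.decode("utf-8"))        # é, á, ó    <- correct decoding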

  • Sphinx - delimiters

    - by yoda
    Hi, I would like to know whether the Sphinx engine treats characters like commas and periods as delimiters (the way plain MySQL does). My goal is not to avoid them entirely, but to escape them, or at least make sure they don't cause conflicts when performing MATCH operations in FULLTEXT searches. I have problems dealing with them in MySQL by default, and I would prefer not to be forced to replace those delimiters with other characters just to get a good set of results. Sorry if I'm saying something stupid, but I have no experience with Sphinx or other complementary search engines. To give an example: if I search for "Passat 2.0 TDI", MySQL by default treats the period as a delimiter, and since "2" and "0" are too short to be considered words by default, the results end up a bit messed up. Is this easy to handle with Sphinx (or another search engine)? I'm open to suggestions. This is for a large project with probably more than 500,000 records (not trivial at all). Cheers!

    Read the article

  • Has anyone succeeded in converting files to an MPG format that a Sony digital camera can play?

    - by user645552
    I'm trying to play an MPEG movie on my Sony Cyber-shot digital camera; it comes from a DV camera and was converted to MPEG-1. The camera refuses to play it and shows a "File error". The only files it will play are the ones from my previous Sony camera. The other way around there is no problem: the movies the camera records play fine in various software media players. Has anyone succeeded in converting files to an MPG format that a Sony digital camera can play?

    Read the article

  • What is the best way to batch-encode videos on the server side?

    - by albanx
    Hello, this is a general question, since I am a developer with no real experience in video processing. I have to prepare a web application that lets users upload video files to our company server and then have the server process them on demand. Depending on the action the user launches from the web app, the server has to:
      - convert the video to different formats (mp4, flv, ...)
      - extract keyframes from the video and save them as JPEG
      - extract the audio from the video
      - automatically check audio and video quality (black frames, silence detection)
      - detect scene changes and extract keyframes for them
      - ...
    This is what my bosses want from the web-based application (with server support, obviously); I understand only the first three points of the list, the rest is Greek to me. My question is: which is the best and fastest server-side application for this kind of work, one that supports multiple batch video conversions and can be driven from the command line (command line for PHP/SOAP/socket interaction, or something else)? Is Adobe Media Server suitable for batch video conversion? Which Adobe products can be used for this purpose? Note: I have experience with InDesign Server scripting (sending XML via PHP and SOAP calls), and I am looking for something similar for video processing. I will appreciate any answers. Thanks all.

    Read the article
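
    The question leaves the tool choice open; one widely used command-line option for this kind of batch work is FFmpeg, which is not mentioned in the question and is only an assumption here. A minimal Python sketch of driving it from a job script; the paths, codecs and file patterns are placeholders.

      import subprocess
      from pathlib import Path

      UPLOAD_DIR = Path("/srv/uploads")       # placeholder locations
      OUTPUT_DIR = Path("/srv/converted")

      def convert_to_mp4(src: Path) -> Path:
          """Re-encode one uploaded file to H.264/AAC in an MP4 container."""
          dst = OUTPUT_DIR / (src.stem + ".mp4")
          subprocess.run(["ffmpeg", "-y", "-i", str(src),
                          "-c:v", "libx264", "-c:a", "aac", str(dst)],
                         check=True)
          return dst

      def extract_audio(src: Path) -> Path:
          """Copy the audio track out without re-encoding.

          Assumes the source audio codec is one an .m4a container accepts.
          """
          dst = OUTPUT_DIR / (src.stem + ".m4a")
          subprocess.run(["ffmpeg", "-y", "-i", str(src),
                          "-vn", "-c:a", "copy", str(dst)],
                         check=True)
          return dst

      for upload in sorted(UPLOAD_DIR.glob("*.*")):
          convert_to_mp4(upload)
          extract_audio(upload)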

  • Non-Latin characters in URLs - is it better to encode them or replace them with their Latin "counterparts"?

    - by Pawel Krakowiak
    We're implementing a blog for a site that supports six different languages, five of which use characters outside the basic Latin (ASCII) range. We are not sure whether we should percent-encode them (that is what we're doing at the moment), so that "Létání s potravinami: Co je dovoleno?" becomes l%c3%a9t%c3%a1n%c3%ad-s-potravinami-co-je-dovoleno, which the browser displays as létání-s-potravinami-co-je-dovoleno; or whether we should replace them with their Latin "counterparts" (similar-looking letters), so that "Létání s potravinami: Co je dovoleno?" becomes letani-s-potravinami-co-je-dovoleno. I can't find a definitive answer as to which is better from an SEO perspective, and search engine optimization is very important for us. Which approach would you suggest?

    Read the article
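
    Both variants in the question can be produced mechanically. The Python sketch below, for illustration only, builds a slug and then shows percent-encoding of the UTF-8 bytes versus stripping the diacritics via Unicode NFKD decomposition; it says nothing about which version ranks better, and the transliteration is lossy.

      import re
      import unicodedata
      from urllib.parse import quote

      title = "Létání s potravinami: Co je dovoleno?"

      def slugify(text: str) -> str:
          """Lower-case, drop punctuation, join words with hyphens."""
          words = re.findall(r"\w+", text.lower())
          return "-".join(words)

      slug = slugify(title)                       # létání-s-potravinami-co-je-dovoleno

      # Option 1: keep the accents and percent-encode the UTF-8 bytes for the URL.
      encoded = quote(slug)                       # l%C3%A9t%C3%A1n%C3%AD-s-potravinami-...

      # Option 2: strip the diacritics (lossy) so the slug is plain ASCII.
      ascii_slug = (unicodedata.normalize("NFKD", slug)
                    .encode("ascii", "ignore")
                    .decode("ascii"))             # letani-s-potravinami-co-je-dovoleno

      print(encoded)
      print(ascii_slug)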

  • strange characters at beginning of file

    - by luca
    There are strange characters at the beginning of a file I'm editing (using TextMate). I don't know when they appeared; they're invisible in TextMate, but my script that reads the file goes crazy. These are the first few characters in the file, as seen with the od command:

      0000000 177377 000120 000105 000117 000120 000114 000105 000072

    The first two bytes shouldn't be there, I think. Maybe they were caused by some strange Dropbox sync, or something else, but they tend to reappear (I don't yet know when). My question: what is that 177377, and is there a simple way to remove it in my Ruby script? Thanks.

    Read the article
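
    A reading of that od output, offered as an inference rather than a certainty: od prints 16-bit words in octal, so 177377 is 0xFEFF, i.e. the bytes FF FE on a little-endian machine, which is a byte-order mark. The following words (000120 = 'P', 000105 = 'E', ...) each have a zero high byte, which is what plain ASCII looks like when saved as UTF-16-LE. The question asks for Ruby; the check is sketched in Python only to illustrate the idea, with a placeholder file name.

      path = "data.txt"   # placeholder

      with open(path, "rb") as f:
          raw = f.read()

      # If the file starts with a byte-order mark, treat it as UTF-16;
      # the utf-16 codec reads the BOM itself and picks the right endianness.
      if raw.startswith(b"\xff\xfe") or raw.startswith(b"\xfe\xff"):
          text = raw.decode("utf-16")
      else:
          text = raw.decode("utf-8")

      print(text[:40])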

  • Get Python 2.7's 'json' to not throw an exception when it encounters random byte strings

    - by Chris Dutrow
    I'm trying to encode a dict object into JSON using Python 2.7's json module (i.e. import json). The object contains some byte strings that are data pickled with cPickle, so for json's purposes they are basically random byte strings. I was using django.utils's simplejson and this worked fine, but I recently switched to Python 2.7 on Google App Engine, and simplejson doesn't seem to be available there anymore. Now that I am using json, it throws an exception when it encounters bytes that aren't valid UTF-8. The error I'm getting is:

      UnicodeDecodeError: 'utf8' codec can't decode byte 0x80 in position 0: invalid start byte

    It would be nice if it printed out a string of the character codes the way a debugger might, e.g. \u0002]q\u0000U\u001201, but I really don't much care how it handles this data, as long as it doesn't throw an exception and continues serializing the information that it does recognize. How can I make this happen? Thanks!

    Read the article
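
    Since pickled byte strings are arbitrary binary, one common way to push them through a text format like JSON is to wrap them in Base64 on the way in and unwrap them on the way out. A minimal sketch of that idea; the field names are made up, and on Python 2.7 the pickle import would be cPickle.

      import base64
      import json
      import pickle   # on Python 2.7: import cPickle as pickle

      record = {
          "name": "example",                       # made-up payload
          "blob": pickle.dumps({"a": 1, "b": 2}),  # arbitrary, non-UTF-8 bytes
      }

      # Wrap the binary field in Base64 so json only ever sees ASCII text.
      safe = dict(record, blob_b64=base64.b64encode(record["blob"]).decode("ascii"))
      del safe["blob"]
      encoded = json.dumps(safe)
      print(encoded)

      # Reverse the wrapping when reading the JSON back.
      restored = json.loads(encoded)
      print(pickle.loads(base64.b64decode(restored["blob_b64"])))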

  • another file_exists with special chars problem

    - by Camran
    I have some folders with special characters in their names. I'm currently running on a Windows test machine, but later I will use Linux. My problem is that the folders with special characters in their names are somehow not recognized. Example:

      file_exists('../Bilar/27733691_1.jpg')   // TRUE
      file_exists('../Båtar/27733691_1.jpg')   // FALSE, because of the special character in the folder name

    How should I solve this? I plan to run Linux once the website is online... would that matter? Please explain thoroughly, because I am a newbie at this. Thanks.

    Read the article

  • convert the key to MIME-encoded form in Python

    - by jaysh
    This is the code:

      # Retrieve the public key from the PKS key server
      f = urllib.urlopen('http://pool.sks-keyservers.net:11371/pks/lookup?op=get&search=0x58e9390daf8c5bf3')
      data = f.read()
      decoded_bytes = base64.b64decode(data)
      print decoded_bytes

    I need to convert the key into MIME-encoded form; it currently comes in (ASCII-armored) radix-64 format. For that I have to get the radix-64 data in its binary form, and I also need to remove its header and checksum before the conversion to MIME, but I didn't find any method that can do this. I used the base64.b64decode method and it gives me this error:

      Traceback (most recent call last):
        File "RetEnc.py", line 12, in ?
          decoded_bytes = base64.b64decode(data)
        File "/usr/lib/python2.4/base64.py", line 76, in b64decode
          raise TypeError(msg)
      TypeError: Incorrect padding

    I don't understand what to do. Can anybody suggest something related to this? Thanks!

    Read the article
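
    The TypeError comes from feeding the whole keyserver response to b64decode: the key arrives as an OpenPGP ASCII-armored block (BEGIN/END lines, optional "Version:" headers and a trailing "=XXXX" CRC line), usually wrapped in an HTML page. A sketch of stripping the armor before decoding, assuming the armored block has already been extracted as text:

      import base64

      def dearmor(armored: str) -> bytes:
          """Strip OpenPGP ASCII armor and return the raw key bytes.

          Assumes `armored` is the text between (and including) the
          -----BEGIN/END PGP PUBLIC KEY BLOCK----- lines.
          """
          lines = armored.strip().splitlines()
          # Drop the BEGIN/END marker lines.
          lines = [l for l in lines if not l.startswith("-----")]
          # Drop armor headers ("Version: ...") and the blank line after them.
          if "" in lines:
              lines = lines[lines.index("") + 1:]
          # A line starting with '=' is the CRC24 checksum, not key data.
          body = [l for l in lines if not l.startswith("=")]
          return base64.b64decode("".join(body))

      # raw_key_bytes = dearmor(armored_text_from_keyserver)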

  • how to convert char * to uchar16 in JNI C++

    - by Sagar Hatekar
    Hello, here's what I am trying to do:

      typedef uint16_t uchar16_t;
      uchar16_t buf[32]; // buf will contain timezone information like GMT-6, Eastern Daylight Time, etc.
      char * str = "Test";
      for (int i = 0; i <= strlen(str); i++)
          buf[i] = str[i];

    I guess that's not correct, since a uchar16_t holds 2 bytes while each element of str is 1 byte. What am I supposed to do?

    Read the article

  • How do I read UTF-8 characters via a pointer?

    - by Jen
    Suppose I have UTF-8 content stored in memory; how do I read the characters using a pointer? I presume I need to watch for the 8th bit indicating a multi-byte character, but how exactly do I turn the sequence into a valid Unicode character? Also, is wchar_t the proper type to store a single Unicode character? This is what I have in mind:

      wchar_t readNextChar (char** p)
      {
          char ch = *(*p)++;
          if (ch & 128) {
              // This is a multi-byte character, what do I do now?
              // char chNext = *(*p)++;
              // ... but how do I assemble the Unicode character? ...
          }
          ...
      }

    Read the article
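
    For background on the assembly step: the UTF-8 lead byte encodes the sequence length (0xxxxxxx = 1 byte, 110xxxxx = 2, 1110xxxx = 3, 11110xxx = 4) and every continuation byte contributes six payload bits. The logic is sketched below in Python purely to show the bit arithmetic; a real C version also has to validate continuation bytes, overlong forms and truncated input. Note too that wchar_t is only 16 bits on some platforms (e.g. Windows), so a 32-bit type is the safer holder for an arbitrary code point.

      def decode_utf8_char(data: bytes, i: int = 0):
          """Return (code_point, next_index) for the UTF-8 sequence at data[i].

          Illustration only: no validation of overlong encodings, surrogates,
          or truncated input.
          """
          b0 = data[i]
          if b0 < 0x80:                      # 0xxxxxxx: plain ASCII
              return b0, i + 1
          elif b0 >> 5 == 0b110:             # 110xxxxx 10xxxxxx
              n, cp = 2, b0 & 0x1F
          elif b0 >> 4 == 0b1110:            # 1110xxxx 10xxxxxx 10xxxxxx
              n, cp = 3, b0 & 0x0F
          else:                              # 11110xxx ... (4-byte sequence)
              n, cp = 4, b0 & 0x07
          for b in data[i + 1 : i + n]:      # each continuation byte: 10xxxxxx
              cp = (cp << 6) | (b & 0x3F)
          return cp, i + n

      s = "héllo €".encode("utf-8")
      i = 0
      while i < len(s):
          cp, i = decode_utf8_char(s, i)
          print(hex(cp), chr(cp))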

  • Console output spits out Chinese(?) characters

    - by a_person
    This is a real shot in the dark, but maybe someone has had a similar issue. Some console apps are invoked by either SQL Server 2008 or Autosys (a job scheduler) under Windows Server 2008, and the output of each run is saved into .txt files. Every so often, with no definite pattern as far as I can tell, the saved output is displayed as a series of what I presume are Chinese characters. Has anyone encountered this phenomenon?

    Read the article
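
    One mechanism that produces exactly this symptom, offered purely as a hypothesis, is output written in one encoding and read back in another, classically single-byte text interpreted as UTF-16 (or the reverse): every pair of bytes collapses into one 16-bit code point, and many of those land among CJK ideographs. A small Python demonstration of the effect:

      line = b"Job finished OK."           # ordinary single-byte console output

      # Read the same bytes back as if they were UTF-16-LE: every two ASCII
      # bytes collapse into one 16-bit code point, many of them CJK.
      garbled = line.decode("utf-16-le")
      print(garbled)

      # Decoding with the encoding that was actually written restores the text.
      print(line.decode("ascii"))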

  • Special character in "entrée" is not displayed correctly when defined in a separate JavaScript file

    - by Jazure
    Example: the following string is defined in a json.js file:

      var test = "One complimentary entrée with the purchase of an entrée.";

    It is included in an index.html file with:

      <script type="text/JavaScript" src="./json.js"></script>

    When the string is displayed in the UI, it shows up as "One complimentary entr?e with the purchase of an entr?e." But if the string is defined directly in index.html, there is no problem. Can anyone suggest a solution that keeps the text in the separate .js file?

    Read the article

  • "Headers already sent" warnings after moving files from one server to another

    - by PERR0_HUNTER
    Hey there! I have a project running on DreamHost hosting and it works fine, but since DreamHost has been getting really slow I'm moving the project to my new dedicated server. The thing is that after I move all of my files over to the new dedicated server (Ubuntu 8.04), I see warnings all over the place telling me that the headers have already been sent. The first thing I tried was moving the files via FTP (download to my machine, upload to the server), which didn't work. The second try was to tar.gz the folder on the first server and untar it on the new one, which didn't work either. I tried changing the encoding to ANSI and the files start working, but most of my files contain accented characters, so ANSI is not an option; I need UTF-8 for all my files. Any ideas on how to fix this? I'm sure it must be some sort of configuration issue.

    Read the article
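
    A frequent cause of "headers already sent" with UTF-8 PHP files, and one that would explain why re-saving as ANSI helps, is a byte-order mark (the bytes EF BB BF) at the start of a file: PHP emits it as output before any header() call. This is only a guess at the cause; the sketch below (Python used as a generic scripting tool, with a placeholder path) finds and strips such BOMs.

      import os

      BOM = b"\xef\xbb\xbf"                 # UTF-8 byte-order mark
      PROJECT_DIR = "/var/www/project"      # placeholder path

      for root, _dirs, files in os.walk(PROJECT_DIR):
          for name in files:
              if not name.endswith(".php"):
                  continue
              path = os.path.join(root, name)
              with open(path, "rb") as f:
                  data = f.read()
              if data.startswith(BOM):
                  print("stripping BOM from", path)
                  with open(path, "wb") as f:
                      f.write(data[len(BOM):])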

  • Echo cancellation

    - by Jorg B Jorge
    Can any of you suggest a good and stable echo cancellation package (GNU-licensed or not) to link with my videoconferencing application (C/C++, Windows / Linux / Mac OS X)? My application will be freeware, so I do not want to pay a fee for each user who downloads it.

    Read the article

  • case-insensitive string replace that works correctly with ligatures like "ß" <=> "ss"

    - by usr
    I have built a little ASP.NET form that searches for something and displays the results. I want to highlight the search string within the search results. Example: for the query "p", the results would be a<b>p</b>ple, banana, <b>p</b>lum. The code I have goes like this:

      public static string HighlightSubstring(string text, string substring)
      {
          var index = text.IndexOf(substring, StringComparison.CurrentCultureIgnoreCase);
          if (index == -1)
              return HttpUtility.HtmlEncode(text);
          string p0, p1, p2;
          text.SplitAt(index, index + substring.Length, out p0, out p1, out p2);
          return HttpUtility.HtmlEncode(p0) + "<b>" + HttpUtility.HtmlEncode(p1) +
                 "</b>" + HttpUtility.HtmlEncode(p2);
      }

    It mostly works, but try it for example with HighlightSubstring("ß", "ss"). This crashes, because under the German culture "ß" and "ss" are considered equal by the IndexOf method, yet they have different lengths! That would be fine if there were a way to find out how long the match in text actually is; remember that this length can be different from substring.Length. So how do I find out the length of the match that IndexOf produces in the presence of ligatures and other culture-specific character expansions (a ligature in this case)?

    Read the article
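
    To see why the matched length can differ from substring.Length, here is the underlying Unicode behaviour shown in Python (illustrating the rule only, not the .NET API): case folding expands "ß" to "ss", so a fold- or culture-aware search matches spans in the original text whose length differs from the pattern's.

      text = "Fußball"
      query = "ssb"

      print("ß".casefold())                       # 'ss' -- case folding expands the letter
      print(query.casefold() in text.casefold())  # True, though "ssb" never appears literally

      # The match is found in the *folded* string, so an index or length computed
      # there cannot be applied directly to the original text, which is exactly
      # the problem with taking substring.Length after a culture-aware IndexOf.
      folded = text.casefold()                    # 'fussball'
      i = folded.find(query.casefold())
      print(i, folded[i:i + len(query)])          # 2 'ssb'
      print(text[i:i + len(query)])               # 'ßba' -- wrong span in the original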

  • character_set_filesystem not present in show variables

    - by Diego
    I'm a little worried about the absence of this variable when I run SHOW VARIABLES. This is what I get when I execute show variables like 'char%':

      character_set_client      utf8
      character_set_connection  utf8
      character_set_database    utf8
      character_set_results     utf8
      character_set_server      utf8
      character_set_system      utf8
      character_sets_dir        /usr/share/mysql/charsets/

    I wonder why this is happening. What does it mean? Can I just add the variable to the my.cnf file? Thank you. Edit: sorry, I just noticed that I didn't say which variable I'm talking about (though it is in the title): the variable is character_set_filesystem. Thanks.

    Read the article
