Search Results

Search found 1474 results on 59 pages for 'unicode'.

Page 34/59 | < Previous Page | 30 31 32 33 34 35 36 37 38 39 40 41  | Next Page >

  • How to parse kanji numeric characters using ICU?

    - by Aki
    I'm writing a function using ICU to parse a Unicode string which consists of kanji numeric character(s) and want to return the integer value of the string. "五" = 5 "三十一" = 31 "五千九百七十二" = 5972 I'm setting the locale to Locale::getJapan() and using NumberFormat::parse() to parse the character string. However, whenever I pass it any kanji characters, the parse() method is returning U_INVALID_FORMAT_ERROR. Does anyone know if ICU supports kanji character strings in the NumberFormat::parse() method? I was hoping that since I'm setting the Locale to Japanese it would be able to parse kanji numeric values. Thanks! #include <iostream> #include <unicode/numfmt.h> using namespace std; int main(int argc, char **argv) { const Locale &jaLocale = Locale::getJapan(); UErrorCode status = U_ZERO_ERROR; NumberFormat *nf = NumberFormat::createInstance(jaLocale, status); UChar number[] = {0x4E94}; // Character for '5' in Japanese '五' UnicodeString numStr(number); Formattable formattable; nf->parse(numStr, formattable, status); if (U_FAILURE(status)) { cout << "error parsing as number: " << u_errorName(status) << endl; return(1); } cout << "long value: " << formattable.getLong() << endl; }
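    If NumberFormat turns out not to cover kanji numerals, a hand-rolled converter is one possible fallback. Below is a minimal Python sketch of that idea (the tables and function name are my own, not from the question, and it ignores 万/億 and other large units):

        # Kanji-numeral parser sketch: a digit is held until the next multiplier
        # (ten/hundred/thousand) scales it; a bare multiplier counts as 1.
        DIGITS = {'一': 1, '二': 2, '三': 3, '四': 4, '五': 5,
                  '六': 6, '七': 7, '八': 8, '九': 9}
        MULTIPLIERS = {'十': 10, '百': 100, '千': 1000}

        def parse_kanji_number(text):
            total, current = 0, 0
            for ch in text:
                if ch in DIGITS:
                    current = DIGITS[ch]
                elif ch in MULTIPLIERS:
                    total += (current or 1) * MULTIPLIERS[ch]
                    current = 0
                else:
                    raise ValueError('unsupported character: ' + ch)
            return total + current

        assert parse_kanji_number('五') == 5
        assert parse_kanji_number('三十一') == 31
        assert parse_kanji_number('五千九百七十二') == 5972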

    Read the article

  • How to localize an app on Google App Engine?

    - by Petri Pennanen
    What options are there for localizing an app on Google App Engine? How do you do it using Webapp, Django, web2py or [insert framework here]. 1. Readable URLs and entity key names Readable URLs are good for usability and search engine optimization (Stack Overflow is a good example of how to do it). On Google App Engine, key based queries are recommended for performance reasons. It follows that it is good practice to use the entity key name in the URL, so that the entity can be fetched from the datastore as quickly as possible. Currently I use the function below to create key names: import re import unicodedata def urlify(unicode_string): """Translates latin1 unicode strings to url friendly ASCII. Converts accented latin1 characters to their non-accented ASCII counterparts, converts to lowercase, converts spaces to hyphens and removes all characters that are not alphanumeric ASCII. Arguments unicode_string: Unicode encoded string. Returns String consisting of alphanumeric (ASCII) characters and hyphens. """ str = unicodedata.normalize('NFKD', unicode_string).encode('ASCII', 'ignore') str = re.sub('[^\w\s-]', '', str).strip().lower() return re.sub('[-\s]+', '-', str) This works fine for English and Swedish, however it will fail for non-western scripts and remove letters from some western ones (like Norwegian and Danish with their œ and ø). Can anyone suggest a method that works with more languages? 2. Translating templates Does Django internationalization and localization work on Google App Engine? Are there any extra steps that must be performed? Is it possible to use Django i18n and l10n for Django templates while using Webapp? The Jinja2 template language provides integration with Babel. How well does this work, in your experience? What options are available for your chosen template language? 3. Translated datastore content When serving content from (or storing it to) the datastore: Is there a better way than getting the Accept-Language header from the HTTP request and matching this with a language property that you have set with each entity?
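    On the key-name question, one hedged option is to stop forcing ASCII and keep Unicode letters in the slug, which works for any script that the datastore and RFC 3987 IRIs can carry; a Python 3 sketch (slugify_unicode is my own name, not an App Engine API):

        import re
        import unicodedata

        def slugify_unicode(value):
            """Keep Unicode letters and digits instead of stripping to ASCII."""
            value = unicodedata.normalize('NFKC', value)
            value = re.sub(r'[^\w\s-]', '', value).strip().lower()  # \w is Unicode-aware in Python 3
            return re.sub(r'[-\s]+', '-', value)

        print(slugify_unicode('Blåbærsyltetøy på brød'))  # -> blåbærsyltetøy-på-brød
        print(slugify_unicode('北京 欢迎 你'))              # -> 北京-欢迎-你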

    Read the article

  • IDN aware tools to encode/decode human readable IRI to/from valid URI

    - by Denis Otkidach
    Let's assume a user enters the address of some resource and we need to translate it to: <a href="valid URI here">human readable form</a> The HTML4 specification refers to RFC 3986, which allows only ASCII alphanumeric characters and dash in the host part, and all non-ASCII characters in other parts should be percent-encoded. That's what I want to put in the href attribute to make the link work properly in all browsers. IDNs should be encoded with Punycode. The HTML5 draft refers to RFC 3987, which also allows percent-encoded unicode characters in the host part and a large subset of unicode in both host and other parts without encoding them. The user may enter the address in any of these forms. To provide a human readable form of it I need to decode all printable characters. Note that some parts of the address might not correspond to valid UTF-8 sequences, usually when the target site uses some other character encoding. An example of what I'd like to get: <a href="http://xn--80aswg.xn--p1ai/%D0%BF%D1%83%D1%82%D1%8C?%D0%B7%D0%B0%D0%BF%D1%80%D0%BE%D1%81"> http://сайт.рф/путь?запрос</a> Are there any tools to solve these tasks? I'm especially interested in libraries for Python and JavaScript.
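    For the Python side, the standard library already covers a usable subset of this (IDNA 2003 via the 'idna' codec plus percent-encoding via urllib); a hedged sketch that ignores ports and userinfo:

        from urllib.parse import urlsplit, urlunsplit, quote, unquote

        def iri_to_uri(iri):
            """Human-readable IRI -> ASCII URI (stdlib only, IDNA 2003)."""
            p = urlsplit(iri)
            host = p.hostname.encode('idna').decode('ascii') if p.hostname else ''
            # keep '/' and any existing %-escapes untouched
            return urlunsplit((p.scheme, host, quote(p.path, safe='/%'),
                               quote(p.query, safe='=&%'), p.fragment))

        def uri_to_iri(uri):
            """ASCII URI -> readable form; undecodable bytes become U+FFFD."""
            p = urlsplit(uri)
            host = p.hostname.encode('ascii').decode('idna') if p.hostname else ''
            return urlunsplit((p.scheme, host, unquote(p.path, errors='replace'),
                               unquote(p.query, errors='replace'), p.fragment))

        print(iri_to_uri('http://сайт.рф/путь?запрос'))

    The third-party idna package implements the newer IDNA 2008/UTS 46 rules if the stdlib codec's behaviour is not strict enough.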

    Read the article

  • Flex special characters not embedding

    - by Hanpan
    Hi, I am using the following code to embed Arial into my application: [Embed(source='../assets/fonts/Arial.ttf',fontFamily='CustomFont',fontWeight='regular', unicodeRange='U+0020-U+0040,U+0041-U+005A,U+005B-U+0060,U+0061-U+007A,U+007B-U+007E,U+0080-U+00FF,U+0100-U+017F,U+0400-U+04FF,U+0370-U+03FF,U+1E00-U+1EFF,U+2022,U+2219,U+20AC-U+21AC', mimeType='application/x-font-truetype' )] public static var MY_FONT:Class; [Embed(source='../assets/fonts/Arial Bold.ttf',fontFamily='CustomFont',fontWeight='bold', unicodeRange='U+0020-U+0040,U+0041-U+005A,U+005B-U+0060,U+0061-U+007A,U+007B-U+007E,U+0080-U+00FF,U+0100-U+017F,U+0400-U+04FF,U+0370-U+03FF,U+1E00-U+1EFF,U+2022,U+2219,U+20AC-U+21AC', mimeType='application/x-font-truetype' )] public static var MY_FONT_BOLD:Class; [Embed(source='../assets/fonts/Arial Italic.ttf',fontFamily='CustomFont',fontWeight='regular',fontStyle="italic", unicodeRange='U+0020-U+0040,U+0041-U+005A,U+005B-U+0060,U+0061-U+007A,U+007B-U+007E,U+0080-U+00FF,U+0100-U+017F,U+0400-U+04FF,U+0370-U+03FF,U+1E00-U+1EFF,U+2022,U+2219,U+20AC-U+21AC', mimeType='application/x-font-truetype' )] public static var MY_FONT_ITALIC:Class; [Embed(source='../assets/fonts/Arial Bold Italic.ttf',fontFamily='CustomFont',fontWeight='bold',fontStyle="italic", unicodeRange='U+0020-U+0040,U+0041-U+005A,U+005B-U+0060,U+0061-U+007A,U+007B-U+007E,U+0080-U+00FF,U+0100-U+017F,U+0400-U+04FF,U+0370-U+03FF,U+1E00-U+1EFF,U+2022,U+2219,U+20AC-U+21AC', mimeType='application/x-font-truetype' )] public static var MY_FONT_ITALIC_BOLD:Class; [Embed(source='../assets/fonts/Arial Unicode.ttf',fontFamily='CustomFont',fontWeight='regular', unicodeRange='U+0020-U+0040,U+0041-U+005A,U+005B-U+0060,U+0061-U+007A,U+007B-U+007E,U+0080-U+00FF,U+0100-U+017F,U+0400-U+04FF,U+0370-U+03FF,U+1E00-U+1EFF,U+2022,U+2219,U+20AC-U+21AC', mimeType='application/x-font-truetype' )] public static var MY_FONT_UNICODE:Class; It's working fine for foreign characters, but no special characters (copyright, trademark, euro sign etc) are working. Can anyone help? I've checked my unicode ranges, they should work fine!

    Read the article

  • .Net Finalizer Order / Semantics in Esent and Ravendb

    - by mattcodes
    Help me understand. I've read that "The time and order of execution of finalizers cannot be predicted or pre-determined". Correct? However, looking at the RavenDB source code (TransactionStorage.cs) I see this ~TransactionalStorage() { try { Trace.WriteLine( "Disposing esent resources from finalizer! You should call TransactionalStorage.Dispose() instead!"); Api.JetTerm2(instance, TermGrbit.Abrupt); } catch (Exception exception) { try { Trace.WriteLine("Failed to dispose esent instance from finalizer because: " + exception); } catch { } } } The Api class (which belongs to Managed Esent) presumably takes handles on native resources, presumably using a SafeHandle? So if I understand correctly the native handles' SafeHandle can be finalized before TransactionStorage, which could have undesired effects, perhaps why Ayende has added a catch-all clause around this? Actually, diving into the Esent code, it does not use SafeHandles. According to CLR via C# this is dangerous? internal static class SomeType { [DllImport("Kernel32", CharSet=CharSet.Unicode, EntryPoint="CreateEvent")] // This prototype is not robust private static extern IntPtr CreateEventBad( IntPtr pSecurityAttributes, Boolean manualReset, Boolean initialState, String name); // This prototype is robust [DllImport("Kernel32", CharSet=CharSet.Unicode, EntryPoint="CreateEvent")] private static extern SafeWaitHandle CreateEventGood( IntPtr pSecurityAttributes, Boolean manualReset, Boolean initialState, String name) public static void SomeMethod() { IntPtr handle = CreateEventBad(IntPtr.Zero, false, false, null); SafeWaitHandle swh = CreateEventGood(IntPtr.Zero, false, false, null); } } Managed Esent (NativeMethods.cs) looks like this (using ints vs IntPtrs?): [DllImport(EsentDll, CharSet = EsentCharSet, ExactSpelling = true)] public static extern int JetCreateDatabase(IntPtr sesid, string szFilename, string szConnect, out uint dbid, uint grbit); Is Managed Esent handling finalization/disposal the correct way, and second, is RavenDB handling finalizers the correct way or compensating for Managed Esent?

    Read the article

  • ODBC and NLS_LANG

    - by Michael S.
    Let's say that I've created two different program executables, e.g. in C++. For some reason, the two programs' internal representations of text are different from each other. Let's say the first program is using text representation A and the other text representation B. It could be a specific 8-bit ANSI codepage, Unicode/UTF-8 or Unicode/UTF-16 or whatever. Now each program wants to communicate text (add/retrieve data) to/from the same database table on a (database) server. Each program communicates with the database through ODBC, so the programs do not know what database system they are communicating with. In this specific case the database is actually an Oracle RDBMS database and the database server administrator has set up the database to use UTF-8. On the system on which the programs are running an appropriate ODBC driver is available, so that the programs can connect through ODBC. Each program will treat and convert from the ODBC data type SQL_C_CHAR to its internal text representation appropriately. I assume that the programs can do no other than assume a specific encoding returned for SQL_C_CHAR text; if not, the programs have to be told which encoding that is. For Oracle, I know that the NLS_LANG environment variable can be used on the client. I assume it affects the ODBC driver (related to SQL_C_CHAR) to convert from a specific encoding (as given by NLS_LANG) to the internal encoding of the database (in this example UTF-8) and vice-versa. If the machine running my programs has NLS_LANG set, this setting will affect the byte sequences returned for SQL_C_CHAR, so my programs cannot suddenly assume a specific encoding for the text returned via SQL_C_CHAR. Is it possible to set up the ODBC connection (preferably programmatically at runtime), so that it takes care of text conversions appropriately for the two programs, i.e. from/to representation A to/from UTF-8 and from/to representation B to/from UTF-8? Regards, /Michael PS. As the programs are connecting through ODBC I don't think it would be nice that they should know anything about NLS_LANG, as this is an Oracle-specific environment variable.
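    For what it's worth, some ODBC bindings do expose exactly this kind of per-connection, runtime conversion setting. A hedged Python/pyodbc sketch for comparison (the DSN and credentials are made up, and whether a given Oracle ODBC driver honours these settings is an assumption):

        import pyodbc

        cnxn = pyodbc.connect('DSN=OracleUTF8;UID=scott;PWD=tiger')  # hypothetical DSN

        # How to decode SQL_CHAR / SQL_WCHAR buffers coming back from the driver,
        # and how to encode Python strings sent to it.
        cnxn.setdecoding(pyodbc.SQL_CHAR, encoding='utf-8')
        cnxn.setdecoding(pyodbc.SQL_WCHAR, encoding='utf-16-le')
        cnxn.setencoding(encoding='utf-8')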

    Read the article

  • Django db encoding

    - by realshadow
    Hey, I have a little problem with encoding. The data in the db is OK, and when I select the data in php it's OK. The problem comes when I get the data and try to print it in the template: I get - Å port instead of Šport, etc. Everything is set to utf-8 - in settings.py, the meta tags in the template, the db table, and I even have a __unicode__ method specified for the model, but nothing seems to work. I am getting pretty hopeless here... Here is some code: class Category_info(models.Model): objtree_label_id = models.AutoField(primary_key = True) node_id = models.IntegerField(unique = True) language_id = models.IntegerField() label = models.CharField(max_length = 255) type_id = models.IntegerField() class Meta: db_table = 'objtree_labels' def __unicode__(self): return self.label I have even tried with return u"%s" % self.label. Here is the view: def categories_list(request): categories_list = Category.objects.filter(parent_id = 1, status = 1) paginator = Paginator(categories_list, 10) try: page = int(request.GET.get('page', 1)) except ValueError: page = 1 try: categories = paginator.page(page) except (EmptyPage, InvalidPage): categories = paginator.page(paginator.num_pages) return render_to_response('categories_list.html', {'categories': categories}) Maybe I am just blind and/or stupid, but it just doesn't work. So any help is appreciated, thanks in advance. Regards
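    The symptom is the classic UTF-8-bytes-decoded-as-Latin-1 mojibake, which usually points at the database connection's charset rather than the template or model; a quick Python illustration of the mismatch (diagnosis only, not a fix):

        # 'Šport' stored as UTF-8 but read back as Latin-1 reproduces the symptom.
        broken = 'Šport'.encode('utf-8').decode('latin-1')
        print(broken)                                    # 'Å port' (Å + no-break space + 'port')
        print(broken.encode('latin-1').decode('utf-8'))  # round-trips back to 'Šport'

    If the backend is MySQL, checking that the connection itself declares utf8 (for example a charset option passed through the Django database settings) is a reasonable first step; which option applies depends on the backend, so treat that as an assumption.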

    Read the article

  • EasyHook Windows Hooking problem/.dll injection

    - by Tom
    OK, can someone try to find the error in this code? It should report every registry key each time something accesses one, but I keep getting: System.MissingMethodException: The given method does not exist at EasyHook.LocalHook.GetProcAdress(String InModule, String InChannelName) Example code can be found here: http://www.codeproject.com/KB/DLL/EasyHook64.aspx I can get the CreateFileW example to work! My code is here: public class Main : EasyHook.IEntryPoint { FileMon.FileMonInterface Interface; LocalHook LocalHook; Stack<String> Queue = new Stack<String>(); public Main(RemoteHooking.IContext InContext,String InChannelName) { // connect to host... Interface = RemoteHooking.IpcConnectClient<FileMon.FileMonInterface>(InChannelName); Interface.Ping(); } public void Run(RemoteHooking.IContext InContext,String InChannelName) { // install hook... try { LocalHook localHook = LocalHook.Create(LocalHook.GetProcAddress("Advapi32.dll", "RegOpenKeyExW"),new DMyRegOpenKeyExW(MyRegOpenKeyExW),this); localHook.ThreadACL.SetExclusiveACL(new int[] { }); } catch (Exception ExtInfo) { Interface.ReportException(ExtInfo); return; } Interface.IsInstalled(RemoteHooking.GetCurrentProcessId()); RemoteHooking.WakeUpProcess(); // wait for host process termination... try { while (true) { Thread.Sleep(500); // transmit newly monitored file accesses... if (Queue.Count > 0) { String[] Package = null; lock (Queue) { Package = Queue.ToArray(); Queue.Clear(); } Interface.OnCreateFile(RemoteHooking.GetCurrentProcessId(), Package); } else Interface.Ping(); } } catch { // Ping() will raise an exception if host is unreachable } } [DllImport("Advapi32.dll", CharSet = CharSet.Unicode, SetLastError = true, CallingConvention = CallingConvention.StdCall)] static extern int RegOpenKeyExW(UIntPtr hKey,string subKey,int ulOptions,int samDesired,out UIntPtr hkResult); [UnmanagedFunctionPointer(CallingConvention.StdCall, CharSet = CharSet.Unicode, SetLastError = true)] delegate int DMyRegOpenKeyExW(UIntPtr hKey,string subKey,int ulOptions,int samDesired,out UIntPtr hkResult); int MyRegOpenKeyExW(UIntPtr hKey,string subKey,int ulOptions,int samDesired,out UIntPtr hkResult) { Console.WriteLine(string.Format("Accessing: {0}", subKey)); return RegOpenKeyExW(hKey, subKey, ulOptions, samDesired, out hkResult); } }

    Read the article

  • Theory: "Lexical Encoding"

    - by _ande_turner_
    I am using the term "Lexical Encoding" for lack of a better one. A Word is arguably the fundamental unit of communication as opposed to a Letter. Unicode tries to assign a numeric value to each Letter of all known Alphabets. What is a Letter to one language is a Glyph to another. Unicode 5.1 assigns more than 100,000 values to these Glyphs currently. Out of the approximately 180,000 Words being used in Modern English, it is said that with a vocabulary of about 2,000 Words, you should be able to converse in general terms. A "Lexical Encoding" would encode each Word not each Letter, and encapsulate them within a Sentence. // A simplified example of a "Lexical Encoding" String sentence = "How are you today?"; int[] sentence = { 93, 22, 14, 330, QUERY }; In this example each Token in the String was encoded as an Integer. The Encoding Scheme here simply assigned an int value based on generalised statistical ranking of word usage, and assigned a constant to the question mark. Ultimately, a Word has both a Spelling & Meaning though. Any "Lexical Encoding" would preserve the meaning and intent of the Sentence as a whole, and not be language specific. An English sentence would be encoded into "...language-neutral atomic elements of meaning ..." which could then be reconstituted into any language with a structured Syntactic Form and Grammatical Structure. What are other examples of "Lexical Encoding" techniques? If you are interested in where the word-usage statistics come from: http://www.wordcount.org
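    A toy Python sketch of the word-level half of the idea, leaving aside the much harder language-neutral "meaning" part (the rank numbers mirror the example above and are otherwise invented):

        # Frequency-ranked lexicon: lower int = more common word (invented numbers).
        LEXICON = {'how': 93, 'are': 22, 'you': 14, 'today': 330}
        QUERY = -1   # sentinel for '?'

        def encode(sentence):
            tokens = sentence.lower().rstrip('?').split()
            codes = [LEXICON[t] for t in tokens]
            if sentence.endswith('?'):
                codes.append(QUERY)
            return codes

        print(encode('How are you today?'))   # -> [93, 22, 14, 330, -1]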

    Read the article

  • How can I see which shared folders my program has access to?

    - by Kasper Hansen
    My program needs to read and write to folders on other machines that might be in another domain. So I used System.Runtime.InteropServices to add shared folders. This worked fine when it was hard coded in the main menu of my windows service. But since then something went wrong and I don't know if it is a coding error or a configuration error. What is the scope of a shared folder? If a thread in my program adds a shared folder, can the entire local machine see it? Is there a way to view what shared folders have been added? Or is there a way to see when a folder is added? [DllImport("NetApi32.dll", SetLastError = true, CharSet = CharSet.Unicode)] internal static extern System.UInt32 NetUseAdd(string UncServerName, int Level, ref USE_INFO_2 Buf, out uint ParmError); [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)] internal struct USE_INFO_2 { internal LPWSTR ui2_local; internal LPWSTR ui2_remote; internal LPWSTR ui2_password; internal DWORD ui2_status; internal DWORD ui2_asg_type; internal DWORD ui2_refcount; internal DWORD ui2_usecount; internal LPWSTR ui2_username; internal LPWSTR ui2_domainname; } private void AddSharedFolder(string name, string domain, string username, string password) { if (name == null || domain == null || username == null || password == null) return; USE_INFO_2 useInfo = new USE_INFO_2(); useInfo.ui2_remote = name; useInfo.ui2_password = password; useInfo.ui2_asg_type = 0; //disk drive useInfo.ui2_usecount = 1; useInfo.ui2_username = username; useInfo.ui2_domainname = domain; uint paramErrorIndex; uint returnCode = NetUseAdd(String.Empty, 2, ref useInfo, out paramErrorIndex); if (returnCode != 0) { throw new Win32Exception((int)returnCode); } }

    Read the article

  • can javascript process binary data?

    - by Johnny
    Let me describe my question in a situation-oriented way. Assume IE is still the dominant web browser (Firefox has documentation for binary processing): XMLHttpRequest.responseText and XMLHttpRequest.responseXML expect text or xml/xhtml/html, but what if the server answers the XMLHttpRequest with MIME type application/octet-stream? Would every char of the response string be less than 256 (char < 256)? Thanks very much for a straight answer; I have no web server environment, so I don't know how to test it out. I ask because using txt or xml raises character-set encoding issues, and I don't know how to process a CDATA node of an encoded xml (e.g. utf-8, ascii, gb18030) with javascript: when I get the node text, does the document object return me bytes or decoded chars? If it returns decoded chars according to the charset indicated in the HTTP response header, it could all be wrong. To avoid messing with charsets I would like the server to respond with octet data and force string data to be encoded as utf-8, or another charset, in binary form. If the response is octets, I guess the browser would not try to decode the response "txt". Does this sound weird, or am I misunderstanding something fundamental? EDIT: I believe the question is asking this: Can Javascript safely process strings that aren't encoded in Unicode? What are the problems with trying to do so? EDIT: no no no, I mean: if the HTTP content-type header is "application/octet", would IE try to decode it as (16-bit Unicode | IE's local charset setting) when I read XMLHttpRequestObj.responseText from javascript? Or does it just wrap every single byte of the response body into a javascript string, so that every char in that string is less than or equal to 256 (char <= 256)? Am I talking Martian? Sadly, if I were a Martian I would come as a tourist without fuzzy questions; however I am in a country which shares at least one property with Mars: RED

    Read the article

  • django simple approach to multi-field search

    - by Scott Willman
    I have a simple address book app that I want to make searchable. The model would look something like: class Address(models.Model): address1 = models.CharField("Address Line 1", max_length=128) address2 = models.CharField("Address Line 2", max_length=128) city = models.CharField("City", max_length=128) state = models.CharField("State", max_length=24) zipCode = models.CharField("Zip Code", max_length=24) def __unicode__(self): return "%s %s, %s, %s, %s" % (self.address1, self.address2, self.city, self.state, self.zipCode) class Entry(models.Model): name = models.CharField("Official School Name", max_length=128) createdBy = models.ForeignKey(User) address = models.ForeignKey(Address, unique=True) def __unicode__(self): return "%s - %s, %s" % (self.name, self.address.city, self.address.state) I want the searching to be fairly loose, like: Bank of America Los Angeles 91345. It seems like I want a field that combines all of those elements into one searchable string, but that also seems redundant. I was hoping I could add a method to the Entry model like this: def _getSearchText(self): return "%s %s %s" % (self.name, self.address, self.mascot) searchText = property(_getSearchText) ...and search that as a field, but I suppose that's wishful thinking... How should I approach this using basic Django and SQLite (this is a learning exercise)? Thank you!!
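    One common approach with plain Django is to split the query into terms and AND together per-term filters that each OR across the interesting fields; a minimal sketch against the models above (search_entries is my own helper name):

        from django.db.models import Q

        def search_entries(query):
            """Every term must match at least one field (case-insensitive)."""
            results = Entry.objects.all()
            for term in query.split():
                results = results.filter(
                    Q(name__icontains=term) |
                    Q(address__address1__icontains=term) |
                    Q(address__city__icontains=term) |
                    Q(address__state__icontains=term) |
                    Q(address__zipCode__icontains=term)
                )
            return results

        # search_entries("Bank of America Los Angeles 91345")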

    Read the article

  • Django: Applying Calculations To A Query Set

    - by TheLizardKing
    I have a QuerySet that I wish to pass to a generic view for pagination: links = Link.objects.annotate(votes=Count('vote')).order_by('-created')[:300] This is my "hot" page which lists my 300 latest submissions (10 pages of 30 links each). I want to now sort this QuerySet by an algorithm that HackerNews uses: (p - 1) / (t + 2)^1.5 p = votes minus submitter's initial vote t = age of submission in hours Now because applying this algorithm over the entire database would be pretty costly I am content with just the last 300 submissions. My site is unlikely to be the next digg/reddit, so while scalability is a plus it is not required. My question now is: how do I iterate over my QuerySet and sort it by the above algorithm? For more information, here are my applicable models: class Link(models.Model): category = models.ForeignKey(Category, blank=False, default=1) user = models.ForeignKey(User) created = models.DateTimeField(auto_now_add=True) modified = models.DateTimeField(auto_now=True) url = models.URLField(max_length=1024, unique=True, verify_exists=True) name = models.CharField(max_length=512) def __unicode__(self): return u'%s (%s)' % (self.name, self.url) class Vote(models.Model): link = models.ForeignKey(Link) user = models.ForeignKey(User) created = models.DateTimeField(auto_now_add=True) def __unicode__(self): return u'%s vote for %s' % (self.user, self.link) Notes: I don't have "downvotes" so just the presence of a Vote row is an indicator of a vote for a particular link by a particular user.
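    Since the ranking only has to cover 300 rows, one option is to compute the score in Python after the query and sort the resulting list before handing it to the paginator; a sketch along those lines (assumes the queryset above and a timezone-naive created field):

        from datetime import datetime
        from django.db.models import Count

        def hot_score(link, now=None):
            """HN-style score: (p - 1) / (t + 2) ** 1.5, with t = age in hours."""
            now = now or datetime.now()
            age_hours = (now - link.created).total_seconds() / 3600.0
            return (link.votes - 1) / (age_hours + 2) ** 1.5

        links = list(Link.objects.annotate(votes=Count('vote')).order_by('-created')[:300])
        links.sort(key=hot_score, reverse=True)   # hottest first; then paginate the list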

    Read the article

  • Django: Determining if a user has voted or not

    - by TheLizardKing
    I have a long list of links that I spit out using the below code, total votes, submitted by, the usual stuff, but I am not 100% sure how to determine if the currently logged in user has voted on a link or not. I know how to do this from within my view, but do I need to alter my below view code or can I make use of the way templates work to determine it? I have read http://stackoverflow.com/questions/1528583/django-vote-up-down-method but I don't quite understand what's going on (and don't need any of the javascript). Models (snippet): class Link(models.Model): category = models.ForeignKey(Category, blank=False, default=1) user = models.ForeignKey(User) created = models.DateTimeField(auto_now_add=True) modified = models.DateTimeField(auto_now=True) url = models.URLField(max_length=1024, unique=True, verify_exists=True) name = models.CharField(max_length=512) def __unicode__(self): return u'%s (%s)' % (self.name, self.url) class Vote(models.Model): link = models.ForeignKey(Link) user = models.ForeignKey(User) created = models.DateTimeField(auto_now_add=True) def __unicode__(self): return u'%s vote for %s' % (self.user, self.link) Views (snippet): def hot(request): links = Link.objects.select_related().annotate(votes=Count('vote')).order_by('-created') for link in links: delta_in_hours = (int(datetime.now().strftime("%s")) - int(link.created.strftime("%s"))) / 3600 link.popularity = ((link.votes - 1) / (delta_in_hours + 2)**1.5) if request.user.is_authenticated(): try: link.voted = Vote.objects.get(link=link, user=request.user) except Vote.DoesNotExist: link.voted = None links = sorted(links, key=lambda x: x.popularity, reverse=True) links = paginate(request, links, 15) return direct_to_template( request, template = 'links/link_list.html', extra_context = { 'links': links, }) The above view actually accomplishes what I need, but in what I believe to be a horribly inefficient way. This causes the dreaded n+1 queries; as it stands that's 33 queries for a page containing just 29 links, while originally I got away with just 4 queries. I would really prefer to do this using Django's ORM or at least .extra(). Any advice?
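    One way to avoid the per-link lookups is to fetch the ids of all links the user has voted on in a single extra query and test membership in Python; a hedged sketch against the models above:

        if request.user.is_authenticated():
            voted_ids = set(
                Vote.objects.filter(user=request.user, link__in=links)
                            .values_list('link_id', flat=True)
            )
        else:
            voted_ids = set()

        for link in links:
            link.voted = link.id in voted_ids   # True/False instead of a Vote instance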

    Read the article

  • Django 1.1 template question

    - by Bovril
    Hi All, I'm a little stuck trying to get my head around a django template. I have 2 objects, a cluster and a node. I would like a simple page that lists... [Cluster 1] [associated node 1] [associated node 2] [associated node 3] [Cluster 2] [associated node 4] [associated node 5] [associated node 6] I've been using Django for about 2 days, so if I've missed the point, please be gentle :) Models - class Node(models.Model): name = models.CharField(max_length=30) description = models.TextField() cluster = models.ForeignKey(Cluster) def __unicode__(self): return self.name class Cluster(models.Model): name = models.CharField(max_length=30) description = models.TextField() def __unicode__(self): return self.name Views - def DSAList(request): clusterlist = Cluster.objects.all() nodelist = Node.objects.all() t = loader.get_template('dsalist.html') v = Context({ 'CLUSTERLIST' : clusterlist, 'NODELIST' : nodelist, }) return HttpResponse(t.render(v)) Template - <body> <TABLE> {% for cluster in CLUSTERLIST %} <tr> <TD>{{ cluster.name }}</TD> {% for node in NODELIST %} {% if node.cluster.id == cluster.id %} <tr> <TD>{{ node.name }}</TD> </tr> {% endif %} {% endfor %} </tr> {% endfor %} </TABLE> </body> Any ideas?
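    For what it's worth, the inner loop plus if-test can usually be replaced by following the reverse ForeignKey in the template, e.g. {% for node in cluster.node_set.all %} inside the outer loop, so the view only needs to hand over the clusters. A hedged Python sketch of that simplified view (node_set is Django's default reverse accessor, assuming no related_name override):

        from django.shortcuts import render_to_response

        def dsa_list(request):
            clusters = Cluster.objects.all()
            # Each cluster's nodes are reachable in the template via cluster.node_set.all
            return render_to_response('dsalist.html', {'CLUSTERLIST': clusters})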

    Read the article

  • How to retain similar character encoding

    - by Mystere Man
    I have a logfile that contains the half character ½. I need to process this log file and rewrite certain lines, which contain that character, to a new file. However, when I write out the file the characters appear in notepad incorrectly. I know this is some kind of encoding issue, and I'm not sure if it's just that the files I'm writing don't contain the correct BOM or what. I've tried reading and writing the file with all the available encoding options in the Encoding enumeration. I'm using this code: string line; // Note I've used every version of the Encoding enumeration using (StreamReader sr = new StreamReader(file, Encoding.Unicode)) using (StreamWriter sw = new StreamWriter(newfile, false, Encoding.Unicode)) { while ((line = sr.ReadLine()) != null) { // process code, I do not alter the lines, they are copied verbatim // but I do not write every line that I read. sw.WriteLine(line); } } When I view the original log in notepad, the half character displays correctly. When I view the new file, it does not. Can anyone help me to solve this?

    Read the article

  • C++ UTF-8 output with ICU

    - by Isaac
    I'm struggling to get started with the C++ ICU library. I have tried to get the simplest example to work, but even that has failed. I would just like to output a UTF-8 string and then go from there. Here is what I have: #include <unicode/unistr.h> #include <unicode/ustream.h> #include <iostream> int main() { UnicodeString s = UNICODE_STRING_SIMPLE("привет"); std::cout << s << std::endl; return 0; } Here is the output: $ g++ -I/sw/include -licucore -Wall -Werror -o icu_test main.cpp $ ./icu_test пÑÐ¸Ð²ÐµÑ My terminal and font support UTF-8 and I regularly use the terminal with UTF-8. My source code is in UTF-8. I think that perhaps I somehow need to set the output stream to UTF-8 because ICU stores strings as UTF-16, but I'm really not sure and I would have thought that the operators provided by ustream.h would do that anyway. Any help would be appreciated, thank you.

    Read the article

  • Why initialize an object to empty

    - by ProgEnthu
    I am learning windows programming with the help of MSDN. Why would somebody initialize an object like the following? WNDCLASS wc = { }; Will this zero all the memory of the object? The whole source code follows: #ifndef UNICODE #define UNICODE #endif #include <windows.h> LRESULT CALLBACK WindowProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam); int WINAPI wWinMain(HINSTANCE hInstance, HINSTANCE, PWSTR pCmdLine, int nCmdShow) { // Register the window class. const wchar_t CLASS_NAME[] = L"Sample Window Class"; WNDCLASS wc = { }; wc.lpfnWndProc = WindowProc; wc.hInstance = hInstance; wc.lpszClassName = CLASS_NAME; RegisterClass(&wc); // Create the window. HWND hwnd = CreateWindowEx( 0, // Optional window styles. CLASS_NAME, // Window class L"Learn to Program Windows", // Window text WS_OVERLAPPEDWINDOW, // Window style // Size and position CW_USEDEFAULT, CW_USEDEFAULT, CW_USEDEFAULT, CW_USEDEFAULT, NULL, // Parent window NULL, // Menu hInstance, // Instance handle NULL // Additional application data ); if (hwnd == NULL) { return 0; } ShowWindow(hwnd, nCmdShow); // Run the message loop. MSG msg = { }; while (GetMessage(&msg, NULL, 0, 0)) { TranslateMessage(&msg); DispatchMessage(&msg); } return 0; } LRESULT CALLBACK WindowProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam) { switch (uMsg) { case WM_DESTROY: PostQuitMessage(0); return 0; case WM_PAINT: { PAINTSTRUCT ps; HDC hdc = BeginPaint(hwnd, &ps); FillRect(hdc, &ps.rcPaint, (HBRUSH) (COLOR_WINDOW+1)); EndPaint(hwnd, &ps); } return 0; } return DefWindowProc(hwnd, uMsg, wParam, lParam); }

    Read the article

  • Javascript memory leak/ performance issue?

    - by Tom
    I just cannot for the life of me figure out this memory leak in Internet Explorer. insertTags simply takes string str and places each word within start and end tags for HTML (usually anchor tags). transliterate is for Arabic numbers, and replaces normal numbers 0-9 with an &#...n; XML entity for their Arabic counterparts. fragment = document.createDocumentFragment(); for (i = 0, e = response.verses.length; i < e; i++) { fragment.appendChild((function(){ p = document.createElement('p'); p.setAttribute('lang', (response.unicode) ? 'ar' : 'en'); p.innerHTML = ((response.unicode) ? (response.surah + ':' + (i+1)).transliterate() : response.surah + ':' + (i+1)) + ' ' + insertTags(response.verses[i], '<a href="#" onclick="window.popup(this);return false;" class="match">', '</a>'); try { return p } finally { p = null; } })()); } params[0].appendChild( fragment ); fragment = null; I would love some links other than MSDN and about.com, because neither of them has sufficiently explained to me why my script leaks memory. I am sure this is the problem, because without it everything runs fast (but nothing displays). I've read that doing a lot of DOM manipulations can be dangerous, but the for loop runs a max of 286 times (# of verses in surah 2, the longest surah in the Qur'an).

    Read the article

  • Update table using SSIS

    - by thursdaysgeek
    I am trying to update a field in a table with data from another table, based on a common key. If it were in straight SQL, it would be something like: Update EHSIT set e.IDMSObjID = s.IDMSObjID from EHSIT e, EHSIDMS s where e.SITENUM = s.SITE_CODE However, the two tables are not in the same database, so I'm trying to use SSIS to do the update. Oh, and the sitenum/site_code are varchar in one and nvarchar in the other, so I'll have to do a data conversion so they'll match. How do I do it? I have a data flow object, with the source as EHSIDMS and the destination as EHSIT. I have a data conversion to convert the unicode to non-unicode. But how do I update based on the match? I've tried with the destination, using a SQL Command as the Data Access mode, but it doesn't appear to have the source table. If I just map the field to be updated, how does it limit it based on fields matching? I'm about to export my source table to Excel or something, and then try inputting from there, although it seems that all that would get me would be to remove the data conversion step. Shouldn't there be an update data task or something? Is it one of those Data Flow transformation tasks, and I'm just not figuring out which it is?

    Read the article

  • Error in django using Apache & mod_wsgi

    - by Ignacio
    Hey, I've been doing some changes to my django development env, as some of you suggested. So far I've managed to configure and run it successfully with postgres. Now I'm trying to run the app using apache2 and mod_wsgi, but I ran into this little problem after I followed the guidelines from the django docs. When I access localhost/myapp/tasks this error is raised: Request Method: GET Request URL: http://localhost/myapp/tasks/ Exception Type: TemplateSyntaxError Exception Value: Caught an exception while rendering: argument 1 must be a string or unicode object Original Traceback (most recent call last): File "/usr/local/lib/python2.6/dist-packages/django/template/debug.py", line 71, in render_node result = node.render(context) File "/usr/local/lib/python2.6/dist-packages/django/template/defaulttags.py", line 126, in render len_values = len(values) File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 81, in __len__ self._result_cache = list(self.iterator()) File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 238, in iterator for row in self.query.results_iter(): File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/query.py", line 287, in results_iter for rows in self.execute_sql(MULTI): File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/query.py", line 2369, in execute_sql cursor.execute(sql, params) File "/usr/local/lib/python2.6/dist-packages/django/db/backends/util.py", line 19, in execute return self.cursor.execute(sql, params) TypeError: argument 1 must be a string or unicode object ... ... ... And then it highlights a {% for t in tasks %} template tag, as if the source of the problem is there, but it worked fine on the built-in server. The view associated with that page is really simple, just fetch all Task objects. And the template just displays them in a table. Also, some pages get rendered OK. I don't want to fill this question with code, so if you need some more info I'd be glad to provide it. Thanks

    Read the article

  • Is there any time estimation about sqlite3's open speed?

    - by sxingfeng
    I am using SQLite3 in C++, and I found the opening time of SQLite seems unstable the first time (I mean, after starting Windows and opening the db for the first time). It takes a long time on a 50 MB db, about 10 s on Windows, and varies between runs. Has anyone met the same problem? I am writing a desktop application for Windows, so the opening speed is really important for me. Thanks in advance! int nRet; #if defined(_UNICODE) || defined(UNICODE) nRet = sqlite3_open16(szFile, &mpDB); // not tested under window 98 #else // For Ansi Version //*************- Added by Begemot szFile must be in unicode- 23/03/06 11:04 - **** OSVERSIONINFOEX osvi; ZeroMemory(&osvi, sizeof(OSVERSIONINFOEX)); osvi.dwOSVersionInfoSize = sizeof(OSVERSIONINFOEX); GetVersionEx ((OSVERSIONINFO *) &osvi); if ( osvi.dwMajorVersion == 5) { WCHAR pMultiByteStr[MAX_PATH+1]; MultiByteToWideChar( CP_ACP, 0, szFile, _tcslen(szFile)+1, pMultiByteStr, sizeof(pMultiByteStr)/sizeof(pMultiByteStr[0]) ); nRet = sqlite3_open16(pMultiByteStr, &mpDB); } else nRet = sqlite3_open(szFile,&mpDB); #endif //************************* if (nRet != SQLITE_OK) { LPCTSTR szError = (LPCTSTR) _sqlite3_errmsg(mpDB); throw CppSQLite3Exception(nRet, (LPTSTR)szError, DONT_DELETE_MSG); } setBusyTimeout(mnBusyTimeoutMs);

    Read the article

  • What are your favourite free fonts?

    - by Rich Bradshaw
    For super users, nicer fonts are often an important issue. Now that many sites are using @font-face to embed fonts in the page, having the same fonts installed locally means less downloading, and documents produced with custom fonts often look nicer. What are your favourite fonts? Ideally fonts with Unicode support, proper ligatures and kerning. Please add pictures if possible!

    Read the article

  • Why do Chinese filenames displays as boxes in Windows 7?

    - by Roddy
    I'm running Windows 7 Professional (UK), and trying to get filenames containing Chinese characters to display correctly in Explorer. I can create Chinese filenames in explorer by pasting text from a webpage or using the Chinese IME to rename files, but the characters just display as boxes (Unicode 'missing character' glyph). The Chinese fonts are installed on the system, and web pages display OK in the browser. In particular, I can see the correct Chinese filenames by pointing chrome at file://C:\, for example.

    Read the article

  • List all files and dirs without recursion with junctions

    - by naxa
    Is there a native|portable tool that can give me a Unicode (or at least system-locale-compatible) list of all files and directories under a path recursively, without recursing into junction points or links, in Windows? For example, the built-in dir command, as well as takeown and icacls, run into an infinite loop with the Application Data directory (1). EDIT: I would like to be able to get a text file or at least easy clipboard transfer as output.

    Read the article
