Search Results

Search found 1649 results on 66 pages for 'unicode normalization'.

Page 30/66

  • DotNetOpenAuth OpenID on ISA 2006 Reverse Proxy problem

    - by userb00
    I am trying to host a site that uses DotNetOpenAuth (OpenID) behind ISA 2006 acting as a reverse proxy. After the site authenticates with a provider (such as Google), the provider returns to a URL containing %253A, and the ISA HTTP filter rejects the request. The workaround was to right-click the ISA web publishing rule, open Configure HTTP policy properties, and uncheck "Verify Normalization", after which it worked. Is this a general problem with ISA 2006? Do other firewalls have similar problems? Or is it an OpenID or DotNetOpenAuth issue? Is it safe to disable normalization checking on ISA? According to MSDN: "Web servers receive requests that are URL encoded. This means that certain characters may be replaced with a percent sign (%) followed by a particular number. For example, %20 corresponds to a space, so a request for http://myserver/My%20Dir/My%20File.htm is the same as a request for http://myserver/My Dir/My File.htm. Normalization is the process of decoding URL-encoded requests. Because the % can be URL encoded, an attacker can submit a carefully crafted request to a server that is basically double-encoded. If this occurs, Internet Information Services (IIS) may accept a request that it would otherwise reject as not valid. When you select Verify Normalization, the HTTP filter normalizes the URL two times. If the URL after the first normalization is different from the URL after the second normalization, the filter rejects the request. This prevents attacks that rely on double-encoded requests. Note that while we recommend that you use the Verify Normalization function, it may also block legitimate requests that contain a %."
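
    For illustration, a minimal Python sketch of the double-decoding check the MSDN quote describes (the URL values here are hypothetical): a request passes only if decoding it a second time changes nothing, which is exactly what the %253A in the OpenID return URL violates.

        import urllib

        def passes_normalization_check(url):
            # "Verify Normalization" decodes the URL twice and rejects the
            # request if the two results differ (a sign of double encoding).
            once = urllib.unquote(url)
            twice = urllib.unquote(once)
            return once == twice

        print(passes_normalization_check("/cb?ns=a%3Ab"))    # True: %3A -> ':' and stays stable
        print(passes_normalization_check("/cb?ns=a%253Ab"))  # False: %253A -> '%3A' -> ':'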

  • Escaping In Expressions

    The expression language uses a C-style syntax, so you may need to escape certain characters. For example:

        "C:\FolderPath\" + @VariableName

    should be:

        "C:\\FolderPath\\" + @VariableName

    The escape sequence \xNNNN lets you specify character codes, where NNNN is the Unicode code point you want. The following expression produces the same result as the previous example, because the Unicode code point 005C is the backslash character:

        "C:\x005CFolderPath\x005C" + @VariableName

    For more information about Unicode characters, see http://www.unicode.org/charts/. Literals are also supported within expressions, both string literals using the common escape sequence syntax and modifiers that influence the handling of numeric values; see the Literals (SSIS) topic at http://msdn2.microsoft.com/en-US/library/ms141001(SQL.90).aspx. The Unicode escape sequence also makes up for the lack of a CHAR function or equivalent; for example, "Line1" + "\x000D\x000A" + "Line2" embeds a carriage return and line feed between the two strings.

  • What is the real meaning of the "Select a language [for] non-Unicode programs..." dialog?

    - by Joshua Fox
    What is the real meaning of the "Select a language to match the language version of the non-Unicode programs you want to use" dialog under Control Panel > Regional Settings > Advanced in WinXP and Win2003? According to the dialog text, Windows will use this setting to display resource strings such as menus. The treatment of text files is application-specific, so this setting will not affect that. But can I expect any other change in behavior from this setting? Any insights into what is really going on here?

  • How to get a Unicode-supporting font for Windows 7 command-line?

    - by Tim
    I've pointed the command line to the right code page (chcp 65001), but there are a lot of Unicode characters that Consolas and Lucida Console can't show. Specifically, I want the printable IPA characters to show up. It's not important to fix multi-codepoint glyphs, although that would be nice. How can I get such a font and install it for the command line? (The original question ended with a screenshot of some characters that can't be rendered.)

  • What is the best strategy for transforming unicode strings into filenames?

    - by David Cowden
    I have a bunch (thousands) of resources in an RDF/XML file. I am writing a certain subset of the resources to files, one file for each, and I'm using the resource's title property as the file name. However, the titles are everyday article, website, and blog post titles, so they contain characters unsafe for a URI (the necessary step for constructing a valid file path). I know of the Jersey UriBuilder, but I can't quite get it to work for my needs, as I detailed in a different question on SO. Some possibilities I have considered:

    - Since each resource should also have an associated URL, I could try to use the name of the file on the server. The downside is that people don't always name their content logically, and I think the title of an article better reflects the content that will be in each text file.

    - Construct a whitelist of valid characters and parse the string myself, defining substitutions for unsafe characters (see the sketch after this list). The downside is that the result could be just as unreadable as the former solution, because presumably the content creators went through a similar process when placing the files on their server.

    - Choose a more generic naming scheme, place the title in the text file along with the other attributes, and tell my boss to live with it.

    So my question is: what methods work well for constructing file names out of strings with potentially unsafe characters? Is there a solution that better fits my constraints?
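
    A minimal sketch of that whitelist approach, assuming Python and ASCII-only output (the substitution rules are illustrative, not prescriptive):

        import re
        import unicodedata

        def title_to_filename(title, max_len=100):
            # Decompose accented characters and drop the combining marks,
            # so u"caf\xe9" becomes "cafe" instead of an unsafe byte sequence.
            ascii_title = unicodedata.normalize('NFKD', title).encode('ascii', 'ignore')
            # Whitelist letters, digits, dots and hyphens; collapse runs of
            # anything else into a single hyphen.
            safe = re.sub(r'[^A-Za-z0-9.-]+', '-', ascii_title).strip('-')
            return safe[:max_len] or 'untitled'

        print(title_to_filename(u'Caf\xe9 Society: A Blog Post!'))  # Cafe-Society-A-Blog-Post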

  • How to convert Beautiful Soup Unicode into a decimal value?

    - by MikeTheCoder
    I'm trying to use Python's Beautiful Soup library to grab a bunch of divs from an HTML file, and from there get the string inside each div, which is a money value. Then I want to remove the dollar sign and convert the value to a decimal so that I can compare values with greater-than and less-than conditionals. I have googled the heck out of it and can't seem to come up with a way to convert this Unicode string into a decimal value. I could really use some help here. How do I convert Unicode into a decimal value? This was my last attempt:

        import unicodedata
        from bs4 import BeautifulSoup

        soup = BeautifulSoup(open("/Users/sm/Documents/python/htmldemo.html"))

        for tag in soup.findAll("div", attrs={"itemprop": "price"}):
            val = tag.string
            new_val = val[8:]
            workable = int(new_val)
            if workable > 250:
                print(type(workable))
            else:
                print(type(workable))

    Edit: when I print type(new_val), it shows the value is a Unicode string.
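
    A sketch of one possible conversion, assuming the tag text looks like u"$1,234.56" (an assumption, since the real markup isn't shown); decimal.Decimal avoids the int() failure on values with cents:

        from decimal import Decimal

        def money_to_decimal(text):
            # u"$1,234.56" -> Decimal("1234.56")
            cleaned = text.strip().lstrip(u'$').replace(u',', u'')
            return Decimal(cleaned)

        if money_to_decimal(u'$251.00') > 250:
            print('greater than 250')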

  • How to map code points to unicode characters depending on the font used?

    - by Alex Schröder
    The client prints labels and has been using a set of symbol fonts to do this. The application uses a single-byte database (Oracle with Latin-1). The old application I am replacing was not Unicode aware, yet somehow did OK. The replacement application I am writing is supposed to handle the old data. The symbols picked from the charmap application often map to particular Unicode characters, but sometimes they don't. What looks like a moon in the LAB3 font, for example, is in fact U+2014 (EM DASH). When users paste this character into a Swing text field, the character has the code point 8212. It was "moved" into the Private Use Area (by Windows? Java?). When saving this character to the database, Oracle decides that it cannot be safely encoded and replaces it with the dreaded ¿. So I started shifting the characters by 8000: -= 8000 when saving, += 8000 when displaying the field. Unfortunately, I discovered that other characters are not shifted by the same amount. In one particular font, for example, ž has the code point 382, so I shifted it by +/-256 to "fix" it. By now I'm dreading the discovery of more strange offsets, and I wonder: can I get at this mapping using Java? Perhaps the TTF font has a list of the 255 glyphs it encodes and the Unicode characters they correspond to, so I could do it "right"? Right now I'm using the following kludge:

        static String fromDatabase(String str, String fontFamily) {
            if (str != null && fontFamily != null) {
                Font font = new Font(fontFamily, Font.PLAIN, 1);
                boolean changed = false;
                char[] chars = str.toCharArray();
                for (int i = 0; i < chars.length; i++) {
                    if (font.canDisplay(chars[i] + 0xF000)) {
                        // WE8MSWIN1252 + WinXP
                        chars[i] += 0xF000;
                        changed = true;
                    } else if (chars[i] >= 128 && font.canDisplay(chars[i] + 8000)) {
                        // WE8ISO8859P1 + WinXP
                        chars[i] += 8000;
                        changed = true;
                    } else if (font.canDisplay(chars[i] + 256)) {
                        // ž in LAB1 Eastern = 382
                        chars[i] += 256;
                        changed = true;
                    }
                }
                if (changed) str = new String(chars);
            }
            return str;
        }

        static String toDatabase(String str, String fontFamily) {
            if (str != null && fontFamily != null) {
                boolean changed = false;
                char[] chars = str.toCharArray();
                for (int i = 0; i < chars.length; i++) {
                    if (chars[i] > 0xF000) {
                        // WE8MSWIN1252 + WinXP
                        chars[i] -= 0xF000;
                        changed = true;
                    } else if (chars[i] > 8000) {
                        // WE8ISO8859P1 + WinXP
                        chars[i] = (char) (chars[i] - 8000);
                        changed = true;
                    } else if (chars[i] > 256) {
                        // ž in LAB1 Eastern = 382
                        chars[i] = (char) (chars[i] - 256);
                        changed = true;
                    }
                }
                if (changed) return new String(chars);
            }
            return str;
        }
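
    The question asks about Java, but for illustration, a hedged sketch of inspecting a font's cmap table with Python's fontTools (the font file name is hypothetical); the cmap table is exactly the code-point-to-glyph mapping being guessed at above:

        from fontTools.ttLib import TTFont

        font = TTFont('LAB3.ttf')             # hypothetical font file
        table = font['cmap'].getcmap(3, 1)    # Windows platform, Unicode encoding
        for codepoint, glyph_name in sorted(table.cmap.items()):
            print('U+%04X -> %s' % (codepoint, glyph_name))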

  • How to parse time stamps with Unicode characters in Java or Perl?

    - by ram
    I'm trying to make my code as generic as possible. I'm trying to parse the install time of a product installation. The product will have two files: one containing the time stamp I need to parse, and another telling me the language of the installation. This is how I'm parsing the timestamp:

        import java.text.DateFormat;
        import java.text.ParseException;
        import java.text.SimpleDateFormat;
        import java.util.Date;

        public class ts {
            public static void main(String[] args) {
                // This timestamp came from the first file; the Unicode
                // characters are some Chinese characters... AM/PM, I guess.
                String installTime = "2009/11/26 \u4e0b\u5348 04:40:54";
                // Locale locale = new Locale(...); // don't set the language yet
                SimpleDateFormat df = (SimpleDateFormat) DateFormat.getDateTimeInstance(
                        DateFormat.DEFAULT, DateFormat.DEFAULT);
                Date instTime = null;
                try {
                    instTime = df.parse(installTime);
                } catch (ParseException e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
                System.out.println(instTime.toString());
            }
        }

    The output I get is:

        Parsing Failed
        java.text.ParseException: Unparseable date: "2009/11/26 \u4e0b\u5348 04:40:54"
            at java.text.DateFormat.parse(Unknown Source)
            at ts.main(ts.java:39)
        Exception in thread "main" java.lang.NullPointerException
            at ts.main(ts.java:45)

    It throws an exception, and yet when I print it at the end it shows some date, though a wrong one. I would really appreciate it if you could clarify these doubts: How do I parse timestamps that contain Unicode characters, if this is not the proper way? If parsing failed, how can instTime hold some date, even a wrong one? I know these are Chinese/Korean time stamps, so I set the locale to zh and ko as follows, but the same error comes up again:

        Locale locale = new Locale("ko");
        Locale locale = new Locale("ja");
        Locale locale = new Locale("zh");

    How can I do the same thing in Perl? I can't use the Date::Manip package; is there any other way?

  • How do I create self-relationships in polymorphic inheritance in Elixir and Pylons?

    - by Turukawa
    I am new to programming and am following the example in the Pylons documentation on creating a wiki. The database I want to link to the wiki was created with Elixir, so I rewrote the wiki database schema and have continued from there. In the wiki there is a requirement for a Navigation table which is inherited by Pages and Sections. A section can have many pages, while a page can only have one section. In addition, sibling nodes can be chain-referenced to each other. So:

    - Nav has "section" (OneToMany) and "before" (OneToOne, to reference the preceding node)
    - Page has "section" (ManyToOne, many pages in one section) and inherits "before"
    - Section inherits everything from Nav

    The code I've written looks like this:

        class Nav(Entity):
            using_options(inheritance='multi')
            name = Field(Unicode(30), default=u'Untitled Node')
            path = Field(Unicode(255), default=u'')
            section = OneToMany('Page', inverse='section')
            after = OneToOne('Nav', inverse='before')
            before = OneToMany('Nav', inverse='after')

        class Page(Nav):
            using_options(inheritance='multi')
            content = Field(UnicodeText, nullable=False)
            posted = Field(DateTime, default=now())
            title = Field(Unicode(255), default=u'Untitled Page')
            heading = Field(Unicode(255))
            tags = ManyToMany('Tag')
            comments = OneToMany('Comment')
            section = ManyToOne('Nav', inverse='section')

        class Section(Nav):
            using_options(inheritance='multi')

    Errors received on this:

        sqlalchemy.exc.OperationalError: (OperationalError) table nav has no column named aftr_id
        u'INSERT INTO nav (name, path, aftr_id, row_type) VALUES (?, ?, ?, ?)'

    I also tried before = ManyToMany('Nav', inverse='before') on Nav in the hope it might fix the problem, but it did not. The original SQLAlchemy code from the tutorial for these declarations is as follows:

        nav_table = schema.Table('nav', meta.metadata,
            schema.Column('id', types.Integer(),
                schema.Sequence('nav_id_seq', optional=True), primary_key=True),
            schema.Column('name', types.Unicode(255), default=u'Untitled Node'),
            schema.Column('path', types.Unicode(255), default=u''),
            schema.Column('section', types.Integer(), schema.ForeignKey('nav.id')),
            schema.Column('before', types.Integer(), default=None),
            schema.Column('type', types.String(30), nullable=False)
        )
        page_table = schema.Table('page', meta.metadata,
            schema.Column('id', types.Integer, schema.ForeignKey('nav.id'), primary_key=True),
            schema.Column('content', types.Text(), nullable=False),
            schema.Column('posted', types.DateTime(), default=now),
            schema.Column('title', types.Unicode(255), default=u'Untitled Page'),
            schema.Column('heading', types.Unicode(255)),
        )
        section_table = sa.Table('section', meta.metadata,
            schema.Column('id', types.Integer, schema.ForeignKey('nav.id'), primary_key=True),
        )

        orm.mapper(Nav, nav_table, polymorphic_on=nav_table.c.type, polymorphic_identity='nav')
        orm.mapper(Section, section_table, inherits=Nav, polymorphic_identity='section')
        orm.mapper(Page, page_table, inherits=Nav, polymorphic_identity='page', properties={
            'comments': orm.relation(Comment, backref='page', cascade='all'),
            'tags': orm.relation(Tag, secondary=pagetag_table)
        })

    Any help is much appreciated.

  • Calculate posterior distribution of unknown mis-classification with PRTools in MATLAB

    - by Samuel Lampa
    I'm using the PRTools MATLAB library to train some classifiers, generate test data, and test the classifiers. I have the following details:

    - N: total number of test examples
    - k: number of mis-classifications for each classifier and class

    What I want to do: calculate and plot the Bayesian posterior distributions of the unknown probability of mis-classification (denoted q), that is, as probability density functions over q itself (so P(q) is plotted over q, from 0 to 1). In math notation (not MATLAB code!):

        P(q|k,N) = likelihood * prior / normalization constant = P(k|q,N) * P(q|N) / P(k|N)

    The prior is set to 1, so I only need to calculate the likelihood and the normalization constant. The likelihood can be expressed as (where B(N,k) is the binomial coefficient):

        P(k|q,N) = B(N,k) * q^k * (1-q)^(N-k)

    ...so the normalization constant is simply the integral of the likelihood above, from 0 to 1:

        P(k|N) = B(N,k) * integralFromZeroToOne( q^k * (1-q)^(N-k) )

    (The binomial coefficient B(N,k) appears in both the likelihood and the normalization constant, so it cancels and can be omitted.) Now, I've heard that the integral for the normalization constant can be calculated in closed form, something like:

        k!(N-k)! / (N+1)!

    Is that correct? (I have some lecture notes with this expression, but can't figure out whether it is for the normalization constant integral or for the posterior distribution of mis-classification q.) Also, hints are welcome on how to calculate this in practice (factorials easily create truncation errors, right?) and on how to produce the final plot (the posterior distribution over q, from 0 to 1).
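
    For what it's worth, that integral is indeed k!(N-k)!/(N+1)! (it is the Beta function B(k+1, N-k+1)), which means the posterior under a flat prior is exactly the Beta(k+1, N-k+1) density. A hedged sketch in Python/SciPy rather than MATLAB (the N and k values are made up); using the Beta density directly sidesteps the factorial-overflow problem:

        import numpy as np
        from scipy.stats import beta

        N, k = 1000, 37                              # hypothetical totals
        q = np.linspace(0.0, 1.0, 501)
        posterior = beta.pdf(q, k + 1, N - k + 1)    # P(q | k, N) under a flat prior

        # Plot with matplotlib, e.g.:
        #   import matplotlib.pyplot as plt
        #   plt.plot(q, posterior); plt.xlabel('q'); plt.show()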

  • How do I encode Unicode strings using pyodbc to save to a SAS dataset?

    - by Chris B.
    I'm using Python to read and write SAS datasets, using pyodbc and the SAS ODBC drivers. I can load the data perfectly well, but when I save the data using something like:

        cursor.execute('insert into dataset.test VALUES (?)', u'testing')

    ...I get this error:

        pyodbc.Error: ('HY004', '[HY004] [Microsoft][ODBC Driver Manager]
        SQL data type out of range (0) (SQLBindParameter)')

    The problem seems to be the fact that I'm passing a Unicode string; what do I need to do to handle this?
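
    A hedged sketch of one common workaround, assuming the driver only accepts narrow-character parameters: encode the value to a byte string before binding. The DSN name and the encoding are assumptions; the encoding must match the SAS session encoding.

        import pyodbc

        conn = pyodbc.connect('DSN=sas_local')   # hypothetical DSN
        cursor = conn.cursor()

        # Passing a byte string instead of unicode makes pyodbc bind SQL_CHAR
        # rather than the SQL_WCHAR type the driver rejects above.
        cursor.execute('insert into dataset.test VALUES (?)',
                       u'testing'.encode('latin-1'))
        conn.commit()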

  • How to specify an association relation using declarative base

    - by sam
    I have been trying to create an association relation between two tables, intake and module. Each intake has a one-to-many relationship with the modules. However, there is a coursework assigned to each module, and each coursework has a due date which is unique to each intake. I tried this, but it didn't work:

        intake_modules_table = Table('tg_intakemodules', metadata,
            Column('intake_id', Integer,
                   ForeignKey('tg_intake.intake_id', onupdate="CASCADE", ondelete="CASCADE")),
            Column('module_id', Integer,
                   ForeignKey('tg_module.module_id', onupdate="CASCADE", ondelete="CASCADE")),
            Column('dueddate', Unicode(16))
        )

        class Intake(DeclarativeBase):
            __tablename__ = 'tg_intake'

            #{ Columns
            intake_id = Column(Integer, autoincrement=True, primary_key=True)
            code = Column(Unicode(16))
            commencement = Column(DateTime)
            completion = Column(DateTime)

            #{ Special methods
            def __repr__(self):
                return '"%s"' % self.code

            def __unicode__(self):
                return self.code
            #}

        class Module(DeclarativeBase):
            __tablename__ = 'tg_module'

            #{ Columns
            module_id = Column(Integer, autoincrement=True, primary_key=True)
            code = Column(Unicode(16))
            title = Column(Unicode(30))

            #{ Relations
            intakes = relation('Intake', secondary=intake_modules_table, backref='modules')

            #{ Special methods
            def __repr__(self):
                return '"%s"' % self.title

            def __unicode__(self):
                return '"%s"' % self.title
            #}

    When I do this, the due date column specified in intake_modules_table is not created. Some help will be appreciated here. Thanks in advance.
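
    For reference, a hedged sketch of the association-object pattern commonly used in SQLAlchemy when the link table carries extra columns such as a due date, replacing the plain Table above and reusing the question's imports (the class and attribute names here are assumptions):

        class IntakeModule(DeclarativeBase):
            """One row per (intake, module) pair, plus that pair's due date."""
            __tablename__ = 'tg_intakemodules'

            intake_id = Column(Integer, ForeignKey('tg_intake.intake_id'), primary_key=True)
            module_id = Column(Integer, ForeignKey('tg_module.module_id'), primary_key=True)
            duedate = Column(Unicode(16))

            intake = relation('Intake', backref='intake_modules')
            module = relation('Module', backref='module_intakes')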

  • django modeling

    - by SledgehammerPL
    Concept: drinks are made of components, e.g. 10 ml of vodka. In some recipes the component is very particular (10 ml of Finlandia Vodka), in some it is not (10 ml of ANY vodka). I wonder how to model a component to solve this problem: in stock I have a particular product, which can satisfy more than one requirement. The model for now is:

        class Receipt(models.Model):
            name = models.CharField(max_length=128)
            (...)
            components = models.ManyToManyField(Product, through='ReceiptComponent')

            def __unicode__(self):
                return self.name

        class ReceiptComponent(models.Model):
            product = models.ForeignKey(Product)
            receipt = models.ForeignKey(Receipt)
            quantity = models.FloatField(max_length=9)
            unit = models.ForeignKey(Unit)

            class Admin:
                pass

            def __unicode__(self):
                return (unicode(self.quantity != 0 and self.quantity or '') + ' '
                        + unicode(self.unit) + ' ' + self.product.genitive)

        class Product(models.Model):
            name = models.CharField(max_length=128)
            (...)

            class Admin:
                pass

            def __unicode__(self):
                return self.name

        class Stock(Store):
            products = models.ManyToManyField(Product)

            class Admin:
                pass

            def __unicode__(self):
                return self.name

    I'm thinking about adding a table that joins the real product (in stock) with the abstract product (the receipt component). But maybe there's an easier solution?
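
    One hedged way to sketch this, assuming a nullable self-reference on Product so a particular bottle points at its generic kind (the field and helper names are made up):

        from django.db import models

        class Product(models.Model):
            name = models.CharField(max_length=128)
            # Particular products point at their generic kind, e.g.
            # "Finlandia Vodka" -> "Vodka"; generic products leave this empty.
            generic = models.ForeignKey('self', null=True, blank=True,
                                        related_name='variants')

        def satisfies(stocked, component):
            """A stocked product satisfies a component if it is the requested
            product itself, or a particular variant of the requested generic."""
            return (stocked.pk == component.product_id
                    or stocked.generic_id == component.product_id)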

  • How Unicode strings can be passed from a managed to an unmanaged function...

    - by Who Cares
    I will really appreciate anybody's help on how a Unicode string can be passed (marshaled) from a managed (Delphi .NET) function to an unmanaged (Delphi Win32 DLL) function. The managed environment (Delphi .NET):

        ...
        interface
        ...
        const
          TM_PRO_CONVERTER = 'TM.PROFileConverter.dll';

        function ImportLineworksFromPROFile(FileName: String;
          TargetFileNameDXF: String): Integer;
        ...
        implementation
        ...
        [DllImport(TM_PRO_CONVERTER, EntryPoint = 'ImportLineworksFromPROFile',
          CharSet = CharSet.Ansi, SetLastError = True,
          CallingConvention = CallingConvention.StdCall)]
        function ImportLineworksFromPROFile(FileName: String;
          TargetFileNameDXF: String): Integer; external;
        ...

    The unmanaged environment (Delphi Win32 DLL):

        library TM.PROFileConverter;
        ...
        function ImportLineworksFromPROFile(FileName: String;
          TargetFileNameDXF: String): Integer; stdcall;

        exports
          ImportLineworksFromPROFile;
        ...

    Thank you for your time.

  • Python - converting wide-char strings from a binary file to Python unicode strings...

    - by Mikesname
    It's been a long day and I'm a bit stumped. I'm reading a binary file that contains lots of wide-char strings, and I want to dump them out as Python Unicode strings. (To unpack the non-string data I'm using the struct module, but I don't know how to do the same with the strings.) For example, reading the word "Series":

        myfile = open("test.lei", "rb")
        myfile.seek(44)
        data = myfile.read(12)
        # data is now 'S\x00e\x00r\x00i\x00e\x00s\x00'

    How can I decode that raw wide-char data into a Python string? Edit: I'm using Python 2.6.
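
    A hedged sketch of the usual approach: wide-char data written by Windows programs is normally UTF-16 little-endian, so, assuming that encoding here, the built-in codec does the job:

        data = 'S\x00e\x00r\x00i\x00e\x00s\x00'
        text = data.decode('utf-16-le')   # u'Series'
        print(text)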

  • How to convert UTF-8 and Unicode to normal text?

    - by Mehdi Amrollahi
    I have a downloader program that downloads pages from the internet. The encoding of each page is different; some are in UTF-8 and some are Unicode. For example, &#97; shows the 'a' character, and pages are full of these characters. I need to convert these encodings to normal text. I used the UnicodeEncoding class in C#, but it does not help me. How can I decode these encodings into real characters? Is there a class or method for converting them? Thanks.
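
    For what it's worth, sequences like &#97; are HTML numeric character references rather than a text encoding; in C#, HttpUtility.HtmlDecode handles them. A small Python sketch of the same idea, just to show the mechanics (it handles only decimal references, not hex ones):

        import re

        def decode_char_refs(s):
            # Replace each &#NNN; with the character for code point NNN.
            return re.sub(r'&#(\d+);', lambda m: unichr(int(m.group(1))), s)

        print(decode_char_refs('&#97;&#98;&#99;'))  # abc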

  • django left join with null

    - by SledgehammerPL
    The model:

        class Product(models.Model):
            name = models.CharField(max_length=128)

            def __unicode__(self):
                return self.name

        class Receipt(models.Model):
            name = models.CharField(max_length=128)
            components = models.ManyToManyField(Product, through='ReceiptComponent')

            class Admin:
                pass

            def __unicode__(self):
                return self.name

        class ReceiptComponent(models.Model):
            product = models.ForeignKey(Product)
            receipt = models.ForeignKey(Receipt)
            quantity = models.FloatField(max_length=9)
            unit = models.ForeignKey(Unit)

            def __unicode__(self):
                return (unicode(self.quantity != 0 and self.quantity or '') + ' '
                        + unicode(self.unit) + ' ' + self.product.genitive)

    The idea: there are components in stock, and I'd like to find out which recipes I can make with the components I have. It's not easy, but it is possible: I made an SQL view which gets the solution. But I'm learning Python and Django, so I'd like to do it Django-style. The concept of the solution:

    1. Get the set of recipes which have at least one available component:

        list_of_available_components = ReceiptComponent.objects.filter(
            product__in=list_of_available_products).distinct()
        list_of_related_receipts = Receipt.objects.filter(
            receiptcomponent__in=list_of_available_components).distinct()

    2. Get the recipes (from list_of_related_receipts) which are missing at least one component; in SQL:

        SELECT *
        FROM drinkbook_receiptcomponent
        LEFT JOIN drinkstore_stock_products USING (product_id)
        WHERE drinkstore_stock_products.stock_id IS NULL
          AND receipt_id IN (SELECT receipt_id
                             FROM drinkbook_receiptcomponent
                             JOIN drinkstore_stock_products USING (product_id))

    3. Get the recipes (from list_of_related_receipts) which are not in list_of_incomplete_recipes.
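
    A hedged sketch of expressing that last exclusion purely in the ORM (the queryset names follow the question's models; the stock lookup itself is an assumption):

        # Products we actually have in stock (assumed lookup):
        available_products = Product.objects.filter(stock__pk=1)

        # Recipes using at least one available product:
        candidates = Receipt.objects.filter(
            receiptcomponent__product__in=available_products).distinct()

        # Drop any recipe that also needs a product we do not have;
        # exclude() on the multi-valued relation acts as the anti-join:
        complete = candidates.exclude(
            receiptcomponent__product__in=Product.objects.exclude(
                pk__in=available_products)).distinct()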

  • django access to parent

    - by SledgehammerPL
    Model:

        class Product(models.Model):
            name = models.CharField(max_length=128)
            (...)

            def __unicode__(self):
                return self.name

        class Receipt(models.Model):
            name = models.CharField(max_length=128)
            (...)
            components = models.ManyToManyField(Product, through='ReceiptComponent')

            def __unicode__(self):
                return self.name

        class ReceiptComponent(models.Model):
            product = models.ForeignKey(Product)
            receipt = models.ForeignKey(Receipt)
            quantity = models.FloatField(max_length=9)
            unit = models.ForeignKey(Unit)

            def __unicode__(self):
                return (unicode(self.quantity != 0 and self.quantity or '') + ' '
                        + unicode(self.unit) + ' ' + self.product.genitive)

    Now I'd like to get a list of the most often used products:

        ReceiptComponent.objects.values('product').annotate(
            Count('product')).order_by('-product__count')

    An example result:

        [{'product': 3, 'product__count': 5}, {'product': 6, 'product__count': 4},
         {'product': 5, 'product__count': 3}, {'product': 7, 'product__count': 2},
         {'product': 1, 'product__count': 2}, {'product': 11, 'product__count': 1},
         {'product': 8, 'product__count': 1}, {'product': 4, 'product__count': 1},
         {'product': 9, 'product__count': 1}]

    It's almost what I need, but I'd prefer to get Product objects rather than product values, because I'd like to use this in views.py for generating a list.
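
    A hedged sketch of the usual way around this: annotate Product itself instead of going through values(), so the queryset yields real Product instances (the annotation name is made up):

        from django.db.models import Count

        products = Product.objects.annotate(
            num_uses=Count('receiptcomponent')).order_by('-num_uses')

        # Each entry is a real Product object with a num_uses attribute,
        # ready to be passed to a template from views.py.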

  • UnicodeDecodeError from a GET-parameter in webapp2

    - by Aneon
    I'm getting a UnicodeDecodeError when receiving a GET parameter that contains Unicode characters from webapp2 and then using it in an NDB query. I get the same error message when manually running unicode() on the parameter in the handler, so either there's a problem in webapp2's URL routing or I've missed something. Preferably, all GET parameters should be converted to Unicode before being passed into the handler, so I don't need to do manual conversions in all of my handlers. I actually think it worked in an earlier version. The full error message reads:

        UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 1: ordinal not in range(128)

    The GET parameter contains the string göteborg. It looks fine when I raise an Exception on it, but gives me an error when I (or NDB) use unicode() on it. Edit: in NDB, it fails on the following code:

        File "C:\Program Files (x86)\Google\google_appengine\google\appengine\api\datastore_types.py", line 1562, in PackString
          pbvalue.set_stringvalue(unicode(value).encode('utf-8'))

    Thanks.
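
    A hedged sketch of the usual explanation: the raw parameter arrives as UTF-8 bytes, and bare unicode() assumes ASCII, hence the failure on 0xc3 (the first byte of ö in UTF-8). Decoding explicitly in the handler avoids it (the handler and parameter names are hypothetical):

        import webapp2

        class CityHandler(webapp2.RequestHandler):   # hypothetical handler
            def get(self):
                city = self.request.get('city')
                if isinstance(city, str):
                    # unicode(city) would assume ASCII and raise the error
                    # above; the raw bytes are UTF-8, so decode explicitly.
                    city = city.decode('utf-8')      # u'g\xf6teborg'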
