Search Results

Search found 357 results on 15 pages for 'japanese'.

Page 12/15 | < Previous Page | 8 9 10 11 12 13 14 15  | Next Page >

  • Compiling PHP with GD and libjpeg support

    - by Robin Winslow
    I compile my own PHP, partly to learn more about how PHP is put together, and partly because I'm always finding I need modules that aren't available by default, and this way I have control over that. My problem is that I can't get JPEG support in PHP. Using CentOS 5.6. Here are my configuration options when compiling PHP 5.3.8: './configure' '--enable-fpm' '--enable-mbstring' '--with-mysql' '--with-mysqli' '--with-gd' '--with-curl' '--with-mcrypt' '--with-zlib' '--with-pear' '--with-gmp' '--with-xsl' '--enable-zip' '--disable-fileinfo' '--with-jpeg-dir=/usr/lib/' The ./configure output says: checking for GD support... yes checking for the location of libjpeg... no checking for the location of libpng... no checking for the location of libXpm... no And then we can see that GD is installed, but that JPEG support isn't there: # php -r 'print_r(gd_info());' Array ( [GD Version] => bundled (2.0.34 compatible) [FreeType Support] => [T1Lib Support] => [GIF Read Support] => 1 [GIF Create Support] => 1 [JPEG Support] => [PNG Support] => 1 [WBMP Support] => 1 [XPM Support] => [XBM Support] => 1 [JIS-mapped Japanese Font Support] => ) I know that PHP needs to be able to find libjpeg, and it obviously can't find a version it's happy with. I would have thought /usr/lib/libjpeg.so or /usr/lib/libjpeg.so.62 would be what it needs, but I supplied it with the correct lib directory (--with-jpeg-dir=/usr/lib/) and it doesn't pick them up, so I guess they can't be the right versions. rpm says libjpeg is installed. Should I yum remove and reinstall it, and all its dependent packages? Might that fix the problem? Here's a paste bin with a collection of hopefully useful system information: http://pastebin.com/ied0kPR6

    Read the article

  • Visual Studio 2010 Beta 2, built-in font smoothing

    - by L. Shaydariv
    I've just installed Visual Studio 2010 Beta 2 on my Windows XP machine to evaluate it and check whether it meets my preferences the way it did before. Okay, I temporarily defeated an urgent bug with a strange workaround (I could not open any file from the Solution Explorer), and it left me with bad memories, but that's okay. The first thing I saw on opening the code editor was ClearType font rendering. Wow, how unexpected. I should note that I do not use the standard Windows rendering techniques; I prefer GDI++, a font renderer developed by Japanese developers. (GDI++ renders fonts in Mac/Win-Safari style across all of Windows.) For me, GDI++ achieves great font-rendering results, letting me use the DejaVu Sans Mono font with really nice smoothing in Visual Studio 2008 (VS 2005 too, though VS 2005 crashes in this case). But GDI++ cannot affect the Visual Studio 2010 Beta 2 text editor - it uses ClearType (right?) and ignores the system font-smoothing settings. It could be an editor based on WPF, right? So as far as I can see, I can't use GDI++ anymore because it hooks Windows GDI(+) but not WPF? So I've got several questions: Is it possible to disable the built-in ClearType in VS 2010 b2, or override it with another font smoother? Is it possible to install a Safari-like font renderer for Visual Studio 2010 [betas]? Thanks a lot.

    Read the article

  • Monospace font which supports at least both of Korean hangul and the Georgian alphabet?

    - by hippietrail
    Being both a language enthusiast and a programmer, I often find myself doing programming or text processing involving foreign-language alphabets and scripts. One annoyance, however, is that CJK fonts (those which support Chinese, Japanese, and/or Korean) usually only contain glyphs for Latin, Greek, and Cyrillic at best. Often the Asian glyphs will be beautiful but the other glyphs can be quite ugly. Just as often, text editors let you choose only a single font, not one for CJKV and one for everything else, each used to render the appropriate characters. Korean is one of the languages I'm most interested in currently. I only need hangul / hangeul for monospaced editing; hanja isn't common enough to be a problem. Another of the languages I'm currently involved in is Georgian, which has its own alphabet that is a little exotic but has pretty good support in common fonts on Windows and *nix. But I am as yet unable to find a font with good Korean glyphs and also Georgian glyphs. My editor of choice is gVim, so an answer telling me how to set it to use two fonts together would be just as good. Currently I'm using it mostly under Windows 7, so a Vim-specific solution would be needed rather than a *nix-specific solution.

    Read the article

  • Permanent fix for unicode characters not displaying correctly (as boxes)

    - by Chase
    Please read this entire message before replying. First, I know how to fix the issue on a temporary basis; I am looking for a permanent fix. I work with foreign language files a lot. Unfortunately, sometimes all the Unicode characters in Windows Explorer, Notepad, and other places (as rendered by Windows, probably GDI) do not display correctly. That is, they display as square blocks, whereas they had previously been displaying correctly. There are countless methods to temporarily correct the issue. But again, I want a way to permanently resolve the issue. What I have tried: The silly "Hide fonts based on language settings". This setting only applies to what fonts you see in the fonts folder and font dropdowns. It doesn't disable foreign fonts (doesn't work, or if it does, it is temporary). Deleting the font cache file and rebooting (works... usually, temporary solution). Changing my locale and then back (sometimes works, temporary solution). Rebooting my PC and getting lucky (50-50 chance, temporary solution). Changing my keyboard input/adding a foreign keyboard (temporary solution that only seems to work once). Reinstalling Windows (temporary solution, sometimes lasts a few months though, I have done this 7 times across 3 computers). What I have not tried: Buying Windows Ultimate and installing the interface packs. This is not a solution. I can't read Japanese/Chinese and I do not want my interface in those languages. What I will not do: Switch to a different brand of operating system (Unix, Linux, Mac OS X). Switch to an older version of Windows (Windows Vista, XP, 2000, etc). So can anyone recommend a permanent fix for the problem?

    Read the article

  • So No TECH job so far.

    - by Ratman21
    Oh, I found some temp work for the US Census and I have managed to keep the house (so far), but it looks like I/we are going to have to do a short sale, and the temp job will be ending soon. On top of that, it looks like the unemployment fund for me is drying up; I will have about one month left after the Census job is done. I am now down to applying for work at KFC. This is the type of work I started with, before I was a tech geek, and I really didn’t think I would be doing this kind of work in my later years, but I have a wife and kid, so I’ve got to suck it up and do it. Oh, and here is my new resume…go ahead, I know you want to tear it up. I really don’t care any more.

    Scott L. Newman, 45219 Dutton Way, Callahan, FL 32011. H: (904)879-4880 C: (352)356-0945 E: [email protected] Web: http://beingscottnewman.webs.com/
    OBJECTIVE: To obtain a Network or Technical Support position.
    KEYWORD SUMMARY: CompTIA A+, Network+, and Security+ Certified; Network Operation, Technical Support, Client/Vendor Relations, Networking/Administration, Cisco Routers/Switches, Helpdesk, Microsoft Office Suite, Website Design/Dev./Management, Frame Relay, ISDN, Windows NT/98/XP, Visio, Inventory Management, CICS, Programming, COBOL IV, Assembler, RPG.
    QUALIFICATIONS SUMMARY: Twenty years’ experience in computer operations, technical support, and technical writing. Also have two and a half years’ experience in internet/intranet operations.
    PROFESSIONAL EXPERIENCE:
    October 2009 – Present*: Volunteer web site and PC technician – part time, True Faith Christian Fellowship Church – Callahan, FL. Project: create and maintain a web site for the church to give it worldwide exposure.
    Aug 2008 – September 2009*: Volunteer church sound and video technician – part time, Thomas Creek Baptist Church – Callahan, FL.
    *Note: these jobs were for learning and/or keeping skills updated while looking for a tech job and training for new skills.
    February 2005 to October 2008: Client Server Dev/Analyst I, Fidelity National Information Services, Jacksonville, FL. (FNIS acquired Certegy in 2005 and, out of 20 personnel, I was one of three kept on.)
    August 2003 to February 2005: Senior NetOps Operator, Certegy, St. Pete, FL. (In August 2003, Certegy terminated its contract with EDS and, out of 40 personnel, I was one of six kept on.) Projects: creation and update of the listing and placement of all raised-floor equipment at the St. Pete site. The listing was made up of a floor plan of the raised floor and equipment rack diagrams showing the placement of all devices, drawn in Visio, cross-referenced with an inventory Excel document showing which department was responsible for each device. Sole creator of the Network Operation and Server Operation procedures guide (NetOps Guide). Expertise: resolving circuit and/or router issues, or assisting the circuit carrier in resolving them, from the company Network Operation Center (NOC), as well as resolving application problems or assisting application support in their resolution.
    July 1999 to August 2003: Senior NetOps Operator, EDS (Certegy account), St. Pete, FL. Same expertise and ongoing projects as listed above for FNIS/Certegy. (Equifax outsourced the NetOps dept. to EDS in 1999.)
    January 1991 to July 1999: NetOps/Tandem Operator, Equifax, St. Pete & Tampa, FL. Same as all of the above for FNIS/Certegy/EDS except for circuit and router issues.
    EDUCATION:
    - New Horizons Computer Learning Center, Jacksonville, Florida – CompTIA A+, Security+, and Network+ Certified. Currently working on CCNA Certification (07/30/10).
    - Mott Community College, Flint, Michigan – Associate's Degree, Data Processing and General Education.
    - Currently studying Japanese.

    Read the article

  • Easy and Rapid Deployment of Application Workloads with Oracle VM

    - by Antoinette O'Sullivan
    Oracle VM is designed for easy and rapid deployment of application workloads. In addition to allowing for rapid deployment of an entire application stack, Oracle VM now gives administrators more fine-grained control of the application payloads inside the virtual machine. To get started on Oracle VM Server for x86 or Oracle VM Server for SPARC, what better solution than to take the corresponding training course? You can take this training from your own desk, by choosing from a selection of live-virtual events already on the schedule on the Oracle University Portal. Alternatively, you can travel to an education center to take these courses. Below is a selection of in-class events already on the schedule for each course (location, date, delivery language):
    Oracle VM Administration: Oracle VM Server for x86
    - Paris, France: 11 December 2013, French
    - Rome, Italy: 22 April 2014, Italian
    - Budapest, Hungary: 4 November 2013, Hungarian
    - Riga, Latvia: 3 February 2014, Latvian
    - Oslo, Norway: 9 December 2013, English
    - Warsaw, Poland: 12 February 2014, Polish
    - Ljubljana, Slovenia: 25 November 2013, Slovenian
    - Barcelona, Spain: 29 October 2013, Spanish
    - Istanbul, Turkey: 23 December 2013, Turkish
    - Cairo, Egypt: 1 December 2013, Arabic
    - Johannesburg, South Africa: 9 December 2013, English
    - Melbourne, Australia: 12 February 2014, English
    - Sydney, Australia: 25 November 2013, English
    - Singapore: 27 November 2013, English
    - Montreal, Canada: 18 February 2014, English
    - Ottawa, Canada: 18 February 2014, English
    - Toronto, Canada: 18 February 2014, English
    - Phoenix, AZ, United States: 18 February 2014, English
    - Sacramento, CA, United States: 18 February 2014, English
    - San Francisco, CA, United States: 18 February 2014, English
    - San Jose, CA, United States: 18 February 2014, English
    - Denver, CO, United States: 22 January 2014, English
    - Roseville, MN, United States: 10 February 2014, English
    - Edison, NJ, United States: 18 February 2014, English
    - King of Prussia, PA, United States: 18 February 2014, English
    - Reston, VA, United States: 26 March 2014, English
    Oracle VM Server for SPARC: Installation and Configuration
    - Prague, Czech Republic: 2 December 2013, Czech
    - Paris, France: 9 December 2013, French
    - Utrecht, Netherlands: 9 December 2013, Dutch
    - Madrid, Spain: 28 November 2013, Spanish
    - Dubai, United Arab Emirates: 5 February 2014, English
    - Melbourne, Australia: 31 October 2013, English
    - Sydney, Australia: 10 February 2014, English
    - Tokyo, Japan: 6 February 2014, Japanese
    - Petaling Jaya, Malaysia: 23 December 2013, English
    - Auckland, New Zealand: 21 November 2013, English
    - Singapore: 7 November 2013, English
    - Toronto, Canada: 25 November 2013, English
    - Sacramento, CA, United States: 2 December 2013, English
    - San Francisco, CA, United States: 2 December 2013, English
    - San Jose, CA, United States: 2 December 2013, English
    - Caracas, Venezuela: 5 November 2013, Spanish

    Read the article

  • Oracle Announces Oracle Insurance Policy Administration for Life and Annuity 9.4

    - by helen.pitts(at)oracle.com
    Today's global insurers require the ability to provide higher levels of service and quickly bring to market life insurance and annuity products that not only help them stand out from the competition, but also stay current with local legislation. To succeed, they require agile and flexible core systems that enable them to meet the unique localization requirements of the markets in which they operate, whether in North America, Asia Pacific or the Pan-European Region. The release of Oracle Insurance Policy Administration for Life and Annuity 9.4, announced today, helps insurers meet this need with expanded international market capabilities that enable them to reduce risk and profitably compete wherever their business takes them. It offers expanded multi-language support along with unit-linked product and fund processing capabilities that enable regional and global insurers to rapidly configure and deliver localized products – along with providing better service for end users through a single policy admin solution. Key enhancements include:
    - Kanji/Kana language support, pre-defined content, and imperial date processing for the Japanese market
    - New localization flexibility for configuring and managing international mailing addresses along with regional variations for client information
    - Enhanced capability to calculate unit-linked pricing and valuation, in addition to market-based processing and pre-configured unit-linked content
    - Expanded role-based security and masking capability to further protect sensitive customer data
    - Enhanced capability to restrict processing of specified activities based on time of day and user role, reducing exposure to market timing risks
    - Further capability to eliminate duplicate client records, helping to reduce underwriting risks and enhance servicing through a single view of the client
    "The ability to leverage a single, rules-driven policy administration system for multiple global operation centers can help insurers realize significant improvements in speed to market, customer service, compliance with regional regulations, and consolidation efforts," noted Celent's Craig Weber, senior vice president, Insurance. "We believe such initiatives are necessary to help the industry address service and distribution imperatives." Helping our customers meet these mission-critical business imperatives is a key objective for Oracle Insurance. Active, ongoing dialogue with our customers is an important part of the process to help understand how our solutions are helping, and can continue to help, them achieve success in the marketplace. I had the opportunity to meet with several of our insurance customers at the Oracle Insurance Policy Administration Client Advisory Board meeting last week in Philadelphia, Penn. (View photos on the Oracle Insurance Facebook page.) It was a great forum for Oracle Insurance and our clients.
Discussion centered on the latest business and IT trends, with opportunities to learn more about the latest release of Oracle Insurance Policy Administration for Life and Annuity and other Oracle Insurance solutions such as data warehousing / business intelligence, while exchanging best practices for product innovation and servicing customers and sales channels. Helen Pitts is senior product marketing manager for Oracle Insurance's life and annuities solutions.

    Read the article

  • Content Challenge: You Can Only Get it Here

    - by Mike Stiles
    Part of the content conundrum for brands is figuring out what kind of content customers would find cool, desirable, and relevant. The mere fact many brands have no idea what this content might be is, in itself, pretty alarming. You’d have to have a pretty thorough lack of involvement with and understanding of your customers to not know what they might like. But despite what should be a great awakening in which consumers are using every technology and trick in the book to shield themselves from ads and commercials, brand self-obsession continues as marketers concentrate on their message, their campaign, what they want to say, and what they want social users to do. When individuals conduct themselves in that same fashion on Facebook and Twitter, it gets tiresome and starts losing value pretty quickly. Their posts eventually get hidden. Conversely, friends who post things that consistently entertain or inform, with little self-marketing desperation involved, win the coveted “show all updates” setting. Of course brands are going to use social to market. It’s pretty much the point of having social in the marketing mix. And yes, people who follow a brand’s Twitter account or “Like” a brand’s Facebook Page implicitly state they want to know what’s going on with that brand’s products and services. But if you have a Facebook friend that assumes you want every one of her posts to be about what wine she likes (Mitsubishi’s current campaign is even based around weeding out pretentious Facebook friends, then running them over), then you know how it must feel for your fans and followers to get a sales pitch for your crackers or whatever you’re selling every single time. Is there such a thing as content that doesn’t sell but that still advances the brand and makes the consumer more involved and valuable? Of course. And perhaps there are no better companies than enterprise brands to do it. Enterprise organizations are large enough to go beyond a product and engage readers/viewers at higher, broader levels…communicating expertise across entire sectors, subjects and industries. You’re going from pitchman to news source, and getting full credit for it as the presenter. A recent GigaOM article pointed out the success a San Francisco-based startup called Crunchyroll is having. Their niche (and they proudly admit it’s a niche) is providing Japanese anime, Korean drama and Asian live action content to countries that can’t get it any other way via licensing deals. Shows are available in HD and on the same day they air in the host country. Crunchyroll not only gets 8 million viewers a month, they have 100,000 paying subscribers at $7-12/month. Got a point, Mike? I do happen to have one. Crunchyroll illustrates the content opportunity enterprise companies have…which is to determine your “area,” the interest graph of your customers, then provide content that speaks to and satisfies those interests that can’t be found anywhere else. At least not in the same style, or of the same quality, or with the same authority. Do what no one else is doing. Provide what no one else is providing in your sector. If underserved users are willing to pay monthly for access to awkwardly moving cartoon dragons, imagine the audience you could attract with free, useful, non-sales content in your customers’ area of interest. It’s an audience you’ll want in place when the time does come to put out that marketing message. A content challenge is better than a content conundrum any day.

    Read the article

  • Launching Ops Center 12c

    - by user12601629
    Oracle Enterprise Manager Ops Center 12c is the most ambitious version of the Ops Center tooling that we've ever released. I think that makes it appropriate that we launched it in grand style! When it became clear we were going to be complete with the 12c final release about this time of year, the marketing team proposed that we roll the launch of 12c into Oracle OpenWorld Tokyo. I thought that sounded like a fine idea! You see, I have always loved Japan. I even studied a bit of the Japanese language back in school. OpenWorld Tokyo was an outstanding event this year. It was held in Roppongi, one of the most stylish districts in Tokyo. And, to make things even better, the Sakura (cherry blossoms) were blooming. If you've never been in Japan for cherry blossom season, it's a must-see! Here are a couple of pics for you. Here is a picture from Roppongi, near the conference. Here's a picture near the Imperial Palace. A couple of friends from the local sales team took me here before my flight out. So, now back to the product launch! We chose to launch the product in John Fowler's "Engineered Systems" keynote address. It made perfect sense because of the close ties of Ops Center to the Systems portfolio of products. It was a packed house for the keynote. Here's a picture I took just before we started -- there were also hundreds more people in "overflow" rooms in other parts of the venue. Here's a picture of me on stage during the launch. While there are countless new features in Ops Center 12c that customers will love, I had to limit myself to discussing just three: Mission Critical Clouds, Solaris 11, and Engineered Systems. So, what does Mission Critical Cloud mean? It means we've expanded EM's cloud capabilities in a couple of key areas. First, we've expanded the "self service provisioning" capabilities we have to include SPARC -- not just x86. Now you can build clouds of Solaris Zones with ease! Second, we've much more deeply integrated high-end storage and network management into the cloud layers. This makes our IaaS story much more powerful! For Solaris 11, we didn't simply port our monitoring agent to S11. That would have been easy, but also boring! We support S11 deeply: full access to the power of the IPS packaging system, the new virtualized networking stack, new Zones features, and the Auto Install framework. If you're ready to try Solaris 11, then Ops Center is ready for you. Last is the area of Engineered Systems. These combinations of hardware and software are fast and powerful. However, we're also on a mission to make them ever easier to manage. We've made major strides with Ops Center 12c. Manage these systems as racks, not individual components. The new capabilities for the new engineered systems like Exalogic and SPARC SuperCluster are striking. You can read more here: Oracle Unveils Oracle Enterprise Manager Ops Center 12c. So, I'll wrap this up with one final bit of fun. One of my friends from the Oracle marketing department found a super cool place to get dinner. It's a restaurant called Gonpachi. It turns out this is the place that inspired the scene in the Quentin Tarantino movie Kill Bill where Uma Thurman fights 88 Ninjas. Here is a picture I snapped while we were there. It was surely a good time. Check it out next time you're in Tokyo.

    Read the article

  • Webcast Q&A: Hitachi Data Systems Improves Global Web Experiences with Oracle WebCenter

    - by kellsey.ruppel
    Last Thursday we had the third webcast in our WebCenter in Action webcast series, "Hitachi Data Systems Improves Global Web Experiences with Oracle WebCenter", where customer Sean Mattson from HDS and Rob Vandenberg from Oracle Partner Lingotek shared how Oracle WebCenter is powering Hitachi Data Systems' externally facing website and providing a seamless experience for their customers. In case you missed it, here's a recap of the Q&A.
    Sean Mattson, Hitachi Data Systems
    Q: Did you run into any issues in the deployment of the platform?
    A: There were some challenges; we were one of the first enterprise ‘on premise’ installations for Lingotek, and our WebCenter platform also has a lot of custom features. There were a lot of iterations and back and forth working with Lingotek at first. We both helped each other, learned a lot and in the end managed to resolve all issues and roll out a very compelling solution for HDS.
    Q: What has been the biggest benefit your end users have seen?
    A: Being able to manage and govern the content lifecycle globally and centrally, and at the same time enabling the field to update, review and publish the incremental content changes without a lot of touchpoints, has helped us streamline and simplify the entire publishing process.
    Q: Was there any resistance internally when implementing the solution? If so, how did you overcome that?
    A: I wouldn't say resistance as much as skepticism that we could actually deploy an automated and self-publishing solution. Even if a solution is great, adoption of a new process can be a challenge and we are still pursuing our adoption targets. One of the most important aspects is to include lots of training and support materials and offer as much helpdesk-type support as needed to get the field self-sufficient and confident in the capabilities of the system.
    Rob Vandenberg, Lingotek
    Q: Are there any limitations regarding supported languages, such as support for French Canadian and Indian languages?
    A: Lingotek supports all language pairs, including right-to-left languages and double-byte languages such as Chinese, Japanese and Korean.
    Q: Is the Lingotek solution integrated with the new 11g release of WebCenter Sites?
    A: Yes! In fact, Lingotek is the first OVI partner for Oracle WebCenter Sites.
    Q: Can translation memories help to improve the accuracy of machine translation?
    A: One of the greatest long-term strategic benefits of using Lingotek is the accumulation of translation memories, or past human translations. These TMs can be used to "train" statistical machine translation engines to have higher and higher quality. This virtuous cycle is ongoing and will consistently improve both machine and human translations.
    Q: We have existing translation memories from previous work with our translation service provider. Can they be easily imported into the Lingotek solution for re-use?
    A: Yes, Lingotek is standards compliant. We support TM import in both the TMX and XLIFF formats.
    Q: If we use Lingotek as a service to do our professional translation and also use the Lingotek software solution, do we get the translation memories to give us a means of just translating future adds and changes ourselves?
    A: Yes, all the data is yours, always. Lingotek can provide both the integrated translation software as well as the professional translation services. All the content and translation memories are yours.
    Q: Can you give us an example of where community translation has proved to be successful?
    A: The key word here is community.
If you have a community that cares about you, your content, and the rest of the community, then community translation can work for you. We've seen effective use cases in Product User Groups content, Support Communities, and other types of User Generated content, like wikis and blogs.   If you missed the webcast, be sure to catch the replay to see a live demonstration of WebCenter in action!   Hitachi Data Systems Improves Global Web Experiences with Oracle WebCenter from Oracle WebCenter

    Read the article

  • How to read Unicode characters from command-line arguments in Python on Windows

    - by Craig McQueen
    I want my Python script to be able to read Unicode command line arguments in Windows. But it appears that sys.argv is a string encoded in some local encoding, rather than Unicode. How can I read the command line in full Unicode? Example code: argv.py import sys first_arg = sys.argv[1] print first_arg print type(first_arg) print first_arg.encode("hex") print open(first_arg) On my PC set up for Japanese code page, I get: C:\temp>argv.py "PC·??????08.09.24.doc" PC·??????08.09.24.doc <type 'str'> 50438145835c83748367905c90bf8f9130382e30392e32342e646f63 <open file 'PC·??????08.09.24.doc', mode 'r' at 0x00917D90> That's Shift-JIS encoded I believe, and it "works" for that filename. But it breaks for filenames with characters that aren't in the Shift-JIS character set—the final "open" call fails: C:\temp>argv.py Jörgen.txt Jorgen.txt <type 'str'> 4a6f7267656e2e747874 Traceback (most recent call last): File "C:\temp\argv.py", line 7, in <module> print open(first_arg) IOError: [Errno 2] No such file or directory: 'Jorgen.txt' Note—I'm talking about Python 2.x, not Python 3.0. I've found that Python 3.0 gives sys.argv as proper Unicode. But it's a bit early yet to transition to Python 3.0 (due to lack of 3rd party library support). Update: A few answers have said I should decode according to whatever the sys.argv is encoded in. The problem with that is that it's not full Unicode, so some characters are not representable. Here's the use case that gives me grief: I have enabled drag-and-drop of files onto .py files in Windows Explorer. I have file names with all sorts of characters, including some not in the system default code page. My Python script doesn't get the right Unicode filenames passed to it via sys.argv in all cases, when the characters aren't representable in the current code page encoding. There is certainly some Windows API to read the command line with full Unicode (and Python 3.0 does it). I assume the Python 2.x interpreter is not using it.
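
    The Windows API route mentioned at the end of the question is usually reached through ctypes. Below is a minimal sketch, assuming Python 2.x on Windows; GetCommandLineW and CommandLineToArgvW are real Win32 calls, but the wrapper name win32_unicode_argv and the slicing logic are illustrative only, not the interpreter's own behavior:

        import sys
        from ctypes import POINTER, byref, windll, c_int
        from ctypes.wintypes import LPCWSTR, LPWSTR

        def win32_unicode_argv():
            """Rebuild an argv-style list from the wide-character command line."""
            GetCommandLineW = windll.kernel32.GetCommandLineW
            GetCommandLineW.argtypes = []
            GetCommandLineW.restype = LPCWSTR

            CommandLineToArgvW = windll.shell32.CommandLineToArgvW
            CommandLineToArgvW.argtypes = [LPCWSTR, POINTER(c_int)]
            CommandLineToArgvW.restype = POINTER(LPWSTR)

            argc = c_int(0)
            argv = CommandLineToArgvW(GetCommandLineW(), byref(argc))
            # Drop the interpreter (and any interpreter options) so that the
            # returned list lines up with sys.argv.
            start = argc.value - len(sys.argv)
            return [argv[i] for i in xrange(start, argc.value)]

    Under those assumptions, taking sys.argv[1] from this list preserves characters that the ANSI code page cannot represent.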

    Read the article

  • Getting started with character and text processing (encoding, regular expressions)

    - by TK
    I'd like to learn the foundations of encodings, characters and text. Understanding these is important for dealing with large sets of text, whether they are log files or text sources for building algorithms for collective intelligence. My current knowledge is pretty basic: something like "As long as I use UTF-8, I'm okay." I'm not saying I need to learn about advanced topics right away. But I need to know: Bit- and byte-level knowledge of encodings. Characters and alphabets not used in English. Multi-byte encodings. (I understand some Chinese and Japanese. And parsing them is important.) Regular expressions. Algorithms for text processing. Parsing natural languages. I also need an understanding of mathematics and corpus linguistics. The current and future web (semantic, intelligent, real-time web) needs processing, parsing and analysis of large amounts of text. I'm looking for some resources (maybe books?) that get me started with some of the bullets. (I find many helpful discussions on regular expressions here on Stack Overflow. So, you don't need to suggest resources on that topic.)
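
    As a tiny illustration of the byte-versus-character distinction behind the first few bullets (a sketch in Python 2 syntax; the sample string is arbitrary and a UTF-8-capable terminal is assumed):

        # -*- coding: utf-8 -*-
        text = u"日本語"                      # a unicode object: 3 characters
        data = text.encode("utf-8")           # a byte string: 9 bytes in UTF-8
        print len(text), len(data)            # -> 3 9
        print data.decode("utf-8") == text    # decoding round-trips back -> True

    The same three characters would occupy 6 bytes in UTF-16 and are unrepresentable in a single-byte encoding such as Latin-1, which is exactly the kind of trade-off the encoding bullets above are about.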

    Read the article

  • What scalability problems have you solved using a NoSQL data store?

    - by knorv
    NoSQL refers to non-relational data stores that break with the history of relational databases and ACID guarantees. Popular open source NoSQL data stores include: Cassandra (tabular, written in Java, used by Facebook, Twitter, Digg, Rackspace, Mahalo and Reddit) CouchDB (document, written in Erlang, used by Engine Yard and BBC) Dynomite (key-value, written in C++, used by Powerset) HBase (key-value, written in Java, used by Bing) Hypertable (tabular, written in C++, used by Baidu) Kai (key-value, written in Erlang) MemcacheDB (key-value, written in C, used by Reddit) MongoDB (document, written in C++, used by Sourceforge, Github, Electronic Arts and NY Times) Neo4j (graph, written in Java, used by Swedish Universities) Project Voldemort (key-value, written in Java, used by LinkedIn) Redis (key-value, written in C, used by Engine Yard, Github and Craigslist) Riak (key-value, written in Erlang, used by Comcast and Mochi Media) Ringo (key-value, written in Erlang, used by Nokia) Scalaris (key-value, written in Erlang, used by OnScale) ThruDB (document, written in C++, used by JunkDepot.com) Tokyo Cabinet/Tokyo Tyrant (key-value, written in C, used by Mixi.jp (Japanese social networking site)) I'd like to know about specific problems you - the SO reader - have solved using data stores and what NoSQL data store you used. Questions: What scalability problems have you used NoSQL data stores to solve? What NoSQL data store did you use? What database did you use before switching to a NoSQL data store? I'm looking for first-hand experiences, so please do not answer unless you have that.

    Read the article

  • Treebeard admin in Django

    - by Sharath
    I've setup Treebeard in Django and everything seems to have gone well. I tried to setup the admin system and I can see my models being presented in the admin interface. However, when I try to add new data using the admin interface, I get the following error in my template. The code still works fine, and I did a check in my DB and the data seems to be inserted properly. However, the view doesn't seem to load properly. Any idea about what is causing this?? The exception am getting is.. Caught an exception while rendering: Failed lookup for key [request] in u'[{\'action_index\': 0, \'block\': , , , , , , ]}, {\'block\': , , , ], , , , , \n \', ], , ], , , , ], , , \n \', , , , , , , , , ], , ], \n \']}, {\'cl\': , \'root_path\': None, \'actions_on_bottom\': False, \'title\': u\'Select album to change\', \'has_add_permission\': True, \'media\': , \'is_popup\': False, \'action_form\': , \'actions_on_top\': True, \'app_label\': \'gallery\'}, {\'MEDIA_URL\': \'\'}, {\'LANGUAGES\': ((\'ar\', \'Arabic\'), (\'bn\', \'Bengali\'), (\'bg\', \'Bulgarian\'), (\'ca\', \'Catalan\'), (\'cs\', \'Czech\'), (\'cy\', \'Welsh\'), (\'da\', \'Danish\'), (\'de\', \'German\'), (\'el\', \'Greek\'), (\'en\', \'English\'), (\'es\', \'Spanish\'), (\'et\', \'Estonian\'), (\'es-ar\', \'Argentinean Spanish\'), (\'eu\', \'Basque\'), (\'fa\', \'Persian\'), (\'fi\', \'Finnish\'), (\'fr\', \'French\'), (\'ga\', \'Irish\'), (\'gl\', \'Galician\'), (\'hu\', \'Hungarian\'), (\'he\', \'Hebrew\'), (\'hi\', \'Hindi\'), (\'hr\', \'Croatian\'), (\'is\', \'Icelandic\'), (\'it\', \'Italian\'), (\'ja\', \'Japanese\'), (\'ka\', \'Georgian\'), (\'ko\', \'Korean\'), (\'km\', \'Khmer\'), (\'kn\', \'Kannada\'), (\'lv\', \'Latvian\'), (\'lt\', \'Lithuanian\'), (\'mk\', \'Macedonian\'), (\'nl\', \'Dutch\'), (\'no\', \'Norwegian\'), (\'pl\', \'Polish\'), (\'pt\', \'Portuguese\'), (\'pt-br\', \'Brazilian Portuguese\'), (\'ro\', \'Romanian\'), (\'ru\', \'Russian\'), (\'sk\', \'Slovak\'), (\'sl\', \'Slovenian\'), (\'sr\', \'Serbian\'), (\'sv\', \'Swedish\'), (\'ta\', \'Tamil\'), (\'te\', \'Telugu\'), (\'th\', \'Thai\'), (\'tr\', \'Turkish\'), (\'uk\', \'Ukrainian\'), (\'zh-cn\', \'Simplified Chinese\'), (\'zh-tw\', \'Traditional Chinese\')), \'LANGUAGE_BIDI\': False, \'LANGUAGE_CODE\': \'en-us\'}, {}, {\'perms\': , \'messages\': [], \'user\': }, {}]' This happens after I hit the save button in Django admin. This is my admin.py implementation.. class MP_Album_Admin(TreeAdmin): pass admin.site.register(Album,MP_Album_Admin)
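
    One hedged observation: the "Failed lookup for key [request]" part of that exception usually means the admin template could not find a request variable in the template context, and django-treebeard's tree admin templates do rely on one. A sketch of the settings change that typically addresses this, assuming a Django 1.x-era TEMPLATE_CONTEXT_PROCESSORS tuple (the rest of the processor list shown here is placeholder only, your project's will differ):

        # settings.py (sketch): treebeard's admin templates reference {{ request }},
        # which is only present when the request context processor is enabled.
        TEMPLATE_CONTEXT_PROCESSORS = (
            # ... keep whatever processors your project already lists ...
            'django.core.context_processors.request',
        )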

    Read the article

  • doublechecking: no db-wide 'unicode switch' for sql server in the foreseeable future, i.e. like Oracle

    - by user72150
    Hi all, I believe I know the answer to this question, but wanted to confirm: Question: Does SQL Server offer (or will it offer in the foreseeable future) a database-wide "unicode switch" which says "store all characters in Unicode (UTF-16, UCS-2, etc.)", i.e. like Oracle? The Context: Our application has provided "CJK" (Chinese-Japanese-Korean) support for years--using Oracle as the db store. Recently folks have been asking for the same support in SQL Server. We store our db schema definition in XML and generate the vendor-specific definitions (Oracle, SQL Server) using vendor-specific XSL. We can make the change easily. The problem is for upgrades. Generated scripts would need to change the column types for 100+ columns from varchar to nvarchar, varchar(max) to nvarchar(max), etc. These changes require dropping and recreating indexes and foreign keys if any indexes/FKs exist on the column. Non-trivial. Risky. DB-wide character encodings would eliminate programming changes for us. (I.e. we would not need to change the column types from varchar to nvarchar; SQL Server would correctly store Unicode data in varchar columns.) I had thought that eventually SQL Server would "see the light" and allow storing Unicode in varchar/clob columns. Evidently not yet. Recap: So just to triple check: does MSSQL offer a database-wide switch for character encoding? Will it in SQL 2008 R3? Or 2010? thanks, bill

    Read the article

  • Unicode strings in my C# App are shown with question marks

    - by mrbamboo
    Hi, I have a header file in a C++/CLR project which contains some strings in different languages: Arabic, English, German, Chinese, French, Japanese, etc. I have a second project written in C#. Here I access the strings stored in the header file of the C++/CLR project. The encoding of the header file is Unicode - Codepage 1200 or UTF-8. The Visual Studio editor is able to display the strings correctly. At runtime I access these strings and assign them to a local String variable. Here I noticed that many strings are not shown correctly; it doesn't matter whether I assign them or not. Accessing the original place (while debugging) shows me all the foreign strings with question marks. Especially Chinese: just question marks. Example: "So?e St?ange ?ext in Ch?n?se" (This is not the best example, I know.) What is the problem? I read that C# is UTF-16 by default, and my header file containing the strings is UTF-16 or UTF-8. I must be able to handle strings in different languages. What am I doing wrong?

    Read the article

  • Java UTF-8 to ASCII conversion with supplements

    - by bozo
    Hi, we are accepting all sorts of national characters in a UTF-8 string on the input, and we need to convert them to an ASCII string on the output for some legacy use. (We don't accept Chinese and Japanese chars, only European languages.) We have a small utility to get rid of all the diacritics: public static final String toBaseCharacters(final String sText) { if (sText == null || sText.length() == 0) return sText; final char[] chars = sText.toCharArray(); final int iSize = chars.length; final StringBuilder sb = new StringBuilder(iSize); for (int i = 0; i < iSize; i++) { String sLetter = new String(new char[] { chars[i] }); sLetter = Normalizer.normalize(sLetter, Normalizer.Form.NFC); try { byte[] bLetter = sLetter.getBytes("UTF-8"); sb.append((char) bLetter[0]); } catch (UnsupportedEncodingException e) { } } return sb.toString(); } The question is how to replace all the German sharp s (ß, Ð, d) and other characters that get through the above normalization method with their supplements (in the case of ß, the supplement would probably be "ss", and in the case of Ð the supplement would be either "D" or "Dj"). Is there some simple way to do it, without a million .replaceAll() calls? So for example: Ðonardan = Djonardan, Blaß = Blass and so on. We can replace all "problematic" chars with an empty space, but we would like to avoid this to keep the output as similar to the input as possible. Thank you for your answers, Bozo

    Read the article

  • What encoding does InstallShield expect non-latin-alphabet string table entries to use?

    - by DNS
    I work on an app that gets distributed via a single installer containing multiple localizations. The build process includes a script that updates the .ism string table with translations for each supported language. This works fine for languages like French and German. But when testing the installer in, say, Japanese, the text shows up as a series of squares. It's unlikely to be a font problem, since the InstallShield-supplied strings show up fine; only the string table entries are mangled. So the problem seems to be that the strings are in the wrong encoding. The .ism is in XML format, with UTF-8 declared as its encoding, so I assumed the strings needed to be UTF-8 encoded as well. Do they actually need to use the encoding of the target platform? Is there any concern, then, about targets having different encodings, e.g. Chinese systems using one GB encoding versus another? What is the right thing to do here?

    Read the article

  • How to use the Request URL/URL Rewriting For Localization in ASP.NET - Using an HTTP Module or Global.asax

    - by LocalizedUrlDMan
    I wanted to see if there is a way to use the request URL/URL rewriting to set the language a page is rendered in by examining a portion of the URL in ASP.NET. We have a site that already works with ASP.NET's resource localization, and users can change the language that they see pages/resources on the site in; however, the current mechanism is not very search-engine friendly since the language variations for each language all appear as one page. It would be much better if we could have pages like www.site.com/en-mx/realfolder/realpage.aspx that allow linking to culture-specific versions of a page. I know lots of people have likely done localization through URL structures before, and I wanted to know if one of you could share how to do this in the Global.asax file or with an HTTP Module (pointing to links to blog postings would be great too). We have a restriction that the site is based on ASP.NET 2.0 (so we can't use the 3.5+ features yet). Here is the example scenario: A real page exists at: www.site.com/realfolder/realpage.aspx The page has a mechanism for the user to change the language it is displayed in via a dropdown. There are search engine optimization and user link-sharing benefits to doing this, since people can link directly to a page that has content that is applicable to a certain language (this could also include right-to-left layouts for languages like Japanese). I would like to use an HTTP module to see if the first part of the URL after www.site.com, site.com, subdomain.site.com, etc. contains a valid culture code (e.g. en-us, es-mx), then use that value to set the localization culture of the page/resources based on that URL. So if the user accesses the URL www.site.com/en-MX/realfolder/realpage.aspx, then the page will render in Mexico's variant of Spanish. If the user goes to www.site.com/realfolder/realpage.aspx directly, the page would just use their browser's language settings.

    Read the article

  • Is there stl and utf8 friendly C++ Wrapper for ICU, or other powerful unicode library

    - by artyom
    Hello, I need a good Unicode library for C++. I need transformations done in a Unicode-sensitive way. For example: sorting all strings case-insensitively and getting their first characters for an index; converting various Unicode strings to upper and to lower case; splitting text at reasonable positions -- word boundaries that would work for Chinese and Japanese as well; formatting numbers and dates in a locale-sensitive way (which should be thread safe); transparent support of UTF-8 (the primary internal representation). As far as I know the best library is ICU. However, I can't find normal, developer-friendly API documentation with examples. Also, as far as I can see, it is not too friendly with modern C++ design, working with STL and so on. I'd like something like this: std::string msg; unistring umsg.from_utf8(msg); unistring::word_iterator wi; for(wi=umsg.words().begin(),n=0;wi!=umsg.words().wi_end(),n<10;++wi,++n) ; msg=umsg.substr(umsg.words().begin(),wi).to_utf8(); cout<<_("Five 10 words are ")<<msg; Does anybody know a good STL-friendly ICU wrapper released under an open source license, preferably permissive like MIT or Boost, though other LGPLv2-compatible licenses are OK as well? Is there another high-quality library similar to ICU? Platform: UNIX/POSIX; Windows support is not required. Thanks, Artyom Edit: Unfortunately I wasn't logged in, so I can't mark the answer as accepted... I have attached the answer myself.

    Read the article

  • Using mercurial and beyond compare 3(bc3) as the diff tool? help needed

    - by mhd
    Hi, in Windows I am able to use WinMerge as the external diff tool for hg via mercurial.ini, etc., using some option switches that you can find on the web (I think it's a Japanese website). Anyway, here for example: hg winmerge -r1 -r2 will list the file(s) changed between rev1 and rev2 in WinMerge, and I can just click which file to diff. But for bc3: hg bcomp -r1 -r2 will make bc3 open a dialog which states that a temp dir can't be found. The most I can do using bc3 and hg is hg bcomp -r1 -r2 myfile.cpp which will open a diff between rev1 and rev2 of myfile.cpp. So, it seems that hg+bc3 can't successfully acknowledge all file changes between revisions; it is only able to diff 1 file at a time. Can anyone use bc3 + hg better? edit: Problem solved! Got the solution from the Scooter support page: I have to use bcompare instead of bcomp. Here's a snippet of my mercurial.ini: [extensions] hgext.win32text = ;mhd adds hgext.extdiff = ;mhd adds for bc [extdiff] cmd.bc3 = bcompare opts.bc3 = /ro ;mhd adds for winmerge ;[extdiff] ;cmd.winmerge = WinMergeU ;opts.winmerge = /r /e /x /ub

    Read the article

  • Python: Copying files with special characters in path

    - by erikderwikinger
    Hi, is there any possibility in Python 2.5 to copy files having special chars (Japanese chars, Cyrillic letters) in their path? shutil.copy cannot handle this. Here is some example code: import copy, os, shutil, sys, unicodedata fname=os.getenv("USERPROFILE")+"\\Desktop\\testfile.txt" print fname print "type of fname: "+str(type(fname)) fname0 = unicode(fname,'mbcs') print fname0 print "type of fname0: "+str(type(fname0)) fname1 = unicodedata.normalize('NFKD', fname0).encode('cp1251','replace') print fname1 print "type of fname1: "+str(type(fname1)) fname2 = unicode(fname,'mbcs').encode(sys.stdout.encoding) print fname2 print "type of fname2: "+str(type(fname2)) shutil.copy(fname2,'C:\\') The output on a Russian Windows XP: C:\Documents and Settings\+????????????\Desktop\testfile.txt type of fname: <type 'str'> C:\Documents and Settings\?????????????\Desktop\testfile.txt type of fname0: <type 'unicode'> C:\Documents and Settings\+????????????\Desktop\testfile.txt type of fname1: <type 'str'> C:\Documents and Settings\?????????????\Desktop\testfile.txt type of fname2: <type 'str'> Traceback (most recent call last): File "C:\Test\getuserdir.py", line 23, in <module> shutil.copy(fname2,'C:\\') File "C:\Python25\lib\shutil.py", line 80, in copy copyfile(src, dst) File "C:\Python25\lib\shutil.py", line 46, in copyfile fsrc = open(src, 'rb') IOError: [Errno 2] No such file or directory: 'C:\\Documents and Settings\\\x80\xa4\xac\xa8\xad\xa8\xe1\xe2\xe0\xa0\xe2\xae\xe0\\Desktop\\testfile.txt'
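
    For what it's worth, here is a sketch of the approach that usually sidesteps this on Windows: keep the path as a unicode object the whole way through, so Python 2.x calls the wide-character file APIs instead of encoding the path to a byte string first (the file name below is simply the example from the question):

        # -*- coding: utf-8 -*-
        import os
        import shutil

        # Decode the byte-string environment value once, then stay in unicode;
        # unicode paths on Windows avoid the lossy code-page round trip.
        profile = unicode(os.getenv("USERPROFILE"), "mbcs")
        fname = os.path.join(profile, u"Desktop", u"testfile.txt")
        shutil.copy(fname, u"C:\\")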

    Read the article

  • Have to click twice to submit the form

    - by phil
    Intended function: require the user to select an option from the drop-down menu. After the user clicks the submit button, validate whether an option is selected. Display an error message and do not submit the form if the user fails to select one; otherwise submit the form. Problem: After selecting an option, the button has to be clicked twice to submit the form. I have no clue at all. <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <script src="jquery-1.4.2.min.js" type="text/javascript"></script> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <style> p{display: none;} </style> </head> <script> $(function(){ // language as an array var language=['Arabic','Cantonese','Chinese','English','French','German','Greek','Hebrew','Hindi','Italian','Japanese','Korean','Malay','Polish','Portuguese','Russian','Spanish','Thai','Turkish','Urdu','Vietnamese']; $('#muyu').append('<option value=0>Select</option>'); //loop through array for (i in language) //js unique statement for iterate array { $('#muyu').append($('<option>',{id:'muyu'+i,val:language[i], html:language[i]})) } $('form').submit(function(){ alert('I am being called!'); // check if submit event is triggered if ( $('#muyu').val()==0 ) {$('#muyu_error').show(); } else {$('#muyu_error').hide(); return true;} return false; }) }) </script> <form method="post" action="match.php"> I am fluent in <select name='muyu' id='muyu'></select> <p id='muyu_error'>Tell us your native language</p> <input type="submit" value="Go"> </form>

    Read the article

  • Which function pair in QString to use for converting to/from std::string?

    - by Noah Roberts
    I'm working on a project in which we want to use Unicode, and which could end up in countries like Japan, etc... We want to use std::string for the underlying type that holds string data in the data layer (see "Qt, MSVC, and /Zc:wchar_t- == I want to blow up the world" as to why). The problem is that I'm not completely sure which function pair (to/from) to use for this and still be sure we're 100% compatible with anything the user might enter in the Qt layer. A look at to/fromStdString indicates that I'd have to use setCodecForCStrings. The documentation for that function, though, indicates that I wouldn't want to do this for things like Japanese. This is the set that I'd LIKE to use, though. Does someone know enough to explain how I'd set this up, if it's possible? The other option that looks like I could be pretty sure of working is the to/fromUTF8 functions. Those would require a two-step approach, though, so I'd prefer the other if possible. Is there anything I've missed?

    Read the article

  • C/C++ I18N mbstowcs question

    - by bogertron
    I am working on internationalizing the input for a C/C++ application. I have currently hit an issue with converting from a multi-byte string to a wide character string. The code needs to be cross-platform compatible, so I am using mbstowcs and wcstombs as much as possible. I am currently working on a WIN32 machine and I have set the locale to a non-English locale (Japanese). When I attempt to convert a multibyte character string, I seem to be having some conversion issues. Here is an example of the code: int main(int argc, char** argv) { wchar_t *wcsVal = NULL; char *mbsVal = NULL; /* Get the current code page, in my case 932, runs only on windows */ TCHAR szCodePage[10]; int cch= GetLocaleInfo( GetSystemDefaultLCID(), LOCALE_IDEFAULTANSICODEPAGE, szCodePage, sizeof(szCodePage)); /* verify locale is set */ if (setlocale(LC_CTYPE, "") == 0) { fprintf(stderr, "Failed to set locale\n"); return 1; } mbsVal = argv[1]; /* validate multibyte string and convert to wide character */ int size = mbstowcs(NULL, mbsVal, 0); if (size == -1) { printf("Invalid multibyte\n"); return 1; } wcsVal = (wchar_t*) malloc(sizeof(wchar_t) * (size + 1)); if (wcsVal == NULL) { printf("memory issue \n"); return 1; } mbstowcs(wcsVal, mbsVal, size + 1); wprintf(L"%ls \n", wcsVal); return 0; } At the end of execution, the wide character string does not contain the converted data. I believe that there is an issue with the code page settings, because when I use MultiByteToWideChar and pass in the current code page, e.g.: MultiByteToWideChar( CP_ACP, 0, mbsVal, -1, wcsVal, size + 1 ); in place of the mbstowcs calls, the conversion succeeds. My question is, how do I use the generic mbstowcs call instead of the MultiByteToWideChar call?

    Read the article

< Previous Page | 8 9 10 11 12 13 14 15  | Next Page >