Search Results

Search found 9788 results on 392 pages for 'character limit'.


  • "Time Control" in a 2d Platformer

    - by Woody Zantzinger
    I am making a 2D platformer where the player can press a button and restart the level, only their previous character will also run the level at the same time, as if they are traveling back in time. I know other games have done this before, and the way I have thought of doing it is to give the game character a set of actions (Idle, Jumping, Walking Left, etc.), then detect changes in those actions and log them into a list along with the game time. Then, when I need the character to run the level again on its own, I can just go through the list, changing its actions at the right times. Is this the best way to do it? Does anyone have any experience with this? Thanks.
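    A bare-bones sketch of the record-and-replay idea described above; the class names, action names and timings are illustrative, not taken from the question.

        # Record (time, action) pairs whenever the player's action changes,
        # then replay them for the "past" character on the next run.

        class ActionRecorder:
            def __init__(self):
                self.events = []          # list of (time, action) tuples
                self.last_action = None

            def record(self, time, action):
                # Only log changes, as described in the question.
                if action != self.last_action:
                    self.events.append((time, action))
                    self.last_action = action

        class GhostPlayer:
            def __init__(self, events):
                self.events = events
                self.index = 0
                self.current_action = None

            def update(self, time):
                # Switch to the next recorded action once its timestamp is reached.
                while self.index < len(self.events) and self.events[self.index][0] <= time:
                    self.current_action = self.events[self.index][1]
                    self.index += 1

        # Example: record a short run, then replay it frame by frame.
        recorder = ActionRecorder()
        for t, action in [(0.0, "Idle"), (0.5, "WalkRight"), (2.0, "Jump"), (2.4, "WalkRight")]:
            recorder.record(t, action)

        ghost = GhostPlayer(recorder.events)
        for frame_time in (0.0, 1.0, 2.1, 3.0):
            ghost.update(frame_time)
            print(frame_time, ghost.current_action)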

    Read the article

  • Cannot type '^' (caret) in native KDE apps

    - by nebukadnezzar
    I cannot type the caret character '^' in native KDE apps, such as plasmoids and editors (Kate, KWrite). It worked in previous (older) versions of KDE (3.5, I think). I believe it may be related to the fact that I'm using a keyboard with a German layout; however, that seems far-fetched, since I can type the caret character just fine in non-KDE apps (such as Chrome, Firefox, etc.). I tried switching keyboard layouts, but no cigar. What's going on? What do I have to change to get the caret character back?

    Read the article

  • Is it possible to give an animated GIF a transparent background?

    - by Phil
    I'm making a Fire Emblem-esque game. There are very cute 2D frames I made for each character, and, as in Fire Emblem, I want these characters to animate constantly. To sidestep the graphics programming involved, I came up with a novel idea: I would make each character an animated GIF, and only in special conditions ever halt their constant movement - in that case I would just change which image is displayed. Simple enough. But I have a dilemma - I want the background of my GIFs to be transparent (so that the "grass" behind each character naturally shows, as per the screenshot - which has them as still images with transparent backgrounds). I know how to make a background transparent in numerous tools (GIMP, Photoshop), but it seems every GIF creator replaces the transparent background with something, and I can't edit it back to transparent. Is it possible to have a GIF with a transparent background? Perhaps my knowledge of file formats is limiting me here.
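    For what it's worth, GIF does support transparency through a transparent palette index, so an animated GIF with a see-through background is possible. As one hedged illustration (not from the question), a reasonably current Pillow can write one roughly like this; the frame file names are made up, and how clean the result looks still depends on how each frame is quantized.

        from PIL import Image

        # Illustrative frame names; any RGBA images with transparent backgrounds work.
        frames = [Image.open(name).convert("RGBA") for name in ("walk0.png", "walk1.png")]

        palettized = []
        for frame in frames:
            alpha = frame.getchannel("A")
            # Quantize to 255 colours so palette index 255 stays free for transparency.
            p = frame.convert("RGB").quantize(colors=255)
            # Force every fully transparent pixel onto the reserved index.
            mask = alpha.point(lambda a: 255 if a == 0 else 0)
            p.paste(255, mask=mask)
            palettized.append(p)

        palettized[0].save(
            "character.gif",
            save_all=True,
            append_images=palettized[1:],
            transparency=255,  # treat palette index 255 as transparent
            disposal=2,        # restore to background before drawing the next frame
            duration=100,      # milliseconds per frame
            loop=0,            # loop forever
        )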

    Read the article

  • How can I imitate interaction and movement in Diablo II?

    - by user422318
    I'm prototyping a simple browser-based game. It's played from a top-down perspective on a 2D canvas. You left-click on a point on the map, and your character begins walking to it. If you click on a different point on the map, your character begins walking to the new point. It's similar to Diablo II: http://www.youtube.com/watch?v=EvDKt-To6K0&feature=related How can I best imitate this movement system for a player? Ideas: track the current coords and the target coords. If the target coords are exactly up, left, right, or down, then increment the appropriate direction until you get there. Implied else: the target coords are in a quadrant. To make this movement look natural, the character will have to move diagonally. For example, pretend the target is to the northeast. For each game frame, alternate incrementing the current coordinates in the north and then east directions.
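    For comparison, the usual way to get natural-looking diagonal movement is to step along the normalized direction vector toward the clicked point each frame, rather than alternating axis increments. A small sketch; the speed and frame-rate values are made up.

        import math

        def step_toward(pos, target, speed, dt):
            """Move pos toward target at `speed` units/second over a frame of `dt` seconds."""
            dx, dy = target[0] - pos[0], target[1] - pos[1]
            dist = math.hypot(dx, dy)
            if dist <= speed * dt:
                return target                      # close enough: snap onto the click point
            # Normalize the direction and advance along it.
            return (pos[0] + dx / dist * speed * dt,
                    pos[1] + dy / dist * speed * dt)

        pos, target = (0.0, 0.0), (30.0, 40.0)
        for _ in range(10):                        # simulate 10 frames at ~60 fps
            pos = step_toward(pos, target, speed=120.0, dt=1 / 60)
        print(pos)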

    Read the article

  • 2D fighting bounding boxes

    - by user36420
    I'm prototyping a 2D platformer/brawler game for uni and I'm having some trouble with creating collision/bounding boxes. This is most likely going to end up on a Vita, so I do have some library constraints as well as performance implications. None of this has been implemented yet; it is all theory. My idea was to have the artist create a sprite sheet for the character animation, and then a second, identical sprite sheet with the corresponding collisions in solid colours (e.g. green for where the character can be hit and red for dealing damage, near the foot when kicking, etc.). With this, I would then parse the collision sheet and generate the various collisions required, storing them in the character model. This is the point I feel would be most inefficient. While I think this is a possible solution, I was wondering if there is a more standard or more efficient way of doing this, as I feel this approach would have severe performance problems.
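    One way to keep the cost out of the game loop is to scan the colour-coded sheet once at build (or load) time and reduce each marker colour to an axis-aligned box. A rough sketch of that pass, with made-up colour keys, over a plain 2D array of RGB tuples.

        # Build one axis-aligned box per marker colour for a single frame.
        # frame_pixels is a 2D list of (r, g, b) tuples; the colour keys are illustrative.
        HURT_BOX = (0, 255, 0)     # green: where the character can be hit
        HIT_BOX = (255, 0, 0)      # red: where the character deals damage

        def extract_boxes(frame_pixels, colours=(HURT_BOX, HIT_BOX)):
            boxes = {}
            for y, row in enumerate(frame_pixels):
                for x, pixel in enumerate(row):
                    if pixel in colours:
                        x0, y0, x1, y1 = boxes.get(pixel, (x, y, x, y))
                        boxes[pixel] = (min(x0, x), min(y0, y), max(x1, x), max(y1, y))
            return boxes            # colour -> (left, top, right, bottom)

        # Tiny 4x4 test frame: a green hurt box in the top-left corner.
        frame = [[HURT_BOX if x < 2 and y < 2 else (0, 0, 0) for x in range(4)] for y in range(4)]
        print(extract_boxes(frame))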

    Read the article

  • Set Position of multiple bodies

    - by philipp
    I have a character composed of five bodies which are tied together by a lot of joints. One of them is the overall chassis, to which all forces and impulses are applied to move the whole character. All in all that works very well, except for one thing: I need to set the position of the character so that it gets beamed from one place to the other in a single frame. Unfortunately I cannot get this to work. I tried the following code, without any success… playerbodies.forEach(function (bd) { bd.SetLinearVelocity(new b2.Vec2()); var t = bd.GetTransform(); t.p.x -= 10; bd.SetTransform(t, bd.GetAngle()); }); How can I make that happen?
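    One way to think about the beam is to apply the same offset to every body in the compound and clear its velocities, so the joints keep their relative positions. A sketch of that idea in Python, assuming a pybox2d-style binding where position, angle and the velocities are writable properties; those property names are assumptions, and with other ports the equivalent SetTransform(position, angle) / SetLinearVelocity calls apply.

        def teleport(player_bodies, dx, dy):
            """Beam a multi-body character by (dx, dy) in a single frame."""
            for body in player_bodies:
                # Kill any carried-over motion so the character does not keep its old velocity.
                body.linearVelocity = (0, 0)
                body.angularVelocity = 0.0
                # Shift every body by the same offset, keeping its current rotation,
                # so the joints end up in the same relative configuration.
                x, y = body.position
                body.position = (x + dx, y + dy)
                # Bindings with explicit setters: body.SetTransform((x + dx, y + dy), body.angle)

        # teleport(playerbodies, -10, 0)   # matches the 10-unit shift attempted above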

    Read the article

  • How would I create alternating players (turn-based event)?

    - by Blue
    The picture above shows 2 players, each containing 3 characters. I want to know how to make a turn-based event starting with player 1 and alternating turns with player 2, where in every alternation each character gets a turn. If a character dies, the next character on the same team goes, and so on. How would I create this? Is there a tutorial? I haven't made any turn-based games, so I don't know how to program this kind of thing.
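    One simple way to drive the alternation is to interleave the two teams into a repeating turn order and skip any character that has died (rather than reordering within a team). A sketch; the class and names are illustrative.

        from itertools import zip_longest

        class Character:
            def __init__(self, name, hp):
                self.name, self.hp = name, hp
            def alive(self):
                return self.hp > 0

        def turn_order(team1, team2):
            """Yield characters alternating team1/team2, skipping the dead, until one team is wiped out."""
            while any(c.alive() for c in team1) and any(c.alive() for c in team2):
                # One "round": player 1's first slot, player 2's first slot, and so on.
                for a, b in zip_longest(team1, team2):
                    for character in (a, b):
                        if character is not None and character.alive():
                            yield character

        team1 = [Character("A1", 10), Character("A2", 10), Character("A3", 10)]
        team2 = [Character("B1", 10), Character("B2", 10), Character("B3", 10)]

        order = turn_order(team1, team2)
        for _ in range(6):
            print(next(order).name)   # A1 B1 A2 B2 A3 B3, then the cycle repeats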

    Read the article

  • How can I efficiently create/store/implement animations as I add to my game?

    - by nickbadal
    My game's characters are made up of different parts (head/body/legs/etc.) and whatever items they have equipped. As I'm creating the animation system for my game, I want to anticipate a large number of combinations of different pieces for each character. Originally, I had planned on having a frame-by-frame animation for each piece for each animation, and then layering them to combine them into a character, but this seems like it would be a lot of work for my artist, and the memory/disk size would start to add up as well, since we would need a sprite for every frame, of every customization of every piece, in every animation, for every character. What efficient ways are there to create/implement these animations as we add more and more configurations to our game?
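    A common alternative is to keep one animation per part and composite the layers at draw time, so a new item only adds one sprite strip instead of re-rendering every combination. A minimal sketch of that idea; the part and animation names are made up.

        # Each part owns its own frames; a character is composed at draw time,
        # so adding a new helmet means one new strip, not a re-render of every combination.

        animations = {
            # part -> animation name -> list of frame identifiers (stand-ins for images)
            "body":   {"walk": ["body_walk_0", "body_walk_1"]},
            "head":   {"walk": ["head_walk_0", "head_walk_1"]},
            "helmet": {"walk": ["helmet_walk_0", "helmet_walk_1"]},
        }

        def compose_frame(equipped_parts, animation, frame_index):
            """Return the draw list for one frame: layers in back-to-front order."""
            layers = []
            for part in equipped_parts:                 # draw order = list order
                frames = animations[part][animation]
                layers.append(frames[frame_index % len(frames)])
            return layers

        print(compose_frame(["body", "head", "helmet"], "walk", frame_index=1))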

    Read the article

  • How can you monitor internet download usage?

    - by dv3500ea
    Some broadband providers impose a monthly download limit, charging extra if you go over. It is also quite easy to exceed some of the lower limits just by installing/updating packages and 'normal' browsing (which to me includes streaming TV programs and movies). This means that you need to limit how much you use the internet, yet it is hard to know how much you have used. The System Monitor helps a bit with this by giving totals for data received/sent in the networking section of the Resources tab. However, these are reset on every reboot. It would be good if there were a way to get a monthly received total, so you can know how close you are to exceeding your limit, and maybe even be warned if it looks like you are going to exceed it. Does anyone know of a way to achieve this?
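    For a rough do-it-yourself approach, the kernel's per-interface byte counters in /proc/net/dev can be sampled periodically and accumulated into a running monthly total stored on disk (the counters themselves reset at boot, so the script keeps its own total). A minimal sketch; the interface name and state-file path are placeholders to adjust.

        import json, os, time

        STATE_FILE = os.path.expanduser("~/.net-usage.json")   # placeholder path
        INTERFACE = "eth0"                                      # placeholder interface

        def read_counters(interface):
            """Return (received_bytes, sent_bytes) for one interface from /proc/net/dev."""
            with open("/proc/net/dev") as f:
                for line in f.readlines()[2:]:                  # skip the two header lines
                    name, data = line.split(":", 1)
                    if name.strip() == interface:
                        fields = data.split()
                        return int(fields[0]), int(fields[8])   # rx bytes, tx bytes
            raise ValueError("interface not found: " + interface)

        def update_monthly_total():
            month = time.strftime("%Y-%m")
            rx, tx = read_counters(INTERFACE)
            state = {"month": month, "rx": 0, "tx": 0, "last_rx": 0, "last_tx": 0}
            if os.path.exists(STATE_FILE):
                with open(STATE_FILE) as f:
                    state = json.load(f)
            if state["month"] != month:
                state = {"month": month, "rx": 0, "tx": 0, "last_rx": 0, "last_tx": 0}
            # Counters reset at boot, so only add the growth since the last sample.
            state["rx"] += max(0, rx - state["last_rx"])
            state["tx"] += max(0, tx - state["last_tx"])
            state["last_rx"], state["last_tx"] = rx, tx
            with open(STATE_FILE, "w") as f:
                json.dump(state, f)
            print("This month: %.2f GB down, %.2f GB up" % (state["rx"] / 1e9, state["tx"] / 1e9))

        update_monthly_total()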

    Read the article

  • Long lines of text in source code [closed]

    - by ale
    Possible Duplicate: Is the 80 character limit still relevant in times of widescreen monitors? I used to have a vertical guide set at 80 characters in my text editor and added carriage returns if lines got too long. I later increased the value to 135 characters. Then I started using word wrap without giving myself a limit, though I tried to keep lines short where I could, because shortening them took a lot of time. People at work use word wrap and don't give themselves a limit. Is this the correct way? What are you meant to do? Many thanks.

    Read the article

  • 64-bit Archives Needed

    - by user9154181
    A little over a year ago, we received a question from someone who was trying to build software on Solaris. He was getting errors from the ar command when creating an archive. At that time, the ar command on Solaris was a 32-bit command. There was more than 2GB of data, and the ar command was hitting the file size limit for a 32-bit process that doesn't use the largefile APIs. Even in 2011, 2GB is a very large amount of code, so we had not heard this one before. Most of our toolchain was extended to handle 64-bit sized data back in the 1990's, but archives were not changed, presumably because there was no perceived need for it. Since then of course, programs have continued to get larger, and in 2010, the time had finally come to investigate the issue and find a way to provide for larger archives. As part of that process, I had to do a deep dive into the archive format, and also do some Unix archeology. I'm going to record what I learned here, to document what Solaris does, and in the hope that it might help someone else trying to solve the same problem for their platform. Archive Format Details Archives are hardly cutting edge technology. They are still used of course, but their basic form hasn't changed in decades. Other than to fix a bug, which is rare, we don't tend to touch that code much. The archive file format is described in /usr/include/ar.h, and I won't repeat the details here. Instead, here is a rough overview of the archive file format, implemented by System V Release 4 (SVR4) Unix systems such as Solaris: Every archive starts with a "magic number". This is a sequence of 8 characters: "!<arch>\n". The magic number is followed by 1 or more members. A member starts with a fixed header, defined by the ar_hdr structure in/usr/include/ar.h. Immediately following the header comes the data for the member. Members must be padded at the end with newline characters so that they have even length. The requirement to pad members to an even length is a dead giveaway as to the age of the archive format. It tells you that this format dates from the 1970's, and more specifically from the era of 16-bit systems such as the PDP-11 that Unix was originally developed on. A 32-bit system would have required 4 bytes, and 64-bit systems such as we use today would probably have required 8 bytes. 2 byte alignment is a poor choice for ELF object archive members. 32-bit objects require 4 byte alignment, and 64-bit objects require 64-bit alignment. The link-editor uses mmap() to process archives, and if the members have the wrong alignment, we have to slide (copy) them to the correct alignment before we can access the ELF data structures inside. The archive format requires 2 byte padding, but it doesn't prohibit more. The Solaris ar command takes advantage of this, and pads ELF object members to 8 byte boundaries. Anything else is padded to 2 as required by the format. The archive header (ar_hdr) represents all numeric values using an ASCII text representation rather than as binary integers. This means that an archive that contains only text members can be viewed using tools such as cat, more, or a text editor. The original designers of this format clearly thought that archives would be used for many file types, and not just for objects. Things didn't turn out that way of course — nearly all archives contain relocatable objects for a single operating system and machine, and are used primarily as input to the link-editor (ld). 
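    To make the layout above concrete, here is a small reader for the fixed-size member header, written as a sketch from the description here: a 16-byte name, then date, uid, gid and mode fields, a 10-byte size and a 2-byte trailer, all stored as ASCII text, with member data padded to an even length. The example archive name is a placeholder.

        import io

        MAGIC = b"!<arch>\n"
        HEADER_LEN = 60   # 16 + 12 + 6 + 6 + 8 + 10 + 2 bytes, all ASCII text

        def list_members(path):
            """Print the name and size of every member in an SVR4-style archive."""
            with open(path, "rb") as f:
                if f.read(len(MAGIC)) != MAGIC:
                    raise ValueError("not an ar archive")
                while True:
                    header = f.read(HEADER_LEN)
                    if len(header) < HEADER_LEN:
                        break
                    name = header[0:16].decode("ascii").rstrip()
                    size = int(header[48:58].decode("ascii").strip())
                    # Names beginning with '/' mark the special symbol-table and string-table
                    # members; "/123" means the real name lives at offset 123 in the string table.
                    print(name, size)
                    # Member data is padded with a newline so the next header starts on an even offset.
                    f.seek(size + (size & 1), io.SEEK_CUR)

        # list_members("libfoo.a")   # example invocation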
Archives can have special members that are created by the ar command rather than being supplied by the user. These special members are all distinguished by having a name that starts with the slash (/) character. This is an unambiguous marker that says that the user could not have supplied it. The reason for this is that regular archive members are given the plain name of the file that was inserted to create them, and any path components are stripped off. Slash is the delimiter character used by Unix to separate path components, and as such cannot occur within a plain file name. The ar command hides the special members from you when you list the contents of an archive, so most users don't know that they exist. There are only two possible special members: A symbol table that maps ELF symbols to the object archive member that provides it, and a string table used to hold member names that exceed 15 characters. The '/' convention for tagging special members provides room for adding more such members should the need arise. As I will discuss below, we took advantage of this fact to add an alternate 64-bit symbol table special member which is used in archives that are larger than 4GB. When an archive contains ELF object members, the ar command builds a special archive member known as the symbol table that maps all ELF symbols in the object to the archive member that provides it. The link-editor uses this symbol table to determine which symbols are provided by the objects in that archive. If an archive has a symbol table, it will always be the first member in the archive, immediately following the magic number. Unlike member headers, symbol tables do use binary integers to represent offsets. These integers are always stored in big-endian format, even on a little endian host such as x86. The archive header (ar_hdr) provides 15 characters for representing the member name. If any member has a name that is longer than this, then the real name is written into a special archive member called the string table, and the member's name field instead contains a slash (/) character followed by a decimal representation of the offset of the real name within the string table. The string table is required to precede all normal archive members, so it will be the second member if the archive contains a symbol table, and the first member otherwise. The archive format is not designed to make finding a given member easy. Such operations move through the archive from front to back examining each member in turn, and run in O(n) time. This would be bad if archives were commonly used in that manner, but in general, they are not. Typically, the ar command is used to build an new archive from scratch, inserting all the objects in one operation, and then the link-editor accesses the members in the archive in constant time by using the offsets provided by the symbol table. Both of these operations are reasonably efficient. However, listing the contents of a large archive with the ar command can be rather slow. Factors That Limit Solaris Archive Size As is often the case, there was more than one limiting factor preventing Solaris archives from growing beyond the 32-bit limits of 2GB (32-bit signed) and 4GB (32-bit unsigned). These limits are listed in the order they are hit as archive size grows, so the earlier ones mask those that follow. The original Solaris archive file format can handle sizes up to 4GB without issue. However, the ar command was delivered as a 32-bit executable that did not use the largefile APIs. 
As such, the ar command itself could not create a file larger than 2GB. One can solve this by building ar with the largefile APIs which would allow it to reach 4GB, but a simpler and better answer is to deliver a 64-bit ar, which has the ability to scale well past 4GB. Symbol table offsets are stored as 32-bit big-endian binary integers, which limits the maximum archive size to 4GB. To get around this limit requires a different symbol table format, or an extension mechanism to the current one, similar in nature to the way member names longer than 15 characters are handled in member headers. The size field in the archive member header (ar_hdr) is an ASCII string capable of representing a 32-bit unsigned value. This places a 4GB size limit on the size of any individual member in an archive. In considering format extensions to get past these limits, it is important to remember that very few archives will require the ability to scale past 4GB for many years. The old format, while no beauty, continues to be sufficient for its purpose. This argues for a backward compatible fix that allows newer versions of Solaris to produce archives that are compatible with older versions of the system unless the size of the archive exceeds 4GB. Archive Format Differences Among Unix Variants While considering how to extend Solaris archives to scale to 64-bits, I wanted to know how similar archives from other Unix systems are to those produced by Solaris, and whether they had already solved the 64-bit issue. I've successfully moved archives between different Unix systems before with good luck, so I knew that there was some commonality. If it turned out that there was already a viable defacto standard for 64-bit archives, it would obviously be better to adopt that rather than invent something new. The archive file format is not formally standardized. However, the ar command and archive format were part of the original Unix from Bell Labs. Other systems started with that format, extending it in various often incompatible ways, but usually with the same common shared core. Most of these systems use the same magic number to identify their archives, despite the fact that their archives are not always fully compatible with each other. It is often true that archives can be copied between different Unix variants, and if the member names are short enough, the ar command from one system can often read archives produced on another. In practice, it is rare to find an archive containing anything other than objects for a single operating system and machine type. Such an archive is only of use on the type of system that created it, and is only used on that system. This is probably why cross platform compatibility of archives between Unix variants has never been an issue. Otherwise, the use of the same magic number in archives with incompatible formats would be a problem. I was able to find information for a number of Unix variants, described below. These can be divided roughly into three tribes, SVR4 Unix, BSD Unix, and IBM AIX. Solaris is a SVR4 Unix, and its archives are completely compatible with those from the other members of that group (GNU/Linux, HP-UX, and SGI IRIX). AIX AIX is an exception to rule that Unix archive formats are all based on the original Bell labs Unix format. It appears that AIX supports 2 formats (small and big), both of which differ in fundamental ways from other Unix systems: These formats use a different magic number than the standard one used by Solaris and other Unix variants. 
They include support for removing archive members from a file without reallocating the file, marking dead areas as unused, and reusing them when new archive items are inserted. They have a special table of contents member (File Member Header) which lets you find out everything that's in the archive without having to actually traverse the entire file. Their symbol table members are quite similar to those from other systems though. Their member headers are doubly linked, containing offsets to both the previous and next members. Of the Unix systems described here, AIX has the only format I saw that will have reasonable insert/delete performance for really large archives. Everyone else has O(n) performance, and are going to be slow to use with large archives. BSD BSD has gone through 4 versions of archive format, which are described in their manpage. They use the same member header as SVR4, but their symbol table format is different, and their scheme for long member names puts the name directly after the member header rather than into a string table. GNU/Linux The GNU toolchain uses the SVR4 format, and is compatible with Solaris. HP-UX HP-UX seems to follow the SVR4 model, and is compatible with Solaris. IRIX IRIX has 32 and 64-bit archives. The 32-bit format is the standard SVR4 format, and is compatible with Solaris. The 64-bit format is the same, except that the symbol table uses 64-bit integers. IRIX assumes that an archive contains objects of a single ELFCLASS/MACHINE, and any archive containing ELFCLASS64 objects receives a 64-bit symbol table. Although they only use it for 64-bit objects, nothing in the archive format limits it to ELFCLASS64. It would be perfectly valid to produce a 64-bit symbol table in an archive containing 32-bit objects, text files, or anything else. Tru64 Unix (Digital/Compaq/HP) Tru64 Unix uses a format much like ours, but their symbol table is a hash table, making specific symbol lookup much faster. The Solaris link-editor uses archives by examining the entire symbol table looking for unsatisfied symbols for the link, and not by looking up individual symbols, so there would be no benefit to Solaris from such a hash table. The Tru64 ld must use a different approach in which the hash table pays off for them. Widening the existing SVR4 archive symbol tables rather than inventing something new is the simplest path forward. There is ample precedent for this approach in the ELF world. When ELF was extended to support 64-bit objects, the approach was largely to take the existing data structures, and define 64-bit versions of them. We called the old set ELF32, and the new set ELF64. My guess is that there was no need to widen the archive format at that time, but had there been, it seems obvious that this is how it would have been done. The Implementation of 64-bit Solaris Archives As mentioned earlier, there was no desire to improve the fundamental nature of archives. They have always had O(n) insert/delete behavior, and for the most part it hasn't mattered. AIX made efforts to improve this, but those efforts did not find widespread adoption. For the purposes of link-editing, which is essentially the only thing that archives are used for, the existing format is adequate, and issues of backward compatibility trump the desire to do something technically better. Widening the existing symbol table format to 64-bits is therefore the obvious way to proceed. For Solaris 11, I implemented that, and I also updated the ar command so that a 64-bit version is run by default. 
This eliminates the 2 most significant limits to archive size, leaving only the limit on an individual archive member. We only generate a 64-bit symbol table if the archive exceeds 4GB, or when the new -S option to the ar command is used. This maximizes backward compatibility, as an archive produced by Solaris 11 is highly likely to be less than 4GB in size, and will therefore employ the same format understood by older versions of the system. The main reason for the existence of the -S option is to allow us to test the 64-bit format without having to construct huge archives to do so. I don't believe it will find much use outside of that. Other than the new ability to create and use extremely large archives, this change is largely invisible to the end user. When reading an archive, the ar command will transparently accept either form of symbol table. Similarly, the ELF library (libelf) has been updated to understand either format. Users of libelf (such as the link-editor ld) do not need to be modified to use the new format, because these changes are encapsulated behind the existing functions provided by libelf. As mentioned above, this work did not lift the limit on the maximum size of an individual archive member. That limit remains fixed at 4GB for now. This is not because we think objects will never get that large, for the history of computing says otherwise. Rather, this is based on an estimation that single relocatable objects of that size will not appear for a decade or two. A lot can change in that time, and it is better not to overengineer things by writing code that will sit and rot for years without being used. It is not too soon however to have a plan for that eventuality. When the time comes when this limit needs to be lifted, I believe that there is a simple solution that is consistent with the existing format. The archive member header size field is an ASCII string, like the name, and as such, the overflow scheme used for long names can also be used to handle the size. The size string would be placed into the archive string table, and its offset in the string table would then be written into the archive header size field using the same format "/ddd" used for overflowed names.

    Read the article

  • Text Expansion Awareness for UX Designers: Points to Consider

    - by ultan o'broin
    Awareness of translated text expansion dynamics is important for enterprise applications UX designers (I am assuming all source text for translation is in English, though apps development can take place in other natural languages too). This consideration goes beyond the standard 'character multiplication' rule and must take into account the avoidance of other layout tricks that a designer might be tempted to try. Follow these guidelines. For general text expansion, remember the simple rule that the shorter the word is in English, the longer it will need to be in the translated language. See the examples provided by Richard Ishida of the W3C and you'll get the idea. So, forget the 30 percent or one inch minimum expansion rule of the old Forms days. Unfortunately, remembering convoluted text expansion rules, based on a percentage of the US English character count, can be tough going. Try these: Up to 10 characters: 100 to 200%; 11 to 20 characters: 80 to 100%; 21 to 30 characters: 60 to 80%; 31 to 50 characters: 40 to 60%; 51 to 70 characters: 31 to 40%; over 70 characters: 30% (Source: IBM). So it might be easier to remember a rule that if your English text is less than 20 characters then allow it to double in length (200 percent), and after that assume an increase by half the length of the text (50%). (Bear in mind that ADF can apply truncation rules on some components in English too.) (If your text is stored in a database, developers must make sure the table column widths can accommodate the expansion of your text when translated, based on byte size for the translated characters and not numbers of characters. Use Unicode. One character does not equal one byte in the multilingual enterprise apps world.) Rely on a graceful transformation of translated text. Let all pages resize dynamically so the text wraps and flows naturally. ADF pages support this already. Think websites. Don't hard-code alignments. Use Start and End properties on components and not Left or Right. Don't force alignments of components on the page by using texts of a certain length as spacers. Use proper label positioning and anchoring in ADF components or other technologies. Remember that an increase in text length means an increase in vertical space too when pages are resized. So don't hard-code vertical heights for any text areas. Don't be tempted to manually create text or printed reports this way either. They cannot be translated successfully, and are very difficult to maintain in English. Use XML, HTML, RTF and so on. Check out what Oracle BI Publisher offers. Don't force wrapping by using tricks such as /n or /t characters or HTML BR tags or forced page breaks. Once the text is translated the alignment will be destroyed. The position of the breaking character or tag would need to be moved anyway, or even removed. When creating tables, use table components. Don't use manually created tables that rely on word length to maintain column and row alignment. For example, don't use codeblock elements in HTML; use the proper table elements instead. Once translated, the alignment of manually formatted tabular data is destroyed. Finally, if there is a space restriction, then don't use made-up acronyms, abbreviations or some form of daft text speak to save space. Besides being incomprehensible in English, they may need full translations of the shortened words, even if they can be figured out. Use approved or industry standard acronyms according to the UX style rules, not as a space-saving device.
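    If you want a quick check while reviewing layouts, the IBM bands above are easy to encode; a small sketch using the upper bound of each band, with made-up example labels.

        def expansion_allowance(english_chars):
            """Return the suggested maximum growth factor for a translated string,
            using the upper bound of each band in the IBM table above."""
            bands = [(10, 2.00), (20, 1.00), (30, 0.80), (50, 0.60), (70, 0.40)]
            for limit, growth in bands:
                if english_chars <= limit:
                    return 1 + growth
            return 1.30          # over 70 characters: allow 30% growth

        for label in ("OK", "Save changes", "This transaction requires managerial approval"):
            budget = len(label) * expansion_allowance(len(label))
            print("%-46s -> reserve room for about %d characters" % (label, round(budget)))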
Restricted Real Estate on Mobile Devices On mobile devices real estate is limited. Using shortened text is fine as long as it is comprehensible. Users in the mobile space prefer brevity too, as they are on the go, performing three-minute tasks, with no time to read lengthy texts. Using fragments, lightening up on unnecessary articles, and getting straight to the point with imperative forms of verbs makes sense on both real estate and user experience grounds.

    Read the article

  • Throttling in OSB

    - by Knut Vatsendvik
    Technorati Tags: soa,integration,osb,throttling,overload protection A common problem with integration is the risk of overloading a particular web service. When the capacity of a web service is reached and it continues to accept connections, it will most likely start to deteriorate. Fortunately there are 2 techniques, with Oracle Service Bus, that you can apply for protecting this from happening. You can either limit the concurrent number of requests for a Business Service (outbound requests) or you can limit the number of threads processing the requests for a Proxy Service (inbound requests). Limiting the Concurrent Number of Requests Limiting the concurrent requests for a Business Service cannot be set at design time so you have to use the built-in Oracle Service Bus Administration Console to do it (/sbconsole). Follow these steps to enable it: In Change Center, click Create to start a new Session Select Project Explorer, and navigate to the Business Service you want to limit Select the Operational Settings tab of the View a Business Service page In this tab, under Throttling, select the Enable check box. By enabling throttling you Specify a value for Maximum Concurrency Specify a positive integer value for Throttling Queue to backlog messages that has exceeded the message concurrency limit Specify the maximum time in milliseconds for Message Expiration a message can spend in Throttling Queue Click Update Click Active in Change Center to active the new settings If you re-publish the service, it will not overwrite the settings. Only if the resource is renamed or moved, it will. Please note that a throttling queue is an in-memory queue. Messages that are placed in this queue are not recoverable when a server fails or when you restart a server. Limiting the Number of Threads A better approach, in my opinion, is to limit the number of threads that can work with request. Follow these steps to do it: Open the WebLogic Server Console (/console) In Change Center, click Create to start a new Session In the left pane expand Environment and select Work Managers In the Global Work Managers page, click New    Click the Work Manager radio button, then click Next Enter a Name for the new Work Manager, and click Next In the Available Targets list, select server instances or clusters on which you will deploy applications that reference the Work Manager Click Finish. The new Work Manager now appears in the Global Work Managers page. Select the new Work Manager Right next to the Maximum Threads Constraint drop-down box, click New   Click the Maximum Threads Constraint radio button, then click Next Enter a Name and a thread Count to be the maximum size to allocate for requests. Click Next  In the Available Targets list, select server instances or clusters on which you will deploy applications that reference the Work Manager Click Finish Click Save Click Active in Change Center to active your changes.  A restart may be necessary.   Puh! Almost there. Start a new session. Go to the Service Bus Console (/sbconsole) and find your consuming Proxy Service. Click the Edit button of the Transport Configuration tab. Click Next Set the Dispatch Policy to the new Work Manager Click Last Click Save Click Active in Change Center to active your changes. 

    Read the article

  • SQLite, python, unicode, and non-utf data

    - by Nathan Spears
    I started by trying to store strings in sqlite using python, and got the message: sqlite3.ProgrammingError: You must not use 8-bit bytestrings unless you use a text_factory that can interpret 8-bit bytestrings (like text_factory = str). It is highly recommended that you instead just switch your application to Unicode strings. Ok, I switched to Unicode strings. Then I started getting the message: sqlite3.OperationalError: Could not decode to UTF-8 column 'tag_artist' with text 'Sigur Rós' when trying to retrieve data from the db. More research and I started encoding it in utf8, but then 'Sigur Rós' starts looking like 'Sigur Rós' note: My console was set to display in 'latin_1' as @John Machin pointed out. What gives? After reading this, describing exactly the same situation I'm in, it seems as if the advice is to ignore the other advice and use 8-bit bytestrings after all. I didn't know much about unicode and utf before I started this process. I've learned quite a bit in the last couple hours, but I'm still ignorant of whether there is a way to correctly convert 'ó' from latin-1 to utf-8 and not mangle it. If there isn't, why would sqlite 'highly recommend' I switch my application to unicode strings? I'm going to update this question with a summary and some example code of everything I've learned in the last 24 hours so that someone in my shoes can have an easy(er) guide. If the information I post is wrong or misleading in any way please tell me and I'll update, or one of you senior guys can update. Summary of answers Let me first state the goal as I understand it. The goal in processing various encodings, if you are trying to convert between them, is to understand what your source encoding is, then convert it to unicode using that source encoding, then convert it to your desired encoding. Unicode is a base and encodings are mappings of subsets of that base. utf_8 has room for every character in unicode, but because they aren't in the same place as, for instance, latin_1, a string encoded in utf_8 and sent to a latin_1 console will not look the way you expect. In python the process of getting to unicode and into another encoding looks like: str.decode('source_encoding').encode('desired_encoding') or if the str is already in unicode str.encode('desired_encoding') For sqlite I didn't actually want to encode it again, I wanted to decode it and leave it in unicode format. Here are four things you might need to be aware of as you try to work with unicode and encodings in python. The encoding of the string you want to work with, and the encoding you want to get it to. The system encoding. The console encoding. The encoding of the source file Elaboration: (1) When you read a string from a source, it must have some encoding, like latin_1 or utf_8. In my case, I'm getting strings from filenames, so unfortunately, I could be getting any kind of encoding. Windows XP uses UCS-2 (a Unicode system) as its native string type, which seems like cheating to me. Fortunately for me, the characters in most filenames are not going to be made up of more than one source encoding type, and I think all of mine were either completely latin_1, completely utf_8, or just plain ascii (which is a subset of both of those). So I just read them and decoded them as if they were still in latin_1 or utf_8. It's possible, though, that you could have latin_1 and utf_8 and whatever other characters mixed together in a filename on Windows. 
Sometimes those characters can show up as boxes, other times they just look mangled, and other times they look correct (accented characters and whatnot). Moving on. (2) Python has a default system encoding that gets set when python starts and can't be changed during runtime. See here for details. Dirty summary ... well here's the file I added: \# sitecustomize.py \# this file can be anywhere in your Python path, \# but it usually goes in ${pythondir}/lib/site-packages/ import sys sys.setdefaultencoding('utf_8') This system encoding is the one that gets used when you use the unicode("str") function without any other encoding parameters. To say that another way, python tries to decode "str" to unicode based on the default system encoding. (3) If you're using IDLE or the command-line python, I think that your console will display according to the default system encoding. I am using pydev with eclipse for some reason, so I had to go into my project settings, edit the launch configuration properties of my test script, go to the Common tab, and change the console from latin-1 to utf-8 so that I could visually confirm what I was doing was working. (4) If you want to have some test strings, eg test_str = "ó" in your source code, then you will have to tell python what kind of encoding you are using in that file. (FYI: when I mistyped an encoding I had to ctrl-Z because my file became unreadable.) This is easily accomplished by putting a line like so at the top of your source code file: # -*- coding: utf_8 -*- If you don't have this information, python attempts to parse your code as ascii by default, and so: SyntaxError: Non-ASCII character '\xf3' in file _redacted_ on line 81, but no encoding declared; see http://www.python.org/peps/pep-0263.html for details Once your program is working correctly, or, if you aren't using python's console or any other console to look at output, then you will probably really only care about #1 on the list. System default and console encoding are not that important unless you need to look at output and/or you are using the builtin unicode() function (without any encoding parameters) instead of the string.decode() function. I wrote a demo function I will paste into the bottom of this gigantic mess that I hope correctly demonstrates the items in my list. Here is some of the output when I run the character 'ó' through the demo function, showing how various methods react to the character as input. My system encoding and console output are both set to utf_8 for this run: '?' = original char <type 'str'> repr(char)='\xf3' '?' = unicode(char) ERROR: 'utf8' codec can't decode byte 0xf3 in position 0: unexpected end of data 'ó' = char.decode('latin_1') <type 'unicode'> repr(char.decode('latin_1'))=u'\xf3' '?' = char.decode('utf_8') ERROR: 'utf8' codec can't decode byte 0xf3 in position 0: unexpected end of data Now I will change the system and console encoding to latin_1, and I get this output for the same input: 'ó' = original char <type 'str'> repr(char)='\xf3' 'ó' = unicode(char) <type 'unicode'> repr(unicode(char))=u'\xf3' 'ó' = char.decode('latin_1') <type 'unicode'> repr(char.decode('latin_1'))=u'\xf3' '?' = char.decode('utf_8') ERROR: 'utf8' codec can't decode byte 0xf3 in position 0: unexpected end of data Notice that the 'original' character displays correctly and the builtin unicode() function works now. Now I change my console output back to utf_8. '?' = original char <type 'str'> repr(char)='\xf3' '?' = unicode(char) <type 'unicode'> repr(unicode(char))=u'\xf3' '?' 
= char.decode('latin_1') <type 'unicode'> repr(char.decode('latin_1'))=u'\xf3' '?' = char.decode('utf_8') ERROR: 'utf8' codec can't decode byte 0xf3 in position 0: unexpected end of data Here everything still works the same as last time but the console can't display the output correctly. Etc. The function below also displays more information that this and hopefully would help someone figure out where the gap in their understanding is. I know all this information is in other places and more thoroughly dealt with there, but I hope that this would be a good kickoff point for someone trying to get coding with python and/or sqlite. Ideas are great but sometimes source code can save you a day or two of trying to figure out what functions do what. Disclaimers: I'm no encoding expert, I put this together to help my own understanding. I kept building on it when I should have probably started passing functions as arguments to avoid so much redundant code, so if I can I'll make it more concise. Also, utf_8 and latin_1 are by no means the only encoding schemes, they are just the two I was playing around with because I think they handle everything I need. Add your own encoding schemes to the demo function and test your own input. One more thing: there are apparently crazy application developers making life difficult in Windows. #!/usr/bin/env python # -*- coding: utf_8 -*- import os import sys def encodingDemo(str): validStrings = () try: print "str =",str,"{0} repr(str) = {1}".format(type(str), repr(str)) validStrings += ((str,""),) except UnicodeEncodeError as ude: print "Couldn't print the str itself because the console is set to an encoding that doesn't understand some character in the string. See error:\n\t", print ude try: x = unicode(str) print "unicode(str) = ",x validStrings+= ((x, " decoded into unicode by the default system encoding"),) except UnicodeDecodeError as ude: print "ERROR. unicode(str) couldn't decode the string because the system encoding is set to an encoding that doesn't understand some character in the string." print "\tThe system encoding is set to {0}. See error:\n\t".format(sys.getdefaultencoding()), print ude except UnicodeEncodeError as uee: print "ERROR. Couldn't print the unicode(str) because the console is set to an encoding that doesn't understand some character in the string. See error:\n\t", print uee try: x = str.decode('latin_1') print "str.decode('latin_1') =",x validStrings+= ((x, " decoded with latin_1 into unicode"),) try: print "str.decode('latin_1').encode('utf_8') =",str.decode('latin_1').encode('utf_8') validStrings+= ((x, " decoded with latin_1 into unicode and encoded into utf_8"),) except UnicodeDecodeError as ude: print "The string was decoded into unicode using the latin_1 encoding, but couldn't be encoded into utf_8. See error:\n\t", print ude except UnicodeDecodeError as ude: print "Something didn't work, probably because the string wasn't latin_1 encoded. See error:\n\t", print ude except UnicodeEncodeError as uee: print "ERROR. Couldn't print the str.decode('latin_1') because the console is set to an encoding that doesn't understand some character in the string. See error:\n\t", print uee try: x = str.decode('utf_8') print "str.decode('utf_8') =",x validStrings+= ((x, " decoded with utf_8 into unicode"),) try: print "str.decode('utf_8').encode('latin_1') =",str.decode('utf_8').encode('latin_1') except UnicodeDecodeError as ude: print "str.decode('utf_8').encode('latin_1') didn't work. 
The string was decoded into unicode using the utf_8 encoding, but couldn't be encoded into latin_1. See error:\n\t", validStrings+= ((x, " decoded with utf_8 into unicode and encoded into latin_1"),) print ude except UnicodeDecodeError as ude: print "str.decode('utf_8') didn't work, probably because the string wasn't utf_8 encoded. See error:\n\t", print ude except UnicodeEncodeError as uee: print "ERROR. Couldn't print the str.decode('utf_8') because the console is set to an encoding that doesn't understand some character in the string. See error:\n\t",uee print print "Printing information about each character in the original string." for char in str: try: print "\t'" + char + "' = original char {0} repr(char)={1}".format(type(char), repr(char)) except UnicodeDecodeError as ude: print "\t'?' = original char {0} repr(char)={1} ERROR PRINTING: {2}".format(type(char), repr(char), ude) except UnicodeEncodeError as uee: print "\t'?' = original char {0} repr(char)={1} ERROR PRINTING: {2}".format(type(char), repr(char), uee) print uee try: x = unicode(char) print "\t'" + x + "' = unicode(char) {1} repr(unicode(char))={2}".format(x, type(x), repr(x)) except UnicodeDecodeError as ude: print "\t'?' = unicode(char) ERROR: {0}".format(ude) except UnicodeEncodeError as uee: print "\t'?' = unicode(char) {0} repr(char)={1} ERROR PRINTING: {2}".format(type(x), repr(x), uee) try: x = char.decode('latin_1') print "\t'" + x + "' = char.decode('latin_1') {1} repr(char.decode('latin_1'))={2}".format(x, type(x), repr(x)) except UnicodeDecodeError as ude: print "\t'?' = char.decode('latin_1') ERROR: {0}".format(ude) except UnicodeEncodeError as uee: print "\t'?' = char.decode('latin_1') {0} repr(char)={1} ERROR PRINTING: {2}".format(type(x), repr(x), uee) try: x = char.decode('utf_8') print "\t'" + x + "' = char.decode('utf_8') {1} repr(char.decode('utf_8'))={2}".format(x, type(x), repr(x)) except UnicodeDecodeError as ude: print "\t'?' = char.decode('utf_8') ERROR: {0}".format(ude) except UnicodeEncodeError as uee: print "\t'?' = char.decode('utf_8') {0} repr(char)={1} ERROR PRINTING: {2}".format(type(x), repr(x), uee) print x = 'ó' encodingDemo(x) Much thanks for the answers below and especially to @John Machin for answering so thoroughly.

    Read the article

  • Facebook Application Tab On Page Not Working

    - by Pankaj Khurana
    Hi, I have a working Facebook FBML application tab on a page. It was working perfectly, but today when I checked it was generating an error. Errors while loading page from application Parse errors: FBML Error (line 18): illegal tag "body" under "fb:tab-position" FBML Error (line 26): illegal tag "noscript" under "fb:tab-position" FBML Error (line 44): illegal tag "noscript" under "fb:tab-position" Runtime errors: HTML error while rendering tag "link": There is a hard limit of 2 css link tags on profile tabs in order to remain under the IE 31 tag limit. HTML error while rendering tag "link": There is a hard limit of 2 css link tags on profile tabs in order to remain under the IE 31 tag limit. Cannot allow external script My settings are: Canvas Callback URL: http://mydomain/myfile/ Canvas URL: http://mydomain/myfeedback/ Tab Name: Feedback Tab URL: http://apps.facebook.com/myfeedback/ This is an FBML application without any body tags, and I am unable to find the reason for the error. Please help me with this. Thanks

    Read the article

  • LINQ: GroupBy with maximum count in each group

    - by polarbear2k
    Hi, I have a list of duplicate numbers: Enumerable.Range(1,3).Select(o => Enumerable.Repeat(o, 3)).SelectMany(o => o) // {1,1,1,2,2,2,3,3,3} I group them and get the number of occurrences: Enumerable.Range(1,3).Select(o => Enumerable.Repeat(o, 3)).SelectMany(o => o) .GroupBy(o => o).Select(o => new { Qty = o.Count(), Num = o.Key }) Qty Num 3 1 3 2 3 3 What I really need is to limit the quantity per group to some number. If the limit is 2, the result for the above grouping would be: Qty Num 2 1 1 1 2 2 1 2 2 3 1 3 So, if Qty = 10 and the limit is 4, the result is 3 rows (4, 4, 2). In practice the Qty of each number will not be equal as it is in the example. The specified Qty limit is the same for the whole list (it doesn't differ based on the number). Thanks
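    Language aside, the transformation being asked for amounts to splitting each group's count into chunks of at most limit. A small sketch of that shape in Python (the names here are illustrative, not from the question):

      from collections import Counter

      def capped_counts(items, limit):
          # Group the items, then emit each group's count in chunks no bigger than `limit`.
          rows = []
          for num, qty in sorted(Counter(items).items()):
              while qty > 0:
                  take = min(limit, qty)
                  rows.append((take, num))   # (Qty, Num), matching the tables above
                  qty -= take
          return rows

      # capped_counts([1, 1, 1, 2, 2, 2, 3, 3, 3], 2)
      # -> [(2, 1), (1, 1), (2, 2), (1, 2), (2, 3), (1, 3)]
      # capped_counts([7] * 10, 4) -> [(4, 7), (4, 7), (2, 7)]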

    Read the article

  • Set android datepicker date limits

    - by matt
    I am using DatePicker in Android to display images based on user-selected dates. I need to limit the selectable dates to a certain range, for instance Jan 1st 2010 to Dec 31st 2010. Simple as that, I thought, but nowhere can I find an answer on how to limit these dates. Does anyone know how to limit the dates for the Android DatePicker?

    Read the article

  • cakephp paginate using mysql SQL_CALC_FOUND_ROWS

    - by michael
    I'm trying to make Cakephp paginate take advantage of the SQL_CALC_FOUND_ROWS feature in mysql to return a count of total rows while using LIMIT. Hopefully, this can eliminate the double query of paginateCount(), then paginate(). http://dev.mysql.com/doc/refman/5.0/en/information-functions.html#function_found-rows I've put this in my app_model.php, and it basically works, but it could be done better. Can someone help me figure out how to override paginate/paginateCount so it executes only 1 SQL stmt? /** * Overridden paginateCount method, to using SQL_CALC_FOUND_ROWS */ public function paginateCount($conditions = null, $recursive = 0, $extra = array()) { $options = array_merge(compact('conditions', 'recursive'), $extra); $options['fields'] = "SQL_CALC_FOUND_ROWS `{$this->alias}`.*"; Q: how do you get the SAME limit value used in paginate()? $options['limit'] = 1; // ideally, should return same rows as in paginate() Q: can we somehow get the query results to paginate directly, without the extra query? $cache_results_from_paginateCount = $this->find('all', $options); /* * note: we must run the query to get the count, but it will be cached for multiple paginates, so add comment to query */ $found_rows = $this->query("/* do not cache for {$this->alias} */ SELECT FOUND_ROWS();"); $count = array_shift($found_rows[0][0]); return $count; } /** * Overridden paginate method */ public function paginate($conditions, $fields, $order, $limit, $page = 1, $recursive = null, $extra = array()) { $options = array_merge(compact('conditions', 'fields', 'order', 'limit', 'page', 'recursive'), $extra); Q: can we somehow get $cache_results_for_paginate directly from paginateCount()? return $cache_results_from_paginateCount; // paginate in only 1 SQL stmt return $this->find('all', $options); // ideally, we could skip this call entirely }
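    For reference, this is how the MySQL feature in question behaves when driven directly; a sketch using the MySQLdb driver with placeholder connection details (it is not the CakePHP override itself):

      import MySQLdb

      conn = MySQLdb.connect(host='localhost', user='user', passwd='secret', db='db')
      cur = conn.cursor()

      offset, per_page = 0, 20
      # One scan returns the page of rows and, via FOUND_ROWS(), the un-LIMITed total.
      cur.execute("SELECT SQL_CALC_FOUND_ROWS * FROM table1 LIMIT %s, %s", (offset, per_page))
      rows = cur.fetchall()
      cur.execute("SELECT FOUND_ROWS()")   # must run on the same connection, right after
      total = cur.fetchone()[0]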

    Read the article

  • Rails on server syntax error?

    - by Danny McClelland
    Hi Everyone, I am trying to get my rails application running on my web server, but when I run the rake db:migrate I get the following error: root@oak [/home/macandco/rails_apps/survey_manager]# rake db:migrate (in /home/macandco/rails_apps/survey_manager) == Baseapp: migrating ======================================================== -- create_table(:settings, {:force=>true}) -> 0.0072s -- create_table(:users) -> 0.0072s -- add_index(:users, :login, {:unique=>true}) -> 0.0097s -- create_table(:profiles) -> 0.0084s -- create_table(:open_id_authentication_associations, {:force=>true}) -> 0.0067s -- create_table(:open_id_authentication_nonces, {:force=>true}) -> 0.0064s -- create_table(:roles) -> 0.0052s -- create_table(:roles_users, {:id=>false}) -> 0.0060s rake aborted! An error has occurred, all later migrations canceled: 555 5.5.2 Syntax error. g9sm2526951gvc.8 Has anyone come across this before? Thanks, Danny Main Migration file class Baseapp < ActiveRecord::Migration def self.up # Create Settings Table create_table :settings, :force => true do |t| t.string :label t.string :identifier t.text :description t.string :field_type, :default => 'string' t.text :value t.timestamps end # Create Users Table create_table :users do |t| t.string :login, :limit => 40 t.string :identity_url t.string :name, :limit => 100, :default => '', :null => true t.string :email, :limit => 100 t.string :mobile t.string :signaturenotes t.string :crypted_password, :limit => 40 t.string :salt, :limit => 40 t.string :remember_token, :limit => 40 t.string :activation_code, :limit => 40 t.string :state, :null => :false, :default => 'passive' t.datetime :remember_token_expires_at t.string :password_reset_code, :default => nil t.datetime :activated_at t.datetime :deleted_at t.timestamps end add_index :users, :login, :unique => true # Create Profile Table create_table :profiles do |t| t.references :user t.string :real_name t.string :location t.string :website t.string :mobile t.timestamps end # Create OpenID Tables create_table :open_id_authentication_associations, :force => true do |t| t.integer :issued, :lifetime t.string :handle, :assoc_type t.binary :server_url, :secret end create_table :open_id_authentication_nonces, :force => true do |t| t.integer :timestamp, :null => false t.string :server_url, :null => true t.string :salt, :null => false end create_table :roles do |t| t.column :name, :string end # generate the join table create_table :roles_users, :id => false do |t| t.column :role_id, :integer t.column :user_id, :integer end # Create admin role and user admin_role = Role.create(:name => 'admin') user = User.create do |u| u.login = 'admin' u.password = u.password_confirmation = 'advices' u.email = '[email protected]' end user.register! user.activate! user.roles << admin_role end def self.down # Drop all BaseApp drop_table :settings drop_table :users drop_table :profiles drop_table :open_id_authentication_associations drop_table :open_id_authentication_nonces drop_table :roles drop_table :roles_users end end

    Read the article

  • Kill process after some time

    - by yael
    I want to limit the running time of a grep command. For example, for: grep -qsRw -m1 "parameter" /var I want to arrange, before running the grep command, that the grep process stays alive for no longer than 30 seconds. How can I do this? And, if possible, how can I remove the time limit again? Yael
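    One way to get this effect, sketched from Python with subprocess (this assumes Python 3.5+ for subprocess.run; GNU coreutils' timeout command wraps a command in the same way from the shell):

      import subprocess

      cmd = ['grep', '-qsRw', '-m1', 'parameter', '/var']
      try:
          # grep is killed automatically if it is still running after 30 seconds
          found = subprocess.run(cmd, timeout=30).returncode == 0
      except subprocess.TimeoutExpired:
          found = False          # ran out of time before finding a match
      # Dropping the timeout= argument restores the unlimited behaviour.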

    Read the article

  • Perl strings internals

    - by n0rd
    How are Perl strings represented internally? What encoding is used? How do I handle different encodings properly? I've been using Perl for quite a long time, but my work didn't include a lot of string handling in different encodings, and when I encountered a minor problem that had something to do with encodings I usually resorted to some shamanic actions. Until this moment I thought of Perl strings as sequences of bytes, which fit pretty well for my tasks. Now I need to do some processing of a UTF-8 encoded file and the trouble starts here. First, I read the file into a string like this: open(my $in, '<', $ARGV[0]) or die "cannot open file $ARGV[0] for reading"; binmode($in, ':utf8'); my $contents; { local $/; $contents = <$in>; } close($in); then simply print it: print $contents; And I get two things: a warning Wide character in print at <scriptname> line <n> and garbage in the console. So I can conclude that Perl strings have a concept of a "character" that can be "wide" or not, but when printed these "wide" characters are represented in the console as multiple bytes, not as a single "character". (I now wonder why all my previous experience with binary files worked the way I expected it to, without any "character" issues.) Why, then, do I see garbage in the console? If Perl stores strings as characters in some known encoding, I don't think it is a big problem to find out the console encoding and print the text properly. (I use Windows, BTW.) If Perl stores strings as multibyte sequences (e.g. using the same UTF-8 encoding), why is it done this way? From my C experience, handling multibyte strings is PAIN.
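    The same two-step discipline, sketched in Python terms purely as an analogy (not a description of Perl's internals; the file name and 'cp866' below are placeholder values): decode bytes into characters when reading, and encode characters back into bytes for whatever the console actually expects.

      import codecs

      with codecs.open('input.txt', 'r', encoding='utf-8') as f:
          text = f.read()                       # characters, with no fixed byte layout
      console_bytes = text.encode('cp866', 'replace')   # substitute the console's real encoding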

    Read the article

  • shortest digest of a string

    - by meta
    [Description] Given a string of char type, find a shortest digest, which is defined as: a shortest sub-string which contains all the characters in the original string. [Example] A = "aaabedacd" B = "bedac" is the answer. [My solution] Define an integer table with 256 elements, used to record the number of occurrences of each kind of character in the current sub-string. Scan the whole string, counting the total number of distinct characters in the given string using the above table. Use two pointers, start and end, which initially point to the start and (start + 1) of the given string. The current count of distinct characters is 1. Expand sub-string [start, end) at the end until it contains all kinds of characters. Update the shortest digest if possible. Contract sub-string [start, end] at the start by one character each time, and try to restore its digest property if necessary by step 4. The time cost is O(n), and the extra space cost is constant. Any better solution without extra space?
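    A sketch of the two-pointer approach described above, written in Python for readability (same O(n) time; the table of counts stays bounded by the character set, so the extra space is still constant):

      def shortest_digest(s):
          # Sliding window: expand `end` until every distinct character is covered,
          # then contract `start` as far as possible before recording a candidate.
          need = len(set(s))          # total number of distinct characters in s
          counts = {}                 # occurrences of each character inside the window
          have, start, best = 0, 0, s
          for end, ch in enumerate(s):
              counts[ch] = counts.get(ch, 0) + 1
              if counts[ch] == 1:
                  have += 1
              while have == need:
                  if end - start + 1 < len(best):
                      best = s[start:end + 1]
                  left = s[start]
                  counts[left] -= 1
                  if counts[left] == 0:
                      have -= 1
                  start += 1
          return best

      # shortest_digest("aaabedacd") -> "bedac"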

    Read the article

  • why this sql not working?

    - by user295189
    I have a query public static function TestQuery( $start=0, $limit=0){ $sql = " SELECT count(*) AS total FROM db.table1 JOIN db.table2 ON table1.fieldID = {$fieldID} AND table2.assigned = 'N'"; $qry = new SQLQuery; $qry->query($sql); if($row = $qry->fetchRow()){ $total = intval($row->total); } return $total; } which works fine, but when I add the limit as below, it doesn't work and gives me errors public static function TestQuery( $start=0, $limit=0){ $sql = " SELECT count(*) AS total FROM db.table1 JOIN db.table2 ON table1.fieldID = {$fieldID} AND table2.assigned = 'N'"; //this fails if($recordlimit > 0) $sql .= "LIMIT {$startRecord}, {$recordLimit} "; // $qry = new SQLQuery; $qry->query($sql); if($row = $qry->fetchRow()){ $total = intval($row->total); } return $total; } Any help will be appreciated
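    One detail worth checking in the failing version, shown with a hypothetical reconstruction of the string building (the table names and values below are made up): if the base string really ends right after 'N' with no trailing whitespace, the appended fragment runs straight into it as 'N'LIMIT; the function also mixes $start/$startRecord and $recordlimit/$recordLimit, which PHP treats as different variables.

      # Hypothetical reconstruction of the concatenation, to show the symptom when
      # the base string carries no trailing whitespace before "LIMIT ..." is appended.
      base = ("SELECT count(*) AS total FROM db.table1 JOIN db.table2 "
              "ON table1.fieldID = 5 AND table2.assigned = 'N'")
      start_record, record_limit = 0, 10
      sql = base + "LIMIT {0}, {1} ".format(start_record, record_limit)
      # -> "... table2.assigned = 'N'LIMIT 0, 10 "  ('N'LIMIT is invalid SQL, so a
      #     space or newline is needed before the LIMIT keyword)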

    Read the article

  • Delphi dbExpress and Interbase: Unicode migration steps and risks?

    - by mjustin
    Currently, our database uses Win1252 as the only character encoding. We will have to support Unicode in the database tables soon, which means we have to perform this migration for four databases and around 80 Delphi applications which run in-house in a 24/7 environment. Are there recommendations for database migrations to UTF-8 (or UNICODE_FSS) for Delphi applications? Some questions listed below. Many thanks in advance for your answers! are there tools which help with the migration of the existing databases (sizes between 250 MB and 2 GB, no Blob fields), by dumping the data, recreating the database with UNICODE_FSS or UTF-8, and loading the data back? are there known problems with Delphi 2009, dbExpress and Interbase 7.5 related to Unicode character sets? would you recommend to upgrade the databases to Interbase 2009 first? (This upgrade is planned but does not have a high priority) can we simply migrate the database and Delphi will handle the Unicode character sets automatically, or will we have to change all character field types in every Datamodule (dfm and source code) too? which strategy would you recommend to work on the migration in parallel with the normal development and maintenance of the existing application? The application runs in-house so development and database administration is done internally. Update: one problem I found now is that there are two different persistent field types for Unicode and non Unicode character fields. For the existing database, dbExpress creates TStringField objects. For the Unicode database fields, dbExpress creates (or expects!) TWideStringField objects. So we can not just change the database and the connection code page to Unicode. We also have to modify all datamodules to use the new field type. The modified datamodule however will not be backwards compatible.

    Read the article

< Previous Page | 88 89 90 91 92 93 94 95 96 97 98 99  | Next Page >