Search Results

Search found 4647 results on 186 pages for 'localizable strings'.

Page 47/186 | < Previous Page | 43 44 45 46 47 48 49 50 51 52 53 54  | Next Page >

  • Struts2: trim all strings obtained from forms

    - by aelkin
    Hi all, I am developing a web application with Struts2 and want to improve how strings are read from forms: every string should be trimmed, and if the trimmed string is empty the field should be set to null. To do this I created a StringConverter: public class StringConverter extends StrutsTypeConverter { @Override public Object convertFromString(Map context, String[] strings, Class toClass) { if (strings == null || strings.length == 0) { return null; } String result = strings[0]; if (result == null) { return null; } result = result.trim(); if (result.isEmpty()) { return null; } return result; } @Override public String convertToString(Map context, Object object) { if (object != null && object instanceof String) { return object.toString(); } return null; } } Next, I added a line to xwork-conversion.properties: java.lang.String=com.mypackage.StringConverter That's all, but I did not get the desired result: convertToString() is called when the JSP builds the form, but convertFromString() is never invoked. What am I doing wrong, and how can I get the same behaviour another way? Please don't suggest solutions such as removing the values of the form elements with JavaScript, or a reflection-based utility method called for each form bean. (One alternative approach is sketched after this entry.) Thanks in advance, Alexey.

    Read the article
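
    A possible alternative, sketched below: rather than the Struts2 type-converter route the question tries, a plain servlet filter can trim every parameter (and turn blanks into null) before the framework ever sees them. This is only a sketch with invented class names; depending on how the framework reads parameters you may also need to override getParameterMap().

      import java.io.IOException;
      import java.util.Arrays;
      import javax.servlet.Filter;
      import javax.servlet.FilterChain;
      import javax.servlet.FilterConfig;
      import javax.servlet.ServletException;
      import javax.servlet.ServletRequest;
      import javax.servlet.ServletResponse;
      import javax.servlet.http.HttpServletRequest;
      import javax.servlet.http.HttpServletRequestWrapper;

      // Wraps every request so that all parameters come back trimmed,
      // and strings that are empty after trimming come back as null.
      public class TrimParamsFilter implements Filter {

          private static class TrimmedRequest extends HttpServletRequestWrapper {
              TrimmedRequest(HttpServletRequest request) { super(request); }

              private static String trimToNull(String s) {
                  if (s == null) return null;
                  String t = s.trim();
                  return t.isEmpty() ? null : t;
              }

              @Override
              public String getParameter(String name) {
                  return trimToNull(super.getParameter(name));
              }

              @Override
              public String[] getParameterValues(String name) {
                  String[] values = super.getParameterValues(name);
                  if (values == null) return null;
                  return Arrays.stream(values).map(TrimmedRequest::trimToNull).toArray(String[]::new);
              }
          }

          @Override
          public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                  throws IOException, ServletException {
              chain.doFilter(new TrimmedRequest((HttpServletRequest) req), res);
          }

          @Override public void init(FilterConfig cfg) { }
          @Override public void destroy() { }
      }

    Registering such a filter in web.xml (or with @WebFilter) applies the rule to every form in one place.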

  • Delphi Unicode String Type Stored Directly at its Address (or "Unicode ShortString")

    - by Andreas Rejbrand
    I want a string type that is Unicode and that stores the string directly at the address of the variable, as is the case with the (ANSI-only) ShortString type. I mean, if I declare S: ShortString and let S := 'My String', then at @S I will find the length of the string (as one byte, so the string cannot contain more than 255 characters) followed by the ANSI-encoded string itself. What I would like is a Unicode variant of this. That is, I want a string type such that, at @S, I will find an unsigned 32-bit integer (a single byte would actually be enough) containing the length of the string in bytes (or in characters, which is half the number of bytes) followed by the Unicode representation of the string. I have tried WideString, UnicodeString, and RawByteString, but they all appear to store only an address at @S, with the actual string somewhere else (I guess this has to do with reference counting and such). Update: The most important reason for this is probably that it would be very problematic if sizeof(string) were variable. I suspect that there is no built-in type to use, and that I have to come up with my own way of storing text the way I want (which actually is fun). Am I right? Update: I will, among other things, need to use these strings in packed records. I also need to manually read/write these strings to files/the heap. I could live with fixed-size strings, such as <= 128 characters, and I could redesign the problem so it will work with null-terminated strings. But PChar will not work, for sizeof(PChar) = 1 - it's merely an address. The approach I eventually settled on was to use a static array of bytes. I will post my implementation as a solution later today.

    Read the article

  • Asp.net renders string with wrong encoding, but PHP doesn't (MySQL)

    - by citronas
    I took over an old PHP application with MySQL as the database. The database includes tables whose content contains localized strings (and therefore special characters). Currently a PHP application accesses that database; my job is to create an ASP.net (C# codebehind) application that accesses those strings as well. That works, except for the encoding: strings such as 'Ändern' and 'Prüfzeichen' come out garbled, but only in the ASP.net application. The PHP app sets utf-8 as the charset and the strings are rendered perfectly; in the ASP.net application the output is gibberish, regardless of the page encoding. In the MySQL database, the charset for the table 'translations' in question is set to 'latin1 -- cp1252 West European' and the collation to 'latin1_swedish_ci'. I can't seem to figure out what PHP apparently does that ASP.net does not. I traced the PHP code and could not find any sign of special encoding while getting a string from the database. The question is, how can I ensure correct encoding inside the ASP.net application without modifying the database, given that big changes to the PHP code are not possible? Does anybody have a clue? (A small charset-mismatch illustration follows this entry.)

    Read the article
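
    For illustration only (the question is about ASP.NET, but the failure mode is language-neutral): the sketch below shows, in plain Java, what happens when bytes stored as latin1/cp1252 are decoded as UTF-8, which is the kind of mismatch described above. The usual fix is to tell the database connection which charset the column data really uses (for example a CharSet/characterEncoding option on the connection string), but check your connector's documentation rather than taking those names from here.

      import java.nio.charset.Charset;
      import java.nio.charset.StandardCharsets;

      // Demonstrates the mismatch described above: bytes written as
      // cp1252/latin1 but decoded by the client as UTF-8.
      public class EncodingMismatchDemo {
          public static void main(String[] args) {
              String original = "Ändern";                                  // what PHP shows
              byte[] storedBytes = original.getBytes(Charset.forName("windows-1252"));

              // Decoding latin1/cp1252 bytes as UTF-8 garbles the umlaut.
              String wrong = new String(storedBytes, StandardCharsets.UTF_8);
              // Decoding with the charset the table actually uses round-trips cleanly.
              String right = new String(storedBytes, Charset.forName("windows-1252"));

              System.out.println(wrong);   // mojibake / replacement character
              System.out.println(right);   // Ändern
          }
      }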

  • Binding a ListBox's SelectedItem in the presence of BindingNavigator

    - by Reinderien
    Hello. I'm trying to bind a ListBox's SelectedItem data to a property. The following code is an example: using System; using System.Collections.Generic; using System.Windows.Forms; namespace BindingFailure { static class Program { class OuterObject { public string selected { get; set; } public List<string> strings { get; set; } } public static void Main() { List<OuterObject> objs = new List<OuterObject>() { new OuterObject(), new OuterObject() }; objs[0].strings = new List<string> { "one", "two", "three" }; objs[1].strings = new List<string> { "four", "five", "six" }; Form form = new Form(); BindingSource obs = new BindingSource(objs, null), ibs = new BindingSource(obs, "strings"); BindingNavigator nav = new BindingNavigator(obs); ListBox lbox = new ListBox(); lbox.DataSource = ibs; lbox.DataBindings.Add(new Binding("SelectedItem", obs, "selected")); form.Controls.Add(nav); form.Controls.Add(lbox); lbox.Location = new System.Drawing.Point(30, 30); Application.Run(form); } } } If you just select an item, move forward, select an item and then exit, it works as expected. But if you switch back and forth between the two outer objects with the navigator, the selected item seems to be overwritten with an incorrect value. Ideas on how to fix this? Thanks in advance.

    Read the article

  • What are all the optimization tricks that you know for ASP.NET code?

    - by Aristos
    After some time spent programming in ASP.NET, I discovered the very big speed difference between string and StringBuilder. I know this is very common and well known, but I mention it as a starting point. The second thing I have found to speed up code is to use const, not static, to declare my configuration constants (especially the strings). With const, the compiler does not create a new object but just places the value at the point where you ask for it, whereas with a static declaration a new object is created and kept in memory. My third trick: when I search for a string, I use hash values rather than the string itself. For example, if I need a List<string> SomeValues holding strings that I need to search, I prefer to use a List<int> SomeHashValue and use the hash values to locate the strings. My fourth thought, which I was wondering about, is whether it is better to place big strings on one line or to split them across lines with the + symbol so they are easier to read. I ran some tests and saw that the compiler does a good job when a string is split across many lines using the + symbol. What other tricks/tips do you know and use in your programming to make it run faster, and maybe use less memory? I know that sometimes making something run faster needs more memory, more cache. My priority is speed, because speed counts. (Two of these ideas are sketched in Java after this entry.)

    Read the article
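
    Two of the ideas above, shown as a hedged sketch in Java (the question is about C#, but the reasoning is the same): build strings in a loop with a builder, and do membership tests against a hash-based collection. One caveat on the third trick: keeping only the hash codes (a List<int> of hashes) risks collisions, so a hash-based set or map that still holds the original strings is usually the safer variant.

      import java.util.ArrayList;
      import java.util.HashSet;
      import java.util.List;
      import java.util.Set;

      // Loop concatenation with a builder, and hash-based membership tests
      // instead of scanning a list of strings.
      public class StringTips {
          public static void main(String[] args) {
              // Concatenation in a loop: StringBuilder avoids one new String per iteration.
              StringBuilder sb = new StringBuilder();
              for (int i = 0; i < 10_000; i++) {
                  sb.append("row ").append(i).append('\n');
              }
              String report = sb.toString();

              // Lookups: a HashSet gives O(1) average contains(), a List gives O(n).
              List<String> someValues = new ArrayList<>(List.of("alpha", "beta", "gamma"));
              Set<String> index = new HashSet<>(someValues);
              boolean slow = someValues.contains("gamma"); // linear scan
              boolean fast = index.contains("gamma");      // hash lookup

              System.out.println(report.length() + " " + slow + " " + fast);
          }
      }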

  • How to send parameters with same encoding from javascript?

    - by nimcap
    I have a JavaScript file that lots of people have embedded in their pages. Since I am hosting the file, I have control over it; I cannot control the way it is embedded, because lots of people are already using it. This JavaScript file sends GET requests to my servlets, and the parameters passed with the request are recorded in a DB. For example, the JavaScript sends a request to http://myserver.com/servlet?p1=123&p2=aString and the servlet then records 123 and aString in the DB. Before sending strings I encode them with encodeURIComponent(). But what I figured out is that clients send the same string with different encodings depending on their browser or the site they are visiting. As a result, the same strings are represented with different characters by the time they reach the servlet (so they are different strings). What I am trying to do is convert the strings to one single encoding on the JavaScript side, so that when they reach the servlet the same words are represented by the same characters. How is this possible? (A server-side decoding sketch follows this entry.) P.S. A way to convert the encoding from Java would also be acceptable.

    Read the article
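
    A sketch of the server-side half, assuming the servlet is where consistency can be enforced: encodeURIComponent() always percent-encodes as UTF-8, so decoding the raw query string as UTF-8 (instead of relying on the container default, often ISO-8859-1) yields the same characters for the same words. Containers typically also offer a connector-level setting for this (for example Tomcat's URIEncoding attribute); the helper below is only an illustration.

      import java.io.UnsupportedEncodingException;
      import java.net.URLDecoder;
      import java.util.LinkedHashMap;
      import java.util.Map;

      // Decodes name=value pairs from a raw query string explicitly as UTF-8,
      // so the servlet sees the exact characters the JavaScript encoded.
      public class QueryStringDecoder {

          public static Map<String, String> parse(String rawQuery) throws UnsupportedEncodingException {
              Map<String, String> params = new LinkedHashMap<>();
              if (rawQuery == null || rawQuery.isEmpty()) return params;
              for (String pair : rawQuery.split("&")) {
                  int eq = pair.indexOf('=');
                  String name = eq < 0 ? pair : pair.substring(0, eq);
                  String value = eq < 0 ? "" : pair.substring(eq + 1);
                  params.put(URLDecoder.decode(name, "UTF-8"), URLDecoder.decode(value, "UTF-8"));
              }
              return params;
          }

          public static void main(String[] args) throws Exception {
              // In a servlet this raw string would come from request.getQueryString().
              System.out.println(parse("p1=123&p2=a%20%C3%BCmlaut"));
          }
      }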

  • Getting a "for each" loop in Java to run in a different order every time

    - by R Stokes
    Hi. Basically my problem is that I'm trying to write a method that finds a random path in a graph of strings; it takes as its parameters a start string, an integer length, and a Vector of strings that will store the path. I'm attempting to do this by first adding the starting string to our blank vector, then recursing through its neighbours until the vector's length (not including the start node) is the same as the integer length specified in the parameters. I've provided my code so far here: public Vector<String> findRandomPathFrom(String n, int len, Vector<String> randomPath){ randomPath.add(n); if (randomPath.size() == len + 1) return randomPath; for (String m : this.neighbours(n)){ if (!randomPath.contains(m) && findRandomPathFrom(m, len, randomPath) != null) return randomPath; } path.setSize(path.size() - 1); return null; } It seems to be working fine, returning a path with exactly the number of strings specified after the given start string. However, for any given starting string it generates the EXACT same path every time, which rather defeats the purpose of a random path generator. I'm guessing this problem is related to my "for each" loop, which loops through all the neighbouring strings of the current string. It seems to just be going with the first string in the neighbours vector every single time. Can anyone help me fix this so that it chooses a random neighbour instead of going in order? tl;dr - Any way of getting a "for each" loop in Java to process a collection in a random order as opposed to start-to-finish? (A small shuffle-based sketch follows this entry.) Thanks in advance

    Read the article
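
    A minimal fix along the lines the question asks for: copy the neighbours into a list, shuffle the copy, and run the for-each loop over that. Assuming this.neighbours(n) returns a List<String>, the same three lines drop straight into findRandomPathFrom.

      import java.util.ArrayList;
      import java.util.Collections;
      import java.util.List;

      // A for-each loop always walks a collection in its iteration order, so to
      // visit neighbours randomly, shuffle a copy first and iterate over that.
      public class ShuffledIteration {
          public static void main(String[] args) {
              List<String> neighbours = List.of("one", "two", "three", "four");

              List<String> shuffled = new ArrayList<>(neighbours); // don't disturb the original
              Collections.shuffle(shuffled);                       // random order on every run

              for (String m : shuffled) {
                  System.out.println(m);
              }
          }
      }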

  • How to speed up a simple method? (possibly without changing interfaces or data structures)

    - by baol
    Hello. I have some data structures: all_unordered_m is a big vector containing all the strings I need (all different); ordered_m is a small vector containing the indexes of a subset of the strings (all different) in the former vector; position_m maps the indexes of objects from the first vector to their position in the second one. The string_after(index, reverse) method returns the string referenced by ordered_m after all_unordered_m[index]. ordered_m is considered circular, and is explored in natural or reverse order depending on the second parameter. The code is something like the following: struct ordered_subset { // [...] std::vector<std::string>& all_unordered_m; // size = n >> 1 std::vector<size_t> ordered_m; // size << n std::map<size_t, size_t> position_m; // positions of strings in ordered_m const std::string& string_after(size_t index, bool reverse) const { size_t pos = position_m.find(index)->second; if(reverse) pos = (pos == 0 ? ordered_m.size() - 1 : pos - 1); else pos = (pos == ordered_m.size() - 1 ? 0 : pos + 1); return all_unordered_m[ordered_m[pos]]; } }; Given that: I do need all of the data structures for other purposes; I cannot change them, because I need to access the strings by their id in all_unordered_m and by their index inside the various ordered_m; I need to know the position of a string (identified by its position in the first vector) inside the ordered_m vector; and I cannot change the string_after interface without changing most of the program. How can I speed up the string_after method, which is called billions of times and is eating up about 10% of the execution time? (A precomputation idea is sketched after this entry.)

    Read the article
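
    The question's code is C++, but the optimisation it hints at is language-neutral: build next/previous lookup tables once so string_after becomes two array reads instead of a map lookup per call. Sketched here in Java for consistency with the other examples on this page; the names only loosely mirror the question.

      // Precomputed successor/predecessor tables indexed by string id.
      public class OrderedSubset {
          private final String[] allUnordered;   // all strings, indexed by id
          private final int[] nextId;            // nextId[id] = id of the next string in the cycle, or -1
          private final int[] prevId;            // prevId[id] = id of the previous string in the cycle, or -1

          public OrderedSubset(String[] allUnordered, int[] ordered) {
              this.allUnordered = allUnordered;
              this.nextId = new int[allUnordered.length];
              this.prevId = new int[allUnordered.length];
              java.util.Arrays.fill(nextId, -1);
              java.util.Arrays.fill(prevId, -1);
              for (int pos = 0; pos < ordered.length; pos++) {
                  nextId[ordered[pos]] = ordered[(pos + 1) % ordered.length];
                  prevId[ordered[pos]] = ordered[(pos - 1 + ordered.length) % ordered.length];
              }
          }

          public String stringAfter(int index, boolean reverse) {
              int id = reverse ? prevId[index] : nextId[index];
              return allUnordered[id];    // O(1), no map lookup per call
          }
      }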

  • Optimizing a "set in a string list" to a "set as a matrix" operation

    - by Eric Fournier
    I have a set of strings which contain space-separated elements. I want to build a matrix which will tell me which elements were part of which strings. For example, given the strings "", "A B C", "D" and "B D", I should get something like a matrix with columns A, B, C, D: row 1 (the empty string) has no entries, row 2 has 1s under A, B and C, row 3 has a 1 under D, and row 4 has 1s under B and D. Now I've got a solution, but it runs slow as molasses, and I've run out of ideas on how to make it faster: reverseIn <- function(vector, value) { return(value %in% vector) } buildCategoryMatrix <- function(valueVector) { allClasses <- c() for(classVec in unique(valueVector)) { allClasses <- unique(c(allClasses, strsplit(classVec, " ", fixed=TRUE)[[1]])) } resMatrix <- matrix(ncol=0, nrow=length(valueVector)) splitValues <- strsplit(valueVector, " ", fixed=TRUE) for(cat in allClasses) { if(cat=="") { catIsPart <- (valueVector == "") } else { catIsPart <- sapply(splitValues, reverseIn, cat) } resMatrix <- cbind(resMatrix, catIsPart) } colnames(resMatrix) <- allClasses return(resMatrix) } Profiling the function gives me this: $by.self self.time self.pct total.time total.pct "match" 31.20 34.74 31.24 34.79 "FUN" 30.26 33.70 74.30 82.74 "lapply" 13.56 15.10 87.86 97.84 "%in%" 12.92 14.39 44.10 49.11 So my actual questions are: where is the 33% spent in "FUN" coming from, and is there any way to speed up the %in% call? I tried turning the strings into factors prior to going into the loop so that I'd be matching numbers instead of strings, but that actually makes R crash. I've also tried partial matrix assignment (i.e. resMatrix[i,x] <- 1), where i is the number of the string and x is the vector of factors. No dice there either, as it seems to keep on running infinitely. (A two-pass, linear-time sketch follows this entry.)

    Read the article
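
    For comparison, a two-pass version of the same matrix construction, written in Java (not R) purely to show the linear-time shape: assign each distinct token a column, then fill the rows, so the cost is proportional to the total number of tokens rather than strings times categories. The same two-pass idea can be expressed in R with strsplit plus direct matrix indexing.

      import java.util.LinkedHashMap;
      import java.util.Map;

      // Builds the 0/1 membership matrix in two linear passes over the tokens.
      public class MembershipMatrix {
          public static void main(String[] args) {
              String[] values = { "", "A B C", "D", "B D" };

              // First pass: assign a column to every distinct token.
              Map<String, Integer> column = new LinkedHashMap<>();
              for (String v : values) {
                  for (String token : v.split(" ")) {
                      if (!token.isEmpty()) {
                          column.putIfAbsent(token, column.size());
                      }
                  }
              }

              // Second pass: fill the matrix row by row.
              int[][] matrix = new int[values.length][column.size()];
              for (int row = 0; row < values.length; row++) {
                  for (String token : values[row].split(" ")) {
                      if (!token.isEmpty()) {
                          matrix[row][column.get(token)] = 1;
                      }
                  }
              }

              System.out.println(column.keySet());            // [A, B, C, D]
              for (int[] row : matrix) {
                  System.out.println(java.util.Arrays.toString(row));
              }
          }
      }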

  • Optimizing near-duplicate value search

    - by GApple
    I'm trying to find near-duplicate values in a set of fields in order to allow an administrator to clean them up. There are two criteria that I am matching on: one string is wholly contained within the other and is at least 1/4 of its length, or the strings have an edit distance of less than 5% of the total length of the two strings. The pseudo-PHP code: foreach($values as $value){ foreach($values as $match){ if( ( $value['length'] < $match['length'] && $value['length'] * 4 > $match['length'] && stripos($match['value'], $value['value']) !== false ) || ( $match['length'] < $value['length'] && $match['length'] * 4 > $value['length'] && stripos($value['value'], $match['value']) !== false ) || ( abs($value['length'] - $match['length']) * 20 < ($value['length'] + $match['length']) && 0 < ($match['changes'] = levenshtein($value['value'], $match['value'])) && $match['changes'] * 20 <= ($value['length'] + $match['length']) ) ){ $matches[] = &$match; } } } I've tried to reduce calls to the comparatively expensive stripos and levenshtein functions where possible, which has reduced the execution time quite a bit. However, as an O(n^2) operation this just doesn't scale to larger sets of values, and it seems that a significant amount of the processing time is spent simply iterating through the arrays. Some properties of a few sets of values being operated on:
      Total   | Strings      | # of matches per string |          |
      Strings | With Matches | Average | Median | Max  | Time (s) |
      --------+--------------+---------+--------+------+----------+
          844 |          413 |     1.8 |      1 |   58 |      140 |
          593 |          156 |     1.2 |      1 |    5 |       62 |
          272 |          168 |     3.2 |      2 |   26 |       10 |
          157 |           47 |     1.5 |      1 |    4 |      3.2 |
          106 |           48 |     1.8 |      1 |    8 |      1.3 |
           62 |           47 |     2.9 |      2 |   16 |      0.4 |
    Are there any other things I can do to reduce the time needed to check the criteria, and, more importantly, are there any ways for me to reduce the number of criteria checks required (for example, by pre-processing the input values), since there is such low selectivity? (A length-bucketing sketch follows this entry.)

    Read the article
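
    One pre-processing idea, sketched in Java rather than PHP: sort the values by length first, then the inner loop can stop as soon as a candidate is at least four times longer than the current string, because at that point neither criterion in the code above can succeed. The containment and edit-distance checks themselves are left out; this only shows the candidate-pair pruning.

      import java.util.ArrayList;
      import java.util.Comparator;
      import java.util.List;

      // Sorting by length lets the inner loop break early: once a candidate is
      // at least 4x the current string's length, no later (longer) candidate
      // can satisfy either criterion, so most of the O(n^2) pairs are skipped.
      public class NearDuplicateCandidates {

          static List<String[]> candidatePairs(List<String> values) {
              List<String> sorted = new ArrayList<>(values);
              sorted.sort(Comparator.comparingInt(String::length));

              List<String[]> pairs = new ArrayList<>();
              for (int i = 0; i < sorted.size(); i++) {
                  String shorter = sorted.get(i);
                  for (int j = i + 1; j < sorted.size(); j++) {
                      String longer = sorted.get(j);
                      if (longer.length() >= shorter.length() * 4) {
                          break;               // every later string is at least as long
                      }
                      pairs.add(new String[] { shorter, longer }); // run the real checks on these
                  }
              }
              return pairs;
          }

          public static void main(String[] args) {
              List<String> values = List.of("alpha", "alphabet", "alpha beta", "x");
              System.out.println(candidatePairs(values).size());
          }
      }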

  • Translating Your Customizations

    - by Richard Bingham
    This blog post explains the basics of translating the customizations you can make to Fusion Applications products, with the inclusion of information for both composer-based customizations and the generic design-time customizations done via JDeveloper. Introduction Like most Oracle Applications, Fusion Applications installs on-premise with a US-English base language that is, in Release 7, supported by the option to add up to a total of 22 additional language packs (in Oracle Cloud production environments, languages are pre-installed already). As such many organizations offer their users the option of working with their local language, and logically that should apply to any customizations as well. Composer-based UI Customizations Customizations made in Page Composer take into consideration the session LOCALE, as set in the user preferences screen, during all customization work, and store the customization in the MDS repository accordingly. As such the actual new or changed values will only apply for the same language under which the customization was made, and text for any other languages requires a separate upload. See the Resource Bundles section below, which incidentally also applies to custom UI changes done in JDeveloper. You may have noticed this when you select the “Select Text Resource” menu option while editing the text on a page. Using this ensures that the resource bundles are used, whereas if you define a static value in Expression Builder it will never be available for translation. Notice in the screenshot below that the “What’s New” custom value I have already defined using the ‘Select Text Resource’ feature internally uses the adfBundle groovy function to pull the custom value for my key (RT_S_1) from the ComposerOverrideBundle. Figure 1 – Page Composer showing the override bundle being used. Business Objects Customizing the Business Objects available in the Applications Composer tool for the CRM products, such as adding additional fields, also operates using the session language. Translating the additional values for these fields into other installed languages requires loading additional resource bundles, again as described below. Reports and Analytics Most customizations to Reports and BI Analytics are essentially just reorganizations and visualizations of existing number and text data from the system, and as such will use the appropriate values based on the user's session language. Where a translated value or string exists for that session language, it will be used without the need for additional work. Extending through the addition of brand new reports and analytics requires another method of loading the translated strings, as part of what is known as 'Localizing' the BI Catalog and Metadata. This time it is via an export/import of XML data through the BI Administrators console, and is described in the OBIEE Admin Guide. Fusion Applications reports based on BI Publisher are already defined with a template per locale, and in addition provide an extra process for getting the data for translation and reloading. This again uses the standard resource bundle format. Loading a custom report is illustrated in this video from our YouTube channel, which shows the screens for both setting the template locale and running an export for translation.
    Fusion Applications Menus Whilst the seeded Navigator and Global Menu values are fully translated when the additional language is installed, if they are customized then the change or new menu item will apply universally, not currently per language. This is set to change in a future release with the new UI Text Editor feature described below. More on Resource Bundles As mentioned above, to provide translations for most of your customizations you need to add values to a resource bundle. This is an XML file in an industry open standard (OASIS) format with the extension .xliff, and it stores translated values for the strings used by ADF at run-time. The general process is that these values are exported from the MDS repository, manually edited, and then imported back in again. This needs to be done by an administrator, either via WLST commands or through Enterprise Manager, as per the screenshot below. This is detailed in the Fusion Applications Extensibility Guide. For SaaS environments the Cloud Operations team can assist. Figure 2 – Enterprise Manager’s MDS export used to get resource bundles for manual translation; they are re-imported on the same screen. All customized strings are stored in an override bundle (xliff file) for each locale, suffixed with the language initials, with English ones being saved to the default. As such each language bundle can be easily identified and updated. Similarly, if you used JDeveloper to create your own applications as extensions to Fusion Applications, you would use the native support for resource bundles and add them into the faces-config.xml file for inclusion in your application. An example is this ADF customization video from our YouTube channel. JDeveloper also supports automatic synchronization between your underlying resource bundles and any translatable strings you add – very handy. For more information see the chapters on “Using Automatic Resource Bundle Integration in JDeveloper” and “Manually Defining Resource Bundles and Locales” in the Oracle Fusion Middleware Web User Interface Developer’s Guide for Oracle Application Development Framework. FND Messages and Look-ups FND Messages, as defined here, are not used for UI labels (those are known as 'strings'), but are the responses back to users as a result of an action, such as a page submit. Each 'message' is defined and stored in the related database table (FND_MESSAGES_B), with another (FND_MESSAGES_TL) holding any language-specific values. These come seeded with the additional language installs; however, if you customize the messages via the "Manage Messages" task in Functional Setup Manager, or add new ones, then currently (in Release 7) you'll need to repeat it for each language. Figure 3 – An FND Message defined in an English user session. Similarly, Look-ups are stored in a translation table (FND_LOOKUP_VALUES_TL) where appropriate, and can be customized by setting the user's session language and making the change in the Setup and Maintenance task entitled "Manage [Standard|Common] Look-ups". Online Help Yes, in fact all the seeded help is applied with each language pack install as part of the post-install provisioning process. If you are editing or adding custom online help then the Create Help screen provides a drop-down for which language your help customization will apply to. This is shown in the video below from our YouTube channel, and obviously you'll need to do it for each language in use. What is Coming for Translations?
    Currently planned for Release 8 is something called the User Interface (UI) Text Editor. This tool will allow the editing of all the text shown on the pages and forms of Fusion Applications. It will provide a search based on a particular term or word, say “Worker”, and will allow it to be adjusted, say to “Employee”, which then updates all the Resource Bundles that contain it. In the case of multi-language environments, it will use the user's session language (locale) to know which Resource Bundles to apply the change to. This capability will also support customization sandboxes, to help ensure changes can be tested and approved. It is also interesting to note that the design currently allows any page-specific customizations done using Page Composer or Application Composer to overwrite the global changes done via the UI Text Editor, allowing special context-sensitive values to still be used. Further Reading and Resources The following short list provides the main resources for digging into more detail on translation support for both Composer and JDeveloper customization projects. There is a dedicated chapter entitled “Translating Custom Text” in the Fusion Applications Extensibility Guide. This has good examples and steps for many tasks, especially administering resource bundles. Using localization formatting (numbers, dates, etc.) for design-time changes is well documented in the Fusion Applications Developer Guide. For more guidelines on general design-time globalization, see either the ‘Internationalizing and Localizing Pages’ chapter in the Oracle Fusion Middleware Web User Interface Developer’s Guide for Oracle Application Development Framework (Oracle Fusion Applications Edition) or the general Oracle Database Globalization Support Guide. The Oracle Architecture ‘A-Team’ provided a recent post on customizing the user session timeout popup, using design-time changes to resource bundles. It has detailed step-by-step examples which can be a useful illustration. (A plain-JDK sketch of the bundle-per-locale lookup idea follows this entry.)

    Read the article
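
    A plain-JDK illustration of the bundle-per-locale idea the article describes (ADF/Fusion stores its overrides as .xliff bundles managed through MDS rather than .properties files, and the bundle and key names below are invented for the example): the same key resolves against whichever bundle matches the session locale, falling back to the base bundle when no translation exists.

      import java.util.Locale;
      import java.util.ResourceBundle;

      // Expects ui_overrides.properties and ui_overrides_de.properties on the
      // classpath, each containing a line like: RT_S_1=...
      public class BundleLookupDemo {
          public static void main(String[] args) {
              ResourceBundle english = ResourceBundle.getBundle("ui_overrides", Locale.ENGLISH);
              ResourceBundle german  = ResourceBundle.getBundle("ui_overrides", Locale.GERMAN);

              System.out.println(english.getString("RT_S_1")); // e.g. "What's New"
              System.out.println(german.getString("RT_S_1"));  // e.g. a German translation
          }
      }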

  • $PATH is not updated

    - by matr0sk1n
    It seems this has all been discussed already, but I can't resolve my problem. I have all the necessary strings in /etc/paths /usr/bin /bin /usr/sbin /sbin /usr/local/bin and in ~/.bash_profile export PATH=$PATH:/usr/local/mysql/bin export PATH=$PATH:$HOME/.rvm/bin export PATH="$(brew --prefix php54)/bin:$PATH" export PATH="$(brew --prefix)/bin:$PATH" But every time I execute echo $PATH in the terminal, I get only /usr/local/bin. If I put the .bash_profile strings into .profile or .bashrc, it has no effect.

    Read the article

  • joining text files with 600M+ lines

    - by dnkb
    I have two files, huge.txt and small.txt. huge.txt has around 600M rows and is 14 GB; each line has four space-separated words (tokens) and finally another space-separated column with a number. small.txt has 150K rows and a size of ~3M, with a space-separated word and a number. Both files are sorted using the sort command with no extra options. The words in both files may include apostrophes (') and dashes (-). The desired output would contain all columns from huge.txt and the second column (the number) from small.txt wherever the first word of huge.txt and the first word of small.txt match. My attempts below failed miserably with the following error: cat huge.txt|join -o 1.1 1.2 1.3 1.4 2.2 - small.txt > output.txt join: memory exhausted What I suspect is that the sorting order isn't right somehow, even though the files are pre-sorted using: sort -k1 huge.unsorted.txt > huge.txt sort -k1 small.unsorted.txt > small.txt Problems seem to appear around words that have apostrophes (') or dashes (-). I also tried dictionary sorting using the -d option, bumping into the same error at the end. I see two ways out of this but don't know how to implement either of them. 1) Any tips on how to sort the files in a way that the join command considers them to be properly sorted? 2) I was thinking of calculating MD5 or some other hashes of the strings to get rid of the apostrophes and dashes, doing the sorting and joining with the hashes instead of the strings themselves, and then "translating" the hashes back to strings, leaving the numbers at the end of the lines intact. Since there would be only 150K hashes it's not that bad. What would be a good way to calculate individual hashes for each of the strings? Some AWK magic? (A hashing sketch follows this entry.) See file samples at the end. Thank you! sample of huge.txt had stirred me to 46 had stirred my corruption 57 had stirred old emotions 55 had stirred something in 69 had stirred something within 40 sample of small.txt caley 114881 calf 2757974 calfed 137861 calfee 71143 calflora 154624 calfskin 148347 calgary 9416465 calgon's 94846

    Read the article
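
    The hashing idea from point 2, sketched in Java rather than awk: hash only the first column and keep the rest of the line intact, so sorting and joining happen on plain hex digests that no locale-aware collation can reorder around apostrophes or dashes. (The usual shell-only alternative is to run both sort and join under the same collation, e.g. with LC_ALL=C, but that is a separate route.)

      import java.math.BigInteger;
      import java.nio.charset.StandardCharsets;
      import java.security.MessageDigest;
      import java.security.NoSuchAlgorithmException;

      // Replaces the join key (the first word of a line) with its MD5 hex digest.
      public class KeyHasher {

          static String md5Hex(String key) throws NoSuchAlgorithmException {
              byte[] digest = MessageDigest.getInstance("MD5")
                      .digest(key.getBytes(StandardCharsets.UTF_8));
              return String.format("%032x", new BigInteger(1, digest));
          }

          public static void main(String[] args) throws Exception {
              String line = "had stirred me to 46";
              int firstSpace = line.indexOf(' ');
              String key = firstSpace < 0 ? line : line.substring(0, firstSpace);
              String rest = firstSpace < 0 ? "" : line.substring(firstSpace);
              System.out.println(md5Hex(key) + rest);   // digest plus the untouched remainder
          }
      }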

  • mod_rewrite and % character

    - by pekrimen
    I need to rewrite a URL that contains one or more strings of characters including a % character (for instance %123) into another string of characters including a % character (for instance %234). I am able to do this using the special THE_REQUEST attribute with something like this: RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /(.*)\%123(.*)\ HTTP RewriteRule .* /%1\%234%2 [R,NE] However, this does not work for URLs that contain more than one "%123" string... The N option has no effect, as it seems that the value of THE_REQUEST is not changed. Any ideas?

    Read the article

  • Declaring data types in SQLite

    - by dan04
    I'm familiar with how type affinity works in SQLite: you can declare column types as anything you want, and all that matters is whether the type name contains "INT", "CHAR", "FLOA", etc. But is there a commonly used convention on what type names to use? For example, if you have an integer column, is it better to distinguish between TINYINT, SMALLINT, MEDIUMINT, and BIGINT, or just declare everything as INTEGER? So far, I've been using the following: INTEGER REAL CHAR(n) -- for strings with a known fixed width VARCHAR(n) -- for strings with a known maximum width TEXT -- for all other strings BLOB BOOLEAN DATE -- string in "YYYY-MM-DD" format TIME -- string in "HH:MM:SS" format TIMESTAMP -- string in "YYYY-MM-DD HH:MM:SS" format (Note that the last three are contrary to the type affinity.)

    Read the article

  • Algorithm for measuring distance between disordered sequences

    - by Kinopiko
    The Levenshtein distance gives us a way to calculate the distance between two similar strings in terms of disordered individual characters: quick brown fox quikc brown fax The Levenshtein distance = 3. What is a similar algorithm for the distance between two strings with similar subsequences? For example, between quickbrownfox brownquickfox the Levenshtein distance is 10, but this takes no account of the fact that the strings share two similar subsequences, which makes them more "similar" than completely disordered words like quickbrownfox qburiocwknfox - and yet this completely disordered version has a Levenshtein distance of only eight from the first string. What distance measures exist which take the length of subsequences into account, without assuming that the subsequences can be easily broken into distinct words? (A q-gram-based sketch follows this entry.)

    Read the article
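
    One family of measures that rewards shared chunks without needing word boundaries is q-gram (here trigram) similarity: moved blocks keep most of their trigrams, while random shuffling destroys them. The sketch below uses a plain Jaccard ratio over trigram sets; it is an illustration of the idea, not a drop-in replacement for Levenshtein.

      import java.util.HashSet;
      import java.util.Set;

      // Compares two strings by the overlap of their character trigrams.
      public class TrigramSimilarity {

          static Set<String> trigrams(String s) {
              Set<String> grams = new HashSet<>();
              for (int i = 0; i + 3 <= s.length(); i++) {
                  grams.add(s.substring(i, i + 3));
              }
              return grams;
          }

          // Jaccard similarity of the trigram sets: 1.0 = identical sets, 0.0 = disjoint.
          static double similarity(String a, String b) {
              Set<String> ga = trigrams(a);
              Set<String> gb = trigrams(b);
              Set<String> union = new HashSet<>(ga);
              union.addAll(gb);
              Set<String> intersection = new HashSet<>(ga);
              intersection.retainAll(gb);
              return union.isEmpty() ? 1.0 : (double) intersection.size() / union.size();
          }

          public static void main(String[] args) {
              System.out.println(similarity("quickbrownfox", "brownquickfox")); // fairly high
              System.out.println(similarity("quickbrownfox", "qburiocwknfox")); // much lower
          }
      }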

  • LabVIEW: converting numeric array to string array

    - by JaysonFix
    Using LabVIEW 2009, I have a VI that outputs an array of U64 integers. I'd like the user to be able to perform discrete selection from among the elements of this array. I'm thinking of accomplishing this by programmatically populating a Menu Ring (as shown at http://digital.ni.com/public.nsf/allkb/FB0409491FAB16FA86256D08004FCE7E). However, I apparently need to convert my array of U64 ints to an array of strings, as it is an array of strings that is used to populate the Menu Ring. My question: how can I convert the array of U64 ints to an array of strings?

    Read the article

  • How do I get an unexpanded REG_EXPAND_SZ string from a remote registry?

    - by dalehhirt
    I am currently using RegistryKey.GetValue(string name, object defaultValue, RegistryValueOptions options) with RegistryValueOptions.DoNotExpandEnvironmentNames for the options value. However, this is only valid when run on the local machine. Digging down via Reflector, I find that it expands the strings locally, which means that irrespective of the setting, the strings come down from the remote machine already expanded. Has anyone come across a solution to this that does not require running a process directly on the remote machine to get a REG_EXPAND_SZ value? Update: I attempted to use WMI's StdRegProv provider to gain access, but it still expands the strings before sending them back.

    Read the article

  • Internationalize HelloWorld program .NET

    - by RockStarInTraining
    I have a small test app which has 2 resource files (Resources.resx & Resources.de-DE.resx) with the exact same string names, but one has the strings translated to German. For my form I set the Localize property to true. In my application I am getting the strings like this: this.Text = Properties.Resources.frmCaption; In my release folder I get a de-DE folder with a DLL named International_test.resources.dll. I try to distribute this to a machine which is set to German, and all of the strings pulled are still English. I tried keeping International_test.resources.dll in the de-DE folder or just putting it in my app's directory. What am I doing wrong, or what do I need to do to get the German resource file to be used?

    Read the article

  • Python + PostgreSQL + strange ascii = UTF8 encoding error

    - by Claudiu
    I have ASCII strings which contain the character "\x80" to represent the euro symbol: >>> print "\x80" € When inserting string data containing this character into my database, I get: psycopg2.DataError: invalid byte sequence for encoding "UTF8": 0x80 HINT: This error can also happen if the byte sequence does not match the encoding expected by the server, which is controlled by "client_encoding". I'm a Unicode newbie. How can I convert my strings containing "\x80" to valid UTF-8 containing that same euro symbol? I've tried calling .encode and .decode on various strings, but run into errors: >>> "\x80".encode("utf-8") Traceback (most recent call last): File "<pyshell#14>", line 1, in <module> "\x80".encode("utf-8") UnicodeDecodeError: 'ascii' codec can't decode byte 0x80 in position 0: ordinal not in range(128) (A decode-then-re-encode sketch follows this entry.)

    Read the article
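
    The question is Python, but the underlying fix is the same in any language: byte 0x80 is the euro sign in Windows-1252 (not in ASCII or UTF-8), so decode the bytes as cp1252 first and then re-encode as UTF-8. In Python 2 that would be "\x80".decode('cp1252').encode('utf-8'); the sketch below shows the same round trip in Java for consistency with the other examples here.

      import java.nio.charset.Charset;
      import java.nio.charset.StandardCharsets;

      // Decode the stray 0x80 byte as Windows-1252, then re-encode as UTF-8.
      public class EuroRecodeDemo {
          public static void main(String[] args) {
              byte[] raw = { (byte) 0x80 };                                     // the "\x80" byte
              String text = new String(raw, Charset.forName("windows-1252"));  // "€"
              byte[] utf8 = text.getBytes(StandardCharsets.UTF_8);             // 0xE2 0x82 0xAC

              System.out.println(text);
              System.out.println(utf8.length + " bytes");                      // 3
          }
      }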

  • Searching Database by Arbitrary Date in PHP

    - by jverdi
    Suppose you have a messaging system built in PHP with a MySQL database backend, and you would like to support searching for messages using arbitrary date strings. The database includes a messages table with a 'date_created' field represented as a datetime. The arbitrary date strings accepted from the user should mirror those accepted by strtotime. For the following examples, assume the searches are performed on March 21, 2010: "January 26, 2009" would return all messages between 2009-01-26 00:00:00 and 2009-01-27 00:00:00; "March 8" would return all messages between 2010-03-08 00:00:00 and 2010-03-09 00:00:00; "Last week" would return all messages between 2010-03-14 00:00:00 and 2010-03-21 18:25:00; "2008" would return all messages between 2008-01-01 00:00:00 and 2008-12-31 00:00:00. I began working with date_parse, but the number of variables grew quickly. I wonder if I am re-inventing the wheel. Does anyone have a suggestion that would work either as a general solution or one that would capture most of the possible input strings? (A date-range sketch follows this entry.)

    Read the article
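
    A sketch of the range-per-granularity idea the question is building, written with Java's java.time API for consistency with the other examples on this page (the PHP equivalent would map the parsed input to the same half-open ranges): whatever granularity the user typed becomes a [start, end) pair, and the query then becomes date_created >= start AND date_created < end.

      import java.time.LocalDate;
      import java.time.LocalDateTime;
      import java.time.Year;
      import java.time.YearMonth;

      // Maps a day, month, or year to a half-open [start, end) datetime range.
      public class DateRangeDemo {

          static LocalDateTime[] dayRange(LocalDate day) {
              return new LocalDateTime[] { day.atStartOfDay(), day.plusDays(1).atStartOfDay() };
          }

          static LocalDateTime[] monthRange(YearMonth month) {
              return new LocalDateTime[] { month.atDay(1).atStartOfDay(),
                                           month.plusMonths(1).atDay(1).atStartOfDay() };
          }

          static LocalDateTime[] yearRange(Year year) {
              return new LocalDateTime[] { year.atDay(1).atStartOfDay(),
                                           year.plusYears(1).atDay(1).atStartOfDay() };
          }

          public static void main(String[] args) {
              LocalDateTime[] r = dayRange(LocalDate.of(2009, 1, 26));
              System.out.println(r[0] + " .. " + r[1]);  // 2009-01-26T00:00 .. 2009-01-27T00:00
          }
      }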

  • Tips for optimizing C#/.NET programs

    - by Bob
    It seems like optimization is a lost art these days. Wasn't there a time when all programmers squeezed every ounce of efficiency from their code? Often doing so while walking 5 miles in the snow? In the spirit of bringing back a lost art, what are some tips that you know of for simple (or perhaps complex) changes to optimize C#/.NET code? Since it's such a broad thing that depends on what one is trying to accomplish, it'd help to provide context with your tip. For instance: When concatenating many strings together use StringBuilder instead. If you're only concatenating a handful of strings it's OK to use the + operator. Use string.Compare to compare 2 strings instead of doing something like string1.ToLower() == string2.ToLower()

    Read the article

  • How can I set the minDate/maxDate for jQueryUI Datepicker using a string?

    - by leo
    The jQueryUI Datepicker documentation states that the minDate option can be set using "a string in the current dateFormat". So I've tried the following to initialize datepickers: $("input.date").datepicker({ minDate: "01/01/2010", maxDate: "12/31/2010" }); However, this results in my datepicker having a selectable date range that goes from 11/06/2015 to 12/17/2015. I've checked the current dateFormat and it's mm/dd/yy, which is supposed to mean 2 digits for the month, 2 for the day, and 4 for the year, separated by slashes. I've also tried including dateFormat: "mm/dd/yy" in the initialization statement. I've also checked the values for minDate and maxDate afterwards and they ARE being set to the values I want: 01/01/2010 and 12/31/2010. I want to be able to set min/maxDate with strings because I'm being passed these values as strings from somewhere else. Maybe someone knows why this happens and how to solve it, or a workaround to achieve this, perhaps by changing the format of the date strings or something? Thanks EDIT: Using jQuery v1.3.2 and jQuery UI v1.7.2

    Read the article

  • TStringList, Dynamic Array or Linked List in Delphi?

    - by lkessler
    I have a choice. I have an array of ordered strings that I need to store and access. It looks like I can choose between using: a TStringList, a dynamic array of strings, or a linked list of strings. In what circumstances is each of these better than the others? Which is best for small lists (under 10 items)? Which is best for large lists (over 1000 items)? Which is best for huge lists (over 1,000,000 items)? Which is best to minimize memory use? Which is best to minimize loading and/or access time? For reference, I am using Delphi 2009.

    Read the article
