Search Results

Search found 4647 results on 186 pages for 'localizable strings'.


  • PHP mysqli Insert not working, but not giving any errors.

    - by asdasdas
    As the title says, I'm trying to do a simple insert, but nothing is actually inserted into the table. I try to print out errors, but nothing is reported. My users table has many more fields than these four, but they should all default.

        $query = 'INSERT INTO users (username, password, level, name) VALUES (?, ?, ?, ?)';
        if ($stmt = $db->prepare($query)) {
            $stmt->bind_param('ssis', $username, $password, $newlevel, $realname);
            $stmt->execute();
            $stmt->close();
            echo 'Any Errors: ' . $db->error . PHP_EOL;
        }

    There are no errors given, but when I go to look at the table in phpMyAdmin there is no new row added. I know for sure that the types are correct (strings and integers). Is there something really wrong here, or does it have something to do with the fact that I'm ignoring other columns? I have about 8 columns in the user table.
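
    A minimal sketch of one way to surface a silent failure here: mysqli can be told to report errors as they happen, and the statement object carries its own error string, separate from the connection's. The table and bind variables below are the question's; the reporting flags and affected_rows are standard mysqli.

        <?php
        // Make mysqli raise errors instead of failing quietly.
        mysqli_report(MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT);

        $query = 'INSERT INTO users (username, password, level, name) VALUES (?, ?, ?, ?)';
        if ($stmt = $db->prepare($query)) {
            $stmt->bind_param('ssis', $username, $password, $newlevel, $realname);
            if (!$stmt->execute()) {
                // Check the statement's error, and check it before close().
                echo 'Execute failed: ' . $stmt->error . PHP_EOL;
            }
            echo 'Rows inserted: ' . $stmt->affected_rows . PHP_EOL;
            $stmt->close();
        }

    If this reports one row inserted but phpMyAdmin still shows nothing, an open, uncommitted transaction is the other thing to rule out ($db->autocommit(true), or an explicit $db->commit()).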


  • Access SSAS cube from across domains without direct database connection

    - by SuperKing
    Hello, I'm working with SQL Server Analysis Services for the first time and have the dilemma of working on a project in which users must be able to access SSAS cubes (via a custom web dashboard) that live across different servers and domains, but without having access to the other server's SSAS database connection strings. So Organization A and Organization B will each have their own cubes on their own servers, but Organization A users must be able to view Organization B's cubes and vice versa, and neither organization should have access to the other's connection string.

    I've read about allowing HTTP access to the SSAS server and cube from the link below, but that requires setting up users for authentication or allowing anonymous access to one organization's server for users of another organization, and I'm not sure this would be acceptable for this situation, or whether this is the preferred way to do it. Is performance acceptable here?

    http://technet.microsoft.com/en-us/library/cc917711.aspx

    I also wonder if perhaps it makes sense to run a nightly/weekly process that accesses the other organization's SSAS database via a web service or something, pulls that data into a database on the organization's own server, and then rebuilds the cube. That cube could then be queried without connecting to the other organization's server at view time.

    Has anyone else attempted to accomplish something similar? Is HTTP access the standard way to go for this? Or are there other possible options? Thanks, and please let me know if you need more info; I'm still unclear on how some of this works.
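
    For reference, the HTTP route in that article works by exposing the msmdpump.dll ISAPI extension through IIS, after which a client connects with a URL in place of a server name. A sketch of such a connection string (the host name, virtual directory, and catalog below are placeholders):

        Provider=MSOLAP;Data Source=https://orgb.example.com/olap/msmdpump.dll;Initial Catalog=OrgBCube;

    Since only the URL is handed out, the other organization's internal connection string stays private, which is the constraint described above.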


  • Parsing a string, Grammar file.

    - by defn
    How would I separate the string below into its parts? What I need is to separate each <Word>, including the angle brackets, from the rest of the string. So in the case below I would end up with five strings:

    1. "I have to break up with you because "
    2. "<reason>"
    3. " . But let's still "
    4. "<disclaimer>"
    5. " ."

    The input:

        I have to break up with you because <reason> . But let's still <disclaimer> .

    Below is what I currently have (it's ugly...):

        boolean complete = false;
        int begin = 0;
        int end = 0;
        while (complete == false) {
            if (s.charAt(end) == '<') {
                stack.add(new Terminal(s.substring(begin, end)));
                begin = end;
            } else if (s.charAt(end) == '>') {
                stack.add(new NonTerminal(s.substring(begin, end)));
                begin = end;
                end++;
            } else if (end == s.length()) {
                if (isTerminal(getSubstring(s, begin, end))) {
                    stack.add(new Terminal(s.substring(begin, end)));
                } else {
                    stack.add(new NonTerminal(s.substring(begin, end)));
                }
                complete = true;
            }
            end++;
        }
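
    One possible approach, offered as a sketch rather than a fix to the code above: let java.util.regex find the bracketed tokens, and treat the stretches between matches as terminals. Terminal and NonTerminal are the question's own classes; everything below is standard library, printing instead of stacking to stay self-contained.

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class Tokenizer {
            public static void main(String[] args) {
                String s = "I have to break up with you because <reason> . But let's still <disclaimer> .";
                Pattern nonTerminal = Pattern.compile("<[^>]*>");
                Matcher m = nonTerminal.matcher(s);
                int last = 0;
                while (m.find()) {
                    if (m.start() > last)   // text before this bracketed token is a terminal
                        System.out.println("Terminal:    \"" + s.substring(last, m.start()) + "\"");
                    System.out.println("NonTerminal: \"" + m.group() + "\"");   // brackets included
                    last = m.end();
                }
                if (last < s.length())      // trailing text after the final token
                    System.out.println("Terminal:    \"" + s.substring(last) + "\"");
            }
        }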


  • Parsing Lisp S-Expressions with known schema in C#

    - by Drew Noakes
    I'm working with a service that provides data as a Lisp-like S-expression string. This data is arriving thick and fast, and I want to churn through it as quickly as possible, ideally directly on the byte stream (it's only single-byte characters) without any backtracking. These strings can be quite lengthy and I don't want the GC churn of allocating a string for the whole message.

    My current implementation uses CoCo/R with a grammar, but it has a few problems. Due to the backtracking, it assigns the whole stream to a string. It's also a bit fiddly for users of my code to change if they have to; I'd rather have a pure C# solution. CoCo/R also does not allow for the reuse of parser/scanner objects, so I have to recreate them for each message.

    Conceptually the data stream can be thought of as a sequence of S-expressions:

        (item 1 apple)(item 2 banana)(item 3 chainsaw)

    Parsing this sequence would create three objects. The type of each object can be determined by the first value in the list, in the above case "item". The schema/grammar of the incoming stream is well known. Before I start coding I'd like to know if there are libraries out there that do this already. I'm sure I'm not the first person to have this problem.
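
    No comment on existing libraries, but for a known schema a hand-rolled tokenizer over the raw stream is small. A sketch (the type and member names are invented for illustration) that reads one byte at a time, holds at most the current atom in memory, and never backtracks beyond a single pushed-back byte:

        using System.IO;
        using System.Text;

        // Forward-only S-expression tokenizer over a byte stream.
        public class SExpressionReader
        {
            private readonly Stream _stream;
            private int _pushback = -2;   // -2 = empty; otherwise one byte read ahead

            public SExpressionReader(Stream stream) { _stream = stream; }

            private int NextByte()
            {
                if (_pushback != -2) { int b = _pushback; _pushback = -2; return b; }
                return _stream.ReadByte();
            }

            // Returns "(", ")" or an atom; null at end of stream.
            public string ReadToken()
            {
                int b;
                do { b = NextByte(); } while (b != -1 && char.IsWhiteSpace((char)b));
                if (b == -1) return null;
                if (b == '(' || b == ')') return ((char)b).ToString();

                var atom = new StringBuilder();
                while (b != -1 && b != '(' && b != ')' && !char.IsWhiteSpace((char)b))
                {
                    atom.Append((char)b);   // single-byte characters, per the question
                    b = NextByte();
                }
                if (b == '(' || b == ')') _pushback = b;   // don't swallow the delimiter
                return atom.ToString();
            }
        }

    A consumer would read tokens until "(", take the next atom ("item") to pick the object type, and collect atoms until ")". All parser state lives in the instance, so the reader can be reused across messages.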


  • How can I filter a JTable?

    - by Jonas
    I would like to filter a JTable, but I don't understand how I can do it. I have read How to Use Tables - Sorting and Filtering and I have tried the code below, but with that filter no rows at all are shown in my table, and I don't understand what column it is filtered on.

        private void myFilter() {
            RowFilter<MyModel, Object> rf = null;
            try {
                rf = RowFilter.regexFilter(filterFld.getText(), 0);
            } catch (java.util.regex.PatternSyntaxException e) {
                return;
            }
            sorter.setRowFilter(rf);
        }

    MyModel has three columns; the first two are strings and the last column is of type Integer. How can I apply the filter above so that it considers the text in filterFld.getText() and only keeps the rows where the text is matched on the second column? I would like to show all rows that start with the text specified by filterFld.getText(). I.e., if the text is APP then the JTable should contain the rows where the second column starts with APP (APPLE, APPLICATION) but not the rows where the second column is CAR or ORANGE.

    I have also tried this filter:

        RowFilter<MyModel, Integer> itemFilter = new RowFilter<MyModel, Integer>() {
            public boolean include(Entry<? extends MyModel, ? extends Integer> entry) {
                MyModel model = entry.getModel();
                MyItem item = model.getRecord(entry.getIdentifier());
                if (item.getSecondColumn().startsWith("APP")) {
                    return true;
                } else {
                    return false;
                }
            }
        };

    How can I write a filter that filters the JTable on the second column, specified by my text field?
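
    A sketch of the regex variant, assuming "second column" means model index 1: the trailing arguments of RowFilter.regexFilter are the column indices to match against (the first snippet's 0 restricts it to the first column), and a ^ anchor gives starts-with behavior.

        private void myFilter() {
            RowFilter<MyModel, Object> rf;
            try {
                // Quote the typed text so regex metacharacters are literal, anchor it,
                // and match only against column index 1 (the second column).
                rf = RowFilter.regexFilter(
                        "^" + java.util.regex.Pattern.quote(filterFld.getText()), 1);
            } catch (java.util.regex.PatternSyntaxException e) {
                return;
            }
            sorter.setRowFilter(rf);
        }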


  • Write binary data as a response in an ASP.NET MVC web control

    - by Lou Franco
    I am trying to get a control that I wrote for ASP.NET to work in an ASP.NET MVC environment. Normally the control does the usual thing, and that works fine. Sometimes, though, it needs to respond by clearing the response, writing an image to the response stream, and changing the content type. When you do this, you get an exception: "OutputStream is not available when a custom TextWriter is used". If I were in a page or controller, I can see how to create custom responses with binary data, but I can't see how to do this from inside a control's render functions.

    To simplify it, imagine I want to make a web control that renders to:

        <img src="pageThatControlIsOn?controlImage">

    and I know how to look at incoming requests and recognize query strings that should be routed to the control. Now the control is supposed to respond with a generated image (content-type: image/png, and the encoded image). In ASP.NET, we do:

        Response.Clear();
        Response.OutputStream.Write(thePngData); // this throws in MVC
        // set the content type, etc.
        Response.End();

    How am I supposed to do that in an ASP.NET MVC control?
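
    For comparison, if the image request can be routed to a controller action rather than handled inside the control's render path, MVC has a built-in result type for exactly this. A sketch (the controller, action, and helper names are illustrative):

        using System.Web.Mvc;

        public class ControlImageController : Controller
        {
            // GET /ControlImage/Render
            public ActionResult Render()
            {
                byte[] thePngData = GeneratePng();     // hypothetical image generator
                return File(thePngData, "image/png");  // sets the content type and writes the bytes
            }

            private byte[] GeneratePng()
            {
                return new byte[0];                    // stand-in for the real drawing code
            }
        }

    The control's markup would then point its img src at that action's URL instead of back at the hosting page.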


  • How to debug anomalous C memory/stack problems

    - by EBM
    Hello, sorry I can't be specific with code, but the problems I am seeing are anomalous. Character string values seem to be getting changed depending on other, unrelated code. For example, the value of the argument that is passed around below will change merely depending on whether I comment out one or two of the fprintf() calls! By the last fprintf() the value is typically completely empty (and no, I have checked to make sure I am not modifying the argument directly... all I have to do is comment out an fprintf() or add another fprintf() and the value of the string will change at certain points!):

        static void process_args(char *arg)
        {
            /* debug */
            fprintf(stderr, "Function arg is %s\n", arg);

            /* ...do a bunch of stuff, including calling another function that uses alloc()... */

            /* debug */
            fprintf(stderr, "Function arg is now %s\n", arg);
        }

        int main(int argc, char *argv[])
        {
            char *my_string;

            /* ... do a bunch of stuff ... */

            /* just to show you it's nothing to do with the argv array */
            my_string = strdup(argv[1]);

            /* debug */
            fprintf(stderr, "Argument 1 is %s\n", my_string);

            process_args(my_string);
        }

    There's more code all around, so I can't ask for someone to debug my program -- what I want to know is HOW can I debug why character strings like this are getting their memory changed or overwritten based on unrelated code. Is my memory limited? Is my stack too small? How do I tell? What else can I do to track down the issue? My program isn't huge; it's like a thousand lines of code give or take, plus a couple of dynamically linked external libs, but nothing out of the ordinary. HELP! TIA!
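
    Symptoms that move when unrelated lines change are the classic signature of heap or stack corruption elsewhere in the program, typically a buffer overrun or use-after-free near the alloc() calls mentioned; adding or removing an fprintf() just shuffles what sits next to the corrupted block. A memory checker will usually point at the first bad write directly. For example, with standard tools (nothing here is specific to this program):

        # Rebuild with symbols and warnings, then run under Valgrind's memcheck:
        gcc -g -Wall -O0 -o myprog myprog.c
        valgrind --tool=memcheck --leak-check=full ./myprog arg1

        # Or, with a GCC/Clang that supports it, compile with AddressSanitizer:
        gcc -g -fsanitize=address -o myprog myprog.c && ./myprog arg1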


  • Parse Text using scanner useDelimiter

    - by Brian
    Looking to parse the following text file. Sample text file:

        <2008-10-07text entered by user<2008-11-26additional text entered by user

    I would like to parse the above text so that I can have three variables:

        v1 = 2008-10-07
        v2 = text entered by user
        v3 = Ted Parlor

        v1 = 2008-11-26
        v2 = additional text entered by user
        v3 = Ted Parlor

    I attempted to use Scanner and useDelimiter; however, I'm having issues with how to set this up to get the results stated above. Here's my first attempt:

        import java.io.*;
        import java.util.Scanner;

        public class ScanNotes {
            public static void main(String[] args) throws IOException {
                Scanner s = null;
                try {
                    //String regex = "(?<=\<)([^\*)(?=\)";
                    s = new Scanner(new BufferedReader(new FileReader("cur_notes.txt")));
                    s.useDelimiter("[<]+");
                    while (s.hasNext()) {
                        String v1 = s.next();
                        String v2 = s.next();
                        System.out.println("v1= " + v1 + " v2=" + v2);
                    }
                } finally {
                    if (s != null) {
                        s.close();
                    }
                }
            }
        }

    The result is as follows:

        v1= 2008-10-07text entered by user v2=Ted Parlor

    What I desire is:

        v1= 2008-10-07 v2=text entered by user v3=Ted Parlor
        v1= 2008-11-26 v2=additional text entered by user v3=Ted Parlor

    Any help that would allow me to extract all three strings separately would be greatly appreciated.
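
    A sketch of a regex alternative: instead of splitting on '<', capture the date and the following free text as groups. This covers only the two fields visible in the sample line; where the "Ted Parlor" value lives in the real file isn't shown above, so it isn't captured here.

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class ScanNotesRegex {
            public static void main(String[] args) {
                String line = "<2008-10-07text entered by user<2008-11-26additional text entered by user";
                // Group 1: an ISO-style date; group 2: everything up to the next '<'.
                Matcher m = Pattern.compile("<(\\d{4}-\\d{2}-\\d{2})([^<]*)").matcher(line);
                while (m.find()) {
                    System.out.println("v1= " + m.group(1) + " v2=" + m.group(2));
                }
            }
        }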


  • Class; Struct; Enum confusion, what is better?

    - by Angel Brighteyes
    I have 46 rows of information, 2 columns each row ("Code Number", "Description"). These codes are returned to the client depending on the success or failure of their initial submission request. I do not want to use a database file (csv, sqlite, etc.) for the storage/access.

    The closest type I can think of for how I want these codes shown to the client is the exception class. Correct me if I'm wrong, but from what I can tell enums do not allow strings, though this sort of structure seemed the better option initially based on how it works (e.g. 100 = "missing name in request"). Thinking about it, creating a class might be the best modus operandi, but I would appreciate more experienced advice or direction from those who might have been in a similar situation. Currently this is what I have:

        class ReturnCode
        {
            private int _code;
            private string _message;

            public ReturnCode(int code)
            {
                Code = code;
            }

            public int Code
            {
                get { return _code; }
                set
                {
                    _code = value;
                    _message = RetrieveMessage(value);
                }
            }

            public string Message
            {
                get { return _message; }
            }

            private string RetrieveMessage(int value)
            {
                string message;
                switch (value)
                {
                    case 100:
                        message = "Request completed successfully";
                        break;
                    case 201:
                        message = "Missing name in request.";
                        break;
                    default:
                        message = "Unexpected failure, please email for support";
                        break;
                }
                return message;
            }
        }
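
    One common alternative, sketched rather than prescribed: keep the 46 code/message pairs in a single static read-only dictionary, so adding a row never means touching a switch statement. The names below are illustrative.

        using System.Collections.Generic;

        public static class ReturnCodes
        {
            private static readonly IDictionary<int, string> Messages =
                new Dictionary<int, string>
                {
                    { 100, "Request completed successfully" },
                    { 201, "Missing name in request." },
                    // ...the remaining rows of the 46-entry table...
                };

            public static string MessageFor(int code)
            {
                string message;
                return Messages.TryGetValue(code, out message)
                    ? message
                    : "Unexpected failure, please email for support";
            }
        }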


  • change custom mapping - sharp architecture/ fluent nhibernate

    - by csetzkorn
    I am using the Sharp Architecture, which also deploys Fluent NHibernate. The DB schema SQL is generated during testing like this:

        [TestFixture]
        [Category("DB Tests")]
        public class MappingIntegrationTests
        {
            [SetUp]
            public virtual void SetUp()
            {
                string[] mappingAssemblies = RepositoryTestsHelper.GetMappingAssemblies();
                configuration = NHibernateSession.Init(
                    new SimpleSessionStorage(),
                    mappingAssemblies,
                    new AutoPersistenceModelGenerator().Generate(),
                    "../../../../app/XXX.Web/NHibernate.config");
            }

            [TearDown]
            public virtual void TearDown()
            {
                NHibernateSession.CloseAllSessions();
                NHibernateSession.Reset();
            }

            [Test]
            public void CanConfirmDatabaseMatchesMappings()
            {
                var allClassMetadata = NHibernateSession.GetDefaultSessionFactory().GetAllClassMetadata();
                foreach (var entry in allClassMetadata)
                {
                    NHibernateSession.Current.CreateCriteria(entry.Value.GetMappedClass(EntityMode.Poco))
                        .SetMaxResults(0).List();
                }
            }

            /// <summary>
            /// Generates and outputs the database schema SQL to the console
            /// </summary>
            [Test]
            public void CanGenerateDatabaseSchema()
            {
                System.IO.TextWriter writeFile = new StreamWriter(@"d:/XXXSqlCreate.sql");
                var session = NHibernateSession.GetDefaultSessionFactory().OpenSession();
                new SchemaExport(configuration).Execute(true, false, false, session.Connection, writeFile);
            }

            private Configuration configuration;
        }

    I am trying to use an override to change the standard mapping of strings from NVARCHAR(255) to varchar(max):

        using FluentNHibernate.Automapping;
        using xxx.Core;
        using SharpArch.Data.NHibernate.FluentNHibernate;
        using FluentNHibernate.Automapping.Alterations;

        namespace xxx.Data.NHibernateMaps
        {
            public class x : IAutoMappingOverride<x>
            {
                public void Override(AutoMapping<x> mapping)
                {
                    mapping.Map(x => x.text, "text").CustomSqlType("varchar(max)");
                    mapping.Map(x => x.url, "url").CustomSqlType("varchar(max)");
                }
            }
        }

    This is not picked up during the SQL schema generation. I also tried:

        mapping.Map(x => x.text, "text").Length(100000);

    Any ideas? Thanks. Christian
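
    One thing worth ruling out, stated as an assumption about this setup rather than a confirmed diagnosis: Fluent NHibernate only applies IAutoMappingOverride<T> classes that are explicitly registered with the auto-persistence model, so the AutoPersistenceModelGenerator needs something along these lines:

        public AutoPersistenceModel Generate()
        {
            AutoPersistenceModel model = AutoMap.AssemblyOf<x>()
                // Overrides are opt-in: without a call like this, the
                // IAutoMappingOverride<x> implementation above is never consulted.
                .UseOverridesFromAssemblyOf<x>();
            return model;
        }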


  • How do I cast <T> to varbinary and still be able to perform a CONVERT on the SQL side? Implications?

    - by Biff MaGriff
    Hello, I'm writing an application that will allow a user to define custom quizzes and then allow another user to respond to the questions. Each question of the quiz has a corresponding datatype. All the responses to all the questions are stored vertically in my [Response] table. I currently use 2 fields to store the response:

        //Response schema
        ResponseID   int
        QuizPersonID int
        QuestionID   int
        ChoiceID     int             //maps to Choice table, used for drop down lists
        ChoiceValue  varbinary(MAX)  //used to store a user entered value

    I'm using .NET 3.5, C#, and SQL Server 2008. I'm thinking that I would want to store different datatypes in the same field and then, in my SQL report proc, CONVERT to the proper datatype. I'm thinking this is ideal because I only have to check one field. I'm also thinking it might be more trouble than it's worth. I think my other options are to store the data as strings in the db (yuck), or to have a column for each datatype I might use.

    So what I would like to know is: how would I format my datatypes in C# so that they can be converted properly in SQL? What is the performance hit for converting in SQL? Should I just make a whole wack of columns, one for each datatype?
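
    For what it's worth, SQL Server has a type aimed at exactly this vertical-storage pattern; a sketch of the comparison point, not a recommendation over varbinary (column names follow the schema above):

        -- sql_variant stores each value together with its original type
        ALTER TABLE Response ADD ChoiceValueVariant sql_variant NULL;

        -- Reading back: inspect the stored type, or convert row by row
        SELECT ResponseID,
               SQL_VARIANT_PROPERTY(ChoiceValueVariant, 'BaseType') AS StoredType,
               CONVERT(int, ChoiceValueVariant) AS AsInt  -- valid where the stored type converts
          FROM Response;

    With varbinary, by contrast, the C# side has to write the exact byte layout each SQL type expects before CONVERT can reinterpret it, which is where that approach tends to get fragile.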


  • Endless problems with a very simple python subprocess.Popen task

    - by Thomas
    I'd like Python to send around a half-million integers, each in the range 0-255, to an executable written in C++. This executable will then respond with a few thousand integers, each on its own line. This seems like it should be very simple to do with subprocess, but I've had endless troubles. Right now I'm testing with this code:

        // main()
        u32 num;
        std::cin >> num;

        u8* data = new u8[num];
        for (u32 i = 0; i < num; ++i)
            std::cin >> data[i];

        // test output / spit it back out
        for (u32 i = 0; i < num; ++i)
            std::cout << data[i] << std::endl;

        return 0;

    Building an array of strings ("data"), each like "255\n", in Python and then using:

        output = proc.communicate("".join(data))[0]

    ...doesn't work (it says stdin is closed; maybe too much at one time). Neither has using proc.stdin and proc.stdout worked. This should be so very simple, but I'm getting constant exceptions and/or no output data returned to me. My Popen is currently:

        proc = Popen('aux/test_cpp_program', stdin=PIPE, stdout=PIPE, bufsize=1)

    Advise me before I pull my hair out. ;)
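
    Two things worth noting, as likely culprits rather than certainties. First, communicate() writes everything, closes the child's stdin, and reads until EOF, so the payload has to contain every value the C++ side reads, including the leading count. Second, if u8 is a typedef for unsigned char, then std::cin >> data[i] extracts a single character rather than parsing a number, so a line like "255\n" would never round-trip; reading into an int and narrowing afterwards avoids that. A sketch of the Python side under those assumptions (Python 2 style, matching the Popen call above):

        from subprocess import Popen, PIPE

        nums = [n % 256 for n in range(500000)]   # stand-in for the real half-million values
        payload = "\n".join([str(len(nums))] + [str(n) for n in nums]) + "\n"

        proc = Popen(['aux/test_cpp_program'], stdin=PIPE, stdout=PIPE)
        out, _ = proc.communicate(payload)        # writes all input, closes stdin, reads all output
        results = [int(line) for line in out.splitlines() if line.strip()]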


  • Preserving hierarchy when converting .csv file to xml or json

    - by Simon Levinson
    Hello, I have a question concerning translating data from a CSV into XML or JSON where it is essential to preserve the hierarchy of the data. For example, if I have CSV data like this:

        type,brand,country,quantity
        apple,golden_delicious,english,1
        apple,golden_delicious,french,2
        apple,cox,,4
        apple,braeburn,,1
        banana,,carribean,6
        banana,,central_america,7
        clementine,,,3

    What I want is to preserve the hierarchy in the XML so that I get something like:

        <fruit>
            <type = "apple">
                <brand = "golden_delicious">
                    <country = "english" quantity = "1">
                    <country = "french" quantity = "2">
                </brand>
                <brand = "cox">
                    <quantity = "4">
                </brand>
                <brand = "braeburn">
                    <quantity = "1">
                </brand>
            </type>
            <type = "banana">
                <country = "carribean" quantity = "6">
                <country = "central_america" quantity = "7">
            </type>
            <type = "clementine">
                <quantity = "3">
            </type>
        <fruit />

    Is it best to try to use JAXP, or to convert the above into a simple table of parent and child and then write the data to an array of strings for processing? Like this:

        parent,child
        fruit,apple
        apple,golden_delicious
        golden_delicious,english
        golden_delicious,french
        english,1
        french,2
        apple,cox
        cox,4
        apple,braeburn
        braeburn,1

    And so on. Or is there a better way? Thanks, Simon Levinson
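
    Whatever the output format, the intermediate step is the same: group the rows into a tree keyed by type, then brand, with the leaves holding country and quantity. A sketch in plain Java collections (no JAXP), assuming the four-column layout above:

        import java.util.ArrayList;
        import java.util.LinkedHashMap;
        import java.util.List;
        import java.util.Map;

        public class FruitTree {
            public static void main(String[] args) {
                String[] rows = {
                    "apple,golden_delicious,english,1",
                    "apple,golden_delicious,french,2",
                    "apple,cox,,4",
                    "banana,,carribean,6",
                    "clementine,,,3",
                };
                // type -> brand -> list of [country, quantity]
                Map<String, Map<String, List<String[]>>> tree =
                        new LinkedHashMap<String, Map<String, List<String[]>>>();
                for (String row : rows) {
                    String[] f = row.split(",", -1);   // -1 keeps the trailing empty fields
                    Map<String, List<String[]>> brands = tree.get(f[0]);
                    if (brands == null) {
                        brands = new LinkedHashMap<String, List<String[]>>();
                        tree.put(f[0], brands);
                    }
                    List<String[]> leaves = brands.get(f[1]);
                    if (leaves == null) {
                        leaves = new ArrayList<String[]>();
                        brands.put(f[1], leaves);
                    }
                    leaves.add(new String[] { f[2], f[3] });
                }
                System.out.println(tree.keySet());     // [apple, banana, clementine]
            }
        }

    Walking the nested maps and emitting one element per level then yields the hierarchy above, whichever XML or JSON writer is used.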


  • Mirth is Not Picking Up Updated JAR File?

    - by ashes999
    I have some Mirth code (JavaScript) that's consuming one of my Java classes (let's call it SimpleClass). It seems that Mirth is not picking up my changes to the class. SimpleClass used to look something like this:

        public class SimpleClass
        {
            public SimpleClass(String one)
            {
                // do something with one
            }
        }

    Now it looks like:

        public class SimpleClass
        {
            public SimpleClass(String one, String two)
            {
                // store one and two
            }

            public void Execute()
            {
                // do something with one and two
            }
        }

    When I try to call SimpleClass with two strings, I get the error "Java constructor for SimpleClass with arguments string, string not found." When I try to run Execute, I get an error similar to "TypeError: method Execute not found." Perplexingly, I can still call SimpleClass("one string") to call the one-string constructor (which is now nowhere to be seen in the code).

    To get my code to work, I build a JAR file using Ant and deploy it to \\lib\custom. I have tried:

    - Copying the new JAR and restarting Mirth while it's live
    - Copying the new JAR after stopping Mirth, then starting Mirth
    - Looking for other copies of my JAR or Mirth installations (there are none)
    - Unzipping and reverse-engineering the JAR (it has the two-constructor code)
    - Hashing the JAR (it matches what I'm building in Ant)
    - Restarting my computer (hey, you never know)

    I'm at a loss. I'm not sure why I'm not seeing the latest code. My channel has very simple JavaScript:

        var sp = new Packages.com.hs.channel.scarborough.ScarboroughParser("one", "two");

    I know it's not a configuration error of any kind, since this used to work (and does still work, albeit with the old code -- a single-parameter constructor).


  • Blob in Java/Hibernate/sql-server 2005

    - by Ramy
    Hi, I'm trying to insert an HTML blob into our SQL Server 2005 database. I've been using the data type [text] for the field the blob will live in, and I've also put a @Lob annotation on the field in the domain model. The problem comes in when the HTML blob I'm attempting to store is larger than 65,536 characters. It seems that is the character limit for a text data type when using the @Lob annotation. Ideally I'd like to keep the whole blob intact rather than chunk it up into multiple rows in the database. I appreciate any help or insight. Thanks! _Ramy

    Allow me to clarify the annotation:

        @Lob
        @Column(length = Integer.MAX_VALUE) // per an answer on stackoverflow
        private String htmlBlob;

    Database side (SQL Server 2005):

        CREATE TABLE dbo.IndustrySectorTearSheetBlob(
            ...
            htmlBlob text NULL
            ...
        )

    Still seeing truncation after 65,536 characters...

    EDIT: I've printed out the contents of all possible strings (only 10 right now) that would be inserted into the database. Each string seems to contain all characters, judging by the fact that the closing HTML tag is present at the end of the string...
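
    A sketch of pinning down the DDL type from the mapping side, in case the cut-off comes from the generated column rather than from the data on its way in (columnDefinition is standard JPA and is emitted verbatim into the CREATE TABLE):

        @Lob
        @Column(columnDefinition = "varchar(max)")  // or "text"; bypasses the length-based type choice
        private String htmlBlob;

    If the column is demonstrably fine and the value is truncated only when read back, the session's TEXTSIZE setting (a SQL Server cap on returned text bytes, adjustable with SET TEXTSIZE) is another thing to rule out on the driver side.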


  • SQL INSTR() using CSV. Need exact match rather than part

    - by Alastair Pitts
    This is a follow-up issue relating to the answer for http://stackoverflow.com/questions/2445029/sql-placeholder-in-where-in-issue-inserted-strings-fail

    Quick background: we have a SQL query that uses a placeholder value to accept a string, which represents a unique tag/id. Usually this is only a single tag, but we needed the ability to pass a CSV string of multiple tags, returning a combined result. In the answer we received from the vendor, they suggested the use of the INSTR function, a la:

        select * from pitotal
         where tag IN (SELECT tag from pipoint WHERE INSTR(?, tag) <> 0)
           and time between 'y' and 't'

    This works perfectly well 99% of the time. The issue is when one tag is also a substring of another tag in the CSV string. E.g. the placeholder value is:

        'northdom,southdom,eastdom,westdom'

    and possible tags include north or northdom. Because north is a substring of northdom, both tags are returned instead of just northdom, which is what we actually want.

    I'm not strong on SQL, so I couldn't work out how to make the match exact or how to split the CSV string; help would be appreciated. Is there a way to split the CSV string, or to make it look for an exact match?
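
    A common fix, sketched under the assumption that the vendor's SQL dialect supports string concatenation (shown with ||; some dialects use CONCAT or +): wrap both the CSV placeholder and the tag in commas, so 'north' can only match as a complete list item and never as a prefix of 'northdom'.

        select *
          from pitotal
         where tag in (select tag
                         from pipoint
                        -- ',north,' appears in ',northdom,southdom,eastdom,westdom,'
                        -- only when north is a whole item, which here it is not
                        where instr(',' || ? || ',', ',' || tag || ',') <> 0)
           and time between 'y' and 't'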


  • Take most significant 8 bytes of the MD5 hash of a string as a long (in Ruby)

    - by Nate Murray
    Hey friends, I'm trying to implement a Java "hash" function in Ruby. Here's the Java side:

        import java.nio.charset.Charset;
        import java.security.MessageDigest;

        /**
         * @return most significant 8 bytes of the MD5 hash of the string, as a long
         */
        protected long hash(String value) {
            byte[] md5hash;
            md5hash = md5Digest.digest(value.getBytes(Charset.forName("UTF8")));
            long hash = 0L;
            for (int i = 0; i < 8; i++) {
                hash = hash << 8 | md5hash[i] & 0x00000000000000FFL;
            }
            return hash;
        }

    So far, my best guess in Ruby is:

        # WRONG - doesn't work properly.
        #!/usr/bin/env ruby -wKU
        require 'digest/md5'
        require 'pp'

        md5hash = Digest::MD5.hexdigest("0").unpack("U*")
        pp md5hash

        hash = 0
        0.upto(7) do |i|
          hash = hash << 8 | md5hash[i] & 0x00000000000000FF
        end
        pp hash

    Problem is, this Ruby code doesn't match the Java output. For reference, the above Java code, given these strings, returns the corresponding longs:

        "00038c53790ecedfeb2f83102e9115a522475d73" => -2059313900129568948
        "0"                                        => -3473083983811222033
        "001211e8befc8ac22dd265ecaa77f8c227d0007f" =>  3234260774580957018

    Thoughts:

    - I'm having problems getting the UTF-8 bytes from the Ruby string
    - In Ruby I'm using hexdigest; I suspect I should be using just digest instead
    - The Java code is taking the MD5 of the UTF-8 bytes, whereas my Ruby code is taking the bytes of the MD5 (as hex)

    Any suggestions on how to get the exact same output in Ruby?
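
    The second and third thoughts above look right; a sketch that acts on them: take the raw digest bytes (digest, not hexdigest), fold the first eight, and wrap the result into Java's signed 64-bit range. For plain ASCII input the UTF-8 question is moot, since the bytes are identical. Checked against the reference values above (e.g. "0" gives -3473083983811222033):

        require 'digest/md5'

        def hash_value(value)
          bytes = Digest::MD5.digest(value).unpack("C8")   # first 8 raw digest bytes
          hash = bytes.inject(0) { |acc, b| (acc << 8) | b }
          hash -= (1 << 64) if hash >= (1 << 63)           # reinterpret as a signed Java long
          hash
        end

        puts hash_value("0")   # => -3473083983811222033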


  • Doctesting functions that receive and display user input - Python (tearing my hair out)

    - by GlenCrawford
    Howdy! I am currently writing a small application with Python (3.1), and like a good little boy, I am doctesting as I go. However, I've come across a method that I can't seem to doctest. It contains an input(), and because of that, I'm not entirely sure what to place in the "expecting" portion of the doctest. Example code to illustrate my problem follows:

        """
        >>> getFiveNums()
        Howdy. Please enter five numbers, hit <enter> after each one
        Please type in a number:
        Please type in a number:
        Please type in a number:
        Please type in a number:
        Please type in a number:
        """

        import doctest

        numbers = list()  # stores 5 user-entered numbers (strings, for now) in a list

        def getFiveNums():
            print("Howdy. Please enter five numbers, hit <enter> after each one")
            for i in range(5):
                newNum = input("Please type in a number:")
                numbers.append(newNum)
            print("Here are your numbers: ", numbers)

        if __name__ == "__main__":
            doctest.testmod(verbose=True)

    When running the doctests, the program stops executing immediately after printing the "Expecting" section, waits for me to enter five numbers one after another (without prompts), and then continues. I don't know what, if anything, I can place in the Expecting section of my doctest to be able to test a method that receives and then displays user input. So my question (finally) is: is this function doctestable?
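
    One way to make it doctestable, sketched here: have the doctest itself swap input() for a canned source of answers before calling the function, then put it back. Only the standard builtins module is involved; the expected output assumes the module-level numbers list starts empty.

        """
        >>> import builtins
        >>> canned = iter(['1', '2', '3', '4', '5'])
        >>> real_input, builtins.input = builtins.input, lambda prompt='': next(canned)
        >>> getFiveNums()
        Howdy. Please enter five numbers, hit <enter> after each one
        Here are your numbers:  ['1', '2', '3', '4', '5']
        >>> builtins.input = real_input
        """

    The prompts disappear from the expected output because the replacement never prints them; only what the function itself prints is left to match.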


  • Extending ASP.NET role providers

    - by Quick Joe Smith
    Because the RoleProvider interface seems to treat roles as nothing more than simple strings, I'm wondering if there is any non-hacky way to apply an optional value for a role on a per-user basis.

    Our current login management system implements roles as key-value pairs, where the value part is optional and usually used to clarify or limit the permissions granted by a role. For example, a role 'editor' might contain a user 'barry', but for 'barry' it will have an optional value 'raptors', which the system would interpret to mean that Barry can only edit articles filed under the 'raptors' category.

    I have seen elsewhere a suggestion to simply create additional delimited roles, such as 'editor.raptors' or somesuch. That's not really going to be ideal because it would bloat the number of roles greatly, and I can tell it's going to be a very hard sell to replace our current implementation (which is also very less than ideal, but has the advantage of being custom made to work with our user database). I can tell already that the concatenation method mentioned above is going to involve a lot of tedious string-splitting and partial matching.

    Is there a better way?
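
    For what it's worth, if the stock RoleProvider has to stay, one way to at least contain the concatenation approach is to confine the delimited-role parsing to a single helper, so nothing else ever does string-splitting. Purely illustrative; every name below is invented:

        using System;
        using System.Web.Security;

        public static class QualifiedRoles
        {
            // Looks for "editor" or "editor:raptors" among the user's roles.
            // Returns the qualifier ("raptors"), "" for an unqualified role,
            // or null when the user is not in the role at all.
            public static string GetQualifier(string username, string role)
            {
                foreach (string r in Roles.GetRolesForUser(username))
                {
                    if (r == role) return string.Empty;
                    if (r.StartsWith(role + ":", StringComparison.Ordinal))
                        return r.Substring(role.Length + 1);
                }
                return null;
            }
        }

    The role-count bloat remains, though; a separate table keyed by (user, role) holding the optional value, alongside plain provider roles, avoids that entirely.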


  • IValueConverter from string

    - by Aleksandar Toplek
    I have an enum that needs to be shown in a ComboBox. I have managed to get the enum values into the ComboBox using ItemsSource, and I'm trying to localize them. I thought that could be done using a value converter, but as my enum values are already strings, the compiler throws an error that IValueConverter can't take a string as input. I'm not aware of any other way to convert them to another string value. Is there some other way to do that (not the localization, but the conversion)?

    I'm using this markup extension to get the enum values:

        [MarkupExtensionReturnType(typeof (IEnumerable))]
        public class EnumValuesExtension : MarkupExtension
        {
            public EnumValuesExtension() {}

            public EnumValuesExtension(Type enumType)
            {
                this.EnumType = enumType;
            }

            [ConstructorArgument("enumType")]
            public Type EnumType { get; set; }

            public override object ProvideValue(IServiceProvider serviceProvider)
            {
                if (this.EnumType == null)
                    throw new ArgumentException("The enum type is not set");
                return Enum.GetValues(this.EnumType);
            }
        }

    and in Window.xaml:

        <Converters:UserTypesToStringConverter x:Key="userTypeToStringConverter" />
        ....
        <ComboBox ItemsSource="{Helpers:EnumValuesExtension Data:UserTypes}" Margin="2"
                  Grid.Row="0" Grid.Column="1" SelectedIndex="0" TabIndex="1" IsTabStop="False">
            <ComboBox.ItemTemplate>
                <DataTemplate DataType="{x:Type Data:UserTypes}">
                    <Label Content="{Binding Converter=userTypeToStringConverter}" />
                </DataTemplate>
            </ComboBox.ItemTemplate>
        </ComboBox>

    And here is the converter class; it's just a test class, no localization yet:

        public class UserTypesToStringConverter : IValueConverter
        {
            public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
            {
                return (int) ((Data.UserTypes) value) == 0 ? "Fizicka osoba" : "Pravna osoba";
            }

            public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
            {
                return default(Data.UserTypes);
            }
        }
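
    Two observations, offered as likely issues rather than certainties. First, a converter declared as a resource has to be referenced through StaticResource (Converter={StaticResource userTypeToStringConverter}); a bare Converter=userTypeToStringConverter won't resolve. Second, for the conversion itself, the converter can map each enum value to localized text via a resource lookup; a sketch assuming a .resx with keys like UserTypes_Fizicka:

        public class UserTypesToStringConverter : IValueConverter
        {
            public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
            {
                // Look up a localized string keyed by the enum member's name,
                // e.g. "UserTypes_Fizicka" in the project's resource files.
                string key = "UserTypes_" + ((Data.UserTypes)value).ToString();
                return Properties.Resources.ResourceManager.GetString(key, culture)
                       ?? value.ToString();
            }

            public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
            {
                throw new NotSupportedException();
            }
        }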


  • What Language Feature Can You Just Not Live Without?

    - by akdom
    I always miss Python's built-in doc strings when working in other languages. I know this may seem odd, but it allows me to cut down significantly on excess comments while still providing a clean description of my code and any interfaces therein.

    What language feature can you just not live without? If someone were building a new language and they asked you what one feature they absolutely must include, what would it be?

    This is getting kind of long, so I figured I'd do my best to summarize. Paraphrased to be language agnostic; if you know of a language which uses something mentioned, please add it in the parentheses to the right of the feature. And if you have a better format for this list, by all means try it out (if it doesn't seem to work, I'll just roll back).

    - Regular Expressions ~ torial (Perl)
    - Garbage Collection ~ SaaS Developer (Python, Perl, Ruby, Java, .NET)
    - Anonymous Functions ~ Vinko Vrsalovic (Lisp, Python)
    - Arithmetic Operators ~ Jeremy Ross (Python, Perl, Ruby, Java, C#, Visual Basic, C, C++, Pascal, Smalltalk, etc.)
    - Exception Handling ~ torial (Python, Java, .NET)
    - Pass By Reference ~ Chris (Python)
    - Unified String Format ~ WalloWizard (C#)
    - Generics ~ torial (Python, Java, C#)
    - Integrated Query Equivalent to LINQ ~ Vyrotek (C#)
    - Namespacing ~ Garry Shutler ()
    - Short Circuit Logic ~ Adam Bellaire ()


  • How is the 'is' keyword implemented in Python?

    - by Srikanth
    ... the is keyword that can be used for equality in strings.

        >>> s = 'str'
        >>> s is 'str'
        True
        >>> s is 'st'
        False

    I tried both __is__() and __eq__() but they didn't work.

        >>> class MyString:
        ...     def __init__(self):
        ...         self.s = 'string'
        ...     def __is__(self, s):
        ...         return self.s == s
        ...
        >>> m = MyString()
        >>> m is 'ss'
        False
        >>> m is 'string' # <--- Expected to work
        False
        >>>
        >>> class MyString:
        ...     def __init__(self):
        ...         self.s = 'string'
        ...     def __eq__(self, s):
        ...         return self.s == s
        ...
        >>> m = MyString()
        >>> m is 'ss'
        False
        >>> m is 'string' # <--- Expected to work, but again failed
        False

    Thanks for your help!
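
    For contrast, a sketch of what does work: is compares object identity and is not backed by any special method, so it cannot be overridden (the apparent string "equality" above is an artifact of small-literal interning). The overridable hook is __eq__, reached through the == operator:

        class MyString(object):
            def __init__(self):
                self.s = 'string'
            def __eq__(self, other):
                return self.s == other

        m = MyString()
        print(m == 'string')   # True  -- dispatches to MyString.__eq__
        print(m is 'string')   # False -- identity check; there is no __is__ hook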


  • Why is my multithreaded Java program not maxing out all my cores on my machine?

    - by James B
    Hi, I have a program that starts up, creates an in-memory data model, and then creates a (command-line-specified) number of threads to run several string-checking algorithms against an input set and that data model. The work is divided amongst the threads along the input set of strings, and then each thread iterates over the same in-memory data model instance (which is never updated again, so there are no synchronization issues).

    I'm running this on a Windows 2003 64-bit server with two quad-core processors, and from looking at Windows Task Manager the cores aren't being maxed out (nor do they look like they are being particularly taxed) when I run with 10 threads. Is this normal behaviour? It appears that 7 threads all complete a similar amount of work in a similar amount of time, so would you recommend running with 7 threads instead? Should I run it with more threads? I assume that could be detrimental, as the JVM would do more context switching between the threads. Alternatively, should I run it with fewer threads?

    What would be the best tool I could use to measure this? Would a profiling tool help me out here; indeed, is one of the several profilers better at detecting bottlenecks (assuming I have one here) than the rest?

    Note, the server is also running SQL Server 2005 (this may or may not be relevant), but nothing much is happening on that database when I am running my program. Note also, the threads are only doing string matching; they aren't doing any I/O or database work or anything else they may need to wait on. Thanks in advance, -James
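
    As a measurement sketch (standard java.util.concurrent only, nothing specific to the program): give each pool size the same per-thread workload and time it. If the work truly scales, the elapsed time stays roughly flat as threads are added up to the core count; the point where it starts climbing is where something (a shared lock, memory bandwidth, the data model's cache footprint) stops it.

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;

        public class ScalingProbe {
            public static void main(String[] args) throws InterruptedException {
                int cores = Runtime.getRuntime().availableProcessors(); // 8 on 2x quad-core
                for (int threads = 1; threads <= cores + 2; threads++) {
                    ExecutorService pool = Executors.newFixedThreadPool(threads);
                    long start = System.nanoTime();
                    for (int i = 0; i < threads; i++) {
                        pool.execute(new Runnable() {
                            public void run() { burnCpu(); } // stand-in for the string matching
                        });
                    }
                    pool.shutdown();
                    pool.awaitTermination(1, TimeUnit.HOURS);
                    System.out.println(threads + " threads: "
                            + (System.nanoTime() - start) / 1000000 + " ms");
                }
            }

            private static void burnCpu() {
                long x = 0;
                for (int i = 0; i < 100000000; i++) x += i ^ (x >>> 3);
            }
        }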


  • Fast JSON serialization (and comparison with Pickle) for cluster computing in Python?

    - by user248237
    I have a set of data points, each described by a dictionary. The processing of each data point is independent, and I submit each one as a separate job to a cluster. Each data point has a unique name, and my cluster submission wrapper simply calls a script that takes a data point's name and a file describing all the data points. That script then accesses the data point from the file and performs the computation.

    Since each job has to load the set of all points only to retrieve the point to be run, I wanted to optimize this step by serializing the file describing the set of points into an easily retrievable format. I tried using jsonpickle, via the following method, to serialize a dictionary describing all the data points to a file:

        def json_serialize(obj, filename, use_jsonpickle=True):
            f = open(filename, 'w')
            if use_jsonpickle:
                import jsonpickle
                json_obj = jsonpickle.encode(obj)
                f.write(json_obj)
            else:
                simplejson.dump(obj, f, indent=1)
            f.close()

    The dictionary contains very simple objects (lists, strings, floats, etc.) and has a total of 54,000 keys. The JSON file is ~20 megabytes in size. It takes ~20 seconds to load this file into memory, which seems very slow to me. I switched to using pickle with the same exact object, and found that it generates a file that's about 7.8 megabytes in size and can be loaded in ~1-2 seconds. This is a significant improvement, but it still seems like loading of a small object (less than 100,000 entries) should be faster. Aside from that, pickle is not human readable, which was the big advantage of JSON for me.

    Is there a way to use JSON to get similar or better speed-ups? If not, do you have other ideas on structuring this? (Is the right solution to simply "slice" the file describing each event into a separate file and pass that on to the script that runs a data point in a cluster job? It seems like that could lead to a proliferation of files.) Thanks.
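
    One thing worth trying before restructuring anything (an assumption about the setup, since the Python version isn't stated): the C implementations of both serializers. On Python 2, cPickle with a binary protocol is typically several times faster than the pure-Python default, and simplejson has optional C speedups that matter for a 20 MB document. A sketch of the pickle side:

        import cPickle

        def pickle_serialize(obj, filename):
            f = open(filename, 'wb')
            # HIGHEST_PROTOCOL selects the binary protocol; the default (0) is
            # ASCII-based and considerably slower to write and to load.
            cPickle.dump(obj, f, cPickle.HIGHEST_PROTOCOL)
            f.close()

        def pickle_load(filename):
            f = open(filename, 'rb')
            try:
                return cPickle.load(f)
            finally:
                f.close()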


  • A function where small changes in input always result in large changes in output

    - by snowlord
    I would like an algorithm for a function that takes n integers and returns one integer. For small changes in the input, the resulting integer should vary greatly. Even though I've taken a number of courses in math, I have not used that knowledge very much, and now I need some help...

    An important property of this function should be that if it is used with coordinate pairs as input and the result is plotted (as a grayscale value, for example) on an image, any repeating patterns should only be visible if the image is very big.

    I have experimented with various algorithms for pseudo-random numbers with little success, and finally it struck me that md5 almost meets my criteria, except that it is not for numbers (at least not from what I know). That resulted in something like this Python prototype (for n = 2; it could easily be changed to take a list of integers, of course):

        import hashlib

        def uniqnum(x, y):
            return int(hashlib.md5(str(x) + ',' + str(y)).hexdigest()[-6:], 16)

    But obviously it feels wrong to go via strings when both input and output are integers. What would be a good replacement for this implementation (in pseudo-code, Python, or whatever language)?
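
    If the only objection is the detour through strings, the detour can be removed: pack the two integers into their raw bytes and hash those. A sketch using only the standard library (it assumes the coordinates fit in signed 32 bits):

        import hashlib
        import struct

        def uniqnum(x, y):
            # Hash the raw 8 bytes of the two ints; no string formatting involved.
            digest = hashlib.md5(struct.pack('>2i', x, y)).digest()
            # Interpret 4 bytes of the digest as one signed 32-bit integer.
            return struct.unpack('>i', digest[-4:])[0]

    The avalanche property comes from MD5 itself, so adjacent coordinates still map to wildly different values, which is the plotting requirement above.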

