Search Results

Search found 2235 results on 90 pages for 'dictionary'.

Page 30/90

  • 5 Lessons learnt in localization / multi language support in WPF

    - by MarkPearl
    For the last few months I have been secretly working away at the second version of an application that we initially released a few years ago. It’s called MaxCut and it is a free panel/cut optimizer for the woodwork, glass and metal industry. One of the motivations for writing MaxCut was to get an end-to-end experience in developing an application for general consumption. From the early days of v1 of MaxCut I would get the odd email thanking me for the software and then listing a few suggestions on how to improve it. Two of the most dominant suggestions that we received were…

    Support for imperial measurements (the original program only supported the metric system)
    Multi-language support (we had someone who volunteered to translate the program into Japanese for us)

    I am not going to dive into the imperial-to-metric support in today's blog post, but I would like to cover a few brief lessons we learned in adding multi-language support to the software. I have sectioned them below under different lessons.

    Lesson 1 – Build multi-language support in from the start

    The first lesson I learnt was: if you know you are going to do multi-language support, build it in from the very beginning! One of the strong points of WPF/Silverlight is data binding in XAML, so while it wasn't too painful to retrofit multi-language support into the program, it was still time-consuming and a bit tedious to go through mounds and mounds of views, whereas it would have been a minor job to implement this while the views were being designed.

    Lesson 2 – Accommodate varying word lengths using Grids

    The next lesson was a little harder to learn and was learnt a bit further down the road in the development cycle. We developed everything in English, assuming that other languages would have similar character lengths for equivalent words… don't! A word that is short in your language may be of varying character lengths in other languages. Some languages, like Dutch and German, allow for concatenation of nouns, which has the potential to create really long words. We picked up a few places where our views had been structured incorrectly, so that if a word was too long it would get clipped off or cut out. To get around this we began using the WPF Grid extensively, with column widths that would automatically expand if they needed to. Generally speaking the Grid replacement got round this hurdle, and if in future you have a choice between a StackPanel and a Grid – think twice before going for the easier option… often the Grid will be a bit more work to set up, but it will be more flexible.

    Lesson 3 – Separate the separators

    Our initial run through moving the words to a resource dictionary led us to make what I thought was one potential mistake. If we had a label like “length : “, we put it in the resource dictionary as a single entry. This is fine until you start using a word more than once. For instance, in our scenario we used the word “length” frequently; with the different variations of the word – grammar and separators included – in the resource file, we ended up with what I would consider a bloated dictionary. When we removed the separators from the words and put them in as their own resources, we saw a dramatic reduction in dictionary size… so something that looked like this…

    “length : “
    “length. “
    “length?”

    was reduced to…

    “length”
    “:”
    “?”
    “.”

    While this may not seem like a reduction at first glance, consider that the separators “:?.” are used everywhere, and suddenly you see a real reduction in bloat.

    Lesson 4 – Centralize the language dictionary

    This lesson was learnt at the very end of the project, after we already had a release candidate out in the wild. Because our translations would be done on a volunteer basis and remotely, we wanted it to be really simple for someone to translate our program into another language. As a common design practice we had tiered the application so that we had a business logic layer, a UI layer, etc. The problem was that several of these layers had resource files specific to that layer, which meant we had multiple resource files to send to our translators. To add to our problems, some of the wordings were duplicated in different resource files, which resulted in additional frustration from our translators as they felt they were duplicating work. Eventually the workaround was to make a separate project in VS2010 with just the language translations. We then exposed the dictionary as public within this project and referenced it from the other projects in the solution. This solved our problem: we now had a central dictionary and could remove any duplications (a rough sketch of this setup is at the end of this post).

    Lesson 5 – Make a dummy translation file to test that you haven't missed anything

    The final lesson learnt about multi-language support in WPF: to check whether you have forgotten to translate anything in the inline code, make a test resource file with dummy data. Ideally you want the data for each word to be identical. In our instance we made one in which all the resource keys pointed to the value “test”. This allowed us to point the language file at our test resource file and very quickly browse through the program to see if we had missed any linking. The alternative to this approach is to have two language files and swap between the two while running the program to make sure that you haven't missed anything, but the downside of the dual-language-file approach is that it is a lot harder to spot a mistake if everything is different – almost like playing Where's Wally / Waldo. It is much easier to spot variance in uniformity – meaning that when you put the “test” keyword for everything, anything that didn't say “test” stuck out like a sore thumb.

    So these are my top five lessons learnt on implementing multi-language support in WPF. Feel free to make any suggestions in the comments section if you feel something is more important than one of these, or if I got it wrong!
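
    A rough sketch of the Lesson 4 setup (all type and resource names here are hypothetical, not MaxCut's real ones): a single public dictionary class in its own project that every layer references, so translators only ever see one resource file. It also shows the Lesson 3 idea of composing words and separators at the edges.

    using System.Globalization;
    using System.Resources;

    // Lives in its own "Languages" project; every other project references this.
    public static class LanguageDictionary
    {
        private static readonly ResourceManager Resources =
            new ResourceManager("Languages.Strings", typeof(LanguageDictionary).Assembly);

        public static string Get(string key)
        {
            // Falling back to the key itself makes a missing translation visible
            // instead of throwing - handy with the Lesson 5 "test" file.
            return Resources.GetString(key, CultureInfo.CurrentUICulture) ?? key;
        }

        // Lesson 3: words and separators are stored separately and composed here.
        public static string Label(string wordKey)
        {
            return Get(wordKey) + Get("Colon");
        }
    }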

    Read the article

  • Is there a class like a Dictionary without a Value template? Is HashSet<T> the correct answer?

    - by myotherme
    I have 3 tables: Foos, Bars and FooBarConfirmations. I want to have an in-memory list of FooBarConfirmations by their hash:

    FooID  BarID  Hash
    1      1      1_1
    2      1      2_1
    1      2      1_2
    2      2      2_2

    What would be the best class to use to store this type of structure in memory, so that I can quickly check whether a combination exists, like so: list.Contains("1_2"); I can do this with Dictionary<string, anything>, but it "feels" wrong. HashSet<T> looks like the right tool for the job, but does it use some form of hashing algorithm in the background to do the lookups efficiently?
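
    For what it's worth, HashSet<T> does hash its elements internally (essentially a Dictionary-style bucket table without the values), so Contains is an O(1) lookup. A quick illustrative sketch:

    using System;
    using System.Collections.Generic;

    class Program
    {
        static void Main()
        {
            // The keys are the precomputed "FooID_BarID" hashes from the question.
            var confirmations = new HashSet<string> { "1_1", "2_1", "1_2", "2_2" };

            Console.WriteLine(confirmations.Contains("1_2")); // True
            Console.WriteLine(confirmations.Contains("3_1")); // False
        }
    }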

    Read the article

  • Convert Dynamic to Type and convert Type to Dynamic

    - by Jon Canning
    public static class DynamicExtensions
    {
        public static T FromDynamic<T>(this IDictionary<string, object> dictionary)
        {
            var bindings = new List<MemberBinding>();
            foreach (var sourceProperty in typeof(T).GetProperties().Where(x => x.CanWrite))
            {
                // Match dictionary keys to property names case-insensitively.
                var key = dictionary.Keys.SingleOrDefault(
                    x => x.Equals(sourceProperty.Name, StringComparison.OrdinalIgnoreCase));
                if (string.IsNullOrEmpty(key)) continue;
                var propertyValue = dictionary[key];
                bindings.Add(Expression.Bind(sourceProperty, Expression.Constant(propertyValue)));
            }
            // Build and invoke: () => new T { Prop1 = ..., Prop2 = ... }
            Expression memberInit = Expression.MemberInit(Expression.New(typeof(T)), bindings);
            return Expression.Lambda<Func<T>>(memberInit).Compile().Invoke();
        }

        public static dynamic ToDynamic<T>(this T obj)
        {
            IDictionary<string, object> expando = new ExpandoObject();
            foreach (var propertyInfo in typeof(T).GetProperties())
            {
                var propertyExpression = Expression.Property(Expression.Constant(obj), propertyInfo);
                // Convert to object so this works for properties of any type
                // (a Func<string> lambda here would fail for non-string properties).
                var currentValue = Expression.Lambda<Func<object>>(
                        Expression.Convert(propertyExpression, typeof(object)))
                    .Compile().Invoke();
                expando.Add(propertyInfo.Name.ToLower(), currentValue);
            }
            return expando as ExpandoObject;
        }
    }
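
    A hedged usage sketch (the Person type is made up for illustration). Note that ToDynamic lower-cases the keys while FromDynamic matches names case-insensitively, so the round trip still works:

    public class Person
    {
        public string Name { get; set; }
        public int Age { get; set; }
    }

    // somewhere in calling code:
    var person = new Person { Name = "Jon", Age = 30 };
    IDictionary<string, object> expando = person.ToDynamic();
    Person copy = expando.FromDynamic<Person>(); // copy.Name == "Jon", copy.Age == 30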

    Read the article

  • Group method parameter or individual parameter?

    - by Nassign
    I would like to ask about a method-parameter design consideration. I usually have to decide between using individual variables as parameters versus grouping them into a class or dictionary as one parameter. Is there a rule for when you should use individual parameters and when you should use a class or a dictionary to group them?

    Individual parameters – straightforward, strongly typed.
    Dictionary parameter – very extensible, like an HTTP request, but cannot be strongly typed.
    Class parameter – extensible by adding members to the class, strongly typed.

    I am looking for a design reference on when to use which (the three options are sketched in code below). Note: I am not sure if this question is valid on Programmers, but I definitely think it would be closed on Stack Overflow; if it is still not valid, please point me to the proper page.
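
    For reference, here is how the three options typically look side by side (all names are hypothetical):

    // Individual parameters: straightforward and strongly typed,
    // but the signature grows with every new argument.
    void Register(string name, int age, string phone) { }

    // Dictionary parameter: extensible like an HTTP request,
    // but callers lose compile-time checking of names and types.
    void Register(Dictionary<string, object> args) { }

    // Class parameter: extensible by adding members, still strongly typed.
    class RegisterRequest
    {
        public string Name { get; set; }
        public int Age { get; set; }
        public string Phone { get; set; }
    }
    void Register(RegisterRequest request) { }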

    Read the article

  • Developing Schema Compare for Oracle (Part 6): 9i Query Performance

    - by Simon Cooper
    All throughout the EAP and beta versions of Schema Compare for Oracle, our main request was support for Oracle 9i. After releasing version 1.0 with support for 10g and 11g, our next step was then to get version 1.1 of SCfO out with support for 9i. However, there were some significant problems that we had to overcome first. This post will concentrate on query execution time.

    When we first tested SCfO on a 9i server, after accounting for various changes to the data dictionary, we found that database registration was taking a long time. And I mean a looooooong time. The same database that on 10g or 11g would take a couple of minutes to register would be taking upwards of 30 mins on 9i. Obviously, this is not ideal, so a poke around the query execution plans was required. As an example, let's take the table population query - the one that reads ALL_TABLES and joins it with a few other dictionary views to get us back our list of tables. On 10g, this query takes 5.6 seconds. On 9i, it takes 89.47 seconds. The difference in execution plan is even more dramatic - here's the (edited) execution plan on 10g:

    -------------------------------------------------------------------------------
    | Id  | Operation             | Name                   | Bytes | Cost |
    -------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT      |                        |  108K |  939 |
    |   1 | SORT ORDER BY         |                        |  108K |  939 |
    |   2 | NESTED LOOPS OUTER    |                        |  108K |  938 |
    |*  3 | HASH JOIN RIGHT OUTER |                        |  103K |  762 |
    |   4 | VIEW                  | ALL_EXTERNAL_LOCATIONS |  2058 |    3 |
    |* 20 | HASH JOIN RIGHT OUTER |                        | 73472 |  759 |
    |  21 | VIEW                  | ALL_EXTERNAL_TABLES    |  2097 |    3 |
    |* 34 | HASH JOIN RIGHT OUTER |                        | 39920 |  755 |
    |  35 | VIEW                  | ALL_MVIEWS             |    51 |    7 |
    |  58 | NESTED LOOPS OUTER    |                        | 39104 |  748 |
    |  59 | VIEW                  | ALL_TABLES             |  6704 |  668 |
    |  89 | VIEW PUSHED PREDICATE | ALL_TAB_COMMENTS       |  2025 |    5 |
    | 106 | VIEW                  | ALL_PART_TABLES        |   277 |   11 |
    -------------------------------------------------------------------------------

    And the same query on 9i:

    -------------------------------------------------------------------------------
    | Id  | Operation             | Name                   | Bytes | Cost |
    -------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT      |                        |   16P |  55G |
    |   1 | SORT ORDER BY         |                        |   16P |  55G |
    |   2 | NESTED LOOPS OUTER    |                        |   16P | 862M |
    |   3 | NESTED LOOPS OUTER    |                        | 5251G | 992K |
    |   4 | NESTED LOOPS OUTER    |                        | 4243M | 2578 |
    |   5 | NESTED LOOPS OUTER    |                        | 2669K | 1440 |
    |*  6 | HASH JOIN OUTER       |                        |  398K |  302 |
    |   7 | VIEW                  | ALL_TABLES             |  342K |  276 |
    |  29 | VIEW                  | ALL_MVIEWS             |    51 |   20 |
    |* 50 | VIEW PUSHED PREDICATE | ALL_TAB_COMMENTS       |  2043 |      |
    |* 66 | VIEW PUSHED PREDICATE | ALL_EXTERNAL_TABLES    | 1777K |      |
    |* 80 | VIEW PUSHED PREDICATE | ALL_EXTERNAL_LOCATIONS | 1744K |      |
    |* 96 | VIEW                  | ALL_PART_TABLES        |  852K |      |
    -------------------------------------------------------------------------------

    Have a look at the cost column. 10g's overall query cost is 939, and 9i is 55,000,000,000 (or more precisely, 55,496,472,769). It's also having to process far more data. What on earth could be causing this huge difference in query cost? After trawling through the '10g New Features' documentation, we found item 1.9.2.21. Before 10g, Oracle advised that you do not collect statistics on data dictionary objects. From 10g, it advised that you do collect statistics on the data dictionary; for our queries, Oracle therefore knows what sort of data is in the dictionary tables, and so can generate an efficient execution plan.
    On 9i, no statistics are present on the system tables, so Oracle has to use the Rule Based Optimizer, which turns most LEFT JOINs into nested loops. If we force 9i to use hash joins, like 10g, we get a much better plan:

    -------------------------------------------------------------------------------
    | Id  | Operation        | Name                   | Bytes | Cost |
    -------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT |                        | 7587K | 3704 |
    |   1 | SORT ORDER BY    |                        | 7587K | 3704 |
    |*  2 | HASH JOIN OUTER  |                        | 7587K |  822 |
    |*  3 | HASH JOIN OUTER  |                        | 5262K |  616 |
    |*  4 | HASH JOIN OUTER  |                        | 2980K |  465 |
    |*  5 | HASH JOIN OUTER  |                        |  710K |  432 |
    |*  6 | HASH JOIN OUTER  |                        |  398K |  302 |
    |   7 | VIEW             | ALL_TABLES             |  342K |  276 |
    |  29 | VIEW             | ALL_MVIEWS             |    51 |   20 |
    |  50 | VIEW             | ALL_PART_TABLES        |  852K |  104 |
    |  78 | VIEW             | ALL_TAB_COMMENTS       |  2043 |   14 |
    |  93 | VIEW             | ALL_EXTERNAL_LOCATIONS | 1744K |   31 |
    | 106 | VIEW             | ALL_EXTERNAL_TABLES    | 1777K |   28 |
    -------------------------------------------------------------------------------

    That's much more like it. This drops the execution time down to 24 seconds. Not as good as 10g, but still an improvement. There are still several problems with this, however. 10g introduced a new join method - a right outer hash join (used in the first execution plan). The 9i query optimizer doesn't have this option available, so forcing a hash join means it has to hash the ALL_TABLES table, and furthermore re-hash it for every hash join in the execution plan; this could be thousands and thousands of rows. And although forcing hash joins somewhat alleviates this problem on our test systems, there's no guarantee that this will improve the execution time on customers' systems; it may even increase the time it takes (say, if all their tables are partitioned, or they've got a lot of materialized views). Ideally, we would want a solution that provides a speedup whatever the input.

    To try and get some ideas, we asked some Oracle performance specialists to see if they had any ideas or tips. Their recommendation was to add a hidden hook into the product that allowed users to specify their own query hints, or even rewrite the queries entirely. However, we would prefer not to take that approach; as well as requiring a lot of new infrastructure and a rewrite of the population code, it would have meant that any users of 9i would have to spend some time optimizing it to get it working on their system before they could use the product. Another approach was needed.

    All our population queries have a very specific pattern - a base table provides most of the information we need (ALL_TABLES for tables, or ALL_TAB_COLS for columns) and we do a left join to extra subsidiary tables that fill in gaps (for instance, ALL_PART_TABLES for partition information). All the left joins use the same set of columns to join on (typically the object owner and name), so we could re-use the hash information for each join, rather than re-hashing the same columns for every join. To allow us to do this, along with various other performance improvements that could be done for the specific query pattern we were using, we read all the tables individually and do a hash join on the client (sketched below). Fortunately, this 'pure' algorithmic problem is the kind that can be very well optimized for expected real-world situations; as well as storing row data we're not using in the hash key on disk, we use very specific memory-efficient data structures to store all the information we need.

    This allows us to achieve a database population time that is as fast as on 10g, and even (in some situations) slightly faster, with a memory overhead of roughly 150 bytes per row of data in the result set (for a schema with 10,000 tables, that means an extra 1.4MB of memory used during population). Next: fun with the 9i dictionary views.
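
    Here is a minimal sketch of that client-side hash join (the types and names are illustrative, not SCfO's real ones): the shared join key (owner, name) is hashed once for the base table, and every subsidiary dictionary view just probes that one lookup, instead of the 9i optimizer re-hashing ALL_TABLES per join.

    // Base rows come from ALL_TABLES; subsidiary rows from e.g. ALL_PART_TABLES.
    static void JoinOnClient(List<TableRow> allTables, List<PartRow> allPartTables)
    {
        // Hash ALL_TABLES once, keyed on owner + name.
        var byKey = new Dictionary<string, TableRow>();
        foreach (var t in allTables)
            byKey[t.Owner + "." + t.Name] = t;

        // Left join: matched rows get their gaps filled in,
        // unmatched base rows simply keep their defaults.
        foreach (var p in allPartTables)
        {
            TableRow match;
            if (byKey.TryGetValue(p.Owner + "." + p.Name, out match))
                match.PartitionInfo = p;
        }
    }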

    Read the article

  • Reusing elements in StandardStyles.xaml

    - by nmarun
    In one of my previous blogs (second point), I mentioned not to modify StandardStyles.xaml, but instead to create your own resource dictionary. I also mentioned how to declare your custom styles dictionary in the App.xaml file. If you want to reference or build upon an existing style in the StandardStyles.xaml file, do the following:

    1. Remove the entry for StandardStyles.xaml from the App.xaml file
    2. Add your custom resource dictionary to the App.xaml file

    1: <!-- App.xaml -->
    2: <...(read more)

    Read the article

  • writing NSDictionary to plist

    - by ADude
    Hi, I'm trying to write an NSDictionary to a plist, but when I open the plist no data has been written to it. From the log my path looks correct and my code is pretty standard. Any ideas?

    NSArray *keys = [NSArray arrayWithObjects:@"key1", @"key2", @"key3", nil];
    NSArray *objects = [NSArray arrayWithObjects:@"value1", @"value2", @"value3", nil];
    NSDictionary *dictionary = [NSDictionary dictionaryWithObjects:objects forKeys:keys];
    for (id key in dictionary) {
        NSLog(@"key: %@, value: %@", key, [dictionary objectForKey:key]);
    }
    NSString *path = [[NSBundle mainBundle] pathForResource:@"FormData" ofType:@"plist"];
    NSLog(@"path:%@", path);
    [dictionary writeToFile:path atomically:YES];

    Read the article

  • Microsoft SyncFramework - Sync different tables into one

    - by evnu
    Hello, we are trying to get the Microsoft Sync Framework running in our application to synchronize an Oracle DB with a mobile device.

    Problem: The queries that we need to gather the data on the Oracle DB take a lot of time (and we haven't found a way to speed them up yet), so we are trying to split them up into as many portions as possible. One big part of the problem is that we need different information out of one big table, which bloats a query if combined. Unfortunately, the Sync Framework allows only one TableAdapter per SyncTable. Now this is a problem for our application: if we were able to use more than one TableAdapter per SyncTable, we could easily spread the queries in a more efficient way. Using one query per table which combines all the needed data takes way too much time.

    Ideas: I thought of creating a different TableAdapter for each of the required queries and then merging the resulting datasets afterwards (preferably on the server). This seems to work, but is a rather awkward solution. Does one of you know a better solution? Or do you have some ideas that could help? Thanks in advance, evnu

    EDIT: So, I implemented the merge solution. If you are interested, take a look at the following code. I'll give more details if there are questions.

    <WebMethod()> _
    Public Function GetChanges(ByVal groupMetadata As SyncGroupMetadata, ByVal syncSession As SyncSession) As SyncContext
        Dim stream As MemoryStream
        Dim format As BinaryFormatter = New BinaryFormatter
        Dim anchors As Dictionary(Of String, Byte())

        ' keep track of the tables that will be updated
        Dim addTables As Dictionary(Of String, List(Of SyncTableMetadata)) = New Dictionary(Of String, List(Of SyncTableMetadata))

        ' list of all present anchors
        Dim allAnchors As Dictionary(Of String, Byte()) = New Dictionary(Of String, Byte())

        ' fill allAnchors - deserialize all given anchors
        For Each Table As SyncTableMetadata In groupMetadata.TablesMetadata
            If Table.LastReceivedAnchor Is Nothing Or Table.LastReceivedAnchor.IsNull Then Continue For
            stream = New MemoryStream(Table.LastReceivedAnchor.Anchor)
            anchors = format.Deserialize(stream)
            For Each item As KeyValuePair(Of String, Byte()) In anchors
                allAnchors.Add(item.Key, item.Value)
            Next
            stream.Dispose()
        Next

        For Each Table As SyncTableMetadata In groupMetadata.TablesMetadata
            If allAnchors.ContainsKey(Table.TableName) Then
                Table.LastReceivedAnchor.Anchor = allAnchors(Table.TableName)
            End If
            Dim addSyncTables As List(Of SyncTableMetadata)
            If syncSession.SyncParameters.Contains(Table.TableName) Then
                Dim tableNames() As String = syncSession.SyncParameters(Table.TableName).Value.ToString.Split(":")
                addSyncTables = New List(Of SyncTableMetadata)
                For Each tableName As String In tableNames
                    Dim newSynctable As SyncTableMetadata = New SyncTableMetadata
                    newSynctable.TableName = tableName
                    If allAnchors.ContainsKey(tableName) Then
                        Dim anker As SyncAnchor = New SyncAnchor(allAnchors(tableName))
                        newSynctable.LastReceivedAnchor = anker
                    Else
                        newSynctable.LastReceivedAnchor = Nothing
                    End If
                    newSynctable.SyncDirection = Table.SyncDirection
                    addSyncTables.Add(newSynctable)
                Next
                addTables.Add(Table.TableName, addSyncTables)
            End If
        Next

        ' add the newly created synctables
        For Each item As KeyValuePair(Of String, List(Of SyncTableMetadata)) In addTables
            For Each Table As SyncTableMetadata In item.Value
                groupMetadata.TablesMetadata.Add(Table)
            Next
        Next

        ' fire queries
        Dim context As SyncContext = servSyncProvider.GetChanges(groupMetadata, syncSession)

        ' merge resulting datasets
        For Each item As KeyValuePair(Of String, List(Of SyncTableMetadata)) In addTables
            For Each Table As SyncTableMetadata In item.Value
                If context.DataSet.Tables.Contains(Table.TableName) Then
                    If Not context.DataSet.Tables.Contains(item.Key) Then
                        Dim tmp As DataTable = context.DataSet.Tables(Table.TableName).Copy
                        tmp.TableName = item.Key
                        context.DataSet.Tables.Add(tmp)
                    Else
                        context.DataSet.Tables(item.Key).Merge(context.DataSet.Tables(Table.TableName))
                        context.DataSet.Tables.Remove(Table.TableName)
                    End If
                End If
            Next
        Next

        ' create new anchors
        Dim allAnchorsDict As Dictionary(Of String, Byte()) = New Dictionary(Of String, Byte())
        For Each Table As SyncTableMetadata In groupMetadata.TablesMetadata
            allAnchorsDict.Add(Table.TableName, context.NewAnchor.Anchor)
        Next
        stream = New MemoryStream
        format.Serialize(stream, allAnchorsDict)
        context.NewAnchor.Anchor = stream.ToArray
        stream.Dispose()

        Return context
    End Function

    Read the article

  • Creating Dynamic Objects

    - by Ramesh Durai
    How do I dynamically create objects?

    string[] columnNames = { "EmpName", "EmpID", "PhoneNo" };
    List<string[]> columnValues = new List<string[]>();
    for (int i = 0; i < 10; i++)
    {
        columnValues.Add(new[] { "Ramesh", "12345", "12345" });
    }

    List<Dictionary<string, object>> testData = new List<Dictionary<string, object>>();
    foreach (string[] columnValue in columnValues)
    {
        Dictionary<string, object> data = new Dictionary<string, object>();
        for (int j = 0; j < columnNames.Length; j++)
        {
            // Note: index into the current row (columnValue), not the outer
            // list (columnValues), which would store a whole string[] as the value.
            data.Add(columnNames[j], columnValue[j]);
        }
        testData.Add(data);
    }

    Imaginary class (the class is not available in code):

    class Employee
    {
        string EmpName { get; set; }
        string EmpID { get; set; }
        string PhoneNo { get; set; }
    }

    Note: property/column names are dynamic. Now I want to convert the List<Dictionary<string, object>> to a class of type List<object>, i.e. List<Employee>. Is it possible? Suggestions please.
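
    If the target type is known at compile time, one hedged sketch is to materialize each dictionary via reflection (for property names that only exist at runtime you would need Reflection.Emit or ExpandoObject instead). The properties must be public for GetProperty to find them:

    static List<T> ToTypedList<T>(List<Dictionary<string, object>> rows) where T : new()
    {
        var result = new List<T>();
        foreach (var row in rows)
        {
            var item = new T();
            foreach (var pair in row)
            {
                // Look the property up by its column name and copy the value across.
                var prop = typeof(T).GetProperty(pair.Key);
                if (prop != null && prop.CanWrite)
                    prop.SetValue(item, pair.Value, null);
            }
            result.Add(item);
        }
        return result;
    }

    // List<Employee> employees = ToTypedList<Employee>(testData);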

    Read the article

  • DataContractJsonSerializer produces list of hashes instead of hash

    - by Jacques
    I would expect a Dictionary object of the form:

    Dictionary<string, string> dict = new Dictionary<string, string>
    {
        { "blah", "bob" },
        { "blahagain", "bob" }
    };

    to serialize into JSON of the form:

    { "blah": "bob", "blahagain": "bob" }

    NOT:

    [ { "key": "blah", "value": "bob" }, { "key": "blahagain", "value": "bob" } ]

    What is the reason for what appears to be a monstrosity of a generic attempt at serializing collections? The DataContractJsonSerializer uses the ISerializable interface to produce this thing. It seems to me as though somebody has taken the XML output from ISerializable and mangled this thing out of it. Is there a way to override the default serialization used by .NET here? Could I just derive from Dictionary and override the serialization methods? Posting to hear of any caveats or suggestions people might have.

    Read the article

  • Smart way to find the corresponding nullable type?

    - by Marc Wittke
    How could I avoid this dictionary (or create it dynamically)?

    Dictionary<Type, Type> CorrespondingNullableType = new Dictionary<Type, Type>
    {
        { typeof(bool), typeof(bool?) },
        { typeof(byte), typeof(byte?) },
        { typeof(sbyte), typeof(sbyte?) },
        { typeof(char), typeof(char?) },
        { typeof(decimal), typeof(decimal?) },
        { typeof(double), typeof(double?) },
        { typeof(float), typeof(float?) },
        { typeof(int), typeof(int?) },
        { typeof(uint), typeof(uint?) },
        { typeof(long), typeof(long?) },
        { typeof(ulong), typeof(ulong?) },
        { typeof(short), typeof(short?) },
        { typeof(ushort), typeof(ushort?) },
        { typeof(Guid), typeof(Guid?) },
    };
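
    One hedged alternative sketch: the nullable counterpart of any value type can be constructed at runtime, which removes the hand-written table entirely.

    static Type ToNullable(Type type)
    {
        // Reference types and already-nullable types stay as they are.
        if (!type.IsValueType || Nullable.GetUnderlyingType(type) != null)
            return type;
        return typeof(Nullable<>).MakeGenericType(type);
    }

    // ToNullable(typeof(int))  == typeof(int?)
    // ToNullable(typeof(Guid)) == typeof(Guid?)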

    Read the article

  • Best and simple data structure

    - by anshu
    I am trying to create the below matrix in my VB.NET application so that during processing I can get the match scores for the alphabets. For example: what is the match for A and N? I will look in my inbuilt matrix and return -2. Similarly, what is the match for P and L? I will look in my inbuilt matrix and return -3. Please suggest how to go about it. I was trying to use a nested dictionary like this:

    Dim myNestedDictionary As New Dictionary(Of String, Dictionary(Of String, Integer))()
    Dim lTempDict As New Dictionary(Of String, Integer)
    lTempDict.Add("A", 4)
    myNestedDictionary.Add("A", lTempDict)

    The other way could be to read the matrix from a text-based file and then fill a two-dimensional array. Thanks.

    Read the article

  • creating Object equality "HashMap" in ActionScript3 as java HashMap

    - by jason
    const jonny1 : Person = new Person("jonny", 26);
    const jonny2 : Person = new Person("jonny", 26);
    const table : Dictionary = new Dictionary();
    table[jonny1] = "That's me";
    trace(table[jonny1]); // traces: "That's me"
    trace(table[jonny2]); // traces: undefined

    But I want to use Dictionary like this:

    trace(table[jonny2]); // traces: "That's me"

    In a word, I want to implement a data structure that works like Java's HashMap, with equality-based rather than identity-based keys.

    Read the article

  • Sockets server design advice

    - by Rob
    We are writing a socket server in C# and need some advice on the design.

    Background: clients (from mobile devices) connect to our server app and we leave their socket open so we can send data back down to them whenever we need to. The amount of data varies, but we generally send/receive data from each client every few seconds, so it's quite intensive. The number of simultaneous connections can range from 50 to 500 (and more in the future).

    We have already written a server app using async sockets and it works. However, we've come across some stumbling blocks and we need to make sure that what we're doing is correct. We have a collection which holds our client states (we have no socket/connection pool at the moment – should we?). Each time a client connects we create a socket and then wait for them to send us some data; in ReceiveCallBack we add their client-state object to our connections dictionary (once we have verified who they are). When a client object then signs off we shut down their socket and then close it, as well as removing them from our collection of clients. Presumably everything happens in the right order, and everything works as expected. However, almost every day it stops accepting connections, or so we think – either that, or it connects but doesn't actually do anything past that – and we can't work out why it's just stopping.

    There are a few things that we're unsure about:

    1) Should we be creating some kind of connection pool, as opposed to just a dictionary of client sockets?

    2) What happens to the sockets that connect but then don't get added to our dictionary? They just linger around in memory doing nothing. Should we create another dictionary that holds the sockets as soon as they are created?

    3) What's the best way of finding out whether clients are no longer connected? We've read about many methods but we're not sure of the best one to use: send data or read data, and if so, how?

    4) If we loop through the connections dictionary to check for disposed clients, should we be locking the dictionary? If so, how does this affect other client objects trying to use it at the same time – will it throw an error or just wait?

    5) We often get an ObjectDisposedException within the ReceiveCallBack method at random times. Does this mean we are safe to remove that socket from the collection?

    We can't seem to find any production-type examples which show any of this working. Any advice would be greatly appreciated.

    Read the article

  • Improve Efficiency for This Text Processing Code

    - by johnv
    I am writing a program that counts the number of words in a text file which is already in lowercase and separated by spaces. I want to use a dictionary and only count a word IF it's within the dictionary. The problem is that the dictionary is quite large (~100,000 words) and each text document also has ~50,000 words. As such, the code that I wrote below gets very slow (it takes about 15 sec to process one document on a quad i7 machine). I'm wondering if there's something wrong with my coding and if the efficiency of the program can be improved. Thanks so much for your help. Code below:

    public static string WordCount(string countInput)
    {
        string[] keywords = ReadDic(); /* read dictionary txt file */

        /* then read the main text file */
        Dictionary<string, int> dict = ReadFile(countInput).Split(' ')
            .Select(c => c)
            .Where(c => keywords.Contains(c))
            .GroupBy(c => c)
            .Select(g => new { word = g.Key, count = g.Count() })
            .OrderBy(g => g.word)
            .ToDictionary(d => d.word, d => d.count);

        int s = dict.Sum(e => e.Value);
        string k = s.ToString();
        return k;
    }
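
    A hedged sketch of the usual fix: keywords.Contains(c) on a string[] is a linear scan (up to ~100,000 comparisons per word), so the filter dominates the runtime. Loading the dictionary into a HashSet<string> makes each membership test O(1); everything else, including the posted ReadDic/ReadFile helpers, stays the same.

    var keywordSet = new HashSet<string>(ReadDic());

    Dictionary<string, int> dict = ReadFile(countInput).Split(' ')
        .Where(c => keywordSet.Contains(c))   // O(1) per word instead of O(n)
        .GroupBy(c => c)
        .OrderBy(g => g.Key)
        .ToDictionary(g => g.Key, g => g.Count());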

    Read the article

  • build a bst as an array using recursion?

    - by Jack B
    String[] dictionary = new String[dictSize]; // array of strings from dictionary
    String[] tree = new String[3 * dictSize];   // array for the tree

    void makeBST() {
        recMakeBST(0, dictionary.length - 1);
    } // makeBST()

    int a = 0;

    void recMakeBST(int low, int high) {
        if (high - low == 0) {
            return;
        } else {
            int mid = (high - low) / 2;
            tree[a] = dictionary[mid];
            a = a + 1;
            recMakeBST(low, mid - 1);
            a = a + 1;
            recMakeBST(mid + 1, high);
        }
    }
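
    For comparison, a hedged C# sketch of the standard way to lay a sorted array out as a BST in an array: drop the running counter, place each subtree root at index i and recurse into slots 2*i+1 and 2*i+2. Note that mid must be offset by low, and the base case must not skip single-element ranges – two things the code above trips over.

    // Assumes dictionary is sorted and tree is large enough for the heap layout.
    static void RecMakeBST(string[] dictionary, string[] tree, int low, int high, int i)
    {
        if (low > high) return;
        int mid = low + (high - low) / 2; // midpoint of the current range
        tree[i] = dictionary[mid];
        RecMakeBST(dictionary, tree, low, mid - 1, 2 * i + 1);  // left child
        RecMakeBST(dictionary, tree, mid + 1, high, 2 * i + 2); // right child
    }

    // call: RecMakeBST(dictionary, tree, 0, dictionary.Length - 1, 0);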

    Read the article

  • C# - How to override GetHashCode with Lists in object

    - by Christian
    Hi, I am trying to create a "KeySet" to modify UIElement behaviour. The idea is to trigger a special function if, e.g., the user clicks on an element while holding A, or Ctrl+A. My approach so far: first, let's create a container for all possible modifiers. If I only allowed a single key, it would be no problem – I could use a simple Dictionary, with

    Dictionary<Keys, Action> _specialActionList

    If the dictionary is empty, use the default action. If there are entries, check which action to use depending on the currently pressed keys. And if I wasn't greedy, that would be it... Now of course, I want more. I want to allow multiple keys or modifiers. So I created a wrapper class which can be used as a key in my dictionary. There is an obvious problem when using a more complex class: currently, two different instances would create two different keys, so the lookup would never find my function (see code to understand – really obvious).

    Now I checked this post: http://stackoverflow.com/questions/638761/c-gethashcode-override-of-object-containing-generic-array which helped a little. But my question is: is my basic design for the class OK? Should I use a HashSet to store the modifiers and normal keyboard keys (instead of Lists)? And if so, how would the GetHashCode function look? I know it's a lot of code to write (boring hash functions); some tips would be sufficient to get me started. Will post tryouts here...

    And here comes the code so far – the Test method obviously fails:

    public class KeyModifierSet
    {
        private readonly List<Key> _keys = new List<Key>();
        private readonly List<ModifierKeys> _modifierKeys = new List<ModifierKeys>();

        private static readonly Dictionary<KeyModifierSet, Action> _testDict =
            new Dictionary<KeyModifierSet, Action>();

        public static void Test()
        {
            _testDict.Add(new KeyModifierSet(Key.A), () => Debug.WriteLine("nothing"));
            if (!_testDict.ContainsKey(new KeyModifierSet(Key.A)))
                throw new Exception("Not done yet, help :-)");
        }

        public KeyModifierSet(IEnumerable<Key> keys, IEnumerable<ModifierKeys> modifierKeys)
        {
            foreach (var key in keys) _keys.Add(key);
            foreach (var key in modifierKeys) _modifierKeys.Add(key);
        }

        public KeyModifierSet(Key key, ModifierKeys modifierKey)
        {
            _keys.Add(key);
            _modifierKeys.Add(modifierKey);
        }

        public KeyModifierSet(Key key)
        {
            _keys.Add(key);
        }
    }
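
    A hedged sketch of the missing members (assuming a set never contains duplicate entries): combine the element hashes with XOR, which is commutative and therefore order-insensitive, and give Equals set semantics to match. Dictionary will then treat two sets with the same keys and modifiers as the same key:

    public override int GetHashCode()
    {
        int hash = 17;
        foreach (var key in _keys) hash ^= key.GetHashCode();
        foreach (var mod in _modifierKeys) hash ^= mod.GetHashCode();
        return hash;
    }

    public override bool Equals(object obj)
    {
        var other = obj as KeyModifierSet;
        if (other == null) return false;
        return new HashSet<Key>(_keys).SetEquals(other._keys)
            && new HashSet<ModifierKeys>(_modifierKeys).SetEquals(other._modifierKeys);
    }

    Storing the members as HashSet<Key>/HashSet<ModifierKeys> from the start, as the post suggests, would make both methods simpler and guard the no-duplicates assumption.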

    Read the article

  • Unpacking Argument Lists and Instantiating WTForms objects from web.py

    - by Morris Cornell-Morgan
    After a bit of searching, I've found that it's possible to instantiate a WTForms object in web.py using the following code: form = my_form(**web.input()) web.input() returns a "dictionary-like" web.storage object, but without the double asterisks WTForms will raise an exception: TypeError: formdata should be a multidict-type wrapper that supports the 'getlist' method From the Python documentation I understand that the two asterisks are used to unpack a dictionary of named arguments. That said, I'm still a bit confused about exactly what is going on. What makes the web.storage object returned by web.input() "dictionary-like" enough that it can be unpacked by ** but not "dictionary-like" enough that it can be passed as-is to the WTForms constructor? I know that this is an extremely basic question, but any advice to help a novice programmer would be greatly appreciated!

    Read the article

  • How to Extract Properties for Refactoring

    - by Ngu Soon Hui
    I have this property:

    public List<PointK> LineList { get; set; }

    where PointK consists of the following structure:

    string Mark { get; set; }
    double X { get; set; }
    double Y { get; set; }

    Now, I have the following code:

    private static Dictionary<string, double> GetY(List<PointK> points)
    {
        var invertedDictResult = new Dictionary<string, double>();
        foreach (var point in points)
        {
            if (!invertedDictResult.ContainsKey(point.Mark))
            {
                invertedDictResult.Add(point.Mark, Math.Round(point.Y, 4));
            }
        }
        return invertedDictResult;
    }

    private static Dictionary<string, double> GetX(List<PointK> points)
    {
        var invertedDictResult = new Dictionary<string, double>();
        foreach (var point in points)
        {
            if (!invertedDictResult.ContainsKey(point.Mark))
            {
                invertedDictResult.Add(point.Mark, Math.Round(point.X, 4));
            }
        }
        return invertedDictResult;
    }

    How do I restructure the above code?
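
    The two methods differ only in which coordinate they read, so one hedged restructuring is to extract that difference into a selector delegate:

    private static Dictionary<string, double> GetCoordinate(
        List<PointK> points, Func<PointK, double> selector)
    {
        var invertedDictResult = new Dictionary<string, double>();
        foreach (var point in points)
        {
            if (!invertedDictResult.ContainsKey(point.Mark))
            {
                invertedDictResult.Add(point.Mark, Math.Round(selector(point), 4));
            }
        }
        return invertedDictResult;
    }

    // GetY(points) becomes GetCoordinate(points, p => p.Y);
    // GetX(points) becomes GetCoordinate(points, p => p.X);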

    Read the article

  • Efficient method of getting all plist arrays into one array?

    - by cannyboy
    If I have a plist which is structured like this:

    Root                  Array
      Item 0              Dictionary
        City              String      New York
        People            Array
          Item 0          String      Steve
          Item 1          String      Paul
          Item 2          String      Fabio
          Item 3          String      David
          Item 4          String      Penny
      Item 1              Dictionary
        City              String      London
        People            Array
          Item 0          String      Linda
          Item 1          String      Rachel
          Item 2          String      Jessica
          Item 3          String      Lou
      Item 2              Dictionary
        City              String      Barcelona
        People            Array
          Item 0          String      Edward
          Item 1          String      Juan
          Item 2          String      Maria

    then what is the most efficient way of getting all the names of the people into one big NSArray?

    Read the article

  • Algorithms to find longest common prefix in a sliding window.

    - by nn
    Hi, I have written a Lempel-Ziv compressor and decompressor. I am seeking to improve the time taken to search the dictionary for a phrase. I have considered K-M-P and Boyer-Moore, but I think an algorithm that adapts to changes in the dictionary would be faster. I've been reading that binary search trees (AVL or splay) improve the performance of compression time considerably. What I fail to understand is how to bootstrap the binary search tree and insert/remove data. I'm not actually quite sure of the significance of each node in the binary search tree. I am searching for phrases, so will each character be considered a node? Also, how and what is inserted/removed from the search tree as new data enters the dictionary and old data is removed? The binary search tree sounds like a good payoff since it can adapt to the dictionary, but I'm just not quite sure how it's used.

    Read the article

  • Dynamic object property populator (without reflection)

    - by grenade
    I want to populate an object's properties without using reflection, in a manner similar to the DynamicBuilder on CodeProject. The CodeProject example is tailored for populating entities using a DataReader or DataRecord. I use this in several DALs to good effect. Now I want to modify it to use a dictionary or other data-agnostic object so that I can use it in non-DAL code – places I currently use reflection. I know almost nothing about OpCodes and IL. I just know that it works well and is faster than reflection. I have tried to modify the CodeProject example, and because of my ignorance with IL, I have gotten stuck on two lines. One of them deals with DBNulls, and I'm pretty sure I can just lose it, but I don't know if the lines preceding and following it are related and which of them will also need to go. The other, I think, is the one that pulled the value out of the data record and now needs to pull it out of the dictionary. I think I can replace the "getValueMethod" with my "property.Value", but I'm not sure. I'm open to alternative/better ways of skinning this cat too. Here's the code so far (the commented-out lines are the ones I'm stuck on):

    using System;
    using System.Collections.Generic;
    using System.Reflection;
    using System.Reflection.Emit;

    public class Populator<T>
    {
        private delegate T Load(Dictionary<string, object> properties);

        private Load _handler;

        private Populator() { }

        public T Build(Dictionary<string, object> properties)
        {
            return _handler(properties);
        }

        public static Populator<T> CreateBuilder(Dictionary<string, object> properties)
        {
            //private static readonly MethodInfo getValueMethod = typeof(IDataRecord).GetMethod("get_Item", new[] { typeof(int) });
            //private static readonly MethodInfo isDBNullMethod = typeof(IDataRecord).GetMethod("IsDBNull", new[] { typeof(int) });
            Populator<T> dynamicBuilder = new Populator<T>();
            DynamicMethod method = new DynamicMethod("Create", typeof(T),
                new[] { typeof(Dictionary<string, object>) }, typeof(T), true);
            ILGenerator generator = method.GetILGenerator();
            LocalBuilder result = generator.DeclareLocal(typeof(T));
            generator.Emit(OpCodes.Newobj, typeof(T).GetConstructor(Type.EmptyTypes));
            generator.Emit(OpCodes.Stloc, result);
            int i = 0;
            foreach (var property in properties)
            {
                PropertyInfo propertyInfo = typeof(T).GetProperty(property.Key,
                    BindingFlags.Public | BindingFlags.Instance | BindingFlags.IgnoreCase |
                    BindingFlags.FlattenHierarchy | BindingFlags.Default);
                Label endIfLabel = generator.DefineLabel();
                if (propertyInfo != null && propertyInfo.GetSetMethod() != null)
                {
                    generator.Emit(OpCodes.Ldarg_0);
                    generator.Emit(OpCodes.Ldc_I4, i);
                    //generator.Emit(OpCodes.Callvirt, isDBNullMethod);
                    generator.Emit(OpCodes.Brtrue, endIfLabel);
                    generator.Emit(OpCodes.Ldloc, result);
                    generator.Emit(OpCodes.Ldarg_0);
                    generator.Emit(OpCodes.Ldc_I4, i);
                    //generator.Emit(OpCodes.Callvirt, getValueMethod);
                    generator.Emit(OpCodes.Unbox_Any, property.Value.GetType());
                    generator.Emit(OpCodes.Callvirt, propertyInfo.GetSetMethod());
                    generator.MarkLabel(endIfLabel);
                }
                i++;
            }
            generator.Emit(OpCodes.Ldloc, result);
            generator.Emit(OpCodes.Ret);
            dynamicBuilder._handler = (Load)method.CreateDelegate(typeof(Load));
            return dynamicBuilder;
        }
    }

    EDIT: Using Marc Gravell's PropertyDescriptor implementation (with HyperDescriptor) the code is simplified a hundredfold. I now have the following test:

    using System;
    using System.Collections.Generic;
    using System.ComponentModel;
    using Hyper.ComponentModel;

    namespace Test
    {
        class Person
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        class Program
        {
            static void Main()
            {
                HyperTypeDescriptionProvider.Add(typeof(Person));
                var properties = new Dictionary<string, object>
                {
                    { "Id", 10 },
                    { "Name", "Fred Flintstone" }
                };
                Person person = new Person();
                DynamicUpdate(person, properties);
                Console.WriteLine("Id: {0}; Name: {1}", person.Id, person.Name);
                Console.ReadKey();
            }

            public static void DynamicUpdate<T>(T entity, Dictionary<string, object> properties)
            {
                foreach (PropertyDescriptor propertyDescriptor in TypeDescriptor.GetProperties(typeof(T)))
                    if (properties.ContainsKey(propertyDescriptor.Name))
                        propertyDescriptor.SetValue(entity, properties[propertyDescriptor.Name]);
            }
        }
    }

    Any comments on performance considerations for both TypeDescriptor.GetProperties() and PropertyDescriptor.SetValue() are welcome...

    Read the article

  • Building an external list while filtering in LINQ

    - by Khnle
    I have an array of input strings that contains either email addresses or account names in the form of domain\account. I would like to build a List of strings that contains only email addresses. If an element in the input array is of the form domain\account, I will perform a lookup in the dictionary. If the key is found in the dictionary, that value is the email address. If not found, it won't get added to the result list. The code below makes the above description clear:

    private bool where(string input, Dictionary<string, string> dict)
    {
        if (input.Contains("@"))
        {
            return true;
        }
        else
        {
            try
            {
                string value = dict[input];
                return true;
            }
            catch (KeyNotFoundException)
            {
                return false;
            }
        }
    }

    private string select(string input, Dictionary<string, string> dict)
    {
        if (input.Contains("@"))
        {
            return input;
        }
        else
        {
            try
            {
                string value = dict[input];
                return value;
            }
            catch (KeyNotFoundException)
            {
                return null;
            }
        }
    }

    public void run()
    {
        Dictionary<string, string> dict = new Dictionary<string, string>()
        {
            { "gmail\\nameless", "[email protected]" }
        };
        string[] s = { "[email protected]", "gmail\\nameless", "gmail\\unknown" };
        var q = s.Where(p => where(p, dict)).Select(p => select(p, dict));
        List<string> resultList = q.ToList<string>();
    }

    While the above code works (I hope I don't have any typos here), there are 2 problems that I do not like about it:

    1. The code in where() and select() seems redundant/repetitive.
    2. It takes 2 passes. The second pass converts from the query expression to a List.

    So I would like to add to the List resultList directly in the where() method. It seems like I should be able to do so. Here's the code:

    private bool where(string input, Dictionary<string, string> dict, List<string> resultList)
    {
        if (input.Contains("@"))
        {
            resultList.Add(input); // note the difference from above
            return true;
        }
        else
        {
            try
            {
                string value = dict[input];
                resultList.Add(value); // note the difference from above
                return true;
            }
            catch (KeyNotFoundException)
            {
                return false;
            }
        }
    }

    Then my LINQ expression can be a single statement:

    List<string> resultList = new List<string>();
    s.Where(p => where(p, dict, resultList));

    or

    var q = s.Where(p => where(p, dict, resultList)); // do nothing with q afterward

    which seems like perfectly legal C# LINQ. The result: sometimes it works and sometimes it doesn't. So why doesn't my code work reliably, and how can I make it do so?
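
    The unreliability comes from deferred execution: Where produces a lazy query, so the where() side effects only run if something actually enumerates q. A hedged sketch that avoids the duplication, the extra pass, and the side effects all at once:

    List<string> resultList = s
        .Select(p =>
        {
            if (p.Contains("@")) return p; // already an email address
            string email;
            return dict.TryGetValue(p, out email) ? email : null;
        })
        .Where(e => e != null)             // drop unknown accounts
        .ToList();                         // the single enumeration happens here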

    Read the article

  • How to save timers with connection to OrderId?

    - by Adrian Serafin
    Hi! I have a system where clients can make orders. After making an order they have 60 minutes to pay for it before it is deleted. On the server side, when an order is made I create a timer and set the elapsed time to 60 minutes:

    System.Timers.Timer timer = new System.Timers.Timer(1000 * 60 * 60);
    timer.AutoReset = false;
    timer.Elapsed += HandleElapsed;
    timer.Start();

    Because I want to be able to dispose of the timer if the client decides to pay, and I want to be able to cancel the order if he doesn't, I keep two dictionaries:

    Dictionary<int, Timer> _orderTimer;
    Dictionary<Timer, int> _timerOrder;

    Then when the client pays I can access the Timer by orderId in O(1) thanks to the _orderTimer dictionary, and when the time elapses I can access the order in O(1) thanks to the _timerOrder dictionary. My question is: is this a good approach, assuming that the max number of rows I will have to keep in the dictionaries at any one moment is 50,000? Maybe it would be better to derive from the Timer class, add a property called OrderId, keep them all in a List and search for the order/timer using LINQ? Or maybe I should do this in a different way?
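
    One hedged alternative sketch (CancelOrder is a hypothetical stand-in for the order-deletion logic): let the Elapsed handler capture the order id in a closure. Then only the orderId-to-Timer dictionary is needed, and the reverse lookup disappears entirely:

    private readonly Dictionary<int, System.Timers.Timer> _orderTimer =
        new Dictionary<int, System.Timers.Timer>();

    public void StartOrderTimer(int orderId)
    {
        var timer = new System.Timers.Timer(1000 * 60 * 60) { AutoReset = false };
        timer.Elapsed += (sender, e) =>
        {
            CancelOrder(orderId); // the closure remembers which order this timer belongs to
            RemoveTimer(orderId);
        };
        _orderTimer.Add(orderId, timer);
        timer.Start();
    }

    public void RemoveTimer(int orderId) // also called when the client pays
    {
        System.Timers.Timer timer;
        if (_orderTimer.TryGetValue(orderId, out timer))
        {
            _orderTimer.Remove(orderId);
            timer.Stop();
            timer.Dispose();
        }
    }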

    Read the article
