Search Results

Search found 7420 results on 297 pages for 'generic collections'.

Page 26/297 | < Previous Page | 22 23 24 25 26 27 28 29 30 31 32 33  | Next Page >

  • Explaining Verity index and document search limits

    - by Ahmad
    At present we have a CF8 Standard Edition server, which has some limitations around Verity indexing. According to Adobe, Verity Server has the following document search limits (the limits apply to all collections registered to the Verity Server):

    - 10,000 documents for ColdFusion Developer Edition
    - 125,000 documents for ColdFusion Standard Edition
    - 250,000 documents for ColdFusion Enterprise Edition

    We have now reached a stage where the server-wide number of documents indexed exceeds 125k. However, the largest Verity collection consists of about 25k documents (and this is expected to grow), and only one collection is ever searched at a time. In my understanding, this means that I can still search an entire collection with no restrictions. Is this correct? Or does it mean that only documents that were indexed across all collections prior to reaching the limit are actually searchable? We are considering moving to CF9 Standard as a solution, and using the Solr engine, which has no such restrictions. The coldfusionjedi blog highlights some differences between Verity and Solr. However, before we commit to an upgrade I am trying to gain a clearer understanding of this. Can someone provide a clear explanation of what this limit means and how it actually affects Verity searching and indexing?

    Read the article

  • Generic Aggregation of C++ Objects by Attribute When the Attribute Name is Known Only at Runtime

    - by stretch
    I'm currently implementing a system with a number of classes representing objects such as client, business, product etc. - standard business logic. As one might expect, each class has a number of standard attributes. I have a long list of essentially identical requirements, such as:

    - the ability to retrieve all businesses whose industry is manufacturing
    - the ability to retrieve all clients based in London

    Class business has attribute sector and class client has attribute location. Clearly this is a relational problem, and in pseudo-SQL it would look something like:

        SELECT ALL business in business' WHERE sector == manufacturing

    Unfortunately, plugging into a DB is not an option. What I want to do is have a single generic aggregation function whose signature would take the form:

        vector<generic> genericAggregation(class, attribute, value);

    where class is the class of object I want to aggregate, and attribute and value are the class attribute and value of interest. In my example I've put vector<generic> as the return type, but this wouldn't work; it would probably be better to declare a vector of the relevant class type and pass it as an argument. But this isn't the main problem. How can I accept arguments in string form for class, attribute and value, and then map these in a generic object aggregation function? Since it's rude not to post code, below is a dummy program which creates a bunch of objects of imaginatively named classes. Included is a specific aggregation function which returns a vector of B objects whose A object's 'i' attribute is equal to an id specified at the command line, e.g. ./aggregations 5 returns all B's whose A object's 'i' attribute is equal to 5. See below:

        #include <iostream>
        #include <cstdlib>
        #include <cstring>
        #include <sstream>
        #include <vector>
        using namespace std;

        // First imaginatively named dummy class
        class A {
        private:
            int i;
            double d;
            string s;
        public:
            A() {}
            A(int i, double d, string s) {
                this->i = i;
                this->d = d;
                this->s = s;
            }
            ~A() {}
            int getInt() { return i; }
            double getDouble() { return d; }
            string getString() { return s; }
        };

        // Second imaginatively named dummy class
        class B {
        private:
            int i;
            double d;
            string s;
            A *a;
        public:
            B(int i, double d, string s, A *a) {
                this->i = i;
                this->d = d;
                this->s = s;
                this->a = a;
            }
            ~B() {}
            int getInt() { return i; }
            double getDouble() { return d; }
            string getString() { return s; }
            A* getA() { return a; }
        };

        // Containers for dummy class objects
        vector<A> a_vec(10);
        vector<B> b_vec; // 100

        // Util function, not important..
        string int2string(int number) {
            stringstream ss;
            ss << number;
            return ss.str();
        }

        // Example function that returns a new vector containing only B objects
        // whose A object's i attribute is equal to 'id'
        vector<B> getBbyA(int id) {
            vector<B> result;
            for (size_t i = 0; i < b_vec.size(); i++) {
                if (b_vec.at(i).getA()->getInt() == id) {
                    result.push_back(b_vec.at(i));
                }
            }
            return result;
        }

        int main(int argc, char** argv) {
            // Create some A's and B's, each B has an A...
            // Each of the 10 A's is associated with 10 B's.
            for (int i = 0; i < 10; ++i) {
                A a(i, (double)i, int2string(i));
                a_vec.at(i) = a;
                for (int j = 0; j < 10; j++) {
                    B b((i * 10) + j, (double)j, int2string(i), &a_vec.at(i));
                    b_vec.push_back(b);
                }
            }

            // Got some objects, so let's do some aggregation.
            // Call the example aggregation function to return all B objects
            // whose A object has i attribute equal to argv[1]
            vector<B> result = getBbyA(atoi(argv[1]));

            // If some B's were found print them, else don't...
            if (result.size() != 0) {
                for (size_t i = 0; i < result.size(); i++) {
                    cout << result.at(i).getInt() << " "
                         << result.at(i).getA()->getInt() << endl;
                }
            } else {
                cout << "No B's had A's with attribute i equal to " << argv[1] << endl;
            }
            return 0;
        }

    Compile with g++ -o aggregations aggregations.cpp, if you wish :) Instead of implementing a separate aggregation function (i.e. getBbyA() in the example), I'd like to have a single generic aggregation function which accounts for all possible class/attribute pairs, such that all aggregation requirements are met - and if additional attributes or aggregation requirements are added later, these will automatically be accounted for. So there are a few issues here, but the main one I'm seeking insight into is how to map a runtime argument to a class attribute. I hope I've provided enough detail to adequately describe what I'm trying to do...
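    Not from the question itself, but as a sketch of the string-to-attribute mapping being asked about: one workable shape is a per-class registry that maps attribute names to getter functions, so a single generic filter serves every class/attribute pair. The sketch below is in Java rather than C++ (std::function would play the same role there), and every name in it is invented for illustration:

        import java.util.*;
        import java.util.function.Function;

        class AttributeRegistry<T> {
            // attribute name -> getter, registered once per class
            private final Map<String, Function<T, Object>> getters = new HashMap<>();

            void register(String attribute, Function<T, Object> getter) {
                getters.put(attribute, getter);
            }

            // the single generic aggregation function: filter any list by attribute/value
            List<T> aggregate(List<T> items, String attribute, Object value) {
                Function<T, Object> getter = getters.get(attribute);
                if (getter == null)
                    throw new IllegalArgumentException("unknown attribute: " + attribute);
                List<T> result = new ArrayList<>();
                for (T item : items)
                    if (Objects.equals(getter.apply(item), value))
                        result.add(item);
                return result;
            }
        }

    Registration is one line per attribute (e.g. registry.register("sector", Business::getSector) for a hypothetical Business class), so attributes added later only need a new register call, not a new aggregation function.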

    Read the article

  • A map and set that use contiguous memory and have a reserve function

    - by edA-qa mort-ora-y
    I use several maps and sets. The lack of contiguous memory, and the high number of (de)allocations, is a performance bottleneck. I need a mainly STL-compatible map and set class which can use a contiguous block of memory for its internal objects (or multiple blocks), and which has a reserve function so that I can preallocate for expected sizes. Before I write my own I'd like to check what is available first. Is there something in Boost which does this? Does somebody know of an available implementation elsewhere? Intrusive collection types are not usable here, as the same objects need to exist in several collections. As far as I know STL memory pools are per-type, not per-instance, and such global pools are not efficient with respect to memory locality in multi-CPU/core processing. Object pools don't work, as the types will be shared between instances but their pool should not be. A hash map may be an option in some cases.

    Read the article

  • C# reference collection for storing reference types

    - by ivo s
    I'd like to implement a collection (something like List<T>) which would hold all the objects I create over the entire life span of my application, as if it were an array of pointers in C++. The idea is that when my process starts I can use a central factory to create all objects, and then periodically validate/invalidate their state. Basically I want to make sure that my process only deals with valid instances and that I don't re-fetch information I have already fetched from the database. So all my objects will basically be in one place: my collection. A cool thing I could do with this is avoid database calls for data I already have (even if I updated it after retrieval it's still up to date; of course some other process might have updated it, but that's a different concern). I don't want to call new Customer("James Thomas") again if I initialized James Thomas already some time in the past. Currently I end up with multiple copies of the same object across the appdomain, some out of sync and others in sync, and even though I deal with this using a timestamp field on the MSSQL server, I'd like to keep only one copy per customer in my appdomain (per process would be even better). I can't use regular collections like List or ArrayList, because I cannot pass parameters by their real local reference to the existing Add() methods (where I'm creating them) using ref, so that's no good, I think. So how can this be implemented, and can it be implemented at all? A 'linked list' type of class with all methods working with ref and out params is what I'm thinking of now, but it may get ugly pretty quickly. Is there another way to implement such a collection, something like RefList<T>.Add(ref T obj)? So the bottom line is: I don't want to re-create an object if I've already created it during the application's life, unless I decide to re-create it explicitly (maybe it's out of date, so I have to fetch it again from the db). Are there alternatives, maybe?
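    What the question describes is essentially the identity map pattern: one canonical instance per database key, with the factory consulting the map before constructing anything. A rough sketch of that pattern in Java (the question is about C#, and the loader function here is a stand-in for the real database fetch):

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.function.Function;

        class IdentityMap<K, V> {
            private final Map<K, V> instances = new ConcurrentHashMap<>();
            private final Function<K, V> loader; // e.g. a database fetch

            IdentityMap(Function<K, V> loader) {
                this.loader = loader;
            }

            // returns the one canonical instance per key, loading it at most once
            V get(K key) {
                return instances.computeIfAbsent(key, loader);
            }

            // explicit re-fetch for when the cached copy is known to be stale
            V refresh(K key) {
                return instances.compute(key, (k, old) -> loader.apply(k));
            }
        }

    With this shape there is no need to pass anything by ref: callers always receive the same reference for the same key, so every part of the process sees the same instance.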

    Read the article

  • Java generics and the addAll method

    - by neesh
    What is the correct type of argument to the addAll(..) method in Java collections? If I do something like this:

        Collection<HashMap<String, Object[]>> newElements = new ArrayList<HashMap<String, Object[]>>();
        // add some hashmaps to the list...
        currentList.addAll(newElements); // currentList is of type List<? extends Map<String, Object[]>>

    I understand I need to initialize both variables. However, I get a compilation error (from Eclipse):

        Multiple markers at this line
        - The method addAll(Collection<? extends capture#1-of ? extends Map<String,Object[]>>) in the type
          List<capture#1-of ? extends Map<String,Object[]>> is not applicable for the arguments
          (List<capture#2-of ? extends Map<String,Object[]>>)
        - The method addAll(Collection<? extends capture#1-of ? extends Map<String,Object[]>>) in the type
          List<capture#1-of ? extends Map<String,Object[]>> is not applicable for the arguments
          (Collection<HashMap<String,Object[]>>)

    What am I doing wrong?
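    The compiler is enforcing the producer-extends/consumer-super rule: a List<? extends Map<String, Object[]>> is a list of some unknown subtype of Map, so nothing (except null) can be proven safe to add to it. Dropping the extends wildcard on the destination, or switching it to super, makes the call compile; a minimal sketch of both fixes:

        import java.util.*;

        public class AddAllDemo {
            public static void main(String[] args) {
                Collection<HashMap<String, Object[]>> newElements = new ArrayList<>();
                newElements.add(new HashMap<>());

                // Fix 1: a destination you add to should not be "? extends".
                List<Map<String, Object[]>> currentList = new ArrayList<>();
                currentList.addAll(newElements);  // compiles: every HashMap is a Map

                // Fix 2: if a wildcard is required, use a lower bound instead.
                List<? super HashMap<String, Object[]>> sink = currentList;
                sink.addAll(newElements);         // also compiles
            }
        }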

    Read the article

  • Validating collection elements in WPF

    - by Chris
    I would like to know how people are going about validating collections in WPF. Let's say, for example, that I have an observable collection of ViewModels that I am binding to the ItemsSource of a grid, and the user can add new rows to the grid and needs to fill them in. First of all I need to validate each row, to ensure that the required fields of each ViewModel are filled in. This is fine and simple to do per row. However, the second level of validation is on the collection as a whole. For example, I want to ensure that no two rows of the collection have the same identifier, or that no two rows have the same name; I'm basically checking for duplicate properties across different rows. I also have more complex conditions, where I must ensure that there is at least one item within the collection that has some property set. How do I write a validation rule that would allow me to check these rules, validating the whole collection rather than the individual items? I also want to print any validation error above the datagrid, so that the user can fix the problem, and the message will update or disappear as the user fixes each different rule. Does anyone have any experience of the proper way to do this? Thanks, Chris

    Read the article

  • Java Collection performance question

    - by Shervin
    I have created a method that takes two Collection<String> as input and copies one to the other. However, I am not sure if I should check whether the collections contain the same elements before I start copying, or if I should just copy regardless. This is the method:

        /**
         * Copies from one collection to the other. Does not allow empty strings.
         * Removes duplicates. Clears the dest collection first.
         * @param target
         * @param dest
         */
        public static void copyStringCollectionAndRemoveDuplicates(Collection<String> target, Collection<String> dest) {
            if (target == null || dest == null)
                return;

            // Is this faster to do? Or should I just comment this block out?
            if (target.containsAll(dest))
                return;

            dest.clear();
            Set<String> uniqueSet = new LinkedHashSet<String>(target.size());
            for (String f : target)
                if (!"".equals(f))
                    uniqueSet.add(f);
            dest.addAll(uniqueSet);
        }

    Maybe it is faster to just remove the if (target.containsAll(dest)) return; check, because this method will iterate over the entire collection anyway.
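    Worth noting when weighing that check: containsAll is itself a nested scan (when the receiver is a List, it does one contains scan per element of the argument, i.e. O(|dest| * |target|)), so the shortcut can easily cost more than the copy it tries to skip, and it also returns early whenever dest is merely a subset of target, which is not the same as "nothing to do". A sketch of the check-free variant, assuming the early exit is simply dropped:

        import java.util.*;

        public final class CollectionCopy {
            public static void copyStringCollectionAndRemoveDuplicates(
                    Collection<String> target, Collection<String> dest) {
                if (target == null || dest == null)
                    return;
                // one pass over target, preserving encounter order
                Set<String> unique = new LinkedHashSet<>(target);
                unique.remove(""); // disallow empty strings
                dest.clear();
                dest.addAll(unique);
            }
        }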

    Read the article

  • Modifying C# dictionary value

    - by minjang
    I'm a C++ expert, but not at all for C#. I created a Dictionary<string, STATS>, where STATS is a simple struct. Once I have built the dictionary with initial string and STATS pairs, I want to modify the dictionary's STATS values. In C++, it's very clear:

        Dictionary<string, STATS*> benchmarks;
        // Initialize it...
        STATS* stats = benchmarks[item.Key]; // Touch stats directly

    However, I tried the following in C#:

        Dictionary<string, STATS> benchmarks = new Dictionary<string, STATS>();

        // Initialize benchmarks with a bunch of STATS
        foreach (var item in _data)
            benchmarks.Add(item.app_name, item);

        foreach (KeyValuePair<string, STATS> item in benchmarks)
        {
            // I want to modify the STATS value inside the benchmarks dictionary.
            STATS stat_item = benchmarks[item.Key];
            ParseOutputFile("foo", ref stat_item);
            // But it is not modified in benchmarks... stat_item is just a copy.
        }

    This is a really novice problem, but it wasn't easy to find an answer.

    EDIT: I also tried the following:

        STATS stat_item = benchmarks[item.Key];
        ParseOutputFile(file_name, ref stat_item);
        benchmarks[item.Key] = stat_item;

    However, I get an exception, since such an assignment invalidates the enumerator:

        Unhandled Exception: System.InvalidOperationException: Collection was modified; enumeration operation may not execute.
           at System.ThrowHelper.ThrowInvalidOperationException(ExceptionResource resource)
           at System.Collections.Generic.Dictionary`2.Enumerator.MoveNext()
           at helper.Program.Main(String[] args) in D:\dev\\helper\Program.cs:line 75
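    For comparison, the modify-while-enumerating half of this problem looks the same in Java, and the two standard escapes there are: mutate through the entry (allowed mid-iteration, because it is not a structural modification), or iterate over a snapshot of the keys. A small sketch:

        import java.util.*;

        public class MapUpdateDemo {
            public static void main(String[] args) {
                Map<String, Integer> benchmarks = new HashMap<>();
                benchmarks.put("app1", 0);
                benchmarks.put("app2", 0);

                // Option 1: setValue through the entry is legal during iteration.
                for (Map.Entry<String, Integer> e : benchmarks.entrySet())
                    e.setValue(e.getValue() + 1);

                // Option 2: iterate over a snapshot of the keys, then put freely.
                for (String key : new ArrayList<>(benchmarks.keySet()))
                    benchmarks.put(key, benchmarks.get(key) * 10);

                System.out.println(benchmarks); // e.g. {app1=10, app2=10}
            }
        }

    The C# equivalent of option 2, looping over a copy of the Keys collection (e.g. via ToList()) and writing back outside the enumerator, is the usual fix for the InvalidOperationException above.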

    Read the article

  • Iterating through Event Log Entry Collection, IndexOutOfBoundsException

    - by fjdumont
    Hello, in a service application I am iterating through the Windows application event log to parse events, in order to react depending on the entry message. When the event log is full (Windows usually makes sure there is enough space by deleting old entries; this is configurable in the eventvwr.exe settings), the service always runs into an IndexOutOfBoundsException while iterating through the EventLog.Entries collection. No matter how I iterate (for-loop, using the collection's enumerator, copying the collection into an array, ...), I can't seem to get rid of this 'bug'. Currently, I ensure that the log is not full in order to keep the service running, by regularly deleting the last few items: I parse the event log file and delete the last few nodes (don't beat me up, I couldn't find a better alternative...). How can I iterate through the collection without trying to access already-deleted entries? Is there perhaps a more elegant method? I am only trying to access the logs written during the last x seconds (even LINQ failed to select those when the log is full; same exception), could this help? Thanks for any advice and hints. Frank

    Edit: I forgot to mention that my assumption is that the loops are accessing entries which are being deleted by Windows during iteration. Basically that is why I tried to clone the collection. Is there perhaps a way to lock the collection for a small amount of time, just for my application?

    Read the article

  • What's the most efficient way to load data from a file to a collection on-demand?

    - by Dan
    I'm working on a Java project that will allow users to parse multiple files with potentially thousands of lines. The information parsed will be stored in different objects, which will then be added to a collection. Since the GUI won't require ALL these objects to be loaded at once and kept in memory, I'm looking for an efficient way to load/unload data from files, so that data is only loaded into the collection when a user requests it. I'm just evaluating options right now. I've also thought about the case where, after loading a subset of the data into the collection and presenting it on the GUI, what the best way would be to reload previously observed data: re-run the parser and repopulate the collection and the GUI, or find a way to keep the collection in memory, or serialize/deserialize the collection itself? I know that loading/unloading subsets of data can get tricky if some sort of data filtering is performed. Let's say that I filter on ID, so my new subset could contain data from two previously analyzed subsets. This would be no problem if I kept a master copy of the whole data in memory. I've read that google-collections is good and efficient when handling big amounts of data, and offers methods that simplify lots of things, so this might offer an alternative that allows me to keep the collection in memory. This is just general talk; the question of which collection to use is a separate and complex one. Do you know what the general recommendation for this type of task is? I'd like to hear what you've done in similar scenarios. I can provide more specifics if needed.
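    One common shape for "load on request, bounded memory" is a size-capped LRU cache sitting in front of the parser: previously observed subsets stay cheap to revisit until they are evicted, after which they are simply re-parsed. A minimal sketch using LinkedHashMap's access-order eviction (the key type and the parser function are placeholders):

        import java.util.LinkedHashMap;
        import java.util.Map;
        import java.util.function.Function;

        // Keeps at most maxEntries parsed chunks in memory; the least recently
        // accessed chunk is evicted and re-parsed on its next request.
        class ChunkCache<K, V> extends LinkedHashMap<K, V> {
            private final int maxEntries;
            private final Function<K, V> parser; // e.g. re-runs the file parser for a key

            ChunkCache(int maxEntries, Function<K, V> parser) {
                super(16, 0.75f, true); // true = access order, which gives LRU behaviour
                this.maxEntries = maxEntries;
                this.parser = parser;
            }

            V fetch(K key) {
                return computeIfAbsent(key, parser);
            }

            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxEntries;
            }
        }

    Filters that span several previously loaded subsets then become cache lookups plus, at worst, some re-parsing, instead of requiring a master copy of everything in memory.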

    Read the article

  • Quickly or concisely determine the longest string per column in a row-based data collection

    - by ccornet
    Judging from the failure of my last inquiry, I need to calculate and preset the widths of a set of columns in a table that is being made into an Excel file. Unfortunately, the string data is stored in a row-based format, but the widths must be calculated in a column-based format. The data for the spreadsheets is generated from the following two collections:

        var dictFiles = l.Items.Cast<SPListItem>().GroupBy(foo => foo.GetSafeSPValue("Category")).ToDictionary(bar => bar.Key);
        StringDictionary dictCols = GetColumnsForItem(l.Title);

    where l is an SPList whose title determines which columns are used. Each SPListItem corresponds to a row of data, and the rows are sorted into separate worksheets based on Category (hence the dictionary). The second line is just a simple StringDictionary that has the column name (A, B, C, etc.) as its key and the corresponding SPListItem field display name as its value. So for each Category, I enumerate through dictFiles[somekey] to get all the rows in that sheet, and get a particular cell's data using SPListItem.Fields[dictCols[colName]]. What I am asking is: is there a quick or concise method, for any one dictFiles[somekey], to retrieve a readout of the longest string in each column provided by dictCols? If it is impossible to get both quickness and conciseness, I can settle for either (since I always have the O(n*m) route of just enumerating the collection and updating an array whenever strCurrent.Length > strLongest.Length). For example, if the goal table was the following...

        Item#  Field1     Field2      Field3
        1      Oarfish    Atmosphere  Pretty
        2      Raven      Radiation   Adorable
        3      Sunflower  Flowers     Cute

    I'd like a function which could cleanly take the collection of items 1, 2, and 3 and output, in the correct order:

        Sunflower, Atmosphere, Adorable

    Using .NET 3.5 and C# 3.0.
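    Stripped of the SharePoint specifics, this is a column-wise reduction over row-based data. A sketch of that reduction in Java (the C# 3.0 analogue would be a LINQ Max over each column index); the sample data mirrors the table above:

        import java.util.*;
        import java.util.stream.Collectors;
        import java.util.stream.IntStream;

        public class LongestPerColumn {
            // For each column, pick the longest string across all rows.
            // Still O(n*m) underneath, but a single concise expression.
            static List<String> longestPerColumn(List<List<String>> rows, int columns) {
                return IntStream.range(0, columns)
                        .mapToObj(c -> rows.stream()
                                .map(row -> row.get(c))
                                .max(Comparator.comparingInt(String::length))
                                .orElse(""))
                        .collect(Collectors.toList());
            }

            public static void main(String[] args) {
                List<List<String>> rows = List.of(
                        List.of("Oarfish", "Atmosphere", "Pretty"),
                        List.of("Raven", "Radiation", "Adorable"),
                        List.of("Sunflower", "Flowers", "Cute"));
                System.out.println(longestPerColumn(rows, 3)); // [Sunflower, Atmosphere, Adorable]
            }
        }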

    Read the article

  • Can a lambda be used to change a List's values in-place (without creating a new list)?

    - by Saint Hill
    I am trying to determine the correct way of transforming all the values in a List, using the new lambdas feature in the upcoming release of Java 8, without creating a new List. This pertains to times when a List is passed in by a caller and needs to have a function applied to change all the contents to new values - for example, the way Collections.sort(list) changes a list in-place. What is the easiest way, given this transforming function and this starting list:

        String function(String s) {
            return [some change made to value of s];
        }

        List<String> list = Arrays.asList("Bob", "Steve", "Jim", "Arbby");

    The usual way of applying a change to all the values in-place was this:

        for (int i = 0; i < list.size(); i++) {
            list.set(i, function(list.get(i)));
        }

    Do lambdas and Java 8 offer:

    - an easier and more expressive way?
    - a way to do this without setting up all the scaffolding of the for(..) loop?
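    As it happens, Java 8 shipped exactly this operation: List.replaceAll(UnaryOperator<E>) applies a function to every element in place, with no new list and no index loop. A minimal sketch:

        import java.util.Arrays;
        import java.util.List;

        public class ReplaceAllDemo {
            static String function(String s) {
                return s.toUpperCase(); // stand-in for "some change made to value of s"
            }

            public static void main(String[] args) {
                List<String> list = Arrays.asList("Bob", "Steve", "Jim", "Arbby");
                list.replaceAll(ReplaceAllDemo::function); // in-place, no new list
                System.out.println(list);                  // [BOB, STEVE, JIM, ARBBY]
            }
        }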

    Read the article

  • How to bundle extension methods requiring configuration in a library

    - by Greg
    Hi, I would like to develop a library that I can re-use to add various methods involved in navigating/searching through a graph (nodes/relationships or, if you like, vertices/edges). The generic requirements would be:

    - There are existing classes in the main project that already implement the equivalent of the graph class (which contains the lists of nodes/relationships), the node class, and the relationship class (which links nodes together). The main project likely already has persistence mechanisms for this data (e.g. these classes might be built using Entity Framework for persistence).
    - Methods would need to be added to each of these three classes: (a) graph class - methods like "search all nodes"; (b) node class - methods such as "find all children to depth i"; (c) relationship class - methods like "return relationship type", "get parent node", "get child node".
    - There would need to be a way to inform the library of the class names used for the graph/node/relationship classes (different projects might use different names). To some extent it would need to work like a generic collection, where you pass the classes to the collection so it knows what they are.
    - There needs to be a way to inform the library of which node property to use for equality checks (e.g. if it were a graph of webpages, the equality field to use might be the URI path).

    I'm assuming that using abstract base classes wouldn't really work, as this would tie usage down to the same persistence approach, the same class names, etc., whereas really I want to be able to add graph searching/walking methods to any project that has "graph-like" characteristics.
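    One language-neutral way to get that decoupling is an adapter interface: the host project tells the library how to read its own graph, node and relationship classes, including which property to use for equality, and the library's walking/searching code is written only against the adapter. Sketched here in Java (the question itself is about C# extension methods), with every name invented for illustration:

        import java.util.*;
        import java.util.function.Function;

        // The host project implements this once for its own classes,
        // whatever they are called and however they are persisted.
        interface GraphAdapter<N, R> {
            Iterable<N> allNodes();
            Iterable<R> outgoing(N node);      // relationships leaving a node
            N child(R relationship);
            Function<N, Object> equalityKey(); // e.g. page -> page.getUriPath()
        }

        // Library code: written once, reused over any adapted graph.
        final class GraphOps {
            static <N, R> Set<N> childrenToDepth(GraphAdapter<N, R> g, N start, int depth) {
                Set<Object> seen = new HashSet<>();
                seen.add(g.equalityKey().apply(start));
                Set<N> result = new LinkedHashSet<>();
                Deque<N> frontier = new ArrayDeque<>(List.of(start));
                for (int d = 0; d < depth && !frontier.isEmpty(); d++) {
                    Deque<N> next = new ArrayDeque<>();
                    for (N node : frontier)
                        for (R rel : g.outgoing(node)) {
                            N c = g.child(rel);
                            if (seen.add(g.equalityKey().apply(c))) { // dedupe on host-chosen key
                                result.add(c);
                                next.add(c);
                            }
                        }
                    frontier = next;
                }
                return result;
            }
        }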

    Read the article

  • Inheritance and type parameters of Traversable

    - by Jesper
    I'm studying the source code of the Scala 2.8 collection classes. I have questions about the hierarchy of scala.collection.Traversable. Look at the following declarations:

        package scala.collection

        trait Traversable[+A]
          extends TraversableLike[A, Traversable[A]]
          with GenericTraversableTemplate[A, Traversable]

        trait TraversableLike[+A, +Repr]
          extends HasNewBuilder[A, Repr]
          with TraversableOnce[A]

        package scala.collection.generic

        trait HasNewBuilder[+A, +Repr]

        trait GenericTraversableTemplate[+A, +CC[X] <: Traversable[X]]
          extends HasNewBuilder[A, CC[A] @uncheckedVariance]

    Question: Why does Traversable extend GenericTraversableTemplate with type parameters [A, Traversable] - why not [A, Traversable[A]]? I tried some experimenting with a small program with the same structure, and got a strange error message when I tried to change it to Traversable[A]:

        error: Traversable[A] takes no type parameters, expected: one

    I guess that the use of the @uncheckedVariance annotation in GenericTraversableTemplate also has to do with this? (That seems like a kind of potentially unsafe hack to force things to work...)

    Question: When you look at the hierarchy, you see that Traversable inherits HasNewBuilder twice (once via TraversableLike and once via GenericTraversableTemplate), but with slightly different type parameters. How does this work exactly? Why don't the different type parameters cause an error?

    Read the article

  • How do I best create a set of list classes to match my business objects?

    - by ken-forslund
    I'm a bit fuzzy on the best way to solve the problem of needing a list for each of my business objects that implements some overridden functions. Here's the setup: I have a baseObject that sets up the database connection and has a proper Dispose() method. All my other business objects inherit from it and, if necessary, override Dispose(). Some of these classes also contain arrays (lists) of other objects, so I create a class that holds a List of these. I'm aware I could just use the generic List, but that doesn't let me add extra features like a Dispose() that loops through and cleans up. So if I had objects called User, Project and Schedule, I would create UserList, ProjectList, ScheduleList. In the past, I have simply had these inherit from List<> with the appropriate class named, and then written the pile of common functions I wanted each to have, like Dispose(). This meant I would verify by hand that each of these List classes had the same set of methods, and some of these classes had pretty simple versions of those methods that could have been inherited from a base list class. I could write an interface to force me to ensure that each of my List classes has the same functions, but interfaces don't let me write common base functions that SOME of the lists might override. I had tried to write a baseObjectList that inherited from List, and then make my other Lists inherit from that, but there are issues with that (which is really why I came here); one of them was trying to use the Find() method with a predicate. I've simplified the problem down to just a discussion of a Dispose() method on the list that loops through and disposes its contents, but in reality I have several other common functions that I want all my lists to have. What's the best practice for solving this organizational matter?
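    The "generic base list with shared cleanup" idea can be expressed with a bounded type parameter, so each concrete list inherits the common methods instead of duplicating them. A rough Java rendering of that shape (Java's counterpart of the Dispose() pattern is AutoCloseable; composition over a list is often preferred to subclassing one, but this mirrors the setup described):

        import java.util.ArrayList;

        // Base list whose elements all promise a cleanup method; subclasses such as
        // UserList or ProjectList inherit close() rather than re-implementing it.
        class DisposableList<T extends AutoCloseable> extends ArrayList<T> implements AutoCloseable {
            @Override
            public void close() {
                for (T item : this) {
                    try {
                        item.close(); // dispose each element
                    } catch (Exception e) {
                        // keep going so one failure doesn't leak the rest
                        System.err.println("cleanup failed: " + e);
                    }
                }
                clear();
            }
        }

    A concrete list is then one line, e.g. class UserList extends DisposableList<User> {} for a hypothetical User that implements AutoCloseable, and any list that needs special behaviour can still override close() or add its own methods.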

    Read the article

  • Marionette.js ItemView not defined, then on browser refresh it is defined and all works well - race condition?

    - by Robert
    Yeah, it's just the initial browser load or two after a cache clear; subsequent refreshes clear the problem up. I'm thinking the item views just aren't fully constructed in time to be used in the collection views on the first load. But then they are on a refresh? Don't know. There must be something about the code sequence or loading or the load time itself. Not sure. I'm loading via require.js. I have two collections - users and messages. Each renders in its own list view. Each works, just not the first time or two the browser loads. The first time you load after clearing the browser cache, the console reports, for instance: "Uncaught ReferenceError: MessageItemView is not defined". A simple browser refresh clears it up. The same goes for the user collection: its collection view says it doesn't know anything about its item view, but after a simple browser refresh all is well. My views (item and collection) are in separate files. Is that the problem? For instance, here is my message collection view in its own file:

        // messagelistview.js
        var MessageListView = Marionette.CollectionView.extend({
            itemView: MessageItemView,
            el: $("#messages")
        });

    And the message item view is in a separate file:

        // messageview.js
        var MessageItemView = Marionette.ItemView.extend({
            tagName: "div",
            template: Handlebars.compile(
                '<div>{{fromUserName}}:</div>' +
                '<div>{{message}}</div>'
            )
        });

    Then in my main module file, which references each of those files, the collection view is constructed and displayed:

        // main.js
        // Define a model
        MessageModel = Backbone.Model.extend();

        // Make an instance of MessageItemView - code in separate file, messageview.js
        MessageView = new MessageItemView();

        // Define a message collection
        var MessageCollection = Backbone.Collection.extend({
            model: MessageModel
        });

        // Make an instance of MessageCollection
        var collMessages = new MessageCollection();

        // Make an instance of a MessageListView - code in separate file, messagelistview.js
        var messageListView = new MessageListView({
            collection: collMessages
        });

        App.messageListRegion.show(messageListView);

    Do I just have things sequenced wrong? I'm thinking it's some kind of race condition, only because over 3G to an iPad the item views are always undefined; they never seem to get constructed in time. A PC on a hard-wired connection does see success after a browser refresh or two.

    Read the article

  • How are the conceptual pairs Abstract/Concrete, Generic/Specific, and Complex/Simple related to one another in software architecture?

    - by tjb1982
    (= 2 (+ 1 1))

    Take the above. The requirement of the '=' predicate is that its arguments be comparable. Any two structures are comparable in this case, so the contract/requirement is pretty generic. The '+' predicate requires that its arguments be numbers. That's more specific.

    (socket domain type protocol)

    The arguments here are much more specific (even though the arguments are still just numbers, and the function itself returns a file descriptor, which is itself an int), but the arguments are more abstract, and the implementation is built up from other functions whose abstractions are less abstract, which are themselves built from less and less abstract abstractions - down to the point where the requirements are something like: move from one location to another, observe whether the switch at that location is on or off, turn the switch on or off, or leave it the same, etc.

    But are functions also less and less complex the less abstract they are? And is there a relationship between the number and range of arguments of a function and the complexity of its implementation, as you go from more abstract to less abstract, and vice versa?

    (= 2 (+ 1 1) 2r10)

    The '=' predicate is more generic than the '+' predicate, and thus could be more complex in its implementation. The '+' predicate's contract is less generic, and so it could be less complex in its implementation. Is this even a little correct?

    What about the 'socket' function? Each of those arguments is a number of some kind. What they represent, though, is something conceptually much more elaborate. It also returns a number (just like the others do), which is also a representation of something conceptually much more elaborate than a number.

    To boil it down, I'm asking if there is a relationship between the following dimensions, and why:

    - Abstract/Concrete
    - Complex/Simple
    - Generic/Specific

    And more specifically, do different configurations of these dimensions have a specific, measurable impact on the number and range of the arguments (i.e., the contract) of a function?

    Read the article

  • How to set EnqueueCallback for my generic callback

    - by CrazyJoe
        using System;
        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Documents;
        using System.Windows.Ink;
        using System.Windows.Input;
        using System.Windows.Media;
        using System.Windows.Media.Animation;
        using System.Windows.Shapes;
        using Microsistec.Domain;
        using Microsistec.Client;
        using Microsoft.VisualStudio.TestTools.UnitTesting;
        using System.Collections.Generic;
        using Microsistec.Tools;
        using System.Json;
        using Microsistec.SystemConfig;
        using System.Threading;
        using Microsoft.Silverlight.Testing;

        namespace Test
        {
            [TestClass]
            public class SampleTest : SilverlightTest
            {
                [TestMethod, Asynchronous]
                public void login()
                {
                    List<PostData> data = new List<PostData>();
                    data.Add(new PostData("email", "xxx"));
                    data.Add(new PostData("password", MD5.GetHashString("xxx")));
                    WebClient.sendData(Config.DataServerURL + "/user/login", data, LoginCallBack);
                    EnqueueCallback(?????????);
                    EnqueueTestComplete();
                }

                [Asynchronous]
                public void LoginCallBack(object sender, System.Net.UploadStringCompletedEventArgs e)
                {
                    string json = Microsistec.Client.WebClient.ProcessResult(e);
                    var result = JsonArray.Parse(json);
                    Assert.Equals("1", result["value"].ToString());
                    TestComplete();
                }
            }
        }

    I'm trying to set the ????????? value, but my callback is generic; it is set up in my WebClient.sendData. How do I implement my EnqueueCallback call with my already-existing LoginCallBack function?

    Read the article

  • Reflective Generic Detection

    - by Aren B
    Trying to find out if a provided Type is of a given generic type (with any generic type arguments inside). Let me explain:

        bool IsOfGenericType(Type baseType, Type sampleType)
        {
            /// ...
        }

    Such that:

        IsOfGenericType(typeof(Dictionary<,>), typeof(Dictionary<string, int>));  // True
        IsOfGenericType(typeof(IDictionary<,>), typeof(Dictionary<string, int>)); // True
        IsOfGenericType(typeof(IList<>), typeof(Dictionary<string, int>));        // False

    However, I played with some reflection in the immediate window; here were my results:

        typeof(Dictionary<,>) is typeof(Dictionary<string,int>)
          Type expected
        typeof(Dictionary<string,int>) is typeof(Dictionary<string,int>)
          Type expected
        typeof(Dictionary<string,int>).IsAssignableFrom(typeof(Dictionary<,>))
          false
        typeof(Dictionary<string,int>).IsSubclassOf(typeof(Dictionary<,>))
          false
        typeof(Dictionary<string,int>).IsInstanceOfType(typeof(Dictionary<,>))
          false
        typeof(Dictionary<,>).IsInstanceOfType(typeof(Dictionary<string,int>))
          false
        typeof(Dictionary<,>).IsAssignableFrom(typeof(Dictionary<string,int>))
          false
        typeof(Dictionary<,>).IsSubclassOf(typeof(Dictionary<string,int>))
          false

    So now I'm at a loss, because when you look at the Name of typeof(Dictionary<,>) you get Dictionary`2, which is the same as typeof(Dictionary<string,int>).Name.
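    In C# the usual fix is to compare against the open generic via GetGenericTypeDefinition(), e.g. typeof(Dictionary<string,int>).GetGenericTypeDefinition() == typeof(Dictionary<,>), plus a walk over base types and interfaces for cases like IDictionary<,>. For contrast, Java reflection splits the same idea into a ParameterizedType and its raw type; a small Java sketch of the equivalent check:

        import java.lang.reflect.ParameterizedType;
        import java.lang.reflect.Type;
        import java.util.*;

        public class GenericCheck {
            // True when sample is an instantiation of the given generic class,
            // e.g. generic = Map.class and sample = Map<String, Integer>.
            static boolean isOfGenericType(Class<?> generic, Type sample) {
                return sample instanceof ParameterizedType
                        && ((ParameterizedType) sample).getRawType().equals(generic);
            }

            @SuppressWarnings("unused")
            private Map<String, Integer> probe; // supplies a Map<String, Integer> type token

            public static void main(String[] args) throws Exception {
                Type mapStringInt = GenericCheck.class.getDeclaredField("probe").getGenericType();
                System.out.println(isOfGenericType(Map.class, mapStringInt));  // true
                System.out.println(isOfGenericType(List.class, mapStringInt)); // false
            }
        }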

    Read the article

  • Generic wrapper for System.Web.Caching.Cache functions

    - by David Neale
    I've created a generic wrapper for using the Cache object:

        public class Cache<T> where T : class
        {
            public Cache Cache { get; set; }
            public CachedKeys Key { get; set; }

            public Cache(Cache cache, CachedKeys key)
            {
                Cache = cache;
                Key = key;
            }

            public void AddToCache(T obj)
            {
                Cache.Add(Key.ToString(), obj, null,
                    DateTime.Now.AddMinutes(5),
                    System.Web.Caching.Cache.NoSlidingExpiration,
                    System.Web.Caching.CacheItemPriority.Normal, null);
            }

            public bool TryGetFromCache(out T cachedData)
            {
                cachedData = Cache[Key.ToString()] as T;
                return cachedData != null;
            }

            public void RemoveFromCache()
            {
                Cache.Remove(Key.ToString());
            }
        }

    The CachedKeys enumeration is just a list of keys that can be used to cache data. The trouble is, calling it is quite convoluted:

        var cache = new Cache<MyObject>(Page.Cache, CachedKeys.MyKey);
        MyObject myObject = null;
        if (!cache.TryGetFromCache(out myObject))
        {
            // get data...
            cache.AddToCache(data); // add to cache
            return data;
        }
        return myObject;

    I only store one instance of each of my objects in the cache. Therefore, is there any way that I can create an extension method that accepts the type of object to cache and uses (via reflection) its Name as the cache key?

        public static Cache<T> GetCache(this Cache cache, Type cacheType)
        {
            Cache<cacheType> Cache = new Cache<cacheType>(cache, cacheType.Name);
        }

    Of course, there are two errors here:

    - Extension methods must be defined in a non-generic static class
    - The type or namespace name 'cacheType' could not be found

    This is clearly not the right approach, but I thought I'd show my working. Could somebody guide me in the right direction?
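    The underlying wish - resolve the cache entry from the type itself, using the type's name as the key - can be sketched with a class token, which avoids the "cacheType as a type argument" error because the token is simultaneously a runtime value and the source of the generic parameter. A Java rendering with invented names (in C#, typeof(T).Name inside a generic method plays the same role):

        import java.util.Map;
        import java.util.Optional;
        import java.util.concurrent.ConcurrentHashMap;

        // One cached instance per type, keyed by the class's name; Class<T>
        // ties the runtime key to the compile-time type, so no casts leak out.
        class TypedCache {
            private final Map<String, Object> store = new ConcurrentHashMap<>();

            <T> void put(Class<T> type, T value) {
                store.put(type.getName(), value);
            }

            <T> Optional<T> get(Class<T> type) {
                return Optional.ofNullable(type.cast(store.get(type.getName())));
            }
        }

    Usage collapses to cache.put(MyObject.class, myObject) and cache.get(MyObject.class), with no key enumeration at the call site at all.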

    Read the article

  • Problem with MessageContract, generic return types and client-side naming

    - by Soeteman
    I'm building a web service which uses MessageContracts, because I want to add custom fields to my SOAP header. In a previous topic, I learned that a composite response has to be wrapped. For this purpose, I devised a generic ResponseWrapper class:

        [MessageContract(WrapperNamespace = "http://mynamespace.com", WrapperName = "WrapperOf{0}")]
        public class ResponseWrapper<T>
        {
            [MessageBodyMember(Namespace = "http://mynamespace.com")]
            public T Response { get; set; }
        }

    I made a ServiceResult base class, defined as follows:

        [MessageContract(WrapperNamespace = "http://mynamespace.com")]
        public class ServiceResult
        {
            [MessageBodyMember]
            public bool Status { get; set; }

            [MessageBodyMember]
            public string Message { get; set; }

            [MessageBodyMember]
            public string Description { get; set; }
        }

    To be able to include the request context in the response, I use a subclass of ServiceResult, which uses generics:

        [MessageContract(WrapperNamespace = "http://mynamespace.com", WrapperName = "ServiceResultOf{0}")]
        public class ServiceResult<TRequest> : ServiceResult
        {
            [MessageBodyMember]
            public TRequest Request { get; set; }
        }

    This is used in the following way:

        [OperationContract()]
        ResponseWrapper<ServiceResult<HCCertificateRequest>> OrderHealthCertificate(RequestContext<HCCertificateRequest> context);

    I expected my client code to be generated as:

        ServiceResultOfHCCertificateRequest OrderHealthCertificate(RequestContextOfHCCertificateRequest context);

    Instead, I get the following:

        ServiceResultOfHCCertificateRequestzSOTD_SSj OrderHealthCertificate(CompType1 c1, CompType2 c2, HCCertificateRequest context);

    CompType1 and CompType2 are properties of the RequestContext class. The problem is that a hash (the zSOTD_SSj part) is appended to the end of ServiceResultOfHCCertificateRequest. How do I need to define my generic return types in order for the client types to be generated as expected (without the hash)?

    Read the article

  • Best Practice - Removing item from generic collection in C#

    - by Matt Davis
    I'm using C# in Visual Studio 2008 with .NET 3.5. I have a generic dictionary that maps types of events to a generic list of subscribers. A subscriber can be subscribed to more than one event.

        private static Dictionary<EventType, List<ISubscriber>> _subscriptions;

    To remove a subscriber from the subscription list, I can use either of these two options.

    Option 1:

        ISubscriber subscriber; // defined elsewhere
        foreach (EventType eventType in _subscriptions.Keys)
        {
            if (_subscriptions[eventType].Contains(subscriber))
            {
                _subscriptions[eventType].Remove(subscriber);
            }
        }

    Option 2:

        ISubscriber subscriber; // defined elsewhere
        foreach (EventType eventType in _subscriptions.Keys)
        {
            _subscriptions[eventType].Remove(subscriber);
        }

    I have two questions. First, notice that Option 1 checks for existence before removing the item, while Option 2 uses brute-force removal, since Remove() does not throw an exception. Of these two, which is the preferred, "best-practice" way to do this? Second, is there another, "cleaner", more elegant way to do this, perhaps with a lambda expression or a LINQ extension? I'm still getting acclimated to these two features. Thanks.

    EDIT: Just to clarify, I realize that the choice between Options 1 and 2 is a choice of speed (Option 2) versus maintainability (Option 1). In this particular case, I'm not necessarily trying to optimize the code, although that is certainly a worthy consideration. What I'm trying to understand is whether there is a generally well-established practice for doing this. If not, which option would you use in your own code?
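    On the "cleaner, more elegant" question: once the existence check is dropped, the whole removal collapses to one pass over the value lists. A Java sketch of that shape (String stands in for the EventType and ISubscriber types here; the C# analogue iterates _subscriptions.Values the same way):

        import java.util.*;

        public class Unsubscribe {
            static void removeEverywhere(Map<String, List<String>> subscriptions, String subscriber) {
                // remove() is a no-op when the element is absent, so no existence check is needed
                subscriptions.values().forEach(list -> list.remove(subscriber));
            }

            public static void main(String[] args) {
                Map<String, List<String>> subs = new HashMap<>();
                subs.put("login", new ArrayList<>(List.of("alice", "bob")));
                subs.put("logout", new ArrayList<>(List.of("bob")));
                removeEverywhere(subs, "bob");
                System.out.println(subs); // e.g. {login=[alice], logout=[]}
            }
        }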

    Read the article

  • Generic object to object mapping with parametrized constructor

    - by Rody van Sambeek
    I have a data access layer which returns an IDataRecord. I have a WCF service that serves DataContracts (DTOs). These DataContracts are initialized by a parametrized constructor taking the IDataRecord, as follows:

        [DataContract]
        public class DataContractItem
        {
            [DataMember]
            public int ID;

            [DataMember]
            public string Title;

            public DataContractItem(IDataRecord record)
            {
                this.ID = Convert.ToInt32(record["ID"]);
                this.Title = record["title"].ToString();
            }
        }

    Unfortunately I can't change the DAL, so I'm obliged to work with the IDataRecord as input. But in general this works very well. The mappings are pretty simple most of the time; sometimes they are a bit more complex, but it's no rocket science. However, I'd now like to be able to use generics to instantiate the different DataContracts, to simplify the WCF service methods. I want to be able to do something like:

        public T DoSomething<T>(IDataRecord record)
        {
            ...
            return new T(record);
        }

    So I tried the following solutions:

    - Use a generic typed interface with a constructor.
      Doesn't work: we can't define a constructor in an interface.
    - Use a static method to instantiate the DataContract, and create a typed interface containing this static method.
      Doesn't work: we can't define a static method in an interface.
    - Use a generic typed interface containing the new() constraint.
      Doesn't work: the new() constraint cannot take a parameter (the IDataRecord).
    - Use a factory object to perform the mapping based on the DataContract type.
      Does work, but: it's not very clean, because I now have a switch statement with all the mappings in one file.

    I can't find a really clean solution for this. Can somebody shed some light on this for me? The project is too small for any complex mapping techniques and too large for a "switch-based" factory implementation.

    Read the article

  • Receiving generic typed <T> custom objects through remote object in Flex

    - by Aaron
    Is it possible to receive custom generic typed objects through AMF? I'm trying to integrate a Flex app with an existing C# service, but Flex chokes on custom generic typed objects. As far as I can tell Flex doesn't even support generics, but I'd like to be able to at least read in the object and cast its members as necessary. I basically just want Flex to ignore the <T>. I'm hopeful that there's a way to do this, since Flex doesn't complain about typed collections (a server call returning a typed List works fine, and Flex converts it to an ArrayCollection just like an un-typed List). Here's a trimmed down example of what's going on for me.

    The custom C# typed class:

        public class TypeTest<T>
        {
            public T value { get; set; }

            public TypeTest() { }

            public TypeTest(T value) { this.value = value; }
        }

    The server method returning the TypeTest:

        public TypeTest<String> doTypeTest()
        {
            TypeTest<String> theTester = new TypeTest<String>("grrrr");
            return theTester;
        }

    The corresponding Flex value object:

        [RemoteClass(alias="API.Model.TypeTest")]
        public class TypeTest
        {
            private var _value:Object;

            public function get value():Object
            {
                return _value;
            }

            public function set value(theValue:Object):void
            {
                _value = theValue;
            }

            public function TypeTest() { }
        }

    And the result handler code:

        public function doTypeTest(result:TypeTest):void
        {
            var theString:String = result.value as String;
            trace(theString);
        }

    When the result handler is called I get the runtime error:

        TypeError: Error #1034: Type Coercion failed: cannot convert mx.utils::ObjectProxy@11a98041 to com.model.vos.TypeTest.

    Irritatingly, if I change the result handler to take a parameter of type Object it works fine. Does anyone know how to make this work with the value object? I feel like I'm missing something really obvious.

    Read the article

  • Hadoop: implementing a generic ListWritable

    - by Guruprasad Venkatesh
    I am working on building a map-reduce pipeline of jobs (with one MR job's output feeding the next as input). The values being passed around are fairly complex, in that there are lists of different types, and hash maps whose values are lists. The Hadoop API does not seem to have a ListWritable. I am trying to write a generic one, but it seems I can't instantiate a generic type in my readFields implementation unless I pass in the class type itself:

        public class ListWritable<T extends Writable> implements Writable {
            private List<T> list;
            private Class<T> clazz;

            public ListWritable(Class<T> clazz) {
                this.clazz = clazz;
                list = new ArrayList<T>();
            }

            @Override
            public void write(DataOutput out) throws IOException {
                out.writeInt(list.size());
                for (T element : list) {
                    element.write(out);
                }
            }

            @Override
            public void readFields(DataInput in) throws IOException {
                int count = in.readInt();
                this.list = new ArrayList<T>();
                for (int i = 0; i < count; i++) {
                    try {
                        T obj = clazz.newInstance();
                        obj.readFields(in);
                        list.add(obj);
                    } catch (InstantiationException e) {
                        e.printStackTrace();
                    } catch (IllegalAccessException e) {
                        e.printStackTrace();
                    }
                }
            }
        }

    But Hadoop requires all Writables to have a no-argument constructor to read the values back. Has anybody tried to do the same and solved this problem? TIA.
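    One way out, similar in spirit to what Hadoop's own ObjectWritable does, is to make the element type part of the serialized form: write the class name in write(), read it back in readFields(), and keep the no-arg constructor Hadoop requires. A hedged sketch of that variant:

        import java.io.*;
        import java.util.*;
        import org.apache.hadoop.io.Writable;

        public class ListWritable<T extends Writable> implements Writable {
            private List<T> list = new ArrayList<T>();
            private Class<T> clazz;

            public ListWritable() { }                  // the no-arg ctor Hadoop needs

            public ListWritable(Class<T> clazz) {
                this.clazz = clazz;
            }

            @Override
            public void write(DataOutput out) throws IOException {
                out.writeUTF(clazz.getName());         // element type travels with the data
                out.writeInt(list.size());
                for (T element : list)
                    element.write(out);
            }

            @SuppressWarnings("unchecked")
            @Override
            public void readFields(DataInput in) throws IOException {
                try {
                    clazz = (Class<T>) Class.forName(in.readUTF());
                } catch (ClassNotFoundException e) {
                    throw new IOException(e);
                }
                int count = in.readInt();
                list = new ArrayList<T>(count);
                for (int i = 0; i < count; i++) {
                    try {
                        T obj = clazz.newInstance(); // element classes need no-arg ctors too
                        obj.readFields(in);
                        list.add(obj);
                    } catch (ReflectiveOperationException e) {
                        throw new IOException(e);
                    }
                }
            }
        }

    The cost is a few extra bytes per record for the class name; Hadoop's GenericWritable shows a more compact variant that writes an index into a fixed table of classes instead.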

    Read the article

< Previous Page | 22 23 24 25 26 27 28 29 30 31 32 33  | Next Page >