Search Results

Search found 8397 results on 336 pages for 'implementation'.


  • How does one implement storage/retrieval of smart-search/mailbox features?

    - by humble_coder
    Hi all. I have a question regarding the implementation of smart-search features. For example, consider something like "smart mailboxes" in various email applications. Let's assume you have your data (emails) stored in a database and, depending on the field for which the query will be created, you present different options to the end user. For the moment, assume the Subject, Verb, Object (SVO) approach. For instance, say you have the following:

        SUBJECTs: message, to_address, from_address, subject, date_received
        VERBs:    contains, does_not_contain, is_equal_to, greater_than, less_than
        OBJECTs:  ???????

    Now, in case it isn't clear, I want a table structure (although I'm not opposed to an external XML-like file of some sort) to store, and later retrieve and present, my criteria for smart searches/mailboxes. As an example, using SVO I could easily store and then reconstruct a query for "date between two dates" -- simply combine "date greater than" AND "date less than". However, what if, in the same smart search, I wanted a "between" OR'ed with another criterion? You can see that it might get out of hand -- not necessarily in the query creation (which is rather simple), but in the option presentation and storage mechanism.

    Perhaps I need to think on a more granular level. Perhaps I should simply allow the user to select AND or OR for each criterion independently, instead of making the smart search all-or-nothing (i.e. MATCH ALL or MATCH ANY) -- I just don't want it to turn into a Hydra. Any input would be most appreciated. My apologies if the question is a bit incoherent. It is late, and my brain is toast.
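
    A minimal relational sketch of the per-criterion-connective idea (the table and column names here are hypothetical, not from the original question):

        -- One saved smart search per row.
        CREATE TABLE smart_search (
            search_id INTEGER PRIMARY KEY,
            name      TEXT NOT NULL
        );

        -- One row per criterion. Each row carries its own connective, so
        -- "date > X AND date < Y OR from_address CONTAINS Z" is expressible.
        CREATE TABLE search_criterion (
            criterion_id INTEGER PRIMARY KEY,
            search_id    INTEGER NOT NULL REFERENCES smart_search(search_id),
            position     INTEGER NOT NULL,            -- evaluation order
            connective   TEXT NOT NULL DEFAULT 'AND', -- 'AND' or 'OR'; ignored on the first criterion
            subject      TEXT NOT NULL,               -- e.g. 'date_received'
            verb         TEXT NOT NULL,               -- e.g. 'greater_than'
            object       TEXT NOT NULL                -- the comparison value
        );

    Rebuilding the WHERE clause is then a walk over the criteria in position order; anything fancier than left-to-right evaluation would need an extra parent/group column.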


  • OOP, Interface Design and Encapsulation

    - by Mau
    A C# project, but this could apply to any OO language. Three interacting interfaces:

        public interface IPublicData {}

        public /* internal */ interface IInternalDataProducer
        {
            string GetData();
        }

        public interface IPublicWorker
        {
            IPublicData DoWork();
            IInternalDataProducer GetInternalProducer();
        }

        public class Engine
        {
            Engine(IPublicWorker worker) {}

            IPublicData Run()
            {
                DoSomethingWith(worker.GetInternalProducer().GetData());
                return worker.DoWork();
            }
        }

    Clearly Engine is parametric in the actual worker that does the job. A further source of parametrization is how we produce the 'internal data' via IInternalDataProducer. This implementation requires IInternalDataProducer to be public because it's part of the declaration of the public interface IPublicWorker. However, I'd like it to be internal, since it's only used by the engine.

    One solution is to make IPublicWorker produce the internal data itself, but that's not very elegant: there are only a couple of ways of producing it (while there are many more worker implementations), so it's nicer to delegate to a couple of separate concrete classes. Moreover, IInternalDataProducer is used in more places inside the engine, so it's convenient for the engine to pass the actual object around. I'm looking for elegant ideas/patterns. Cheers :-)
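
    One possible direction (a sketch, not necessarily the intended design): move the internal-facing member from the interface to an abstract base class, since class members, unlike interface members, may be internal:

        // Internal: invisible outside the engine's assembly.
        internal interface IInternalDataProducer
        {
            string GetData();
        }

        // Public surface stays minimal.
        public abstract class WorkerBase
        {
            public abstract IPublicData DoWork();

            // Internal member: callable by Engine, hidden from other assemblies.
            internal abstract IInternalDataProducer GetInternalProducer();
        }

    The catch is that workers must then live in the engine's assembly (or be granted access via InternalsVisibleTo), because external code cannot override an internal abstract member.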


  • Problem with Boost::Asio for C++

    - by Martin Lauridsen
    Hi there. For my bachelor's thesis, I am implementing a distributed version of an algorithm for factoring large integers (finding the prime factorisation). This has applications in e.g. the security of the RSA cryptosystem. My vision is that clients (Linux or Windows) will download an application and compute some numbers (these are independent, thus suited for parallelization). The numbers (not found very often) will be sent to a master server, which collects them. Once enough numbers have been collected by the master server, it will do the rest of the computation, which cannot be easily parallelized.

    Anyhow, to the technicalities. I was thinking of using Boost::Asio for a socket client/server implementation for the clients' communication with the master server. Since I want to compile for both Linux and Windows, I thought Windows would be as good a place to start as any. So I downloaded the Boost library and compiled it as described on the Boost Getting Started page:

        bootstrap
        .\bjam

    It all compiled just fine. Then I tried to compile one of the tutorial examples, client.cpp, from Asio (can't post the link because of restrictions). I am using the Visual C++ compiler from Microsoft Visual Studio 2008, like this:

        cl /EHsc /I D:\Downloads\boost_1_42_0 client.cpp

    But I get this error:

        /out:client.exe
        client.obj
        LINK : fatal error LNK1104: cannot open file 'libboost_system-vc90-mt-s-1_42.lib'

    Does anyone have any idea what could be wrong, or how I could move forward? I have been trying pretty much all week to get a simple client/server socket program for C++ working, but with no luck. Serious frustration kicking in. Thank you in advance.
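
    A likely cause (an inference from the error text, not something confirmed in the post): on MSVC, Boost's headers auto-link the named .lib, but the linker is never told where the built libraries live -- by default bjam leaves them under stage\lib inside the Boost tree. A sketch of the fix:

        cl /EHsc /I D:\Downloads\boost_1_42_0 client.cpp ^
           /link /LIBPATH:D:\Downloads\boost_1_42_0\stage\lib

    If that directory doesn't contain the exact variant named in the error (the -s suffix means a static-runtime build), rebuilding with matching options, e.g. .\bjam --with-system link=static runtime-link=static, should produce it.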


  • How to map combinations of things to a relational database?

    - by Space_C0wb0y
    I have a table whose records represent certain objects. For the sake of simplicity I am going to assume that the table has only one column, and that is the unique ObjectId. Now I need a way to store combinations of objects from that table. The combinations have to be unique, but can be of arbitrary length. For example, given the ObjectIds 1, 2, 3, 4, I want to store the following combinations: {1,2}, {1,3,4}, {2,4}, {1,2,3,4}. Ordering is not necessary.

    My current implementation is a table Combinations that maps ObjectIds to CombinationIds, so every combination receives a unique id:

        ObjectId | CombinationId
        ------------------------
               1 | 1
               2 | 1
               1 | 2
               3 | 2
               4 | 2

    This is the mapping for the first two combinations of the example above. The problem is that the query for finding the CombinationId of a specific combination seems to be very complex. The two main usage scenarios for this table will be to iterate over all combinations, and to retrieve a specific combination. The table will be created once and never updated. I am using SQLite through JDBC. Is there any simpler way, or a best practice, to implement such a mapping?
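
    For the exact-match lookup, relational division over the schema above is less bad than it first looks; a sketch (the literal set {1,3,4} and its size 3 are illustrative and would be generated by the caller):

        -- Find the CombinationId whose members are exactly {1, 3, 4}.
        SELECT CombinationId
        FROM   Combinations
        GROUP  BY CombinationId
        HAVING SUM(CASE WHEN ObjectId IN (1, 3, 4) THEN 1 ELSE 0 END) = 3
           AND COUNT(*) = 3;

    Since the table is write-once, an alternative is to also store a canonical signature per combination (the sorted id list, e.g. '1,3,4', in a UNIQUE column) and look combinations up by simple string equality instead.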


  • Circular dependency with generics

    - by devoured elysium
    I have defined the following interface:

        public interface IStateSpace<State, Action>
            where State : IState
            where Action : IAction<State, Action> // <-- this is the line that bothers me
        {
            void SetValueAt(State state, Action action);
            Action GetValueAt(State state);
        }

    Basically, an IStateSpace should be something like a chess board, where in each position of the board you have a set of possible movements to make. Those movements are called IActions here. I have defined the interface this way so I can accommodate different implementations: I can then define concrete classes that implement a 2D matrix, a 3D matrix, graphs, etc.

        public interface IAction<State, Action>
        {
            IStateSpace<State, Action> StateSpace { get; }
        }

    An IAction would be, say, "move up" (that is, if at (2, 2), move to (2, 1)), "move down", etc. Now, I want each action to have access to a state space so it can do some checking logic. Is this implementation correct? Or is this a bad case of a circular dependency? If so, how could I accomplish "the same" in a different way? Thanks
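
    For what it's worth, this kind of mutually recursive constraint does close cleanly once concrete types are named; a minimal sketch with hypothetical class names (assuming an IState marker interface, as in the post):

        public class GridState : IState { /* x, y, ... */ }

        public class GridMove : IAction<GridState, GridMove>
        {
            public IStateSpace<GridState, GridMove> StateSpace { get; set; }
        }

        public class Matrix2DStateSpace : IStateSpace<GridState, GridMove>
        {
            public void SetValueAt(GridState state, GridMove action) { /* store move */ }
            public GridMove GetValueAt(GridState state) { /* look up move */ return default(GridMove); }
        }

    A space that knows its actions and actions that know their space is the same shape as XmlDocument/XmlNode in the BCL; the usual concern with such cycles is coupling, not correctness.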


  • Create a new or update an existing entity in one go with JPA

    - by Alex R
    I have a JPA entity that has a timestamp field and is distinguished by a complex identifier field. What I need is to update the timestamp of an entity that has already been stored, and otherwise create and store a new entity with the current timestamp. As it turns out, the task is not as simple as it seems at first sight. The problem is that in a concurrent environment I get a nasty "Unique index or primary key violation" exception. Here's my code:

        // Load existing entity, if any.
        Entity e = entityManager.find(Entity.class, id);
        if (e == null) {
            // Could not find an entity with the specified id in the database, so create a new one.
            e = entityManager.merge(new Entity(id));
        }
        // Set current time...
        e.setTimestamp(new Date());
        // ...and finally save the entity.
        entityManager.flush();

    Please note that in this example the entity identifier is not generated on insert; it is known in advance. When two or more threads run this block of code in parallel, they may simultaneously get null from the entityManager.find(Entity.class, id) call, so they will attempt to save two or more entities at the same time with the same identifier, resulting in an error.

    I think there are a few solutions to the problem. Sure, I could synchronize this code block with a global lock to prevent concurrent access to the database, but would that be the most efficient way? Some databases support a very handy MERGE statement that updates an existing row or creates a new one if none exists, but I doubt that OpenJPA (the JPA implementation of my choice) supports it. Even if JPA does not support SQL MERGE, I can always fall back to plain old JDBC and do whatever I want with the database -- but I don't want to leave a comfortable API and mess with a hairy JDBC+SQL combination. Maybe there is a magic trick to fix this using the standard JPA API only, but I don't know it yet. Please help.
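
    One common pattern (a sketch only -- exception mapping and transaction demarcation vary by provider, and this is not OpenJPA-specific) is to treat the losing insert as an expected event: attempt the insert, and on a duplicate-key failure roll back and retry as an update, since the row now definitely exists:

        // Hypothetical helper; assumes javax.persistence imports and that the
        // caller runs each attempt in its own transaction.
        boolean touch(EntityManager em, Object id) {
            Entity e = em.find(Entity.class, id);
            if (e != null) {
                e.setTimestamp(new Date());   // row exists: plain update
                return true;
            }
            try {
                e = new Entity(id);
                e.setTimestamp(new Date());
                em.persist(e);
                em.flush();                   // force the INSERT so a violation surfaces here
                return true;
            } catch (PersistenceException duplicate) {
                // Another thread inserted the same id first. This transaction is
                // now rollback-only, so the caller must retry in a fresh one.
                return false;
            }
        }

    A pessimistic alternative, when the row usually already exists, is em.find(..., LockModeType.PESSIMISTIC_WRITE) so concurrent updaters serialize on the row -- though that lock mode only arrived with JPA 2.0.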


  • Dealing with multiple generics in a method call

    - by thaBadDawg
    I've been dealing a lot lately with abstract classes that use generics. That is all good and fine, because I get a lot of utility out of these classes, but now it's making for some rather ugly code down the line. For example:

        abstract class ClassBase<T>
        {
            T Property { get; set; }
        }

        class MyClass : ClassBase<string>
        {
            OtherClass PropertyDetail { get; set; }
        }

    This implementation isn't all that crazy, except that when I want to reference the abstract class from a helper class, I have to supply a list of type parameters just to reference the implemented class, like this:

        class Helper
        {
            void HelpMe<C, T>(object Value) where C : ClassBase<T>, new()
            {
                DoWork();
            }
        }

    This is just a tame example; I have some method calls where the list of where clauses ends up being 5 or 6 lines long to handle all of the generic parameters. What I'd really like to do is

        class Helper
        {
            void HelpMe<C>(object Value) where C : ClassBase, new()
            {
                DoWork();
            }
        }

    but it obviously won't compile. I want to reference ClassBase without having to pass it a whole array of generic types to get the method to work, but I don't want to reference the higher-level classes, because there are a dozen of those. Am I the victim of my own cleverness, or is there an avenue that I haven't considered yet?
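
    A common refactoring for exactly this situation (a sketch; the member names are illustrative) is to introduce a non-generic root that carries everything a helper needs without knowing T, and have the generic base derive from it:

        // Non-generic root: the type-independent surface.
        abstract class ClassBase
        {
            public abstract object PropertyAsObject { get; }
        }

        abstract class ClassBase<T> : ClassBase
        {
            public T Property { get; set; }
            public override object PropertyAsObject
            {
                get { return Property; }
            }
        }

        class Helper
        {
            // Only one type parameter now, and the constraint compiles.
            public void HelpMe<C>(object value) where C : ClassBase, new()
            {
                object p = new C().PropertyAsObject;
                // DoWork(p); ...
            }
        }

    The trade-off is that the root surface is weakly typed (object); helpers that genuinely need T still have to take both type parameters.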


  • Prolog singleton variables in Python

    - by Rubens
    I'm working on a little set of scripts in Python, and I came to this:

        line = "a b c d e f g"
        a, b, c, d, e, f, g = line.split()

    I'm quite aware that these are decisions taken during implementation, but shouldn't Python offer (or does it offer) something like:

        _, _, var_needed, _, _, another_var_needed, _ = line.split()

    as Prolog does, in order to exclude the famous singleton variables? I'm not sure, but wouldn't it avoid unnecessary allocation? Or does creating references to the result of the split call not count as overhead?

    EDIT: Sorry, my point here is: in Prolog, as far as I know, in an expression like:

        test(L, N) :- test(L, 0, N).
        test([], N, N).
        test([_|T], M, N) :- V is M + 1, test(T, V, N).

    the variable represented by _ is not accessible, and I suppose the reference to the corresponding value in the list [_|T] is not even created. But in Python, if I use _, I can still use the last value assigned to _, and I also suppose the assignment occurs for each of the _ variables -- which may be considered an overhead. My question here is whether there should be (or whether there is) a syntax to avoid such unnecessary bindings.
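
    In standard Python every unpacking target really is bound (_ is just an ordinary name by convention), so the usual way to skip the rest is to index only what you need; a small sketch:

        from operator import itemgetter

        parts = "a b c d e f g".split()

        # Pick by position; no other names are bound.
        var_needed, another_var_needed = parts[2], parts[5]

        # Equivalent, with itemgetter.
        var_needed, another_var_needed = itemgetter(2, 5)(parts)

    Either way the full list from split() is still built; what's avoided is only the extra name bindings, which are cheap references rather than copies, so the savings are stylistic more than performance-related.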


  • Spring Integration 1.0 RC2: Streaming file content?

    - by gdm
    I've been trying to find information on this, but due to the immaturity of the Spring Integration framework I haven't had much luck. Here is my desired work flow:

    1. New files are placed in an 'Incoming' directory.
    2. Files are picked up using a file:inbound-channel-adapter.
    3. The file content is streamed, N lines at a time, to a 'Stage 1' channel, which parses each line into an intermediary (shared) representation.
    4. Each parsed line is routed to multiple 'Stage 2' channels.
    5. Each 'Stage 2' channel does its own processing on the N available lines to convert them to a final representation. This channel must have a queue which ensures that no Stage 2 channel is overwhelmed in the event that one channel processes significantly slower than the others.
    6. The final representation of the N lines is written to a file. There will be as many output files as there were routing destinations in step 4.

    ('N' above stands for any reasonable number of lines to read at a time, from 1 up to whatever I can fit into memory reasonably, but it is guaranteed to always be less than the number of lines in the full file.)

    How can I accomplish the streaming parts (steps 3, 4, 5) in Spring Integration? It's fairly easy to do without streaming the files, but my files are large enough that I cannot read an entire file into memory. As a side note, I have a working implementation of this work flow without Spring Integration, but since we're using Spring Integration in other places in our project, I'd like to try it here to see how it performs and how the resulting code compares in length and clarity.
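
    Framework questions aside, the core mechanic in step 3 is reading in bounded chunks instead of whole files. A plain-Java sketch of just that piece (deliberately independent of Spring Integration's 1.0 API, which I can't speak for):

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.util.ArrayList;
        import java.util.List;

        final class ChunkedLineReader {
            // Read up to n lines; an empty list signals end of file.
            static List<String> nextChunk(BufferedReader in, int n) throws IOException {
                List<String> chunk = new ArrayList<String>(n);
                String line;
                while (chunk.size() < n && (line = in.readLine()) != null) {
                    chunk.add(line);
                }
                return chunk;
            }
        }

    Each chunk then becomes one message payload flowing through Stage 1 and Stage 2, so memory stays bounded by N regardless of file size.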


  • How do I optimize this postfix expression tree for speed?

    - by Peter Stewart
    Thanks to the help I received in this post, I have a nice, concise recursive function to traverse a tree in postfix order:

        deque<char*> d;

        void Node::postfix()
        {
            if (left != __nullptr) { left->postfix(); }
            if (right != __nullptr) { right->postfix(); }
            d.push_front(cargo);
            return;
        }

    This is an expression tree. The branch nodes are operators randomly selected from an array, and the leaf nodes are values or the variable 'x', also randomly selected from an array:

        char *values[10] = {"1.0","2.0","3.0","4.0","5.0","6.0","7.0","8.0","9.0","x"};
        char *ops[4] = {"+","-","*","/"};

    As this will be called billions of times during a run of the genetic algorithm of which it is a part, I'd like to optimize it for speed. I have a number of questions on this topic, which I will ask in separate postings. The first is: how can I get access to each 'cargo' as it is found? That is, instead of pushing 'cargo' onto a deque and then processing the deque to get the values, I'd like to start processing right away. I don't yet know about parallel processing in C++, but this would ideally be done concurrently on two different processors. In Python, I'd make the function a generator and access succeeding 'cargo's using .next(). But I'm using C++ to speed up the Python implementation. I'm thinking that this kind of tree has been around for a long time and somebody has probably optimized it already. Any ideas? Thanks
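
    One way to process each cargo the moment it is found (a sketch; the visitor is an invented name, and making postfix a template means defining it in the header):

        // Call a visitor at each node instead of buffering into a global deque.
        template <typename Visitor>
        void Node::postfix(Visitor& visit)
        {
            if (left  != __nullptr) { left->postfix(visit);  }
            if (right != __nullptr) { right->postfix(visit); }
            visit(cargo);   // handled immediately, no intermediate container
        }

        // Example visitor: fold tokens into an evaluation as they arrive.
        struct Evaluator
        {
            void operator()(char* cargo) { /* push value / apply operator */ }
        };

        // Usage:  Evaluator eval;  root->postfix(eval);

    A function object passed as a template parameter typically inlines, so this tends to be cheaper than the deque it replaces; handing work to a second processor would then live inside the visitor (e.g. pushing onto a concurrent queue), not in the traversal itself.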


  • How to force users to create objects of classes derived from mine with "new" only?

    - by sharptooth
    To implement reference counting we use an IUnknown-like interface and a smart-pointer template class. The interface has implementations of all the reference-count methods, including Release():

        void IUnknownLike::Release()
        {
            if (--refCount == 0) {
                delete this;
            }
        }

    The smart-pointer template class has a copy constructor and an assignment operator, both accepting raw pointers. So users can do the following:

        class Class : public IUnknownLike { };

        void someFunction(CSmartPointer<Class> object); // whatever function

        Class object;
        someFunction(&object);

    and the program runs into undefined behavior: the object is created with reference count zero, the smart pointer is constructed and bumps it to one, then the function returns, the smart pointer is destroyed and calls Release(), which leads to a delete of a stack-allocated variable. Users can just as well do the following:

        struct COuter
        {
            // whatever else;
            Class inner; // IUnknownLike descendant
        };

        COuter object;
        someFunction(&object.inner);

    and again an object not created with new is deleted -- undefined behavior at its best. Is there any way to change the IUnknownLike interface so that the user is forced to use new when creating all objects derived from IUnknownLike -- both directly and indirectly derived (with classes in between the most derived class and the base)?
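
    The standard lever here is a non-public destructor (a sketch of the idiom, not a drop-in for the real codebase): stack variables and by-value members both require an accessible destructor at the point of declaration, so both misuse cases above stop compiling, while Release() can still delete this from inside the class:

        class IUnknownLike
        {
        public:
            void AddRef();
            void Release();            // still does `delete this` at refcount zero
        protected:
            virtual ~IUnknownLike();   // non-public: `Class object;` no longer compiles
        };

    The known weakness is that the constraint is not inherited automatically: a derived class that declares a public destructor reopens the hole, so every descendant has to keep its destructor protected or private as well -- the usual discipline in intrusive reference-counting designs.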


  • Endianness and C APIs: specifically OpenSSL

    - by Hassan Syed
    I have an algorithm that uses the following OpenSSL calls:

        HMAC_Update() / HMAC_Final()           // RIPEMD-160
        EVP_CipherUpdate() / EVP_CipherFinal() // CBC Blowfish

    These functions take an unsigned char * to the "plain text". My input data comes from a C++ std::string::c_str(), originating from a protocol buffer object as an encoded UTF-8 string. UTF-8 strings are meant to be endian-neutral. However, I'm a bit paranoid about how OpenSSL may perform operations on the data.

    My understanding is that encryption algorithms work on 8-bit blocks of data, and if an unsigned char * is used for pointer arithmetic when the operations are performed, the algorithms should be endian-neutral and I do not need to worry about anything. My uncertainty is compounded by the fact that I am working on a little-endian machine and have never done any real cross-architecture programming.

    My reasoning is based on the following two properties:

    1. std::string (not wstring) internally uses 8-bit characters, and the resulting c_str() pointer will iterate the same way regardless of the CPU architecture.
    2. Encryption algorithms are, either by design or by implementation, endian-neutral.

    I know the best way to get a definitive answer is to use QEMU and do some cross-platform unit tests (which I plan to do). My question is a request for comments on my reasoning, and perhaps it will assist other programmers faced with similar problems.


  • How to easily map C++ enums to strings

    - by Roddy
    I have a bunch of enum types in some library header files that I'm using, and I want a way of converting enum values to user-facing strings -- and vice versa. RTTI won't do it for me, because the user strings need to be a bit more readable than the enumerators. A brute-force solution would be a bunch of functions like the following, but I feel that's a bit too C-like:

        enum MyEnum { VAL1, VAL2, VAL3 };

        String getStringFromEnum(MyEnum e)
        {
            switch (e) {
                case VAL1: return "Value 1";
                case VAL2: return "Value 2";
                case VAL3: return "Value 3";
                default: throw Exception("Bad MyEnum");
            }
        }

    I have a gut feeling that there's an elegant solution using templates, but I can't quite get my head around it yet.

    UPDATE: Thanks for the suggestions -- I should have made clear that the enums are defined in a third-party library header, so I don't want to have to change their definitions. My gut feeling now is to avoid templates and do something like this:

        char *MyGetValue(int v, char *tmp); // implementation is trivial

        #define ENUM_MAP(type, strings) \
            char *getStringValue(const type &T) \
            { \
                return MyGetValue((int)T, strings); \
            }

        // enum eee {AA,BB,CC};  -- exists in library header file
        // enum fff {DD,GG,HH};
        ENUM_MAP(eee, "AA|BB|CC")
        ENUM_MAP(fff, "DD|GG|HH")

        // To use...
        eee e; fff f;
        std::cout << getStringValue(e);
        std::cout << getStringValue(f);
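
    Since the enums can't be touched, a lookup-table alternative that avoids both the macro and a per-enum switch (a C++98-friendly sketch, given the era; the names are illustrative):

        #include <map>
        #include <string>
        #include <stdexcept>

        template <typename E>
        std::string enumToString(const std::map<E, std::string>& table, E value)
        {
            typename std::map<E, std::string>::const_iterator it = table.find(value);
            if (it == table.end()) throw std::runtime_error("bad enum value");
            return it->second;
        }

        // One table per third-party enum, built once at startup:
        std::map<eee, std::string> makeEeeTable()
        {
            std::map<eee, std::string> m;
            m[AA] = "Value AA"; m[BB] = "Value BB"; m[CC] = "Value CC";
            return m;
        }

    The reverse mapping (string to enum) is the same function over a std::map<std::string, E>, which the pipe-delimited macro approach can't give you as easily.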


  • Encapsulating user input of data for a class (C++)

    - by Dr. Monkey
    For an assignment I've made a simple C++ program that uses a superclass (Student) and two subclasses (CourseStudent and ResearchStudent) to store a list of students and print out their details, with different details shown for the two different types of students (by overriding the display() method from Student). My question is about how the program collects user input such as the student name, ID number, unit and fee information (for a course student) and research information (for research students).

    My implementation has the prompting for and collecting of user input handled within the classes themselves. The reasoning behind this was that each class knows what kind of input it needs, so it makes sense to me that it knows how to ask for it (given an ostream through which to ask and an istream to collect the input from). My lecturer says that the prompting and input should all be handled in the main program, which seems to me somewhat messier, and would make it trickier to extend the program to handle different types of students.

    I am considering, as a compromise, a helper class that handles the prompting and collection of user input for each type of Student, which the main program could then call on. The advantage of this would be that the student classes don't have as much in them (so they're cleaner), but the input functionality can still be bundled with them when it is required. It also means more types of Student could be added without major changes to the main program, as long as helper classes are provided for these new classes. Also, a helper class could be swapped for an alternative-language version without any changes to the student class itself.

    What are the major advantages and disadvantages of the three options for user input (fully encapsulated, helper class, or in the main program)?


  • Help me write my LISP :) LISP environments, Ruby Hashes...

    - by MikeC8
    I'm implementing a rudimentary version of Lisp in Ruby just to familiarize myself with some concepts. I'm basing my implementation on Peter Norvig's Lispy (http://norvig.com/lispy.html). There's something I'm missing here, though, and I'd appreciate some help.

    Norvig subclasses Python's dict as follows:

        class Env(dict):
            "An environment: a dict of {'var':val} pairs, with an outer Env."
            def __init__(self, parms=(), args=(), outer=None):
                self.update(zip(parms, args))
                self.outer = outer
            def find(self, var):
                "Find the innermost Env where var appears."
                return self if var in self else self.outer.find(var)

    He then goes on to explain why he does this rather than just using a dict. However, for some reason, his explanation keeps passing in through my eyes and out the back of my head. Why not use a dict, and then, inside the eval function, when a new "sub-environment" needs to be created, just take the existing dict, update the key/value pairs that need updating, and pass that new dict into the next eval? Won't the interpreter keep track of the previous "outer" envs? And won't the nature of the recursion ensure that values are pulled out from "inner" to "outer"?

    I'm using Ruby, and I tried to implement things this way. Something's not working, though, and it might be because of this, or perhaps not. Here's my eval function, env being a regular Hash:

        def eval(x, env = $global_env)
          # ........
          elsif x[0] == "lambda" then
            ->(*args) { eval(x[2], env.merge(Hash[*x[1].zip(args).flatten(1)])) }
          # ........
        end

    The line that matters, of course, is the "lambda" one. If there is a difference, what's importantly different between what I'm doing here and what Norvig did with his Env class? If there's no difference, then perhaps someone can enlighten me as to why Norvig uses the Env class. Thanks :)
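
    One concrete difference worth checking (an inference -- the failing case isn't shown in the post): merge copies the bindings, so any later mutation of a variable (Scheme's set!, or a define inside a body) writes to the copy, never to the environment where the variable was originally bound. The chained structure exists so that writes can find the defining scope. A Ruby sketch mirroring Norvig's Env:

        class Env < Hash
          attr_reader :outer

          def initialize(parms = [], args = [], outer = nil)
            update(Hash[*parms.zip(args).flatten(1)])
            @outer = outer
          end

          # Innermost environment where var is bound.
          def find(var)
            key?(var) ? self : outer.find(var)
          end
        end

        # A lambda then extends the chain instead of merging:
        #   ->(*args) { eval(x[2], Env.new(x[1], args, env)) }
        # and set! becomes:  env.find(var)[var] = value

    If the toy Lisp never mutates bindings, the merge version behaves the same; the chain only starts to matter once closures share state.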


  • how to Solve the "Digg" problem in MongoDB

    - by user193116
    A while back, a Digg developer posted the blog entry "http://about.digg.com/blog/looking-future-cassandra", in which he described one of the problems that were not optimally solved in MySQL. This was cited as one of the reasons for their move to Cassandra. I have been playing with MongoDB, and I would like to understand how to design MongoDB collections for this problem.

    From the article, the schema for this information in MySQL is:

        CREATE TABLE Diggs (
            id      INT(11),
            itemid  INT(11),
            userid  INT(11),
            digdate DATETIME,
            PRIMARY KEY (id),
            KEY user (userid),
            KEY item (itemid)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

        CREATE TABLE Friends (
            id           INT(10) AUTO_INCREMENT,
            userid       INT(10),
            username     VARCHAR(15),
            friendid     INT(10),
            friendname   VARCHAR(15),
            mutual       TINYINT(1),
            date_created DATETIME,
            PRIMARY KEY (id),
            UNIQUE KEY Friend_unique (userid,friendid),
            KEY Friend_friend (friendid)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

    This problem is ubiquitous in social networking implementations: people befriend a lot of people, and those friends in turn digg a lot of things. Quickly showing users what their friends are up to is critical. I understand that several blogs have since provided pure-RDBMS solutions with indexes for this issue; however, I am curious how it could be solved in MongoDB.
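
    One hedged modelling sketch for the MongoDB shell (the collection and field names are illustrative, not canonical): embed each user's friend ids as an array, then fetch the friends' recent diggs with $in -- two round trips and no join:

        // users: one document per user, friend ids embedded
        db.users.insert({ _id: 1, name: "alice", friends: [2, 5, 9] });

        // diggs: one document per digg, indexed for the friend-activity query
        db.diggs.ensureIndex({ userid: 1, digdate: -1 });
        db.diggs.insert({ itemid: 42, userid: 2, digdate: new Date() });

        // "what are my friends up to?"
        var me = db.users.findOne({ _id: 1 });
        db.diggs.find({ userid: { $in: me.friends } })
                .sort({ digdate: -1 })
                .limit(20);

    The trade-off is the one the Digg post is really about: this fans out over the friends list at read time. The Cassandra-style alternative fans out on write, appending each new digg into a pre-materialized timeline per follower, paying storage and write amplification to make the read a single lookup.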


  • Warning: cast increases required alignment

    - by dash-tom-bang
    I've recently been working on a platform where a legacy codebase issues a large number of "cast increases required alignment to N" warnings, where N is the size of the target of the cast.

        struct Message
        {
            int32_t id;
            int32_t type;
            int8_t  data[16];
        };

        int32_t GetMessageInt(const Message& m)
        {
            return *reinterpret_cast<const int32_t*>(&m.data[0]);
        }

    Hopefully it's obvious that a "real" implementation would be a bit more complex, but the basic point is this: I've got data coming from somewhere, I know it's aligned (because I need id and type to be aligned), and yet I get the message that the cast is increasing the required alignment -- in the example case, to 4.

    Now, I know I can suppress the warning with an argument to the compiler, and I know I can cast the expression inside the parentheses to void* first, but I don't really want to go through every bit of code that needs this sort of manipulation (there's a lot, because we load a lot of data off disk, and that data comes in as char buffers so that we can easily advance pointers). Can anyone give me any other thoughts on this problem? To me this seems like such an important and common operation that you wouldn't want to warn about it, and if there actually is the possibility of doing it wrong, then suppressing the warning isn't going to help. Finally, can't the compiler know, as I do, how the object in question is actually aligned within the structure, so that it needn't worry about the alignment of that particular object unless it got bumped a byte or two?
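
    Where the casts can be touched at all, the usual alignment-agnostic rewrite is memcpy (a sketch; on the struct above it behaves identically, and it stays correct even if the bytes ever arrive genuinely unaligned). Compilers generally reduce it to a single load when the alignment is provable:

        #include <cstring>

        int32_t GetMessageInt(const Message& m)
        {
            int32_t value;
            std::memcpy(&value, m.data, sizeof value); // no cast, no alignment assumption
            return value;
        }

    As a bonus, this also sidesteps the strict-aliasing question that the reinterpret_cast raises on top of the alignment one.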


  • Orientation issue while presenting Modal ViewController

    - by Jacky Boy
    Current scenario: right now I am showing a UIViewController using a segue with the Modal style and Sheet presentation. This modal gets its superview's bounds changed in order to have the dimensions I want, like this:

        - (void)viewWillLayoutSubviews
        {
            [super viewWillLayoutSubviews];
            self.view.superview.bounds = WHBoundsRect;
        }

    The only allowed orientations are UIInterfaceOrientationLandscapeLeft and UIInterfaceOrientationLandscapeRight. Since the modal has some text fields, and the keyboard would cover the modal itself, I am changing its center so it moves up a bit.

    The problem: what I am noticing is that I am unable to work with the Y coordinate. In order to move the view vertically (remember, it's in landscape) I need to work with X. The problem is that when the orientation is UIInterfaceOrientationLandscapeLeft I need to come up with a negative X, and when it's UIInterfaceOrientationLandscapeRight I need a positive X. So the X/Y coordinate system seems to be "glued" to the top-left corner of the portrait orientation, and when a rotation occurs, it stays there.

    What I have done: I have something like this:

        UIInterfaceOrientation orientation = [[UIApplication sharedApplication] statusBarOrientation];
        CGFloat newX = 0.0f;
        if (orientation == UIInterfaceOrientationLandscapeLeft) {
            // Logic for calculating the negative X.
        } else {
            // Logic for calculating the positive X.
        }

    It works exactly as I want, but it seems a very fragile implementation. Am I missing something? Is this the expected behaviour?


  • OSGi, Servlets and JPA hello world / tutorial / example

    - by Kamil
    I want to build a web application which is basically a RESTful web service serving JSON messages. I would like it to be as simple as possible. I was thinking about using servlets (with annotations). JPA as a database layer is a must -- TopLink or Hibernate -- preferably running on Tomcat.

    I want to have the app divided into modules serving different functionality (auth service, customer service, etc.), and I would like to be able to update those modules without reinstalling the whole application on the server -- like Eclipse plugins: the user is notified (when he enters the webapp's home URL) that an update is available, clicks it, and the app downloads and installs the updated module. I think this functionality could be built with OSGi, but I can't find any example code or tutorial covering a simple hello-world updatable servlet providing some data from a database through JPA.

    I'm looking for advice on the following:

    - Is OSGi the right tool for this, or can it be done with something simpler?
    - Where can I find some examples covering the topic (or topics) I need for this project?
    - Which OSGi implementation would be best -- i.e. simplest -- for this task?

    (My knowledge of OSGi is basic. I know how bundles are described, and I understand the concept of the OSGi container and what it does. I have never created any OSGi app yet.)


  • Iterate through every node in an XML document

    - by Rachel
    Hi, I am trying to iterate through every node in an XML document -- element nodes, text nodes and comments. With the XSL below, the very first statement prints the complete XML. How do I copy only the very first node in $nodes, and then call the process-nodes template again with that node removed for the next iteration?

        <?xml version='1.0'?>
        <xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

          <xsl:template match="/">
            <xsl:call-template name="process-nodes">
              <xsl:with-param name="nodes" select="//node()" as="node()*"/>
            </xsl:call-template>
          </xsl:template>

          <xsl:template name="process-nodes">
            <xsl:param name="nodes" as="node()*"/>
            <xsl:copy-of select="$nodes[1]"/>
            <xsl:if test="$nodes">
              <xsl:call-template name="process-nodes">
                <xsl:with-param name="nodes" select="remove($nodes, 1)"/>
              </xsl:call-template>
            </xsl:if>
          </xsl:template>

        </xsl:stylesheet>

    Note: I am looking to fix the issue within this kind of implementation rather than changing the template match to <xsl:template match="@* | node()">, as I need to do some processing that requires this approach. Thanks.


  • Where to define a filter function for a form field in my Joomla component's preferences

    - by Herman
    I am creating a component in Joomla 2.5. This component has some options that are defined in its config.xml, so they can be set in the component's preferences. Now I would like to apply a filter to one of these option fields, using the attribute filter="my_filter". In the source code of JForm I saw the following lines at the very end of the implementation of JForm::filterField():

        if (strpos($filter, '::') !== false && is_callable(explode('::', $filter))) {
            $return = call_user_func(explode('::', $filter), $value);
        } elseif (function_exists($filter)) {
            $return = call_user_func($filter, $value);
        }

    That's exactly what I needed for using a filter function defined by myself. I managed to do this for form fields used in the views of my component: I defined the filter function as MyComponentHelper::my_filter(), where MyComponentHelper is a helper class which I always load in the very base of my component, and in the form's XML I added filter="MyComponentHelper::my_filter" to the fields that have to be filtered.

    However, when I try to apply the filter function to a form field in my component's preferences, I am not in my own component but in com_config instead, so my helper class is not available! Hence my question: where can I define my own filter function in such a way that it can be found and called by JForm::filterField() from com_config? Help is very much appreciated.


  • Access Controller Context/ TempData from business objects

    - by thanikkal
    I am trying to build a session/tempdata provider that can be swapped out. The default provider will work on top of ASP.NET MVC, and it needs to access the ASP.NET MVC TempData from a business-object class. I know TempData is available through the controller context, but I can't seem to find whether that is exposed through HttpContext or something similar. I don't really want to pass the controller context as an argument, as that would dilute my interface definition: only the ASP.NET-based session provider needs it; others (using a NoSQL DB, etc.) don't care about the controller context.

    To clarify further, here is a little more code. My ISession interface looks like this. When this code goes to production, the session/tempdata is expected to work against a NoSQL DB, but I would also like another implementation that works on top of the ASP.NET MVC session/tempdata for my dev testing, etc.

        public interface ISession
        {
            T GetTempData<T>(string key);
            void PutTempData<T>(string key, T value);
            T GetSessiondata<T>(string key);
            void PutSessiondata<T>(string key, T value);
        }


  • Performance of C# method polymorphism with generics

    - by zildjohn01
    I noticed that in C#, unlike C++, you can combine virtual and generic methods. For example:

        using System.Diagnostics;

        class Base
        {
            public virtual void Concrete() { Debug.WriteLine("base concrete"); }
            public virtual void Generic<T>() { Debug.WriteLine("base generic"); }
        }

        class Derived : Base
        {
            public override void Concrete() { Debug.WriteLine("derived concrete"); }
            public override void Generic<T>() { Debug.WriteLine("derived generic"); }
        }

        class App
        {
            static void Main()
            {
                Base x = new Derived();
                x.Concrete();
                x.Generic<PerformanceCounter>();
            }
        }

    Given that any number of versions of Generic<T> could be instantiated, it doesn't look like the standard vtbl approach can be used to resolve method calls -- and in fact it isn't. Here's the generated code:

        x.Concrete();
            mov ecx,dword ptr [ebp-8]
            mov eax,dword ptr [ecx]
            call dword ptr [eax+38h]

        x.Generic<PerformanceCounter>();
            push 989A38h
            mov ecx,dword ptr [ebp-8]
            mov edx,989914h
            call 76A874F1
            mov dword ptr [ebp-4],eax
            mov ecx,dword ptr [ebp-8]
            call dword ptr [ebp-4]

    The extra code appears to look up a dynamic vtbl according to the generic parameters, and then call into it. Has anyone written about the specifics of this implementation? How well does it perform compared to the non-generic case?


  • Linq2Sql: query - subquery optimisation

    - by Budda
    I have the following query:

        IList<InfrStadium> stadiums =
            (from sector in DbContext.sectors
             where sector.Type == typeValue
             select new InfrStadium(sector.TeamId)
            ).ToList();

    and the InfrStadium class constructor:

        private InfrStadium(int teamId)
        {
            IList<Sector> teamSectors =
                (from sector in DbContext.sectors
                 where sector.TeamId == teamId
                 select sector).ToList();
            // ... work with data
        }

    The current implementation performs 1+n queries, where n is the number of records fetched the first time. I want to optimize that, and I would love to do it using the 'group' operator, like this:

        IList<InfrStadium> stadiums =
            (from sector in DbContext.sectors
             group sector by sector.TeamId into team_sectors
             select new InfrStadium(team_sectors.Key, team_sectors)
            ).ToList();

    with an appropriate constructor:

        private InfrStadium(int iTeamId, IEnumerable<InfrStadiumSector> eSectors)
        {
            IList<Sector> teamSectors = eSectors.ToList();
            // ... work with data
        }

    But the attempt to run the query causes the following error:

        Expression of type 'System.Int32' cannot be used for constructor parameter of type
        'System.Collections.Generic.IEnumerable`1[InfrStadiumSector]'

    Question 1: Could you please explain what is wrong here? I don't understand why 'team_sectors' is treated as 'System.Int32'.

    I tried to change the query a little (replacing IEnumerable with IQueryable):

        IList<InfrStadium> stadiums =
            (from sector in DbContext.sectors
             group sector by sector.TeamId into team_sectors
             select new InfrStadium(team_sectors.Key, team_sectors.AsQueryable())
            ).ToList();

    with an appropriate constructor:

        private InfrStadium(int iTeamId, IQueryable<InfrStadiumSector> eSectors)
        {
            IList<Sector> teamSectors = eSectors.ToList();
            // ... work with data
        }

    In this case I received another, similar error:

        Expression of type 'System.Int32' cannot be used for parameter of type
        'System.Collections.Generic.IEnumerable`1[InfrStadiumSector]' of method
        'System.Linq.IQueryable`1[InfrStadiumSector] AsQueryable[InfrStadiumSector](...)'

    Question 2: Actually, the same question -- I can't understand at all what is going on here.

    P.S. I have another idea for optimizing the query (described here: Linq2Sql: query optimisation), but I would love to find a solution with one request to the DB.
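
    A hedged guess at the root cause (the full mapping isn't shown in the post): inside a LINQ to SQL expression tree, team_sectors is an IGrouping that the provider has to translate to SQL, and passing it into an arbitrary constructor is not translatable, so the provider mangles the call. The usual workaround is to fetch the rows in one query and do the grouping and construction in memory:

        IList<InfrStadium> stadiums =
            DbContext.sectors
                .ToList()                               // single DB round trip
                .GroupBy(s => s.TeamId)                 // LINQ to Objects from here on
                .Select(g => new InfrStadium(g.Key, g)) // constructor runs in memory
                .ToList();

    (This assumes the constructor can take IEnumerable<Sector>; the post names both Sector and InfrStadiumSector, so adjust to whichever the real element type is.) One query, and the 1+n pattern disappears.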


  • How can I simply change a class variable from another class in Objective-C?

    - by Daniel
    I simply want to change a variable of an object from another class. I can compile without a problem, but my variable is always set to 'null'. I used the following code:

    Object.h:

        @interface Object : NSObject {
            //...
            NSString *color;
            //...
        }

        @property (nonatomic, retain) NSString *color;

        + (id)Object;
        - (void)setColor:(NSString *)col;
        - (NSString *)getColor;

        @end

    Object.m:

        + (id)Object {
            return [[[Object alloc] init] autorelease];
        }

        - (void)setColor:(NSString *)col {
            self.color = col;
        }

        - (NSString *)getColor {
            return self.color;
        }

    MyViewController.h:

        #import "Object.h"

        @interface ClassesTestViewController : UIViewController {
            Object *myObject;
            UILabel *label1;
        }

        @property UILabel *label1;
        @property (assign) Object *myObject;

        @end

    MyViewController.m:

        #import "Object.h"

        @implementation MyViewController
        @synthesize myObject;

        - (void)viewDidLoad {
            [myObject setColor:@"red"];
            NSLog(@"Color = %@", [myObject getColor]);
            [super viewDidLoad];
        }

    The NSLog message is always "Color = (null)". I tried many different ways to solve this problem, but with no success. Any help would be appreciated.
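
    The likely culprit (an inference from the code shown): myObject is never created, and in Objective-C messages sent to nil silently do nothing and return nil, which matches the symptom exactly. A minimal fix sketch for viewDidLoad:

        - (void)viewDidLoad {
            [super viewDidLoad];

            // myObject was nil; create it before use. alloc/init is used here
            // because the property is 'assign' and would not retain the
            // autoreleased instance returned by [Object Object].
            self.myObject = [[Object alloc] init];

            [myObject setColor:@"red"];
            NSLog(@"Color = %@", [myObject getColor]); // now logs "Color = red"
        }

    (Under manual reference counting this object should also be released in dealloc; switching the property to retain and using [Object Object] would be the more conventional follow-up.)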

