Search Results

Search found 604 results on 25 pages for 'removal'.

Page 19/25 | < Previous Page | 15 16 17 18 19 20 21 22 23 24 25  | Next Page >

  • Design pattern to keep track of UITableView rows' correspondence to underlying data in constant time

    - by DenNukem
    When my model changes I want to animate the changes in UITableView by inserting/deleting rows. For that I need to know the ordinal of a given row (so I can construct an NSIndexPath), which I find hard to do in better-than-linear time.

    For example, consider that I have a list of address book entries which are manually sorted by the user, i.e. there is no ordering "key" that represents the sort order. There is also a corresponding UITableView that shows one row per address book entry. When UITableView queries the data source I query the NSMutableArray populated with my entries and return the required data in constant time for each row. However, if there is a change in the underlying model I get a notification: "Joe Smith, id#123 has been removed". Now I have a dilemma.

    A naive approach would be to scan the array, determine the index at which Joe Smith is, and then ask UITableView to remove that precise row from the view, also removing it from the array. However, the scan will take linear time to finish. Now I could have an NSDictionary which allows me to find Joe Smith in constant time, but that doesn't do me a lot of good because I still need to find his ordinal index within the array in order to instruct UITableView to remove that row, which is again a linear search. I could further decide to store each object's ordinal inside the object itself to make the lookup constant, but it will become outdated after the first such update, as all subsequent index values will have changed due to the removal of an object.

    So what is the correct design pattern to accurately reflect model changes in the UITableView in constant (or at least logarithmic) time?
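
    One pragmatic compromise, sketched below in rough Objective-C (the Entry class and its entryId property are hypothetical): batch the model notifications, then pay a single O(n) pass per update to map removed ids to row indices, instead of one scan per removed entry.

        // Collect the ids from all pending "removed" notifications first.
        NSSet *removedIds = [NSSet setWithObject:@"123"];

        // One pass over the backing array finds every affected row index.
        NSMutableIndexSet *rows = [NSMutableIndexSet indexSet];
        [entries enumerateObjectsUsingBlock:^(Entry *e, NSUInteger idx, BOOL *stop) {
            if ([removedIds containsObject:e.entryId]) [rows addIndex:idx];
        }];

        // Build the index paths before mutating the array.
        NSMutableArray *paths = [NSMutableArray array];
        [rows enumerateIndexesUsingBlock:^(NSUInteger idx, BOOL *stop) {
            [paths addObject:[NSIndexPath indexPathForRow:idx inSection:0]];
        }];

        // Keep model and view in lockstep.
        [entries removeObjectsAtIndexes:rows];
        [tableView deleteRowsAtIndexPaths:paths withRowAnimation:UITableViewRowAnimationFade];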

    Read the article

  • Nature of lock on child table during deletion (SQL Server)

    - by Mubashar Ahmad
    Dear Devs,

    For a couple of days I have been thinking about the following scenario. Consider two tables with a one-to-many parent/child relationship. On removal of a parent row I have to delete the child rows that are related to the parent. Simple, right? I have to wrap the operation in a transaction scope, which I can do as follows (it's pseudo-code, but I am doing this in C# code using an ODBC connection, and the database is SQL Server):

        begin transaction (read committed)
        read all child where child.fk = p1
        foreach (child)
            delete child where child.pk = cx
        delete parent where parent.pk = p1
        commit trans

    OR:

        begin transaction (read committed)
        delete all child where child.fk = p1
        delete parent where parent.pk = p1
        commit trans

    Now there are a couple of questions on my mind:

    1. Which one of the above is better to use, especially considering a real-time system where thousands of other operations (select/update/delete/insert) are being performed within a span of seconds?
    2. Does it ensure that no new child with child.fk = p1 will be added until the transaction completes?
    3. If yes to the second question, how does it ensure that? Does it take table-level locks, or what?
    4. Is there any kind of index locking supported by SQL Server? If yes, what does it do and how can it be used?

    Regards,
    Mubashar
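
    For what it's worth, a hedged sketch of the second form with key-range locking: READ COMMITTED by itself does not stop a new child with fk = p1 from slipping in between the two DELETEs, but SERIALIZABLE takes key-range locks (assuming an index on child.fk) that block exactly such inserts until commit.

        SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
        BEGIN TRANSACTION;
            -- key-range locks on the child.fk index block new rows with fk = @p1
            DELETE FROM child WHERE child.fk = @p1;
            DELETE FROM parent WHERE parent.pk = @p1;
        COMMIT TRANSACTION;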

    Read the article

  • What should be the responsibility of a presenter here?

    - by Achu
    I have a 3-layer design (UI / BLL / DAL); the UI is ASP.NET MVC. In my view I have a collection of products for a category, e.g. Product 1, Product 2, etc. A user is able to select or remove products (by selecting a check box) in the view, and finally saves the changes as a collection on submit.

    With this 3-layer design, how should this product collection be saved? Where should the filtering of products (removal and addition) against the category object happen? Here are my options.

    (A) It is the responsibility of the controller; the pseudo-code would be:

        Find products that the user selected or removed and compare with existing records.
        Add or delete that collection on the category object.
        Call SaveCategory(category); // BLL CALL

    Here the first two process steps occur in the controller.

    (B) It is the responsibility of the BLL; the pseudo-code would be:

        Collect whatever products the user selected.
        Call SaveCategory(category, products); // BLL CALL

    Here it's up to SaveCategory (BLL) to decide what products should be removed from and added to the database.

    Thanks
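
    A sketch of option (B), with hypothetical repository types and assuming Product equality is keyed on its id: the BLL owns the diff, so the rule lives in one place no matter which controller calls it.

        // BLL
        public void SaveCategory(Category category, IList<Product> selectedProducts)
        {
            // current state from the DAL
            List<Product> existing = _productRepository.GetByCategory(category.Id);

            // requires System.Linq; Except assumes id-based equality on Product
            List<Product> toAdd = selectedProducts.Except(existing).ToList();
            List<Product> toRemove = existing.Except(selectedProducts).ToList();

            foreach (Product p in toAdd) category.Products.Add(p);
            foreach (Product p in toRemove) category.Products.Remove(p);

            _categoryRepository.Save(category); // DAL call
        }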

    Read the article

  • How do you remove invalid hexadecimal characters from an XML-based data source prior to constructing an XmlReader?

    - by Oppositional
    Is there any easy/general way to clean an XML-based data source prior to using it in an XmlReader, so that I can gracefully consume XML data that does not conform to the hexadecimal character restrictions placed on XML?

    Note: The solution needs to handle XML data sources that use character encodings other than UTF-8, e.g. by specifying the character encoding in the XML document declaration. Not mangling the character encoding of the source while stripping invalid hexadecimal characters has been a major sticking point. The removal of invalid hexadecimal characters should only remove hexadecimal encoded values, as you can often find href values in data that happen to contain a string that would be a string match for a hexadecimal character reference.

    Background: I need to consume an XML-based data source that conforms to a specific format (think Atom or RSS feeds), but want to be able to consume data sources that have been published with invalid hexadecimal characters per the XML specification. In .NET, if you have a Stream that represents the XML data source and then attempt to parse it using an XmlReader and/or XPathDocument, an exception is raised due to the inclusion of invalid hexadecimal characters in the XML data. My current attempt to resolve this issue is to parse the Stream as a string and use a regular expression to remove and/or replace the invalid hexadecimal characters, but I am looking for a more performant solution.
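
    One avenue worth noting as a sketch rather than a complete answer (it tolerates the bad characters instead of stripping them): XmlReaderSettings.CheckCharacters can turn off the character-range check, while the stream decoding still honors the declared encoding.

        // not a cleaning step - the reader simply stops throwing on
        // characters outside the XML 1.0 range
        XmlReaderSettings settings = new XmlReaderSettings();
        settings.CheckCharacters = false;

        using (XmlReader reader = XmlReader.Create(xmlStream, settings))
        {
            while (reader.Read())
            {
                // consume the feed as usual
            }
        }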

    Read the article

  • Make seems to think a prerequisite is an intermediate file, removes it

    - by James
    For starters, this exercise in GNU make was admittedly just that: an exercise rather than a practicality, since a simple bash script would have sufficed. However, it brought up interesting behavior I don't quite understand.

    I've written a seemingly simple Makefile to handle generation of SSL key/cert pairs as necessary for MySQL. My goal was for make <name> to result in <name>-key.pem, <name>-cert.pem, and any other necessary files (specifically, the CA pair if any of it is missing or needs updating, which leads into another interesting follow-up exercise of handling reverse deps to reissue any certs that had been signed by a missing/updated CA cert).

    After executing all rules as expected, make seems to be too aggressive at identifying intermediate files for removal; it removes a file I thought would be "safe" since it should have been generated as a prereq to the main rule I'm invoking. (Humbly translated, I have likely misinterpreted make's documented behavior to suit my expectation, but don't understand how. ;-)

    Edited (thanks, Chris!): Adding %-cert.pem to .PRECIOUS does, of course, prevent the deletion. (I had been using the wrong syntax.)

    Makefile:

        OPENSSL = /usr/bin/openssl   # Corrected, thanks Chris!

        .PHONY: clean

        default: ca

        clean:
            rm -I *.pem

        %: %-key.pem %-cert.pem
            @# Placeholder (to make this implicit rule create a rule and not cancel one)

        Makefile:
            @# Prevent the catch-all from matching Makefile

        ca-cert.pem: ca-key.pem
            $(OPENSSL) req -new -x509 -nodes -days 1000 -key ca-key.pem > $@

        %-key.pem:
            $(OPENSSL) genrsa 2048 > $@

        %-csr.pem: %-key.pem
            $(OPENSSL) req -new -days 1000 -nodes -key $< > $@

        %-cert.pem: %-csr.pem ca-cert.pem ca-key.pem
            $(OPENSSL) x509 -req -in $< -days 1000 -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 > $@

    Output:

        $ make host1
        /usr/bin/openssl genrsa 2048 > ca-key.pem
        /usr/bin/openssl req -new -x509 -nodes -days 1000 -key ca-key.pem > ca-cert.pem
        /usr/bin/openssl genrsa 2048 > host1-key.pem
        /usr/bin/openssl req -new -days 1000 -nodes -key host1-key.pem > host1-csr.pem
        /usr/bin/openssl x509 -req -in host1-csr.pem -days 1000 -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 > host1-cert.pem
        rm host1-csr.pem host1-cert.pem

    This is driving me crazy, and I'll happily try any suggestions and post results. If I'm just totally noobing out on this one, feel free to jibe away. You can't possibly hurt my feelings. :)
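
    For reference, a minimal sketch of the fix mentioned in the edit, plus an alternative; both are standard GNU make (the pattern, not a literal file name, is what must be listed):

        # keep every file matching these patterns, even when make considers
        # them intermediate
        .PRECIOUS: %-cert.pem %-csr.pem

        # or: with no prerequisites, .SECONDARY disables automatic deletion
        # of all intermediate files
        .SECONDARY: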

    Read the article

  • MVVM pattern: ViewModel updates after Model server roundtrip

    - by Pavel Savara
    I have stateless services and anemic domain objects on the server side. The model between server and client is a POCO DTO. The client should become MVVM. The model could be a graph of about 100 instances of 20 different classes. The client editor contains diverse tab pages, all of them live-connected to the model/viewmodel.

    My problem is how to propagate changes after a server round-trip in a nice way. It's quite easy to propagate changes from the ViewModel to the DTO. For the way back it would be possible to throw away the old DTO and replace it wholesale with the new one, but that would cause a lot of redrawing for lists/DataTemplates.

    I could gather the server-side changes and transmit them to the client side, but the names of the changed fields would be domain/DTO specific, not ViewModel specific, and the mapping seems nontrivial to me. If I did it imperatively after the round-trip, it would break the SoC/modularity of the ViewModels.

    I'm thinking about some kind of mapping rule engine, something like AutoMapper or EmitMapper, but those solve only very plain use cases. I don't see how such a tool would map/propagate/convert adding items to a list, or removal. How would it identify instances in collections so it could merge values into existing instances? It should propagate validation/error info as well.

    Maybe I should implement INotifyPropertyChanged on the DTO and try to replay server-side events on it? And then bind the ViewModel to it? Would binding solve the problems with collection merges in a nice way? Is the EventAggregator from PRISM useful for that? Is there any event record-replay component? Is there a better client-side pattern for an architecture with server-side logic?
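
    For the collection-merge part specifically, here is a rough sketch (type names hypothetical, .NET 3.5-style C#) of a key-based merge into an ObservableCollection, so WPF only redraws rows whose contents actually changed:

        // requires System.Linq and System.Collections.ObjectModel
        void Merge(ObservableCollection<ItemViewModel> target, IList<ItemDto> fresh)
        {
            // index the existing view-models by the DTO key
            Dictionary<int, ItemViewModel> byId = new Dictionary<int, ItemViewModel>();
            foreach (ItemViewModel existing in target) byId[existing.Id] = existing;

            for (int i = 0; i < fresh.Count; i++)
            {
                ItemViewModel vm;
                if (byId.TryGetValue(fresh[i].Id, out vm))
                    vm.UpdateFrom(fresh[i]);                        // raises PropertyChanged
                else
                    target.Insert(i, new ItemViewModel(fresh[i]));  // single row added
            }

            // drop view-models whose DTO disappeared on the server
            for (int i = target.Count - 1; i >= 0; i--)
                if (!fresh.Any(d => d.Id == target[i].Id))
                    target.RemoveAt(i);
        }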

    Read the article

  • Code Golf: Shortest Turing-complete interpreter.

    - by ilya n.
    I've just tried to create the smallest possible language interpreter. Would you like to join and try?

    Rules of the game:

    - You should specify the programming language you're interpreting. If it's a language you invented, it should come with a list of commands in the comments.
    - Your code should start with an example program and data assigned to your code and data variables, and end with output of the result. It's preferable that there are debug statements at every intermediate step.
    - Your code should be runnable as written.
    - You can assume that the data are 0s and 1s (int, string or boolean, your choice) and the output is a single bit.
    - The language should be Turing-complete in the sense that for any algorithm written in a standard model, such as a Turing machine, Markov chains, or a similar model of your choice, it's reasonably obvious (or explained) how to write a program that, after being executed by your interpreter, performs the algorithm.
    - The length of the code is defined as the length of the code after removal of the input part, the output part, the debug statements, and non-necessary whitespace. Please add the resulting code and its length to the post.
    - You can't use functions that make the compiler execute code for you, such as eval(), exec() or similar.

    This is a Community Wiki, meaning neither the question nor the answers get reputation points from votes. But vote anyway!
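
    As an illustration of the shape of an entry (assuming Brainfuck as the interpreted language, with commands > < + - . [ ], and , omitted since input is preloaded onto the tape), a sketch in Python:

        code = "+++."              # example program: three increments, then output
        data = [0, 1]              # input bits, preloaded at the start of the tape
        tape = data + [0] * 30000  # working tape
        out = []
        i = p = 0                  # instruction pointer, data pointer
        while i < len(code):
            c = code[i]
            if c == '>': p += 1
            elif c == '<': p -= 1
            elif c == '+': tape[p] = (tape[p] + 1) % 256
            elif c == '-': tape[p] = (tape[p] - 1) % 256
            elif c == '.': out.append(tape[p])
            elif c == '[' and tape[p] == 0:    # jump forward past the matching ]
                depth = 1
                while depth:
                    i += 1
                    depth += {'[': 1, ']': -1}.get(code[i], 0)
            elif c == ']' and tape[p] != 0:    # jump back to the matching [
                depth = 1
                while depth:
                    i -= 1
                    depth += {']': 1, '[': -1}.get(code[i], 0)
            i += 1
        print(out)                 # debug/result: [3]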

    Read the article

  • Svn repository split problem

    - by Tuminoid
    I want to split a directory from a large Subversion repository into a repository of its own, and keep the history of the files in that directory. I tried the regular way of doing it first:

        svnadmin dump /path/to/repo > largerepo.dump
        cat largerepo.dump | svndumpfilter include my/directory > mydir.dump

    but that does not work, since the directory has been moved and copied over the years, and files have been moved into and out of it to other parts of the repository. The result is a lot of these:

        svndumpfilter: Invalid copy source path '/some/old/path'

    The next thing I tried was to include those /some/old/path entries as they appear, and after a long, long list of included files and directories the svndumpfilter completes, BUT importing the resulting dump does not produce the same files as the current directory has.

    So, how do I properly split the directory from that repository while keeping the history?

    EDIT: I specifically want trunk/myproj to be the trunk in a new repository, PLUS have the new repository include none of the other old stuff, i.e. there should be no possibility for anyone to update to an old revision before the split and get/see the files. The svndumpfilter solution I tried would achieve exactly that; sadly it's not doable since the paths/files have been moved around. The solution by ng isn't acceptable since it's basically a clone plus removal of extras, which keeps ALL the history, not just the relevant myproj history.

    BUMP: C'mon, there must be someone who definitely knows if this is doable or not, and how!
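
    For the record, a sketch of the usual incantation once every historical path is known (the path list here is illustrative; --drop-empty-revs and --renumber-revs keep the filtered history from carrying empty shell revisions):

        svnadmin dump /path/to/repo > largerepo.dump
        svndumpfilter include --drop-empty-revs --renumber-revs \
            trunk/myproj \
            some/old/path \
            < largerepo.dump > myproj.dump
        svnadmin create /path/to/newrepo
        svnadmin load /path/to/newrepo < myproj.dump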

    Read the article

  • Best Practice - Removing item from generic collection in C#

    - by Matt Davis
    I'm using C# in Visual Studio 2008 with .NET 3.5.

    I have a generic dictionary that maps types of events to a generic list of subscribers. A subscriber can be subscribed to more than one event.

        private static Dictionary<EventType, List<ISubscriber>> _subscriptions;

    To remove a subscriber from the subscription list, I can use either of these two options.

    Option 1:

        ISubscriber subscriber; // defined elsewhere

        foreach (EventType eventType in _subscriptions.Keys)
        {
            if (_subscriptions[eventType].Contains(subscriber))
            {
                _subscriptions[eventType].Remove(subscriber);
            }
        }

    Option 2:

        ISubscriber subscriber; // defined elsewhere

        foreach (EventType eventType in _subscriptions.Keys)
        {
            _subscriptions[eventType].Remove(subscriber);
        }

    I have two questions. First, notice that Option 1 checks for existence before removing the item, while Option 2 uses a brute-force removal, since Remove() does not throw an exception. Of these two, which is the preferred, "best-practice" way to do this? Second, is there another, "cleaner," more elegant way to do this, perhaps with a lambda expression or a LINQ extension? I'm still getting acclimated to these two features. Thanks.

    EDIT: Just to clarify, I realize that the choice between Options 1 and 2 is a choice of speed (Option 2) versus maintainability (Option 1). In this particular case, I'm not necessarily trying to optimize the code, although that is certainly a worthy consideration. What I'm trying to understand is whether there is a generally well-established practice for doing this. If not, which option would you use in your own code?
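
    A quick sketch of the point about Option 2, in a slightly tighter form: List<T>.Remove already answers the "does it exist" question by returning false when the item is absent, so the Contains check just adds a second linear scan, and iterating Values skips the repeated key lookup.

        foreach (List<ISubscriber> subscribers in _subscriptions.Values)
        {
            subscribers.Remove(subscriber);   // no-op (returns false) when absent
        }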

    Read the article

  • 404 Not Found When Requesting URI With Encoded Parameters

    - by Richard Knop
    I am pretty sure this is some problem with the Apache configuration, because it used to work on the previous hosting provider with the same PHP/MySQL configuration. In my application, users are able to delete photos by going to URIs like this:

        http://example.com/my-account/remove-media/id/9/ret/my-account%252Fedit-album%252Fid%252F1

    The parameter id is the id of a photo to be removed; the parameter ret is a relative URL where the user should be redirected after the removal of the photo. But after clicking on a link like that I get a 404 Not Found error with the text:

        Not Found
        The requested URL /public/my-account/remove-media/id/9/ret/my-account/edit-album/id/1 was not found on this server.

    It used to work on my previous hosting provider, so I guess it is just some easy Apache configuration issue? One more thing: there is an htaccess file that changes the document root to /public:

        RewriteEngine On

        php_value upload_max_filesize 15M
        php_value post_max_size 15M
        php_value max_execution_time 200
        php_value max_input_time 200

        # Exclude some directories from URI rewriting
        #RewriteRule ^(dir1|dir2|dir3) - [L]

        RewriteRule ^\.htaccess$ - [F]

        RewriteCond %{REQUEST_URI} =""
        RewriteRule ^.*$ /public/index.php [NC,L]

        RewriteCond %{REQUEST_URI} !^/public/.*$
        RewriteRule ^(.*)$ /public/$1

        RewriteCond %{REQUEST_FILENAME} -f
        RewriteRule ^.*$ - [NC,L]

        RewriteRule ^public/.*$ /public/index.php [NC,L]

    In the public folder there is a second htaccess file for MVC:

        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} -s [OR]
        RewriteCond %{REQUEST_FILENAME} -l [OR]
        RewriteCond %{REQUEST_FILENAME} -d
        RewriteRule ^.*$ - [NC,L]
        RewriteRule ^.*$ /index.php [NC,L]

        # Turn off magic quotes
        #php_flag magic_quotes_gpc off
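
    A hedged guess at the root cause, for anyone hitting the same thing: %252F decodes once to %2F, and Apache refuses encoded slashes in the path by default, returning 404. The directive below is real, but it only works in server or virtual-host configuration; it has no effect in .htaccess, so on shared hosting the provider has to set it:

        # httpd.conf / vhost context
        AllowEncodedSlashes On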

    Read the article

  • How to restrict text search to a certain subset of the database?

    - by Nikhil Garg
    I have a large central database of around 1 million heavy records. In my app, every user would have a subset of rows from the central table, which would be very small (probably 100 records each). When a particular user has logged in, I want to search on this data set only.

    Example: Say I have a central database of all cars in the world, and I have user profiles for General Motors (GM), Ferrari, etc. When GM is logged in, I just want to search (a full-text search, not a SQL query) those cars which are manufactured by GM. GM may also launch or withdraw a model, in which case the central db would be updated and so would the row set associated with GM. In case of acquisitions, the db of certain profiles may change without the launch or removal of a car; the central db wouldn't change then, but the row sets might.

    What's the best way to implement such a design? These smaller row sets would need to be dynamic, depending on user activities. We are on Rails 2.3.5 and use thinking_sphinx as the connector and Sphinx/MySQL for search and relational associations.
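
    A sketch of one way this is commonly handled with thinking_sphinx (assuming each car row carries a manufacturer_id column; field names are illustrative): index it as a Sphinx attribute and filter every search by the logged-in profile, so one central index serves all subsets.

        class Car < ActiveRecord::Base
          define_index do
            indexes name            # full-text field (column name assumed)
            indexes model
            has manufacturer_id     # attribute, used only for filtering
          end
        end

        # at query time, restrict the search to the current profile's subset
        Car.search params[:query], :with => { :manufacturer_id => current_profile.id }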

    Read the article

  • How to minify JS in PHP easily...Or something else

    - by RickyAYoder
    I've done some looking around, but I'm still confused a bit. I tried Crockford's JSMin, but Win XP can't unzip the executable file for some reason. What I really want, though, is a simple and easy-to-use JS minifier that uses PHP to minify JS code and return the result.

    The reason why: I have two files (for example) that I'm working between, scripts.js and scripts_template.js. scripts_template is normal code that I write out; then I have to minify it and paste the minified script into scripts.js, the one that I actually USE on my website. I want to eradicate the middle man by simply doing something like this on my page:

        <script type="text/javascript" src="scripts.php"></script>

    And then for the contents of scripts.php:

        <?php
        include("include.inc");
        header("Content-type: text/javascript");
        echo(minify_js(file_get_contents("scripts_template.js")));

    This way, whenever I update my JS, I don't have to constantly go to a website to minify it and re-paste it into scripts.js; everything is automatically updated. Yes, I have also tried Crockford's PHP minifier, and I've taken a look at PHP Speedy, but I don't understand PHP classes just yet. Is there anything out there that a monkey could understand, maybe something with RegExp?

    How about we make this even simpler? I just want to remove tab spaces; I still want my code to be readable. It's not like the script makes my site lag epically, it's just that anything is better than nothing. Tab removal, anyone? And if possible, how about removing completely BLANK lines?
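
    A minimal sketch of exactly that last request, nothing more (note that a real minifier must respect string literals; this one doesn't, so a tab inside a quoted string would be eaten too):

        <?php
        function minify_js($js) {
            $js = str_replace("\t", "", $js);                 // strip tab characters
            $js = preg_replace('/^[ \t]*[\r\n]+/m', '', $js); // drop completely blank lines
            return $js;
        }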

    Read the article

  • Binary Search Tree for specific intent

    - by Luís Guilherme
    We all know there are plenty of self-balancing binary search trees (BSTs), the most famous being the Red-Black tree and the AVL tree. It might be useful to take a look at AA-trees and scapegoat trees too.

    I want to do deletions, insertions and searches, like any other BST. However, it will be common to delete all values in a given range, or to delete whole subtrees. So:

    - I want to insert, search and remove values in O(log n) (balanced tree).
    - I would like to delete a subtree, keeping the whole tree balanced, in O(log n) (worst-case or amortized).
    - It might be useful to delete several values in a row before rebalancing the tree.
    - I will most often insert 2 values at once, however this is not a rule (just a tip in case there is a tree data structure that takes this into account).

    Is there a variant of AVL or RB that helps me with this? Scapegoat trees look more like it, but they would also need some changes; anyone who has experience with them, can you share some thoughts? More precisely, which balancing procedure and/or removal procedure would help me keep these actions time-efficient?
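
    One structure worth mentioning, sketched below in C++ (a treap rather than the AVL/RB variants asked about, so treat it as a neighboring idea): split/join makes remove-all-in-range an expected O(log n) operation, because the doomed range is cut out as a whole subtree rather than element by element.

        #include <cstdlib>

        struct Node {
            int key, pri;
            Node *l, *r;
            Node(int k) : key(k), pri(std::rand()), l(0), r(0) {}
        };

        // split t into a (keys < k) and b (keys >= k)
        void split(Node* t, int k, Node*& a, Node*& b) {
            if (!t) { a = b = 0; return; }
            if (t->key < k) { a = t; split(t->r, k, t->r, b); }
            else            { b = t; split(t->l, k, a, t->l); }
        }

        // all keys in a precede all keys in b
        Node* join(Node* a, Node* b) {
            if (!a) return b;
            if (!b) return a;
            if (a->pri > b->pri) { a->r = join(a->r, b); return a; }
            else                 { b->l = join(a, b->l); return b; }
        }

        // removes every key in [lo, hi) with two splits and one join
        Node* eraseRange(Node* t, int lo, int hi) {
            Node *a = 0, *mid = 0, *b = 0;
            split(t, lo, a, mid);
            split(mid, hi, mid, b);
            // mid now holds the entire doomed range as one subtree; free it here
            return join(a, b);
        }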

    Read the article

  • How to eliminate duplicate nodes based on values of multiple attributes?

    - by JayRaj
    Hello All,

    How can I eliminate duplicate nodes based on the values of multiple (more than one) attributes, where the attribute names are passed as parameters to the stylesheet? Now, I am aware of the Muenchian method of grouping that uses an <xsl:key> element, but I have come to know that XSLT 1.0 does not allow parameters/variables in <xsl:key>. Is there another method to achieve duplicate-node removal? It is fine if it is not as efficient as the Muenchian method.

    Update from previous question:

    XML:

        <data id="root">
          <record id="1" operator1='xxx' operator2='yyy' operator3='zzz'/>
          <record id="2" operator1='abc' operator2='yyy' operator3='zzz'/>
          <record id="3" operator1='abc' operator2='yyy' operator3='zzz'/>
          <record id="4" operator1='xxx' operator2='yyy' operator3='zzz'/>
          <record id="5" operator1='xxx' operator2='lkj' operator3='tyu'/>
          <record id="6" operator1='xxx' operator2='yyy' operator3='zzz'/>
          <record id="7" operator1='abc' operator2='yyy' operator3='zzz'/>
          <record id="8" operator1='abc' operator2='yyy' operator3='zzz'/>
          <record id="9" operator1='xxx' operator2='yyy' operator3='zzz'/>
          <record id="10" operator1='rrr' operator2='yyy' operator3='zzz'/>
        </data>
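
    A sketch of a key-free alternative (assuming two attribute-name parameters and sibling records as above): keep a record only when no earlier sibling matches it on both attributes. It is O(n^2) where the Muenchian method is closer to O(n), but parameters are legal here.

        <xsl:param name="attr1" select="'operator1'"/>
        <xsl:param name="attr2" select="'operator2'"/>

        <xsl:template match="record">
          <xsl:variable name="v1" select="@*[name() = $attr1]"/>
          <xsl:variable name="v2" select="@*[name() = $attr2]"/>
          <!-- emit only the first record carrying this attribute-value pair -->
          <xsl:if test="not(preceding-sibling::record
                            [@*[name() = $attr1] = $v1]
                            [@*[name() = $attr2] = $v2])">
            <xsl:copy-of select="."/>
          </xsl:if>
        </xsl:template>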

    Read the article

  • RSKeyMgmt -r unable to remove installation ID

    - by Eves
    I have backed up databases on one 2005 SQL Server and restored those databases on a second 2005 SQL Server. I am currently trying to remove the new server's OLD key instance ID using the Reporting Services Key Manager (RSKeyMgmt -r).

    Prior to running the removal command, the list of current instances shows the new server's OLD instance ID as well as the NEW instance ID from the first SQL Server. Executing the RSKeyMgmt -r command results in "The command completed successfully". However, when I recheck the listing of current instance IDs, I see both the OLD and NEW instance IDs. In addition, when I check the Application Event Viewer I see an error:

        Report Server Windows Service (MSSQLSERVER) has not been granted access to the catalog content

    Does anyone know why I would be getting the above application error? Or does anyone know what I would need to do to give the Report Server Windows Service access to the catalog? The first SQL Server, where the databases were backed up, is an Enterprise edition SQL Server, and the second SQL Server, where the databases were restored, is Standard edition. Could this be the cause of the problem? Is there a way to make this backup-and-restore migration work?

    Read the article

  • jQuery: one-liner for removing all children but one

    - by user1352530
    HTML code is:

        <li>
          <a href="#" />
          <ul>
            <li>
              <a href="#"/>
            </li>
          </ul>
        </li>

    I want to delete any node under $(this) = li that is not a link. If there is more than one link, just the first one is kept, and the links inside the ul tag are not included because they are descendants, not children. Something like this (I am cloning and outputting the html):

        $(this).clone().remove('not(a:first)').html(); // assuming remove looks for "a" as a first child

    Does not work, so I try this:

        $(this).clone().children().remove('not(a:first)').parent().html();

    Does not work, so I try this:

        $(this).clone().remove('not(>a:first)').html();

    I need the original element as a reference after removal, for chaining more operations.

    EDIT: OK! The second one works with a little syntax correction, as some pointed out (:not( instead of not(), but I would like to know if there is any other solution similar to the third line, so I don't have to do children().remove().parent(). Thank you!
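
    For reference, a short sketch of an equivalent chain that stays on the clone the whole time (the original element is never touched, so it remains available for further chaining):

        var clone = $(this).clone();
        clone.children().not(clone.children('a').first()).remove();
        var html = clone.html();

    Selecting the kept element explicitly with .first() avoids guessing how :first behaves inside a :not() filter.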

    Read the article

  • anti-if campaign

    - by Andrew Siemer
    I recently ran across a very interesting site that expresses a very interesting idea: the anti-if campaign. You can see it at www.antiifcampaign.com. I have to agree that complex nested IF statements are an absolute pain in the rear. I am currently on a project that up until very recently had some crazy nested IFs that scrolled off to the right for quite a ways.

    We cured our issues in two ways: we used Windows Workflow Foundation to address routing (or workflow) concerns, and we are in the process of implementing all of our business rules utilizing ILOG Rules for .NET (recently purchased by IBM!!). This has for the most part cured our nested-IF pains, but I find myself wondering how many people cure their pains in the manner the good folks at the anti-if campaign suggest (see an example here), by creating numerous abstract classes to represent a given scenario that was originally covered by the nested IF. I wonder if another way to address the removal of this complexity might be to use an IoC container such as StructureMap to move in and out of different bits of functionality. Either way...

    Question: Given a scenario where I have a nested complex IF or SWITCH statement that is used to evaluate a given type of thing (say, an enum) to determine how I want to handle the processing of that thing by enum type, what are some ways to do the same form of processing without using the IF or SWITCH hierarchical structure?

        public enum WidgetTypes
        {
            Type1,
            Type2,
            Type3,
            Type4
        }
        ...
        WidgetTypes _myType = WidgetTypes.Type1;
        ...
        switch (_myType)
        {
            case WidgetTypes.Type1:
                //do something
                break;
            case WidgetTypes.Type2:
                //do something
                break;
            //etc...
        }
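
    One switch-free shape this often takes, sketched here in C#: a handler table keyed by the enum, so adding a widget type means adding an entry rather than another case (the handler bodies are placeholders).

        private static readonly Dictionary<WidgetTypes, Action> _handlers =
            new Dictionary<WidgetTypes, Action>
            {
                { WidgetTypes.Type1, () => { /* do something */ } },
                { WidgetTypes.Type2, () => { /* do something */ } },
                { WidgetTypes.Type3, () => { /* do something */ } },
                { WidgetTypes.Type4, () => { /* do something */ } }
            };

        // dispatch replaces the switch entirely:
        _handlers[_myType]();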

    Read the article

  • Update Options on Existing jQuery Object

    - by Vince Kronlein
    I'm using Bootstrap and DataTables in my app, and I have a default initializer for tables based on class: I can just add the class data-table to a table and it gets instantiated with the default values I want. I'd like to know how to change or update specific options for a specific table.

        if ($.fn.dataTable) {
            $('.data-table').dataTable({
                sDom: "R<'row'<'span6'l><'span6'f>r>t<'row'<'span6'i><'span6'p>>",
                sPaginationType: "bootstrap",
                oLanguage: {
                    "sLengthMenu": "_MENU_ &nbsp; records per page"
                },
                aoColumnDefs: [
                    { "bSortable": false, "aTargets": [ 0 ] }
                ]
            });
        }

    All my data tables have a checkbox in the first column, so the above removal of sorting works for all of them. But I'd like to be able to update the aoColumnDefs on a table-by-table basis, so I can add other columns that I don't want sorted. So let's say I have a table, $('#member-list'): how do I access this object and update its DataTables options in jQuery? I can't find any reference or help anywhere.

    Thanks a lot! -V
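
    One sketch of doing this with shared defaults (the column index 5 below is just an example): keep the default options in an object, then deep-merge per-table overrides with $.extend, taking care not to initialize the special table twice.

        if ($.fn.dataTable) {
            var defaults = {
                sDom: "R<'row'<'span6'l><'span6'f>r>t<'row'<'span6'i><'span6'p>>",
                sPaginationType: "bootstrap",
                oLanguage: { sLengthMenu: "_MENU_ &nbsp; records per page" },
                aoColumnDefs: [ { bSortable: false, aTargets: [ 0 ] } ]
            };

            // every generic table gets the defaults
            $('.data-table').not('#member-list').dataTable(defaults);

            // #member-list keeps the defaults but also disables sorting on column 5
            $('#member-list').dataTable($.extend(true, {}, defaults, {
                aoColumnDefs: [ { bSortable: false, aTargets: [ 0, 5 ] } ]
            }));
        }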

    Read the article

  • Item in multiple lists

    - by Evan Teran
    So I have some legacy code which I would love to rework using more modern techniques, but I fear that given the way things are designed, it is a non-option. The core issue is that often a node is in more than one list at a time. Something like this:

        struct T {
            T *next_1;
            T *prev_1;
            T *next_2;
            T *prev_2;
            int value;
        };

    This allows the code to have a single object of type T allocated and inserted into two doubly linked lists, nice and efficient.

    Obviously I could just have two std::list<T*>'s and insert the object into both... but there is one thing which would be way less efficient: removal. Often the code needs to "destroy" an object of type T, and this includes removing the element from all lists. This is nice because, given a T*, the code can remove that object from all lists it exists in. With something like a std::list I would need to search for the object to get an iterator, then remove that (I can't just pass around an iterator, because the object is in several lists).

    Is there a nice C++-ish solution to this, or is the hand-rolled way the best way? I have a feeling the hand-rolled way is the answer, but I figured I'd ask.
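
    One library answer worth a look, sketched under the assumption that Boost is available: Boost.Intrusive keeps exactly this "node lives in several lists" layout but behind std-style containers, and auto_unlink hooks give O(1) removal from every list straight from the T* (at the cost of constant_time_size<false> on the lists).

        #include <boost/intrusive/list.hpp>
        namespace bi = boost::intrusive;

        typedef bi::list_base_hook< bi::tag<struct tag1>,
                                    bi::link_mode<bi::auto_unlink> > Hook1;
        typedef bi::list_base_hook< bi::tag<struct tag2>,
                                    bi::link_mode<bi::auto_unlink> > Hook2;

        struct T : Hook1, Hook2 {
            int value;
        };

        typedef bi::list<T, bi::base_hook<Hook1>, bi::constant_time_size<false> > List1;
        typedef bi::list<T, bi::base_hook<Hook2>, bi::constant_time_size<false> > List2;

        // List1 l1; List2 l2; T t;
        // l1.push_back(t); l2.push_back(t);
        // t.Hook1::unlink(); t.Hook2::unlink();  // O(1) removal from both lists,
        //                                        // and automatic on destruction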

    Read the article

  • Remove the blank rows after I removed XML elements?

    - by fayer
    I removed some elements from an XML file with SimpleDOM. The code:

        $this->xmlDocument->removeNodes("//entity[name='mac']");

    Here is the initial file:

        <entity id="1000070">
          <name>apple</name>
          <type>category</type>
          <entities>
            <entity id="7002870">
              <name>mac</name>
              <type>category</type>
            </entity>
            <entity id="7024080">
              <name>iphone</name>
              <type>category</type>
            </entity>
            <entity id="7024080">
              <name>ipad</name>
              <type>category</type>
            </entity>
          </entities>
        </entity>

    The file afterwards:

        <entity id="1000070">
          <name>apple</name>
          <type>category</type>
          <entities>

            <entity id="7024080">
              <name>iphone</name>
              <type>category</type>
            </entity>
            <entity id="7024080">
              <name>ipad</name>
              <type>category</type>
            </entity>
          </entities>
        </entity>

    I wonder how I could also remove the blank lines that are left after I ran the removal code? Thanks!
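
    A sketch of one workaround (assuming the document can be round-tripped through DOMDocument; SimpleDOM extends SimpleXMLElement, so asXML() should be available): drop whitespace-only text nodes and let formatOutput re-indent.

        <?php
        $dom = new DOMDocument();
        $dom->preserveWhiteSpace = false;  // must be set before loading
        $dom->formatOutput = true;         // re-indent cleanly on save
        $dom->loadXML($this->xmlDocument->asXML());
        file_put_contents('clean.xml', $dom->saveXML());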

    Read the article

  • Getting tree construction with ANTLR

    - by prosseek
    As asked and answered in http://stackoverflow.com/questions/2999755/removing-left-recursion-in-antlr, I could remove the left recursion from

        E -> E + T | T
        T -> T * F | F
        F -> INT | ( E )

    After left-recursion removal, I get the following:

        E  -> T E'
        E' -> null | + T E'
        T  -> F T'
        T' -> null | * F T'

    Then, how do I get tree construction with the modified grammar? With the input 1+2, I want to have a tree ^('+' ^(INT 1) ^(INT 2)) or similar.

        grammar T;
        options {
            output=AST;
            language=Python;
            ASTLabelType=CommonTree;
        }

        start : e -> e ;

        e  : t ep -> ??? ;

        ep :
           | '+' t ep -> ???
           ;

        t  : f tp -> ??? ;

        tp :
           | '*' f tp -> ???
           ;

        f  : INT
           | '(' e ')' -> e
           ;

        INT : '0'..'9'+ ;
        WS  : (' '|'\n'|'\r')+ {$channel=HIDDEN;} ;
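
    For completeness, a sketch of the idiom most ANTLR 3 grammars use instead of the E'/T' encoding: a loop with the '^' root operator builds the left-associative tree directly, with no left recursion and no rewrite rules ('!' drops the parentheses from the tree).

        e : t ('+'^ t)* ;
        t : f ('*'^ f)* ;
        f : INT
          | '('! e ')'!
          ;

    For the input 1+2 this yields ^('+' 1 2), which matches the shape asked for above.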

    Read the article

  • Using boost::random to select from an std::list where elements are being removed

    - by user144182
    See this related question on more generic use of the Boost Random library.

    My question involves selecting a random element from an std::list, doing some operation, which could potentially include removing the element from the list, and then choosing another random element, until some condition is satisfied. The boost code and for loop look roughly like this:

        // create and insert elements into list
        std::list<MyClass> myList;
        //[...]

        // select uniformly from list indices
        boost::uniform_int<> indices( 0, myList.size()-1 );
        boost::variate_generator< boost::mt19937, boost::uniform_int<> >
            selectIndex(boost::mt19937(), indices);

        for( int i = 0; i <= maxOperations; ++i )
        {
            int index = selectIndex();
            MyClass & mc = myList.begin() + index;
            // do operations with mc, potentially removing it from myList
            //[...]
        }

    My problem is that as soon as the operations performed on an element result in the removal of an element, the variate_generator has the potential to select an invalid index in the list. I don't think it makes sense to completely recreate the variate_generator each time, especially if I seed it with time(0).
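
    A sketch of the usual resolution (reusing the names above): the engine is the expensive, stateful part, so seed it once and keep it; the uniform_int distribution is cheap to construct, so rebuild it each pass with the list's current size and draw from it directly, skipping variate_generator.

        #include <boost/random/mersenne_twister.hpp>
        #include <boost/random/uniform_int.hpp>
        #include <ctime>
        #include <iterator>
        #include <list>

        boost::mt19937 rng(static_cast<unsigned int>(std::time(0)));  // seed once

        for (int i = 0; i <= maxOperations && !myList.empty(); ++i)
        {
            boost::uniform_int<> indices(0, myList.size() - 1);  // tracks the shrinking size
            std::list<MyClass>::iterator it = myList.begin();
            std::advance(it, indices(rng));  // list iterators are not random access
            MyClass& mc = *it;
            // do operations with mc; to remove it: myList.erase(it);
        }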

    Read the article

  • Concurrent Threads in C# using BackgroundWorker

    - by Jim Fell
    My C# application is such that a background worker is being used to wait for the acknowledgement of some transmitted data. Here is some pseudo-code demonstrating what I'm trying to do:

        UI_thread
        {
            TransmitData()
            {
                // load data for tx
                // fire off TX background worker
            }

            RxSerialData()
            {
                // if received data is ack, set ack received flag
            }
        }

        TX_thread
        {
            // transmit data
            // set ack wait timeout
            // fire off ACK background worker
            // wait for ACK background worker to complete
            // evaluate status of ACK background worker as completed, failed, etc.
        }

        ACK_thread
        {
            // wait for ack received flag to be set
        }

    What happens is that the ACK BackgroundWorker times out, and the acknowledgement is never received. I'm fairly certain that it is being transmitted by the remote device, because that device has not changed at all, and the C# application is transmitting. I have changed the ack thread from this (when it was working)...

        for( i = 0; (i < waitTimeoutVar) && (!bAckRxd); i++ )
        {
            System.Threading.Thread.Sleep(1);
        }

    ...to this...

        DateTime dtThen = DateTime.Now;
        DateTime dtNow;
        TimeSpan stTime;
        do
        {
            dtNow = DateTime.Now;
            stTime = dtNow - dtThen;
        } while ( (stTime.TotalMilliseconds < waitTimeoutVar) && (!bAckRxd) );

    The latter generates a very accurate wait time, compared to the former. However, I am wondering if removal of the Sleep function is interfering with the ability to receive serial data. Does C# only allow one thread to run at a time? That is, do I have to put threads to sleep at some point to allow other threads to run? Any thoughts or suggestions you may have would be appreciated. I am using Microsoft Visual C# 2008 Express Edition. Thanks.
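
    A sketch of a signal-based alternative (class and member names hypothetical): the tight do/while above never yields its time slice, which can indeed starve the thread that processes incoming serial data. A wait handle blocks without burning CPU and wakes the moment the RX side signals.

        using System.Threading;

        class AckWaiter
        {
            private readonly ManualResetEvent _ackReceived = new ManualResetEvent(false);

            // call from the serial-RX handler when the ack arrives
            public void SignalAck()
            {
                _ackReceived.Set();
            }

            // call from the ACK worker in place of the polling loop;
            // blocks cheaply, returns false if the timeout elapses first
            public bool WaitForAck(int waitTimeoutMs)
            {
                bool gotAck = _ackReceived.WaitOne(waitTimeoutMs, false);
                _ackReceived.Reset();  // re-arm for the next transmission
                return gotAck;
            }
        }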

    Read the article
