Search Results

Search found 908 results on 37 pages for 'cascading deletes'.


  • Calling NSFetchedResultsController & CoreData experts

    - by JK
    I am having a few nagging issues with NSFetchedResultsController and Core Data, any of which I would be very grateful to get help on.

    Issue 1 - Updates: I update my store on a background thread, which results in certain rows being deleted, inserted, or updated. The changes are merged into the context on the main thread using the mergeChangesFromContextDidSaveNotification: method. Inserts and deletes are updated properly, but updates are not (e.g. the cell label is not updated with the change), although I have confirmed that the updates come through the contextDidSaveNotification, exactly like the inserts and deletes. My current workaround is to temporarily change the staleness interval of the context to 0, but this does not seem like the ideal solution.

    Issue 2 - Deleting objects: My fetch batch size is 20. If an object in the first 20 rows is deleted by the background thread, everything works fine. But if the object is after the first 20 rows and the table is scrolled down, a "CoreData could not fulfill a fault" error is raised. I have tried resaving the context and re-performing the frc fetch - all to no avail. Note: in this scenario, the frc delegate method didChangeObject... is not called for the delete - I assume this is because the object in question had not been faulted at that time (as it was outside the initial fetch range). But for some reason, the context still thinks the object is around, although it has been deleted from the store.

    Issue 3 - Deleting sections: When the deletion of a row leads to the deletion of a section, I have gotten the "invalid number of rows in section" error. I have worked around this by removing the reloadSection line from the NSFetchedResultsChangeMove: case and replacing it with [tableView insertRowsAtIndexPaths...]. This seems to work, but once again, I am not sure if this is the best solution.

    Any help would be greatly appreciated. Thank you!

    Read the article

  • How do I delete folders in bash after successful copy (Mac OSX)?

    - by cohortq
    Hello! I recently created my first bash script, and I am having problems perfecting its operation. I am trying to copy certain folders from one local drive to a network drive. I am having trouble deleting the folders once they are copied over (well, and also really verifying that they were copied over). Is there a better way to delete folders after rsync is done copying? I was trying to exclude the live TV buffer folder, but really, I can blow it away without consequence if need be. Any help would be great! Thanks!

        #!/bin/bash
        network="CBS"
        useracct="tvcapture"
        thedate=$(date "+%m%d%Y")
        folderToBeMoved="/users/$useracct/Documents"
        newfoldername="/Volumes/Media/TV/$network/$thedate"
        echo "Network is $network"
        echo "date is $thedate"
        echo "source is $folderToBeMoved"
        echo "dest is $newfoldername"
        mkdir $newfoldername
        rsync -av $folderToBeMoved/"EyeTV Archive"/*.eyetv $newfoldername --exclude="Live TV Buffer.eyetv"
        # this fails when there is more than one *.eyetv folder
        if [ -d $newfoldername/*.eyetv ]; then
            # this deletes the contents of the directories
            find $folderToBeMoved/"EyeTV Archive"/*.eyetv \( ! -path $folderToBeMoved/"EyeTV Archive"/"Live TV Buffer.eyetv" \) -delete
            # remove empty directory
            find $folderToBeMoved/"EyeTV Archive"/*.eyetv -type d -exec rmdir {} \;
        fi
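
    One possible direction (just a sketch; it assumes an rsync new enough to support --remove-source-files - older OS X builds call the same option --remove-sent-files): let rsync delete each source file only after it has transferred successfully, then prune the empty directories, instead of testing the destination with [ -d ... ] afterwards.

        #!/bin/bash
        src="/users/tvcapture/Documents/EyeTV Archive"
        dest="/Volumes/Media/TV/CBS/$(date "+%m%d%Y")"

        mkdir -p "$dest"

        # --remove-source-files removes a source file only once it has been
        # copied successfully; the exclude keeps the live TV buffer in place
        if rsync -av --remove-source-files \
                --exclude="Live TV Buffer.eyetv" \
                "$src"/*.eyetv "$dest"; then
            # rsync reported success: clear out the now-empty .eyetv folders,
            # leaving the buffer folder alone
            find "$src" -depth -mindepth 1 -type d -empty \
                ! -path "$src/Live TV Buffer.eyetv*" -delete
        fi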

    Read the article

  • Recreating a workflow instance with the same instance id

    - by Miron Brezuleanu
    We have some objects that have an associated workflow instance. The objects are identified with a GUID, which is also the GUID of the workflow instance associated with the object. We need to restart (see note 3 for the meaning of 'restart') the workflow instance if the workflow definition has changed (there is no state in the workflow itself and it is written to support restarting in this manner). The restarting is performed by calling Terminate on the WorkflowInstance, then recreating the instance with the same GUID.

    The weird part is that this only works every other attempt (odd attempts - the workflow is stopped, but for some reason doesn't restart; even attempts - the already terminated workflow is recreated and started successfully). While I admit that using 'second hand' GUIDs is a sign of extraordinary cheapness (and something we plan to change), I'm wondering why this isn't working. Any ideas?

    Notes:
    1. The terminated workflow instance is passivated (waiting for a notification) at the time of the termination.
    2. The Terminate call successfully deletes the data persisted in the database for that instance.
    3. We're using 'restarting' with a meaning that's less common in the context of WF - not restarting a passivated instance, but forcing the workflow to start again from the beginning of its definition.

    Thanks!

    Read the article

  • jQuery passing an HTML element into a function

    - by christian
    I have an HTML form where I am going to copy values from a series of input fields to some spans/headings as the user populates the input fields. I am able to get this working using the following code:

        $('#source').keyup(function(){
            if($("#source").val().length == 0){
                $("#destinationTitle").text('Sample Title');
            }else{
                $("#destinationTitle").text($("#source").val());
            }
        });

    In the above scenario the HTML is just a heading element (the "destinationTitle" in the code) whose default text is "Sample Title". Basically, as the user fills out the source box, the text of the destination heading is changed to the value of the source input. If nothing is input, or the user deletes the values typed into the box, some default text is placed in the heading instead. Pretty straightforward.

    However, since I need to make this work for many different fields, it makes sense to turn this into a generic function and then bind that function to each input's onkeyup() event. But I am having some trouble with this. My implementation:

        function doStuff(source,target,defaultValue) {
            if($(source).val().length == 0){
                $(target).text(defaultValue);
            }else{
                $(target).text($(source).val());
            }
        }

    which is called as follows:

        $('#source').keyup(function() {
            doStuff(this, '"#destinationTitle"', 'SampleTitle');
        });

    What I can't figure out is how to pass the second parameter, the name of the destination HTML element, into the function. I have no problem passing in the element I'm binding to via "this", but I can't figure out the destination element syntax. Any help would be appreciated - many thanks!
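
    The likely culprit in the snippet above is the extra set of quotes: '"#destinationTitle"' is the literal string "#destinationTitle" (including the double quotes), which matches no element. The second argument just needs to be an ordinary selector string - or a DOM element / jQuery object - as in this sketch (same IDs as above):

        function doStuff(source, target, defaultValue) {
            var val = $(source).val();
            // fall back to the default text when the field is empty
            $(target).text(val.length === 0 ? defaultValue : val);
        }

        $('#source').keyup(function () {
            doStuff(this, '#destinationTitle', 'Sample Title');
        });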

    Read the article

  • Implementing Tagging System with PHP and mySQL. Caching help!!!

    - by Hamid Sarfraz
    With reference to this post: http://stackoverflow.com/questions/2122546/how-to-implement-tag-counting

    I have implemented the suggested 3-table tagging system completely. To count the number of articles per tag, I am using another column named tagArticleCount in the tag definition table (the other columns are tagId, tagText, tagUrl, tagArticleCount).

    If I implement real-time editing of this table, then whenever a user adds another tag to an article or deletes an existing tag, the tag definition table is updated to adjust the counter of the added/removed tag. This costs an extra query for each modification (at the same time, the related link entry for the tag and article is deleted from the tagLinkTable). An alternative is not to allow any real-time editing of the counter, and instead use cron jobs to update the counter of each tag after a specified time period.

    Here comes the problem that I want to discuss. This can be seen as caching the article count in the database. Can you please help me find a way to present the articles in a list when a tag is explored and the article counter for that tag is not up to date? For example:
    1. The counter shows 50 articles, but there are in fact 55 entries in the tag link table (which links tags and articles).
    2. The counter shows 50 articles, but there are in fact 45 entries in the tag link table.
    How should these two scenarios be handled? I am going to use APC to keep a cache of these counters - consider that too in your solution. Also discuss performance of the real-time vs. cron-driven counter updates.
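
    For the cron-driven variant, one possible resync query (a sketch; the column names follow the ones mentioned above, while the table name tagDefinition is an assumption for illustration):

        -- periodic job: recompute every counter from the link table in one statement
        UPDATE tagDefinition t
        SET t.tagArticleCount = (
            SELECT COUNT(*)
            FROM tagLinkTable l
            WHERE l.tagId = t.tagId
        );

        -- real-time variant: adjust a single tag when a link row is added (123 stands
        -- for the tag being edited; use - 1 when a link is removed)
        UPDATE tagDefinition SET tagArticleCount = tagArticleCount + 1 WHERE tagId = 123;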

    Read the article

  • mailing system DB structure, need help

    - by Anna
    I have a system where a user (the sender) can write a note to friends (the receivers); the number of receivers is not fixed. The text of the message is saved in the DB and visible to the sender and all receivers when they log in to the system. The sender can add more receivers at any time. Moreover, any of the receivers can edit the message and even remove it from the DB.

    For this system I created 3 tables, in short:
        users(userID, username, password)
        messages(messageID, text)
        list(id, senderID, receiverID, messageID)
    In table "list" each row corresponds to a sender-receiver pair, like:
        sender_x_ID -- receiver_1_ID -- message_1_ID
        sender_x_ID -- receiver_2_ID -- message_1_ID
        sender_x_ID -- receiver_3_ID -- message_1_ID

    Now the problems are:
    1. If a user deletes the message from table "messages", how do I automatically delete all rows from table "list" which correspond to the deleted message? Do I have to include some foreign keys?
    2. More important: say the sender has 3 receivers for his message1 (username1, username2 and username3) and at a certain moment decides to add username4 and username5, and at the same time exclude username1 from the list of receivers. The PHP code will get the new list of receivers (username2, username3, username4, username5). That means inserting into table "list"
        sender_x_ID -- receiver_4_ID -- message_1_ID
        sender_x_ID -- receiver_5_ID -- message_1_ID
    and also deleting from table "list" the row corresponding to user1 (who is not in the list of receivers any more):
        sender_x_ID -- receiver_1_ID -- message_1_ID
    Which SQL queries should I send from PHP to do this in an easy and intelligent way? Please help! Examples of SQL queries would be perfect!
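
    A sketch of one way to cover both points, assuming MySQL with InnoDB tables (MyISAM ignores foreign keys). The cascading delete on the foreign key answers question 1; the INSERT/DELETE pair answers question 2, with the placeholders standing in for IDs the PHP code already knows.

        CREATE TABLE list (
            id         INT AUTO_INCREMENT PRIMARY KEY,
            senderID   INT NOT NULL,
            receiverID INT NOT NULL,
            messageID  INT NOT NULL,
            FOREIGN KEY (senderID)   REFERENCES users (userID),
            FOREIGN KEY (receiverID) REFERENCES users (userID),
            -- deleting a row from "messages" now removes its "list" rows too
            FOREIGN KEY (messageID)  REFERENCES messages (messageID)
                ON DELETE CASCADE
        ) ENGINE=InnoDB;

        -- add the newly included receivers of message 1
        INSERT INTO list (senderID, receiverID, messageID)
        VALUES (:sender, :receiver4, 1),
               (:sender, :receiver5, 1);

        -- drop everyone who is no longer on the receiver list of message 1
        DELETE FROM list
        WHERE messageID = 1
          AND receiverID NOT IN (:receiver2, :receiver3, :receiver4, :receiver5);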

    Read the article

  • Uniquely identify files/folders in NTFS, even after move/rename

    - by Felix Dombek
    I haven't found a backup (synchronization) program which does what I want, so I'm thinking about writing my own. What I have now does the following: it goes through the data in the source and, for every file which has its archive bit set OR does not exist in the destination, copies it to the destination, overwriting a possibly existing file. When done, it checks for every file in the destination whether it exists in the source, and if it doesn't, deletes it.

    The problem is that if I move or rename a large folder, it first gets copied to the destination even though it is in principle already there, just under a different path. Then the folder which was already there is deleted afterwards. Apart from the unnecessary copying, I frequently run into space problems because my backup drive isn't large enough to hold the original data twice.

    Is there a way to programmatically identify such moved/renamed files or folders, i.e. by NTFS ID or physical location on media or something else? Are there solutions to this problem? I do not care about the programming language, but hints for doing this with Python, C++, C#, Java or Prolog are appreciated.
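
    NTFS does keep a per-volume file ID that stays the same across renames and moves within the same volume, so one avenue (sketched below in C++ against the Win32 API, with error handling kept minimal) is to record that ID for every file and folder in the last backup and treat "same volume serial number + same file index" as "same object, possibly at a new path".

        #include <windows.h>
        #include <cstdint>

        // Returns the 64-bit NTFS file index for a file or directory, or 0 on
        // failure. The index is stable across renames/moves on the same volume;
        // compare it together with info.dwVolumeSerialNumber.
        uint64_t FileIndexOf(const wchar_t* path)
        {
            // FILE_FLAG_BACKUP_SEMANTICS is required to open directories.
            HANDLE h = CreateFileW(path, 0,
                                   FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                                   NULL, OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, NULL);
            if (h == INVALID_HANDLE_VALUE)
                return 0;

            BY_HANDLE_FILE_INFORMATION info;
            uint64_t index = 0;
            if (GetFileInformationByHandle(h, &info))
                index = (static_cast<uint64_t>(info.nFileIndexHigh) << 32)
                        | info.nFileIndexLow;

            CloseHandle(h);
            return index;
        }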

    Read the article

  • Best solution for __autoload

    - by tpk
    As our PHP5 OO application grew (in both size and traffic), we decided to revisit the __autoload() strategy. We always name the file by the class definition it contains, so class Customer would be contained within Customer.php. We used to list the directories in which a file can potentially exist, until the right .php file was found. This is quite inefficient, because you're potentially going through a number of directories which you don't need to, and doing so on every request (thus making loads of stat() calls).

    Solutions that come to my mind:
    - Use a naming convention that dictates the directory name (similar to PEAR). Disadvantage: doesn't scale too well, resulting in horrible class names.
    - Come up with some kind of pre-built array of the locations (Propel does this for its __autoload). Disadvantage: requires a rebuild before any deploy of new code.
    - Build the array "on the fly" and cache it. This seems to be the best solution, as it allows for any class names and directory structure you want, and is fully flexible in that new files just get added to the list. The concerns are: where to store it, and what about deleted/moved files? For storage we chose APC, as it doesn't have the disk I/O overhead. With regards to file deletes, it doesn't matter, as you probably don't wanna require them anywhere anyway. As to moves... that's unresolved (we ignore it, as historically it didn't happen very often for us).

    Any other solutions?
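
    A sketch of the third option (the directory root and the cache key are made up for illustration; it assumes the APC extension and PHP 5.3+ for the closure):

        <?php
        // Return a class-name => file-path map, from APC if possible;
        // rescan the source tree only when the map is missing or a rebuild is forced.
        function class_map($rebuild = false)
        {
            $map = $rebuild ? false : apc_fetch('class_map');
            if ($map === false) {
                $map = array();
                $files = new RecursiveIteratorIterator(
                    new RecursiveDirectoryIterator('/path/to/app/classes'));
                foreach ($files as $file) {
                    if ($file->isFile() && substr($file->getFilename(), -4) === '.php') {
                        // Customer.php => Customer
                        $map[$file->getBasename('.php')] = $file->getPathname();
                    }
                }
                apc_store('class_map', $map);
            }
            return $map;
        }

        spl_autoload_register(function ($class) {
            $map = class_map();
            if (!isset($map[$class])) {
                $map = class_map(true);   // maybe a newly deployed file: rebuild once
            }
            if (isset($map[$class])) {
                require $map[$class];
            }
        });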

    Read the article

  • Rails 3: Delete, Destroy, and Routing

    - by Maximus S
    The problem is the code below:

        <%= button_to t('.delete'), @post, :method => :delete, :class => :destroy %>

    My Post model has many relations that are dependent on delete. However, the code above will only remove the post, leaving its relations intact. The problem is that the methods delete and destroy are different, in that delete doesn't instantiate the object. So I need to "destroy" my post instead of "delete" it.

        <%= button_to t('.delete'), @post, :method => :destroy %>

    gives me a routing error: No route matches [POST] "/posts/2".

        <%= button_to t('.delete'), @post, Post.destroy(@post) %>

    deletes the post without clicking the button. Could anyone help me with this?

    UPDATE:

    application.js
        //= require jquery
        //= require jquery-ui
        //= require jquery_ujs
        //= require bootstrap-modal
        //= require bootstrap-typeahead
        //= require_tree .

    rake routes
        DELETE (/:locale)/posts/:id(.:format) posts#destroy

    Post model
        has_many :tag_links, :dependent => :destroy
        has_many :tags, :through => :tag_links

    Tag model
        has_many :tag_links, :dependent => :destroy
        has_many :posts, :through => :tag_links

    Problem: when I delete a post, all the tag_links are destroyed but the tags still exist.
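
    A sketch of the usual division of labour, assuming a conventional PostsController: :method in button_to must be an HTTP verb, so :delete is right, and whether the :dependent callbacks run is decided by what the controller action calls. Note also that with has_many :through, destroying a post removes its tag_links but leaves the shared tags alone by design - a tag that other posts may still use shouldn't die with one post.

        # app/views/posts/... - keep the HTTP verb :delete on the button
        <%= button_to t('.delete'), @post, :method => :delete %>

        # app/controllers/posts_controller.rb (sketch)
        def destroy
          @post = Post.find(params[:id])
          @post.destroy   # destroy (not delete) runs the :dependent => :destroy callbacks
          redirect_to posts_url
        end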

    Read the article

  • Selecting row in SSMS causes Entity Framework 4 to Fail

    - by Eric J.
    I have a simple Entity Framework 4 unit test that creates a new record, saves it, attempts to find it, then deletes it. All works great, unless...

    ... I open up SQL Server Management Studio while stopped at a breakpoint in the unit test and execute a SELECT statement that returns the row I just created (not SELECT FOR UPDATE, not WITH (UPDLOCK), no transaction, just a plain SELECT). If I do that before attempting to find the row I just created, I don't find the row. If I instead do that after finding the row but before deleting the row, I do find the row but get an OptimisticConcurrencyException. This is consistently repeatable.

    Unit test:

        [TestMethod()]
        public void CreateFindDeleteActiveParticipantsTest()
        {
            // Setup this test
            Participant utPart = CreateUTParticipant();
            ctx.Participants.AddObject(utPart);
            ctx.SaveChanges();

            // External SELECT Point #1:
            // part is null

            // Find participant
            Participant part = ParticipantRepository.Find(UT_SURVEY_ID, UT_TOKEN);
            Assert.IsNotNull(part, "Expected to find a participant");

            // External SELECT Point #2:
            // SaveChanges throws OptimisticConcurrencyException

            // Cleanup this test
            ctx.Participants.DeleteObject(utPart);
            ctx.SaveChanges();
        }

    Read the article

  • Updating iOS application content which include images

    - by azamsharp
    I am working on a vegetable gardening application. Apart from the vegetable name and description, I also have a vegetable image. Currently, I have all the images in the Supporting Files folder in the Xcode project. But later on I want to update the application dynamically, without the user having to download a new version. When the user updates the application or downloads new data from the server, that data will include the images. Can I store those images in the Supporting Files folder, or somewhere where they can be referenced by just the name?

    RELATED QUESTION: I will also allow the user to take pictures of their vegetables and then write notes about the vegetables, like "just planted", "about to harvest" etc. What is the recommended approach for storing pictures/photos? I can always store them in the user's photo library, store the reference in the local database, and then fetch and display the picture using the reference. The problem with that approach might be that if the user accidentally deletes the picture from the library, it will no longer be displayed in my application. The only other way I see is to store the picture in the app's local database as a BLOB.

    Read the article

  • Runtime Error 1004 using Select with several workbooks

    - by Johaen
    I have an Excel workbook which pulls out data from two other workbooks. Since the data changes hourly, there is the possibility that this macro is used more than once a day for the same data. So I just want to select all previous data for this date period and delete it; later on the data will be copied in anyway. But as soon as I use

        WBSH.Range(Cells(j, "A"), Cells(lastRow - 1, "M")).Select

    the code stops with error 1004 "Application-defined or object-defined error". Below is a snippet of the code with the relevant part. What is wrong here?

        'Set source workbook
        Dim currentWb As Workbook
        Set currentWb = ThisWorkbook
        Set WBSH = currentWb.Sheets("Tracking")

        'Query which data from the tracking files should get pulled out to the file
        CheckDate = Application.InputBox(("From which date you want to get data?" & vbCrLf & "Format: yyyy/mm/dd "), "Tracking data", Format(Date - 1, "yyyy/mm/dd"))

        'States the last entry which is done; know where to start; currentWb file
        With currentWb.Sheets("Tracking")
            lastRow = .Range("D" & .Rows.Count).End(xlUp).Row
            lastRow = lastRow + 1
        End With

        'Just the last 250 entries get checked since not so many entries are made in one week
        j = lastRow - 250

        'Check if there is already data for the look-up date in the analysis sheet and if so delete these records
        Do
            j = j + 1
            'Exit if there is no data to compare, to prevent overflow
            If WBSH.Cells(j + 1, "C").Value = "" Then
                Exit Do
            End If
        Loop While WBSH.Cells(j, "C").Value < CheckDate

        If j <> lastRow - 1 Then
            'WBSH.Range(Cells(j, "A"), Cells(lastRow - 1, "M")).Select
            'Selection.ClearContents
        End If

    Thank you!
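
    A likely explanation, offered as a suggestion based on the snippet: inside Range(...) the bare Cells(...) calls refer to the active sheet rather than to WBSH, so as soon as a different sheet or workbook is active, mixing the two raises error 1004. Qualifying both Cells calls with the same worksheet - and skipping the Select/Selection pair - would look like this:

        If j <> lastRow - 1 Then
            'Qualify Cells with WBSH so both ends of the range belong to the same sheet
            WBSH.Range(WBSH.Cells(j, "A"), WBSH.Cells(lastRow - 1, "M")).ClearContents
        End If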

    Read the article

  • How to convert a JSON array of paths to images stored on a server into a JavaScript array to display them? Using AJAX

    - by MichaelF
    I need the HTML file / AJAX code to take the JSON message and store the PATHS as a JavaScript array. Then my buildImage function can display the first image in the array. I'm new to AJAX and believe my misunderstanding lies in the conversion of the JSON to JavaScript. I'm also confused about whether my code creates a JSON array or object, or either. I might also need to download a library to my app to understand JSON?

    Below is a PHP file loading the paths of the images. I believe json_encode is converting the PHP array into a JSON message.

        <?php
        include("mysqlconnect.php");

        $select_query = "SELECT `ImagesPath` FROM `offerstbl` ORDER by `ImagesId` DESC";
        $sql = mysql_query($select_query) or die(mysql_error());

        $data = array();
        while($row = mysql_fetch_array($sql,MYSQL_BOTH)){
            $data[] = $row['ImagesPath'];
        }
        echo $images = json_encode($data);
        ?>

    Below is the script that is going to be loaded in a Cordova app.

        <!DOCTYPE html>
        <html>
        <head>
        <link rel="stylesheet" href="css/styles.css">
        <link rel="stylesheet" href="css/cascading.css">
        <script>
        function importJson(str) {
            // console.log(typeof xmlhttp.responseText);
            if (str=="") {
                document.getElementById("content").innerHTML="";
                return;
            }
            if (window.XMLHttpRequest) {
                // code for IE7+, Firefox, Chrome, Opera, Safari
                xmlhttp=new XMLHttpRequest();
            } else {
                // code for IE6, IE5
                xmlhttp=new ActiveXObject("Microsoft.XMLHTTP");
            }
            xmlhttp.onreadystatechange=function() {
                if (xmlhttp.readyState==4) {
                    //var images = JSON.parse(xmlhttp.responseText);
                    document.getElementById("content").innerHTML=xmlhttp.responseText;
                }
            }
            xmlhttp.open("GET","http://server/content.php");
            xmlhttp.send();
        }

        function buildImage(src) {
            var img = document.createElement('img')
            img.src = src
            alert("1");
            document.getElementById('content').appendChild(img);
        }

        for (var i = 0; i < images.length; i++) {
            buildImage(images[i]);
        }
        </script>
        </head>
        <body onload= "importJson();">
            <div class="contents" id="content" ></div>
            <img src="img/logo.png" height="10px" width="10px" onload= "buildImage();">
        </body>
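
    One possible shape for the success handler (a sketch; it assumes the PHP endpoint above returns nothing but the json_encode output, and that the WebView provides the standard JSON object): parse the response text into a real JavaScript array, then loop over it there, rather than in a loop that runs before the response has arrived.

        xmlhttp.onreadystatechange = function () {
            if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
                // '["img/a.jpg","img/b.jpg"]'  ->  ["img/a.jpg", "img/b.jpg"]
                var images = JSON.parse(xmlhttp.responseText);
                for (var i = 0; i < images.length; i++) {
                    buildImage(images[i]);   // uses the buildImage() defined above
                }
            }
        };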

    Read the article

  • Simple C++ code (what's wrong here?)

    - by JW
    Noob to C++. I'm trying to get user input (Last Name, First Name Middle Name), change part of it (middle name to middle initial) and then rearrange it (First MiddleInitial Last). Where am I messing up in my code? Thanks for ANY help you can offer!

        #include <iostream>
        using std::cout;
        using std::cin;
        #include <string>
        using std::string;

        int main()
        {
            string myString, last, first, middle;

            cout << "Enter your name: Last, First Middle";
            cin >> last >> first >> middle;

            char comma, space1, space2;
            comma = myString.find_first_of(',');
            space1 = myString.find_first_of(' ');
            space2 = myString.find_last_of(' ');

            last = myString.substr (0, comma);        // user input last name
            first = myString.substr (space1+1, -1);   // user input first name
            middle = myString.substr (space2+1, -1);  // user input middle name

            middle.insert (0, space2+1);  // inserts middle initial in front of middle name
            middle.erase (1, -1);         // deletes full middle name, leaving only middle initial

            myString = first + ' ' + middle + ' ' + last;

            return 0;
        }
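
    A few things jump out: the input is read into last/first/middle with >>, but every find/substr call then runs on myString, which is still empty; the positions are stored in char rather than std::string::size_type; and the insert/erase pair doesn't do what the comments say. A sketch of one way to do it (reads the whole line, then slices it; it assumes the input really follows the "Last, First Middle" format):

        #include <iostream>
        #include <string>
        using std::cin; using std::cout; using std::string;

        int main()
        {
            cout << "Enter your name: Last, First Middle\n";

            string line;
            std::getline(cin, line);                    // e.g. "Doe, Jane Marie"

            string::size_type comma  = line.find_first_of(',');
            string::size_type space1 = line.find_first_of(' ', comma + 2);

            string last   = line.substr(0, comma);                       // "Doe"
            string first  = line.substr(comma + 2, space1 - (comma + 2)); // "Jane"
            string middle = line.substr(space1 + 1, 1);                  // "M"

            cout << first << ' ' << middle << ' ' << last << '\n';
            return 0;
        }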

    Read the article

  • Printing out dictionaries

    - by kyril
    I have a rather specific question: I want to print out characters at specific places using the \033[ escape syntax. This is what the code below should do (the dict cells has the same keys as coords, but with either '*' or '-' as the value):

        coords = {'x'+str(x)+'y'+str(y) : (x,y) for x,y, in itertools.product(range(60), range(20))}

        for key, value in coords.items():
            char = cells[key]
            x,y = value
            HORIZ=str(x)
            VERT=str(y)
            char = str(char)
            print('\033['+VERT+';'+HORIZ+'f'+char)

    However, I noticed that if I put this into an infinite while loop, it does not always print the same characters at the same positions. There are only slight changes, but it deletes some and puts them back in after some loops. I already tried it with lists, and there it seems to behave just fine, so I tend to think it has something to do with the dict, but I cannot figure out what it could be. You can see the problem in a console here: SharedConsole. I am happy for every tip on this matter.

    On a related topic: after the printing, some changes should be made to the values of the cells dict, but for reasons unknown to me, only the first two rules are executed and the rest is ignored. The rules should test how many neighbours (which is in population) are around the cell and apply the according rule. In my implementation I get some kind of weird tumor growth, which should not happen, as a cell with more than three neighbours should die (see FreakingTumor):

        if cells_copy [coord] == '-':
            if population == 3:
                cells [coord] = '*'
        if cells_copy [coord] == '*':
            if population > 3:
                cells [coord] = '-'
            elif population <= 1:
                cells [coord] = '-'
            elif population == 2 or 3:
                cells [coord] = '*'

    I checked the population variable several times, so I am quite sure that is not the matter. I am sorry for the slow consoles. Thanks in advance! Kyril
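
    Two observations, offered as guesses from the snippets above: `population == 2 or 3` is parsed as `(population == 2) or 3`, and 3 is always truthy, so that branch fires for every live cell regardless of the count (hence the growth); and in Python versions before 3.7 a plain dict has no guaranteed iteration order, so drawing in coords.items() order can repaint cells in a different sequence each run. A sketch of both fixes (cells, cells_copy, coord and population are the names from the code above):

        # rules: make the last comparison explicit
        if cells_copy[coord] == '-':
            if population == 3:
                cells[coord] = '*'
        elif cells_copy[coord] == '*':
            if population > 3 or population <= 1:
                cells[coord] = '-'
            elif population in (2, 3):      # '== 2 or 3' is always true; this isn't
                cells[coord] = '*'

        # drawing: iterate the grid itself, not the dict's own order
        for x, y in itertools.product(range(60), range(20)):
            key = 'x' + str(x) + 'y' + str(y)
            print('\033[' + str(y) + ';' + str(x) + 'f' + cells[key], end='')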

    Read the article

  • Should core application configuration be stored in the database, and if so what should be done to secure it?

    - by Rl
    I'm writing an application around a lot of hierarchical data. Currently the hierarchy is fixed, but it's likely that new items will be added to the hierarchy in the future (please let them be leaves). My current application and database design is fairly generic, and nothing dealing with specific nodes in the hierarchy is hardcoded, with the exception of validation and lookup functions written to retrieve external data from each node's particular database.

    This pleases me from a design point of view, but I'm nervous at the realization that the entire application rests on a handful of records in the database. I'm also frustrated that I have to enforce certain aspects of data integrity with database triggers rather than by foreign key constraints (an example is where several different nodes in the hierarchy have their own proprietary IDs and I store them in a single column which, when coupled with the node ID, can be used to locate the foreign data).

    I'm starting to wonder whether it would have been more appropriate to simply hardcode these known nodes into the system so that it would be more "type safe" and less generic. How does one know when something should be hardcoded, and when it should be a configuration item? Is it just a cost-benefit analysis of clarity/safety now vs. less work later, or am I missing some metric I should be using to determine whether or not this is appropriate?

    The steps I'm taking to protect these valuable configurations are to add triggers that prevent updates/deletes. The database user that this application uses will only have the ability to manipulate data through stored procedures. What else can I do?

    Read the article

  • Best tree/heap data structure for fixed set of nodes with changing values + need top 20 values?

    - by user350139
    I'm writing something like a game in C++ where I have a database table containing the current score for each user. I want to read that table into memory at the start of the game, quickly change each user's score while the game is being played in response to what each user does, and then, when the game ends, write the current scores back to the database. I also want to be able to find the 20 or so users with the highest scores. No users will be added or deleted during the short period when the game is being played. I haven't tried it yet, but updating the database directly might take too much time during the period when the game is being played.

    Requirements:
    - Fixed set of users (might be 10,000 to 50,000 users).
    - Will map user IDs to their score and other user-specific information. User IDs will be auto_increment values.
    - If the structure has a high memory overhead, that's probably not an issue. If the program crashes during gameplay, it can just be re-started.
    - Quickly get a user's current score.
    - Quickly add to a user's current score (and return their current score).
    - Quickly get the 20 users with the highest scores.
    - No deletes. No inserts except when the structure is first created, and how long that takes isn't critical.
    - Getting the top 20 users will only happen every five or ten seconds, but getting/adding will happen much more frequently.

    If not for the top-20 requirement, I could just create a memory block equal to sizeof(user) * max(user id) and put each user at user id * sizeof(user) for fast access. Should I do that plus some other structure for the top-20 feature, or is there one structure that will handle all of this together?
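
    Given those constraints, a dense array indexed by user ID plus an occasional partial sort is often enough: O(1) get/add, with the top 20 computed only when asked for. A sketch (it assumes IDs are reasonably dense auto_increment values, as described):

        #include <algorithm>
        #include <cstdint>
        #include <vector>

        struct Scores {
            std::vector<int64_t> score;                 // index = user ID

            explicit Scores(std::size_t maxUserId) : score(maxUserId + 1, 0) {}

            int64_t get(std::size_t userId) const      { return score[userId]; }
            int64_t add(std::size_t userId, int64_t d) { return score[userId] += d; }

            // IDs of the 20 highest scores; roughly O(n log 20) work, which is
            // fine if it only runs every few seconds.
            std::vector<std::size_t> top20() const {
                std::vector<std::size_t> ids(score.size());
                for (std::size_t i = 0; i < ids.size(); ++i) ids[i] = i;
                std::size_t k = std::min<std::size_t>(20, ids.size());
                std::partial_sort(ids.begin(), ids.begin() + k, ids.end(),
                                  [&](std::size_t a, std::size_t b) {
                                      return score[a] > score[b];
                                  });
                ids.resize(k);
                return ids;
            }
        };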

    Read the article

  • Ouch, how to escape this in sed? Cleaning up iframe malware

    - by user1769783
    I'm helping someone clean up a malware infection on a site and I'm having a difficult time correctly matching some strings in sed so I can create a script to mass search and replace / remove it. The strings are:

        <script>document.write('<style>.vb_style_forum {filter: alpha(opacity=0);opacity: 0.0;width: 200px;height: 150px;}</style><div class="vb_style_forum"><iframe height="150" width="200" src="http://www.iws-leipzig.de/contacts.php"></iframe></div>');</script>

        <script>document.write('<style>.vb_style_forum {filter: alpha(opacity=0);opacity: 0.0;width: 200px;height: 150px;}</style><div class="vb_style_forum"><iframe height="150" width="200" src="http://vidintex.com/includes/class.pop.php"></iframe></div>');</script>

        <script>document.write('<style>.vb_style_forum {filter: alpha(opacity=0);opacity: 0.0;width: 200px;height: 150px;}</style><div class="vb_style_forum"><iframe height="150" width="200" src="http://www.iws-leipzig.de/contacts.php"></iframe></div>');</script>

    I can't seem to figure out how to escape the various characters in those lines. If I try to just say "delete the entire line if it matches http://vidintex.com/includes/class.pop.php", it also deletes the closing tag in the .html files as well. Any help would be greatly appreciated!
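
    One sketch of an approach that removes just the injected block rather than the whole line (GNU sed assumed; the caveat is that sed's .* is greedy, so this relies on ');</script>' not appearing again later on the same line - worth trying on a copy first):

        # strip the injected <script>...</script> in place, keeping .bak copies
        find . \( -name '*.html' -o -name '*.php' \) -print0 |
          xargs -0 sed -i.bak \
            "s#<script>document\.write('<style>\.vb_style_forum.*');</script>##g"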

    Read the article

  • Is there any way to show a hidden div in the last row of an HTML table

    - by oo
    I have an HTML table. Here is a simplified version:

        <table>
          <tr>
            <td><div style="display: none;" class="remove0">Remove Me</div></td>
          </tr>
          <tr>
            <td><div style="display: none;" class="remove1">Remove Me</div></td>
          </tr>
          <tr>
            <td><div class="remove2">Remove Me</div></td>
          </tr>
        </table>

    I have JavaScript that handles a click on "Remove Me" in the last row and deletes that HTML row using:

        $(this).parents("tr:first").remove();

    The issue is that when I remove this last row, I also want the "Remove Me" text to now show up in the second row (which is now the new last row). How would I show this div so that it would dynamically show the "Remove Me" from the new last row?
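
    A sketch of one way to do it with a delegated handler (jQuery 1.7+ for .on(); the class names follow the remove0/remove1/... pattern above): after removing the clicked row, reveal the hidden div in whatever row is last now.

        $('table').on('click', 'div[class^="remove"]', function () {
            $(this).closest('tr').remove();
            // un-hide the "Remove Me" div in the new last row
            $('table tr:last div[class^="remove"]').show();
        });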

    Read the article

  • Getting the error "Missing $ inserted" in LaTeX

    - by Espenhh
    Hey, I try to write the following in LaTeX:

        \begin{itemize}
        \item \textbf{insert(element|text)} inserts the element or text passed at the start of the selection.
        \item \textbf{insert_after(element|text)} inserts the element or text passed at the end of the selection.
        \item \textbf{replace(element|text)} replaces the selection with the passed text/element.
        \item \textbf{delete()} deletes the selected text.
        \item \textbf{annotate(name,value)} annotates the selected text with the passed name and value-pair. This can either be hidden meta-data about the selection, or can alter the visible appearance.
        \item \textbf{clear_annotation()} removes any annotation for this specific selection.
        \item \textbf{update_element(value)} performs an update of the element at the selection with the passed value.
        \end{itemize}

    For some reason, I get a bunch of errors. I think there is something with the use of the word "insert". I get errors like "Missing $ inserted", so it seems like the parser tries to fix some "errors" on my part. Do I need to escape words like "insert", and how do I do that?
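
    The word "insert" itself is harmless; the usual trigger for "Missing $ inserted" here is the underscore in names like insert_after and clear_annotation - outside math mode, _ is the subscript character, so TeX tries to open math mode for you. Escaping each underscore as \_ should be enough, for example:

        \item \textbf{insert\_after(element|text)} inserts the element or text
              passed at the end of the selection.
        \item \textbf{clear\_annotation()} removes any annotation for this
              specific selection.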

    Read the article

  • Intercept creation of activities when the application is restored

    - by Johan Bilien
    Most of our activities access a user-specific model. All these activities inherit from a ModelActivity base class, which provides a getModel() call. When one of these activities detects that the user has signed out (through the AccountManager callback), it sticks to its existing model, but prepares to exit back to the root activity (which is not user-specific) by starting its intent with FLAG_ACTIVITY_CLEAR_TOP.

    If, however, the user deletes an account while the app is not running, we run into trouble when the activity is restored. Now the activity needs to handle there not being a model, which makes the code more complicated and bug-prone.

    Ideally we would intercept the application restore process before the activity is created. Then we would check whether we have an account and a model, and if not, clear up the saved stack of activities and restart from our root activity instead of the last saved activity. But as far as I can tell, the first place where we can run code is in the onCreate callback of the activity. Is there a way to run some code when the application is restored from background-saving, but before the saved activity is created?

    Read the article

  • Populating an NSArray

    - by MoKaM
    I intend to make a program that does the following:
    1. Create an NSArray populated with numbers from 1 to 100,000.
    2. Loop over some code that deletes certain elements of the NSArray when certain conditions are met.
    3. Store the resultant NSArray.

    However, the above steps will also be looped over many times, so I need a fast way of making this NSArray that has 100,000 number elements. So what is the fastest way of doing it? Is there an alternative to iteratively populating an array using a for loop, such as an NSArray method that could do this quickly for me?

    Or perhaps I could make the NSArray with the 100,000 numbers by any means the first time, and then create every new NSArray (for step 1) using the method arrayWithArray: - is that a quicker way of doing it? Or perhaps you have something completely different in mind that will achieve what I want.

    Edit: Replace NSArray with NSMutableArray in this post.

    Read the article

  • Optimize INSERT / UPDATE / DELETE operation

    - by clime
    I wonder if the following script can be optimized somehow. It writes a lot to disk because it deletes possibly up-to-date rows and reinserts them. I was thinking about applying something like "insert ... on duplicate key update" and found some possibilities for single-row updates, but I don't know how to apply it in the context of an INSERT INTO ... SELECT query.

        CREATE OR REPLACE FUNCTION update_member_search_index() RETURNS VOID AS $$
        DECLARE
            member_content_type_id INTEGER;
        BEGIN
            member_content_type_id := (SELECT id FROM django_content_type
                                       WHERE app_label='web' AND model='member');

            DELETE FROM watson_searchentry WHERE content_type_id = member_content_type_id;

            INSERT INTO watson_searchentry (engine_slug, content_type_id, object_id,
                                            object_id_int, title, description, content,
                                            url, meta_encoded)
            SELECT 'default',
                   member_content_type_id,
                   web_member.id,
                   web_member.id,
                   web_member.name,
                   '',
                   web_user.email||' '||web_member.normalized_name||' '||web_country.name,
                   '',
                   '{}'
            FROM web_member
            INNER JOIN web_user ON (web_member.user_id = web_user.id)
            INNER JOIN web_country ON (web_member.country_id = web_country.id)
            WHERE web_user.is_active=TRUE;
        END;
        $$ LANGUAGE plpgsql;

    EDIT: Schemas of web_member, watson_searchentry, web_user, web_country: http://pastebin.com/3tRVPPVi

    (content_type_id, object_id_int) is a unique pair in watson_searchentry, but at the moment the index is not present (there is no use for it). This script should be run at most once a day for full rebuilds of the search index.
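
    For what it's worth, newer PostgreSQL releases (9.5 and later) can express this as INSERT ... SELECT ... ON CONFLICT ... DO UPDATE, which updates existing rows in place instead of deleting and reinserting them. It does require a unique index on (content_type_id, object_id_int), and rows for members that have disappeared would still need a separate DELETE. A sketch of the shape, dropped into the same function body so member_content_type_id is the variable computed above:

        INSERT INTO watson_searchentry (engine_slug, content_type_id, object_id,
                                        object_id_int, title, description, content,
                                        url, meta_encoded)
        SELECT 'default', member_content_type_id, web_member.id, web_member.id,
               web_member.name, '',
               web_user.email||' '||web_member.normalized_name||' '||web_country.name,
               '', '{}'
        FROM web_member
        JOIN web_user    ON web_member.user_id    = web_user.id
        JOIN web_country ON web_member.country_id = web_country.id
        WHERE web_user.is_active
        ON CONFLICT (content_type_id, object_id_int) DO UPDATE
        SET title   = EXCLUDED.title,
            content = EXCLUDED.content;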

    Read the article

  • Saving MySQL row checkpoint in table?

    - by Keet
    Hello, I am having a wee problem, and I am sure there is a more convenient/simpler way to achieve the solution, but all searches are drawing blanks at the moment!

    I have a MySQL DB that is regularly updated by a PHP page (via a cron job) which adds or deletes entries as appropriate. My issue is that I also need to check whether any details (i.e. the phone number or similar) for an entry have changed, but doing this on every call is not possible (not only does it seem to me to be overkill, but I am restricted by a 3rd-party API call limit). Plus this is not critical info.

    So I was thinking it might be best to just check one entry per page call, and iterate through the rows/entries with each successive page call. What would be the best way of doing this, i.e. keeping track of which entry/row in the table should be checked next?

    I have 2 ideas of how to implement this:
    1. The ID of the current row could be saved to a file on the server (surely not the best way).
    2. An extra boolean field "check" is added to the table, set to TRUE on the first entry and FALSE on all others. Then on each page call: find the row 'where check = TRUE', run the update check on this row, 'set check = FALSE' on it, and 'set check = TRUE' on the next row.

    Is this the best way to do this, or does anyone have a better suggestion?

    Thanks in advance! .k
    PS: sorry about the title.
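
    A variation on idea 2 that avoids touching two rows of the main table per call is to keep the checkpoint in its own one-row table (a sketch; the table and column names here are made up, and it assumes the main table's primary key is an auto-increment id):

        -- one-row table holding the id of the last row that was checked
        CREATE TABLE check_pointer (last_id INT NOT NULL DEFAULT 0);
        INSERT INTO check_pointer (last_id) VALUES (0);

        -- next row to verify: the first id above the checkpoint
        -- (if this returns no row, reset last_id to 0 in PHP and run it again)
        SELECT id FROM main_table
        WHERE id > (SELECT last_id FROM check_pointer)
        ORDER BY id ASC
        LIMIT 1;

        -- after the API check succeeds, advance the checkpoint
        UPDATE check_pointer SET last_id = 123;   -- 123 = the id just checked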

    Read the article

  • Data Structure / Hash Function to link Sets of Ints to Value

    - by Gaminic
    Given n integer IDs, I wish to link all possible sets of up to k IDs to a constant value. What I'm looking for is a way to translate sets (e.g. {1, 5}, {1, 3, 5} and {1, 2, 3, 4, 5, 6, 7}) to unique values.

    Guarantees:
    - n < 100 and k < 10 (again: set sizes will range in [1, k]).
    - The order of IDs doesn't matter: {1, 5} == {5, 1}.
    - All combinations are possible, but some may be excluded.
    - All sets and values are constant and made only once. No deletes or inserts, no value updates.
    - Once generated, the only operations taking place will be look-ups.
    - Look-ups will be frequent and one-directional (given set, look up value).
    - There is no need to sort (or otherwise organize) the values.

    Additionally, it would be nice (but not obligatory) if "neighboring" sets (drop one ID, add one ID, swap one ID, etc.) are easy to reach, as well as "all sets that include at least this set". Any ideas?
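
    Since n < 100 and k < 10, every ID fits in 7 bits and a set holds at most 9 of them, so one sketch is to canonicalize the set by sorting and pack it into a single 64-bit key for an ordinary hash map; "neighboring" sets are then just a matter of re-packing with one ID changed.

        #include <algorithm>
        #include <cstdint>
        #include <unordered_map>
        #include <vector>

        // Pack a set of up to 9 IDs (each 0..99) into one 64-bit key.
        // Sorting first makes {1,5} and {5,1} produce the same key; IDs are
        // shifted by +1 so a packed 0 can never collide with a real ID.
        uint64_t pack(std::vector<int> ids)
        {
            std::sort(ids.begin(), ids.end());
            uint64_t key = 0;
            for (int id : ids)
                key = (key << 7) | static_cast<uint64_t>(id + 1);
            return key;
        }

        int main()
        {
            std::unordered_map<uint64_t, int> value_of;
            value_of[pack({1, 5})]    = 42;       // {1, 5} == {5, 1}
            value_of[pack({1, 3, 5})] = 7;

            return value_of.at(pack({5, 1}));     // looks up 42
        }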

    Read the article
