Search Results

Search found 695 results on 28 pages for 'deletes'.

Page 21/28 | < Previous Page | 17 18 19 20 21 22 23 24 25 26 27 28  | Next Page >

  • Incremental deploy from a shell script

    - by WishCow
    I have a project where I'm forced to use FTP to deploy the files to the live server. I'm developing on Linux, so I hacked together a bash script that makes a backup of the FTP server's contents, deletes all the files on the server, and uploads all the fresh files from the Mercurial repository (taking care of user-uploaded files and folders, making post-deploy changes, etc.). It works well, but the project is getting big enough that the deployment process takes too long. I'd like to modify the script to look up which files have changed and deploy only the modified files (the backup is fine as it is). I'm using Mercurial as the VCS, so my idea is to ask it for the files that changed between two revisions, iterate over them, upload each modified file, and delete each removed file. I can use hg log -vr rev1:rev2 and carve the changed files out of the output with grep/sed/etc. Two problems:

    1. I have heard the horror stories that parsing the output of ls leads to insanity, so my guess is the same applies here: if I try to parse the output of hg log, the filenames will undergo word splitting and all kinds of transformations.
    2. hg log doesn't tell me whether a file was modified, added, or deleted. Differentiating between modified and deleted files is the minimum I'd need.

    So, what would be the correct way to do this? I'm using yafc as the FTP client, in case it matters, but I'm willing to switch.
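
    A minimal sketch of one approach: hg status --rev REV1 --rev REV2 reports each changed file with an M/A/R status code, and its -0/--print0 flag delimits entries with NUL bytes, which sidesteps the word-splitting worry. The C++ below (revisions 10 and 20 are hypothetical placeholders) shows how that output can be consumed safely:

        // Read `hg status -0 --rev 10 --rev 20` and split entries on NUL bytes,
        // so filenames containing spaces or newlines survive intact.
        #include <cstdio>
        #include <iostream>
        #include <string>
        #include <utility>
        #include <vector>

        int main() {
            FILE* pipe = popen("hg status -0 --rev 10 --rev 20", "r");
            if (!pipe) return 1;

            std::string entry;
            std::vector<std::pair<char, std::string>> changes;
            for (int c; (c = fgetc(pipe)) != EOF; ) {
                if (c == '\0') {                        // end of one "X filename" entry
                    if (entry.size() > 2)
                        changes.emplace_back(entry[0], entry.substr(2));
                    entry.clear();
                } else {
                    entry += static_cast<char>(c);
                }
            }
            pclose(pipe);

            for (const auto& [code, path] : changes)    // code is M, A, or R
                std::cout << code << " -> " << path << "\n";
        }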

    Read the article

  • sqlite3 DELETE problem "Library Routine Called Out Of Sequence"

    - by Michael Bordelon
    Here is my second stupid noob problem. I am trying to do a simple DELETE and I keep blowing up on the prepare step. I already have other deletes, inserts, updates and selects working. I am sure it is something simple. I appreciate your help.

        + (void)flushTodaysWorkouts {
            sqlite3_stmt *statement = nil;
            // open the database
            if (sqlite3_open([[BIUtility getDBPath] UTF8String], &database) != SQLITE_OK) {
                sqlite3_close(database);
                NSAssert(0, @"Failed to open database");
            }
            NSArray *woList = [self todaysScheduledWorkouts];
            for (Workout *wo in woList) {
                NSInteger woID = wo.woInstanceID;
                if (statement == nil) {
                    const char *sql = "DELETE FROM IWORKOUT WHERE WOINSTANCEID = ?";
                    if (sqlite3_prepare_v2(database, sql, -1, &statement, NULL) != SQLITE_OK)
                        NSAssert1(0, @"Error while creating delete statement. '%s'", sqlite3_errmsg(database));
                }
                // When binding parameters, the index starts from 1, not zero.
                sqlite3_bind_int(statement, 1, woID);
                if (SQLITE_DONE != sqlite3_step(statement))
                    NSAssert1(0, @"Error while deleting. '%s'", sqlite3_errmsg(database));
                sqlite3_finalize(statement);
            }
            if (database)
                sqlite3_close(database);
        }
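
    For reference, a sketch of the usual pattern with the plain sqlite3 C API (illustrative, not the asker's code; the SQL follows the question). The code above finalizes the statement inside the loop, which leaves statement non-nil but dead, so the next iteration's bind trips "library routine called out of sequence". Preparing once, resetting per row, and finalizing once avoids that:

        // Prepare once, then bind/step/reset per row, finalize once at the end.
        #include <sqlite3.h>
        #include <cstdio>

        bool flush_workouts(sqlite3* db, const int* ids, int count) {
            sqlite3_stmt* stmt = nullptr;
            const char* sql = "DELETE FROM IWORKOUT WHERE WOINSTANCEID = ?";
            if (sqlite3_prepare_v2(db, sql, -1, &stmt, nullptr) != SQLITE_OK) {
                std::fprintf(stderr, "prepare failed: %s\n", sqlite3_errmsg(db));
                return false;
            }
            bool ok = true;
            for (int i = 0; i < count && ok; ++i) {
                sqlite3_bind_int(stmt, 1, ids[i]);   // parameter indices start at 1
                ok = (sqlite3_step(stmt) == SQLITE_DONE);
                sqlite3_reset(stmt);                 // make the statement reusable
                sqlite3_clear_bindings(stmt);
            }
            sqlite3_finalize(stmt);                  // finalize exactly once
            return ok;
        }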

    Read the article

  • ASP.NET GridView "Client-Side Confirmation when Deleting" stopped working in IE - how come?

    - by tarnold
    A few months ago, I programmed an ASP.NET GridView with a custom "Delete" LinkButton and client-side JavaScript confirmation, according to this MSDN article: http://msdn.microsoft.com/en-us/library/bb428868.aspx (published in April 2007), or e.g. http://stackoverflow.com/questions/218733/javascript-before-aspbuttonfield-click. The code looks like this:

        <ItemTemplate>
            <asp:LinkButton ID="deleteLinkButton" runat="server" Text="Delete"
                OnCommand="deleteLinkButtonButton_Command"
                CommandName='<%# Eval("id") %>'
                OnClientClick='<%# Eval("id", "return confirm(\"Delete Id {0}?\")") %>' />
        </ItemTemplate>

    Surprisingly, "Cancel" no longer works in my IE (version 6.0.2900.2180.xpsp_sp2_qfe.080814-1242): it always deletes the row. With Opera (version 9.62) it still works as expected and as described in the MSDN article. More surprisingly, on a coworker's machine with the same IE version it still works ("Cancel" does not delete the row). The generated code looks like:

        <a onclick="return confirm(...);" href="javascript:__doPostBack('...')">

    As confirm(...) returns false on "Cancel", I expect the __doPostBack event in the href not to be fired. Are there any strange IE settings I might have accidentally changed? What else could be the cause of this weird behaviour? Or is this a "please reinstall WinXP" issue?

    Read the article

  • How to monitor program code execution (file creation and modification by code lines, etc.)?

    - by infant programmer
    My program triggers an XSL transformation. The transformation code creates some .dll and .tmp files and deletes them soon after the transformation completes. It is almost impossible for me to trace the creation and deletion of these files manually, so I want to add a chunk of code that displays, in the console window, which code line created or modified which .tmp and .dll files. This is the relevant part of the code:

        string strXmlQueryTransformPath = @"input.xsl";
        string strXmlOutput = string.Empty;
        StringReader srXmlInput = null;
        StringWriter swXmlOutput = null;
        XslCompiledTransform xslTransform = null;
        XPathDocument xpathXmlOrig = null;
        XsltSettings xslSettings = null;
        MemoryStream objMemoryStream = null;

        objMemoryStream = new MemoryStream();
        xslTransform = new XslCompiledTransform(false);
        xpathXmlOrig = new XPathDocument("input.xml");
        xslSettings = new XsltSettings();
        xslSettings.EnableScript = true;
        xslTransform.Load(strXmlQueryTransformPath, xslSettings, new XmlUrlResolver());
        xslTransform.Transform(xpathXmlOrig, null, objMemoryStream);
        objMemoryStream.Position = 0;
        StreamReader objStreamReader = new StreamReader(objMemoryStream);
        strXmlOutput = objStreamReader.ReadToEnd();
        // make use of the data in string "strXmlOutput"

    Google and MSDN searches couldn't help me much.
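
    One language-agnostic way to see which step creates which files is to snapshot the target directory before and after each suspect line and diff the snapshots. A minimal sketch of the idea (in C++ for illustration, not the asker's C# stack; the temp directory is an assumption about where the files land):

        // Snapshot a directory before and after a code region and print the
        // files that appeared, to attribute file creation to that region.
        #include <filesystem>
        #include <iostream>
        #include <set>
        #include <string>

        namespace fs = std::filesystem;

        std::set<std::string> snapshot(const fs::path& dir) {
            std::set<std::string> names;
            for (const auto& e : fs::directory_iterator(dir))
                names.insert(e.path().filename().string());
            return names;
        }

        int main() {
            const fs::path dir = fs::temp_directory_path();  // assumed location of tmp/dll files
            auto before = snapshot(dir);
            // ... run the suspect code line(s), e.g. the transform Load/Transform calls ...
            auto after = snapshot(dir);
            for (const auto& name : after)
                if (!before.count(name))
                    std::cout << "created: " << name << "\n";
        }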

    Read the article

  • Android Camera intent creating two files

    - by Kyle Ramstad
    I am making a program that takes a picture and then shows its thumbnail. When using the emulator all goes well, and the discard button deletes the photo. But on a real device the camera intent saves the image at the imageUri variable and also a second one, named as if I had just opened the camera and taken a picture by itself.

        private static final int CAMERA_PIC_REQUEST = 1337;

        /** Called when the activity is first created. */
        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.camera);
            // start camera
            values = new ContentValues();
            values.put(MediaStore.Images.Media.TITLE, "New Picture");
            values.put(MediaStore.Images.Media.DESCRIPTION, "From your Camera");
            imageUri = getContentResolver().insert(MediaStore.Images.Media.EXTERNAL_CONTENT_URI, values);
            image = (ImageView) findViewById(R.id.ImageView01);
            Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
            intent.putExtra(MediaStore.EXTRA_OUTPUT, imageUri);
            startActivityForResult(intent, CAMERA_PIC_REQUEST);
            // buttons for saving or discarding the image
            Button save = (Button) findViewById(R.id.Button01);
            Button close = (Button) findViewById(R.id.Button02);
        }

        @Override
        protected void onActivityResult(int requestCode, int resultCode, Intent data) {
            if (requestCode == CAMERA_PIC_REQUEST && resultCode == RESULT_OK) {
                try {
                    thumbnail = MediaStore.Images.Media.getBitmap(getContentResolver(), imageUri);
                    image.setImageBitmap(thumbnail);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            } else {
                finish();
            }
        }

        public void myClickHandler(View view) {
            switch (view.getId()) {
                case R.id.Button01:
                    finish();
                    break;
                case R.id.Button02:
                    discard();
            }
        }

        private void discard() {
            getContentResolver().delete(imageUri, null, null);
            finish();
        }

    Read the article

  • Some questions about dotnetopenauth

    - by chobo2
    Hi, I have a couple of outstanding questions, mainly regarding Twitter and Facebook.

    Facebook:
    1. In the FacebookGraph class there are properties such as Id, name, etc. How do I add to this list? For instance, what happens if I want a user's hometown? I tried to add a property called hometown, but it is always null.
    2. What should I store for lookup later in my DB: their id (1418) or the whole URL (http://www.facebook.com/profile.php?id=1418)? I use it to grab their data and to see if they have an account with my site.
    3. Is it actually good to use this id, given that it seems to be common knowledge? Can't someone just find the profile id and make a fake request against my site?
    4. How do you set up dotnetopenauth to deal with the case when a user goes to Facebook and deletes access to my website? I know Facebook can send a deauthorization code to your site so you can delete their account, but I don't know how to do that through dotnetopenauth.

    Twitter:
    Is it possible to do number 4 with Twitter?

    Ajax:
    Is it possible to make the OpenID stuff Ajax? I don't see a sample anywhere in the dotnetopenauth samples.

    Read the article

  • MySQL LIMIT 1 but query 15 rows?

    - by Ian
    Basically, what I'm trying to do is compare the IDs of rows against 15 results in MySQL, eliminating all but one (using NOT IN), and then pull that result. Normally this would be fine by itself; however, the order of the 15 rows I'm querying is constantly changing based on a ranking, so there is a possibility that between the time the ranking updates and the Ajax request (in which I submit the IDs for NOT IN), more than one ID has changed, which would of course bring back more than one row, and I do not want that. So, in short: is there a way I can query 15 rows but only return one, without having to run two separate queries? Any help is appreciated, thank you.

    EXAMPLE: Say I have 7 items in my database, and I'm displaying 5 on the page. The user sees:

        Apple, Orange, Kiwi, Banana, Grape

    But in the database I also have:

        Peach, Blackberry

    Now, if the user deletes an item from their list, I want to add another item (based on a ranking they have). To know what is currently on their list, I send the remaining items to the server (say they deleted Kiwi: I send Apple, Orange, Banana, and Grape). I then select the 5 highest-ranked items from the remaining six, make sure they are not the ones already displayed on the page, and add the new one to the list (either Peach or Blackberry). All good and well, except that if both Peach and Blackberry now outrank Grape, the query returns two results instead of just one: it searches Apple, Orange, Banana, Peach, Blackberry and excludes Apple, Orange, Banana, Grape, which leaves both Peach and Blackberry instead of just one of them.

    Read the article

  • Hibernate insert into a collection causes a delete, then re-insertion of all items in the collection

    - by Mark
    I have a many-to-many relationship between CohortGroup and Employee. Any time I insert an Employee into a CohortGroup, Hibernate deletes the group's rows from the resolution table and inserts all the members again, plus the new one. Why not just add the new one? The annotation in the group:

        @ManyToMany(cascade = { PERSIST, MERGE, REFRESH })
        @JoinTable(name = "MYSITE_RES_COHORT_GROUP_STAFF",
                   joinColumns = { @JoinColumn(name = "COHORT_GROUPID") },
                   inverseJoinColumns = { @JoinColumn(name = "USERID") })
        public List<Employee> getMembers() {
            return members;
        }

    The other side, in Employee:

        @ManyToMany(mappedBy = "members", cascade = { PERSIST, MERGE, REFRESH })
        public List<CohortGroup> getMemberGroups() {
            return memberGroups;
        }

    Code snippet:

        Employee emp = edao.findByID(cohortId);
        CohortGroup group = cgdao.findByID(Long.decode(groupId));
        group.getMembers().add(emp);
        cgdao.persist(group);

    Below is the SQL reported in the log:

        delete from swas.MYSITE_RES_COHORT_GROUP_STAFF where COHORT_GROUPID=?
        insert into swas.MYSITE_RES_COHORT_GROUP_STAFF (COHORT_GROUPID, USERID) values (?, ?)
        insert into swas.MYSITE_RES_COHORT_GROUP_STAFF (COHORT_GROUPID, USERID) values (?, ?)
        insert into swas.MYSITE_RES_COHORT_GROUP_STAFF (COHORT_GROUPID, USERID) values (?, ?)
        insert into swas.MYSITE_RES_COHORT_GROUP_STAFF (COHORT_GROUPID, USERID) values (?, ?)
        insert into swas.MYSITE_RES_COHORT_GROUP_STAFF (COHORT_GROUPID, USERID) values (?, ?)
        insert into swas.MYSITE_RES_COHORT_GROUP_STAFF (COHORT_GROUPID, USERID) values (?, ?)

    This seems really inefficient and is causing some issues: if several requests are made to add an employee to the group, some get overwritten.

    Read the article

  • C++ Memory Leak, Can't find where

    - by Nicholas
    I'm using Visual Studio 2008, developing an OpenGL window. I've created several classes for building a skeleton: one for joints, one for skin, one for a Body (which is a holder for several joints and skins), and one for reading a skel/skin file. Within each of my classes I'm using pointers for most of my data, most of which are declared using new int[XX]. Each class has a destructor that deletes the pointers using delete[]. Within my GLUT display function I declare a Body, open the files and draw them, then delete the Body at the end of the display. But there's still a memory leak somewhere in the program: as time goes on, its memory usage keeps increasing at a consistent rate, which I interpret as something not getting deleted. I'm not sure if it's something in the GLUT display function that's not deleting the Body, or something else. I've followed the steps for memory-leak detection in Visual Studio 2008 and it doesn't report any leak, but I'm not 100% sure it's working right for me. I'm not fluent in C++, so there may be something I'm overlooking. Can anyone see it?
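
    A minimal sketch of the usual remedy for this class of bug: let containers own the storage, so every allocation is freed automatically even when an exception or early return skips manual cleanup. Class and field names here are hypothetical, not the asker's:

        // Replacing raw new[]/delete[] members with std::vector removes the
        // need for hand-written destructors entirely.
        #include <cstddef>
        #include <vector>

        class Joint {
            std::vector<int> weights;          // was: int* weights = new int[n];
        public:
            explicit Joint(std::size_t n) : weights(n) {}
            // no destructor needed: the vector frees itself
        };

        class Body {
            std::vector<Joint> joints;         // owns the joints by value
        public:
            void addJoint(std::size_t n) { joints.emplace_back(n); }
        };

        int main() {
            for (int frame = 0; frame < 100000; ++frame) {
                Body b;                        // fully reclaimed at end of scope
                b.addJoint(16);
            }   // if memory still grows here, the leak is elsewhere (e.g. GL objects)
        }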

    Read the article

  • C++ polymorphism and slicing

    - by Draco Ater
    The following code prints:

        Derived
        Base
        Base

    But I need every Derived object put into User::items to call its own print function, not the base class one. Can I achieve that without using pointers? If it is not possible, how should I write the function that deletes User::items one by one and frees the memory, so that there are no memory leaks?

        #include <iostream>
        #include <vector>
        #include <algorithm>
        using namespace std;

        class Base {
        public:
            virtual void print() { cout << "Base" << endl; }
        };

        class Derived : public Base {
        public:
            void print() { cout << "Derived" << endl; }
        };

        class User {
        public:
            vector<Base> items;
            void add_item(Base& item) {
                item.print();
                items.push_back(item);
                items.back().print();
            }
        };

        void fill_items(User& u) {
            Derived d;
            u.add_item(d);
        }

        int main() {
            User u;
            fill_items(u);
            u.items[0].print();
        }
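
    A sketch of the pointer-based fix (kept close to the question's class names): a vector<Base> slices every Derived down to a Base on push_back, so storing smart pointers is the standard answer, and it also handles the cleanup concern, since unique_ptr frees the objects itself:

        #include <iostream>
        #include <memory>
        #include <vector>

        class Base {
        public:
            virtual ~Base() = default;                        // virtual dtor: safe delete via Base*
            virtual void print() { std::cout << "Base\n"; }
        };

        class Derived : public Base {
        public:
            void print() override { std::cout << "Derived\n"; }
        };

        class User {
        public:
            std::vector<std::unique_ptr<Base>> items;         // no slicing: stores pointers
            void add_item(std::unique_ptr<Base> item) {
                item->print();                                // prints "Derived"
                items.push_back(std::move(item));
            }
        };

        int main() {
            User u;
            u.add_item(std::make_unique<Derived>());
            u.items[0]->print();                              // prints "Derived"
        }   // unique_ptrs delete their objects here; no manual cleanup needed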

    Read the article

  • Azure batch operations delete several blobs and tables

    - by reft
    I have a function that deletes every table entity and blob that belongs to the affected user:

        CloudTable uploadTable = CloudStorageServices.GetCloudUploadsTable();
        TableQuery<UploadEntity> uploadQuery = uploadTable.CreateQuery<UploadEntity>();
        List<UploadEntity> uploadEntity = (from e in uploadTable.ExecuteQuery(uploadQuery)
                                           where e.PartitionKey == "uploads" && e.UserName == User.Identity.Name
                                           select e).ToList();

        foreach (UploadEntity uploadTableItem in uploadEntity)
        {
            // Delete table entity
            TableOperation retrieveOperationUploads = TableOperation.Retrieve<UploadEntity>("uploads", uploadTableItem.RowKey);
            TableResult retrievedResultUploads = uploadTable.Execute(retrieveOperationUploads);
            UploadEntity deleteEntityUploads = (UploadEntity)retrievedResultUploads.Result;
            TableOperation deleteOperationUploads = TableOperation.Delete(deleteEntityUploads);
            uploadTable.Execute(deleteOperationUploads);

            // Delete blob
            CloudBlobContainer blobContainer = CloudStorageServices.GetCloudBlobsContainer();
            CloudBlockBlob blob = blobContainer.GetBlockBlobReference(uploadTableItem.BlobName);
            blob.Delete();
        }

    Each table entity has its own blob, so if the list contains 3 upload entities, 3 table entities and 3 blobs will be deleted. I heard you can use table batch operations to reduce cost and load. I tried it, but failed miserably. Anyone interested in helping me? :) I'm guessing table batch operations are for tables only, so they're a no-go for blobs, right? How would you add a TableBatchOperation to this code? Do you see any other improvements that could be made? Thanks!

    Read the article

  • How many users are sufficient to make a heavy load for a web application?

    - by galymzhan
    I have a web application which has been suffering high load in recent days. The application runs on a single server with an 8-core Intel CPU and 4 GB of RAM. Software: Drupal 5 (Apache 2, PHP 5, MySQL 5) running on Debian. After reaching 500 authenticated and 200 anonymous simultaneous users, the application drastically loses performance, up to total failure. The biggest load comes from authenticated users, who perform activities causing inserts/updates/deletes on the DB. I think MySQL is the bottleneck. Is it normal to slow down at this number of users?

    EDIT: I forgot to mention that I did some profiling. I ran top and htop, and they showed that all memory was being used by MySQL! After some time MySQL starts to perform terribly slowly, the site goes down, and we have to restart or stop Apache to reduce the load. The administrators said there were about 200 active MySQL connections at that moment. The worst part is that we need to solve this ASAP, and I can't do deep profiling analysis or code refactoring, so I'm considering two options:

    1. My tables are MyISAM. I heard MyISAM uses table-level locking, which is very slow; is that right? Could I change to InnoDB without worry?
    2. What if I take MySQL and move it to a dedicated machine with a lot of RAM?

    Read the article

  • Can this code cause a memory leak (Arduino)

    - by tbraun89
    I have an Arduino project and I created this struct:

        struct Project {
            boolean status;
            String name;
            struct Project* nextProject;
        };

    In my application I parse some data and create Project objects. To keep them in a list, each Project object except the last holds a pointer to the next Project. This is the code where I add new projects:

        void RssParser::addProject(boolean tempProjectStatus, String tempData) {
            if (!startProject) {
                startProject = true;
                firstProject.status = tempProjectStatus;
                firstProject.name = tempData;
                firstProject.nextProject = NULL;
                ptrToLastProject = &firstProject;
            } else {
                ptrToLastProject->nextProject = new Project();
                ptrToLastProject->nextProject->status = tempProjectStatus;
                ptrToLastProject->nextProject->name = tempData;
                ptrToLastProject->nextProject->nextProject = NULL;
                ptrToLastProject = ptrToLastProject->nextProject;
            }
        }

    firstProject is a private instance variable, defined in the header file like this:

        Project firstProject;

    So if no project has been added yet, I use firstProject to add a new one; once firstProject is set, I use the nextProject pointer. I also have a reset() method that deletes the pointers to the projects:

        void RssParser::reset() {
            delete ptrToLastProject;
            delete firstProject.nextProject;
            startProject = false;
        }

    After each parsing run I call reset(). The problem is that the memory used is not released. If I comment out the addProject method, there are no memory issues. Can someone tell me what could cause the leak?
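
    A sketch of a leak-free reset, written as a free function for illustration (names follow the question): the reset() above deletes only the tail node and the first heap node, stranding every node in between, so the fix is to walk the chain and delete each node exactly once:

        // Walk the list from the first heap-allocated node and free every node.
        struct Project {
            bool status;
            // String name;  // Arduino String omitted so the sketch compiles anywhere
            Project* nextProject;
        };

        void resetProjects(Project& firstProject, Project*& ptrToLastProject, bool& startProject) {
            Project* p = firstProject.nextProject;   // first node created with new
            while (p != nullptr) {
                Project* next = p->nextProject;      // save the link before deleting
                delete p;
                p = next;
            }
            firstProject.nextProject = nullptr;      // firstProject itself is not heap-allocated
            ptrToLastProject = nullptr;
            startProject = false;
        }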

    Read the article

  • Calling NSFetchedResultsController & CoreData experts

    - by JK
    I am having a few nagging issues with NSFetchedResultsController and Core Data, and I would be very grateful for help on any of them.

    Issue 1 - Updates: I update my store on a background thread, which results in certain rows being deleted, inserted, or updated. The changes are merged into the context on the main thread using the mergeChangesFromContextDidSaveNotification: method. Inserts and deletes are updated properly, but updates are not (e.g. the cell label is not updated with the change), although I have confirmed that the updates come through the contextDidSaveNotification, exactly like the inserts and deletes. My current workaround is to temporarily change the staleness interval of the context to 0, but this does not seem like the ideal solution.

    Issue 2 - Deleting objects: My fetch batch size is 20. If an object deleted by the background thread is within the first 20 rows, everything works fine. But if the object is after the first 20 rows and the table is scrolled down, a "CoreData could not fulfill a fault" error is raised. I have tried re-saving the context and re-performing the FRC fetch, all to no avail. Note: in this scenario, the FRC delegate method didChangeObject: is not called for the delete; I assume this is because the object in question had not been faulted at that time (as it was outside the initial fetch range). But for some reason the context still thinks the object is around, although it has been deleted from the store.

    Issue 3 - Deleting sections: When the deletion of a row leads to the deletion of a section, I have gotten the "invalid number of rows in section" error. I have worked around this by removing the reloadSection line from the NSFetchedResultsChangeMove: case and replacing it with insertRowsAtIndexPaths:. This seems to work, but once again I am not sure if this is the best solution.

    Any help would be greatly appreciated. Thank you!

    Read the article

  • How do I delete folders in bash after a successful copy (Mac OS X)?

    - by cohortq
    Hello! I recently created my first bash script, and I am having problems perfecting its operation. I am trying to copy certain folders from a local drive to a network drive. I am having trouble deleting folders once they are copied over (and also really verifying that they were copied). Is there a better way to delete folders after rsync is done copying? I was trying to exclude the live TV buffer folder, but really, I can blow it away without consequence if need be. Any help would be great, thanks!

        #!/bin/bash
        network="CBS"
        useracct="tvcapture"
        thedate=$(date "+%m%d%Y")
        folderToBeMoved="/users/$useracct/Documents"
        newfoldername="/Volumes/Media/TV/$network/$thedate"

        echo "Network is $network"
        echo "date is $thedate"
        echo "source is $folderToBeMoved"
        echo "dest is $newfoldername"

        mkdir $newfoldername

        rsync -av $folderToBeMoved/"EyeTV Archive"/*.eyetv $newfoldername --exclude="Live TV Buffer.eyetv"

        # this fails when there is more than one *.eyetv folder
        if [ -d $newfoldername/*.eyetv ]; then
            # this deletes the contents of the directories
            find $folderToBeMoved/"EyeTV Archive"/*.eyetv \( ! -path $folderToBeMoved/"EyeTV Archive"/"Live TV Buffer.eyetv" \) -delete
            # remove empty directory
            find $folderToBeMoved/"EyeTV Archive"/*.eyetv -type d -exec rmdir {} \;
        fi

    Read the article

  • Recreating a workflow instance with the same instance id

    - by Miron Brezuleanu
    We have some objects that each have an associated workflow instance. The objects are identified by a GUID, which is also the GUID of the workflow instance associated with the object. We need to restart (see note 3 for the meaning of "restart") the workflow instance if the workflow definition changed (there is no state in the workflow itself, and it is written to support restarting in this manner). The restart is performed by calling Terminate on the WorkflowInstance, then recreating the instance with the same GUID. The weird part is that this works only every other attempt (odd attempts: the workflow is stopped but for some reason doesn't restart; even attempts: the already-terminated workflow is recreated and started successfully). While I admit that using second-hand GUIDs is a sign of extraordinary cheapness (and something we plan to change), I'm wondering why this isn't working. Any ideas?

    NOTES:
    1. The terminated workflow instance is passivated (waiting for a notification) at the time of the termination.
    2. The Terminate call successfully deletes the data persisted in the database for that instance.
    3. We're using "restart" with a meaning that's less common in the context of WF: not resuming a passivated instance, but forcing the workflow to start again from the beginning of its definition.

    Thanks!

    Read the article

  • jQuery: passing an HTML element into a function

    - by christian
    I have an HTML form where I am going to copy values from a series of input fields to some spans/headings as the user populates the input fields. I am able to get this working using the following code:

        $('#source').keyup(function(){
            if ($("#source").val().length == 0) {
                $("#destinationTitle").text('Sample Title');
            } else {
                $("#destinationTitle").text($("#source").val());
            }
        });

    In the above scenario the destination markup is a heading element (id "destinationTitle") containing the default text "Sample Title". Basically, as the user fills out the source box, the text of the destination element is changed to the value of the source input. If nothing is input, or the user deletes the values typed into the box, some default text is placed in the element instead. Pretty straightforward. However, since I need to make this work for many different fields, it makes sense to turn this into a generic function and then bind that function to each input's onkeyup() event. But I am having some trouble with this. My implementation:

        function doStuff(source, target, defaultValue) {
            if ($(source).val().length == 0) {
                $(target).text(defaultValue);
            } else {
                $(target).text($(source).val());
            }
        }

    which is called as follows:

        $('#source').keyup(function() {
            doStuff(this, '"#destinationTitle"', 'SampleTitle');
        });

    What I can't figure out is how to pass the second parameter, the name of the destination HTML element, into the function. I have no problem passing in the element I'm binding to via "this", but I can't figure out the destination element syntax. Any help would be appreciated. Many thanks!

    Read the article

  • Implementing a tagging system with PHP and MySQL: caching help

    - by Hamid Sarfraz
    With reference to this post (http://stackoverflow.com/questions/2122546/how-to-implement-tag-counting), I have fully implemented the suggested three-table tagging system. To count the number of articles per tag, I use another column, named tagArticleCount, in the tag definition table (the other columns are tagId, tagText, tagUrl, tagArticleCount). If I implement real-time editing of this table, then whenever a user adds another tag to an article or deletes an existing one, the tag definition table is updated to adjust the counter of the added/removed tag. This costs an extra query for every modification (at the same time, the related link entry for the tag and article is deleted from the tag link table). An alternative is not to allow any real-time editing of the counter, and instead use cron jobs to update the counter of each tag after a specified time period.

    Here comes the problem I want to discuss. This can be seen as caching the article count in the database. Can you please help me find a way to present the articles in a list, when a tag is explored, while the article counter for that tag is not up to date? For example:
    1. The counter shows 50 articles, but there are in fact 55 entries in the tag link table (which links tags and articles).
    2. The counter shows 50 articles, but there are in fact 45 entries in the tag link table.

    How should these two scenarios be handled? I am going to use APC to keep a cache of these counters; consider that too in your solution. Also, please discuss performance of the real-time versus cronned counter updates.

    Read the article

  • Mailing system DB structure, need help

    - by Anna
    I have a system where a user (sender) can write a note to friends (receivers); the number of receivers varies. The text of the message is saved in the DB and visible to the sender and all receivers when they log in to the system. The sender can add more receivers at any time. Moreover, any of the receivers can edit the message and even remove it from the DB. For this system I created 3 tables; in short:

        users(userID, username, password)
        messages(messageID, text)
        list(id, senderID, receiverID, messageID)

    In table "list" each row corresponds to a sender-receiver pair, like:

        sender_x_ID -- receiver_1_ID -- message_1_ID
        sender_x_ID -- receiver_2_ID -- message_1_ID
        sender_x_ID -- receiver_3_ID -- message_1_ID

    Now the problems:
    1. If a user deletes the message from table "messages", how do I automatically delete all rows from table "list" which correspond to the deleted message? Do I have to include some foreign keys?
    2. More important: say the sender has 3 receivers for his message1 (username1, username2, username3) and at a certain moment decides to add username4 and username5, and at the same time exclude username1 from the list of receivers. The PHP code will get the new list of receivers (username2, username3, username4, username5). That means inserting into table "list":

        sender_x_ID -- receiver_4_ID -- message_1_ID
        sender_x_ID -- receiver_5_ID -- message_1_ID

    and also deleting from table "list" the row corresponding to user1 (who is no longer in the list of receivers):

        sender_x_ID -- receiver_1_ID -- message_1_ID

    Which SQL queries should I send from PHP to do this in an easy and intelligent way? Please help! Examples of SQL queries would be perfect!

    Read the article

  • Uniquely identify files/folders in NTFS, even after move/rename

    - by Felix Dombek
    I haven't found a backup (synchronization) program which does what I want, so I'm thinking about writing my own. What I have now does the following: it goes through the data in the source and, for every file which has its archive bit set OR does not exist in the destination, copies it to the destination, overwriting a possibly existing file. When done, it checks, for every file in the destination, whether it exists in the source, and if it doesn't, deletes it. The problem is that if I move or rename a large folder, it first gets copied to the destination even though it is, in principle, already there, just under a different path; the folder which was already there is deleted afterwards. Apart from the unnecessary copying, I frequently run into space problems because my backup drive isn't large enough to hold the original data twice. Is there a way to programmatically identify such moved/renamed files or folders, e.g. by NTFS ID, physical location on the media, or something else? Are there solutions to this problem? I do not care about the programming language, but hints for doing this with Python, C++, C#, Java or Prolog are appreciated.
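
    A sketch of one answer to the title question (Windows-specific; the helper name is made up): on NTFS, every file and directory has a volume-unique file index that stays the same across renames and moves within the same volume, and it can be read with GetFileInformationByHandle:

        #include <windows.h>
        #include <cwchar>

        bool printFileId(const wchar_t* path) {
            HANDLE h = CreateFileW(path, 0,
                                   FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                                   nullptr, OPEN_EXISTING,
                                   FILE_FLAG_BACKUP_SEMANTICS,   // required to open directories
                                   nullptr);
            if (h == INVALID_HANDLE_VALUE) return false;

            BY_HANDLE_FILE_INFORMATION info;
            bool ok = GetFileInformationByHandle(h, &info) != 0;
            if (ok) {
                // Volume serial + 64-bit file index = a stable identity on this volume,
                // so a moved/renamed file keeps the same id and need not be re-copied.
                unsigned long long id =
                    (static_cast<unsigned long long>(info.nFileIndexHigh) << 32) |
                    info.nFileIndexLow;
                std::wprintf(L"%s: volume %08lx, file id %llu\n",
                             path, info.dwVolumeSerialNumber, id);
            }
            CloseHandle(h);
            return ok;
        }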

    Read the article

  • Best solution for __autoload

    - by tpk
    As our PHP 5 OO application grew (in both size and traffic), we decided to revisit our __autoload() strategy. We always name the file after the class definition it contains, so class Customer would be contained in Customer.php. We used to list the directories in which a file could potentially exist, until the right .php file was found. This is quite inefficient, because you potentially go through a number of directories you don't need to, and do so on every request (making loads of stat() calls). Solutions that come to mind:

    - Use a naming convention that dictates the directory name (similar to PEAR). Disadvantage: doesn't scale well, resulting in horrible class names.
    - Come up with some kind of pre-built array of the locations (Propel does this for its __autoload). Disadvantage: requires a rebuild before any deploy of new code.
    - Build the array "on the fly" and cache it. This seems to be the best solution, as it allows any class names and directory structure you want, and is fully flexible in that new files just get added to the list. The concerns are: where to store it, and what about deleted/moved files? For storage we chose APC, as it doesn't have the disk I/O overhead. With regard to file deletes, it doesn't matter, as you probably don't want to require them anywhere anyway. As to moves... that's unresolved (we ignore it, as historically it didn't happen very often for us).

    Any other solutions?

    Read the article

  • Rails 3: Delete, Destroy, and Routing

    - by Maximus S
    The problem is the code below:

        <%= button_to t('.delete'), @post, :method => :delete, :class => :destroy %>

    My Post model has many associations declared :dependent => :destroy. However, the code above only removes the post, leaving its associations intact. The problem is that delete and destroy differ: delete doesn't instantiate the object, so I need to "destroy" my post instead of "delete" it.

        <%= button_to t('.delete'), @post, :method => :destroy %>

    gives me a routing error: No route matches [POST] "/posts/2".

        <%= button_to t('.delete'), @post, Post.destroy(@post) %>

    deletes the post without the button even being clicked. Could anyone help me with this?

    UPDATE:

    application.js:

        //= require jquery
        //= require jquery-ui
        //= require jquery_ujs
        //= require bootstrap-modal
        //= require bootstrap-typeahead
        //= require_tree .

    rake routes:

        DELETE (/:locale)/posts/:id(.:format) posts#destroy

    Post model:

        has_many :tag_links, :dependent => :destroy
        has_many :tags, :through => :tag_links

    Tag model:

        has_many :tag_links, :dependent => :destroy
        has_many :posts, :through => :tag_links

    Problem: when I delete a post, all the tag_links are destroyed, but the tags still exist.

    Read the article

  • Selecting row in SSMS causes Entity Framework 4 to Fail

    - by Eric J.
    I have a simple Entity Framework 4 unit test that creates a new record, saves it, attempts to find it, then deletes it. All works great, unless... I open up SQL Server Management Studio while stopped at a breakpoint in the unit test and execute a SELECT statement that returns the row I just created (not SELECT FOR UPDATE, not WITH (UPDLOCK), no transaction, just a plain SELECT). If I do that before attempting to find the row I just created, I don't find the row. If I instead do that after finding the row but before deleting it, I do find the row but get an OptimisticConcurrencyException. This is consistently repeatable. The unit test:

        [TestMethod()]
        public void CreateFindDeleteActiveParticipantsTest()
        {
            // Setup this test
            Participant utPart = CreateUTParticipant();
            ctx.Participants.AddObject(utPart);
            ctx.SaveChanges();

            // External SELECT Point #1: part is null

            // Find participant
            Participant part = ParticipantRepository.Find(UT_SURVEY_ID, UT_TOKEN);
            Assert.IsNotNull(part, "Expected to find a participant");

            // External SELECT Point #2: SaveChanges throws OptimisticConcurrencyException

            // Cleanup this test
            ctx.Participants.DeleteObject(utPart);
            ctx.SaveChanges();
        }

    Read the article

  • Updating iOS application content that includes images

    - by azamsharp
    I am working on a vegetable gardening application. Apart from the vegetable name and description, I also have a vegetable image. Currently I have all the images in the Supporting Files folder of the Xcode project. But later on I want to update the application dynamically, without having the user download a new version; when the user updates the application or downloads new data from the server, that data will include the images. Can I store those images in the Supporting Files folder, or somewhere else where they can be referenced by just a name?

    RELATED QUESTION: I will also allow the user to take pictures of their vegetables and then write notes about them, like "just planted" or "about to harvest". What is the recommended approach for storing these pictures/photos? I can always store them in the user's photo library, store a reference in the local database, and then fetch and display the picture using the reference. The problem with that approach might be that if the user accidentally deletes the picture from the library, it will no longer be displayed in my application. The only other way I see is to store the picture in the app's local database as a BLOB.

    Read the article

  • Runtime Error 1004 using Select with several workbooks

    - by Johaen
    I have an Excel workbook which pulls data from two other workbooks. Since the data changes hourly, there is the possibility that this macro is used more than once a day for the same data. So I want to select all previous data for this date period and delete it; later on, the data will be copied in anyway. But as soon as I use WBSH.Range(Cells(j, "A"), Cells(lastRow - 1, "M")).Select, the code stops with error 1004, "Application-defined or object-defined error". Below is a snippet with the relevant part of the code. What is wrong here?

        'Set source workbook
        Dim currentWb As Workbook
        Set currentWb = ThisWorkbook
        Set WBSH = currentWb.Sheets("Tracking")

        'Query which data from the tracking files should get pulled out to the file
        CheckDate = Application.InputBox(("From which date do you want to get data?" & vbCrLf & "Format: yyyy/mm/dd "), "Tracking data", Format(Date - 1, "yyyy/mm/dd"))

        'States the last entry which is done; know where to start; currentWb file
        With currentWb.Sheets("Tracking")
            lastRow = .Range("D" & .Rows.Count).End(xlUp).Row
            lastRow = lastRow + 1
        End With

        'Just the last 250 entries get checked, since not many entries are made in one week
        j = lastRow - 250

        'Check if there is already data for the look-up date in the analysis sheet, and if so delete those records
        Do
            j = j + 1
            'Exit the loop if there is no data to compare, to prevent overflow
            If WBSH.Cells(j + 1, "C").Value = "" Then
                Exit Do
            End If
        Loop While WBSH.Cells(j, "C").Value < CheckDate

        If j <> lastRow - 1 Then
            'WBSH.Range(Cells(j, "A"), Cells(lastRow - 1, "M")).Select
            'Selection.ClearContents
        End If

    Thank you!

    Read the article
