Search Results

Search found 2041 results for 'deleting'.

  • Visual Studio 2005 is deleting the .svn folder in the bin\Debug directory - how to prevent this?

    - by M K Saravanan
    For some reason I need to check in a couple of files in the bin\Debug directory. For the past few weeks I have been noticing a strange behaviour from VS2005: every time I recompile the code, it deletes the .svn folder in the bin\Debug directory, and hence svn shows an "obstructed" error. Even svn cleanup doesn't help, because of the missing .svn folder. Is there any setting in VS2005 to prevent this? And in the first place, why is it deleting the .svn folder? This thread http://svn.haxx.se/tsvnusers/archive-2008-10/0019.shtml discusses it, but offers no useful solution to prevent this from happening. Any other suggestions?
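
    Purely as an illustration of one possible workaround (not a solution from the thread): a pre-build event could stash the .svn folder outside bin\Debug and a post-build event could put it back, so a rebuild cannot take the working-copy metadata with it. A sketch in Python, with hypothetical paths:

        import shutil
        import sys
        from pathlib import Path

        SVN_DIR = Path("bin/Debug/.svn")  # wiped by VS2005 on rebuild
        STASH = Path("obj/.svn-stash")    # hypothetical safe location

        def main(action):
            if action == "stash" and SVN_DIR.exists():
                if STASH.exists():
                    shutil.rmtree(STASH)
                shutil.copytree(SVN_DIR, STASH)
            elif action == "restore" and STASH.exists() and not SVN_DIR.exists():
                shutil.copytree(STASH, SVN_DIR)

        if __name__ == "__main__":
            main(sys.argv[1])  # "stash" in the pre-build event, "restore" in the post-build event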

  • How can I use the Linux flock command to prevent another root process from deleting a file?

    - by Danmaxis
    Hello there, I would like to prevent one of my root processes from deleting a certain file, so I came across the flock command. It seems to fit my need, but I didn't get its syntax. If I only indicate a shared lock, it doesn't work:

        flock -s "./file.xml"

    If I add a timeout parameter, it still doesn't work:

        flock -s -w5 "./file.xml"

    It seems that, used this way, it matches the flock [-sxun] [-w #] fd# form. (What is this fd# parameter?) So I tried the flock [-sxon] [-w #] file [-c] command form, using

        flock -s -w5 "./file.xml" -c "tail -3 ./file.xml"

    and it worked: the tail command on ./file.xml was executed. But I would like to know, does the lock end after the command, or does it last 5 seconds after the end of the command execution? My main question is: how can I prevent another root process from deleting a file in Linux?
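
    Not from the thread, but for illustration: the same advisory locking can be exercised from Python via fcntl.flock, the call that flock(1) wraps. A minimal sketch (file name taken from the question):

        import fcntl

        # Take a shared lock on the file, like `flock -s`.
        # flock() locks are advisory: they only block other processes that
        # also request a flock() on the same file, and they do NOT prevent
        # an unlink() by a process that never asks for the lock.
        with open("./file.xml") as f:
            fcntl.flock(f, fcntl.LOCK_SH)
            data = f.read()  # work while holding the lock
        # the lock is released when the file object is closed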

  • Why does deleting from the command line take significantly less time than from a GUI?

    - by Jordan Plahn
    So this is probably the dumbest question you'll read today, but it's something I just wondered about as I was deleting a dozen or so images from my computer. With a quick rm -rf command on the directory's contents, all the images were gone in a snap. When I drag the same dozen or so images to a trash can/recycle bin, it sometimes takes 10 seconds or more. Now I'm sure some of it comes from the overhead of the GUI and such, and some of it may be the fact that the file still "exists" in some form if it's put into the recycle bin, but is there anything else that accounts for such a huge time disparity? Are "rm" and "delete" just such fundamentally different commands that I'm trying to compare apples and oranges? Enlighten me, please!
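
    For illustration only (hypothetical paths, assuming the freedesktop.org trash layout): rm amounts to one unlink() per file, while a trash can is typically a move into a trash directory plus per-file bookkeeping, roughly like this sketch:

        import os
        import shutil

        TRASH = os.path.expanduser("~/.local/share/Trash/files")

        def rm_style(path):
            os.remove(path)  # a single unlink() syscall per file

        def trash_style(path):
            os.makedirs(TRASH, exist_ok=True)
            # a rename when source and trash are on the same filesystem,
            # otherwise a full copy followed by a delete
            shutil.move(path, TRASH)
            # real trash implementations also write a .trashinfo metadata
            # file recording the original path and the deletion time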

  • After deleting a local machine's offline file cache, the same user's "My Documents" no longer redirects to the network location.

    - by stead1984
    One of my apprentices was tasked with clearing out unused local profiles and clearing the offline file cache. After he cleared the offline file cache and rebooted the machine, he would log in as himself and no longer have his "My Documents" redirected to the set network location. Moreover, this then seemed to affect ANY other networked machine he logged into, except his own laptop. All our standard workstations run Windows XP Service Pack 3; the apprentice's laptop runs Windows 7 Professional. I can understand how clearing the offline file cache after deleting old local profiles could cause this issue, but I draw a complete blank as to why it would affect all networked machines. It's a strange one, so this question may be a little hard to understand; if any clarification is needed, please ask.

  • Is disabling password login for SSH the same as deleting the password for all users?

    - by Arsham Skrenes
    I have a cloud server with only a root user. I SSH into it using RSA keys only. To make it more secure, I wanted to disable the password feature. I know that this can be done by editing the /etc/ssh/sshd_config file and changing PermitRootLogin yes to PermitRootLogin without-password. I was wondering if simply deleting the root password via passwd -d root would be equivalent (assuming I do not create more users, or that new users have their passwords deleted too). Are there any security issues with one approach versus the other?
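
    For reference (directive names from stock OpenSSH, not from the question): PermitRootLogin without-password only restricts how root may log in, while disabling password authentication for every user is a separate directive. A minimal sshd_config sketch:

        # root may log in with keys only
        PermitRootLogin without-password
        # no account may authenticate by password
        PasswordAuthentication no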

  • (N)Hibernate: deleting orphaned ternary association rows when either associated row is deleted.

    - by anthony
    I have a ternary association table created using the following mapping:

        <map name="Associations" table="FooToBar">
            <key column="Foo_id"/>
            <index-many-to-many class="Bar" column="Bar_id"/>
            <element column="AssociationValue"/>
        </map>

    I have 3 tables: Foo, Bar, and FooToBar. When I delete a row from the Foo table, the associated row (or rows) in FooToBar is automatically deleted. This is good. When I delete a row from the Bar table, the associated row (or rows) in FooToBar remain, with a stale reference to a Bar id that no longer exists. This is bad. How can I modify my hbm.xml to remove stale FooToBar rows when deleting from the Bar table?

  • What's the best data structure for storing 2-tuples (a, b) which supports adding, deleting tuples and c…

    - by bhups
    Hi. So here is my problem: I want to store 2-tuples (key, val) and perform the following operations:

    - keys are strings and values are integers
    - multiple keys can have the same value
    - adding new tuples
    - updating any key with a new value (any new or updated value is greater than the previous one, like timestamps)
    - fetching all the keys with values less than or greater than a given value
    - deleting tuples

    A hash seems to be the obvious choice for updating a key's value, but then lookups via values are going to take longer (O(n)). The other option is a balanced binary search tree with key and value switched; now lookups via values will be fast (O(log n)), but updating a key will take O(n). So is there any data structure which can be used to address these issues? Thanks.
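
    Not an answer from the thread, but as an illustration of the usual two-index approach (a Python sketch; class and method names are made up): keep a hash map from key to value for O(1) updates, plus an ordered index of (value, key) pairs for the range queries. Here the ordered index is a plain sorted list maintained with bisect, so inserts and deletes still pay an O(n) shift; swapping it for a balanced BST or skip list would bring those to O(log n):

        import bisect

        class TupleStore:
            """Sketch: dict for key lookups + sorted (value, key) index."""

            def __init__(self):
                self.by_key = {}    # key -> value, O(1) lookup/update
                self.by_value = []  # sorted list of (value, key) pairs

            def put(self, key, value):
                if key in self.by_key:
                    old = self.by_key[key]
                    self.by_value.remove((old, key))  # O(n) in this sketch
                self.by_key[key] = value
                bisect.insort(self.by_value, (value, key))

            def delete(self, key):
                value = self.by_key.pop(key)
                self.by_value.remove((value, key))

            def keys_with_value_below(self, v):
                # all keys whose value is strictly less than v, O(log n + k)
                i = bisect.bisect_left(self.by_value, (v, ""))
                return [k for _, k in self.by_value[:i]]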

  • Deleting a node in a circular linked list in C++?

    - by angad Soni
    I was wondering if anyone could help me understand whether this code for deleting a node from a circular linked list would work, or if there is something I'm missing. Using C++ to code:

        void circularList::deleteNode(int x)
        {
            node *current;
            node *temp;
            current = this->start;
            while (current->next != this->start)
            {
                if (current->next->value == x)
                {
                    temp = current->next;
                    current->next = current->next->next;
                    delete current->next;
                }
            }
        }

  • GTK: Does deleting the builder pointer delete all the widgets created using it?

    - by PP
    I am creating a builder pointer as follows:

        GtkBuilder *builder_ptr;
        builder_ptr = gtk_builder_new();
        if (!gtk_builder_add_from_file(builder_ptr, "Test.glade", &error))
            printf("\n Error Builder, Exit!\n");

    and I am deleting this builder pointer as follows:

        g_object_unref(G_OBJECT(builder_ptr));

    This builder contains 2-3 GtkWindows and other widgets. So my question is: do I need to destroy all the windows in this builder manually when I delete the builder, or will all the windows get destroyed when I delete the builder pointer? Thanks, PP.

  • Way to check for foreign key references before deleting in MySQL?

    - by Chad Johnson
    I'm working with a content management system, and users are prompted with a confirmation screen before deleting records. Some records are foreign-key referenced in other tables, and therefore they cannot be deleted. I would like to display a message beside a given record if it has foreign key references. To know whether I should display the message for a record, I could just query the referencing tables and see if there are references. But the problem is, there are about a dozen tables with records potentially referencing this record, and such a lookup could take a "long" time. Is there an easy way to tell whether the record is delete-ready (i.e. has no foreign key references)?
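
    Not from the thread, but one common approach, sketched below in Python with hypothetical names (schema mydb, table records, primary key id; conn is any MySQL DB-API connection): enumerate the referencing columns once from information_schema, then run one cheap indexed probe per referencing table. InnoDB requires an index on every foreign key column, so each probe should be fast:

        REFERENCING_SQL = """
            SELECT table_name, column_name
            FROM information_schema.KEY_COLUMN_USAGE
            WHERE referenced_table_schema = %s
              AND referenced_table_name   = %s
              AND referenced_column_name  = %s
        """

        def is_delete_ready(conn, record_id):
            """True if no foreign key anywhere references this record."""
            cur = conn.cursor()
            cur.execute(REFERENCING_SQL, ("mydb", "records", "id"))
            for table, column in cur.fetchall():
                # table/column come from information_schema, so interpolating
                # them into the probe is safe in this sketch
                probe = "SELECT 1 FROM `%s` WHERE `%s` = %%s LIMIT 1" % (table, column)
                cur.execute(probe, (record_id,))
                if cur.fetchone():
                    return False
            return True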

  • Unlink deleting more than just the files passed to it.

    - by RMcLeod
    I'm creating an SQL file, placing it into a zip file with some images, and then deleting the SQL file with unlink. The strange thing is, it deletes the zip file as well.

        if (file_put_contents($sqlFileName, $sql) !== false) {
            $zip = new ZipArchive;
            if ($zip->open($workingDir . $now . '.zip', ZipArchive::CREATE) === true) {
                $zip->addFile($sqlFileName, basename($sqlFileName));
                if (!empty($images)) {
                    foreach ($images as $image) {
                        $zip->addFile($imagesDir . $image, $image);
                    }
                }
            }
            unlink($sqlFileName);
        }

  • What's the most efficient way to reclaim disk space after deleting lots of data from a database on Sybase ASE 15?

    - by Ernie Longmire
    As I understand it, based on some research but zero real-world experience with Sybase ASE, the only way to reclaim disk space once it's been allocated to a database is to export that database, create a new DB with the same schema, and reload all the exported data to the new database. Is this correct, or is there some other method? Then: assuming the above is correct and a full export-recreate-reload is required, what's the most efficient way to do that? Are there tools that will automate all or part of that process? I'm being told we would have to write separate bcp export and import commands for each and every object in the database, which if true sounds easily scriptable by someone who knows Sybase ASE well enough. (I don't.) This seems to me like a really basic housekeeping task, and it feels like I'm missing something obvious.
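
    For what it's worth, generating per-table bcp pairs is indeed easily scriptable; a rough sketch in Python with hypothetical server, database, and table names (in practice the table list would come from sysobjects, type 'U'):

        # Emit one bcp export and one import command per table, in
        # character mode (-c). All names here are placeholders.
        SERVER, DB = "SYBASE1", "mydb"
        tables = ["customers", "orders", "line_items"]

        for t in tables:
            print(f"bcp {DB}..{t} out {t}.bcp -S {SERVER} -U sa -c")
        for t in tables:
            print(f"bcp {DB}..{t} in {t}.bcp -S {SERVER} -U sa -c")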

  • Does SQL Server Maintenance Cleanup consider exact times when deleting old backups?

    - by Heinzi
    Let's say I have a daily maintenance task that backs up all databases and then removes backups that are older than 3 days. Now let's say the first backup on day 1, starting at 10:00, results in the following files:

        db1.bak    2012-01-01 10:04
        db2.bak    2012-01-01 10:06

    Now let's say on day 4 the first step of the maintenance task (backing up the DBs) happens to finish at 10:05. Will SQL Server delete db1.bak and keep db2.bak (which would be logical, but might be surprising for the user), or keep both, or remove both?
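
    For intuition only (this assumes an exact cutoff, which is precisely what the question asks SQL Server to confirm or deny): the age test reduces to comparing each file's timestamp against now minus the retention period, as in this sketch:

        from datetime import datetime, timedelta

        # Hypothetical illustration of an exact-cutoff policy: a file is
        # removed iff its timestamp is older than (now - retention). With
        # the question's numbers, a run whose cutoff lands at 10:05 would
        # delete the 10:04 backup and keep the 10:06 one.
        retention = timedelta(days=3)
        now = datetime(2012, 1, 4, 10, 5)
        backups = {
            "db1.bak": datetime(2012, 1, 1, 10, 4),
            "db2.bak": datetime(2012, 1, 1, 10, 6),
        }
        to_delete = [name for name, ts in backups.items() if ts < now - retention]
        print(to_delete)  # ['db1.bak'] under this exact-cutoff assumption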

  • Deleting multiple objects in an AWS S3 bucket with s3curl.pl?

    - by user183394
    I have been trying to use the AWS "official" command line tool s3curl.pl to test out the recently announced multi-object delete. Here is what I have done. First, I tested s3curl.pl with a set of credentials without a hitch:

        $ s3curl.pl --id=s3 -- http://testbucket-0.s3.amazonaws.com/ | xmllint --format -
          % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                         Dload  Upload   Total   Spent    Left  Speed
        100   884    0   884    0     0   4399      0 --:--:-- --:--:-- --:--:--  5703
        <?xml version="1.0" encoding="UTF-8"?>
        <ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
          <Name>testbucket-0</Name>
          <Prefix/>
          <Marker/>
          <MaxKeys>1000</MaxKeys>
          <IsTruncated>false</IsTruncated>
          <Contents>
            <Key>file_1</Key>
            <LastModified>2012-03-22T17:08:17.000Z</LastModified>
            <ETag>"ee0e521a76524034aaa5b331842a8b4e"</ETag>
            <Size>400000</Size>
            <Owner>
              <ID>e6d81ea69572270e58d3814ab674df8c8f1fd5d502669633a4951bdd5185f7f4</ID>
              <DisplayName>zackp</DisplayName>
            </Owner>
            <StorageClass>STANDARD</StorageClass>
          </Contents>
          <Contents>
            <Key>file_2</Key>
            <LastModified>2012-03-22T17:08:19.000Z</LastModified>
            <ETag>"6b32cbf8219a59690a9f69ba6ff3f590"</ETag>
            <Size>600000</Size>
            <Owner>
              <ID>e6d81ea69572270e58d3814ab674df8c8f1fd5d502669633a4951bdd5185f7f4</ID>
              <DisplayName>zackp</DisplayName>
            </Owner>
            <StorageClass>STANDARD</StorageClass>
          </Contents>
        </ListBucketResult>

    Then I followed s3curl.pl's usage instructions:

        $ s3curl.pl --help
        Usage /usr/local/bin/s3curl.pl --id friendly-name (or AWSAccessKeyId) [options] -- [curl-options] [URL]
         options:
          --key SecretAccessKey        id/key are AWSAccessKeyId and Secret (unsafe)
          --contentType text/plain     set content-type header
          --acl public-read            use a 'canned' ACL (x-amz-acl header)
          --contentMd5 content_md5     add x-amz-content-md5 header
          --put <filename>             PUT request (from the provided local file)
          --post [<filename>]          POST request (optional local file)
          --copySrc bucket/key         Copy from this source key
          --createBucket [<region>]    create-bucket with optional location constraint
          --head                       HEAD request
          --debug                      enable debug logging
         common curl options:
          -H 'x-amz-acl: public-read'  another way of using canned ACLs
          -v                           verbose logging

    Then I tried the following and always got back an error. I would appreciate it very much if someone could point out where I made a mistake:

        $ s3curl.pl --id=s3 --post multi_delete.xml -- http://testbucket-0.s3.amazonaws.com/?delete
        <?xml version="1.0" encoding="UTF-8"?>
        <Error>
          <Code>SignatureDoesNotMatch</Code>
          <Message>The request signature we calculated does not match the signature you
            provided. Check your key and signing method.</Message>
          <StringToSignBytes>50 4f 53 54 0a 0a 0a 54 68 75 2c 20 30 35 20 41 70 72 20 32 30
            31 32 20 30 30 3a 35 30 3a 30 38 20 2b 30 30 30 30 0a 2f 7a 65 74 74 61 72 2d 74
            2f 3f 64 65 6c 65 74 65</StringToSignBytes>
          <RequestId>707FBE0EB4A571A8</RequestId>
          <HostId>mP3ZwlPTcRqARQZd6gU4UvBrxGBNIVa0VVe5p0rqGmq5hM65RprwcG/qcXe+pmDT</HostId>
          <SignatureProvided>edkNGuugiSFe0ku4eGzkh8kYgHw=</SignatureProvided>
          <StringToSign>POST


        Thu, 05 Apr 2012 00:50:08 +0000

    The file multi_delete.xml contains the following:

        $ cat multi_delete.xml
        <?xml version="1.0" encoding="UTF-8"?>
        <Delete>
          <Quiet>true</Quiet>
          <Object>
            <Key>file_1</Key>
            <VersionId> </VersionId>>
          </Object>
          <Object>
            <Key>file_2</Key>
            <VersionId> </VersionId>
          </Object>
        </Delete>

    Thanks for any help! --Zack
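
    As an aside (not part of the original question): the same multi-object delete can be issued from Python with boto3, which computes both the request signature and the Content-MD5 header that the Multi-Object Delete API requires:

        import boto3

        # Bucket and key names taken from the question's listing above.
        s3 = boto3.client("s3")
        response = s3.delete_objects(
            Bucket="testbucket-0",
            Delete={
                "Objects": [{"Key": "file_1"}, {"Key": "file_2"}],
                "Quiet": True,
            },
        )
        print(response.get("Errors", []))  # empty list on full success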

  • Deleting a tag from lots of images at once in Aperture?

    - by Bart B
    Aperture makes it easy to tag lots of pictures at once by just selecting all the images and then dragging and dropping tags from the tags palette onto the selected images. But when you need to do the reverse, I can't find a way other than editing each image individually. Is there a way I could select multiple images at once and strip a tag out of all of them? Thanks, Bart.

  • How to Remove a VM From Hyper-V Without Deleting the Configuration File?

    - by Steven Murawski
    I'm in the process of moving a number of virtual machines that are homed on shared storage (a file share, though a shared cluster disk would work as well) to a new VM host with access to the same shared storage. The new host is a different build version (moving from Windows Server 2012 Beta to Windows Server 2012 RC, though this same process could be used for migrations from Windows Server 2008/2008 R2 to Windows Server 2012 as well), so I cannot migrate the machine with the inbox tooling. I need to remove the VM from management on the source Hyper-V host in order to import the VM on the new Hyper-V host. I want to retain the configuration file so I can import the VM as it stands and not need to reconfigure it. The VHD files are rather large and they are staying on the same file share, so I'd rather not duplicate them during the move process.

  • Why, after deleting a 110+ GB collection, does my /var/lib/mongodb directory still have the same size?

    - by tunnuz
    I am having some trouble with MongoDB and space usage. In particular, I once had a large collection of about 600 million records totaling 110+ GB on disk. Recently I decided to drop it because the data was outdated; to do so, I dropped the collection through rockmongo's web interface. Accordingly, rockmongo doesn't show me the collection anymore, but my disk usage hasn't changed at all. Is there any cleanup operation which I am not aware of that must be run in order to synchronize the database with the database files on disk? I have tried to perform a "repair", but the system complains that there's not enough space on disk... and that's because it is all used by MongoDB.

  • Why does Tomcat like deleting my context.xml file?

    - by staticsan
    I'm developing a web-based Java application at work and (obviously) have to run it locally during development. I've figured out the Tomcat docs and have a suitable context.xml file in /etc/tomcat6/Catalina/localhost/, but every so often Tomcat decides to delete it! This means I have to put it back and restart Tomcat. Why does it do this? I have searched the Tomcat docs about it and am none the wiser. (Oh yes: it's not actually called context.xml but owners.xml, as that's the HTTP path prefix for this application.) Update: I've now seen Tomcat delete the file while Tomcat was running. I think I need to file a bug...

  • How to re-join an AD2003 domain with Samba after deleting the machine account?

    - by Guss
    During some troubleshooting I deleted the machine account for a Linux server running Samba from our AD 2003 domain. We are using Kerberos for authentication, and after I deleted the machine account I tried to join the domain again using

        net ads join -U Administrator

    but I keep getting Kerberos errors like these:

        [2009/08/18 16:14:36, 0] libads/kerberos.c:ads_kinit_password(228)
          kerberos_kinit_password [email protected] failed: Client not found in Kerberos database
        Failed to join domain: Improperly formed account name

    It appears as if Samba remembers that it once had an account with the AD and keeps trying to reconnect to it, but I want to create a new account from scratch. I tried to delete all the .tdb files I could find, as well as everything under /var/cache/samba, but to no avail; it still behaves the same. I also tried to create the machine account on the AD side, but then I get a similar error when I try to join, about failure to authenticate with the machine account. It looks like Samba tries the previous machine account password, and I don't know how to reset it, or even, if I could figure out what Samba uses, how to set it in the AD. Any help would be greatly appreciated, as at this point the only thing I can think of is to reformat and reinstall the machine, and I would really REALLY love to not do that. Thanks in advance.

  • Deleting certain files sits at "Preparing to recycle" on Windows 7?

    - by Rachel
    We recently set up one of our users with a brand new Windows 7 computer, however she is unable to delete certain files. With some testing, I found I cannot move, rename, or view the properties of these files either. When trying to delete a file, it just sits at the "Preparing to recycle" popup, and the "from" section says "Discovering items...". Clicking "More Details" on the popup shows me that it can't find the file name or where it's recycling from. Other notes:

    - All the affected files are .pdf files that get created via a scanner; other pdf files are fine.
    - Opening the files works fine. I can open a file, Save As a new file, and delete the new one just fine.
    - Trying to delete the file via the command prompt just sits there.
    - Rebooting the computer will let me manipulate the files like normal, however this user is responsible for scanning hundreds of documents a day and I'd rather not have to tell her to reboot her computer to delete files.
    - The user is part of the administrator group on the computer.
    - The owner of the affected files is the user; the attrib of the files is just A.

  • Exchange 2003 -- Mailbox Management not deleting ALL messages aged 30 days or older...

    - by tcv
    I've recently created a Mailbox Management task within Exchange 2003 that, every night, looks at the contents of the Deleted Items folder within a particular mailbox and deletes mail that's 30 days or older. The scheduled task ran on its own last night, and I have confirmed that messages within the right mailbox and the right folder were, in fact, processed. Many mails were deleted... but not every email older than 30 days. In fact, the choice seems kind of random. Last night, 3/10/2010 was the 30-day watermark. Mails from 3/10/2010 were deleted, sure enough, but not all of them. Mails older than 3/10/2010 were deleted as well, but, again, not all of them. The only criterion I have on the management task (aside from the single-mailbox and single-folder scopes) is the age criterion. The size criterion is set to Any, meaning I don't care about the size; I care about the age. It's made me wonder whether there is some sort of limit on how many mails can be processed. The schedule is set for 12am and 1am every night. Any hints appreciated.
