Search Results

Search found 28590 results on 1144 pages for 'best'.

  • How to simply remove everything from a directory on Linux

    - by Tometzky
    How do you simply remove everything from the current or a specified directory on Linux? Several approaches:

        rm -fr *
        rm -fr dirname/*

    Does not work: it leaves hidden files (the ones that start with a dot) and, in the current directory, files whose names start with a dash, and it fails when there are too many files.

        rm -fr -- *
        rm -fr -- dirname/*

    Does not work: it still leaves hidden files and fails when there are too many files.

        rm -fr -- * .*
        rm -fr -- dirname/* dirname/.*

    Don't try this: it will also remove the parent directory, because ".." also starts with a ".".

        rm -fr * .??*
        rm -fr dirname/* dirname/.??*

    Does not work: it leaves files like ".a", ".b", etc., and fails when there are too many files.

        find -mindepth 1 -maxdepth 1 -print0 | xargs -0 rm -fr
        find dirname -mindepth 1 -maxdepth 1 -print0 | xargs -0 rm -fr

    As far as I know correct, but not simple.

        find -delete
        find dirname -delete

    AFAIK correct for the current directory, but used with a specified directory it deletes that directory as well.

        find -mindepth 1 -delete
        find dirname -mindepth 1 -delete

    AFAIK correct, but is it the simplest way?
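
    To check these behaviours without risking real data, here is a minimal sketch (all names are illustrative) that builds a throwaway directory containing the awkward cases and runs the find-based variant:

        dir=$(mktemp -d)
        touch "$dir/normal.txt" "$dir/.hidden" "$dir/.a" "$dir/-dash"
        mkdir "$dir/sub"
        find "$dir" -mindepth 1 -delete    # contents gone, $dir itself kept
        ls -A "$dir"                       # should print nothing
        rmdir "$dir"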

  • How important is sender validation, and what matters?

    - by Charles Stewart
    When I started learning how to configure email, SPF existed, but there were doubts both about whether it was a good thing and about the value of publishing SPF records in DNS. Now it seems widely accepted that some form of well-known sender validation is good practice. Is this really true? Am I being a bad postmaster by not supporting SPF/DKIM/whatever?
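
    For reference, an SPF policy is just a TXT record in DNS; a minimal sketch, with a placeholder domain and mail host (not from my actual setup):

        ; hypothetical zone-file entry: allow the MXes and one named host, fail all else
        example.com.   IN  TXT  "v=spf1 mx a:mail.example.com -all"

        # check what a domain currently publishes
        dig +short TXT example.com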

  • Best way to migrate IIS6 from one server to another

    - by darko-romanov
    Hi, I need to move all my sites from a server with IIS 6 to another one that has the same OS (Windows Server 2003) and the same IIS version. I'm trying to understand the best way to do it. Searching on Google I've found at least two methods: one uses the IIS Migration Tool, and the other the Web Deployment Tool. I don't know which method is best; it also seems that both can export only one site at a time, and I have about 100 sites hosted. What would you do?
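
    For what it's worth, I gather the Web Deployment Tool can synchronise the whole IIS 6 metabase rather than one site at a time; a minimal sketch, assuming msdeploy.exe is available on both servers (paths are illustrative):

        :: on the source server: package the entire IIS 6 configuration
        msdeploy -verb:sync -source:metaKey=lm/w3svc -dest:package=c:\AllSites.zip

        :: on the destination server: apply the package
        msdeploy -verb:sync -source:package=c:\AllSites.zip -dest:metaKey=lm/w3svc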

  • Advantages / disadvantages of having DynDNS access on a computer vs the router

    - by Margaret
    I have a shiny new toy, a Cisco Wireless-N Gigabit Security Router with VPN (WRVS4400N). While looking through the instruction manual, I discovered that it has built-in support for DynDNS. We currently run the DynDNS client on one of the servers (the one people SSH to, as documented in this question), but the reason for the router update is to move away from SSH to VPN. To that end, is there any difference in behaviour, functionality or maintainability between running it on the computer and running it on the router? Thus far, DynDNS has been more or less a set-and-forget setup, but since the feature is there, I wanted to know whether the router is a better location for the process...
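
    For comparison, the host-side setup is typically a small ddclient config; a minimal sketch with placeholder credentials and hostname (not our actual config), assuming the dyndns2 protocol:

        # /etc/ddclient.conf (illustrative values only)
        protocol=dyndns2
        use=web, web=checkip.dyndns.com/, web-skip='IP Address'
        server=members.dyndns.org
        login=your-login
        password=your-password
        yourhost.dyndns.org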

  • Strategy to allow emergency access to colocation crew

    - by itsadok
    I'm setting up a server at a new colocation center halfway around the world. They installed the OS for me and sent me the root password, so there's obviously a great amount of trust in them. However, I'm pretty sure I don't want them to have my root password on a regular basis. And anyway, I intend to allow only key-based login. In some cases, though, it might be useful to let their technical support log in through a physical terminal, for example if I somehow mess up the firewall settings. Should I even bother worrying about that? Should I set up a sudoer account with a one-time password that I'll change if I ever use it? Is there a common strategy for handling something like this?
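
    One possible shape for this, sketched with placeholder names: a normally locked emergency account with sudo rights, unlocked only for the duration of a support call:

        # create the account once, locked by default (assumes 'wheel' is in sudoers)
        useradd -m -G wheel emergency
        passwd -l emergency

        # when the colo crew needs in: set a fresh one-time password and unlock
        passwd emergency
        passwd -u emergency

        # when they're done: lock it again
        passwd -l emergency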

  • SQL Server 2008 Optimization

    - by hgulyan
    I learned today that if you append OPTION (MAXDOP 0) to your query, it can run on multiple processors, and if it's a huge query it may perform faster. I know the general guidelines on query optimization (using indexes, selecting only needed fields, etc.); my question is about SQL Server optimization itself, perhaps changing some options in the configuration, or anything else. What guidelines are there for SQL Server optimization? Thank you. P.S. I suppose this is not the right place to ask server-related questions. Should I delete it, or can it be migrated to Server Fault?
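
    For the instance-wide counterpart of that query hint, a minimal sketch (server name is a placeholder), using the documented sp_configure option via sqlcmd:

        # show advanced options, then let the optimizer use all processors (0 = no cap)
        sqlcmd -S YOURSERVER -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE;"
        sqlcmd -S YOURSERVER -Q "EXEC sp_configure 'max degree of parallelism', 0; RECONFIGURE;"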

  • How do we keep Active Directory resilient across multiple sites?

    - by Alistair Bell
    I handle much of the IT for a company of around 100 people, spread across about five sites worldwide. We're using Active Directory for authentication, mostly served to Linux (CentOS 5) systems via LDAP. We've been suffering through a spate of events where the IP tunnel between the two major sites goes down and the secondary domain controller at one site can't contact the primary domain controller at the other. It seems that the secondary domain controller starts denying user authentication within minutes of losing connectivity to the primary. How do we make the secondary domain controller more resilient to downtime? Is there a way for it to cache the entire directory and/or at least keep enough information locally to survive a multi-hour disconnection? (We're all in a single organizational unit if that makes any difference.) (The servers here are Windows Server 2003; don't assume that we set this up correctly. I'm a software engineer, not an IT specialist.)
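
    In case it's relevant: as I understand it, the CentOS clients can at least be pointed at both domain controllers so LDAP fails over; a minimal sketch of /etc/ldap.conf with placeholder hostnames (an assumption on my part, not our actual config):

        # /etc/ldap.conf: name both DCs so lookups fail over (hosts illustrative)
        uri ldap://dc1.example.com ldap://dc2.example.com
        bind_timelimit 5     # give up on an unreachable DC quickly
        timelimit 10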

  • Are Plesk server backups useful?

    - by Michael T. Smith
    I'm working for a startup now, and I'm the programmer. Because of our small team size, I'm also handling the server management for now (until we get a dedicated server administrator). I've never used Plesk before, and the server we're using (a Media Temple Dedicated Virtual server) had it installed when I got here. One of my first jobs was to set up backups: Plesk was already running its nightly server-wide backups. I created a small script to dump the web app, its DBs and any assets, tar them, store them, and then copy them to another small server we have (to back up the backups). But we're constantly running into hard-drive space issues because of the Plesk backups. And I'm wondering: are they useful? If I have the web app and all of its assets, I could easily enough get another server up and running. Do we need to keep running Plesk's backups? Thoughts?
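
    For scale, the script I mean is roughly along these lines; a minimal sketch with placeholder paths, DB name and destination host (not the real ones):

        #!/bin/sh
        # Sketch of the app-level backup (all names illustrative).
        stamp=$(date +%Y%m%d)
        mysqldump -u backup -p'secret' appdb > /var/backups/appdb-$stamp.sql
        tar czf /var/backups/app-$stamp.tar.gz /var/www/vhosts/example.com \
            /var/backups/appdb-$stamp.sql
        scp /var/backups/app-$stamp.tar.gz backup@otherserver:/backups/
        # prune local copies older than 14 days so the disk doesn't fill up
        find /var/backups -name 'app-*.tar.gz' -mtime +14 -delete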

  • Monitoring remote laptops

    - by kaerast
    We're looking for something to monitor around 30 remote laptops that are constantly out on the road, never returning to base except for when there are serious hardware faults that need repairing. These laptops won't always be connected to the internet, they'll have mobile broadband and may work offline most of the time. They will be running a mixture of Windows XP, Vista and 7 and there is currently no server setup. We're primarily interested in making sure that Windows Updates and antivirus updates are happening, and I guess we should also be monitoring remaining disk space, what software is installed and ideally hardware health. It might also be nice if we could gain remote access to perform work on them. My main reason for wanting to monitor them is that it's going to be a real pain to get them back to base if anything goes wrong, so I want to be proactive in ensuring they last as long as possible. Can you recommend what I should be monitoring to ensure a long life? What tools would you use to monitor and maintain these computers?

  • Bad idea to keep htop running?

    - by Michael T. Smith
    I'm now monitoring multiple servers (three), and in the coming weeks that'll increase (towards five or six). I've been keeping three terminal windows open running htop via SSH, and I'm now wondering: are there any downsides to having a connection constantly open to production servers?
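
    If the concern is mainly half-open sessions, keepalives plus a terminal multiplexer would cover most of it; a minimal sketch with a placeholder host alias, assuming tmux is installed on the servers (new -A attaches if the session already exists):

        # ~/.ssh/config: notice dead links instead of leaving sessions hanging
        Host prod-*
            ServerAliveInterval 60
            ServerAliveCountMax 3

        # run htop inside tmux so a dropped connection doesn't kill it
        ssh -t prod-web1 'tmux new -A -s mon htop'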

  • 3ware RAID 10 (4-drive): suggested stripe size?

    - by dasko
    I looked around on the site but found nothing really concrete on my question. I will have about 120 GB of data total; the files are 5 MB files, Excel, Word, and about 25 .pst files of about 1.2 GB each. Yes, they use .pst over the network; even though it is not recommended, this is a legacy setup without issue, so we will continue to support it for another year or so. I need to know what you think about a stripe size of 256 KB for the RAID 10, based on the above requirements. I did try to bench with these settings and it seems all right, without any real issue; I'm just trying to rule out anything I might have missed. Thanks.
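
    To rule out cache effects when benching, something like the following might help; a minimal sketch with an illustrative mount point, using direct I/O at sizes resembling the workload:

        # sequential write then read at .pst-like sizes (path illustrative)
        dd if=/dev/zero of=/mnt/array/test.bin bs=1M count=1200 oflag=direct
        dd if=/mnt/array/test.bin of=/dev/null bs=1M iflag=direct
        rm /mnt/array/test.bin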

  • Is the sysadmin/netadmin the de facto project planner at your organization?

    - by user31459
    At my company it has somehow, over the past few years, slowly become my job to come up with a project plan, milestones and timelines for deployment of developer applications. Typical scenario: my team receives a request for a new website/db combo and a date for deployment. I send back a questionnaire for the developer to fill out on all the reqs for the site (SSL? DB? growth projections? etc.). After I get back all the information, the head of development wants a well-developed document covering:

    - what servers it will live on
    - why those servers
    - the timeline for creating the resources
    - a step-by-step SOP for getting the application onto the server, with all related resources created (DNS, firewall, load balancer etc.)

    I may just be whining, but it feels like this is something better suited to our Project Management staff (which we have) or to the developer. I understand that I need to give them a timeline for creating the resources, but this still feels like overkill. We already produce documentation on where everything lives and track configuration changes to equipment. How do other sysadmin folks handle this?

  • Testing for disk write

    - by Montecristo
    I'm writing an application for storing lots of images (size < 5 MB) on an ext3 filesystem; this is what I have for now. After some searching here on Server Fault I have decided on a structure of directories like this:

        000/000/000000001.jpg
        ...
        236/519/236519107.jpg

    This structure will allow me to save up to 1,000,000,000 images, as I'll store a max of 1,000 images in each leaf. I've created it, and from a theoretical point of view it seems OK to me (though I've no experience with this), but I want to find out what will happen when there are directories full of files in there. A question about creating this structure: is it better to create it all in one go (it takes approx. 50 minutes on my PC) or should I create directories as they are needed? From a developer point of view I think the first option is better (no extra waiting time for the user), but from a sysadmin point of view, is this OK? I've thought I could act as if the filesystem were already under the running application: I'll make a script that saves images as fast as it can, monitoring the following:

    - how much time it takes for an image to be saved when there is little or no space used
    - how this changes as the space starts to be used up
    - how much time it takes for an image to be read from a random leaf, and whether this changes a lot when there are lots of files

    Does launching the command

        sync; echo 3 | sudo tee /proc/sys/vm/drop_caches

    make any sense at all? Is it the only thing I have to do to have a clean start if I want to start over with my tests? Do you have any suggestions or corrections?
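
    A rough harness for the timing questions above might look like this; a minimal sketch with illustrative paths, dropping caches before each measurement:

        #!/bin/sh
        # Sketch: time one ~5 MB write into a given leaf (paths illustrative).
        leaf=/data/images/000/000
        dd if=/dev/zero of=/tmp/sample.jpg bs=1M count=5 2>/dev/null
        sync; echo 3 | sudo tee /proc/sys/vm/drop_caches >/dev/null
        time cp /tmp/sample.jpg "$leaf/$(date +%s%N).jpg"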

  • How should I configure my Active Directory servers so that if one goes down, users are not kicked off SQL?

    - by Matty Brown
    Today we shut down one of our Active Directory servers during office hours to check the loading on a UPS. Since all the server did was provide Active Directory in a separate building, in case the main building caught fire or whatever, we didn't think it would have any effect on our users. Seconds after the server was shut down, we had a dozen phone calls from users experiencing this issue:

        [Microsoft SQL Server Login] SQLState: '28000'
        [Microsoft][ODBC SQL Server Driver][SQL Server]Login failed. The login is from an untrusted domain and cannot be used with authentication.

    Once we realized what had happened, we quickly rebooted the downed Active Directory server. Problem solved. But why did this happen? And what if one day a server breaks down and is offline for hours, or days? Shouldn't the other Active Directory servers in the domain service authentication requests without disruption to users? We have three Windows Server 2003 Standard servers running Active Directory as domain controllers with Global Catalogs, all physically located on the same network at gigabit speeds. I believe the domain was originally Windows Server 2000, or maybe even NT 4.0. Could the issue be down to old Group Policies inherited from those old server OSes, or some default setting in Active Directory that needs changing?
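
    For diagnosis, I believe the standard support tools can show which DC and Global Catalog the clients are actually bound to, and whether the survivors advertise properly; a sketch with a placeholder domain name:

        :: which DC is this client using, and is a Global Catalog reachable?
        nltest /dsgetdc:example.local
        nltest /dsgetdc:example.local /GC

        :: health of the remaining DCs: advertising and replication
        dcdiag /test:advertising
        repadmin /replsummary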

  • RAID 6 vs RAID 10: which would you choose?

    - by dasko
    My choice would be RAID 6 for a file server, since you can lose any two drives and it does not matter which two die. From what I understand, with RAID 10 you can also lose two drives, but if they happen to be in the same RAID 1 pair then you are out of luck. Any suggestions? It's a basic file server with about 200 GB of data, and it would act as a single point of backup for other workstations and servers. Thanks in advance.

  • How can I combine code from an old revision when I didn't branch in TortoiseMerge?

    - by gr33d
    I need to combine (merge?) some parts of an old revision with a newer revision of a file. I'm still pretty new to Subversion, so I'm not sure what I'll bomb in the process. I did not branch; these are simply different revisions of a file. How do I send the sections of code from r1 to r3 where they are needed? The keyboard shortcuts and menu options for "theirs", "mine", "left block", "right block", etc. aren't very intuitive. If I need 5 blocks from r1 to be after the first 10 blocks of r3, how do I do it? Shouldn't I be able to go through r1 block by block and decide if and where it belongs in r3? Thanks in advance!
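
    If TortoiseMerge's block commands stay opaque, the command line may be simpler; a minimal sketch (file name is a placeholder) that extracts the old revision for hand-splicing, or reverse-merges it so you keep only the hunks you want:

        # pull the old revision out so you can copy blocks across by hand
        svn cat -r 1 myfile.txt > myfile.r1.txt

        # or merge the r3->r1 delta into the working copy, then revert the
        # hunks you don't want before committing
        svn merge -r 3:1 myfile.txt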

  • MacBook Pro 15-inch replacement battery

    - by ricbax
    So I've finally reached 300+ cycles on my 2008 MBP's original battery, and Apple is pretty much "on the money" too: I'm at 79% health and getting the "Condition: Replace Soon" warning. So I went out to the closest Apple Store and bought a replacement. I would like to get the same lifespan out of the replacement if possible. My question is: the battery comes with a two-dot (green) charge on the indicator; should I put it in and let it run down before doing a full recharge, or begin charging it immediately and then let it run all the way to empty and recharge?

  • Recommendations or advice for shared computer control

    - by Telemachus
    Basic scenario: we are a school (overwhelmingly Mac, some Windows machines via Boot Camp), and we are considering using DeepFreeze to guard the state of our shared machines. We have roughly 250 machines that are either shared laptops (which move around quite a bit) or common desktops in public spaces. Obviously, we spend a lot of time maintaining the machines and trying to reverse the inevitable drift as people make changes to the computers. We would like to control the integrity of the build we initially put onto the machines without handcuffing users, and especially without using Mac's Parental Controls software. (We've had nothing but bad experiences with it.) We've been testing DeepFreeze, and so far it's very impressive. But I'm curious to hear whether people who have used DeepFreeze or any similar software have any advice or tips. To get things started, I will post my own pros and cons.

    Pros:

    - The state of the machine is frozen in our chosen state; all changes made after that disappear upon restart. (This frozen state really appears to cover everything. I have yet to do something to a test machine that isn't instantly healed.)
    - Tons of trivial but time-consuming maintenance is gone in an instant, and lots of not-so-trivial breakage should be avoided.
    - There are good options that allow you to create storage spaces, either globally or per user. (Otherwise, stored files disappear upon reboot. For some machines, that is a good option in itself: simply warn people to save externally or else; the machine is a kiosk, not their storage space.)

    Cons:

    - Any time we actually need to make a change (upgrade basic software, add a printer or an AirPort permanently, add new software), the process is a bit more complex: reboot into a special mode (thawed state), make changes, reboot back into frozen mode. If (when?) we forget this, we will end up making changes that disappear after the next reboot.
    - Users will forget to save files correctly (in the right place or externally), and we will have loud, unpleasant conversations explaining that we can't recover the document they worked on all afternoon yesterday. The machine rebooted; the file is gone.

    These are my initial thoughts, but I would love to hear from other people who have experience with DeepFreeze or similar software. What should we be careful about? Do the pros outweigh the cons? What gains or problems am I not seeing? Thanks.
