Search Results

Search found 8692 results on 348 pages for 'patterns and practices'.

Page 85/348

  • SQL Server 2008 Optimization

    - by hgulyan
    I learned today that if you append OPTION (MAXDOP 0) to your query, it will run on multiple processors, and if it's a huge query, it will perform faster. I know the general guidelines on query optimization (using indexes, selecting only the needed fields, etc.); my question is about SQL Server optimization itself: maybe changing some configuration options, or anything else. What guidelines are there for SQL Server optimization? Thank you. P.S. I suppose this is not the right place to ask server-related questions. Should I delete it, or maybe it can be migrated to serverfault?
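    For reference, the hint and the matching server-wide setting might look something like the sketch below (the table and column names are made up for illustration). Note that MAXDOP 0 does not force parallelism; it simply lets the optimizer use up to all available processors instead of whatever cap is configured at the instance level.

        -- per-query hint: allow this statement to use all available processors
        SELECT OrderID, Total
        FROM dbo.Orders
        OPTION (MAXDOP 0);

        -- the equivalent instance-wide setting (0 = no cap)
        EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
        EXEC sp_configure 'max degree of parallelism', 0; RECONFIGURE;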

    Read the article

  • Most suitable high availability solution

    - by Alex Bagnolini
    My company is hosting a website on a server with IIS, SQL Server, and a 3rd-party Windows service (written in C#, source code available for amendments). We bought a new identical server: 1x quad core, 12 GB RAM, 4x 160 GB SATA in RAID 5, Windows Server 2008 R2 Datacenter, public IP. We aim to put all web pages and the 3rd-party Windows service into a high-availability state. After some lab testing of how to configure Failover Clustering and Hyper-V, we have deep doubts about what the "best" solution would be, "best" meaning maintainable and able to correctly handle a physical server failure. Any suggestion on how we should configure the two servers? We don't need all the configuration steps, just a hint on the right direction to follow.

    Read the article

  • Monitoring remote laptops

    - by kaerast
    We're looking for something to monitor around 30 remote laptops that are constantly out on the road, never returning to base except when there are serious hardware faults that need repairing. These laptops won't always be connected to the internet; they'll have mobile broadband and may work offline most of the time. They will be running a mixture of Windows XP, Vista and 7, and there is currently no server setup. We're primarily interested in making sure that Windows Updates and antivirus updates are happening, and I guess we should also be monitoring remaining disk space, what software is installed, and ideally hardware health. It would also be nice if we could gain remote access to perform work on them. My main reason for wanting to monitor them is that it's going to be a real pain to get them back to base if anything goes wrong, so I want to be proactive in ensuring they last as long as possible. Can you recommend what I should be monitoring to ensure a long life? What tools would you use to monitor and maintain these computers?

    Read the article

  • Bad idea to keep htop running?

    - by Michael T. Smith
    I'm now monitoring multiple servers (3) and in the coming weeks that'll increase (towards 5 or 6). I've been keeping three terminal windows open running htop via SSH and I'm now wondering if there are any downsides to having a connection constantly open to production servers?

    Read the article

  • How to disable Apache mods without any problems

    - by Saif Bechan
    I have an Apache installation where every single mod is enabled. I want my server to be as light as possible, so I want to disable everything I do not need. What is the best way to go about this? I know it's just a matter of commenting out the line in the conf file, but what if some hidden service somewhere needs that module at some random point in time? Can I get some suggestions on what to do?
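    As a rough illustration (a sketch, not a prescription; the module name is just an example and the exact commands depend on the distribution's Apache layout):

        # list the modules Apache has actually loaded
        apachectl -M

        # Debian/Ubuntu layout: disable a module and reload
        a2dismod autoindex
        /etc/init.d/apache2 reload

        # other layouts: comment out the matching LoadModule line in httpd.conf
        #   #LoadModule autoindex_module modules/mod_autoindex.so

    Disabling one module at a time and watching the error log for a while makes it easier to spot anything that silently depended on it.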

    Read the article

  • 3ware RAID 10 (4-drive): suggested stripe size?

    - by dasko
    I looked around on the site but found nothing really concrete on my question. I will have about 120 GB of data total, made up of roughly 5 MB files (Excel, Word) and about 25 .pst files of about 1.2 GB each. Yes, they use .pst files over the network; even though it is not recommended, this is a legacy setup that has run without issue, so we will continue to support it for another year or so. I need to know what you think about a stripe size of 256 KB for the RAID 10, based on the above requirements. I did try a benchmark with these settings and it seems all right, without any real issue; I'm just trying to rule out anything I might have missed. Thanks.

    Read the article

  • Is the sysadmin/netadmin the de facto project planner at your organization?

    - by user31459
    At my company, it has somehow, over the past few years, slowly become my job to come up with a project plan, milestones, and timelines for the deployment of developer applications. Typical scenario: my team receives a request for a new website/DB combo and a date for deployment. I send back a questionnaire for the developer to fill out on all the requirements for the site (SSL? DB? growth projections? etc.). After I get back all the information, the head of development wants a well-developed document covering: which servers it will live on; why those servers; the timeline for creating the resources; and a step-by-step SOP for getting the application onto the server and all related resources created (DNS, firewall, load balancer, etc.). I may just be whining, but it feels like this is something better suited to our project management staff (which we have) or to the developer. I understand that I need to give them a timeline for creating the resources, but it still feels like overkill. We already produce documentation on where everything lives and track configuration changes to equipment. How do other sysadmin folks handle this?

    Read the article

  • How should I configure my Active Directory servers so that if one goes down, users are not kicked off SQL?

    - by Matty Brown
    Today, we shut down one of our Active Directory servers during office hours to check the loading on a UPS. Since all the server did was provide Active Directory in a separate building, in case the main building caught fire or whatever, we didn't think it would have any effect on our users. Seconds after the server was shut down, we had a dozen phone calls from users experiencing this issue: [Microsoft SQL Server Login] SQLState: '28000' [Microsoft][ODBC SQL Server Driver][SQL Server]Login failed. The login is from an untrusted domain and cannot be used with authentication. Once we realized what had happened, we quickly rebooted the downed Active Directory server and the problem was solved. But why did this happen? And what if one day a server has a breakdown and is offline for hours, or days? Shouldn't the other Active Directory servers in the domain service authentication requests without disruption to users? We have 3 Windows Server 2003 Standard servers running Active Directory as domain controllers with Global Catalogs, all physically located on the same network at gigabit speeds. I believe the domain was originally Windows Server 2000, or maybe even NT 4.0. Could the issue be down to old Group Policies inherited from these old server OSes, or some default setting in Active Directory that needs changing?

    Read the article

  • Testing disk write performance

    - by Montecristo
    I'm writing an application for storing lots of images (each under 5 MB) on an ext3 filesystem; this is what I have for now. After some searching here on serverfault I have decided on a structure of directories like this: 000/000/000000001.jpg ... 236/519/236519107.jpg This structure will allow me to save up to 1'000'000'000 images, as I'll store a maximum of 1'000 images in each leaf. I've created it, and from a theoretical point of view it seems OK to me (though I have no experience with this), but I want to find out what will happen when the directories in there are full of files. A question about creating this structure: is it better to create it all in one go (it takes approx 50 minutes on my PC) or should I create directories as they are needed? From a developer's point of view I think the first option is better (no extra waiting time for the user), but from a sysadmin's point of view, is this OK? I thought I could act as if the filesystem were already under the running application: I'll make a script that saves images as fast as it can, monitoring the following: how much time does it take for an image to be saved when there is little or no space used? How does this change when the space starts to be used up? How much time does it take for an image to be read from a random leaf? Does this change a lot when there are lots of files? Does running the command sync; echo 3 | sudo tee /proc/sys/vm/drop_caches make any sense at all? Is this the only thing I have to do to have a clean start if I want to start over again with my tests? Do you have any suggestions or corrections?
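    A minimal benchmarking sketch along those lines (assumptions: the directory tree already exists, sample.jpg is a representative test image, and the leaf numbers are picked at random purely for illustration):

        # drop the page cache so each pass starts cold (needs root)
        sync; echo 3 | sudo tee /proc/sys/vm/drop_caches

        # time how long it takes to write 1000 copies into random leaves
        time for i in $(seq 1 1000); do
            leaf=$(printf '%03d/%03d' $((RANDOM % 237)) $((RANDOM % 520)))
            cp sample.jpg "$leaf/$(date +%s%N).jpg"
        done

    Running the same loop with the copy replaced by a read of random existing files gives the read-side numbers, and repeating both as the tree fills up shows how the timings drift.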

    Read the article

  • Which is generally considered faster or best practice: symlinks or Apache aliases?

    - by Christopher W. Allen-Poole
    I'm curious as to what most people's views are on this subject. Personally, I will almost always prefer symlinks unless I have no other option: I find that it is far more obvious when someone is navigating the file system. On the other hand, aliasing is more platform-independent. Windows XP, for example, doesn't have anything remotely comparable to symlinks (NTFS junctions are not interpreted correctly by at least some environments), which means that anything that relies on symlinks in a *nix-based system cannot be transferred. (I know that x64 Windows OSes have symlinks, but I haven't seen whether they can be read correctly by the environments previously mentioned.) In addition to this, I was also wondering which is considered faster. Is this even possible to know? Do you have a conjecture? I would imagine that since symlinks are generally more low-level than Apache, it would make sense that they would be resolved faster; but, on the other hand, I would guess that Apache has to do a lookup in either case, so it would be disk-read dependent.
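    For concreteness, the two approaches being compared look roughly like this (the paths are made up for illustration; the Apache directive goes in the relevant vhost or httpd.conf, and the symlink also needs Options FollowSymLinks on the parent directory plus a matching <Directory> block granting access):

        # filesystem symlink
        ln -s /srv/app/public /var/www/html/app

        # Apache alias, in the web server configuration
        Alias /app /srv/app/public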

    Read the article

  • NTFS Permission Structure to allow Traversal but no Modification except in Leaf Nodes?

    - by pepoluan
    Assume there's this folder structure:

        D:\ --+-- Acctg --+-- Payable
              |           +-- Receivable
              |
              +-- Fin   --+-- Inv
              |           +-- Tax
              |           +-- Treas
              |
              +-- Mrktg --+-- Ads
                          +-- Promo

    Users are not allowed to change the structure, but they are free to create & delete files & folders in the leaf nodes (i.e., the rightmost folders). The AGDLP principle says that I should assign permissions on the above folders to DL-groups. Let's say I have a G-group of users, G-Accounting-Payable, containing the users that have access to the D:\Acctg\Payable folder. The way I see it, I have two strategies:

    Strategy 1: create three DL-groups and assign them permissions:
    DL-D-Acctg_T -- allowed traversal of the D:\Acctg folder
    DL-D-Acctg-Pay_LF -- allowed listing of the D:\Acctg\Payable folder contents
    DL-D-Acctg-Pay__RW -- allowed full permissions to the contents of the D:\Acctg\Payable folder
    Then add G-Accounting-Payable as a member of all the above DL-groups.

    Strategy 2: create just one DL-group, DL-D-Acctg-Pay__RW, and assign it the proper permissions at each level of the folder tree. Then add G-Accounting-Payable as a member of that DL-group.

    Which strategy is the recommended best practice, and why?
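    If it helps to see Strategy 1 in concrete terms, the grants might be sketched roughly as below (the CONTOSO domain name is invented, and the exact rights and inheritance flags would need testing in your environment; this is an illustration, not a recipe):

        rem traverse/list on the parent, applied to that folder only
        icacls D:\Acctg /grant "CONTOSO\DL-D-Acctg_T:(RX)"

        rem listing of the leaf folder itself
        icacls D:\Acctg\Payable /grant "CONTOSO\DL-D-Acctg-Pay_LF:(RX)"

        rem modify on the leaf, inherited by its files and subfolders
        icacls D:\Acctg\Payable /grant "CONTOSO\DL-D-Acctg-Pay__RW:(OI)(CI)M"

    G-Accounting-Payable would then go into those DL-groups rather than onto the ACLs directly.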

    Read the article

  • RAID 6 vs RAID 10: which would you choose?

    - by dasko
    My choice would be RAID 6 for a file server, since you can lose two drives and it does not matter which two die. From what I understand, with RAID 10 you can also lose two drives, but if they happen to be in the same RAID 1 pair then you are out of luck? Any suggestions? This is a basic file server with about 200 GB of data, and it would act as a single point of backup for other workstations and servers. Thanks in advance.

    Read the article

  • Where to put core services in a two-node cluster

    - by Veniamin
    I'm currently configuring a two-node HA cluster based on CentOS with DRBD. Most services are packed into virtual machines, with migration available. I have not decided where to put some core services, such as DHCP, LDAP, and DNS, which are critical for the whole network infrastructure. There are two possibilities: configure them as redundant HA services on the cluster hosts, or pack them all into a dedicated virtual machine. What is the best practice?

    Read the article

  • Which iptables rule do you think is a 'must have'?

    - by Saif Bechan
    I have some basic iptables rules set up now for my VPS: just block everything except some default ports (80, 21, 22, 443). I do get brute-forced a lot. I have heard that iptables is very powerful, but I have not seen many use cases. Can you give me an example of a rule (or some rules) you always use, and a small explanation of why? I cannot find a general best-practice post here on SF; if there is one, I would like the link. If this is a duplicate, I am sorry and it can be closed.
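    One frequently cited example, given the brute-forcing mentioned above, is rate-limiting new SSH connections with the recent match; a hedged sketch (the port and thresholds are just illustrative):

        # keep established traffic flowing
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

        # track new SSH connections per source IP and drop any source that
        # opens more than 3 new connections within 60 seconds
        iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --set --name SSH
        iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --update --seconds 60 --hitcount 4 --name SSH -j DROP
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT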

    Read the article

  • Software RAID to hardware RAID, can it be done?

    - by gtaylor85
    Can it be done... well? (For the record, I did not set this server up.) In my server there are 4 disks: 3 of them are in a software RAID 5, and 1 has the OS installed. I want to buy a RAID controller and 4 new HDs and set up a hardware RAID 5. If possible, I'd like to just image the current setup and use it to build the new one. My questions are: can I image a 3-disk RAID 5 onto a 4-disk RAID 5? Are there problems with this? What is considered best practice for the OS: to have it on a separate disk as it currently is, or to install it on the RAID 5? Thank you. I can clarify anything; I'm not sure what other info might be pertinent.

    Read the article

  • How can I combine code from an old revision when I didn't branch in TortoiseMerge?

    - by gr33d
    I need to combine (merge?) some parts of an old revision with a newer revision of a file. I'm still pretty new to Subversion, so I'm not sure what I'll bomb in the process. I did not branch; these are simply different revisions of a file. How do I send the sections of code from r1 to r3 where they are needed? The keyboard shortcuts and menu options for "theirs", "mine", "left block", "right block", etc. aren't very intuitive. If I need 5 blocks from r1 to be after the first 10 blocks of r3, how do I do it? Shouldn't I be able to go through r1 block by block and decide if and where it belongs in r3? Thanks in advance!
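    One low-tech way to do this, sketched under the assumption that the file lives at src/foo.c (a made-up path), is to pull the old revision out as a plain file and compare it against the working copy:

        # materialize the r1 version next to the current working copy
        svn cat -r 1 src/foo.c > foo.r1.c

        # then open foo.r1.c and src/foo.c side by side in TortoiseMerge (or any
        # two-way diff tool), copy the wanted blocks across, save, and commit

    This sidesteps the three-way "theirs/mine" vocabulary entirely, since you are just lifting blocks from one file into another.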

    Read the article

  • MacBook Pro 15-inch replacement battery

    - by ricbax
    So I've finally got to the 300+ cycles with my MBP 2008's original battery. Apple is pretty much "on the money" too! I am at 79% Health and getting the Condition: Replace Soon warning. So I went out to the closest Apple Store and bought a replacement. I would like to get the same lifespan out of my replacement if possible. My question is: The battery comes with a 2 dot (green) charge on the indicator, should I put the battery in and let it run down and do a full recharge OR begin charging it immediately and then let it run all the way to empty and recharge?

    Read the article

  • Recommendations or advice for shared computer control

    - by Telemachus
    Basic scenario: we are a school (overwhelmingly Mac, some Windows machines via BootCamp), and we are considering using DeepFreeze to guard the state of our shared machines. We have roughly 250 machines that are either shared laptops (which move around quite a bit) or common desktops in public spaces. Obviously, we spend a lot of time maintaining the machines and trying to reverse the inevitable drift as people make changes to the computers. We would like to control the integrity of the build we initially put onto the machines without handcuffing users and especially without using Mac's Parental Control software. (We've had nothing but bad experiences with it.) We've been testing DeepFreeze, and so far it's very impressive. But I'm curious to hear if people who have used DeepFreeze or any similar software have any advice or tips. To get things started, I will post my own pros and cons. Pros: The state of the machine is frozen in our chosen state. All changes made to the machine after that disappear upon restart. (This frozen state really appears to cover everything. I have yet to do something to a test machine that isn't instantly healed.) Tons of trivial but time-consuming maintenance is gone in an instant. Also, lots of not-so-trivial breakage should be avoided. There are good options, however, that allow you to create storage spaces either globally or per user. (Otherwise, stored files disappear upon reboot. For some machines, this is a good option itself. Simply warn people: save externally or else; this machine is a kiosk, not your storage space.) Cons: Anytime we actually need to make a change (upgrade basic software, add a printer or an airport permanently, add new software), the process is a bit more complex. Reboot into a special mode (thaw state), make changes, reboot back into frozen mode. If (when?) we forget this, we will end up making changes that disappear after the next reboot. Users will forget to save files correctly (in the right place or externally), and we will have loud, unpleasant conversations explaining that we can't recover the document they worked on all afternoon yesterday. The machine rebooted. The file is gone. These are my initial thoughts, but I would love to hear from other people who have experience with DeepFreeze or any similar software. What should we be careful about? Do the pros outweigh the cons? What gains or problems am I not seeing? Thanks.

    Read the article

  • Regarding AD Domain controllers and remote branch offices

    - by Alex
    We have a central HQ building and a lot of small branch offices connecting via VPN, and we want to implement AD (if you can believe we still haven't). We want everyone to log in using domain accounts and to be policed centrally. We are OK with having an RODC in a branch office with, say, 10 computers. But we have these small branches with only two to four PCs. Some of these branches connect to HQ via IPsec site-to-site VPN, some via remote-access (client-based) VPN. So there is no problem with the ones that have a local RODC or that connect to the HQ DCs via a VPN router. But what about the small branches? We don't really want to set up a machine there, nor do we want to invest in Windows Server licenses or fancy network equipment. Also, the problem is that we cannot access the HQ DCs over the client VPN, because until we are logged in and connected to the HQ internal network the DCs aren't reachable. What is typically done in that situation if central management of policies on those PCs is needed? Or is it better to let 'em loose and use local policies and accounts in this situation?

    Read the article

  • Separate tables or single table with queries?

    - by Joe
    I'm making an employee information database. I need to handle separated employees. Should I a. set up a query with a macro to send separated employees to a separate table, or b. just add a flag to the single table denoting separation? I understand that it's best practice to take choice b, and the one reason I can think of for this is that any structural changes I make to the table later will have to be done in both places. But it also seems like setting up a flag forces me to filter out that flag for basically every useful query I'm going to make in the future.
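    One common way to make option b less painful in day-to-day use, sketched in generic SQL (the table, column, and view names are invented for illustration, and the exact types depend on the database engine), is to pair the flag with a view of active employees so routine queries don't have to repeat the filter:

        -- flag on the single employee table
        ALTER TABLE Employee ADD is_separated INTEGER NOT NULL DEFAULT 0;

        -- view of current staff; everyday queries read from this
        CREATE VIEW ActiveEmployee AS
            SELECT * FROM Employee WHERE is_separated = 0;

        SELECT * FROM ActiveEmployee WHERE department = 'Accounting';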

    Read the article

  • Do you work in your server room?

    - by Gary Richardson
    I once had a job offer from a company that wanted my workstation to be in the AC controlled, noisy server room with no natural light. I'm not sure what their motivation was. Possibly it made sense to them for me to be close to the servers, or possibly they wanted to save the desk space for other employees. I turned down the job (for many reasons, including the working environment). Is this a common practice? Do you work in your LAN room? How do you cope?

    Read the article

  • How do you gracefully upgrade mission critical systems to wildly disparate systems?

    - by Ernie
    In the span of the 12+ years of my career, I have yet to overcome this hurdle and I suspect the answer simply isn't easy or even possible, so I ask everyone here for their experience. Say that you're running into egregious problems that can only be fixed by moving from one platform to another - either from making a mistake in choosing the platform that was chosen years ago, or simply growing beyond what the system was originally designed for. You know for certain that the cruft that has built up over time will invariably mean that it will be nearly impossible to test for all the things that will certainly lead to tech support hell - which we all know leads to the loss of customers. Not that customers aren't already complaining about the egregious problems that already exist! The best possible way that I've discovered so far is to maybe devise a plan for the changeover, test it on a few clients, test it on a dozen clients, test it on a hundred clients, then finally finish the changeover for everyone and pray that you've worked out all the bugs with those first hundred and twenty, and that the animal by-products will not hit the ventilation system in the most spectacular fashion possible. However, that doesn't mean that it won't anyway. So say that you're moving from Exchange to Exim (or even just Sendmail to Exim). How do you handle it?

    Read the article

  • Can a Windows Domain play along with a Hosted Exchange service?

    - by benzado
    I'm setting up a computer network for a small (10-20 people) company. They are currently using a Hosted Exchange service they are totally happy with. Other than that, they are starting from scratch (office doesn't even have furniture yet). They will need some kind of file sharing server set up in their office. If I set up a machine as a file server and nothing more, users will have three passwords to deal with: local machine, file server, and email. If I set up a Domain Controller, identities for local machine and file server will be the same. But what about the Hosted Exchange server? Must the users have a separate email password, or is it possible to combine the two? (I realize it might depend on the specific hosting provider, but is it possible?) If not, it seems like I have these options: Deal with it: users have a separate email password. Host Exchange on the local server: more than they want to manage in-house? Purchase a hosted VPS, make it part of the domain, and host Exchange there. (Or can/should a VPS be a domain controller?) I realize I have a lot of questions in there. The main one: is there any reason to use a Hosted Exchange service if I'm setting up other Windows services?

    Read the article

  • Where to put my backup.sh?

    - by Temnovit
    I'm writing a shell script that will make backups of my system (like this: https://help.ubuntu.com/8.04/serverguide/C/backup-shellscripts.html). What is the best location in my system to store this file? I know I can put it anywhere, but what happens if it is stored in a directory that is itself being backed up? What is the best practice here? I'm running an Ubuntu server.
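    As one possible convention (an illustration only; the paths and schedule are assumptions, not the single right answer), administrator-maintained scripts often live somewhere like /usr/local/sbin, outside the data directories being archived, and get run from root's crontab:

        # put the script in a stable, root-owned location
        sudo install -m 755 backup.sh /usr/local/sbin/backup.sh

        # run it nightly at 02:00 from root's crontab (crontab -e as root)
        # 0 2 * * * /usr/local/sbin/backup.sh

    If the script does live inside a tree that gets backed up, the worst case is usually just that copies of it end up in the archives, so excluding it is a tidiness choice rather than a correctness one.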

    Read the article
