Search Results

Search found 12439 results on 498 pages for 'bad practice'.

Page 21/498 | < Previous Page | 17 18 19 20 21 22 23 24 25 26 27 28  | Next Page >

  • Bad ways to secure a wireless network

    - by Moshe
    I was wondering if anybody had any thoughts on this, as I recently saw a Verizon DSL network set up where the WEP key was the last 8 characters of the router's MAC address. (It's bad enough that they were using WEP in the first place...)

    Read the article

  • Best practice RAID groups for EqualLogic PS6510X

    - by 20th Century Boy
    We are thinking about purchasing 4 x EqualLogic PS6510X SANs (the Sumo boxes). Each has 48 x 600GB 10k SAS drives. They will be stacked to form a logical pool of storage (all in the same location). I understand that when you create a RAID group it's done on a "per box" basis, so one box could be RAID 50, another RAID 10, etc. My question is, should I make one box a "performance" box, i.e. RAID 10, and the other boxes "standard", i.e. RAID 50? How do people configure their EQL arrays in the real world?

    Read the article

  • Is my Cisco switch port bad?

    - by ewwhite
    I've been chasing a packet-loss and network stability issue for a handful of end-users on an internal network for the past few days... These issues surfaced last week, however the location was struck by lightning six weeks ago. I was seeing 5-10% packet loss between a stack of four Cisco 2960's and several PC's and phones on the other side of a 77-meter run. The PC's were run inline with the phones over a trunked link (switchport configuration pastebin). We were seeing dropped calls and interruptions in client-server applications and Microsoft Exchange connectivity. I tried the usual troubleshooting steps remotely, having a local technician do the following during breaks in user and production activity:
    - Change cables between the wall jack and device.
    - Change patch cables between the patch panel and switch port(s).
    - Try different switch ports within the 2960 stack.
    - Change end-user devices with known-good equipment (new phones, different PC's).
    - Clear switch port interface counters and monitor incrementing errors closely. (Pastebin output of sh int)
    - Pore over the device logs and Observium RRD graphs. No link up/down issues from the switch side.
    - Change power strips on the end-user side.
    - Test cable runs from the Cisco 2960 using test cable-diagnostics tdr int Gi4/0/9. (clean)
    - Test cable runs with a Tripp-Lite cable tester. (clean)
    - Run diagnostics on the switch stack members. (clean)
    In the end, it took three changes of switch ports to find a stable solution. The only logical conclusion is that a few Cisco 2960 switch ports are bad or flaky... Not dead, but not consistent in behavior either. I'm not used to seeing individual ports die in this manner. What else can I test or check to determine if these devices are bad? Is it common for single ports to have problems, rather than a contiguous bank of ports? BTW - show cable-diagnostics tdr int Gi4/0/14 is very cool...

        Interface Speed Local pair Pair length        Remote pair Pair status
        --------- ----- ---------- ------------------ ----------- --------------------
        Gi4/0/14  1000M Pair A     79 +/- 0 meters    Pair B      Normal
                        Pair B     75 +/- 0 meters    Pair A      Normal
                        Pair C     77 +/- 0 meters    Pair D      Normal
                        Pair D     79 +/- 0 meters    Pair C      Normal

    Read the article

  • Blocking a country (mass IP ranges), best practice for the actual block

    - by kwiksand
    Hi all, This question has obviously been asked many times in many different forms, but I can't find an actual answer to the specific plan I've got. We run a popular European commercial deals site, and are getting a large amount of incoming registrations/traffic from countries that cannot even take part in the deals we offer (and many of the retailers aren't even known outside Western Europe). I've identified the problem area to block a lot of this traffic, but (as expected) there are thousands of IP ranges required. My question now (finally!): on a test server I created a script to block each range within iptables, but the amount of time it took to add the rules was large, and iptables was unresponsive afterwards (especially when attempting an iptables -L). What is the most efficient way of blocking large numbers of IP ranges: iptables? A plugin where I can preload them efficiently? hosts.deny? .htaccess (nasty, as I'd be running it in Apache on every load-balanced web server)? Cheers
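    A minimal sketch of the usual low-overhead approach, assuming the ranges are available as CIDR blocks in a text file (the file name and set name below are placeholders): load them into an ipset hash:net set and match the whole set with a single iptables rule, instead of one rule per range.

        # Create the set once, then bulk-load the CIDR ranges into it.
        ipset create geoblock hash:net -exist
        while read -r range; do
            ipset add geoblock "$range" -exist
        done < /etc/geoblock/ranges.txt

        # A single rule matches the entire set; lookups are hash-based, so
        # packet processing and iptables -L stay fast with thousands of ranges.
        iptables -I INPUT -m set --match-set geoblock src -j DROP

    For very large lists it is usually quicker still to generate the "add" lines in a file and load them with ipset restore in one call.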

    Read the article

  • How to connect to a server continuously over a bad Internet connection

    - by Nikhil
    I have a bad Internet connection; it disconnects frequently, and on reconnect I'm assigned a different IP address by the ISP. The problem is that I connect to a remote VPS (Ubuntu), and when the Internet connection is disrupted and reconnected, I can no longer do anything on the terminal. I have to restart the terminal and re-initiate the connection. Is there a way I can have a persistent connection with the server?
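    A minimal sketch of the common workaround, assuming packages can be installed on the VPS: run the working session inside tmux (or screen) on the server so it survives the dropped TCP connection, and optionally use autossh or mosh on the client to re-establish the link automatically. Host names below are placeholders.

        # On the VPS: keep the session inside tmux so it outlives disconnects.
        ssh user@vps.example.com
        tmux new -s work            # work inside this session

        # After the connection drops, reconnect and pick up where you left off.
        ssh user@vps.example.com
        tmux attach -t work

        # Optional, on the client: autossh restarts the SSH link when it dies;
        # mosh tolerates IP changes and packet loss natively.
        autossh -M 0 -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" user@vps.example.com
        mosh user@vps.example.com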

    Read the article

  • RHN Satellite / Spacewalk custom channels, best practice?

    - by tore-
    Hi, I'm currently setting up RHN Satellite, and all works well. I'm in the process of creating custom channels, since we have certain software which should be available to all nodes of the Satellite, e.g. puppet, facter, subversion, php (a newer version than present in base). I've tried to find documentation on best practices for this: how the channels should be set up, how to handle different architectures, how to handle noarch packages, and how to sync updated dependencies when updating a custom package in a custom channel (e.g. php is updated; how do I fetch all of its updated dependencies?). The channel management documentation from RHEL (http://www.redhat.com/docs/en-US/Red_Hat_Network_Satellite/5.3/Channel_Management_Guide/html/Channel_Management_Guide-Custom_Channel_and_Package_Management.html) doesn't provide me with enough information on how to solve any of these issues. All tips, tricks and information regarding this would be great!
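    As a point of reference only (not a layout recommendation): a common convention is one custom channel per base channel and architecture, with noarch packages pushed into each, and packages uploaded via rhnpush. A rough sketch follows; the channel label, server URL, credentials and package name are placeholders, and the exact flags may differ between Satellite/Spacewalk versions, so check rhnpush --help first.

        # Hedged sketch: push a locally built RPM into a custom channel.
        rhnpush --server=http://satellite.example.com/APP \
                --username=admin \
                --channel=custom-tools-el5-x86_64 \
                -v puppet-0.25.5-1.el5.noarch.rpm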

    Read the article

  • Intermittent 400 bad request header field is missing ':' with Apache and SSL

    - by David Tinker
    Apache is returning rare intermittent 400 "bad request header field is missing ':' olhuaqv3o1t29flvr0 (random string)" errors. This seems to be related to HTTPS access and happens from Firefox, IE, Chrome, etc. I am using a certificate from RapidSSL. Server: Apache/2.2.14 (Ubuntu) DAV/2 SVN/1.6.6 mod_jk/1.2.28 PHP/5.3.2-1ubuntu4.5 with Suhosin-Patch mod_ssl/2.2.14 OpenSSL/0.9.8k. Anyone know how to fix this?
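    A hedged diagnostic sketch, not a known fix: a random string in that error often indicates non-TLS bytes reaching a vhost (plain HTTP spoken to the SSL port, or the reverse), so it can help to confirm exactly which vhost and certificate answer on each port. The hostname below is a placeholder.

        # Which certificate/vhost answers the TLS handshake (with and without SNI)?
        openssl s_client -connect www.example.com:443 -servername www.example.com < /dev/null
        openssl s_client -connect www.example.com:443 < /dev/null

        # Compare normal HTTPS access with plain HTTP sent to the SSL port.
        curl -vk https://www.example.com/
        curl -v http://www.example.com:443/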

    Read the article

  • Best Practice - SQL 2012 & IIS in VMware

    - by Dan Ribar
    We are pretty new to VMware and looking for some thoughts on our environment. We have a VMware cluster that has, on one host:
    - VM#1: MS Windows 2008 R2 Enterprise & SQL Server 2012
    - VM#2: MS Windows 2008 R2 Standard & IIS
    The IIS ASP.NET app talks directly to the SQL Server. We had a similar environment on physical servers a few months ago and just recently moved to the virtualized environment. Regarding the setup, we have not tweaked any of the VM resource parameters -- all is set as standard and all is working. What is observed is that the VMs seem to spool down and we get lags in response. Of course this isn't as fast as the old physical environment, but I am wondering:
    - Is it a good idea to run the SQL server and the IIS server on the same host? They are the only two VMs on it. The host is a new Dell R620 with 192 GB of memory.
    - Does it make sense to change any CPU or memory reservations when it doesn't seem like there is any contention?
    - Is there a way to keep the VMs spooled up to eliminate delays?
    This is a brand new, squeaky-clean vanilla install. What are your thoughts?

    Read the article

  • Moving a Drupal site between Linux servers, best practice to avoid file-ownership problems

    - by zero
    I want to port over a Drupal Commons 6x24 from a local LAMP stack to a production webserver. Both systems run openSUSE Linux. How do I do this, and what are the most important steps? How should I handle file ownership? It's important for me to have full control of the file ownership. If I use the wwwrun account, I frequently run into problems, due to a very strict webserver admin (see, for example, the long history of looking for fixes and solutions in this thread, and, even more interesting, this very long and impressive thread). All the trouble I run into has to do with file ownership and permissions. My main interest is in the general options I have for porting a Drupal site from one Linux server to another. This is my current setup (note: this was just a quick, hacked installation - quick and dirty):

        linux-vi17:/srv/www/htdocs/com624 # ls -l
        insgesamt 224
        -rwxrwxrwx  1 root www 45285 19. Jan 00:54 CHANGELOG.txt
        -rwxrwxrwx  1 root www   925 19. Jan 00:54 COPYRIGHT.txt
        -rwxrwxrwx  1 root www   206 19. Jan 00:54 cron.php
        drwxrwxrwx  2 root www  4096 19. Jan 00:54 includes
        -rwxrwxrwx  1 root www   923 19. Jan 00:54 index.php
        -rwxrwxrwx  1 root www  1244 19. Jan 00:54 INSTALL.mysql.txt
        -rwxrwxrwx  1 root www  1011 19. Jan 00:54 INSTALL.pgsql.txt
        -rwxrwxrwx  1 root www 47073 19. Jan 00:54 install.php
        -rwxrwxrwx  1 root www 15572 19. Jan 00:54 INSTALL.txt
        -rwxrwxrwx  1 root www 14940 19. Jan 00:54 LICENSE.txt
        -rwxrwxrwx  1 root www  1858 19. Jan 00:54 MAINTAINERS.txt
        drwxrwxrwx  3 root www  4096 19. Jan 00:54 misc
        drwxrwxrwx 35 root www  4096 19. Jan 00:54 modules
        drwxrwxrwx  4 root www  4096 19. Jan 00:54 profiles
        -rwxrwxrwx  1 root www  1470 19. Jan 00:54 robots.txt
        drwxrwxrwx  2 root www  4096 19. Jan 00:54 scripts
        drwxrwxrwx  4 root www  4096 19. Jan 00:54 sites
        drwxrwxrwx  7 root www  4096 19. Jan 00:54 themes
        -rwxrwxrwx  1 root www 26250 19. Jan 00:54 update.php
        -rwxrwxrwx  1 root www  4864 19. Jan 00:54 UPGRADE.txt
        -rwxrwxrwx  1 root www   294 19. Jan 00:54 xmlrpc.php
        linux-vi17:/srv/www/htdocs/com624 #

    Thanks to BetaRide's answer, here is a quick overview of the drush rsync functionality (http://drush.ws/):

        core-rsync: Rsync the Drupal tree to/from another server using ssh.
        Examples:
          drush rsync @dev @stage            Rsync Drupal root from dev to stage (one of which must be local).
          drush rsync ./ @stage:%files/img   Rsync all files in the current directory to the 'img' directory in the file storage folder on stage.
        Arguments:
          source        May be rsync path or site alias. See rsync documentation and example.aliases.drushrc.php.
          destination   May be rsync path or site alias. See rsync documentation and example.aliases.drushrc.php.
        Options:
          --mode                  The unary flags to pass to rsync; --mode=rultz implies rsync -rultz. Default is -az.
          --RSYNC-FLAG            Most rsync flags passed to drush sync will be passed on to rsync. See rsync documentation.
          --exclude-conf          Excludes settings.php from being rsynced. Default.
          --include-conf          Allow settings.php to be rsynced.
          --exclude-files         Exclude the files directory.
          --exclude-sites         Exclude all directories in "sites/" except for "sites/all".
          --exclude-other-sites   Exclude all directories in "sites/" except for "sites/all" and the site directory for the site being synced. Note: if the site directory is different between the source and destination, use --exclude-sites followed by "drush rsync @from:%site @to:%site".
          --exclude-paths         List of paths to exclude, separated by : (Unix-based systems) or ; (Windows).
          --include-paths         List of paths to include, separated by : (Unix-based systems) or ; (Windows).
        Topics: docs-aliases (site aliases overview with examples). Aliases: rsync
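    For the move itself, a minimal sketch of one common approach, assuming drush with site aliases is configured on both ends and that the production tree should be owned by a dedicated deploy account, with the web server group given write access only where Drupal needs it. The alias names and the deploy user are placeholders; the www group and document root are taken from the listing above.

        # Copy code and files, then the database, using drush site aliases.
        drush rsync @local @prod
        drush sql-sync @local @prod

        # On the production box: tighten ownership and permissions.
        chown -R deploy:www /srv/www/htdocs/com624
        find /srv/www/htdocs/com624 -type d -exec chmod 755 {} \;
        find /srv/www/htdocs/com624 -type f -exec chmod 644 {} \;
        chmod -R 775 /srv/www/htdocs/com624/sites/default/files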

    Read the article

  • Eject a bad disk from optical drive

    - by Chuck
    I have an Alienware computer with one of the optical DVD drives that does not have a manual tray, just a slot to insert the disk. I recently inserted a disk that was apparently bad. It is unreadable and does not show up in Windows Explorer. I tried right-clicking on the drive letter and hitting eject, but I get an error message that there is no disk in the drive. How do I get the d--ned disk out so I can use the drive?

    Read the article

  • Best practice for scaling a single application source to multiple nodes

    - by Andrew Waters
    I have an application which needs to scale horizontally to cover web and service nodes (at the moment they're all on one) but interact with the same set of databases and source files (both application code and custom assets). The database is no problem; it's handled already with replication in MongoDB. Also, the configuration of the servers is the same (100% Linux). This question is literally about sharing a filesystem between machines so that its content is always correct, regardless of the node accessing it. My two thoughts so far have been NFS and SAN - SAN being prohibitively expensive, and NFS seeing some performance issues on the second node with regards to glob()ing in PHP. Does anyone have recommended strategies or other techniques that don't involve sharding data across nodes, or any potential gotchas in NFS that may cause slow disk seek times? To give you an idea of the scale, the main node initialises its application modules in ~0.01 seconds, while the secondary takes ~2.2 seconds. They're VMs inside a local virtual network in ESXi and ping time between them is ~0.3 ms.
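    A hedged sketch, not a recommendation: if the shared code is effectively read-only on the web nodes, relaxing NFS attribute caching often removes most of the per-file stat/glob overhead that shows up as slow module initialisation. The export and mount point below are placeholders, and option support depends on the NFS version in use.

        # Mount the application share read-only with relaxed attribute caching,
        # so repeated glob()/stat() calls rarely have to hit the wire.
        mount -t nfs -o ro,noatime,actimeo=60,nocto \
            filer.example.com:/export/app /var/www/app

        # Rough before/after measure of directory traversal cost on the share.
        time ls -lR /var/www/app > /dev/null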

    Read the article

  • Puppet variables best practice, generalise or specialise?

    - by Andrei Serdeliuc
    I'm trying to figure out which things should be in git within the puppet manifest and which should be in env vars like FACTER_my_var and use that in the manifest instead. Scenario: you are deploying 3 php apps and you've already built all the layers up to the app in other manifests (base system, php extensions, users, etc), and all that's left is installing the correct app (from an apt repo) and creating a vhost. I'm tempted to have something along the lines of:

        apache::vhost { $::project_hostname:
          priority    => '10',
          port        => '80',
          docroot     => $::project_document_root,
          logroot     => "/var/log/apache2/${$::project_name}",
          serveradmin => '[email protected]',
          require     => Package[httpd],
          ssl         => false,
          override    => 'all',
          setenv      => ["APP_KERNEL dev"]
        }

    This would run on each server, and the FACTER_project_* vars would be set on a per server basis. An obvious restriction of this would be that you can't run more than one app with this specific example. Or would you rather have project_x.pp, project_y.pp which have hardcoded paths and names?
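    For reference, a minimal sketch of how the per-server values could be supplied without exporting FACTER_* variables in the agent's environment: recent Facter versions read key=value files from an external facts directory. The fact names mirror the question; the values and paths are placeholders.

        # On each node, define the per-server facts once.
        mkdir -p /etc/facter/facts.d
        {
          echo "project_name=shop"
          echo "project_hostname=shop.example.com"
          echo "project_document_root=/var/www/shop/web"
        } > /etc/facter/facts.d/project.txt

        # One-off runs can still override a fact via the environment instead:
        FACTER_project_name=shop puppet agent --test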

    Read the article

  • NetApp and Hyper-V 2012 best practice/whitepapers?

    - by grimstoner
    We've recently acquired a NetApp/Cisco UCS solution, and I'd like to gather some background knowledge as to the best practices when setting up Hyper-V 2012 on such a solution. There is an upcoming seminar (in the Netherlands, http://www.realdolmen.com/nl/MSHyper-v-2012_NetApp), but it's in Dutch, and a couple of weeks away... Does anyone have some whitepapers/documentation about such a setup, or hasn't it been done before?

    Read the article

  • Is Fedora a bad choice for a server?

    - by Jakobud
    I'm taking over IT responsibilities at a small company. Most of the servers appear to be running various releases of Fedora (file servers, backup servers, Oracle servers, etc). I don't have much experience with Fedora, but I was under the impression it's geared toward end-user desktops/workstations/laptops. Is Fedora a bad choice for servers?

    Read the article

  • Best practice to create an FTP administrator account on vsftpd

    - by jtd
    Background: My manager would like me to create an administration account for our FTP server. When logged in via FTP, it should immediately display all of the home directories of the users, and be able to modify any directory or file in any way possible. What would be the best way to go about this? I planned on chrooting this FTP admin to /home, but I don't know how to properly go about the permissions. Maybe make a group called ftp_admins and chgrp the /home folder? But then wouldn't it affect the users accessing their folders? Any help is appreciated.
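    A minimal sketch of one way this is commonly done, assuming vsftpd with local users; the account and group names are placeholders, and using ACLs avoids changing the existing ownership of the users' home directories.

        # Create the admin account and group.
        groupadd ftpadmins
        useradd -m -G ftpadmins ftpadmin

        # Grant the group read/write access to the home directories via ACLs,
        # leaving the original owners, groups and modes untouched.
        setfacl -R -m g:ftpadmins:rwX /home
        setfacl -R -d -m g:ftpadmins:rwX /home

        # In /etc/vsftpd.conf: chroot local users, but give the admin /home
        # as its root through a per-user config (user_config_dir).
        echo "chroot_local_user=YES" >> /etc/vsftpd.conf
        echo "user_config_dir=/etc/vsftpd_user_conf" >> /etc/vsftpd.conf
        mkdir -p /etc/vsftpd_user_conf
        echo "local_root=/home" > /etc/vsftpd_user_conf/ftpadmin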

    Read the article

  • Best Practice: Migrating Email Boxes (maildir format)

    - by GruffTech
    So here's the situation. I've got about 20,000 maildir email accounts chewing up several hundred GB of space on our email server. Maildir by nature keeps thousands of tiny a** little files, instead of one .mbox file or the like... So I need to migrate all of these several million files from one server to the other, for both space and life-cycle reasons. The conventional methods I would use all work just fine; rsync is the option that comes immediately to mind, however I wanted to see if there are any other "better" options out there. rsync not handling multi-threaded transfers in this situation sucks, because it never actually gets up to speed and saturates my network connection; because of this, the transfer from one server to another will take hours upon hours, when it shouldn't really take more than one or two. I know this is highly opinionated and subjective and will therefore be marked community wiki.
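    A minimal sketch of the usual workaround for rsync's single-stream limit: split the job per mailbox and run several rsync processes at once with GNU parallel. The paths, destination host and job count are placeholders; a final single-stream pass catches anything that changed mid-migration.

        # Run 8 rsync streams, one mailbox directory per job.
        ls /var/vmail | parallel -j 8 \
            rsync -a --whole-file /var/vmail/{}/ mail2.example.com:/var/vmail/{}/

        # Final clean-up pass (with deletions) once mail delivery is stopped.
        rsync -a --delete /var/vmail/ mail2.example.com:/var/vmail/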

    Read the article

  • Nginx bad gateway and connection errors

    - by r2b2
    I've followed this tutorial for a basic installation of nginx. I always get bad gateway errors, and when I look at the logs I see:

        [error] 3226#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client

    Here are my nginx.conf and the contents of my sites-available/defaults (nginx.conf, defaults). I am also seeing this error:

        conflicting server name "explorable.com" on 0.0.0.0:80, ignored

    I am using Ubuntu 12.04 and PHP5-FPM. Thank you!
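    A hedged first check for this combination: a 111 "Connection refused" from the upstream usually means nginx's fastcgi_pass target doesn't match where PHP5-FPM is actually listening (TCP 127.0.0.1:9000 versus a Unix socket). The paths below are the usual Ubuntu 12.04 locations but may differ.

        # Where does PHP-FPM listen?
        grep -R "^listen" /etc/php5/fpm/pool.d/

        # Where does nginx expect it?
        grep -R "fastcgi_pass" /etc/nginx/sites-available/

        # Is anything actually listening there?
        ss -lnt | grep 9000
        ls -l /var/run/php5-fpm.sock

        # Once the two agree, restart/reload both services.
        service php5-fpm restart && service nginx reload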

    Read the article

  • Best practice for administering a (hadoop) cluster

    - by Alex
    Dear all, I've recently been playing with Hadoop. I have a six-node cluster up and running with HDFS, and have run a number of MapReduce jobs. So far, so good. However, I'm now looking to do this more systematically and with a larger number of nodes. Our base system is Ubuntu, and the current setup has been administered using apt (to install the correct Java runtime) and ssh/scp (to propagate out the various conf files). This is clearly not scalable over time. Does anyone have any experience of good systems for administering (possibly slightly heterogeneous: different disk sizes, different numbers of CPUs on each node) Hadoop clusters automagically? I would consider diskless boot - but imagine that with a large cluster, getting the cluster up and running might be bottlenecked on the machine serving the OS. Or some form of distributed Debian apt to keep the machines' native environment synchronised? And how do people successfully manage the conf files over a number of (potentially heterogeneous) machines? Thanks very much in advance, Alex
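    Not a substitute for a real configuration-management tool (Puppet, Chef and the like are the usual longer-term answer here), but as an interim step up from hand-run ssh/scp, the conf-file propagation can at least be made repeatable with a small wrapper. Node names, paths and the service name are placeholders.

        #!/bin/bash
        # Push the Hadoop conf directory to every node and bounce the daemons.
        NODES="node01 node02 node03 node04 node05 node06"

        for node in $NODES; do
            rsync -a --delete /etc/hadoop/conf/ "root@${node}:/etc/hadoop/conf/"
            # Service name depends on your Hadoop packaging.
            ssh "root@${node}" "service hadoop-datanode restart"
        done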

    Read the article

  • WSUS Updates - Best Practice

    - by What'sTheStoryWishBone
    We have an isolated environment of a few hundred servers to which we use WSUS to push updates. We have thousands of updates to manage and push to devices, testing along the way to ensure an update will not break anything. What are the best practices that you all follow in your enterprise networks to ensure an update that will break something does not go out to all the machines? We currently have ours broken into customized groups for each type of machine, and there is one "Test Group" which has one PC of each type to which we apply updates for error checking. Is this a similar procedure to what others follow, or is there an easier, safer way to manage the thousands of WSUS updates?

    Read the article

  • Images as links bad for SEO?

    - by Karl
    I am just about finished with my website, and as I was reading over Google's SEO information, they mentioned that images used as links without any text are bad. What if it is simply a link to enlarge the picture, such as the following?

        <a href="images/image1.jpg"><img src="images/image1.jpg" height="200" width="100"></a>

    Any help on this would be great! Thanks, -K

    Read the article
