Search Results

Search found 21310 results on 853 pages for 'multiple domains'.

  • Formula to calculate probability of unrecoverable read error during RAID rebuild

    - by OlafM
    I need to compare the reliability of different RAID systems using either consumer or enterprise drives. The formula for the probability of a rebuild failing, ignoring mechanical problems, is simple:

        error_probability = 1 - (1 - per_bit_error_rate)^bits_read

    With 3 TB drives I get:

      - 38% probability of experiencing a URE (unrecoverable read error) for a 2+1-disk RAID5 (4.7% for enterprise drives)
      - 21% for a RAID1 (2.4% for enterprise drives)
      - 51% probability of error during recovery for the 3+1 RAID5 often used in SOHO products like Synology's

    Most people don't know about this. Calculating the error for single-disk tolerance is easy; my question concerns systems tolerant to multiple disk failures (RAID6/Z2, RAIDZ3, and RAID1 with multiple disks). If only the first disk is used for the rebuild and the second is read again from the beginning in case of a URE, then the error probability is the one calculated above, squared (14.5% for consumer RAID5 2+1, 4.5% for consumer RAID1 1+2). However, I suppose (at least in ZFS, which has full checksums!) that the second parity/available disk is read only where needed, meaning that only a few sectors are involved: how many UREs can possibly happen on the first disk? Not many, otherwise the error probability for single-disk-tolerance systems would skyrocket even more than I calculated. If I'm correct, a second parity disk would in practice lower the risk to extremely low values. Am I correct?
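
    As a sanity check on those figures, here is a minimal sketch (the rates are assumptions from typical datasheets: one URE per 1e14 bits read for consumer drives, per 1e15 for enterprise; a 2+1 RAID5 rebuild reads two surviving 3 TB disks). It uses 1 - (1-p)^n ≈ 1 - exp(-np), which holds for tiny p and avoids floating-point trouble:

        awk 'BEGIN {
          n = 2 * 3e12 * 8     # bits read rebuilding a 2+1 RAID5: two 3 TB disks
          printf "consumer: %.1f%%  enterprise: %.1f%%\n",
                 100 * (1 - exp(-n * 1e-14)),
                 100 * (1 - exp(-n * 1e-15))
        }'
        # -> consumer: 38.1%  enterprise: 4.7%

    The 21% RAID1 and 51% 3+1 RAID5 numbers above follow by setting n to one and three disks' worth of bits, respectively.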

  • Separate php.ini file for each Apache virtual host?

    - by Calvin L
    Is it possible to have a separate php.ini file that overrides the default php.ini for each virtual host? I'm running Apache/2.2.14 with PHP 5.3.2-1. For example, I have several vhosts pointing to domains in my /var/www/ directory:

        /var/www/website1.com
        /var/www/website2.com

    What I'd like is to be able to place a custom php.ini file in each directory that would override the default values only for that vhost, but keep the original defaults for any value not specified:

        /var/www/website1.com/htdocs/
        /var/www/website1.com/php.ini

    EDIT: I found more info on the topic here for those interested: http://serverfault.com/questions/34078/how-do-i-set-up-per-site-php-ini-files-on-a-lamp-server-using-namevirtualhosts
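
    For what it's worth: with PHP running as an Apache module (which the version string suggests), a whole php.ini per vhost isn't possible, but individual directives can be overridden per <VirtualHost>. A minimal sketch, with hostname and values as placeholders:

        <VirtualHost *:80>
            ServerName website1.com
            DocumentRoot /var/www/website1.com/htdocs
            # php_admin_value: locked down, cannot be re-overridden in .htaccess
            php_admin_value open_basedir /var/www/website1.com
            # php_value: plain override; unlisted settings keep the global defaults
            php_value upload_max_filesize 16M
        </VirtualHost>

    True per-directory php.ini files only come into play when PHP runs as CGI/FastCGI instead of as a module.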

  • Block access to certain websites from other computers on shared internet connection

    - by Dheeraj V.S.
    I have very little knowledge about networking. I have an internet connection on my PC (running Windows XP), and I've enabled internet connection sharing to the other computers on my network, which are connected through a Wi-Fi router. I want to block the use of GMail (and, in general, specific domains) from the other computers, but I should still be able to access them from my own machine. I can't edit my hosts file, because then my machine would lose access too. I guess I need to configure my wireless router for that, but I have no idea how to do so. My wireless router is a Zyxel P-660HW-T1 v2. I'd appreciate any pointers.

  • How can I mitigate DNS Server outages?

    - by Eric Belair
    Let's say I have a root domain of "mysite.com". That domain and its sub-domains have DNS served by an external service - let's call them Setwork Nolutions. If this external company is hit with a DDoS attack, my internally-hosted websites under this domain are no longer reachable at "mysite.com" or "*.mysite.com", even though the websites themselves are fully up and operational. How can I mitigate such a problem so as to keep end users happy? The only solution others at my company have come up with is to create a second domain - i.e. "mysite2.com" - host its DNS at another company, and then tell all end users that this is the website they should use. I think this is ridiculous and just leads to a bunch of other problems. I'd like to find a solution where we can point to the same website with the same URL without the original DNS host being operational. Any thoughts?
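
    A hedged sketch of the usual fix - secondary DNS at an independent provider - is nothing more than extra NS records in the registrar delegation (provider names here are hypothetical):

        mysite.com.    IN NS    ns1.provider-a.net.
        mysite.com.    IN NS    ns2.provider-a.net.
        mysite.com.    IN NS    ns1.provider-b.net.
        mysite.com.    IN NS    ns2.provider-b.net.

    Resolvers automatically retry across all delegated nameservers, so as long as one provider stays up and the zone data is kept in sync between them (a standard AXFR zone transfer does this), "mysite.com" keeps resolving while the other provider is under attack.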

  • Cannot log in to iSCSI target - hangs after sending login details

    - by Frank
    I have an iSCSI target volume to which I am trying to connect from a CentOS Linux server. Everything works fine, but it gets stuck at login. Here are the steps I am performing:

        [root@neon ~]# iscsiadm -m node -l
        iscsiadm: could not read session targetname: 5
        iscsiadm: could not find session info for session20
        iscsiadm: could not read session targetname: 5
        iscsiadm: could not find session info for session21
        iscsiadm: could not read session targetname: 5
        iscsiadm: could not find session info for session22
        iscsiadm: could not read session targetname: 5
        iscsiadm: could not find session info for session23
        iscsiadm: could not read session targetname: 5
        iscsiadm: could not find session info for session30
        iscsiadm: could not read session targetname: 5
        iscsiadm: could not find session info for session31
        iscsiadm: could not read session targetname: 5
        iscsiadm: could not find session info for session78
        iscsiadm: could not read session targetname: 5
        iscsiadm: could not find session info for session79
        iscsiadm: could not read session targetname: 5
        iscsiadm: could not find session info for session80
        iscsiadm: could not read session targetname: 5
        iscsiadm: could not find session info for session81
        Logging in to [iface: eql.eth2, target: iqn.2001-05.com.equallogic:0-8a0906-ab4764e0b-55ed2ef5cf350a66-neon105, portal: 10.10.1.1,3260] (multiple)

    After this step it gets stuck, waits for some time, and then gives this output:

        Logging in to [iface: iface1, target: iqn.2001-05.com.equallogic:0-8a0906-ab4764e0b-55ed2ef5cf350a66-neon105, portal: 10.10.1.1,3260] (multiple)
        iscsiadm: Could not login to [iface: eql.eth2, target: iqn.2001-05.com.equallogic:0-8a0906-ab4764e0b-55ed2ef5cf350a66-neon105, portal: 10.10.1.1,3260].

    My iscsi.conf is this:

        node.startup = automatic
        node.session.timeo.replacement_timeout = 15            # default 120; RedHat recommended
        node.conn[0].timeo.login_timeout = 15
        node.conn[0].timeo.logout_timeout = 15
        node.conn[0].timeo.noop_out_interval = 5
        node.conn[0].timeo.noop_out_timeout = 5
        node.session.err_timeo.abort_timeout = 15
        node.session.err_timeo.lu_reset_timeout = 20
        node.session.initial_login_retry_max = 8               # default 8; Dell recommended
        node.session.cmds_max = 1024                           # default 128; Equallogic recommended
        node.session.queue_depth = 32                          # default 32; Equallogic recommended
        node.session.iscsi.InitialR2T = No
        node.session.iscsi.ImmediateData = Yes
        node.session.iscsi.FirstBurstLength = 262144
        node.session.iscsi.MaxBurstLength = 16776192
        node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144
        discovery.sendtargets.iscsi.MaxRecvDataSegmentLength = 32768
        node.conn[0].iscsi.HeaderDigest = None
        node.session.iscsi.FastAbort = Yes

    Also, in access control, I have given full access to any IP and any CHAP user, with a fixed iSCSI initiator name. With the same access level, all other volumes on the rest of the servers are working - except this one.

  • Debian: Should I add vlan interface into bridge for KVM?

    - by javano
    I am setting up a Debian Squeeze box as a KVM host. I want to add multiple interfaces to each KVM guest, and I want them to be on different VLANs. After reading about this, I believe the best method is to add multiple logical VLAN (sub-)interfaces to the physical NICs, then create a bridge for each VLAN interface, and assign each bridge as a NIC for the KVM guests. Does this make good sense, or madness? Do I have to use bridged interfaces with KVM like this? Can't I just add eth1.xx and eth1.yy to my interfaces config below and then configure those directly as bridged KVM guest NICs? If so, how should this look in the interfaces config file below?

        user@host:~$ cat /etc/network/interfaces
        # This file describes the network interfaces available on your system
        # and how to activate them. For more information, see interfaces(5).

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # Management Interface
        auto eth0
        iface eth0 inet static
            address 172.22.0.31
            netmask 255.255.255.0
            gateway 172.22.0.1

        # Interface for guest VMs
        auto eth1

        # Guest1 : Use VLAN 117
        auto eth1.117
        iface eth1.117 inet manual

        # Set up br1 for guest 1, bridging with vlan 117
        auto br1.117
        iface br1.117 inet manual
            bridge_ports eth1.117
            bridge_stp off

        user@host:~$ uname -a
        Linux hostname 3.4.9 #1 SMP Wed Aug 22 19:08:46 BST 2012 x86_64 GNU/Linux

    UPDATE: I would really appreciate it if someone could clarify the config for me, as I have also seen the above written with the following syntax, and I don't see why one would be preferred over the other:

        # Interface for guest VMs
        auto eth1
        allow-hotplug eth1
        iface eth1 inet static

        # Vlan 117 for guest 1
        auto vlan117
        iface vlan117 inet static
            vlan_raw_device eth1

        # Guest 1 : NIC 1
        auto br1.117
        iface br1.117 inet manual
            bridge_ports vlan117
            bridge_stp off
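
    For reference, the conventional pattern is one bridge per VLAN, with each guest attached to the bridge rather than to the VLAN sub-interface itself. A minimal sketch (assuming the vlan package is installed, so eth1.117 is created automatically; bridge_fd 0 just shortens bridge startup):

        auto eth1.117
        iface eth1.117 inet manual

        auto br117
        iface br117 inet manual
            bridge_ports eth1.117
            bridge_stp off
            bridge_fd 0

    KVM/libvirt guests normally attach through a bridge (or macvtap), which is why the extra bridge layer exists at all: a guest NIC can't be pointed at a bare eth1.117.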

  • Can't communicate with Primary DNS Server

    - by horsley
    A computer running Windows 7 suddenly can't access any website by domain name. What I've observed:

      - The fault persists whether the computer uses a wired link or connects to the WLAN.
      - IP and DNS settings are obtained automatically and seem normal (ipconfig /all returns the correct info).
      - I can visit websites through an HTTP proxy.
      - The DNS server is available; the other computers in my room work properly.
      - I can ping myself, the gateway, and any other IP, but not domain names.
      - I can use nslookup and obtain the correct IP info.
      - There are error entries in the event log from the DNS Client service saying the client cannot verify that the DNS server is available.
      - Windows network diagnosis says Windows can't communicate with the device or resource (Primary DNS Server).

    I guess the DNS client is to blame. I tried the following, but the fault persists:

      - Reinstalled the network adapter driver
      - Reset TCP/IP (netsh int ip reset)
      - Reset Winsock (netsh winsock reset)
      - Reset the LSP

    I don't want to reinstall the whole OS. What should I do?

  • Understanding Exchange User Monitor (ExMon) Output

    - by SturdyErde
    I recently downloaded and ran ExMon while trying to troubleshoot Outlook connectivity problems caused by high CPU usage on Exchange Server 2010 SP2 UR8. The tool provides a great set of data, but I have not yet figured out how to make good use of it.

    My first question is why the Exchange server itself shows up as a high-use MAPI client in the ExMon data. Among the users' client versions I see build numbers listed for Outlook 2013, 2010, and yes, even 2007 clients. I also see build number 14.2.387.0, which represents Exchange Server 2010 SP2 Update Rollup 8 (give or take some other patch that makes it not quite match the UR8 number). There are many user rows that list only "::1" and/or the short hostname of my Exchange server in the 'Client IP Addresses' column. Some other columns include the end user's actual IP address and the Exchange server's IP address. ExMon shows that it is actually the Exchange server that accounts for the highest percentage of the CPU used for MAPI calls.

    I had expected to see one IP address and one version number for each user reported by ExMon. Instead, most records show multiple version numbers (the Exchange version and the Outlook version) and multiple IPs (the Exchange IP and the client IP). Can anyone explain the reason for this to me, please?

  • Hosting provider that allows you to host your own VM image?

    - by Timo Geusch
    I've already looked at the 'Best Hosted VM Provider' question and checked the recommendations there, but I seem to have slightly odd requirements. Basically, I am looking for a host that allows me to supply my own VM image (FreeBSD, which most of the suggested hosts don't support - they only seem to support various Linuxes) instead of one of their standard images. I'm a long-time BSD user and have had colo BSD servers in the past, so I'm pretty sure that I don't need much in the way of software support; I'd basically like to run my server on managed hardware without having to rent a whole physical server. For the usage I have in mind that would be way over the top, as we're talking a couple of small apps with very few users, a couple of blogs, and (most importantly) email hosting for about 6-10 domains with moderate traffic levels. Oh, and reliability trumps cost to a certain extent.

  • VMWare Workstation Linux Host performance tuning

    - by Hoghweed
    I need to improve my Linux-hosted VMware Workstation setup for running multiple virtual machines at the same time. I feel very stupid: I lost a great blog post link which I found last month (and I'm not able to find it again), so I'm asking here if anyone can help me. This is my host (a laptop):

      - 16 GB DDR3 RAM
      - Hybrid 750 GB 7200 rpm HDD (8 GB SSD cache)
      - Mint 15 x64, kernel 3.9.7
      - swappiness set to 10

    Those are the important things about the host. My need is the ability to run 2 or 3 VMs at the same time, and the performance bottleneck is the disk. Last time, following that blog post I lost, I set up /tmp to be mounted as an in-memory partition, and in my previous installation that worked well; now I'm not able to find a good way to tweak things. I think with 16 GB of RAM there should be no problem running multiple VMs, but when they start to swap or use /tmp things go bad (the guest cursor racing after a freeze, guest freezes, and so on). Can anyone help me find a good host tweak and configuration to get better performance? Thanks in advance.
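
    If the tweak in question was a RAM-backed /tmp, a minimal sketch is a tmpfs line in /etc/fstab (the 4G cap is an assumption - pick a size that leaves room for the running VMs):

        tmpfs  /tmp  tmpfs  defaults,mode=1777,size=4G  0  0

    Another commonly cited Workstation-on-Linux tweak - again an assumption about what that blog post described - is adding mainMem.useNamedFile = "FALSE" to each guest's .vmx file, so guest memory isn't backed by a file on the very disk that is already the bottleneck.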

  • Representing server state with a metric

    - by Sal
    I'm using Microsoft's Performance Monitor to dump logs of RAM, CPU, network, and disk usage from multiple servers. I'd like to get a single metric that captures the state of a given variable reasonably well. For instance, disk usage is pretty stable, so if I take a single reading that says I have 50% remaining disk space, that reading is an accurate measure for the day. (The servers aren't doing heavy IO writing.)

    However, the tricky part is monitoring CPU and network usage. The logs currently dump the % CPU usage every ten seconds. If I take a straight average of the numbers, it may not represent reality, as % CPU will be much lower during the night than during the day. (We host websites that sell appliance items.) I'd like to get an average over a span during peak hours (about 5 hours in the day) and present a daily peak-hour metric. Of course, some readings will most likely come in overly spiked (if multiple users hit the server at once) or useless (a momentary idle state). Is there a standard distribution/test that industries use in this situation?
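
    For the peak-window average itself, a minimal sketch over a PerfMon log relogged to CSV (the column layout and the 10:00-15:00 window are assumptions; adjust both to your data):

        # col 1 = "MM/DD/YYYY HH:MM:SS" timestamp, col 2 = % CPU, one sample per row
        awk -F',' '$1 ~ / 1[0-4]:/ { gsub(/"/, "", $2); sum += $2; n++ }
                   END { if (n) printf "peak-window mean: %.1f%%\n", sum / n }' cpu.csv

    On the "standard metric" question, the nearest industry convention is the 95th percentile (ubiquitous in network bandwidth billing): sort the samples and discard the top 5%, which tames exactly the momentary spikes described above.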

  • Mails bounce because of invalid character ('@') in username

    - by user1598585
    I have a working exim setup with virtual users, and it works fine except when I send email to certain servers. These servers reject my emails with #5.1.3 Invalid character ('@') in username. The offending header parts seem to be:

        Return-path: <"[email protected]"@smtp.example.com>

    and

        ...(envelope-from <"[email protected]"@smtp.example.com>)...

    The problem is that I cannot find where and why the usernames are being generated like this. My router for submission is:

        dnslookup:
          driver = dnslookup
          domains = ! +local_domains
          transport = remote_smtp
          ignore_target_hosts = 0.0.0.0 : 127.0.0.0/8
          no_more

    And the respective transport:

        remote_smtp:
          driver = smtp

    What can be producing this problem?

  • How would I setup iMail to forward a user's mail to another service w/o leaving a copy locally?

    - by Scott Mayfield
    I have an iMail 2006 server installation with a particular user that has several aliases all pointing to a single account (mine, for the record). I've been copying all of my mail to GMail and reading it there, but it annoys me that I have to go back weekly, log into my mail account on iMail, and delete between six and ten thousand copies of messages I've already received in order to keep my mailbox from filling up (yes, I have it set with no quota, but I consider it bad form to just let the box grow indefinitely).

    I've got the copying set up via an inbound user rule, but I'm wondering how to accomplish a "copy and delete" rule. The manual isn't clear on what happens with multiple matching rules (will they be processed in order, or is it a first-match situation?), and there isn't a way to combine multiple actions into a single rule. If I use the "forward" action, I THINK it's going to mangle the sender information once the mail reaches my GMail account and show everything as coming from me instead of from the original senders (can anyone confirm that this is accurate?).

    An easy answer would be to delete my user account entirely and replace it with an alias that maps to my GMail account, but then I would lose the ability to log into the system for admin duties. That leads me to creating a second, lesser-known account for admin use, but since it's a real account, sooner or later I'm going to get mail sent to it, and I'll be back to the same situation of a user account that doesn't get emptied periodically. I imagine I could set the quota to 0 MB to cause all incoming mail to my admin account to bounce, or set up an inbound rule to bounce everything, but this is starting to sound kludgy to me.

    Does anyone know of a more direct workaround for copying a user's incoming mail to an outside server and then deleting the local copy, without removing the account entirely? Or is this just wishful thinking? Thanks in advance. Scott

  • DNS record question

    - by Just plain me
    So I have two Windows domains in separate forests. One forest consists of what is left of a bought-out company's domain. It has 5 servers that still hold important data and are worked with daily by a large group of employees. We have a forest-level trust set up to ease file access, and we manually created DNS A records for the 5 servers so their short names resolve to the right IP addresses. I need the FQDNs to resolve as well, though. Should I create CNAME records to achieve this? I hope this question makes sense - I am learning DNS on the fly... :)
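
    If you do go the CNAME route, a sketch of what the records could look like (every name and address here is hypothetical):

        ; zone for your own forest
        server1.yourforest.local.    IN A        10.1.2.10
        ; zone you host for the old forest's suffix
        server1.oldcorp.local.       IN CNAME    server1.yourforest.local.

    The other common approach, which avoids hand-maintained records entirely, is a conditional forwarder (or stub zone) that sends all queries for the old forest's DNS suffix to that forest's own DNS servers.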

  • URGENT: Need a company to fix my Plesk & IIS on Windows 2008

    - by DevCompany
    Hello: I need the name of a company that can fix my server install and get my sites running again. The setup: Windows 2008, IIS 7, Plesk 9.x. The problem started when Level 2 of my hosting company adjusted an FTP user's permissions. Apparently he did this outside of Plesk, and it has been nothing but permission headaches all around: connections, uploading files, etc. Now something has gone wrong where Plesk isn't even loading properly, and IIS too. I need to see if someone can fix this remotely before I give the green light to format and reinstall the server, IIS, Plesk, and domains! I'm looking to pay a company to get this working ASAP - this is not a request for free work, so I need someone good who can fix it without the huge hassle of formatting and such. Need to resolve today. Post expires on Monday 04-26-2010.

  • Best way to attach 96 TB to a workstation

    - by user994179
    I'm running a workstation with dual Xeon 5690s (12 physical / 24 logical cores), 192 GB of RAM (i.e., maxed out), Windows 7 64-bit, 5 slots for adapter cards, and 1 TB of internal storage, with 5 more internal bays available. I have an app that creates data files totaling about 88 TB. These are written once every 14 months, and the rest of the time the app only needs to read them; 95% of the reads are sequential reads of huge chunks of data. I have some control over how big the individual files are, but ideally they would be between 5 and 8 TB. The app reads from only one drive at a time, and the nature of the data is such that if (when) a drive dies, I can restore the data to a new disk from tape. While it would be nice to use the fastest drives and controllers available, at this point size matters more than speed. After doing lots of reading, I am leaning toward buying a bunch of cheap 2 TB drives and putting them into a bunch of cheap enclosures. All this is going into my home office, so I need to avoid the raised-floor/refrigerated approach. My questions:

      - Is the cheap drive/enclosure solution the best one for this situation?
      - Given the nature of the app and the way the data is used, does RAID make sense? If so, which one?
      - For huge sequential reads, would USB 3.0 and eSATA be a wash performance-wise?
      - For each slot available on the workstation, can I hook up an enclosure that can hold multiple drives? Or is it one controller per drive?
      - If I can have multiple drives on one controller, am I essentially splitting the bandwidth (throughput)? For example, with a 12-bay enclosure, is the throughput of the controller reduced by a factor of 12?
      - Are there any Windows 7 volume/drive/capacity limits I should be aware of?

    Thanks

  • How to selectively route network traffic through VPN on Mac OS X Leopard?

    - by newtonapple
    I don't want to send all my network traffic down the VPN when I'm connected to my company's network (via VPN) from home. For example, when working from home, I would like to be able to back up all my files to the Time Capsule at home and still be able to reach the company's internal network. I'm using Leopard's built-in VPN client. I've tried unchecking "Send all traffic over VPN connection", but if I do that I lose access to my company's internal websites, whether via curl or the web browser (though internal IPs are still reachable). It would be ideal if I could selectively choose a set of IPs or domains to be routed through the VPN and keep the rest on my own network. Is this achievable with Leopard's built-in VPN client? If you have any software recommendations, I'd like to hear them as well.
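
    For the routing half, a minimal sketch from the command line (10.0.0.0/8 and ppp0 are assumptions - substitute your company's internal ranges and whatever interface the VPN creates):

        sudo route -n add -net 10.0.0.0/8 -interface ppp0

    The symptom of internal websites failing while internal IPs still work points at DNS rather than routing: with "Send all traffic" unchecked, name lookups go to the home resolver, so the company's DNS servers need to be consulted for its internal domains for hostnames to keep working.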

  • email spam filtering in bridging

    - by User4283
    I have a mail server which handles multiple domains. Due to concerns about spam and mail server performance, I've configured another machine that will run in bridging mode, with the mail server behind it. How can I filter spam in the bridging server without running any SMTP service? The scenario:

        Internet +------+ Spamfilter Server (in bridging mode) +---------+ MailServer

    A smarthost will handle outgoing email, so in this scenario I can filter all incoming email. There is also another option involving DNS, which I don't want to use.

  • Google Apps claims my domain is registered but when I try to access it claims it is not

    - by user32953
    Hi all, we registered one of our domains with Google Apps two years ago; however, we never even used it. Now if I try to access:

        http://google.com/a/mydomain.com

    what I see is:

        Server error
        Sorry, you've reached a login page for a domain that isn't using Google Apps. Please check the web address and try again.

    Then I go to the Google Apps Standard Edition signup page and type mydomain.com. However, what I get is:

        This domain has already been registered with Google Apps. Please contact your domain administrator for instructions on using Google Apps with this domain.

    Can anyone explain what causes this inconsistency and what I can do? Since Google Apps Standard Edition doesn't let me contact Google directly, I can't even submit a bug report. Any help would be appreciated.

  • Setting up a router as a DNS server only

    - by Jacob R
    I have a Linksys WRT54GL router that I don't need anymore, since I had to buy a 3G-capable router (Dovado 3GN). As I only have a 3G connection at home, I want to optimize it as much as possible, so I want to set up a caching DNS server, including blacklisting of some ad domains. The router currently runs the DD-WRT firmware. Is it possible to use this router as an ordinary computer running only a DNS server, disabling all other features such as DHCP, WLAN, etc.? When connecting it to my other router, should I simply run a cable into the WAN port of the Linksys router?
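
    DD-WRT's resolver is dnsmasq, so a minimal sketch of the relevant settings (entered under Services -> Additional DNSMasq Options; the ad domain is a placeholder, one address= line per blacklisted domain):

        cache-size=1000                     # enlarge the DNS cache
        address=/doubleclick.net/0.0.0.0    # answer a blacklisted domain with 0.0.0.0

    DHCP and the radio can then be disabled in the web UI, and the Dovado's DHCP pointed at the Linksys LAN IP as the DNS server; for that setup a LAN port, not the WAN port, is the usual place to plug it in.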

  • Delivering from Postfix to Exchange

    - by Van Gale
    I have someone with two domains, a.com and b.com. a.com runs Postfix on the MX host for the domain, and I have total control of that server. b.com runs Exchange on the MX host for the domain, and I have no control over that server. They have been using b.com as their primary mail address and use the Exchange calendar with Outlook. They want all the same functionality but want to start using a.com as the primary mail address. I opened up Postfix to allow relaying from the IP address of the Exchange server, and hopefully that's enough for the outgoing side. On the delivery side, though, what can I do to forward all incoming email to the Exchange server? I have some aliases defined in /etc/aliases that should take higher priority.
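
    One hedged sketch for the delivery side is a Postfix transport map routing the whole domain to Exchange (the Exchange hostname is a placeholder):

        # /etc/postfix/transport
        a.com    smtp:[exchange.b.com]

        # main.cf
        transport_maps = hash:/etc/postfix/transport

    then postmap /etc/postfix/transport and a reload. One caveat on the aliases: /etc/aliases is only expanded by the local delivery agent, which a domain-wide transport entry bypasses, so addresses that must keep their alias behavior are better expressed as virtual_alias_maps entries - those are rewritten before the transport lookup and therefore do take priority.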

  • Is there a way to do something like LVM over NFS?

    - by warren
    I realize that since NFS is not block-level, LVM can't be used directly. However: is there a way to combine multiple NFS exports (from, say, 3 servers) into one mount point on a different server? Specifically, I'd like to be able to do this on RHEL 4 (or on RHEL 5, re-exporting the combined mount to my RHEL 4 server).

    Expansion: the reason I mentioned LVM is that I want a bunch of exported mounts (servera:/mnt/export, serverb:/mnt/export, serverc:/mnt/export, etc.) to all mount at /mnt/space, so that /mnt/space on this server (serverx) looks like one large filesystem. Yes, I know that re-exporting is generally a Bad Thing™, but I thought it might work if there was a way to accomplish this on a newer release as opposed to an older one.

    From reading the unionfs docs, it appears that I can't use it over a remote connection - have I misread them? More to the point, since UnionFS merges the contents of multiple branches but makes them appear as one, it doesn't seem to go in the direction I need: I'm trying to mount a bunch of NFS exports in a merged fashion and then write to them, not caring where the data goes - a la LVM.

  • If I re-key a SSL certificate for a 2nd/backup server, does the original still work?

    - by Matt
    We have a production server with a wildcard SSL certificate. I'm in the process of creating a backup/failover server that will host the same domains, and therefore will also need the SSL certificate. The certificate on the primary server was installed with the private key non-exportable, so I am unable to export the certificate for installation on the failover server. My question then is - if I re-key the certificate from Go Daddy, does the original certificate installed on the primary server cease to be valid? As an aside, the original (primary) server is IIS 6, the failover is IIS 7 (once the failover is operational, we'll likely upgrade the primary).

  • How do I point one virtual host to another instance of apache running at another port on the same box?

    - by sacamano
    Hi there. I've got two apache2 instances running on my box. One came with a Bitnami Redmine stack, whose sole purpose is to host Redmine at host:8080/redmine. The other Apache instance runs with PHP and such, and is where I specify all the vhosts for my domains. Now I'd like to point redmine.somedomain.com at www.somedomain.com:8080/redmine so that Redmine is accessible through a subdomain and on port 80. Redmine is a Ruby on Rails app and runs with Phusion Passenger, so I can't just point the vhost at the htdocs directory of the Redmine install. How is this done? I've tinkered with ProxyPass and ProxyPassReverse but I just can't get it working. All help is greatly appreciated.
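
    A minimal sketch of the reverse-proxy vhost on the port-80 Apache (assuming mod_proxy and mod_proxy_http are enabled; hostnames are taken from the question):

        <VirtualHost *:80>
            ServerName redmine.somedomain.com
            # hand everything on the subdomain to the Bitnami Apache's /redmine
            ProxyPass        / http://localhost:8080/redmine/
            ProxyPassReverse / http://localhost:8080/redmine/
        </VirtualHost>

    One design note: because Rails generates its links under /redmine, mapping / to /redmine/ can still yield broken self-links; proxying /redmine/ to /redmine/ (the same path on both sides) sidesteps that, at the cost of keeping /redmine in the URL.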

  • prevent search engines indexing depending on domain

    - by Javier
    We have a dedicated server at a hosting company with a couple dozen websites on it. It happens that the nameserver IPs (e.g. ns1.domain.com, ns2.domain.com) coincide with some client websites - let's say webclient1.com and webclient2.com. The problem is that for certain searches in Google, some results show up as ns1.domain.com/result instead of webclient1.com/result, which is pretty wrong and annoying for our clients. In fact, if you type ns1.domain.com or ns2.domain.com into a browser, it loads client pages instead. Is there any way to prevent Google from indexing those results when the robots come in via the ns domains? It may not be the right place to ask this, but why is it happening in the first place? Is it the result of a bad server configuration? I'm pretty new to these matters, so thank you in advance for any help!
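
    If you just want crawlers turned away whenever they arrive via the ns hostnames, one hedged sketch is a mod_rewrite rule serving a deny-all robots.txt for those Host headers only (the filename is an assumption):

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^ns[12]\.domain\.com$ [NC]
        RewriteRule ^/?robots\.txt$ /robots-deny.txt [L]

    where /robots-deny.txt contains "User-agent: *" and "Disallow: /". As for why it happens: the web server answers on those IPs for any Host header it doesn't recognize, so the ns names fall through to the default (first) virtual host; giving the nameserver IPs their own dedicated default vhost is the cleaner fix.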
