Search Results

Search found 7821 results on 313 pages for 'high dpi'.

  • What is the recommended GlusterFS configuration for a growing website?

    - by montana
    Hello, I have a website that is trending towards 50 million hits per day on average, and within the next 3 months should be over 100 million hits per day. We are trying to use GlusterFS v3.0.0 (with the latest patches as of 1-17-2010). Currently, we've just upgraded to a load-balanced environment that has 3 physical hosts with 6 XenServer 5.5u1 VMs (2 on each host) to serve webpage traffic. Each machine has 6 RAID-6 local storage drives (7200 RPM SATA); the old machine we came from had 1 mirrored 10k SAS drive. We also set up GlusterFS with 3 bricks, one on each host, serving the 6 VMs as clients. In testing, everything seemed fine. However, when we went to production, it seemed that there just weren't enough I/Os available to serve traffic even upwards of 15 million hits, whereas weeks prior our old server was able to handle traffic, maxed out, at 20 million. Are there any recommended configurations for such an application, or things to be aware of that aren't apparent in the documentation at gluster.org, for a site our size?
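
    A minimal sketch of the kind of volume layout and cache tuning worth testing, assuming a GlusterFS release new enough to have the gluster CLI (3.1+); the host names, brick paths, and values here are hypothetical starting points, not recommendations:

        # hypothetical hosts/paths: one brick per physical host, 3-way replica
        gluster volume create web-vol replica 3 \
            host1:/export/brick1 host2:/export/brick1 host3:/export/brick1
        # give the io-cache and the read thread pool more room for many small files
        gluster volume set web-vol performance.cache-size 512MB
        gluster volume set web-vol performance.io-thread-count 32
        gluster volume start web-vol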

  • Most scalable way of serving a small set of static HTTP content

    - by Ekevoo
    The story: Hi guys. I'm among the people responsible for serving the results of the most anticipated (by number of people participating) annual entrance exam in my state. As such, when our results are published, the interest is overwhelming. In the past we delegated the responsibility of serving the results to the media, but that detracts a little from the officialness of these results. This year we went with a (long overdue) experiment of using lighttpd instead of Apache, as well as other physical network optimizations I wasn't directly involved with. The results were very satisfactory: the server didn't choke even once, nor did we see any of the usual Twitter complaints about unavailability and/or slowness that were previously common. However, because we still delegated the first publication of the results to the media, I'm still not 100% sure we can handle the load of actually publishing the results first. The question: Since these files are only about 14 MB in total, and a truly lightweight Linux distribution isn't that big either, I'm thinking: what if next year we serve everything from a RAM drive? Is there one? Is that useful? Is it worth it for a team that uses Debian almost exclusively? Are there other optimizations I should be focusing on instead?
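
    For what it's worth, a RAM-backed document root doesn't need a special distribution; a minimal sketch using tmpfs (the paths are hypothetical):

        # mount a RAM-backed filesystem and copy the ~14 MB of results into it at boot
        mount -t tmpfs -o size=64m tmpfs /var/www/results
        cp -a /srv/results/. /var/www/results/

    That said, with a 14 MB working set the kernel page cache will keep everything in RAM after the first read anyway, so the bigger wins are probably in connection handling (keep-alive settings, worker limits) rather than in the disk path.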

  • Tools to manage SQL 2008 database mirroring?

    - by lemkepf
    We are going to be moving about 20 databases that live on a single instance of SQL 2000 to a SQL 2008 R2 environment with database mirroring. What I'm looking for is a tool or scripts that will help me manage the conversion and management of those 20 DBs onto this new mirrored environment easily. There are many steps in setting each DB up and I want to automate as much as possible. Edit: Here are the steps I've been doing manually:
    1. Create the same usernames/passwords from the old SQL 2000 server on the new SQL 2008 server, then sync those logins onto the other SQL 2008 server with the same SIDs so that when we do the DB backup and restore they match up.
    2. Take a backup of each SQL 2000 DB and copy it to server A.
    3. Restore the backup on server A.
    4. Back up from server A, copy to server B, and restore there.
    5. Run the mirroring "configure security" wizard.
    6. Start mirroring.
    I'd love to be able to script this out or have a tool that does it for me. Thanks! Paul
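
    A minimal sketch of scripting one database's round trip with sqlcmd; the server names, share paths, and DB name are hypothetical, the login/endpoint setup is omitted, and it assumes the databases are in the FULL recovery model:

        # restore on the principal, seed the mirror WITH NORECOVERY, then join the partners
        sqlcmd -S SQLA -Q "RESTORE DATABASE [AppDb] FROM DISK='\\filesrv\bak\AppDb.bak' WITH RECOVERY"
        sqlcmd -S SQLA -Q "BACKUP DATABASE [AppDb] TO DISK='\\filesrv\bak\AppDb_full.bak'"
        sqlcmd -S SQLA -Q "BACKUP LOG [AppDb] TO DISK='\\filesrv\bak\AppDb.trn'"
        sqlcmd -S SQLB -Q "RESTORE DATABASE [AppDb] FROM DISK='\\filesrv\bak\AppDb_full.bak' WITH NORECOVERY"
        sqlcmd -S SQLB -Q "RESTORE LOG [AppDb] FROM DISK='\\filesrv\bak\AppDb.trn' WITH NORECOVERY"
        sqlcmd -S SQLB -Q "ALTER DATABASE [AppDb] SET PARTNER = 'TCP://sqla.example.com:5022'"
        sqlcmd -S SQLA -Q "ALTER DATABASE [AppDb] SET PARTNER = 'TCP://sqlb.example.com:5022'"

    Wrapped in a loop over the 20 database names, this replaces the wizard's last step; Microsoft's sp_help_revlogin script is the usual way to carry the logins across with their SIDs intact.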

  • GlusterFS as elastic file storage?

    - by Christopher Vanderlinden
    Is there any way to run GlusterFS in a replicated mode, but with the ability to dynamically scale the volume up and down? Say you have 3 servers all running glusterd; your Gluster volume would have to be set up with replica 3: gluster volume create test-volume replica 3 192.168.0.150:/test-volume 192.168.0.151:/test-volume 192.168.0.152:/test-volume. You would then mount it as, say, /mnt/gfs_test. What happens when I want to add 2 more servers to the storage pool and then also use them in this volume? Is there any easy way to expand the volume AND increase that replica count to 5? My end goal is to run this on EC2 instances: say, 3 Apache front ends with the webroot set up on the gluster volume mount. My concern is that if I ever need to spin up a server, I would want it to be not only an additional Apache front end but also another server in the gluster file system, adding to fault tolerance as well as possibly giving a slight boost in read speed. Maybe there are better options that would fit the bill here? Thanks.
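
    Newer GlusterFS releases (3.3 and later, if I recall correctly) let you raise the replica count in the same add-brick operation; a minimal sketch with hypothetical addresses:

        # join the new servers to the pool, then add their bricks and bump replica 3 -> 5
        gluster peer probe 192.168.0.153
        gluster peer probe 192.168.0.154
        gluster volume add-brick test-volume replica 5 \
            192.168.0.153:/test-volume 192.168.0.154:/test-volume

    Keep in mind that replica 5 means every write goes to five bricks, so write latency from the Apache front ends grows with the pool even as read fan-out improves.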

  • BGP router recommendations for simple redundancy [closed]

    - by Jona
    We have two sites that each have an internet connection, with a dedicated dark fibre between them. Each site has its own IP space and we have an AS number. We're looking to be resilient to failure of the internet connection at either site, and so need to buy a pair of appropriate routers. Requirements are:
    - Able to run 2 BGP sessions (one with the ISP, one with the other site's router)
    - Option to take a full table from the upstream ISPs would be nice
    - Able to provide HA gateways on the LAN side (e.g. 192.168.0.254 will automatically migrate if its host router loses power)
    - A dedicated device rather than a server running Linux / BSD
    - Not crazy expensive
    Any help / advice much appreciated; a sketch of the session layout is below.
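
    Vendor syntax differs, but the shape of the config is the same everywhere; an IOS-style sketch with hypothetical ASNs and addresses, just to illustrate the two BGP sessions plus a floating LAN gateway (HSRP here; VRRP is the vendor-neutral equivalent):

        router bgp 64512
         ! eBGP to the upstream ISP
         neighbor 203.0.113.1 remote-as 64700
         ! iBGP to the other site's router across the dark fibre
         neighbor 10.255.0.2 remote-as 64512
         network 198.51.100.0 mask 255.255.255.0
        !
        interface Vlan10
         ip address 192.168.0.252 255.255.255.0
         ! 192.168.0.254 floats to whichever router is alive
         standby 1 ip 192.168.0.254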

  • Should an HA failover occur in this scenario?

    - by joeqwerty
    I'm running vSphere 5 in an HA cluster across two hosts (vsphereA and vsphereB). I have the HA cluster configured for host monitoring and datastore heartbeat monitoring, with admission control disabled (hopefully I correctly understand that datastore heartbeating prevents inadvertent and unwanted HA failovers due to management network isolation). Each host has a single connection to a dedicated iSCSI network and iSCSI target (no MPIO). All VMDKs for all VMs exist on the iSCSI datastore. As a test of HA I disconnected the iSCSI connection on vsphereB and was surprised to see that the running VMs on vsphereB continued to run there. The powered-off VMs were showing as inaccessible (which I expected, given that they weren't running and the connection from vsphereB to the iSCSI target was severed), but the running VMs continued to run and continued to be "owned" by vsphereB. I expected an HA failover to occur for those VMs and expected to see them "owned" by vsphereA afterwards (which didn't happen). I'm at a loss to understand why an HA failover didn't occur for those VMs. Am I misunderstanding in which cases an HA failover should occur?

  • MySQL HA and Magento DB

    - by Raj
    Is it possible to use MySQL Cluster for the Magento DB? I have a web app developed on the Magento e-commerce platform and I want to make the DB highly available using MySQL Cluster. Magento supports only the InnoDB database engine, while MySQL Cluster uses its own engine, NDB. As for Percona XtraDB Cluster: does it change the InnoDB storage engine to XtraDB? And can I roll back to MySQL native replication from Percona XtraDB Cluster?
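
    One check worth running before attempting either path; a sketch that assumes the Magento schema is named magento. Note that XtraDB is a backwards-compatible fork of InnoDB, so tables need no conversion for Percona, whereas NDB would require an engine change per table:

        # list any tables that are not already InnoDB
        mysql -e "SELECT table_name, engine FROM information_schema.tables
                  WHERE table_schema = 'magento' AND engine <> 'InnoDB';"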

  • HAProxy service error

    - by user128296
    I want to configure HAProxy for outgoing mail load balancing. My configuration file /etc/haproxy.cfg is:

        global
            maxconn 4096   # Total max connections. This is dependent on ulimit
            daemon
            nbproc 4       # Number of processing cores. Dual dual-core Opteron is 4 cores, for example.

        defaults
            mode tcp

        listen smtp_proxy 199.83.95.71:25
            mode tcp
            option tcplog
            balance roundrobin   # Load balancing algorithm
            ## Define your servers to balance
            server r23.lbsmtp.org 74.117.x.x:25 weight 1 maxconn 512 check
            server r15.lbsmtp.org 199.71.x.x:25 weight 1 maxconn 512 check

    And when I start the haproxy service I get this error:

        Starting HAproxy: [ALERT] 244/172148 (7354) : cannot bind socket for proxy smtp_proxy. Aborting.

    Please tell me where I'm making a mistake. Help will be appreciated.
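
    That alert almost always means the address in the listen line isn't configured on the machine (or something else is already bound to port 25). A minimal check, with the floating-IP workaround hedged as an assumption about your setup:

        # is 199.83.95.71 actually on an interface, and is port 25 free?
        ip addr show | grep 199.83.95.71
        netstat -ltnp | grep ':25 '
        # only if the address is a floating IP currently held by another box:
        sysctl -w net.ipv4.ip_nonlocal_bind=1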

  • Good Enough Failover Strategy for DNS / MySQL / Email

    - by IMB
    I've asked and read a lot of questions regarding DNS failover, but the more I read the more complicated it becomes; some people say it's good enough, some say it isn't, and there are no clear answers in what I've read. I was wondering if we can set it straight once and for all, at least for the requirements of most websites out there. Right now let's assume the following: we don't really need load-balancing; what we need is a failover solution. We are running a website based on LAMP on a VPS, and we need to make sure that the web server, MySQL, and email are always accessible, if not 99% of the time. Basically here's my idea and my questions about it:
    Web server: We need at least one failover server (another VPS in a separate data center). Is DNS failover via round robin good, and if not, what's the best approach? How exactly do you implement it? And how do you make sure the files you upload/delete on server A are also on server B?
    MySQL: I've only read a brief intro to MySQL replication, and I assume I can replicate server A to server B and vice versa on the fly, right? So in case server A fails and server B is now running, it will continue to work and replicate back to server A when it becomes available. In essence server B becomes the primary server, and will later fail back over to server A should a failure happen again.
    Email: If we're going to use DNS failover, relying on webmail or on emails stored on the server is probably not a good idea, right? Since some emails might be on server A while some might be on server B. I assume a basic email forwarder to a third party (like Gmail, for example) is good enough to ensure all emails are kept in one place.
    Here's a basic diagram for a better picture: http://i.stack.imgur.com/KWSIi.png
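
    On the file-sync question, a minimal sketch (hypothetical paths and host; it's a one-way push, so it only fits while server A is the active side):

        # cron entry on server A: mirror the webroot to the standby every minute
        * * * * * rsync -az --delete /var/www/ standby.example.com:/var/www/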

  • Suggestions on providing HA access to an external (fibre) RAID subsystem

    - by user145198
    We are looking at upgrading our storage capacity with an external RAID subsystem that has redundant (2) fibre controllers, each with 4 x 8 Gbps fibre ports. I would like to make access to this storage system highly available via Linux. Ideally I would connect 2 fibre ports from each controller into each Linux server, and then export either NFS or iSCSI via a 10 GbE interface. I have seen plenty of references to DRBD; however, all of those references tend to use block storage that is attached solely to each machine, rather than a shared block storage device, so I am unsure whether DRBD could (or should) be used in this case. Ideas?
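
    With shared (rather than replicated) block storage, the usual shape is an active/passive Pacemaker pair instead of DRBD; a minimal crm sketch with hypothetical names and addresses (the filesystem resource and ordering constraints are omitted):

        # floating IP plus an NFS server resource, grouped so they move together
        crm configure primitive p_vip ocf:heartbeat:IPaddr2 params ip=10.0.0.50 cidr_netmask=24
        crm configure primitive p_nfs ocf:heartbeat:nfsserver params nfs_shared_infodir=/srv/nfs/info
        crm configure group g_nfs p_vip p_nfs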

  • IIS/MSSQL HA on two servers? NLB + Mirroring

    - by Igor K
    We currently have one server doing MSSQL/IIS. We can use NLB with two servers running IIS for HA, and we can use database mirroring with the failover partner in the connection string for HA. Can we use NLB and mirroring together? So if one of the servers died (i.e. power plug removed), everything would continue to work (after the timeout for the mirror to become the principal)?
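
    The two mechanisms don't interact: NLB owns the HTTP side, the mirror pair owns the data side, and the glue is the Failover Partner keyword in the connection string. A sketch with hypothetical names; if sqlA is unreachable, the client library retries against sqlB on its own:

        Server=sqlA;Failover Partner=sqlB;Database=AppDb;Integrated Security=True;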

  • How to do client side NFS failover in Linux?

    - by Doug
    I have a CentOS 6.3 client that needs to access NFS storage. There are two NFS servers that serve up the same content, stored on a SAN with a clustered filesystem. How do I set up CentOS to fail over to the backup NFS server if needed? When I Google, I keep reading that Linux does not support this, but that would be strange, since there is plenty of information out there on how to set up a clustered Linux NFS server farm...
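
    Plain NFS mounts won't transparently fail over mid-I/O, but for read-only content the automounter accepts a replicated server list and picks a reachable server at mount time; a sketch with hypothetical names and paths:

        # /etc/auto.master
        /data   /etc/auto.data

        # /etc/auto.data -- replicated read-only servers for the same export
        content  -ro,soft  nfs1.example.com,nfs2.example.com:/export/content

    For read-write failover, the working pattern is the server-side one those articles describe: the pair of NFS servers exposing a single floating IP, so the client never has to switch.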

  • Can I replace a router and DHCP server without disrupting traffic?

    - by SRobertJames
    I have a device which acts as a router and DHCP server. I'm replacing it and would like to minimize downtime. If I unplug it and plug in a different device with the same IP, will all the PCs with DHCP leases keep on working? (I have DHCP conflict detection on, so it shouldn't reassign a DHCP address already in use.) What if I want to change the IP (new subnet): is there any way to tell all the clients (Windows PCs) to release their DHCP leases and request new ones in a minute? If, before unplugging the old device, I have it release all DHCP leases, will the Windows PCs automatically ask for new DHCP addresses?
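
    On the second point, there's no broadcast "everyone renew now"; the renewal is per-client, though the commands are simple enough to push with a login script or psexec. Sketch:

        rem release the current lease, then immediately request a fresh one
        ipconfig /release
        ipconfig /renew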

  • 250k connections for comet with node.js

    - by Nenad
    How do we implement node.js so it can handle 250k connections as a comet server (on the client side we use socket.io)? Would using nginx as a proxy/load balancer be the right solution, or would HAProxy be the better way? Does anyone have real-world experience with 100k+ connections and can share their setup? Would a setup like this be the right one (a quad-core CPU per server, starting 4 instances of node.js per server)?

                  nginx (as proxy / load balancing server)
                 /                 |                  \
        node server #1      node server #2      node server #3
         4 instances         4 instances         4 instances
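
    A minimal sketch of the fan-out tier (ports and instance counts hypothetical); at this connection count the file-descriptor limits matter at least as much as the choice of proxy:

        worker_processes  4;
        worker_rlimit_nofile  100000;
        events { worker_connections  65536; }
        http {
            upstream comet_pool {
                server 127.0.0.1:8001;
                server 127.0.0.1:8002;
                server 127.0.0.1:8003;
                server 127.0.0.1:8004;
            }
            server {
                listen 80;
                location / {
                    proxy_pass http://comet_pool;
                    proxy_buffering off;   # don't buffer long-held comet responses
                }
            }
        }

    Remember to raise the OS limits as well (ulimit -n, and the ephemeral port range on the proxy: 250k upstream connections is more than the ~64k ports one local IP can offer to a single backend address).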

  • Hard disk permissions after Boot Camp?

    - by Sladiki
    Hi all, I have a question concerning hard disk permissions after using Boot Camp. I have a MacBook Pro 17" (i7, 500 GB); yesterday I installed Windows 7 Ultimate on 80 GB (Boot Camp), NTFS of course. I was checking my HD permissions since I found that startup is slow on the Mac side, and I found there are a lot of changes in permissions. Is that normal, or should I repair all these permission problems? I should also mention that from the Windows side I can see my Mac drive, which I don't want... Any ideas? Regards, Sami
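
    If you do decide to repair them, it's a one-liner from Terminal on the OS X side (this applies to the older OS X releases that still ship permission repair); a sketch for the boot volume:

        # report the permission differences, then repair them on the boot volume
        diskutil verifyPermissions /
        diskutil repairPermissions /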

  • How can HAProxy improve availability, or "how can I prevent my site from going down"? [closed]

    - by Joe Hopfgartner
    I am aware of what HAProxy does, but what if my HAProxy box goes down? Or what if my DNS servers go down? Yes, DNS is less of a problem. But DNS only resolves to an IP, and an IP is announced via BGP to be routed over some router; what if that router goes down? Of course, if I have complicated application servers that are likely to fail, HAProxy can significantly improve uptime. But my application isn't complicated; in fact, it may very well just be delivering a small static HTML file via HTTP. Basically, if any user anywhere types in MYDOMAIN.COM, I want the user to get SOMETHING on the screen other than a timeout or DNS resolution error. How can I do that? The point of entry is the difficult part, the so-called "initial closure mechanism".
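
    The usual first layer is to make the HAProxy address itself float between two boxes with VRRP, so losing one proxy doesn't lose the IP; a minimal keepalived sketch (interface and address hypothetical). The layers above it (the router, the whole site) need BGP anycast or multi-site DNS, which is where this stops being a single-config answer:

        vrrp_instance VI_1 {
            state MASTER              # the peer box runs state BACKUP with a lower priority
            interface eth0
            virtual_router_id 51
            priority 101
            virtual_ipaddress {
                203.0.113.10          # the address your DNS record points at
            }
        }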

  • Managed LAMP platform for maximizing availability and global reach, not scalability

    - by user66819
    Assume a Linux/Apache/MySQL/PHP application for a small base of registered users. With a small userbase there are no traffic peaks, so the scalability that cloud platforms offer is not imperative. But the system is mission-critical, so availability is the primary goal. Users are also distributed across Asia, Europe, and the US, so multiple server locations that minimize users' network hops would be highly desirable. The dream: a managed VPS platform where we would configure a single server (uploading PHP and other files, manipulating the database, etc.), and the platform would automatically mirror the server in a handful of key places around the world (say one on each US coast, one in Europe, one in East Asia). File system synchronization and MySQL replication would happen automatically. The core operating system is managed, so we don't need to do full system administration and security, and low-level backups are also done by the service provider (though we do our own backups as well). Couple this with some sort of DNS geo-detection, so users are routed to the nearest operational server... with support for HTTPS, of course. Does such a dream exist? If not, what are some approaches to accomplish the same end with minimal time investment and minimal monthly hosting costs?

  • How do I activate the F_LINE input in a transplanted HP chassis?

    - by admin
    I have an HP Pavilion Media Center PC chassis, vintage 2003 or so, and I replaced the motherboard in it with a newer (vintage 2009) HP motherboard, an M2N68-LA (Narra 5). I have scoured the internet trying to find pinouts for the motherboard, to no avail. My question concerns the front panel audio, specifically Line In. The old chassis was built for AC97, but the new mobo is built for the newer HD Audio standard. I figured out, by comparison and experimentation, how to connect the mic and headphone jacks to the HD audio header of the mobo by adding a manual switch to set the SENSE lines; now everything works fine for mic and headphone. The old chassis also has a front panel Line In jack that the newer HP chassis does not have, and the new mobo has a 4-pin white connector labeled F_LINE that I believe is a line input. Under Windows 7 I see the two line inputs in the mixer, but I can't get one of them to become active. The 4-pin F_LINE connector uses the two middle pins for ground, and presumably the other two for left and right audio inputs; there are no pins for sensing on that connector. Can anyone tell me how to use that F_LINE input for the front panel, or how to activate it?

  • Erratic response times with Apache 2.0.52 on Red Hat 4

    - by Kevin
    Under load, we've noticed response times from Apache vary greatly for the same 7 KB image: anywhere from 0.01 seconds to 25 seconds or more. Unfortunately, due to corporate policy constraints, we are pretty much stuck on Apache 2.0.52. I'm at best an Apache novice, so I'm in over my head with this problem. My focus recently has turned to our choice of MPM module. We use the worker model on a dual-core hyper-threaded blade. It doesn't appear that swapping is an issue, and I don't see any signs of a hardware problem. I've read that worker is optimal on hardware with many CPUs, while prefork is more suitable for our specific hardware profile. I can see conceptually how choosing the wrong MPM could result in this erratic behavior, but I'm not confident that it's the root cause here. Has anyone else seen this type of range in response times for simple static content? What else should I be looking into?
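
    For reference, these are the knobs in question: a sketch of the stock Apache 2.0 worker section (the shipped defaults, not a recommendation) to compare against your httpd.conf. If MaxClients is being reached, new requests queue in the listen backlog, which looks exactly like random multi-second latency on a tiny static file:

        <IfModule worker.c>
        StartServers         2
        MaxClients         150
        MinSpareThreads     25
        MaxSpareThreads     75
        ThreadsPerChild     25
        MaxRequestsPerChild  0
        </IfModule>

    mod_status (the server-status page) will show whether all worker slots are busy at the moment the slow responses happen.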

  • Looking for a solid redirection infrastructure

    - by isoman
    We have critical servers (web servers and databases) that are fully replicated, except for the reverse proxy that we use to hide the internal stuff. This proxy acts like a router that filters and redirects traffic to the main server, and switches to the failover if the main one is down. We want to find an alternative to this proxy because a single entry point is not enough. Is there any company with a solid, redundant infrastructure that offers redirection to an IP and allows quick switching to another one?

  • 1080p HDTV + what is the minimum-spec PC required to stream HD movie files to it?

    - by rutherford
    I want to stream hi-def (non-Flash-based) movies from my future minimum-spec PC to my network-ready HDTV. What I want to know is: a) when streaming from a computer (over the local Wi-Fi network), are the computer's CPU/video/RAM resources used to the same extent as they would be if playing back on the computer's local screen? If not, what are the differences? b) For streaming HD content, what is the minimum spec processor I should go for if i) only one TV is acting as a client, or ii) two TVs are simultaneous clients?

  • How to choose a NoSQL database engine?

    - by Poma
    We have a database with the following specs:
    - 30k records, 7 MB in size
    - 20 inserts/second
    - 1000 updates/second
    - 1000 range selects/second, by secondary index, approx 10 rows each
    - needs at least one secondary index
    - needs some mechanism to expire keys if they are not updated for 75 secs (this can be done via a programmatic garbage collector, but it would require an additional 'last_update' index and add some load)
    - consistency is not required
    - durability is not required
    - the DB should be stored in memory
    For now we use Redis, but it does not have a secondary index, and scanning keys with KEYS index:foo:* is too slow. Membase also does not have a secondary index (as far as I know). MongoDB and the MySQL MEMORY engine have table-level locks. What engine will fit our use case?
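
    Before switching engines, note that sorted sets can stand in for both the secondary index and the expiry scan in Redis, avoiding the slow KEYS pattern; a sketch with hypothetical key names and epoch timestamps:

        # maintain a secondary index and a last-update index as sorted sets
        redis-cli ZADD idx:price 42.5 item:1001
        redis-cli ZADD idx:last_update 1339500000 item:1001
        # range select ~10 rows by the secondary index
        redis-cli ZRANGEBYSCORE idx:price 40 50 LIMIT 0 10
        # find keys untouched for 75 s (score below now-75), then delete them
        redis-cli ZRANGEBYSCORE idx:last_update -inf 1339499925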

  • What are the ways to build a failover cluster?

    - by light
    I have a task where I need to build a failover cluster in two cases: first with servers on Red Hat Enterprise 5.1, and second with SUSE Linux Enterprise 11 SP1. Both cases have a SAN. I know there are many ways to build a failover cluster, but I can't find out more, so I need the following: What are the ways to build it? (I only know of virtualization.) Any good book or resource to broaden my mind? I'll be glad to hear any suggestion. Thanks!
    EDIT #1: Failover of servers with a business application on them.
    EDIT #2: It would be great to hear a summary of solutions for the SLES servers.
    EDIT #3: So if I understand correctly, in my cases the main ways are to use built-in OS solutions or virtualization. Now I have additional questions: Do the blade manufacturers, for example HP or IBM, provide a solution (without virtualization)? Do I need an additional server to control the "heartbeat" between the main and redundant servers? (Virtualization) For example, I have several physical servers with VMs; do I need an additional server to monitor the availability of the VMs and move them to another physical server if their physical server fails? Sorry for my poor English.
    EDIT #4: Failover of a VM, or of an OS on a physical server. In both cases a SAN will be used; it's not specified, but I think with a filesystem image on it. I'm starting to think my question is poorly framed and I need to rework it.

  • HD video lags behind audio output

    - by Star
    I have an HD video file (1920x1080, H.264, dual audio FLAC):
    file type: MKV
    file size: 1.25 GB
    length: 24 minutes
    The problem is that the video output is not synchronized with the audio output: sometimes it lags too far behind, sometimes it runs too fast. I tried running it in Windows Media Player, Media Player Classic, and a few other players, but the result is the same. Additional info: for device information, I'm on an LG S510 laptop.

  • Why can Indian link builders or SEO companies make so many high-quality links at the same time? [closed]

    - by chiba
    There are a lot of Indian SEO companies and link builders that offer high-quality links; some of them, for example, offer links just from co.uk domains or French sites with high PageRank. I have heard that even SEO companies from other countries outsource link building to India. Do they have special connections for building links? Or do they exchange information with other Indian companies and keep a big database of the sites where they can place links?
