Search Results

Search found 3282 results on 132 pages for 'individual'.


  • ZFS with L2ARC (SSD) slower for random seeks than without L2ARC

    - by Florian Kruse
    I am currently testing ZFS (OpenSolaris 2009.06) on an older fileserver to evaluate its use for our needs. Our current setup is as follows: dual core (2.4 GHz) with 4 GB RAM, 3x SATA controllers with 11 HDDs (250 GB) and one SSD (OCZ Vertex 2 100 GB). We want to evaluate the use of an L2ARC, so the current zpool is:

        $ zpool status
          pool: tank
         state: ONLINE
         scrub: none requested
        config:
                NAME          STATE     READ WRITE CKSUM
                afstank       ONLINE       0     0     0
                  raidz1      ONLINE       0     0     0
                    c11t0d0   ONLINE       0     0     0
                    c11t1d0   ONLINE       0     0     0
                    c11t2d0   ONLINE       0     0     0
                    c11t3d0   ONLINE       0     0     0
                  raidz1      ONLINE       0     0     0
                    c13t0d0   ONLINE       0     0     0
                    c13t1d0   ONLINE       0     0     0
                    c13t2d0   ONLINE       0     0     0
                    c13t3d0   ONLINE       0     0     0
                cache
                  c14t3d0     ONLINE       0     0     0

    where c14t3d0 is the SSD (of course). We run IO tests with bonnie++ 1.03d; size is set to 200 GB (-s 200g) so that the test sample will never be completely in ARC/L2ARC. The results without the SSD (average values over several runs, which show no differences):

        write_chr     write_blk     rewrite      read_chr     read_blk      random seeks
        101.998 kB/s  214.258 kB/s  96.673 kB/s  77.702 kB/s  254.695 kB/s  900 /s

    With the SSD it becomes interesting. My assumption was that the results should in the worst case be at least the same. While the write/read/rewrite rates are not different, the random seek rate differs significantly between individual bonnie++ runs (between 188 /s and 1333 /s so far); the average is 548 +- 200 /s, so below the value without the SSD. So, my questions are mainly:

    Why do the random seek rates differ so much? If the seeks are really random, they should not differ much (my assumption). So even if the SSD is impairing the performance, it should be the same in each bonnie++ run.

    Why is the random seek performance worse in most of the bonnie++ runs? I would assume that some part of the bonnie++ data is in the L2ARC and random seeks on this data perform better, while random seeks on other data just perform similarly to before.
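
    One way to get data on this is to watch the L2ARC counters between bonnie++ runs; a small diagnostic sketch using the standard OpenSolaris tools (pool and device names taken from the question):

        # l2_hits/l2_misses show whether random reads are actually being served
        # from the SSD; l2_size shows how much the cache has warmed up.
        kstat -p zfs:0:arcstats | egrep 'l2_(hits|misses|size)'

        # Per-device latency of the cache SSD vs. the raidz members:
        iostat -xn 5

        # Allocation/activity on the cache device during a run:
        zpool iostat -v afstank 5

    If l2_hits stays near zero across runs, the run-to-run variance is probably coming from where bonnie++'s random reads happen to land relative to the small slice of the 200 GB sample the L2ARC has cached so far.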

    Read the article

  • Hosting django backend for iPhone / Android app

    - by Ashok Fernandez
    I am looking to make an iPhone / Android app for my university using the Appcelerator Titanium framework. The app will rely heavily on a server backend which will pull information from other sites, figure out what is relevant to the user and then deliver the content. Some of the information is individual to the user (calendar data), other bits are updated frequently but are shared (bus timetables), and others are static and the same for everyone (magazine articles). I was going to use django as I am fairly proficient in python, so I thought it would save time. My question is: which hosting services do you recommend to host the server backend? I am expecting about 9000 people to use the app, with very random spikes in traffic, but unfortunately I have very little to go on at this stage. I have heard a lot about Webfaction; is it suitable for something like this, or am I likely to need something bigger? I don't really want to fork out for a VPS at this stage. What about Amazon's EC2? Would that be more suitable than Webfaction? Sorry for the fairly open-ended question; I'm sort of new to this, so I'm open to all suggestions.

    Read the article

  • Virtual Machine Network Architecture, Isolating Public and Private Networks

    - by Mark
    I'm looking for some insight into best practices for network traffic isolation within a virtual environment, specifically under VMware ESXi. Currently I have (in testing) 1 hardware server running ESXi, but I expect to expand this to multiple pieces of hardware. The current setup is as follows:

    1 pfSense VM: this VM accepts all outside (WAN/internet) traffic and performs firewall/port forwarding/NAT functionality. I have multiple public IP addresses sent to this VM that are used for access to individual servers (via per-incoming-IP port forwarding rules). This VM is attached to the private (virtual) network that all other VMs are on. It also manages a VPN link into the private network with some access restrictions. This isn't the perimeter firewall, but rather the firewall for this virtual pool only.

    I have 3 VMs that communicate with each other, as well as having some public access requirements:

    1 LAMP server running an eCommerce site, public internet accessible
    1 accounting server, access via Windows Server 2008 RDS services for remote access by users
    1 inventory/warehouse management server, VPN to client terminals in warehouses

    These servers constantly talk with each other for data synchronization. Currently all the servers are on the same subnet/virtual network and connected to the internet through the pfSense VM. The pfSense firewall uses port forwarding and NAT to allow outside access to the servers for services, and for server access to the internet. My main question is this: is there a security benefit to adding a second virtual network adapter to each server and controlling traffic such that all server-to-server communication is on one separate virtual network, while any access to the outside world is routed through the other network adapter, through the firewall, and on to the internet? This is the type of architecture I would use if these were all physical servers, but I'm unsure if the networks being virtual changes the way I should approach locking down this system. Thank you for any thoughts or direction to any appropriate literature.
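
    A common way to get that separation on a single ESX/ESXi host is a second vSwitch with no physical uplink, used only for the backend (server-to-server) traffic; a hedged sketch using the classic esxcfg-vswitch tools (the switch and port group names are made up, and the same steps can be done in the vSphere Client):

        # Create an internal-only vSwitch (no uplink NIC, so its traffic never
        # leaves the host) and a port group for backend traffic.
        esxcfg-vswitch -a vSwitch1
        esxcfg-vswitch -A Backend vSwitch1

        # Verify the layout:
        esxcfg-vswitch -l

    Each VM (LAMP, accounting, inventory) would then get a second vNIC on "Backend" for synchronization, while its first vNIC stays on the pfSense-facing port group for anything that must reach the internet. Note that once you expand to multiple physical hosts, an internal-only vSwitch no longer spans them, so the backend network would need its own uplink and VLAN instead.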

    Read the article

  • Recover data from physically damaged harddrive. What are my options?

    - by Michael Kniskern
    I was trying to replace the power supply in my desktop PC and ended up physically damaging the data connection from the hard drive to the motherboard. The plastic shelf for the copper prongs on the hard drive broke into the cable. Here is a picture of my handiwork: I went to the Best Buy Geek Squad to discuss my options, and they said they would need to send it to the recovery center and it could cost anywhere between $250 and $1600 USD to recover the data from the hard drive. Is this reasonable for data recovery from a physically damaged hard drive? Are there any other options I can explore? I am going to talk to the Data Doctors to see what my options are. Update: I took the HD to Data Doctors, and they told me that the SATA connection was broken, so they would need to replace the data connector and then copy the data to a brand new hard drive. So, with the initial analysis, cost of replacement parts, and data recovery fee, it came out to $865.00 USD. The technician specifically stated that if this were an older hard drive, they would just need to replace the data connector. But because there is information specific to the individual hard drive in the flash ROM, they need to transfer the data to a brand new hard drive.

    Read the article

  • pfsense, active directory, local domain

    - by Dalton Conley
    First things first, I have no idea what I'm doing. Certainly not afraid to admit that... but here is my network setup. I have 2 servers, one of which is a domain controller. Both are running Windows Server 2008. They have replicated directories. Each server is at a different location and has its own firewall for the network at that location. Both firewalls are using pfSense. Recently a firewall went down and my coworker reinstalled pfSense, and everything seems set up correctly. Again, I have no idea what I'm doing, so I'm not sure. I have records from when the previous IT person set up this network, and the firewall settings are the same, but those records could be extremely old. Now, I have a domain name for my network; we'll call it "mydomain.net". I used to be able to access this domain name and it would bring up the servers' replicated drives (i.e. \\mydomain.net). Now I cannot. I can, however, access the servers' individual host names on my network (i.e. \\server1, \\server2). We didn't change anything on the servers, which is what makes me think it's something to do with the firewall. I know this is probably a very general question and I don't have a lot of detail to add, but could anyone give me some insight into what could be causing this, or some debugging techniques I can apply? I'm a programmer, not a network administrator.
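
    Reaching \\mydomain.net (as opposed to \\server1) depends on the bare domain name resolving in DNS to the domain controllers, so a first check is whether the clients can still resolve it after the firewall rebuild; a rough diagnostic sketch from a client PC, using the placeholder names from the question:

        rem Which DNS servers is this client actually using?
        ipconfig /all

        rem Does the bare domain name still resolve to the domain controller(s)?
        nslookup mydomain.net
        nslookup server1.mydomain.net

        rem Can the client reach the server on the SMB port through the firewall?
        telnet server1 445

    If the bare name no longer resolves, one possibility is that the rebuilt pfSense box is now handing out its own DNS forwarder via DHCP instead of the domain controller's DNS server.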

    Read the article

  • SUSE Linux and Xen on Mac Pro - How best to prepare and configure?

    - by Andrew J. Brehm
    This is a longwinded question, so bear with me please. I have a 2009 Mac Pro with two CPUs and 8 GB of memory, which is totally overpowered for Mac OS X. I am also in the process of slowly moving away from Mac OS X as my main platform. Since the Mac Pro is really new and nice, I have finally decided to use it for another platform. I am familiar with Linux and SUSE Linux. Ultimately I want to run some version of SUSE Linux (recommend one; it doesn't have to be free as in no money) and Xen. Here are the individual questions:

    Which version of SUSE Linux should I use, and how do I install it on a Mac Pro? Note that the distribution must come with usable Xen. I am willing to pay.
    I assume Xen will work on my computer (it has VT support etc.). Is my assumption correct? I want to run Windows 7 and another instance of SUSE Linux under Xen.
    Is it possible to run Mac OS X Server under Xen (on a Mac Pro)?
    Which email client under Linux supports IMAP and is best suited for integrating with MobileMe?
    Does SUSE Linux support the ATI Radeon HD 4870 and the Apple Cinema Display 1920 x 1200 resolution?
    What else should I take into account?
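
    For the "will Xen work" part, a quick sketch of the usual checks once some Linux is booted on the machine (a 2009 Mac Pro's Xeons do have VT-x, but it still has to be visible to the OS and enabled in the hypervisor):

        # Hardware virtualization flags (vmx = Intel VT-x); required for HVM
        # guests such as Windows 7, not for paravirtualized Linux guests.
        egrep -o '(vmx|svm)' /proc/cpuinfo | sort -u

        # After booting the Xen-enabled kernel, confirm the hypervisor sees it;
        # the xen_caps line should include hvm-3.0-x86_64 (or similar).
        xm info | grep -i hvm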

    Read the article

  • postfix takes 60-90ms to queue email -- normal?

    - by Jeff Atwood
    We're seeing some (maybe?) strange delays when submitting individual emails to our local Postfix server. To help diagnose the issue, I wrote a little test program which sends 5 emails:

        get smtp     1ms  (  1 ms)
        email 0    677ms  (676 ms)
        email 1    802ms  (125 ms)
        email 2    890ms  ( 88 ms)
        email 3    973ms  ( 83 ms)
        email 4   1088ms  (115 ms)

    Discounting the handshaking in the first email, that's about 90ms per email. These timings have also been corroborated with another test app written by someone else using a different codepath, so it appears to be server related. I turned on detailed logging and I can see that the delay is between the end-of-message \r\n.\r\n and the receive:

        [16:31:29.95] [SEND] \r\n.\r\n
        [16:31:30.05] [RECV] 250 2.0.0 Ok: queued as B128E1E063\r\n
        [16:31:30.08] [SEND] \r\n.\r\n
        [16:31:30.17] [RECV] 250 2.0.0 Ok: queued as 4A7DE1E06E\r\n
        [16:31:30.19] [SEND] \r\n.\r\n
        [16:31:30.27] [RECV] 250 2.0.0 Ok: queued as 68ACC1E072\r\n
        [16:31:30.28] [SEND] \r\n.\r\n
        [16:31:30.34] [RECV] 250 2.0.0 Ok: queued as 7EFFE1E079\r\n
        [16:31:30.39] [SEND] \r\n.\r\n
        [16:31:30.45] [RECV] 250 2.0.0 Ok: queued as 9793C1E07A\r\n

    The time intervals tell the story (discounting the handshaking required for the initial email) -- each email is waiting about 60-90 milliseconds for Postfix to queue! This seems .. excessive .. to me. Is it "normal" for Postfix to take 60-90 ms for every email you send it? Or do I just have unreasonable expectations? I would expect the local Postfix server to queue the email in about 20ms, tops!
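
    Worth keeping in mind: Postfix only answers "250 ... queued" after the queue file has been fsync()ed to disk, so on a single spinning disk a few tens of milliseconds per message is plausible. A rough sketch for checking whether raw fsync latency on the queue filesystem accounts for the delay (the path assumes the default queue location; adjust to whatever postconf reports):

        # Where the queue actually lives:
        postconf queue_directory

        # Time a small synchronous write on the same filesystem as the queue.
        # If this also takes tens of milliseconds, the delay is disk/fsync
        # latency rather than Postfix itself.
        time dd if=/dev/zero of=/var/spool/postfix/fsync-test bs=4k count=1 conv=fsync
        rm /var/spool/postfix/fsync-test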

    Read the article

  • How to calculate unweighted averages in Excel PivotTable?

    - by yonatron
    I often make PivotTables in which each row contains a number of per-person average measures. I then want to look at the unweighted column average for each measure, and usually make some kind of chart from these. Because my individual cells are often averaged from different numbers of data points, the Grand Total row ends up being a weighted average, which I'm not interested in. So I usually make my own average row a few rows above the table to use for my charts. That's not too much work, but there's another problem. I often add a few more people's worth of data to the PivotTables' source, then refresh the tables. This means my average row needs to be updated to encompass more rows from the PivotTable. Not a huge deal with one table, but when I have lots of them across lots of sheets, I have to do find/replace on a whole bunch of formulas. So: is there a way to automatically get unweighted column averages in a PivotTable, such that when the table is refreshed, the averages don't change location and encompass the newly added (or removed) data? Thanks.

    Read the article

  • What's the best way to completely remove everything from a computer, without re-installing?

    - by Connor W
    I have a friend who wants to sell their computer, but obviously all personal information and software on it needs to be removed before doing so. Usually I would format and reinstall it, but I cannot easily get hold of the required XP DVDs, and I'm not 100% sure the serial number is stuck on the case as usual, so getting hold of it will probably require more effort than I'm prepared to spend. So, what's the best and quickest way to remove and uninstall everything from the PC without reinstalling it? Thanks. EDITS: I'm looking to remove things like Internet history and all installed programs, too. I know how to remove the history and each individual program, but that could take hours. The machine is not branded, and therefore there is no website I can go to to download recovery software. There is no recovery partition on the computer and I'm not aware of any recovery DVDs for it either. I can only assume it was installed from a retail copy, and therefore there is no way to recover it to factory settings. It needs to have XP installed, not any distribution of Linux. Like most average people, the person getting the computer will not understand what to do with a computer that doesn't have Windows installed, and software like Office does not work on Linux either. Buying another licence is not really an option either. She has just bought a laptop to replace the computer, so buying another licence for a computer that she's getting rid of doesn't really make sense. Thanks for all the help so far!

    Read the article

  • Access Denied on Some Subfolders/Files Within a Share

    - by Tim
    First thing this morning, I find that users on one of our share drives are all getting "access denied". I tried the same drive and also received "access denied" as a Domain Admin. Previous to this, all specified users and admins could get access. I checked share permissions. I checked NTFS permissions. I temporarily made both types of permissions read/write for "Everyone" -- this worked for one user. It turns out that this is occurring for only some files/folders. When I try to manually alter the sharing of that single share, it can't be shared: access denied. xcacls also gets access denied. I rebooted the server (not a big deal - this is a smallish company). Does anybody have any insight? My google-fu is coming up blank. Thanks. EDIT: More info. I just ran AccessEnum. There were a lot of "access denied" results, but I noticed the pattern that all of the access-denied items had a parent with an owner of "???". When I look at the properties, the "Unable to display owner" message is in the box and I can only make my user account the owner. I can then share the individual file/folder, but it doesn't seem to propagate down to subfolders/files.
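
    When the owner shows up as "???" (an orphaned or unresolvable SID), the usual repair is to take ownership recursively and then re-grant or reset the ACLs; a sketch with the built-in Server 2008 tools, using a made-up path and group, run from an elevated prompt:

        rem Take ownership of the whole folder tree.
        takeown /f "D:\Shares\ProblemFolder" /r /d y

        rem Re-grant the expected permissions down the tree, e.g. to a domain group:
        icacls "D:\Shares\ProblemFolder" /grant "DOMAIN\Share Users":(OI)(CI)M /t

        rem Or reset the ACLs so everything inherits from the parent again:
        icacls "D:\Shares\ProblemFolder" /reset /t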

    Read the article

  • SMTP message rate control on Ubuntu 8.04, preferably with postfix

    - by TimDaMan
    Maybe I am chasing a bug, but I am trying to set up an SMTP proxy of sorts. I have a Postfix server which receives all the email for a collection of servers/clients. It then uses a smarthost (relayhost=...) to forward its mail to our corporate MTA. I would like to limit the number of messages an individual server can relay, to prevent swamping the corporate MTA. Postfix has a program called "anvil" that is capable of tracking stats about mail to be used for such things, but it doesn't seem to be executed. I ran "inotifywait -m /usr/lib/postfix/anvil" while I started Postfix and sent a number of messages through it from a remote server. inotifywait indicated anvil was never run. Anyone gotten postfix/anvil rate controls to work?

        main.cf:
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        biff = no
        append_dot_mydomain = no
        readme_directory = no
        myhostname = site-server-q9
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        mydestination = localhost
        relayhost = (our outgoing mail relay)
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 10.0.0.0/8
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = 10.X.X.X
        smtpd_client_message_rate_limit = 1
        anvil_rate_time_unit = 1h

        master.cf extract:
        anvil     unix  -       -       -       -       1       anvil
        smtp      inet  n       -       -       -       -       smtpd
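
    One thing worth checking: by default Postfix exempts $mynetworks from the anvil-based client limits (smtpd_client_event_limit_exceptions defaults to $mynetworks), and the relaying servers here fall inside 10.0.0.0/8, so the configured rate limit would never fire for them. A sketch for verifying and, if appropriate, narrowing the exception list (treat the values as examples, not a drop-in config):

        # What the running Postfix believes (defaults included):
        postconf smtpd_client_message_rate_limit anvil_rate_time_unit \
                 smtpd_client_event_limit_exceptions

        # Example: stop exempting the internal servers from the limits,
        # leaving only localhost exempt -- adjust before reloading.
        postconf -e 'smtpd_client_event_limit_exceptions = 127.0.0.0/8'
        postfix reload

        # anvil logs a periodic statistics summary once it is actually in use:
        grep anvil /var/log/mail.log | tail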

    Read the article

  • IIS replication - Is it possible

    - by Ian
    Hi all, I have a requirement for a client that I have a centralised system that all his satellite branches can work on. Currently this is an ASP.NET Web Forms app running under IIS 7 on Windows 2008 R2, using a SQL backend. The client has now requested that each branch have a local server, so that in the event that the internet connection is down, the branch's productivity does not suffer. His other request is that everything can be updated via the central hub, and using some mechanism the updates filter down to the individual sites. What are my options here? I see the following as possible options:

    Multiple redundant internet connections controlled by load balancers
    SQL replication for the DB (what is better: snapshot, merge or transactional?)
    Roll my own IIS sync service that periodically checks if there is a new version of the web app and downloads it (I hope there are better options than this)
    Something way better I don't yet know about (I hope this is the one I need)

    One of my client's concerns is that the branches are often in very remote areas where everything from technicians to internet is hard to find and very scarce. Any ideas, suggestions, tips etc are welcome. Thanks all.
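
    For pushing web-app updates from the hub to the branch IIS servers, one candidate worth evaluating is Microsoft's Web Deploy (msdeploy), which can synchronize a site or application between two IIS machines; a rough sketch with placeholder server and site names, assuming the Web Deploy agent/service is installed on each branch server:

        rem Preview what would change on a branch server, then apply it.
        msdeploy -verb:sync ^
                 -source:apphostconfig="Default Web Site" ^
                 -dest:apphostconfig="Default Web Site",computerName=BRANCH01 ^
                 -whatif

        rem Drop -whatif to actually apply; repeat (or script a loop) per branch.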

    Read the article

  • Best practices for thin-provisioning Linux servers (on VMware)

    - by nbr
    I have a setup of about 20 Linux machines, each with about 30-150 gigabytes of customer data. The size of the data will probably grow significantly faster on some machines than on others. These are virtual machines on a VMware vSphere cluster. The disk images are stored on a SAN system. I'm trying to find a solution that would use disk space sparingly, while still allowing for easy growing of individual machines. In theory, I would just create big disks for each machine and use thin provisioning. Each disk would grow as needed. However, it seems that a 500 GB ext3 filesystem with only 50 GB of data and quite a low number of writes still easily grows the disk image to e.g. 250 GB over time. Or maybe I'm doing something wrong here? (I was surprised how little I found on the subject with Google. BTW, there's not even a thin-provisioning tag on serverfault.com.) Currently I'm planning to create big, thin-provisioned disks, but with a small LVM volume on them. For example: a 100 GB volume on a 500 GB disk. That way I could more easily grow the LVM volume and the filesystem size as needed, even online. Now for the actual question: are there better ways to do this? (That is, to grow data size as needed without downtime.) Possible solutions include:

    Using a thin-provisioning-friendly filesystem that tries to occupy the same spots over and over again, thus not growing the image size.
    Finding an easy method of reclaiming free space on the partition (re-thinning?)
    Something else?

    A bonus question: if I go with my current plan, would you recommend creating partitions on the disks (pvcreate /dev/sdX1 vs pvcreate /dev/sdX)? I think it's against convention to use raw disks without partitions, but it would make it a bit easier to grow the disks, if that is ever needed. This is all just a matter of taste, right?
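
    The online-grow part of the plan is straightforward with LVM; a minimal sketch, assuming a volume group vg_data on the thin-provisioned disk and an ext3 logical volume (all names are placeholders):

        # After growing the virtual disk in vSphere (whole-disk PV case, once the
        # guest has rescanned the device), extend the physical volume:
        pvresize /dev/sdb

        # Grow the logical volume and then the ext3 filesystem, online:
        lvextend -L +50G /dev/vg_data/lv_customer
        resize2fs /dev/vg_data/lv_customer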

    Read the article

  • Require and Includes not Functioning Nginx Fpm/FastCGI

    - by Vince Kronlein
    I've split up my FPM pools so that PHP will run under each individual user, and set the routing correctly in my vhost.conf files to pass the proper port number. But I must have something incorrect in my environment, because on this new domain I set up, require, require_once, include and include_once do not function; or rather, they may not be getting passed up to the interpreter to be rendered as PHP. Since I already have a Wordpress install on this server that runs perfectly, I'm pretty sure the error is in my server block for nginx.

        server {
            server_name www.domain.com;
            rewrite ^(.*) http://domain.com$1 permanent;
        }

        server {
            listen 80;
            server_name domain.com;
            client_max_body_size 500M;
            index index.php index.html index.htm;
            root /home/username/public_html;

            location / {
                try_files $uri $uri/ index.php;
            }

            location ~ \.php$ {
                if (!-e $request_filename) {
                    rewrite ^(.*)$ /index.php?name=$1 break;
                }
                fastcgi_pass 127.0.0.1:9002;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }

            location ~ /\.ht {
                deny all;
            }
        }

    The problem I'm finding, I think, is that there are dynamic calls to the doc-root index file, while all calls to anything within a sub-folder should be routed as normal, i.e. NOT passed to index.php. I can't seem to find the right mix here. It should run like so:

        domain.com/cindy (file doesn't exist)         --> index.php?name=$1
        domain.com/admin/anyfile.php (files DO exist) --> admin/anyfile.php?$args
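
    The fallback in try_files being a bare "index.php" (a relative path) rather than "/index.php?..." is one likely culprit. A hedged sketch of how the two location blocks are often written for this kind of front-controller routing; treat it as a starting point under the question's own layout, not a drop-in fix:

        location / {
            # Serve real files/directories; otherwise rewrite to the front
            # controller with the original URI as the "name" parameter.
            try_files $uri $uri/ /index.php?name=$uri&$args;
        }

        location ~ \.php$ {
            # Existing .php files (e.g. /admin/anyfile.php) are executed directly;
            # missing ones return 404 instead of falling through oddly.
            try_files $uri =404;
            fastcgi_pass 127.0.0.1:9002;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }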

    Read the article

  • How can I prevent Apache from asking for credentials on non SSL site

    - by Scott
    I have a web server with several virtual hosts. Some of those hosts have an associated SSL site. I have a DirectoryMatch directive in my main config file which requires basic authentication for any directory with _secured_ as part of the directory path. On sites that have an SSL site, I have a rewrite rule (located in the non-SSL config for that site) that redirects to the SSL site, same URI. The problem is the http (80) site asks for credentials first, and then the https (443) site asks for credentials again. I would like to prevent the http site from asking, and thus avoid the potential for someone entering credentials and having them sent in clear text. I know I could move the DirectoryMatch down to the specific site and just put the auth statement in the SSL config, but that would introduce the possibility of forgetting to protect critical directories when creating new sites. Here are the pertinent declarations:

        httpd.conf (all sites):
        <DirectoryMatch "_secured_">
            AuthType Basic
            AuthName "+ + + Restrcted Area on Server + + +"
            AuthUserFile /home/websvr/.auth/std.auth
            Require valid-user
        </DirectoryMatch>

        site.conf (specific to individual site):
        <DirectoryMatch "_secured_">
            RewriteEngine On
            RewriteRule .*(_secured_.*) https://site.com/$1
        </DirectoryMatch>

    Is there a way to leave DirectoryMatch in the main config file and prevent the request for authorization from the http site? Running Apache 2 on Ubuntu 10.04 server from the default package. I have AllowOverride set to None - I prefer to handle things in the config files instead of .htaccess.

    Read the article

  • Procurve Primary VLAN

    - by fukawi2
    I'm trying to deprecate usage of VLAN 1 on my ProCurve switches; VLAN 1 is unused. I understand that VLAN 1 must exist, but I want to remove it from all ports, especially trunks between switches. The problem I have is that stacking does not seem to work without VLAN 1. I have changed the primary VLAN and management VLAN on all the switches:

        (config)# primary-vlan 42
        (config)# management-vlan 42
        (config)# no vlan 1 untagged 25

    Port 25 is the link between the 2 switches I'm testing with (the stack master and a member switch); I only want tagged traffic between the switches, no untagged frames. "show stacking" on the master shows all members as "UP", but I cannot telnet to any of them: Telnet failed: Connection timed out. All switches have manually assigned (static) IP addresses on VLAN 42, all in the same /25 subnet, as does my desktop. I can telnet to the switches directly from my desktop to the individual switch IP addresses, just not from the master switch. Do I need to reboot the switches to have the primary-vlan change take effect? Or is there something else I'm missing?
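
    One thing to double-check before suspecting the primary-vlan change itself: after removing the untagged VLAN 1 membership, the inter-switch port still needs to carry VLAN 42 (tagged) on both ends, or the switches lose their path to each other on the management VLAN. A short sketch of the usual ProCurve commands, with port 25 taken from the question:

        (config)# vlan 42 tagged 25
        (config)# show vlans ports 25
        (config)# show vlans 42

    If VLAN 42 is tagged on the link at both ends and the members still cannot be reached from the master, rebooting after the primary-vlan change is worth trying, since the stacking/discovery traffic rides the primary VLAN.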

    Read the article

  • How could one archive all emails sent from employees?

    - by Schnapple
    My client runs a small business. This business has a small number of employees. They are currently hosted through GoDaddy for web and email. For legal reasons the client would like to archive emails sent by their employees. Currently the email is all done through POP3, so all the email is basically housed in files on individual machines (remember, small business). It's been proposed that an inexpensive solution would be to have all emails BCC'd to a main account, so that conversations with the outside world could be archived and tracked. I have not investigated it myself personally, but apparently GoDaddy can do something along these lines for all incoming email, but not for outgoing email. Is there a way to set up email accounts for a particular domain so that a specified admin user could be copied on all outgoing email? UPDATE: I've modified the title to reflect employees, not users. The goal of this is to archive sent emails for legal reasons. This is something the employees will be cognizant of and on board with. The bottom line here is to basically emulate a feature of a larger-class platform through a smaller, cheaper platform. If the answer is "can't do it, buy an Exchange license", that's fine. My apologies for phrasing this so poorly. I understand why there was so much confusion.
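
    For comparison, this is the kind of knob that exists once you control the MTA yourself: if the domain's mail were moved off GoDaddy onto, say, a small self-hosted Postfix relay, outgoing-only archiving is a couple of lines of configuration. A sketch with placeholder addresses:

        # /etc/postfix/main.cf -- BCC a copy of mail from this domain's senders
        sender_bcc_maps = hash:/etc/postfix/sender_bcc

        # /etc/postfix/sender_bcc -- one archive address per sender (or use a
        # regexp table to match the whole domain):
        #   user1@example.com    archive@example.com
        #   user2@example.com    archive@example.com

        # Then build the map and reload:
        postmap /etc/postfix/sender_bcc
        postfix reload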

    Read the article

  • Proper end of day sequence to maintain monitor config

    - by WarmBeer
    I've got an HP EliteBook 6930p that travels from home, where it is connected to individual cables, to work, where there is a docking station. At both locations I have an external monitor as the secondary monitor, and I like to have the laptop screen as the primary, i.e. the one with the task bar. At the end of the day I close the laptop, which is supposed to set it to standby. When I get home I plug in the power cord and the external monitor cord and open the computer. When heading into work I close the computer and unplug everything. Inevitably, when I open the computer at the new location, the monitors are reversed, i.e. the primary, task-bar display is on the external monitor and the laptop shows the secondary, even though when I click Identify the laptop has the 1. I then have to disable the secondary display, switch the primary to the laptop and re-enable the secondary. I've tried locking the computer before closing it, and occasionally that works to keep the setup in place, but not always. Any suggestions for how to keep the config in place during transport? ed

    Read the article

  • Linux: prevent outgoing TCP flood

    - by Willem
    I run several hundred webservers behind loadbalancers, hosting many different sites with a plethora of applications (over which I have no control). About once every month, one of the sites gets hacked and a flood script is uploaded to attack some bank or political institution. In the past, these were always UDP floods, which were effectively resolved by blocking outgoing UDP traffic on the individual webserver. Yesterday they started flooding a large US bank from our servers using many TCP connections to port 80. As these types of connections are perfectly valid for our applications, just blocking them is not an acceptable solution. I am considering the following alternatives. Which one would you recommend? Have you implemented these, and how?

    Limit on the webserver (iptables) outgoing TCP packets with source port != 80
    Same but with queueing (tc)
    Rate limit outgoing traffic per user per server. Quite an administrative burden, as there are potentially 1000's of different users per application server. Maybe this: how can I limit per-user bandwidth?
    Anything else?

    Naturally, I'm also looking into ways to minimize the chance of hackers getting into one of our hosted sites, but as that mechanism will never be 100% waterproof, I want to severely limit the impact of an intrusion. Cheers!
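
    A hedged sketch of the first alternative (iptables on each webserver): legitimate replies from the web stack leave with source port 80/443, while a flood script opening new connections produces outbound SYNs from ephemeral source ports, so those can be rate-limited without touching normal traffic. The limits below are placeholders to tune, and note that any legitimate outbound TCP clients on the box (updates, API calls, the mail smarthost) share this budget:

        # Rate-limit outbound connection attempts (new SYNs) that do not come
        # from the web server's own listening ports.
        iptables -N OUT_LIMIT
        iptables -A OUTPUT -p tcp --syn -m multiport ! --sports 80,443 -j OUT_LIMIT
        iptables -A OUT_LIMIT -m limit --limit 10/second --limit-burst 30 -j RETURN
        iptables -A OUT_LIMIT -m limit --limit 1/minute -j LOG --log-prefix "outbound-syn-flood: "
        iptables -A OUT_LIMIT -j DROP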

    Read the article

  • Method to integrate Powershell scripts with non-Windows workflow?

    - by Matt Simmons
    I love the smell of new machines in the morning. I'm automating a machine-creation workflow that involves several separate systems across my infrastructure, some of which involve 15-year-old Perl scripts on Solaris hosts, PXE-booting Linux systems, and PowerShell on Windows Server 2008. I can script each of the individual parts, and integrating the Linux and Unix automation is fairly straightforward, but I'm at a loss as to how to reliably tie the PowerShell scripts to the rest of the processes. I would prefer if the process began on a Linux host, since I imagine that it will end up as a web application living on an Apache server, but if it needs to begin on Windows, I am hesitantly okay with that. I would ideally like something along the lines of psexec for Linux to run against Windows, but the answer in that direction appears to be Cygwin, and as much as I appreciate all of the hard work that they put in, it has never felt right, if you know what I mean. It's great for a desktop and gives a lot of functionality, but I feel like Windows servers should be treated like Windows servers and not bastardized Unix machines (which, incidentally, is my argument against OS X servers, too, and they're actually Unix). Anyway, I don't want to go with Cygwin unless that's the last and only option. So I guess what I'm asking is if there is a way to execute jobs on Windows machines from Linux. Without Cygwin. I'm open to ideas and suggestions, including "Look idiot, everyone uses Cygwin, so suck it up and deal with it". Thanks in advance!
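
    Two non-Cygwin routes commonly used for exactly this: winexe (a psexec-style client that runs on Linux and must be installed separately) and Windows' own WinRM. A hedged sketch; hostnames, credentials and script paths are placeholders:

        # From the Linux host: run a PowerShell script on the Windows server.
        winexe -U 'MYDOMAIN/provision' //winserver01 \
            'powershell.exe -File C:\scripts\new-machine.ps1'

        # Alternative, once WinRM is enabled on the target (winrm quickconfig),
        # from another Windows box or via a thin wrapper the Linux side can call:
        #   winrs -r:winserver01 powershell.exe -File C:\scripts\new-machine.ps1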

    Read the article

  • Migrating to AWS Cloud with auto-scaling - where to put Redis and ElasticSearch?

    - by RobMasters
    I've been trying to research this topic but haven't found anywhere that recommends where to install services such as Redis and ElasticSearch when migrating to a cloud framework. I'm currently running a Symfony2 application on 2 static servers: one is running MySQL, and the other is the public-facing web server, which also has Redis and ElasticSearch running on it. Both of these servers are virtualised, but they're static in the sense of not being able to replicate at present (various aspects are still dependent on the local filesystem). The goal is to migrate to AWS and use auto-scaling to be able to spin up and kill web servers as required, but I'm not clear on what I should put on each EC2 instance. Should they be single-responsibility only? i.e. set up individual instances for the web server(s), Redis, and ElasticSearch, most likely an RDS instance for MySQL, and only set up auto-scaling on the web server(s)? I don't foresee having to scale the ElasticSearch server anytime soon as it's only driving the search functionality, but it's possible that Redis may need to be replicated at some point - but should this be done manually? I'm not sure how this could be done automatically, as each instance needs to be configured to know about its master/slave(s) as far as I know. I'd appreciate advice on this. One more quick question while I'm here: how would I be able to deploy code changes when there are X web servers currently active? I'm using a Capifony deployment script (the Symfony2 version of Capistrano), which I think can handle multiple servers easily enough by specifying an array of :domain addresses... but how should this be handled when the number of web servers can vary?

    Read the article

  • Wordpress Forbidden page

    - by ffffff
    The HTML has no body content (the body part is empty) when I open preview mode without logging in (i.e. without authorization). The response HTML is this:

        <html xmlns="http://www.w3.org/1999/xhtml" lang="ja" xml:lang="ja">
        <head profile="http://purl.org/net/ns/metaprof">
          <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
          <meta http-equiv="Content-Script-Type" content="text/javascript" />
          <meta name="generator" content="WordPress 2.9.2" />
          <meta name="author" content="blog" />
          <link rel="alternate" type="application/atom+xml" href="http://blog.example.com/feed/atom/" title="Atom cite contents" />
          <link rel="start" href="http://blog.example.com" title="blog Home" />
          <link rel="stylesheet" type="text/css" href="http://blog.example.com/wp-content/themes/blog/style.css" />
          <meta name="description" content="blog" />
          <title>blog - </title>
        </head>
        <body class="individual single">
        </div>
        </body>
        </html>

    Do you have any solutions?

    Read the article

  • Using Samba to share a folder from a Linux guest with a Windows host in VirtualBox

    - by AmV
    I would like to share a folder from a Linux guest with a Windows host (with read and write access if possible) in VirtualBox. I read in these two links: here and here that it's possible to do this using Samba, but I am a little bit lost and I need more information on how to proceed. So far, I have managed to set up two network adapters (one NAT and one host-only) and install Samba on the Linux guest, but now I have the following questions:

    What do I need to put in smb.conf to share a folder from the Linux guest? (The tutorial provided in one of the links above only explains how to share home directories.)
    Are there any Samba commands that need to be executed on the guest to enable sharing?
    How do I make sure that these folders are only available to the host OS and not on the Internet?
    Once the Linux guest is set up, how do I access each of the individual shared folders from the Windows host? I read that I need to map a drive on Windows to do this, but do I use Samba logins or Linux logins? Also, do I use localhost, or do I need to set up an IP for this?

    Thanks!
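
    A minimal sketch of what this might look like; the share name, path, user account and host-only address are assumptions for illustration (VirtualBox host-only networks default to the 192.168.56.x range):

        # On the Linux guest -- add a share to /etc/samba/smb.conf:
        #   [shared]
        #      path = /home/youruser/shared
        #      valid users = youruser
        #      read only = no
        #      browseable = yes

        # Restrict Samba to the host-only interface (in the [global] section),
        # so the share is never exposed through the NAT/Internet-facing adapter:
        #   interfaces = 192.168.56.101/24 lo
        #   bind interfaces only = yes

        # Give the Linux account a Samba password and restart the daemon:
        sudo smbpasswd -a youruser
        sudo /etc/init.d/smbd restart    # or: sudo service smbd restart

        # On the Windows host, map the share using the guest's host-only IP:
        #   net use Z: \\192.168.56.101\shared /user:youruser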

    Read the article

  • Moving a site from IIs6 to IIS7.5

    - by Sukotto
    I need to move a site off of IIS 6 (Windows Server 2003) and onto IIS 7.5 (Windows Server 2008) as soon as possible. Preferably tomorrow. The site itself is a delightful mix of classic ASP (VBScript) and one-off ASP.NET (C#) applications (each ASP.NET app is in its own virtual directory and has a self-contained web.config). In case it's relevant, this is a sort of research site made up of 40 or 50 unconnected microsites. Each microsite is typically a simple form allowing a user to submit a form, which then runs a stored proc on a SQL Server DB and displays a chart and/or table of the results. There is very little security to worry about. The database connection info is in a central file (in the case of the classic ASP) or the app's individual web.config (lots of duplication there). To add a little spice to the exercise:

    I have no idea how to admin IIS.
    The company no longer employs the sysadmin or the guys who set this thing up. (They're not going to employ me much longer either, but my sense of professional pride does not permit me to just walk away from this task.)
    The servers are on mutually firewalled networks and I have to perform a convoluted, multi-step process to copy anything from one to the other.

    Would someone please point me to a crash-course tutorial for accomplishing the above? I have:

    a complete copy of the site's filesystem on the new box
    the 3rd-party charting tool installed on the new system
    a config.xml file from the "all tasks - save configuration to a file" right-click menu. There doesn't seem to be a way to import it on the new system, however.

    The newer IIS Manager has a completely different UI and I'm totally lost. Please help.
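
    Since the question asks for a crash course: on IIS 7.5 the command-line equivalent of most of the old MMC work is appcmd, and recreating the microsites as applications under one site is scriptable. A hedged sketch with made-up site names, paths and bindings:

        rem Create the parent site (bindings and paths are placeholders).
        %windir%\system32\inetsrv\appcmd add site /name:"ResearchSite" ^
            /bindings:http/*:80:research.example.com /physicalPath:"D:\wwwroot\research"

        rem Classic ASP is not installed by default on Server 2008; add that role
        rem feature first, then create one application per microsite:
        %windir%\system32\inetsrv\appcmd add app /site.name:"ResearchSite" ^
            /path:/microsite1 /physicalPath:"D:\wwwroot\research\microsite1"

        rem Check what was created:
        %windir%\system32\inetsrv\appcmd list apps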

    Read the article

  • PCs on domain can not resolve external IP addresses using the DC's DNS Server

    - by Ben
    I currently have a domain controller which handles all DHCP and DNS. The DHCP works just fine, and the domain controller itself can use the internet with no issues. However, PCs that are part of the domain are not able to reach external websites, only internal ones. Does anyone have any ideas on how I can solve this issue? Thank you. Server: Windows Server 2008 R2. PC: Win7 Enterprise x64.

    Edit (from the domain controller):

        C:\Users\bcollyer>nslookup google.com
        Server:   localhost
        Address:  127.0.0.1

        Non-authoritative answer:
        Name:     google.com
        Addresses:  2a00:1450:4009:809::100e
                    173.194.41.166
                    173.194.41.165
                    173.194.41.169
                    173.194.41.162
                    173.194.41.161
                    173.194.41.160
                    173.194.41.168
                    173.194.41.167
                    173.194.41.164
                    173.194.41.163
                    173.194.41.174

    Edit 2:

        C:\Users\bcollyer>netstat -rn

        Interface List
         12...30 85 a9 f7 8a 21 ......Atheros AR8161/8165 PCI-E Gigabit Ethernet Controller (NDIS 6.20)
          1...........................Software Loopback Interface 1
         13...00 00 00 00 00 00 00 e0  Microsoft ISATAP Adapter
         11...00 00 00 00 00 00 00 e0  Microsoft Teredo Tunneling Adapter

        IPv4 Route Table
        Active Routes:
        Network Destination        Netmask          Gateway       Interface  Metric
                  0.0.0.0          0.0.0.0      172.16.0.67     172.16.0.202      20
                127.0.0.0        255.0.0.0         On-link         127.0.0.1     306
                127.0.0.1  255.255.255.255         On-link         127.0.0.1     306
          127.255.255.255  255.255.255.255         On-link         127.0.0.1     306
               172.16.0.0      255.255.0.0         On-link      172.16.0.202     276
             172.16.0.202  255.255.255.255         On-link      172.16.0.202     276
           172.16.255.255  255.255.255.255         On-link      172.16.0.202     276
                224.0.0.0        240.0.0.0         On-link         127.0.0.1     306
                224.0.0.0        240.0.0.0         On-link      172.16.0.202     276
          255.255.255.255  255.255.255.255         On-link         127.0.0.1     306
          255.255.255.255  255.255.255.255         On-link      172.16.0.202     276
        Persistent Routes:
          None

        IPv6 Route Table
        Active Routes:
         If Metric Network Destination      Gateway
          1    306 ::1/128                  On-link
          1    306 ff00::/8                 On-link
        Persistent Routes:
          None

    BTW, I have no JavaScript on the server, so I can't reply to individual answers... Sorry!
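
    A quick way to narrow down whether this is a client-side DNS problem or something in the DC's DNS service; a sketch run from one of the affected Windows 7 PCs (the DC address 172.16.0.202 is taken from the routing table above):

        rem Which DNS server did DHCP hand out to this client?
        ipconfig /all | findstr /i "DNS"

        rem Ask the DC's DNS service directly, bypassing whatever is configured:
        nslookup google.com 172.16.0.202

        rem If the direct query works but normal resolution fails, the clients are
        rem pointed at the wrong DNS server; if it times out, check that the DNS
        rem service is listening on 172.16.0.202 and that its forwarders or root
        rem hints can reach the internet.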

    Read the article
